
IRC log for #gluster, 2015-06-18


All times shown according to UTC.

Time Nick Message
00:06 PatNarciso joined #gluster
00:07 PatNarciso hoooolycrap -- https://bugzilla.redhat.com/show_bug.cgi?id=1228093 -- I'd *really* like to see this in the ubuntu ppa asap.
00:07 glusterbot Bug 1228093: unspecified, unspecified, ---, spalai, POST , Glusterd crash
00:08 PatNarciso glusterbot, long time no see... buddy.
00:14 PatNarciso because bug 1228093 is a bastard... go to remove a brick; and BAM-- glusterd is done.  and ya know, that sucks.  a lot.
00:14 glusterbot PatNarciso: BAM's karma is now -1
00:14 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1228093 unspecified, unspecified, ---, spalai, POST , Glusterd crash
00:16 victori joined #gluster
00:17 PatNarciso aside from a production environment blowing up today... the rebalance is a hell of a lot swifter now.  kudos, and thanks fellas.
00:27 magamo ... Okay, so I just tried that rolling update... And, unlike the other two clusters I upgraded, I can't mount this one.
00:27 magamo gluster volume status looks gree.
00:27 magamo Green.
00:27 magamo But using gluster-fuse to mount, I keep getting no subvols up.
00:36 magamo Okay, it's specifically one of the two volumes.
00:48 magamo Anyone around?  This is suddenly getting to be urgent.
00:48 magamo One of my volumes, when I try to 'gluster volume start' it, I get Error : Request timed out.
00:57 aaronott joined #gluster
00:58 magamo Now I can't get a volume status from any of my nodes.
00:58 magamo I just get request timed out.
01:05 haomaiwa_ joined #gluster
01:07 harish joined #gluster
01:15 nangthang joined #gluster
01:21 victori joined #gluster
01:22 magamo Anyone seen anything of this sort before?
01:22 kovshenin joined #gluster
01:37 kdhananjay joined #gluster
01:42 glusterbot News from newglusterbugs: [Bug 1232983] Disperse volume : fuse mount hung on renames on a distributed disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1232983>
01:49 nangthang joined #gluster
02:04 harish joined #gluster
02:17 nangthang joined #gluster
02:18 haomaiwa_ joined #gluster
02:24 plarsen joined #gluster
02:54 morphkurt joined #gluster
02:55 bharata-rao joined #gluster
03:08 DV__ joined #gluster
03:35 tessier joined #gluster
03:35 DV joined #gluster
03:48 TheSeven joined #gluster
03:53 gildub joined #gluster
03:55 atinm joined #gluster
03:58 itisravi joined #gluster
03:59 cholcombe joined #gluster
04:01 DV joined #gluster
04:07 RameshN joined #gluster
04:13 shaunm joined #gluster
04:14 shubhendu joined #gluster
04:15 ekuric joined #gluster
04:21 suliba joined #gluster
04:23 sakshi joined #gluster
04:24 victori joined #gluster
04:26 soumya_ joined #gluster
04:32 victori joined #gluster
04:35 ppai joined #gluster
04:40 Manikandan joined #gluster
04:40 Manikandan_ joined #gluster
04:41 ashiq joined #gluster
04:42 Manikandan__ joined #gluster
04:43 ndarshan joined #gluster
04:48 hgowtham joined #gluster
04:50 meghanam joined #gluster
04:56 spandit joined #gluster
05:00 jiffin joined #gluster
05:03 pppp joined #gluster
05:06 zeittunnel joined #gluster
05:09 nangthang joined #gluster
05:16 gem joined #gluster
05:18 nbalacha joined #gluster
05:20 deepakcs joined #gluster
05:23 glusterbot News from resolvedglusterbugs: [Bug 1206134] glusterd :- after volume create command time out, deadlock has been observed among glusterd and all command keep failing with error "Another transaction is in progress" <https://bugzilla.redhat.com/show_bug.cgi?id=1206134>
05:24 Bhaskarakiran joined #gluster
05:26 magamo joined #gluster
05:26 badone_ joined #gluster
05:28 anil joined #gluster
05:29 badone__ joined #gluster
05:31 vimal joined #gluster
05:32 schandra joined #gluster
05:41 soumya_ joined #gluster
05:43 glusterbot News from newglusterbugs: [Bug 1233025] GlusterFS 3.7.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1233025>
05:44 kdhananjay joined #gluster
05:56 rjoseph joined #gluster
06:03 kdhananjay joined #gluster
06:05 kotreshhr joined #gluster
06:07 RameshN joined #gluster
06:08 raghu joined #gluster
06:11 ndarshan joined #gluster
06:12 nsoffer joined #gluster
06:14 vimal joined #gluster
06:14 ndarshan joined #gluster
06:15 overclk joined #gluster
06:16 maveric_amitc_ joined #gluster
06:17 ndarshan joined #gluster
06:17 Humble_ joined #gluster
06:17 ndarshan joined #gluster
06:25 atalur joined #gluster
06:26 victori joined #gluster
06:28 spalai joined #gluster
06:29 nangthang joined #gluster
06:30 spandit joined #gluster
06:34 anrao joined #gluster
06:37 kshlm joined #gluster
06:39 ramteid joined #gluster
06:43 spalai1 joined #gluster
06:43 bjornar joined #gluster
06:52 victori joined #gluster
06:55 autoditac joined #gluster
07:01 rgustafs joined #gluster
07:04 haomaiwa_ joined #gluster
07:06 saurabh_ joined #gluster
07:10 Philambdo joined #gluster
07:14 ppai joined #gluster
07:20 autoditac_ joined #gluster
07:23 [Enrico] joined #gluster
07:26 elico joined #gluster
07:33 ndarshan joined #gluster
07:40 kdhananjay joined #gluster
07:44 morphkurt joined #gluster
07:57 liquidat joined #gluster
08:00 rgustafs joined #gluster
08:00 davidself joined #gluster
08:04 mator_ joined #gluster
08:08 al joined #gluster
08:10 nsoffer joined #gluster
08:12 Slashman joined #gluster
08:15 pppp joined #gluster
08:20 anrao joined #gluster
08:21 dgbaley left #gluster
08:22 Trefex joined #gluster
08:25 pppp joined #gluster
08:25 fsimonce joined #gluster
08:34 Pupeno joined #gluster
08:42 legreffier semiosis: i saw you were lead ubuntu packager , there's some problems with latest .debs can i query you on this
08:42 legreffier backlog me , i sent a msg here on tuesday
08:45 ctria joined #gluster
08:46 chirino_m joined #gluster
08:48 s19n joined #gluster
08:57 JonathanD joined #gluster
08:57 poornimag joined #gluster
09:02 anrao joined #gluster
09:03 nsoffer joined #gluster
09:10 kaushal_ joined #gluster
09:18 arcolife joined #gluster
09:22 anrao joined #gluster
09:22 ghenry joined #gluster
09:24 poornimag joined #gluster
09:26 rjoseph joined #gluster
09:26 hagarth joined #gluster
09:37 R0ok_ joined #gluster
09:41 Bhaskarakiran joined #gluster
09:52 kdhananjay joined #gluster
10:14 glusterbot News from newglusterbugs: [Bug 1233139] Null pointer dreference in dht_migrate_complete_check_task <https://bugzilla.redhat.com/show_bug.cgi?id=1233139>
10:14 glusterbot News from newglusterbugs: [Bug 1233136] Libgfapi client program crashes during glfs_fini() because io-cache xlator is leaking iobufs <https://bugzilla.redhat.com/show_bug.cgi?id=1233136>
10:15 morphkurt joined #gluster
10:19 harish_ joined #gluster
10:25 kovshenin joined #gluster
10:30 kdhananjay joined #gluster
10:30 haomaiwang joined #gluster
10:31 ndarshan joined #gluster
10:43 jcastill1 joined #gluster
10:45 poornimag joined #gluster
10:49 anrao joined #gluster
10:55 autoditac joined #gluster
11:00 jcastillo joined #gluster
11:01 kanagaraj joined #gluster
11:04 abrt joined #gluster
11:06 kotreshhr1 joined #gluster
11:09 [Enrico] joined #gluster
11:14 glusterbot News from newglusterbugs: [Bug 1233151] rm command fails with "Transport end point not connected" during add brick <https://bugzilla.redhat.com/show_bug.cgi?id=1233151>
11:14 glusterbot News from newglusterbugs: [Bug 1233158] Null pointer dreference in dht_migrate_complete_check_task <https://bugzilla.redhat.com/show_bug.cgi?id=1233158>
11:17 atalur joined #gluster
11:18 atinm joined #gluster
11:20 atinm joined #gluster
11:21 overclk joined #gluster
11:30 pjschmitt joined #gluster
11:41 gem joined #gluster
11:43 hmtm joined #gluster
11:48 merlink joined #gluster
11:49 nsoffer joined #gluster
11:54 overclk joined #gluster
11:55 meghanam joined #gluster
11:56 atalur joined #gluster
12:00 B21956 joined #gluster
12:01 zeittunnel joined #gluster
12:04 LebedevRI joined #gluster
12:06 poornimag joined #gluster
12:06 kshlm joined #gluster
12:10 unclemarc joined #gluster
12:10 aaronott joined #gluster
12:12 sysconfig joined #gluster
12:15 firemanxbr joined #gluster
12:15 sysconfig joined #gluster
12:16 itisravi joined #gluster
12:34 kotreshhr joined #gluster
12:34 atinm joined #gluster
12:35 DV__ joined #gluster
12:41 kotreshhr left #gluster
12:44 aravindavk joined #gluster
12:48 hagarth joined #gluster
12:49 klaxa|work joined #gluster
12:50 [Enrico] joined #gluster
12:50 wkf joined #gluster
12:50 dusmant joined #gluster
12:57 aravindavk joined #gluster
13:03 julim joined #gluster
13:03 shaunm joined #gluster
13:07 pppp joined #gluster
13:09 [Enrico] joined #gluster
13:11 kanagaraj joined #gluster
13:14 ctria joined #gluster
13:20 ashiq joined #gluster
13:21 Manikandan joined #gluster
13:21 rwheeler joined #gluster
13:22 smohan joined #gluster
13:26 aravindavk joined #gluster
13:27 georgeh-LT2 joined #gluster
13:30 jrm16020 joined #gluster
13:30 ninkotech__ joined #gluster
13:32 rjoseph joined #gluster
13:33 hamiller joined #gluster
13:34 Twistedgrim joined #gluster
13:40 overclk joined #gluster
13:47 plarsen joined #gluster
13:55 anrao joined #gluster
13:55 overclk joined #gluster
13:56 zeittunnel joined #gluster
13:57 dgandhi joined #gluster
14:10 tessier joined #gluster
14:18 smohan joined #gluster
14:18 Trefex joined #gluster
14:23 dbruhn joined #gluster
14:25 ctria joined #gluster
14:27 sysconfig joined #gluster
14:31 tessier joined #gluster
14:33 bene2 joined #gluster
14:34 semiosis legreffier: yes, you may query me
14:35 abrt hi all, i'm new to glusterfs and i have some trouble with lxc on glusterfs. it happens only when i try to create a container on a gluster volume: after the creation of the container's directory i receive the error "chown: changing ownership of /mnt/foo". if i simply switch to "/mnt/baz", where baz is a local directory, all works fine. anyone have any experience or a hint?
14:35 semiosis legreffier: but unless it's personal, please keep it in channel, so others can benefit from our discussion
14:42 soumya joined #gluster
14:44 spalai joined #gluster
14:44 kdhananjay joined #gluster
14:58 aravindavk joined #gluster
15:01 vimal joined #gluster
15:03 arcolife joined #gluster
15:06 prg3 joined #gluster
15:08 haomai___ joined #gluster
15:15 glusterbot News from newglusterbugs: [Bug 1233273] 'unable to get transaction op-info' error seen in glusterd log while executing gluster volume status command <https://bugzilla.redhat.com/show_bug.cgi?id=1233273>
15:20 maveric_amitc_ joined #gluster
15:26 spalai joined #gluster
15:30 CyrilPeponnet Hi guys, a question about quorum: is it preferred to have a quorum in a 3-node setup (with some volumes using replica 3)? I don't want my volume to become offline, I just want to prevent split-brain situations
15:32 arcolife joined #gluster
15:32 CyrilPeponnet does it put down just the bricks of the faulty nodes, or the whole volume?
15:34 overclk joined #gluster
15:35 ndevos CyrilPeponnet: there is server-side quorum and client-side - server-side checks glusterd processes, client-side the availability of bricks
15:36 ndevos at least, that is what I *think* is how its done
15:36 CyrilPeponnet Ok, but when we are using nfs as transport, only server-side matters, right?
15:36 ndevos no, the gluster/nfs server is a client
15:36 CyrilPeponnet right
15:37 CyrilPeponnet I still don't understand how I can have split-brain with a replica 3 on 3 nodes
15:38 ndevos well, if the network is partitioned, and there are changes on the 1/3 and 2/3 side, which change do you want to keep?
15:38 CyrilPeponnet I see.
15:39 CyrilPeponnet I still dont understand how quorum prevent this
15:39 CyrilPeponnet :p
15:40 ndevos ah, well, when you have quorum configured, it will not be possible for the 1/3 side to make changes
15:41 CyrilPeponnet ok, will it stop the volume or just deny write access to its own brick?
15:41 ndevos quorum would be configured to only allow changes if > 50% of the (bricks or glusterds) are available
15:41 ndevos server-side quorum will kill the brick processes in the 1/3 side
15:42 ndevos client-side quorum will deny writes to the bricks on the 1/3 side
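A minimal sketch of how the behaviour ndevos describes maps onto volume options; the volume name "myvol" and the 51% ratio are illustrative assumptions, not taken from this conversation:

    # server-side quorum: glusterd counts reachable peers and kills the brick
    # processes on the side that falls below the quorum ratio
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%

    # client-side quorum: with "auto" on a replica 3 volume, clients (including
    # the gluster/nfs server) only allow writes while a majority of the replica
    # bricks is reachable
    gluster volume set myvol cluster.quorum-type auto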
15:42 cholcombe joined #gluster
15:42 CyrilPeponnet in the worst-case scenario, if I lose 2 of the nodes, will the third continue to work, and what happens when the 2 other nodes rejoin the cluster?
15:42 s19n left #gluster
15:43 CyrilPeponnet (and what is the best server-side or client-side )
15:43 soumya joined #gluster
15:43 ndevos depends... with server-side quorum, you would not have any bricks running to connect to
15:44 ndevos with client-side quorum you would be able to read from the last brick (but that could have stale/old data if 2/3 is active and changing data)
15:44 CyrilPeponnet server-side quorum will bring down all the bricks? or just the bricks of the nodes which do not have the "majority"?
15:45 spalai joined #gluster
15:45 ndevos just the bricks on the non-quorate side
15:45 CyrilPeponnet ok good
15:46 CyrilPeponnet the thing I don't understand in my case is that I had a split-brain situation with a lot of files, but my volume is only accessed RO
15:46 CyrilPeponnet actually metadata split brain
15:46 CyrilPeponnet I had to remove the faulty brick to "heal" the whole thing
15:47 ndevos http://gluster.readthedocs.org/en/latest/Features/server-quorum/ and maybe there is a doc for client-side quorum too
15:47 CyrilPeponnet (I'm reading that)
15:48 haomaiwa_ joined #gluster
15:48 ndevos well, this should add some understanding too http://gluster.readthedocs.org/en/latest/Features/afr-arbiter-volumes/
15:48 CyrilPeponnet Ok I think this will not help I didn't have any network partition
15:49 ndevos how do you get a split-brain without a network partition?
15:49 CyrilPeponnet that my question...
15:50 CyrilPeponnet each file I try to create on the brick becomes split-brain
15:50 CyrilPeponnet as far as I can see
15:50 CyrilPeponnet it's been replicated to 2 bricks but not on the third
15:50 CyrilPeponnet and metadata were not consistent
15:50 CyrilPeponnet and some other files (not touched) started to go stale and appear in split-brain
15:51 CyrilPeponnet I removed the brick which was not replicated any more and this fixed the whole thing
15:51 ndevos you're creating files on the brick directly? not through gluster/nfs or a fuse mount?
15:51 ndevos if thats the case, you're doing it wrong :)
15:51 CyrilPeponnet NO
15:51 ndevos ah :)
15:51 CyrilPeponnet :)
15:52 ndevos how do you mean "create on the brick"?
15:52 CyrilPeponnet touch /mnt/vol/bla
15:52 CyrilPeponnet sorry volume
15:52 ndevos okay, much clearer!
15:52 CyrilPeponnet :)
15:53 squizzi_ joined #gluster
15:53 CyrilPeponnet I got a tons of
15:53 CyrilPeponnet [afr-self-heal-data.c:1611:afr_sh_data_open_cbk] 0-usr_global-replicate-0: open of b0fed4db-4028-411b-9617-2a2d635ebfa7 failed on child usr_global-client-0 (Permission denied)
15:53 CyrilPeponnet which lead to split-brain situation
15:54 CyrilPeponnet or
15:54 CyrilPeponnet [2015-06-15 06:32:53.627049] W [client-rpc-fops.c:464:client3_3_open_cbk] 0-usr_global-client-2: remote operation failed: Permission denied. Path: 3d06c0dc-e0dd-4a79-8d9a-86601543feba (3d06c0dc-e0dd-4a79-8d9a-86601543feba)
15:54 CyrilPeponnet [2015-06-15 06:32:53.627061] E [afr-self-heal-data.c:1611:afr_sh_data_open_cbk] 0-usr_global-replicate-0: open of 3d06c0dc-e0dd-4a79-8d9a-86601543feba failed on child usr_global-client-2 (Permission denied)
15:54 hagarth CyrilPeponnet: I think you can drop a note on gluster-users for this
15:54 ndevos hmm
15:55 CyrilPeponnet AFAIK those errors are not really errors (fixed in 3.5.3 if I remember well).
15:56 CyrilPeponnet but to sum up, when I have a split brain situtation (very rare) I just remove the faulty file from the brick and it's healed.
15:57 CyrilPeponnet but last time, removing the file from brick1 or brick2 led to heal (file synced) but from brick3, it never heard (never synced the file). That's why I removed brick3 and everything healed fine.
15:57 CyrilPeponnet s/heard/heald/
15:57 glusterbot What CyrilPeponnet meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
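For reference, the manual workflow CyrilPeponnet describes (removing the bad copy from one brick so self-heal recreates it) usually also needs the gfid hardlink under .glusterfs removed. A hedged sketch, with an illustrative brick path and the gfid taken from the log excerpt above:

    # on the brick holding the copy you want to discard
    getfattr -n trusted.gfid -e hex /export/brick1/path/to/file   # note the gfid
    rm /export/brick1/path/to/file
    rm /export/brick1/.glusterfs/b0/fe/b0fed4db-4028-411b-9617-2a2d635ebfa7
    # then trigger a heal, or just stat the file through a client mount
    gluster volume heal usr_global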
15:58 jcastill1 joined #gluster
16:02 arcolife joined #gluster
16:02 CyrilPeponnet From the doc: By default, client quorum (cluster.quorum-type) is set to auto for a replica 3 volume when it is created; i.e. at least 2 bricks need to be up to satisfy quorum and to allow writes.
16:04 jcastillo joined #gluster
16:05 spalai joined #gluster
16:05 ndevos I dont know, you really should not get split-brain'd files that easily, but you are on 3.5, I think replica 3 had major changes in 3.6 and 3.7
16:06 CyrilPeponnet Yeah but we are not confident to upgrade to 3.6 and certainly not to 3.7 :p
16:07 ndevos the afr developers normally respond nicely to emails on the list, they definitely know more about it than I do
16:07 ndevos make sure to mention 3.5 and "replica 3" in the subject ;-)
16:08 CyrilPeponnet Sure, but as the issue is fixed it's fine. It occurs once
16:08 haomaiwang joined #gluster
16:08 CyrilPeponnet oh last thing, how to re-add my brick with maybe corrupter data
16:08 CyrilPeponnet corrupted
16:08 CyrilPeponnet should I clean it before ?
16:09 ninkotech joined #gluster
16:09 ninkotech_ joined #gluster
16:09 ndevos I would delete the corrupted data, just to be sure self-heal detects it correctly, but maybe that is not needed - it probably depends on the corruption
16:10 CyrilPeponnet well the point is at least 30% of 5TB went bad and stale
16:10 CyrilPeponnet (around 300K files)
16:11 CyrilPeponnet So I think it's better to start from scratch
16:11 CyrilPeponnet I can't afford another outage :)
16:11 ndevos maybe there are some tricks to prevent a full re-sync, but I don't know about those
16:12 CyrilPeponnet maybe pruning metadata
16:12 CyrilPeponnet (like when you are restoring a master from a geo-replicated slave)
16:12 krink joined #gluster
16:12 gsaadi joined #gluster
16:14 spalai joined #gluster
16:15 gsaadi left #gluster
16:16 ndevos maybe, but I have never tried that
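Re-adding the removed brick from scratch, as discussed here, would look roughly like the following; the hostname, brick path and replica count are illustrative assumptions, not taken from the log:

    # on the node whose brick was removed: wipe the old data and metadata so the
    # brick directory carries no stale gfids or volume-id xattrs
    rm -rf /export/brick3
    mkdir -p /export/brick3

    # re-add it and let self-heal repopulate it from the remaining replicas
    gluster volume add-brick usr_global replica 3 server3:/export/brick3
    gluster volume heal usr_global full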
16:19 jiffin joined #gluster
16:20 RameshN joined #gluster
16:29 CyrilPeponnet joined #gluster
16:29 CyrilPeponnet joined #gluster
16:29 CyrilPeponnet joined #gluster
16:30 CyrilPeponnet joined #gluster
16:30 CyrilPeponnet joined #gluster
16:30 ira joined #gluster
16:30 CyrilPeponnet joined #gluster
16:31 CyrilPeponnet joined #gluster
16:31 RameshN joined #gluster
16:31 CyrilPeponnet joined #gluster
16:31 CyrilPeponnet joined #gluster
16:32 hamiller joined #gluster
16:36 spalai joined #gluster
16:37 CyrilPeponnet joined #gluster
16:38 CyrilPeponnet joined #gluster
16:38 CyrilPeponnet joined #gluster
16:38 CyrilPeponnet joined #gluster
16:39 CyrilPeponnet joined #gluster
16:39 CyrilPeponnet joined #gluster
16:46 glusterbot News from newglusterbugs: [Bug 1233333] glusterfs-resource-agents - volume - doesn't stop all processes <https://bugzilla.redhat.com/show_bug.cgi?id=1233333>
16:46 glusterbot News from newglusterbugs: [Bug 1233344] glusterfs-resource-agents - volume - voldir is not properly set <https://bugzilla.redhat.com/show_bug.cgi?id=1233344>
17:03 haomaiw__ joined #gluster
17:23 Rapture joined #gluster
17:24 elico joined #gluster
17:30 autoditac joined #gluster
17:31 dusmant joined #gluster
17:32 ira joined #gluster
17:32 autoditac joined #gluster
17:41 bene2 joined #gluster
17:45 Manikandan joined #gluster
17:45 Manikandan_ joined #gluster
17:47 ashiq joined #gluster
17:48 Philambdo joined #gluster
17:51 arcolife joined #gluster
17:58 autoditac joined #gluster
18:03 smohan joined #gluster
18:16 Marqin joined #gluster
18:22 Humble_ ashiq this can be merged http://review.gluster.org/#/c/10297/
18:22 Humble_ it got netbsd vote .. Manikandan ^^
18:29 atinm joined #gluster
18:30 Manikandan yeah Humble
18:30 Manikandan It got NetBSD too
18:31 Humble_ and gluster build system was passed
18:31 Humble_ even though there is no vote
18:31 Humble_ so its a sure merge
18:31 Manikandan_ Humble, yup
18:31 Manikandan_ Yeah but overclk is not available
18:32 Humble_ please ask ashiq to get it done asap
18:32 Humble_ true..
18:32 Manikandan Sure Humble!
18:32 Humble_ raghu can also merge that
18:32 Manikandan Oh okay
18:37 Vortac joined #gluster
18:40 unclemarc joined #gluster
18:46 spalai joined #gluster
18:47 autoditac joined #gluster
18:50 spalai1 joined #gluster
19:05 jdossey joined #gluster
19:07 prhiannon joined #gluster
19:07 jdossey semiosis: I just noticed that the PPA for ubuntu-glusterfs-3.3 is 404
19:07 semiosis jdossey: yep, been that way a while now.
19:08 prhiannon .
19:08 woakes070048 joined #gluster
19:09 jdossey semiosis: for my old glusterfs 3.3 cluster, should I just use the 3.4 client?
19:10 semiosis usual advice is to upgrade all servers before any clients.  see ,,(3.4 upgrade notes)
19:10 glusterbot http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
19:10 semiosis hope that helps
19:11 jdossey semiosis: yeah, I know.  I didn't do so because of outstanding bugs in 3.4-- JoeJulian brought them to my attention iirc.  Ugh.
19:11 glusterbot jdossey: 3.4's karma is now -1
19:11 jdossey haha
19:11 woakes070048 how can i check to see if i have split brain other than gluster heal
19:12 semiosis woakes070048: stat the file then check the client log?
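Besides stat'ing the file and watching the client log, a hedged sketch of the usual checks (volume name and brick path are illustrative):

    # list entries gluster itself has flagged as split-brain
    gluster volume heal myvol info split-brain

    # or compare the AFR changelog xattrs on each brick's copy of the file;
    # copies that accuse each other with non-zero trusted.afr.* counters are
    # in split-brain
    getfattr -d -m . -e hex /export/brick1/path/to/file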
19:12 jdossey I might as well bite the bullet and bring us all the way to 3.6
19:16 woakes070048 Is there a way to tell if a file such as a disk image is corrupt from gluster?
19:17 squizzi joined #gluster
19:19 smohan joined #gluster
19:21 DV_ joined #gluster
19:25 bene_in_meeting joined #gluster
19:28 nsoffer joined #gluster
19:28 atalur joined #gluster
19:39 gfranx joined #gluster
19:40 smohan joined #gluster
19:42 gfranx any hint to enable "strict locking" for a glusterfs newbie? i would like to prevent two or more kvm clients from opening the same qcow2 file.
19:52 atalur joined #gluster
19:58 woakes070048 semiosis: they are the same but i cant get my ovirt hosted engine to start
20:00 rotbeard joined #gluster
20:08 spalai joined #gluster
20:14 dusmant joined #gluster
20:24 bennyturns joined #gluster
20:26 bennyturns hmmm after leaving all my data on my bricks and recreating my volume I now see doubles of all the files and directories
20:27 DV__ joined #gluster
20:27 bennyturns anyone ever leave data on the bricks and recreate the volume?
20:31 JoeJulian bennyturns: Just the "left" side of a replicated volume.
20:31 bennyturns JoeJulian, hmm what do you mean?
20:31 JoeJulian @brick order
20:31 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
20:32 JoeJulian In that example, server1, server3
20:32 bennyturns JoeJulian, oh crap so I prolly added the bricks back in a different order?
20:32 DV joined #gluster
20:32 bennyturns ?
20:33 JoeJulian could be
20:33 bennyturns I bet that is it
20:33 bennyturns JoeJulian, hmm so should I delete everything and re-create in the proper order?
20:34 JoeJulian seems likely at this point.
20:34 bennyturns JoeJulian, kk on it
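The pairing a create command actually produced can be double-checked against the brick-order rule glusterbot quotes above; replica sets are formed from consecutive bricks in the listing (Brick1/Brick2 pair up, Brick3/Brick4 pair up, and so on). The volume name is illustrative:

    gluster volume info myvol | grep -E '^Brick[0-9]+:'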
20:35 dusmant joined #gluster
20:46 TvL2386 joined #gluster
21:08 badone__ joined #gluster
21:13 smohan joined #gluster
21:27 kovshenin joined #gluster
21:30 wkf joined #gluster
21:41 kovshenin joined #gluster
21:53 ninkotech__ joined #gluster
22:11 merlink joined #gluster
22:15 CyrilPeponnet I may have already asked but for production usage and starting from 3.5.2 should we upgrade to latest 3.5.x or bump to 3.6.x (not sure about the 3.5 -> 3.6 migration)
22:15 CyrilPeponnet what is the current stable / supported release for centos7
22:22 krink any known working examples of using transport=‘unix’ socket=‘/var/run/glusterd.socket’ ? http://pastebin.com/raw.php?i=j6KnCfmV
22:22 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
22:28 Pupeno joined #gluster
22:28 Pupeno joined #gluster
22:29 JoeJulian CyrilPeponnet: I feel confident recommending 3.6
22:30 JoeJulian krink: That's not an option.
22:30 CyrilPeponnet Well I will trust you and try to schedule an update so...
22:31 CyrilPeponnet any recommendation for a migration from 3.5 to 3.6? including 2 geo-replicated slaves on remote sites
22:31 CyrilPeponnet around 10TB of data on 5 vol some of them replica 2 other 3
22:31 CyrilPeponnet and around 1k clients
22:32 CyrilPeponnet mostly using nfs :)
22:32 CyrilPeponnet and some 3.5.2 gfs clients
22:32 JoeJulian upgrade one side of the replica, wait for self-heal to be clean, upgrade the other.
22:32 JoeJulian Then do the clients and, lastly, the remotes
22:32 CyrilPeponnet so peering a cluster with 3.5 and 3.6 will work?
22:33 JoeJulian They will remain peered, yes.
22:33 CyrilPeponnet good :)
22:33 CyrilPeponnet We will try that. If it fails I'll come directly to wherever you live to hide :P
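For the rolling upgrade JoeJulian outlines (one side of each replica at a time), the "wait for self-heal to be clean" step is typically checked like this; the volume name is illustrative:

    # every brick should report zero entries before the other side is upgraded
    gluster volume heal myvol info
    gluster volume heal myvol info split-brain   # should stay empty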
22:33 krink JoeJulian:  i’m still doing research into this config…  looks like there are a few hits that it is possible.  https://bugzilla.redhat.com/show_bug.cgi?id=1115809
22:34 glusterbot Bug 1115809: low, low, rc, libvirt-maint, CLOSED NOTABUG, Error messages are not clearly enough during start a guest with source protocol='gluster'
22:45 woakes070048 how can i tell if my gluster volume is read-only?
22:45 CyrilPeponnet gluster vol info your_vol
22:47 woakes070048 is there a way to take it out of read only so i can write to it?
22:47 Vortac joined #gluster
22:48 JoeJulian If you've enabled the read-only setting you can reset it.
22:48 JoeJulian gluster volume reset $vol $setting
22:51 CyrilPeponnet actually there are at least 3 options which can do that: root-squash (allow/deny access as root), nfs.volume-access and features.read-only
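A hedged sketch of checking which of those three options is set and clearing it, following JoeJulian's reset suggestion (the volume name is illustrative):

    gluster volume info myvol | grep -iE 'read-only|volume-access|root-squash'

    # then clear or flip whichever option is responsible, e.g.
    gluster volume reset myvol features.read-only
    gluster volume set myvol nfs.volume-access read-write
    gluster volume set myvol server.root-squash off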
22:52 jvandewege_ joined #gluster
22:52 akay1 joined #gluster
22:53 necrogami joined #gluster
22:54 dusmant joined #gluster
23:02 pjschmitt joined #gluster
23:03 ctria joined #gluster
23:06 badone__ joined #gluster
23:06 JonathanD joined #gluster
23:06 suliba joined #gluster
23:06 TheSeven joined #gluster
23:06 ConSi joined #gluster
23:06 Leildin joined #gluster
23:06 Micromus joined #gluster
23:06 legreffier joined #gluster
23:06 side_control joined #gluster
23:06 edong23 joined #gluster
23:06 mjrosenb joined #gluster
23:06 semajnz joined #gluster
23:06 samsaffron___ joined #gluster
23:06 kkeithley joined #gluster
23:06 bitpushr joined #gluster
23:06 xrsanet joined #gluster
23:15 plarsen joined #gluster
23:21 gildub joined #gluster
