
IRC log for #gluster, 2015-07-23


All times shown according to UTC.

Time Nick Message
00:27 nangthang joined #gluster
00:35 davidself joined #gluster
00:35 plarsen joined #gluster
00:38 ndk joined #gluster
00:51 unclemarc joined #gluster
01:32 Lee1092 joined #gluster
01:48 harish joined #gluster
01:59 nsoffer joined #gluster
02:17 nangthang joined #gluster
02:19 nsoffer joined #gluster
02:24 spcmastertim joined #gluster
02:24 tquinn joined #gluster
02:51 RedW joined #gluster
02:53 maveric_amitc_ joined #gluster
03:13 TheSeven joined #gluster
03:14 bharata-rao joined #gluster
03:32 sakshi joined #gluster
03:33 vmallika joined #gluster
03:35 schandra joined #gluster
03:39 calavera joined #gluster
03:43 harish_ joined #gluster
03:47 ppai joined #gluster
03:50 shubhendu joined #gluster
04:01 ramky joined #gluster
04:11 neoice joined #gluster
04:14 nbalacha joined #gluster
04:15 itisravi joined #gluster
04:15 yazhini joined #gluster
04:27 kanagaraj joined #gluster
04:29 atinm joined #gluster
04:37 sripathi joined #gluster
04:38 deepakcs joined #gluster
04:42 ndarshan joined #gluster
04:43 kdhananjay joined #gluster
04:54 vimal joined #gluster
04:55 Manikandan joined #gluster
04:56 hgowtham joined #gluster
04:57 jwd joined #gluster
04:57 LebedevRI joined #gluster
04:58 RameshN joined #gluster
05:00 ashiq joined #gluster
05:03 gem joined #gluster
05:12 atalur joined #gluster
05:14 anil_ joined #gluster
05:16 pppp joined #gluster
05:20 harish_ joined #gluster
05:20 rafi joined #gluster
05:23 hagarth joined #gluster
05:23 harish_ joined #gluster
05:25 surabhi joined #gluster
05:27 Manikandan joined #gluster
05:27 nishanth joined #gluster
05:40 vmallika joined #gluster
05:40 kotreshhr joined #gluster
05:42 raghu joined #gluster
05:43 surabhi joined #gluster
05:46 rjoseph joined #gluster
05:47 skoduri joined #gluster
05:49 Bhaskarakiran joined #gluster
05:50 overclk joined #gluster
06:04 sahina joined #gluster
06:05 glusterbot News from newglusterbugs: [Bug 1218732] gluster snapshot status --xml gives back unexpected non xml output <https://bugzilla.redhat.com/show_bug.cgi?id=1218732>
06:05 glusterbot News from newglusterbugs: [Bug 1245895] gluster snapshot status --xml gives back unexpected non xml output <https://bugzilla.redhat.com/show_bug.cgi?id=1245895>
06:05 glusterbot News from newglusterbugs: [Bug 1245908] snap-view:mount crash if debug mode is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1245908>
06:07 anil_ joined #gluster
06:24 jtux joined #gluster
06:25 meghanam joined #gluster
06:29 soumya joined #gluster
06:29 jtux joined #gluster
06:35 glusterbot News from newglusterbugs: [Bug 1222614] Misleading error message during snapshot creation <https://bugzilla.redhat.com/show_bug.cgi?id=1222614>
06:35 maveric_amitc_ joined #gluster
06:36 kotreshhr joined #gluster
06:39 kshlm joined #gluster
06:39 glusterbot News from resolvedglusterbugs: [Bug 1218961] snapshot: Can not activate the name provided while creating snaps to do any further access <https://bugzilla.redhat.com/show_bug.cgi?id=1218961>
06:41 ppai joined #gluster
06:48 harish_ joined #gluster
06:49 hchiramm joined #gluster
06:49 ppai joined #gluster
06:59 spalai joined #gluster
07:05 glusterbot News from newglusterbugs: [Bug 1213349] [Snapshot] Scheduler should check vol-name exists or not  before adding scheduled jobs <https://bugzilla.redhat.com/show_bug.cgi?id=1213349>
07:05 glusterbot News from newglusterbugs: [Bug 1245923] [Snapshot] Scheduler should check vol-name exists or not  before adding scheduled jobs <https://bugzilla.redhat.com/show_bug.cgi?id=1245923>
07:05 glusterbot News from newglusterbugs: [Bug 1245928] [rfe] glusterfs snapshot cli commands should provide xml output. <https://bugzilla.redhat.com/show_bug.cgi?id=1245928>
07:05 glusterbot News from newglusterbugs: [Bug 1245926] [USS]: gluster volume reset <vol-name>, resets the uss configured option but snapd process continues to run <https://bugzilla.redhat.com/show_bug.cgi?id=1245926>
07:05 glusterbot News from newglusterbugs: [Bug 1245922] [SNAPSHOT] : Correction required in output message after initilalising snap_scheduler <https://bugzilla.redhat.com/show_bug.cgi?id=1245922>
07:09 anil_ joined #gluster
07:09 glusterbot News from resolvedglusterbugs: [Bug 1211614] [NFS] Shared Storage mounted as NFS mount gives error "snap_scheduler: Another snap_scheduler command is running. Please try again after some time" while running any scheduler commands <https://bugzilla.redhat.com/show_bug.cgi?id=1211614>
07:10 skoduri joined #gluster
07:10 nangthang joined #gluster
07:13 arcolife joined #gluster
07:16 kotreshhr joined #gluster
07:18 kotreshhr joined #gluster
07:35 glusterbot News from newglusterbugs: [Bug 1245935] Data Tiering: Change the error message when a detach-tier status is issued on a non-tier volume <https://bugzilla.redhat.com/show_bug.cgi?id=1245935>
07:39 kotreshhr joined #gluster
07:55 aravindavk joined #gluster
08:06 ctria joined #gluster
08:07 harish_ joined #gluster
08:08 the-me joined #gluster
08:08 ajames-41678 joined #gluster
08:15 itisravi joined #gluster
08:19 kdhananjay joined #gluster
08:19 smohan joined #gluster
08:21 Saravana_ joined #gluster
08:31 jcastill1 joined #gluster
08:34 haomaiwa_ joined #gluster
08:36 jcastillo joined #gluster
08:52 jwd joined #gluster
08:53 jiffin joined #gluster
08:54 kotreshhr joined #gluster
08:54 hagarth joined #gluster
08:55 meghanam joined #gluster
08:56 [Enrico] joined #gluster
09:04 DV joined #gluster
09:04 kotreshhr joined #gluster
09:05 glusterbot News from newglusterbugs: [Bug 1245966] log files removexattr() can't find a specified key or value <https://bugzilla.redhat.com/show_bug.cgi?id=1245966>
09:05 glusterbot News from newglusterbugs: [Bug 1245981] forgotten inodes are not being signed <https://bugzilla.redhat.com/show_bug.cgi?id=1245981>
09:07 jwaibel joined #gluster
09:10 ndarshan joined #gluster
09:10 kaushal_ joined #gluster
09:11 kotreshhr joined #gluster
09:17 shubhendu joined #gluster
09:19 atinm joined #gluster
09:20 jmarley joined #gluster
09:24 jwd joined #gluster
09:30 m0le_ joined #gluster
09:30 spalai1 joined #gluster
09:38 karnan joined #gluster
09:39 coredumb joined #gluster
09:42 autoditac joined #gluster
09:46 nbalacha joined #gluster
09:47 sakshi joined #gluster
09:48 shubhendu joined #gluster
09:50 nishanth joined #gluster
09:56 ndarshan joined #gluster
10:01 Bhaskarakiran joined #gluster
10:06 autoditac joined #gluster
10:15 raghu left #gluster
10:27 atinm joined #gluster
10:27 dusmant joined #gluster
10:28 vimal joined #gluster
10:28 Manikandan joined #gluster
10:29 Manikandan_ joined #gluster
10:30 nbalacha joined #gluster
10:32 ndarshan joined #gluster
10:33 sahina joined #gluster
10:36 glusterbot News from newglusterbugs: [Bug 1246024] gluster commands space in brick path fails <https://bugzilla.redhat.com/show_bug.cgi?id=1246024>
10:39 fsimonce joined #gluster
10:43 cleong joined #gluster
10:45 kkeithley1 joined #gluster
10:50 nsoffer joined #gluster
10:51 uebera|| joined #gluster
10:53 nishanth joined #gluster
10:54 jiffin joined #gluster
11:12 arcolife joined #gluster
11:14 ChrisNBlum joined #gluster
11:17 harish_ joined #gluster
11:19 RameshN joined #gluster
11:22 dusmant joined #gluster
11:23 harish_ joined #gluster
11:25 sahina joined #gluster
11:30 rwheeler joined #gluster
11:32 shyam joined #gluster
11:33 jwaibel joined #gluster
11:34 jwd_ joined #gluster
11:35 btspce joined #gluster
11:38 Saravana_ joined #gluster
11:39 jwd joined #gluster
11:39 jrm16020 joined #gluster
11:39 ira joined #gluster
11:42 Manikandan_ joined #gluster
11:42 skoduri joined #gluster
11:49 kdhananjay joined #gluster
11:55 fghaas joined #gluster
11:56 fghaas left #gluster
11:56 soumya_ joined #gluster
11:58 kaushal_ joined #gluster
11:59 [Enrico] joined #gluster
12:01 unclemarc joined #gluster
12:03 anti[Enrico] joined #gluster
12:03 Saravana_ joined #gluster
12:06 glusterbot News from newglusterbugs: [Bug 1246052] Deceiving log messages like "Failing STAT on gfid : split-brain observed. [Input/output error]" reported <https://bugzilla.redhat.com/show_bug.cgi?id=1246052>
12:10 overclk joined #gluster
12:12 ppai joined #gluster
12:13 Philambdo joined #gluster
12:13 jtux joined #gluster
12:15 rjoseph joined #gluster
12:17 spalai joined #gluster
12:21 Manikandan_ joined #gluster
12:27 rjoseph joined #gluster
12:37 B21956 joined #gluster
12:39 harish joined #gluster
12:40 harish joined #gluster
12:42 itisravi_ joined #gluster
12:43 ppai joined #gluster
12:47 rafi joined #gluster
12:48 jmarley joined #gluster
12:48 jmarley joined #gluster
12:52 xoritor joined #gluster
12:57 overclk joined #gluster
13:03 rjoseph joined #gluster
13:06 shyam joined #gluster
13:08 ajames-41678 joined #gluster
13:09 mpietersen joined #gluster
13:13 aaronott joined #gluster
13:13 nangthang joined #gluster
13:19 lalatenduM joined #gluster
13:21 julim joined #gluster
13:21 maveric_amitc_ joined #gluster
13:23 alexandregomes joined #gluster
13:23 julim joined #gluster
13:24 spalai1 joined #gluster
13:24 dgandhi joined #gluster
13:33 autoditac joined #gluster
13:33 Twistedgrim joined #gluster
13:36 ashiq joined #gluster
13:36 hgowtham joined #gluster
13:38 DV joined #gluster
13:39 overclk joined #gluster
13:40 georgeh-LT2 joined #gluster
13:41 arcolife joined #gluster
13:41 dusmant joined #gluster
13:42 hagarth joined #gluster
13:42 jcastill1 joined #gluster
13:44 hamiller joined #gluster
13:48 side_control joined #gluster
13:48 jcastillo joined #gluster
13:50 spcmastertim joined #gluster
13:51 TrincaTwik joined #gluster
13:52 tquinn joined #gluster
13:54 spalai1 left #gluster
14:00 bennyturns joined #gluster
14:05 _Bryan_ joined #gluster
14:06 lpabon_ joined #gluster
14:08 maveric_amitc_ joined #gluster
14:10 mpietersen joined #gluster
14:17 overclk joined #gluster
14:19 _dist joined #gluster
14:20 overclk joined #gluster
14:20 LebedevRI joined #gluster
14:22 kaushal_ joined #gluster
14:29 jmarley joined #gluster
14:31 jwaibel joined #gluster
14:34 ChrisNBlum joined #gluster
14:35 mpietersen joined #gluster
14:35 jwd joined #gluster
14:36 lpabon joined #gluster
14:37 ekuric joined #gluster
14:39 shubhendu joined #gluster
14:45 skoduri joined #gluster
14:45 jwd joined #gluster
14:48 jwaibel joined #gluster
14:50 dusmant joined #gluster
14:51 togdon joined #gluster
14:51 pdrakewe_ joined #gluster
14:51 mpietersen joined #gluster
14:55 Slashman joined #gluster
14:56 xoritor anyone using fleet?
14:57 xoritor ovirt was a complete bust for me... kept eating itself and just not working
14:57 xoritor openstack is overkill for me
14:57 xoritor and does not fit my use case at all
14:58 xoritor so i am hand rolling my own using glusterfs, etc, and fleet
14:58 xoritor lol
15:00 ajames-41678 joined #gluster
15:01 maveric_amitc_ joined #gluster
15:09 vincent_vdk xoritor: have a look at opennebula
15:09 vincent_vdk you might like it
15:10 ira joined #gluster
15:17 mpietersen joined #gluster
15:22 LebedevRI joined #gluster
15:22 xoritor vincent_vdk, looking now
15:22 xoritor but i have most of this working already
15:22 xoritor heh
15:24 cholcombe joined #gluster
15:29 uebera|| joined #gluster
15:29 uebera|| joined #gluster
15:36 LebedevRI joined #gluster
15:40 uebera|| joined #gluster
15:40 plarsen joined #gluster
15:40 _maserati joined #gluster
15:42 overclk joined #gluster
16:00 ninkotech_ joined #gluster
16:00 ninkotech joined #gluster
16:07 rcampbel3 joined #gluster
16:08 uebera|| joined #gluster
16:16 ank joined #gluster
16:16 jdossey joined #gluster
16:17 Slashman joined #gluster
16:19 RameshN joined #gluster
16:19 calavera joined #gluster
16:23 jrm16020 joined #gluster
16:29 nangthang joined #gluster
16:35 uebera|| Hi there. Using v3.6.4 on Ubuntu, I currently see a lot of warnings/errors in the log during self-healing:
16:35 uebera|| (E) "0-iobuf: invalid argument: iobuf", "0-iobuf: invalid argument: iobref" (this was once said to be fixed, cf. https://bugzilla.redhat.com/show_bug.cgi?id=1116514)  and
16:35 glusterbot Bug 1116514: unspecified, unspecified, ---, vbellur, CLOSED CURRENTRELEASE, iobuf_unref errors killing logging
16:35 uebera|| (W) "[client-rpc-fops.c:2145:client3_3_setattr_cbk]", "[client-rpc-fops.c:2145:client3_3_setattr_cbk]", "[client-rpc-fops.c:1023:client3_3_setxattr_cbk]" (Operation not permitted/Permission denied)
16:35 uebera|| Does this look familiar?
16:39 maZtah joined #gluster
16:42 mpietersen joined #gluster
16:50 autoditac joined #gluster
16:53 shubhendu joined #gluster
17:02 jobewan joined #gluster
17:02 jwd joined #gluster
17:06 shyam uebera||: Could you post more of the log somewhere to look at and see if this is originating from some other place as well
17:07 shyam @paste
17:07 glusterbot shyam: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
17:07 glusterbot News from newglusterbugs: [Bug 1246203] iobuf_unref errors killing logging ("again") <https://bugzilla.redhat.com/show_bug.cgi?id=1246203>
17:08 bennyturns joined #gluster
17:10 klaas joined #gluster
17:14 uebera|| shyam: I can paste the log. Which one was the preferred pastebin?
17:15 bennyturns joined #gluster
17:15 shyam uebera||: I am looking at the bug, looks like you filed the same, if so it has the information I am looking for at present (bug #1246203)
17:15 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1246203 unspecified, unspecified, ---, bugs, NEW , iobuf_unref errors killing logging ("again")
17:15 shyam @paste
17:15 glusterbot shyam: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
17:16 _dist joined #gluster
17:16 uebera|| Ok, I used nc.
17:17 shyam Hmmm... that is not right (the paste output), here you go for future reference at least, Please use http://fpaste.org or http://paste.ubuntu.com/
17:17 Rapture joined #gluster
17:18 uebera|| Ah, paste.ubuntu.com it was. Shall I repaste it there right away?
17:19 shyam uebera||: No need, checked the code on 3.6 line, it does not contain the older fix that you referenced
17:20 uebera|| I see. That was quick. :)
17:21 shyam uebera||: So as you are rolling your own build, you *could* add this patch to your source for temporary relief till we port the patch upstream to the 3.6 branch, http://review.gluster.org/#/c/8242/2
17:21 uebera|| Thanks, this I will do right away.
17:21 * shyam goes away to check if master branch is nice and clean and includes the fix
17:24 shyam uebera||: Or, alternatively this patch from the master branch may apply more cleanly, http://review.gluster.org/#/c/10206/
17:26 * uebera|| tries the latter, then
17:30 calavera joined #gluster
17:31 calisto joined #gluster
17:34 ira joined #gluster
17:34 uebera|| The patch applied cleanly. :)
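For readers following along: a minimal sketch of applying a locally saved copy of such a patch to a 3.6 source tree before rebuilding. The file name and build steps here are assumptions, not something stated in the discussion above.

    # assumes the change from review.gluster.org was saved locally as iobuf-fix.patch
    cd glusterfs-3.6.4                          # unpacked source tree used for the custom build
    patch -p1 --dry-run < ../iobuf-fix.patch    # verify it applies cleanly first
    patch -p1 < ../iobuf-fix.patch
    ./autogen.sh && ./configure && make         # then rebuild/package as usual for your distro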
17:34 xoritor why am i getting all subvolumes are down from libvirt?
17:36 shyam uebera||: nice, I updated the bug and marked it for attention as required...
17:37 * shyam could as well backport the fix, Hmmm... maybe today evening
17:47 Machpitt joined #gluster
17:50 Machpitt hello, I'm on gluster 3.5.2 and running 6 bricks in a volume with 2 replicas. Is there a way to increase the replica count to 3 without adding any bricks live?
17:51 ChrisNBlum joined #gluster
17:52 xoritor hmm... does not matter how fast this is; if it does not work it's not usable
17:52 xoritor lol
17:53 Machpitt it looked like the syntax someone else had used to increase replica count was "gluster volume add-brick <volume> replica 3 <new brick>" but I didn't see a way to do it without adding another brick I wouldn't need
17:54 JoeJulian Machpitt: nope, there's no way to do that. You would have to add bricks equal to the distribute count to add another replica.
17:54 xoritor JoeJulian, any ideas on what this means?
17:54 JoeJulian To do what you're asking, you'll need to remove-brick down to 2x2 then you can wipe them and re-add them as replica.
17:55 xoritor http://fpaste.org/247495/37674144/
17:57 xoritor http://fpaste.org/247496/67421914/
17:58 Machpitt ouch ok thanks. what about adding 3 "fake" bricks (different subdirectory within existing brick storage location) just to increase the replica count and remove them with another command right after? would that be any safer?
17:58 PatNarcisodto joined #gluster
17:58 uebera|| In v3.6.4, I also see lots of "I [socket.c:379:ssl_setup_connection] 0-socket.management: peer CN = Anyone", "E [socket.c:2499:socket_poller] 0-socket.management: error in polling loop" pairs -- not sure whether this is related to https://bugzilla.redhat.com/show_bug.cgi?id=1218167
17:58 glusterbot Bug 1218167: high, unspecified, 3.6.4, jdarcy, MODIFIED , [GlusterFS 3.6.3]: Brick crashed after setting up SSL/TLS in I/O access path with error: "E [socket.c:2495:socket_poller] 0-tcp.gluster-native-volume-3G-1-server: error in polling loop"
17:59 JoeJulian Machpitt: Sure, if you've got the space that should work.
17:59 autoditac joined #gluster
18:00 xoritor fyi i have etcd+fleet running using "instances" to start the vms; if i can get libgfapi to play, i'm done
18:01 JoeJulian xoritor: remove the port specification.
18:01 JoeJulian That's the wrong port.
18:02 xoritor not if you look at my other page
18:02 xoritor ill try that but i get the same
18:02 JoeJulian xoritor: The client connects to 24007 to retrieve the volume definition. It connects to the bricks using that.
18:03 JoeJulian Since your bricks are defined by hostname, make sure those hostnames are resolvable from your client.
18:03 xoritor http://fpaste.org/247501/67458514/
18:03 xoritor ok yea they are
18:03 xoritor should i use hostnames instead of ip?
18:04 xoritor ok it creates it but gives errors about subvolumes
18:04 xoritor what is that about
18:04 xoritor ?
18:04 JoeJulian I always do, but it shouldn't be the issue.
18:04 PatNarciso_ fellas... I'm having a moment... it appears files I've been placing on a distributed 3.7.2 volume via gluster-client mount may be corrupted?
18:04 Machpitt ok thanks. Is there any way to flag a brick to not be used to play it safe? I was thinking of adding all 3 then one by one flagging them down/unused and verify data is replicated to the online bricks before flagging the next ...then when all 3 are down, remove them
18:04 jrm16020 joined #gluster
18:05 xoritor that is the error i get when i try to connect to it via libvirt
18:05 xoritor nothing works
18:05 PatNarciso_ I had my staff download an ubuntu iso, and copy'n'paste the same iso to a dir within a gluster mount.  each md5sum is unique.
18:05 JoeJulian Machpitt: no, that sounds like a good idea for a feature request though.
18:05 xoritor all subvolumes are down... im not using subvolumes
18:06 Machpitt thanks, although i guess an FR for live replica modification would make things even easier :)
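A sketch of the add-brick form being discussed, with hypothetical host and brick names, assuming a 6-brick 3x2 distributed-replicate volume called gv0: going to replica 3 needs one new brick per distribute subvolume, i.e. three bricks in a single command.

    # hypothetical hosts/paths; one new brick for each of the three replica pairs
    gluster volume add-brick gv0 replica 3 \
        server7:/bricks/gv0 server8:/bricks/gv0 server9:/bricks/gv0
    # then let self-heal copy existing data onto the new bricks
    gluster volume heal gv0 full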
18:06 calavera joined #gluster
18:06 JoeJulian xoritor: It's the subvolumes to the distribute translator.
18:06 PatNarciso_ performing this test outside of the mount root, but on the same drive, I get correct and consistent md5sums.
18:07 JoeJulian PatNarciso_: Is writeback caching enabled? It could be a race condition.
18:07 * PatNarciso_ checks
18:08 JoeJulian xoritor: basically it's saying that it cannot connect to any of the bricks.
18:08 PatNarciso_ performance.write-back-window-size: 4MB
18:08 xoritor why would it say that?
18:08 xoritor status and everything say they are online
18:08 xoritor they are all there
18:08 xoritor none of them are down at all
18:09 JoeJulian cannot resolve hostname, firewall, rdma connection problem (maybe)...
18:10 calavera joined #gluster
18:11 xoritor none of those are issues... hosts resolve, firewall works, rdma works, tcp is there too (tcp,rdma) and gluster works flawlessly if i mount either nfs or fuse
18:11 xoritor actually firewall is off right now
18:12 JoeJulian What version are you using/
18:12 JoeJulian ?
18:13 xoritor 3.7 from gluster.org
18:14 PatNarciso_ JoeJulian, write-back-window-size is at 4MB.  should this be disabled?  or is your writeback comment related to the spinning-disks?
18:16 autoditac joined #gluster
18:18 jwd joined #gluster
18:18 JoeJulian PatNarciso_: Try disabling it.
18:23 JoeJulian xoritor: Check... did it actually create the file?
18:23 xoritor when i did it with hostnames yes
18:23 xoritor but it will not start the vm
18:23 xoritor i get all subvolumes down
18:24 xoritor my rdma with qperf: bw = 1.89 GB/sec
18:24 JoeJulian xoritor: I asked because I saw the same error creating the image here, but it still said all subvolumes were down.
18:25 JoeJulian On mine, though, one brick is down on a replica 2 volume. No quorum so that shouldn't prevent anything.
18:25 xoritor yea it says that and when i try to start it with libvirt using the howto info... specifying it via libgfapi and a host... it says all subvolumes are down
18:27 JoeJulian Assume for the moment that the "All subvolumes are down" error is a red herring and look for other reasons for it to fail to boot. I'll try to dig in to this error and see if it's just an invalid error.
18:28 xoritor well it started that time but kernel panicked
18:30 xoritor i think that is something i did
18:30 calavera joined #gluster
18:32 calavera joined #gluster
18:37 ira joined #gluster
18:38 glusterbot News from newglusterbugs: [Bug 1246229] tier_lookup_heal.t contains incorrect file_on_fast_tier function <https://bugzilla.redhat.com/show_bug.cgi?id=1246229>
18:39 PatNarciso_ gluster v set vol100015 performance.flush-behind off
18:40 PatNarciso_ unmounted and remounted the volume, re-performed the '.iso copy test', md5sums are still abnormal.
18:41 PatNarciso_ ... is it worth it to stop/start the volume?
18:42 xoritor ok... for some reason it seems to be working
18:42 xoritor not sure why
18:42 xoritor but it is so i am happy
18:42 xoritor ;-)
18:42 xoritor thanks again JoeJulian
18:43 xoritor the subvolume thing is odd though
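As a side note on the libgfapi path JoeJulian describes: qemu-img speaks the same gluster:// protocol as libvirt/qemu (assuming qemu is built with glusterfs support), so it can be used to test access outside libvirt. Host, volume and image names below are placeholders.

    # the client pulls the volume layout from the management port (24007 by default),
    # then talks to the bricks directly, so brick hostnames must resolve from this machine
    qemu-img info gluster://server1/gv0/vm1.qcow2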
18:45 togdon joined #gluster
18:45 TrincaTwik joined #gluster
18:46 JoeJulian PatNarciso_: I can't think of any reason why it would matter. The only option I know doesn't change without doing that is server.allow-insecure.
18:48 TheCthulhu4 joined #gluster
18:49 PatNarciso_ k.
18:49 JoeJulian PatNarciso_: This is the native client, right?
18:49 PatNarciso_ yes.
18:50 JoeJulian PatNarciso_: btw... I'd also check the md5 at the brick(s). I would lay odds that they're correct.
18:51 PatNarciso_ I've fully rebooted the system; re-downloading my 'test iso' now...  will perform the tests, and look at the underlying bricks in just a moment.
18:52 uebera|| joined #gluster
18:56 PatNarciso_ underlying bricks md5sum inconsistent.
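A sketch of the consistency check being run here, with placeholder paths: compare the checksum seen through the FUSE mount with the copy on whichever brick holds the file (on a plain distribute volume each file lives on exactly one brick).

    md5sum /mnt/gv0/isos/test.iso                                   # through the gluster mount
    # find which brick holds the file, then checksum it there directly
    getfattr -n trusted.glusterfs.pathinfo /mnt/gv0/isos/test.iso
    ssh server1 md5sum /bricks/gv0/isos/test.iso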
18:56 calavera joined #gluster
19:07 jonb joined #gluster
19:08 uebera|| Is it possible to activate TRACE on a single server for a volume called "gvol01" by pasting the (adapted) second stanza from https://www.gluster.org/community/documentation/index.php/Translators/debug/trace into a separate file /var/lib/glusterd/vols/gvol01/gvol01-trace.vol and restart that server?
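Hand-written .vol files tend to be regenerated by glusterd, so they are easy to lose; if very verbose logging is enough, the diagnostics log-level options are a simpler alternative, though they apply volume-wide rather than to a single server. Volume name as in the question; treat the rest as a sketch.

    gluster volume set gvol01 diagnostics.brick-log-level TRACE
    gluster volume set gvol01 diagnostics.client-log-level TRACE
    # drop back to the default once done; TRACE logs grow very quickly
    gluster volume set gvol01 diagnostics.brick-log-level INFO
    gluster volume set gvol01 diagnostics.client-log-level INFO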
19:11 PatNarciso_ ... really not sure what's going on here; why the files would be corrupted like this upon write.
19:14 autoditac joined #gluster
19:15 jonb Hello, I am having a bit a trouble with one of my Gluster volumes and could use some help.
19:17 nsoffer joined #gluster
19:17 jonb I've got an 8-wide, 4x2 replicated setup with SAMBA exposing the volume to clients. CentOS7.1, Gluster version 3.5.2.
19:19 jonb A few days ago a brick went awol; the brick daemon crashed, apparently due to the storage mount hanging and going offline. We had written that off as an oddity until the next day, when its partner brick also suffered a similar failure.
19:20 jonb After walking all the nodes doing a systemctl restart glusterd, the volume appeared to stabilize. It was busy with a fix-layout on each node and had 7.5 million files to sync between the pair of bricks that had trouble.
19:21 jonb Today, however, performance has continued to degrade and I am getting messages such as "All subvolumes are down. Going offline..." and "gfid or missing entry self heal failed"
19:22 jonb and an ls of the offending directory from a node FUSE-mounted to itself has gotten hung to the point I can't Ctrl+C to kill it.
19:25 calavera joined #gluster
19:33 uebera|| joined #gluster
19:33 uebera|| joined #gluster
19:41 PatNarciso_ JoeJulian, fwiw - file sizes are always correct.  clients never get an error during writes.  it simply...  is not the identical file.
19:42 rotbeard joined #gluster
19:47 jdossey joined #gluster
19:50 ueberall joined #gluster
19:57 calavera joined #gluster
20:09 virusuy hi guys, i have a distributed-replicated volume in 4 nodes, and 1 of those nodes failed, which procedure should i follow to replace it ?
20:16 ghenry joined #gluster
20:17 ira joined #gluster
20:20 virusuy also, how can i know in a distributed-replicated volume which brick is distributed or replicated ?
20:21 autoditac joined #gluster
20:26 elico joined #gluster
20:28 jwd joined #gluster
20:29 TrincaTwik joined #gluster
20:32 nsoffer joined #gluster
20:33 togdon joined #gluster
20:34 _maserati virusuy, `volume info all`
20:34 _maserati virusuy, `gluster volume info all`
20:34 virusuy _maserati:  ok,
20:34 virusuy Brick2 is replica of Brick 1
20:34 vimal joined #gluster
20:35 virusuy and Brick 4 is replica of Brick 3
20:35 virusuy right ?
20:35 virusuy and Brick 1 and 3 are distributed , am i right ?
20:35 _maserati couldnt tell you without seeing the output
20:35 _maserati !pastebin
20:35 _maserati err
20:35 _maserati i forget which pastebin they like you to use here, but its not pastebin
20:36 tessier fpaste possibly
20:37 virusuy _maserati:  http://ur1.ca/n7k18
20:38 badone joined #gluster
20:48 msvbhat virusuy: So it's distributed across (Brick1 + brick2) and (Brick3 + Brick4)
20:48 _maserati Well you definitely got two replicated bricks being distributed to another pair of replicated bricks. But I do not know how to tell which two is which.
20:48 _maserati msvbhat, for my info, how do you tell? does it just go in order?
20:48 msvbhat And Brick1 and Brick2 are replicate pairs. So is Brick3 and Brick4
20:49 msvbhat Yes, volume info lists in order
20:49 virusuy msvbhat:  thanks
20:50 _maserati cool so 2 x 2 = 4   (the first 2 indicates brick1 and 2 are replicated) (and the second 2 indicates the next 2 are replicated) per the list?
20:50 msvbhat Also this is controlled (or determined) by the order in which you specify bricks during "gluster volume create"
20:50 virusuy oh, ok, good to know
20:50 msvbhat _maserati: Yes, And if you have replica 3, first three bricks in `gluster vol info` would be replica pairs
20:51 msvbhat And so on
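A sketch of what that ordering means at create time, with hypothetical hosts: with replica 2, consecutive bricks on the command line form the replica pairs, and files are then distributed across those pairs.

    # replica pairs: (s1,s2) and (s3,s4)
    gluster volume create gv0 replica 2 \
        s1:/bricks/gv0 s2:/bricks/gv0 s3:/bricks/gv0 s4:/bricks/gv0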
20:51 _maserati What if I decided that I want to trash the server that brick2 exists on, is it pretty straightforward to drop that brick, and re-add another brick on new hardware to the same position and then rebalance?
20:52 virusuy _maserati:  or you can create a new brick with the same node's hostname and follow this procedure http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
20:52 _maserati cool, thanks, link saved
20:52 virusuy am i right msvbhat ?
20:52 * msvbhat opening the link
20:53 calavera joined #gluster
20:53 msvbhat It says gluster 3.4 ( The support of 3.4.x series is over now)
20:54 _maserati Yeah i'm on 3.6.1 myself
20:54 virusuy yes, sadly that is my volume's version (we're planning to update it)
20:55 msvbhat I think the simplest way would be to do a "replace-brick commit force" from server2 to a new server/brick
20:55 msvbhat And then let the self-heal to heal from it's replica pair
20:56 msvbhat _maserati: But note that you can add/remove brick only in multiples of replica count
20:56 _maserati oo
20:56 virusuy yes
20:56 virusuy in distributed-replicated always add/remove on pairs
20:56 msvbhat But in case of disk failure there is a procedure documented to replace it.
20:57 * msvbhat too sleepy to find the doc now
20:57 _maserati so break the distribution layer, break up the replication bricks, recreate them with your new node, and then rejoin the two distributions?
20:58 msvbhat _maserati: What do you mean, break up the distribution and replication layers?
20:58 msvbhat _maserati: It's 2:30 in the morning here and I'm feeling very sleepy. If you share your mail id with me, I will find the doc tomorrow and share it with you
20:59 msvbhat How to replace a failed brick in latest release I mean
20:59 _maserati Based on what you said, if i needed to replace a brick's hardware, I would have to break up that entire replicated set and add in the new brick to take its place
21:00 _maserati or am i confusing things?
21:00 msvbhat _maserati: Yeah, there is replace-brick for that.
21:00 msvbhat _maserati: So let's say server2:brick2 is bad
21:01 _maserati AND
21:01 msvbhat So you add a new node first (peer probe server5)
21:01 _maserati i want to replace server2 altogether
21:01 _maserati right
21:01 msvbhat And do "gluster replace-brick server2:brick2 server5:brickn commit force"
21:01 _maserati got it
21:01 _maserati easy
21:01 msvbhat And "gluster volume heal start"
21:02 _maserati can you shed some light on the differences between "heal" and "rebalance" ?
21:02 msvbhat And then peer detach force server2
21:02 msvbhat heal to sync a brick from it's replica pair
21:03 msvbhat In this example server5:brickn will be healed from server2:brick2
21:03 msvbhat And it makes sense only in replicated or dispersed volume
21:03 _maserati and rebalance is to spread data more evenly over a distributed volume?
21:04 msvbhat rebalance is when a layout changes
21:04 msvbhat YES
21:04 _maserati roger, thanks :)
21:04 msvbhat rebalance makes sense in distributed volumes
21:05 msvbhat welcome
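The steps above, gathered into one hedged sketch; hostnames, brick paths and the volume name are placeholders, and the exact CLI wording can differ slightly between releases.

    gluster peer probe server5                                  # bring the replacement node into the pool
    gluster volume replace-brick gv0 \
        server2:/bricks/gv0 server5:/bricks/gv0 commit force
    gluster volume heal gv0 full                                # new brick is healed from its replica partner
    gluster volume heal gv0 info                                # watch progress
    gluster peer detach server2                                 # once no volume references the old node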
21:05 marcoceppi joined #gluster
21:06 msvbhat _maserati: Also there are few other ways in whichyou can trigger self-heal. Maybe xavih or Pranithk can shed more light on it
21:06 msvbhat Anyway, I'm *really* feeling sleepy now
21:06 _maserati Goto sleep young lad :)
21:07 msvbhat :)
21:07 msvbhat Good night (or Good day, depending on which part of world you are) :P
21:29 _maserati Is there such a thing as a gluster 3.6 manual?
21:31 finknottle joined #gluster
21:32 autoditac joined #gluster
21:33 finknottle Hi. Is automount with fuse supported ? I get 'too many symbolic links' when i try to do it
21:35 ndevos finknottle: works for me, I do it like http://blog.nixpanic.net/2014/04/configuring-autofs-for-glusterfs-35.html
21:36 shaunm_ joined #gluster
21:38 RedW joined #gluster
21:38 finknottle @ndevos. Thanks! However, this looks like something that would have to be done on every client. That is to say, it is known not to work out of the box ?
21:38 side_control joined #gluster
21:39 finknottle I have an ipa server and all other automount maps for nfs are exported through ipa
21:39 finknottle adding a map for glusterfs fuse doesn't work readily though
21:39 Lee- joined #gluster
21:40 finknottle right now i have a line in fstab, but ideally i would want to get rid of it
21:59 ndevos finknottle: hmm, I'm not sure about that, it might be possible to get it to work, but you can only mount a volume, not a subdir of a volume
22:00 ndevos finknottle: if you use systemd, you can set the mount option x-systemd.automount in /etc/fstab as alternative (or provision a systemd.mount unit)
22:04 finknottle I'm interested in mounting an entire volume. So that's not a big problem. In particular, I want to have no configuration on clients, similar to the way nfs automounts work from a centralized NIS/IPA server
22:06 finknottle If i need to do per-client configuration, then all the workarounds including plain old fstab become similar in functionality
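A minimal /etc/fstab line for the x-systemd.automount suggestion above, with placeholder names; it still has to live on each client, it just defers the mount until first access.

    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,x-systemd.automount  0  0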
22:08 kiwnix joined #gluster
22:08 _maserati I have 3 replicated bricks. 2 exist in the same rack in one data center, and the third brick is 600mi/966km away in another datacenter. If something happens to the trunk connecting these two datacenters, data may be placed on the A side as well as the B side while that link is down. Will gluster be able to handle reconciliation?
22:09 _maserati when the link comes back online that is
22:09 calavera joined #gluster
22:09 jdossey joined #gluster
22:09 JoeJulian _maserati: That's the definition of split-brain.
22:10 JoeJulian If a single file is altered in two different ways, there's no logical way to reconcile that.
22:10 tessier There must be a way to tell gluster which one you want to keep?
22:10 JoeJulian So you can use quorum to prevent it, or you just have to pick one.
22:10 _maserati They will not be the same file, im just saying if site A drops a file to its gluster and site B drops files to its gluster, will gluster recognize this and fix it once the link is up?
22:10 JoeJulian tessier: until recently you had to do that from the brick. 3.7 has commands from the cli.
22:11 JoeJulian _maserati: if it's not the same file, you're fine.
22:11 tessier Nice.
22:11 _maserati JoeJulian, great, thanks
22:11 _maserati is there a command to list any such files that may result from a "split-brain" scenario?
22:12 JoeJulian "gluster volume heal $vol info" will show them.
22:12 JoeJulian Usage: volume heal <VOLNAME> [enable | disable | full |statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] |info [healed | heal-failed | split-brain] |split-brain {bigger-file <FILE> |source-brick <HOSTNAME:BRICKNAME> [<FILE>]}]
22:12 _maserati that wont start a heal tho, correct?
22:13 JoeJulian The heal will start when the connection is reestablished.
22:14 _maserati What i mean is from the standpoint of: My gluster setup is working just fine right now. But I wonder if we had some unseen cases of splitbrain occur. Is it possible to get a list of files that are not matching properly between bricks? without starting a heal
22:15 JoeJulian "gluster volume heal $vol info" will show them.
22:16 JoeJulian or, for a log of the last 1024 identified split-brain files "gluster volume heal $vol info split-brain"
22:19 _maserati Number of entries: 0 on both bricks, i'm gravy
22:20 _maserati the good news I just heard: any file we write will never get modified, as we version things
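For reference, the 3.7-style CLI that JoeJulian's usage text above refers to, with placeholder names; older releases have to resolve split-brain from the brick side instead.

    gluster volume heal gv0 info split-brain                    # list files currently in split-brain
    # resolve a single file by keeping the bigger copy, or by naming the brick to trust
    gluster volume heal gv0 split-brain bigger-file /path/in/volume/file
    gluster volume heal gv0 split-brain source-brick server1:/bricks/gv0 /path/in/volume/file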
22:29 cleong joined #gluster
22:30 smohan joined #gluster
22:35 side_control joined #gluster
22:42 mpietersen joined #gluster
23:33 coreping joined #gluster
23:56 maveric_amitc_ joined #gluster
