IRC log for #gluster, 2015-01-22

All times shown according to UTC.

Time Nick Message
00:00 partner so does staying up this late, so i better get going to stay awake at tomorrow's (today's) gluster meetup
00:03 JoeJulian Nice, where's the meetup?
00:05 partner helsinki, finland
00:05 JoeJulian I'd come, but I'd probably have to have left already.
00:06 partner there should be some familiar on the list of speakers: http://www.meetup.com/RedHatFinland/events/218774694/
00:06 partner names*
00:07 JoeJulian Nice. Only name I haven't talked to directly is Scargo.
00:12 partner heh, had to check, there isn't a single flight any more that would arrive in helsinki "today" from new york (chosen for possible direct flights)
00:12 partner so i guess its a bit too late, see you next time?-)
00:13 partner (and who would take 10+ hour flight to join a couple of hours of meetup anyways :o)
00:13 partner hmm, i guess there are such people. there was just a story about a finnish man that flew to china to have a beer to celebrate his birthday
00:14 partner oh, there is the story in english available: http://www.ign.com/boards/threads/finnish-guy-makes-day-long-beer-to-have-one-beer-in-shanghai-for-his-birthday.454363936/
00:15 m0zes joined #gluster
00:16 partner ok, far out of topic, i'll grab some sleep now, cya ->
00:26 haomaiwang joined #gluster
00:42 DV joined #gluster
00:45 B21956 joined #gluster
00:53 bala joined #gluster
01:08 psi_ joined #gluster
01:57 natgeorg joined #gluster
01:57 georgeh joined #gluster
01:57 ubungu joined #gluster
01:59 julim_ joined #gluster
01:59 hybrid5121 joined #gluster
01:59 swebb_ joined #gluster
02:00 side_con1rol joined #gluster
02:00 social_ joined #gluster
02:01 tru_tru_ joined #gluster
02:03 DJCl34n joined #gluster
02:03 mikedep3- joined #gluster
02:05 shaunm__ joined #gluster
02:06 ron-slc joined #gluster
02:23 nangthang joined #gluster
02:28 badone joined #gluster
02:36 DV joined #gluster
02:38 MugginsM joined #gluster
02:40 PeterA we're having an issue on gluster with a sybase dump
02:40 PeterA sybase can't write to the gluster NFS export :(
02:41 PeterA while the sybase user can write and move files
02:55 lpabon joined #gluster
02:59 hchiramm joined #gluster
03:05 cyberbootje joined #gluster
03:09 quydo joined #gluster
03:13 hagarth joined #gluster
03:22 sputnik13 joined #gluster
03:25 ubungu joined #gluster
03:32 gildub joined #gluster
03:33 anoopcs joined #gluster
03:36 ubungu joined #gluster
03:41 kanagaraj joined #gluster
03:51 hagarth joined #gluster
03:56 anoopcs joined #gluster
04:03 itisravi joined #gluster
04:14 atinmu joined #gluster
04:16 side_control joined #gluster
04:21 shubhendu joined #gluster
04:28 rjoseph joined #gluster
04:31 anoopcs joined #gluster
04:41 sakshi joined #gluster
04:45 suman_d joined #gluster
04:49 spandit joined #gluster
04:51 jiffin joined #gluster
04:55 gem joined #gluster
04:55 ppai joined #gluster
04:56 hagarth joined #gluster
04:57 rafi joined #gluster
05:00 quydo joined #gluster
05:01 daMaestro joined #gluster
05:05 kshlm joined #gluster
05:07 pp joined #gluster
05:11 nbalacha joined #gluster
05:11 ndarshan joined #gluster
05:14 smohan joined #gluster
05:14 kdhananjay joined #gluster
05:20 quydo joined #gluster
05:23 kumar joined #gluster
05:24 lalatenduM joined #gluster
05:34 suman_d joined #gluster
05:35 bala joined #gluster
05:35 soumya joined #gluster
05:39 ubungu joined #gluster
05:43 hagarth joined #gluster
05:51 anil joined #gluster
05:53 zerick joined #gluster
05:53 flu__ joined #gluster
05:53 dusmant joined #gluster
06:02 Bardack joined #gluster
06:04 raghu joined #gluster
06:06 anoopcs joined #gluster
06:07 quydo joined #gluster
06:14 overclk joined #gluster
06:19 Manikandan joined #gluster
06:24 sakshi joined #gluster
06:28 RameshN joined #gluster
06:35 meghanam joined #gluster
06:46 atalur joined #gluster
06:50 hagarth joined #gluster
06:52 booly-yam-6137 joined #gluster
06:52 R0ok_ joined #gluster
06:56 suman_d joined #gluster
06:59 DV joined #gluster
07:01 LebedevRI joined #gluster
07:19 Philambdo joined #gluster
07:21 ricky-ticky joined #gluster
07:23 jtux joined #gluster
07:26 nshaikh joined #gluster
07:27 suman_d joined #gluster
07:45 wkf_ joined #gluster
07:48 hagarth joined #gluster
07:50 MugginsM joined #gluster
07:54 quydo joined #gluster
08:04 chirino joined #gluster
08:07 glusterbot News from newglusterbugs: [Bug 1023134] Used disk size reported by quota and du mismatch <https://bugzilla.redhat.com/show_bug.cgi?id=1023134>
08:07 glusterbot News from newglusterbugs: [Bug 917901] Mismatch in calculation for quota directory <https://bugzilla.redhat.com/show_bug.cgi?id=917901>
08:18 plarsen joined #gluster
08:18 [Enrico] joined #gluster
08:18 [Enrico] joined #gluster
08:22 booly-yam-6137 joined #gluster
08:35 ricky-ticky1 joined #gluster
08:37 glusterbot News from newglusterbugs: [Bug 1123067] User is not informed, nor is there a way to check if completed, the quota xattr cleanup after disabling quota <https://bugzilla.redhat.com/show_bug.cgi?id=1123067>
08:42 fsimonce joined #gluster
08:56 liquidat joined #gluster
08:57 dusmant joined #gluster
08:59 suman_d joined #gluster
09:00 swebb joined #gluster
09:06 ntt joined #gluster
09:07 glusterbot News from newglusterbugs: [Bug 1115197] Directory quota does not apply on it's sub-directories <https://bugzilla.redhat.com/show_bug.cgi?id=1115197>
09:07 glusterbot News from resolvedglusterbugs: [Bug 1080296] The child directory created within parent directory ( on which the quota is set ) shows the entire volume size, when checked with "df" command. <https://bugzilla.redhat.com/show_bug.cgi?id=1080296>
09:07 glusterbot News from resolvedglusterbugs: [Bug 823790] Renaming file says  "Disk quota limit exceeded" even it has not. <https://bugzilla.redhat.com/show_bug.cgi?id=823790>
09:07 ntt Hi. How can i check if any clients have a volume mounted?
09:07 ntt Clients using nfs
09:10 MooDoo joined #gluster
09:16 booly-yam-6137_ joined #gluster
09:16 aravindavk joined #gluster
09:19 partner ntt: gluster volume status <volname> nfs clients
09:20 Thilam joined #gluster
09:21 Thilam left #gluster
09:29 mator would "showmount -a" work for internal glusterfs nfs ?
09:29 jvandewege_ joined #gluster
09:30 partner quickly testing i don't get any output from it
09:31 maveric_amitc_ mator, showmount -a will show the exported FS
09:31 maveric_amitc_ it won't show the  clients that are mounted
09:31 partner sorry, it does, i was on the wrong box (round-robin dns..)
09:32 partner it shows the client, not the offered nfs mountpoints
09:32 partner at least on my systems as i just tested, i see only the nfs client from the gluster server end with the given command
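
For reference, a minimal sketch of the two checks discussed above, assuming a volume named "myvol" served from a gluster node "server1" (both names are illustrative):

    # list clients connected to the volume's built-in NFS server
    gluster volume status myvol nfs clients

    # showmount queries the same Gluster NFS service; run it against the
    # server that actually answered the mount (mind round-robin DNS)
    showmount -a server1
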
09:33 deniszh joined #gluster
09:34 DV joined #gluster
09:35 sakshi joined #gluster
09:38 smohan_ joined #gluster
09:38 jvandewege_ joined #gluster
09:41 Anuradha joined #gluster
09:56 lalatenduM joined #gluster
09:59 RameshN joined #gluster
10:01 calum_ joined #gluster
10:02 fandi joined #gluster
10:12 sakshi joined #gluster
10:14 Slashman joined #gluster
10:18 deniszh1 joined #gluster
10:18 deniszh1 joined #gluster
10:21 karnan joined #gluster
10:23 dusmant joined #gluster
10:40 DV joined #gluster
10:41 Fen1 joined #gluster
10:41 ralala joined #gluster
10:42 shubhendu joined #gluster
10:42 peem Is Bug 991084 still present in 3.6 or is this expected behaviour now?
10:42 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=991084 high, unspecified, ---, bugs, NEW , No way to start a failed brick when replaced the location with empty folder
10:43 polychrise joined #gluster
10:47 Arrfab joined #gluster
10:53 hagarth joined #gluster
10:55 jvandewege_ joined #gluster
11:02 Anuradha joined #gluster
11:04 atalur joined #gluster
11:15 gem joined #gluster
11:18 ira joined #gluster
11:20 T0aD joined #gluster
11:21 nshaikh joined #gluster
11:22 neoice joined #gluster
11:30 _shaps_ joined #gluster
11:32 booly-yam-6137_ joined #gluster
11:33 kkeithley1 joined #gluster
11:37 shubhendu joined #gluster
11:40 social joined #gluster
11:52 nishanth joined #gluster
11:57 calisto joined #gluster
11:59 meghanam joined #gluster
12:03 shubhendu joined #gluster
12:04 ctria joined #gluster
12:05 itisravi_ joined #gluster
12:05 lalatenduM joined #gluster
12:07 hagarth joined #gluster
12:07 itisravi joined #gluster
12:08 glusterbot News from newglusterbugs: [Bug 1184883] gluster "volume heal info" command caused crash <https://bugzilla.redhat.com/show_bug.cgi?id=1184883>
12:17 smohan joined #gluster
12:19 meghanam joined #gluster
12:22 calisto joined #gluster
12:27 anoopcs joined #gluster
12:28 meghanam joined #gluster
12:28 haomaiwa_ joined #gluster
12:31 meghanam_ joined #gluster
12:32 haomaiw__ joined #gluster
12:37 peem Is Bug 991084 still present in 3.6 or is this expected behaviour now?
12:37 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=991084 high, unspecified, ---, bugs, NEW , No way to start a failed brick when replaced the location with empty folder
12:44 prasanth_ joined #gluster
12:47 prasanth_ joined #gluster
12:51 harish joined #gluster
13:01 bennyturns joined #gluster
13:03 jvandewege_ joined #gluster
13:04 Fen1 joined #gluster
13:06 tru_tru joined #gluster
13:08 RameshN joined #gluster
13:10 jvandewege_ joined #gluster
13:12 jvandewege_ joined #gluster
13:12 badone joined #gluster
13:22 jvandewege_ joined #gluster
13:26 tdasilva joined #gluster
13:26 vimal joined #gluster
13:32 jvandewege joined #gluster
13:38 ricky-ticky joined #gluster
13:41 meghanam_ joined #gluster
13:44 [o__o] joined #gluster
13:45 lalatenduM joined #gluster
13:49 theron joined #gluster
13:52 Gill joined #gluster
13:56 Fen1 glusterfs cluster (2 network interfaces) / 2 proxmox (glusterfs clients) / bonding balance-alb across the 2 links. Why is only one link used?
14:01 marbu joined #gluster
14:03 nbalacha joined #gluster
14:04 shubhendu joined #gluster
14:05 partner Fen1: not exactly gluster issue but how do you tell only one link is being used?
14:07 Fen1 i just look at the bit rate on the interfaces : $bmon
14:08 Fen1 partner : i've tried with 2 separate volumes but same thing :/
14:09 quydo joined #gluster
14:12 jvandewege joined #gluster
14:12 theron joined #gluster
14:14 partner Fen1: i don't see how these are related.. if you are just reading then with alb it will come only via one link
14:15 Fen1 partner : is there a way to use these 2 links at the same time (even if it's not the same client)
14:15 Fen1 ?
14:16 ndevos Fen1: from my understanding, the bonding implementation in the Linux kernel is 'smart', it optimizes TCP connections to go through one interface, this reduces the need to re-order TCP-segments from multiple interfaces into one stream
14:16 Fen1 ndevos : but here i have two streams, correct ?
14:16 dgandhi joined #gluster
14:17 ndevos Fen1: so, I guess you should try to use multiple clients (mount points, or qemu processes, ..) and see if that makes a difference
14:18 elico joined #gluster
14:18 Fen1 ndevos : I've 2 clients which don't use the same volume but are on the same server, so there are 2 tcp connections; it should use 2 links, but it doesn't :S
14:21 badone joined #gluster
14:22 ndevos Fen1: hmm, okay... well, my bonding foo is weak, and it is more of a network question than a Gluster one, not sure if there is someone here that can help with that
14:22 ndevos Fen1: you could send an email to the list, the audience is a little wider there
14:23 Fen1 i've already tested this : 2 different clients using different volumes on the same glusterfs server, and only one link is used instead of 2 (bonding mode : balance-alb)
14:23 Fen1 ndevos : ok i will, thx ;)
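
Not a gluster command, but a quick way to confirm what the bond is actually doing on the server side; a sketch assuming the bond interface is bond0 with slaves eth0/eth1 (interface names are assumptions):

    # show bonding mode, active slave and per-slave link state
    cat /proc/net/bonding/bond0

    # per-interface byte counters; compare before and after a transfer to
    # see which slave carried the traffic
    ip -s link show eth0
    ip -s link show eth1
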
14:25 kshlm joined #gluster
14:28 virusuy joined #gluster
14:28 virusuy joined #gluster
14:32 badone joined #gluster
14:38 jvandewege joined #gluster
14:39 mbukatov joined #gluster
14:48 deniszh joined #gluster
14:49 quydo joined #gluster
14:51 fandi joined #gluster
14:53 fandi joined #gluster
14:54 dbruhn joined #gluster
14:55 neofob joined #gluster
14:57 rwheeler joined #gluster
15:02 morsik joined #gluster
15:05 fandi joined #gluster
15:07 jmarley joined #gluster
15:08 badone joined #gluster
15:13 lpabon joined #gluster
15:13 primusinterpares Greetings, I have a gfapi question.  If an app bundles its own gfapi and deps, is there a way for it to specify where to pull in xlator .so's like api.so other than at gfapi build time?  Like a relative path config file or env variable?
15:19 wushudoin joined #gluster
15:21 fandi joined #gluster
15:23 deniszh1 joined #gluster
15:29 sputnik13 joined #gluster
15:30 badone joined #gluster
15:31 jmarley joined #gluster
15:34 deniszh1 left #gluster
15:35 deniszh joined #gluster
15:41 klaxa|work joined #gluster
15:42 klaxa|work left #gluster
15:47 vimal joined #gluster
15:48 dgandhi greetings all, I'm using gluster as a back end volume to some samba shares, I am using the fuse driver to mount the glusterfs volumes, but was wondering if/when it would be preferable to use the built in NFS.
15:49 lmickh joined #gluster
15:49 dbruhn dgandhi, are the client windows/linux/mac?
15:50 dgandhi dbruhn: nodes are linux, one of which shares the gluster data with windows using samba
15:51 RameshN joined #gluster
15:51 dbruhn nodes as in gluster servers, or gluster clients?
15:52 dgandhi both, one node that runs samba, three nodes that have bricks
15:52 dgandhi all are part of the cluster
15:53 dbruhn so the one server that is a samba server is also a gluster server with bricks
15:53 dgandhi the samba server is a gluster server with no bricks
15:54 dbruhn so it's a gluster client
15:54 dbruhn the term node is ambiguous, making sure I understand what you are doing.
15:54 dgandhi they all have the same gluster config, the client server distinction seems arbitrary
15:56 dbruhn The reason for the distinction is due to how gluster works. Each "client" talks to ALL of the brick servers directly.
15:56 badone joined #gluster
15:58 dbruhn back to your original question, the advantages of samba/cifs over NFS are usually around the types of data being transmitted, and in your case the ease of not having to configure NFS on the windows servers.
15:58 dbruhn samba vs nfs v3 typically comes down to, NFS is better at raw throughput, samba is better with small files. (generalization abstracted outside of the use of gluster)
15:59 dbruhn that being said, you are creating a secondary network layer the way you are doing things now. So NFS in your architecture would be better, because you are directly attaching to the gluster system rather than going through your intermediary samba server.
15:59 dgandhi so you mean which of the nodes mount the volume, vs which serve bricks? OK, my question may be more clearly stated as: which of the built-in filesystem-like ways to access the volume data is better under which circumstances, and is it worth evaluating the difference if I already have a working system running fuse -- in either case I would be sharing over cifs, I'm only looking at how the samba server will access the volume.
16:00 jiffin joined #gluster
16:00 dbruhn If you moved samba back to the gluster brick servers and use ctdb for failover, you'll probably see an increase in speed, and remove the single point of failure
16:02 dbruhn if you are looking for better performance from there, you will want to look at this
16:02 dbruhn https://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
16:02 jiffin1 joined #gluster
16:04 dgandhi my samba server is a VM and has HA features, the failover scenario for CIFS seems less clean. In either case when mounting the volume on the gluster client is fuse/NFS likely to make a difference ?
16:05 dbruhn Well your original question was about performance, you have an extra network layer the way you do it vs the other way.
16:05 dbruhn If it's working for you, don't mess with it
16:06 jiffin joined #gluster
16:12 jobewan joined #gluster
16:16 nshaikh joined #gluster
16:20 bala joined #gluster
16:22 daMaestro joined #gluster
16:22 sauce joined #gluster
16:23 primusinterpares Answering my own question: It appears there is no way.  XLATORPATH is passed in solely at compile time.
16:23 primusinterpares https://github.com/gluster/glusterfs/blob/5b8de971a4b81bc2bd6de0ffc6386587226295c6/libglusterfs/src/xlator.c#L128
16:26 primusinterpares Sigh.  That's an automatic 12-factor app violation.  http://12factor.net/dependencies
16:30 _dist joined #gluster
16:31 dbruhn primusinterpares, those packages are developed by semiosis, he was asking if anyone wanted to help him work on the development of it the other day. I'm sure if you offered assistance he would be more than happy to accept your help to get it where it needs to be.
16:32 deniszh1 joined #gluster
16:33 semiosis primusinterpares: i believe if you set the LD_LIBRARY_PATH env var you can change the shared lib search paths
16:35 semiosis primusinterpares: http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
16:36 fandi joined #gluster
16:37 jiffin joined #gluster
16:39 glusterbot News from resolvedglusterbugs: [Bug 1184387] error: line 135: Unknown tag:     %filter_provides_in /usr/lib64/glusterfs/%{version}/ <https://bugzilla.redhat.com/show_bug.cgi?id=1184387>
16:43 jiffin joined #gluster
16:45 partner greetings from RH glusterfs + ceph meetup
16:46 partner plenty of people around
16:48 primusinterpares semiosis: thanks.  Unfortunately if dlopen() is called where "/" is the leading character, it won't follow ld.so path resolution.
16:50 primusinterpares (i.e. LD_LIBRARY_PATH isn't working, strace shows it's not being walked, and per "man dlopen" I get why...)
16:55 hagarth joined #gluster
16:57 primusinterpares Just looking over libglusterfs/src/xlator.c myself, I'd add something like xlator_set_type_with_xlator_path() that called
16:57 primusinterpares xlator_dynload_with_xlator_path() that libgfapi could call instead of
16:57 primusinterpares xlator_set_type(), and allow libgfapi to have functions developers
16:57 primusinterpares could call or default configs or environment variables it can read to
16:57 primusinterpares set up that path properly.
16:57 RameshN joined #gluster
16:57 primusinterpares Oops, sorry for the extra newlines there.
17:01 ndk joined #gluster
17:08 plarsen joined #gluster
17:08 booly-yam-6137_ joined #gluster
17:09 jiffin joined #gluster
17:09 ilde joined #gluster
17:09 glusterbot News from newglusterbugs: [Bug 1117509] Gluster peer detach does not cleanup peer records causing peer to get added back <https://bugzilla.redhat.com/show_bug.cgi?id=1117509>
17:15 msmith_ joined #gluster
17:17 jiffin joined #gluster
17:26 ricky-ticky2 joined #gluster
17:36 julim joined #gluster
17:39 glusterbot News from newglusterbugs: [Bug 1173909] glusterd crash after upgrade from 3.5.2 <https://bugzilla.redhat.com/show_bug.cgi?id=1173909>
17:44 edwardm61 joined #gluster
17:46 suman_d joined #gluster
17:50 SpeeR joined #gluster
17:53 PeterA joined #gluster
17:53 SpeeR good morning, I just noticed that one of my HA bricks is offline, however the distributed brick on the same server is still online, what is the least impacting way to bring this back online? killall glusterfsd ; killall -9 glusterfsd ; killall glusterd ; glusterd
17:53 SpeeR ?
17:55 SpeeR or should i just try to start it?
17:55 rwheeler_ joined #gluster
17:56 dusmant joined #gluster
17:56 rwheeler joined #gluster
17:56 bene joined #gluster
17:56 lpabon joined #gluster
18:02 theron joined #gluster
18:03 partner SpeeR: IMO the way is to start the (already) started volume with force to bring up all the necessary bricks/procs
18:03 SpeeR Thanks partner, I'll give that a try first
18:04 kanarip joined #gluster
18:05 partner SpeeR: quickly googling here's one set of instructions http://www.tecmint.com/perform-self-heal-and-re-balance-operations-in-gluster-file-system/
18:05 SpeeR haha Yeh, I was just reading through there!
18:06 JoeJulian SpeeR: gluster volume start $volume_with_brick_down force
18:06 SpeeR Thanks Joe
18:10 SpeeR hmm, so I did that, and gluster1:/glusterHA went offline
18:13 _Bryan_ joined #gluster
18:13 JoeJulian paste logs.
18:14 JoeJulian brb... have to reboot. chrome has an endless loop bug eating up my cpu.
18:20 SpeeR sorry we are still trying to clean up the vsphere connections
18:26 JoeJulian Back. So the log that would be useful is the client log for that mount that "went offline", when you can.
18:30 JoeJulian http://www.reddit.com/r/sysadmin/comments/2t85ya/anyone_using_glusterfs/ (in case anyone wants to participate)
18:32 kanarip hello there
18:32 _dist JoeJulian: can't remember if I asked yesterday. Are there any plans in 3.6.x to make heal for large files (like my VMs) not require a full read/replace of the entire file ? I'm not sure who told me I could expect that shortly, but I feel like I heard it somewhere a few months back
18:33 kanarip i have a lot of entries on my glusterfs clients that result in "not found" log messages
18:33 kanarip is there any way i can cache the fact that a file was not found (on the client) to avoid the round trip?
18:34 jiffin joined #gluster
18:37 kanarip it has to do with the load path i reckon
18:37 AntonZ joined #gluster
18:38 Teela joined #gluster
18:38 Teela Hello
18:38 glusterbot Teela: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:38 Teela does anyone know how to repush replication
18:38 Teela on node 2
18:39 jmills joined #gluster
18:39 Teela I'm having split brain issues and rather than waste time trying to fix it, due to having 260 gigs of files, I would like to wipe out the brick on node 2 and then have node1 repush to node 2
18:39 Teela please and thanks
18:40 Debloper joined #gluster
18:42 AntonZ Hello everyone, i have the following configuration of the cluster, please see the gist - https://gist.github.com/azavrin/537e932debc587236dfe
18:42 AntonZ each brick is LVM volume
18:43 AntonZ is is possible to expand LVM on both nodes and do not do anything with gluster itself ?
18:52 AntonZ anyone ?
18:53 dbruhn AntonZ, yep, gluster just reads the size of the disk from the file system it's on
18:54 Teela how about my question ? :P
18:55 AntonZ Great, it's seemingly easier to expand LVM than to keep adding bricks... is a volume rebalance required after expanding lvm on both nodes ?
18:57 elico joined #gluster
18:58 ralalala joined #gluster
19:03 Teela only if you have more than one volume i believe antonZ
19:03 Teela it will actually error out if you try and only have 1 volume
19:06 AntonZ i have only a single volume
19:06 AntonZ okay, going to pull the trigger on lvm now , will report back in a bit
19:07 AntonZ Thank you
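
A minimal sketch of the expansion dbruhn describes, assuming an XFS brick on logical volume /dev/vg0/lv_brick mounted at /bricks/brick1 in a volume "myvol" (all names illustrative); the same steps run on both replica nodes, and gluster simply reports the new size:

    # grow the LV backing the brick, then the filesystem on it
    lvextend -L +500G /dev/vg0/lv_brick
    xfs_growfs /bricks/brick1          # use resize2fs instead for ext4

    # confirm gluster sees the new capacity
    gluster volume status myvol detail
    df -h /bricks/brick1
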
19:14 dhgwilliam_ joined #gluster
19:14 dhgwilliam_ happy thursday folks
19:15 dhgwilliam_ i have checked out the basic troubleshooting guide and i'm at a bit of a loss here
19:15 dhgwilliam_ i am very hopeful that i'm just doing something stupid
19:15 dhgwilliam_ i have glusterfs-server 3.6.1 installed on CentOS 7 and the glusterd service appears to be up and running and by all accounts healthy
19:16 dhgwilliam_ but the glustercli can't connect
19:16 dhgwilliam_ i can telnet from node a to node b on port 24007 just fine
19:16 lkoranda joined #gluster
19:16 dhgwilliam_ but i get the Connection failed error on every `gluster` action
19:17 dhgwilliam_ i'm using puppet & the covermymeds/gluster module to deploy and everything worked fine in my local test environment but now that i've pushed to a vSphere cluster test env things aren't working the way I expect them to
19:18 dhgwilliam_ any troubleshooting tips or PEBKAC pointers would be much appreciated
19:18 dhgwilliam_ i'm a puppet "expert," not a gluster expert so i am not super clear on the available introspection tools (besides logs, obv)
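
A few checks that usually narrow down the "Connection failed. Please check if gluster daemon is operational" error on CentOS 7; a sketch using default paths, with <peer> as a stand-in for another node:

    systemctl status glusterd                  # is the management daemon running?
    ss -tlnp | grep 24007                      # is glusterd listening locally?
    getenforce                                 # SELinux enforcing? look for AVC denials
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    gluster --remote-host=<peer> peer status   # can the CLI reach a remote glusterd?
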
19:24 purpleidea dhgwilliam_: covermymeds? haven't heard of that one, but i do recommend https://github.com/purpleidea/puppet-gluster
19:25 purpleidea it's the most used module afaict, and it's been around a while, has more magic, etc...
19:26 dhgwilliam_ purpleidea: lol no disclosure statement ? :D
19:26 dhgwilliam_ there is a specific reason i didn't use this... let me think back
19:26 dhgwilliam_ oh yeah
19:26 dhgwilliam_ data in modules
19:26 dhgwilliam_ that's a no go
19:26 purpleidea dhgwilliam_: data in modules is optional
19:26 purpleidea but why is it bad? are you on puppet < 3.x ?
19:26 dhgwilliam_ orly. this is halfway decent news
19:27 dhgwilliam_ purpleidea: data in modules is 100% unsupported and unsupportable, commercially speaking
19:27 purpleidea dhgwilliam_: yeah disclosure, i'm the original author, but it should be obvious as it matches my username
19:27 dhgwilliam_ my understanding is that it breaks PE sometimes/always
19:27 dhgwilliam_ hehe
19:27 purpleidea dhgwilliam_: well a huge amount of the puppet community uses it, but it's an optional feature. puppetlabs has been quite terrible at listening to what the community wants afaict
19:27 dhgwilliam_ purpleidea: all things considered, i'm pro-DiM
19:28 dhgwilliam_ purpleidea: disclosure: I'm a PL employee
19:28 dhgwilliam_ :D
19:28 purpleidea dhgwilliam_: if there's a more elegant way to do what i'm trying to do, that's recommended by PL, that's fine, but there isn't AFAICT.
19:28 dhgwilliam_ yeah idk i'm not trying to wade into politics here
19:28 purpleidea dhgwilliam_: disclosure, i work for RH, but i started the project long before they even bought gluster or i worked there
19:28 purpleidea dhgwilliam_: but it's definitely the most prominent capable module i've seen.
19:29 purpleidea in any case, how can i help?
19:29 dhgwilliam__ joined #gluster
19:29 dhgwilliam__ ugh sorry the network connect here is garbage
19:29 dhgwilliam__ i may have missed what you said
19:29 purpleidea (i've got to run to a meeting in 15, but i'll be back afterwards)
19:30 dhgwilliam__ oh ok. i'll see if i can rework around your module...
19:30 dhgwilliam__ ping me when you get back
19:30 purpleidea dhgwilliam_: http://fpaste.org/173209/21955023/
19:30 dhgwilliam__ (thanks in advance)
19:30 elico joined #gluster
19:30 dhgwilliam__ welp can't get to that. web filtering is obnoxiously aggressive here
19:30 dhgwilliam__ sigh
19:30 purpleidea dhgwilliam_: there are fairly elaborate docs, examples, blog posts, screencasts, etc...
19:30 purpleidea dhgwilliam_: lol, you said you work for puppetlabs and they filter out fpaste.org ?
19:31 dhgwilliam__ no i'm at a customer
19:31 JoeJulian _dist: Already doesn't require a replace of the entire file. Differential self-heal has been there since 2.0.
19:31 dhgwilliam__ i'm in services
19:31 purpleidea dhgwilliam_: ah
19:31 dhgwilliam__ purpleidea: so basically i'm not super concerned that the puppet module is at fault, although i'm willing to consider the idea
19:32 dhgwilliam__ but basically all the troubleshooting tips i've seen deal with a stage slightly past where i'm at
19:32 purpleidea dhgwilliam_: well, have a look at the module, docs, etc... i post a lot of related puppet things (including about that module) here: https://ttboj.wordpress.com/ if you get stuck, ping me with your questions
19:32 dhgwilliam__ like i can telnet to 24007 from local and remote hosts but none of the gluster commands work afaict
19:32 JoeJulian kanarip: a "missing" cache has been proposed many times, and even sort-of implemented in a python translator, but has never made it into the core code.
19:33 dhgwilliam__ can't peer probe, can't volume list. they all fail with the same error
19:33 dhgwilliam__ [root@d2p1tlm-glstr-datastore-a ~]# gluster volume list Connection failed. Please check if gluster daemon is operational.
19:33 Teela Anyone know how to repush replication from the first node to the second node. I want to blow away node 2 and then have it repush replication.. Not sure how to do that?
19:33 purpleidea dhgwilliam__: my recommendation is to try my module. if you're good at puppet, it should be trivial to get going. try using gluster::simple. if you still get errors, ask again.
19:33 dhgwilliam__ purpleidea: alright, i'll start there.
19:33 dhgwilliam__ thanks again.
19:33 purpleidea yw
19:33 purpleidea bbl
19:34 JoeJulian Teela: Sure, just kill glusterfsd for that brick on node2, rm -rf the brick root, "gluster volume start $vol force" on node2 to start the brick server again then "gluster volume heal $vol full"
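
A sketch of that sequence as commands, with $vol and the brick path as placeholders; this variant keeps the brick directory itself (so its volume-id xattr survives) and only wipes its contents:

    gluster volume status $vol               # note the PID of node2's brick process
    kill <brick-pid>                         # stop only that brick, not glusterd

    # on node2: wipe the stale contents, including the .glusterfs directory
    rm -rf /path/to/brick/* /path/to/brick/.glusterfs

    gluster volume start $vol force          # bring the emptied brick back online
    gluster volume heal $vol full            # trigger a full self-heal from node1
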
19:38 JoeJulian dhgwilliam__: selinux?
19:38 Teela Hey joe Julian
19:39 Teela thanks I will give that a test on mt test evnvironment
19:39 Teela do you know if this approach will solve the split brain
19:44 dhgwilliam__ JoeJulian: hmmmmmmmmmmmmmmmmmm idk let me check
19:46 dhgwilliam__ ;alsfkdj;asihfj;asldkjfa;sdhf;asdoijf;'aledjfn;askdfj;aslekdfj;adlkfjas;dlifjas;dlifja;osdihjfas;dlifhja;ldifja
19:46 dhgwilliam__ duh
19:46 dhgwilliam__ of COURSE it was SELinux
19:46 dhgwilliam__ f;alsdkfja;lsdhf;asldifjads;ljfas
19:46 dhgwilliam__ ok sorry for flooding
19:47 Teela @joejulian - thanks that worked beautifully :) So my next question is. If i'm seeing split brain..will this approach clear that out? As im guessing this is similar to most replication technologies? :)
19:49 Gill joined #gluster
19:50 rafi joined #gluster
19:53 jackdpeterson joined #gluster
19:54 jackdpeterson @purpleidea -- just got this when adding a new volume: [2015-01-22 19:52:34.784336] E [glusterd-volume-ops.c:1134:glusterd_op_stage_start_volume] 0-management: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /mnt/storage5l/inventory1. Reason : No data available [2015-01-22 19:52:34.784363] E [glusterd-syncop.c:1151:gd_stage_op_phase] 0-management: Staging of operation 'Volume Start' failed on localhost : Failed to get e
19:55 jackdpeterson After puppet re-ran a bunch of times it's now finally showing that it's finishing with no further things to execute; however, when attempting to start the volume it fails with the above error
19:58 jackdpeterson performing volume start force got 'er up ... it just *feels wrong* though
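
A hedged sketch of how to inspect the xattr named in that error; the brick path is taken from the paste above, and <volname> is a placeholder:

    # the brick root should carry the volume's id as an extended attribute
    getfattr -n trusted.glusterfs.volume-id -e hex /mnt/storage5l/inventory1

    # compare against what glusterd records for the volume
    gluster volume info <volname> | grep "Volume ID"
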
19:59 theron joined #gluster
20:01 B21956 joined #gluster
20:05 dbruhn joined #gluster
20:06 Teela does anyone know during a full volume heal, if the A node is still getting files will they get replicated after the heal is done?
20:07 cfeller so I just noticed that 3.6.2 is in http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/ but I didn't see an announcement - I presume that is 3.6.2 final?
20:07 Teela or during the heal
20:14 partner Teela: on replica the files are written in parallel to both bricks so new ones get in right away to both, it won't write them to only one of them
20:17 sauce joined #gluster
20:21 Teela @partner I understand that. I'm asking because i have split brain issues. So im gonna kill node 2 and re-replicate by volume heal. I just wanted to know if new files came in will it do it after or during the heal from A to B
20:22 partner during, its the client that writes them there
20:27 Teela sweeet
20:27 Teela thanks so much for you help
20:27 Teela also would you happen to know if this approach would generally remedy split brain issues
20:30 partner if there's a lot of them and they are all on one brick i'd guess its the best approach.
20:31 Teela yes 260gigs
20:31 Teela lots
20:31 Teela sweet thanks
20:31 partner but how many are actually on splitbrain?
20:31 Teela there is just one volume
20:31 Teela with a lot of split brain errors in the logs
20:32 Teela and i have like thousands and thousands of files
20:32 Teela so my thinking is some files are missed or something
20:32 partner and you know there was something wrong with the server that hosts the brick that you are now wiping?
20:33 Teela there doesnt appear to be anything wrong
20:33 partner something did cause the split-brain..
20:33 Teela but since i cant pinpoint the problem in the brick
20:33 Teela yes
20:33 Teela i figure this would be the best approach
20:33 partner lets think this again
20:34 Teela im no expert on gluster which is im here
20:34 partner if the gluster says there is a split-brain it means it finds the two copies of a file to be different
20:35 partner (assuming replica 2 volume here)
20:35 Teela yes
20:35 Teela agreed
20:35 partner if you decide to wipe either of the bricks holding the copies how would you know which brick holds the healthy files?
20:36 Teela because
20:36 Teela we have the traffic only going to node A
20:36 Teela so node A should have the correct data
20:37 partner so the B isn't even up?
20:37 Teela B is up
20:37 partner and the B bricks are also up?
20:37 Teela yes
20:37 partner or brick
20:37 partner hmm i'm confused. how do you know the traffic goes only to A ?
20:38 Teela well its a file server
20:38 Teela so people only are allowed to send files to A
20:38 Teela and A just replicated to B
20:38 Teela behind the scenes
20:38 partner now i'm even more confused
20:38 partner geo-replication?
20:39 partner its the clients responsibility to write the files to both bricks on both servers
20:39 JoeJulian @mount server
20:39 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
20:40 Teela Volume Name: sftp Type: Replicate Volume ID: eb8f08d9-1fd0-441d-9e88-7568d0fbdeb3 Status: Started Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: kam1odbus06a:/mnt/lv_gluster/brick Brick2: kam1odbus06b:/mnt/lv_gluster/brick
20:40 partner ok, looks sane to me, a 2 brick replica
20:40 Teela so your saying
20:41 Teela B is writing to A?
20:41 JoeJulian Teela: You're not writing to the brick directly, are you?
20:41 partner are you blocking the clients accessing the B or is this just some confusion on terms and how glusterfs works?
20:41 Teela no
20:41 Teela we have a directory mounted
20:41 Mattlantis joined #gluster
20:41 partner client writes to A *and* B if you have mounted that volume to your client
20:41 Teela glusterfs
20:41 JoeJulian The client is wherever you mount the volume. That client is writing to both servers.
20:41 Teela which we write to
20:41 Teela which in turn writes to the bricks
20:42 partner umm yes, the mountpoint presents both bricks ie. A and B while its not shown to the user exactly, its the internals of how replication works
20:43 Mattlantis hey all, quick question causing me confusion: the geo-rep logs say something like "60 crawls, 0 turns" every few minutes, can anyone tell me exactly what this is communicating?
20:43 partner Teela: don't let it confuse you might have mounted it only from A
20:44 Teela sorry i apologize if i not making sense...Im new to glusterfs and pretty much learned this all in a day
20:44 Teela I have it mount on both
20:44 Teela pretty sure
20:44 partner so if your mount points to a-node it only means you have told glusterfs client to go there to fetch info about the volume and its details such as where the bricks are
20:45 Teela correct
20:45 partner from there on the client knows to which bricks its supposed to write the files
20:45 Teela i see
20:45 Teela so how do i determine which is my client
20:45 Teela if im not 100% sure
20:45 partner your client is the one that has the gluster volume mounted, the one that is actually reading and writing the files
20:46 partner kam1odbus06a and b are your gluster servers which host the bricks that make up the volume you have mounted to your client
20:47 Teela okay
20:48 Teela the split brain occurs between a & b
20:48 Teela so wouldnt clearing out the bricks on b
20:48 partner still no
20:48 Teela and healing the volume
20:48 partner as said couple of times already the client writes to *both* simultaneously
20:49 Teela ohhhhh
20:49 Teela so i would need to clear both out
20:49 Teela for this to work
20:49 partner try what the following command says: gluster volume heal $vol info split-brain
20:49 partner it should tell what and where are the files in split-brain
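
For reference, the command partner mentions plus one way to compare the copies directly on the bricks (volume name and brick path taken from the volume info pasted earlier; <path> is a placeholder):

    gluster volume heal sftp info split-brain

    # for each path listed, inspect both copies on the bricks themselves;
    # the trusted.afr.* changelog xattrs show which side accuses which
    md5sum /mnt/lv_gluster/brick/<path>                 # run on node A and on node B
    getfattr -d -m . -e hex /mnt/lv_gluster/brick/<path>
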
20:50 Teela kk
20:50 Teela ill be right back
20:50 Teela just gonna grab some grub
20:50 Teela ill put that in and write back
20:50 Teela thanks so much for teaching me
20:50 Teela all this
20:50 partner np
20:52 Gill joined #gluster
20:54 MacWinner joined #gluster
20:56 ttyS1 joined #gluster
21:01 JoeJulian Mattlantis: Sorry, no real idea what that means. It's supposed to crawl the marker tree every N minutes (I think it's 10 by default), so I'm guessing that's crawls. turns, though... no idea.
21:04 Mattlantis thanks Joe, sorry for the random interjection, discussing the logs trying to debug an issue, and no one on my team can find any docs saying what that means
21:04 Mattlantis we assumed it was crawling directories though, so that's good to know
21:07 partner Mattlantis: https://github.com/gluster/glusterfs/blob/c399cec72b9985f120a1495e93e1a380911547d9/geo-replication/syncdaemon/master.py#L416
21:11 Mattlantis partner: perfect, thanks!
21:11 partner and also #L945
21:12 Mattlantis if I'm reading this right, is it safe to say that as that log goes, turns = changes detected to be transferred, or is it more subtle than that?
21:15 ttyS1 how does gluster performed compared to NFS or a local disk ?
21:17 partner ttyS1: small file performance has been reported often being better with NFS. local disk is of course fastest as there are no additional layers in between (ie. software) but without gluster (or similar) you can't move the data out from the local disk "online", nor expand the capacity on the fly to use the next server's disks.
21:18 partner but pretty hard question to answer just like that, depends on so many things
21:19 partner if in doubt, i say try it out. at the simplest its pretty much 1) apt-get install glusterfs-server, 2) gluster volume create myvol myhostname:/some/disk, 3) gluster volume start myvol, 4) mount it somewhere and try it out
21:19 partner replace with your favourite package manager and rest should more or less apply..
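
Roughly the same four steps with concrete syntax, as a sketch; "myvol", "myhostname" and the paths are illustrative, and Debian/Ubuntu packaging is assumed:

    apt-get install glusterfs-server
    gluster volume create myvol myhostname:/some/disk   # add 'force' if the brick sits on the root filesystem
    gluster volume start myvol
    mount -t glusterfs myhostname:/myvol /mnt/myvol
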
21:21 _dist JoeJulian: (I've been gone for some time) this is not what I see in practice, I see gluster reading the entire file in 3.5.2-4 which is why it takes 21+ hours for a single replica brick to heal
21:22 ttyS1 partner: thankyou. I want to use this for a couple of image resizing servers to write to a central location and serve the image on demand.
21:23 ttyS1 partner: the average file size is  20KB
21:24 _dist JoeJulian: perhaps I've got something set wrong? 1.47TB over 10GbE (3 way replica) takes around 20-25 hours to heal regardless of whether the brick was down for 30 seconds or 2 days. Unfortunately in the past I've been unable to find anyone using a gluster volume to host multiple VMs hopefully that's not the case anymore
21:26 _dist does anyone else here use gluster for hosting live VM images?
21:28 Mattlantis ttyS1: some advice from similar situations, if you can't get good performance estimates ahead of time and can get away with it, start with the native gluster driver as it gives you the best availability and if you run into performance problems, then consider the NFS option
21:29 _dist joined #gluster
21:30 Mattlantis _dist: I've got 20 or so vm's in an openstack environment running on gluster
21:31 _dist Mattlantis: what kind of gluster volume do you have?
21:31 Mattlantis we had some issues wth 3.4.1, upgraded to 3.4.5 and haven't touched it since then
21:31 Mattlantis it's a 4 TB replicated volume across two machines
21:31 _dist Mattlantis: is the false healing issue fixed in 3.4.5? (In 3.4.x I had an issue where gluster volume heal info always shows everything healing)
21:32 dbruhn _dist, I have been testing multiple gluster systems under a xenserver pool for the last couple weeks
21:32 dbruhn I am using NFS+CTDB and it's working extremely well
21:32 dbruhn I am running 3.6.1 right now after my last rebuild this week.
21:32 _dist Mattlantis: , dbruhn:, what kind of heal times do you experience when you take a brick down/up ?
21:33 Mattlantis I don't know there, I know I've not noticed that issue on any of my 3.4.5 volumes
21:34 Mattlantis afraid I've never timed it, wouldn't want to guess
21:34 _dist I've been running for about a year using qemu libgfapi. 3.4.x worked fine, but you could never tell what was truly healed. Since 3.5.x the heal info works, the new heal status still spits constant heal notifications for files that aren't healing though
21:35 _dist Mattlantis: On the occasions where we've had to reboot a host for a major upgrade, etc., we have found seconds or days the heal time is about the same. Typically the bottleneck is cpu
21:36 _dist (downtime of seconds or days, I meant to say)
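
For context, the libgfapi access _dist mentions addresses images with a gluster:// URL instead of a FUSE path; a sketch with illustrative server/volume/image names, assuming qemu was built with gluster support:

    # create and inspect a VM image directly over libgfapi
    qemu-img create -f qcow2 gluster://server1/vmvol/images/guest1.qcow2 20G
    qemu-img info gluster://server1/vmvol/images/guest1.qcow2
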
21:36 brownoxford joined #gluster
21:38 brownoxford Hi all, I've just found gluster by way of the PAPER mag article on the Kardashian photos (https://medium.com/message/how-paper-magazines-web-engineers-scaled-kim-kardashians-back-end-sfw-6367f8d37688). In the network diagram, it looks like they set up a single server contributing volumes to the glusterfs, but had four additional servers that just mounted it. Is that possible, or do all participants need to contribute volumes?
21:39 _dist brownoxford: I read that article earlier today and noticed my favourite clustered filesystem was mentioned. I suspected it'd draw some attention here
21:40 brownoxford I'm currently trying to solve a fault tolerance issue for a client with a web app that writes user generated content to the local filesystem. Gluster seems like a great solution to get around that particular issue.
21:40 brownoxford (without modifying the app)
21:41 brownoxford The setup in the article seems like it would be pretty close to what I'm looking to do, but with n web servers all contributing a volume to the fs. Just trying to understand if I'm reading this diagram wrong
21:42 _dist brownoxford: it might be, it sounds like you should read about gluster replicas. when I saw that diagram I did think it was a bit simple
21:42 lmickh joined #gluster
21:43 _dist brownoxford: With gluster, every replica you add does slow down your max write speed, but there is definitely a read & fault tolerance benefit. So depending on the connections between your "bricks" that make up the volume the sacrifice on write migth not be worth it to you
21:43 _dist might*
21:45 purpleidea dhgwilliam__: puppet-gluster has selinux stuff build in so you don't his these problems
21:45 brownoxford Our app is almost entirely read, so that shouldn't be too much of an issue. Does that increasing slowness with more replicas hold true if you only store two copies?
21:45 dhgwilliam__ purpleidea: love it
21:45 purpleidea jackdpeterson: i didn't understand the error message... can you rephrase please?
21:45 _dist brownoxford: sorry when I say "does slow down" I should say "could slow down". As you might not reach your bottleneck, it's almost always network.
21:45 purpleidea dhgwilliam__: is that what you used to get it going?
21:46 rwheeler joined #gluster
21:46 brownoxford got it. These would likely live in AWS with EBS volumes being contributed. Anyone here with experience regarding network latency there?
21:46 jackdpeterson @purpleidea -- it looks like there's a file that is not available or an xattr during the creation of the new volume in puppet-gluster. it's neither consistent, nor too repeatable. it happens occasionally for me. The solution to the issue appears to force-start the volume and then perform a full heal
21:46 jackdpeterson xattr [that's missing]
21:46 purpleidea jackdpeterson: hm. i think i'll need a lot more info, and your configs to know what's up
21:47 jackdpeterson sure
21:47 purpleidea but i'm happy to help if i have enough info
21:47 _dist brownoxford: Gluster appears (I've tested this) to operate in a full sync write mode, unless you do geo-replicate (different type of volume). So the added latency over ethernet will be your main concern, we run about 40 VMs on a 3-way replica volume that are a heavy combination of read/write loads
21:48 dbruhn _dist, just did a test and took a gluster server down and it took less than a min for it to sync back up.
21:48 _dist brownoxford: additionally you should read about the difference between using the fuse client or apis directly. FUSE mounts are not optimal for lots and lots of small files
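
The two access methods being compared, as mount commands; a sketch with illustrative names, and note that the built-in Gluster NFS server speaks NFSv3 only:

    # native FUSE client: talks to every brick server directly
    mount -t glusterfs server1:/webvol /mnt/webvol

    # built-in NFS server: the client talks only to the server it mounted from
    mount -t nfs -o vers=3,mountproto=tcp server1:/webvol /mnt/webvol-nfs
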
21:49 _dist dbruhn: Running VMs? How did you initiate the heal? What version of GFS are you on?
21:49 dhgwilliam__ purpleidea: yeah no problems after i `setenforce 0`
21:50 dhgwilliam__ although when i have a minute to breathe i'm perfectly happy to switch to your module
21:50 dhgwilliam__ it seems like it wouldn't be that much trouble in any case.
21:50 dhgwilliam__ i only didn't pick it because of DiM (this customer uses PE)
21:51 dbruhn _dist, running vm that is running a script with a  while loop dumping a count into a file. Actively dumping data into the vm while taking the server offline and online.
21:51 jmarley joined #gluster
21:51 dbruhn the heal was taken care of by a self heal
21:54 _dist dbruhn: Thanks for running that test, unfortunately your environment is so different from mine. But perhaps 3.6 is much better at healing (not that I read about that in the release notes). Does it still give that message about "probably healing" in the heal info screen?
21:54 dbruhn It just showed up in the needs healing list and then cleared
21:55 dbruhn Like I said, this has just been testing the last couple weeks so it's probably too fresh to really give you what you need
21:55 _dist dbruhn: I should probably test 3.6, our primary differences are that I'm using libgfapi (you're using nfs), qemu vs xen, 3.5 vs 3.6, and I assume 1 vm? vs 40ish. I'm here quite a bit though so I'd love to hear about your progress
21:56 DJclean joined #gluster
21:56 _dist all in all, 20 hours for an automated heal that gives me no hassles is still great
21:56 dbruhn yeah, I actually have 10 vm's running right now and have just been testing failover. I am running one storage pool with SAS and one with SATA.
21:57 dbruhn I need to get it under a real world load before too long
21:57 dbruhn I was running gluster at my last job, but I wasn't running it under virtualization, I was using it for storage directly.
21:58 _dist we jumped in after about 1 month of testing last feb, things have been great ever since. We use gluster for  distributing our SMB share and for our VMs
21:59 dbruhn I have been running about 1/2PB of gluster based storage for a couple years now, other than some split-brain gremlins, and fighting with a couple RDMA issues, it's gotten better and better.
21:59 dbruhn and served me well
22:03 _dist we've only had one issue, when we were foolish enough to add a client that didn't have all of the dns entries for the bricks
22:04 dbruhn ugh, that's messy. I've been lucky enough to have a fairly limited set of "clients" which were also the brick servers up till now. So I have always been able to use host files, and not have to rely on DNS.
22:04 _dist we use host for our brick DNS as well
22:04 _dist host files I mean
22:04 dbruhn I am using NFS now with CTDB though, so I need to use RRDNS to actually spread the load around
22:05 _dist well, I gotta run. Let's chat again sometime :)
22:05 dbruhn of course, good luck ttyl
22:06 jbrooks joined #gluster
22:06 purpleidea dhgwilliam__: send me a doc patch clarifying that DiM is optional )
22:06 purpleidea dhgwilliam__: my module works *with* selinux enabled... you really shouldn't disable it. it's a bad idea!
22:16 JoeJulian purpleidea: Do you apply your own selinux changes? This should be fixed in selinux-policy-targeted.
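
A sketch of diagnosing SELinux instead of disabling it, using the standard audit tools; the glusterd_t domain name is an assumption about the targeted policy:

    getenforce                         # confirm SELinux is actually in the path
    ausearch -m avc -ts recent         # list recent AVC denials
    # make only the gluster management domain permissive rather than setenforce 0
    semanage permissive -a glusterd_t
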
22:26 Teela Brick kam1oddev01:/mnt/lv_gluster/brick Number of entries: 1023
22:26 Teela thats how many split brain entries are showing
22:26 Teela discrepancies
22:26 purpleidea JoeJulian: good question. i forget what happened. i haven't looked into that in a while, but it should be okay.
22:27 MugginsM joined #gluster
22:30 plarsen joined #gluster
22:38 calum_ joined #gluster
22:48 JoeJulian Teela: Check the timestamps. That's a log.
22:52 partner any foreseen issues if i mix up 3.4.5 and 3.6.2 ?
22:53 semiosis i wouldnt expect those to interoperate well
22:53 JoeJulian I've never mixed versions for more than a few minutes.
22:53 semiosis but they might
22:53 partner or rather anything known? mainly peering and not running mixed volumes but to keep the option open to move bricks/volumes around..
22:53 semiosis usual advice is to upgrade all servers before any clients, but that's no guarantee
22:54 partner hmm
22:55 partner ok, a followup then, what's the recommended way of moving volumes from a peer to another?
22:55 partner peer as in trusted peer and another ie. separate clusters
22:55 partner i bet the answer is rsync or something similar :o
22:55 booly-yam-6137__ joined #gluster
22:56 JoeJulian rsync with --inplace, or just pull the drives and move them.
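
A sketch of the rsync variant, copying through client mounts of the old and new volumes (hostnames, volume names and mount points are illustrative); --inplace avoids rsync's rename-into-place behaviour on the destination volume:

    mount -t glusterfs oldcluster:/oldvol /mnt/oldvol
    mount -t glusterfs newcluster:/newvol /mnt/newvol
    rsync -aHAX --inplace --progress /mnt/oldvol/ /mnt/newvol/
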
22:56 Teela @parnter
22:56 glusterbot Teela: I do not know about 'parnter', but I do know about these similar topics: 'party', 'paste'
22:56 partner hmph
22:56 partner need to think this a bit then
22:57 Gill joined #gluster
22:58 Teela so both node A and B are also clients
22:58 Teela the reason why
22:59 Teela is we have ace load balancer round robin'ing them
22:59 partner there must be some low level means to go and adjust some peers files and what not.. though that probably requires downtime on all the volumes.. can't do that either just like that..
22:59 Teela so if it goes to either its basically running active active for redundnacy
22:59 JoeJulian pretty normal
23:01 partner i guess i'll downgrade, i don't want to have separate clusters that cannot be expanded or joined with the existing ones, kind of knew this already but thought if there is something i wouldn't yet know
23:02 Teela 2015-01-22 22:53:32
23:02 Teela time stamp also shows split brain is every 10 mins
23:03 Teela 2015-01-22 22:43:33
23:03 Teela 2015-01-22 22:33:36
23:03 Teela and so on
23:03 partner two of the servers are out of service right now due to things, i thought to make a fresh gluster on them, hoped to join the existing peer, but as the servers do take part in each operation on the cluster (such as rebalance) i don't want to take chances there
23:03 partner Teela: or rather its attempting to heal every 10 min and fails?
23:04 partner does the log show they are same files or?
23:10 ttyS1 left #gluster
23:27 brownoxford joined #gluster
23:49 Teela similar files
23:49 Teela different directories
23:49 Teela mail directories
23:49 Teela it's just split brain every 10 mins on those files
23:49 Teela 1023 different files
23:56 wkf joined #gluster
