
IRC log for #gluster, 2014-03-24


All times shown according to UTC.

Time Nick Message
00:01 kam270_ joined #gluster
00:03 sputnik1_ joined #gluster
00:09 dkorzhevin joined #gluster
00:10 kam270_ joined #gluster
00:11 mattapperson joined #gluster
00:19 kam270_ joined #gluster
00:27 dkorzhevin joined #gluster
00:28 kam270_ joined #gluster
00:37 robo joined #gluster
00:45 sputnik13 joined #gluster
00:46 jag3773 joined #gluster
00:50 sputnik13 joined #gluster
00:54 dkorzhevin joined #gluster
01:12 bala joined #gluster
01:15 vpshastry joined #gluster
01:21 jag3773 joined #gluster
01:42 tokik joined #gluster
01:47 ninkotech__ joined #gluster
01:47 ninkotech joined #gluster
01:48 elico joined #gluster
02:03 andreask joined #gluster
02:06 nightwalk joined #gluster
02:08 harish joined #gluster
02:09 dkorzhevin joined #gluster
02:12 sputnik13 joined #gluster
02:34 mattapperson joined #gluster
02:39 raghug joined #gluster
02:45 shubhendu joined #gluster
02:47 sputnik13 joined #gluster
02:50 gdubreui joined #gluster
02:59 bharata-rao joined #gluster
03:04 aravindavk joined #gluster
03:14 cjanbanan joined #gluster
03:15 mattapperson joined #gluster
03:39 itisravi joined #gluster
03:41 spandit joined #gluster
03:46 sahina joined #gluster
04:00 nightwalk joined #gluster
04:04 ndarshan joined #gluster
04:10 ravindran joined #gluster
04:13 sks joined #gluster
04:17 kdhananjay joined #gluster
04:27 RameshN joined #gluster
04:29 shylesh joined #gluster
04:37 deepakcs joined #gluster
04:40 jordan_ joined #gluster
04:40 davinder joined #gluster
04:40 ravindran joined #gluster
04:44 vpshastry2 joined #gluster
04:47 prasanth_ joined #gluster
04:48 bala joined #gluster
04:49 ppai joined #gluster
04:51 sputnik13 joined #gluster
04:57 bala joined #gluster
05:12 coredump joined #gluster
05:17 shylesh joined #gluster
05:19 vkoppad joined #gluster
05:20 cjanbanan joined #gluster
05:23 lalatenduM joined #gluster
05:31 hchiramm_ joined #gluster
05:33 hagarth joined #gluster
05:45 atinm joined #gluster
05:48 mattapperson joined #gluster
05:54 raghu joined #gluster
05:56 rahulcs joined #gluster
05:58 nshaikh joined #gluster
06:00 mohankumar joined #gluster
06:06 benjamin_____ joined #gluster
06:20 Philambdo joined #gluster
06:20 ricky-ti1 joined #gluster
06:22 nightwalk joined #gluster
06:25 vimal joined #gluster
06:28 psharma joined #gluster
06:39 shubhendu joined #gluster
06:39 ravindran joined #gluster
06:40 ravindran left #gluster
06:54 sputnik13 joined #gluster
07:00 cjanbanan joined #gluster
07:07 RameshN joined #gluster
07:09 sks joined #gluster
07:19 Rydekull joined #gluster
07:20 Copez joined #gluster
07:20 rshade98 joined #gluster
07:22 tshefi joined #gluster
07:23 harish joined #gluster
07:24 rgustafs joined #gluster
07:24 brosner joined #gluster
07:27 ekuric joined #gluster
07:30 kanagaraj joined #gluster
07:31 jtux joined #gluster
07:50 slayer192 joined #gluster
07:53 rwheeler joined #gluster
07:55 ngoswami joined #gluster
08:06 Pavid7 joined #gluster
08:08 ekuric joined #gluster
08:15 eseyman joined #gluster
08:18 DV_ joined #gluster
08:21 ctria joined #gluster
08:25 keytab joined #gluster
08:27 fsimonce joined #gluster
08:28 cjanbanan joined #gluster
08:29 asku joined #gluster
08:40 cjanbanan joined #gluster
08:46 haomaiwa_ joined #gluster
08:47 hybrid512 joined #gluster
08:50 hybrid512 joined #gluster
08:59 shylesh joined #gluster
09:02 X3NQ joined #gluster
09:08 msciciel_ joined #gluster
09:12 nightwalk joined #gluster
09:14 lalatenduM joined #gluster
09:20 liquidat joined #gluster
09:24 tjikkun_work joined #gluster
09:25 hagarth joined #gluster
09:26 monotek joined #gluster
09:26 gujo joined #gluster
09:33 ppai joined #gluster
09:40 Norky joined #gluster
09:44 nightwalk joined #gluster
09:53 jbustos joined #gluster
09:54 mattiasg joined #gluster
09:58 mattiasg ls
09:59 ekuric joined #gluster
10:00 monotek if somebody is interested in samba's vfs gluster plugin on ubuntu trusty... i made a ppa:  https://launchpad.net/~monotek/+archive/samba-vfs-glusterfs
10:00 glusterbot Title: samba-vfs-glusterfs : André Bauer (at launchpad.net)
10:02 HeisSpiter left #gluster
10:04 Pavid7 joined #gluster
10:04 Rydekull joined #gluster
10:09 lalatenduM monotek, cool, I think you should send a mail gluster-users about it
10:12 liquidat Hey hey, I have a self-healing problem: /var/log/glusterfs/glustershd.log has many entries saying "Skipping entry self-heal because of gfid absence" - any idea what I could do now?
10:13 sks joined #gluster
10:13 monotek lalatenduM, done :-)
10:14 lalatenduM monotek, awesome , thanks :)
10:14 shubhendu joined #gluster
10:20 sahina joined #gluster
10:20 kanagaraj joined #gluster
10:22 ndarshan joined #gluster
10:31 lijiejun joined #gluster
10:42 gujo left #gluster
10:44 sahina joined #gluster
10:55 mattapperson joined #gluster
10:56 kanagaraj joined #gluster
10:58 ndarshan joined #gluster
11:04 davinder joined #gluster
11:04 sahina joined #gluster
11:08 shubhendu joined #gluster
11:14 hagarth joined #gluster
11:21 kanagaraj_ joined #gluster
11:26 edward1 joined #gluster
11:29 tokik joined #gluster
11:30 lijiejun joined #gluster
11:32 kanagaraj joined #gluster
11:35 vkoppad joined #gluster
11:45 social ndevos: do you have a minute? I finally got time to return to this http://review.gluster.org/#/c/7223/ and I think that, as the copy frame is created, it's OK to dict_ref there and not unref, since the whole frame will get deleted in dht_dir_attr_heal_done by DHT_STACK_DESTROY (sync_frame); is that safe to assume?
11:45 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:48 calum_ joined #gluster
11:48 DV joined #gluster
11:48 raghug joined #gluster
11:51 lijiejun joined #gluster
11:52 ppai joined #gluster
11:55 RameshN joined #gluster
11:55 sprachgenerator joined #gluster
12:01 askb joined #gluster
12:01 glusterbot New news from newglusterbugs: [Bug 1066778] Make AFR changelog attributes persistent and independent of brick position <https://bugzilla.redhat.com/show_bug.cgi?id=1066778>
12:08 mattappe_ joined #gluster
12:08 itisravi joined #gluster
12:12 pdrakeweb joined #gluster
12:14 kam270_ joined #gluster
12:21 kdhananjay joined #gluster
12:21 mattappe_ joined #gluster
12:23 kam270_ joined #gluster
12:24 tokik_ joined #gluster
12:24 ctria joined #gluster
12:27 saurabh joined #gluster
12:29 B21956 joined #gluster
12:31 shubhendu joined #gluster
12:32 kam270_ joined #gluster
12:36 FarbrorLeon joined #gluster
12:36 tdasilva joined #gluster
12:39 nightwalk joined #gluster
12:41 kam270_ joined #gluster
12:41 DV joined #gluster
12:45 Arrfab I see that "gluster bd create" exists, but I can't really find docs about it. is that related to http://raobharata.wordpress.com/2013/11/27/glusterfs-block-device-translator/ ?
12:47 nshaikh joined #gluster
12:50 kam270_ joined #gluster
12:55 Pavid7 joined #gluster
12:56 mattappe_ joined #gluster
12:59 kam270_ joined #gluster
13:02 ninkotech__ joined #gluster
13:02 ninkotech joined #gluster
13:03 benjamin_____ joined #gluster
13:07 sroy joined #gluster
13:09 kam270_ joined #gluster
13:12 jag3773 joined #gluster
13:15 ctria joined #gluster
13:19 sroy joined #gluster
13:19 davinder joined #gluster
13:19 japuzzo joined #gluster
13:23 bennyturns joined #gluster
13:23 NuxRo joined #gluster
13:24 diegows joined #gluster
13:25 pdrakeweb joined #gluster
13:26 RameshN joined #gluster
13:27 kam270_ joined #gluster
13:27 prasanth_ joined #gluster
13:29 jtux joined #gluster
13:29 robo joined #gluster
13:29 ninkotech joined #gluster
13:30 rfortier joined #gluster
13:33 prasanth__ joined #gluster
13:34 wgao joined #gluster
13:35 ninkotech_ joined #gluster
13:35 chirino joined #gluster
13:35 rfortier1 joined #gluster
13:36 kam270_ joined #gluster
13:36 gmcwhistler joined #gluster
13:36 saravanakumar1 joined #gluster
13:36 saravanakumar1 Hello everyone
13:37 raghug joined #gluster
13:37 saravanakumar1 i have 3 machines: one is ubuntu and the other 2 are centos 6. when i probe a centos 6 machine from the ubuntu machine i get "Error through RPC layer, retry again later"
13:39 jmarley joined #gluster
13:39 jmarley joined #gluster
13:41 jmarley__ joined #gluster
13:48 jobewan joined #gluster
13:49 dbruhn joined #gluster
13:50 RayS joined #gluster
13:51 lyang0 joined #gluster
13:51 kam270_ joined #gluster
13:53 RayS joined #gluster
13:53 raghug joined #gluster
13:54 RayS joined #gluster
13:56 doekia probably different version of the xlators
13:59 lyang0 joined #gluster
14:00 wgao joined #gluster
14:01 saravanakumar1 do we have any fix for this
14:03 doekia Take the source and build the same version for both systems
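For reference, a quick way to confirm whether the peers really are running different versions before rebuilding anything (a sketch; run on each node, the outputs should match exactly):

    glusterfs --version | head -1
    gluster --version | head -1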
14:04 rpowell joined #gluster
14:05 ravindran joined #gluster
14:06 calum_ joined #gluster
14:07 venkatesh_ joined #gluster
14:08 primechuck joined #gluster
14:08 saravanakumar1 okay am trying that
14:11 venkatesh_ shyam
14:21 jag3773 joined #gluster
14:22 rafael joined #gluster
14:22 hybrid512 joined #gluster
14:22 robo joined #gluster
14:22 harish joined #gluster
14:22 hybrid512 joined #gluster
14:23 monotek i'm just trying samba's vfs gluster module but get the following errors in the configured gluster.log: http://paste.ubuntu.com/7146541/
14:23 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:23 seapasulli joined #gluster
14:23 dashdashT joined #gluster
14:23 monotek it seems gluster is trying to access localhost but my gluster server is somewhere else..
14:24 dbruhn Monotek, if you do a gluster peer status from each member server do you see any of them listed as local host?
14:24 monotek the smb.conf option "glusterfs:volume = storage1.local:/antivirus" seems not to work?
14:26 monotek no, i see the domain or ip of my nodes....
14:26 keytab joined #gluster
14:26 dbruhn on all of them?
14:27 lmickh joined #gluster
14:29 lpabon joined #gluster
14:30 sprachgenerator joined #gluster
14:31 kam270 joined #gluster
14:32 monotek yes... but i have a "peer rejected" :-(
14:32 monotek think i have to fix that first
14:34 vpshastry joined #gluster
14:35 vpshastry left #gluster
14:38 ravindran joined #gluster
14:38 wrale joined #gluster
14:41 robo joined #gluster
14:45 dkorzhevin joined #gluster
14:50 kam270 joined #gluster
14:51 mattapperson joined #gluster
14:53 robo joined #gluster
15:01 ndk joined #gluster
15:01 kaptk2 joined #gluster
15:02 kam270 joined #gluster
15:03 daMaestro joined #gluster
15:04 benjamin_____ joined #gluster
15:07 harish joined #gluster
15:15 kam270 joined #gluster
15:22 jtux joined #gluster
15:23 sks joined #gluster
15:24 failshell joined #gluster
15:26 ravindran joined #gluster
15:26 kam270 joined #gluster
15:31 coredump_ joined #gluster
15:32 jtux joined #gluster
15:33 raghug joined #gluster
15:38 coredump joined #gluster
15:38 kaptk2 joined #gluster
15:39 coredump_ joined #gluster
15:43 nightwalk joined #gluster
15:49 sroy joined #gluster
15:49 sroy_ joined #gluster
15:53 ale84ms joined #gluster
15:53 ale84ms Hello
15:53 glusterbot ale84ms: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:59 jbrooks joined #gluster
16:00 kam270 joined #gluster
16:01 mattappe_ joined #gluster
16:02 ale84ms I have a problem adding a brick to my gluster configuration. I have three physical computers connected back-to-back to each other, forming a sort of routing triangle. I set it up this way to keep the gluster traffic on those direct links between the three machines. Each pair of machines is on a dedicated /30. If I use the command "gluster peer probe" I have to specify one of the IP addresses belonging to the /30
16:02 ale84ms shared with the other computer. The problem is that the IP address I probe gets propagated as-is to the third peer, which cannot reach it because it sits on a different point-to-point /30. How can I set the IP addresses manually on each brick?
16:10 ctria joined #gluster
16:16 kam270 joined #gluster
16:17 zerick joined #gluster
16:20 Mo__ joined #gluster
16:25 FarbrorLeon joined #gluster
16:29 hagarth joined #gluster
16:32 kam270 joined #gluster
16:34 danishman joined #gluster
16:37 Slash joined #gluster
16:41 thigdon joined #gluster
16:44 badone joined #gluster
16:54 azalime joined #gluster
16:55 failshel_ joined #gluster
16:55 chirino_m joined #gluster
16:56 azalime hi guys. i have an issue with mounting a volume on a client machine but same command works on the server
16:56 azalime mount -vvv -t nfs -o _netdev,noatime,vers=3,nolock nfs-1-server:/sites /export
16:57 azalime RPC: Timed out
16:58 kam270 joined #gluster
16:58 failshe__ joined #gluster
17:09 sputnik13 joined #gluster
17:11 sputnik13 joined #gluster
17:12 zaitcev joined #gluster
17:14 raghug joined #gluster
17:17 lijiejun joined #gluster
17:20 sroy joined #gluster
17:24 kam270 joined #gluster
17:25 lijiejun joined #gluster
17:26 aixsyd joined #gluster
17:28 dbruhn Anyone use that GFID resolver script ever?
17:30 JoeJulian ale84ms: use hostnames
17:31 wrale_ joined #gluster
17:32 MacWinner joined #gluster
17:33 semiosis dbruhn: ,,(gfid resolver)
17:33 glusterbot dbruhn: https://gist.github.com/4392640
17:33 semiosis dbruhn: and iirc JoeJulian had a fork
17:33 semiosis dbruhn: i havent' used it since i wrote it
17:33 JoeJulian I did?
17:33 semiosis didnt you?
17:33 dbruhn Yeah I am using it, just taking forever i see it's just doing a find on the inum
17:33 * JoeJulian shrugs
17:33 semiosis apparently not
17:33 JoeJulian I spend days researching and writing blog articles and forget that I've done it...
17:34 JoeJulian It's possible.
17:34 dbruhn I am having some serious issues...
17:34 dbruhn trying to trouble shoot some crap
17:34 semiosis dbruhn: yes it does a find, that can take time
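For reference, the core of what that resolver script does can be reproduced by hand roughly like this (a sketch; the gfid and brick path below are hypothetical, and the full-brick find is the slow part):

    GFID=17f31f0b-4421-4932-bc72-4ac30e2e0b49          # hypothetical gfid
    BRICK=/data/brick                                   # hypothetical brick path
    INUM=$(stat -c %i "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID")
    find "$BRICK" -inum "$INUM" -not -path '*/.glusterfs/*'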
17:35 JoeJulian I'm having lighthearted lowbrow issues.
17:35 dbruhn Either of you guys ever see files regularly coming back with mismatches hashes but not throwing any sort of split-brain issues?
17:36 JoeJulian Ew... no.
17:36 discretestates joined #gluster
17:36 JoeJulian mismatched from the client?
17:36 systemonkey Happy monday folks!
17:37 dbruhn I am trying to track it down, my software accesses the files through the client, the software does a hash check, the hash check says it's not the same as it was when it was written, and it's encrypted data so I can't even dig into it
17:37 JoeJulian I see.
17:37 JoeJulian Was it /just/ written? From another client perhaps?
17:37 discretestates joined #gluster
17:38 JoeJulian Try mounting with "attribute-timeout=0"
17:38 dbruhn Nah, the front end software has its own clustering algorithm, to ensure a customer only connects via a single gluster client
17:38 JoeJulian Hmm
17:40 systemonkey we upgraded our gluster systems to a 10G network over the weekend and upped the performance cache-size to 4GB. read speed increased dramatically. However, sometimes there is a long pause, at times while it reads the same directory. Furthermore, we find transfer speed degrades over time, down to 20-25MB. Does anyone have similar issues?
17:40 dbruhn yeah, I am in the same mind set
17:41 JoeJulian Actually, I would still try the attribute-timeout
17:41 dbruhn ok, what does that control? and is that a 3.3.2/1 compatible settings?
17:42 JoeJulian argh... probably not...
17:42 JoeJulian Ok then... I set performance.write-behind=off to work around a similar problem with bzr.
17:43 dbruhn I have three more gluster systems going in next week TCP is so getting setup even if I am using IB....
17:45 glob157 joined #gluster
17:46 micu joined #gluster
17:47 systemonkey JoeJulian: whats the benefit of performance.write-behind=off?
17:49 JoeJulian I was getting a thing where bazaar was trying to re-open a file it just closed. It would then try to seek to some fixed point in the file and would error because the file wasn't that long. There was some really old bug about that, but I've long since forgotten what it was titled as it went through like 4 different bug merges.
17:49 JoeJulian Disabling write-behind cured it though.
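Hedged examples of the two workarounds discussed above (volume and mount point names are illustrative; as noted, attribute-timeout may not be usable on 3.3.x):

    gluster volume set myvol performance.write-behind off
    mount -t glusterfs -o attribute-timeout=0 server1:/myvol /mnt/myvol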
17:50 azalime showmount -e nfs-server
17:50 azalime hangs remotely from a client but if run on the local server, it works
17:50 azalime any idea?
17:52 jmalm joined #gluster
17:52 JoeJulian ~nfs | azalime
17:52 glusterbot azalime: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
17:53 JoeJulian I suspect there might be something similar about showmount, but I don't know for sure.
17:53 JoeJulian If that's not it, then I would look at iptables/selinux/firewalls.
17:53 JoeJulian @ports
17:53 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
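Putting the two factoids together, a typical NFS mount of a gluster volume looks roughly like this (server and volume names taken from the conversation; rpcbind must be running on the server and the kernel nfsd disabled):

    mount -t nfs -o tcp,vers=3 nfs-1-server:/sites /export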
17:53 azalime glusterbot: I can mount locally on the server but not remotely
17:54 slayer192 joined #gluster
17:54 azalime iptables is disabled, no firewall at all
17:54 dbruhn JoeJulian, is performance.write-behind one of the undocumented settings?
17:55 ale84ms hello JoeJulian, sorry for the delay... How can I use hostnames in the gluster? Is there a man page to help?
17:55 azalime i can telnet server ip 111, it says connected
17:55 JoeJulian @hostnames
17:55 glusterbot JoeJulian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
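As a concrete example of that factoid for a three-node pool (hostnames as used later in this conversation):

    # from Tatooine:
    gluster peer probe Hoth
    gluster peer probe Ilum
    # then, from Hoth or Ilum, so the first peer is also known by name:
    gluster peer probe Tatooine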
17:57 lijiejun joined #gluster
17:57 azalime any way to debug this
18:00 kam270 joined #gluster
18:00 ale84ms where should I insert the hostnames? On /etc/hosts ?
18:04 JoeJulian ale84ms: That's a networking question. /etc/hosts works, so does bind,powerdns,maybe even mdns. I prefer powerdns
18:05 voronaam joined #gluster
18:07 lijiejun joined #gluster
18:07 voronaam Hi all. Quick question, does anybody uses GlusterFS over NFS with FS-Cache on the client? I tried that today and it works great - more than order of magnitude increase in performance, but I can not find how to flush the cache to repeat the test.
18:08 ale84ms ok, so why does gluster keep saying that the hostname "is an invalid address"? It works fine with ping...
18:10 JoeJulian voronaam: No idea on how to flush the cache short of unmounting.
18:11 JoeJulian ale84ms: Need more details.
18:11 ale84ms joejulian, which details do you require?
18:11 voronaam unmount is a great idea I have not tried yet :) I tried to restart cachefilesd to flush the cache. Thank you
18:11 ale84ms gluster version is 3.4.0
18:14 JoeJulian actual error messages are quite useful and 3.4.0 has some serious management bugs.
18:14 JoeJulian I would upgrade
18:14 ricky-ticky1 joined #gluster
18:16 ale84ms ok, I will for sure
18:17 ale84ms is it possible to upgrade it via aptitude?
18:17 semiosis yes if you're using the ,,(ppa)
18:17 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
18:18 systemonkey ale84ms: try checking /var/lib/glusterd/peers/ node uuid parameters.
18:19 azalime i have the aws firewall disabled, iptables disabled, selinux disabled but still showmount -e server ip hangs remotely but works if run locally on the server
18:19 XpineX joined #gluster
18:19 kam270 joined #gluster
18:23 JoeJulian Yeah, you're not kidding...
18:23 ale84ms systemonkey: in that directory I have the details of my first peer. It looks correct, with hostname1=192.168.5.5. I just updated the gluster version to 3.4.2, I will try now to connect with the hostname
18:23 raptorman joined #gluster
18:24 ale84ms but again, I get "xxx is an invalid address" when I type gluster peer probe (ping keeps on working)
18:24 lijiejun joined #gluster
18:26 chirino joined #gluster
18:26 JoeJulian ~pasteinfo | ale84ms
18:26 glusterbot ale84ms: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:28 ale84ms http://ur1.ca/gwsn1
18:28 glusterbot Title: #88140 Fedora Project Pastebin (at ur1.ca)
18:28 kam270 joined #gluster
18:29 JoeJulian You defined your volume with specific ip addresses. The client will need to be able to access those addresses.
18:29 JoeJulian If that's not what you were trying for, you need to define your volume using hostnames.
18:30 JoeJulian Then you can use a technique called split-horizon dns to allow your servers to use one address to identify each other, and your clients may use another.
18:30 semiosis @split horizon
18:30 JoeJulian @whatis split horizon
18:30 glusterbot JoeJulian: Error: No factoid matches that key.
18:30 vpshastry joined #gluster
18:31 ale84ms I think I may have found the problem... I was using a hostname that was not the output of /sbin/hostname on that machine... that's why it wasn't working
18:31 ale84ms still, now I get "Peer Rejected (Connected)"
18:32 glob157 joined #gluster
18:33 tyl0r joined #gluster
18:33 robo joined #gluster
18:33 ale84ms I'm figuring out now how to fix this problem :) Thank you so far!
18:34 JoeJulian ale84ms: You might also want to realize ,,(mount server)
18:34 glusterbot ale84ms: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
18:35 JoeJulian The amount of traffic between servers on a regular basis is pretty small, unless you're client is mounting via nfs.
18:36 JoeJulian So dual-homing/split-horizon simply adds complication in most cases, imho, with no real benefit.
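A minimal split-horizon sketch using /etc/hosts, assuming the volume was created with hostnames (names and addresses below are illustrative):

    # /etc/hosts on the servers (back-to-back addresses)
    192.168.5.5   node01.mydomain.net
    192.168.5.9   node02.mydomain.net
    # /etc/hosts on the clients (front-side addresses)
    10.0.0.5      node01.mydomain.net
    10.0.0.6      node02.mydomain.net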
18:40 Matthaeus joined #gluster
18:41 JoeJulian grr.. I hate wasting time trying to duplicate someone's problem to only have them disappear.
18:42 thigdon hello, i'm having trouble with gluster-3.4.2 on kernel 3.10.24 with an ext4 backing fs. Doing 'ls' on directories in my fuse-mounted volume where i've created > ~100 files are giving me "Input/output error". According to strace, it's the return from the getdents64 system call. It seems reminiscent of http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/ .. but I think 3.4.2 is supposed to have a fix. Any ideas how to debug further?
18:42 glusterbot Title: GlusterFS bit by ext4 structure change (at joejulian.name)
18:44 vpshastry left #gluster
18:44 JoeJulian thigdon: Check the client log and if you find an error that points to a specific brick, the corresponding brick log.
18:45 systemonkey Can anyone point me to some performance parameters that can increase the write speed in a distributed model?
18:45 kam270 joined #gluster
18:46 lijiejun joined #gluster
18:48 ale84ms JoeJulian: sorry! I was trying to fix the problem! I am not disappeared!
18:49 ale84ms I have a bunch of servers that have to store some important data, and the amount of data is pretty large (in the order of TBs)... that's why we decided to keep every gluster machine connected back-to-back
18:49 ale84ms we have 3 of them so far
18:49 JoeJulian No worries, not you ale84ms. :D
18:49 ale84ms everything was working fine with 2, but I'm struggling to make everything work with 3...
18:50 ale84ms ahah :) Sorry... I feel a lil bit egocentric right now :)
18:50 JoeJulian We all are. That's our nature.
18:50 ale84ms :D You should be proud to hear that now my three machines see each other as: "Accepted peer request (Connected)" and "Peer Rejected (Connected)"
18:50 ale84ms :P it's still not working, but at least they are connected...
18:51 semiosis ,,(peer rejected)
18:51 glusterbot I do not know about 'peer rejected', but I do know about these similar topics: 'peer-rejected'
18:51 semiosis ,,(peer-rejected)
18:51 glusterbot http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
18:51 JoeJulian Well, at least it's documented...
18:52 ale84ms yeah... I already checked that, but the computer doesn't change its status...
18:52 semiosis keep trying
18:52 ale84ms what does "Accepted peer request (Connected)" mean? why is it not in connected state yet?
18:53 JoeJulian I REALLY need to figure that all out and document it. That's annoyed me since day 1 of glusterd.
18:53 nightwalk joined #gluster
18:53 DR_D12525252 joined #gluster
18:54 ale84ms I'm trying now to make two of the machines to connect to each other... then I will try to add the third
18:55 ale84ms and right now, I get a "Accepted peer request (Connected)" on one of them, and a "Peer in Cluster (Connected)" on the other...
18:55 thigdon JoeJulian: thanks for the suggestion.. the client log reads like this for each of my bricks:
18:55 thigdon [2014-03-24 18:52:40.527038] D [afr-dir-read.c:126:afr_examine_dir_readdir_cbk] 0-two-replicate-0: /foo: no entries found in two-client-0
18:55 thigdon the server logs on each brick aren't telling me anything interesting i don't think.. just a bunch of "x scheduled as fast fop"
18:56 DR_D12525252 I'm new to this channel, is it acceptable etiquette to pose a technical question I could use some help with?
18:56 JoeJulian thigdon: Of course the " D " lines are just debugging info.
18:56 thigdon i understand. there doesn't seem to be much at the INFO or above level.
18:56 JoeJulian Those might be useful if they precede an " E " or " C ".
18:56 JoeJulian DR_D12525252: Absolutely.
18:57 DR_D12525252 thanks JoeJulian
18:57 rahulcs joined #gluster
18:58 JoeJulian thigdon: Can you truncate the log and freshly mount, then cause the error and fpaste the log?
18:58 thigdon do you prefer the debug or info log?
18:58 JoeJulian thigdon: Either is fine.
18:58 thigdon i'm sorry .. what is fpaste exactly?
18:59 thigdon ah fpaste.org i assume
18:59 JoeJulian http://fpaste.org
18:59 glusterbot Title: New paste Fedora Project Pastebin (at fpaste.org)
18:59 JoeJulian @paste
18:59 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
18:59 JoeJulian Either of those is fine.
19:01 DR_D12525252 i've added 2 additional bricks to my 'replicate' volume today, expanding to 4 bricks.  gluster started mirroring the data onto the 2 new bricks as I expected.  it seemingly has stopped midway through the self-heal operation and the 2 new bricks only have half the data they should (version 3.3.1)
19:01 DR_D12525252 i see errors like this:  "[2014-03-24 14:58:38.390699] E [afr-self-heal-data.c:1311:afr_sh_data_open_cbk] vol-replicate-0: open of <gfid:17f31f0b-4421-4932-bc72-4ac30e2e0b49> failed on child vol-client-3 (No such file or directory)"
19:05 robo joined #gluster
19:06 DR_D12525252 any ideas of why the process seemingly has failed partway through?
19:09 andreask joined #gluster
19:10 kam270 joined #gluster
19:11 JoeJulian DR_D12525252: I think it would appear that some file was deleted during the process.
19:11 JoeJulian DR_D12525252: You must have a pretty high demand for the files on those replicas, eh?
19:11 ale84ms ok, I managed to connect every node by re-installing from zero the gluster
19:12 JoeJulian ale84ms: excellent.
19:12 thigdon JoeJulian: http://paste.ubuntu.com/7147741/
19:12 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:12 thigdon [2014-03-24 19:10:42.267794] W [socket.c:514:__socket_rwv] 0-two-client-0: readv failed (No data available)
19:12 ale84ms now I have an important question... I have very important data on /storage/brick of the two machines that were connected previously in the gluster
19:12 thigdon is the only thing i see that might be worrying
19:12 ale84ms with the command "gluster volume create gv0 replica 2 node01.mydomain.net:/export/sdb1/brick node02.mydomain.net:/export/sdb1/brick"  (for example), will that data be kept?
19:12 JoeJulian thigdon: Nice... and that gives you an error at the client...
19:13 thigdon JoeJulian: correct
19:13 JoeJulian @meh
19:13 glusterbot JoeJulian: I'm not happy about it either
19:13 thigdon JoeJulian: my causing the error didn't seem to cause any extra output in the client log
19:13 DR_D12525252 JoeLuian:  yes, it is definitely possible files on the source side could have been deleted during the process
19:14 JoeJulian thigdon: Have to look at the brick logs around the timestamp where you created the error.
19:14 JoeJulian thigdon: I never directly answered your question. Yes, the ext4 thing is fixed.
19:14 thigdon JoeJulian: i will produce those as well, if you'd like
19:14 JoeJulian Sure
19:15 DR_D12525252 JoeJulian:  is it possible to start the process again somehow?
19:15 sputnik13 joined #gluster
19:16 JoeJulian DR_D12525252: It should walk the self-heal queue when a server's status changes, or every ~5 (iirc) minutes. Or you can start it immediately with a "gluster volume heal $vol".
19:17 JoeJulian That will only heal files that are marked as needing healed. You may, instead, need to "gluster volume heal $vol full" to replicate the entire contents.
19:17 JoeJulian DR_D12525252: ... and now that you've already done this, have you read http://joejulian.name/blog/glusterfs-replication-dos-and-donts/
19:18 glusterbot Title: GlusterFS replication dos and donts (at joejulian.name)
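The heal commands referenced above, spelled out (a sketch; $vol is the volume name):

    gluster volume heal $vol          # heal entries already marked as needing heal
    gluster volume heal $vol full     # walk the whole volume and replicate everything
    gluster volume heal $vol info     # show what is still pending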
19:20 DR_D12525252 i have not read this, no.  ill check it out, thanks
19:23 DR_D12525252 probably a dumb question, but if i run "gluster volume heal $vol full", is there any possibility gluster will overwrite the data on the 2 "older" bricks with the halfway synced content from the 2 new bricks i added today?
19:25 DR_D12525252 or does gluster just replicate any content that exists on any individual bricks, to all the bricks in the volume?
19:27 kam270 joined #gluster
19:28 thigdon JoeJulian: http://paste.ubuntu.com/7147806/ and http://paste.ubuntu.com/7147806/
19:28 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:29 japuzzo joined #gluster
19:29 ricky-ti1 joined #gluster
19:30 JoeJulian DR_D12525252: I'm not sure I understand the question. Did you add two bricks that already had data on them?
19:31 DR_D12525252 no, the newly added bricks had no data on them
19:31 JoeJulian Ok, then the files on them should remain in sync.
19:32 DR_D12525252 but they are not currently in sync
19:32 JoeJulian Unless you take a brick offline, then it's healed when it returns online from the self-heal daemon, or if a file is accessed, it's healed before you can use that file.
19:33 DR_D12525252 i am probably being unclear, let me try to re-ask
19:33 JoeJulian GlusterFS is file-based, not block based. So individual files may be in a fully healed state despite the entire brick not yet being fully complete.
19:36 DR_D12525252 before today, i had two bricks (1 & 2) in replicate mode.  today, i added two additional bricks  (3 & 4) and increased the count to 4.  the content on 1&2 started replicating to 3&4 as expected, but has seemingly stopped, and is incomplete.  1&2 have 100GB of data each while 3&4 only have 45GB each.
19:37 DR_D12525252 im wondering, if i run "gluster volume heal $vol full", will 3&4 then each have the exact same content as 1&2?  i just don't want to lose any data on 1&2
19:37 JoeJulian That is correct.
19:38 DR_D12525252 ok, so gluster basically replicates any files that exist on any individual brick in the volume, to all bricks in the volume?
19:39 DR_D12525252 pardon my ignorance, just trying to understand how it works.  you've been a big help
19:39 JoeJulian To all the bricks in a replica group.
19:39 JoeJulian @brick-order
19:39 glusterbot JoeJulian: I do not know about 'brick-order', but I do know about these similar topics: 'brick order'
19:39 JoeJulian @brick order
19:39 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
19:40 JoeJulian That only indirectly applies since all your bricks are part of the same replica set.
19:40 ale84ms joined #gluster
19:41 DR_D12525252 ok, my replica count is 4
19:41 JoeJulian precisely.
19:42 JoeJulian Which will be great if your requests for any one file exceed the ability of one server to serve that file, or if you require eight nines of uptime.
19:43 ale84ms ok... another problem... my machines are connected now, I'm trying to create a new volume... I type "gluster volume create isolariovol replica 3 Tatooine:/storage/brick Hoth:/storage/brick Ilum:/storage/brick" but all I get is "volume create: isolariovol: failed"... can you help me?
19:43 robo joined #gluster
19:43 JoeJulian ale84ms: check all your /var/log/glusterfs/etc-glusterfs-glusterd.vol.log files
19:44 DR_D12525252 ok, i think i follow.  so if brick 1 has file1.txt and brick 3 does not, and i issue the full self-heal command, file1.txt will be replicated to brick 3
19:44 ale84ms JoeJulian: what should I check in those files?
19:45 JoeJulian You should know if you see it. I'm not sure what the error is that is causing it to fail.
19:46 DR_D12525252 there is no scenario where a file will be deleted during a ""gluster volume heal $vol full" operation then?
19:47 ale84ms JoeJulian: I just tried to check the error in that log file, but nothing appears :(
19:47 ale84ms no error... no messages... nothing...
19:48 JoeJulian DR_D12525252: yes, if a file is marked as out-of-sync due to its having been deleted while a brick was unavailable.
19:48 DR_D12525252 ok, thanks for your assistance, much appreciated
19:48 JoeJulian Any time. :)
19:48 ale84ms all I can see in the log files is "[2014-03-24 19:48:29.308505] I [input.c:36:cli_batch] 0-: Exiting with: -1" in cli.log
19:49 JoeJulian ale84ms: check all of them.
19:49 ale84ms I just did!
19:49 JoeJulian That log file on each server.
19:49 ale84ms ok, I will check on the other servers
19:49 JoeJulian One of the servers is reporting an error.
19:50 JoeJulian The frustrating part is, if it was the local server, you could actually read the error. For some reason nobody ever thought to pass the failure message back to the issuing glusterd to be passed back to the cli.
19:51 fraggeln JoeJulian: is there any good flags to the mount command to make the glusterfs-client "timeout" faster when a brick goes offline?
19:51 module000 joined #gluster
19:51 fraggeln at the moment, I get like 30-40 sec hung io to my mountpath
19:51 JoeJulian Yes, don't pull network plugs.
19:52 JoeJulian @ping-timeout
19:52 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
19:52 ale84ms aww... :( I get that a brick is already part of a volume (which it is not, as far as I can tell)
19:52 ale84ms and a couple of "[2014-03-24 20:52:26.181624] W [socket.c:514:__socket_rwv] 0-testvol-client-1: readv failed (No data available)" on the other machine
19:52 JoeJulian @path of prefix
19:52 glusterbot JoeJulian: I do not know about 'path of prefix', but I do know about these similar topics: 'path or prefix'
19:52 JoeJulian @path or prefix
19:52 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
19:52 JoeJulian damned typos...
19:53 JoeJulian fraggeln: And, of course, the direct answer to your question is ping-timeout. Find it in "gluster volume set help"
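For example, assuming the option name as listed by "gluster volume set help" (volume name and value are illustrative; see the ping-timeout factoid above before lowering it aggressively):

    gluster volume set myvol network.ping-timeout 10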
19:55 ale84ms nothing... I just read your blog, and did everything in that, but I keep on getting the same errors
19:56 JoeJulian Did you do everything in that to the brick that's reporting the error?
19:56 JoeJulian Or, for that matter, all the bricks?
19:57 JoeJulian A failed volume creation still creates the xattrs, so it probably needs to be done to all the bricks.
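The cleanup described in that blog post boils down to roughly the following, run against the brick directory on every server reporting the error (a sketch; brick path taken from the conversation, double-check before deleting anything):

    setfattr -x trusted.glusterfs.volume-id /storage/brick
    setfattr -x trusted.gfid /storage/brick
    rm -rf /storage/brick/.glusterfs
    service glusterd restart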
19:59 SFLimey I'm trying to remove four bricks that were mistakenly created in a gluster cluster, but when I try to do it I get the following error.
19:59 SFLimey volume remove-brick commit force: failed: Bricks are from same subvol
20:00 ale84ms I did it
20:00 ale84ms right now, I removed glusterfs from every machine
20:00 SFLimey I have a total of 8 bricks 2 x 4
20:00 ale84ms removed /var/lib/glusterfs
20:00 kam270 joined #gluster
20:01 ale84ms and I restarted once again
20:01 ale84ms one funny thing is that I "gluster peer probe Tatooine"
20:01 ale84ms then "gluster peer probe Ilum"
20:01 ale84ms and the third machine (Hoth) sees Ilum as an IP address, and not as an hostname
20:02 ale84ms and when I create the volume I get "Host Ilum is not in 'Peer in Cluster' state
20:03 SFLimey Anyone have any ideas how I can remove those bricks? They're on my / volume and have filled it up.
20:04 ale84ms "[2014-03-24 21:05:24.044948] E [glusterd-op-sm.c:3719:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Create', Status : -1"
20:05 ale84ms I'm officially lost :(
20:05 monotek anybody familiar wit sambas gluster vfs plugin?
20:05 monotek i currently try to get it working but it seems the samba server is trying to access localhost instead of my cluster nodes.
20:05 monotek smb.conf & gluster.log: http://paste.ubuntu.com/7147979/
20:05 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
20:10 semiosis johnmark: ping/
20:10 semiosis ?
20:14 johnmark semiosis: pong
20:14 semiosis see PM
20:14 johnmark er
20:14 johnmark um
20:17 kam270 joined #gluster
20:20 SFLimey semiosis: any ideas how I can get around this "volume remove-brick commit force: failed: Bricks are from same subvol" problem?
20:21 SFLimey http://paste.ubuntu.com/7148048/
20:21 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
20:22 SFLimey I'm trying to remove all the /data/gv0/brick/data bricks.
20:23 ale84ms ok... last question...
20:23 ale84ms I have managed to find what is the problem
20:24 ale84ms only one of my three computers report "[2014-03-24 21:24:19.350260] E [glusterd-op-sm.c:3719:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Create', Status : -1"
20:24 ale84ms even if I try to create a volume between that and only one of the other computers
20:24 ale84ms while if I try to create a volume between the other two machines everything works fine
20:25 ale84ms as far as I can see, that error message is the only error message I find in the machine
20:25 ale84ms :( do you have an idea of what is wrong about that?
20:25 semiosis SFLimey: what was the exact/complete command that produced that message?
20:32 RayS joined #gluster
20:32 qdk joined #gluster
20:33 RayS joined #gluster
20:33 SFLimey Semiosis, no idea what happened, I just tried again and it gave me some weird errors but deleted the bogus bricks.
20:34 zerick joined #gluster
20:34 kam270 joined #gluster
20:35 robo joined #gluster
20:35 RayS joined #gluster
20:36 RayS joined #gluster
20:38 RayS joined #gluster
20:41 RayS joined #gluster
20:42 RayS joined #gluster
20:43 RayS joined #gluster
20:43 RayS joined #gluster
20:44 RayS joined #gluster
20:45 RayS joined #gluster
20:46 RayS joined #gluster
20:48 RayS joined #gluster
20:49 RayS joined #gluster
20:50 RayS joined #gluster
20:51 RayS joined #gluster
20:52 Pavid7 joined #gluster
20:52 kam270 joined #gluster
20:55 module000 left #gluster
20:59 nightwalk joined #gluster
21:00 JoeJulian ~hostname | ale84ms
21:00 glusterbot ale84ms: I do not know about 'hostname', but I do know about these similar topics: 'hostnames'
21:00 JoeJulian ~hostnames | ale84ms
21:00 glusterbot ale84ms: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
21:01 * JoeJulian just returned from getting Teriyaki with the boss...
21:01 kam270 joined #gluster
21:02 thigdon JoeJulian: thanks for the help so far.. do you have any more ideas on how to debug this problem i'm having?
21:02 JoeJulian thigdon: Nothing in any of the brick logs around that error either?
21:04 thigdon JoeJulian: not that i can see. logs are here: http://paste.ubuntu.com/7147806/ and http://paste.ubuntu.com/7147806/
21:04 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
21:10 kam270 joined #gluster
21:11 JoeJulian thigdon: Since there's no errors in the logs, my guess would be that the error is filtering up from the filesystem. Maybe something in dmesg?
21:12 ale84ms I solved my problem
21:14 ale84ms everything was related to the fact that I was using a root partition on one of the machines
21:14 ale84ms that one which gave me all that pain
21:14 ale84ms in the other cases, I was using a separate partition
21:14 JoeJulian Ah yes.
21:15 JoeJulian You can override that behavior with "force".
21:15 ale84ms let me tell you... it was quite impossible to debug... I spent the whole afternoon and part of my evening on it :(
21:15 ale84ms yes, I did that
21:15 ale84ms once I understood where the problem was
21:15 monotek just for the record.... got samba vfs working by redirecting localhost:24007 to my gluster node via xinetd....
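For the record, a sketch of what such an xinetd redirect might look like (file name, service name and target host are hypothetical):

    # /etc/xinetd.d/glusterd-redirect
    service glusterd-redirect
    {
        type        = UNLISTED
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = root
        bind        = 127.0.0.1
        port        = 24007
        redirect    = storage1.local 24007
        disable     = no
    }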
21:16 Matthaeus joined #gluster
21:18 JoeJulian monotek: That's wierd.
21:18 monotek yes... but it seems there is not even an smb.conf option for defining the server...
21:19 monotek it's only the volume name
21:19 monotek so i guess it's made for localhost?
21:19 JoeJulian No, storage3.local is the server that it's trying to retrieve the volume definition from.
21:19 JoeJulian Or, rather, should be.
21:19 JoeJulian s/trying/should be/
21:19 glusterbot What JoeJulian meant to say was: No, storage3.local is the server that it's should be to retrieve the volume definition from.
21:20 Matthaeus joined #gluster
21:20 ale84ms :) Guess what... another problem ahead...
21:20 monotek yes, but in the documentation i found the server is never used in glusterfs:volume...
21:20 ale84ms now, I start the volume and fails
21:20 monotek this was my try to access my nodes...
21:20 ale84ms what I read in the log files is : E [glusterd-volume-ops.c:911:glusterd_op_stage_start_volume] 0-management: Could not find peer on which brick Ilum:/storage/brick resides
21:20 kam270 joined #gluster
21:21 ale84ms what does that mean??? how is it possible?
21:21 JoeJulian Scroll back and read that blurb on hostnames carefully.
21:21 JoeJulian monotek: Can I get a link to that documentation please?
21:22 ale84ms which blurb? This: "Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others." ?
21:22 JoeJulian Yeah, I figured I'd already spammed the channel with it twice during this conversation and didn't need to post it again.
21:23 ale84ms it is not helping... that's what I already did at the beginning
21:23 ale84ms exactly following that
21:24 JoeJulian From a server that's not ilum, ,,(pastestatus)
21:24 glusterbot Please paste the output of gluster peer status from more than one server to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
21:25 thigdon JoeJulian: unfortunately, i can't find anything amiss in the system logs on my brick servers (including dmesg)
21:27 JoeJulian thigdon: This is the point where I usually start running strace and/or wireshark to find where the error is actually coming from.
21:27 thigdon strace on client tells me that the failing system call is getdents64
21:28 thigdon i haven't been able to notice a failing system call on the servers when run under strace
21:28 thigdon i haven't tried wiresharking — i assume it would require me to know a bit about the gluster protocol
21:29 JoeJulian Probably not all that much. ndevos did all the hard work for you.
21:29 thigdon hm ok
21:30 chirino_m joined #gluster
21:30 ndevos thigdon: uh, I'm missing all the fun?
21:31 ndevos thigdon: getdents64() as a syscall gets translated to a READDIR procedure... but whats failing for you?
21:31 thigdon ls on a dir with a significant (somewhere north of 100) files gives me "input/output error"
21:31 thigdon using fuse client and an ext4 backing filesystem
21:31 thigdon gluster-3.4.2 on kernel 3.10.24
21:32 Matthaeus joined #gluster
21:32 JoeJulian Oh, wait...
21:32 JoeJulian Is that that mixed arch problem you found last week?
21:32 ndevos thigdon: and you are running 32-bit gluster servers and a 64-bit client?
21:32 tdasilva left #gluster
21:33 fidevo joined #gluster
21:33 * JoeJulian kicks himself for not thinking of that sooner...
21:33 thigdon 32-bit gluster executables on both sides actually
21:34 JoeJulian Looking at the patch for that, I don't think the client would have mattered.
21:34 thigdon pointer to the patch?
21:34 ndevos http://review.gluster.org/7278 - but that is not the latest version, I fixed the issues mentioned with it
21:34 glusterbot Title: Gerrit Code Review (at review.gluster.org)
21:35 jiqiren joined #gluster
21:36 ndevos okay, now its the latest version, but that has not been tested yet - at least it compiles :D
21:38 thigdon arg hm.. what is the best way to pull a patch text file out of that?
21:39 thigdon i'm running a 64-bit kernel if that's interesting
21:40 pjschmitt joined #gluster
21:42 glusterbot New news from resolvedglusterbugs: [Bug 1024369] Unable to shrink volumes without dataloss <https://bugzilla.redhat.com/show_bug.cgi?id=1024369>
21:45 FarbrorLeon joined #gluster
21:45 kam270 joined #gluster
21:46 ndevos thigdon: http://paste.fedoraproject.org/88198/97550139 has the patch for you, there should be a 'raw' link there
21:46 glusterbot Title: #88198 Fedora Project Pastebin (at paste.fedoraproject.org)
21:47 thigdon ndevos: that's great, thank you! i will try it out right now.
21:47 ndevos thigdon: if you have a Gerrit account, you can post comments on the patch, and mark it Verified if it works for you
21:48 ndevos thigdon: or, you can post in https://bugzilla.redhat.com/1074023 in case you have only an account there
21:48 glusterbot Title: Bug 1074023 list dir with more than N files results in Input/output error (at bugzilla.redhat.com)
21:54 kam270 joined #gluster
21:59 seapasulli left #gluster
22:00 seapasulli joined #gluster
22:07 hydro-b left #gluster
22:09 diegows joined #gluster
22:13 kam270 joined #gluster
22:20 thigdon ndevos: your patch against 3.4.2 appears to crash my client
22:20 thigdon i'm happy to help with debugging
22:22 kam270 joined #gluster
22:23 ndevos thigdon: hmm... maybe I should have tested it before :-/
22:24 ndevos thigdon: the change between the 1st version and the 2nd version of the patch is minimal, maybe you can spot the problem?
22:25 thigdon ndevos: there is a "Depends On" patch in the gerrit review page
22:25 thigdon should i be taking that too?
22:26 ndevos thigdon: no, it does not really depend on something else, that dependency is just the last commit I wrote the patch against
22:27 thigdon ok, that makes sense
22:27 ndevos thigdon: the diff between patch 1 and 2: http://review.gluster.org/#/c/7278/1..2/xlators/mount/fuse/src/fuse-bridge.c
22:27 glusterbot Title: Gerrit Code Review (at review.gluster.org)
22:28 thigdon so you tested patch 1 and it worked for you? and you changed it due to review comments?
22:30 ndevos thigdon: yes, thats correct
22:30 chirino joined #gluster
22:31 ndevos thigdon: I'll be gone in a bit, if you find anything, leave a message somewhere, or email me (address in the patch)
22:31 thigdon ndevos: will do. thanks.
22:31 ndevos thigdon: thank you!
22:31 vpshastry joined #gluster
22:33 discretestates joined #gluster
22:40 kam270 joined #gluster
22:48 diegows joined #gluster
22:48 jbrooks joined #gluster
22:49 kam270 joined #gluster
22:52 gdubreui joined #gluster
22:53 gdubreui joined #gluster
22:58 kam270 joined #gluster
23:04 SJ1 joined #gluster
23:05 SJ1 Hi, is there any advantage in having a three node cluster vs a 2 node one if replication is used?
23:06 nightwalk joined #gluster
23:07 kam270 joined #gluster
23:08 SJ1 my requirement is to take down one node at a time for patching without affecting the storage availability
23:10 tdasilva joined #gluster
23:12 tryggvil joined #gluster
23:14 zerick joined #gluster
23:15 robo joined #gluster
23:18 kam270 joined #gluster
23:21 JoeJulian SJ1: It's all about uptime calculations vs SLA/OLA. If two is enough for you to meet your obligations then that's what you should use.
23:25 seapasulli left #gluster
23:29 SJ1 The uptime requirement is 99%, would 3 nodes make any difference if all the downtime were controlled reboots? I read about the quorum settings in version 3.4, but I am unsure if a 3 node setup will make a difference.
23:30 JoeJulian http://www.eventhelix.com/realtimemantra/faulthandling/system_reliability_availability.htm#.UzDAB2dB3iM
23:30 glusterbot Title: System Reliability and Availability Calculation (at www.eventhelix.com)
23:30 kam270 joined #gluster
23:31 JoeJulian Since you can allow for 3.65 days/year of downtime, I doubt it's going to matter for you.
23:31 SJ1 ok. thanks for that
23:31 JoeJulian See "Availability in Parallel"
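The "Availability in Parallel" calculation from that page, in short (illustrative numbers, ignoring correlated failures):

    A_{parallel} = 1 - (1 - A_{node})^{N}

So two replicas that are each 99% available give 1 - 0.01^2 = 99.99%, and three give 1 - 0.01^3 = 99.9999%.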
23:36 robo joined #gluster
23:40 kam270 joined #gluster
23:41 gdubreui joined #gluster
23:41 sputnik13 joined #gluster
23:44 SJ1 In the event of an uncontrolled shutdown/power failure etc., would it matter from a split-brain recovery perspective if the cluster consists of 3 nodes (an odd number)?
23:45 nueces joined #gluster
23:45 lijiejun joined #gluster
23:49 JoeJulian SJ1: No.
23:51 SJ1 Thanks Joe.
23:52 JoeJulian Using quorum, however, having an arbitrator could help prevent split-brain from occurring. Gotta run and catch a train. Later.
23:55 SJ1 I'd be thankful if someone can point me to a document about the arbitrator for version 3.4
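For reference, the quorum knobs that existed around the 3.3/3.4 era (a sketch; verify the exact option names against "gluster volume set help" for your build, and note that dedicated arbiter bricks only arrived in later releases):

    gluster volume set myvol cluster.quorum-type auto            # client-side quorum
    gluster volume set myvol cluster.server-quorum-type server   # server-side quorum
    gluster volume set all cluster.server-quorum-ratio 51%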
23:56 jag3773 joined #gluster
23:59 kam270 joined #gluster
