
IRC log for #gluster, 2013-11-12


All times shown according to UTC.

Time Nick Message
00:03 glusterbot New news from newglusterbugs: [Bug 1029235] Simple brick operations shouldn't fail <http://goo.gl/7ubsId> || [Bug 1029237] Extra erroneous row seen in status output <http://goo.gl/5dGEkU> || [Bug 1029239] RFE: Rebalance information should include volume name and brick specific information <http://goo.gl/qLK5U0>
00:07 Remco joined #gluster
00:43 davidbierce joined #gluster
00:44 rubbs left #gluster
01:01 Remco joined #gluster
01:15 Remco joined #gluster
01:18 asias joined #gluster
01:24 mohankumar joined #gluster
01:30 asias joined #gluster
01:51 tg3 joined #gluster
01:52 bala joined #gluster
01:59 kshlm joined #gluster
02:06 harish joined #gluster
02:28 fyxim joined #gluster
02:48 satheesh1 joined #gluster
03:03 XpineX_ joined #gluster
03:06 bharata-rao joined #gluster
03:07 asias joined #gluster
03:09 vpshastry joined #gluster
03:15 shubhendu joined #gluster
03:20 sgowda joined #gluster
03:21 nueces joined #gluster
03:31 kanagaraj joined #gluster
03:32 RameshN joined #gluster
03:35 satheesh joined #gluster
03:35 bit4man joined #gluster
03:54 shylesh joined #gluster
03:56 itisravi joined #gluster
03:59 vpshastry joined #gluster
04:02 hchiramm__ joined #gluster
04:07 ababu joined #gluster
04:09 shruti joined #gluster
04:09 davinder joined #gluster
04:11 satheesh joined #gluster
04:27 ngoswami joined #gluster
04:29 65MAA70GY joined #gluster
04:29 13WABE23A joined #gluster
04:36 vpshastry joined #gluster
04:36 Shri joined #gluster
04:40 ppai joined #gluster
04:45 saurabh joined #gluster
04:48 mohankumar joined #gluster
04:59 dusmant joined #gluster
05:00 davidbierce joined #gluster
05:06 CheRi joined #gluster
05:12 satheesh joined #gluster
05:15 rjoseph joined #gluster
05:21 satheesh joined #gluster
05:23 saurabh joined #gluster
05:24 ndarshan joined #gluster
05:28 asias joined #gluster
05:34 bala joined #gluster
05:34 bala joined #gluster
05:36 hagarth joined #gluster
05:39 lalatenduM joined #gluster
05:53 rastar joined #gluster
05:58 bala joined #gluster
05:59 raghu joined #gluster
06:07 nshaikh joined #gluster
06:10 vikumar joined #gluster
06:14 delhage joined #gluster
06:23 ppai joined #gluster
06:33 bulde joined #gluster
06:36 vshankar joined #gluster
06:41 bala joined #gluster
06:44 spandit joined #gluster
06:47 CheRi joined #gluster
06:51 hagarth joined #gluster
06:54 sgowda joined #gluster
06:55 kshlm joined #gluster
06:56 rjoseph joined #gluster
07:00 ricky-ticky joined #gluster
07:02 dasfda joined #gluster
07:04 glusterbot New news from newglusterbugs: [Bug 1025404] Delete processes exiting with directory not empty error. <http://goo.gl/9TJ9zC> || [Bug 1025411] Enhancement: Self Heal Selection Automation <http://goo.gl/WjPhry> || [Bug 1025415] Enhancement: Server/Brick Maintenance Mode <http://goo.gl/dz9sXL> || [Bug 1025425] Enhancement: File grows beyond available size of brick <http://goo.gl/pTi1uG> || [Bug 1026977] [abrt] glusterfs-3g
07:06 glusterbot New news from resolvedglusterbugs: [Bug 979164] Can't 'make install' as non-root <http://goo.gl/mHbDC>
07:23 ngoswami joined #gluster
07:26 jtux joined #gluster
07:28 DV__ joined #gluster
07:39 sgowda joined #gluster
07:47 rjoseph joined #gluster
07:50 Rio_S2 joined #gluster
08:00 jmeeuwen joined #gluster
08:10 DV__ joined #gluster
08:22 eseyman joined #gluster
08:22 mgebbe joined #gluster
08:22 mgebbe_ joined #gluster
08:27 ricky-ticky joined #gluster
08:29 itisravi joined #gluster
08:30 hybrid5121 joined #gluster
08:32 aravindavk joined #gluster
08:34 glusterbot New news from newglusterbugs: [Bug 1029337] Deleted files reappearing <http://goo.gl/ZMqe4A>
08:37 ricky-ticky joined #gluster
08:40 raghu joined #gluster
08:41 andreask joined #gluster
08:44 psharma joined #gluster
08:48 bharata-rao joined #gluster
08:56 crashmag joined #gluster
09:08 satheesh joined #gluster
09:12 satheesh1 joined #gluster
09:12 X3NQ joined #gluster
09:20 harish joined #gluster
09:24 ngoswami joined #gluster
09:33 rjoseph joined #gluster
09:41 stickyboy The client-side log file for one of my volumes is 470GB...
09:41 stickyboy And /var is full now. :)
09:44 samppah huh
09:46 calum_ joined #gluster
09:46 stickyboy samppah: The file has lots of this:  https://gist.github.com/alanorth/1c36529244ee792fd32d/raw/5d5117ab91959b807488a7470df051749105bdc8/gistfile1.txt
09:46 glusterbot <http://goo.gl/iTfrFv> (at gist.github.com)
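
One way to keep a runaway client log from filling /var again is a logrotate policy with copytruncate. A minimal sketch, assuming the default /var/log/glusterfs location; the size and retention numbers are arbitrary, and the stanza would go in something like /etc/logrotate.d/glusterfs-client:

    /var/log/glusterfs/*.log {
        weekly
        size 100M
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
    }
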
09:51 nshaikh joined #gluster
09:55 nullck joined #gluster
10:02 marcoceppi joined #gluster
10:02 marcoceppi joined #gluster
10:02 crashmag joined #gluster
10:03 andreask joined #gluster
10:11 bharata-rao joined #gluster
10:12 hybrid5122 joined #gluster
10:23 DV__ joined #gluster
10:41 hybrid512 joined #gluster
10:43 rjoseph joined #gluster
10:51 hagarth joined #gluster
10:52 glusted joined #gluster
10:52 glusted Hello all, I seek help with some "split-brain" files on GlusterFS version 3.4.0
10:54 * glusted slaps JoeJulian around a bit with a large trout
10:55 franc joined #gluster
10:55 franc joined #gluster
11:01 glusted I get a lot of results (1023) with gluster volume heal gv0 info split-brain or gluster volume heal gv0 info heal-failed
11:04 ndarshan joined #gluster
11:04 glusted My files seem to be "corrupted" or "split-brained" (ls: cannot access /var/data/www/NewsStuffs/2.jpg: Input/output error
11:05 glusted )
11:05 glusted ??????????? ? ?       ?            ?            ? 2.jpg
11:08 glusted gluster irc chat status info
11:11 ndevos ~split brain | glusted
11:11 glusterbot glusted: I do not know about 'split brain', but I do know about these similar topics: 'split-brain', 'splitbrain'
11:12 ndevos ~split-brain | glusted
11:12 glusterbot glusted: To heal split-brain in 3.3+, see http://goo.gl/FPFUX .
11:12 ndevos that should work for 3.4 too
11:12 glusted Thanks- unfortunately it's not really working.
11:13 glusted I actually saw that "joejulian" blog and he is in this IRC thing.
11:13 glusted I've done everything he said, and yes, at least that *??????" thing disappeared. BUT
11:14 glusted the commands gluster volume heal gv0 info split-brain and ...info heal-failed show the exact same number of issues as before the "FIX"
11:15 glusted what are those "Topics"?? you mean to google it? (I did for 2 days )
11:15 glusted (split-brain | glusted )
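
For reference, the manual fix in JoeJulian's post boils down to something like the following. This is only a sketch: the brick path matches the one that shows up later in this log, while the file path and gfid are placeholders, and deciding which copy is the bad one is entirely up to you.

    # Run on the server whose copy you have decided to discard.
    BRICK=/export/brick1                        # example brick path
    BADFILE=NewsStuffs/2.jpg                    # hypothetical path relative to the brick root
    GFID=2859483f-2acf-4308-a03f-5da49195ab59   # the file's gfid (example value)

    # Compare the copies on both bricks first:
    getfattr -m . -d -e hex "$BRICK/$BADFILE"

    # Remove the bad copy and its hard link under .glusterfs:
    rm -f "$BRICK/$BADFILE"
    rm -f "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"

    # Then look the file up through a client mount to trigger the heal:
    stat /var/data/www/NewsStuffs/2.jpg
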
11:15 franc joined #gluster
11:15 franc joined #gluster
11:30 manu joined #gluster
11:39 rastar joined #gluster
11:40 Moosya joined #gluster
11:41 badone joined #gluster
11:42 hagarth joined #gluster
11:44 geewiz joined #gluster
11:51 ngoswami joined #gluster
11:52 geewiz Hi! I need to resolve a split-brain situation. "heal info split-brain" shows a few file names but many more gfids without a path. How do I handle these?
11:53 anonymus joined #gluster
11:54 anonymus hi guys, help me please find out the problem about mounting glusterfs
11:54 anonymus http://www.gluster.org/community/documentation/index.php/QuickStart I have done this step by step
11:54 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
11:54 anonymus and mounting does not work
11:55 anonymus it says readlink("/root/server1:", 0x7fff09aa2ee0, 4096) = -1 ENOENT (No such file or directory)
12:01 kkeithley1 joined #gluster
12:02 hchiramm__ joined #gluster
12:02 anonymus anybody
12:02 anonymus please
12:03 anonymus respond
12:03 samppah anonymus: what's the command you are using for mounting?
12:03 moosya joined #gluster
12:04 anonymus strace  mount -t glusterfs server1:/gv0 /mnt
12:04 anonymus the same as in the guide
12:05 anonymus mentioned above
12:05 anonymus http://www.gluster.org/community/documentation/index.php/QuickStart I have done this step by step
12:05 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
12:05 anonymus it says readlink("/root/server1:", 0x7fff09aa2ee0, 4096) = -1 ENOENT (No such file or directory)
12:05 samppah okay
12:05 samppah what distribution you are using?
12:06 anonymus samppah: what do you mean?
12:06 anonymus ahh
12:06 anonymus fedora
12:06 anonymus 19
12:06 anonymus for servers
12:06 anonymus and ubuntu 13 10 for client
12:06 samppah okay, can you do rpm -qa | grep gluster and send output to pastie.org?
12:06 samppah oh
12:06 anonymus shure
12:06 samppah ubuntu and fedora same version of gluster?
12:06 anonymus sure
12:07 anonymus hmm moment
12:11 anonymus glusterfs-server-3.4.1-1.fc19.i686
12:11 anonymus glusterfs-client                          3.2.7-3ubuntu2                      amd64
12:11 anonymus samppah:
12:12 samppah ookay
12:12 samppah you need at least version 3.3 on client side
12:12 anonymus ok
12:12 samppah 3.4 is recommended
12:12 samppah @ppa
12:12 anonymus ok
12:12 glusterbot samppah: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
12:12 anonymus hmm
12:13 rcheleguini joined #gluster
12:13 anonymus samppah: remind me please which repo
12:13 anonymus ?
12:14 anonymus sudo add-apt-repository ppa:*****/ppa ??
12:14 andreask joined #gluster
12:14 anonymus I am usually not using ubuntu :(
12:14 samppah anonymus: ouch, me neither ;)
12:14 anonymus and fedora :(
12:15 anonymus ok google knows
12:23 ndarshan joined #gluster
12:23 anonymus samppah:
12:24 anonymus YOU ARE WIZARD!
12:24 anonymus it works
12:24 anonymus with the 3.4 client
12:25 anonymus thank you very much!
12:29 samppah anonymus: that's great to hear, you are welcome :)
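
The takeaway, as samppah notes, is that the client needs at least 3.3 to mount a 3.4 volume. A quick sketch of checking both ends before mounting, with hostnames as in the quickstart guide:

    glusterfs --version                  # on the Ubuntu client
    glusterd --version                   # on the Fedora servers
    # after upgrading the client to 3.4.x:
    mount -t glusterfs server1:/gv0 /mnt
    mount | grep glusterfs               # confirm the FUSE mount is up
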
12:30 ndevos JoeJulian: could you respond to https://bugzilla.redhat.com/show_bug.cgi?id=1022542#c19 and lift your -1 karma in case you agree?
12:30 glusterbot <http://goo.gl/BGWGeH> (at bugzilla.redhat.com)
12:30 glusterbot Bug 1022542: unspecified, unspecified, ---, ndevos, ON_QA , glusterfsd stop command does not stop bricks
12:30 ndevos kkeithley_: ^ is the only thing that currently blocks me from submitting the packages as an update
12:31 ndevos well, I could probably just push the packages anyway, but that's not really user friendly
12:31 Glusted joined #gluster
12:32 Glusted Hi, I got disconnected, I was checking for help with split-brain, could anyone help please? (I followed JoeJulian's notes)
12:33 stickyboy What is the default setting for direct-io-mode in GlusterFS 3.4?
12:41 hagarth joined #gluster
12:47 XpineX joined #gluster
12:47 Glusted GlusterFS 3.4.0
12:48 stickyboy Cloning the git repo so I can grep the code myself.
12:48 stickyboy win
12:49 Glusted is the option ./configure not telling you those settings?
12:49 Glusted or the config file?
12:50 kanagaraj joined #gluster
12:51 stickyboy Glusted: I don't compile Gluster from source. :)
12:52 chirino joined #gluster
12:55 vpshastry joined #gluster
13:00 vpshastry left #gluster
13:00 stickyboy Glusted: I just checked, there's no option in the configure script.
13:00 anonymus left #gluster
13:00 stickyboy But it seems the default behavior falls to the FUSE client; xlators/mount/fuse/utils/mount_glusterfs.in
13:00 dusmant joined #gluster
13:00 stickyboy --disable-direct-io-mode   <-- default.
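
So the FUSE mount helper passes --disable-direct-io-mode by default. If you want the opposite behaviour it can be requested explicitly at mount time; a sketch, with server and volume names as placeholders:

    mount -t glusterfs -o direct-io-mode=enable server1:/gv0 /mnt/gv0
    # or when invoking the client binary directly:
    glusterfs --volfile-server=server1 --volfile-id=gv0 --direct-io-mode=enable /mnt/gv0
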
13:02 Glusted No ideas sorry, was just suggesting from documents I am currently reading-
13:03 stickyboy Thanks
13:03 Glusted I'm trying to troubleshoot and solve issue with split-brain files on GlusterFS3.4.0
13:04 stickyboy Glusted: Ouch
13:05 Glusted yes, ouch... 1203 files in bad state... (healed-failed and split-brain). Fix here, not really working: http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
13:05 glusterbot <http://goo.gl/FPFUX> (at joejulian.name)
13:06 samppah Glusted: uhm, sounds like something is causing split brains? anything in log files?
13:06 Glusted Well, i'm not too sure, I read many of the logs
13:07 Glusted [2013-11-12 13:01:36.348247] I [server-rpc-fops.c:243:server_inodelk_cbk] 0-gv0-server: 2015058: INODELK (null) (2859483f-2acf-4308-a03f-5da49195ab59) ==> (No such file or directory)
13:07 Glusted [2013-11-12 13:01:36.348763] I [server-rpc-fops.c:1572:server_open_cbk] 0-gv0-server: 2015059: OPEN (null) (2859483f-2acf-4308-a03f-5da49195ab59) ==> (No such file or directory)
13:08 Glusted thats the type of things I can see in less bricks/export-brick1.log
13:09 kkeithley_ ndevos: Not sure what you're asking. Whether to push the update? We need the patch that's in 3.4.1-3 for libvirt (or do we?) and if nothing else has changed I'd say push it and we'll fix the glusterfsd.service in 3.4.1-4.
13:09 kkeithley_ Or if your glusterfsd-shutdown-only.service is okay?
13:12 ndarshan joined #gluster
13:13 kkeithley_ Or is there a change to glusterfsd.service in 3.4.1-3. Hmmm, I guess I could look at the changelog and see. ;-)
13:15 Glusted Which log am I supposed to check first?
13:16 bulde joined #gluster
13:16 Glusted pwd
13:17 B21956 joined #gluster
13:17 Glusted var/log/glusterfs
13:19 hybrid5121 joined #gluster
13:24 ababu joined #gluster
13:24 premera joined #gluster
13:32 Glusted [2013-11-12 09:46:17.667862] E [posix.c:350:posix_setattr] 0-gv0-posix: setattr (lstat) on /export/brick1/.glusterfs/57/7a/577abbe5-f2ca-4abc-8b28-8cb852755c74 failed: No such file or directory
13:36 glusterbot New news from newglusterbugs: [Bug 1029482] AFR: cannot get volume status when one node down <http://goo.gl/VFYYJ4> || [Bug 1029492] AFR: change one file in one brick,prompt "[READ ERRORS]" when open it in the client <http://goo.gl/2JEpZI>
13:38 badone joined #gluster
13:40 kkeithley_ ndevos: I'm not sure what the change to glusterfsd.service does. Historically stopping glusterd has not stopped any of the child processes. I have not heard a compelling argument for changing that. (Arguing that gluster must be forced to follow some one-size-fits-all policy != compelling IMO.)
13:46 Glusted Does anyone know if gluster volume heal xxx info split-brain and gluster volume heal xxx info heal-failed are "instantaneous"?
13:46 Glusted I read heal-failed: 27 on brick 1 and 1023 on brick 2; split-brain: 44 on Brick1 and 0 on brick 2...
13:47 davidbierce joined #gluster
13:52 GabrieleV joined #gluster
13:54 hybrid512 joined #gluster
14:01 plarsen joined #gluster
14:02 bennyturns joined #gluster
14:02 Glusted online
14:02 Glusted how many people are online here?
14:04 dbruhn joined #gluster
14:06 glusterbot New news from newglusterbugs: [Bug 1029496] AFR: lose files in one node, "ls" failed in the client, but open normally <http://goo.gl/BiLTjX> || [Bug 1029506] AFR: “volume heal newvolume full” recover file -- deleted file not copy from carbon node <http://goo.gl/btkIWp>
14:08 lalatenduM_ joined #gluster
14:08 jag3773 joined #gluster
14:11 vpshastry joined #gluster
14:21 vpshastry joined #gluster
14:32 ndarshan joined #gluster
14:35 davidbierce joined #gluster
14:43 moosya joined #gluster
14:43 ndevos ~channelstats | Glusted
14:43 glusterbot ndevos: Error: No factoid matches that key.
14:43 ndevos @channelstats
14:43 glusterbot ndevos: On #gluster there have been 205300 messages, containing 8483379 characters, 1413860 words, 5522 smileys, and 751 frowns; 1227 of those messages were ACTIONs. There have been 82304 joins, 2549 parts, 79772 quits, 23 kicks, 173 mode changes, and 7 topic changes. There are currently 200 users and the channel has peaked at 239 users.
14:44 Glusted what?
14:44 ndevos kkeithley_: glusterfsd.service handles the stopping of the brick processes on shutdown, or a restart on updating (if the service is active)
14:45 ndevos Glusted: you wrote: < Glusted> how many people are online here?
14:45 Glusted ha yes, I was checking, I see no one replies...
14:45 Glusted So I was wondering if it was because no one was actually online... apparently not then... 200 users!!!
14:46 ndevos kkeithley_: the comment in the bug suggests how restarting glusterfsd processes can be prevented while updating, but still done while rebooting
14:47 ndevos kkeithley_: it's rather ugly to not kill a glusterfsd process upon shutdown, is it not?
14:47 chirino joined #gluster
14:50 kkeithley_ yes, on shutdown we want the glusterfsds shut down. But just because we stopped glusterd we should not stop glusterfsds.
14:50 ndevos kkeithley_: oh, I agree with that, hence the two .service scripts
14:51 premera joined #gluster
14:51 kkeithley_ I think all of us except Michael are in agreement
14:53 Glusted If you do service glusterd stop
14:53 Glusted is that supposed to kill all "gluster processes"??
14:53 kkeithley_ Glusted: most devs are in India, several of the volunteers here are on the U.S. west coast, probably aren't even awake yet
14:54 ndevos kkeithley_: well, I for one would prefer to see a restart of the glusterfsd processes when an update is installed - running old binaries is just a pain to support
14:54 Glusted oh good, I'm currently in Asia, I'll try during their office hours
14:55 bugs_ joined #gluster
14:55 kkeithley_ ndevos: I suppose we could tick the box during the Bodhi update that says reboot suggested.
14:56 lalatenduM joined #gluster
14:56 ndevos kkeithley_: we could, but that is not really needed, just stopping the glusterfsd processes before restarting glusterd is sufficient
14:57 ndevos kkeithley_: and, if someone does not have the glusterfsd.service active, the glusterfsd processes will not be stopped anyway
14:57 vpshastry left #gluster
14:58 ndevos kkeithley_: in case you want the glusterfsd processes to be stopped on shutdown, copy the glusterfsd.service to some other name, and activate that
14:58 ndevos the scripts in the .spec will not know about the new name of the .service, and will not restart the glusterfsd.service if it was not active
14:58 kkeithley_ right, I don't _dis_agree with that.
14:58 Glusted thanks anyway, im off,
14:59 ndevos oh, then I fail to see the issue, I guess...
14:59 bulde joined #gluster
14:59 premera_e joined #gluster
15:00 kkeithley_ Other than JoeJulian not wanting his dozens of volumes restarted all at once after an update. Which apparently causes a lot of split-brain problems.  I think he has a valid concern.
15:02 premera joined #gluster
15:03 ndevos yes, that is why there is a way to disable restarts of the brick processes, otherwise I'd like to see only one glusterd.service :)
15:04 getup joined #gluster
15:05 getup hi, quick question, was the self-heal daemon introduced in version 3.3? When I look at the documentation it seems that you have to trigger it manually for 3.2, is that right?
15:06 ndk joined #gluster
15:08 hybrid512 joined #gluster
15:10 Remco getup: if you're just starting, you should really use the latest 3.4.x version
15:10 kkeithley_ I'm not seeing how to disable the brick restarts.
15:10 kkeithley_ getup: yes, in 3.2 you have to manually initiate selfheal.
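
For 3.2 the usual manual trigger is to force a lookup on every file through a client mount; a sketch, with the mount point as a placeholder:

    # 3.2: walk the mount so replicate looks up (and heals) each file
    find /mnt/gluster -noleaf -print0 | xargs --null stat > /dev/null
    # 3.3+: the self-heal daemon does this automatically, or on demand:
    gluster volume heal VOLNAME
    gluster volume heal VOLNAME full
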
15:10 getup we're depending on what the repository gives us i'm afraid
15:11 kkeithley_ @yum
15:11 glusterbot kkeithley_: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
15:11 kkeithley_ @ppa
15:11 glusterbot kkeithley_: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
15:11 kkeithley_ getup: ^^^
15:12 getup hmm, ok, yea, i may be able to work with that :)
15:13 ndevos kkeithley_: you need to *disable* (not stop) the glusterfsd.service, that will prevent stopping the glusterfsd processes
15:13 failshell joined #gluster
15:14 ndevos kkeithley_: iff you want to stop the glusterfsd processes on shutdown, copy the glusterfsd.service and *enable* and *start* that newly created copy
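
A sketch of the two cases ndevos describes, on a systemd (Fedora) host. The copied unit name follows his glusterfsd-shutdown-only example; the exact unit paths are assumptions:

    # Case 1: keep brick processes running across package updates
    systemctl disable glusterfsd.service

    # Case 2: still stop bricks cleanly at shutdown, via a copy that the
    # package scripts do not know about (so updates leave it alone)
    cp /usr/lib/systemd/system/glusterfsd.service \
       /etc/systemd/system/glusterfsd-shutdown-only.service
    systemctl daemon-reload
    systemctl enable glusterfsd-shutdown-only.service
    systemctl start glusterfsd-shutdown-only.service
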
15:14 kkeithley_ oh, that's what you meant.
15:15 kkeithley_ oh, okay, that's what you meant
15:16 ndevos I think, that is what I meant, and I hope it is clear(er) now?
15:16 kkeithley_ maybe it was just me. ;-)
15:17 * ndevos is a little confused now, and carries on doing other stuff
15:22 sprachgenerator joined #gluster
15:32 ulimit joined #gluster
15:35 wushudoin joined #gluster
15:42 daMaestro joined #gluster
15:43 micu Hi, Does anyone know where I can get the rpm of glusterfs-openstack-swift-1.10.1 for Centos 6?
15:44 jdarcy joined #gluster
16:02 bala joined #gluster
16:03 lalatenduM joined #gluster
16:06 micu Gluster 3.4.1 is compatible with OpenStack Swift Havana?
16:16 andreask joined #gluster
16:18 jskinner_ joined #gluster
16:21 mohankumar joined #gluster
16:21 dbruhn joined #gluster
16:24 Rio_S2 joined #gluster
16:51 aliguori joined #gluster
16:58 JoeJulian micu: Isn't in the rdo repo?
16:58 geewiz Hi! How do I repair split-brain files that Gluster 3.3 lists without a path, just with a gfid?
16:59 JoeJulian geewiz: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
16:59 glusterbot <http://goo.gl/j981n> (at joejulian.name)
17:00 JoeJulian check the gfid file link count. If it's one, you should be able to just delete it. If it's not, get the inode number and "find -inum $inode" to find the actual filename.
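
A sketch of that lookup, reusing a gfid that appeared in a log snippet earlier in the day purely as an example (brick path assumed):

    BRICK=/export/brick1
    GFID=577abbe5-f2ca-4abc-8b28-8cb852755c74
    GFILE="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"

    stat -c '%h %i' "$GFILE"     # hard-link count and inode number
    # link count 1  -> orphaned gfid file, can simply be deleted
    # link count >1 -> locate the real file on the brick via its inode:
    find "$BRICK" -inum "$(stat -c '%i' "$GFILE")" -not -path "$BRICK/.glusterfs/*"
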
17:01 geewiz JoeJulian: I already read your other article about healing split-brain. But I wasn't sure what to do when I don't have a file path.
17:01 geewiz Okay, that's straightforward. Thanks!
17:02 ndevos geewiz: it is possible that you see the gfid split-brain when a directory is in a split-brain situation, the contents of that directory can be detected as split-brain, but self-heal for those gfids will only work after the directory is healed
17:03 geewiz ndevos: So I'd better heal directories first then.
17:03 ndevos geewiz: yes, do that, and trigger a targeted self-heal for the contents of the directory, that often helps
17:04 geewiz I see. Brilliant, thanks!
17:04 ndevos geewiz: otherwise you may need N+1 self-heal-daemon runs, where N is the depth of the directory-tree
17:04 JoeJulian ... and don't get split-brained.... ;)
17:04 geewiz JoeJulian: I'll try not to! ;-)
17:05 ndevos geewiz: you could look into quorum, that can prevent some of the split brains
17:06 geewiz I'll do that! With a 2-node cluster, this will disable the storage in a split-brain situation, right?
17:06 glusterbot New news from newglusterbugs: [Bug 1002020] shim raises java.io.IOException when executing hadoop job under mapred user via su <http://goo.gl/M8tYEm> || [Bug 1005838] Setting working directory seems to fail. <http://goo.gl/67SUJ3> || [Bug 1006044] Hadoop benchmark WordCount fails with "Job initialization failed: java.lang.OutOfMemoryError: Java heap space at.." on 17th run of 100GB WordCount (10hrs on 4 node cluster) <http
17:07 ndevos well, it will try to, but with only two servers it'll be difficult, maybe you can add a 3rd server without a brick? Only to add a vote to the quorum mechanism?
17:08 geewiz ndevos: That makes sense. I need to read up on that.
17:08 geewiz I'll get back to you with my results. Thanks, ndevos, JoeJulian!
17:09 ndevos geewiz: good luck!
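
For reference, the quorum knobs being suggested look roughly like this on 3.3/3.4; VOLNAME and the third server's name are placeholders:

    # client-side quorum: writes need a majority of the replica set
    gluster volume set VOLNAME cluster.quorum-type auto

    # server-side quorum: a brick-less third peer still provides a vote
    gluster peer probe third-server
    gluster volume set all cluster.server-quorum-ratio 51%
    gluster volume set VOLNAME cluster.server-quorum-type server
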
17:09 nasso joined #gluster
17:29 micu JoeJulian: Not here http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6/
17:29 glusterbot <http://goo.gl/JnIu5z> (at repos.fedorapeople.org)
17:31 geewiz ndevos: How do I self-heal a split-brain directory without removing it completely from one of the two bricks? Removing the gfid xattr?
17:32 micu http://goo.gl/E72K9U says that you can get gluster-swift here: goo.gl/yp0Poe, but the version is 1.8.0
17:32 glusterbot Title: gluster-swift/doc/markdown/quick_start_guide.md at master · gluster/gluster-swift · GitHub (at goo.gl)
17:36 glusterbot New news from newglusterbugs: [Bug 1029597] Geo-replication not work, rsync command error. <http://goo.gl/mnOvOt>
17:38 jbd1 joined #gluster
17:40 Mo_ joined #gluster
17:46 badone joined #gluster
17:46 getup joined #gluster
17:47 daMaestro joined #gluster
17:48 lalatenduM joined #gluster
17:51 RameshN joined #gluster
17:57 andreask joined #gluster
17:59 LoudNois_ joined #gluster
18:03 andreask joined #gluster
18:05 vpshastry joined #gluster
18:57 andreask joined #gluster
18:59 aurigus Hey all, getting an i/o error on 3.2... here is summary of issue and logs:
18:59 aurigus http://pastebin.com/wUfz3CKw
18:59 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:59 aurigus Unfortunately I read the advice above to start on 3.4 too late :)
19:26 geewiz joined #gluster
19:29 Skaag joined #gluster
19:41 ctria joined #gluster
19:43 ira joined #gluster
19:51 cfeller joined #gluster
20:01 cfeller joined #gluster
20:01 lpabon joined #gluster
20:02 jag3773 joined #gluster
20:17 zaitcev joined #gluster
20:25 ricky-ticky joined #gluster
20:59 bulde joined #gluster
21:01 badone joined #gluster
21:21 dbruhn aurigus what OS are you running on?
21:22 aurigus CentOS 6.4 x64
21:22 aurigus I tried to unmount/remount the shares, and it seemed to pick right back up... but that still makes me nervous ;)
21:22 dbruhn what is the filesystem for your gluster bricks?
21:22 aurigus ext4
21:23 dbruhn hmm, there is an ext4 bug in 3.2
21:23 aurigus yum!
21:23 semiosis ,,(ext4)
21:23 dbruhn you should be running xfs under it if I remember right
21:23 glusterbot The ext4 bug has been fixed in 3.3.2 and 3.4.0. Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
21:23 dbruhn Also, are you running iptables, or selinux?
21:23 aurigus glusterfs 3.2.7 built on Jun 11 2012 13:22:29
21:23 dbruhn Yeah, you should at min upgrade to 3.3.2
21:24 semiosis probably bit by the ext4 bug
21:24 aurigus selinux no, iptables, yes but whitelist that adapter
21:24 dbruhn kk
21:24 semiosis go for 3.4.1
21:24 dbruhn then yeah might be the ext4 bug
21:24 dbruhn upgrade and you will be much happier
21:24 dbruhn like I said 3.3.2 at min, and 3.4.1 is ideal
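
A quick way to confirm what the bricks sit on, plus the XFS format usually recommended for new bricks; the device and mount point are examples, not taken from this conversation:

    df -T /export/brick1                 # shows the brick's file system (ext4 here)
    # for a new or replacement brick, XFS with 512-byte inodes leaves
    # room for gluster's extended attributes:
    mkfs.xfs -i size=512 /dev/sdb1
    mount /dev/sdb1 /export/brick1
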
21:28 aurigus I thought 3.2 might be more stable
21:28 aurigus I think thats a good idea though
21:28 semiosis what gave you that idea?
21:30 aurigus just years of experience using distributed file systems
21:31 dbruhn 3.3.2 is super stable and 3.4.1 has been out for months now
21:32 aurigus good to know
21:33 aurigus I'll do that upgrade but will have to schedule it because this is in production
21:33 aurigus thanks for the help all
21:36 ricky-ticky joined #gluster
21:36 Skaag joined #gluster
21:41 daMaestro joined #gluster
21:49 jag3773 joined #gluster
21:54 failshell joined #gluster
22:03 ricky-ticky joined #gluster
22:21 geewiz joined #gluster
22:22 Scandian joined #gluster
22:25 failshel_ joined #gluster
22:45 jskinner_ joined #gluster
22:50 ricky-ticky joined #gluster
22:51 daq joined #gluster
22:53 cjh973 joined #gluster
22:53 cjh973 gluster: what bitwise operations do i need to perform on the trusted.afr.groot-client to figure out how many outstanding heals there are?
22:54 daq Why am I getting "volume create: test: failed: /mnt/data1 or a prefix of it is already part of a volume"
22:54 glusterbot daq: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
22:55 cjh973 I should prob ask without the prefix.  Anyone know how to tease out the outstanding heals from a getfattr return value?
22:57 semiosis cjh973: article about ,,(extended attributes) might help.  i can't say that you don't know what you're talking about, but I don't know what you're talking about
22:57 glusterbot cjh973: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
22:58 daq I am running the latest version of GlusterFS 3.4 and still getting the prefix error.
22:58 daq I'm getting it when trying to create a volume so the fix from the first link doesn't apply.
22:58 cjh973 semiosis: i'll check out the article
22:59 andreask joined #gluster
23:00 cjh973 semiosis: i see.. so the hex output is those 3 values packed together.  Yeah that's what I was getting out.
23:00 daq I used differently named directories for each brick, like data1, data2, etc., and I'm still getting "prefix of it is already part of a volume."
23:00 daq Is there any other fix for this issue?
23:00 semiosis daq: that link *is* the solution
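
For the record, the fix behind that link amounts to clearing the leftover volume markers on the brick directory (and any parent the error names) before retrying the create. A sketch using daq's /mnt/data1, to be run only on a path you really mean to reuse:

    BRICK=/mnt/data1
    setfattr -x trusted.glusterfs.volume-id "$BRICK"
    setfattr -x trusted.gfid "$BRICK"
    rm -rf "$BRICK/.glusterfs"
    service glusterd restart             # then retry the volume create
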
23:01 semiosis and you only ever get that message when you try to create a volume
23:04 ricky-ticky joined #gluster
23:04 daq i'm an idiot, thanks :-)
23:04 semiosis lots of people get stumped by that, you're not alone
23:06 cjh973 semiosis: for instance my getfattr returns: 0x000000820000000000000000 for the afr.client.  Do you know how to tease the pieces out from it?  which part is data operations outstanding, which is metadata, etc?
23:06 ricky-ticky joined #gluster
23:07 semiosis i do not
23:07 semiosis never had to dig that deep
23:07 cjh973 ok
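
For what it's worth, the trusted.afr changelog is commonly documented as three 32-bit big-endian counters for pending data, metadata and entry operations. Assuming that layout, a shell sketch of splitting the value cjh973 pasted:

    val=000000820000000000000000          # from getfattr -e hex, without the 0x
    printf 'data=%d metadata=%d entry=%d\n' \
        "$((16#${val:0:8}))" "$((16#${val:8:8}))" "$((16#${val:16:8}))"
    # -> data=130 metadata=0 entry=0   (0x82 pending data operations)
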
23:07 semiosis curious, why do you want to know such details?
23:08 cjh973 because the gluster heal command doesn't return sometimes
23:08 cjh973 i'm working around it
23:08 andreask joined #gluster
23:09 cjh973 if one of the heal daemons isn't running for some reason it just gives up and says there's no heals outstanding.  that's not good enough :)
23:09 daq left #gluster
23:31 Technicool joined #gluster
23:32 spechal joined #gluster
23:33 Scandian joined #gluster
23:48 davidbierce joined #gluster
23:50 jskinner joined #gluster
