IRC log for #gluster, 2012-11-14


All times shown according to UTC.

Time Nick Message
00:06 inodb_ left #gluster
00:28 chandank left #gluster
00:28 blubberdi JoeJulian: thanks
01:28 lng joined #gluster
01:31 lng Hello! I have 8225 split-brain entries. The majority of them look like '<gfid:0b924d5f-52fc-4c99-a903-6a77e1a8468b>' and the rest are paths like '/2170000/2179000/2179500'. What can I do to fix the situation?
01:35 robo joined #gluster
01:51 kevein joined #gluster
02:06 m0zes @split brain
02:06 glusterbot m0zes: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
02:06 m0zes @split-brain
02:06 glusterbot m0zes: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
02:06 m0zes lng ^^
02:12 lng m0zes: thank you!
02:16 lng m0zes: in my case, I can't decide which files are good
02:22 sunus joined #gluster
02:25 m0zes i typically base it on file modification times, but if you can't decide, back up the one you decide to remove ;)
02:28 nick5 joined #gluster
02:33 elyograg joined #gluster
02:46 ika2810 joined #gluster
02:55 lng m0zes: I have tons of files
03:05 zwu joined #gluster
03:07 dalekurt joined #gluster
03:36 gcbirzan1 joined #gluster
03:46 sripathi joined #gluster
03:49 lng what is SBFILE? 'GFID=$(getfattr -n trusted.gfid --absolute-names -e hex ${BRICK}${SBFILE} | grep 0x | cut -d'x' -f2)'
03:49 lng split brain?
03:50 lng but I have '2012-11-14 00:02:19 <gfid:498a4485-b626-4b79-8015-66c2ef536164>'
03:50 lng what is gfid?
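
For context, the snippet lng quotes at 03:49 reads a file's gfid (GlusterFS's internal file UUID, stored in the trusted.gfid extended attribute) directly from a brick. A minimal sketch, assuming a hypothetical brick mounted at /export/brick and using the path from the split-brain listing above:

    BRICK=/export/brick                      # hypothetical brick path
    SBFILE=/2170000/2179000/2179500          # split-brain file, relative to the brick root
    getfattr -n trusted.gfid --absolute-names -e hex ${BRICK}${SBFILE}
    # prints something like: trusted.gfid=0x498a4485b6264b79801566c2ef536164
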
04:19 bulde joined #gluster
04:21 sripathi joined #gluster
04:38 mohankumar joined #gluster
04:40 bulde joined #gluster
04:58 zwu joined #gluster
05:12 shylesh_ joined #gluster
05:12 shylesh joined #gluster
05:15 deepakcs joined #gluster
05:30 hagarth joined #gluster
05:41 bulde joined #gluster
05:52 kshlm joined #gluster
05:52 kshlm joined #gluster
05:54 Guest70479 joined #gluster
06:01 shylesh joined #gluster
06:08 hagarth joined #gluster
06:14 ika2810 joined #gluster
06:18 puebele joined #gluster
06:21 Guest70479 joined #gluster
06:36 raghu joined #gluster
06:41 bulde joined #gluster
07:02 hagarth joined #gluster
07:09 ctria joined #gluster
07:11 zwu joined #gluster
07:14 Humble joined #gluster
07:14 bala1 joined #gluster
07:14 lng how to fix split-brain by <gfid:*>?
07:21 lng s there any script?
07:21 lng is*
07:31 lng anybody use this? https://github.com/vikasgorur/gfid
07:31 glusterbot Title: vikasgorur/gfid · GitHub (at github.com)
07:34 JoeJulian lng: yes, but not since the bug that was written to cure was fixed. That was about 3.2.5.
07:36 JoeJulian To cure gfid splitbrains just delete the appropriate file in .glusterfs. I would delete it from all the replicas as it will be healed by the actual file if it's still supposed to exist.
07:36 JoeJulian So for your example, delete .glusterfs/49/8a/498a4485-b626-4b79-8015-66c2ef536164
07:36 JoeJulian Notice that the directories correspond to the first 4 digits of the gfid.
07:38 JoeJulian If that makes you nervous you can move the file somewhere off the brick instead of just deleting it.
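
A concrete version of JoeJulian's recipe, as a sketch only -- the brick path here is hypothetical and the gfid is the one from lng's log:

    BRICK=/export/brick                                   # hypothetical brick mount
    GFID=498a4485-b626-4b79-8015-66c2ef536164
    F=${BRICK}/.glusterfs/${GFID:0:2}/${GFID:2:2}/${GFID}
    ls -l "$F"                                            # have a look before touching anything
    mkdir -p /root/gfid-backup
    mv "$F" /root/gfid-backup/                            # or: rm -f "$F" once you are comfortable
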
07:49 lng JoeJulian: thanks for the help!
07:49 lng can I delete it on active volume?
07:49 JoeJulian yes
07:49 lng thanks
07:50 JoeJulian Well, I shouldn't just say "yes" but rather "I have and I didn't notice any ill effects."
07:51 lng JoeJulian: is it okay to execute `gluster volume heal storage info split-brain` on different peers of the same cluster?
07:52 JoeJulian Sure. You /should/ get the same result from any of them.
07:52 lng ah, I see
07:52 JoeJulian If you don't, I'd consider it a bug.
07:52 64MAB8WP2 joined #gluster
08:02 Azrael808 joined #gluster
08:03 lkoranda joined #gluster
08:05 shylesh joined #gluster
08:05 shylesh_ joined #gluster
08:16 ekuric joined #gluster
08:20 JoeJulian @hack
08:20 glusterbot JoeJulian: The Development Work Flow is at http://goo.gl/ynw7f
08:21 manik joined #gluster
08:25 gbrand_ joined #gluster
08:37 andreask joined #gluster
08:37 shylesh joined #gluster
08:37 shylesh_ joined #gluster
08:39 lng JoeJulian: after gfids were deleted, do I need to engage self-heal?
08:41 JoeJulian lng: It probably won't matter. Be aware that the entries won't be removed, but they are time stamped. As long as new entries stop appearing you should be good.
08:42 sunus joined #gluster
08:44 bulde joined #gluster
08:47 lng JoeJulian: entries won't be removed?
08:47 lng JoeJulian: but I just deleted them
08:50 TheHaven joined #gluster
08:51 lng and is output the same? `gluster volume heal storage info split-brain`
08:52 JoeJulian lng: https://bugzilla.redhat.com/show_bug.cgi?id=871987
08:52 glusterbot <http://goo.gl/DG74F> (at bugzilla.redhat.com)
08:52 glusterbot Bug 871987: low, medium, ---, vsomyaju, ASSIGNED , Split-brain logging is confusing
08:55 lng JoeJulian: no solution
08:55 lng ?
08:55 lng Gluster is rather buggy
08:56 lng in our company, some indian developers produce many bugs
08:58 JoeJulian If you would actually read that bug, it's not a bug in the sense that it doesn't break functionality but rather that it's unclear to the user.
08:58 niv joined #gluster
08:59 lng JoeJulian: it is bug
08:59 lng not severe, but bug
08:59 JoeJulian ok, whatever.
08:59 lng JoeJulian: thanks for pointing
09:00 JoeJulian I'm not here to argue, but rather to help.
09:00 JoeJulian And now I'm going to bed.
09:00 * JoeJulian departs.
09:02 Triade joined #gluster
09:03 lng JoeJulian: good night
09:07 lh joined #gluster
09:07 lh joined #gluster
09:20 efries joined #gluster
09:23 duerF joined #gluster
09:23 lh joined #gluster
09:28 dobber joined #gluster
09:29 Tarok joined #gluster
09:30 TheHaven joined #gluster
09:30 Tarok left #gluster
09:55 Jippi joined #gluster
09:56 DaveS joined #gluster
09:56 tryggvil joined #gluster
10:00 Staples84 joined #gluster
10:17 bulde joined #gluster
10:56 Nr18 joined #gluster
11:00 helloadam joined #gluster
11:29 shylesh joined #gluster
11:32 joeto joined #gluster
11:32 edward1 joined #gluster
11:33 Staples84 joined #gluster
11:48 mooperd_ joined #gluster
11:52 vpshastry joined #gluster
11:54 quillo joined #gluster
12:06 bulde joined #gluster
12:08 mooperd_ Hi,
12:08 mooperd_ I am trying to install Gluster with swift
12:08 mooperd_ http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/
12:08 glusterbot <http://goo.gl/Wf7Yx> (at www.gluster.org)
12:09 mooperd_ Error: Package: glusterfs-swift-3.3.1-2.el6.noarch (epel-glusterfs-swift)
12:09 mooperd_ Requires: python-greenlet >= 0.3.1 .. Requires: python-netifaces .. Requires: python-webob1.0
12:09 mooperd_ etc
12:09 mooperd_ Centos 6.3
12:10 mooperd_ kkeithley: hi
12:10 ndevos mooperd_: I've got all those from epel...
12:11 64MAB8WP2 left #gluster
12:11 ndevos "yum provides python-netifaces python-webob1.0 python-greenlet" shows them all for me
12:12 mooperd_ ndevos: um, confused
12:13 ndevos mooperd_: got the epel repository active? "yum repolist" should show that
12:14 mooperd_ ndevos: hmm, I cant find a .repo
12:15 ndevos mooperd_: right, check http://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F
12:15 glusterbot <http://goo.gl/6Ppg0> (at fedoraproject.org)
12:16 mooperd_ ndevos: thanks man
12:16 ndevos you're welcome, mooperd_
12:16 mooperd_ yum still confuses the hell out of me :)
12:25 RNZ joined #gluster
12:29 Psi-Jack joined #gluster
12:40 bulde joined #gluster
12:46 Psi-Jack joined #gluster
12:54 stephane joined #gluster
12:55 stephane_ joined #gluster
12:57 andreask left #gluster
12:58 esm_ joined #gluster
13:07 stopbit joined #gluster
13:09 mooperd_ does the swift bit use the linux users?
13:13 kkeithley Not sure I understand your question. Swift uses tempauth, which requires you to define users in the swift proxy.conf file, and there's no connection between that and users in /etc/passwd.
13:13 kkeithley Sorry, I should say that gluster-swift still uses tempauth.
13:14 mooperd_ kkeithley: ah, there is no ref to users in your howto
13:14 mooperd_ http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/
13:14 glusterbot <http://goo.gl/Wf7Yx> (at www.gluster.org)
13:14 mooperd_ oh, no :)
13:14 mooperd_ there is
13:14 mooperd_ Im just being dumb
13:15 jdarcy Maybe you're just having trouble believing it.  ;)
13:15 mooperd_ user_$myvolname_$username=$password .admin
13:15 mooperd_ This is confusing however
13:16 hagarth joined #gluster
13:16 kkeithley And we are working on making it use, gahhh, drawing a blank.
13:16 mooperd_ $myvolname is the gluster vol I assume
13:16 kkeithley yes
13:16 mooperd_ username is….the username
13:17 mooperd_ and the .admin gives that account admin rights I assume?
13:17 kkeithley I'd look at http://docs.openstack.org/api/openstack-object-storage/1.0/content/ for more
13:17 glusterbot <http://goo.gl/ZWr7a> (at docs.openstack.org)
13:17 kkeithley yes
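
For anyone following along, the tempauth line being discussed lives in the swift proxy configuration. A minimal sketch using the volume name and credentials that appear later in this session; the exact file path and section layout may differ in your packaging:

    # /etc/swift/proxy-server.conf (assumed path)
    [filter:tempauth]
    use = egg:swift#tempauth
    # user_<volume>_<username> = <password> [.admin]
    user_test_andrew = x7deix7dei .admin
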
13:17 stephane1 joined #gluster
13:17 mooperd_ how old / stable is this swift stuff now?
13:17 mooperd_ for gluster
13:18 davdunc joined #gluster
13:18 davdunc joined #gluster
13:18 kkeithley gluster-swift is based on Essex 1.4.8 currently. We're working on rebasing to Folsom 1.7.4 or later
13:19 kkeithley So it's as stable as 1.4.8 is.
13:19 mooperd_ kkeithley: how big is the userbase?
13:19 kkeithley No idea
13:19 mooperd_ kkeithley: any serious bugs?
13:20 kkeithley Maybe johnmark knows.
13:20 johnmark kkeithley: you rang?
13:20 johnmark mooperd_: we don't have enough users to adequately know how well it performs
13:21 johnmark although we have more now over the last couple of months
13:21 mooperd_ johnmark: hmm, so it might be a bit silly to put a PB into production then :)
13:21 mooperd_ I trust gluster I think
13:22 kkeithley I'm not personally aware of serious bugs, but I'm not that close to openstack-swift devel to know.
13:23 jdarcy We have some pretty large service-provider customers using it.
13:23 johnmark jdarcy: that's good to know
13:23 johnmark gluster-swift?
13:24 jdarcy Swift is not inherently a high-performance type of storage, but within those limits we mostly do OK.  Of course we're finding and fixing performance issues all the time, just as on any project.
13:24 johnmark mooperd_: I think the old adage of "trust, but verify" remains - just as it does for any deployment
13:24 jdarcy johnmark: Yes.  My impression is that a significant percentage of new deployments (as measured by machines or PB) are being driven largely or entirely by UFO.
13:24 kkeithley gluster-swift = UFO
13:25 johnmark jdarcy: cool!
13:25 kkeithley (And we are still using a forked version of swift, i.e. heavily patched, for UFO.)
13:25 jdarcy Personally I find the object-store "wave" a bit strange, but then I'm an old fart who can't let go of old storage paradigms.
13:26 kkeithley And BTW the new auth, which we will be moving to eventually, is Keystone. Don't know why I can't keep that name in my head.
13:26 robo joined #gluster
13:26 * jdarcy can't help wonder if the name is from Keystone Kops.
13:26 mooperd_ jdarcy: Im with you there
13:28 johnmark jdarcy: it's all about simplicity
13:28 johnmark it's in one of my slides for pete's sake ;)
13:28 mooperd_ http://pastie.org/5377034
13:28 kkeithley simplicity? Or UFO? In your slides?
13:28 glusterbot Title: #5377034 - Pastie (at pastie.org)
13:28 mooperd_ I think Im doin something silly here...
13:28 jdarcy johnmark: Nobody ever reads those.
13:29 jdarcy (For those who don't know, johnmark and I have a relationship based on mutual teasing)
13:29 kkeithley We can feel the love
13:29 jiffe1 joined #gluster
13:29 kkeithley ew
13:30 jdarcy Last night my wife described my remodeling project as a labor of love.  I told her it was a labor of loathing.  I hate the old wallpaper.
13:30 mooperd_ `curl -v -X GET -H 'X-Auth-Token: AUTH_tkd904f9cc3b1b469c9f564f746a101538' https://176.74.56.57:443/v1/AUTH_test -k`
13:30 kkeithley I'll do your wallpaper if you come finish my kitchen cabinets
13:33 stephane_ joined #gluster
13:33 jdarcy mooperd_: Are those the values from .../auth/1.0?
13:34 mooperd_ jdarcy: yea
13:34 mooperd_ http://pastie.org/5377034
13:34 glusterbot Title: #5377034 - Pastie (at pastie.org)
13:34 mooperd_ Im not actually sure what I should expect
13:35 kkeithley mooperd_: I revised the users part my setup guide a bit. See if that makes things a bit less unclear.
13:35 mooperd_ kkeithley: I have the users bit licked I think
13:35 mooperd_ I am getting a key
13:35 mooperd_ and I think the key is actually working :)
13:35 kkeithley for the next person...
13:35 jdarcy mooperd_: That paste doesn't show the token exchange.  Is it possible the token has expired?
13:36 mooperd_ I think curl in OSX is wierd
13:36 kkeithley token should last for 24 hours by default
13:36 kkeithley IIRC
13:37 mooperd_ ah,
13:37 mooperd_ < HTTP/1.1 401 Unauthorized
13:37 mooperd_ ok
13:37 johnmark mooperd_: make sure you have memcached running
13:37 johnmark or you'll need to continually get a new auth key
13:38 mooperd_ johnmark: aha
13:38 johnmark mooperd_: that's tripped me up a couple of times
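
On CentOS 6 that boils down to something like this, assuming the stock memcached package is installed:

    service memcached start
    chkconfig memcached on    # also start it on boot
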
13:39 mooperd_ now I have been upgraded to 503 error :=
13:39 johnmark haha... getting closer!
13:39 mooperd_ ^^
13:39 johnmark mooperd_: now you have to get a new auth key and try that one
13:39 johnmark I think..
13:40 mooperd_ hmm, I got a 404 trying to make a new container
13:42 mooperd_ curl -v -X PUT -H 'X-Auth-Token: AUTH_tk128f27eb385a42008c302af1e1f776b6' https://176.74.56.57:443/v1/AUTH_test/new_container -k
13:42 glusterbot <http://goo.gl/9SvwF> (at 176.74.56.57:443)
13:42 mooperd_ my Volume Name: test
13:43 johnmark mooperd_: and that gives you 404?
13:44 johnmark on the server side, can you verify that the volume is mounted?
13:44 mooperd_ johnmark: ah
13:44 johnmark and were you ever able to list the contents of the volume?
13:44 johnmark using the auth key
13:44 mooperd_ johnmark: no
13:45 mooperd_ So I have to mount the gluster volume locally?
13:45 mooperd_ localhost:test on /mnt/gluster-object/AUTH_test type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
13:45 kkeithley swift will mount the gluster volume — presuming everything else is working correctly
13:45 johnmark mooperd_: on the server - that part should be done automatically
13:45 NuxRo @repo
13:45 glusterbot NuxRo: I do not know about 'repo', but I do know about these similar topics: 'repository', 'yum repo', 'yum repository', 'git repo', 'ppa repo', 'yum33 repo', 'yum3.3 repo'
13:46 NuxRo @yum repo
13:46 glusterbot NuxRo: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
13:46 NuxRo @ppa
13:46 glusterbot NuxRo: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
13:46 johnmark mooperd_: I just wanted you to check to make sur ethat had been done
13:46 mooperd_ it does seem to be mounted
13:46 johnmark ok
13:46 puebele joined #gluster
13:48 mooperd_ shouldnt I get a new X-Auth-Token: everytime I run:
13:48 mooperd_ curl -v -H 'X-Storage-User: test:andrew' -H 'X-Storage-Pass: x7deix7dei' -k https://176.74.56.57:443/auth/v1.0
13:49 johnmark mooperd_: if it's stored correctly in memcache, it should just return the same one
13:49 mooperd_ johnmark: so that bit seems to be working ok
13:50 mooperd_ ok, so i ran
13:51 mooperd_ curl -v -X GET -H 'X-Auth-Token: AUTH_tk128f27eb385a42008c302af1e1f776b6' https://176.74.56.57:443/v1/AUTH_test -k
13:51 johnmark and you got 503
13:51 mooperd_ HTTP/1.1 503 Internal Server Error
13:51 johnmark I'd restart swift
13:51 mooperd_ johnmark: ooh. your good
13:51 johnmark because I just ran the same command and got 503, too :)
13:52 mooperd_ johnmark: and memcached also?
13:52 mooperd_ Or
13:52 mooperd_ flush it?
13:52 johnmark mooperd_: eh, shouldn't have to, but what do I know :)
13:52 mooperd_ Its still returning the same key
13:53 johnmark huh. and still 503.
13:53 johnmark this is the point where I think you start checking your logs in /var/log/gluster
13:53 johnmark and see if you can find anything there
13:53 plarsen joined #gluster
13:56 mooperd_ johnmark:
13:56 mooperd_ [2012-11-14 08:52:28.918239] W [client3_1-fops.c:2630:client3_1_lookup_cbk] 0-test-client-0: remote operation failed: No data available. Path: /gluster-object/AUTH_test (00000000-0000-0000-0000-000000000000)
13:56 mooperd_ [2012-11-14 08:52:28.918258] W [fuse-bridge.c:292:fuse_entry_cbk] 0-glusterfs-fuse: 99: LOOKUP() /gluster-object/AUTH_test => -1 (No data available)
13:57 aliguori joined #gluster
13:58 mooperd_ johnmark: er
13:59 ika2810 joined #gluster
14:02 mooperd_ oh
14:03 mooperd_ I think my urls are messed up.
14:03 mooperd_ < HTTP/1.1 200 OK
14:03 mooperd_ < X-Storage-Url: https://127.0.0.1:443/v1/AUTH_test
14:03 mooperd_ < X-Storage-Token: AUTH_tk7d78d37abbe1499c8adc77761700fbf1
14:03 mooperd_ < X-Auth-Token: AUTH_tk7d78d37abbe1499c8adc77761700fbf1
14:03 mooperd_ < Content-Length: 0
14:03 mooperd_ < Date: Wed, 14 Nov 2012 14:01:42 GMT
14:03 mooperd_ https://127.0.0.1:443/v1/AUTH_test is probably not correct
14:08 kkeithley mooperd_: which Linux dist is this on?
14:08 mooperd_ kkeithley: centos6.3
14:12 lh joined #gluster
14:12 lh joined #gluster
14:16 balunasj joined #gluster
14:17 mooperd_ kkeithley: any ideas?
14:19 kkeithley joined #gluster
14:25 kkeithley mooperd_: not off the top of my head.
14:26 mooperd_ kkeithley: darn
14:26 mooperd_ Is this the auth problem you mention at the bottom of your howto?
14:27 kkeithley dunno. I thought I only saw that on RHEL6
14:27 kkeithley Or do I say it was on CentOS?
14:28 kkeithley (My xserver just freaked out and I'm getting signed back on after resetting it)
14:29 rwheeler joined #gluster
14:35 mooperd_ I am on centos
14:35 mooperd_ kkeithley: Actually, I just rebooted the VM
14:35 mooperd_ and now swift does not seem to be mounting the gluster vol
14:38 kkeithley I only saw the auth problem on RHEL6. Someone else said that everything worked for them on CentOS. (and I don't remember at this point whether I tried it on CentOS myself.)
14:41 [{L0rDS}] joined #gluster
14:42 mooperd_ kkeithley: how about swift not mounting the gluster?
14:42 kkeithley I haven't seen that happen, not when everything else was right.
14:42 mooperd_ kkeithley: aha
14:43 mooperd_ it only seems to mount when it gets a request
14:43 mooperd_ Its there now :)
14:43 kkeithley correct
14:43 kkeithley good
14:43 mooperd_ kkeithley: so the auth bit must be working ok then I suppose?
14:43 mooperd_ else the gluster wouldnt be mounting
14:43 kkeithley give me a few, I'm in the middle of a couple things. I'll fire up my centos vm and try
14:43 mooperd_ kkeithley: cool, thanks
14:43 JoeJulian I also have a quick run through at ,,(ufopilot) that I pretty much took from kkeithley's page.
14:43 glusterbot A simple html5 interface to your GlusterFS UFO store is at https://github.com/joejulian/ufopilot
14:45 twx_ on that subject, is there a proper, working, web interface for gluster out there? (just out of curiosity)
14:46 twx_ not that gluster isn't dead on simple from command line, but it could be nice with some monitoring etc
14:47 JoeJulian oVirt's the only supported gui at this time. There's been rumor of webmin doing something.
14:47 kkeithley twx_: it's part of oVirt/RHEV-M
14:48 [{L0rDS}] Hi... I need some help: I'm making a test to verify the failover of GlusterFS using Gluster Native Client. I created a shell script that reads one simple file from the mountpoint and during this reading I shutdown one of the 2 replicas that have this file. The problem is: The GlusterFS is taking about 15 seconds to change to the another replica. Is it right?? I thought that it would be in
14:48 [{L0rDS}] realtime... Can anyone help me please??
14:49 JoeJulian If you're using the ,,(yum repo) then I know that an actual shutdown will not cause any timeouts.
14:49 glusterbot kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
14:49 JoeJulian @ping-timeout
14:49 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
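
The timeout is tunable per volume if 42 seconds is too long for your failure testing; a sketch, with a hypothetical volume name:

    gluster volume set myvol network.ping-timeout 10   # seconds; the default is 42
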
14:50 [{L0rDS}] joined #gluster
14:51 twx_ kkeithley: yeah sure, but that's not just for gluster
14:52 [{L0rDS}] Hi... I need some help: I'm making a test to verify the failover of GlusterFS using Gluster Native Client. I created a shell script that reads one simple file from the mountpoint and during this reading I shutdown one of the 2 replicas that have this file. The problem is: The GlusterFS is taking about 15 seconds to change to the another replica. Is it right?? I thought that it would be in
14:52 [{L0rDS}] realtime... Can anyone help me please??
14:53 JoeJulian You're about to get the boot.
14:53 aricg [{L0rDS}], @ping-timeout
14:53 aricg @ping-timeout
14:53 glusterbot aricg: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
14:55 [{L0rDS}] aricg: So, this is a normal behaviour?
14:56 JoeJulian [{L0rDS}]: If the TCP connections are closed properly, there is no timeout. If you're using the ,,(yum repo) then I know that an actual shutdown will not cause any timeouts.
14:56 glusterbot [{L0rDS}] : kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
14:57 obryan joined #gluster
15:00 [{L0rDS}] JoeJulian: What is ,,(yum repo)? I think that the TCP connection is being closed properly once I shutdown the machine. The O.S. do this properly...
15:00 glusterbot JoeJulian: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
15:02 obryan Has anybody run into a problem where you try to generate a volume (gluster volume create...) and you get no reply at all, no error, no confirmation, nothing
15:02 JoeJulian [{L0rDS}]: First, I'll often respond without typing a nick, especially when they start with characters that require a reach (I have short pinkies) so feel free to read lines that don't start with your nick. Which distro are you using?
15:02 obryan I'm getting this in both 3.2.5 and 3.3.1
15:04 JoeJulian obryan: Hello again. Come back to 3.3.1 and wipe out /var/lib/glusterd from those two servers before starting glusterd. I think you have some seriously bad state in there which caused the segfault you saw yesterday.
15:04 [{L0rDS}] CentOS6
15:05 noob2 joined #gluster
15:05 JoeJulian [{L0rDS}]: Where did you install from?
15:05 [{L0rDS}] RPM
15:05 [{L0rDS}] I downloaded the packages from the glusterfs site...
15:08 JoeJulian [{L0rDS}]: glusterbot has given you information following every mention of a ,,(yum repo). That's why we mention it. It does make it a lot easier to keep current and make sure your dependencies are there. ... and this was version 3.3.1 that you installed, correct?
15:08 glusterbot [{L0rDS}] : kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
15:08 stephane joined #gluster
15:09 kkeithley (the epel rpms on download.gluster.org are the same as what's in my repo.)
15:09 [{L0rDS}] JoeJulian: correct
15:09 JoeJulian kkeithley: Unless it was older than 3.3. Just wanted to make sure. :D
15:10 kkeithley right
15:11 JoeJulian Ok then, [{L0rDS}], you've gotten me perplexed. As long as "glusterfsd stop" is run before "network stop" that timeout shouldn't happen, and those rpms do configure that correctly... hmm...
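
In other words, a clean reboot of a server looks roughly like this (service names as shipped in those RPMs, per JoeJulian's description), so the bricks close their TCP connections before the network goes away:

    service glusterfsd stop   # stops the brick processes and closes their connections cleanly
    service glusterd stop
    shutdown -r now
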
15:11 [{L0rDS}] Everything is working fine... The replicas are being created (replica 2), the gluster peer status command is showing the right information about the peers and so on...
15:13 JoeJulian Could you fpaste the client log from around the time that you tested this?
15:13 changi left #gluster
15:14 tryggvil_ joined #gluster
15:14 stephane joined #gluster
15:15 [{L0rDS}] There is no timeout. When I shutdown one of the replicas, the shellcript that is reading the file stops for 15 seconds and then continues. It means that GlusterFS is taking about 15 seconds to change to the another replica... got it?
15:15 JoeJulian I've got it more than you could possibly know.
15:15 [{L0rDS}] eheheh... good
15:16 [{L0rDS}] Can I paste the log here?
15:16 JoeJulian fpaste.org
15:19 [{L0rDS}] do you want the cli.log ou the <mountpoint>.log?
15:20 kkeithley mooperd_: okay, I just went through all the steps in my setup guide on my centos61 vm and it all worked.
15:20 [{L0rDS}] on /var/log/glusterfs...
15:21 Staples84 joined #gluster
15:22 mooperd_ kkeithley: centos6.1
15:22 mooperd_ kkeithley: hmm, so its possible its broken on 6.3?
15:22 nightwalk joined #gluster
15:22 kkeithley yeah, I don't have a 6.3
15:23 mooperd_ kkeithley: ok, Ill find a 6.1 image and try again
15:23 wushudoin joined #gluster
15:25 [{L0rDS}] JoeJulian: http://fpaste.org/oaeF/
15:25 glusterbot Title: Viewing client_mountpoint.log by dfalcao (at fpaste.org)
15:29 JoeJulian Yep, "Connection reset by peer". How about the brick log around the 11:22:21 timestamp +-43 seconds?
15:31 [{L0rDS}] http://fpaste.org/vJXg/
15:31 glusterbot Title: Viewing brick_that_was_restarted.log by dfalcao (at fpaste.org)
15:32 [{L0rDS}] its the log of the brick that the host was restarted
15:32 [{L0rDS}] "/var/log/glusterfs/bricks/mountpoint.log"
15:32 mooperd_ kkeithley: could you add in a bit into your howto about EPEL
15:32 kkeithley sure
15:33 mooperd_ It had me banging my head against my screen for a couple of hours :)
15:33 kkeithley You mean adding the epel repo to your centos?
15:35 bulde joined #gluster
15:36 dstywho joined #gluster
15:40 Triade joined #gluster
15:44 JoeJulian Sorry, [{L0rDS}], I'm at work and for some reason someone wanted me to do something they pay me to do... I hate when they interrupt my conversations. :)
15:45 JoeJulian Ok, that's odd. There's nothing in that log before 11:22:29 huh?
15:45 mooperd_ kkeithley: yes
15:45 mooperd_ and also the start memcached bit :)
15:46 kkeithley already added the memcached start
15:50 obryan Hey Joe, I'm not sure if this is an inconsistency in the docs, but I tried creating a volume on server1 without doing a peer/probe on it, but it refused saying it wasn't in the list of peers
15:50 obryan But the documentation says not to add localhost
15:51 kkeithley obryan: that's correct
15:51 obryan so how does server1 participate then?
15:53 mooperd_ kkeithley: http://pastie.org/5377649 :(
15:53 glusterbot Title: #5377649 - Pastie (at pastie.org)
15:53 kkeithley you should be able to use `gluster volume create $volname server1:$pathtobrickdir
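
Spelled out, with the names obryan is using (assuming glusterd is running on server1 and /mnt/shared exists there):

    gluster volume create testvol server1:/mnt/shared
    gluster volume start testvol
    gluster volume info testvol
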
15:53 kkeithley port 8080?
15:54 mooperd_ kkeithley: I excluded the ssl bit
15:54 kkeithley okay, I suppose that should work.
15:55 JoeJulian mooperd_: You need the v1 in there... http://176.74.56.57:8080/v1/AUTH_test/wobble
15:55 mooperd_ kkeithley: hmm, the gluster is not mounted
15:55 mooperd_ ah, its mounted not
15:55 mooperd_ now
15:56 mooperd_ < HTTP/1.1 503 Internal Server Error
15:56 mooperd_ however
15:56 mooperd_ I am missing something
15:57 ika2810 joined #gluster
15:57 mooperd_ When I try and create container I see a 403 and when I try and list containters I see 503
15:57 [{L0rDS}] JoeJulian: http://fpaste.org/9wQN/
15:57 glusterbot Title: Viewing brick_that_was_restarted.log by dfalcao (at fpaste.org)
15:57 [{L0rDS}] It's all that I have
15:57 ika2810 joined #gluster
15:57 obryan nope, it is the same as standalone as it is with the two servers.  it just sits there for a couple minutes and gives no output
15:59 kkeithley and glusterd is running on server1?
15:59 ika2810 joined #gluster
15:59 JoeJulian [{L0rDS}]: Odd. Those logs aren't supposed to be truncated on a reboot. I was really hoping to be able to read the timing of the shutdown.
15:59 obryan and I'm getting this non-response from both servers when i try creating a volume on a single server
15:59 obryan yes kkeithley
15:59 obryan on both
15:59 kkeithley firewall (iptables?)
15:59 obryan actually gluster won't work if glusterd isn't running
15:59 obryan i have checked, i have ports 111 and 24007-24017 open
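
A sketch of the firewall rules obryan describes, in iptables form (interfaces and source restrictions left out; adjust to taste):

    iptables -A INPUT -p tcp --dport 111 -j ACCEPT            # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 24007:24017 -j ACCEPT    # glusterd and brick ports
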
16:00 [{L0rDS}] JoeJulian: What log do you want? Could you tell me the name of the log and the path?
16:00 FU5T joined #gluster
16:00 ika2810 joined #gluster
16:00 JoeJulian ~pasteinfo | [{L0rDS}]
16:00 glusterbot [{L0rDS}] : Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:03 obryan ok kkeithley i did this inside the gluster CLI and from inside that running volume create testvol server-1:/mnt/shared
16:03 tryggvil joined #gluster
16:03 obryan from that I get "Operation failed"
16:04 [{L0rDS}] JoeJulian: http://fpaste.org/ijjB/
16:04 glusterbot Title: Viewing gluster volume info by dfalcao (at fpaste.org)
16:04 kkeithley server1 or server-1? is this on a multi-homed box?
16:05 obryan sorry server1
16:07 obryan This is the etc and cli logs:
16:07 obryan http://dpaste.org/TLtJ3/
16:07 glusterbot Title: dpaste.de: Snippet #213114 (at dpaste.org)
16:09 JoeJulian obryan: "cat /var/lib/glusterd/glusterd.info" is the UUID 9d185ea5-d427-463c-afe0-5f295c378ae5 ?
16:10 JoeJulian Wait a minute.. I thought there were no peers...
16:11 JoeJulian I see 9d185ea5-d427-463c-afe0-5f295c378ae5 and 9104f632-fafd-4740-9975-073bfe853818.
16:12 JoeJulian [{L0rDS}]: Which server were you rebooting?
16:12 [{L0rDS}] glusterfs3
16:13 [{L0rDS}] glusterfs4 have the another replica
16:13 JoeJulian So on glusterfs3 the log file would be /var/log/glusterfs/bricks/glusterdata.log
16:14 [{L0rDS}] correct
16:14 kkeithley Doh. I just tried my swift setup guide on RHEL6.3 and it all worked. I think my original auth failure may have been because I neglected to start memcached.
15:15 mooperd_ kkeithley: !!! how are you making it work?
16:16 JoeJulian [{L0rDS}]: But everything before the reboot is gone. This is atypical behavior. What's different about your reboot over the normal 'shutdown -r now'? Is /var/log on a ramdisk or something?
16:16 mooperd_ kkeithley: this is horribly frustrating
16:16 ekuric left #gluster
16:16 kkeithley yeah, I can imagine. Wish I knew what to tell you.
16:16 JoeJulian selinux?
16:17 * JoeJulian brings out the old whipping boy...
16:18 mooperd_ kkeithley: could I invite you to log in?
16:18 kkeithley want me to script what I did?
16:18 kkeithley I can do that too
16:18 * mooperd_ thinking how he would do that
16:20 Venkat1 joined #gluster
16:26 Humble joined #gluster
16:26 puebele3 joined #gluster
16:27 [{L0rDS}] JoeJulian: Nope. I'm restarting using the 'shutdown -r now' command, but all the logs that I have I pasted at that link.
16:29 obryan Sorry Joe, I was afk
16:30 daMaestro joined #gluster
16:31 deckid joined #gluster
16:32 nightwalk joined #gluster
16:32 obryan JoeJulian,  I have removed/purged all the glusterfs yesterday but when I reinstalled both servers still have my peers in the peer listing, where is that coming from?
16:34 pdurbin gluster for kvm. thanks, sjoeboo: http://software.rc.fas.harvard.edu/ganglia2/ganglia2_storage/?c=gvm&m=load_one&r=hour&s=by%20name&hc=4&mc=2
16:34 glusterbot <http://goo.gl/ttDeI> (at software.rc.fas.harvard.edu)
16:34 JoeJulian obryan: /var/lib/glusterd/peers
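
For reference, glusterd keeps its identity and its view of the pool on disk, which is why purging the packages alone doesn't forget old peers; a quick way to inspect it:

    cat /var/lib/glusterd/glusterd.info   # this server's UUID
    ls /var/lib/glusterd/peers/           # one file per probed peer, named by the peer's UUID
    # removing /var/lib/glusterd (with glusterd stopped) wipes this state, as advised above
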
16:34 sjoeboo no gluster yet...
16:34 sjoeboo just have 1 half of the cluster ready is all
16:38 pdurbin sjoeboo: sure sure, but it's still awesome
16:41 hagarth joined #gluster
16:44 mooperd_ is this normal
16:45 mooperd_ |/brick/.glusterfs/00/00/00000000-0000-0000-0000-000000000001/.glusterfs/00/00/00000000-0000-0000-0000-000000000001/.glusterfs/00/00/00000000-0000-0000-0000-000000000001/.glusterfs/00/00/00000000-0000-0000-0000-000000000001/.glusterfs/00/00/00000000-0000-0000-0000-000000000001/.glusterfs/00/00/00000000-0000-0000-0000-000000000001/.glusterfs/
16:46 JoeJulian Assuming that /brick/.glusterfs/00/00/00000000-0000-0000-0000-000000000001 is a symlink, yes.
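
A quick way to check is to look at the entry itself. On my understanding the all-zeros-...-0001 gfid represents the volume root and is stored as a symlink that points back up the tree, which would explain the apparent recursion above; treat that detail as an assumption and verify on your own brick:

    ls -l /brick/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
    # if it is a symlink (as assumed here), the loop above is just the link being followed repeatedly
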
16:49 daMaestro joined #gluster
16:53 nightwalk joined #gluster
16:57 puebele1 joined #gluster
17:00 johnmark sjoeboo: keep us apprised
17:02 mooperd_ Is it correct for me to use the external ip address as the brick hostname?
17:02 mooperd_ gluster volume create test 176.74.56.56:/brick
17:03 mooperd_ I am running this command on the server with IP of 176.74.56.56
17:06 ika2810 left #gluster
17:07 semiosis mooperd_: i highly recommend using ,,(hostnames)
17:07 glusterbot mooperd_: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
17:07 mooperd_ semiosis: ok
17:07 mooperd_ It wouldnt accept localhost
17:07 semiosis and the hostname should resolve to the actual interface address of the machine
17:08 semiosis because localhost will resolve to the wrong machine from another machine
17:08 semiosis but you can map the machine's hostname to 127.0.0.1 on itself, just make sure that it's hostname resolves to the real, actual, routable IP address of that machine from other machines
17:09 mooperd_ semiosis: I see
17:09 semiosis i'd recommend using dns and setting up real FQDNs
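
A sketch of what that advice looks like for a two-server pool (hostnames and addresses here are made up):

    # on server1, /etc/hosts maps its own name to loopback and server2 to its real IP:
    #   127.0.0.1   server1
    #   192.0.2.2   server2
    # on server2, the reverse:
    #   192.0.2.1   server1
    #   127.0.0.1   server2
    gluster peer probe server2        # run from server1
    gluster peer probe server1        # run once from server2, so server1 is also known by name
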
17:09 mooperd_ well, this is just a one brick setup to test swift
17:09 semiosis oh
17:09 mooperd_ semiosis: my cluster has all dns setup and stuff :)
17:10 semiosis awesome
17:10 semiosis cluster of one brick?!
17:13 mooperd_ semiosis: no, Im just using a vm on my local machine
17:13 mooperd_ my router is broken
17:13 mooperd_ long story
17:13 stopbit joined #gluster
17:14 mooperd_ I just wanted to make sure that using the ip would not kill it
17:14 semiosis well there's really only one way to make sure... and that is to try it
17:17 mooperd_ semiosis: well, this swift stuff is making me want to hurt kittens
17:24 [{L0rDS}] hey JoeJulian, I made the tests again and got the logs...
17:25 [{L0rDS}] can you see again? http://www.fpaste.org/ccwG/
17:25 glusterbot Title: Viewing /var/log/glusterfs/bricks/glusterdata.log by dfalcao (at www.fpaste.org)
17:36 Mo__ joined #gluster
17:56 esm_ joined #gluster
18:10 nightwalk joined #gluster
18:11 jdarcy I'm going to change my job title to "software neurosurgeon" because I spend all my time healing split brains.
18:12 plarsen joined #gluster
18:13 obryan I wonder how much the aspirin industry owes to people in IT?
18:13 jdarcy Even more than the other way around, if you include ibuprofen etc.
18:14 johnmark jdarcy: sounds serious
18:17 jdarcy johnmark: Hold still while I reload the stapler.
18:21 johnmark haha!
18:26 y4m4 joined #gluster
18:26 mooperd_ [2012-11-14 18:58:47.903431] W [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 51: SETXATTR() / => -1 (Operation not supported)
18:26 mooperd_ Anyone seen these before?
18:28 jdarcy I really hope you're not using ZFS.
18:29 JoeJulian I've seen that too. Seems to have something to do with trying to set xattrs on a file that doesn't exist.
18:29 FU5T left #gluster
18:30 mooperd_ jdarcy: I start the ZFS Gluster experiments soon
18:30 mooperd_ bu
18:30 mooperd_ but this is on ext3
18:30 kkeithley set xattrs on a file that doesn't exist, or setting xattrs on a file system that doesn't support xattrs?
18:30 mooperd_ remount with…user_xattr
18:30 mooperd_ I suppose
18:30 JoeJulian no
18:31 JoeJulian Am I on ignore or something?
18:31 kkeithley ignore?
18:32 semiosis kkeithley: ignore what?
18:32 mooperd_ JoeJulian: I was responding to you!
18:32 kkeithley JoeJulian asked "Am I on ignore or something?"
18:32 semiosis hehehe
18:32 JoeJulian :)
18:32 jdarcy I wonder if we can get a path, so we can see what file Swift is trying to mess with.
18:32 jdarcy Maybe strace?
18:33 JoeJulian If the file doesn't exist at the moment that it's trying to set xattrs, it will throw that warning.
18:33 JoeJulian My suspicion was that's why it's just a warning.
18:34 mooperd_ also, this is in messages
18:34 mooperd_ http://gauntlet.sys11.net/log
18:34 kkeithley swift logs to /var/log/messages
18:34 JoeJulian Oh, well that could be user_xattr, sure.
18:34 JoeJulian hehe
18:35 JoeJulian Hmm, I don't set user_xattr on my bricks.
18:35 kkeithley I don't have any SETXATTR messages from swift in my /var/log/messages. I'm using xfs for my brick.
18:36 kkeithley but those are coming from fuse-bridge.c
18:36 JoeJulian My single brick swift volume sits on ext3 and works like a charm.
18:36 mooperd_ Deleting volume test has been unsuccessful :(
18:36 jdarcy Does AUTH_test actually exist?
18:37 jdarcy ERROR:root:setxattr failed on /mnt/gluster-object/AUTH_test key user.swift.metadata err: [Errno 95] Operation not supported
18:37 mooperd_ Weird that I have a log file called "mnt-gluster-object-AUTH_test.log"
18:37 johnmark and jdarcy finally asks the question that could point us in the right direction
18:38 kkeithley mooperd_: said it did in private chat
18:38 kkeithley told me it does
18:38 jdarcy For a non-existent file I'd expect ENOENT.  EOPNOTSUPP suggests to me that it exists, you have permissions (or else EPERM), but that operation doesn't work on whatever kind of thing it is.
18:39 jdarcy Does setxattr work on symlinks?
18:39 JoeJulian yes
18:39 mooperd_ AUTH_test is not a gluster vol
18:39 mooperd_ test is
18:39 JoeJulian ***
18:39 kkeithley test is the gluster vol, swift mounts it at /mnt/gluster-object/AUTH_test
18:39 jdarcy $ ln -s foo bar
18:40 jdarcy $ setfattr -h -n user.foo -v ugh bar
18:40 jdarcy setfattr: bar: Operation not permitted
18:40 bulde joined #gluster
18:40 JoeJulian -l
18:40 JoeJulian I think
18:41 JoeJulian no
18:41 JoeJulian I'm wrong
18:41 kkeithley output from his mount: localhost:test on /mnt/gluster-object/AUTH_test type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
18:41 jdarcy Same operation works if bar is a regular file.
18:41 jdarcy So AUTH_test is the volume root?
18:42 kkeithley right (and that's consistent with mine, which works)
18:42 jdarcy Still not seeing how that affects anything, but interesting.
18:42 tryggvil joined #gluster
18:42 JoeJulian can set trusted. attributes though.
18:42 hattenator joined #gluster
18:43 kkeithley but my brick is xfs. Okay, let me try an ext3 volume
18:44 mooperd_ gentlemen. I took down my gluster volume and tried to delete it.
18:44 mooperd_ But it wouldnt let me delete it
18:44 mooperd_ so, I thought Id bring it back up
18:44 mooperd_ but now, the auth doesnt work....
18:45 JoeJulian so you cannot set user.* on a symlink but you can set trusted.* on one.
18:45 jdarcy johnmark: Yeah, I'm seeing the same thing, apparently independent of user-xattr.
18:45 jdarcy Um, JoeJulian.
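
That matches standard Linux xattr behavior: the user.* namespace is refused on symlinks, while trusted.* (root only) is allowed. Continuing jdarcy's little test above:

    ln -s foo bar
    setfattr -h -n user.foo -v ugh bar         # Operation not permitted
    setfattr -h -n trusted.foo -v ugh bar      # works when run as root
    getfattr -h -d -m . bar                    # as root, shows trusted.foo
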
18:46 mooperd_ and now volume list hangs
18:50 gbrand_ joined #gluster
18:53 kkeithley so, same box, same everything, but ext3 brick, I get 404 Not Found when I try to create a container
18:53 kkeithley and swift complains about not being able to set xattrs
18:53 kkeithley Dr. It hurts when I use ext3
18:54 semiosis well, don't use ext3
18:54 kkeithley ;-)
18:55 kkeithley you still with us mooperd_?
18:56 jdarcy I think his hed asploded.
19:01 mooperd2 joined #gluster
19:01 mooperd2 hi
19:01 glusterbot mooperd2: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:01 mooperd2 my lappy died
19:01 mooperd2 go away glustebot
19:01 mooperd2 what did I miss?
19:02 semiosis use xfs
19:02 mooperd2 semiosis: eurgh
19:02 mooperd2 crappy performance though
19:02 mooperd2 for the files
19:02 JoeJulian not for a couple years.
19:02 mooperd2 I mean, I'm using 1 - 3 meg files
19:04 kkeithley to recap: I replaced my xfs brick with an ext3 brick, everything else the same, and got the same 404 Not Found error you got when I tried to create the container.
19:04 mooperd2 http://community.gluster.org/q/whats-the-staus-of-xfs-support/
19:04 glusterbot <http://goo.gl/vDyvu> (at community.gluster.org)
19:05 kkeithley use ext4 or xfs.
19:05 kkeithley btrfs if you're brave
19:05 mooperd2 kkeithley: I was using ext4
19:05 mooperd2 with the first test
19:05 kkeithley oh
19:06 mooperd2 so, for swift
19:06 mooperd2 its either ifs or xfs
19:06 chandank joined #gluster
19:06 mooperd2 xfs
19:06 mooperd2 or, if your feeling adventurous…..xfs :)
19:06 bulde joined #gluster
19:07 * JoeJulian is successfully using ext3 as the brick for ufo.
19:07 mooperd2 hmm
19:07 semiosis kkeithley: in your test did you add user_xattr to the ext3 mount opts?
19:08 gcbirzan joined #gluster
19:08 kkeithley nope
19:08 kkeithley I could try that
19:14 kkeithley okay, with user_xattr on the brick then swift create container worked
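
The fix kkeithley applied, roughly (device and brick path here are hypothetical):

    mount -o remount,user_xattr /export/brick
    # to make it permanent, add user_xattr to the options column in /etc/fstab, e.g.:
    #   /dev/vdb1   /export/brick   ext3   defaults,user_xattr   0 0
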
19:14 mooperd2 tada
19:14 mooperd2 :)
19:14 kkeithley mooperd2: for you too?
19:14 mooperd2 kkeithley: no
19:14 kkeithley oh
19:14 kkeithley what's tada?
19:14 mooperd2 I just get excited when things start working
19:15 kkeithley okay
19:23 jdarcy Huh.  My patch is working with post-op-delay better than I expected.
19:25 mooperd2 kkeithley: 201
19:25 Tarok joined #gluster
19:26 jdarcy 201 is good.
19:26 kkeithley 201 is the container create, yes, that's good
19:26 kkeithley that means it worked
19:26 mooperd2 :)
19:28 kkeithley you should see the container in /mnt/gluster-object/AUTH_test/ and be able to list it with the curl command
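
For the record, the whole round trip that finally worked looks roughly like this, using the account, user and password from earlier in the session (the token is shown truncated; use whatever X-Auth-Token the first call returns):

    # 1. get a token
    curl -v -k -H 'X-Storage-User: test:andrew' -H 'X-Storage-Pass: x7deix7dei' https://176.74.56.57:443/auth/v1.0
    # 2. create a container (expect HTTP 201)
    curl -v -k -X PUT -H 'X-Auth-Token: AUTH_tk...' https://176.74.56.57:443/v1/AUTH_test/new_container
    # 3. list the account's containers (expect HTTP 200)
    curl -v -k -X GET -H 'X-Auth-Token: AUTH_tk...' https://176.74.56.57:443/v1/AUTH_test
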
19:31 mooperd2 kkeithley: yes indeed
19:31 mooperd2 by the way
19:31 mooperd2 I was born in keithley
19:31 mooperd2 in yorkshire, england
19:32 kkeithley no kidding. That's Keighley though, right?
19:32 kkeithley Some day I want to go there
19:32 mooperd2 oh yea.
19:32 mooperd2 It is Keighley
19:32 mooperd2 :) Ive been living in germany too long
19:32 davdunc joined #gluster
19:33 davdunc joined #gluster
19:33 kkeithley Though my ancesters were German, Keicheli or something, that got anglicized to Keithley
19:33 kkeithley Kinda like Apollo Revoir was anglicized to Paul Revere.
19:34 kkeithley And that's your 'Merkin History lesson for the day
19:35 mooperd2 My name 'Holway' was anglicised from a french name I can't remember
19:35 andreask joined #gluster
19:35 mooperd2 kkeithley: thanks so much for your help!
19:35 jdarcy For most of my life, I bristled when people thought my last name was Dorsey.  Turns out it kind of is, and it means "servant of the dark one"  http://www.surnamedb.com/Surname/Darcy
19:35 glusterbot Title: Surname Database: Darcy Last Name Origin (at www.surnamedb.com)
19:35 kkeithley yw. Glad we got it working for you.
19:36 Tarok joined #gluster
19:36 kkeithley Not D'Arcy?
19:37 jdarcy kkeithley: Apparently not in my case.
19:37 DaveS_ joined #gluster
19:37 jdarcy Maybe I'll get "servant of the dark one" on my next business cards.
19:38 kkeithley Alias Peter Pettigrew
19:38 foster that warrants a shirt, at the very least
19:38 andreask left #gluster
19:39 kkeithley Or a Nazgûl
19:42 jdarcy Argh.  I forgot that I deliberately disable the new checks when post-op-delay is non-zero, so my tests weren't really working.  They were sitting there creating MASSIVE split-brain.
19:43 mooperd2 kkeithley: small reminder
19:44 mooperd2 add the user_xattr bit to your how to :)
19:44 mooperd2 then it should be doable by idiots like me
19:45 DaveS_ joined #gluster
19:45 kkeithley okay
19:49 Psi-Jack joined #gluster
19:49 mooperd2 gnight gents
19:49 kkeithley It's Lowenbrau time
19:49 DaveS__ joined #gluster
20:01 JoeJulian Tonight's the night, the night is kind of special, tonight...
20:02 JoeJulian Only us old folks would get that reference though.
20:08 * kkeithley wonders why they took Löwenbräu off the market here
20:20 Nr18 joined #gluster
20:21 nightwalk joined #gluster
20:34 y4m4 joined #gluster
20:42 NuxRo joined #gluster
20:51 rwheeler joined #gluster
21:16 johnmark kkeithley: hahaha
21:18 johnmark what, he suddenly gets it working and decides to leave? pshaw
21:18 * johnmark crosses mooperd2 off his xmas card list
21:18 * johnmark is trying to remember if he had to remount with user_xattr
21:19 JoeJulian I know that my little demo that I built for you doesn't use user_xattr and I do see user. attributes set.
21:22 [{L0rDS}] JoeJulian, sometimes you sound like a crazy bot eheheh
21:22 pdurbin [{L0rDS}]: +1
21:23 elyograg possibly preaching to the choir: mount.xfs doesn't seem to like the user_xattr option, so I think it has it on by default.
21:23 neofob joined #gluster
21:24 JoeJulian Sometimes I feel like a crazy bot. condition/response, condition/response, condition/response...
21:26 [{L0rDS}] eheheh
21:26 balunasj joined #gluster
21:36 joeto joined #gluster
21:48 jiqiren joined #gluster
21:54 vjarjadian joined #gluster
21:55 vjarjadian hi
21:55 glusterbot vjarjadian: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:58 Daxxial_1 joined #gluster
22:14 [{L0rDS}] Hey JoeJulian
22:16 [{L0rDS}] I think the delay is "normal". I tested UCARP with a ping -a while I shutdown one host and the delay was almost the same...
22:23 nueces joined #gluster
22:25 nightwalk joined #gluster
22:31 hackez joined #gluster
22:43 JoeJulian One man's normal...
22:56 inodb_ joined #gluster
23:06 daMaestro joined #gluster
23:19 lh joined #gluster
23:21 nightwalk joined #gluster
23:30 quillo joined #gluster
23:30 Psi-Jack joined #gluster
23:32 rabbit7 joined #gluster
23:37 rabbit7 hey, i am having a major split-brain problem on glusterfs 3.3, I am very new to the environment so any hint on how to go about this would be very helpful
23:41 JoeJulian Sure, I've written up a blog article about that at ,,(split-brain)
23:41 glusterbot (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
23:41 JoeJulian #2
23:42 simcen joined #gluster
23:43 elyograg I'm having a split brain problem with my 3.3.1 setup.  I copied a bunch of stuff into a directory on the mount point created by UFO and one of the files got a split brain.  Nothing went down - servers or network.  Only one client -- the UFO server.
23:44 JoeJulian Same answer. ;)
23:44 elyograg I had another split brain problem and tried to use your blog entry to fix it, and now the two files I tried to heal are listed twice on info split-brain.
23:45 rabbit7 thx a lot JoeJulian
23:51 TSM joined #gluster
23:52 rabbit7 the gfid and the size and md5 is the same for both files, does that mean there is no split-brain issue with that file ?
23:52 nightwalk joined #gluster
23:54 elyograg JoeJulian: FYI, you might want to add -f to the rm statements in your blog post.  When I do it without the -f, I get asked if I'm sure I want to remove the file.  apparently the time lag involved in answering that question is enough to keep the fix from working.
23:54 glusterbot New news from resolvedglusterbugs: [Bug 834464] Stat structure contains entries that are too large for defined data type in 32bit EL5 <http://goo.gl/qGbAe>
23:58 JoeJulian elyograg: Sounds good.
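
For completeness, the gist of that 3.3 procedure as a sketch only -- paths are hypothetical, follow the linked post for the full steps, and pick the bad copy deliberately before removing anything:

    BRICK=/export/brick
    SBFILE=/2170000/2179000/2179500      # bad copy, relative to the brick root
    GFID=$(getfattr -n trusted.gfid --absolute-names -e hex ${BRICK}${SBFILE} | grep 0x | cut -d'x' -f2)
    rm -f ${BRICK}${SBFILE}              # -f, so no interactive prompt slows the fix down
    rm -f ${BRICK}/.glusterfs/${GFID:0:2}/${GFID:2:2}/${GFID:0:8}-${GFID:8:4}-${GFID:12:4}-${GFID:16:4}-${GFID:20:12}
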