IRC log for #gluster, 2013-01-30

All times shown according to UTC.

Time Nick Message
00:01 semiosis bye purpleidea
00:03 partner darn, its 2AM already, what the heck i'm doing up still..
00:04 JoeJulian Apparently having fun.
00:04 amccloud joined #gluster
00:04 partner wohoo \o/
00:18 H__ joined #gluster
00:18 tomsve joined #gluster
00:21 ninkotech_ joined #gluster
00:33 partner joined #gluster
00:34 partner fail..
00:44 Ryan_Lane ok. I'm seriously at a loss as to how to actually make my cluster work again
00:45 Ryan_Lane it seems to be screwed up beyond repair
00:46 JoeJulian What now?
00:47 Ryan_Lane most of the volumes show one or more bricks down
00:47 Ryan_Lane I can't restart the glusterfs-server service
00:48 Ryan_Lane stop/start *kind of* works
00:48 Ryan_Lane for each volume
00:48 Ryan_Lane but not for the entire process
00:48 JoeJulian You don't have to stop the volume, just start...force
00:48 Ryan_Lane start force?
00:48 JoeJulian gluster volume start foo force
00:49 Ryan_Lane ah. didn't see that
00:49 Ryan_Lane nice to know. that'll speed things up some
00:49 JoeJulian Do the brick logs show why they're not starting?
00:50 Ryan_Lane no
00:50 Ryan_Lane force seems to work well
00:50 JoeJulian whew
00:50 JoeJulian Good. 'cause I have a "big data" meet and greet I'm supposed to go to.
00:50 Ryan_Lane even better, it's *way* faster
00:50 JoeJulian But I hate to leave you haning.
00:50 JoeJulian hanging even.
00:50 Ryan_Lane let me see if I can script it
00:50 Ryan_Lane heh
00:51 Ryan_Lane well, worst case, I'm going to be here for hours and hours fixing it ;)
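A minimal sketch of the loop Ryan_Lane talks about scripting here, assuming the volume names can be parsed out of "gluster volume info"; the force-start command itself is the one JoeJulian gave above:

    for vol in $(gluster volume info | awk '/^Volume Name:/ {print $3}'); do
        # restart the bricks of each volume without stopping it first
        gluster volume start "$vol" force
    done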
01:15 dustint joined #gluster
01:36 kevein joined #gluster
01:54 bharata joined #gluster
02:00 nueces joined #gluster
02:27 amccloud joined #gluster
03:01 overclk joined #gluster
03:02 mohankumar joined #gluster
03:04 theron joined #gluster
03:10 shylesh joined #gluster
03:13 bulde joined #gluster
03:19 hagarth joined #gluster
03:31 lala joined #gluster
03:31 __Bryan__ joined #gluster
03:34 lala_ joined #gluster
03:35 lala_ joined #gluster
03:38 21WAAFULL joined #gluster
03:52 shireesh joined #gluster
03:52 sripathi joined #gluster
03:55 sahina joined #gluster
03:57 kr4d10 joined #gluster
04:05 pai joined #gluster
04:07 daMaestro joined #gluster
04:17 Shdwdrgn joined #gluster
04:21 glusterbot New news from newglusterbugs: [Bug 825569] [enhancement]: Add support for noatime, nodiratime <http://goo.gl/6Z8Po>
04:23 ninkotech_ joined #gluster
04:27 Humble joined #gluster
04:31 deepakcs joined #gluster
04:32 theron joined #gluster
04:45 rastar joined #gluster
04:51 glusterbot New news from newglusterbugs: [Bug 905747] [FEAT] Tier support for Volumes <http://goo.gl/UPxNI>
04:52 Ryan_Lane joined #gluster
05:05 mohankumar joined #gluster
05:18 melanor9 joined #gluster
05:19 lala joined #gluster
05:20 bala joined #gluster
05:23 vpshastry joined #gluster
05:28 ramkrsna joined #gluster
05:28 ramkrsna joined #gluster
05:45 raghu joined #gluster
06:27 sashko JoeJulian: hey you around?
06:27 vimal joined #gluster
06:42 Nevan joined #gluster
07:02 test__ joined #gluster
07:03 ngoswami joined #gluster
07:11 pai joined #gluster
07:19 jtux joined #gluster
07:39 ekuric joined #gluster
07:42 puebele joined #gluster
07:43 shireesh joined #gluster
07:48 ctria joined #gluster
07:51 hagarth joined #gluster
07:53 sripathi joined #gluster
08:01 puebele1 joined #gluster
08:02 Azrael808 joined #gluster
08:13 guigui1 joined #gluster
08:18 lala joined #gluster
08:24 Joda joined #gluster
08:26 andreask joined #gluster
08:27 manik joined #gluster
08:27 Humble joined #gluster
08:34 polenta joined #gluster
08:39 sripathi1 joined #gluster
08:40 hagarth joined #gluster
08:42 lala_ joined #gluster
08:47 rgustafs joined #gluster
08:48 77CAAJGBX joined #gluster
08:49 Norky joined #gluster
08:50 sripathi joined #gluster
08:51 rastar joined #gluster
08:55 tryggvil joined #gluster
09:00 ramkrsna joined #gluster
09:00 ramkrsna joined #gluster
09:05 melanor9 joined #gluster
09:06 bauruine joined #gluster
09:07 shireesh joined #gluster
09:11 rgustafs joined #gluster
09:12 lala joined #gluster
09:12 Staples84 joined #gluster
09:33 jtux joined #gluster
09:35 Humble joined #gluster
09:42 sripathi joined #gluster
09:44 vpshastry joined #gluster
09:45 ctria joined #gluster
09:46 rastar1 joined #gluster
09:50 sashko hey guys, does glusterfs-server-3.3.0.5rhs-37.el6.x86_64 have granular locking patch?
09:53 rajesh_amaravath joined #gluster
09:58 Humble joined #gluster
10:03 Humble joined #gluster
10:06 overclk joined #gluster
10:06 ramkrsna joined #gluster
10:07 dobber joined #gluster
10:11 sashko joined #gluster
10:16 ndevos sashko: you'll have to ask Red Hat Support (https://access.redhat.com/support/cases/new) about that, this channel handles the community version
10:16 glusterbot Title: redhat.com (at access.redhat.com)
10:20 lala joined #gluster
10:25 ngoswami joined #gluster
10:26 atrius joined #gluster
10:36 sripathi1 joined #gluster
10:38 hagarth @channelstats
10:38 glusterbot hagarth: On #gluster there have been 78526 messages, containing 3503620 characters, 587425 words, 2369 smileys, and 295 frowns; 585 of those messages were ACTIONs. There have been 26862 joins, 953 parts, 25944 quits, 9 kicks, 102 mode changes, and 5 topic changes. There are currently 197 users and the channel has peaked at 203 users.
10:42 sripathi joined #gluster
10:42 kevein joined #gluster
10:45 ctria joined #gluster
11:02 manik joined #gluster
11:20 NeatBasis joined #gluster
11:23 glusterbot New news from newglusterbugs: [Bug 905871] Geo-rep status says OK , doesn't sync single file from the master. <http://goo.gl/CpA33>
11:27 vpshastry joined #gluster
11:38 killermike_ joined #gluster
11:38 bulde joined #gluster
11:39 rastar joined #gluster
11:41 Qten does anyone know if gluster can leverage the iops of multiple disks to achieve greater iops for a kvm image instead of say using 2 x raid 10 and then raid 1 over the network?
11:42 andreask joined #gluster
11:47 Qten hmm maybe if i said i have 4 storage servers and 8 disks each, can gluster use all 32 disks as a single volume to achieve redundancy and performance or is it really only able to mirror 2 or 3 way etc?
11:47 ekuric left #gluster
11:47 ekuric joined #gluster
11:48 test__ joined #gluster
11:50 pai_ joined #gluster
11:51 vijaykumar joined #gluster
11:53 andreask Qten: you can create a distributed+replicated volume
12:02 Qten andreask: this however won't "leverage" the entire pool for a single large file - this would require striping?
12:02 andreask Qten: yes, exactly
12:03 Qten andreask: ok thanks pretty much what i thought :)
12:03 andreask Qten: yw
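A sketch of the distributed+replicated layout andreask suggests for Qten's four-server case; hostnames and brick paths are placeholders, and each consecutive pair of bricks forms one replica set:

    # files are distributed across the replica pairs; a single large file still
    # lives on one pair, which is the limitation Qten asks about
    gluster volume create vmstore replica 2 \
        srv1:/bricks/disk1 srv2:/bricks/disk1 \
        srv3:/bricks/disk1 srv4:/bricks/disk1    # ...repeat for the other disks
    gluster volume start vmstore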
12:15 noob21 joined #gluster
12:15 mohankumar joined #gluster
12:22 rastar1 joined #gluster
12:31 ctria joined #gluster
12:35 noob21 i think my gluster is hosed up
12:48 rastar joined #gluster
12:53 longsleep joined #gluster
12:54 dustint joined #gluster
12:55 ngoswami joined #gluster
12:56 longsleep Hi guys, my glusterfs client eats up ~2G memory per hour, until the oom killer kills it. I see this with 3.3.1 and 3.4.0qa7 - any suggestions for debugging?
12:57 edward joined #gluster
13:15 w3lly joined #gluster
13:25 shylesh joined #gluster
13:33 vpshastry joined #gluster
13:38 bauruine joined #gluster
13:40 noob21 i'm seeing some strange behavior in my gluster.  when i issue gluster volume status to all the nodes with cssh, some of them report operation failed
13:40 noob21 but it's not the same nodes each time that are reporting that
13:44 aliguori joined #gluster
13:48 morse joined #gluster
13:49 ndevos noob21: "gluster volume status" is a pretty expensive operation and it even may involve getting a global lock, if that lock is obtained on one node, the others can not obtain it and will fail
13:52 noob21 yeah i saw the same response in my dev cluster
13:52 noob21 i suspected it was normal
13:53 noob21 i had a problem where glusterfsd was using up 1200% cpu and just sitting there.  when i tried to restart the service it wouldn't come back
13:53 noob21 rebooting the server seems to fix the issue because it comes back fine
13:54 noob21 i saw this when i tried to start the fsd process again: http://fpaste.org/3I0q/
13:54 glusterbot Title: Viewing Paste #272020 (at fpaste.org)
13:58 bulde joined #gluster
14:07 noob21 brb i have to jump off for an hour.  i'll be back on at 10
14:07 noob21 left #gluster
14:09 vijaykumar joined #gluster
14:10 vijaykumar left #gluster
14:17 sashko joined #gluster
14:23 glusterbot New news from newglusterbugs: [Bug 905933] GlusterFS 3.3.1: NFS Too many levels of symbolic links/duplicate cookie <http://goo.gl/YA2vM>
14:30 plarsen joined #gluster
14:31 zoldar joined #gluster
14:45 chouchins joined #gluster
14:50 bala1 joined #gluster
14:53 glusterbot New news from newglusterbugs: [Bug 905946] Get 405 and 503 errors when PUT two objects at once while using close connection <http://goo.gl/PPrSl>
14:54 Humble joined #gluster
14:54 stopbit joined #gluster
15:03 raven-np joined #gluster
15:08 abyss^ Is possible to run gluster server and client with replication on the same node? I haven't got more servers for gluster... I have 2 WWW server and I need replication between them...
15:09 samppah abyss^: that's possible
15:09 duerF joined #gluster
15:10 abyss^ so I configure server on first WWW node and second then I mount them as client on first WWW and second WWW, yes?
15:13 wushudoin joined #gluster
15:14 hateya joined #gluster
15:18 noob2 joined #gluster
15:22 bennyturns joined #gluster
15:22 manik joined #gluster
15:26 lala joined #gluster
15:27 bugs_ joined #gluster
15:28 Humble joined #gluster
15:39 semiosis abyss^: yes but beware of ,,(split-brain)
15:39 glusterbot abyss^: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
15:39 semiosis learn how to cause it
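A rough sketch of the two-web-server setup abyss^ describes, with placeholder hostnames and paths; each box acts as both server and client, and the split-brain warning above is the main caveat:

    # on www1, after glusterd is running on both boxes
    gluster peer probe www2
    gluster volume create shared replica 2 www1:/export/shared www2:/export/shared
    gluster volume start shared

    # on each box, mount the volume through the native client
    mkdir -p /var/www/shared
    mount -t glusterfs localhost:/shared /var/www/shared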
15:40 x4rlos How do you guys go on with ensuring that gluster mounts on boot for machines that require to have a gluster mountpoint?
15:42 x4rlos It looks like writing my own startup/shutdown script is going to be the only way?
15:54 JusHal joined #gluster
15:55 semiosis x4rlos: i use my ubuntu packages and it just works
15:55 semiosis afaik the other packages work too
15:57 JusHal ndevos: as you proposed, submitted the bug report, https://bugzilla.redhat.com/show_bug.cgi?id=905933
15:57 glusterbot <http://goo.gl/YA2vM> (at bugzilla.redhat.com)
15:57 glusterbot Bug 905933: urgent, unspecified, ---, vinaraya, NEW , GlusterFS 3.3.1: NFS Too many levels of symbolic links/duplicate cookie
15:58 x4rlos semiosis: I mean, if i put in /etc/fstab i want it mounting on boot :-)
15:58 x4rlos Should this work?
15:59 x4rlos I just checked as i guessed it would have problems, and plenty of people are talking about it not working cos it needs to start up glusterd first.
15:59 ndevos JusHal: thanks, looks clear enough to me
16:02 dblack joined #gluster
16:03 manik joined #gluster
16:04 semiosis x4rlos: plenty of people?  when was that?  2011?
16:04 amccloud joined #gluster
16:04 x4rlos semiosis: Probably :-D
16:05 semiosis x4rlos: should be fixed in all the recent packages afaik
16:05 x4rlos cool, sorry to bother you :-)
16:05 semiosis no worries
16:05 semiosis if you try it and have issues, pastie your client log file
16:06 semiosis oh and btw, needing to start glusterd before the mount is only in the case where you're mounting from localhost
16:06 x4rlos ah, okay. Maybe wires getting crossed :-)
16:08 x4rlos Can gluster mount lvm lv's directly on the server? I mount them on /mnt/blahblah  and then create the bricks on them. Would be good if could write to them without the mount required.
16:08 semiosis gluster needs a mounted posix filesystem with xattr support for bricks
16:09 semiosis xfs recommended
16:09 semiosis can't use raw block devices
16:09 x4rlos yeah, i heeded the advice of going xfs. :-)
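A sketch of preparing an LV-backed brick the way semiosis describes (formatted, mounted, then used for bricks); the device and mount names are placeholders, and the inode-size option is the commonly recommended one for gluster bricks on xfs:

    mkfs.xfs -i size=512 /dev/vg0/brick1
    mkdir -p /bricks/brick1
    echo '/dev/vg0/brick1 /bricks/brick1 xfs defaults 0 0' >> /etc/fstab
    mount /bricks/brick1
    # the brick the volume uses is then a path on that mounted filesystem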
16:10 x4rlos hmm. I just checked a test machine of mine, and it doesnt seem to be mounting of it's own accord.
16:11 x4rlos there's talk of it unmounting on shutdown. This may be me just being stupid.
16:11 x4rlos client1:database-archive /srv/pg_archive glusterfs defaults
16:11 x4rlos (that's my fstab)
16:14 Hymie joined #gluster
16:15 x4rlos changed to _netdev.
16:16 daMaestro joined #gluster
16:17 semiosis x4rlos: what distro?  what vers of glusterfs?
16:17 x4rlos debian.
16:17 x4rlos I got 3.4 on the client,
16:17 x4rlos and 3.3.1 on the server.
16:17 x4rlos wheezy.
16:17 semiosis good luck with that
16:18 semiosis let us know if those versions interoperate
16:18 x4rlos hehe. I know, right. Not sure how i did that.
16:18 semiosis idk if they are expected to
16:18 x4rlos They actually seem to work. Not like the 3.2 and 3.3 versions.
16:18 x4rlos Its only my home vm env.
16:19 semiosis did you install from packages?  where did you get them?
16:20 semiosis ah i see 3.4.0 package in debian experimental
16:20 x4rlos i got from experimental wheezy packages a while back iirc. they had 3.3.1 on there for a bit.
16:20 x4rlos Yeah, they must have changed about the time i was installing them there.
16:21 x4rlos Clients: 3.4.0~qa5-1
16:21 x4rlos server: 3.3.1-2
16:22 x4rlos on my actual live machines they are using your 3.3.1-1 deb packages :-)
16:24 x4rlos hmm, either way, the client isn't mounting on boot. mount -a mounts it just fine though.
16:24 hateya joined #gluster
16:26 semiosis x4rlos: hmm
16:26 x4rlos i put loglevel @ warning.
16:26 semiosis whats your fstab line?
16:26 semiosis oh wait its above, never mind
16:27 x4rlos Its this now: client1:database-archive /srv/pg_archive glusterfs defaults,_netdev,log-level=loglevel 0 0
16:27 semiosis x4rlos: pastie your client log file... /var/log/glusterfs/srv-pg_archive.log
16:27 x4rlos doesnt even mention attempting to mount it there.
16:28 x4rlos Last entry reads: [2013-01-30 16:25:51.293214] I [fuse-bridge.c:4433:fuse_thread_proc] 0-fuse: unmounting /srv/pg_archive
16:28 x4rlos (will happily pastie it you if you want though :-))
16:29 ndevos x4rlos: really "log-level=loglevel"? not "log-level=WARNING"? or DEBUG while you're working on fixing it?
16:29 x4rlos lol
16:30 x4rlos yeah, it reads DEBUG :-)
16:30 guigui1 left #gluster
16:31 ndevos :)
16:32 x4rlos I'm a fool, but it still doesnt say anything XD
16:34 x4rlos http://pastie.org/5966081
16:34 glusterbot Title: #5966081 - Pastie (at pastie.org)
16:34 andreask joined #gluster
16:39 Humble joined #gluster
16:47 x4rlos semiosis: you died? :-s
16:50 JoeJulian x4rlos: Try "bash -x /usr/sbin/mount.glusterfs client1:database-archive /srv/pg_archive" so you can maybe see a little more about what's failing and why.
16:53 x4rlos JoeJulian: It works on a normal mount. And also on mount -a after boot. It's having problems mounting automagically on boot.
16:53 JoeJulian Ah, ok.
16:53 JoeJulian ... works fine in fedora... ;)
16:53 x4rlos :-p
16:54 x4rlos I guess it should work in debian.
16:54 JoeJulian Actually... which debian is that?
16:54 x4rlos wheezy / 3.4.0~qa5-1
16:55 JoeJulian Nope. Not the one and only thing I know about debian.
16:56 x4rlos logging on the fstab entry doesn't seem to want to write anything out either. Even when i tell it to log somewhere explicitly.
16:56 x4rlos I suppose to be honest, if i get it from experimental source it's my own fault :-)
16:58 JoeJulian There is an expectation with pre-releases that one should be able to isolate and report bugs found, but we don't mind helping as long as we're not the ones doing all the work. ;)
16:59 JoeJulian I just can't handle the people that don't have a clue and come in telling us how they can't get some qa release to work and expect quick and ready answers. Those people are nuts.
17:00 JoeJulian So one other thing I see about your client log with loglevel=DEBUG... is that there's no " D " in it.
17:01 nueces joined #gluster
17:02 sashko joined #gluster
17:02 w3lly1 joined #gluster
17:03 x4rlos JoeJulian: How do you mean?
17:04 longsleep Please help - my glusterfs client needs 13GB memory and gets eventually killed by the oom watcher. This happens with 3.3.1 and 3.4.0qa7 :-/
17:04 m0zes .cball
17:04 m0zes whoops
17:04 JoeJulian x4rlos: With log-level=DEBUG there should be some debug lines in that log. There aren't any.
17:04 x4rlos Yeah, what's with that? Is there something i need to do?
17:04 JuanBre joined #gluster
17:05 JoeJulian longsleep: That's not something I've seen with 3.3.1. How fast does it consume memory?
17:05 x4rlos btw, I dont expect anything of anyone. I think what you guys are doing is great. Hats off to semiosis: for his work on the packages too. I have filed a few bugs (and even submitted a patch for a simple manual spelling mistake) and helped out
17:06 JoeJulian x4rlos: You're not one of "them".
17:06 x4rlos with debugging things like the problems with the georeplication not working.
17:06 x4rlos Oh, okay :-)
17:07 x4rlos And of course your contributions regarding the xattr stuff :-) That made me very confused a while back :-)
17:07 lh joined #gluster
17:07 lh joined #gluster
17:07 JuanBre hi! I am facing the following problem....https://bugzilla.redhat.com/show_bug.cgi?id=861423 ...and I am wondering if compiling from source would fix my problem
17:07 glusterbot <http://goo.gl/pFESl> (at bugzilla.redhat.com)
17:07 glusterbot Bug 861423: medium, medium, ---, vshastry, ASSIGNED , Glusterfs mount on distributed striped replicated volume fails with E [stripe-helpers.c:268:stripe_ctx_handle] <volname> Failed to get stripe-size
17:08 JoeJulian JuanBre: Doesn't look like it would. Just shows as assigned which suggests that it's not fixed.
17:09 JuanBre JoeJulian: and reverting to version 3.2?
17:10 JoeJulian I don't use (or often condone the use of ) stripe, so I don't know what changes there might have been. My initial expectation would be that there shouldn't be a problem with rolling back, but you might have to remove the index xlator for the volume info files.
17:16 JoeJulian x4rlos: Just to satisfy an unlikely hunch, try removing "defaults" from the mount options.
17:17 x4rlos will do
17:18 x4rlos nope. And no log either. It must be ignoring this mount request entirely for some reason.
17:20 partner didn't read the whole backlog but someone said the mount problem on debian/ubuntu would be fixed?
17:21 x4rlos partner: Remember what the problem was? :-/
17:21 partner x4rlos: mount from fstab doesn't mount
17:21 partner volume mount that is..
17:22 partner wasn't it what you asked few hours back?
17:22 x4rlos hmm. Yeah, most likely :-)
17:22 x4rlos (15:58:23)
17:23 partner (17:58 :)
17:23 x4rlos Saw this though: (06:27:36) sashko: JoeJulian: hey you around?
17:24 JoeJulian Like I'm the only guy with answers... <eyeroll>
17:25 x4rlos :-)
17:26 x4rlos Okay. Do i grab a copy of 3.3.1 and see if i still cannot mount from fstab? Am i the only person with this problem? (/using debian)
17:26 JoeJulian x4rlos: So there must be some other log that says something. It shouldn't just ignore it entirely.
17:27 partner can you please recap the problem, debian does not mount the volume from fstab on reboot?
17:27 x4rlos JoeJulian: You'd think so. partner: Yup. At least what i have.
17:28 x4rlos Now, caveat coming up: I have the experimental 3.4 package from debian sources (wheezy)
17:28 x4rlos Whilst i would have _thought_ this shouldn't really matter, it's worth noting in case something has been introduced in between.
17:28 * semiosis back
17:28 semiosis meetings pull me afk randomly
17:28 x4rlos (i actually run 3.3.1 on my live boxes, and need to add so it mounts upon boot).
17:28 partner x4rlos: i don't think or guess anything, i am just asking to recap the problem so that i could perhaps help
17:30 partner x4rlos: yup, its expected
17:30 semiosis partner: the mount problem on ubuntu is fixed
17:30 semiosis partner: has been for some time already
17:31 x4rlos anyone remember what the problem was?
17:31 partner ubuntu probably uses something else for starting up the stuff since some version..
17:31 semiosis partner: idk whats going on with x4rlos' debian issue
17:31 partner semiosis: in squeeze or obviously wheezy too you cannot mount volumes automatically from fstab during a boot
17:32 * semiosis dusts off the debian vm
17:32 partner http://unix-heaven.org/glusterfs-fails-to-mount-after-reboot
17:32 glusterbot <http://goo.gl/3EOke> (at unix-heaven.org)
17:32 partner i can confirm its the case and so it seems with x4rlos too. and no, its not the _netdev mentioned
17:33 semiosis meh there's so much wrong with that article
17:34 partner regardless does not mount. haven't investigated the root cause, perhaps some module or anything glusterfs mount depends on is not yet loaded when disks are mounted
17:35 semiosis i'm getting tired of all the "I tried one thing and it didnt work like i imagined it would and i didn't take any time to learn why or what i should have expected therefore the software is crap/broke"
17:35 * x4rlos looks in mirror :: "see man. You're not crazy".
17:35 semiosis blog posts
17:35 partner huh?
17:35 partner oh
17:35 chouchins joined #gluster
17:36 x4rlos Well, _netdev only attempts once network has been established - Isn't it simply a case of network not being up in time to attempt a network filesystem?
17:36 semiosis x4rlos: on what distro?  not ubuntu... that's redhat/centos stuff
17:36 x4rlos Can't we just delay the mount attempt by 10 seconds or so.
17:36 partner not on debian/ubuntu
17:37 semiosis x4rlos: no
17:37 x4rlos I'm on debian wheezy.
17:37 semiosis x4rlos: yeah i need to diagnose your issue on debian, stand by
17:37 x4rlos :-)
17:38 semiosis x4rlos: *my* ubuntu packages... the ones in ubuntu universe (3.2.5) and my ppa (official glusterfs ubuntu packages since 3.3.1) mount at boot from fstab
17:38 JoeJulian You could see if this has any relevance if you like: https://gist.github.com/4472816
17:38 glusterbot Title: The glusterfs-server upstart conf would start before the brick was mounted. I added this ugly hack to get it working. I ripped the pre-start script from mysql and decided to leave the sanity checks in place, just because. (at gist.github.com)
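The gist itself isn't reproduced here; as an illustration only, the hack its description refers to amounts to an upstart pre-start stanza along these lines, delaying glusterd until the brick filesystems have had time to mount:

    # /etc/init/glusterfs-server.conf (excerpt, illustrative; the real gist also
    # keeps sanity checks borrowed from the mysql job)
    pre-start script
        sleep 10   # crude wait so local brick mounts finish before glusterd starts
    end script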
17:39 semiosis there used to be gluster.org packages for ubuntu up until 3.3.0 which may not have worked for mounting at boot, those packages are gone since 3.3.1 and gluster.org recommends my ppa as official for ubuntu
17:39 semiosis that blog post author was most likely using the gluster.org 3.3.0 package, though never said
17:39 zaitcev joined #gluster
17:39 partner same issue on 3.3.1
17:40 partner i need to check where my packages originate
17:40 semiosis bullsh**
17:40 JoeJulian hehe
17:40 semiosis JoeJulian: that gist has my name on it but i did not write that
17:40 JoeJulian No, I did.
17:40 JoeJulian That got it working for me.
17:41 partner semiosis: by your BS you are (again) referring to your ubuntu packages?
17:41 semiosis ha no i'm referring to the gist JoeJulian linked which has my name on code i did not write
17:42 partner oh, i so well connected it to my comments :)
17:42 JoeJulian What happened was that glusterd is started by upstart to satisfy the dependency of the mount. The mount then proceeded so quickly that glusterd hadn't started the bricks yet.
17:42 semiosis JoeJulian: please take my name off that
17:42 semiosis put yours there
17:43 JoeJulian Hehe, ok. It's there because it's an edit of yours.
17:44 JoeJulian pfft... can't. I wasn't logged in when I put it up there for you to look at.
17:44 semiosis x4rlos: booted the wheezy vm, updating packages, then going to try to reproduce your fstab mount issue :)
17:44 x4rlos Okay :-) This is on a client machine accessing the gluster servers, right?
17:44 semiosis JoeJulian: then at least leave a comment
17:45 semiosis x4rlos: i will try both remote server & localhost in fstab
17:45 semiosis i wouldn't be satisfied unless both worked
17:45 JoeJulian semiosis: http://irclog.perlgeek.de/gluster/2013-01-07#i_6304826
17:45 glusterbot <http://goo.gl/puMva> (at irclog.perlgeek.de)
17:45 x4rlos thanks for your help all :-) Much apprieciated :-)
17:45 JoeJulian That's where I was trying to show/ask you about it back then, fyi.
17:46 semiosis JoeJulian: i'm sure you had good intentions, though tbh i dont remember that
17:46 semiosis one day i'll get an email asking for support on this gist :)
17:46 JoeJulian hehe
17:48 amccloud joined #gluster
17:48 JoeJulian So anyway... as I was saying... since the bricks weren't started, the mount failed. By throwing in the sleep into pre-start it left time for the bricks to start and the mount succeeded.
17:49 JoeJulian There's probably a more elegant way of waiting, but I was out of time.
17:50 JoeJulian This only matters, of course, if this client is also a server.
17:51 JoeJulian Oh, wait... no... that's not true at all.
17:52 JoeJulian Crap, nevermind. I had forgotten what the problem was.
17:52 JoeJulian That was to solve the problem where the bricks werent' mounted when glusterd tried to start.
17:52 x4rlos2 joined #gluster
17:52 semiosis that's better solved by having bricks be subdirs of mount points
17:53 semiosis then if not mounted, brick path doesnt exist, glusterfsd exits at start
17:53 JoeJulian They were, but the xfs mounts were happening late
17:53 ctria joined #gluster
17:53 x4rlos2 joined #gluster
17:54 JoeJulian And yes. glusterfsd was exiting at start, leaving the bricks down.
17:54 JoeJulian Kind-of sucked. :)
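A sketch of the bricks-as-subdirectories pattern semiosis recommends above, with placeholder paths:

    mount /dev/vg0/brick1 /bricks/brick1        # xfs filesystem backing the brick
    mkdir -p /bricks/brick1/data                # subdirectory used as the brick
    gluster volume create myvol server1:/bricks/brick1/data
    # if /bricks/brick1 is not mounted at boot, /bricks/brick1/data does not
    # exist, so glusterfsd exits instead of silently writing into the root fs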
17:54 x4rlos Hometime in 5 mins - in case i don't reply. I have logged in from home.
17:54 _Bryan_ So...lets say I have a volume created with 10 boxes....and I want to change the hostnames on all 10 servers?  What would be the easiest way to do this? Delete the Volume and then recreate it with the new hostnames and bricks in the same order?
17:54 JoeJulian yes
17:55 _Bryan_ Damn..I was hoping you had some magic bullet...
17:55 _Bryan_ but that was the answer I expected
17:55 _Bryan_ thanks JJ
17:55 JoeJulian There's other ways of doing it, but you did specify "easiest"
17:56 _Bryan_ yeah....and more risky...
17:56 UnixDev_ joined #gluster
17:56 mmakarczyk joined #gluster
17:57 JoeJulian So yeah, there's the way I recommend doing it (that one) and the way I would do it (cheating).
17:57 _Bryan_ what with cnames?
17:58 andrewbogott left #gluster
17:59 JoeJulian If it's a replicated volume and I'm concerned with downtime, I would kill one brick, use "replace-brick...commit force" to change the hostname for that one brick.
18:00 JoeJulian I /think/ it'll complain about it already being a part of a volume so I might have to mv the brick directory temporarily.
18:02 JoeJulian The 3rd option would be to stop all the volumes, stop glusterd, sed replace the hostnames under /var/glusterd
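Hedged sketches of the second and third options JoeJulian lists, with placeholder volume, host and brick names; on 3.3 installs the state files he means normally live under /var/lib/glusterd:

    # option 2: per brick on a replicated volume, swap the hostname in place
    gluster volume replace-brick myvol oldhost:/bricks/b1 newhost:/bricks/b1 commit force

    # option 3: offline rename - stop the volumes and glusterd first, then
    sed -i 's/oldhost/newhost/g' $(grep -rl oldhost /var/lib/glusterd)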
18:03 x4rlos semiosis: On way home so won't be able to reply for 30-60 minutes.
18:03 semiosis ok
18:11 hateya joined #gluster
18:12 Ryan_Lane joined #gluster
18:20 johan joined #gluster
18:22 Guest84807 I have a question about the 3.3.1 version, I had a crash of one of the bricks, now I have added the peer and volume information on the new brick, when I launch the self-heal I get: Launching Heal operation on volume dataGluster has been unsuccessful. What can cause this and what is the proper way to restore a new brick?
18:27 UnixDev joined #gluster
18:43 bauruine joined #gluster
18:46 Mo___ joined #gluster
18:48 rastar joined #gluster
18:49 kkeithley hmmm, gerrit says joe.julian.prime@gmail.com is not a registered user?
18:54 JoeJulian I know, that's so weird.
18:56 JoeJulian Maybe it's because I had never selected a username. Just did, now it's joejulian. Try that.
18:58 kkeithley that worked, and tells me ex post facto that the email for that user is joe.julian.prime@gmail.com
18:58 JoeJulian Hehe
18:59 tryggvil joined #gluster
19:06 cmcdermott1 joined #gluster
19:09 x4rlos2 semiosis: Any joy? No worries if not, i'll keep playing after i made me some pasta :-)
19:10 semiosis x4rlos2: real busy will advise
19:10 x4rlos2 semiosis: No probs :-)
19:18 semiosis debian wheezy has upstart?!
19:19 badone joined #gluster
19:20 semiosis x4rlos2: i'm trying kkeithley's debian wheezy packages of glusterfs 3.3.1 from download.gluster.org first before the debian experimental packages
19:21 semiosis x4rlos2: adding _netdev to the fstab options made mounting from a remote glusterfs server work. without that the mount is executed before networking is up
19:21 semiosis wheezy has upstart but (unlike ubuntu) _netdev works like it used to
19:21 semiosis s/used to/always has/
19:21 glusterbot What semiosis meant to say was: wheezy has upstart but (unlike ubuntu) _netdev works like it always has
19:27 semiosis trying localhost mount now
19:29 semiosis localhost mount fails at boot with kkeithley's packages, most likely with debian experimental too though i've not verified that yet
19:32 andreask joined #gluster
19:32 elyograg kkeithley: ping
19:33 semiosis the failure i mentioned is possibly due to my trivial test setup, trying again with a real world scenario
19:34 JuanBre joined #gluster
19:35 semiosis ok here's what i've learned...
19:35 semiosis glusterfs client quits if it can't connect to any bricks
19:37 semiosis with kkeithley's packages (at least on my test vm) with a default install and _netdev mount option for localhost, the client can fetch volume info from localhost server, but at that time local bricks haven't yet started (they do a split second later)
19:37 semiosis sooo....
19:37 semiosis if that volume has replicas on another server, client finds them & starts up, eventually (a split second later) finding the lcoalhost bricks & connecting to them as well
19:37 semiosis so no problem :)
19:38 semiosis a localhost mount of a glusterfs volume where all bricks are local should be using nfs instead of glusterfs :)
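For the all-local-bricks case semiosis mentions, a possible fstab line using gluster's built-in NFS server (which speaks NFSv3 over TCP); the volume name "baz" is borrowed from the test setup above and the mount options are the usual suggestions, not verified here:

    localhost:/baz  /mnt/glf  nfs  defaults,_netdev,vers=3,tcp,nolock  0 0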
19:39 partner was that the original problem?
19:39 partner as i had a bit hard time figuring it out except some mount didn't work on boot..
19:40 partner i have this on my squeeze client (as in, bricks/volume all remote): testgn1:/rv0  /mnt/gluster-rv0        glusterfs       defaults        0       0
19:40 semiosis partner: "some mount didn't work on boot" general enough to be true
19:41 partner does not come up on boot
19:41 partner semiosis: heh true
19:41 semiosis partner: replace 'defaults' with '_netdev'
19:41 semiosis in your fstab
19:41 kkeithley elyograg: pong
19:41 semiosis lots of mounts dont work on boot, for lots of reasons.
19:41 elyograg kkeithley: thanks for your help on the mailing list for UFO.
19:41 semiosis and we can fix just about all of them :)
19:42 kkeithley yw
19:42 partner i think i had it there but removed for some reason
19:42 elyograg kkeithley: Still can't make it work.  Looks like gluster-ufo just adds -gluster config files, so I moved those to their real names.  after a permission problem, I still can't get it to start.  http://www.fpaste.org/bD2S/
19:42 glusterbot Title: Viewing gluster-ufo 3.3.1-8 start attempt by elyograg (at www.fpaste.org)
19:43 kkeithley this is on f18?
19:43 elyograg yes.
19:43 elyograg originally installed as beta, then distro-synced after release.
19:43 kkeithley I know it worked on f17. I just installed f18 last week and haven't gotten to trying ufo yet on it
19:44 rspada joined #gluster
19:44 luckybambu joined #gluster
19:46 partner semiosis: testgn1:/rv0  /mnt/gluster-rv0        glusterfs       _netdev         0       0
19:47 semiosis yes?
19:47 partner i think i had previously defaults,_netdev but it didn't work. the above does not mount either on reboot i'm afraid, just tested it
19:48 partner i'll see if i can find anything from the logs
19:48 semiosis partner: pastie the client log file covering the time since boot
19:48 semiosis also what version of glusterfs?  where'd you get the packages?
19:48 partner oh, this too, the mount complains:
19:48 partner unknown option _netdev (ignored)
19:48 semiosis ignore that
19:49 partner why?-) its the only line that fails
19:49 semiosis _netdev is not a mount option it's an init option
19:49 partner oh, i thought i already mentioned but i guess i didn't. 3.3.1 from download.gluster.org and squeeze
19:50 semiosis it's not failing, it's being ignored & proceeding
19:50 semiosis if it were failing it would say fail or error instead of ignored
19:50 semiosis get those client logs pastied if you want a solution
19:51 _Bryan_ joined #gluster
19:51 partner as said, looking into logs
19:52 semiosis ok good luck
19:52 * semiosis lunches
19:52 _benoit_ joined #gluster
19:53 theron joined #gluster
19:53 partner semiosis: alright, enjoy, well deserved :)
19:53 semiosis ty :)
19:56 x4rlos2 semiosis: hmmm, I had that in there originally. Can you c+p me the fstab entry please?
19:56 semiosis heh already deleted it preparing for another route of testing once i get back with lunch
19:57 x4rlos2 no problem. Sorry, i was eating too. :-)
19:57 theron joined #gluster
19:57 semiosis but it was nothing fancy... "10.168.100.96:baz /mnt/glf glusterfs _netdev 0 0"
19:57 partner ok now i've got something
19:57 semiosis x4rlos2: i havent tried the debian experimental packages yet, going to do that soon... just tried kkeithley's packages from download.gluster.org
19:58 x4rlos2 ah, okay :-)
19:58 semiosis but i expect it to be the same
19:58 partner http://dpaste.com/900607/ there
19:58 glusterbot Title: dpaste: #900607 (at dpaste.com)
19:58 semiosis partner: fuse module not loaded by default in squeeze?!
19:58 partner i'll google around for hints
19:59 semiosis bbaib
19:59 Staples84 joined #gluster
19:59 partner fsck, i screwed up, need to do another reboot..
20:01 partner lsmod does show fuse after i login from reboot
20:01 chouchi__ joined #gluster
20:03 w3lly joined #gluster
20:05 partner rcS.d has fure almost last, all the mount* stuff is prior that, no idea what handles the _netdev stuff later on.. mountall.sh does lots for nfs, smbfs, gfs,...
20:05 partner *fuse
20:06 theron joined #gluster
20:08 JoeJulian Does your server(s) mount the client from fstab?
20:09 lh joined #gluster
20:09 lh joined #gluster
20:10 JoeJulian @ppa
20:10 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
20:11 partner JoeJulian: preferably.. but with the given problem i do have fstab entry and then use rc.local to call mount command..
20:11 partner i've only now got time to investigate the issue
20:13 partner it seems the squeeze approach is to use /etc/network/if-up.d/ to call for for example nfs and samba mounts
20:13 JoeJulian Here's what I encountered. The glusterfs mount triggers the start of glusterfs-server. It does that at about the same time bricks are mounted. If the brick doesn't mount before the glusterfs mount in upstart, then when glusterd starts there are no bricks. The fuse mount will then fail, because there's no bricks.
20:13 partner JoeJulian: i read that earlier but my case is remote client, volume is up and running just fine
20:14 JoeJulian upstart writes a log somewhere, iirc. look in there and paste anything relevant.
20:15 partner not on squeeze (by default)
20:15 JoeJulian maybe I'll spin up some vms somewhere... I know it's upstart that's the problem.
20:15 partner i have sysvinit, not upstart
20:16 partner debian squeeze, though wheezy i guess uses it already by default, i can't remember, installed few but no need to fiddle much
20:16 UnixDev joined #gluster
20:17 semiosis back
20:20 semiosis JoeJulian: upstart on ubuntu is very different than upstart on other distros (debian & redhat).  lets just say ubuntu has a novel way of doing mounts at boot
20:21 semiosis even though debian wheezy & redhat/centos use upstart, they still do traditional initv-style mounts where _netdev works like you expect
20:21 partner in case anyone happens to be interested its documented here: http://www.debian.org/doc/manuals/debian-reference/ch03.en.html
20:21 glusterbot <http://goo.gl/bTRgu> (at www.debian.org)
20:21 JoeJulian semiosis: That was an debian box I was doing that on.
20:22 semiosis hmm
20:22 JoeJulian wan't it? no it wasn't...
20:22 semiosis what glusterfs packages did you install on it?
20:22 JoeJulian it was ubutu.
20:22 semiosis ha
20:22 JoeJulian gah
20:22 JoeJulian can't type
20:22 partner heh, "almost same"
20:23 partner i guess i should go into #debian to ask for more details and where exactly to put something to do it properly and without too many whining..
20:24 JoeJulian Does debian have a netfs init script?
20:25 semiosis not in wheezy, mountall -- but not like the mountall in ubuntu
20:25 partner no, my understanding is that the approach is to start networking and then use if-up stuff once the net is up
20:25 partner 22:13 < partner> it seems the squeeze approach is to use /etc/network/if-up.d/ to call for for example nfs and samba mounts
20:26 partner mountnfs file there with a comment:
20:26 partner #                    Also mounts SMB filesystems now, so the name of
20:26 partner #                    this script is getting increasingly inaccurate.
20:31 JoeJulian So if it hasn't changed, debian's mountall is a shell script. Should use -O no_netdev to mount to prevent those from mounting on the first mount pass...
20:33 partner it has the option yes. and initscript mountnfs runs the mentioned script from if-up.d and waits for it to finish.
20:34 JoeJulian I see. So there needs to be an /etc/network/if-up.d/glusterfs script?
20:34 partner i'll concentrate on the fuse error. its like fuse would get loaded too late (its one of the last ones to be run..)
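As an illustration of the if-up.d idea being discussed (not an existing script), something like the following dropped into /etc/network/if-up.d/ could mount glusterfs fstab entries once an interface comes up, loading fuse first to work around the late module load partner points out:

    #!/bin/sh
    # hypothetical /etc/network/if-up.d/glusterfs helper
    [ "$IFACE" = "lo" ] && exit 0
    modprobe fuse 2>/dev/null || true
    mount -a -t glusterfs
    exit 0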
20:34 semiosis are you two talking about the same debian release?  partner = squeeze... JoeJulian = ?
20:35 JoeJulian Looks like it should be squeeze...
20:35 JoeJulian I don't actually have an install I'm just finding what I can find through google.
20:35 partner JoeJulian: that just might not work because network gets started up way earlier but fuse is third last to be brought up
20:36 JoeJulian F it. I'll spin up a rackspace instance so I can see what's going on.
20:36 partner i guess i'll spare you now with the case and you can do something more important
20:37 JoeJulian I'm still doing my work simultaneously.
20:37 partner i'm not worried about myself, i can glue the mounts to come up, i just don't like the approach plus ALL the squeeze users will be affected by this anyways..
20:38 JoeJulian Right, that's what's piqued my interest.
20:38 JoeJulian what? rackspace doesn't have a squeeze image?
20:38 mmakarczyk_ joined #gluster
20:39 semiosis squeeze is oold
20:39 JoeJulian They actually don't have any debian images.
20:39 JoeJulian nevermind... they do
20:39 w3lly joined #gluster
20:39 JoeJulian It was just one little line that was easily overlooked.
20:39 x4rlos2 if i suggested /etc/insserv/override/ would that mean anything to anyone?
20:41 partner x4rlos2: suggest a bit more, i don't yet see the point
20:42 cmcdermott joined #gluster
20:43 semiosis x4rlos2: 3.4.0qa package from experimental mounts remote glusterfs volume at boot just fine (using _netdev)
20:43 semiosis trying localhost next
20:44 x4rlos2 hmmm. Well, that's the puppy.
20:44 semiosis x4rlos2: there was some issue with creation of /var/log/glusterfs though... i had to mkdir that manually after installing the experimental package
20:44 semiosis idk why
20:44 partner JoeJulian: in case you missed, the three lines from client log that make the difference: http://dpaste.com/900607/
20:44 glusterbot Title: dpaste: #900607 (at dpaste.com)
20:45 semiosis partner: why dont you pastie more of the log?  it may be helpful and kinda frustrating just getting a tiny peek
20:45 partner semiosis: that was all of it
20:45 semiosis the whole file?
20:45 partner yes
20:46 x4rlos2 3.4.0~qa5-1
20:46 semiosis oh ok didnt understand that before
20:46 JoeJulian yep, that would be all of it.
20:46 badone joined #gluster
20:46 x4rlos2 hmmm. I already have tht dir. I may remove and re-create 777 to make sure.
20:46 x4rlos2 what's our fstabentry?
20:46 partner semiosis: well yes i did rm the logs before reboot but rest did not have anything related to the problem so you'd just waste your time reading them
20:46 x4rlos2 s/our/your
20:46 semiosis appreciated :)
20:47 partner i haven't been out on the community channels for a moment so excuse me for my rustiness :)
20:48 partner no time for anything fun anymore
20:48 semiosis x4rlos2: localhost mount of a replicated volume worked too!
20:48 semiosis in summary... i can't reproduce your problem
20:48 x4rlos2 hrump.
20:49 partner darn, i don't have wheezy at hand, i'd love to try out reproducing myself too, perhaps tomorrow..
20:49 x4rlos2 I could firewall off the vm and give you access.... if i have time.
20:49 x4rlos2 and if you would want to have a look.
20:49 semiosis thanks but no thanks :)
20:49 x4rlos2 but i would doubt it :-)
20:50 x4rlos2 hahaha.
20:51 semiosis x4rlos2: but yeah if you could pastie the client log file from boot time onward, covering when it should've mounted but didnt, i'm sure that would be helpful
20:51 x4rlos2 hang on.
20:51 partner x4rlos2: did you already paste your client logs somewhere?
20:51 partner ..
20:51 semiosis partner: ^^ if x4rlos2 did we both missed it
20:51 x4rlos2 yes. i can do again. I just checked the experimental repo, and looks like another update.
20:52 nueces joined #gluster
20:52 amccloud joined #gluster
20:52 x4rlos2 long shot, but updating now.
20:53 tomsve joined #gluster
20:53 JoeJulian he did.
20:53 x4rlos2 yes, but nothing was even showing in logs for attempt to mount
20:53 x4rlos2 so didnt even look like it was going to try.
20:54 JoeJulian http://pastie.org/5966081
20:54 glusterbot Title: #5966081 - Pastie (at pastie.org)
20:55 semiosis JoeJulian: thanks i found that one in scrollback
20:55 semiosis what i see there is the tail end of a successful mount followed by a sigterm (15)
20:55 semiosis nothing unusual
20:55 partner the start is not visible at all?
20:56 semiosis the start would have a dump of the vol file
20:56 semiosis and something about getting port numbers from 24007
20:57 partner yes, in case the mount would succeed which to my understanding doesn't? thus the first lines would be the most important ones perhaps..
20:57 partner i just suspect the relevant lines are now missing
20:58 semiosis partner: 16:23:43.777608 ... FUSE inited.  the mount is up
20:59 semiosis partner: 16:25:51.292779 ... signum (15), shutting down
20:59 semiosis that's from x4rlos2's log
20:59 x4rlos2 now, i may have something...
21:00 x4rlos2 I noticed an old heartbeat session left over from a million years ago.
21:00 jbrooks joined #gluster
21:00 x4rlos2 and stoped it from attempting to start up.
21:00 semiosis heyyyyy
21:01 x4rlos2 and now it looks like it _attempted_ to mount.
21:01 semiosis so to recap... still no evidence that glusterfs has boot time mount problems on debian squeeze, ubuntu precise or newer, centos, redhat, fedora...
21:02 semiosis JoeJulian and I worked hard over the last two years (at least in my case, longer in his) to resolve all these boot-time problems
21:02 partner semiosis: what evidence do you want from squeeze, it does NOT work
21:02 semiosis partner: yeah we need to diagnose that :)
21:02 semiosis i will boot my squeeze vm and try to reproduce
21:02 x4rlos2 not mounted just yet, however ther eis now a gluster.log from the attmpt.
21:02 semiosis putting wheezy back to bed for another (hopefully long) nap
21:03 partner of course its possible it worked at some point and some update broke it though debian does not easily introduce anything major to stable..
21:04 semiosis partner: one case of a mount not working at boot does not mean it never works on a particular distro release.
21:04 chouchins joined #gluster
21:04 partner sure
21:04 semiosis so lets find out if it's reproducible on a stock squeeze box with kkeithley's (official) squeeze packages
21:05 kkeithley official, heh
21:05 semiosis kkeithley: you da man
21:05 partner yup, sorry i don't have right now a vanilla machine available, its again past 11PM here and too late to start setting up at the office env
21:05 x4rlos2 http://pastebin.com/UECv88mF
21:05 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
21:05 x4rlos2 (logs from attempted mount)
21:05 JoeJulian kkeithley's doing the squeeze packages? I didn't know that. Where are they?
21:05 semiosis JoeJulian: ,,(latest)
21:05 glusterbot JoeJulian: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
21:06 semiosis http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/readme.txt
21:06 glusterbot <http://goo.gl/JvlV6> (at download.gluster.org)
21:06 kkeithley I did them with a lot of handholding from semiosis.
21:06 kkeithley elyograg: ping
21:08 JoeJulian kkeithley: Need to add " -" to the end of the key add.
21:08 partner semiosis: that's a reprepro running the mirror?-)
21:08 JoeJulian "apt-key add -"
21:10 kkeithley elyograg: follow my howto, and delete /etc/swift/{account,container,object}-server.conf before staring ufo (i.e. swift-init main start). I hope that works for you. It's working for me with that work-around.
21:10 kkeithley JoeJulian: whosiwhatsis?
21:10 x4rlos2 It still won't mount :-)
21:11 JoeJulian kkeithley: without the dash at the end to read from stdin, it errors, "gpg: can't open `': No such file or directory"
21:11 semiosis +1
21:11 kkeithley oh, in the readme.txt?
21:11 JoeJulian yes
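The form being discussed, with a placeholder key path since the readme's exact URL isn't quoted here; the trailing dash is what makes apt-key read the key from stdin:

    wget -qO - http://download.gluster.org/path/to/pubkey.gpg | apt-key add -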
21:11 x4rlos2 not sure if the upgrade just made it start writing the log in the fstab entry :-/
21:12 semiosis x4rlos2: wasn't it your heartbeat thing?
21:12 kkeithley like so?
21:12 x4rlos2 i don't think so now.
21:12 dblack joined #gluster
21:12 semiosis ok this is not surprising at all
21:12 x4rlos2 sorry, guess i'm being a little verbal rather than confirming clearly.
21:12 kkeithley s/staring/starting
21:13 semiosis partner: my stock squeeze vm, with the gluster.org/kkeithley package, can mount a remote glusterfs volume at boot time (using _netdev) just fine
21:13 JoeJulian yes
21:13 x4rlos2 I agree, that this may be something i am doing wrong.
21:13 x4rlos2 let me do a full system upgrade and see if this helps.
21:14 partner semiosis: ok, that is positive.
21:14 kkeithley still no glusterbot, eh?
21:15 kkeithley elyograg: follow my howto, and delete /etc/swift/{account,container,object}-server.conf before starting ufo (i.e. swift-init main start). I hope that works for you. It's working for me with that work-around.
21:15 JoeJulian kkeithley: the regex match requires the tailing /
21:15 kkeithley same thing that stole the trailing / must have stolen the trailing - from my readme.txt ;-)
21:16 JoeJulian hehe
21:16 flrichar joined #gluster
21:17 lh joined #gluster
21:17 lh joined #gluster
21:18 partner semiosis: i see no other option but to attempt to reproduce on similar environment myself and then try to figure out the difference between the hosts
21:18 partner its not just one single machine having issues
21:18 JoeJulian yep, no squeeze mount from fstab at boot.
21:18 partner \o/
21:19 semiosis JoeJulian: works for me
21:19 semiosis just confirmed localhost mount of a replicated volume at boot
21:19 partner you said remote?
21:19 semiosis that was the previous test
21:19 partner ok :)
21:19 semiosis remote mount of a volume
21:19 semiosis last test was localhost mount of a replicated volume
21:19 semiosis a very common real world use case
21:20 JoeJulian installed squeeze from image. installed glusterfs-client package. Confirmed that I can mount from remote volume using fstab entry. rebooted. volume not mounted.
21:20 semiosis JoeJulian: pastie your logs :P
21:20 JoeJulian :D
21:20 partner do you use localhost as host there on fstab line?
21:20 JoeJulian my volume's remote.
21:20 semiosis partner: my fstab: "localhost:baz   /mnt/glf        glusterfs       _netdev 0 0"
21:21 partner i'll try it out
21:21 semiosis JoeJulian: _netdev ?
21:21 JoeJulian I wanted to test the remote volume to ensure it's not a race condition with any server services.
21:21 JoeJulian _netdev.
21:23 partner ok, got my test volume mounted with a fstab entry similar to semiosis: localhost:rv0          20G  7.8G   13G  39% /mnt/gluster-rv0
21:23 partner rebooting
21:23 JoeJulian yeah, it was the same fuse error.
21:24 partner up, i don't have mount
21:25 semiosis hmm
21:25 partner my log is now completely different
21:26 partner just a sec for pastie
21:26 x4rlos2 upgrade didn't help :-/
21:26 semiosis going to make a new squeeze vm just to be beyond sure it's stock
21:26 partner i'll summarize this quickly:
21:26 partner [2013-01-30 23:23:12.053769] W [socket.c:1512:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:24007)
21:27 partner so this is different case from the remote client but yet end result is same. lets not still confuse these two
21:27 partner as now it probably was not yet ready to serve the volume (i guess you guys discussed it earlier)
21:28 partner nvm, i'll shut up, no point guessing, its fun but might just confuse others..
21:29 x4rlos2 well, the logs i saw would have been from manual mounts from mount -a. I thought it was attempting on reboot. My bad there for misleading.
21:29 x4rlos2 so it still looks like it's not attempting to mount upon boot.
21:30 x4rlos2 port 5432 get used for anything on gluster?
21:30 partner http://pastie.org/5973717
21:30 glusterbot Title: #5973717 - Pastie (at pastie.org)
21:30 partner theres my log for localhost mount attempt
21:31 partner x4rlos2: its postgresql port so i hope not
21:31 x4rlos2 thats why i was asking :-)
21:32 x4rlos2 I run postgrs on these machines. Long shot.
21:32 partner ,,(ports)
21:32 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
21:32 x4rlos2 but that's the only ports (barring22) being used on the machine.
21:32 x4rlos2 hmmm
21:32 x4rlos2 nfs.
21:32 JoeJulian yeah, fuse is S19 and mountnfs is S15
21:33 partner my fuse is S18
21:34 partner nevertheless after
21:34 x4rlos2 i get error in messages: Jan 30 21:24:06 pgDatabase1 if-up.d/mountnfs[eth3]: if-up.d/mountnfs[eth3]: lock /var/run/network/mountnfs exist, not mounting
21:34 amccloud joined #gluster
21:35 semiosis starting to doubt that my "stock" squeeze vm was really stock
21:36 x4rlos2 OH MY GOD
21:36 partner i can't think of any relevant changes i would have done to our servers.. baseline is defined by automation and then only services added on top of it..
21:36 semiosis x4rlos2: ?
21:36 x4rlos2 it has mounted!
21:36 x4rlos2 i just removed the /var/run/network/mountnfs folder.
21:37 partner JoeJulian: how's your rc6.d, fuse killed first and then glusterfs-server ?
21:37 x4rlos2 let me try on client 2
21:38 x4rlos2 now, this has exactly same error.
21:38 x4rlos2 (think, think. Did i upgrade from old gluster versionor something?!?!)
21:38 JoeJulian partner: glusterfs-server isn't installed. Just looking at the very basic mounting issue.
21:38 x4rlos2 i always removed entirely if i did.
21:39 x4rlos2 i guess if i remove the folder (what created it?!) like i did on first, then it will also work.
21:40 x4rlos2 let me try and track where it came from first.
21:44 semiosis JoeJulian: i just created a new vm with debian squeeze from their official iso and i have ./rcS.d/S16mountnfs.sh & ./rcS.d/S20fuse
21:45 semiosis oh and remote mount works at boot btw
21:45 semiosis reverified
21:45 semiosis partner: are you running a rackspace vm image like JoeJulian is?
21:46 JoeJulian semiosis: what if you upgrade your packages, out of curiosity?
21:46 JoeJulian Maybe some upgrade since the iso broke something...
21:47 partner semiosis: no, vmware virtual machines, sorry forgot to mention that...
21:47 semiosis JoeJulian: i used the netinst iso which gets latest packages from repo during install & i also tried a full upgrade after but there was nothing to update
21:47 JoeJulian hmm
21:47 partner as said, i'm in a bit rust.. these are installed from netinst too, just base-system and ssh-server
21:47 partner then added with a selection of tools (debs)
21:48 partner and baseline for the company (accounts, ntp, dns, ...)
21:48 ctria joined #gluster
21:48 x4rlos2 http://pastie.org/5974156
21:48 glusterbot Title: #5974156 - Pastie (at pastie.org)
21:48 partner stock kernels and stuff on these boxes, all the way patched from upstream
21:48 x4rlos2 Okay, a little more of an explanation of the "problem"
21:49 x4rlos2 So seems that this is what's causing the mount on boot to fail. Not sure if i have installed something in the past to cause this?
21:50 x4rlos2 If i remove the problematic folder, then boom. It'll mount.
21:50 semiosis boom
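Based on what x4rlos2 reports, a quick way to check for and clear the stale ifupdown lock that was blocking the boot-time mount (the path comes straight from the error in the paste above):

    # if this lock directory survives an unclean shutdown, if-up.d/mountnfs
    # refuses to mount anything at the next boot
    ls -ld /var/run/network/mountnfs && rm -r /var/run/network/mountnfs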
21:50 JoeJulian wtf? the log times don't mesh! Grrr
21:51 x4rlos2 huh? and yes, boom :-)
21:51 JoeJulian The client log says it tried to mount at 21:43:39, but the boot up doesn't even appear to have started until 21:43:40
21:51 x4rlos2 these are two different machines. The latter paste is from client2
21:52 x4rlos2 client is now working after removing the said folder causing the problem.
21:52 JoeJulian I'm talking about my own logs...
21:52 x4rlos2 oh, hahahahaa
21:52 x4rlos2 it's getting late :-)
21:52 partner midnight, not that bad yet..
21:53 partner there are not much options if one wants to have a chat here with anybody.. only joins and parts during my working hours
21:53 x4rlos2 so, before i go ahead and remove the folder and live happily ever after, does anyone want me to do anything further?
21:54 JoeJulian pfft... ndevos is on during the eastern hemisphere day.
21:54 x4rlos2 not sure if this is the problem directly, or if it's one of many problems, how it got there, byproduct of another problem.
21:55 x4rlos2 did anyone else get a wheezy/squeeze image set up to check something real quick?
21:55 JoeJulian well, he's probably closer to meridian day, but still...
21:56 x4rlos2 my guess is that upon power cut or poweroff uncleanly, it won't remount upon reboot. But i could be wrong :-)
21:56 partner yeah he was online around noon today
21:57 JoeJulian What part of the world are you in, partner?
21:57 partner JoeJulian: finland
21:58 clusterflustered joined #gluster
21:59 JoeJulian samppah: should be in your timezone then.
21:59 clusterflustered hey guys, anyone around that can help steer me in the right direction? we believe we have a need for a distributed file system.
21:59 nueces joined #gluster
22:00 partner JoeJulian: i do recognize the name being finnish, seen around few times
22:00 greylurk left #gluster
22:00 JoeJulian ping him sometime. He's been around almost as long as I have.
22:01 partner sure
22:01 JoeJulian clusterflustered: Any particular steering, or just something random?
22:02 elyograg_ joined #gluster
22:02 JoeJulian The right direction is to install the software, configure it, then start using it. ;)
22:02 partner heh
22:02 JoeJulian If you're really lucky, test things before you put them into production.
22:02 clusterflustered pretty particular. should i start from the beginning?
22:02 x4rlos2 JoeJulian: thats for girls
22:03 partner the more i test the more i come up with questions, it seems i'm gonna stick here for a looong time to get them all sorted out :D
22:03 partner sorry guys..
22:03 x4rlos2 lol.
22:03 x4rlos2 partner: i soooo get what you mean.
22:03 JoeJulian I'm sarcastic, so if you leave me open ended questions like that, I'm likely to come off as snarky.
22:03 x4rlos2 snarky. I'd use that as the word of the day. If there was more than 2 hours left.
22:04 semiosis partner: if it's any consolation, i started using glusterfs in october '10 & didnt go live with it on production until june '11
22:04 clusterflustered i already feel at home here, don't go changing on my account.
22:04 partner clusterflustered: but Joe does have a point there too, build up a testing environment and play around, get familiar with it, break it and try to fix it, expand it
22:04 semiosis partner: there were other reasons but there sure is plenty to learn
22:04 partner semiosis: well i have a pressure from the fact we are running out of disk space.. getting something like 5-10 TB per month and trying to cope that with regular servers and SAN and what not..
22:05 clusterflustered other than nfs, we've been pretty good at building test environments before deployment. so i hope we can handle that.
22:05 JoeJulian Ah, ok... that's why my client log starts before the kernel seems to start. the timestamps in messages must start with rsyslog.
22:06 hateya joined #gluster
22:06 x4rlos2 semiosis: did you put georeplication fix into the debian packages for 3.3.1 btw?
22:06 x4rlos2 (was looking at wrong path)
22:07 semiosis x4rlos2: i dont cherry pick patches for my packages, i just package the release version source code
22:07 semiosis x4rlos2: only exception is when the release source code fails to build
22:07 partner clusterflustered: nfs or the native client which will save you the trouble of doing the nfs mountpoint failover stuff
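As a rough illustration of the two client options partner mentions (the hostname "server1", volume "myvol" and mountpoint are placeholders, not from this discussion):

    # native FUSE client: the mount server is only used to fetch the volume layout,
    # after that the client talks to all bricks directly, so no NFS-style mount failover is needed
    mount -t glusterfs server1:/myvol /mnt/myvol

    # built-in gluster NFS server (NFSv3): simpler on the client, but you must provide
    # failover for the mount target yourself (VIP, round-robin DNS, etc.)
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol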
22:07 x4rlos2 sure, im just wondering what the process is - when you apply the patches where do they get uploaded to?
22:07 semiosis x4rlos2: but i have nudged the glusterfs developers to make a 3.3.2 release with that... http://www.gluster.org/community/documentation/index.php/Backport_Wishlist
22:07 glusterbot <http://goo.gl/6LCcg> (at www.gluster.org)
22:08 x4rlos2 I would like to give you a hand if you want me to. Seems not many people are using debian on here - or am i wrong?
22:08 clusterflustered so, we have 250 compute nodes, and we are just using nfs/rsync to distribute files currently. we'd like to replace nfs with a distributed file system. ideally, one that sees what each compute node is doing, and places the data that compute node needs on its local disk. is there anything out there that works in this nature?
22:09 JoeJulian Most people that are doing that just make a volume then mount that volume on each compute node. All the clients have access to the same filesystem.
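A minimal sketch of that pattern, with hypothetical server names and brick paths:

    # on one storage server: create and start a volume from bricks on two servers
    gluster volume create compute-vol server1:/bricks/b1 server2:/bricks/b1
    gluster volume start compute-vol

    # on every compute node: mount the same volume; all nodes then see one shared filesystem
    mount -t glusterfs server1:/compute-vol /mnt/compute-vol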
22:10 clusterflustered okey doke, in that case, is the volume spread throughout the compute nodes, or are there data nodes that have a main role of being part of the gluster cluster?
22:11 JoeJulian either way
22:11 semiosis in hadoop, the former, otherwise, the latter, as far as i know
22:11 semiosis not positive tho
22:11 x4rlos2 bedtime. Thanks all.
22:11 semiosis yw
22:11 JoeJulian later x4rlos2
22:12 x4rlos2 have fun all.
22:12 clusterflustered so we've ruled out hadoop, mainly due to our file size. which brings me to my next point. we look at tons of files, most are only 2 MB
22:14 semiosis http://gluster.org/community/documentation/index.php/Hadoop
22:14 glusterbot <http://goo.gl/m2Zix> (at gluster.org)
22:14 semiosis fwiw, not sure if thats helpful tho
22:16 clusterflustered thanks im reading this right now
22:23 clusterflustered so one of our current issues is that a share on NFS is 5.8 TB, and when we copy out from here, the NFS server gets hammered, and essentially taken offline. this nfs server is a 6 disk array, with 802.3ad bonded ethernet, and can push our 1.7Gb pretty consistently. Is there a use case where we move this nfs system to gluster, to improve reads and writes?
22:26 partner sure
22:26 semiosis clusterflustered: definitely
22:26 partner now you have one single server having all the possible bottlenecks
22:28 duerF joined #gluster
22:29 partner clusterflustered: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Setting_Volumes-Distributed_Replicated.html as an example
22:29 glusterbot <http://goo.gl/G5IqT> (at access.redhat.com)
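For reference, a distributed-replicated create command along the lines of what that guide describes (hostnames and brick paths here are placeholders):

    # replica 2 with 4 bricks = 2 replica pairs; files are distributed across the pairs
    gluster volume create dist-rep-vol replica 2 transport tcp \
        server1:/exp1 server2:/exp1 server3:/exp1 server4:/exp1
    gluster volume start dist-rep-vol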
22:31 clusterflustered okey, talk to me about replication, if i have say 10 Gluster servers serving a volume that replaced nfs, and i have 200 nodes all try to read the same file, what happens? does it create a read lock as soon as one server tries to read the file? if my replication is similar to MongoDB, and that file exists on 3 of the 10 servers, is the read access 3 times the speed it would be with 1 server?
22:31 clusterflustered does the file automatically get replicated more to allow larger read access?
22:31 clusterflustered again thanks for the link
22:32 amccloud any way to have the first peer (the server) use a hostname instead of an ip?
22:36 semiosis ~hostnames | amccloud
22:36 glusterbot amccloud: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
22:36 amccloud okay thanks
22:37 semiosis yw
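To make glusterbot's advice concrete, a hedged sketch with made-up FQDNs:

    # from the first server, probe every other peer by name
    gluster peer probe server2.example.com
    gluster peer probe server3.example.com

    # then, from any one of the other servers, probe the first by name so it too is
    # recorded by hostname rather than by the IP it probed from
    gluster peer probe server1.example.com

    # confirm all peers show up by hostname
    gluster peer status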
22:37 raven-np joined #gluster
22:38 semiosis clusterflustered: you set a replica count for the volume, then add bricks in multiples of that count
22:38 semiosis clusterflustered: for example, you could use 'replica 3' then specify 12 bricks in your volume create command
22:38 semiosis this would make four distribution units (aka subvolumes) of 3 replicated bricks each
22:39 semiosis when you create files glusterfs will choose one of the distribution subvolumes (i.e. one of the 4 replica sets) to place the file
22:39 semiosis overall files will be "distributed" evenly over these 4 replica sets
22:40 semiosis each brick in a replica set contains all the same files as the other bricks in that set
22:40 semiosis each replica set would hold 1/4 of all files in the volume
22:40 semiosis writes/creates go to all replicas in the set (which holds the file) in sync
22:40 semiosis reads get balanced between the replicas in the set that holds the file
22:41 semiosis when a file is opened glusterfs polls all the replicas in the set that holds the file and the first to respond serves all subsequent reads for that filehandle
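semiosis's replica-3, 12-brick example would look roughly like this (hostnames and brick paths are placeholders); bricks are grouped into replica sets in the order they are listed, so each consecutive group of 3 forms one set:

    gluster volume create bigvol replica 3 \
        server1:/bricks/b1  server2:/bricks/b1  server3:/bricks/b1 \
        server4:/bricks/b1  server5:/bricks/b1  server6:/bricks/b1 \
        server7:/bricks/b1  server8:/bricks/b1  server9:/bricks/b1 \
        server10:/bricks/b1 server11:/bricks/b1 server12:/bricks/b1
    # -> 4 distribution subvolumes (replica sets) of 3 bricks each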
22:43 clusterflustered ok this seems very similar to mongo sharding. so in this use case, i should have 4x the read/write performance of a single server, correct?
22:43 clusterflustered even though i could have 12 servers, each with 1 brick?
22:43 semiosis well not exactly
22:44 semiosis since writes go to all replicas in sync you'd get 1/3 of available client bandwidth for writing
22:44 semiosis with replica 3
22:45 semiosis reads should be significantly improved though since you'd have 3 servers for each file
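Back-of-the-envelope numbers for that, assuming a single 1 Gbit/s client NIC (illustrative only, not measured figures):

    # writes: the client streams every byte to all N replicas in the set at once
    N=3; NIC_mbit=1000
    echo "effective write throughput ~ $((NIC_mbit / N)) Mbit/s per client"   # ~333 Mbit/s
    # reads: each open file is served by one replica, so concurrent readers of different
    # files spread across the N servers instead of hammering a single box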
22:45 amccloud semiosis: Is that the only way to do a hostname on the first server? Doing it that way creates a circular dependency in my deployment process.
22:46 partner nice old slides explaining quite a bit of the architecture with somewhat practical examples: https://confluence.oceanobservatories.org/download/attachments/30998760/An_Introduction_To_Gluster_ArchitectureV7_110708.pdf
22:46 glusterbot <http://goo.gl/E0Vfh> (at confluence.oceanobservatories.org)
22:47 partner for 3.1.x so not exactly up to date but i'd guess for major part it applies still
22:49 clusterflustered ok, i think i understand. replica 3 gives me ~3 times the read speed, but when a file is written, since that client has to write it to three servers, it uses 1/3 of its bandwidth per server. is there a limit to how many clients are writing? feasibly if 3 clients are all writing different files, we would have achieved ~3x the write performance for the cluster as well.
22:49 clusterflustered @partner, thanks reading now
22:55 chirino joined #gluster
23:00 amccloud semiosis: If not, would it hurt to leave it as just the ip?
23:00 noob2 joined #gluster
23:01 semiosis amccloud: i always recommend using hostnames -- FQDNs in fact
23:01 semiosis amccloud: so i would not advise you to leave it as IP
23:02 amccloud semiosis: Yea, that is what I would prefer. But having machine a depend on machine b and machine b depend on machine a breaks my automated deployment.
23:03 amccloud one of the machines has to be set up before the other and I can't backtrack
23:03 semiosis why do you need to automate peer probing anyway?
23:03 amccloud my whole deployment is automated
23:04 semiosis and what are you using to automate this?
23:04 amccloud this is just part of it
23:04 amccloud chef
23:04 JoeJulian semiosis: If your squeeze install is still running, can I see a 'head -n 10 /etc/init.d/fuse' please?
23:04 amccloud I wrote my own chef cookbook that uses the gluster cli
23:05 semiosis JoeJulian: http://pastie.org/5975424
23:05 glusterbot Title: #5975424 - Pastie (at pastie.org)
23:05 amccloud semiosis: also imagine a scenario where I have only 1 peer
23:05 JoeJulian I would also suggest hostnames. automate them into /etc/hosts if you have to.
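For instance, the deployment tool could template something like this into /etc/hosts on every node (addresses and names below are made up):

    192.0.2.11  gluster1.example.com  gluster1
    192.0.2.12  gluster2.example.com  gluster2
    192.0.2.13  gluster3.example.com  gluster3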
23:05 amccloud how would it get the correct hostname?
23:05 amccloud 1 server*
23:06 semiosis 1 server does not a cluster make
23:06 semiosis imho, use nfs
23:07 amccloud well that was just a scenario
23:07 JoeJulian Building a volume by IP is only going to be useful if the servers are *never* going to change their address.
23:07 amccloud I was considering moving the probing out of the automation
23:07 semiosis amccloud: i automated everything except peer probe & volume create
23:07 semiosis in my infra
23:08 semiosis because gluster cli provides its own cluster config management & orchestration
23:08 semiosis and i saw lots of work & little benefit automating those two steps
23:08 amccloud semiosis: About half of my deployment requires the volume.
23:08 JoeJulian Most of the questions we get regarding volumes with ip-based definitions are asking how to change them to hostnames because the user has realized that ip based was too inflexible.
23:08 semiosis +1
23:10 amccloud JoeJulian: I am trying to use hostnames.
23:10 JoeJulian semiosis: How about an ls -l /etc/rcS.d? Something's got to be different here and if I'm seeing it, someone else most certainly will come in here asking about it.
23:10 semiosis amccloud: oh most of my deployment depends on the volumes, it simply assumes they already exist
23:11 partner semiosis: i was laughed out of the channel just a day or two ago for opposing automating it all the way ;)
23:11 semiosis JoeJulian: http://pastie.org/5975536
23:11 glusterbot Title: #5975536 - Pastie (at pastie.org)
23:12 semiosis partner: hope you didnt really feel laughed out of the channel
23:12 amccloud semiosis: So if your deployment assumes the volume is there (when it's not), where do the files get written to?
23:13 semiosis so i basically have a three stage process to deploy my entire production infrastructure starting from zero
23:13 tryggvil joined #gluster
23:13 semiosis stage 1: deploy gluster server hosts. stage 2: set up glusterfs volumes (peer probe, volume create & start). stage 3: deploy all other nodes
23:13 partner JoeJulian: for the record my fuse init is identical to one semiosis pasted but there are differences on rcS.d, nothing major quickly viewing but there: http://pastie.org/5975543
23:13 glusterbot Title: #5975543 - Pastie (at pastie.org)
23:14 semiosis i did stages 1 & 2 long ago, since then i have only had to partially repeat stages 1 & 3 when servers died
23:14 hattenator joined #gluster
23:15 amccloud semiosis: And the only reason your deployment is 3 steps is because of gluster. That seems like a problem to me.
23:16 partner for the record, cfengine in its very normal run goes through it all 3 times due to the fact that certain things need to happen first in order to be able to proceed to the next one..
23:17 partner its all about convergence. hey, server wasn't there, lets try again, found it, all fine and configured
23:18 semiosis amccloud: given the choice between a properly configured infrastructure and a fully automated infrastructure, i'll take the former if i can't have both
23:18 semiosis (or if having both would be too costly)
23:19 JoeJulian A server cannot add itself to the trusted pool. A member of that pool has to add a new member. That's the right and secure way of doing that. So automating that from a new server is going to be problematic.
23:21 amccloud JoeJulian: So probe should take a hostname option so that the reverse relation gets created correctly.
23:21 semiosis amccloud: see purpleidea's ,,(puppet) module, i think it may do (most of) what you want
23:21 glusterbot amccloud: (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
23:21 JoeJulian amccloud: I'm sure I've filed that bug report already...
23:22 JoeJulian Yes, the first server not only should be able to set its own hostname, but should be required to, imho.
23:22 amccloud yes!
23:22 semiosis JoeJulian: how would it know which (of potentially many names) it should use for itself?
23:22 JoeJulian Same way it does now.
23:22 JoeJulian Just locally.
23:22 amccloud semiosis: It would be an option you pass to the cli.
23:23 semiosis oh right like probing itself, gotcha
23:23 polenta|gone joined #gluster
23:23 amccloud JoeJulian: Do you have a link to that ticket?
23:23 JoeJulian I'm really not sure why we've gone through 3 minor versions with that kludge.
23:24 JoeJulian @query kludge
23:24 glusterbot JoeJulian: Bug http://goo.gl/AlvZg low, low, ---, lakshmipathi, CLOSED CURRENTRELEASE, volfile upgrade process seems kind-of kludged
23:24 JoeJulian hmm, not that one...
23:25 JoeJulian @query own hostname
23:25 glusterbot JoeJulian: No results for "own hostname."
23:25 partner fsck, its morning in a moment, cya
23:26 semiosis take care partner
23:27 semiosis JoeJulian: diff this against your squeeze 'dpkg -l'  http://pastie.org/5975776
23:27 glusterbot Title: #5975776 - Pastie (at pastie.org)
23:28 semiosis JoeJulian: i suspect different packages or versions
23:28 JoeJulian hmm, so /etc/init.d/fuse requires $remote_fs. In order to mount all the $remote_fs, networking has to start. networking will run /etc/network/if-up.d/mountnfs which will try to mount the _netdev volumes, of which glusterfs is one and it fails because fuse isn't loaded.
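To make that ordering concrete, assume a typical fstab entry and boot sequence (volume name and mountpoint are placeholders):

    # /etc/fstab: _netdev tells the boot scripts this mount needs networking first
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0  0

    # if-up.d/mountnfs runs as soon as the interface comes up; if the fuse module
    # hasn't been loaded by that point, the glusterfs mount fails
    lsmod | grep fuse || modprobe fuse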
23:31 sashko JoeJulian: yeah was looking for you previously
23:32 JoeJulian My answer is fairly similar to ndevos. If I were to guess I would say yes, but redhat manages that build separately from the community version so it's possible they do or don't include that patch.
23:33 JoeJulian sashko: ^
23:35 manik joined #gluster
23:36 clusterflustered how much cpu power is needed with gluster?
23:38 JoeJulian 65W
23:38 JoeJulian Actually, there's some doing it with ARM so probably even less than that.
23:38 sashko JoeJulian: was going to ask you if you know when granular locking patch was introduced, you know which version it was? friend of mine is running his guest VMs on gluster and self-heal is blocking it completely
23:38 sashko JoeJulian: haha
23:39 semiosis sashko: 3.3.0
23:39 JoeJulian sashko: Community version 3.3.0 was the first to have it. Would recommend 3.3.1 though.
23:39 sashko hm ok
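A quick way to confirm what version each node is actually running before recommending the upgrade (assuming shell access to the boxes):

    glusterfs --version   # reports the installed glusterfs version on clients and servers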
23:39 clusterflustered so 65W is quad core xeon class; if we have all 6 core machines, would a single cpu handle gluster well?
23:40 semiosis clusterflustered: JoeJulian was joking around, it all depends on your workload
23:40 sashko he's running rhel 3.3.0.5 maybe should tell him to upgrade. i'm assuming some fixes have gone into 3.3.1?
23:40 semiosis and the speed of your disks & network
23:40 JoeJulian I was running it successfully on some old dual core 32bit xeon boxes.
23:40 clusterflustered on those boxes, what was your disk and network setup like joejulian?
23:40 JoeJulian 4 disks, gig-e, 30 bricks.
23:41 semiosis clusterflustered: someone came around here who was using a super fast infiniband network and crazy ssd storage... in their case cpu was the bottleneck, but that's pretty unusual
23:41 sashko semiosis: i would like to have that problem
23:41 JoeJulian My disks were normal off-the-shelf sata-iii.
23:41 semiosis sashko: yep
23:41 JoeJulian +1
23:42 clusterflustered thanks semi, that answered 2 of my questions. the ssd's we have, the infiniband we don't. best we got is 4 separate gig-e ports aggregated
23:42 isomorphic joined #gluster
23:42 jjnash joined #gluster
23:42 nightwalk joined #gluster
23:42 semiosis yw
23:42 sashko btw guys I've discovered something interesting with seagate sas drives lately that came brand new from factory
23:43 sashko they were awfully slow in the first 24-48 hours
23:43 * semiosis doesnt like "interesting" hard drives
23:43 sashko yeah me neither!
23:43 JoeJulian well, semiosis, the only differences are a bunch of packages that I /don't/ have installed. The ones that I do are all the same version.
23:43 amccloud JoeJulian: Were you able to find that hostname ticket?
23:43 sashko so apparently those things do something called smart pre-scan
23:44 sashko and while the pre-scan hasn't completed, the drives are awfully slow, and they have to be idle for it to complete because smart doesn't run when the drive is busy
23:44 sashko so i had to let the drives sit idle for 24 hours for the pre-scan to finish and after that they were back to regular speed again
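If you want to check for this on a SAS drive, smartmontools can usually show it (device name is a placeholder; exact output depends on drive and firmware):

    # overall SMART info, including background scan status on many SAS drives
    smartctl -a /dev/sdb
    # on SCSI/SAS drives the background scan results log can also be read directly
    smartctl -l background /dev/sdb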
23:45 JoeJulian amccloud: Not yet... doing 3 things at once here... Here's a list of the bugs I'm following. It should be one of them (unless it was closed and I don't remember) https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=VERIFIED&bug_status=ASSIGNED&bug_status=MODIFIED&bug_status=ON_DEV&bug_status=ON_QA&bug_status=RELEASE_PENDING&bug_status=POST&email1=joe%40julianfamily.org&emailtype1=exact&emailassigned_to1=1&emailreporter1=1&emailcc1=
23:45 JoeJulian 1&list_id=1064024
23:45 glusterbot <http://goo.gl/mJ69q> (at bugzilla.redhat.com)
23:48 jjnash joined #gluster
23:48 nightwalk joined #gluster
23:54 JoeJulian Well, semiosis, I have no idea why it works for you. It shouldn't. I can see the path the processes take and it's obvious why it's not working. It would be cool to see why it does work though.
23:54 semiosis because i scare the **** out of computers
23:55 JoeJulian hehe
23:55 JoeJulian That's usually my effect too.
23:55 semiosis "show me the problem?" ... "oh now it's working"
