
IRC log for #gluster, 2012-10-17


All times shown according to UTC.

Time Nick Message
00:07 xiu joined #gluster
00:45 Bullardo joined #gluster
00:49 bala joined #gluster
01:02 lng joined #gluster
01:02 atrius 0-rpc-service: Auth too weak... trying to create a new cluster... one node keeps saying this.. the other one says transport endpoint not connected
01:03 lng Hi! Is it normal that a brick can't be added back to the cluster after it was removed?
01:10 nodots1 joined #gluster
01:21 xiu joined #gluster
01:21 nightwalk joined #gluster
01:21 adechiaro joined #gluster
01:21 a2 joined #gluster
01:21 copec joined #gluster
01:21 RNZ joined #gluster
01:21 flin joined #gluster
01:21 _Bryan_ joined #gluster
01:21 ladd joined #gluster
01:21 plantain joined #gluster
01:21 er|c joined #gluster
01:22 kevein joined #gluster
01:24 nodots joined #gluster
01:25 nodots left #gluster
01:47 rext7 joined #gluster
02:04 er|c joined #gluster
02:15 JoeJulian atrius: Are you sure the versions match?
02:15 JoeJulian lng: Yes and no. Is there an error?
02:17 atrius JoeJulian: they should.. but honestly at this point i'm not sure
02:17 aliguori joined #gluster
02:31 JoeJulian atrius: Ok, here's the best I can find so far. There are 3 types of auth that have been used: AUTH_UNIX, for which the comments say "unix style (uid, gids)", AUTH_GLUSTERFS, and AUTH_GLUSTERFS_V2. I would have to imagine that the same versions must use the same auth keys, but that's really just a guess at this point.
02:35 lng JoeJulian: it was yesterday and I don't remember the exact string, but it was something about 'it's already brick' or so...
02:35 lng never mind - I reformatted it
02:36 lng :-)
02:36 JoeJulian path or prefix
02:36 JoeJulian path or prefix is already
02:36 lng JoeJulian: you told me if no rebalancing is done, added volumes will not be available - that is not true
02:36 JoeJulian or a prefix of it is already part of a volume
02:36 glusterbot JoeJulian: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
02:36 JoeJulian yeah, that's the ticket...
02:37 lng I can see new files added to the new volumes
02:37 JoeJulian yay
02:37 lng JoeJulian: thanks for the link - need that for future
02:37 lng ah, that one
02:37 lng I remember that pain
02:37 Bullardo joined #gluster
02:38 lng setfattr
02:38 JoeJulian pfft... pain... pshaw.
02:38 lng it didn't work for me
02:38 lng :-)
02:38 JoeJulian you didn't do it right. ;)
02:38 lng haha
02:38 lng maybe
02:38 JoeJulian :)
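[Note: the fix behind that link amounts to clearing the leftover extended attributes on the old brick directory before re-adding it. A rough sketch, with /data/brick standing in for the real brick path:
    setfattr -x trusted.glusterfs.volume-id /data/brick
    setfattr -x trusted.gfid /data/brick
    rm -rf /data/brick/.glusterfs
    service glusterd restart ]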
02:39 lng after adding new volumes, I see new data is coming there - maybe I don't need to rebalance it?
02:40 lng I can see mounted gluster space is increased as well
02:40 lng no errors
02:41 lng JoeJulian: meanwhile, which version are you on?
02:42 JoeJulian 3.3.1
02:42 lng I'm on 3.3.0
02:43 lng JoeJulian: is it easy to upgrade live?
02:44 JoeJulian super.
02:45 JoeJulian As is always recommended, servers first then clients.
02:46 JoeJulian I prefer to do one server at a time, verifying the self-heal status before starting the next.
02:50 lng JoeJulian: please tell me about self-heal
02:50 lng how to verify?
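[Note: in 3.3 the self-heal state can be checked per volume from any server. A minimal sketch, with VOLNAME assumed:
    gluster volume heal VOLNAME info               # entries still pending heal
    gluster volume heal VOLNAME info healed        # recently healed entries
    gluster volume heal VOLNAME info split-brain   # entries needing manual repair ]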
02:57 atrius JoeJulian: for now i've moved to standard NFS for the purpose (need it up fast).. i'll circle back on gluster after my current testing :)
03:04 ika2810 joined #gluster
03:05 sashko joined #gluster
03:14 lng should I rebalance under www-data to keep the permissions?
03:22 shylesh joined #gluster
03:38 _Marvel_ joined #gluster
04:11 MTecknology joined #gluster
04:17 faizan joined #gluster
04:20 sripathi joined #gluster
04:24 lh joined #gluster
04:24 lh joined #gluster
04:26 MTecknology So.. I remember once upon a time I set up glusterfs on a server and didn't have to edit any config files... Any hints on where I can find that info again?
04:31 ankit9 joined #gluster
04:38 deepakcs joined #gluster
04:44 vpshastry joined #gluster
04:49 MTecknology root@domino:~# gluster peer status
04:49 MTecknology Connection failed. Please check if gluster daemon is operational.
04:49 MTecknology :S
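[Note: that CLI error usually just means glusterd isn't running or reachable. A quick check, assuming SysV-style init scripts:
    service glusterd status || service glusterd start
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # default glusterd log; path may vary ]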
05:15 puebele joined #gluster
05:20 sgowda joined #gluster
05:23 jays joined #gluster
05:24 faizan joined #gluster
05:26 kshlm joined #gluster
05:26 kshlm joined #gluster
05:31 bala1 joined #gluster
05:35 raghu joined #gluster
05:46 faizan joined #gluster
05:48 ankit9_ joined #gluster
06:02 rgustafs joined #gluster
06:08 guigui3 joined #gluster
06:15 hagarth joined #gluster
06:15 faizan joined #gluster
06:27 kshlm joined #gluster
06:27 kshlm joined #gluster
06:33 ramkrsna joined #gluster
06:33 ramkrsna joined #gluster
06:34 glusterbot New news from resolvedglusterbugs: [Bug 762976] ecryptfs does not work when the directory to be encrypted is on gluster mount <https://bugzilla.redhat.com/show_bug.cgi?id=762976>
06:40 kshlm joined #gluster
06:40 kshlm joined #gluster
06:40 joeto joined #gluster
06:46 ctria joined #gluster
06:48 deepakcs joined #gluster
06:50 deepakcs joined #gluster
06:59 Nr18 joined #gluster
06:59 Azrael808 joined #gluster
07:01 samppah hmmh.. 3.4 qa
07:01 samppah http://gluster.org/community/documentation/index.php/Planning34 is this list still accurate?
07:01 ekuric joined #gluster
07:08 lkoranda joined #gluster
07:19 Alpha64 joined #gluster
07:25 ngoswami joined #gluster
07:29 berend` joined #gluster
07:34 glusterbot New news from newglusterbugs: [Bug 867263] gluster peer probe appears in peer info <https://bugzilla.redhat.com/show_bug.cgi?id=867263> || [Bug 867252] gluster volume remove-brick force does not work <https://bugzilla.redhat.com/show_bug.cgi?id=867252>
07:41 kshlm joined #gluster
07:41 kshlm joined #gluster
07:47 TheHaven joined #gluster
07:52 Triade joined #gluster
07:54 stickyboy joined #gluster
08:09 adechiaro joined #gluster
08:14 andreask joined #gluster
08:19 badone_home joined #gluster
08:22 hagarth samppah: that list needs an update
08:25 samppah hagarth: ah ok, i'm mostly interested in qemu integration and multitenancy.. do you know the status of those?
08:28 berend joined #gluster
08:28 Humble joined #gluster
08:28 hagarth qemu integration is right now possible with upstream qemu and upstream glusterfs
08:29 hagarth multitenancy will not make it into 3.4
08:30 samppah thanks hagarth :)
08:30 manik joined #gluster
08:40 adechiaro joined #gluster
08:40 dobber joined #gluster
08:43 flowouffff Hello guys, quick question, is there any way to modify the default log directory for GlusterFS?
08:44 flowouffff (v3.3)
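[Note: a single global log-directory option doesn't come up in this discussion; in 3.3 log paths are typically set per process or per mount instead. A hedged sketch, both flags being assumptions:
    glusterd --log-file=/custom/logs/glusterd.log                             # daemon side
    mount -t glusterfs -o log-file=/custom/logs/client.log server:/vol /mnt   # client side ]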
08:57 sgowda joined #gluster
08:59 duerF joined #gluster
09:07 sripathi joined #gluster
09:10 hateya joined #gluster
09:15 zArmon joined #gluster
09:32 hateya joined #gluster
09:33 kshlm joined #gluster
09:33 kshlm joined #gluster
09:33 Alpinist joined #gluster
09:35 faizan joined #gluster
09:41 michig joined #gluster
09:41 michig Hi, i noticed download.gluster.org is back. When do you think the 3.3.1 packages for debian will be ready?
09:44 lng michig: download.gluster.org is not accessible for me
09:44 michig http://download.gluster.org/pub/gluster/glusterfs/3.3/LATEST/Debian/
09:45 glusterbot Title: Index of /pub/gluster/glusterfs/3.3/LATEST/Debian (at download.gluster.org)
09:45 lng oh
09:46 lng sorry, I used download.gluster.com
09:46 lng before this link was up: http://download.gluster.com/pub/gluster/glusterfs/LATEST/Ubuntu/12.04/glusterfs_3.3.0-1_amd64.deb
09:47 michig np
09:47 lng michig: but Debian|Ubuntu are empty
09:48 lng http://download.gluster.org/pub/gluster/glusterfs/3.3/LATEST/Ubuntu/
09:48 glusterbot Title: Index of /pub/gluster/glusterfs/3.3/LATEST/Ubuntu (at download.gluster.org)
09:48 lng no files
09:48 michig that's why I'm asking when the packages for 3.3.1 will be ready :P
09:48 lng michig: maybe some ppa?
09:50 michig nah, I am waiting for an "official" .deb from gluster since i got the info that gluster will create them for debian squeeze as well.
09:54 oneiroi- joined #gluster
10:17 sgowda joined #gluster
10:21 sripathi joined #gluster
10:21 sripathi joined #gluster
10:22 TheHaven joined #gluster
10:27 lkoranda_ joined #gluster
10:29 _Marvel_^265l^ joined #gluster
10:35 glusterbot New news from newglusterbugs: [Bug 859581] self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=859581>
10:39 balunasj joined #gluster
10:43 clopez joined #gluster
10:58 nightwalk joined #gluster
11:00 hagarth joined #gluster
11:01 Triade joined #gluster
11:01 lkoranda joined #gluster
11:05 glusterbot New news from newglusterbugs: [Bug 861481] volume sync fails <https://bugzilla.redhat.com/show_bug.cgi?id=861481> || [Bug 861308] lookup blocked while waiting for self-heal that fails due to pre-existing locks <https://bugzilla.redhat.com/show_bug.cgi?id=861308>
11:06 Humble joined #gluster
11:28 dobber [2012-10-17 10:53:55.945546] E [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-freecloud-replicate-1: Unable to self-heal contents of '<gfid:bc60a1bf-54e3-4936-84e2-d22dc1fce1b8>' (possible split-brain). Please delete the file from all but the preferred subvolume.
11:28 dobber how do i find out which file it is /
11:28 dobber ?
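[Note: one common technique: for a regular file, the gfid entry under the brick's .glusterfs directory is a hard link to the real file, so you can search the brick for another name with the same inode. A sketch, with the brick path assumed:
    GFID=bc60a1bf-54e3-4936-84e2-d22dc1fce1b8
    BRICK=/path/to/brick
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -not -path "*/.glusterfs/*"
    # directories are stored as symlinks under .glusterfs instead, so readlink those entries ]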
11:29 ika2810 left #gluster
11:35 glusterbot New news from newglusterbugs: [Bug 865825] Self-heal checks skip pending counts that they shouldn't <https://bugzilla.redhat.com/show_bug.cgi?id=865825>
11:40 shylesh joined #gluster
11:58 gbrand_ joined #gluster
12:05 glusterbot New news from newglusterbugs: [Bug 866916] volume info displays information about a brick that has already been removed <https://bugzilla.redhat.com/show_bug.cgi?id=866916>
12:17 pkoro joined #gluster
12:25 vpshastry left #gluster
12:30 ramkrsna joined #gluster
12:35 glusterbot New news from resolvedglusterbugs: [Bug 834464] Stat structure contains entries that are too large for defined data type in 32bit EL5 <https://bugzilla.redhat.com/show_bug.cgi?id=834464> || [Bug 819444] for few directories, ls command is giving 'Invalid argument' when one of the server(brick, distributed volume) is down <https://bugzilla.redhat.com/show_bug.cgi?id=819444>
12:37 oneiroi joined #gluster
12:39 DaveS__ joined #gluster
12:51 balunasj joined #gluster
13:05 glusterbot New news from newglusterbugs: [Bug 867406] gluster volume replace-brick does not work as expected <https://bugzilla.redhat.com/show_bug.cgi?id=867406>
13:07 gbrand_ joined #gluster
13:07 hagarth joined #gluster
13:15 Azrael808 joined #gluster
13:18 faizan joined #gluster
13:19 plarsen joined #gluster
13:19 bit4man joined #gluster
13:20 bennyturns joined #gluster
13:23 balunasj joined #gluster
13:32 Nr18_ joined #gluster
13:32 kkeithley joined #gluster
13:43 Bonaparte joined #gluster
13:46 sripathi joined #gluster
13:46 sashko joined #gluster
13:47 Bonaparte gluster is taking 1100% CPU on a server. It seems to happen only on this server. It causes high load and other processes are affected on the system
13:49 Azrael808 joined #gluster
13:50 hagarth joined #gluster
13:51 Bonaparte This happens when auto-heal runs
13:51 Bonaparte Is there a way to avoid high load due to this?
13:52 mspo faster network?
13:53 mspo Bonaparte: you could try messing with the performance. tunables
13:53 Bonaparte mspo, there are two other servers with same config. The problem doesn't occur on those two machines
13:59 lh joined #gluster
13:59 lh joined #gluster
14:05 gbrand_ joined #gluster
14:05 mspo Bonaparte: replica factor of 3?
14:06 Bonaparte mspo, yes
14:06 stopbit joined #gluster
14:07 gbrand_ joined #gluster
14:09 aliguori joined #gluster
14:13 mspo Bonaparte: is that node busy trying to heal stuff?  are the error logs a lot busier on it?
14:14 rkubany joined #gluster
14:14 Bonaparte mspo, yeah, it is busy trying to heal stuff. Yeah, error logs are a lot busier on this server
14:16 Nr18 joined #gluster
14:20 clopez joined #gluster
14:24 dbruhn joined #gluster
14:26 dbruhn Totally lazy question, but is there a way to check the fill levels of all of the bricks under a volume from the gluster system instead of like a df on each node?
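[Note: in 3.3 this should be possible from any peer without running df on each node. A sketch, volume name assumed:
    gluster volume status VOLNAME detail   # reports free and total disk space per brick ]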
14:27 guigui1 joined #gluster
14:31 wN joined #gluster
14:31 wushudoin joined #gluster
14:34 gbrand_ joined #gluster
14:38 semiosis :O
14:40 semiosis my PPA descriptions say to go here for support, yet people keep on emailing me.  meh
14:41 michig Any dev around? Still got no answer regarding the debian packages for 3.3.1
14:42 semiosis michig: yes, the download site has moved from .com to .org
14:42 semiosis michig: no, the debian packages aren't ready yet.  i hope to have them up this week, but can't promise
14:42 kkeithley well, most of us "devs" have other priorities. We're working on it.
14:42 michig i know that download.gluster.org is available, but no .debs
14:43 michig Okay ;)
14:43 semiosis though we do appreciate your interest.  it's nice to know people are hungry for glusterfs :)
14:43 michig No problem, just wanted to get a status for planning the new gluster :)
14:43 michig I'm very hungry for it :) Using it for 2 years now
14:43 semiosis sweet
14:44 michig Best solution after drbd+gfs2
14:44 lkoranda_ joined #gluster
14:44 MTecknology kkeithley: NOTHING is more important than ME
14:44 kkeithley good, I'll start working on NOTHING right now
14:44 kkeithley ;-)
14:45 semiosis ha
14:45 MTecknology :P
14:45 neofob joined #gluster
14:46 kkeithley semiosis: I grabbed your glusterfs_3.3.1-ubuntu1~precise1.debian.tar.gz from https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3/+packages
14:46 glusterbot Title: Packages in “ubuntu-glusterfs-3.3” : ubuntu-glusterfs-3.3 : semiosis (at launchpad.net)
14:46 kkeithley this is on debian wheezy FWIW.
14:46 kkeithley when I do a debuild I get dpkg-source: error: can't build with source format '3.0 (quilt)': no upstream tarball found at ../glusterfs_3.3.1.orig.tar.{bz2,gz,lzma,xz}
14:46 kkeithley but there is a ../glusterfs_3.3.1.orig.tar.gz
14:47 semiosis hm
14:47 kkeithley % ls ../glusterfs_3.3.1.orig.tar.gz
14:47 kkeithley ../glusterfs_3.3.1.orig.tar.gz
14:47 kkeithley it's there.
14:48 kkeithley probably something stupid that I'm not doing correctly
14:48 semiosis if you're going to build for debian please use the debian/ folder on the github repo i sent yesterday
14:48 semiosis the ubuntu package you're working with now isn't going to work for debian because it's "upstartified" and may also link to the wrong version of libssl
14:49 semiosis though that doesn't explain why it cant find the orig tarball
14:49 kkeithley remind me the URL of your github repo? I don't see it in the scrollback
14:50 gbrand__ joined #gluster
14:50 semiosis https://github.com/semiosis/glusterfs-debian
14:50 glusterbot Title: semiosis/glusterfs-debian · GitHub (at github.com)
14:52 kkeithley debuild in .../glusterfs-3.3.1/debian vs. dbuild in .../glusterfs-3.3.1 maybe?
14:52 kkeithley You're going to have to bear with me, I'm a debian/ubuntu newbie
14:53 kkeithley oops, no, nevermind.
14:54 semiosis debuild in the root of the source tree, not the debian/ subdir
14:54 semiosis i just updated the readme in github, added the step to update the debian/changelog file
14:55 semiosis kkeithley: happy to bear with you, i appreciate that you're getting into this and there's a lot to learn
14:56 kkeithley although I probably shouldn't, I have other stuff I'm supposed to be doing
14:57 semiosis so do I... it's a miracle debian survived this long
14:57 semiosis with such a horrifically complicated packaging process and an all volunteer workforce
14:57 semiosis :)
15:00 H__ rebalance layout fix is running for 5 days now
15:01 kkeithley is there a way to get dch to use vi instead of gnu nano?
15:02 H__ plain common sense perhaps ?   :-D
15:03 semiosis kkeithley: see the dch man page... it looks at env vars VISUAL and EDITOR and also uses something called sensible-editor
15:04 lkoranda joined #gluster
15:06 johnmark semiosis: srsly. there's a reason ubuntu caught on quickly.
15:06 semiosis several reasons :)
15:07 semiosis though not all because of debian ;)
15:07 kkeithley but ubuntu uses .debs and the .deb build process is essentially the same AFAICT at this point, so I don't think ubuntu gets props for this
15:07 wushudoin joined #gluster
15:08 semiosis kkeithley: yes the packaging tools are the same but ubuntu's process is a little bit more open
15:10 kkeithley okay, now with your github .../debian directory in place, ./debian/changelog updated to 3.3.1, etc. I still have glusterfs_3.3.1.orig.tar.gz, and it's still whining: ... but there does not seem to be
15:10 kkeithley an appropriate original tar file ...
15:10 faizan joined #gluster
15:11 kkeithley odd
15:11 guigui1 joined #gluster
15:13 kkeithley oh, it wants it in ../glusterfs_3.3.1.orig.tar.gz, i.e. one dir up from .../glusterfs-3.3.1
15:13 semiosis that's right
15:14 semiosis it will actually compare the contents of that tarball to the contents of the untarred source tree the debian/ folder is in, and if there's any diffs between the code it will generate a debian/patches/ folder for them
15:15 semiosis but that shouldnt happen here
15:15 kkeithley :q
15:16 kkeithley seems like it did though: dpkg-source: error: aborting due to unexpected upstream changes, see /tmp/glusterfs_3.3.1-1.diff.8AHWvy
15:16 johnmark kkeithley: awesome
15:17 semiosis kkeithley: ok i will try to reproduce this... which debian release are you using?
15:17 lkoranda joined #gluster
15:17 kkeithley wheezy
15:17 semiosis wheezy is unstable... ?
15:17 kkeithley wheezy/sid
15:18 semiosis and you've updated to the latest everything since you installed?
15:18 kkeithley yup, yesterday
15:18 semiosis ok i'm updating my sid vm and will try myself
15:18 semiosis hah, oops, i upgraded my sid vm to experimental... ok need to shave some yaks
15:18 semiosis :)
15:19 kkeithley dunno whether wheezy is unstable. Remember, I'm a debian newbie. Slackware, SuSE, and RHEL/Fedora are what I've used over the years
15:19 kkeithley The good thing about linux is they're all the same! ;-)
15:19 semiosis ah, according to http://wiki.debian.org/DebianWheezy wheezy is testing
15:19 glusterbot Title: DebianWheezy - Debian Wiki (at wiki.debian.org)
15:20 kkeithley that's for the changelog?
15:20 semiosis i believe so... will confirm once i get a wheezy vm running
15:20 johnmark Current Releases/Repositories
15:20 johnmark oldstable - The previous stable release (lenny).
15:20 johnmark stable - The current stable release (squeeze).
15:20 johnmark testing - The next generation release (wheezy).
15:20 johnmark unstable - The unstable development release (sid), w
15:21 kkeithley my /etc/debian_release says wheezy/sid
15:21 JoeJulian I can see why debian's so popular...
15:21 johnmark heh :)
15:22 semiosis JoeJulian: that's not much different from fedora... oldstable, stable, testing are like the three latest fedora releases up to & including the next release, then sid ~~ rawhide
15:22 semiosis if i understand fedora correctly
15:22 semiosis s/sid/unstable/
15:22 glusterbot What semiosis meant to say was: JoeJulian: that's not much different from fedora... oldstable, stable, testing are like the three latest fedora releases up to & including the next release, then unstable ~~ rawhide
15:22 saz_ joined #gluster
15:22 johnmark semiosis: actually rawhide most closely resembles experimental
15:23 johnmark because it's not a "full distro" per se
15:23 semiosis oh ok
15:26 wushudoin joined #gluster
15:27 kkeithley progress, now it's failing because it can't find my gpg key (because I haven't copied them to this machine)
15:28 semiosis kkeithley: you can ignore that
15:28 semiosis you won't need a signed package unless you are going to upload to a distribution
15:28 edward1 joined #gluster
15:29 semiosis above the glusterfs-3.3.1 folder you'll now have some new files
15:29 kkeithley it didn't give me a choice, it just aborted
15:29 semiosis among them one ending in .dch
15:29 er|c joined #gluster
15:29 er|c joined #gluster
15:29 semiosis kkeithley: you can now build binary packages using pbuilder
15:30 kkeithley .dch or .dsc? There is no .dch
15:30 semiosis first you need to initialize your pbuilder environment using 'sudo pbuilder --create' which will set up a chroot env to do the build... then you can use 'sudo pbuilder --build <the-.dch-file>'
15:30 semiosis oh, dsc
15:30 semiosis haha, i was confused by the dch command
15:30 * semiosis needs moar coffee
15:31 kkeithley first apt-get install pbuilder
15:31 kkeithley gah, what a process
15:31 semiosis pbuilder will put the binaries in the obvious place, you know, /var/cache/pbuilder/result... you'd have guessed that, i'm sure ;)
15:31 kkeithley of course
15:31 robo joined #gluster
15:32 kkeithley not!
15:32 semiosis lol
15:32 kkeithley once I get this working then I can make it build UFO .debs too :-)
15:32 kkeithley and a -dev .deb
15:32 semiosis great!
15:33 kkeithley a glusterfs-dev .deb
15:33 semiosis what's the -dev deb for?
15:33 semiosis wait i think we make one of those already...
15:33 kkeithley same thing the glusterfs-devel rpm is for. People who want to write xlators outside the glusterfs source tree.
15:33 kkeithley I didn't see it in your ppa
15:33 ika2810 joined #gluster
15:34 semiosis ah, hmm
15:34 semiosis ok, there used to be one, but at some point it got dropped, i dont remember when/why
15:35 kkeithley nor -rdma or -geo-replication
15:35 semiosis dropped from the debian official package that is... if you look at my other PPAs, like https://launchpad.net/~semiosis/+archive/upstarted-glusterfs-3.3/+packages, you'll see there is a libglusterfs-dev package built
15:36 kkeithley okay
15:36 semiosis so you can look at that to compare
15:36 kkeithley I suppose the -rdma and -geo-replication bits are in the main .deb?
15:36 semiosis yes
15:37 kkeithley And I dunno, is there a reason to have a separate -rdma and -geo-replication .deb even?
15:37 semiosis key to translating package names between debian/ubuntu official and my custom PPAs is... debian/ubuntu glusterfs-server = semiosis glusterd, debian/ubuntu glusterfs-common = semiosis libglusterfs0
15:37 kkeithley IOW, I don't necessarily feel compelled to mirror the rpm packaging
15:38 semiosis agreed
15:39 kkeithley a foolish consistency is the hobgoblin of little minds.
15:39 er|c joined #gluster
15:41 kkeithley okay, here goes nothing --- pbuild --build glusterfs_3.3.1-1.dsc
15:42 kkeithley s/pbuild/pbuilder/
15:42 glusterbot What kkeithley meant to say was: okay, here goes nothing --- pbuilder --build glusterfs_3.3.1-1.dsc
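[Note: the build flow being worked out above, collected in one place. A sketch assuming the debian/ folder from semiosis's github repo sits inside an unpacked glusterfs-3.3.1 tree:
    cd glusterfs-3.3.1
    dch -v 3.3.1-1                                   # update debian/changelog
    debuild -S -us -uc                               # needs ../glusterfs_3.3.1.orig.tar.gz one dir up
    sudo pbuilder --create                           # once, to set up the chroot build env
    sudo pbuilder --build ../glusterfs_3.3.1-1.dsc
    ls /var/cache/pbuilder/result                    # binary .debs land here ]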
15:43 faizan joined #gluster
15:45 bala joined #gluster
15:46 kkeithley well, nothing in /var/cache/pbuilder/result    :-(
15:47 kkeithley oh, failed to get dependencies... Failed to fetch http://ftp.us.debian.org/debian/pool/main/m/m4/m4_1.4.16-4_amd64.deb: 404  Not Found [IP: 64.50.236.52 80]
15:50 kkeithley I can get it with curl from my shell.
15:51 elyograg i read some of the backlog.  on debian, the unstable release is *always* called sid. All debian releases are named after Toy Story characters. Sid is rather unstable. :)
15:51 kkeithley but pbuilder can't in its chroot?
15:55 seanh-ansca joined #gluster
15:57 Nr18 joined #gluster
16:00 crashmag joined #gluster
16:02 semiosis kkeithley: that's odd
16:02 semiosis try again?
16:02 JoeJulian I wonder how long 'till Disney sues...
16:07 akadaedalus joined #gluster
16:09 kkeithley I just put the missing .deb in /var/cache/pbuilder/aptcache and that seemed to fix it. At least now I have glusterfs .debs in /var/cache/pbuilder/result
16:10 sashko joined #gluster
16:11 semiosis yay
16:15 kkeithley JoeJulian: huh?
16:18 semiosis debian using toy story names
16:19 kkeithley oh that. Good luck to them if they try. Of course these days they'd probably win.
16:19 kkeithley So, shall I put these on download.g.o?
16:20 bala joined #gluster
16:21 semiosis well we wanted to have an apt repo set up, ideally with packages signed by a gluster key
16:22 kkeithley I asked about a gluster key, never got a response.
16:22 kkeithley And if we're going to mirror what's in the repo then does it matter where we put them first?
16:22 kkeithley The real question is, do we want these binaries?
16:23 semiosis who'd you ask about a key?
16:23 kkeithley ostensibly johnmark, although he might not have seen it.
16:24 kkeithley The Fedora/EPEL rpms are signed by me with my kkeithle@redhat.com key.
16:24 kkeithley for now
16:25 kkeithley And having done wheezy, what should I do next? I guess I should push it back up the hill on squeeze? Does anyone care about lenny?
16:26 semiosis i think squeeze and wheezy are the only ones we should provide
16:26 semiosis and for ubuntu... i'm inclined to only provide for precise (12.04) and newer
16:31 lh joined #gluster
16:31 lh joined #gluster
16:38 ika2810 left #gluster
16:40 geek65535 joined #gluster
16:41 semiosis hi geek65535
16:41 geek65535 Hi, semiosis.
16:42 semiosis so to recap, you're using the upstarted-glusterfs-3.3 packages and having issues with mounts not working at boot time, probably due to network not being up (since it's a bridge)
16:42 semiosis what's your fstab line?
16:42 geek65535 I'm having a problem with mounting volumes on boot. My box is Ubuntu 12.04 server amd64, with the NIC bridged to provide support to the KVM guests that I am (going to be running) on it.
16:42 geek65535 yup.
16:43 geek65535 er, gimme a sec. remote access to this machine is spotty.
16:44 geek65535 vmhost1:/libvirt_ssd    /var/lib/libvirt                glusterfs       defaults,nobootwait     0 0
16:44 geek65535 Right now, the volume only exists on this host (only one brick).
16:44 semiosis is vmhost a remote machine or localhost?
16:44 semiosis s/vmhost/vmhost1/
16:44 glusterbot What semiosis meant to say was: is vmhost1 a remote machine or localhost?
16:45 johnmark kkeithley: nobody cared about lenny
16:45 johnmark semiosis: agreed re: squeeze and wheezy
16:45 geek65535 localhost. I do have a second volume that has a brick on another host (the creatively named vmhost2), but the first one (libvirt_ssd) fails too.
16:46 johnmark semiosis: we should always support the most recent LTS release
16:46 semiosis johnmark: agreed, but nothing older than that
16:46 semiosis i'd actually say both the latest LTS and the latest regular release
16:47 semiosis imnsho if you're using LTS and you don't pay canonical for support you're nuts :)
16:47 johnmark semiosis: +!
16:47 semiosis better to stay current if you're going the open source route
16:47 johnmark heh... uh, that's +1
16:47 johnmark yup yup
16:47 johnmark kkeithley: you asked me for a key? say wha?
16:47 * johnmark wonders if my memory is worse than previously thought
16:48 ramkrsna joined #gluster
16:48 ramkrsna joined #gluster
16:48 vimal joined #gluster
16:48 johnmark kkeithley: and yes, we want to host the binaries
16:48 johnmark kkeithley: wow, I owe you  big time
16:49 zArmon left #gluster
16:49 semiosis geek65535: ok i have a quick hackey fix for you, which is not ideal, but should get you working pretty quick...  add 'sleep 5' to the top of /sbin/mount.glusterfs :D
16:50 semiosis geek65535: the "right way" to solve this is to block mounting of glusterfs filesystems until the network is ready, and although i have some idea of how to do that, i'm not sure enough to tell you to try it
16:52 geek65535 I can certainly try the 'sleep'. It sounds like it would work.
16:52 geek65535 I would like to solve this the right way, though. If you'd like to expand on how to block mounting glusterfs til the network is ready, I'd be willing to work on it.
16:54 geek65535 I took my bridge out at one point (going back to a straight eth0), and I did get a mount at boot. Unfortunately, that's not workable for me, since I have to have the bridge for KVM. I've been playing with bridge parameters to help speed it up. The big thing was turning off spanning tree (bridge_stp off), since there's no need for it, and it really slows things down.
16:55 geek65535 I'd like to find a more general solution (ie blocking while waiting for the network) so the timing isn't so important.
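[Note: for reference, a bridge stanza along those lines in /etc/network/interfaces; illustrative values, not taken from this log:
    auto br0
    iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off     # no spanning tree, as discussed above
        bridge_fd 0        # no forwarding delay
        bridge_maxwait 0   # don't wait for the bridge to settle ]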
16:56 semiosis geek65535: how comfy are you with upstart? :)
16:57 geek65535 Getting there. I'm still more comfortable with old-school init scripts (Solaris background), but upstart is definitely getting clearer to me lately.
16:58 semiosis check out the upstart-events man page, that describes the basic theory of the boot process.  key thing to know is that it's asynchronous and event driven
16:59 semiosis mountall sends mounting events before trying to mount filesystems from fstab, you can intercept those events and block them to delay the mounting of a filesystem
16:59 akadaedalus just in time for systemd to become the latest and greatest fad.
17:00 semiosis akadaedalus: idk about that
17:00 seanh-ansca joined #gluster
17:00 semiosis geek65535: upstart jobs live in /etc/init, where you can find lots of examples of all kinds of stuff
17:00 semiosis geek65535: look at /etc/init/mounting-glusterfs.conf, this is where we block mounting of glusterfs filesystems until the glusterd daemon is running
17:01 semiosis something like that would be needed to block until your network interface is up, but i dont know exactly how to write that
17:02 johnmark kkeithley: ping
17:03 johnmark kkeithley: to install the swift packages from your epel repo, what other repos do I need to enable?
17:03 johnmark kkeithley: for RHEL 6.3
17:03 geek65535 I've looked at that file. I've also changed the 'start on' line in /etc/init/glusterd.conf to 'start on static-network-up', thinking that if I delayed gluster starting until the network started, then that would delay the mount. Didn't work.
17:04 semiosis interesting theory, are you sure that static-network-up does not happen before your bridge is up?
17:06 semiosis seems like that strategy could work, and if you want glusterd to listen on the bridge interface (which i think you do) then it's probably a good way to go
17:08 geek65535 Not for sure. One thing I have seen is that it looks like glusterd itself is failing because of the lack of network. I just rebooted the machine in question (I *hate* how long Dell's 12th gen hardware takes to boot!), and not only were the volumes not mounted, but I couldn't mount them until I restarted glusterd.
17:08 semiosis yes, if network is not ready, glusterd will probably fail
17:10 semiosis ok, instead of static-network-up, try 'start on net-device-up INTERFACE=br0' or whatever your bridge is
17:10 semiosis ... in the glusterd.conf
17:11 geek65535 I've actually tried that as well.
17:11 semiosis also there's some inconsistency in upstart jobs, some use IFACE= and others use INTERFACE=, idk why, so try both of those
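[Note: pulling those suggestions together, the start condition in /etc/init/glusterd.conf would look something like this; untested sketch, bridge name assumed:
    # /etc/init/glusterd.conf
    start on net-device-up IFACE=br0   # or INTERFACE=br0; spelling varies between jobs ]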
17:12 geek65535 brb, wanna test something.
17:13 semiosis k
17:16 geek65535 I really suspect that the problem is that there's a disconnect between the bridge physically coming up, and how long it takes before it'll pass packets. I'm trying to get a better sense of how long it takes--and I'd really like a way to block on that.
17:17 semiosis geek65535: have you checked your syslog?  maybe you can tell whats going on by comparing times between network interface log events and glusterd/glusterfs logs
17:17 semiosis good news though is that in any case the sleep trick should work :)
17:18 kkeithley johnmark: glusterfs-epel-swift I think. Let me look
17:18 semiosis we can also add a sleep to the glusterd upstart job to delay it :)
17:18 geek65535 At one point I did, and the bridge came up about 2 seconds after the mount failed out.
17:18 semiosis ...if we can't find out exactly what to block on
17:19 kkeithley download the epel-glusterfs.repo file to /etc/yum.repos.d, enable epel-glusterfs and epel-glusterfs-swift
17:19 johnmark kkeithley: yeah I did that
17:19 johnmark but I'm getting install errors for all sorts of random python things
17:20 semiosis although blocking mountall is an explicitly ubuntu thing to do, having to wait for a network interface is not... does anyone here know about waiting for bridges, in general?
17:20 johnmark kkeithley: http://pastie.org/5074237
17:20 glusterbot Title: #5074237 - Pastie (at pastie.org)
17:21 kkeithley have you registered your RHEL so that you can get those RPMs from the RHEL channel/repo?
17:21 johnmark kkeithley: oh. just created a new instance on AWS
17:21 johnmark how do I do that? heh
17:21 kkeithley ummm. hang on, let me fire up my rhel vm
17:22 geek65535 kkeithley: for as much as I loathe Oracle for some things, their repos are completely open, no registration required...
17:22 johnmark kkeithley: thanks :)
17:22 kkeithley what can I say? I don't make those decisions
17:22 johnmark heh :)
17:23 kkeithley rhn_register
17:23 Bullardo joined #gluster
17:23 kkeithley I dunno, you might have to do something on RHN first. It's been so long since I did it I don't really remember.
17:26 kkeithley johnmark, semiosis: ... apt has its own mechanism for signing whole releases, via SecureApt
17:26 semiosis "was implemented in Apt version 0.6 in 2003, which Debian migrated to in 2005"
17:26 johnmark kkeithley: ok
17:26 * kkeithley wonders what this means wrt me putting .debs on download now and creating an apt repo later
17:27 kkeithley maybe nothing
17:32 kkeithley johnmark: wrt "the key". I don't suppose there's a gluster.org or gluster.com "official" gpg key for signing things like RPMs, .debs, and apt repos.
17:34 gbrand_ joined #gluster
17:39 johnmark kkeithley: ah, no. we should definitely create one
17:40 johnmark kkeithley: the only type of signing we did previously was MD5 sums
17:40 johnmark which is, like, inadequate
17:40 johnmark to put it mildly
17:45 sshaaf joined #gluster
17:49 kkeithley okay, apt repo for debian wheezy is at http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/wheezy.repo
17:49 glusterbot Title: Index of /pub/gluster/glusterfs/3.3/3.3.1/Debian/wheezy.repo (at download.gluster.org)
17:49 kkeithley guess I need to put my pub key up there too
17:51 semiosis thanks kkeithley!
17:57 kkeithley someone want to try adding the repo, and installing with apt-get? See the README.txt at the above URL for more info
17:59 semiosis kkeithley: sure
18:00 johnmark kkeithley: sweeeet
18:00 semiosis kkeithley: W: GPG error: http://download.gluster.org wheezy InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 3730DD4989CCAE8B
18:00 glusterbot Title: Index of / (at download.gluster.org)
18:01 kkeithley did you get the gpg.key that's there and add it with `apt-key add ...`
18:01 johnmark kkeithley: btw, kudos on your UFO blogs. totally straightforward and easily followed
18:01 semiosis d'oh!
18:01 johnmark haha
18:01 semiosis kkeithley: let me try that
18:01 johnmark semiosis: :P
18:02 kkeithley like it says in the README.txt? ;-)
18:02 semiosis i made this when no one read my readmes... http://www.quickmeme.com/meme/3qidwv/
18:02 glusterbot Title: Bad Luck Brian - writes detailed readme file no one reads it (at www.quickmeme.com)
18:03 kkeithley Long READMEs suck. This one's Short-And-Sweet®
18:03 semiosis ok wait, i had the extra coffee.... now where's this readme?  i dont see it in http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/wheezy.repo
18:03 glusterbot Title: Index of /pub/gluster/glusterfs/3.3/3.3.1/Debian/wheezy.repo (at download.gluster.org)
18:04 kkeithley up one dir in .../Debian
18:04 kkeithley along with the gpg.key file
18:04 semiosis all i see is gpg.key and wheezy.repo, dont see any readme :(
18:05 * semiosis tries another browser
18:05 semiosis same
18:05 kkeithley why is this webserver blocking README and README.txt files
18:05 kkeithley try now
18:05 kkeithley readme.txt
18:05 semiosis now i see it
18:05 kkeithley s/blocking/filtering/
18:05 glusterbot What kkeithley meant to say was: why is this webserver filtering README and README.txt files
18:07 semiosis success
18:07 semiosis no key warnings
18:08 kkeithley woohoo
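[Note: the client-side steps just exercised, roughly; the exact 'deb' line comes from the wheezy.repo file and is assumed here:
    wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/gpg.key | apt-key add -
    # append the 'deb ...' line from wheezy.repo to /etc/apt/sources.list (or a sources.list.d file)
    apt-get update
    apt-get install glusterfs-server ]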
18:08 johnmark kkeithley: yeah, I noticed that too and couldn't figure it out
18:08 kkeithley I learned it once in my Fedora and EPEL repos, then had to relearn it again just now
18:08 semiosis kkeithley: s/readme.txt/index.html/
18:09 Nr18 joined #gluster
18:09 semiosis anyway, how about some i386?  :P
18:09 kkeithley that'll hide the subdirs
18:09 semiosis shouldn't matter
18:10 tc00per wow... you guys have been busy. glad I'm not on debian :)
18:10 kkeithley What century are we in? i386? 32-bit? ;-)
18:10 semiosis as long as you give the 'deb ...' line like you do, and a link to the gpg.key file, shouldn't need to be able to browse dirs
18:10 kkeithley you want stable/squeeze amd64 first or wheezy 32-bit first
18:11 semiosis kkeithley: considering someone asked for squeeze, but no one (but me, sarcastically) asked for i386, go for the squeeze, please
18:12 kkeithley I forget how I installed wheezy. Can I install from a LiveCD? I don't want to download six DVD ISOs.
18:12 johnmark kkeithley: modified apache config - should auto-append readme stuff in a jiff
18:12 tc00per @repo
18:12 glusterbot tc00per: I do not know about 'repo', but I do know about these similar topics: 'repository', 'yum repo', 'yum repository', 'git repo', 'ppa repo', 'yum33 repo', 'yum3.3 repo'
18:12 semiosis johnmark: nice!
18:13 semiosis kkeithley: i used the amd64 netinst cd image from here http://www.debian.org/releases/squeeze/debian-installer/
18:13 glusterbot Title: Debian -- Debian squeeze Installation Information (at www.debian.org)
18:13 geek65535 semiosis: *Almost* got it! I made two changes to /etc/init/glusterd.conf:
18:13 geek65535 pre-start script
18:13 geek65535 sleep 5
18:13 geek65535 end script
18:13 geek65535 emits glusterd
18:13 johnmark semiosis: ha. it was hard-coded to ignore everything that starts with README*
18:14 kkeithley oh, good. I'll do that.
18:14 johnmark which was the default behavior... weird
18:14 semiosis kkeithley: which is minimal and uses apt-get to pull in most of the system
18:14 kkeithley just what I want
18:15 semiosis geek65535: what's the 'emits glusterd' for?
18:15 geek65535 There was a problem in regards to /etc/init/mounting-glusterfs.conf: "exec start wait-for-state WAIT_FOR=glusterd WAITER=mounting-glusterfs"
18:15 geek65535 In order to WAIT_FOR=glusterd, something has to emit glusterd
18:16 semiosis um
18:16 geek65535 My problem now is that the volume in question holds my KVM files (/var/lib/libvirt), and it's getting mounted after libvirt is started.
18:18 semiosis geek65535: i dont think that's what emits is for, nor how wait-for-state works
18:18 semiosis glusterd is not a state it is a service, and the state wait-for-state waits for by default is started/running
18:20 geek65535 glusterd is a service that has to be in a certain state, and we need to wait until it is in the state of started/running. Agreed?
18:20 semiosis yep
18:21 geek65535 So, if the config doesn't emit glusterd, then how does the WAIT_FOR=glusterd  know what to wait for?
18:21 semiosis upstart emits a 'started' event when the service is started, see started(7) man page
18:26 geek65535 started(7) isn't explicit, but it looks like the emit is done automatically and based off the servicename.conf file (eg emits 'servicename'). So maybe that isn't necessary. I'm trying on a test VM right now.
18:26 semiosis that's how i understand it
18:33 Bullardo joined #gluster
18:39 sshaaf joined #gluster
18:42 kkeithley uh, that's unhappy. squeeze net install completed, then on boot I get an unaligned pointer abort.
18:44 y4m4 joined #gluster
18:44 semiosis :[
18:44 semiosis ^ sad robot
18:48 semiosis kkeithley: i suspect the libssl dependency will need to change, possibly from libssl1.0.0 to libssl0.9.8 -- that's under glusterfs-common in debian/control
18:48 kkeithley okay
18:48 semiosis that's for squeeze
18:48 kkeithley yup
18:48 kkeithley first I need a working system though
18:49 Technicool joined #gluster
18:49 semiosis well, ok
18:49 kkeithley minor details
18:54 Bullardo joined #gluster
18:54 wushudoin| joined #gluster
19:00 vimal joined #gluster
19:02 stickyboy joined #gluster
19:08 hagarth1 joined #gluster
19:10 kkeithley well, I pushed it back up the hill pulling from a different netinst repo and I'm still getting the unaligned pointer abort. Odd that the netinst ISO kernel runs, but not the kernel it installs. Why wouldn't they be the same?
19:12 pdurbin KVM on gluster whiteboard - https://twitter.com/philipdurbin/status/258646361438822400
19:12 glusterbot Title: Twitter / philipdurbin: KVM on gluster whiteboard ... (at twitter.com)
19:13 elyograg kkeithley: the installer, especially netinst, pulls all packages from the internet, what's present in the installer isn't used.  I believe that Fedora net installer ISOs do the same.
19:17 kkeithley yup, I know that what's in the installer isn't used, I'm just mildly surprised that the installer and what it installs are otherwise the same version. Eating your own dogfood, drinking your own champagne, and all that
19:17 kkeithley I'll try installing from the Live CD. It's 6.0.5 instead of the netinst that's 6.0.6
19:18 kkeithley s/are otherwise/aren't otherwise/
19:18 glusterbot What kkeithley meant to say was: yup, I know that what's in the installer isn't used, I'm just mildly surprised that the installer and what it installs aren't otherwise the same version. Eating your own dogfood, drinking your own champagne, and all that
19:22 elyograg that is indeed odd.
19:25 elyograg i wrote that before I saw your correction.  I installed two fedora 17 systems a couple weeks apart using the exact same network install CD.  The two systems got different kernel versions installed.  In both cases, a 'yum upgrade' immediately after installation indicated everything was up to date.
19:32 seanh-ansca joined #gluster
19:33 elyograg I just used yum to bring a CentOS 6 system current.  apparently they expect your boot partition to be bigger than your RAM size.  it failed to create a kdump file.  my boot partition is only 500MB, RAM is 64GB.  Even the root partition is only 50GB.
19:35 elyograg The partition for the solr index is 2.5TB on df -h, though.
19:44 kkeithley well, it seems squeeze just isn't going to run in a vm for me
19:46 semiosis strange, works fine for me
19:46 kkeithley at least with the version of qemu/kvm on this machine.
19:47 elyograg some googling suggests that changing the graphics adapter from vmvga to cirrus or stdvga will fix it.
19:49 kkeithley it was qxl by default. I'll try vga
19:49 kkeithley much better.  Whoodathunkit
19:50 semiosis certainly not me
19:50 elyograg a very specific bug report: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=653068
19:50 glusterbot Title: #653068 - squeeze-VMs fail to boot with qxl video - Debian Bug report logs (at bugs.debian.org)
19:50 semiosis i just click new vm in virt-manager and chose debian squeeze from the list! :)
19:50 dblack joined #gluster
19:50 kkeithley yeah, that's what I did too
19:50 semiosis odd then
19:53 kkeithley well, the first two times I picked Debian Squeeze. That most recent time though I tried Debian Wheezy, because of the definition of insanity
19:54 y4m4 joined #gluster
20:18 hchiramm_ joined #gluster
20:18 Nr18 joined #gluster
20:23 sshaaf joined #gluster
20:30 kkeithley gahh. trying to run debuild on squeeze: /usr/share/cdbs/1/class/python-module.mk:43: *** unsupported Python system:  (select either pysupport or pycentral).  Stop.
20:30 kkeithley dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2
20:30 kkeithley debuild: fatal error at line 1325:
20:30 kkeithley dpkg-buildpackage -rfakeroot -d -us -uc -S -sa failed
20:31 * semiosis goes to reproduce
20:31 kkeithley I did apt-get update, apt-get upgrade, apt-get of all the dependencies, changed cdbs >= 0.4.89, libssl0.9.8
20:32 semiosis installing the build tools now
20:35 kkeithley google's no help
20:36 semiosis if it were that easy to build debian packages everyone would be doing it
20:38 kkeithley I actually just tar-ed up my whole source tree from wheezy. I wonder if there are turds left from that that are biting me. I'll try with a "clean tree" and see what happens
20:39 semiosis shouldn't be any turds left, pbuilder does a clean room build in a chroot
20:40 semiosis doesnt touch your source tree
20:40 semiosis problem is the debian/rules file
20:41 oneiroi joined #gluster
20:44 semiosis kkeithley: removing the python-module.mk include from the rules file let debuild proceed, but i dont know at what cost
20:44 semiosis pbuilding now
21:33 JoeJulian @ports
21:33 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
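[Note: expressed as iptables rules on the servers, that factoid translates to roughly the following; the brick-port upper bound is an assumption (one port per brick, counting up from 24009):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 24009:24024 -j ACCEPT   # bricks; widen as bricks are added
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT ]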
21:38 * johnmark tries to remember who claimed to be a CyanogenMod dev
21:42 JoeJulian Heh, gunna get me some CM10?
21:43 semiosis got my eye on a "bad esn" nexus s... want to use it wifi only with cm10
21:43 semiosis they're 60-100 on ebay lately
21:46 hattenator joined #gluster
21:50 y4m4 joined #gluster
22:06 MTecknology I have a volume that doesn't want to let me delete it... any tips?
22:06 MTecknology gluster volume delete ngx; Volume ngx does not exist; gluster volume list; ngx
22:06 JoeJulian Make sure it's stopped.
22:07 JoeJulian restart glusterd
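[Note: i.e., roughly:
    gluster volume stop ngx      # if it's still started
    service glusterd restart
    gluster volume delete ngx ]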
22:07 semiosis hey JoeJulian, check this out... http://community.gluster.org/q/does-glusterfs-support-mkfs-command/
22:07 glusterbot Title: Question: Does GlusterFS support mkfs command? (at community.gluster.org)
22:07 semiosis you use disk images on glusterfs, right?  maybe you can answer this question
22:07 MTecknology restart.. it went away
22:07 MTecknology :)
22:08 JoeJulian Ugh... I hate that bug.
22:08 JoeJulian It just won't die.
22:09 MTecknology there's also what may be a bug where you can create a volume where the brick and volume are the same directory
22:10 * MTecknology thinks it should just check to see if they're the same and call you an idiot if you try to do that
22:12 JoeJulian What command?
22:12 MTecknology gluster volume create gv0 replica 2 node01.mydomain.net:/var/lib/glusterd/vols/gv0 node02.mydomain.net:/var/lib/glusterd/vols/gv0
22:14 JoeJulian Yeah, I thought I saw a bug and commit for that...
22:15 MTecknology maybe semiosis didn't get that ppa updated yet... blame it on him
22:17 MTecknology So.... is it a bad idea to make /mnt/ngx/, throw all the data I want synchronized in it, then symlink from the rest of the system to it?
22:20 geek65535 semiosis: I have beat the problem into submission. Adding the sleep in the pre-start script section of /etc/init/glusterd.conf allows glusterd to start properly. The mountall still fails, so /etc/rc.local has a 'mount -a'. That still happens after libvirt is started, so I also have a 'service libvirt-bin restart' in /etc/rc.local.
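[Note: geek65535's final workaround, pieced together from the above; untested sketch:
    # /etc/init/glusterd.conf, added above the exec line:
    pre-start script
        sleep 5
    end script
    # /etc/rc.local, before the final 'exit 0':
    mount -a
    service libvirt-bin restart ]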
22:20 JoeJulian MTecknology: That's what I do.
22:21 MTecknology splendid! :D
22:21 JoeJulian "mountall still fails" do you use the _netdev mount option?
22:21 geek65535 joejulian: fuse now complains that it is an unrecognized option.
22:23 JoeJulian That's okay.
22:23 JoeJulian That's just the bash script.
22:23 JoeJulian The mount command ignores it, but the init scripts do not.
22:23 semiosis JoeJulian: he's on ubuntu
22:24 duerF joined #gluster
22:24 JoeJulian I thought we had decided that upstart did that also.
22:29 _Marvel_ joined #gluster
22:35 plarsen joined #gluster
22:35 bit4man joined #gluster
22:42 y4m4 joined #gluster
22:43 semiosis JoeJulian: not ubuntu upstart, it's different
22:43 semiosis JoeJulian: but centos/rhel upstart acts just like the old init
22:48 dsj joined #gluster
22:49 dsj Greetings all.  Question: does performance.stat-prefetch not work in 3.3?
22:49 dsj I set it with sudo gluster volume set <volname> performance.stat-prefetch on one of my peers but that seems to have no effect on the volfile downloaded by a client when mounting.
23:00 JoeJulian It's on by default, I believe.
23:01 JoeJulian Nope, I'm wrong. It's off by default.
23:02 JoeJulian Oh, no, I'm wrong again.
23:02 JoeJulian I was looking at performance.nfs.stat-prefetch
23:04 Ryan_Lane joined #gluster
23:04 Ryan_Lane you guys want to see an interesting graph?
23:04 Ryan_Lane http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Glusterfs%20cluster%20pmtpa&m=mem_report&r=hour&s=by%20name&hc=4&mc=2&st=1350514964&g=mem_report&z=large&c=Glusterfs%20cluster%20pmtpa
23:04 glusterbot Title: Ganglia: Graph all periods (at ganglia.wikimedia.org)
23:04 JoeJulian Oh, wow... performance.stat-prefetch is gone in 3.3
23:04 Ryan_Lane seems there's a memory leak in gluster 3.3
23:04 Ryan_Lane a really, really bad one
23:05 JoeJulian 3.3.1?
23:05 JoeJulian I know at least one was found in 3.3.0 and fixed.
23:05 Ryan_Lane 3.3.0
23:05 Ryan_Lane ah
23:05 Ryan_Lane good to know
23:05 * JoeJulian found and reported it. :D
23:06 Ryan_Lane I need to upgrade it seems
23:06 JoeJulian And that looks pretty much like the one I was seeing on my fedora box.
23:06 Ryan_Lane but first, I'm doing a killall :)
23:06 JoeJulian I've not seen it since upgrading.
23:06 Ryan_Lane good to know
23:07 johnmark Ryan_Lane: howdy
23:07 johnmark Ryan_Lane: which OS?
23:07 Ryan_Lane ubuntu
23:08 JoeJulian johnmark: Did you get me CM10 yet? ;)
23:09 johnmark haha
23:09 johnmark :P
23:10 Ryan_Lane restarting the processes helped a lot :)
23:10 * johnmark is looking for a simple HTML5 app for a UFO demo
23:10 johnmark or someone who wants to write one :)
23:14 JoeJulian Can showoff do html5?
23:15 dsj JoeJulian: interesting, it doesn't complain when I set that option
23:15 dsj but it just doesn't do anything :(
23:15 dsj So what's the best way to make ls -l and similar operations not suck?
23:16 dsj This is on a 2x replicated, distributed volume
23:16 Ryan_Lane heh. look at the graphs now: http://ganglia.wikimedia.org/latest/graph_all_periods.php?c=Glusterfs%20cluster%20pmtpa&m=mem_report&r=hour&s=by%20name&hc=4&mc=2&st=1350514964&g=mem_report&z=large&c=Glusterfs%20cluster%20pmtpa
23:16 glusterbot Title: Ganglia: Graph all periods (at ganglia.wikimedia.org)
23:19 Alpha64 joined #gluster
23:20 johnmark JoeJulian: showoff? never tried
23:21 johnmark Ryan_Lane: slightly better :)
23:22 johnmark JoeJulian: I'll give somebody some money if they can get it to display content from a UFO server
23:36 tryggvil joined #gluster
23:37 Ryan_Lane left #gluster
