
IRC log for #gluster, 2013-02-18


All times shown according to UTC.

Time Nick Message
00:09 sjoeboo_ joined #gluster
00:44 sjoeboo_ joined #gluster
01:20 sjoeboo_ joined #gluster
01:36 bharata joined #gluster
01:38 bala joined #gluster
01:49 tomsve joined #gluster
01:58 sjoeboo_ joined #gluster
02:36 sjoeboo_ joined #gluster
02:44 raven-np joined #gluster
02:46 bulde joined #gluster
03:07 pipopopo joined #gluster
03:11 jdarcy joined #gluster
03:13 sjoeboo_ joined #gluster
03:18 jag3773 joined #gluster
03:47 tomsve joined #gluster
03:53 sjoeboo_ joined #gluster
04:01 Ryan_Lane joined #gluster
04:12 hagarth joined #gluster
04:24 sripathi joined #gluster
04:32 sjoeboo_ joined #gluster
04:40 an joined #gluster
04:41 tomsve joined #gluster
04:42 glusterbot New news from newglusterbugs: [Bug 911160] Following a change of node UUID I can still reconnect to the volume <http://goo.gl/gdN37> || [Bug 909798] Quota doesn't handle directory names with ','. <http://goo.gl/ZzHRS>
04:50 Humble joined #gluster
04:51 deepakcs joined #gluster
04:56 sgowda joined #gluster
04:58 bala joined #gluster
05:02 sripathi joined #gluster
05:03 tomsve joined #gluster
05:05 shylesh joined #gluster
05:09 vpshastry joined #gluster
05:13 sjoeboo_ joined #gluster
05:22 sahina joined #gluster
05:22 sgowda joined #gluster
05:23 lala joined #gluster
05:24 tomsve joined #gluster
05:34 mohankumar joined #gluster
05:38 satheesh joined #gluster
05:40 atrius joined #gluster
05:48 sgowda joined #gluster
05:51 rastar joined #gluster
05:51 rastar1 joined #gluster
05:54 sjoeboo_ joined #gluster
06:00 aravindavk joined #gluster
06:00 shireesh joined #gluster
06:03 bulde1 joined #gluster
06:08 raghu joined #gluster
06:13 glusterbot New news from newglusterbugs: [Bug 912206] gf_string2percent_or_bytesize doesn't convert float numbers <http://goo.gl/lj8ST>
06:13 shireesh joined #gluster
06:14 ngoswami joined #gluster
06:20 overclk joined #gluster
06:24 ekuric joined #gluster
06:25 hagarth joined #gluster
06:27 tomsve joined #gluster
06:31 an joined #gluster
06:31 Ryan_Lane joined #gluster
06:32 sripathi joined #gluster
06:35 sjoeboo_ joined #gluster
06:35 16WAAGGWY joined #gluster
06:37 bulde joined #gluster
06:38 vikumar joined #gluster
06:44 Nevan joined #gluster
06:47 ramkrsna joined #gluster
06:47 ramkrsna joined #gluster
06:51 ricky-ticky joined #gluster
06:55 hagarth joined #gluster
06:55 rgustafs joined #gluster
06:58 tomsve joined #gluster
07:01 sripathi1 joined #gluster
07:07 theron joined #gluster
07:09 theron left #gluster
07:10 theron joined #gluster
07:11 deepakcs joined #gluster
07:15 sjoeboo_ joined #gluster
07:26 jtux joined #gluster
07:29 anmol joined #gluster
07:42 badone joined #gluster
07:55 glusterbot New news from resolvedglusterbugs: [Bug 765473] [glusterfs-3.2.5qa1] glusterfs client process crashed <http://goo.gl/4fZUW>
07:55 sjoeboo_ joined #gluster
07:57 sjoeboo__ joined #gluster
08:00 jtux joined #gluster
08:01 ctria joined #gluster
08:04 tomsve joined #gluster
08:16 sahina joined #gluster
08:20 shireesh joined #gluster
08:24 bala joined #gluster
08:31 tjikkun_work joined #gluster
08:36 sjoeboo joined #gluster
08:40 Staples84 joined #gluster
08:43 _br_ joined #gluster
08:45 duerF joined #gluster
08:45 _br_ joined #gluster
08:47 _br_ joined #gluster
08:49 WildPikachu joined #gluster
08:54 puebele joined #gluster
08:56 shireesh joined #gluster
08:57 gbrand_ joined #gluster
09:02 bala joined #gluster
09:09 tomsve joined #gluster
09:12 tjikkun_work joined #gluster
09:13 puebele joined #gluster
09:16 sahina joined #gluster
09:16 theron joined #gluster
09:17 sjoeboo joined #gluster
09:17 ekuric joined #gluster
09:19 tjikkun_work joined #gluster
09:24 satheesh joined #gluster
09:27 shireesh joined #gluster
09:32 rgustafs joined #gluster
09:37 bauruine joined #gluster
09:38 guigui3 joined #gluster
09:49 tryggvil joined #gluster
09:53 sripathi joined #gluster
09:54 cw joined #gluster
09:54 sjoeboo joined #gluster
09:59 gbrand_ joined #gluster
10:08 Ryan_Lane joined #gluster
10:09 satheesh joined #gluster
10:19 theron joined #gluster
10:21 sripathi joined #gluster
10:27 an joined #gluster
10:27 sripathi joined #gluster
10:33 Staples84 joined #gluster
10:34 ngoswami joined #gluster
10:35 sjoeboo joined #gluster
10:38 isomorphic joined #gluster
10:40 VSpike If I've cloned a pair of gluster servers to use for a beta/staging setup, is it safe to leave the peers with the same UUIDs?
10:40 VSpike I'd guess so since there should be no link between clients and peers on the live servers and clients and peers on the beta servers....
10:41 VSpike except that they are on the same subnet
10:41 VSpike So wondering if there is any broadcast/autodiscovery voodoo that happens that might shoot me in the foot?
11:02 dobber_ joined #gluster
11:05 tryggvil joined #gluster
11:13 sahina joined #gluster
11:13 shireesh joined #gluster
11:16 sjoeboo joined #gluster
11:24 cwin joined #gluster
11:30 rotbeard joined #gluster
11:35 tryggvil joined #gluster
11:47 jclift_ joined #gluster
11:50 jclift_ joined #gluster
11:55 sjoeboo joined #gluster
12:05 manik joined #gluster
12:10 jdarcy joined #gluster
12:19 tomsve joined #gluster
12:24 ngoswami joined #gluster
12:24 jdarcy joined #gluster
12:26 mooperd joined #gluster
12:26 tomsve joined #gluster
12:30 JuanBre joined #gluster
12:30 rgustafs joined #gluster
12:34 hagarth joined #gluster
12:35 sjoeboo joined #gluster
12:35 edward1 joined #gluster
12:37 andreask joined #gluster
12:39 rastar1 joined #gluster
12:44 mynameisbruce left #gluster
12:44 mynameisbruce joined #gluster
12:51 social_ Hi, what does op_txn_begin do?
13:03 rotbeard hi folks, what is the best way to reset a brick? I upgraded my 3.0 to 3.2 \o/ so far so good, but now I want to reset a node. My first try: clean the local storage with rm -rf on node1 and trigger the self-heal process. But now ~1GB of data is missing on node 1
13:03 rotbeard any suggestions?
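For reference, on 3.2.x there is no "gluster volume heal" command; self-heal on a wiped brick is usually triggered from a client mount by stat'ing every file. A minimal sketch, assuming a hypothetical client mount at /mnt/gluster:

    # stat every file on the client mount to queue self-heal for the emptied brick
    find /mnt/gluster -noleaf -print0 | xargs --null stat > /dev/null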
13:08 kkeithley @repos
13:08 glusterbot kkeithley: See @yum, @ppa or @git repo
13:08 kkeithley @yum
13:08 glusterbot kkeithley: I do not know about 'yum', but I do know about these similar topics: 'yum repo', 'yum repository', 'yum33 repo', 'yum3.3 repo'
13:09 kkeithley @yum33 repo
13:09 glusterbot kkeithley: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
13:11 ctria joined #gluster
13:13 sjoeboo joined #gluster
13:16 holden247 joined #gluster
13:17 theron looking for an install guide / notes for oVirt installation integration.
13:17 rotbeard kkeithley, I can not upgrade higher ;)
13:17 H__ odd, I get a warning on 3.3.1: "Extended attributes not supported (try remounting brick with 'user_xattr' flag)", but this is on an ext4 filesystem!
13:20 rastar joined #gluster
13:20 nhm_ joined #gluster
13:22 dblack joined #gluster
13:22 satheesh joined #gluster
13:25 shireesh joined #gluster
13:25 nhm joined #gluster
13:27 hagarth joined #gluster
13:31 Staples84 joined #gluster
13:38 dustint joined #gluster
13:40 bulde joined #gluster
13:44 tomsve joined #gluster
13:49 vpshastry joined #gluster
13:51 kkeithley rotbeard: huh?
13:53 sjoeboo joined #gluster
13:55 raven-np joined #gluster
13:56 morse joined #gluster
13:59 raven-np1 joined #gluster
14:00 morse joined #gluster
14:01 rotbeard meh, I was wrong ;)
14:03 vpshastry left #gluster
14:03 morse joined #gluster
14:04 tryggvil joined #gluster
14:05 raven-np joined #gluster
14:08 tryggvil joined #gluster
14:16 sjoeboo joined #gluster
14:20 aliguori joined #gluster
14:21 tqrst last I tried rebalancing (3.2.7, I think), gluster started sending files from near empty bricks to fuller ones, leading to a disk space problem. Has this been addressed in 3.3.1?
14:21 tqrst I added two more bricks to my volume last week, and all signs point to "no"
14:21 tqrst they're only at 13%, but some other bricks are up to 97% now
14:23 semiosis tqrst: did you file a bug about that?
14:23 glusterbot http://goo.gl/UUuCq
14:23 tqrst semiosis: I have a vague recollection of there already being one, but I can't find it right now
14:23 tqrst here's what my bricks look like right now: http://pastie.org/6212781
14:23 glusterbot Title: #6212781 - Pastie (at pastie.org)
14:24 morse joined #gluster
14:24 semiosis well imho file another one, if there's a duplicate they will get merged, and either way it will give this issue a bump
14:25 tqrst btw some pages on gluster.org still link to the old bug tracker
14:25 tqrst eg http://www.gluster.org/download/gluster-source-code/
14:25 glusterbot <http://goo.gl/LTPw> (at www.gluster.org)
14:25 tqrst (bug tracker link on the right)
14:26 ndevos johnmark: ^ is probably something you need to change?
14:29 manik joined #gluster
14:32 H__ gluster 3.3.1 source code install overwrites UUID in /var/lib/glusterd/glusterd.info
14:33 semiosis ndevos: that whole page is outdated, glusterfs is dual licensed GPLv2 and LGPLv3 now
14:33 semiosis johnmark: ^^^
14:33 semiosis s/v3/v3+/
14:33 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
14:33 semiosis ha
14:34 tqrst ndevos / johnmark / semiosis: in case you're wondering, I got there by googling "gluster bugs"
14:34 wN joined #gluster
14:34 semiosis johnmark: maybe that page should just be removed completely & replaced with a page within the mediawiki section, so it can be easily maintained by the community
14:34 morse joined #gluster
14:35 nueces joined #gluster
14:39 deepakcs joined #gluster
14:40 bennyturns joined #gluster
14:49 rastar left #gluster
14:50 H__ what's the recommended option for gluster's configure script to build release binaries? (so no debug binaries) or is that just using strip afterwards?
14:54 shireesh joined #gluster
14:56 H__ correction -> it does not overwrite, it provides a new one when no glusterd.info was there . <H__> gluster 3.3.1 source code install overwrites UUID in /var/lib/glusterd/glusterd.info
14:59 semiosis H__: that sounds more right :)
15:00 morse joined #gluster
15:07 stopbit joined #gluster
15:07 nixpanic joined #gluster
15:08 nixpanic joined #gluster
15:08 H__ after starting glusterd, what's a scriptable way of determining it's ready to serve mounts ?
15:10 rotbeard H__, maybe with the gluster subshell
15:10 rotbeard gluster volume info all
15:10 rotbeard maybe
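A minimal polling sketch along those lines, assuming the gluster CLI is installed and that "myvol" and the mount point are hypothetical names:

    # wait until glusterd answers CLI queries, then mount the volume
    for i in $(seq 1 30); do
        gluster volume info myvol > /dev/null 2>&1 && break
        sleep 1
    done
    mount -t glusterfs localhost:/myvol /mnt/myvol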
15:11 tqrst semiosis: from a more practical standpoint, what should I do? Some of my bricks will be full soon.
15:11 jdarcy joined #gluster
15:12 semiosis tqrst: idk maybe pause the rebal?
15:12 semiosis H__: if you've started it, why would it not be ready to serve mounts?
15:12 tqrst semiosis: sure, but that still leaves some bricks sitting at 97% and others at 10%
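For reference, the 3.3 CLI offers status and stop for rebalance (there is no pause subcommand as far as I know); "myvol" is a hypothetical volume name:

    gluster volume rebalance myvol status   # per-node progress
    gluster volume rebalance myvol stop     # halt the rebalance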
15:12 dbruhn joined #gluster
15:13 rotbeard semiosis, if there are no volumes in it :>
15:13 H__ semiosis: I see a race, it's not immediately ready to serve when it returns
15:13 semiosis rotbeard: no volumes is a different kind of issue than glusterd not being ready.  you can't just wait longer for volumes to appear
15:14 H__ anyone clues on the release target instead of the debug target by configure ?
15:15 tqrst semiosis: if I didn't have any users, I guess I could take all but my empty bricks + another one offline and rebalance
15:15 semiosis H__: not sure about the debug symbols question, you might want to try that in #gluster-dev
15:15 dbruhn Hey quick question, Redhat 6.3, is XFS the safest these days?
15:16 tqrst define safest?
15:16 semiosis dbruhn: glusterfs recommends xfs, with inode size 512
15:16 kkeithley @ext4
15:16 glusterbot kkeithley: Read about the ext4 problem at http://goo.gl/PEBQU
15:16 kkeithley you could use ext4 too, but watch out for ^^^
15:17 semiosis hi kkeithley, maybe you know about the debug symbols... [09:50] <H__> what's the recommended option for gluster's configure script to build release binaries? (so no debug binaries) or is that just using strip afterwards?
15:17 dbruhn super aware of the ext4 issues, my current system is on it.
15:17 dbruhn I am standing up two new IB systems here in the next week or so
15:17 tqrst I've read worrying stories of xfs not handling abrupt outages very well (e.g. http://oss.sgi.com/archives/xfs/2012-02/msg00517.html), but I don't know if this is still an issue
15:17 glusterbot <http://goo.gl/3qPTQ> (at oss.sgi.com)
15:17 H__ about ext4 i'm holding back an upgrade to ubuntu 12.10 because of it
15:18 kkeithley yeah, wrt release versus debug, I'm not sure, which is why I wasn't speaking up
15:18 jskinner joined #gluster
15:18 tqrst H__: same here but on centos. Can't upgrade the kernel because of all those ext4 bricks.
15:18 kkeithley I just let rpmbuild make release and debuginfo rpms for me
15:19 kkeithley I'm lazy ;-)
15:20 semiosis fwiw, i think the debian packages use strip, though not really sure how that works
15:20 jdarcy I used to know how to build RPMs without all of that debuginfo-stripping silliness, but it broke in the last round of specfile changes.
15:20 kkeithley actually, I think it's libtool that automagically strips the libs, but I could be wrong
15:20 semiosis jdarcy's back!
15:20 rotbeard sorry folks, whats about ext4 in newer distros/kernels?
15:21 H__ strip removes debug symbols, but i'm wondering if the resulting code is the same as compiling and linking without -g
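One generic autotools approach (a hedged sketch, not gluster-specific advice from this discussion): with gcc, -g does not change the generated machine code, so stripping at install time yields effectively the same binaries as building without -g:

    ./configure CFLAGS="-O2"   # keep -g out of the compiler flags
    make
    make install-strip         # standard automake target; strips binaries/libs on install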
15:21 wushudoin joined #gluster
15:21 semiosis jdarcy, kkeithley: were you guys travelling recently? welcome back
15:21 semiosis rotbeard: see ,,(ext4)
15:21 glusterbot rotbeard: Read about the ext4 problem at http://goo.gl/PEBQU
15:22 jdarcy semiosis: Yep, I was at FAST.  Good to be back to semi-normal.  ;)
15:22 kkeithley yes, and I was at FOSDEM, and about to go to Red Hat Dev Summit in Brno, then the Gluster Dev. Summit in Bangalore.
15:23 semiosis wow cool
15:23 rotbeard holy...thanks for the information
15:23 kkeithley With a meeting at CERN in the middle for good measure.
15:23 semiosis CERN uses glusterfs?
15:24 kkeithley apparently so.
15:25 H__ not sure about that. I've read about their EOS storage system
15:25 kkeithley racking up airline miles, wracking up my sleep pattern
15:25 sjoeboo joined #gluster
15:25 semiosis must feel good to be writing code that helps advance science so directly
15:25 H__ they use xrootd https://eos.cern.ch/index.php?option=com_wrapper&view=wrapper&Itemid=11
15:25 glusterbot <http://goo.gl/OPeXU> (at eos.cern.ch)
15:26 jdarcy H__: Like many organizations, CERN uses many different storage systems for many different things.
15:26 kkeithley hmmm, did Fedora 16 EOL when I blinked?
15:27 H__ jdarcy: yes, most probably. gluster might be in there somewhere, but i have not found it. btw EOS -> http://uscms.org/uscms_at_work/computing/setup/mass_storage.shtml
15:27 glusterbot <http://goo.gl/oxpcw> (at uscms.org)
15:28 H__ last one, they passed the 100 T -> http://home.web.cern.ch/about/updates/2013/02/cern-data-centre-passes-100-petabytes
15:28 glusterbot <http://goo.gl/ngVsp> (at home.web.cern.ch)
15:28 H__ ehh, P. sorry
15:28 jdarcy Also, if I were using dCache I'd be trying to get away from it too.  ;)
15:29 holden247 left #gluster
15:31 bugs_ joined #gluster
15:35 bala joined #gluster
15:39 jdarcy_ joined #gluster
15:41 lala joined #gluster
15:42 jdarcy_ joined #gluster
15:43 jdarcy_ joined #gluster
15:44 ndevos kkeithley: I just ran into an issue on fedora where glusterd.service starts before rpcbind.service, caused nfs from working - seen that before?
15:44 ndevos s|caused|prevented|
15:44 flrichar joined #gluster
15:49 bdperkin joined #gluster
15:50 bdperkin joined #gluster
15:51 H__ I see these during gluster upgrade : E [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0 , same for brick-1.  How serious is this error ?
15:51 lh joined #gluster
15:51 lh joined #gluster
15:55 rwheeler joined #gluster
15:57 kkeithley ndevos: no, haven't seen that before, but probably because every time I started glusterd I already had rpcbind running
15:57 kkeithley fedora with systemd or with init.d?
15:57 ndevos thats systemd
15:58 H__ I'm worried about the unknown-brick lines. How can I check if glusterd upgrade worked ? http://dpaste.org/YxYmR/
15:58 glusterbot Title: dpaste.de: Snippet #219480 (at dpaste.org)
15:58 ndevos kkeithley: my glusterd.service now contains "After=network.target rpcbind.service" and that seems to work
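A hedged sketch of how such an ordering override can be applied on a systemd distro of this era (unit path assumed):

    cp /usr/lib/systemd/system/glusterd.service /etc/systemd/system/
    # edit the copy so its [Unit] section contains:
    #   After=network.target rpcbind.service
    systemctl daemon-reload
    systemctl restart glusterd.service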
15:59 kkeithley good timing, I'm about to pull the trigger on 3.3.1-10 after the specfile sync
16:00 ndevos great!
16:00 * ndevos is on glusterfs-3.3.1-8.fc17.armv7hl
16:01 kkeithley Oh,
16:03 kkeithley hmm, last kernel updates bricked my trimslice and beagleboard. I'll have to remember how to kick off scratch builds in arm.koji
16:03 WildPikachu does anyone have the link to the site with the gluster clusters listing and their sizes on it?
16:05 ndevos kkeithley: I'm using your fedorapeople.org repo for those machines, so it would be apprciated if you can put the update there too
16:05 kkeithley indeed.
16:07 kkeithley maybe I'm looking at the wrong site?  http://arm.koji.fedoraproject.org/koji/packageinfo?packageID=2416  hasn't mirrored all builds I've done in https://koji.fedoraproject.org/koji/packageinfo?packageID=5443
16:07 glusterbot <http://goo.gl/I2UCS> (at arm.koji.fedoraproject.org)
16:07 kkeithley or maybe it's not supposed to
16:08 kkeithley anyway, with my arm boxes bricked until I can reinstall, I have to try to remember how I do builds in arm.koji if we want arm rpms.
16:09 ndevos kkeithley: something like this: arm-koji build --scratch fedora-17 "$(fedpkg giturl)"
16:10 ndevos but yeah, some releases seem to be missing... maybe a buildrequires is not available on arm yet?
16:11 kkeithley everything I needed wrt BuildRequires seemed to be available on my arm boxes
16:14 ndevos I'm also not sure what would be missing... let's see -> http://arm.koji.fedoraproject.org/koji/taskinfo?taskID=1447300
16:14 glusterbot <http://goo.gl/XYO3Z> (at arm.koji.fedoraproject.org)
16:14 lh joined #gluster
16:15 glusterbot New news from newglusterbugs: [Bug 912427] features/protect: protection state needs to be persistent <http://goo.gl/WGkSQ>
16:15 luckybambu joined #gluster
16:15 ekuric joined #gluster
16:16 luckybambu Is there any way to prevent the mount point permissions from taking effect across the Gluster?
16:16 bala joined #gluster
16:17 kkeithley ndevos: rpm -q --whatprovides `which arm-koji`?
16:17 ndevos kkeithley: fedora-packager-0.5.10.1-1.fc17.noarch
16:18 kkeithley yup,
16:19 rodlabs joined #gluster
16:20 Ramereth joined #gluster
16:20 elyograg redhat bugzilla will soon break 1 million.
16:22 lhawthor_ joined #gluster
16:24 lh joined #gluster
16:24 raven-np joined #gluster
16:29 Footur joined #gluster
16:30 kkeithley oh, I had arm-koji (fedora-packager) on my f17 vm. I could have answered for myself if I had remembered.
16:30 Footur left #gluster
16:34 tqrst @xfs
16:35 tqrst semiosis: trying xfs on the 6 bricks I am adding to my volume. Other than inodesize=512, are there any options I should use?
16:38 overclk joined #gluster
16:38 semiosis all i have heard is inode size 512 normally, or 1024 when using UFO
16:39 tqrst and there aren't any issues with having different bricks backed by different filesystems, right? (e.g. 6 new bricks with xfs, the rest ext4)
16:40 semiosis no i migrated my volumes one brick at a time and it went fine
16:41 semiosis i never had problems with ext4 but i did get some strange, and as far as i could tell harmless, log messages with ext4 bricks which have gone away completely since i switched to xfs
16:41 semiosis i never investigated because there was never a problem, but i do enjoy seeing zero length gluster log files now with xfs :)
16:42 tqrst that would indeed be nice :)
16:42 tqrst mine are filled with transport errors due to a glitch when nfs is disabled
16:43 tqrst looking at mkfs.xfs --help, there are a few inode size options: log=n, perblock=n, size=num, maxpct=n, attr=0|1|2. I take it you meant perblock=512? Or is it =1 since the unit is 512 bytes?
16:43 tqrst 256K per block seems pretty high
16:44 tqrst nevermind, -i size=512 is the right one
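Putting that together, a minimal brick-formatting sketch (device, mount point, and options are illustrative, not taken from this discussion):

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount -o noatime,nodiratime /dev/sdb1 /export/brick1
    # fstab equivalent:
    # /dev/sdb1  /export/brick1  xfs  noatime,nodiratime  0 0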
16:47 lh joined #gluster
16:47 lh joined #gluster
16:52 semiosis tqrst: +1
16:53 jeffrin joined #gluster
16:54 tqrst semiosis: do you use any special mount options for your bricks?
16:54 jeffrin avati : hello is  ab available for chat ?
16:56 * semiosis hasn't seen AB here in a whiiile
16:57 semiosis @seen unlocksmith
16:57 glusterbot semiosis: unlocksmith was last seen in #gluster 48 weeks, 5 days, 21 hours, 31 minutes, and 15 seconds ago: <unlocksmith> semiosis: hi!
16:57 semiosis hehe
16:58 semiosis tqrst: you can see my mount options in my ,,(puppet) module
16:58 glusterbot tqrst: (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
16:58 tqrst semiosis: thanks
16:59 jeffrin left #gluster
17:04 luckybambu Anyone have any idea on how I might get GlusterFS mount point permissions not to propagate across clients?
17:05 luckybambu Seems like when I change the permissions or ownership of my mount point on client A, those changes carry to client B
17:05 an joined #gluster
17:05 semiosis luckybambu: thats kinda the point of glusterfs... all clients see the same thing
17:06 luckybambu Well yes, but I wasn't expecting the permissions of the mount point itself to change
17:08 tqrst semiosis: just to make sure I'm reading this right (I don't use puppet), server options are acl,noatime,nodiratime and client options are nobootwait,noatime,nodiratime (and tcp,vers=3 for nfs)?
17:09 semiosis yeah, though tbh i dont need acl there
17:09 tqrst I was going to ask about that
17:09 semiosis i do highly recommend the no(dir)atime opts
17:09 semiosis and the client nobootwait is an ubuntu thing, you'd want _netdev instead on most other distros afaik
17:11 tqrst yeah, pretty sure centos uses _netdev
17:11 semiosis and there's a whole bunch of nfs client options you could consider, see the man pages for your nfs client.  things like sync,noac can improve consistency with multiple writers though will probably hurt performance
17:11 semiosis i dont use nfs clients much
17:13 tqrst we have our own nfs server, so we're stuck with -t glusterfs afaik
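For illustration, a typical fstab line for a native glusterfs client on CentOS (hostname and paths hypothetical):

    server1:/bigvol  /mnt/bigvol  glusterfs  defaults,_netdev  0 0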
17:21 jdarcy joined #gluster
17:26 jdarcy joined #gluster
17:26 Mo___ joined #gluster
17:27 tqrst semiosis: it looks like noatime and nodiratime don't exist on the client side in 3.3.1 ("unknown option noatime (ignored)", ditto for nodiratime)
17:28 tqrst unless I mistyped something
17:31 tryggvil joined #gluster
17:35 an joined #gluster
17:36 bennyturns joined #gluster
17:36 tqrst joined #gluster
17:36 rubbs joined #gluster
17:36 neofob joined #gluster
17:36 joeto1 joined #gluster
17:36 bfoster joined #gluster
17:36 haakond joined #gluster
17:37 tqrst semiosis: not sure if you got this before the netsplit, so:
17:37 tqrst 12:27 < tqrst> semiosis: it looks like noatime and nodiratime don't exist on the client side in 3.3.1 ("unknown option noatime (ignored)", ditto for nodiratime)
17:37 tqrst 12:29 < tqrst> from fstab: ml43:/bigdata           /mnt/bigdata            glusterfs auto,_netdev,noatime,nodiratime      0 0
17:37 raven-np joined #gluster
17:41 kkeithley just thinking out loud, wouldn't it make more sense to use noatime,nodiratime on the brick mounts?
17:41 tqrst kkeithley: I already do
17:41 tqrst kkeithley: I was just looking at semiosis's puppet scripts
17:41 kkeithley okay
17:41 tqrst https://github.com/semiosis/puppet-gluster/blob/master/gluster/manifests/client.pp
17:41 glusterbot <http://goo.gl/SNYsX> (at github.com)
17:44 guigui3 left #gluster
17:45 morse joined #gluster
17:52 __Bryan__ joined #gluster
18:11 gbrand__ joined #gluster
18:20 semiosis i use them everywhere.  why not
18:25 disarone joined #gluster
18:25 lh joined #gluster
18:27 JoeJulian No jdarcy? That's odd.
18:28 semiosis @seen jdarcy
18:28 glusterbot semiosis: jdarcy was last seen in #gluster 3 hours and 30 seconds ago: <jdarcy> Also, if I were using dCache I'd be trying to get away from it too.  ;)
18:29 andreask joined #gluster
18:29 JoeJulian kkeithley: Heh, I was just about to ask about atime. The setting of atime is done by the filesystem, isn't it?
18:30 JoeJulian So even without a noatime option on the gluster mount, no extra network round trips are triggered?
18:35 an joined #gluster
18:38 gbrand_ joined #gluster
18:39 Ryan_Lane joined #gluster
18:44 lh joined #gluster
18:45 H__ JoeJulian: may I bother you again ? -> I'm worried about the unknown-brick lines. How can I check if glusterd upgrade 3.2.5->3.3.1 worked ? http://dpaste.org/YxYmR/
18:45 glusterbot Title: dpaste.de: Snippet #219480 (at dpaste.org)
18:50 Ryan_Lane joined #gluster
19:04 atrius_away joined #gluster
19:27 cw joined #gluster
19:54 an joined #gluster
19:59 jdarcy joined #gluster
20:07 lh joined #gluster
20:07 lh joined #gluster
20:11 manik joined #gluster
20:11 mooperd joined #gluster
20:54 dustint joined #gluster
21:05 vex joined #gluster
21:06 vex can someone help me make sense of what this all means? (from volume profile info): http://paste.nothing.net.nz/e96e37
21:06 glusterbot Title: [untitled] (at paste.nothing.net.nz)
21:07 vex trying to troubleshoot slowness and I turned on profiling
21:07 vex but there's very little documentation on what any of that means
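For context, the commands that produce that profiling output (volume name hypothetical):

    gluster volume profile myvol start   # enable per-brick io-stats collection
    gluster volume profile myvol info    # dump latency and fop counts per brick
    gluster volume profile myvol stop    # disable profiling when done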
21:09 DataBeaver joined #gluster
21:16 andreask joined #gluster
21:24 jdarcy_ joined #gluster
21:28 tryggvil joined #gluster
21:31 cicero looks like adding a mk* is taking a long time
21:31 cicero terribly long time
21:31 balunasj joined #gluster
21:31 cicero have you tried benching the underlying fs?
21:32 cicero i'd try that first and then figure out if it's a network issue
21:33 vex cicero: was that intended for me?
21:34 andreask joined #gluster
21:35 vex is performance.quick-read on by default in 3.3.1 ?
21:35 vex (documention doesn't say)
21:41 dustint_ joined #gluster
21:47 JoeJulian H__ That's normal and nothing to worry about.
21:51 JoeJulian vex: Reading what cicero said, it was definitely to you. Everything he said matches what you posted. Also, yes. quick-read is on by default.
21:52 vex JoeJulian: thx ;)
21:52 vex is there a way I can add that to the gluster wiki or something?
21:52 JoeJulian Sure
21:52 JoeJulian It's a wiki. ;)
21:53 vex maybe it belongs in the official docs under chapter 7 / 7.1
21:54 vex oh .. there's a nice note at the top. "The default options given here are subject to modification at any given time and may not be the same for all versions."
21:56 JoeJulian Not sure if I'd even technically call it an option. It's a performance translator that's part of the default graph.
21:56 vex ok, it's just not documented anywhere.
21:57 vex (regardless of what it is/does)
21:57 JoeJulian I don't think most of the translators are documented anywhere useful.
21:58 vex I'm noticing that :)
21:58 JoeJulian And the performance.quick-read=off is an undocumented option by design, according to the source.
21:59 JoeJulian I wonder if it's under ,,(undocumented options)
21:59 glusterbot The old 3.1 page of undocumented options is at http://goo.gl/P89ty
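For reference, the option can be toggled like any other volume option even though it is undocumented; "myvol" is a hypothetical volume name:

    gluster volume set myvol performance.quick-read off
    gluster volume reset myvol performance.quick-read   # back to the default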
22:00 vex whoa
22:10 jdarcy joined #gluster
22:11 semiosis vex: can you describe your performance issue?
22:13 vex well... we typically write once, read lots. Few updates; mostly read-only.
22:13 vex things like listing files and performing find commands are quite slow
22:13 vex i'm trying to tune things to run a bit better
22:14 semiosis do you have lots of files/subdirs per directory?
22:14 vex we have probably one directory with a lot of files/subfiles
22:14 vex current volume info is: http://paste.nothing.net.nz/d1f289
22:14 glusterbot Title: [untitled] (at paste.nothing.net.nz)
22:14 semiosis ls is going to be slow :(
22:15 vex yeah, I'm ok with ls being slow on those directories
22:16 semiosis it's slow to collect the directory listings from all the bricks, and when you use the -l option with ls, or do a find, you get stat calls on the entries as well, which queues self heal checks taking even longer
22:16 semiosis and there's not much you can do except wait & hope a future version of glusterfs improves that
22:16 vex yep, i've seen mailing list posts that talk about that
22:17 vex i'm just wondering if I'm doing anything stupid with the options we have configured
22:17 vex or if there's anything obvious for tuning it better for what our file operations are
22:18 vex whoa. my english is bad this morning.
22:19 semiosis noatime,nodiratime on your brick mounts for sure (unless you *need* atimes.) i put those options on my client mounts too mainly out of superstition, but it certainly can't hurt
22:19 semiosis vex: also ,,(pasteinfo)
22:19 glusterbot vex: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:19 wushudoin left #gluster
22:20 vex semiosis: http://fpaste.org/vb4A/
22:20 glusterbot Title: Viewing Paste #278482 by vex (at fpaste.org)
22:23 semiosis vex: have any of those volume options measurably helped your performance
22:24 semiosis ?
22:24 jclift_ 'nite all
22:24 vex not noticeably, but then i've just turned them on ;)
22:29 semiosis so far the only options i've heard of or seen make a performance impact are changing the self heal algorithm to full and reducing the background self heal count
22:30 semiosis and that just speeds up healing, not actual volume use
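A hedged sketch of those two settings on 3.3 (volume name and count are illustrative):

    gluster volume set myvol cluster.data-self-heal-algorithm full
    gluster volume set myvol cluster.background-self-heal-count 4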
22:30 vex okay
22:32 jskinner_ joined #gluster
22:32 vex semiosis: i get 'unknown option noatime (ignored)' trying to mount via fstab, btw
22:32 semiosis ?!
22:33 semiosis did you put that on your backend brick mounts?
22:33 semiosis or client mount?
22:33 vex trying that on client mount
22:35 vex ubuntu lucid, glusterfs 3.3.1
22:36 vex same for backend brick, fwiw
22:36 semiosis interesting
22:37 semiosis that's odd
22:38 vex (I also get unknown option _netdev (ignored) )
22:38 semiosis heh, yeah, you can ignore that message
22:38 vex yeah :)
22:41 semiosis ok so noatime,nodiratime get ignored by the client. i'd never seen that info message before (hadn't tried that yet with 3.3.1)
22:41 semiosis but for backend bricks it really should work & improve read perf
22:41 semiosis what is your backend brick filesystem?
22:42 vex xfs
22:56 johnmark Cool - OpenStack Cinder + GlusterFS integration: https://review.openstack.org/#/c/21342/
22:56 glusterbot Title: Gerrit Code Review (at review.openstack.org)
22:58 badone joined #gluster
23:01 semiosis johnmark: ndevos pointed out this morning that http://www.gluster.org/download/gluster-source-code/ is outdated.  perhaps that page should be replaced with a wiki entry so it can be more easily maintained
23:01 glusterbot <http://goo.gl/LTPw> (at www.gluster.org)
23:13 lh joined #gluster
23:13 raven-np joined #gluster
23:36 semiosis vex: got sidetracked before when i was going to test noatime,nodiratime on xfs, but i got back to it & reconfirmed those options do work
23:36 semiosis vex: so if you're getting messages saying they're ignored when you try to mount xfs filesystems, something is going terribly wrong
23:37 semiosis gotta run, good luck
23:37 * semiosis &
23:40 johnmark semiosis: oh good point
23:40 * johnmark looks to update
23:46 raven-np joined #gluster
23:53 johnmark semiosis: oh, and that page isn't linked from anywhere anymore, but I updated it for Google SEO goodness
23:53 johnmark ndevos: ^^^^^^
