
IRC log for #gluster-dev, 2013-04-22


All times shown according to UTC.

Time Nick Message
00:02 hagarth_ joined #gluster-dev
00:06 avati joined #gluster-dev
00:20 hagarth_ joined #gluster-dev
00:20 avati joined #gluster-dev
00:34 hagarth_ joined #gluster-dev
00:36 raghaven1rabhat joined #gluster-dev
00:50 badone__ joined #gluster-dev
00:51 hagarth_ joined #gluster-dev
01:16 avati joined #gluster-dev
01:24 raghaven1rabhat joined #gluster-dev
01:25 hagarth_ joined #gluster-dev
01:46 raghaven1rabhat joined #gluster-dev
01:49 portante joined #gluster-dev
02:01 raghaven1rabhat joined #gluster-dev
02:04 hagarth_ joined #gluster-dev
02:11 raghavendrabhat joined #gluster-dev
02:19 avati joined #gluster-dev
02:20 raghavendrabhat joined #gluster-dev
02:31 raghaven1rabhat joined #gluster-dev
02:50 bharata joined #gluster-dev
03:53 spai joined #gluster-dev
03:53 itisravi joined #gluster-dev
04:05 raghu joined #gluster-dev
04:06 sarvothampai joined #gluster-dev
04:14 bulde joined #gluster-dev
04:30 bala joined #gluster-dev
05:03 sgowda joined #gluster-dev
05:05 mohankumar joined #gluster-dev
05:13 bulde1 joined #gluster-dev
05:14 bala joined #gluster-dev
05:19 lalatenduM joined #gluster-dev
05:27 vshankar joined #gluster-dev
05:30 aravindavk joined #gluster-dev
05:48 aravindavk joined #gluster-dev
05:53 rastar joined #gluster-dev
05:55 mohankumar hagarth: ping
05:55 hagarth mohankumar: pong
05:56 mohankumar thanks for the review comments, i will respond to your comments/questions
05:56 mohankumar hagarth: but are you ok with the design? bd -> posix relationship
05:57 hagarth mohankumar: np, the overall approach does look ok. We can either use this approach or have a thin xlator that has both bd and posix as children.
05:58 hagarth the thin translator could just be a router to bd and/or posix based on fop
05:58 mohankumar hagarth: IMHO bd xlator is a light xlator which offloads most of the work to posix
06:02 hagarth mohankumar: you are right, I don't have problems with your approach either. Having bd and posix as "peers" would let the aggregation be taken care of by the translator above.
06:03 hagarth and not let bd worry about aggregation. It can just perform block operations and be very similar to posix.
06:05 hagarth mohankumar: have to step out now. will bbl.
06:14 deepakcs joined #gluster-dev
06:43 hagarth joined #gluster-dev
07:27 bulde joined #gluster-dev
08:05 sarvothampai joined #gluster-dev
08:12 gbrand_ joined #gluster-dev
08:19 raghu joined #gluster-dev
08:53 bala joined #gluster-dev
09:04 rastar1 joined #gluster-dev
09:12 itisravi joined #gluster-dev
09:17 sgowda joined #gluster-dev
09:53 sgowda joined #gluster-dev
09:55 rastar joined #gluster-dev
10:26 edward1 joined #gluster-dev
11:16 hagarth joined #gluster-dev
11:20 sgowda joined #gluster-dev
11:47 sgowda joined #gluster-dev
12:01 bulde1 joined #gluster-dev
12:13 bulde joined #gluster-dev
12:14 bulde1 joined #gluster-dev
12:43 awheeler_ joined #gluster-dev
12:46 awheeler_ johnmark: GOO, sounds interesting
13:09 itisravi joined #gluster-dev
13:27 sarvothampai joined #gluster-dev
13:41 jdarcy joined #gluster-dev
13:56 jclift_ joined #gluster-dev
13:59 mohankumar joined #gluster-dev
14:09 lpabon joined #gluster-dev
14:11 wushudoin joined #gluster-dev
14:19 sandeen joined #gluster-dev
14:20 sandeen today might be a better day to ask, can anyone explain how the xattrop/ dir is used during rebalance?  We're seeing a race happen that corrupts the unlinked list in xfs during a rebalance and it seems related to inodes in this dir
14:20 sandeen would like to reproduce but not quite sure how gluster behaves here
14:20 jdarcy I'm not up with all of the details myself TBH.
14:21 jdarcy The index directory (including xattrop) is used to hold links for files that still have pending I/O (i.e. started on at least one replica but not complete on all).
14:21 portante joined #gluster-dev
14:21 jdarcy That means it's *very* busy with linking/unlinking - both hard and soft links.
14:23 * jclift_ wonders if that's related to the recent patch to stop "ls" getting only partial results from some nodes in dirs
14:23 jclift_ Was a race condition there too
14:23 sandeen jdarcy, any idea if there is ever a link to the same inode in another dir?
14:24 jdarcy jclift_: It was a race, but AFAICT a very different one.
14:24 sandeen I think the vfs might serialize racing links/unlinks on the same dir, but probably doesn't care a bit about 2 different dirs
14:24 jdarcy sandeen: Absolutely.  That directory will contain hard links to files in other dirs, and symlinks to directories elsewhere (since hard links to directories aren't allowed).
14:24 sandeen jdarcy, right, hardlinks to files elsewhere, but the "elsewhere" stays put, right?
14:25 sandeen i.e. would we race to unlink both links, in the xattrop/ dir and wherever else?
14:25 jdarcy sandeen: Seems quite likely.
14:25 sandeen hmm
14:26 jclift_ jdarcy: k. :)
14:26 jdarcy sandeen: I'd have to look at the order in which the links are removed in self-heal/rebalance, vs. a normal unlink/rmdir/rename.
14:26 sandeen so if it's "for files that still have pending I/O" then a racing unlink would mean that the target of the IO is unlinked as soon as IO completes?
14:26 * sandeen is confused
14:26 jdarcy sandeen: I suspect there are dozens of code paths involved.  :(
14:26 jclift_ jdarcy: Watched your presentation about Glupy at Linux Conf in Japan (?) from last year.
14:27 sandeen gluster does stress an fs in new and interesting ways ;)
14:27 jclift_ jdarcy: The one where you had to manually resize slides and stuff
14:27 jclift_ jdarcy: It was very useful
14:27 jclift_ jdarcy: That, combined with the Glupy stuff was good
14:27 jclift_ jdarcy: Tried to read your Linux Journal article, but couldn't get into it.  :(
14:28 jdarcy sandeen: If the I/O is *known* to have completed everywhere it needs to, then its counters will be decremented.  If they reach zero then I *think* it will be removed from the index directory, but that's where things get a bit fuzzy for me.
14:28 * jdarcy hates the index implementation with the white-hot fury of a thousand suns.
14:28 sandeen jdarcy, right, I just wouldn't expect the last link to go (otherwise what was the point of all this?)
14:28 jdarcy jclift_: Haven't been to Japan.  Might that have been from BLR?
14:29 * jdarcy tries to remember where he presented about Glupy.
14:29 sandeen I would expect nlink 2 to be removed from the xattrop/ dir, but not the last one from <wherever else the real file lives>
14:29 sandeen but who knows
14:29 jdarcy sandeen: Right, the non-index link should remain throughout.
14:29 bfoster jdarcy: fwiw, what I observed was there's an xattrop-* file in that dir, and every other file in that dir is a link of that inode
14:30 bfoster jdarcy: with the name of the linked entry as the gfid of the file under I/O
14:30 sandeen bfoster, oh, that's right (I forgot that detail)
14:31 jdarcy Shows how much I know.  ;)
14:31 sandeen so it seems really unlikely that we'd have racing unlinks of the same inode in some other dir
14:31 jclift_ jdarcy: http://video.linux.com/videos/glusterfs-translators-conceptual-overview
14:31 sandeen jdarcy led me astray ;)
14:31 jclift_ jdarcy: That one anyway. :)
14:31 bfoster hehe, well I thought the same thing at first given how .glusterfs is organized
14:32 jclift_ jdarcy: Reckon it would be useful to create some templates people can use, and "how to" types of articles about writing translators with Glupy specifically?
14:33 jdarcy jclift_: Ah, that looks like the Sheraton San Diego.  August would mean it was LinuxConf, I think.
14:33 sandeen bfoster, FWIW out of desperation I suggested to him that he could try the spinlock rwsem implementation instead; if he still hits it then it's clearly not an rwsem problem
14:34 sandeen if he doesn't hit it it probably just means timing changed
14:34 sandeen I asked him offline if he'd still help test, rather than working around it by moving the dir to ext4 :/
14:34 sandeen he said he was willing
14:34 bfoster sandeen: heh, ok. missed that mail
14:34 bfoster umm. yeah, I wasn't too sure how to respond to that
14:34 sandeen sorry, I might have accidentally replied on a private thread
14:34 sandeen yeah I asked him offline if he'd still test and he said sure
14:34 bfoster ok
14:35 sandeen that was probably when I suggested the spinlock-based rwsems, sorry
14:35 sandeen probably shouldn't have taken it off the list
14:35 bfoster I ran a test that started/stopped a bunch of parallel writes to a rep volume yesterday with no luck
14:36 sandeen ok, and I wrote a simple C test program which links & unlinks a random filename to a given filename
14:36 sandeen and launched 12 of them in series for an hour or so, no luck
14:36 sandeen I should probably put in a random delay between link & unlink
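(A minimal, hypothetical sketch of the test sandeen describes above: repeatedly hard-link a random name to a given target file, optionally wait, then unlink the link. The program name, file-name scheme, default iteration count, and delay option are illustrative assumptions, not the actual reproducer.)

    /* linkstress.c - hypothetical sketch of the link/unlink stress test
     * described above; names, defaults, and the delay option are assumptions. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <target-file> [iterations] [max-delay-us]\n", argv[0]);
            return 1;
        }

        const char *target = argv[1];
        long iters = (argc > 2) ? atol(argv[2]) : 1000000;
        long max_delay = (argc > 3) ? atol(argv[3]) : 0;
        char linkname[256];

        srand((unsigned)getpid());

        for (long i = 0; i < iters; i++) {
            /* Random link name so parallel copies race in the same directory
             * without colliding on a single dirent. */
            snprintf(linkname, sizeof(linkname), "lnk-%d-%d", (int)getpid(), rand());

            if (link(target, linkname) != 0) {
                perror("link");
                return 1;
            }

            /* Optional random delay between link and unlink, widening the
             * window in which other processes can race against this inode. */
            if (max_delay > 0)
                usleep((useconds_t)(rand() % max_delay));

            if (unlink(linkname) != 0) {
                perror("unlink");
                return 1;
            }
        }
        return 0;
    }

Launching several copies in parallel against the same target file on the filesystem under test (e.g. "for i in $(seq 12); do ./linkstress /mnt/test/target & done") approximates the parallel link/unlink load described here; "linkstress" is a hypothetical name for the compiled sketch.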
14:36 bfoster ok, yeah I started on something similar that creates a bunch of threads to create 100k+ links to a file
14:36 bfoster then remove them all
14:37 sandeen also fiddled w/ populating every AG unlinked bucket prior to the test, and periodically removing/re-adding them
14:37 sandeen no luck :(
14:37 bfoster hrm, ok
14:38 sandeen maybe I'll add some delays kernelside
14:38 bfoster I've been debating whether I want to test with a single AG or a large number
14:39 bfoster I also have background threads doing an open/unlink/close pattern to try and keep the unlinked lists populated
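(A minimal, hypothetical sketch of the background open/unlink/close pattern bfoster mentions: unlinking a file while a descriptor is still open keeps its inode on the filesystem's unlinked list until the last close(). File names and the sleep duration are assumptions for illustration.)

    /* unlinked_loop.c - hypothetical sketch of the open/unlink/close pattern
     * used to keep the unlinked lists populated; names/timings are assumptions. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char name[64];

        srand((unsigned)getpid());

        for (;;) {
            snprintf(name, sizeof(name), "unlinked-%d-%d", (int)getpid(), rand());

            int fd = open(name, O_CREAT | O_EXCL | O_RDWR, 0644);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            /* Unlink while the descriptor is still open: the inode now exists
             * only on the on-disk unlinked list until the last close(). */
            if (unlink(name) != 0) {
                perror("unlink");
                close(fd);
                return 1;
            }

            /* Hold the unlinked inode open briefly, keeping the unlinked
             * list populated, then release it and repeat. */
            usleep(1000);
            close(fd);
        }
    }

Running a few of these in the background while the link/unlink test runs matches the "background threads doing an open/unlink/close pattern" described above.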
15:08 avishwan joined #gluster-dev
15:18 portante|ltp joined #gluster-dev
15:20 awheeler_ kkeithley: ping -- I've approved the backports.
15:21 kkeithley I saw that you resubmitted
15:23 awheeler_ The 3.3 release doesn't seem to have the unittest.sh there though.
15:23 awheeler_ s/unittest.sh/unittests.sh/
15:26 sghosh joined #gluster-dev
15:33 awheeler_ kkeithley: On a different note, I've got a script for deploying GlusterFS Swift with Keystone.
15:34 kkeithley yes, I want to try that
15:34 jclift_ awheeler_: Which OS's is it tested on?
15:34 awheeler_ jclift: CentOS 6.3
15:34 awheeler_ x64
15:34 jclift_ awheeler_: Have you looked at the RDO version of OpenStack?
15:34 jclift_ That uses packstack for deployment
15:35 jclift_ awheeler_: i.e. https://github.com/redhat-openstack/packstack
15:35 awheeler_ jclift_: yes.  Thinking about how I can leverage that -- but that's a fair bit more complicated.
15:35 * jclift_ was wondering how to integrate your GlusterFS stuff into that
15:35 awheeler_ I'm not doing anything fancy like they are -- just bash commands
15:35 jclift_ if it's even useful
15:35 jclift_ Sure.
15:35 jclift_ But the bash command you use, if split out into puppet files for specific task bits, are still the right commands
15:35 jclift_ Meh
15:36 jclift_ Ignore this
15:36 jclift_ I'm talking with only 1/2 understanding it
15:36 jclift_ I'll wait until I understand the keystone integration better +  puppet / packstack more
15:36 jclift_ :)
15:36 awheeler_ the real trick isn't likely going to be the puppet side, but getting a full install of openstack to co-habitate with glusterfs swift
15:37 * jclift_ should look at that in near future
15:37 jclift_ *gulp*
15:37 awheeler_ Here's the link: https://github.com/awheeler/GlusterFS-Misc/blob/master/scripts/keystone_glusterfs_install.sh
15:37 awheeler_ swift-bench works with non SSL
15:38 awheeler_ bbiab
15:38 jclift_ Interesting
15:41 hagarth joined #gluster-dev
15:56 awheeler_ It should work with ssl as well, if it's a real cert.  There are some problems with the certs I'm creating that are causing issues.
15:58 awheeler_ possibly due to the swift-bench client being provided by the gluster swift (3.3.1-13, or swift 1.7.6), while the keystone version is meant to work with grizzly swift.
15:59 awheeler_ The script is designed to be run on the CentOS AMI in EC2.
15:59 awheeler_ And if you follow the script, and execute the commented-out swift-bench at the end (with no ssl) it works.
16:00 awheeler_ s/follow/execute/
16:12 awheeler_ back
16:15 bala joined #gluster-dev
16:15 hagarth joined #gluster-dev
16:27 portante` joined #gluster-dev
16:40 sarvothampai joined #gluster-dev
17:02 * sandeen reproduces :D
17:03 hagarth sandeen: awesome!
17:04 sandeen yup, 90% of the battle complete.
17:06 hagarth had the same user enquire today about how he could disable indices/xattrop activity on xfs. there's no way to do that with gluster today.
17:07 sandeen the upstream guy configured it to a different dir on a different fs
17:07 sandeen seemed to work?
17:08 sandeen not a good solution, but a short term workaround
17:08 sandeen ok, gonna go swim over lunch, bbiab
17:11 hagarth yeah, it would be a short term workaround.
17:19 gbrand_ joined #gluster-dev
17:45 bulde joined #gluster-dev
17:49 rastar joined #gluster-dev
17:49 sarvothampai left #gluster-dev
18:05 awheeler_ jclift: fixed all my ssl issues.
19:54 sandeen ok, and there's a stupid-trivial upstream fix, too.  well, it's all good I guess :)
20:30 portante` joined #gluster-dev
20:51 jbrooks joined #gluster-dev
21:38 sandeen_ joined #gluster-dev
21:42 jclift joined #gluster-dev
21:45 awheeler_ kkeithley: Ping -- you'll be interested to know that I just tested out your koji 3.4.0-0.3.alpha3 and I'm seeing the same get errors.
