IRC log for #gluster-dev, 2013-01-22

All times shown according to UTC.

Time Nick Message
02:26 bharata joined #gluster-dev
02:48 overclk joined #gluster-dev
03:04 hagarth joined #gluster-dev
04:22 sripathi joined #gluster-dev
04:34 sgowda joined #gluster-dev
04:46 mohankumar joined #gluster-dev
04:51 deepakcs joined #gluster-dev
04:54 Guest58155 joined #gluster-dev
05:11 vpshastry joined #gluster-dev
05:12 hagarth joined #gluster-dev
05:28 raghu joined #gluster-dev
06:17 vpshastry joined #gluster-dev
06:31 shireesh joined #gluster-dev
06:33 bala joined #gluster-dev
06:51 pai joined #gluster-dev
07:25 bala joined #gluster-dev
07:28 vpshastry joined #gluster-dev
07:50 vpshastry joined #gluster-dev
08:18 deepakcs shireesh, Hi, got some Qs for u.. reg. sharad's gluster domain OE patch, ... in pvt chat..
08:38 bulde joined #gluster-dev
08:49 pai joined #gluster-dev
08:54 hagarth joined #gluster-dev
09:30 shireesh joined #gluster-dev
09:41 bulde joined #gluster-dev
10:02 ram_raja joined #gluster-dev
10:09 harshpb joined #gluster-dev
10:13 36DACS7N5 joined #gluster-dev
10:33 deepakcs shireesh, there ?
11:13 hagarth joined #gluster-dev
11:21 blues-man joined #gluster-dev
11:22 blues-man joined #gluster-dev
11:24 sripathi1 joined #gluster-dev
11:36 hagarth joined #gluster-dev
11:54 kkeithley1 joined #gluster-dev
11:55 kkeithley1 johnmark: 3.4 update here, or on the phone?
11:57 shireesh joined #gluster-dev
12:03 kkeithley1 johnmark: ping
12:04 jdarcy Argh.  Forgot to charge my phone.
12:04 kkeithley1 are we on the phone? numbers?
12:05 jdarcy I guess we're here, but still.
12:06 hagarth hello folks
12:06 kkeithley1 hi
12:07 jdarcy Hi Vijay.
12:07 hagarth Hi Jeff
12:07 hagarth quick update from my side - still waiting for rdma and client-op-version plus more testing coverage
12:08 hagarth that has been blocking alpha - will do qa7 tonight since significant fixes have gone in after qa6
12:08 jdarcy I'm pulling up the tracker bug now.  Do we think that's the right list?
12:08 edward1 joined #gluster-dev
12:09 hagarth that does indeed
12:09 kkeithley1 ???
12:11 hagarth kkeithley1: there's a tracker bug for the alpha
12:12 hagarth kkeithley1: 895528
12:13 kkeithley1 thanks
12:14 hagarth kkeithley1: please add any other bugs that you want to be fixed as blocking this bug
12:15 kkeithley1 okay
12:15 jdarcy Apparently my browser chose an excellent time to die.
12:16 kkeithley1 An old Klingon proverb comes to mind
12:17 jdarcy So for the bugs that are marked POST, do we have any interesting news?
12:18 jdarcy Looks like I need to update at least one.
12:19 hagarth we need to take patches in for the POST bugs .. code reviews will help there
12:19 jdarcy Actually it looks like only the volume-delete one is really in POST.
12:21 jdarcy That leaves four in ASSIGNED - two RDMA-related, replica 3 (which is kind of a placeholder), and granular self-heal (ditto)
12:21 hagarth yeah
12:21 hagarth should we move out granular self-heal?
12:22 hagarth i am not too sure if we have a ready design for implementing that
12:22 jdarcy I'm fine with that.
12:23 hagarth ok, let us remove that post the meeting
12:23 hagarth so we are left with only RDMA bugs and client op-version
12:23 kkeithley1 are all the changes/fixes to ufo on the 3.4 branch?
12:23 jdarcy So basically we're down to RDMA, testing, and 889382
12:24 jdarcy Not sure why that one's in POST since there's no patch.
12:25 kkeithley1 remind me what POST stands for?
12:25 jdarcy kkeithley1: Patch posted, not yet merged (that would be MODIFIED)
12:25 kkeithley1 and btw, may we have a tag for 3.3.1 added to the tree please?
12:25 kkeithley1 ah
12:26 kkeithley1 <snark>PATCHPOSTED would have used too many bits I guess</snark>
12:26 hagarth kkeithley1: I thought 3.3.1 is present .. Is that not the case?
12:27 hagarth I think we could branch out release-3.4 at the time of alpha
12:27 jdarcy This might actually be an instance of 893851 (memory corruption probing invalid host) which I fixed.
12:27 jdarcy Would it be OK to mark it as a dup?
12:27 kkeithley1 I don't see it. v3.3.0qa9, v3.3.1qa1, v3.3.1qa2, v3.3.1qa3, v3.3beta1, v3.3beta2, v3.4.0qa3, ...
12:27 hagarth jdarcy: think so
12:28 hagarth kkeithley1: will push 3.3.1 later today
12:28 jdarcy If it shows up again we can revisit, but I don't think it will.
12:29 kkeithley1 are all the changes/fixes to ufo on the 3.4 branch?
12:30 kkeithley1 since there is no branch yet, I should presume the answer is yes. Is that correct?
12:30 hagarth kkeithley1: yes
12:32 jdarcy Anybody know what's up with 895656 (gsyncd path)?
12:32 kkeithley1 Avati merged it. Yesterday, or the day before
12:32 hagarth right
12:33 jdarcy OK, moving to MODIFIED.
12:33 hagarth cool!
12:34 jdarcy I think that's all except for a couple of RDMA issues.  If we cut another QA build, we can move a bunch of these from MODIFIED to ON_QA.
12:35 jdarcy Maybe next week we should consider whether to keep the RDMA stuff in, or push it to beta.
12:36 foster hagarth: ping re: bug 858495. see my last comment. I'm not sure whether the "fixed in" field should include that tag info, or whether it's a qa/release thing that should be filled in when a release is cut..?
12:36 glusterbot Bug http://goo.gl/CcLzu unspecified, high, ---, bfoster, ON_QA , performance problem with VMs and replicate
12:39 jdarcy I need to go shovel some snow.  See you all later?
12:40 jdarcy Guess so.  ;)
12:40 kkeithley1 I got about 1/4", how much did you get?
12:40 jdarcy More like 1/2".  Really more sweeping than shoveling, but still.
12:41 kkeithley1 yup, just checking that three miles away didn't get dumped on somehow. ;-)
12:42 jdarcy Well, it ain't gonna sweep itself.  ;)
12:42 kkeithley1 that's what I've got a 22 year old living in the basement for.
13:38 johnmark gah. timezone fail
13:39 * johnmark reads backlog
13:40 * kkeithley1 wonders what time zone johnmark is in ;-)
13:42 * kkeithley1 thinks maybe that one that Indiana flutters between trying to make up its mind whether it's EST or CST
13:42 johnmark haha
13:42 johnmark I just landed in SJC last night
13:42 kkeithley1 ah
13:42 johnmark and promptly forgot that this meeting was at 4am
13:42 johnmark but anyway, yay progress
13:43 johnmark jdarcy: happy shoveling
14:02 johnmark hagarth_: will look for qa7
14:03 1JTAAEZCH joined #gluster-dev
14:09 puebele1 joined #gluster-dev
14:11 hagarth joined #gluster-dev
14:11 sgowda joined #gluster-dev
14:18 sgowda joined #gluster-dev
14:31 vpshastry joined #gluster-dev
14:33 puebele joined #gluster-dev
14:33 harshpb joined #gluster-dev
14:39 foster hmm, it appears we might have a fix in the pipe for the vaunted sparse-file speculative preallocation behavior in xfs :)
14:40 johnmark ooooh
14:42 foster the high-level proposal is to base the preallocation size on the size of the last extent, rather than the file size
14:42 foster e.g., a seek past a hole disables preallocation
14:43 foster i have to play with it to understand it better tbh, but interesting development nonetheless :)
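
A note on the mechanism foster describes: the following is a toy user-space model of that heuristic, not the actual XFS patch; the doubling-and-cap policy, the names, and the 8 GiB ceiling are all assumptions for illustration. Sizing the next speculative preallocation from the last extent rather than from the file size means a seek past a hole, which cuts the last extent short, keeps further preallocation small.

    #include <stdio.h>
    #include <stdint.h>

    /* hypothetical policy: double the last extent, capped at a ceiling */
    static uint64_t prealloc_size(uint64_t last_extent_bytes, uint64_t cap)
    {
        uint64_t want = last_extent_bytes * 2;
        return want > cap ? cap : want;
    }

    int main(void)
    {
        uint64_t cap = 8ULL << 30;  /* assumed 8 GiB preallocation ceiling */

        /* sequential writer: the last extent keeps growing, so the
         * speculative preallocation keeps growing with it */
        printf("256 MiB last extent -> prealloc %llu MiB\n",
               (unsigned long long)(prealloc_size(256ULL << 20, cap) >> 20));

        /* sparse writer: a seek past a hole leaves a small last extent,
         * so the next preallocation stays small instead of tracking the
         * (possibly huge) file size */
        printf("4 KiB extent after a hole -> prealloc %llu KiB\n",
               (unsigned long long)(prealloc_size(4096, cap) >> 10));
        return 0;
    }
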
14:51 xavih_ Hello, I have noticed that the performance/quick-read and performance/md-cache translators allocate a dict_t structure for each "live" inode_t to cache some information
14:52 xavih_ if a recursive directory listing is performed (ls -R, find, rsync, ...), the dict_t memory pool is depleted very fast, which makes it useless
14:53 xavih_ couldn't this be a performance issue?
14:58 xavih as a side note, I've run an "ls -laR" with and without these two translators, and without them the operation completes faster (especially on directories with many files)
14:58 xavih However I'm not sure if it is directly related to the dict_t memory pool or some other issue
14:58 foster xavih: are you referring to a workload that in general defeats the cache, or the memory pool depletion in particular?
14:59 foster oh, you might want to try that same test without quick-read, but with md-cache...?
14:59 xavih well, both
14:59 xavih I think that the pool depletion could be an issue
14:59 xavih and I think that this kind of access could be very common
15:00 foster if your test is not reading the file, that work could be wasted
15:00 foster how big are the files btw? I think quick-read has a threshold
15:00 xavih there are files of all sizes, however I'm not reading them
15:01 foster could you repeat your test without quick-read but with md-cache and see whether you end up closer to your faster or slower result?
15:01 xavih I've run tests with only one of the translators, and the dict_t depletion is the same (maybe a bit later)
15:02 foster how does the test result compare?
15:02 xavih ok, I'll try again. I'm not sure if the speed changed with only one of them
15:02 foster ok
15:07 vpshastry left #gluster-dev
15:11 xavih foster: with neither of the xlators, the recursive ls takes 16 seconds
15:12 xavih with quick-read it takes 27 seconds
15:12 xavih with md-cache, 56 seconds
15:12 xavih and with both xlators, 1 minute 12 seconds
15:13 johnmark xavih: hi!
15:13 xavih I've just realized that I'm running gluster in debug mode. Maybe the increased number of messages when the pool is depleted is the cause of the big slowdown?
15:13 wushudoin joined #gluster-dev
15:13 xavih hi John :)
15:14 foster hrm, that's interesting. i guess it could be worth a try without debug mode
15:15 foster the md-cache case anyways
15:16 foster i'll see if I can replicate that
15:16 harshpb joined #gluster-dev
15:17 xavih I'll try without debug mode...
15:22 xavih foster: without debug mode and with neither of the xlators, it takes 15 seconds
15:23 xavih with md-cache, 21 seconds
15:23 foster ok, how many files?
15:24 xavih 25000 more or less
15:24 xavih could it be the memory pool depletion, or could it be caused by something else?
15:25 xavih if I'm right, the dict_t pool is initialized with 4096 items
15:25 xavih with both md-cache and quick-read, after 2048 lookups the pool will be empty
15:26 foster seems like a possibility. unfortunately it doesn't look like we could indirectly limit the usage here via md-cache
15:27 foster xavih: this would be good to file a bug for btw, with details on the test, volume info, results, etc.
15:29 xavih ok, I'll file a bug
15:29 foster great, thanks
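
For illustration, a toy model of the depletion arithmetic xavih walks through above: 4096 pool slots and two dict_t allocations per lookup (one each for md-cache and quick-read) empty the pool after 2048 lookups. This is not the glusterfs mem-pool code; it only assumes, per the discussion, that a depleted pool falls back to a slower allocation path rather than failing outright.

    #include <stdio.h>

    #define POOL_SLOTS       4096  /* dict_t pool size cited above */
    #define DICTS_PER_LOOKUP 2     /* one for md-cache, one for quick-read */

    static int slots_left = POOL_SLOTS;

    /* stand-in for a pool allocator: returns 1 when the request misses
     * the pool and has to take the slow fallback path */
    static int pool_get(void)
    {
        if (slots_left > 0) {
            slots_left--;          /* fast path: preallocated slot */
            return 0;
        }
        return 1;                  /* slow path: pool depleted */
    }

    int main(void)
    {
        long lookups = 25000;      /* roughly the file count in the test */
        long slow = 0;

        for (long i = 0; i < lookups; i++)
            for (int d = 0; d < DICTS_PER_LOOKUP; d++)
                slow += pool_get();

        printf("pool empty after %d lookups; %ld of %ld dict_t "
               "allocations fell back to the slow path\n",
               POOL_SLOTS / DICTS_PER_LOOKUP, slow,
               lookups * DICTS_PER_LOOKUP);
        return 0;
    }
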
15:31 xavih however, these tests are part of the debugging of the translator I'm working on, and I run everything manually (yes, I know this is not good, but it is more versatile :p). I'll try to repeat them in a more "official" environment...
15:52 johnmark xavih: thanks - when it's in a form that's sharable, please post your work on the gluster.org wiki
15:56 xavih johnmark: I'm currently making it a bit more stable. I don't consider it beta yet, but I'm preparing to publish it on github if anyone is interested
15:58 hagarth xavih: please feel free to reach out if you need any specific help
15:59 xavih however the performance is quite poor. There are some ideas to improve it (some taken from afr) but they are not implemented yet.
15:59 xavih hagarth: thank you :)
16:03 sghosh joined #gluster-dev
16:04 foster xavih: on a quick (stupid) test, I didn't reproduce a problem: a slight drop with md-cache inserted vs. not, and md-cache provides a nice bump on a subsequent ls
16:05 foster but I'm on a vm with a single brick volume, so maybe that is washing something out
16:05 xavih foster: I'm currently preparing to reproduce the test with a "normal" installation
16:05 foster ok, if you file a bug with all the details I'll take a harder look on real hardware :)
16:06 xavih I'll let you know when I can run the test
16:06 xavih ok
16:34 polfilm joined #gluster-dev
16:35 semiosis joined #gluster-dev
16:46 hagarth kkeithley1: pushed v3.3.1
16:47 kkeithley1 thanks
17:21 harshpb joined #gluster-dev
17:22 johnmark xavih: I second what hagarth said - we're happy to help
17:23 johnmark xavih: I think what might make sense is to start publishing on github
17:23 johnmark and looking for pointers from those who start diving into your code
17:30 xavih johnmark: well, if someone is capable of understanding it... :p it lacks a bit of documentation :-/
17:32 jdarcy Maybe I'll put "Lightning Rod" on my next set of business cards.
17:34 jdarcy xavih: Well, if we can't understand it then we'll be coming to you for help.  Seems rather mutual.  :)
17:38 xavih jdarcy: my intention is to document it, however it is not mature yet and I'm concentrating on developing and killing bugs instead of documenting...
17:40 xavih I've just finished a first test on a standard configuration (replica 2) with the md-cache and quick-read xlators
17:41 xavih with them, the first recursive ls takes 1:03, the subsequent ls takes 0:50
17:41 xavih without them, the first ls takes 0:55, and the second 0:49
17:42 xavih foster: ^^
17:42 xavih tomorrow I'll do additional tests and file a bug to analyze it
17:50 foster xavih: ok, thanks
18:01 portante joined #gluster-dev
19:47 johnmark hagarth_: w00t
19:49 kkeithley1 I gather you mean replace xlator X with a 3rd-party xlator that's not known to gluster at build time.
19:50 johnmark kkeithley1: right
19:50 johnmark kkeithley1: nice redirect to the appropriate channel ;)
19:51 jdarcy kkeithley1: Exactly.
19:51 kkeithley1 ha ha, yeah
19:52 jdarcy johnmark: This isn't just for developers, BTW.  Users of third-party translators will be affected by our choices too.
19:52 johnmark jdarcy: ah, very good point
19:52 johnmark that means it's even more important
19:52 johnmark than I thought
20:09 Technicool joined #gluster-dev
21:22 Technicool joined #gluster-dev
