
IRC log for #gluster-dev, 2016-04-12


All times shown according to UTC.

Time Nick Message
00:12 dlambrig_ joined #gluster-dev
00:54 baojg joined #gluster-dev
01:23 shyam joined #gluster-dev
01:31 EinstCrazy joined #gluster-dev
01:38 dlambrig_ joined #gluster-dev
01:49 luizcpg joined #gluster-dev
02:03 rafi joined #gluster-dev
02:09 baojg joined #gluster-dev
02:17 mmckeen joined #gluster-dev
02:24 kbyrne joined #gluster-dev
02:28 jtc joined #gluster-dev
02:33 vmallika joined #gluster-dev
02:33 rastar joined #gluster-dev
02:40 kshlm joined #gluster-dev
02:46 nishanth joined #gluster-dev
02:55 jtc joined #gluster-dev
03:19 luizcpg joined #gluster-dev
03:25 overclk joined #gluster-dev
03:32 atinm joined #gluster-dev
03:54 shubhendu joined #gluster-dev
03:55 shubhendu joined #gluster-dev
04:04 nbalacha joined #gluster-dev
04:07 poornimag joined #gluster-dev
04:09 sakshi joined #gluster-dev
04:14 gem joined #gluster-dev
04:15 pranithk joined #gluster-dev
04:24 itisravi joined #gluster-dev
04:39 vmallika joined #gluster-dev
04:40 kdhananjay joined #gluster-dev
04:48 jiffin joined #gluster-dev
04:49 Gaurav_ joined #gluster-dev
04:57 poornimag itisravi, ndevos, Can you please re-review http://review.gluster.org/#/c/13872/ ?
04:59 ndarshan joined #gluster-dev
04:59 kshlm joined #gluster-dev
05:05 EinstCrazy joined #gluster-dev
05:06 rraja joined #gluster-dev
05:12 aspandey joined #gluster-dev
05:13 ppai joined #gluster-dev
05:14 Apeksha joined #gluster-dev
05:14 aravindavk joined #gluster-dev
05:15 itisravi poornimag: done. Looks like you need to re-trigger smoke
05:15 ashiq_ joined #gluster-dev
05:17 EinstCra_ joined #gluster-dev
05:17 prasanth joined #gluster-dev
05:20 poornimag itisravi, thanks
05:20 poornimag itisravi++
05:20 glusterbot poornimag: itisravi's karma is now 15
05:20 itisravi np :)
05:20 Manikandan joined #gluster-dev
05:24 karthik___ joined #gluster-dev
05:31 Bhaskarakiran joined #gluster-dev
05:32 rastar joined #gluster-dev
05:36 nishanth joined #gluster-dev
05:39 decay joined #gluster-dev
05:46 baojg joined #gluster-dev
05:52 hgowtham joined #gluster-dev
06:02 asengupt joined #gluster-dev
06:10 pur joined #gluster-dev
06:11 pranithk joined #gluster-dev
06:13 pkalever joined #gluster-dev
06:15 rafi joined #gluster-dev
06:18 spalai joined #gluster-dev
06:21 hchiramm joined #gluster-dev
06:31 skoduri joined #gluster-dev
06:32 atalur joined #gluster-dev
06:32 mchangir joined #gluster-dev
06:45 anoopcs pranithk, Can you please respond to the following mail from manu? http://www.gluster.org/pipermail/gluster-devel/2016-April/048971.html
06:52 vmallika joined #gluster-dev
06:55 aspandey joined #gluster-dev
06:55 pranithk anoopcs: I hope to be a bit free in the evening sometime, made a note
06:56 anoopcs pranithk, Thanks.
07:00 asengupt joined #gluster-dev
07:25 Saravanakmr joined #gluster-dev
07:57 ndevos hi anoopcs, please poke some of the reviewers for https://review.gluster.org/12014 and try to get it merged soon
08:12 pur joined #gluster-dev
08:22 baojg joined #gluster-dev
08:23 atalur joined #gluster-dev
08:24 anoopcs ndevos, Will do.
08:25 anoopcs ndevos, btw can you please take a look at this mail? http://www.gluster.org/pipermail/gluster-devel/2016-April/048971.html
08:25 itisravi joined #gluster-dev
08:32 ndevos anoopcs: I've seen it, but wasn't sure without checking the code; I did not have time for that
08:33 ndevos itisravi: you know a bit about fuse too, maybe you can check out http://www.gluster.org/pipermail/gluster-devel/2016-April/048971.html ?
08:35 itisravi ndevos: I actually did see that email. I wasn't sure what manu said is true.
08:35 itisravi (or false)
08:35 anoopcs ndevos, Ok. As he mentioned, Emmanuel is ready to prepare the fix, but he needs confirmation w.r.t. Linux FUSE.
08:35 ndevos itisravi: yeah, same here...
08:36 itisravi anoopcs: it might be a good idea to bounce it off fuse-devel
08:37 anoopcs itisravi, OK. Thanks.
08:38 itisravi anoopcs: or perhaps ask Du. He also has some context on fuse.
08:40 anoopcs itisravi, Ok.
08:46 sakshi joined #gluster-dev
09:03 dlambrig_ joined #gluster-dev
09:05 pranithk ndevos: hey ping me once you are online, let us talk about inclusion of multi-threaded self-heal in 3.7.x
09:12 itisravi joined #gluster-dev
09:19 poornimag joined #gluster-dev
09:26 spalai1 joined #gluster-dev
09:27 ndarshan joined #gluster-dev
09:33 ndevos pranithk: I just replied to your email, I hope that makes it a little clearer?
09:42 pranithk ndevos: yes, replied. Will get a nice blog-post preferably with all the graphs given by paul
09:46 ndevos pranithk: ok, thanks!
09:47 prasanth joined #gluster-dev
10:06 kotreshhr joined #gluster-dev
10:17 kshlm joined #gluster-dev
10:18 pranithk rastar: hey, I wanted to talk to you about your plans for getting workload-based tests submitted by users.
10:19 pranithk rastar: I am trying to get post-factum to submit the tests he runs before he pushes the build for deployment.
10:19 rastar pranithk: I'm in a meeting, will reply soon
10:19 pranithk rastar: cool
10:26 gem joined #gluster-dev
10:27 kkeithley1 joined #gluster-dev
10:30 pranithk kshlm: hey
10:30 pranithk kshlm: Did you make 3.7.12 tag?
10:30 pranithk kshlm: I mean 3.7.11
10:30 kshlm nope waiting for the group virt change to pass tests
10:32 pranithk kshlm: okay, over the weekend I merged a patch into the 3.7 branch by mistake but quickly sent a revert, http://review.gluster.org/13932, which is also failing on centos
10:34 pranithk kshlm: basically http://review.gluster.org/13859, http://review.gluster.org/13859 are two patches which fix a corruption because the lookups are executed in two parallel threads
10:34 pranithk kshlm: Both need to be merged for the fix
10:35 pranithk kshlm: sorry http://review.gluster.org/13574
10:35 pranithk kshlm: Or we revert the patch by merging http://review.gluster.org/13932
10:35 pranithk kshlm: Otherwise there will be a bug with tiering
10:36 pranithk kshlm: rafi tested that just now, with both the patches no issues are seen
10:37 pranithk kshlm: I just retriggered recheck centos on http://review.gluster.org/13932 as well
10:37 kshlm What happens if only one of them is present?
10:38 pranithk kshlm: The race window gets bigger
10:38 pranithk kshlm: The race has always been there just that the window is less.
10:38 pranithk kshlm: Because I accidentally merged ec patch race window is bigger
10:38 pranithk kshlm: if we take afr patch, no corruption
10:39 pranithk kshlm: If we revert the patch, then same old race window before ec patch
10:42 pranithk kshlm: I can monitor the revert patch to get all the acks, resubmitting if it fails etc., if you want to go that route.
10:42 pranithk kshlm: http://review.gluster.org/13574 already has all the acks.
10:42 kshlm It requires smoke+1
10:43 pranithk kshlm: I see you already submitted smoke...
10:43 kshlm I've triggered a smoke run for the change, it should complete faster than regression.
10:43 pranithk kshlm: okay. Will monitor that too...
10:53 rastar pranithk: there is no doc yet on workload-based tests.
10:54 rastar pranithk: my current plan: create a new dir under tests
10:54 pranithk rastar: I want to talk to some of the users I have been helping, to get the scripts they run before pushing things into production into distaf, preferably
10:54 pranithk rastar: Not sure how successful I will be, but I want to give this a go
10:55 rastar pranithk: I did not understand
10:55 rastar pranithk: their scripts use current framework?
10:55 pranithk rastar: One more thing I want to know. This is a noob question. Do we have a way of finding if a workload leads to leaks in Distaf?
10:55 rastar pranithk: and you want them to submit in Distaf?
10:55 pranithk rastar: I want to work with them to convert their scripts to distaf
10:56 rastar oh ok
10:56 pranithk rastar: yeah. Selling point is that these will be run before each release, so they don't have to worry about it?
10:56 rastar pranithk: Awesome idea. msv sent a how-to on writing distaf tests recently
10:56 pranithk rastar: I wanted to start all of this at the time of 3.7.7 when I screwed up the release, but didn't catch a break because of too many customer issues
10:56 rastar pranithk: that should help them get started
10:56 pranithk rastar: okay
10:57 rastar pranithk: there is no mem-leak detection feature in distaf yet
10:57 pranithk rastar: I think glusterfs has infra to find if a workload leads to leaks using statedumps.
10:57 rastar pranithk: but someone can write a function in distaf-gluster-libs to monitor the mem usage
10:57 rastar pranithk: let me think on that
10:57 pranithk rastar: If there is anything needed in glusterfs, maybe simpler output of statedump, that is something we can add
10:58 pranithk rastar: maybe something like xml/json output. We can do that, for easier parsing I mean.
10:58 rastar pranithk: better thing would be to write nice parsers for statedump
10:58 rastar pranithk: :) same thing
10:58 pranithk rastar: Hmm, it is human readable, so may not be a good idea.
10:59 pranithk rastar: Yeah, you can solve it in both ways :-)
10:59 rastar pranithk: agreed, if no one has objection, that should be the first step.
10:59 pranithk rastar: But every time we add something new we will have to touch the parser...
11:00 rastar pranithk: but that has already become necessary
11:00 rastar pranithk: not many know how to read statedump
11:01 rastar pranithk: ok summary
11:02 rastar pranithk: for now let users write tests and submit them under tests/NEWDIR
11:02 rastar pranithk: need not use any framework
11:02 rastar pranithk: All we need is an executable script under that dir which returns 0 or non-zero
11:02 rastar pranithk: ask them to slowly convert to distaf
11:03 pranithk rastar: yeah agree
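A minimal sketch of the kind of script rastar describes above: an executable placed under a new directory in tests/ that exits 0 on success and non-zero on failure. The volume name, brick paths and mount point are made-up examples, not anything agreed in the discussion:

    #!/bin/bash
    # Hypothetical workload test under tests/<NEWDIR>/; with "set -e" any
    # failing command makes the script exit non-zero.
    set -e

    VOL=workloadvol                 # example names only
    MNT=/mnt/$VOL

    mkdir -p /bricks/${VOL}{1,2} $MNT
    gluster volume create $VOL replica 2 $(hostname):/bricks/${VOL}1 \
                                         $(hostname):/bricks/${VOL}2 force
    gluster volume start $VOL
    mount -t glusterfs $(hostname):/$VOL $MNT

    # the workload itself: create, stat and remove a bunch of files
    for i in $(seq 1 1000); do
        dd if=/dev/zero of=$MNT/file.$i bs=4k count=1 2>/dev/null
        stat $MNT/file.$i > /dev/null
        rm -f $MNT/file.$i
    done

    umount $MNT
    gluster --mode=script volume stop $VOL
    gluster --mode=script volume delete $VOL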
11:03 rastar pranithk: for mem leaks, let's jsonify statedump and write parsers, or just write parsers
11:11 pranithk rastar: jsonify is better, we can remove the current way of printing too as json output is human-readable unlike xml
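To make the "write parsers" option concrete, a rough sketch of how two statedumps (taken before and after a workload) could be compared for growing allocation counts. The section and field names used here (the "... memusage]" headers and "num_allocs=") are quoted from memory and should be treated as assumptions:

    #!/bin/bash
    # Hypothetical statedump diff: print usage-types whose num_allocs grew
    # between two dumps. Usage: ./statedump-diff.sh before.dump after.dump
    before=$1
    after=$2

    extract() {
        # emit "<memusage section>|<num_allocs>" pairs, one per section
        awk -F= '/^\[.*memusage\]/ { gsub(/ /, "_"); sect = $0 }
                 /^num_allocs=/    { print sect "|" $2 }' "$1"
    }

    join -t'|' <(extract "$before" | sort) <(extract "$after" | sort) |
        awk -F'|' '$3 + 0 > $2 + 0 { print "possible leak:", $1, $2, "->", $3 }'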
11:15 post-factum pranithk: rastar: what about automating valgrind cases? it requires a bunch of suppression files first
11:15 pranithk post-factum: I always had problems running valgrind tests, something or the other used to hang. You never had issues?
11:16 post-factum pranithk: all the memleaks I discovered were discovered by valgrind :)
11:16 post-factum pranithk: the only issue I have each time is false-positives
11:17 post-factum pranithk: either those are false positives, or small leaks that do not matter
11:18 post-factum pranithk: so to use valgrind properly we need either to suppress the false positives, or really fix all leaks, even the small init-related ones
11:18 pranithk post-factum: got it
11:20 pranithk post-factum: We can do this in stages also. We really need valgrind runs if we know there are leaks. Valgrind runs are slow. So we will come up with infra to find if there are leaks. If there are leaks, then we can do a valgrind run and provide the output
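For that second stage, the valgrind run on the client could look roughly like this. The server, volume and log paths are placeholders, and the options shown are just one plausible combination; --gen-suppressions=all is there to help build the suppression files post-factum mentions:

    # Hypothetical client-side valgrind run; server1/testvol/paths are examples.
    # -N stops the glusterfs client from daemonizing, so valgrind can track it.
    mkdir -p /mnt/testvol
    valgrind --leak-check=full \
             --gen-suppressions=all \
             --log-file=/var/log/glusterfs/valgrind-client.log \
             glusterfs -N --volfile-server=server1 --volfile-id=testvol /mnt/testvol &

    sleep 5                         # give the mount a moment to come up
    # ... run the workload against /mnt/testvol here ...
    umount /mnt/testvol             # client exits, valgrind writes its leak summary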
11:22 mchangir joined #gluster-dev
11:22 post-factum pranithk: so, statedumps first, valgrind second?
11:23 pranithk post-factum: yeah
11:23 pranithk post-factum: because statedump-based runs will happen at production-setup speed
11:23 post-factum pranithk: any possibility statedumps won't handle some leaks :)?
11:23 prasanth joined #gluster-dev
11:24 post-factum pranithk: also, gperftools (with tcmalloc) are able to do memory profiling
11:24 pranithk post-factum: Of all the leaks I have fixed till now, only one was not found using statedumps
11:25 pranithk post-factum: Valgrind can't find inode leaks very well. It will only report them as possibly lost, because they will still be in the inode-table
11:25 kkeithley_ I've tried doing automated valgrind runs.  For me they get wedged about 50% of the time. If you have found some magic that keeps them from getting wedged, it would be great to have that.
11:27 pranithk kshlm: http://review.gluster.org/#/c/13574/ smoke passed
11:27 pranithk kkeithley_: You mean they hang?
11:27 kkeithley_ wedged == hang
11:27 kkeithley_ yes
11:28 kkeithley_ there's some lock/unlock race condition that occurs (only) under valgrind
11:28 post-factum kkeithley_: probably, they were just slow enough to exhaust your patience
11:28 poornimag joined #gluster-dev
11:28 kkeithley_ If I let them sit, they sit for days before I lose patience. ;-)
11:29 pranithk kkeithley_: post-factum: Okay, staged testing it is then?
11:29 kkeithley_ but then I was running in VMs on old slow hardware too, so
11:30 post-factum yup, staged testing looks and feels better
11:31 pranithk kkeithley_: post-factum: rastar: We have a way to go forward then?
11:34 pranithk rastar: Do you think we can be ready for this by 3.7.12?
11:35 rastar pranithk: this being valgrind based tests?
11:36 pranithk rastar: I think phase-1 is to find if we have leaks
11:37 rastar I am tied up with my 3.8 feature till the end of this month. But the work is only to create a new test in centos-ci, so I can do it before 3.7.12
11:37 rastar post-factum: It would be great if you summarize how you use valgrind with gluster
11:38 rastar post-factum: I would use that and create a nightly test out of it to be run on centos-ci
11:38 pranithk rastar: Who all know distaf code?
11:38 post-factum rastar: unfortunately, I didn't manage to find correct suppressions for "false-positives"
11:38 pranithk rastar: ms you and shwetha and jonathan?
11:39 rastar pranithk: and ndevos
11:39 pranithk rastar: okay cool. Could you find out if any of them have time to contribute?
11:39 kkeithley_ hmmm, still no 3.7.11???
11:40 rastar post-factum: no problem, let us report false-positives too. Initially we will rely on manual filtering of them
11:40 pranithk kkeithley_: kshlm is waiting for regression results, wait
11:40 post-factum rastar: also, I did valgrind checks only on client side
11:40 pranithk kshlm: http://review.gluster.org/#/c/13574/ is ready for merge...
11:41 rastar shwetha and jonathan want to sync upstream and downstream distaf libs before taking up something else
11:41 rastar pranithk: but I will ask them
11:41 post-factum rastar: and valgrind itself is useless without some workload, which has to be simulated by hand-crafted scripts
11:41 rastar post-factum: :) that is what started this discussion
11:42 rastar post-factum: the workload you have, would it be possible to create a single script for it
11:42 rastar post-factum: that we can run on the jenkins setup?
11:43 post-factum rastar: i doubt it. that involves manipulating millions of files and takes at least several hours to gain reliable statistics on leaks
11:43 rastar post-factum: that is not a problem. It could be scheduled once a week
11:44 rastar post-factum: what I meant to ask was, is your workload simulated manually or is it automated?
11:44 shyam joined #gluster-dev
11:44 post-factum rastar: usually, i'm unable to stat millions of files manually :)
11:45 ashiq_ joined #gluster-dev
11:45 rastar post-factum: :) then we can easily convert that to a jenkins job
11:46 post-factum rastar: it should be some script that mounts a volume, does some job, takes statedumps, umounts and returns something
11:47 post-factum rastar: "some job" includes creating, stat'ting and removing files in general. that is how i've found leaks
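Sketched out, the kind of wrapper post-factum describes might look like the following. The volume, server and statedump directory are placeholders, and the assumption here is that SIGUSR1 makes the glusterfs client write a statedump (by default somewhere under /var/run/gluster, with glusterdump.<pid>.* file names):

    #!/bin/bash
    # Hypothetical workload + statedump wrapper. Real runs would use millions
    # of files over several hours; the loop below is deliberately tiny.
    set -e

    VOL=testvol
    MNT=/mnt/$VOL
    DUMPDIR=/var/run/gluster        # assumed default statedump directory

    mkdir -p $MNT
    mount -t glusterfs server1:/$VOL $MNT
    CLIENT_PID=$(pgrep -f "glusterfs.*$VOL" | head -n1)

    kill -USR1 "$CLIENT_PID"        # statedump before the workload
    sleep 2

    for i in $(seq 1 10000); do     # "some job": create, stat, remove
        : > "$MNT/f.$i"
        stat "$MNT/f.$i" > /dev/null
        rm -f "$MNT/f.$i"
    done

    kill -USR1 "$CLIENT_PID"        # statedump after the workload
    sleep 2
    umount $MNT

    # the two most recent dumps are the ones to compare (file naming is an
    # assumption, see above)
    ls -t "$DUMPDIR"/glusterdump.$CLIENT_PID.* | head -n2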
11:53 rraja joined #gluster-dev
11:53 rastar post-factum: Cool, thanks. I will try that.
11:57 post-factum rastar: in fact i did valgrind instead of statedumps first, but as we concluded, statedumps run at production speed, so they should be first
11:59 kkeithley_ post-factum: pm
12:00 kkeithley_ are we having a bug triage today?
12:04 kkeithley_ I guess not
12:07 jiffin kkeithley_: rafi was planning to host the bug triage meeting, right?
12:07 jiffin kkeithley_: on 29th March
12:10 rafi REMINDER: Gluster Community Bug Triage meeting to start now
12:14 _Bryan_ joined #gluster-dev
12:16 Manikandan joined #gluster-dev
12:19 vmallika joined #gluster-dev
12:25 ira joined #gluster-dev
12:34 mchangir joined #gluster-dev
12:36 rafi pranithk: 1324239
12:36 pranithk kdhananjay: you are working on similar bug reported by Paul right? ^^
12:38 kdhananjay pranithk: possibly.
12:38 pranithk kdhananjay: Could you take it in your name?
12:38 kdhananjay pranithk: i meant, they're possibly talking about the same issue.
12:38 kdhananjay pranithk: cool
12:38 pranithk kdhananjay: cool
12:39 rafi kdhananjay: you can also put a triaged keyword,
12:39 kdhananjay rafi: i already did :)
12:39 rafi kdhananjay: thanks
12:39 rafi kdhananjay++ pranithk++
12:40 glusterbot rafi: kdhananjay's karma is now 16
12:40 glusterbot rafi: pranithk's karma is now 48
12:40 pur_ joined #gluster-dev
12:43 luizcpg joined #gluster-dev
12:49 luizcpg joined #gluster-dev
13:00 rraja joined #gluster-dev
13:03 ndevos rafi++ thanks for hosting the bug triage meeting :)
13:03 glusterbot ndevos: rafi's karma is now 45
13:04 kotreshhr left #gluster-dev
13:09 EinstCrazy joined #gluster-dev
13:14 mchangir joined #gluster-dev
13:18 lalatenduM joined #gluster-dev
13:28 jiffin joined #gluster-dev
13:32 hchiramm joined #gluster-dev
13:38 Apeksha joined #gluster-dev
13:42 EinstCrazy joined #gluster-dev
13:46 skoduri joined #gluster-dev
13:48 nbalacha joined #gluster-dev
13:52 ndevos kkeithley_: I've also sent an update to the nfs.disable=true by default change, I thought it passed on a local vm, but now it fails :-/
13:53 ndevos maybe I was running only the failed test, or something
13:53 ndevos kkeithley_: I think after a "gluster volume reset ...", the nfs-server is still running, even if the option is set to nfs.disable=true
13:57 kkeithley_ ndevos: yes, that's what I'm seeing as well
13:57 ndevos kkeithley_: you're fixing that now, or shall I have a go at it?
13:58 aravindavk joined #gluster-dev
13:58 hagarth ndevos, kkeithley: have we checked on gluster-users about this change to disable gnfs by default?
13:59 hagarth I suspect that a lot of user scripts would need to be changed and that might not be an easy exercise
13:59 ira joined #gluster-dev
14:01 ndevos hagarth: not yet, only very few features have been announced to users; it'll definitely be part of the release-notes and 3.8 announcements
14:01 kkeithley_ hagarth: we have, let me see if I can retrieve a link to the email...
14:02 kkeithley_ ndevos: so there's that, and glusterd is not sending back nfs.disable value in the dict it returns after the reset, so glusterd doesn't print that, and the test fails
14:02 kkeithley_ fails because of that
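For readers following along, the failing sequence being described is roughly the one below (volume name is a placeholder, and it assumes a build where nfs.disable already defaults to true):

    gluster volume set testvol nfs.disable false    # explicitly turn gnfs on
    gluster volume status testvol                   # NFS Server ... Online: Y

    gluster volume reset testvol nfs.disable        # back to the default (true)
    gluster volume status testvol                   # expected: no NFS server,
                                                    # observed: still running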
14:03 ndevos kkeithley_: if there is an email about it, I want to add it to the feature description in the roadmap :)
14:03 ndevos kkeithley_: we have glusterd_nfssvc_need_start(), and we need a glusterd_nfssvc_need_stop() too
14:03 hagarth kkeithley_, ndevos: might be worth explaining the ramifications to users
14:04 ndevos kkeithley_: that is in xlators/mgmt/glusterd/src/glusterd-nfs-svc.c for the reconfigure part, I guess
14:04 ndevos hagarth: yes, of course, and that is true for all other features too :)
14:05 hagarth ndevos: we ought to be careful with anything that changes a default :)
14:09 kkeithley_ sure, but if we accept that the long term plan is to migrate to nfs-ganesha for NFS, then sooner or later we need to deprecate gnfs.
14:10 kkeithley_ we need to start phasing it out, a little at a time.
14:12 vmallika joined #gluster-dev
14:15 kkeithley_ I can't find the email. Maybe I never really sent it. Thought I did.
14:21 kkeithley_ ndevos: okay, sounds like you're further along than I am
14:22 ndevos kkeithley_: does that mean I get the job?
14:23 kkeithley_ do you want it? ;-)
14:23 kkeithley_ but I'm happy to keep plugging away at it.
14:28 ppai joined #gluster-dev
14:30 ppai joined #gluster-dev
14:31 wushudoin joined #gluster-dev
14:43 pranithk joined #gluster-dev
15:12 overclk joined #gluster-dev
15:23 Apeksha joined #gluster-dev
15:38 spalai joined #gluster-dev
15:45 Gaurav_ joined #gluster-dev
16:00 rraja joined #gluster-dev
16:02 dlambrig_ joined #gluster-dev
16:09 skoduri joined #gluster-dev
16:17 jiffin joined #gluster-dev
16:18 luizcpg joined #gluster-dev
16:24 jiffin joined #gluster-dev
17:13 rastar joined #gluster-dev
18:02 hagarth joined #gluster-dev
18:19 dlambrig_ joined #gluster-dev
21:03 v12aml joined #gluster-dev
22:14 luizcpg joined #gluster-dev
