
IRC log for #gluster-dev, 2016-08-18


All times shown according to UTC.

Time Nick Message
01:10 nigelb kkeithley: netbsd regressions are running on netbsd7.
01:10 nigelb we only have smoke running on netbsd6.
03:05 julim joined #gluster-dev
03:20 hchiramm joined #gluster-dev
03:24 sanoj joined #gluster-dev
03:35 magrawal joined #gluster-dev
04:01 atinm joined #gluster-dev
04:27 shubhendu joined #gluster-dev
04:29 poornimag joined #gluster-dev
04:34 aravindavk joined #gluster-dev
04:36 shubhendu joined #gluster-dev
04:40 nbalacha joined #gluster-dev
04:45 karthik_ joined #gluster-dev
05:14 jiffin joined #gluster-dev
05:16 ndarshan joined #gluster-dev
05:18 atinm joined #gluster-dev
05:27 ashiq joined #gluster-dev
05:34 nbalacha joined #gluster-dev
05:35 mchangir joined #gluster-dev
05:37 Saravanakmr joined #gluster-dev
05:37 msvbhat joined #gluster-dev
05:43 rastar joined #gluster-dev
05:44 kdhananjay joined #gluster-dev
05:44 hgowtham joined #gluster-dev
05:46 rafi joined #gluster-dev
06:01 Manikandan joined #gluster-dev
06:01 itisravi joined #gluster-dev
06:03 asengupt joined #gluster-dev
06:05 kshlm joined #gluster-dev
06:08 ankitraj joined #gluster-dev
06:10 kotreshhr joined #gluster-dev
06:12 aspandey joined #gluster-dev
06:12 spalai joined #gluster-dev
06:15 nishanth joined #gluster-dev
06:19 nbalacha joined #gluster-dev
06:19 Bhaskarakiran joined #gluster-dev
06:20 atalur joined #gluster-dev
06:22 atinm joined #gluster-dev
06:30 ashiq atalur++ thanks :)
06:30 glusterbot ashiq: atalur's karma is now 15
06:31 Muthu_ joined #gluster-dev
06:31 atalur ashiq, you are welcome :)
06:32 Muthu joined #gluster-dev
06:33 jiffin1 joined #gluster-dev
06:43 atinm joined #gluster-dev
06:44 aravindavk joined #gluster-dev
06:45 devyani7 joined #gluster-dev
06:48 devyani7 joined #gluster-dev
06:58 Manikandan joined #gluster-dev
07:01 nigelb poornimag: Is this an infra failure? https://build.gluster.org/job/smoke/29867/console
07:14 jiffin1 joined #gluster-dev
07:29 atinm joined #gluster-dev
07:44 Manikandan joined #gluster-dev
08:05 itisravi joined #gluster-dev
08:29 nigelb misc: entirely possible the machines were shutdown/inaccessible the last time you ran restorecon.
08:30 ndevos poornimag: I'm looking for your feedback on http://review.gluster.org/15191 , will probably need to check/correct the test-cases
08:30 misc nigelb: yeah, but still, selinux is supposed to tag the file correctly
08:31 misc restorecon is just a fix, the real long-term fix is having the proper fcontext, and that's already done, so I wonder what is missing
08:36 nigelb what the...
08:36 nigelb regression-test-burn-in is a very confusing job.
08:36 nigelb We define settings on two plugins, but actually only use one of them.
08:36 misc randomly ?
08:37 nigelb Dunno, I have to check history to figure out why it's this way.
08:37 nigelb Maybe I can remove one plugin.
08:43 itisravi joined #gluster-dev
08:44 pur joined #gluster-dev
09:01 jiffin1 joined #gluster-dev
09:04 ramky joined #gluster-dev
09:09 Bhaskarakiran joined #gluster-dev
09:11 mchangir joined #gluster-dev
09:26 rastar joined #gluster-dev
09:30 jiffin1 joined #gluster-dev
09:43 nbalacha joined #gluster-dev
10:00 post-factum what is wind/unwind in terms of gluster?
10:01 jmpq joined #gluster-dev
10:03 ndevos post-factum: wind -> call the FOP in the next xlator, unwind -> call the callback-FOP in the previous xlator
10:03 post-factum ndevos: oh, that simple
10:03 post-factum ndevos: thanks
10:04 ndevos post-factum: yes, it's simple if you know it; if you're seeing it for the first time, you're like *whaaaat?!*
10:04 post-factum ndevos: the first idea is about libunwind
10:04 ndevos post-factum: ah, yeah, that's rather different
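
A minimal pass-through sketch of the wind/unwind pattern ndevos describes, using the standard STACK_WIND / STACK_UNWIND_STRICT macros from libglusterfs. It compiles only inside the gluster tree, and the "demo" xlator names are made up for illustration:

    #include "xlator.h"   /* in-tree libglusterfs header */

    /* unwind: hand the open() result back up to the previous (parent)
     * xlator on the call stack */
    int32_t
    demo_open_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                   int32_t op_ret, int32_t op_errno, fd_t *fd, dict_t *xdata)
    {
            STACK_UNWIND_STRICT (open, frame, op_ret, op_errno, fd, xdata);
            return 0;
    }

    /* wind: forward the open() FOP down to the next (child) xlator,
     * registering demo_open_cbk to be called on the way back up */
    int32_t
    demo_open (call_frame_t *frame, xlator_t *this, loc_t *loc,
               int32_t flags, fd_t *fd, dict_t *xdata)
    {
            STACK_WIND (frame, demo_open_cbk,
                        FIRST_CHILD (this), FIRST_CHILD (this)->fops->open,
                        loc, flags, fd, xdata);
            return 0;
    }
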
10:22 msvbhat joined #gluster-dev
10:33 nigelb ndevos: Do you do the packaging for centos?
10:34 ashiq joined #gluster-dev
10:36 rastar joined #gluster-dev
10:38 nigelb kshlm: I have questions about the regression-test-burn-in job. Do you have a minute?
10:39 nigelb We use both the Mailer plugin and the Extended Email plugin.
10:39 kshlm nigelb, Yeah?
10:39 nigelb I *think* we only need the mailer plugin, because we want to notify maintainers when the test fails, correct?
10:39 kshlm The intention is to notify maintainers.
10:40 nigelb and only on failure.
10:40 nigelb Right?
10:40 kshlm Only on failure, doesn't matter how.
10:40 nigelb I'll probably get rid of one plugin and see how it goes.
10:40 nigelb and a few of the manual options as well.
10:40 kshlm It keeps sending updates to maintainers@gluster.org.
10:40 kshlm Don't know which plugin does that.
10:41 nigelb heh, we use the mailer plugin to do that.
10:41 nigelb but we use another plugin to seemingly notify the author of the commit that failed.
10:41 nigelb which I suspect isn't useful anyway, because the failure isn't necessarily the last committer's fault.
10:45 kdhananjay post-factum: there? :)
11:03 nigelb kshlm: regression-test-burn-in is basically the centos regression but run on master every 4 hours or so. And reports failures to the mailing list, correct?
11:03 kshlm Correct.
11:03 nigelb (and doesn't skip if you edit only tests or distaf)
11:03 nigelb okay, so I'm modifying the script so it actually only does that.
11:04 nigelb There's a lot of redundant code in there.
11:04 mchangir joined #gluster-dev
11:10 ashiq joined #gluster-dev
11:11 post-factum kdhananjay: had dinner :)
11:11 post-factum kdhananjay: oh, you are not here :(
11:14 rastar joined #gluster-dev
11:26 Bhaskarakiran joined #gluster-dev
11:34 kkeithley occasional update on the status of the longevity cluster.  In 16 days:
11:35 kkeithley glusterfs fuse bridge has grown from RSZ 59628, VSZ 800044 to RSZ 61656, VSZ 947508
11:36 kkeithley glusterd (doing pretty much nothing) has grown from RSZ 22132, VSZ 607624 to RSZ 22132, VSZ 673160
11:36 misc so memory leak ?
11:37 kkeithley glusterfs shd (also doing pretty much nothing) has grown from RSZ 58928, VSZ 747476 to RSZ 59368, VSZ 813012
11:38 kkeithley glusterfsd (brick, very busy) has grown from RSZ 47452, VSZ 1319908 to RSZ 47704, VSZ 1517544
11:38 post-factum leaks
11:38 post-factum leaks everywhere
11:38 kkeithley yes, leaks
11:39 post-factum https://cdn.meme.am/instances/500x/58757680.jpg
11:40 kkeithley and glusterfs nfs (also busy) has grown from RSZ 98948, VSZ 742524, to RSZ 269008, VSZ 941308
11:40 kkeithley http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/
11:42 kkeithley some memory growth, e.g. in glusterfsd and glusterfs/nfs, can be attributed to caching and is to be expected.
11:42 kkeithley but I don't know why glusterfs/shd is growing
11:44 ndevos lol post-factum
11:45 kkeithley Buzz Memleak, to Infinity and Beyond
11:46 misc ah ah
11:46 kkeithley or To Oomkill and Beyond
11:46 * kkeithley needs to find time again to hunt for memleaks
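
The RSZ/VSZ figures above come from ps; for longevity tracking like this, a tiny standalone C sketch (hypothetical, not part of gluster) can sample the same numbers from /proc/<pid>/statm, whose first two fields are total program size and resident set size in pages:

    #include <stdio.h>
    #include <unistd.h>

    int main (int argc, char *argv[])
    {
            char path[64];
            long size_pages, rss_pages;
            long page_kb = sysconf (_SC_PAGESIZE) / 1024;
            FILE *fp;

            if (argc < 2) {
                    fprintf (stderr, "usage: %s <pid>\n", argv[0]);
                    return 1;
            }
            snprintf (path, sizeof (path), "/proc/%s/statm", argv[1]);
            fp = fopen (path, "r");
            if (fp == NULL) {
                    perror (path);
                    return 1;
            }
            if (fscanf (fp, "%ld %ld", &size_pages, &rss_pages) != 2) {
                    fclose (fp);
                    return 1;
            }
            fclose (fp);
            /* ps reports VSZ and RSS in KB; statm reports pages */
            printf ("VSZ %ld KB, RSS %ld KB\n",
                    size_pages * page_kb, rss_pages * page_kb);
            return 0;
    }
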
11:49 kdhananjay joined #gluster-dev
11:49 ndevos kkeithley: see any reconnects of the shd?
11:49 mchangir is struct iatt::ia_blocks the same as what a stat() would return for the number of data blocks consumed by a file ? i.e. a multiple of 512 ?
11:51 ndevos mchangir: probably, but filesystems could use a different size of blocks
11:52 mchangir yeah, I know the file-system IO block size could be different ... but stat() is stubborn in that sense ... it's always a multiple of 512
11:55 kkeithley ndevos: in glustershd.log on one server (haven't looked at the others) I see a couple of [glusterfsd.c:1355:reincarnate] 0-glusterfsd: Fetching the volume file from server...
11:55 itisravi mchangir: In gluster, posix xlator always rounds it off to ia_size*512. See the manipulation of ia_blocks in iatt_from_stat().
11:55 kkeithley and a lot of [client-rpc-fops.c:2930:client3_3_lookup_cbk] 0-longevity-client-1: remote operation failed. Path: <gfid:e5461654-9c88-47bc-a87c-d75d63ea7901> (e5461654-9c88-47bc-a87c-d75d63ea7901) [No such file or directory]
11:56 kkeithley nothing that says reconnect
11:57 ndevos no idea... but I think there was/is a memleak for each connection that gets initiated and torn down
11:57 mchangir itisravi, I'll take a look ... thanks
11:57 itisravi np :)
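
To see mchangir's point about stat() being "stubborn": on Linux, st_blocks is reported in 512-byte units regardless of the filesystem's preferred I/O size, which is exposed separately as st_blksize. A minimal standalone check:

    #include <stdio.h>
    #include <sys/stat.h>

    int main (int argc, char *argv[])
    {
            struct stat st;

            if (argc < 2 || stat (argv[1], &st) != 0) {
                    perror ("stat");
                    return 1;
            }
            printf ("size      : %lld bytes\n", (long long) st.st_size);
            /* st_blocks counts 512-byte units, even when st_blksize differs */
            printf ("allocated : %lld bytes (%lld blocks of 512)\n",
                    (long long) st.st_blocks * 512, (long long) st.st_blocks);
            printf ("io blksize: %ld bytes\n", (long) st.st_blksize);
            return 0;
    }
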
12:00 kkeithley ndevos: longevity is running 3.8.1.  Do you think that leak is fixed already in 3.8.1?  I do plan to update to 3.8.2 eventually. Or maybe I'll wait for 3.8.3.  (and switch from gnfs to ganesha)
12:03 kkeithley btw, what's the status, if any, of my HA testing in CentOS CI?
12:03 kkeithley ndevos: ^^^
12:03 post-factum kkeithley: oom-noom-noom. i have dedicated ticket for memleaks in shd
12:04 overclk joined #gluster-dev
12:05 ndevos kkeithley: not sure what has been resolved, and what not
12:06 rastar joined #gluster-dev
12:07 ndevos kkeithley: I've been modelling around your HA testing scripts, need to do the ssh-keygen/copy next
12:07 kkeithley and the volume is 2x4 (or 4x2) distribute+replica.  I should probably try shard or disperse at some point
12:08 kkeithley and tiering
12:19 poornimag joined #gluster-dev
12:31 jiffin1 joined #gluster-dev
12:34 post-factum kkeithley: http://i.piccy.info/i9/9b75f96eb060205a544e7468ad5509a8/1471523656/31587/1061736/shd.png
12:34 post-factum kkeithley: http://i.piccy.info/i9/aed9ac23fd1f0e486219e073f92face0/1471523672/33001/1061736/bricks.png
12:35 spalai left #gluster-dev
12:35 post-factum kkeithley: not that much, but shd vsz jumps are interesting
12:36 hagarth joined #gluster-dev
12:38 post-factum kdhananjay++
12:38 glusterbot post-factum: kdhananjay's karma is now 24
12:38 kdhananjay post-factum: so, you're ok with the change?
12:39 post-factum kdhananjay: yep, the code i commented previously looks ok to me
12:39 post-factum kdhananjay: everything else should be reviewed by some other guy or girl
12:39 kdhananjay post-factum: awesome! many thanks for the reviews.
12:39 kdhananjay post-factum++
12:39 glusterbot kdhananjay: post-factum's karma is now 28
12:51 kkeithley post-factum: they're better than they were, but still worrisome.  My network is solid, so I don't think shd should be doing any real work; I think the memory growth is suspicious.
12:52 pranithk1 joined #gluster-dev
12:53 julim joined #gluster-dev
12:53 post-factum kkeithley: for shd big jump on my chart it was reconnection
12:53 post-factum kkeithley: i rebooted another node in the cluster, and shd reconnected to it
12:54 kkeithley post-factum: which version?
12:55 post-factum 3.7.14+extra patches, merged for .15 release
12:55 kkeithley okay
12:58 dlambrig joined #gluster-dev
12:58 nbalacha joined #gluster-dev
12:59 ira joined #gluster-dev
13:03 lpabon joined #gluster-dev
13:18 rastar joined #gluster-dev
13:26 rraja joined #gluster-dev
13:27 shubhendu joined #gluster-dev
13:46 ndevos aravindavk: we really should not need to compile events.c if it is disabled, I can update http://review.gluster.org/15198 with that if you like
13:47 kkeithley nigelb: "recheck netbsd" in gerrit doesn't seem to be triggering netbsd7-regressions in jenkins? Am I doing something wrong?
13:47 aravindavk ndevos: I got your comment. will modify to have empty func in events.h and conditionally disable in Makefile.am
13:48 aravindavk ndevos: *conditionally disable events.c in Makefile.am
13:49 ndevos aravindavk: and please, pass @localstatedir@ in as a -D define, no need for a events.h.in and the unneeded #defines there (move them to the .c)
13:50 ndevos aravindavk: also, the "extern" can be dropped from gf_event too
13:50 aravindavk ndevos: sure, that part is anyways going out with client side events support. Changing Unix domain socket to UDP
13:51 ndevos aravindavk: ok, thanks!
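
A sketch of the -D approach ndevos suggests: the build system passes localstatedir on the compiler command line, so no generated events.h.in is needed. The Makefile.am line and the socket path here are assumptions for illustration, not the actual patch:

    #include <stdio.h>

    /* LOCALSTATEDIR arrives from the build system, e.g. in Makefile.am:
     *     AM_CPPFLAGS += -DLOCALSTATEDIR='"$(localstatedir)"'        */
    #ifndef LOCALSTATEDIR
    #define LOCALSTATEDIR "/var"   /* fallback for a standalone build */
    #endif

    /* adjacent string literals concatenate at compile time */
    static const char *events_sock = LOCALSTATEDIR "/run/gluster/events.sock";

    int main (void)
    {
            printf ("events socket: %s\n", events_sock);
            return 0;
    }
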
13:54 post-factum ndevos: did you check my comments on two similar reviews with "gfapi: Fix IO error caused when there is consecutive graph switches" subject?
13:55 baojg joined #gluster-dev
14:01 ppai joined #gluster-dev
14:02 nigelb kkeithley: It was working fine yesterday.
14:03 nigelb kkeithley: file a bug with the review request link, please
14:07 rafi joined #gluster-dev
14:16 nigelb kkeithley: If you're talking about http://review.gluster.org/#/c/14085/ you actually entered "recheck centos netbsd". That doesn't trigger either of them.
14:16 kkeithley oh? doh.
14:17 kkeithley really, it looks like it at least retriggered centos
14:17 nigelb ooh, because "recheck centos" is a subset of "recheck centos netbsd". I figured it might be an exact match check.
14:18 nigelb Good to know.
14:18 ndevos obnox: do you know if there are bugs for the md-cache related changes that need to be done? dlambrig and I would like to track/follow/test/review/... parts
14:23 obnox ndevos: hi. afaik, there are currently no known bugs (apart from the bugs related to the patches up for review that bring in the new feature/change)
14:23 obnox ndevos: rastar or rjoseph may know better, if poornima is not around
14:25 rastar obnox: ndevos I am not aware of any
14:25 ndevos obnox, rastar, rjoseph: I really need bugs for each single enhancement and bug, otherwise tracking is a real pita
14:26 ndevos rastar: I understood you were aware of this issue, a bug would have helped in fixing it much faster - http://review.gluster.org/15191
14:26 shyam joined #gluster-dev
14:27 obnox ndevos: i don't understand -- i thought you can only put up a patch for review if you have a BZ?
14:28 ndevos obnox: yes, but some bugs are re-used for many changes; that might make sense for some features, but not if they cross different components
14:28 obnox ndevos: oh, that. yeah..
14:29 ndevos obnox: and as maintainer of gfapi and upcall I really need to be able to track things... if it is a bug against md-cache only, I probably won't see it
14:29 ndevos also, caching unneeded things is a bug, fixing that is not a feature and should be backported
14:33 obnox ndevos: dunno. if there are bugs discovered in other components, by the new md-cache code, then these should certainly have separate BZs.
14:33 dlambrig obnox: some background: it appears the md-cache/upcall work could be really helpful to tiering and maybe DHT in general… as it will cut down the LOOKUP traffic. Soooo… myself and others are very keenly interested in tracking this… and helping in any way possible.
14:34 obnox dlambrig: great. yeah it's a generic improvement. and it's very cool that it gets a good amount of attention now!
14:34 rastar ndevos: what poornima mentioned was more granular
14:35 rastar ndevos: a gfapi client might be interested in a particular type of upcall and not interested in others
14:35 dlambrig obnox: I realize work has been underway a while… we just connected the dots recently, so we come late to the party…
14:36 ppai obnox, dlambrig swift workloads should improve too. woot!
14:37 obnox dlambrig: sure. 'better late than never'... :-)
14:38 ndevos rastar: sure, that is the registration API we need at some point, but that is much more work
14:39 ndevos rastar: we need a design for the registration API and to discuss what applications would like to get notifications on; we won't have that before 3.9
14:39 rastar ndevos: yes, but you are right, there should be a bz filed for all the bugs we can think of
14:39 obnox rastar: ndevos: yeah
14:39 rastar ndevos: that would tell the state of the implementation and overall stability
14:39 ndevos rastar: yes please, and maybe have a tracker bz that we can use to see all related BZs
14:40 rastar ndevos: I think poornima already has a tracker bz and a few bugs attached to it
14:40 rastar ndevos: but I haven't kept track lately..she is the only one who can answer authoritatively
14:40 ndevos rastar: I looked on bugs.cloud.gluster.org, but could not find any :-/
14:41 ppai fwiw, a few TODOs are listed by poornima here https://bugzilla.redhat.com/show_bug.cgi?id=1211863#c73
14:41 glusterbot Bug 1211863: medium, medium, ---, pgurusid, POST , RFE: Support in md-cache to use upcall notifications to invalidate its cache
14:42 ndevos ppai: ah, nice find! most of those should probably be separate BZs though
14:43 obnox sure
14:46 nishanth joined #gluster-dev
14:56 ankitraj joined #gluster-dev
14:59 atalur joined #gluster-dev
15:22 nishanth joined #gluster-dev
15:35 spalai joined #gluster-dev
15:44 rastar joined #gluster-dev
15:56 rafi joined #gluster-dev
16:10 poornimag joined #gluster-dev
17:33 mchangir joined #gluster-dev
17:37 post-factum joined #gluster-dev
17:38 skoduri joined #gluster-dev
17:44 ashiq joined #gluster-dev
18:04 jiffin joined #gluster-dev
18:07 rafi joined #gluster-dev
18:21 jiffin joined #gluster-dev
18:59 lpabon joined #gluster-dev
19:14 julim joined #gluster-dev
19:22 JoeJulian Hrm, there's no maintainer for shard?
19:41 hagarth joined #gluster-dev
20:01 kkeithley off hand I'd say kdhananjay is the shard maintainer
20:39 rastar joined #gluster-dev
20:39 kkeithley ndevos, nixpanic: re: http://review.gluster.org/14085, done "correctly" (IOW without the pragmas in the rpc/xdr .h files "leaking" into the compile of just about every one of the sources) means lots of warnings for unused variables.
20:40 kkeithley as if the patch isn't big already, now it's going to get even bigger.
20:42 post-factum kkeithley: ...cough...cmake...cough...
20:42 kkeithley it's an issue because, e.g. Fedora builds do -Werror=uninitialized
20:43 kkeithley it's not an autoconf vs. cmake thing
20:43 post-factum nvm, just saw random crocodile in the window
20:44 * kkeithley thought post-factum was in Ukraine, not Africa or South America
20:44 * post-factum was surprised too
20:45 kkeithley "crocodile in the window" must be a Ukrainian euphemism. Like seeing pink elephants when you're drunk
20:46 kkeithley or something
20:46 post-factum kkeithley: correct, but it has no direct translation, as it plays on the word "crocodile" and how it sounds in Ukrainian
20:47 * kkeithley nods
21:07 kkeithley it may be an issue because, e.g., Fedora builds may use -Werror=unused-variable
21:08 kkeithley more weasel words
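
On those unused-variable warnings, two standard GCC mechanisms can silence them per-site instead of via a pragma that leaks out of a shared rpc/xdr header; this is only a sketch of the technique, not the actual 14085 change:

    #include <stdio.h>

    /* 1) per-declaration attribute */
    static int xdr_scratch __attribute__ ((unused));

    /* 2) scoped pragma: push/pop keeps the suppression local to this
     *    region, so it cannot leak into every source that includes the
     *    header */
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-variable"
    static int generated_helper;   /* e.g. something rpcgen emits */
    #pragma GCC diagnostic pop

    int main (void)
    {
            printf ("builds clean with -Werror=unused-variable\n");
            return 0;
    }
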
21:08 mchangir joined #gluster-dev
21:15 ankitraj joined #gluster-dev
22:11 hagarth joined #gluster-dev
22:19 julim joined #gluster-dev
