
IRC log for #gluster-dev, 2016-01-11


All times shown according to UTC.

Time Nick Message
00:20 sankarshan_ joined #gluster-dev
00:47 zhangjn joined #gluster-dev
00:53 zhangjn_ joined #gluster-dev
01:21 EinstCrazy joined #gluster-dev
02:07 nishanth joined #gluster-dev
02:22 kanagaraj joined #gluster-dev
02:47 zhangjn joined #gluster-dev
02:49 ilbot3 joined #gluster-dev
02:49 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:53 kanagaraj joined #gluster-dev
02:56 zhangjn joined #gluster-dev
03:03 zhangjn joined #gluster-dev
03:08 zhangjn joined #gluster-dev
03:16 kanagaraj joined #gluster-dev
03:17 zhangjn joined #gluster-dev
03:30 kanagaraj joined #gluster-dev
03:42 kanagaraj_ joined #gluster-dev
03:43 overclk joined #gluster-dev
03:47 kanagaraj__ joined #gluster-dev
03:58 jiffin joined #gluster-dev
04:01 kanagaraj joined #gluster-dev
04:03 itisravi joined #gluster-dev
04:04 atinm joined #gluster-dev
04:15 nbalacha joined #gluster-dev
04:18 kanagaraj joined #gluster-dev
04:21 gem_ joined #gluster-dev
04:34 rafi joined #gluster-dev
04:36 nbalacha joined #gluster-dev
04:40 sakshi joined #gluster-dev
04:47 kotreshhr joined #gluster-dev
05:07 apandey joined #gluster-dev
05:07 EinstCrazy joined #gluster-dev
05:09 kdhananjay joined #gluster-dev
05:13 poornimag joined #gluster-dev
05:15 ndarshan joined #gluster-dev
05:17 pppp joined #gluster-dev
05:28 zhangjn joined #gluster-dev
05:30 aravindavk joined #gluster-dev
05:32 pranithk joined #gluster-dev
05:36 nishanth joined #gluster-dev
05:36 Apeksha joined #gluster-dev
05:42 deepakcs joined #gluster-dev
05:42 vmallika joined #gluster-dev
05:44 kanagaraj joined #gluster-dev
05:44 skoduri joined #gluster-dev
05:47 Bhaskarakiran joined #gluster-dev
05:50 vimal joined #gluster-dev
05:53 Manikandan joined #gluster-dev
05:55 kanagaraj_ joined #gluster-dev
05:59 asengupt joined #gluster-dev
06:05 ggarg joined #gluster-dev
06:05 kanagaraj joined #gluster-dev
06:08 hgowtham joined #gluster-dev
06:12 kshlm joined #gluster-dev
06:17 kanagaraj_ joined #gluster-dev
06:18 shubhendu__ joined #gluster-dev
06:21 ashiq_ joined #gluster-dev
06:22 nbalacha When can a mainline BZ be closed?
06:25 kanagaraj joined #gluster-dev
06:30 atalur joined #gluster-dev
06:32 Humble joined #gluster-dev
06:35 kdhananjay joined #gluster-dev
06:37 spalai joined #gluster-dev
06:38 EinstCrazy joined #gluster-dev
06:41 zhangjn joined #gluster-dev
06:49 hgowtham joined #gluster-dev
06:57 EinstCrazy joined #gluster-dev
07:02 Saravana_ joined #gluster-dev
07:02 EinstCrazy joined #gluster-dev
07:05 aravindavk joined #gluster-dev
07:06 EinstCrazy joined #gluster-dev
07:07 EinstCrazy joined #gluster-dev
07:09 EinstCrazy joined #gluster-dev
07:11 EinstCrazy joined #gluster-dev
07:12 gem joined #gluster-dev
07:13 EinstCrazy joined #gluster-dev
07:15 EinstCrazy joined #gluster-dev
07:22 EinstCrazy joined #gluster-dev
07:24 kanagaraj joined #gluster-dev
07:37 ppai joined #gluster-dev
07:37 kotreshhr joined #gluster-dev
07:38 kdhananjay1 joined #gluster-dev
07:39 kdhananjay1 joined #gluster-dev
07:41 aravindavk joined #gluster-dev
08:09 sankarshan_ joined #gluster-dev
08:19 EinstCrazy joined #gluster-dev
08:22 itisravi joined #gluster-dev
08:32 josferna joined #gluster-dev
08:45 nbalacha ndevos, when can a mainline BZ be closed?
08:46 ndevos nbalacha: at the moment we close bugs only when their commit-id has been tagged in a release
08:47 ndevos nbalacha: that means, for current mainline BZs, they get closed when 3.8 is released
08:48 nbalacha ndevos, how do you determine whether a commit-id has made it to a release?
08:48 ndevos nbalacha: however naga/satish would like to see them closed before, maybe when their backport has been included in a release
08:48 nbalacha ndevos, ok.
08:49 nbalacha ndevos, is that the procedure we will be following?
08:49 ndevos nbalacha: we check what BUG: tags commit messages have, and close those with a note
08:49 nbalacha ndevos, so those would mainly be the BZs filed on the 3.7.x releases, not the mainline ones
08:50 nbalacha ndevos, is that right?
08:50 ndevos nbalacha: the procedure has not changed, we close them only when commits are in a release
08:50 msvbhat joined #gluster-dev
08:50 ndevos nbalacha: correct, the mainline BZs will stay open for a while, their backports can be closed when a stable release is done
08:50 nbalacha ndevos, so the current mainline BZs should stay open until 3.8?
08:50 nbalacha ndevos, ok. thanks
08:50 ndevos nbalacha: that is the current procedure, yes
08:51 ndevos nbalacha: fwiw we use https://github.com/gluster/release-tools for closing and such
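The check ndevos describes, deciding whether a commit has made it into a tagged release, can be sketched with `git tag --contains`. The helper names and the tag pattern below are illustrative assumptions, not the actual release-tools code:

```python
import re
import subprocess

def tags_containing(commit, repo="."):
    # `git tag --contains <sha>` lists every tag whose history includes the commit.
    out = subprocess.run(
        ["git", "-C", repo, "tag", "--contains", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()

def release_tags(tags):
    # Treat a commit as released once a final (non-rc, non-beta) tag such as
    # v3.7.6 contains it; the vX.Y.Z pattern is an assumption about naming.
    return sorted(t for t in tags if re.fullmatch(r"v\d+\.\d+\.\d+", t))
```

A mainline BZ could then be closed as soon as `release_tags(tags_containing(sha))` is non-empty for its commit.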
08:51 nbalacha ndevos, ok
08:52 apandey joined #gluster-dev
08:53 ndevos nbalacha: in case you speak to naga/satish about it, ask them to send a proposal to change the procedure :)
08:53 nbalacha ndevos, ok
08:56 rraja joined #gluster-dev
08:57 sakshi joined #gluster-dev
09:04 aravindavk joined #gluster-dev
09:07 sakshi joined #gluster-dev
09:09 zhangjn joined #gluster-dev
09:09 EinstCrazy joined #gluster-dev
09:11 EinstCrazy joined #gluster-dev
09:11 EinstCrazy joined #gluster-dev
09:12 ndevos kshlm: the CentOS CI can now use the Gluster Gerrit for triggers, I guess you needed that to get started?
09:13 EinstCrazy joined #gluster-dev
09:13 EinstCrazy joined #gluster-dev
09:16 EinstCrazy joined #gluster-dev
09:17 Saravana_ joined #gluster-dev
09:18 kotreshhr joined #gluster-dev
09:24 EinstCra_ joined #gluster-dev
09:25 karthik_us joined #gluster-dev
09:28 poornimag joined #gluster-dev
09:29 obnox rastar: do you remember why you added the 'sleep 5' to the S29CTDBstartup.sh script?
09:30 EinstCrazy joined #gluster-dev
09:31 atinm ndevos, my mailbox is getting flooded because of the mass removal ;)
09:32 ndevos atinm: hah, yeah, I guess mine as well, it was only a little over 1000 bugs...
09:32 atinm ndevos, 'only 1000' ;)
09:33 obnox ndevos: what is the reason for that?
09:33 ndevos obnox: removing the old gluster-bugs@redhat.com list from CC, it has long been replaced by bugs@gluster.org
09:36 obnox oh, that makes a lot of sense.
09:36 rastar obnox: From what I remember, it was added when init scripts could not guarantee that mount
09:36 obnox ndevos: are new bugs not getting that cc ?
09:36 rastar obnox: checking logs for more details
09:37 ndevos obnox: not all existing ones had it; new ones should get it all right
09:39 obnox ndevos: ok. asking, because apparently also a few of my bugs were affected, which were just a few weeks old
09:40 obnox rastar: apparently the sleep 5 was sneaked in with ecc475d0a517d7f58014bed93fc0957b3369d1b7 ("Move smb hooks to right place.")
09:40 obnox rastar: ;-)
09:40 ndevos obnox: only recently bugs@gluster.org was set as the default for *all* components, I've now replaced all gluster-bugs@redhat.com CC's with bugs@gluster.org
09:41 obnox ok
09:41 ndevos obnox: but, maybe I could have tried to skip bugs that had both addresses... not sure how to put that in the 'bugzilla' command, and it's too late for that now
09:42 obnox rastar: the original location was this:
09:42 obnox mkdir -p $CTDB_MNT ; sleep 5 ; mount -t glusterfs `hostname`:$VOL "$CTDB_MNT" ...
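A fixed `sleep 5` only papers over the race between the init scripts and the mount; a bounded retry loop keeps working when the mount takes longer than expected. The hook itself is a shell script; this Python sketch just illustrates the pattern, and all names in it are made up:

```python
import time

def mount_with_retry(do_mount, attempts=5, delay=2.0, sleep=time.sleep):
    # do_mount is a callable returning True on success, e.g. a wrapper that
    # runs `mount -t glusterfs ...` and checks its exit status.
    for _ in range(attempts):
        if do_mount():
            return True
        sleep(delay)  # back off before the next try, instead of one blind sleep
    return False
```

In the hook script itself the same idea would be a loop that retries `mount` and tests `$?`, rather than sleeping once unconditionally before a single attempt.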
09:44 obnox ndevos: no worries, just trying to understand what's going on
09:45 ndevos obnox: if you think it's useful, I can send a "whats up?!" email to the -devel list?
09:47 rastar obnox: pm
09:47 obnox ndevos: If you get more questions like mine, it might be a good proactive thing. otherwise, don't bother, I'd say
09:47 ndevos obnox: ok, will do :)
09:48 * ndevos drops off for a bit, will be back later
09:48 obnox rastar: yeah?
09:48 obnox :-)
09:55 aravindavk joined #gluster-dev
09:57 zhangjn joined #gluster-dev
10:01 tigert joined #gluster-dev
10:03 ggarg joined #gluster-dev
10:04 karthik_us joined #gluster-dev
10:23 zhangjn joined #gluster-dev
10:25 zhangjn joined #gluster-dev
10:29 poornimag joined #gluster-dev
10:30 skoduri ndevos, rafi updated the rca about gluster-nfs/tiering issue - https://bugzilla.redhat.com/show_bug.cgi?id=1297311#c1
10:30 glusterbot Bug 1297311: unspecified, unspecified, ---, bugs, NEW , Attach tier + nfs : Creates fail with invalid argument errors
10:30 skoduri ndevos, please take a look
10:36 Bhaskarakiran joined #gluster-dev
10:45 ndevos skoduri, rafi: is there a reason why we can't create the inode_ctx for parent xlators?
10:46 * ndevos needs to look at the code to see how its currently done...
10:46 josferna joined #gluster-dev
10:47 kdhananjay joined #gluster-dev
11:02 Saravana_ joined #gluster-dev
11:02 zhangjn joined #gluster-dev
11:03 spalai left #gluster-dev
11:07 kotreshhr joined #gluster-dev
11:15 ggarg joined #gluster-dev
11:15 skoduri ndevos, do you mean when dht creates the inode, it should create the inode_ctx of its parent xlators as well?
11:16 rafi ndevos: it will be much cleaner if we don't do that
11:17 rafi ndevos: since that lookup was triggered from the dht xlator, we have to finish in dht itself
11:17 rafi ndevos: travelling back up to the master xlator won't be nice
11:26 rafi1 joined #gluster-dev
11:31 aravindavk joined #gluster-dev
11:31 ndevos skoduri: yes, I would expect that the inode and all of its attributes are valid everywhere?
11:32 ndevos rafi1: maybe not, I don't think I've ever looked at how the inode ctx is created/populated
11:33 rafi1 ndevos: inode_ctx are usually created during first lookup
11:34 ndevos rafi1: yeah, I thought it was always created, but from your comment I understand it only gets created for the current xlator and its children?
11:34 rafi1 ndevos: here, the lookup started from dht, so finished in dht itself
11:34 rafi1 ndevos: yes
11:35 rafi1 ndevos: and also we are linking the inode to inode table from dht
11:35 ndevos rafi1: right, but how is the ctx invalid (it is NULL, or what?) from the nfs xlator?
11:36 rafi1 ndevos: yes
11:36 rafi1 ndevos: so any operation on that inode will proceed without any lookup
11:37 rafi1 ndevos: so any xlator above dht that expects to have an inode_ctx will fail, saying that the inode_ctx is not present
11:37 ndevos rafi1: sure, but isn't it a nicer solution to have the whole inode (+ctx) be valid for all xlators?
11:38 rafi1 ndevos: i thought of treating such an inode as needing a lookup during resolution, since no lookup went through the nfs xlator
11:39 ndevos rafi1: hmm, yeah, true, maybe some of the parent xlators would set something in the ctx, and that would then be missing later on too
11:45 ndevos rafi1, jiffin: do you remember the last time there was an incomplete inode? maybe we should introduce a flag like inode->valid where we can set bits for each attribute?
11:46 rafi1 ndevos: in fuse and libgfapi , we have something like need lookup in ctx
11:47 ndevos rafi1: yeah
11:47 rafi1 ndevos: in this case we don't even have inode_ctx
11:47 ndevos rafi1, jiffin: oh, maybe last time it was an incomplete iatt structure...
11:47 ndevos rafi1: indeed
11:55 ndevos rafi1: hmm, there is something like nfs3_call_state_t->hardresolved, that could be used for checking instead of ctx != NULL
11:58 ndevos rafi1: could you add a note in the BZ where the error exactly comes from? isn't svc_access() part of snapshot?
11:58 rafi joined #gluster-dev
11:59 * rafi is checking
11:59 ndevos svc = snap-view-client ?
11:59 rafi ndevos: here, full inode table is fresh after a process restart
11:59 rafi ndevos: yes
11:59 rafi ndevos: then dht created some inodes without any of the upper xlators knowing
12:01 ndevos rafi: yes, but inode_ctx_get() only gets called in 4 (or so) places in the nfs-server, and all seem to be doing something sane (except maybe in nfs3_getattr_resume)
12:07 rafi ndevos: ya
12:07 EinstCrazy joined #gluster-dev
12:07 ndevos obnox: freebsd smoke fails for every patch currently :-/ http://review.gluster.org/13208 should fix that
12:07 rafi ndevos: nfs_fop_lookup_cbk will create an inode_ctx
12:07 rafi ndevos: which means every successful lookup triggered will have an inode_ctx
12:08 rafi ndevos: but an inode linked from a lower layer might not have the inode_ctx
12:08 rafi ndevos: in that case the inode is not yet ready to use
12:09 ndevos rafi: yes, I understand that, but in what code path (for nfs) does it happen? I don't know what fop causes svc_access() to be called
12:10 ndevos rafi: and, does the problem happen when snapshot is not enabled?
12:10 rafi ndevos: i'm not sure about how much nfs uses inode_ctx
12:10 rafi ndevos: no
12:10 rafi ndevos: but there will be code path that can hit
12:11 rafi ndevos: * I'm not sure about how much nfs uses inode_ctx
12:11 obnox ndevos: ok, thx. not retriggering any more ;-)
12:11 ndevos rafi: maybe, but the inode_ctx_get() calls in nfs all seem to have appropriate error handlers, I think
12:13 rafi ndevos: that I agree, the fix which we were talking about is a resolution to fix inode_ctx for everyone
12:15 ndevos rafi: that is a great cause, but when I look at svc_inode_ctx_get() I think we need to fix that regardless
12:16 msvbhat win3
12:16 msvbhat ignore that please
12:17 ndevos is that win3.11?
12:18 ndevos rafi: would it not be basic for anything that calls inode_ctx_get() to have a failure handler and setup the ctx if it does not exist?
12:18 rafi ndevos: we cannot blindly set up inode ctx, it needs a lookup
12:19 ndevos rafi: not sure, does that not depend on the xlator?
12:19 rafi ndevos: for example for svc, we use inode_ctx to store the inode type,
12:19 rafi ndevos: whether the inode is real or virtual
12:20 rafi ndevos: depends on the xlator itself, but it might require some values to construct
12:21 ndevos rafi: sure, but if it depends on the xlator, it will be difficult to get a generic solution
12:33 rjoseph ndevos: I think most of the xlators create the ctx in the cbk of the first lookup. Because by that time they have most of the information needed for ctx creation.
12:35 rjoseph ndevos: I think it would be much cleaner if explicit lookup is initiated from the master xlator
12:36 ppai joined #gluster-dev
12:44 ndevos rjoseph: yes, a lookup from the master xlator would probably be cleanest
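The situation discussed above can be modeled in miniature: a lookup populates a per-xlator inode ctx only from the xlator it starts at downwards, so a lookup initiated by dht leaves the parent xlators (nfs, snapview-client) without a ctx. This is a toy Python model of that behavior, not glusterfs code; the stack names and dict-based ctx are stand-ins for the real inode_ctx_get/inode_ctx_set machinery:

```python
class Inode:
    def __init__(self):
        # In glusterfs each xlator gets a private ctx slot on the inode;
        # a dict keyed by xlator name stands in for that here.
        self.ctx = {}

def lookup(stack, start, inode):
    # A lookup only travels *down* the stack: the xlator it starts at and its
    # children fill in their ctx in the callback path; parents never see it.
    for xl in stack[stack.index(start):]:
        inode.ctx[xl] = {"looked_up": True}

STACK = ["nfs", "snapview-client", "dht", "client"]

inode = Inode()
lookup(STACK, "dht", inode)   # a lookup dht triggers internally
missing = [xl for xl in STACK if xl not in inode.ctx]
# missing is ["nfs", "snapview-client"]: exactly the xlators that later
# complain the inode_ctx is not present

lookup(STACK, "nfs", inode)   # explicit lookup from the master xlator
# now every xlator on the stack has its ctx populated
```

This is why the "explicit lookup from the master xlator" conclusion is the generic fix: it is the only starting point from which every xlator gets its callback.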
12:52 shubhendu joined #gluster-dev
12:53 ilbot3 joined #gluster-dev
12:53 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
12:56 ira joined #gluster-dev
13:05 zhangjn joined #gluster-dev
13:12 kotreshhr left #gluster-dev
13:14 bfoster joined #gluster-dev
13:16 poornimag joined #gluster-dev
13:24 Manikandan rastar++, ndevos++
13:24 glusterbot Manikandan: rastar's karma is now 20
13:24 glusterbot Manikandan: ndevos's karma is now 230
13:27 csim wow, only 2h to find how to access a netbsd slave \o/
13:29 ndevos csim: uhm, isn't it just the same for all slaves?
13:30 rjoseph thanks ndevos. rafi, jiffin, skoduri: Can any of you send the patch for BZ 1297311?
13:32 csim ndevos: the /root/.ssh/authorized_keys was flagged immutable, took me a while to find the issue
13:32 ndevos csim: ah, that's a nice one :)
13:33 csim also, there is a problem on some initscript
13:38 kanagaraj joined #gluster-dev
14:02 skoduri joined #gluster-dev
14:04 atinm joined #gluster-dev
14:06 kanagaraj_ joined #gluster-dev
14:09 shyam joined #gluster-dev
14:16 kanagaraj joined #gluster-dev
14:26 ggarg joined #gluster-dev
14:28 kdhananjay joined #gluster-dev
14:29 vmallika joined #gluster-dev
14:34 rafi rjoseph: sure
14:42 zhangjn joined #gluster-dev
14:44 kanagaraj joined #gluster-dev
15:14 Ethical2ak joined #gluster-dev
15:15 hagarth joined #gluster-dev
15:20 shaunm joined #gluster-dev
15:32 overclk joined #gluster-dev
15:32 kshlm joined #gluster-dev
15:36 wushudoin joined #gluster-dev
15:36 kotreshhr joined #gluster-dev
16:20 dlambrig joined #gluster-dev
16:23 skoduri joined #gluster-dev
17:02 Manikandan joined #gluster-dev
17:19 jiffin joined #gluster-dev
18:09 nishanth joined #gluster-dev
18:53 ashiq_ joined #gluster-dev
19:45 lpabon joined #gluster-dev
20:43 lpabon_ joined #gluster-dev
21:24 obnox ndevos: reading your latest mail, what is the policy on reviewing: +2 is done by maintainers and +1 by 'mere mortals'? is that a soft or a hard rule?
21:53 lkoranda joined #gluster-dev
22:46 ndevos obnox: we expect maintainers of components to do +2, others can do so as well, but hardly any non-maintainers do it
22:47 ndevos even maintainers reviewing other components mostly do only +1's
23:00 obnox ndevos: ok. 'others can do so as well' is still vague. it seems that no-one except for the maintainers of the component in question is _encouraged_ to give +2? or else it is of no more relevance than that person giving +1 ? :-)
23:00 obnox ndevos: not asking for strict rules where none are needed, just trying to understand
