
IRC log for #gluster-dev, 2018-01-08

| Channels | #gluster-dev index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:13 nh2[m] joined #gluster-dev
00:23 susant joined #gluster-dev
01:10 msvbhat joined #gluster-dev
01:51 Shu6h3ndu joined #gluster-dev
03:04 ilbot3 joined #gluster-dev
03:04 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:06 atinm_ joined #gluster-dev
03:13 nbalacha joined #gluster-dev
03:16 poornima joined #gluster-dev
03:21 gyadav joined #gluster-dev
03:53 itisravi joined #gluster-dev
03:58 itisravi joined #gluster-dev
04:02 psony|afk joined #gluster-dev
04:14 aravindavk joined #gluster-dev
04:50 ppai joined #gluster-dev
04:54 Girjesh joined #gluster-dev
05:00 skumar joined #gluster-dev
05:01 jiffin joined #gluster-dev
05:06 kmohanan joined #gluster-dev
05:23 susant joined #gluster-dev
05:24 atinm_ joined #gluster-dev
05:25 kdhananjay joined #gluster-dev
05:26 amarts joined #gluster-dev
05:27 ndarshan joined #gluster-dev
05:28 varshar joined #gluster-dev
05:30 varshar joined #gluster-dev
05:32 ppai joined #gluster-dev
05:33 gobinda joined #gluster-dev
05:34 skumar_ joined #gluster-dev
05:34 Shu6h3ndu joined #gluster-dev
05:35 sunnyk joined #gluster-dev
05:38 atinm ppai, kshlm : have we thought about how to handle VOLOPT_FLAG_FORCE flag from VME table in GD2 integration?
05:39 ppai atinm, No. I'll have to check what that does.
05:39 atinm ppai, kshlm : for ex - I was trying to update bit-rot xlator options and came across the scrubber option which has this special flag
05:41 itisravi__ joined #gluster-dev
05:41 karthik_us joined #gluster-dev
05:42 atinm ppai, also for this scrubber option, the key is set as scrub but in the xlator it's defined as scrubber.. so we can't change this to scrub, as it would break backward compatibility?
05:42 kshlm atinm, For now, add the flag when migrating.
05:43 kshlm We haven't yet handled the flag in volume set.
05:43 kmohanan joined #gluster-dev
05:44 kdhananjay joined #gluster-dev
05:45 itisravi joined #gluster-dev
05:45 kdhananjay joined #gluster-dev
05:50 mchangir joined #gluster-dev
05:51 prasanth joined #gluster-dev
05:56 kotreshhr joined #gluster-dev
05:58 susant left #gluster-dev
05:59 susant joined #gluster-dev
06:08 hgowtham joined #gluster-dev
06:09 pkalever joined #gluster-dev
06:11 rafi joined #gluster-dev
06:18 lxbsz joined #gluster-dev
06:25 prasanth joined #gluster-dev
06:28 pranithk1 joined #gluster-dev
06:29 atinm amarts, ppai , kshlm : As discussed earlier, I have moved most of the mvp-1 issues to mvp-2 in GD2 as we're already overdue on mvp-1 by a few days
06:30 atinm kshlm, can we have a dev release by this week?
06:30 amarts atinm, ack! Good thing is to have an alpha/beta release by next target date
06:30 kshlm atinm, Sure.
06:31 amarts so community can start giving feedback
06:31 atinm kshlm, ^^
06:32 atinm I think we can target alpha this month and beta by 14th Feb
06:33 amarts that would be great
06:34 sanoj joined #gluster-dev
06:35 pranithk1 atinm: Is anyone investigating how https://review.gluster.org/19096 passed regression without "dict: fix VALIDATE_DATA_AND_LOG call"
06:35 atinm pranithk1, that's not surprising, we don't retrigger regressions on rebases
06:36 amarts pranithk1, yes, at that time the patch causing the error was not merged
06:36 atinm pranithk1, so the patch which had introduced some new references to this call already passed regression
06:37 atinm pranithk1, so when it got merged, it just got rebased on top of Nithya's changes
06:37 amarts pranithk1, there were 2 patches with same parent, and both changed dict.c
06:37 pranithk1 amarts: atinm: I see. Thanks!
06:37 atinm pranithk1, so even though we changed the gerrit submit type, we can't avoid these types of problems
06:37 amarts both passed regression and smoke together, and they didn't have any 'merge conflicts'
06:37 atinm pranithk1, there's a cost associated with it
06:38 pranithk1 atinm: yeah... :-/
06:38 amarts i guess we can prevent it by having a separate trigger for 'submit' which can do a final build (not other tests) before submitting
06:38 amarts nigelb, ^^
06:38 amarts will add it to automation document
06:39 nigelb no, don't.
06:39 nigelb We can't do that :)
06:39 amarts nigelb, ok :p
06:39 nigelb The only way to do that is to have something like zuul merging the patches.
06:39 nigelb that's for consideration once we have chunked regressions in production.
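The failure mode amarts and atinm describe above — two patches off the same parent, each passing regression on its own, breaking only after both land — can be sketched in a few lines. This is a hypothetical model, not Gluster's actual CI; the file names and contents are invented for illustration:

```python
# Hypothetical model of a "semantic merge conflict": two patches that each
# pass regression against the shared parent, yet break when combined.
# A "tree" is just a dict of file name -> source text.

base = {
    "lib.py":  "def log_entry(msg):\n    return 'LOG: ' + msg\n",
    "main.py": "result = log_entry('hello')\n",
}

def patch_a(tree):
    # Patch A renames log_entry -> gf_log_entry and updates its own caller.
    t = dict(tree)
    t["lib.py"] = "def gf_log_entry(msg):\n    return 'LOG: ' + msg\n"
    t["main.py"] = "result = gf_log_entry('hello')\n"
    return t

def patch_b(tree):
    # Patch B adds a new caller of the *old* name, in a file patch A never
    # touches -- so git sees no textual conflict between the two.
    t = dict(tree)
    t["extra.py"] = "banner = log_entry('startup')\n"
    return t

def regression(tree):
    # "Run regression": execute every file in one shared namespace.
    ns = {}
    try:
        for name in ("lib.py", "main.py", "extra.py"):
            if name in tree:
                exec(tree[name], ns)
        return True
    except NameError:
        return False

print(regression(patch_a(base)))           # True  -- A passes alone
print(regression(patch_b(base)))           # True  -- B passes alone
print(regression(patch_b(patch_a(base))))  # False -- rebased combination breaks
```

This is exactly the gap nigelb's zuul-style gating would close: it tests the candidate merge result, not just each patch against its own parent.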
06:43 ppai kshlm, can you review PR #484 ?
06:44 sanju joined #gluster-dev
06:52 skumar__ joined #gluster-dev
06:57 xavih joined #gluster-dev
06:59 ppai kshlm++
06:59 glusterbot ppai: kshlm's karma is now 155
07:14 squarebracket joined #gluster-dev
07:16 nbalacha does anybody know why in features/locks/src/inodelk.c, we use pl_inode_lock_t->lk_owner instead of pl_inode_lock_t->user_flock.l_owner for lock owner comparison?
07:17 kotreshhr joined #gluster-dev
07:18 decayofmind joined #gluster-dev
07:19 Saravanakmr joined #gluster-dev
07:22 skumar_ joined #gluster-dev
07:37 skumar__ joined #gluster-dev
07:48 Acinonyx joined #gluster-dev
08:03 kshlm joined #gluster-dev
08:17 nbalacha kdhananjay, got a minute?
08:17 kdhananjay nbalacha: yeah, tell me
08:19 nbalacha kdhananjay, regarding the locks translator
08:19 nbalacha kdhananjay, in features/locks/src/inodelk.c, why do we use pl_inode_lock_t->lk_owner instead of pl_inode_lock_t->user_flock.l_owner for lock owner comparison?
08:19 kkeithley joined #gluster-dev
08:20 kdhananjay nbalacha: checking
08:20 nbalacha kdhananjay, unless I am missing something, this means I cannot set a lk_owner if I use syncop_inodelk
08:20 kdhananjay nbalacha: line number/function?
08:20 nbalacha kdhananjay, same_inodelk_owner
08:23 fam_away joined #gluster-dev
08:24 kdhananjay nbalacha: checking..
08:35 nbalacha kdhananjay, is flock only for posix_locks?
08:37 nbalacha kdhananjay,  but : syncop_inodelk (xlator_t *subvol, const char *volume, loc_t *loc, int32_t cmd,
08:37 nbalacha struct gf_flock *lock, dict_t *xdata_in, dict_t **xdata_out)
08:43 sanju joined #gluster-dev
08:50 kdhananjay nbalacha: i think so, sorry i had to go into a meeting..
08:50 atinm csaba, tests/bugs/fuse/bug-858215.t - every time I run this test locally, it's failing consistently for me
08:50 sanoj joined #gluster-dev
08:51 kdhananjay nbalacha: i don't remember all the code. it's been a long time since i looked at locks last. give me a few min, i will get back
08:51 nbalacha kdhananjay, ok
09:00 kotreshhr joined #gluster-dev
09:00 amarts joined #gluster-dev
09:03 Vishnu__ joined #gluster-dev
09:11 voidm joined #gluster-dev
09:14 rastar joined #gluster-dev
09:18 mchangir_ joined #gluster-dev
09:23 mchangir__ joined #gluster-dev
09:24 kdhananjay nbalacha: syncop_create_frame()
09:24 kdhananjay nbalacha: that seems to be initialising frame->root->lk_owner right?
09:24 nbalacha kdhananjay, the problem here is I do not have access to the frame via the syncop call
09:25 kdhananjay nbalacha: why would you want access to it?
09:25 nbalacha I need to set the lk_owner to prevent conflicts
09:25 nbalacha in my case, I am taking inodelk on hardlinks
09:25 nbalacha and they are all getting granted
09:26 kdhananjay nbalacha: hmm thinking..
09:27 nbalacha kdhananjay, ideally, if I have set a value in the flock being passed to the inodelk call, that is the lk_owner that the call should use
09:27 kdhananjay nbalacha: no but the assignment happens further down in the stack
09:27 kdhananjay nbalacha: in protocol/client and protocol/server
09:28 nbalacha kdhananjay, why do we do that if the flock structure already has the lk_owner
09:28 nbalacha it is confusing
09:28 kdhananjay nbalacha: im not so sure about that. pranithk1 would you know the history? ^^
09:29 nbalacha kdhananjay, and if we are using frame->root->lk_owner, the syncop calls do not provide access to the frame
09:29 pranithk1 nbalacha: I am not sure why it is done that way either. It was like that by the time I came here.
09:30 kdhananjay nbalacha: for that you could use synctask_new()?
09:30 kdhananjay nbalacha: put all your code in a synctask?
09:31 kdhananjay nbalacha: synctask takes frame as parameter
09:31 kdhananjay nbalacha: so that will be in your control
09:35 nbalacha kdhananjay, ok, but that is not a good way
09:35 kdhananjay nbalacha: why? :)
09:35 nbalacha kdhananjay, poor api design
09:36 nbalacha kdhananjay, also, unless someone explicitly sets the frame->root->lk_owner, there is no guarantee the frame will not be reused
09:36 nbalacha and will not conflict
09:36 nbalacha nbalacha, a dev now has to know that we ignore the lk_owner in some cases
09:37 nbalacha and that it is taken from the frame->root in some cases. Not all APIs provide access to the frame.
09:37 nbalacha this is not a good way to do things
09:38 nbalacha if the flock struct already has a lk_owner member, why not just take it from there?
09:42 msvbhat joined #gluster-dev
09:44 voidm joined #gluster-dev
09:44 kdhananjay nbalacha: hmm maybe we should do a git-blame to figure out the rationale behind the way it is assigned today
09:46 nbalacha kdhananjay, we should figure this out and make it consistent
09:47 kdhananjay nbalacha: got it.
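The API problem nbalacha describes above can be modeled in a few lines of Python (a hypothetical sketch — the real code is C in features/locks/src/inodelk.c, and these types are simplified stand-ins): the owner comparison consults only the frame-derived lk_owner, so the l_owner a caller sets in the gf_flock passed to syncop_inodelk never participates:

```python
# Hypothetical model of the lk_owner issue. Names mirror Gluster's locks
# xlator and syncop API, but this is NOT the real code: both structs and the
# call path are simplified stand-ins for illustration.

from dataclasses import dataclass, field

@dataclass
class Flock:
    l_owner: int = 0          # owner the caller sets in the gf_flock

@dataclass
class PlInodeLock:
    lk_owner: int             # owner copied down from frame->root->lk_owner
    user_flock: Flock = field(default_factory=Flock)

def same_inodelk_owner(l1, l2):
    # Models the comparison in same_inodelk_owner(): only the frame-derived
    # lk_owner is consulted; user_flock.l_owner is ignored entirely.
    return l1.lk_owner == l2.lk_owner

def syncop_inodelk(frame_owner, flock):
    # Models the syncop path: the caller never sees the frame, so every lock
    # issued from the same synctask frame carries the same lk_owner,
    # regardless of what the caller put in the flock.
    return PlInodeLock(lk_owner=frame_owner, user_flock=flock)

# Two locks with distinct flock owners but one shared frame:
lk1 = syncop_inodelk(frame_owner=42, flock=Flock(l_owner=1))
lk2 = syncop_inodelk(frame_owner=42, flock=Flock(l_owner=2))
print(same_inodelk_owner(lk1, lk2))   # True: treated as one owner
```

In this model the two locks are treated as the same owner even though their flock owners differ — which is why inodelks taken on hardlinks from one synctask frame would all be granted instead of conflicting, and why kdhananjay's workaround is to create the frame yourself via synctask_new().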
10:04 amarts :+1:
10:26 msvbhat joined #gluster-dev
10:47 susant joined #gluster-dev
10:54 pranithk1 joined #gluster-dev
11:06 susant joined #gluster-dev
11:09 foster joined #gluster-dev
11:10 voidm joined #gluster-dev
11:14 kotreshhr joined #gluster-dev
11:18 Vishnu_ joined #gluster-dev
11:21 poornima joined #gluster-dev
11:23 susant joined #gluster-dev
11:28 shyam joined #gluster-dev
11:37 gyadav joined #gluster-dev
11:49 poornima joined #gluster-dev
11:51 itisravi joined #gluster-dev
11:55 itisravi__ joined #gluster-dev
12:16 skumar_ joined #gluster-dev
12:21 voidm_ joined #gluster-dev
12:21 skumar__ joined #gluster-dev
12:23 voidm__ joined #gluster-dev
12:42 msvbhat joined #gluster-dev
12:52 mchangir__ joined #gluster-dev
12:59 rraja joined #gluster-dev
13:30 shyam joined #gluster-dev
13:33 rwheeler joined #gluster-dev
13:35 shyam joined #gluster-dev
13:39 msvbhat joined #gluster-dev
13:40 atinm joined #gluster-dev
13:52 nbalacha joined #gluster-dev
13:55 major joined #gluster-dev
14:09 aravindavk joined #gluster-dev
14:16 obnox joined #gluster-dev
14:20 csaba atinm: thanks for letting me know, I'll look at it. Which version did you use for testing it?
14:20 atinm csaba, up to date mainline version
14:21 csaba atinm, ok.
14:26 amarts joined #gluster-dev
14:46 jstrunk joined #gluster-dev
14:48 Shu6h3ndu joined #gluster-dev
14:55 pladd joined #gluster-dev
14:58 amarts joined #gluster-dev
15:04 aravindavk joined #gluster-dev
15:09 gyadav joined #gluster-dev
15:16 aravindavk joined #gluster-dev
15:22 jobewan joined #gluster-dev
15:30 kdhananjay joined #gluster-dev
15:34 susant joined #gluster-dev
15:35 kkeithley nigelb,misc: can we please fix whatever jenkins config is amiss that makes links to things like console output https://jobs/jobs/centos6-regresssion.... instead of https://build.gluster.org/jobs/centos6-regression/...
15:35 kkeithley oh, let me guess, you want me to file a bz
15:36 misc I would, yes, as I am on PTO today and about to go get grocery :)
15:36 misc (also, I might have no idea on where to start for jenkins config :/ )
15:37 kkeithley I think it can wait until you return from PTO. It's been borked for a long time now
15:37 * kkeithley doesn't know when you're on PTO
15:39 misc I came back tomorrow
15:39 misc (I hope to be back from grocery before tomorrow)
15:42 kkeithley whew, you had me worried
15:46 jiffin joined #gluster-dev
15:49 Girjesh joined #gluster-dev
15:54 sanju joined #gluster-dev
16:00 aravindavk joined #gluster-dev
17:00 amarts joined #gluster-dev
18:25 sunny joined #gluster-dev
18:43 vbellur joined #gluster-dev
18:53 jstrunk_ joined #gluster-dev
18:57 jstrunk__ joined #gluster-dev
20:10 kkeithley fairly fresh checkout of gluster main branch.  file ./tests/geo-rep/georep-basic-dr-rsync.t
20:10 kkeithley centos6 regression on build.gluster.org, I see
20:11 kkeithley 14:48:54 Gluster version mismatch between master and slave.
20:11 kkeithley how could that be?
20:11 kkeithley and then it's just endlessly spitting out
20:11 kkeithley 15:11:39 stat: cannot stat `/mnt/glusterfs/1/changelog_chown_f1': No such file or directory
20:12 kkeithley did we bump a version in one place and forget to bump it somewhere else?
20:19 pladd_ joined #gluster-dev
20:30 msvbhat joined #gluster-dev
20:56 jstrunk_ joined #gluster-dev
23:24 jobewan joined #gluster-dev
23:36 msvbhat joined #gluster-dev
23:40 jobewan joined #gluster-dev
