
IRC log for #gluster-dev, 2015-05-16


All times shown according to UTC.

Time Nick Message
00:25 shyam joined #gluster-dev
01:49 ilbot3 joined #gluster-dev
01:49 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:44 hagarth joined #gluster-dev
04:15 gem joined #gluster-dev
05:39 gem joined #gluster-dev
06:04 spalai joined #gluster-dev
06:57 gem joined #gluster-dev
07:06 shubhendu joined #gluster-dev
07:49 vpshastry joined #gluster-dev
08:42 gem joined #gluster-dev
08:44 pranithk joined #gluster-dev
09:10 hagarth joined #gluster-dev
09:15 pranithk xavih: I need to talk to you a bit about lookup+EIO because of fewer responses than ec->fragments. When would be a good time?
09:17 JoeJulian joined #gluster-dev
09:29 pranithk xavih: Based on our discussion, I think this is what I will do: change get_size_version to use xattrop instead of lookup when fd is not present. And in lookup, if fop->answer is NULL, take the next best answer.
09:29 pranithk xavih: i.e. first cbk in 'list'
09:29 pranithk xavih: Let me know if you see any problem with this approach
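The plan pranithk outlines above boils down to two small decisions. A minimal, self-contained sketch of that logic follows; the structs and names used here (struct fop, struct cbk, pick_size_version_fop, pick_lookup_answer) are illustrative stand-ins, not the actual ec translator code:

    /* Sketch of the two changes discussed above; all types and names are
     * hypothetical stand-ins for the real ec translator definitions. */
    #include <stddef.h>

    struct cbk {                    /* one reply (callback) from a brick     */
        struct cbk *next;
        int         op_ret;
    };

    struct fop {
        struct cbk *answer;         /* combined "good" answer, may be NULL   */
        struct cbk *cbk_list;       /* all replies, in arrival order         */
        void       *fd;             /* non-NULL when an open fd is available */
    };

    enum size_version_fop { USE_XATTROP, USE_FD_BASED };

    /* 1. get_size_version: when no fd is present, send an xattrop instead
     *    of a lookup; the fd-based path stays as it is. */
    static enum size_version_fop
    pick_size_version_fop(const struct fop *fop)
    {
        if (fop->fd != NULL)
            return USE_FD_BASED;    /* unchanged path                        */
        return USE_XATTROP;         /* no fd: xattrop replaces lookup        */
    }

    /* 2. lookup: when fewer responses than ec->fragments arrive and no
     *    combined answer exists, take the first cbk in the list as the
     *    "next best" answer instead of failing with EIO. */
    static struct cbk *
    pick_lookup_answer(struct fop *fop)
    {
        if (fop->answer != NULL)
            return fop->answer;
        return fop->cbk_list;       /* first reply in the list, or NULL      */
    }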
09:34 msvbhat_ joined #gluster-dev
09:42 hagarth ndevos, pranithk: banning all patch acceptance till we fix all tests in is_bad_test()
09:43 ndevos hagarth: sure, but some seem to pass regression testing :)
09:43 hagarth ndevos: yeah, but let us clean it up now :)
09:44 ndevos although, not on netbsd
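For context, is_bad_test() in the regression harness is essentially a membership check against a list of known-bad (flaky) tests. The real implementation is a shell function in run-tests.sh; the C rendering below is purely illustrative, and the test path in it is a hypothetical placeholder:

    /* Illustrative rendering of the is_bad_test() idea: check whether a
     * test is on the known-bad list. Not the actual run-tests.sh code. */
    #include <string.h>

    static const char *bad_tests[] = {
        "tests/bugs/example/known-bad.t",   /* hypothetical entry */
        NULL,
    };

    static int
    is_bad_test(const char *name)
    {
        for (const char **t = bad_tests; *t != NULL; t++)
            if (strcmp(name, *t) == 0)
                return 1;
        return 0;
    }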
09:44 nbalacha joined #gluster-dev
09:44 ndevos I plan to have a look at the glupy tests, it's something that I would like to understand better
09:45 hagarth yeah.. it seems to fail a very basic test
09:46 ndevos I don't know, I have not checked the test at all
09:47 hagarth ndevos: it doesn't do much IIRC
09:51 ndevos hagarth: http://review.gluster.org/10798 contains re-enabled results for bug and smoke tests, it seems to be working :)
09:51 hagarth ndevos: great!
09:51 ndevos hagarth: I'll leave it up to you and others to test the negative voting ;-)
09:52 hagarth ndevos: by intentionally introducing a compilation error? ;)
09:53 ndevos hagarth: hopefully not intentionally, but feel free to do what you like
09:53 hagarth ndevos: thanks, will try something out later today
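The deliberate breakage being joked about here is easy to produce; a hypothetical one-liner like the following is enough to make the compile/smoke job fail and cast a negative vote (not necessarily what was used in the actual test change):

    /* Hypothetical intentional compile error, used only to confirm that
     * the CI job votes -1; referencing an undeclared symbol is enough. */
    int broken_on_purpose(void)
    {
        return undeclared_symbol;   /* compile error: undeclared identifier */
    }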
09:53 ndevos I'm sure people will file bugs against the wrong version/branch soon
09:54 ndevos oh, I should have tried the [cherry-pick] button, maybe next time
09:58 ndevos hagarth: can you +1 or share https://plus.google.com/+NielsdeVos/posts/X5XxbouvgMV on G+ ?
09:59 hagarth done
09:59 hagarth I am wondering if I have access to the gluster community on G+
10:00 ndevos well, if you do not have that, who would?
10:01 ndevos hmm, ripples: https://plus.google.com/ripples/details?url=http%3A%2F%2Fblog.nixpanic.net%2F2015%2F05%2Fglusterfs-370-has-been-released.html
10:03 xavih pranithk: I think your approach is ok
10:03 ndevos 261 views of the post on my blog, no idea how many would have read it through planet.fedoraproject.org, planet.gluster.org or others
10:05 hagarth ndevos: I think spot or somebody from Red Hat's community support wing might have access on G+
10:05 ndevos hagarth: yeah, I guess he should
10:09 hagarth ndevos: I made my share public now :)
10:10 ndevos hagarth: ah, thanks :)
10:10 pranithk joined #gluster-dev
10:19 ndevos pranithk: did you see xavihs response? or were you already disconnected at that time?
10:19 ndevos 12:03 < xavih> pranithk: I think your approach is ok
10:19 * ndevos posts it anyway, maybe it helps :)
10:24 pranithk ndevos: Thanks a lot! this fixes some badtests in ec
10:24 ndevos pranithk: nice!
10:25 pranithk ndevos: :-)
10:25 pranithk ndevos: Still not recovered fully. Got a bad cold+cough on the return journey. The tablets are making me sleep too much :-/
10:26 ndevos pranithk: why are you online then?
10:27 pranithk ndevos: :-), There is nothing much to do. Don't feel like resting. It is raining outside. So I just came online to see if I can get that answer from xavih
10:28 ndevos pranithk: :-) same here, but rain just stopped, it now is suddenly very foggy and not nice outside
10:29 pranithk ndevos: okay, rain seems to have stopped. Will get some mangoes to eat. adios amigo
10:29 hagarth pranithk: mangoes
10:29 ndevos cya pranithk!
10:29 hagarth you better share them :)
10:29 * ndevos likes papaya more
10:30 ndevos or chico!
10:30 pranithk hagarth: Please feel free to come to Indira nagar ;-)
10:30 * hagarth likes all tropical fruits ;)
10:30 ndevos we don't have many of those, India is paradise!
10:30 hagarth pranithk: why don't you come over here? we can go for a walk by the lake .. the lake looks lovely from here
10:30 pranithk ndevos: I keep telling you to shift ;-)
10:31 ndevos pranithk: I'll visit you again :D
10:31 pranithk ndevos: :-)
10:31 pranithk hagarth: ndevos: okay guys, I will go out for some time, cya
10:31 hagarth pranithk: ok
10:32 rjoseph joined #gluster-dev
10:41 hagarth ndevos: negative test seems to work - http://review.gluster.org/10800
10:43 ndevos hagarth++ thank you
10:43 glusterbot ndevos: hagarth's karma is now 59
10:44 ndevos hagarth: in case it passes regression testing, the verified value will get reset - that's something I'm still thinking of fixing
10:45 hagarth ndevos: right, maybe use a different user for smoke ?
10:46 ndevos hagarth: that, or have a dependency between the jobs, only start smoke after bug-check, and regressions after smoke
10:47 hagarth ndevos: yes, that could be done too.
10:47 ndevos hagarth: do you have a preference for one of the two approaches?
10:48 hagarth ndevos: the approach you outlined seems to be cleaner
10:48 hagarth else we would need to keep adding users as we increase the number of tests
10:48 hagarth a pipeline kind of approach with parallelism wherever we can would be ideal
11:02 ndevos hagarth: yes, I think adding more users would be good if we start tests in parallel, one user per component or such
11:03 hagarth ndevos: right..
11:03 ndevos Gaurav expressed interest in looking into parallel jobs for it, I'll check with him next week about it
11:17 gem joined #gluster-dev
11:28 gem joined #gluster-dev
11:58 shyam joined #gluster-dev
12:04 shyam left #gluster-dev
12:18 nbalacha joined #gluster-dev
13:07 gem joined #gluster-dev
13:40 gem joined #gluster-dev
13:57 shyam joined #gluster-dev
15:12 gem joined #gluster-dev
22:17 wushudoin joined #gluster-dev
