
IRC log for #gluster-dev, 2017-11-03


All times are shown in UTC.

Time Nick Message
00:48 msvbhat joined #gluster-dev
02:20 gyadav__ joined #gluster-dev
02:20 gyadav joined #gluster-dev
02:55 msvbhat joined #gluster-dev
02:57 ilbot3 joined #gluster-dev
02:57 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
04:00 mchangir joined #gluster-dev
04:11 itisravi joined #gluster-dev
04:28 karthik_us joined #gluster-dev
04:34 atinm joined #gluster-dev
04:42 skumar joined #gluster-dev
04:51 msvbhat joined #gluster-dev
04:54 rraja joined #gluster-dev
04:57 Shu6h3ndu joined #gluster-dev
05:02 msvbhat joined #gluster-dev
05:09 amarts joined #gluster-dev
05:13 aravindavk joined #gluster-dev
05:18 poornima joined #gluster-dev
05:22 sanoj joined #gluster-dev
05:27 hgowtham joined #gluster-dev
05:28 ndarshan joined #gluster-dev
05:35 gobindadas joined #gluster-dev
05:37 susant joined #gluster-dev
05:37 poornima joined #gluster-dev
05:48 psony joined #gluster-dev
05:52 apandey joined #gluster-dev
05:56 Shu6h3ndu joined #gluster-dev
05:57 vaibhav hello, in the test ./tests/features/trash.t, the rebalance step returns success but does not migrate the existing data; the issue is seen on the s390x architecture. any idea?
06:05 msvbhat joined #gluster-dev
06:10 pkalever joined #gluster-dev
06:16 poornima joined #gluster-dev
06:16 pranithk1 joined #gluster-dev
06:19 atinm joined #gluster-dev
06:20 xavih joined #gluster-dev
06:20 kotreshhr joined #gluster-dev
06:44 skoduri joined #gluster-dev
07:05 atinm joined #gluster-dev
07:14 rastar joined #gluster-dev
07:19 ppai joined #gluster-dev
07:21 amarts xavih, if you get time today, can you recheck if things are fine with https://review.gluster.org/18309
07:21 ppai joined #gluster-dev
07:21 amarts i addressed all your previous concerns
07:37 poornima joined #gluster-dev
07:40 xavih amarts: sure
07:42 Saravanakmr joined #gluster-dev
07:45 atinm joined #gluster-dev
07:53 xavih amarts: I don't see any new patch set since Sep 25
07:55 rafi joined #gluster-dev
08:05 rastar joined #gluster-dev
08:06 * kshlm will be AFK for a couple of hours
08:11 amarts xavih, yes, i had addressed both your and nixpanic's comments by then
08:11 amarts seeing 2 comments from you. About 'mean'? What do you suggest we do? we have count and aggregated latencies
08:12 xavih amarts: I already reviewed the patch and posted two more comments. I didn't say anything else about the previous ones, so I think they were ok. I'll recheck anyway
08:12 amarts cool
08:13 amarts also check https://review.gluster.org/18310
08:14 amarts as it's also going to be rebased for fixing those two comments
08:19 xavih amarts: my old comments are ok, but the new ones still apply. On line 70 of latency.c, the 'mean' field is being used to return the mean, but this field is not computed anywhere. We should remove it from fop_latency_t and use "total / count" when the mean is needed
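(For context, the change xavih describes amounts to dropping the stored field and deriving the mean on demand. The sketch below is illustrative only: the field and type names mirror the discussion above, not the actual Gluster sources.)

    /* Minimal sketch: keep only the running total and sample count,
     * and compute the mean when it is needed instead of storing a
     * 'mean' field that nothing updates. Names follow the discussion
     * (fop_latency_t, total, count) but the code is not Gluster's. */
    #include <stdint.h>

    typedef struct {
            double   total;   /* aggregated latency */
            uint64_t count;   /* number of samples */
    } fop_latency_t;

    static inline double
    fop_latency_mean(const fop_latency_t *lat)
    {
            /* guard against division by zero when no samples exist */
            return lat->count ? lat->total / (double)lat->count : 0.0;
    }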
08:19 amarts yes, working on it
08:20 amarts but just noting that io-stats still uses the structure, i guess
08:20 amarts will check and update
08:22 xavih amarts: the second comment is more debatable. It's hard to force everyone to check error codes, and this particular function should never fail (only possible failure cases would imply a really critical condition, so we shouldn't care anyway)
08:22 amarts Agree, will update the 'comment' and call ABRT there.
08:23 xavih amarts: but I can accept that; it's just that more care needs to be taken when some patch uses this function
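(A hedged illustration of the "call ABRT there" idea from the exchange above: when a call is never expected to fail, treat a failure as a critical condition and abort rather than try to recover. record_latency() is a placeholder, not an actual Gluster function.)

    /* Generic sketch, not the actual patch: abort on a "cannot happen"
     * failure instead of propagating an error nobody can handle. */
    #include <stdlib.h>

    static int record_latency(void) { return 0; }   /* placeholder call */

    static void
    sample_fop(void)
    {
            if (record_latency() != 0)
                    abort();   /* should be unreachable; a failure here
                                  would mean a critical condition */
    }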
08:34 itisravi joined #gluster-dev
08:38 itisravi joined #gluster-dev
09:12 gobindadas joined #gluster-dev
09:30 sunny joined #gluster-dev
09:31 kdhananjay joined #gluster-dev
09:34 kdhananjay1 joined #gluster-dev
09:36 poornima_ joined #gluster-dev
09:41 kdhananjay joined #gluster-dev
10:17 kdhananjay joined #gluster-dev
10:28 kdhananjay joined #gluster-dev
10:37 sunny joined #gluster-dev
10:38 poornima_ joined #gluster-dev
10:38 gyadav_ joined #gluster-dev
10:40 gyadav joined #gluster-dev
11:04 kdhananjay joined #gluster-dev
11:15 gyadav__ joined #gluster-dev
11:17 gyadav joined #gluster-dev
11:23 skumar joined #gluster-dev
11:31 kdhananjay1 joined #gluster-dev
11:38 ndevos rafi++ kshlm++ for sending in talks for FOSDEM, thanks!
11:38 glusterbot ndevos: rafi's karma is now 68
11:38 glusterbot ndevos: kshlm's karma is now 151
11:39 atinm joined #gluster-dev
11:45 rafi ndevos: :)
11:45 rafi ndevos: you are an admin?
11:58 atinm ppai, https://review.gluster.org/#/c/18644/2/xlators/mgmt/glusterd/src/glusterd.c - which defect ids does it point to?
12:00 ppai atinm, 43, 44 (strncpy) and 811 (fd close)
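(For readers following along, the two defect classes ppai mentions typically look like the following. This is a generic sketch of the pattern, not the actual glusterd.c code; copy_and_read() and its arguments are made up for illustration.)

    /* Generic sketch of the two static-analysis defect classes above:
     * strncpy() that may leave the destination unterminated, and a file
     * descriptor leaked on an error path. Not Gluster code. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    static int
    copy_and_read(const char *src, const char *path)
    {
            char dst[64];
            char buf[128];
            int  fd, ret = -1;

            /* strncpy does not NUL-terminate if src fills the buffer,
             * so terminate explicitly */
            strncpy(dst, src, sizeof(dst) - 1);
            dst[sizeof(dst) - 1] = '\0';

            fd = open(path, O_RDONLY);
            if (fd < 0)
                    return -1;

            if (read(fd, buf, sizeof(buf)) < 0)
                    goto out;        /* error path must still close fd */

            ret = 0;
    out:
            close(fd);               /* avoids the fd leak flagged above */
            return ret;
    }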
12:02 atinm ppai, have a minor comment on the patch
12:02 susant joined #gluster-dev
12:02 susant left #gluster-dev
12:03 ndevos rafi: yeah, I'm one of the devroom managers that will review the submissions :)
12:04 ppai atinm, sure. will update the patch
12:07 skumar joined #gluster-dev
12:07 rafi ndevos: awesome,
12:07 rafi ndevos: I submitted the talks as lecture :( , there was only Lightning talk
12:08 rafi ndevos: I mean in the event type
12:14 kshlm rafi, That's the correct type.
12:15 rafi kshlm: thanks :)
12:15 rafi kshlm++
12:15 glusterbot rafi: kshlm's karma is now 152
12:18 Saravanakmr joined #gluster-dev
12:34 ppai devyani7, pm
12:52 itisravi joined #gluster-dev
12:52 msvbhat joined #gluster-dev
12:54 pranithk1 joined #gluster-dev
12:54 DoubleJ joined #gluster-dev
12:54 pranithk1 joined #gluster-dev
13:04 rraja joined #gluster-dev
13:10 psony joined #gluster-dev
13:19 kkeithley ndevos: what did we decide on for the statedumps from the longevity cluster? Is one daily sufficient? And keep the prior statedump for comparison?
13:19 kkeithley one daily per component, i.e. glusterd, glusterfsd, ganesha-fsal-gluster on server,  glusterfs on client?
13:20 kkeithley instead of hourly
13:20 kkeithley daily instead of hourly statedumps
13:22 kkeithley hm, actually, thus far we haven't taken statedumps of the client
13:41 ndevos kkeithley: daily, and keep them gzip'd for a longer period, maybe the last 7 days, and then one from each week further back?
13:42 ndevos hourly stats are only useful for major leaks; the longer-term ones are easier to identify over days/weeks
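(As an illustration only, the retention policy ndevos sketches above - every daily dump kept for a week, one per week beyond that - could be expressed as a small predicate like the one below. The thresholds and the weekly "keeper" day are assumptions drawn from the conversation, not part of any Gluster tooling.)

    /* Sketch of the retention policy discussed above: keep all daily
     * statedumps from the last 7 days, and beyond that keep only one
     * dump per week. Purely illustrative. */
    #include <stdbool.h>

    static bool
    keep_statedump(int age_days, int weekday /* 0 = Sunday */)
    {
            if (age_days <= 7)
                    return true;      /* recent: keep every daily dump */
            return weekday == 0;      /* older: keep one per week */
    }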
13:44 shyam joined #gluster-dev
13:46 DoubleJ joined #gluster-dev
14:08 godas_ joined #gluster-dev
14:10 gyadav__ joined #gluster-dev
14:11 gyadav joined #gluster-dev
14:12 amarts joined #gluster-dev
14:21 rastar joined #gluster-dev
14:47 aravindavk joined #gluster-dev
14:50 kotreshhr left #gluster-dev
15:17 gyadav joined #gluster-dev
15:17 gyadav__ joined #gluster-dev
15:30 mchangir joined #gluster-dev
15:44 ppai joined #gluster-dev
16:11 rastar joined #gluster-dev
16:17 msvbhat joined #gluster-dev
16:57 amarts joined #gluster-dev
17:10 gyadav__ joined #gluster-dev
17:10 gyadav joined #gluster-dev
17:11 skumar joined #gluster-dev
17:31 csaba joined #gluster-dev
17:31 msvbhat joined #gluster-dev
18:09 glusterbot joined #gluster-dev
18:18 glusterbot joined #gluster-dev
21:11 shyam joined #gluster-dev
23:22 Acinonyx joined #gluster-dev
