
IRC log for #gluster-dev, 2016-06-29


All times shown according to UTC.

Time Nick Message
00:43 penguinRaider joined #gluster-dev
00:50 firemanxbr joined #gluster-dev
00:56 firemanxbr joined #gluster-dev
01:01 firemanxbr joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:11 gem joined #gluster-dev
02:48 poornimag joined #gluster-dev
02:56 pkalever1 joined #gluster-dev
02:57 pkalever1 joined #gluster-dev
02:57 pkalever1 left #gluster-dev
03:00 pkalever joined #gluster-dev
03:00 magrawal joined #gluster-dev
03:01 pkalever left #gluster-dev
03:01 pkalever joined #gluster-dev
03:21 luizcpg joined #gluster-dev
03:24 firemanxbr joined #gluster-dev
03:29 Bhaskarakiran joined #gluster-dev
03:30 sakshi joined #gluster-dev
03:43 jiffin joined #gluster-dev
03:51 hagarth joined #gluster-dev
03:53 Bhaskarakiran joined #gluster-dev
03:57 overclk joined #gluster-dev
04:05 Bhaskarakiran joined #gluster-dev
04:08 Apeksha joined #gluster-dev
04:10 ppai joined #gluster-dev
04:19 pkalever joined #gluster-dev
04:35 mchangir joined #gluster-dev
04:39 atinm joined #gluster-dev
04:43 aspandey joined #gluster-dev
04:44 Bhaskarakiran joined #gluster-dev
04:46 ramky joined #gluster-dev
04:48 Bhaskarakiran joined #gluster-dev
04:58 kdhananjay joined #gluster-dev
05:11 shubhendu joined #gluster-dev
05:11 penguinRaider joined #gluster-dev
05:15 spalai joined #gluster-dev
05:21 prasanth joined #gluster-dev
05:21 Manikandan joined #gluster-dev
05:23 jiffin1 joined #gluster-dev
05:29 baojg joined #gluster-dev
05:33 ndarshan joined #gluster-dev
05:36 sakshi joined #gluster-dev
05:39 hgowtham joined #gluster-dev
05:43 Saravanakmr joined #gluster-dev
05:45 sakshi joined #gluster-dev
05:46 mchangir joined #gluster-dev
05:49 atinm joined #gluster-dev
05:53 ashiq joined #gluster-dev
05:53 atalur joined #gluster-dev
05:56 karthik___ joined #gluster-dev
05:56 spalai left #gluster-dev
05:56 spalai joined #gluster-dev
05:57 aravindavk_ joined #gluster-dev
05:59 pranithk1 joined #gluster-dev
06:01 itisravi joined #gluster-dev
06:02 baojg joined #gluster-dev
06:05 prasanth joined #gluster-dev
06:05 ppai joined #gluster-dev
06:15 kotreshhr joined #gluster-dev
06:17 atinm joined #gluster-dev
06:18 itisravi joined #gluster-dev
06:20 skoduri joined #gluster-dev
06:21 mchangir joined #gluster-dev
06:22 kshlm joined #gluster-dev
06:22 nishanth joined #gluster-dev
06:23 pkalever joined #gluster-dev
06:24 msvbhat joined #gluster-dev
06:27 rafi joined #gluster-dev
06:33 asengupt joined #gluster-dev
06:47 pur joined #gluster-dev
06:57 kaushal_ joined #gluster-dev
07:03 penguinRaider joined #gluster-dev
07:06 baojg joined #gluster-dev
07:07 nigelb ndevos / misc - http://xkcd.com/1700/ (read the alt text :D)
07:10 ppai nigelb, :)
07:11 ppai left #gluster-dev
07:11 ppai joined #gluster-dev
07:24 sakshi joined #gluster-dev
07:49 ndevos nigelb: hehe, does Randall lurk in here?
07:57 karthik___ joined #gluster-dev
08:02 prasanth joined #gluster-dev
08:09 misc ah ah
08:10 penguinRaider joined #gluster-dev
08:15 Bhaskarakiran joined #gluster-dev
08:16 kdhananjay joined #gluster-dev
08:17 rjoseph joined #gluster-dev
08:33 itisravi joined #gluster-dev
08:33 misc nigelb: quick question, a change on jjb requires to restart jenkins or something ?
08:34 nigelb misc: nope.
08:34 misc (I was at a meetup yesterday and between 2 pizzas, a friend told me that)
08:34 nigelb Or rather, as far as all the jobs I've done, there's been no restarts needed.
08:35 misc mhh
08:35 misc ok
08:36 aravindavk joined #gluster-dev
08:39 baojg joined #gluster-dev
08:41 nigelb misc: I've also figured out how to get Jenkins to run the job to update itself. Still contemplating whether gerrit is a good idea or not.
08:41 misc nigelb: why not do it and revisit later ?
08:42 misc or not do it and revisit later
08:42 nigelb well, I'm getting on a plane in a few hours, so definitely not now.
08:42 misc or do it, then revisit later in a meeting in a sunny place near the seashore
08:42 nigelb I was thinking of getting some feedback from -devel about it.
08:43 nigelb See how much people will like writing jjb yml files for new jobs they'd like to run.
08:43 misc I suspect not much will write
08:44 misc also, on a unrelated note, would splitting the test in smaller job help us to make the test suite run faster ?
08:45 nigelb rastar has done some work on that. It needs more machines.
08:45 nigelb I'm looking to talk to him this week/next week about that.
08:46 misc mhh, well no, let's say we split the test suite into 2 shell script we run at the same time
08:46 misc that will use 2 servers instead of 1, but roughly be finished faster
08:46 nigelb I'm curious to know about that.
08:46 misc (now, when we are out of capacity, it will not work)
08:46 nigelb I also want to see if we can use a pipeline.
08:46 nigelb Build gluster only once
08:46 nigelb Because right now we build it quite a few times
08:47 nigelb If it's built for devrpms, we should be able to feed that for the regression tests, for instance.
08:47 nigelb I don't know if splitting it into two will work.
08:47 nigelb Ideally, it should be faster.
08:47 nigelb But how to do the splitting, I don't know.
08:47 misc splitting in smaller chunk also help to pinpoint what fail
08:47 rraja joined #gluster-dev
08:48 misc provided we can classify the tests
08:48 misc if we have a test suite dedicated to NFS, we can get a history of failure for that type of tests
08:48 nigelb I agree. I think it's a good idea.
08:49 misc downside is that's quite complex :/
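[The split misc and nigelb discuss above — two shell scripts on two servers, finishing roughly when the slowest one does — can be sketched as a greedy partition of test scripts across workers. This is an illustrative model only; the test names and durations are made up, not the actual Gluster regression layout.]

```python
# Illustrative sketch of splitting the regression suite across N workers
# so the slowest worker finishes sooner than one serial run.
# Test names and durations below are hypothetical.

def split_tests(durations, workers=2):
    """Greedy longest-first partition: assign each test to the currently
    least-loaded worker. Returns a list of [total_seconds, tests]."""
    bins = [[0, []] for _ in range(workers)]
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        bins.sort(key=lambda b: b[0])          # least-loaded worker first
        bins[0][0] += secs
        bins[0][1].append(name)
    return bins

if __name__ == "__main__":
    suite = {
        "tests/bugs/nfs/a.t": 300,
        "tests/bugs/replicate/b.t": 240,
        "tests/bugs/shard/c.t": 180,
        "tests/basic/d.t": 120,
    }
    serial = sum(suite.values())               # 840s on one machine
    parallel = max(b[0] for b in split_tests(suite, 2))
    print(serial, parallel)                    # prints: 840 420
```

[As misc notes, the real win of per-component chunks is less the wall time than the failure history you get per chunk; the hard part is classifying the tests.]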
08:49 gem joined #gluster-dev
08:49 rafi1 joined #gluster-dev
08:58 kotreshhr joined #gluster-dev
09:05 ndevos misc, nigelb: it is possible to run selected tests, and each component already puts its own tests in ./tests/bugs/<component>
09:05 nigelb Oh.
09:05 nigelb So we can speed them up if we want.
09:05 ndevos I've also been doing the build rpms, run tests by installing rpms in a private jenkins test environment
09:06 prasanth joined #gluster-dev
09:09 spalai left #gluster-dev
09:18 sakshi joined #gluster-dev
09:25 spalai joined #gluster-dev
09:31 ndevos hchiramm, ashiq: how do you test the docker images currently? can we create a nightly job in the CentOS CI to build the container image and test it?
09:32 atinm pranithk|afk, http://review.gluster.org/5786
09:32 sakshi joined #gluster-dev
09:33 skoduri joined #gluster-dev
09:34 jiffin1 joined #gluster-dev
09:34 ndevos okay... testing a libgfapi with QEMU should be easy, but they suggest to run their upstream unittests *gulp* http://thread.gmane.org/gmane.comp.emulators.qemu/422907/focus=423311
09:36 pranithk kaushal_: hey, it is not possible to set 3.7.12 options while 3.7.11 clients are in play right?
09:37 kaushal_ Yup.
09:37 kaushal_ But only if the option was marked as a client option.
09:38 pranithk kaushal_: Okay, let us wait for Lindsay's response. Yes it is marked as CLIENT_OPT
09:38 pranithk kaushal_:         { .key        = "cluster.locking-scheme",
09:38 pranithk           .voltype    = "cluster/replicate",
09:38 pranithk           .type       = DOC,
09:38 pranithk           .op_version = GD_OP_VERSION_3_7_12,
09:38 pranithk           .flags      = OPT_FLAG_CLIENT_OPT
09:38 pranithk         },
09:38 pranithk kaushal_: looks okay right?
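[The gating pranithk and kaushal_ agree on above — an option marked CLIENT_OPT with op_version 3.7.12 cannot be set while 3.7.11 clients are connected — can be modelled in a few lines. This is a toy model; the constants and function names are illustrative, not glusterd's actual code.]

```python
# Toy model of the op-version check discussed above: a client option can
# only be set once every connected client runs at least the option's
# op-version. Constants mimic glusterd's naming but are illustrative.
GD_OP_VERSION_3_7_11 = 30711
GD_OP_VERSION_3_7_12 = 30712
OPT_FLAG_CLIENT_OPT = 1 << 0

def can_set_option(opt_op_version, opt_flags, client_op_versions):
    """Reject a client option if any connected client is too old."""
    if opt_flags & OPT_FLAG_CLIENT_OPT:
        return all(v >= opt_op_version for v in client_op_versions)
    return True  # options not marked CLIENT_OPT are not gated on clients

# cluster.locking-scheme (op_version 3.7.12, CLIENT_OPT) with a 3.7.11 client:
print(can_set_option(GD_OP_VERSION_3_7_12, OPT_FLAG_CLIENT_OPT,
                     [GD_OP_VERSION_3_7_11]))  # prints: False
```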
09:45 sakshi joined #gluster-dev
09:54 ashiq joined #gluster-dev
09:59 hgowtham joined #gluster-dev
09:59 spalai left #gluster-dev
09:59 spalai joined #gluster-dev
10:13 aravindavk joined #gluster-dev
10:17 hgowtham joined #gluster-dev
10:24 baojg joined #gluster-dev
10:27 msvbhat joined #gluster-dev
10:29 aspandey joined #gluster-dev
10:40 Bhaskarakiran joined #gluster-dev
10:43 kotreshhr joined #gluster-dev
10:48 ndevos hi kdhananjay: I see http://review.gluster.org/14672 got merged, but there is no test-case for it?
10:48 hgowtham joined #gluster-dev
10:49 kdhananjay ndevos: didn't give it a thought. i guess i could write one.
10:50 ndevos kdhananjay: yes, all patches are expected to have a test-case
10:50 kdhananjay ndevos: sure, will do.
10:50 ndevos if there is no test-case possible, it should be explained in the commit message, or it must be very obvious (like spelling fixes etc..)
10:50 ndevos kdhananjay++ thanks!
10:50 msvbhat joined #gluster-dev
10:50 glusterbot ndevos: kdhananjay's karma is now 20
10:50 pranithk kdhananjay: How will you test?
10:51 kdhananjay pranithk: well nothing more than doing reads on an odirect fd.
10:51 pranithk kdhananjay: which is already present in the tests.
10:51 pranithk kdhananjay: So no need is what I thought, so merged it
10:51 kdhananjay pranithk: just to make sure it doesn't break what was working already.
10:51 pranithk kdhananjay: Let me point to the test
10:51 kdhananjay oh!
10:52 kdhananjay ndevos: ^^
10:52 ndevos pranithk: if that is available, why did it not fail?
10:52 pranithk kdhananjay: http://review.gluster.org/#/c/14623/4/tests/bugs/shard/bug-1342298.t
10:53 pranithk ndevos: This is optimization of memory usage
10:53 pranithk ndevos: it avoids extra calloc
10:53 ndevos pranithk: subject is "libglusterfs: Implement API that provides page-aligned iobufs", in that case, the subject could have been a little more clear
10:56 ndevos pranithk, kdhananjay and *: when you send patches, please add a test-case, an explanation of why it is not possible, or reference a test-case that covers the change - PLEASE :)
10:57 aspandey joined #gluster-dev
10:58 ndevos kaushal_: I've modified and added some notes to https://public.pad.fsfe.org/p/glusterfs-release-process-201606 - what are the next steps?
10:59 prasanth joined #gluster-dev
11:02 kshlm ndevos, Next steps would be to make it official by getting it merged into the docs.
11:10 spalai left #gluster-dev
11:29 hchiramm joined #gluster-dev
11:29 rafi joined #gluster-dev
11:36 rafi joined #gluster-dev
11:37 gem joined #gluster-dev
11:37 luizcpg joined #gluster-dev
11:38 rafi1 joined #gluster-dev
11:40 poornimag joined #gluster-dev
11:41 ppai joined #gluster-dev
11:42 jiffin joined #gluster-dev
11:56 surabhi joined #gluster-dev
11:58 karthik___ joined #gluster-dev
11:58 poornimag joined #gluster-dev
11:59 kotreshhr joined #gluster-dev
11:59 kshlm Weekly community meeting starts in 1 minute in #gluster-meeting
12:02 gem joined #gluster-dev
12:49 Saravanakmr joined #gluster-dev
13:06 Jules-2 joined #gluster-dev
13:08 ira joined #gluster-dev
13:10 rafi1 does anybody know how to force FUSE_BATCH_FORGET ?
13:10 rafi1 ndevos: ^
13:11 jiffin joined #gluster-dev
13:12 ndevos rafi1: not really, what direction is that? from the VFS to fuse, or from fuse to the VFS?
13:12 rafi1 ndevos: fuse to vfs
13:13 ndevos rafi1: hmm, there is a special xattr that you can set/read and that triggers a forget, maybe that is what you're looking for?
13:13 rafi1 ndevos: yes
13:13 rafi1 ndevos: i'm running bene's smallfile perf tule
13:14 rafi1 *tool
13:15 rafi1 ndevos: I'm getting fuse_batch_forget call
13:15 rafi1 ndevos: I was wondering how to do that
13:16 ndevos rafi1: xlators/mount/fuse/src/fuse-bridge.c mentions "inode-invalidate" in setxattr
13:16 ndevos rafi1: maybe just like: setfattr -k inode-invalidate -v 1 /path/to/fuse/mount/dir/file
13:17 ndevos although I wonder why it is not prefixed with user. or something...
13:18 ndevos oh, -k should be -n :)
13:20 ndevos rafi1: from vfs to fuse might be done by dropping the caches, that looks more to what FUSE_BATCH_FORGET does
13:21 rafi1 ndevos: echo 3 > /proc/sys/vm/drop_caches won't do that
13:21 rafi1 ndevos: any other way around ?
13:27 ndevos rafi1: maybe unmounting?
13:28 ndevos rafi1: a quick glance in the kernel sources does not immediately point me to anything
13:31 nigelb ndevos: I'm curious - What's stopping us from running rpmlint on our existing rpm jobs on build.g.o?
13:31 nigelb Other than changing the script slightly..
13:32 ndevos nigelb: someone doing the work?
13:32 ndevos nigelb: and, probably some modularity: rpm verification is different from building rpms, so the jobs should be split
13:33 nigelb aha.
13:33 nigelb I'm curious about how to setup a pipeline on jenkins, so I may test that out.
13:36 ndevos nigelb: I've done that with a MultiJob task in my internal testing jenkins, 1. build tarball, 2. build RPMs from the tarball (el6+el7), 3. run test with the build RPMs (el6+el7)
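[The three-stage MultiJob ndevos describes — build tarball, build RPMs from it for el6+el7, test with those RPMs — is also what makes nigelb's "build gluster only once" pipeline possible: each stage consumes the previous stage's artifact. A minimal sketch with stand-in stage bodies; nothing here is the real Jenkins job code.]

```python
# Sketch of the three-stage pipeline described above: each stage hands
# its artifact to the next, so the build happens exactly once instead of
# once per job. Stage bodies are stand-ins for the real Jenkins steps.
def build_tarball(commit):
    return f"glusterfs-{commit}.tar.gz"

def build_rpms(tarball, dists=("el6", "el7")):
    base = tarball.removesuffix(".tar.gz")
    return [f"{base}.{d}.rpm" for d in dists]

def run_tests(rpms):
    # Install the pre-built RPMs and run regressions against them.
    return {rpm: "PASS" for rpm in rpms}

def pipeline(commit):
    tarball = build_tarball(commit)   # stage 1: one source build
    rpms = build_rpms(tarball)        # stage 2: package per distribution
    return run_tests(rpms)            # stage 3: test the built artifacts

print(pipeline("abc123"))
```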
13:36 nigelb My curiousity is getting that done in Jenkins with config defined by jjb.
13:37 nigelb ... and it's possible \o/
13:37 pkalever left #gluster-dev
13:37 ndevos yes, I think others in the CentOS CI use that already
13:52 pranithk1 joined #gluster-dev
13:59 rafi1 ndevos: unmounting will do
13:59 rafi1 ndevos: but the script doesn't seem to be unmounting it
13:59 rafi1 ndevos: cool
14:00 rafi1 ndevos: I will again search
14:12 hagarth joined #gluster-dev
14:37 atinm nigelb, http://review.gluster.org/#/c/14829/2 - regression passed, but jenkins didn't vote
14:48 ramky joined #gluster-dev
14:56 pkalever joined #gluster-dev
15:00 kshlm joined #gluster-dev
15:08 wushudoin joined #gluster-dev
15:10 jiffin1 joined #gluster-dev
15:12 hgowtham joined #gluster-dev
15:20 mchangir joined #gluster-dev
15:23 pkalever left #gluster-dev
15:25 hchiramm joined #gluster-dev
15:38 jiffin joined #gluster-dev
15:52 mchangir joined #gluster-dev
15:53 baojg joined #gluster-dev
16:05 pkalever joined #gluster-dev
16:07 rraja joined #gluster-dev
16:07 jiffin joined #gluster-dev
16:12 jiffin joined #gluster-dev
16:43 hagarth joined #gluster-dev
17:19 pkalever left #gluster-dev
17:49 shubhendu joined #gluster-dev
18:35 spalai joined #gluster-dev
18:39 mchangir joined #gluster-dev
18:41 wushudoin joined #gluster-dev
19:07 mchangir joined #gluster-dev
19:26 jiffin joined #gluster-dev
20:43 owlbot joined #gluster-dev
20:50 hagarth joined #gluster-dev
21:30 owlbot joined #gluster-dev
22:38 hagarth joined #gluster-dev
22:57 firemanxbr joined #gluster-dev
23:32 luizcpg joined #gluster-dev
23:32 hagarth joined #gluster-dev
23:43 luizcpg left #gluster-dev
23:47 pranithk1 joined #gluster-dev
