
IRC log for #gluster-dev, 2016-08-04


All times shown according to UTC.

Time Nick Message
00:17 shyam joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:29 prasanth joined #gluster-dev
02:35 _Bryan_ joined #gluster-dev
02:39 hchiramm joined #gluster-dev
03:06 spalai joined #gluster-dev
03:14 magrawal joined #gluster-dev
03:20 sanoj joined #gluster-dev
03:25 sanoj joined #gluster-dev
03:51 nbalacha joined #gluster-dev
03:54 atinm joined #gluster-dev
04:02 ppai joined #gluster-dev
04:04 itisravi joined #gluster-dev
04:10 ira_ joined #gluster-dev
04:13 nishanth joined #gluster-dev
04:16 julim joined #gluster-dev
04:20 poornimag joined #gluster-dev
04:26 nbalacha joined #gluster-dev
04:27 shubhendu joined #gluster-dev
04:33 spalai joined #gluster-dev
04:37 aspandey joined #gluster-dev
04:41 itisravi joined #gluster-dev
04:42 spalai left #gluster-dev
04:48 rafi joined #gluster-dev
05:00 mchangir joined #gluster-dev
05:02 karthik_ joined #gluster-dev
05:09 jiffin joined #gluster-dev
05:12 Muthu joined #gluster-dev
05:16 ndarshan joined #gluster-dev
05:22 aravindavk joined #gluster-dev
05:29 raghug joined #gluster-dev
05:35 itisravi joined #gluster-dev
05:35 nishanth joined #gluster-dev
05:41 skoduri joined #gluster-dev
05:43 ppai joined #gluster-dev
05:45 hgowtham joined #gluster-dev
05:46 karthik_ joined #gluster-dev
05:50 rafi1 joined #gluster-dev
05:51 ndarshan joined #gluster-dev
05:51 shubhendu joined #gluster-dev
05:52 Muthu joined #gluster-dev
05:53 Manikandan joined #gluster-dev
05:54 spalai joined #gluster-dev
05:54 baojg joined #gluster-dev
06:04 mchangir joined #gluster-dev
06:19 kshlm joined #gluster-dev
06:31 aspandey joined #gluster-dev
06:40 mchangir joined #gluster-dev
06:42 kdhananjay joined #gluster-dev
06:51 mchangir joined #gluster-dev
06:55 devyani7_ joined #gluster-dev
07:02 Apeksha joined #gluster-dev
07:31 ndarshan joined #gluster-dev
07:44 rafi1 joined #gluster-dev
07:45 itisravi_ joined #gluster-dev
07:50 ndarshan joined #gluster-dev
07:53 post-factum http://review.gluster.org/#/c/15082/ needs your review, please
07:57 ppai joined #gluster-dev
07:58 mchangir joined #gluster-dev
08:01 msvbhat joined #gluster-dev
08:03 _nixpanic joined #gluster-dev
08:03 _nixpanic joined #gluster-dev
08:04 ankitraj joined #gluster-dev
08:05 hchiramm joined #gluster-dev
08:14 shubhendu joined #gluster-dev
08:15 nishanth joined #gluster-dev
08:34 ramky joined #gluster-dev
08:39 atalur joined #gluster-dev
08:42 ankitraj joined #gluster-dev
08:43 itisravi joined #gluster-dev
08:59 karthik_ joined #gluster-dev
08:59 ashiq joined #gluster-dev
09:05 Manikandan joined #gluster-dev
09:08 pranithk1 joined #gluster-dev
09:25 pur joined #gluster-dev
09:32 jiffin1 joined #gluster-dev
09:56 aspandey joined #gluster-dev
09:56 baojg joined #gluster-dev
09:58 atalur joined #gluster-dev
10:05 jiffin joined #gluster-dev
10:15 rjoseph Review needed for http://review.gluster.org/#/c/15072/ and http://review.gluster.org/#/c/15073/. Thanks
10:16 jiffin joined #gluster-dev
10:25 msvbhat joined #gluster-dev
10:26 pranithk1 joined #gluster-dev
10:58 aspandey_ joined #gluster-dev
11:06 pranithk1 joined #gluster-dev
11:07 msvbhat joined #gluster-dev
11:07 pranithk11 joined #gluster-dev
11:29 spalai left #gluster-dev
11:30 ppai skoduri++
11:30 glusterbot ppai: skoduri's karma is now 35
11:47 Manikandan joined #gluster-dev
11:55 pranithk1 joined #gluster-dev
11:57 kotreshhr joined #gluster-dev
12:12 post-factum skoduri: http://review.gluster.org/#/c/15087/ looks related to my coredumps...
12:13 skoduri post-factum, yes even I thought so..but I guess there was no "inode_table_destroy" in your coredumps' call stacks, right?
12:13 skoduri it happens only when "inode_table_destroy" gets called...probably there is some other code-path where we fail to adjust the lru size
12:13 skoduri will check that..
12:14 post-factum skoduri: inode_table_prune is in my stacktrace
12:15 skoduri post-factum, right..but the issue was with "inode_table_destroy" in this particular case...probably they are related..could you please re-try your tests with this patch applied though I doubt if it fixes your issue..
12:16 post-factum skoduri: will try
12:16 skoduri thanks
12:16 ira_ joined #gluster-dev
12:19 post-factum skoduri: i have to recheck 3.7.14 first. just launched tests
12:19 skoduri post-factum, okay
12:20 mchangir joined #gluster-dev
12:20 ndevos poornimag: got an opinion on http://www.mail-archive.com/gluster-devel@gluster.org/msg09819.html ?
12:22 ndevos post-factum: btw, do you have a minimal test-case script for xglfs?
12:22 ndevos post-factum: I'd like to get it automatically tested in the CentOS CI if you do not mind
12:22 post-factum ndevos: nope
12:22 post-factum skoduri: ok, 3.7.14 crashes as usual
12:23 skoduri :)
12:23 ndevos post-factum: hmm, I could add build tests for now, just to make sure we do not break a libgfapi header
12:24 post-factum ndevos: i was going to refactor cmdline arguments first
12:24 post-factum ndevos: and also add some mount helper for it
12:24 ndevos skoduri: any idea if cthon04 runs on fuse mounts too?
12:25 post-factum skoduri: but this time different stacktraces
12:25 post-factum skoduri: i'll attach them to bugreport
12:25 ndevos post-factum: ok, but compiling should not break, right?
12:25 s-kania joined #gluster-dev
12:25 post-factum ndevos: i believe i won't commit the code that could not be compiled :)
12:26 ndevos post-factum: sounds safe enough to me then :)
12:26 skoduri post-factum, okay
12:28 skoduri ndevos, I doubt...but except for a few tests, I think the majority of them will be applicable to all FSes
12:34 post-factum skoduri: BZ 1353561 c8 and below
12:34 post-factum skoduri: stacktraces and corefiles themselves
12:41 nbalacha joined #gluster-dev
12:44 baojg_ joined #gluster-dev
12:49 rraja joined #gluster-dev
12:49 skoduri post-factum, will check
12:51 skoduri ndevos, did a quick check ...looks like it does only NFS mount..maybe we could tweak the mount-related script and try
12:53 ndevos skoduri: but you do an nfs-mount yourself in the client.sh script you gave me?
12:54 skoduri ndevos, the script does
12:55 ndevos skoduri: just replacing that with a gluster mount could work?
12:55 ndevos skoduri: or maybe sraj knows, but he's not here :-/
12:55 skoduri I mean cthon test auto mounts it
12:55 skoduri we have to tweak that
12:56 ndevos hmm, ok
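
A minimal sketch of the tweak being discussed: the cthon04 run script normally does the NFS mount itself, and the idea is to swap that for a native glusterfs mount. The hostname, volume name and mount point below are placeholders, not taken from the log.

    # roughly what the cthon04 mount step does today (NFS):
    mount -t nfs -o vers=3 server.example.com:/testvol /mnt/cthon
    # the proposed tweak: mount the same volume over FUSE instead:
    mount -t glusterfs server.example.com:/testvol /mnt/cthon
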
12:57 ndevos kkeithley: hah, "just" fix glfs_fini? others have put a lot of effort in that already, but there still seem to be some corner cases or races
12:58 kkeithley well, it was just a thought.  Do the right thing.
12:59 ndevos fixing *is* the right thing, just not sure how or when it'll be fixed
12:59 ndevos if spurious libgfapi failures are acceptable in the regression tests, we don't need to modify the test cases for it
13:01 ndevos I currently have two samples, 1/1027 and 1/287 failure/success ratios, that might be acceptable
13:10 mchangir_ joined #gluster-dev
13:16 mchangir is the client-side/mount point graph/vol file stored in a file at all?
13:24 ndevos mchangir: I think that is in the -fuse.vol file under /var/lib/glusterd/<volume>/
13:25 ndevos but it seems possible that the client modifies it a little with mount options, I've not looked at how that is handled
13:31 julim joined #gluster-dev
13:35 mchangir ndevos, thanks ... however, I'm wondering if the file should be available on the node where only a *mount* command is run exclusively
13:47 ndevos mchangir: oh, no, I do not think the mount process saves a local copy of the file
13:47 mchangir ndevos, ok
13:49 mchangir ndevos, at any instant, how can I check the client-side and server-side graph organization without breaking into a gdb session
13:58 ndevos mchangir: the fuse client writes the graph to its log, that is what I mostly check
13:59 ndevos mchangir: there is also "gluster ::system getspec <volume>" or something like that, and it'll print the client volfile
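
A minimal sketch of the two approaches mentioned above for inspecting the client graph. The volume name, mount point and exact file names are placeholders, and the getspec syntax may differ slightly between versions.

    # ask glusterd for the client volfile:
    gluster system:: getspec testvol
    # or read the generated client volfile on a server node (file name may vary, e.g. testvol.tcp-fuse.vol):
    cat /var/lib/glusterd/vols/testvol/testvol.tcp-fuse.vol
    # the FUSE client also prints its graph into its log at mount time (log name follows the mount point):
    less /var/log/glusterfs/mnt-testvol.log
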
14:00 shyam joined #gluster-dev
14:06 skoduri joined #gluster-dev
14:08 baojg joined #gluster-dev
14:12 post-factum skoduri: no luck with your patch as well
14:12 post-factum skoduri: it should be really related to another issue
14:14 mchangir ndevos, thanks
14:16 jiffin joined #gluster-dev
14:18 penguinRaider joined #gluster-dev
14:21 hagarth joined #gluster-dev
14:25 spalai joined #gluster-dev
14:29 nigelb ndevos: Hey, I need some help (not urgently)
14:29 nigelb https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/47
14:37 pranithk1 joined #gluster-dev
14:44 ndevos nigelb: what help do you need?
14:46 dlambrig_ joined #gluster-dev
14:46 msvbhat joined #gluster-dev
14:47 skoduri post-factum, okay expected so..
15:01 hagarth joined #gluster-dev
15:04 rafi joined #gluster-dev
15:17 nigelb ndevos: Mostly I want to know how to figure out which bits of changes in build.sh and regression.sh are needed. I can go over with you tomorrow the bits I'm unsure about.
15:17 nigelb I'm guessing that the changes in tests are easily mergable without issues.
15:17 wushudoin joined #gluster-dev
15:21 ndevos nigelb: hmm, I'm not sure who managed those scripts... maybe hagarth can have a look at it today?
15:23 devyani7__ joined #gluster-dev
15:31 shyam1 joined #gluster-dev
15:33 rafi joined #gluster-dev
15:35 VONO joined #gluster-dev
15:44 kkeithley ndevos: want to cast a jaundiced eye on https://paste.fedoraproject.org/401292/70325441
15:45 kkeithley see if there's anything that makes you retch
15:46 kkeithley before I move on to client setup and failure triggering
15:53 ndevos kkeithley: looks ok to me, but on the ganesha server you would only need to install glusterfs-ganesha and all others should get pulled in
15:54 kkeithley yes, that's true. ;-)
15:54 ndevos kkeithley: also, I recommend to use /bricks/... everywhere, it is nice to keep it that way, and SELinux should know about the path too
15:54 rafi joined #gluster-dev
15:55 kkeithley good point
15:55 ndevos kkeithley: in the for-loops, you should be able to do "for NODE in ${nodes[@]}", it reduces the i and i++ thingy
15:55 glusterbot ndevos: i's karma is now 1
15:56 ndevos oh, we have a happy i!
15:56 ndevos kkeithley: all "gluster" cli commands can have the --mode=script, it should not ask for confirmation then
15:57 ndevos kkeithley: and, please add a little description in the first few lines of the script, with an example of how it should get executed :)
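
A minimal sketch of the loop and --mode=script suggestions above, assuming nodes is a bash array of placeholder hostnames and /bricks/b0 is a placeholder brick path:

    nodes=(node0.example.com node1.example.com node2.example.com)
    for NODE in "${nodes[@]}"; do
        # iterate the array directly, no index variable or i++ needed
        ssh "${NODE}" 'mkdir -p /bricks/b0'
    done
    # --mode=script makes the gluster CLI non-interactive (no y/n confirmation prompts):
    gluster --mode=script volume stop testvol
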
16:00 ndevos kkeithley: ah, I assume you need me=$(hostname --fqdn), there are different sub-domains
16:00 kkeithley no, must not use FQDN at this point in time
16:01 kkeithley are you saying nodes in CI may be in different subdomains?
16:01 ndevos yes, subdomains (domainnames), but still in the same subnet
16:01 kkeithley ugh
16:02 ndevos that's pretty common for deployments everywhere though...
16:02 kkeithley sure, but this isn't for deployment everywhere
16:02 ndevos I didn't know there was this limitation... where does it come from?
16:03 kkeithley long, i.e. fqdn, names need the '.'s changed to '_'s in the VIP lines of the config
16:03 kkeithley I can fix that. Was hoping to punt for now
16:04 ndevos hmm, I guess that needs fixing :-/
16:04 ndevos sorry about that!
16:04 kkeithley it's not that hard actually.
16:04 kkeithley I was just being lazy
16:04 ndevos it's just a sed away :D
16:04 kkeithley bash built-in
16:05 ndevos or even that
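
The bash built-in referred to here is presumably parameter expansion; a minimal sketch of turning an FQDN into the underscore form needed for the VIP lines of the config, with a placeholder address:

    me=$(hostname --fqdn)       # e.g. node0.int.example.com
    vip_name=${me//./_}         # -> node0_int_example_com
    echo "VIP_${vip_name}=10.0.0.1"
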
16:05 ndevos lazy is not always the right approach :P
16:06 * ndevos got a reply that said "Just fix the bug, don't hack the test." so he isn't feeling too sorry about it
16:06 kkeithley glad to know you didn't take that personally or anything. :-P
16:06 ndevos hehe
16:06 ndevos I'll review your scripts *very* closely now
16:07 ndevos anyway, it's past dinner time here, I'll wrap up for the day and continue tomorrow
16:07 kkeithley my nefarious strategy worked then.
16:07 spalai left #gluster-dev
16:07 prasanth joined #gluster-dev
16:08 kkeithley lunch, biab
16:08 ndevos I would need to 'dict' nefarious, so I'll let it slide
16:08 ndevos enjoy your lunch, and maybe ttyl
16:09 aravindavk joined #gluster-dev
16:13 xavih joined #gluster-dev
16:16 xavih_ joined #gluster-dev
16:20 mchangir joined #gluster-dev
16:46 inevity joined #gluster-dev
16:49 rafi joined #gluster-dev
16:51 inevity joined #gluster-dev
16:59 shyam joined #gluster-dev
17:10 hagarth joined #gluster-dev
17:12 aravindavk joined #gluster-dev
17:31 shubhendu joined #gluster-dev
18:01 shubhendu joined #gluster-dev
18:05 julim joined #gluster-dev
18:13 mchangir there are two volume options with the same name, performance.cache-size, defined in xlators/mgmt/glusterd/src/glusterd-volume-set.c ... is that intentional?
18:20 hagarth joined #gluster-dev
19:03 hagarth joined #gluster-dev
19:14 xavih joined #gluster-dev
19:22 penguinRaider joined #gluster-dev
20:05 penguinRaider joined #gluster-dev
20:23 penguinRaider joined #gluster-dev
20:33 dlambrig left #gluster-dev
21:05 gluster-newb joined #gluster-dev
21:18 julim joined #gluster-dev
21:20 dlambrig_ joined #gluster-dev
21:29 spalai joined #gluster-dev
22:43 shyam joined #gluster-dev
23:18 inevity joined #gluster-dev
23:55 shyam left #gluster-dev
