
IRC log for #gluster-dev, 2017-07-26


All times shown according to UTC.

Time Nick Message
00:03 deep-book-gk_ joined #gluster-dev
00:04 deep-book-gk_ left #gluster-dev
00:06 wushudoin| joined #gluster-dev
00:22 wushudoin joined #gluster-dev
01:52 ilbot3 joined #gluster-dev
01:52 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:54 prasanth joined #gluster-dev
02:18 rastar joined #gluster-dev
02:19 bwerthmann joined #gluster-dev
02:24 decayofmind joined #gluster-dev
02:40 mchangir joined #gluster-dev
02:59 BlackoutWNCT joined #gluster-dev
03:13 ashiq joined #gluster-dev
03:22 smit joined #gluster-dev
03:46 riyas joined #gluster-dev
03:50 nbalacha joined #gluster-dev
03:59 pkalever joined #gluster-dev
04:06 ppai joined #gluster-dev
04:07 itisravi joined #gluster-dev
04:09 jiffin joined #gluster-dev
04:30 atinm joined #gluster-dev
04:31 smit joined #gluster-dev
04:43 Shu6h3ndu joined #gluster-dev
04:53 karthik_us joined #gluster-dev
04:54 deep-book-gk_ joined #gluster-dev
04:57 deep-book-gk_ left #gluster-dev
04:59 susant joined #gluster-dev
05:10 Venkata joined #gluster-dev
05:11 apandey joined #gluster-dev
05:11 Venkata Hello all, I would like to contribute to libgfapi. Are there any docs for ramping up on libgfapi, or is going through the source code under the api directory enough?
05:16 amarts joined #gluster-dev
05:21 ndarshan joined #gluster-dev
05:22 Saravanakmr joined #gluster-dev
05:26 skumar joined #gluster-dev
05:27 sanoj joined #gluster-dev
05:28 sahina joined #gluster-dev
05:30 apandey_ joined #gluster-dev
05:36 nishanth joined #gluster-dev
05:42 prasanth joined #gluster-dev
05:43 atinm joined #gluster-dev
05:46 karthik_us joined #gluster-dev
05:49 apandey__ joined #gluster-dev
05:49 skoduri joined #gluster-dev
05:57 atalur joined #gluster-dev
06:03 kdhananjay joined #gluster-dev
06:04 hgowtham joined #gluster-dev
06:05 ankitr joined #gluster-dev
06:05 msvbhat joined #gluster-dev
06:08 rafi1 joined #gluster-dev
06:16 atalur_ joined #gluster-dev
06:17 poornima joined #gluster-dev
06:24 pkalever joined #gluster-dev
06:34 atinm joined #gluster-dev
06:42 karthik_us joined #gluster-dev
06:44 sona joined #gluster-dev
06:45 kotreshhr joined #gluster-dev
07:18 aravindavk joined #gluster-dev
07:32 rastar joined #gluster-dev
08:13 atinm if we have used a GitHub issue to send a patch to mainline, what's the best way to send the backport to 3.12? keep the same GitHub issue and the topic set to rfc?
08:14 atinm amarts, nigelb ^^
08:17 itisravi joined #gluster-dev
08:56 Saravanakmr joined #gluster-dev
08:56 Acinonyx joined #gluster-dev
09:02 sahina joined #gluster-dev
09:08 sanoj joined #gluster-dev
09:15 msvbhat joined #gluster-dev
09:22 nbalacha joined #gluster-dev
09:23 major joined #gluster-dev
09:26 decayofmind joined #gluster-dev
09:33 rafi joined #gluster-dev
09:33 ndevos atinm: I would use the same GitHub Issue for backports, assuming the feature was approved to get backported
09:34 atinm ndevos, cool
09:34 ndevos atinm: for all GitHub Issues the topic of the patch needs to be 'rfc', otherwise the smoke-test will complain and fail
09:35 atinm ndevos, yes, that's what I figured out now, but I think for release branches we can have this tweaked? like for 3.12 it should be rfc-release-3.12
09:36 ndevos atinm: I'm not sure, IMHO the topic check for patches linked to GitHub Issues is not needed at all
09:37 ndevos but, well, currently if no BUG: is found in the commit message, the topic is assumed to be 'rfc' (and not 'bug-123456')
09:38 ndevos patches to improve the script are welcome :-) https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/compare-bug-version-and-git-branch.sh
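
For readers of the log: a hypothetical backport commit message matching what ndevos describes above (component, summary, issue number and Change-Id are all invented; the 'Fixes: #NNN' footer is gluster's usual way of linking a patch to a GitHub Issue, and with no 'BUG:' line the smoke test expects the push topic to be 'rfc'):

    example-xlator: example summary of the fix

    Backport of the mainline patch to release-3.12, reusing the
    same GitHub Issue as the original change.

    Fixes: #1234
    Change-Id: I0123456789abcdef0123456789abcdef01234567
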
09:41 rafi2 joined #gluster-dev
09:43 kdhananjay joined #gluster-dev
09:56 hgowtham joined #gluster-dev
09:59 kotreshhr ndevos: Could you take a look at this bug https://bugzilla.redhat.com/show_bug.cgi?id=1475255 ?
09:59 glusterbot Bug 1475255: high, unspecified, ---, bugs, NEW , [Geo-rep]: Geo-rep hangs in changelog mode
09:59 kotreshhr ndevos: It is caused by mem_pool patch https://review.gluster.org/#/c/17779/
10:01 ndevos kotreshhr: that suggests there is something using libglusterfs without calling mem_pools_init_early() and mem_pools_init_late()
10:02 ndevos kotreshhr: what executables use the libglusterfs in that case?
10:02 kotreshhr ndevos: It's not libglusterfs; as I have noted in the bug, it's libgfchangelog
10:02 kotreshhr ndevos: Yes it's not using mem_pools_init_early/late
10:03 skoduri joined #gluster-dev
10:03 ndevos kotreshhr: is there some init() and fini() routine in libgfchangelog?
10:04 kotreshhr ndevos: nope, it uses mem_pools though
10:04 ppai kshlm++
10:04 glusterbot ppai: kshlm's karma is now 147
10:05 ndevos kotreshhr: in that case, a constructor and destructor need to be added to libgfchangelog, so that the mem-pools get initialized/destroyed when the library gets loaded/unloaded
10:06 ndevos kotreshhr: the notation for that is in libglusterfs/src/mem-pools.c
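
A minimal sketch of the constructor/destructor pair ndevos describes, assuming the mem_pools_init_early()/mem_pools_init_late() entry points named above plus a matching mem_pools_fini() teardown call (the function names, header path and placement are illustrative, not the actual patch):

    #include "mem-pool.h"   /* libglusterfs mem-pool API (assumed header) */

    /* runs automatically when libgfchangelog.so is loaded */
    __attribute__((constructor))
    void
    gf_changelog_ctor (void)
    {
            mem_pools_init_early ();
            mem_pools_init_late ();
    }

    /* runs automatically when libgfchangelog.so is unloaded */
    __attribute__((destructor))
    void
    gf_changelog_dtor (void)
    {
            mem_pools_fini ();
    }
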
10:07 ndevos kotreshhr: do you want me to send a patch for that, or do you have the information you need to do it yourself?
10:07 kotreshhr ndevos: it would be great if you could send the patch
10:08 nbalacha joined #gluster-dev
10:08 ndevos kotreshhr: ok, I'll try to have a look at that later today
10:09 kotreshhr ndevos: thank you!
10:10 ndevos kotreshhr: is there a test-case that I can run easily to check if it is fixed?
10:11 kotreshhr ndevos: one more thing regarding the same: the man page says pthread_getspecific's behaviour is undefined when the key used is not set. Could the if condition in mem_pools_init_early be problematic?
10:12 kotreshhr ndevos: Yes, setting up geo-rep is one way
10:12 kotreshhr ndevos: the simpler way is to run the example in xlators/features/changelog/lib/examples/c/get-changes.c
10:12 ndevos kotreshhr: I really meant *easy*, as in a .t file that I can run
10:12 ndevos kotreshhr: oh, ok, that sounds simpler :)
10:13 kotreshhr ndevos: you should enable changelog: 'gluster vol set <volname> changelog on'
10:13 kotreshhr ndevos: modify the brick path hard coded in get-changes.c
10:14 ndevos kotreshhr: hmm, can you leave that as a comment in the BZ? otherwise I'll definitely forget it
10:14 kotreshhr ndevos: sure
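
An illustrative fragment of the kind of hard-coded brick path kotreshhr refers to (not the actual contents of get-changes.c; all paths and values here are made up and must be replaced with a real local brick before running):

    /* hypothetical arguments -- edit to match a local brick */
    ret = gf_changelog_register ("/bricks/brick1/testvol", /* brick path  */
                                 "/tmp/scratch",           /* scratch dir */
                                 "/tmp/changes.log",       /* log file    */
                                 9,                        /* log level   */
                                 5);                       /* max retries */
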
10:17 ndevos kotreshhr: yes, and that if-statement could be troublesome indeed... I'll have to think a little more about how to do it differently
10:17 kotreshhr ndevos: ok
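
kotreshhr's concern, restated as a minimal C sketch (pool_key, pool_once and the function name are hypothetical, not the actual mem-pool symbols): pthread_getspecific() on a key that was never created via pthread_key_create() is undefined behaviour, so guarding initialization with it is unsafe; pthread_once() is one common way to make such a guard race-free:

    #include <pthread.h>

    static pthread_key_t  pool_key;
    static pthread_once_t pool_once = PTHREAD_ONCE_INIT;

    static void
    pool_key_create (void)
    {
            /* executed exactly once, process-wide */
            (void) pthread_key_create (&pool_key, NULL);
    }

    void
    mem_pools_init_early_sketch (void)
    {
            /* ensures pool_key exists before any thread may call
             * pthread_getspecific (pool_key) */
            (void) pthread_once (&pool_once, pool_key_create);
    }
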
10:22 sahina joined #gluster-dev
10:29 kshlm ppai, https://github.com/gluster/glusterd2/pull/341
10:31 ppai kshlm, on it
10:36 smit joined #gluster-dev
10:37 ashmitha joined #gluster-dev
10:49 major joined #gluster-dev
11:04 kdhananjay joined #gluster-dev
11:08 rafi joined #gluster-dev
11:09 ppai kshlm, I pulled the changes. It doesn't work. You can reproduce this by putting in some sleep after peer add and before listing in the e2e tests
11:09 kshlm ppai, I'll check
11:13 jiffin ndevos: I faced a similar issue while exporting a ganesha volume
11:13 jiffin exporting got hung
11:13 jiffin on latest upstream
11:14 itisravi joined #gluster-dev
11:18 ndevos jiffin: oh, really? libgfapi should do it correctly though...
11:32 jiffin I saw a hang twice yesterday on the master branch
11:32 jiffin I will try to reproduce the issue and let you know
11:37 apandey joined #gluster-dev
12:02 kshlm ppai, Is it e2e that didn't work, or your test?
12:02 ppai kshlm, when I put in sleep, the test hangs
12:02 kshlm For me e2e hangs on the remove peer test, because remove peer doesn't work well in 2 node clusters right now.
12:03 ppai kshlm, IOW, in manual testing, if I do a peer list after waiting a few seconds after peer add, the curl client hangs.
12:03 kshlm Hmm, it works for me.
12:04 ppai kshlm, does your dev VM by any chance have multiple network interfaces?
12:05 kshlm I'm testing in docker.
12:05 kshlm So 1 interface.
12:05 kshlm I'm testing directly on my laptop, and server 2 just hangs.
12:06 kshlm My laptop has a lot of interfaces.
12:08 smit joined #gluster-dev
12:11 jiffin ndevos: it is still reproducible; wondering how it passed the regression tests
12:12 jiffin ndevos: https://paste.fedoraproject.org/paste/Ti69nl8VTopuLzoElAXNFQ
12:15 jiffin1 joined #gluster-dev
12:15 ppai kshlm, I tried again, it still hangs for me. etcd logs: https://paste.fedoraproject.org/paste/G2U4D0FiJ0VtICfj5MhAjQ
12:18 kshlm ppai, This is being caused by one of the gd2 instances using the default etcd endpoints. Changing to non-default ports works.
12:19 kshlm I had previously tested in docker by having the gd2s bind on the docker network instead of localhost.
12:19 kshlm So it worked for me then.
12:20 kshlm There seems to be some problem with the code handling default urls for etcd.
12:22 ppai kshlm, The client and peer URLs in the configs passed to both instances are clean.
12:23 kshlm Clean as in? Try setting the curl and purl to 127.0.0.1:2400* and 127.0.0.1:2300* in the respective configs.
12:24 ndevos jiffin: hmm, not sure how that can happen... maybe something with the multi-threaded-ness of mem-pools, that could use more testing
12:24 nbalacha joined #gluster-dev
12:24 jiffin1 joined #gluster-dev
12:25 ppai kshlm, I did. On 1.yaml: http://127.0.0.1:2479, http://127.0.0.1:2480 On 2.yaml: http://127.0.0.1:2379, http://127.0.0.1:2380
12:26 ppai kshlm, sorry, I meant on second node: http://127.0.0.1:2579, http://127.0.0.1:2580
12:26 ndevos (repeat) jiffin1: hmm, not sure how that can happen... maybe something with the multi-threaded-ness of mem-pools, that could use more testing
12:27 jiffin ndevos: I asked anoopcs to check with Samba; the issue is not seen there
12:28 Acinonyx joined #gluster-dev
12:28 jiffin ndevos: so it may be the multi-threading. But I was wondering, since ganesha has only one dbus thread
12:28 kshlm ppai, I tested again, with my config changes reverted. This time it worked.
12:28 jiffin so during export only one thread will be accessing the gluster mempool
12:28 kshlm This is crazy.
12:28 ndevos jiffin: yes, the dbus thread never did any mem_get() before, and maybe that thread does not have its mem-pools initialized completely
12:29 jiffin ndevos: ohh
12:29 aravindavk joined #gluster-dev
12:29 kshlm ppai, Could you do `pkill -KILL glusterd2` and clean up /tmp/gd2_func_test and try again.
12:29 jiffin in an application, the first call into the mempool will be from glfs_init, right?
12:30 jiffin so why is ganesha affected?
12:30 kshlm ppai, I'm guessing you have old instances of GD2 still running.
12:30 ndevos jiffin: glfs_new() should do that, but the dbus thread may get started later, and not have the per-thread-mempool yet
12:30 ndevos jiffin: but, I'm on a call and trying to follow that....
12:31 jiffin I will be here for another hour
12:32 jiffin ndevos: sorry I don't have much knowledge in mempools
12:32 jiffin ndevos: this is flow in ganesha code
12:34 jiffin the dbus thread reaches the FSAL layer in glusterfs_create_export()
12:34 kshlm ppai, I'll be back in ~30 minutes.
12:35 jiffin which calls glusterfs_get_fs(), which then calls the following functions in this order:
12:35 jiffin glfs_new, glfs_set_volfile_server, glfs_set_logging, glfs_init
12:36 jiffin so glfs_new will be called only after the dbus thread gets started
12:37 ndevos jiffin: ok, that's fine, glfs_new() should call mem_pools_init_*() once per process (affects all threads), and that should initialize the mem-pools for all threads
12:39 jiffin ndevos: do you think there is any chance the lock is not released after initialization of the mempool?
12:40 ndevos jiffin: either not released, or maybe not initialized - but I'm on a call and don't have the source file open
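
The call sequence jiffin lists above, reduced to a standalone libgfapi sketch (volume name, server and log path are invented; in ganesha the equivalent calls happen inside FSAL_GLUSTER's glusterfs_create_export()/glusterfs_get_fs() on the dbus thread):

    #include <glusterfs/api/glfs.h>

    int
    main (void)
    {
            /* per-process mem-pool setup is expected to happen in here */
            glfs_t *fs = glfs_new ("testvol");
            if (!fs)
                    return 1;

            glfs_set_volfile_server (fs, "tcp", "localhost", 24007);
            glfs_set_logging (fs, "/tmp/gfapi.log", 7);

            /* the step that hangs in the reported bug when issued from a
             * thread whose per-thread mem-pool was never initialized */
            if (glfs_init (fs) != 0)
                    return 1;

            glfs_fini (fs);
            return 0;
    }
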
12:43 ppai kshlm, I just did that (again). I put in the 5-second sleep in the tests; it hangs. FWIW, I'm on Fedora 26 and my VM has multiple n/w interfaces. elasticetcd's init() used to enumerate all interfaces, but your PR fixed that as it'll be overwritten. But the original problem still exists.
12:46 aravindavk joined #gluster-dev
12:49 kotreshhr left #gluster-dev
13:10 aravindavk joined #gluster-dev
13:16 aravindavk joined #gluster-dev
13:19 amarts joined #gluster-dev
13:29 aravindavk joined #gluster-dev
13:35 Humble nigelb, ping
13:35 Humble hi
13:44 aravindavk joined #gluster-dev
13:49 aravindavk joined #gluster-dev
13:55 aravindavk joined #gluster-dev
13:58 msvbhat joined #gluster-dev
14:03 kkeithley @pgp
14:03 kkeithley @php
14:03 jdarcy joined #gluster-dev
14:03 jdarcy Is the maintainers' meeting supposed to be happening now?  I seem to be all alone in the call.
14:06 misc I got an email saying the date was changed
14:06 aravindavk joined #gluster-dev
14:06 misc jdarcy: it got moved to tomorrow
14:07 jdarcy Ah.  Thanks!
14:19 sona joined #gluster-dev
14:33 kshlm joined #gluster-dev
14:42 aravindavk joined #gluster-dev
14:55 aravindavk joined #gluster-dev
15:00 wushudoin joined #gluster-dev
15:21 jstrunk joined #gluster-dev
15:22 aravindavk joined #gluster-dev
15:28 jstrunk joined #gluster-dev
15:40 nbalacha joined #gluster-dev
15:42 ashiq joined #gluster-dev
16:11 aravindavk joined #gluster-dev
16:16 rastar joined #gluster-dev
16:25 aravindavk joined #gluster-dev
16:40 susant joined #gluster-dev
16:40 ankitr joined #gluster-dev
16:41 sona joined #gluster-dev
16:42 msvbhat joined #gluster-dev
17:00 atalur joined #gluster-dev
17:02 BlackoutWNCT joined #gluster-dev
17:15 Shu6h3ndu joined #gluster-dev
17:39 msvbhat joined #gluster-dev
17:55 msvbhat joined #gluster-dev
17:59 sona joined #gluster-dev
18:25 amarts joined #gluster-dev
18:36 atalur joined #gluster-dev
18:47 smit joined #gluster-dev
22:19 smit joined #gluster-dev
