
IRC log for #gluster-dev, 2015-07-14


All times shown according to UTC.

Time Nick Message
00:01 craigcabrey joined #gluster-dev
00:08 craigcabrey Hey everyone, I'm using the gfapi to build a set of coreutils for Gluster and I'm running into two separate issues related to opening and closing connections to a cluster. The first is that glfs_fini appears to leak memory and does not fully free the object. Is this intended behavior? To reproduce, I wrote a small test program that illustrates the issue: https://paste.fedoraproject.org/244019/14368322/ and the valgrind output:
00:08 craigcabrey https://paste.fedoraproject.org/244020/32308143/. Second, to get around this issue, I tried calling glfs_unset_volfile_server with the fs object, the transport (tcp), the host, and port, but that leads to a segfault. Is there anybody who can lend a hand or should I file a bug (or both)?
00:09 craigcabrey this is on GFS 3.6.3
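
The paste contents referenced above are not reproduced here. A minimal sketch of a reproducer in the spirit of the one described, assuming a reachable volume named "testvol" served from "localhost" on the default port (all placeholders), built with gcc -o repro repro.c -lgfapi and run under valgrind --leak-check=full, could look like this:

    /* Hypothetical reproducer sketch -- not the original paste.  "testvol"
     * and "localhost" are placeholders for a real volume and server. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        int i;

        for (i = 0; i < 4; i++) {
            glfs_t *fs = glfs_new("testvol");
            if (!fs) {
                fprintf(stderr, "glfs_new failed\n");
                return EXIT_FAILURE;
            }
            if (glfs_set_volfile_server(fs, "tcp", "localhost", 24007) != 0 ||
                glfs_init(fs) != 0) {
                fprintf(stderr, "could not connect to volume\n");
                glfs_fini(fs);
                return EXIT_FAILURE;
            }
            /* ... normal gfapi calls would go here ... */
            glfs_fini(fs);   /* reported issue: memory is not fully released */
        }
        return EXIT_SUCCESS;
    }
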
00:33 topshare joined #gluster-dev
01:55 craigcabrey joined #gluster-dev
02:02 overclk joined #gluster-dev
02:10 topshare_ joined #gluster-dev
02:35 mribeirodantas joined #gluster-dev
02:55 craigcabrey joined #gluster-dev
03:04 vmallika joined #gluster-dev
03:15 kshlm joined #gluster-dev
03:38 krishnan_p joined #gluster-dev
03:40 atalur joined #gluster-dev
03:40 anoopcs craigcabrey, I think glfs_fini had some freeing issues earlier, but those were reduced to some extent a while back. Not sure whether those fixes were included in 3.6.3 or not.
03:46 atinm joined #gluster-dev
03:51 anoopcs craigcabrey, I couldn't see those cleanup patches for glfs_fini() in 3.6.3 though. Maybe you can try with a later version
03:53 craigcabrey anoopcs: thanks for taking a look, I'll try linking against 3.7.x and see if the problem persists
03:53 craigcabrey left #gluster-dev
03:54 craigcabrey joined #gluster-dev
03:55 anoopcs craigcabrey, np.
03:58 anoopcs craigcabrey, Yes.. 3.7.x includes those changes..
03:58 * anoopcs is expecting minimal memory leaks with 3.7.x :)
04:00 shubhendu joined #gluster-dev
04:04 saurabh joined #gluster-dev
04:12 itisravi joined #gluster-dev
04:19 kanagaraj joined #gluster-dev
04:30 ppai joined #gluster-dev
04:33 gem joined #gluster-dev
04:34 rafi joined #gluster-dev
04:42 nbalacha joined #gluster-dev
04:50 rjoseph joined #gluster-dev
04:53 nbalacha joined #gluster-dev
04:56 pranithk joined #gluster-dev
05:05 pppp joined #gluster-dev
05:06 sakshi joined #gluster-dev
05:08 ashish joined #gluster-dev
05:09 kshlm joined #gluster-dev
05:12 vmallika joined #gluster-dev
05:15 spandit joined #gluster-dev
05:16 soumya joined #gluster-dev
05:24 gem_ joined #gluster-dev
05:25 hgowtham joined #gluster-dev
05:30 Manikandan joined #gluster-dev
05:30 Saravana_ joined #gluster-dev
05:30 kdhananjay joined #gluster-dev
05:35 anekkunt joined #gluster-dev
05:43 ashiq joined #gluster-dev
05:44 Bhaskarakiran joined #gluster-dev
05:45 craigcabrey joined #gluster-dev
05:45 gbt joined #gluster-dev
05:46 jiffin joined #gluster-dev
05:46 deepakcs joined #gluster-dev
05:46 gbt left #gluster-dev
05:53 craigcabrey joined #gluster-dev
06:01 ggarg joined #gluster-dev
06:07 raghu joined #gluster-dev
06:11 aravindavk joined #gluster-dev
06:13 vimal joined #gluster-dev
06:19 hgowtham Manikandan++
06:19 glusterbot hgowtham: Manikandan's karma is now 16
06:24 anmol joined #gluster-dev
06:25 atalur joined #gluster-dev
06:29 hchiramm joined #gluster-dev
06:34 pranithk joined #gluster-dev
07:05 hagarth joined #gluster-dev
07:51 pranithk xavih: hey! buenos dias
07:59 xavih pranithk: hi :)
07:59 pranithk xavih: busy?
08:00 xavih pranithk: a bit. I've some work to do
08:00 pranithk xavih: I am working on yet another race in ec, where gf_timer_proc is crashing trying to access 'link' for the delayed unlock.
08:00 xavih pranithk: you told me that there's something we need to talk. What is it ?
08:00 pranithk xavih: I don't see any problem with the code :-(
08:00 pranithk xavih: wonder if you see anything...
08:01 pranithk xavih: maybe there is one more sleep/resume problem we still have to find out...
08:02 pranithk xavih: Do you think you will have some time to help me with this? I already spent around 20 hours on it and see nothing wrong with the code...
08:02 pranithk xavih: If you don't have time, it is fine :-)
08:03 pranithk xavih: problem is it is not reproducible that easily...
08:03 xavih pranithk: where exactly do you see link == NULL?
08:03 xavih pranithk: and doing what ?
08:04 pranithk xavih: it seems like the fop that this 'link' is part of is in use by some other fop, like fxattrop/inodelk, etc.
08:04 xavih pranithk: oh, it's not set to NULL, then...
08:05 pranithk xavih: Because fxattrop/inodelk will have lock_count as 0, all structures inside link are NULL/0
08:05 xavih pranithk: you are talking about the data argument of the timer callback, right ?
08:05 pranithk xavih: no no, link is fine, but it points to already memput memory
08:05 pranithk xavih: yes yes
08:05 xavih pranithk: let me think a bit...
08:05 pranithk xavih: Do let me know if something flashes...
08:07 itisravi joined #gluster-dev
08:09 xavih pranithk: I see one possibility...
08:09 pranithk xavih: cool
08:09 pranithk xavih: how?
08:10 pranithk xavih: event is valid memory. event is not freed.
08:11 xavih pranithk: suppose we have an active timer that will call ec_unlock_timer_cbk()
08:11 pranithk xavih: okay.
08:12 xavih pranithk: event is not freed ? or it's not been overwritten/reused ?
08:12 pranithk xavih: 'event' structure in the bt is valid memory, i.e. it has 'cafebabe' at the point where magic should be...
08:13 xavih pranithk: is that magic value cleared on free ?
08:13 pranithk xavih: ah! let me check
08:13 xavih pranithk: I think not
08:14 pranithk xavih: no it isn't :-(
08:14 xavih pranithk: good, because it should be freed, otherwise my idea is not possible...
08:14 pranithk xavih: okay so, by the time the timer expired, someone had already done call_cancel and resumed the fop?
08:15 xavih pranithk: yes. The problem is that cancelling the timer doesn't guarantee that the callback is not called
08:15 pranithk xavih: So both timer and ec_lock were racing?
08:15 pranithk xavih: I wrote it off, thinking event is valid memory
08:15 xavih pranithk: that is what I think...
08:15 xavih pranithk: gluster's timer implementation is inherently racy... I already commented on that on the ml or here (I don't remember), but got no feedback...
08:15 pranithk xavih: I think that must be it then. I don't see any other possibility
08:16 pranithk xavih: I know :-), I remember.
08:16 pranithk xavih: Wondering how to fix this though... I don't see any easy way
08:17 xavih pranithk: cancelling a timer should avoid execution of the callback or wait until the callback has completed
08:17 xavih pranithk: some caution needs to be taken to avoid deadlock, though
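
A rough, self-contained sketch of the cancellation semantics xavih describes (this is illustrative only, not gluster's actual gf_timer code): cancel either prevents the callback from ever starting or blocks until an in-flight callback has finished, so the caller can safely free the callback data afterwards. Per the deadlock caution above, the caller must not hold any lock the callback itself takes while waiting.

    /* Illustrative only -- not gluster's gf_timer implementation. */
    #include <pthread.h>
    #include <stdbool.h>

    struct safe_timer {
        pthread_mutex_t lock;
        pthread_cond_t  done;
        bool            fired;      /* callback has started running    */
        bool            completed;  /* callback has finished           */
        bool            cancelled;  /* cancel happened before the fire */
        void          (*cbk)(void *data);
        void           *data;
    };

    /* Called from the timer thread when the timeout expires. */
    static void safe_timer_fire(struct safe_timer *t)
    {
        pthread_mutex_lock(&t->lock);
        if (t->cancelled) {                 /* cancel won the race */
            pthread_mutex_unlock(&t->lock);
            return;
        }
        t->fired = true;
        pthread_mutex_unlock(&t->lock);

        t->cbk(t->data);                    /* run callback outside the lock */

        pthread_mutex_lock(&t->lock);
        t->completed = true;
        pthread_cond_broadcast(&t->done);
        pthread_mutex_unlock(&t->lock);
    }

    /* Returns true if the callback was prevented from running; otherwise
     * waits until the in-flight callback has completed, so the caller may
     * free t->data safely in either case. */
    static bool safe_timer_cancel(struct safe_timer *t)
    {
        pthread_mutex_lock(&t->lock);
        if (!t->fired) {
            t->cancelled = true;            /* callback will never run */
            pthread_mutex_unlock(&t->lock);
            return true;
        }
        while (!t->completed)               /* callback in flight: wait */
            pthread_cond_wait(&t->done, &t->lock);
        pthread_mutex_unlock(&t->lock);
        return false;
    }
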
08:18 * ndevos points out that pranithk is the maintainer of libglusterfs/ and is therefore the right person to get gf_timer_* fixed ;-)
08:19 pranithk xavih: I will check what needs to be done.
08:19 pranithk ndevos: you come to India, you are in trouble boy!
08:19 xavih pranithk: I already posted a new timer implementation proposal some time ago that I think removes all these racy problems: http://review.gluster.org/9316/
08:19 ndevos pranithk: hah! won't you be on holidays?
08:19 xavih pranithk: it's quite outdated now, though...
08:20 pranithk xavih: kdhananjay says http://review.gluster.org/#/c/6459/ is possible fix
08:21 kdhananjay pranithk: I was trying to see if the race this patch fixes could be the reason for the crash you guys are talking about.
08:23 pranithk ndevos: We will see...
08:25 ndevos pranithk: I'm sure we'll meet, and you can blame me in person then :D
08:25 pranithk ndevos: I have some ideas for gf_ref_t I wanted to discuss with you once...
08:26 ndevos pranithk: sure, but that does not have to wait until I'm in the office
08:26 pranithk ndevos: well, I have to wait :-P
08:27 pranithk ndevos: I am on crazy schedule
08:27 ndevos pranithk: enjoy your ec bugs - if you pronounce that, it sounds like 'easy bugs' :D
08:38 pranithk ndevos: amazing man!! :-)
08:50 josferna joined #gluster-dev
09:08 topshare joined #gluster-dev
09:09 krishnan_p anekkunt, could you review http://review.gluster.com/#/c/11523/3 (again).
09:09 krishnan_p anekkunt, it has passed regressions and the changes look OK to me. You had a comment previously on this patch, so I am waiting for you to check if this patchset is fine by you.
09:10 krishnan_p ndevos, by the same argument, would you be maintaining epoll? ;-)
09:10 * ndevos *cough*
09:12 krishnan_p ndevos, on a more serious note, I would like to hear what you would like to see documented for glusterfs event subsystem.
09:13 krishnan_p ndevos, I know how it works, but don't remember how I went about learning it (pedagogy).
09:13 krishnan_p ndevos, if and when you come to this side of Earth, we could talk about it.
09:14 ndevos krishnan_p: is there documentation on how epoll, mt-epoll and et-epoll are implemented/designed?
09:14 krishnan_p ndevos, you guessed right, there is none other than the source itself.
09:15 krishnan_p ndevos, I have been meaning to write one. It seems easier to talk about it than write ;(
09:15 ndevos krishnan_p: at least the concepts should be documented, the implementation details are less important to me
09:17 krishnan_p ndevos, OK. Some implementation details are important to understand the properties they come with. I will give it another try, hiding details where possible.
09:18 ndevos krishnan_p: sure, details are good to have, and some might be required to get a good understanding, but I prefer to start simple and extend later
09:19 krishnan_p ndevos, sure.
09:19 nishanth joined #gluster-dev
09:27 gem sakshi++ :)
09:27 glusterbot gem: sakshi's karma is now 2
09:28 gem_ joined #gluster-dev
09:33 aravindavk joined #gluster-dev
09:37 shubhendu joined #gluster-dev
09:39 nishanth joined #gluster-dev
09:44 ashiq ndevos++ thanks :)
09:44 glusterbot ashiq: ndevos's karma is now 175
09:45 anmol joined #gluster-dev
09:48 Manikandan joined #gluster-dev
09:59 ira joined #gluster-dev
10:10 kkeithley1 joined #gluster-dev
10:11 aravindavk joined #gluster-dev
10:26 overclk joined #gluster-dev
10:32 shubhendu joined #gluster-dev
10:33 anmol joined #gluster-dev
10:33 soumya_ joined #gluster-dev
10:47 krishnan_p joined #gluster-dev
10:48 atinm joined #gluster-dev
10:55 vmallika joined #gluster-dev
10:59 Saravana_ joined #gluster-dev
11:01 Saravana_ Hi, I'm facing an error while trying to "start" the volume.
11:01 Saravana_ ===============================================
11:01 Saravana_ mem-pool.c:417:mem_get0] (-->/usr/local/lib/libglusterfs.so.0(+0x2b1f2) [0x7fb2829591f2] -->/usr/local/lib/libglusterfs.so.0(log_buf_new+0x33) [0x7fb28295523a] -->/usr/local/lib/libglusterfs.so.0(mem_get0+0x5e) [0x7fb28299857f] ) 0-mem-pool: invalid argument [Invalid argument]
11:01 glusterbot Saravana_: ('s karma is now -8
11:02 Saravana_ any idea about this?
11:04 ndevos Saravana_: well, the mem-pool that was passed to mem_get0() was NULL
11:04 ndevos Saravana_: mem_get0 (THIS->ctx->logbuf_pool) from libglusterfs/src/logging.c
11:04 * ndevos doesn't really know how that can happen... maybe something tries to log too early?
11:05 ndevos but it's interesting that you get a log anyway :D
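
A small, self-contained illustration of what ndevos describes, using simplified stand-in types rather than the real libglusterfs structures: the logging path asks ctx->logbuf_pool for a buffer via mem_get0(), and when that pool pointer is still NULL (i.e. something logged before the pool was created) the guard in mem_get0() emits an "invalid argument" message of this kind and returns NULL.

    /* Stand-in illustration -- not the real libglusterfs code. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct mem_pool {
        size_t obj_size;
    };

    /* Mirrors the guard that produces "0-mem-pool: invalid argument". */
    static void *mem_get0_sketch(struct mem_pool *pool)
    {
        if (!pool) {
            fprintf(stderr, "0-mem-pool: invalid argument [%s]\n",
                    strerror(EINVAL));
            return NULL;
        }
        return calloc(1, pool->obj_size);
    }

    struct ctx {
        struct mem_pool *logbuf_pool;   /* created during ctx init */
    };

    int main(void)
    {
        struct ctx ctx = { .logbuf_pool = NULL };   /* logging "too early" */

        /* log_buf_new() would do roughly this with ctx->logbuf_pool: */
        void *buf = mem_get0_sketch(ctx.logbuf_pool);
        if (!buf)
            fprintf(stderr, "log buffer allocation failed\n");

        return 0;
    }
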
11:05 Saravana_ hmm...all I do is create a volume..then start the volume.
11:06 Saravana_ just upgraded this particular VM to Fedora 22. Not sure whether that causes this.
11:06 Saravana_ let me try on another one...BTW this is on the MASTER branch.
11:07 ndevos not sure, but I would suggest building RPMs and using those for testing, that's generally more stable
11:07 ndevos like; make -C extras/LinuxRPM glusterrpms
11:09 atinm Saravana_, do you see these logs in CLI?
11:10 Saravana_ yes...cli.log
11:10 ndevos atinm, rafi: bug triage announcement?
11:10 atinm Saravana_, I remember a customer case where I saw it because of an incorrect upgrade
11:10 atinm Saravana_, the cli packages got upgraded but the rest were not
11:11 anoopcs atinm, I could see those errors in cli logs.
11:12 Saravana_ good that one more person sees it :)
11:12 ndevos Saravana_: do you have a mix of glusterfs packages installed, *and* a source installation on that system?
11:12 Saravana_ nope...
11:12 Saravana_ only source
11:12 ndevos Saravana_: so you do not have qemu installed either?
11:12 atinm Saravana_, your cli package is screwed
11:13 atinm Saravana_, that's what I can say
11:13 Saravana_ qemu? actually I am running in a local VM (qemu based).
11:14 Saravana_ no qemu rpms installed on my VM. ( But my VM is qemu based :)
11:14 ndevos Saravana_: yeah, if it is a VM that's fine, but the qemu package(s) pull in some glusterfs dependencies; those will conflict with a source install
11:14 atinm ndevos, let me announce it, but not sure whether I can host it or not :(
11:14 ndevos atinm: I'll be in a meeting that overlaps a little :-/
11:15 atinm rafi, you can save us :D
11:15 anoopcs Saravana_, For me, this error log is seen every time a cli command is executed. What about you?
11:15 * ndevos just noticed, there was no meeting invite....
11:15 vmallika joined #gluster-dev
11:16 Saravana_ yes..I can see it while running "gluster volume start"
11:17 anoopcs Saravana_, I tried a volume set and observed the same entry again in the cli logs. Have you noticed this error before, on Fedora versions <22?
11:17 Saravana_ [root@gfvm1 glusterfs]# gluster volume start tv1
11:17 Saravana_ volume start: tv1: failed: Commit failed on localhost. Please check the log file for more details.
11:17 kdhananjay joined #gluster-dev
11:19 Saravana_ @anoopcs, I have git pulled as well as "updated to Fedora 22"...please wait. I have another VM with fc21; I will try there and update.
11:19 Saravana_ Whether the issue is with master or with Fedora 22 can then be confirmed
11:20 anoopcs Saravana_, Ok. Just curious to know.. Even though cli logs this error, gluster commands don't fail for me with master. I could create, start, and set volume options without any failures on Fedora 22 since my upgrade.
11:21 rafi ndevos: atinm : I sent the reminder
11:21 atinm rafi, awesome
11:21 atinm rafi++
11:22 glusterbot atinm: rafi's karma is now 18
11:22 Saravana_ @anoopcs, For me it is failing to "start" the volume. :(
11:24 anoopcs Saravana_, Interesting. I barely looked into cli logs either :).
11:27 rafi1 joined #gluster-dev
11:30 ndevos rafi++ thanks!
11:30 glusterbot ndevos: rafi's karma is now 19
11:32 jrm16020 joined #gluster-dev
11:35 kdhananjay joined #gluster-dev
11:39 soumya_ joined #gluster-dev
11:42 overclk joined #gluster-dev
11:42 atalur joined #gluster-dev
11:48 rafi joined #gluster-dev
11:49 shubhendu joined #gluster-dev
11:50 rafi joined #gluster-dev
11:52 krishnan_p joined #gluster-dev
11:54 atinm joined #gluster-dev
11:59 rafi REMINDER: Gluster Community Bug Triage meeting starting in another 1 minutes in #gluster-meeting
11:59 soumya_ joined #gluster-dev
12:01 overclk joined #gluster-dev
12:03 topshare joined #gluster-dev
12:04 topshare joined #gluster-dev
12:04 vmallika joined #gluster-dev
12:05 topshare joined #gluster-dev
12:06 topshare joined #gluster-dev
12:17 ashiq hchiramm++ thanks :)
12:17 glusterbot ashiq: hchiramm's karma is now 53
12:28 _iwc joined #gluster-dev
12:29 topshare joined #gluster-dev
12:45 rafi kkeithley_++ ndevos++ itisravi++ Romeor++
12:45 glusterbot rafi: kkeithley_'s karma is now 5
12:45 glusterbot rafi: ndevos's karma is now 176
12:45 glusterbot rafi: itisravi's karma is now 8
12:46 glusterbot rafi: Romeor's karma is now 1
12:47 ndevos rafi++
12:47 glusterbot ndevos: rafi's karma is now 20
12:47 kkeithley_ I wonder if I can sell my kkeithley_ karma to kkeithley?
12:49 ndevos @karma kkeithley_
12:49 glusterbot ndevos: Karma for "kkeithley_" has been increased 5 times and decreased 0 times for a total karma of 5.
12:49 ndevos @karma kkeithley
12:49 glusterbot ndevos: Karma for "kkeithley" has been increased 84 times and decreased 1 time for a total karma of 83.
12:49 Saravana_ joined #gluster-dev
12:49 ndevos maybe like this? kkeithley_-- kkeithley++ kkeithley_-- kkeithley++ kkeithley_-- kkeithley++
12:49 glusterbot ndevos: kkeithley's karma is now 84
12:49 glusterbot ndevos: kkeithley's karma is now 85
12:50 glusterbot ndevos: kkeithley's karma is now 86
12:50 glusterbot ndevos: kkeithley_'s karma is now 4
12:50 glusterbot ndevos: kkeithley_'s karma is now 3
12:50 glusterbot ndevos: kkeithley_'s karma is now 2
12:51 kkeithley_ woohoo
12:56 hagarth joined #gluster-dev
13:00 shaunm joined #gluster-dev
13:00 Manikandan ndevos++
13:00 glusterbot Manikandan: ndevos's karma is now 177
13:05 overclk joined #gluster-dev
13:06 soumya_ joined #gluster-dev
13:19 anoopcs Saravana_, Were you able to start the volume in spite of these cli errors on Fedora 21?
13:19 Saravana_ anoopcs, yes, I am able to start volumes in fc 21
13:19 Saravana_ BUT errors are still there as you mentioned
13:20 josferna joined #gluster-dev
13:20 Saravana_ anoopcs, it works, but it still LOGS errors.
13:20 anoopcs Saravana_, It's interesting to see that errors are not logged when commands are executed from gluster> prompt.
13:21 Saravana_ anoopcs, hmm....so you enter the gluster prompt and then issue the command?
13:22 anoopcs Saravana_, I tried both. Through the prompt, I don't see errors. Direct execution of gluster commands leads to error logging.
13:22 Saravana_ anoopcs, you are right...it does not show errors when the command is executed through the prompt.
13:22 anoopcs Saravana_, I was trying to attach gdb to the gluster process and investigate, but then the error was not present
13:24 pppp joined #gluster-dev
13:24 Saravana_ anoopcs, ok...actually with gdb it is logging the error.
13:25 anoopcs kshlm, Do you happen to see the following error in cli logs? http://ur1.ca/n4due
13:25 anoopcs Saravana_, Really?
13:25 Saravana_ yes...just do tailf cli.log
13:26 Saravana_ gdb `which glusterfs` and then run volume status.
13:26 overclk joined #gluster-dev
13:26 Saravana_ I mean r volume status....I can see cli.log updated with those errors
13:32 hagarth kshlm: on a vanilla RHEL/CentOS 6.6 system, how do I get pkg-config to pick up liburcu-bp after installing userspace-rcu package?
13:35 kshlm hagarth, It should be picked up automatically if installed from a package.
13:35 kshlm hagarth, Have you also installed the -devel package?
13:37 hagarth kshlm: installing devel now
13:37 hagarth kshlm: thanks, installing the -devel package fixed it
13:38 RedW joined #gluster-dev
14:02 Manikandan joined #gluster-dev
14:09 shubhendu joined #gluster-dev
14:17 ashiq joined #gluster-dev
14:21 aravindavk joined #gluster-dev
14:26 wushudoin joined #gluster-dev
14:29 ashiq joined #gluster-dev
14:40 overclk joined #gluster-dev
14:43 kbyrne joined #gluster-dev
14:43 nbalacha joined #gluster-dev
14:52 pousley joined #gluster-dev
15:00 shyam joined #gluster-dev
15:07 ashiq joined #gluster-dev
15:13 josferna joined #gluster-dev
15:17 jobewan joined #gluster-dev
15:27 topshare joined #gluster-dev
15:28 topshare joined #gluster-dev
15:33 kshlm joined #gluster-dev
15:33 kshlm joined #gluster-dev
15:41 josferna joined #gluster-dev
15:51 soumya joined #gluster-dev
15:53 nbalacha joined #gluster-dev
15:55 rafi joined #gluster-dev
16:14 kaushal_ joined #gluster-dev
16:23 craigcabrey joined #gluster-dev
16:28 shyam joined #gluster-dev
16:41 shyam joined #gluster-dev
16:54 nishanth joined #gluster-dev
17:17 shyam joined #gluster-dev
17:30 ggarg joined #gluster-dev
18:16 shyam joined #gluster-dev
18:22 rafi1 joined #gluster-dev
18:26 msvbhat_ joined #gluster-dev
18:28 dlambrig_ joined #gluster-dev
18:29 Adifex joined #gluster-dev
18:30 wushudoin| joined #gluster-dev
18:36 wushudoin| joined #gluster-dev
18:50 jbautista- joined #gluster-dev
18:56 jbautista- joined #gluster-dev
19:01 RedW joined #gluster-dev
19:04 RedW joined #gluster-dev
19:44 badone joined #gluster-dev
19:48 [o__o] joined #gluster-dev
19:49 pousley joined #gluster-dev
20:04 dlambrig_ left #gluster-dev
20:49 wushudoin| joined #gluster-dev
20:54 wushudoin| joined #gluster-dev
22:21 jbautista- joined #gluster-dev
22:23 dlambrig_ joined #gluster-dev
22:39 jbautista- joined #gluster-dev
23:09 shyam joined #gluster-dev
23:12 kshlm joined #gluster-dev
23:14 pranithk joined #gluster-dev
23:45 ira joined #gluster-dev
