
IRC log for #gluster-dev, 2015-07-02


All times shown according to UTC.

Time Nick Message
00:30 vmallika joined #gluster-dev
00:45 an joined #gluster-dev
00:51 mribeirodantas joined #gluster-dev
01:13 RedW joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:51 kdhananjay joined #gluster-dev
02:24 an joined #gluster-dev
02:27 dlambrig_ joined #gluster-dev
02:37 aravindavk joined #gluster-dev
02:37 akay1 joined #gluster-dev
02:38 hagarth joined #gluster-dev
02:40 rarylson joined #gluster-dev
02:43 rarylson joined #gluster-dev
02:44 rarylson left #gluster-dev
02:44 rarylson joined #gluster-dev
03:26 vmallika joined #gluster-dev
03:29 pranithk joined #gluster-dev
03:32 atinm joined #gluster-dev
03:42 overclk joined #gluster-dev
03:48 pranithk joined #gluster-dev
03:54 krishnan_p joined #gluster-dev
03:56 shberry joined #gluster-dev
03:59 soumya joined #gluster-dev
04:01 anmol joined #gluster-dev
04:02 spandit joined #gluster-dev
04:20 shubhendu joined #gluster-dev
04:25 gem joined #gluster-dev
04:37 kanagaraj joined #gluster-dev
04:37 aravindavk joined #gluster-dev
04:39 sakshi joined #gluster-dev
04:45 jiffin joined #gluster-dev
05:05 vimal joined #gluster-dev
05:08 ndarshan joined #gluster-dev
05:11 deepakcs joined #gluster-dev
05:13 rafi1 joined #gluster-dev
05:19 ppai joined #gluster-dev
05:22 hgowtham joined #gluster-dev
05:24 Guest40393 joined #gluster-dev
05:35 ashiq joined #gluster-dev
05:35 Manikandan joined #gluster-dev
05:37 pppp joined #gluster-dev
05:47 atalur joined #gluster-dev
05:52 overclk joined #gluster-dev
05:53 deepakcs joined #gluster-dev
05:53 an_ joined #gluster-dev
05:55 vmallika joined #gluster-dev
06:02 kdhananjay joined #gluster-dev
06:04 an joined #gluster-dev
06:19 kotreshhr joined #gluster-dev
06:22 G_Garg joined #gluster-dev
06:27 schandra joined #gluster-dev
06:31 aravindavk joined #gluster-dev
06:37 soumya joined #gluster-dev
06:39 aravindavk joined #gluster-dev
06:42 pranithk ndevos: could you review http://review.gluster.org/#/c/11495/
06:44 kdhananjay joined #gluster-dev
06:53 nbalacha joined #gluster-dev
07:01 pranithk xavih: could you review http://review.gluster.org/11473
07:05 an joined #gluster-dev
07:05 anmol joined #gluster-dev
07:05 rgustafs joined #gluster-dev
07:07 hagarth o/
07:07 nkhare joined #gluster-dev
07:07 hagarth is gerrit still being problematic?
07:17 pranithk hagarth: works fine for me
07:21 pranithk ndevos: there?
07:22 hagarth pranithk: thanks, one less problem to be worried about before I sleep :)
07:24 pranithk hagarth: :-)
07:31 pranithk hagarth: are you still there?
07:31 Saravana_ joined #gluster-dev
07:32 pranithk hagarth: Could you merge http://review.gluster.org/#/c/11495 if you feel it is fine...
07:33 an joined #gluster-dev
07:33 pranithk ndevos: ^^
07:36 hagarth pranithk: done, had reviewed this patch earlier in the day
07:38 pranithk hagarth: thanks!
07:45 an joined #gluster-dev
08:01 gem joined #gluster-dev
08:21 anekkunt joined #gluster-dev
08:39 ndarshan joined #gluster-dev
08:45 surabhi_ joined #gluster-dev
08:57 gem joined #gluster-dev
09:03 rjoseph joined #gluster-dev
09:07 spalai joined #gluster-dev
09:09 Manikandan joined #gluster-dev
09:13 raghu joined #gluster-dev
09:15 pranithk joined #gluster-dev
09:18 ndevos pranithk: hmm, you dropped the numbering of the mem-types in http://review.gluster.org/11495 - does that not make it more difficult to figure out what memory allocations are done when looking into a coredump?
09:20 ndevos pranithk: or, do we have a way to extract the numbers and types from a core? that would be nice to document :)
09:21 surabhi_ joined #gluster-dev
09:23 pranithk ndevos: I was the one who added the numbers in ~2011 so that we can look at statedumps easily. But now that we are printing strings instead I thought we can remove it :-/
09:25 ndevos pranithk: if statedump can figure out the names and numbers, we should be able to do that from gdb too... I just have never looked into that before
09:25 pranithk ndevos: gah!
09:25 pranithk ndevos: then we have to add them back :-(
09:25 pranithk ndevos: but wait
09:26 ndevos pranithk: I dont think we need them back, statedump can do it, right?
09:26 pranithk ndevos: they are enums... so if you typecast then it prints the stringified form anyway
09:26 ndarshan left #gluster-dev
09:26 pranithk ndevos: yes yes even in gdb we can get them
09:27 ndevos pranithk: oh, right, gdb should be able to do that when typecasting
09:28 ndevos hmm, I never through of trying that, and was always counting the size of the enums and checking the sources - all those lost minutes!
09:28 ndevos s/through/thought
09:35 pranithk xavih: I think lets not take any input of setting the background-heals to '0'
09:36 pranithk xavih: code will be super simple with that
09:38 pranithk xavih: for disabling background heals one needs to set the queue length to '0'
09:39 xavih pranithk: I don't think code gets more complex
09:39 xavih pranithk: it's ok to set queue length to 0
09:40 pranithk xavih: What is your view about not allowing background-heals to '0'?
09:40 xavih pranithk: but current implementation disables self heal when the queue length is set to 0, even if background_heals is set to 8
09:40 pranithk xavih: ah!
09:40 xavih pranithk: I do allow background_heals to 0
09:40 pranithk xavih: I think I understand what you mean...
09:40 xavih pranithk: but you are setting it to 1...
09:41 xavih pranithk: I think it's not the best thing to do
09:41 xavih pranithk: setting it to 0 is more clear, and the code changes are minimal
09:41 gem joined #gluster-dev
09:41 xavih pranithk: it also would allow to have background self-heal activated with the queue limit set to 0
09:43 pranithk xavih: I think I understood what you are saying...
09:44 pranithk xavih: SO for disabling background heals, one needs to set both these variables to zero... right?
09:45 pranithk xavih: I mean background_heals and qlen
09:45 xavih pranithk: no, no, it's ok your approach: when background_heals is set to 0, the queue length is also set to 0
09:45 pranithk xavih: damn it I understood wrong again :-)
09:46 xavih pranithk: what I say is to not depend on qlen to decide if a background self-heal can be started or not
09:46 pranithk xavih: Hmm... Ah! now I think I got it
09:46 shubhendu joined #gluster-dev
09:46 pranithk xavih: lets see :-)
09:46 xavih pranithk: if background_heals is set to 0, all healing should be disabled. Only pending heals in the queue should be processed until the queue is empty
09:47 xavih pranithk: if background_heals > 0, them we should always do self-heals in background, even if qlen == 0
09:47 pranithk xavih: Hmm...
09:49 anmol joined #gluster-dev
09:50 an joined #gluster-dev
09:52 pranithk xavih: Could you check the comment in ec_heal_throttle
09:52 * xavih looks the comment
09:53 pranithk xavih: if we set waitq_len to 0 I think we will still not do the heals?
09:53 pranithk xavih: I am also thinking...
09:53 rgustafs joined #gluster-dev
09:54 pranithk xavih: I mean even when ec->healers can do some heals...
09:54 xavih pranithk: yes, you are right. My comment is not correct either... :(
09:55 pranithk xavih: I think we can do something like if (ec->background_heals + ec->heal_wait_qlen > ec->healers + ec->heal_waiters) add it for heal else don't do heal?
09:56 pranithk xavih: wait wait
09:56 pranithk xavih: let me check that again
09:56 pranithk xavih: I think that works..?
09:57 pranithk xavih: nah!
09:57 xavih pranithk: I think it's easier. ec_heal_throttle() is only called for new heals, so we only allow them if ec->background_heals > 0
09:57 pranithk xavih: that check above should be done only when ec->background_heals is > -
09:57 pranithk xavih: > 0
09:57 pranithk xavih: hehe :-)
09:59 xavih pranithk: the only problem that remains is if it's allowed but the queue is full. This case should be tested in __ec_dequeue_heals()
09:59 xavih pranithk: wait, no...
10:00 pranithk xavih: told ya it is complex :-P
10:00 xavih pranithk: the condition I wrote is ok
10:00 xavih pranithk: no, sorry :-/
10:00 xavih pranithk: let me think...
10:02 pranithk xavih: (ec->background_heals == 0) || (ec->heal_waiters >= ec->heal_wait_qlen) can_heal = _gf_false
10:03 pranithk xavih: brb
10:04 pranithk xavih: it is the same condition you gave, I think it seems fine...
10:05 xavih pranithk: no because if qlen == 0, heal_waiters will always be >= qlen, not starting the heal even if the number of background self-heals is less than the maximum
10:06 pranithk xavih: wait I am not able to think straight, Let me have lunch... I will be back...
10:07 xavih pranithk: Ok, but I think your previous condition was right (the one adding both counts) but it needs to be combined with background_heals > 0
10:07 xavih pranithk: we'll talk later
10:19 an joined #gluster-dev
10:24 anmol joined #gluster-dev
10:27 shubhendu joined #gluster-dev
10:30 jiffin1 joined #gluster-dev
10:37 aravindavk joined #gluster-dev
10:47 shubhendu joined #gluster-dev
10:48 atalur joined #gluster-dev
10:49 nbalacha joined #gluster-dev
10:49 pranithk joined #gluster-dev
10:49 pranithk xavih: I am back, you there?
10:49 pranithk xavih: No my solution is not correct, it will add things even when background_heals is zero
10:49 atinm joined #gluster-dev
10:51 xavih pranithk: it needs to be combined with background_heals > 0
10:53 pranithk xavih: but both the conditions are equivalent, aren't they?
10:54 xavih pranithk: which conditions ?
10:55 pranithk xavih: if ((ec->background_heals == 0) || (ec->heal_waiters >= ec->heal_wait_qlen)) can_heal = _gf_false
10:55 xavih pranithk: this is equivalent to the condition I wrote into the comment
10:56 atalur joined #gluster-dev
10:56 xavih pranithk: but I'm not saying this
10:57 xavih pranithk: I mean: if ((ec->background_heals >0) && (ec->background_heals + ec->heal_wait_qlen > ec->healers + ec->heal_waiters)) { start heal; }
10:59 xavih pranithk: qlen could even always contain the sum of option heal-wait-qlen + option background-heals
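The condition xavih arrives at above can be sketched as a standalone C helper. This is a minimal stand-in, not the actual GlusterFS code: the struct below only carries the four counters named in the conversation, and the function name `ec_can_queue_heal` is hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stripped-down stand-in for the real ec_t, keeping only the fields
   discussed in the chat. */
typedef struct {
    uint32_t background_heals; /* max concurrent background self-heals */
    uint32_t heal_wait_qlen;   /* max heals allowed to wait in the queue */
    uint32_t healers;          /* heals currently running */
    uint32_t heal_waiters;     /* heals currently queued */
} ec_t;

/* xavih's proposed check: allow a new background heal only when healing
   is enabled at all (background_heals > 0) and the combined capacity of
   runners plus queue is not yet exhausted. Note that with qlen == 0 a
   heal can still start as long as a healer slot is free, which was the
   behavior pranithk's earlier waiters-vs-qlen check got wrong. */
static bool ec_can_queue_heal(const ec_t *ec)
{
    return ec->background_heals > 0 &&
           ec->background_heals + ec->heal_wait_qlen >
               ec->healers + ec->heal_waiters;
}
```

Setting `background_heals` to 0 disables new background heals outright, while a full queue (`healers + heal_waiters` at capacity) merely defers them — the two cases the earlier `heal_waiters >= heal_wait_qlen` formulation conflated.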
11:03 kotreshhr1 joined #gluster-dev
11:06 kotreshhr joined #gluster-dev
11:07 pranithk xavih: hmm... that is a bit difficult
11:08 pranithk left #gluster-dev
11:08 pranithk joined #gluster-dev
11:09 atinmu joined #gluster-dev
11:19 pranithk xavih: Do you think it is better to write 1G file instead of truncate -s 1G?
11:19 pranithk xavih: for the test case...
11:19 pranithk xavih: You are right that sometimes it may fail but it is very unlikely...
11:19 pranithk xavih: for all practical purposes the "hack" should work
11:21 pranithk xavih: IMHO :-)
11:21 pranithk xavih: I am just waiting for your answer before sending the updated patch...
11:21 xavih pranithk: No, I don't think it's better, but a truncate -s 1G when SEEK_DATA/SEEK_HOLE is implemented could be healed in less than a second
11:22 pranithk xavih: I don't see a better solution that writing a big file which will take a while to heal...
11:22 pranithk xavih: than*
11:23 xavih pranithk: I don't see a good approach. If you want we can leave this as it's now and find another solution when/if a problem appears...
11:24 pranithk xavih: The only way I can see is to write a gfapi/syncop program which will hold an inode lock which will stall self-heal...
11:25 rafi joined #gluster-dev
11:25 pranithk xavih: gah! but that will also block the I/O which triggers self-heal :-(
11:26 pranithk xavih: let us live with it for now....
11:26 xavih pranithk: maybe too much work for a simple test... leave the test as it's now and we'll change it if needed
11:27 pranithk xavih: the patch is sent. I hope I got it right this time :-)
11:28 rafi1 joined #gluster-dev
11:29 rafi joined #gluster-dev
11:30 pranithk xavih: wait there is a bug, uint32 needs to be changed in options as well
11:31 josferna joined #gluster-dev
11:33 lpabon joined #gluster-dev
11:36 jiffin joined #gluster-dev
11:38 pranithk xavih: done sir!
11:41 pranithk xavih: I also sent the opendir lock removal http://review.gluster.com/#/c/11506, wait let me also add you as reviewer
11:41 dlambrig_ joined #gluster-dev
11:42 atinmu RaSTar, do we have a patch now to turn off bind-insecure?
11:43 kkeithley1 joined #gluster-dev
11:43 an joined #gluster-dev
11:45 pranithk xavih: I will send readlink patch probably next week. I wrote it, but I will be working on the calloc/free hang and appending writes leading to corrupted file before that...
11:46 rafi joined #gluster-dev
11:46 Bhaskarakiran joined #gluster-dev
11:48 spalai left #gluster-dev
11:52 xavih pranithk: both patches seem ok. I'll accept them once regression tests pass
11:53 shubhendu joined #gluster-dev
11:58 Manikandan joined #gluster-dev
11:58 pranithk xavih: cool sir!
11:59 pranithk xavih: Bhaskar told me you sent him a patch for extra info... I am providing a build to him with the patch. Lets hope for the best :-)
12:00 xavih pranithk: I only added more log messages to try to identify where is the problem. I have been unable to see what could be happening in the code... :(
12:01 pranithk xavih: yep. That is how Bhaskarakiran raises bugs. Happens only on his machines :-D
12:01 pranithk xavih: His testing has some crazy load :-)
12:04 vmallika joined #gluster-dev
12:11 spalai joined #gluster-dev
12:12 firemanxbr joined #gluster-dev
12:15 kotreshhr joined #gluster-dev
12:18 kkeithley1 ndevos: do you know anything about pinentry? (used to get gpg passphrases?)  Supposedly you can set `export GPG_TTY=$(tty)` and force the use of pinentry-curses instead of an X-based window.
12:19 ndevos kkeithley1: uh, nope, never tried it
12:19 rjoseph joined #gluster-dev
12:20 pranithk xavih: Do you mind giving +2 for the patches? In case the results come in by morning my time, I can merge them...
12:28 kkeithley_ ndevos: okay, it was a long shot
12:33 jiffin joined #gluster-dev
12:35 kotreshhr left #gluster-dev
12:46 shyam joined #gluster-dev
12:48 itisravi joined #gluster-dev
12:49 jrm16020 joined #gluster-dev
12:52 Bhaskarakiran joined #gluster-dev
13:04 pranithk joined #gluster-dev
13:07 spalai left #gluster-dev
13:10 xavih pranithk: done :)
13:12 pranithk xavih: thanks :-)
13:18 jrm16020 joined #gluster-dev
13:18 jrm16020 joined #gluster-dev
13:19 pousley joined #gluster-dev
13:22 ank joined #gluster-dev
13:24 ndevos oh man, it is hot and sticky on my balcony: http://openweathermap.org/city/2744118
13:29 pppp joined #gluster-dev
13:32 ndevos kkeithley_: responded to http://review.gluster.org/#/c/11144/8/xlators/storage/posix/src/posix.c@3358
13:40 jobewan joined #gluster-dev
13:49 ira joined #gluster-dev
14:00 vmallika joined #gluster-dev
14:02 sankarshan joined #gluster-dev
14:04 shubhendu joined #gluster-dev
14:12 dlambrig_ left #gluster-dev
14:21 wushudoin joined #gluster-dev
14:31 dlambrig_ joined #gluster-dev
14:39 dlambrig_ joined #gluster-dev
14:47 dlambrig_ joined #gluster-dev
14:54 dlambrig_ joined #gluster-dev
14:59 nbalacha joined #gluster-dev
14:59 Bhaskarakiran joined #gluster-dev
15:30 ndevos PRO TIP: do not use git repositories on a gluster/nfs mount if you need some reasonable performance for checkouts
16:02 pranithk joined #gluster-dev
16:08 rafi joined #gluster-dev
16:10 jiffin joined #gluster-dev
16:20 jrm16020 joined #gluster-dev
16:28 jiffin ndevos: kkeithley: i have resent the patch http://review.gluster.org/#/c/11144/ , can u please review the same
16:35 rafi joined #gluster-dev
16:35 kdhananjay joined #gluster-dev
16:36 josferna joined #gluster-dev
16:49 pranithk joined #gluster-dev
16:58 ndevos jiffin: some minor changes would be appreciated, left comments in the review
17:01 vmallika joined #gluster-dev
17:18 jiffin ndevos: done :)
17:18 hagarth joined #gluster-dev
17:24 wushudoin| joined #gluster-dev
17:29 wushudoin| joined #gluster-dev
17:33 vmallika joined #gluster-dev
17:34 dlambrig_ joined #gluster-dev
17:39 atalur joined #gluster-dev
17:42 G_Garg joined #gluster-dev
17:44 vmallika joined #gluster-dev
17:45 shaunm_ joined #gluster-dev
18:02 gem joined #gluster-dev
18:03 dlambrig_ joined #gluster-dev
18:04 vmallika joined #gluster-dev
18:53 hchiramm_home joined #gluster-dev
19:27 dlambrig_ joined #gluster-dev
20:31 shyam joined #gluster-dev
21:37 dlambrig_ joined #gluster-dev
21:54 dlambrig_ joined #gluster-dev
22:30 jrm16020 joined #gluster-dev
22:40 dlambrig_ joined #gluster-dev
22:46 dlambrig__ joined #gluster-dev
22:47 tdasilva joined #gluster-dev
22:49 cogsu joined #gluster-dev
23:04 ndk joined #gluster-dev
23:41 dlambrig_ joined #gluster-dev
23:52 badone joined #gluster-dev
