
IRC log for #gluster-dev, 2015-07-09


All times shown according to UTC.

Time Nick Message
00:52 topshare joined #gluster-dev
01:08 vmallika joined #gluster-dev
01:46 topshare joined #gluster-dev
02:09 lpabon joined #gluster-dev
03:10 overclk joined #gluster-dev
03:22 nishanth joined #gluster-dev
03:33 shubhendu joined #gluster-dev
03:44 ashiq joined #gluster-dev
03:46 soumya joined #gluster-dev
03:49 mikedep3- joined #gluster-dev
03:49 atinm joined #gluster-dev
03:52 Manikandan joined #gluster-dev
03:53 vmallika joined #gluster-dev
04:01 nbalacha joined #gluster-dev
04:01 nbalacha joined #gluster-dev
04:03 ppai joined #gluster-dev
04:06 mikedep333 joined #gluster-dev
04:14 soumya_ joined #gluster-dev
04:14 nkhare joined #gluster-dev
04:16 kanagaraj joined #gluster-dev
04:22 hagarth joined #gluster-dev
04:28 itisravi joined #gluster-dev
04:52 sakshi joined #gluster-dev
04:59 jiffin joined #gluster-dev
04:59 Bhaskarakiran joined #gluster-dev
05:05 hgowtham joined #gluster-dev
05:08 sabansal_ joined #gluster-dev
05:09 gem joined #gluster-dev
05:12 ndarshan joined #gluster-dev
05:12 atinm joined #gluster-dev
05:14 topshare joined #gluster-dev
05:16 rafi joined #gluster-dev
05:18 nbalachandran_ joined #gluster-dev
05:19 ashish joined #gluster-dev
05:19 rafi1 joined #gluster-dev
05:20 vimal joined #gluster-dev
05:31 anrao joined #gluster-dev
05:32 pppp joined #gluster-dev
05:32 G_Garg joined #gluster-dev
05:37 nishanth joined #gluster-dev
05:43 spandit joined #gluster-dev
05:47 Manikandan joined #gluster-dev
05:50 Manikandan hchiramm++
05:50 glusterbot Manikandan: hchiramm's karma is now 50
05:52 gem_ joined #gluster-dev
05:52 soumya_ joined #gluster-dev
05:52 kdhananjay joined #gluster-dev
05:56 topshare joined #gluster-dev
05:57 raghu joined #gluster-dev
06:08 kshlm joined #gluster-dev
06:08 nishanth joined #gluster-dev
06:16 pranithk joined #gluster-dev
06:18 shubhendu joined #gluster-dev
06:19 anekkunt joined #gluster-dev
06:21 atalur joined #gluster-dev
06:23 vmallika joined #gluster-dev
06:30 spandit joined #gluster-dev
06:32 owlbot` joined #gluster-dev
06:34 deepakcs joined #gluster-dev
06:41 Guest24523 left #gluster-dev
06:41 Guest24523 joined #gluster-dev
06:42 ashish joined #gluster-dev
07:13 topshare joined #gluster-dev
07:17 gem_ joined #gluster-dev
07:21 nbalachandran_ joined #gluster-dev
07:21 vimal joined #gluster-dev
07:24 rafi1 anoopcs++
07:24 glusterbot rafi1: anoopcs's karma is now 15
08:10 josferna joined #gluster-dev
08:16 kshlm joined #gluster-dev
08:22 anrao joined #gluster-dev
08:25 pranithk xavih: hey, do you have some time to talk about the work for the next few days...
08:27 xavih pranithk: some time, yes :P
08:29 pranithk xavih: I updated the pad https://public.pad.fsfe.org/p/gluster-ec moving completed tasks to the end
08:30 xavih pranithk: yes, I've already seen it
08:30 ashish joined #gluster-dev
08:30 pranithk xavih: atalur and I are thinking of implementing a new eager-locking xlator which will move the functionality now in afr/ec out into a common xlator. I see from your mail that you wanted something common to exist, which you want to add in libglusterfs?
08:31 shubhendu joined #gluster-dev
08:32 xavih pranithk: yes, I think that doing this in an xlator is difficult with the way the xlator architecture is defined now
08:32 xavih pranithk: it seems easier to use and more flexible if this is implemented as library calls
08:32 xavih pranithk: how do you plan to do it in an xlator ?
08:34 pranithk xavih: inodelk/entrylk calls will be implemented in the new xlator. Delaying the unlock for 1 minute is what will happen. I have yet to decide how to handle lk-owner re-use...
08:35 xavih pranithk: will the new xlator be below afr/ec ?
08:35 pranithk xavih: Second thing is it will put inode-count query xattrs in other fops to disable itself from doing the delayed unlock.
08:35 pranithk xavih: yes above each client xl
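
A rough, self-contained C model of the delayed-unlock behaviour pranithk describes above: after a fop completes, the unlock is not sent immediately but deferred for a timeout so that a follow-up fop on the same inode can reuse the already-held lock, while a conflicting request (the "query xattr" case) cancels the delay. This is only an illustration under assumed names; the real design would use inodelk/entrylk network calls and a configurable delay, not pthread primitives and the arbitrary timeout used here.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    struct eager_lock {
        pthread_mutex_t mtx;
        pthread_cond_t  cond;
        bool            release_now;   /* a conflicting request wants the lock */
        bool            reused;        /* a later fop picked the lock up again */
    };

    /* Called after a fop completes while the cluster-wide lock is still held. */
    static void delayed_unlock(struct eager_lock *l, int delay_sec)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += delay_sec;

        pthread_mutex_lock(&l->mtx);
        while (!l->release_now && !l->reused) {
            /* Wait until reuse, a conflict, or the timeout, whichever comes first. */
            if (pthread_cond_timedwait(&l->cond, &l->mtx, &deadline) != 0)
                break;                  /* timeout expired */
        }
        if (l->reused)
            printf("lock reused by a later fop, unlock skipped\n");
        else
            printf("sending unlock to the bricks now\n");
        pthread_mutex_unlock(&l->mtx);
    }

    int main(void)
    {
        struct eager_lock l = {
            .mtx  = PTHREAD_MUTEX_INITIALIZER,
            .cond = PTHREAD_COND_INITIALIZER,
        };
        delayed_unlock(&l, 1);          /* nobody reuses it: unlock after 1 second */
        return 0;
    }
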
08:36 xavih pranithk: then you will also need additional synchronization between each xlator (there will be multiple locking xlators)
08:37 xavih pranithk: I think this adds a lot of complexity
08:37 xavih pranithk: and it does not let the user xlator have any control over it
08:38 pranithk xavih: What synchronization do you foresee?
08:39 pranithk xavih: problems I mean :-)
08:40 xavih pranithk: for example, if I correctly understand what you say, if ec wants to issue a lock, it will send a request to all its subvolumes, right ?
08:40 pranithk xavih: true
08:41 xavih pranithk: how will each locking xlator coordinate with the others ? I mean, if one xlator fails to get the lock, how will it tell the others about that problem so that a sequential lock sequence is initiated ?
08:42 xavih pranithk: they will also need to determine an order between them to avoid deadlocks in sequential locking
08:42 xavih pranithk: I think this requires a lot of complexity that can be avoided if some library calls are implemented and used directly from afr/ec
08:43 pranithk xavih: They don't need to co-ordinate...
08:43 xavih pranithk: why not ? how will the locks be acquired ?
08:44 pranithk xavih: think of it this way. For the user xlator it is just a normal inodelk/entrylk call. It doesn't know whether it is optimized away or not (although we can build that logic as well)
08:44 pranithk xavih: Lets take an example:
08:44 pranithk xavih: let us say there are 3 bricks for ec xlator.
08:45 xavih pranithk: ok
08:45 pranithk xavih: It tries to acquire non-blocking locks; let's say it gets the lock on 0 and 2, and on 1 it fails with EAGAIN. We give the same op_ret, op_errno to ec
08:46 pranithk xavih: now it needs to do unlock and sequential locking.
08:46 xavih pranithk: ec will receive 3 unrelated answers
08:46 xavih pranithk: unlock and sequential lock will be implemented in ec ?
08:46 pranithk xavih: yes yes
08:46 pranithk xavih: well, yes :-) that is what I am thinking...
08:47 xavih pranithk: then I don't see the advantage...
08:47 xavih pranithk: if afr/ec still needs to handle part of the logic of locking, splitting the job between two xlators will reduce the flexibility and the optimization opportunities...
08:47 pranithk xavih: Okay, maybe the expectations from this functionality are different for you and me :-). I simply want to reduce the number of network calls. Just this much
08:48 pranithk xavih: What are the expectations you have from this functionality?
08:48 xavih pranithk: how can this reduce the number of network calls ? at most the new xlator will do the same optimizations that are currently done in afr/ec, right ?
08:49 xavih pranithk: I expect to move all the locking logic and complexity outside the cluster xlators
08:49 pranithk xavih: yes, but it will reduce the complexity of handling all the optimizations. afr and ec self-heals can also benefit from it
08:50 xavih pranithk: this has two advantages: 1) cluster xlators only need to say if they require a lock or not, and 2) all optimization and implementation changes are completely outside of the cluster xlators, so they won't need to be touched if at some point we decide that there is a better way to implement locking
08:51 xavih pranithk: self-heals can also benefit from a set of library functions quite easily, and I think the advantages are bigger
08:52 xavih pranithk: even self-heal main logic could be abstracted and only leave the core rebuild logic inside afr/ec, but this is another discussion :)
08:53 pranithk xavih: Oh, ah! if I understand correctly, you want to implement something similar to cluster_inodelk/entrylk
08:53 kotreshhr joined #gluster-dev
08:53 pranithk xavih: This also has the logic of nonblocking + unlock + seq_lock in the same function
08:53 pranithk xavih: Makes sense xavi
08:54 xavih pranithk: possibly it's something similar, though I haven't looked at them in detail
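
For readers following the thread, the pattern being referenced (try non-blocking locks on every brick; on any EAGAIN, release everything and fall back to blocking locks taken in a fixed order, which is what avoids deadlocks between competing clients) can be sketched with a small stand-alone C program. This is a hypothetical model only: pthread mutexes stand in for per-brick inodelk calls, and none of the names below come from the GlusterFS sources.

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    #define BRICKS 3

    static pthread_mutex_t brick_lock[BRICKS] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER
    };

    static void unlock_all(int held[BRICKS])
    {
        for (int i = 0; i < BRICKS; i++) {
            if (held[i]) {
                pthread_mutex_unlock(&brick_lock[i]);
                held[i] = 0;
            }
        }
    }

    /* Returns 0 once locks are held on all bricks. */
    static int cluster_lock(void)
    {
        int held[BRICKS] = { 0 };
        int need_blocking = 0;

        /* Phase 1: optimistic non-blocking attempt on every brick. */
        for (int i = 0; i < BRICKS; i++) {
            int ret = pthread_mutex_trylock(&brick_lock[i]);
            if (ret == 0) {
                held[i] = 1;
            } else if (ret == EBUSY) {       /* the EAGAIN case for inodelk */
                need_blocking = 1;
            } else {
                unlock_all(held);
                return ret;
            }
        }

        if (!need_blocking)
            return 0;                        /* fast path succeeded */

        /* Phase 2: release everything and lock sequentially, in brick order. */
        unlock_all(held);
        for (int i = 0; i < BRICKS; i++)
            pthread_mutex_lock(&brick_lock[i]);
        return 0;
    }

    int main(void)
    {
        if (cluster_lock() == 0) {
            printf("locks held on all %d bricks\n", BRICKS);
            for (int i = 0; i < BRICKS; i++)
                pthread_mutex_unlock(&brick_lock[i]);
        }
        return 0;
    }
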
08:54 overclk raghu, ping. 3.7 patches for bitrot have passed regression.
08:55 pranithk xavih: Do you mind if someone else implements it? atalur is looking to implement these... we can guide her?
08:55 xavih pranithk: of course :)
08:56 pranithk xavih: about self-heal, I was thinking along similar lines. We should talk about that in a few weeks; we are also thinking of some optimizations. But I want some deadlines to go away
08:56 pranithk xavih: I am busy till this month end... After that there are so many awesome problems to solve :-)
08:56 xavih pranithk: if we achieve a small and flexible set of library calls that remove all locking logic from ec, it will be great :)
08:56 pranithk xavih: not just from ec...
08:57 pranithk xavih: We will make it generic enough that any xl can use it
08:57 xavih pranithk: good, we can talk when you can. I've many ideas :P
08:57 pranithk xavih: Good problem to have. Having many ideas I mean :-)
08:57 pranithk xavih: Okay now on to the actual reason I pinged you for...
08:59 pranithk xavih: I want to implement most of the functionality/bug-fixes I listed under 'Healing' section
09:00 pranithk xavih: I need to get most of it done by tomorrow...
09:00 atinm joined #gluster-dev
09:01 itisravi_ joined #gluster-dev
09:01 pranithk xavih: I am of two minds about the readdirp and size problem. I wonder if we can live with that for now, or shall we just make a quick fix of setting entry->inode = NULL?
09:02 pranithk xavih: I promise I will fix it properly but I am not sure I have time to get all of it done by Monday...
09:02 pranithk xavih: so what do you think?
09:03 itisravi joined #gluster-dev
09:03 xavih pranithk: I'm not sure what's better. The safest way is to set entry->inode = NULL, however this has a performance impact on some use cases
09:04 kshlm joined #gluster-dev
09:04 pranithk xavih: yes. I am in pleading mode :-) Can we live with it for the next 8 weeks?
09:05 ndarshan joined #gluster-dev
09:05 xavih pranithk: I think so...
09:06 pranithk xavih: cool. thanks! I think I will have time to address this again in August...
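
The entry->inode = NULL workaround being agreed to above can be pictured with a tiny stand-in: readdirp normally hands back each directory entry together with a prelinked inode and stat data, and clearing the inode pointer forces the client to do a fresh lookup instead of trusting attributes that ec could not reconstruct reliably, at the cost of one extra lookup per entry. The struct below is a hypothetical stand-in, not the real gf_dirent_t.

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    struct dirent_stub {
        char  name[256];
        void *inode;     /* cached inode; NULL means "look this up again" */
    };

    static void drop_cached_inodes(struct dirent_stub *entries, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            entries[i].inode = NULL;   /* safe, but costs one lookup per entry */
    }

    int main(void)
    {
        struct dirent_stub entries[2];
        int dummy_inode = 42;

        strcpy(entries[0].name, "a.txt");
        entries[0].inode = &dummy_inode;
        strcpy(entries[1].name, "b.txt");
        entries[1].inode = &dummy_inode;

        drop_cached_inodes(entries, 2);
        for (size_t i = 0; i < 2; i++)
            printf("%s -> %s\n", entries[i].name,
                   entries[i].inode ? "cached" : "needs lookup");
        return 0;
    }
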
09:07 pranithk xavih: I will add to the pad then. I think for 3.7.3 we are in excellent shape. Not sure when the release date is, though. I will be sending all the backports shortly...
09:08 xavih pranithk: great. thanks :)
09:08 pranithk xavih: there is one problem with selinux and ec.
09:08 pranithk xavih: selinux xattr can be different on different machines...
09:08 kaushal_ joined #gluster-dev
09:08 pranithk xavih: it gives EIO when this is the case for obvious reasons :-)
09:09 atinm joined #gluster-dev
09:09 xavih pranithk: how does this need to be handled ?
09:09 xavih pranithk: I don't have any knowledge about selinux
09:10 xavih pranithk: if each brick can have different values, then we will need some way to parse and combine them, right ?
09:10 pranithk xavih: The problem is it can also be missing. We are thinking of ignoring it, just like we did for other xattrs in ec_xattr_match
09:11 xavih pranithk: but then what we will return to the user ?
09:11 anekkunt joined #gluster-dev
09:11 xavih pranithk: choosing one answer for selinux at random could have funny effects on user side
09:11 pranithk xavih: exactly!
09:12 xavih pranithk: why can't the selinux xattr be forced to be the same on all bricks ?
09:13 xavih pranithk: since it has security implications like ACLs, I think it would be the best option
09:14 gem Manikandan++
09:14 glusterbot gem: Manikandan's karma is now 13
09:14 G_Garg joined #gluster-dev
09:15 Manikandan gem++ too, thanks!
09:15 glusterbot Manikandan: gem's karma is now 18
09:15 pranithk xavih: Forcing it to be same on all bricks is what we are thinking too...
09:15 pranithk xavih: shall we leave this issue as is? Otherwise as you said, it will have funny side-effects
09:16 xavih pranithk: I always ask the same... :P... how is this handled in afr ?
09:16 xavih pranithk: could we state that selinux is currently not supported for ec ?
09:16 pranithk xavih: no no
09:16 pranithk xavih: we will say that for selinux to work, the same label must be set on all bricks
09:17 xavih pranithk: ah, ok. This is easier :)
09:17 pranithk xavih: people with more knowledge about this suggested we do it this way
09:17 pranithk xavih: okay then, it is settled then
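
For context, the alternative that was considered and rejected above (ignoring the selinux xattr during the cross-brick comparison, the way ec_xattr_match already skips some keys) would look roughly like the sketch below. It uses a simplified key/value model rather than the real dict_t, and every name except security.selinux is invented; the thread instead settles on requiring the same label on every brick.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct xattr { const char *key; const char *value; };

    static bool is_ignored(const char *key)
    {
        static const char *ignored[] = { "security.selinux", NULL };
        for (int i = 0; ignored[i]; i++)
            if (strcmp(key, ignored[i]) == 0)
                return true;
        return false;
    }

    /* Compare two bricks' xattr lists, skipping ignored keys. */
    static bool xattrs_match(const struct xattr *a, size_t na,
                             const struct xattr *b, size_t nb)
    {
        if (na != nb)
            return false;
        for (size_t i = 0; i < na; i++) {
            if (strcmp(a[i].key, b[i].key) != 0)
                return false;
            if (is_ignored(a[i].key))
                continue;               /* mismatch here does not cause EIO */
            if (strcmp(a[i].value, b[i].value) != 0)
                return false;
        }
        return true;
    }

    int main(void)
    {
        struct xattr brick0[] = { { "security.selinux", "label_A" },
                                  { "trusted.ec.size", "4096" } };
        struct xattr brick1[] = { { "security.selinux", "label_B" },
                                  { "trusted.ec.size", "4096" } };

        printf("match: %s\n", xattrs_match(brick0, 2, brick1, 2) ? "yes" : "no");
        return 0;
    }
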
09:18 pranithk xavih: Bhaskarakiran has moved on to screwing the disks in testing... He is saying the mount is not succeeding when the disks are full or something. I am gonna go to his desk, if you have nothing more... I am done talking :-D
09:19 xavih pranithk: when some brick is full, we have problems with recovery
09:19 xavih pranithk: we already talked about that some time ago
09:19 pranithk xavih: that is fine, but the mount is not happening...
09:20 pranithk xavih: 4 of the 6 bricks are still good.
09:20 xavih pranithk: ah, ok :P
09:20 xavih pranithk: nothing more from me for now ;)
09:21 pranithk xavih: I will be resubmitting the readlink patch addressing all the comments. It will be my 700th patch :-)
09:21 pranithk xavih: So kind of special :-P
09:21 xavih pranithk: :)
09:22 pranithk xavih: ok sir! I will be off now. cya
09:30 atalur_ joined #gluster-dev
09:33 ndarshan joined #gluster-dev
09:37 kshlm joined #gluster-dev
09:40 G_Garg joined #gluster-dev
09:44 ashiq Manikandan++ thanks:)
09:44 glusterbot ashiq: Manikandan's karma is now 14
09:45 atinm joined #gluster-dev
09:47 pranithk joined #gluster-dev
09:51 anekkunt joined #gluster-dev
10:08 ndarshan joined #gluster-dev
10:18 Manikandan joined #gluster-dev
10:23 kshlm joined #gluster-dev
10:35 atinm joined #gluster-dev
10:44 anekkunt joined #gluster-dev
10:45 G_Garg joined #gluster-dev
10:52 soumya_ joined #gluster-dev
10:58 vmallika joined #gluster-dev
11:03 kotreshhr joined #gluster-dev
11:05 Manikandan joined #gluster-dev
11:10 ira joined #gluster-dev
11:19 kkeithley1 joined #gluster-dev
11:25 atalur joined #gluster-dev
11:26 kkeithley_ ndevos: please look at http://review.gluster.org/11581 for me? thanks
11:27 pranithk xavih: There still seems to be one more bug with fop->healing data corruption :-(. I am still looking
11:27 pranithk xavih: If the lock is re-used healing bits are not set anywhere... I think that is the bug :-(
11:32 pranithk xavih: I will fix that one as well...
11:39 kkeithley_ hchiramm: ping.  any ETA for uploading the 3.5.5 rpms?
11:40 hchiramm kkeithley, I am sorry, got pulled into some other work today
11:40 hchiramm I am starting :)
11:40 hchiramm give me 30 mins..
11:40 kkeithley_ okay, no prob. Was just curious
11:41 hchiramm kkeithley, we only have f22, f23, epel6 and epel7..
11:41 hchiramm Isn't it?
11:42 kkeithley_ yes,
11:43 kkeithley_ 3.5.x is in f21...   oh, el5 is what you're asking about
11:43 kkeithley_ ?
11:43 hchiramm yeah, el5
11:44 kkeithley_ one min
11:44 kkeithley_ http://koji.fedoraproject.org/koji/taskinfo?taskID=10327697 in about 10 minutes
11:45 hchiramm sure
11:45 kkeithley_ thanks
11:50 kkeithley_ hchiramm: oops, that didn't work. Can you go ahead with everything but el5 until I can figure it out
11:51 kotreshhr joined #gluster-dev
11:59 kaushal_ joined #gluster-dev
12:01 lpabon joined #gluster-dev
12:03 atinm joined #gluster-dev
12:07 hagarth joined #gluster-dev
12:08 pranithk joined #gluster-dev
12:10 anrao joined #gluster-dev
12:12 hchiramm kkeithley, sure thing
12:25 hchiramm kkeithley, http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.5/
12:26 jiffin1 joined #gluster-dev
12:28 kkeithley_ hchiramm: thanks, but.... I sent you a second mail with links to 3.5.5-2 builds...  (you put up the 3.5.5-1 builds!)
12:28 hchiramm ahhhhhhhhhhhh...
12:28 * hchiramm cross checking
12:29 hchiramm I have 3 mails
12:29 hchiramm out of those 3, 2 are encrypted
12:31 hchiramm kkeithley,
12:34 kanagaraj joined #gluster-dev
12:42 kkeithley_ oh merde. Sorry.
12:42 * kkeithley_ wonders why tbird keeps encrypting.
12:43 dlambrig_ joined #gluster-dev
12:47 hchiramm kkeithley, np.. I will wait for the mail..
12:47 hchiramm :)
12:50 soumya__ joined #gluster-dev
12:53 kkeithley_ if I send you an encrypted email just ping me. It's because I've left my brain somewhere else
12:54 hchiramm is the above applicable from now on? or does it apply to the existing mails :)
13:01 hchiramm kkeithley, I am working on 3.5.5-2 builds :)
13:02 kkeithley_ el5 is building.  http://koji.fedoraproject.org/koji/taskinfo?taskID=10327849
13:03 kkeithley_ from now on, if you get encrypted email... I think it's always been safe to say I've left my brain somewhere else. ;-)
13:06 topshare joined #gluster-dev
13:07 topshare joined #gluster-dev
13:08 topshare joined #gluster-dev
13:10 pranithk xavih: Are you there? I think the solution we used in ec_child_select is regressing the case of updates while self-heal is in progress...
13:10 pranithk xavih: Because the xattrop of update_size_version should still happen on the bad brick...
13:12 shyam joined #gluster-dev
13:22 pranithk left #gluster-dev
13:22 xavih pranithk: I thought we said that all write operations should be allowed on bad bricks. This should include xattrop...
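
The rule xavih restates here can be expressed as a small, purely illustrative brick-selection helper: bricks that are still being healed are excluded from reads, but write-side operations, including the update_size_version xattrop, must still reach them so their metadata does not fall further behind. The bitmask style loosely mirrors ec's per-brick masks; the function and variable names below are assumptions, not actual GlusterFS code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BRICK(i) (1u << (i))

    /* Which bricks should a fop be sent to? */
    static uint32_t select_targets(uint32_t up_mask, uint32_t healing_mask,
                                   bool is_write)
    {
        if (is_write)
            return up_mask;                 /* writes go to good AND healing bricks */
        return up_mask & ~healing_mask;     /* reads only trust fully healthy bricks */
    }

    int main(void)
    {
        uint32_t up      = BRICK(0) | BRICK(1) | BRICK(2);
        uint32_t healing = BRICK(1);        /* brick 1 is being rebuilt */

        printf("read  targets: 0x%x\n", select_targets(up, healing, false));
        printf("write targets: 0x%x\n", select_targets(up, healing, true));
        return 0;
    }
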
13:23 hchiramm kkeithley, http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.5/
13:23 hchiramm I haven't moved the el5 builds there
13:31 hchiramm kkeithley, I am moving those as well :)
13:38 jrm16020 joined #gluster-dev
13:44 kotreshhr left #gluster-dev
13:52 hchiramm kkeithley, jfyi Done..
13:52 hchiramm everything is available @download.gluster.org
13:52 hchiramm I have moved el5 rpms as well
13:54 kkeithley_ hchiramm++
13:54 glusterbot kkeithley_: hchiramm's karma is now 51
13:56 hchiramm kkeithley++ thanks!
13:56 glusterbot hchiramm: kkeithley's karma is now 80
13:59 dlambrig_ joined #gluster-dev
14:01 shubhendu joined #gluster-dev
14:15 shyam joined #gluster-dev
14:15 firemanxbr joined #gluster-dev
14:29 overclk joined #gluster-dev
14:29 kkeithley_ ndevos: ping, are you back from lunch?
14:32 pousley joined #gluster-dev
14:35 jiffin joined #gluster-dev
14:35 jobewan joined #gluster-dev
14:38 kshlm joined #gluster-dev
14:39 wushudoin| joined #gluster-dev
14:51 kkeithley_ ndevos: ping, are you back from lunch?
15:01 topshare joined #gluster-dev
15:02 lpabon joined #gluster-dev
15:03 nbalachandran_ joined #gluster-dev
15:06 kanagaraj joined #gluster-dev
15:07 topshare joined #gluster-dev
15:11 kkeithley1 joined #gluster-dev
15:23 shyam joined #gluster-dev
15:28 jiffin joined #gluster-dev
15:32 soumya_ joined #gluster-dev
15:35 vimal joined #gluster-dev
15:43 soumya_ shyam, ping
15:43 glusterbot soumya_: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:48 soumya_ ndevos, shyam , request your inputs on http://review.gluster.org/#/c/11572/2/api/src/glfs-handleops.c
15:49 soumya_ also would you prefer that to be addressed in a separate patch?
15:53 pranithk joined #gluster-dev
15:57 pranithk xavih: there?
16:11 pranithk joined #gluster-dev
16:30 shyam soumya|afk: Separate patch, yes, absolutely. It was more an observation as I was doing the review; let me recheck and post an updated comment on the same
16:31 jbautista- joined #gluster-dev
16:36 jbautista- joined #gluster-dev
16:39 wushudoin| joined #gluster-dev
16:43 G_Garg joined #gluster-dev
16:43 wushudoin| joined #gluster-dev
16:45 vmallika joined #gluster-dev
16:48 wushudoin| joined #gluster-dev
17:17 firemanxbr joined #gluster-dev
17:34 soumya|afk shyam, thanks :)
18:49 pranithk joined #gluster-dev
19:00 dlambrig_ joined #gluster-dev
19:14 pranithk joined #gluster-dev
19:15 jbautista- joined #gluster-dev
19:20 jbautista- joined #gluster-dev
19:38 shaunm_ joined #gluster-dev
20:14 pranithk joined #gluster-dev
20:22 pranithk joined #gluster-dev
20:42 dlambrig_ joined #gluster-dev
20:50 dlambrig__ joined #gluster-dev
21:13 badone joined #gluster-dev
23:19 wushudoin| joined #gluster-dev
23:25 wushudoin| joined #gluster-dev
