
IRC log for #gluster-dev, 2015-10-06


All times shown according to UTC.

Time Nick Message
00:28 jobewan joined #gluster-dev
01:09 vikumar joined #gluster-dev
02:22 jobewan joined #gluster-dev
02:50 dlambrig_ left #gluster-dev
03:09 anoopcs joined #gluster-dev
03:30 atinm joined #gluster-dev
03:33 vimal joined #gluster-dev
03:35 vmallika joined #gluster-dev
03:36 nishanth joined #gluster-dev
03:48 kanagaraj joined #gluster-dev
03:49 shubhendu joined #gluster-dev
03:55 Byreddy joined #gluster-dev
04:06 nbalacha joined #gluster-dev
04:10 nbalacha joined #gluster-dev
04:11 nbalacha joined #gluster-dev
04:32 sakshi joined #gluster-dev
04:32 mohan_ joined #gluster-dev
04:35 rafi joined #gluster-dev
04:40 kotreshhr joined #gluster-dev
04:49 jobewan joined #gluster-dev
04:57 Manikandan joined #gluster-dev
04:59 ndarshan joined #gluster-dev
05:00 maveric_amitc_ joined #gluster-dev
05:01 ashiq joined #gluster-dev
05:02 ppai joined #gluster-dev
05:03 tigert joined #gluster-dev
05:05 csim joined #gluster-dev
05:09 pppp joined #gluster-dev
05:10 vimal joined #gluster-dev
05:10 gem joined #gluster-dev
05:10 jiffin joined #gluster-dev
05:15 pranithk joined #gluster-dev
05:24 aspandey joined #gluster-dev
05:25 kdhananjay joined #gluster-dev
05:27 maveric_amitc_ joined #gluster-dev
05:28 hgowtham joined #gluster-dev
05:29 rjoseph joined #gluster-dev
05:41 hagarth joined #gluster-dev
05:42 mohan_ joined #gluster-dev
05:53 aravindavk joined #gluster-dev
06:03 Bhaskarakiran joined #gluster-dev
06:03 hagarth joined #gluster-dev
06:07 kshlm joined #gluster-dev
06:11 ndarshan joined #gluster-dev
06:28 raghu joined #gluster-dev
06:30 anekkunt joined #gluster-dev
06:36 asengupt joined #gluster-dev
06:42 Apeksha joined #gluster-dev
06:45 ndarshan joined #gluster-dev
06:49 csaba kshlm: ping
06:49 glusterbot csaba: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:50 pg joined #gluster-dev
06:50 csaba OK, so let me try again, kshlm: ping wrt. asking your opinion on how something would best be implemented with GlusterFS ;)
06:51 kshlm csaba, implement what?
06:52 csaba kshlm: when dealing with Gluster volumes in Manila, we manage two states wrt. a volume -- in-use and not-in-use
06:52 kshlm okay
06:52 nbalacha joined #gluster-dev
06:52 csaba kshlm: these states are currently managed within Manila, stored in Manila's db, so GlusterFS does not know anything about it
06:53 csaba however, this approach is seen as fragile for several reasons
06:54 csaba that is, if something goes wrong with the Manila shares and it isn't handled properly, the volumes' in-use flag will be forgotten about and they will fall back to not-in-use
06:54 vmallika joined #gluster-dev
06:55 csaba we'd need a more robust mechanism to mark volumes in-use and we thought of placing some permanent indicator on the volume itself
06:56 csaba e.g. vol rename could be such -- rename in-use volumes with some prefix, say MANILA-IN-USE
06:56 csaba but I figured user support for vol rename has been dropped
06:56 kshlm Yes. It was dropped a little while back.
06:56 csaba while the RPC still has a message for this, the cli does not support rename
06:57 csaba so yeah, that's how it is, I just told you of that idea now to give an example what I mean by a "permanent indicator"
06:58 csaba question is: with current glusterfs feature set what could serve as a convenient permanent indicator?
06:59 kshlm I think the easiest approach would be to set a dummy option on the volume.
06:59 kshlm `volume set <vol> is-in-use true`
07:00 kshlm Volume set/get give a pretty good interface to label volumes.
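kshlm's dummy-option suggestion above could be sketched roughly as follows; a minimal sketch that only builds the gluster CLI argument vectors ("is-in-use" is kshlm's example name from the discussion, not an option gluster actually defines):

```python
# Sketch: build the gluster CLI calls that would label a volume via a
# dummy volume option, as kshlm suggests. "is-in-use" is only the
# example name used in the discussion; glusterd does not define it.

def set_in_use_cmd(volume, in_use=True):
    """Argument vector for `gluster volume set <vol> is-in-use <bool>`."""
    return ["gluster", "volume", "set", volume,
            "is-in-use", "true" if in_use else "false"]

def get_in_use_cmd(volume):
    """Argument vector for `gluster volume get <vol> is-in-use`."""
    return ["gluster", "volume", "get", volume, "is-in-use"]
```

A caller would hand these vectors to subprocess.run; only the command construction is shown here.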
07:00 csaba is it not the case that only options that are known by gluster code can be set?
07:00 kshlm It's only that they don't support arbitrary labels.
07:00 csaba yeah
07:00 csaba so that's the problem
07:01 kshlm We could introduce a domain `label.*` that would be ignored by glusterd.
07:01 kshlm That should be easy to do.
07:01 atalur joined #gluster-dev
07:02 csaba yeah, that seems to be a proper solution
07:03 csaba however, with current glusterfs, is there no option which is practically a dummy and could be hijacked for this purpose?
07:03 kshlm None AFAIK. But I need to check to confirm.
07:06 csaba kshlm: can I ask you to put together a proposal or even better, a patch for this label feature?
07:08 kshlm csaba, I cannot commit to a getting a patch out, but I'll put out a proposal.
07:09 kshlm Which release do you want this in? I'm hoping it's 3.8.
07:10 csaba what's the glusterfs roadmap?
07:10 csaba I don't know how many 3.7 microreleases are to be done and when 3.8 is to be released
07:12 kshlm I don't know the exact date either. IIRC 3.8 should be out in November.
07:12 kshlm hagarth, ^ is this right?
07:14 kshlm The release cadence is a new 3.Y release every 6 months, with a new 3.Y.Z release every month.
07:14 kshlm 3.7 was out in may, so 3.8 should be out in November.
07:15 pg joined #gluster-dev
07:17 csaba kshlm: well, if you can write up the proposal in the coming days, then 3.8 would be OK -- then we can implement support for this feature by trying to set our label and ignoring failure, regardless of whether the feature itself is there yet.
07:17 anekkunt joined #gluster-dev
07:17 asengupt joined #gluster-dev
07:17 kshlm csaba,  I will do that.
07:17 csaba thank you very much!
07:18 csaba btw just out of curiosity: what was wrong with rename?
07:18 kshlm I really do not know.
07:18 kshlm From the git log, I see it never made it into a release
07:19 csaba well OK.
07:19 Bhaskarakiran joined #gluster-dev
07:20 csaba hm, I found out about it not in the git log but here: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_gluster_Command which seems to be a release doc
07:20 nbalacha joined #gluster-dev
07:21 kshlm 2b0299da this commit comments out the command from the table. This was pre 3.1.0
07:22 kshlm hagarth possibly knows why he did it.
07:22 kshlm I'll check it with him.
07:23 rraja joined #gluster-dev
07:27 csaba kshlm: thanks. So "git grep 'volume rename' v3.2.0" suggests how all this fits together -- while the actual command is commented out of the command table, as you pointed out, it's still covered by the gluster man page
07:30 nbalacha joined #gluster-dev
07:31 nbalacha joined #gluster-dev
07:32 kshlm csaba, It was probably mistakenly left in the man page. 762973 is a bug filed by raghu, before the command was hidden. It's been closed as invalid, and not something that would be supported.
07:40 hagarth kshlm, csaba: 3.8 is likely to be delayed. We will probably have a release in Jan/Feb.
07:41 hagarth kshlm, csaba: we will continue to have one minor release per month for 3.7.x
07:42 hagarth csaba: volume rename support was dropped as we did not have too many use cases for that previously.
07:50 csaba hagarth: the fact that the rename removal was lumped under the notorious umbrella bug https://bugzilla.redhat.com/show_bug.cgi?id=GLUSTER-971 -- does it not suggest that it was found somehow incompatible with the approach taken for dynamic volume management?
07:50 glusterbot Bug GLUSTER: could not be retrieved: InvalidBugId
07:50 csaba sorry, https://bugzilla.redhat.com/show_bug.cgi?id=762703
07:50 glusterbot Bug 762703: low, low, 3.1.0, aavati, CLOSED CURRENTRELEASE, dynamic volume management
07:51 csaba hagarth, kshlm: can it then be asked to deliver the labeling feature in November (whatever minor is coming out then)?
07:56 hagarth csaba: do you want to take a stab at implementing this by november?
07:58 pg joined #gluster-dev
07:59 csaba hagarth: I can do that, if kshlm provides the spec, to rely on his insight on how it would fit in best.
08:01 hagarth csaba: that would be cool. I think kshlm is away for lunch now. We can discuss once he is back.
08:01 csaba hagarth: sounds good!
08:27 rastar csaba kshlm hagarth There is already such a domain in volume set
08:28 rastar user.*
08:28 rastar we currently use it for smb/cifs, "gluster vol set volname user.smb off" disables all SMB hook scripts for that volume.
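Using the user.* domain rastar describes, Manila's permanent indicator could be set the same way as user.smb; a hedged sketch (the key "user.manila-in-use" is hypothetical, chosen for illustration -- only user.smb is confirmed in this log):

```python
# Sketch: label a volume through the user.* option domain. The log
# confirms "user.smb" is handled this way; the key "user.manila-in-use"
# below is an assumption for illustration only.

MANILA_LABEL = "user.manila-in-use"

def label_volume_cmd(volume, value):
    """Argument vector for `gluster volume set <vol> user.manila-in-use <value>`."""
    return ["gluster", "volume", "set", volume, MANILA_LABEL, value]
```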
08:29 kshlm rastar, Awesome!
08:30 kshlm csaba, Looks like we already have this support!
08:50 csaba rastar, kshlm : that's good news! thanks
08:57 nishanth joined #gluster-dev
09:05 hagarth rastar: thanks for reminding! I always felt that I was missing something in the discussion here :).
09:19 Manikandan joined #gluster-dev
09:20 nbalacha joined #gluster-dev
09:21 pg joined #gluster-dev
09:24 atalur joined #gluster-dev
09:25 rjoseph joined #gluster-dev
09:29 nbalacha joined #gluster-dev
09:38 gem joined #gluster-dev
09:40 anekkunt joined #gluster-dev
09:42 deepakcs joined #gluster-dev
09:45 hchiramm joined #gluster-dev
10:04 msvbhat Hello, A quick question
10:04 msvbhat I was making changes to distaf and one last thing is remaining
10:04 msvbhat Doing setup before calling a test case function
10:05 msvbhat In the process I had a question, so wanted suggestions from you guys.
10:05 Manikandan joined #gluster-dev
10:06 msvbhat Do you guys prefer the test case to be a simple python function *or* a python class (with setup, execute/run, cleanup)
10:06 msvbhat Or support for both?
10:06 msvbhat rastar: ndevos: hagarth: ^^
10:20 Manikandan joined #gluster-dev
10:28 skoduri joined #gluster-dev
10:38 nishanth joined #gluster-dev
10:46 ndevos msvbhat: a python function would be sufficient for me, but if you want to put it in a class to make it more easily extensible, that would be fine too
10:47 kkeithley1 joined #gluster-dev
10:48 gem joined #gluster-dev
10:57 ndevos rafi, kdhananjay, jiffin, raghu: I'm having booth duty at LinuxCon Europe while our Bug Triage is happening, so I won't be able to host it. Can one of you take care of that?
11:00 rafi ndevos: Today I can't host the meeting :(
11:00 rafi ndevos: I can join on time for the meeting
11:04 Bhaskarakiran joined #gluster-dev
11:06 Manikandan joined #gluster-dev
11:12 hagarth joined #gluster-dev
11:19 asengupt joined #gluster-dev
11:21 Bhaskarakiran joined #gluster-dev
11:24 Bhaskarakiran joined #gluster-dev
11:33 josferna joined #gluster-dev
11:39 rjoseph joined #gluster-dev
11:40 asengupt joined #gluster-dev
11:44 msvbhat ndevos: Okay, maybe I will start with either class or function (one of them)
11:44 msvbhat ndevos: And later add the other if there is a requirement
11:56 rastar msvbhat: yes, one of them seems to be enough
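The two test-case shapes msvbhat asks about could look roughly like this; the names and hooks are illustrative assumptions, not distaf's actual interface:

```python
# Sketch of the two styles discussed: a plain function the framework
# calls directly, vs. a class with setup/run/cleanup hooks. Neither
# signature is distaf's real API; both are illustrative only.

def test_volume_start():
    """Function style: the framework just invokes this and checks the result."""
    return True  # a real test would drive gluster and assert on it

class TestVolumeStart:
    """Class style: the framework drives setup(), run(), cleanup() in order."""
    def setup(self):
        self.ready = True
    def run(self):
        return self.ready
    def cleanup(self):
        self.ready = False
```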
11:57 rastar hagarth csaba : there is no reset for user.* yet
11:57 rastar there are RFEs for that: https://bugzilla.redhat.com/show_bug.cgi?id=880058 and https://bugzilla.redhat.com/show_bug.cgi?id=1003840
11:57 glusterbot Bug 880058: is not accessible.
11:57 glusterbot Bug 1003840: unspecified, medium, ---, pgurusid, ASSIGNED , user.cifs option should have a way to be removed from the volume options
11:58 rastar csaba: just make sure your logic takes care of that
11:58 csaba rastar: thx, that's good to know. We can then just use two alternating values
11:58 rastar csaba: yes, that's what we do :)
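Since user.* options cannot yet be reset, the two-alternating-values workaround agreed on here amounts to toggling the label instead of removing it; a minimal sketch (the value strings are illustrative):

```python
# Sketch of the alternating-values workaround: a user.* option cannot
# be removed once set, so the in-use state flips between two values
# rather than being set and then reset. Value strings are illustrative.

IN_USE = "in-use"
NOT_IN_USE = "not-in-use"

def next_value(current):
    """Flip the stored label to its alternate value."""
    return NOT_IN_USE if current == IN_USE else IN_USE
```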
12:01 csaba rastar: can you point me to the code where user.* is implemented? I tried to find it, but just a few git greps did not reveal it
12:18 rjoseph joined #gluster-dev
12:20 pranithk joined #gluster-dev
12:25 Bhaskarakiran joined #gluster-dev
12:26 sakshi joined #gluster-dev
12:28 spalai joined #gluster-dev
12:30 spalai left #gluster-dev
12:50 firemanxbr joined #gluster-dev
13:01 kdhananjay joined #gluster-dev
13:06 shyam joined #gluster-dev
13:11 EinstCrazy joined #gluster-dev
13:12 pranithk joined #gluster-dev
13:36 Manikandan joined #gluster-dev
13:40 ira joined #gluster-dev
13:41 jiffin joined #gluster-dev
13:58 shubhendu joined #gluster-dev
14:09 kanagaraj joined #gluster-dev
14:20 nishanth joined #gluster-dev
14:23 gem joined #gluster-dev
14:46 shyam left #gluster-dev
14:56 mohan_ joined #gluster-dev
14:57 kanagaraj joined #gluster-dev
15:04 shyam joined #gluster-dev
15:13 zhangjn joined #gluster-dev
15:16 zhangjn joined #gluster-dev
15:17 zhangjn joined #gluster-dev
15:18 xavih joined #gluster-dev
15:19 wushudoin joined #gluster-dev
15:28 zhangjn joined #gluster-dev
15:30 zhangjn joined #gluster-dev
15:36 pranithk joined #gluster-dev
15:39 pranithk shyam: hey, I am confident about the steps for Replicate/Distributed Replicate. But I am not sure if the steps for distribute are the best they could be, considering that your comment says it can lead to a lot of data movement. So I don't want it to be the official way to replace a brick in a plain distribute volume.
15:40 pranithk shyam: I understand you guys gave +1, that means it would work fine :-). But are these steps the best way to achieve replace-brick in plain distribute?
15:40 zhangjn joined #gluster-dev
15:41 pranithk shyam: For afr, the steps I gave are pretty much the best it can get. No "gluster volume heal <volname> full" needed at all. Only the replica subvol with the brick that needs healing will heal; no other replica subvols are affected.
15:47 pranithk shyam: there?
15:48 lkoranda_ joined #gluster-dev
15:49 csaba joined #gluster-dev
15:58 lkoranda joined #gluster-dev
16:30 cholcombe joined #gluster-dev
16:41 pranithk joined #gluster-dev
16:49 ira joined #gluster-dev
17:19 nishanth joined #gluster-dev
18:47 maveric_amitc_ joined #gluster-dev
18:53 jobewan joined #gluster-dev
20:13 gem joined #gluster-dev
20:24 zhangjn_ joined #gluster-dev
20:46 hchiramm joined #gluster-dev
20:46 anoopcs joined #gluster-dev
20:46 msvbhat joined #gluster-dev
20:46 sankarshan_away joined #gluster-dev
20:46 purpleidea joined #gluster-dev
20:49 atalur joined #gluster-dev
22:13 badone joined #gluster-dev
22:32 skoduri joined #gluster-dev
22:34 csim joined #gluster-dev
23:52 zhangjn joined #gluster-dev
