IRC log for #gluster-dev, 2013-05-20


All times shown according to UTC.

Time Nick Message
00:08 yinyin joined #gluster-dev
02:54 bharata joined #gluster-dev
02:59 yliu joined #gluster-dev
03:23 shubhendu joined #gluster-dev
03:44 kshlm joined #gluster-dev
04:46 hagarth joined #gluster-dev
04:50 aravindavk joined #gluster-dev
04:55 deepakcs joined #gluster-dev
04:55 bharata hagarth, I am surprised to learn that a recent commit (0d415f7f8) has removed the capability to online-remove a brick from a replicate volume!
04:56 bharata hagarth, May I know the reasoning behind this ?
05:28 hagarth bharata: Changing a replicated volume to distribute is something that we do not intend supporting as of now (this is with replica count being 2). In addition to replica count 2, we have not seen many use cases where we want to change the replica count.
05:29 mohankumar joined #gluster-dev
05:31 bharata hagarth, This is the usecase I had in mind...
05:31 bharata hagarth, Say I have a 1-brick dht volume and I want to upgrade my storage to a newer/better one
05:32 bharata hagarth, I do add-brick and do healing and then remove the original brick
05:32 bharata hagarth, Is there any other way in which I can meet my above requirement ?
05:35 bharata hagarth, also think about the user who starts with a local storage based VM store and then wants to migrate to SAN at a later point in time
05:36 bharata hagarth, All he has to do is to add the SAN brick, heal and remove the local bricks
05:36 bharata hagarth, such uses will be hit by this commit
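
    [A minimal shell sketch of the add-brick/heal/remove-brick migration described above. The volume name "vmstore" and the host/brick paths are hypothetical, and the exact command syntax is an assumption rather than something taken from the log:

        # add the new (e.g. SAN) brick as a second replica so self-heal copies the data over
        gluster volume add-brick vmstore replica 2 newhost:/bricks/vmstore
        gluster volume heal vmstore full
        # once healing completes, retire the original brick (back to a single brick)
        gluster volume remove-brick vmstore replica 1 oldhost:/bricks/vmstore start

    The final remove-brick step is the one that commit 0d415f7f8 now rejects for replicate volumes.]
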
05:38 rastar joined #gluster-dev
05:41 bulde joined #gluster-dev
05:42 jclift_ joined #gluster-dev
05:58 bala joined #gluster-dev
06:02 vshankar joined #gluster-dev
06:11 lalatenduM joined #gluster-dev
06:14 yinyin joined #gluster-dev
06:20 raghu joined #gluster-dev
06:29 bala joined #gluster-dev
06:36 rgustafs joined #gluster-dev
06:44 bharata hagarth, btw I can't move from replica 3 to replica 2 configuration due to the above mentioned change, I guess this commit just disables brick removal from replicate volume
07:49 badone joined #gluster-dev
08:15 hagarth bharata: you can add a brick and then initiate a remove-brick start operation to migrate data off the old brick
08:25 bharata hagarth, that's what I am doing, remove-brick just doesn't work anymore with replicate volumes after that commit
08:27 bharata hagarth, what do you mean by 'remove-brick start" operation ? Is that different from the usual remove-brick operation ?
08:31 bharata hagarth, tried remove-brick start, the result is the same
08:31 bharata volume remove-brick start: failed: Removing brick from a replicate volume is not allowed
08:31 bharata hagarth, ^that's the one msg I see for all the cases for removal from replicate
08:39 hagarth bharata: I still don't think there is value in reducing replica-count from 2 to 1
08:40 hagarth bharata: remove-brick start does data migration whereas vanilla remove-brick does not.
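
    [A sketch of the distinction drawn here, with hypothetical volume and brick names: the start/status/commit form drains data off the brick before removal, while the force form removes the brick without migrating its data:

        # migrate data off the brick, then commit the removal once migration completes
        gluster volume remove-brick distvol host2:/bricks/b2 start
        gluster volume remove-brick distvol host2:/bricks/b2 status
        gluster volume remove-brick distvol host2:/bricks/b2 commit

        # remove the brick immediately, without data migration
        gluster volume remove-brick distvol host2:/bricks/b2 force
    ]
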
08:44 bharata hagarth, ok there are two things here
08:45 bharata hagarth, 1. Unless I am doing something fundamentally wrong, I can't convert a volume from replica3 to replica2. I guess that's a supported configuration
08:45 bharata hagarth, you might want to check on that
08:46 hagarth yes, we can probably relax replica count > 2 to 2.
08:48 bharata hagarth, so do agree that as of today gluster will support only addition of bricks and not removal in the case of replicate!
08:48 bharata hagarth, s/so do/so you do
08:49 bharata hagarth, coming to the 2nd point, do you have any suggestion on how I should achieve brick-to-brick online migration like the use case I described above? Is there a way to do it w/o using the remove-brick and heal method?
08:49 hagarth bharata: as of today, yes. Note that changing replication count or removing bricks from a replica set is not allowed for a distributed replicated volume.
08:50 bharata hagarth, not only distributed replicated but also pure replicated volumes
08:51 hagarth bharata: yes, that was before this patch was introduced. With this even replicated volumes are affected.
08:51 bharata hagarth, so the change wasn't intended for pure replicated vols ?
08:52 nickw joined #gluster-dev
08:56 hagarth bharata: No, it was definitely intended for replica-count 2. As I mentioned earlier, we can accommodate count >2.
08:56 bharata hagarth, ok
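
    [The replica-3-to-replica-2 reduction discussed above would look roughly like the following, assuming a hypothetical three-brick replicate volume "repvol"; per the log, after the commit in question this fails with "Removing brick from a replicate volume is not allowed":

        # drop one brick from a 3-way replica, leaving a 2-way replica
        gluster volume remove-brick repvol replica 2 host3:/bricks/repvol start
    ]
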
09:05 kshlm joined #gluster-dev
09:07 rgustafs joined #gluster-dev
09:10 jbrooks joined #gluster-dev
09:16 jclift_ hagarth: This sounds super odd to me: "hagarth: bharata: I still don't think there is value in reducing replica-count from 2 to 1"
09:16 jclift_ hagarth: Why would we block someone from doing that if they want?
09:17 jclift_ hagarth: Surely we're not trying to be smarter than a SysAdmin?
09:18 hagarth jclift_: we surely aren't. There are design issues which will bite a sysadmin when we do that and hence we are trying to be conservative there.
09:18 jclift_ k
09:19 rgustafs joined #gluster-dev
09:19 jclift_ hagarth: Do you have any idea when those potential CVE's will be looked at?
09:21 jclift_ hagarth: Two weeks is a fairly long time for CVE's to wait before being looked at
09:21 jclift_ hagarth: https://bugzilla.redhat.com/show_bug.cgi?id=959887
09:21 hagarth jclift_: I did take a preliminary look with Kurt. Will update the bug with my findings when I get some more time.
09:21 glusterbot Bug 959887: unspecified, unspecified, ---, amarts, NEW , clang static src analysis of glusterfs
09:21 jclift_ hagarth: k
09:21 hagarth jclift_: thanks for helping me refresh my todo :)
09:22 * jclift_ just doesn't want it to become one of those "Red Hat was told about XYZ security bug on 6th May 2013, and didn't fix it until n months later"
09:22 jclift_ hagarth: That directly gives us really bad reputation
09:22 jclift_ hagarth: And sure, it's important, so I will hassle you. :D
09:26 hagarth jclift_: I understand. I will get to that sooner than later. :)
09:27 hagarth jclift_: I am hoping that we need not fix anything there.
09:28 bulde joined #gluster-dev
09:28 jclift_ hagarth: Completely agree.  Hopefully there's no real CVE's there. :)
09:30 bulde1 joined #gluster-dev
09:32 rgustafs joined #gluster-dev
09:44 rgustafs joined #gluster-dev
09:57 kshlm joined #gluster-dev
10:06 hagarth joined #gluster-dev
10:12 bulde joined #gluster-dev
10:27 xavih joined #gluster-dev
10:52 edward1 joined #gluster-dev
10:56 badone joined #gluster-dev
11:23 kkeithley avati, portante: I don't know what the story is with those 2+ hour regressions. They stall on one of the bug-90xxxx.t tests. Sometimes I kill them when I notice.
11:26 hagarth joined #gluster-dev
12:10 jclift_ johnmark: Meh to the "check if they're in 3.4". Let's just get people checking the new code in master, and we'll backport stuff after fixes are done.
12:12 vshankar joined #gluster-dev
12:42 vshankar joined #gluster-dev
12:55 rastar joined #gluster-dev
13:51 lpabon joined #gluster-dev
13:59 vshankar joined #gluster-dev
14:04 wushudoin joined #gluster-dev
14:08 portante|ltp joined #gluster-dev
14:33 lpabon_ joined #gluster-dev
14:34 bfoster joined #gluster-dev
14:35 lpabon_ joined #gluster-dev
14:38 lpabon joined #gluster-dev
15:10 portante|ltp joined #gluster-dev
16:05 portante` joined #gluster-dev
16:18 awheeler_ kkeithley: ping  will the 3.3.2 release include gluster swift?
16:21 kkeithley yes, same as the 3.3.1 releases
16:24 awheeler_ Just wondering since it's not in the QA release.
16:24 awheeler_ http://bits.gluster.org/pub/gluster/glusterfs/3.3.2qa3/
16:25 kkeithley Swift gets added by me when I do the Fedora packaging.
16:27 awheeler_ Ok, cool.  Thanks
16:27 kkeithley I'll expect to keep doing that for 3.3.x.
16:29 kkeithley Or I should say ...when I do the fedora packaging for my fedorapeople.org repo.
16:36 awheeler_ Yeah, I just noticed that it's a weird host, not download, and not fedorapeople.
16:42 kkeithley yes, bits.g.o is where the jenkins automated build server puts release builds.
16:42 kkeithley s/where the/where our/
16:43 johnmark kkeithley: gah, I'm trying to find the pi
16:43 kkeithley johnmark: don't sweat it if it's not close at hand
16:44 johnmark ok
16:47 sghosh joined #gluster-dev
17:07 bfoster joined #gluster-dev
17:45 portante|ltp joined #gluster-dev
19:12 lpabon joined #gluster-dev
20:02 sghosh joined #gluster-dev
20:48 badone joined #gluster-dev
22:57 sghosh joined #gluster-dev
