IRC log for #gluster-dev, 2014-02-17

All times shown according to UTC.

Time Nick Message
00:05 hagarth joined #gluster-dev
00:30 badone_ joined #gluster-dev
00:53 badone__ joined #gluster-dev
00:58 bala joined #gluster-dev
01:02 badone joined #gluster-dev
03:26 shubhendu joined #gluster-dev
03:50 bharata-rao joined #gluster-dev
04:03 mohankumar__ joined #gluster-dev
04:18 mohankumar__ joined #gluster-dev
04:19 ndarshan joined #gluster-dev
04:30 aravindavk joined #gluster-dev
04:58 itisravi joined #gluster-dev
05:03 ppai joined #gluster-dev
05:07 hagarth joined #gluster-dev
05:18 ajha joined #gluster-dev
05:33 bala joined #gluster-dev
05:40 bala joined #gluster-dev
06:00 hagarth joined #gluster-dev
06:12 raghu joined #gluster-dev
06:17 pk1 joined #gluster-dev
06:41 mohankumar__ joined #gluster-dev
06:55 spandit joined #gluster-dev
06:57 hagarth joined #gluster-dev
07:25 hagarth joined #gluster-dev
08:08 kanagaraj joined #gluster-dev
08:34 mohankumar__ joined #gluster-dev
08:34 Humble joined #gluster-dev
08:40 kanagaraj_ joined #gluster-dev
08:41 kanagaraj joined #gluster-dev
08:42 surabhi joined #gluster-dev
08:55 lalatenduM joined #gluster-dev
09:20 bharata-rao joined #gluster-dev
09:44 kanagaraj joined #gluster-dev
09:49 badone joined #gluster-dev
09:51 kanagaraj joined #gluster-dev
10:24 bharata-rao joined #gluster-dev
10:33 mohankumar__ joined #gluster-dev
10:36 ppai joined #gluster-dev
11:36 ira joined #gluster-dev
11:59 hagarth joined #gluster-dev
12:03 edward1 joined #gluster-dev
12:08 itisravi_ joined #gluster-dev
12:10 kkeithley joined #gluster-dev
12:23 pk1 joined #gluster-dev
12:25 bfoster joined #gluster-dev
12:33 spandit joined #gluster-dev
12:34 spandit joined #gluster-dev
12:47 kanagaraj_ joined #gluster-dev
12:54 kanagaraj joined #gluster-dev
13:06 pk1 joined #gluster-dev
13:08 kanagaraj joined #gluster-dev
13:15 portante joined #gluster-dev
13:33 pk1 joined #gluster-dev
13:34 pk1 left #gluster-dev
13:59 kkeithley hagarth: if Avati reviewed-and-merged http://review.gluster.org/7003 (lgetxattr called with invalid keys on the bricks) on master, is it acceptable to take http://review.gluster.org/7005 into release-3.4 without further review?
14:02 hagarth kkeithley: should be fine if it is a simple backport.
14:04 kkeithley yes, it's a pretty simple change.
14:09 kkeithley was mainly curious about what the protocol was in this sort of situation.
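The protocol kkeithley and hagarth agree on — a reviewed-and-merged master fix may be taken onto a release branch as a simple backport — usually amounts to a `git cherry-pick -x` of the master commit. A self-contained sketch in a throwaway repository (branch and commit message mirror this conversation; the hash handling is the generic git workflow, not a gluster-specific tool):

```shell
# Demonstrate a simple backport: cherry-pick a master fix onto a release branch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > file.c
git add file.c && git commit -qm "initial"
git branch release-3.4                    # release branch forks here
echo fix >> file.c                        # the reviewed fix lands on master
git commit -qam "fix invalid lgetxattr keys on the bricks"
fix_sha=$(git rev-parse HEAD)
git checkout -q release-3.4
git cherry-pick -x "$fix_sha"             # -x records the original commit id
git log -1 --format=%B                    # note contains "cherry picked from commit ..."
```

The `-x` note is what lets a release-branch reviewer confirm, at a glance, that the change is an unmodified backport of an already-reviewed master commit.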
14:24 mohankumar__ joined #gluster-dev
14:43 raghu` joined #gluster-dev
14:46 ira joined #gluster-dev
14:47 mohankumar__ joined #gluster-dev
14:57 cjanbanan joined #gluster-dev
15:04 cjanbanan Does anyone here know if there's a risk of ending up with different contents of the replicated bricks if the network connection is broken?
15:11 kkeithley cjanbanan: yes. We call that split-brain. When the network connection is restored recent versions of gluster will automatically initiate self-healing.
15:11 kkeithley ,,(split-brain)
15:12 kkeithley To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
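For reference, the manual recovery described in Joe Julian's post boils down to removing the bad copy from one brick — both the file itself and its gfid hard link under the brick's `.glusterfs` directory — and then letting self-heal recreate it from the good replica. A sketch that simulates the brick layout in a throwaway directory (the paths and gfid name are illustrative; on a real brick you would locate the hard link via the file's `trusted.gfid` xattr and afterwards run `gluster volume heal <volname>`):

```shell
# Simulate the two on-brick entries for a split-brained file and their removal.
set -e
BRICK=$(mktemp -d)                      # stand-in for e.g. /export/brick1
mkdir -p "$BRICK/.glusterfs/ab/cd" "$BRICK/data"
echo bad > "$BRICK/data/file"
# every file on a brick also has a hard link named after its gfid:
ln "$BRICK/data/file" "$BRICK/.glusterfs/ab/cd/abcd1234-gfid"
# remove BOTH the file and its gfid hard link from the brick holding the bad copy
rm -f "$BRICK/data/file" "$BRICK/.glusterfs/ab/cd/abcd1234-gfid"
# self-heal (e.g. "gluster volume heal <volname> full") then copies the
# surviving replica back onto this brick
```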
15:13 cjanbanan I'm investigating this for a use case that may be somewhat different to what glusterfs is developed for, that's the reason for my worries.
15:13 kkeithley and BTW, I recommend asking this sort of question in #gluster.  #gluster-dev is for developers to discuss development
15:15 cjanbanan I understand the concept of split-brain and I think I'd better address the guys who are familiar with the source code. Hope that's OK?
15:21 mohankumar__ joined #gluster-dev
15:22 cjanbanan In my application, a client may be writing to the file system when the host restarts. In such a case, I'm worried that data will only reach the brick on the local host and not the replica on the network.
15:24 cjanbanan The reason is that the host containing the replica will take over in such a scenario. Most probably, the client will continue to write to the host containing the replica which, as far as I understand, will lead to a split-brain situation.
15:26 cjanbanan Is there any protection in the source code to avoid such situations, where data reaches only one of the replicated bricks?
15:34 cjanbanan I guess it would be hard to implement such a protection, but I have to ask to avoid jumping to the wrong conclusion.
15:34 lpabon joined #gluster-dev
15:41 cjanbanan In my application, I'd prefer to lose the most recent data instead of having a split-brain. That's why it would be better to copy the file from the replica without trying to figure out which is the most recent one (I guess that's the reason why the split-brain exists?).
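The trade-off cjanbanan describes — preferring to lose the most recent writes over ending up in split-brain — is roughly what GlusterFS's client-side quorum option provides: when a client cannot reach a majority of the replicas, writes are refused rather than applied to a minority. A hedged one-liner (the volume name is illustrative, and exact option behavior should be checked against the docs for the version in use):

```
gluster volume set myvol cluster.quorum-type auto
```

With quorum enforced, the restarted-host scenario above would fail the in-flight writes instead of silently diverging the two bricks, which is the "lose recent data instead of split-brain" preference stated here.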
16:13 semiosis cjanbanan: developers are in #gluster as well, and your inquiry would be more appropriate there
16:22 jobewan joined #gluster-dev
16:24 cjanbanan OK. Thanks!
16:40 mohankumar__ joined #gluster-dev
17:19 mohankumar__ joined #gluster-dev
17:21 portante joined #gluster-dev
20:05 cjanbanan joined #gluster-dev
21:02 jobewan joined #gluster-dev
21:50 jclift_ Hmmm, that's weird. /usr/lib64/libgfapi.so is part of the glusterfs-api-devel rpm, but /usr/lib64/libgfapi.so.6 and 6.0.0 are part of the -api package.  That doesn't seem right.
22:23 kkeithley but it is right.
22:31 jclift_ :)
22:50 cjanbanan joined #gluster-dev
23:15 badone joined #gluster-dev
23:27 badone joined #gluster-dev