IRC log for #gluster-dev, 2014-11-27


All times shown according to UTC.

Time Nick Message
01:08 topshare joined #gluster-dev
03:02 hagarth joined #gluster-dev
03:43 soumya_ joined #gluster-dev
03:58 shubhendu joined #gluster-dev
04:00 itisravi joined #gluster-dev
04:11 mikedep333 joined #gluster-dev
04:13 anoopcs joined #gluster-dev
04:17 lalatenduM joined #gluster-dev
04:20 Gaurav__ joined #gluster-dev
04:21 wushudoin joined #gluster-dev
04:26 deepakcs joined #gluster-dev
04:32 jiffin joined #gluster-dev
04:32 kanagaraj joined #gluster-dev
04:38 ndarshan joined #gluster-dev
04:41 pp joined #gluster-dev
04:42 kdhananjay joined #gluster-dev
04:49 rafi joined #gluster-dev
04:49 kdhananjay left #gluster-dev
04:51 lalatenduM joined #gluster-dev
04:53 nkhare joined #gluster-dev
04:57 ppai joined #gluster-dev
04:59 kdhananjay joined #gluster-dev
04:59 hagarth joined #gluster-dev
05:00 atinmu joined #gluster-dev
05:01 jiffin1 joined #gluster-dev
05:05 hagarth JoeJulian: ping, looking into https://bugzilla.redhat.com/show_bug.cgi?id=1127140. Are you still observing memory leaks with 3.4.x?
05:05 glusterbot Bug 1127140: unspecified, unspecified, 3.4.6, kkeithle, ASSIGNED , memory leak
05:06 kshlm joined #gluster-dev
05:24 spandit joined #gluster-dev
05:33 atinmu xavih, hi
05:39 bala joined #gluster-dev
05:53 overclk joined #gluster-dev
06:28 atalur joined #gluster-dev
06:48 ppai joined #gluster-dev
06:57 krishnan_p joined #gluster-dev
07:39 raghu` joined #gluster-dev
07:54 ppai joined #gluster-dev
08:06 atalur joined #gluster-dev
08:50 atinmu joined #gluster-dev
08:53 krishnan_p joined #gluster-dev
08:55 hagarth joined #gluster-dev
09:00 ppai joined #gluster-dev
09:33 badone joined #gluster-dev
09:37 atalur joined #gluster-dev
09:37 rafi1 joined #gluster-dev
09:58 krishnan_p joined #gluster-dev
10:07 atinmu joined #gluster-dev
10:14 hagarth joined #gluster-dev
10:50 ppai joined #gluster-dev
11:03 hagarth spandit: for http://review.gluster.org/#/c/9194/1, have you checked the functionality with aux groups?
11:18 ndevos spandit: ah, yes, I wanted to ask about that too - a frame->root has a ->groups and a ->ngrps, should those not also get filled, or are they set correctly by syncop automatically?
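
(For illustration: filling those credentials on a frame might look like the C sketch below. The uid, gid, ngrps and groups fields live on glusterfs' call_stack_t, reached via frame->root; the helper name and its inputs are hypothetical, and how the groups array is allocated varies by release, so treat this as a sketch only.)

    /* sketch: copy caller credentials onto a frame before a syncop call;
     * assumes frame->root->groups already has room for ngrps entries */
    static void
    fill_frame_groups (call_frame_t *frame, uid_t uid, gid_t gid,
                       const gid_t *aux_gids, int ngrps)
    {
            int i = 0;

            frame->root->uid   = uid;
            frame->root->gid   = gid;
            frame->root->ngrps = ngrps;
            for (i = 0; i < ngrps; i++)
                    frame->root->groups[i] = aux_gids[i];
    }
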
11:35 vimal joined #gluster-dev
11:58 eljrax Is there *any* mechanism inside gluster that I can use to have translators on different servers talk to each other?
11:59 eljrax So the same translator runs on two servers in a replicated setup. Can I somehow coordinate those two, some RPC or something?
12:22 edward1 joined #gluster-dev
12:23 itisravi_ joined #gluster-dev
12:25 ppai joined #gluster-dev
12:29 overclk eljrax, you would have a client translator on one server and a server translator on another (it'll look like a kind of fan-out from the server).
12:52 eljrax overclk: Hm okay. Thinking about it, what I'm attempting to do might as well be a client translator. Are those also executed once for each replica?
12:52 eljrax Or only the once?
12:56 overclk eljrax, what do you mean by "executed"? could you elaborate?
12:57 eljrax So if I have two servers, replicated, and I create a new file on the mount on, say, client1, it's then put on server1 and server2. Will that translator be run twice or once?
12:58 eljrax For example, if I write "created" to /tmp/whatever in the client translator, will "created" appear once or twice in /tmp/whatever?
12:59 overclk eljrax, you may need to configure one of the clients as active and the other as passive, else they would ping-pong each other.
13:00 overclk eljrax, provided that they are configured to talk to each other (from the server side)
13:01 eljrax So assuming I don't do anything like that, it'll write to the file twice?
13:02 overclk eljrax, in a replicated setup each server will write to its own copy. if the request is "forwarded" via the server-side fan-out, that may contend with the actual write.
13:03 overclk eljrax, if I understand you correctly..
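
(For illustration: the "fan out" overclk describes would mean the server-side graph on one node loads a protocol/client translator pointing at its peer, roughly like the volfile sketch below; the host and subvolume names are entirely hypothetical. A custom translator stacked above such a subvolume could then forward requests to the peer, which is the contention he warns about above.)

    # sketch: server-side connection to a peer's brick
    volume peer-server2
        type protocol/client
        option transport-type tcp
        option remote-host server2
        option remote-subvolume brick-server2
    end-volume
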
13:03 anoopcs joined #gluster-dev
13:03 eljrax I'm not sure I'm explaining this very well :)
13:03 eljrax If I use my example from the other day - http://fpaste.org/154339/01841114/
13:03 eljrax Say that was a client translator. How many times would "File was created" appear in the gluster log?
13:04 eljrax If I wrote a file, on a volume with replicas set to 2
13:05 overclk eljrax, this translator is on the client side? if yes, then you'd get a log message in the call & callback path.
13:06 eljrax But only once through _create and once through create_cbk, regardless of how many replicas?
13:07 overclk eljrax, depends on where the translator is loaded in the graph.
13:08 overclk eljrax, if you load it such that "ccfsync_create" is invoked by the replication translator (directly or indirectly), then you would get >1 depending on the replication factor.
13:11 eljrax So if I put it immediately after writebehind (in the default translator chain/graph), it should only go through ccfsync_create once per file creation?
13:12 overclk eljrax, yep. where were you loading it now?
13:12 eljrax As a server translator, right before changelog
13:13 eljrax My trouble is, I want to take any file created, put its full path and what action was done on it on a queue
13:13 overclk eljrax, in that case you'd get one message per replicated subvolume.
13:13 eljrax I'd then have things reading off of that queue, and do stuff with it. My problem currently is that each event ends up in that queue twice (or however many replicas I have)
13:13 overclk eljrax, correct, one message per replicated subvolume "brick"
13:14 eljrax Yeah.. Which in hindsight makes perfect sense :)
13:15 eljrax But now I figure I can put it on the queue from a client translator (before replication translator), and it'd only appear once in the queue
13:15 overclk eljrax, if it's on the client, the client does your "stuff" :)
13:17 vimal joined #gluster-dev
13:17 overclk eljrax, I'm not sure if that's what you'd like.
13:19 eljrax I'm not sure what "stuff" means here :)
13:21 eljrax I'll just try to give it a go, see what happens
13:22 vimal joined #gluster-dev
13:23 overclk eljrax, by "stuff" I mean that any processing the translator does will happen on the client.
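
(For illustration: a minimal sketch of the create fop pair under discussion, following the glusterfs 3.x xlator API. The ccfsync name comes from eljrax's paste; the usual boilerplate (headers such as xlator.h, init/fini, the fops table) is omitted. Loaded in the client graph above the replication translator, the log line fires once per application create; loaded below it, or on the server side, once per replica brick.)

    /* callback path: runs when the child translator answers the create */
    int32_t
    ccfsync_create_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                        int32_t op_ret, int32_t op_errno, fd_t *fd,
                        inode_t *inode, struct iatt *buf,
                        struct iatt *preparent, struct iatt *postparent,
                        dict_t *xdata)
    {
            gf_log (this->name, GF_LOG_INFO, "File was created");
            STACK_UNWIND_STRICT (create, frame, op_ret, op_errno, fd, inode,
                                 buf, preparent, postparent, xdata);
            return 0;
    }

    /* call path: pass the create down to the next translator in the graph */
    int32_t
    ccfsync_create (call_frame_t *frame, xlator_t *this, loc_t *loc,
                    int32_t flags, mode_t mode, mode_t umask, fd_t *fd,
                    dict_t *xdata)
    {
            STACK_WIND (frame, ccfsync_create_cbk, FIRST_CHILD (this),
                        FIRST_CHILD (this)->fops->create,
                        loc, flags, mode, umask, fd, xdata);
            return 0;
    }
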
13:54 _Bryan_ joined #gluster-dev
13:55 kshlm joined #gluster-dev
14:22 spandit hagarth, ndevos: I will check that
14:22 ndevos spandit: thanks :)
14:22 bala joined #gluster-dev
14:23 spandit ndevos, thanks for pointing it out, I might have missed it :)
14:23 spandit ndevos++
14:23 glusterbot spandit: ndevos's karma is now 60
14:58 shubhendu joined #gluster-dev
15:25 vimal joined #gluster-dev
15:26 shubhendu joined #gluster-dev
16:29 nkhare joined #gluster-dev
16:40 lalatenduM joined #gluster-dev
16:48 hagarth joined #gluster-dev
17:04 lalatenduM kkeithley, are you around?
17:15 eljrax A client-translator was exactly what I needed.. Now we're cookin'!
17:20 hagarth lalatenduM: ping, pm?
17:20 lalatenduM hagarth, sure
17:22 anoopcs joined #gluster-dev
18:20 lalatenduM kkeithley, have sent a mail with the first cut :)
18:20 lalatenduM to you
18:59 lalatenduM joined #gluster-dev
21:06 badone joined #gluster-dev
