
IRC log for #gluster-dev, 2016-12-27


All times shown according to UTC.

Time Nick Message
02:06 nbalacha joined #gluster-dev
02:48 ilbot3 joined #gluster-dev
02:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:43 riyas joined #gluster-dev
03:46 gem joined #gluster-dev
03:50 ashiq joined #gluster-dev
03:54 atinm joined #gluster-dev
03:55 msvbhat joined #gluster-dev
03:56 jiffin joined #gluster-dev
04:05 itisravi joined #gluster-dev
04:07 nbalacha joined #gluster-dev
04:10 mchangir joined #gluster-dev
04:24 vbellur joined #gluster-dev
04:27 hchiramm joined #gluster-dev
04:30 ppai joined #gluster-dev
04:35 kotreshhr joined #gluster-dev
05:04 sanoj joined #gluster-dev
05:05 prasanth joined #gluster-dev
05:06 Shu6h3ndu joined #gluster-dev
05:14 ashiq joined #gluster-dev
05:17 gyadav joined #gluster-dev
05:17 karthik_us joined #gluster-dev
05:19 ndarshan joined #gluster-dev
05:20 gyadav joined #gluster-dev
05:25 aravindavk joined #gluster-dev
05:26 apandey joined #gluster-dev
05:38 rafi joined #gluster-dev
05:53 hgowtham joined #gluster-dev
06:04 susant joined #gluster-dev
06:16 mchangir joined #gluster-dev
06:17 itisravi joined #gluster-dev
06:20 kdhananjay joined #gluster-dev
06:24 Anjana joined #gluster-dev
06:25 asengupt joined #gluster-dev
06:33 jiffin joined #gluster-dev
06:42 msvbhat joined #gluster-dev
07:10 jiffin joined #gluster-dev
07:17 Saravanakmr joined #gluster-dev
07:29 ankitraj joined #gluster-dev
07:31 devyani7 joined #gluster-dev
07:51 Saravanakmr joined #gluster-dev
07:53 pranithk1 joined #gluster-dev
08:05 ankitraj joined #gluster-dev
08:12 ashiq joined #gluster-dev
08:14 gyadav joined #gluster-dev
08:17 prasanth joined #gluster-dev
08:21 itisravi nigelb: ping
08:22 skoduri joined #gluster-dev
08:27 nigelb itisravi: hey, whats up?
08:29 itisravi nigelb: nvm, saw your mail :)
08:30 nigelb ha
08:45 devyani7_ joined #gluster-dev
08:49 itisravi nigelb: it's failing on netbsd because netbsd doesn't support the `stat -c %` syntax I used in http://review.gluster.org/#/c/16288/2/tests/basic/afr/split-brain-favorite-child-policy.t
09:00 nigelb itisravi: I think it's `stat -f` in netbsd?
09:01 pkalever hchiramm: pranithk1 https://bugzilla.redhat.com/show_bug.cgi?id=1376022
09:01 glusterbot Bug 1376022: medium, unspecified, ---, bchilds, ASSIGNED , RFE iscsi multipath
09:03 msvbhat joined #gluster-dev
09:20 msvbhat joined #gluster-dev
09:32 hchiramm pkalever, already noticed
09:32 hchiramm I have the data now
09:32 pkalever hchiramm: it seems multiple targets don't work
09:33 pkalever only mpath devices are supported
09:33 hchiramm it won't
09:33 hchiramm it needs manual configuration
09:33 pkalever so in the PVC artifact file we cannot give multiple IQNs or target portals
09:34 pkalever hchiramm: but since the bug has devel ack, I think we should be ready for a future release
09:35 hchiramm pkalever, someone is already fixing it
09:35 hchiramm pkalever, we cannot give multiple target portals now
09:36 pkalever hchiramm: right
09:36 hchiramm pkalever, but the question is, do we have multipath support
09:36 hchiramm ?:)
09:36 hchiramm from the server side.
09:37 pkalever hchiramm: In the past I have tested the mpath devices by manually creating them, that's why it worked for me
09:37 hchiramm pkalever, correct.
09:37 pkalever hchiramm: which server are you talking about ?
09:37 hchiramm as a iscsi export
09:37 pkalever hchiramm: yes we do have
09:38 ashiq joined #gluster-dev
09:38 pkalever hchiramm: tcmu-runner supports it
09:38 hchiramm cool.. then
09:39 pkalever hchiramm: we just need to wait for kube's next targeted release?
09:40 hchiramm pkalever, someone is already working on it
09:40 pkalever hchiramm: right :) I have noticed it
09:40 hchiramm so, hopefully it will land soon
09:41 hchiramm I will do a follow up and get the status
09:41 hchiramm pkalever, pranithk1 to summarize: at present, we don't have a way to specify it as an array
09:41 pkalever hchiramm: right, maybe next release? I think it's worth checking with the kube team
09:41 hchiramm pkalever, yes, I will
09:41 pkalever hchiramm: great
09:42 hchiramm pkalever, pranithk1 but the iscsi locking issue is resolved
09:42 hchiramm I mean, it should not allow more than one pod to mount RW
09:42 hchiramm :)
09:42 hchiramm may be we need to test it out
09:42 pkalever hchiramm: correct
09:42 pkalever hchiramm: yeah
09:43 hchiramm iic, that's an important feature for us. Isn't it?
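For context on the limitation discussed above: the Kubernetes iscsi volume source at the time took a single scalar `targetPortal` and a single `iqn`, so there was simply no field in which to list multiple portals. A hedged sketch of such a PV spec (names, addresses, and the IQN are invented for illustration):

```yaml
# Sketch of a PersistentVolume using the iscsi volume source as it
# existed at the time -- targetPortal and iqn are single scalar fields,
# which is why multiple portals/IQNs cannot be given here.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  iscsi:
    targetPortal: 10.0.0.1:3260            # one portal only
    iqn: iqn.2016-12.com.example:target0   # one target only
    lun: 0
    fsType: ext4
    readOnly: false
```

Multipath setups therefore had to be assembled manually (pre-created mpath devices), as pkalever notes above.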
09:47 Muthu joined #gluster-dev
09:49 karthik_us joined #gluster-dev
09:52 kotreshhr joined #gluster-dev
09:59 poornima_ joined #gluster-dev
10:28 itisravi nigelb: -f is the right flag, but there is a whole lot of mumbo jumbo going on in tests/include.rc in the overloaded stat function, so it does not work out of the box
10:29 * itisravi curses himself for having to work on fixing .ts to work on netbsd.
10:31 itisravi pranithk1: ^^
10:33 nigelb itisravi: Damn.
10:33 nigelb itisravi: Talk to Emmanuel over email.
10:33 nigelb He should be able to help navigate that.
10:33 itisravi nigelb: yeah
10:49 nigelb itisravi: You can either ignore the bug on centos7 until you get netbsd help, or fix it but mark the test as bad on netbsd.
10:49 nigelb They're both not-good options.
10:53 itisravi nigelb: marking it bad on netbsd seems better. Let me speak to pranithk1 and see which one he is comfortable merging.
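The divergence behind the failure: GNU coreutils `stat` takes `-c FORMAT` (e.g. `%a` for octal permissions), while NetBSD's `stat` takes `-f FORMAT` with different format letters (`%Lp` for the "low" permission subfield). A portability wrapper along the lines of what tests/include.rc attempts could look like this — a hypothetical sketch, not the actual include.rc code:

```shell
#!/bin/sh
# stat_mode: print a file's octal permission bits portably.
# GNU coreutils stat understands `-c %a`; NetBSD/FreeBSD stat needs
# `-f %Lp` instead. Probe the GNU form first, fall back to the BSD one.
stat_mode() {
    if stat -c %a "$1" 2>/dev/null; then
        return 0
    fi
    stat -f %Lp "$1"
}

touch /tmp/statdemo.$$
chmod 644 /tmp/statdemo.$$
stat_mode /tmp/statdemo.$$    # prints 644
rm -f /tmp/statdemo.$$
```

The actual include.rc helper overloads `stat` itself, which is why extra care is needed to keep the two format dialects from leaking into test scripts.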
11:15 rafi joined #gluster-dev
11:32 karthik_us joined #gluster-dev
11:34 mchangir joined #gluster-dev
11:47 msvbhat joined #gluster-dev
11:50 kotreshhr joined #gluster-dev
11:52 pranithk1 itisravi: Is this the same one we fixed just now?
11:52 itisravi pranithk1: yup
11:52 Saravanakmr joined #gluster-dev
11:52 pranithk1 itisravi: I guess you should get better at regex rather than curse yourself :-P . It is not rocket science.
11:52 itisravi pranithk1: yeah that is an option
11:58 ashiq joined #gluster-dev
12:03 kotreshhr left #gluster-dev
12:40 ppai kshlm, around ?
12:40 kshlm Yes.
12:41 ppai The custom encoding feature provided by net/rpc is usable only if both the client and server are in go. Is that correct?
12:42 kshlm It's the easiest if both are in go.
12:42 kshlm But it is simple enough to be implemented in any language.
12:43 kshlm Which we actually did, https://github.com/kshlm/pbrpc
12:43 ppai I see that the custom encoding is applied only to the payload, and the payload received by the custom codec is not the payload from the network layer.
12:44 kshlm This has a go protobuf codec. And a C implementation of the rpc format.
12:46 ppai The net/rpc package has a wrapper around the codec payload. https://golang.org/pkg/net/rpc/#Request
12:47 ppai I see that it assigns a sequence number and service method before the actual payload
12:49 ppai As I understand it, the client (assuming it uses sunrpc) should be modified to include this wrapper around the actual payload when talking to the go server.
12:51 kshlm ppai, The client doesn't need to.
12:52 kshlm The onwire format is left to you to decide upon.
12:52 kshlm So you can still use the sunrpc format.
12:53 ppai kshlm: Here https://github.com/kshlm/pbrpc/blob/master/pbrpc.c#L255-L266
12:53 kshlm The ServerCodec then decodes the payload, and translates it to the Request type that net/rpc expects.
12:53 kshlm The same happens in reverse for the ClientCodec.
12:54 kshlm ppai, That is the onwire format we chose. That can be anything you want.
12:55 kshlm ppai, The major difference between sunrpc and net/rpc is the way they address procedures.
12:56 ppai just to confirm, what is sent on wire by net/rpc package is just the payload that we provide ?
12:56 kshlm In net/rpc, procedures are addressed using strings. sunrpc uses numbers (prognum and procnum)
12:57 kshlm Yup. The net/rpc server writes/reads whatever the codec provides.
12:57 ppai So the request.Seq is set by whom ?
12:57 kshlm By the client.
12:57 kshlm The flow would be like this.
12:58 ppai And the client (if using sunrpc) does not set Seq right ?
12:58 kshlm Client performs a call. `someserver.Call(procedure, args)`
12:59 kshlm Call() creates the *Request, and calls codec.WriteRequest() with it.
13:00 kshlm On the server, when a new request arrives, the server first calls codec.ReadRequestHeader(), which returns a *Request to net/rpc.
13:01 ppai I get those parts, how does a C client create *Request ?
13:01 kshlm The package checks if the request can be handled, i.e. if the procedure is present, then calls ReadRequestBody().
13:01 kshlm Ok, cool.
13:01 kshlm The C client sets it.
13:02 ppai For which the C client should be modified to create this Request structure right ?
13:02 kshlm Check https://github.com/kshlm/pbrpc/blob/master/pbrpc-clnt.c#L159-L193
13:03 kshlm ppai, Not really. As that would change the sunrpc spec.
13:04 kshlm I'm pretty sure sunrpc has some sort of sequence numbers in it.
13:04 ppai sunrpc has an XID
13:05 kshlm Well then they can be used as identifiers.
13:07 kshlm The codecs job is to map between an onwire format and what net/rpc expects.
13:07 kshlm Most codecs define their own onwire format.
13:07 kshlm But since sunrpc already has a defined rpc format, you will need to check if you can map between sunrpc and net/rpc.
13:09 ppai My question was, if the onwire format was sunrpc, can I just leave the client as is and replace the sunrpc server with a net/rpc+codec(sunrpc) server and expect things to work ?
13:10 kshlm You should be able to. If you can map out properly.
13:15 ppai I see. So there's no pre-processing (on the network payload) done at the net/rpc layer before ReadRequestHeader is invoked.
13:17 rraja joined #gluster-dev
13:23 karthik_us joined #gluster-dev
13:29 Muthu joined #gluster-dev
13:45 kdhananjay joined #gluster-dev
13:57 jiffin joined #gluster-dev
14:41 ankitraj joined #gluster-dev
14:53 mchangir dlambrig_, are we meeting today ?
15:08 susant joined #gluster-dev
15:38 dlambrig_ mchangir: this is not a working week in the US- will see you next week or feel free to send me an IRC message
15:39 mchangir oops :P
16:01 riyas joined #gluster-dev
16:10 wushudoin joined #gluster-dev
16:34 timotheus1 joined #gluster-dev
16:36 lpabon joined #gluster-dev
17:27 jiffin joined #gluster-dev
17:31 Anjana joined #gluster-dev
17:49 msvbhat joined #gluster-dev
20:18 devyani7 joined #gluster-dev
21:44 ankitraj joined #gluster-dev
22:38 Acinonyx_ joined #gluster-dev
