
IRC log for #gluster-dev, 2015-07-20


All times shown according to UTC.

Time Nick Message
00:44 topshare joined #gluster-dev
01:01 topshare joined #gluster-dev
01:07 topshare_ joined #gluster-dev
01:18 vmallika joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:50 topshare joined #gluster-dev
02:22 topshare joined #gluster-dev
02:47 kshlm joined #gluster-dev
02:47 topshare joined #gluster-dev
03:17 sakshi joined #gluster-dev
03:35 topshare joined #gluster-dev
03:49 itisravi joined #gluster-dev
03:50 shubhendu joined #gluster-dev
04:02 atinm joined #gluster-dev
04:05 josferna joined #gluster-dev
04:07 schandra joined #gluster-dev
04:14 schandra joined #gluster-dev
04:18 hagarth joined #gluster-dev
04:18 vimal joined #gluster-dev
04:22 nbalacha joined #gluster-dev
04:25 kdhananjay joined #gluster-dev
04:26 kdhananjay joined #gluster-dev
04:34 ppai joined #gluster-dev
04:34 topshare joined #gluster-dev
04:39 pranithk joined #gluster-dev
04:47 gem joined #gluster-dev
04:51 itisravi joined #gluster-dev
04:53 ashishpandey joined #gluster-dev
04:56 ndarshan joined #gluster-dev
05:06 pppp joined #gluster-dev
05:13 aravindavk joined #gluster-dev
05:22 deepakcs joined #gluster-dev
05:25 Bhaskarakiran joined #gluster-dev
05:26 pranithk joined #gluster-dev
05:38 soumya joined #gluster-dev
05:39 gbit joined #gluster-dev
05:39 hgowtham joined #gluster-dev
05:42 anekkunt joined #gluster-dev
05:46 vmallika joined #gluster-dev
05:58 saurabh_ joined #gluster-dev
06:02 Saravana_ joined #gluster-dev
06:05 overclk joined #gluster-dev
06:06 spalai joined #gluster-dev
06:08 atalur joined #gluster-dev
06:14 hagarth joined #gluster-dev
06:15 ndarshan joined #gluster-dev
06:16 raghu joined #gluster-dev
06:16 topshare joined #gluster-dev
06:17 Manikandan joined #gluster-dev
06:21 pranithk joined #gluster-dev
06:31 RedW joined #gluster-dev
06:39 soumya joined #gluster-dev
06:45 jiffin joined #gluster-dev
06:56 Manikandan joined #gluster-dev
07:02 atalur joined #gluster-dev
07:07 kshlm joined #gluster-dev
07:21 ndarshan joined #gluster-dev
07:22 Manikandan joined #gluster-dev
07:25 raghu overclk: can you retrigger regression for http://review.gluster.org/#/c/11703/? I am not able to log in to jenkins
07:31 kshlm joined #gluster-dev
07:33 topshare joined #gluster-dev
07:34 pranithk joined #gluster-dev
07:56 pranithk joined #gluster-dev
07:56 atalur joined #gluster-dev
08:01 topshare joined #gluster-dev
08:11 saurabh_ joined #gluster-dev
08:13 itisravi joined #gluster-dev
08:18 overclk raghu, sure.
08:24 itisravi pranithk: http://review.gluster.org/#/c/11713/ is the one. would appreciate it if you can (re)trigger linux and netbsd regression
08:28 pranithk itisravi: not working for me either
08:28 itisravi pranithk: oh :( I guess all logins have some problem then.
08:29 pranithk itisravi: yeah :-(
08:29 hagarth joined #gluster-dev
08:33 overclk raghu, done.
08:34 itisravi overclk: seems your jenkins login is working... if you don't mind, could you trigger linux and netbsd regression for http://review.gluster.org/#/c/11713/
08:43 pcaruana joined #gluster-dev
08:53 atalur joined #gluster-dev
08:54 topshare joined #gluster-dev
09:00 topshare joined #gluster-dev
09:01 hchiramm schandra++ thx!
09:01 glusterbot hchiramm: schandra's karma is now 11
09:11 kdhananjay joined #gluster-dev
09:14 atalur_ joined #gluster-dev
09:14 rjoseph joined #gluster-dev
09:14 atalur_ xavih, ping
09:14 glusterbot atalur_: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:15 atalur_ xavih, are you free now?
09:15 xavih atalur_: yes
09:15 kaushal_ joined #gluster-dev
09:16 atalur_ xavih, okay then. Wanted to know the ideas you said you had in mind
09:16 vmallika joined #gluster-dev
09:16 atalur_ xavih, we can pick up from there.
09:16 xavih atalur_: have you done any work?
09:17 atalur_ xavih, as in anything else right now? no
09:17 xavih atalur_: my main idea was to simplify as much as possible the lock management for cluster xlators
09:18 xavih atalur_: what I've been thinking is to write a new api in glusterfs and a new xlator that would work on top of protocol/client (and storage/posix if needed, but I don't think so)
09:18 atalur_ xavih, correct, and given that more than one xlator uses them, we are introducing a common api
09:18 xavih atalur_: yes
09:19 xavih atalur_: I also wanted to be able to efficiently handle nested locks from two xlators
09:19 atalur_ xavih, okay.
09:20 xavih atalur_: for example a rename on dht could use this new api to lock source and destination subvolumes, and then these subvolumes could also use the same api for their work, without multiple lock requests going over the network
09:20 xavih atalur_: my idea is to have transactions
09:21 xavih atalur_: any xlator with more than one subvolume that wants to send a request to two or more of them should use transactions
09:21 topshare joined #gluster-dev
09:21 xavih atalur_: it should issue a gf_txn_create request. This would mark the request (using xdata) as a transaction
09:22 atalur_ xavih, if my understanding of ^^ is correct, you are saying multiple lock-requests will be packed in one transaction and the request will be sent?
09:23 xavih atalur_: after that it would start sending the request with normal STACK_WIND calls
09:24 kdhananjay joined #gluster-dev
09:25 xavih atalur_: not exactly. Each inode can be locked or not locked. Only the xlator that uses transactions nearest to the protocol/client xlator will cause locks over the network
09:26 gbit joined #gluster-dev
09:26 xavih atalur_: for example, if dht uses transactions, and below it there's afr that also uses transactions, the dht transaction will not directly generate any lock request to the bricks
09:26 xavih atalur_: the transaction state from afr will determine the transaction status of dht
09:27 xavih atalur_: if the transactions on dht and afr involve the same inode, only one lock on the inode will be issued
09:27 anekkunt joined #gluster-dev
09:27 atalur_ xavih, okay. got it. dht will only request a lock from the xlator below, but it won't go through the network unless this new api sends the request
09:28 atalur_ xavih, hmm.. reduces the number of requests.
09:28 xavih atalur_: yes, all clustered xlators only need to create a new transaction and attach it to the requests they send. All other work is done by the transaction system.
09:28 atalur_ xavih, the logic of clubbing the requests should reside in the transaction framework
09:29 xavih atalur_: yes. I think it's a good advantage. This could allow extending the use of transactions to many places where they are not used today in order to avoid additional network requests
09:29 xavih atalur_: using transactions also simplifies the logic of many functions
09:30 xavih atalur_: the client side of the transaction api only consists of 3 functions: gf_txn_create, gf_txn_prepared and gf_txn_release
09:31 xavih atalur_: gf_txn_create is used to create a new transaction and bind it to an xdata that will be used for the requests (gf_txn_assign can also be used to assign a transaction to an xdata if necessary)
09:31 xavih atalur_: gf_txn_prepared is used when the xlator has finished sending the requests (using STACK_WIND)
09:32 xavih atalur_: gf_txn_release is used when the xlator receives all the answers and it has finished processing the transaction
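
A minimal sketch of what this client-side api could look like; only the four function names come from the discussion, while the gf_txn_t type, the header name and the exact signatures are assumptions:

    /* hypothetical gf-txn.h; signatures are guesses, not actual Gluster
     * API (xlator_t and dict_t come from the usual glusterfs headers) */
    typedef struct gf_txn gf_txn_t;

    /* Create a transaction and mark 'xdata' as belonging to it.  If
     * 'xdata' already carries a transaction, the new one becomes a
     * child of it (nesting, discussed further below). */
    gf_txn_t *gf_txn_create (xlator_t *this, dict_t *xdata);

    /* Mark an additional xdata as part of an existing transaction,
     * for fops that send different xdata to different subvolumes. */
    int gf_txn_assign (gf_txn_t *txn, dict_t *xdata);

    /* Called after all STACK_WINDs for the transaction have been
     * issued (later in the discussion this gains a child count). */
    int gf_txn_prepared (gf_txn_t *txn);

    /* Called by the creator once all callbacks have been processed. */
    int gf_txn_release (gf_txn_t *txn);
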
09:32 atalur_ xavih, gf_txn_release should be invoked only by the xlator that invoked gf_txn_create
09:32 xavih atalur_: yes
09:32 xavih atalur_: I think it's a very simple and lightweight framework for users of the transaction api
09:33 atalur_ xavih, gf_txn_assign should also have a counter-part
09:33 xavih atalur_: the real work will be made inside the new xlator. At first I thought of implementing it inside protocol/client itself, but a new xlator would be more modular
09:33 xavih atalur_: what do you mean?
09:34 atalur_ xavih, like gf_txn_create has gf_txn_release, shouldn't assign also have a corresponding api that says the xlator that assigned the transaction has successfully completed its job
09:35 atalur_ xavih, correct. a separate xlator would be more modular.
09:36 soumya joined #gluster-dev
09:36 xavih atalur_: no, no. gf_txn_assign is only necessary if the xlator sends different requests to different subvolumes and requires different xdata arguments. In this case, the xdata modified by gf_txn_create may not be used in a particular request to one subvolume
09:37 xavih atalur_: in this case gf_txn_assign is used to mark the new xdata to belong to the same transaction, but the transaction is still the same
09:37 xavih atalur_: once all the answers have been received, a single gf_txn_release is needed
09:38 xavih atalur_: real lock calls are delayed until the request reaches the new xlator
09:39 xavih atalur_: this is also compatible with xlators implementing virtual inodes or entries that do not exist on bricks
09:40 atalur_ xavih, understood that real lock calls are delayed till it reaches the new xlator.
09:41 atalur_ xavih, gf_txn_release corresponds to success/failure of locking the inode? I mean, you said "once all the answers have been received"; are the answers here a response about the status of locks?
09:41 xavih atalur_: do you think it's a good approach for users of locks to substitute current implementations on each cluster xlator?
09:41 xavih atalur_: no, no, it simply tells to the transaction framework that the client has finished using the transaction
09:42 xavih atalur_: error in locking will be reported in the normal cbk for the request
09:43 xavih atalur_: for example a writev made using transactions will call gf_txn_create, then it will send writev requests using STACK_WIND
09:43 xavih atalur_: once all requests have been sent, it will call gf_txn_prepared. Then it will wait until it receives all cbks of writev
09:44 xavih atalur_: if the needed locks cannot be acquired, the cbk of writev will receive the corresponding error
09:44 xavih atalur_: when all cbks of the writev have been received, a final call to gf_txn_release is made
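
Put together, the writev flow described above might look roughly like this inside a cluster xlator; this is a sketch against the assumed api, and sample_writev and its cbk are illustrative names only:

    /* sketch only: a transactional writev in a cluster xlator */
    int32_t
    sample_writev (call_frame_t *frame, xlator_t *this, fd_t *fd,
                   struct iovec *vector, int32_t count, off_t offset,
                   uint32_t flags, struct iobref *iobref, dict_t *xdata)
    {
            gf_txn_t      *txn  = gf_txn_create (this, xdata);
            xlator_list_t *trav = NULL;

            /* real code must record the expected number of callbacks
             * in frame->local before winding, to avoid races */
            for (trav = this->children; trav; trav = trav->next)
                    STACK_WIND (frame, sample_writev_cbk, trav->xlator,
                                trav->xlator->fops->writev,
                                fd, vector, count, offset, flags,
                                iobref, xdata);

            /* all requests sent; the txn framework may now lock */
            gf_txn_prepared (txn);

            /* lock failures arrive as ordinary errors (op_ret < 0) in
             * sample_writev_cbk; after the last cbk has been handled,
             * the creator calls gf_txn_release (txn). */
            return 0;
    }
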
09:45 atinm joined #gluster-dev
09:46 atalur_ xavih, this is what I meant too. :) I think I didn't write my understanding correctly.
09:46 xavih atalur_: ah, ok
09:47 xavih atalur_: internally, gf_txn_create will check if the xdata already contains a transaction. If that's the case, the new transaction will be assigned as a child of the parent transaction. This will allow managing nested transactions
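
The nesting check itself could be as small as this sketch; the xdata key, the memory-accounting type and the parent field are invented:

    /* sketch of the nesting logic inside gf_txn_create */
    gf_txn_t *
    gf_txn_create (xlator_t *this, dict_t *xdata)
    {
            gf_txn_t *txn    = GF_CALLOC (1, sizeof (*txn), gf_txn_mt_txn_t);
            gf_txn_t *parent = NULL;

            if (!txn)
                    return NULL;

            /* "glusterfs.txn" is an invented xdata key */
            if (dict_get_ptr (xdata, "glusterfs.txn", (void **) &parent) == 0)
                    txn->parent = parent;   /* nested transaction */

            dict_set_ptr (xdata, "glusterfs.txn", txn);
            return txn;
    }
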
09:48 atalur_ xavih, about your question of all cluster xlators using this api. It will be good to achieve substitution. I'm yet to analyze if there will be any complications with more than one cluster xlator in the stack. I'll analyze this design today.
09:49 xavih atalur_: what do you mean by substitution?
09:49 atalur_ xavih, nested transactions as in more than one xlator requesting for locks on the same inode, right?
09:50 atalur_ xavih, it was in response to "do you think it's a good approach for users of locks to substitute current implementations on each cluster xlator?" I meant substitute current implementation.
09:50 xavih atalur_: yes, that would be possible, but xlators do not need to care whether any parent xlator has created another transaction. It will be handled transparently by the transaction framework
09:51 atalur_ xavih, yes. correct. that's how it will be modular. each xlator doesn't have to worry about it.
09:52 xavih atalur_: the real work and locking logic is moved to the txn xlator
09:52 atalur_ xavih, got it.
09:52 atalur_ xavih, locking logic too?
09:52 xavih atalur_: inside this xlator we can implement all locking optimizations, like eager-locking, delayed unlock, ...
09:52 atalur_ xavih, what do you mean?
09:53 xavih atalur_: yes. I think locking logic is one of the complex problems that is better implemented in a single shared place
09:54 atalur_ xavih, okay. do you mean to say locks will still be provided by the existing locks xlator but generating lock requests and special cases like eager-locking, delayed unlock etc will be done by this txn-xlator?
09:55 xavih atalur_: since locks are deferred until the transactions reach the transaction xlator, it's possible to control which inodes are locked and decide if the inode needs to be unlocked immediately or the unlock can be deferred to improve performance
09:55 atalur_ xavih, correct.
09:55 rjoseph joined #gluster-dev
09:56 xavih atalur_: for the first implementation, yes. But having an independent xlator managing all this, it will be possible in the future to use other synchronization mechanisms without having to change afr, ec, dht, ...
09:59 atalur_ xavih, "independent xlator managing all this" .. 'all this' here is just the decision of whether or not to take lock right?
10:00 xavih atalur_: no, it's the transaction itself. In a future implementation, transactions could even synchronize the requests without issuing inodelk calls
10:01 xavih atalur_: and defer the ordering to the server side, improving parallelism and throughput
10:01 xavih atalur_: but this is another topic... :P
10:01 atalur_ xavih, lets come back to that later :)
10:01 atalur_ xavih, I do have a few questions about it though :D
10:02 xavih atalur_: tell me
10:03 atalur_ xavih, how do you plan to synchronize between multiple clients without locking? or did you mean defer the lock only in case there already exists a lock on the inode?
10:05 xavih atalur_: no, no, I mean to completely remove the locking functions and implement a server side algorithm to sort the requests and ensure that all bricks process the same requests in the same order
10:05 atalur_ xavih, aah. got it :)
10:06 xavih atalur_: this has some good advantages. For example, many operations get simplified because there won't be any conflict or race with other fops. Another example is that initial locks could receive valid information for multiple bricks in afr and ec (now this is not possible because lookup cannot be executed with locks, so concurrent fops can cause each brick to return different data)
10:07 xavih atalur_: but this is another complex topic to discuss that will require much more work :P
10:08 xavih atalur_: sorry I meant "initial lookups" instead of "initial locks"
10:09 atalur_ xavih, hmm.. but as you said, let us keep it for later. for now let us proceed with this new api. :)
10:10 xavih atalur_: of course
10:10 xavih atalur_: :)
10:10 xavih atalur_: a transaction can contain more than one fop as long as the involved inodes are the same
10:11 xavih atalur_: if more complex combinations of fops inside a single transaction are required, this will need to be thought in more detail
10:14 atalur_ xavih, so this xlator, on receiving the first fop on an inode on which the lock has not been taken, just proceeds and sends the lock request.
10:16 atalur_ xavih, here it can anticipate that future fop requests will be made on the inode and it can modify the lock request (according to its anticipation) that it receives from the xlators above.
10:17 atalur_ xavih, the other requests that follow on the inode can be directly wound down without a lock request. the unlock mechanism should also be handled by this xlator
10:18 xavih atalur_: it's a bit more complex. When the xlator receives a request belonging to a transaction, it doesn't know to which bricks the request will be sent (it only knows one of the bricks)
10:18 atalur_ xavih, as you said, possible combinations of complex fops have to be thought of. We need to check if there can be any complications involved
10:18 xavih atalur_: it needs to wait until gf_txn_prepared() is called
10:18 xavih atalur_: at this point the transaction will know to which bricks the request must be sent
10:19 xavih atalur_: this is needed for blocking lock calls, though a non-blocking lock call could be sent immediately
10:22 ndevos xavih, atalur_: that sounds very much like compound operations, which are something that would be nice to have
10:24 ndevos like http://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%204.0/composite-operations/
10:28 atalur_ xavih, how is it decided who calls gf_txn_prepared() ?
10:29 xavih ndevos: I think it's not exactly the same. We are trying to implement a generic way of guaranteeing that a fop is processed atomically in all bricks (currently afr, ec and maybe others use a custom locking method to do this)
10:29 xavih ndevos: this can be used also for compound operations when they are implemented
10:30 xavih atalur_: it's decided by the user. When it has sent all the requests to subvolumes (using STACK_WIND), it must call gf_txn_prepared
10:31 atalur_ xavih, the user here is the xlators, correct?
10:31 xavih atalur_: yes, afr, ec, ...
10:31 atalur_ xavih, cluster xlators
10:32 atalur_ xavih, assuming dht initiated gf_txn_create as it wanted a lock on some inode i1. afr too wanted a lock on the same inode. now, is gf_txn_prepared invoked by both dht and afr?
10:32 xavih atalur_: yes, but on different transactions
10:32 xavih atalur_: both dht and afr must call gf_txn_create
10:33 xavih atalur_: the second gf_txn_create() will create a child transaction of dht's transaction
10:33 xavih atalur_: when afr's transaction is ready (i.e. all needed locks acquired), the parent transaction will also be marked as ready
10:35 xavih atalur_: well, the child corresponding to the ready transaction will be marked as ready. DHT's transaction can have other child transactions not ready yet
10:38 atalur_ xavih, I think here is where I'm getting confused. The lock request has to be sent only after the final gf_txn_prepared is sent, correct?
10:39 atalur_ xavih, even though they work on diff txns, it is txn-xlator's job to club them together and send a request
10:40 gbit joined #gluster-dev
10:40 xavih atalur_: it's an implementation decision, but once txn-xlator receives a request on a non-locked inode, it could immediately issue a non-blocking lock call
10:41 xavih atalur_: if that fails, it will need to wait until gf_txn_prepared() is called to initiate a sequential blocking lock call
10:42 xavih atalur_: note that the txn-xlator will only see the last transaction (it can see the parents of that, but they do not have relevant inode information)
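
In rough pseudo-C, that optimistic/blocking split inside the (still hypothetical) txn-xlator could look like this; every helper name here is invented:

    /* sketch: txn-xlator sees a fop that carries a transaction */
    static int
    txn_handle_fop (xlator_t *this, call_frame_t *frame,
                    gf_txn_t *txn, inode_t *inode)
    {
            if (!txn_inode_is_locked (this, inode)) {
                    /* optimistic non-blocking attempt, sent immediately */
                    if (txn_nonblocking_inodelk (this, inode) != 0) {
                            /* failed: park the fop; once gf_txn_prepared()
                             * has run and all involved bricks are known,
                             * retry with sequential blocking inodelk calls */
                            txn_defer_fop (txn, frame);
                            return 0;
                    }
            }
            /* inode locked (or lock already held): wind the fop down */
            return txn_wind_fop (this, frame);
    }
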
10:45 atalur_ xavih, have you thought of the timeline of events when multiple cluster xlators are in the graph?
10:46 xavih atalur_: to some extent, yes. I think the nesting method is ok
10:47 xavih atalur_: I've basically thought of two levels of transactions, but I think it can be extended
10:47 anekkunt joined #gluster-dev
10:47 ira joined #gluster-dev
10:51 pppp joined #gluster-dev
10:52 atalur_ xavih, will the invocation of gf_txn_prepared affect the decision of nesting the requests?
10:55 xavih atalur_: what do you mean? the child xlator is the one that decides if it creates a transaction or not. Once created, it's nested and normal operation will happen...
11:04 atalur_ xavih, child xlator meaning txn-xlator
11:04 atalur_ ?
11:04 xavih atalur_: no, another cluster xlator, for example afr as a child of dht
11:04 atalur_ xavih, okay. let me put it this way. What would the function of gf_txn_prepare be with respect to txn-xlator?
11:06 xavih atalur_: gf_txn_prepare indicates that the request has been sent to all subvolumes, so the txn-xlator will have all needed information
11:07 xavih atalur_: in fact I've just realized that there could exist an intermediate xlator that delays the forwarding of requests or even blocks them (returning an early error). To solve this problem we will need to pass the number of successfully called subvolumes as an argument to gf_txn_prepare
11:08 xavih atalur_: anyway, having an xlator that delays requests between afr and txn-xlator would be bad
11:08 xavih atalur_: for performance reasons
11:09 atalur_ xavih, shouldn't delaying of forwarding the request be decided only by txn-xlator?
11:09 atalur_ xavih, I don't think intermediate xlators should be required to have that logic
11:09 xavih atalur_: yes. If there are other xlators in the middle that do that, it could be bad
11:13 atalur_ xavih, so gf_txn_prepared only changes the state of txn from "created" to "prepared i.e. sent to all subvols". this is more of a state-machine for the xlator that invoked gf_txn_create, right?
11:14 xavih atalur_: it could be interpreted this way. It indicates that the txn-xlator can proceed once all children have been locked
11:15 xavih atalur_: however there's a problem if an intermediate xlator replies to a request before it reaches the txn-xlator. We'll need to use gf_txn_release() for each callback
11:15 atalur_ xavih, hmm.. but it sounds counter-intuitive to me. :-/ txn-xlator will have to keep track of all the xlators that invoked gf_txn_create and wait for their corresponding gf_txn_prepared to send the request
11:17 xavih atalur_: txn-xlator will receive the request. The request will contain the transaction inside xdata. It doesn't need to track or even know who created the transaction
11:17 xavih atalur_: each time txn-xlator receives a request, it should increment some counter inside the transaction object
11:17 shubhendu joined #gluster-dev
11:18 ndarshan joined #gluster-dev
11:18 xavih atalur_: when the originator xlator calls gf_txn_prepared() it will indicate how many children are involved in the transaction. When the counter and this number match, it means that txn-xlator already knows about all required bricks and can proceed
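
A sketch of that counting scheme on the shared transaction object; all field and helper names are assumptions:

    struct gf_txn {
            struct gf_txn *parent;    /* set for nested transactions */
            int            seen;      /* requests seen by txn-xlators */
            int            expected;  /* filled in by gf_txn_prepared() */
            gf_boolean_t   ready;
            /* ... lock state, children, ... */
    };

    /* txn-xlator, on every incoming request of a transaction */
    static void
    txn_track_request (gf_txn_t *txn)
    {
            txn->seen++;
            txn_check_ready (txn);   /* made-up helper: sets 'ready' when
                                        seen == expected and notifies the
                                        parent transaction, if any */
    }

    /* user xlator side, after all STACK_WINDs */
    int
    gf_txn_prepared (gf_txn_t *txn, int nchildren)
    {
            txn->expected = nchildren;   /* subvols actually wound */
            txn_check_ready (txn);
            return 0;
    }
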
11:21 atalur_ xavih, how does this txn-xlator receive the request? what mechanism?
11:21 xavih atalur_: normal STACK_WIND. It will be a normal xlator above protocol/client
11:21 asengupt joined #gluster-dev
11:23 atalur_ xavih, I'm sorry, different people have different interpretations of above and below in the graph here. the graph would look like dht->afr->client-0,client-1->txn-xlator?
11:24 atalur_ xavih, or dht->afr->txn-xlator->client-0,client-1?
11:24 atalur_ xavih, from my understanding you meant the latter
11:25 xavih atalur_: dht->afr-0,afr-1,... afr-0->txn-0.0,txn-0.1,...  afr-1->txn-1.0,txn-1.1,...  txn0.0->client-0   txn0.1->client-1   txn1.0->client-2   txn1.1->client-3
11:25 ndarshan joined #gluster-dev
11:26 xavih atalur_: there will be one txn xlator on top of each client xlator
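
In volfile terms, that layout would put one (hypothetical) txn instance directly on top of every protocol/client; for a 2-brick replica it could look like:

    volume testvol-txn-0
        type features/txn          # hypothetical xlator type
        subvolumes testvol-client-0
    end-volume

    volume testvol-txn-1
        type features/txn
        subvolumes testvol-client-1
    end-volume

    volume testvol-replicate-0
        type cluster/replicate
        subvolumes testvol-txn-0 testvol-txn-1
    end-volume
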
11:26 atalur_ xavih, anyway, the point I had in mind is that a simple stack_wind should be indication enough for the txn-xlator to send the lock request. imo we shouldn't need gf_txn_prepare from the initiator xlator to send the call.
11:27 xavih atalur_: if we don't use gf_txn_prepared(), how will we know that the request can proceed?
11:27 xavih atalur_: txn-xlator will receive a request. It initiates a non-blocking call. It succeeds. Is it allowed to send the request? or does it need to wait until other bricks are also locked?
11:29 xavih atalur_: note that a single txn-xlator instance will only have one client as a child, not all the clients
11:30 dlambrig joined #gluster-dev
11:32 ndevos csim: did dns or /etc/hosts on build.gluster.org change? for some reason the reboot-vm just failed 2x, and that has not happened in quite a while
11:33 ndevos ... the reboot-vm Jenkins job
11:36 atalur_ xavih, why was the decision made such that one txn-xlator has only one client xlator as its child, I think I'm missing a point here
11:38 xavih atalur_: how do you want to lay out the graph otherwise? afr should have multiple subvolumes, right? if we place txn-xlator in the middle, we cannot put only one
11:39 xavih atalur_: if we use afr->txn->client-0,client-1, how would afr know how many children it has and how would it send requests to each of them? it could be done, but I think it's a big change with many consequences
11:41 overclk joined #gluster-dev
11:46 atalur_ xavih, sorry for the delay in response. Was afk. yes, you are right it will be a bug change.
11:46 atalur_ xavih, *big change
11:48 atalur_ xavih, hmm.. so gf_txn_prepare will tell these txn-xlators that the request can be sent. okay I think I got the idea
11:50 xavih atalur_: remember that all these xlators will be on the same machine, so they will share memory and all this information is passed through a common structure (i.e. the gf_txn_t structure created with gf_txn_create() on the user xlator)
11:50 xavih atalur_: there's no need to establish connections between xlators using sockets or anything else
11:51 pppp joined #gluster-dev
11:51 shubhendu joined #gluster-dev
11:51 atalur_ xavih, yes. understood that.
11:55 xavih atalur_: I have to leave in a few minutes. I'll return in an hour. If you want I can prepare a basic implementation in pseudo-code for tomorrow so that you can see how I've thought about it, and maybe some of the details will help you understand what I'm saying or allow you to identify problems I might not have seen
11:55 atalur_ xavih, yes. I'll also go through our discussion again. I still have a few questions which I think will be solved once I go over the design
11:56 atalur_ xavih, shall we have an etherpad to note down the pseudo-code?
11:56 xavih atalur_: great. I'll send you an email once I have the pseudo-code, and we can talk tomorrow
11:56 atalur_ xavih, yes.
11:57 atalur_ xavih, what time tomorrow?
11:57 soumya joined #gluster-dev
11:58 xavih atalur_: I start working at 7:00 UTC. I'll be at the office, so come at whatever time you prefer.
11:58 xavih atalur_: I'll be here
11:59 atalur_ xavih, okay :) I'll also come at the same time
11:59 atalur_ xavih, see you.
11:59 xavih atalur_: goog :) see you
12:00 xavih atalur_: "good" I meant :P
12:11 overclk joined #gluster-dev
12:14 hchiramm joined #gluster-dev
12:15 ppai joined #gluster-dev
12:17 csim ndevos: I don't remember touching it :/
12:18 rjoseph joined #gluster-dev
12:18 csim ndevos: which VM was it ?
12:18 ndevos csim: slave21
12:19 * csim do a quick check
12:19 jrm16020 joined #gluster-dev
12:19 csim ndevos: the server did reboot 47 minutes ago
12:20 csim and ssh works
12:20 csim so i will "blame hosting"
12:20 * csim will investigate installing copr jenkins on ci.gluster.org
12:20 csim but jenkins is quite a beast to understand, and I really dislike that everything is xml or a web interface :)
12:20 ndevos csim: yeah, I had to try rebooting 3x, the first 2x failed
12:21 csim lovely, the openvpn tunnel is down again
12:21 jiffin joined #gluster-dev
12:23 csim ndevos: so sorry, no idea what could have caused it, but I think moving out of the current hosting would help
12:23 ndevos csim: there is a cli for Jenkins, I use http://termbin.com/7ncf
12:23 csim ndevos: yeah, but I prefer to have a simpler config file to understand what is going on :)
12:23 csim (what me, ranting, no way :p )
12:24 ndevos csim: no problem, and yes, I also think moving is much preferred
12:24 csim ndevos: I wonder if we could have 2 masters for some time, like 1 as a test and that kind of stuff?
12:25 csim as long as the 2nd master is non-voting in gerrit, it would be fine, no?
12:27 pppp joined #gluster-dev
12:28 ndevos csim: yeah, that is possible, I have an internal Jenkins instance (master + slaves) for some tests as well
12:29 ndevos csim: a slave should be connected to only one master at a time; other than that, there shouldn't be an issue
12:44 jarrpa joined #gluster-dev
12:45 jarrpa Hello channel! I can't log in to review.gluster.org since the migration to GitHub login. Anyone available for help? :)
12:53 kkeithley try clearing cookies for review.gluster.org
12:53 kkeithley that usually worked for me
13:09 spalai left #gluster-dev
13:29 hagarth joined #gluster-dev
13:40 aravindavk joined #gluster-dev
13:46 shyam joined #gluster-dev
13:50 pousley joined #gluster-dev
13:54 ndevos csim: do you know how to merge/link existing gerrit accounts with github ones?
13:55 ndevos jarrpa: your github account needs to get linked somehow to the one you already have in gerrit...
13:55 jarrpa ndevos: Ah, thought so.
13:57 ndevos jarrpa: maybe csim can do that, JustinClift did it for most of us when the change was done, but he's on extended leave/gone
13:59 jarrpa ndevos: Got it. I have to step away for about 20 minutes or so; I'll mention it when I return. :)
14:13 wushudoin joined #gluster-dev
14:26 jarrpa ndevos, csim: Back
14:42 sankarshan joined #gluster-dev
14:42 wushudoin joined #gluster-dev
14:42 jarrpa joined #gluster-dev
14:44 [o__o] joined #gluster-dev
14:47 anoopcs joined #gluster-dev
15:03 ndevos jarrpa: maybe you can send an email to gluster-infra@gluster.org with the request to get your accounts linked/merged?
15:04 jarrpa ndevos: That works! I wasn't sure what a proper contact alias was. Thanks! :)
15:04 ndevos I'm not sure who else can help with that, but I think there were some others that understood how it works
15:10 sankarshan_ joined #gluster-dev
15:12 vimal joined #gluster-dev
15:18 gbit joined #gluster-dev
15:26 kkeithley what's the magic to get a root shell on review.g.o? Maybe Justin left some notes?
15:26 Bhaskarakiran joined #gluster-dev
15:27 kkeithley let's see if I can find an email about root shell access to review.g.o
15:31 kkeithley striking out...
15:53 csim ndevos: no idea :/
16:04 hagarth jarrpa: what is your github handle?
16:06 jarrpa hagarth: jarrpa
16:07 hagarth jarrpa: ok, let me try out something through gsql
16:15 hagarth jarrpa: can you please try now? a different browser session might be better
16:15 jarrpa hagarth: Will do
16:16 jarrpa hagarth: Login successful!
16:16 hagarth jarrpa: cool!
16:16 jarrpa hagarth, ndevos: Thanks guys! :D
16:17 hagarth jarrpa: yw, enjoy :)
16:18 hagarth fwiw, I just added one more row to account_external_ids in gerrit so that (gerrit:jarrpa) could be used as a valid external_id.
16:19 jarrpa hagarth: Sweet
16:30 soumya joined #gluster-dev
16:50 gem joined #gluster-dev
16:55 pppp joined #gluster-dev
17:14 jobewan joined #gluster-dev
18:15 jrm16020 joined #gluster-dev
18:35 cholcombe joined #gluster-dev
18:35 cholcombe anyone good with gluster's RPC system? I have some questions about the struct members
18:36 cholcombe i don't understand what all the hdr* members mean
18:44 ndevos cholcombe: not sure I would be good enough for that, what struct are you looking at?
18:45 cholcombe ndevos, rpc_transport_msg
18:45 cholcombe there's a lot of hdr* and hdrcount. what do those mean?
18:46 cholcombe are they for the xdr serialization piece?
18:47 hgowtham joined #gluster-dev
18:47 ndevos cholcombe: yes, I guess so, one RPC packet can contain multiple "records"
18:47 cholcombe and hdr means record i guess?
18:48 ndevos yes, the record header
18:48 cholcombe ah ok that helps
18:48 ndevos and the record payload
18:49 ndevos it is not common to have more than one record though, but the sunrpc spec allows it
18:49 cholcombe oh interesting
18:49 cholcombe so that's mostly there to stay compatible
18:50 ndevos I think recent Linux NFS clients can send multiple records, I do not know if anything else uses it
18:50 cholcombe yeah i don't either
18:50 ndevos yes, the Gluster protocols use sunrpc as transport over tcp, so the functionality just got inherited
18:51 cholcombe i see
18:51 cholcombe ndevos, thanks this was very helpful
18:52 ndevos the 1st BIT of the RPC header is "last fragment" iirc, the rest of the 1st xdr-int is the "fragment length"
18:52 ndevos if you check the rpc header in wireshark, you will almost always see 0x80 as the first byte of the header, meaning that "last fragment" bit is set
18:53 ndevos fragment and records are used for the same thing, fragments in wireshark, records in (some of) the RFCs
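
For reference, a small decoder for the 4-byte record marker being described; per the sunrpc record-marking scheme (RFC 5531), the high bit is "last fragment" and the low 31 bits are the fragment length, which is why the first byte is usually 0x80:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>    /* ntohl */

    /* decode a sunrpc record marker from the start of 'buf' */
    static void
    parse_record_marker (const unsigned char *buf, int *last_fragment,
                         uint32_t *fragment_length)
    {
            uint32_t marker;

            memcpy (&marker, buf, sizeof (marker));
            marker = ntohl (marker);
            *last_fragment   = (marker & 0x80000000U) != 0; /* the 0x80 byte */
            *fragment_length = marker & 0x7FFFFFFFU;        /* low 31 bits */
    }
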
18:53 cholcombe oh good point. i was trying to figure out a way to dump the RPC communication over the unix socket but I forgot i could just dump the tcp portion of it
18:54 ndevos hmm, not sure how to capture stuff from a unix-socket... never tried it
18:55 cholcombe i don't think you can haha
18:55 cholcombe are the iobref pieces for the server to fill out and send back?
18:55 ndevos it should be possible I guess, let's see
18:56 cholcombe ok
18:56 wushudoin joined #gluster-dev
18:58 ndevos not sure, but I think the iobref contains the reply from the server, yes
18:58 cholcombe ok
18:58 cholcombe so i need to give it enough space in the xdr to add that response
18:58 cholcombe this is tricky
18:59 ndevos now I'm wondering how multiple records can get their replies...
18:59 ndevos what are you trying to do?
19:00 cholcombe i'm trying to create an API that uses gluster's RPC calls to communicate with it.  Then I can build a REST server on top of it that works properly.  I'm tired of wrapping the CLI.  It's clunky and error prone
19:00 ndevos oh, thats interesting
19:00 cholcombe :)
19:01 cholcombe ndevos, i plan on open sourcing this of course
19:01 cholcombe it'll basically look like another cli client to Gluster
19:02 ndevos cholcombe: that sounds cool, I think someone else (aravinda maybe?) was looking into something like that too
19:02 cholcombe yeah his program wraps the cli.  I looked at it
19:03 cholcombe ndevos, thanks!  I hope i can get it working
19:03 cholcombe i'm gdb breakpointing the code to figure out what is going on now
19:05 ndevos cholcombe: ah, this one http://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%203.7/rest-api/
19:06 ndevos cholcombe: would it not make sense to have the functions from the cli/src/ in a libgfcli.so so that you can use those?
19:22 ndevos cholcombe: if you have clear wishes or suggestions to provide something like a management api that would make it easier to write and maintain a rest api, send it to the devel list, I'm confident others would like it too
19:25 ndevos and unfortunately I also don't see a quick way to capture unix-socket traffic...
19:27 * ndevos leaves for the day, cya!
20:57 badone joined #gluster-dev
22:07 mribeirodantas joined #gluster-dev
22:39 topshare joined #gluster-dev
22:50 topshare joined #gluster-dev
