
IRC log for #gluster-dev, 2016-01-20


All times shown according to UTC.

Time Nick Message
00:02 raghu hagarth: ok
00:15 badone joined #gluster-dev
00:20 pranithk joined #gluster-dev
00:21 pranithk joined #gluster-dev
00:32 dlambrig joined #gluster-dev
00:55 josferna joined #gluster-dev
01:17 EinstCrazy joined #gluster-dev
01:24 zhangjn joined #gluster-dev
01:30 dlambrig joined #gluster-dev
01:50 EinstCrazy joined #gluster-dev
01:55 EinstCrazy joined #gluster-dev
02:00 EinstCra_ joined #gluster-dev
02:06 dlambrig joined #gluster-dev
02:46 skoduri joined #gluster-dev
02:48 ilbot3 joined #gluster-dev
02:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:24 sakshi joined #gluster-dev
03:24 badone joined #gluster-dev
03:29 kanagaraj joined #gluster-dev
03:41 kanagaraj joined #gluster-dev
03:42 zhangjn joined #gluster-dev
03:43 overclk joined #gluster-dev
03:44 EinstCrazy joined #gluster-dev
03:45 Gaurav__ joined #gluster-dev
03:49 atinm joined #gluster-dev
03:49 atinm hagarth, did you get a time to look over the IPv6 patch?
03:53 badone joined #gluster-dev
03:53 [o__o] joined #gluster-dev
04:00 Manikandan joined #gluster-dev
04:18 gem joined #gluster-dev
04:21 shubhendu joined #gluster-dev
04:26 kotreshhr joined #gluster-dev
04:28 hagarth atinm: will complete it, I reviewed it a few days back and had some unanswered thoughts. will need to revisit that.
04:28 atinm hagarth, ok, thanks
04:44 Bhaskarakiran_ joined #gluster-dev
04:51 aspandey joined #gluster-dev
04:59 josferna joined #gluster-dev
05:00 mchangir joined #gluster-dev
05:01 ndarshan joined #gluster-dev
05:05 pppp joined #gluster-dev
05:08 overclk joined #gluster-dev
05:12 atinm hagarth, are you around?
05:13 atinm hagarth, r.g.o is not responding now
05:13 atinm hagarth, I could use it few mins back
05:13 samikshan joined #gluster-dev
05:14 karthikfff joined #gluster-dev
05:16 kdhananjay joined #gluster-dev
05:19 aravindavk joined #gluster-dev
05:23 skoduri joined #gluster-dev
05:23 atinm any infra guys around?
05:23 atinm review.gluster.org is down!
05:25 Apeksha joined #gluster-dev
05:35 ppai joined #gluster-dev
05:36 rjoseph review.gluster.org seems up now but very slow
05:36 atinm rjoseph, I'd still consider it non-operational given the amount of time it's taking to open a link
05:37 jiffin joined #gluster-dev
05:42 rjoseph atinm: yes, I agree
05:48 Gaurav__ joined #gluster-dev
05:52 poornimag joined #gluster-dev
05:54 hchiramm_ joined #gluster-dev
05:58 pranithk joined #gluster-dev
05:58 itisravi joined #gluster-dev
05:59 pranithk joined #gluster-dev
06:01 asengupt joined #gluster-dev
06:03 kotreshhr left #gluster-dev
06:11 rafi joined #gluster-dev
06:13 vimal joined #gluster-dev
06:26 zhangjn joined #gluster-dev
06:30 ashiq joined #gluster-dev
06:31 ashiq joined #gluster-dev
06:35 poornimag aravindavk, Could you please review http://review.gluster.org/#/c/13061/ ?
06:36 skoduri joined #gluster-dev
06:40 pranithk joined #gluster-dev
06:46 eljrax joined #gluster-dev
06:50 josferna joined #gluster-dev
06:50 spalai joined #gluster-dev
06:52 Manikandan_wfh joined #gluster-dev
06:57 vmallika joined #gluster-dev
06:57 skoduri joined #gluster-dev
06:57 itisravi rastar: hey!  http://review.gluster.org/#/c/13233/  doesn't seem to have a vote from the Smoke test. Could you trigger that alone? Centos and NetBSD runs have passed.
07:03 aravindavk poornimag: sure
07:10 Saravanakmr joined #gluster-dev
07:17 pranithk joined #gluster-dev
07:22 zhangjn joined #gluster-dev
07:34 atalur joined #gluster-dev
07:45 poornimag joined #gluster-dev
07:51 pranithk joined #gluster-dev
08:49 itisravi joined #gluster-dev
08:52 pranithk joined #gluster-dev
09:00 spalai joined #gluster-dev
09:02 rraja joined #gluster-dev
09:07 itisravi rastar++
09:07 glusterbot itisravi: rastar's karma is now 22
09:11 josferna joined #gluster-dev
09:24 Saravanakmr joined #gluster-dev
09:31 jiffin1 joined #gluster-dev
09:32 pranithk xavih: kdhananjay and I went through the pseudo code you mailed us for the txn framework. We are not understanding what a branch is and where locks will be acquired.
09:34 xavih pranithk: I chose 'branch', but maybe a better name could be found. A branch is one of the members of a transaction: when a transaction reaches one of the txn-xlators, it represents a branch (or a brick) where the transaction will be executed
09:34 pranithk xavih: ah! so it is a leaf?
09:34 mchangir joined #gluster-dev
09:35 xavih pranithk: one moment, phone...
09:35 pranithk xavih: sure.
09:38 kdhananjay joined #gluster-dev
09:44 rraja Saravanakmr: http://dst.lbl.gov/~boverhof/openssl_certs.html , how to create certs, CA needed for enabling TLS network encryption in GlusterFS
09:44 rraja Saravanakmr: gluster volume set <volume name> ro.cn-access-list  "commonname1,commonname2,.
09:46 rraja Saravanakmr: i suggest that you come up with some similar CLI syntax to make RO access selective.
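(The cert/CA setup rraja links to can be sketched roughly as below. The `/etc/ssl/glusterfs.*` paths and the `client.ssl`/`server.ssl`/`auth.ssl-allow` options are the ones I recall from the GlusterFS SSL documentation; verify them against your version before use. The `ro.cn-access-list` option mentioned above is only a proposed CLI syntax, not an existing one.)

```shell
# Sketch: self-signed certificate setup for GlusterFS TLS on one node.
# Generate a private key and a self-signed cert whose CN will be
# matched against the access list:
openssl genrsa -out glusterfs.key 2048
openssl req -new -x509 -key glusterfs.key \
    -subj "/CN=server1" -days 365 -out glusterfs.pem
# The shared "CA" file is simply every node's cert concatenated:
cat glusterfs.pem > glusterfs.ca
# Then (per the docs) place the three files in /etc/ssl/ on each node and:
#   gluster volume set <vol> client.ssl on
#   gluster volume set <vol> server.ssl on
#   gluster volume set <vol> auth.ssl-allow 'server1,client1'
```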
09:46 xavih pranithk: I'm back, sorry
09:46 pranithk xavih: np
09:47 xavih pranithk: yes, we could call it a leaf also
09:47 pranithk xavih: okay
09:47 csaba rraja: thx
09:47 rraja csaba: you're welcome!
09:48 pranithk xavih: where are we sending inodelk also we are not understanding...
09:49 xavih pranithk: the locks are acquired inside the library, issuing inodelk STACK_WINDs using the xlator registered previously by gf_txn_start()
09:50 pranithk xavih: gf_txn_start is not there in the file... there is gf_txn_start_leaves()
09:50 pranithk xavih: I mean there is a call but not that function :-)
09:51 xavih pranithk: oops :-/
09:54 pranithk xavih: what does gf_txn_add_txn() supposed to do?
09:54 pranithk xavih: I am not understanding the part about how child txn is interacting with the branch
09:54 xavih pranithk: give me some time and I'll send you what gf_txn_start() does
09:54 pranithk xavih: wait wait
09:55 xavih pranithk: gf_txn_add_txn() adds a child txn as if it were a resource used by the parent txn
09:55 pranithk xavih: Why don't we give all comments and you can clear all of them in the next revision?
09:55 pranithk xavih: will that be fine?
09:55 xavih pranithk: it's like the child txn should be acquired to allow the parent to continue
09:55 xavih pranithk: sure
09:56 pranithk xavih: in txn-user.txt
09:57 spalai joined #gluster-dev
09:57 pranithk xavih: we keep calling gf_txn_release() in a loop in afr_write(). If quorum number of requests can't be served it will immediately call afr_write_txn_cbk(), which may free memory of frame etc. We are wondering if we need ref/unref kind of mechanism there
09:59 kdhananjay xavih: https://public.pad.fsfe.org/p/txn-framework, has the comments with "<----"
09:59 kdhananjay xavih: may be it will be easy to update there?
09:59 kdhananjay xavih: Instead of exchanging mails?
10:00 xavih pranithk: the pseudo-code I sent wasn't a full working solution. It was only to be able to conceptually see how it's working. I didn't consider all possible error conditions nor code paths, memory references and some other things
10:00 pranithk xavih: ah! got it
10:00 xavih kdhananjay: yes, I can update that :)
10:01 pranithk xavih: line 101!
10:01 EinstCra_ joined #gluster-dev
10:01 xavih pranithk: oh, that's one thing I wanted to talk...
10:01 pranithk xavih: cool
10:02 xavih pranithk: since with the txn framework all fops that affect more than one brick should be used inside a transaction, wouldn't it be better to move all post-op work (dirty management, version, ...) into the txn-xlator ?
10:03 xavih pranithk: I think this is a good place to move all this logic and make it common to all clustered xlators
10:03 pranithk xavih: but different xlators are doing it differently... backward compatibility?
10:04 xavih pranithk: we would need to think about that, but I really think that having a common robust management of inter-brick integrity would be great
10:04 xavih pranithk: probably backward compatibility would require some special cases inside the txn-xlator
10:04 pranithk xavih: I thought about it at the time of ec. But afr and ec are semantically different.
10:05 pranithk xavih: Replication just needs info of who is ahead of who
10:05 pranithk xavih: but ec requires which fragments are of same version.
10:06 pranithk xavih: In afr, we can end up in same version even when data is different. So I don't think we can change it easily is what I thought at the time and left it at that
10:06 pranithk xavih: if we follow version approach I mean.
10:06 xavih pranithk: we can see it in another way: we may define that the good version is the one that have at least quorum bricks with the same transaction
10:07 pranithk xavih: But afr has 2 way replication. Where we need clear understanding of split-brain...
10:07 pranithk xavih: That is where we can end up with same version but different data...
10:07 pranithk xavih: because we need to allow writes.
10:07 pranithk xavih: even when only one brick is up
10:09 xavih pranithk: would this be possible/acceptable ? when one brick comes online, it's not considered healthy (the entire brick)
10:09 xavih pranithk: self-heal should check if there are pending changes from other replicas and apply them
10:10 xavih pranithk: if the source replica dies before this synchronization is done, the files not healed would be inaccessible (EIO)
10:10 xavih pranithk: is this too restrictive ?
10:11 mchangir joined #gluster-dev
10:11 pranithk xavih: in 2 way replication... kind of :-(
10:11 xavih pranithk: I know this is completely different than the current behavior, but it seems reasonable and avoids split-brains
10:12 xavih pranithk: what is the advantage of allowing writes (or even reads) to a file that is not healthy ?
10:12 xavih pranithk: and that it will cause an split-brain when the other brick comes online ?
10:12 pranithk xavih: Even if one file is not healthy the brick is treated bad...
10:13 pranithk xavih: Granularity is too low
10:14 xavih pranithk: we should have some way to mark the bad files. For example, initially the whole brick should be marked as bad. The first task of self-heal would be to mark the bad files (one way would be to add an xattr to all files inside the indices/xattrop of other bricks, or create another index)
10:14 xavih pranithk: once this is done, the brick can be considered healthy, and only the marked files will be bad
10:14 pranithk xavih: This is the outcast feature we wanted to bring in
10:15 xavih pranithk: at this point self-heal will heal the bad files, removing the mark
10:15 xavih pranithk: oh
10:15 pranithk xavih: the reason we didn't go ahead was that
10:15 pranithk xavih: 1) The good brick can go down before marking all bad files
10:16 xavih pranithk: in this case the remaining brick will still be bad as a whole
10:16 pranithk xavih: which is too restrictive... People have TBs of data.
10:16 pranithk xavih: Some of the files can be years old
10:16 pranithk xavih: they would like to access it
10:16 pranithk xavih: that is the reason we went with arbiter instead of outcast
10:16 xavih pranithk: remember that a replica 2 cannot have 2 bricks down... the brick will be considered down until this initial process is completed
10:17 xavih pranithk: if you need to be prepared to have 2 bricks down, you need a replica 3
10:17 pranithk xavih: here is the example that is common, which made us move away from outcast
10:17 pranithk xavih: we have 4TB disk lets say
10:17 pranithk xavih: 2TB is filled on both bricks
10:17 pranithk xavih: one brick goes down now
10:18 pranithk xavih: say some 10 files are created while the brick is down
10:18 pranithk xavih: the brick comes up... before marking of bad files happen, the good brick goes down
10:18 pranithk xavih: we can't let users not access 2TB that have been there for ages. It is a bit too restrictive.
10:19 xavih pranithk: this is the same that having had both bricks down, because the initial task to mark bad files is necessary before the brick is considered up
10:19 xavih pranithk: you need to bring up the good brick
10:20 xavih pranithk: if that's not possible, we could allow to forcibly consider the old/damaged brick as good, and let it work
10:21 xavih pranithk: if later the old good brick comes online, we can give split-brains if file have been modified in both sides or we could even take the file of the new good brick as the valid one (the administrator had explicitly marked the brick as good, so its contents should be good)
10:22 xavih pranithk: I think this has many advantages and the only problem is hard to happen
10:23 xavih pranithk: brb
10:23 pranithk xavih: it doesn't solve split-brains because of network partitions as well.
10:23 pranithk xavih: You can see the whole thing here: http://www.gluster.org/community/documentation/index.php/Features/outcast
10:24 pranithk xavih: seems like itisravi changed some of the things now...
10:24 pranithk xavih: Outcast was only solving split-brains in time. That too not completely
10:24 pranithk xavih: we can classify 2 types of split-brains
10:25 pranithk xavih: where files/directories go into split-brains because of connections not being there because of network partition
10:25 pranithk xavih: Bricks going up/down in a manner that the file may end up in split-brain if the client modifies it
10:26 pranithk xavih: outcast only solves 2nd type that too not completely like I was saying, granularity is too low
10:26 pranithk xavih: until the marking is done I mean
10:29 poornimag joined #gluster-dev
10:30 xavih pranithk: yes, this only solves the second case. The first one needs some sort of quorum...
10:31 xavih pranithk: I'll read outcast details, but I don't see the granularity problem. It will take little time to mark the bad files, and once done, the granularity is a single file
10:31 pranithk xavih: arbiter solves all of these
10:32 pranithk xavih: kdhananjay wants the discussion to come back on txn library :-D
10:32 xavih pranithk: arbiter solves this by denying access when two bricks are down ?
10:33 xavih pranithk: (arbiter and one brick)
10:33 pranithk xavih: write access. Yes.
10:33 Saravanakmr joined #gluster-dev
10:33 overclk joined #gluster-dev
10:34 pranithk xavih: thinking is, probability of losing two bricks at the same time is lesser compared to probability of losing one brick...
10:34 xavih pranithk: then, the other approach is better once the marks are done: even with only one brick up, the users could use it safely
10:35 xavih pranithk: probability of losing two bricks with arbiter is the same than losing two bricks without it
10:35 pranithk xavih: didn't get you....
10:36 xavih pranithk: if we consider that two bricks cannot die at the same time, then the problem you stated against outcast won't happen
10:36 pranithk xavih: Let me put it this way. If one brick dies. With arbiter there is no 'marking' step because it is already done as part of earlier transactions
10:37 xavih pranithk: ok, but now another brick dies
10:37 xavih pranithk: files won't be accessible
10:37 pranithk xavih: then no writes.
10:38 pranithk xavih: redundancy with arbiter is 1
10:38 xavih pranithk: this is the same that happens with outcast. The only difference is that the down window is a bit larger
10:38 pranithk xavih: but consistency is good
10:38 pranithk xavih: But it doesn't prevent split-brains when connections go bad.
10:39 pranithk xavih: It is solving 50% of the problem...
10:39 pranithk xavih: with outcast split-brains are still possible.
10:40 xavih pranithk: of course, I'm not saying we also need other mechanisms. But it could allow to move most of the versioning logic to txn-xlator and be common to afr and ec
10:41 xavih pranithk: anyway, this would be another topic of discussion and I would need to think more about it...
10:41 xavih pranithk: we can return to the txn framework if you want :)
10:41 pranithk xavih: yes please!
10:42 pranithk xavih: kdhananjay says she is out of context :-D
10:42 kdhananjay pranithk: i forgot where we stopped. ;)
10:42 pranithk kdhananjay: we are talking about pre/post ops with new framework
10:43 ndevos pranithk, itisravi, kdhananjay, overclk, anoopcs, jiffin, poornimag, rastar, atinm, *: please reply to http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/13673 soon, thanks!
10:43 xavih pranithk: one way to execute post-ops would be to create child transactions in the callback
10:44 pranithk xavih: that is what I was thinking too
10:45 xavih pranithk: another possibility would be to reuse the transaction, but this have some additional complexities
10:45 xavih pranithk: I talked about that in the comment of afr_write_txn_cbk()
10:46 xavih pranithk: though not explicitly talking about post-op :P
10:47 atinm ndevos, I will
10:47 pranithk xavih: yeah. child transaction looks better to me. Seems cleaner.
10:47 itisravi ndevos: pranithk just did for AFR
10:47 atinm ndevos, Are you going to have a separate section called 4.0 ?
10:47 ndevos itisravi: does it include the heal policy stuff too?
10:48 ndevos itisravi: and arbiter?
10:48 pranithk xavih: hmm... may be not
10:48 pranithk xavih: by the time afr_write_txn_cbk comes, unlock might have been issued right?
10:48 atinm ndevos, how about showing the roadmap page of 4.0 itself? that doesn't cover too much details though
10:48 pranithk xavih: by the txn library
10:48 itisravi ndevos: it has multi-threaded selfheal patch by richard
10:49 ndevos atinm: probably split it in "available features" and "planned features", or something like it
10:49 xavih pranithk: no, the transaction is not still completed
10:49 itisravi ndevos: arbiter bug fixes and perf improvements have gone in for 3.7.7. Not sure if you want info on that.
10:49 xavih pranithk: gf_txn_release() for the last branch/leaf should not have been called yet
10:49 * itisravi means its not a new feature any more :)
10:50 ndevos itisravi: I think only few know about arbiter, it would be nice to have something about it
10:51 ndevos kdhananjay: that ^ counts for sharding too ;-)
10:51 kdhananjay ndevos: Agreed!
10:51 itisravi ndevos: sure I can send a gist of the patches that went in..
10:51 xavih pranithk: or we could be processing the last gf_txn_release(), but even in this case, the txn will still be valid and not destroyed
10:52 ndevos itisravi: just a line or two about the functionality, and a few keywords would do, maybe a link to the docs
10:52 itisravi ndevos: got it
10:54 pranithk xavih: brb on call
10:54 itisravi ndevos: talking of features, the seek_hole/data  patch is just not getting pulled in the mainline :(
11:00 sakshi joined #gluster-dev
11:01 ndevos itisravi: oh, still not? maybe miklos did not send a merge request to linus?
11:01 pranithk xavih: back, sorry
11:03 itisravi ndevos: he did not. No replies when I asked why. I resent the patch last week directly to linus, dave chinner and al viro. Again no replies.
11:04 pranithk xavih: how do we know it is not destroyed?
11:04 pranithk xavih: what is the lifecycle of transactioN?
11:05 xavih pranithk: transaction should only be destroyed once all branches have unwinded to the fop creator and gf_txn_release() called on all of them
11:06 pranithk xavih: if we take simple 2 replica case. there are 2 branches right?
11:07 xavih pranithk: yes
11:07 pranithk xavih: In afr_write_txn_cbk() we wait for all the replicas to respond. Once transaction is called on both of them, we need to start post op. But that will only happen after both of the branches are done executing right?
11:09 pranithk xavih: s/Once transaction is called on both of them/Once afr_write_txn_cbk() is called on both of them/
11:13 xavih pranithk: if I have done it correctly, the txn callback can only be called from inside gf_txn_release(), and this can only be called by the same xlator that have started the txn. Once the appropriate conditions are met (quorum, pending...) the afr_write_txn_cbk() is called, but only once per transaction
11:13 ndevos itisravi: hmm, yeah, http://thread.gmane.org/gmane.linux.kernel/2127345 looks pretty okay to me... maybe you can find viro on irc and point him to miklos' tree with the patch?
11:14 xavih pranithk: so, it's at this point that we can start other nested transactions
11:14 pranithk xavih: oh, it is similar to ec then.. I think I am slowly understanding
11:15 xavih pranithk: one thing is the cbk of each wind, and another thing is the cbk of the full txn. It's only executed when the transaction has been completed (successfully or not)
11:16 pranithk xavih: Ah! that is the point I was missing
11:16 pranithk xavih: you are right
11:16 pranithk xavih: similar to ec. fop_wind_cbk and cbk are different. One is per child other is fop completion cbk
11:16 xavih pranithk: I hope this is what I wrote in the pseudo-code, but it might contain bugs :P
11:16 xavih pranithk: exactly
11:17 pranithk xavih: it is fine. I have understood it better now. Let me give it some more thought
11:17 pranithk xavih: We may ask you some more things. Sorry it is taking so long. We both are involved in multiple things.
11:17 pranithk xavih: kdhananjay works on sharding as well...
11:18 xavih pranithk: don't worry :)
11:19 pranithk xavih: Alright then. I want to conclude the discussion for today. Thanks a lot!
11:19 pranithk kdhananjay: any more questions?
11:20 kdhananjay pranithk: err.. no. i lost context after sometime. i will go through the logs again and talk to you guys tomorrow if i have questions. :)
11:20 pranithk kdhananjay: cool!
11:21 pranithk xavih: okay then. That is it for today for me. Cya!
11:21 pranithk kdhananjay: you too
11:21 pranithk kdhananjay++, xavih++
11:21 glusterbot pranithk: kdhananjay's karma is now 12
11:21 glusterbot pranithk: xavih's karma is now 22
11:26 jiffin1 joined #gluster-dev
11:30 rafi1 joined #gluster-dev
11:31 EinstCrazy joined #gluster-dev
11:37 kdhananjay1 joined #gluster-dev
11:39 ndevos ping atinm: are you hosting todays meeting, or do you know who is?
11:39 atinm ndevos, I am not
11:40 ndevos atinm: could you poke someone to do it? I was planning to take todays afternoon off...
11:42 atinm overclk, can you?
11:50 ndevos rafi1, jiffin, kdhananjay1: could one of you host the community meeting in 10 minutes?
11:51 ndevos it's not much different from the bug triage meeting ;-)
11:57 ndevos hey hagarth, are you online? if not, I guess the meeting will get cancelled for today :-/
11:58 pranithk joined #gluster-dev
11:59 anoopcs 10
12:02 overclk atinm, sorry, tied up with things..
12:03 ndevos atinm, overclk: could one of you send a cancel note to the lists? I hope we can have more participants next week
12:04 * ndevos is the afternoon off, and has an appointment to catch
12:07 overclk ndevos, atinm: I'll send a note.
12:14 rafi joined #gluster-dev
12:17 dlambrig joined #gluster-dev
12:20 ppai joined #gluster-dev
12:21 mchangir joined #gluster-dev
12:25 kdhananjay1 sorry ndevos. was afk.
12:35 kkeithley is there a community meeting today?
12:36 kkeithley looks like no
12:44 shubhendu joined #gluster-dev
12:45 atinm kkeithley, it got cancelled
12:59 ppai joined #gluster-dev
13:06 josferna joined #gluster-dev
13:15 Ethical2ak joined #gluster-dev
13:16 kkeithley atinm: yup
13:46 jwang_ joined #gluster-dev
13:49 mrrrgn_ joined #gluster-dev
13:51 ndevos_ joined #gluster-dev
13:51 obnox_ joined #gluster-dev
13:54 jtc` joined #gluster-dev
13:55 JoeJulian_ joined #gluster-dev
13:55 bfoster1 joined #gluster-dev
14:00 samikshan joined #gluster-dev
14:02 ndarshan joined #gluster-dev
14:02 Apeksha joined #gluster-dev
14:03 Humble joined #gluster-dev
14:14 primusinterpares joined #gluster-dev
14:14 gem joined #gluster-dev
14:45 hagarth joined #gluster-dev
14:46 raghu joined #gluster-dev
14:53 lpabon joined #gluster-dev
15:05 raghu joined #gluster-dev
15:06 Gaurav__ joined #gluster-dev
15:08 caveat- joined #gluster-dev
15:09 hagarth joined #gluster-dev
15:15 hagarth anybody facing a problem with git pull atm?
15:16 nishanth joined #gluster-dev
15:18 vimal joined #gluster-dev
15:20 csim hagarth: what kind of issue ?
15:20 hagarth csim: failing to pull
15:20 csim hagarth: like, waiting forever, or saying something ?
15:20 csim and on what repo ?
15:20 hagarth csim: on glusterfs repo
15:21 csim hagarth: github or gerrit ?
15:21 hagarth csim: I am getting a Permission denied (publickey) error.
15:21 hagarth csim: gerrit/r.g.o
15:22 csim hagarth: and you didn't change keys nor anythng ?
15:22 hagarth csim: no
15:22 csim ssh review.gluster.org  work ?
15:22 csim (I assume "no")
15:23 hagarth csim: no, just tried that
15:23 csim hagarth: can you try with ssh -vvv review.gluster.org ?
15:24 csim mhh
15:24 csim the server is a bit "loaded"
15:25 hagarth csim: looks like that
15:26 hagarth getting random failures with r.g.o
15:26 csim ok, trying to get things in order
15:27 csim mhh google bot
15:28 mchangir joined #gluster-dev
15:29 hagarth csim: thanks in advance!
15:29 csim hagarth: so, I did some "cleaning", is it better ?
15:30 csim seems there was a lockup when google started to index everything :/
15:31 hagarth csim: not much luck with ssh yet but I was able to merge a patch
15:32 hagarth csim: still get permission denied
15:33 csim hagarth: so maybe this is not related
15:33 csim let's restart gerrit
15:35 hagarth csim: yes, that might be better
15:36 csim hagarth: so, now ?
15:38 hagarth csim: unfortunately, it still fails :-/
15:39 csim hagarth: can't see message for you to connect
15:39 csim hagarth: and you didn't gave the -vvv output log :)
15:40 hagarth csim: will do that now
15:43 hagarth csim: http://paste.fedoraproject.org/312838/30455414/
15:44 shyam joined #gluster-dev
15:44 csim mhh
15:46 hagarth csim: emails are trickling in on gluster-infra, I notice one in moderation about problems with gerrit
15:47 csim hagarth: the strange part is that it work fine for me :/
15:48 hagarth csim: :-/
15:51 hagarth csim: could you share the root password of r.g.o with me? i'll see if I can debug my issue too
15:52 csim hagarth: I can add your keys
15:52 csim so there is 3 keys in the DB for you
15:52 csim which one is supposed to be the one you are using ?
15:54 hagarth csim: the one with deepthought as the hostname
15:54 csim hagarth: there is 2 of them :)
15:55 hagarth csim: ending with O2Dntw/HYXpQ== vijay@deepthought
15:57 csim there is 19 keys in that server root authorized_keys, this seems a bit extreme
15:58 hagarth csim: right
15:59 csim hagarth: does it work ?
16:00 atinm joined #gluster-dev
16:01 hagarth csim: checking
16:03 hagarth csim: no, still asks me for a password with root
16:04 csim Jan 20 08:01:26 dev sshd[7414]: userauth_pubkey: unsupported public key algorithm: ecdsa-sha2-nistp256
16:05 csim that's the only thing I see
16:06 csim hagarth: try again, i bumped the verbosity
16:08 hagarth csim: doing so now
16:09 csim still the same
16:09 hagarth right
16:11 csim you did a recent fedora upgrade or anything ?
16:11 hagarth csim: nothing in the recent past. It worked fine yesterday.
16:12 csim hagarth: so ssh work on others servers, even RHEL 5 one ?
16:13 csim and what about the old ssh keys listed in the configuration ?
16:14 hagarth csim: can you list all my keys somewhere?
16:14 csim hagarth: sure
16:15 wushudoin joined #gluster-dev
16:15 csim let me just find the right sql
16:17 csim hagarth: http://fpaste.org/312854/06634145/
16:18 hagarth csim: the second key is what I normally use
16:21 csim hagarth: ie, taht's the one in /home/vijay/.ssh/id_rsa.pub ?
16:21 hagarth csim: right
16:21 atalur joined #gluster-dev
16:24 csim mhhhh
16:26 csim hagarth: you are currently in the boston office, on untrusted wireless, can you try to go on a different network (like trusted, or wired? )
16:26 hagarth csim: I cannot get away from wireless atm
16:26 hagarth csim: will try again in a bit
16:28 csim hagarth: or try from another server
16:28 hagarth csim: will do
16:32 csim for now, I am out of ideas :/
16:33 hagarth csim: me too :/
16:33 csim hagarth: just to make sure, you can ssh fine on other servers ?
16:34 hagarth csim: yes
16:34 * csim wonder if there is another RHEL 5 somewhere
16:44 hagarth csim: get the same permission denied failure from a different server too
16:45 hagarth csim: there is certainly something odd about r.g.o, even a jenkins job got aborted now since it was unable to pull
16:48 csim hagarth: but the git pull do not use ssh
16:48 shubhendu joined #gluster-dev
16:49 hagarth csim: I use a ssh clone
16:50 csim ok, let's remove the current gitweb
16:50 csim and see how it goes
16:51 csim do other people have a issue too with gerrit ?
16:52 csim cause maybe we are looking from the wrong angle
16:52 hagarth csim: possibly, I see two complaints on gluster-infra
16:53 hagarth shyam: are you able to git pull glusterfs from r.g.o now?
16:53 csim hagarth: I see only one
16:53 hagarth csim: check the moderation queue
16:55 csim hagarth: nope, nothing in gluster-infra
16:55 hagarth csim: I thought I observed a post from byreddy
16:56 csim the only post was from Samikshan Bairagya
16:56 csim and I did ask for more information, but it looks like web related
16:57 hagarth csim: possible, I might have been confused with the names..
16:58 csim hagarth: so that would be the first, maybe, and the 2nd ?
16:59 hagarth csim: there is no second one possibly. I thought Samikshan's mail was from byreddy and hence the confusion.
17:00 csim ok
17:00 csim so yeah, we are still in the dark :/
17:00 hagarth csim: unfortunately yes :/
17:02 jiffin joined #gluster-dev
17:02 hagarth kkeithley, dlambrig1, jiffin: does git pull from gerrit work for you at the moment?
17:02 kkeithley let me try
17:03 * jiffin checking
17:03 kkeithley jiffin, I'm on bluejeans now
17:04 kkeithley git clone works, do I need to do a pull instead?
17:04 jiffin kkeithley: connecting
17:04 hagarth kkeithley: I think that's good enough. Thanks for checking!
17:04 jiffin hagarth: git pull worked
17:04 hagarth jiffin: thank you!
17:05 hagarth csim: thankfully the problem seems to be localized to me
17:05 kkeithley can you hear me? bluejeans say you're mike is on mute
17:05 kkeithley s/you're/your/
17:06 csim hagarth: yeah, but it is surprising
17:06 samikshan hagarth: Hey. Yeah I sent out a mail to gluster-infra. Some trouble with not being able to access review.gluster.org after logging in
17:06 jiffin kkeithley: reconnecting
17:06 csim (and I need to go now for the evening)
17:07 hagarth samikshan: cool, has it worked for you before?
17:07 hagarth csim: later, I will see what I can do
17:07 hagarth fatal: unable to access 'https://review.gluster.org/glusterfs.git/': Peer's Certificate has expired.
17:07 hagarth csim: something else we need to fix :-/ ^^
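(For an expired-certificate error like the one hagarth hits above, one quick way to confirm is to inspect the server cert's validity window with openssl. A sketch; the `s_client` fetch needs network access, so the demo below substitutes a locally generated throwaway cert:)

```shell
# Fetching the live cert would look like (needs network access):
#   echo | openssl s_client -connect review.gluster.org:443 2>/dev/null > server.pem
# For this sketch, generate a throwaway cert to stand in for server.pem:
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem \
    -subj "/CN=demo" -days 1 -out server.pem 2>/dev/null
# Print the validity window (notBefore / notAfter):
openssl x509 -in server.pem -noout -dates
# -checkend N exits non-zero if the cert expires within N seconds,
# so -checkend 0 tests whether it has already expired:
openssl x509 -in server.pem -noout -checkend 0 && echo "still valid"
```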
17:08 samikshan hagarth: Nope. I tried logging into review.gluster.org for the first time. I have received a reply from Michael Scherer
17:08 samikshan Will reply back
17:08 hagarth samikshan: ok
17:11 rafi joined #gluster-dev
17:26 kkeithley jiffin: ping
17:26 glusterbot kkeithley: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
17:26 kkeithley say goodnight glusterbot
17:27 semiautomatic joined #gluster-dev
17:30 jiffin joined #gluster-dev
17:37 dlambrig joined #gluster-dev
18:00 ira joined #gluster-dev
18:12 jwang__ joined #gluster-dev
18:16 shyam1 joined #gluster-dev
18:16 mchangir_ joined #gluster-dev
18:18 lalatend1M joined #gluster-dev
18:20 ira_ joined #gluster-dev
18:21 mrrrgn_ joined #gluster-dev
18:33 dlambrig joined #gluster-dev
18:52 rafi joined #gluster-dev
18:53 dlambrig joined #gluster-dev
18:54 rafi joined #gluster-dev
18:57 bfoster joined #gluster-dev
18:57 samikshan joined #gluster-dev
18:57 Humble joined #gluster-dev
18:57 raghu joined #gluster-dev
18:57 hagarth joined #gluster-dev
18:58 rafi joined #gluster-dev
18:59 shyam1 joined #gluster-dev
18:59 lalatend1M joined #gluster-dev
19:19 rafi joined #gluster-dev
19:20 jiffin1 joined #gluster-dev
19:23 lpabon joined #gluster-dev
20:28 shyam joined #gluster-dev
20:59 xavih joined #gluster-dev
21:02 dlambrig joined #gluster-dev
21:13 rafi joined #gluster-dev
21:13 shyam joined #gluster-dev
21:59 hagarth joined #gluster-dev
22:41 hagarth joined #gluster-dev
22:50 dlambrig joined #gluster-dev
23:06 shyam joined #gluster-dev
