IRC log for #gluster-dev, 2015-03-24

All times shown according to UTC.

Time Nick Message
01:02 topshare joined #gluster-dev
01:24 topshare_ joined #gluster-dev
01:27 bala joined #gluster-dev
01:33 ira joined #gluster-dev
01:54 ira joined #gluster-dev
02:27 topshare joined #gluster-dev
02:41 topshare joined #gluster-dev
03:01 ppai joined #gluster-dev
03:23 soumya joined #gluster-dev
03:35 shubhendu joined #gluster-dev
03:39 topshare joined #gluster-dev
03:40 itisravi joined #gluster-dev
03:48 topshare joined #gluster-dev
03:49 rjoseph joined #gluster-dev
04:01 atinmu joined #gluster-dev
04:16 hagarth joined #gluster-dev
04:20 nkhare joined #gluster-dev
04:28 kdhananjay joined #gluster-dev
04:47 anoopcs joined #gluster-dev
04:50 rafi joined #gluster-dev
04:51 nishanth joined #gluster-dev
04:53 jiffin joined #gluster-dev
04:54 lalatenduM joined #gluster-dev
04:55 ndarshan joined #gluster-dev
04:59 ppai_ joined #gluster-dev
05:01 hagarth hello world!
05:03 badone_ joined #gluster-dev
05:09 badone__ joined #gluster-dev
05:10 ppp joined #gluster-dev
05:10 spandit joined #gluster-dev
05:11 vimal joined #gluster-dev
05:13 pranithk joined #gluster-dev
05:24 hagarth pranithk, itisravi: ping, have you had a chance to look into the self-heal failures happening during regression runs for bitrot patchset?
05:24 pranithk hagarth: no
05:24 itisravi hagarth: not really.
05:25 hagarth pranithk, itisravi: It would be great if you can check with overclk or raghu if they need help with that.
05:25 hagarth http://build.gluster.org/job/rackspace-regression-2GB-triggered/5844/consoleFull is one such example of failure
05:25 itisravi hagarth: okay
05:37 gem joined #gluster-dev
05:38 kanagaraj joined #gluster-dev
05:40 kshlm joined #gluster-dev
05:40 Manikandan joined #gluster-dev
05:40 ashiq joined #gluster-dev
05:45 spandit joined #gluster-dev
05:52 badone joined #gluster-dev
05:53 overclk joined #gluster-dev
05:58 jiffin joined #gluster-dev
06:03 hchiramm joined #gluster-dev
06:07 soumya joined #gluster-dev
06:10 pranithk joined #gluster-dev
06:36 deepakcs joined #gluster-dev
06:38 itisravi_ joined #gluster-dev
07:01 suliba joined #gluster-dev
07:02 itisravi_ joined #gluster-dev
07:04 aravindavk joined #gluster-dev
07:05 kdhananjay joined #gluster-dev
07:18 itisravi joined #gluster-dev
07:26 hchiramm joined #gluster-dev
07:48 bala joined #gluster-dev
07:49 ppai_ joined #gluster-dev
07:53 pranithk joined #gluster-dev
08:20 suliba joined #gluster-dev
08:40 pranithk joined #gluster-dev
08:48 ndarshan joined #gluster-dev
08:53 shubhendu joined #gluster-dev
08:56 bala joined #gluster-dev
09:00 nishanth joined #gluster-dev
09:06 ppai_ joined #gluster-dev
09:14 overclk left #gluster-dev
09:15 overclk joined #gluster-dev
09:16 overclk hagarth, ping. there are spurious failures with ./tests/bugs/distribute/bug-1190734.t test in master.
09:17 overclk hagarth, plus there is a brick crash with backtrace pointing to locks xlator.
09:24 hagarth overclk: is it the rebalance test that fails with 1190734.t?
09:25 hagarth overclk: yes, I am aware of the crash in locks translator
09:25 hagarth overclk: have notified pranithk & shyam about the backtrace. would it be possible for you to send a full bt on gluster-devel?
09:28 overclk hagarth, sure. will do.
09:28 hagarth overclk: thanks
09:29 overclk hagarth, it's the rebalance test in bug-1190734.t
09:30 hagarth overclk: we can ignore that failure for now
09:32 ndevos overclk: btw, should libgfchangelog.so.* be available client-side, or only server-side?
09:35 * ndevos disconnects and will read backlog later...
09:44 lalatenduM hagarth, ndevos JustinClift I am seeing a rise in top posting in gluster mailing lists, IMO we should either follow top or bottom post, not both . what say?
09:45 hagarth lalatenduM: yes, we should normally follow bottom/inline posting. top posting makes it hard to understand the context.
09:45 lalatenduM hagarth, agree with you. Do you mind reminding every one again :)
09:47 hagarth lalatenduM: no problem, will drop a note sometime.
09:48 lalatenduM hagarth++ thanks :)
09:48 glusterbot lalatenduM: hagarth's karma is now 44
09:49 shubhendu joined #gluster-dev
09:51 ndarshan joined #gluster-dev
09:51 bala joined #gluster-dev
10:09 ira joined #gluster-dev
10:24 pranithk joined #gluster-dev
10:38 nishanth joined #gluster-dev
10:40 kkeithley1 joined #gluster-dev
10:43 ppai_ joined #gluster-dev
10:51 kkeithley1 Gluster Bug Triage Meeting in 10 minutes in #gluster-meeting
11:00 firemanxbr joined #gluster-dev
11:01 hchiramm kkeithley, it used to be at 5:30 PM IST
11:02 hchiramm and now its 4:30 PM IST
11:02 hchiramm may be due to DST ?
11:02 kkeithley_ My mistake . Gluster Bug Triage meeting in 1 hour in #gluster-meeting
11:02 hchiramm kkeithley, np :)
11:04 kkeithley_ Zimbra gets this one meeting wrong
11:04 kkeithley_ for me
11:05 hchiramm hmmm.. not sure why it is .
11:08 kkeithley_ me too.  Zimbra shows this meeting on my calendar at 7:00 AM
11:09 hchiramm Is DST changes are taken care by zimbra ?
11:10 * hchiramm : typo-- s/Is/Are.. s/are/'' :)
11:10 glusterbot hchiramm: typo's karma is now -2
11:10 kkeithley_ it should. All my other meetings are shown at the correct time
11:10 hchiramm hmmm.. thats weird
11:11 kkeithley_ I have a meeting in 20 minutes:  Time: 12:15:00 PM - 12:30:00 PM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi
11:11 kkeithley_ It shows correctly at 7:30 EDT
11:14 kkeithley_ Oh, I've found the problem. PEBKAC
11:15 hchiramm oh..ok :)
11:31 lalatenduM kkeithley_, pm
11:31 lalatenduM kkeithley_,
11:31 kkeithley_ yup
11:38 suliba joined #gluster-dev
11:39 pranithk left #gluster-dev
11:45 ws2k3 joined #gluster-dev
11:52 kkeithley_ Gluster Bug Triage Meeting in 10 minutes in #gluster-meeting
12:03 kkeithley_ Gluster Bug Triage Meeting now in #gluster-meeting
12:23 firemanxbr joined #gluster-dev
12:26 anoopcs joined #gluster-dev
12:30 firemanxbr joined #gluster-dev
12:41 suliba joined #gluster-dev
12:46 suliba joined #gluster-dev
12:54 JustinClift misc: Are you using those two new boxes that were brought online for Gluster?
12:54 JustinClift misc: If not, I'll jump on them and take a look at them now. :)
13:03 hagarth overclk: ping, did we progress more on the open-behind.t failure?
13:05 dlambrig joined #gluster-dev
13:07 suliba joined #gluster-dev
13:18 shyam joined #gluster-dev
13:20 shubhendu joined #gluster-dev
13:23 shaunm joined #gluster-dev
13:23 suliba joined #gluster-dev
13:37 jiffin joined #gluster-dev
13:42 suliba joined #gluster-dev
13:42 jiffin1 joined #gluster-dev
13:59 bala joined #gluster-dev
14:05 nishanth joined #gluster-dev
14:05 JustinClift hagarth: Curiosity thought... how workable do you reckon Spain would be for the India team if Barcelona doesn't work out?
14:07 hagarth JustinClift: it might work but we need to figure out dates well in advance
14:07 hagarth JustinClift: I would really like to have a chat with spot this week
14:08 kkeithley_ ??? Spain? Barcelona? Huh?
14:08 JustinClift Potential Gluster Summit locations being kicked around
14:09 JustinClift Barcelona seems like it's going to be really difficult to find a venue for the locations
14:09 kkeithley_ How is "Spain" better than Barcelona?
14:09 * kkeithley_ must be missing some context
14:09 JustinClift Nerja in spain is really cheap (eg fits in the budget), has A grade weather, and pretty much all the locals speak English (tourism is important there)
14:09 kkeithley_ Spain as in "Madrid"
14:10 JustinClift I was thinking Nerja, because I've been there
14:10 kkeithley_ Just other places in Spain besides Barcelona
14:10 JustinClift We could do Malaga as well
14:10 JustinClift Yeah, fair point ;)
14:10 JustinClift I'm just suggesting a place I've been and know is ok
14:10 kkeithley_ sure
14:14 kkeithley_ Málaga sounds good (because Alhambra)
14:15 kkeithley_ And Sevilla and Gibralter
14:15 kkeithley_ Gibraltar
14:15 JustinClift This one? https://en.wikipedia.org/wiki/Alhambra
14:15 kkeithley_ that would be the one
14:15 JustinClift Looks pretty cool :)
14:15 kkeithley_ My wife will _really_ hate me
14:16 JustinClift ?
14:16 JustinClift I'm sure you can find an excuse to bring her?
14:16 kkeithley_ If I go, without her. She's starting a new job, won't be able to get time off to come with
14:16 JustinClift Gah
14:16 kkeithley_ yeah
14:17 JustinClift Urgent medical procedure excuse?
14:17 kkeithley_ lol
14:17 JustinClift You'd need to think of something that has a suntan effect :)
14:17 kkeithley_ it's bad enough that we've both been to Paris, but not together. We have a deal that we can't see the Louvre until we see it together.  No such deal for the Alhambra though.
14:19 JustinClift Still sounds risky ;)
14:32 lalatenduM JustinClift, plz recommend Atomic team to have meeting in Spain ;)
14:38 ndevos JustinClift: why would Barcelona be difficult? We have xavih and also a Red Hat office there, I'm sure there should be some regular event venues
14:50 JustinClift ndevos: Apparently the venues are really expensive atm :(
14:51 JustinClift ndevos: Budget killing expensive
14:51 suliba_ joined #gluster-dev
14:51 JustinClift lalatenduM: ;)
14:51 JustinClift lalatenduM: You can definitely suggest it btw :)
14:51 ndevos JustinClift: hmm, yeah, I guess booking more in advance would save some costs
14:52 ndevos JustinClift: or we go to Berlin if that is easily reachable, it may be a little cheaper
14:53 ndevos or Portugal, that tends to be relatively cheap too, I think
14:54 ndevos oh, hey, Amsterdam, and I'll pay my own travel costs?
14:54 JustinClift :)
14:55 JustinClift I have no objection to any of this really
14:55 jiffin joined #gluster-dev
14:55 JustinClift Jen is having trouble with the booking aspect, so I was just trying to help her find an alternative (without putting too much time into it myself)
14:56 * ndevos does not know Jen
14:56 jobewan joined #gluster-dev
14:58 JustinClift ndevos: Jen Madriaga == Recent OSAS hire as our Events Specialist
15:01 ndevos JustinClift: ah, not this Jen :-/ https://www.youtube.com/watch?v=UTBsm0LzSP0
15:04 jiffin1 joined #gluster-dev
15:07 * JustinClift should have known better than to click on that link
15:07 JustinClift 3:02 minutes I won't get back
15:07 JustinClift It was funny tho ;)
15:08 ndevos JustinClift: you should fetch the rest of The IT Crowd series, its good amuzement
15:08 JustinClift ndevos: I've tried watching bits of it before... but I kinda don't really like the humour.  And it has a laugh track, which I find incredibly offputting
15:08 * JustinClift can't take anything with a laugh track :/.
15:08 ndevos hahahaha
15:09 JustinClift left #gluster-dev
15:09 JustinClift joined #gluster-dev
15:09 ndevos :P
15:09 JustinClift Gah, see
15:09 JustinClift It's like inbuild
15:09 JustinClift inbuilt even
15:09 ndevos builtin?
15:10 ndevos but well, maybe amuzement does not exist in English either
15:11 jiffin joined #gluster-dev
15:11 sankarshan joined #gluster-dev
15:17 JustinClift ndevos: What is "amuzement" ?
15:18 ndevos JustinClift: probably something like entertainment?
15:18 kkeithley_ amuzement is to amusement like while is to whilst
15:19 * JustinClift is not amuzed ?
15:19 ndevos hmm, maybe its written with an 's'?
15:19 JustinClift Ahhh.  You're meaning "amusement"
15:19 JustinClift Yeah
15:21 kkeithley_ maybe it's a mashup of amusement and amazement
15:26 jiffin1 joined #gluster-dev
15:49 shubhendu joined #gluster-dev
15:56 bala joined #gluster-dev
16:11 firemanxbr joined #gluster-dev
16:17 jiffin joined #gluster-dev
16:29 jiffin joined #gluster-dev
16:33 kshlm joined #gluster-dev
16:38 JustinClift hagarth: Looking at the git repos we need to keep / migrate from review.gluster.org to a new home
16:39 JustinClift hagarth: At the moment we're grabbing glusterfs.git, which is on disk at /git/glusterfs.git
16:39 hagarth JustinClift: why not grab everything?
16:39 JustinClift hagarth: I've just changed the backup script to grab everything under /git, but it looks like most of the other stuff is old
16:39 JustinClift hagarth: Yeah
16:40 JustinClift hagarth: That's what I was thinking, except it blows the backup on disk from ~110MB to ~860MB
16:40 JustinClift That's tar.bz2 version as awell
16:40 JustinClift as well
16:40 hagarth JustinClift: are you picking up everything under /review/r.g.o/git as well?
16:40 JustinClift It could be reduced a _bit_ using lzma in extreme mode... but not a lot
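A minimal sketch of the kind of archive step being discussed here, assuming a /git source tree and hypothetical destination paths (the actual nightly backup script is not shown in the log). Python's tarfile module supports both bz2 and xz (lzma) compression, which is the size trade-off mentioned above.

    import tarfile

    # Hypothetical paths for illustration; the real backup script and
    # destinations on the server differ.
    SRC = "/git"
    DEST_BZ2 = "/backups/git-repos.tar.bz2"
    DEST_XZ = "/backups/git-repos.tar.xz"

    def make_archive(dest, mode):
        """Create a compressed tarball of everything under SRC."""
        with tarfile.open(dest, mode) as tar:
            tar.add(SRC, arcname="git")

    # bz2 is what the nightly backup produces; xz (lzma) trades extra CPU
    # time for a somewhat smaller archive, as noted above.
    make_archive(DEST_BZ2, "w:bz2")
    make_archive(DEST_XZ, "w:xz")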
16:40 JustinClift Ahhhh, I was about to ask
16:41 JustinClift Yeah, I don't think the dirs under /git/ have all our repos
16:41 * JustinClift looks at /review/r.g.o/git
16:41 JustinClift hagarth: Ahhh, that's where the new stuff has been going
16:41 JustinClift hagarth: That's actually good
16:42 hagarth JustinClift: right
16:42 JustinClift hagarth: The entire /review directory and everything under it has been getting backed up nightly from the start, so yeah, it's been captured already
16:42 JustinClift It's a different tarball, but it's there ;)
16:42 hagarth JustinClift: cool :)
16:43 JustinClift Now, I need to figure out which - if any - of the dirs under /git/ don't need to be backed up + migrated
16:43 soumya joined #gluster-dev
16:44 JustinClift hagarth: These are the sizes:
16:44 JustinClift 165M  /git/glusterfs.git
16:44 JustinClift 68K   /git/glusterfs-hadoop.git
16:44 JustinClift 49M   /git/historic.git
16:44 JustinClift 83M   /git/old
16:44 JustinClift 2.9M  /git/regression.git
16:44 JustinClift 648M  /git/users
16:44 JustinClift (yeah, inline not fpaste)
16:44 JustinClift /git/old and /git/historic sound like they could go
16:44 JustinClift /git/users is huge
16:45 hagarth JustinClift: might be better to retain them for historic/archival reasons :D
16:45 JustinClift How about this stuff? http://fpaste.org/202159/42721553/
16:47 hagarth JustinClift: maybe check with csaba on whether he needs anything there?
16:47 JustinClift Good idea
16:47 JustinClift Writing email now
16:48 JustinClift hagarth: With /git/historic.git, the latest commit in it seems to be ~2009
16:49 hagarth JustinClift: yes, precisely we need it for historic reasons :)
16:49 * JustinClift sighs
16:49 JustinClift k :/
16:49 hagarth JustinClift: how about moving everything to gluster in github and nuking them off here?
16:50 JustinClift Ahhh, good idea
16:50 JustinClift Yeah
16:50 JustinClift hagarth: Is any of the stuff there "private" ?
16:50 JustinClift eg shouldn't be public
16:50 hagarth you mean in /git ?
16:50 * JustinClift nods
16:51 JustinClift Hmmm, its all public already isn't it...
16:51 JustinClift ?
16:53 jiffin joined #gluster-dev
16:53 hagarth JustinClift: I wouldn't migrate users to github/gluster
16:53 hagarth others can go there
16:54 hagarth JustinClift: no need to move regression.git as well
16:55 nishanth joined #gluster-dev
16:57 JustinClift regression.git is legacy and doesn't need to be backed up, or is it non-public stuff?
16:58 JustinClift (and should be backed up)
17:00 hagarth JustinClift: backup and don't move it to github
17:00 JustinClift hagarth: Next question...  who are these users? shehjart, kaushik, sac
17:00 JustinClift hagarth: np
17:00 hagarth JustinClift: legacy gluster developers :)
17:00 JustinClift Kill or keep their user dirs?
17:01 hagarth keep
17:02 ndevos sac is still there, isnt he? doing QE or something?
17:03 JustinClift hagarth: k.  At some point we're going to need to nuke this stuff.  Else it will be hanging around forever.
17:03 hagarth ndevos: he is part of engineering now.
17:04 hagarth ndevos: thanks for being vigilant as ever :)
17:04 JustinClift "just coz" isn't an excuse I'll accept forever ;)
17:04 ndevos hagarth: ah, that kind of 'legacy' :)
17:11 lalatenduM joined #gluster-dev
17:14 jiffin joined #gluster-dev
17:44 lalatenduM kkeithley_, r u around?
17:50 hchiramm_ joined #gluster-dev
17:57 lalatenduM kkeithley, or the alter ego :)
18:09 ndevos lalatenduM: dare to click a link? http://red.ht/1CMel7R
18:09 * lalatenduM clicking it :)
18:10 lalatenduM ndevos, cool , I had seen similar with rhs bugs
18:11 ndevos lalatenduM: oh, you do? I haven't...
18:11 ndevos lalatenduM: I do have a report with different versions and all their bugs, but this one should speak to component maintainers ;)
18:12 lalatenduM ndevos, thats a good idea
18:12 lalatenduM ndevos++
18:12 glusterbot lalatenduM: ndevos's karma is now 97
18:15 lalatenduM So glusterd has the highest number of bugs :/
18:15 ndevos well, people always report bugs against glusterd...
18:16 ndevos nfs and gfapi are pretty high on the list too :-/
18:17 ndevos maybe that is not too bad, I guess people just use it then
18:17 lalatenduM yeah, I am surprised to see libgfapi
18:19 ndevos oh, http://red.ht/1BKWsRq strips out the feature requests
18:20 ndevos the over-all picture does not change too much
18:24 kanagaraj joined #gluster-dev
18:26 lalatenduM ndevos, btw, we need to track if new features in master/3.7 have documentation with them , else we need to raise blocker bugs for 3.7
18:27 kkeithley_ we should add some more components.  tiering, bitrot, ganesha-ha, what else?
18:27 lalatenduM similar to what we did with 3.6
18:27 kkeithley_ and versions, 3.4.7, etc.
18:27 ndevos lalatenduM, kkeithley_: yes, +1 to both
18:27 lalatenduM selinux , backupapi?
18:28 ndevos selinux?
18:28 lalatenduM ndevos, I mean gluster-selinux ...now that glusterfs works fine with selinux
18:29 lalatenduM what abt having gluster-ganesha?
18:29 ndevos lalatenduM: but are the selinux changes itself not in the main selinux policy?
18:30 lalatenduM ndevos, it is, but if there is an issue, the user might not be sure if it's an selinux issue or a glusterfs one?
18:30 ndevos lalatenduM: rather ganesha-ha for the scripts and all, there is a gluster component in the nfs-ganesha bugzilla/project
18:31 lalatenduM ndevos, cool
18:31 ndevos lalatenduM: who would be the maintainer for an selinux component? I think it spreads through many... we need to triage new bugs anyway
18:32 kkeithley_ gluster-ganesha?  There's already a GlusterFS FSAL component for bugs against nfs-ganesha
18:32 lalatenduM ndevos, kkeithley I think we should tell the community about ganesha and glusterfs integration... I mean more docs
18:32 lalatenduM kkeithley, ok
18:32 kkeithley_ All we have is ganesha-ha, which will be superseded by converged HA for both Samba and Ganesha
18:32 lalatenduM kkeithley, ndevos I did not know that RE:GlusterFS FSAL component for bugs against nfs-ganesha
18:33 kkeithley_ yes, +1 for more docs about ganesha integration
18:33 ndevos d(o_O)b
18:33 kkeithley_ what is that? Wide eyed Princess Leia?
18:33 lalatenduM sounds like an awesome youtube video :)
18:34 lalatenduM who wants to do it ??? who who...
18:34 ndevos hmm, maybe give it some arms? d_(o_O)_b
18:34 lalatenduM lol
18:34 ndevos nope. that doesnt make it much better
18:35 kkeithley_ what's it supposed to be?
18:35 ndevos d would be a thumbs up
18:35 ndevos and so would b be
18:35 ndevos the () is just to make it more full, and o_O are the eyes and nose
18:36 kkeithley_ ah, okay. Just don't use that in Italy, Greece, Iran, or Afghanistan
18:36 lalatenduM kkeithley, +1 :)
18:36 lalatenduM lol
18:37 ndevos it's way more difficult to give a thumbs up when you write smilies like :-)
18:37 ndevos you know, you should not make the "excellent, tastes great" symbol in Germany either
18:38 kkeithley_ Ein pils, bitte
18:38 lalatenduM gave +1, http://review.gluster.org/#/c/9983/ this is ready to be merged
18:39 kkeithley_ or een pils AUB
18:40 ndevos https://www.colourbox.com/preview/6653795-chef-emoticon.jpg is what I meant, not sure if it can be understood differently in other countries
18:41 ndevos volgende week kan je een biertje van mij kijgen, kkeithley_
18:41 kkeithley_ haha, yes could be bad
18:42 lalatenduM agree with kkeithley_ :)
18:43 kkeithley_ I don't know kijgen, neither does translate.google.com
18:43 ndevos *krijgen
18:44 kkeithley_ to get, to receive, to pick up?
18:45 ndevos yes, like, I can get you a beer next week?
18:45 kkeithley_ yup, I knew all of it except how to fit krijgen into it.
18:46 ndevos I could also say, volgende week kan ik je een biertje geven, but that just sounds awkward to me
18:48 lpabon joined #gluster-dev
18:49 kkeithley_ Well, I figured it was either me giving/buying you a beer, or the other way around
18:50 kkeithley_ both ways work
19:22 ndevos hagarth: is there a reason why normal I/O or other FOPs do not count as a PING towards the server?
19:23 * ndevos will leave that standing like that for a while, he needs to make some dinner now...
19:23 shyam ndevos: I think they do...
19:24 * shyam goes to check the code around the ping
19:24 ndevos shyam: hmm, how can we then lose a ping if I/O is happening?
19:25 ndevos anyway, I'd be interested to hear about that :) ttyl!
19:36 shyam ndevos: Check rpc_clnt_ping_timer_expired where we set transport_activity, which is the last sent/received time stamp, against the ping expiry
19:36 shyam That basically means if we successfully sent or received a packet but ping expired, it is really not a ping expiry, at least that is how I read it...
19:37 shyam So to your question, if the client has nothing more to send, or gets an EAGAIN on a write to the socket because the sendQ is full, and the server has not responded to any of the previously sent RPCs, we have a ping timeout...
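A minimal sketch, in Python rather than the actual C of rpc_clnt_ping_timer_expired, of the check shyam describes above: if any packet was sent or received during the last ping interval, it is not treated as a real ping expiry (in this sketch the timer is simply rearmed); only a full interval with no transport activity leads to a disconnect. Names, the timeout value, and the structure here are illustrative assumptions, not the GlusterFS implementation.

    import time

    PING_TIMEOUT = 42  # seconds; illustrative value only

    class Connection:
        """Toy stand-in for an RPC client connection."""

        def __init__(self):
            self.last_sent = time.time()      # updated whenever a request goes out
            self.last_received = time.time()  # updated whenever a reply arrives

        def on_ping_timer_expired(self):
            """Activity during the interval means this is not a real expiry."""
            now = time.time()
            last_activity = max(self.last_sent, self.last_received)
            if now - last_activity < PING_TIMEOUT:
                # Traffic flowed recently, so rearm the timer for the
                # remainder of the window instead of disconnecting.
                self.schedule_ping_timer(PING_TIMEOUT - (now - last_activity))
                return
            # Nothing sent or received for a full interval (e.g. nothing to
            # send, or writes blocked with EAGAIN and no replies) -> timeout.
            self.disconnect("ping timeout")

        def schedule_ping_timer(self, delay):
            print(f"rearming ping timer in {delay:.1f}s")

        def disconnect(self, reason):
            print(f"disconnecting: {reason}")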
19:38 ppp joined #gluster-dev
20:02 ndevos shyam: right, yes rpc_clnt_ping_timer_expired seems to do the right thing
20:02 ndevos shyam: is that running on both the client and on the server side?
20:04 shyam client only, server does not ping client (again statement made from memory, not from the code)
20:07 ndevos yeah, that is what I thought, but the server can initiate a disconnect too
20:08 ndevos or, maybe not disconnect directly? and it only starts to cleanup locks once a disconnect happened?
20:09 shyam ndevos: This is the NFS problem when ping timeouts happen...? (just checking :) )
20:10 kkeithley_ please merge http://review.gluster.org/9974  ?
20:12 ndevos shyam: yeah, but ben turner also mentioned the issue on other protocols like cifs and glusterfs - I think the server-side cleans up the locks too eagerly
20:13 ndevos and to reproduce, you need 10G or 40G network connections....
20:14 ndevos kkeithley_: wow, you resisted the temptation to merge your own patch :)
20:25 kkeithley_ I did
20:26 kkeithley_ tomorrow I can do another one line
20:28 ndevos you're a coding machine!
20:37 kkeithley_ It's the quality that counds
20:37 kkeithley_ counts
20:38 kkeithley_ Would you like to take a look at https://bugzilla.redhat.com/show_bug.cgi?id=1204898 ?
20:38 glusterbot Bug 1204898: medium, medium, ---, nobody, NEW , Review Request: libntirpc - New Transport Independent RPC library for NFS-Ganesh
20:39 ndevos yeah, can I do that tomorrow?
20:39 kkeithley_ sure
21:13 badone joined #gluster-dev
21:45 suliba joined #gluster-dev
21:55 suliba joined #gluster-dev
22:12 hchiramm_ joined #gluster-dev
22:28 JustinClift ndevos: 10G and 40G connections aren't that uncommon any more :/
22:29 JustinClift Hmmmm, our Gerrit implementation has hooks that rely on a client side bugzilla package
22:29 JustinClift We'll need that for CentOS 7 then
22:29 JustinClift Will look at that tomorrow though.  Sleepy now. :(
22:52 nkhare joined #gluster-dev
23:05 shyam joined #gluster-dev
