
IRC log for #gluster-dev, 2014-07-08


All times shown according to UTC.

Time Nick Message
00:15 jv115 joined #gluster-dev
00:15 jv115 left #gluster-dev
00:15 jv115 joined #gluster-dev
00:27 jv115 left #gluster-dev
00:29 joevartuli joined #gluster-dev
00:54 joevartuli joined #gluster-dev
00:55 joevartuli joined #gluster-dev
00:56 joevartuli joined #gluster-dev
01:35 bala joined #gluster-dev
02:31 joevartuli joined #gluster-dev
02:33 bala joined #gluster-dev
02:57 flu_ joined #gluster-dev
03:37 atinmu joined #gluster-dev
03:43 ppai joined #gluster-dev
03:47 flu__ joined #gluster-dev
03:49 bharata-rao joined #gluster-dev
03:53 kanagaraj joined #gluster-dev
03:58 MacWinner joined #gluster-dev
04:01 bala joined #gluster-dev
04:08 ppai joined #gluster-dev
04:10 shubhendu joined #gluster-dev
04:35 shubhendu joined #gluster-dev
04:46 ndarshan joined #gluster-dev
05:00 kdhananjay joined #gluster-dev
05:00 aravindavk joined #gluster-dev
05:02 vpshastry joined #gluster-dev
05:18 nishanth joined #gluster-dev
05:21 lalatenduM joined #gluster-dev
05:28 kanagaraj joined #gluster-dev
05:29 kshlm joined #gluster-dev
05:29 kshlm joined #gluster-dev
05:35 hagarth joined #gluster-dev
05:36 aravindavk joined #gluster-dev
05:36 raghu joined #gluster-dev
05:43 vimal joined #gluster-dev
05:51 flu_ joined #gluster-dev
05:51 kdhananjay joined #gluster-dev
05:54 flu__ joined #gluster-dev
06:00 ppai joined #gluster-dev
06:23 flu_ joined #gluster-dev
06:23 flu__ joined #gluster-dev
06:37 aravindavk joined #gluster-dev
06:41 hagarth joined #gluster-dev
06:50 bala2 joined #gluster-dev
06:50 kanagaraj joined #gluster-dev
07:17 joevartu_ joined #gluster-dev
07:32 spandit joined #gluster-dev
07:35 kanagaraj joined #gluster-dev
07:46 rgustafs joined #gluster-dev
08:26 bala joined #gluster-dev
08:49 ndevos joined #gluster-dev
08:50 _ndevos joined #gluster-dev
08:50 _ndevos joined #gluster-dev
09:09 kanagaraj_ joined #gluster-dev
09:25 kanagaraj joined #gluster-dev
09:49 ppai joined #gluster-dev
09:54 hagarth joined #gluster-dev
09:55 kdhananjay joined #gluster-dev
10:02 aravindavk joined #gluster-dev
10:06 ndevos hchiramm__: the libgfapi Jenkins is suddenly marking commits as failures? like http://build.gluster.org/job/rackspace-regression-2GB-triggered/192/consoleFull and others
10:06 glusterbot Title: rackspace-regression-2GB-triggered #192 Console [Jenkins] (at build.gluster.org)
10:06 ndevos hchiramm__: ah, wait, wrong one
10:06 ndevos hchiramm__: http://rhs-client34.lab.eng.blr.redhat.com:8080/job/libgfapi-qemu/291/
10:17 kkeithley1 joined #gluster-dev
10:23 shyam joined #gluster-dev
10:25 kkeithley_ JustinClift: ping, I need to pick your brain on Infiniband
10:25 kkeithley_ Infiniband setup
10:30 kdhananjay joined #gluster-dev
10:37 kshlm joined #gluster-dev
10:45 kkeithley_ There haven't been any more backport requests for 3.4.5. I'm going to tag and release 3.4.5beta2. That's a day earlier than I said I was going to do it, but I don't see any reason to wait. Since it's still early here in EDT I'll wait a couple hours for people to weigh in. OTOH it's getting late in IST, so I wanted to "announce" early
10:45 kkeithley_ ...I'm going to tag and release 3.4.5beta2 _today_
10:47 lalatenduM kkeithley, saw your change to Fedora spec file wrt bz 1073217
10:47 lalatenduM however I have a doubt on this: the suggested fix for this issue is yet to be merged in master, i.e. http://review.gluster.org/#/c/7195/2
10:48 kkeithley_ yes, I'm going to send you and hchiramm__ an email.
10:48 glusterbot Title: Gerrit Code Review (at review.gluster.org)
10:48 lalatenduM kkeithley, cool
10:48 kkeithley_ it won't be merged in master, glusterfsd.init is a community-only thing
10:48 kkeithley_ community-release only thing
10:50 kkeithley_ It's a legacy, and in the past people have protested quite vigorously when we tried to remove it
10:51 kkeithley_ although maybe we should try again, e.g. for the 3.6 release perhaps?
10:51 lalatenduM kkeithley, should we add a fedora specific file then?
10:52 kkeithley_ To where?
10:52 lalatenduM kkeithley, when u say "glusterfsd.init is a community-only thing"  you mean the file will be used for community rpms right?
10:52 kkeithley_ there's already a glusterfsd.init in the Fedora dist-git
10:52 kkeithley_ correct
10:52 kkeithley_ The ones in Fedora and on download.gluster.org
10:52 lalatenduM kkeithley, to gluster git repo like "extras/init.d/glusterd-Redhat.in"
10:53 kkeithley_ We could go that direction too, although there was resistance to doing that too at one point
10:54 ndevos kkeithley_: do you know if nfs.mount-udp works in glusterfs-3.4.5? I'll pull http://review.gluster.org/8258 into 3.5, but can send a backport for 3.4 too if you like
10:54 glusterbot Title: Gerrit Code Review (at review.gluster.org)
10:57 kkeithley_ ndevos: I don't know
10:58 kkeithley_ As far as I know/knew, we did not support any UDP; if we do now, it's new AFAIK.
10:58 ndevos kkeithley_: I dont think it works (like on RHS-2.1), but I have not tested 3.4.5 yet - it would affect Solaris and HP-UX mounting
10:59 ndevos kkeithley_: it's only the MOUNT protocol over UDP, the NFS protocol uses TCP
10:59 kkeithley_ correct
10:59 ndevos its not new, it used to work in 3.3 :)
10:59 kkeithley_ oh, oka
10:59 kkeithley_ okay
11:00 kkeithley_ If it worked in 3.3, then it _ought_ to work in 3.4. The easiest thing to do would be to try it.
11:01 ndevos obviously its not used very much, so I guess we dont need to delay 3.4.5 for it - I'm not sure when I have time to test it on 3.4 though, the backport would be very simple
11:06 kkeithley_ oh, we have nfs/server/src/mount3udp_svc.c too. Looking at the patch it feels like it's something we ought to have in 3.4 too. But that's just a gut feeling. OTOH, nobody seems to be complaining about it.
11:07 lalatenduM kkeithley_, check this out http://review.gluster.org/#/c/7199/
11:07 kkeithley_ And now that I've said that, everyone will start clamoring for it.
11:07 glusterbot Title: Gerrit Code Review (at review.gluster.org)
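
As a reference for the nfs.mount-udp discussion above, here is a minimal sketch of how the option is exercised; the volume name "testvol", the server name "server1" and the mount point are examples, not taken from this log. The point being made is that only the MOUNT protocol runs over UDP, while the NFS traffic itself stays on TCP:

    # enable MOUNT-over-UDP on the Gluster NFS server (volume name is an example)
    gluster volume set testvol nfs.mount-udp on

    # on a client (e.g. Solaris or HP-UX, or Linux for testing):
    # MOUNT goes over UDP, NFS itself stays on TCP
    mount -t nfs -o vers=3,mountproto=udp,proto=tcp server1:/testvol /mnt/testvol
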
11:10 kanagaraj joined #gluster-dev
11:12 lalatenduM kkeithley_, so I think I need to know more about why change http://review.gluster.org/#/c/7195/2 can't be merged (especially when we use it for community packaging only)
11:12 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:12 kkeithley_ lalatenduM: yup, it resurfaces from time to time. If it gets a +2 and merged into master then there's something more to talk about. And yes, I'd love to drop things like the *.init out of the Fedora dist-git in favor of bits in our own bits
11:12 lalatenduM forgot to mark ndevos in my previous msg
11:14 lalatenduM kkeithley, cool, will pick hagarth's  brain on it
11:14 kkeithley_ because the change to glusterd.Redhat.in indirectly references a non-existent /etc/init.d/glusterfsd, mainly through its /var/lock/subsys/glusterfsd file. It'd be benign, but a little weird
11:15 ndevos lalatenduM, kkeithley_: I dont think we really support running manually created .vol files since 3.3 anymore, at least not with the rpms
11:16 ndevos lalatenduM, kkeithley_: and surely we are not able to run those on systemd servers, manually editing .vol files should not really be needed anyway (maybe some devs need to do it, but they know what to do anyway?)
11:16 kkeithley_ bits in our own bits. wow, that's good
11:17 kkeithley_ ndevos: I'm not understanding what your point is. You lost me
11:18 ndevos kkeithley_: about the glusterfsd service script, I think it should only be used to stop the glusterfsd processes - not start anything when there is a custom .vol file
11:19 kkeithley_ oh, yes, okay. Right, it should (must) only stop glusterfsd
11:25 kkeithley_ Or the other way around, it must never start any glusterfsd(s). Regardless of whether they are hand-written .vol files or not.
11:25 ndevos yes
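
A rough sketch of the stop-only behaviour being described for the community glusterfsd init script; the contents below are illustrative, not the actual extras/init.d or Fedora dist-git script. The key point from the discussion is that it must never start glusterfsd processes, only stop them and clean up the /var/lock/subsys/glusterfsd file:

    #!/bin/sh
    # illustrative glusterfsd init script: never starts brick processes
    case "$1" in
      start)
        # intentionally a no-op: glusterd spawns glusterfsd processes itself
        exit 0
        ;;
      stop)
        pkill -TERM -x glusterfsd 2>/dev/null || true
        rm -f /var/lock/subsys/glusterfsd
        ;;
      *)
        echo "Usage: $0 {start|stop}"
        exit 2
        ;;
    esac
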
11:27 kkeithley_ JustinClift: ping, I need to pick your brain on Infiniband setup
11:43 hagarth kkeithley_: there's a patch I sent over the weekend for quick-read in response to an issue reported by JoeJulian.. we would need that for 3.4.5
11:50 kkeithley_ hagarth: sent in email, or gerrit?
11:51 kkeithley_ http://review.gluster.org/8242
11:51 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:51 hagarth kkeithley_: yes
11:52 kkeithley_ done
11:53 hagarth cool, thanks!
11:55 kkeithley_ yw
12:04 tdasilva joined #gluster-dev
12:11 kkeithley_ ndevos: re: http://review.gluster.org/8258, regression is failing!
12:20 kkeithley_ ndevos: ^^^ what's up with that?
12:40 aravindavk joined #gluster-dev
12:41 kshlm joined #gluster-dev
12:47 kkeithley_ JustinClift: paging Justin Clift, please come to the courtesy white phone. Justin Clift, please come to the courtesy white phone.
12:47 hagarth joined #gluster-dev
12:52 kanagaraj joined #gluster-dev
12:53 shyam joined #gluster-dev
12:56 ppai joined #gluster-dev
13:01 vpshastry joined #gluster-dev
13:14 aravindavk joined #gluster-dev
13:18 edward2 joined #gluster-dev
13:31 bala joined #gluster-dev
13:35 ndevos kkeithley_: uh... not sure why that would be, looking into it *again* :-/
13:41 kkeithley_ it's biting me too in release-3.4 branch. ;-)  A couple things I overlooked in include.rc and volume.rc
13:55 rgustafs joined #gluster-dev
14:14 ndevos kkeithley_: I'm happy to backport commit 7cd32c1 to 3.5, I guess you need that in 3.4 too then?
14:15 pranithk joined #gluster-dev
14:15 kkeithley_ I've already done it.  http://review.gluster.org/#/c/8262/
14:16 glusterbot Title: Gerrit Code Review (at review.gluster.org)
14:16 pranithk kkeithley_: Thanks :-)
14:16 kkeithley_ If you're referring to the NFS UDP mount patch
14:17 ndevos kkeithley_: yeah
14:17 pranithk hagarth: ping
14:17 glusterbot pranithk: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
14:17 ndevos hah!
14:17 pranithk ndevos: See!
14:17 ndevos pranithk: you're not learning
14:18 kkeithley_ then there's this other backport request from hagarth in our inboxes for 1061211
14:18 pranithk ndevos: will take time for old habits to die I guess
14:20 ndevos kkeithley_: I've not seen the email yet, but from that bug I think it makes sense to backport it
14:20 kkeithley_ agreed
14:20 ndevos and bug 1117241 has been filed for 3.5.2 already, so it'll be on the radar
14:20 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1117241 unspecified, unspecified, ---, kaushal, NEW , backport 'gluster volume status --xml' issues
14:22 kkeithley_ I'm not sure how important that one is for 3.4. Fedora/EPEL/download.gluster.org (well, no epel really, but...)  are on 3.5. But someone might still be using 3.4 and ovirt
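
For context, backports like the ones requested here are usually done by cherry-picking the master commit onto the release branch and sending the result to Gerrit for review. The commit 7cd32c1 and the release-3.4 branch are mentioned above; the local branch name is arbitrary, and rfc.sh is the project's usual submission helper:

    # sketch of a typical backport workflow
    git fetch origin
    git checkout -b backport-3.4 origin/release-3.4
    git cherry-pick -x 7cd32c1     # -x records the original commit ID in the message
    # resolve any conflicts, update the BUG: tag in the commit message, then
    ./rfc.sh                       # submit the change to review.gluster.org
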
14:23 hagarth pranithk: pong, what's up?
14:23 * ndevos has no idea either
14:24 pranithk hagarth: so on gluster-users one user is facing problem because of entry self-heal I think
14:24 hagarth pranithk: ok...
14:25 pranithk hagarth: He says "We are considering taking another network cluster sytem, but we are not quite sure what to do. " I donno what he means by this. Is he saying he will move out of gluster because of afr entry-self-heal? :-(
14:26 hagarth pranithk: possibly so, the entry self-heal performance problems might not be suitable for his use case :(
14:27 * pranithk feels REALLY bad :'-(.
14:28 pranithk hagarth: If only I can clone myself :-|.
14:28 pranithk hagarth: So there are 2 things I need to do
14:28 pranithk hagarth: Fix VM issues as soon as possible, Fix all these small file perf
14:28 pranithk hagarth: Which one to pick first?
14:29 hagarth pranithk: the first one
14:29 hagarth pranithk: IMO, that is quite important for existing users.
14:30 hagarth pranithk: but I believe there are a few common enhancements which will help either case.
14:30 pranithk hagarth: got it.
14:30 pranithk hagarth: Who does perf testing upstream?
14:30 ndevos pranithk: is that about the emails from Norman_M (in #gluster)?
14:31 pranithk ndevos: yes :-(
14:31 ndevos pranithk: from what I noticed in the logs, they store user homedirs on gluster
14:31 hagarth pranithk: we need to run our own perf. tests. If you need help with hardware etc., let me know.
14:31 ndevos ah, dammit, I have to be in a meeting :-/
14:31 * ndevos will be back later
14:32 pranithk hagarth: Let me ask the question differently. Who has been testing perf tests on upstream? Do we have volunteers?
14:33 hagarth pranithk: what kind of tests do you want to run?
14:35 pranithk hagarth: The kind of testing corvid-tech people did, I have never seen anything like it so far: 700 mounts kept pounding on a plain replicate volume. While taking things into our own hands will solve the problems in the short run, maybe we should also try to increase the number of volunteers who test different workloads.
14:35 hagarth pranithk: i will bbiab, need to run out for a bit now. NMI.
14:37 pranithk hagarth: Maybe this is something we can discuss in tomorrow's meeting.
14:50 pranithk ndevos: When does Justin come online?
15:01 ndevos pranithk: for all I know he is in the UK and would be online now, but he seems to be working on really strange hours
15:03 pranithk ndevos: :-) okay. I will send him a mail
15:25 JustinClift kkeithley_: Heh, still need IB assistance?
15:32 YoungJoo_ joined #gluster-dev
15:34 kkeithley_ JustinClift: actually, it works
15:34 JustinClift kkeithley_: Cool. :)
15:35 kkeithley_ although I can't figure out how to get the ibaddr of a host and ibping it
15:37 kkeithley_ I have opensm running on one of the servers. ISTR that having an opensm slave is a good thing
15:37 JustinClift ndevos: Yeah, normally I'd be online by now, but I'm taking it super easy this week (kinda like 1/2 days).  Organising a bunch of both non-gluster & non-work related stuff that I've been putting off too long. ;)
15:38 JustinClift kkeithley_: Ahhh.
15:38 JustinClift kkeithley_: What does ibv_devinfo show?
15:39 kkeithley_ http://paste.fedoraproject.org/116399/04833913
15:39 glusterbot Title: #116399 Fedora Project Pastebin (at paste.fedoraproject.org)
15:39 kkeithley_ beaker just shut off three of my four machines. That's from the remaining one. I'm powering the others back on
15:41 JustinClift k.  ibstat output?
15:42 kkeithley_ http://paste.fedoraproject.org/116400/34120140
15:42 glusterbot Title: #116400 Fedora Project Pastebin (at paste.fedoraproject.org)
15:43 JustinClift kkeithley_: Btw, the "MT_1060110018" field in your ibv_devinfo output there is the board model number.  That's what you look up on the firmware page: http://www.mellanox.com/page/firmware_table_ConnectX3IB
15:43 kkeithley_ duly noted
15:43 JustinClift kkeithley_: And yeah, it has the latest firmware on it already 2.31.5050
15:44 kkeithley_ good, they're brand new cards, so I sort of expected they'd be latest-and-greatest
15:45 JustinClift kkeithley_: From memory, you should be able to start an ibping server up on one of the hosts, and then ibping it from another (when beaker gives you the boxes back).
15:45 JustinClift So, if you run ibping -S (<-- may not be right option, I'm doing this from old memory)
15:45 glusterbot JustinClift: (<'s karma is now -1
15:46 JustinClift Then on another box you should be able to ping that one's port GUID, as shown in ibstat
15:46 kkeithley_ oh, didn't know (remember) that I need an ibping server.
15:46 JustinClift I _think_ ibping is client/server
15:46 * JustinClift takes a quick look at the community.mellanox.com website
15:47 JustinClift Hmmm.
15:47 JustinClift I can't remember if the default is for ibping to use GUIDs, or to use LIDs
15:47 kkeithley_ well, glusterfs w/ rdma "just works" for me. I'm not banging on it though
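
A hedged sketch of the ibping exchange being described: start a responder on one node, then ping it from another using the port LID or GUID reported by ibstat. The LID and GUID values below are placeholders:

    # on the first node: run ibping in server (responder) mode
    ibping -S

    # on the second node: look up the responder's port LID / GUID ...
    ibstat
    # ... then ping it by LID (placeholder value) or by GUID with -G
    ibping 4
    ibping -G 0x0002c903004a1b2c
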
15:47 cristov joined #gluster-dev
15:48 JustinClift kkeithley_: Would you be ok to create 3-4 volumes, and see if they all show up as available for NFS?
15:48 kkeithley_ sure
15:49 kkeithley_ right now I just have a two brick dht volume on two servers, and two clients
15:49 kdhananjay joined #gluster-dev
15:50 kkeithley_ oops, I'm heading out to lunch with the wife, biab. I'll try the 3-4 volumes + nfs when I get back
15:59 johnmark anyone going to oscon?
16:00 johnmark hagarth: ^^^
16:02 JustinClift kkeithley_: This is the reason I'm asking about the several volumes + NFS: https://bugzilla.redhat.com/show_bug.cgi?id=978205
16:02 glusterbot Bug 978205: medium, unspecified, ---, vagarwal, ASSIGNED , NFS mount failing for several volumes with 3.4.0 beta3.  Only last one created can be mounted with NFS.
16:02 JustinClift kkeithley_: It'd be useful to know if that's resolved or not. ;)
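
A minimal sketch of the multi-volume NFS check being asked for (bug 978205): create several volumes and verify that every one of them, not just the last one created, is exported and mountable over NFS. Host names, brick paths and mount points below are examples:

    # create and start a few volumes (brick paths and host names are examples)
    for v in vol1 vol2 vol3 vol4; do
        gluster volume create $v server1:/bricks/$v server2:/bricks/$v force
        gluster volume start $v
    done

    # check that the Gluster NFS server exports all of them, then try each mount
    showmount -e server1
    for v in vol1 vol2 vol3 vol4; do
        mkdir -p /mnt/$v && mount -t nfs -o vers=3 server1:/$v /mnt/$v
    done
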
16:08 hchiramm__ JustinClift, is any work going on with gerrit or build.gluster.org servers ?
16:08 hchiramm__ wrt upgrading or anything of that sort ?
16:09 JustinClift hchiramm__: Not that I'm doing atm.
16:09 hchiramm__ oh..ok
16:09 JustinClift hchiramm__: Why do you ask?
16:09 hchiramm__ http://rhs-client34.lab.eng.blr.redhat.com:8080/job/libgfapi-qemu/302/console
16:09 hchiramm__ JustinClift, at times it fails to fetch the source
16:10 hchiramm__ looks to be something outside of rhs-client34.lab.eng.blr.redhat.com causing the failure
16:10 JustinClift hchiramm__: That's interesting.  We sometimes see the same kind of git timeout failures with the regression tests too
16:11 hchiramm__ hmmmmm..
16:11 JustinClift Seems to be super temporary connectivity problem to the gerrit server or something
16:11 hchiramm__ exactly
16:11 hchiramm__ but not sure when it can pop up
16:11 JustinClift I'm not sure it's something we can control with the current state of things.  Probably just have to keep working around it.
16:12 hchiramm__ the failure actually votes "-1"
16:12 hchiramm__ where it is not an actual failure..
16:13 hchiramm__ yeah, for now we have to survive with it :
16:13 hchiramm__ 0
16:13 hchiramm__ thanks JustinClift++
16:13 glusterbot hchiramm__: JustinClift's karma is now 1
16:13 hchiramm__ ndevos++ u too
16:13 glusterbot hchiramm__: ndevos's karma is now 3
16:13 ndevos \o/
16:14 hchiramm__ kkeithley++
16:14 glusterbot hchiramm__: kkeithley's karma is now 2
16:14 hchiramm__ kkeithley, , pending karma for the mem_account review :)
16:15 johnmark lol
16:21 JustinClift hchiramm__: Looking at that failure, I guess you could instead have the checkout done manually by a script that loops on connection failure.
16:21 JustinClift hchiramm__: It's something I've been tempted to do, but haven't yet looked into.
16:22 JustinClift hchiramm__: You'd need to alter the job so that even though it's triggered by the gerrit CR, it doesn't then check stuff out using that normal approach.
16:22 hchiramm__ JustinClift, the checkout happens from jenkins job script
16:22 JustinClift hchiramm__: Probably easy to ask Ben Turner if he has ideas, as he's _much_ more in depth with Jenkins. :)
16:23 hchiramm__ so I think if we want to loop , we can do that from there
16:23 JustinClift hchiramm__: Cool, do that then. :)
16:23 hchiramm__ but yeah, need to test it and see..
16:23 JustinClift hchiramm__: I had the return codes from everything printing out in the upstream console logs for a while
16:24 hchiramm__ :) .. its in queue with less priority set :)
16:24 hchiramm__ JustinClift, thats good :)
16:24 JustinClift With that, I was able to put a loop in (when return code == 128) for a different time out problem (ssh from memory)
16:24 hchiramm__ then u r more experienced here :)
16:25 vpshastry joined #gluster-dev
16:25 JustinClift hchiramm__: Take a look here: http://build.gluster.org/job/rackspace-regression-2GB/configure
16:25 JustinClift (you'll need to be logged in)
16:25 JustinClift In the script window at the bottom, do a search for LOOP_COUNTER
16:25 JustinClift That'll show you the loop bit
16:26 JustinClift And yeah, that was for a git checkout too it turns out
16:26 hchiramm__ yep..
16:27 hchiramm__ thanks Justin for ur inputs here..
16:27 hchiramm__ will check it ..
16:27 JustinClift :)
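
A rough example of the retry idea being discussed, along the lines of the LOOP_COUNTER loop in the upstream regression job: retry the git fetch a few times on a transient connectivity failure instead of letting the job vote -1 immediately. The retry count, sleep interval and refspec handling are illustrative; $GERRIT_REFSPEC is the variable the Gerrit trigger normally provides:

    # retry a flaky fetch from Gerrit a few times before giving up
    LOOP_COUNTER=0
    until git fetch https://review.gluster.org/glusterfs "$GERRIT_REFSPEC"; do
        LOOP_COUNTER=$((LOOP_COUNTER + 1))
        if [ "$LOOP_COUNTER" -ge 5 ]; then
            echo "git fetch kept failing, giving up" >&2
            exit 1
        fi
        sleep 30
    done
    git checkout FETCH_HEAD
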
16:37 hagarth johnmark: any gluster/storage event planned around oscon?
16:38 vikumar joined #gluster-dev
16:44 JustinClift purpleidea: Btw, it's "Libvirt", not "LibVirt"
16:49 johnmark hagarth: trying to plan a bof
16:50 johnmark hagarth: but I need to make sure someone smart will be there
16:50 johnmark so I'm asking y4m4, a2 and eco
16:51 JoeJulian ... and I already told him I could be there, so I guess I know where I rank now.
17:03 jobewan joined #gluster-dev
17:05 johnmark JoeJulian: facepalm
17:05 johnmark I totally forgot that
17:06 johnmark JoeJulian: I already submitted the bof, so I'll be happy to put you as mod
17:15 hagarth JoeJulian: lol
17:15 johnmark :)
17:15 johnmark oops
17:15 JoeJulian hehe
17:17 * JoeJulian < smart < (y4m4|a2|eco)
17:25 johnmark LOL
17:27 MacWinne_ joined #gluster-dev
17:28 johnmark hagarth: I'll know by the end of today or tomorrow whether we get a BoF
17:32 skoduri joined #gluster-dev
17:39 skoduri joined #gluster-dev
18:05 jdarcy joined #gluster-dev
18:08 purpleidea JustinClift: ? where?
18:08 JustinClift purpleidea: http://www.gluster.org/2014/07/following-james-libvirt-vagrant-recipes/
18:09 purpleidea JustinClift: then since you're a details man, you'll notice that I didn't write that article...
18:10 nishanth joined #gluster-dev
18:10 johnmark heh
18:11 * JustinClift looks
18:11 JustinClift Dammit
18:11 JustinClift He's not on IRC
18:11 JustinClift I'll bitch at him via email. ;)
18:14 kkeithley_ JustinClift: okay,  six transport rdma volumes on two servers (dht). One rdma native mount and six nfs mounts. All mounted, all writable from the client
18:14 JustinClift Cool, that's good news
18:14 kkeithley_ all writable from both clients
18:15 JustinClift Do you have a sec to update that BZ to say it's working for you now?
18:20 purpleidea JustinClift: yeah, actually when you send jayunit100 an email, please explain to him that if he doesn't *STAY* on irc, he shouldn't ask me questions. He's asked me like 5+ different questions on different IRC channels even and never stays around for the response.
18:20 JustinClift heh
18:21 kkeithley_ JustinClift: bz updated
18:21 JustinClift purpleidea: k, I just block quoted you there.
18:21 JustinClift :)
18:21 JustinClift kkeithley_: Thank you. :)
18:21 purpleidea JustinClift: :)
18:22 kkeithley_ yw
18:22 purpleidea JustinClift: side note, Evolution (email client) actually has a metric ton of useful features. e.g. "paste as quotation" is one example...
18:22 purpleidea JustinClift: i keep discovering more... It just needs some SERIOUS bug fixing and maybe performance help. otherwise it's pretty good
18:23 JustinClift purpleidea: Try doing a search for keywords across all of your inbox folders
18:23 purpleidea JustinClift: yeah performance sucks
18:23 JustinClift Oh, it can do it now?
18:23 purpleidea JustinClift: also the UI isn't obvious but yeah
18:23 JustinClift The last version I tried didn't have the capability.  At all.
18:23 purpleidea click the magnifying glass in the search box to the left of the text entry...
18:24 JustinClift I don't have it anywhere any more, so that description isn't going to help me
18:24 * JustinClift uses OSX desktop
18:24 purpleidea JustinClift: lame
18:24 purpleidea JustinClift: to the right of the search box you can pick:
18:25 * JustinClift isn't interested :)
18:25 purpleidea current folder, current account, all accounts
18:25 purpleidea JustinClift: anyways, got to go back to hacking!
18:25 JustinClift :)
18:33 kkeithley_ Tomorrow I'll try rdma on Fedora19 (Mellanox doesn't have drivers for f20) and run some heavy traffic. When I get back from PTO I'll try some Ubuntu and Debian.
18:34 kanagaraj joined #gluster-dev
18:39 JustinClift kkeithley_: Cool. :)
18:57 scuttle_ joined #gluster-dev
19:18 shyam joined #gluster-dev
19:19 hchiramm_ joined #gluster-dev
20:07 _Bryan_ joined #gluster-dev
21:16 scuttle_ joined #gluster-dev
