
IRC log for #gluster-dev, 2016-08-17


All times shown according to UTC.

Time Nick Message
00:45 dlambrig joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:00 hagarth joined #gluster-dev
02:20 poornimag joined #gluster-dev
02:25 pranithk1 joined #gluster-dev
02:50 aspandey joined #gluster-dev
03:14 magrawal joined #gluster-dev
03:50 pranithk1 joined #gluster-dev
03:59 atinm joined #gluster-dev
03:59 itisravi joined #gluster-dev
04:29 shubhendu joined #gluster-dev
04:30 poornimag joined #gluster-dev
04:34 nbalacha joined #gluster-dev
04:50 nigelb I've switched the centos regression to the new job.
04:52 nigelb so if you notice anything go wrong, please let me know.
05:10 msvbhat joined #gluster-dev
05:10 ashiq joined #gluster-dev
05:13 kdhananjay joined #gluster-dev
05:13 skoduri joined #gluster-dev
05:13 nbalacha joined #gluster-dev
05:16 Manikandan joined #gluster-dev
05:17 nbalacha atinm, where does glusterd get the input to generate the volinfo files?
05:19 ndarshan joined #gluster-dev
05:28 atinm nbalacha, this is done as part of vol create workflow
05:28 aspandey joined #gluster-dev
05:28 nbalacha atinm, what about for an existing volume? Say if I restart a node
05:28 nbalacha where does it read the info from?
05:30 atinm nbalacha, refer glusterd_store_retrieve_volumes ()
05:31 atinm nbalacha, does this answer your question?
05:32 karthik_ joined #gluster-dev
05:34 nbalacha atinm, let me check and get back
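
(For context on the exchange above: glusterd persists each volume's configuration under its working directory, typically /var/lib/glusterd/vols/<volname>/, and glusterd_store_retrieve_volumes() re-reads those files when the daemon starts. A rough, hedged sketch of that on-disk state follows; the volume name is made up and the exact keys vary by GlusterFS version.)

    # one directory per volume under glusterd's working directory
    ls /var/lib/glusterd/vols/myvol/
    # bricks/  info  node_state.info  ...        (contents vary by version)

    # 'info' is a flat key=value file; glusterd_store_retrieve_volumes()
    # parses it on daemon start to rebuild the in-memory volinfo
    cat /var/lib/glusterd/vols/myvol/info
    # type=2
    # count=2
    # status=1
    # ...
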
05:35 rafi joined #gluster-dev
05:37 ankitraj joined #gluster-dev
05:38 ankitraj joined #gluster-dev
05:38 Muthu_ joined #gluster-dev
05:39 rafi joined #gluster-dev
05:41 ppai joined #gluster-dev
05:43 atalur joined #gluster-dev
05:43 ramky joined #gluster-dev
05:44 msvbhat joined #gluster-dev
05:45 asengupt joined #gluster-dev
05:46 kotreshhr joined #gluster-dev
05:46 Bhaskarakiran joined #gluster-dev
05:59 hgowtham joined #gluster-dev
06:00 mchangir joined #gluster-dev
06:01 aravindavk joined #gluster-dev
06:04 msvbhat joined #gluster-dev
06:17 sanoj joined #gluster-dev
06:20 kshlm joined #gluster-dev
06:23 hchiramm joined #gluster-dev
06:23 ashiq rafi++, thanks
06:23 glusterbot ashiq: rafi's karma is now 53
06:30 devyani7 joined #gluster-dev
06:32 msvbhat joined #gluster-dev
06:41 hchiramm joined #gluster-dev
06:46 msvbhat joined #gluster-dev
06:52 poornimag joined #gluster-dev
06:56 noobs joined #gluster-dev
06:57 msvbhat joined #gluster-dev
07:04 kdhananjay joined #gluster-dev
07:04 jiffin joined #gluster-dev
07:14 devyani7 joined #gluster-dev
07:30 poornimag joined #gluster-dev
08:28 Bhaskarakiran joined #gluster-dev
08:34 rastar joined #gluster-dev
08:55 berkayunal joined #gluster-dev
09:02 bunal joined #gluster-dev
09:05 bunal_ joined #gluster-dev
09:06 bunal left #gluster-dev
09:09 kaushal_ joined #gluster-dev
09:11 ira_ joined #gluster-dev
09:12 msvbhat joined #gluster-dev
09:22 ppai joined #gluster-dev
09:43 kdhananjay joined #gluster-dev
09:58 kshlm joined #gluster-dev
10:11 ppai joined #gluster-dev
10:26 rastar joined #gluster-dev
10:26 bfoster joined #gluster-dev
10:33 asengupt joined #gluster-dev
10:40 msvbhat joined #gluster-dev
10:54 post-factum nigelb: i see many netbsd failures for recent reviews
10:56 asengupt joined #gluster-dev
10:57 hagarth joined #gluster-dev
11:02 atalur joined #gluster-dev
11:05 rafi1 joined #gluster-dev
11:17 poornimag joined #gluster-dev
11:26 ndevos hagarth, poornimag: did you ever hear me speak? I feel like I'm being talked over and ignored...
11:29 poornimag ndevos, ohh, we couldn't hear you
11:29 poornimag ndevos, could you try again
11:29 ndevos poornimag: yeah, I thought that :-/
11:30 rastar ndevos: we were auto-muted too, try *4
11:31 ndevos rastar: I dont think that works on bluejeans connected phones?
11:31 rastar ndevos: it worked for us
11:32 * ndevos types, and you should hear that
11:32 rastar ndevos: Yes :)
11:32 ndevos :)
11:32 rastar ndevos: you have a nice mechanical keyboard
11:37 skoduri joined #gluster-dev
11:39 ndevos nbalacha: press *4 :D
11:39 nbalacha ndevos, thanks
11:39 ndevos rastar++
11:39 glusterbot ndevos: rastar's karma is now 43
11:40 ndevos hmm :-/
11:54 rafi REMINDER: Gluster Community Meeting in #gluster-meeting in ~ 5 minutes from now
11:59 mchangir joined #gluster-dev
12:01 julim joined #gluster-dev
12:03 ndevos poornimag: please have different bugs for the different features that are needed in the different xlators, that helps a lot with the tracking
12:10 dlambrig joined #gluster-dev
12:20 kdhananjay1 joined #gluster-dev
12:22 nishanth joined #gluster-dev
12:24 noobs joined #gluster-dev
12:29 ppai joined #gluster-dev
12:30 kdhananjay joined #gluster-dev
12:31 Muthu_ joined #gluster-dev
12:33 aspandey joined #gluster-dev
12:42 post-factum Manikandan|afk: are you around?
12:42 ndevos seen the |afk?
12:42 rastar joined #gluster-dev
12:43 post-factum yep
12:43 post-factum just to be sure
12:43 aspandey xavih, ping
12:43 glusterbot aspandey: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:44 Manikandan post-factum, I am here, but in middle of something
12:44 ndevos Manikandan: YES, the community meeting ;-)
12:44 post-factum Manikandan: oh, sorry. when could I reach you to ask several questions about quotas?
12:45 Manikandan post-factum, I am sorry, is it fine if I answer you on Friday?
12:45 Manikandan ndevos, solving an issue
12:45 post-factum Manikandan: that's okay
12:45 Manikandan ;-)
12:45 Manikandan post-factum, thanks :)
12:45 post-factum Manikandan: thanks
12:45 Manikandan post-factum, np
13:00 post-factum rafi++
13:00 glusterbot post-factum: rafi's karma is now 54
13:00 kshlm rafi++
13:00 glusterbot kshlm: rafi's karma is now 55
13:00 ndevos rafi++
13:00 glusterbot ndevos: rafi's karma is now 56
13:01 rafi post-factum, kshlm, ndevos: thanks :)
13:02 kdhananjay joined #gluster-dev
13:05 atinm ndevos, I got busy looking into one of the issues so couldn't reply to you
13:06 atinm ndevos, I never said that I wanted to start a thread on restructuring, did I?
13:06 atinm ndevos, I'll look for a volunteer to do that is what I told :)
13:09 kkeithley rafi++
13:09 glusterbot kkeithley: rafi's karma is now 57
13:11 ndevos atinm: yeah, you'll find a volunteer to get the discussion started and changes made, but that's a little difficult to explain to others in a meeting
13:15 justinclift joined #gluster-dev
13:16 justinclift kkeithley: ping, haven't heard back from you?
13:17 kkeithley oh, sorry. I have nothing more really. I'm going to try to get OSAS to buy something and maybe I can get misc to put it in the cage.
13:18 kkeithley It's not pressing enough to make you jump through hoops to use yours.
13:18 misc mhh, buy what ?
13:18 kkeithley a mac mini to replace the one we had here until justinclift took it away
13:19 kkeithley to do nightly OS X builds on.
13:19 kkeithley OS X builds which are currently broken
13:19 misc oh
13:19 kkeithley ndevos: btw, sed on OS X doesn't like the -r option in buildaux/xdrgen
13:20 kkeithley it wants -E for extended regex
13:20 misc mac mini would be interesting
13:20 nishanth joined #gluster-dev
13:20 misc kkeithley: and budget wise, we might need to get a ip kvm on top of it
13:21 kkeithley Well, amye is waiting to see what money is left after Gluster Summit
13:21 kkeithley If there's no money left then maybe we'll be doing nothing
13:22 Manikandan atinm++
13:22 ndevos kkeithley: someone still cares about OS X?
13:22 glusterbot Manikandan: atinm's karma is now 64
13:23 misc also, isn't netbsd/freebsd close enough to os x to detect the sed issue and this kind of things ?
13:23 justinclift ndevos: Every film production company ever :p
13:23 kkeithley there are a bunch of patches rotting in gerrit for OS X from Dennis Schafroth.
13:23 misc and if we want osx, would it be as a jenkins builder, something else ?
13:24 justinclift Harsha leaving really hurt us for getting OSX support completed :(
13:24 kkeithley and apparently the sed -E vs. -r is OS X specific, because our NetBSD and FreeBSD smoke tests appear to work.
13:24 ndevos well, use -E then?
13:24 misc good to know
13:24 ndevos or drop the sed entirely
13:25 misc use --posix :p
13:25 kkeithley if only rpc-gen didn't put '-' in the guard macro
13:25 justinclift misc: It probably depends.  If OSAS are ok to pay for a mac mini it can probably run dedicated as "whatever's needed".
13:25 kkeithley even with --posix, rpcgen still puts '-' in the guard macro
13:26 justinclift misc: If they don't though, you're welcome to run something simpler (eg nightly cron job) on the mac mini here
13:26 misc justinclift: yeah, just trying to size the work
13:26 ndevos kkeithley: escape the macro with % ?
13:26 rafi1 joined #gluster-dev
13:26 misc I wonder if we can get sponsoring from apple
13:26 ndevos like, %#ifndef _WHATEVER_H_
13:27 justinclift misc: If you have contacts who could be asked, then well.. why not ask? :D
13:27 justinclift misc: Ask around in OSAS maybe, someone might know the right people. :)
13:28 kkeithley ndevos: you lost me.   #ifndef _GLUSTERFS-FOPS_H_RPCGEN
13:28 justinclift misc: I know one potential guy, but don't want to burn the favour if there's a better option. ;)
13:28 kkeithley what does escaping it with % do?
13:29 kkeithley rpcgen adds the #ifndef. It's not in glusterfs-fops.x
13:29 kkeithley I could rename all the .x files, e.g. from glusterfs-fops.x to glusterfs_fops.x
13:29 kkeithley (yuck)
13:29 ndevos kkeithley: oh, yuck, why would rpcgen put a - in there?
13:30 kkeithley because glusterfs-fops.x
13:30 ndevos yeah, got it
13:30 justinclift kkeithley: With the mac mini though, amye got the "get the old quad core model" bit, yeah?
13:30 ndevos just rename the file then
13:31 kkeithley justinclift: dunno. but for our purposes I don't think it really matters a whole lot.
13:32 justinclift kkeithley: Probably better to play it safe, and get the quad version.  But hey, up to you guys. ;)
13:32 kkeithley ndevos: ick. I could make a temp renamed copy during the build. Maybe that's what you meant?
13:33 kkeithley justinclift: will keep it in mind, presuming we have any money after the summit
13:33 * kkeithley needs to get back to http://review.gluster.org/14085
13:33 misc justinclift: why stop at the mac mini ?
13:33 justinclift misc: Because budget is a factor
13:33 ndevos kkeithley: really, rename the file if that is the easiest, who cares about a - or _?
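
(A hedged sketch of the temp-copy workaround kkeithley floats above; the file names follow the discussion, and the exact guard-macro naming depends on the rpcgen implementation.)

    # rpcgen derives the include guard from the input file name, so the '-' in
    # glusterfs-fops.x ends up in the macro: #ifndef _GLUSTERFS-FOPS_H_RPCGEN
    # Generating from a temporary '-'-free copy sidesteps that:
    cp glusterfs-fops.x glusterfs_fops.x
    rpcgen -h glusterfs_fops.x > glusterfs-fops.h   # guard becomes _GLUSTERFS_FOPS_H_RPCGEN
    rm -f glusterfs_fops.x
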
13:33 justinclift ;)
13:34 justinclift misc: If you're serious about the Apple sponsorship though, then sure, get whatever they're happy to provide :D
13:34 atinm Manikandan, I believe we need to mark tests/basic/quota-rename.t as bad for NetBSD, I am seeing it failing frequently these days
13:35 Manikandan atinm, I will look into that soon
13:35 kkeithley If someone knows somebody at Apple to ask, great.  I'd guess nobody at Apple cares about running glusterfs on OS X.
13:35 misc justinclift: we do have a macpro in the DC
13:35 misc not sure which project however
13:35 kkeithley Or we'd have heard from them by now
13:36 hagarth joined #gluster-dev
13:36 nishanth joined #gluster-dev
13:36 justinclift misc: Well, follow that up. :)
13:37 kkeithley has anyone seen my email to gluster-devel about bugzilla versions come through?
13:37 misc and Fedora is trying to get a mac system too for building purpose, so I wonder if some partnership can be achieved
13:38 kkeithley misc: what is it about https://bugzilla.redhat.com/show_bug.cgi?id=1367588 you want people to look at?
13:38 glusterbot Bug 1367588: unspecified, unspecified, ---, bugs, NEW , Improve the redirection for specific URL for RTD coming from old website
13:39 kkeithley nm about gluster-devel email, I finally received it.
13:39 kkeithley slow
13:42 misc kkeithley: just that I guess we want to decide what we do, between restore the wiki and forget redirection, or decide that we have better redirections, or anything
13:42 misc I am still unsure who is responsible for the documentation
13:44 misc (this and I am unsure also of where i should redirect https://bugzilla.redhat.com/show_bug.cgi?id=1365706 )
13:44 glusterbot Bug 1365706: unspecified, unspecified, ---, nigelb, NEW , Broken link for Opversion feature page
13:44 kkeithley I think you hit the nail on the head. No one is responsible
13:46 misc mhh
13:57 kkeithley humble orchestrated the last round of doc updates and the switch to RTDs
14:02 justinclift It's the ugly baby noone wants :/
14:08 msvbhat joined #gluster-dev
14:11 ira kkeithley: Expand RTD?
14:13 ndevos Read The Docs
14:13 ndevos see gluster.readthedocs.io :)
14:16 lpabon joined #gluster-dev
14:18 justinclift Hmmm, would anyone mind if I setup a new gluster slave in Jenkins and run a few regression jobs on it just to see how it performs?
14:18 justinclift Note - wouldn't have it voting. ;)
14:19 ndevos justinclift: you'll need to check that with nigelb, he's our Jenkins master now :)
14:19 justinclift ndevos: Thanks :)
14:20 justinclift nigelb: I'm mucking around with things in Scaleway (hosting place in France) and curious what these cheapo Atom processors are like in actual real world CI performance
14:20 nigelb justinclift: I'm running everything through JJB.
14:20 nigelb these days.
14:20 nigelb You probably won't have permissions to create/edit jobs anymore.
14:20 justinclift Ahhh.
14:20 justinclift I've not heard of JJB before.
14:21 justinclift Improved front end to Jenkins?
14:21 nigelb It's jenkins jobs as yaml config files
14:21 nigelb https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/jenkins/jobs/glusterfs-rpms.yml
14:21 kshlm joined #gluster-dev
14:21 nigelb https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/jenkins/jobs/centos6-regression.yml
14:22 justinclift Ahhh, so they can be kept in git, and managed through git tools.  That's a much better idea than the manual crap. ;)
14:22 nigelb Yeah, and it's easier to change multiple jobs at once.
14:22 nigelb I've added timestamps to a bunch of jobs which didn't have it.
14:24 justinclift Hmm, not all yaml though: https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/jenkins/jobs/rackspace-regression-2GB-triggered.xml
14:24 nigelb justinclift: I haven't deleted the old xml files yet.
14:24 nigelb that centos6-regression is the new regression job converted from that.
14:24 nigelb I've disabled the old one on jenkins
14:25 justinclift Makes sense :)
14:26 nigelb I often do diffs after I do yaml to xml conversion, to confirm I haven't broken anything.
14:26 nigelb (I still break things :P)
14:26 justinclift ;)
14:26 nigelb JJB has some sensible defaults, except it's different from what we used to do.
14:26 justinclift Makes sense.  New tool, getting familiar with it.
14:27 justinclift Makes sense.  New tool, getting familiar with it kind of thing
14:27 justinclift Ugh
14:27 nigelb The documentation is spectacular though
14:27 nigelb so, if you want to create a job, send me a pull request.
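
(For readers who, like justinclift above, haven't met JJB: it is jenkins-job-builder, which renders YAML job definitions such as the ones linked above into Jenkins job XML and pushes them to the server. A minimal sketch of the workflow nigelb describes, including the yaml-to-xml diff he mentions; the output file names and the presence of a jenkins_jobs.ini with server credentials are assumptions, not the actual setup.)

    pip install jenkins-job-builder             # or the distribution package
    jenkins-jobs test jenkins/jobs/ -o out/     # render the YAML to Jenkins XML locally
    diff -u out/centos6-regression rackspace-regression-2GB-triggered.xml
    jenkins-jobs update jenkins/jobs/           # push the generated jobs to Jenkins
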
14:28 justinclift I'm heading right into the "can't be bothered" territory :D
14:28 nigelb lol
14:28 nigelb If you get a machine into a particular label, I'm happy to run a test tomorrow am.
14:28 nigelb Things are quiet then :)
14:29 nigelb (my am)
14:29 justinclift Makes sense.
14:29 justinclift I think I'll put this off a bit though.
14:29 rraja joined #gluster-dev
14:30 nigelb post-factum: Looking now.
14:31 nigelb post-factum: haha, so at that time I was battling the new centos-regression job and it was failing too.
14:31 justinclift nigelb: Might ping you about it in a few weeks though. :D
14:31 nigelb justinclift: no problem! It's become incredibly easy to create new jobs now :)
14:31 justinclift :D
14:31 justinclift nigelb: Is build.gluster.org migrated to new host yet?
14:31 justinclift eg not still running on that dodgy old thing
14:32 nigelb justinclift: It's not running on iWeb anymore.
14:32 justinclift Thanks god :D
14:32 nigelb But, it's not at its permanent home yet.
14:32 justinclift Ugh
14:33 nigelb misc is working on migrating it soon. One host to another inside the cage.
14:33 nigelb So short downtime.
14:33 justinclift [Well, I'm an atheist, but the principle applies] ;)
14:33 post-factum nigelb: i did several rechecks, and it finally succeeded
14:33 justinclift nigelb: Cool. :)
14:33 nigelb post-factum: weird, I see a bunch of heal related failures.
14:33 nigelb I don't see anything that I can jump in and fix.
14:34 nigelb I want to bring this up in Berlin. We can't be playing lottery with regression failures.
14:35 justinclift nigelb: With JJB, how are slave failures handled?  Or is JJB just the job config side of things, and the hands on fixing of failed bits still needs to be done manually?
14:36 justinclift Asking because Rackspace offered hosting to my main focus project (sqlitebrowser), and I've yet to really do anything with it.
14:36 nigelb justinclift: Think of JJB as the way to avoid the configure button on the job. It does nothing else.
14:36 justinclift Ugh
14:36 nigelb As you said, failure handling is still manual.
14:36 justinclift Thanks. :)
14:36 nigelb It's similar to ansible/puppet.
14:36 nigelb Your config is code.
14:36 justinclift Yep, got it.
14:37 justinclift Yeah, I'll avoid jenkins then.  That's not a pig to be married twice. ;)
14:37 nigelb Heh
14:37 nigelb If it's a trivial enough job for an opensource project, I'd go with travis-ci
14:38 justinclift Yeah, we use Travis for automatic testing of new commits, and it's working well with that
14:38 justinclift (on GitHub)
14:39 justinclift Ok, just received somewhat official info.  Apple doesn't do hardware sponsorship, period. ;)
14:39 justinclift misc: ^
14:39 * justinclift reached out unofficially
14:40 justinclift (but to people who know ;>)
14:40 hchiramm joined #gluster-dev
14:42 poornimag joined #gluster-dev
14:46 nigelb ndevos: so I've been seeing a lot of these https://build.gluster.org/job/centos6-regression/15/console
14:46 nigelb there's usually a slave20.cloud.gluster.org:d/ folder inside the checkout.
14:46 nigelb Are we doing something like that in our tests?
14:46 nigelb (creating that folder I mean)
14:47 ndevos nigelb: I guess that some files are owned by root:root, and the wipe-out runs as jenkins?
14:48 justinclift nigelb: Are you meaning the hostname it embedded in a path where it probably shouldn't be?
14:48 justinclift s/it/is/
14:49 * justinclift used to see things like that occasionally in jobs last year
14:49 justinclift Never found the root cause of it though
14:49 nigelb justinclift: yep
14:49 justinclift nigelb: I think it's a bug in something
14:49 nigelb ndevos: that's exactly what's happening
14:49 ndevos nigelb: the name of that directory is weird... but it looks a little like a path to a brick, but that should be <hostname>:/d
14:49 nigelb somewhere there's a bug
14:49 ndevos looks like it, yes
14:50 ndevos either in the qa scripts, or in the test-cases... I would look at the qa-scripts first, I've never seen it on any of my test machines
14:52 nigelb will do.
14:52 nigelb ndevos: wait, do you run your tests as root or your user?
14:53 ndevos nigelb: the tests always need to run as root
14:56 nigelb nothing in the qa scripts
14:56 nigelb so I have to look at the tests.
14:57 nigelb justinclift: thank you! I'm glad it's not an entirely new monster I'm hitting :)
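
(A hedged illustration of the failure mode just discussed, not the actual job configuration: the regression tests run as root and can leave a root-owned, brick-style directory such as slave20.cloud.gluster.org:d/ inside the jenkins-owned checkout, so the workspace wipe, which runs as the jenkins user, cannot remove it. One way a job could clear it before the normal cleanup:)

    # reclaim anything the root-run tests left behind, then clean as usual
    sudo chown -R jenkins:jenkins "$WORKSPACE"
    git clean -dfx          # untracked dirs are now deletable by the jenkins user
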
14:59 hagarth joined #gluster-dev
15:13 nigelb misc: huh, bugzilla-bot@gluster.org doesn't seem to forward to me.
15:27 kkeithley rewind....   from FreeBSD's sed man page:      -r      Same as -E for compatibility with GNU sed.
15:28 kkeithley sed on OS X doesn't have that.
15:29 misc nigelb: mhh, let me see
15:29 kkeithley so no, building on NetBSD and FreeBSD isn't a substitute for building on OS X
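
(To make the sed point concrete, a small sketch rather than the actual build-aux/xdrgen code: GNU sed and the seds on the project's FreeBSD/NetBSD builders accept both -r and -E for extended regular expressions (on GNU sed, -E was long undocumented but accepted), while the BSD sed shipped with OS X only accepts -E.)

    echo 'foo-bar' | sed -r 's/(foo)-(bar)/\2_\1/'   # works on the Linux/FreeBSD/NetBSD builders; fails on OS X
    echo 'foo-bar' | sed -E 's/(foo)-(bar)/\2_\1/'   # accepted everywhere of interest, so -E is the portable spelling
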
15:31 wushudoin joined #gluster-dev
15:31 nigelb kkeithley: Uh, but do we have users on OS X?
15:31 kkeithley dunno. Do we have users on FreeBSD?
15:32 nigelb let me rephrase.
15:32 kkeithley We do have patches for OS X rotting in gerrit
15:32 nigelb Does apple have a server product where people routinely deploy gluster or want to deploy gluster?
15:32 kkeithley Apple? Have a server product?  haha
15:32 nigelb misc: okay, I have mail. I guess you fixed something?
15:32 misc nigelb: nope
15:33 kkeithley Apple dropped the Xserve hardware product years ago.
15:33 misc nigelb: greylisting ?
15:33 nigelb Ooh. Maybe. I sent it from my domain.
15:33 nigelb But it also has DKIM and the whole shebang set up.
15:34 kkeithley There are a few people trying to use gluster on OS X though.  Is it a priority?  Dunno.  Harsha was trying to keep gluster portable to OS X. Until he left
15:34 nigelb I can see why it'd be nice to make sure developers can write code for gluster on OS X.
15:34 nigelb It makes no other sense (to me)
15:34 kkeithley I think it'd be kinda kewl to have gluster on iOS and be able to store things from my phone to a gluster in the cloud.
15:35 kkeithley Android too.
15:35 kkeithley even though I don't have an Android phone
15:36 nigelb So, the reason I ask.
15:36 nigelb Mozilla used to do Mac builds on Mac Minis
15:36 nigelb And they got rid of them.
15:36 misc how do hey build now ?
15:36 nigelb Because they were hard to maintain + automate.
15:36 nigelb They build on linux, sign on mac.
15:37 nigelb They're in the userspace though
15:37 misc so does gluster :)
15:37 kkeithley yeah, gluster is all user space too
15:38 nigelb Hrm, if you really want to do OS X builds, I'll talk to the mozilla folk who worked on the switch and see if we have any take aways.
15:39 hagarth ndevos: ping, around?
15:42 hagarth ndevos: never mind, inboxed you
15:46 justinclift nigelb: You're welcome
15:47 justinclift nigelb: You're thinking of gluster on OSX from the wrong end. ;)
15:48 justinclift nigelb: Having the gluster native client working on OSX is probably the most desired aspect
15:48 nigelb Ah
15:49 justinclift nigelb: eg film production houses (etc) who use OSX desktops, with their shared file assets on gluster servers running <whatever> .  RHEL, etc
15:50 justinclift nigelb: If there's a good take away from the Mozilla folks for getting OSX builds happening from Linux, it may turn out be useful
15:50 nigelb I'll start a conversation.
15:50 justinclift :)
15:57 msvbhat joined #gluster-dev
16:36 kkeithley nigelb, misc: is something going on with centos regressions? I've tried to retrigger from jenkins and gerrit and don't see any new regressions starting
16:37 nigelb kkeithley: which job are you looking at?
16:37 kkeithley dashboard says last build was 22910, finished 18 hours ago
16:37 nigelb https://build.gluster.org/job/centos6-regression/
16:37 kkeithley http://review.gluster.org/15182
16:38 nigelb I switched to this early this morning.
16:38 nigelb kkeithley: I see a job started for that.
16:39 kkeithley oh, okay. didn't realize there was a switch. I was looking at the old rackspace-regression
16:40 kkeithley how do we get these new ones into the gluster-only tab?
16:42 kkeithley well, maybe nm about the "gluster only" tab
16:42 skoduri joined #gluster-dev
16:43 nigelb kkeithley: I'll announce now.
16:43 nigelb I was waiting for a successful run before I did announce.
16:48 kkeithley you might be waiting a long time. ;-)
16:49 nigelb kkeithley: yeah, practically till evening.
17:23 msvbhat joined #gluster-dev
17:53 hagarth joined #gluster-dev
18:16 kkeithley nigelb: are netbsd regressions still running on netbsd6? Have we got a plan to update to netbsd7?
18:59 loadtheacc joined #gluster-dev
19:05 dlambrig left #gluster-dev
19:39 lpabon joined #gluster-dev
19:46 hagarth joined #gluster-dev
20:49 pranithk1 joined #gluster-dev
