IRC log for #gluster-dev, 2016-07-13


All times shown according to UTC.

Time Nick Message
00:13 hchiramm_ joined #gluster-dev
01:10 hchiramm_ joined #gluster-dev
01:19 shyam joined #gluster-dev
01:32 keiviw joined #gluster-dev
01:33 keiviw I have installed GlusterFS 3.7.13, and now I get fsync failures when saving files with the O_DIRECT flag in open() and create(). I tried to save a file in vi and got this error: test E667: Fsync failed
01:33 keiviw the mnt-test.log: [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 102:         FSYNC() ERR => -1 (Invalid argument)
01:33 keiviw [fuse-bridge.c:968:fuse_err_cbk] 0-glusterfs-fuse: 174:         FLUSH() ERR => -1 (Invalid argument)
01:33 keiviw the brick logs:    E [posix.c:2128:posix_writev] 0-test-posix: write failed: offset 0, Invalid argument
01:33 keiviw I [server3_1-fops.c:1414:server_writev_cbk] 0-test-server: 8569840: WRITEV 5 (526a3118-9994-429e-afc0-4aa063606bde) ==> -1 (Invalid argument)
01:33 keiviw I have checked the page alignment: the file was larger than one page, a part of the file (one page) was saved successfully, and the rest (more than one page but less than two pages) was lost
02:08 nishanth joined #gluster-dev
02:18 hagarth keiviw: how are you opening with O_DIRECT using vi?
02:20 hagarth keiviw: if you are passing O_DIRECT from gluster, have you ensured that all buffers passed to writev() calls are aligned properly?
02:21 hagarth keiviw: I suspect that a misaligned buffer is causing some writev calls to fail with Invalid argument
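
(A minimal illustration of the alignment constraint hagarth describes: under O_DIRECT, the write offset, length and user buffer generally have to be aligned to the block size, otherwise the kernel rejects the write with EINVAL. The mount path below is only an example.)

    # aligned length, a multiple of the block size: succeeds
    dd if=/dev/zero of=/mnt/test/aligned.bin bs=4096 count=2 oflag=direct
    # unaligned length: typically fails with "Invalid argument" (EINVAL),
    # the same error posix_writev reports on the brick above
    dd if=/dev/zero of=/mnt/test/unaligned.bin bs=5000 count=1 oflag=direct
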
02:22 keiviw I added the O_DIRECT flag in xlator/protocol/client/src/client.c open() and create()
02:26 keiviw Actually, if I wrote a long text and saved it, some of the text was saved successfully, and the part beyond one page was lost
02:27 keiviw And I have found that the size of test.txt is always 4kB, 8kB, ...
02:28 hagarth keiviw: I would recommend disabling write-behind if not already done so
02:28 keiviw So if I wrote a 5kb test.txt and saved, the real size of the test.txt was 4kb
02:30 keiviw I have tried to set flush-behind off, and the error still existed
02:31 hagarth keiviw: disable write-behind xlator
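
(For reference: flush-behind is only an option of the write-behind translator, so turning it off does not take write-behind itself out of the write path. A sketch of both settings, assuming a volume named "test":)

    gluster volume set test performance.flush-behind off   # what was tried above
    gluster volume set test performance.write-behind off   # what hagarth suggests
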
02:31 spalai joined #gluster-dev
02:31 hagarth keiviw: also make sure that _page_aligned_alloc() gets invoked for every write in posix
02:34 keiviw I set performance.write-behind off just now, and there was another error: test.txt E514: write error(file system full?)
02:36 hagarth keiviw: as I said before, do make sure that page aligned alloc is happening in posix
02:51 keiviw In other words, in GlusterFS 3.6/3.7, are there options to enable O_DIRECT in open() and create(), so the file will not be cached in the page cache?
02:54 ndk_ joined #gluster-dev
03:24 sanoj joined #gluster-dev
03:25 glustin joined #gluster-dev
03:26 magrawal joined #gluster-dev
03:29 keiviw_1 joined #gluster-dev
03:47 atinm joined #gluster-dev
03:50 hagarth keiviw_1: check strict-o-direct and also there is an option in posix xlator not exposed through gluster volume set CLI
03:50 hagarth keiviw_1: option remote-dio too
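
(A hedged sketch of the tunables hagarth mentions, again assuming a volume named "test". The posix-xlator option he refers to is presumably its o-direct setting, which is not exposed through the CLI and would have to be changed in the brick volfile.)

    gluster volume set test performance.strict-o-direct on   # make write-behind honour O_DIRECT
    gluster volume set test network.remote-dio disable       # do not strip O_DIRECT before it reaches the bricks
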
03:51 devyani7 joined #gluster-dev
04:01 atinm joined #gluster-dev
04:01 nbalacha joined #gluster-dev
04:05 poornimag joined #gluster-dev
04:19 shubhendu joined #gluster-dev
04:26 kotreshhr joined #gluster-dev
04:32 Manikandan joined #gluster-dev
04:33 pranithk1 joined #gluster-dev
04:50 spalai left #gluster-dev
04:57 hchiramm joined #gluster-dev
04:58 prasanth joined #gluster-dev
04:58 aravindavk joined #gluster-dev
05:01 jiffin joined #gluster-dev
05:04 aspandey joined #gluster-dev
05:08 nbalacha joined #gluster-dev
05:14 ndarshan joined #gluster-dev
05:14 Bhaskarakiran joined #gluster-dev
05:15 hchiramm joined #gluster-dev
05:17 karthik_ joined #gluster-dev
05:18 sakshi joined #gluster-dev
05:20 Bhaskarakiran joined #gluster-dev
05:20 nishanth joined #gluster-dev
05:25 _ndevos joined #gluster-dev
05:25 _ndevos joined #gluster-dev
05:28 ashiq joined #gluster-dev
05:32 pkalever joined #gluster-dev
05:33 nigelb poornimag: shall I remove the global environment variable in light of niels' comment on the review?
05:34 rafi joined #gluster-dev
05:36 penguinRaider joined #gluster-dev
05:40 mchangir joined #gluster-dev
05:41 Apeksha joined #gluster-dev
05:41 nbalacha joined #gluster-dev
05:42 poornimag nigelb, yup, you can remove that
05:46 poornimag nigelb, review.gluster.org is not responding/too slow
05:47 nigelb yeah, I'm on it.
05:47 nigelb There's already a bug.
05:48 nigelb I'm just trying to get a stack trace before I fix it.
05:48 poornimag nigelb, oh alright.
05:49 ramky joined #gluster-dev
05:54 skoduri joined #gluster-dev
06:07 hgowtham joined #gluster-dev
06:08 asengupt joined #gluster-dev
06:18 msvbhat joined #gluster-dev
06:19 sakshi joined #gluster-dev
06:25 nigelb (still debugging gerrit)
06:26 kshlm joined #gluster-dev
06:27 pur joined #gluster-dev
06:31 Saravanakmr joined #gluster-dev
06:32 nigelb kshlm: have you connected to formicary recently? Is it horribly slow for you?
06:33 kshlm nigelb, Not recently.
06:34 kshlm Just tried, really slow for me as well.
06:34 nigelb I'm glad it's not just me.
06:35 nigelb I can't even bring gerrit online again because of the slowness.
06:35 nigelb Could you try and bring it up?
06:36 kshlm Give me a minute
06:36 nigelb I'd be grateful if you can send me a screenshot of top.
06:37 kshlm Looks like a network issue near wherever formicary lives.
06:37 kdhananjay joined #gluster-dev
06:38 keiviw Are the dentries and inodes cached on the GlusterFS server even though the GlusterFS client (mounted via FUSE) already gets a metadata cache from the FUSE kernel module??
06:38 devyani7_ joined #gluster-dev
06:39 nigelb yep, that's my guess too.
06:40 kshlm nigelb, https://i.imgur.com/Knz1MCS.png
06:41 kshlm Something's wrong in RDU.
06:41 nigelb ahh
06:42 kshlm Where do you want the top output from?
06:42 kshlm dev.g.o or formicary?
06:43 nigelb dev
06:43 kshlm The issue seems to be fixed now.
06:44 nigelb oh good.
06:44 nigelb Did you start gerrit?
06:44 kshlm Yup.
06:44 nigelb Thanks
06:44 nigelb I stopped it for no reason.
06:44 kshlm The screenshots at http://i.imgur.com/DXJQuc2.png if you still need it.
06:44 nigelb I thought gerrit was causing the slowness.
06:45 nigelb And then the network issue meant I couldn't start it again.
06:47 nigelb kshlm: I still have a good amount of packet loss to RDU
06:47 nigelb so I suspect it's ongoing.
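
(One quick way to confirm which hop toward the RDU-hosted machines is dropping packets; the target host is only an example:)

    mtr --report --report-cycles 100 review.gluster.org
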
06:47 atinm well, it still doesn't work for me
06:47 kshlm It was temporarily good.
06:47 kshlm The loss is back for me as well.
06:48 nigelb Basically, it's not our fault.
06:48 nigelb We're affected by it.
06:48 kshlm But gerrit is being affected because of it. Isn't there anyone we could reach out to?
06:49 nigelb ah,
06:49 nigelb there's an email on the outage list.
06:50 kshlm That's from yesterday.
06:50 kshlm The packets don't seem to be having any trouble getting out of India
06:51 nigelb The email on the outage list is from 30 mins ago if I did my math correctly.
06:52 kshlm I don't see that? Are we looking at the same outage-list?
06:53 nigelb PMing you the link.
06:59 devyani7_ joined #gluster-dev
07:10 ppai joined #gluster-dev
07:13 nigelb kshlm: what's the title of the last email from outage-list you see?
07:14 nigelb misc: I didn't think you'd be awake. Or else I'd have pinged you directly :)
07:14 misc nigelb: I wasn't, i am still trying to find the only door of my room to go out
07:14 kshlm 'Network packet loss Incident | 12th July 2016 - 14:00 IST'
07:14 nigelb Heh :)
07:15 kshlm From comparing with the archives, I haven't received a few more after that, including the RDU outage.
07:15 misc network seems fine right now however
07:16 misc RHT, RDU2 etc Incident | 20160713 - 6:00 UTC
07:16 kshlm Seems fine from here as well now.
07:16 misc IT pushed the traffic to another vendor and waiting is the last status
07:16 nigelb when we looked, it had that status as well.
07:17 nigelb But we were seeing packet loss
07:18 misc yeah, I am asking to them
07:19 nigelb I'm going to break for lunch.
07:20 karthik_ joined #gluster-dev
07:21 Saravanakmr joined #gluster-dev
07:25 _ndevos kdhananjay: hmm, care to check up on https://twitter.com/Djelibeybi/status/751323402107957248?s=03 ?
07:26 kdhananjay ndevos: will do.
07:26 ndevos kdhananjay++ thank you!
07:26 glusterbot ndevos: kdhananjay's karma is now 21
07:28 kdhananjay ndevos: I'd been meaning to ask you this: suppose a user raised a bug against version 3.7.x and he reports back saying it is fixed in 3.7.y. So what happens to the bug? It is right now in assigned state - https://bugzilla.redhat.com/show_bug.cgi?id=1318136
07:28 glusterbot Bug 1318136: high, unspecified, ---, kdhananj, ASSIGNED , Distributed-Replicate sharding corrupt VMs in ESXi
07:29 kdhananjay ndevos: i mean, i need to know when i should close it.
07:29 ndevos kdhananjay: we can close it, just set the "fixed in version" to glusterfs-3.7.y and mention the version in a note too
07:30 kdhananjay ndevos: perfect. thanks! will do that.
07:30 ndevos kdhananjay: an example would be https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.10#c4
07:30 glusterbot Bug glusterfs: could not be retrieved: InvalidBugId
07:31 kdhananjay ndevos: ah! got it. thanks.
07:31 kdhananjay ndevos++
07:31 glusterbot kdhananjay: ndevos's karma is now 286
07:34 misc nigelb, kshlm: answer from IT: "bgp take time to propagate", so that's why it took time
07:34 misc (and now, I can go to breakfast)
07:49 aspandey joined #gluster-dev
08:36 nigelb misc: I still have a good deal of packet loss, but it's beyond us to do anything about it.
08:46 misc nigelb: from where ?
08:46 misc mhh, I do from my 3g phone
08:46 misc it was fine from the same provider at home :/
08:48 nigelb I'm on 4g, so yeah.
08:48 kshlm It's the same from BLR.
08:49 kshlm The situation has regressed again.
08:49 Muthu joined #gluster-dev
08:50 misc still on the twtelecom.net router ?
08:50 kshlm Yup.
08:52 misc mhh, I guess noc is already aware, i will see once I am in the office
08:56 post-factum same from UA
09:06 atalur joined #gluster-dev
09:16 Saravanakmr joined #gluster-dev
09:23 karthik_ joined #gluster-dev
09:25 post-factum question about sys_rename called in statedump code path
09:25 post-factum https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/statedump.c#L864
09:25 Manikandan Muthu++, thanks for your first patch ;-)
09:25 glusterbot Manikandan: Muthu's karma is now 1
09:25 rastar joined #gluster-dev
09:25 post-factum shouldn't that be hidden under the if clause? strace shows me that rename is called even if open() fails
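
(A hedged sketch of how that can be observed: trace the brick process while triggering a statedump. The volume name is an assumption, and pidof may return several brick PIDs.)

    strace -f -e trace=open,rename -p "$(pidof glusterfsd)" &
    gluster volume statedump test      # or: kill -USR1 <glusterfsd pid>
    # if the open() of the dump file fails, the rename() of the temporary
    # dump file still shows up in the trace, which is the concern above
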
09:29 nigelb misc: I'm going to decommission sonarqube today. Does it need to be taken off salt/ansible/monitoring?
09:34 misc nigelb: nope
09:59 mchangir joined #gluster-dev
10:06 misc python[20913] trap invalid opcode ip:7fbab7f26d60 sp:7ffe3773d6f8 error:0 in libfreeblpriv3.so[7fbab7ed4000+72000]
10:06 misc mhhh
10:06 misc on slave25.cloud
10:13 msvbhat joined #gluster-dev
10:24 misc nigelb: also, as said on internal IRC, I pinged the on-call guy from NOC again to take a look at why it didn't switch properly
10:25 nigelb misc: Thanks
10:25 nigelb ah, I got kicked off VPN
10:33 kshlm joined #gluster-dev
10:35 nbalacha joined #gluster-dev
10:40 ira joined #gluster-dev
10:51 Manikandan joined #gluster-dev
10:54 msvbhat joined #gluster-dev
10:57 anoopcs ndevos, Don't we need to fix the documentation regarding the packages for Red Hat/CentOS? http://gluster.readthedocs.io/en/latest/Install-Guide/Install/#for-red-hatcentos
11:00 karthik_ joined #gluster-dev
11:12 kshlm joined #gluster-dev
11:23 prasanth joined #gluster-dev
11:26 Manikandan joined #gluster-dev
11:30 ndevos hi Manikandan, how about updating https://github.com/gluster/glusterdocs/pull/127 ?
11:31 Manikandan ndevos, sure, in a while
11:31 ndevos Manikandan: no need to rush :)
11:31 ndevos anoopcs: yes, probably, send a patch that points to https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
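
(For context, the quickstart that patch would point to boils down to roughly the following on CentOS 7; the exact release package name current at the time is an assumption:)

    yum install centos-release-gluster38   # enables the Storage SIG repository
    yum install glusterfs-server
    systemctl start glusterd
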
11:32 kkeithley Gluster Community Meeting starts in 30min in #gluster-meeting
11:32 ndevos kkeithley: I'll be in a meeting at that time :-/
11:33 kkeithley ndevos: do you have any action items? Or status on 3.8 you want to share?
11:34 ndevos kkeithley: no, I dont think there is anything specific
11:34 kkeithley okay
11:35 jiffin joined #gluster-dev
11:37 devyani7_ joined #gluster-dev
11:51 ppai joined #gluster-dev
11:51 nigelb Oh joy.
11:52 nigelb At the end of today, Jenkins is going to have a tough time catching up on all the jobs.
11:54 pur joined #gluster-dev
11:54 kkeithley nigelb: bet it's all caught up by Monday. ;-)
11:56 rraja joined #gluster-dev
11:57 Manikandan ndevos, thanks for reminding :-)
11:57 kkeithley nigelb, misc: fyi, I'm adding an Infrastructure status topic to the community meeting agenda.
11:58 ndevos Manikandan: I only noticed it while checking if anoopcs already send a patch for the docs :)
11:58 pur joined #gluster-dev
11:58 kaushal_ joined #gluster-dev
11:59 pur joined #gluster-dev
11:59 kkeithley Gluster Community Meeting starts now in #gluster-meeting
12:00 nigelb kkeithley: ha, I wanted to bring that up at some point :)
12:00 kkeithley good
12:01 nigelb There's one thing I can bring up that'd be interesting to community :)
12:01 nigelb other than the outage.
12:13 nbalacha joined #gluster-dev
12:17 nbalacha https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/17995/console has been running for 7+ hours
12:18 nbalacha seems to be hung in: 21:36:04 + /opt/qa/build.sh
12:18 nbalacha can someone abort this regression?
12:19 Saravanakmr joined #gluster-dev
12:19 nbalacha ditto for https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/17996/
12:28 mchangir joined #gluster-dev
12:28 pranithk1 nbalacha: on it
12:31 nbalacha pranithk1, thanks
12:31 pranithk1 nbalacha: np
12:32 nbalacha pranithk1, any idea why it happened? And can I trigger the job again?
12:32 pranithk1 nbalacha: doing recheck netbsd should do
12:32 kkeithley ira, obnox: any status to report on Samba in the community meeting?  Join us in #gluster-meeting
12:32 pranithk1 nbalacha: Not sure why there was a hang though, didn't check
12:33 nbalacha pranithk1, ok. will trigger it again and hope for the best
12:35 kkeithley rabhat, rtalur: any status to report on 3.6?  Join us in #gluster-meeting
12:36 ppai joined #gluster-dev
12:37 kkeithley rastar: ^^^
12:38 kkeithley nigelb: you're up in #gluster-meeting
12:46 nbalacha nigelb, netbsd runs are failing
12:46 nbalacha nigelb, can you take a look?
12:46 nigelb nbalacha: Not touching anything until we have network connectivity.
12:46 nigelb Because I don't know why somethings fail yet.
12:47 nbalacha nigelb, ok. seems to be network related anyway.
12:48 nigelb basically we have to write everything off today.
12:48 nigelb when network is back, we'll retry and get everything sorted.
12:51 nigelb rastar - Can you file a bug about the user email thing?
12:52 rastar nigelb: user email? I did not get the context
12:52 nigelb Off this one -> #action kshlm, csim to chat with nigelb about setting up faux/pseudo user email for gerrit, bugzilla, github
12:53 post-factum skoduri: any luck with reproducing brick crashes with TCP probing and/or volume status?
12:53 rastar nigelb: I would prefer if kshlm did it as he has more context. I don't know the use case or why they wanted it
12:53 skoduri post-factum, I did test for multiple iterations with 3.7.12 branch...but hadn't hit any crash
12:53 nigelb okay.
12:53 nigelb I'll send him a note.
12:54 skoduri post-factum, later I wanted to apply the same patches which you had..but just hadn't got chance to do it
12:54 post-factum skoduri: ah, ok, let me know. but i doubt that will help as I tried to revert them.
12:55 skoduri post-factum, right... but I will give a try probably tmrw and let you know
12:56 post-factum skoduri: ok. if no luck, i hope we will have some time to revise the steps to reproduce the issue
12:56 skoduri post-factum, sure
13:00 post-factum kkeithley++
13:00 glusterbot post-factum: kkeithley's karma is now 132
13:01 nigelb Thanks kkeithley :)
13:01 skoduri kkeithley++
13:01 glusterbot skoduri: kkeithley's karma is now 133
13:03 kshlm joined #gluster-dev
13:10 ashiq joined #gluster-dev
13:15 kkeithley ndevos: the question I should have posed is "should comments added to bugzilla by gerrit use bugs@gluster.org instead of vbellur@...."
13:15 kkeithley in response to q about setting up faux/pseudo accounts for bugzilla/gerrit/jenkins
13:16 kkeithley nigelb: what questions do you have about the pseudo accounts for bugzilla/gerrit/jenkins?
13:16 ndevos kkeithley: yes, of course bugs@gluster.org should be used for that, and I've mentioned that several times already :)
13:16 kkeithley indeed. ;-)
13:16 misc ndevos: just to be sure, bugs.cloud.gluster.org is still used ?
13:17 ndevos misc: yes, it is, and the website should get the latest data every night
13:17 kkeithley nigelb: what would it take to fix up whatever script it is that is adding comments to bugzilla? Change it from saying the comment is added by vbellur@...  to bugs@gluster.org
13:17 misc ndevos: ok, trying to move it to ansible, if you want to do a review of the ansible code (or nigelb)
13:18 misc I am trying to figure a proper workflow, not sure if patch by email or PR for now
13:18 ndevos misc: ah, so thats why I get a 403 when going to the site?
13:18 misc ndevos: mhh
13:18 misc ndevos: let me see
13:18 kkeithley what is that site for?
13:18 * ndevos frowns at kkeithley
13:19 misc ndevos: so part of the process is "turning selinux on", and I think that's the issue. I will fix
13:19 kkeithley oops, the frown of discomfort.  Not as fearsome as the glare of discomfort, but...
13:19 ndevos kkeithley: it lists all open bugs against gluster, with the (lack of) status of the patches that have been posted/abandoned
13:20 ndevos misc: hmm, not sure why selinux would have been off in the 1st place?
13:21 kkeithley so who should I volunteer to host next week's community meeting?
13:22 ndevos anyone but me?
13:22 kkeithley yeah, I don't want to pick on the Good Guys who always volunteer.
13:22 kkeithley time for some new blood
13:23 * kkeithley looks around for his vampire teeth
13:24 * kkeithley is annoyed by regression tests that pass on his machine but fail in jenkins
13:26 misc ndevos: it is turned off by default on rackspace, so it's an easy mistake to assume it is on when you take a Certified Trusted and Tried Industrial OS :p
13:30 poornimag joined #gluster-dev
13:30 ndevos misc: oh, that's ugly!
13:30 misc ndevos: at least, now on RHEL 7, it's permissive; it was disabled before
13:31 misc (I suspect my complaining to the right person inside RH to make Rackspace respect their agreement to be a certified cloud provider did help, even if I think they still don't respect it)
13:40 ndevos misc: anyway, that system has the website, and a cron-job, not much more I think
13:41 ndevos misc: oh, there is a cronjob in my user account too, sending the email (to me) for "incorrect bug status" every week
13:43 misc ndevos: ok, I didn't see the 2nd cron job
13:45 ndevos misc: you're welcome to configure it to send the email directly to gluster-devel, but it needs an address that the ml accepts
13:52 atalur joined #gluster-dev
13:53 jdarcy joined #gluster-dev
13:53 jdarcy We seem to have a persistent problem with NetBSD test failures leaving the machine in a state where the next run will hang in unmount.
13:54 jdarcy Logging in and killing perfused seems to get things going again.
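
(A hedged sketch of the manual recovery jdarcy describes; the slave hostname and mount point are placeholders:)

    ssh <netbsd-slave>
    pkill perfused              # kill the stale PUFFS/FUSE daemon
    umount -f /mnt/glusterfs    # clean up the hung mount so the next run can proceed
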
14:03 kshlm joined #gluster-dev
14:04 kkeithley misc,nigelb: all these regressions failing because "slave going offline" — due to the network issues?
14:05 misc kkeithley: likely, yes
14:05 kkeithley aren't all the slaves in the same rack, on the same switch with the master? Or are these rackspace slaves?
14:06 misc no, that's the rdu <=> rackspace link that caused trouble
14:06 misc basically, all outgoing jenkins/gerrit connections
14:09 nigelb kkeithley: My question was around what it's supposed to do.
14:09 nigelb kkeithley: If there's a jenkins to slave problem or problem with cloning
14:09 nigelb that's all part of the network outage.
14:09 nigelb jenkins master and gerrit live on the same host
14:10 shyam1 joined #gluster-dev
14:10 nigelb BUt everything else is on rackspace.
14:11 misc point 1 of https://en.wikipedia.org/wiki/Seven_Fallacies_of_Distributed_Computing :p
14:13 kkeithley I don't think it's supposed to do anything per se. Excepting maybe be able to accept/forward acknowledgement emails to someone who can act on them. For anything that might need that.
14:14 nigelb we want an alias on @gluster.org where we can forward emails?
14:15 Manikandan joined #gluster-dev
14:18 msvbhat joined #gluster-dev
14:18 kkeithley yes.  E.g. if we create an account on bugs.redhat.com for Josephus Ant, so that Josephus can add comments to BZs (instead of hagarth) every time someone posts a patch for a bug.
14:18 kkeithley bugs.gluster.org is going to send a confirmation email.
14:19 kkeithley Someone needs to actually receive that email and respond.
14:19 pkalever left #gluster-dev
14:21 nigelb I didn't even know we had a bugs.gluster.org.
14:21 kkeithley but we probably don't want the forwarding to be always on.
14:21 kkeithley bugs@gluster.org?
14:23 nigelb Okay, I still don't understand what we're trying to solve. But it's also 8 pm here.
14:23 nigelb I'll take a second look when I wake up.
14:24 kkeithley look at a gluster BZ.  e.g. https://bugzilla.redhat.com/show_bug.cgi?id=1350793
14:24 glusterbot Bug 1350793: unspecified, unspecified, ---, kkeithle, MODIFIED , build: remove absolute paths from glusterfs spec file
14:25 kkeithley Comment 1 and Comment 2 were added by gerrit when a patch was posted.
14:25 nigelb AHHHH.
14:25 nigelb That needs to be a generic account instead?
14:25 kkeithley ideally yes. IMO
14:25 nigelb I agree with this idea.
14:26 nigelb But I think we may get blocked until the bugzilla ops team figures out their permissions reorganization.
14:26 kkeithley get some sleep
14:26 nigelb I need to wake up some netbsd machines first.
14:26 nigelb Or else we're going to run a nice backlog.
14:28 nigelb misc: If you need something ansible reviewed, assign it to me. I can look.
14:28 nigelb oh wow, jenkins is doing a decent job on its own.
14:29 misc nigelb: I just did put you in CC
14:30 misc that's still a draft, but as I am about to leave to buy food, I'd rather push now
14:32 aravindavk joined #gluster-dev
14:32 nigelb ndevos / kkeithley - so the issue with the compare-bug-and-branch job the day before yesterday is an issue across all the JJB jobs.
14:32 nigelb I "fixed" the issue.
14:32 nigelb This means if we broke anything in the rpm jobs recently, it'll bubble up now.
14:33 nigelb so if you see a lot of bustage in rpm jobs, the reason is that I broke the jenkins job and I've only fixed it today.
14:33 kkeithley Oh, okay. Maybe that's why my recent patches failed the rpm jobs
14:33 kkeithley s/jobs/tests/
14:33 nigelb I'll send a mail to devel.
14:33 kkeithley for no apparent reason
14:33 nigelb someone probably has broken it, and we'll have to dissect to find who.
14:34 kkeithley ??
14:34 nigelb this was never fun at mozilla and i bet it's not going to be fun now. But hey, at least it's easy to bisect in git.
14:34 nigelb I'll see if I can get a box running with mock separately and find out who broke it.
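
(A hedged sketch of the bisect nigelb mentions; the known-good revision and the build command run at each step are assumptions, not the actual rpm smoke job:)

    git bisect start
    git bisect bad HEAD
    git bisect good <last-known-good-commit>
    # git now checks out a revision; run the same mock/rpm build the smoke
    # job runs, then mark the result and repeat until the culprit is found:
    git bisect good    # or: git bisect bad
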
14:34 * kkeithley thinks "fix the problem, not the blame"
14:35 nigelb s/find out who/find out what/
14:35 nigelb yes
14:35 misc so we need regression testing for the regression tests :) ?
14:36 nigelb nah, rpm is just a smoke test :)
14:36 misc do I need to call Leonardo DiCaprio to tell him "we are doing inception 2"
14:36 nigelb lol
14:36 misc "jenkinception"
14:36 nigelb I briefly considered that for distaf :P
14:36 kkeithley how many levels can _you_ do?
14:36 misc it is always time to change
14:36 nigelb in fact, that's how you run distaf. Assign the job to a "node".
14:37 nigelb And the "node" spawns 4 other nodes and tests inside them.
14:37 nigelb anyway, time to spend some time away from the computer.
14:37 misc I still think that inception is virtualisation in reverse, since the deeper they go, the faster it gets
14:37 kkeithley It's Beer:30
14:38 jiffin joined #gluster-dev
14:39 kkeithley That's just a perception problem
14:39 kkeithley It only seems faster
14:41 sanoj joined #gluster-dev
14:57 spalai joined #gluster-dev
14:58 pranithk1 joined #gluster-dev
15:02 dlambrig_ joined #gluster-dev
15:04 atinm joined #gluster-dev
15:04 wushudoin joined #gluster-dev
15:06 dlambrig_ joined #gluster-dev
15:14 mchangir joined #gluster-dev
15:15 dlambrig_ joined #gluster-dev
15:17 dlambrig_ left #gluster-dev
15:21 ramky joined #gluster-dev
15:34 pkalever joined #gluster-dev
15:54 skoduri joined #gluster-dev
16:08 ramky joined #gluster-dev
16:19 ashiq joined #gluster-dev
16:24 Apeksha joined #gluster-dev
16:53 shaunm joined #gluster-dev
16:53 nishanth joined #gluster-dev
16:56 shubhendu joined #gluster-dev
17:09 ashiq joined #gluster-dev
17:13 lpabon joined #gluster-dev
17:23 ashiq_ joined #gluster-dev
17:46 spalai left #gluster-dev
18:12 rafi joined #gluster-dev
18:32 nishanth joined #gluster-dev
18:48 rafi joined #gluster-dev
18:48 jiffin joined #gluster-dev
18:54 rafi joined #gluster-dev
20:48 shyam joined #gluster-dev
21:15 sankarshan_away joined #gluster-dev
23:32 glustin joined #gluster-dev
