IRC log for #gluster-dev, 2014-03-28


All times shown according to UTC.

Time Nick Message
00:08 lkoranda joined #gluster-dev
00:54 badone joined #gluster-dev
01:21 bala joined #gluster-dev
01:45 tdasilva left #gluster-dev
02:20 tg2 Can has tiered storage pls
02:46 bharata-rao joined #gluster-dev
03:21 bala joined #gluster-dev
03:29 shubhendu joined #gluster-dev
03:45 aravindavk joined #gluster-dev
03:46 itisravi joined #gluster-dev
03:52 kanagaraj joined #gluster-dev
03:52 itisravi joined #gluster-dev
04:07 spandit joined #gluster-dev
04:13 hagarth joined #gluster-dev
04:16 ndarshan joined #gluster-dev
04:47 ppai joined #gluster-dev
04:52 mohankumar joined #gluster-dev
04:55 kdhananjay joined #gluster-dev
04:59 nishanth joined #gluster-dev
05:00 purpleidea @later tell lalatenduM hey lala, i can't edit http://wiki.centos.org/SpecialInterestGroup/Storage/Proposal but i should somehow link that to the RPM's i made, please let me know. hope it helps for your storage sig.
05:00 glusterbot purpleidea: The operation succeeded.
05:00 purpleidea @later tell lalatenduM hey lala, i can't edit http://wiki.centos.org/SpecialInterestGroup/Storage/Proposal but if i should somehow link that to the RPM's i made, please let me know. hope it helps for your storage sig.
05:00 glusterbot purpleidea: The operation succeeded.
05:17 sahina joined #gluster-dev
05:19 deepakcs joined #gluster-dev
05:38 hagarth joined #gluster-dev
05:50 aravindavk joined #gluster-dev
05:52 edward1 joined #gluster-dev
05:52 lalatenduM joined #gluster-dev
06:12 hchiramm_ joined #gluster-dev
06:29 bala joined #gluster-dev
06:44 hchiramm_ joined #gluster-dev
06:52 hagarth joined #gluster-dev
09:02 lalatenduM joined #gluster-dev
09:33 mohankumar joined #gluster-dev
10:13 Frankl joined #gluster-dev
10:52 jclift purpleidea: No account on wiki.centos.org?
11:07 lpabon joined #gluster-dev
11:20 lalatenduM joined #gluster-dev
11:29 * jclift wonders what the practical host memory requirements are for Gluster.  The smoke tests are failing with master branch on Rackspace 512MB instances.  But inconsistently as to which test fails.
11:29 hagarth joined #gluster-dev
11:29 jclift Going to try bigger instances first.
11:29 jclift It might not be the memory size... not sure.
11:30 jclift Could also be related to the apparent memory bug recently mentioned on the mailing lists.
11:32 jclift hagarth: Any objection to having build.gluster.org's /opt/qa stuff hooked up to the git repo?
11:32 jclift hagarth: Such that any changes in the git repo get pulled into /opt/qa/ on build.gluster.org
11:39 kkeithley purpleidea: cool, I'll take a look
11:51 jclift k, smoke tests aren't immediately barfing on the 1GB instances.  Sounds like 512MB isn't enough.
12:17 kkeithley purpleidea: even better than a SHA256SUM for the RPM and src.rpm would be to sign them with your gpg key.
12:22 kkeithley purpleidea: even better than a SHA256SUM and SHA256SUM.asc files for the RPM and src.rpm would be to just sign them with your gpg key.
12:32 itisravi joined #gluster-dev
12:41 tdasilva joined #gluster-dev
12:47 bala joined #gluster-dev
12:50 hagarth joined #gluster-dev
13:16 hagarth joined #gluster-dev
13:41 jclift Interesting.  At least some of the regression tests depend on the underlying filesystem.
13:42 jclift ext3 (default for the boot volume of rackspace instances) causes failure in lots of places
13:42 jclift xfs so far is working ok (still early in a regression run, but no failures so far)
13:42 jclift ext4 should prob work too
13:44 jclift (but will find out)
13:44 jclift Hmmm, spoke too soon.  Failures in mount.t
13:48 Frankl joined #gluster-dev
14:11 itisravi joined #gluster-dev
14:17 jclift Updated kernel seems to fix the problem
14:17 wushudoin joined #gluster-dev
14:20 kkeithley I think there's a mount option to enable xattrs on ext3?
14:23 lalatenduM johnmark, ping
14:23 purpleidea jclift: i have an account, but it didn't let me edit it.
14:24 purpleidea jclift: 512mb is what i use for all my gluster tests.
14:25 purpleidea kkeithley: ah, good idea. what signing command should i do instead? (it's in the Makefile)
14:26 kkeithley rpmsign --addsign. You just need some extra config in your .rpmmacros file
14:27 kkeithley Here's what mine looks like (/me breaks rules about pasting)
14:27 kkeithley %_signature gpg
14:27 kkeithley %_gpg_path  /home/kkeithle/.gnupg
14:27 kkeithley %_gpg_name kkeithle@redhat.com
14:28 kkeithley oh, look there's where %_topdir is defined to ~/rpmbuild too. I'd never noticed that before
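[Editor's note] The macros Kaleb pasted above live in ~/.rpmmacros; here is the same fragment assembled into a usable form, with the rpmsign invocation he named. The package file names are illustrative, and the path/key values are his, so substitute your own:

```shell
# Append the signing macros to ~/.rpmmacros (use your own gpg path and key name):
cat >> ~/.rpmmacros <<'EOF'
%_signature gpg
%_gpg_path  /home/kkeithle/.gnupg
%_gpg_name kkeithle@redhat.com
EOF

# Then sign the built packages in place (hypothetical file names):
rpmsign --addsign glusterfs-*.rpm glusterfs-*.src.rpm
```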
14:32 johnmark lalatenduM: howdy
14:32 johnmark lalatenduM: pong
14:33 lalatenduM johnmark, I am good :)
14:33 kkeithley ext3, -o user_xattr to enable extended user attributes
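[Editor's note] kkeithley's user_xattr hint, spelled out as it would be used. Device and mountpoint are hypothetical; note this enables user.* extended attributes on ext3 (trusted.* xattrs, which gluster bricks use internally, are available regardless):

```shell
# One-off remount with extended user attributes enabled:
mount -o remount,user_xattr /dev/xvda1 /

# Or persistently, via the options field in /etc/fstab:
#   /dev/xvda1  /  ext3  defaults,user_xattr  1 1
```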
14:33 lalatenduM johnmark, wanted to talk to you about "Data Liberate hackathon"
14:34 lalatenduM johnmark, do u already have specific plan for the SIG stuff thr?
14:35 johnmark lalatenduM: sort of :)
14:35 johnmark I am hoping some packaging experts will be on hand to help
14:35 johnmark *cough*kaleb*cough*
14:35 johnmark kkeithley: you'll be there, right?
14:36 kkeithley at Summit, funny you should ask, just this minute I started to compose email to Ric
14:36 lalatenduM johnmark, but we need to give information about which packages we are looking for. I am trying to collect this info, but if you already have it that's awesome :)
14:36 kkeithley and don't be looking at me as a packaging expert. ;-)
14:36 johnmark lol
14:37 johnmark lalatenduM: I could use your help getting that information together
14:37 lalatenduM kkeithley, even I consider you a packaging expert :)
14:37 johnmark and preparing for such an event
14:37 johnmark kkeithley: you're cursed!
14:37 lalatenduM johnmark,  yeah lets do a hangout early next week to talk about these
14:37 johnmark kkeithley: you can never shed that title
14:37 johnmark lalatenduM: cool! that sounds great
14:38 kkeithley that's me, the one guy who'll do whatever it takes. No glory in it though
14:38 johnmark kkeithley: I'll give you a gold star
14:38 lalatenduM johnmark,   am going to spend some time to install all required packages on centos 6.5 (manually) and see if there is some conflict.
14:38 johnmark post your pic on Gluster.org - you'll be (in)famous!
14:39 kkeithley That and $2.43 gets me a tall starbucks with a shot of syrup
14:39 johnmark lalatenduM: ah, that sounds like a good first step
14:39 johnmark kkeithley: :)
14:39 lalatenduM johnmark, kkeithley but we need to build libvirt packages with required patches for sure :0
14:40 kkeithley lol lol lol especially after my post to gluster-users
14:40 kkeithley no good deed goes unpunished
14:40 lalatenduM haha , LOL :)
14:41 lalatenduM kkeithley, I want to do it, but need your help
14:41 kkeithley always happy to help
14:42 lalatenduM kkeithley, whats the first step
14:43 kkeithley I'd start with the libvirt spec(s) in Fedora.
14:45 kkeithley does the fedora libvirt have everything that's needed in it?
14:45 lalatenduM kkeithley, cool, will do
14:49 lalatenduM kkeithley, I didn't see your msg about F20, and asked the same on the mail thread. I am going to check and find out
14:51 kkeithley message about F20?
14:53 lalatenduM yeah
14:55 kkeithley what message about F20?
14:56 jclift purpleidea: Do you want me to ping one of the CentOS guys, and see if there's something they need to do to your account to allow editing that page?
14:56 lalatenduM kkeithley, "does the fedora libvirt have everything that's needed in it?"
14:57 kkeithley oh, that message
14:57 lalatenduM jclift, I have talked to purpleidea abt it
14:58 lalatenduM jclift, there is no page present currently to list rpms and their source, I need to work with KB to create one
14:58 purpleidea jclift: don't think it's necessary, thanks :)
14:59 jclift np :)
14:59 lalatenduM jclift, purpleidea actually the SIG meeting is scheduled on 4th April, I think we will get more info after that
14:59 purpleidea kkeithley: cool, i'll try this
15:10 kkeithley johnmark: booked my flight
15:10 lpabon Quick question -- I'm trying to run one of the scripts run by the Jenkins smoke tests on my VM.  But I get this message:  gluster --mode=script volume create patchy replica 2 centos.localdomain.com:/build/export/export1 centos.localdomain.com:/build/export/export2 centos.localdomain.com:/build/export/export3 centos.localdomain.com:/build/export/export4
15:10 lpabon volume create: patchy: failed: The brick centos.localdomain.com:/build/export/export1 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
15:11 lpabon anyone know why it works on the build server but not on my VM?  Build server is CentOS 6.3 my VM is CentOS 6.5
15:13 lpabon <crickets...>  :-)
15:14 jclift Sure
15:14 jclift On the build server /d is not on the root partition
15:14 lalatenduM lpabon, I had also seen this on Fedora, I think I had made another partition for the bricks
15:14 jclift lpabon: Do a "df -h" on the build server
15:14 lalatenduM jclift, you beat me :)
15:14 lpabon aaah!,
15:15 lalatenduM lpabon, yup I made /d a separate partition
15:15 jclift Btw, I'm updating the git repo version of the build scripts to add "force" on the end of that
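[Editor's note] For reference, the failing command from lpabon's paste succeeds once 'force' is appended, which is what jclift's script change does (hostnames and paths as in the paste above):

```shell
gluster --mode=script volume create patchy replica 2 \
    centos.localdomain.com:/build/export/export1 \
    centos.localdomain.com:/build/export/export2 \
    centos.localdomain.com:/build/export/export3 \
    centos.localdomain.com:/build/export/export4 force
```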
15:15 lpabon lol
15:15 jclift lalatenduM: Later on today I'll adjust build.gluster.org so that it pulls updated /opt/qa/ directory from the repo.
15:16 jclift lalatenduM: That'll mean we can update things in the repo, and build.gluster.org will automatically pick them up
15:16 lalatenduM jclift, I think without force is fine
15:16 lpabon What I *really* tried to do first is to see if I could create a "test" directory in the same directory as the source.. something like ${SRC}/.test... but ..
15:16 jclift lalatenduM: What about when people _need_ the test scripts to mount in the root partition?
15:16 lalatenduM jclift, cool
15:16 lpabon I found that even if I install there, '/var/lib/gluster' is hardcoded in the C code, so it does not look for state in ${SRC}/.test/install/var/lib/...
15:17 jclift lpabon: Are you building with /opt/qa/build.sh ?
15:17 lpabon jclift: yep
15:17 lalatenduM jclift, if people use root partition, tests will fail for file systems other than ext4, xfs, btrfs
15:17 lpabon i would like at some point for the tests to be part of the repo so that developers test their code before submitting
15:17 lalatenduM jclift, I mean might fail
15:18 jclift lalatenduM: Yeah, ext3 definitely doesn't seem happy ;)
15:18 lpabon ..but the way the code and tests.t are now, that does not seem possible
15:18 lalatenduM jclift, yup, anything that does not support xattrs
15:18 jclift That's the default on the 512MB cloud server instances on rackspace
15:19 jclift I think maybe I'll need to experiment with creating an xfs filesystem in a block device mounted somewhere in the root partition then.  Non-optimal, but it'll prob work
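[Editor's note] jclift's "xfs filesystem in a block device mounted somewhere in the root partition" idea can be sketched with a loopback file. Sizes and paths are illustrative; this needs root and xfsprogs:

```shell
# Create a file-backed image inside the root partition and format it as XFS:
truncate -s 10G /var/xfs-brick.img
mkfs.xfs /var/xfs-brick.img

# Loop-mount it where the test scripts expect their brick directory:
mkdir -p /d
mount -o loop /var/xfs-brick.img /d
```

Non-optimal, as he says, but it gives the regression tests an xattr-capable filesystem without repartitioning the instance.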
15:19 lalatenduM jclift, lpabon I think I am missing something here, isn't the tests part of the code now?
15:19 kkeithley johnmark: oops, okay, now I really booked the flight. Now I need a hotel and summit registration
15:19 lpabon no not at all
15:19 jclift lalatenduM: The tests/ subdirectory is definitely part of the code
15:19 kkeithley fly in Sunday, out Thursday night.
15:19 jclift lalatenduM: The /opt/qa/ stuff isn't
15:19 lpabon the regression, smoke, and build shell scripts are in /opt/qa of the build VM
15:19 kkeithley Thursday night on red eye
15:20 lalatenduM lpabon, jclift got it
15:20 jclift lalatenduM: https://forge.gluster.org/gluster-patch-acceptance-tests
15:20 lpabon One more thing, I have *never* been able to successfully run run_tests.sh from the repo :-)
15:21 lalatenduM jclift, checking
15:21 lpabon at some point, I would really like for glusterfs 1 node regression to be able to not need root access... just a dream I have :-) .. maybe using libgfapi instead of FUSE
15:22 jclift lpabon: Yeah, I used to have the same problem.  Finally cracked the shits when working on my glupy stuff, so went through and tried to figure + fix every failure I was hitting.
15:23 lpabon jclift: lol, i pity you ;-)
15:23 jclift lpabon: Managed to get every one (that I keep hitting) sorted, except for a quota.t failure
15:23 jclift lpabon: The quota.t one now has a BZ, Varun knows about it, and said he'll submit a patch for it today
15:23 lpabon awesome
15:24 jclift lpabon: What are the failures which you see?  fpaste?
15:24 lpabon sure, but first... i don't see smoke.sh needing '/d' .. it uses '/build'  , right?
15:25 jclift ... every time I have to look into regression test failures, I wish there was a simple option we could turn on where the regression test harness would capture stdout + stderr to files, so we could see what the fuck the failure output is
15:25 lpabon lol! /build -> /d/build .. got it
15:25 jclift yeah
15:25 lpabon doh! .. let me make that change first, see if I can at least make it pass build and smoke...
15:25 kkeithley who else is going to summit?
15:26 lpabon not me
15:26 lalatenduM kkeithley, not me :(
15:26 lpabon jclift: Let me try to create the environment expected and if I get an error I'll get back to you
15:26 bala joined #gluster-dev
15:27 lpabon jclift: My goal is to create one VM that passes build, smoke, and regression on Rackspace, then clone the heck out of it :-)
15:27 lalatenduM lpabon, I have questions about gluster-swift, good that you are here :)
15:27 jclift lpabon: Try this: https://forge.gluster.org/glusterfs-rackspace-regression-tester
15:27 lpabon lalatenduM: sure, go ahead
15:27 lpabon jclift: I definitely will
15:28 jclift lpabon: You'll need an API key for it, if you don't have one already.  It's just a simple click of a button in the Rackspace UI when you log in if you need one.
15:28 * kkeithley guesses he needs the devnation add-on?
15:28 lpabon jclift: why are there so many git commands?  Is that because when it was first created it had to do this years ago?  I don't need to do any of this for gluster-swift VMs and branch/patch testing
15:28 kkeithley johnmark: ^^^
15:29 jclift lpabon: Git commands in what?
15:29 lpabon jclift: on jenkins.sh
15:29 lpabon jclift: https://forge.gluster.org/gluster-patch-acceptance-tests/gluster-patch-acceptance-tests/blobs/master/jenkins.sh
15:29 jclift Heh, I've never actually run that.
15:30 jclift It seems to be what jenkin uses
15:30 jclift I just copied it to there since I saw jenkins using it. :)
15:30 lpabon jclift: yeah, that is what I saw on the Jenkins build, but I do not think any of that is necessary now
15:30 johnmark kkeithley: nope
15:30 johnmark unless you just want it
15:30 lpabon jclift: check out the jobs for gluster-swift
15:30 lalatenduM lpabon, for the CentOS storage SIG, we need the packages related to gluster-swift. I think the code is in launchpad, not sure about the exact project
15:31 lpabon lalatenduM: you mean, you need RPMs as part of CentOS?
15:32 lalatenduM lpabon, yeah, actually need src.rpm
15:32 jclift lpabon: What does "check out the jobs for gluster-swift" mean?
15:32 lalatenduM lpabon, the project is https://launchpad.net/gluster-swift right?
15:33 jclift Wondering if I should be looking through gluster-swift tree on forge for something?
15:33 lpabon jclift: Check this job:  http://build.gluster.org/job/gluster-swift-pre-commit-f19-havana/configure
15:33 lpabon lalatenduM: yes
15:34 lalatenduM lpabon, I think gluster-swift-plugin is a separate plugin right? this also comes out of the same source
15:34 jclift lpabon: Can you create a user for me in Jenkins?
15:34 lalatenduM s/seoarate plugin/ separate rpm/
15:34 jclift lpabon: That link is asking me for a login, which I don't have
15:34 lpabon lalatenduM: but, at the moment, there are no Fedora/CentOS packages... We do produce our own RPMs... We have a small issue in that our RPMs need a specific version of OpenStack Swift's RPMs, so we have been trying to figure out how to deal with that, specifically as part of Fedora
15:35 jclift lpabon: Anyway, if you want to get a rackspace vm up and running with the regression test in it, that's what the above regression testing repo does.
15:36 lpabon jclift: no, I  just checked... I do not have the credentials to add another user.  a2 created my account
15:36 lalatenduM lpabon, good that I talked to you about it. this is something we have to figure out for the SIG
15:36 jclift lpabon: It has all of the "launch vm" bit working, plus installs the build + regression testing dependencies, + pulls down the gluster source and the /opt/qa source, then kicks off the smoke and regression tests
15:36 jclift lpabon: np, I'll email a2 to ask
15:37 lpabon lalatenduM: ok, yes gluster-swift is a separate RPM... Bummer we need a specific version.. You can also talk to Chetan Risbud in Bangalore for more information
15:37 lalatenduM lpabon, can I see the rpms you guys provide, I want to see the spec file
15:37 jclift lpabon: I'm just working through the failures in the regression tests now
15:37 lpabon lalatenduM: absolutely, the spec file is in the code...
15:38 lpabon lalatenduM: Also, the Jenkins jobs all output RPMs .. pretty cool :-)
15:38 lpabon lalatenduM: https://github.com/gluster/gluster-swift/blob/master/glusterfs-openstack-swift.spec
15:38 lpabon jclift: What does?  What do you mean by "it"
15:39 lpabon jclift: the jenkins.sh?
15:39 lalatenduM lpabon, cool
15:39 jclift lpabon: This: https://forge.gluster.org/glusterfs-rackspace-regression-tester
15:39 lalatenduM lpabon, yeah will talk to Chetan about it, I know him, I referred him for the current job :)
15:39 jclift lpabon: It does what you're attempting.
15:39 jclift lpabon: git clone that somewhere, and run it
15:39 lpabon Oh.. interesting.. what is the expected environment?  Can we just set it up now?
15:40 jclift lpabon: Yeah.  It works as far as vm setup and getting the smoke test happening
15:40 lpabon Can we just set up some VMs now and hook them up then?
15:40 lpabon ah, but regression still does not work?
15:40 jclift lpabon: Almost.  There are some regression testing failures, yeah.
15:40 jclift lpabon: I'm experimenting, to figure out WTF is up with that
15:40 lalatenduM lpabon, you mean "gluster-swift-builds-el6" in build.gluster.org
15:40 lpabon jclift: do you know if it works on Fedora 20 and CentOS?
15:41 jclift lpabon: I'm using CentOS 6.5 VM's
15:41 jclift lpabon: I can't see the /opt/qa/ stuff working on F20 any time soon though
15:41 lpabon lalatenduM: yes, but they are also available here: https://launchpad.net/gluster-swift/havana/1.10.0-2
15:41 jclift lpabon: The build process has some compile time option for "treat warnings as errors"
15:41 lpabon jclift: that's not good :--(
15:42 jclift And the F20 compiler chucks up a bunch of warnings.  Nothing fatal, but that option causes them to be
15:42 lpabon My ultimate goal is for dev's to run this stuff on their systems
15:42 jclift lpabon: Sure.
15:42 lpabon Yeah I noticed that when I copied the build.sh from the build server to my VM, I had to remove the CFLAG -Werror for it to pass
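[Editor's note] A minimal sketch of the -Werror removal lpabon describes, run against a hypothetical one-line copy of the flags rather than the real build.sh:

```shell
# Stand-in for the CFLAGS line copied from the build script (illustrative values):
printf 'CFLAGS="-g -O2 -Wall -Werror"\n' > build-flags.sh

# Strip -Werror so compiler warnings no longer abort the build:
sed -i 's/ -Werror//' build-flags.sh

cat build-flags.sh   # CFLAGS="-g -O2 -Wall"
```

The warnings still print with -Wall; they just stop being fatal.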
15:43 lpabon jclift: I have never yelled at my monitor so much ;-)
15:43 jclift lpabon: I'd like patch submissions to automatically kick off regression testing on all of CentOS 6.x, F19/20, and (maybe) an Ubuntu platform (depending if feasible)
15:43 lpabon jclift: Definitely!
15:43 jclift lpabon: My last few weeks have been entirely operating at this level of frustration (holds hand up very high)
15:44 lpabon jclift: When we have it running w/o need of root and on Ubuntu, then we can have forked repos use Travis-ci.org!
15:44 jclift lpabon: So anyway, git clone the above repo
15:44 jclift Then create a ~/.rackspace_cloud_credentials  file with your API key in it
15:44 lpabon jclift: you got it, I'll use that as my starting point
15:44 jclift http://fpaste.org/89554/96021454/
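[Editor's note] The fpaste link above has since expired. The file jclift is describing is the standard pyrax credential file, which looks like this (placeholder values):

```shell
# ~/.rackspace_cloud_credentials in pyrax's keyring format:
cat > ~/.rackspace_cloud_credentials <<'EOF'
[rackspace_cloud]
username = your_rackspace_username
api_key = 0123456789abcdef0123456789abcdef
EOF
chmod 600 ~/.rackspace_cloud_credentials
```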
14:44 lalatenduM lpabon, in "https://launchpad.net/gluster-swift/havana/1.10.0-2" it is just one rpm, I thought we need around 5-6 packages
15:45 lpabon lalatenduM: no, gluster-swift is just one RPM.. OpenStack Swift, on the other hand, is 5-6 packages
15:45 lpabon lalatenduM: gluster-swift should install the others if their repo is setup
15:46 lpabon lalatenduM: Here is a quick start guide: https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md
15:46 lalatenduM lpabon, checking
15:46 lpabon jclift: where do I install that? on the VM?
15:46 jclift lpabon: I can walk you through the setup for the above repo, which you'll prob need since I haven't written the steps yet.  It's simple, but non-obvious.
15:46 jclift lpabon: I just run it from my OSX 10.7 desktop
15:47 jclift Which uses Python 2.7.x, so that would prob be the minimum requirement :)
15:47 lpabon jclift: hmmm, ok. I see
15:47 * lpabon will be back @ 2pm EST
15:47 lpabon i gtg, but I will be back later.
15:47 jclift lpabon: It then talks to the rackspace cloud, creating the vm + providing the build instructions for it
15:47 jclift lpabon: Sure, np at all
15:53 lalatenduM lpabon, u left :)
15:53 lalatenduM lpabon, I am late, I think I had one last question :)
16:01 jobewan joined #gluster-dev
17:02 hagarth joined #gluster-dev
17:34 jclift Ugh... there's a bug in selinux-policy that's stopping cloud-init from updating the kernel at boot
17:35 jclift ... and since there's an XFS bug in the kernel that Rackspace is provisioning with (fixed in an available update)... there's no way to get CentOS working until the selinux-policy bug is fixed, or Rackspace includes a newer kernel with its provisioning
17:36 jclift I think I'll remove the -Wall flag from build.sh, and try provisioning with F20 instead
17:57 jclift -Werror, not -Wall
18:08 ndk joined #gluster-dev
18:43 jclift__ joined #gluster-dev
18:43 jclift__ left #gluster-dev
18:53 lpabon jclift: yeah, we may need to have -Werror on by default, so that developers start fixing their warnings
18:54 jclift lpabon: Well, I'm having trouble with cloud-init atm.
18:54 lpabon cloud-init?
18:55 jclift lpabon: I can't get it to apply kernel updates with CentOS 6.5, following the docs
18:55 jclift cloud-init is the system used to provide an initial list of stuff to do, to the instance
18:55 lpabon ah
18:55 jclift eg "when you boot, do this, this, and this"
18:56 lpabon fyi, in the near term, we could just use a VM and not shut it down
18:56 jclift I can get it to run arbitrary commands, and that might be good enough for us on Fedora
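[Editor's note] The two cloud-init approaches jclift contrasts look roughly like this as #cloud-config user-data. This is a hedged sketch of standard cloud-init syntax, not the actual file used here:

```yaml
#cloud-config
# Declarative route: ask cloud-init itself to upgrade packages (including
# the kernel) on first boot -- the route blocked by the selinux-policy bug:
package_upgrade: true

# Fallback route: arbitrary commands, which jclift says does work:
runcmd:
  - yum -y update kernel
  - reboot
```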
18:56 lpabon like we do in gluster-swift
18:56 jclift Screw that
18:56 lpabon lol
18:56 jclift I want it working properly
18:56 lpabon i agree, but just in case :-)
18:57 lpabon jclift: I'm going to make in a few minutes a Jenkins job which creates RPMs for pre-commits
18:57 lpabon for glusterfs patches
18:59 lpabon jclift: is the jenkins server on /d or / on build.gluster.org... I'm trying to see how much space it has left to store RPMs built from Jobs
19:04 jclift lpabon: That bit I'm not sure of.  I've not really looked at how Jenkins is setup there, since I don't yet have an account.
19:04 lpabon ok, np
19:09 jclift I *might* be able to get the regression stuff running in F19 or F20 in Rackspace.  Not sure yet, going to try shortly.
19:09 lpabon cool!
19:09 lpabon there is something definitely wrong if it is this difficult to run the tests imo
19:10 jclift CentOS 6.5 doesn't seem that likely short term.  There's a kernel bug in the provisioned build that seems to affect XFS.  And a cloud-init bug stopping me from getting the kernel updated.
19:10 jclift F20 hopefully doesn't have that bug
19:10 lpabon yeah, but F20 prob has other issues with the automated build scripts >.>
19:11 jclift lpabon: Well, a large part of the problem (yesterday) was that there's literally 0 documentation for getting cloud-init working through pyrax.
19:11 lpabon yikes
19:11 jclift With code tracing and pointers from the ppl in #rackspace, got that bit figured
19:11 * jclift will submit a PR for their docs in a bit, to add initial bits for the next sucker... ;)
19:12 jclift With F20, I think the problem is most likely to just be the -Werror flag on the build
19:12 jclift I've cloned the repo and removed that in my fork, so will test on F20 with that removed tonight
19:13 jclift I'm seriously brain fried atm though, so don't really want to start on it right now
19:13 lpabon i hear you
19:13 jclift eg I'll go have food or something, get a bit of mental clarity, then attack it again
19:13 lpabon Half time!
19:14 jclift Maybe something will work itself out overnight too, and I'll wake up with a solution in head tomorrow morning
19:14 jclift But yeah. 1/2 time ;)
19:16 lpabon nice
19:16 lalatenduM joined #gluster-dev
20:44 vpshastry joined #gluster-dev
21:45 wushudoin joined #gluster-dev
22:53 Frankl joined #gluster-dev
23:49 badone joined #gluster-dev
