
IRC log for #gluster-dev, 2014-11-24


All times shown according to UTC.

Time Nick Message
01:22 topshare joined #gluster-dev
01:37 shyam joined #gluster-dev
02:07 tdasilva joined #gluster-dev
02:33 baojg joined #gluster-dev
02:33 bala joined #gluster-dev
03:06 kshlm joined #gluster-dev
03:23 bala joined #gluster-dev
03:25 bharata-rao joined #gluster-dev
03:40 baojg joined #gluster-dev
03:40 aravindavk joined #gluster-dev
03:48 kanagaraj joined #gluster-dev
03:55 itisravi joined #gluster-dev
04:02 aravindavk joined #gluster-dev
04:06 shubhendu joined #gluster-dev
04:12 nkhare joined #gluster-dev
04:24 baojg joined #gluster-dev
04:29 kdhananjay joined #gluster-dev
04:37 rafi joined #gluster-dev
04:37 Rafi_kc joined #gluster-dev
04:37 ndarshan joined #gluster-dev
04:39 anoopcs joined #gluster-dev
04:40 rafi joined #gluster-dev
04:41 hagarth joined #gluster-dev
04:43 baojg joined #gluster-dev
04:45 baojg joined #gluster-dev
04:49 Gaurav_ joined #gluster-dev
04:59 soumya_ joined #gluster-dev
05:03 spandit joined #gluster-dev
05:04 atalur joined #gluster-dev
05:06 jiffin joined #gluster-dev
05:23 baojg joined #gluster-dev
05:44 jiffin joined #gluster-dev
05:49 bharata-rao joined #gluster-dev
05:54 ppai joined #gluster-dev
05:59 lalatenduM joined #gluster-dev
06:06 baojg joined #gluster-dev
06:06 bala joined #gluster-dev
06:08 nkhare_ joined #gluster-dev
06:16 Anuradha joined #gluster-dev
06:17 shubhendu joined #gluster-dev
06:19 ndarshan joined #gluster-dev
06:22 baojg joined #gluster-dev
06:38 deepakcs joined #gluster-dev
06:46 ndarshan joined #gluster-dev
06:50 ppai joined #gluster-dev
06:50 jiffin joined #gluster-dev
07:02 shubhendu joined #gluster-dev
07:03 bharata-rao joined #gluster-dev
07:07 itisravi joined #gluster-dev
07:22 hchiramm_ joined #gluster-dev
07:43 Humble joined #gluster-dev
08:03 baojg joined #gluster-dev
08:06 ppai joined #gluster-dev
08:26 vimal joined #gluster-dev
08:27 baojg joined #gluster-dev
08:32 bala joined #gluster-dev
08:35 raghu` joined #gluster-dev
08:35 kshlm joined #gluster-dev
09:01 ppai joined #gluster-dev
09:11 pranithk joined #gluster-dev
09:26 bala joined #gluster-dev
09:28 lalatenduM joined #gluster-dev
09:35 kshlm joined #gluster-dev
09:47 bala joined #gluster-dev
09:51 Guest95118 joined #gluster-dev
09:53 deepakcs joined #gluster-dev
10:08 lalatenduM_ joined #gluster-dev
10:33 kaushal_ joined #gluster-dev
10:45 baojg joined #gluster-dev
10:48 shubhendu joined #gluster-dev
10:48 ndarshan joined #gluster-dev
10:49 ppai joined #gluster-dev
10:50 Guest95118 joined #gluster-dev
10:52 bala joined #gluster-dev
11:08 ppai joined #gluster-dev
11:50 shubhendu joined #gluster-dev
11:52 ndarshan joined #gluster-dev
11:52 bala joined #gluster-dev
11:59 tdasilva joined #gluster-dev
12:25 baojg joined #gluster-dev
12:27 baojg joined #gluster-dev
12:38 azar joined #gluster-dev
12:47 baojg joined #gluster-dev
12:48 baojg_ joined #gluster-dev
12:53 soumya joined #gluster-dev
12:55 ramon_dl joined #gluster-dev
12:59 baojg joined #gluster-dev
13:00 topshare joined #gluster-dev
13:01 baojg_ joined #gluster-dev
13:01 topshare joined #gluster-dev
13:31 lpabon joined #gluster-dev
13:40 hagarth joined #gluster-dev
13:47 shubhendu joined #gluster-dev
13:48 kkeithley joined #gluster-dev
13:49 kkeithley left #gluster-dev
13:49 kkeithley joined #gluster-dev
13:50 bala joined #gluster-dev
14:05 kshlm joined #gluster-dev
14:05 edward1 joined #gluster-dev
14:06 aravindavk joined #gluster-dev
14:12 ndevos lalatenduM, Humble, hagarth, kkeithley, kshlm: could one of you do the Gluster Bug Triage meeting tomorrow? I have an appointment at that time and can not make it
14:31 tdasilva joined #gluster-dev
14:35 lalatenduM ndevos, will do
14:41 dlambrig joined #gluster-dev
14:56 lalatenduM joined #gluster-dev
15:07 shyam joined #gluster-dev
15:09 ira joined #gluster-dev
15:11 kkeithley semiosis: I missed it last week when you did "@later tell kkeithley ping me when you have a free minute to talk about keeping the debian/ dir in a git repo."
15:27 lalatenduM kkeithley, ndevos are you guys around?
15:28 kkeithley I am
15:34 lalatenduM kkeithley, http://koji.fedoraproject.org/koji/taskinfo?taskID=8219982
15:34 lalatenduM kkeithley, the issue about which I have sent an email, 3.6.1-3
15:35 lalatenduM kkeithley, wondering why it is showing up now, maybe after the regression test rpm changes
15:36 lalatenduM let me confirm that
15:36 kkeithley I expect that's right. Did those land in the regression-tests rpm? I bet they did.
15:37 kkeithley Somehow we need to add awareness about the packaging and how adding files has consequences.
15:38 lalatenduM kkeithley, these files come under %files geo-replication
15:39 kkeithley Maybe all those belong to the georep package? Can be solved by adding to the %files geo-replication section
15:40 kkeithley You're saying that's where they belong. They're not there now.
15:41 kkeithley how did ndevos build the 3.6.1-3.fc22 rpms?
15:42 lalatenduM kkeithley, the issue is coming only for el5
15:42 davemc joined #gluster-dev
15:42 lalatenduM kkeithley, it is still part of %files geo-replication
15:42 kkeithley ah, I missed that. We don't build geo-rep for el5 because the python is too old
15:43 kkeithley If they're part of geo-rep then we need an %exclude when !geo-rep
15:44 kkeithley Oh, I see them in %files geo-rep now
15:45 lalatenduM kkeithley, we have not done any code changes wrt geo-rep recently
15:45 kkeithley probably need to rm -f them in %install.  Wrapped in %if _without_georeplication.
15:46 _Bryan_ joined #gluster-dev
15:46 kkeithley Looks to me like they probably fell through the cracks in %files regression-tests on el5
15:47 lalatenduM kkeithley, yes
15:49 kkeithley we should probably change the lines in %files regression-tests with  %{_prefix}/share/glusterfs/*  to use %{_datadir}/glusterfs/...   just to be consistent.   Although I wonder why they're in /usr/share/glusterfs instead of in %{_libexecdir}/glusterfs.
15:50 kkeithley anyway, probably need to rm -f them in %install.  Wrapped in %if _without_georeplication.
15:50 kkeithley or use a %exclude somehow
15:52 kkeithley the more I think about it though, %files regression-tests needs an %exclude, so as not to package them in that RPM under any circumstances.
15:53 kkeithley and add a couple lines to %install to rm -f those files when %_without_georeplication
15:59 kkeithley seem reasonable?
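A minimal sketch of the plan described above: an unconditional %exclude in %files regression-tests plus a conditional rm -f in %install. The file name geo-rep-helper.sh, the exact paths, and the form of the _without_georeplication conditional are illustrative assumptions and would need to be checked against the real glusterfs.spec (which, per the discussion above, currently uses %{_prefix}/share/glusterfs/* rather than %{_datadir}/glusterfs/*):

    # %install: remove the geo-rep helper scripts when geo-rep is not built
    %if ( 0%{?_without_georeplication:1} )
    rm -f %{buildroot}%{_datadir}/glusterfs/scripts/geo-rep-helper.sh   # hypothetical file name
    %endif

    # %files regression-tests: never package the geo-rep helpers here,
    # regardless of whether geo-rep itself is enabled
    %files regression-tests
    %{_datadir}/glusterfs/*
    %exclude %{_datadir}/glusterfs/tests/basic/rpm.t
    %exclude %{_datadir}/glusterfs/scripts/geo-rep-helper.sh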
16:05 lalatenduM kkeithley, brb
16:16 ndevos lalatenduM++ awesome, many thanks!
16:16 glusterbot ndevos: lalatenduM's karma is now 48
16:16 bala joined #gluster-dev
16:17 ndevos lalatenduM: I'll send a reminder for the meeting later today :)
16:23 soumya joined #gluster-dev
16:37 lalatenduM ndevos, thanks
16:38 lalatenduM kkeithley, leaving for home now, will relogin and talk to you
16:41 baojg joined #gluster-dev
16:48 dlambrig left #gluster-dev
16:54 hagarth joined #gluster-dev
17:25 Guest95118 joined #gluster-dev
17:41 baojg joined #gluster-dev
17:44 lalatenduM joined #gluster-dev
17:49 kkeithley semiosis: ping, re: .dpkg bits in a git repo.
17:49 semiosis yes
17:50 semiosis https://github.com/semiosis/glusterfs-debian
17:50 semiosis i have branches for distros & glusterfs releases, and tags for each package i publish
17:50 kkeithley cool
17:51 semiosis it's a real pain to build all these packages, with all their little differences, and at least with this repo it's manageable
17:51 semiosis although i'd like to make it easier
17:51 kkeithley yup
17:51 semiosis if you have any ideas...
17:52 davemc Tomorrow. Please join us. RT @gluster: New blog post: GlusterFS Future Features: BitRot detection http://bit.ly/14SjnSr
17:53 kkeithley Short of having the equivalent of Fedora's koji, dist-git, and official package owners, I'd say this might be as good as it gets, for now. I'll think on it.
17:53 semiosis ok thanks
17:54 * kkeithley makes notes on his Debian build boxes so he remembers to use the github repo.
17:54 lalatenduM davemc, the time zone is not mentioned in the blog post? It is just Tue, Nov 25, 5:00 AM – 5:00 AM?
17:54 davemc sorry. 5AM PST
17:54 davemc will change
17:56 semiosis kkeithley: i'll give you committer access.  whats your github username?
17:57 lalatenduM davemc, thanks
17:58 kkeithley kalebskeithley
18:03 kkeithley semiosis: ^^^
18:03 semiosis added
18:06 jobewan joined #gluster-dev
18:43 lalatenduM kkeithley, RE:<kkeithley> seem reasonable?
18:43 lalatenduM kkeithley, I have a couple of doubts
18:47 kkeithley fire when ready
18:51 lalatenduM kkeithley, i did not get this "Although I wonder why they're in /usr/share/glusterfs instead of in %{_libexecdir}/glusterfs."
18:51 lalatenduM %{_libexecdir} translates to /usr/libexec
18:51 lalatenduM right?
18:52 kkeithley oh, I'm just surprised that we have executables in /usr/share/glusterfs
18:53 kkeithley I would have expected them to be in /usr/libexec/glusterfs. That's all. Nothing really to do with how the RPM is built
18:54 lalatenduM kkeithley, agree
18:55 lalatenduM kkeithley, regarding  %files regression-tests needs an %exclude, so as not to package them in that RPM under any circumstances
18:56 lalatenduM kkeithley, %files regression-tests has an %exclude already
18:56 lalatenduM for %exclude %{_prefix}/share/glusterfs/tests/basic/rpm.t
18:57 kkeithley yes, rpm.t is not those geo-rep scripts
18:59 tdasilva joined #gluster-dev
19:01 kkeithley if you look at the contents of, e.g. glusterfs-regression-tests-3.6.1-1.fc22.x86_64.rpm, it's got the geo-rep scripts in it. We need to %exclude those.  That way when we do build -regression-tests, e.g. for dgo or Storage SIG, we won't get the geo-rep scripts.
19:02 kkeithley yes?
19:02 lalatenduM kkeithley, ohh, I kind of got that from the previous msg, not able to find the bug in the specfile
19:03 lalatenduM kkeithley, yes, agree
19:03 kkeithley oh, okay. Sorry. Just me and my Keen Eye For The Obvious. ;-)
19:03 lalatenduM kkeithley, just trying to find the loophole in the spec file, thanks for making it easy for me :)
19:07 kkeithley ??? lost me. You see where the %exclude needs to be in %files regression-tests.
19:07 lalatenduM kkeithley, got it
19:08 lalatenduM kkeithley, finally I got it, the issue is that %{_datadir} resolves to /usr/share
19:08 lalatenduM and %files regression-tests
19:08 lalatenduM 893 %{_prefix}/share/glusterfs/*
19:08 kkeithley yes
19:08 lalatenduM contains the geo-rep scripts too
19:09 kkeithley yes
19:13 lalatenduM kkeithley, it seems (from the internet) rm -rf is a better solution than %exclude
19:17 JustinClift lalatenduM: Does editing the CentOS wiki require special privileges?
19:17 lalatenduM JustinClift, yes
19:17 * JustinClift registered account there, but can't see an edit button
19:17 JustinClift Ahhh
19:17 JustinClift lalatenduM: Thx :)
19:18 lalatenduM JustinClift, if you need access to a certain page, you can send a mail to the docs mailing list and you will get access
19:18 JustinClift Ahhh, cool
19:18 JustinClift Tx
19:18 lalatenduM JustinClift, otherwise you have friends there ;)
19:21 JustinClift lalatenduM: Yeah, they pointed me at the mailing list too: http://wiki.centos.org/Contribute#head-42b3d8e26400a106851a61aebe5c2cca54dd79e5
19:21 lalatenduM JustinClift, yup thats the one :)
19:23 JustinClift I'm already on too many mailing lists, so I'll let the SCST guys add themselves
19:24 eljrax joined #gluster-dev
19:24 * JustinClift was just doing a "drive by contribution".  No biggie. ;)
19:24 eljrax Hey, I'm toying with the idea of writing a glusterfs translator. I just found this: http://gluster.org/community/documentation/index.php/Arch
19:24 eljrax Looks like Jeff Darcy's domain has expired, and it's now hosting a squatter
19:25 eljrax Seems like the information is still available through archive.org, but thought I'd mention it here as those links probably should be removed
19:25 eljrax The Translator 101 stuff
19:25 JustinClift eljrax: In theory, the content from that site should be on the wiki and in the gluster source docs now
19:26 JustinClift eljrax: Hopefully that's the reality too, and the links on that page are just bad
19:26 JustinClift eljrax: I'm not sure though personally, as I haven't yet looked
19:26 JustinClift eljrax: Sounds like you're about to though? :D
19:27 eljrax Myeah, still considering options at this point :) But quite possibly, yeah
19:27 JustinClift Actually, if you do locate them on the present wiki / gluster docs, would you be ok to update the links in that Arch page?
19:27 JustinClift Or I can, if you point me at them. :D
19:27 JustinClift eljrax: Which computer languages are you ok with?
19:27 JustinClift eljrax: C or Python? (hopefully)
19:27 eljrax I mainly write C and python
19:27 eljrax ;)
19:28 JustinClift Cool :)
19:28 JustinClift C will give you the most power/integration
19:28 eljrax I found the API and the Swift stuff, but I'm hoping to catch files as they are written and deleted
19:29 eljrax I've got a prototype of something using inotify, and a rudimentary messaging system for keeping directories in sync, but GlusterFS kind of solves that whole bit for me, and I was hoping to do a Translator to do the inotify bits for me
19:29 JustinClift There is a widget/thingamajig/shim for Python coders called Glupy, which can be used for rapid prototyping stuff
19:29 JustinClift Ahhh, cool.
19:30 JustinClift Sounds like it would be similar or perhaps work in with the ChangeLog stuff being developed
19:30 baojg joined #gluster-dev
19:30 JustinClift As far as I remember, inotify has some limitations around not being able to monitor subdirectories or something
19:30 JustinClift (this is from dodgy memory of a while ago tho :>)
19:31 eljrax Yeah, you kind of have to cater for that yourself, as well as handling the recursion of subdirectories in subdirectories etc.
19:31 JustinClift Yeah, people with filesystems full of subdirectories have to set up an er... inotify thing for every directory on the filesystem
19:31 eljrax And if you move a directory, you need to go back and remove the watches for the old ones, and update the watches for the new ones..  I've got all that stuff down after much hair pulling
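For context, a minimal single-directory inotify watcher in C looks roughly like the sketch below; as eljrax notes, inotify does not recurse, so a real implementation has to repeat inotify_add_watch() for every subdirectory it discovers and keep its watch descriptors updated when directories are created, deleted, or moved:

    /* Minimal inotify watcher (Linux): reports create/write/delete/move events
     * in ONE directory.  inotify does not recurse, so a real tool would call
     * inotify_add_watch() for every subdirectory and keep a wd -> path map
     * updated on IN_MOVED_FROM / IN_MOVED_TO, as described above. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <directory>\n", argv[0]);
            return 1;
        }

        int fd = inotify_init();
        if (fd < 0) { perror("inotify_init"); return 1; }

        int wd = inotify_add_watch(fd, argv[1],
                                   IN_CREATE | IN_CLOSE_WRITE | IN_DELETE |
                                   IN_MOVED_FROM | IN_MOVED_TO);
        if (wd < 0) { perror("inotify_add_watch"); return 1; }

        char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
        for (;;) {
            ssize_t len = read(fd, buf, sizeof(buf));   /* blocks until events arrive */
            if (len <= 0) break;

            /* a single read() can return several packed events */
            for (char *p = buf; p < buf + len;
                 p += sizeof(struct inotify_event) + ((struct inotify_event *) p)->len) {
                struct inotify_event *ev = (struct inotify_event *) p;
                if (ev->len)
                    printf("mask 0x%08x on %s\n", (unsigned) ev->mask, ev->name);
            }
        }

        close(fd);
        return 0;
    }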
19:31 JustinClift A recent-ish thing people have been working on in GlusterFS, is a "ChangeLog" um... thing...
19:32 JustinClift ^^^ Obviously I'm not that technically in depth any more
19:32 JustinClift So, this ChangeLog um... thing monitors the complete filesystem.  All changes, unlike inotify
19:33 JustinClift It's a newish thing being worked on, and once it's fully ready some of our existing code will be adapted to use it (my understanding anyway)
19:33 eljrax My main goal is to allow local writes, but also upload files to a CDN on-the-fly, while still allowing multiple machines reading and writing
19:33 JustinClift Ahhhh
19:33 eljrax So all machines should (eventually) have identical filesystems, and a mirror of that on a CDN
19:34 JustinClift That's a good idea
19:34 semiosis eljrax: libgfchangelog support is planned for the java connector.  once that's done it would be pretty easy to write such a program
19:34 * JustinClift just thought of a use for a similar thing to that on a different side project he's working on
19:34 eljrax http://gluster.org/community/documentation/index.php/Arch/Change_Logging_Translator_Design
19:35 eljrax semiosis: Aha, I can get my java out if I absolutely have to :) I'll have a look at libgfchangelog
19:35 eljrax Is there a github repo or something anywhere?
19:35 semiosis eljrax: you could use any jvm language
19:35 semiosis https://github.com/semiosis/glusterfs-java-filesystem
19:35 semiosis but for now there's just a cheesy polling implementation to watch for changes
19:36 semiosis which i wrote before i knew about libgfchangelog
19:36 eljrax Hate it when that happens :)
19:37 semiosis basically you code your app to use the NIO.2 API then drop in the glusterfs-java-filesystem jar on your classpath and you can magically access glusterfs via gluster://server:volume URIs
19:37 semiosis besides the URI scheme your app doesnt know anything about glusterfs
19:37 eljrax Can I "subscribe" to events though ? I'm not familiar enough with the NIO API to know if that is built in
19:38 semiosis to be clear (everyone gets this confused, thanks Oracle!) it's NIO.2 -- NIO is something else
19:38 semiosis https://docs.oracle.com/javase/tutorial/essential/io/notification.html
19:38 eljrax Well, proves my point of being unfamiliar with it :)
19:39 eljrax Wow, and I could use that API on a GlusterFS mount?
19:40 semiosis no mount.  your app connects directly to the volume (via the libgfapi C client library)
19:41 eljrax So how does it detect changes?
19:41 eljrax Metadata diffs or ?
19:43 tdasilva joined #gluster-dev
19:45 semiosis currently it polls the filesystem (which is horrible) but the plan is to replace that with libgfchangelog
19:46 semiosis ...which is a C library we'll bind to via JNI, then use that to implement the NIO.2 WatchService API
19:46 eljrax http://review.gluster.org/#/c/5127/22/xlators/features/changelog/lib/examples/c/get-changes.c   That looks promising though
19:48 eljrax It's getting late here, will continue looking tomorrow. Thanks for the input!
19:49 JustinClift eljrax: https://github.com/gluster/glusterfs/tree/release-3.6/xlators/features/changelog/src
19:49 JustinClift ^^^ Not sure if that helps.  It might though. :D
19:49 JustinClift eljrax: And no worries.  Have a good night. :D
19:50 JustinClift https://github.com/gluster/glusterfs/tree/release-3.6/xlators/features/changelog/lib/examples
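For readers picking this thread up later, the consumer loop in the get-changes.c example linked above has roughly the shape sketched below. This is a sketch from memory, not an authoritative listing: the call names (gf_changelog_register, gf_changelog_scan, gf_changelog_next_change, gf_changelog_done), their argument lists, and the brick/scratch/log paths are assumptions to be verified against the upstream example and the changelog library headers.

    /* Sketch of a libgfchangelog consumer, modeled loosely on
     * xlators/features/changelog/lib/examples/c/get-changes.c.
     * Function signatures and paths here are assumptions. */
    #include <stdio.h>
    #include <limits.h>
    #include "changelog.h"   /* header from the glusterfs changelog library */

    int main(void)
    {
        char fbuf[PATH_MAX] = {0,};

        /* register as a consumer for one brick; args (assumed):
         * brick path, scratch dir, log file, log level, max reconnect attempts */
        if (gf_changelog_register("/export/brick1", "/tmp/scratch",
                                  "/tmp/changes.log", 9, 5) < 0) {
            perror("gf_changelog_register");
            return 1;
        }

        for (;;) {
            /* ask the library to roll over and publish any pending changelogs */
            if (gf_changelog_scan() < 0)
                continue;

            /* drain the published changelog files one by one */
            while (gf_changelog_next_change(fbuf, PATH_MAX) > 0) {
                printf("changelog file: %s\n", fbuf);   /* parse entries here */
                gf_changelog_done(fbuf);                /* mark it processed */
            }
        }

        return 0;
    }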
20:05 JustinClift eljrax: May also be useful: http://supercolony.gluster.org/pipermail/gluster-devel/2014-October/042671.html
20:49 lalatenduM kkeithley, regarding the slides
20:49 kkeithley yes
20:49 lalatenduM kkeithley, we should add more to the expectations from the storage SIG
20:50 kkeithley (02:13:58 PM) lalatenduM: kkeithley, it seems (from the internet) rm -rf is a better solution than %exclude
20:51 lalatenduM kkeithley, doing a scratch build at http://koji.fedoraproject.org/koji/taskinfo?taskID=8222318, let's see if it works
20:51 kkeithley re: ^^^    you still need the %exclude in %files regression-tests so that you don't get geo-rep scripts in the -regression-tests RPM when you're building both
20:52 lalatenduM kkeithley, right
20:52 lalatenduM will do that too
20:52 kkeithley as for adding more to expectations from storage SIG, yes, absolutely. I just wanted to get a start and capture what we said in IRC the other day
20:53 lalatenduM kkeithley, on the slides: apart from a seamless experience, the SIG should also be able to rebuild other ecosystem packages tuned for SIG projects; it will help with easier integration
20:53 lalatenduM if required we can rebuild packages from the base OS too
20:54 lalatenduM and the spec file can be changed for better integration
20:54 kkeithley "other ecosystems" means what? Debian? or oVirt?
20:54 lalatenduM kkeithley, I meant Samba, nfs-ganesha, maybe libvirt or qemu
20:55 kkeithley We're already doing that. I'm not getting your meaning.
20:56 lalatenduM kkeithley, from the Gluster point of view we are doing that, but not all projects rebuild other ecosystem packages
20:56 kkeithley ah, okay
20:57 lalatenduM kkeithley, and the SIG can fill that gap
20:58 lalatenduM kkeithley, do you mind if I put this presentation in Google Drive, so that we both can edit it at the same time
20:59 kkeithley sure, go ahead
21:04 lalatenduM kkeithley, on it
21:04 lalatenduM kkeithley, in your opinion, what is the probability of this talk getting selected?
21:06 kkeithley oh, dunno. 100%! (I'm an optimist ;-)) Wouldn't kbsingh be a better person to ask? I see no harm in getting his opinion anyway.
21:07 lalatenduM kkeithley, :)
21:07 kkeithley If you're asking because you're debating whether to get travel auth, I'd say presume that if it's accepted you'd get the travel auth.
21:07 kkeithley But obviously I'm not in a position to guarantee that
21:08 lalatenduM kkeithley, right
21:09 lalatenduM kkeithley, I will be OOO next week 1st Dec to 5th Dec, so will try to submit this week only
21:09 kkeithley Okay.
21:10 lalatenduM Google drive does not understand odp format, urgh!!
21:10 kkeithley We just need to get a proposal submitted before the deadline, we can finalize the presentation later
21:10 kkeithley what about cut-and-paste?
21:11 lalatenduM kkeithley, I am trying the MS format, which I don't like, but :(
21:11 kkeithley try SaveAs .ppt or .pptx.
21:12 lalatenduM yeah
21:12 kkeithley yeah, I know. Me too.
21:13 lalatenduM kkeithley, you should get a mail for the share in your RH mailbox
21:13 kkeithley yup, got it
21:15 kkeithley if your manager doesn't approve travel, let me know and I'll ask Ric. I don't know about the office politics, Indian style or otherwise. We might have to be careful that your manager doesn't get his or her nose out of joint about it, if it comes to that.
21:16 kkeithley that seems to have survived the .ppt and import into g'drive
21:17 lalatenduM kkeithley, :)
21:19 lalatenduM kkeithley, should we mention that EPEL is not available in the SIG build target
21:19 lalatenduM and there are so many orphaned (no maintainer) pkgs in EPEL
21:19 lalatenduM and EPEL is going to remove them in the near future
21:19 kkeithley yes and yes.
21:21 kkeithley none of those are dependencies for Samba though, are they?
21:22 lalatenduM kkeithley, nope. I will cross-check again. I am pretty sure the Samba folks would have taken care of it
21:23 lalatenduM kkeithley, also remember the spec file of Ceph in epel is different than upstream
21:23 lalatenduM that will be tricky
21:24 kkeithley Indeed. Isn't Boris working on fixing that though?
21:25 lalatenduM kkeithley, not yet, I will start an email with Boris as the current Ceph maintainer in the SIG is inactive
21:25 lalatenduM and we are looking for someone to maintain Ceph in the SIG
21:25 kkeithley I think it's easier to deviate from what's in Fedora and EPEL. Or it ought to be. Do you think that's not the case?
21:28 lalatenduM kkeithley, yeah, do you mean less restriction in SIG?
21:30 kkeithley No, well, as far as package .spec reviews and that sort of thing, not really. What I mean is some requirement to be, maybe not exactly the same, but close to what's in Fedora and EPEL?
21:31 kkeithley Thinking specifically about Ceph. Does anyone think that the SIG version of Ceph should look like the Fedora/EPEL Ceph packages? Or can we break out of the mold and use the upstream ceph.spec?
21:32 kkeithley For that matter, _could_ we bundle Gluster differently? (Independent of whether we really want to or not. And no, I'm not suggesting we should.)
21:32 lalatenduM kkeithley, actually Patrick wanted to use upstream spec for Ceph in SIG
21:32 kkeithley Is Patrick the inactive SIG maintainer?
21:32 lalatenduM yes
21:33 kkeithley I think that's smart. I just don't know how people are thinking about it. Hence my question.
21:33 kkeithley Anyway, I need to go home. I've been here since 7am. Long day
21:34 lalatenduM on a little different/quick note, I have a plan to create an RPM like redhat-storage-server
21:34 kkeithley talk more about it tomorrow
21:34 lalatenduM kkeithley, ok
21:34 kkeithley cool
21:34 lalatenduM basically it would pull in everything for the SIG
21:34 lalatenduM ttyl :)
21:35 kkeithley yup, seems like it would be a hit
21:35 lalatenduM yeah
21:36 lalatenduM we should put that in the slides too
21:37 lalatenduM pkg name: "gluster-storage-server"
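As a rough illustration of the idea, a meta-package of this kind could be little more than a spec file that carries dependencies; the name follows the suggestion above, and the Requires list is purely a placeholder, not a statement of what the SIG actually ships:

    Name:           gluster-storage-server
    Version:        0.1
    Release:        1%{?dist}
    Summary:        Meta-package that pulls in the Gluster storage stack from the Storage SIG
    License:        GPLv2
    BuildArch:      noarch

    # Illustrative dependencies only; the real list would be decided by the SIG
    Requires:       glusterfs-server
    Requires:       glusterfs-cli
    Requires:       glusterfs-geo-replication

    %description
    Convenience meta-package: installing it drags in the Gluster storage server
    stack built by the CentOS Storage SIG, similar in spirit to redhat-storage-server.

    %files
    # intentionally empty: this package only carries dependencies

    %changelog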
22:02 shyam joined #gluster-dev
22:06 JustinClift kkeithley: Just realised I still haven't sent those cables yet.  They're in a box sitting next to me I just glanced at.
22:06 JustinClift :/
22:06 JustinClift I think it'll be a Wednesday thing.
22:13 badone joined #gluster-dev
22:38 ira joined #gluster-dev
22:44 badone joined #gluster-dev
22:56 shyam joined #gluster-dev
23:04 baojg joined #gluster-dev
