
IRC log for #gluster-dev, 2016-07-26


All times shown according to UTC.

Time Nick Message
02:04 julim joined #gluster-dev
02:12 kshlm joined #gluster-dev
02:41 Bhaskarakiran joined #gluster-dev
03:02 poornimag joined #gluster-dev
03:07 nishanth joined #gluster-dev
03:17 magrawal joined #gluster-dev
03:51 itisravi joined #gluster-dev
04:02 ankitraj joined #gluster-dev
04:09 rafi joined #gluster-dev
04:13 shubhendu__ joined #gluster-dev
04:17 shubhendu__ joined #gluster-dev
04:24 aspandey joined #gluster-dev
04:24 nbalacha joined #gluster-dev
04:26 sanoj_ joined #gluster-dev
04:31 atinm joined #gluster-dev
04:31 rafi joined #gluster-dev
04:34 rafi joined #gluster-dev
04:37 nishanth joined #gluster-dev
04:42 Saravanakmr joined #gluster-dev
04:45 sanoj_ joined #gluster-dev
04:51 ramky joined #gluster-dev
04:55 kshlm nigelb, misc, www.gluster.org now opens the WordPress blog at blog.gluster.org instead of the homepage.
04:57 ppai joined #gluster-dev
05:00 nigelb I thought misc fixed that up yesterday.
05:00 nigelb kshlm: It's one for misc. File a bug for him.
05:00 kshlm I've filed a bug. I'll assign it to him now.
05:05 ankitraj joined #gluster-dev
05:12 poornimag joined #gluster-dev
05:17 mchangir joined #gluster-dev
05:19 prasanth joined #gluster-dev
05:20 nbalacha nigelb, ping
05:20 glusterbot nbalacha: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
05:21 nigelb nbalacha: hi!
05:21 nbalacha nigelb, https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/18323/console has failed with a crash
05:21 nbalacha nigelb, but I cannot get the core - location returns 404
05:22 nigelb You need http://nbslave7h.cloud.gluster.org/archives/archived_builds/build-install-20160725190136.tgz yes?
05:22 Manikandan joined #gluster-dev
05:22 nbalacha nigelb, http://nbslave7h.cloud.gluster.org/archives/archived_builds/build-install-20160725190136.tgz, yes
05:22 nbalacha and another netbsd machine to debug it
05:22 nigelb okay, let me figure out what's wrong.
05:22 nigelb will do
05:23 skoduri joined #gluster-dev
05:23 nigelb huh, that's weird.
05:25 kdhananjay joined #gluster-dev
05:25 nigelb nbalacha: Can you check if this is what you want? http://nbslave7h.cloud.gluster.org/logs/glusterfs-logs-20160725190136.tgz
05:25 nigelb Ah, dang.
05:26 nigelb "tar: Failed open to write on /archives/archived_builds/build-install-20160725190136.tgz (No such file or directory)"
05:26 nigelb I need to send a patch for that script.
05:26 karthik_ joined #gluster-dev
05:27 poornimag joined #gluster-dev
05:27 nbalacha ok
05:28 nbalacha nigelb, is the core still around on the system?
05:28 nigelb No, it isn't. Turns out the regression script expects certain folders to exist.
05:28 nigelb I'm going to patch it so they're created if they don't exist.
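
A minimal sketch of the kind of change nigelb describes, assuming the archiving step is a small shell script; the paths and file names below are illustrative, not the actual regression script:

    #!/bin/bash
    # Create the directories the archiving step writes into, so tar no
    # longer fails with "No such file or directory" when they are missing.
    ARCHIVE_BASE=/archives
    mkdir -p "$ARCHIVE_BASE/archived_builds" "$ARCHIVE_BASE/logs"

    # Archive the build (and any cores) as before.
    tar -czf "$ARCHIVE_BASE/archived_builds/build-install-$(date +%Y%m%d%H%M%S).tgz" \
        /build/install
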
05:29 nbalacha ok
05:29 nbalacha I'll keep an eye out for the next crash.
05:29 nbalacha btw, looks like a lot of netbsd runs have failed
05:29 nbalacha have not gone through them to see why though
05:30 ndarshan joined #gluster-dev
05:30 shubhendu_ joined #gluster-dev
05:30 nigelb I'll look today.
05:30 Apeksha joined #gluster-dev
05:35 hgowtham joined #gluster-dev
05:37 shubhendu__ joined #gluster-dev
05:37 aravindavk joined #gluster-dev
05:52 nigelb nbalacha: Thanks for pointing that out. Turns out regressions need way more than 200 mins sometimes.
05:52 pkalever joined #gluster-dev
05:52 nigelb I've upped the limit to 300 mins.
05:59 Bhaskarakiran joined #gluster-dev
06:02 sakshi joined #gluster-dev
06:03 shubhendu_ joined #gluster-dev
06:13 nigelb nbalacha: At least we should get cores from now on. I've set up the folders correctly now.
06:14 nbalacha nigelb, thanks.
06:14 nigelb nbalacha: Can I give you the same machine as last time to test?
06:14 nbalacha nigelb, I don't need it now. I will ask again once I get a core
06:14 nigelb okay.
06:14 nbalacha nbalacha, I couldn't get it to crash the last time around so I will wait
06:15 Muthu joined #gluster-dev
06:15 anoopcs nigelb, After Gerrit upgrade cherry-picking changes to different branches sets topic as bug-<#bz number>-release-<version> instead of having it as bug-<#bz number>. For example see http://review.gluster.org/#/c/15008/.
06:18 Muthu_ joined #gluster-dev
06:19 nigelb I don't think I can do anything about it. But file a bug, I'll look into it.
06:20 devyani7_ joined #gluster-dev
06:20 anoopcs Ok.
06:20 aspandey joined #gluster-dev
06:21 pur joined #gluster-dev
06:22 msvbhat joined #gluster-dev
06:22 nigelb anoopcs: How are you cherrypicking?
06:22 nigelb UI or commandline?
06:23 anoopcs nigelb, UI.
06:23 nigelb hrm, I couldn't reproduce that on review.nigelb.me.
06:23 nigelb Probably fixable then.
06:25 anoopcs nigelb, Shall I try it on review.nigelb.me?
06:25 mchangir nigelb, would breaking up the regression suite and running the parts in parallel be possible? rastar had discussed this possibility with me a while ago
06:25 nigelb anoopcs: please do.
06:26 nigelb mchangir: The bit that rastar did needs 20+ machines per run.
06:26 nigelb we don't have that many machines.
06:26 mchangir ok
06:26 nigelb But I'd love to see if we can break it up and run as separate jobs.
06:27 pranithk1 joined #gluster-dev
06:27 mchangir the component specific runs could be of good help too
06:28 anoopcs nigelb, I could reproduce it. See http://review.nigelb.me/#/c/14630/
06:29 nigelb anoopcs: uhh
06:29 nigelb I'm very confused.
06:29 nigelb I just did this http://review.nigelb.me/#/c/14564/
06:29 nigelb where that didn't happen.
06:31 nigelb What are we doing differently?
06:31 nigelb anoopcs: oh, are you taking an open review request?
06:31 anoopcs nigelb, Yes.
06:32 nigelb AH, I was taking closed.
06:32 nigelb Let me try again.
06:32 nigelb anoopcs: *still* don't see it :(
06:32 anoopcs *Ugh*
06:33 nigelb What are the steps you're following? I bet I'm doing something differently/incorrectly
06:33 skoduri joined #gluster-dev
06:34 nigelb I'm doing "Click Cherry-pick", "Enter a branch name" (I've been entering release-3.8), "Click on Cherry pick in the dialog box"
06:34 anoopcs nigelb, Click on `Cherry-Pick` button -> Specify the branch -> Edit the commit message (if needed) -> Click on `Cherry-pick change`
06:35 nigelb I bet it's a permission thing.
06:35 mchangir nigelb, I too get to see the cherry-pick effect that anoopcs is talking about ... just didn't report it and went ahead with editing the Topic to the bug-<bug-id> format
06:35 nigelb Mine is basically copying all the comments from the old one.
06:35 nigelb Yours is a new request.
06:35 nigelb Yeah, file a bug. Definitely worth digging.
06:36 anoopcs nigelb, Sure.
06:43 nbalacha magrawal, ping. I think you will need to rebase your patch to get the netbsd runs to pass
06:43 nbalacha magrawal, there was a fix made over the weekend
06:44 magrawal nbalacha, I already did it but it is still failing
06:44 nbalacha magrawal, looks like the last run was on patch 5
06:44 nbalacha magrawal, it is patch 6 that has been rebased
06:44 nbalacha I guess it should work if you retrigger it now
06:45 magrawal nbalacha,sure
06:46 magrawal nbalacha,thanks
06:46 nbalacha magrawal, np
06:50 rastar joined #gluster-dev
06:50 nigelb https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/41
06:51 nigelb kshlm: ^ one for you to review :)
06:51 nigelb mchangir: re: tests, chime in on the thread
06:53 kshlm nigelb, Done.
06:53 nigelb whoa, that was fast.
06:53 nigelb :)
06:53 kshlm :D
06:54 poornimag joined #gluster-dev
06:55 Bhaskarakiran joined #gluster-dev
06:56 asengupt joined #gluster-dev
06:57 nishanth joined #gluster-dev
07:10 shubhendu__ joined #gluster-dev
07:13 spalai joined #gluster-dev
07:30 ankitraj joined #gluster-dev
07:31 Rick_ joined #gluster-dev
07:48 poornimag rastar, can you merge http://review.gluster.org/#/c/14997/?
07:54 gem joined #gluster-dev
08:09 shubhendu__ joined #gluster-dev
08:14 csaba Manikandan: the manila commit that has the quota issue is: https://review.openstack.org/#/c/331389/14 . The glusterfs change that interrupted the workflow is: a7e04388
08:21 ppai joined #gluster-dev
08:21 shubhendu__ joined #gluster-dev
08:21 csaba Manikandan: that is, http://review.gluster.org/10415. This itself is a cherry-pick of http://review.gluster.org/10889. The manila issue can be observed in this CI log: http://logs.openstack.org/89/331389/14/check/gate-manila-tempest-dsvm-glusterfs/0273fa6/logs/screen-m-shr.txt.gz?level=ERROR
08:22 csaba Manikandan: thanks for your time.
08:23 Manikandan csaba, no problem.
08:23 atalur joined #gluster-dev
08:27 csaba Manikandan: oops, the gluster review of the original gluster patch is wrong. Correctly, it's http://review.gluster.org/10261 (as seen in the commit message of a7e04388).
08:35 shubhendu_ joined #gluster-dev
08:52 asengupt joined #gluster-dev
08:53 nigelb kshlm / ndevos - Can either of you give me access to our centos CI account?
08:54 ndevos poornimag: I left a comment in http://review.gluster.org/14997 for you, could you reply to that?\
08:54 ndevos the libgfapi-fini-hang.t fix
08:55 ndevos nigelb: you need to request an account in the CentOS CI yourself, I thought you did that already?
08:55 shubhendu__ joined #gluster-dev
08:55 ndevos nigelb: can you login over ssh on jump.ci.centos.org?
08:57 ndevos nigelb: basically, you need this ssh-config - https://wiki.centos.org/QaWiki/CI/GettingStarted#head-93173f19351f5e50a96f50445368c729869ddcc3
08:58 nigelb I was going to ask :)
08:58 ndevos nigelb: once you login through the ssh-tunnel/jump you are gluster@slave01.ci.centos.org, and there are files with shared credentials in ~/
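
For reference, a sketch of the ssh setup being described, based on the jump-host layout mentioned above; user names and the exact config on the CentOS CI wiki are assumptions and may differ:

    # Add a jump-host entry, then ssh to a slave through jump.ci.centos.org.
    # Replace the jump-host user with your own CI account name if it differs.
    cat >> ~/.ssh/config <<'EOF'
    Host jump.ci.centos.org
        User gluster
    Host slave*.ci.centos.org
        User gluster
        ProxyCommand ssh -W %h:%p jump.ci.centos.org
    EOF

    # Lands as gluster@slave01.ci.centos.org, where the shared credential
    # files live in ~/ as noted above.
    ssh slave01.ci.centos.org
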
09:00 nigelb got it, thanks.
09:02 rraja joined #gluster-dev
09:17 kshlm nigelb, Are you going to convert the centos-ci jobs to JJB?
09:19 nigelb kshlm: Once I'm done with the ones on our Jenkins server, yes.
09:19 kshlm Okay.
09:19 kshlm I have to add a new job to centos-ci.
09:20 kshlm I wanted to know if I should wait and add a jjb directly.
09:22 ndevos kshlm: dont wait, get it in there :)
09:22 nigelb ^^
09:22 nigelb that.
09:22 nigelb I'll take care of converting later.
09:22 nigelb I was talking to the ceph folks the other day
09:22 nigelb https://github.com/ceph/ceph-build/
09:23 nigelb They have a really nice structure for their jenkins jobs which I'll probably copy.
09:26 karthik_ joined #gluster-dev
09:39 kotreshhr joined #gluster-dev
09:40 jiffin joined #gluster-dev
09:47 atinm joined #gluster-dev
09:47 shubhendu_ joined #gluster-dev
09:49 shaunm joined #gluster-dev
09:50 jiffin joined #gluster-dev
10:08 shubhendu_ joined #gluster-dev
10:17 atinm joined #gluster-dev
10:22 ashiq joined #gluster-dev
10:35 asengupt joined #gluster-dev
10:36 nigelb poornimag: Are you parsing the Jenkins HTML to find the failures for failed-tests.py?
10:36 poornimag nigelb, yup
10:36 nigelb Is the Jenkins API unsusable for this?
10:38 poornimag nigelb, We can use the Jenkins API for certain things like build id, job name etc. But to really identify the failed test cases,
10:38 poornimag it's not possible I guess
10:39 nigelb ahh.
10:39 nigelb okay.
10:39 poornimag as we output the test results in the console as plain text
10:40 poornimag I guess we could explore more on that, but we might at least have to change the test case output format to be able to use the Jenkins APIs
10:41 nigelb I get very worried when we have to scrape things
10:41 nigelb It's susceptible to breaking easily.
10:41 msvbhat joined #gluster-dev
10:41 nigelb Let me see if I can give you a patch.
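
A sketch of what such a patch might do: instead of scraping the rendered Jenkins HTML, pull the build status from the JSON API and the raw console text from the stable consoleText endpoint, then filter for the test summary. The job name, build number, and grep pattern here are illustrative:

    #!/bin/bash
    JOB=rackspace-netbsd7-regression-triggered
    BUILD=18323
    BASE="https://build.gluster.org/job/$JOB/$BUILD"

    # Build metadata (result, timestamp) is available as structured JSON.
    curl -s "$BASE/api/json?tree=result,timestamp"

    # The names of failed .t files only exist in the console output, so
    # fetch the plain-text console rather than parsing the HTML page.
    curl -s "$BASE/consoleText" | grep -E 'Failed tests|\.t\b'
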
10:43 kkeithley ndevos: don't know if you saw it, but I pushed a pair of build.sh scripts (and .spec files) for building (lib)ntirpc and nfs-ganesha in the CentOS CI.  Please take a look when you have a few minutes
10:45 poornimag nigelb, yup sure. But considering the effort needed, parsing worked well enough.
10:46 ndevos kkeithley: thanks, no, I did not see that yet
10:55 anoopcs ndevos, Is this somewhere near to your expectation regarding the script for running glusterfs-coreutils tests in CentOS CI? http://termbin.com/uc46
10:55 ashiq joined #gluster-dev
11:05 kotreshhr left #gluster-dev
11:05 Saravanakmr joined #gluster-dev
11:21 mchangir joined #gluster-dev
11:21 kkeithley Gluster Community Bug Triage in ~40min in #gluster-meeting
11:25 ira joined #gluster-dev
11:31 atinm asengupt, refreshed http://review.gluster.org/#/c/15005
11:36 asengupt atinm, check comments
11:38 nigelb poornimag: can you send me a sample output of the failed tests script? I'm not sure if I'm seeing the right thing.
11:44 nigelb I see the summary at the end, but should I be seeing a lot of the console output too?
11:51 skoduri joined #gluster-dev
11:54 atinm asengupt, replied
11:54 atinm asengupt, however I see the same snapshot test failing now
11:55 Muthu_ REMINDER: Gluster Community Bug Triage Meeting in #gluster-meeting in ~ 5minutes from now.
11:57 ndevos anoopcs: something like that, but it should fail (exit != 0) when run-tests.sh has some problems
11:57 ndevos anoopcs: also, yum-config-manager comes from yum-utils :)
11:58 ndevos anoopcs: I'll modify it a little and will run some tests with it, after I'm done with some meetings
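
Roughly what the resulting CI job script could look like, combining the points above: install yum-utils (for yum-config-manager) and the build dependencies, and let a failing run-tests.sh fail the job. The package list and repository setup here are assumptions, not the final script:

    #!/bin/bash
    set -e   # any failing step, including run-tests.sh, fails the Jenkins job

    # yum-config-manager comes from yum-utils; bats is assumed to come
    # from EPEL on CentOS 7.
    yum -y install epel-release yum-utils
    yum -y install git gcc make autoconf automake libtool bats

    git clone https://github.com/gluster/glusterfs-coreutils.git
    cd glusterfs-coreutils
    ./autogen.sh
    ./configure
    make

    # The exit status of the test suite becomes the exit status of the job.
    ./run-tests.sh
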
11:58 spalai left #gluster-dev
12:00 Saravanakmr joined #gluster-dev
12:02 rastar joined #gluster-dev
12:03 shubhendu joined #gluster-dev
12:04 shubhendu joined #gluster-dev
12:07 mchangir joined #gluster-dev
12:10 kdhananjay joined #gluster-dev
12:16 atalur joined #gluster-dev
12:18 ppai joined #gluster-dev
12:30 atalur joined #gluster-dev
12:31 Manikandan Muthu_++, thanks for hosting
12:31 glusterbot Manikandan: Muthu_'s karma is now 2
12:31 hgowtham Muthu_++
12:31 glusterbot hgowtham: Muthu_'s karma is now 3
12:32 ndevos Muthu_++ thanks!
12:32 glusterbot ndevos: Muthu_'s karma is now 4
12:36 Apeksha joined #gluster-dev
12:37 ndevos kkeithley: do you remember why you closed https://bugzilla.redhat.com/show_bug.cgi?id=1302149 ?
12:37 glusterbot Bug 1302149: unspecified, unspecified, ---, ndevos, CLOSED CURRENTRELEASE, Add support for SEEK_DATA/SEEK_HOLE to stripe
12:44 kkeithley Muthu_++
12:44 glusterbot kkeithley: Muthu_'s karma is now 5
12:45 kkeithley ndevos: bugs with incorrect status---   Bug 1302149 should be CLOSED, v3.8.1 contains a fix **
12:45 glusterbot kkeithley: status-'s karma is now -1
12:45 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1302149 unspecified, unspecified, ---, ndevos, CLOSED CURRENTRELEASE, Add support for SEEK_DATA/SEEK_HOLE to stripe
12:45 kkeithley status++
12:45 glusterbot kkeithley: status's karma is now 1
12:45 kkeithley status--
12:45 glusterbot kkeithley: status's karma is now 0
12:46 ndevos kkeithley: weird, there was no patch sent for it...
12:46 kkeithley reopened
12:47 kkeithley don't know why I didn't see that no patch was posted or merged
12:48 ndevos oh, I think it was in the email because it mentions the bug in the commit messages of http://review.gluster.org/13294
12:49 v12aml left #gluster-dev
13:03 shubhendu joined #gluster-dev
13:17 ashiq joined #gluster-dev
13:18 anoopcs ndevos, Thanks... My bad..I forgot to check the return status of test script..
13:21 shubhendu joined #gluster-dev
13:24 julim joined #gluster-dev
13:31 nigelb Is it possible to submit a review that's rebased on top of an open review request?
13:31 nigelb Or do I drive the existing review request to completion first?
13:38 shubhendu joined #gluster-dev
13:38 dlambrig_ joined #gluster-dev
13:39 ndevos nigelb: you can do that, you'd use "git review -d $PATCH_ID" then cherry-pick your new patch on top of that and "git review -t bug-12345 -R master"
13:40 shubhendu joined #gluster-dev
13:40 ndevos nigelb: git-review is a little smarter than ./rfc.sh and does not abort if the base patch was not modified
13:41 ndevos nigelb: oh, git-review needs a remote called "gerrit", or you can pass one like "-r origin"
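
Putting ndevos' steps together, a sketch of the full sequence for stacking a new change on top of an open review; the change number, commit, and topic are placeholders:

    # Check out the open change the new patch should sit on top of
    git review -d 12345                 # placeholder change number

    # Add the new work on top of it
    git cherry-pick <new-commit>        # or commit the new change directly

    # Push without rebasing/modifying the base patch; use -r origin if
    # there is no remote named "gerrit"
    git review -t bug-12345 -R master
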
13:50 nigelb ndevos: Noted, thanks!
13:50 pkalever left #gluster-dev
14:00 wushudoin joined #gluster-dev
14:00 wushudoin joined #gluster-dev
14:02 dlambrig_ left #gluster-dev
14:03 ndevos anoopcs: run #1, cross your fingers! https://ci.centos.org/view/Gluster/job/gluster_coreutils/1/console
14:06 ndevos oh, needs libtool too, and maybe others?
14:10 spalai joined #gluster-dev
14:12 Saravanakmr joined #gluster-dev
14:22 ndevos anoopcs: hmm, it complains automake-1.15 is needed, but CentOS-7 comes with automake-1.13...
14:23 nbalacha joined #gluster-dev
14:45 hagarth joined #gluster-dev
14:47 pkalever joined #gluster-dev
15:02 anoopcs ndevos, Really sorry...I forgot to install basic build requirements like autotools
15:05 anoopcs ndevos, Regarding automake version, what shall we do now?
15:12 pkalever joined #gluster-dev
15:22 dlambrig joined #gluster-dev
15:29 ankitraj joined #gluster-dev
15:30 pkalever joined #gluster-dev
15:33 anoopcs ndevos, Ah..I saw the issue you raised..
15:36 Manikandan joined #gluster-dev
15:37 ndevos anoopcs: maybe the "make dist" on a recent Fedora makes it possible to do builds on CentOS, I guess that is why I did not notice it before
15:40 anoopcs ndevos, I don't understand that logic. How come then ./configure passed while building it for CentOS Storage SIG?
15:41 pkalever left #gluster-dev
15:43 ndevos anoopcs: I didn't check how it is done, if it is only for the automake part, it may get skipped by ./configure in case some files have been generated already
15:43 ndevos anoopcs: the package is in the CentOS Storage SIG already, so somehow it was building just fine
15:46 anoopcs ndevos, Yeah..got it. We do not run autogen.sh while building packages.
15:46 anoopcs for glusterfs-coreutils
15:48 anoopcs we begin by running configure on the tarball, which was prepared on a system with automake version >= 1.15
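
In other words, the automake requirement only bites when the build system has to be regenerated; building from a pre-generated dist tarball avoids it. A rough illustration of the two paths, using the versions mentioned above:

    # From a git checkout: autogen.sh wants the automake the files were
    # generated with (1.15), which CentOS 7 (automake 1.13) does not have.
    ./autogen.sh && ./configure && make

    # From a dist tarball prepared on a newer system (e.g. Fedora):
    # configure and the Makefile.in files are already generated, so
    # automake is not needed on the build host.
    tar -xzf glusterfs-coreutils-*.tar.gz
    cd glusterfs-coreutils-*/
    ./configure && make
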
16:01 atinm joined #gluster-dev
16:08 glustin joined #gluster-dev
16:10 Saravanakmr joined #gluster-dev
16:11 rraja joined #gluster-dev
16:23 msvbhat joined #gluster-dev
16:26 mchangir joined #gluster-dev
16:31 ndevos kkeithley: *so* close! rpmbuild was missing for the src.rpm generation...
16:31 ndevos https://ci.centos.org/view/NFS-Ganesha/job/nfs-ganesha_build-libntirpc/2/console is currently running
16:32 ndevos oh, and now some rsync problem, but building the RPMs works \o/
16:38 pkalever joined #gluster-dev
16:56 ndevos kkeithley: bstinson from the centos team fixed the rsync permissions on the server, and it should be working now
16:56 ndevos oh, wait, thats nfs-ganesha... lets move to the right irc channel
17:08 ndevos anoopcs: hmm, now "bats" is still missing, I guess that is needed to run the tests?
17:09 * anoopcs regrets.
17:12 anoopcs ndevos, Yes..we need
17:12 ndevos anoopcs: lets see if this works... https://ci.centos.org/view/Gluster/job/gluster_coreutils/4/console
17:15 * anoopcs again with fingers crossed.
17:16 anoopcs ndevos, One more fix may be needed related to IP
17:16 anoopcs or hostname.
17:18 anoopcs https://ci.centos.org/view/Gluster/job/gluster_coreutils/4/console <- This failure will be most probably due to that reason.
17:18 ndevos anoopcs: ah, right, do you know what to change?
17:18 anoopcs ndevos, It uses IPv6 :-)
17:19 anoopcs ::1
17:19 anoopcs Let me raise a PR quickly
17:20 ndevos anoopcs: ah, yes, I see... no idea if IPv6 is available in the CI, it surely isn't exclusively IPv6
17:22 anoopcs ndevos, hostname --fqdn would be fine, right?
17:31 ndevos anoopcs: I guess so, or just "localhost"?
17:32 atinm joined #gluster-dev
17:32 ndevos anoopcs: oh, wait, creating a volume with localhost:/path/to/brick tends to fail
17:32 ndevos interesting though that ::1 is acceptable, but 127.0.0.1 is not ;-)
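
A sketch of the kind of adjustment being discussed for the coreutils test setup: address the brick by the machine's resolvable name instead of the IPv6 loopback. The variable and brick path are illustrative:

    # Before: HOST=::1 (IPv6 loopback), which fails in the CentOS CI.
    # glusterd also rejects localhost and 127.0.0.1 for volume creation,
    # so use the FQDN instead.
    HOST=$(hostname --fqdn)

    gluster volume create test "$HOST":/bricks/test/b1 force
    gluster volume start test
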
17:33 ndevos atinm: ^ is a bug in glusterd, I think
17:34 atinm ndevos, I just logged in, so missing the entire context
17:34 anoopcs Our IPv6 support is not good enough. I think we reverted a patch which fixed IPv6 issues some time back.
17:34 atinm anoopcs, right
17:35 atinm anoopcs, it is still broken
17:35 ndevos atinm: those two lines are all you need to know :)
17:36 ndevos atinm: glusterfs-coreutils has a test-suite that configures bricks on ::1, the IPv6 equivalent of localhost
17:36 ndevos I think ::1 should not be allowed, just like localhost and 127.0.0.1 are not
17:37 atinm ndevos, mind to file a bug and then we will look at it?
17:38 dlambrig left #gluster-dev
17:40 ndevos well, maybe it is correct in the latest versions after all
17:41 ndevos facebook tends to test with older releases, and they had the ::1 address working, but it failed on the nightly RPMs in the CentOS CO
17:41 ndevos *CI
17:50 atinm ndevos, vol create still goes through with ::1
17:50 atinm ndevos, in latest master
17:52 ndevos atinm: oh, then I guess there is an issue with IPv6 in the CentOS CI, otherwise the test should have succeeded
17:53 atinm ndevos, but the catch is volume start fails :-/
17:54 ndarshan joined #gluster-dev
17:55 ndevos atinm++ ah, so I guess you're reporting a bug for that then ;-)
17:55 glusterbot ndevos: atinm's karma is now 62
17:55 ndevos anoopcs: you can follow progress on https://ci.centos.org/view/Gluster/job/gluster_coreutils/5/console
17:56 pkalever left #gluster-dev
17:57 anoopcs ndevos, Yay...Finally...
17:58 shubhendu_ joined #gluster-dev
17:58 ndevos anoopcs++ w00t! it passed :D
17:58 glusterbot ndevos: anoopcs's karma is now 36
17:58 anoopcs ndevos++ for all you help.
17:58 glusterbot anoopcs: ndevos's karma is now 291
17:58 anoopcs s/you/your
18:06 ndevos anoopcs: and a 2nd run with the scripts in the real/upstream tests git repo works too: https://ci.centos.org/view/Gluster/job/gluster_coreutils/6/console
18:07 anoopcs ndevos, Cool.
18:08 ndevos anoopcs: lets check tomorrow if the test run again with the nightly RPMs :)
18:08 anoopcs ndevos, Ok. Once again thanks for triggering the jobs..
18:09 ndevos anoopcs: np! this should come a long way in making gfapi better tested and more stable
18:11 atinm joined #gluster-dev
18:30 shubhendu_ joined #gluster-dev
18:37 ankitraj joined #gluster-dev
18:53 penguinRaider joined #gluster-dev
19:07 hchiramm joined #gluster-dev
19:15 shubhendu__ joined #gluster-dev
20:07 ashiq joined #gluster-dev
21:37 hchiramm joined #gluster-dev
22:37 hchiramm joined #gluster-dev
23:38 hchiramm joined #gluster-dev
