
IRC log for #gluster, 2015-06-12


All times shown according to UTC.

Time Nick Message
00:43 gildub joined #gluster
00:49 aaronott joined #gluster
01:53 Alex left #gluster
02:14 dusmant joined #gluster
02:24 harish joined #gluster
02:36 kdhananjay joined #gluster
02:51 smohan joined #gluster
02:54 RameshN joined #gluster
02:57 kanagaraj joined #gluster
02:58 julim joined #gluster
03:08 overclk joined #gluster
03:29 badone_ joined #gluster
03:30 bharata-rao joined #gluster
03:37 itisravi joined #gluster
03:44 atinm joined #gluster
03:49 atinm joined #gluster
03:49 soumya joined #gluster
03:52 nishanth joined #gluster
03:54 TheSeven joined #gluster
03:58 shubhendu__ joined #gluster
04:09 ramteid joined #gluster
04:28 meghanam joined #gluster
04:29 poornimag joined #gluster
04:29 spalai joined #gluster
04:30 spalai left #gluster
04:33 nbalacha joined #gluster
04:33 sakshi joined #gluster
04:34 smohan1 joined #gluster
04:35 atalur joined #gluster
04:45 ppai joined #gluster
04:49 pppp joined #gluster
04:53 zeittunnel joined #gluster
04:53 kdhananjay joined #gluster
04:54 gem joined #gluster
04:55 tdasilva joined #gluster
05:02 DV__ joined #gluster
05:03 spandit joined #gluster
05:13 yazhini joined #gluster
05:17 hgowtham joined #gluster
05:20 Manikandan joined #gluster
05:20 Manikandan_ joined #gluster
05:22 vimal joined #gluster
05:23 aravindavk joined #gluster
05:24 Bhaskarakiran joined #gluster
05:26 hagarth joined #gluster
05:28 ndarshan joined #gluster
05:34 kshlm joined #gluster
05:40 Manikandan_ joined #gluster
05:44 glusterbot News from newglusterbugs: [Bug 1231040] gf_log_callingfn's output make me dizzy <https://bugzilla.redhat.com/show_bug.cgi?id=1231040>
05:49 zeittunnel joined #gluster
05:51 Bhaskarakiran_ joined #gluster
05:56 shubhendu__ joined #gluster
05:56 jiffin joined #gluster
05:59 schandra joined #gluster
06:01 poornimag joined #gluster
06:02 dusmantkp_ joined #gluster
06:08 Bhaskarakiran_ joined #gluster
06:16 Philambdo joined #gluster
06:18 jcastill1 joined #gluster
06:20 haomaiwa_ joined #gluster
06:23 jcastillo joined #gluster
06:24 glusterbot News from resolvedglusterbugs: [Bug 1152956] duplicate entries of files listed in the mount point after renames <https://bugzilla.redhat.com/show_bug.cgi?id=1152956>
06:30 spalai joined #gluster
06:31 kaushal_ joined #gluster
06:37 jtux joined #gluster
06:43 poornimag joined #gluster
06:46 hchiramm joined #gluster
06:50 raghu joined #gluster
06:53 spalai joined #gluster
06:53 deepakcs joined #gluster
06:58 nangthang joined #gluster
06:58 [Enrico] joined #gluster
07:00 DV__ joined #gluster
07:01 elico joined #gluster
07:02 deniszh joined #gluster
07:05 al joined #gluster
07:08 jbrooks joined #gluster
07:11 ashiq joined #gluster
07:15 Trefex joined #gluster
07:17 saurabh_ joined #gluster
07:17 rafi joined #gluster
07:24 glusterbot News from resolvedglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
07:33 DV__ joined #gluster
07:49 anrao joined #gluster
08:07 liquidat joined #gluster
08:24 appelgriebsch joined #gluster
08:26 Trefex hi all, i had mounted gluster via fuse and started a fast and long transfer to it
08:26 Trefex now the mountpoint seems to have vanished, and I get a weird error
08:26 Trefex df: ‘/glusterfs/live’: Transport endpoint is not connected
08:26 Trefex could anybody help me figure out how to start debugging? I'm new to gluster
08:29 nbalacha joined #gluster
08:31 gildub joined #gluster
08:36 rjoseph joined #gluster
08:37 T0aD joined #gluster
08:44 glusterbot News from newglusterbugs: [Bug 1231132] Detect and send ENOTSUP if upcall feature is not enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1231132>
08:48 spot joined #gluster
08:49 frakt joined #gluster
08:51 social joined #gluster
08:52 soumya joined #gluster
08:52 msvbhat Trefex: I assume you're doing a FUSE (glusterfs) mount? Can you check the logs? Also if the client process is running?
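
[A minimal sketch of those checks, assuming the FUSE mount at /glusterfs/live from above; the client log file name is derived from the mount path, so treat it as an assumption:]

    # client-side log for a mount at /glusterfs/live
    less /var/log/glusterfs/glusterfs-live.log
    # is the glusterfs client process still running, and is the mount still listed?
    ps aux | grep '[g]lusterfs'
    grep fuse.glusterfs /proc/mounts
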
08:54 tessier joined #gluster
08:54 Trefex msvbhat: it seems that gluster crashed on one of the servers and the endpoint became unavailable
08:55 Trefex [1357992.305305] BUG: soft lockup - CPU#22 stuck for 22s! [rcuos/16:51]
08:55 Trefex was found on one of the gluster servers
09:01 msvbhat So does restarting the server and re-mounting the client work?
09:01 msvbhat And as for the crash, can you file a bug report if there isn't one for it already?
09:01 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:02 msvbhat Trefex: Which version of gluster is it, BTW?
09:04 hagarth joined #gluster
09:06 baoboa joined #gluster
09:35 [Enrico] joined #gluster
09:35 gem joined #gluster
09:43 Trefex msvbhat: mhhh latest from my yum, it's 3.6.3
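
[For reference, two quick ways to confirm the installed version on an RPM-based system:]

    glusterfs --version
    rpm -q glusterfs-server glusterfs-fuse
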
09:44 ghenry joined #gluster
09:54 xavih joined #gluster
09:54 malevolent joined #gluster
10:13 autoditac joined #gluster
10:15 nsoffer joined #gluster
10:16 Manikandan_ joined #gluster
10:22 DV__ joined #gluster
10:27 rjoseph joined #gluster
10:44 edualbus joined #gluster
10:45 glusterbot News from newglusterbugs: [Bug 1231171] [RFE]- How to find total number of glusterfs client mounts? <https://bugzilla.redhat.com/show_bug.cgi?id=1231171>
10:45 glusterbot News from newglusterbugs: [Bug 1231175] [RFE]- How to find total number of glusterfs samba client mounts? <https://bugzilla.redhat.com/show_bug.cgi?id=1231175>
10:45 Manikandan_ overclk++
10:45 glusterbot Manikandan_: overclk's karma is now 2
11:00 p8952 joined #gluster
11:01 arcolife joined #gluster
11:06 jcastill1 joined #gluster
11:14 jcastillo joined #gluster
11:18 DV joined #gluster
11:26 Bhaskarakiran joined #gluster
11:27 kanagaraj joined #gluster
11:27 Bhaskarakiran_ joined #gluster
11:28 nangthang joined #gluster
11:32 shubhendu__ joined #gluster
11:33 jcastill1 joined #gluster
11:35 rjoseph joined #gluster
11:38 jcastillo joined #gluster
11:45 glusterbot News from newglusterbugs: [Bug 1231195] rm -rf throws 'Is a directory' error for few directories while add-brick operation is done <https://bugzilla.redhat.com/show_bug.cgi?id=1231195>
11:45 glusterbot News from newglusterbugs: [Bug 1231202] [RFE]- How to find total number of glusterfs nfs client mounts? <https://bugzilla.redhat.com/show_bug.cgi?id=1231202>
11:45 glusterbot News from newglusterbugs: [Bug 1231205] [geo-rep]: RHEL7.1: rsync should be made dependent package for geo-replication <https://bugzilla.redhat.com/show_bug.cgi?id=1231205>
11:45 glusterbot News from newglusterbugs: [Bug 1231207] [RFE]- How to find total number of glusterfs fuse client mounts? <https://bugzilla.redhat.com/show_bug.cgi?id=1231207>
11:49 nangthang joined #gluster
11:53 mdavidson joined #gluster
11:55 LebedevRI joined #gluster
11:59 zeittunnel joined #gluster
12:03 vovcia hmm utimensat says EOPNOTSUPP interesting
12:03 ira joined #gluster
12:06 bene2 joined #gluster
12:07 aaronott joined #gluster
12:09 anrao joined #gluster
12:10 spalai left #gluster
12:16 shubhendu__ joined #gluster
12:18 B21956 joined #gluster
12:19 julim joined #gluster
12:37 stickyboy joined #gluster
12:45 chirino joined #gluster
12:54 hagarth joined #gluster
13:01 smohan joined #gluster
13:03 veonik_ joined #gluster
13:06 Sjors hmm
13:06 Sjors I just got a gluster daemon crash
13:06 Sjors during a self-heal, by the way
13:10 Folken is there a ubuntu ppa for the latest version somewhere?
13:12 arcolife joined #gluster
13:13 hagarth joined #gluster
13:22 bene2 joined #gluster
13:25 d-fence joined #gluster
13:26 ndevos ~ppa | Folken
13:26 glusterbot Folken: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
13:26 ndevos Folken: and kkeithley is trying to figure out how to update packages there...
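
[A sketch of what consuming one of those PPAs looks like on Ubuntu, using the 3.6 series as the example; the PPA name follows the pattern visible later in the log:]

    sudo add-apt-repository ppa:gluster/glusterfs-3.6
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client
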
13:27 georgeh-LT2 joined #gluster
13:27 chirino joined #gluster
13:28 ndevos Sjors: I think there have been some crashes reported recently, bug 1229139 would be the last one
13:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1229139 medium, medium, ---, anekkunt, POST , glusterd: glusterd crashing if you run re-balance and vol status command parallely.
13:29 ndevos I'm not sure how different it would be if you replace re-balance with self-heal
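
[If glusterd does crash, a backtrace makes it much easier to match against reports like bug 1229139; a minimal sketch, assuming core dumps are enabled, with a hypothetical core file path:]

    # print a backtrace from the core non-interactively
    gdb -batch -ex bt /usr/sbin/glusterd /var/core/core.glusterd.1234
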
13:33 ira joined #gluster
13:38 RameshN joined #gluster
13:39 brahman joined #gluster
13:39 kovshenin joined #gluster
13:40 brahman Hi, new to gluster. I am reading the docs and do not understand whether each physical device (/dev/sdX) is a brick or whether I can have more than 1 brick on each physical device.
13:43 dgandhi joined #gluster
13:43 brahman http://gluster.readthedocs.org/en/latest/Administrator%20Guide/glossary/
13:43 brahman Found what I was after.
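
[For anyone else landing on brahman's question: a brick is a directory on a server, not a device, so one physical device can carry several bricks; a minimal sketch with a hypothetical host and paths:]

    # one filesystem on one device, two bricks on it
    mkfs.xfs /dev/sdb1
    mount /dev/sdb1 /data
    mkdir -p /data/brick1 /data/brick2
    # a two-brick distribute volume built from a single device
    gluster volume create vol1 server1:/data/brick1 server1:/data/brick2
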
13:48 Slashman joined #gluster
13:48 theron joined #gluster
13:49 semiosis ndevos: kkeithley: i've been remiss lately but want to help get you set up to publish to the PPAs.  I have a vagrant box that I need to send you which has all the tools & some scripts to facilitate the process.
13:50 semiosis really busy with work, travel, and moving houses, fyi
13:51 hamiller joined #gluster
13:53 atrius joined #gluster
13:53 kkeithley semiosis: I can publish to the ppas. Likewise, I've been really busy with $dayjob. I'm actually mainly interested in which version of gluster I should be building for which Ubuntu release?
13:54 semiosis my policy has been to publish all the current gluster releases for the latest ubuntu LTS and the latest interim (non-LTS) releases
13:54 kkeithley there doesn't seem to be a way to have distinct 3.5.x, 3.6.x, and 3.7.x .debs for each of precise, trusty, utopic, and vivid
13:54 semiosis but people always want builds for old stuff.....
13:55 semiosis kkeithley: you need separate PPAs for the gluster releases
13:56 kkeithley yup, the launchpad.net/~gluster has release-3.5, release-3.6, and release-3.7 ppas. (sorry, I know you know that.)
13:56 semiosis but in each PPA you can publish the same gluster version for different ubuntu releases.  best way is to put the ubuntu release name (trusty, for ex) in the changelog in *both* the version name and the release name.
13:56 kkeithley yeah, I believe that's what I did for 3.6.3, e.g.
13:56 kkeithley But launchpad only shows the last build I did.
13:57 semiosis oh yeah, it will throw out the old version when you upload a new one.  it only keeps the latest :)
13:57 semiosis that's just the way it is.
13:58 kkeithley then I don't understand how to achieve "publish the same gluster version for different ubuntu releases"
13:58 semiosis i always tell people they need to copy the debs either to their own APT repo or to their own PPA, so they don't get forced into upgrading. that's pretty standard for using PPAs in production
13:58 bennyturns joined #gluster
13:59 kkeithley don't we really need/want release-3.6-precise, release-3.6-trusty, release-3.6-utopic, and release-3.6-vivid ppas?  And similar for release-3.5 and release-3.7?
13:59 semiosis kkeithley: check this out, https://github.com/semiosis/glusterfs-debian/blob/utopic-glusterfs-3.6/debian/changelog#L1
14:00 semiosis notice the line says utopic in the version string, but trusty in the release... that is bad.  those need to match
14:00 kkeithley yeah, that's an awshit.
14:00 semiosis so if those match, you can put the same version for different ubuntus in one ppa
14:01 semiosis launchpad keeps the highest version in each release
14:01 semiosis i have a script in the vagrant box i'm going to send you that does this automatically
14:02 kkeithley okay, well, delta that mistake, I'll pretend I understand and believe that in the release-3.6 ppa there are packages for trusty, utopic, and vivid
14:03 semiosis check this one out for proof :) https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.4/+packages
14:03 semiosis version string & Series column have matching release names... the same version for all three ubuntus, in one ppa
14:03 kkeithley yeah, okay
14:05 kkeithley starting to make sense
14:08 jcastill1 joined #gluster
14:09 * kkeithley wonders where my 3.6.3 build for trusty disappeared to
14:10 kkeithley I'll revisit a bit later when I have time to work on this
14:10 semiosis utopic > trusty, in the version string, so launchpad threw it out, since the utopic version string was tagged for the trusty release... that mistake
14:13 jcastillo joined #gluster
14:13 kkeithley the last build for trusty that I submitted had "glusterfs (3.6.3-ubuntu1~trusty7) trusty; urgency=medium". You're saying that the build with "glusterfs (3.6.3-ubuntu1~utopic1) trusty; urgency=medium" blew that out of the water?
14:15 semiosis correct.  doesn't that make sense?  it only keeps the latest version, for each release.
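
[Hypothetical debian/changelog first lines illustrating the rule semiosis describes: the Ubuntu release name must match in both the version string and the series field, since launchpad keeps only the highest version per series:]

    glusterfs (3.6.3-ubuntu1~trusty8) trusty; urgency=medium    <- version and series match: ok
    glusterfs (3.6.3-ubuntu1~utopic1) utopic; urgency=medium    <- version and series match: ok
    glusterfs (3.6.3-ubuntu1~utopic1) trusty; urgency=medium    <- mismatch: clobbers the trusty build
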
14:16 glusterbot News from newglusterbugs: [Bug 1231264] DHT : for many operation directory/file path is '(null)' in brick log <https://bugzilla.redhat.com/show_bug.cgi?id=1231264>
14:16 glusterbot News from newglusterbugs: [Bug 1231265] cluster.nufa :- got errors in mount log( 1-nufa-limit-dht: invalid argument: loc->parent) and brick log(setting xattrs failed) <https://bugzilla.redhat.com/show_bug.cgi?id=1231265>
14:16 glusterbot News from newglusterbugs: [Bug 1231257] nfs-ganesha: trying to bring up nfs-ganesha on three node shows error although pcs status and ganesha process on all three nodes <https://bugzilla.redhat.com/show_bug.cgi?id=1231257>
14:16 Manikandan joined #gluster
14:16 kkeithley yeah, it makes sense now, now that I've seen the mistake on the utopic build.
14:16 elico joined #gluster
14:17 harish joined #gluster
14:18 Trefex joined #gluster
14:19 lyang0 joined #gluster
14:21 semiosis kkeithley: i've made that mistake so many times, and had to bump the package patch number to fix it :/
14:21 semiosis but the script will save you
14:22 kkeithley yup, I'll redo them when I get the chance. Kinda busy atm. Script will be nice
14:22 semiosis what email should I send to?
14:22 kkeithley kaleb@redhat.com
14:23 woakes07004 joined #gluster
14:25 RameshN joined #gluster
14:26 kedmison left #gluster
14:29 woakes070048 joined #gluster
14:31 Slashman joined #gluster
14:32 Gill joined #gluster
14:32 Bardack hum, having big issues on my gluster prod atm
14:33 Bardack some volumes not answering anymore, i stopped one and started it: volume start: jira2_prod: failed: Commit failed on localhost
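
[A rough sketch of the first places to look for a "Commit failed on localhost", using the stock glusterd log location of that era:]

    gluster peer status
    gluster volume status jira2_prod
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
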
14:34 elico joined #gluster
14:35 marcoceppi joined #gluster
14:35 marcoceppi joined #gluster
14:36 theron_ joined #gluster
14:39 cholcombe joined #gluster
14:46 elico joined #gluster
14:51 soumya joined #gluster
14:53 jbrooks joined #gluster
14:54 DV joined #gluster
14:56 Bardack anybody have an idea about this?
15:02 elico left #gluster
15:06 RameshN joined #gluster
15:22 nage joined #gluster
15:36 Trefex hi guys
15:36 Trefex it seems my glusterfs fuse is crashing all the time
15:36 Trefex under heavy load
15:36 Trefex any ideas how to start debugging?
15:37 hchiramm_ joined #gluster
15:40 Trefex this is the log extract that I got from the control node http://pastebin.com/GvHGKt1L
15:40 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:41 bennyturns joined #gluster
15:43 Trefex apologies
15:43 Trefex here is a log of one of the gluster clients
15:43 Trefex http://paste.ubuntu.com/11702689/
15:43 Trefex it seems I'm being hit by some kind of BUG: soft lockup
15:43 Trefex This setup is on ZOL CentOS 7.1
15:45 haomaiwa_ joined #gluster
15:51 jcastill1 joined #gluster
15:53 cholcombe joined #gluster
15:54 ndarshan joined #gluster
15:56 jcastillo joined #gluster
16:03 sripathi joined #gluster
16:08 rwheeler joined #gluster
16:12 shubhendu__ joined #gluster
16:22 JoeJulian Trefex: See https://raw.githubusercontent.com/torvalds​/linux/master/Documentation/workqueue.txt for debugging kworker hogging. (down at the bottom of the page)
16:23 Trefex JoeJulian: not sure i understand that part :)
16:23 Trefex JoeJulian: I get this error when transferring large amounts to the gluster fast (700-800 MB/s)
16:24 JoeJulian We don't know, yet, if it's a gluster problem or something else. You'll have to find out from that kworker process.
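
[The debugging recipe at the bottom of that workqueue.txt amounts to roughly this (as root, with debugfs mounted at /sys/kernel/debug):]

    # trace which work functions get queued, to identify the hog
    echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
    cat /sys/kernel/debug/tracing/trace_pipe > out.txt
    # wait a few seconds, interrupt, then inspect out.txt for the busy work items
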
16:25 eljrax 0.000073 open("file13927", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
16:25 eljrax 33.014279 fcntl(1, F_GETFD)         = 0
16:25 Trefex JoeJulian: ok, i'll do my best, interestingly, htop seems to suffer
16:25 eljrax That's an strace -c of echo "..." >> file13927
16:26 eljrax There are "only" 14k files in that directory, and gluster's taking half a minute to do that operation
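
[Per-call relative timestamps like the ones above are what strace -r prints; a sketch of reproducing the measurement, with a hypothetical FUSE mount path:]

    cd /mnt/gluster/testdir          # hypothetical mount point
    strace -r -e trace=open,openat,write,fcntl sh -c 'echo "..." >> file13927'
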
16:27 JoeJulian Wow! That's not normal.
16:27 JoeJulian That shouldn't even matter how many files are in that directory.
16:28 eljrax Must be something weird on that node, it's reasonable on others. A directory listing takes ~2 seconds, but echo "..." > file is fast
16:30 Trefex JoeJulian: when i start the transfer now, the controller node is extremely slow
16:31 Trefex http://paste.ubuntu.com/11702920/
16:31 Trefex here is an excerpt from dstat on the controller during transfer
16:31 JoeJulian I don't know what a controller node is.
16:31 Trefex mhhh it's just a node that exposes the mount points and accesses gluster via fuse
16:32 JoeJulian So it's both a server and a client.
16:32 Trefex mhhh i guess so ye, it exposes the gluster to the users (server) and writes the files to the gluster (client)
16:33 JoeJulian @glossary
16:33 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
16:33 Trefex ok so controller = master
16:35 JoeJulian Alrighty then... Well I've not seen that behavior with georeplication before. I've not had a lot of experience with ZoL so maybe there's something there.
16:35 JoeJulian But really, until you can debug why the kernel process is hanging it's all just guessing.
16:39 eljrax Seems like I get random stalls in gluster
16:41 eljrax I see a lot of these in strace of the server: lgetxattr("/run/gluster/snaps/5490b17ebcc9435e87a59e19d39a0914/brick1/brick1/file1", ....
16:41 eljrax I don't have any active snapshots as far as `gluster snapshot list` is concerned. But I do see an LVM snapshot
16:41 anil joined #gluster
16:41 eljrax /dev/mapper/vgglus1-5490b17ebcc9435e87a59e19d39a0914_0 on /run/gluster/snaps/5490b17ebcc9435e87a59e19d39a0914/brick1 is mounted.. is that intended?
16:42 anil left #gluster
16:47 eljrax Is that because I did a gluster restore from a snapshot earlier?
16:48 JoeJulian seems likely
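
[A sketch of cross-checking gluster's snapshot view against LVM's, to confirm whether a restore left a snapshot LV mounted; the VG name vgglus1 is taken from the mount output above:]

    gluster snapshot list
    gluster snapshot status
    lvs -o lv_name,origin,lv_attr vgglus1
    grep /run/gluster/snaps /proc/mounts
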
16:54 Trefex just figured out why the master was slow, the gluster was not mounted, so the transfer was writing to SSD
16:54 Trefex haha
16:54 eljrax :)
16:54 Trefex well let's see overnight if the kernel will crap out again
16:55 eljrax TIL: If you restore from a snapshot, performance is crap afterwards?
16:55 eljrax Just stopped, deleted and recreated the volume, and I'm flying again
16:56 eljrax Or I spoke too soon :/
16:56 eljrax Yeah, started dying at about 13900 files again
17:08 aravindavk joined #gluster
17:10 shaunm_ joined #gluster
17:22 arcolife joined #gluster
17:26 ProT-0-TypE joined #gluster
17:30 pppp joined #gluster
17:31 Rapture joined #gluster
17:39 eljrax http://fpaste.org/231566/43413075/  Am I supposed to be concerned by this?
17:39 eljrax Files in a 2x2 volume only appear on one brick as far as I can see
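
[Worth noting for a 2x2 distributed-replicate volume: DHT hashes each file onto one replica pair, so a healthy volume shows each file on exactly two of the four bricks; on only one brick, replication to the pair partner may have failed. A sketch of checking, with a hypothetical volume name and brick paths:]

    gluster volume info testvol          # bricks are listed replica-pair by replica-pair
    gluster volume heal testvol info     # pending-heal entries per brick
    ls -l /data/brick*/file1             # run on each server
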
17:45 malevolent joined #gluster
17:45 xavih joined #gluster
17:46 glusterbot News from newglusterbugs: [Bug 1231334] Gluster native client hangs on accessing dirty file in disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1231334>
17:54 woakes070048 joined #gluster
18:00 kkeithley semiosis: ping. still around?  I got 3.6.3 trusty sorted out. But my uploads to the 3.7 ppa seem to be going into the bit bucket. How do I find out why?
18:15 semiosis kkeithley: you should be receiving emails saying upload accepted or rejected.  (assuming dput was successful)
18:16 kkeithley yup. I wasn't reading them though. ;-)
18:24 kkeithley but wait. The reject says: Unable to find glusterfs_3.7.1.orig.tar.gz in upload or distribution.
18:24 kkeithley but it should be there
18:25 kkeithley It was there, in the right place, when I ran the debuild. And it's still there.
18:26 kkeithley symlink, pointing at glusterfs-3.7.1.tar.gz
18:33 semiosis did your debuld have -sa?
18:34 semiosis or is it the -S?  one of those includes the source archive
18:35 kkeithley It had -S, not -sa. I was reading somewhere that said I shouldn't use -sa, and I haven't been using -sa on any of the 3.6 or 3.5 builds, which have all been working fine
18:36 kkeithley the source.changes for 3.5 and 3.6 include the _orig.3.x.y.tar.gz. The 3.7 source.changes does not. I'll try adding -sa and see if that changes anything
18:37 semiosis -sa should include the source archive... https://www.debian.org/doc/manuals/maint-guide/upload.en.html#option-sa
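
[A sketch of the source-only rebuild with the orig tarball forced in, plus the upload; the .changes file name here is hypothetical:]

    # -S: source-only build; -sa: force inclusion of the .orig tarball
    debuild -S -sa
    dput ppa:gluster/glusterfs-3.7 ../glusterfs_3.7.1-ubuntu1~trusty1_source.changes
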
18:42 craigcabrey joined #gluster
18:46 rwheeler joined #gluster
18:48 theron joined #gluster
19:02 maveric_amitc_ joined #gluster
19:10 Pupeno joined #gluster
19:22 bcicen joined #gluster
19:22 Bosse joined #gluster
19:23 bene2 joined #gluster
19:29 lyang0 joined #gluster
19:34 kkeithley Ubuntu ppas have been updated; now with 3.7.1, 3.6.3, and 3.5.4 for most current ubuntu releases.
19:46 nsoffer joined #gluster
19:48 lyang0 joined #gluster
20:02 nage joined #gluster
20:07 social joined #gluster
20:07 Dropje joined #gluster
20:07 bfoster joined #gluster
20:08 purpleidea joined #gluster
20:08 purpleidea joined #gluster
20:08 ndk joined #gluster
20:08 bene_at-car-deal joined #gluster
20:08 m0zes joined #gluster
20:11 CP|AFK joined #gluster
20:12 Intensity joined #gluster
20:12 trig joined #gluster
20:17 glusterbot News from newglusterbugs: [Bug 1231366] NFS Authentication Performance Improvements <https://bugzilla.redhat.com/show_bug.cgi?id=1231366>
20:24 rotbeard joined #gluster
20:27 DV joined #gluster
20:33 semiosis kkeithley: thank you very much!
20:40 bennyturns joined #gluster
20:42 B21956 joined #gluster
20:46 bcicen Hi all, I recently deployed 3.7.1 but seem to be missing something --
20:47 bcicen # gluster volume heal qa1 info heal-failed
20:47 bcicen Command not supported. Please use "gluster volume heal qa1 info" and logs to find the heal information.
20:49 bcicen heal-failed seems to still show up in the usage
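
[A sketch of the replacement workflow the error message points at, using the qa1 volume from above and the stock self-heal daemon log:]

    gluster volume heal qa1 info
    grep -i 'failed' /var/log/glusterfs/glustershd.log
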
21:37 nsoffer joined #gluster
21:39 woakes070048 joined #gluster
22:16 dgandhi joined #gluster
22:23 Pupeno joined #gluster
22:27 badone_ joined #gluster
22:43 woakes070048 joined #gluster
