IRC log for #gluster, 2014-09-19

All times shown according to UTC.

Time Nick Message
00:15 bennyturns joined #gluster
00:18 vu joined #gluster
00:53 tdasilva joined #gluster
00:57 RicardoSSP joined #gluster
00:57 RicardoSSP joined #gluster
01:17 wgao joined #gluster
01:28 gildub joined #gluster
01:28 Pupeno joined #gluster
01:30 ccha3 joined #gluster
01:33 bala joined #gluster
01:39 bennyturns joined #gluster
02:04 harish joined #gluster
02:21 bharata-rao joined #gluster
02:42 bala joined #gluster
02:54 justglusterfs joined #gluster
02:55 justglusterfs hi all
02:57 justglusterfs how to improve  glusterfs self-heal lots of small file performance?
03:03 justglusterfs_ joined #gluster
03:05 overclk joined #gluster
03:05 justglusterfs_ hi  how  to  improve  glusterfs  self heal lots of small file  performance?
03:07 plarsen joined #gluster
03:11 justglusterfs joined #gluster
03:13 hagarth joined #gluster
03:15 justglusterfs joined #gluster
03:15 kshlm joined #gluster
03:15 kshlm joined #gluster
03:17 justglusterfs hello
03:17 glusterbot justglusterfs: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
03:17 glusterbot answer.
03:18 justglusterfs why nobody answer question
03:19 harish joined #gluster
03:21 wgao joined #gluster
03:26 rejy joined #gluster
03:28 kanagaraj joined #gluster
03:29 Pupeno joined #gluster
03:39 shubhendu joined #gluster
03:40 overclk joined #gluster
03:42 haomaiwa_ joined #gluster
03:44 overclk_ joined #gluster
03:51 itisravi joined #gluster
03:52 zerick joined #gluster
03:58 haomai___ joined #gluster
04:09 hchiramm_ joined #gluster
04:10 nbalachandran joined #gluster
04:15 kumar joined #gluster
04:18 rjoseph joined #gluster
04:21 RameshN joined #gluster
04:21 haomaiwa_ joined #gluster
04:30 haomai___ joined #gluster
04:38 Rafi_kc joined #gluster
04:38 rafi1 joined #gluster
04:39 atinmu joined #gluster
04:40 anoopcs joined #gluster
04:46 KenShiro|MUPF joined #gluster
04:48 spandit joined #gluster
04:53 jiffin joined #gluster
04:54 ndarshan joined #gluster
04:59 meghanam joined #gluster
04:59 meghanam_ joined #gluster
05:07 nishanth joined #gluster
05:22 prasanth_ joined #gluster
05:26 kaushal_ joined #gluster
05:32 atalur joined #gluster
05:32 raghu joined #gluster
05:34 Pupeno joined #gluster
05:36 aravindavk joined #gluster
05:36 overclk joined #gluster
05:49 lalatenduM joined #gluster
05:49 saurabh joined #gluster
05:51 atinmu joined #gluster
05:52 doubt01 joined #gluster
05:55 soumya_ joined #gluster
05:57 kshlm joined #gluster
06:03 ws2k333 joined #gluster
06:05 milka joined #gluster
06:06 kshlm joined #gluster
06:09 rjoseph joined #gluster
06:11 hagarth joined #gluster
06:12 doubt01 joined #gluster
06:13 pkoro joined #gluster
06:15 harish joined #gluster
06:17 rgustafs joined #gluster
06:21 atinmu joined #gluster
06:21 overclk joined #gluster
06:27 klaas_ joined #gluster
06:28 dusmant joined #gluster
06:30 DJCl34n joined #gluster
06:30 DJClean joined #gluster
06:42 doubt01 joined #gluster
06:44 glusterbot New news from resolvedglusterbugs: [Bug 1078061] Need ability to heal mismatching user extended attributes without any changelogs <https://bugzilla.redhat.com/show_bug.cgi?id=1078061>
06:50 kdhananjay joined #gluster
06:52 nshaikh joined #gluster
07:00 ppai joined #gluster
07:08 R0ok_ I keep getting a bunch of these log entries on /var/log/glusterfs/export-data-mirror.log on glusterfs client mount point
07:08 R0ok_ [2014-09-15 07:19:40.695491] I [dict.c:370:dict_get] (-->/usr/lib64/glusterfs/3.5.2/xlator/performance/md-cache.so(mdc_lookup+0x318) [0x7f7217bf4518] (-->/usr/lib64/glusterfs/3.5.2/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x113) [0x7f72179d9c63] (-->/usr/lib64/glusterfs/3.5.2/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x1e1) [0x7f72177cb381]))) 0-dict: !this || key=system.posix_acl_access
07:08 glusterbot R0ok_: ('s karma is now -30
07:08 glusterbot R0ok_: ('s karma is now -31
07:08 glusterbot R0ok_: ('s karma is now -32
07:10 R0ok_ the volume is mounted on /export/data/mirror & used by our mirror server for backend storage,
07:19 glusterbot New news from newglusterbugs: [Bug 1144282] Documentation for meta xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1144282>
07:22 doubt01 joined #gluster
07:23 dmachi joined #gluster
07:25 Fen1 joined #gluster
07:27 haomaiwa_ joined #gluster
07:28 fattaneh joined #gluster
07:29 ekuric joined #gluster
07:29 gildub joined #gluster
07:36 Pupeno joined #gluster
07:36 soumya_ joined #gluster
07:39 fubada joined #gluster
07:41 Philambdo1 joined #gluster
07:44 Pupeno joined #gluster
07:50 Fen1 Hi, what is the command to install glusterfs on Debian ?
07:52 haomaiwa_ joined #gluster
08:05 fattaneh left #gluster
08:07 bala joined #gluster
08:11 fattaneh1 joined #gluster
08:13 liquidat joined #gluster
08:15 fattaneh1 left #gluster
08:28 mjrosenb Fen1: apt-cache search gluster will likely tell you what its name is.
08:28 Philambdo joined #gluster
08:29 Fen1 Because when i use  : apt-get install glusterfs-server. It's the 3.2.7 version...
08:30 Fen1 and i would like the last 3.5
08:32 mjrosenb Fen1: you'll likely need to either upgrade debian, or build it yourself.
08:33 mjrosenb debian may also have a backports like thing?
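The two routes mjrosenb mentions can be sketched as a shell sequence. The download.gluster.org repository path and the wheezy codename are assumptions about how the Gluster project published Debian packages at the time; verify both before use.

```shell
# Route 1: the upstream apt repository (URL/codename are assumptions)
wget -qO- http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/pubkey.gpg | apt-key add -
echo 'deb http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/wheezy/apt wheezy main' \
    > /etc/apt/sources.list.d/gluster.list
apt-get update
apt-get install glusterfs-server

# Route 2: wheezy-backports, if a newer glusterfs has been backported
echo 'deb http://ftp.debian.org/debian wheezy-backports main' >> /etc/apt/sources.list
apt-get update
apt-get -t wheezy-backports install glusterfs-server
```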
08:38 RaSTar joined #gluster
08:41 harish joined #gluster
08:59 dan_ joined #gluster
09:01 coredump joined #gluster
09:09 RaSTar joined #gluster
09:10 sputnik13 joined #gluster
09:12 nbalachandran joined #gluster
09:17 vimal joined #gluster
09:17 Slashman joined #gluster
09:37 deepakcs joined #gluster
09:47 haomaiwa_ joined #gluster
09:51 haomaiw__ joined #gluster
09:55 karnan joined #gluster
10:08 Freman left #gluster
10:11 pkoro joined #gluster
10:22 fattaneh2 joined #gluster
10:22 glusterbot New news from newglusterbugs: [Bug 1144407] Disperse xlator issues in a 32 bits environment <https://bugzilla.redhat.com/show_bug.cgi?id=1144407> || [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
10:23 atinmu joined #gluster
10:23 LebedevRI joined #gluster
10:23 aravindavk joined #gluster
10:24 hagarth joined #gluster
10:26 rjoseph joined #gluster
10:27 diegows joined #gluster
10:44 atinmu joined #gluster
10:48 ramon_dl joined #gluster
10:53 spandit joined #gluster
11:05 mojibake joined #gluster
11:11 atinmu joined #gluster
11:17 aravindavk joined #gluster
11:24 hagarth joined #gluster
11:35 monotek joined #gluster
11:43 edward1 joined #gluster
11:51 soumya__ joined #gluster
11:52 jiffin1 joined #gluster
11:58 B21956 joined #gluster
11:59 B21956 joined #gluster
12:09 sputnik13 joined #gluster
12:12 plarsen joined #gluster
12:13 RameshN joined #gluster
12:14 ricky-ti1 joined #gluster
12:16 itisravi_ joined #gluster
12:18 bala joined #gluster
12:23 elico joined #gluster
12:26 sputnik13 joined #gluster
12:26 bene2 joined #gluster
12:38 plarsen joined #gluster
12:42 itisravi_ joined #gluster
12:42 itisravi joined #gluster
12:46 theron_ joined #gluster
12:48 tdasilva joined #gluster
12:51 jiffin joined #gluster
12:52 rjoseph joined #gluster
13:10 julim joined #gluster
13:10 _Bryan_ joined #gluster
13:15 keds joined #gluster
13:17 keds joined #gluster
13:18 dusmant joined #gluster
13:21 chirino joined #gluster
13:22 hagarth joined #gluster
13:43 aravindavk joined #gluster
13:55 Philambdo joined #gluster
13:57 aravindavk joined #gluster
14:23 ilbot3 joined #gluster
14:23 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
14:23 glusterbot New news from newglusterbugs: [Bug 1138621] Wrong error handling for mmap() syscall in gf-changelog-process.c FILE <https://bugzilla.redhat.com/show_bug.cgi?id=1138621>
14:24 jobewan joined #gluster
14:29 sprachgenerator joined #gluster
14:42 ramon_dl1 joined #gluster
14:43 failshell joined #gluster
14:45 glusterbot New news from resolvedglusterbugs: [Bug 848556] glusterfsd apparently unaware of brick failure. <https://bugzilla.redhat.com/show_bug.cgi?id=848556>
14:45 DV__ joined #gluster
14:47 AaronGreen joined #gluster
14:47 tru_tru_ joined #gluster
14:52 sac`away joined #gluster
14:57 jobewan joined #gluster
14:57 XpineX joined #gluster
14:57 shubhendu joined #gluster
14:58 lalatenduM joined #gluster
15:00 sijis could someone provide feedback on this question? http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018799.html
15:00 glusterbot Title: [Gluster-users] Issue with tar, ls and rsync on 10G volume (at supercolony.gluster.org)
15:01 dblack joined #gluster
15:02 natgeorg joined #gluster
15:03 theron_ joined #gluster
15:04 longshot902 joined #gluster
15:05 _Bryan_ joined #gluster
15:07 XpineX joined #gluster
15:08 Fen1 joined #gluster
15:09 R0ok_ sijis: what's the current performance.size ?
15:10 R0ok_ sijis: if the volume contains alot of read-only files, then you'd probably want to increase the cache size
15:10 sijis R0ok_: how would i be able to pull that?
15:11 R0ok_ sijis: run this command: gluster volume info <VOLNAME>
15:11 R0ok_ sijis: as always, just replace <VOLNAME> with your volume name
15:12 sijis http://paste.fedoraproject.org/134912/11395581/
15:12 glusterbot Title: #134912 Fedora Project Pastebin (at paste.fedoraproject.org)
15:13 cultav1x joined #gluster
15:14 sijis R0ok_: i just pasted the output
15:15 Philambdo joined #gluster
15:15 R0ok_ sijis: you need to set the performance.cache-size option on the gbp3 volume, i would suggest setting it to 1GB by running the command: 'gluster volume set gbp3 performance.cache-size 1GB'
15:16 R0ok_ sijis: have a look at this blog to get a good understanding of performance tweaking: http://www.jamescoyle.net/how-to/559-glusterfs-performance-tuning
15:17 sijis R0ok_: can i ask how you get 1G?
15:17 sijis is that just something to start with?
15:19 * sijis is reading the blog post
15:20 R0ok_ sijis: by default, glusterfs uses a performance.cache-size of 32MB
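The tuning R0ok_ describes is done per volume; the CLI expects `gluster volume set <VOLNAME> <OPTION> <VALUE>`. A minimal sketch, assuming the volume name gbp3 from sijis's paste:

```shell
# current value; options still at their 32MB default may not be listed
gluster volume info gbp3 | grep performance.cache-size

# raise the io-cache read cache to 1GB
gluster volume set gbp3 performance.cache-size 1GB

# confirm the option now shows in the reconfigured options
gluster volume info gbp3 | grep performance.cache-size
```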
15:21 rwheeler joined #gluster
15:24 sijis R0ok_: so after change, i'd need to 'service glusterfsd restart' ?
15:28 klaas joined #gluster
15:30 saltsa joined #gluster
15:30 sman_ joined #gluster
15:30 gomikemike i finally got geo-replication working
15:30 fim_ joined #gluster
15:30 Chr1s1an joined #gluster
15:31 duerF^ joined #gluster
15:31 nixpanic joined #gluster
15:31 l0uis__ joined #gluster
15:31 gomikemike freaking ping was holding me back...
15:31 nixpanic joined #gluster
15:31 Gugge joined #gluster
15:31 kke_ joined #gluster
15:31 Peanut__ joined #gluster
15:31 samppah_ joined #gluster
15:31 gomikemike cant believe it uses ping to check availability
15:31 ndevos joined #gluster
15:31 ndevos joined #gluster
15:32 jbrooks joined #gluster
15:32 Zordrak joined #gluster
15:32 Zordrak joined #gluster
15:32 R0ok_ sijis: you can restart glusterd
15:33 ninjabox1 joined #gluster
15:33 xavih joined #gluster
15:33 Slasheri joined #gluster
15:33 Slasheri joined #gluster
15:33 georgeh joined #gluster
15:34 khanku joined #gluster
15:34 sijis R0ok_: so what's the diff between glusterd and glusterfsd?
15:34 cyberbootje joined #gluster
15:34 R0ok_ sijis: you can modify your cache-size to whatever value you want, depending on your requirements/usage & hardware specs. on the server
15:34 lkoranda joined #gluster
15:42 nbvfuel joined #gluster
15:42 drajen_ joined #gluster
15:44 JoeJulian ~processes | sijis
15:44 glusterbot sijis: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
15:45 daMaestro joined #gluster
15:45 sijis_ joined #gluster
15:45 mjrosenb_ joined #gluster
15:45 atoponce joined #gluster
15:46 Gugge joined #gluster
15:46 rturk|afk joined #gluster
15:48 R0ok_ sijis: ^^^
15:49 lkoranda joined #gluster
15:50 sijis_ R0ok_: ?? i understand that i can set it to whatever i want
15:51 sijis_ i was curious if that setting was going to use ram or disk caching
15:53 failshel_ joined #gluster
15:53 dmachi When I do a remove brick, i can see data being migrated, but when it is completed there is still 14gb of data left on the brick.  Should it be empty when it is really safe to do a "commit"?
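The workflow dmachi is in the middle of is start/status/commit; the safe check before committing is that status reports completed with zero failures, not that the brick directory is empty (the brick keeps its .glusterfs metadata directory regardless). Volume and brick names below are hypothetical:

```shell
# begin migrating data off the brick
gluster volume remove-brick myvol server1:/bricks/b1 start

# poll until the status shows "completed" and the failures column is 0
gluster volume remove-brick myvol server1:/bricks/b1 status

# only then make the removal permanent
gluster volume remove-brick myvol server1:/bricks/b1 commit
```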
15:55 side_control joined #gluster
15:56 R0ok_ sijis: performance.cache-size is the size of the read cache...& i think thats more of ram caching , gotta ask JoeJulian
15:56 R0ok_ @JoeJulian: ^^^
15:56 side_control joined #gluster
15:57 nbvfuel We're on gluster 3.5.2.  When unzipping an 11MB file (~7000 small files, 65MB unzipped) it takes 10 seconds locally.  On a gluster mirror node itself (mounting the volume as a client) it takes just over a minute.  On a client, 5ms away, it takes 8 minutes.
15:58 nbvfuel I pulled up some older blog posts that mention this type of scenario, but I'm not sure how up to date they are, and what tunables are available.
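The benchmark nbvfuel describes can be reproduced with any archive of small files; the sketch below uses tar so it is self-contained. WORKDIR is hypothetical — point it at a directory on the Gluster mount to measure the FUSE case, where each file create is a network round trip:

```shell
#!/bin/sh
# Generate ~1000 small files, archive them, then time extraction.
WORKDIR="${WORKDIR:-$(mktemp -d)}"   # set to a Gluster mount to test it
mkdir -p "$WORKDIR/src"
i=0
while [ "$i" -lt 1000 ]; do
    printf 'payload %s\n' "$i" > "$WORKDIR/src/file$i.txt"
    i=$((i + 1))
done
tar -C "$WORKDIR" -cf "$WORKDIR/small.tar" src
rm -rf "$WORKDIR/src"

# fast on local disk; minutes on a high-latency FUSE mount
time tar -C "$WORKDIR" -xf "$WORKDIR/small.tar"
ls "$WORKDIR/src" | wc -l
```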
15:59 sijis_ R0ok_:  do you also know whta's the difference between the glusterd and glusterfsd service?
16:01 gomikemike question, I'm mounting a volume on an apache server as its docroot, i need the dir (And files) owned by apache, how do i set that up? do i just chown them on the instance that has mounted the volume?
16:04 semiosis yes
16:04 semiosis sijis_: ,,(processes)
16:04 glusterbot sijis_: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
16:08 sijis_ semiosis: thanks
16:08 semiosis yw
16:09 sijis_ follow up - does restarting glusterd affects glusterfsd? (i would image it would)
16:09 semiosis no
16:09 semiosis use gluster volume start/stop to control glusterfsd procs
16:10 semiosis only effect restarting glusterd has is to start a missing glusterfsd (where the vol is started but the proc is dead)
16:10 hagarth joined #gluster
16:10 sijis ahh. ok. that was going to lead me to the next question... did something like 'service glusterfsd <vol> stop' work?
16:11 sijis semiosis: sure. if mgmt is down.. it can't 'control' the volume.
16:11 sijis that makes sense
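Semiosis's point condenses to a short sketch (volume name is hypothetical; init-script syntax varies by distro):

```shell
# glusterd is the management daemon; restarting it does not touch bricks
service glusterd restart

# glusterfsd brick daemons are controlled per volume, not via init
gluster volume stop myvol
gluster volume start myvol

# one side effect: restarting glusterd respawns a brick daemon that
# died while its volume was still in the Started state
```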
16:14 JoeJulian R0ok_, sijis: I think perfomance.cache-size is the size of the cache pool that's supposed to be shared. Don't quote me on that though.
16:15 sijis fair enough. i was just curious
16:17 dblack joined #gluster
16:19 tru_tru joined #gluster
16:23 glusterbot New news from newglusterbugs: [Bug 1144527] log files get flooded when removexattr() can't find a specified key or value <https://bugzilla.redhat.com/show_bug.cgi?id=1144527>
16:28 fattaneh1 joined #gluster
16:29 mojibake1 joined #gluster
16:29 PeterA joined #gluster
16:38 soumya__ joined #gluster
17:13 l0uis joined #gluster
17:15 cultav1x joined #gluster
17:16 glusterbot New news from resolvedglusterbugs: [Bug 947153] [TRACKER] Hadoop Compatible File System (HCFS) <https://bugzilla.redhat.com/show_bug.cgi?id=947153>
17:18 ricky-ticky joined #gluster
17:19 fattaneh1 What are the advantages of glusterfs compared to ceph?
17:28 dtrainor joined #gluster
17:29 JoeJulian better dipthongs.
17:30 JoeJulian GlusterFS has a working posixly correct, stable filesystem interface.
17:31 JoeJulian ceph doesn't.
17:31 rturk Yeah, the Ceph filesystem is still under development
17:31 JoeJulian Other than that, they're fairly comparable.
17:32 rturk (although fairly different in the way that they distribute and replicate data)
17:33 JoeJulian I'd go so far as to say completely different in that respect.
17:33 rturk :)
17:37 * JoeJulian is disappointed he didn't even get a groan over the dipthong comparison...
17:38 * skippy groans
17:40 JoeJulian Thank you.
17:44 fattaneh joined #gluster
17:47 fattaneh What are the advantages of glusterfs compared to ceph?
17:47 JoeJulian fattaneh: scroll up
17:48 nothau heh
17:48 fattaneh JoeJulian: i was disconnected
17:48 * nothau groans.
17:48 JoeJulian Type "/topic" and check the logs when that happens, please.
17:48 fattaneh JoeJulian: thanks :)
17:48 nothau There are a lot of write ups too comparing the 2
17:49 fattaneh nothau: thanks
17:49 dtrainor_ joined #gluster
17:49 longshot902_ joined #gluster
17:49 nothau http://lmgtfy.com/?q=ceph+vs+glusterfs
17:49 glusterbot Title: Let me google that for you (at lmgtfy.com)
17:50 sputnik13 joined #gluster
17:52 kkeithley diphthong (with two haitches): a two-element speech sound that begins with the tongue position for one  vowel and ends with the tongue position for another all within one  syllable   <the sounds of "ou" in "out" and of "oy" in "boy" are diphthongs>
17:53 kkeithley a dipthong is a sagging swimsuit?
17:53 JoeJulian lol
17:54 longshot902 joined #gluster
17:55 JoeJulian The wikipedia article has a better description: http://en.wikipedia.org/wiki/Diphthong
17:55 glusterbot Title: Diphthong - Wikipedia, the free encyclopedia (at en.wikipedia.org)
17:55 kkeithley yeah, that was the definition for students (because I couldn't understand what dictionary.reference.com's explanation meant
17:56 JoeJulian "lus" and "ter" are dipthings. From a speech programming standpoint, so is "gl".
17:56 kkeithley student, as in fifth grade
17:57 rturk whereas I believe "ph" is a consonant cluster?
17:58 * JoeJulian changes the channel name to #english204.
17:58 kkeithley what are the two vowel sounds in "lus" or "ter"?
17:58 kkeithley or "gl"?
17:59 rturk sorry, it's actually a consonant digraph
17:59 * rturk uses the google
17:59 JoeJulian Don't need two vowel sounds in a unitary diphthong.
18:00 dtrainor joined #gluster
18:00 JoeJulian But really I was taking my definition from TI-99/4a speech synthesizer "dipthong" programming.
18:01 rturk really we should be talking about the biological similarities and differences between cephalopods and ants, though, right?
18:01 JoeJulian Absolutely.
18:02 JoeJulian "Technically, a diphthong is a vowel with two different targets: that is, the tongue (and/or other parts of the speech apparatus) moves during the pronunciation of the vowel."
18:02 JoeJulian That's the definition I'm going to accept as it supports my argument.
18:02 kkeithley maybe we could get the lady doctor who keeps tagging her encephalograph tweets with #ceph to pick a different hashtag?
18:03 kkeithley oh, looks like someone already did
18:05 Philambdo joined #gluster
18:10 _nothau joined #gluster
18:28 ndk joined #gluster
18:29 jmarley joined #gluster
18:31 rwheeler joined #gluster
18:35 ekuric joined #gluster
18:40 zerick joined #gluster
18:54 ThatGraemeGuy joined #gluster
19:04 fattaneh1 joined #gluster
19:16 chirino joined #gluster
19:19 bene2 joined #gluster
19:23 nbvfuel NFS client mounting woes.  I'm trying to mount the gluster volume locally (on RHEL 6.5) via mount -t nfs -o vers=3,proto=tcp localhost:/my_volume_name /mnt/test
19:23 vu joined #gluster
19:23 nbvfuel But no dice: mount.nfs: requested NFS version or transport protocol is not supported
19:23 glusterbot nbvfuel: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
19:24 dmachi1 joined #gluster
19:24 nbvfuel The volume is definitely started, and "gluster volume status" reports: NFS Server on localhost 2049 Y 44830 (Y = Online)
19:28 nbvfuel Well-- oddly, stopping and then starting the volume allowed me to magically connect
19:28 glusterbot nbvfuel: Well's karma is now -1
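When "requested NFS version or transport protocol is not supported" appears against a started volume, a few checks narrow it down. A sketch using nbvfuel's volume name; the kernel-NFS check reflects the known conflict between the in-kernel NFS server and Gluster's userspace NFS on the same ports:

```shell
# is Gluster's built-in NFS server registered with the portmapper?
rpcinfo -p | grep nfs

# does the volume report its NFS server as online?
gluster volume status my_volume_name

# the kernel NFS server must not be running alongside Gluster's
service nfs status

# as seen in the log, a volume restart re-registers the NFS service
gluster volume stop my_volume_name
gluster volume start my_volume_name
mount -t nfs -o vers=3,proto=tcp localhost:/my_volume_name /mnt/test
```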
19:40 virusuy joined #gluster
19:42 semiosis nbvfuel: beware localhost nfs mounts are dangerous
19:45 semiosis this is interesting, http://lwn.net/Articles/595652/
19:45 glusterbot Title: Loopback NFS: theory and practice [LWN.net] (at lwn.net)
19:45 edwardm61 joined #gluster
19:45 _nothau Inception mount
19:46 semiosis that article really needs a tl;dr summary
19:46 nbvfuel semiosis: Interesting-- did not know that.
19:46 glusterbot nbvfuel: Interesting's karma is now -1
19:47 nbvfuel Grrrr, glusterbot.
19:48 nbvfuel I was just testing performance (my unzip 1000s of small files scenario) from earlier.
19:51 fubada hi, does the author of http://blog.gluster.org/category/puppet-module/ hang out here?
19:51 fubada the puppet module
19:52 semiosis fubada: meet purpleidea
19:52 fubada hi purpleidea
19:52 fubada ty semiosis
19:52 semiosis yw
20:01 virusuy good afternoon yall
20:03 davdunc joined #gluster
20:07 fubada purpleidea: im trying to use your puppet-gluster module with the following error: Error 400 on SERVER: undefined method `brick_str_to_hash' for Scope(Gluster::Volume[reports]):Puppet::Parser::Scope
20:45 G________ joined #gluster
20:45 sprachgenerator joined #gluster
21:09 purpleidea fubada: hey there
21:10 purpleidea fubada: i saw your message in #puppet too, but the paste you posted did not exist...
21:12 purpleidea fubada: the method is defined... https://github.com/purpleidea/puppet-gluster/blob/master/lib/puppet/parser/functions/brick_layout_simple.rb#L55 can you let me know what version of OS, puppet-gluster, and puppet you are using?
21:12 glusterbot Title: puppet-gluster/brick_layout_simple.rb at master · purpleidea/puppet-gluster · GitHub (at github.com)
21:24 elico joined #gluster
21:53 RicardoSSP joined #gluster
22:36 Pupeno joined #gluster
23:05 longshot902 joined #gluster
23:10 ramon_dl1 left #gluster
23:54 plarsen joined #gluster
23:55 PeterA oh man…just got a brick crashed on a late friday afternoon :(
23:56 PeterA http://pastie.org/9577742
23:56 glusterbot Title: #9577742 - Pastie (at pastie.org)
23:56 PeterA any clue on how can a pid lock issue happen??