IRC log for #gluster, 2013-10-29


All times shown according to UTC.

Time Nick Message
00:19 TP_ Peace out gang!
00:37 RicardoSSP joined #gluster
00:37 RicardoSSP joined #gluster
00:49 itisravi joined #gluster
00:53 yinyin joined #gluster
00:54 bala joined #gluster
01:08 theron joined #gluster
01:18 asias joined #gluster
01:44 bala joined #gluster
02:01 Guest54292 joined #gluster
02:16 itisravi joined #gluster
02:18 Alex joined #gluster
02:20 Alex Hello. I have a slightly odd problem. When using Gluster on a 3.4.55 kernel/Debian box, we see that directory listings in a directory with unicode filenames just hang. I'm going to spend my time staring at straces, but just wondered if anyone else had seen it.
02:28 thinmint_ joined #gluster
02:30 thinmint_ left #gluster
02:56 Alex Fwiw stracing it just sits at getdents until I interrupt it. Wondering if it might be a kernel/libc issue rather than gluster, but doubt it since I don't see the problem if I don't use gluster...
02:59 bharata-rao joined #gluster
03:08 sgowda joined #gluster
03:15 Alex Heh, interesting. If I try to scp a directory from the box it just loops forever. It's like getdents/whatever the call is doesn't know when the directory ends.
03:15 Alex nb: the same works on non gluster mounts on the same box.
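A minimal sketch of the kind of trace Alex describes, assuming a hypothetical mount path (strace's syscall filter is standard; getdents64 is included since 64-bit systems often use that variant):

    # trace only the directory-reading syscalls while listing the gluster mount
    strace -f -e trace=getdents,getdents64 ls /mnt/gluster/unicode-dir
    # on a healthy mount, getdents() eventually returns 0 (end of directory);
    # the bug shows as a call that never returns, or entries that repeat forever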
03:20 satheesh joined #gluster
03:31 dusmant joined #gluster
03:44 kanagaraj joined #gluster
03:44 RameshN joined #gluster
03:48 itisravi joined #gluster
03:49 shubhendu joined #gluster
03:49 itisravi_ joined #gluster
03:58 satheesh joined #gluster
03:59 hagarth joined #gluster
04:17 rjoseph joined #gluster
04:17 Qu_bits joined #gluster
04:29 ppai joined #gluster
04:30 ngoswami joined #gluster
04:36 bala joined #gluster
04:38 satheesh1 joined #gluster
04:38 Qu_bits hi satheesh1
04:52 dusmant joined #gluster
05:04 Qu_bits !list
05:05 Qu_bits ,list
05:05 shylesh joined #gluster
05:14 vpshastry joined #gluster
05:17 rc10 joined #gluster
05:27 CheRi joined #gluster
05:27 ajha joined #gluster
05:28 bulde joined #gluster
05:32 mohankumar joined #gluster
05:33 zerick joined #gluster
05:35 glusterbot New news from newglusterbugs: [Bug 1024181] Unicode filenames cause directory listing interactions to hang/loop <http://goo.gl/wPFkNZ>
05:35 aravindavk joined #gluster
05:45 shilpa_ joined #gluster
05:47 satheesh1 joined #gluster
05:52 dusmant joined #gluster
06:07 nshaikh joined #gluster
06:10 harish_ joined #gluster
06:22 ngoswami joined #gluster
06:22 kr1ss joined #gluster
06:28 sticky_afk joined #gluster
06:28 itisravi_ joined #gluster
06:29 stickyboy joined #gluster
06:30 mohankumar joined #gluster
06:31 hflai joined #gluster
06:31 compbio joined #gluster
06:34 shanks joined #gluster
06:38 Qu_bits !topic
06:47 yinyin joined #gluster
06:56 kr1ss joined #gluster
06:56 RameshN joined #gluster
06:57 raghu joined #gluster
07:03 lalatenduM joined #gluster
07:09 ndarshan joined #gluster
07:12 rastar joined #gluster
07:14 satheesh1 joined #gluster
07:19 kanagaraj joined #gluster
07:22 RedShift2 joined #gluster
07:25 purpleidea Qu_bits: /topic
07:25 Qu_bits insufficient arguments for command
07:26 rc10 what are optimal performance settings for small files?
07:26 Qu_bits I think that's what a chan op types if they want to change the topic
07:26 jtux joined #gluster
07:27 Qu_bits http://technet.microsoft.com/en-us/magazine/ff382717.aspx
07:27 glusterbot <http://goo.gl/4TpOGp> (at technet.microsoft.com)
07:31 vpshastry1 joined #gluster
07:35 glusterbot New news from newglusterbugs: [Bug 955548] adding host uuids to volume status command xml output <http://goo.gl/rZS9c>
07:36 hateya joined #gluster
07:57 rjoseph joined #gluster
08:05 eseyman joined #gluster
08:05 keytab joined #gluster
08:06 kanagaraj joined #gluster
08:08 ctria joined #gluster
08:08 franc joined #gluster
08:08 dneary joined #gluster
08:09 meghanam_ joined #gluster
08:09 meghanam joined #gluster
08:17 vshankar joined #gluster
08:21 vpshastry1 joined #gluster
08:32 dusmant joined #gluster
08:36 kanagaraj joined #gluster
08:37 satheesh1 joined #gluster
08:44 shruti joined #gluster
09:04 shruti joined #gluster
09:09 shruti joined #gluster
09:13 mbukatov joined #gluster
09:18 manik joined #gluster
09:22 CheRi joined #gluster
09:42 Norky joined #gluster
09:48 GabrieleV joined #gluster
09:51 shruti joined #gluster
10:11 mgebbe_ joined #gluster
10:11 satheesh joined #gluster
10:41 harish_ joined #gluster
10:42 jordi1 joined #gluster
10:44 social joined #gluster
10:46 RameshN joined #gluster
10:46 bala joined #gluster
10:48 jordi1 Hi! I'm trying to start a Geo-Replication but I get the following error:
10:48 jordi1 gluster volume geo-replication master-volume ssh://root@NCSL008:/home/node1 config log-level debug
10:48 jordi1 geo-replication config-set failed for master-volume ssh://root@NCSL008:/home/node1
10:48 jordi1 geo-replication command failed
10:48 calum_ joined #gluster
10:48 jordi1 Anyone know how I can solve this error?
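For reference, a hedged sketch of the 3.3-era geo-replication CLI jordi1 is using (volume, host, and path taken from the log; whether config can be set before the session is started varies by version, so the ordering here is an assumption):

    gluster volume geo-replication master-volume ssh://root@NCSL008:/home/node1 start
    gluster volume geo-replication master-volume ssh://root@NCSL008:/home/node1 status
    # once the session is up, raise the log level for debugging
    gluster volume geo-replication master-volume ssh://root@NCSL008:/home/node1 config log-level DEBUG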
11:09 rcheleguini joined #gluster
11:12 mkarg joined #gluster
11:13 yinyin joined #gluster
11:15 kshlm joined #gluster
11:20 hagarth joined #gluster
11:21 diegows_ joined #gluster
11:39 geewiz joined #gluster
11:39 edward1 joined #gluster
11:44 DV__ joined #gluster
11:51 hagarth joined #gluster
12:02 social hi, how do I remove a brick from a distribute volume?
12:05 jordi1 use this command: gluster volume remove-brick test-volume server2:/exp2 start
12:07 asias joined #gluster
12:07 social that will end up in data loss
12:10 social jordi1: http://paste.fedoraproject.org/50142/13830486
12:10 glusterbot Title: #50142 Fedora Project Pastebin (at paste.fedoraproject.org)
12:12 social this was a test, but I tried it on a similar setup with 6 nodes and ended up in the same situation; if I want to remove one replica pair from distribute I'll end up with data loss
12:14 itisravi_ joined #gluster
12:18 social JoeJulian: have you seen something like what I pasted above? ^^
12:18 jordi1 well, if you don't remove the directories where this data is stored, you won't have data loss. But this content will be inaccessible to Gluster
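The safe decommission sequence under discussion migrates data off the brick before detaching it; a sketch using jordi1's example names (start/status/commit are the stock remove-brick sub-commands):

    gluster volume remove-brick test-volume server2:/exp2 start    # begin migrating files off the brick
    gluster volume remove-brick test-volume server2:/exp2 status   # wait until the migration completes
    gluster volume remove-brick test-volume server2:/exp2 commit   # only then detach the brick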
12:22 dusmant joined #gluster
12:23 hagarth joined #gluster
12:23 m0zes joined #gluster
12:29 GabrieleV joined #gluster
12:36 glusterbot New news from newglusterbugs: [Bug 1023309] geo-replication command failed <http://goo.gl/JKsevM>
12:49 sgowda joined #gluster
12:51 ababu joined #gluster
12:56 jordi1 left #gluster
13:13 DV__ joined #gluster
13:17 sgowda joined #gluster
13:19 GabrieleV joined #gluster
13:22 lpabon joined #gluster
13:23 bennyturns joined #gluster
13:24 vpshastry joined #gluster
13:28 asias joined #gluster
13:31 onny1 joined #gluster
13:33 ndk joined #gluster
13:34 dewey joined #gluster
13:35 ndarshan joined #gluster
13:38 danci1973 joined #gluster
13:45 elyograg my rebalance is going slowly.  Got about 200GB moved in 10 hours, but in total it's going to have to move about 10TB.
13:48 elyograg Makes me very glad that I engineered it with a separate gigabit LAN for the servers to talk to each other.  No matter how saturated that network might get, clients will still have a clean network for access.  I'm sure there will be additional latency because disk heads need to be moved, but the network won't be overloaded.
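Progress like this can be watched with the stock status sub-command (volume name hypothetical):

    gluster volume rebalance myvolume status
    # shows, per node, files rebalanced, size moved, failures, and run state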
13:49 rc10 joined #gluster
13:53 hngkr_ joined #gluster
13:56 DV__ joined #gluster
13:56 3JTAAKW6K joined #gluster
13:57 hagarth joined #gluster
13:59 ndarshan joined #gluster
13:59 dusmant joined #gluster
14:02 mohankumar joined #gluster
14:02 chirino joined #gluster
14:03 aixsyd elyograg: still there, mate?
14:05 aixsyd I was just going to ask about what you just said - I just set up my first glusterfs cluster, but they do not talk to each other directly, or have their own separate network. How does one set that up? They both have second NICs to do it, I'm just unsure of the configuration aspect
14:12 bala joined #gluster
14:12 danci1973 Hello, I'm struggling to get Gluster working with RDMA over Infiniband... I got to a point where I can mount a volume, but it's very slow - 5-6 MB/s...
14:13 danci1973 Does anyone know of an rdma/gluster troubleshooting guide? I couldn't find much on the topic...
14:13 bugs_ joined #gluster
14:14 bala1 joined #gluster
14:14 plarsen joined #gluster
14:24 wushudoin joined #gluster
14:30 vpshastry joined #gluster
14:35 rjoseph joined #gluster
14:36 glusterbot New news from newglusterbugs: [Bug 1024369] Unable to shrink volumes without dataloss <http://goo.gl/jZ350k>
14:45 gmcwhistler joined #gluster
14:48 ababu joined #gluster
14:52 failshell joined #gluster
15:00 kshlm joined #gluster
15:01 B21956 joined #gluster
15:10 JoeJulian social: Interesting. What if you try it without "cluster.readdir-optimize on"?
15:11 JoeJulian elyograg: rebalance is in a different priority queue anyway. Clients will always be served first.
15:12 JoeJulian aixsyd: He has a second nic on each server and uses split-dns to have the servers and clients resolve the hostnames to different IPs.
14:14 aixsyd how is that a separate LAN, though?
15:15 JoeJulian danci1973: There isn't much on the topic out there. For people that use rdma successfully, it seems to just work. For the rest, I don't know if they just don't have the skills to figure it out, or just keep it to themselves once they do. Make sure you're using 3.3, though, as 3.4's rdma is apparently broken.
15:16 social JoeJulian: same effect, I opened a bug report on this, dunno why dht does not migrate data off decommissioned bricks
15:16 JoeJulian aixsyd: Not sure where your confusion is on that one.
15:16 mgebbe joined #gluster
15:18 aixsyd Well, if my LAN is a 10.0.0.0/24 network, a separate LAN would be on different hardware and, say, a 172.16.0.0/24 network. Would you need a DNS server on the 172.16 network? Where and how do you tell glusterfs to look for other nodes on 172.16 as opposed to 10?
15:19 aixsyd I guess you could static the resolv.conf to the 172.16 IPs... is it that simple?
15:20 JoeJulian By using ,,(hostnames) your servers then have names. If "server1" resolves to 10.0.0.1 to your client, and 172.16.0.1 on your server...
15:20 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
15:20 JoeJulian yeah, though I think you meant /etc/hosts
15:20 aixsyd ewps, yea, hosts
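A sketch of the split-horizon naming being described, reusing the example subnets from above (all addresses illustrative):

    # /etc/hosts on each gluster server: peer names resolve over the storage LAN
    172.16.0.1   server1
    172.16.0.2   server2

    # clients resolve the same names via DNS (or their own hosts file) to the client LAN
    10.0.0.1     server1
    10.0.0.2     server2

Because the volume is defined with hostnames rather than IPs, each side transparently reaches the bricks over its own network.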
15:21 vpshastry bfoster: ping
15:23 bfoster vpshastry: pong
15:23 vpshastry about the comment in http://review.gluster.org/#/c/6125/5/extras/quota-remove-xattr.sh
15:23 glusterbot <http://goo.gl/o79VTF> (at review.gluster.org)
15:26 bfoster ?
15:28 rwheeler joined #gluster
15:30 JoeJulian Odd... that script never seems to have conformed to its "usage".
15:30 vpshastry bfoster: oh, sorry. I didn't understand at first... now I understand your comment. Thanks :)
15:31 bfoster ok :)
15:36 LoudNoises joined #gluster
15:39 Guest54292 joined #gluster
15:51 elyograg aixsyd: DNS has the 'client-side' addresses for my gluster storage servers.  The hosts file on each gluster server defines different addresses for them to use locally.
15:53 elyograg aixsyd: the hosts file on the storage servers: http://fpaste.org/50191/30619761/
15:53 glusterbot Title: #50191 Fedora Project Pastebin (at fpaste.org)
15:55 bulde joined #gluster
16:02 hagarth joined #gluster
16:06 daMaestro joined #gluster
16:06 crashmag joined #gluster
16:06 daMaestro joined #gluster
16:07 ofu_ joined #gluster
16:08 davidbierce joined #gluster
16:10 Gilbs joined #gluster
16:13 zerick joined #gluster
16:14 JoeJulian file a bug
16:14 glusterbot http://goo.gl/UUuCq
16:16 dbruhn Well, I have another 6-server system going in next week... gotta love last-minute rash growth.
16:19 harish joined #gluster
16:19 zaitcev joined #gluster
16:22 JoeJulian It beats the alternative.
16:24 aixsyd JoeJulian: Got a hypothetical for you
16:26 vpshastry joined #gluster
16:27 aixsyd Say I have two identical physical servers, both dual quad-core Xeons, say 8-16GB RAM, and both have 6x 2TB hard drives in a RAID 10. Say I really only need 3TB usable. Would it be a smart move to install, say, Proxmox on each server, and create two virtual servers on each that both contain GlusterFS, so all four VMs would act as nodes? Does this offer any additional protection from something like random kernel panics, or software i
16:27 vpshastry1 joined #gluster
16:28 daMaestro joined #gluster
16:28 aixsyd or would I be just as well off running them on bare metal, as opposed to a VM environment?
16:29 aixsyd I figure 4 nodes > 2 nodes for failover and HA and all that jazz
16:30 aixsyd and true, if there's a catastrophic hardware failure on one of the two physical servers, I'd lose two nodes as opposed to only one...
16:32 aixsyd a major benefit I see running them as VMs is that I can use snapshot backups to back up the OS disk every night and in the event of a failure, I can just replicate the OS back, attach a new data disk and let it resync in minutes as opposed to an hour or more
16:33 aixsyd I plan to use InfiniBand between the two hardware servers - 10Gb or 40Gb InfiniBand
16:33 aixsyd and with the Red Hat VirtIO iSCSI and NIC drivers, there shouldn't be much performance degradation
16:34 Mo__ joined #gluster
16:36 calum_ joined #gluster
16:37 glusterbot New news from newglusterbugs: [Bug 1024434] remove-brick status output fields are misaligned <http://goo.gl/xrrbCD>
16:43 vpshastry joined #gluster
16:46 Gilbs Morning all!  Does anyone have a link to the how-to document on how to re-use the same UUID when rebuilding a failed OS on a gluster box?
16:47 semiosis @replace
16:47 glusterbot semiosis: (replace [<channel>] <number> <topic>) -- Replaces topic <number> with <topic>.
16:47 semiosis ,,(replace)
16:47 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/nIS6z ... or if replacement server has same
16:47 glusterbot hostname: http://goo.gl/rem8L
16:47 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <http://goo.gl/pzQv9M>
16:47 sgowda joined #gluster
16:47 semiosis glusterbot: second link
16:47 semiosis s/glusterbot/Glibs/
16:48 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
16:48 * semiosis is not in the zone
16:48 semiosis Gilbs: second link in ,,(replace)
16:48 glusterbot Gilbs: Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/nIS6z ... or if replacement server has
16:48 glusterbot same hostname: http://goo.gl/rem8L
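In outline, the same-hostname procedure behind that second link comes down to reusing the failed server's UUID; a sketch assuming the stock 3.3/3.4 file layout (the old UUID can be read from /var/lib/glusterd/peers/ on any surviving peer):

    # on the rebuilt server, after installing gluster
    service glusterd stop
    echo "UUID=<old-uuid-from-a-surviving-peer>" > /var/lib/glusterd/glusterd.info
    service glusterd start
    gluster peer probe server2    # any surviving peer; resyncs peer and volume info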
16:50 kaptk2 joined #gluster
16:50 lpabon joined #gluster
16:51 Gilbs thank you.
16:52 torrancew left #gluster
16:54 Gilbs Nothing like discovering our OS drive is non-raided :)
17:07 glusterbot New news from newglusterbugs: [Bug 1023638] Change location of volume configuration files <http://goo.gl/ExD5Cs>
17:13 rotbeard joined #gluster
17:19 JoeJulian social: I was able to duplicate that, fyi.
17:23 failshel_ joined #gluster
17:28 polfilm joined #gluster
17:32 dbruhn Who maintains the documentation for the projects?
17:32 dbruhn s/projects/project
17:32 JoeJulian Did you just volunteer? ;)
17:32 dbruhn I actually would help if I had a little direction to start.
17:33 dbruhn But I was going to offer a blurb about partitions and a warning about log files overrunning the /var partition
17:33 JoeJulian We had a volunteer fork the docs, do a bunch of work, and then it's just kind-of sat there ever since.
17:33 JoeJulian Let me find that again...
17:33 kkeithley It's a wiki, be bold.
17:33 JoeJulian Well, there's that too.
17:34 dbruhn hahah
17:34 kkeithley Fortune favors the bold.
17:34 dbruhn Well, let me know how I can help. I was thinking the official documentation, but whatever the project needs that I can help with I will
17:34 kkeithley To ask permission is to seek denial.
17:34 dbruhn Whats the link to the wiki?
17:34 kkeithley It's better to ask for forgiveness than permission
17:34 JoeJulian kkeithley: imho, bug 1023638 should be closed as invalid/wontfix
17:34 glusterbot Bug http://goo.gl/ExD5Cs unspecified, unspecified, ---, amarts, NEW , Change location of volume configuration files
17:35 dbruhn hahah, that's the one I want to warn about
17:35 dbruhn that damn issue has bit me in the ass several times
17:35 JoeJulian hehe, I didn't notice it was yours. :D
17:35 elyograg I read that this error message during a rebalance is not actually a failure condition.  http://fpaste.org/50219/30680941/  Running 3.3.1, is this truly not a problem?
17:35 aixsyd :(
17:35 glusterbot Title: #50219 Fedora Project Pastebin (at fpaste.org)
17:36 dbruhn JoeJulian, I agree, making it its own partition is the best option.
17:36 JoeJulian dbruhn: And I agree. log directories filling stuff up sucks. Even when I do logs in their own partition, it still sometimes sucks when it fills.
17:37 glusterbot New news from newglusterbugs: [Bug 1024465] Dist-geo-rep: Crawling + processing for 14 million pre-existing files take very long time <http://goo.gl/BxNBkc>
17:38 JoeJulian elyograg: right, not truly a problem. It's saying that moving it would actually imbalance the used space, so it's avoiding that. If you want it to proceed anyway, use "start force".
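That is, to override the free-space heuristic (volume name hypothetical):

    gluster volume rebalance myvolume start force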
17:39 dusmant joined #gluster
17:40 dbruhn Had an idea the other day that maybe you guys could weigh in on, maybe useful, maybe not. But a hybrid approach to DHT and striping, where DHT is utilized and only when a file is going to fill a brick up does it stripe the file across multiple bricks. So a DHT+Stripe+AFR volume
17:40 JoeJulian dbruhn: Unfortunately, I'm afraid that the only way one learns to partition the logs is to get bit in the ass. I would blog about it, but the only people that would read it are the ones that have already learned that.
17:41 dbruhn I was just thinking a small warning in the initial setup documentation would be useful, for the people who are setting it up. I will write it and submit it to wherever it needs to go.
17:42 social hmm second time I saw gluster crash on production with this looping in logs  E [glusterd-op-sm.c:5261:glusterd_op_sm] 0-management: handler returned: -1 E [glusterd-utils.c:362:glusterd_unlock] 0-management: Cluster lock not held! :(
17:42 social looks like I have some tracing to do =[
17:43 chirino joined #gluster
17:46 chirino joined #gluster
17:47 jruggiero joined #gluster
17:47 jruggiero left #gluster
17:58 Gilbs left #gluster
18:00 dbruhn There, updated the wiki, if someone wants to review it and make sure it reads well enough.
18:00 dbruhn On the install guide
18:02 failshell joined #gluster
18:05 lpabon joined #gluster
18:07 glusterbot New news from newglusterbugs: [Bug 1024472] optimize geo-replication changelog processing <http://goo.gl/E3yC49> || [Bug 1024467] Dist-geo-rep : Change in meta data of the files on master doesn't get propagated to slave. <http://goo.gl/sQ8vGe>
18:23 kr1ss joined #gluster
18:32 B21956 joined #gluster
18:54 Technicool joined #gluster
18:58 chirino joined #gluster
19:01 chirino joined #gluster
19:21 jruggiero joined #gluster
19:24 difeta joined #gluster
19:35 Qu_Bits joined #gluster
19:50 go joined #gluster
19:53 go2k Hey guys, I've noticed I've got issues with my 1x3 replication setup: file access is very slow. Is there a way to measure file access time, and any way to improve it?
19:54 go2k Obviously restarting gluster helps, but that's just a workaround :)
19:55 dbruhn go2k, tell us more about your environment? and how much does a restart improve things?
19:59 go2k I've got my Apache docroot on the gluster mountpoint. The other 2 nodes are in different physical locations, but the link between them is 10 Gbps.
19:59 go2k For instance, when I launch Firebug the load time oscillates around 1 minute.
19:59 go2k or 50 something seconds.
19:59 dbruhn and after you restart?
19:59 go2k Let me do that now.
20:00 go2k so- httpd stop, umount the mountpoint, gluster restart
20:01 go2k and back online
20:02 go2k Load time avg is now 8-10 seconds (it's fine for me, I know that having a docroot on gluster is maybe not the best idea :) )
20:02 Remco It works, as long as you make sure files don't get stat'ed all the time
20:02 dbruhn is this all php stuff?
20:02 Remco As in, cache that
20:02 go2k yeah it's php
20:02 go2k + mysql
20:03 dbruhn agh, yeah, php stats the files every time, which causes a self-heal check
20:03 go2k yeah, ok that explains 8 seconds, but that's fine for me
20:03 go2k but not 56 :)
20:03 dbruhn you would be better off using a php caching mechanism to reduce the number of times a file gets stat'ed
20:04 Remco Even that is problematic though, since apc won't cache stats for files with relative paths
20:04 go2k it's wordpress
20:04 go2k basically
20:07 go2k yeah, but I can take that 10 seconds, I understand that files get stat'ed... although why after some time do I get a 2 minute 30 second delay?
20:08 dbruhn That I am not sure about to be honest, assuming you are using the FUSE client?
20:08 go2k or maybe I'll ask differently, is there any gluster option I could possibly set to improve it ?
20:08 go2k yeah I do
20:10 rwheeler joined #gluster
20:11 chirino joined #gluster
20:12 jruggiero left #gluster
20:13 JoeJulian gluster version?
20:14 go2k 3.3.2
20:16 JoeJulian Is your session data in files on the gluster volume?
20:17 go2k not sure what you mean, but when I set it up I did everything in the gluster console, which pretty much narrows it down to the gluster volume I guess
20:21 harish joined #gluster
20:22 dbruhn go2k, I think JoeJulian is asking if your php session data is being stored on the gluster volume.
20:24 go2k oh sorry, yes it is but I can change it if that helps
20:24 go2k does not hurt to try :)
20:27 JoeJulian I strongly recommend using memcached for sessions.
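A minimal php.ini sketch of that suggestion, assuming the php 'memcached' extension and a local memcached on the default port (the older 'memcache' extension uses save_handler = memcache and a tcp:// URL instead):

    ; php.ini
    session.save_handler = memcached
    session.save_path = "127.0.0.1:11211"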
20:32 harish joined #gluster
20:34 JoeJulian go2k: btw, readdirplus support in 3.4 is a decent improvement for php performance.
20:37 harish joined #gluster
20:44 go2k JoeJulian: thanks
20:44 go2k I found out though that this is where my session files are
20:44 go2k session.save_path = "/var/lib/php/session"
20:44 go2k and that's out of my gluster mount
20:45 VerboEse joined #gluster
20:47 nueces joined #gluster
20:49 JoeJulian Are you getting into swap, maybe, when it's that slow?
20:53 go2k hold on, I'll check this out
21:02 bstr_ joined #gluster
21:05 go2k JoeJulian: looks better now
21:05 go2k let's see if that was that, swapping
21:06 ninkotech__ joined #gluster
21:08 sysconfig joined #gluster
21:10 go2k JoeJulian: bingo, this is it. As soon as the system starts swapping, the lag starts. Guess I will have to take care of the memory then.
21:10 baoboa joined #gluster
21:10 go2k I noticed that if I open 10 windows and refresh my website I have got less and less memory
21:10 go2k but that's another issue.
21:10 go2k Thanks a lot all of you for your help.
21:11 semiosis you can of course tune the number of threads in apache & the max memory in php to try to keep it under your available memory limit
21:11 go2k yeah doing that now :)
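A rough sketch of that tuning, assuming Apache's prefork MPM (numbers are illustrative; size MaxClients so that worst-case per-process memory still fits in RAM):

    # httpd.conf
    <IfModule mpm_prefork_module>
        MaxClients  20          # cap concurrent worker processes
    </IfModule>

    ; php.ini
    memory_limit = 128M         ; cap per-request PHP memory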
21:14 Guest54292 joined #gluster
21:14 stickyboy joined #gluster
21:14 T0aD joined #gluster
21:14 NuxRo joined #gluster
21:14 vpagan joined #gluster
21:14 klaxa joined #gluster
21:14 rubbs joined #gluster
21:14 ThatGraemeGuy joined #gluster
21:14 foster joined #gluster
21:14 efries_ joined #gluster
21:14 eightyeight joined #gluster
21:14 ke4qqq joined #gluster
21:38 glusterbot New news from newglusterbugs: [Bug 1023191] glusterfs consuming a large amount of system memory <http://goo.gl/OkQlS3>
21:51 zaitcev joined #gluster
21:58 DV__ joined #gluster
22:23 daMaestro joined #gluster
22:34 ira joined #gluster
23:18 glusterbot New news from resolvedglusterbugs: [Bug 852869] volume heal info shows mostly gfid not filenames <http://goo.gl/MjsoL>
23:24 VerboEse joined #gluster
23:51 Gugge_ joined #gluster
23:53 samppah joined #gluster
23:53 Xunil__ joined #gluster
23:53 GLHMarmo1 joined #gluster
23:53 kopke_ joined #gluster
23:53 __NiC joined #gluster
23:53 roidelap1uie joined #gluster
23:57 mibby- joined #gluster
23:59 yinyin joined #gluster
