
IRC log for #gluster, 2017-03-17


All times shown according to UTC.

Time Nick Message
00:35 arpu joined #gluster
00:58 shdeng joined #gluster
01:02 shdeng joined #gluster
01:02 gyadav joined #gluster
01:07 vbellur joined #gluster
01:12 moneylotion joined #gluster
01:27 daMaestro joined #gluster
01:43 gyadav joined #gluster
02:04 shdeng joined #gluster
02:04 kramdoss_ joined #gluster
02:09 gyadav joined #gluster
02:19 shdeng joined #gluster
02:20 derjohn_mob joined #gluster
02:26 masber joined #gluster
02:35 pioto joined #gluster
02:35 msvbhat joined #gluster
02:46 vinurs joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 shdeng joined #gluster
03:03 Gambit15 joined #gluster
03:09 msvbhat joined #gluster
03:13 rafi joined #gluster
03:19 oajs joined #gluster
03:40 rejy joined #gluster
03:53 itisravi joined #gluster
03:58 armyriad joined #gluster
04:13 skumar joined #gluster
04:22 vinurs joined #gluster
04:23 DevSidious joined #gluster
04:25 nthomas joined #gluster
04:29 RameshN joined #gluster
04:44 skoduri joined #gluster
04:45 buvanesh_kumar joined #gluster
04:48 poornima joined #gluster
04:50 matt_ joined #gluster
04:51 atm0sphere joined #gluster
04:59 susant joined #gluster
05:00 Prasad joined #gluster
05:04 sbulage joined #gluster
05:05 susant joined #gluster
05:05 mb_ joined #gluster
05:09 karthik_us joined #gluster
05:09 sanoj joined #gluster
05:11 buvanesh_kumar joined #gluster
05:15 Shu6h3ndu joined #gluster
05:19 ndarshan joined #gluster
05:22 mb_ joined #gluster
05:28 vinurs joined #gluster
05:36 hgowtham joined #gluster
05:40 riyas joined #gluster
05:41 Saravanakmr joined #gluster
05:42 apandey joined #gluster
05:44 Karan joined #gluster
05:55 tom[] joined #gluster
05:55 skoduri joined #gluster
05:58 poornima joined #gluster
06:01 buvanesh_kumar joined #gluster
06:02 itisravi joined #gluster
06:04 apandey joined #gluster
06:05 ppai joined #gluster
06:07 ankush joined #gluster
06:10 vinurs joined #gluster
06:11 susant joined #gluster
06:15 k4n0 joined #gluster
06:18 buvanesh_kumar joined #gluster
06:22 tom[] joined #gluster
06:23 rafi joined #gluster
06:26 daMaestro joined #gluster
06:31 ashiq joined #gluster
06:36 sona joined #gluster
06:44 poornima joined #gluster
06:45 mhulsman joined #gluster
06:47 hgowtham joined #gluster
06:47 kdhananjay joined #gluster
06:54 d0nn1e joined #gluster
07:01 msvbhat joined #gluster
07:13 itisravi joined #gluster
07:13 Humble joined #gluster
07:20 Vytas_ joined #gluster
07:20 karthik_us joined #gluster
07:22 jtux joined #gluster
07:27 karthik_us joined #gluster
07:30 apandey joined #gluster
07:33 aardbolreiziger joined #gluster
07:35 sbulage joined #gluster
07:37 Klas [MSGID: 113001] [posix.c:5415:_posix_handle_xattr_keyvalue_pair] 0-icinga-lab-posix: getxattr failed on /local/glusterfs/storage_pool/icinga-lab/brick01/.glusterfs/c2/60/c2605d35-f4f5-41ab-8918-1c3853a1063f while doing xattrop: Key:trusted.afr.dirty  [No such file or directory]
07:37 Klas What does this mean?
07:38 Klas it looks like metadata for file is broken, this is on the arbiter
07:38 Klas shouldn't this be easily resolved by the two other nodes?
07:38 Humble joined #gluster
07:39 Klas the file and directory are clearly missing
07:57 [diablo] joined #gluster
08:16 mbukatov joined #gluster
08:21 Klas we fixed the issue by unmounting and mounting on all clients
08:21 Klas should the clients be able to lock servers from fixing the files to a synchronized state?
08:28 sanoj joined #gluster
08:33 anbehl joined #gluster
08:42 sanoj joined #gluster
08:45 anbehl joined #gluster
08:46 ivan_rossi joined #gluster
08:48 anbehl joined #gluster
08:49 flying joined #gluster
08:50 itisravi Klas: is /local/glusterfs/storage_pool/icinga-lab/brick01/.glusterfs/c2/60/c2605d35-f4f5-41ab-8918-1c3853a1063f present on the other 2 bricks?
08:52 Klas it was only present on 1
08:52 Klas there seemed to be issues with reading the corresponding "real" file on all three server nodes as well
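To see why self-heal could not reconcile a file like this, it helps to compare the AFR changelog xattrs on that gfid path on every brick. A minimal sketch (run as root on each brick host; the path is the one from the log above, and trusted.afr.* are the standard AFR changelog xattrs):

    import os

    # gfid backend path taken from the log message above; adjust per brick
    GFID_PATH = ("/local/glusterfs/storage_pool/icinga-lab/brick01/"
                 ".glusterfs/c2/60/c2605d35-f4f5-41ab-8918-1c3853a1063f")

    if not os.path.lexists(GFID_PATH):
        print("gfid hard link missing on this brick")
    else:
        # trusted.afr.* are the changelog xattrs self-heal relies on
        for name in os.listxattr(GFID_PATH):
            if name.startswith("trusted.afr") or name == "trusted.gfid":
                print(name, os.getxattr(GFID_PATH, name).hex())

If the gfid link or the xattrs differ between the data bricks and the arbiter, "gluster volume heal <volname> info" on any server should list the affected entries.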
08:54 k4n0 joined #gluster
08:54 level7_ joined #gluster
08:57 scoban joined #gluster
09:01 Seth_Karlo joined #gluster
09:01 Seth_Kar_ joined #gluster
09:02 ashiq joined #gluster
09:04 bartden joined #gluster
09:05 bartden Hi, i want to create a multi master setup. I have two offices where i want to setup a NAS environment. But employees working at site A should use gluster volumes residing in site A and employees at site B use volumes at site B. Volumes of course have to be in sync. Is this possible with glusterfs?
09:21 karthik_us joined #gluster
09:32 mhulsman joined #gluster
09:36 rafi joined #gluster
09:40 prasanth joined #gluster
09:43 atinm joined #gluster
09:48 Klas bartden: both sides need to write?
09:49 bartden yes, preferably
09:49 karthik_us joined #gluster
09:50 Klas I *think* that the answer is no then, georeplication is toward slaves unless I'm mistaken
09:50 Klas pretty much a noob about it though
09:50 bartden and what if it is only read, and write to the master of the geo rep?
09:50 Klas then it works fine
09:51 Klas if some asynchronicity is fine
09:51 Klas (up to ten minutes behind, generally)
10:07 ankitr joined #gluster
10:11 Clone why can't you set up 2+1 over a two-datacenter setup? It is possible, but possibly you get poor performance because of the writes to all nodes.
10:11 Clone there is no reason why it shouldnt work, or is there?
10:13 rastar joined #gluster
10:19 jkroon joined #gluster
10:21 bartden Clone do you mean distributed setup over two dc’s and one geo replication as backup?
10:23 msvbhat joined #gluster
10:27 cloph bartden: either you setup four volumes, (A → georepofAonB, B → georepofBonA)
10:27 ankitr joined #gluster
10:28 cloph or you create a replicated volume using A and B, but depending on your network-environment/latency this might not work out performance wise.
10:28 Clone bartden: no, I mean a replicated 2+1 setup.
10:29 anbehl joined #gluster
10:29 cloph it all depends on what your workload is, how frequently files are changed/accessed, whether  it is a few large files or tons of small ones. and your requirements on how fast the files need to be in-sync.
10:29 cloph (and in the case of the geo-replication case, the slave should be read-only, so this might impose additional limitations that rule this solution out for you)
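A rough sketch of the first option cloph describes (each site is master of its own volume and geo-replicates it to a read-only copy at the other site). The volume and host names are invented for illustration, and it assumes the geo-replication ssh keys have already been distributed; the commands are driven through the standard gluster CLI:

    import subprocess

    def georep(*args):
        # gluster volume geo-replication <MASTER> <SLAVEHOST>::<SLAVEVOL> ...
        subprocess.run(["gluster", "volume", "geo-replication", *args],
                       check=True)

    # run on a site-A node: push siteA-vol to its read-only copy on site B
    georep("siteA-vol", "nodeB1::siteA-vol-copy", "create", "push-pem")
    georep("siteA-vol", "nodeB1::siteA-vol-copy", "start")
    # the mirror session (siteB-vol -> nodeA1::siteB-vol-copy) is created
    # the same way from a site-B node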
10:31 bartden hmm ok, but in your first suggestion. content of Volume A is the same content of Volume B …All clients need to access the same content, only access it within their location
10:31 cloph this won't fly with gluster then.
10:32 bartden ok thx for the info
10:33 derjohn_mob joined #gluster
10:36 RameshN_ joined #gluster
10:37 cloph I mean it might work, but again:  depends on how much performance penalty due to the network latency is OK for your usecase.
10:42 ankitr joined #gluster
10:42 ankitr joined #gluster
10:44 Clone cloph: it won't fly because of performance? say you have low latency dedicated fiber between the datacenters.. does it still not fly?
10:44 cloph Well, if he doesn't give any info on the network, even when asked about it, then I assume no special/dedicated network.
10:44 mhulsman joined #gluster
10:45 mhulsman joined #gluster
10:46 cloph also no word about the actual usecase, so yes, you can have a system where both datacenter A and datacenter B have the same data in read-write mode. But whether that floats your boat is up to circumstances.
10:48 EO_ joined #gluster
10:48 Clone hehe, we have this usecase. this exact usecase.
10:49 Clone and it works, but without arbiter and server and client quorum, we get split brains and tons of files that cannot be healed.
10:50 cloph oh, quorum is different issue, sure, for not running into server quorum problems you should have at least three peers in the cluster.
10:50 cloph But whether  it makes sense to split the replication across the two datacenters is more of the question.
10:51 cloph maybe better to just have it in center A and only mount it on B - but again, depends on setup and requirements.
10:55 Clone what do you mean mount it on B.. we use gluster because we want a DR site with RTO RPO near zero :)
10:56 * cloph won't bother looking up those letter combinations...
11:04 Clone ow, sorry... no dataloss and instant up on the other side.
11:16 Klas Clone: the point is that a mounted client is fully possible on the off-site
11:16 Klas but then it's not locally accessible, of course
11:16 Klas the data is basically secure though, but, if the other site is down, the share is down
11:17 TvL2386 joined #gluster
11:17 Klas you could combine a read-only copy with a read-write mount towards mainsite
11:17 Klas but people will use this incorrectly and it will break, since users are clumsy
11:17 Clone hehe.
11:18 Clone but why (except for performance) can't you use a replicated setup?
11:18 Clone over 2 datacenter.
11:19 Clone I am dying to get the answer to that question. Is gluster not robust enough to tolerate latency (race conditions and such?)..
11:19 cloph no reason other than performance, and maybe increased risk of losing network connection between the two
11:19 karthik_us joined #gluster
11:20 Clone sure, but does losing that network connection do anything bad other than creating files that need to be healed.
11:20 cloph if connectivity is bad, you're running higher risk of running into split-brain/need to heal, esp. if the bandwidth/network connection is not reserved for gluster
11:20 Clone why split brain.
11:21 Clone we have quorum checking in place.
11:21 cloph if quorum is met, then no split brain.
11:21 Clone (server & client)
11:21 Clone if quorum is not met, clients get read-only, and for server, the brick process goes down.
11:21 cloph But if you talk about about two peers only, then failing to meet quorum is easy...
11:21 Clone no, I say 2+1
11:22 Clone we have an arbiter.
11:22 cloph still doesn't say anything about the number of peers or how they are laid out.
11:24 Clone we have two multi volume servers and an arbiter in one of the datacenters on a different node.
11:25 Klas the only issue is poor performance
11:25 Clone so datacenter A: peer + arbiter, B: peer.
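For reference, the 2+1 layout Clone describes (one data brick per datacenter plus an arbiter in datacenter A), with the client and server quorum behaviour discussed above, would look roughly like this; host names, brick paths and the volume name are placeholders:

    import subprocess

    def gluster(*args):
        subprocess.run(["gluster", *args], check=True)

    gluster("volume", "create", "dr-vol", "replica", "3", "arbiter", "1",
            "dcA-node1:/bricks/dr-vol/brick",
            "dcB-node1:/bricks/dr-vol/brick",
            "dcA-arbiter:/bricks/dr-vol/brick")
    # client-side quorum: the mount turns read-only when quorum is lost
    gluster("volume", "set", "dr-vol", "cluster.quorum-type", "auto")
    # server-side quorum: brick processes stop when peer quorum is lost
    gluster("volume", "set", "dr-vol", "cluster.server-quorum-type", "server")
    gluster("volume", "start", "dr-vol")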
11:25 Clone ok.
11:25 Klas since every stat incurs the full latency
11:25 Clone yes.
11:25 Klas sequentially
11:25 Klas that is generally frowned upon
11:25 Clone what do you mean sequentially?
11:25 Clone it has to wait for one to finish?
11:26 Klas that is my impression at least
11:26 Klas otherwise it would go about as fast to stat 1000 files as 1 file
11:26 Clone well, yeah I can understand why. Ideally you don'
11:26 Klas not 1000 as long
11:27 cloph Klas: should be close to that if the files are in the same directory and you have the readdir optimizations turned on, shouldn't it?
11:27 Clone t want writes or reads to be dependent on the other side.. However we use this also with galera mysql clusters without problems.
11:27 Klas cloph: dunno =)
11:27 Klas I didn't even know of readdir optimization
11:28 Klas it sounds interesting
11:28 Klas Clone: dbs should be fine since it's generally few files
11:28 Klas at least if it doesn't require low-latency, of course
11:31 Clone readdir opt is bug free? I thought there were directory inconsistencies with it..
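The readdir optimizations cloph mentions map to a couple of volume options; a small sketch with a placeholder volume name, and with Clone's caveat in mind that some releases had directory-listing bugs with them, so test on the version in use:

    import subprocess

    for option, value in [("performance.readdir-ahead", "on"),
                          ("cluster.readdir-optimize", "on")]:
        subprocess.run(["gluster", "volume", "set", "myvol", option, value],
                       check=True)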
11:35 ashiq joined #gluster
11:45 ahino joined #gluster
11:50 jkroon joined #gluster
11:50 mhulsman joined #gluster
11:57 mhulsman joined #gluster
11:57 RameshN_ joined #gluster
11:59 ankush joined #gluster
12:06 mhulsman joined #gluster
12:19 kpease joined #gluster
12:25 mhulsman joined #gluster
12:30 mhulsman joined #gluster
12:30 mhulsman joined #gluster
12:39 mhulsman joined #gluster
12:41 plarsen joined #gluster
12:52 prasanth joined #gluster
12:53 ankitr joined #gluster
12:54 prasanth joined #gluster
12:55 ira joined #gluster
12:56 sona joined #gluster
12:57 ankush joined #gluster
13:02 mb_ joined #gluster
13:13 vbellur joined #gluster
13:14 unclemarc joined #gluster
13:17 derjohn_mob joined #gluster
13:17 poornima joined #gluster
13:19 mhulsman1 joined #gluster
13:20 nishanth joined #gluster
13:21 squizzi joined #gluster
13:22 mhulsman joined #gluster
13:30 shyam joined #gluster
13:31 skylar joined #gluster
13:33 vbellur joined #gluster
13:33 shyam joined #gluster
13:37 rafi joined #gluster
13:51 morganb left #gluster
13:51 shaunm joined #gluster
14:28 skylar joined #gluster
14:43 shyam joined #gluster
14:45 pat972 joined #gluster
14:45 pat972 Hi
14:45 glusterbot pat972: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:47 pat972 I have an issue where programs create an extra line of special character ^@ at the end of the file. This happens under load; running the program manually does not produce it.
14:47 farhorizon joined #gluster
14:48 pat972 I did an strace on it , and the difference is ENOTTY (Inappropriate ioctl for device) then lseek(1, 0, SEEK_CUR^@^@^@^@)
14:50 pat972 The strace is the same on a panfs filesystem, but it never produces this extra line on the file.
14:50 ankitr joined #gluster
14:52 mhulsman1 joined #gluster
14:55 squizzi_ joined #gluster
14:59 mhulsman joined #gluster
15:06 rwheeler joined #gluster
15:06 wushudoin joined #gluster
15:07 mhulsman joined #gluster
15:07 mhulsman joined #gluster
15:22 shyam joined #gluster
15:25 ankitr joined #gluster
15:34 atinm joined #gluster
15:46 gem joined #gluster
15:52 ju5t joined #gluster
15:53 Seth_Karlo joined #gluster
15:59 StormTide joined #gluster
16:00 StormTide anyone know how to setup timeouts for io operations in gluster's fuse client. we had a bad crash last night (not glusters fault, digitalocean crashed their block storage system)... but it didnt handle all bricks going away at the same time gracefully
16:03 ahino joined #gluster
16:11 arpu joined #gluster
16:13 squizzi_ joined #gluster
16:15 ankush joined #gluster
16:20 buvanesh_kumar joined #gluster
16:23 StormTide not talking peer/ping timeout for clarity (the bricks or daemons never actually went down, they just became io blocked 100%.) ... but the clients then themselves io blocked and it went sideways fast. CPU ended up in 100% iowait state
16:25 riyas joined #gluster
16:33 jiffin joined #gluster
16:39 jiffin joined #gluster
16:42 mlg9000 joined #gluster
16:45 JoeJulian StormTide: glusterfsd has routines to detect brick failure. I'd check the brick logs and see if you can figure out why that didn't happen. If you can, that sounds like a good bug report.
16:45 pat972 joined #gluster
17:00 ju5t hello, if a brick doesn't want to start on port 49152 (it gives a connection refused on its own ip address apparently), what could be the reason for that?
17:01 shyam joined #gluster
17:02 ju5t the management layer is all working, a gluster peer status says its connected but because it failed to start up the process that's supposed to listen on 49152 we can't access the data on that brick
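ju5t's question goes unanswered in the log; a hedged first pass is to ask gluster which port and state it reports for the brick, read the brick log on that host, and try a forced volume start, which respawns brick processes that are not running. The volume name and log file name below are placeholders:

    import subprocess

    # shows each brick's TCP port, online state and PID
    subprocess.run(["gluster", "volume", "status", "myvol"], check=True)
    # brick logs live under /var/log/glusterfs/bricks/ on the brick host;
    # the file is named after the brick path with slashes turned into dashes
    subprocess.run(["tail", "-n", "50",
                    "/var/log/glusterfs/bricks/data-myvol-brick01.log"])
    # respawns any brick process that is not running, without touching live ones
    subprocess.run(["gluster", "volume", "start", "myvol", "force"], check=True)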
17:02 StormTide JoeJulian: lemme take a look. but its mostly the client hang im worried about
17:04 atinm joined #gluster
17:05 JoeJulian StormTide: If the server detects the hang and stops, the client won't hang waiting on the server.
17:10 StormTide JoeJulian: thats definitely not what happened
17:10 JoeJulian But that's why I'm focusing on the server.
17:10 StormTide the client maintained connections to gluster on the failed servers but blocked all io
17:10 StormTide so the webserver blew up with like 900 load lol
17:11 StormTide im lookin at the servers, very little logged about other than me trying to recover
17:11 StormTide but i think this might have triggered readv on /var/run/gluster/.<redacted>.sock failed (No data available)
17:11 StormTide but its hard to tell what caused what errors, as once it io locked up, it was basically unresponsive
17:12 StormTide the cpu blocked the tasks on iowait i think
17:12 JoeJulian Ah, so even /var/lib was blocked?
17:12 StormTide no
17:12 StormTide just the bricks
17:12 StormTide like i could login and ssh and look around, but the bricks you couldnt even ls
17:12 StormTide and anything that tried to touch that path iowait blocked
17:13 JoeJulian I know the original intention was to be able to detect an unresponsive physical drive. I wonder what would make a virtual drive any different...
17:13 StormTide it was a pretty simple failure really. the bricks on on digitaloceans block storage feature and their block storage cluster went offline
17:13 StormTide but they didnt return an io error/disconnected
17:13 JoeJulian Don't they run ceph? I think...
17:13 StormTide they just blocked
17:14 StormTide infinite iowait as far as i could tell
17:14 rwheeler joined #gluster
17:14 StormTide im not sure what their underlying tech is tbh
17:15 StormTide they tell me it was a failed switch that took the storage cluster offline, so just network timeout/no route to host...
17:15 StormTide but their devices are mounted as if physical drives, not as netdev or anything
17:15 StormTide like they present via the hypervisor as /dev/sda etc
17:15 JoeJulian Yeah
17:15 StormTide so really would just act like a frozen block device
17:16 StormTide its the client io timeout i really want to fix though, as i can handle the glusters going away, it wouldnt have really hurt us that much because everything is cached up front
17:16 JoeJulian That's why I expected it to function the same as the hard drive detection. Maybe the hard drive failure detection isn't working. I'd file a bug report and include the brick logs.
17:16 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
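The brick-failure detection JoeJulian refers to is, as far as I know, the posix health checker inside the brick process, governed by a volume option; a sketch for inspecting and tightening it, with the volume name and interval as placeholders:

    import subprocess

    subprocess.run(["gluster", "volume", "get", "myvol",
                    "storage.health-check-interval"], check=True)
    # probe the brick's backing filesystem more often, so a hung block device
    # is noticed sooner and the brick process takes itself down
    subprocess.run(["gluster", "volume", "set", "myvol",
                    "storage.health-check-interval", "10"], check=True)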
17:17 JoeJulian That would be fuse and that would be a kernel thing.
17:17 StormTide hrm
17:18 JoeJulian That's why it was implemented server-side.
17:18 StormTide yah because if it had just promptly returned an io error, and not blocked, it would have continued to chug along except without cold cache access for imagery heh
17:18 StormTide instead it blocked every apache worker iowait style
17:20 JoeJulian So I guess the answer to your question is, no. It can't do that.
17:21 StormTide drats
17:21 StormTide so on the server side, yah, nothing logged until i rebooted the box
17:22 StormTide going through my notes, the bricks showed as online when status was checked, even though they were definitely offline
17:22 StormTide so yah, its like gluster never saw the problem
17:23 JoeJulian file a bug. I remember when this first came up. A lot of thought and effort was put in to it. I'm sure they'd like it to have worked.
17:23 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:23 StormTide hrm i dont have a redhat login... i'll do that later when i figure out how to test/reproduce reliably
17:24 StormTide i might be able to rig something up via systemd to kill gluster
17:25 oajs joined #gluster
17:34 StormTide JoeJulian: is there any sort of scripting api for gluster that can check things like latency without going through the kernel/fuse mounter.. like i'd like to setup an advisory task that measures some stats re the gluster connection from a client perspective, but if i go via the kernel and it ioblocks it'll just suffer the same outage as the fuse mount
17:35 StormTide last night i couldnt even umount the thing, i had to actually kill the gluster task which then dropped all the io locks.
17:36 JoeJulian There are hooks, but that's probably not what you're asking for.
17:36 JoeJulian Oh, and that's glusterd hooks, too, so nothing client-side.
17:36 StormTide no probably not
17:36 StormTide thinking more like a thin client that speaks the protocol you guys are using
17:37 StormTide returns some debugging data kinda thing
17:37 JoeJulian Oh, libgfapi is what you're looking for.
17:37 StormTide aha, ok lemme rtfm on that
17:37 StormTide thx
17:39 StormTide hrm, looks like the language bindings page didnt survive the doc migration... any idea where the python bindings docs went
17:40 StormTide http://www.gluster.org/community/documentation/index.php/Language_Bindings <--- former url
17:40 glusterbot StormTide: <-'s karma is now -6
17:40 glusterbot Title: Documentation has moved! — Gluster (at www.gluster.org)
17:43 major glusterbot hates arrows
17:43 JoeJulian https://gluster.readthedocs.io/en/latest/ but I'm not finding the gfapi documentation either.. <grumble>
17:43 glusterbot Title: Gluster Docs (at gluster.readthedocs.io)
17:44 StormTide major apparently lol. habit of mine.
17:44 major heh
17:44 major gonna give the poor lil minion a facial twitch
17:45 StormTide JoeJulian: think i found it by searching github...
17:45 StormTide http://libgfapi-python.readthedocs.io/en/latest/api-reference.html
17:45 glusterbot Title: API Reference — libgfapi-python (at libgfapi-python.readthedocs.io)
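A minimal responsiveness probe over the python bindings StormTide found above, so the check itself never touches the fuse mount that hung. Host and volume names are placeholders, and the method names should be verified against the installed libgfapi-python version; run it under an external timeout (e.g. "timeout 10 python gluster-probe.py" from cron or a systemd timer) and have the watchdog act when it exits non-zero or is killed:

    import sys
    from gluster import gfapi

    # external timeout keeps a hung libgfapi call from wedging the probe
    vol = gfapi.Volume("gluster-node1", "myvol")
    vol.mount()
    vol.stat("/")          # one cheap round-trip to the bricks
    print("volume responsive")
    sys.exit(0)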
17:53 JoeJulian amye: Looks like the gfapi documentation never made the transition from the wiki to glusterdocs. Can you help?
17:54 amye JoeJulian, arrgh, that's going to be an -infra bug.
17:54 amye Because the community wiki stuff got lost in a redirect that happened months ago
17:55 amye I'm in a meeting but I can take a look at it in ~5
17:58 vbellur joined #gluster
18:00 ivan_rossi left #gluster
18:07 Tanner_ joined #gluster
18:08 StormTide JoeJulian: any idea what gcollect is and where its hosted? http://lists.gluster.org/pipermail/gluster-users/2011-July/008297.html ...
18:08 glusterbot Title: [Gluster-users] anyone else playing with gcollect yet? (at lists.gluster.org)
18:11 mb_ joined #gluster
18:17 rastar joined #gluster
18:21 msvbhat joined #gluster
18:58 riyas joined #gluster
19:13 ira joined #gluster
19:19 IRCFReAK joined #gluster
19:28 Acinonyx joined #gluster
19:30 atinm joined #gluster
19:30 shyam joined #gluster
19:33 MrAbaddon joined #gluster
19:38 shaunm joined #gluster
19:38 Wizek_ joined #gluster
19:48 farhorizon joined #gluster
19:50 derjohn_mob joined #gluster
19:53 amye StormTide, I hate to do this to you, but can you put in a docs bug around the language bindings URL and where you found it? https://bugzilla.redhat.com/enter_bug.cgi?product=Gluster-Documentation
19:53 glusterbot Title: Log in to Red Hat Bugzilla (at bugzilla.redhat.com)
19:54 amye As far as the gcollect stuff, that's 2011 and may be deprecated, that was back in 3.2 days - but take it to gluster-users@ to get a wider group looking at it
19:57 jkroon joined #gluster
20:03 Seth_Karlo joined #gluster
20:09 rwheeler joined #gluster
20:30 k4n0 joined #gluster
20:53 StormTide amye, i found the url through searching the docs site iirc... i dont have a redhat account to file bugs but i'll get around to that at some point here... im kinda swamped today trying to tie this off before the weekend.
20:53 StormTide amye: https://readthedocs.org/search/?q=libgfapi&type=file&project=gluster&version=latest
20:54 StormTide and google is indexing your doc staging site... https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/ ... might need robots.txt or something
20:54 glusterbot Title: libgfapi - Gluster Docs (at staged-gluster-docs.readthedocs.io)
20:54 JoeJulian readthedocs.org search is complete garbage.
20:55 StormTide yah, finding docs has been a bit of a struggle lol
20:56 JoeJulian I think that's why there's absolutely no organization to the docs on readthedocs, and everything is one big top-level mess.
20:56 IRCFReAK joined #gluster
20:56 StormTide JoeJulian: im thinking a collectd service/status monitor component might be the trick here... looking at what i can do with the python bindings but it looks like just basic open files n stuff
20:56 amye StormTide, that works for enough info. Will file on your behalf.
20:57 StormTide basically if i can figure out a way to get some responsiveness stats from the client side of things aside from the fuse mount, i can probably create a systemd job to stop gluster if the server goes away.
21:00 IRCFReAK joined #gluster
21:06 givemeparttt2000 joined #gluster
21:13 givemeparttt2000 joined #gluster
21:13 rastar joined #gluster
21:13 vbellur joined #gluster
21:14 d0nn1e joined #gluster
21:17 vbellur joined #gluster
21:18 givemeparttt2000 joined #gluster
21:18 k4n0 joined #gluster
21:19 ahino joined #gluster
21:31 lamer14897856317 joined #gluster
21:37 IRCFrEAK joined #gluster
21:59 k4n0 joined #gluster
22:23 IRCFrEAK joined #gluster
22:23 Seth_Karlo joined #gluster
22:27 IRCFrEAK joined #gluster
22:35 IRCFrEAK joined #gluster
22:55 IRCFrEAK joined #gluster
22:59 IRCFrEAK joined #gluster
23:24 IRCFrEAK joined #gluster
23:53 vbellur joined #gluster
