
IRC log for #gluster, 2013-03-20


All times shown according to UTC.

Time Nick Message
00:02 daMaestro stoile, xfs_check is what you are looking for
00:03 daMaestro semiosis, too
00:04 daMaestro we've had gluster create stuff that xfs_repair could not fix... that was fun
00:04 daMaestro luckily we had two independent namespaces so we were able to find and recover data... but had to reinit the failed cluster as ext3
00:04 daMaestro and now reading that ext wonderful.... ;-)
00:06 m0zes from the xfs_check manpage "Note that using xfs_check is NOT recommended. Please use xfs_repair -n instead, for better scalability and speed."
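
A minimal sketch of the check m0zes points to, assuming a brick filesystem on /dev/sdb1 mounted at /export/brick1 (both hypothetical paths); xfs_repair needs the filesystem unmounted first:

    umount /export/brick1
    xfs_repair -n /dev/sdb1     # no-modify mode: report problems, change nothing
    xfs_repair /dev/sdb1        # run again without -n only after reviewing the dry-run output
    mount /export/brick1
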
00:10 rwheeler joined #gluster
00:30 sjoeboo joined #gluster
00:34 jdarcy joined #gluster
00:35 semiosis @later tell daMaestro thanks for the xfs_check tip!
00:35 glusterbot semiosis: The operation succeeded.
00:35 semiosis m0zes: thank you as well
00:36 yinyin joined #gluster
00:45 jdarcy joined #gluster
01:00 madphoenix joined #gluster
01:01 sjoeboo_ joined #gluster
01:09 jules_ joined #gluster
01:18 yinyin joined #gluster
01:20 sjoeboo joined #gluster
01:37 yinyin joined #gluster
01:45 sjoeboo_ joined #gluster
01:48 madphoenix joined #gluster
01:57 kevein joined #gluster
02:02 sjoeboo joined #gluster
02:11 tyl0r joined #gluster
02:24 bulde joined #gluster
02:25 lkthomas anyone using Infiniband ?
02:25 Jedblack joined #gluster
02:39 bala joined #gluster
02:41 helloadam joined #gluster
02:44 raghug joined #gluster
02:49 lkthomas anyone still alive ?
02:58 sjoeboo_ joined #gluster
03:10 jdarcy joined #gluster
03:36 bharata joined #gluster
03:37 misuzu joined #gluster
03:40 hagarth_ joined #gluster
03:40 bulde joined #gluster
03:46 anmol joined #gluster
03:51 cyberbootje joined #gluster
04:03 JoeJulian lkthomas: Need a "hello" refresher? :D
04:03 JoeJulian hello
04:03 glusterbot JoeJulian: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:11 Jedblack joined #gluster
04:13 jbrooks joined #gluster
04:16 sgowda joined #gluster
04:19 sjoeboo joined #gluster
04:20 yinyin joined #gluster
04:20 shylesh joined #gluster
04:29 cyberbootje1 joined #gluster
04:40 pai joined #gluster
04:49 cyberbootje joined #gluster
04:56 tyl0r joined #gluster
04:59 saurabh joined #gluster
05:00 harshpb joined #gluster
05:02 Kins joined #gluster
05:03 hagarth joined #gluster
05:04 sahina joined #gluster
05:07 yinyin joined #gluster
05:14 glusterbot New news from newglusterbugs: [Bug 923540] features/compress: Compression/DeCompression translator <http://goo.gl/l5Y0Z>
05:20 sjoeboo joined #gluster
05:21 mohankumar joined #gluster
05:21 lalatenduM joined #gluster
05:29 vpshastry joined #gluster
05:30 vigia joined #gluster
05:35 bala joined #gluster
05:40 aravindavk joined #gluster
05:42 raghu joined #gluster
05:54 Jedblack joined #gluster
06:09 harshpb joined #gluster
06:10 satheesh joined #gluster
06:11 test joined #gluster
06:14 glusterbot New news from newglusterbugs: [Bug 923580] ufo: `swift-init all start` fails <http://goo.gl/F73bO>
06:20 rastar joined #gluster
06:30 Jedblack joined #gluster
06:42 statix_ joined #gluster
06:53 vimal joined #gluster
07:05 shireesh joined #gluster
07:10 vpshastry joined #gluster
07:14 guigui3 joined #gluster
07:20 jtux joined #gluster
07:21 bulde joined #gluster
07:28 Nevan joined #gluster
07:31 ngoswami joined #gluster
07:44 Jedblack joined #gluster
07:45 hagarth joined #gluster
07:49 sahina joined #gluster
07:51 ctria joined #gluster
07:53 shireesh joined #gluster
07:59 ekuric joined #gluster
08:02 sripathi joined #gluster
08:08 mooperd joined #gluster
08:12 harshpb joined #gluster
08:21 Jedblack joined #gluster
08:24 mooperd left #gluster
08:26 HaraldJensas joined #gluster
08:33 tryggvil__ joined #gluster
08:34 sripathi joined #gluster
08:49 vpshastry joined #gluster
08:51 deepakcs joined #gluster
08:52 andreask joined #gluster
09:03 red_solar joined #gluster
09:05 harshpb joined #gluster
09:06 bala joined #gluster
09:08 red_solar joined #gluster
09:09 sahina joined #gluster
09:09 shireesh joined #gluster
09:13 hagarth joined #gluster
09:18 manik joined #gluster
09:19 stickyboy joined #gluster
09:29 Jedblack joined #gluster
09:36 Alpinist joined #gluster
09:40 manik joined #gluster
09:41 stickyboy I'm debating how to slice up my bricks in a new server with 30TB of raw disks.
09:42 stickyboy One large XFS partition?
09:42 stickyboy Is there any prevailing wisdom against large bricks like that?  Either in XFS or GlusterFS itself?
09:43 andreask if you lose such a large brick you have a looooong time till it's recovered
09:44 dobber_ joined #gluster
09:44 stickyboy We had originally split them into 3 x 10TB to get around e2fsprogs limits in CentOS 6.
09:45 stickyboy But then ext4 and Gluster 3.3.x you know... we switched to XFS.
09:46 manik1 joined #gluster
09:46 andreask I don't expect XFS to have a problem with such sizes, but waiting for 30 or even 10 TB with a lot of files to self-heal ....
09:48 Skunnyk xfs is better than ext4 on gluster ?
09:51 stickyboy Skunnyk: ext4 is currently unusable on GlusterFS 3.3.x with most distro Linux kernels. :\
09:51 sripathi joined #gluster
09:51 Skunnyk ah, I use gluster 3.0 + ext4  on debian
09:51 Skunnyk and I want to switch to 3.3
09:51 Skunnyk for a new gluster
09:52 Skunnyk for lot of small files
09:52 Skunnyk (debian backport kernel 3.2)
09:52 stickyboy !ext4
09:52 stickyboy @ext4
09:52 glusterbot stickyboy: Read about the ext4 problem at http://goo.gl/PEBQU
09:52 stickyboy @ext4 Skunnyk
09:53 stickyboy Skunnyk: I'm not an expert... but you might be ok on backports kernels.  Again I am not an expert.
09:53 stickyboy andreask: I guess one 30TB brick would take long to self heal, but assuming it was a hardware failure in the underlying RAID then we'd have to self heal 3 x 10TB bricks anyways.  I guess?
09:54 Skunnyk Ok, i've already read this blogpost :)
09:54 stickyboy If I understand correctly, I dunno if any use case is better than another.
09:55 stickyboy Skunnyk: Ok.  Then choose wisely :)  Or maybe run GlusterFS 3.4 alpha, I think(?) it has a fix to accommodate the ext4 behavior that broke earlier Glusters.
09:56 Skunnyk stickyboy, I use kernel 3.2
09:56 Skunnyk "That patch was for kernel v3.3-rc2. "
09:56 manik joined #gluster
09:57 StucKman Skunnyk: it has been backported, you should check
09:58 vpshastry joined #gluster
10:00 eiki hej, i have an issue with mismatching gfid on a directory
10:00 eiki any input on how this can be solved?
10:00 eiki we are running glusterfs 3.1.3
10:03 tryggvil__ joined #gluster
10:07 eiki i've been looking for  https://github.com/vikasgorur/gfid but the link is not there anymore
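
A sketch of how the mismatch eiki describes can be inspected by hand (the brick path is hypothetical; compare the output on every brick that holds a copy of the directory):

    # run on each replica server, against the directory on the brick, not on the client mount
    getfattr -n trusted.gfid -e hex /export/brick1/path/to/dir

If the hex values differ between bricks, the replicas disagree on the directory's GFID; how to reconcile them safely depends on the GlusterFS version in use.
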
10:09 andreask stickyboy: sure, if you lose all bricks on one node anyway there is no advantage, but if you have e.g. one brick per disk
10:11 aravindavk joined #gluster
10:11 harshpb joined #gluster
10:12 spai joined #gluster
10:30 stickyboy andreask: Yeah, we've had hardware RAIDs just "forget"  their configurations lately... so I'm really only worried about that kind of low-level failure (which would affect any size of brick).
10:41 manik joined #gluster
10:41 aravindavk joined #gluster
10:42 Jedblack joined #gluster
10:42 andreask stickyboy:
10:43 andreask yes, that is a problem ... you could of course go without raid-controller and let gluster do the replication/distribution
10:43 tryggvil__ joined #gluster
10:49 Nagilum hmm, any hints on making lookups work faster on this gluster: http://dpaste.org/qw8FB/  ?
10:49 glusterbot Title: dpaste.de: Snippet #222025 (at dpaste.org)
10:49 hagarth joined #gluster
10:52 vpshastry joined #gluster
10:58 jdarcy joined #gluster
11:02 satheesh joined #gluster
11:02 mohankumar joined #gluster
11:04 wrale joined #gluster
11:12 wrale joined #gluster
11:15 glusterbot New news from newglusterbugs: [Bug 922542] Please add support to replace multiple bricks at a time. <http://goo.gl/0O7OW>
11:22 vpshastry joined #gluster
11:29 abyss^_ How can I check which volume has the most read/write operations?
11:37 abyss^_ is it only via top and profiling that I can see if a gluster volume has any read/write operations?
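
A sketch of the top/profile approach abyss^_ is referring to, using the gluster CLI (VOLNAME is a placeholder):

    gluster volume profile VOLNAME start    # begin collecting per-brick I/O statistics
    gluster volume profile VOLNAME info     # per-brick call counts and latencies since start
    gluster volume top VOLNAME read         # bricks/files with the most read calls
    gluster volume top VOLNAME write        # bricks/files with the most write calls
    gluster volume profile VOLNAME stop

Run this per volume and compare the counters to see which volume is busiest.
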
11:42 yinyin joined #gluster
11:46 manik1 joined #gluster
11:47 manik1 joined #gluster
11:57 Jedblack joined #gluster
11:58 dustint joined #gluster
12:08 vpshastry joined #gluster
12:14 masterzen joined #gluster
12:29 Jedblack joined #gluster
12:29 bennyturns joined #gluster
12:31 plarsen joined #gluster
12:32 hagarth joined #gluster
12:39 raghu joined #gluster
12:51 rubbs Is there a known problem with using `tail -f` on a file that's being written to on gluster?
12:53 lalatenduM joined #gluster
12:55 H__ rubbs: not that i know. what are you seeing ?
12:57 andreask joined #gluster
12:59 aliguori joined #gluster
13:02 deepakcs joined #gluster
13:03 theron joined #gluster
13:05 rubbs H__: we've got a log file and if it's written to nothing shows up on tail -f
13:05 rubbs oh wait. the time is off on one of these machines
13:05 rubbs let me see if fixing that fixes it
13:06 manik joined #gluster
13:07 theron joined #gluster
13:13 lalatenduM joined #gluster
13:13 hagarth joined #gluster
13:15 robos joined #gluster
13:15 StucKman I was reading this page, http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting , when I found question #15. where can I read about what kind of load gluster can or cannot support?
13:16 glusterbot <http://goo.gl/7m2Ln> (at www.gluster.org)
13:19 rubbs H__: that was it, I had some bad errors
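
For completeness, a sketch of the clock fix rubbs alludes to (the NTP server name is a placeholder; any reachable time source works):

    ntpdate pool.ntp.org    # one-shot sync of the wall clock
    service ntpd start      # keep the clocks in sync from now on
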
13:19 ctria joined #gluster
13:21 jdarcy joined #gluster
13:24 jskinner_ joined #gluster
13:24 H__ rubbs: cool
13:27 rubbs crap. now another error
13:27 rubbs "cp: skipping file `blocktemp.txt', as it was replaced while being copied"
13:32 jdarcy joined #gluster
13:39 rwheeler joined #gluster
13:49 harshpb joined #gluster
13:53 Chiku|dc what do you guys use to bench your glusterfs volumes?
13:59 mohankumar joined #gluster
14:10 lh joined #gluster
14:10 lh joined #gluster
14:10 jbrooks joined #gluster
14:11 hagarth joined #gluster
14:12 StucKman I also notice that all the getting started tutorials mention 64bit OSes only. is there a reason? I really can't run it in 32 bits?
14:14 samppah StucKman: it's possible to use 32 bit but afaik it's not supported or tested
14:15 StucKman that means the reason is 'we haven't tried'? sounds weird...
14:17 sgowda joined #gluster
14:17 samppah heh
14:17 samppah actually there are some community members that use 32 bit OSes
14:18 plarsen joined #gluster
14:21 vpshastry joined #gluster
14:24 bugs_ joined #gluster
14:27 ProT-0-TypE joined #gluster
14:29 rubbs Ok, this is puzzling me. If I mount a volume with fuse, and run a perl script that creates a copy of a file, makes changes to it, then copies it back over the file, I get a bunch of "skipping file" errors. If I mount it as nfs, I don't.
14:29 rubbs This is in a loop, and done very fast, so maybe nfs does some sort of caching?
14:29 rubbs why would gluster say the files are being replaced while copied?
14:33 mohankumar joined #gluster
14:43 wdilly joined #gluster
14:44 nueces joined #gluster
14:46 Harald_Jensas joined #gluster
14:46 portante joined #gluster
14:48 harshpb joined #gluster
14:49 vpshastry joined #gluster
14:51 daMaestro joined #gluster
14:53 lalatenduM joined #gluster
14:53 mriv joined #gluster
14:54 harshpb joined #gluster
15:24 failshell joined #gluster
15:24 sahina joined #gluster
15:25 harshpb joined #gluster
15:27 jdarcy joined #gluster
15:29 lh joined #gluster
15:29 lh joined #gluster
15:29 failshell hello. the tutorials mention formatting the drive with XFS. is that a requirement? or a suggestion?
15:29 lpabon joined #gluster
15:30 semiosis technically it's a suggestion
15:30 semiosis however there's an unresolved issue with using ,,(ext4)
15:30 glusterbot Read about the ext4 problem at http://goo.gl/PEBQU
15:31 semiosis glusterfs is used & tested most heavily with xfs, so that's the safe bet
15:31 failshell ok thanks. will reformat :)
15:32 failshell what about LVM? can I use that? Or just a simple disk for the brick?
15:32 semiosis when you format, set inode size 512 (-i size=512)
15:32 semiosis sure you can use lvm
15:32 semiosis glusterfs doesnt care what's under the filesystem
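
A sketch of preparing a brick along the lines semiosis describes, assuming an LVM logical volume /dev/vg0/brick1 and a mount point /export/brick1 (both hypothetical):

    mkfs.xfs -i size=512 /dev/vg0/brick1    # 512-byte inodes leave room for gluster's extended attributes
    mkdir -p /export/brick1
    mount /dev/vg0/brick1 /export/brick1
    echo '/dev/vg0/brick1 /export/brick1 xfs defaults 0 0' >> /etc/fstab   # survive reboots
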
15:37 manik joined #gluster
15:37 jdarcy joined #gluster
15:37 mooperd joined #gluster
15:39 NeatBasis joined #gluster
15:40 harshpb joined #gluster
15:41 alex88 hi guys, trying to create volume on a host, hostname -f gives storage.site.com, volume create site-development storage.site.com:/gluster/site-development gives Host storage.site.com not a friend
15:41 alex88 any idea?
15:42 failshell hmm it seems RHEL is not providing xfsprogs anymore
15:43 failshell or im missing some channel
15:43 semiosis alex88: alias the server's own hostname to 127.0.0.1 in /etc/hosts
15:44 alex88 semiosis: ok I'll add the hostname also on 127.0.0.1
15:44 alex88 thanks!
15:44 semiosis yw
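
What semiosis suggests, spelled out as a sketch (storage.site.com comes from alex88's example): on that server, add its own name to the loopback line in /etc/hosts so glusterd can resolve it locally:

    127.0.0.1   localhost storage.site.com

After that, the volume create command should accept storage.site.com:/gluster/site-development as a local brick.
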
15:44 elyograg failshell: it's my understanding that xfsprogs requires buying a more expensive server tier.  i don't actually know, because i use CentOS, which has it.
15:44 harshpb joined #gluster
15:44 Chiku|dc hum, I created a geo-replication, I deleted a file on the slave, but status still OK... is there no way to check if SLAVE and MASTER are no longer the same?
15:44 alex88 semiosis: last thing, to protect access to the volume? just iptables?
15:45 semiosis alex88: i guess it depends who/what you're protecting against.  you can use iptables, and there's also auth ,,(options)
15:45 glusterbot alex88: http://goo.gl/dPFAf
15:46 semiosis that options page may be outdated, you can run 'gluster volume set help' also to see most options
15:46 semiosis although the auth options probably havent changed much
15:46 alex88 yeah auth.allow is the one i need
15:46 alex88 hope that works for ipv6 too
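
A sketch of the auth.allow restriction alex88 settles on (the client addresses are placeholders; the volume name comes from alex88's earlier command). Whether it matches IPv6 clients is exactly the open question raised here:

    gluster volume set site-development auth.allow 10.0.0.10,10.0.0.11
    gluster volume info site-development    # the option shows up under 'Options Reconfigured'
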
15:47 failshell elyograg: ill get the centos package if i have to :)
15:47 alex88 I just had an issue yesterday for ipv6 connections
15:48 StucKman left #gluster
15:48 satheesh joined #gluster
15:50 tyl0r joined #gluster
15:53 rotbeard joined #gluster
15:58 alex88 also, is it possible to change the interface gluster listens on?
15:59 elyograg alex88: it should listen on all interfaces.  is that the problem?  i don't know if it's possible to restrict that so it only listens on the ones you choose.
16:00 alex88 elyograg: http://fossies.org/unix/privat/glusterfs-3.2.7.tar.gz:a/glusterfs-3.2.7/doc/glusterfsd.vol.sample seems that there is an option like option transport.socket.bind-address
16:00 glusterbot <http://goo.gl/zOps6> (at fossies.org)
16:00 alex88 elyograg: btw, it is since it listen on ipv4 only by default
16:00 alex88 and mails like http://gluster.org/pipermail/gluster-users/2011-February/029590.html get no reply :)
16:01 glusterbot <http://goo.gl/pDNY9> (at gluster.org)
16:01 alex88 and those are 2011 mails
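
For reference, the shape of the option alex88 found, as it appears in hand-written server volfiles of that era (a sketch only: the address and brick path are hypothetical, and it is unclear from this discussion how well glusterd-managed volumes expose it):

    volume brick
      type storage/posix
      option directory /export/brick1
    end-volume

    volume server
      type protocol/server
      option transport-type tcp
      option transport.socket.bind-address 192.0.2.10   # hypothetical address to listen on
      option auth.addr.brick.allow *
      subvolumes brick
    end-volume
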
16:02 elyograg I need to get into ipv6. one day i'll be forced into it. it's really too bad that ipv4 and ipv6 cannot communicate with each other.
16:02 satheesh joined #gluster
16:02 alex88 elyograg: they'll never
16:07 mohankumar__ joined #gluster
16:07 mooperd_ joined #gluster
16:09 alex88 it's totally a mess: the devel mailing list says to use ipv6, the docs say inet6
16:11 harshpb joined #gluster
16:11 mtanner joined #gluster
16:11 zaitcev joined #gluster
16:11 kkeithley1 joined #gluster
16:12 jclift joined #gluster
16:15 alex88 is anyone using it via ipv6?
16:17 ujjain joined #gluster
16:19 harshpb joined #gluster
16:19 plarsen joined #gluster
16:20 ujjain does Red Hat Storage Server use Gluster technology?
16:20 nat joined #gluster
16:20 ndevos ujjain: yes
16:21 ujjain ndevos, thanks! :)
16:22 ndevos ujjain: :)
16:24 nat hi folks, i'm trying to set up gluster, and i'm hitting mmap issues. This looks similar to the fuse mmap issues that my web searching suggests were solved circa 2009. i'm using a really new kernel (3.8.3), on a mixed mostly testing Debian box, with fuse 2.9.0 and gluster 3.3.1.
16:24 nat any ideas on where to look, and for what?
16:25 nat there are all sorts of issues that crop up when you can't mmap files... but like i said, it looks like this was fixed years ago.
16:25 manik joined #gluster
16:28 harshpb joined #gluster
16:39 jdarcy joined #gluster
16:39 mohankumar joined #gluster
16:43 vpshastry joined #gluster
16:55 jdarcy joined #gluster
16:55 roo9 joined #gluster
16:55 hagarth joined #gluster
16:59 shylesh joined #gluster
17:01 bulde joined #gluster
17:09 tryggvil___ joined #gluster
17:10 samppah @node
17:10 glusterbot samppah: The term "node" could mean a server, a client, or possibly even a brick. It would be easier to follow if you would please use the @glossary terms. Thank you.
17:10 samppah @glossary
17:10 glusterbot samppah: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
17:12 zaitcev joined #gluster
17:25 f0urtyfive hey, anyone in here use gluster to serve VMware NFS?
17:31 rastar joined #gluster
17:42 jskinner_ joined #gluster
17:45 y4m4 joined #gluster
17:59 mooperd joined #gluster
17:59 mynameisbruce joined #gluster
17:59 mynameisbruce_ joined #gluster
18:00 vpshastry joined #gluster
18:00 rwheeler joined #gluster
18:00 mynameisbruce left #gluster
18:01 harshpb joined #gluster
18:06 mooperd joined #gluster
18:07 jskinne__ joined #gluster
18:07 mooperd joined #gluster
18:08 failshell hmm, i use mount -t glusterfs host:/data /data
18:08 failshell exit code is 0
18:08 failshell yet, mount shows no mount
18:09 wrale dmesg might give hints
18:09 failshell nada
18:11 wrale i think i had that problem trying to use gluster with ovirt recently.. i was testing on a desktop, which i later learned had a 100Mbps NIC... i didn't resolve the issue
18:11 mooperd joined #gluster
18:11 harshpb joined #gluster
18:13 samppah failshell: have you checked log files?
18:13 failshell im doing that now. its my first setup.
18:13 failshell 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected)
18:13 failshell i get a lot of those
18:13 failshell but gluster peer status says everyone is connected
18:14 harshpb joined #gluster
18:15 samppah failshell: can you send out put of gluster volume info to pastie.org?
18:17 failshell http://pastebin.ca/2336681
18:17 glusterbot Title: pastebin - Untitled - post number 2336681 (at pastebin.ca)
18:17 samppah okay, everything seems right.. are all server accessible from client?
18:17 failshell yup, same vlan
18:18 harshpb joined #gluster
18:18 failshell ive installed the glusterfs and glusterfs-rdma packages on the client
18:19 failshell 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
18:19 failshell could that be it?
18:22 samppah failshell: do you have glusterfs-fuse package installed also on client?
18:23 failshell yup
18:24 samppah what version this is btw?
18:25 failshell uh
18:25 failshell i used mount -t glusterfs host:volumeID instead
18:25 failshell and it works
18:25 failshell its now mounted
18:25 samppah oh
18:26 failshell so, how can i tell what path on the filesystem the volume is using?
18:26 failshell because when i used gluster volume create, i did specify /data
18:27 failshell it seems to be using /var/lib/glusterd/
18:27 failshell is that where the data will be stored?
18:28 \_pol joined #gluster
18:29 harshpb joined #gluster
18:30 samppah failshell: mount -t glusterfs host:/data /mnt/data should definitely work.. i haven't seen such an issue before
18:31 failshell yeah its just that its not really /data on the nodes
18:31 failshell it's /var/lib/glusterd/vols/GV0
18:31 failshell gluster volume create GV0 replica 2 1.2.3.4:/data
18:31 failshell that's what i used
18:31 failshell should it not have created the volume at /data?
18:32 samppah ahh.. so GV0 is the name of the volume and that's what you should use when mounting it
18:33 samppah sorry didn't notice that in volume info
18:33 failshell ok
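
A sketch of the distinction samppah is drawing, with hypothetical server names (failshell's paste shows only one brick, but replica 2 takes bricks in pairs): /data is the brick, the directory on each server where the files actually land, while clients mount the volume by its name. The /var/lib/glusterd/vols/GV0 directory failshell found holds glusterd's metadata about the volume, not the data itself.

    # on a server
    gluster volume create GV0 replica 2 server1:/data server2:/data
    gluster volume start GV0

    # on a client: mount by volume name, not by brick path
    mount -t glusterfs server1:/GV0 /mnt/gv0
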
18:33 mooperd joined #gluster
18:44 harshpb joined #gluster
18:49 failshell which is more performant? mount gluster over NFS or gluster itself?
18:49 JoeJulian Depends on how you define that.
18:49 failshell id like to serve data to web servers
18:50 JoeJulian The most performant, of course, is when you don't actually read anything from a filesystem.
18:50 JoeJulian See ,,(php)
18:50 glusterbot php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
18:50 JoeJulian There's tips on that page for improving web service performance that's good regardless of filesystem.
18:51 JoeJulian Some people don't want to use a cache, like apc, but are happy to mount via nfs (which provides a cache as well).
18:52 failshell well, we already use all that :)
18:52 failshell and a CDN
18:53 JoeJulian I live close enough to Vancouver, British Columbia that whenever I see CDN, my first thought is "Canadian Dollar".
18:53 JoeJulian Tell me about the data you're going to be serving?
18:54 failshell mainly static files
18:54 failshell which will be cached by varnish, and also some of it served by the CDN
18:55 JoeJulian Then I'd use the fuse mount. It's going to give greater throughput and reliability. The caching advantage of NFS would be redundant and largely unused.
18:55 failshell hmm mount gluster over nfs times out
18:55 JoeJulian @nfs
18:55 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also portmapper should be running on the server, and the kernel nfs server (nfsd) should be disabled
18:55 Mo___ joined #gluster
18:56 harshpb joined #gluster
18:56 failshell hmm, portmap has been replaced with something in newer RHELs
18:56 JoeJulian yeah, was just typing the command to change that to make it clearer
19:01 samppah rpcbind is the correct service i think
19:01 JoeJulian @factoid change nfs s/portmapper/an rpc port mapper (like rpcbind in EL distributions)/
19:01 lpabon joined #gluster
19:02 JoeJulian hmm, must have the syntax wrong...
19:03 JoeJulian @factoids change nfs 1 s/portmapper/an rpc port mapper (like rpcbind in EL distributions)/
19:03 glusterbot JoeJulian: The operation succeeded.
19:03 JoeJulian @nfs
19:03 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
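
A sketch of the NFS mount the factoid describes, on an EL6-style setup (server name and paths are placeholders): make sure the rpc port mapper is running and the kernel NFS server is not, then mount with the noted options:

    service rpcbind start     # the rpc port mapper glusterbot refers to (on the gluster server)
    service nfs stop          # the kernel NFS server must not be holding the NFS ports
    mount -t nfs -o tcp,vers=3 server1:/GV0 /mnt/gv0-nfs   # on the client
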
19:03 JoeJulian @reconnect
19:03 glusterbot joined #gluster
19:07 failshell mount.nfs: requested NFS version or transport protocol is not supported
19:07 glusterbot failshell: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
19:08 failshell im trying TCP vers=3
19:12 jskinn___ joined #gluster
19:15 failshell alright, here we go
19:19 rwheeler joined #gluster
19:45 disarone joined #gluster
19:50 jag3773 joined #gluster
20:31 DigiDaz joined #gluster
20:50 \_pol_ joined #gluster
20:51 \_pol joined #gluster
20:59 andreask joined #gluster
21:22 semiosis johnmark: ping
22:28 jdarcy joined #gluster
22:47 andreask joined #gluster
22:50 ramkrsna joined #gluster
22:50 ramkrsna joined #gluster
22:59 tryggvil__ joined #gluster
23:09 jdarcy joined #gluster
23:33 jdarcy joined #gluster
23:42 jdarcy joined #gluster
