
IRC log for #gluster, 2015-01-15


All times shown according to UTC.

Time Nick Message
00:00 JoeJulian yes
00:01 captainflannel cool, I'm going to give that a try!
00:32 Staples84 joined #gluster
00:47 jamesc joined #gluster
00:47 prg3 joined #gluster
00:51 Kedsta joined #gluster
01:00 captainflannel hello there, trying to get the libgfapi vfs working through samba; the samba server is a member server of AD and it's not working. When we use a fuse mount we can access it fine.
01:05 JoeJulian captainflannel: Can you paste an example smb.conf file to fpaste.org?
01:07 captainflannel http://fpaste.org/169841/28397914/
01:15 JoeJulian captainflannel: Well, that fits the way I have it configured and working.
01:15 JoeJulian Anything useful in the log?
01:18 captainflannel well in the samba gluster log i see 0-vol1-client-3: Server and Client lk-version numbers are not same, reopening the fds
01:18 glusterbot captainflannel: This is normal behavior and can safely be ignored.
01:18 captainflannel 0-vol1-client-1: Server lk version = 1
01:23 captainflannel where do I add the option "option rpc-auth-allow-insecure on"?
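
For libgfapi clients such as Samba's glusterfs VFS module, the commonly documented place for that option is the glusterd volfile on each server, paired with a per-volume setting; a minimal sketch, with the volume name vol1 taken from the log above:

    # /etc/glusterfs/glusterd.vol (on every server, then restart glusterd)
    volume management
        ...
        option rpc-auth-allow-insecure on
    end-volume

    # and on the volume itself:
    gluster volume set vol1 server.allow-insecure on
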
01:35 captainflannel okay got it working, on my smb.conf had to take out vfs shadow_copy2
01:35 captainflannel we're not using it at the moment so no issues here :)
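
For reference, a minimal smb.conf share using Samba's glusterfs VFS module looks roughly like this; the volume name vol1 comes from the log above, while the share name, path and log file are illustrative:

    [gluster-share]
        vfs objects = glusterfs
        glusterfs:volume = vol1
        glusterfs:logfile = /var/log/samba/glusterfs-vol1.log
        path = /
        read only = no
        kernel share modes = no

When stacking additional modules such as shadow_copy2 (the one captainflannel had to drop), their order in "vfs objects" matters.
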
01:40 JoeJulian +1
01:40 JoeJulian captainflannel++
01:40 glusterbot JoeJulian: captainflannel's karma is now 1
01:42 Durzo anyone got any ideas about what is going on with bug 1181870 ?
01:42 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1181870 high, unspecified, ---, bugs, NEW, Geo-replication fails with OSError: [Errno 16] Device or resource busy
02:01 sage__ joined #gluster
02:13 bala joined #gluster
02:18 badone joined #gluster
02:32 RameshN joined #gluster
02:52 nangthang joined #gluster
03:00 keds joined #gluster
03:11 rcampbel3 joined #gluster
03:26 saurabh joined #gluster
03:35 dusmant joined #gluster
03:43 RameshN joined #gluster
03:47 itisravi joined #gluster
03:50 kdhananjay joined #gluster
03:51 bala joined #gluster
03:52 hchiramm joined #gluster
03:55 atinmu joined #gluster
04:01 dusmant joined #gluster
04:04 ppai joined #gluster
04:10 anoopcs joined #gluster
04:11 hflai joined #gluster
04:13 raghug joined #gluster
04:19 swebb joined #gluster
04:20 hagarth joined #gluster
04:30 raghug joined #gluster
04:30 nrcpts joined #gluster
04:31 overclk joined #gluster
04:38 hagarth joined #gluster
04:40 fandi joined #gluster
04:43 badone joined #gluster
04:43 sage_ joined #gluster
04:46 saurabh joined #gluster
04:46 kanagaraj joined #gluster
04:49 sakshi joined #gluster
04:50 overclk joined #gluster
04:57 jiffin joined #gluster
04:58 rafi1 joined #gluster
04:58 lalatenduM joined #gluster
05:00 Manikandan joined #gluster
05:01 ndarshan joined #gluster
05:02 kumar joined #gluster
05:04 kanagaraj joined #gluster
05:04 soumya joined #gluster
05:05 suman_d joined #gluster
05:12 overclk_ joined #gluster
05:12 meghanam joined #gluster
05:15 gem joined #gluster
05:16 anil joined #gluster
05:17 vimal joined #gluster
05:18 karnan joined #gluster
05:19 meghanam joined #gluster
05:31 purpleidea fubada: not yet sorry, i've got some paperwork to do, but soon i hope :)
05:32 Durzo raaaaaaaaaaaaaaaaaaaaaaaaage
05:34 tagati joined #gluster
05:42 tagati hi there... is it advisable to host daemon logs on a mounted gluster volume?  I currently have nginx and PHP logs being written to a mounted gluster volume, and when default log rotation happens every morning things break... the client's mount hangs to the point where I need to do umount -l, then kill the glusterfs process, then remount.
05:43 tagati this also appears in the glusterfs server log after that: [2015-01-15 05:37:34.882812] I [afr-self-heal-data.c:655:afr_sh_data_fix] 0-gla_files-replicate-0: no active sinks for performing self-heal on file <gfid:52fda5c8-54ca-44ce-a4ad-702bcc515421>
05:43 tagati it's a 2 server replica environment, with only one client currently
05:47 edualbus joined #gluster
05:47 maveric_amitc_ joined #gluster
05:48 kanagaraj joined #gluster
05:50 ramteid joined #gluster
05:54 RameshN joined #gluster
05:57 kshlm joined #gluster
06:00 kanagaraj joined #gluster
06:01 smohan joined #gluster
06:01 aravindavk joined #gluster
06:15 atalur joined #gluster
06:16 maveric_amitc_ joined #gluster
06:24 raghug joined #gluster
06:31 rjoseph joined #gluster
06:32 deepakcs joined #gluster
06:35 nshaikh joined #gluster
06:54 nangthang joined #gluster
07:03 glusterbot News from resolvedglusterbugs: [Bug 959069] A single brick down of a dist-rep volume results in geo-rep session "faulty" <https://bugzilla.redhat.com/show_bug.cgi?id=959069>
07:07 ctria joined #gluster
07:07 RameshN joined #gluster
07:15 rgustafs joined #gluster
07:19 jtux joined #gluster
07:20 atrius joined #gluster
07:24 karnan joined #gluster
07:40 TvL2386 joined #gluster
07:58 fandi joined #gluster
08:05 mbukatov joined #gluster
08:08 mbukatov joined #gluster
08:09 mbukatov joined #gluster
08:14 [Enrico] joined #gluster
08:19 m0ellemeister joined #gluster
08:20 deniszh joined #gluster
08:25 suman_d joined #gluster
08:28 misko_ JoeJulian: math does not work.
08:28 fsimonce joined #gluster
08:32 ronis joined #gluster
08:33 athinkingmeat joined #gluster
08:34 Fen1 joined #gluster
08:35 fandi joined #gluster
08:36 anil joined #gluster
08:37 misko_ JoeJulian: pebkac.
08:43 Telsin joined #gluster
08:47 Telsin left #gluster
08:50 Telsin joined #gluster
08:54 ghenry joined #gluster
08:54 ghenry joined #gluster
08:54 nshaikh joined #gluster
08:57 RameshN joined #gluster
09:05 ricky-ticky joined #gluster
09:05 lalatenduM joined #gluster
09:08 Slashman joined #gluster
09:36 Pupeno joined #gluster
09:47 nshaikh joined #gluster
09:48 Norky joined #gluster
09:48 atalur joined #gluster
09:51 rjoseph joined #gluster
09:52 m0ellemeister joined #gluster
09:55 fandi joined #gluster
10:01 meghanam joined #gluster
10:01 ppai joined #gluster
10:03 glusterbot News from resolvedglusterbugs: [Bug 1147422] dist-geo-rep: Session going into faulty with "Can no allocate memory" backtrace when pause, rename and resume is performed <https://bugzilla.redhat.com/show_bug.cgi?id=1147422>
10:03 glusterbot News from resolvedglusterbugs: [Bug 1159190] dist-geo-rep: Session going into faulty with "Can no allocate memory" backtrace when pause, rename and resume is performed <https://bugzilla.redhat.com/show_bug.cgi?id=1159190>
10:04 elico joined #gluster
10:10 rgustafs joined #gluster
10:15 badone joined #gluster
10:15 nishanth joined #gluster
10:29 T0aD joined #gluster
10:33 misko_ Pls, what is the correct way to mount a gluster filesystem? Assume I have 4 nodes (a,b,c,d) which export /partition. Normally I mount nodea:/partition /mnt/partition. What happens if nodea goes down?
10:36 rjoseph joined #gluster
10:37 deepakcs joined #gluster
10:48 ppai joined #gluster
10:51 kkeithley1 joined #gluster
10:52 Norky that mount command just means it fetches the volfile from nodea, reads all server names (nodea, b, c, d) and connects simultaneously to all four servers. If nodea were to go down after that it would carry on working (assuming you are using replication)
10:53 Norky if nodea were down when you tried to mount, then the mount would fail, but once mounted you are connected to all machines serving the volume
10:55 Norky I believe one can use some kind of HA to make the initial mount work even if a server is down
10:58 saurabh joined #gluster
11:03 glusterbot News from newglusterbugs: [Bug 1182514] Force add-brick lead to glusterfsd core dump <https://bugzilla.redhat.com/show_bug.cgi?id=1182514>
11:04 ctria joined #gluster
11:05 hagarth joined #gluster
11:08 misko_ Norky: thx
11:08 misko_ HA for mounting is not a problem
11:12 meghanam joined #gluster
11:17 rjoseph joined #gluster
11:27 the-me joined #gluster
11:46 chirino_m joined #gluster
11:46 rjoseph joined #gluster
11:48 social joined #gluster
12:00 ricky-ticky joined #gluster
12:03 lpabon joined #gluster
12:11 ppai joined #gluster
12:12 itisravi_ joined #gluster
12:18 [Enrico] joined #gluster
12:26 ctria joined #gluster
12:32 anil joined #gluster
12:34 glusterbot News from newglusterbugs: [Bug 1182547] Unable to connect to a brick when volume is recreated <https://bugzilla.redhat.com/show_bug.cgi?id=1182547>
12:37 nangthang joined #gluster
12:43 DV joined #gluster
12:44 LebedevRI joined #gluster
12:59 misko_ [root@xfc0 ~]# /usr/lib/ocf/resource.d/glusterfs/volume start
12:59 misko_ /usr/lib/ocf/resource.d/glusterfs/volume: line 16: /lib/heartbeat/ocf-shellfuncs: No such file or directory
12:59 misko_ this is a stock resource-agent from default centos RPM
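
That agent is sourcing ocf-shellfuncs from the legacy heartbeat path. If the file actually lives under /usr/lib/ocf, as it does with newer resource-agents packages, one workaround (an assumption, so verify the real location first) is a compatibility symlink:

    ls /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs    # confirm where the file really is
    ln -s /usr/lib/ocf/lib/heartbeat /lib/heartbeat # then satisfy the hard-coded legacy path
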
13:01 edwardm61 joined #gluster
13:01 Slashman_ joined #gluster
13:03 Fen1 joined #gluster
13:04 tdasilva joined #gluster
13:18 DV joined #gluster
13:21 diegows joined #gluster
13:28 smohan_ joined #gluster
13:30 nangthang joined #gluster
13:31 anoopcs joined #gluster
13:34 B21956 joined #gluster
13:44 bennyturns joined #gluster
13:45 DV joined #gluster
13:55 julim joined #gluster
14:05 Gill joined #gluster
14:13 suman_d joined #gluster
14:20 dusmant joined #gluster
14:22 tdasilva joined #gluster
14:24 partner misko_: use DNS round-robin, i.e. add all your gluster servers to one common A record and use that name when mounting from the clients
14:24 partner it also gives you the benefit of removing, say, nodea in the future without having to touch any clients: just empty the data from it, remove the dns entry, and nobody notices anything
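
A round-robin name is just several A records sharing one label; a hypothetical zone-file sketch using documentation IPs, followed by the client mount:

    gluster.example.com.  300  IN  A  192.0.2.11  ; nodea
    gluster.example.com.  300  IN  A  192.0.2.12  ; nodeb
    gluster.example.com.  300  IN  A  192.0.2.13  ; nodec
    gluster.example.com.  300  IN  A  192.0.2.14  ; noded

    mount -t glusterfs gluster.example.com:/partition /mnt/partition
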
14:26 squizzi joined #gluster
14:29 hchiramm joined #gluster
14:32 gem joined #gluster
14:34 msciciel_ joined #gluster
14:40 ira joined #gluster
14:42 fandi joined #gluster
14:44 smohan joined #gluster
14:48 plarsen joined #gluster
14:49 plarsen joined #gluster
14:49 plarsen joined #gluster
14:56 dgandhi joined #gluster
15:12 wushudoin joined #gluster
15:20 _Bryan_ joined #gluster
15:22 nocturn left #gluster
15:25 misko_ partner: i started shared ip addr resource and works fine
15:26 nishanth joined #gluster
15:32 nage joined #gluster
15:33 [Enrico] joined #gluster
15:39 ndevos misko_: you should be able to use the backup-volfile-server mount option too: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-Administration_Guide-GlusterFS_Client.html#sect-Administration_Guide-GlusterFS_Client-GlusterFS_Client-Mounting_Volumes
15:40 partner oh yeah, was about to mention that too but obviously forgot
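
A sketch of that mount option, reusing the node names from the earlier question; depending on the client version the option is spelled backupvolfile-server (older) or backup-volfile-servers (newer, taking a colon-separated list):

    mount -t glusterfs -o backup-volfile-servers=nodeb:nodec:noded nodea:/partition /mnt/partition
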
15:42 bene joined #gluster
15:51 ctria joined #gluster
15:57 dbruhn joined #gluster
15:57 nshaikh joined #gluster
15:57 dbruhn CentOS 7 suggested for Gluster 3.6?
15:59 ndevos dbruhn: sure, seems to work fine
16:01 fubada purpleidea: thanks
16:02 kanagaraj joined #gluster
16:02 * kkeithley_ wonders when we can retire Gluster el5.
16:03 * kkeithley_ thinks perhaps with 3.7?
16:03 * ndevos doubts that
16:06 hagarth joined #gluster
16:11 gothos joined #gluster
16:15 ctria joined #gluster
16:19 tru_tru joined #gluster
16:20 virusuy joined #gluster
16:20 virusuy joined #gluster
16:20 sputnik13 joined #gluster
16:29 tru_tru joined #gluster
16:34 lmickh joined #gluster
16:37 elico joined #gluster
16:38 daMaestro joined #gluster
16:39 neofob joined #gluster
16:39 partner long live lts :)
16:39 partner at least el4 only has a couple of years left
16:42 gothos joined #gluster
16:55 squizzi joined #gluster
17:32 PeterA joined #gluster
17:47 nueces joined #gluster
17:49 sage_ joined #gluster
17:50 lalatenduM joined #gluster
17:51 rcampbel3 joined #gluster
17:52 coredump joined #gluster
17:55 jackdpeterson joined #gluster
17:58 jackdpeterson @purpleidea -- when executing puppet it appears that the operating version keeps getting removed.  CentOS 6.6, Gluster 3.6.1. I believe I have all appropriate dependencies installed
17:59 jackdpeterson @purpleidea -- Any thoughts/am I missing something very basic?
17:59 jackdpeterson @purpleidea -- My config is basically: https://github.com/purpleidea/puppet-gluster/blob/master/examples/distributed-replicate-example.pp
18:29 dgandhi I'm running gluster 3.5.2 on Linux 3.16 from deb sid. Docs on the web for 3.2 list a rebalance option "migrate-data", but it does not seem to work on 3.5; has this been removed? Is migrate now the default rebalance action?
18:37 getup joined #gluster
18:39 vimal joined #gluster
18:42 B21956 joined #gluster
18:44 jackdpeterson @purpleidea -- https://github.com/purpleidea/puppet-gluster/pull/25
18:53 gkleiman joined #gluster
18:56 pdrakeweb joined #gluster
19:03 roost joined #gluster
19:07 roost_ joined #gluster
19:11 purpleidea jackdpeterson: taking a look now, thanks!
19:14 Gill joined #gluster
19:19 purpleidea jackdpeterson: left some comments
19:20 misko_ dbruhn: some small issues
19:21 dbruhn misko_, what kind of small issues?
19:27 misko_ dbruhn: I had a dependency problem when installing the latest version. I had to use --nodeps
19:27 dbruhn ahh ok
19:27 misko_ dbruhn: when you have some exotic locales like I do, you will deal with mount failing
19:28 dbruhn exotic locals?
19:28 misko_ refer to this
19:28 misko_ 22:45 <@JoeJulian> according to bug 1117591 they suggest LC_NUMERIC should be "C": env -i LC_NUMERIC=C /usr/sbin/glusterfs --volfile-server=xfc0 --volfile-id=/disks /shared/isos
19:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1117591 is not accessible.
19:31 misko_ dbruhn: feel free to share experience with me :)
19:31 dbruhn Damn bug is restricted
19:33 elico joined #gluster
19:37 misko_ dbruhn: and also you have to setenforce 0 in order to get glusterd working
19:37 misko_ otherwise, the process starts but does not listen on the socket
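
A quick way to test whether SELinux is what is blocking glusterd; setenforce 0 only lasts until reboot and is a diagnostic, not a fix, so checking the audit log for the actual denials is the better follow-up:

    setenforce 0                                         # switch to permissive mode at runtime
    getenforce                                           # confirm the current mode
    grep -i avc /var/log/audit/audit.log | grep gluster  # find the denials to address properly
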
19:42 getup joined #gluster
19:43 roost joined #gluster
19:49 purpleidea A user (jackdpeterson) has reported that the usable or correct OperatingVersion [1] for 3.6.1 is 30600!! Can someone confirm? cc JoeJulian
19:50 purpleidea [1] https://www.gluster.org/community/documentation/index.php/OperatingVersions
19:50 purpleidea JoeJulian: I cc-ed you because we had that discussion about the patterning, and it would be a hilariously fast example if it turned out to be correct... Maybe there's a bug in the users setup though. IDK.
19:59 jackdpeterson @purpleidea cc JoeJulian https://gist.github.com/jackdpeterson/aad48879a9c2e02f0882
20:07 partner I can confirm the version is 30600 on debian jessie as well for 3.6.1
20:08 partner and if I manually edit it to 30601, as was done in the gist link, the symptoms are the same, i.e. the daemon fails to start up
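
For anyone checking or fixing this by hand: the running value is stored in /var/lib/glusterd/glusterd.info, and recent releases can raise it cluster-wide through a volume set on "all"; a sketch using the 30600 value confirmed above:

    grep operating-version /var/lib/glusterd/glusterd.info
    gluster volume set all cluster.op-version 30600
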
20:21 purpleidea partner: thanks for the info...
20:22 purpleidea JoeJulian is going to flip over this ;)
20:30 fandi joined #gluster
20:37 eryc joined #gluster
20:42 jackdpeterson @purpleidea -- pushed os-independent yaml commits for CentOS 6.6 compatibility w/ puppet-gluster
20:49 fandi joined #gluster
20:53 purpleidea jackdpeterson: one last comment added...
20:55 bene joined #gluster
20:59 fandi joined #gluster
21:02 partner purpleidea: np, anything to make joe flip ;)
21:03 purpleidea partner: hehe... it's just funny, because we *just* had a discussion about this issue, and I argued for a table-based approach, but he argued that we should derive it from the version number... I do agree with that approach, unless there are exceptions, and how funny that one might come up so soon. So maybe it's a gluster bug ;) but now I have a use case for my users needing the table!
21:04 partner we've been making some interesting (?) findings this week while moving the storage to another place. Somehow our uwsgi application goes into harakiri when the moved servers come back online; no issues while they are down and the volumes are missing bricks
21:04 gomikemike anyone seen this [posix-handle.c:733:posix_handle_hard]
21:05 partner found some "timeout = 1800" errors from the logs but the core reason is still unknown, probably something with the gluster but no idea what
21:05 partner purpleidea: hehe
21:06 kkeithley_ Sounds like we need a checklist for releases if we forgot to bump the operating version when we did the quickie 3.6.1 release w/ gfapi symbol versions. :-(
21:06 partner accidentally bumped into that version thingy myself too: there was a previous 3.3 -> 3.4 upgrade, and when we attempted to peer with fresh 3.4 installs it didn't fly
21:07 partner it took quite a lot of time to figure out there was a version string in a file that normally nobody would ever look at, but which in this case prevented new peers from joining.
21:08 partner a dummy set operation fixed the issue once it was pinpointed and the version got updated to "2"
21:08 gomikemike here is more of the error *[posix-handle.c:733:posix_handle_hard] 0-fnrw-vol-posix: mismatching ino/dev between file [filepath] and handle [filepath/.glusterfs/00/00/000.....]*
21:08 partner the very same operating version
21:10 purpleidea kkeithley_: any idea if the operating version for 3.6.1 will stay the same, or be changed in a subsequent 3.6.1-2 style release?
21:11 sputnik13 joined #gluster
21:11 kkeithley_ not sure
21:11 purpleidea kkeithley_: no worries, i opened a thread on gluster-devel ML to discuss. Cheers!
21:13 kkeithley_ I don't remember when we introduced the operating version. I haven't done anything with it in the 3.4.x series releases, and nobody has complained.
21:14 gomikemike does self-heal in gluster happen automagically? or do i need to trigger it?
21:20 partner gomikemike: it runs on the background
21:27 badone joined #gluster
21:30 partner I don't know all the details, but accessing a file makes gluster check its health, though there have been some changes since 3.6
21:31 partner ,,targeted
21:33 partner @targeted self heal
21:33 glusterbot partner: https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
21:33 partner whee, i'm learning
21:35 partner though the link ain't working, and gluster.org is full of 404s :/
21:36 gomikemike partner: thanks, i thought so but this error has been going on for a few hours
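
For completeness, the heal state can also be inspected and kicked manually; standard commands, with myvol as a placeholder volume name:

    gluster volume heal myvol info    # list entries still pending heal
    gluster volume heal myvol         # heal files that need healing
    gluster volume heal myvol full    # crawl and heal the entire volume
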
21:45 theron joined #gluster
21:48 cornus_ammonis joined #gluster
21:52 Staples84 joined #gluster
21:59 misko_ has anyone measured maximum speeds of glusterfs in various environments? is there some report?
21:59 misko_ I have 3 nodes; every node can write >200MB/s locally, but over glusterfs my avg speed is 35MB/s
21:59 misko_ net is 1Gbps, MTU 9000
22:12 dbruhn misko_, what kind of volume?
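
A crude comparison of raw brick speed versus the mounted volume can be made with dd; the paths are illustrative, and oflag=direct bypasses the page cache so the two numbers are comparable. Note that with a replica volume the client writes to every replica simultaneously, so 1Gbps of client bandwidth is divided by the replica count before any other overhead:

    dd if=/dev/zero of=/bricks/brick1/testfile bs=1M count=1024 oflag=direct   # local brick speed
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=1024 oflag=direct     # speed through glusterfs
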
22:17 dbruhn is xfs still the suggested file system for bricks? How about if you're using it for VHDs in Xen?
22:23 ildefonso joined #gluster
22:23 ildefonso hi all!
22:27 ildefonso quick question, just as an academic exercise, I decided to configure 4 nodes with a gluster filesystem, with replica 4.
22:28 MugginsM joined #gluster
22:28 ildefonso however... I can see the backend filesystem is using a different amount of disk space on all the nodes.
22:30 ildefonso like a split-brain that won't go away even after I overwrote all files.
22:31 dbruhn ildefonso, have you checked the logs to see if there are any split brain issues?
22:31 dbruhn Also, homogeneous servers?
22:31 ildefonso yeah, kinda academic test, just 4 servers, I expected all of them to end up with the same data.
22:33 ildefonso now, all of them write to the glusterfs, through a mount point: mount -t glusterfs name_of_local_node:name_of_gluster /some_directory
22:34 ildefonso so, this mean each of them wrote data to the "pool" (on different directories)
22:35 ildefonso ok, now that I think about it, maybe I overstressed this: I filled the filesystem once (of course it would complain with "remote operation failed: No space left on device"), then I grew all the filesystems and continued writing data here and there.
22:36 dbruhn If the self-heal daemon runs, you'll probably see that corrected.
22:37 ildefonso no, it has been like that for over two weeks, and data has been completely overwritten (files were overwritten with new versions of them).
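
To test the split-brain hypothesis directly rather than waiting on the self-heal daemon, heal info has a dedicated variant; myvol is again a placeholder. Differing on-disk usage across replicas is not by itself split-brain, since .glusterfs housekeeping and sparse files can skew the numbers:

    gluster volume heal myvol info split-brain   # entries gluster actually considers split-brain
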
22:45 klaas- joined #gluster
22:47 marcoceppi_ joined #gluster
22:47 d-fence_ joined #gluster
22:47 abyss^_ joined #gluster
22:47 juhaj_ joined #gluster
22:47 haakon__ joined #gluster
22:48 basso_ joined #gluster
22:48 Lee-- joined #gluster
22:48 purpleid1a joined #gluster
22:48 eljrax_ joined #gluster
22:48 ndevos_ joined #gluster
22:48 ndevos_ joined #gluster
22:49 Arminder- joined #gluster
22:49 vincent_1dk joined #gluster
22:49 dockbram_ joined #gluster
22:50 dblack_ joined #gluster
22:50 devilspgd_ joined #gluster
22:50 verboeseq joined #gluster
22:51 sac`away` joined #gluster
22:51 and`_ joined #gluster
22:51 samppah_ joined #gluster
22:51 James joined #gluster
22:51 gomikemi1e joined #gluster
22:51 xavih_ joined #gluster
22:52 hflai_ joined #gluster
22:52 RobertLaptop_ joined #gluster
22:52 foster_ joined #gluster
22:53 semiosis joined #gluster
22:53 eryc_ joined #gluster
22:53 natgeorg joined #gluster
22:53 Az joined #gluster
22:53 pdrakeweb joined #gluster
22:53 PeterA joined #gluster
22:53 ira joined #gluster
22:53 bennyturns joined #gluster
22:53 the-me joined #gluster
22:53 ghenry joined #gluster
22:53 y4m4_ joined #gluster
22:53 sauce joined #gluster
22:53 jbrooks joined #gluster
22:53 atrius` joined #gluster
22:53 ckotil joined #gluster
22:53 PatNarciso joined #gluster
22:53 eightyeight joined #gluster
22:53 JustinClift joined #gluster
22:53 shaunm joined #gluster
22:53 mikedep333 joined #gluster
22:53 ccha joined #gluster
22:53 yoavz joined #gluster
22:53 Ramereth joined #gluster
22:53 y4m4 joined #gluster
22:53 partner joined #gluster
22:53 tessier_ joined #gluster
22:53 churnd joined #gluster
22:53 Guest75764 joined #gluster
22:53 Arminder- joined #gluster
22:53 semiosis joined #gluster
22:53 quydov joined #gluster
22:54 Gorian joined #gluster
22:54 codex__ joined #gluster
22:54 social_ joined #gluster
22:54 jackdpeterson joined #gluster
22:54 dbruhn joined #gluster
22:54 chirino_m joined #gluster
22:54 mbukatov joined #gluster
22:54 lanning joined #gluster
22:54 cfeller joined #gluster
22:54 morse_ joined #gluster
22:54 primusinterpares joined #gluster
22:54 Micromus joined #gluster
22:54 wgao joined #gluster
22:54 DJCl34n joined #gluster
22:54 Arminder- joined #gluster
22:55 NuxRo joined #gluster
22:55 mibby- joined #gluster
23:03 ilbot3 joined #gluster
23:03 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
23:03 misko__ joined #gluster
23:04 tziom joined #gluster
23:04 nueces joined #gluster
23:05 stickyboy joined #gluster
23:05 stickyboy joined #gluster
23:05 blkperl_ joined #gluster
23:05 wushudoin joined #gluster
23:05 JustinClift joined #gluster
23:05 Bosse joined #gluster
23:05 owlbot joined #gluster
23:05 johnnytran joined #gluster
23:05 masterzen joined #gluster
23:05 strata joined #gluster
23:05 schrodinger joined #gluster
23:05 sadbox joined #gluster
23:05 kalzz joined #gluster
23:05 capri joined #gluster
23:05 fandi_ joined #gluster
23:06 tobias- joined #gluster
23:06 d-fence joined #gluster
23:06 eryc joined #gluster
23:06 eryc joined #gluster
23:06 mrEriksson joined #gluster
23:06 Kins joined #gluster
23:07 ama joined #gluster
23:07 rastar_afk joined #gluster
23:07 mdavidson joined #gluster
23:07 neoice joined #gluster
23:07 fubada- joined #gluster
23:09 l0uis joined #gluster
23:09 shaunm joined #gluster
23:09 l0uis joined #gluster
23:09 bennyturns joined #gluster
23:09 ira joined #gluster
23:10 doekia joined #gluster
23:10 primusinterpares joined #gluster
23:10 kalzz joined #gluster
23:11 [o__o] joined #gluster
23:11 tru_tru joined #gluster
23:11 Peanut_ joined #gluster
23:11 kaii joined #gluster
23:12 _jmp_ joined #gluster
23:12 Ramereth joined #gluster
23:12 morse joined #gluster
23:12 Lee- joined #gluster
23:12 lyang0 joined #gluster
23:12 devilspgd joined #gluster
23:12 gothos joined #gluster
23:12 vincent_vdk joined #gluster
23:12 hflai joined #gluster
23:12 asku joined #gluster
23:12 tg2 joined #gluster
23:12 oxidane joined #gluster
23:12 eightyeight joined #gluster
23:12 churnd joined #gluster
23:12 tessier joined #gluster
23:12 sauce joined #gluster
23:12 wgao joined #gluster
23:12 Micromus joined #gluster
23:12 cfeller joined #gluster
23:12 lanning joined #gluster
23:12 mbukatov joined #gluster
23:12 chirino_m joined #gluster
23:12 dbruhn joined #gluster
23:12 jackdpeterson joined #gluster
23:12 social_ joined #gluster
23:12 codex__ joined #gluster
23:12 Gorian joined #gluster
23:12 kke joined #gluster
23:12 ndk_ joined #gluster
23:12 XpineX joined #gluster
23:12 side_control joined #gluster
23:12 PeterA1 joined #gluster
23:12 ccha2 joined #gluster
23:12 yosafbridge` joined #gluster
23:12 DJCl34n joined #gluster
23:12 kkeithley joined #gluster
23:12 Arminder joined #gluster
23:12 maveric_amitc_ joined #gluster
23:12 rwheeler joined #gluster
23:12 nhayashi joined #gluster
23:12 necrogami joined #gluster
23:12 mibby joined #gluster
23:12 gildub joined #gluster
23:12 msciciel joined #gluster
23:12 jbrooks joined #gluster
23:12 the-me joined #gluster
23:12 mikedep333 joined #gluster
23:12 SmithyUK joined #gluster
23:12 mator joined #gluster
23:12 julim joined #gluster
23:12 Rogue-3 joined #gluster
23:13 haakon joined #gluster
23:13 saltsa joined #gluster
23:13 badone joined #gluster
23:13 ckotil joined #gluster
23:13 ghenry joined #gluster
23:13 PatNarciso joined #gluster
23:13 yoavz joined #gluster
23:13 semiosis joined #gluster
23:13 javi404 joined #gluster
23:13 Intensity joined #gluster
23:13 ildefonso joined #gluster
23:13 atrius` joined #gluster
23:13 hchiramm joined #gluster
23:13 Azaril joined #gluster
23:13 T0aD joined #gluster
23:13 dastar joined #gluster
23:13 B21956 joined #gluster
23:13 Telsin joined #gluster
23:13 JordanHackworth joined #gluster
23:13 samppah joined #gluster
23:13 partner joined #gluster
23:13 Intensity joined #gluster
23:14 sauce joined #gluster
23:14 semiosis joined #gluster
23:14 necrogami joined #gluster
23:14 tom[] joined #gluster
23:14 yoavz joined #gluster
23:16 samsaffron___ joined #gluster
23:17 _NiC joined #gluster
23:18 Guest14232 joined #gluster
23:25 harish joined #gluster
23:26 dblack joined #gluster
23:27 hchiramm_ joined #gluster
23:28 wushudoin joined #gluster
23:28 JustinClift joined #gluster
23:28 rastar_afk joined #gluster
23:29 ndk_ joined #gluster
23:30 kkeithley joined #gluster
23:30 maveric_amitc_ joined #gluster
23:37 systemonkey joined #gluster
23:38 DJClean joined #gluster
23:38 DJClean joined #gluster
23:39 Guest38837 joined #gluster
23:51 Staples84 joined #gluster
