IRC log for #gluster, 2014-10-09


All times shown according to UTC.

Time Nick Message
00:00 plarsen joined #gluster
00:08 siel joined #gluster
00:13 davemc joined #gluster
00:18 bene2 joined #gluster
00:35 justinmburrous joined #gluster
00:46 bala joined #gluster
00:59 cjanbanan joined #gluster
01:19 haomaiwa_ joined #gluster
01:21 jansegre joined #gluster
01:23 topshare joined #gluster
01:25 eryc joined #gluster
01:25 eryc joined #gluster
01:32 msmith_ joined #gluster
01:35 justinmburrous joined #gluster
01:39 haomaiw__ joined #gluster
01:56 haomaiwa_ joined #gluster
02:00 bmikhael joined #gluster
02:01 bmikhael can anybody help me with the procedures to replace a failed brick
02:01 haomai___ joined #gluster
02:34 panpanfeng joined #gluster
03:07 bmikhael joined #gluster
03:12 samsaffron___ joined #gluster
03:19 kanagaraj joined #gluster
03:23 justinmburrous joined #gluster
03:29 cjanbanan joined #gluster
03:34 samsaffron___ joined #gluster
03:34 haomaiwa_ joined #gluster
03:35 bharata-rao joined #gluster
03:36 DV joined #gluster
03:42 samsaffron___ joined #gluster
03:43 haomaiw__ joined #gluster
03:51 ACiDGRiM joined #gluster
03:54 shubhendu joined #gluster
03:57 itisravi joined #gluster
03:59 neoice joined #gluster
04:00 zerick joined #gluster
04:04 nbalachandran joined #gluster
04:25 ramteid joined #gluster
04:32 rafi1 joined #gluster
04:32 Rafi_kc joined #gluster
04:33 jiffin joined #gluster
04:38 ndarshan joined #gluster
04:42 nbalachandran joined #gluster
04:46 rjoseph joined #gluster
04:48 nishanth joined #gluster
04:58 anoopcs joined #gluster
04:59 justinmburrous joined #gluster
05:02 spandit joined #gluster
05:03 prasanth_ joined #gluster
05:05 anoopcs joined #gluster
05:13 raghu joined #gluster
05:14 atalur joined #gluster
05:23 RaSTar joined #gluster
05:26 shubhendu joined #gluster
05:32 Rafi_kc joined #gluster
05:39 prasanth_ joined #gluster
05:44 overclk joined #gluster
05:45 ricky-ticky joined #gluster
05:46 soumya_ joined #gluster
05:47 karnan joined #gluster
05:49 panpanfeng hello, can anyone help me? I've hit a problem: when I use a gluster 3.6 client to mount a gluster 3.3 volume, write operations fail.
05:49 kdhananjay joined #gluster
05:52 prasanth_ joined #gluster
05:52 rjoseph joined #gluster
05:53 prasanth_ joined #gluster
05:55 prasanth_ joined #gluster
05:59 prasanth_ joined #gluster
05:59 nbalachandran joined #gluster
06:01 justinmb_ joined #gluster
06:01 msmith_ joined #gluster
06:10 saurabh joined #gluster
06:10 plarsen joined #gluster
06:15 lalatenduM joined #gluster
06:20 Philambdo joined #gluster
06:20 kshlm joined #gluster
06:22 bmikhael joined #gluster
06:31 Fen2 joined #gluster
06:33 shubhendu joined #gluster
06:37 bala joined #gluster
06:37 rjoseph joined #gluster
06:39 nbalachandran joined #gluster
06:44 ekuric joined #gluster
06:48 deepakcs joined #gluster
06:50 nshaikh joined #gluster
06:51 RaSTar joined #gluster
06:51 topshare joined #gluster
06:52 Fen2 Hi, good morning :)
06:56 topshare joined #gluster
07:00 Fen1 joined #gluster
07:01 ctria joined #gluster
07:02 msmith_ joined #gluster
07:07 topshare joined #gluster
07:09 pkoro joined #gluster
07:10 topshare joined #gluster
07:11 R0ok_ Fen2: greetings
07:18 topshare joined #gluster
07:19 Lee- joined #gluster
07:20 ricky-ticky joined #gluster
07:24 Fen1 R0ok_ how are you ?
07:26 rolfb joined #gluster
07:29 topshare joined #gluster
07:33 haomaiwang joined #gluster
07:35 topshare joined #gluster
07:35 haomai___ joined #gluster
07:37 topshare joined #gluster
07:39 panpanfeng http://supercolony.gluster.org/pipermail/gluster-users/2014-October/019026.html
07:39 glusterbot Title: [Gluster-users] gluster 3.6 compatibility problem with gluster 3.3 (at supercolony.gluster.org)
07:40 panpanfeng does anybody have an idea about the compatibility issue in gluster 3.6?
07:40 khanku left #gluster
07:44 ws2k3 Morning all
07:47 Rafi_kc joined #gluster
07:48 fsimonce joined #gluster
07:48 doubt joined #gluster
07:50 R0ok_ panpanfeng: what volume options have you set on that volume?
07:51 panpanfeng the volume is served by gluster 3.3, and I just enabled quota, set features.quota-timeout=5, and enabled allow-insecure
07:52 panpanfeng Options Reconfigured: features.quota-timeout: 5 server.allow-insecure: on features.quota: on features.limit-usage: /:10TB
07:52 panpanfeng Volume Name: maintain4 Type: Distributed-Replicate Volume ID: 27cd9529-cde0-4752-8aa2-181fad118198 Status: Started Number of Bricks: 6 x 2 = 12 Transport-type: tcp
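The two pasted lines above read like flattened "gluster volume info" output. For reference, a minimal sketch of the commands that produce those details plus per-brick state, using the volume name maintain4 from the paste (run on any server in the trusted pool):

    gluster volume info maintain4     # reconfigured options, brick list, transport type
    gluster volume status maintain4   # per-brick online/offline state and listening ports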
07:53 R0ok_ panpanfeng: what about features.read-only option on the volume ? is it set to on ?
07:53 pkoro joined #gluster
07:54 panpanfeng It's not read-only. I can write files to this volume when I use gluster 3.3 or 3.4 as the client.
07:55 panpanfeng the write operation only fails when I use gluster 3.6.0 as the client.
07:55 anands joined #gluster
07:56 R0ok_ what about mount options & dir permissions that you're using on the 3.6 client ?
07:56 cjanbanan joined #gluster
08:00 panpanfeng I just use " mount -t glusterfs host:/volname /mnt/mountpoint " to mount the volume, and the mount point has 755 permissions
08:03 panpanfeng the write operation will fail with the log " [fuse-bridge.c:1261:fuse_err_cbk] 0-glusterfs-fuse: 261: FLUSH() ERR => -1 (Transport endpoint is not connected)"
08:03 msmith_ joined #gluster
08:04 panpanfeng [2014-10-09 08:03:04.222896] W [client-rpc-fops.c:850:client3_3_writev_cbk] 0-maintain4-client-9: remote operation failed: Transport endpoint is not connected
08:06 liquidat joined #gluster
08:09 panpanfeng I filed the bug a few days ago. https://bugzilla.redhat.com/show_bug.cgi?id=1147236
08:09 glusterbot Bug 1147236: high, unspecified, ---, gluster-bugs, NEW , gluster 3.6.0 beta1 cannot write file on gluster 3.3 volume
08:09 panpanfeng But nobody has given me any ideas. TT
08:10 topshare joined #gluster
08:10 panpanfeng I think it's a compatibility problem of gluster 3.6.0
08:10 topshare joined #gluster
08:12 sputnik13 joined #gluster
08:15 Fen1 panpanfeng: i think so too, but i don't know what exactly; maybe some new features are the problem
08:16 Fen1 panpanfeng: did you create your directory "mountpoint" ?
08:16 panpanfeng yes, of course. the mount operation succeeds and the read operation succeeds too.
08:18 Fen1 And have you tried with just a distributed volume ?
08:18 R0ok_ panpanfeng: i've got "Transport endpoint is not connected" messages & all i did was to unmount & remount
08:19 Fen1 Did you set the mount up automatically (in fstab)?
08:19 panpanfeng no  I just use mount command.
08:20 panpanfeng I have tried many distributed volumes served by gluster 3.3.
08:20 Fen1 Try to unmount all / reboot / mount it automatically with another directory name
08:20 panpanfeng and of course, I have tried umount&remount but this doesn't work.
08:21 R0ok_ panpanfeng: what about iptables rules ?
08:21 panpanfeng I have turned iptables off, and I have tested on another machine too.
08:22 Fen1 And with NFS ?
08:22 Fen1 Maybe your partitions are wrong...
08:22 panpanfeng you mean mount with NFS?
08:22 Fen1 yeah
08:23 panpanfeng I don't think it's a partition problem, because on the same machine, mounting with the gluster 3.3 client, everything is ok.
08:23 Fen1 ok, using xfs ?
08:24 panpanfeng ok   I will try to mount with NFS
08:24 panpanfeng thanks   I will tell you the result
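A minimal sketch of the NFS mount Fen1 is suggesting, reusing the host and volume placeholders from the FUSE mount above. Gluster's built-in NFS server speaks NFSv3 over TCP, so the version and protocol are pinned explicitly:

    # FUSE mount (what the 3.6 client is doing now)
    mount -t glusterfs host:/volname /mnt/mountpoint
    # NFS mount of the same volume, bypassing the 3.6 FUSE client entirely
    mount -t nfs -o vers=3,proto=tcp host:/volname /mnt/mountpoint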
08:35 DV joined #gluster
08:37 jvandewege_ joined #gluster
08:44 _NiC joined #gluster
08:46 shubhendu joined #gluster
08:51 ricky-ticky joined #gluster
08:56 bala joined #gluster
08:56 nishanth joined #gluster
09:00 vimal joined #gluster
09:01 nshaikh joined #gluster
09:02 panpanfeng so strange. I see many connections from client to client with the "lsof | grep gluster" command, while "netstat" shows me connections between client and server.
09:04 cjanbanan I saw some info which said that time has to be synchronized between all servers running glusterfs. What would happen if time were out of sync?
09:04 msmith_ joined #gluster
09:13 R0ok_ cjanbanan: i think probably different timestamps on files on replicated volumes or geo-rep vols, etc.
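A quick way to check how far apart the peers' clocks actually are before worrying about the effects; a sketch assuming passwordless ssh and hypothetical hostnames server1 through server4:

    # print each peer's idea of "now" in UTC; more than a second or two of spread
    # is worth fixing with ntpd/chrony before relying on file timestamps
    for h in server1 server2 server3 server4; do
        printf '%s: ' "$h"; ssh "$h" date -u +%Y-%m-%dT%H:%M:%S
    done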
09:15 kdhananjay joined #gluster
09:16 atalur joined #gluster
09:19 haomaiwang joined #gluster
09:25 cjanbanan So just the timestamps? No wrong decisions when replicating or, even worse, considering it a split-brain?
09:25 ctria joined #gluster
09:26 Fen1 panpanfeng: does it work with nfs?
09:26 mrEriksson joined #gluster
09:27 panpanfeng sorry, I was dealing with something else. I will test nfs now.
09:30 doubt joined #gluster
09:35 haomaiwa_ joined #gluster
09:36 haoma____ joined #gluster
09:47 doubt joined #gluster
09:47 wushudoin| joined #gluster
09:53 prasanth_ joined #gluster
09:54 haomaiwa_ joined #gluster
09:56 haomai___ joined #gluster
10:04 TvL2386 joined #gluster
10:05 msmith_ joined #gluster
10:06 nishanth joined #gluster
10:18 bala joined #gluster
10:18 shubhendu joined #gluster
10:22 doubt joined #gluster
10:26 necrogami joined #gluster
10:34 atalur joined #gluster
10:38 kdhananjay joined #gluster
10:39 haomaiwa_ joined #gluster
10:44 doubt joined #gluster
10:49 edward1 joined #gluster
10:52 sputnik13 joined #gluster
10:55 jbautista- joined #gluster
10:57 haomai___ joined #gluster
11:05 msmith_ joined #gluster
11:10 virusuy joined #gluster
11:10 virusuy joined #gluster
11:10 diegows joined #gluster
11:12 vstokes joined #gluster
11:18 Pupeno joined #gluster
11:20 vstokes1 joined #gluster
11:21 vstokes1 left #gluster
11:21 vstokes1 joined #gluster
11:21 sputnik13 joined #gluster
11:22 LebedevRI joined #gluster
11:24 raghu joined #gluster
11:27 doubt joined #gluster
11:35 sputnik13 joined #gluster
11:39 sputnik13 joined #gluster
11:45 doubt joined #gluster
11:45 julim joined #gluster
11:47 soumya__ joined #gluster
11:53 Fen1 joined #gluster
11:54 Fen1 panpanfeng: hi ?
11:56 ctria joined #gluster
11:57 glusterbot New news from newglusterbugs: [Bug 1151004] [USS]: deletion and creation of snapshots with same name causes problems <https://bugzilla.redhat.com/show_bug.cgi?id=1151004>
11:59 ira joined #gluster
12:00 itisravi_ joined #gluster
12:01 itisravi_ joined #gluster
12:01 ira joined #gluster
12:02 shubhendu joined #gluster
12:02 itisravi_ joined #gluster
12:04 doubt joined #gluster
12:06 msmith_ joined #gluster
12:06 anands joined #gluster
12:18 geaaru joined #gluster
12:29 uebera|| joined #gluster
12:29 uebera|| joined #gluster
12:31 sputnik13 joined #gluster
12:35 uebera|| joined #gluster
12:43 Slashman joined #gluster
12:49 calisto joined #gluster
12:56 theron joined #gluster
12:56 jansegre joined #gluster
12:58 tdasilva joined #gluster
13:00 ivok joined #gluster
13:01 calum_ joined #gluster
13:04 theron_ joined #gluster
13:06 doubt joined #gluster
13:08 rjoseph joined #gluster
13:10 ppai joined #gluster
13:10 Fen1 Hi, if i make a distributed striped replicated volume like this: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/images/6156.png and one of my servers crashes, do I lose access to the data?
13:15 Pupeno joined #gluster
13:23 sputnik13 joined #gluster
13:25 Pupeno joined #gluster
13:26 msmith_ joined #gluster
13:31 Fen1 Because with this volume, I don't really understand the replication side....
13:34 portante left #gluster
13:52 theron joined #gluster
13:53 jmarley joined #gluster
14:01 portante joined #gluster
14:05 vstokes1 Hello, has anyone had any success getting HP/UX 11.11 to NFS mount (as a client) a gluster volume?
14:07 SmithyUK Hi all, still having mad rebalance issues. Have typed up my findings in here. Any help is greatly appreciated
14:07 SmithyUK http://fpaste.org/140634/86361814/
14:07 glusterbot Title: #140634 Fedora Project Pastebin (at fpaste.org)
14:10 SmithyUK Not sure what it is about the 202,908th file in gluster :P
14:10 coredump joined #gluster
14:12 prasanth_ joined #gluster
14:13 portante left #gluster
14:21 cmtime joined #gluster
14:35 anands joined #gluster
14:35 kshlm joined #gluster
14:36 cliluw joined #gluster
14:36 doubt joined #gluster
14:37 xleo joined #gluster
14:38 aulait joined #gluster
14:45 vstokes1 Hello, has anyone had any success getting HP/UX 11.11 to NFS mount a gluster volume?
14:48 ricky-ticky joined #gluster
14:50 dgandhi joined #gluster
14:52 jansegre joined #gluster
14:55 dgandhi Greetings all, I am looking to build a replicated cluster with ~200 SATA drives. What are the considerations for making them all discrete bricks vs creating RAID1 sets as bricks? The size of my largest file is less than 1/1000 of the size of the disks, so single massive files are not an issue in my environment.
14:58 deepakcs joined #gluster
15:02 james28 joined #gluster
15:02 soumya joined #gluster
15:04 RioS2 joined #gluster
15:06 james28 Hi guys, little problem.. I have two gluster machines with a 500GB drive each, set up as a single replicated volume. This has been running lovely for over a year now but I'm at 80% usage. The only other machine I can add has a 1TB drive; is there any way I can add this into the cluster and rebalance it so there are still at least two copies of each file and the overall space increases?
15:07 calisto joined #gluster
15:10 kodapa joined #gluster
15:10 Fen1 james28: add your machine to your cluster, add a brick to your volume, and rebalance it
15:11 lpabon joined #gluster
15:12 james28 Fen1, just testing on my VM demo and when I try and add the brick I get "volume add-brick: failed: Incorrect number of bricks supplied 1 with count 2"
15:14 Fen1 james28: yeah, of course. you need a pair of bricks; just make 2 partitions of 500GB and add 2 bricks
15:15 james28 Okay I'm guessing that will take the whole thing to 1TB of space then?
15:15 Fen1 james28: don't you want this ?
15:16 james28 yes
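A sketch of what Fen1 is describing: on a replica 2 volume, bricks have to be added in multiples of two, so the 1TB machine would contribute two bricks (e.g. two 500GB partitions). Volume name, hostname and brick paths below are hypothetical:

    gluster volume add-brick myvol newhost:/bricks/b1 newhost:/bricks/b2
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status   # wait for "completed" on all nodes

Note that the two new bricks would replicate to each other, so files placed on that pair are only as safe as the single new machine.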
15:16 Fen1 dgandhi: I don't understand your problem, sorry
15:19 jansegre joined #gluster
15:21 julim joined #gluster
15:23 vstokes1 left #gluster
15:25 dgandhi Fen1: not a problem, so much as seeking info on any potential issues with using large numbers of bricks, like a couple hundred. I remember reading somewhere that there was some limit due to port/brick associations, but I can't find it.
15:25 sputnik13 joined #gluster
15:28 Fen1 dgandhi: if your files are bigger than your bricks you need to use a striped volume, but otherwise I don't see any problem, just the fact that whenever you need more space you have to add bricks each time, so it's just tedious...
15:32 harish joined #gluster
15:33 Pupeno joined #gluster
15:33 _Bryan_ joined #gluster
15:35 dtrainor__ joined #gluster
15:37 R0ok_ james28: i think you should use a distributed or a striped volume for that
15:39 giannello joined #gluster
15:39 nated joined #gluster
15:39 nated Hi. in gluster 3.5 geo-rep uses /var/run as the working_dir and that's where changelogs are stored. However on Ubuntu /var/run is a tmpfs filesystem, so large changelogs fill up /var/run after a while and geo-rep breaks
15:40 mojibake joined #gluster
15:40 nated what's the best way to fix this: stop the geo-rep and volume, move the data around to disk-backed storage, reconfigure the working_dir, and start everything back up?
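A rough sketch of the sequence nated describes, assuming the geo-rep session exposes a working_dir config key (the option name, session names and target path here are assumptions, not verified against 3.5):

    # stop the session so nothing writes to the old working dir
    gluster volume geo-replication mastervol slavehost::slavevol stop
    # move the surviving changelogs off tmpfs onto disk-backed storage
    mkdir -p /var/lib/misc/glusterfsd
    cp -a /var/run/gluster/<old-working-dir> /var/lib/misc/glusterfsd/   # path is illustrative
    # point the session at the new location and bring it back up
    gluster volume geo-replication mastervol slavehost::slavevol config working_dir /var/lib/misc/glusterfsd
    gluster volume geo-replication mastervol slavehost::slavevol start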
15:44 rotbeard joined #gluster
15:45 portante joined #gluster
15:53 anands joined #gluster
15:59 jansegre joined #gluster
15:59 cjanbanan joined #gluster
16:00 DV joined #gluster
16:05 semiosis nated: that's interesting. i thought /var/run was just for pid files and the like. do you think larger working datasets should be stored in /var/lib, or elsewhere?
16:08 rolfb joined #gluster
16:20 theron joined #gluster
16:24 sputnik13 joined #gluster
16:27 Pupeno joined #gluster
16:28 bennyturns joined #gluster
16:34 XpineX joined #gluster
16:34 FarbrorLeon joined #gluster
16:36 bennyturns joined #gluster
16:37 quique doing some testing: i have four peers, server 1-4. i took down server1, then brought it back up, and it has connected to the cluster and self-healed from what I can tell, but servers 2-4 show it as State: Peer in Cluster (Disconnected)
16:38 quique a gluster volume status on servers 2-4 doesn't show server1 as an active brick, but server1 shows all four as active
16:50 ira joined #gluster
16:51 semiosis quique: sounds like the IP address of server1 changed but the other peers don't know the new addr. hopefully you're using hostnames and can just update the hostname for server1 with the new IP
16:51 quique semiosis: the dns is updated
16:51 quique and I can ping from all the other nodes
16:51 semiosis quique: then restart glusterd on them
16:52 semiosis also, what version of glusterfs?
16:53 doubt joined #gluster
16:53 quique glusterfs 3.5.2
16:53 quique semiosis: restart glusterd on all the other nodes?
16:53 semiosis yes
16:54 dtrainor__ joined #gluster
16:56 quique semiosis: restarting worked, does the daemon stick with the original ip of all nodes until restarted?
16:56 dtrainor joined #gluster
16:56 semiosis apparently
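For the record, a sketch of the restart-and-verify cycle that fixed it, assuming the Debian/Ubuntu service name (on RPM-based distros the service is plain glusterd):

    # on each of server2..server4
    service glusterfs-server restart
    # confirm server1 is no longer reported as Disconnected and its bricks are back
    gluster peer status
    gluster volume status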
17:00 theron joined #gluster
17:01 zerick joined #gluster
17:06 R0ok_ joined #gluster
17:10 doubt joined #gluster
17:19 theron joined #gluster
17:26 jobewan joined #gluster
17:28 nated semiosis: yes, /var/run is a symlink to /run and is tmpfs mounted, so presently at reboot you lose all your changelogs
17:28 nated and you generally only have 10% of RAM for changelogs, shared with a number of other critical functions
17:29 nated changelogs should be on persistently backed storage, minimally
17:30 doubt joined #gluster
17:30 semiosis nated: agreed. could you file a bug about this please?
17:30 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:30 nated yeah, let me clean things up here and will do so
17:30 ricky-ticky joined #gluster
17:31 semiosis i think our options are... 1) convince upstream to move this stuff to /var/lib/gluster... for everyone, or 2) add a patch to the ubuntu debs to do that.  i'd prefer #1 of course
17:31 vipulnayyar joined #gluster
17:32 nated /var/lib/gluster is itself a gluster volume, isn't it?
17:32 nated in that case, writing changelogs there would be recursive :)
17:33 semiosis no it's not.
17:33 nated probably a new /var/lib/gluster-georep would be needed, though I wonder what's done for RHEL7, since I bet it suffers from this same problem
17:34 semiosis /var/lib/gluster is just persistent state for glusterd & other processes... imo the right place for these changelogs
17:34 nated ahh, ok
17:35 semiosis how large are your changelogs?
17:36 nated I have 2 volume replicating
17:36 nated each is 180MB for the changelog dir
17:38 semiosis what kind of workload is it?
17:38 nated image files
17:39 nated like 90k total
17:39 nated and transaction volume of 1 or 2 a minute
17:39 nated at peak
17:39 nated really minimal workload
17:39 semiosis i was expecting a lot more
17:39 semiosis wonder why the logs are so big
17:40 nated dunno
17:40 nated looking for which files are large now
17:40 nated starting to think it's an unlinked but not released file
17:40 semiosis a wha?
17:41 semiosis bbiab, lunch
17:41 nated a file that's been deleted but is still open by a process
17:41 nated so not visible, but still there
17:42 bennyturns joined #gluster
17:42 nated but really, the space is consumed by the .processed directory keeping the old CHANGELOG.XXXXXX files
17:43 jmarley joined #gluster
17:49 rwheeler joined #gluster
17:58 JordanHackworth joined #gluster
18:02 johnmark joined #gluster
18:07 ivok joined #gluster
18:11 doubt joined #gluster
18:20 theron joined #gluster
18:21 ThatGraemeGuy joined #gluster
18:24 soumya joined #gluster
18:24 theron joined #gluster
18:26 nueces joined #gluster
18:27 theron joined #gluster
18:35 XpineX joined #gluster
18:46 doubt joined #gluster
18:52 Pupeno joined #gluster
18:57 nated so, is there any known issue with changing geo-replication's working_dir?
18:59 nated also, this is fixed in 3.6 :) https://bugzilla.redhat.com/show_bug.cgi?id=1077516
18:59 glusterbot Bug 1077516: high, unspecified, ---, vshankar, ON_QA , [RFE] :-  Move the container for changelogs from /var/run to /var/lib/misc
19:01 semiosis thanks
19:05 nated so i guess the next obvious question is when 3.6 will be out. The Planning36 wiki page says late September, which seems out of date now, and the 3.6 blocker bug in bugzilla still seems fairly well populated
19:06 semiosis nated: i'll try to get a 3.6 QA PPA started by this weekend
19:06 semiosis it's in beta now
19:06 virusuy joined #gluster
19:06 nated yeah, I see beta3 release
19:09 doubt joined #gluster
19:23 bennyturns joined #gluster
19:29 vertex Hey guys, what's the proper way to shut down a cluster and bring it back up? Can I shut down all the nodes and it'll be ok, or should I do a "gluster volume <name> stop" or something like that first?
19:30 vertex like shut it down and start it a few weeks later
19:33 plarsen joined #gluster
19:34 virusuy joined #gluster
19:34 virusuy joined #gluster
19:40 theron joined #gluster
19:47 pkoro joined #gluster
19:47 doubt joined #gluster
19:48 andreask joined #gluster
19:53 msmith_ joined #gluster
19:55 msmith_ joined #gluster
19:58 glusterbot New news from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
20:00 theron joined #gluster
20:02 funkybunny joined #gluster
20:07 funkybunny joined #gluster
20:07 funkybunny joined #gluster
20:08 funkybunny hello, i have a few questions related to gluster if anyone is around
20:13 anands joined #gluster
20:19 calisto joined #gluster
20:20 dtrainor joined #gluster
20:22 doubt joined #gluster
20:31 cyberbootje joined #gluster
20:33 doubt joined #gluster
20:37 funkybunny when shrinking a striped/replicated volume, how can I tell which bricks are in the same replica set?
20:37 funkybunny or i suppose, in general, is there a way to see which bricks are assigned to which subvolume
20:45 nshaikh joined #gluster
20:46 calisto joined #gluster
20:47 semiosis funkybunny: gluster volume info... bricks are listed in replica set order. for example, if you have replica 3, first three bricks are a replica set, next three, etc.
20:49 funkybunny if it's set to both distributed and replicated, which comes first? i.e. if we have nodes 1 2 3 4 in that order, are 1 and 2 the stripe with 3 and 4 the replicas, or 1/3 the stripe and 2/4 the replicas?
20:50 funkybunny and in that case, i'm assuming bricks need to be added and removed in fours?
21:00 semiosis stripe is something else entirely
21:01 semiosis distributed-replicated means files are distributed among replica sets. so if you have a 6 brick volume of replica 3, for example, then the first three bricks are one replica set, the other three bricks are the other replica set
21:02 semiosis each file is stored on one replica set or the other, so each replica set contains half of all files
21:06 funkybunny ok i get it, thanks!
21:07 funkybunny i mixed up the terms distributed and stripe in my question, but that still answers it
21:10 semiosis yw
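A sketch tying semiosis's explanation to the CLI: bricks are grouped into replica sets in the order they are given at create/add-brick time, and shrinking removes one whole set at a time. Volume and brick names below are hypothetical:

    # replica 2, 4 bricks: (h1,h2) form one replica pair, (h3,h4) the other
    gluster volume create myvol replica 2 h1:/b h2:/b h3:/b h4:/b
    # shrink by one replica set: start the migration, watch it, then commit
    gluster volume remove-brick myvol h3:/b h4:/b start
    gluster volume remove-brick myvol h3:/b h4:/b status
    gluster volume remove-brick myvol h3:/b h4:/b commit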
21:33 anands joined #gluster
21:38 ppai joined #gluster
21:40 theron joined #gluster
21:49 dtrainor joined #gluster
22:08 siel joined #gluster
22:27 gildub joined #gluster
22:41 theron_ joined #gluster
23:00 cjanbanan joined #gluster
23:12 nueces_ joined #gluster
23:16 bennyturns joined #gluster
23:16 bennyturns joined #gluster
23:33 Pupeno joined #gluster
