
IRC log for #gluster, 2016-12-07


All times shown according to UTC.

Time Nick Message
00:00 farhoriz_ joined #gluster
00:03 JoeJulian Yep
00:11 kraynor5b Thank you again :)
00:14 haomaiwang joined #gluster
00:17 kraynor5b joined #gluster
00:17 kraynor5b1 joined #gluster
00:52 shdeng joined #gluster
01:05 jeremyh joined #gluster
01:11 shdeng joined #gluster
01:16 shruti joined #gluster
01:17 daMaestro joined #gluster
01:47 haomaiwang joined #gluster
01:53 nishanth joined #gluster
01:56 Saravanakmr joined #gluster
02:05 phileas joined #gluster
02:08 derjohn_mob joined #gluster
02:14 haomaiwang joined #gluster
02:16 ShwethaHP joined #gluster
02:59 arc0 joined #gluster
03:14 haomaiwang joined #gluster
03:18 bwerthmann joined #gluster
03:21 magrawal joined #gluster
03:30 loadtheacc joined #gluster
03:33 vbellur joined #gluster
03:34 vbellur joined #gluster
03:34 kramdoss_ joined #gluster
03:34 vbellur joined #gluster
03:44 nbalacha joined #gluster
03:45 nishanth joined #gluster
03:46 riyas joined #gluster
03:50 atinm joined #gluster
03:53 derjohn_mob joined #gluster
03:53 sbulage joined #gluster
04:02 yalu joined #gluster
04:09 itisravi joined #gluster
04:13 rwheeler joined #gluster
04:14 haomaiwang joined #gluster
04:18 Karan joined #gluster
04:20 kdhananjay joined #gluster
04:22 buvanesh_kumar joined #gluster
04:25 Lee1092 joined #gluster
04:28 jiffin joined #gluster
04:29 loadtheacc joined #gluster
04:33 apandey joined #gluster
04:34 aravindavk joined #gluster
04:37 Prasad joined #gluster
04:38 RameshN joined #gluster
04:39 victori joined #gluster
04:50 ndarshan joined #gluster
04:59 shubhendu joined #gluster
04:59 nishanth joined #gluster
05:03 susant joined #gluster
05:05 atinm joined #gluster
05:05 ppai joined #gluster
05:12 nishanth joined #gluster
05:12 Jacob843 joined #gluster
05:13 skoduri joined #gluster
05:14 haomaiwang joined #gluster
05:15 PotatoGim_ joined #gluster
05:16 rafi joined #gluster
05:17 siel joined #gluster
05:19 Wizek joined #gluster
05:19 fyxim joined #gluster
05:19 billputer joined #gluster
05:20 ndarshan joined #gluster
05:20 moss joined #gluster
05:21 rideh joined #gluster
05:21 sankarshan joined #gluster
05:23 Saravanakmr joined #gluster
05:23 Saravanakmr joined #gluster
05:23 yawkat joined #gluster
05:24 panina joined #gluster
05:26 al joined #gluster
05:30 colm joined #gluster
05:34 karthik_us joined #gluster
05:38 nbalacha joined #gluster
05:41 sanoj joined #gluster
05:45 msvbhat joined #gluster
05:47 nbalacha joined #gluster
05:58 loadtheacc joined #gluster
06:03 loadtheacc joined #gluster
06:10 k4n0 joined #gluster
06:13 hgowtham joined #gluster
06:14 haomaiwang joined #gluster
06:16 loadtheacc joined #gluster
06:17 hchiramm joined #gluster
06:33 rastar joined #gluster
06:33 haomaiwang joined #gluster
06:34 k4n0 joined #gluster
06:34 mhulsman joined #gluster
06:42 Karan joined #gluster
06:46 sbulage joined #gluster
06:48 rafi1 joined #gluster
06:51 masber joined #gluster
06:52 rafi joined #gluster
06:58 Philambdo joined #gluster
07:05 poornima_ joined #gluster
07:07 mhulsman joined #gluster
07:08 mhulsman joined #gluster
07:09 mhulsman joined #gluster
07:10 mhulsman joined #gluster
07:11 mhulsman joined #gluster
07:13 ankitraj joined #gluster
07:13 devyani7 joined #gluster
07:14 haomaiwang joined #gluster
07:17 d4n13L joined #gluster
07:18 ashiq joined #gluster
07:19 Muthu joined #gluster
07:21 jtux joined #gluster
07:23 rafi1 joined #gluster
07:25 msvbhat joined #gluster
07:26 marbu joined #gluster
07:30 rafi1 joined #gluster
07:34 prth joined #gluster
07:36 rastar joined #gluster
07:41 shdeng joined #gluster
07:47 Ryllise joined #gluster
07:49 atinm joined #gluster
07:49 [diablo] joined #gluster
07:51 mhulsman joined #gluster
08:02 Marbug joined #gluster
08:02 victori joined #gluster
08:02 hchiramm joined #gluster
08:04 RameshN joined #gluster
08:14 jri joined #gluster
08:14 haomaiwang joined #gluster
08:15 shubhendu joined #gluster
08:19 nbalacha joined #gluster
08:23 apandey joined #gluster
08:30 ivan_rossi joined #gluster
08:32 p7mo joined #gluster
08:35 panina joined #gluster
08:40 aravindavk joined #gluster
08:43 tdasilva joined #gluster
08:45 DaKnOb joined #gluster
08:48 Saravanakmr joined #gluster
08:50 d0nn1e joined #gluster
08:50 suliba joined #gluster
08:54 k4n0 joined #gluster
08:59 fsimonce joined #gluster
09:08 panina joined #gluster
09:10 panina joined #gluster
09:12 karthik_us joined #gluster
09:14 haomaiwang joined #gluster
09:20 suliba joined #gluster
09:23 prth joined #gluster
09:31 derjohn_mob joined #gluster
09:32 suliba joined #gluster
09:42 suliba joined #gluster
09:45 kramdoss_ joined #gluster
09:48 Philambdo joined #gluster
09:50 kraynor5b joined #gluster
09:53 prth joined #gluster
09:55 hackman joined #gluster
09:57 flying joined #gluster
09:58 pank_ joined #gluster
09:58 pank_ hi
09:58 glusterbot pank_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:59 pank_ After enabling quota, mounted data are not showing on the client side. We have 8 volumes of varying sizes. We are enabling quota on 3 volumes of 4TB each. We have installed GlusterFS community version 3.7.16 on AWS. But once we mount GlusterFS volumes on the client side with quota enabled, we are not able to get consistent data. Sometimes it shows and sometimes it doesn't. If shown, then we are not able to create or write on th
10:00 lanning joined #gluster
10:01 pank_ We keep monitoring the directory-based quota limits and all are under control, but still we are facing this issue. Some strange behavior regarding file system attributes: we see some duplicate quota attributes attached to our files under the same volume which has quota enabled. If we clear those particular attributes with the "setfattr -x" command then the data becomes consistent. But we can't clear thousands of files manually on both glusterfs serv
10:01 pank_ Please help us to resolve this issue, or tell us if we have to do some explicit configuration for the same. We are following the GlusterFS documentation for configuration.
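For readers following along: directory-level quota is enabled and limited with the quota subcommands below. The volume and directory names here are placeholders, not the ones from pank_'s setup.

    # enable quota accounting on a volume
    gluster volume quota VOL01 enable
    # cap a directory (path is relative to the volume root)
    gluster volume quota VOL01 limit-usage /projects 1TB
    # show configured limits and current usage
    gluster volume quota VOL01 list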
10:04 Saravanakmr joined #gluster
10:08 suliba joined #gluster
10:08 Muthu joined #gluster
10:11 Muthu joined #gluster
10:13 pank_ help please?
10:14 haomaiwang joined #gluster
10:18 ankitraj joined #gluster
10:18 panina joined #gluster
10:22 sanoj pank_ 'gluster volume quota <volname> disable' will remove all quota xattrs, if that is what u want
10:22 jtux joined #gluster
10:23 sanoj pank_ what exactly is the issue when u say "not able to get consistent data"?
10:23 pank_ @sanoj we have already tried this but still no change
10:25 pank_ @sanoj this means that on the client side we are facing read/write issues on files. Also some dirs and files are not showing. After clearing the quota and gfid attributes we are able to get the data on the client side
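The manual cleanup pank_ describes looks roughly like this on a brick. The brick path and file name are hypothetical; the xattr names under trusted.glusterfs.quota.* are the ones quota maintains.

    # inspect all trusted.* xattrs on a file, hex-encoded (run on the brick, not the mount)
    getfattr -d -m . -e hex /bricks/vol09/brick/path/to/file
    # strip one stale quota xattr by exact name
    setfattr -x trusted.glusterfs.quota.size /bricks/vol09/brick/path/to/file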
10:26 deniszh joined #gluster
10:27 msvbhat joined #gluster
10:28 sanoj pank_ I don't think the issue is with the quota attributes; quota wouldn't change file visibility. I am not sure what clearing the gfid attribs will do
10:31 pank_ @sanoj clearing the gfid attributes of those files which are not shown on the client side
10:31 rafi1 joined #gluster
10:32 sanoj magrawal any idea on ^^
10:32 pank_ again, enabling quota on those volumes which are very large in size leads to an inconsistent state
10:32 rafi joined #gluster
10:34 sanoj pank_ if files not being visible is what u mean, then quota does not cause this; it should relate to DHT. Quota only prevents writes to files
10:34 magrawal sanoj, need to check mount logs at the time when it is not showing data on mount point
10:35 rafi1 joined #gluster
10:35 sanoj pank_ ^^
10:37 pank_ @sanoj will try to send logs. Also, would you please tell me what DHT is, and how to fix this issue if it is related to DHT?
10:39 apandey joined #gluster
10:39 sanoj pank_ DHT is the distributed hash table (the component that distributes files across bricks). Can u please send a mail with the logs to the gluster-devel mailing list
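DHT stores each directory's hash-range layout in an xattr on every brick, and a missing or inconsistent layout is one way files can vanish from the mount. A quick way to inspect it (brick path is a placeholder):

    # dump the DHT layout xattr for a directory, per brick
    getfattr -n trusted.glusterfs.dht -e hex /bricks/vol09/brick/some/dir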
10:41 itisravi joined #gluster
10:42 rafi1 joined #gluster
10:42 mhulsman joined #gluster
10:43 sanoj susant, do u know a workaround for the above
10:47 pank_ @sanoj [2016-12-05 10:22:08.447007] W [MSGID: 114031] [client-rpc-fops.c:2981:client3_3_lookup_cbk] 0-VOL09-client-1: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]
10:47 pank_ @sanoj my logs are filled with the above lines
10:47 glusterbot pank_: You've given me 5 invalid commands within the last 60 seconds; I'm now ignoring you for 10 minutes.
10:49 anoopcs pank_, Please try to avoid using '@' to reply to somebody as it is configured as a special character to address the bot.
10:49 mahendratech joined #gluster
10:50 sanoj pank_, Is this message from before or after removing the gfid
10:52 poornima_ joined #gluster
10:53 pank_ sanoj this is before
10:53 pank_ removing gfid
10:53 pank_ and after removing gfid we are able to see our files
11:14 haomaiwang joined #gluster
11:16 Saravanakmr joined #gluster
11:16 poornima_ joined #gluster
11:17 msvbhat joined #gluster
11:19 pank_ joined #gluster
11:19 telius joined #gluster
11:23 ahino joined #gluster
11:29 kshlm #Announcement Weekly Community Meeting starts in 30 minutes in #gluster-meeting. Add your topics for discussion and updates to https://bit.ly/gluster-community-meetings (Current topic count: 0)
11:29 glusterbot Title: Gluster Community Meeting - HackMD (at bit.ly)
11:34 haomaiwang joined #gluster
11:40 rafi1 joined #gluster
11:46 d-fence joined #gluster
11:46 d-fence_ joined #gluster
11:50 mhulsman joined #gluster
11:53 ppai joined #gluster
11:56 B21956 joined #gluster
12:07 jiffin joined #gluster
12:14 prth joined #gluster
12:23 larsemil joined #gluster
12:25 vbellur joined #gluster
12:26 panina joined #gluster
12:27 devyani7 joined #gluster
12:32 kdhananjay joined #gluster
12:43 devyani7 joined #gluster
12:52 ashka joined #gluster
12:52 ashka joined #gluster
12:52 rafi joined #gluster
12:55 larsemil hi. we have quickly tried gluster and are impressed. but i have some questions.
12:56 larsemil the size of the volume is determined by the smallest folder attached to the gluster, right?
13:00 johnmilton joined #gluster
13:01 DV joined #gluster
13:09 Prasad_ joined #gluster
13:15 DV joined #gluster
13:23 msvbhat joined #gluster
13:24 haomaiwang joined #gluster
13:33 unclemarc joined #gluster
13:45 mahendratech joined #gluster
13:46 nbalacha joined #gluster
13:53 nishanth joined #gluster
13:56 vbellur joined #gluster
14:14 haomaiwang joined #gluster
14:20 shyam joined #gluster
14:20 msvbhat joined #gluster
14:24 riyas joined #gluster
14:25 skylar joined #gluster
14:28 pulli joined #gluster
14:37 ivan_rossi larsemil: not necessarily. if you replicate, every brick in a replica set should be the same size. In a distributed configuration you are adding up the space, although the maximum file size is limited by the size of the smallest brick (to a first approximation).
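The two layouts ivan_rossi contrasts are chosen at volume-create time. A minimal sketch with placeholder hosts and brick paths:

    # distributed: capacities add up; each file lands on exactly one brick
    gluster volume create distvol server1:/bricks/b1 server2:/bricks/b2
    # replicated: usable size is that of the smallest brick in the replica set
    gluster volume create replvol replica 2 server1:/bricks/b1 server2:/bricks/b2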
14:38 nishanth joined #gluster
14:42 Saravanakmr joined #gluster
14:43 Manikandan joined #gluster
14:44 mhulsman1 joined #gluster
14:48 squizzi joined #gluster
14:50 mhulsman joined #gluster
14:59 kramdoss_ joined #gluster
15:00 arc0 joined #gluster
15:04 ankitraj joined #gluster
15:08 annettec joined #gluster
15:11 * Utoxin flops into his chair after a few hours of sleep.
15:11 Utoxin Gluster upgrade took a bit longer than planned last night. ;)
15:11 Utoxin But it appears to be functional, with just a few odd quirks I need to iron out soon.
15:13 derjohn_mob joined #gluster
15:13 Utoxin Namely, why rebalance claims a node is offline when every other command shows it as online and healthy.
15:14 haomaiwang joined #gluster
15:14 Utoxin If I remove and re-add the same brick, will it be smart enough to realize that it's got the (mostly) correct data and needs minimal healing?
15:15 Utoxin (Distributed-replicated volume, 3x3)
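Whichever way the brick comes back, the self-heal view shows how much actually needs syncing. Volume name as in Utoxin's setup:

    # list entries still pending heal, per brick
    gluster volume heal xvdb1 info
    # summary of heal activity
    gluster volume heal xvdb1 statistics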
15:16 ira joined #gluster
15:17 nbalacha Utoxin, regarding rebalance - anything useful in the logs?
15:17 Utoxin Let me try and determine that. Still waiting for the caffeine to kick in.
15:21 Utoxin So far, all I can find is: [2016-12-07 15:10:48.497860] E [MSGID: 106301] [glusterd-op-sm.c:4275:glusterd_op_ac_send_stage_op] 0-management: Staging of operation 'Volume Rebalance' failed on localhost : Rebalance not started.
15:21 Utoxin That's on the server I initiated the request from. Going to go look at the logs on the server that said it was offline.
15:22 Utoxin [2016-12-07 15:10:53.589515] W [MSGID: 106222] [glusterd-rebalance.c:768:glusterd_op_stage_rebalance] 0-management: Missing rebalance-id
15:23 Utoxin Not quite the same timestamp... but that's the only thing that's close.
15:24 bluenemo joined #gluster
15:25 nbalacha Utoxin, seems like a glusterd issue
15:26 atinm GlusterD issue for what?
15:27 atinm nbalacha, ^
15:27 Utoxin I'm having an issue getting rebalance to run.
15:27 Utoxin It claims one of the nodes is offline, even though it's up and running fine.
15:27 Utoxin (All other commands report it as healthy.)
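The usual cross-checks for "is this peer really up" are:

    gluster peer status          # state should be 'Peer in Cluster (Connected)'
    gluster pool list            # compact view of all peers and their state
    gluster volume status xvdb1  # per-brick online/offline and PIDs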
15:28 atinm what does rebalance CLI command throw back?
15:28 atinm Utoxin, `Staging of operation 'Volume Rebalance' failed on localhost : Rebalance not started.` indicates that you triggered a rebalance status command
15:29 Utoxin gluster> volume rebalance xvdb1 start
15:29 Utoxin volume rebalance: xvdb1: failed: Host node vulpes.intranet.igraystone.com of brick /gluster/xvdb1/brick is down
15:29 atinm gluster peer status output on the same node?
15:29 atinm which gluster version?
15:30 Utoxin v3.8.6
15:30 Utoxin Status pastebin incoming momentarily
15:30 Utoxin http://pastebin.com/p5iBVrSA
15:30 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:31 Utoxin *rolls his eyes*
15:31 atinm Utoxin, let me know if this looks similar to https://bugzilla.redhat.com/show_bug.cgi?id=1402172
15:31 glusterbot Bug 1402172: unspecified, unspecified, ---, bugs, POST , Peer unexpectedly disconnected
15:31 Utoxin https://paste.fedoraproject.org/501074/24701148/
15:31 glusterbot Title: #501074 • Fedora Project Pastebin (at paste.fedoraproject.org)
15:32 Utoxin I don't think that's similar... the peer is showing as online in status lists.
15:32 atinm Utoxin, yes, I see that now
15:32 atinm Utoxin, this is strange
15:33 Utoxin Recent work involved upgrading from 3.5.x to 3.8.6, and then adding a new set of 3 bricks to the volume.
15:33 atinm Utoxin, can you get me the glusterd log snippet from vulpes.intranet.igraystone.com ?
15:33 Slashman joined #gluster
15:34 Utoxin Nothing in glustershd.log for that timeframe... let me see if I get something in any of these when I run the command again.
15:35 Utoxin [2016-12-07 15:35:02.680989] W [MSGID: 106222] [glusterd-rebalance.c:768:glusterd_op_stage_rebalance] 0-management: Missing rebalance-id
15:35 atinm Utoxin, the log file name should start with etc-glusterfs, that's the glusterd log
15:35 Utoxin That's the only log activity when I run the command.
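On a stock 3.x install the glusterd log lives under /var/log/glusterfs and is named after the vol file, e.g.:

    tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log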
15:40 atinm Utoxin, you there?
15:40 Utoxin Yes.
15:40 atinm Utoxin, did you get the glusterd log file?
15:40 Utoxin [2016-12-07 15:35:02.680989] W [MSGID: 106222] [glusterd-rebalance.c:768:glusterd_op_stage_rebalance] 0-management: Missing rebalance-id
15:41 Utoxin That's the only activity that occurs when I request the rebalance. Did you need more than that?
15:42 unclemarc joined #gluster
15:42 sbulage joined #gluster
15:42 msvbhat joined #gluster
15:42 Utoxin atinm: Is that what you needed?
15:44 atinm Utoxin, no, there should be more logs
15:46 Utoxin In general, or just from this request? (Going to be a little slow. Work meeting.)
15:46 buvanesh_kumar joined #gluster
15:46 farhorizon joined #gluster
15:48 atinm Utoxin, no issues, I have found the code from where it's erroring out
15:48 Utoxin Okay. If you need it, I can try and dig up the startup section of the logs. It'll just take a bit. Have to at least pretend to pay attention. ;)
15:49 Asako joined #gluster
15:49 atinm Utoxin, as a workaround can you restart glusterd on that node and see if the problem disappears?
15:50 Utoxin Sure. One minute.
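Restarting glusterd only bounces the management daemon; bricks and client mounts keep running. On a systemd-based box:

    systemctl restart glusterd
    systemctl status glusterd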
15:50 Asako Good morning.  Is gluster appropriate for a globally distributed file system?  We have plants all over the world and I'd like data to be stored locally for performance reasons.
15:50 Utoxin atinm: No change. Grabbing startup logs.
15:51 Asako I'm seeing some pretty slow performance with a server located a few states away.  :D
15:52 atinm Utoxin, the strange part is that the flag which peer status and this check work on is the same; peer status says everything is up, yet rebalance fails
15:52 Utoxin atinm: https://paste.fedoraproject.org/501082/11259261/
15:52 glusterbot Title: #501082 • Fedora Project Pastebin (at paste.fedoraproject.org)
15:52 Utoxin That's the full logs from startup until I get the error again.
15:53 Asako would a client benefit from having bricks located locally?
15:53 jtux joined #gluster
15:54 Micha2k joined #gluster
15:55 Micha2k Hi all. I have a strange problem... glusterfs is writing everything to the ".1" logfiles, i.e. all logs are written to nfs.log.1, nfs.log exists but is not written to. How can that happen?
15:55 Micha2k Also, the logs are not rotated
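That symptom usually means something renamed the log while the daemon kept its old file descriptor, so writes follow the inode into the .1 file. Assuming logrotate manages these logs on Micha2k's box (an assumption, not confirmed in the channel), copytruncate sidesteps this by truncating in place instead of renaming:

    # illustrative /etc/logrotate.d/glusterfs stanza
    /var/log/glusterfs/*.log {
        weekly
        rotate 4
        copytruncate
        missingok
        notifempty
    }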
15:56 nbalacha Utoxin, any change if you try starting rebalance from another node?
15:57 Utoxin nbalacha: Same error.
15:57 Utoxin Logs from the original request node, from the time I restarted vulpes until the request: https://paste.fedoraproject.org/501083/81126185/
15:57 glusterbot Title: #501083 • Fedora Project Pastebin (at paste.fedoraproject.org)
15:58 atinm Utoxin, Ok, so here is what I suggest
15:58 atinm Utoxin, take a statedump of glusterd using kill -SIGUSR1 $(pidof glusterd); you will find a gluster dump file at /var/run/gluster. Share it on the gluster-users ML, I have to leave shortly
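atinm's statedump recipe, spelled out (paths are the 3.x defaults):

    # ask glusterd to dump its internal state
    kill -SIGUSR1 $(pidof glusterd)
    # the dump lands in /var/run/gluster; newest file first
    ls -lt /var/run/gluster/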
15:59 atinm Utoxin, it looks like there is an issue in the peerinfo entries
15:59 atinm Utoxin, have you bumped up cluster op-version?
15:59 Utoxin I... probably haven't. 3.5 didn't seem to even have it, so I'm not sure what it would be now.
15:59 atinm Utoxin, are all the nodes running with 3.8.6?
15:59 Utoxin Yes, all nodes are on 3.8.6
16:00 Utoxin (Some clients are on a 3.7.x version, I think, but most are 3.8.6)
16:00 atinm Utoxin, can you run 'gluster v set <volname> cluster.op-version 30800' and then retry the rebalance?
16:01 skylar1 joined #gluster
16:01 Utoxin volume set: failed: Option "cluster.op-version" is not valid for a single volume
16:01 atinm Utoxin, sorry, use 'all'
16:02 Utoxin Succeeded, but rebalance still fails.
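The op-version bump, plus a way to verify it stuck (glusterd.info is where glusterd records the cluster's operating version):

    gluster volume set all cluster.op-version 30800
    grep operating-version /var/lib/glusterd/glusterd.info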
16:02 atinm Utoxin, ok, how about restarting glusterd on all the nodes and bringing them back one by one?
16:02 Utoxin If I try that, it'll need to be tonight.
16:02 atinm Utoxin, if that doesn't work, then take a statedump from at least three nodes and share
16:03 Utoxin Alright. I'll collect more data today and tonight, and post to the mailing list or come back here.
16:05 atinm Utoxin, sure, thanks!
16:05 dnorman joined #gluster
16:05 Karan joined #gluster
16:05 dnorman_ joined #gluster
16:07 farhorizon joined #gluster
16:08 dnorman joined #gluster
16:10 dnorman__ joined #gluster
16:14 haomaiwang joined #gluster
16:19 wushudoin joined #gluster
16:19 msvbhat joined #gluster
16:27 Muthu joined #gluster
16:28 susant joined #gluster
16:34 RameshN joined #gluster
16:54 skoduri joined #gluster
16:57 B21956 joined #gluster
17:15 Utoxin I made unexpected progress on my rebalance issue. It's now just complaining about a <3.6.0 client being connected. Is there an easy way to tell what IP that's coming from?
17:17 haomaiwang joined #gluster
17:26 hchiramm joined #gluster
17:29 rafi joined #gluster
17:38 ivan_rossi left #gluster
17:40 rafi joined #gluster
17:46 Micha2k joined #gluster
17:51 MidlandTroy joined #gluster
17:57 JoeJulian Utoxin: Unfortunately, no. Perhaps the glusterd log has something to show which client it is? "gluster volume status $vol clients" can at least get you a list of clients.
17:58 vbellur joined #gluster
17:58 rafi joined #gluster
17:58 Utoxin JoeJulian: Yeah. I just went through all my servers and upgraded them as much as their repos would allow. Didn't see any likely suspects. But now I'm back to the earlier error. ;)
18:00 sage__ joined #gluster
18:01 Utoxin I /love/ having strange and mysterious errors that make people in IRC go: That doesn't make sense.
18:01 Utoxin >.>
18:02 JoeJulian gluster --xml volume status iso clients | grep 'hostname.*:' | sed 's/.*<hostname>\([^:]\+\):.*/\1/'| sort -u | xargs -I{} ssh {} gluster --version
18:02 larsemil JoeJulian: a small little oneliner. ;)
18:03 JoeJulian Oops, meant to change iso to all.
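JoeJulian's one-liner, reformatted for readability with the "all" fix applied. It pulls client hostnames out of the XML status output, dedupes them, and ssh'es to each to report its gluster version; it assumes passwordless ssh to every client.

    gluster --xml volume status all clients \
      | grep 'hostname.*:' \
      | sed 's/.*<hostname>\([^:]\+\):.*/\1/' \
      | sort -u \
      | xargs -I{} ssh {} gluster --version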
18:03 skylar joined #gluster
18:07 riyas joined #gluster
18:08 primusinterpares joined #gluster
18:14 uebera|| joined #gluster
18:14 uebera|| joined #gluster
18:14 haomaiwang joined #gluster
18:22 Karan joined #gluster
18:52 hchiramm joined #gluster
19:03 prth joined #gluster
19:15 haomaiwang joined #gluster
19:15 ashiq joined #gluster
19:31 hchiramm joined #gluster
19:54 timotheus1 joined #gluster
19:55 dnorman joined #gluster
20:02 hackman joined #gluster
20:09 Wizek joined #gluster
20:14 haomaiwang joined #gluster
20:50 haomaiwang joined #gluster
21:02 farhoriz_ joined #gluster
21:14 haomaiwang joined #gluster
21:24 Utoxin AHHA. Got rebalance to run.
21:25 bwerthmann joined #gluster
21:25 Utoxin Not sure what exactly fixed it, but my most recent change was fixing a hostname in a peer file on the node that was being reported offline. It had just the IP before, and I changed it so the hostname was listed first.
21:26 Utoxin I'm guessing this is going to take a LONG time to finish, with 750 gig bricks.
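For reference, the peer files Utoxin edited live in /var/lib/glusterd/peers/, one per peer UUID. Illustrative contents; the UUID and addresses below are placeholders:

    uuid=9f2c1c34-0000-0000-0000-000000000000
    state=3
    hostname1=vulpes.intranet.igraystone.com
    hostname2=10.0.0.12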
21:30 bwerthmann joined #gluster
21:31 ankitraj joined #gluster
21:35 garamelek joined #gluster
21:37 bwerthmann joined #gluster
21:37 johnmilton joined #gluster
21:39 farhorizon joined #gluster
21:45 mhulsman joined #gluster
21:46 mhulsman1 joined #gluster
21:47 bwerthmann joined #gluster
21:50 d0nn1e joined #gluster
21:51 mhulsman joined #gluster
21:58 bwerthmann joined #gluster
22:03 BuBU291 joined #gluster
22:03 bwerthmann joined #gluster
22:10 farhorizon joined #gluster
22:11 timotheus1 joined #gluster
22:14 bwerthmann joined #gluster
22:14 haomaiwang joined #gluster
22:19 bwerthmann joined #gluster
22:24 bwerthmann joined #gluster
22:25 raghu joined #gluster
22:29 garamelek joined #gluster
22:29 bwerthmann joined #gluster
22:34 bwerthmann joined #gluster
22:39 bwerthmann joined #gluster
22:45 bwerthmann joined #gluster
22:50 bwerthmann joined #gluster
22:54 edong23 joined #gluster
22:55 bwerthmann joined #gluster
23:00 bwerthmann joined #gluster
23:05 bwerthmann joined #gluster
23:10 bwerthmann joined #gluster
23:14 haomaiwang joined #gluster
23:15 bwerthmann joined #gluster
23:20 bwerthmann joined #gluster
23:25 bwerthmann joined #gluster
23:26 Klas joined #gluster
23:30 bwerthmann joined #gluster
23:36 bwerthmann joined #gluster
23:41 bwerthmann joined #gluster
23:46 bwerthmann joined #gluster
23:51 bwerthmann joined #gluster
23:56 timotheus1_ joined #gluster
23:57 bwerthmann joined #gluster
