
IRC log for #gluster, 2015-07-30


All times shown according to UTC.

Time Nick Message
00:02 haomaiwa_ joined #gluster
00:21 elico joined #gluster
00:24 calavera joined #gluster
00:35 gildub joined #gluster
00:53 nsoffer joined #gluster
01:01 kevinquinnyo joined #gluster
01:02 haomaiwa_ joined #gluster
02:02 haomaiwa_ joined #gluster
02:13 harish_ joined #gluster
02:20 victori joined #gluster
02:34 kdhananjay joined #gluster
02:47 maveric_amitc_ joined #gluster
02:58 nangthang joined #gluster
03:01 haomaiwa_ joined #gluster
03:05 jobewan joined #gluster
03:25 aravindavk joined #gluster
03:32 shubhendu joined #gluster
03:45 phoenixstew joined #gluster
03:46 phoenixstew was just reading the troubleshooting nfs page and my solution is to add option rpc-auth.addr.namelookup off to the NFS server. Not quite sure where to do that?
03:47 nishanth joined #gluster
03:51 sc0001 joined #gluster
03:52 atinm joined #gluster
03:56 kaushal_ joined #gluster
03:59 phoenixstew never mind, found it using volume options :0
03:59 phoenixstew :)*
04:02 18VAADJGC joined #gluster
04:03 TheSeven joined #gluster
04:04 ppai joined #gluster
04:07 RameshN joined #gluster
04:11 kovshenin joined #gluster
04:13 gem_ joined #gluster
04:17 calisto joined #gluster
04:19 coredump joined #gluster
04:21 sakshi joined #gluster
04:30 jwd joined #gluster
04:33 jwaibel joined #gluster
04:37 RameshN joined #gluster
04:38 DV_ joined #gluster
04:40 rafi joined #gluster
04:44 rafi joined #gluster
04:46 ndarshan joined #gluster
04:48 deepakcs joined #gluster
04:50 cabillman joined #gluster
04:51 wushudoin| joined #gluster
04:53 kotreshhr joined #gluster
05:01 nbalacha joined #gluster
05:02 haomaiwa_ joined #gluster
05:05 jiffin joined #gluster
05:08 nbalacha joined #gluster
05:09 rjoseph joined #gluster
05:11 hgowtham joined #gluster
05:11 Eric_HOU joined #gluster
05:14 nangthang joined #gluster
05:15 sahina joined #gluster
05:16 dusmant joined #gluster
05:16 anrao joined #gluster
05:17 vimal joined #gluster
05:30 jwd joined #gluster
05:31 raghu joined #gluster
05:33 vmallika joined #gluster
05:36 arcolife joined #gluster
05:38 pppp joined #gluster
05:39 bharata-rao joined #gluster
05:39 hagarth joined #gluster
05:40 sripathi1 joined #gluster
05:42 kshlm joined #gluster
05:42 kanagaraj joined #gluster
05:44 ashiq joined #gluster
05:44 Manikandan joined #gluster
05:51 kdhananjay joined #gluster
05:56 kotreshhr joined #gluster
05:58 RedW joined #gluster
06:00 sripathi2 joined #gluster
06:02 haomaiwa_ joined #gluster
06:03 Bhaskarakiran joined #gluster
06:05 jtux joined #gluster
06:05 ppai joined #gluster
06:06 kayn_ joined #gluster
06:08 overclk joined #gluster
06:09 Eric_HOU left #gluster
06:09 KennethDejonghe Have any of you guys experienced network issues while using gluster? My node seems to be killing its network service after x time
06:09 yazhini joined #gluster
06:10 jwd joined #gluster
06:16 dusmant joined #gluster
06:22 anil_ joined #gluster
06:28 ramky joined #gluster
06:34 fabio joined #gluster
06:35 fabio joined #gluster
06:39 Saravana_ joined #gluster
06:45 vmallika joined #gluster
06:45 sakshi joined #gluster
06:46 Manikandan joined #gluster
06:49 nbalacha joined #gluster
06:51 sc0001 joined #gluster
06:53 gildub joined #gluster
06:55 maveric_amitc_ joined #gluster
06:59 [Enrico] joined #gluster
06:59 Philambdo joined #gluster
07:02 haomaiwa_ joined #gluster
07:14 dusmant joined #gluster
07:14 sakshi joined #gluster
07:17 Lee1092 joined #gluster
07:24 ramky joined #gluster
07:25 SOLDIERz joined #gluster
07:33 meghanam joined #gluster
07:41 arcolife joined #gluster
07:46 pranithk joined #gluster
07:47 jbautista- joined #gluster
07:51 arcolife joined #gluster
07:51 dusmant joined #gluster
07:51 arcolife joined #gluster
07:52 ppai joined #gluster
07:52 arcolife joined #gluster
07:52 jbautista- joined #gluster
07:54 ctria joined #gluster
07:54 deniszh joined #gluster
08:02 haomaiwa_ joined #gluster
08:06 arcolife joined #gluster
08:07 arcolife joined #gluster
08:12 ajames-41678 joined #gluster
08:19 ppai joined #gluster
08:26 Manikandan joined #gluster
08:29 vmallika joined #gluster
08:34 pranithk nbalacha: did any patches go in dht which could create linkto files with uid/gid other than the uid/gid of the actual file?
08:34 nbalacha pranithk, no
08:34 pranithk meghanam: ^^
08:35 pranithk nbalacha: Could you see the mail with subject "Bareos backup from Gluster mount" on gluster-users mailing list?
08:35 pranithk nbalacha: On brick-2: -r--r--r--. 2       602       602   869939200 Apr  3 07:36 scan_90.tar.bak, on brick-1: ---------T  2 nfsnobody nfsnobody    0 Jul 13 11:42 scan_90.tar.bak
08:35 glusterbot pranithk: -r--r--r's karma is now -2
08:35 glusterbot pranithk: -------'s karma is now -6
08:35 glusterbot pranithk: brick-1's karma is now -1
08:36 pranithk glusterbot: you really need to learn about linux file permissions :-)
08:36 nbalacha pranithk, does the user exist on both bricks?
08:36 pranithk nbalacha: I don't think such a user exists; that is why it is showing the uid instead of the user-name?
08:37 nbalacha pranithk, possible
08:39 anil_ joined #gluster
08:40 pranithk nbalacha: Apparently they didn't have this issue in 3.7.1. It started from 3.7.2
08:40 pranithk nbalacha: "I've been seeing this at least since Gluster version 3.7.2, which I  updated to owing to a need to expand my backend storage (and 3.7.1,  which worked fine) had a bug that broke bricks while rebalancing."
08:40 pranithk nbalacha: that's what the user said in the mail
08:41 nbalacha pranithk, we put in a fix to fix an uid/gid issue in dht linkto files
08:41 nbalacha one sec
08:42 gildub joined #gluster
08:42 nbalacha pranithk, http://review.gluster.org/9998
08:42 glusterbot Title: Gerrit Code Review (at review.gluster.org)
08:45 pranithk nbalacha: seems like gfid related change. Not uid/gid...
08:46 nbalacha pranithk, how is it a gfid related change?
08:47 dusmant joined #gluster
08:48 pranithk nbalacha: sorry read it wrong :-)
08:50 pranithk nbalacha: this is fixed on 3.7.0....
08:50 pranithk nbalacha: user says something changed from 3.7.1 to 3.7.2 not sure how accurate he is...
08:52 nbalacha pranithk, any steps to reproduce it?
08:52 pranithk nbalacha: Hmm.. not sure. He runs a backup application. Apart from that even I don't know :-(. Maybe you can ask him on the thread? 2 people are facing the same issue as per the thread
08:53 nbalacha pranithk, will do
08:54 pranithk nbalacha: thanks nbalacha++
08:54 glusterbot pranithk: nbalacha's karma is now 1
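[The linkto files discussed above can be spotted on a brick by their signature: zero length with only the sticky bit set (`---------T`). A sketch assuming GNU find; the brick path is a placeholder:]

```shell
#!/bin/sh
# List DHT linkto-file candidates on a brick with their numeric uid:gid,
# so ownership mismatches (e.g. nfsnobody vs 602 above) stand out.
# Linkto files are zero-length with only the sticky bit set (---------T);
# confirm real ones via the trusted.glusterfs.dht.linkto xattr.
list_linkto_owners() {
    find "$1" -type f -size 0 -perm 1000 -printf '%U:%G %p\n'
}

# /export/brick1 is a placeholder brick path.
dir=${1:-/export/brick1}
if [ -d "$dir" ]; then
    list_linkto_owners "$dir"
fi
```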
08:57 ppai joined #gluster
08:59 LebedevRI joined #gluster
09:01 ajames-41678 joined #gluster
09:02 haomaiwa_ joined #gluster
09:03 prabu joined #gluster
09:06 dusmant joined #gluster
09:10 s19n joined #gluster
09:13 the-me joined #gluster
09:25 pkoro joined #gluster
09:31 meghanam joined #gluster
09:31 kovshenin joined #gluster
09:35 kovshenin left #gluster
09:40 harish_ joined #gluster
09:41 ajames41678 joined #gluster
09:42 maveric_amitc_ joined #gluster
09:45 yazhini joined #gluster
09:46 skoduri joined #gluster
09:48 shubhendu joined #gluster
09:48 meghanam joined #gluster
09:49 ndarshan joined #gluster
09:57 arcolife joined #gluster
10:00 ajames-41678 joined #gluster
10:01 haomaiwa_ joined #gluster
10:04 ira joined #gluster
10:08 kevein joined #gluster
10:09 crashmag joined #gluster
10:11 jwd joined #gluster
10:17 * ndevos is still waiting for glusterbot to BOOM! on the resolved bugs...
10:20 nbalacha joined #gluster
10:23 kbyrne joined #gluster
10:25 kkeithley1 joined #gluster
10:36 Slashman joined #gluster
10:36 arcolife joined #gluster
10:44 kotreshhr joined #gluster
10:46 Saravana_ joined #gluster
10:47 cabillman i'm getting a stale file when running ls/rsync against two files in a volume. I see the following in the logs at the time of the stat
10:47 cabillman gfid different on data file on oc_data-replicate-0, gfid local = 00000000-0000-0000-0000-000000000000, gfid node = abcb635e-bfd2-489e-b8ef-e8abf03ae3f5
10:47 cabillman gfid differs on subvolume oc_data-replicate-0, gfid local = 8fbc8700-7f8e-4cbc-afd7-8d38c23b4486, gfid node = abcb635e-bfd2-489e-b8ef-e8abf03ae3f5
10:48 Manikandan joined #gluster
10:48 cabillman this volume has a replica count of 2, and looking at the trusted.gfid attribute on both bricks shows the same gfid: abcb...
10:49 cabillman any ideas on how to resolve this?
10:49 overclk joined #gluster
10:57 prabu joined #gluster
11:03 haomaiwang joined #gluster
11:03 nisroc joined #gluster
11:05 vmallika joined #gluster
11:15 renm joined #gluster
11:15 KennethDejonghe joined #gluster
11:19 KennethDejonghe_ joined #gluster
11:20 ndevos glusterbot: why dont you post resolved bugs anymore?
11:21 ndevos cabillman: maybe the directory where the file is located has a different gfid?
11:21 elico joined #gluster
11:21 arcolife joined #gluster
11:23 sahina joined #gluster
11:28 Philambdo joined #gluster
11:28 cabillman ndevos: if i check the gfid of the parent directory on all bricks it matches
11:28 cabillman ndevos: but should the other attributes be different
11:29 cabillman since it is a directory it exists on all bricks. but doing the getfattr returns different results depending on the replica set
11:29 ndevos cabillman: I'm not sure which attributes can/should be different, sorry
11:30 cabillman ndevos: no problem - thanks for the tip
11:30 cabillman ndevos: do you know if I need to check the entire tree up to the root or just the directory the file lives in? the folder contains a bunch of files and I only get the error on one
11:34 ndevos cabillman: only the parent directory would do, at least, for the normal cases, not sure whats wrong with that one file :-/
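[The gfid comparison cabillman and ndevos describe can be done on each brick with getfattr. A hedged sketch; brick and file paths are placeholders for the real brick roots:]

```shell
# Compare the trusted.gfid xattr of a file (and its parent directory)
# across the replica bricks; run as root on each brick server.
getfattr -n trusted.gfid -e hex /export/brick1/path/to/file
getfattr -n trusted.gfid -e hex /export/brick2/path/to/file
getfattr -n trusted.gfid -e hex /export/brick1/path/to
getfattr -n trusted.gfid -e hex /export/brick2/path/to
```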
11:37 nbalacha cabillman, can you post the complete log message? It should include the filename and function name.
11:38 cabillman nbalacha: sure. in the chat or somewhere else?
11:38 shubhendu joined #gluster
11:38 dusmant joined #gluster
11:38 nbalacha cabillman, chat is fine
11:39 ndarshan joined #gluster
11:39 cabillman [2015-07-30 10:23:19.698987] W [MSGID: 109009] [dht-common.c:1690:dht_lookup_linkfile_cbk] 0-oc_data-dht: /data/97702A4A-70D1-4162-8ACA-7A11492019F0/files/Dourdis SLO data spreadsheet.xls: gfid different on data file on oc_data-replicate-0, gfid local = 00000000-0000-0000-0000-000000000000, gfid node = abcb635e-bfd2-489e-b8ef-e8abf03ae3f5
11:39 cabillman [2015-07-30 10:23:19.700006] W [MSGID: 109009] [dht-common.c:1435:dht_lookup_everywhere_cbk] 0-oc_data-dht: /data/97702A4A-70D1-4162-8ACA-7A11492019F0/files/Dourdis SLO data spreadsheet.xls: gfid differs on subvolume oc_data-replicate-0, gfid local = 8fbc8700-7f8e-4cbc-afd7-8d38c23b4486, gfid node = abcb635e-bfd2-489e-b8ef-e8abf03ae3f5
11:40 cabillman [2015-07-30 10:23:19.700044] W [fuse-bridge.c:483:fuse_entry_cbk] 0-glusterfs-fuse: 320: LOOKUP() /data/97702A4A-70D1-4162-8ACA-7A11492019F0/files/Dourdis SLO data spreadsheet.xls => -1 (Stale file handle)
11:40 nbalacha cabillman, how many replica sets does this volume have?
11:41 cabillman nbalacha: 2
11:41 nbalacha cabillman, can you check to see if the file exists on both?
11:41 nbalacha cabillman, this error usually shows up when the file exists on multiple subvolumes
11:42 cabillman nbalacha: it does. the gfid's match, md5sum is equal and the other attributes seem to be the same
11:42 cabillman it is setup like this server1/brick1 server2/brick1 server1/brick2 server2/brick2
11:42 cabillman i have been checking brick1 on server 1+2
11:43 nbalacha can you check on brick2 as well?
11:45 cabillman it does appear to be on both...
11:45 cabillman actually a lot of files are
11:45 kotreshhr joined #gluster
11:46 cabillman i just upgraded us to 3.6 from 3.4
11:46 cabillman could something have gone wrong?
11:47 jiffin1 joined #gluster
11:48 mator joined #gluster
11:51 dijuremo joined #gluster
11:51 nishanth joined #gluster
11:51 Philambdo joined #gluster
11:51 Manikandan joined #gluster
11:52 nbalacha cabillman, can you send me the logs
11:53 nbalacha cabillman, gluster has found a file with the same name but different gfids on 2 subvolumes, hence the error
11:53 nbalacha cabillman, can you find out which is the correct file?
11:53 harish_ joined #gluster
11:55 ajames-41678 joined #gluster
11:56 RameshN joined #gluster
11:57 cabillman nbalacha: which logs would you like? the mnt-oc_data.log? oc_data is the volume with the problem
11:58 nbalacha cabillman, yes, the mnt and the brick logs to start with
11:59 nbalacha cabillman, do you have quota enabled on the volume?
11:59 cabillman nbalacha: no quota - at least I haven't ever set them up
12:00 cabillman nbalacha: what is the best place to send the logs?
12:00 arcolife joined #gluster
12:00 nbalacha cabillman, can you email them to me?
12:00 nbalacha cabillman, or better yet to gluster-users
12:00 Bhaskarakiran joined #gluster
12:02 haomaiwa_ joined #gluster
12:04 jiffin1 joined #gluster
12:04 gem_ joined #gluster
12:06 dijuremo joined #gluster
12:07 lpabon joined #gluster
12:10 jtux joined #gluster
12:16 Mr_Psmith joined #gluster
12:17 arcolife joined #gluster
12:17 gletessier joined #gluster
12:19 arcolife joined #gluster
12:21 arcolife joined #gluster
12:24 arcolife joined #gluster
12:29 kshlm joined #gluster
12:30 prabu joined #gluster
12:30 nsoffer joined #gluster
12:30 rmstar left #gluster
12:32 arcolife joined #gluster
12:32 hagarth joined #gluster
12:34 jrm16020 joined #gluster
12:36 TvL2386 joined #gluster
12:38 nsoffer joined #gluster
12:39 arcolife joined #gluster
12:40 cuqa_ joined #gluster
12:43 overclk joined #gluster
12:49 pkoro joined #gluster
13:02 haomaiwa_ joined #gluster
13:06 mbukatov joined #gluster
13:08 plarsen joined #gluster
13:10 julim joined #gluster
13:10 arcolife joined #gluster
13:11 aaronott joined #gluster
13:13 harold joined #gluster
13:15 dgandhi joined #gluster
13:15 kpabijanskas joined #gluster
13:18 kpabijanskas Hi. Are there any issues with working 3.7.2 and 3.7.3 together? I am trying to upgrade a 2-node gluster to 3.7.3, and after upgrading the first one, it seems that it cannot connect to the second one (which is still 3.7.2). The second one (the 3.7.2 one), however, can still connect to the first one (the 3.7.3 one). I actually see packets being sent from the 3.7.3 one to the other one on port 24007, but there is no reply whatsoever, and
13:18 kpabijanskas nothing in the logs there
13:21 kshlm kpabijanskas, there hasn't been any protocol level change that I'm aware of that could cause this.
13:22 kshlm The no logging part could indicate that the second glusterd has gotten stuck at some point.
13:22 lpabon joined #gluster
13:27 kbyrne joined #gluster
13:28 bennyturns joined #gluster
13:29 spcmastertim joined #gluster
13:29 B21956 joined #gluster
13:39 dusmant joined #gluster
13:46 coredump joined #gluster
13:46 uebera|| joined #gluster
13:59 theron joined #gluster
13:59 DV joined #gluster
14:02 haomaiwa_ joined #gluster
14:05 marbu joined #gluster
14:17 jcastill1 joined #gluster
14:19 cabillman is anyone aware of any tools to find files that exist on two bricks? I have brick1 + brick2 that are part of a distributed volume and it seems some files ended up on both bricks
14:19 cabillman i would like to find out how many files exist on both brick1 + brick2
14:22 jcastillo joined #gluster
14:23 squizzi joined #gluster
14:26 kdhananjay joined #gluster
14:26 mbukatov joined #gluster
14:40 rwheeler joined #gluster
14:48 jobewan joined #gluster
14:49 cleong joined #gluster
14:52 ajames41678 joined #gluster
14:53 nisroc joined #gluster
14:56 _maserati joined #gluster
14:56 auzty joined #gluster
14:57 pkoro joined #gluster
14:57 auzty any ideas why i got 0-glusterfs-fuse: 125: REMOVEXATTR() warning?
14:58 s19n I too have lots of them
14:58 auzty my forever server didn't detect the changes, althoung the file was changed
14:58 auzty (foreverjs)
14:58 auzty *although
15:01 DV__ joined #gluster
15:02 haomaiwa_ joined #gluster
15:07 purpleidea joined #gluster
15:07 theron_ joined #gluster
15:18 nishanth joined #gluster
15:21 auzty and now i got No data available occurred while creating symlinks
15:21 auzty -_-
15:21 auzty any ideas?
15:33 rafi joined #gluster
15:35 tru_tru joined #gluster
15:40 jcastill1 joined #gluster
15:45 jcastillo joined #gluster
15:46 kayn_ joined #gluster
15:51 calavera joined #gluster
15:51 cholcombe joined #gluster
15:58 wushudoin| joined #gluster
15:59 theron joined #gluster
16:02 haomaiwang joined #gluster
16:02 sc0001 joined #gluster
16:03 wushudoin| joined #gluster
16:07 twisted` joined #gluster
16:07 kayn_ joined #gluster
16:14 Lee1092 joined #gluster
16:25 timotheus1 joined #gluster
16:26 frankS2 joined #gluster
16:26 Pintomatic joined #gluster
16:28 firemanxbr joined #gluster
16:28 jcastill1 joined #gluster
16:29 lezo joined #gluster
16:32 JoeJulian cabillman: Nope, there's nothing built for that problem. I suspect the problem existed before the upgrade but there wasn't a check that would resolve the file being on multiple dht subvolumes. If it were me, I would take a sorted ls from each brick and use comm to find common entries.
16:32 JoeJulian auzty: check the client log
16:33 jcastillo joined #gluster
16:34 vicomte joined #gluster
16:34 cleong left #gluster
16:35 cabillman JoeJulian: Thanks. I did something similar with find, sort and uniq -c but I didn't realize it was normal to have so many zero length files
16:36 vicomte Hi, if I am utilizing geo-rep to a slave volume, should my status say all master nodes are Active or just the primary one?
16:41 jmarley joined #gluster
16:44 leucos joined #gluster
16:48 JoeJulian cabillman: yes, you would want to filter out size 0 mode 1000 from your find.
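[JoeJulian's suggestion of sorted per-brick listings fed to comm, with the size-0/mode-1000 linkfile filter he mentions, could look like this. A sketch with placeholder brick paths, not a definitive tool:]

```shell
#!/bin/sh
# Find names present on both bricks of a distribute pair, filtering out
# DHT linkfiles (zero-length, sticky-only mode 1000). The two arguments
# are brick root paths (placeholders).
common_files() {
    t1=$(mktemp)
    t2=$(mktemp)
    (cd "$1" && find . -type f ! \( -size 0 -perm 1000 \) | sort) > "$t1"
    (cd "$2" && find . -type f ! \( -size 0 -perm 1000 \) | sort) > "$t2"
    # comm -12 keeps only lines common to both sorted listings
    comm -12 "$t1" "$t2"
    rm -f "$t1" "$t2"
}

if [ -d "${1:-}" ] && [ -d "${2:-}" ]; then
    common_files "$1" "$2"
fi
```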
16:48 kayn_ joined #gluster
16:48 JoeJulian vicomte: just the master
16:48 Vicomte_ joined #gluster
16:49 Vicomte_ Ok thanks!
17:02 haomaiwa_ joined #gluster
17:08 jbautista- joined #gluster
17:11 rafi joined #gluster
17:12 Vicomte joined #gluster
17:14 jbautista- joined #gluster
17:17 Rapture joined #gluster
17:19 liewegas joined #gluster
17:27 calisto joined #gluster
17:29 sc0001 joined #gluster
17:33 ipmango joined #gluster
17:33 wushudoin| joined #gluster
17:37 marshall joined #gluster
17:37 ashka hi, is there a way to set the umask when mounting a glusterfs volume? I tried mount -o umask=0077 but that does something weird, log: http://paste.awesom.eu/ZSmH&raw
17:42 wushudoin| joined #gluster
17:46 _maserati joined #gluster
17:47 kiwnix joined #gluster
17:48 vimal joined #gluster
17:51 kayn_ joined #gluster
17:52 RedW joined #gluster
17:52 cc1 joined #gluster
17:54 Vicomte joined #gluster
17:55 Vicomte left #gluster
17:56 deniszh joined #gluster
17:57 bennyturns joined #gluster
17:58 kayn_ joined #gluster
17:58 bennyturns joined #gluster
18:02 haomaiwa_ joined #gluster
18:05 JoeJulian ashka: Hmm, that's odd. I don't know that umask will work, but there's nothing in the log that explains why it exits...
18:06 ashka JoeJulian: do you know of another way to only allow uid 0 to read/write to the mounted volume?
18:07 JoeJulian Make the files all owned by root?
18:07 JoeJulian chmod
18:08 anrao joined #gluster
18:09 ashka JoeJulian: oh, I didn't notice that it actually keeps the mountpoint permissions that it had when mounted. that'll work, thanks
18:10 JoeJulian You're welcome
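[Since the fuse mount does not take a umask option, the fix ashka landed on is to set ownership and mode on the volume root after mounting. A hedged sketch; server, volume, and mount point names are placeholders:]

```shell
# Restrict a glusterfs fuse mount to uid 0 by tightening the volume
# root's ownership and mode (persists in the volume itself).
mount -t glusterfs server1:/myvol /mnt/myvol
chown root:root /mnt/myvol
chmod 0700 /mnt/myvol
```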
18:12 jwd joined #gluster
18:13 nzero joined #gluster
18:14 jwd joined #gluster
18:16 victori joined #gluster
18:18 _maserati joined #gluster
18:18 jwaibel joined #gluster
18:33 cabillman_ joined #gluster
18:33 purpleid1a joined #gluster
18:33 tru_tru_ joined #gluster
18:34 twisted`_ joined #gluster
18:35 fabio joined #gluster
18:35 kbyrne joined #gluster
18:35 elico joined #gluster
18:37 and` joined #gluster
18:55 calisto joined #gluster
18:59 cc1 JoeJulian: Was hoping you had a minute to help with my timeout issues we were talking about on Tues
19:02 haomaiwa_ joined #gluster
19:06 JoeJulian Sure... I was just following up on a copyright violation.
19:07 JoeJulian It's kind of cool... I was plagiarized in a scholarly article. :)
19:07 cc1 no worries
19:07 cc1 HA. well that's kind of awesome. Just asking for credit?
19:07 JoeJulian yeah
19:07 cc1 fair enough.
19:10 cc1 summary from last time: Whenever I tried to create a volume, was getting a timeout error, but after a few minutes, i can do a "volume info" and there would be my volume.
19:11 shaunm_ joined #gluster
19:12 cc1 kept digging into strace, but couldn't see anything of value (to me).  Here's pastebin of last few lines before timeout: http://pastebin.com/KvxgNMKK
19:12 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
19:16 JoeJulian Nothing useful there.
19:18 JoeJulian cc1: Maybe run glusterd --debug in the foreground on both servers and try it. Maybe something will become obvious.
19:20 monotek1 joined #gluster
19:22 cc1 JoeJulian: normal to see lots of "0-management: Unable to find friend: host1"'s on localhost?
19:24 cc1 JoeJulian: end of the request via debug: http://fpaste.org/249923/38284249/
19:24 glusterbot Title: #249923 Fedora Project Pastebin (at fpaste.org)
19:29 tdasilva joined #gluster
19:30 samikshan joined #gluster
19:46 ChrisNBlum joined #gluster
19:57 jwd joined #gluster
19:57 julim joined #gluster
19:59 nsoffer joined #gluster
20:02 haomaiwa_ joined #gluster
20:04 tru_tru joined #gluster
20:05 ChrisNBlum joined #gluster
20:06 cleong joined #gluster
20:07 cleong left #gluster
20:08 nzero joined #gluster
20:19 coredump joined #gluster
20:25 squizzi_ joined #gluster
20:36 natarej joined #gluster
20:51 cc1 joined #gluster
20:53 ToMiles joined #gluster
21:00 victori joined #gluster
21:02 haomaiwa_ joined #gluster
21:03 theron_ joined #gluster
21:03 beeradb_ joined #gluster
21:03 ChrisNBlum joined #gluster
21:03 chirino_m joined #gluster
21:04 bfoster1 joined #gluster
21:05 yoda1410_ joined #gluster
21:05 mrEriksson joined #gluster
21:05 bfoster1 joined #gluster
21:05 cholcombe joined #gluster
21:06 Telsin joined #gluster
21:06 lalatenduM joined #gluster
21:06 mikemol joined #gluster
21:06 yosafbridge joined #gluster
21:07 neoice joined #gluster
21:09 frostyfrog joined #gluster
21:09 frostyfrog joined #gluster
21:10 doekia joined #gluster
21:17 calavera joined #gluster
21:22 cc1 left #gluster
21:28 nsoffer joined #gluster
21:28 dgandhi joined #gluster
21:43 nsoffer joined #gluster
21:43 cyberswat joined #gluster
21:48 badone joined #gluster
21:56 Rapture joined #gluster
21:59 shaunm_ joined #gluster
21:59 nzero joined #gluster
22:02 haomaiwa_ joined #gluster
22:03 sclarke_ joined #gluster
22:08 nsoffer joined #gluster
22:18 Vicomte joined #gluster
23:02 haomaiwang joined #gluster
23:06 nzero joined #gluster
23:15 gildub joined #gluster
23:16 gildub_ joined #gluster
23:35 cyberswat joined #gluster
23:53 plarsen joined #gluster
23:55 Mr_Psmith joined #gluster
23:57 nzero joined #gluster
