
IRC log for #gluster, 2015-06-24


All times shown according to UTC.

Time Nick Message
00:01 wkf joined #gluster
00:02 edwardm61 joined #gluster
00:11 wkf joined #gluster
00:17 corretico joined #gluster
00:17 hagarth joined #gluster
00:52 aaronott joined #gluster
00:57 sysconfig joined #gluster
01:03 jrm16020 joined #gluster
01:20 gildub joined #gluster
01:35 kdhananjay joined #gluster
01:43 Pupeno joined #gluster
01:49 lyang0 joined #gluster
01:58 julim joined #gluster
01:59 RameshN joined #gluster
02:12 mribeirodantas joined #gluster
02:16 RameshN joined #gluster
02:16 PatNarciso after attach-tier, while it's doing the initial rebalance... the dirs that haven't been rebalanced yet are 'missing' from 'ls -la'. any suggestions on how to get a full ls, even though the tier rebalance isn't yet complete?
02:22 nangthang joined #gluster
02:41 bharata-rao joined #gluster
02:44 aaronott joined #gluster
02:51 kdhananjay joined #gluster
03:06 ira joined #gluster
03:14 shubhendu joined #gluster
03:18 kevein joined #gluster
03:24 gem joined #gluster
03:25 gem joined #gluster
03:28 RameshN joined #gluster
03:32 overclk joined #gluster
03:38 sakshi joined #gluster
03:41 nbalacha joined #gluster
03:41 [7] joined #gluster
03:43 atinm joined #gluster
04:02 ppai joined #gluster
04:11 hagarth joined #gluster
04:18 itisravi joined #gluster
04:25 rafi joined #gluster
04:31 itisravi left #gluster
04:33 javi404 joined #gluster
04:37 Bhaskarakiran joined #gluster
04:45 ramteid joined #gluster
04:47 DV__ joined #gluster
04:47 zeittunnel joined #gluster
04:55 jiffin joined #gluster
04:58 rjoseph joined #gluster
05:01 R0ok_ joined #gluster
05:04 pppp joined #gluster
05:11 DV__ joined #gluster
05:14 nsoffer joined #gluster
05:14 vikumar joined #gluster
05:14 spandit joined #gluster
05:17 SOLDIERz joined #gluster
05:17 atalur joined #gluster
05:17 gem joined #gluster
05:18 hgowtham joined #gluster
05:20 ashiq joined #gluster
05:23 ndarshan joined #gluster
05:29 PatNarciso noticed an issue with 3.7.2 distributed tiering: reading a file (MPEG TS) while it's being written causes the write to fail, resulting in the file appearing to have a 0 file size.
05:30 ababu joined #gluster
05:38 atalur joined #gluster
05:43 kdhananjay joined #gluster
05:46 ramteid_ joined #gluster
05:55 kshlm joined #gluster
05:56 atalur joined #gluster
05:57 soumya_ joined #gluster
05:57 amitc joined #gluster
06:03 TvL2386 joined #gluster
06:07 raghu joined #gluster
06:09 [7] how does gluster decide where to run NFS servers for which volumes?
06:11 bharata_ joined #gluster
06:12 [7] also, how well does gluster scale beyond a few hundred servers?
06:13 [7] (I'm not talking about huge volumes, just about let's say a thousand servers having each other as peers)
06:14 tomased joined #gluster
06:26 gem joined #gluster
06:26 SOLDIERz joined #gluster
06:34 deepakcs joined #gluster
06:40 kotreshhr joined #gluster
06:48 atalur joined #gluster
06:54 meghanam joined #gluster
06:56 nsoffer joined #gluster
07:03 nbalacha joined #gluster
07:07 jiffin joined #gluster
07:09 nbalacha joined #gluster
07:11 karnan joined #gluster
07:13 Bhaskarakiran joined #gluster
07:13 Bhaskarakiran joined #gluster
07:14 Bhaskarakiran joined #gluster
07:17 karnan_ joined #gluster
07:18 LebedevRI joined #gluster
07:19 Bhaskarakiran joined #gluster
07:22 bharata__ joined #gluster
07:30 sabansal_ joined #gluster
07:33 autoditac joined #gluster
07:38 nbalacha joined #gluster
07:48 al joined #gluster
07:52 social joined #gluster
07:59 ctria joined #gluster
08:00 NTQ joined #gluster
08:19 anrao joined #gluster
08:24 RedW joined #gluster
08:27 ramteid joined #gluster
08:32 Slashman joined #gluster
08:34 tomased joined #gluster
08:43 glusterbot News from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
08:46 atinm joined #gluster
08:55 nsoffer joined #gluster
09:01 kdhananjay joined #gluster
09:04 sysconfig joined #gluster
09:12 soumya_ joined #gluster
09:18 anrao joined #gluster
09:29 ndevos wohooo! I'm at the DevOps days in Amsterdam, anyone wants to meet me there?
09:30 * ndevos is aware it is rather short notice, and there might be only few people in this channel in the Amsterdam area anyway...
09:30 csim I would love to do that, but I am in paris :p
09:33 soumya_ joined #gluster
09:34 ndevos csim: there are another 2 days here, not sure if I will stick around *that* long though
09:35 spalai joined #gluster
09:35 kdhananjay joined #gluster
09:38 side_control csim: salut, how are you doing?
09:38 side_control csim: i dont know if you remember, but i stopped by the office that one time ;)
09:42 spalai left #gluster
09:42 csim side_control: we spoke about freeipa and kerberos and ldap, yes
09:43 csim side_control: I am fine, and you, back to US I see ?
09:44 side_control csim: yes i've been back for a while now actually
09:44 RedW hi
09:44 glusterbot RedW: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:44 RedW why does gluster support mounting over nfs?
09:46 Trefex joined #gluster
09:46 nbalacha joined #gluster
09:47 side_control csim: i remember you talking about moving the dc, has that happened, did it go smoothly?
09:48 csim side_control: moving the DC ?
09:50 side_control datacenter
09:50 side_control ?
09:50 csim oh
09:51 csim welll, how to explain :)
09:51 csim we are still waiting on the current customer to free the space in DC so we can move
09:51 csim it was supposed to be early april 2015
09:51 csim now, that's early july :p
09:52 csim (and april 2015 was the worst case before...)
09:53 side_control csim: c'est dommage
09:53 side_control but its always that way
09:53 side_control something always comes up
09:54 csim side_control: yeah, it makes me sad to see i was right in being pessimistic :)
09:55 side_control man.. my gluster servers are an absolute mess right now
09:56 side_control spent 10 hours fixing split-brain yesterday
09:57 Ulrar RedW: Better performance for small files, easier to mount without having to install the gluster client
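
For readers of the log: the two access paths being compared look roughly like this. Hostname, volume, and mount point are placeholders, and the NFS variant assumes Gluster's built-in NFSv3 server is enabled on the volume; that server only speaks NFSv3 over TCP, hence the explicit options.

    # FUSE client mount: requires the glusterfs client package on the host
    mount -t glusterfs server1:/myvol /mnt/myvol

    # NFS mount against Gluster's built-in NFS server: no gluster client needed
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol
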
10:09 RedW Thanks.
10:09 eljrax I keep hearing the "better performance for small files". Are there any actual numbers for this anywhere?
10:09 eljrax How big a difference, and is it big enough to make up for the loss of HA?
10:10 _shaps_ joined #gluster
10:14 vovcia is there recommended distro for glusterfs?
10:16 eljrax www.gluster.org says © 2015 Red Hat, Inc. ;)
10:16 vovcia so, CentOS 7 should be fine? :)
10:17 RedW HA, are there any other disadvantages/risks in using nfs?
10:17 vovcia RedW: on NFSv3 there are lock issues
10:20 vovcia but i'm looking forward to trying ganesha :)
10:28 surabhi_ joined #gluster
10:28 natarej joined #gluster
10:31 akay1 ive switched from mounting with fuse and sharing that out over samba to using samba vfs... performance is great but ive lost my recycle functionality for some reason... and given that 3.7.2 trashcan doesnt work im kinda stuck... anyone seen that before?
10:39 eljrax 3.7.2 trashcan doesn't work? :/
10:41 atinm eljrax, anoopcs can help you on that
10:42 _shaps_ left #gluster
10:42 anoopcs atinm, sure
10:43 anoopcs eljrax, Can you explain a little bit more?
10:43 eljrax anoopcs: I was just asking akay1 who said it wasn't
10:44 eljrax < akay1> ... and given that 3.7.2 trashcan doesnt work im kinda stuck... "
10:44 glusterbot News from newglusterbugs: [Bug 1212842] tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed <https://bugzilla.redhat.com/show_bug.cgi?id=1212842>
10:44 glusterbot News from newglusterbugs: [Bug 1235217] tools/glusterfind: build errors while compiling for RHEL 5 <https://bugzilla.redhat.com/show_bug.cgi?id=1235217>
10:44 anoopcs eljrax, Ok.
10:47 smohan joined #gluster
10:51 purpleidea joined #gluster
10:52 anoopcs akay1, Can you explain me the issue you faced with trashcan in glusterfs?
10:54 surabhi_ joined #gluster
11:00 nbalacha joined #gluster
11:03 rjoseph joined #gluster
11:05 gildub joined #gluster
11:08 ndevos @later tell teknologeek I've filed bug 1235231 for your unix domain socket problem, let me know if you need it backported to a stable version
11:08 glusterbot ndevos: The operation succeeded.
11:09 NTQ left #gluster
11:09 kotreshhr joined #gluster
11:17 atalur joined #gluster
11:20 akay1 yep no trashcan... when i change the folder used for trashcan it complains if the folder already exists, but it still never puts any files there
11:21 atinm joined #gluster
11:21 nbalacha joined #gluster
11:22 soumya_ joined #gluster
11:22 anoopcs akay1, IIRC, if you happen to place the vfs_glusterfs module before vfs_recycle, then recycle functionality is lost, because vfs_glusterfs returns without invoking vfs_recycle on receiving an unlink request; the unlink call is redirected to the glusterfs layer via libgfapi.
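
The ordering anoopcs describes is set by the "vfs objects" line in the Samba share definition. A minimal sketch, with made-up share and volume names:

    [gvol]
        path = /
        glusterfs:volume = myvol
        kernel share modes = no
        # recycle listed first: vfs_recycle intercepts the unlink and
        # diverts the file into .recycle before anything reaches gluster
        vfs objects = recycle glusterfs
        # with "vfs objects = glusterfs recycle" instead, vfs_glusterfs
        # handles the unlink via libgfapi and vfs_recycle never runs
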
11:28 dusmant joined #gluster
11:32 anoopcs akay1, How did you change the folder used for trashcan?
11:32 rafi1 joined #gluster
11:34 rafi joined #gluster
11:34 rjoseph joined #gluster
11:35 raghu joined #gluster
11:37 akay1 anoopcs, so i just put recycle before glusterfs in my share's vfs objects?
11:37 akay1 gluster volume set [vol] features.trash-dir
11:38 akay1 i read that .trashcan should have been created automatically but it wasnt
11:39 Ulrar Does anyone have experience with using glusterFS to store VMs for HA ? Does that work well ?
11:41 anoopcs akay1, If volume started successfully, then .trashcan will be created automatically. Can you check the logs to see what went wrong?
11:42 akay1 volume started fine... which log would it be in?
11:42 overclk joined #gluster
11:43 [7] Ulrar: I'm doing that, but I'm not really happy with the results yet
11:43 Ulrar It's slow ?
11:44 [7] it's somewhat slow but I'm not sure if glusterfs or dm-cache is to blame for that ;)
11:44 glusterbot News from newglusterbugs: [Bug 1235242] changelog: directory renames not getting recorded <https://bugzilla.redhat.com/show_bug.cgi?id=1235242>
11:44 [7] it basically works as long as everything works, but I'm having stability issues in recovery situations or when modifying the volume
11:44 anoopcs akay1, Check the brick logs..Did you see any ERROR logs?
11:45 [7] I've had catastrophic data corruption by just adding a brick to a volume (to increase replica count) while a VM was running on it
11:45 [7] reproducibly
11:46 Ulrar Well that's not good
11:46 Ulrar A little scary
11:46 [7] I've also had some availability issues but that might be misconfiguration on my part
11:46 [7] so yeah, that's my impression as well: a little scary if you're doing this for reliability reasons in the first place
11:47 Ulrar We wouldn't be adding new bricks often, but still
11:47 Ulrar Using 3.7 ?
11:49 [7] yes
11:49 [7] but same thing on 3.6 as well
11:50 overclk joined #gluster
11:50 Ulrar Might try something else then :)
11:51 Ulrar Thanks for the feedback
11:51 akay1 nope, nothing in the logs
11:52 ira joined #gluster
11:54 Suckervi1le what does "Skipping entry self-heal because of gfid absence" mean?
11:56 anoopcs akay1, Check whether .trashcan directory is created in brick.
11:56 surabhi_ joined #gluster
11:56 akay1 i checked all the bricks - doesnt exist
11:56 zeittunnel joined #gluster
11:57 akay1 ive got another volume here too... no trashcan in those bricks either
11:58 anoopcs akay1, Are you sure that glusterfs version is 3.7.x?
11:58 akay1 yep, or it wouldnt let me enable the feature, right?
12:00 akay1 glusterfs 3.7.2 built on Jun 19 2015 16:33:23
12:00 raghu joined #gluster
12:00 anoopcs akay1, Was that an upgrade to 3.7.2 or fresh install ?
12:01 akay1 upgrade from 3.6.2
12:02 anoopcs akay1, Those volumes which you mentioned earlier, were created before upgrade or after upgrade?
12:02 rjoseph joined #gluster
12:03 akay1 they were both created before...
12:04 akay1 please dont tell me trashcan will only work on new volumes :)
12:04 anoopcs akay1, Ok. Can you just stop the volume and then start again?
12:05 akay1 yeah ill stop and restart one
12:05 [7] Ulrar: http://pastie.org/pastes/10225618/text
12:05 [7] this is what I used to make it eat my data somewhat reproducibly
12:05 [7] example output: http://pastie.org/pastes/10225619/text
12:07 akay1 anoopcs: ok stopped and started, still no change
12:07 akay1 actually cancel that, .trashcan is there
12:07 anoopcs akay1, That's how it should work
12:08 akay1 the documentation doesnt say you need to restart the volume
12:08 atalur joined #gluster
12:09 gildub joined #gluster
12:09 akay1 after i upgraded to 3.7.2 i started the volume (obviously) then enabled the trash feature... same as docs
12:09 anoopcs akay1, Since your volume was created before upgrade, the trash translator was not included in the volume graph and it was not created.
12:09 akay1 so i need to restart the volume after enabled the feature on that volume?
12:10 akay1 *enabling
12:10 anoopcs akay1, Not always
12:10 anoopcs akay1, This is an issue with upgrade only.
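
Putting the thread together, the sequence for a volume created before the upgrade would look like this; the volume name is a placeholder, and features.trash-dir is optional since .trashcan is the default:

    gluster volume set myvol features.trash on
    gluster volume set myvol features.trash-dir .trashcan
    # volumes created before the upgrade need a restart so the volume
    # graph is regenerated with the trash translator included
    gluster volume stop myvol
    gluster volume start myvol
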
12:11 RaSTar akay1: did you do a upgrade using rpms or source install ?
12:11 akay1 damnit... wish i had known that while i was doing the upgrade. the other volume is fairly large so i'll have to do it this weekend
12:11 akay1 RaSTar: it was with ubuntu ppa
12:12 akay1 anoopcs: thanks for pointing it out... in the meantime i can make do with getting recycle to work with samba vfs... so you think just changing the vfs objects order will do it?
12:13 RaSTar akay1: ok
12:13 SOLDIERz joined #gluster
12:14 glusterbot News from newglusterbugs: [Bug 1235246] Missing trusted.ec.config xattr for files after heal process <https://bugzilla.redhat.com/show_bug.cgi?id=1235246>
12:15 anoopcs akay1, If you place vfs_recycle before vfs_glusterfs, then glusterfs will be unaware of the delete from the Samba share. You shouldn't do that.
12:16 anoopcs akay1, You can use the trash feature provided by glusterfs.
12:16 R0ok_ joined #gluster
12:17 anoopcs akay1, Remove the recycle vfs object from smb.conf and keep only glusterfs vfs object.
12:18 akay1 ok i'll turn on the trash feature this weekend... what do you mean that glusterfs will be unaware of the delete? (i just tested on one file and it looked to work as expected)
12:18 ppai joined #gluster
12:20 Ulrar [7]: Thanks a lot, I'm going to set up a few VMs to try that out
12:20 anoopcs akay1, How does your vfs objects = line look now?
12:21 akay1 anoopcs:  vfs objects = recycle glusterfs
12:21 spalai joined #gluster
12:22 rafi joined #gluster
12:24 anoopcs akay1, What are the other parameters that you specified for recycle object? I mean recycle:repository = ?
12:26 akay1 recycle:repository = .recycle
12:26 akay1 recycle:keeptree = yes
12:26 akay1 recycle:versions = yes
12:27 anoopcs akay1, And when you deleted a file, was it moved to .recycle or .trashcan?
12:27 kotreshhr joined #gluster
12:27 akay1 .recycle - this is on my volume that i havent restarted so theres no .trashcan folder
12:28 anoopcs akay1, Ahh.. That's expected..
12:28 [7] Ulrar: someone here claimed that he failed to reproduce it, even though it happened in like 5-7 out of 10 attempts on my system, so please let me know if you hit these issues as well ;)
12:28 anoopcs akay1, Ok. Let me make it clear for you
12:30 [7] Ulrar: what fails absolutely reproducibly is that all VMs are paused in KVM (due to storage errors) when increasing replica count, and cannot be resumed without destroying them. After restarting them, some had signs of filesystem corruption (that might be related to attempts by HA monitoring to restart them on another node, which seemed to partially succeed while the replication was still in progress)
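
For reference, the operation [7] describes, raising the replica count of a live volume, is a single add-brick call of roughly this shape (all names are placeholders):

    # grow a replica-2 volume to replica 3 by adding one more brick
    gluster volume add-brick myvol replica 3 server3:/bricks/brick1
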
12:31 mribeirodantas joined #gluster
12:35 anoopcs akay1, Or else.. Just go on with your testing...
12:36 B21956 joined #gluster
12:36 anoopcs akay1, Get back in case of issues
12:37 akay1 anoopcs: think i missed a msg from you after "Let me make it clear for you"
12:38 akay1 can i basically put recycle in my vfs objects before glusterfs on the volume that hasnt been restarted, then when it has - change it?
12:41 anoopcs akay1, Basically if you put it that way, then you are not testing the trash feature from glusterfs. You will be using the recycle feature from Samba.
12:42 R0ok_ joined #gluster
12:42 anoopcs akay1, If you really want to test trash provided by glusterfs, then you need to remove recycle object and keep glusterfs vfs object only.
12:43 akay1 anoopcs: thats fine for the time being... at least i have a way of saving data until i am able to restart the volume which will then allow me to use the glusterfs trashcan
12:44 akay1 thanks very much for your help
12:44 glusterbot News from newglusterbugs: [Bug 1235269] Data Tiering: Files not getting promoted once demoted <https://bugzilla.redhat.com/show_bug.cgi?id=1235269>
12:45 anoopcs akay1, Fine. After restarting the volume, you need to remove recycle from vfs objects from smb.conf, that's all :)
12:46 akay1 great, will do :) thanks :)
12:46 Ulrar [7]: I'll let you know !
12:46 Ulrar Hope I'll have the time to test that this week
12:47 Skinny_ joined #gluster
12:47 Skinny_ hi all
12:47 anoopcs akay1, np
12:48 [7] Ulrar: thanks, I'll be idling here as TheSeven or [7]
12:51 wkf joined #gluster
12:51 autoditac joined #gluster
12:54 gildub joined #gluster
12:54 spalai left #gluster
12:57 bennyturns joined #gluster
13:03 ppai joined #gluster
13:11 kokopelli joined #gluster
13:24 dgandhi joined #gluster
13:25 klaxa|work joined #gluster
13:26 georgeh-LT2 joined #gluster
13:26 B21956 joined #gluster
13:27 autoditac joined #gluster
13:28 apahim_ joined #gluster
13:30 Ulrar Looks like the email address in the FAQ doesn't exist anymore
13:31 squizzi joined #gluster
13:31 shyam joined #gluster
13:32 aaronott joined #gluster
13:35 monotek joined #gluster
13:35 chirino joined #gluster
13:37 surabhi_ joined #gluster
13:37 hamiller joined #gluster
13:40 DV joined #gluster
13:43 kokopellifd joined #gluster
13:46 kokopelli joined #gluster
13:49 ashiq joined #gluster
13:50 kokopellifd joined #gluster
13:57 theron joined #gluster
14:04 nangthang joined #gluster
14:11 julim joined #gluster
14:12 pppp joined #gluster
14:14 marcoceppi joined #gluster
14:18 soumya_ joined #gluster
14:22 wushudoin joined #gluster
14:22 kdhananjay joined #gluster
14:30 soumya_ joined #gluster
14:34 DV joined #gluster
14:34 rafi joined #gluster
14:37 pppp joined #gluster
14:43 scubacuda joined #gluster
14:50 DV joined #gluster
15:03 zeittunnel joined #gluster
15:12 julim joined #gluster
15:13 maveric_amitc_ joined #gluster
15:13 shyam joined #gluster
15:17 julim joined #gluster
15:26 kotreshhr left #gluster
15:27 cholcombe joined #gluster
15:29 overclk joined #gluster
15:32 shyam joined #gluster
15:38 paulc_AndChat joined #gluster
15:48 soumya_ joined #gluster
15:59 theron_ joined #gluster
16:04 monotek joined #gluster
16:06 paulc_AndChat joined #gluster
16:10 jiffin joined #gluster
16:14 maveric_amitc_ joined #gluster
16:22 jbrooks joined #gluster
16:24 squizzi joined #gluster
16:37 Larsen joined #gluster
16:38 rotbeard joined #gluster
16:45 autoditac joined #gluster
16:45 cholcombe joined #gluster
16:47 dbruhn joined #gluster
16:52 soumya_ joined #gluster
16:53 AndChat|25625 joined #gluster
16:54 calavera joined #gluster
16:58 theron joined #gluster
16:59 rafi joined #gluster
17:04 paulc_AndChat joined #gluster
17:09 julim_ joined #gluster
17:10 Rapture joined #gluster
17:12 hagarth joined #gluster
17:17 bfoster joined #gluster
17:27 julim joined #gluster
17:33 elico joined #gluster
17:45 AndChat|25625 joined #gluster
17:45 Rapture joined #gluster
17:48 overclk joined #gluster
17:50 overclk joined #gluster
17:53 sage joined #gluster
18:04 glusterbot joined #gluster
18:05 JoeJulian joined #gluster
18:07 theron_ joined #gluster
18:10 paulc_AndChat joined #gluster
18:12 theron joined #gluster
18:14 calavera joined #gluster
18:20 hagarth joined #gluster
18:26 aaronott joined #gluster
18:27 calavera joined #gluster
18:34 julim_ joined #gluster
18:56 paulc_AndChat joined #gluster
18:56 julim joined #gluster
19:07 calavera joined #gluster
19:14 elico joined #gluster
19:25 deniszh joined #gluster
19:34 Skinny_ hi guys, having some issues with mounting glusterfs @ boottime on my newly created cluster (ubuntu). I suppose the issues I find on google regarding this are fixed in the latest packages (15.04) and glusterfs 3.5.2. However I still can't seem to get my volumes mounted @ boottime
19:40 lexi2 joined #gluster
19:43 chirino joined #gluster
19:44 SpComb^ Skinny_: using _netdev in fstab?
19:45 Skinny_ jup
19:45 Skinny_ and also tried this : http://serverfault.com/questions/611462/glusterfs-failing-to-mount-at-boot-with-ubuntu-14-04
19:46 Skinny_ but now back at 'default' behaviour
19:46 Skinny_ running mount -a just after logging in works just fine, so it should be related to the order of startup events
19:47 Skinny_ entry in /etc/fstab:  localhost:/lockvol /gluster/lock glusterfs defaults,auto,_netdev 0 0
19:47 Skinny_ weird thing is, that boot.log is showing :
19:47 Skinny_ [  OK  ] Started Wait for all "auto" /etc/network/interfaces to be up for network-online.target. [  OK  ] Reached target Network is Online.          Mounting /gluster/lock...          Mounting /gluster/vol01... [  OK  ] Mounted /gluster/lock. [  OK  ] Mounted /gluster/vol01.
19:48 Skinny_ but /gluster/lock is actually not mounted
19:49 SpComb^ what does /var/log/glusterfs/gluster-lock.log (or somesuch) say
19:50 Skinny_ https://gist.github.com/skinny/04e5c24bd8fdf5da3b9e
19:51 Skinny_ that message actually led me to the serverfault.com post and I tried changing the upstart file to wait on glusterfs-server instead of networking but same result
19:52 SpComb^ mm, mounting from localhost-only is a bit odd
19:52 SpComb^ dunno, I mount from an alias that resolves to each node in the cluster
19:53 Skinny_ yeah, but this is to mount the 'config/recovery_lockfile' volume for CTDB
19:53 Skinny_ several tutorials/guides are referring to this
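
The alias approach SpComb^ mentions, combined with Gluster's mount-time fallback option, might look like this in /etc/fstab; the names are placeholders, and backup-volfile-servers (available in recent releases) only affects the initial volfile fetch:

    gluster.example.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0
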
19:55 Skinny_ but because running the mount command after boot actually works just fine, I assume it has something to do with the ordering of services
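
One workaround often suggested for exactly this race on systemd-based Ubuntu releases (untested in this log) is to defer the mount until first access instead of mounting during boot:

    # systemd creates an automount unit and mounts lazily, after glusterd
    # is up, rather than racing it at boot
    localhost:/lockvol  /gluster/lock  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0 0
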
20:06 paulc_AndChat joined #gluster
20:19 paulc_AndChat joined #gluster
20:19 calavera joined #gluster
20:30 theron joined #gluster
20:35 julim joined #gluster
20:41 julim joined #gluster
20:43 mrEriksson joined #gluster
20:44 julim joined #gluster
20:49 calavera joined #gluster
20:52 calavera joined #gluster
20:52 mribeirodantas joined #gluster
20:55 monotek1 joined #gluster
20:55 badone_ joined #gluster
21:03 julim joined #gluster
21:07 nsoffer joined #gluster
21:19 calavera joined #gluster
21:39 gildub joined #gluster
21:49 julim joined #gluster
21:53 wkf joined #gluster
21:55 B21956 joined #gluster
21:58 scooby2 joined #gluster
22:52 theron_ joined #gluster
23:02 calavera joined #gluster
23:05 calavera joined #gluster
23:52 maveric_amitc_ joined #gluster
