IRC log for #gluster, 2014-09-12


All times shown according to UTC.

Time Nick Message
00:08 VerboEse joined #gluster
00:18 necrogami joined #gluster
00:20 _Bryan_ joined #gluster
00:26 gildub joined #gluster
00:58 jmarley joined #gluster
01:07 vu joined #gluster
01:08 itisravi_ joined #gluster
01:14 bala joined #gluster
01:28 recidive joined #gluster
01:41 vimal joined #gluster
02:10 nishanth joined #gluster
02:18 haomaiwang joined #gluster
02:34 haomai___ joined #gluster
02:38 bharata-rao joined #gluster
02:39 haomaiwa_ joined #gluster
02:47 haomaiwa_ joined #gluster
03:02 haomaiw__ joined #gluster
03:07 hagarth joined #gluster
03:09 haomaiwa_ joined #gluster
03:16 B21956 joined #gluster
03:18 haomai___ joined #gluster
03:25 zerick joined #gluster
03:25 vu joined #gluster
03:27 meghanam joined #gluster
03:28 meghanam_ joined #gluster
03:34 spandit joined #gluster
03:44 shubhendu_ joined #gluster
03:52 Pupeno joined #gluster
03:53 vimal joined #gluster
03:55 itisravi_ joined #gluster
03:55 nbalachandran joined #gluster
03:58 haomaiwang joined #gluster
03:58 harish joined #gluster
03:59 haomaiwang joined #gluster
04:01 haomai___ joined #gluster
04:05 atinmu joined #gluster
04:17 atinm joined #gluster
04:17 haomaiwa_ joined #gluster
04:31 ndarshan joined #gluster
04:33 haomaiw__ joined #gluster
04:34 rjoseph joined #gluster
04:38 haomai___ joined #gluster
04:45 haomaiwa_ joined #gluster
04:45 Rafi_kc joined #gluster
04:45 rafi1 joined #gluster
04:47 anoopcs joined #gluster
04:54 ramteid joined #gluster
04:58 kdhananjay joined #gluster
05:02 harish joined #gluster
05:03 haomai___ joined #gluster
05:12 recidive joined #gluster
05:14 lalatenduM joined #gluster
05:27 sputnik13 joined #gluster
05:28 sputnik13 joined #gluster
05:36 dockbram_ joined #gluster
05:42 kanagaraj joined #gluster
05:46 saurabh joined #gluster
05:55 nishanth joined #gluster
05:57 glusterbot New news from newglusterbugs: [Bug 1126932] Random crashes while writing to a dispersed volume <https://bugzilla.redhat.com/show_bug.cgi?id=1126932>
06:02 LebedevRI joined #gluster
06:03 nshaikh joined #gluster
06:04 deepakcs joined #gluster
06:06 haomaiwa_ joined #gluster
06:16 rgustafs joined #gluster
06:23 karnan joined #gluster
06:25 meghanam joined #gluster
06:26 aravindavk joined #gluster
06:26 meghanam_ joined #gluster
06:27 haomaiwang joined #gluster
06:29 jtux joined #gluster
06:30 RaSTar joined #gluster
06:32 Guest94813 joined #gluster
06:35 haomaiwa_ joined #gluster
06:49 nishanth joined #gluster
06:51 haomai___ joined #gluster
06:57 MickaTri joined #gluster
06:58 MickaTri2 joined #gluster
06:58 MickaTri left #gluster
07:00 soumya1 joined #gluster
07:02 ekuric joined #gluster
07:02 ekuric1 joined #gluster
07:07 bala joined #gluster
07:08 meghanam_ joined #gluster
07:09 meghanam joined #gluster
07:13 mhoungbo joined #gluster
07:13 getup- joined #gluster
07:14 haomaiwang joined #gluster
07:15 fsimonce joined #gluster
07:20 aravindavk joined #gluster
07:25 atinm joined #gluster
07:28 raghu joined #gluster
07:29 haomai___ joined #gluster
07:34 yosafbridge joined #gluster
07:36 jiffin joined #gluster
07:38 andreask joined #gluster
07:39 haomaiwa_ joined #gluster
07:43 haomai___ joined #gluster
07:47 TheFearow joined #gluster
07:47 TheFearow Hey all - quick question. I've got a single gluster server that's had to change hostnames due to an early mistake - is there any easy way to now update this in the volumes?
07:55 aravindavk joined #gluster
07:56 atinm joined #gluster
08:07 liquidat joined #gluster
08:09 gildub joined #gluster
08:15 RameshN joined #gluster
08:25 Pupeno joined #gluster
08:28 Norky joined #gluster
08:32 andreask joined #gluster
08:35 TvL2386 joined #gluster
08:35 ricky-ti1 joined #gluster
08:36 Slashman joined #gluster
08:37 ekuric joined #gluster
08:39 rgustafs joined #gluster
08:47 sputnik13 joined #gluster
08:49 jiffin joined #gluster
08:52 6JTAAFFCU joined #gluster
08:52 17SAA8PI0 joined #gluster
08:58 MattAtL joined #gluster
08:58 kdhananjay joined #gluster
09:07 muhh_ left #gluster
09:16 atinmu joined #gluster
09:21 mdavidson joined #gluster
09:44 RaSTar joined #gluster
09:47 kdhananjay joined #gluster
09:55 meghanam joined #gluster
09:56 meghanam_ joined #gluster
09:59 soumya1 joined #gluster
10:06 kumar joined #gluster
10:06 RaSTar joined #gluster
10:24 edward1 joined #gluster
10:28 glusterbot New news from newglusterbugs: [Bug 978082] auxiliary group permissions fail via kerberized nfs export <https://bugzilla.redhat.com/show_bug.cgi?id=978082> || [Bug 1066128] glusterfsd crashes with SEGV during catalyst run <https://bugzilla.redhat.com/show_bug.cgi?id=1066128>
10:32 ekuric joined #gluster
10:36 ekuric joined #gluster
10:39 shubhendu joined #gluster
10:40 ekuric joined #gluster
10:41 ekuric joined #gluster
10:43 ekuric joined #gluster
10:46 andreask joined #gluster
10:52 rjoseph joined #gluster
10:53 hagarth joined #gluster
10:59 RaSTar joined #gluster
11:02 soumya1 joined #gluster
11:08 mojibake joined #gluster
11:10 chirino joined #gluster
11:10 kanagaraj joined #gluster
11:14 side_control joined #gluster
11:17 rjoseph joined #gluster
11:26 glusterbot New news from resolvedglusterbugs: [Bug 978082] auxiliary group permissions fail via kerberized nfs export <https://bugzilla.redhat.com/show_bug.cgi?id=978082>
11:34 meghanam joined #gluster
11:34 meghanam_ joined #gluster
11:45 soumya joined #gluster
11:46 LHinson joined #gluster
12:06 Slashman joined #gluster
12:06 MickaTri2 Is it easy to update glusterfs? Or is it recommended to stay on the version we currently have installed?
12:09 recidive joined #gluster
12:20 itisravi_ joined #gluster
12:36 rotbeard joined #gluster
12:37 andreask joined #gluster
12:38 bene2 joined #gluster
12:44 julim joined #gluster
12:47 soumya joined #gluster
12:50 jmarley joined #gluster
13:08 theron joined #gluster
13:09 soumya joined #gluster
13:13 aravindavk joined #gluster
13:18 ira joined #gluster
13:20 clutchk joined #gluster
13:21 fattaneh1 joined #gluster
13:21 _Bryan_ joined #gluster
13:21 fattaneh1 left #gluster
13:23 diegows joined #gluster
13:26 SpComb good ways to backup a glusterfs volume?
13:26 SpComb rsyncing from a glusterfs mount is very, very slow
13:27 MickaTri2 What exactly is a brick?
13:27 MickaTri2 Do they have to be the same size?
13:27 harish joined #gluster
13:39 monotek left #gluster
13:43 monotek1 joined #gluster
13:44 MickaTri2 Nobody here ?
13:44 MickaTri2 ..........
13:45 guntha joined #gluster
13:48 bennyturns joined #gluster
13:49 MacWinner joined #gluster
13:50 kkeithley They should have the same size. That's a "best practice".
13:52 skippy a "brick" is just "a place to which Gluster writes data".  It is recommended that each brick be its own partition and filesystem, but this is not strictly necessary.
13:53 kkeithley (,,brick)
13:54 kkeithley ,,(brick)
13:54 kkeithley @factoid
13:54 kkeithley @brick
13:54 kkeithley wake up glusterbot
13:54 glusterbot I do not know about 'brick', but I do know about these similar topics: 'brick naming', 'brick order', 'brick port', 'clean up replace-brick', 'former brick', 'reuse brick', 'which brick'
13:54 glusterbot kkeithley: I do not know about 'brick', but I do know about these similar topics: 'brick naming', 'brick order', 'brick port', 'clean up replace-brick', 'former brick', 'reuse brick', 'which brick'
13:54 gomikemike b00m
13:55 gomikemike how can i recover old (removed) bricks ports?
13:56 gomikemike im setting up a new cluster and created a volume, but it was not replicated, so i removed it and redid it with replica 2; the new volume was created on the next available port, 49154
13:57 gomikemike since there is nothing here yet, i would like to have it start @ 49152
13:57 skippy ports are cheap.  why bother?
13:57 skippy each new brick will use the next available port.
13:58 gomikemike because security groups need to match.... and if i keep this change, i need to make crazy requests to have security team edit the security groups etc..
13:58 SpComb backing up a glusterfs volume with 5GB of blocks and 300k inodes with rsync takes 30min. Anything one could do to optimize that?
13:59 gomikemike i understand that as this grows, the ports grow with it, and it's actually pretty cool that it does not "recycle" the ports, but since this is brand new, i see no issue in recycling the port
14:02 nbalachandran joined #gluster
14:04 kkeithley since you're starting over you can completely uninstall glusterfs, wipe /etc/glusterfs and /var/lib/glusterd, then reinstall; then ports should reset to 49152.
14:05 kkeithley just removing the RPMs or DPKGs won't wipe those directories
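A rough sketch of the reset kkeithley describes, assuming an RPM-based system and that it is fine to throw the existing bricks' state away; package names and service commands may differ on your distro:

    service glusterd stop
    yum remove -y glusterfs-server glusterfs       # removing packages alone leaves config behind
    rm -rf /var/lib/glusterd /etc/glusterfs        # wipe state so brick ports reset to 49152
    yum install -y glusterfs-server
    service glusterd start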
14:09 gomikemike womp womp
14:09 gomikemike ok, thanks
14:13 LHinson joined #gluster
14:16 tdasilva joined #gluster
14:20 HoloIRCUser1 joined #gluster
14:26 skippy how long should "State: Sent and Received peer request (Connected)" be displayed after peer probing?
14:26 skippy or, how do I complete the peering process if both peers show that message?
14:27 aravindavk joined #gluster
14:29 bharata-rao joined #gluster
14:32 vu joined #gluster
14:32 JayJ joined #gluster
14:33 JayJ hey i have 2x 10gb links running lacp and i'm trying to send as much traffic as i can from a client on the same network to the storage group, and i'm only getting about 6 gbps
14:39 HoloIRCUser2 joined #gluster
14:41 plarsen joined #gluster
14:50 LHinson joined #gluster
14:51 VeggieMeat joined #gluster
14:52 lmickh joined #gluster
15:03 failshell joined #gluster
15:04 ghenry joined #gluster
15:04 ghenry joined #gluster
15:07 MickaTri2 Hi, can a pool have a specific role? Or is a pool just a different name for a cluster?
15:08 toordog-work testing concurrent write to a file and the behavior is not what I expected
15:09 HoloIRCUser1 joined #gluster
15:10 toordog-work I wrote a python script that opens a file, writes data, and closes it. It writes 10M of random data from urandom.  I start the script on 2 different servers that have the glusterfs volume mounted, both writing to the same file.  I wrap the python script in a bash script that retries while the python script fails and prints the attempt number to stdout.
15:10 jobewan joined #gluster
15:10 toordog-work I was expecting a lot of retry attempts from one of the 2 servers until it could lock the file for writing.
15:10 toordog-work but there was no error, and it looks like both were writing at the same time
15:11 toordog-work but the total size of the file was 10M *not 20M*
15:11 toordog-work in a second test, server2 attempted to write a file of 100M; when server1 finished its 10M, server2 continued to write until it reached 100M
15:11 toordog-work can anyone help me understand this behavior?
15:28 soumya joined #gluster
15:35 HoloIRCUser1 Anyone tried geo rep and got successful results? We are having some weird issues.
15:36 toordog-work geo rep uses rsync in the background
15:36 toordog-work what kind of problem do you have?
15:36 jamoflaw We have written a really really simple script and it's failing to keep up
15:36 jamoflaw mkdir test && mv test test2
15:37 toordog-work geo rep is not real time
15:37 jamoflaw On the remote partner you will see test and test2
15:37 toordog-work ok
15:38 jamoflaw Whereas it should rename test to test2
15:39 jamoflaw Same with files as well
15:39 fattaneh joined #gluster
15:41 HoloIRCUser1 joined #gluster
15:42 julim joined #gluster
15:42 ninkotech joined #gluster
15:43 fattaneh does glusterfs index the metadata attributes?
15:47 semiosis HoloIRCUser1: describe your weird issues
15:50 HoloIRCUser1 Basically it's where you create a folder and instantly rename
15:51 HoloIRCUser1 Both folders will be created on the geo rep pair
15:51 HoloIRCUser1 Whereas on the local volume the folder is correctly renamed
15:53 hagarth HoloIRCUser1: what version of glusterfs is this with?
15:53 HoloIRCUser1 If that makes any sense
15:53 HoloIRCUser1 :) gluster 3.5.2 with the distributed rep
15:53 hagarth HoloIRCUser1: any possibility of testing this with glusterfs 3.6 nightly builds?
15:54 hagarth lots of changes have gone into 3.6 as far as geo-replication is concerned
15:54 HoloIRCUser1 Yup can upgrade this is pre prod
15:55 hagarth http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.6/ has the latest 3.6 nightlies
15:55 glusterbot Title: Index of /pub/gluster/glusterfs/nightly/glusterfs-3.6 (at download.gluster.org)
15:56 hagarth if your test passes with 3.6, we can consider a backport for 3.5 or else log this issue as a blocker for 3.6.0 :)
15:57 HoloIRCUser1 Kk just updating now
16:02 toordog-work Any reason why 2 servers with a volume mounted can write to the same file at the same time, concurrently?
16:02 toordog-work *distributed replicated volume*
16:04 fattaneh does glusterfs index the metadata attributes?
16:04 toordog-work fattaneh what do you call metadata?
16:05 fattaneh toordog-work: files metadata
16:05 toordog-work xattr attribute^
16:05 toordog-work ?
16:05 fattaneh toordog-work: yes
16:05 toordog-work i don't know the answer, but it will help others to answer you
16:06 fattaneh toordog-work: for example i'd like to know, when we search for a file by its name or another attribute, how it finds it
16:06 fattaneh toordog-work: thanks :)
16:09 LHinson joined #gluster
16:09 toordog-work by the way, i found this doc very interesting.  It's closer to the bleeding edge than what's on gluster.org.  https://github.com/gluster/glusterfs/blob/master/doc/features/
16:09 glusterbot Title: glusterfs/doc/features at master · gluster/glusterfs · GitHub (at github.com)
16:09 toordog-work maybe you will find your answer there
16:10 HoloIRCUser2 joined #gluster
16:11 mkzero joined #gluster
16:23 HoloIRCUser2 Nightly is killing the remote glusterd service on one of the replicas and coming up as faulty
16:23 HoloIRCUser2 Is there a beta rather than nightly?
16:25 jbrooks joined #gluster
16:25 nbalachandran joined #gluster
16:27 jamoflaw Of 3.6 that is
16:30 hagarth jamoflaw: beta of 3.6 should be available shortly .. mostly over the next week or so
16:37 jamoflaw Kk
16:38 jamoflaw Yeah just tested the alpha and the create push-pem command kills the replica node
16:38 jamoflaw On the remote geo volume
16:40 toordog-work tested my concurrent access to a file from 2 different servers some more. In the end, the behavior was that one was overwriting the other *last one to write wins*.
16:40 toordog-work that means the locking system didn't handle it well
16:40 HoloIRCUser1 joined #gluster
16:41 toordog-work HoloIRCUser1 is it me or your internet provider is unstable?
16:42 HoloIRCUser1 Mobile network
16:42 toordog-work ah ok :)
16:42 jamoflaw Will retest on 3.6 when it comes out
16:43 toordog-work jamoflaw did you do some tests for concurrent access to a file from 2 different servers?
16:43 jamoflaw No, this is from the same server
16:43 toordog-work k
16:43 jamoflaw Replica 2
16:44 jamoflaw V odd though
16:45 jamoflaw In terms of replicating the fault just run mkdir test && mv test test2
16:46 jamoflaw Both test and test2 will appear on the geo rep volume whereas the local vol will correctly show the single test2 folder
16:46 jamoflaw Would be interested to hear if anyone else is seeing this
16:46 skippy jamoflaw: http://blog.vorona.ca/the-way-gluster-fs-handles-renaming-the-file.html
16:46 glusterbot Title: The way Gluster FS handles renaming the file (at blog.vorona.ca)
16:47 skippy moving a file on the same partition is basically a rename operation.
16:48 jamoflaw Different problem on this one; the local server is fine, it's the geo-replicated one showing the problem
16:49 msolo joined #gluster
16:51 msolo is there a way to see all options on a current volume?
16:52 toordog-work if you cat the files in /var/lib/glusterd/
16:52 toordog-work be careful not to write to those files..
16:52 skippy gluster volume info <vol>
16:53 msolo skippy: i don't see any options there, just the bricks
16:55 msolo toordog-work: i see lots of files there, but the one that is named like my volume has a lot of data and subvolumes
16:55 HoloIRCUser1 joined #gluster
16:55 msolo is that what you are referring to?
16:56 skippy msolo: if `gluster volume info` shows no options, then you're using all of the default values for all options.
16:56 skippy msolo: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_managing_volumes.md#tuning-options
16:56 glusterbot Title: glusterfs/admin_managing_volumes.md at master · gluster/glusterfs · GitHub (at github.com)
16:56 msolo thanks
16:57 msolo i needed the secret decoder ring since the interface is pretty lacking
16:57 toordog-work msolo what i referred you to is what the gluster commands like gluster volume info use *they are actually reading those files*; it's the same principle as linux commands reading values from /proc
16:57 toordog-work so you might get more visibility into what you are looking for; the file will have the name of your volume.
16:57 toordog-work some files will, for example, list in order all the translators used by that volume
16:58 toordog-work if you have a replica, it will show the replica translator and the others related to your volume
16:58 toordog-work it's more backend information, but you will see everything at a very granular level
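As a concrete sketch of what toordog-work means, assuming a volume named myvol (the exact volfile filename can vary a little between versions):

    gluster volume info myvol        # only lists options changed from their defaults
    gluster volume set help          # lists the settable options with descriptions
    # the generated client volfile shows the full translator graph (write-behind,
    # read-ahead, replicate, etc.) that the clients actually load:
    cat /var/lib/glusterd/vols/myvol/myvol-fuse.vol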
16:58 msolo yes, i see there are a number of layers
16:58 msolo what is the "trusted" volume?
16:59 msolo it seems to loosely mirror my actual volume
16:59 toordog-work msolo, you might want to check my blog to see what i've figured out so far about gluster.  I have many links that explain things that are not easy to find on google.  olivier.marinaolivier.com
16:59 toordog-work I'm still learning myself and have been working with gluster for only 10-15 days.
17:00 toordog-work so i'm actively updating my blog with what i learn *as a reminder to myself*
17:00 msolo very generous of you. i've been hoarding my knowledge in a google doc :)
17:00 fattaneh1 joined #gluster
17:00 msolo but i concur, the existing documentation is very lacking
17:00 toordog-work msolo if you want to contribute, i don't mind giving you write access
17:01 msolo no no, i don't think i have much to add at this point :)
17:01 toordog-work my blog is more of a hands-on cheat sheet or reminder.  I didn't do this in past years, and I realized that I keep forgetting what i did because i haven't touched a technology for a few years.
17:02 toordog-work the link at the end will be of interest for you
17:03 msolo sadly, that markdown page does not list the option i'm looking for
17:03 toordog-work what option are you looking for ?
17:03 msolo perfomance/write-behind
17:04 skippy is that an option, or a translator?
17:04 toordog-work http://gluster.org/community/documentation/index.php/Translators/performance/writebehind
17:04 skippy https://github.com/gluster/glusterfs/tree/master/xlators/performance/write-behind
17:04 glusterbot Title: glusterfs/xlators/performance/write-behind at master · gluster/glusterfs · GitHub (at github.com)
17:05 skippy that's not an option, msolo. it's a translator.
17:05 msolo ok, interesting, so adding an option will disable this apparently
17:06 msolo gluster volume set <volname> performance.write-behind off
17:06 msolo i was hoping for corresponding "get" operation to confirm that it is enabled
17:06 msolo but in the volume file, i can see there is a write-behind volume
17:07 msolo that is layered in
17:07 msolo well, let's see what happens
17:08 skippy you might need to restart the volume.
17:08 JoeJulian shouldn't.
17:08 skippy ok then!
17:08 msolo yeah, it restructured the volume file a bit
17:08 msolo took a while to run too
17:08 msolo for an "option set"
17:09 msolo thanks for your advice, let's see if this changes my problems :-)
17:09 JoeJulian Updates all the servers which then trigger all the clients to reload the graph.
17:09 PeterA joined #gluster
17:09 JoeJulian Lots of locks and verifications.
17:10 msolo welcome to distributed programming :)
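msolo's wish for a corresponding "get" is reasonable; absent that, a hedged way to double-check the change he made, again assuming a volume named myvol, is to look at volume info and the regenerated client volfile:

    gluster volume set myvol performance.write-behind off
    gluster volume info myvol | grep -i write-behind    # non-default settings show up here
    # the regenerated client volfile should no longer contain the
    # write-behind translator section:
    grep -A3 write-behind /var/lib/glusterd/vols/myvol/myvol-fuse.vol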
17:10 toordog-work JoeJulian is glusterfs supposed to manage concurrent access to a file with a lock system?
17:10 JoeJulian @lucky posix locks
17:10 glusterbot JoeJulian: http://en.wikipedia.org/wiki/File_locking
17:11 toordog-work because following our discussion yesterday, i ran some tests with python writing to a file, and overall the last process overwrites the file, and i never get a lock error even if the file is locked for writing by another process.
17:11 toordog-work i think my test is flawed
17:11 JoeJulian Could be.
17:12 toordog-work because even locally on the same server i get the same behavior
17:12 toordog-work :s
17:12 toordog-work same on ext4
17:12 JoeJulian If you check the source, there's probably a test for that that gets run in jenkins.
17:12 toordog-work ok
17:13 toordog-work i will try to find it
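One likely reason the python test never saw a lock error is that POSIX/flock locks are advisory: a plain open()+write() never blocks unless both writers explicitly request the lock. A bash sketch of a retry test using flock(1) follows; the mount path is an assumption, and whether the lock is enforced across clients depends on the fuse client supporting lock fops:

    #!/bin/bash
    # run this on both clients against the same file on the gluster mount
    FILE=/mnt/glustervol/locktest
    touch "$FILE"
    for attempt in $(seq 1 10); do
        if flock -n "$FILE" -c \
             "dd if=/dev/urandom of=$FILE bs=1M count=10 conv=notrunc"; then
            echo "attempt $attempt: got the lock and wrote 10M"
            break
        else
            echo "attempt $attempt: file is locked by the other writer, retrying"
            sleep 1
        fi
    done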
17:13 LebedevRI joined #gluster
17:16 johnmark_ joined #gluster
17:16 oxidane_ joined #gluster
17:19 ninkotech joined #gluster
17:19 dblack_ joined #gluster
17:20 Philambdo joined #gluster
17:21 jiqiren joined #gluster
17:27 xleo joined #gluster
17:31 RobertLaptop_ joined #gluster
17:32 zerick joined #gluster
17:36 vu joined #gluster
17:49 fattaneh1 left #gluster
17:54 HoloIRCUser2 joined #gluster
17:56 jbrooks joined #gluster
18:04 fattaneh1 joined #gluster
18:10 sputnik13 joined #gluster
18:12 gomikemike so, how do multiple bricks (for same volume) normally get mounted on the gluster server?
18:12 gomikemike i need to create 1 volume replicated in 2 bricks
18:13 JoeJulian You wouldn't typically replicate to two bricks on the same server.
18:14 gomikemike
18:14 Jamoflaw joined #gluster
18:15 Jamoflaw where should i log the bug I mentioned earlier?
18:15 Jamoflaw going to do a test on another volume to get steps to reproduce and just verify im not seeing a one-off
18:21 gomikemike JoeJulian: back, network dropped
18:22 JoeJulian Jamoflaw: here is where you can file a bug report
18:22 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:22 gomikemike so, i need a replicated volume, my gluster cluster is 2 nodes
18:22 skippy gomikemike: you'd put one brick on each server.
18:22 gomikemike so, i created 1 brick per server and created the volume pointing at each brick (1 per host)
18:23 skippy did you explicitly declare "replica 2" ?
18:23 gomikemike no, thats why i had to recreate all this....
18:23 skippy gotcha.
18:23 gomikemike i just want to make sure this is correct before i do it
18:24 skippy gluster volume create <vol> replica 2 <brick1> <brick2>
18:24 fattaneh1 left #gluster
18:24 gomikemike so, it was really not replicated; i found out because i tried to trigger a self heal and it said the volume was not replicated
18:24 gomikemike ok
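Spelled out with hypothetical hostnames and brick paths, the command skippy gives above would look something like:

    gluster volume create myvol replica 2 \
        server1:/data/brick1/myvol server2:/data/brick1/myvol
    gluster volume start myvol
    gluster volume info myvol       # "Type: Replicate" confirms the replica count took effect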
18:25 gomikemike so, (this is all in AWS) i add an EBS volume, mount it, then i make it a LVM
18:25 Jamoflaw thx will do that now
18:26 gomikemike my train of thought was that if we need to grow the "brick" we can add EBS, then add it to the LVM and then we can grow the LVM mount point
18:26 gomikemike so the brick would grow that way, is that "sane" ?
18:26 skippy sounds sane, though I don't know anything about EBS specifically.
18:26 skippy if it's just Amazon's LVM, that should be fine.
18:27 gomikemike its just block storage (networked)
18:27 gomikemike adding an ebs volume is the same as adding a harddrive to a server
18:27 gomikemike you still need to partition, format and mount
18:28 skippy sure, make that a pv, add that pv to your Linux volume group, and then lvextend the partition serving your brick
18:28 skippy and resize the filesystem
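As a sketch of that grow path, with the device name (/dev/xvdf), the VG/LV names and an XFS brick filesystem all being assumptions:

    pvcreate /dev/xvdf                              # the newly attached EBS volume
    vgextend vg_bricks /dev/xvdf
    lvextend -l +100%FREE /dev/vg_bricks/lv_brick1
    xfs_growfs /data/brick1                         # use resize2fs for ext4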
18:28 gomikemike so gluster will not go crazy cause the brick is growing "under the rug"
18:28 skippy nope
18:28 gomikemike nice
18:28 gomikemike now, 2nd question (please)
18:29 gomikemike i need to have this 2 node cluster both on East and West
18:29 theron_ joined #gluster
18:29 gomikemike I can recreate on both sides (same process)
18:29 gomikemike but i need them to stay in sync
18:29 gomikemike thats when geo replication comes into play, correct?
18:30 skippy i think so; but I haven't dug into geo replication yet.
18:30 gomikemike ahh
18:30 gomikemike well maybe joe can chime in when he gets a chance
18:31 gomikemike the info that i've read points the replication to a directory on another host
18:31 gomikemike im not sure if i can point it at another brick
18:31 gomikemike and if so, do i need to setup replication 2 times?
18:32 nbvfuel joined #gluster
18:32 gomikemike host1B1 => host3B2
18:32 gomikemike host2B1 => host4B2
18:32 theron joined #gluster
18:33 gomikemike since the volumes would be created:
18:33 JoeJulian georeplication is not bi-directional (yet)
18:33 gomikemike ok
18:33 gomikemike so, then i should replicate both ways
18:33 gomikemike need a white board :)
18:35 gomikemike if volume is replicated within the region, I can say region1 B1 replicates to Region2 B2
18:36 gomikemike and region2 B1 replicates to Region1 B2
18:36 JoeJulian yes
18:37 gomikemike and the region replication will take care of the individual bricks getting updated from one to the other
18:37 gomikemike coolio
18:40 gomikemike JoeJulian: so, i should setup the replicated volume on each (AWS) region
18:40 gomikemike THEN setup geo replication?
18:40 JoeJulian wait... are b1 and b2 bricks?
18:40 gomikemike yes
18:40 JoeJulian no.
18:40 JoeJulian Bricks are for gluster's use only. You don't use them directly.
18:41 rafi1 joined #gluster
18:41 JoeJulian you could use geo-rep to replicate to a volume, but as you're describing you would need two volumes at each end.
18:43 gomikemike so, i need to provide a shared file system that is replicated across regions
18:44 nbvfuel My gluster bricks / volume are behind a firewall.  What ports (tcp/udp?) need to be open to connect via a fuse client?  The "gluster volume status" command shows that the bricks are available at port 49152
18:44 gomikemike w/ 2 volumes on each region, do i create 1 volume just to sync from the other region
18:44 JoeJulian @ports
18:44 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
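For an iptables-based firewall, the list glusterbot gives translates to roughly the following; the upper bound on the brick port range is an assumption (one port per brick, 49152 and up):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # bricks, one port each
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS + NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS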
18:44 gomikemike then i sync volumes within region?
18:45 JoeJulian gomikemike: Are you doing a lot of writing from each region?
18:45 gomikemike no
18:45 JoeJulian Is it important that your writes finish very quickly?
18:45 gomikemike its for a webserver docroot
18:46 gomikemike but its going to be fairly static
18:46 gomikemike BUT we dont want to have "jobs" to sync stuff across
18:46 gomikemike the west region is basically DR
18:46 JoeJulian I would mount one region as a "write" volume, and geosync from that region to the other for its reads.
18:47 gomikemike i've made my customer aware that the replication happens over ssh, so that by definition is "slow"
18:47 JoeJulian So all writes go to east, geo-sync to west.
18:48 gomikemike but if we lose east, then we are "static" until we get east back
18:48 toordog-work nbvfuel it is well described on gluster.org
18:48 nbvfuel JoeJulian: I've opened up port 49152 so that it should be accessible to the client...  I'll keep digging , thanks
18:49 JoeJulian nbvfuel: and 24007?
18:50 toordog-work JoeJulian i thought that since 3.5 geo rep was bidirectional, or at least that state was distributed among nodes.  So if node1 fails, after 60 seconds another node will take over and continue the replication.
18:50 JoeJulian gomikemike: Yes, in that configuration if east gets loose, then you're essentially read-only, unless you have some method to switch roles.
18:51 JoeJulian toordog-work: That's still uni-directional. From master(s) to slave.
18:52 nbvfuel JoeJulian: Sorry for wasting your time.  It wasn't clear that the management port needed to be open as well.  Thank you for clarifying, though!
18:52 toordog-work so if the slave does a write, the master will not sync^
18:52 toordog-work ?
18:52 JoeJulian correc
18:52 JoeJulian t
18:52 toordog-work aww ok
18:52 JoeJulian nbvfuel: no waste if you learned something. :D
18:55 jbrooks joined #gluster
18:55 gomikemike https://medium.com/@msvbhat/distributed-geo-replication-in-glusterfs-ec95f4393c50
18:55 glusterbot Title: Distributed Geo-Replication in glusterfs — Medium (at medium.com)
18:55 gomikemike found that article just now
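The article describes the 3.5-style distributed geo-replication setup, which boils down to something like the following; volume and host names are hypothetical, the slave volume must already exist, and passwordless ssh to the slave host is assumed:

    # run on a node of the master (east) cluster:
    gluster system:: execute gsec_create
    gluster volume geo-replication eastvol westnode::westvol create push-pem
    gluster volume geo-replication eastvol westnode::westvol start
    gluster volume geo-replication eastvol westnode::westvol status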
19:01 theron joined #gluster
19:02 theron joined #gluster
19:10 chirino joined #gluster
19:10 ThatGraemeGuy joined #gluster
19:26 sijis joined #gluster
19:26 fattaneh joined #gluster
19:27 sijis got a couple of newbie questions. The volume sizes are about 10G. 'du -sh' takes a long time against a volume. I've also noticed that tar -czvf takes a long time too.
19:27 sijis is there a better way, to 1) see directory sizes 2) tar up files from the volume
19:27 sijis (this is all done from the client pov)
19:34 theron joined #gluster
19:36 JoeJulian Would be interesting to see a poc showing whether or not those operations are faster over libgfapi.
19:45 dtrainor joined #gluster
19:47 semiosis JoeJulian: good idea.  should be easy with glusterfs-java-filesystem
19:47 sijis JoeJulian: was that to me or just a general statement
19:48 semiosis JoeJulian: i'd be surprised if that made any difference though
19:53 JoeJulian semiosis: So would I...
19:58 theron_ joined #gluster
20:07 sijis is my issue common?
20:10 semiosis sijis: somewhat
20:11 sijis is there some workaround for it?
20:11 sijis or i'd just have to deal with it
20:11 sijis i am going to try and see if adding noatime will help
20:11 semiosis that should help.  noatime/nodiratime on the brick mounts (not the client mount)
20:12 sijis oh. not on the client?
20:12 semiosis istr something about a new xlator that should help in this kind of situation
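A hedged example of what semiosis suggests, as an /etc/fstab entry for the brick filesystem (device, mount point and XFS are assumptions):

    # brick filesystem, not the glusterfs client mount
    /dev/vg_bricks/lv_brick1  /data/brick1  xfs  noatime,nodiratime,inode64  0 0

    # apply without a reboot:
    mount -o remount,noatime,nodiratime /data/brick1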
20:12 _Bryan_ joined #gluster
20:15 MattAtL Hi, I have 2 node replicas, with client/server on each machine.  Lots of small files, read performance lousy, I know that's not news.  My workload is almost entirely 'write once' - files once there don't ever change, they just get read.  On the gluster docs it says I can directly access the data on the underlying storage volume, if you are just doing read()/access()/stat() like operations. (http://gluster.org/community/documentation/index.php/GlusterFS_Technical_FAQ)
20:15 MattAtL and it says "you should be fine", but not tested and not for production.
20:16 MattAtL Question is, does anyone know how reliable 'should' is?
20:17 MattAtL If I ensure that my application writes to the client, but reads from the underlying storage, how likely am I to get into trouble?
20:17 semiosis MattAtL: you should be fine
20:17 MattAtL Thanks
20:17 semiosis hehe
20:18 semiosis what can I say?
20:18 semiosis not too likely?
20:18 MattAtL Do you have any idea whether anyone ever does this, or am I going to be flying by the seat of my pants?
20:19 semiosis sure people have done that
20:19 semiosis they were flying by the seat of their pants too though
20:20 semiosis set noatime/nodiratime on your brick mounts to be sure even reads wont modify them.
20:20 semiosis maybe even use a read only bind mount somewhere else to be extra sure
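A sketch of that read-only bind mount, with paths as assumptions:

    mkdir -p /export/brick1-ro
    mount --bind /data/brick1 /export/brick1-ro
    mount -o remount,ro,bind /export/brick1-ro    # a plain --bind can't be made ro in one step
    # point the application's reads at /export/brick1-ro and keep all writes
    # going through the glusterfs client mount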
20:21 MattAtL ok... and is NFS reckoned to be better than fuse for the client mount? For many small files? I can't really tell from the internets
20:21 sijis semiosis: not setting these options.. will that trigger a file sync with the cluster, if just looking at the file?
20:21 semiosis dont know
20:23 MattAtL semiosis - thanks for the pointers
20:23 jbrooks joined #gluster
20:24 semiosis yw
20:25 semiosis regarding nfs, depends on you.  with nfs you lose automatic HA, and while the attribute caching can help performance for some things, you lose some consistency.
20:26 MattAtL hmm probably defeats most of my object of using gluster at all then!
20:27 sijis hold up, with an nfs mount, you have to manually fail over to the secondary node if it fails? i thought that was still transparent, like when using the fuse client
20:30 glusterbot New news from newglusterbugs: [Bug 1141371] Data Classification Patches <https://bugzilla.redhat.com/show_bug.cgi?id=1141371>
20:30 semiosis some people set up a virtual IP to get HA NFS
20:37 sijis i would assume that ip would be on an lb?
20:37 sijis anyhow, i was just curious.
20:37 semiosis sijis: idk really
20:38 semiosis usually a virtual ip is on a host
20:38 semiosis one host or another
20:38 semiosis but i'm sure there are ways to do it through an LB as well
20:39 sijis fair enough
20:42 _dist joined #gluster
20:45 _dist By monitoring my 3-way replication volume I've noticed that the brick being written to sends the data out twice (which does make sense), but since it will be the same data (all things healthy considered) isn't there some way to avoid that?
20:47 _dist I suppose it probably wouldn't be faster though, only in situations where the sending brick is reaching its network cap
20:54 side_con1rol joined #gluster
20:54 zerick joined #gluster
20:55 DV joined #gluster
20:56 LHinson joined #gluster
20:58 Jamoflaw1 joined #gluster
21:00 MattAtL left #gluster
21:09 dtrainor joined #gluster
21:18 jbd123 joined #gluster
21:19 jbd__ joined #gluster
21:20 jbd__ joined #gluster
21:22 Jamoflaw1 I verified the Geo-rep issue on another volume set and got the same results
21:22 Jamoflaw1 have logged a bug under https://bugzilla.redhat.com/show_bug.cgi?id=1141379
21:22 glusterbot Bug 1141379: urgent, unspecified, ---, gluster-bugs, NEW , Geo Replication fails to handle fast mv commands
21:27 clyons joined #gluster
21:27 jbd123 joined #gluster
21:30 glusterbot New news from newglusterbugs: [Bug 1141379] Geo Replication fails to handle fast mv commands <https://bugzilla.redhat.com/show_bug.cgi?id=1141379>
21:33 qdk joined #gluster
21:38 anscomp joined #gluster
21:40 DJClean joined #gluster
21:45 skippy_ joined #gluster
21:45 Zordrak joined #gluster
21:45 Zordrak joined #gluster
21:45 JoeJulian_ joined #gluster
21:45 saltsa joined #gluster
21:45 ThatGraemeGuy joined #gluster
21:45 sijis joined #gluster
21:45 sijis joined #gluster
21:45 Spiculum joined #gluster
21:45 rgustafs_ joined #gluster
21:46 _VerboEse joined #gluster
21:46 ron-slc joined #gluster
21:46 huleboer joined #gluster
21:46 monotek joined #gluster
21:46 ackjewt joined #gluster
21:46 monotek left #gluster
21:46 green_man joined #gluster
21:47 ws2k3 joined #gluster
21:48 stickyboy joined #gluster
22:10 torbjorn__ joined #gluster
22:11 Dave2_ joined #gluster
22:12 sickness_ joined #gluster
22:13 guntha_ joined #gluster
22:13 samsaffron__ joined #gluster
22:13 JayJ1 joined #gluster
22:14 d4nku_ joined #gluster
22:14 stickyboy_ joined #gluster
22:14 cfeller_ joined #gluster
22:15 JayJ1 how would you set o_direct for all bricks of a vol via the CLI?
22:15 ninkotech_ joined #gluster
22:15 atrius_ joined #gluster
22:15 UnwashedMeme1 joined #gluster
22:16 _dist JayJ1: Pretty sure there's a volume set options, type "gluster volume set help | grep direct"
22:16 _dist might need cpaital D
22:16 _dist but sorry, I gotta head
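Following _dist's pointer, the knobs involved are most likely the ones below; the exact option names available depend on the glusterfs version, so treat this as a hedged sketch with a hypothetical volume name:

    gluster volume set help | grep -i direct
    # the options commonly toggled for O_DIRECT-style behaviour:
    gluster volume set myvol network.remote-dio enable
    gluster volume set myvol performance.strict-o-direct on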
22:17 sac`away` joined #gluster
22:17 siXy_ joined #gluster
22:18 johnmark_ joined #gluster
22:18 apscomp joined #gluster
22:19 JayJ1 so
22:19 toordog-work joined #gluster
22:19 JayJ1 there is no option for posix in the cli then
22:20 ws2k3 joined #gluster
22:21 SmithyUK joined #gluster
22:22 XpineX_ joined #gluster
22:23 JonathanS joined #gluster
22:23 ninkotech__ joined #gluster
22:24 glusterbot` joined #gluster
22:25 sauce_ joined #gluster
22:26 Jamoflaw2 joined #gluster
22:26 theron joined #gluster
22:28 tru_tru_ joined #gluster
22:29 nated_ joined #gluster
22:29 nated_ joined #gluster
22:29 Diddi joined #gluster
22:30 glusterbot joined #gluster
22:30 Zordrak joined #gluster
22:30 Zordrak joined #gluster
22:31 VeggieMeat_ joined #gluster
22:32 necrogami joined #gluster
22:32 tom[] joined #gluster
22:33 Dave2 joined #gluster
22:33 saltsa_ joined #gluster
22:33 johnmark joined #gluster
22:34 JoeJulian joined #gluster
22:35 d-fence joined #gluster
22:37 RobertLaptop_ joined #gluster
22:37 DJCl34n joined #gluster
22:37 stickyboy joined #gluster
22:37 HuleB joined #gluster
23:05 ilbot3 joined #gluster
23:13 ninkotech__ joined #gluster
23:37 harish joined #gluster
23:38 DV joined #gluster
23:38 samsaffron__ joined #gluster
23:38 Rydekull joined #gluster
23:38 sage joined #gluster
23:38 AaronGr joined #gluster
23:38 morse_ joined #gluster
23:38 tg2 joined #gluster
23:38 ghenry joined #gluster
23:38 hflai joined #gluster
23:38 pdrakewe_ joined #gluster
23:38 diegows joined #gluster
23:38 RioS2 joined #gluster
23:38 necrogami_ joined #gluster
23:38 JonathanD joined #gluster
23:38 bet_ joined #gluster
23:38 fubada joined #gluster
23:38 nated joined #gluster
23:38 txbowhunter joined #gluster
23:38 cfeller joined #gluster
23:38 toordog_wrk joined #gluster
23:38 77CAAFWDR joined #gluster
23:38 HuleB joined #gluster
23:38 stickyboy joined #gluster
23:38 DJClean joined #gluster
23:38 RobertLaptop joined #gluster
23:38 d-fence joined #gluster
23:38 JoeJulian joined #gluster
23:38 johnmark joined #gluster
23:38 Dave2 joined #gluster
23:38 tom[] joined #gluster
23:38 necrogami joined #gluster
23:38 VeggieMeat_ joined #gluster
23:38 glusterbot joined #gluster
23:38 Diddi joined #gluster
23:38 tru_tru_ joined #gluster
23:38 Jamoflaw2 joined #gluster
23:38 sauce_ joined #gluster
23:38 1JTAAFU2G joined #gluster
23:38 XpineX_ joined #gluster
23:38 SmithyUK joined #gluster
23:38 ws2k3 joined #gluster
23:38 apscomp joined #gluster
23:38 sac`away` joined #gluster
23:38 UnwashedMeme1 joined #gluster
23:38 atrius joined #gluster
23:38 ninkotech_ joined #gluster
23:38 d4nku_ joined #gluster
23:38 JayJ1 joined #gluster
23:38 sickness_ joined #gluster
23:38 torbjorn__ joined #gluster
23:38 ackjewt joined #gluster
23:38 ron-slc joined #gluster
23:38 VerboEse joined #gluster
23:38 rgustafs_ joined #gluster
23:38 Spiculum joined #gluster
23:38 sijis joined #gluster
23:38 ThatGraemeGuy joined #gluster
23:38 skippy joined #gluster
23:38 qdk joined #gluster
23:38 jbd123 joined #gluster
23:38 clyons joined #gluster
23:38 Jamoflaw1 joined #gluster
23:38 chirino joined #gluster
23:38 sputnik13 joined #gluster
23:38 HoloIRCUser2 joined #gluster
23:38 xleo joined #gluster
23:38 oxidane_ joined #gluster
23:38 mkzero joined #gluster
23:38 monotek1 joined #gluster
23:38 yosafbridge joined #gluster
23:38 Lee- joined #gluster
23:38 and` joined #gluster
23:38 primeministerp joined #gluster
23:38 portante joined #gluster
23:38 charta joined #gluster
23:38 Gugge joined #gluster
23:38 khanku joined #gluster
23:38 schrodinger joined #gluster
23:38 swc|666 joined #gluster
23:38 jvandewege joined #gluster
23:38 m0zes joined #gluster
23:38 msp3k joined #gluster
23:38 cmtime joined #gluster
23:38 edong23_ joined #gluster
23:38 lkoranda joined #gluster
23:38 juhaj joined #gluster
23:38 JamesG joined #gluster
23:38 delhage joined #gluster
23:38 Debolaz joined #gluster
23:38 cultavix joined #gluster
23:38 avati joined #gluster
23:38 Gabou joined #gluster
23:38 eshy joined #gluster
23:38 siel joined #gluster
23:38 Ramereth joined #gluster
23:38 georgeh_ joined #gluster
23:38 twx joined #gluster
23:38 coredumb joined #gluster
23:38 semiosis joined #gluster
23:38 bjornar joined #gluster
23:38 crashmag joined #gluster
23:38 jezier joined #gluster
23:38 weykent joined #gluster
23:38 kalzz joined #gluster
23:38 abyss^^_ joined #gluster
23:38 goo_ joined #gluster
23:38 marcoceppi joined #gluster
23:38 masterzen joined #gluster
23:38 lezo__ joined #gluster
23:38 mibby joined #gluster
23:38 Andreas-IPO joined #gluster
23:38 fyxim_ joined #gluster
23:38 SteveCooling joined #gluster
23:38 eclectic joined #gluster
23:38 ccha2 joined #gluster
23:38 _NiC joined #gluster
23:38 pasqd joined #gluster
23:38 l0uis joined #gluster
23:38 sadbox joined #gluster
23:38 nixpanic_ joined #gluster
23:38 gehaxelt joined #gluster
23:38 ultrabizweb joined #gluster
23:38 [o__o] joined #gluster
23:38 cicero joined #gluster
23:38 churnd joined #gluster
23:38 johnnytran joined #gluster
23:38 JordanHackworth joined #gluster
23:38 radez_g0n3 joined #gluster
23:38 msvbhat joined #gluster
23:38 the-me joined #gluster
23:38 kke joined #gluster
23:38 stigchristian joined #gluster
23:38 Kins joined #gluster
23:38 tobias- joined #gluster
23:38 lava joined #gluster
23:38 foobar joined #gluster
23:38 prasanth|brb joined #gluster
23:38 atrius` joined #gluster
23:38 Nowaker joined #gluster
23:38 gomikemike joined #gluster
23:38 Guest44047 joined #gluster
23:38 T0aD joined #gluster
23:38 mjrosenb joined #gluster
23:38 doekia joined #gluster
23:38 codex joined #gluster
23:38 mikedep333 joined #gluster
23:38 Chr1s1an_ joined #gluster
23:38 Moe-sama joined #gluster
23:38 eightyeight joined #gluster
23:38 xavih joined #gluster
23:38 neoice joined #gluster
23:38 johnmwilliams__ joined #gluster
23:38 JustinClift joined #gluster
23:38 purpleidea joined #gluster
23:38 partner joined #gluster
23:38 drajen joined #gluster
23:38 lanning joined #gluster
23:38 vincentvdk joined #gluster
23:38 tomased joined #gluster
23:38 Slasheri joined #gluster
23:38 klaas joined #gluster
23:38 fim joined #gluster
23:38 Peanut joined #gluster
23:38 samppah joined #gluster
23:38 sman joined #gluster
23:38 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
23:39 torbjorn__ joined #gluster
23:39 zerick joined #gluster
23:39 rturk|afk joined #gluster
23:39 coredumb joined #gluster
23:39 nage joined #gluster
23:39 verdurin joined #gluster
23:39 ackjewt joined #gluster
23:39 torbjorn__ joined #gluster
23:39 green_man joined #gluster
23:39 crashmag joined #gluster
23:39 baoboa joined #gluster
23:39 NuxRo joined #gluster
23:39 jbrooks joined #gluster
23:39 hybrid512 joined #gluster
23:39 bfoster joined #gluster
23:39 bjornar joined #gluster
23:39 saltsa joined #gluster
23:39 kkeithley joined #gluster
23:39 elico joined #gluster
23:39 SpComb joined #gluster
23:39 Jamoflaw joined #gluster
23:39 natgeorg joined #gluster
23:39 baoboa joined #gluster
23:39 SpComb joined #gluster
23:39 morse joined #gluster
23:39 d-fence joined #gluster
23:39 JonathanS joined #gluster
23:39 siXy joined #gluster
23:39 txbowhunter joined #gluster
23:39 tom[] joined #gluster
23:39 crashmag joined #gluster
23:39 morse joined #gluster
23:39 DV joined #gluster
23:39 coredumb joined #gluster
23:39 Norky joined #gluster
23:39 al joined #gluster
23:39 SpComb joined #gluster
23:39 saltsa joined #gluster
23:41 ilbot3 joined #gluster
23:42 Alex__ joined #gluster
23:42 saltsa joined #gluster
23:42 harish joined #gluster
23:42 ninkotech__ joined #gluster
23:42 Zordrak joined #gluster
23:42 guntha_ joined #gluster
23:42 foster joined #gluster
23:42 SpComb joined #gluster
23:42 al joined #gluster
23:42 Norky joined #gluster
23:42 coredumb joined #gluster
23:42 DV joined #gluster
23:42 morse joined #gluster
23:42 crashmag joined #gluster
23:42 tom[] joined #gluster
23:42 txbowhunter joined #gluster
23:42 siXy joined #gluster
23:42 JonathanS joined #gluster
23:42 d-fence joined #gluster
23:42 dastar joined #gluster
23:42 baoboa joined #gluster
23:42 natgeorg joined #gluster
23:42 elico joined #gluster
23:42 kkeithley joined #gluster
23:42 bfoster joined #gluster
23:42 hybrid512 joined #gluster
23:42 jbrooks joined #gluster
23:42 capri joined #gluster
23:42 cyberbootje joined #gluster
23:42 6JTAAFSEP joined #gluster
23:42 uebera|| joined #gluster
23:42 Humble joined #gluster
23:42 green_man joined #gluster
23:42 torbjorn__ joined #gluster
23:42 ackjewt joined #gluster
23:42 verdurin joined #gluster
23:42 rturk|afk joined #gluster
23:42 zerick joined #gluster
23:42 samsaffron__ joined #gluster
23:42 Rydekull joined #gluster
23:42 sage joined #gluster
23:42 AaronGr joined #gluster
23:42 tg2 joined #gluster
23:42 ghenry joined #gluster
23:42 hflai joined #gluster
23:42 pdrakewe_ joined #gluster
23:42 diegows joined #gluster
23:42 RioS2 joined #gluster
23:42 necrogami_ joined #gluster
23:42 bet_ joined #gluster
23:42 fubada joined #gluster
23:42 nated joined #gluster
23:42 cfeller joined #gluster
23:42 toordog_wrk joined #gluster
23:42 HuleB joined #gluster
23:42 stickyboy joined #gluster
23:42 DJClean joined #gluster
23:42 RobertLaptop joined #gluster
23:42 JoeJulian joined #gluster
23:42 johnmark joined #gluster
23:42 Dave2 joined #gluster
23:42 VeggieMeat joined #gluster
23:42 glusterbot joined #gluster
23:42 Diddi joined #gluster
23:42 tru_tru_ joined #gluster
23:42 sauce_ joined #gluster
23:42 XpineX_ joined #gluster
23:42 SmithyUK joined #gluster
23:42 ws2k3 joined #gluster
23:42 apscomp joined #gluster
23:42 sac`away` joined #gluster
23:42 UnwashedMeme1 joined #gluster
23:42 atrius joined #gluster
23:42 ninkotech_ joined #gluster
23:42 d4nku_ joined #gluster
23:42 JayJ1 joined #gluster
23:42 sickness_ joined #gluster
23:42 ron-slc joined #gluster
23:42 VerboEse joined #gluster
23:42 rgustafs_ joined #gluster
23:42 sijis joined #gluster
23:42 ThatGraemeGuy joined #gluster
23:42 skippy joined #gluster
23:42 qdk joined #gluster
23:42 chirino joined #gluster
23:42 sputnik13 joined #gluster
23:42 HoloIRCUser2 joined #gluster
23:42 xleo joined #gluster
23:42 oxidane_ joined #gluster
23:42 mkzero joined #gluster
23:42 monotek1 joined #gluster
23:42 yosafbridge joined #gluster
23:42 Lee- joined #gluster
23:42 and` joined #gluster
23:42 primeministerp joined #gluster
23:42 portante joined #gluster
23:42 charta joined #gluster
23:42 Gugge joined #gluster
23:42 khanku joined #gluster
23:42 schrodinger joined #gluster
23:42 swc|666 joined #gluster
23:42 jvandewege joined #gluster
23:42 m0zes joined #gluster
23:42 msp3k joined #gluster
23:42 cmtime joined #gluster
23:42 edong23_ joined #gluster
23:42 lkoranda joined #gluster
23:42 juhaj joined #gluster
23:42 JamesG joined #gluster
23:42 delhage joined #gluster
23:42 Debolaz joined #gluster
23:42 cultavix joined #gluster
23:42 avati joined #gluster
23:42 Gabou joined #gluster
23:42 eshy joined #gluster
23:42 siel joined #gluster
23:42 Ramereth joined #gluster
23:42 georgeh_ joined #gluster
23:42 twx joined #gluster
23:42 semiosis joined #gluster
23:42 jezier joined #gluster
23:42 weykent joined #gluster
23:42 kalzz joined #gluster
23:42 abyss^^_ joined #gluster
23:42 goo_ joined #gluster
23:42 marcoceppi joined #gluster
23:42 masterzen joined #gluster
23:42 lezo__ joined #gluster
23:42 mibby joined #gluster
23:42 Andreas-IPO joined #gluster
23:42 fyxim_ joined #gluster
23:42 SteveCooling joined #gluster
23:42 eclectic joined #gluster
23:42 ccha2 joined #gluster
23:42 _NiC joined #gluster
23:42 pasqd joined #gluster
23:42 l0uis joined #gluster
23:42 sadbox joined #gluster
23:42 nixpanic_ joined #gluster
23:42 gehaxelt joined #gluster
23:42 ultrabizweb joined #gluster
23:42 [o__o] joined #gluster
23:42 cicero joined #gluster
23:42 churnd joined #gluster
23:42 johnnytran joined #gluster
23:42 JordanHackworth joined #gluster
23:42 radez_g0n3 joined #gluster
23:42 msvbhat joined #gluster
23:42 the-me joined #gluster
23:42 kke joined #gluster
23:42 stigchristian joined #gluster
23:42 Kins joined #gluster
23:42 tobias- joined #gluster
23:42 lava joined #gluster
23:42 foobar joined #gluster
23:42 prasanth|brb joined #gluster
23:42 atrius` joined #gluster
23:42 Nowaker joined #gluster
23:42 gomikemike joined #gluster
23:42 Guest44047 joined #gluster
23:42 T0aD joined #gluster
23:42 mjrosenb joined #gluster
23:42 doekia joined #gluster
23:42 codex joined #gluster
23:42 mikedep333 joined #gluster
23:42 Chr1s1an_ joined #gluster
23:42 Moe-sama joined #gluster
23:42 eightyeight joined #gluster
23:42 xavih joined #gluster
23:42 neoice joined #gluster
23:42 johnmwilliams__ joined #gluster
23:42 JustinClift joined #gluster
23:42 purpleidea joined #gluster
23:42 partner joined #gluster
23:42 drajen joined #gluster
23:42 lanning joined #gluster
23:42 vincentvdk joined #gluster
23:42 tomased joined #gluster
23:42 Slasheri joined #gluster
23:42 klaas joined #gluster
23:42 fim joined #gluster
23:42 Peanut joined #gluster
23:42 samppah joined #gluster
23:42 sman joined #gluster
23:42 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
23:44 dblack joined #gluster
23:44 jiqiren_ joined #gluster
23:44 dblack_ joined #gluster
23:44 17SAA8T7X joined #gluster
23:44 17SAA7P9V joined #gluster
23:44 sauce joined #gluster
23:54 tty00 joined #gluster
23:54 NuxRo joined #gluster
23:54 ninjabox joined #gluster
23:54 Spiculum joined #gluster
23:54 ndevos joined #gluster
23:54 lyang0 joined #gluster
23:58 sac`away` joined #gluster
23:59 samsaffron__ joined #gluster
23:59 rturk|afk joined #gluster
