
IRC log for #gluster, 2014-03-14

All times shown according to UTC.

Time Nick Message
00:05 seapasulli left #gluster
00:15 chirino joined #gluster
00:23 zaitcev_ joined #gluster
00:25 Leolo I wonder... has anyone used google-drive-ocamlfuse as a brick :-)
00:26 elyograg I think I want to use my bricks as doorstops.
00:26 elyograg gluster is seriously screwing my life up.
00:26 sputnik13 joined #gluster
00:26 Elico left #gluster
00:27 gdubreui joined #gluster
00:35 chirino joined #gluster
00:36 primusinterpares joined #gluster
00:42 zerick joined #gluster
00:42 cjanbanan joined #gluster
00:44 tokik joined #gluster
00:49 abyss^ joined #gluster
00:50 avati joined #gluster
00:50 philv76 joined #gluster
00:51 glusterbot joined #gluster
00:51 awheeler_ joined #gluster
00:53 partner joined #gluster
00:54 rshade98 joined #gluster
00:55 YazzY joined #gluster
00:55 YazzY joined #gluster
00:55 nullck joined #gluster
01:02 awheeler joined #gluster
01:03 nage joined #gluster
01:03 nage joined #gluster
01:04 gdubreui joined #gluster
01:07 chirino joined #gluster
01:13 bala joined #gluster
01:25 jim80net joined #gluster
01:28 tokik_ joined #gluster
01:29 primusinterpares joined #gluster
01:31 ultrabizweb joined #gluster
01:31 awheeler joined #gluster
01:35 Ark joined #gluster
01:39 ultrabizweb joined #gluster
01:44 awheeler joined #gluster
01:44 tdasilva left #gluster
02:00 cjanbanan joined #gluster
02:08 awheeler_ joined #gluster
02:09 ultrabizweb joined #gluster
02:12 delhage joined #gluster
02:17 gmcwhistler joined #gluster
02:18 satheesh joined #gluster
02:21 bharata-rao joined #gluster
02:22 gmcwhist_ joined #gluster
02:32 kiwnix joined #gluster
02:37 neurodrone joined #gluster
02:45 gmcwhistler joined #gluster
02:45 philv76 joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 kdhananjay joined #gluster
02:53 saurabh joined #gluster
02:59 haomaiwa_ joined #gluster
03:31 philv76 joined #gluster
03:33 gmcwhistler joined #gluster
03:35 itisravi joined #gluster
03:41 RameshN joined #gluster
03:42 harish_ joined #gluster
03:42 shubhendu joined #gluster
03:45 cjanbanan joined #gluster
03:49 lalatenduM joined #gluster
03:56 tokik joined #gluster
03:59 mohankumar joined #gluster
04:05 harish_ joined #gluster
04:18 deepakcs joined #gluster
04:18 badone_ joined #gluster
04:19 ndarshan joined #gluster
04:34 robo joined #gluster
04:35 Jakey joined #gluster
04:35 Jakey hi would like to know if gluster is dead
04:35 Jakey i mean HekaFS
04:35 ppai joined #gluster
04:36 prasanth_ joined #gluster
04:39 ravindran joined #gluster
04:40 chirino joined #gluster
04:42 thotz joined #gluster
04:43 lalatenduM Jakey, "gluster v status"
04:43 thotz hi jonhmark
04:43 lalatenduM or "ps aux | grep gluster"
04:43 thotz hi johnmark
04:45 thotz johnmark : I would like to know about gluster GSOC project "Implement a Cassandra/NoSQL Connector or Translator for GlusterFS"
04:49 Jakey lalatenduM: no about the HekaFS project
04:49 Jakey is it dead
04:49 Jakey is anyone maintaining it
04:49 sks joined #gluster
04:50 lalatenduM Jakey, I have not hread abt from long, not sure
04:50 lalatenduM s/hread/heard/
04:50 glusterbot lalatenduM: Error: I couldn't find a message matching that criteria in my history of 665 messages.
04:51 dusmant joined #gluster
04:53 spandit joined #gluster
04:55 Jakey so anyway
04:55 Jakey is gluster using encryption
04:55 Jakey or do you use encryption with gluster
04:57 hagarth joined #gluster
04:59 ajha joined #gluster
05:01 lalatenduM Jakey, there is encryption code in glusterfs from 3.5 version. which is in beta now
05:01 lalatenduM Jakey, I haven't used it
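
A rough sketch of what enabling the 3.5 at-rest encryption translator might look like on a test volume. The volume name is made up, and the option names (features.encryption, encryption.master-key) are taken from the 3.5 beta feature notes, so treat the whole block as an assumption rather than a tested recipe:

    # assumption: a 256-bit hex master key, readable on every client that mounts the volume
    openssl rand -hex 32 > /etc/glusterfs/master.key
    gluster volume set testvol features.encryption on
    gluster volume set testvol encryption.master-key /etc/glusterfs/master.key
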
05:05 fidevo joined #gluster
05:09 Jakey lalatenduM: so okay
05:09 Jakey its still new
05:09 Jakey but hekafs was out like 3 years ago
05:10 Jakey what took it so long
05:10 Jakey you want me to switch to ceph
05:10 Jakey huh #gluster :)
05:10 Jakey not that ceph has any encryption
05:11 ira joined #gluster
05:12 lalatenduM Jakey, I dont know the reason for the delay :)
05:15 shylesh joined #gluster
05:17 pk1 joined #gluster
05:18 cjanbanan joined #gluster
05:34 bala joined #gluster
05:39 shubhendu joined #gluster
05:42 rastar joined #gluster
05:59 benjamin_____ joined #gluster
06:01 raghu joined #gluster
06:01 fidevo joined #gluster
06:01 nshaikh joined #gluster
06:05 shubhendu joined #gluster
06:09 rahulcs joined #gluster
06:14 psharma joined #gluster
06:17 rastar joined #gluster
06:18 badone_ joined #gluster
06:25 ricky-ti1 joined #gluster
06:25 edward joined #gluster
06:27 rahulcs joined #gluster
06:27 ndarshan joined #gluster
06:32 jim80net joined #gluster
06:40 vimal joined #gluster
06:43 chirino joined #gluster
06:47 ndarshan joined #gluster
06:49 rahulcs joined #gluster
06:53 rahulcs joined #gluster
06:56 pk1 left #gluster
06:57 rahulcs joined #gluster
06:58 ctria joined #gluster
07:08 FarbrorLeon joined #gluster
07:13 chirino joined #gluster
07:19 jtux joined #gluster
07:24 glusterbot New news from newglusterbugs: [Bug 1076348] Multiple bricks on a node with changelog enabled could cause changelog/journal corruption <https://bugzilla.redhat.com/show_bug.cgi?id=1076348>
07:28 rahulcs joined #gluster
07:29 rahulcs joined #gluster
07:35 ekuric joined #gluster
07:39 harish_ joined #gluster
07:45 eseyman joined #gluster
07:47 ngoswami joined #gluster
07:54 Psi-Jack joined #gluster
08:05 badone_ joined #gluster
08:05 andreask joined #gluster
08:14 slayer192 joined #gluster
08:15 rahulcs joined #gluster
08:24 DV__ joined #gluster
08:37 cjanbanan joined #gluster
08:42 rgustafs joined #gluster
08:42 Philambdo joined #gluster
08:43 rahulcs joined #gluster
09:00 rahulcs joined #gluster
09:01 Pavid7 joined #gluster
09:14 tokik joined #gluster
09:18 rahulcs joined #gluster
09:23 sks joined #gluster
09:26 mohankumar joined #gluster
09:30 rahulcs joined #gluster
09:44 chirino joined #gluster
09:47 rahulcs joined #gluster
09:54 raptorman joined #gluster
09:57 calum_ joined #gluster
09:59 yinyin joined #gluster
10:06 rahulcs joined #gluster
10:11 rahulcs joined #gluster
10:18 ade_b joined #gluster
10:25 slayer192 joined #gluster
10:28 rahulcs joined #gluster
10:32 slayer192 joined #gluster
10:32 ade_b hi, just reading the quickstart it recommends 2 disks, but I assume i can just use an LV ?
10:37 Slash joined #gluster
10:37 samppah ade_b: sure!
10:39 ade_b ok cool thanks samppah
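
A minimal sketch of using an LV as a brick, for readers following this thread; the volume group, LV, and mount point names are made up for illustration:

    lvcreate -L 100G -n gluster_brick1 vg0            # carve an LV from an existing VG
    mkfs.xfs -i size=512 /dev/vg0/gluster_brick1      # XFS with larger inodes is the common recommendation
    mkdir -p /export/brick1
    mount /dev/vg0/gluster_brick1 /export/brick1      # add to /etc/fstab so it survives reboots
    # then use server1:/export/brick1 as a brick path in "gluster volume create"
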
10:40 lalatenduM @quickstart
10:40 glusterbot lalatenduM: I do not know about 'quickstart', but I do know about these similar topics: 'quick start'
10:40 lalatenduM @learn quickstart as http://www.gluster.org/community/documentation/index.php/QuickStart
10:40 glusterbot lalatenduM: The operation succeeded.
10:40 lalatenduM @quickstart
10:40 glusterbot lalatenduM: http://www.gluster.org/community/documentation/index.php/QuickStart
10:45 ppai joined #gluster
10:49 Pavid7 joined #gluster
10:54 dusmant joined #gluster
10:55 kdhananjay joined #gluster
10:55 rahulcs joined #gluster
10:58 ricky-ti1 joined #gluster
10:59 ndarshan joined #gluster
11:16 tokik joined #gluster
11:31 lpabon joined #gluster
11:35 jdarcy joined #gluster
11:36 ngoswami joined #gluster
11:39 tokik joined #gluster
11:40 dusmant joined #gluster
11:40 ndarshan joined #gluster
11:41 jdarcy joined #gluster
11:42 ppai joined #gluster
11:48 chirino joined #gluster
11:51 rahulcs joined #gluster
11:52 rahulcs joined #gluster
11:54 rahulcs joined #gluster
12:00 bennyturns joined #gluster
12:01 nshaikh joined #gluster
12:04 kam270 joined #gluster
12:05 prasanth_ joined #gluster
12:10 tdasilva joined #gluster
12:10 kam270 joined #gluster
12:11 qdk joined #gluster
12:15 lalatenduM @ports
12:15 glusterbot lalatenduM: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
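
As a hedged illustration of the port list above, a 3.4-era iptables setup on a server might look like this (the brick range is trimmed to a handful of ports; widen it to match the number of bricks):

    iptables -A INPUT -p tcp -m multiport --dports 111,2049,24007,24008 -j ACCEPT   # rpcbind, NFS, glusterd, rdma
    iptables -A INPUT -p udp --dport 111 -j ACCEPT                                  # rpcbind also listens on UDP
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT                          # bricks on 3.4 (24009 and up on <3.4)
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT                          # gluster NFS and NLM
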
12:17 rahulcs joined #gluster
12:18 kam270 joined #gluster
12:19 rahulcs joined #gluster
12:22 gmcwhistler joined #gluster
12:29 rfortier joined #gluster
12:31 Ark joined #gluster
12:34 pk1 joined #gluster
12:34 ppai joined #gluster
12:36 jag3773 joined #gluster
12:40 lalatenduM johnmark, ping
12:42 kam270 joined #gluster
12:43 hagarth joined #gluster
12:46 ccha (Deleted volumes do not reset this counter.) <-- how can we reset the counter when we migrate 3.3 to 3.4 ?
12:47 ccha so you can have 3.4 with port 24009+ and 3.3 with 49152+ ?
12:47 ccha I have an node with 3.3.2 with port 49152
12:48 ccha 49153
12:49 chirino joined #gluster
12:52 GabrieleV joined #gluster
13:06 vimal joined #gluster
13:07 rfortier joined #gluster
13:13 benjamin_____ joined #gluster
13:22 rwheeler joined #gluster
13:23 Ark joined #gluster
13:29 jtux joined #gluster
13:33 nightwalk joined #gluster
13:34 hagarth joined #gluster
13:38 bennyturns joined #gluster
13:38 FarbrorLeon joined #gluster
13:40 theron joined #gluster
13:47 rahulcs joined #gluster
13:47 pk1 left #gluster
13:49 jobewan joined #gluster
13:50 philv76 joined #gluster
13:51 chirino joined #gluster
13:51 sroy joined #gluster
13:53 DV__ joined #gluster
13:58 ira joined #gluster
13:58 dusmant joined #gluster
13:59 jmarley joined #gluster
13:59 jmarley joined #gluster
14:02 theron joined #gluster
14:06 ira_ joined #gluster
14:13 rahulcs joined #gluster
14:20 B21956 joined #gluster
14:21 B21956 joined #gluster
14:22 kaptk2 joined #gluster
14:22 sroy joined #gluster
14:35 lmickh joined #gluster
14:36 robo joined #gluster
14:38 theron joined #gluster
14:50 rgustafs joined #gluster
14:52 failshell joined #gluster
15:00 kam270 joined #gluster
15:04 Leolo joined #gluster
15:05 kam270 joined #gluster
15:07 FarbrorLeon joined #gluster
15:08 calum_ joined #gluster
15:13 DV__ joined #gluster
15:18 Pavid7 joined #gluster
15:19 seapasulli joined #gluster
15:21 kam270 joined #gluster
15:25 calum_ joined #gluster
15:28 DV__ joined #gluster
15:35 Copez joined #gluster
15:39 rahulcs joined #gluster
15:49 Copez joined #gluster
15:52 rshade98 any ideas why I would get transport not connected when I can telnet 24009
15:52 rshade98 gluster 3.2.7
15:53 DV__ joined #gluster
15:55 semiosis rshade98: pastie client log please
15:55 rshade98 I also have one client connected to same host
15:56 rshade98 http://pastebin.com/D7313FLy
15:56 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:57 rshade98 http://fpaste.org/85434/94812668/
15:57 glusterbot Title: #85434 Fedora Project Pastebin (at fpaste.org)
15:58 semiosis connection timed out suggests either no server at the IP or iptables is dropping packets
15:58 semiosis double check the mount server address.  can you telnet to 24007 at that host?
15:58 semiosis ,,(ports)
15:58 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
16:00 rshade98 yeah, I manually mounted and checked address. Iptables is off on the gluster servers, and ec2 security group is wide open
16:04 rshade98 this is my mount command:  mount -t glusterfs 10.9.150.207:/multi_gluster /mnt/ephemeral/glusterfs
16:05 robo joined #gluster
16:06 rshade98 its is possibly a ttl thing?
16:06 semiosis drop the / from the volume address: 10.9.150.207:multi_gluster
16:06 semiosis although that shouldn't matter, it's more correct without the slash
16:07 semiosis can you telnet from the client machine to 10.9.150.207 port 24007?
16:07 semiosis that's where the client is failing
16:07 rshade98 yep
16:07 rshade98 and to 24009 both
16:07 semiosis truncate the client log, try again, and pastie the complete client log file please
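
A sketch of the checks being suggested here, run from the client. The address and volume name are the ones from the paste; the log path follows the usual fuse-client naming (mount point with slashes turned into dashes) and may differ on other distros:

    telnet 10.9.150.207 24007                          # glusterd, where the client fetches the volfile
    telnet 10.9.150.207 24009                          # the brick port on this 3.2 install
    > /var/log/glusterfs/mnt-ephemeral-glusterfs.log   # truncate the old client log
    mount -t glusterfs 10.9.150.207:multi_gluster /mnt/ephemeral/glusterfs
    tail -n 50 /var/log/glusterfs/mnt-ephemeral-glusterfs.log
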
16:12 DV__ joined #gluster
16:12 semiosis rshade98: also, are you aware there's a much newer version of glusterfs available?  latest is 3.4.2
16:12 semiosis ,,(latest)
16:12 glusterbot The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
16:16 rshade98 yeah, I was hoping to get this just to work, but it may have to be upgraded.
16:16 rshade98 it was an existing cookbook
16:17 semiosis a cookbook?
16:17 semiosis link?
16:17 semiosis we should nudge the author to update or deprecate
16:20 m0zes_ joined #gluster
16:20 rshade98 it was mine :)
16:21 rshade98 well ours, I am in the process of fixing it up. https://github.com/rs-services/cookbooks_internal/tree/master/cookbooks/glusterfs
16:21 glusterbot Title: cookbooks_internal/cookbooks/glusterfs at master · rs-services/cookbooks_internal · GitHub (at github.com)
16:21 rshade98 and re-releasing
16:23 DV__ joined #gluster
16:26 bala joined #gluster
16:26 robo joined #gluster
16:26 semiosis rshade98: you should use the ,,(ppa) packages
16:26 glusterbot rshade98: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
16:28 Mo__ joined #gluster
16:30 rshade98 you mean these? http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/
16:30 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/3.4.2 (at download.gluster.org)
16:32 hagarth joined #gluster
16:35 HeisSpiter Where to look when volume create fails?
16:35 HeisSpiter (for info)
16:35 rahulcs joined #gluster
16:37 semiosis HeisSpiter: glusterd log file... /var/log/glusterfs/etc-glusterfs-glusterd.log
16:38 semiosis rshade98: i mean these ... ,,(ppa)
16:38 glusterbot rshade98: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
16:39 semiosis rshade98: i see you have an ubuntu deb in your cookbook, so i figure you're using ubuntu
16:39 HeisSpiter Doesn't contain anything relevant @ semiosis
16:40 HeisSpiter Basically, just this: [2014-03-14 16:33:08.055573] I [glusterd-handler.c:952:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
16:40 kkeithley1 joined #gluster
16:40 HeisSpiter not really... useful
16:40 semiosis HeisSpiter: pastie the log
16:41 rshade98 semiosis, yeah, its for both. I am usually a centos guy. I will checkout the ppa
16:41 rshade98 I am an idiot, I just realized that is your ppa repo
16:41 semiosis rshade98: the ppa is where we release ubuntu packages.  centos packages are on the download.gluster.org site at the link you pasted
16:41 HeisSpiter That's all it has semiosis
16:42 HeisSpiter This single line
16:42 semiosis rshade98: i maintain the ubuntu & debian packages for the community, yes
16:43 semiosis HeisSpiter: oh hmmm
16:45 HeisSpiter cli.log just contains this: http://fpaste.org/85454/
16:45 zerick joined #gluster
16:45 glusterbot Title: #85454 Fedora Project Pastebin (at fpaste.org)
16:45 semiosis HeisSpiter: that's odd.  does the gluster command work otherwise?  can you do gluster volume info or gluster peer status?
16:45 HeisSpiter sure
16:45 HeisSpiter gluster peer status works
16:46 semiosis HeisSpiter: well, try restarting glusterd, i guess
16:46 HeisSpiter All peers in cluster & connected
16:46 HeisSpiter Already tried
16:46 semiosis normally when volume create fails there's something in the local glusterd log
16:47 rahulcs joined #gluster
16:47 HeisSpiter there's nothing here :-(
16:47 HeisSpiter except what I quote you
16:47 DV__ joined #gluster
16:50 HeisSpiter (even with force it fails)
16:54 HeisSpiter I can try to create any volume I want, it fails....
17:04 HeisSpiter even aptitude purge && aptitude install doesn't fix it....
17:07 DV__ joined #gluster
17:09 HeisSpiter it seems that at some point
17:09 HeisSpiter One of my nodes has an issue
17:10 HeisSpiter Which one? I don't know, glusterfs doesn't say a thing about it
17:10 HeisSpiter restricting number of nodes allowed to create a volume...
17:13 Copez joined #gluster
17:15 theron_ joined #gluster
17:19 FarbrorLeon joined #gluster
17:28 glusterbot New news from newglusterbugs: [Bug 1076625] file disappeared in the heterogeneity architecture computer system(arm and intel) <https://bugzilla.redhat.com/show_bug.cgi?id=1076625>
17:39 DV joined #gluster
17:40 MacWinner joined #gluster
17:41 wrale_ joined #gluster
17:51 andreask joined #gluster
17:53 harish_ joined #gluster
17:57 DV joined #gluster
18:04 lpabon joined #gluster
18:08 robo joined #gluster
18:13 DV joined #gluster
18:16 ThatGraemeGuy joined #gluster
18:19 zaitcev joined #gluster
18:21 failshel_ joined #gluster
18:28 DV joined #gluster
18:33 Pavid7 joined #gluster
18:34 robo joined #gluster
18:48 JonnyNomad joined #gluster
18:54 nueces joined #gluster
18:55 failshell joined #gluster
19:05 DV joined #gluster
19:06 JoeJulian "HeisSpiter> there's nothing here :-( except what I quote you" - but you only paste the cli log. Not the glusterd log (/var/log/glusterfs/etc-gl​usterfs-glusterd.vol.log) which is the part that actually does the work and communication with the other GlusterFS administration daemons.
19:06 HeisSpiter I had pasted it as well
19:06 jiffe98 anyone used proftpd on top of gluster?  Seems to work fine except one user has a directory with 24000 files in it and listing times out
19:06 JoeJulian Ah, ok. I guess I didn't scroll back far enough.
19:07 diegows joined #gluster
19:08 JoeJulian re-paste the link 'cause I don't see it up there.
19:10 robo joined #gluster
19:10 HeisSpiter I don't have it anymore, changed my computer :-(
19:11 JoeJulian If you'd like more help, please provide that.
19:11 RayS joined #gluster
19:13 HeisSpiter I'll check on Monday, not at work any longer
19:15 DV joined #gluster
19:22 JoeJulian jiffe98: Can you mount with the mount option "use-readdirp=on" and see if that has any positive effect?
19:24 jiffe98 JoeJulian: getting unknown option
19:24 robo joined #gluster
19:27 JoeJulian jiffe98: What version?
19:27 jiffe98 JoeJulian: 3.3.1
19:29 wrale__ joined #gluster
19:29 JoeJulian That's why
19:30 JoeJulian There's current versions that make use of more recent fuse updates. You'd need a current version of GlusterFS and a kernel with those changes (like one released within the last year or so)
19:34 [o__o] left #gluster
19:37 [o__o] joined #gluster
19:40 [o__o] left #gluster
19:43 [o__o] joined #gluster
19:44 rahulcs joined #gluster
19:46 [o__o] left #gluster
19:49 [o__o] joined #gluster
19:49 jiffe98 JoeJulian: gotcha alright
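
For completeness, on a current GlusterFS and kernel the suggested mount would look roughly like this; the server and volume names are placeholders:

    mount -t glusterfs -o use-readdirp=on server1:/bigvol /mnt/ftp
    # or the /etc/fstab equivalent:
    # server1:/bigvol  /mnt/ftp  glusterfs  defaults,_netdev,use-readdirp=on  0 0
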
20:12 cjanbanan joined #gluster
20:12 robo joined #gluster
20:17 wrale__ So, I'm installing ovirt self hosted engine atop glusterfs (sharing the physical nodes)... Six nodes in the cluster.. Anyway, for the self hosted engine (brain VM floats and fences when necessary), ovirt only supports nfsv3 and v4 with no option for custom mount options (that i can see).. (cont.)
20:18 wrale__ I'm worried that such an introspective mount would not be a good idea.. Rather, i don't want to hard code the NFS server as a single host.. I'm considering re-exporting gluster on localhost, so that the same volume is available everywhere by the same path.. Does this make sense, in your opinion?
20:19 wrale__ I've read that GlusterFS doesn't like being fronted by the kernel's NFS.. hmmm
20:21 JoeJulian jclift?
20:21 jclift JoeJulian: Here
20:22 JoeJulian ovirt question above.
20:22 wrale__ Thanks JoeJulian
20:23 nueces joined #gluster
20:24 jclift wrale__ JoeJulian: Sadly, this is really outside of my area.  I haven't touched oVirt in a very very long time.
20:24 JoeJulian Oh, I thought you were the ovirt expert... Who was I thinking then?
20:24 wrale__ jclift: thank you
20:24 jclift I'll ask bkp if he knows the right person to ask.
20:24 jclift JoeJulian: Yeah, it's not me. ;)
20:25 semiosis wrale__: idk about ovirt but regarding nfs i can tell you 1) a server can run either a gluster-nfs daemon or a kernel-nfs daemon, but not both, and 2) most people use a virtual IP to do high availability NFS with glusterfs
20:26 semiosis wrale__: i suppose you could re-export a glusterfs fuse mount over nfs on a separate machine, but i dont see the benefit
20:26 wrale__ semiosis: that is helpful.. thank you.. i'll look into the virtual ip method
20:26 semiosis yw
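
A minimal keepalived sketch of the virtual-IP approach semiosis mentions, assuming two gluster servers sharing a spare address; all names and addresses are illustrative, and the second server would run the same config with a lower priority:

    # /etc/keepalived/keepalived.conf on the first NFS server
    vrrp_instance gluster_nfs {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100              # e.g. 90 on the backup server
        advert_int 1
        virtual_ipaddress {
            10.0.1.100            # clients mount NFS from this floating address
        }
    }
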
20:27 cjanbanan joined #gluster
20:27 semiosis i assumed ovirt would work with qemu-glusterfs for the vm image hosting
20:28 semiosis but i'm not familiar with that stuff
20:29 rshade98 ok getting close. reusing ebs volume, but new setup
20:29 rshade98 and gettting /mnt/storage1/gluster or a prefix of it is already part of a volume
20:29 glusterbot rshade98: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
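
The blog post behind that bot message comes down to clearing the volume metadata left on the reused brick directory; a hedged sketch, with the brick path from the error substituted in:

    BRICK=/mnt/storage1/gluster
    setfattr -x trusted.glusterfs.volume-id "$BRICK"   # drop the old volume-id xattr
    setfattr -x trusted.gfid "$BRICK"                  # drop the old gfid xattr
    rm -rf "$BRICK/.glusterfs"                         # remove the old metadata directory
    # then retry the volume create
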
20:30 wrale__ semiosis: they seem to be having some issues with the qemu-glusterfs stuff, at least from the self-hosted angle.. (I guess) :)
20:31 semiosis rshade98: if you haven't already done so, i recommend using dedicated hostnames for your gluster servers (gluster1.domain.whatever)
20:31 semiosis rshade98: alias each machine's own gluster hostname to 127.0.0.1 in its /etc/hosts
20:32 DV joined #gluster
20:32 semiosis rshade98: i actually divide my cluster into left & right for replication (between two AZs) so i have gluster servers like front-1-right & front-1-left ...
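
A sketch of that naming scheme on one server; the domain and example commands are illustrative, and the loopback alias of the machine's own gluster name is the trick being described:

    # /etc/hosts on the machine known to the cluster as gluster1.example.com
    127.0.0.1   localhost gluster1.example.com
    # peers and bricks are always referenced by these names, never by raw IPs:
    #   gluster peer probe gluster2.example.com
    #   gluster volume create ... gluster1.example.com:/export/brick1 gluster2.example.com:/export/brick1
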
20:36 cjanbanan joined #gluster
20:37 rshade98 cool man, I will add that in the redesign
20:38 semiosis those hostnames are CNAMEs pointed at the public-hostname of my ec2 instances btw
20:38 semiosis all in route53 of course
20:39 rshade98 you are using your publics? with EIP?
20:39 semiosis i dont recommend EIP, simply because they're not necessary.  it would add complexity for no reason
20:40 semiosis public-hostname resolves to local-ipv4 when resolved inside ec2 and resolves to public-ipv4 when resolved from the internet
20:40 rshade98 oh never mind, was thinking something else. Head is fried on this
20:40 semiosis ec2 has split-horizon dns
20:40 rshade98 yeah, I love that.
20:40 rshade98 When I rebuild want to help me track features?
20:41 semiosis this works really well for failure recovery.  when i lose a server I can just restore it from a snapshot (I call CreateImage on all my servers at least once a day) then update the CNAME for that gluster server, and sync up the data from the replica.  most of the time clients just wait until the cname is updated then reconnect to the new server
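
A hedged outline of that recovery flow with the AWS CLI; the instance id, zone id, record file, and volume name are placeholders, and the Route53 change batch is abbreviated to a file reference:

    # taken at least daily: snapshot the server into an AMI
    aws ec2 create-image --instance-id i-0123456 --name "front-1-right-$(date +%F)" --no-reboot
    # after launching a replacement from the AMI, repoint the gluster CNAME at its public hostname
    aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
        --change-batch file://front-1-right-cname.json
    # then let the surviving replica resync the restored server
    gluster volume heal myvol full
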
20:41 semiosis idk what you mean track features
20:43 rshade98 what all needs to be put in it. I am going to make a more generic one with simple setups
20:43 semiosis cool, feel free to ask me Q's
20:43 semiosis i'll also volunteer purpleidea, who wrote puppet-gluster, the puppet module for glusterfs
20:44 semiosis really feel free to ask anyone around here
20:44 rshade98 will do man.
20:44 tdasilva left #gluster
20:44 semiosis ,,(puppet)
20:44 glusterbot https://github.com/purpleidea/puppet-gluster
20:44 rshade98 I used to be not as horrible on gluster, my old rightscript based templates rocked
20:47 FarbrorLeon joined #gluster
20:48 cjanbanan joined #gluster
20:53 FarbrorLeon joined #gluster
20:55 robo joined #gluster
20:56 DV joined #gluster
20:57 sroy joined #gluster
21:04 robo joined #gluster
21:05 cjanbanan joined #gluster
21:07 DV joined #gluster
21:09 jag3773 joined #gluster
21:10 rahulcs_ joined #gluster
21:14 [o__o] left #gluster
21:17 [o__o] joined #gluster
21:21 jrcresawn-home joined #gluster
21:23 Matthaeus joined #gluster
21:24 rahulcs joined #gluster
21:28 chirino joined #gluster
21:30 jrcresawn Could someone direct me to a document that explains the GlusterFS design options such as mirroring, striping, RAID 5 like design, etc.?
21:32 cjanbanan joined #gluster
21:33 semiosis there's nothing like raid5 in glusterfs.  striping is not recommended (except in exceptional cases).
21:33 jrcresawn I see this URL: http://www.gluster.org/community/documentation/index.php/GlusterFS_Concepts
21:33 semiosis perhaps a GlusterFS Is Not RAID document would be helpful as many people ask about this
21:33 glusterbot Title: GlusterFS Concepts - GlusterDocumentation (at www.gluster.org)
21:35 jrcresawn I see. I think I was confused by Stripe being the same name.
21:35 semiosis the most common design is "distributed replicated" where you have several replica sets.  each replica set has the same number of replicas in it, most often 2 or 3.  glusterfs distributes files evenly over these replica sets
21:37 semiosis for example a 2x2 volume has four bricks, where half of all files are replicated between the first pair, and the other half of all files are replicated between the second pair
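
A concrete sketch of that 2x2 layout; hostnames and brick paths are placeholders:

    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    # bricks pair up in the order given: (server1,server2) hold one half of the files,
    # (server3,server4) hold the other half
    gluster volume start myvol
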
21:37 jrcresawn OK that's very helpful. I am also curious about geo replication. My use case is to have a server in one location and a second server about a 90 minute drive away. Is it possible to have only one primary server and a secondary server configured to use geo replication?
21:39 semiosis i think that would work.  you should try it out on a couple of VMs
21:39 purpleidea semiosis: i think i'm going to have to double your promoter salary
21:40 semiosis yay now i can afford twice as much nothing
21:40 purpleidea semiosis: (btw, lots of new features are in puppet-gluster ... will land in master very soon!)
21:40 purpleidea ;)
21:40 semiosis cool!
21:46 jrcresawn I imagine a common two server configuration would use distribute and replicate and that writes to the volume would be done synchronously. If I'm right about that then am I right to say that a geo-replication server would be similar to the other servers but that writes would occur asynchronously to the primary server(s)?
21:46 jrcresawn sorry...
21:47 jrcresawn asynchronously from the primary server(s) to the geo-replication server?
21:47 failshel_ joined #gluster
21:47 Slasheri joined #gluster
21:47 cjh973 joined #gluster
21:47 askb joined #gluster
21:48 velladecin joined #gluster
21:48 semiosis correct
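
A rough outline of geo-replication on a 3.4/3.5-era install; the volume and host names are placeholders, and the exact create syntax has changed between releases, so treat this as a shape rather than a recipe:

    # on the primary site, after creating "mastervol" locally and "slavevol" at the DR site
    gluster volume geo-replication mastervol dr-server::slavevol create push-pem
    gluster volume geo-replication mastervol dr-server::slavevol start
    gluster volume geo-replication mastervol dr-server::slavevol status
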
21:48 crazifyngers joined #gluster
21:53 jrcresawn Here's a question about capacity planning. Let's say I have 10 TB on server1 and 10TB on server2 and both use distribute and replicate. If I add server3, a geo-replication server, does it need to have 10 TB of capacity? I'm imagining a case in which server1 and server2 provide storage to the primary data center and server3 provides storage to a disaster recovery site. server3 would need a full copy of the data at the primary data center to be useful
21:55 semiosis sounds right to me... each server will have a complete copy of all the data
21:58 tjikkun_work joined #gluster
21:59 jrcresawn OK, what if I add a third 10 TB server in the primary data center. Would it too have a complete copy of all the data?
22:02 jrcresawn I think I have an answer to my last question, "Increasing volume can be done by adding a new server. Adding servers can be done on-the-fly (Since 3.1.0)."
22:03 semiosis if you're using replica 2 then you really should add servers in pairs
22:03 rahulcs joined #gluster
22:05 cjanbanan joined #gluster
22:13 semiosis jrcresawn: otoh if you want to change the replica count to 3 then yes it would also have a complete copy
22:13 semiosis but that wont increase volume
22:14 cjanbanan joined #gluster
22:14 NeatBasis joined #gluster
22:14 semiosis jrcresawn: also fwiw, i recommend setting up several bricks per server.  if you need to add capacity you can expand the underlying disk filesystems (using RAID or LVM).  if you need to add performance you can use replace-brick to move a brick to a new server.  this keeps the number of bricks in the volume the same.
22:15 semiosis if you use add-brick to increase the number of bricks in the volume then you have to do a rebalance, which is an expensive (slow) operation.
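
The two operations being contrasted look roughly like this; volume, host, and brick names are placeholders:

    # growing the volume: add a replica pair, then rebalance to spread existing files (slow)
    gluster volume add-brick myvol server5:/export/brick1 server6:/export/brick1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
    # moving a brick to a new server without changing the brick count
    gluster volume replace-brick myvol server1:/export/brick1 server5:/export/brick1 start
    # check with "replace-brick ... status" and finish with "commit" once the migration completes
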
22:16 rotbeard joined #gluster
22:20 awheeler_ joined #gluster
22:20 Jakey joined #gluster
22:23 ultrabizweb joined #gluster
22:25 Copez joined #gluster
22:26 delhage joined #gluster
22:27 cjanbanan joined #gluster
22:36 cjanbanan joined #gluster
22:38 chirino joined #gluster
22:42 badone_ joined #gluster
22:42 RayS joined #gluster
22:45 jag3773 joined #gluster
22:49 elyograg left #gluster
23:35 wrale_ joined #gluster
23:37 zaitcev_ joined #gluster
23:37 Matthaeus1 joined #gluster
23:39 jag3773 joined #gluster
23:42 RayS joined #gluster
23:44 nixpanic joined #gluster
23:44 nixpanic joined #gluster
