IRC log for #gluster, 2014-04-16


All times shown according to UTC.

Time Nick Message
00:06 gdubreui joined #gluster
00:36 yinyin_ joined #gluster
00:37 bala joined #gluster
00:43 tdasilva left #gluster
00:59 jag3773 joined #gluster
01:06 chirino joined #gluster
01:09 gdubreui joined #gluster
01:12 hagarth joined #gluster
01:14 glusterbot joined #gluster
01:24 baojg joined #gluster
01:24 jmarley joined #gluster
01:24 jmarley joined #gluster
01:26 dbruhn joined #gluster
01:28 yinyin joined #gluster
01:39 jag3773 joined #gluster
01:45 haomaiwa_ joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:49 kkeithley joined #gluster
01:53 gdubreui joined #gluster
02:02 haomai___ joined #gluster
02:34 ceiphas_ joined #gluster
02:34 jiku joined #gluster
02:36 Honghui joined #gluster
02:38 chirino joined #gluster
02:44 kdhananjay joined #gluster
02:58 dusmantkp_ joined #gluster
03:05 bharata-rao joined #gluster
03:13 spandit joined #gluster
03:16 hagarth joined #gluster
03:30 kanagaraj joined #gluster
03:40 shubhendu joined #gluster
03:44 vimal joined #gluster
03:59 itisravi joined #gluster
04:13 kumar joined #gluster
04:20 rastar joined #gluster
04:21 RameshN joined #gluster
04:24 atinm joined #gluster
04:27 tdasilva joined #gluster
04:35 kdhananjay joined #gluster
04:46 yinyin_ joined #gluster
04:46 Humble joined #gluster
04:51 ppai joined #gluster
04:53 ndarshan joined #gluster
04:58 bala joined #gluster
04:59 dusmant joined #gluster
04:59 ndarshan joined #gluster
05:00 sputnik13 joined #gluster
05:04 ajha joined #gluster
05:05 ravindran1 joined #gluster
05:06 kanagaraj joined #gluster
05:10 benjamin_____ joined #gluster
05:16 dusmant joined #gluster
05:21 Philambdo joined #gluster
05:25 prasanth_ joined #gluster
05:27 zerick joined #gluster
05:28 ndarshan joined #gluster
05:31 aravindavk joined #gluster
05:50 nishanth joined #gluster
05:51 nthomas joined #gluster
05:52 kanagaraj joined #gluster
05:56 lalatenduM joined #gluster
05:59 Amanda joined #gluster
06:02 vimal joined #gluster
06:16 kdhananjay joined #gluster
06:20 ndarshan joined #gluster
06:21 ekuric joined #gluster
06:33 nshaikh joined #gluster
06:34 rgustafs joined #gluster
06:41 ravindran1 left #gluster
06:43 rahulcs joined #gluster
06:44 raghu joined #gluster
06:45 ngoswami joined #gluster
06:57 ctria joined #gluster
07:03 psharma joined #gluster
07:03 edward1 joined #gluster
07:06 eseyman joined #gluster
07:12 rbw joined #gluster
07:15 pvh_sa joined #gluster
07:17 Andyy2 joined #gluster
07:23 saurabh joined #gluster
07:25 ngoswami joined #gluster
07:33 fsimonce joined #gluster
07:34 giannello joined #gluster
07:36 dusmant joined #gluster
07:38 Daan joined #gluster
07:38 Daan Hello!
07:38 glusterbot Daan: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:40 Daan I'm having problems combining Gluster and Samba with CTDB. I have Gluster set up to replicate across 4 servers, and on those servers stripe across 4 disks.
07:42 Daan I also have Samba4 with CTDB set up on each server, advertising the Gluster volume as a share. The volume is mounted on each server using the Gluster client, it's then used for the CTDB lock file etc, and it's that mounted volume that is being shared over Samba.
07:43 Daan Samba4 is supposed to fail over on host failure. However, my setup does not function as intended. There is no failover at all. When I'm connected to one server in the cluster (and copying a file to the share) and another server fails, the Samba connection breaks (and the copy stops).
07:45 Daan What could be my problem? My setup is built as described in this document: http://download.gluster.org/pub/gluster/glusterfs/doc/Gluster_CTDB_setup.v1.pdf
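(For context, the linked document's setup boils down to the following CTDB pieces; paths, IPs and names below are illustrative, not taken from Daan's configuration. One common cause of "no failover at all" is clients connecting to a node's fixed IP rather than to the floating public addresses CTDB manages, since only the public addresses move on node failure.)

    # /etc/sysconfig/ctdb on every node -- the recovery lock must live on the shared gluster mount
    CTDB_RECOVERY_LOCK=/mnt/glustervol/ctdb/.lock
    CTDB_NODES=/etc/ctdb/nodes                          # private IPs of the 4 servers
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses    # floating IPs that SMB clients connect to
    CTDB_MANAGES_SAMBA=yes

    # /etc/samba/smb.conf
    [global]
        clustering = yes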
08:00 dusmant joined #gluster
08:16 Norky joined #gluster
08:22 calum_ joined #gluster
08:24 karolis joined #gluster
08:25 rahulcs joined #gluster
08:26 karolis left #gluster
08:26 kdhananjay joined #gluster
08:29 mgarcesMZ joined #gluster
08:29 mgarcesMZ hi there
08:30 mgarcesMZ quick newbie question... if I have a volume based on a LVM volume, and I grow the LVM logical volume, do I need to do anything on the gluster side?
08:33 Oneiroi joined #gluster
08:36 rahulcs_ joined #gluster
08:36 ctria joined #gluster
08:37 Daan What filesystem do you use for the brick?
08:40 mgarcesMZ Daan: I am using XFS in some bricks, ext4 for others... I know I can resize them online;
08:41 mgarcesMZ but I never mix filesystems (e.g. server1 brick on xfs, server2 brick on ext4)
08:42 Daan In this topic: http://www.gluster.org/pipermail/gluster-users/20081203/000710.html
08:42 glusterbot Title: [Gluster-users] Recommended underlining disk storage environment (at www.gluster.org)
08:42 Daan It says LVM is recommended for gluster partly because you can resize
08:42 Daan (not the only reason, but it is one of the useful features)
08:43 mgarcesMZ ok, so if I resize the LVM, GlusterFS is ok with that
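(A minimal sketch of the usual online-grow sequence, with hypothetical VG/LV names and mount points; gluster just reports whatever free space the brick filesystem reports, so no gluster-side step is needed.)

    lvextend -L +100G /dev/vg_bricks/brick1
    xfs_growfs /bricks/brick1             # XFS brick: grow the mounted filesystem
    # or, for an ext4 brick:
    resize2fs /dev/vg_bricks/brick1
    gluster volume status myvol detail    # shows the new per-brick free space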
08:44 mgarcesMZ oh, btw... if I have 2 bricks, different sizes, and I create a volume, it will have the size of the smallest brick, correct?
08:44 Daan I'm not sure
08:44 Daan I would like to know how that works...
08:44 Daan It depends on your setup
08:45 Daan I'm guessing when you have replication it would only be able to replicate smallest brick x2
08:45 Daan But when you're using a distributed volume... not sure
08:48 Andyy2 I'm getting volume creation errors, which I can't diagnose. gluster vol create test replica 2 s1:/brick s2:/brick gives volume creation failed. peers are connected.
08:48 Andyy2 In the logs I get:
08:48 Andyy2 [2014-04-16 08:48:15.661250] I [cli-rpc-ops.c:545:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
08:48 Andyy2 [2014-04-16 08:48:15.661324] I [cli-rpc-ops.c:778:gf_cli_get_volume_cbk] 0-cli: Returning: 0
08:49 Andyy2 these are 2 new bricks added to a 6 node cluster. I am trying to create a volume on these two nodes for testing replication.
08:49 Andyy2 old nodes are gluster 3.4.2. The new ones are 3.4.3.
08:49 Andyy2 Any ideas what could be wrong?
08:50 saravanakumar joined #gluster
08:51 Daan Which ones are old nodes?
08:57 davinder joined #gluster
09:04 steveeJ what is the backup approach of glusterfs currently? i see the snapshot feature is in development still
09:05 steveeJ i'm thinking of using glusterfs for LXC containers and would like to take consistent backups of these
09:14 rahulcs joined #gluster
09:19 prasanth_ joined #gluster
09:19 jkroon joined #gluster
09:20 Philambdo joined #gluster
09:20 jkroon hi all, when mounting a glusterfs filesystem using the built-in NFS server, it seems that flock calls block. Any ideas how I can trace how/why, or possible solutions?
09:24 rahulcs_ joined #gluster
09:28 raghu joined #gluster
09:33 qdk joined #gluster
09:34 vpshastry1 joined #gluster
09:38 jkroon http://www.gluster.org/pipermail/gluster-users/2013-July/036626.html - but that's a rather oldish version.
09:38 glusterbot Title: [Gluster-users] Gluster 3.3.2 NFS & flock (at www.gluster.org)
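(A quick way to check whether the lock call itself is what hangs; the mount point and file name here are made up. flock -n fails immediately instead of blocking, and -w gives up after a timeout.)

    flock -n /mnt/nfsvol/flock.test -c 'echo got lock'        # non-blocking attempt
    flock -x -w 10 /mnt/nfsvol/flock.test -c 'echo got lock'  # exclusive lock, 10s timeout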
09:40 gdavis33 joined #gluster
09:44 cyber_si joined #gluster
09:45 chirino joined #gluster
09:53 kanagaraj joined #gluster
09:54 vpshastry1 left #gluster
09:55 ctria joined #gluster
10:00 uebera|| joined #gluster
10:04 harish joined #gluster
10:11 Oneiroi joined #gluster
10:14 Andyy2 are there problems creating a volume on a cluster running different gluster versions (3.4.2 and 3.4.3 ) ?
10:17 Philambdo joined #gluster
10:24 ProT-0-TypE joined #gluster
10:29 doekia joined #gluster
10:31 kdhananjay joined #gluster
10:32 Oneiroi joined #gluster
10:34 kanagaraj joined #gluster
10:40 rahulcs joined #gluster
10:43 vpshastry1 joined #gluster
10:45 davinder joined #gluster
10:46 chirino joined #gluster
10:48 saurabh joined #gluster
10:48 ira joined #gluster
10:48 dusmant joined #gluster
10:49 nishanth joined #gluster
10:49 nthomas joined #gluster
10:50 Oneiroi joined #gluster
10:55 rahulcs joined #gluster
10:57 andreask joined #gluster
10:57 kdhananjay joined #gluster
11:03 Slashman joined #gluster
11:06 14WACKQLR joined #gluster
11:08 dusmant joined #gluster
11:11 rahulcs joined #gluster
11:12 kanagaraj joined #gluster
11:13 Calum joined #gluster
11:13 gdubreui joined #gluster
11:14 mgarcesMZ left #gluster
11:25 saurabh joined #gluster
11:27 carnil joined #gluster
11:28 edward1 joined #gluster
11:28 carnil Hi all. I have a problem which I'm trying to recover from: With gluster volume replace-brick [...] I was replacing one brick from one server to another brick of a second server.
11:29 kdhananjay left #gluster
11:29 carnil I mistakenly killed the glusterfs processes; after that the first invocation of gluster volume replace-brick [...] status resulted in empty output, and any subsequent
11:30 carnil invocation says "operation failed"
11:30 carnil is there a way to recover and reset the status for the brick-replacement?
11:31 carnil the glusterfs version is unfortunately not the newest one, but the one found in Debian Wheezy (3.2.7 based)
11:34 giannello joined #gluster
12:00 liquidat joined #gluster
12:09 jmarley joined #gluster
12:09 jmarley joined #gluster
12:13 XATRIX joined #gluster
12:13 XATRIX Hi guys, I don't know why, but my imap runs up to 3 times slower on gluster than on a simple fs
12:19 gdubreui joined #gluster
12:24 benjamin_____ joined #gluster
12:24 itisravi_ joined #gluster
12:27 bennyturns joined #gluster
12:36 jag3773 joined #gluster
12:37 T0aD joined #gluster
12:42 rahulcs joined #gluster
12:48 chirino joined #gluster
12:50 diegows joined #gluster
12:56 Copez joined #gluster
12:57 Copez Does someone know how to mount GlusterFS in ovirt-latest-stable?
12:57 Copez I'm only getting: StorageDomainNot supported
12:57 jag3773 joined #gluster
12:58 Ark joined #gluster
13:00 sroy joined #gluster
13:12 FarbrorLeon joined #gluster
13:14 FarbrorLeon Out of nowhere I now get 'Illegal argument' when running mkdir. It occurs on all my gluster volumes. Although, the directory seems to be created correctly. Anyone know anything about this?
13:17 dbruhn joined #gluster
13:18 glusterbot New news from newglusterbugs: [Bug 1088338] [SNAPSHOT]: Glusterd crashes when a same command eg snapshot create is fired simultaneously on a node <https://bugzilla.redhat.com/show_bug.cgi?id=1088338> || [Bug 1088324] glusterfs nfs mount can't handle flock calls. <https://bugzilla.redhat.com/show_bug.cgi?id=1088324>
13:19 Slashman_ joined #gluster
13:35 theron joined #gluster
13:43 kanagaraj joined #gluster
13:44 rahulcs joined #gluster
13:47 stickyboy Are people using hardware RAID? JBOD? Or what?
13:47 dbruhn There is a mix on both sides of the fence, depends on what your goals are
13:49 chirino joined #gluster
13:50 stickyboy dbruhn: I'm using hardware RAID because it's what I'm used it.
13:50 stickyboy s/used it/used to/
13:50 glusterbot stickyboy: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
13:50 stickyboy But I had a hardware RAID derp last week that took the entire array out of service... kinda annoying.
13:51 dbruhn I use hardware raid as well, it provides me better single thread performance, and I would rather on the fly rebuild a raid than repopulate a brick
13:51 rahulcs joined #gluster
13:51 stickyboy dbruhn: I'm not entirely concerned about performance, but the management aspect is very important for me.
13:52 dbruhn How did you derp the raid?
13:52 stickyboy dbruhn: The RAID array just farted and just fell off the system.
13:52 stickyboy 'fell off' as in... disappeared.  Had to reboot. :)
13:52 dbruhn ahh ok
13:53 dbruhn so it wasn't unrecoverable or anything like that
13:53 stickyboy But it was 1am and I didn't notice until the next morning. And there were 200MB/sec writes to the replica at that time
13:53 stickyboy So the replica was out of service for 8 or 10 hours.
13:53 stickyboy Still sorting out the split brain, about ~72 files from what I can tell.
13:53 dbruhn the self heal should fix that without split brain
13:54 stickyboy dbruhn: Yah, I don't think it's technically a split brain, but there are missing files.
13:54 dbruhn if you have an idea of what files they are simply stat them and it will populate them on the other side
13:54 stickyboy Even had some user programs OOPs, which sucked.
13:55 stickyboy dbruhn: I am currently running `find /blah -noleaf -print0 | xargs stat` on the volume.
13:55 stickyboy And there is no such healing. :D
13:55 dbruhn what do you mean oops?
13:56 dbruhn and what do you mean no such healing?
13:56 stickyboy Although maybe I'm wrong, `gluster volume heal <blah> info` shows fewer entries than when I last checked a few days ago...
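(The heal commands in question, with "blah" standing in for the real volume name as above; "full" forces a crawl of the whole volume rather than relying on the find/stat walk.)

    gluster volume heal blah info              # entries still pending heal
    gluster volume heal blah info split-brain  # entries gluster considers split-brain
    gluster volume heal blah full              # trigger a full self-heal crawl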
13:56 lmickh joined #gluster
13:57 dbruhn what does split-brain show?
13:58 stickyboy dbruhn: Oooh, maybe the processes didn't crash.  Just saw a lot of this in dmesg:  "INFO: task RepeatMasker:62129 blocked for more than 120 seconds."
13:58 dbruhn When a brick in a replication pair goes off line there is a 42 second timeout, that could have caused your application to take issue.
14:00 stickyboy brosner: Ah, yes, there is split brain. `gluster volume heal blah info split-brain | wc -l`   == 169.
14:00 stickyboy dbruhn: ^^
14:00 dbruhn Do you know how much of that existed before?
14:02 stickyboy dbruhn: Before the crash?  Not sure.  That user was running a bunch of jobs on the cluster... and that data was being written.
14:02 zaitcev joined #gluster
14:03 vsa joined #gluster
14:05 dbruhn You might have to go through and fix those split-brain files via joe julians method
14:07 ajha joined #gluster
14:11 stickyboy dbruhn: I saw Joe's split-mount tool yesterday.  Pretty nifty.
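(Roughly what that method amounts to; the brick path and file name here are hypothetical. You delete the copy you decide is bad, plus its gfid hard link, directly on that brick, then stat the file from a client so self-heal recreates it from the good copy.)

    # on the brick holding the bad copy:
    getfattr -m . -d -e hex /export/brick1/path/to/file     # inspect the afr changelog xattrs first
    rm -f /export/brick1/path/to/file
    rm -f /export/brick1/.glusterfs/ab/cd/abcd1234-....     # hard link named after the file's gfid
    # then from a client mount:
    stat /mnt/gluster/path/to/file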
14:12 dbruhn Totally
14:13 stickyboy dbruhn: `find` has been running for 3 days. :P
14:13 stickyboy Spent the last 24 hours inside this one user's directory (I was spying on /proc/pid/cwd). :P
14:13 dbruhn lol, I wish I could say I don't know your pain. My entire life is spent managing processes that scan all of the files on my volumes it seems.
14:14 ctria joined #gluster
14:16 wushudoin joined #gluster
14:17 stickyboy dbruhn: What interconnect are you using?  Like 10GbE or?
14:20 dbruhn 40gb infiniband
14:23 stickyboy dbruhn: Wow. :)
14:23 stickyboy Replica or distribute?
14:23 dbruhn I have several volumes, all of them are distributed+replica x2
14:23 stickyboy Nice.
14:24 stickyboy I'm currently only replica.  But need to expand and probably move to distributed+replica.
14:24 stickyboy And get off copper... maybe Infiniband.
14:25 bennyturns joined #gluster
14:27 stickyboy dbruhn: IPoIB or RDMA?
14:28 dbruhn My older systems are on RDMA, all systems I put up now are TCP and RDMA
14:29 Ark joined #gluster
14:29 kaptk2 joined #gluster
14:30 stickyboy dbruhn: Whoa, TCP + RDMA?  Didn't know you could do that.
14:30 stickyboy Never used Infiniband actually, so not really sure about it.
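(For reference, a sketch of how a dual-transport volume is created and mounted; server, brick and volume names are made up.)

    gluster volume create testvol replica 2 transport tcp,rdma \
        server1:/bricks/b1 server2:/bricks/b1
    # fuse mount over RDMA instead of the default tcp:
    mount -t glusterfs -o transport=rdma server1:/testvol /mnt/testvol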
14:31 dbruhn It's really not that terrible these days
14:32 stickyboy Nice.
14:34 stickyboy dbruhn: How's the price of the hardware?
14:35 kanagaraj joined #gluster
14:35 stickyboy dbruhn: 10GbE is pretty fast, but the latency over copper is killing me.
14:35 stickyboy :(
14:35 stickyboy I didn't realize it would be such a big factor.
14:36 jkroon stickyboy, ns vs ms ... and since the majority of IO is small blocks it turns out latency is a much bigger concern in general than throughput.
14:37 tdasilva joined #gluster
14:39 kkeithley joined #gluster
14:40 stickyboy jkroon: Yah.  I have lots of advice for new Gluster users now. :D
14:41 dbruhn Sorry, 40GB IB is about the same price as 10GbE right now
14:41 dbruhn and the latency difference is amazing
14:41 jkroon IB >>> Ethernet
14:41 dbruhn so far
14:41 jkroon and fuse vs nfs ... just a shame flock doesn't work on nfs :(
14:42 jkroon although, I'll admit that the current scenario the fuse mount seems to be doing OKish.
14:46 dbruhn I use the fuse mount for the resiliency, I've heard rumblings of pNFS support
14:47 pvh_sa joined #gluster
14:53 nightwalk joined #gluster
14:55 stickyboy dbruhn: 40GB IB is price competitive with 10GbE?
14:55 stickyboy w00t!
14:55 sputnik13 joined #gluster
14:56 giannello joined #gluster
15:02 XATRIX Guys, is it possible to get/set the NUFA translator on a running volume?
15:03 doekia joined #gluster
15:03 kumar joined #gluster
15:04 gmcwhistler joined #gluster
15:05 lmickh joined #gluster
15:08 XATRIX http://ur1.ca/h3it1 - I have the following configuration
15:08 glusterbot Title: #94704 Fedora Project Pastebin (at ur1.ca)
15:08 XATRIX And I have extremely slow operation during IMAP reads of the dirs
15:14 daMaestro joined #gluster
15:15 jobewan joined #gluster
15:17 Ark joined #gluster
15:28 pvh_sa joined #gluster
15:29 ctria joined #gluster
15:31 benjamin_____ joined #gluster
15:43 monotek joined #gluster
15:56 kanagaraj joined #gluster
15:57 jag3773 joined #gluster
16:14 Humble joined #gluster
16:14 MeatMuppet joined #gluster
16:16 jbd1 joined #gluster
16:19 RameshN joined #gluster
16:23 pvh_sa joined #gluster
16:23 sputnik13 joined #gluster
16:27 Licenser joined #gluster
16:27 dblack joined #gluster
16:29 RameshN joined #gluster
16:29 glusterbot New news from resolvedglusterbugs: [Bug 1087771] Include new email aliases in the "who wrote glusterfs" configuration files <https://bugzilla.redhat.com/show_bug.cgi?id=1087771>
16:38 glustercjb1 joined #gluster
16:39 glustercjb1 hey all, quick question on the "ignore_deletes" option in gluster
16:40 glustercjb1 why does it default to true? what if I want it to keep the two clusters in sync?
16:40 glustercjb1 there doesn't seem to be an option to change it, it appears to be statically coded
16:40 glustercjb1 xlators/mgmt/glusterd/src/glusterd-geo-rep.c:        runner_add_args (&runner, "ignore-deletes", "true", ".", ".", NULL);
17:10 MeatMuppet left #gluster
17:15 Mo_ joined #gluster
17:19 kkeithley joined #gluster
17:23 chirino joined #gluster
17:23 Matthaeus joined #gluster
17:27 Ark joined #gluster
17:31 pkoro joined #gluster
17:34 jag3773 joined #gluster
17:41 vipulnayyar joined #gluster
17:44 Humble joined #gluster
17:45 jkroon left #gluster
17:48 _dist joined #gluster
17:49 MeatMuppet joined #gluster
17:59 ctria joined #gluster
18:05 andreask joined #gluster
18:13 Humble joined #gluster
18:16 lmickh joined #gluster
18:21 jbd1 glustercjb1: that would be a good question for the gluster-users or gluster-devel mailing lists
18:22 Joe630 stupid question time
18:22 jbd1 glustercjb1: redhat says you can set it to 0 in the "ignore-deletes" option but having it default to 1 is funny
18:23 Joe630 how do i remove all traces of a volume created with volume create? i tried to reuse the name and it said it was in use
18:23 zerick joined #gluster
18:24 lmickh joined #gluster
18:24 jbd1 Joe630: did you gluster volume delete the volume?
18:25 Joe630 yes
18:25 Joe630 before i nuked the bricks
18:25 B21956 joined #gluster
18:25 lpabon joined #gluster
18:25 Joe630 then i tried to recreate it properly
18:25 Joe630 got this error
18:25 Joe630 volume create: gv0: failed: /export/emcpowera1/brick or a prefix of it is already part of a volume
18:25 glusterbot Joe630: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
18:25 jbd1 haha
18:25 _dist joined #gluster
18:26 Joe630 did i do something stupid
18:26 Joe630 you can make fun of me if you want.
18:26 chirino joined #gluster
18:26 jbd1 Joe630: no, you just didn't clear extended attributes on the bricks.  To truly nuke a brick, it's easiest just to reformat it (mkfs.xfs -i size=512 -L BRICK1 /dev/...)
18:26 Joe630 oh!
18:26 Joe630 ok
18:27 Joe630 i am deleting the whole thing.  this makes total sense
18:27 jbd1 Joe630: but you can read joe julian's stuff in the bot link
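(The non-reformat route from that link, shown against the path in the error above: strip the gluster xattrs and the .glusterfs directory from the old brick, then retry the create. Repeat the setfattr lines for any parent directory the error names.)

    setfattr -x trusted.glusterfs.volume-id /export/emcpowera1/brick
    setfattr -x trusted.gfid /export/emcpowera1/brick
    rm -rf /export/emcpowera1/brick/.glusterfs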
18:27 Joe630 i created 8 LUNs instead of one big one.
18:27 Joe630 forgetting data is replicated across bricks, not peers.
18:27 jbd1 ah
18:28 Joe630 rookie mistake
18:29 Joe630 thanks again.
18:29 jbd1 'course
18:32 glustercjb1 jbd1: although ignore_deletes looks like a settable option, it's actually not
18:32 glustercjb1 jbd1: [root@dev604 ~]# gluster volume geo-replication sac-poc 10.52.228.120::sac-poc config ignore_deletes 1
18:32 glustercjb1 Reserved option
18:32 glustercjb1 geo-replication command failed
18:32 glustercjb1 [root@dev604 ~]#
18:33 jbd1 glustercjb1: interesting. https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html-single/2.0_Release_Notes/index.html is wrong then
18:33 glusterbot Title: 2.0 Release Notes (at access.redhat.com)
18:33 jbd1 glustercjb1: all the more reason to post to the mailing list
18:35 athe joined #gluster
18:38 jbd1 glustercjb1: that doc has the option as ignore-deletes (hyphen, not underscore)
18:42 glustercjb1 jbd1: same thing
18:42 glustercjb1 [root@dev604 ~]# gluster volume geo-replication sac-poc 10.52.228.120::sac-poc config ignore-deletes 1
18:42 glustercjb1 Reserved option
18:42 glustercjb1 geo-replication command failed
18:42 glustercjb1 [root@dev604 ~]#
18:42 glustercjb1 I'll post a question to the list
18:43 Humble joined #gluster
19:52 Matthaeus joined #gluster
19:56 andreask joined #gluster
20:02 hagarth joined #gluster
20:17 kanagaraj joined #gluster
20:19 dblack joined #gluster
20:20 jclift joined #gluster
20:20 msvbhat joined #gluster
20:20 portante joined #gluster
20:21 mwoodson joined #gluster
20:23 edward1 joined #gluster
20:26 radez_g0n3 joined #gluster
20:27 jmarley joined #gluster
20:27 jmarley joined #gluster
20:29 pvh_sa joined #gluster
20:39 jmarley joined #gluster
20:50 glusterbot New news from newglusterbugs: [Bug 1088589] Failure in gf_log_init reopening stderr <https://bugzilla.redhat.com/show_bug.cgi?id=1088589>
20:51 zomg_ joined #gluster
21:02 chirino joined #gluster
21:06 gmcwhist_ joined #gluster
21:21 gmcwhist_ joined #gluster
21:28 gmcwhist_ joined #gluster
21:49 basso joined #gluster
21:56 hagarth joined #gluster
22:02 fidevo joined #gluster
22:05 Oneiroi joined #gluster
22:07 chirino joined #gluster
22:24 gdubreui joined #gluster
22:29 swat30 joined #gluster
22:37 hagarth joined #gluster
22:50 lmickh joined #gluster
22:53 Durzo joined #gluster
22:53 Durzo semiosis, around?
22:57 Ark joined #gluster
23:06 Durzo AWS question.. I have 2 gluster servers in 2 physically separate data centers (AZ1, AZ2) with clients in both.. how important is it that a client in AZ1 uses the gluster server in AZ1 as the mount server, as opposed to AZ2, when clients communicate to all servers in the cluster anyway?
23:06 Matthaeus Same region?
23:06 Durzo yeah
23:06 Matthaeus Don't worry about it.
23:06 Durzo and im setting backupovlserver=othernode
23:07 Matthaeus AWS abstracts away the AZ stuff anyway.  You're not even guaranteed that two instances in the same AZ are actually in the same physical building.
23:07 Matthaeus And AZ1 for your account is not the same as AZ1 for my account.
23:07 Durzo where else could they be?
23:07 Matthaeus Amazon has something like 11 buildings for us-east-1
23:07 Durzo i know my region (i have been to the data centers).. there are only 2 buildings they could possibly be in
23:08 Durzo im in ap-southeast-2
23:08 Matthaeus They only have two buildings?
23:08 Durzo sydney isnt very good :P
23:09 Matthaeus You're still fine from a performance standpoint.  You might be able to measure the difference if you look really hard, but the internal AWS architecture is still meant to be able to abstract AZ's away from physical buildings without the end users noticing.
23:15 sputnik13 joined #gluster
23:25 harish joined #gluster
23:29 Ark joined #gluster
23:37 tdasilva joined #gluster
23:39 Alex Durzo: The server that you specify in the mount is, I believe, unimportant - as it's ignored, and connections to all bricks are established anyway
23:39 Alex Durzo: Effectively it uses whatever it sees as the mount server to bootstrap itself and know where to go for the other bricks
23:39 Alex (assuming the gluster fuse stuff)
23:39 Durzo thats what i thought
23:40 Alex I didn't. I've had every box connecting to localhost in the desperate hope that it'll avoid inter-brick comms altogether. It doesn't. ;-)
23:40 Alex but the kind people of #gluster re-educated me... :)
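(A sketch of the mount Durzo describes, with made-up hostnames and assuming the backupvolfile-server fuse mount option is what he means: the named server is only used to fetch the volfile at mount time, the backup is a fallback for that bootstrap step, and actual I/O goes to all bricks regardless.)

    mount -t glusterfs -o backupvolfile-server=gluster-az2 gluster-az1:/myvol /mnt/myvol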
23:44 harish joined #gluster
23:51 glusterbot New news from newglusterbugs: [Bug 1088649] Some newly created folders have root ownership although created by non-user <https://bugzilla.redhat.com/show_bug.cgi?id=1088649>
