IRC log for #gluster, 2013-09-28

All times shown according to UTC.

Time Nick Message
00:02 jiqiren a2: is there a guide for doing this?
00:02 jiqiren fwiw, i'm on 3.3.1
00:03 a2 in 3.3.1 replace-brick would work
00:05 jiqiren i'm assuming this is ok to do as things are running
00:41 andrewklau joined #gluster
01:09 Chocobo joined #gluster
01:35 semiosis debian packages for 3.4.1 are now live on download.gluster.org :D
01:36 semiosis also uploaded the ubuntu packages to the ppa
01:37 semiosis and they are queued for building; will be live in hours
01:39 * semiosis 's kind of friday hehe
01:48 mibby hey semiosis have to admit I'm starting to lose it with my ec2 design!
01:51 mibby in your example using distributed replicated volumes, assuming 2 x AZ's (left and right).... with left-a replicating with right-a, left-b with right-b, etc.... so files are evenly distributed between a & b and a whole copy in each AZ....... if a single AZ goes down, this would be 50% of the total bricks, so by default (i know it's configurable) quorum is still achieved and read/write is still available?... or not?
01:59 semiosis no, in that scenario you would lose quorum (it actually needs 51% :) and the clients would turn read-only
02:00 semiosis thats as far as I know, never tried it myself, so i could be wrong
02:00 semiosis mibby:
02:01 andrewklau joined #gluster
02:01 sprachgenerator joined #gluster
02:02 mibby ok, I guess 51% sounds logical. So.. any recommendations for a single region dual AZ config that can survive a single AZ going down and still having RW quorum?
02:08 semiosis don't take my word for it but i doubt you can do it
02:09 semiosis you should ask the experts though, they're here mostly during US business hours on weekdays
02:09 semiosis although, we can try pinging hagarth_, he's here lots of different times
02:10 mibby if there were 3 x AZ's (which I can't do but am curious), left-a middle-a right-a, left-b middle-b right-b, etc... and one AZ went down would that work?
02:10 mibby ok thx... it's already Sat here in AU so I might try next week
02:10 semiosis yes that's my understanding
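The quorum arithmetic behind semiosis's answers can be checked directly. This is a hedged sketch, not gluster code: it only models the default server-quorum rule mentioned above (strictly more than half the peers must be reachable), with the AZ layout from mibby's question.

```python
# Sketch: does server quorum survive the loss of one availability zone?
# Assumes servers are spread evenly across AZs and the default quorum
# rule of "strictly more than 50% of peers reachable" (the "51%" above).
def quorum_survives_az_loss(num_azs, servers_per_az):
    total = num_azs * servers_per_az
    surviving = (num_azs - 1) * servers_per_az
    # strict majority: surviving > total / 2, kept in integers
    return surviving * 2 > total

print(quorum_survives_az_loss(2, 2))  # 2 AZs: exactly 50% left -> False (read-only)
print(quorum_survives_az_loss(3, 2))  # 3 AZs: ~67% left -> True
```

This matches the thread: with two AZs a whole-AZ outage leaves exactly 50%, which fails a strict-majority check, while with three AZs two thirds survive.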
02:12 mibby but having that 3rd AZ in another region would break it because of the additional latency?
02:13 mibby if you dont mind me asking what TZ are you in? You always seem to be around when I jump in here
02:17 semiosis you should also try the gluster-users mailing list.  i'm not active on there (just lurk occasionally) but my impression is that it's active all the time
02:17 semiosis @mailing list
02:17 semiosis glusterbot: thanks
02:17 semiosis glusterbot: meh
02:17 semiosis ?!?!
02:17 semiosis @ports
02:17 semiosis glusterbot: reconnect
02:17 semiosis wow something is srsly wrong with glusterbot
02:17 semiosis mibby: can you see all my glusterbot chatter?  my last ~5 messages?  just checking it's not me
02:17 semiosis mibby: can you see this message?
02:17 glusterbot semiosis: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
02:17 glusterbot semiosis: you're welcome
02:17 glusterbot semiosis: I'm not happy about it either
02:17 glusterbot semiosis: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
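glusterbot's port factoid translates into firewall rules roughly as follows. This is a sketch for a 3.4 deployment using iptables; the width of the brick-port range (100 ports here) is an arbitrary assumption, since bricks use 49152 & up and the counter never resets.

```shell
# Open GlusterFS 3.4 ports on each server (sketch; adjust ranges to taste)
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT  # glusterd mgmt, +24008 for rdma
iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT  # bricks (glusterfsd), 3.4+; range width is a guess
iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT  # gluster NFS, +38468 for NLM since 3.3.0
iptables -A INPUT -p tcp --dport 111 -j ACCEPT          # rpcbind/portmap (NFS dependency)
iptables -A INPUT -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT         # NFS, since 3.4
```

For pre-3.4 bricks, substitute 24009 & up for the 49152 range, as the factoid notes.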
02:18 mibby all your chatter just came through in one big hit
02:18 semiosis that was so weird
02:18 mibby just now
02:18 semiosis never seen that before, and i've been on irc a long time
02:19 semiosis mibby: so anyway, try the gluster-users mailing list :)
02:19 semiosis i suspect you'll get answers there before monday
02:20 semiosis glusterbot: whoami
02:20 glusterbot semiosis: semiosis
02:20 semiosis :D
02:20 mibby haha
02:20 mibby ok cool, thanks for all your help
02:21 mibby I wish AU had 3 x AZ's :/
02:21 semiosis @learn mailing list as the gluster general discussion mailing list is gluster-users, here: http://www.gluster.org/mailman/listinfo/gluster-users
02:21 glusterbot semiosis: The operation succeeded.
02:22 semiosis @mailing list
02:22 glusterbot semiosis: the gluster general discussion mailing list is gluster-users, here: http://goo.gl/2zvu7
02:22 semiosis glusterbot: thx
02:22 glusterbot semiosis: you're welcome
02:22 semiosis :O
02:22 mibby :)
02:25 semiosis that lag was truly bizarre... for a minute there i thought maybe it was just the effects of the beer i'm drinking
02:26 jporterfield left #gluster
02:30 mibby maybe it was... maybe it was... ;)
02:34 sac`away joined #gluster
02:34 * semiosis <-- mind... blown
02:36 sac__ joined #gluster
02:36 yinyin joined #gluster
02:36 mibby I just caught up on all my Breaking Bad viewing last night... my mind is definitely blown!
02:36 purpleidea mibby: no spoilers!
02:36 purpleidea or kb
02:37 semiosis heh, i just saw that show for the first time at my folks place on weds... they had it on while i worked on their computer
02:37 mibby yep yep I wouldn't do that.
02:37 mibby just..... wow... and I'll leave it at that.
02:37 sac_ joined #gluster
02:37 sac`away` joined #gluster
02:37 semiosis tv shows don't usually do it for me, and that one was no exception
02:37 purpleidea @learn hack,hack,hack as keep hacking
02:38 glusterbot purpleidea: The operation succeeded.
02:38 purpleidea thanks glusterbot
02:38 purpleidea @hack,hack,hack
02:38 glusterbot purpleidea: keep hacking
02:39 semiosis i'm more inclined to listen to music, as i am now for example, while i ,,(hack,hack,hack)
02:39 glusterbot keep hacking
02:39 purpleidea hehe
02:39 semiosis gluster, a beer, and bob marley... my kind of friday
02:40 semiosis :O
02:40 purpleidea semiosis: you're chilling with bob marley! sweet
02:40 semiosis bob marley >> breaking bad
02:40 semiosis there's your spoiler
02:41 GLHMarmot joined #gluster
02:41 mibby haha
02:44 purpleidea semiosis: did you have a chance to see my email on the ml about algorithms ?
02:44 semiosis i dont usually read the ml :/
02:44 semiosis but i will read it now
02:45 DV__ joined #gluster
02:47 purpleidea semiosis: see if you know the answer to that one question... if you did, then you wouldn't have to ever gluster admin anymore. you'd be able to puppet: 'include gluster' and then have it add/remove bricks
02:47 purpleidea automatically
02:47 purpleidea also i like the facemask
02:48 semiosis ha yeah, our office flooded and the next day a demolition crew (whose office/warehouse happens to be in the next bldg over) came through and cut the bottom two feet off all our walls
02:49 semiosis it was pretty dusty!
02:51 wgao joined #gluster
02:52 semiosis purpleidea: ml subject line?
02:56 purpleidea semiosis: http://supercolony.gluster.org/pipermail/gluster-users/2013-September/037488.html
02:56 glusterbot <http://goo.gl/P98Avd> (at supercolony.gluster.org)
02:57 semiosis thx
02:57 sac`away joined #gluster
02:58 sac_ joined #gluster
03:11 andrewklau joined #gluster
03:13 mtanner_ joined #gluster
03:13 chirino_m joined #gluster
03:16 neofob joined #gluster
03:36 jglo joined #gluster
03:38 jglo1 joined #gluster
03:47 t4bs joined #gluster
04:01 mohankumar joined #gluster
04:18 Chocobo joined #gluster
04:18 Chocobo joined #gluster
04:24 Chocobo joined #gluster
04:24 Chocobo joined #gluster
04:45 Chocobo_ joined #gluster
05:05 Chocobo_ joined #gluster
05:09 XpineX joined #gluster
05:10 davinder2 joined #gluster
05:22 semiosis @later tell kkeithley when we both have some time to spare, i'd like to hear more about your idea for a /debian/ packaging dir included in the glusterfs source tree. ping me.
05:22 glusterbot semiosis: The operation succeeded.
05:25 Guest16688 joined #gluster
05:26 sgowda joined #gluster
05:29 yinyin joined #gluster
05:30 Chocobo_ joined #gluster
05:44 Chocobo_ joined #gluster
06:10 davinder joined #gluster
06:17 fyxim joined #gluster
06:18 johnmwilliams joined #gluster
06:25 arusso joined #gluster
07:02 davinder joined #gluster
07:12 ujjain2 joined #gluster
07:12 ujjain2 joined #gluster
07:18 yinyin_ joined #gluster
07:30 davinder joined #gluster
08:06 mohankumar joined #gluster
08:12 yinyin_ joined #gluster
08:23 andrewklau joined #gluster
08:48 StarBeast joined #gluster
09:06 yinyin_ joined #gluster
09:26 yinyin_ joined #gluster
09:35 RedShift joined #gluster
10:15 t4bz joined #gluster
10:28 dneary joined #gluster
10:30 andrewklau joined #gluster
10:33 andrewklau joined #gluster
10:44 andrewklau joined #gluster
11:00 polfilm joined #gluster
11:03 davinder joined #gluster
11:09 cyberbootje joined #gluster
11:45 ricky-ticky joined #gluster
12:14 ricky-ticky joined #gluster
12:21 ricky-ticky joined #gluster
12:32 andrewklau joined #gluster
12:49 mohankumar joined #gluster
12:54 andrewklau joined #gluster
12:55 yinyin_ joined #gluster
13:31 t4bs_ joined #gluster
13:43 MediaSmurf joined #gluster
13:44 toad- joined #gluster
14:03 shruti joined #gluster
14:07 DV joined #gluster
14:13 saurabh joined #gluster
14:18 recidive joined #gluster
14:32 failshell joined #gluster
14:38 yinyin_ joined #gluster
14:45 foster joined #gluster
14:51 XpineX joined #gluster
14:59 andrewklau left #gluster
15:06 yinyin_ joined #gluster
15:35 yinyin_ joined #gluster
16:14 B21956 joined #gluster
16:14 B21956 left #gluster
17:49 recidive joined #gluster
18:13 shruti joined #gluster
18:40 derelm joined #gluster
19:11 derelm can someone describe what effect scaling gluster has on bandwidth requirements for client and server(s)? for example, what happens if i add a replica or additional servers with stripe volumes
19:33 Remco From what I understand, a replica will cut your client bandwidth in as many pieces as there are replicas
19:33 StarBeast joined #gluster
19:33 Remco Stripes are only needed if you have files that are larger than your bricks, and will increase throughput in some cases
19:34 Remco Distribution will make clients use server I/O better, since they can fetch files from multiple bricks
19:41 derelm my current setup is two servers with replicas each one using fuse mount of the volume and exporting it via http - now i noticed the bandwidth between the servers is the same as http outgoing bandwidth. that lead me to the question what happens if i added a third server / replica
19:42 derelm i am probably somewhat misusing gluster for ha
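Remco's point about replicas dividing client bandwidth can be put in numbers. A hedged sketch, assuming client-side replication (the FUSE client writes each file once per replica over its own uplink), which also explains why derelm's inter-server traffic tracks the HTTP traffic: each server is itself a client of the volume it fuse-mounts.

```python
# Sketch: with client-side replication, a write is sent to every
# replica, so usable client write throughput is uplink / replica_count.
def effective_write_bandwidth(client_uplink_mbps, replica_count):
    return client_uplink_mbps / replica_count

print(effective_write_bandwidth(1000, 2))  # 500.0 Mbps usable
print(effective_write_bandwidth(1000, 3))  # ~333.3 Mbps usable
```

So adding a third replica server would cut derelm's effective write bandwidth from 1/2 to 1/3 of the uplink; reads are not multiplied the same way, since they are served from one replica.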
21:10 StarBeast joined #gluster
21:35 StarBeast joined #gluster
22:11 fidevo joined #gluster
22:14 mooperd joined #gluster
22:46 StarBeast joined #gluster
23:41 rcheleguini joined #gluster
23:56 fidevo joined #gluster
23:57 dneary joined #gluster