
IRC log for #gluster, 2014-04-23


All times shown according to UTC.

Time Nick Message
00:06 yinyin_ joined #gluster
00:10 gdubreui joined #gluster
00:38 vpshastry joined #gluster
00:41 Ark joined #gluster
00:43 Durzo joined #gluster
00:44 Durzo semiosis, is there a 3.5 ppa yet?
00:56 atrius joined #gluster
00:58 chirino joined #gluster
01:00 jmarley joined #gluster
01:00 jmarley joined #gluster
01:18 baojg joined #gluster
01:21 gdubreui joined #gluster
01:25 jag3773 joined #gluster
01:28 chirino joined #gluster
01:38 Honghui joined #gluster
01:39 gmcwhistler joined #gluster
01:53 glusterbot New news from newglusterbugs: [Bug 1085671] [barrier] reconfiguration of barrier time out does not work <https://bugzilla.redhat.com/show_bug.cgi?id=1085671>
01:57 harish_ joined #gluster
02:08 yinyin_ joined #gluster
02:11 purpleidea user tg2 asks: "where are the docs for > v3.2? from the gluster.org homepage there is a link to: http://www.gluster.org/community/documentation/index.php/Main_Page which is a bit of a mess and doesn't seem to have any docs whatsoever".
02:11 glusterbot Title: GlusterDocumentation (at www.gluster.org)
02:52 haomaiwa_ joined #gluster
02:54 harish_ joined #gluster
02:58 Humble joined #gluster
02:58 vpshastry joined #gluster
03:01 hchiramm_ joined #gluster
03:14 rastar joined #gluster
03:30 chirino joined #gluster
03:31 shubhendu joined #gluster
03:46 gmcwhistler joined #gluster
03:52 ndarshan joined #gluster
03:54 itisravi joined #gluster
04:02 RameshN joined #gluster
04:05 kanagaraj joined #gluster
04:07 JustinClift joined #gluster
04:19 tgerhrhsrthrt6h joined #gluster
04:19 tgerhrhsrthrt6h hi
04:19 glusterbot tgerhrhsrthrt6h: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
04:19 glusterbot answer.
04:20 tgerhrhsrthrt6h I am very lost.
04:20 tgerhrhsrthrt6h I spun up a DigitalOcean Cloud VPS but I don't know how to install gluster. I am reading the directions, but my VPS does not have a sdb1
04:20 tgerhrhsrthrt6h at this point
04:20 tgerhrhsrthrt6h #gluster
04:20 tgerhrhsrthrt6h oops
04:20 tgerhrhsrthrt6h mkfs.xfs -i size=512 /dev/sdb1   mkdir -p /data/brick1   vi /etc/fstab
04:20 tgerhrhsrthrt6h it does not work
04:21 tgerhrhsrthrt6h if anybody can help, that would be reat.
04:21 tgerhrhsrthrt6h thanks
04:21 tgerhrhsrthrt6h *great
04:22 Humble joined #gluster
04:23 tgerhrhsrthrt6h hi
04:23 glusterbot tgerhrhsrthrt6h: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
04:23 glusterbot answer.
04:25 bala joined #gluster
04:25 rjoseph joined #gluster
04:30 ravindran1 joined #gluster
04:32 vpshastry joined #gluster
04:33 joshin joined #gluster
04:33 joshin joined #gluster
04:42 yinyin_ joined #gluster
04:43 hagarth joined #gluster
04:43 kumar joined #gluster
04:48 dusmant joined #gluster
04:56 kdhananjay joined #gluster
04:57 aravindavk joined #gluster
05:02 baojg joined #gluster
05:04 baojg_ joined #gluster
05:07 haomaiwang joined #gluster
05:16 atinmu joined #gluster
05:18 primechuck joined #gluster
05:24 ppai joined #gluster
05:26 Pavid7 joined #gluster
05:30 prasanth_ joined #gluster
05:33 kanagaraj joined #gluster
05:34 Honghui joined #gluster
05:35 aravindavk joined #gluster
05:40 davinder joined #gluster
05:50 rjoseph joined #gluster
05:54 raghu joined #gluster
05:54 nshaikh joined #gluster
05:54 lalatenduM joined #gluster
05:58 Philambdo joined #gluster
06:01 hchiramm_ joined #gluster
06:02 chirino joined #gluster
06:08 rahulcs joined #gluster
06:11 snehal joined #gluster
06:11 morse_ joined #gluster
06:14 foster joined #gluster
06:15 Andy5__ joined #gluster
06:18 edward1 joined #gluster
06:21 caosk_kevin joined #gluster
06:24 glusterbot New news from newglusterbugs: [Bug 1090298] Addition of new server after upgrade from 3.3 results in peer rejected <https://bugzilla.redhat.com/show_bug.cgi?id=1090298>
06:24 ricky-ticky joined #gluster
06:25 caosk_kevin hi all ,
06:25 caosk_kevin gluster and zfs is a good partner???????
06:31 jtux joined #gluster
06:32 samppah intresting at least :)
06:35 aravindavk joined #gluster
06:37 rahulcs joined #gluster
06:37 ngoswami joined #gluster
06:37 caosk_kevin samppah: there are very serious problems  about gluster and zfs together???
06:38 atinmu joined #gluster
06:38 samppah caosk_kevin: there are some caveats and it's not officially supported
06:38 samppah but there are people that are using it
06:40 Pavid7 joined #gluster
06:41 ekuric joined #gluster
06:45 * Durzo pokes semiosis
06:48 kanagaraj joined #gluster
06:48 psharma joined #gluster
06:50 rahulcs joined #gluster
06:51 psharma_ joined #gluster
06:52 caosk_kevin samppah: gluster work well with XFS???
06:53 kanagaraj joined #gluster
06:53 Durzo caosk_kevin, XFS is the recommended fs for gluster
06:55 kumar joined #gluster
06:57 rbw joined #gluster
07:01 caosk_kevin Durzo: thanks
07:01 eseyman joined #gluster
07:03 harish_ joined #gluster
07:04 chirino joined #gluster
07:15 ctria joined #gluster
07:16 foster joined #gluster
07:17 JoeJulian @later tell tgerhrhsrthrt6h If you happen to come back, you should probably learn about basic filesystems before you try a more advanced clustered one. It should be clear from those instructions you quoted that you're trying to mount a filesystem somewhere. If you don't have a device to do that with, you should know how to create one. If you can't do that yet then I'm afraid this might be a little too advanced yet.
07:17 glusterbot JoeJulian: The operation succeeded.
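For anyone who lands here with the same quick-start problem (no spare /dev/sdb1 on a small VPS), a minimal sketch of creating a brick filesystem from a loop-backed file instead; the device name, file path and size are only examples, and a loop-backed brick is really only sensible for testing:

    # Create a file-backed XFS brick when the VPS has no spare block device
    truncate -s 10G /var/lib/brick1.img          # sparse backing file, size is illustrative
    losetup /dev/loop0 /var/lib/brick1.img       # attach the file as a loop device
    mkfs.xfs -i size=512 /dev/loop0              # same inode size the quick-start guide uses
    mkdir -p /data/brick1
    mount /dev/loop0 /data/brick1                # add an entry to /etc/fstab to make it persistent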
07:29 andreask joined #gluster
07:30 foster joined #gluster
07:32 aravindavk joined #gluster
07:37 fsimonce joined #gluster
07:43 sputnik13 joined #gluster
07:54 glusterbot New news from newglusterbugs: [Bug 1090363] Add support in libgfapi to fetch volume info from glusterd. <https://bugzilla.redhat.com/show_bug.cgi?id=1090363>
08:02 glafouille joined #gluster
08:03 Honghui joined #gluster
08:06 haomaiwa_ joined #gluster
08:13 baojg joined #gluster
08:15 baojg_ joined #gluster
08:28 atinmu joined #gluster
08:34 knfbny joined #gluster
08:39 lanning joined #gluster
08:45 kasturi joined #gluster
08:48 lalatenduM kasturi, its #gluster-meeting
08:48 kasturi okay
08:48 kasturi lalatenduM, okay
08:55 vimal joined #gluster
08:56 giannello joined #gluster
09:00 baojg joined #gluster
09:06 jvandewege joined #gluster
09:08 rahulcs joined #gluster
09:12 haomai___ joined #gluster
09:16 rahulcs joined #gluster
09:18 ravindran1 joined #gluster
09:21 * Durzo pokes semiosis
09:25 rahulcs_ joined #gluster
09:34 caosk_kevin joined #gluster
10:00 meghanam joined #gluster
10:00 baojg joined #gluster
10:12 ravindran1 joined #gluster
10:15 baojg_ joined #gluster
10:20 caosk_kevin joined #gluster
10:21 kanagaraj joined #gluster
10:24 kasturi_ joined #gluster
10:35 Pavid7 joined #gluster
10:39 chirino joined #gluster
10:40 atinmu joined #gluster
10:43 cyber_si_ joined #gluster
10:45 edward1 joined #gluster
10:46 RameshN joined #gluster
10:48 Honghui joined #gluster
10:52 shubhendu joined #gluster
10:52 rjoseph joined #gluster
10:53 ndarshan joined #gluster
10:55 deepakcs joined #gluster
11:02 baojg joined #gluster
11:04 tdasilva joined #gluster
11:05 gdubreui joined #gluster
11:09 chirino joined #gluster
11:20 ppai joined #gluster
11:28 aravindavk joined #gluster
11:29 foster_ joined #gluster
11:38 bala joined #gluster
11:40 Andy5__ joined #gluster
11:42 diegows joined #gluster
11:45 Ark joined #gluster
11:45 mjrosenb joined #gluster
11:45 decimoe joined #gluster
11:49 foster joined #gluster
11:51 ndarshan joined #gluster
11:51 DV joined #gluster
11:52 nshaikh joined #gluster
11:56 ktosiek joined #gluster
12:01 ktosiek I'm having problems with using GlusterFS for archiving postgresql WALs (pretty much copying whole files to the volume most of the time): http://pastebin.mozilla.org/4916606
12:01 glusterbot Title: Mozilla Pastebin - collaborative debugging tool (at pastebin.mozilla.org)
12:02 ktosiek I'm getting "server 10.0.1.2:24011 has not responded in the last 42 seconds, disconnecting." all the time, then it reconnects, finishes the operation it was doing, and fails in the same way after a minute or two
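The 42 seconds in that message is GlusterFS's default network.ping-timeout, so the client is declaring the brick dead after the standard timeout. A hedged example of checking and raising it ("myvol" is a placeholder; a longer timeout only buys time and does not fix whatever is making the brick stop responding):

    gluster volume info myvol                            # reconfigured options are listed at the end
    gluster volume set myvol network.ping-timeout 120    # default is 42 seconds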
12:07 Norky joined #gluster
12:09 saurabh joined #gluster
12:12 ira_ joined #gluster
12:16 ktosiek I'm also worried with FILE EXISTS errors...
12:16 ktosiek s/with/by/
12:16 glusterbot What ktosiek meant to say was: I'm also worried by FILE EXISTS errors...
12:16 ktosiek glusterbot: nice one :-D
12:18 jmarley joined #gluster
12:18 jmarley joined #gluster
12:18 itisravi joined #gluster
12:23 ndarshan joined #gluster
12:26 tdasilva left #gluster
12:39 * Durzo pokes semiosis
12:43 kanagaraj joined #gluster
12:44 ktosiek I'm getting TONS of errors like this in brick's logs: "open on /data/pg_archive/brick/0000000100000003000000C6: File exists" (for different files) - what might cause this?
12:44 [o__o] joined #gluster
12:45 shubhendu joined #gluster
12:50 Humble joined #gluster
12:56 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611> || [Bug 1090488] [barrier] O_SYNC writes from libgfapi are not barriered <https://bugzilla.redhat.com/show_bug.cgi?id=1090488>
12:57 jobewan joined #gluster
12:57 harish_ joined #gluster
13:00 japuzzo joined #gluster
13:03 scuttle_ joined #gluster
13:04 bennyturns joined #gluster
13:04 andreask joined #gluster
13:07 baojg joined #gluster
13:11 foster joined #gluster
13:12 suliba joined #gluster
13:12 seddrone joined #gluster
13:15 chirino joined #gluster
13:19 rahulcs joined #gluster
13:22 John_HPC Anyone seeing this after upgrading to 3.5 on CentOS 5.10.   http://paste.ubuntu.com/7309952/
13:22 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:24 Pavid7 joined #gluster
13:29 dbruhn joined #gluster
13:33 badone joined #gluster
13:45 sroy_ joined #gluster
13:55 zaitcev joined #gluster
13:57 davinder joined #gluster
14:00 John_HPC Heres the error with glusterd when running it in the foreground: http://paste.ubuntu.com/7314858/
14:00 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:01 dbruhn John_HPC, what did you upgrade from?
14:01 dbruhn and to
14:01 John_HPC 3.4.3-3 to 3.5
14:02 dbruhn did you backup your config files before upgrading?
14:02 JoseBravo joined #gluster
14:02 John_HPC ya
14:04 dbruhn Seems like something didn't upgrade properly, could you uninstall, put your backed up files back in place, and install the original version?
14:04 John_HPC shuold be able to
14:05 dbruhn Assuming this is production and the goal is to get back up as quickly as possible
14:06 kaptk2 joined #gluster
14:12 John_HPC Ok. back to 3.4.3-3 and it started just fine
14:13 dbruhn seems like the error is complaining of a failed upgrade
14:13 dbruhn could you log a bug in the bugzilla
14:13 dbruhn that should be looked into for sure
14:16 liammcdermott joined #gluster
14:18 liammcdermott Just wanted to point out that the instructions for Ubuntu on the following page do not work...
14:18 liammcdermott http://www.gluster.org/community/documentation/index.php/Getting_started_install
14:18 glusterbot Title: Getting started install - GlusterDocumentation (at www.gluster.org)
14:18 dbruhn liammcdermott, what is wrong with the instructions
14:18 liammcdermott The wget command returns 404
14:19 liammcdermott I believe people should be using a PPA now
14:19 dbruhn agh ok, were you able to get it working?
14:19 liammcdermott https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
14:19 glusterbot Title: ubuntu-glusterfs-3.4 : semiosis (at launchpad.net)
14:19 liammcdermott Haven't got that far yet, I'll let you know. :)
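For reference, a sketch of installing from that PPA on Ubuntu (the PPA name is taken from the link above; on older releases add-apt-repository lives in the python-software-properties or software-properties-common package):

    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client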
14:20 rahulcs joined #gluster
14:20 dbruhn Ok, once you have you can register for the wiki, it would be super appreciated if you would make any corrections to ensure the next person doesn't run into the same issue :)
14:23 rahulcs joined #gluster
14:23 liammcdermott Am running short on time today, but will do it if I can!
14:23 rahulcs joined #gluster
14:23 lmickh joined #gluster
14:25 kkeithley Reminder!!!
14:25 kkeithley The weekly Gluster Community meeting is in 40 mins, in #gluster-meeting  on IRC.
14:25 kkeithley This is a completely public meeting, everyone is encouraged to attend  and be a part of it.
14:25 kkeithley To add Agenda items
14:25 kkeithley *******************
14:25 kkeithley Just add them to the main text of the Google Doc (which we're trying out  this week), and be at the meeting.
14:25 kkeithley * http://goo.gl/NTLhYu
14:25 glusterbot Title: glusterpad - Google Docs (at goo.gl)
14:26 ktosiek can glusterfsd 3.4 work with 3.2 client?
14:28 ndevos ktosiek: no, 3.3 and newer have a little different rpc protocol
14:29 Pavid7 joined #gluster
14:36 plarsen joined #gluster
14:43 tdasilva joined #gluster
14:48 atinmu joined #gluster
14:51 wushudoin joined #gluster
14:53 pvh_sa joined #gluster
14:57 jdarcy joined #gluster
14:59 _Bryan_ joined #gluster
15:01 kkeithley gluster community meeting is starting now in #gluster-meeting
15:02 kasturi joined #gluster
15:04 gmcwhistler joined #gluster
15:05 badone joined #gluster
15:06 hagarth joined #gluster
15:08 jbrooks joined #gluster
15:09 Pavid7 joined #gluster
15:10 rahulcs joined #gluster
15:16 bennyturns joined #gluster
15:20 pvh_sa joined #gluster
15:26 baojg joined #gluster
15:27 theron joined #gluster
15:35 semiosis Durzo: re: 3.5.0 in ppa, yes tonight
15:37 sroy__ joined #gluster
15:39 shubhendu joined #gluster
15:45 rahulcs joined #gluster
15:51 jbd1 joined #gluster
15:52 chirino joined #gluster
15:53 nage joined #gluster
15:53 nage joined #gluster
15:56 sputnik13 joined #gluster
15:58 rahulcs joined #gluster
16:06 T0aD joined #gluster
16:09 in joined #gluster
16:11 badone joined #gluster
16:11 cdez joined #gluster
16:13 silky joined #gluster
16:17 cdez joined #gluster
16:18 n0de hey gang, it has been a while since I popped my head in here, my last visit was the start of rebalancing my Distributed-Replicate volume. That was beginning of March, it is still going due to the volume being very big in size (283TB)
16:18 rahulcs joined #gluster
16:19 n0de What I am seeing at this point is one of my storage nodes is sitting very high in load (around 100 mark) and the memory usage looks odd - http://cl.ly/image/3P1O0L0l1I2E
16:19 glusterbot Title: Image 2014-04-23 at 12.19.26 PM.png (at cl.ly)
16:20 n0de I would like to restart the server to flush things out, but do not wish to burden it further when it comes time to rebuild all that memory cache again.
16:20 n0de Any tips?
16:22 Mo__ joined #gluster
16:22 deepakcs joined #gluster
16:23 dbruhn n0de, not sure what you are asking? The rebalance op is known to be a fairly heavy operation.
16:24 n0de dbruhn: What I dont understand is why only one or two of the nodes is under heavy load
16:24 n0de while the rest are fine
16:25 anoopcs joined #gluster
16:25 dbruhn have you checked the status of the rebalance?
16:26 n0de I would love to take the nodes with the high load down for a RAM / CPU upgrade in order to try and give it some additional resources for the rebuild to go smoother, but my fear is that if I reboot the box it will need to basically start fresh again with the memory cache and the load can at that point spiral out of control.
16:27 anoopcs left #gluster
16:27 n0de Node Rebalanced-files          size       scanned      failures       skipped         status run time in secs
16:27 n0de 10.0.144.12                0        0Bytes      16576731             0             0    in progress       4144472.00
16:27 dbruhn use fpaste for the info
16:28 n0de It wasn't much info, otherwise I always use fpaste to keep the noise down
16:28 dbruhn how many gluster servers do you have?
16:28 JoeJulian What version is that?
16:29 n0de glusterfs 3.4.2 built on Feb 28 2014 16:37:00
16:29 n0de I have 6 storage nodes
16:29 n0de 4 original, and 2 more which were added.
16:29 zerick joined #gluster
16:29 n0de 1 of the two new nodes is the one with the constant high load
16:29 n0de memory and CPU seem to be the bottle neck
16:30 n0de bottleneck*
16:31 Matthaeus joined #gluster
16:31 Guest20136 n0de: if it is cpu what is the %wa in top? Divide 1/$number_of_processors cat /proc/cpuinfo should be less then the %wa
16:31 n0de I have 24GB RAM in the storage nodes, CPU is 2 x Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz
16:32 JoeJulian 4 files persecond?!?!
16:33 n0de Guest20136: wa in top is between 0 and 10
16:33 n0de it jumps around between that range
16:34 codex joined #gluster
16:37 Guest20136 what are the block sizes on the 283TB volume? Does the node with 100 load have a bigger block size? Did you check sar -d for disk i/o? Any swap going on (sar -S)?
16:38 Guest20136 You running 2x 1gb bonded nics? Does ifconfig show any dropped / overrun / frame counts?
16:38 n0de Guest20136: for i in 4 5 6 7 8; do mkfs.ext4 -b 4096 -E stride=128,stripe-width=896 -I 1024 -O ^resize_inode /dev/md$i;done
16:40 n0de These storage nodes hold 45 x 3TB drives each
16:40 Guest20136 just as an fyi the L in the cpu name means low voltage. I like this website to review cpus http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon%20E5-2630L.html
16:40 glusterbot Title: Intel Xeon E5-2630L - CM8062107185405 (at www.cpu-world.com)
16:40 n0de Guest20136: yes, we like to run the datacenter as efficient as possible.
16:42 rahulcs_ joined #gluster
16:43 JoeJulian n0de: Does your rebalance status really only show just one server? It should be all of them.
16:44 n0de JoeJulian: it shows a bunch, but no real stats...says localhost alot
16:44 vpshastry joined #gluster
16:44 JoeJulian ok
16:45 n0de is it better to execute the rebalance status command on the client or storage node?
16:45 JoeJulian server
16:45 JoeJulian @glossary
16:45 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
16:45 chirino joined #gluster
16:46 n0de got it
16:46 n0de http://cl.ly/image/3n2F150G3E09
16:46 glusterbot Title: Image 2014-04-23 at 12.45.42 PM.png (at cl.ly)
16:52 n0de JoeJulian: hope that screenshot shows you something useful
17:16 badone joined #gluster
17:23 Gilbs1 joined #gluster
17:25 bennyturns joined #gluster
17:33 MacWinner joined #gluster
17:36 MeatMuppet joined #gluster
17:38 Gilbs1 I'm running security scans on my gluster boxes and about 60% though I'm getting Out of Memory errors and glusterfs is killed.  Has anyone seen or run into this issue before?   CentOS 6.5 -- gluster 3.4.2
17:41 rahulcs joined #gluster
17:43 glusterbot New news from resolvedglusterbugs: [Bug 1089432] glusterfs-server-3.5.0-1.el6.x86_64 postinstall script failure <https://bugzilla.redhat.com/show_bug.cgi?id=1089432>
17:43 rjoseph joined #gluster
17:45 kkeithley @later tell JoeJulian I just put glusterfs-3.5.0-2.el[56] RPMs on download.gluster.org to fix the non-fatal error in the glusterfs-server install. I hope you've got all your puppet updates in good order.
17:45 glusterbot kkeithley: The operation succeeded.
17:45 kkeithley John_HPC: ^^^
17:52 John_HPC Yes
17:52 John_HPC Thanks, I'll give it a try
17:53 vpshastry left #gluster
17:53 jbrooks joined #gluster
17:54 John_HPC kkeithley: Gluster is still failing to start. This is upgrading from 3.4.3-3 to 3.5.0-2. Here is what my yum update looks like: http://paste.ubuntu.com/7316361/ and the error I get when trying to run glusterd in the forground: http://paste.ubuntu.com/7314858/
17:54 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:55 John_HPC Falling back to 3.4.3-3, I can restart gluster with no issues.
17:55 John_HPC failure: not a valid option: ssh-command-tar, I am running a custom ssh client that is intended to work with Kerberos. Its based on 6.6p1
17:58 jbd1 Gilbs1: the security scanner is what is using all the memory, no?  That's a separate issue from the oom-killer killing glusterfs.  WRT oom-killer, you should tell the system not to kill glusterfs :)
17:59 jbd1 To tell the oom-killer not to kill a particular PID, echo -17 > /proc/<pid>/oom_score_adj
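One detail worth noting: -17 is the magic value for the older /proc/<pid>/oom_adj interface, while the newer oom_score_adj file takes values from -1000 (never kill) to +1000. A sketch covering both the brick and client processes; the setting does not survive a process restart, so it would need reapplying after glusterd restarts them:

    # Exempt gluster processes from the OOM killer (run as root)
    for pid in $(pidof glusterfsd glusterfs); do
        echo -1000 > /proc/$pid/oom_score_adj    # -1000 = never select this process
    done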
18:00 Gilbs1 Maybe, when I was on 3.3 we had these same scans without any oom issues. :)   This started when I upgraded to 3.4.
18:01 Gilbs1 But thank you, let me configure that oom-killer.
18:08 Peanut joined #gluster
18:08 jbd1 Gilbs1: it's probable that 3.4 uses more memory than 3.3.  You may be able to tweak your server-side cache parameters to reduce memory utilization on 3.4.  GlusterFS doesn't like to share a system though :)
18:12 JoeJulian kkeithley: Thanks. :D I only point to the 3.4 repo so I should be safe.
18:13 Gilbs1 Yeah, he's greedy like SQL lol
18:13 JoeJulian John_HPC: That's Centos 5, right?
18:15 JoeJulian Ah yes, there it is... I thought so.
18:17 JoeJulian kkeithley, John_HPC: The "with" keyword is not in python 2.4.
18:21 pvh_sa joined #gluster
18:22 JoeJulian John_HPC: If you're not using georeplication, you can uninstall that package to use 3.5. Otherwise it looks like you're going to be dead-ended at 3.4 on Centos 5.
18:22 rahulcs joined #gluster
18:28 n0de JoeJulian: did you have a chance to look at that last screenshot I sent of the rebalance status
18:31 Intensity joined #gluster
18:31 lpabon joined #gluster
18:39 jbd1 heh, the last rebalance I ran, GlusterFS told me it had scanned something like 126 trillion files
18:46 n0de jbd1: how long did that take? :)
18:46 n0de and how big was the volume?
18:46 n0de my rebalance has been running since beginning of March
18:48 jbd1 n0de: I don't believe there were that many files.  The volume is only 64T and 31T used.  The fix-layout (I haven't run the actual rebalance yet) took about 5 weeks.
18:49 dbruhn weird, fix-layout usually goes way faster
18:50 jbd1 It seems like every time I do something with rebalance or fix-layout (on 3.3.2) something goes wrong.  This time, I had a server crash (kernel panic) about a week into the fix-layout.  Last time (on 3.2) I got bitten by the FD leak and had to reboot a server during a rebalance, which led to extensive permissions issues on the volume
18:51 n0de jbd1: thanks for the heads up, so far I have had no issues with the uptime on the storage nodes
18:51 dbruhn 3.3.x has a known memory leak with rebalance ops
18:51 n0de I was thinking of taking the machines down one by one to upgrade the memory / cpu but now I am starting to think that it may be a bad idea mid rebalance
18:51 John_HPC JoeJulian: Thanks. I am replicating it Gluster0[1-3] is one copy of the data and Gluster0[4-6] is another copy
18:51 n0de Things may spiral out of control
18:52 jbd1 n0de: yeah, I would highly recommend letting the rebalance complete first, based on my experience.
18:52 jbd1 dbruhn: then I'll upgrade to 3.4.x before doing my rebalance.  My fullest bricks are only at 78% so I have some time
18:52 dbruhn n0de, the rebalance restarts from the beginning
18:52 dbruhn I am not sure if the memory leak is still present in 3.4
18:53 n0de jbd1: I upgraded to Gluster 3.4.2 prior to adding in the two new storage nodes
18:54 n0de and at that time my storage nodes were at 90% full :(
18:54 pvh_sa joined #gluster
18:54 n0de 283T  178T   91T
18:54 n0de 91TB free atm
18:54 jbd1 dbruhn: https://bugzilla.redhat.com/show_bug.cgi?id=985957 ?
18:54 glusterbot Bug 985957: high, unspecified, ---, nsathyan, NEW , Rebalance memory leak
18:55 ctria joined #gluster
18:55 jobewan joined #gluster
18:55 n0de only one of the two new storage nodes is showing weird memory usage - http://cl.ly/image/3P1O0L0l1I2E
18:55 glusterbot Title: Image 2014-04-23 at 12.19.26 PM.png (at cl.ly)
18:55 n0de I am afraid its only a matter of time till things go boom :(
18:57 dbruhn n0de, the last couple rebalance ops I've done I've just baby sat the processes. Its often better to just run the fix-layout and if you have enough change the fill levels will even out over time
18:57 dbruhn otherwise, do the fix-layout first, and then run the full rebalance
18:57 glusterbot New news from newglusterbugs: [Bug 985957] Rebalance memory leak <https://bugzilla.redhat.com/show_bug.cgi?id=985957>
18:58 n0de dbruhn: I believe I executed the fix-layout and rebalance in one command
18:58 dbruhn the full rebalance does a fix layout, as it rebalances
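For readers following along, the two operations dbruhn is distinguishing look like this ("myvol" is a placeholder; each command is issued on one server and runs across the cluster):

    gluster volume rebalance myvol fix-layout start   # rewrite directory hash ranges only, no data moved
    gluster volume rebalance myvol start              # fix the layout and migrate existing files too
    gluster volume rebalance myvol status             # per-server progress, as in the output pasted above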
18:59 JoeJulian ~pasteinfo | John_HPC
18:59 glusterbot John_HPC: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:59 jbd1 n0de: I just checked my servers too, and only one of the 5 nodes has memory pressure-- coincided directly with the fix-layout though
18:59 jbd1 dbruhn: doing a rebalance will do the fix-layout again, will it not?
18:59 dbruhn the fix-layout creates the directory structure on the new bricks, the full redistributes the files from all nodes based off the new hash table
19:00 pvh_sa joined #gluster
19:00 dbruhn once the fix-layout runs the nodes become usable to the volume
19:00 dbruhn so new data can be written to them
19:00 n0de dbruhn: I am seeing new data going to the new storage nodes
19:01 social joined #gluster
19:01 dbruhn its because part of the layout is corrected, or maybe all of it
19:01 n0de about 15.3TB has been written to each of the two new nodes since beginning of March when I started the full rebalance
19:02 n0de Is that good / okay / poor?
19:02 n0de dbruhn: yea, I have no way of knowing though
19:04 ricky-ticky joined #gluster
19:05 dbruhn Honestly I'm not sure, my last rebalance took about a month, but the system that is on only has 3TB bricks, and I only added 4 bricks on 2 servers.
19:06 pvh_sa joined #gluster
19:06 * jbd1 is realizing that if the memory leak still exists in latest 3.4, he won't be able to complete a rebalance
19:07 lalatenduM joined #gluster
19:08 jbd1 it's shocking to me that a severity-high bug opened in *July 2013* is still open-- that must mean that there were so many worse bugs that this one didn't make the cut
19:08 jbd1 and "Can't rebalance a large volume" seems like a pretty major issue to me
19:09 dbruhn It will rebalance, it just often takes a long time.
19:09 n0de jbd1: that is what I am afraid of as well - memory will crap out before the rebalance is completed - http://cl.ly/image/3P1O0L0l1I2E
19:09 glusterbot Title: Image 2014-04-23 at 12.19.26 PM.png (at cl.ly)
19:10 n0de I am very glad I didn't just reboot this storage node in hopes of getting the load down under control
19:10 n0de things would have def spiraled out of control with all that memory cache being gone
19:10 jbd1 dbruhn: most of my brick servers have 8G of memory.  If a node crashes or glusterfsd crashes due to out-of-memory, the rebalance starts all over, no?
19:11 dbruhn what's your volume type?
19:11 jbd1 I think what caused my pain last rebalance was that two simultaneous rebalances got running when one node crashed (due to fd leak).  I'm on distributed-replicate 2
19:11 jbd1 n0de: yeah, it's rather ominous
19:12 n0de oh man, I hope that is not the case. My volume is of type Distributed-Replicate
19:12 dbruhn I believe, and this would need to be tested, that the rebalance would continue, just bring the offline brick/brick server back online, and the system should self heal any gaps from the replica
19:12 * jbd1 fires up his test nodes
19:13 n0de :)
19:14 jbd1 n0de: As long as a single process fits in memory, you could use swap space to keep the server running.  My GlusterFS brick servers don't have any swap
19:15 n0de jbd1: right, I was just thinking to add some more swap should things get bad
19:15 n0de funny part is I called RedHat in the past to find out how much a support contract would be, they didn't even bother to get back to me
19:15 n0de I guess they are already fat from the money they are making?
19:16 dbruhn Weird, they were great when I called them looking for pricing
19:16 dbruhn who did you call?
19:17 John_HPC JoeJulian: Here is my paste - http://paste.ubuntu.com/7316951/
19:17 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:17 n0de the number on redhats website
19:17 jbd1 heh, I *still* get email from RedHat from the pricing inquiry I made last summer.
19:18 n0de hrm, maybe it went to my spam box
19:18 n0de how was the pricing? reasonable?
19:18 John_HPC This system was basically dropped in my lap (bought 3 years ago) and I've been told to get it working asap ;)
19:18 n0de I figured they would tell me to hit the road anyway since I am running Gentoo Linux, and not Redhat
19:20 jbd1 n0de: wow, gentoo in production.  I did that for a while (still have some old gentoo servers in prod).  switched to ubuntu after getting tired of emerge
19:20 jbd1 RHS requires that you run redhat-- that's the main reason I wound up not buying it
19:21 jbd1 I know RedHat well--ran it since 1995 or so--hell, I even own redhat stock.  But I don't want to run a mixed-distro environment or change distros on my entire datacenter just to get support for glusterfs
19:23 JoeJulian John_HPC: Actually, gluster0{1,3,5} is replicated to gluster0{2,4,6}. None of that is geo-replication, though, so you should be able to uninstall glusterfs-geo-replication.
19:23 rahulcs joined #gluster
19:23 n0de jbd1: makes sense
19:24 jbd1 conditions are ripe for someone to come in and provide third-party GlusterFS support (like Percona did with MySQL)
19:24 n0de jbd1: are you running the gluster for yourself or have customers using it?
19:24 jbd1 n0de: my website is the only "user" of the storage, but my website's customers provide the data I store
19:24 n0de jbd1: absolutely, just need a team of Gluster experts to get together and offer a product...JoeJulian will not join because his vision is different :)
19:25 n0de jbd1: gotcha
19:25 JoeJulian Maybe... but I'm going to the optomitrist in an hour so maybe my vision will be corrected later.
19:26 dbruhn I'll join up on this endeavor
19:26 dbruhn lol
19:27 n0de heh, I remember there was a guy that joined this channel a month or two back looking for Gluster support for his customer
19:27 badone joined #gluster
19:27 n0de He was offering a nice hourly rate of like 200$, but nothing came out of it
19:27 JoeJulian I agree that there's room for competition with Red Hat for supporting storage. That's not a business I would want to be in though.
19:27 dbruhn and yes for RHS you have to run their provided version of gluster
19:28 dbruhn +1, RedHat does a good job for the people who pay for support
19:28 JoeJulian I made $200/hr consulting before I was specialized.
19:28 n0de dbruhn: yea, too late to change the env for me, otherwise I would be interested in a support contract
19:29 n0de how much did they want btw?
19:29 n0de JoeJulian: sounds pretty nice, why not continue?
19:30 dbruhn it's been more than a few years since I checked. I think when they first started selling it was 3k a year per brick server. But I could be way off on that number
19:30 jbd1 RedHat's standard pricing is 10k/2 nodes, so my 9-node cluster would be 100k/year before any discounts
19:30 dbruhn there you go
19:31 n0de yikes
19:32 jbd1 The numbers I have are as of last summer.  They may have changed something since then.  Back then, they also were only supporting NFS as well-- no FUSE client allowed
19:33 n0de Where do I sign up for a training program to be specialist ? :)
19:33 jbd1 haha
19:34 n0de I would def hang out here regardless to give free consulting though
19:34 n0de Thus far I am only a specialist in the company I work for heh
19:35 dbruhn I wonder if anyone would help create some openedx courses with me if I set up the server
19:35 ricky-ticky joined #gluster
19:35 John_HPC JoeJulian: Thanks! Just curious; how do you tell that from that output? Is that something that is a default with the install?
19:35 dbruhn And if it would be worthwhile, and used by the community
19:36 jbd1 *crickets*
19:36 n0de :/
19:38 andreask joined #gluster
19:38 rahulcs joined #gluster
19:39 JoeJulian ~brick order | John_HPC
19:39 glusterbot John_HPC: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
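Applied to John_HPC's six servers, that ordering means consecutive bricks pair up: gluster01/gluster02, gluster03/gluster04 and gluster05/gluster06 form the three replica pairs. A sketch with hypothetical brick paths:

    gluster volume create myvol replica 2 \
        gluster01:/export/brick gluster02:/export/brick \
        gluster03:/export/brick gluster04:/export/brick \
        gluster05:/export/brick gluster06:/export/brick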
19:39 John_HPC ah, thanks
19:42 jbd1 seems like the best way to use the commercial RedHat Storage software is to buy a couple of really huge storage servers and just run a replica volume.  That would only be 10k/year
19:43 glusterbot New news from resolvedglusterbugs: [Bug 949102] [RFE] Expected single-mount point by glusterfs-hadoop limits use of multiple input sources <https://bugzilla.redhat.com/show_bug.cgi?id=949102>
19:43 steve_____ joined #gluster
19:44 n0de jbd1: Up until your really huge storage boxes run out of space heh
19:44 n0de and what is your definition of really huge servers?
19:44 steve_____ hello, I'm getting an error when I attempt to stop geo-replication on a volume:
19:44 steve_____ gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 status
19:45 steve_____ ovirt001.miovision.corp rep1                 gluster://10.0.11.4:/rep1                          OK
19:45 jbd1 n0de: I'm thinking 60-bay 4U servers with 60x6T drives each-- so 360T raw per node
19:45 steve_____ gluster volume geo-replication rep1 gluster://10.0.11.4:/rep1 stop
19:45 steve_____ geo-replication command failed
19:45 dbruhn steve_____, anything in the logs?
19:45 jbd1 n0de: if you outgrow that, you can add external enclosures and chain storage from there, just growing the brick volume via RAID or LVM
19:45 chirino joined #gluster
19:46 steve_____ cli.log doesn't have anything, the replication log doesn't have anything useful either
19:46 John_HPC dbruhn: I know little about gluster, that's why I am here. But I would love to see a class ;)
19:47 dbruhn steve_____,  What version are you running?
19:47 edward1 joined #gluster
19:48 n0de jbd1: makes sense
19:48 steve_____ dbruhn glusterfs-server-3.4.2-1.el6.x86_64
19:49 jbd1 n0de: and considering that the pair of servers will run around 50k, $10k/year wouldn't be so painful
19:50 dbruhn steve_____, was it working before and now not?
19:52 steve_____ yes it was working previously, I changed the IP address of the master, then started rep again successfully, now when I try to stop I get this error
19:53 n0de jbd1: true
19:55 dbruhn steve_____, are you able to change it back to see if it is working?
19:55 dbruhn I am assuming you changed it and then started the geo sync?
19:56 steve_____ sorry, thats wrong, IP didn't change
19:57 dbruhn which logs are you looking in?
19:57 JoeJulian John_HPC: regarding classes, I spoke with a Red Hat educator that's putting together storage classes.
19:58 John_HPC JoeJulian: That's great and I'm allowed to get training this year; unlike last.
19:58 John_HPC ;)
19:58 steve_____ I looked in /var/log/glusterfs/cli.log, and the log file listed in the geo-rep config
19:59 steve_____ "/var/log/glusterfs/geo-replication/rep1/gluster%3A%2F%2F10.0.11.4%3A%2Frep1.gluster.log"
20:00 dbruhn sorry shooting blind here to see if there is anything obvious, I've not used geo-replication before.
20:00 dbruhn Are all of the nodes on both sides healthy?
20:00 CodeMonke joined #gluster
20:00 dbruhn s/nodes/servers/
20:00 glusterbot What dbruhn meant to say was: Are all of the servers on both sides healthy?
20:01 steve_____ yep, peer in cluster on both pairs
20:01 steve_____ volumes active
20:01 JoeJulian I'm not sure on that one either, but I would also look in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log for clues.
20:03 rahulcs joined #gluster
20:03 steve_____ only line in etc-glusterfs-glusterd.vol.log: [2014-04-23 20:00:48.722063] I [glusterd-handler.c:952:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
20:03 foster_ joined #gluster
20:03 rahulcs joined #gluster
20:04 CodeMonke I've got a question regarding the size of storage bricks.
20:04 dbruhn ask away
20:05 CodeMonke So say I'm looking to build a *big* gluster system
20:05 CodeMonke in the 5-10 PB range
20:05 CodeMonke Using 72-disk servers as the building blocks.
20:06 dbruhn steve_____, on all the servers?
20:06 dbruhn So whats the question CodeMonke?
20:06 CodeMonke What would be the best way for Gluster to use these drives as bricks?  i.e. each node with 1 big brack, backed by a large single raid?
20:07 CodeMonke Or with each node having 8 or so smaller bracks
20:07 CodeMonke each made of a handfuk of drives?
20:07 semiosis i usually recommend more, smaller bricks, rather than fewer, larger bricks
20:07 CodeMonke **handfull
20:07 dbruhn There are a lot of different approaches, it really depends on what you need from the system? Performance,etc.
20:07 JoeJulian value=f(purpose)
20:07 semiosis with replication, if you have to replace a brick, it will sync quicker if it is smaller
20:08 JoeJulian ditto for rebalance.
20:08 steve_____ dbruhn: yes, logs are the same on all servers
20:08 * semiosis still wouldn't recommend rebalancing
20:08 JoeJulian I'm on the fence. I've rebalanced some non-critical volumes successfully a few times now.
20:09 dbruhn rebalancing can be an absolute crap shoot...
20:09 semiosis were any of those volumes larger than a single cd-rom? ;)
20:09 JoeJulian lol
20:09 JoeJulian barely
20:09 semiosis haha
20:09 CodeMonke My only real large filesystem experience has been in designing and deploying small Lustre systems for HPC with only a handfull of servers so this is really a completely different beast.
20:09 JoeJulian I've rebalanced my centos repo.
20:09 semiosis CodeMonke: should be fun then
20:09 dbruhn I've successfully rebalanced a 33TB volume
20:10 dbruhn with 24TB of data on it
20:10 JoeJulian dbruhn: what was the first version you could do that successfully with?
20:10 semiosis sure, but you have to admit it's risky, having all your data be shuffled around
20:10 dbruhn it was not fun though, and took several attempts, and created split-brain issues
20:10 dbruhn 3.3.2
20:11 CodeMonke It's only in the initial concept stages now so I'm staving off my excitement unless we actually win the contract to build it.
20:11 semiosis ah there we go
20:11 dbruhn oh no I agree 100% with you guys it was a mess
20:11 dbruhn just saying, I had to do it
20:12 CodeMonke I also see in the docs that distributed striped replicated volumes are "only supported for map-reduce workloads".  What does that even mean?
20:12 semiosis CodeMonke: maybe d3vz3r0 can weigh in.  iirc he has >1PB
20:12 semiosis CodeMonke: you most likely dont want stripe, unless you have extremely huge files
20:12 JoeJulian @lucky map-reduce
20:12 glusterbot JoeJulian: http://en.wikipedia.org/wiki/MapReduce
20:13 JoeJulian @stripe
20:13 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
20:13 semiosis multi-terabyte files
20:13 CodeMonke No, I know what map-reduce means
20:13 CodeMonke But couldn't you use it for any type of workload?
20:13 glusterbot New news from resolvedglusterbugs: [Bug 1003626] java.lang.NullPointerException is raised during execution of multifilewc mapreduce job <https://bugzilla.redhat.com/show_bug.cgi?id=1003626>
20:13 CodeMonke The files would be huge
20:13 JoeJulian "can" yes. "should" no.
20:14 semiosis CodeMonke: what's teh application anyway?
20:14 JoeJulian I explain the functionality of stripe in that article.
20:14 semiosis archival storage?  analytics?  media processing/rendering?
20:14 CodeMonke The use case is to build a storage cluster foain imaging.r ultra-high resolution three-dimensional br
20:15 CodeMonke Wow, that got jumbled up.
20:15 JoeJulian Most people who think stripe is the end-all-be-all of performance enhancements are greatly disappointed.
20:15 CodeMonke High resolution brain imaging.
20:15 semiosis sweeet
20:15 CodeMonke that's the application.
20:15 JoeJulian cool!
20:15 dbruhn +1 if your files fit on the bricks, it will be faster
20:15 * JoeJulian wants a high resolution brain.
20:15 dbruhn and you can tailor your brick speeds to better perform on reads/writes
20:16 dbruhn I wish I could find a day to take all of my systems down and do performance comparisons for a blog.
20:16 CodeMonke Yeah, it's super cool.  The project is to scan a 1 cubic mm of mouse brain at high enough resolution map the three dimensional neural connections.
20:17 dbruhn That is super cool!
20:17 JoeJulian my brain tends to work more like a map-reduce, eventually consistent model that uses fuzzy logic to do its retreivals.
20:17 CodeMonke So this 1 scan of 1 cubic mm is supposed to be ~ 3-5PB uncompressed.
20:17 semiosis CodeMonke: over how many files?
20:18 semiosis it would be nice if the files were somewhere between 100kB & 100GB
20:18 semiosis then you wouldn't need striping, or even raid
20:19 CodeMonke It's completely flexible.  We''d basically be designing the file storage to mach the hardware.
20:20 CodeMonke If millions of 10GB files works best then we equipment do that.could make the
20:20 semiosis either you're two threads fighting over the output buffer, or your palm is hitting the touchpad
20:20 CodeMonke Or if it's instead easier on the storage system to deal with fewer 1TB files, then we can do that.
20:21 sauce_ aw, my gluster servers both crapped out just now.  AWS EC2, each in their own AZ.  i couldn't mount anything, and couldn't SSH to the server. AWS monitoring says the CPU pegged to 100% on both boxes. no idea what happened. rebooting them now
20:21 semiosis sauce_: what instance type are they?
20:21 sauce_ m1.medium with EBS volumes attached
20:22 CodeMonke I'm just tasked right now with trying to guestimate equipment costs so I'm trying to design the filesystem first to determine that.
20:23 CodeMonke Has anybody had good experience with zfs backed gluster deployments?
20:23 jbd1 CodeMonke: IME, GlusterFS does well with lower numbers of files and bricks.
20:23 jbd1 CodeMonke: but you'll probably need to run some tests to find the right size for you
20:24 zerick joined #gluster
20:24 jbd1 CodeMonke: 72x6T drives, a ton of memory, 10Gb interfaces-- those machines are going to be pricey!
20:25 CodeMonke Yep.
20:25 CodeMonke ~$45k each
20:25 edward2 joined #gluster
20:26 kmai007 does anybody have some test scripts i can use to pound on glusterfs ?
20:26 kmai007 like some writes/reads
20:26 dbruhn CodeMonke, do you have any idea what your performance requirements are?
20:26 kmai007 just need it to be quick/dirty
20:26 dbruhn kmai007, just use dd
20:27 kmai007 oh yeh, thanks dbruhn
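A minimal dd smoke test of the kind dbruhn means, run against a FUSE mount of the volume (mount point and sizes are only examples; the cache drop needs root):

    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=1024 conv=fdatasync   # sustained write, flushed to the bricks
    echo 3 > /proc/sys/vm/drop_caches                                           # drop page cache so the read-back is real
    dd if=/mnt/glustervol/ddtest of=/dev/null bs=1M                             # sequential read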
20:27 kmai007 dbruhn: did u got to summit?
20:27 dbruhn and from my personal experience I hate to 72 drive chassis... this coming from a guy who has had to move those stupid things around.
20:27 dbruhn kmai007, I didn't, I really wanted to
20:28 kmai007 just curious, it was cool finally meeting JoeJulian in person
20:28 CodeMonke dbruhn, not really, which makes this tough.  The people working with the data don't know either because this is looking to be 2 orders of magnitude than any other dataset thier community has had available.
20:29 dbruhn do you at least know how long it took the next set down to generate?
20:29 dbruhn you can at least get a rough guess then
20:30 CodeMonke From what I understand, there would be relatively low performance requirements.  Basically once the data is online then they only look at small pieces of it at a time.
20:30 CodeMonke They just need a place to have all of it online and available.
20:30 jbd1 CodeMonke: I'm surprised you're not looking at HDFS (hadoop) for this kind of storage.  It sounds like it might be a good fit
20:31 dbruhn but if they are generating several petabytes of data... how quick does it have to be written out?
20:31 CodeMonke I've been looking at many things.
20:31 CodeMonke The estimate is ~ 100TB a day.
20:32 dbruhn Assuming you are looking for something in a replica 2 for data resiliency?
20:32 JoeJulian kmai007: So it wasn't a disappointment? Oh, good. I was telling johnmark that if I'm supposed to be a celebrity, he should hire me an entourage so we don't disappoint.
20:33 JoeJulian ... he didn't agree.
20:33 dbruhn hahah
20:34 dbruhn Joe if you ever end up in MN, I'll give you the rockstar treatment, we'll get you some pictures for your blog.
20:35 theron joined #gluster
20:35 * jbd1 is testing remove-brick in the lab... so far, remove-brick kills the glusterfs process 2/2 times
20:35 dbruhn it should...
20:36 dbruhn glusterfs is the brick process
20:36 jbd1 dbruhn: on node 1, I remove node3/brick1, and it causes glusterfs to exit on node 1
20:38 dbruhn weird
20:38 dbruhn CodeMonke, were you looking at SAS or SATA for this project?
20:38 jbd1 maybe pebkac; it just worked correctly this try
20:39 CodeMonke NL-SAS.
20:39 jbd1 ah, I was doing gluster volume remove-brick gv0 glustertest3:/export/brick1/vol1 start
20:39 jbd1 instead of gluster volume remove-brick gv0 glustertest3:/export/brick1/vol1
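For context, the documented decommissioning sequence in 3.3/3.4 migrates the brick's data off before removal, whereas the bare form drops the brick straight away; a sketch using the same test brick (worth confirming on a throwaway volume first):

    gluster volume remove-brick gv0 glustertest3:/export/brick1/vol1 start    # begin migrating data off the brick
    gluster volume remove-brick gv0 glustertest3:/export/brick1/vol1 status   # wait until it reports completed
    gluster volume remove-brick gv0 glustertest3:/export/brick1/vol1 commit   # then detach the brick from the volume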
20:39 pdrakewe_ joined #gluster
20:40 CodeMonke I haven't even seen 6T drives out in the wild yet.
20:40 dbruhn from my calculations you need to be able to sustain about 1.2GB of write per second, does that line up with what you were thinking?
20:40 dbruhn and thats assuming 100TB over a 24 hour period
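For reference, the arithmetic behind that figure: 100 TB/day ≈ 100 × 10^12 bytes ÷ 86,400 s ≈ 1.16 GB/s of sustained ingest before replication; with replica 2 the clients write every byte to two bricks, so back-end write traffic roughly doubles.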
20:40 B21956 joined #gluster
20:41 CodeMonke Yeah, that's the problem.
20:41 CodeMonke Huge write bandwidth needed,but pidly read bandwidth.
20:41 dbruhn lol, thats why I asked to make sure my assumption was correct
20:41 dbruhn well first things first, I would suggest IPoverIB
20:42 dbruhn and FDR or QDR are probably the way to go, the price for FDR right now I believe lines up with 10GB ethernet
20:42 CodeMonke If the read requirements were similar then i'd really be looking more at Lustre,but they want this to be thier long term archive as well.
20:43 CodeMonke Yea, I've had great success with the Mellanox FDR equipment.
20:43 dbruhn are you planning on using replication?
20:43 pdrakeweb joined #gluster
20:43 dbruhn I'm running a bunch of Mellanox QDR stuff myself
20:44 CodeMonke There would have to be replication if it's the long term archive.
20:44 dbruhn ok, the reason I ask is because replication comes with a write speed penalty
20:44 semiosis well, whats the timeline for this?
20:44 CodeMonke It's a long ways off, likely 6 months out at least.
20:45 semiosis so maybe new-style replication will be an option by then, which can alleviate the write penalty
20:46 kkeithley John_HPC: can you try uninstalling glusterfs-geo-replication and then reinstalling it, and uninstalling/reinstalling glusterfs-server, each by itself. I want to confirm where that ssh-command-tar error is coming from.
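A sketch of that test on CentOS, assuming yum-managed packages; review each transaction before confirming, since yum may pull dependent packages along with glusterfs-server:

    yum remove glusterfs-geo-replication && yum install glusterfs-geo-replication
    yum remove glusterfs-server && yum install glusterfs-server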
20:46 CodeMonke It's still in the proposal phase now but I need to get a handle on layout of the filesystem in order to get a handle on the necessary equipment, whihc will in turn inform the projected budget.
20:47 * dbruhn needs to learn more about the new style replication
20:48 purpleidea dbruhn: it doesn't exist yet. but jdarcy gave a good talk about it at summit. maybe there are slides or video available.
20:49 semiosis i was thinking of AFRv2 which may be in glusterfs 3.6 later this year... looks like NSR is still further out :(
20:49 badone joined #gluster
20:50 CodeMonke I'm goign to go crunch some numbers on this for a bit.Thanks for the feedback and sanity checks.
20:51 semiosis yw, keep us posted
20:51 CodeMonke Will do.
20:51 dbruhn CodeMonke, for sure on keep us posted, I am interested to see what you come up with.
20:51 CodeMonke Sure thing.
20:52 dbruhn @later tell jdarcy, do you have some information available from your replication talk from summit?
20:52 glusterbot dbruhn: The operation succeeded.
20:56 semiosis here it is: http://rhsummit.files.wordpress.com/2014/04/darcy_w_1430_rhs_replication.pdf
20:56 semiosis @learn nsr pdf as http://rhsummit.files.wordpress.com/2014/04/darcy_w_1430_rhs_replication.pdf
20:56 glusterbot semiosis: The operation succeeded.
20:56 dbruhn Oh perfect, than you.
20:56 semiosis from the summit '14 presentations page: http://www.redhat.com/summit/2014/presentations/
20:56 glusterbot Title: Presentations: Red Hat Summit (at www.redhat.com)
21:01 Matthaeus1 joined #gluster
21:02 edong23 joined #gluster
21:29 qdk_ joined #gluster
21:36 kmai007 semiosis: do you have the scholastic use case presentation?
21:38 kmai007 er. at least know where i can find it b/c i cannot find it
21:41 semiosis no, sorry all i have is the same google you have :/
21:42 semiosis i gave two speakers in that session my contact info (didnt get either of theirs, oops!)
21:42 semiosis i probably seemed like a raving lunatic talking about tcp source ports session checksum offloading
21:54 theron_ joined #gluster
21:55 tdasilva left #gluster
21:56 ira joined #gluster
21:57 Ark joined #gluster
22:03 kmai007 semiosis: i recall seeing you, you were the amazon storage admin
22:04 chirino joined #gluster
22:04 kmai007 i should have introduced myself,
22:04 semiosis i was the guy harassing the speakers afterward.  well, one of the guys harassing the speakers
22:10 pvh_sa joined #gluster
22:13 hagarth joined #gluster
22:27 theron joined #gluster
22:28 theron joined #gluster
22:29 raghug joined #gluster
22:29 raghug y4m4: ping
22:29 glusterbot raghug: Please don't naked ping. http://blogs.gnome.org/mark​mc/2014/02/20/naked-pings/
22:37 Gilbs1 left #gluster
22:56 chirino joined #gluster
23:00 chirino_m joined #gluster
23:10 ira joined #gluster
23:12 MeatMuppet left #gluster
23:13 cyberbootje joined #gluster
23:20 Ark joined #gluster
23:24 DV joined #gluster
23:39 fidevo joined #gluster
23:47 zerick joined #gluster
23:50 Georgyo joined #gluster
23:59 cyberbootje joined #gluster
