
IRC log for #gluster, 2013-06-20


All times shown according to UTC.

Time Nick Message
00:00 joeljojojr I'm afraid full will just run through the whole filesystem and do the same thing I'm doing on a targeted basis: find a bunch of files that don't have gfids in .glusterfs and choose not to fix them. That's what it looks like it would do from my reading.
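
(For reference: every file on a brick normally carries a trusted.gfid xattr plus a hard link under the brick's .glusterfs directory, named after that gfid. A rough sketch of inspecting that linkage — the brick path and gfid below are made-up examples:)

    # read the gfid stored on the backend file
    getfattr -n trusted.gfid -e hex /export/brick1/data/somefile
    # if it returns e.g. trusted.gfid=0x6bd1a4a4..., the matching hard link should be
    #   /export/brick1/.glusterfs/6b/d1/6bd1a4a4-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    # a missing link can be recreated by hand (a hard link, so same inode, no copy)
    ln /export/brick1/data/somefile /export/brick1/.glusterfs/6b/d1/6bd1a4a4-xxxx-xxxx-xxxx-xxxxxxxxxxxx
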
00:21 yinyin joined #gluster
00:22 forest joined #gluster
00:25 semiosis could you pastie.org a log file showing exactly where glusterfs chooses not to fix them?
00:26 vpshastry joined #gluster
00:26 fcami joined #gluster
00:28 semiosis afk, will have to wait until tmrw
00:29 joeljojojr http://pastie.org/8061369
00:29 glusterbot Title: #8061369 - Pastie (at pastie.org)
00:30 joeljojojr Thanks for your help so far.
00:33 forest joined #gluster
00:35 nightwalk joined #gluster
00:43 yinyin joined #gluster
00:52 itisravi_ joined #gluster
00:57 jbrooks joined #gluster
01:04 nightwalk joined #gluster
01:09 rcoup joined #gluster
01:14 fcami joined #gluster
01:22 bala1 joined #gluster
01:22 lpabon joined #gluster
01:27 theron joined #gluster
01:28 glusterbot New news from newglusterbugs: [Bug 976124] "make glusterrpms" errors out for GlusterFS release-3.4 branch on F19 <http://goo.gl/gpGMm>
01:33 bala1 joined #gluster
01:33 itisravi_ joined #gluster
01:40 bala1 joined #gluster
01:47 juhaj joined #gluster
01:58 glusterbot New news from newglusterbugs: [Bug 976129] Upstream Gluster could do with a decent Feature Matrix on the website <http://goo.gl/frwCq>
02:01 forest joined #gluster
02:04 shanks joined #gluster
02:04 hchiramm__ joined #gluster
02:16 thisisdave hmm, gluster geo-replication (a) won't start for me, and (b) won't even take config options. Anyone care to help point me in the right direction?
02:19 thisisdave i've ensured the ssh options within gsyncd.conf work (had to adjust the spacing after the options '-o')... have ensured the geo-replication rmps are installed...
02:19 thisisdave rpms*
02:23 vpshastry joined #gluster
02:23 thisisdave ...and each time I try to execute a geo-replication command, I get a cli.log warning: W [rpc-transport.c:174:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
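
(For reference, the usual geo-replication lifecycle commands look roughly like this; mastervol and slavehost::slavevol are placeholders, and the exact slave-URL syntax varies between releases:)

    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
    # adjust settings such as the ssh command line or log verbosity
    gluster volume geo-replication mastervol slavehost::slavevol config ssh-command "ssh -i /var/lib/glusterd/geo-replication/secret.pem"
    gluster volume geo-replication mastervol slavehost::slavevol config log-level DEBUG
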
02:35 joelwallis joined #gluster
02:39 forest joined #gluster
02:40 bulde joined #gluster
02:41 jbrooks joined #gluster
02:42 vpshastry joined #gluster
02:46 yinyin joined #gluster
02:49 lalatenduM joined #gluster
02:50 jag3773 joined #gluster
03:00 bharata joined #gluster
03:19 psharma joined #gluster
03:24 sprachgenerator joined #gluster
03:29 glusterbot New news from newglusterbugs: [Bug 960818] Installing glusterfs rpms on a pristine f19 system throws "error reading information on service glusterfsd". <http://goo.gl/b7ZLa> || [Bug 960285] Client PORTBYBRICK request for replaced-brick will retry forever because brick port result is 0 <http://goo.gl/8sKao> || [Bug 953332] repo missing 6Server directory for intstalling alpha packages on RHEL6 <http://goo.gl/pEl35> || [Bug
03:43 itisravi joined #gluster
03:57 hagarth joined #gluster
04:01 mohankumar joined #gluster
04:01 sgowda joined #gluster
04:02 mjrosenb https://gist.github.com/5820222 -- does this look like an issue with gluster, or something else?
04:02 glusterbot Title: xcut (at gist.github.com)
04:02 brosner joined #gluster
04:04 vpshastry joined #gluster
04:17 vpshastry1 joined #gluster
04:25 shireesh joined #gluster
04:57 bala joined #gluster
04:57 lalatenduM joined #gluster
04:58 rastar joined #gluster
05:00 satheesh joined #gluster
05:01 Shahar joined #gluster
05:02 bala joined #gluster
05:04 CheRi joined #gluster
05:09 fcami joined #gluster
05:29 bulde joined #gluster
05:32 hagarth joined #gluster
05:33 Shahar joined #gluster
05:40 yinyin joined #gluster
05:45 aravindavk joined #gluster
05:46 deepakcs joined #gluster
05:48 ngoswami joined #gluster
05:50 vimal joined #gluster
05:58 bala joined #gluster
05:59 glusterbot New news from newglusterbugs: [Bug 976189] statedump crashes in ioc_inode_dump <http://goo.gl/lrSqJ>
06:01 Shahar joined #gluster
06:02 raghu joined #gluster
06:07 Shahar joined #gluster
06:10 satheesh1 joined #gluster
06:13 jtux joined #gluster
06:17 dobber joined #gluster
06:18 bala1 joined #gluster
06:27 glusterbot New news from resolvedglusterbugs: [Bug 763858] Getting performance stats needs to be easier <http://goo.gl/a7CKD> || [Bug 765359] Local user's can abuse gluster <http://goo.gl/5pIZC>
06:28 bala joined #gluster
06:29 badone joined #gluster
06:34 ctria joined #gluster
06:34 hagarth joined #gluster
06:36 bala joined #gluster
06:47 rajesh joined #gluster
07:02 saurabh joined #gluster
07:02 andreask joined #gluster
07:03 bala joined #gluster
07:03 65MAACT7H joined #gluster
07:10 ricky-ticky joined #gluster
07:12 krishna_ joined #gluster
07:17 fcami joined #gluster
07:22 jbrooks When mounting an RDMA volume, do you need to use any special or different mount command?
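
(For reference — for a volume created with transport tcp,rdma, the commonly documented options are a transport mount option or a ".rdma" suffix on the volume name; server and volume names are placeholders:)

    mount -t glusterfs -o transport=rdma server1:/myvol /mnt/myvol
    # or, on most releases, equivalently:
    mount -t glusterfs server1:/myvol.rdma /mnt/myvol
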
07:32 rgustafs joined #gluster
07:34 ramkrsna joined #gluster
07:54 realdannys1 joined #gluster
07:56 realdannys1 joined #gluster
07:58 pkoro joined #gluster
08:04 yinyin joined #gluster
08:18 hybrid512 hi
08:18 hybrid512 Is there a way to reconfigure a volume safely without the need to destroy/recreate it ?
08:20 hybrid512 I had a 3-node replicated volume and added a 4th node, so now it's a 4-node replicated volume. I'd like to change this configuration to a 2x2 distributed/replicated volume but, if possible, without having to back up the whole data set, destroy, recreate and restore.
08:21 hybrid512 Of course, I'll do it if needed, but if it's possible that would definitely ease my day :)
08:21 hybrid512 thanks for your help
08:23 bala joined #gluster
08:26 samppah hybrid512: what version are you running? if you have enough space available you should be able to remove the 2 extra bricks and re-add them
08:27 hybrid512 samppah: I have plenty of space, my volume is occupied at 38%
08:28 hybrid512 So I just remove 2 bricks and re-add them as a distributed/replicated volume ?
08:28 hybrid512 I'll try, thanks !! :D
08:29 samppah yes, something like gluster volume remove-brick volname replica 2 host1:/brick host2:/brick
08:29 samppah it should remove two bricks and decrease replication level to 2
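
(A sketch of the overall sequence being discussed — going from a 4-way replica to a 2x2 distribute-replicate. Hosts and brick paths are placeholders; exact remove-brick semantics (start/commit vs. immediate) vary by release, and the removed bricks need their xattrs and .glusterfs directory cleaned before they can be re-added, as noted further down:)

    # drop two of the four replicas, lowering the replica count to 2
    gluster volume remove-brick myvol replica 2 host3:/export/brick host4:/export/brick
    # after cleaning the removed bricks, add them back as a second replica pair
    gluster volume add-brick myvol host3:/export/brick host4:/export/brick
    # spread existing data across both replica pairs
    gluster volume rebalance myvol start
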
08:38 hybrid512 samppah: great ! Do I have to stop the volume first or can I do this live ?
08:40 samppah hybrid512: it should be fine to do it live
08:40 samppah hybrid512: i'd recommend to test it in test environment first if possible :)
08:41 hybrid512 samppah: I'll do a backup of the data anyway ... just in case
08:41 samppah okay, good
08:42 hybrid512 samppah: but if it works, it will definitely save my time
08:42 hybrid512 samppah: I'll try and keep you posted
08:42 hybrid512 thanks :)
08:42 samppah hybrid512: np :) good luck!
08:45 mjrosenb this is curious
08:46 mjrosenb I have a file with permissions: ?????????? ? ? ? ?            ?
08:46 mjrosenb according to ls
08:46 mjrosenb on the brick, it looks just fine.
08:47 mjrosenb weird
08:47 mjrosenb I can stat it as root
08:48 mjrosenb just not as a normal user
08:50 mjrosenb has anyone seen something like this before?
08:52 Norky hybrid512, make sure to put "start" at the end of the remove-brick command, otherwise I think it will happen straight away
08:52 Norky you'll also need I think to remove the xattrs marking the (removed) bricks as bricks
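
(The xattr cleanup Norky mentions usually looks something like this, run against each removed brick directory before re-adding it — the path is a placeholder:)

    setfattr -x trusted.glusterfs.volume-id /export/brick
    setfattr -x trusted.gfid /export/brick
    rm -rf /export/brick/.glusterfs
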
08:53 mjrosenb root sees the perms as "-rw-r--r-- 1 mjrosenb media"
08:53 mjrosenb so mjrosenb *should* be able to see this no problem
08:54 Norky mjrosenb, is this a replicated volume?
08:55 mjrosenb Norky: negative, distributed.
08:55 Norky hmm
08:57 mjrosenb i'm running the ls/stat on the same machine, just different users
08:58 mjrosenb a while back, it looked like the underlying file had something wrong with it, but I killed glusterd, unmounted the filesystem, remounted it, and now it is fine.
08:58 yinyin joined #gluster
09:08 mjrosenb [dht-layout.c:593:dht_layout_normalize] 0-magluster-dht: found anomalies in ... holes=1 overlaps=0
09:08 mjrosenb what does that mean?
09:25 hflai joined #gluster
09:28 bulde joined #gluster
09:56 yinyin joined #gluster
10:02 ricky-ticky joined #gluster
10:08 foxban joined #gluster
10:21 spider_fingers joined #gluster
10:21 spider_fingers left #gluster
10:29 shireesh joined #gluster
10:34 foxban_ joined #gluster
10:42 Staples84 joined #gluster
10:47 bulde joined #gluster
10:49 sgowda joined #gluster
10:53 ccha joined #gluster
11:00 manik joined #gluster
11:02 chirino joined #gluster
11:07 spider_fingers joined #gluster
11:10 rastar1 joined #gluster
11:16 bfoster joined #gluster
11:22 CheRi joined #gluster
11:22 kkeithley joined #gluster
11:25 psharma joined #gluster
11:28 manik joined #gluster
11:32 pvradu joined #gluster
11:32 partner or holes=0 overlaps=2 ?
11:33 partner that might explain it a bit: http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/
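
("holes" in a DHT layout mean some directories have hash ranges missing for one or more bricks; the usual remedy is a fix-layout rebalance, roughly — volume name is a placeholder:)

    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol status
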
11:35 piotrektt joined #gluster
11:36 anands joined #gluster
11:40 vpshastry1 joined #gluster
11:46 partner is it normal the fix-layout takes / is run every 5 seconds or so? i added 1 new brick to a 2-brick distributed system and i recall previous round took only <3 hours
11:46 partner now i guess it takes 3 days with the current speed..
11:49 sgowda joined #gluster
11:49 aravindavk joined #gluster
11:54 krokar joined #gluster
12:00 rcheleguini joined #gluster
12:02 pkoro joined #gluster
12:02 manik joined #gluster
12:06 bala joined #gluster
12:16 manik joined #gluster
12:22 hagarth joined #gluster
12:23 jthorne joined #gluster
12:25 Oneiroi joined #gluster
12:27 krishna_ joined #gluster
12:33 jbrooks joined #gluster
12:43 plarsen joined #gluster
12:52 dobber_ joined #gluster
12:57 lalatenduM joined #gluster
12:59 stickyboy Hmm, I can't figure out how to mount a volume as readonly with the FUSE client.
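
(For what it's worth, the documented route is the plain "ro" mount option, with --read-only as the lower-level flag on the glusterfs binary, though whether it is honoured has varied between releases; names below are placeholders:)

    mount -t glusterfs -o ro server1:/myvol /mnt/myvol
    # or invoking the client directly
    glusterfs --read-only --volfile-server=server1 --volfile-id=myvol /mnt/myvol
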
13:01 yinyin joined #gluster
13:04 andreask joined #gluster
13:11 rwheeler joined #gluster
13:14 theron joined #gluster
13:16 deepakcs joined #gluster
13:16 pvradu joined #gluster
13:19 krishna_ joined #gluster
13:23 bala joined #gluster
13:26 chirino joined #gluster
13:37 ccha joined #gluster
13:40 edward1 joined #gluster
13:42 spider_fingers joined #gluster
13:43 chirino joined #gluster
13:45 kkeithley joined #gluster
13:47 manik joined #gluster
13:47 brosner joined #gluster
14:07 bugs_ joined #gluster
14:09 krishna__ joined #gluster
14:10 bet_ joined #gluster
14:12 lpabon joined #gluster
14:14 brosner joined #gluster
14:15 johnmark 912747
14:16 chirino joined #gluster
14:16 johnmark wherefore art thou, oh glusterbot?
14:17 Norky "wherefore" means "why"
14:18 * Norky disengages pedant mode
14:19 vpshastry joined #gluster
14:37 duerF joined #gluster
14:45 forest joined #gluster
14:46 vpshastry left #gluster
14:46 aliguori joined #gluster
14:47 pvradu_ joined #gluster
14:49 kkeithley Romeo, Romeo, why did you have to be a damn Montague.
14:50 kkeithley Just doesn't have the same ring to it.
14:51 dbruhn kkeithley:remember that delete issue I was having yesterday? I have some stuff the heal won't get past, if I go delete the data from the bricks directly will that cause an issue?
14:52 kkeithley since you're deleting anyway, I'd give it a try. It's not like you'll lose anything you care about.
14:55 Norky kkeithley, indeed not
14:55 Norky the man did know how to craft a phrase
14:55 kkeithley Ever and anon.
14:59 isomorphic joined #gluster
15:02 joelwallis joined #gluster
15:06 sprachgenerator joined #gluster
15:11 daMaestro joined #gluster
15:15 Deformative joined #gluster
15:29 manik joined #gluster
15:34 gmcwhistler joined #gluster
15:42 Deformative joined #gluster
15:58 sjoeboo joined #gluster
16:02 sprachgenerator joined #gluster
16:07 manik joined #gluster
16:07 jclift joined #gluster
16:11 forest joined #gluster
16:12 rwheeler joined #gluster
16:19 Mo_ joined #gluster
16:25 chirino joined #gluster
16:26 piotrektt joined #gluster
16:48 vrturbo joined #gluster
16:48 joelwallis joined #gluster
16:52 sprachgenerator joined #gluster
16:52 zaitcev joined #gluster
16:54 hchiramm_ joined #gluster
16:56 jdarcy joined #gluster
17:03 JoeJulian Hmm... glusterbot's absent again, huh. Must be a bad upgrade.
17:07 dowillia joined #gluster
17:09 wushudoin joined #gluster
17:12 * jclift cheers for the bad upgrade
17:12 jclift Completely sick of glusterbot trying to fix my typos
17:13 JoeJulian :(
17:14 jclift Everything else it does seems useful though.
17:15 JoeJulian You referring to regex
17:15 jclift Yep
17:15 JoeJulian If you don't want it to interpret that, just leave off the trailing /
17:19 pvradu_ joined #gluster
17:24 sprachgenerator joined #gluster
17:32 rwheeler joined #gluster
17:37 zaitcev joined #gluster
17:40 brosner joined #gluster
17:41 dbruhn Are any of you guys going to the cloudstack thing in Santa Clara/San Jose this next week? I was talking to a guy running Gluster in DHT/Replicated as volumes under his hyper visors, seems like an interesting thing.
17:45 krishna__ joined #gluster
17:47 pvradu joined #gluster
17:52 ste76 joined #gluster
17:54 krishna_ joined #gluster
18:07 tg3 @dbruhn, you mean using gluster as the storage network for his hypervisor's vm's?
18:11 pvradu joined #gluster
18:12 dbruhn yeah
18:12 dbruhn as primary storage
18:12 dbruhn he is running QDR IBoIP between nodes
18:13 dbruhn tg3: read above
18:13 pvradu left #gluster
18:14 dbruhn he is running the gluster and kvm on the same machine, and running about 24 1ghzx1gb virtual servers per hypervisor without issue
18:17 tg3 yeah there's no reason that wouldn't work
18:18 tg3 i/o would be a potential issue, but not significantly so as the kvm has some kind of write caching
18:18 tg3 we have a 200Tb gluster volume mounted via NFS by vmware esxi hypervisor
18:18 tg3 and the vm's live on that
18:18 tg3 without issue
18:19 tg3 kvm would be almost better since you're mounting gluster natively on the hypervisor host
18:19 wushudoin left #gluster
18:19 tg3 probably using 3.4 though with qemu support
18:19 tg3 would be best
18:20 dbruhn agreed
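
(The 3.4 qemu support mentioned here is the libgfapi gluster:// block driver; with a qemu built against it, usage looks roughly like this — host, volume and image names are placeholders:)

    # create an image directly on the volume, no FUSE mount involved
    qemu-img create -f qcow2 gluster://server1/myvol/vm1.qcow2 20G
    # boot a guest from it
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=gluster://server1/myvol/vm1.qcow2,if=virtio,format=qcow2
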
18:21 tg3 only issue with using gluster via nfs for vm's we've had in 3-4 months running is when gluster was completely unresponsive for like an hour (while doing a rebalance it choked i/o) and that caused some vm corruption, but in reality that wouldn't happen much.  We even did a remove-brick with live running vm's and it went transparently.
18:21 tg3 now we use a dedicated nfs nas, but thats just to keep things separate
18:23 dbruhn he is using the FUSE client under his, since he has access to the hypervisor's OS, and said it works well. He is getting better than backspace storage I/O with it
18:24 dbruhn s/ backspace /rackspace/
18:41 NcA^ I'm seeing a lot of errors in my brick/glustershd logs that mention "client3_1". What is this referring to? A specific client (#3?)?
18:49 lyang0 joined #gluster
18:52 forest joined #gluster
18:56 larsks joined #gluster
19:03 partner can i stop rebalance fix-layout and start migrating files - or run it on top of it? as mentioned few times on the channel the fix-layout seems to take ages and i'd really need to start rebalance the files..
19:03 partner i thought dirs would have been created in a matter of hours but seeing the current speed its going to end next week
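
(For reference, the rebalance sub-commands involved look roughly like this; a plain "start" migrates files and also fixes layouts as it goes, so checking status before stopping the fix-layout pass is prudent — volume name is a placeholder:)

    gluster volume rebalance myvol status
    gluster volume rebalance myvol stop
    # a full rebalance both fixes layouts and migrates data
    gluster volume rebalance myvol start
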
19:24 forest joined #gluster
19:32 mrfsl joined #gluster
19:35 vdrmrt joined #gluster
19:37 vdrmrt joined #gluster
19:39 y4m4 joined #gluster
19:42 vdrmrt Hello, I'm trying to mount my gluster volumen on boot in fstab but I get the following error when booting
19:42 vdrmrt rpcbind: cannot open '/run/rpcbind/rpcbind.xdr' file for reading, errno 2 (no such file or directory)
19:42 vdrmrt anybody any idea?
19:52 sw__ joined #gluster
19:54 failshell joined #gluster
19:55 failshell hello. on multi-homed servers, how do you force gluster to answer back to clients on a specific interface/IP? servers are talking to each other on a non-routable network
19:55 failshell clients are told to connect to those
19:58 jclift failshell: There's no way to segregate Gluster traffic into different connection groups like that
19:58 jclift (yet)
19:59 * jclift wishes there was, as it's how my home network is setup as well
19:59 failshell so there's no point using 2 NICs then?
19:59 jclift Not that I know of, other than for bonding type of purposes
19:59 failshell that's silly
20:00 thisisdave * drains hopes of moving geo-replication to the unused IB port...
20:00 jclift Some of the more experienced Gluster people around might know of workarounds or better approaches
20:01 thisisdave jclift: an update with regard to rdma boost: i only saw a measurable increase after ditching replication.
20:01 jclift failshell: I tend to agree with you on this.  Most of the corp places I've worked in have big pipes between servers for ultra-fast communication and backup, and smaller pipes out to clients
20:01 jclift thisisdave: Interesting
20:02 jclift failshell: So, I reckon we need to introduce a way to make use of this kind of topology.
20:02 brosner joined #gluster
20:02 failshell jclift: we'll see what RH has to say
20:02 thisisdave jclift: to be replaced with geo-replication. initial geo-replicate to a gluster target on a new raidz3 19-disk array (+3 hot spares + cache SSD) is underway, but slow.
20:02 failshell we're buying support from them
20:02 jclift failshell: Cool.  Good idea. :)
20:02 neofob vdrmrt: i mount it in /etc/rc.local; not an elegant way to do it; does anybody know how to get it to work in fstab?
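
(The usual fstab approach is to mark the mount as network-dependent so it is only attempted once networking (and rpcbind, if NFS is involved) is up; something like the line below, with server, volume and mountpoint as placeholders — on some distros an init-ordering tweak or the rc.local fallback is still needed:)

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0
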
20:03 thisisdave jclift: a ding in HA is certainly worth the performance. (it _is_ an HPC after all...)
20:06 jclift thisisdave: What kind of perf stats difference did you get?
20:06 Deformative joined #gluster
20:07 thisisdave also of note: the ".rdma-fuse" suffix works to get rdma on a tcp/rdma geo-replication slave
20:09 thisisdave jclift: http://s21.postimg.org/twt7bxs6f/Screen_Shot_2013_06_20_at_1_08_28_PM.png
20:11 jclift thisisdave: So, it looks like the rdma thing there is a bunch better.  Is it enough for what you guys need?
20:12 thisisdave jclift: we shall see. bbiab; have to pick Apples in Palo Alto ;-)
20:12 jclift thisisdave: np.  I'm way behind in getting the "RDMA Test Day" testing plans written up
20:27 jiqiren jclift: what's this about an RDMA test day?
20:27 jclift jiqiren: We're putting together one, to be starting in about 1.5 hours.
20:27 * jclift thinks we should put it off until next week, so we can like promo the hell out of it first
20:28 jclift We have a _bunch_ of people with Infiniband gear around that would be interested methinks
20:29 jclift And I haven't gotten the test plans written up yet. :(
20:29 jiqiren yea, i have a lot of infiniband - but use IPoIB because native RDMA seems completely broken to me (i've not touched non-GA builds of gluster)
20:29 jclift jiqiren: Yeah, there's been a _lot_ of changes in the RDMA code for 3.4 and git master.
20:30 jclift jiqiren: So we want to run a test day to shake out any bugs and weirdness
20:30 jiqiren that sounds good and bad at the same time
20:30 jclift jiqiren: yeah, I know
20:30 jiqiren good because the old is unusable, bad because time hasn't tested the code
20:31 jclift jiqiren: So far for me, the RDMA code has been more _stable and generally good_.  But I've hit a weird perf problem in one situation I haven't had time to look into and figure out wtf was happening.
20:31 jiqiren i think it would be great to have some kind of guide to move from IPoIB to native RDMA
20:31 jclift jiqiren: Sure.  So, the more people we get in the RDMA test day the better.  We have some places with up to 100 nodes spare volunteering them, which is good.
20:31 jiqiren (without having to completely redo volumes from scratch)
20:32 jclift jiqiren: It also sounds like for yourself you'd probably want to hold off on using the RDMA code until it's more "known good" too.  That's a completely valid choice for real data. :)
20:32 jiqiren unfortunately the test lab for rdma doesn't have enough IO/disks to mirror what i'm doing in production
20:32 jiqiren (i should say my test lab)
20:33 jclift jiqiren: Hmmm... I have no idea how to changes volumes from tcp to rdma.  That'd be useful to look into later on.
20:33 jclift jiqiren: With your test lab, it'd be very useful for you to try stuff out anyway, with whatever gear you have around.
20:33 jclift This is definitely one of the cases of "the more the merrier" type of thing. :)
20:34 jiqiren jclift: unfortunately i'm already overwhelmed with other more pressing things. :(
20:34 jclift jiqiren: No worries.  It was a thought. :)
20:34 jiqiren would be great if changing the protocol for a volume was a simple command
20:35 jclift jiqiren: If you get any time next week that you reckon might be spare, give us a thought and ping me.  I could get you to run through the tests anyway, but with rpms compiled right then from git.
20:35 jiqiren like "gluster volume blah set proto tcp,rdma"
20:35 jclift jiqiren: Feel up to creating a BZ with that idea?  It sounds like a good idea to me.
20:35 jclift jiqiren: If not, I can do it pretty easily.
20:35 jiqiren sure, i'll do that
20:36 jclift :)
20:37 sprachgenerator fwiw - I have noticed huge changes in RDMA performance since the rewrite that happened sometime in the early alphas - I've been running this now for ~1 month - and overall it's been doing OK; performance is reasonable
20:37 jclift Cool
20:37 sprachgenerator nothing has bombed out - I just have a steady stream of performance monitoring data via whisper chunking to it
20:38 jiqiren jclift: what's a good component for this feature request? protocol or transport?
20:39 jclift jiqiren: Probably "cli" really.
20:39 jclift jiqiren: From a "make it simple for end users to change volume transport type from cli" point of view, I'm thinking. :)
20:47 jiqiren jclift: https://bugzilla.redhat.com/show_bug.cgi?id=976574
20:49 jclift Heh.
20:49 jclift "i like turtles"
20:49 jclift jiqiren: Cool, that's a pretty well written BZ.
20:50 jclift jiqiren: Hopefully there's nothing that would stop this from working technically, so it could get into 3.5 or similar.
20:51 failshell joined #gluster
20:51 failshell anyone has a good performance tweaking document for gluster?
20:52 failshell the write speeds are so poor in our environment
20:52 andreask joined #gluster
20:52 jiqiren failshell: what transport are you using? 1G nics? 10G? 40G?
20:53 failshell 1G
20:53 failshell in vmware
20:53 failshell it doesnt appear to be network related, as im testing between VMs with iperf, and we're pushing ~800Mb/s
20:53 jiqiren well there is your problem. Max you can push is ~50MB/sec assuming replica=2
20:54 failshell which is about what we get
20:54 failshell hmmm
20:55 jiqiren if you have replica=2, then you have to stream data (write to a file) to 2 bricks. so your total bandwidth is really half of a p2p test
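
(Spelling out the arithmetic behind that estimate: 1 Gbit/s is roughly 125 MB/s of raw bandwidth; with replica 2 the client writes every byte to two bricks, so the ceiling is about 62 MB/s, and TCP/FUSE/protocol overhead brings that down toward the ~50 MB/s quoted above.)
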
20:56 failshell yeah that's why we wanted to use 2 different NICs/VLANs
20:56 failshell but its apparently not supported/possible
20:56 failshell unless we monkey around with the hosts file
20:56 jiqiren sure it is, plug in 2 nics to the same switch, use channel bonding for the nics, you'll have 2Gbit link
20:57 failshell im running this in vmware, there's not really bonding in VMs
20:57 jiqiren (using 2 nics to double your bandwidth is doable, but not with 2 IP's or 2 Vlans)
20:59 jiqiren i think vmware allows you to bond 2 nics into 1 nic. you want to bond nics using 802.3ad. so your switch needs to be able to do that as well
21:00 jiqiren unfortunately it is possible that the 2 IP's (the 2 bricks living on different hosts) both have the same hash - so you'll end up using 1 nic anyway. so it works mostly
21:01 failshell wish we had budget to build a physical setup
21:01 jiqiren failshell: checkout the different methods linux does for channel bonding and see if vmware has equivalents
21:02 jiqiren failshell: http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
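
(On bare-metal Linux an 802.3ad (LACP) bond — which also needs matching configuration on the switch — can be brought up roughly like this; interface names and the address are placeholders, and most distros would express the same thing in their network config files instead:)

    modprobe bonding mode=802.3ad miimon=100
    # slave interfaces should be down before enslaving them
    ifenslave bond0 eth0 eth1
    ip link set bond0 up
    ip addr add 192.168.10.5/24 dev bond0
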
21:06 tg2 joined #gluster
21:17 failshell jiqiren: i guess i'll have to access that ~50MB/s write speed
21:17 failshell s/access/accept
21:22 jclift failshell: how many hosts do you have that you're trying to use gluster with?
21:23 failshell not that many, less than 20 for now
21:23 * jclift is just wondering if there's a cheapo way you can get more bandwidth
21:23 jclift failshell: Is it the kind of corporate place that thinks buying stuff from ebay is idiotic?
21:24 * jclift looks innocent and points at the many infiniband cards on ebay
21:24 failshell well, if we needed hardware, we'd buy it
21:24 failshell its more a case of we dont have budget for that this year. and i was told i needed a redundant storage solution, similar to NFS
21:24 jiqiren failshell: infiniband is really cheap if you buy from an authorized reseller
21:25 failshell so i make do with what i can build in vmware
21:25 jiqiren can easily get 50% of retail pricing
21:25 jclift jiqiren: Cool, that's good to know.
21:25 jiqiren i paid $220 for 40G cards and a 12 port switch was $3500
21:25 jclift jiqiren: Which model switch did you go for?
21:26 failshell jiqiren: are the clients connecting to gluster over infiniband too?
21:26 failshell or regular ethernet?
21:26 jiqiren IPoIB
21:26 plarsen joined #gluster
21:26 jiqiren i used a switch similar to this: http://www.mellanox.com/page/products_dyn?product_family=149&mtag=sx6005_sx6012
21:27 jiqiren but bought 2 years ago for $3500 and it was only 40G, not 56G
21:28 failshell jiqiren: at the same time, ~45MB/s is roughly 400Mb/s, which is the speed at which i can write on a local disk with dd
21:30 jclift failshell: You can run IP over infiniband, same as you can over ethernet.  With infiniband it's called IP-over-IB (IPoIB)
21:30 jiqiren failshell: 40G switch will allow you to write at about 4Gbyte/sec, or 2Gbyte/sec with replica=2
21:32 jclift failshell: Just as a perspective on "how cheap this can go", the older generation infiniband adapters (20Gb/s) are on eBay for about $US40 each. http://www.ebay.co.uk/itm/360657396651
21:32 jclift And that's 20Gb/s per port, so these adapters can do 40Gb/s.  Old stuff, but works well (in my test lab :>)
21:33 * jclift gets back to the testing stuff he's working on
21:37 soukihei joined #gluster
21:45 jag3773 joined #gluster
21:46 yinyin joined #gluster
21:51 Deformative joined #gluster
21:52 tg3 joined #gluster
21:58 Deformative joined #gluster
21:58 mooperd joined #gluster
22:02 yinyin joined #gluster
22:06 partner is it normal the fix-layout takes / is run every 5 seconds or so? i added 1 new brick to a 2-brick distributed system and i recall previous round took only <3 hours
22:09 partner its a 128k directory structure, it actually now seems it takes 2+ weeks to get the "fix-layout" complete
22:10 partner previous took <3 hours.. not sure where is the difference this time, more load yes, more data but still?
22:16 dowillia joined #gluster
22:19 theron y
22:22 * semiosis waves to theron
22:23 theron Hey semiosis!
22:23 theron still recovering from Summit?
22:23 semiosis haha no recovered quickly
22:23 theron exxxcellent.
22:23 semiosis it was rejuvenating
22:24 theron yup I enjoyed it as well.
22:32 thisisdave wrt geo-replication (which I've just begun), with two local servers connected via IB, and the destination being a gluster volume, will I see better results via ssh:// or gluster:// for the geo-replication speed?
22:34 thisisdave (I've got about 30TB sans parity to move so I'm trying to ensure it gets to the safer destination as quickly as possible)
22:49 andreask joined #gluster
22:55 johnmark GlusterFest email going out to gluster-users in 5...4...3..2..1..
22:55 johnmark for RDMA testing
23:05 jiqiren johnmark: any reason the meetup was canceled for today?
23:25 edoceo joined #gluster
23:44 mooperd joined #gluster
23:49 Guest44122 joined #gluster
23:49 Guest44122 We're seeing really slow file operations over Gluster (ls on a directory takes a few seconds with 15 files in it, for example.)
23:49 Guest44122 Any ideas?
