
IRC log for #gluster, 2014-10-02


All times shown according to UTC.

Time Nick Message
00:14 justinmburrous joined #gluster
00:20 sputnik13 joined #gluster
00:46 JoeJulian @meh
00:46 glusterbot JoeJulian: I'm not happy about it either
00:46 T0aD joined #gluster
00:47 JoeJulian glusterbot: when did you come back?
00:47 glusterbot I've been here all day.
00:47 ccha joined #gluster
00:48 JoeJulian Then why is semiosis getting all worked up in gluster-infra?
00:48 glusterbot Heck if I know.
00:49 sputnik13 joined #gluster
01:22 justinmburrous joined #gluster
01:23 harish joined #gluster
01:30 msmith joined #gluster
01:32 msmith_ joined #gluster
01:36 MacWinner joined #gluster
01:45 harish joined #gluster
01:49 Alssi_ joined #gluster
01:57 sprachgenerator joined #gluster
02:06 sputnik13 joined #gluster
02:19 harish joined #gluster
02:47 coredump joined #gluster
03:22 jobewan joined #gluster
03:30 justinmburrous joined #gluster
04:41 justinmb_ joined #gluster
04:49 hagarth joined #gluster
04:50 msmith joined #gluster
04:50 haomaiwa_ joined #gluster
05:02 haomaiwa_ joined #gluster
05:10 justinmburrous joined #gluster
05:15 haomai___ joined #gluster
05:17 haomaiwa_ joined #gluster
05:35 haomaiwang joined #gluster
05:41 MacWinner joined #gluster
05:42 andreask joined #gluster
05:52 msmith joined #gluster
06:08 MacWinner joined #gluster
06:15 ekuric joined #gluster
06:33 ackjewt Hi, i've just upgraded one node in our 4 node distributed cluster and it started to rebalance. Is it safe to continue to upgrade the rest of the nodes? Or should i wait for the rebalance to finish?
06:35 Slydder joined #gluster
06:35 Slydder morning all
06:42 justinmburrous joined #gluster
06:52 msmith joined #gluster
06:53 justinmburrous joined #gluster
06:55 ctria joined #gluster
06:56 charta joined #gluster
07:00 rgustafs joined #gluster
07:04 Fen1 joined #gluster
07:11 koodough joined #gluster
07:12 Telsin joined #gluster
07:13 cliluw joined #gluster
07:21 hybrid512 joined #gluster
07:24 ekuric joined #gluster
07:30 delhage joined #gluster
07:30 fsimonce joined #gluster
07:30 justinmburrous joined #gluster
07:33 elico joined #gluster
07:53 msmith joined #gluster
07:59 charta joined #gluster
08:01 koodough joined #gluster
08:05 liquidat joined #gluster
08:09 justinmburrous joined #gluster
08:54 msmith joined #gluster
09:02 vimal joined #gluster
09:04 justinmburrous joined #gluster
09:19 tryggvil joined #gluster
09:25 tryggvil joined #gluster
09:27 haakon_ joined #gluster
09:45 finknottle joined #gluster
09:47 finknottle I'm seeing a 10-15x slowdown for small files with glusterfs. Does anyone have any recommendations ?
09:48 justinmburrous joined #gluster
09:49 finknottle In a distributed setup with 3 nodes, gigabit ethernet, and hdds, 'git clone http://llvm.org/git/llvm.git' takes about 15 minutes on glusterfs
09:50 finknottle it takes under 2 minutes on local storage, and slightly more than 3 minutes on regular nfs
09:52 finknottle anyone around ?
09:54 diegows joined #gluster
09:55 msmith joined #gluster
10:19 harish joined #gluster
10:29 R0ok_ finknottle: i think this is related to performance cache options for the volume
10:31 Slashman joined #gluster
10:48 kkeithley1 joined #gluster
10:56 msmith joined #gluster
10:58 calum_ joined #gluster
11:02 ekuric joined #gluster
11:04 getup- joined #gluster
11:06 justinmburrous joined #gluster
11:12 ramteid joined #gluster
11:27 gildub joined #gluster
11:42 ira joined #gluster
11:48 TvL2386 joined #gluster
11:51 LebedevRI joined #gluster
11:54 justinmburrous joined #gluster
11:56 TvL2386 hi guys, I'm creating a fileserver with glusterfs, with two servers in a replicated volume. I need to be able to write to the volume from those two servers; I don't have the resources, nor should it be necessary performance wise, to create more servers. How would I mount the gluster volume on the two glusterfs servers?
11:56 msmith joined #gluster
11:58 Fen1 joined #gluster
11:59 pkoro joined #gluster
12:00 TvL2386 also: is there some way to determine what glusterfs is doing? It's causing 100% CPU WAIT on one core and I want to find out what in glusterfs is causing this?
12:05 Slashman joined #gluster
12:05 sputnik13 joined #gluster
12:06 Philambdo joined #gluster
12:08 virusuy joined #gluster
12:16 chirino joined #gluster
12:20 chirino joined #gluster
12:24 tryggvil joined #gluster
12:27 mojibake joined #gluster
12:45 theron joined #gluster
12:46 theron_ joined #gluster
13:00 bene2 joined #gluster
13:03 hchiramm_ joined #gluster
13:06 n1x0n_ joined #gluster
13:07 jmarley joined #gluster
13:07 rwheeler joined #gluster
13:07 ppai joined #gluster
13:07 n1x0n_ Hi there, I'm making my first steps with gluster, but it seems very slow so I suspect "I'm doing it wrong". I have 2 nodes, X & Y, I created a replica volume and mounted it locally on each (mount -t gluster... localhost:volume /foo), is that the way you would do it ?
13:08 justinmburrous joined #gluster
13:13 coredump joined #gluster
13:16 Fen1 What is the OS and the FS you use ?
13:16 Fen1 n1x0n_: ?
13:25 n1x0n_ sorry
13:25 n1x0n_ RHEL7
13:25 n1x0n_ mounting it as fuse.glusterfs
13:25 n1x0n_ gluster : 3.5.2
13:26 Fen1 xfs ?
13:26 n1x0n_ ext4
13:27 Fen1 Try with xfs maybe
13:27 Fen1 And what is your hardware ?
13:27 n1x0n_ ok thx, I'm going to try to mount it as nfs first
13:28 tdasilva joined #gluster
13:28 n1x0n_ Fen1: hw, hmm beefy 48gb ram, 24 cores
13:28 Fen1 ok, that's enough
13:29 Fen1 NFS is better than Fuse for a huge quantity of small files
13:29 n1x0n_ that's exactly what I have, (basically building a puppet master cluster) - shitloads of small text files
13:29 Fen1 And xfs is recommended over ext3/4
13:30 n1x0n_ right, I'll try both then! thx
13:30 n1x0n_ and btw - mounting it locally is .. a generally acceptable thing or am I doing some hack here and people will cringe ?
13:30 Fen1 but wait did you install gluster-server&client on 1 node ?
13:31 n1x0n_ yup
13:31 ndevos n1x0n_: you can mount glusterfs locally, but if that server does not need to use the volume, there is little need to mount it
13:32 ndevos n1x0n_: however, if a server uses the volume, it should mount it, writing to the bricks directly should never be done
13:32 Fen1 you should not use both in one
13:32 n1x0n_ ndevos: it does need to use it, I initially tried to use it directly (I expose directory /foo... so I used /foo) - but that broke files/ IO errors etc, so I understand that IF I need to use it , then only via mount not directly
13:33 n1x0n_ ah yes , what you just said
13:33 n1x0n_ :)
13:33 h4rry joined #gluster
13:36 ndevos n1x0n_: uh, yes :)
13:37 n1x0n_ oooh yes, nfs looks much much much better
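For reference, a minimal sketch of the two mount styles compared in this exchange, assuming a volume named puppetvol (volume name and mount points are hypothetical); Gluster's built-in NFS server speaks NFSv3, hence the vers=3 option:

    # FUSE (native) client mount on one of the Gluster servers itself:
    mount -t glusterfs localhost:/puppetvol /mnt/puppetvol
    # Alternatively, the same volume via Gluster's built-in NFS server,
    # which is often faster for workloads with many small files:
    mount -t nfs -o vers=3,nolock localhost:/puppetvol /mnt/puppetvol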
13:39 coredump joined #gluster
13:48 justinmburrous joined #gluster
13:49 sputnik13 joined #gluster
13:55 BrianR___ joined #gluster
13:56 BrianR___ Can someone offer pointers on whether it's better to use the native client or built in NFS server?
13:56 BrianR___ (assuming I rig up failover for the NFS server)
13:56 skippy is there an explanation of why NFS is better for some use cases of Gluster?  I see the "NFS is better with small files" a lot, but little backing data as to why
13:56 BrianR___ joined #gluster
13:57 skippy and is there a breakpoint for defining "many"? :)
13:58 n1x0n_ BrianR___: nfs seems better for my workload (~10 000 files)
13:58 n1x0n_ small files that is
13:58 msmith joined #gluster
13:59 BrianR___ I can see the potential huge benefit when all of the glusterfs boxes are on a very fast network but the clients are on somewhat slower networks
14:01 ws2k3 hello using this PPA for ubuntu : ppa:semiosis/ubuntu-glusterfs-3.5 that is a stable release right ?
14:04 sprachgenerator joined #gluster
14:05 BrianR___ n1x0n_: Do you use a VIP and failover?
14:07 n1x0n_ BrianR___: not yet, I've literally started with gluster .. yesterday
14:10 Thilam joined #gluster
14:11 ws2k3 hello using this PPA for ubuntu : ppa:semiosis/ubuntu-glusterfs-3.5 that is a stable release right ?
14:16 wushudoin joined #gluster
14:16 sputnik13 joined #gluster
14:30 ws2k3 hello using this PPA for ubuntu : ppa:semiosis/ubuntu-glusterfs-3.5 that is a stable release right ?
14:31 coredump joined #gluster
14:34 msmith joined #gluster
14:39 DV joined #gluster
14:40 DV__ joined #gluster
14:43 lmickh joined #gluster
14:49 h4rry joined #gluster
14:54 calum_ joined #gluster
14:56 sputnik13 joined #gluster
14:58 soumya_ joined #gluster
15:02 sputnik13 joined #gluster
15:04 cjanbanan joined #gluster
15:06 sputnik13 joined #gluster
15:06 jobewan joined #gluster
15:06 cjanbanan Hi, can anyone please explain how reading from a GlusterFS volume works? I've configured a volume of two replicated bricks and, judging by the report produced by the profiling tool, only one of these bricks is accessed when data is read. That makes perfect sense, but when I measure data transfer rates from this volume I get really confused. I would expect GlusterFS to always add a small penalty compared to when the underlying ext4 volume is accessed
15:08 h4rry joined #gluster
15:12 ctria joined #gluster
15:14 Fen1 BrianR___: With glusterfs you don't need a VIP, failover is integrated, no?
15:15 n-st joined #gluster
15:19 ndevos ~ppa | ws2k3
15:19 glusterbot ws2k3: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/M9CXF8 -- 3.5 stable: http://goo.gl/6HBwKh -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
15:23 justinmburrous joined #gluster
15:27 sputnik13 joined #gluster
15:29 Fen1 n1x0n_: you need a VIP with glusterfs ?
15:37 XpineX joined #gluster
15:38 DougBishop joined #gluster
15:47 sputnik13 joined #gluster
15:48 semiosis glusterbot!
15:56 semiosis cjanbanan: afaik clients try to read from the lower latency brick (which ever responds to lookup first) for each file.  this is supposed to provide load balancing with many clients
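For the profiling question above, a minimal sketch of the commands involved, assuming a volume named testvol (hypothetical); the per-brick READ fop counts in the profile output show which replica is actually serving the reads:

    gluster volume profile testvol start    # begin collecting per-brick statistics
    # ... run the read workload from a client ...
    gluster volume profile testvol info     # READ counts per brick reveal the chosen replica
    gluster volume profile testvol stop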
16:04 plarsen joined #gluster
16:11 nueces joined #gluster
16:20 cmtime I have a 12 node setup with replica.  2 of the nodes have 4000 failed files.  One node lists about 20 gfids. The other lists 4000 files.  Any thoughts on how to solve this?  I do not care about the directory involved.
16:22 justinmburrous joined #gluster
16:35 _pol joined #gluster
16:48 skippy how do I enable io-cache?
16:53 JoeJulian enabled by default
16:53 skippy thanks
16:55 JoeJulian cmtime: I don't know what a "failed file" is. Perhaps if you explain how you determine a file is failed it might make things clearer.
16:56 cmtime gluster volume heal gf-osn info split-brain | less
16:56 cmtime I get listed 4000 files on one node being real file paths
16:56 JoeJulian @split-brain
16:56 glusterbot JoeJulian: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
16:57 JoeJulian Which version of gluster are you running?
16:57 cmtime 3.5.0-2.el6.x86_64
16:58 JoeJulian Firstly, you should be running 3.5.2 if you're going to run the 3.5 series. Lots of bug fixes.
16:58 sputnik13 joined #gluster
16:58 cmtime Ya I need to update it but my video encoding people are a pain to work with
16:59 JoeJulian Second, I've had some people running 3.5 report that splitmount doesn't successfully communicate with the server. I haven't solved that yet.
16:59 cmtime the files in question I do not even care about and can delete them all
16:59 JoeJulian Then I would do that.
16:59 cmtime the only problem is in some cases they are not on the brick
17:00 cmtime so trying to fix the gfid the corresponds to the real file I think is what scares me.
17:00 plarsen joined #gluster
17:00 JoeJulian Had you already removed them?
17:01 cmtime no someone messed up and caused me to get to this point.  Node 11 and 12 replicate.  Node 12 went 6 months running an older version somehow.
17:02 cmtime Some point later var was maxed at 100% and one of the files needed to run gluster became a zero byte file and when I came through and did a restart I could not.
17:02 JoeJulian If the files aren't on the replicas, there's no way for them to be split-brain.
17:03 cmtime if .glusterfs dir has the link but the fs does not have the file would that not mess things up?
17:04 PeterA joined #gluster
17:04 cmtime When I looked last I took 2 of the files went into the brick and tried to ls it and file was not found.  I will go check again.
17:04 theron joined #gluster
17:04 JoeJulian It'd be a stale gfid. Shouldn't harm anything.
17:05 JoeJulian If a gfid file (and only a real file) has only one link as shown in stat, then the file it's associated with has been removed. At that point it's safe to remove the gfid file.
17:05 ricky-ticky joined #gluster
17:05 JoeJulian Not true if it's a symlink.
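A minimal sketch of the one-link test described above, run directly on a brick (the brick path is hypothetical); -type f deliberately skips symlinks, matching the caveat just given:

    find /bricks/brick1/.glusterfs -path '*/.glusterfs/??/??/*' -type f -links 1
    # anything listed has no remaining hard link to a real file on the brick,
    # so per the advice above the stale gfid file can be removed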
17:07 cmtime k
17:08 cmtime so should I try to delete it on the gluster first then if not delete it on the brick then force a heal?
17:09 justinmburrous joined #gluster
17:10 failshell joined #gluster
17:13 dtrainor joined #gluster
17:14 dtrainor joined #gluster
17:20 cjanbanan joined #gluster
17:27 zerick joined #gluster
17:38 cmtime Have any of you played with infiniband connected mode vs datagram?
17:45 justinmburrous joined #gluster
17:47 coredump|br joined #gluster
17:58 hchiramm_ joined #gluster
18:00 htrmeira joined #gluster
18:02 theron joined #gluster
18:04 h4rry joined #gluster
18:05 coredump joined #gluster
18:15 tryggvil joined #gluster
18:18 tryggvil joined #gluster
18:31 andreask joined #gluster
18:33 justinmburrous joined #gluster
18:47 mbukatov joined #gluster
18:50 tryggvil joined #gluster
18:59 diegows joined #gluster
19:13 scuttlemonkey joined #gluster
19:18 justinmburrous joined #gluster
19:57 LessSeen_ joined #gluster
19:57 justinmburrous joined #gluster
19:57 thermo44 joined #gluster
20:00 thermo44 Hello guys!! A few months ago we installed a big glusterfs, but had problems with performance with Windows Clients. We tried samba. Right now I have my pools with ZFS+iscsi+samba, and they run file, but now production has doubled and we really need a DFS; I prefer it to be Gluster. Anyone with some experience with Windows clients?
20:00 thermo44 *fine
20:00 tg2 try nfs
20:02 tg2 i have a 2008 server connected to a gluster pool (only to 1 server) via nfs, it works quite fast
20:02 thermo44 Hello tg2 Can we connect a few linux clients via glusterd and NFS at the same time? the guy that helped us said he didn't know...
20:02 tg2 yeah you can
20:02 tg2 you can use glusterd and/or nfs on linux clients
20:02 tg2 then nfs on windows clients
20:02 thermo44 OMG!!! that makes me happy!!! let me paste my samba file so you can see if there is something bad in it
20:02 tg2 I wouldn't use samba tbh
20:02 semiosis dont paste in channel, use a ,,(paste) site
20:03 tg2 ye
20:03 glusterbot For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
20:03 thermo44 So best to use nfs and glusterd, we have centos right now running
20:03 tg2 if you only want to use gluster for DFS and not necessarily fault-tolerant connectivity, then nfs can solve that
20:04 thermo44 ohh yes, thank you guys!! I have it on pastebin
20:04 tg2 if you need to have fault tolerant connectivity you'll have to set up ucarp and nfs on N nodes
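A minimal sketch of the ucarp piece of that, run on each NFS-serving node (interface, addresses, password and scripts are all hypothetical):

    ucarp --interface=eth0 --srcip=10.0.0.11 --vhid=10 --pass=secret \
          --addr=10.0.0.100 \
          --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh &
    # clients mount the shared --addr VIP; whichever node holds it serves NFS,
    # and the up/down scripts add or remove the address on failover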
20:04 tg2 I'm doing some performance testing between nfs and native gluster with 3.5 right now
20:04 thermo44 tg2 what speeds do you get with your array?
20:05 thermo44 NIce!!
20:05 tg2 via NFS I can get 800MB/s
20:05 tg2 on 10G network
20:05 thermo44 Ok let me tel you more or less what I have, OMG really?
20:05 tg2 anyway, try it out, I think you'll find that windows plays nicely with nfs gluster exports
20:05 thermo44 yes, yes, I have the 10gb/s that came with the boards
20:06 tg2 if your bricks are fast enough
20:06 tg2 you can expect some good performance
20:06 tg2 i have had mixed results with gluster native mount
20:06 thermo44 Yes every server has 3 areca cards with 12 disks each one
20:06 tg2 can the areca's do ssd cache?
20:06 tg2 like lsi's cachecade?
20:07 * nated always giggles at "cachecade"
20:07 thermo44 I believe they can, because I've seen arrays with a whole bunch of ssd and the cache disk
20:07 thermo44 hehehe
20:08 tg2 I run R6 with 12 disks + 1 ssd in cachecade
20:08 thermo44 OK never done a nfs connection with windows, any link on basic connections?
20:08 semiosis nated: sounds like sean connery
20:08 tg2 you can acheive comparable results with zfs z2 + ZIL ssd
20:08 tg2 i have yet to try zfs natively on linux with this configuration
20:09 nated semiosis: "Ryan: Things in here don't react very well to cachecacde" -- ha, yes, works well :)
20:09 thermo44 tg2, well zfs says it gives me 1.2gb/s but with samba, well, i can make 2 connections giving 200mb/s each and that's about it
20:09 tg2 yeah i hate samba
20:09 tg2 even windows own dfs
20:10 tg2 with samba
20:10 tg2 is dodgy
20:10 thermo44 Yes, exactly, I am trying to convince this guys to put some linux clients and have mixed environment, and I hope they change a few things to linux
20:11 thermo44 OK, so 1 thing I can test is to use ZFS with nfs and check results?
21:11 tg2 anyway, gluster + nfs to your windows clients, and gluster with either glusterd or nfs will work well
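A minimal sketch of checking the Gluster NFS export from the Linux side before pointing the Windows machines at it (server and volume names are hypothetical); Gluster's built-in NFS server is NFSv3 only:

    showmount -e gluster-server     # the volume should appear in the export list as /volname
    mount -t nfs -o vers=3,nolock gluster-server:/volname /mnt/test

On Windows, with the Client for NFS feature enabled, the rough equivalent is mount -o nolock \\gluster-server\volname Z:, though that exact syntax should be treated as an assumption to verify.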
20:12 thermo44 and the other one would be to use gluster, which I will be fu....ing static to use!!!
20:12 tg2 yeah you can export your zfs volume via nfs
20:12 tg2 and test that
20:12 thermo44 OK going to check about that, last time I tried to use nfs, I get some errors in Centos, maybe because I was using the mount point for samba
20:13 tg2 its quite trivially easy
20:13 tg2 make sure you specify fsid=1 on your nfs exports
20:13 tg2 one random tip that is mentionned nowhere and which makes nfs not randomly break
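A minimal sketch of a kernel NFS export for a ZFS path like the ones discussed, with the fsid tip applied (path and network are hypothetical):

    # /etc/exports
    /tank/images  10.0.0.0/24(rw,sync,no_subtree_check,fsid=1)
    # then reload the export table:
    exportfs -ra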
20:13 semiosis static?  you mean ecstatic?
20:13 tg2 ecstatic
20:14 thermo44 tg2, Going to eat something, but really glad to converse with you, if you have a few minutes later, I hope I can catch you... Going to check about nfs after lunch! exactly, what you said :D
20:15 tg2 just speaking from experience, I'm sure there are others who have other opinions
20:16 thermo44 Would be great to have it again in here and working. I come back in a few minutes!
20:16 tg2 I might not be around
20:17 thermo44 tg2, Are you in Europe?
20:18 tg2 nope
20:18 tg2 not right now
20:18 thermo44 Can I contact you for quick questions?
20:18 thermo44 your user I mean?
20:18 tg2 ask them in here
20:18 tg2 be sure to pm all the devs and annoy them with questions that were already asked
20:19 thermo44 OK, so excited, hope..... lol, damn!!! Well is very new for me :(
20:20 tg2 read up
20:20 thermo44 Hope not to get kicked!! See you in a bit
20:21 semiosis dont pm the devs unless you have business with them
20:23 cjanbanan joined #gluster
20:27 h4rry joined #gluster
20:28 rwheeler left #gluster
20:32 PeterA1 joined #gluster
20:34 dtrainor joined #gluster
20:36 dtrainor joined #gluster
20:42 davemc joined #gluster
20:46 cmtime Does anyone know when 3.6 might be out? =P
20:47 semiosis might be next month, just a wild guess.  it's in beta now
20:47 justinmburrous joined #gluster
20:48 cmtime ty
20:48 semiosis yw.
20:48 cmtime Trying to figure out a plan for crazy 500TB non replica backup solution
20:49 JoeJulian write it all to paper tape
20:49 cmtime lol
20:49 JoeJulian ... you did say crazy...
20:49 tg2 write to floppies
20:49 cmtime trying to resort 500TB just is not realistic anyways
20:49 davemc that was "plan for crazy", not "crazy plan"... grin
20:50 cmtime =)
20:50 tg2 anybody benchmark 3.5 already?
20:51 JoeJulian To what end?
20:51 tg2 vs previous versions
20:51 JoeJulian Ah
20:51 tg2 I couldn't find anything online
20:51 JoeJulian That's because there isn't anything.
20:51 tg2 ie: resource usage on server nodes, speeds to clients
20:51 tg2 replication speed
20:52 cmtime I am benchmarking the impact of mtu 1500 vs 65520 lol found that bug today
20:52 tg2 :D
20:52 tg2 i use 9000 on 10-40G stuff
20:52 tg2 i think 40G could use a higher mtu
20:52 JoeJulian Benchmarking clustered systems would require a substantial investment to get anything meaningful. At least 100 clients, dozens of servers, various workloads to demonstrate the value of clustering...
20:53 cmtime IB with connected mode and mtu 65520 is what I am running now
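A minimal sketch of the IPoIB settings being compared here (the interface name ib0 is an assumption):

    echo connected > /sys/class/net/ib0/mode   # switch from datagram to connected mode
    ip link set ib0 mtu 65520                  # max MTU in connected mode; datagram typically tops out around 2044
    cat /sys/class/net/ib0/mode                # verify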
20:53 tg2 To do concurrency benchmarks yes
20:53 JoeJulian That's the value of clusters.
20:53 tg2 single file operations are still important in many workloads
20:53 tg2 and server->server speeds are important too for brick rebalances and adding/removing bricks dynamically
20:53 JoeJulian For a single file, clusters are only going to slow things down.
20:54 tg2 its an interesting challenge to make a meaningful benchmark since gluster is so flexible in what it runs on top of
20:54 tg2 and what it is used for
20:54 LessSeen_ hi all - i have a two brick volume (replica 2) - and when i run "gluster volume status", the non-local brick shows N under online. it probably happened when updating to 3.5, i am just wondering how i can bring it back online. gluster peer status shows that it is online, and it has the correct ports open (it was working fine before). anyone have input on how i may bring the offline brick back online?
20:55 tg2 LessSeen - can you ping non-local brick server?
20:55 tg2 i guess if peer status shows it online it can
20:56 LessSeen_ it has icmp off but yea, i can see it and ssh tunnel etc
20:56 tg2 is it added by IP or hostname
20:57 LessSeen_ hostname but the ip lives in etc/hosts
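One commonly suggested recovery for a brick stuck at Online: N after an upgrade, sketched minimally with a hypothetical volume name myvol:

    gluster volume status myvol        # confirm which brick process is down
    gluster volume start myvol force   # respawns missing brick daemons without touching data
    gluster volume heal myvol full     # optionally trigger a full self-heal afterwards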
20:57 semiosis JoeJulian: depends on the single file... if it's a reaaaaaally big single file, which lots of clients are working on together, then a stripe volume could help
20:57 tg2 semiosis - can gluster clients read from multiple replicas simultaneously to increase read speed in a replicated volume set?
20:58 semiosis tg2: can and do
20:58 tg2 nice
20:58 semiosis afaik the client polls the replicas on lookup, first to respond serves reads for the file
20:58 semiosis so with many clients the load should be distributed
20:58 tg2 oh i meant one client
20:58 tg2 getting data from multiple replicasets
20:58 tg2 ie: chunks 1-8 from replica 1, 9-16 from replica 2
20:59 semiosis i think for each filehandle the reads come from one replica
20:59 tg2 could grant some raid-0 read speed
20:59 semiosis tg2: not really.  stripe volumes do that, and perhaps disperse volumes too, but no one uses those
21:00 semiosis i mean some people use them, but most dont
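For completeness, a minimal sketch of creating the stripe volume mentioned above, with hypothetical names; as noted, most deployments avoid stripe in favour of plain distribute/replicate:

    gluster volume create stripevol stripe 2 \
        server1:/bricks/b1 server2:/bricks/b2
    gluster volume start stripevol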
21:00 fyxim joined #gluster
21:00 tg2 yeah, but you /could/ get the read speed benefit from a replicated volume
21:00 semiosis with many clients
21:00 tg2 that is inherent
21:01 tg2 how is erasure coming?
21:01 tg2 heard from the grapevine that it was being toyed with
21:01 tg2 multi-server erasure redundancy would be pretty awesome
21:01 semiosis i'm not up to speed on it.  that's the disperse (iirc) volume i mentioned
21:01 tg2 raid 6 across servers :D
21:02 tg2 xavier has a working version in 3.6
21:02 tg2 interesting
21:02 tg2 xavih, is it working well in testing?
21:03 tg2 can you specify how many redundancy nodes you want? ie: 2 for R6-like operation
21:06 siel joined #gluster
21:14 cjanbanan joined #gluster
21:27 thermo44 tg2, i was reading what you wrote, I have 1200 VM's (windows clients), 80 dell machines (windows too) and 2 scanners that make 1TB in 5 minutes. As I said before, right now i have 4 pools: 2 with 178Tb, and 2 pools with 266Tb, in ZFS+ISCSI+samba, but sometimes a lot of VM's hit one pool more than the other, and 1 scanner at the same time plus 80 production machines that want to see the images... In this case a DFS like this one would be a good option right?
21:28 thermo44 The servers are 4u with 3 areca cards each one, 36 hard drives of 2tb and 3tb each one; these are 12 file servers
21:28 JoeJulian Sounds like a good fit to me.
21:30 thermo44 We had gluster before but this Russian guy couldn't make samba work, so I configured samba and it works well with zfs, but not with gluster. So tg2 recommended connecting the windows clients with nfs
21:32 mbukatov joined #gluster
21:32 thermo44 JoeJulian, In a 4 server pool like the one mentioned, what speeds would I possibly achieve? A single server with zfs gives like 750mb/s
21:33 thermo44 I understand that with more servers you get more speed, right?
21:35 LessSeen_ joined #gluster
21:37 _pol joined #gluster
21:37 andreask joined #gluster
21:38 drajen joined #gluster
21:40 cjanbanan joined #gluster
21:40 cmtime I am trying to force rdma for the first time after using tcp,rdma and IPoIB.  I am able to follow the Debunking glusterFS RDMA blog post till the point of mounting but the mount hangs.
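A minimal sketch of the rdma-only setup being attempted, with hypothetical names; the transport keyword on create and the transport=rdma mount option are the pieces meant to force RDMA (treat the exact mount syntax as an assumption to verify against the 3.5 docs):

    gluster volume create rdmavol transport rdma \
        server1:/bricks/b1 server2:/bricks/b2
    gluster volume start rdmavol
    mount -t glusterfs -o transport=rdma server1:/rdmavol /mnt/rdmavol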
21:41 nated joined #gluster
21:45 elico joined #gluster
21:51 cjanbanan joined #gluster
21:53 mbukatov joined #gluster
21:56 tryggvil joined #gluster
21:58 JoeJulian JustinClift: wasn't it you who wanted to put together a gluster consultant guide? Did that ever get anywhere?
21:59 tryggvil joined #gluster
22:02 cjanbanan joined #gluster
22:08 justinmburrous joined #gluster
22:10 calum_ joined #gluster
22:13 LessSee__ joined #gluster
22:19 davemc joejulian: I'm fighting some vagaries of the website but hope to have the consultants page up RSN
22:21 cjanbanan joined #gluster
22:33 kiwnix joined #gluster
22:48 LessSeen_ joined #gluster
22:51 thermo44 davemc I sent you a PM, when you're not busy let me know, thanks!!
22:51 thermo44 So stable version still 3.4 right?
22:56 JoeJulian 3.4.5 or 3.5.2
23:00 zerick joined #gluster
23:01 LessSeen_ joined #gluster
23:03 JustinClift thermo44: If you don't need the features and optimisations in 3.5, then 3.4.5 is a bit stabler.  We know of some memory leaks that affect a minority of people in some situations (to be fixed in 3.4.6), but apart from that you'd be golden.
23:04 JustinClift thermo44: 3.4.6 and 3.5.3 are in the works, both of which should be pretty damn good. :D
23:04 PeterA1 looking forward :)
23:04 PeterA1 i am on 3.5.2 and so far pretty ok….except some heal-failed and extended warnings
23:05 msmith joined #gluster
23:08 thermo44 Thanks JustinClift, thank you very much, sir. Well my clients are windows based, so don't know if 3.5 would be better in that case.
23:08 davemc joined #gluster
23:09 JoeJulian Nothing about windows makes one better than another.
23:11 justinmburrous joined #gluster
23:22 JustinClift :)
23:31 _pol_ joined #gluster
