
IRC log for #gluster, 2016-03-25


All times shown according to UTC.

Time Nick Message
00:10 calavera joined #gluster
00:13 atalur joined #gluster
00:26 Pupeno joined #gluster
00:36 Pupeno joined #gluster
00:49 ahino joined #gluster
00:52 camg joined #gluster
00:56 suliba joined #gluster
01:05 EinstCrazy joined #gluster
01:09 EinstCrazy joined #gluster
01:28 Lee1092 joined #gluster
01:32 vmallika joined #gluster
01:32 F2Knight joined #gluster
01:34 Pupeno joined #gluster
01:34 atalur joined #gluster
01:35 dlambrig_ joined #gluster
01:36 Skaag joined #gluster
01:37 Skaag does a brick have to be a drive? or can I just use a folder?
01:37 haomaiwang joined #gluster
01:41 hagarth Skaag: it is recommended to use a folder within a partition
01:48 baojg joined #gluster
01:54 Pupeno joined #gluster
01:56 haomaiwang joined #gluster
01:57 kdhananjay joined #gluster
02:01 haomaiwang joined #gluster
02:02 nangthang joined #gluster
02:11 F2Knight joined #gluster
02:21 F2Knight joined #gluster
02:26 juhaj joined #gluster
02:39 BitByteNybble110 Hey all, I've got a 20TB replicated gluster cluster set up and I think I'm having some read/write performance issues over NFS - If anyone has a few minutes to spare, I've put all my hardware, node, and volume options in a few pastebins so I don't have to spell it all out here
02:46 nangthang joined #gluster
03:01 haomaiwa_ joined #gluster
03:03 F2Knight joined #gluster
03:05 suliba joined #gluster
03:10 vmallika joined #gluster
03:10 devilspgd_ joined #gluster
03:15 pgreg joined #gluster
03:20 jhyland joined #gluster
03:31 shyam left #gluster
03:34 itisravi joined #gluster
03:37 ovaistariq joined #gluster
03:41 om joined #gluster
03:58 luizcpg joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 jiffin joined #gluster
04:03 atinm joined #gluster
04:05 EinstCra_ joined #gluster
04:05 Pupeno joined #gluster
04:06 jiffin joined #gluster
04:11 jiffin joined #gluster
04:14 jiffin1 joined #gluster
04:16 calavera joined #gluster
04:19 jiffin1 joined #gluster
04:21 suliba joined #gluster
04:22 nbalacha joined #gluster
04:23 harish_ joined #gluster
04:41 atrius joined #gluster
04:42 jiffin1 joined #gluster
04:45 jhyland joined #gluster
04:47 jiffin joined #gluster
04:56 sakshi joined #gluster
05:01 haomaiwa_ joined #gluster
05:07 m0zes joined #gluster
05:11 itisravi joined #gluster
05:12 itisravi joined #gluster
05:15 jobewan joined #gluster
05:33 EinstCrazy joined #gluster
05:37 amye joined #gluster
05:37 kdhananjay joined #gluster
05:42 shubhendu joined #gluster
05:43 Apeksha joined #gluster
05:46 Gnomethrower joined #gluster
05:47 Apeksha joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 nbalacha joined #gluster
06:05 Skaag hagarth: how do I use a folder instead of a whole drive?
06:06 Pupeno joined #gluster
06:11 RameshN joined #gluster
06:11 JesperA joined #gluster
06:27 kovshenin joined #gluster
06:27 nangthang joined #gluster
06:35 kshlm joined #gluster
06:42 kshlm joined #gluster
06:44 jiffin1 joined #gluster
06:47 jiffin joined #gluster
06:48 EinstCrazy joined #gluster
06:48 overclk joined #gluster
06:50 atalur joined #gluster
06:50 jiffin1 joined #gluster
06:52 atalur joined #gluster
06:59 ovaistariq joined #gluster
06:59 jiffin1 joined #gluster
07:01 7YUAAMP3Z joined #gluster
07:02 jiffin joined #gluster
07:09 mhulsman joined #gluster
07:19 ovaistariq joined #gluster
07:21 EinstCrazy joined #gluster
07:23 jiffin1 joined #gluster
07:25 post-factum BitByteNybble110: first, you should describe your issue in more detail
07:26 post-factum Skaag: while creating a volume, you specify the path to the brick, and that is actually just a folder inside a filesystem mounted into the hierarchy. so, just specify another path
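A minimal sketch of what post-factum describes, assuming two peers named server1 and server2, an already-mounted filesystem at /data, and a volume name gv0 (all placeholders):

    mkdir -p /data/brick1/gv0                  # run on each server; the brick is just a directory
    gluster volume create gv0 replica 2 \
        server1:/data/brick1/gv0 server2:/data/brick1/gv0
    gluster volume start gv0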
07:27 jiffin1 joined #gluster
07:30 jiffin joined #gluster
07:47 Ulrar Am I right in thinking that having two different gluster "clusters" on the same network isn't a problem ?
07:49 post-factum Ulrar: absolutely
07:50 Ulrar That's too bad, would have been great if that could have been the problem :(
07:50 Ulrar Thanks
07:52 post-factum Ulrar: haven't got your sarcasm
07:53 Ulrar post-factum: No sarcasm, I have data getting corrupted and I'm trying to figure out why
07:53 post-factum Ulrar: two separate gluster clusters do not interfere
07:55 jiffin1 joined #gluster
07:55 Ulrar I'll just try downgrading to 3.7.6, not sure gluster is the problem but since I've had another bug with 3.7.8 I guess it's worth a try
07:56 post-factum maybe, one should try 3.7.9 instead?
07:57 Ulrar I have the exact same setup on other servers running fine with 3.7.6, so I was thinking of just re-doing the exact same thing that works elsewhere.
07:57 Ulrar Don't really have the time to test any more on that project unfortunately
07:59 robb_nl joined #gluster
07:59 suliba joined #gluster
08:01 haomaiwa_ joined #gluster
08:03 vmallika joined #gluster
08:07 Pupeno joined #gluster
08:10 jiffin1 joined #gluster
08:12 jri joined #gluster
08:14 jiffin1 joined #gluster
08:18 nbalacha joined #gluster
08:29 kanagaraj joined #gluster
08:29 baojg joined #gluster
08:32 fsimonce joined #gluster
08:33 dlambrig_ joined #gluster
08:33 camg joined #gluster
08:34 EinstCrazy joined #gluster
08:43 kdhananjay joined #gluster
08:44 kdhananjay left #gluster
08:44 kdhananjay joined #gluster
08:45 Philambdo joined #gluster
08:46 TvL2386 joined #gluster
08:55 nbalacha joined #gluster
09:01 haomaiwa_ joined #gluster
09:03 baojg joined #gluster
09:03 Shakor joined #gluster
09:04 Shakor Hi, so we set up 2 servers with replicated volumes. These 2 servers are connected through 10GbE. We are seeing very slow transfers... 50 MB/s max.
09:05 Shakor Could someone help me out tuning it and making it faster?
09:06 Shakor I do understand that over 1GbE a transfer to a replicated volume should run at about 50-60 MB/s, since the data is written twice, but 50 MB/s over 10GbE is still pretty bad
09:11 camg Shakor: what do you mean by transfer?  between the peers?  Or from clients to the volume?
09:12 Shakor hmm
09:12 Shakor camg: I totally forgot the client is on 1GB ethernet.
09:12 Shakor sigh
09:13 camg Shakor: Is the client connected by fuse or nfs?
09:13 Shakor NFS
09:13 DV__ joined #gluster
09:14 camg The fuse client will be faster
09:14 Shakor but still it should be using about 115 MB/s on the client doing the transfer right?
09:14 camg it will write to both simultaneously
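For reference, the two client mount styles camg is comparing look roughly like this (host name server1, volume gv0 and the mount points are placeholders; gluster's built-in NFS server speaks NFSv3 only):

    mount -t glusterfs server1:/gv0 /mnt/gluster            # native FUSE client, replicates from the client side
    mount -t nfs -o vers=3 server1:/gv0 /mnt/gv0-nfs        # gluster NFS; the server side handles replication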
09:15 Shakor the client does not support fuse
09:15 nishanth joined #gluster
09:15 camg I found the nfs3 client surprisingly slow, perhaps why they integrated ganesha
09:15 camg rsync?
09:15 Shakor rsync should be faster?
09:16 nbalacha joined #gluster
09:16 camg Also parallel writes (ie. run parallel cp or mv for portions of the data)
09:16 EinstCra_ joined #gluster
09:17 camg I think the parallel approach is key for you
09:17 camg You essentially need to create the parallel ability of the fuse client
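A rough sketch of the parallel-copy idea camg suggests, assuming a source tree at /data/src, the NFS mount at /mnt/gv0-nfs and 8 jobs (all illustrative):

    cd /data/src
    # 8 cp processes in flight; GNU cp --parents recreates the directory structure under the destination
    find . -type f -print0 | xargs -0 -P8 -I{} cp --parents {} /mnt/gv0-nfs/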
09:17 Shakor Will be a bit difficult to implement on the XenServer
09:18 camg I don't know the true limit on the nfs client though
09:18 skoduri joined #gluster
09:18 EinstCrazy joined #gluster
09:18 camg are you simply doing a "cp -r" or mv?
09:18 Shakor XenServer is "moving" the virtual machines' disks to the gluster volume through nfs
09:19 Shakor Not sure how that is being synced over under the hood.
09:19 camg hmmm, so you want your vm "disks" on gluster?
09:19 Shakor yeah for 1 volume that is the case
09:20 camg yeah the problem is the access to that vm volume will be slow without libgfapi support
09:21 EinstCrazy joined #gluster
09:21 camg I tried virtualbox with gluster and it was unusable
09:21 Shakor I wish I knew this before going live.
09:21 camg I switched to kvm/qemu with gluster support built in, which is better
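If qemu is built with gluster support, a VM image can live on the volume and be accessed over libgfapi rather than FUSE or NFS; a hedged example (host, volume and image names are placeholders):

    qemu-img create -f qcow2 gluster://server1/gv0/vm1.qcow2 20G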
09:22 Shakor I was doing some testing with a single server glusterfs and it was running pretty good, was not a replicated volume.
09:23 EinstCr__ joined #gluster
09:23 camg Shakor: How can you configure a single server glusterfs?  Just 2 bricks on one node?
09:24 camg I didn't think it would run as a single node
09:24 Shakor runs pretty awesome on 1 node to be honest hehe
09:24 camg but what is the configuration on 1 node?
09:24 EinstCrazy joined #gluster
09:25 camg just one brick with no translators?
09:25 Shakor give me a sec
09:26 Einst____ joined #gluster
09:26 Shakor Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: CLUS-AMS1-SC-001:/data/clusterfs
09:26 Shakor 1 brick yeah
09:27 camg and then you can change that to a replicated volume?  Interesting but yeah I think the afr translator introduces some additional latency
09:28 ahino joined #gluster
09:28 Slashman joined #gluster
09:30 Shakor yep
09:31 Shakor So for my 2 servers currently running live there is no possibility to make it faster?
09:31 Skaag post-factum: thanks
09:33 camg possibly.  I don't understand the performance tuning yet.  Still working on stability.
09:33 Shakor The funny thing is when shutting down 1 node, the transfers are like almost 300% faster
09:34 mhulsman joined #gluster
09:34 jiffin joined #gluster
09:35 camg ha, yeah because no replication.  But you can't run it that way
09:36 camg a good test of the additional latency though
09:37 Shakor there should be a way to enable some kind of delayed mechanism for writing the duplicated data, right?
09:37 atalur_ joined #gluster
09:38 baojg_ joined #gluster
09:38 Shakor Having a nice gluster setup with 2 servers, each with 24 x 1TB disks, on 10GbE; pretty disappointed with the current speed.
09:38 jhyland joined #gluster
09:39 Shakor My manager is not going to be happy hehe
09:39 Pupeno joined #gluster
09:40 camg i haven't used xen in a long time but there must be discussions of this by now, yes?  people who've tried this out recently
09:40 Shakor not to mention the disks are all ssd's
09:40 Shakor have seen some posts about it yep.
09:41 camg same setup you have?
09:41 Shakor Yes
09:41 Shakor All complaining about speed.
09:42 camg ha
09:42 Shakor Maybe I should focus on getting FUSE on the xenservers
09:42 Shakor that will mean enabling some repos that are not officially supported
09:42 Shakor pretty scary
09:43 atrius joined #gluster
09:43 camg Shakor: you need to find out how well integrated xen is with gluster (specifically libgfapi)
09:44 Shakor Sounds like a plan.
09:44 Shakor Will give that a go.
09:44 Shakor Thanks camg
09:44 camg sure good luck
09:50 hchiramm joined #gluster
09:50 purpleid1a joined #gluster
09:56 Philambdo joined #gluster
09:56 EinstCrazy joined #gluster
09:59 camg Does anyone know how gluster handles hardlinks in a distributed volume?  Will the DHT cause hardlink files to span multiple peers?
10:01 haomaiwang joined #gluster
10:03 camg http://www.gluster.org/community/documentation/index.php/Arch/Glusterfs_Hard_Links
10:03 Philambdo joined #gluster
10:06 camg I guess Linkto files is the answer
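One way to see this on disk, sketched with placeholder paths: inspect a file directly on a brick (as root) and look for the DHT link-to pointer.

    getfattr -m . -d -e hex /data/brick1/gv0/path/to/file
    # a pure linkto file is zero bytes with mode ---------T and carries a trusted.glusterfs.dht.linkto xattr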
10:11 k-ma joined #gluster
10:21 kanagaraj joined #gluster
10:25 robb_nl joined #gluster
10:27 nbalacha joined #gluster
10:30 k-ma_ joined #gluster
10:31 dlambrig_ left #gluster
10:34 camg Can a 3.7.9 client connect to 3.7.6 volumes without any issues?
10:35 k-ma joined #gluster
10:41 post-factum camg: yes, tested
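A quick way to confirm what is actually running on each side (the glusterd.info path is the usual default; adjust if your distro relocates it):

    glusterfs --version                                     # on the client
    gluster --version                                       # on a server node
    grep operating-version /var/lib/glusterd/glusterd.info  # the cluster's op-version, on a server node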
10:44 jri_ joined #gluster
10:46 mhulsman joined #gluster
10:47 camg post-factum: thanks.  How is 3.7.9 so far?
10:50 anil_ joined #gluster
10:56 k-ma joined #gluster
11:00 k-ma joined #gluster
11:01 jri joined #gluster
11:05 k-ma joined #gluster
11:14 mhulsman joined #gluster
11:14 post-factum camg: umm, it works :)
11:15 johnmilton joined #gluster
11:16 DV joined #gluster
11:16 k-ma joined #gluster
11:20 Gnomethrower joined #gluster
11:25 Pupeno joined #gluster
11:26 jhyland joined #gluster
11:26 ahino joined #gluster
11:41 Gnomethrower joined #gluster
11:49 Apeksha joined #gluster
12:02 haomaiwang joined #gluster
12:06 pgreg joined #gluster
12:08 Shakor are the older 3.5.x clients compatible when upgrading the servers to latest gluster?
12:09 mowntan joined #gluster
12:09 mowntan joined #gluster
12:10 mowntan joined #gluster
12:19 atalur_ joined #gluster
12:42 robb_nl joined #gluster
12:43 vmallika joined #gluster
13:01 haomaiwa_ joined #gluster
13:03 jiffin joined #gluster
13:05 scobanx joined #gluster
13:07 ahino joined #gluster
13:07 post-factum Shakor: i believe you'd better not do that
13:08 BitByteNybble110 post-factum: Thanks!  I'm noticing slow read/write speeds over NFS to a two-node replicated volume.  Performance on reads/writes direct to the bricks is good, but mounting over NFS, even as a loopback to a node's own mount point, cuts the read/write performance by a factor of at least two, sometimes more
13:11 post-factum BitByteNybble110: and your client/server versions are?..
13:14 BitByteNybble110 post-factum: glusterfs 3.7.5 built on Oct  7 2015 16:27:15
13:14 BitByteNybble110 Running on CentOS 7.2 - w/ kernel 3.10.0-327.10.1.el7.x86_64
13:15 BitByteNybble110 Here's the underlying hardware configuration - http://pastebin.com/raw/v9nVcseG
13:15 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
13:16 BitByteNybble110 Per the bots request - Here's the underlying hardware configuration - http://www.fpaste.org/345069/11766145/
13:16 glusterbot Title: #345069 Fedora Project Pastebin (at www.fpaste.org)
13:17 Pupeno joined #gluster
13:17 BitByteNybble110 And here is my volume configuration - http://www.fpaste.org/345072/14589118/raw/
13:18 post-factum oh, vmware and raid5. weird, but this shouldn't be an issue
13:20 BitByteNybble110 Yea, here's some of the throughput we're seeing... http://www.fpaste.org/345074/45891198/
13:20 glusterbot Title: #345074 Fedora Project Pastebin (at www.fpaste.org)
13:20 BitByteNybble110 Originally we were running the bricks formatted as XFS - But we would encounter a kernel memory alloc deadlock exception that would cause the node to fail
13:20 BitByteNybble110 So we switched to EXT4
13:20 post-factum hm
13:20 post-factum seems to be normal speed
13:20 post-factum and here is why
13:21 post-factum replica 2 means that the client is responsible for writing the replica to both nodes
13:21 post-factum are they interconnected with 2gbe?
13:21 BitByteNybble110 Correct
13:21 jiffin joined #gluster
13:22 post-factum in the case of nfs, the server does the replication, but if the nodes are interconnected by a 2GbE link, that is still the cap
13:22 post-factum so, the max for writing is 250 MB/s
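For reference, that figure is just the link speed converted to bytes: 2 Gbit/s divided by 8 bits per byte is roughly 250 MB/s of raw bandwidth, before NFS and replication overhead.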
13:23 post-factum do you mount nfs on one of server nodes?
13:23 BitByteNybble110 Yes, directly to the loopback address
13:23 post-factum what is the latency between nodes?
13:24 post-factum have you tried setting block size to something more reasonable like bs=4M instead of bs=8k?
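The suggested re-test would look roughly like this (mount point, file name and size are illustrative; conv=fsync and dropping caches keep buffering out of the numbers):

    dd if=/dev/zero of=/mnt/gv0-nfs/ddtest bs=4M count=256 conv=fsync
    echo 3 > /proc/sys/vm/drop_caches          # optional: drop the page cache before the read-back
    dd if=/mnt/gv0-nfs/ddtest of=/dev/null bs=4M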
13:25 BitByteNybble110 Average latency is about 0.3ms.  I haven't but I can certainly do that now
13:26 jiffin1 joined #gluster
13:28 post-factum BitByteNybble110: what I saw was a huge difference in speed between client and server interconnected with 1GbE vs 10GbE. but try the block size first
13:29 post-factum BitByteNybble110: btw using xfs here with no issues
13:29 post-factum same distro/kernel
13:29 jiffin1 joined #gluster
13:30 BitByteNybble110 The XFS bug would only manifest itself after about 20-30 days and after transferring a few TB in and out of the volume
13:31 scobanx I am getting [xlator.c:148:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so: cannot open shared object file: No such file or directory, logged as a warning (W)
13:31 shaunm joined #gluster
13:31 scobanx gluster version 3.7.9
13:31 scobanx the file is there but with a different name: bit-rot.so
13:31 scobanx is this a problem
13:31 post-factum no, if you do not need bit-rot :)
13:32 scobanx i don't need it now but maybe doing a symlink fixes the problem :)
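scobanx's proposed workaround, untested here, would be a symlink so that the expected name resolves to the shipped file:

    ln -s /usr/lib64/glusterfs/3.7.9/xlator/features/bit-rot.so \
          /usr/lib64/glusterfs/3.7.9/xlator/features/bitrot.so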
13:37 bluenemo joined #gluster
13:45 nbalacha joined #gluster
13:46 sakshi joined #gluster
13:46 atalur joined #gluster
13:47 jlp1 joined #gluster
13:48 jlp1 all of the bricks from one of my peers are showing offline, but everything else looks OK.  where is a good starting point to get those bricks online?
13:50 scobanx By the way, the 78x(16+4) distributed disperse volume is working very well during tests. Using fio with 50 clients, the gluster cluster reached 16 GB/s write speed.
13:51 scobanx jlp1: gluster vol start vol_name force may help?
13:53 post-factum jlp1: restart glusterd and glusterfsd on the node with bricks down
13:54 DV joined #gluster
13:56 jlp1 scobanx: the volumes are started, just any brick from that host shows offline
13:56 jlp1 post-factum: i have restarted glusterd and glusterfsd is not running on any of the other hosts, and the bricks are fine
13:56 jlp1 post-factum: i also restarted rpcbind
13:58 scobanx post-factum: Do you know some documentation that describes how to replace bricks in distributed-disperse volume? Is it same as distributed-replicate volume?
13:59 Gnomethrower joined #gluster
14:00 post-factum scobanx: https://gluster.readthedocs.org/en/latest/
14:00 glusterbot Title: Gluster Docs (at gluster.readthedocs.org)
14:00 hackman joined #gluster
14:00 post-factum jlp1: glusterfsd is the brick daemon; it must be running for a brick to be up
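The usual sequence on the affected node looks something like this (CentOS/RHEL 7 style service management assumed; VOLNAME is a placeholder):

    gluster volume status VOLNAME          # confirm which brick processes are down
    systemctl restart glusterd
    gluster volume start VOLNAME force     # respawns any missing brick (glusterfsd) processes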
14:00 scobanx post-factum: There is no documentation there that mentions replacing bricks for distributed-disperse.
14:01 scobanx That's why I am asking if the procedure is the same as for distributed-replicate
14:01 jwd joined #gluster
14:01 haomaiwa_ joined #gluster
14:04 jlp1 post-factum: i tried starting glusterfsd and then restarting glusterd, but no luck.  any other suggestions?
14:05 atalur jlp1, is glusterd up on the node where you are facing this issue?
14:07 post-factum scobanx: it should be; replacing a brick is a high-level operation compared to the volume layout
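The generic command is the same regardless of whether the volume is replicated or dispersed; a sketch with placeholder volume name, hosts and brick paths:

    gluster volume replace-brick VOLNAME \
        server2:/data/brick1/gv0 server3:/data/brick1/gv0 commit force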
14:07 jwaibel joined #gluster
14:08 jwd_ joined #gluster
14:08 chromatin Hi all, I will try one more time although even JoeJulian didn't have any ideas; I have a single-brick, single-node GlusterFS system (other nodes exist and are ready to join but I am testing on this first). The brick's backing store is a RAID60 capable of 1-2 GB/s. However, reading from the gluster FUSE mount with dd maxes out at 200-300 MB/s. Has anyone seen this or have any ideas? This will make GlusterFS a non-starter for us. :/
14:09 jlp1 atalur: yes, glusterd is running, the peer is shown as "Peer in Cluster (Connected)" on all other hosts, and the nfs server and self-heal daemon are showing online
14:09 vmallika joined #gluster
14:12 hackman joined #gluster
14:12 coredump joined #gluster
14:13 scobanx chromatin: dd is single thread use fio instead with more threads.
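A multi-threaded sequential-read test along those lines might look like this (directory, file size and job count are illustrative):

    fio --name=seqread --directory=/mnt/gluster --rw=read --bs=1M \
        --size=4G --numjobs=8 --group_reporting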
14:17 shakor joined #gluster
14:19 shakor gluster (NFS) write: 838860800 bytes (839 MB, 800 MiB) copied, 0.64248 s, 1.3 GB/s
14:19 shakor compared to local disk: 838860800 bytes (839 MB, 800 MiB) copied, 0,147568 s, 5,7 GB/s
14:19 jlp1 thank you guys for your help.  after a gluster volume sync, everything is showing online.
14:20 shakor I am so disappointed
14:23 ahino joined #gluster
14:23 jwd joined #gluster
14:29 * post-factum is hoping that one day no one will use dd for benchmarking
14:32 shakor ha! :)
14:32 hackman joined #gluster
14:32 shakor so post-factum explain
14:33 scobanx dd is single thread use fio instead with more threads.
14:33 shakor Ok good enough.
14:33 shakor Any better test then?
14:34 scobanx https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Performance%20Testing/
14:34 glusterbot Title: Performance Testing - Gluster Docs (at gluster.readthedocs.org)
14:34 skylar joined #gluster
14:35 shakor thanks scobanx
14:37 robb_nl joined #gluster
14:37 shyam joined #gluster
14:39 jobewan joined #gluster
14:41 kpease joined #gluster
14:42 vmallika joined #gluster
14:46 Pupeno joined #gluster
14:53 Gnomethrower joined #gluster
14:54 bennyturns joined #gluster
15:00 shakor even worse
15:00 shakor 145 MB/s
15:00 shakor on a 10GbE replicated NFS volume
15:01 haomaiwang joined #gluster
15:18 atalur joined #gluster
15:38 amye joined #gluster
15:38 ahino joined #gluster
15:45 luizcpg joined #gluster
16:01 haomaiwa_ joined #gluster
16:04 dgandhi joined #gluster
16:05 dgandhi joined #gluster
16:06 dgandhi joined #gluster
16:08 dgandhi joined #gluster
16:09 dgandhi joined #gluster
16:10 robb_nl joined #gluster
16:11 dgandhi joined #gluster
16:11 kdhananjay joined #gluster
16:12 DV joined #gluster
16:13 jiffin joined #gluster
16:13 ovaistariq joined #gluster
16:13 dgandhi joined #gluster
16:16 d0nn1e joined #gluster
16:38 calavera joined #gluster
16:39 hackman joined #gluster
16:42 vmallika joined #gluster
16:45 pdrakewe_ joined #gluster
16:46 jhyland joined #gluster
16:46 plarsen joined #gluster
16:51 jhyland joined #gluster
17:01 haomaiwa_ joined #gluster
17:16 jwd joined #gluster
17:20 bennyturns joined #gluster
17:22 B21956 joined #gluster
17:45 mowntan joined #gluster
17:45 mowntan joined #gluster
17:53 nishanth joined #gluster
17:56 bennyturns joined #gluster
17:57 bennyturns joined #gluster
18:01 7YUAAMVEZ joined #gluster
18:24 ovaistariq joined #gluster
18:26 toppy joined #gluster
18:28 toppy Does anyone have any experience with ctdb on glusterfs? I am getting a "Could not add client IP ... This is not a public address"
18:32 gbox joined #gluster
18:42 dlambrig_ joined #gluster
18:50 ninjaryan joined #gluster
18:52 dlambrig_ joined #gluster
19:00 dlambrig_ joined #gluster
19:01 haomaiwa_ joined #gluster
19:08 amye joined #gluster
19:14 kpease joined #gluster
19:19 dlambrig_ joined #gluster
19:28 dlambrig_ joined #gluster
19:38 dlambrig_ joined #gluster
19:38 kpease joined #gluster
19:52 tswartz joined #gluster
19:55 dlambrig_ joined #gluster
20:01 haomaiwa_ joined #gluster
20:05 dlambrig_ joined #gluster
20:05 calavera joined #gluster
20:11 DV joined #gluster
20:12 ovaistariq joined #gluster
20:14 calavera joined #gluster
20:22 jobewan joined #gluster
20:23 dlambrig_ joined #gluster
20:32 dlambrig_ joined #gluster
20:38 mhulsman joined #gluster
20:42 calavera joined #gluster
20:50 dlambrig_ joined #gluster
21:01 haomaiwa_ joined #gluster
21:03 calavera joined #gluster
21:09 robb_nl joined #gluster
21:17 jri joined #gluster
21:49 dlambrig_ joined #gluster
21:52 gbox joined #gluster
21:58 dlambrig_ joined #gluster
21:58 greendeath joined #gluster
22:01 ovaistariq joined #gluster
22:01 haomaiwa_ joined #gluster
22:07 dlambrig_ joined #gluster
22:16 dlambrig_ joined #gluster
22:25 dlambrig_ joined #gluster
22:25 chromatin Hi all, I wrote this morning with performance problems (Gluster FUSE mount can only do sequential read at 200-300 MB/sec from a brick capable of doing 1-2 GB/sec) and scobanx pooh-poohed this and suggested I use fio instead of dd. However, our use case IS sequential access of very large files. Should we be using two RAID6 bricks instead of a single RAID60 brick (per server) ?
22:34 dlambrig_ joined #gluster
22:43 dlambrig_ joined #gluster
22:46 jobewan joined #gluster
22:49 johnmilton joined #gluster
22:52 dlambrig_ joined #gluster
23:01 DV joined #gluster
23:01 haomaiwa_ joined #gluster
23:01 dlambrig_ joined #gluster
23:11 dlambrig_ joined #gluster
23:19 dlambrig_ joined #gluster
23:23 calavera joined #gluster
23:28 beeradb joined #gluster
23:28 dlambrig_ joined #gluster
23:30 tessier joined #gluster
23:35 gbox joined #gluster
23:38 dlambrig_ joined #gluster
23:47 dlambrig_ joined #gluster
23:49 ovaistariq joined #gluster
23:56 dlambrig_ joined #gluster
