IRC log for #gluster, 2013-07-13


All times shown according to UTC.

Time Nick Message
00:18 _pol joined #gluster
00:47 sprachgenerator joined #gluster
01:03 harish joined #gluster
01:07 cleeming[foxx] question.. im getting about 30MB/sec write throughput on gluster, where as i get 1.2GB/sec if i go direct to disk, which makes gluster 28 times slower on writes.. is this normal/expected?
01:10 JoeJulian cleeming[foxx]: Don't get in the rut of comparing apples to orchards. http://joejulian.name/blog/dont-get-stuck-micro-engineering-for-scale/
01:10 glusterbot <http://goo.gl/rxZi1> (at joejulian.name)
01:11 JoeJulian But the answer to your actual question is, it depends.
01:11 cleeming[foxx] *reads*
01:11 cleeming[foxx] so.. the benefit of having replicated/scalable storage, is that it comes at a performance cost?
01:12 JoeJulian I suppose you could put it that way.
01:12 cleeming[foxx] thats one heck of a price :/
01:13 JoeJulian You're passing data over networks, through kernel/userspace context switches, etc. Yeah, there's overhead to that.
01:13 cleeming[foxx] this might be a silly question but.. how come gluster uses fuse? will it never have a kernel fs driver?
01:13 brosner joined #gluster
01:14 cleeming[foxx] (if thats the right term to use for it)
01:14 JoeJulian You could also be using a failed testing technique.
01:14 cleeming[foxx] ive tried several... mixture of bonnie, fio, iozone, dd, and a few custom scripts... no one tool gave me the info i needed, so i had to use a mixture
01:14 JoeJulian It will not have a kernel fs driver. That would defeat the micro-kernel translator strength.
01:15 JoeJulian ... but really, the rest of the world seems to be meeting us half way. There is now a direct interface for qemu, samba, and java.
01:15 semiosis cleeming[foxx]: single thread performance?  how many threads can you run at 30MB/s in parallel before they drop?
01:16 cleeming[foxx] semiosis: thats actually a very good question, i believe this was always ran in single threaded, two secs ill run a multi thread test now
01:16 semiosis also libgfapi is new in 3.4 that will enable more application integrations bypassing fuse
01:16 semiosis there used to be an apache module, one day there may be again
01:16 JoeJulian And in single-thread tests, I commonly saturate my 1gig ethernet.
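[editor's note: a minimal sketch of the parallel-writer test semiosis is suggesting, using dd. The directory, thread count, and file size are illustrative; point TESTDIR at the glusterfs mount for a real run.]

```shell
#!/bin/sh
# Run 4 dd writers at once against TESTDIR and time the whole batch.
# TESTDIR defaults to a local tmp dir here; use the gluster mount in practice.
TESTDIR=${TESTDIR:-/tmp/gluster-writetest}
mkdir -p "$TESTDIR"
start=$(date +%s)
for i in 1 2 3 4; do
  # conv=fsync makes dd flush to disk before reporting, for honest numbers
  dd if=/dev/zero of="$TESTDIR/writer.$i" bs=1M count=64 conv=fsync 2>/dev/null &
done
wait
end=$(date +%s)
echo "wrote $((4 * 64)) MiB across 4 threads in $((end - start))s"
```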
01:19 semiosis cleeming[foxx]: also ,,(pasteinfo)
01:20 _pol joined #gluster
01:20 glusterbot cleeming [foxx] : Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
01:21 cleeming[foxx] got it - 2 moments
01:21 * JoeJulian is out... date night.
01:21 semiosis enjoy
01:22 cleeming[foxx] JoeJulian: enjoy! ty for the quick replies btw
01:22 cleeming[foxx] same to you semiosis
01:22 cleeming[foxx] much appreciated.
01:23 * semiosis just dropped in for a few
01:23 semiosis happy to help
01:25 cleeming[foxx] btw, multithreaded i got 50MB/sec, which is better
01:25 cleeming[foxx] ah wait no, that was reads.. writes was 36.43MB/sec.. which is a little faster i guess
01:27 semiosis block size?
01:27 cleeming[foxx] 4096
01:28 cleeming[foxx] http://fpaste.org/25033/67887513/
01:28 glusterbot Title: #25033 Fedora Project Pastebin (at fpaste.org)
01:28 cleeming[foxx] vol info: http://fpaste.org/25034/13736789/
01:28 glusterbot Title: #25034 Fedora Project Pastebin (at fpaste.org)
01:28 semiosis try block size 1M
01:28 semiosis write size, not filesystem block size
01:29 cleeming[foxx] sure, 1 sec
01:30 semiosis thats gotta be the limiting factor
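[editor's note: the distinction semiosis draws is the write size passed to the benchmark (dd's bs), not the filesystem block size. A sketch of the comparison, with /tmp standing in for the gluster mount:]

```shell
#!/bin/sh
# Same 64 MiB of data, two write sizes: 4k means ~16k syscalls
# (and, over gluster, far more network round trips) vs 64 at 1M.
dd if=/dev/zero of=/tmp/bs-test bs=4k count=16384 conv=fsync 2>&1 | tail -n1
dd if=/dev/zero of=/tmp/bs-test bs=1M count=64    conv=fsync 2>&1 | tail -n1
```

dd's final status line reports throughput for each run, making the effect of the write size directly comparable.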
01:45 semiosis long delay a good sign?
01:45 cleeming[foxx] yessir. seems to be so much throughput that df -h hangs.
01:45 semiosis wow
01:45 semiosis check client log file
01:45 semiosis to be sure
01:45 semiosis df shouldnt hang imho
01:46 cleeming[foxx] initial results;
01:46 cleeming[foxx] 18524.56 KB/sec
01:46 cleeming[foxx] thats random writes
01:46 semiosis nice
01:46 cleeming[foxx] not sequential
01:46 cleeming[foxx] the test i did earlier was sequential, so we'll see how that performs
01:48 cleeming[foxx] heh, du/df/ls everything hangs during these tests
01:49 cleeming[foxx] is that to be expected or..?
01:49 cleeming[foxx] sequential writes; http://fpaste.org/25036/36801861/
01:49 glusterbot Title: #25036 Fedora Project Pastebin (at fpaste.org)
01:50 cleeming[foxx] so.. 47MB/sec with multiple threads
01:50 cleeming[foxx] which is actually pretty decent
01:50 semiosis there you go
01:50 cleeming[foxx] so im trading 50% of my possible write performance
01:51 cleeming[foxx] at those sort of levels, that makes more sense
01:51 semiosis writes are replicated 2x
01:51 cleeming[foxx] ahhh
01:51 cleeming[foxx] its a shame theres so much overhead in open/stat/read.. but i can get around that by reading direct from the brick dir
01:52 cleeming[foxx] and in comparison with drbd/ocfs2, gluster feels a lot cleaner
01:52 cleeming[foxx] with ocfs2, it was like i was putting stuff into a magic box, with no insight into what the hell it was doing
01:53 semiosis open/stat does a repl integrity check on the file, a self-heal check, which is slow
01:53 semiosis several round trips to the bricks
01:53 cleeming[foxx] hmm
01:53 cleeming[foxx] could i potentially cause issues by reading direct from the brick dir, as id effectively be bypassing those checks?
01:53 semiosis for large data ops it's negligible, but for small ops it's noticable
01:53 semiosis use noatime on the brick
01:53 cleeming[foxx] yeah lots of php/jpeg files.. millions of them :/
01:53 semiosis it *should* be ok
01:53 cleeming[foxx] got it
01:54 semiosis s/brick/bricks/
01:54 glusterbot What semiosis meant to say was: use noatime on the bricks
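[editor's note: noatime is set on the brick's backing filesystem, i.e. in the server's /etc/fstab, not on the gluster mount. A hypothetical entry; the device, mount point, and xfs options are placeholders:]

```
# /etc/fstab on the server -- brick backing filesystem, atime updates off
/dev/sdb1  /export/brick1  xfs  noatime,inode64  0 0
```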
01:54 cleeming[foxx] lmao thats a nifty feature
01:54 cleeming[foxx] really appreciate you taking time to talk through this, has definately changed my mind about gluster
01:55 semiosis community \o/
01:55 semiosis pass it on
01:56 cleeming[foxx] absolutely!
02:00 semiosis everyone who uses glusterfs successfully went through what you're doing.  some give up in the process
02:00 semiosis it's great software, and if it fits your use case & you can learn how to use it, it will be great
02:01 cleeming[foxx] +1
02:01 bala joined #gluster
02:01 semiosis been using it 2 years and it's been all win
02:02 semiosis in fact i still use 3.1.7 (last 3.1 release) in prod
02:02 semiosis it just keeps running
02:02 cleeming[foxx] hah, thats 2 years old right?
02:03 semiosis give or take. i started with 3.1.3 or thereabouts
02:04 semiosis what version are you using?  what distro?
02:05 cleeming[foxx] the one from your ppa i believe.. 3.3.1-ubuntu1~raring9
02:05 semiosis @ppa
02:05 glusterbot semiosis: The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
02:05 cleeming[foxx] yeah that one
02:05 semiosis check out the last link (but dont tell anyone what you find there)
02:06 semiosis ;)
02:06 cleeming[foxx] ahh
02:06 cleeming[foxx] thats beta right?
02:07 semiosis announcement is coming monday, code was released today tho
02:07 cleeming[foxx] ohhhhhhhhhhhh i see what you mean now
02:07 cleeming[foxx] *doh*
02:07 cleeming[foxx] sorry, its 3am here, my brain is slow heh
02:08 semiosis all good
02:17 semiosis cleeming[foxx]: there's been some concern about mounting in fstab.  the 3.3QA & 3.4 repo packages have what I think is a good solution.  Please let me know if your fstab mounts dont work at boot time
02:17 semiosis if you intend to have more than one glusterfs mount in fstab, let me know that too, there's an extra step you'll need to do manually
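[editor's note: for reference, a native-client glusterfs mount in /etc/fstab generally looks like the following. Server and volume names are placeholders; _netdev defers mounting until the network is up on many distros, which is one common workaround for the boot-time ordering problem discussed here:]

```
# /etc/fstab on the client -- mount the volume via the fuse client
server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev  0 0
```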
02:18 cleeming[foxx] sure, ill give that a shot
02:18 cleeming[foxx] ah, is that the thing you have to add into /etc/rc.local ?
02:18 cleeming[foxx] or am i thinking of something else
02:18 semiosis well imo you should never (have to) do that
02:18 semiosis if my packages are good, mounts should just work
02:18 cleeming[foxx] got it
02:19 semiosis but people have resorted to rc.local when things go wrong
02:19 cleeming[foxx] i dislike using rc.local - feels wrong
02:19 semiosis indeed
02:19 semiosis it's been a constant battle keeping the fstab mounts working
02:19 semiosis so many things have popped up which broke them
02:20 semiosis i dont even know the cause of the latest issue, but found a solution (i hope)
02:20 semiosis even tho it only works for one mount
02:20 semiosis bleh
02:20 cleeming[foxx] heh, how comes its been such a struggle?
02:20 semiosis i really need to engage the ubuntu devs about that
02:20 semiosis because i'm the only one keeping it working
02:21 semiosis and ubuntu changes things
02:21 cleeming[foxx] ah
02:21 cleeming[foxx] ofc.
02:21 cleeming[foxx] the joys of being the only maintainer? :P
02:21 semiosis yeah
02:21 cleeming[foxx] i noticed that the ubuntu package puts everything in /etc/glusterd/ where as your package puts it in /var/lib/glusterd
02:21 semiosis i also blame people for having trouble & not reporting it
02:22 cleeming[foxx] will the next official ubuntu releases for gluster use /var/lib?
02:22 semiosis so by the time people tell me its broken there's like 10 blog posts telling the world its broken & to use rc.local :(
02:22 cleeming[foxx] lmao
02:22 semiosis yep
02:22 cleeming[foxx] the only issue i had getting it working, was gluster bitching about hostnames.. because i didnt assign the machine hostname to 127.0.0.1 in /etc/hosts
02:22 semiosis hmm, should all be /var/lib since 3.3.0
02:22 cleeming[foxx] which i did a write up here;
02:22 cleeming[foxx] http://blog.simplicitymedialtd.co.uk/5​69/glusterfs-operation-failed-on-xxx-h​ost-xxx-not-a-friend-and-silent-output
02:22 glusterbot <http://goo.gl/7dQTN> (at blog.simplicitymedialtd.co.uk)
02:23 cleeming[foxx] tried to give something back at least ;/
02:23 cleeming[foxx] tho i dont know if what ive put on that article is 100% correct, as ive only been using gluster for 2 days :X
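[editor's note: the hostname pitfall above comes down to name resolution: each peer must resolve its own and the other peers' hostnames to reachable LAN addresses, not to loopback. A hypothetical /etc/hosts; names and addresses are illustrative:]

```
# /etc/hosts -- keep the machine's own hostname off the loopback line
127.0.0.1     localhost
192.168.1.10  gluster1.example.com  gluster1
192.168.1.11  gluster2.example.com  gluster2
```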
02:23 semiosis interesting
02:24 cleeming[foxx] oh and i did a probe of the local machine name
02:24 cleeming[foxx] which added it, and i then couldnt remove it
02:24 cleeming[foxx] had to go into /etc and remove manually etc
02:24 semiosis last time i tried that gluster realized & refused, failed gracefully
02:24 semiosis what version?
02:24 semiosis that was 3.2 from the universe package?
02:24 cleeming[foxx] that was 3.2.7 i believe
02:24 cleeming[foxx] ill try now on 3.3, sec
02:25 semiosis the packages dont actually control where glusterd puts stuff, except when they do
02:25 cleeming[foxx] curious;
02:25 cleeming[foxx] # gluster peer probe test1.int
02:25 cleeming[foxx] Probe on localhost not needed
02:25 cleeming[foxx] seems to work as expected now :X
02:25 semiosis yep
02:26 semiosis the package puts stuff in sbin, usr/bin, etc, not responsible for the glusterd working dir
02:26 cleeming[foxx] yeah i think /var/lib makes more sense
02:26 semiosis def
02:27 semiosis well i gotta go
02:27 semiosis get some sleep!
02:27 cleeming[foxx] night man :P yup 3.27am, sleepy time for me! thanks again
02:27 semiosis yw
02:28 semiosis oh hey it would be real nice if you could update that blog post to note that it pertains to an old version people shouldnt be using
02:28 semiosis thx
02:29 kedmison joined #gluster
02:29 * semiosis out
02:29 cleeming[foxx] np - updating now
02:30 cleeming[foxx] done
02:54 y4m4 joined #gluster
02:58 zaitcev joined #gluster
03:09 sprachgenerator joined #gluster
03:12 raghug joined #gluster
03:40 hagarth joined #gluster
03:42 kevein joined #gluster
04:09 raghug joined #gluster
04:12 zaitcev joined #gluster
04:12 sprachgenerator joined #gluster
04:22 krink joined #gluster
04:30 JusHal joined #gluster
04:38 Deformative joined #gluster
05:04 bulde joined #gluster
05:08 cenit joined #gluster
05:08 cenit congratulations to the team for the 3.4 final release! :)
05:25 kedmison joined #gluster
05:30 samppah wait what.. 3.4 released?
05:37 hagarth joined #gluster
05:38 samppah @bug 922183
05:38 glusterbot samppah: Bug http://goo.gl/ZD3FO is not accessible.
05:55 bulde joined #gluster
06:16 harish joined #gluster
06:56 StarBeast joined #gluster
07:34 zwu joined #gluster
07:36 bulde joined #gluster
07:39 mooperd joined #gluster
08:08 ricky-ticky joined #gluster
08:13 mtanner_ joined #gluster
08:15 skyw joined #gluster
08:43 _ilbot joined #gluster
08:43 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
09:13 skyw joined #gluster
10:03 hagarth joined #gluster
11:03 recidive joined #gluster
11:10 skyw joined #gluster
11:14 skyw joined #gluster
11:31 cyberbootje joined #gluster
12:17 chirino joined #gluster
12:17 skyw joined #gluster
12:27 piotrektt joined #gluster
12:29 sprachgenerator joined #gluster
13:15 sprachgenerator joined #gluster
13:22 harish joined #gluster
13:29 mkollaro joined #gluster
13:52 raghug joined #gluster
13:58 codex joined #gluster
14:21 lalatenduM joined #gluster
14:40 balunasj joined #gluster
14:40 balunasj joined #gluster
14:42 bet_ joined #gluster
14:43 bet_ joined #gluster
15:01 bet_ anyone know where statedumps go by default?  I can't seem to find em
15:04 joelwallis joined #gluster
15:05 edong23 joined #gluster
15:15 chirino joined #gluster
15:38 jebba joined #gluster
15:46 l0uis joined #gluster
15:56 chirino joined #gluster
16:02 raghug joined #gluster
16:12 robo joined #gluster
16:13 robos joined #gluster
16:19 krink joined #gluster
16:25 raghug joined #gluster
16:32 ultrabizweb joined #gluster
16:52 skyw joined #gluster
17:35 cyberbootje joined #gluster
17:37 kedmison joined #gluster
17:48 premera joined #gluster
18:02 semiosis @qa releases
18:02 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
18:04 semiosis glusterbot: awesome
18:04 glusterbot semiosis: ohhh yeeaah
18:05 edward1 joined #gluster
18:29 kedmison joined #gluster
18:35 nixpanic joined #gluster
18:58 skyw joined #gluster
19:05 jag3773 joined #gluster
19:18 jebba joined #gluster
19:41 Deformative joined #gluster
19:43 foster joined #gluster
19:44 skyw joined #gluster
21:50 mkollaro joined #gluster
21:59 joelwallis joined #gluster
22:13 Deformative joined #gluster
22:23 krink joined #gluster
23:47 _pol joined #gluster
23:50 Recruiter joined #gluster
23:54 Recruiter Hi guys (and gals), newbie question here. What's the difference between RHEL Storage Server and me using RHEL with Gluster?
23:54 Recruiter since they both have paid support?
