IRC log for #gluster, 2013-02-19


All times shown according to UTC.

Time Nick Message
00:08 vex semiosis: thx
00:21 hagarth joined #gluster
00:26 yinyin joined #gluster
00:29 ehg joined #gluster
00:30 jdarcy joined #gluster
00:34 johndescs_ joined #gluster
00:34 JoeJulian semiosis: https://lists.gnu.org/archive/html/gluster-devel/2013-02/msg00095.html
00:34 glusterbot <http://goo.gl/JsbAa> (at lists.gnu.org)
00:41 hagarth joined #gluster
01:01 disarone joined #gluster
01:09 bala joined #gluster
01:30 hagarth joined #gluster
01:49 yinyin_ joined #gluster
01:49 bala joined #gluster
02:06 jdarcy joined #gluster
02:12 ultrabizweb joined #gluster
02:18 jdarcy joined #gluster
02:50 an joined #gluster
02:56 yinyin joined #gluster
02:58 bharata joined #gluster
03:03 yinyin joined #gluster
03:03 yinyin joined #gluster
03:04 yinyin joined #gluster
03:04 yinyin_ joined #gluster
03:06 yinyin joined #gluster
03:08 bulde joined #gluster
03:09 overclk joined #gluster
03:14 yinyin_ joined #gluster
03:14 atrius joined #gluster
03:17 glusterbot New news from newglusterbugs: [Bug 912564] [FEAT] Use regexes to control hashing for temp files <http://goo.gl/X7ihp>
03:34 hagarth joined #gluster
03:44 kshlm joined #gluster
03:44 kshlm joined #gluster
03:50 syoyo__ joined #gluster
04:01 sripathi joined #gluster
04:07 portante` joined #gluster
04:14 Humble joined #gluster
04:17 jag3773 joined #gluster
04:24 sgowda joined #gluster
04:25 al joined #gluster
04:27 lala joined #gluster
04:29 sahina joined #gluster
04:33 shylesh joined #gluster
04:41 deepakcs joined #gluster
04:45 vpshastry joined #gluster
04:54 bala joined #gluster
04:59 satheesh joined #gluster
05:01 aravindavk joined #gluster
05:02 rastar joined #gluster
05:02 rastar1 joined #gluster
05:17 juhaj joined #gluster
05:17 sonne joined #gluster
05:18 jiqiren joined #gluster
05:18 cw joined #gluster
05:19 niv joined #gluster
05:19 al joined #gluster
05:22 hagarth joined #gluster
05:23 _br_ joined #gluster
05:27 mohankumar joined #gluster
05:34 satheesh joined #gluster
05:41 bala joined #gluster
05:48 an joined #gluster
06:09 raghu joined #gluster
06:29 ricky-ticky joined #gluster
06:34 ramkrsna joined #gluster
06:34 ramkrsna joined #gluster
06:47 dblack joined #gluster
07:09 satheesh1 joined #gluster
07:13 vpshastry left #gluster
07:13 vikumar joined #gluster
07:16 jtux joined #gluster
07:17 vpshastry joined #gluster
07:18 mooperd joined #gluster
07:28 hagarth joined #gluster
07:42 Nevan joined #gluster
07:43 ngoswami joined #gluster
07:48 theron joined #gluster
07:49 cw joined #gluster
08:00 jtux joined #gluster
08:03 ekuric joined #gluster
08:12 rwheeler joined #gluster
08:20 Footur joined #gluster
08:25 Footur hello there
08:25 Footur I'm looking for the documentation of glusterfs 2.0
08:26 Footur i can't find it on the website
08:28 tjikkun_work joined #gluster
08:30 badone joined #gluster
08:35 lh joined #gluster
08:35 lh joined #gluster
08:36 shireesh joined #gluster
08:38 duerF joined #gluster
08:45 dobber_ joined #gluster
08:46 aravinda_ joined #gluster
08:48 mohankumar joined #gluster
08:50 ultrabizweb joined #gluster
08:50 WildPikachu joined #gluster
08:56 gbrand_ joined #gluster
09:00 sahina joined #gluster
09:08 bala joined #gluster
09:08 Norky joined #gluster
09:11 gbrand_ joined #gluster
09:14 tryggvil joined #gluster
09:26 vpshastry joined #gluster
09:32 Humble joined #gluster
09:39 puebele joined #gluster
09:51 cw joined #gluster
09:52 sripathi joined #gluster
09:53 satheesh joined #gluster
09:58 puebele joined #gluster
10:02 shireesh joined #gluster
10:02 aravindavk joined #gluster
10:07 sripathi joined #gluster
10:07 an joined #gluster
10:16 hagarth joined #gluster
10:20 Humble joined #gluster
10:27 rotbeard joined #gluster
10:32 mohankumar joined #gluster
10:35 jag3773 joined #gluster
10:45 andreask joined #gluster
10:46 aravinda_ joined #gluster
10:46 shireesh joined #gluster
10:48 aravinda__ joined #gluster
10:53 shireesh joined #gluster
10:57 aravindavk joined #gluster
11:11 hagarth joined #gluster
11:14 sahina joined #gluster
11:20 rgustafs joined #gluster
11:25 vpshastry joined #gluster
11:33 Footur joined #gluster
11:37 raven-np joined #gluster
11:38 ricky-ticky joined #gluster
11:45 andreask joined #gluster
11:55 andreask joined #gluster
11:55 jdarcy joined #gluster
12:13 andreask joined #gluster
12:13 GLHMarmot joined #gluster
12:14 neofob joined #gluster
12:16 hagarth joined #gluster
12:38 trapni joined #gluster
12:38 * trapni waves
12:39 trapni Hey, we accidentally caused one of our web servers to serve files from a Gluster volume, which caused a minor outage, because that node's gluster agent used up more CPU than we then had free for the HTTP server, causing lots of timeouts.
12:40 trapni is there a way to improve performance for high traffic on gluster volumes? (e.g. more than 1000 requests per second on static files, for stat()/open()/read() ...)
12:41 sjoeboo joined #gluster
12:48 sgowda joined #gluster
12:57 manik joined #gluster
13:15 NuxRo trapni: http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
13:15 glusterbot <http://goo.gl/uDFgg> (at joejulian.name)
13:15 sonne trapni, i don't know about gluster specifically, but you could help it with a reverse proxy
13:15 overclk joined #gluster
13:25 johnmark joined #gluster
13:27 tryggvil joined #gluster
13:28 erik49 joined #gluster
13:31 sgowda joined #gluster
13:38 balunasj joined #gluster
13:42 dustint_ joined #gluster
13:42 dustint joined #gluster
13:54 jclift_ joined #gluster
13:57 bennyturns joined #gluster
13:58 plarsen joined #gluster
14:00 vpshastry joined #gluster
14:01 tryggvil joined #gluster
14:15 jdarcy joined #gluster
14:22 stat1x joined #gluster
14:28 cicero yeah perhaps something like varnish
14:28 cicero depending on what your web content is
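A minimal sketch of the reverse-proxy idea suggested above, assuming a stock Varnish install; the listen port, backend address, and cache size are illustrative, not taken from this discussion. The cache absorbs most of the repeated static-file reads, so the gluster client is only hit on cache misses:

    # hypothetical: Varnish listens on :80 and caches the web server that
    # serves the gluster-backed static files on 127.0.0.1:8080
    varnishd -a :80 -b 127.0.0.1:8080 -s malloc,256m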
14:31 nueces joined #gluster
14:41 aliguori joined #gluster
14:44 wN joined #gluster
14:46 bennyturns joined #gluster
14:57 bugs_ joined #gluster
14:57 stopbit joined #gluster
15:02 morse joined #gluster
15:06 ctria joined #gluster
15:19 jag3773 joined #gluster
15:21 Humble joined #gluster
15:27 ctrianta joined #gluster
15:28 edward1 joined #gluster
15:30 randomcamel left #gluster
15:42 bala joined #gluster
15:46 aliguori joined #gluster
15:46 dock34 joined #gluster
15:49 jdarcy joined #gluster
15:50 jbrooks joined #gluster
15:55 daMaestro joined #gluster
15:58 vpshastry joined #gluster
16:02 _br_ joined #gluster
16:05 _br_ joined #gluster
16:06 ngoswami joined #gluster
16:08 Humble joined #gluster
16:13 _br_ joined #gluster
16:18 Humble joined #gluster
16:19 _br_ joined #gluster
16:26 puebele joined #gluster
16:30 Humble joined #gluster
16:48 hagarth joined #gluster
16:58 Ryan_Lane joined #gluster
16:58 tryggvil joined #gluster
17:06 xian1 joined #gluster
17:08 _br_ joined #gluster
17:13 plarsen joined #gluster
17:15 _br_ joined #gluster
17:17 bit4man joined #gluster
17:23 m0zes_ joined #gluster
17:25 Mo_ joined #gluster
17:28 cw joined #gluster
17:47 hagarth joined #gluster
18:08 Ryan_Lane joined #gluster
18:11 raven-np joined #gluster
18:18 rodlabs joined #gluster
18:27 gbrand__ joined #gluster
18:33 raven-np joined #gluster
18:34 Humble joined #gluster
18:38 jclift joined #gluster
18:44 lh joined #gluster
18:44 lh joined #gluster
18:49 lh joined #gluster
18:49 lh joined #gluster
18:50 Ryan_Lane if I have 4 bricks and have cluster.quorum-count set to auto, how many bricks are necessary for a quorum?
18:50 Ryan_Lane 3?
18:51 Ryan_Lane also, how does the replication work in this situation, if I have replication count set to 2?
18:54 semiosis Ryan_Lane: iirc auto means you need a majority for read/write access, otherwise clients turn readonly
18:54 Ryan_Lane so, majority would be 3?
18:54 Ryan_Lane a quorum of 2 isn't really a quorum, after all :)
18:54 Ryan_Lane I can't really find much docs on this topic
18:54 semiosis 3 would be a majority for replica 3, 4, and 5
18:54 semiosis afaict
18:55 semiosis well this isnt robert's rules
18:55 semiosis i'd expect using quorum auto with replica 2 means you either have both replicas online or you go read-only
18:55 semiosis but i havent tried it myself, just going by what i've heard around here
18:56 semiosis i plan to run through it soon though
18:56 semiosis gotta run, good luck
18:57 elyograg that's my understanding as well with replica 2.  which is why i filed bug 884381.
18:57 glusterbot Bug http://goo.gl/rsyR6 medium, unspecified, ---, jdarcy, ASSIGNED , Implement observer feature to make quorum useful for replica 2 volumes
19:02 jdarcy joined #gluster
19:12 sdvsedfav joined #gluster
19:19 bit4man joined #gluster
19:20 dfvevbeb joined #gluster
19:20 vpshastry joined #gluster
19:27 disarone joined #gluster
19:28 dvba joined #gluster
19:32 dvba joined #gluster
19:41 _Bryan_ joined #gluster
19:42 fvfddgb joined #gluster
19:50 JoeJulian Ryan_Lane: $replica / 2 + 1
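A worked run of JoeJulian's rule, assuming integer division; for replica 2 it gives 2, which matches the read-only behaviour discussed above (both bricks must be up for writes):

    # quorum = replica / 2 + 1 (integer division)
    for replica in 2 3 4 5; do
        echo "replica $replica -> quorum $(( replica / 2 + 1 ))"
    done
    # prints: replica 2 -> 2, replica 3 -> 2, replica 4 -> 3, replica 5 -> 3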
19:52 asdfghjk joined #gluster
20:04 cwDVCWE joined #gluster
20:06 cw joined #gluster
20:08 Ryan_Lane JoeJulian: so it'll be 2?
20:08 Ryan_Lane that won't actually work, will it?
20:13 jdarcy joined #gluster
20:14 gbrand_ joined #gluster
20:18 johnmark Ryan_Lane: for 3.3.x and before, I don't think replica 2 will be able to auto-resolve split brain issues
20:19 johnmark Ryan_Lane: if it's any consolation, 3.4 will (because of the arbiter node)
20:23 hagarth joined #gluster
20:24 fjtryjyjryjrxtht joined #gluster
20:25 andreask joined #gluster
20:30 Ryan_Lane johnmark: so, really I'd need a replica of 3?
20:30 Ryan_Lane for quorum?
20:31 duerF joined #gluster
20:40 y4m4 joined #gluster
20:40 jclift___ joined #gluster
20:44 pipopopo joined #gluster
20:45 dvb joined #gluster
20:53 johnmark Ryan_Lane: as implemented in 3.3, I *think* so
20:54 * Ryan_Lane nods
20:54 Humble joined #gluster
20:56 szopa joined #gluster
21:04 bronaugh ok, got a (probably) stupid question.
21:04 bronaugh one can compose a zpool out of multiple raidz vdevs, yes?
21:07 jdarcy joined #gluster
21:10 bronaugh and.. hmm.
21:11 bronaugh more questions. dedup. can one make sure zfs -only- uses the L2ARC for dedup data?
21:11 bronaugh for our use case I can easily believe that user reads / writes would nuke the cache.
21:27 rm__ joined #gluster
21:27 bronaugh and ... more questionS!
21:27 rm__ hi everyone
21:27 bronaugh what is the ratio of RAM to L2ARC that one must keep up?
21:37 neofob left #gluster
21:38 Humble joined #gluster
21:55 plarsen joined #gluster
21:56 bronaugh shit. wrong channel.
21:59 johnmark bronaugh: I was about to ask, but was afraid to :)
21:59 bronaugh please do next time :P
21:59 xian1 can we get a courtesy flush?
22:00 vex /clear
22:00 bronaugh (sorry about that... obviously failed to check chan name)
22:01 xian1 haha, cool, I've forgotten more about IRC in the last 15 years than I knew.
22:06 hattenator joined #gluster
22:19 Humble joined #gluster
22:21 glusterbot New news from newglusterbugs: [Bug 895528] 3.4 Alpha Tracker <http://goo.gl/hZmy9>
22:23 sjoeboo joined #gluster
22:23 hagarth joined #gluster
22:50 dustint joined #gluster
22:54 Ryan_Lane joined #gluster
22:56 edong23 joined #gluster
22:56 szopa hello, I used the command gluster volume create test replica 2 transport tcp server1:/srv server2:/srv server3:/srv server4:/srv server5:/srv server6:/srv server7:/srv server8:/srv server9:/srv server10:/srv. I'm not sure how it works. I observe that server2 is a mirror of server1, server4 is a mirror of server3, etc.
22:57 szopa Is that by design, or was it just an accident?
22:58 Humble joined #gluster
23:00 edong23_ joined #gluster
23:01 Ryan_Lane joined #gluster
23:01 andrewbogott left #gluster
23:01 jbrooks joined #gluster
23:03 wN joined #gluster
23:03 szopa anyone know how it works?
23:10 lanning yes, it is true.
23:10 lanning the order added to the volume matters
23:10 lanning replica 3 = 1 1 1 2 2 2 3 3 3 4 4 4...
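A minimal sketch of the ordering rule lanning describes, using szopa's own command: with replica 2, bricks are grouped into replica sets in the order they appear on the command line, two at a time, so the observed pairing (server1+server2, server3+server4, ...) is by design rather than accidental:

    # replica 2: consecutive bricks form each mirror pair
    #   set 1: server1:/srv  server2:/srv
    #   set 2: server3:/srv  server4:/srv
    #   ... and so on through server9:/srv server10:/srv
    gluster volume create test replica 2 transport tcp \
        server1:/srv server2:/srv \
        server3:/srv server4:/srv \
        server5:/srv server6:/srv \
        server7:/srv server8:/srv \
        server9:/srv server10:/srv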
23:12 Ryan_Lane joined #gluster
23:19 raven-np joined #gluster
23:20 szopa great thanks :)
23:21 Humble joined #gluster
23:26 rm__ hi robert. just saw your mail … i had seen that page before. but i was pretty sure that for the kind of workload we have, it doesn't really matter where offset 0 of a file is. we have a couple of thousand files of > 50GB each, and we have fewer than 20 clients. so i was expecting that with striping combined with read-ahead, i could maximize the throughput from each 1GbE node. this would be much more important - because offset 0 is very seldom read
23:26 rm__ was hoping to get 10x1GbE minus overhead as aggregated throughput for a single client, but in fact i get way, way less.
23:28 tryggvil joined #gluster
23:34 rm__ in fact, i've created a stripeset today, using the md sets of all nodes as network block devices. so basically, exporting /dev/md0 of each node using an nbd server, then importing them on one 10GbE-connected node and building a stripeset from these … just to check how much throughput i could get. but that 10GbE node was set up using 32bit ubuntu … got to reinstall tomorrow.
23:35 sjoeboo joined #gluster
23:36 gbrand_ joined #gluster
23:44 rm_ joined #gluster
23:51 lanning rm_: can you paste your volfile somewhere (or send it to me)?  I don't have a setup with striping, but it might be an issue with translator stacking.
23:51 glusterbot New news from newglusterbugs: [Bug 912897] gluster volume rebalance status node order should be constant <http://goo.gl/7EPKw>
23:59 rm_ yes. i'll do that tomorrow (right now, i'm not at the office anymore). the current one is, however, created through CLI commands. but the basic question still is: if i have ten nodes, each with a (software) raid that (as per iozone) is able to pump out 320 to 400 MB/s, and a 1GbE link (i.e. roughly 100 MB/s) … how come the combination only pumps out somewhere between 190 and 300 MB/s? the network traffic on the interfaces also never goes
23:59 rm_ all that high … is this only an issue of latency?
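Rough arithmetic from the numbers above: each node's 1GbE link caps out around 100 MB/s, so ten striped bricks could in theory approach 1000 MB/s aggregate minus overhead, yet the observed 190-300 MB/s is a fraction of that and no single link is saturated. One way to rule out the raw network (an assumption on my part, not something from this discussion; hostnames are hypothetical) is to measure each link in isolation:

    # hypothetical check: raw per-link throughput, one node at a time
    # (assumes an iperf server is already running on every node)
    for n in $(seq 1 10); do
        iperf -c node$n -t 10
    done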
