IRC log for #gluster-dev, 2013-03-08

All times shown according to UTC.

Time Nick Message
00:30 lge joined #gluster-dev
00:39 tg2 joined #gluster-dev
00:39 ndevos joined #gluster-dev
00:39 sghosh joined #gluster-dev
00:39 kkeithley joined #gluster-dev
00:39 a2 joined #gluster-dev
00:39 bfoster joined #gluster-dev
01:09 jdarcy joined #gluster-dev
02:28 hagarth joined #gluster-dev
02:57 jdarcy joined #gluster-dev
03:04 bulde joined #gluster-dev
03:20 johnmark ndevos: hey - you uploaded a file to the wiki? but it was rejected?
03:20 johnmark ok, I'll resolve
03:56 bala joined #gluster-dev
04:06 bala joined #gluster-dev
04:06 vshankar joined #gluster-dev
04:15 sripathi joined #gluster-dev
04:18 pai joined #gluster-dev
04:27 rastar joined #gluster-dev
04:28 krishnan_p joined #gluster-dev
04:44 raghu joined #gluster-dev
05:13 aravindavk joined #gluster-dev
05:14 sahina joined #gluster-dev
05:15 rastar1 joined #gluster-dev
05:32 sripathi joined #gluster-dev
05:32 sahina joined #gluster-dev
05:33 pai_ joined #gluster-dev
05:46 sripathi joined #gluster-dev
05:52 vshankar joined #gluster-dev
05:55 vshankar joined #gluster-dev
06:25 sripathi joined #gluster-dev
06:59 puebele1 joined #gluster-dev
07:04 sgowda joined #gluster-dev
07:19 puebele joined #gluster-dev
07:40 sgowda joined #gluster-dev
07:42 sripathi joined #gluster-dev
07:45 sripathi1 joined #gluster-dev
07:50 rastar joined #gluster-dev
08:06 ndevos johnmark: yeah, something like "you do not have privileges to upload files"
08:14 bala joined #gluster-dev
08:28 lge joined #gluster-dev
08:33 rastar joined #gluster-dev
08:38 sgowda joined #gluster-dev
08:54 rastar joined #gluster-dev
09:05 sgowda joined #gluster-dev
09:25 rastar joined #gluster-dev
09:34 sripathi joined #gluster-dev
09:37 badone joined #gluster-dev
10:12 aravindavk joined #gluster-dev
10:35 sgowda joined #gluster-dev
11:07 sgowda joined #gluster-dev
11:36 inodb joined #gluster-dev
11:37 sahina joined #gluster-dev
11:39 jclift joined #gluster-dev
11:44 edward1 joined #gluster-dev
11:53 sripathi1 joined #gluster-dev
12:12 xavih Hello, I've been doing some speed tests with gluster and I think I've detected a possible bottleneck in mount/fuse
12:13 xavih I've used two nodes connected with infiniband. I've created a 2-brick distributed volume (no replica)
12:14 xavih I get about 400 MB/s doing a single write
12:15 xavih if I do 2 simultaneous writes to files that go to different bricks, I get the speed divided by 2 for each write
12:15 xavih the same happens for any combination of writes that I do from the same client
12:16 xavih if I do two writes from two distinct clients, both writes go at the same maximum speed (about 400 MB/s)
12:16 xavih if I create a replica-2 volume, the speed drops to ~190 MB/s
12:17 xavih and any combination of writes always gives a total combined speed of 190 MB/s unless the writes come from distinct clients
12:19 xavih 400 MB/s (using blocks of 128KB) means about 3200 requests per second, that is ~310 us per request
12:20 xavih looking at mount/fuse, I see that there is only one thread getting requests from the kernel, and once it receives a request, it is sent down the xlator stack to be processed; additional requests are only read once the stack returns
12:22 xavih assuming that each translator will allocate memory and make some system calls, it is possible that each request consumes 300 us
12:23 xavih to test this I've tried to put the performance/io-threads xlator next to the mount/fuse xlator
12:24 xavih I repeated the tests and a single write performed worse (about 250 MB/s)
12:24 xavih however, concurrent writes from the same client got a combined speed of more than 600 MB/s
12:27 xavih this solution is not valid as-is because there is some interaction problem between performance/io-threads and mount/fuse: when both are running, an 'ls' does not show all the files present on the volume
12:28 xavih however the writes worked fine and I think the speed measurement is valid
12:30 xavih I also think that using a multi-threaded implementation inside mount/fuse, with multiple threads reading from the fuse fd, could improve the performance even more
12:34 jdarcy joined #gluster-dev
12:35 xavih jdarcy: hi
12:36 xavih jdarcy: I've just written something about the mount/fuse xlator I would like you to see
12:36 xavih jdarcy: on irc
13:08 sripathi joined #gluster-dev
13:18 hagarth joined #gluster-dev
13:28 lpabon joined #gluster-dev
13:32 rgustafs joined #gluster-dev
14:10 hagarth joined #gluster-dev
14:28 hagarth joined #gluster-dev
14:43 lalatenduM joined #gluster-dev
14:53 blues-man joined #gluster-dev
15:03 jclift Anyone know what this compile error with gluster head means?
15:03 jclift In file included from rpc-transport.c:23:
15:03 jclift ../../../libglusterfs/src/logging.h:63: error: expected specifier-qualifier-list before 'pthread_mutex_t'
15:03 jclift Saw mentions of gluster compiling on OSX, so thought I'd give it a shot.
15:03 jclift :)
15:05 jdarcy joined #gluster-dev
15:05 * jclift kicks Gluster compilation errors on osx. :/
15:11 jclift Aha, seemed to be a missing #include <pthread.h>
15:44 jbrooks joined #gluster-dev
16:03 blues-man joined #gluster-dev
16:04 kr4d10 joined #gluster-dev
16:37 bala joined #gluster-dev
17:38 jclift Anyone know who admins the review.gluster.org server?
17:43 lalatenduM joined #gluster-dev
17:45 semiosis jclift: they're here & probably listening, just ask your question
17:46 jclift semiosis: I just want to put them in contact with the Fedora Accounts System people, who I'm chatting with in another IRC window about the OpenID endpoint problem.
17:46 jclift semiosis: It looks like the server may need some minor conf change.
17:46 semiosis hagarth: ^^^
17:47 jclift As per the email I sent to gluster-devel about an hour ago, saying there's a problem. :)
17:47 semiosis that's probably all you need to do to let the right people know
17:51 jclift There's a ticket in the Fedora bug system with some details too: https://fedorahosted.org/fedora-infrastructure/ticket/3695
17:51 jclift Heh
17:55 * jclift wonders if inotify support could be made to replace the "check sync status on every stat()"
17:55 vshankar joined #gluster-dev
17:55 jclift Could lessen the prob with lots of files
17:56 jclift Meh, think about it later. :)
18:06 hagarth joined #gluster-dev
18:33 hagarth joined #gluster-dev
19:05 __Bryan__ joined #gluster-dev
21:28 lpabon joined #gluster-dev
23:34 gbrand_ joined #gluster-dev
23:56 hagarth joined #gluster-dev
