
IRC log for #gluster-dev, 2013-02-11


All times shown according to UTC.

Time Nick Message
01:10 hagarth joined #gluster-dev
02:39 hagarth joined #gluster-dev
03:00 overclk joined #gluster-dev
03:02 overclk joined #gluster-dev
03:42 sripathi joined #gluster-dev
03:51 bharata joined #gluster-dev
04:03 bulde joined #gluster-dev
04:05 hagarth joined #gluster-dev
04:56 vpshastry joined #gluster-dev
05:07 sgowda joined #gluster-dev
05:11 sripathi joined #gluster-dev
05:23 bharata joined #gluster-dev
05:53 johnmark joined #gluster-dev
05:55 raghu joined #gluster-dev
06:03 bulde1 joined #gluster-dev
06:24 hagarth joined #gluster-dev
06:24 shireesh joined #gluster-dev
06:28 sgowda joined #gluster-dev
06:32 sahina joined #gluster-dev
06:35 aravindavk joined #gluster-dev
06:37 shireesh2 joined #gluster-dev
06:39 kanagaraj joined #gluster-dev
06:41 shireesh joined #gluster-dev
06:42 bharata joined #gluster-dev
06:42 bala1 joined #gluster-dev
07:19 mohankumar joined #gluster-dev
07:21 sgowda joined #gluster-dev
07:32 bulde joined #gluster-dev
07:56 shireesh joined #gluster-dev
08:20 bharata wrt http://www.gluster.org/community/documentation/index.php/Features/snapshot, Sec 6.2.a says "...Issue snap commands to start lvm snap of each bricks". Does this mean that the volume snapshot feature will be supported only on those volumes that have LVs as bricks?
08:20 bharata a2, sgowda ^
08:23 hagarth joined #gluster-dev
08:32 bulde bharata: as of now, yes... but the snapshot-taking step will be a pluggable script, so even if you have some other backend which can take snapshots (for example, btrfs), it should work
08:33 bulde we want to provide a solution for one backend first, and keep it pluggable, so it can be extended later on a need basis
08:33 bharata bulde, ok so the core logic to snapshot will not be in gluster then
08:34 bharata bulde, the actual snapshot will be done by either LVM or btrfs or some other FS
08:37 bharata bulde, Also, what do you intend to do during the "Prepare phase" ?
08:37 poornima joined #gluster-dev
08:45 bulde pause the server's RPC layer so it doesn't take any 'modification' fops
08:46 bulde the logic inside gluster will be to provide a distributed interface to the bricks' individual snapshots
08:49 bharata bulde, 'man fsfreeze' and other places mention that explicit fs freezing isn't needed when taking LVM snapshots; I wonder if or how that applies to LVM with GlusterFS on it
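
A side note on bharata's fsfreeze question: lvcreate --snapshot suspends the device-mapper device, and the kernel freezes the filesystem on it as part of that suspend, which is why the man page calls an explicit freeze unnecessary for plain LVM. A non-LVM pluggable backend could perform the freeze itself with the same FIFREEZE/FITHAW ioctls that fsfreeze(8) wraps. A minimal sketch, assuming the brick's mountpoint is passed as the only argument:

```c
/* Freeze/thaw a brick's filesystem around a snapshot step, using the
 * FIFREEZE/FITHAW ioctls that fsfreeze(8) wraps. Illustration only:
 * plain LVM snapshots already get an implicit freeze from the device
 * suspend, so this matters mainly for other pluggable backends. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* FIFREEZE, FITHAW */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <brick-mountpoint>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, FIFREEZE, 0) < 0) { perror("FIFREEZE"); return 1; }

    /* ... run the backend's snapshot command here, e.g. a btrfs
     * subvolume snapshot of the brick ... */

    if (ioctl(fd, FITHAW, 0) < 0) { perror("FITHAW"); return 1; }

    close(fd);
    return 0;
}
```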
08:59 bulde1 joined #gluster-dev
09:24 sripathi joined #gluster-dev
09:27 poornima I have written a test case for QEMU-GlusterFS. Is it ok to post it on the gluster-devel mailing list for review?
09:27 mohankumar hagarth: bulde1: ^^
09:30 vpshastry joined #gluster-dev
09:47 bharata joined #gluster-dev
10:18 bulde joined #gluster-dev
10:25 bharata joined #gluster-dev
10:38 shireesh joined #gluster-dev
10:47 inodb joined #gluster-dev
10:54 poornima joined #gluster-dev
11:01 hagarth poornima: just responded to your email, please go ahead and post your test case on gerrit
11:01 poornima hagarth, ok..thx!
12:09 edward1 joined #gluster-dev
12:11 vpshastry joined #gluster-dev
13:23 blues-man joined #gluster-dev
13:57 hagarth joined #gluster-dev
14:26 johnmark glusterbot: seen jdarcy?
14:26 glusterbot johnmark: I have not seen jdarcy?.
14:27 hagarth joined #gluster-dev
15:02 overclk joined #gluster-dev
15:20 wushudoin joined #gluster-dev
16:27 semiosis @seen jdarcy
16:27 glusterbot semiosis: jdarcy was last seen in #gluster-dev 6 days, 4 hours, and 31 seconds ago: <jdarcy> And bon voyage.  ;)
16:45 hagarth joined #gluster-dev
16:48 kkeithley Pretty sure jdarcy is on his way to Usenix FAST, which starts tomorrow
17:05 johnmark ah crap
17:05 johnmark kkeithley: forgot about that
17:14 sghosh joined #gluster-dev
17:17 raghu joined #gluster-dev
17:33 gbrand_ joined #gluster-dev
17:45 bulde joined #gluster-dev
18:21 bulde joined #gluster-dev
18:22 blues-man hi johnmark, hello everybody. I never got back to you about the geo-replication setup with gluster 3.3 for the Florence library digitization project: they had to stop me due to missing funds (including the expense reimbursement for my thesis!), so as far as I know the project is on hold. Without that reimbursement I can't join them for the academic thesis work on gluster geo-replication for that case study, so I hope they'll find a solution to the problem while keeping gluster. I suggested that, if the project restarts, they get in touch with the gluster developers to try to track down and fix the issue, so maybe they'll contact you or reply to the mailing-list thread I started in December
18:24 blues-man thank you for your attention and courtesy! anyway I'll stick around gluster as a user or contributor if I can help in some way :)
18:36 gbrand_ joined #gluster-dev
19:16 johnmark doh... missed it
20:09 a2 bfoster, ping
20:12 bfoster a2: pong
20:12 a2 bfoster, do you happen to have the patches which you used to test splice() on /dev/fuse sometime ago?
20:13 bfoster hmm, I should have them in a branch, but it's probably _very_ ugly iirc :P
20:14 bfoster they could even be broken, I don't think I spent a ton of time on it
20:14 a2 that's ok :-)
20:14 a2 anything is better than nothing for me to start with :-)
20:15 * bfoster looks...
20:18 bfoster yeah, looks like I have something... it might explode if I rebase it, so I'll git send you the patch as is
20:20 a2 sounds good.. i just need something which exercises splice(), even if against an older glusterfs version, for running ballpark perf comparison
20:21 a2 i (almost) have support for mmap-io on /dev/fuse for zero-copy, want to make sure it is not slower than splice()
20:22 bfoster cool, like I said though, you might want to fire it up in gdb or something, because I don't recall if it even worked :P
20:22 a2 sure :)
20:26 bfoster sent
20:29 a2 bfoster, thanks! do you recall if the reason why you are not using SPLICE_F_MOVE in flags to splice() was because of the comment in man splice(2)?
20:31 bfoster I don't recall specifically, but I suspect I would have read this: "therefore starting in Linux 2.6.21 it is a no-op" and not used it
20:31 a2 yeah
20:31 bfoster at least until having a chance to look at whether it actually does anything
20:33 a2 i'm trying to make a case that while splice() is probably good (with/without SPLICE_F_MOVE) for zero copy with sockets, it does not help with zero-copy with RDMA.. and that mmap-io is (hopefully) no worse than splice() in performance and addresses both socket and RDMA needs
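
For context, a rough sketch of the kind of splice() path under discussion (this is not bfoster's actual patch): a FUSE write payload moves from /dev/fuse into a backing file through a pipe, so it never lands in a userspace buffer. Real code would first consume the request header from the pipe; that parsing is elided here and len is assumed to cover the payload only:

```c
/* Rough sketch of splicing a FUSE write payload from /dev/fuse into a
 * backing file via a pipe; header parsing is elided and `len` is
 * assumed to be payload-only. Not bfoster's actual patch. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

int splice_write_payload(int fuse_fd, int file_fd, loff_t file_off, size_t len)
{
    int p[2];
    ssize_t n;

    if (pipe(p) < 0)
        return -1;

    /* The kernel moves pages from the fuse channel into the pipe... */
    n = splice(fuse_fd, NULL, p[1], NULL, len, SPLICE_F_MOVE);
    if (n > 0)
        /* ...and from the pipe into the file's page cache. Per
         * splice(2), SPLICE_F_MOVE is a no-op since 2.6.21, which is
         * presumably why the original patch left it out. */
        n = splice(p[0], NULL, file_fd, &file_off, n, SPLICE_F_MOVE);

    close(p[0]);
    close(p[1]);
    return n < 0 ? -1 : 0;
}
```

This is the socket-friendly zero-copy model a2 contrasts with RDMA: splice can move pages between fds, but RDMA needs a stable userspace address to register, which is where the mmap-io idea below comes in.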
20:36 bfoster interesting, so by mmap-io/zero copy, you're talking about using mmap access to /dev/fuse?
20:36 bfoster can you even do that with a character special file?
20:36 a2 mmap access to the page cache; /dev/fuse just provides some helpers, that's all
20:37 bfoster so you're reducing the amount of data that passes through dev/fuse, effectively?
20:38 a2 not exactly mmap(/dev/fuse, ..), but adding support for helpers in /dev/fuse which can return a fd backed by the inode; that fd can be passed to mmap() to map the file's pagecache into the server. This way read/write can just pass file offsets instead of copying data through /dev/fuse, and the server refers to those offsets within the mmaped region by pointers
20:38 bfoster gotcha, very cool :)
20:39 a2 this "backend fd" has custom page fault handlers to treat the server "special" (write into those pages is actually like doing a DMA from disk, not "making them dirty" -- etc. kind of awareness)
20:41 a2 i heard a previous attempt/idea at something like this was shot down. i hope i can present it with comparative perf numbers (and the argument that this works with RDMA while splice cannot) and try for better luck
20:41 bfoster I think I follow the gist of it...
20:42 bfoster it would be cool to see the code, plan on posting it to upstream fuse?
20:42 bfoster ah
20:42 gbrand_ joined #gluster-dev
20:42 a2 of course, i just need to make sure it is actually working, and at least does not totally suck in performance with the pagefault overhead
20:43 bfoster :)
20:47 bfoster so is there a socket api that allows taking advantage of such a thing, or is this the reason for the page fault stuff?
20:48 a2 sorry did not understand the question..
20:48 bfoster so we have a fuse request and some magic fd with the data for that request
20:48 a2 that's in splice
20:49 bfoster ah, so splice moves the data from the fd to the socket?
20:49 a2 here the magic fd is only for doing mmap(.., magic_fd)
20:49 a2 and we use a pointer within the mmap'ed region for regular read/write on socket, or pass pointer to RDMA operations. splice is unused completely
20:50 a2 the userspace server will have to manage a pool of mmaped regions for currently open files
20:51 a2 (when using /dev/fuse we get one magic_fd per inode)
20:51 bfoster so you're trading data copies for page faults?
20:52 bfoster or pseudo page faults anyways
20:52 a2 yep.. and since these are going to be minor/soft faults, hoping that they will not be too expensive. and for a VM image where the fd is kept open for long, the page fault costs should get amortized with time
20:54 bfoster ok, cool. i'll take a look when code is available
21:53 dopry joined #gluster-dev
21:54 dopry hey, I can't seem to unsubscribe from gerrit emails...
21:54 dopry I'm not watching any projects, but keep getting emails...
21:54 dopry anyone give me a hand getting my inbox cleaned up?
21:55 badone joined #gluster-dev
21:58 inodb joined #gluster-dev
22:18 gbrand__ joined #gluster-dev
23:26 bfoster joined #gluster-dev
23:40 kkeithley joined #gluster-dev
