IRC log for #gluster-dev, 2013-05-16

All times shown according to UTC.

Time Nick Message
00:00 avati it should have been upstream.. now ben is going to complain of poor VM perf on rhs-2.1 soon :-)
00:00 foster hehe
00:00 avati btw, the virt team is interested in discard() support
00:01 avati i think you got the mail too?
00:01 foster hmm, which mail?
00:01 foster (or when)
00:01 yinyin joined #gluster-dev
00:02 avati sub: "QEMU Gluster FS and discard support"
00:02 foster I recall your comment in the falloc review, but I assumed it was a separate command and hadn't looked at how it works yet
00:02 avati an hour ago I guess?
00:02 foster ok, possible, my laptop is closed :P
00:04 avati discard was implemented as the FALLOC_FL_PUNCH_HOLE cmd to fallocate()
00:05 avati which i think really deserves to be a separate FOP at the gluster level, because:
00:05 avati 1. needs different (opposite) handling than fallocate() in quota
00:05 avati 2. needs zero'ing out of cached pages in io-cache while fallocate() does not
00:06 foster ok, so you don't want to just check the falloc command?
00:06 foster e.g., mode/flags param that specifies a discard
00:07 avati it will be clean to separate them out.. just the way we separated truncate() out of setattr() because it required fundamentally separate kind of handling at the gluster level
00:07 foster ok, sounds reasonable.
00:08 avati we can just call it discard(fd, offset, size, xdata)
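[For reference: at the syscall level, discard is the FALLOC_FL_PUNCH_HOLE mode passed to fallocate(2), while plain preallocation uses a zero mode. A minimal sketch of the two operations being contrasted; the helper names are illustrative only, not gluster APIs.]

    #define _GNU_SOURCE
    #include <fcntl.h>        /* fallocate() */
    #include <linux/falloc.h> /* FALLOC_FL_* on older glibc headers */

    /* Discard: free the blocks backing [offset, offset+len) without
     * changing the file size. PUNCH_HOLE must be OR'd with KEEP_SIZE. */
    static int discard_range(int fd, off_t offset, off_t len)
    {
        return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         offset, len);
    }

    /* Preallocation: reserve blocks for [offset, offset+len), possibly
     * extending the file size (no KEEP_SIZE). */
    static int preallocate_range(int fd, off_t offset, off_t len)
    {
        return fallocate(fd, 0, offset, len);
    }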
00:09 avati not sure what the recommended fillzero() equivalent is at a syscall level
00:09 foster that one I'm less familiar with
00:09 foster fill a range with zeroes?
00:09 foster (whether it's allocated or not I assume)
00:10 avati yes.. qemu uses it for some reason
00:10 avati i think you want to zerofill() after you fallocate() otherwise you risk showing old data in read()s
00:11 avati (security problem)
00:11 foster the fs should do that already
00:11 foster as an unwritten extent, iiuc
00:11 foster and then partially written blocks are zeroed as necessary
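[A small illustration of foster's point, assuming a freshly created file: a range preallocated with fallocate() is backed by unwritten extents, so reads of it return zeroes rather than whatever old data the blocks previously held. The function name is illustrative only.]

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>
    #include <assert.h>

    static void check_prealloc_reads_zero(const char *path)
    {
        char    buf[4096];
        int     fd = open(path, O_RDWR | O_CREAT | O_EXCL, 0600);
        ssize_t n;

        assert(fd >= 0);
        if (fallocate(fd, 0, 0, 1 << 20) == 0) {   /* reserve 1 MiB */
            n = pread(fd, buf, sizeof(buf), 0);
            assert(n == (ssize_t) sizeof(buf));
            for (ssize_t i = 0; i < n; i++)
                assert(buf[i] == 0);               /* never stale data */
        }
        close(fd);
    }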
00:12 avati yeah.. that sounds sensible.. not sure why qemu has that in its block API for drivers to implement
00:13 foster interesting, maybe I'll ask luis about that. I think he worked on qemu block drivers before
00:13 avati (maybe for efficiently writing out qcow2/qed metadata?)
00:17 foster BTW, if we do discard as a separate fop, I think I'd return an EOPNOTSUPP for discard in the current falloc patchset and create the discard fop in another set, since it touches a lot of files. reasonable?
00:17 avati qemu write path has a logic to check if data is filled with zeroes and calls write_zeroes() instead of write() to the block driver
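[A simplified sketch of the zero-detection idea described above; the driver call names in the trailing comment are illustrative, not qemu's actual block-driver interface.]

    #include <stdbool.h>
    #include <stddef.h>

    /* Return true if the whole buffer is zero-filled. */
    static bool buffer_is_all_zero(const unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            if (buf[i] != 0)
                return false;
        return true;
    }

    /*
     * Caller-side shape (illustrative only):
     *
     *   if (buffer_is_all_zero(buf, len))
     *       driver->write_zeroes(dev, offset, len);   // no payload needed
     *   else
     *       driver->write(dev, offset, buf, len);
     */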
00:18 avati foster, the question becomes, do we need the @cmd param in fallocate() method signature?
00:18 foster the mode param?
00:19 avati ah yes, @mode
00:19 avati i guess it is needed for the *_KEEP_SIZE
00:19 foster FALLOC_FL_KEEP_SIZE is still a valid param for falloc
00:19 avati right!
00:20 avati have you added the protocol conversion flags for it?
00:21 foster not sure what you mean by that
00:21 foster (so I guess not :P)
00:22 avati i mean, see gf_flags_from_flags in xdr/src/glusterfs3.h
00:23 avati if we are passing bit flags defined in external system headers.. they might vary from client to server
00:23 avati (vary in numerical value)
00:23 foster ok, nope those bits are missing from the current set
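[A hedged sketch of what fallocate-flag conversion in the spirit of gf_flags_from_flags (xdr/src/glusterfs3.h) might look like. The GF_FALLOC_FL_* wire values and function names here are hypothetical; the point is simply that gluster defines its own on-the-wire constants so client and server agree even if their system headers assign different numeric values.]

    #include <linux/falloc.h>

    #define GF_FALLOC_FL_KEEP_SIZE   0x01   /* hypothetical wire value */
    #define GF_FALLOC_FL_PUNCH_HOLE  0x02   /* hypothetical wire value */

    /* Local FALLOC_FL_* bits -> gluster wire bits (client side). */
    static inline int gf_fallocate_flags_from_flags(int os_flags)
    {
        int gf_flags = 0;

        if (os_flags & FALLOC_FL_KEEP_SIZE)
            gf_flags |= GF_FALLOC_FL_KEEP_SIZE;
        if (os_flags & FALLOC_FL_PUNCH_HOLE)
            gf_flags |= GF_FALLOC_FL_PUNCH_HOLE;
        return gf_flags;
    }

    /* Gluster wire bits -> local FALLOC_FL_* bits (server side). */
    static inline int gf_fallocate_flags_to_flags(int gf_flags)
    {
        int os_flags = 0;

        if (gf_flags & GF_FALLOC_FL_KEEP_SIZE)
            os_flags |= FALLOC_FL_KEEP_SIZE;
        if (gf_flags & GF_FALLOC_FL_PUNCH_HOLE)
            os_flags |= FALLOC_FL_PUNCH_HOLE;
        return os_flags;
    }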
00:24 avati partly another reason to avoid having an enumeration parameter :p
00:24 avati if keep size is the only flag.. do you think it's worth changing 'int mode' to 'int keep_size' like a boolean?
00:25 avati (need not decide right now)
00:26 foster i guess that's reasonable if we don't expect that to change any time soon
00:26 foster you mean basically set it to 1 or 0?
00:26 avati yeah.. like the 'datasync' flag to fsync()
00:27 foster that sounds reasonable to me
00:27 foster at least, I can't think of any reason not to right now ;)
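[A minimal sketch of the "boolean instead of mode" idea, by analogy with the datasync argument to fsync(); glusterfs_fallocate() is a hypothetical name, not an existing API.]

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>

    /* keep_size is 0 or 1, like the datasync flag to fsync(); it is
     * translated back into the syscall's mode bits only at the edge. */
    static int glusterfs_fallocate(int fd, int keep_size,
                                   off_t offset, off_t len)
    {
        return fallocate(fd, keep_size ? FALLOC_FL_KEEP_SIZE : 0,
                         offset, len);
    }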
00:30 avati cool
00:31 foster ok, gotta run. ttyl!
00:31 avati ttyl!
01:08 kshlm joined #gluster-dev
01:34 bala joined #gluster-dev
01:54 jclift_ joined #gluster-dev
01:54 awheeler joined #gluster-dev
01:59 awheeler does the upgrade from 3.3 to 3.4 require downtime for the volumes?
02:12 badone joined #gluster-dev
02:39 awheeler joined #gluster-dev
02:55 awheeler joined #gluster-dev
03:07 bharata joined #gluster-dev
03:09 foster_ joined #gluster-dev
03:14 awheeler joined #gluster-dev
03:16 jclift_ I have *so* got to set up automated testing of patches I'm interested in.  Manually installing and testing things is so pita and last century. :/
03:16 jclift_ (and slow)
03:16 lkoranda joined #gluster-dev
03:33 awheeler joined #gluster-dev
03:33 shubhendu joined #gluster-dev
03:52 awheeler jclift_: Yup, that seems like the thing to do.
04:05 awheeler joined #gluster-dev
04:08 awheeler joined #gluster-dev
04:24 yinyin joined #gluster-dev
04:41 hagarth joined #gluster-dev
05:00 mohankumar joined #gluster-dev
05:19 awheeler joined #gluster-dev
05:27 hagarth joined #gluster-dev
05:54 rgustafs joined #gluster-dev
06:01 raghu joined #gluster-dev
06:02 bala joined #gluster-dev
06:06 hagarth joined #gluster-dev
06:08 lalatenduM joined #gluster-dev
06:14 bala joined #gluster-dev
06:37 bulde joined #gluster-dev
06:43 krishnan_p joined #gluster-dev
06:47 vshankar joined #gluster-dev
06:52 puebele joined #gluster-dev
07:11 puebele joined #gluster-dev
07:40 shubhendu joined #gluster-dev
07:58 xavih avati: ping
08:05 aravindavk joined #gluster-dev
11:15 kkeithley1 joined #gluster-dev
11:24 hagarth joined #gluster-dev
12:12 edward1 joined #gluster-dev
12:23 hagarth joined #gluster-dev
12:39 awheeler joined #gluster-dev
12:58 mohankumar joined #gluster-dev
13:05 kshlm joined #gluster-dev
13:43 hagarth joined #gluster-dev
13:51 jbrooks joined #gluster-dev
14:03 mohankumar joined #gluster-dev
14:57 lpabon joined #gluster-dev
15:37 puebele1 joined #gluster-dev
15:39 bulde joined #gluster-dev
15:44 portante kkeithley| you around?
15:51 kkeithley| I am
15:51 kkeithley| around as in WFH
15:52 portante great, are you able to review Luis's (hopefully final) rpm build changes for upstream gluster-swift?
15:53 portante We'd like to get that in and then construct a jenkins job to run builds
15:53 portante heading out for a run, back in about an hour
15:55 kkeithley| yup
15:57 puebele1 joined #gluster-dev
16:36 kkeithley| lpabon, portante: why glusterfs-openstack-swift.spec? Did we not say/agree glusterfs-g4s yesterday? Was there more discussion and you changed your mind? (per se, that's okay, but....) I do feel somewhat strongly that it shouldn't have "openstack-swift" in the name because that's not what's in the rpm — what's in the rpm is g4s. It even says so right in the spec. ;-)
16:53 lalatenduM joined #gluster-dev
17:54 bfoster avati: around?
18:12 kkeithley| lpabon, portante: ^^^
18:29 wushudoin joined #gluster-dev
18:36 wushudoin joined #gluster-dev
19:43 portante kkeithley|, I think Luis can answer that. I am not fond of these names, as I think it should be named "gluster-swift", but don't want to hold anything up. Can you and Luis work this out so that we can finish this?
19:55 kkeithley| apart from gluster versus glusterfs——  we've been glusterfs all along. I kinda think we should preserve that. If we didn't already have a glusterfs-swift that's something else entirely then I'd say okay. Is glusterfs-swiftapi so far from what you want?
20:01 * kkeithley| guesses that during package review someone's gonna whine about glusterfs versus gluster.
20:57 lkoranda joined #gluster-dev
21:20 badone joined #gluster-dev
21:47 portante kkeithley| when I look at my RHS 2.0 system, I see glusterfs-* packages for pure GlusterFS code, and gluster-swift* packages for the RHS 2.0 swift code:
21:48 portante http://pastebin.test.redhat.com/142522
21:50 portante naming this correctly is a big deal, to me, but perhaps it is not really an issue, I am happy to let this be somebody else's problem
21:51 portante lpabon: ^^^
21:53 johnmark I have no hard opinions. Just pick one and be done
21:54 badone joined #gluster-dev
22:03 portante luis has picked one then, glusterfs-openstack-swift, so that is it.
22:03 portante lpabon
22:12 portante joined #gluster-dev
22:36 jclift_ joined #gluster-dev
23:42 foster_ joined #gluster-dev