
IRC log for #gluster-dev, 2013-05-15


All times shown according to UTC.

Time Nick Message
00:10 jclift joined #gluster-dev
00:10 yinyin joined #gluster-dev
00:58 foster avati: random thought, perhaps log space is a factor as well. a new transaction has to reserve space in the log. if not available, the caller blocks waiting on activity focused on freeing up log space (i.e., flushing things to disk so that they can be removed from the log)
00:58 foster bbl
01:03 yinyin joined #gluster-dev
01:05 jclift avati avati_: Any idea if there is an easy way to add custom translators to an installation, without having to manually hack .vol files?
01:06 jclift If not, we'll need to think of something pretty soon.
01:07 jclift Ideally it'd be nice if people could distribute their own "extra" translators in rpms/debs/etc, which automagically get "picked up" if included in some special directory with appropriate metadata.  That'd really help get people into extending Gluster with translators and making them widely available.
01:27 jclift kkeithley: Well, F18 git master definitely barfs when compiling rpms on the Glupy stuff (without your patch).
01:34 kkeithley| jclift: yes, I didn't do that work in a vacuum. ;-)
01:34 jclift :)
01:35 jclift Testing the patches out now.
01:35 jclift Weirdly, it seems to print the python version to stdout in the middle of the configure part of the make run (for make dist and make glusterrpms)
01:36 jclift kkeithley|: http://fpaste.org/12207/85817561/
01:36 jclift Lines 6 & 7 of that
01:36 jclift kkeithley|: Guessing they shouldn't be there.
01:36 jclift Apart from that though, seems to compile fine now. :)
01:37 jclift Just finished compile testing on F18 x64
01:37 kkeithley| yeah, seems I left an extraneous echo in there
01:37 * jclift nods
01:37 kkeithley| fails the rpm.t regression test though
01:37 jclift Ahhh, I haven't yet really looked into the regression tests.
01:38 jclift Should I try out the compiled version and see if a glupy translator still works using it?
01:39 jclift kkeithley|: Btw, do you know of any good guides to step people through using custom .vol files with their own translators?
01:39 jclift kkeithley|: Didn't see any from simple Google searching when I went looking, so wrote up some initial ones here (for my GlusterFlow project):
01:39 jclift https://github.com/justinclift/glusterflow/tree/master/vol_files
01:39 kkeithley| jdarcy wrote some python to do that in HekaFS
01:40 jclift Interesting
01:41 * jclift is looking
01:41 kkeithley| 3.1 was before my time, but IIRC that's the last release that didn't have auto generated vol files, so there might be some old docs with useful bits in them
01:42 jclift We need to figure out a way for people to add their own custom .vol files easily soon
01:42 jclift Anyway....
01:42 * jclift gets back to testing your patch
01:44 kkeithley| and I seem to remember jdarcy and I had some preliminary discussions about just this sort of thing with avati and hagarth during our BLR visit in March 2012.
01:45 jclift Cool.
01:45 * jclift thinks we should definitely get "something" into 3.5
01:45 jclift With a rather loose definition of "something"
01:45 jclift ;)
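
For reference, the hand-written volfiles jclift is documenting are plain-text graphs, and a custom translator is just one more block in the graph. A minimal sketch, with hypothetical names (the translator, brick path, and volume names below are illustrative, not from this discussion):

    volume brick
        type storage/posix
        option directory /export/brick1
    end-volume

    volume my-custom
        type features/my-xlator
        subvolumes brick
    end-volume

Here "type features/my-xlator" is resolved to a my-xlator.so shared object under the installed xlator directory, which is why a drop-in directory plus metadata, as suggested above, would be enough for third-party translator packages to get picked up.
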
01:47 bharata joined #gluster-dev
01:51 bala joined #gluster-dev
02:04 johnmark greetings
02:06 hagarth greetings
02:06 bharata johnmark, anything special ? :)
02:07 johnmark bharata: oh hi :)
02:07 johnmark bharata: this is the time to see where we are wrt 3.4
02:07 hagarth bharata: at work?
02:07 bharata hagarth, yes
02:07 johnmark hagarth: are you the only one here?
02:08 hagarth johnmark: does look like
02:08 bharata johnmark, read that mail on F19 and QEMU-GlusterFS, just got the F19 iso, will try it and check
02:09 hagarth bharata: will ping you later in the day for a few things.
02:09 bharata hagarth, sure
02:09 hagarth johnmark: status on 3.4 - I am spending time to figure out what other patches we need to get in.
02:10 johnmark bharata: oh thanks
02:10 hagarth I think we can do beta2 once a nagging regression failure is sorted out.
02:10 johnmark hagarth: ok. hrm, which regression failure?
02:11 hagarth johnmark: in the regression tests that we run pre-commit, an nfs mount fails quite frequently. It does look like a race condition. Need to debug why that's happening.
02:12 hagarth 962431 will be the bug tracker for all betas from now.
02:12 kkeithley| no, not the only one here
02:13 hagarth kkeithley|: hello there.
02:13 kkeithley| hello
02:15 hagarth kkeithley|: see that your release-3.4 patch finally passed regression :)
02:15 kkeithley| yes, that was a maze of twisty test failures, all different, none related to the change I made.
02:16 kkeithley| seemingly
02:16 hagarth yeah, we need to sort out those failures. can affect any patch there.
02:16 kkeithley| maybe that's the nfs race condition?
02:16 hagarth does look like
02:16 johnmark kkeithley| oh hai
02:17 johnmark hagarth: that is weird
02:17 johnmark hagarth: ok, so 962431 is the catch-all for all beta releases
02:18 johnmark hagarth: if we isolate that bug, then beta 2 will follow shortly?
02:18 kkeithley| now if Deepak (and someone else) will okay it on both the master and release-3.4 branches then I'll be happy
02:18 hagarth johnmark: yes. if there's anything that we need in beta, let us add a dependency to that bug.
02:18 hagarth johnmark: yes, beta2 can be out after we sort out that failure.
02:19 kkeithley| I'm piggybacking it on Deepak's original BZ, but I guess it's not on the tracker.
02:20 hagarth kkeithley|: yeah, but it is on my radar of things for inclusion in 3.4.0.
02:20 kkeithley| let me fix that. ;-)
02:21 johnmark heh
02:21 johnmark hagarth: so 6 bugs? really?
02:22 hagarth johnmark: need to add a few more there. hope to have all of that sorted over this week.
02:22 hagarth kkeithley|: thanks for fixing that!
02:24 johnmark hagarth: *sigh* ok
02:24 nickw joined #gluster-dev
02:24 * johnmark was getting excited
02:24 hagarth johnmark: nevertheless, let us target a beta refresh on a regular cadence till we get there.
02:28 hagarth johnmark: does that seem like a plan?
02:31 kkeithley| people with nothing better to do filing BZs about the ChangeLog file in the glusterfs RPM
02:32 kkeithley| IIRC that comes out of the `make dist`
02:33 johnmark hagarth: +1
02:33 johnmark kkeithley|: ha... that's... great
02:33 lalatenduM joined #gluster-dev
02:34 kkeithley| do we really even need that in the RPM?
02:34 kkeithley| whining because it's "big" (6MB)
02:34 hagarth kkeithley|: do other packages have ChangeLog in RPMs?
02:35 kkeithley| Don't know
02:35 hagarth if others or packaging standards don't mandate it, we can nuke it from the buildspec.
02:36 jclift Including the ChangeLog is reasonably common
02:36 jclift It's not a "have to have" though
02:36 kkeithley| gcc has ChangeLogs
02:36 jclift It'd be just as simple to have a text file saying "The changelog for this release can be found at http://some/url/here/with/appropriate/tag"
02:36 hagarth maybe we should improve our ChangeLog. It cannot be `git log > ChangeLog` forever.
02:36 kkeithley| the gcc ChangeLogs are compressed.
02:36 jclift libvirt pretty much has a changelog that's git log > ChangeLog
02:37 jclift PostgreSQL went the opposite way, and it's comprehensive human readable (it's pretty good.  lots of man effort though)
02:37 hagarth jclift: I would prefer the latter though.
02:38 kkeithley| bzipped it's only ~1MB
02:38 jclift No disagreement here.  It's up to you guys to prioritise resources, etc. ;)
02:38 kkeithley| versus the %changelog in the rpm spec
02:39 hagarth jclift: yeah, let's see when we can get to it.
02:39 portante|ltp johnmark: can we get a #gluster-swift channel registered on freenode?
02:40 kkeithley| #gluster-swift or #gluster-g4s?
02:40 kkeithley| oh sugar, that reminds me, I need to ask for a gluster-g4s repo in fedora-scm
02:40 portante|ltp #gluster-swift to parallel #openstack-swift
02:40 jclift "g4s" is a bit opaque sounding
02:40 portante|ltp yes, thanks
02:41 jclift s/bit/very/
02:41 portante|ltp jclift: we have a legacy naming conflict
02:41 jclift :(
02:41 hagarth i am on #gluster-swift already!
02:41 jclift gluster-swiftv2
02:41 jclift gluster-swift-next
02:41 * jclift gets back to testing patch manually
02:42 kkeithley| if there's already a #gluster-swift why not just use that?
02:42 portante|ltp Looks like vjay already registered it!
02:44 kkeithley| I dunno, when the package name is gluster{,fs}-g4s are people going to look for #gluster-g4s or #gluster-swift. We're not shipping anything called gluster{,fs}-swift
02:44 johnmark portante|ltp: ha, I didn't realize that
02:45 hagarth please add #gluster-swift to your list of autojoins.
02:45 johnmark kkeithley|: not sure it matters that much
02:46 kkeithley| No, ultimately nothing matters very much. I like names that have meanings though.
02:46 johnmark kkeithley|: I thought there were packages called gluster-swift?
02:46 johnmark at least, that's what I have installed on my machine
02:47 portante|ltp kkeithley| we already have gluster-swift-* packages, right?
02:47 kkeithley| glusterfs-swift is my/the packaging of openstack-swift bundled in with glusterfs
02:47 portante|ltp or are they glusterfs-swift* packages?
02:47 * jclift suspects we'll be adding a /topic to #gluster and #gluster-dev along the lines of "Join #gluster-[whatever] if you're looking for help with #gluster-g4s"
02:48 kkeithley| glusterfs-ufo (soon to be glusterfs-g4s) is the UFO package on top of glusterfs-swift or openstack-swift.
02:48 portante|ltp so we don't have a gluster-swift name conflict for packaging, that is nice.
02:49 johnmark kkeithley|: ah, ok
02:49 kshlm joined #gluster-dev
02:51 kkeithley| glusterfs-swift{,-*} packages start to disappear as the right versions of swift get packaged in Fedora and RDO openstack-swift.
02:55 kkeithley| are we done?
02:56 hagarth kkeithley|: I think so.
02:56 hagarth kkeithley|: I will notify the crew when the lingering issues in release-3.4 are sorted out and we should be ready for a beta refresh then.
02:56 kkeithley| portante|ltp: no conflict in Fedora and Gluster.Org packaging. I don't know what the state of things is in RHS
02:56 kkeithley| hagarth: okay
02:59 hagarth later folks.
03:02 johnmark night
03:18 aravindavk joined #gluster-dev
03:20 shubhendu joined #gluster-dev
03:28 aravindavk joined #gluster-dev
03:53 aravindavk joined #gluster-dev
04:20 nickw joined #gluster-dev
04:23 yinyin joined #gluster-dev
04:28 nickw joined #gluster-dev
04:32 aravindavk joined #gluster-dev
04:36 lalatenduM joined #gluster-dev
04:41 hagarth joined #gluster-dev
04:57 lalatenduM joined #gluster-dev
05:03 bulde joined #gluster-dev
05:03 xavih joined #gluster-dev
05:03 glusdev joined #gluster-dev
05:09 kshlm joined #gluster-dev
05:18 bala joined #gluster-dev
05:23 kshlm joined #gluster-dev
05:25 yinyin joined #gluster-dev
05:31 bala joined #gluster-dev
05:34 raghu joined #gluster-dev
05:39 deepakcs joined #gluster-dev
05:55 bharata_ joined #gluster-dev
06:01 vshankar joined #gluster-dev
06:10 aravindavk joined #gluster-dev
06:36 badone_ joined #gluster-dev
06:48 aravindavk joined #gluster-dev
06:51 bala joined #gluster-dev
07:13 xavih can anyone take a look at this bug (https://bugzilla.redhat.com/show_bug.cgi?id=961668) and this patch (http://review.gluster.org/#/c/5003/), please ?
07:13 glusterbot Bug 961668: unspecified, unspecified, ---, amarts, NEW , gfid links inside .glusterfs are not recreated when missing, even after a heal
07:13 xavih thank you
07:37 aravindavk joined #gluster-dev
07:41 mohankumar joined #gluster-dev
07:54 bala joined #gluster-dev
07:57 aravindavk joined #gluster-dev
08:24 mohankumar joined #gluster-dev
09:13 mohankumar hagarth: ping
09:35 hagarth mohankumar: pong
09:36 mohankumar hagarth: bd_map xlator is not available in the F19 gluster-server rpm, so what's the plan for the bd_map xlator?
09:36 hagarth mohankumar: I think it is a packaging mistake. Possibly done on a machine which does not have lvm-devel installed.
09:37 hagarth the plan is to have bd_map in F19. Let me check that over with kkeithley.
09:37 mohankumar hagarth: could be, so how do we make sure that bd_map will be present in the next beta release?
09:38 hagarth mohankumar: let us work that over with kkeithley. He manages all the Fedora builds for glusterfs.
09:39 mohankumar hagarth: my concern is bd-xlator is not a mandatory option and during configure it's enabled if lvm-devel is installed
09:39 mohankumar otherwise it's disabled silently
09:41 mohankumar instead can we mandate lvm-devel to be installed, i.e. bd-xlator is always enabled?
09:41 mohankumar hagarth: ^^ if one does not want bd xlator, he can explicitly say --disable-bd-xlator
09:48 hagarth mohankumar: having an explicit dependency is not in line with other configuration options.
09:52 shubhendu joined #gluster-dev
09:57 mohankumar hagarth: but some libraries like crypto.so, pthread.so are needed to build glusterfs
10:02 hagarth mohankumar: they are not needed for configure options. However should bd-map be a configurable option at all?
10:03 mohankumar hagarth: my concern is in future if any of the build machines does not have lvm-devel installed, bd xlator will not be enabled
10:03 mohankumar and we will not notice it unless we install the rpm and specifically look for bd xlator
10:03 hagarth mohankumar: right. should we drop the configurable option for bd-map and just have an explicit dependency?
10:03 mohankumar hagarth: i suggest that for bd xlator (v2)
10:03 mohankumar but one can still disable it explicitly
10:04 mohankumar if you run ./configure and lvm-devel is not installed, the configure script will error out saying "lvm-devel not installed, it's needed for the bd xlator"
10:05 mohankumar if user does not want bd xlator in this case, they can run ./configure --disable-bd-xlator
10:06 puebele joined #gluster-dev
10:06 hagarth are you suggesting different behaviors for the bd-map and bd xlators?
10:07 mohankumar hagarth: when bd xlator is merged in gluster git, the plan is to remove bd-map xlator right?
10:07 hagarth mohankumar: yes, that's the plan.
10:08 mohankumar hagarth: in that case we can make bd xlator a mandatory feature
10:08 hagarth mohankumar: i think so as well. Let us just have an explicit dependency on lvm-devel and get rid of the bd options in configure.ac
10:09 mohankumar hagarth: nice, will do needed changes in my v2 series
10:09 hagarth mohankumar: great, will respond to your email later today.
10:10 mohankumar hagarth: regarding remote-host ?
10:10 hagarth mohankumar: yes
10:11 mohankumar hagarth: thanks in advance!
10:11 deepakcs hagarth, thanks-ahead (in gluster terms) ;-)
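
A minimal configure.ac sketch of the behavior agreed on above (illustrative only, not the actual patch): make lvm-devel a hard dependency that fails configure loudly, with --disable-bd-xlator as the explicit opt-out. This assumes the bd xlator links against liblvm2app:

    AC_ARG_ENABLE([bd-xlator],
                  AS_HELP_STRING([--disable-bd-xlator],
                                 [Do not build the block device translator]))

    if test "x$enable_bd_xlator" != "xno"; then
        # hard dependency: error out instead of silently disabling the xlator
        AC_CHECK_LIB([lvm2app], [lvm_init], [],
                     [AC_MSG_ERROR([lvm-devel not installed, it is needed for the bd xlator])])
    fi

With this in place, a build machine missing lvm-devel can no longer produce an rpm that silently lacks the xlator, which was mohankumar's concern.
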
10:11 puebele1 joined #gluster-dev
10:13 mohankumar bala: ping
10:14 edward1 joined #gluster-dev
10:22 bala mohankumar: hi
10:23 mohankumar bala: in bd xlator v2, its capable of having both regular LVs and thin LVs
10:24 mohankumar but we want to export that information through gluster volume info --xml output, so that vdsm can know the capabilities of BD xlator
10:24 mohankumar and ask to create thin LVs. if thin LV is not supported, vdsm will not send a request to create thin LV
10:25 bala ok
10:25 mohankumar so deepakcs and i were talking how to expose this capability to VDSM through xml and i think this should help right?
10:25 mohankumar <capabilities>
10:25 mohankumar <capability>BD</capability>
10:26 mohankumar <capability>thin</capability>
10:26 mohankumar </capabilities>
10:26 mohankumar bala: ^^
10:26 bala mohankumar:  so far we dont' have feature list in volume info
10:27 bala mohankumar: i think this is a global feature right?!
10:27 bala mohankumar: affects all volumes?
10:27 mohankumar bala: at least this set of capabilities is per volume
10:27 mohankumar some of the capabilities
10:27 mohankumar for example posix volume can't support BD capability
10:27 bala mohankumar: ok
10:27 mohankumar but it might support UNMAP(or DISCARD) capability
10:28 mohankumar so do you think above <capabilities>..</capabilities> is ok?
10:28 bala mohankumar: how do we differentiate posix and bd type volumes?
10:28 mohankumar bala: if a volume's info capabilities has 'BD' it's a BD volume, otherwise posix
10:29 mohankumar hagarth: is it acceptable? ^^
10:29 mohankumar VDSM needs to know if the volume is BD or posix
10:29 mohankumar or any management entity to start using BD volumes, it has to know if the volume is BD or not
10:29 hagarth mohankumar: looks ok to me.
10:29 bala mohankumar: this is a create-time capability, so we're supposed to get it thru vol info
10:30 bala mohankumar: i don't know whether this fits into the get and set read-only option
10:31 bala mohankumar: as you said, in the </capability> tag
10:31 puebele joined #gluster-dev
10:31 mohankumar bala: these capabilities will be stored as part of brick information in the respective brick file in /var/lib/glusterd/vols/<volname>/bricks/<brick-name>
10:31 bala mohankumar: we could add that which doesn't break compatibility
10:32 bala mohankumar: ok
10:32 mohankumar but i didn't get what you mean by get/set read-only option
10:33 bala mohankumar: i was thinking of fitting capability as a read-only option
10:33 mohankumar bala: ah ok
10:34 bala mohankumar: could you open up a discussion in gluster-devel?
10:34 bala mohankumar: capability is an interesting feature in volume info
10:34 deepakcs bala, mohankumar but once a BD volume is created... volume info should show the capability.. makes logical sense to me. why should the user do a 'get' again?
10:35 deepakcs bala, yes.. the generic idea was to export volume capabilities thru cli commands.. so that the mgmt layer can exploit it as needed
10:35 mohankumar bala: but there is no 'gluster volume get' functionality as of now
10:36 deepakcs bala, what exactly do you mean by read-only option? Today if i use a volume set option.. e.g. server.allow-insecure on.. then it shows up in volume info (that's the only way to 'get' it), right?
10:37 bala deepakcs: i got it.  volume info is the right place to show that info
10:37 deepakcs bala, ok, so we still need a discussion on the list ?
10:37 mohankumar bala: are you ok with the xml tags  <capabilities>..</capabilities> ?
10:37 bala mohankumar: i am good with that
10:38 mohankumar bala: thanks, in v2 i am trying to address these xml tags also
10:38 bala deepakcs: as i am from the vdsm-gluster side, it looks ok to me. we should see what the gluster core guys think
10:38 deepakcs mohankumar, worth to have a discussion on the list then ?
10:39 mohankumar deepakcs: bala: sure
10:39 bala you could check with amar/vijay etc
10:39 deepakcs bala, in the above example.. BD is in caps.. is that ok.. or do you folks have a rule that all elements in xml output should be in small letters?
10:40 bala deepakcs: i prefer caps here
10:41 deepakcs bala, so it's ok to intermix caps and small.. like BD and thin are 2 different capabilities
10:41 bala mohankumar: elements inside <capabilities> could be cleaner
10:42 bala mohankumar: like <capabilities><xlator>DB</xlator><type>THIN</type></capabilities>
10:42 mohankumar bala: thats possible
10:42 bala mohankumar: you could choose better name than <xlator>
10:42 bala sorry one more change
10:42 mohankumar but having everything in caps looks alarming; imho it could be small letters
10:43 bala <capabilities><xlator><name>DB</name><type>THIN</type></xlator></capabilities>
10:44 bala mohankumar: ok. we can do with 'thin'
10:44 deepakcs bala, so additional capabilities will be additional <type> tags inside <capabilities>?
10:44 mohankumar bala: after s/DB/BD, above one looks neat
10:45 mohankumar for posix it could be <capabilities><xlator><name>posix</name><type>discard</type></xlator></capabilities>
10:46 deepakcs mohankumar, like the xml tag hierarchy, you will have to decide on an equivalent way of dumping the info in non-xml form (as part of gluster volume info)
10:47 mohankumar deepakcs: volume info will show this extra line: Capabilities: BD, Thin
10:47 bala mohankumar: that's my typo :)
10:47 mohankumar :)
10:48 deepakcs mohankumar, in xml if we are attaching the caps per xlator.. in the non-xml output too we should think about doing similar
10:48 bala deepakcs: i think, with/without --xml option is handled two diff way in cli code
10:48 deepakcs mohankumar, new xlators might add new functionality/capabilities.. so it's better to put what cap is coming from what xlator in the non-xml output too
10:49 deepakcs bala, i understand that.. but it's better for the two to be in sync
10:49 mohankumar deepakcs: then it's not per xlator, it should be per brick
10:49 mohankumar i.e. brick capabilities
10:49 bala deepakcs: yes. the output of both types needs to be cleaner
10:50 bala mohankumar: we can't mix up two types of bricks for a volume, right?
10:50 deepakcs mohankumar, ok.. then we need to decide how the xml tag hierarchy looks like and accordingly the non-xml should be similar.. so that users looking at non-xml can relate to the xml output
10:52 mohankumar bala: deepakcs: ok
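
Pulling the thread together, the per-brick shape being converged on looks roughly like this (a sketch of the proposal only, not a final schema; it follows bala's last suggestion with the DB/BD typo fixed):

    <capabilities>
      <xlator>
        <name>BD</name>
        <type>thin</type>
      </xlator>
    </capabilities>

with the plain `gluster volume info` output carrying an equivalent line such as "Capabilities: BD, thin", kept in sync with the xml form as deepakcs suggests.
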
11:15 hagarth joined #gluster-dev
11:19 yinyin_ joined #gluster-dev
11:20 aravindavk joined #gluster-dev
11:27 raghu joined #gluster-dev
11:57 yinyin_ joined #gluster-dev
12:17 shubhendu joined #gluster-dev
12:32 edward1 joined #gluster-dev
12:34 bala joined #gluster-dev
12:41 yinyin_ joined #gluster-dev
13:00 shubhendu joined #gluster-dev
13:02 lalatenduM joined #gluster-dev
13:19 hagarth joined #gluster-dev
14:07 puebele joined #gluster-dev
14:14 lpabon joined #gluster-dev
14:15 wushudoin joined #gluster-dev
14:38 lalatenduM joined #gluster-dev
14:54 hagarth joined #gluster-dev
15:03 xavih can anyone take a look at this bug (https://bugzilla.redhat.com/show_bug.cgi?id=961668) and this patch (http://review.gluster.org/#/c/5003/), please ?
15:03 glusterbot Bug 961668: unspecified, unspecified, ---, amarts, NEW , gfid links inside .glusterfs are not recreated when missing, even after a heal
15:03 xavih thank you
15:05 hagarth xavih: will do.
15:28 lalatenduM joined #gluster-dev
15:52 portante|ltp joined #gluster-dev
15:56 xavih hagarth: thanks
16:43 bulde joined #gluster-dev
17:32 avati xavih, ping?
19:02 lpabon joined #gluster-dev
21:09 badone joined #gluster-dev
21:34 xavih avati: pong
21:56 avati xavih, do you really want the gfid link recreation check every time?
21:56 avati checking it for the first time should be sufficient
21:57 avati (to guard against deletion in crash recovery)
21:57 Supermathie avati: perhaps only check for missing gfid file if #hardlinks != 0 (mod 2)?
21:58 avati mod2?
21:58 avati why?
21:58 avati < 2 is a good check
21:59 Supermathie I was initially thinking if the file itself is hardlinked, you'd have 4 total links, then realized hmm nope, there'd only be one gfid file.
22:00 Supermathie < 2, heal definitely. But you can easily have a case of a file with 2 (non-gfid) hardlinks...
22:00 avati yeah, check for that only first time the inode is getting looked up
22:01 avati no point checking it on every access
22:01 Supermathie yeah
22:02 xavih I thought about that, but it is really an lstat call that will most probably be cached by the kernel
22:04 Supermathie Hey, does this crash in fuse client (http://fpaste.org/12436/36865539/) look familiar? It's based on my 3.3.1+patches code... wondering if it's a known issue or if I should chase it down and try it on stock
22:09 xavih avati: trying to heal a damaged gluster of one of our customers I saw lost gfids several times, so it's possible in some way to lose those gfids while gluster is running
22:09 xavih avati: if they are only tested on first lookup, they won't be recreated until gluster is restarted or the inode goes out of the cache
22:10 avati xavih, if gfids are lost while gluster was running, that's a different (probably more dangerous) bug which will very likely get masked behind this symptom-fix
22:11 xavih yes, but I haven't been able to identify the problem nor reproduce it, so this might mitigate the problem, at least for now
22:11 avati xavih, wouldn't an st_nlink >= 2 check be acceptable for a run-time check (not just a first-time check..)? that would cover most regular files (which are not hardlinked)
22:12 xavih anyway, I'll reexamine the path tomorrow
22:12 avati stat'ing the gfid link every time is a huge perf hit
22:12 xavih avati: yes, that check would probably be better
22:13 avati besides, the gfid link is really an "insurance policy" for the gfid based NFS filehandles, it's not a big deal if it is missing for some time under dire circumstances
22:15 xavih well, the bug 859581 happens when the gfid is missing
22:15 glusterbot Bug http://goo.gl/60bn6 high, unspecified, ---, vsomyaju, ASSIGNED , self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
22:17 * avati checks bug report
22:17 xavih this is caused by another problem in storage/posix, but if gfid were not missing in the first place it wouldn't happen
22:18 avati yeah
22:18 xavih I have a patch ready for this bug
22:20 xavih anyway, I'll reconsider how to check for missing gfid without checking them every time. Maybe the st_nlink check could be a good compromise
22:20 avati i'm still not sure what caused the gfid link to go missing for regular files.. but for that let's just limit to st_nlink check
22:21 avati yeah
22:21 xavih avati: thanks :)
22:21 xavih I'll modify it tomorrow
22:21 xavih now it's time to go to sleep :p
22:22 avati good night :)
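
A rough C sketch of the compromise reached above (names are invented and this is not the actual patch): instead of lstat'ing the gfid path on every access, recreate the .glusterfs hard link only when a regular file's link count says it is missing. As Supermathie notes, a file with real hardlinks can still mask a missing gfid link, so this is a heuristic, not a guarantee:

    #include <sys/stat.h>
    #include <unistd.h>
    #include <errno.h>

    static int
    gfid_link_heal (const char *real_path, const char *gfid_path)
    {
            struct stat buf;

            if (lstat (real_path, &buf) != 0)
                    return -errno;

            /* a regular file should have at least two links: the file
             * itself and its hard link under .glusterfs/xx/yy/<gfid>;
             * st_nlink >= 2 means the gfid link is most likely present
             * (or the file is genuinely hardlinked, the one case this
             * heuristic cannot distinguish) */
            if (!S_ISREG (buf.st_mode) || buf.st_nlink >= 2)
                    return 0;

            if (link (real_path, gfid_path) != 0 && errno != EEXIST)
                    return -errno;

            return 0;
    }
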
22:47 yinyin joined #gluster-dev
23:34 avati bfoster, ping
23:34 avati foster, ping
23:36 foster avati: pong
23:37 avati foster, hey
23:37 foster heya
23:37 avati foster, wanted to ask about the write-behind hang fix.. was the aio test setting O_SYNC or O_DSYNC in open()?
23:38 foster lemme check..
23:38 avati if not, the code path you explained should not be executed
23:39 avati (or if you explictly set "option strict-O_DIRECT on" in write-behind config)
23:40 foster I believe it was just O_DIRECT
23:41 avati hmm
23:41 avati strange
23:42 foster yeah I was using aio-stress ... -O
23:42 foster no -S (for O_SYNC)
23:43 lpabon joined #gluster-dev
23:43 avati ahhh, i see what's happening
23:43 avati downstream 7c4f4e665e1a9be780e77a15d734674b584de3f8 is missing upstream
23:45 foster oh, so write behind should be enabled in this case?
23:45 foster since it's not sync
23:45 avati O_DIRECT is a parameter for OS page-cache exclusion
23:45 avati *only for
23:45 avati it is fine to write-behind O_DIRECT (non O_SYNC writes)
23:46 avati however the fix in 7c4f4e665e1a9be780e77a15d734674b584de3f8 is also not "proper"
23:46 avati we should skip setting O_DIRECT flag in fuse_write()
23:46 foster yeah, makes sense. I just saw write-behind being disabled in that path and didn't think much of it, assumed it was expected
23:47 avati the idea is to retain synchronous behavior in case of GFAPI access
23:47 avati in which case only, O_DIRECT should be set in writev() flags
23:47 foster are you saying we should filter O_DIRECT in fuse_write()?
23:47 avati currently fuse is setting O_DIRECT in writev() as well
23:47 avati foster, yes.. that's correct
23:48 avati for FUSE based access, flags established in open() are final
23:48 avati the per-FOP writev flag is really for overriding or anonymous FD
23:49 avati wb_writev() is doing the right thing.. it checks o_direct (lowercase) against fd->flags, and O_DIRECT (caps) against @flags
23:50 avati your patch is still proper and required..
23:50 avati (for handling O_SYNC case)
23:50 avati i was just confused why AIO stress hit this code path
23:50 avati mystery solved
23:50 foster heh
23:51 foster I don't really follow the reason to filter O_DIRECT though
23:51 avati to filter O_DIRECT in case of FUSE?
23:51 foster yeah, what's the scenario that tries to avoid?
23:52 avati O_DIRECT is really for avoiding caching (and performing extra mem copies)
23:53 avati write-behind does not do that, fuse has already (inevitably) copied the data into our iobuf
23:53 avati we are just writing it in the background without making the app syscall wait
23:53 avati just like how a drive cache would behave
23:54 avati O_DIRECT != durability guarantee (however, O_SYNC or O_DSYNC is)
23:54 foster yeah, that makes sense.. but why filter it out of flags?
23:55 foster for clarity? or another problem?
23:55 avati so that gfapi has a way to express enforcing of stricter behavior
23:55 avati in case of gfapi we want to _not_ write-behind when the app asks us to
23:55 avati (because now we would need to mem-copy to do a safe write-behind)
23:56 foster ok, so we (gluster) overload the meaning of O_DIRECT in a sense in write() ?
23:56 avati right..
23:56 foster i.e., if we get an O_DIRECT in write() flags it means the caller is explicitly saying don't copy this
23:56 foster whereas fuse already has
23:56 avati we want O_DIRECT in write() to mean true and hard O_DIRECT
23:57 avati correct
23:57 foster gotcha, ok.. makes sense now :)
23:58 foster so then is that downstream patch wrong?
23:58 avati it is wrong in a "larger sense" (when seen in consideration of gfapi)
23:59 avati but rhs-2.0 did not have gfapi.. so it was good enough
23:59 avati though strictly speaking, a wrong fix
23:59 foster right, wrong in the sense of our latest context
23:59 foster hmm, and why would that be downstream and not upstream? :/
23:59 avati i'm confused too
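
To pin down the semantics settled on in this exchange, a hypothetical C sketch (invented names, not the actual fuse or write-behind code): FUSE drops a per-write O_DIRECT because open() flags are final for FUSE access, while write-behind goes synchronous for O_SYNC/O_DSYNC fds, for an explicit per-write O_DIRECT (the gfapi case), or for O_DIRECT fds when "option strict-O_DIRECT on" is set:

    #include <fcntl.h>
    #include <stdbool.h>

    /* FUSE side: flags established at open() are authoritative, so strip
     * any per-write O_DIRECT; only gfapi-style callers pass it through
     * deliberately to demand a true, hard O_DIRECT write */
    static int
    filter_fuse_write_flags (int fop_flags)
    {
            return fop_flags & ~O_DIRECT;
    }

    /* write-behind side: decide whether a write must bypass write-behind */
    static bool
    write_must_be_sync (int fd_flags, int write_flags, bool strict_o_direct)
    {
            if (fd_flags & (O_SYNC | O_DSYNC))  /* durability asked at open() */
                    return true;
            if (write_flags & O_DIRECT)         /* explicit per-write request */
                    return true;
            if (strict_o_direct && (fd_flags & O_DIRECT))
                    return true;                /* "option strict-O_DIRECT on" */
            return false;
    }
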
