
IRC log for #gluster-dev, 2016-09-08


All times shown according to UTC.

Time Nick Message
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:24 pranithk1 joined #gluster-dev
02:47 rastar joined #gluster-dev
03:03 nbalacha joined #gluster-dev
03:08 aspandey joined #gluster-dev
03:20 magrawal joined #gluster-dev
03:30 skoduri joined #gluster-dev
03:36 nbalacha joined #gluster-dev
03:44 atinm joined #gluster-dev
04:10 riyas joined #gluster-dev
04:16 sanoj joined #gluster-dev
04:18 itisravi joined #gluster-dev
04:24 hgichon joined #gluster-dev
04:27 k4n0 joined #gluster-dev
04:39 poornima joined #gluster-dev
04:42 pranithk1 joined #gluster-dev
04:47 shubhendu joined #gluster-dev
04:56 jiffin joined #gluster-dev
04:57 kotreshhr joined #gluster-dev
05:10 aravindavk joined #gluster-dev
05:10 mchangir joined #gluster-dev
05:11 nbalacha nigelb, I see a netbsd failure for one of my patches
05:12 nbalacha [04:47:13] Running tests in file ./tests/basic/quota-rename.t
05:12 nbalacha 21:47:58 touch: /mnt/glusterfs/0/dir/dir1/f4: Socket is not connected
05:12 nbalacha any ideas?
05:15 gem joined #gluster-dev
05:20 karthik_ joined #gluster-dev
05:22 aspandey joined #gluster-dev
05:26 asengupt joined #gluster-dev
05:36 Saravanakmr joined #gluster-dev
05:45 ppai joined #gluster-dev
05:46 ankitraj joined #gluster-dev
05:48 ndarshan joined #gluster-dev
05:56 hgowtham joined #gluster-dev
05:59 prasanth joined #gluster-dev
06:10 Bhaskarakiran joined #gluster-dev
06:13 spalai joined #gluster-dev
06:13 rafi joined #gluster-dev
06:15 suliba joined #gluster-dev
06:18 anoopcs nigelb, Can you please tell me why https://build.gluster.org/job/smoke/30478/console returned failure?
06:22 devyani7 joined #gluster-dev
06:29 itisravi joined #gluster-dev
06:30 aravindavk_ joined #gluster-dev
06:31 jiffin1 joined #gluster-dev
06:31 nigelb anoopcs: ugh, it's a bug I fixed, but probably haven't deployed.
06:31 nigelb anoopcs: I'll push a fix today, in the meanwhile, retrigger.
06:32 nigelb nbalacha: This is how that test usually fails - https://build.gluster.org/job/netbsd7-regression/517/consoleText
06:32 nigelb is that the same as yours?
06:33 nbalacha nigelb, looks like it
06:33 kdhananjay joined #gluster-dev
06:34 anoopcs nigelb, Ok.
06:34 nigelb nbalacha: so looks like something is exiting uncleanly somewhere :(
06:34 aspandey_ joined #gluster-dev
06:41 msvbhat joined #gluster-dev
06:45 ppai joined #gluster-dev
06:47 atinm joined #gluster-dev
06:52 nbalacha joined #gluster-dev
06:53 rafi joined #gluster-dev
06:57 kshlm joined #gluster-dev
07:04 jiffin1 joined #gluster-dev
07:07 aravindavk joined #gluster-dev
07:07 itisravi joined #gluster-dev
07:11 jiffin joined #gluster-dev
07:16 nishanth joined #gluster-dev
07:17 devyani7 joined #gluster-dev
07:20 ppai joined #gluster-dev
07:24 atinm joined #gluster-dev
07:32 prasanth joined #gluster-dev
07:41 prasanth joined #gluster-dev
07:55 gem joined #gluster-dev
08:02 k4n0_away joined #gluster-dev
08:25 rastar joined #gluster-dev
08:30 devyani7 joined #gluster-dev
08:38 misc I like it when bugs get resolved before I even wake up and get to the office
08:38 nbalacha joined #gluster-dev
08:47 karthik_ joined #gluster-dev
08:51 ashiq joined #gluster-dev
09:15 asengupt joined #gluster-dev
09:26 aspandey_ xavih: Hi
09:27 xavih aspandey_: hi
09:28 aspandey_ xavih: I want to discuss the comment http://review.gluster.org/#/c/13733/15/xlators/cluster/ec/src/ec-common.c@1081
09:29 aspandey_ xavih, while I agree with your first statement, we will have an issue with your second statement in this comment..
09:29 atinm joined #gluster-dev
09:30 aspandey_ xavih: So we do not send the xattr query for version and size for entry operations like create because we want to send it on all the bricks even if the version on all the bricks is not the same...right?
09:32 aspandey_ xavih: last time, when I asked why we do not send the query for create and the like, that is what you said - if I understood you correctly at that time. That makes sense too..
09:35 xavih aspandey_: one moment please
09:35 aspandey_ xavih: sure
09:38 aravindavk joined #gluster-dev
09:45 asengupt joined #gluster-dev
09:45 hchiramm joined #gluster-dev
09:55 nigelb misc: though, it's not fully resolved.
09:55 misc nigelb: mhh, ie ?
09:55 nigelb It isn't updating on the web head. I'm not sure how that's supposed to work. So I didn't look into that bit.
09:56 nigelb The "bug" that's filed is resolved, of course.
09:56 nigelb This is a new one :)
09:56 rastar joined #gluster-dev
09:57 misc ok, I can take a look
09:58 misc $ /usr/local/bin/build_deploy.py /srv/builder/gluster_web.yml
09:58 misc error: unable to resolve reference refs/remotes/origin/events/2016/osbconf: Not a directory
09:59 misc what I like with middleman is that each time I fix an error
09:59 misc a new one keeps popping up, more puzzling
10:01 misc so the repo is corrupted
10:02 misc like someone removed a branch on the repo and this triggered the error :/
10:02 atinm joined #gluster-dev
10:03 nigelb where are you running that command? supercolony?
10:04 misc webbuilder.gluster.org
10:05 nigelb aha
10:06 misc the planet setup is documented
10:06 misc but not yet the main website
10:14 xavih aspandey_: sorry for the delay. You are right. A create operation should not take into account the version info to allow the creation of the file in all bricks
10:15 xavih aspandey_: however this exposes a problem. Even if we don't request that information on the first xattrop, it's possible that another fop (like getxattr) queries that information later. At this point, any new create fop will have that information
10:16 xavih aspandey_: Querying version and size info later doesn't solve the problem. We'll have to see how we can solve this.
10:16 aspandey_ xavih: yes :-)
10:17 nigelb misc: if you do `git pull --rebase origin master` that should fix it I think?
10:17 xavih aspandey_: regarding the patch, I think it would be better to do a single xattrop with all information
10:18 nigelb so you don't really mind about other branches.
10:18 * nigelb could be wrong
10:18 misc nigelb: I already fixed it with git prune origin, it should deploy
10:18 misc I am just reporting the issue upstream
10:18 nigelb ah cool
10:18 misc now, I wonder why the branch did disappear
10:19 aspandey_ xavih: You mean to say that irrespective of whether we want to set only the dirty flag, we should send xattrs for version and size and dirty, all of them...
10:19 nigelb it's still there on the github repo
10:19 xavih aspandey_: I also see now that we can have ctx->have_info set in ec_get_size_version() if a previous read operation has been executed, so some of my comments are wrong
10:20 aspandey_ xavih: exactly... I also wanted to point out this..
10:20 xavih aspandey_: yes. This saves an additional xattrop call later, and the added overhead is negligible
10:20 aspandey_ xavih: ok . Then this will create one more issue..
10:20 magrawal ndevos, ping
10:20 glusterbot magrawal: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:21 xavih aspandey_: we have the problem of ignoring version for create/mkdir/... fops
10:22 aspandey_ xavih: No.. There is other problem..
10:23 aspandey_ xavih: wait. Let me see it again...
10:25 aspandey_ xavih: Ok the problem I see is that in ec_prepare_update_cbk, when we do ctx->post_version += ctx->pre_version
10:28 aspandey_ xavih: let's say getxattr gives us version and size, and next we want to do an update fop and request the version even though we already have it; then in this function it will add it again, and that could be problematic..
10:28 aspandey_ xavih: It was creating an issue in my earlier patch.
10:31 aspandey_ xavih: I think I have a partial solution for this. I will modify it and send it...
10:31 xavih aspandey_: no, no. If we really need to call xattrop twice, the second one must not query version and size information
10:32 xavih aspandey_: the checks for have_info in this case are correct
10:32 aspandey_ xavih: correct..
10:33 aspandey_ xavih: Let me modify it and incorporate your other comments...Then you can again review it  :-)
10:34 xavih aspandey_: thanks :)
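
A simplified sketch of the guard aspandey_ and xavih converge on above; the struct and function names are hypothetical stand-ins for the real ec-common.c code, and only the have_info / pre_version / post_version fields come from the discussion itself:

    #include <stdint.h>

    /* Hypothetical, stripped-down stand-in for the EC inode context. */
    struct ec_ctx_sketch {
        int      have_info;     /* version/size already cached for this inode */
        uint64_t pre_version;   /* version returned by the last xattrop query */
        uint64_t post_version;  /* accumulated version used for the final update */
    };

    /* Query version/size only when not already cached; a second xattrop that
     * queried again would run post_version += pre_version once more and
     * double the delta, which is the problem described above. */
    static void prepare_update_sketch(struct ec_ctx_sketch *ctx,
                                      uint64_t version_from_bricks,
                                      int want_version_and_size)
    {
        if (want_version_and_size && !ctx->have_info) {
            ctx->pre_version   = version_from_bricks;
            ctx->post_version += ctx->pre_version;
            ctx->have_info     = 1;
        }
    }
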
10:34 spalai joined #gluster-dev
10:37 rastar joined #gluster-dev
10:43 nbalacha nigelb, is there an issue with the netbsd regressions for master?
10:48 magrawal jiffin, ping
10:48 glusterbot magrawal: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:51 ndarshan joined #gluster-dev
10:51 jiffin magrawal: pong
10:54 spalai joined #gluster-dev
11:03 aravindavk joined #gluster-dev
11:03 msvbhat joined #gluster-dev
11:04 misc so, we need an xfs partition for the gluster tests
11:04 misc does someone have more information, like "where should it be" ?
11:05 misc (like /d, I think ?)
11:06 rraja joined #gluster-dev
11:07 aravindavk_ joined #gluster-dev
11:10 aravindavk_ ndevos: hi,  please review http://review.gluster.org/#/c/15367/  I addressed your comments
11:16 ndarshan joined #gluster-dev
11:24 rastar joined #gluster-dev
11:30 kkeithley misc: need more context.  What system(s)? need an xfs volume? For what kind of gluster test?
11:32 aravindavk joined #gluster-dev
11:33 misc kkeithley: the regression tests require an xfs partition
11:34 kdhananjay joined #gluster-dev
11:35 misc kkeithley: and so the 2 new builders I installed do not have it yet, and so I just wanted to verify that I am not wrong about my assumptions
11:36 misc (I already have the code to be wrong)
11:39 kkeithley don't know. Can't we look at existing vms to see what they have?
11:39 misc I did
11:39 misc that's how I figured that it should be in /d
11:39 misc but I do not like how everything seemed to have been lumped on that specific mount point
11:42 kkeithley ???  I expect that is/was true about the original build.gluster.org box. Was it true on all the rackspace vms too?
11:42 misc yep
11:42 misc with some links to not break compat
11:42 devyani7 joined #gluster-dev
11:43 misc that's not a good day to break stuff however
11:44 kkeithley right, two days before ndevos releases 3.8.4
11:44 kkeithley not a good day to break things
11:45 misc so I will just add the 2 new builders, we will likely activate them on Monday and debug in the meantime
11:54 misc nigelb: so builder1.rht.gluster.org should be officially ready to be tested
11:55 kkeithley wow, git clone is really slow today
11:56 misc nigelb: I am going to get pizza and will come back in 1h/1h30, and you have root on the builder
12:03 aspandey_ joined #gluster-dev
12:14 nigelb misc: cheers, thanks.
12:20 dlambrig_ joined #gluster-dev
12:21 poornima joined #gluster-dev
12:31 EinstCrazy joined #gluster-dev
12:32 shyam joined #gluster-dev
12:36 ira joined #gluster-dev
12:37 k4n0 joined #gluster-dev
12:43 lpabon joined #gluster-dev
12:55 kdhananjay joined #gluster-dev
13:14 jdarcy joined #gluster-dev
13:14 jdarcy Anyone else having trouble reaching machines in RH Westford?
13:18 * ndevos doesnt use anything there
13:22 mchangir joined #gluster-dev
13:31 spalai left #gluster-dev
13:35 nbalacha joined #gluster-dev
13:46 gem joined #gluster-dev
13:49 shaunm joined #gluster-dev
13:50 dlambrig_ bagl machines in RH Westford look ok, I am having trouble accessing *.gdev.lab.eng.bos.redhat.com
13:52 jdarcy Just found the outage-list email.  Some switches went dark, but not all.
13:55 kdhananjay joined #gluster-dev
14:04 hagarth joined #gluster-dev
14:12 baojg joined #gluster-dev
14:12 rafi ndevos++ thanks for the reviews
14:13 glusterbot rafi: ndevos's karma is now 309
14:13 kotreshhr left #gluster-dev
14:13 ndevos rafi: np, trying to get some long pending reviews done :)
14:15 shubhendu joined #gluster-dev
14:15 rafi ndevos: due to the long list of pending review requests, it has been difficult to get reviews for patches from developers ;)
14:17 ndevos rafi: I find myself spending *much* more time on reviewing than coding... and there is no way to keep up with so many new patches :-/
14:17 shyam joined #gluster-dev
14:17 ndevos maybe we should have some days where developers are not allowed to post patches, and they should only do reviews instead
14:18 misc +1
14:18 misc or write docs
14:18 misc what about a crypto currency with the proof of work being "having done a review"
14:18 rafi ndevos: +1
14:19 * rafi also prefers rewards for those who do reviews :D
14:19 misc the risk is then that people do bad reviews to get the reward
14:19 EinstCrazy joined #gluster-dev
14:20 rafi misc: possible
14:22 rafi I think the lack of reviews causes new developers to lose interest :(
14:22 rafi especially if the patch is not in urgent state
14:23 misc yep
14:23 misc but writing code is more fun
14:23 ndevos one of the most important things for maintainers of components is responsiveness; I wonder if we can measure that somehow
14:24 ankitraj joined #gluster-dev
14:27 misc mhh, the time to first answer ?
14:27 misc or the age of a patch waiting on maintainer ?
14:30 ndevos more like the 2nd, especially for new or updated patches
14:36 shyam joined #gluster-dev
14:50 jdarcy joined #gluster-dev
15:10 kdhananjay joined #gluster-dev
15:26 mchangir joined #gluster-dev
15:28 jiffin joined #gluster-dev
15:31 ChrisHolcombe joined #gluster-dev
15:36 kdhananjay joined #gluster-dev
15:44 gem joined #gluster-dev
15:51 jiffin joined #gluster-dev
16:00 aravindavk joined #gluster-dev
16:10 hagarth joined #gluster-dev
16:15 ndevos shyam: do you know how libgfapi can get triggered to cancel a glfs_posix_lock() request?
16:15 ndevos shyam: or, care to join #ganesha?
16:16 ffilzwin joined #gluster-dev
16:17 ndevos shyam: ganesha would need a way to cancel blocked lock requests, I dont know if we have something for that already?
16:17 ndevos shyam: some more details about the requirement are in https://sourceforge.net/p/nfs-ganesha/mailman/message/35354585/
16:18 ndevos shyam: and ffilzwin is on the US west-coast, he'll be able to explain more details :)
16:18 ndevos ffilzwin: shyam is in Westford, might be lunch time over there
16:19 ffilzwin shyam, ndevos suggested I ask you... the libgfapi function glfs_posix_lock, I have some questions:
16:19 ffilzwin 1. does it support the F_SETLKW blocking lock request
16:19 ffilzwin 2. is there a way to cancel a blocking lock request (for example, sending the thread a signal?)
16:20 ndevos oh, pranith might get online later too, he's in the US west-coast this week and may have ideas
16:20 ndevos I'm pretty sure we support F_SETLKW
16:21 ndevos I do not think we have a signal handler for libgfapi.so, and I expect other users (QEMU) would not want us to have it by default
16:22 ffilzwin ndevos, well, Ganesha would have the signal handler...
16:22 ffilzwin what I did for multilock was have a signal handler for SIGIO that did nothing, the sole purpose being able to fire the signal at a specific blocking lock thread
16:23 ffilzwin signal interrupts an fcntl F_SETLKW (thus causing an EINTR return)
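
A minimal sketch of the mechanism ffilzwin describes, against a plain kernel fcntl() lock (not libgfapi), assuming another process already holds a conflicting lock on /tmp/lockfile: a do-nothing SIGIO handler installed without SA_RESTART lets pthread_kill() aimed at the blocked thread make F_SETLKW fail with EINTR.

    #include <errno.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Do-nothing handler: its only purpose is to interrupt the blocked fcntl(). */
    static void sigio_noop(int sig) { (void)sig; }

    static void *lock_thread(void *arg)
    {
        int fd = *(int *)arg;
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

        /* Blocks until the lock is granted or a signal interrupts the call. */
        if (fcntl(fd, F_SETLKW, &fl) == -1 && errno == EINTR)
            fprintf(stderr, "blocking lock request cancelled (EINTR)\n");
        return NULL;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));      /* sa_flags = 0: no SA_RESTART */
        sa.sa_handler = sigio_noop;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGIO, &sa, NULL);

        int fd = open("/tmp/lockfile", O_RDWR | O_CREAT, 0644);
        pthread_t tid;
        pthread_create(&tid, NULL, lock_thread, &fd);

        sleep(1);                  /* give the thread time to block on the lock */
        pthread_kill(tid, SIGIO);  /* "cancel" the pending blocking lock */
        pthread_join(tid, NULL);
        close(fd);
        return 0;
    }
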
16:23 ndevos ffilzwin: right, but that thread needs to handle the signal, and somehow cancel the request through Gluster
16:24 ffilzwin I can use ml_cephfs_client to check libcephfs...
16:26 ffilzwin I signal the thread that is blocked on the lock request, but the question is if the library implementation of blocking locks for gluster (or ceph) results in a system call that is blocked and that can be interrupted by a signal...
16:27 ndevos shyam probably is in a better position to answer this, I thought he did more in the locking code than me
16:27 * shyam reading the IRC logs now...
16:27 ndevos its not a systemcall, all is in userspace
16:29 ffilzwin ndevos, yea, but in order for the thread to block, it either must busy wait (which I'm assuming it does not) or it must make some kind of system call that blocks...
16:30 ffilzwin unfortunately man pthread_cond_wait looks like it's ambiguous about whether a signal interrupts it or not:
16:30 ffilzwin If a signal is delivered to a thread waiting for a condition variable, upon return from the signal handler the thread resumes waiting for the condition variable as if it was not interrupted, or it shall return zero due to spurious wakeup.
16:31 ndevos I think we use pthread_mutex's and similar, but not sure... there is syncop_lk() that handles it and forwards it to the core gluster library
16:31 ffilzwin if it's not interruptible, we would need to add a mechanism to call to cancel...
16:32 glustin joined #gluster-dev
16:32 gem joined #gluster-dev
16:32 ndevos we probably need to add something, at the very least a little bit of documentation on how to abort a glfs_posix_lock() request
16:33 shyam So, with Gluster the call that blocks would be the syncop, the request is sent async to the bricks etc.
16:34 * shyam checks the status of a few commits to understand if some changes to locking in gfapi went into master branches
16:38 ankitraj joined #gluster-dev
16:38 ffilzwin syncop, is that a system call?
16:38 ndevos syncop is a framework in Gluster that can be used to do network procedures
16:38 aspandey joined #gluster-dev
16:39 shyam ok, so the gfapi code should be waiting on pthread_cond_wait after forwarding the lock request, specifically glfs_posix_lock would wait on the said system call
16:40 shyam ndevos: cross check, the SYNCOP __yield will not get a synctask from the current thread, as this is an application thread, hence it would not do a synctask_yield but rather wait on the cond as above
16:41 ffilzwin shyam, ok, so a signal might not do it, we would need an explicit cancel api...
16:41 ffilzwin ndevos, we may have to implement blocking locks for FSAL_GLUSTER by polling, until we can add a better interface...
16:42 shyam We possibly can do better requesting async blocking locks
16:42 shyam and cancel around the same (possibly)
16:42 ffilzwin shyam, yep...
16:43 ffilzwin yea, we would need to be able to fire off an async blocking lock request, and also have a way to cancel it
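
For the gfapi side, where the waiter sits in pthread_cond_wait rather than in a syscall, an explicit cancel entry point would look roughly like the sketch below; all names here are hypothetical, not existing libgfapi API, and a real implementation would also have to unwind a lock that the bricks grant after the cancellation races in.

    #include <pthread.h>
    #include <stdbool.h>

    /* Hypothetical state attached to one pending blocking-lock request. */
    struct lock_waiter {
        pthread_mutex_t mutex;
        pthread_cond_t  cond;
        bool            granted;    /* set by the callback when the brick grants the lock */
        bool            cancelled;  /* set by lock_cancel() */
    };

    /* Wait until the lock is granted or the request is cancelled.
     * Returns 0 when granted, -1 (mapped to e.g. EINTR/ECANCELED) when cancelled. */
    int lock_wait(struct lock_waiter *w)
    {
        int ret;
        pthread_mutex_lock(&w->mutex);
        while (!w->granted && !w->cancelled)
            pthread_cond_wait(&w->cond, &w->mutex);
        ret = w->granted ? 0 : -1;
        pthread_mutex_unlock(&w->mutex);
        return ret;
    }

    /* The cancel entry point Ganesha would call; it wakes the waiter without a signal. */
    void lock_cancel(struct lock_waiter *w)
    {
        pthread_mutex_lock(&w->mutex);
        w->cancelled = true;
        pthread_cond_broadcast(&w->cond);
        pthread_mutex_unlock(&w->mutex);
    }
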
16:43 * shyam rereading Ffilz's mail to ganesha-devel
16:45 shyam ffilzwin: So "I realize blocking locks (used by NFS v3 NLM clients) are broken for FSALs that don't support async blocking locks. " This is *new* in Ganesha or preexisting?
16:45 ndevos ffilzwin: we're pretty flexible in what we can provide in libgfapi, we just need to decide what interface would work best
16:47 ffilzwin shyam, pre-existing...
16:48 ffilzwin ndevos, yea, lots of flexibility, problem is short runway...
16:49 shyam ffilzwin: Can you name a FSAL that supports this (i.e async blocking locks) behavior? (I have Ganesha code stashed away somewhere to look at more closely and understand the behavior)
16:49 ffilzwin you should easily be able to demonstrate: take a conflicting lock from outside Ganesha, and then request a blocking lock in Ganesha; the client will get ENOLCK and probably not like it...
16:49 ffilzwin FSAL_GPFS supports async blocking locks
16:51 ffilzwin it really amounts to being able to send a "fire and forget" lock request and then when the lock is granted, either have a direct call back to Ganesha, or fire an event (how does FSAL_GLUSTER do other upcall stuff?)
16:51 ffilzwin and then some way to cancel a pending request...
16:53 shyam Another way is to maintain a queue as mentioned, and request F_SETLK (!W) and keep requesting, till it is cancelled or acquired (i.e. polling queue as mentioned above)
16:53 ffilzwin oh, got another question... since you use the fcntl commands, do you duplicate the POSIX behavior where closing any file descriptor held by the process dumps all the locks that process holds on the file, even if there is another file descriptor still open? Or are your locks tied to a specific file descriptor? Or some other mechanism?
16:54 ffilzwin shyam, yea, the quick fix will have to be polling (I will add a support capability to Ganesha to indicate if blocking locks can be interrupted by a signal or not; if not, the blocking lock threads will skip those locks, leaving them to the polling thread)
16:54 ffilzwin basically the blocking lock threads are an optimization on polling...
16:56 nbalacha joined #gluster-dev
17:03 shyam FSAL_GLUSTER has a polling loop calling "are there any events" for the upcall part of the picture, see glusterfs_create_export->initiate_up_thread->GLUSTERFSAL_UP_Thread in FSAL_GLUSTER code
17:05 gem joined #gluster-dev
17:07 shyam ffilzwin: locks are held on the fd, and after the last ref on the fd is released, the locks are released as well. Now, the "another file descriptor" is a dupe'd descriptor or just another one?
17:08 ffilzwin another one; if the locks are associated with the fd that should be ok too...
17:09 ffilzwin I haven't looked at how FSAL_GLUSTER handles the support_ex...
17:10 rafi joined #gluster-dev
17:10 shyam Ummm... support_ex?
17:10 ffilzwin kernel POSIX locks have this annoying behavior:
17:10 msvbhat joined #gluster-dev
17:11 ffilzwin open a file into fd1, use fd1 to acquire a lock, open the same file into fd2, do stuff, close fd2, all locks (specifically the ones that had been acquired using fd1) go away
17:11 ffilzwin the nasty thing is the possibility some unexpected library function decides to open and close an fd on the file you have locks on...
17:12 ffilzwin support_ex is an FSAL API extension that allows for an open file descriptor for each lock owner/file pair (and each open owner/file pair)
17:13 ffilzwin with that, the file descriptor used to acquire locks is not closed until there are no locks remaining for that lock owner/file pair
17:14 ffilzwin and FSAL_VFS uses the new Open File Description locks which have two nice properties: 1. they don't have that stupid POSIX behavior, 2. each file description is treated as a lock owner
17:14 shyam ffilzwin: understood, thanks
17:15 ffilzwin (there is a one-to-one correspondence between file descriptors and file descriptions except when dupfd or fork is involved)
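
For reference, the Open File Description locks mentioned above (Linux 3.15+) are requested with F_OFD_SETLK / F_OFD_SETLKW and l_pid set to 0; a small sketch of the two properties ffilzwin lists:

    #define _GNU_SOURCE            /* for F_OFD_SETLK on glibc */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd1 = open("/tmp/lockfile", O_RDWR | O_CREAT, 0644);
        int fd2 = open("/tmp/lockfile", O_RDWR);   /* second open file description */

        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 1,
            .l_pid    = 0,         /* must be 0 for OFD locks */
        };

        /* fd1's open file description now owns the byte-range lock ... */
        fcntl(fd1, F_OFD_SETLK, &fl);

        /* ... and a conflicting OFD lock through fd2 fails (EAGAIN), because
         * each file description is a separate lock owner.  Closing fd2 also
         * does NOT drop fd1's lock, unlike classic POSIX fcntl locks. */
        if (fcntl(fd2, F_OFD_SETLK, &fl) == -1)
            perror("second file description is a different owner");

        close(fd2);
        close(fd1);
        return 0;
    }
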
17:15 * shyam is still twiddling his thumbs on the right way to cancel locks even if they are requested async
17:16 ffilzwin how does Gluster deal with the process that is using libgfapi exits while a blocking lock request is in progress?
17:17 ndevos ffilzwin: the client (gfapi) passes the request on to the server (bricks); when the tcp-connection dies (and does not reconnect) the locks are cleared after a timeout
17:18 shyam Locks are managed/remembered on the bricks (which are the server side components in Gluster); gfapi applications (or others) are client side initiators of requests. So when a gfapi application dies, the connections die, and cleanup is initiated for the various references held by the client, including the fd etc., and so cleanup is achieved. (do not ask me to track the code for all this 8-) )
17:18 ffilzwin ok, so that probably doesn't help too much...
17:18 hagarth joined #gluster-dev
17:18 shyam ndevos provides a more terse and correct answer :)
17:19 shyam By when do we need to solve this? We can get the locks owners to take a look, possibly such considerations are present in the code already
17:21 ffilzwin I think a full async solution will have to be for Ganesha 2.5 unless we could turn it around in a week or so...
17:21 ffilzwin the polling solution I can turn around in a day or two...
17:22 shyam Does the polling solution involve requesting F_SETLK and not F_SETLKW?
17:22 ffilzwin I already have a model of the thread block/cancel plus polling, I just need to port the code into Ganesha...
17:22 ffilzwin yes, polling will just be F_SETLK
17:22 shyam ok
17:23 shyam That works for now, I guess failed locks get to the tail of the queue, and there is no blocking hence no cancellations
17:23 ffilzwin right
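
A rough illustration of that interim polling approach on top of the real glfs_posix_lock() call; the helper name, cancellation flag and retry interval below are made up for the sketch, and the queue handling Ganesha would wrap around it is omitted.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdbool.h>
    #include <unistd.h>
    #include <glusterfs/api/glfs.h>   /* glfs_posix_lock() */

    /* Hypothetical helper: keep retrying a non-blocking (F_SETLK) lock request
     * until it is granted or the caller cancels it, instead of blocking in
     * F_SETLKW, which currently cannot be interrupted through gfapi. */
    static int poll_for_lock(glfs_fd_t *glfd, struct flock *fl,
                             volatile bool *cancelled, unsigned interval_us)
    {
        while (!*cancelled) {
            if (glfs_posix_lock(glfd, F_SETLK, fl) == 0)
                return 0;                      /* lock granted */
            if (errno != EAGAIN && errno != EACCES)
                return -1;                     /* real error, give up */
            usleep(interval_us);               /* still held elsewhere; retry later */
        }
        errno = EINTR;
        return -1;                             /* cancelled by the caller */
    }
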
17:25 shyam ok, ndevos ffilzwin I guess we need to take in this request, and find out if we can cleanly support async (blocking) lock requests, with cancellation, and hence move out of the interim polling solution. So file a bug? ndevos other ideas?
17:25 ndevos shyam: that would be the approach I'd take too
17:26 ndevos ffilzwin: we wont have it in a week, but can get it for nfs-ganesha-2.5
17:26 shyam Also, the various locking behaviours that we discussed are best reviewed once more, possibly a dev thread on that, to have more final answers? (things like fd association, etc.)  Unless ffilzwin feels the current responses are adequate
17:26 shyam (mail thread)
17:27 ffilzwin one thing we need is robust testing...
17:27 ndevos a mail thread would be good in any case, maybe the samba folks can use something like this too
17:27 ffilzwin for some reason, byte range locks don't seem to get much testing...
17:27 shyam yup for the async blocking locks, I was thinking more around the lock to fd association questions.
17:28 shyam ffilzwin: Sure, do you have bugs to serve as examples? We can take off from there...
17:28 shyam (for the testing part i.e)
17:28 ffilzwin shyam, can you clarify the association of locks with fd?
17:28 hchiramm joined #gluster-dev
17:28 shyam ffilzwin: I realize I should add who I am responding to :), well, the questions around what happens if I close an fd but there are other fds still open
17:29 ffilzwin I don't have an explicit bug... I was looking at the code because GPFS had an issue and realized we actually did not handle blocking locks very well if the FSAL didn't support async blocking locks...
17:30 shyam ffilzwin: Ok, got it
17:31 shyam ffilzwin: Are you filing the bug?
17:31 ffilzwin shyam, I can file the bug
17:31 shyam glusterbot: file a bug
17:32 ndevos shyam: thats only in #gluster :)
17:32 shyam ndevos: How does that prodding glusterbot work
17:32 shyam Ah!
17:32 ndevos https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=gfapi
17:33 shyam ffilzwin: ndevos: All right! we will watch for that, gotta take my eyes off IRC for some time now!
17:33 ffilzwin shyam, so back to fd association, is each fd treated as a separate lock owner? I.e. if a single Ganesha process has two fds open and requests conflicting locks, will the 2nd lock be denied?
17:33 shyam ffilzwin: At the outset the answer is yes, but I still need to be sure (hence the devel thread question if possible)
17:34 ndevos ffilzwin: we actually have a lk_owner value we can set, but it is relatively new and I am not sure FSAL_GLUSTER uses it
17:34 ndevos (and apart from that, its done in an ugly way)
17:35 ndevos ah, nope - src/FSAL/FSAL_GLUSTER/handle.c: /** @todo: setlkowner as well */
17:35 ffilzwin ndevos, what version do I open the bug against?
17:35 ndevos ffilzwin: start with 'mainline'
17:36 ffilzwin ndevos, ah, I hope separate fds are treated as separate owners then...
17:36 ndevos ffilzwin: I doubt that, but soumya would know
17:37 aravindavk joined #gluster-dev
17:38 ffilzwin if not, then range locks are not very useful yet...
17:40 ndevos yeah, unless clients use different ganesha servers
17:41 ndevos I think by default the lock-owner is set to the PID and some hostname/ip, but I'm not sure
17:41 ffilzwin Unfortunately cthon04 tests really don't help... hmm, but pynfs does have locking tests and yes, it has multiple owner test
17:41 ndevos oh, in that case we should be good
17:42 ndevos well, if we run those pynfs tests and not skip them
17:43 ffilzwin ndevos, There is no component named 'gfapi' in the 'GlusterFS' product.
17:43 ndevos ffilzwin: maybe libgfapi?
17:44 ndevos I just copy/pasted a URL I found in my history
17:44 ffilzwin yep: https://bugzilla.redhat.com/show_bug.cgi?id=1374462
17:44 glusterbot Bug 1374462: unspecified, unspecified, ---, bugs, NEW , Ganesha needs asynchronous blocking byte range locks that can be cancelled
17:45 ndevos thanks ffilzwin!
17:45 ffilzwin brief description, but hopefully enough to get design conversation started...
17:45 ndevos yeah, it'll be a reminder for us to follow-up on
17:46 ffilzwin ira, would Samba benefit from async blocking locks at all?
17:46 ndevos I expect that shyam will write down some idea(s) and post an email about it
17:46 ffilzwin btw, I'm not on any of the gluster mailing lists...
17:48 ndevos the discussion will happen on gluster-devel@gluster.org, send subject 'subscribe' to gluster-devel-request@gluster.org if you want to be on it
17:49 ffilzwin could you make sure the discussion is cc:ed to nfs-ganesha-devel?
17:50 ndevos we can try, shyam ^
17:51 ffilzwin well, I'll subscribe anyway
17:52 ffilzwin but in case others want in on the discussion...
17:53 ndevos I'll forward the discussion in case it doesnt get cc'd
18:45 pranithk1 joined #gluster-dev
20:00 post-factum pranithk1: hey
20:01 pranithk1 post-factum: hey!
20:01 post-factum pranithk1: saw my update?
20:01 pranithk1 post-factum: I saw your update, but I have to deliver few things today so didn't get a chance to dig through the data
20:01 post-factum pranithk1: that's okay, i just want to be sure you didn't miss it :)
20:01 pranithk1 post-factum: I think I may not get a chance to take a look today though
20:02 pranithk1 post-factum: yeah, will definitely take a look once
20:02 post-factum pranithk1: let me know anyway once your hands get to it
20:02 pranithk1 post-factum: definitely
20:34 shyam joined #gluster-dev
20:43 jiffin joined #gluster-dev
20:50 hagarth joined #gluster-dev
21:06 pranithk1 joined #gluster-dev
22:38 ira ffilzwin: I don't think so.  We just use the posix behaviors.
22:40 ffilzwin ira, ok, how does Samba handle blocking locks?
23:59 ira ffilzwin: Not sure?
