
IRC log for #gluster-dev, 2015-05-21


All times shown according to UTC.

Time Nick Message
00:24 ppai joined #gluster-dev
00:46 dlambrig joined #gluster-dev
01:32 badone_ joined #gluster-dev
02:32 kdhananjay joined #gluster-dev
03:07 Gaurav_ joined #gluster-dev
03:35 msvbhat joined #gluster-dev
03:36 pranithk joined #gluster-dev
03:37 shubhendu joined #gluster-dev
03:38 kdhananjay joined #gluster-dev
03:39 overclk joined #gluster-dev
03:40 itisravi joined #gluster-dev
03:40 hagarth joined #gluster-dev
03:41 nishanth joined #gluster-dev
03:54 atinmu joined #gluster-dev
04:15 kanagaraj joined #gluster-dev
04:18 sakshi joined #gluster-dev
04:25 rraja joined #gluster-dev
04:26 spandit joined #gluster-dev
04:26 ndarshan joined #gluster-dev
04:37 rjoseph joined #gluster-dev
04:41 krishnan_p joined #gluster-dev
04:41 kshlm joined #gluster-dev
04:42 ashiq joined #gluster-dev
04:46 ashish joined #gluster-dev
04:46 sakshi joined #gluster-dev
04:49 rafi joined #gluster-dev
04:55 kdhananjay spandit: Are you done using slave0/slave1 for your debugging?
04:59 spandit kdhananjay, I might need them for couple of hours.
04:59 spandit kdhananjay, you can take that post lunch
04:59 kdhananjay spandit: No problem. Could you let me know when you are done?
05:00 spandit kdhananjay, sure
05:00 kdhananjay spandit: Thanks!
05:02 hgowtham joined #gluster-dev
05:05 schandra joined #gluster-dev
05:12 gem joined #gluster-dev
05:19 pppp joined #gluster-dev
05:27 nishanth joined #gluster-dev
05:33 21WAB7UVY joined #gluster-dev
05:39 shubhendu joined #gluster-dev
05:40 raghu joined #gluster-dev
05:42 sakshi joined #gluster-dev
05:45 sakshi joined #gluster-dev
06:01 hagarth joined #gluster-dev
06:02 gem joined #gluster-dev
06:04 jiffin joined #gluster-dev
06:07 atinmu joined #gluster-dev
06:18 spalai joined #gluster-dev
06:22 spalai joined #gluster-dev
06:34 nkhare joined #gluster-dev
06:43 xrsanet joined #gluster-dev
06:43 ndevos joined #gluster-dev
06:46 owlbot joined #gluster-dev
06:46 schandra joined #gluster-dev
06:48 nishanth joined #gluster-dev
06:48 hagarth joined #gluster-dev
06:50 spandit joined #gluster-dev
06:55 kdhananjay joined #gluster-dev
07:01 prasanth_ joined #gluster-dev
07:02 itisravi hagarth: would you be sending a patch to add the -a flag to grep?
07:03 hagarth itisravi: will test with your patch and get back
07:04 itisravi hagarth: Adding the -a flag is needed because in the link that you shared http://lists.gnu.org/archive/html/bug-grep/2015-05/msg00000.html, the consensus seems to be that it is not a bug.
07:04 hagarth the workaround for the intended behavior is to use -a
07:04 itisravi hagarth: right
07:09 atinmu joined #gluster-dev
07:12 pranithk joined #gluster-dev
07:14 pranithk xavih: Did you get a chance to review http://review.gluster.org/10868? This is blocking my testing with the cooperative locks testing.
07:14 anekkunt joined #gluster-dev
07:15 rgustafs joined #gluster-dev
07:15 xavih pranithk: yes, I've already reviewed it. I was just writing a comment.
07:16 pranithk xavih: cool! I will address your comments and resume my testing
07:16 tigert okay
07:16 xavih pranithk: the question was if you also want to move the call to ec_pending_fops_completed() into ec_fop_data_release() ?
07:17 xavih pranithk: otherwise I'll +2 now :P
07:17 pranithk xavih: I wanted to move it, but didn't know the reason why you chose to put it in ec_manager.
07:17 pranithk xavih: any specific reason?
07:18 tigert are we ready to switch www.gluster.org to http://glusternew-tigert.rhcloud.com/ ?
07:18 * tigert thinks we switch and then fix stuff if we spot some issues
07:18 xavih pranithk: not really. I just thought it was a good place, but now I think it's better into ec_fop_data_release(). It's more logic.
07:18 tigert objections?
07:18 hagarth tigert: +1
07:19 pranithk xavih: I am moving it :-). Will resend the patch
07:19 xavih pranithk: good :)
07:20 Guest10011 joined #gluster-dev
07:23 kshlm joined #gluster-dev
07:31 kdhananjay joined #gluster-dev
07:34 hagarth pranithk: http://build.gluster.org/job/rackspace-regression-2GB-triggered/9373/consoleFull
07:37 spalai joined #gluster-dev
07:37 rafi1 joined #gluster-dev
07:38 rafi1 joined #gluster-dev
08:01 spandit joined #gluster-dev
08:06 hagarth joined #gluster-dev
08:11 tigert yay, pull requests to add feeds to planet
08:11 tigert \o/
08:11 * tigert is happy and merged
08:11 hchiramm_ joined #gluster-dev
08:20 hchiramm_ ndevos, there ?
08:23 hchiramm_ http://review.gluster.org/#/c/10120/ ndevos can u review this ?
08:24 ndevos hchiramm_: sure
08:24 hchiramm_ ndevos++ thanks
08:24 glusterbot hchiramm_: ndevos's karma is now 128
08:28 xavih hagarth: I think the problem on regression 9373 is caused by a bug Pranith already found. There's a patch (http://review.gluster.org/10868). However Pranith might better know if this is the same case.
08:37 atinmu joined #gluster-dev
08:38 gem joined #gluster-dev
08:47 Gaurav_ joined #gluster-dev
08:47 sbonazzo joined #gluster-dev
08:48 sbonazzo left #gluster-dev
08:53 nmathew joined #gluster-dev
08:53 nmathew hi guys
08:53 ashiq joined #gluster-dev
08:54 nmathew i have a basic problem with DHT bucket range calculation
08:54 hgowtham joined #gluster-dev
08:54 nmathew say I have volume created out of 6 bricks with replica 2
08:54 pranithk joined #gluster-dev
08:54 Guest10011 joined #gluster-dev
08:54 nmathew bricks are brick11 brick12 brick21 brick22 brick31 and brick32
08:55 nmathew how gluster calculates the DHT bucket range
09:08 nmathew left #gluster-dev
09:08 nmathew joined #gluster-dev
09:10 pranithk xavih: I re-uploaded the change for crash. I see that you reuploaded your change as well. I am almost done with cooperative lock testing, may need 1 hour more.. Will start reviewing once that is done
09:10 spalai joined #gluster-dev
09:13 xavih pranithk: I'm sorry, but I've just made the same change to your patch... I thought you were busy... sorry...
09:13 kshlm joined #gluster-dev
09:13 xavih pranithk: can you resend it ?
09:15 pranithk xavih: I was in a meeting when I was talking to you, then went to lunch :-(
09:15 ndevos nmathew: in your case you would have these replica pairs (brick11 brick12) + (brick21 brick22) + (brick31 brick32)
09:15 ndevos nmathew: the order in which you passed the bricks on the volume create command line is important
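[editor's note: as a rough illustration of the question above (this is not the actual DHT code, which stores per-directory layout ranges in the trusted.glusterfs.dht xattr on each brick), the default layout splits the 32-bit hash space into one contiguous, near-equal range per distribute subvolume — here, per replica pair:]

```python
def dht_ranges(num_subvols):
    """Split the 32-bit hash space into contiguous, near-equal ranges,
    one per distribute subvolume (one per replica pair in this example)."""
    space = 1 << 32
    chunk = space // num_subvols
    ranges = []
    start = 0
    for i in range(num_subvols):
        # The last subvolume absorbs the remainder so the space is covered.
        end = space - 1 if i == num_subvols - 1 else start + chunk - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# 6 bricks with replica 2 -> 3 distribute subvolumes (replica pairs)
pairs = ["brick11+brick12", "brick21+brick22", "brick31+brick32"]
for pair, (lo, hi) in zip(pairs, dht_ranges(3)):
    print(f"{pair}: 0x{lo:08x} - 0x{hi:08x}")
```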
09:16 pranithk xavih: I saw your changes. The patch I sent is similar as well. Just that I made it into different function and added some comment, thats all.
09:17 xavih pranithk: if you want to use your change, please, resubmit it. However I prefer to move the ec_handle_last_pending_fop_completion() call to the end of ec_fop_data_release(). This will delay the notification until all cleanup has been done
09:17 pranithk xavih: oh! why?
09:17 xavih pranithk: I've sent mine 4 minutes later than yours...
09:17 pranithk xavih: hmm.... it shouldn't matter right? cleanup is only cleaning up memory...
09:17 pranithk xavih: yeah I saw it :-)
09:18 pranithk xavih: ah! races with fini in future... got it
09:18 pranithk xavih: will move it to end
09:18 jiffin ndevos++ for nfsidmapping
09:18 glusterbot jiffin: ndevos's karma is now 129
09:19 xavih pranithk: it also calls ec_resume_parent(). It shouldn't matter, but once we notify, other things can get destroyed in parallel
09:19 ndevos nmathew: I'm not sure if we have a nice description on how its done, at least I can not find it quickly
09:20 xavih pranithk: yes, fini could interfere :)
09:20 nmathew ndevos: I have gone through the code to an extent
09:20 ndevos nmathew: you may want to send an email to the gluster-devel@gluster.org list, the DHT developers should be able to point you to some details
09:20 xavih pranithk: in my modification I even call ec_pending_fops_completed() after having freed fop itself
09:20 nmathew ok ndevos
09:21 pranithk xavih: But I can't do that as fop needs to be alive when the call is made....
09:21 ndevos nmathew: I dont see any DHT developers online now, otherwise I'd point you to them :)
09:21 pranithk xavih: that part shouldn't matter IMO
09:22 pranithk xavih: hmm.... may be it does, wait
09:22 ndevos jiffin++ nfsidmapping--
09:22 glusterbot ndevos: jiffin's karma is now 5
09:22 glusterbot ndevos: nfsidmapping's karma is now -1
09:23 pranithk ndevos: local pool destroy and mem_put (fop) race :-(, I will move it the way you did
09:24 nishanth joined #gluster-dev
09:24 ndevos pranithk: what, me?
09:24 xavih pranithk: yes
09:24 xavih ndevos: I think this was for me :P
09:24 ndevos hehe, I thought so too :D
09:25 Guest10011 joined #gluster-dev
09:25 pranithk ndevos: gah! sorry about that ;-)
09:26 nmathew ndevos: Thanks I will mail
09:26 ndevos pranithk: no problem, I still like you :)
09:26 pranithk ndevos: hehe
09:27 ndevos nmathew: yeah, I guess that is best, just include code snippets or references with your questions, that makes it easier to respond with the details you'd like to know
09:28 hagarth joined #gluster-dev
09:30 sakshi joined #gluster-dev
09:31 sakshi joined #gluster-dev
09:41 anrao joined #gluster-dev
09:44 pranithk xavih: I am running ec regressions, will send it post that...
09:45 xavih pranithk: good :)
09:48 Manikandan joined #gluster-dev
09:55 overclk hagarth, mind having a look at http://review.gluster.org/#/c/10763/ ?
10:02 * tigert thinks we have a new website now
10:02 tigert have a poke at it and tell me if you notice anything broken / missing / stupid
10:03 nmathew left #gluster-dev
10:07 schandra tigert, the url >  gluster.org/documentation
10:07 schandra says docs-redirect/ was not found
10:10 tigert this is exactly the thing I wanted to spot asap
10:11 tigert thanks
10:11 Manikandan joined #gluster-dev
10:11 * tigert investigates
10:11 pranithk raghu: johnny is that you?
10:13 tigert yeah
10:13 * tigert builds
10:13 shubhendu joined #gluster-dev
10:15 pranithk xavih: done
10:15 pranithk xavih: was debugging ec-12-4.t hang, which points to a memory corruption, could be the same reason...
10:18 raghu pranithk: yeah its me
10:20 xavih pranithk: strictly speaking, to avoid all possible races with fini (destroy of fop mem pool) I have been thinking that we need to check for list_empty *after* having freed fop. Otherwise another thread could have removed the fop from the list but not yet released it
10:27 anrao joined #gluster-dev
10:35 atalur joined #gluster-dev
10:36 atinmu joined #gluster-dev
10:42 kshlm joined #gluster-dev
10:45 anrao joined #gluster-dev
10:51 pranithk xavih: not true
10:52 pranithk xavih: we are notifying PARENT_DOWN only after mem_put(fop) so fini shouldn't come till then?
10:54 aravindavk joined #gluster-dev
10:55 xavih pranithk: suppose there are two threads finishing a fop. One of them will come first and remove itself from the pending_fops list (list won't be empty). Now it goes to sleep. The other thread will remove the other fop from the list. The list will be empty, so fop will be released and notify called. When the first thread resumes and tries to release its fop, the fop pool could be destroyed
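[editor's note: the window xavih describes can be forced deterministically with a toy Python model (all names illustrative, not the actual ec code): each thread removes its fop from the pending list, later releases it, and notifies if the list was empty at removal time — so "notify" can fire while another thread still holds an unreleased fop:]

```python
import threading

pending = {"fop1", "fop2"}
lock = threading.Lock()
events = []
t1_removed = threading.Event()
t2_done = threading.Event()

def finish(fop, sleep_before_release=None, signal_after_remove=None):
    with lock:
        pending.discard(fop)
        empty = not pending          # emptiness checked at removal time
    if signal_after_remove:
        signal_after_remove.set()
    if sleep_before_release:
        sleep_before_release.wait()  # the thread "goes to sleep" here
    events.append(f"release {fop}")  # mem_put(fop) in the real code
    if empty:
        events.append("notify")      # fop pool may now be destroyed...

t1 = threading.Thread(target=finish, args=("fop1", t2_done, t1_removed))
t2 = threading.Thread(target=lambda: (t1_removed.wait(),
                                      finish("fop2"), t2_done.set()))
t1.start(); t2.start()
t1.join(); t2.join()
print(events)  # "notify" lands before "release fop1": the race window
```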
11:01 badone_ joined #gluster-dev
11:05 pranithk xavih: ah! true
11:09 hagarth joined #gluster-dev
11:09 rgustafs joined #gluster-dev
11:09 pranithk xavih: I don't see any easy way to solve it though. You can't really free the fop and then remove it from the list...
11:10 xavih pranithk: the only reliable way I see is to remove it from the list, then destroy the fop, and then check if ec->pending_fops is empty
11:10 xavih pranithk: of course this means that ec->lock needs to be taken twice...
11:17 Guest10011 anuradha: tried again to recreate the memory leak issue but it is working fine. Used bitrot option too but can not see any memory leak
11:17 nkhare joined #gluster-dev
11:19 Guest27648 schandra: there?
11:19 krishnan_p joined #gluster-dev
11:20 schandra|away Guest27648, here
11:20 krishnan_p Need reviews for http://review.gluster.com/#/c/10872/
11:27 pranithk xavih: or ref counting
11:28 xavih pranithk: do you want to use ref counting instead of having a list ?
11:29 pranithk xavih: yeah...
11:29 nkhare_ joined #gluster-dev
11:29 xavih pranithk: but we need the list to be able to force unlocks in case of a graph switch
11:30 pranithk xavih: ah!
11:30 hagarth itisravi: testing arbiter.t patch now
11:30 pranithk xavih: Do we really need to solve it now? or we can defer it to later?
11:30 itisravi hagarth: okay
11:31 pranithk xavih: Quite a few things don't work with fini... Its massive amount of work IMO.
11:31 xavih pranithk: it's highly unlikely to happen. We can solve it later if you want...
11:31 pranithk xavih: yeah, lets solve it later. There are more common operations that we need to get right now, lets concentrate there...
11:32 xavih pranithk: ok, I'll +2 it
11:32 hagarth itisravi: are you planning to add grep -a in volume.rc ?
11:33 itisravi hagarth:  I thought you should do it since you found it :)
11:34 pranithk xavih: Just spoke to hagarth, he was asking if we can get all the pending patches in by tomorrow afternoon, I feel yes, what do you say?
11:34 hagarth itisravi: since you have added everything in the patch's commit history, feel free to :)
11:34 hagarth it might be faster that way
11:35 itisravi hagarth: okay :)
11:35 itisravi hagarth: will resend now.
11:35 xavih pranithk: I think all patches are already uploaded for review, right ?
11:35 xavih pranithk: or there's anything else I missed ?
11:37 poornimag joined #gluster-dev
11:38 pranithk xavih: yes, the work we thought we would do for 3.7.1 in Barcelona is almost done. Need your review on data-self-heal. I will take care of reviews for the rest. I think we have week or two to get everything ready for 3.7.1
11:38 pranithk xavih: it would be awesome if we complete self-heal part as well?
11:38 xavih pranithk: will work on it
11:43 pranithk xavih: great! may be instead of using gerrit, you want to send mail? because there are two patches you need to look at to see the final picture, so it is probably better to look at the code directly IMO
11:44 xavih pranithk: yes, probably it's better
11:45 spalai joined #gluster-dev
11:47 sankarshan_away joined #gluster-dev
11:48 kanagaraj joined #gluster-dev
11:48 itisravi hagarth: resent arbiter.t
11:49 Anjana joined #gluster-dev
11:50 itisravi pranithk: hagarth sent fix for data-self-heal.t as well http://review.gluster.org/#/c/10875/
11:52 xavih pranithk: just to be sure, the patches to look at more deeply are 10298 and 10384 ?
11:55 tigert hagarth: if you didnt notice yet, www.gluster.org
11:55 pranithk xavih: there were three patches, you already looked at 10298 I think. But not http://review.gluster.org/10385, http://review.gluster.org/10384
11:55 tigert others too, give me a poke if you find issues or broken things
11:56 xavih pranithk: ok
11:57 kshlm tigert, custom avatars aren't being shown on planet.
11:58 tigert they arent?
11:58 tigert I saw some
11:58 * tigert checks
11:58 tigert ooh did I break something
12:01 kshlm Isn't the planet deployment automatic?
12:02 Anjana tigert: hi. I noticed a redirect on http://www.gluster.org/documentation and have a few suggestions to make the content shorter and crisper.
12:03 tigert hold a secm
12:03 tigert sec,
12:03 tigert since I changed it already too
12:03 tigert http://glusternew-tigert.rhcloud.com/docs-redirect/
12:03 tigert see there
12:04 tigert made it more friendly and like a note
12:04 itisravi_ joined #gluster-dev
12:04 tigert but it takes a hour to get through the webcache
12:06 tigert www.gluster.org/docs-redirect/?cachebuster31337=true < or this ;-)
12:14 * tigert goes through the blog feed list for planet
12:16 * tigert notes "How to add extra airplanes on FlightGear Flight Simulator" in planet
12:16 tigert it is interesting, but I think we need to look for a category or tag on that feed
12:17 tigert bah, it has only "storage"
12:18 rafi joined #gluster-dev
12:27 atalur joined #gluster-dev
12:31 Anjana joined #gluster-dev
12:34 aravindavk joined #gluster-dev
12:35 rgustafs_ joined #gluster-dev
12:37 Gaurav__ joined #gluster-dev
12:39 kanagaraj joined #gluster-dev
12:42 hagarth tigert: great going on gluster.org! maybe we should send a note to the lists about our minimalist approach and ask for feedback along with explaining how to contribute
12:42 tigert that might be a good idea
12:42 tigert I am trying to troubleshoot why avatars dont currently show up
12:42 tigert on planet
12:45 hagarth tigert: ok!
12:54 atalur joined #gluster-dev
13:01 spalai left #gluster-dev
13:02 shyam joined #gluster-dev
13:03 tigert ..aaand we have avatars working!
13:04 tigert required some chmodding in the builder node :P
13:11 ndevos tigert++
13:11 glusterbot ndevos: tigert's karma is now 9
13:23 aravindavk joined #gluster-dev
13:25 pranithk xavih: I see the following assertion failure when I run posix-compliance test on my machine, then the mount hung: "glusterfs: ec-common.c:1390: ec_lock_unfreeze: Assertion `list_empty(&lock->waiting)' failed"
13:26 pranithk xavih: Had to run it more than once to hit it
13:27 pranithk xavih: I actually started running it because it crashed even after the fix we made. I will try to re-create that as well...
13:28 pranithk xavih: I think this is the crash, because I use DDEBUG
13:29 xavih pranithk: I'm looking into it...
13:31 pranithk xavih: It is not consistent. It happened twice in 3 runs
13:33 xavih pranithk: maybe the problem is the assert itself... I think in fact it's possible to have elements inside lock->waiting...
13:34 xavih pranithk: if a fop arrives just when a fop was to be released
13:34 xavih pranithk: the second 'fop' is 'lock'
13:35 pranithk xavih: I will remove the assert and run things :-)
13:36 xavih pranithk: yes. I think it will be better
13:39 rjoseph joined #gluster-dev
13:49 shubhendu joined #gluster-dev
13:49 pranithk xavih: Now, the following assertion failed: glusterfs: ec-common.c:1807: ec_lock_reuse: Assertion `lock->owner == fop' failed.
13:49 xavih pranithk: that's worse :(
13:50 xavih pranithk: doing posix compliance tests ?
13:50 pranithk xavih: yeah, I ran it in a loop for 10 times, got it 3rd time or 4th time
13:50 pranithk xavih: Compile it with DDEBUG
13:55 pranithk xavih: I started reading the code. Will tell you if I find something...
13:56 gothos joined #gluster-dev
13:59 vimal joined #gluster-dev
14:03 vimal joined #gluster-dev
14:06 vimal joined #gluster-dev
14:10 vimal joined #gluster-dev
14:12 xavih pranithk: which version are you testing ? that assert is on another line on the patches I'm looking
14:14 nkhare_ joined #gluster-dev
14:14 hagarth joined #gluster-dev
14:17 pranithk xavih: I just deleted the assert you asked me to...
14:18 pranithk xavih: that could have reduced the line number?
14:18 pranithk xavih: were you able to re-create it?
14:19 xavih pranithk: oh, it's true. Could be that... sorry...
14:19 xavih pranithk: I'm testing
14:20 vimal joined #gluster-dev
14:20 pranithk xavih: You seem to have slept late based on the timestamp of your mails. I will try my best if you are not able to complete this today. Not a problem.
14:24 pranithk xavih: This happened in chown, not sure if that helps
14:24 pranithk xavih: I mean chown part of the tests...
14:24 xavih pranithk: I'll concentrate on that... thanks :)
14:33 pranithk xavih: I have one doubt in ec_lock_insert, when we do ec_lock_compare and swap locks we do not touch link->wait_list at all, is that fine?
14:34 pranithk xavih: I mean fop->locks[0].wait_list
14:34 dlambrig joined #gluster-dev
14:35 xavih pranithk: no, that's a problem...
14:35 xavih pranithk: I've also seen a double UNLOCK...
14:36 xavih pranithk: I'll change that and upload the patch again...
14:36 pranithk xavih: Oh that was the problem?
14:37 xavih pranithk: not sure, but it's really a bad problem...
14:37 pranithk xavih: cool. I will keep reading
14:38 pranithk xavih: OMG it is 8 already here. I need to get home. Will resume from home...
14:38 xavih pranithk: oh, I'm sorry :(
14:39 pranithk xavih: hey no no, I didn't keep a tab on time :-)
14:39 xavih pranithk: don't worry. I'll continue with that. However I've been unable to reproduce it in my test system...
14:39 xavih pranithk: go home :)
14:39 pranithk xavih: Please do send me a mail/update on gerrit if you find something. I will understand this patch
14:40 xavih pranithk: sure
14:40 pranithk xavih: thats too bad. Did you enable DDEBUG in the builds?
14:40 pranithk xavih: do you think my patch is triggering the code paths?
14:40 pranithk xavih: Shall I give you my patch which sets the responses with inodelk/entrylk counts?
14:41 xavih pranithk: the patch for the crashes ?
14:41 pranithk xavih: no no, locks change which I was working on yesterday?
14:41 xavih pranithk: ah, you mean the change in locks xlator...
14:41 pranithk xavih: the patch which actually sets the number of entry/inodelk locks
14:41 pranithk xavih: yes yes
14:41 xavih it shouldn't cause a crash in ec
14:41 xavih pranithk: otherwise is an ec's bug :P
14:42 xavih pranithk: don't worry. Go home :)
14:42 pranithk xavih: I think that is the only difference between your setup and mine. And it is reasonably consistent on my machine.
14:42 pranithk xavih: I haven't completed my testing but let me send the patch anyway...?
14:42 xavih pranithk: my test machines are slow. That can also be a difference...
14:43 pranithk xavih: I am running this on my laptop...
14:43 xavih pranithk: sure. I can start reviewing it also
14:43 xavih pranithk: mine are atoms :P
14:43 pranithk xavih: hehe
14:44 xavih pranithk: I mean Intel Atom processors...
14:45 pranithk xavih: okay I sent the patch: 1165041
14:45 pranithk xavih: http://review.gluster.org/10880
14:45 xavih pranithk: thanks :)
14:45 pranithk xavih: okay I will try and see the updates may be after dinner.. it should take two hours.
14:45 pranithk xavih: I mean from now for me to come online.
14:46 pranithk xavih: cya
14:59 nkhare_ joined #gluster-dev
15:06 poornimag joined #gluster-dev
15:20 kshlm joined #gluster-dev
15:25 anekkunt joined #gluster-dev
15:27 aravindavk joined #gluster-dev
15:30 vimal joined #gluster-dev
15:53 pranithk joined #gluster-dev
15:54 pranithk xavih: I see you posted a patch got the problem?
15:55 xavih I was just writing an email to you :P
15:55 xavih I think the problem were the two bugs we have fond before
15:55 xavih both are solved in the new patch set
15:55 xavih however there's still a problem...
15:55 pranithk xavih: oh?
15:55 xavih it has to do with the dirty flag...
15:56 xavih pranithk: now the dirty flag is set before doing any operation that modifies something
15:56 pranithk xavih: Didn't get it :-(
15:56 pranithk xavih: got it got it
15:56 xavih pranithk: the problem is that if the operation fails, the dirty flag is cleared (this is ok), but the change time has changed
15:57 xavih pranithk: and posix says that it shouldn't change if the operation fails
15:57 xavih pranithk: how does afr solve this problem ?
15:57 pranithk xavih: It doesn't solve this problem. It leaves dirty flag as is and it is only cleared as part of heal
15:58 xavih pranithk: heal ? but everything goes ok, so it shouldn't leave the dirty flag set...
15:59 xavih pranithk: I mean if the failure is caused by insufficient rights or something like that
15:59 pranithk xavih: oh!, yeah if everything goes okay, it will clear it.
15:59 xavih pranithk: but in this case the change time of the inode will get updated
15:59 xavih pranithk: and posix says it shouldn't
15:59 pranithk xavih: yep it does
16:00 xavih pranithk: tests link/00.t and open/00.t fail for this reason
16:00 pranithk xavih: I am not aware of any such problem because of this behavior :-(
16:00 pranithk xavih: ah! link keeps failing even on normal xfs... but not open
16:01 xavih pranithk: in the open case, an open on a file that already exists should not update the modification time of the parent directory
16:01 xavih pranithk: but since we set the dirty flag on the parent, it's modified
16:01 pranithk xavih: why does open need to set dirty flag?
16:01 xavih pranithk: I mean open with O_TRUNC
16:02 xavih pranithk: O_CREAT, sorry
16:02 pranithk xavih: O_CREAT never comes on fuse. separate creat + open comes
16:03 xavih pranithk: maybe it's another problem ? I thought it was the dirty flag
16:03 xavih pranithk: I'll look into the detailed log...
16:04 pranithk xavih: okay...
16:05 krishnan_p joined #gluster-dev
16:12 xavih pranithk: sorry. It's not the dirty flag update
16:13 pranithk xavih: oh, then?
16:13 xavih pranithk: the problem is that opendir calls xattrop to read size and version. It passes 0's, but posix modifies the xattr anyway
16:14 xavih pranithk: this is similar to a comment I wrote on patch http://review.gluster.org/10727/
16:14 pranithk xavih: yes it does. This was not happening earlier because we were using lookup instead. It is the same problem even for that patch I sent on read-only xlator
16:14 pranithk xavih: yeah :-)
16:15 xavih pranithk: would it be possible to update posix so that unmodified xattrs are not rewritten ?
16:15 xavih pranithk: I can do that if you consider it's a good approach
16:18 ndevos xavih: not writing unmodified xattrs should be part of md-cache on the brick side, imhp
16:18 ndevos *imho
16:20 xavih ndevos: I think md-cache should not filter modifications requests, only serve reads faster
16:21 xavih ndevos: it's even possible that md-cache doesn't have the metadata cached, so it will forward and it's posix who will need to handle that
16:23 xavih ndevos: even if the metadata is cached, a write can have side effects (like the modification of the change time), so a caching xlator should not filter writes unless it's completely able to emulate posix behaviour
16:24 dlambrig joined #gluster-dev
16:27 ndevos xavih: hmm, right, I did not think about the ctime
16:28 wushudoin joined #gluster-dev
16:28 ndevos xavih: but I would not like to see some cache/checking in the posix xlator, I do not think it belongs there
16:29 xavih ndevos: it's not a cache/checking issue I think
16:30 xavih ndevos: it's an optimization to not write values that have not been updated
16:30 xavih ndevos: and this is useful for other purposes
16:31 xavih ndevos: I don't see another way to do it
16:31 ndevos xavih: sure, but how do you track values that have not changed?
16:32 xavih ndevos: oh, this is only needed for (f)xattrop
16:32 xavih ndevos: xattrop first reads the xattr and then adds some value to it and writes it back to the xattr
16:33 xavih ndevos: if the value to add is 0, then the last update can be skipped
16:33 ndevos xavih: ah, yes, in that case it makes sense
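[editor's note: the optimization xavih outlines can be sketched as follows — a simplified model of xattrop's read-add-write cycle, not the posix xlator itself; the key name and helper are illustrative:]

```python
def xattrop_add(xattrs, key, delta, writes):
    """Toy xattrop: read the current value, add delta, write it back.
    When delta is 0 the write-back is skipped entirely, so a pure read
    does not dirty the inode (or bump its change time)."""
    current = xattrs.get(key, 0)
    new = current + delta
    if delta != 0:
        xattrs[key] = new
        writes.append(key)  # track which keys were actually rewritten
    return new

writes = []
xattrs = {"trusted.ec.version": 7}
print(xattrop_add(xattrs, "trusted.ec.version", 0, writes))  # 7, no write
print(xattrop_add(xattrs, "trusted.ec.version", 1, writes))  # 8, one write
print(writes)                                                # one entry
```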
16:33 xavih ndevos: :)
16:33 ndevos :)
16:33 xavih ndevos: I'll send a patch for that (it's quite easy) to let Pranith see it.
16:34 xavih ndevos: Do you want I add you as a reviewer ?
16:34 ndevos xavih: sure, feel free to
16:34 xavih ndevos: :)
16:59 spot joined #gluster-dev
16:59 JustinCl1ft spot: The Presentations page on the Gluster Wiki is definitely the correct place for the recent Summit slides
17:00 spot JustinCl1ft: okay, but it shows File: entries for stuff uploaded to the wiki
17:00 spot JustinCl1ft: but I'm not sure how to upload files
17:00 JustinCl1ft However, ndevos mentioned in the meeting yesterday they're making the wiki read-only
17:00 JustinCl1ft Apparently so it can be converted across into a GitHub repo :)
17:00 JustinCl1ft tigert and/or ndevos should know more
17:01 ndevos JustinCl1ft: I think that is more up to hchiramm_
17:01 spot I just need to know where should I upload Summit presentations?
17:01 JustinCl1ft Cool
17:01 JustinCl1ft I couldn't remember :)
17:01 ndevos spot: maybe just the summit page in the wiki, at least for now?
17:02 JustinCl1ft ndevos: spot wants to upload files
17:02 spot ndevos: i cannot find an upload option in the wiki.
17:02 JustinCl1ft If the wiki is read-only though, that's not going to work
17:02 spot the wiki isn't currently read-only, I've been updating the summit page today
17:02 ndevos spot: ah, edit the wiki page, and do something like [[file:filename.pdf]]
17:03 ndevos people are still updating the wiki pages, I do not know when the cut-over is planned <- hchiramm_
17:03 JustinCl1ft Hmmm, has the link to the wiki gone from the front page?
17:04 * ndevos doesnt know, he uses those things called 'bookmarks' and 'history'
17:04 JustinCl1ft Yeah, I'm not seeing it
17:04 JustinCl1ft Yep, its gone
17:04 JustinCl1ft tigert: ^ ???
17:05 spot You do not have permission to upload this file, for the following reason:
17:05 spot The action you have requested is limited to users in one of the groups: Administrators, uploadaccess.
17:05 spot Sooo... can someone put spotrh in uploadaccess? :)
17:06 spot I wonder if that ACL showed up with a mediawiki update at some point.
17:06 hagarth joined #gluster-dev
17:08 ndevos spot: I do not have the permission (or know how to?) add you to the uploadaccess group in the wiki
17:08 JustinCl1ft I'm trying to remember how to change people's groups in the wiki
17:08 JustinCl1ft Gimme a minute
17:08 spot JustinCl1ft: sure. Just poke me when you figure it out.
17:08 ndevos JustinCl1ft: maybe through http://www.gluster.org/community/documentation/index.php?title=Special:ListUsers&group=uploadaccess ?
17:09 JustinCl1ft Oooh, that looks useful. :)
17:09 JustinCl1ft spot: "Spotrh" ?
17:09 spot Yep.
17:09 spot Thats me
17:09 dlambrig joined #gluster-dev
17:10 JustinCl1ft spot: Added that + admin for you
17:10 JustinCl1ft ndevos: Thanks, that helped :)
17:11 JustinCl1ft For reference in case anyone cares, this is the URL for changing a person's groups: http://www.gluster.org/community/documentation/index.php?title=Special:UserRights
17:11 JustinCl1ft ndevos: Should you be added to the admin list as well?
17:13 ndevos JustinCl1ft: nah, no need to, I guess the wiki will be legacy soon?
17:13 JustinCl1ft Yeah, that's the idea I guess
17:13 JustinCl1ft No worries. :)
17:13 ndevos hchiramm_ is going to send an email about the documentation plans "soon"...
17:14 JustinCl1ft ;)
17:14 ndevos :-/
17:14 * ndevos would like to know the plan, and if he should (not) update any wiki pages anymore
17:14 * JustinCl1ft agrees
17:17 * csim would also like to now when to put mediawiki in a blackhole
17:18 spot ugh. I hate wiki.
17:21 spot for some reason, internal links put in a table don't let me use override text.
17:21 spot hate hate hate wiki
17:32 dlambrig joined #gluster-dev
18:28 spot joined #gluster-dev
18:49 Gaurav__ joined #gluster-dev
18:49 ppai joined #gluster-dev
19:07 hchiramm joined #gluster-dev
19:28 soumya joined #gluster-dev
19:52 dlambrig joined #gluster-dev
21:00 ppai joined #gluster-dev
21:23 ppai joined #gluster-dev
22:20 ppai joined #gluster-dev
23:32 ppai joined #gluster-dev
23:49 dlambrig joined #gluster-dev
