
IRC log for #gluster-dev, 2014-11-11


All times shown according to UTC.

Time Nick Message
00:11 jobewan joined #gluster-dev
00:56 topshare joined #gluster-dev
01:30 _Bryan_ joined #gluster-dev
02:08 rastar_afk joined #gluster-dev
02:14 hagarth joined #gluster-dev
02:49 bala joined #gluster-dev
03:43 kanagaraj joined #gluster-dev
03:53 hagarth joined #gluster-dev
03:57 shubhendu joined #gluster-dev
04:02 itisravi joined #gluster-dev
04:11 bharata-rao joined #gluster-dev
04:19 ndarshan joined #gluster-dev
04:21 ppai joined #gluster-dev
04:27 nishanth joined #gluster-dev
04:34 atinmu joined #gluster-dev
04:34 rafi joined #gluster-dev
04:39 anoopcs joined #gluster-dev
04:47 jiffin joined #gluster-dev
04:59 soumya__ joined #gluster-dev
05:16 spandit joined #gluster-dev
05:31 aravindavk joined #gluster-dev
05:35 _Bryan_ joined #gluster-dev
05:39 bala joined #gluster-dev
05:43 krishnan_p joined #gluster-dev
05:45 kshlm joined #gluster-dev
05:45 atalur joined #gluster-dev
05:46 lalatenduM joined #gluster-dev
05:47 kdhananjay joined #gluster-dev
05:53 bala joined #gluster-dev
05:57 kdhananjay left #gluster-dev
06:08 anoopcs joined #gluster-dev
06:08 pranithk joined #gluster-dev
06:21 shubhendu joined #gluster-dev
06:25 ppai joined #gluster-dev
06:45 soumya_ joined #gluster-dev
06:55 badone joined #gluster-dev
07:17 aravindavk joined #gluster-dev
07:18 atinmu joined #gluster-dev
07:29 foster_ joined #gluster-dev
07:29 davemc joined #gluster-dev
07:35 soumya_ joined #gluster-dev
07:38 atinmu joined #gluster-dev
08:05 ppai joined #gluster-dev
08:40 krishnan_p xavih, I have replied to your email on timer API :)
08:42 deepakcs joined #gluster-dev
08:43 xavih krishnan_p: I haven't received it yet (we have some sort of greylisting anti-spam that delays the emails). I'll read it soon :)
08:44 krishnan_p xavih, oh ok. No hurry. I am a little excited to see how the API comes out ;)
08:45 vikumar joined #gluster-dev
08:49 nkhare joined #gluster-dev
08:52 kdhananjay joined #gluster-dev
08:58 ppai joined #gluster-dev
09:13 ndevos hmm, so many bugs closed, and there are still 1066 left!
09:14 krishnan_p ndevos, for a suitable value of many :P
09:15 hagarth @channelstats
09:15 glusterbot hagarth: On #gluster-dev there have been 76986 messages, containing 1978721 characters, 334234 words, 2806 smileys, and 338 frowns; 883 of those messages were ACTIONs. There have been 42563 joins, 747 parts, 41786 quits, 0 kicks, 728 mode changes, and 1 topic change. There are currently 60 users and the channel has peaked at 65 users.
09:17 ndevos krishnan_p: I just closed 384 bugs that had their fixes in a 3.6.x alpha/beta, and I'd call that many too!
09:17 hagarth ndevos: you would have successfully moved to the top of bitergia bugzilla stats for gluster now :)
09:18 krishnan_p ndevos, I'd call it many too and thanks for closing that many bugs diligently. I said that in jest
09:18 ndevos hagarth: yeaaaah! who wants to beat me!?
09:19 hagarth ndevos: how about throwing an open challenge to the community? :)
09:20 ndevos krishnan_p: I'm pretty confident that most of the open ones are glusterd bugs :P
09:23 kdhananjay left #gluster-dev
09:26 ndevos any one else of you received an email with the subject "Gluster EHT / DHT"?
09:27 hagarth ndevos: that message is awaiting gluster-devel moderation
09:30 ndevos hagarth: ah, okay, then I won't respond to the private copy I got :)
09:31 atalur joined #gluster-dev
09:48 kdhananjay joined #gluster-dev
09:49 ppai joined #gluster-dev
09:58 kdhananjay left #gluster-dev
10:18 ppai joined #gluster-dev
10:23 ndevos hagarth: ah, lol at the closed ticket peak(s) on http://bitergia.com/projects/redhat-glusterfs-dashboard/browser/its.html
10:25 ndevos hmm, Amar is still the all-time master with 1500 closed tickets, but I'm a proud #2
10:26 pranithk ndevos: good job, niels!
10:26 krishnan_p ndevos, cheers!
10:27 ndevos closing tickets/bugs is one thing, it does not have to relate to submitting patches ;)
10:27 pranithk ndevos: Amar has quite a few records niels, for example: Developers with the most changesets
10:27 pranithk Amar Tumballi              676 (9.6%)
10:29 ndevos pranithk: http://bitergia.com/projects/redhat-glusterfs-dashboard/browser/scm.html [tab: complete history] says different?
10:29 pranithk ndevos: Even the person with most changed lines: Amar Tumballi             152165 (12.0%)
10:29 ndevos pranithk: yes, I think Amar was quite a worker ant "D
10:29 ndevos :D
10:30 hagarth I wonder how I figure in the top 10 still :D
10:30 pranithk ndevos: yeah :-). I was giving who-wrote-glusterfs.sh output
10:31 ndevos pranithk: okay, and the who-wrote-glusterfs.sh is a little more tuned, it should pickup email-aliases and name typos too - I'd trust that more
10:31 pranithk ndevos: I feel there is something wrong with this output, because I am still at 604 patches
10:32 pranithk ndevos: yeah
10:32 ndevos pranithk: who-wrote-glusterfs.sh uses the current checked-out branch, unless you pass it a parameter
10:32 pranithk ndevos: yes I did it on master
10:33 * ndevos runs "./extras/who-wrote-glusterfs/who-wrote-glusterfs.sh origin/master" and sees what's current
10:34 ndevos pranithk: indeed, 604 for you
10:35 pranithk ndevos: yes. But not a lot of lines changed. I need to write some big features :-)
10:35 ndevos pranithk: or fix more bugs ;)
10:36 ndevos just number of patches: git shortlog -n -s -e | sort -r -n | less
10:37 ndevos pranithk: you have "only" 540 patches in 3.5, 64 patches more in master, I think that is quite a lot
10:38 pranithk ndevos: yeah I send a lot of patches ;-)
10:39 krishnan_p pranithk, ndevos intimidating stats :)
10:41 ndevos did you guys get a prize for each 100 patches you submitted?
10:41 ndevos I'm almost there, and would like to know if it is worth it?
10:42 krishnan_p ndevos, that's a surprise, we aren't allowed to talk about it in public :)
10:42 pranithk ndevos: I thought it is for showing off to the grand kids
10:42 ndevos lol
10:42 krishnan_p pranithk, :)
10:43 pranithk ndevos: krishnan_p ;-)
10:43 ndevos pranithk: you have to talk to krishnan_p, he seems to get something :D
10:43 pranithk krishnan_p: really?
10:44 ndevos a kiss on the cheek from hagarth?
10:44 pranithk ndevos: hehe
10:44 krishnan_p ndevos, how many patches more for 100?
10:45 ndevos krishnan_p: I'm at 92 now, and you should merge #93
10:45 ndevos that would be http://review.gluster.org/9078 , krishnan_p
10:46 ndevos or, http://review.gluster.org/9035 even
10:46 pranithk ndevos: Actually quality of my patches decreased in between for 2-3 months. But I am back on track now
10:46 ndevos pranithk: quality, or quantity?
10:47 hagarth ndevos: I don't dole out such gifts to anybody in this room ;)
10:47 krishnan_p hagarth, thanks for clearing the air :)
10:47 ndevos hagarth: that means I have a chance?
10:47 krishnan_p ndevos, definitely worth trying!
10:48 hagarth ndevos: even if you drop out of this room, you don't :D
10:48 krishnan_p ndevos, i am not sure if I maintain gf_store under the libglusterfs/src directory. I will give my +1.
10:49 pranithk ndevos: both
10:49 krishnan_p hagarth, could you merge #9078
10:49 ndevos krishnan_p: I'm not sure if you maintain it, but glusterd is the main user of it, I think?
10:50 krishnan_p ndevos, so do I transitively inherit it? I am happy to do that ;)
10:50 ndevos hagarth: oh, too bad...
10:50 ndevos krishnan_p: yes, I would say so :D
10:50 hagarth done merging 9078
10:51 ndevos hagarth++ krishnan_p++ thanks, thats 93!
10:51 glusterbot ndevos: hagarth's karma is now 24
10:51 glusterbot ndevos: krishnan_p's karma is now 1
10:51 krishnan_p ndevos,  you would be a more appropriate maintainer than me, given that the entire store refactoring was by you
10:51 hagarth ok, how about a contest for 10k?
10:51 * krishnan_p wished the glusterbot wouldn't display my karma points
10:51 hagarth whoever sends the 10000th CR in gerrit, gets a gift from gluster.org?
10:52 hagarth or whoever sends maximum patches between now and 10k?
10:52 ndevos krishnan_p: aha! now I understand why pranithk asked me to document stuff for gf_store_*
10:52 hagarth krishnan_p++ to make you feel better :)
10:52 pranithk hagarth: 2nd one is better. And the winner is most probably emmanuel.
10:52 glusterbot hagarth: krishnan_p's karma is now 2
10:52 pranithk krishnan_p++
10:52 glusterbot pranithk: krishnan_p's karma is now 3
10:53 pranithk ndevos: yes document
10:53 raghu_ joined #gluster-dev
10:53 hagarth right now we are qat 9089 I think
10:53 hagarth *at
10:53 * krishnan_p always thought what I wrote in this form was only visible to me, like notes to self
10:54 hagarth or how about a guessing contest on when we will hit 10k in gerrit?
10:54 ndevos hagarth: more about getting patches merged, not posted?
10:55 hagarth ndevos: more the merrier, though merging adds a lot more value than the former.
10:55 pranithk hagarth: it is a nice thing to do every quarter?
10:56 pranithk hagarth: best feature proposal in the quarter and most number of patches sent in a quarter?
10:56 pranithk hagarth: and person with most number of lines changed
10:56 hagarth pranithk: I like that idea .. why don't we discuss this over in tomorrow's community meeting?
10:56 ndevos hagarth: I dont really like counting posted patches, they can add very little value if there is no update after a review :-/
10:57 pranithk hagarth: Now that I got rid of all the "busy" things, I can also attend
10:57 hagarth pranithk: and whoever reviews most patches too
10:57 pranithk ndevos: we are talking about merged patches, I meant merged patches when I said, patches sent
10:57 pranithk hagarth: yes
10:58 hagarth that reminds me, would any of you be interested in sending out those monthly reports on stats to gluster ML?
10:58 pranithk hagarth: You don't even need monetary rewards. Just recognition is good.
10:58 hagarth I started it but somehow have not been able to keep up that. Maybe a cronjob or CI job would also do ;)
10:59 hagarth pranithk: yep, let us discuss this with a larger audience tomorrow
10:59 pranithk hagarth: that should be the way to go
10:59 pranithk hagarth: yes
11:00 hagarth we could also recognize folks who contribute the most number of lines to IRC logs too :)
11:00 pranithk hagarth: yes :-)
11:00 pranithk hagarth: I know who is going to win there :-) over and over that too :-)
11:01 hagarth pranithk: glusterbot? :D
11:01 ndevos pranithk: yes, that sounds good
11:01 ndevos pranithk: you're attending the bug triage meeting in one hour too?
11:01 pranithk hagarth: I guess it will be ndevos
11:01 ndevos hagarth: yes, get IRC into bitergia!
11:02 ndevos wohoo, more stats in my favour!
11:02 pranithk ndevos: you do a good job niels. thanks for the continued efforts and persistence :-)
11:02 hagarth ndevos: let me check if bitergia already has that .. a simple grep on botbot.me logs could do too.
11:02 pranithk ndevos: I gotta go home now so that I can attend the meeting. cya folks
11:02 ndevos pranithk: ttyl!
11:03 ndevos hagarth: yeah, it should be simple enough, but someone needs to do it....
11:04 ndevos aaah! Pranith is gone now, and I wanted to get his opinion on an ESTALE topic...
11:05 ndevos well, for anyone that is reading: should getxattr() return ESTALE in case of a nameless (by gfid) request, where the handle/gfid does not exist?
11:05 ndevos or, should it return ENODATA or something?
11:05 * ndevos would prefer ESTALE
11:06 lalatenduM joined #gluster-dev
11:06 jiffin1 joined #gluster-dev
11:06 ndevos hmm, maybe it should be ENOENT (man getxattr -> man stat)
11:10 ndevos hagarth: the ovirt dashboard in bitergia has irc stats, so it seems to be possible
11:10 ndevos hagarth: is that something we can ask davemc to look at?
11:11 soumya_ joined #gluster-dev
11:11 rafi1 joined #gluster-dev
11:11 ndevos and http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/irc.html exists, it's just a little empty
11:12 hagarth ndevos: certainly, bitergia is sponsored by davemc's group
11:13 ndevos hagarth: cool, I'll send a request to him
11:13 lalatenduM ndevos, in a meeting for next 1:30 hrs, so may not be active in bug triage meeting
11:18 ndevos lalatenduM: oh, thats a shame - but we'll assign all the SElinux bugs to you then
11:19 lalatenduM ndevos, lol
11:20 lalatenduM ndevos, btw how do you know abt the SELinux session?
11:22 ndevos lalatenduM: you're not the only one that attends :)
11:23 lalatenduM ndevos, right, the nfs link :)
11:23 ndevos lalatenduM: indeed!
11:42 kkeithley1 joined #gluster-dev
11:44 kkeithley_ ndevos++
11:44 glusterbot kkeithley_: ndevos's karma is now 51
11:45 ndevos you're welcome, kkeithley_!
11:50 ppai joined #gluster-dev
11:57 ndevos REMINDER: Gluster Community Bug triage starting in 2 minutes in #gluster-meeting
11:59 rafi joined #gluster-dev
12:02 kshlm joined #gluster-dev
12:03 ndevos kkeithley_: -> #gluster-meeting ?
12:24 itisravi joined #gluster-dev
12:25 nkhare joined #gluster-dev
12:33 jiffin joined #gluster-dev
12:42 krishnan_p joined #gluster-dev
12:45 rafi joined #gluster-dev
12:45 soumya_ joined #gluster-dev
12:52 edward1 joined #gluster-dev
12:54 lpabon joined #gluster-dev
12:54 rafi joined #gluster-dev
12:58 nkhare joined #gluster-dev
13:04 rafi1 joined #gluster-dev
13:08 ndevos lalatenduM: the trouble with many of the EasyFix bugs is, that some developers file a bug and send a patch immediately
13:08 ndevos we need somehow to be able to get users to file more EasyFix ones
13:09 lalatenduM ndevos, yeah agree
13:13 kkeithley_ Not having (m)any EasyFix bugs ought to be a good sign of project maturity. I would not complain about not having many of them.  Then again, many of the clang and clang-analyze warnings and errors might qualify for EasyFix
13:14 kkeithley_ lalatenduM. If you really want to be a full time dev, you should speak up. We do have dev spots open.
13:19 kshlm joined #gluster-dev
13:20 kshlm joined #gluster-dev
13:20 bala joined #gluster-dev
13:29 lalatenduM kkeithley, yes I am interested, I am thinking the same too
13:43 hagarth joined #gluster-dev
14:01 lalatenduM kkeithley, ndevos regarding excluding regression-tests from building in koji any suggestion ?
14:05 kkeithley_ wrap in %if ! %{_for_fedora_koji_builds}
14:05 kkeithley_ %if ( ! 0%{_for_fedora_koji_builds} )
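(Applied to the spec file, kkeithley_'s conditional would wrap the relevant subpackage; a sketch in which only the `%if` form comes from the discussion above, and the subpackage name and Summary line are assumptions:)

```spec
# Hypothetical glusterfs.spec fragment: skip the regression-tests
# subpackage when building for Fedora Koji, per the %if suggested above.
%if ( ! 0%{_for_fedora_koji_builds} )
%package regression-tests
Summary: Regression tests for GlusterFS
%endif
```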
14:09 hagarth JustinClift: around?
14:10 jobewan joined #gluster-dev
14:22 lalatenduM kkeithley, thanks
14:22 lalatenduM kkeithley++
14:22 glusterbot lalatenduM: kkeithley's karma is now 38
14:26 shyam joined #gluster-dev
14:42 kkeithley_ ndevos: I appear to have lost the Zimbra calendar entries for the bug triage. It seems to have happened when you canceled the meeting(s) after daylight savings ended. Would you please resend the meeting notice. Thanks
14:43 krishnan_p joined #gluster-dev
14:46 pranithk joined #gluster-dev
14:53 ndevos kkeithley_: I deleted those and am not intending to add them back - many people send me "tentative" and "declined" responses...
14:58 krishnan_p joined #gluster-dev
15:03 jdarcy joined #gluster-dev
15:05 kkeithley_ oh, okay
15:15 jobewan joined #gluster-dev
15:16 wushudoin joined #gluster-dev
15:36 ndevos LinkedIn is not a support forum.
15:37 tdasilva joined #gluster-dev
16:00 jobewan joined #gluster-dev
16:03 jobewan joined #gluster-dev
16:13 anoopcs joined #gluster-dev
16:18 jobewan joined #gluster-dev
16:43 JustinClift pranithk: Did you find a server for the logs to be copied to?
16:43 pranithk JustinClift: yes, he provided me with a private link. I am debugging the problem with him.
16:43 JustinClift pranithk: If not, I can put a new VM in Rackspace in just a few minutes, and we can take it down after this is all figured out
16:43 JustinClift pranithk: Cool
16:43 hagarth pranithk++
16:43 glusterbot hagarth: pranithk's karma is now 6
16:43 JustinClift pranithk: As per email, I've also pointed them at the new Consultants list
16:44 JustinClift pranithk: So, that might help them in a future occasion :)
16:44 hagarth JustinClift: that was a perfect response, thanks for that!
16:44 JustinClift pranithk: Thx btw :)
16:44 hagarth JustinClift++ too :D
16:44 glusterbot hagarth: JustinClift's karma is now 29
16:44 JustinClift hagarth: No worries.  Done exactly this before for PostgreSQL project years ago
16:44 hagarth JustinClift: cool
16:46 JustinClift Need to build out our base of companies supporting and contributing to GlusterFS.  This is an approach that works. ;)
16:46 hagarth right :)
16:46 hagarth JustinClift: I'll talk to the fractal-io folks next week, they seem to be doing this too.
16:46 JustinClift Oh cool
16:47 JustinClift As in, they seem to be supporting GlusterFS?
16:47 hagarth maybe we can announce this revamped page on our MLs too.
16:47 JustinClift Yep.
16:47 hagarth JustinClift: yes, kiran patil is from there.
16:47 * JustinClift has already suggested that to davemc
16:47 JustinClift Ahh cool
16:47 JustinClift Have we asked them if they want to be on the Consultants page?
16:48 hagarth no, I haven't met them in a while. Will be meeting them next week at a GlusterFS meetup in Bangalore.
16:50 JustinClift hagarth: Cool.  All yours then. :)
16:50 xavih sorry, a couple of questions about gluster architecture...
16:50 xavih when an xlator notifies that it's DOWN, what should be the behavior of this xlator when it receives requests ? is it undefined, or should it do something in particular (return a specific error ?)
16:51 xavih Also, if an opendir request fails, is it allowed to send readdir(p) requests ?
16:51 hagarth xavih: yes, readdirp should ideally fail if the preceding opendir did.
16:51 xavih but how does a xlator know that ?
16:52 pranithk xavih: fail with ENOTCONN
16:52 xavih pranithk: so any request while the xlator is DOWN should return ENOTCONN, right ?
16:52 pranithk xavih: if opendir fails the application doesn't have fd, so it can't send readdirp  on the fd
16:52 pranithk xavih: yes. That is what afr does as well
16:53 hagarth xavih: look at the fd
16:53 xavih pranithk: if opendir fails in one subvolume of dht but not the other, dht should make readdir calls to the subvolume that returned an error to opendir ?
16:54 pranithk xavih: oh, dht runs in degraded mode for that scenario, serving only the readdirs on the subvolumes that are up
16:54 xavih hagarth: so each xlator has to save in the fd if that particular fd has been open by it ?
16:55 pranithk xavih: when an fd open/opendir succeeds, we can maintain fd-ctx in that fd. If any readdir comes without the fd-ctx we can fail with EINVAL
16:56 xavih pranithk: I think dht is doing something weird. After having notified that the xlator is down, I continue receiving requests. One of these requests is opendir, that returns an error, but I receive a readdirp request anyway
16:56 pranithk xavih: "when an fd open/opendir succeeds, we can maintain fd-ctx in that fd. If any readdir comes without the fd-ctx we can fail with EINVAL"
16:57 xavih pranithk: if that's normal behavior, I'll try to detect these cases in ec
16:57 xavih pranithk: however I think that dht should avoid these scenarios in the first place
16:58 pranithk xavih: maybe dht doesn't maintain that information in fd. In any case, it is always preferable to handle such cases local to the ec xlator
16:59 xavih pranithk: I'll do then...
16:59 pranithk xavih: If you have a reliable test case, I can send that as my first patch in ec? I just got off the *busy* work I was put on past 2 weeks.
17:00 _Bryan_ joined #gluster-dev
17:01 xavih I'm working on bug #1161066
17:01 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1161066 medium, unspecified, ---, xhernandez, ASSIGNED , A disperse 2 x (2 + 1) = 6 volume, kill two glusterfsd program, ls  mountpoint abnormal.
17:02 pranithk xavih: Do you mind if I take the bug and look into it? I can send the patch
17:02 xavih I've detected a problem in the propagation of errors and unlocking when lock failed, but it still has the same behavior
17:02 xavih Do I send the changes I have till now ?
17:03 pranithk xavih: oh you already worked on it?
17:03 xavih yes, I was trying to determine if the problem is in ec or not...
17:04 pranithk xavih: okay, then take it to completion :-)
17:04 xavih and I found some mistakes in the process...
17:04 xavih :-/
17:04 pranithk xavih: :-)
17:04 pranithk xavih: let me know if you need any help with this though
17:04 pranithk the opendir/open part I mean
17:05 xavih pranithk: thanks :)
17:10 pranithk xavih: Do you mind if I change the code to meet glusterfs coding guidelines for the patches I send in ec?
17:10 xavih pranithk: no problem. I'm already doing that for the patches I send (except for the indentation)
17:11 pranithk xavih: cool
17:23 anoopcs joined #gluster-dev
17:36 lalatenduM joined #gluster-dev
18:10 JustinClift hagarth: Email received?
18:12 shyam xavih: DHT does not maintain the success/failure of an opendir in the fd context, it leaves the protocol/client to do that work and blindly sends readdir to all subvols (one after the other) and continues even if there are errors from one of its subvols
18:13 hagarth JustinClift: yes, muchas gracias!
18:16 JustinClift hagarth: :)
18:21 JustinClift ndevos: Were you the only person at the bug triage meeting today?
18:21 * JustinClift only say ndevos on the roll call part of your minutes
18:23 ndevos JustinClift: sometimes I talk to myself, but I would not let zodbot take minutes about that
18:25 pranithk JustinClift: I said I would attend but didn't reach home in time :-(
18:27 ndevos pranithk: no problem, I did not delete your action items - you have another chance next week
18:31 pranithk ndevos: cool
18:32 JustinClift ;)
18:35 * ndevos can now very clearly make the NFS-client hang by introducing some minor "optimizations" in nfs-ganesha/md-cache
18:36 JustinClift It's a feature
18:36 JustinClift Helps reduce the different options ppl could actually be using in production.  Makes supporting production setups easier. ;)
18:37 JustinClift ^not serious, in case anyone reads this later in online archives ;)
18:40 rafi joined #gluster-dev
18:59 _Bryan_ joined #gluster-dev
19:53 lalatenduM joined #gluster-dev
20:03 shyam left #gluster-dev
20:03 shyam joined #gluster-dev
21:37 _Bryan_ joined #gluster-dev
21:56 badone joined #gluster-dev
23:28 _Bryan_ joined #gluster-dev
