
IRC log for #gluster-dev, 2015-06-22


All times shown according to UTC.

Time Nick Message
01:01 dlambrig left #gluster-dev
01:15 shyam joined #gluster-dev
02:19 kdhananjay joined #gluster-dev
03:51 shubhendu joined #gluster-dev
03:56 sakshi joined #gluster-dev
03:58 itisravi joined #gluster-dev
04:03 atinm joined #gluster-dev
04:08 itisravi_ joined #gluster-dev
04:10 badone_ joined #gluster-dev
04:13 badone__ joined #gluster-dev
04:15 badone joined #gluster-dev
04:22 nbalacha joined #gluster-dev
04:29 ndarshan joined #gluster-dev
04:39 kshlm joined #gluster-dev
04:41 ppai joined #gluster-dev
04:48 ashishpandey joined #gluster-dev
04:56 vimal joined #gluster-dev
05:03 hgowtham joined #gluster-dev
05:04 gem joined #gluster-dev
05:04 ashiq joined #gluster-dev
05:07 Manikandan joined #gluster-dev
05:11 pppp joined #gluster-dev
05:15 spandit joined #gluster-dev
05:23 hchiramm joined #gluster-dev
05:26 Bhaskarakiran joined #gluster-dev
05:43 hchiramm ashiq, ping
05:43 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
05:43 ashiq pong hchiramm
05:44 hchiramm ashiq, did any of the patches get merged upstream?
05:44 hchiramm I believe bitrot ?
05:44 ashiq hchiramm, bit-rot yet to be reviewed by overclk
05:44 soumya_ joined #gluster-dev
05:45 hchiramm hmmmm,
05:45 anekkunt joined #gluster-dev
05:47 nbalacha joined #gluster-dev
05:53 kdhananjay joined #gluster-dev
06:00 josferna joined #gluster-dev
06:02 Gaurav__ joined #gluster-dev
06:04 soumya_ joined #gluster-dev
06:05 gem joined #gluster-dev
06:08 deepakcs joined #gluster-dev
06:18 overclk joined #gluster-dev
06:18 raghu joined #gluster-dev
06:20 atalur joined #gluster-dev
06:24 raghu spandit: http://build.gluster.org/job/rackspace-regression-2GB-triggered/11085/consoleFull
06:25 spandit raghu, Thanks, I will have a look
06:25 krishnan_p joined #gluster-dev
06:26 krishnan_p ndevos, pranithk, http://review.gluster.com/11095 is "Ready to Submit". Is there anything waiting on me?
06:26 spalai joined #gluster-dev
06:29 overclk left #gluster-dev
06:29 overclk joined #gluster-dev
06:31 hchiramm itisravi_, http://build.gluster.org/job/rackspace-regression-2GB-triggered/11144/consoleFull
06:33 itisravi_ hchiramm: I'll take a look at it.
06:34 gem joined #gluster-dev
06:36 hchiramm itisravi_, thanks
06:36 hchiramm itisravi++
06:36 glusterbot hchiramm: itisravi's karma is now 6
06:39 atalur joined #gluster-dev
06:40 gem ashiq++
06:40 glusterbot gem: ashiq's karma is now 3
06:42 ashiq can anyone merge http://review.gluster.org/11223
06:56 badone_ joined #gluster-dev
06:58 krishnan_p ashiq, ndevos or pranithk would be merging it. See https://github.com/gluster/glusterfs/blob/master/MAINTAINERS to find out the maintainer of the component your patch is for.
07:00 ashiq krishnan_p, Thanks :)
07:00 ashiq krishnan_p++
07:00 glusterbot ashiq: krishnan_p's karma is now 8
07:10 soumya_ joined #gluster-dev
07:10 nbalacha joined #gluster-dev
07:28 badone__ joined #gluster-dev
07:30 jiffin joined #gluster-dev
07:40 Manikandan joined #gluster-dev
07:42 ndevos krishnan_p: we were waiting for NetBSD results, that was all
07:42 hchiramm ndevos,  http://review.gluster.org/10822 can u please merge it
07:43 ndevos hchiramm: you have to wait in line :)
07:43 ndevos hchiramm: *and* your patch did not pass regression testing yet
07:44 hchiramm ndevos, it has passed
07:44 ndevos hchiramm: if so, please fix it?
07:44 hchiramm ndevos, even though it didn't vote
07:45 ndevos hchiramm: post the link where it passed as a comment and I'll look at it later
07:45 Manikandan raghu, http://review.gluster.org/#/c/11280/ (protocol client clean up patch) merged in master
07:45 Manikandan raghu, could you please merge it in downstream
07:45 Manikandan raghu, https://code.engineering.redhat.com/gerrit/#/c/51100/
07:46 hchiramm ndevos, done
07:48 Manikandan overclk, could you retrigger gluster build for this patch http://review.gluster.org/#/c/11200/
07:50 hchiramm Manikandan, downstream bits u discuss in internal channels
07:50 Manikandan hchiramm, oops sorry
07:51 Manikandan hchiramm, okay:)
07:52 hgowtham ndevos, could you help me out with this test: ./tests/bugs/nfs/bug-904065.t? It keeps failing for the patch http://review.gluster.org/#/c/11279/
07:53 ndevos hgowtham: not right now, ping again in 45-60 minutes? or send me an email
07:54 hgowtham ndevos, okie :)
08:01 schandra joined #gluster-dev
08:02 hchiramm schandra, ping
08:02 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
08:07 rgustafs joined #gluster-dev
08:12 xavih ndevos: is option 'trusted-write' in nfs really implemented ?
08:12 xavih ndevos: looking at the code doesn't seem to be used
08:12 ndevos xavih: I've never heard of that option before?
08:12 xavih ndevos: it appears in 'gluster volume set help'
08:14 kdhananjay joined #gluster-dev
08:15 kdhananjay joined #gluster-dev
08:16 ndevos xavih: it surely was implemented at one point, but nfs3_export_write_trusted is never called anymore as far as I can see
08:16 xavih ndevos: yes. That's what I've seen...
08:16 xavih ndevos: is it considered too dangerous ?
08:16 ndevos xavih: do you need that option?
08:17 xavih ndevos: well, vmware performance through nfs is quite bad because it sends all writes as filesync
08:17 xavih ndevos: I'm trying to see a way to optimize it
08:18 ndevos xavih: https://github.com/gluster/glusterfs/commit/c705b679 introduced it - note the "violate the NFS protocol"
08:19 ndevos xavih: do you mean that VMware sends a COMMIT after each write?
08:21 xavih ndevos: no, each write comes with a 'filesync' flag (or mode; I'm not an expert on NFS)
08:21 ndevos xavih: right, and the reply from the NFS-server is marked as UNSTABLE?
08:22 xavih ndevos: and I see a lot of small requests (1KB, 4KB, 8KB) coming all with this flag
08:22 xavih ndevos: one moment...
08:22 ndevos xavih: I think "filesync" is like writing after an open with O_SYNC
08:23 xavih ndevos: is it possible that the answer also contains 'filesync' ?
08:23 xavih ndevos: a tcpdump shows '<filesync>' also in the answer
08:23 xavih ndevos: yes, I think that is the problem
08:24 ndevos xavih: can you fpaste a part of the output? use "tshark -r /path/to/dump -V 'frame.number == $NO'"
08:25 * ndevos fetches a coffee, back in a minute
08:28 gem joined #gluster-dev
08:30 xavih ndevos: http://paste.fedoraproject.org/235303/34961780
08:30 xavih ndevos: there is a write request (frame 6) and the reply (frame 26)
08:33 ndevos xavih: okay, so the WRITE call has the FILE_SYNC flag set, and the reply just confirms that
08:34 xavih ndevos: yes, this means that all writes are done synchronously, and since almost all of them are 4KB, the speed is quite poor...
08:34 xavih ndevos: I'll try to find more information. Thank you very much for your help :)
08:35 pranithk joined #gluster-dev
08:35 overclk Manikandan, sure.
08:36 ndevos xavih: FILE_SYNC is explained in http://tools.ietf.org/html/rfc1813#section-3.3.7 (on the following page under "stable")
08:36 Manikandan overclk, Thank you:)
08:41 ndevos xavih: I doubt that trusted-write could address it, it sounds like the other way around
08:45 shubhendu joined #gluster-dev
08:46 xavih ndevos: looking at the code of the original modification, it seems to work the opposite of what I interpreted from the option description...
08:47 ndarshan joined #gluster-dev
08:47 xavih ndevos: I'll need to investigate further and find other possibilities, since this is not implemented anymore...
08:47 xavih ndevos: thanks :)
08:51 ndevos xavih: you seem to be looking for the "async" option that the Linux kernel NFS-server offers, see "man 5 exports"
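
For context on the exchange above: RFC 1813 (section 3.3.7) defines three stability levels for an NFSv3 WRITE call, and a FILE_SYNC write behaves roughly like a local write(2) on a file opened with O_SYNC. A minimal C rendering of that enum, with the RFC semantics summarized in comments (illustrative only, not gluster source):

    /* Stability levels of an NFSv3 WRITE, per RFC 1813 section 3.3.7. */
    enum stable_how {
        UNSTABLE  = 0, /* server may cache the write; client must COMMIT later */
        DATA_SYNC = 1, /* file data committed, metadata may still be cached    */
        FILE_SYNC = 2  /* data and metadata committed before replying, much
                          like write(2) on an O_SYNC descriptor; many small
                          FILE_SYNC writes therefore perform poorly            */
    };
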
08:52 krishnan_p ndevos, thanks :)
08:52 krishnan_p ndevos++
08:52 glusterbot krishnan_p: ndevos's karma is now 162
08:53 pranithk ndevos: Do you have any bandwidth today for reviewing logging patches?
08:54 ndevos pranithk: possibly, I'm trying to handle review requests in a queue, but a numbering system would help...
08:54 rjoseph joined #gluster-dev
08:55 pranithk ndevos: :-)
08:56 ndevos who wants to write some fuse kernel code? I've got an offer: http://thread.gmane.org/gmane.comp.file-systems.fuse.devel/14741
08:56 poornimag joined #gluster-dev
08:56 kdhananjay1 joined #gluster-dev
09:04 soumya_ joined #gluster-dev
09:11 itisravi ndevos: I'd like to.
09:16 xavih ndevos++
09:16 glusterbot xavih: ndevos's karma is now 163
09:17 Manikandan joined #gluster-dev
09:18 overclk spandit, could you check tests/bugs/quota/bug-1153964.t ? This test seems to be failing consistently in upstream master.
09:21 soumya_ joined #gluster-dev
09:23 ndevos itisravi: wohoo! when do you think you can make some time for it? it should not take more than 2-3 days, I think (if you did some kernel stuff before)
09:23 spandit overclk, raghu also observed the same, I am on it, thanks :)
09:25 itisravi ndevos: Does before next Monday look acceptable? I'll whip up a patch by then if things go well.
09:27 ndevos itisravi: sure, sounds great! could you respond to the email from Miklos that you're going to have a go at it?
09:29 poornimag joined #gluster-dev
09:29 itisravi ndevos: Will do. How do I do a 'reply-to'? I'm not subscribed to fuse-devel.
09:29 ndevos itisravi: I'll bounce it to you so that you can reply easier
09:29 itisravi ndevos: perfect, thanks :-)
09:30 ndevos itisravi: ravishankar@ ?
09:30 itisravi ndevos: yup.
09:30 ndevos okay, on its way :)
09:32 itisravi ndevos: got it, thanks for the opportunity :)
09:33 ndevos itisravi: sure, no problem, thanks for stepping up :) I almost started to write the code yesterday, but looked into the Gluster bits instead
09:33 gem joined #gluster-dev
09:34 ndevos itisravi: that is actually initiated by bug 1220173
09:34 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1220173 high, unspecified, ---, bugs, NEW , SEEK_HOLE support (optimization)
09:34 rjoseph joined #gluster-dev
09:35 Manikandan joined #gluster-dev
09:35 itisravi ndevos: Ah okay.
09:35 * itisravi needs to participate actively in bug-triages.
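
For reference, bug 1220173 asks for support of the lseek(2) SEEK_DATA/SEEK_HOLE interface mentioned above. A small self-contained sketch of what that interface enables (standard Linux API usage, not gluster code):

    #define _GNU_SOURCE /* SEEK_DATA and SEEK_HOLE are GNU extensions */
    #include <stdio.h>
    #include <unistd.h>

    /* Print the data extents of a (possibly sparse) file; tools like cp
     * use this to skip holes instead of reading zeroes through them. */
    static void map_extents(int fd)
    {
        off_t pos = 0, data, hole;

        while ((data = lseek(fd, pos, SEEK_DATA)) != -1) {
            hole = lseek(fd, data, SEEK_HOLE); /* end of file counts as a hole */
            printf("data: %lld..%lld\n", (long long)data, (long long)hole);
            pos = hole;
        }
    }
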
09:36 gem joined #gluster-dev
09:43 ndevos pranithk: which logging patches were you asking about? does it include http://review.gluster.org/10822 ?
09:44 kdhananjay joined #gluster-dev
09:46 * ndevos informs all the non-Dutch folks in this channel that nature confuses summer with autumn, this kind of rain is not suitable for going to the beach
09:46 gem_ joined #gluster-dev
09:46 kshlm Has anyone seen this error when attempting to install rpms? https://gist.github.com/kshlm/f6431826185c8854b284
09:47 kshlm I just built some rpms for doing some tests, but glusterfs-server fails to install with 'error: unpacking of archive failed on file /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py;5587d955: cpio: open'
09:48 ndevos kshlm: yes, but aravinda thinks http://review.gluster.org/11298 would introduce a glusterd issue
09:49 gem_ joined #gluster-dev
09:49 pranithk ndevos: I am taking care of that
09:50 ndevos pranithk: 10822?
09:51 * ndevos assumes that is a "yes"
09:51 kshlm ndevos, Thanks.
09:52 kshlm Is there any workaround for now? I don't want to build gluster on each vm.
09:52 ndevos kshlm: "mkdir -p /var/lib/glusterd/hooks/1/delete/post" might do
09:53 ndarshan joined #gluster-dev
09:53 shubhendu joined #gluster-dev
09:53 kshlm I did that but it didn't work.
09:53 kshlm Let me try again.
09:54 ndevos hmm, I guess it could have been deleted when you uninstalled rpms
09:54 kshlm These are fresh vms, being provisioned using Vagrant and Ansible. So it's not there.
09:55 ndevos no, right, create the dir before installing the package?
09:55 pranithk ndevos: yeah, just completed the review. Asked him to remove all logs with NO_MEMORY msg-id. We should not log messages about No-memory as gf_msg is going to call CALLOC again.
09:56 ndevos pranithk: okay, I did not check at all yet - gf_log() had a _nomem_ variation, right?
09:58 pranithk ndevos: yes.
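
The pitfall being described, sketched in plain C with made-up names: msg_log stands in for an allocating logger such as gf_msg, and msg_log_nomem for the allocation-free variant that gf_log offered. Logging an allocation failure through a logger that itself allocates fails exactly when it is needed:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Allocating logger (stand-in for gf_msg): under memory pressure
     * the strdup() fails and the ENOMEM message is silently lost. */
    static void msg_log(const char *text)
    {
        char *buf = strdup(text);
        if (!buf)
            return;
        fputs(buf, stderr);
        free(buf);
    }

    /* Allocation-free variant (in the spirit of gf_log's _nomem_
     * flavour): safe to call from an out-of-memory path. */
    static void msg_log_nomem(const char *where)
    {
        fputs("out of memory in ", stderr);
        fputs(where, stderr);
        fputc('\n', stderr);
    }

    static void *xalloc(size_t n)
    {
        void *p = calloc(1, n);
        if (!p)
            msg_log_nomem("xalloc"); /* not msg_log(): that allocates again */
        return p;
    }
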
09:58 pranithk ;qa
09:59 kshlm ndevos, it was my mistake. I'd created /var/gluster/hooks/... instead of /var/lib/glusterd/hooks/...
09:59 ndevos :qa!
09:59 ndevos kshlm: okay, so the workaround works around it?
10:00 kshlm It does. Thanks. ndevos++
10:00 glusterbot kshlm: ndevos's karma is now 164
10:00 ndevos kshlm: okay, good to know!
10:03 spalai left #gluster-dev
10:04 spalai joined #gluster-dev
10:08 ndevos itisravi++ for looking into SEEK_HOLE / SEEK_DATA :)
10:08 glusterbot ndevos: itisravi's karma is now 7
10:10 pranithk xavih: why do we do ec_resume even from ec_combine?
10:19 xavih pranithk: to continue fop execution as soon as the required number of answers have been received
10:21 pranithk xavih: It never leads to dispatch again? maybe in inodelk/entrylk?
10:22 pranithk xavih: oh it is based on fop->expected... not fop->minimum
10:22 pranithk xavih: which is always the number of winds we did?
10:23 pranithk xavih: seems to be equal to fop->winds always... based on my understanding at least :-)
10:25 soumya_ joined #gluster-dev
10:28 ndarshan joined #gluster-dev
10:35 anrao joined #gluster-dev
10:36 xavih pranithk: yes, I think that currently when ec_resume is called there won't be any pending wind
10:37 xavih pranithk: if ec_complete is always called after ec_combine, maybe we could change that (is that where you wanted to go ?)
10:39 pranithk xavih: yeah, basically I am thinking of removing ec_resume handling code from ec_combine...
10:40 pranithk xavih: I am not comfortable that fop->winds is non-zero at the time of re-dispatch...
10:41 xavih pranithk: I think it could be done safely. Anyway fop->winds should always be 0 before re-dispatching... when could it be different ?
10:41 ndevos hgowtham: thanks for the email, I'm not sure why it would fail, but I'll try to find out after lunch
10:42 hgowtham ndevos, thank you :)
10:42 pranithk xavih: If ec_resume is called from ec_combine then ec_complete is yet to be called, thus fop->winds is yet to be decremented.
10:43 pranithk xavih: I checked code, nothing bad is happening. Just that I have this feeling that this may lead to some problems :-)
10:43 sankarsh` joined #gluster-dev
10:43 xavih pranithk: oh, I see. Yes, I think we can do all handling in ec_complete
10:43 overclk joined #gluster-dev
10:43 hgowtham joined #gluster-dev
10:43 deepakcs joined #gluster-dev
10:43 pranithk xavih: Cool
10:43 xavih pranithk: yes, if it can be done safely only in one place, I think it's better
10:45 pranithk xavih: Cool, I will change that.
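
A schematic of the accounting just agreed on; the field names (winds, expected) come from the conversation, but the struct and logic are a simplified illustration, not the actual ec translator code:

    /* Simplified illustration of the ec fop bookkeeping discussed above. */
    struct ec_fop {
        int winds;    /* sub-volume callbacks still outstanding     */
        int expected; /* answers required before the fop may resume */
        int answers;  /* answers received so far                    */
    };

    static void ec_resume_sketch(struct ec_fop *fop); /* continues the fop */

    /* Resuming only from the "complete" step, after the wind counter
     * reaches zero, guarantees that fop->winds is 0 whenever the fop is
     * re-dispatched, which is the property pranithk is after. */
    static void ec_complete_sketch(struct ec_fop *fop)
    {
        fop->answers++;
        if (--fop->winds == 0 && fop->answers >= fop->expected)
            ec_resume_sketch(fop);
    }
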
10:47 [o__o] joined #gluster-dev
10:47 pranithk xavih: I am picking up some good programming skills, thanks to you!
10:48 pranithk xavih: What other programming languages do you know?
10:49 hchiramm joined #gluster-dev
10:51 pppp joined #gluster-dev
10:51 anrao joined #gluster-dev
10:54 ashiq anoopcs++
10:54 glusterbot ashiq: anoopcs's karma is now 11
11:02 spandit joined #gluster-dev
11:02 rjoseph joined #gluster-dev
11:03 poornimag joined #gluster-dev
11:08 anrao joined #gluster-dev
11:09 xavih pranithk: I've used many languages over time, but I use C for most of what I do
11:15 anrao joined #gluster-dev
11:15 atalur joined #gluster-dev
11:16 nbalacha joined #gluster-dev
11:19 soumya_ joined #gluster-dev
11:21 pranithk xavih: Hmm... how many years have you been coding in C? Just wondering how many years it would take to get as good :-)
11:22 xavih pranithk: haha, (too) many, but I still have things to learn... :P
11:23 pranithk xavih: You are afraid it will tell your age? :-P
11:23 owlbot joined #gluster-dev
11:24 rjoseph joined #gluster-dev
11:24 _Bryan_ joined #gluster-dev
11:24 xavih pranithk: no, I'm 40. And I started coding quite young (maybe 12 or 13)
11:25 pranithk xavih: Really? :-O I thought you were in your late twenties
11:26 pranithk xavih: I will shut up for some time and let it process...
11:29 pranithk xavih: Did you ever have RSI problems? If not, do you take extra care? I have been coding for only around 10 years and I had problems...
11:32 Manikandan ashiq, thanks for the script!
11:32 Manikandan ashiq++
11:32 glusterbot Manikandan: ashiq's karma is now 4
11:34 xavih pranithk: fortunately I haven't suffered it, but I must admit that I don't do anything special (at least consciously). When I remember, I try to sit properly and relax my hands
11:35 anekkunt joined #gluster-dev
11:35 Bhaskarakiran joined #gluster-dev
11:39 anrao joined #gluster-dev
11:40 pranithk xavih: good. I am leaving for home. Ttyl. Will push the patch today
11:46 _Bryan_ joined #gluster-dev
11:47 josferna joined #gluster-dev
11:58 hchiramm schandra++ thanks
11:58 glusterbot hchiramm: schandra's karma is now 10
12:01 * ndevos is prepared for summer, he now has a 4G modem in his laptop and it obviously works o/
12:03 kkeithley and that's better than, e.g., tethering to your phone's w/ wifi?
12:03 csim he can do bonding
12:04 kkeithley s/phone's/phone/
12:06 kkeithley what's the tariff? If I do that here I have to pay another $35/month for another line.
12:08 * csim pays 25€ for 5Gb of roaming across europe, unlimited text messages, and unlimited calls from france to france
12:08 csim (and that's not the cheapest offer, since there are offers around 20€ for more data in more places)
12:17 soumya joined #gluster-dev
12:20 poornimag joined #gluster-dev
12:22 rgustafs joined #gluster-dev
12:24 kkeithley 25€ for only 5Gb, srsly? Not 5GB?
12:25 itisravi_ joined #gluster-dev
12:26 csim mhh, maybe I got it wrong
12:26 csim GB
12:26 kkeithley that's better ;-)
12:26 csim ( in french, we have GB and GO, so I likely used the wrong one )
12:27 kkeithley ah, okay.  Here Gb = bits, GB = bytes
12:30 spandit joined #gluster-dev
12:30 gsaadi joined #gluster-dev
12:32 itisravi joined #gluster-dev
12:34 hchiramm kkeithley++ thanks
12:34 glusterbot hchiramm: kkeithley's karma is now 74
12:34 hchiramm packages are available at http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.2/
12:34 kkeithley hchiramm++
12:34 glusterbot kkeithley: hchiramm's karma is now 41
12:37 ndevos kkeithley: wifi through my phone drains battery, it hardly lasts for a day when I do that
12:37 ndevos and, both are free, more or less, the SIM in my laptop is part of the broadband package I have, and my phone is from work
12:38 kkeithley free is good. ;-)
12:38 ndevos yes, I think so too... well I paid for the modem in my laptop, so there was some cost
12:39 anrao joined #gluster-dev
12:40 hchiramm anrao, can u respond to that thread ?
12:40 anrao hchiramm: okay
12:41 hchiramm please do it asap
12:41 hchiramm anrao, have u addressed the changelog comments given by overclk ?
12:43 anrao hchiramm: yes working on it
13:09 hagarth joined #gluster-dev
13:09 hagarth o/
13:10 kkeithley live, from Red Hat Summit?
13:11 hagarth kkeithley: not yet there, live from Westford :)
13:14 kkeithley I wouldn't be in any hurry to go downtown. Traffic going in was pretty bad.
13:16 csim traffic of people going to the summit ?
13:21 kkeithley no, just traffic of people going to work. Road work slowing things down too
13:22 shyam joined #gluster-dev
13:27 foster joined #gluster-dev
13:40 krink joined #gluster-dev
13:52 sbonazzo joined #gluster-dev
13:53 sbonazzo Hi, looks like http://download.gluster.org/pub/gluster/glusterfs/LATEST doesn't exist anymore. Is this a temporary issue or a planned change?
13:55 kkeithley sbonazzo: fixed
13:55 sbonazzo kkeithley: thanks!
13:57 csim mhh, the 2nd issue with LATEST
13:58 csim ( this time I broke nothing /o\ )
13:59 kkeithley I didn't break it either, but "fix the problem, not the blame"
13:59 csim yeah, this is something that scream "automation"
13:59 csim as long as we employ more humans than cyborgs, this is bound to happen
14:00 kkeithley indeed
14:14 hchiramm_home joined #gluster-dev
14:26 soumya joined #gluster-dev
14:33 spalai left #gluster-dev
14:47 kdhananjay joined #gluster-dev
14:48 pk1 joined #gluster-dev
14:49 pk1 xavih: One question about not assigning fop->answer=cbk when cbk->op_errno is ENOTCONN. Why is that?
14:49 pk1 xavih: My patch is working for other errnos fine except ENOTCONN...
14:51 pranithk xavih: ^^
14:54 pppp joined #gluster-dev
14:56 xavih pranithk: fop->answer is not assigned because any other answer with errno != ENOTCONN is better, even if there are fewer of them (given that the minimum requirement is met)
14:57 xavih pranithk: at least that was the initial idea. What problem are you having ?
14:57 nbalacha joined #gluster-dev
14:59 pranithk xavih: fop->expected is 1. It gives ENOTCONN, but cbk is not assigned, so fop_set_error(EIO) is happening...
15:00 pranithk xavih: I wanted to not depend on fop->answer and get the list_entry(cbk...) but just wanted to ask what is special about ENOTCONN.
15:00 pranithk xavih: But I didn't understand your answer :-(
15:01 pranithk xavih: if fop->answer is not set, we are always sending EIO right?
15:01 pranithk xavih: At least in the fops I saw :-)
15:02 xavih pranithk: yes, maybe we could remove the check for ENOTCONN, and let this error be returned instead of EIO
15:02 overclk joined #gluster-dev
15:03 pranithk xavih: good :-)
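
The selection rule under discussion, sketched with hypothetical types: an answer with any errno other than ENOTCONN is preferred as fop->answer even if fewer bricks agree on it, and removing the special case (as just agreed) would let an all-ENOTCONN fop return ENOTCONN to the client instead of the generic EIO:

    #include <errno.h>
    #include <stddef.h>

    struct ec_cbk { int op_errno; int count; /* bricks agreeing */ };

    /* Illustrative only: choose which combined answer becomes fop->answer. */
    static struct ec_cbk *pick_answer(struct ec_cbk *cur, struct ec_cbk *in)
    {
        if (in->op_errno == ENOTCONN)  /* the check discussed for removal:  */
            return cur;                /* ENOTCONN answers are never chosen */
        if (cur == NULL || cur->op_errno == ENOTCONN || in->count > cur->count)
            return in;                 /* any other answer is "better"      */
        return cur;
    }
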
15:04 overclk shyam, needed some info about logging patches (esp. http://review.gluster.org/#/c/10297/)
15:05 overclk shyam, for components that have sub-components (such as bitrot, i.e. stub/ and bitd/ under xlator/features/bitrot/src)
15:06 kanagaraj joined #gluster-dev
15:06 kanagaraj joined #gluster-dev
15:06 overclk shyam, it's good to have one -messages.h for each sub-{dir,component}.
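
A hypothetical skeleton of what overclk is proposing; the file name, macro names, and ID numbers are invented for illustration, and only the one-messages.h-per-subcomponent idea comes from the discussion:

    /* sketch: xlators/features/bit-rot/src/bitd/bit-rot-bitd-messages.h */
    #ifndef _BIT_ROT_BITD_MESSAGES_H_
    #define _BIT_ROT_BITD_MESSAGES_H_

    #define BRB_MSG_BASE 1000 /* assumed message-ID segment for bitd/ */

    /*!
     * @messageid BRB_MSG_BASE + 1
     * @diagnosis signing of an object failed
     * @recommendedaction inspect the brick log for the underlying error
     */
    #define BRB_MSG_SIGN_FAILED (BRB_MSG_BASE + 1)

    #endif /* _BIT_ROT_BITD_MESSAGES_H_ */
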
15:10 krink joined #gluster-dev
15:30 kkeithley hchiramm: ping, still around?  Please add Version:=3.7.2 and Target Milestone:=3.7.3 to bugzilla. Thanks
15:32 kanagaraj joined #gluster-dev
15:40 nbalacha joined #gluster-dev
16:01 nbalacha joined #gluster-dev
16:01 hchiramm_home kkeithley, sure. will open a bz request :)
16:10 shubhendu joined #gluster-dev
16:11 kkeithley hchiramm_home: open a request? I thought you had privs to do it yourself?
16:11 hchiramm_home kkeithley, no :)
16:12 kkeithley oh, sorry. I didn't realize.
16:12 kkeithley Does Rejy?
16:12 hchiramm_home I dont know .
16:12 hchiramm_home may be he has for downstream
16:16 pranithk xavih: Sent the updated patch :-)
16:16 pranithk xavih: Let me know how you like it...
16:17 xavih pranithk: I'll do. Thanks :)
16:18 krink joined #gluster-dev
16:19 pranithk xavih: Just update me here... I will be reading quora till then :-)
16:19 pranithk xavih: In case today is not possible, let me know then as well :-)
16:30 kkeithley hchiramm_home: cc me on the request. Next time I won't bother you and will just submit the req. myself.
16:39 overclk hchiramm, I would need some clarification on http://review.gluster.org/#/c/10297/
17:06 firemanxbr joined #gluster-dev
17:10 xavih pranithk_afk: just reviewed the patch. I think it's basically ok, but I would prefer to place some code in a different place. See the comments.
17:25 anekkunt joined #gluster-dev
17:31 shubhendu joined #gluster-dev
18:27 shyam joined #gluster-dev
18:33 Gaurav__ joined #gluster-dev
19:06 kkeithley jenkins----   rackspace-regression-2GB-triggered? Anyone know what's going on? Only one is running. Lots of slaves are off-line. patches submitted over six hours ago haven't reported a regression run — successful or otherwise.
19:06 glusterbot kkeithley: jenkins--'s karma is now -1
19:08 kkeithley launching slave agents by hand. several are getting java run-time errors
19:13 kkeithley kicking them doesn't seem to have helped
19:14 jobewan joined #gluster-dev
19:18 hagarth kkeithley: kicking as in doing reboot-vm?
19:18 ndevos kkeithley: when the slaves have issues, I normally reboot them and cross my fingers that they come back
19:19 kkeithley I don't have access. root password? Or ssh pub key?
19:20 kkeithley kicking as in launching the slave agent through the jenkins UI
19:20 kkeithley hagarth: ^^^
19:20 hagarth kkeithley: there's a reboot-vm job in jenkins. All registered users should have access to it.
19:22 kkeithley haven't used that before, looking....
19:22 ndevos kkeithley: http://build.gluster.org/job/reboot-vm/
19:23 ndevos kkeithley: also, the community 3.7.2 rpms on CentOS-7 fail to install :-/
19:23 ndevos that hook-scripts-glusterfind error...
19:23 kkeithley wonderful
19:24 kkeithley ;-)
19:24 ndevos well, unless you create "/var/lib/glusterd/hooks/1/delete/post/" before installing the rpms...
19:24 kkeithley but they install on RHEL7?
19:25 ndevos I doubt it, unless you create the dir
19:26 ndevos my jenkins setup for nfs-ganesha runs on centos-7, and I use the upstream latest 3.7.x version for compiling ganesha
19:26 hagarth so what's the solution here? do 3.7.3 later this week? ;)
19:26 ndevos now people are wondering why the gluster build fails... seems it does not even start the build, installation failure
19:27 kkeithley ???
19:27 ndevos we need an agreement on http://review.gluster.org/11298 , it may break something for glusterd if I remember correctly
19:27 kkeithley hagarth: it's the spec(.in) file. I can finesse the spec in fedora dist-git until upstream catches up.
19:28 ndevos yeah, that works, kkeithley
19:29 hagarth kkeithley: ok
19:29 ndevos hagarth: oh, and guess who merged the change that introduced the issue? without checking with the package maintainers? :P
19:29 hagarth ndevos: of course me
19:29 kkeithley well, I can add the hook directory. We still need closure on the .pyc and .pyo files
19:29 ndevos you have to stand in the corner for a while now
19:29 hagarth ndevos: I already am
19:29 ndevos hagarth: lol
19:32 kkeithley s/add the hook directory/unghost the hook directory/
19:33 kkeithley we still need a real fix that addresses the .pyc and .pyo files.
19:35 csim ie, closure on the .pyc/.pyo ?
19:35 kkeithley yes
19:36 csim like, removing them once everything is finished ?
19:38 kkeithley no, that's kinda what we have now. Fedora packaging says we must ship the .pyc and .pyo files. But if we leave them in .../hooks/1/delete/post then they run three times. They need to be installed elsewhere and have a scriptlet in .../hooks/1/delete/post that invokes them.
19:38 ndevos kkeithley: just %exclude them in fedora for now?
19:38 ndevos oh, you might have that already
19:38 kkeithley they already are
19:39 shyam joined #gluster-dev
19:39 kkeithley but didn't catch the %ghost
19:39 ndevos I think the script should probably be placed in /usr/libexec/... and a symlink should be put in the hook directory
19:39 kkeithley sounds like a good solution to me
19:43 ndevos you want to do that, shall I do that tomorrow, or do we force that on aravinda?
19:43 ndevos or, get hagarth to fix what he broke? :D
19:44 kkeithley do what, force the proper fix through, or just finesse it in fedora dist-git for now?
19:45 ndevos oh, either, I was going for the proper fix, but that may take another day at least
19:47 hagarth ndevos: pass it on to aravinda
19:47 ndevos hagarth: we need an ant suit, it would be awesome if you do your presentation(s) and a huge ant is running over the stage
19:48 hagarth ndevos: live ants running over me?
19:48 kkeithley I've got scratch builds running with the work-around. But let's get the real fix done. +1 to having aravinda get it fixed
19:48 ndevos hagarth: I was more thinking of a suit for a person to dress up like an ant, not sure if a suit with ants has the same effect
19:49 kkeithley I'll have hchiramm put them on d.g.o. first thing tomorrow
19:49 hagarth ndevos: talk to spot about that :)
19:49 ndevos spot: you want to dress up like an ant and run/crawl over the stage while hagarth is presenting?
19:50 ndevos ... unfortunately he's not online :-/
19:51 csim but he is on twitter
19:51 csim (I also have his mobile phone number)
19:51 ndevos tigert_: do you know if an ant costume would be an option? that would be awesome!
19:52 ndevos it would be a nice introduction for the new community manager, *really* becoming an ant
19:53 hagarth obnox: do you know if there is a testbed where we can continue to investigate the samba - gluster performance problem?
19:53 ndevos hagarth: I'm at 16245 steps today, are you going to reach that?
19:53 csim http://www.alibaba.com/product-detail/Good-v​ision-Hot-Sale-high-quality_1897432707.html seems not that expensive
19:53 ndevos csim: LOL
19:54 hagarth ndevos: maybe over this week I can reach there
19:54 csim http://www.alibaba.com/product-detail/HI-CE-​Lifelike-brown-ant-costume_60253964476.html
19:55 ndevos csim: ah, that's a kid's size? well, some of the devs are not very big, that might fit ;)
19:56 ndevos csim: the 2nd one is much cooler!
19:57 csim "HI CE Lifelike brown ant costume fit all adults ant mascot costume "
19:57 ndevos *ant* it is adult size
19:58 ndevos hey, it's also RoHS compliant, and I always thought that was for electronics
19:58 csim seems that just means they didn't use too much dangerous stuff
19:59 * ndevos gets reminded by his oven that dinner is ready, I'll be back later, possibly
20:00 csim oh nice, a talking oven
20:00 csim technology is so advanced
20:13 tigert_ ndevos: :D
20:34 dlambrig joined #gluster-dev
20:48 obnox damn, missed hagarth :)
20:51 badone__ joined #gluster-dev
21:33 badone__ joined #gluster-dev
21:59 hagarth joined #gluster-dev
22:00 hagarth o/
22:03 hagarth ndevos: ping, can you make maintainers a closed list?
22:05 ndevos hagarth: I guess so, but I don't like closed lists, is there a good reason to close it?
22:06 hagarth ndevos: some maintainers feel that similar lists are also closed.
22:06 ndevos hagarth: I feel that those similar lists are not community friendly
22:07 hagarth ndevos: maybe start a poll on maintainers :)
22:07 * ndevos does not know why an OPEN source community would want to have closed lists, except for security related things
22:07 hagarth ndevos: I don't have a strong opinion here
22:08 ndevos hagarth: I'm happy the way it is, anyone can make the suggestion and give some reasons for the proposal
22:08 hagarth ndevos: I am going to be busy this week .. not sure if I will have the right bandwidth to follow up.. let's see.
22:09 ndevos hagarth, kkeithley: if you have a few spare minutes, you could review http://review.gluster.org/11021 for me
22:09 ndevos hagarth: give it as a suggestion to whoever asked you?
22:09 ndevos hagarth: I'm happy to send an email to the list, but it'll be biased :D
22:10 hagarth ndevos: go ahead .. after all everybody has a bias :)
22:11 ndevos hagarth: I'll think about it, and will likely send something tomorrow, *cough*, later today
22:13 hagarth ndevos: ok :)
22:14 * ndevos made a note to follow up on "Reasons for an open maintainers list"
22:17 ndevos hagarth: if you come across "Bart van den Heuvel", one of our cloud architects, please pass him some Gluster goodies for me ;-)
22:17 hagarth ndevos: will reserve some for him
22:18 hagarth ndevos: curious, what did he do?
22:18 ndevos hagarth: oh, cool, I'll tell him to find your stand
22:18 ndevos hagarth: he sometimes visits the Amsterdam office, and we have lunch together, he can be the postman too!
22:19 hagarth ndevos: ok :)
22:22 ndevos hagarth: and he's a nice guy too, we worked a little together on Red Hat Storage deployments at <censored>
22:22 hagarth ndevos: ok
22:22 ndevos hagarth: he likes Gluster, but is doing more OpenStack at the moment...
22:24 * ndevos leaves the internetz for now, have a good time over there!
22:27 hagarth ndevos: thanks, good night!
23:12 dlambrig left #gluster-dev
