IRC log for #gluster-dev, 2015-05-08

All times shown according to UTC.

Time Nick Message
00:20 rjoseph|afk joined #gluster-dev
01:12 kdhananjay joined #gluster-dev
01:23 soumya joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:57 kdhananjay joined #gluster-dev
02:01 shyam1 joined #gluster-dev
02:05 Supermathie joined #gluster-dev
02:54 soumya joined #gluster-dev
02:56 rjoseph|afk joined #gluster-dev
03:03 shubhendu joined #gluster-dev
03:06 kdhananjay joined #gluster-dev
03:13 overclk joined #gluster-dev
03:28 shubhendu joined #gluster-dev
03:29 atinmu joined #gluster-dev
03:35 itisravi joined #gluster-dev
03:46 sakshi joined #gluster-dev
03:47 nishanth joined #gluster-dev
04:00 kanagaraj joined #gluster-dev
04:09 nbalacha joined #gluster-dev
04:14 dlambrig joined #gluster-dev
04:15 ashishpandey joined #gluster-dev
04:17 vimal joined #gluster-dev
04:31 dlambrig left #gluster-dev
04:37 poornimag joined #gluster-dev
04:43 deepakcs joined #gluster-dev
04:45 Joe_f joined #gluster-dev
04:47 gem joined #gluster-dev
04:48 ppai joined #gluster-dev
04:54 shubhendu joined #gluster-dev
04:56 soumya joined #gluster-dev
05:07 Apeksha joined #gluster-dev
05:09 shubhendu_ joined #gluster-dev
05:11 ndarshan joined #gluster-dev
05:14 lalatenduM joined #gluster-dev
05:15 raghu joined #gluster-dev
05:16 kshlm joined #gluster-dev
05:18 spandit joined #gluster-dev
05:19 Debloper joined #gluster-dev
05:19 Manikandan joined #gluster-dev
05:20 gem_ joined #gluster-dev
05:21 ashiq joined #gluster-dev
05:31 hagarth joined #gluster-dev
05:35 anekkunt joined #gluster-dev
05:36 rafi joined #gluster-dev
05:42 schandra joined #gluster-dev
05:46 pppp joined #gluster-dev
05:47 Gaurav_ joined #gluster-dev
05:49 kdhananjay joined #gluster-dev
05:54 Joe_f hagarth: Hi hagarth, :) could you please merge http://review.gluster.org/#/c/10615/
05:54 jiffin joined #gluster-dev
05:54 vimal joined #gluster-dev
05:54 Joe_f hagarth : the regression is failing for a spurious .t file
05:58 atalur joined #gluster-dev
05:59 schandra joined #gluster-dev
05:59 hchiramm schandra, ping
05:59 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
05:59 kshlm atinmu, could you review https://review.gluster.org/10076 , you've previously reviewed it
06:04 hagarth Joe_f: will check
06:04 Joe_f hagarth: Thanks
06:07 tigert morning
06:16 anrao joined #gluster-dev
06:20 pppp ndevos: ping
06:20 glusterbot pppp: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:20 pppp ndevos: hi
06:31 hagarth joined #gluster-dev
06:33 Debloper joined #gluster-dev
06:38 atinmu kshlm, kp has already looked into it :) if he is ok u can go ahead and merge
06:39 atinmu kshlm, can u pls review http://review.gluster.org/10619 ?
06:39 ashiq joined #gluster-dev
06:39 kotreshhr joined #gluster-dev
06:44 krishnan_p joined #gluster-dev
06:48 kotreshhr hagarth: Could you merge http://review.gluster.org/#/c/10628/1 ?
06:49 hagarth kotreshhr: I would like somebody from the dht team to ack that change as well.
06:49 nkhare joined #gluster-dev
06:49 kotreshhr Sure. I will check with du
06:57 soumya joined #gluster-dev
07:03 hagarth Gaurav_: is there any difference between patchset 6 and 7 of http://review.gluster.org/#/c/10521/ ?
07:04 Gaurav_ hagrth, no
07:04 Gaurav_ hagrth, i just did minor change in my patch which is dependent of http://review.gluster.org/#/c/10521/ patch
07:05 hagarth Gaurav_: ok
07:05 Gaurav_ http://review.gluster.org/#/c/10521/6..7
07:11 hagarth ndevos: ping, around?
07:12 pranithk joined #gluster-dev
07:14 anekkunt kshlm,Kp , Can you merge this patch  http://review.gluster.org/#/c/10622/
07:19 krishnan_p anekkunt, could you ping me once the regression for linux and netbsd are done?
07:24 ndevos pong pppp
07:24 ndevos hi hagarth
07:25 hagarth ndevos: hi, with the rpm restructuring .. would our upgrades from 3.6 to 3.7 be seamless?
07:25 hagarth ndevos: I like the idea very much!
07:25 ndevos hagarth: yes, should be, we're splitting up more packages, not removing any
07:26 rafi joined #gluster-dev
07:26 hagarth ndevos: right and the dependencies would take care of pulling in the necessary new packages..
07:26 ndevos hagarth: we need someting like that, if you like it or not :)
07:26 ndevos hagarth: indeed, all dependencies are set correctly
07:26 hagarth ndevos: cool, I will merge this shortly.
07:27 * hchiramm is reviewing the same patch
07:27 pppp ndevos: could you pls check the repodata of glusterfs3.7 nightly?
07:27 hagarth hchiramm: will await your review
07:27 hchiramm thanks hagarth
07:28 ndevos hagarth: I think kkeithley approved of it too, he just mentioned some small changes that I posted in v2
07:28 hagarth ndevos: ok
07:29 rafi1 joined #gluster-dev
07:30 ndevos pppp: what kind of problem do you see?
07:32 hchiramm ndevos, regarding the same patch , why  glusterfs-server need to pull in glusterfs-fuse ?
07:32 ndevos hchiramm: there are some server bits that do a mount, or expect that a mount can be done
07:33 ndevos hchiramm: also, the "glusterfs" client executable is used by the nfs-server, self-heal daemon and others
07:36 hchiramm ndevos, hmmm.. looks like  a complex dependency
07:36 ndevos pppp: there seem to be some permission errors while syncing... I'll have to check that out a little later, ok?
07:37 ndevos hchiramm: yeah... its rather ugly
07:37 hchiramm indeed
07:37 ndevos hchiramm: and, add to that, that the "glusterfs" executable is only a symlink to the "glusterfsd" binary
07:37 hchiramm ndevos, if that is the case ( I mean server depends on client bit and vice versa)  cant we do some type of merging between these 2 ?
07:37 hchiramm ndevos, yep
07:38 pppp ndevos: hmm, QE is waiting for the latest available build to continue their testing. So if you can look into it at the earliest possible, that would be really helpful
07:38 ndevos hchiramm: no, glusterfs-fuse or glusterfs-api should *not* depend on -server
07:38 * hchiramm why we dont have package called glusterfs-client ? :)
07:39 ndevos pppp: I'll try to, just need to figure out where the issue comes from
07:39 ndevos hchiramm: because it would only contain the "glusterfs" executable?
07:39 pppp ndevos: ok, I understand and I appreciate your help here!
07:40 pppp ndevos: do you have any temporary fix for the time being ?
07:40 ndevos pppp: you could try https://copr.fedoraproject.org/coprs/devos/glusterfs-3.7/ instead, there are other .repo files you can download
07:40 pppp ndevos: as otherwise we will have to manually download each and every package one by one
07:40 ndevos pppp: thats where the packages get build, they only get sync'd to download.gluster.org later
07:40 pppp ndevos: ok, lemme try that
07:43 ndevos hchiramm: if there is any value in a glusterfs-client package, we can introduce that, but I do not think it really makes things easier
07:43 pppp ndevos++ thanks, that works!
07:43 glusterbot pppp: ndevos's karma is now 118
07:43 ndevos pppp: okay, nice!
07:44 soumya joined #gluster-dev
07:55 hchiramm ndevos, hmm.. afict, the client bits has to be packaged in one and can have some meaningful name like glusterfs-client ..
07:56 hchiramm glusterfs-fuse does not convey that impression
07:56 hchiramm atleast for me :)
07:57 hchiramm as we are considering a revamp there, I thouht this is the best time to introduce some changes like this
07:58 hchiramm also we have lots of packages now.
07:59 hchiramm I think we can shrink it some more. :)
08:00 ndevos hchiramm: glusterfs-api is a client too, but we dont want the fuse bits and api in the same package
08:01 ndevos pppp: I think the repo on download.gluster.org is now correct, could you verify that?
08:02 hchiramm also we need "glusterfs" and "glusterfs-libs" ?
08:02 hagarth joined #gluster-dev
08:02 pranithk joined #gluster-dev
08:05 pppp ndevos: that's good news! lemme give it a try now
08:08 hchiramm hmmm. why we need seperate glusterfs-cli :)
08:09 ndevos hchiramm: there are some people that need the "gluster" executable on non-Gluster servers
08:09 ndevos hchiramm: some management interfaces call "gluster --remote-host=... ..." to get status details and all
08:10 pppp ndevos: I still see the same error even after clearing the cache :(
08:17 hchiramm ndevos, if we dont have a plan to rename glusterfs-fuse and no plan to shrink number of packages , the change lgtm :)
08:18 ndevos pppp: hmm, okay, thanks for trying... I wonder what else is wrong then :-/
08:19 ndevos hchiramm: renaming glusterfs-fuse to glusterfs-client will confuse users, and its not really a correct name either :_
08:19 ndevos :)
08:19 pppp ndevos: no problem, for me it looks like a problem with the repodata
08:19 ndevos hchiramm: I also do not know how we can reduce the number of packages, I would love to, but I do not think thats easily possible
08:23 aravindavk joined #gluster-dev
08:24 hchiramm ndevos, I will record this chat in gerrit for future reference and +1 from me
08:25 ndevos hchiramm: ok, thanks!
08:25 hchiramm ndevos++
08:25 glusterbot hchiramm: ndevos's karma is now 119
08:25 ndevos pppp: hmm, I'll delete all the epel-6-x86_64 packages and resync them...
08:26 ndevos hchiramm++
08:26 glusterbot ndevos: hchiramm's karma is now 34
08:26 pppp ndevos: ok
08:28 pppp ndevos: until this is resolved, QE will use https://copr.fedoraproject.org/coprs/devos/glusterfs-3.7/ to get the 3.7 nightly builds
08:28 pppp ndevos: hope, that should be ok?
08:29 ndevos pppp: sure, thats fine, we just can not count the download stats for those - but I do not think we care *that* much for nightly builds
08:29 tigert http://glusternew-tigert.rhcloud.com/
08:29 tigert some new content there
08:29 tigert getting there IMHO
08:29 tigert ndevos: what would be a good link for that release announcement? it'll be on the mailing list I guess?
08:30 hchiramm tigert++ thanks
08:30 glusterbot hchiramm: tigert's karma is now 7
08:30 hchiramm tigert, there will be a mail and blog
08:30 ndevos tigert: there will be an email and there also should be a blog post
08:31 pppp ndevos: ok, thanks for the confirmation!
08:31 hchiramm tigert, "Planet gluster news"
08:31 pppp ndevos: pls ping me once the re-sync is done so that I can re-test it to see if it works
08:31 hchiramm I think there are irrelevant entries
08:32 ndevos pppp: yeah, will do, thanks!
08:32 pppp ndevos: ok, thanks!
08:32 hchiramm and duplicate entries
08:32 hchiramm for ex:
08:32 ndevos hchiramm: but, but, but 3.5.4beta1 *is* released!
08:32 ndevos :P
08:32 hchiramm RDO KIlo Set up Two KVMs Nodes (Controller+Compute) ML2&OVS&VXLAN on CentOS 7.1
08:32 hchiramm RDO KIlo Set up Two KVMs Nodes (Controller+Compute) ML2&OVS&VLAN on CentOS 7.1
08:32 hchiramm Running prometheus metrics
08:32 hchiramm ndevos, yes :)
08:34 ndevos hchiramm: you want to send a backport to 3.7 for the packaging change, or shall I do that?
08:34 hchiramm ndevos, go ahead..
08:35 ndevos hchiramm: it was an offer to ++ your patch stats!
08:36 hchiramm I think I can survive with the stats :)
08:37 ndevos ok, ok, I'll do it, you just review it later on then!
08:38 hchiramm sure
08:39 hchiramm Joe_f, ping .. any luck with that ?
08:41 ndevos hchiramm: hmm, your glupy packaging changes are not in release-3.7 yet?
08:41 hchiramm ndevos, I havent backported
08:41 ndevos hchiramm: I think we want those in 3.7, or not?
08:41 hchiramm yeah, better to have it ..
08:42 hchiramm ndevos, give me some time, I will backport all those changes
08:42 ndevos pppp: could you try again?
08:42 hchiramm I think I need to pick jeff's patch as well
08:42 hchiramm will be starting with backports soon.
08:43 ndevos hchiramm: okay, please let me know when you have done those, I'll review/merge and can then send my backport as well
08:44 pppp ndevos: ok, checking
08:45 sakshi joined #gluster-dev
08:47 * ndevos steps out for a bit, back in 15-20 minutes
08:51 pppp ndevos: still no luck! same error
08:51 pranithk1 joined #gluster-dev
08:53 soumya joined #gluster-dev
08:54 spalai joined #gluster-dev
08:56 kdhananjay joined #gluster-dev
08:56 anrao joined #gluster-dev
09:01 pranithk joined #gluster-dev
09:06 poornimag joined #gluster-dev
09:08 ws2k3 joined #gluster-dev
09:08 ndevos pppp: hmm, okay, strange... I'll setup a rhel6 for testing too now
09:10 pppp ndevos: hmm,  do you think it can be due to the "repodata" of the parent dir?
09:12 hchiramm ndevos, sure
09:12 ndevos pppp: I do not think so, but I'll try to figure out whats up
09:14 pppp ndevos: yea, even I don't think so. But can't think of any other reason as you might have already recreated the repomd.xml in the epel6 dir
09:15 pppp ndevos: I'll launch a new rhel6 and re-test it
09:16 pppp ndevos: wait, I got the fix!
09:19 pppp ndevos: changing the baseurl to https:<> resolved the issue!
09:20 pppp ndevos: but the question is what has changed in between?
09:26 ndevos pppp: hmm, I did not change anything after the re-sync earlier
09:27 ndevos pppp: the repomd.xml gets generated on the Fedora COPR side, it and the packages really only get copied to download.gluster.org
09:29 pppp ndevos: ooh, then what could be the reason?
09:29 pppp ndevos: the moment I change it back to http, it throws out the same error
09:30 ndevos pppp: I have no idea, maybe there is a caching webserver on download.gluster.org?
09:30 ndevos hagarth, JustinClift: any idea why pppp gets issues with an http repo on download.gluster.org, but not when https is used?
09:31 pppp ndevos: possible, something is providing the old repodata when connected via http
09:32 tigert hm
09:32 tigert hchiramm: those are not duplicates
09:32 tigert its two different urls with very similar title
09:32 ashiq joined #gluster-dev
09:33 tigert hchiramm: what we should really do is go through the blog list
09:33 tigert its in https://github.com/gluster/planet-gluster/blob/master/data/feeds.yml
09:34 tigert the planet supports feed avatar images
09:34 hgowtham joined #gluster-dev
09:34 tigert and also the feed list should be checked that it makes sense and doesnt miss anything
09:35 tigert hchiramm: I also added the home icon on the planet top bar so you can get back to gluster without typing
09:37 * tigert is looking into refreshing the background cloud image with something different to give a new look
09:42 pppp ndevos: I could narrow down the issue to the following:
09:43 pppp ndevos: pulling via http downloads an old sqlite file "1c332d812f490e768000b0ab4848a3b7591f3d58c4ed252ec3a6633514b40c4e-primary.sqlite.bz2" which causes the issue
09:43 ndevos pppp: thats strange, because for me http works...
09:43 pppp ndevos: whereas pulling over https pulls in the latest sqlite file "4310052cecfa719eb2478016f0a2aec6ded64562b6f0d73ad8cb2dfd91c55146-primary.sqlite"
09:44 pppp ndevos: really?
09:44 ndevos pppp: yes, installed a rhel6, added the .repo (with baseurl=http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/epel-6-$basearch) and did a "yum install glusterfs-fuse"
09:45 ndevos that is on a clean rhel-6.6 installation
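
[Note: a minimal sketch of the yum setup being tested above. Only the baseurl comes from the log; the file name, repo label and gpgcheck=0 are assumptions (nightly builds are typically unsigned).]

    # /etc/yum.repos.d/glusterfs-3.7-nightly.repo  -- hypothetical file name
    [glusterfs-3.7-nightly]
    name=GlusterFS 3.7 nightly builds
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/epel-6-$basearch
    enabled=1
    gpgcheck=0

    # then, as in the log:
    yum clean all
    yum install glusterfs-fuse
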
09:45 hgowtham joined #gluster-dev
09:46 pppp ndevos: aah, lemme also try it on a clean installation
09:48 ndevos pppp: maybe there is still something in your yum cache? have you tried "yum clean all" ?
09:53 pppp ndevos: ofcourse, I did that!
09:53 ndevos pppp: hehe, yes, I thought so :)
09:53 pppp ndevos: hehe! :)
09:54 shubhendu__ joined #gluster-dev
09:54 kotreshhr joined #gluster-dev
09:55 ndevos pppp: maybe there is a transparant proxy somewhere between you and download.gluster.org... https can not reasonably get cached, so that could explain it
09:57 pppp ndevos: yea, possible. testing it on a clean rhel6.6 should mostly explains that
09:58 pppp ndevos: even if it works on a fresh installation for me, i'll anyway have to find a solution for the existing systems as everyone over here in QE is facing the same!
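
[Note: a quick diagnostic sketch for the suspected stale-repodata-over-http problem above, not from the log itself; the exact path is derived from the baseurl ndevos quoted earlier (epel-6-x86_64) and is an assumption.]

    # if a transparent caching proxy is serving stale plain-http responses,
    # the metadata checksums will differ between http and https
    URL=download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/epel-6-x86_64/repodata/repomd.xml
    curl -s  "http://$URL"  | sha256sum
    curl -s "https://$URL" | sha256sum
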
10:01 Manikandan joined #gluster-dev
10:02 ndevos pppp: how strange! but at least that point to something on your side, and that is not something I would know how to fix
10:03 pppp ndevos: yea, it's strange! thanks for all your help and now lemme see if I can root cause it further
10:03 ndevos pppp: you're welcome, I hop you get it solved soon
10:03 ndevos +e
10:03 pppp ndevos: :)
10:06 ira joined #gluster-dev
10:07 kshlm joined #gluster-dev
10:07 ira joined #gluster-dev
10:08 atinmu joined #gluster-dev
10:08 shubhendu_ joined #gluster-dev
10:08 pranithk joined #gluster-dev
10:09 nishanth joined #gluster-dev
10:09 hagarth joined #gluster-dev
10:10 krishnan_p joined #gluster-dev
10:10 atalur joined #gluster-dev
10:11 poornimag joined #gluster-dev
10:14 Joe_f joined #gluster-dev
10:23 ndevos hagarth: I plan to move all bugs that have their fix in v3.7.0beta1 to ON_QA with fixed-in-version=glusterfs-3.7.0beta1 - any objections?
10:24 hagarth ndevos: a drink on me next week for doing that :)
10:24 ndevos and maybe others get moved to MODIFIED too, I'll have to see how big that mess is
10:25 hagarth ndevos: yeah
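
[Note: with the python-bugzilla CLI (its commands come up later in this log, around 10:58), that bulk move might look like the sketch below; the bug ids are placeholders and the --fixed_in option is an assumption about the tool.]

    bugzilla login
    bugzilla modify --status=ON_QA --fixed_in=glusterfs-3.7.0beta1 1200000 1200001
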
10:25 ndevos hagarth: deal! but you should add your flight times to the http://www.gluster.org/community/documentation/index.php/GlusterSummit2015 page
10:26 ndevos and, is anands coming too?
10:26 rafi joined #gluster-dev
10:27 hagarth ndevos: I doubt if anands will be there
10:27 ndevos hagarth: okay :-/
10:27 ndevos soumya, pranithk: any luck with your visa?
10:28 soumya ndevos, I just received it an half an hour back :)
10:28 ndevos soumya: wohoo, awesome!
10:28 hagarth ndevos: pranithk has received his too
10:28 ndevos hagarth: ah, great!
10:29 * ndevos does not know who else was waiting for a visa, but hopes all received them now
10:29 pranithk ndevos: yes all received
10:31 ndevos pranithk: very good - nice to see you next week
10:32 pranithk ndevos: yes, I need to discuss some of the modifications we need to do for moving bugs to POST/MODIFIED automatically...
10:32 pranithk ndevos: i.e. at the time of rfc.sh we should ask if we want to move the bug to POST
10:33 hagarth pranithk: good idea
10:33 ndevos pranithk: well, when patches get posted, the bug should move to POST - but moving to MODIFIED is more difficult to automate
10:33 pranithk ndevos: second one is if the last patch on the bug/feature is send we should give something in rfc.sh so that in commit description it adds some metadata and when that patch is merged, it removes that metadata from the commit description and move it to MODIFIED automatically.
10:34 pranithk ndevos: we should add something like final-commit:yes just like we have commit-id:, bug:
10:34 ndevos pranithk: hmm, what if there are multiple patches that do not have a dependency upon eachother?
10:35 pranithk ndevos: the user will say that it is last-patch. we will do all this only if the dev tells it that it is the last patch...
10:35 pranithk ndevos: if he doesn't we don't move it to MODIFIED
10:35 ndevos pranithk: right, that could work
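
[Note: a purely hypothetical illustration of the commit-message marker pranithk is proposing; "final-commit" does not exist in rfc.sh today, only the BUG: and Change-Id: fields are real.]

    example-xlator: fix something

    Change-Id: I0123456789abcdef0123456789abcdef01234567
    BUG: 1234567
    final-commit: yes      <-- proposed: move the bug to MODIFIED when this patch merges
    Signed-off-by: Some Developer <dev@example.com>
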
10:36 ndevos pranithk: I wonder if we have bugs that first get one patch sent (the "final" one), but need a follow-up one later
10:37 pranithk ndevos: it is already in modified state, how does it matter?
10:37 ndevos pranithk: in case the 2nd patch is under review, but not merged yet
10:37 pranithk ndevos: people can send a patch that is on ON_QA too, nobody can stop them :-D
10:38 nishanth joined #gluster-dev
10:38 nbalacha joined #gluster-dev
10:38 ndevos pranithk: I think it is worse to have a bug incorrectly in MODIFIED than have it incorrectly in NEW/ASSIGNED/POST
10:38 rafi joined #gluster-dev
10:39 hagarth pranithk: we can always add a check in Jenkins to NACK such patches.
10:39 ndevos pranithk: indeed, people can send patches for bugs that are MODIFIED/ON_QA, and I hope they correct the status when they post a patch
10:39 pranithk ndevos: fail that submission :-)
10:39 pranithk ndevos: just like hagarth mentioned :-)
10:40 ndevos pranithk: a 1st step would be to move a bug to POST (+ clear fixed-in-version) whenever a patch gets sent
10:40 hagarth ndevos: I would love to NACK bugs that are not assigned to a real person (bringing this up again ;))
10:41 pranithk ndevos: I am the worst offender of these kinds of problems, I don't feel I can get any better at my forgetfulness so trying to improve tools so that I won't forget :-D
10:41 ndevos hagarth: yes, I would like that too, but unfortunately that would require all people who post a patch to have a Bugzilla account (and emails should match)
10:42 hagarth ndevos: if you are logging a bug, wouldn't you need an account?
10:42 ndevos pranithk: we surely should improve the tools, but we need to have a reasonable approach that works for everyone and we can understand and explain
10:42 hagarth ndevos: at least, I would like the bug to be assigned to the maintainer/core reviewer of the component.
10:42 ndevos hagarth: not everyone logs a bug, think of the spelling fixes
10:42 hagarth i.e. for cases where somebody does not have a bz account but sends a patch
10:43 ndevos hagarth: yeah, that *should* be the case, but who maintains "spelling fixes"? should we check modified files against our MAINTAINERS file maybe?
10:43 hagarth ndevos: for such cases, we would have umbrella bugs  and would be hopefully assigned to some real person.
10:43 hagarth I am happy to be the default assignee for umbrella bugs ;)
10:44 ndevos hagarth: sure, and those "umbrella bugs" should have the Tracking keyword set, so that is easy to  check too
10:44 hagarth ndevos: right, maybe we can skip the assignee check for "umbrella bugs" also.
10:44 ndevos hagarth: yes, I think that makes sense
10:45 kshlm rpmbuild is failing on release-3.7, has anyone already observed it?
10:45 hagarth the first thing we need to do is to get smoke report back status
10:45 hagarth kshlm: yes, have noticed that. do you happen to know why?
10:45 kshlm Nope. I just tried to build rpms and it failed.
10:45 ndevos kshlm: again? we reverted a change yesterday that caused the build to break, it was something with bitrot and logging
10:46 kshlm Something to do with glupy.
10:46 kshlm I'm using the lates 3.7
10:46 atinmu ndevos, bitrot changes are already reverted
10:46 ndevos oh, thats different...
10:46 ndevos atinmu: yeah, I was just wondering if kshlm was on an old tree :)
10:46 hagarth kshlm: could be related to 40df2ed4d098d4cd2c6abbed23e497ac3e2e5804
10:46 kshlm 'File not found by glob: /vagrant/glusterfs/extras/LinuxRPM/rpmbuild/BUILDROOT/glusterfs-3.7.0beta1-0.107.git54f8ee4.el7.centos.x86_64/usr/lib/python2.7/site-packages/gluster/glupy.*'
10:47 ndevos hchiramm: you were looking into glupy packaging backports right? whats the status of that?
10:47 hagarth scratch that, this commit does not exist in  release-3.7
10:48 ndevos hagarth: I think hchiramm was going to backport it, maybe we need it?
10:48 kshlm so should it be backported?
10:48 hagarth ndevos: quite possible
10:49 ndevos kshlm, hagarth: yes, I think we need that to fix this issue
10:49 kshlm Looks like the spec file changes got backported, but the glupy changes haven't
10:49 hchiramm ndevos, yeah, back porting those changes
10:52 kshlm It is not a clean cherry-pick. There are some conflicting Makefile.am changes for glupy
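
[Note: a sketch of the backport flow being discussed; the commit id is the one hagarth quoted above, <commit-id> is a placeholder, and rfc.sh is the usual glusterfs patch-submission script.]

    # check whether a change has already landed on the release branch
    git branch -r --contains 40df2ed4d098d4cd2c6abbed23e497ac3e2e5804

    # backport it, recording the original commit id
    git checkout release-3.7
    git cherry-pick -x <commit-id>   # resolve the conflicting Makefile.am hunks by hand
    ./rfc.sh                         # post the backport to Gerrit for review
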
10:52 nbalacha Joe_f, ping regarding BZ 1215896
10:53 hagarth hchiramm: can you please pick up the smoke tests not reporting problem? looks like that is causing us enough heartburn.
10:53 hagarth hchiramm: offering it to you since you have copious spare time ;)
10:54 Joe_f nbalacha: yep
10:54 nbalacha Joe_f, would be good to have in 3.7 I think
10:54 Joe_f nbalacha: hmmm yes this can be fixed :)
10:54 nbalacha Joe_f, simple fix and would improve the logs. What do you think?
10:55 Joe_f nbalacha: on it
10:55 nbalacha Joe_f, cool. Thanks :)
10:56 hchiramm http://review.gluster.org/#/c/10617/ kshlm
10:56 hchiramm I think we need to backport Jeffs patch as well
10:56 hchiramm atinmu, ping
10:56 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:56 pranithk ndevos: so, I will need help in doing ASSIGNED->POST, POST->MODIFED
10:57 atinmu hchiramm, yes?
10:57 ndevos pranithk: sure, help how?
10:58 pranithk ndevos: need to know commands which change the status.
10:58 ndevos pranithk: bugzilla modify --status=MODIFIED 123456 23456
10:58 ndevos pranithk: install "python-bugzilla" and do a "bugzilla login" first
10:59 pranithk ndevos: gah! login... lets see..
10:59 pranithk ndevos: Let me first do the ASSIGNED->POST
11:00 ndevos pranithk: it'll get you a cookie and will use that for other commands
11:00 pranithk ndevos: that seems like an easy one.
11:00 ndevos pranithk: you can also do: bugzilla modify --status=ASSIGNED --assigned_to=pkarampu@redhat.com <list-of-bugs>
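
[Note: the python-bugzilla commands from this exchange, collected into one sketch; the bug numbers are placeholders.]

    yum install python-bugzilla
    bugzilla login                     # stores a session cookie used by later commands
    # move bugs to POST once a patch is posted for review
    bugzilla modify --status=POST 1234567 1234568
    # or set the assignee while moving to ASSIGNED, as suggested above
    bugzilla modify --status=ASSIGNED --assigned_to=pkarampu@redhat.com 1234567
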
11:03 hchiramm http://review.gluster.org/#/c/10617/ do u think this has to be backported to release-3.7
11:03 hchiramm ndevos, ^^
11:03 hchiramm or can u review that change ?
11:03 ndevos hchiramm: I dont know, I have not seen that change yet...
11:04 ndevos hchiramm: that is a patch against release-3.6, is there one for the master branch too?
11:04 pranithk ndevos: I think I have enough to start the work for ASSIGNED->POST :-) for now. I will ask you if I face any problems
11:04 hchiramm ndevos, even I havent noticed against master
11:04 ndevos pranithk: sure, good luck!
11:05 hchiramm ndevos, the glupy changes are not backported to release 3.7
11:05 hchiramm then how can it cause a build failure
11:07 hgowtham joined #gluster-dev
11:08 ndevos hchiramm: I do not know, I did not try to build rpms since yesterday evening for 3.7
11:08 atalur joined #gluster-dev
11:08 hchiramm I havent submitted any changes for 3.7
11:08 hchiramm so I am really wondering how the build can failure due to that
11:09 hchiramm hagarth, ^^^
11:10 hagarth hchiramm: no idea, we need to figure out the problem :)
11:12 poornimag joined #gluster-dev
11:14 dlambrig joined #gluster-dev
11:18 shyam1 joined #gluster-dev
11:18 rafi ping JustinClift
11:19 ndevos hchiramm: building rpms from release-3.7 fails for me too, in the same way as kshlm reported
11:21 soumya joined #gluster-dev
11:26 atinmu joined #gluster-dev
11:26 krishnan_p joined #gluster-dev
11:26 hchiramm ndevos, it looks like this commit  c0ca8aee8085bce0418c6e0cfc3504bc59f60cdb
11:26 hchiramm causes the failure
11:27 hchiramm none of my changes have gone in release 3.7
11:27 Joe_f joined #gluster-dev
11:31 hchiramm JustinClift, ping
11:31 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
11:31 kotreshhr joined #gluster-dev
11:32 ndevos hchiramm: hmm, yes, looks like the cause, and I have not seen that patch, or the one for the master branch, sorry :-/
11:32 dlambrig joined #gluster-dev
11:32 ppai joined #gluster-dev
11:42 rafi1 joined #gluster-dev
11:44 sabansal_ joined #gluster-dev
11:44 schandra joined #gluster-dev
11:45 dlambrig joined #gluster-dev
11:46 ndevos hchiramm: I'll be having lunch now, if you do not have time for the packaging backports, I can do so later
11:46 hagarth joined #gluster-dev
11:48 anrao joined #gluster-dev
11:51 hchiramm ndevos, thanks ..
11:55 overclk joined #gluster-dev
11:58 rafi joined #gluster-dev
12:00 rafi1 joined #gluster-dev
12:17 Joe_f joined #gluster-dev
12:18 hagarth joined #gluster-dev
12:19 itisravi joined #gluster-dev
12:30 anrao joined #gluster-dev
12:40 rafi joined #gluster-dev
12:46 spalai left #gluster-dev
12:47 tigert hchiramm_home, ndevos: http://glusternew-tigert.rhcloud.com/
12:55 firemanxbr joined #gluster-dev
12:56 nbalacha joined #gluster-dev
12:57 rafi1 joined #gluster-dev
13:00 ppai joined #gluster-dev
13:00 anrao joined #gluster-dev
13:06 soumya joined #gluster-dev
13:07 overclk joined #gluster-dev
13:08 shyam1 joined #gluster-dev
13:10 gem joined #gluster-dev
13:10 hchiramm joined #gluster-dev
13:18 nbalacha joined #gluster-dev
13:19 Joe_f joined #gluster-dev
13:21 hagarth JustinClift: ping, around?
13:25 ndevos tigert: looking... you made it slower?
13:25 dlambrig left #gluster-dev
13:27 ndevos tigert: there are dead images on http://glusternew-tigert.rhcloud.com/community/ - next to the mailinglist/events
13:27 anrao joined #gluster-dev
13:54 atalur joined #gluster-dev
13:59 lpabon joined #gluster-dev
14:02 shyam1 Anyone: upstream master is failing compile (in my local machine) stating "implicit declaration of function 'CHANGELOG_GET_ENCODING'" which got introduced in commit: http://review.gluster.org/#/c/9572/
14:02 shyam1 Checked the commit and the code, did not find that macro anywhere
14:03 shyam1 Just checking if anyone else faced this issue?
14:06 shyam1 Ah ok, FreBSD smoke is failing with the same: http://build.gluster.org/job/freebsd-smoke/lastFailedBuild/console
14:08 kkeithley me too :-(
14:08 shyam1 Linux smoke in Jenkins is also failing at the same place: http://build.gluster.org/job/smoke/lastBuild/console
14:08 anoopcs anoopcs, For me too...
14:09 shaunm joined #gluster-dev
14:10 shyam1 Ok, anyone has ideas on what CHANGELOG_GET_ENCODING should be defined to? :) or know Saravanakumar Arumugam's IRC nick?
14:13 ndevos shyam1: I guess kotreshhr might be able to help? seems to be coming from http://review.gluster.org/9572
14:13 anoopcs shyam1, I think they are looking into it now
14:14 kkeithley Looks like release-3.7 might be borked too. ?
14:14 shyam1 yup, commit: http://review.gluster.org/#/c/10642/
14:15 shyam1 How did this pass regression? any ideas...
14:16 shyam1 anoopcs: thanks
14:17 kkeithley yup, release-3.7 branch is borken too
14:18 kkeithley Help us Obi Wan, you're our only hope
14:18 JustinClift Doh, missed hagarth
14:19 JustinClift Damn, missed ppai too
14:20 wushudoin joined #gluster-dev
14:20 dlambrig joined #gluster-dev
14:25 JustinClift Hmmm
14:25 * JustinClift kicks Gerrit
14:31 rafi joined #gluster-dev
14:31 soumya joined #gluster-dev
14:34 dlambrig joined #gluster-dev
14:34 jobewan joined #gluster-dev
14:36 ndevos kkeithley, shyam1: ah, I guess this is the issue: http://review.gluster.org/#/c/10620/3/xlators/features/changelog/src/changelog-misc.h
14:37 ndevos so it passed Jenkins testing, but not when both patches were applied
14:38 ndevos aravindavk, kotreshhr, anoopcs: are you looking into fixing that CHANGELOG_GET_ENCODING compile problem? whats the status?
14:39 anoopcs ndevos, Not me..
14:39 kkeithley both patches, you mean on master and release-3.7 branch?
14:39 anoopcs kotreshhr, Do you have any update?
14:39 kkeithley I'm about 10 seconds away from reverting the patches...
14:40 ndevos kkeithley: no, two conflicting patches, both are in master and release-3.7
14:40 ndevos 4 patches in total
14:40 kkeithley I grok
14:40 ndevos http://review.gluster.org/#/c/10620 conflicts with http://review.gluster.org/#/c/10642/
14:42 kkeithley I'm about 10 seconds away from reverting the patches...  unless hagarth reappears soon
14:42 JustinClift Just do it
14:43 JustinClift We can re-revert later :)
14:43 shyam1 ndevos: Where was the conflict? I am unable to see that information
14:43 shyam1 JustinClift: Ummm... there seem to be dependent patches... so let's give us 15 seconds
14:43 JustinClift kkeithley: We should also email the devel mailing list, so people not on IRC know (Manu comes to mind)
14:43 JustinClift shyam1: Sure
14:44 ndevos shyam1: http://review.gluster.org/#/c/10620/3/xlators/features/changelog/src/changelog-misc.h renames CHANGELOG_GET_ENCODING
14:44 JustinClift If you guys can figure out a fix quickly instead, that's obviously better :)
14:44 kkeithley release-3.7 ( http://review.gluster.org/#/c/10642/ ) reverted
14:44 shyam1 ndevos: And got merged at 6:22 AM (as per my screen)
14:45 kkeithley http://review.gluster.org/#/c/9572/ is the master branch?  Someone quickly check before I pull the trigger
14:45 ndevos shyam1: both patches passed regression, but the 2nd patch tries to use the renamed function/macro, when that got merged, things broke
14:45 kkeithley it is
14:45 shyam1 kkeithley: yup, that is the master branch
14:46 ndevos kkeithley: yes, it is
14:46 kkeithley master (http://review.gluster.org/#/c/9572/ ) reverted
14:46 ndevos thats 3x a +1 :)
14:46 shyam1 ndevos: The first patch that renames the macro is also merged, right? So we should not have broken
14:46 shyam1 ndevos: Oh no, got it
14:47 shyam1 ndevos: The first patch renames the macro to HEADER_INFO and the second patch uses the older macro name GET_ENCODING
14:47 ndevos kkeithley: whats the url for the revert?
14:47 ndevos shyam1: yeah :-/
14:47 jobewan joined #gluster-dev
14:48 kkeithley release-3.7:  This patchset was reverted in change: Ia8f994e5c82d7ec6255f658682e764924957d8ba
14:48 kkeithley url?
14:48 kkeithley master: This patchset was reverted in change: I6c8f3105dd1df3e136906c470576d90c30e1239b
14:48 ndevos kkeithley: does a revert not generate a new patch that needs to get reviewed?
14:49 aravindavk I just checked the message. will work on the CHANGELOG_GET_ENCODING issue
14:49 kkeithley I presumed the revert was the inverse of the submit/cherry-pick
14:49 ndevos kkeithley: http://review.gluster.org/10683 for 3.7 :)
14:50 ndevos aravindavk: send patches *very* quick, or the reverts will get merged
14:50 JustinClift k, can we confirm that smoke tests now pass?
14:50 aravindavk will send now
14:51 ndevos JustinClift: well, no, they will not pass, compiling is broken
14:52 kkeithley I see my presumption was incorrect
14:52 ndevos kkeithley: aravindavk is sending fixes, it should not be needed to merge those reverts
14:52 kkeithley acknowledged
14:52 shyam1 Sent a mail to devel with required information
14:53 shyam1 kkeithley: ndevos: aravindavk: You may want to update that thread once done with either the revert or the fix
14:53 ndevos shyam1: okay, thanks!
14:55 aravindavk ndevos: kkeithley, kkeithley working on the fix now.
14:56 ndevos aravindavk++ much appreciated!
14:56 glusterbot ndevos: aravindavk's karma is now 3
14:56 kkeithley avarvindak: yup. Will wait for your fixes
14:58 aravindavk kkeithley: thanks
15:04 hchiramm aravindavk, kotresh and me are working on this
15:04 hchiramm we are testing now
15:04 hchiramm hopefully we will send it now
15:04 ndevos *now*?
15:05 * ndevos continously hits F5 in gerrit and causes an other DOS
15:10 JustinClift Heh
15:11 hchiramm kkeithley, have u reverted the patch in master
15:11 aravindavk kkeithley: ndevos sent patch http://review.gluster.org/#/c/10685/
15:12 kkeithley hchiramm: no. Avarvinda has a real fix
15:13 ndevos hchiramm: he did press the revert button, but that only creates a new patch that needs to get reviewed
15:14 hchiramm ndevos, I think we have the fix
15:14 hchiramm any way aravindavk has sent the fix
15:14 hchiramm we have some more error checking
15:14 kkeithley IOW I am not going to submit the revert patches (that are currently in gerrit)
15:15 hchiramm http://fpaste.org/219905/43109812/ aravindavk
15:15 hchiramm u can add those checks as well
15:16 kkeithley wait, I don't see anything in that change that either defines CHANGELOG_GET_ENCODING or removes it. Am I missing something obvious?
15:17 kkeithley nm, I'm blinkd
15:17 kkeithley blind
15:17 hchiramm kkeithley, check line no 19
15:17 * kkeithley loves latency on the internet
15:17 hchiramm :)
15:18 kkeithley submitted.  someone want to submit a patch for release-3.7 branch?
15:19 aravindavk kkeithley: patches 10620 and 9572 dev started in parallel. Both regressions completed successfully. But both had conflicting changes. After one is merged regression was not run again or missed in manual rebase
15:20 atalur joined #gluster-dev
15:20 aravindavk kkeithley: will do that now
15:20 kkeithley aravindavk: thanks
15:20 ndevos aravindavk: yeah, something like this happened before when function renames were done :-/
15:21 ndevos I think we need something like: [submit] button in Gerrit -> run smoke tests -> merge the patch
15:21 kkeithley sh*t happens. fortunately we had people around to handle it
15:22 kkeithley just a week or two ago we had smoke tests run after a submit. What happened to that?
15:22 hchiramm :(
15:22 hchiramm kkeithley, not sure whats happening with smoke test
15:22 ndevos no idea what happened to it, at least I do not see smoke test results in Gerrit anymore :-(
15:23 kkeithley smoke tests generally have been broken since the gerrit upgrade. Maybe that's part of it.
15:23 hagarth joined #gluster-dev
15:23 hchiramm true.. how can we trigger smoke tests again
15:23 kdhananjay joined #gluster-dev
15:23 hchiramm kkeithley, thats correct.
15:23 hchiramm its not running any more in upstream
15:24 kkeithley Our man in Havana (or London, as the case may be) is working on it.
15:24 ndevos the short, round and bald one? :D
15:24 hchiramm kkeithley, oh..ok
15:24 kkeithley round?
15:27 aravindavk kkeithley: 3.7 patch http://review.gluster.org/#/c/10686/
15:28 kkeithley London=Farnborough. I thought about going into the Farnborough office and Guildford when I was there week before last, but traffice on the M25 was so bad I didn't feel like taking the extra time.
15:30 JustinClift Hang on, so submitting a CR no longer triggers a smoke run?
15:30 kkeithley yeah, thought you knew about that.
15:30 JustinClift No
15:30 kkeithley well, you do now ;-)
15:31 JustinClift k, I'll figure out what the problem is
15:31 JustinClift I'm really not going to get the Forge stuff done in time for Summit
15:31 * JustinClift sighs
15:32 kkeithley I had a boss once who would have told you "work smarter, not harder"
15:32 ndevos kkeithley: did you learn something from that?
15:33 kkeithley yes. That it was time to quit and get a new job with a smarter manager
15:33 ndevos haha
15:33 kkeithley smart enough to not say stupid things like that
15:33 kkeithley ndevos: are you verifying the release-3.7?
15:34 ndevos kkeithley: its compiling now!
15:34 JustinClift Whoo
15:35 kkeithley <montgomery burns>excellent</montgomery burns>
15:35 ndevos but, I think the same bug was used for both the master branch and release-3.7
15:35 kkeithley ugh. c'est la vie. (or c'est la guerre)
15:35 ndevos oh, and compiling it not finished *yet*
15:36 kkeithley did it get past changelog?
15:36 aravindavk ndevos: oh my bad, I used 3.7 bug in patch for master :(
15:36 aravindavk ndevos: which is merged
15:36 ndevos aravindavk: ah, hmm
15:37 * ndevos shrugs
15:37 JustinClift k, if it's working now, someone please email -devel
15:38 kkeithley yes, as soon as I submit the release-3.7 patch
15:38 ndevos kkeithley: I merged it before you, HAHAHAHA
15:38 kkeithley oh, okay. you can send the email. ;-)
15:38 hagarth ndevos: I was also about to hit +2 now :P
15:38 ndevos hagarth: yours does not really count, you missed all the fun
15:38 hagarth ndevos: indeed
15:39 aravindavk ndevos: kkeithley thanks a lot for not reverting the patches :)
15:39 ndevos kkeithley: can you now abandon your reverts, if you have not done so already?
15:39 kkeithley fixing the real problem is always better. Thanks for helping
15:40 kkeithley snap
15:40 ndevos aravindavk: thanks for the quick fix!
15:40 ndevos aravindavk: you want to send an email to the list?
15:41 kdhananjay left #gluster-dev
15:41 aravindavk ndevos: ok
15:42 ndevos aravindavk++ thank you
15:42 glusterbot ndevos: aravindavk's karma is now 4
15:43 lalatenduM joined #gluster-dev
15:44 soumya joined #gluster-dev
15:45 nbalacha joined #gluster-dev
15:53 ndevos hagarth: I did not send backports for the packaging bits yet, I'm waiting for the changes from hchiramm
15:53 ndevos maybe the one he posted is sufficient, I will check that later
15:53 * ndevos will have dinner now
15:54 hchiramm ndevos, I was looking into manu's commit fix
15:54 hchiramm however I have backported python gluster package fix
15:54 hchiramm glupy one I didnt backport yet
15:55 hchiramm I thought of identifying the current issue which introduced by the other commit
16:01 jobewan joined #gluster-dev
16:09 kotreshhr left #gluster-dev
16:39 poornimag joined #gluster-dev
16:43 atinmu joined #gluster-dev
16:45 * JustinClift is restarting a bunch of NetBSD regression runs that just barfed on a busted slave
16:54 pranithk joined #gluster-dev
17:03 hagarth ndevos: let us get it in soon
17:03 hagarth ndevos: I intend pulling the trigger on 3.7.0 before i leave for BCN
17:06 ndevos hagarth: when is your flight?
17:07 ndevos hagarth: I'll go for a walk and can post packaging backports when I'm back, maybe in ~1 hour
17:07 hagarth ndevos: 8:45 PM on Sunday.. there's enough time. However I would like to tag beta2 tomorrow and run tests on that.
17:08 hagarth If things look good, beta2 will become 3.7.0.
17:08 ndevos hagarth: okay, that should not be a problem then
17:09 ndevos shyam1: did you review http://review.gluster.org/#/c/9797 yet? poornima would like to have that in 3.7 too
17:10 shyam1 ndevos: Not yet after the last time, i'll do it this weekend
17:10 ndevos shyam1: okay, I plan to do that as well :)
17:10 * ndevos really leaves now, before it gets dark or starts to rain
17:10 hagarth shyam1, ndevos: Please be done with all reviews, merging and backporting by 1600 UTC tomorrow
17:11 hagarth i.e. for anything that you want to see in 3.7.0
17:11 kkeithley hchiramm, hchiramm_home: ping, is there a BZ open already for the RPM failure in release-3.7?
17:12 hagarth kkeithley: don't recollect seeing one
17:14 kkeithley was thinking of the BZ for the recent glupy spec work, not for the RPM failure per se.
17:14 kkeithley but maybe should have a separate BZ anyway
17:14 hagarth kkeithley: hchiramm has used 1211900 for that one.
17:15 kkeithley okay, I'll use that (unless told otherwise)
17:15 kkeithley thanks
17:15 shyam1 hagarth: Hmmm... lookup-unhashed still needs review blessing
17:16 hagarth shyam1: let us get it in 3.7.1?
17:16 shyam1 hagarth: Well a deadline is a deadline, I am fine.
17:26 hagarth shyam1: any perf. numbers with lookup-unhashed?
17:26 shyam1 Oh yes
17:26 JustinClift New AFR regression failure on release-3.7 branch
17:26 shyam1 hagarth: Very good ones :)
17:26 * JustinClift just emailed -devel
17:27 shyam1 hagarth: unfortunately had to be tied into rebalance perf. as some changes were common, there is a tier.t regression with that patch that I am investigating now
17:27 shyam1 hagarth: But otherwise it is ready for review..
17:28 hagarth shyam1: fantastic!
17:33 shyam1 hagarth: RH Internal link warning: https://mojo.redhat.com/people/bengland/blog/2014/04/30/gluster-scalability-test-results-using-virtual-machine-servers for perf. numbers with lookup-unhashed
17:34 shyam1 hagarth: In short -ve scalability of create, becomes a +ve scalability as you grow the cluster with the fix
17:35 hagarth shyam1: great, let us publish this result on gluster.org blog once the patch is in
17:35 shyam1 hagarth: yup! I guess we need to do the same for the MT epoll patch as well
17:36 hagarth shyam1: yes, would you take a shot at it for next week? to go with the 3.7.0 release?
17:41 JustinClift shyam1: 86 VM cluster.  Cool.
17:41 JustinClift shyam1: Any interest in setting up something bigger, on Rackspace?
17:41 JustinClift (in a different account, not the standard Gluster one)
17:42 JustinClift s/86/84/
17:50 shyam1 JustinClift: Hmmm... I would need it for DHT2 :)
17:51 shyam1 hagarth: I think I will be done with the regression failure on tier today, if we can close the review then we should be good to go in 48 hours
17:56 hagarth JustinClift: containers are the way to go :)
17:59 hagarth shyam1: ok, let us see if we can merge unhashed auto in the next 24 hours to both mainline & release-3.7
18:03 JustinClift shyam1: Well, we have a special purpose "Unlimited" account with Rackspace for special things
18:03 JustinClift shyam1: We'd have to clear it with them first for this, but it's very likely they'd ok it
18:03 shyam1 JustinClift: Hmmm... maybe better to demonstrate the problem and the solution there for others to see and use, makes it repetable
18:04 hagarth JustinClift: can I get some VMs to run parallel tests for 24 - 48 hours?
18:04 JustinClift hagarth: How many do you want?
18:04 shyam1 JustinClift: Will get back to you when I get some cycles to get that analyzed and published...
18:04 JustinClift shyam1: Sure
18:04 hagarth JustinClift: as many as possible ;)
18:05 JustinClift hagarth: I can make 100 if you want.  OSAS will probably charge it back to your budget though, as they'll cop it at month end. ;)
18:05 hagarth JustinClift: how about 8-12? possible to get it from OSAS budget?
18:06 JustinClift We're already so far over budget it's not funny
18:06 JustinClift But "fuck it"
18:06 JustinClift Gimme a few minutes to set them up
18:07 JustinClift hagarth: I'll give you 8
18:07 JustinClift Takes us to a round 10 over our allocation ;/
18:08 hagarth JustinClift: you can even set them up for me tomorrow
18:08 hagarth JustinClift: I plan to run tests after we've merged most/all patches.
18:15 JustinClift hagarth: Actually, instead of creating new VM's for you... would you be ok to disconnect a bunch of them in Jenkins yourself and just use those?
18:16 JustinClift We have 15 slaves for CentOS in jenkins.
18:16 hagarth JustinClift: can do that
18:16 JustinClift Since it's the weekend, they're not likely to be heavily utilised
18:16 hagarth JustinClift: right
18:16 JustinClift So, I'm thinking I might be able to avoid getting myself yelled at ;)
18:17 hagarth JustinClift: happy to help for that :)
18:18 rafi joined #gluster-dev
18:32 pranithk rafi: have been waiting for you, got the results?
18:35 rafi pranithk: My internet connection was not working, I'm so sorry
18:36 rafi pranithk: it is not reported back
18:36 rafi pranithk: let me check in jenkins :)
18:38 rafi pranithk: there was a build failure in master
18:40 rafi JustinClift: ping
18:40 glusterbot rafi: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
18:40 JustinClift rafi: pong
18:41 JustinClift ?
18:41 rafi JustinClift: I was looking to a spurious failure in quorum
18:41 ndevos JustinClift: hey, this permission failure should look familiar to you? http://build.gluster.org/job/rackspace-regression-2GB/1159/consoleFull
18:42 rafi JustinClift: i couldn't reproduce from my local machine, and also couldn't make any conclusion from log also
18:42 rafi JustinClift: can you give me one slave machine to investigate a spurious failure ?
18:50 pranithk JustinClift: yes, please give us the machine you said I could use :-)
18:52 JustinClift Oops, wasn't looking at this window
18:52 JustinClift ndevos: Hmm, that's a failure that shouldn't happen. :(
18:52 JustinClift Ahhh.  Manual run
18:52 JustinClift I don't know if the script for the manual runs was fixed
18:53 JustinClift 1 sec
18:53 lalatenduM joined #gluster-dev
18:53 ndevos JustinClift: oh, I didnt notice it was a manual run... that should not be needed anymore :-/
18:55 pranithk shyam1: You are coming to barcelona right?
18:55 JustinClift ndevos: Hmmm.
18:56 JustinClift The manual run script already has the explicit mkdir  and permissions adjustement for that directory
18:56 JustinClift So, I have no idea how that error happened
18:56 JustinClift I've gone into the slave vm, and remake the dir
18:56 JustinClift But I doubt it will do anything
18:57 JustinClift ndevos: That CR is already running a triggered regression test anyway
18:57 JustinClift http://build.gluster.org/job/rackspace-regression-2GB-triggered/8847/console
18:57 ndevos JustinClift: oh, thanks!
18:58 JustinClift pranithk: Which machine is that?
18:58 pranithk JustinClift: http://build.gluster.org/job/regression-test-burn-in/23/console
18:58 JustinClift It's already been given away
18:58 pranithk :'-(
18:59 * JustinClift will email you the details.  You can figure it out with them :)
18:59 JustinClift pranithk: Does it have to be that one?
18:59 JustinClift Or will any slave do?
18:59 pranithk JustinClift: no any of the slave machines should do...
18:59 JustinClift So, slave24 and 26 are idle at the moment
19:00 JustinClift So, if you disconnect one of them in Jenkins and Do Stuff with it, I don't think anyone will notice ;)
19:00 JustinClift Just don't use it for a week ;)
19:00 pranithk JustinClift: oh, how do we do that?
19:00 JustinClift pranithk: You have your Jenkins login details yeah?
19:00 pranithk JustinClift: no we just need to debug this one test and see if it happens
19:00 pranithk JustinClift: I think yes, wait
19:02 pranithk JustinClift: okay, logged into http://build.gluster.org/
19:02 pranithk JustinClift: now?
19:03 JustinClift Heh, they're all in use again
19:03 JustinClift 1 sec
19:03 JustinClift So, some of the smoke tests will finish pretty soon
19:03 JustinClift So, lets pick one of those
19:04 jiffin joined #gluster-dev
19:05 JustinClift pranithk: Go here: http://build.gluster.org/computer/slave26.cloud.gluster.org/
19:05 JustinClift In the top right corner click the "Mark this node temporarily offline" button
19:05 JustinClift Type in something useful
19:05 JustinClift eg "Testing stuff out, contact me if needed"
19:06 JustinClift That will make the slave no longer pick up jobs.  It won't interfere with the currently running job though
19:06 JustinClift Then, wait for the current job on that slave to finish
19:06 JustinClift Once it was, login and Do Stuff to the slave via ssh
19:07 JustinClift Once you're done, reboot the node (just to be safe), and then reconnect it in Jenkins (same page, button in similar spot)
19:07 JustinClift pranithk: Make sense?
19:07 JustinClift rafi: You can follow this same procedure ^
19:12 rafi JustinClift: Thanks
19:16 hagarth rafi, pranithk: good luck with debugging. I will check back tomorrow morning.
19:22 shyam1 hagarth: tier issue in unhashed-lookup is fixed, patch is refreshed, jFYI
19:23 jiffin joined #gluster-dev
19:23 hagarth shyam1: cool!
19:26 shyam1 pranithk: Nope, not coming to Barcelona :(
19:26 rafi hagarth++ JustinClift++
19:26 glusterbot rafi: hagarth's karma is now 58
19:26 glusterbot rafi: JustinClift's karma is now 55
19:28 hagarth JustinClift++ certainly deserves more karma than me :)
19:28 glusterbot hagarth: JustinClift's karma is now 56
19:28 ndevos JustinClift++ for paying the sangria next week
19:28 glusterbot ndevos: JustinClift's karma is now 57
19:29 hagarth JustinClift++ for hosting us a dinner next week
19:29 glusterbot hagarth: JustinClift's karma is now 58
19:29 ndevos JustinClift must have an amazing budget
19:30 rafi hagarth: now it is equal :)
19:30 hagarth ndevos: undoubtedly!
19:30 hagarth rafi: give him an increment and he may offer you a treat too ;)
19:31 * ndevos is thinking hard about some nice sweets from the UK...
19:31 rafi JustinClift++ now JustinClift owe me a treat :)
19:31 glusterbot rafi: JustinClift's karma is now 59
19:31 * ndevos also thinks the UK does not have many/nice sweets, or ndevos is forgetful
19:33 hagarth alright, me off for today. tty all tomorrow.
19:36 ndevos good night, hagarth
19:40 rafi1 joined #gluster-dev
19:44 pranithk JustinClift: That machine is already running a job, how do we find which machine is free?
19:44 pranithk JustinClift: rafi1 told me :-)
19:49 JustinClift pranithk: All good now?
19:50 pranithk JustinClift: there is no idle machine. We are taking offline one of the machines which is nearing completion
19:50 JustinClift Yep
19:50 JustinClift Just pick one which has a job finishing soon
19:50 JustinClift ... or kill someone's job, and requeue it :D
19:52 JustinClift pranithk: If you're going to be quick, you could use slave47
19:52 pranithk JustinClift: hehe, they will kill me if they find out :-)
19:52 JustinClift Yeah :)
19:59 pranithk JustinClift: How do we login?
20:00 pranithk JustinClift: rafi1 knows it seems
20:07 shaunm joined #gluster-dev
20:09 JustinClift pranithk: "jenkins" user.  It's the same pw on all the slaves ;)
20:09 pranithk JustinClift: rafi1 already got the details from hagarth_afk
20:10 pranithk shyam1: too bad :-(
20:11 JustinClift pranithk: Oh wow
20:11 shyam1 pranithk: yeah! well that's life I guess... wanted to be there badly though.
20:11 JustinClift Core generated on release-3.6 branch.
20:11 JustinClift pranithk: Is that unexpected? http://build.gluster.org/job/regression-test-burn-in/24/console
20:13 JustinClift shyam1: Actually, you're a good person for checking cores quickly...
20:13 JustinClift Does ^ look like a new thing to you?
20:13 shyam1 JustinClift: on it.. :)
20:13 JustinClift :)
20:13 JustinClift It's release-3.6 branch
20:15 pranithk shyam1: lets hope you can make it next time :-)
20:16 shyam1 pranithk: Yup! :)
20:17 pranithk shyam1: I for one am excited to meet xavi, Have loads of questions to ask him about ec
20:17 shyam1 xavih: Pssst! Xavi run! ;)
20:18 pranithk shyam1: I already told him and he is so excited that he wants to start the meetings from 11th itself :-P
20:18 shyam1 lol!
20:19 shyam1 JustinClift: What the... the stacks seem similar to https://bugzilla.redhat.com/show_bug.cgi?id=1195415
20:19 glusterbot Bug 1195415: unspecified, unspecified, ---, bugs, ON_QA , glusterfsd core dumps when cleanup and socket disconnect routines race
20:20 JustinClift shyam1: Hmm.
20:21 shyam1 JustinClift: Ok, the patch that disables this, http://review.gluster.org/#/c/10167/ is included in master and 3.7 only
20:21 JustinClift Double check it's really release-3.6 branch and Jenkins didn't get screwy?
20:21 shyam1 Not on 3.6, hence still occuring there
20:21 dlambrig1 pranith what time is it in bangalore? dont u ever sleep? ;)
20:21 pranithk dlambrig1: It is 2 well 1:51
20:22 shyam1 JustinClift: Yup, this is 3.6 in which the mentioned commit is not present, so... I guess we either back port that or ?
20:22 pranithk dlambrig1: This release is bad!!
20:22 shyam1 pranithk: Not to put you out of sleep, this release has just started ;)
20:22 dlambrig1 pranithk: lol lots of regression problems!
20:23 pranithk shyam1: well I keep saying today is the last night out and they keep coming :-(. Today it is for rafi though. He is facing some problem only on his patches
20:23 pranithk dlambrig1: not so many now :-P
20:24 JustinClift shyam1: What's the right course of action for the patch then?
20:24 JustinClift Do we email Jeff, asking him to backport it, or ?
20:25 JustinClift shyam1: Btw, am I ok to grab that VM back?
20:25 pranithk JustinClift: forget all that. There is a BIG bug in locks when people use clear-locks command which is not something people use. All we did is to remove the test so that regressions don't fail
20:25 shyam1 JustinClift: Ummm... the patch just deletes the tests suspected of causing the cores, and that has worked in master and 3.7, so I guess we back port the same to 3.6, I am fine with that
20:25 JustinClift Cool
20:25 shyam1 pranithk: yup, so merging the said patch should be fine
20:25 shyam1 I can send a merge request, yes?
20:25 pranithk shyam1: okay :-)
20:25 shyam1 JustinClift: I do not need the VM (mostly) for crashes
20:25 shyam1 ;)
20:25 JustinClift :)
20:26 * JustinClift wants to see that VM pass at least one "burn in"
20:26 JustinClift ... trying release-3.5 branch this time
20:26 JustinClift Maybe that one will work...
20:29 JustinClift Anyone around that can merge http://review.gluster.org/#/c/10619/ ?
20:29 pranithk JustinClift: I will do
20:30 ndevos JustinClift, pranithk: should we not wait for the NetBSD test?
20:31 ndevos oh, well, maybe not then :)
20:31 JustinClift It's a really simple one-liner.
20:31 JustinClift I'm *possibly wrongly* assuming that it would work on NetBSD
20:31 rafi joined #gluster-dev
20:31 * ndevos +2'd it in the same moment pranithk clicked the submit buttont :)
20:31 JustinClift And it's for fixing a regression failure
20:32 JustinClift ;)
20:32 ndevos buttont is not a word
20:32 shyam1 JustinClift: pranithk: http://review.gluster.org/#/c/10696/ for deleting the tests for clear-locks
20:32 shyam1 on 3.6
20:32 pranithk shyam1: but 3.6 is sacred :-/
20:32 pranithk shyam1: power is only with Raghavendra Bhat
20:33 shyam1 pranithk: Ah ok will loop him in
20:33 * ndevos slaps anyone that merges patches in 3.5 too
20:34 ndevos I also think kkeithley might still have some waiting for him...
20:35 pranithk shyam1: 3.7 has multiple maintainers I think, so there we can merge :-)
20:35 pranithk shyam1: It is just like master...
20:36 JustinClift Breaks on 80% of regression runs? :D
20:36 pranithk JustinClift: is it okay if we install screen on the slave?
20:36 JustinClift pranithk: Yep
20:36 rafi joined #gluster-dev
20:36 JustinClift I often do anyway
20:36 ndevos pranithk: use tmux, thats way cooler
20:36 pranithk rafi: lets install screen
20:36 ndevos instead of CTRL+a use CTRL+b - and to attach a session: tmux attach
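For reference, the screen-to-tmux switch ndevos is describing boils down to a handful of commands; this is only a minimal sketch (the session name is an example), and it assumes the EPEL repository is enabled on the RHEL6 slave, as noted a few lines below:

    # tmux is in EPEL on RHEL6, not in the base repos
    yum install -y tmux
    # start a named session; detach with Ctrl+b d (screen uses Ctrl+a d)
    tmux new -s regression
    # list sessions and re-attach later
    tmux ls
    tmux attach -t regression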
20:38 pranithk ndevos: It is amazing how people who are passionate about tmux help others use it so much. Even sac in BLR office is that way
20:38 shyam1 pranithk: Yup, understood
20:38 dlambrig1 I am a tmux convert
20:39 pranithk dlambrig1: how is tiering doing?
20:39 ndevos pranithk: I only use screen when I have to nowadays, rhel6 does not have tmux by default, only from epel (and epel is on the slaves)
20:39 ndevos pranithk: OH YOU ARE MEAN!
20:39 pranithk ndevos: sorry?
20:39 pranithk ndevos: did I miss something?
20:39 ndevos because "< pranithk> dlambrig1: how is tiering doing?"
20:40 dlambrig1 tiering , going about as well as ec ;)
20:40 ndevos just start to use tmux, and all is well
20:40 JustinClift Doh!
20:41 pranithk ndevos: ah! now I understand. No, it was just a normal question, I am helping rafi so that he can get his tiering patches merged.
20:41 pranithk ndevos, dlambrig1: didn't mean to be "mean"
20:41 ndevos pranithk: haha, no problem :D
20:42 ndevos dlambrig1: btw, deadline for merging 3.7.0 patches is tomorrow 16:00 UTC
20:43 dlambrig1 ah well, I do not think we will have any shortage of bugs... we may have found one today while reviewing 7702
20:44 pranithk dlambrig1: will tiering and ec be ready by 3.7.1?
20:44 ndevos \_(o_O)_/
20:45 dlambrig1 pranithk: the bottleneck is with QE.. their testing matrix is pretty big..as far as our code goes it should work for 3.7.1
20:45 dlambrig1 pranithk: can talk about it more in barcelona
20:45 pranithk dlambrig1: ah! coding now are we? :-). Sure sure carry on.
20:47 dlambrig1 left #gluster-dev
20:48 ndevos lol, disconnected immediately
20:48 ndevos pranithk: so, how *is* ec getting along?
20:48 * ndevos does really not know
20:50 JustinClift It kinda has the form of a non-fatal train wreck
20:51 JustinClift Like, something you can look at and point out... but hope you're not going to be intimately involved with in the near future
20:51 JustinClift But, that's just my impression from looking at failure stuff :)
20:52 JustinClift It might look completely different from everyone else's PoV
20:52 JustinClift ... and I'm in cynical mood today
20:53 ndevos I wonder how a non-fatal train wreck can happen, and how it will look in the future - fatal for the train, or its load?
20:54 pranithk ndevos: for 3.7.0 patches are all submitted. We need to work on 2 important bugs, 1 of which xavi already sent
20:54 ndevos pranithk: oh, that does not sound bad at all!
20:55 pranithk ndevos: Second one we need to work to fix. In barcelona I will talk to xavi
20:55 JustinClift Yeah, I was kinda exaggerating... ;D
20:55 ndevos pranithk: sounds like you have a plan :)
20:56 pranithk ndevos: well I always have a plan. Not yet sure if it will all work out well :-)
20:56 * ndevos is very tempted to just merge http://review.gluster.org/10672 and send an other backport on top of that
20:57 ndevos pranithk: a plan is always good to have, I never expect to see a plan work, and plan for adapting the plan when needed ;-)
21:01 pranithk ndevos: :-)
21:02 pranithk ndevos: We just found the bug. HAVE_BD_XLATOR is screwing things for rafi
21:03 ndevos pranithk: oh, yes, interesting, I doubt tiering works with the bd-xlator
21:05 pranithk pranithk: no it is a code bug, ret should be zero at the end of function
21:06 pranithk ndevos: how do we tell dan that things are fine? when he logs in?
21:06 ndevos you can do: @later tell dlambrigh1 <message goes here>
21:07 ndevos but, maybe 2x, also for dlambrig1 and dlambrigt
21:07 ndevos (there is no 'h' in his name)
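Spelled out, the glusterbot reminders ndevos is suggesting would look roughly like the lines below; the message text is only an illustration, and the two nicks are the variants mentioned above:

    @later tell dlambrig1 things are fine, the HAVE_BD_XLATOR issue was just a code bug (ret not reset to zero)
    @later tell dlambrigt things are fine, the HAVE_BD_XLATOR issue was just a code bug (ret not reset to zero)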
21:13 rafi pranithk++
21:13 glusterbot rafi: pranithk's karma is now 13
21:15 amanjain110893 joined #gluster-dev
21:17 amanjain110893 hey guys I have recently learnt about file-system concepts and I want to go deeper into distributed systems, how should I start?
21:23 rafi joined #gluster-dev
21:32 JustinClift amanjain110893: Ahhh, kinda bad timing for jumping on here.  It's friday evening/sat morning and ppl are either signing off exhausted or already gone :/
21:33 ndevos amanjain110893: have a read through http://gluster.readthedocs.org/en/latest/Developer-guide/Developers%20Index/
21:33 ndevos amanjain110893: and for more general info on how things work and all, you can see https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html-single/Administration_Guide/index.html
21:34 ndevos amanjain110893: also http://people.redhat.com/dblack/ has some nice presentations for sysadmins, but a little deeper into the theory
21:35 ndevos amanjain110893: if you read all those bits, and have more detailed questions about gluster, send them to gluster-devel@gluster.org
21:36 JustinClift Ahh cool.  Thought you'd split already. ;)
21:36 ndevos JustinClift: some people never give up!
21:38 amanjain110893 @ndevos thanks, I will go through the stuff you suggested and will come back to you later
21:40 ndevos amanjain110893: sure, and enjoy the reading :)
21:40 ndevos amanjain110893: oh, http://www.gluster.org/community/documentation/index.php/Presentations contains some good things too
21:48 pranithk ndevos: You bad boy, I think we offended Dan :-(
21:48 JustinClift Really?
21:50 pranithk ndevos: in barcelona we are in trouble :-D
21:50 ndevos pranithk: don't scare me!
21:51 amanjain110893 joined #gluster-dev
21:51 ndevos JustinClift: I think you have to save us, and invite us all for nice drinks and food and stuff
21:54 shaunm joined #gluster-dev
21:56 JoeJulian +1
21:57 pranithk ndevos: we are in trouble you be prepared
21:57 pranithk ndevos: how come you are still not sleeping, isn't it late there?
21:58 ndevos pranithk: you think we can bribe dan with some dutch sweets?
21:58 ndevos or, linger his pain?
21:58 JustinClift "linger his pain"?
21:58 ndevos pranithk: I'm working! someone needs to fix the bug status of 3.7.0, and the packaging
21:59 pranithk ndevos: ah!
21:59 ndevos isnt "linger" english?
21:59 pranithk ndevos: I don't know :-(.
21:59 pranithk who here is going to be awake for next 6-7 hours?
21:59 ndevos pranithk: and, we only have until 16:00 UTC on saturday 9 may...
22:00 ndevos pranithk: I do not plan to stay awake *that* long
22:00 ndevos Day changed to 09 May 2015
22:00 ndevos 00:00 < ndevos> pranithk: I do not plan to stay awake *that* long
22:01 ndevos btw, we now have a new tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.1
22:01 glusterbot Bug glusterfs: could not be retrieved: InvalidBugId
22:01 ndevos hahaha, glusterbot, you should learn about people friendly bug aliases
22:01 pranithk ndevos: :-)
22:07 JoeJulian https://github.com/leamas/supybot-bz
22:07 rafi joined #gluster-dev
22:08 shyam1 joined #gluster-dev
22:08 pranithk shyam1: will you be online for next 3-4 hours?
22:09 shyam1 pranithk: I should be online for a few more minutes and then online after a couple of hours, why?
22:10 pranithk shyam1: One of the patches I submitted failed regression. I resubmitted it after posting about the crash in changelog on gluster-devel. If it fails again, could you re-trigger it? http://review.gluster.com/#/c/10693/
22:11 shyam1 pranithk: Ah! cool no problem will do
22:11 pranithk shyam1: Thanks for the help. Me going offline to get some sleep.
22:11 pranithk bye guys!
22:11 shyam1 pranithk: Yup you should do that...
22:13 ndevos rafi: what is the difference between bug 1219842 and bug 1219843 ?
22:13 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1219842 medium, urgent, ---, bugs, NEW , [RFE] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily
22:13 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1219843 medium, urgent, ---, bugs, NEW , [RFE] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily
22:13 ndevos rafi: to me it seems that we can close 1219842 as a duplicate of 1219843?
22:14 rafi ndevos: let me check
22:17 rafi ndevos: it is duplicate
22:17 jobewan joined #gluster-dev
22:17 ndevos rafi: okay, thanks, shall I mark it as such, or will you do so?
22:18 rafi ndevos: sorry, these days my head is really screwing with me
22:18 rafi ndevos: i will do
22:18 ndevos rafi: no problem, I know how it is :)
22:18 ndevos rafi++
22:18 glusterbot ndevos: rafi's karma is now 9
22:19 rafi ndevos++ too for pointing out the duplicate
22:19 glusterbot rafi: ndevos's karma is now 120
22:20 ndevos rafi: one less tiering bug to fix ;-)
22:20 rafi ndevos: absolutely
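For anyone doing the same triage from a shell instead of the Bugzilla web UI, the python-bugzilla CLI can mark one report as a duplicate of the other; the flags here are from memory rather than from the log, so treat this as a sketch and verify them against bugzilla modify --help:

    # authenticate once, then close 1219842 as a duplicate of 1219843
    bugzilla login
    bugzilla modify 1219842 --close DUPLICATE --dupeid 1219843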
22:21 jiffin1 joined #gluster-dev
22:24 rafi ndevos: off to bed, I'm done for today , it is already 4:00 AM :)
22:24 ndevos rafi: yeah, good night!
22:24 rafi see u tomorrow
22:24 * ndevos is finishing up an email and will drop off soon too
22:35 jiffin joined #gluster-dev
22:52 JustinClift 'nite all
22:52 JustinClift Have a good wkend and stuff :)
22:52 JustinClift ndevos: "linger his pain" <-- Nope, not english. ;)
22:53 glusterbot JustinClift: <'s karma is now -12
22:53 ndevos JustinClift: oh, well, too bad!
22:53 JustinClift :D
23:10 ndevos argh... now I get "Error 500" when trying to update bugs :-/ time to leave it for the day, I guess
23:24 JustinClift Hmmm, review.gluster.org seems quicker
23:24 JustinClift I guess iWeb sorted out their problems
23:24 * JustinClift never got around to ringing them
23:40 dlambrig joined #gluster-dev
