
IRC log for #gluster-dev, 2016-06-28


All times shown according to UTC.

Time Nick Message
01:04 hagarth joined #gluster-dev
01:13 hagarth joined #gluster-dev
01:25 jtc joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:54 baojg joined #gluster-dev
02:11 baojg joined #gluster-dev
02:12 asengupt joined #gluster-dev
02:32 shyam left #gluster-dev
02:58 gem joined #gluster-dev
03:23 ppai joined #gluster-dev
03:41 magrawal joined #gluster-dev
03:53 pkalever joined #gluster-dev
03:57 mchangir joined #gluster-dev
04:12 itisravi joined #gluster-dev
04:13 pkalever left #gluster-dev
04:16 itisravi joined #gluster-dev
04:21 kotreshhr joined #gluster-dev
04:30 itisravi_ joined #gluster-dev
04:32 poornimag joined #gluster-dev
04:34 shubhendu joined #gluster-dev
04:40 gem joined #gluster-dev
04:43 Apeksha joined #gluster-dev
04:54 ramky joined #gluster-dev
04:56 prasanth joined #gluster-dev
04:58 ppai joined #gluster-dev
05:00 nigelb ppai: Did we ever get a response from RTD folks?
05:01 nigelb I'd recommend we kick off a thread about moving to sphinx.
05:02 ppai nigelb, no I haven't
05:02 ppai yeah it makes sense now
05:04 jiffin joined #gluster-dev
05:04 sakshi joined #gluster-dev
05:08 kotreshhr left #gluster-dev
05:11 ndarshan joined #gluster-dev
05:15 Manikandan joined #gluster-dev
05:20 atinm joined #gluster-dev
05:21 Bhaskarakiran joined #gluster-dev
05:22 baojg joined #gluster-dev
05:25 aravindavk joined #gluster-dev
05:30 rraja joined #gluster-dev
05:34 hgowtham joined #gluster-dev
05:39 Manikandan joined #gluster-dev
05:41 mchangir joined #gluster-dev
05:42 pkalever joined #gluster-dev
05:44 pur joined #gluster-dev
05:54 kshlm joined #gluster-dev
05:54 skoduri joined #gluster-dev
06:01 ashiq joined #gluster-dev
06:02 pkalever left #gluster-dev
06:08 aspandey joined #gluster-dev
06:09 ppai joined #gluster-dev
06:16 hchiramm ppai, but do we have any tools to migrate to rst files from md?
06:16 ppai hchiramm, pandoc cli can do it
06:16 spalai joined #gluster-dev
06:17 hchiramm ppai, then maybe we can start a thread on gluster-devel asking how comfortable developers are with it
06:17 ppai hchiramm, sure
06:17 hchiramm iirc, everyone voted for md though
06:17 nigelb except, md isn't really standard :)
06:17 nigelb rst is the standard for documentation (at least in the python world)
06:18 ppai but i'd like to try the tools out first and check how good the conversion is. I tried it on the libgfapi-python migration from .md to .rst; it needed some manual tinkering later on
06:19 ppai if the conversion requires manual inspection later, this can also be an opportunity to cleanup the docs
06:20 nigelb Don't do too much cleanup
06:20 nigelb I mean, if you want to clean up, remove things.
06:20 Saravanakmr joined #gluster-dev
06:20 nigelb But don't waste too much time on it :)
06:20 ppai yeah
06:20 kdhananjay joined #gluster-dev
06:21 nigelb Otherwise, it'll never finish. I've been there :)
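
A minimal sketch of the pandoc conversion ppai mentions above, run over a docs tree of Markdown files; the flags are standard pandoc options, and the output still needs the manual inspection discussed here:

    # convert every Markdown file under docs/ to reStructuredText
    find docs -name '*.md' | while read -r f; do
        pandoc --from markdown --to rst --output "${f%.md}.rst" "$f"
    done
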
06:23 nishanth joined #gluster-dev
06:27 itisravi joined #gluster-dev
06:30 rafi joined #gluster-dev
06:30 pranithk1 joined #gluster-dev
06:37 karthik___ joined #gluster-dev
06:38 rafi joined #gluster-dev
06:39 kshlm joined #gluster-dev
06:39 Manikandan joined #gluster-dev
06:40 Manikandan joined #gluster-dev
06:43 prasanth joined #gluster-dev
06:44 rafi joined #gluster-dev
06:46 rafi joined #gluster-dev
06:48 msvbhat joined #gluster-dev
06:48 rafi joined #gluster-dev
06:53 rafi joined #gluster-dev
07:00 atalur joined #gluster-dev
07:10 itisravi joined #gluster-dev
07:13 pranithk1 joined #gluster-dev
07:13 pranithk1 ndevos: kshlm: How to get 3.9.0 appear in bugzilla?
07:13 pranithk1 ndevos: kshlm: To raise a bug I mean :-)
07:19 hgowtham joined #gluster-dev
07:19 poornimag joined #gluster-dev
07:25 ramky joined #gluster-dev
07:37 poornimag joined #gluster-dev
07:44 hagarth joined #gluster-dev
08:04 itisravi joined #gluster-dev
08:13 hchiramm joined #gluster-dev
08:20 rraja joined #gluster-dev
08:22 ndevos pranithk|lunch: ask kkeithley to request the 3.9.0 version or send an email to the bugzilla team (I dont remember their address)
08:23 pranithk ndevos: I will send a mail to Kaleb
08:23 kshlm joined #gluster-dev
08:25 rafi1 joined #gluster-dev
08:25 rastar joined #gluster-dev
08:26 ndevos pranithk: you can already create a 3.9.0 tracker bug if you like, and have the bugs for features block that
08:27 rraja joined #gluster-dev
08:27 karthik___ joined #gluster-dev
08:27 pranithk ndevos: Is it not required to be tagged against 3.9.0?
08:28 ndevos pranithk: you can change that later, for now, mainline would work too
08:28 ndevos pranithk: the bug has an 'Alias' field, you should write "glusterfs-3.9.0" in there
08:28 pranithk ndevos: yeah, I will do that :-)
08:28 ndevos pranithk: also add the Triaged, Tracking keywords
08:28 rraja joined #gluster-dev
08:28 pranithk ndevos: will do
08:29 ndevos pranithk: an example of the contents would be in the glusterfs-3.8.1 bug
08:29 rafi joined #gluster-dev
08:29 ndevos and because it is an alias, you can use "glusterfs-3.8.1" just like a bug number :)
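
A hedged illustration of using a tracker alias like a bug number, via the python-bugzilla library (that getbug resolves aliases server-side is an assumption based on the behaviour described above):

    # pip install python-bugzilla
    from bugzilla import Bugzilla

    bz = Bugzilla("bugzilla.redhat.com")
    # pass the alias anywhere a numeric bug id is expected
    bug = bz.getbug("glusterfs-3.8.1")
    print(bug.id, bug.status, bug.summary)
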
08:55 hchiramm joined #gluster-dev
09:01 aravindavk_ joined #gluster-dev
09:05 Saravanakmr joined #gluster-dev
09:18 post-factum kkeithley: Arch is much like Slackware, but it has deps :)
09:27 nigelb misc: how can we get our build machines' ssh whitelisted only to a few machines?
09:27 post-factum skoduri: http://review.gluster.org/#/c/13658/
09:27 post-factum skoduri: any news on that?
09:28 nigelb Does it just need an ansible role to do the job or is there something I'm missing?
09:28 post-factum skoduri: i've been running it in production since it was posted for review and see no visible issues
09:28 misc nigelb: we have a ton of ways
09:29 misc do it like fedora, with a VPN
09:29 misc do it with a firewall
09:29 misc do it by using the private network of rackspace
09:29 nigelb my vote is for something we can easily share across freebsd/netbsd/fedora without too much effort.
09:29 misc nigelb: and yeah, we likely need an ansible job :)
09:30 misc and the problem is also that not all builders are in rackspace, so we might have different jumphosts
09:30 misc and that jenkins uses ssh too :/
09:31 nigelb the *bsd machines are going to be fun!
09:31 pranithk aravindavk_: https://bugzilla.redhat.com/show_bug.cgi?id=1350744 this is the tracker bug for 3.9.0, you can use this in the mail
09:31 glusterbot Bug 1350744: unspecified, unspecified, ---, pkarampu, NEW , GlusterFS 3.9.0 tracker
09:31 misc nigelb: the private network is IMHO the best way
09:32 misc except that I think we did have to disable it, because the gluster test suite didn't support having 2 interfaces
09:32 misc so the VPN is likely out too
09:32 nigelb I was going to ask about that
09:32 ndevos kkeithley: I'm wondering if I should buy https://shop.softiron.co.uk/product/overdrive-3000/ - but its rather expensive...
09:33 misc nigelb: so in the end, firewall is our best hope
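
Since the conclusion is a firewall driven by an ansible job, a minimal sketch with Ansible's iptables module might look like this (the jump-host address 203.0.113.10 is a placeholder, not a real Gluster host):

    # tasks/ssh-whitelist.yml -- restrict sshd to a single jump host
    - name: allow ssh from the jump host only
      iptables:
        chain: INPUT
        protocol: tcp
        destination_port: 22
        source: 203.0.113.10   # placeholder address
        jump: ACCEPT

    - name: drop all other ssh traffic
      iptables:
        chain: INPUT
        protocol: tcp
        destination_port: 22
        jump: DROP
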
09:37 baojg joined #gluster-dev
09:37 itisravi joined #gluster-dev
09:37 kdhananjay joined #gluster-dev
09:37 misc nigelb: also, even after the update, slave2 fails to update
09:37 misc I am gonna try with a newer ansible snapshot
09:39 nigelb misc: was that 24 or 25?
09:39 nigelb I'm going to try a run with verbose output
09:39 nigelb maybe it'll give us more info
09:43 misc nigelb: 24
09:44 misc also, I am starting to feel the need to get food, and going to the office, so I might disappear a bit
09:44 skoduri post-factum, okay thanks ... I haven't followed it lately... I do not see kotresh, raghug online to ping them.. probably aravindavk_ can help
09:44 skoduri aravindavk_, could you please review http://review.gluster.org/#/c/13658/
09:45 post-factum skoduri: i tried to ping raghug a couple of days before...
09:45 post-factum skoduri: thanks anyway
09:45 ndevos misc, nigelb: btw, I found a bug in gluster/nfs when it tries to validate hostnames that contain a "-", please do not add Jenkins slaves with hostnames like slave-02.cloud.gluster.org :)
09:46 skoduri post-factum, oh okay..I will drop a note too to consider that patch..
09:46 misc ndevos: give me a good reason to hide a bug :p
09:46 nigelb ndevos: uhh, shouldn't we fix the actual bug?
09:46 baojg joined #gluster-dev
09:46 ndevos nigelb: I've got a patch for it, but until its merged the regression tests will fail
09:46 misc I guess we will, but for now, it is unfixed
09:47 nigelb no plans to add new machines for now.
09:47 nigelb So you should be good for a while.
09:47 ndevos ok, just checking :)
09:48 nigelb I will admit, I was planning on replacing all them slowly with newer machines built from scratch.
09:49 misc now, i wonder if we can start putting random utf8 names in the server name
09:49 nigelb emojis!
09:49 misc +1
09:50 aspandey joined #gluster-dev
09:50 mchangir so we can now use gluster to test the robustness of the regression tests :P
09:50 atalur joined #gluster-dev
09:50 mchangir or was that the plan from day 1 ;)
09:50 misc nigelb: so, the git task works fine with ansible HEAD
09:51 misc not sure what happened here :/
09:51 nigelb oh wow
09:51 nigelb we've run into an ansible bug? :)
09:52 pranithk1 joined #gluster-dev
09:56 ndevos nigelb, misc: utf-8 will fail too, unless a regex like [a-zA-Z0-9] covers it - but regex and utf-8?!
09:57 nigelb does regex even support utf8?
09:57 ndevos no idea!
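
Regex engines do handle UTF-8, but an explicit ASCII class such as [a-zA-Z0-9] will never match it; a quick Python illustration (the actual pattern used in gluster/nfs is an assumption):

    import re

    # ASCII-only label check in the spirit of the one discussed above;
    # adding '-' to the class is what the pending gluster/nfs fix amounts to
    label = re.compile(r'^[a-zA-Z0-9-]+$')
    print(bool(label.match('slave-02')))        # True once '-' is in the class
    print(bool(label.match('célèbre')))         # False: explicit ranges stay ASCII
    # classes like \w are Unicode-aware by default in Python 3
    print(bool(re.match(r'^\w+$', 'célèbre')))  # True
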
09:58 sakshi joined #gluster-dev
09:58 misc nigelb: I would rather blame the salt/ansible transport, but maybe it is time to slowly get rid of it
09:58 luizcpg joined #gluster-dev
09:59 nigelb There's an easy way to test that theory :)
09:59 * nigelb does a quick local test.
09:59 poornimag joined #gluster-dev
10:01 rastar joined #gluster-dev
10:02 nigelb misc: works fine from my local machine.
10:02 nigelb so your guess is correct :)
10:03 misc nigelb: that's a bit annoying, because I wrote that plugin
10:03 nigelb hahah
10:03 misc but I will just blame salt
10:03 misc in fact, I suspect I even have a idea of the issue
10:03 misc like a env var leakage
10:04 misc mhh nop
10:05 misc what bothers me is why it fails on 2 nodes only
10:12 nigelb that bothers me too.
10:13 nigelb ndevos: If it makes you feel any better, I've identified a large portion of the rpm build failures.
10:13 nigelb (and caused a lot of it since yesterday)
10:13 ndevos nigelb: oh, good, I didnt notice an issue yet?
10:13 nigelb It's a race condition :)
10:13 nigelb So it only happens rarely.
10:14 nigelb We've enabled concurrent building of rpms. Sometimes, we build rpms concurrently on the same build machine, causing both builds to fail.
10:14 ndevos misc, nigelb: maybe check if the slaves that fail to update the git repo, do not have local modifications there?
10:14 nigelb We ruled that out already.
10:14 nigelb Or at least, I did. That's the first thing I checked.
10:15 ndevos nigelb: uh, concurrent builds? That is an option for the local slave only, it will have issues because mock is used with default (system-wide) settings
10:15 misc I did check too, and the error message would be different
10:15 misc https://bugzilla.redhat.com/show_bug.cgi?id=1350631 for people who want details
10:15 glusterbot Bug 1350631: unspecified, unspecified, ---, bugs, NEW , Some machines don't have /opt/qa updated
10:16 ndevos or read the emails on gluster-infra...
10:16 nigelb ndevos: yeah, I've changed the new jobs to not do concurrent builds.
10:16 nigelb Old jobs going off today.
10:16 ndevos nigelb: yes, of course... I wonder when that got enabled?
10:18 nigelb It's been there from the start.
10:18 nigelb we just run into it so rarely, it wasn't an issue until now.
10:18 ndevos hmm, ok, I'm not sure who created the job
10:19 nigelb I'm guessing we made a mistake in one place and carried it over. Anyway, it's caught now :)
10:21 kkeithley ndevos: "rather expensive" is quite an understatement.
10:21 nigelb misc: I wonder if deleting the /opt/qa folder on those two machines and re-running the playbook will help.
10:22 ndevos kkeithley: the non-rack version is cheaper, but too expensive for a box that needs a place somewhere
10:22 kkeithley I'm not seeing the non-rack version
10:23 bfoster joined #gluster-dev
10:23 kkeithley misc: random utf8 strings like this one, "asdfghjkl"  ?
10:23 kkeithley ;-)
10:24 ndevos kkeithley: https://shop.softiron.co.uk/product/overdrive-1000/
10:24 ndevos and it runs CentOS already
10:26 misc nigelb: it may not help to diagnose the problem :)
10:27 misc I will ponder on my way to food|office
10:27 nigelb ah indeed :)
10:28 kkeithley $599 is a lot better.  I guess a $299 LeMaker Cello plus case, power supply, memory, and 1TB disk will come close to that.
10:28 ndevos yes, probably
10:29 kkeithley and you can get it now, versus pre-order the Cello.
10:31 kkeithley what are the rpmbuild jobs in jenkins doing now, building in ~jenkins/rpmbuild?  Why don't we fix them to build in a different (random) subdir?
10:33 kdhananjay joined #gluster-dev
10:33 ndevos kkeithley: I hope the rpmbuild jobs use mock? we could pass them a different working directory, but we also dont need to run multiple jobs on the same slave at the same time...
10:34 misc not sure that mock supports multiple concurrent builds on the same root
10:34 misc (by default)
10:34 itisravi joined #gluster-dev
10:37 ndevos misc: there is --rootdir=... for mock
10:37 misc ndevos: and we use it :) ?
10:38 msvbhat joined #gluster-dev
10:38 ndevos misc: no, we expect only a single job to run on a slave, but if someone wants the concurrent builds, we could use that option with $JENKINS_WORKSPACE or whatever the env var is
10:38 misc ndevos: yup, so we could indeed
10:39 ndevos but I dont see an advantage in multiple mock jobs on the same slave...
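
If concurrent builds were ever wanted after all, the --rootdir idea would look roughly like this; $WORKSPACE is Jenkins' standard per-job directory, and the exact invocation is a sketch:

    # give each Jenkins job its own mock buildroot so two rpm builds
    # on one slave cannot trample each other's chroot
    mock -r epel-7-x86_64 \
         --rootdir="$WORKSPACE/mock-root" \
         --rebuild glusterfs-*.src.rpm
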
10:40 nigelb frees up the machine sooner, but we don't need to optimize for that yet.
10:41 itisravi joined #gluster-dev
10:42 ndevos well, those jobs dont take long anyway, a mock build on my laptop takes < 5 minutes
10:43 nigelb exactly.
10:43 ndevos nigelb: actually, are the slaves configured to keep the yum repositories cached? there is a setting for it in /etc/mock/site-defaults.cfg
10:43 nigelb No idea.
10:43 nigelb I want to rebuild our slaves from scratch at some point, but I haven't done that yet.
10:44 pranithk1 joined #gluster-dev
10:45 ndevos maybe make sure it is part of your ansible tasks?
10:47 nigelb will do.
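
The caching knob ndevos refers to lives in mock's site-defaults.cfg; a sketch of what the ansible task could drop in (option names follow mock's yum_cache plugin and are worth double-checking against the installed version):

    # /etc/mock/site-defaults.cfg -- keep yum packages cached between
    # builds instead of re-downloading them every run
    config_opts['plugin_conf']['yum_cache_enable'] = True
    config_opts['plugin_conf']['yum_cache_opts']['max_age_days'] = 15
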
10:48 hchiramm joined #gluster-dev
10:49 nigelb misc: I'm merging https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/22 in.
10:49 nigelb I've made it more templated
10:49 nigelb If you want to glance at it.
10:52 Manikandan joined #gluster-dev
10:54 aravindavk_ joined #gluster-dev
10:59 misc nigelb: so far, it seems fine
10:59 misc I wonder if we can get a service that checks yaml syntax on PRs
11:00 misc or rather, i think we can do it with travis, but maybe we can do that another way ?
11:00 ira joined #gluster-dev
11:05 aspandey joined #gluster-dev
11:06 Saravanakmr joined #gluster-dev
11:07 nigelb misc: I'm wondering if I should write a jenkins job to do it :)
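
Whatever ends up triggering it (travis or a jenkins job), the check itself can stay tiny; a minimal Python sketch using PyYAML (an assumed but common choice):

    #!/usr/bin/env python
    # exit non-zero if any YAML file passed on the command line is invalid
    import sys
    import yaml  # PyYAML

    rc = 0
    for path in sys.argv[1:]:
        try:
            with open(path) as f:
                yaml.safe_load(f)
        except yaml.YAMLError as err:
            print("%s: %s" % (path, err))
            rc = 1
    sys.exit(rc)
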
11:08 ndevos REMINDER: Gluster Bug Triage meeting at 12:00 UTC - http://article.gmane.org/gmane.comp.file-systems.gluster.devel/15726
11:08 misc nigelb: I am pondering that too, not sure how jenkins and github work together
11:13 hchiramm joined #gluster-dev
11:17 nigelb There is a Jenkins plugin.
11:17 nigelb misc: the other option (which you may not like) is getting this onto gerrit
11:17 nigelb and based on gerrit events, we trigger a test and update the review via a jenkins job.
11:22 hchiramm joined #gluster-dev
11:22 misc nigelb: well, i like gerrit as a concept, not as an implementation :)
11:22 nigelb haha
11:23 misc ie, I am all for moving stuff to 1 single place under our control
11:23 nigelb The concept of dogfooding our own infra is what interests me.
11:23 nigelb we'll know all the pain points the devs go through
11:23 misc it's just that the current place is sometimes making my eyes bleed
11:24 nigelb There's not much better than that.
11:30 hchiramm joined #gluster-dev
11:31 rastar joined #gluster-dev
11:31 poornimag joined #gluster-dev
11:46 ndevos kkeithley: why is https://bugzilla.redhat.com/show_bug.cgi?id=1301647 NOTABUG?
11:46 glusterbot Bug 1301647: unspecified, unspecified, ---, ndevos, CLOSED NOTABUG, Add support for SEEK_DATA/SEEK_HOLE to sharding
11:46 kkeithley bad edit
11:47 ndevos ah
11:47 gem joined #gluster-dev
11:56 kshlm joined #gluster-dev
11:57 skoduri joined #gluster-dev
11:59 ndevos REMINDER: Gluster Bug Triage meeting starting now in #gluster-meeting
12:00 pkalever joined #gluster-dev
12:01 rraja joined #gluster-dev
12:02 skoduri joined #gluster-dev
12:10 karthik___ joined #gluster-dev
12:18 ppai joined #gluster-dev
12:22 msvbhat joined #gluster-dev
12:29 kkeithley nigelb++
12:29 glusterbot kkeithley: nigelb's karma is now 12
12:30 kkeithley ndevos++
12:30 glusterbot kkeithley: ndevos's karma is now 277
12:30 ndevos thanks for joining, kkeithley++ jiffin++ Saravanakmr++ Manikandan++ skoduri++
12:30 glusterbot ndevos: kkeithley's karma is now 129
12:30 glusterbot ndevos: Manikandan's karma is now 59
12:30 glusterbot ndevos: jiffin's karma is now 46
12:30 glusterbot ndevos: Saravanakmr's karma is now 12
12:30 misc nigelb: yeah, for now :)
12:30 glusterbot ndevos: skoduri's karma is now 25
12:30 Manikandan ndevos++
12:30 glusterbot Manikandan: ndevos's karma is now 278
12:31 jiffin ndevos++
12:31 glusterbot jiffin: ndevos's karma is now 279
12:32 magrawal joined #gluster-dev
12:33 misc mhh, could we ask people before assigning them bugs?
12:34 misc (cause if I filed a bug report to be tracked later, that does not mean I need to get it assigned to me)
12:34 kkeithley which bug?
12:34 misc https://bugzilla.redhat.com/show_bug.cgi?id=1349450
12:34 glusterbot Bug 1349450: unspecified, unspecified, ---, mscherer, ASSIGNED , Use something else that chroot to build rpm
12:35 kkeithley shall I assign it to nigel?
12:35 kkeithley nigelb:
12:35 misc well, we have to decide a workflow for that
12:36 misc and what 'assigned' mean
12:36 kkeithley actually it shouldn't have been set to ASSIGNED.
12:36 kkeithley just set the assignee to something besides bugs@gluster.org for future assignment
12:36 kkeithley that's our normal workflow
12:37 kkeithley for gluster bugs. Tell me what you want for infra bugs
12:38 misc yeah, we need to discuss it; nigelb and I pondered how infra bugs would interfere, but my understanding was that these bugs were outside of the regular process
12:38 mchangir joined #gluster-dev
12:39 misc nigelb: ^any opinion ?
12:40 kkeithley we don't want bugs assigned to bugs@gluster.org sitting around, going nowhere
12:40 kkeithley for extended lengths of time
12:40 misc yeah
12:40 misc so we would need to have a different default assignee
12:40 ndevos kkeithley: just put both nigelb and misc on CC of the bug, leave it in NEW and add the Triaged keyword
12:41 kkeithley ndevos: that's effectively what it is now
12:42 kkeithley assignee is nigelb, state is NEW, misc is on cc
12:42 ndevos kkeithley: thats fine then, maintainers of components should have a bugzilla query or rss-feed to list the bugs in NEW+Triaged
12:42 kkeithley nigelb can change the assignee and cc if he wants
12:43 ndevos I never count NEW+assignee as ASSIGNED, and just ignore who is assigned in that case ;-)
12:43 ndevos no need to change it from bugs@gluster.org to something else either
12:46 kkeithley it happened in bug triage. So in future, for infra BZs we should just set Triaged and cc nigelb and misc.
12:48 ndevos yeah, that should be sufficient, but I think nigelb and misc will triage the bugs they file themselves now too :)
12:49 kkeithley oh, definitely.
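
The standing query ndevos suggests can be a bookmarked buglist URL; a sketch (the parameters are standard buglist.cgi ones, and the component value is assumed from this discussion):

    # NEW + Triaged infra bugs, as a bookmarkable query
    https://bugzilla.redhat.com/buglist.cgi?product=GlusterFS&component=project-infrastructure&bug_status=NEW&keywords=Triaged
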
12:50 misc I'd rather have bugzilla set "triaged" by default for me
12:50 misc or a bot
12:51 ndevos misc: we want at least some manual check, users tend to file the most weird bugs...
12:52 misc ndevos: true
12:52 misc but for infra, we are not there yet
12:53 ndevos no, but I can see users reporting a real gluster software bug against 'project-infrastructure', during triage we check+correct the component
12:53 misc mhhh
12:54 misc yeah, that's kinda also why I prefer to have separate trackers :/
12:55 nigelb haven't had a case of someone filing against project infra yet.
12:57 ndevos yes we did, some vagrant test thingy was project-infrastructure
12:57 ndevos should have been 'tests' instead
12:58 ndevos see https://bugzilla.redhat.com/show_bug.cgi?id=1291537
12:58 glusterbot Bug 1291537: medium, medium, ---, rtalur, CLOSED CURRENTRELEASE, [RFE] Provide mechanism to spin up reproducible test environment for all developers
12:59 ndevos so, it did happen, and I assume it will happen in the future too
12:59 misc in this case, that's debatable
12:59 misc I think it could have been a infra task
13:01 misc but i guess that's more that we maybe do not have the same understanding of category
13:02 nigelb what I mean is, ever since we started watching it closely, it hasn't happened.
13:02 nigelb I go through the list every morning
13:06 pranithk1 joined #gluster-dev
13:09 ndevos nigelb, misc: the other option could be to use bugzilla, but request a different component for the infra stuff
13:09 ndevos uh, not component, product
13:11 JoeJulian post-factum, kkeithley, ndevos: Sergej responded, "done" and rolled back to 3.7.11.
13:13 kkeithley A separate BZ product just for Gluster Infra seems very unlikely.
13:13 kkeithley JoeJulian: ???
13:13 JoeJulian arch linux
13:14 ndevos kkeithley: we could suggest that it replaces the Gluster-Documentation product on https://bugzilla.redhat.com/enter_bug.cgi?classification=Community
13:14 post-factum JoeJulian: nice job
13:14 post-factum JoeJulian++
13:14 glusterbot post-factum: JoeJulian's karma is now 7
13:15 post-factum kkeithley: reverting 3.8 update in arch community repo
13:15 anoopcs Can somebody please review https://github.com/gluster/glusterweb/pull/84?
13:16 kkeithley well, I hope they don't have to wait too long for 3.7.13 and 3.8.1
13:17 post-factum i believe Sergej could just apply 1 extra patch on top of the tag
13:17 post-factum but Arch policy is mainly "do not modify upstream"
13:18 post-factum so they would just bump package epoch and downgrade the version
13:18 JoeJulian Bummer that didn't get fixed in 3.7.12.
13:18 post-factum aye, that is weird too
13:19 JoeJulian I specifically commented on the original bug during rc2 in hopes that it would get fixed before release.
13:22 kkeithley If we keep slipping the release to get one more bugfix in, and never actually ship, then what do we have?
13:23 post-factum kkeithley: if we get unusable release then, what do we have?
13:25 kkeithley Then we fix the release, as we said we would, and as we did. Our 3.7.12 and 3.8.0 packages have the fix in them.
13:25 kkeithley Arch linux is free to apply the patch to their builds too.
13:26 post-factum so, respinning is considered to be a good idea?
13:26 ndevos kkeithley: the Storage SIG build from Friday doesnt have the fix either, not in 3.7.12 or in 3.8.0 :-(
13:27 nishanth joined #gluster-dev
13:27 kkeithley are you going to fix that?
13:27 kkeithley I thought you asked if you should, and I said yes
13:28 ndevos I think I have to... but did not have the time yesterday
13:30 kkeithley because otherwise _I_ wasn't going to build broken 3.7.12 packages
13:32 spalai left #gluster-dev
13:42 JoeJulian If pwritev, fsync, ftruncate, discard, and zerofill are not used in self-heal or nfs, then I guess it doesn't really matter. If those fops are needed then yeah, slipping the release to get one more bugfix in might be warranted.
13:43 JoeJulian Because none of those fops will work.
13:43 Apeksha joined #gluster-dev
13:46 luizcpg joined #gluster-dev
13:48 atalur joined #gluster-dev
13:48 post-factum :/
13:51 ashiq joined #gluster-dev
13:52 kkeithley Without the fix they won't work.  But all the packages _I_ build have the fix. And ndevos will respin the CentOS SIG packages soon.  So what is it that doesn't work?
13:54 post-factum so we have tagging decoupled from actual release
13:54 post-factum something smells wrong
13:55 kkeithley Building packages with patches is not uncommon.  The release is the release. The packages are the packages.
13:56 kkeithley Any time someone else wants to step up and take over building packages, be my guest.
14:02 kkeithley any volunteers?
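
What "building packages with patches" amounts to in practice is a couple of lines in the spec file; a hypothetical fragment (the patch file name is illustrative, standing in for the fix from http://review.gluster.org/14779):

    # glusterfs.spec (fragment) -- carry the gfapi fix on top of the
    # released tarball; the patch name here is illustrative
    Patch0001: 0001-gfapi-fix-iovec-handling.patch

    %prep
    %setup -q
    %patch0001 -p1
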
14:03 msvbhat joined #gluster-dev
14:09 ndevos dlambrig: ./tests/basic/tier/new-tier-cmds.t failed on https://build.gluster.org/job/rackspace-regression-2GB-triggered/21929/console, maybe you can improve/correct the test?
14:13 dlambrig ndevos: I'll look..
14:15 ndevos thanks dlambrig++
14:15 glusterbot ndevos: dlambrig's karma is now 3
14:24 aravindavk_ joined #gluster-dev
14:44 anoopcs joined #gluster-dev
14:48 ndevos kkeithley: do you know if there is a 3.7.x version of https://bugzilla.redhat.com/show_bug.cgi?id=1349276 ?
14:48 glusterbot Bug 1349276: high, unspecified, ---, jthottan, MODIFIED , Buffer overflow when attempting to create filesystem using libgfapi as driver on OpenStack
14:48 ndevos there were two 3.8.x ones, but I can not find the 3.7.x
14:48 kkeithley nope, don't know
14:48 kkeithley two for 3.8.x?  Was one supposed to be for 3.7.x?
14:48 JoeJulian There's 3.8 and master, I haven't seen a 3.7 yet.
14:49 ndevos JoeJulian: do you know if the libgfapi/iovec bug has been reported against 3.7.x?
14:50 JoeJulian I'd have to look
14:54 kkeithley I only see the master and release-3.8 BZs in my emails
14:54 ndevos then I really wonder why everyone is surprised that 3.7.12 got released without the fix...
14:57 ndevos kkeithley, JoeJulian: is one of you cloning bug 1349276 for 3.7.12 already, or shall I do that?
14:57 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1349276 high, unspecified, ---, jthottan, MODIFIED , Buffer overflow when attempting to create filesystem using libgfapi as driver on OpenStack
14:58 kkeithley I wasn't planning on doing it. Not any time soon anyway
14:58 kkeithley go ahead
14:59 ndevos ok, I just need a bug to refer to when I include the patch in the RPM... didn't you do that?
15:00 kkeithley I put http://review.gluster.org/14779 in the %change
15:00 JoeJulian surprised because I noted the problem on bug 1333268
15:00 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1333268 high, unspecified, ---, pgurusid, CLOSED CURRENTRELEASE, SMB:while running I/O on cifs mount and doing graph switch causes cifs mount to hang.
15:03 kkeithley that bug should not have been closed
15:03 wushudoin joined #gluster-dev
15:05 kkeithley IMO
15:06 ndevos well, yes, probably
15:07 ndevos we now have https://bugzilla.redhat.com/show_bug.cgi?id=1350880 for 3.7.13
15:07 glusterbot Bug 1350880: urgent, urgent, ---, bugs, NEW , Buffer overflow when attempting to create filesystem using libgfapi as driver on OpenStack
15:08 spalai joined #gluster-dev
15:11 hchiramm ashiq++ thanks !
15:11 glusterbot hchiramm: ashiq's karma is now 24
15:15 ndevos hchiramm: what did he do?
15:16 * ndevos just wonders if he should do a ++ too
15:16 kkeithley only if you know what he did
15:16 ndevos oh, maybe it would be a -- then
15:16 hchiramm ndevos, good question https://bugzilla.redhat.com/show_bug.cgi?id=1348947 :)
15:16 glusterbot Bug 1348947: unspecified, unspecified, ---, hchiramm, ASSIGNED , agetty process consuming 100% CPU on the OSE node
15:16 kkeithley or a +-
15:16 hchiramm kkeithley, \o/
15:17 hchiramm ndevos, \o/
15:18 kkeithley :+1
15:18 ashiq so ndevos have you decided to give ++ ;)
15:18 pkalever left #gluster-dev
15:18 kkeithley :+1:
15:18 ndevos ashiq: well, I can not do that here, because the comments are private :-/
15:18 * kkeithley wonders if either of those did anything interesting
15:20 ndevos ashiq, hchiramm: did you check what could be done for providing Gluster + Heketi containers in the Storage SIG?
15:21 hchiramm ndevos, I am yet to check
15:21 hchiramm just too busy with few things and release
15:21 mchangir joined #gluster-dev
15:22 ndevos hchiramm: sure, no problem, just asking as a reminder :)
15:22 hchiramm yep :)
15:22 hchiramm ashiq, https://hub.docker.com/u/gluster/ 10k+ pulls
15:24 ashiq hchiramm, wow 10k+ \o/
15:26 hchiramm yeah, after migrating to new account the pull count is really amazing
15:27 nishanth joined #gluster-dev
16:03 shubhendu joined #gluster-dev
16:06 pkalever joined #gluster-dev
16:07 pkalever left #gluster-dev
16:11 pkalever joined #gluster-dev
16:32 Acinonyx joined #gluster-dev
16:37 skoduri joined #gluster-dev
16:46 ndevos kkeithley: fyi, the 3.8.0-2 packages are in the testing repository in the storage sig, once they are on the mirrors, I'll push the centos-release-gluster38 package to CentOS/Extras
17:13 jiffin joined #gluster-dev
17:30 kkeithley ndevos++
17:30 glusterbot kkeithley: ndevos's karma is now 280
18:58 pkalever left #gluster-dev
18:58 pkalever joined #gluster-dev
19:04 jiffin joined #gluster-dev
19:15 jiffin joined #gluster-dev
20:17 shyam joined #gluster-dev
20:28 shyam left #gluster-dev
20:29 hagarth joined #gluster-dev
20:44 pkalever joined #gluster-dev
20:54 nishanth joined #gluster-dev
21:16 penguinRaider joined #gluster-dev
22:27 hagarth joined #gluster-dev
22:42 hagarth joined #gluster-dev
22:57 luizcpg joined #gluster-dev
23:38 luizcpg joined #gluster-dev
