
IRC log for #gluster-dev, 2016-05-20


All times shown according to UTC.

Time Nick Message
00:05 shyam joined #gluster-dev
00:22 luizcpg joined #gluster-dev
00:47 itisravi joined #gluster-dev
00:48 itisravi Hi ndevos can you review/merge http://review.gluster.org/#/c/14372/ ?
01:09 kdhananjay joined #gluster-dev
01:22 dlambrig_ joined #gluster-dev
01:36 EinstCrazy joined #gluster-dev
01:46 poornimag joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:04 EinstCrazy joined #gluster-dev
02:05 josferna joined #gluster-dev
02:32 kdhananjay1 joined #gluster-dev
02:53 EinstCrazy joined #gluster-dev
03:30 EinstCrazy joined #gluster-dev
03:34 pranithk1 joined #gluster-dev
03:40 nbalacha joined #gluster-dev
03:41 EinstCra_ joined #gluster-dev
03:43 itisravi joined #gluster-dev
03:46 atinm joined #gluster-dev
04:03 hagarth joined #gluster-dev
04:17 luizcpg joined #gluster-dev
04:21 itisravi joined #gluster-dev
04:23 shubhendu joined #gluster-dev
04:24 josferna joined #gluster-dev
04:25 sankarshan_away joined #gluster-dev
04:29 sakshi joined #gluster-dev
04:48 spalai joined #gluster-dev
04:48 mchangir joined #gluster-dev
04:50 raghug joined #gluster-dev
04:50 vshankar joined #gluster-dev
04:53 josferna joined #gluster-dev
04:54 EinstCrazy joined #gluster-dev
04:55 Apeksha joined #gluster-dev
04:59 nishanth joined #gluster-dev
04:59 pkalever joined #gluster-dev
05:01 kotreshhr joined #gluster-dev
05:03 gem joined #gluster-dev
05:04 EinstCrazy joined #gluster-dev
05:06 Bhaskarakiran joined #gluster-dev
05:07 ndarshan joined #gluster-dev
05:16 jiffin joined #gluster-dev
05:17 pkalever left #gluster-dev
05:18 kotreshhr joined #gluster-dev
05:19 aravindavk joined #gluster-dev
05:23 prasanth joined #gluster-dev
05:23 poornimag joined #gluster-dev
05:24 nbalachandran_ joined #gluster-dev
05:29 aspandey joined #gluster-dev
05:32 aravindavk joined #gluster-dev
05:38 Saravanakmr joined #gluster-dev
05:38 mchangir joined #gluster-dev
05:42 hgowtham joined #gluster-dev
05:44 ppai joined #gluster-dev
05:44 atinm joined #gluster-dev
05:45 skoduri joined #gluster-dev
05:46 rastar joined #gluster-dev
05:52 asengupt joined #gluster-dev
06:04 vimal joined #gluster-dev
06:06 EinstCrazy joined #gluster-dev
06:13 kdhananjay joined #gluster-dev
06:17 aspandey joined #gluster-dev
06:20 spalai joined #gluster-dev
06:20 jiffin1 joined #gluster-dev
06:22 anoopcs spalai, I verified the change https://review.gluster.org/#/c/14189/ with mandatory-locks. It works as expected.
06:33 kotreshhr joined #gluster-dev
06:37 primusinterpares joined #gluster-dev
06:40 EinstCrazy joined #gluster-dev
06:40 nishanth joined #gluster-dev
06:41 spalai anoopcs: cool. update on gerrit and go ahead with merge.
06:42 anoopcs spalai, Already done.
06:42 spalai anoopcs: ok
06:46 poornimag joined #gluster-dev
06:48 EinstCrazy joined #gluster-dev
06:54 aravindavk joined #gluster-dev
06:59 aspandey joined #gluster-dev
07:00 spalai joined #gluster-dev
07:02 anoopcs pranithk1, https://review.gluster.org/#/c/14189/ got enough reviews. Can you please take a look?
07:02 pranithk1 anoopcs: will do
07:03 anoopcs pranithk1, Thanks.
07:04 EinstCra_ joined #gluster-dev
07:08 kdhananjay joined #gluster-dev
07:14 mchangir joined #gluster-dev
07:19 nbalachandran_ joined #gluster-dev
07:28 kshlm joined #gluster-dev
07:33 kdhananjay joined #gluster-dev
07:37 rastar joined #gluster-dev
07:39 atinm joined #gluster-dev
07:39 jiffin1 joined #gluster-dev
07:53 poornimag joined #gluster-dev
08:06 rraja joined #gluster-dev
08:10 hagarth joined #gluster-dev
08:14 hchiramm joined #gluster-dev
08:28 penguinRaider joined #gluster-dev
08:58 aravindavk joined #gluster-dev
08:59 aravindavk joined #gluster-dev
09:00 jiffin joined #gluster-dev
09:08 mchangir joined #gluster-dev
09:29 spalai joined #gluster-dev
09:30 nigelb misc: Hey, do we have backups of gerrit somewhere?
09:34 misc nigelb: there is some on backups.cloud.gluster.org
09:35 nigelb misc: full copy of /review ?
09:35 nigelb Or just the db folder
09:35 shubhendu joined #gluster-dev
09:35 misc nigelb: full copy, yes
09:36 misc nigelb: not sure how up to date that is however, this was set up before my time and I've never been able to test a restore
09:36 nigelb Hrm.
09:36 misc (and the server is also full)
09:36 nigelb Hrm.
09:36 luizcpg joined #gluster-dev
09:36 nigelb If I want a copy of the whole thing, would you recommend I do a full fresh backup
09:36 nigelb or use one of the old backups?
09:37 misc do a fresh backup
09:37 misc I am not even sure how those backups are done
09:37 nigelb I'm going to advertise for a downtime on Wednesday then.
09:37 nigelb The db has locks and it's only ever safe to dump things when gerrit isn't running.
09:37 misc nigelb: why ?
09:37 nigelb Because I can't be sure the h2 db will be consistent otherwise.
09:38 misc ok so just a 5 minutes downtime, not a full outage
09:38 nigelb I'll advertise 30 mins, but will probably only need 5.
09:38 misc to be honest, I would just wait until gerrit freeze and do it at that time :)
09:38 nigelb when is that?
09:38 misc but unfortunately, I didn't use the lvm backend for the VM, so we can't just do a stop, lvm snapshot, start
09:39 misc nigelb: it does have regular issues so we can't know
09:39 nigelb when you're back from PTO, I'd love to setup a staging server for gerrit
09:39 nigelb so we can test our backups
09:39 misc I was also unable to diagnose, because priority is fixing it, so just a restart is sufficient
09:39 nigelb and test upgrades
09:40 nigelb I tried the h2 to postgres migration on a test instance.
09:40 misc (but I am sure using a java debugger on an old code base looking for race conditions is fun)
09:40 nigelb It went pretty decently.
09:40 misc nigelb: no sql conversion ?
09:40 nigelb Haha, yeah.
09:40 nigelb I basically went h2 -> csv
09:40 nigelb and csv -> postgres
09:40 nigelb I have no clue if that will work on the amount of production data we have.
09:40 nigelb or how much time that'll take.
09:41 nigelb and I let gerrit create the postgres tables rather than do it by hand.
09:42 nigelb misc: My plan is to get a full data dump, test it on a temporary server, and then make a plan for migration to postgresql.
09:42 nigelb that way we'll have a vague number for how much time it'll take.
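
(A rough sketch of the h2 -> csv -> postgres path nigelb describes, with gerrit stopped so the H2 dump is consistent. The /review site path, the h2 jar location and the single table shown are assumptions for illustration; the real ReviewDB has many more tables.)

    # sketch only: paths, jar location and table names are assumptions
    service gerrit stop                      # the H2 file must not be locked by a running Gerrit
    # export a table to CSV with H2's built-in CSVWRITE (repeat per table)
    java -cp /review/lib/h2-*.jar org.h2.tools.Shell \
        -url jdbc:h2:/review/db/ReviewDB -user sa -password "" \
        -sql "CALL CSVWRITE('/tmp/accounts.csv', 'SELECT * FROM ACCOUNTS')"
    # point database.* in gerrit.config at postgres, let gerrit create the empty schema,
    # then bulk-load the CSVs
    java -jar /review/bin/gerrit.war init -d /review --batch
    psql -U gerrit reviewdb -c "\copy accounts FROM '/tmp/accounts.csv' WITH CSV HEADER"
    service gerrit start
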
09:43 misc nigelb: seems fine
09:43 misc I was trying h2 => sql => postgresql
09:43 misc but it wasn't that straightforward
09:43 nigelb I tried that. That path seemed mildly painful.
09:43 nigelb This one was tried by people on the gerrit list.
09:44 misc and I pushed for later when the conclusion was "so I have to use perl to modify sql, what could go wrong"
09:44 nigelb heh, the answer that is always EVERYTHING.
09:44 misc I didn't know about csv however, so I am not sure this was an option on the current gerrit
09:45 misc nigelb: you have a gerrit account ?
09:45 nigelb I have an account.
09:45 nigelb but no access of any sort.
09:45 misc let's see if this can be done without resorting to a sql query
09:46 misc (as I had to connect as root to the VM, shutdown gerrit and use the h2 jar to give myself priv last time..)
09:47 misc nigelb: ok, can you try now ?
09:47 nigelb Yep!
09:47 nigelb that worked :)
09:48 * misc closes his eyes now that he sees the list of admins without the sql to hide the ugly details
09:49 nigelb Now I can try out things on my clone to see if all this goes well.
09:49 nigelb I plan to do a test upgrade as well, just to see what unknown problems we'll run into.
09:49 nigelb so far, on my test instance, the hairy bit was reindexing.
09:49 nigelb Had to do it twice
09:49 misc nigelb: that's also a problem on current instance
09:50 misc I do have a email from ppai about his 2 accounts, and so far, after a few hours, i still have no solution :/
09:52 nigelb there's a nasty sql query.
09:52 nigelb Probably best run after we migrate to postgres.
09:54 misc in this case, that was more a indexing issue
09:54 misc cause the db is ok
09:54 nigelb ahh
09:54 nigelb one of things I found is
09:54 misc and it was working for a while, iirc, but then it broke again
09:54 nigelb we're okay to clear out the entire index folder
09:54 nigelb and reindex
09:55 misc yeah, I think I did it once, but maybe not the right way
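
(A minimal sketch of that offline "clear the index and reindex" step, assuming the site lives in /review; reindex is a stock gerrit.war command in Gerrit 2.x.)

    # sketch: wipe the Lucene index and rebuild it from the database
    service gerrit stop
    rm -rf /review/index/*
    java -jar /review/bin/gerrit.war reindex -d /review
    service gerrit start
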
09:55 misc the sql interface of gerrit is painful
09:55 nigelb yup.
09:55 misc like, no completion, no history
09:55 nigelb I can't wait for postgres.
09:55 misc and looking at the gerrit server, i can't figure how the db is done
09:55 nigelb It's in that h2 file.
09:55 nigelb which is basically something like sqlite, but different.
09:56 nigelb as far as I understand.
09:56 misc I mean, the backup is done
09:56 nigelb Oh.
09:56 nigelb Oh dear :)
09:56 misc so I suspect they are broken, or they were done "offline" from the previous virt host
09:56 Debloper joined #gluster-dev
09:56 * nigelb adds one more thing to his long list
09:57 pkalever joined #gluster-dev
09:57 nigelb misc: is the blog on the backup server or the web server?
09:57 misc nigelb: web server
09:57 misc supercolony.gluster.org
09:57 nigelb can I ask for access to help clean it up? Maybe upgrade wordpress and get it in a stable state?
09:58 misc nigelb: sure
09:58 nigelb considering you're away until 1st of june, there's very little useful things I can do otherwise.
09:58 misc nigelb: I was about to add your ssh keys and let you deal with it, but I got intercepted while entering the office
09:58 nigelb do not want to break gerrit or jenkins when you are not around.
09:59 nigelb I've noticed that when gerrit breaks, sometimes, it really doesn't give me a useful errot.
09:59 nigelb *error
09:59 nigelb `java -jar gerrit.war run` gives more debugging info, but it doesn't use the index, I think (I ran into that yesterday)
10:01 misc nigelb: to be honest, I am sure you will be able to fix stuff like I do
10:02 * misc wait on ansible to push keys around
10:04 nigelb :)
10:09 pkalever left #gluster-dev
10:10 misc nigelb: so ansible pushed stuff, salt too
10:10 pmanny joined #gluster-dev
10:10 misc going to lunch
10:10 penguinRaider joined #gluster-dev
10:11 nigelb misc: thank you!
10:12 pkalever joined #gluster-dev
10:36 raghug joined #gluster-dev
10:42 misc so slave25 is down, yeah
10:44 nigelb uh oh.
10:45 rastar joined #gluster-dev
10:46 atinm joined #gluster-dev
10:47 skoduri joined #gluster-dev
10:53 aravindavk joined #gluster-dev
11:13 misc nigelb: usually, just a reboot is fine, but I still investigate when it happen, because sometime, there is more
11:13 misc (like that time with the script backuping /var/log in /var/log itself...)
11:13 luizcpg joined #gluster-dev
11:14 nigelb heh
11:16 misc so in this case, there was just something that killed openssh :/
11:16 misc on RHEL 7, systemd would have restarted it, but I am unsure of the right solution for el6
11:17 misc either protect openssh from the oom killer, or a cron to restart ssh
11:17 nigelb Or a monitoring thing that restarts the machine from the console after x minutes of not-reachable.
11:17 misc yeah, but then we can't inspect what happened
11:17 nigelb Oh, right.
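
(For the first option misc mentions, protecting sshd from the OOM killer on EL6, a hedged sketch; the -17 value is the legacy oom_adj "never kill" setting on 2.6.32 kernels, and running it from cron is only one way to reapply it.)

    # sketch: exempt all sshd processes from the OOM killer (EL6, no systemd)
    for pid in $(pgrep -x sshd); do
        echo -17 > /proc/$pid/oom_adj    # -17 disables OOM selection for the process
    done
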
11:18 nigelb maybe we need some centralized logging
11:18 nigelb so we can do that and still have logs.
11:18 misc or we can limit test to run in less than X% of the ram
11:18 nigelb ^ I like this.
11:18 misc since my hypothesis is that's a runaway process doing that
11:19 misc but "X%" of ram requires long term data, and munin.gluster.org didn't collect them
11:19 misc and still don't :/
11:19 nigelb Do you know what exactly is wrong?
11:19 misc nope
11:20 nigelb Is it something I can fix by the time you're back?
11:20 misc nigelb: I suspect
11:20 misc that's likely something stupid, like firewall issue
11:20 nigelb I'll dig into it then.
11:20 nigelb I've been attempting to monitor gerrit lately.
11:21 nigelb I can't tell if it's working or not, since I haven't heard of a downtime yet.
11:21 nigelb Or at least a downtime in the specific way we usually fail.
11:22 misc /dev/xvda1       20G   19G  152M 100% /
11:22 misc sigh
11:22 misc so I guess that's the problem with munin
11:22 nigelb ah :(
11:25 raghug joined #gluster-dev
11:25 misc nigelb: so, I see the problem
11:25 misc # pwd ; ls |wc -l
11:25 misc /var/lib/munin/cloud.gluster.org
11:25 misc 227342
11:26 nigelb Ooh, too much data from jenkins?
11:26 misc no
11:26 misc the test suite create device mapper device
11:26 misc and munin pick them
11:27 misc # ls slave46.cloud.gluster.org-diskstats_utilization-patchy* |wc -l
11:27 misc 1362
11:27 misc I did fix the ignore in munin
11:27 misc but I guess munin didn't clean up the rrd as I would have hoped
11:28 nigelb ahh
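
(The cleanup misc performs next amounts to something like the sketch below; only the /var/lib/munin path and the "patchy" device names come from the log, the glob and age threshold are guesses.)

    # sketch: delete stale RRDs munin created for throwaway "patchy" test devices
    cd /var/lib/munin/cloud.gluster.org
    find . -name '*diskstats*patchy*.rrd' -mtime +7 -print -delete
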
11:28 anoopcs pranithk1++
11:28 glusterbot anoopcs: pranithk1's karma is now 3
11:28 misc (of course, it's ironic that the resources monitoring system is the one suffering from resource issues)
11:29 csaba joined #gluster-dev
11:29 nigelb heh.
11:29 nigelb I've run into this once before :)
11:30 misc so 9g freed, so I hope the server is back
11:30 nigelb yay!
11:30 nigelb It's 5 pm. I'm calling it weekend. See you when you're back from PTO, or next week if you're on IRC while on PTO.
11:30 ira joined #gluster-dev
11:30 misc I am always on irc
11:31 misc and I have more than 1 week of PTO :)
11:31 nigelb :)
11:40 skoduri joined #gluster-dev
11:45 kkeithley https://bugzilla.redhat.com/show_bug.cgi?id=1223937#c2   Nearly a year ago it was said:  ... CentOS 6.3, ... is what the glusterfs jenkins is running on. We are working on updating both jenkins and the platform it runs on.
11:45 glusterbot Bug 1223937: medium, unspecified, ---, kkeithle, MODIFIED , Outdated autotools helper config.* files
11:45 kkeithley do we have an ETA yet for when Jenkins will be updated?
11:46 ndevos jenkins was updated, but that is not relevant for the automake/autoconf packages on the build server :)
11:48 kkeithley semi-correct.  The 'release' job runs under Jenkins on the build server.
11:49 ndevos well, we can update the release job whenever we want, but I don't know what we want to change there
11:49 ndevos it's basically this script: https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/release.sh
11:49 kkeithley updating the release job is not the issue
11:50 kkeithley ... updating both jenkins _and_ the platform...
11:50 misc mhh, the platform to run on el7 ?
11:51 ndevos I think only updating the _platform_ and then mainly the autoconf and automake packages
11:51 ndevos misc: build.gluster.org
11:51 kkeithley platform, including the autoconf and automake that the release job uses
11:51 misc ndevos: yeah, so moving from el6 to el7
11:51 misc I would rather suggest doing it in mock
11:51 misc but that's maybe already the case
11:52 ndevos misc: no, we do not create the tarball in mock, that is a little tricky
11:52 misc ndevos: why ?
11:53 misc cause I am not very comfortable having build tools on the jenkins server itself
11:53 ndevos misc: true, but that is the current state of things, it is also the reason why the server is called *build* :)
11:54 misc ndevos: well, we aren't gonna upgrade the server to EL7 soon  so ...
11:54 kkeithley and mock seems mainly meant to build RPMs, not so much to do other arbitrary things, although I'm sure it could be made to do them.
11:54 misc (I wasn't even aware this was a request in the first place)
11:55 misc I would have upgraded anyway to have the same platform, but that was not urgent in my book :/
11:55 ndevos misc: I'm not sure if there was a request to update the server to el7, we 'only' need newer automake packages (that provide config.sub/guess)
11:56 misc ndevos: oh, that's easy, go ask the PM of RHEL to have it backported :)
11:56 kkeithley I don't know that we've ever had an exact conversation about updating build.g.o, other than the ongoing "yeah, we ought to get around to that some day"
11:56 misc but so there are multiple solutions:
11:56 misc - do some backport of that and maintain that
11:57 misc - set up a specific builder for that, running a newer version of the platform
11:57 kkeithley er, it wouldn't be hard to roll our own updated autoconf and automake RPMs for  el6.3.
11:57 kkeithley a specific builder for release jobs is an option too
11:57 misc I would count "move to docker/mock/whatever" as a technical variation of option 2
11:57 ndevos and 'need' is depending on who you ask, see http://thread.gmane.org/gmane.comp.file-systems.gluster.maintainers/727/focus=731
11:58 misc ndevos: is this the right mail, as I fail to see the point :/
11:59 misc ok, so the other mails of the thread
11:59 ndevos misc: the issue is that config.sub/guess get copied from the host system into the release tarball, and those files on build.gluster.org are too old
11:59 misc ndevos: yeah, but the link opened was the patch from patrick so I didn't understand
11:59 ndevos the fix we did for that was to not include the script in the tarball at all
12:00 misc but it broke stuff
12:00 kkeithley right. It's breaking 3.8 (and master)
12:00 ndevos yes, and it would be nice of us to include those files, but a recent version
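
(One hedged way to ship recent copies of those files regardless of how old the build host's automake is: refresh config.guess and config.sub from the upstream config.git just before rolling the tarball. Whether that belongs in release.sh or in autogen.sh was not settled here.)

    # sketch: pull current config.guess/config.sub, then build the release tarball as usual
    wget -O config.guess 'https://git.savannah.gnu.org/cgit/config.git/plain/config.guess'
    wget -O config.sub   'https://git.savannah.gnu.org/cgit/config.git/plain/config.sub'
    ./autogen.sh && ./configure && make dist
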
12:01 misc well, a backport of autoconf/etc is a short term solution
12:01 kkeithley and so for now, the answer is "yes, you do have to run autogen.sh before configure"   A better solution is to update build.g.o or update the auto tools on build.g.o.
12:01 misc but i fear the long term cost of doing that
12:02 misc and I am quite sure that going with the short term solution means we will not do the proper fix :/
12:02 kkeithley and maybe the best solution is to get a dedicated "release.gluster.org" machine to run release jobs on and keep that up to date
12:02 misc up to date being running Fedora?
12:03 kkeithley Debian Stretch. ;-)
12:03 ndevos maybe, or just make sure all jenkins slaves are up to date... and we run the release job on a CentOS-7 or Fedora one?
12:03 atinm joined #gluster-dev
12:03 kkeithley Probably CentOS7 would be fine
12:03 misc all is centos 6 for now
12:03 ndevos misc: also slave{27,28,33} are marked offline by you for doing a reboot since the 11th?
12:04 misc ndevos: I did, thought I did put them back
12:04 gvandeweyer joined #gluster-dev
12:04 misc I guess I didn't see they were offline for that, because jenkins UI do not tell on the summary
12:06 gvandeweyer hi all, is it possible to assign weights to hosts serving bricks? we have a 3*2 setup, and one server (the one I setup gluster from if that matters), show high load, high IO-wait values (~30%) compared to the other two machines.
12:06 kkeithley Do we have any way to see, track, and add infra tasks? E.g. a trello board? Or even just an etherpad?
12:07 misc nope
12:07 misc but as I say, just a list is not gonna work if everything is urgent
12:07 ndevos there is the project-infrastructure component in bugzilla :)
12:09 kkeithley gvandeweyer: is the load on the one server a side effect of the way file names are hashed to decide where to place the files? If so you could use the nufa xlator and adjust the hash to place the files differently
12:10 misc ndevos: wow, didn't knew
12:11 gvandeweyer kkeithley: how would I investigate this ? Wouldn't you expect that in a replicate setup both servers of the same brick would balance their work?
12:11 kkeithley FIFO list, if everything really is urgent.
12:11 ndevos misc: http://bugs.cloud.gluster.org/ even lists some open bugs for it
12:11 misc in fact, the problem is that once we list infra tasks, they are not fixed faster, we just have the illusion of having done something for it, while that's the same
12:12 misc ndevos: it make my firefox become a bit sluggish
12:13 ndevos only once the .json data is loading, I guess, it works fine for me
12:13 misc yeah, then it work fine
12:13 kkeithley gvandeweyer: they should have a similar write workload. For reads the machine that responds first/fastest is going to get the read workload. Is there a reason why one server would always respond first even though it's heavily loaded.
12:13 misc so https://bugzilla.redhat.com/show_bug.cgi?id=1160732
12:13 glusterbot Bug 1160732: high, medium, ---, kwade, ASSIGNED , gluster-devel@nongnu.org should not be accepting email anymore
12:14 misc no one followed up it seems
12:14 misc so that kinda confirm that a list is not gonna help if there is no one to read and groom it
12:14 ndevos I dont know, I think jclift was mainly watching those things....
12:14 misc we could do it some kind of scrum/kanban or meeting
12:15 misc but that involve having a constant amount of time from people who are to take care of the issues
12:16 misc so I suspect we might need another approach to correspond more to the level of involvement we can have from volunteers or people not being full time on the topic
12:16 kkeithley transparency isn't about making anyone work faster. Or even an illusion of working faster. It's just so we can all see what's going on and so that things that need to get done don't get forgotten about.
12:16 misc kkeithley: sure, but having the bug fixed is what we want
12:17 misc if something requires more time and do not result in a improvement regarding the fix, then maybe we have to tweak
12:17 kkeithley yes, but when I stated a year ago that updating jenkins and build.g.o was being worked on, then it got forgotten about.
12:17 kkeithley exactly
12:17 misc now, in the case of the release, I would tend that's more valid because release are infrequent
12:19 kkeithley I don't disagree, but it doesn't change the fact that without some kind of list, things get forgotten.
12:19 kkeithley in this case, updating build.g.o. ;-)
12:19 misc (at the same time, there is also a confusion between "infra" and "release engineering", because IMHO, that's a rel-eng task, but that's caused by the current infra)
12:19 misc kkeithley: well, updating it, outside of the release issue, is on the list of thing to do
12:19 misc just because el67 is better than 6
12:20 misc but yeah, the exact reason were lost
12:20 gvandeweyer kkeithley: I don't know. I would say network is identical for all, all static ips, etc
12:20 misc anyway, for the 3.8, what solution do we want ?
12:20 gvandeweyer kkeithley: illustration of load : http://imgur.com/iRwB6O6
12:20 kkeithley it's been on the list, the list you said  we don't have, and the rest of us can't see ;-)
12:22 kkeithley anyway, I believe the short term solution is run autogen.sh before configure.
12:23 kkeithley short/medium term solution might be update to newer version of auto* on build.g.o (including a backport if necessary)
12:23 misc so who is gonna do the backport ?
12:23 kkeithley longer mediium term solution is to update build.gluster.org to (CentOS) 6.8
12:23 misc kkeithley: mhhh
12:24 misc if that's upgrade to centos 6.8, I can do that now
12:24 misc (if that's out)
12:24 kkeithley oh, well, that would be great.
12:24 kkeithley or 6.7 then
12:24 misc but I was under the impression that we wanted a upgrade to 7
12:25 misc there is no autoconf update however
12:25 kkeithley and the best long term solution would be to have a dedicated 'release' machine to run release jobs.
12:25 kkeithley another facet of long term solution would be to have some real release engineering.
12:25 kkeithley amye: ^^^
12:26 kkeithley and no, that is most emphatically _not_ me.
12:26 * misc did also ask for a rel-eng dedicated person around 1 year ago
12:26 kkeithley yes, I suppose we do want to upgrade to 7.x
12:26 misc (ie, 1 person in charge of infra, 1 for rel-eng and 1 for ci)
12:27 misc installing package kernel-2.6.32-573.26.1.el6.x86_64 needs 8MB on the /boot filesystem
12:27 misc so /boot has 100m
12:28 kkeithley and 22M available
12:29 rraja joined #gluster-dev
12:33 spalai left #gluster-dev
12:37 mchangir joined #gluster-dev
12:55 kkeithley okay, so build.g.o is now at el6.7, including autoconf-2.63-5.1.  Thanks misc++
12:55 glusterbot kkeithley: misc's karma is now 27
12:57 kkeithley 2.63-5.1 is not too old. We could try installing autoconf-2.65-1 from the el6.8 (beta)
12:58 kkeithley I'm guessing that 2.65-1 is from el6.8beta
13:00 kkeithley while el7.x has 2.69
13:01 kkeithley ndevos: do you think 2.63 or 2.65 is new enough? And we should restore config.{guess,sub} to the dist tarball?
13:02 misc but I didn't upgrade autoconf today
13:02 kkeithley didn't it come with the upgrade to el6.7?
13:03 misc nope
13:03 misc Install Date: Sun Oct 14 12:47:42 2012
13:05 kkeithley well, thanks for updating to el6.7 anyway
13:06 kkeithley fwiw
13:06 misc so, looking, rhel 6.8 is out
13:06 misc so let's see for centos 6.8
13:11 spalai joined #gluster-dev
13:12 spalai left #gluster-dev
13:13 kkeithley ndevos: ^^^
13:18 misc so 6.8 can be used for centos, just waiting on anaconda and iso
13:20 EinstCrazy joined #gluster-dev
13:27 nbalachandran_ joined #gluster-dev
13:29 EinstCrazy joined #gluster-dev
13:48 EinstCrazy joined #gluster-dev
13:50 pkalever left #gluster-dev
13:52 kotreshhr joined #gluster-dev
13:53 shyam joined #gluster-dev
14:11 nbalacha joined #gluster-dev
14:41 atinm joined #gluster-dev
14:46 pranithk1 joined #gluster-dev
14:47 rraja joined #gluster-dev
15:08 ndevos kkeithley: config.sub comes from automake, not autoconf - at least on Fedora
15:09 ndevos kkeithley: and I do not know what version would be acceptable...
15:10 gem_ joined #gluster-dev
15:10 kkeithley mkay.  well, if it's that way on Fedora I'm about 99&44/100% confident that's how it is on RHEL.
15:10 ndevos the version attached to the bug by patrick was timestamped 2014-09-11
15:10 kkeithley and maybe we should just have the latest of both
15:11 kkeithley building autoconf 2.69 on RHEL was pretty simple. Let me look at automake
15:14 kkeithley RHEL6
15:15 ndevos kkeithley: patrick is in #gluster as the-me, maybe he remembers the reason for the new version?
15:16 * ndevos goes to play squash, will be back later
15:16 kkeithley in the BZ he said the newer version had support for "newer ports"  by which I think he meant newer linux distributions.
15:16 kkeithley have fun, good luck,
15:16 jiffin joined #gluster-dev
15:19 shyam1 joined #gluster-dev
15:33 mchangir joined #gluster-dev
16:01 misc 2016/05/20 16:00:00 [INFO]: Munin-update finished (650.15 sec)
16:02 misc seriously
16:02 misc mhh, likely a side effect of nofork
16:05 poornimag joined #gluster-dev
16:10 shaunm joined #gluster-dev
16:12 wushudoin joined #gluster-dev
16:13 anoopcs ndevos, https://review.gluster.org/#/c/14457/ for you..
16:13 post-factum misc: omg, munin...
16:13 wushudoin joined #gluster-dev
16:15 misc post-factum: in debug mode, so that could explain why
16:15 anoopcs kkeithley, Feel free to take a look at https://bugzilla.redhat.com/show_bug.cgi?id=1331704.
16:15 glusterbot Bug 1331704: medium, medium, ---, nobody, NEW , Review Request: glusterfs-coreutils - Mimics standard Linux coreutils for GlusterFS clusters
16:16 post-factum misc: /me remembers those days with munin. ugh
16:16 misc post-factum: well, that's the easiest  know to deploy and integrate with ansible :)
16:17 Debloper joined #gluster-dev
16:17 post-factum misc: could be, but what's wrong with zabbix?
16:17 misc I only managed to get cacti  right after entering some kind of special state of the mind, zabbix is not very ansible friendly
16:17 misc post-factum: last time I checked, all was done with api
16:17 kkeithley anoopcs: hmm. None of the packing cabal took that up?  That's rather annoying
16:17 kkeithley packaging
16:18 post-factum misc: api is good approah in general, isn't it?
16:18 anoopcs :-(
16:18 post-factum *approach
16:18 misc post-factum: well, I'd rather manage a complete file, that's faster
16:18 misc like "here is the file I want", rather than 1 million function calls
16:19 misc an api is great for events, do not get me wrong
16:19 post-factum misc: that is why glusterfs layout is based on files :D
16:19 anoopcs kkeithley, You are one among them .. :-)
16:19 misc like "set this server in maintainance mode and do not notfy for the 10 next mintues"
16:19 misc API is great for that
16:19 kkeithley me? In the packaging cabal?   Hardly.
16:20 anoopcs Really?
16:21 misc or an API is great if stuff is changing quite often too and you want orchestration outside of the cfg mgmt, sure
16:21 misc post-factum: anyway, that's why I took munin and not zabbix
16:21 post-factum misc: ok, got that
16:21 misc (this and the fact that zabbix web interface is in php, and I wasn't able to understand simple stuff)
16:22 kkeithley I have three packages in Fedora, but I'm not part of the elite inner circle. I'm not even a "proven packager"
16:22 misc that's exactly what someone who is part of a secret society of packager would say :p
16:23 anoopcs kkeithley, Ok...
16:24 kkeithley tinc
16:24 misc kkeithley++
16:24 glusterbot misc: kkeithley's karma is now 117
16:26 anoopcs ;-)
16:27 kkeithley I'll review it. watch, I bet if I say it's okay a cabal member will come out of the woodworks and find 10+ things wrong with it. ;-)
16:28 misc /msg #cabal "he" knows, do not go out right after his post
16:28 misc oops
16:29 kkeithley lol
16:29 misc mhh, I am packager sponsor so I could look
16:29 anoopcs Cool...
16:31 kkeithley heh
16:31 misc but so far, it seems ok
16:31 kkeithley what about glusterfs-coreutils.spec:16: W: unversioned-explicit-provides bundled(gnulib)  ?
16:32 kkeithley shouldn't it have _hardened_build ?
16:32 misc that's a false positive
16:32 misc (the rpmlint error)
16:32 anoopcs I think misc got it right here.
16:34 kkeithley ???   Provides:         bundled(gnulib)
16:34 misc yeah, that's ok
16:34 misc mhh
16:35 anoopcs kkeithley, We do have an exception for gnulib
16:35 misc wonder if it shouldn't provides the version, not 100% sure
16:35 kkeithley okay
16:35 kkeithley isn't the version going to depend on which version of gcc you build with?
16:35 kkeithley nm, that's libgcc I'm thinking of
16:36 anoopcs misc, I came across some packages without specifying a particular version.
16:36 misc https://fedoraproject.org/wiki/Bundled_Libraries_Virtual_Provides
16:36 misc anoopcs: yeahn I see that
16:36 anoopcs I was searching for that link.
16:36 misc but since the goal is to find which version is bundled, I would push for that, if gnulib has a concept of version, of course
16:38 anoopcs https://github.com/coreutils/gnulib/releases
16:38 misc I take that as "no"
16:42 shubhendu joined #gluster-dev
16:51 kkeithley and it appears that %global _hardened_build 1  is not required as of F23
16:57 kkeithley is it needed for EPEL & CentOS ? If only as a Good Idea®
16:58 spalai joined #gluster-dev
16:59 anoopcs ndevos would like to have it for CentOS Storage SIG
16:59 kkeithley would be good if you could add the review checklist to the BZ along with status for each item in the checklist
17:00 kkeithley it would be good
17:00 misc mhh, you mean cut and paste of fedora-review, or more integrated ?
17:01 kkeithley maybe people don't do that any more.  ISTR doing it for mine (years ago)
17:03 anoopcs I cut and paste the whole review.txt in my initial review of a package (https://bugzilla.redhat.com/show_bug.cgi?id=1308779#c6). I don't know if that's intended or not?
17:03 glusterbot Bug 1308779: medium, medium, ---, puiterwijk, CLOSED ERRATA, Review Request: git-tools - Assorted git-related scripts and tools
17:05 kkeithley that was for git-tools? Yes, like that.
17:05 anoopcs just an example.
17:08 kkeithley gah, test phase of rpmbuild automake is taking forever
17:10 kkeithley anoopcs: so you're already a packager? You don't need a sponsor?
17:11 anoopcs kkeithley, Yeah...But I need reviews.
17:13 kkeithley rpmlint complains about Source0.   I think you need something like this one  Source0:        https://github.com/%{name}/%{name}/archive/V%{dash_dev_version}/%{name}-%{dash_dev_version}.tar.gz
17:14 anoopcs kkeithley, Hm. We don't have an upstream release yet.
17:14 anoopcs Can we have something like that without a release in github?
17:15 kkeithley that's just an example. Even without a tagged release I think you can still do the https://github.com/.... part with the short-commit ID can't you?
17:16 kkeithley but feel free to tell me I'm all wet
17:17 anoopcs I have not done anything like that before. After all I don't have permission for github.com/gluster/glusterfs-coreutils
17:20 anoopcs Did you mean git-archive?
17:20 kkeithley sure, but I don't think you need any permissions in order to give a valid Source0: link.  Do you have the source tarball at https://anoopcs.fedorapeople.org/glusterfs-coreutils/ ?
17:20 kkeithley I think you could use a link to that as an interim Source0: entry
17:21 anoopcs kkeithley, Yes.
17:21 anoopcs kkeithley, I mean to say that I don't have permission in github for glusterfs-coreutils repo
17:22 kkeithley that's what I understood you to mean.
17:23 kkeithley although maybe it doesn't matter.
17:24 kkeithley I just don't know what any other reviewers will make of the Source0:
17:27 kkeithley See, if I was in the Cabal, I'd know the answer. ;-)
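
(For the Source0 question, a hedged illustration of the commit-archive form being discussed; the hash is a placeholder and the spec line is just the common Fedora pattern, not something agreed on in this channel.)

    # sketch: GitHub serves a tarball for any commit, so a pre-release Source0 can point at one
    commit=0123456789abcdef0123456789abcdef01234567    # placeholder
    wget "https://github.com/gluster/glusterfs-coreutils/archive/${commit}/glusterfs-coreutils-${commit:0:7}.tar.gz"
    # matching spec line (illustrative):
    #   Source0: https://github.com/gluster/%{name}/archive/%{commit}/%{name}-%{shortcommit}.tar.gz
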
17:29 anoopcs he..hee
17:48 ndevos anoopcs: I'm planning to take on the 3.8 merging later tonight or over the weekend
17:49 ndevos misc: anoopcs already is a packager, so no sponsoring needed, only the package review and approval
17:55 shyam joined #gluster-dev
18:29 misc ndevos: sure, just wanted to signal I would be part of the cabal if it did exist :p
18:30 misc and the source0 is fine, iirc
18:30 misc rpmlint tend to complain a lot, potentially because it was done by a french guy who gave it to another french guy
18:30 misc (who both now work at RH, and maybe I was the 2)
18:59 ndevos misc: that's good to know, I was abusing puiterwijk as proxy :D
19:02 misc ndevos: that's also a solution :p
19:06 lpabon joined #gluster-dev
19:56 kotreshhr joined #gluster-dev
20:06 kotreshhr left #gluster-dev
21:06 wushudoin joined #gluster-dev
22:19 shaunm joined #gluster-dev
