
IRC log for #gluster-dev, 2016-07-08


All times are shown in UTC.

Time Nick Message
02:10 shyam joined #gluster-dev
02:24 nbalacha joined #gluster-dev
03:11 pkalever joined #gluster-dev
03:34 skoduri joined #gluster-dev
04:09 itisravi joined #gluster-dev
04:11 atinm joined #gluster-dev
04:29 shubhendu joined #gluster-dev
04:31 shubhendu joined #gluster-dev
04:33 aspandey joined #gluster-dev
04:35 sakshi joined #gluster-dev
04:36 penguinRaider joined #gluster-dev
04:47 kdhananjay joined #gluster-dev
04:51 ashiq joined #gluster-dev
04:53 karthik___ joined #gluster-dev
04:54 nishanth joined #gluster-dev
04:59 karthik___ misc, Hey, I had sent a PR on planet gluster to tag my blogs on the WORM/Retention translator and Niels had merged that. https://github.com/gluster/planet-gluster/pull/32 But it's still not showing up on planet.gluster.org
05:00 karthik___ misc, Is this something related to https://github.com/gluster/planet-gluster/issues/20
05:00 karthik___ ?
05:01 penguinRaider joined #gluster-dev
05:02 ndarshan joined #gluster-dev
05:10 Apeksha joined #gluster-dev
05:10 Apeksha joined #gluster-dev
05:12 poornimag joined #gluster-dev
05:19 Muthu__ joined #gluster-dev
05:19 Muthu_ joined #gluster-dev
05:22 pkalever left #gluster-dev
05:23 sanoj joined #gluster-dev
05:27 prasanth joined #gluster-dev
05:28 Apeksha_ joined #gluster-dev
05:40 spalai joined #gluster-dev
05:49 asengupt joined #gluster-dev
05:50 penguinRaider joined #gluster-dev
05:51 hchiramm joined #gluster-dev
05:53 skoduri joined #gluster-dev
05:56 devyani7_ joined #gluster-dev
06:02 prasanth joined #gluster-dev
06:09 Apeksha joined #gluster-dev
06:17 kshlm joined #gluster-dev
06:18 pur joined #gluster-dev
06:21 nbalacha joined #gluster-dev
06:28 atalur joined #gluster-dev
06:30 jiffin joined #gluster-dev
06:33 ramky joined #gluster-dev
06:41 shubhendu_ joined #gluster-dev
06:46 Saravanakmr joined #gluster-dev
06:47 rastar joined #gluster-dev
07:06 shubhendu_ joined #gluster-dev
07:39 penguinRaider joined #gluster-dev
07:43 karthik___ joined #gluster-dev
07:44 prasanth joined #gluster-dev
07:47 pranithk1 joined #gluster-dev
07:50 nbalacha joined #gluster-dev
07:52 kshlm joined #gluster-dev
08:20 post-factum nbalacha: http://review.gluster.org/#/c/14875/ could this be the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1348095 ?
08:20 glusterbot Bug 1348095: medium, unspecified, ---, bugs, NEW , GlusterFS memory leak on bricks reconnection
08:21 nbalacha post-factum, Let me take a look at the BZ and get back to you
08:21 post-factum nbalacha: ok, thanks
08:25 aravindavk joined #gluster-dev
08:28 nbalacha pranithk1, ping
08:28 glusterbot nbalacha: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
08:29 pranithk1 nbalacha: yes nithya
08:29 nbalacha pranithk1, regarding BZ 1348095 which post-factum has raised
08:29 pranithk1 nbalacha: checking
08:30 nbalacha pranithk1, BZ indicates shd is using a great deal of memory - do you know if anyone has looked at it already?
08:30 post-factum nbalacha: not only shd but it seems that any client (fuse too)
08:30 pranithk1 nbalacha: nope, I didn't get a chance to take a closer look
08:31 nbalacha pranithk1, ok. thanks
08:31 pranithk1 nbalacha: Do you have any insight?
08:31 nbalacha pranithk1, no. post-factum just pinged me about this. I saw that it was the shd process so thought of checking with you
08:32 pranithk1 nbalacha: okay
08:32 misc karthik___: I am on PTO today, so I will look on monday at best, i think it would be better to send that to gluster-infra until we figure the right place to open a ticket
08:35 karthik___ misc, No issues. Thanks!
08:36 misc I see that we are also missing a .travis.yml file to verify merge :/
08:43 ramky joined #gluster-dev
08:45 nbalacha post-factum, do you have a way to reproduce the memleak issue
08:45 post-factum nbalacha: sure, i described it below in comment
08:46 post-factum nbalacha: pkilling brick and brinning it back to force client to reconnect perfectly recreates the issue
08:47 post-factum *bringing
08:47 nbalacha post-factum, so I should see this if I fuse mount a gluster vol, and continuously kill and restart a brick process?
08:47 post-factum nbalacha: correct, fuse client VSZ should grow each time you restart the brick
08:48 post-factum nbalacha: you don't even need to touch the brick itself, just do the trick with iptables and -j REJECT to drop the connection to it
08:48 post-factum nbalacha: that worked for me too
08:50 post-factum nbalacha: or start/stop the volume. whatever, you know. any thing that makes client reconnect will work
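
A minimal sketch of the reproduction described above, assuming a brick host, brick port and FUSE mount point that are made up for illustration (server1, 49152, /mnt/testvol); it repeatedly rejects and re-allows traffic to the brick to force the client to reconnect, and prints the client's VSZ after each cycle so any growth is visible:

    #!/bin/bash
    # Hypothetical reproduction of the reconnect leak discussed above.
    BRICK_HOST=server1      # assumption: host running the brick
    BRICK_PORT=49152        # assumption: the brick's listening port
    MOUNT=/mnt/testvol      # assumption: local FUSE mount point

    client_pid=$(pgrep -f "glusterfs.*${MOUNT}" | head -n 1)

    for i in $(seq 1 20); do
        # Reject traffic to the brick so the client drops the connection ...
        iptables -I OUTPUT -p tcp -d "$BRICK_HOST" --dport "$BRICK_PORT" -j REJECT
        sleep 10
        # ... then remove the rule so the client reconnects.
        iptables -D OUTPUT -p tcp -d "$BRICK_HOST" --dport "$BRICK_PORT" -j REJECT
        sleep 10
        # VSZ of the FUSE client should stay roughly flat; steady growth
        # after every cycle is the behaviour being reported.
        echo "cycle $i: VSZ $(ps -o vsz= -p "$client_pid") kB"
    done
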
08:52 nbalacha post-factum, ok
09:01 misc karthik___: ok so I am bad at taking PTO and took a look, and the planet is building fine but it seems to block on one specific feed
09:03 post-factum skoduri: hello. I've reverted the 2 patches you asked about and repeated the stress-testing, but got a brick crash anyway, so the reason is somewhere else
09:09 post-factum skoduri: updated both bugreports to reflect this info
09:28 nbalacha post-factum, I'm using the latest master code and checking the memusage for the fuse client while stopping and starting a volume repeatedly
09:28 nbalacha post-factum, I do not see a leak so far
09:28 nbalacha post-factum, I am using a pure dist vol.
09:28 post-factum nbalacha: 3.7 here...
09:29 nbalacha post-factum, 3.7 branch?
09:29 nbalacha *latest branch?
09:30 post-factum nbalacha: 3.7 as pointed in bugreport
09:30 nbalacha post-factum, ok. shall try with 3.7 branch as well
09:31 post-factum nbalacha: thanks!
09:31 jiffin1 joined #gluster-dev
09:47 mchangir joined #gluster-dev
09:56 kdhananjay1 joined #gluster-dev
09:57 sakshi joined #gluster-dev
09:59 jiffin1 joined #gluster-dev
10:00 nbalacha pranithk1, got a minute?
10:00 Saravanakmr joined #gluster-dev
10:00 pranithk1 nbalacha: Need to go for lunch. Is it okay to have this discussion after it?
10:00 nbalacha pranithk1, sure
10:01 penguinRaider joined #gluster-dev
10:03 ira joined #gluster-dev
10:05 kdhananjay joined #gluster-dev
10:23 skoduri post-factum, sorry was afk..
10:23 skoduri post-factum, I will check the bug report...so it does happen only when there is port probing right?
10:29 pranithk nbalacha: tell me
10:44 post-factum skoduri: correct
10:45 post-factum skoduri: without probing the test lasted much longer, and i saw no crashes
10:45 post-factum skoduri: ~15 mins or so. but if i do probing, it crashes within 1-2 minutes
10:46 skoduri post-factum, okay ..what were the tests you were running?
10:46 post-factum skoduri: emmm... I've described them in detail in the bugreport
10:47 post-factum skoduri: basically. it was 4 tabs in tmux. 1 tab is tcp probing with nmap, 3 others are creating files in a loop with touch, stat them via find and remove by rm -rf
10:47 post-factum skoduri: creating/stat/remove in parallel in a loop
10:48 post-factum skoduri: if it is that important:
10:48 post-factum index=0; while true; do hash=$(echo $index | sha1sum); p1=$(echo $hash | cut -c 1-2); p2=$(echo $hash | cut -c 3-4); sudo mkdir -p $p1/$p2; sudo touch $p1/$p2/$hash; ((index++)); done
10:48 glusterbot post-factum: ((index's karma is now 1
10:48 post-factum while true; do find .; done
10:48 post-factum while true; do sudo rm -rfv *; sleep 10; done
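
For reference, a cleaned-up sketch of the three loops pasted above; the directory layout and the 10-second sleep come from the messages, while running each loop from the root of the same FUSE mount in its own tmux pane is an assumption:

    # 1) creator: hash a counter, make a two-level directory from the hash prefix,
    #    and touch a file named after the hash
    index=0
    while true; do
        hash=$(echo "$index" | sha1sum | awk '{print $1}')
        p1=${hash:0:2}
        p2=${hash:2:2}
        sudo mkdir -p "$p1/$p2"
        sudo touch "$p1/$p2/$hash"
        index=$((index + 1))
    done

    # 2) reader: stat the whole tree in a loop
    while true; do find .; done

    # 3) remover: wipe everything every 10 seconds
    while true; do sudo rm -rfv *; sleep 10; done
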
10:48 skoduri post-factum, sorry.. hadn't looked at bug in detail I guess...I will try to reproduce and update my findings if any
10:48 post-factum dumb and straightforward
10:48 post-factum no problem, let me know if you need more info
10:49 post-factum index--
10:49 glusterbot post-factum: index's karma is now -1
10:49 post-factum hm
10:49 post-factum oh
10:49 post-factum index++
10:49 glusterbot post-factum: index's karma is now 0
10:49 post-factum ((index--
10:49 glusterbot post-factum: ((index's karma is now 0
10:51 penguinRaider joined #gluster-dev
10:51 skoduri post-factum, hey so are you not hitting crash in inode_unref(..) anymore? I do not see that updated in the bug
10:51 post-factum skoduri: emm. wait, i've created another one and CCed you
10:51 skoduri post-factum, or wait was there another BZ for that?
10:51 skoduri post-factum, oops :)
10:51 post-factum skoduri: you should have got another message ;)
10:52 post-factum skoduri: as I assumed, those are 2 different bugs. or, at least, with 2 different consequences
10:52 post-factum skoduri: have you found that?
10:52 skoduri post-factum, yupp
10:52 post-factum ok then
11:06 shyam joined #gluster-dev
11:12 aspandey joined #gluster-dev
11:17 anoopcs ndevos, shyam : I have updated https://review.gluster.org/#/c/11177/ .
11:21 aspandey joined #gluster-dev
11:26 ndevos anoopcs: yes, I've seen that, I actually check those emails Gerrit sends me ;-)
11:28 anoopcs In case you miss it among the numerous other Gerrit mails you receive every day. :-)
11:31 ndevos :)
11:46 * kkeithley was expecting to arrive at the office and find 3.7.13 and 3.8.1 releases.
11:46 kkeithley Are we stuck in "one more fix" hell again?
11:50 ndevos kkeithley: no, just busy with other things
11:56 post-factum kkeithley: fixing is good, releasing broken tags and forcing ppl to patch the code themselves is not that good
11:56 kkeithley promising a release but never actually releasing is not that good either.
11:57 kkeithley There will be another release. But we can't have _another_ release until we have _this_ release.
11:57 kkeithley There will always be bugs.
11:59 ndevos I am of the opinion that a release should be made whenever the schedule says so, and bugs have been fixed
11:59 kkeithley This is Open Source. This is the community release.  If you want the illusion of something else, you should buy RHGS.
11:59 ndevos fixes for some bugs will make at least a few users happy, others will still have the same bugs as in previous releases
11:59 kkeithley exactly
12:00 ndevos oh, we agree, that's a rare occasion!
12:00 kkeithley I don't see kshlm online anywhere.
12:00 kkeithley lol. We agree more often than not.
12:00 kshlm I'm here.
12:00 ndevos hah, yes, I guess so too
12:00 * ndevos hands a tag to kshlm
12:00 kkeithley oh, hi
12:02 * kshlm is making good use of the tag
12:02 skoduri post-factum, I ran similar tests (leaving out gluster v status) on my workspace tagged to 3.7.11 .. I do not see any inode_unref crash
12:02 skoduri post-factum, I will check with 3.7.12
12:02 post-factum skoduri: what was your volume layout (how many bricks) and how long the test lasted?
12:03 kkeithley going off-line for a bit. rebooting to upgrade to F24.  Because my fscking desktop was frozen again. Wut? A bug in Open Source Software?  Say it isn't so!
12:03 skoduri post-factum, I created a 2*3 volume and ran the test for about 10 min
12:04 post-factum skoduri: it should be triggered within several minutes... however, we have 10 bricks here instead of 6
12:04 skoduri post-factum, oh okay..I thought it would be hit in 2-3 min .. I will wait for some more time then
12:05 skoduri post-factum, sorry mine is 2*2 = 4 bricks
12:05 pranithk1 joined #gluster-dev
12:20 shyam joined #gluster-dev
12:22 post-factum skoduri: let me know. if the result is negative, let's revise steps you performed to reproduce the issue. probably, i've missed something
12:22 skoduri post-factum, sure..I am right now updating my workspace..
12:28 spalai left #gluster-dev
13:38 kkeithley kshlm++
13:38 glusterbot kkeithley: kshlm's karma is now 94
13:39 kkeithley hmm, looks like I didn't get the sha256sum part of the release script right
13:43 kdhananjay joined #gluster-dev
13:48 ndevos kkeithley: how long do you need to fix it? I can wait a little with v3.8.1 :)
13:48 kkeithley it's fixed (but not pushed to the github repo
13:49 kkeithley )
13:50 kkeithley so go ahead
13:52 * ndevos needs to do the release notes etc, will tag after that
14:08 kkeithley lots of *printf format string warnings creeping back in on 32-bit :-(
14:09 hagarth joined #gluster-dev
14:09 kkeithley particularly size_t and ssize_t that should use %z
14:09 kkeithley but don't
14:21 spalai joined #gluster-dev
14:32 ndevos should we have a job for complaining about that?
14:33 ndevos we could even download and parse http://artifacts.ci.centos.org/gluster/nightly/master/6/i386/build.log every day
14:38 poornimag joined #gluster-dev
14:45 hagarth joined #gluster-dev
14:54 kkeithley ndevos++
14:54 glusterbot kkeithley: ndevos's karma is now 283
14:54 kkeithley +1 to having a job for 32-bit format string warnings
14:55 ndevos kkeithley: give me a regex that can be used to match such a build.log and I'll put it in the CentOS CI
14:56 ndevos kkeithley: or, even the exact warning with XXX in the location that can vary
14:58 kkeithley XXX: warning: format '%ld' expects argument of type 'XXX', but argument XXX has type 'ssize_t {aka int}' [-Wformat=]
14:58 kkeithley XXX: warning: format '%XXX' expects argument of type 'XXX', but argument XXX has type 'ssize_t {aka int}' [-Wformat=]
14:59 kkeithley and XXX: warning: format '%XXX' expects argument of type 'XXX', but argument XXX has type 'size_t {aka int}' [-Wformat=]
14:59 kkeithley https://kojipkgs.fedoraproject.org//work/tasks/1740/14821740/build.log
15:02 kkeithley correction, and XXX: warning: format '%XXX' expects argument of type 'XXX', but argument XXX has type 'size_t {aka unsigned int}' [-Wformat=]
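
A sketch of what such a job could grep for, assuming the nightly i386 build.log mentioned earlier is the input; the pattern only needs to catch the size_t/ssize_t -Wformat warnings quoted above:

    #!/bin/bash
    # Sketch: count 32-bit size_t/ssize_t format-string warnings in a build log.
    LOG_URL=http://artifacts.ci.centos.org/gluster/nightly/master/6/i386/build.log

    curl -s "$LOG_URL" \
        | grep -E "warning: format '%[^']+' expects argument of type '[^']+', but argument [0-9]+ has type 's?size_t \{aka (unsigned )?int\}' \[-Wformat=\]" \
        | sort | uniq -c | sort -rn
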
15:03 mchangir joined #gluster-dev
15:03 wushudoin joined #gluster-dev
15:04 ndevos kkeithley: I'll use http://artifacts.ci.centos.org/gluster/nightly/master/6/i386/build.log instead, that will get updated every day :)
15:04 ndevos its rather difficult to figure out what path to use from koji
15:07 kkeithley true, on top of which there won't be daily builds in koji
15:10 shaunm joined #gluster-dev
15:12 kkeithley but I only meant the link to koji as a place to look at 32-bit build warnings
15:52 kaushal_ joined #gluster-dev
15:58 spalai joined #gluster-dev
16:02 dblack joined #gluster-dev
16:20 Manikandan joined #gluster-dev
16:25 shyam joined #gluster-dev
16:26 nigelb kkeithley: I want to see if I can make that as a part of the job.
16:26 nigelb If you add a new one, the smoke test will fail.
16:27 nigelb And at some point we mark the existing ones as "good first bugs"
16:27 kkeithley make what part of what job?
16:27 nigelb That's my goal with all of these kind of tests, Coverity, clang analyser.
16:27 nigelb A job.
16:27 nigelb A linter sort of thing
16:27 nigelb which just makes sure you haven't added more failures to our existing bunch.
16:28 penguinRaider joined #gluster-dev
16:28 kkeithley okay
16:28 nigelb One that doesn't exist yet, but once I have the jobs in a stable state on JJB, I can add more jobs with less effort.
16:30 nigelb kkeithley: The stuff you run on the labs machines is the top of my list to move to Jenkins, so you don't have to maintain that anymore.
16:30 nigelb (Coverity will need some tinkering though)
16:31 kkeithley yes, the license will be an issue
16:32 kkeithley cppcheck and clang compile.  clang analyzer finds a lot of things that might be considered truth and beauty, but aren't really bugs.  I think it's more noise than signal.
16:32 kkeithley but we can run it anyway, just for giggles
16:33 nigelb The trick, I suspect, would be to build a mechanism for all of these things to mark them as "known issues" and "not an issue" in our code or test configuration.
16:33 nigelb So we can have a loud failure when something new gets added.
16:33 kkeithley yup
16:33 nigelb This is probably what I'll tackle after this round of Jenkins things.
16:33 nigelb Someone should have done this before.
16:35 misc nigelb: you can take a look at how openstack did it with bandit
16:35 kkeithley well, in a perfect world there are dozens of things we could have and should have been doing all along.   Trying to run Coverity six years into a project is an exercise in futility.
16:35 misc since bandit is also pushing lots of false positive
16:35 kkeithley You need to use it from day one.
16:36 ndevos nigelb: I've got this for testing now, https://github.com/nixpanic/glusterfs-patch-acceptance-tests/tree/centos-ci/gluster_strfmt
16:36 ndevos and the 1st email should have been sent, but I do not know where it is lingering around...
16:36 misc kkeithley: so you think we couldn't fix it in the long run, like "fix 1 Coverity issue per day or you do not get paid" stuff ?
16:36 ndevos kkeithley: https://ci.centos.org/view/Gluster/job/gluster_strfmt/2/console seems to do it, if the email ever arrives
16:37 ndevos nigelb: we actually should check+merge+enable https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/17 as well
16:38 nigelb ndevos: I'm moving that repo into gerrit soonish
16:38 kkeithley misc: sure, we could try that.  But just take a look at all the coverity patches in gerrit left over from the interns two summers ago.
16:38 nigelb there's a thread about this on devel+infra
16:38 nigelb kkeithley: They're good first bugs for sure.
16:38 ndevos nigelb: ok, should be good :)
16:38 nigelb ndevos: Ooh, that one is on my list.
16:39 nigelb That's getting converted to JJB and moved into Jenkins :)
16:39 ndevos nigelb: ok, maybe assign it to youself then?
16:39 nigelb Done.
16:39 nigelb rastar and kshlm pointed me to that one last week.
16:39 ndevos hmm, email arrived, but it is rather empty...
16:39 kkeithley if by first bugs you mean easy to fix for first timers (freshers) then no. Many Coverity bugs are hard to fix correctly.
16:40 nigelb Ohh.
16:40 nigelb My more simplistic goal is, we have X failures. Let's not add more than X failures.
16:40 nigelb (unless they're false positives, in which case we add them as known issues)
16:41 misc I wouldn't accept false positive either
16:41 nigelb At some point we need to tackle them, perhaps when someone is touching that bit of code again, and we can build a habit of fixing any Coverity issues around there.
16:41 nigelb But before we get there, we need to arrest adding more real failures for all of them.
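
A minimal sketch of that "no more than X failures" gate, assuming the baseline count is kept in a file in the repo; the file name and log name are made up for illustration:

    #!/bin/bash
    # Sketch: fail loudly if the build added warnings beyond the recorded baseline.
    set -eu
    BASELINE_FILE=ci/known-warning-count.txt   # assumption: baseline checked into the repo
    BUILD_LOG=build.log                        # assumption: compiler output captured here

    current=$(grep -c 'warning:' "$BUILD_LOG" || true)
    baseline=$(cat "$BASELINE_FILE")

    if [ "$current" -gt "$baseline" ]; then
        echo "FAIL: $current warnings, baseline is $baseline -- new warnings were added"
        exit 1
    fi
    echo "OK: $current warnings (baseline $baseline)"
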
16:42 kkeithley That's another Coverity gotcha. Ideally we would put Coverity false positives into the Workbench (I think that's what they call it) and then they don't keep coming up in the following covscan.
16:42 nigelb Yeah, I've seen the internal one.
16:42 misc because, detecting what is a false positive and what isn't might take a lot of time, and requires someone well versed in Coverity, so we might soon put too much load on that person
16:43 misc so flat out "no false positive" distribute the load and the duty of fixing that
16:43 nigelb This is why Coverity will take some time to figure out :)
16:43 nigelb misc: I'm thinking of putting together a "what's gluster infra" been doing email next week.
16:44 nigelb Will you have time to add what you've been working on as well?
16:44 ira_ joined #gluster-dev
16:44 misc nigelb: sure, should I send a picture of my mojito with my pool in background, or do you prefer me on the deckchair :) ?
16:44 nigelb haha
16:45 nigelb I was thinking of putting in, for example, the infra docs you've written up.
16:45 misc but what i did this week, before transit + mojito was "fixing freeipa and fixing random errors on ansible" + infra doc
16:45 misc (and being in a security conference unrelated to gluster afaik)
16:45 nigelb excellent. I'll put it on an etherpad on Monday and run it by you before sending it out.
16:45 kkeithley bah, bloody xenial wants "reproducible builds" now.  No more build strings with dates in them
16:46 kkeithley dates and times
16:46 nigelb heh
16:46 nigelb ubuntuN
16:46 nigelb that's what they do
16:46 nigelb ubuntu0, ubuntu1, ubuntu2, etc
16:46 misc kkeithley: that's interesting, how does it manifest?
16:47 misc (like, I know debian is doing it, but I didn't know ubuntu pushed that too)
16:48 jiffin joined #gluster-dev
16:48 kkeithley one sec
16:49 kkeithley trying to find it in the build log
16:50 kkeithley at https://launchpadlibrarian.net/271770656/buildlog_ubuntu-xenial-amd64.glusterfs_3.7.13-ubuntu1~xenial1_BUILDING.txt.gz
16:51 kkeithley or maybe not, maybe it's complaining about unpackaged files it found
16:52 kkeithley it didn't have a problem with 3.7.12.   maybe this is the /usr/lib/python2.7/{site,dist}-packages thing.
16:52 kkeithley yeah, that's it.  Wonder why my fix didn't carry over from 3.7.12
16:53 nigelb ha.
16:55 hagarth1 joined #gluster-dev
17:02 kkeithley oh, because I didn't need to do it for 3.7.12.  But I did for 3.8.0.  I must have straddled the window when they made that change.
17:03 kkeithley somehow?   I thought I did 3.8.0 before 3.7.12.  weird
17:12 Manikandan joined #gluster-dev
17:17 overclk joined #gluster-dev
17:31 kkeithley download.gluster.org:/var/www/html is full (33K remaining)
17:34 misc put 5 more G on it
17:34 misc but I wonder why it seems to grow so much :/
17:35 misc or rather so fast
17:35 * misc also keep in mind to write doc on monday
17:46 nigelb I looked at some monitoring data I had for gerrit.
17:46 nigelb We've had 5 outages since June.
17:46 nigelb Two of which were planned outages for the upgrade and then fixing the username issues after the upgrade.
17:47 nigelb Of the remaining 3, one lasted 3 mins and two lasted one min each.
17:47 nigelb Generally, we're more stable now.
17:53 jiffin joined #gluster-dev
17:54 pkalever joined #gluster-dev
18:10 pkalever joined #gluster-dev
18:22 pkalever left #gluster-dev
18:39 penguinRaider joined #gluster-dev
19:40 shyam joined #gluster-dev
20:52 shyam joined #gluster-dev
21:48 shaunm joined #gluster-dev
22:10 penguinRaider joined #gluster-dev
