IRC log for #gluster-dev, 2016-06-07


All times are shown in UTC.

Time Nick Message
00:17 luizcpg joined #gluster-dev
00:24 shyam joined #gluster-dev
00:25 nigelb hagarth: I am now and I'm looking.
01:36 pranithk1 joined #gluster-dev
01:49 ilbot3 joined #gluster-dev
01:49 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:12 shyam joined #gluster-dev
02:22 luizcpg joined #gluster-dev
02:31 penguinRaider joined #gluster-dev
02:55 penguinRaider joined #gluster-dev
03:14 nigelb hagarth: do you know which of the jenkins slaves are actively used by us?
03:14 nigelb All of them?
03:14 nigelb I've just restarted jenkins with a gerrit plugin upgrade
03:17 spalai joined #gluster-dev
03:17 spalai left #gluster-dev
03:18 penguinRaider joined #gluster-dev
03:19 overclk joined #gluster-dev
03:43 nishanth joined #gluster-dev
03:46 itisravi joined #gluster-dev
03:48 atinm joined #gluster-dev
03:49 kotreshhr joined #gluster-dev
03:52 gem joined #gluster-dev
03:54 raghug joined #gluster-dev
04:09 nbalacha joined #gluster-dev
04:16 atinm nigelb, https://build.gluster.org/job/rackspace-regression-2GB-triggered/19490/consoleFull
04:16 atinm nigelb, this looks green but it has voted -1, would you be able to change the flag from the backend?
04:17 nigelb atinm: Probably but give me a few minutes, I need to be afk for a bit.
04:18 atinm nigelb, sure
04:31 kotreshhr left #gluster-dev
04:35 shubhendu joined #gluster-dev
04:46 prasanth joined #gluster-dev
04:49 gem joined #gluster-dev
04:50 ashiq joined #gluster-dev
04:50 aspandey joined #gluster-dev
04:51 josferna joined #gluster-dev
04:53 nigelb atinm: where has it voted -1?
04:53 nigelb The review linked from the build log doesn't show it.
04:54 nigelb Has it voted in the wrong review?
04:54 atinm nigelb, http://review.gluster.org/#/c/14653/
04:54 atinm nigelb, look at the bottom
04:54 atinm http://build.gluster.org/job/rackspace-regression-2GB-triggered/19490/consoleFull : FAILED
04:55 nigelb Oh wow.
04:55 nigelb The entire flow is busted.
04:56 atinm :-/
04:56 nigelb Jenkins claims that it was triggered by http://review.gluster.org/#/c/13893/
04:57 atinm oh god! so something is terribly wrong
04:59 nigelb I'm writing notes into this etherpad: https://public.pad.fsfe.org/p/gluster-jenkins-debugging
05:00 * atinm can't access the link
05:03 nigelb Hmm, it should work.
05:03 Bhaskarakiran joined #gluster-dev
05:04 karthik___ joined #gluster-dev
05:05 poornimag joined #gluster-dev
05:07 ppai joined #gluster-dev
05:14 jiffin joined #gluster-dev
05:21 penguinRaider joined #gluster-dev
05:22 ndarshan joined #gluster-dev
05:24 Manikandan joined #gluster-dev
05:25 rafi joined #gluster-dev
05:27 atinm nigelb, so can you mark verified +1 from the backend for http://review.gluster.org/#/c/14653/ ?
05:30 nigelb atinm: Pretty sure there's an actual failure there
05:30 nigelb https://build.gluster.org/job/rackspace-regression-2GB-triggered/21467/console
05:30 nigelb https://build.gluster.org/job/rackspace-regression-2GB-triggered/21477/consoleFull
05:30 nigelb these are both the other jobs in there that have failed.
05:31 nigelb for that review.
05:31 nigelb You do have an actual failure there
05:32 atinm that job was aborted
05:32 atinm probably because of another updated patch set
05:33 rjoseph joined #gluster-dev
05:34 kotreshhr joined #gluster-dev
05:36 aravindavk joined #gluster-dev
05:39 Apeksha joined #gluster-dev
05:53 spalai joined #gluster-dev
05:55 hgowtham joined #gluster-dev
05:57 pkalever joined #gluster-dev
05:59 aspandey joined #gluster-dev
06:00 pranithk1 joined #gluster-dev
06:04 pranithk11 joined #gluster-dev
06:05 mchangir joined #gluster-dev
06:07 pkalever left #gluster-dev
06:07 pkalever joined #gluster-dev
06:08 ashiq joined #gluster-dev
06:14 kdhananjay joined #gluster-dev
06:15 nigelb Temporarily, my user will be commenting on some of the build failures.
06:15 nigelb (I'm trying to distinguish what's reporting failures)
06:15 atinm nigelb, I saw that
06:17 Saravanakmr joined #gluster-dev
06:21 hchiramm joined #gluster-dev
06:23 nigelb atinm: should I be worried that, according to jenkins, there's been no successful build since 3 pm yesterday?
06:23 nigelb That sounds non-ideal :|
06:23 * atinm echoes the same
06:24 atinm nigelb, I think it's the infrastructure that's broken, I don't suspect the code yet :)
06:25 mchangir kkeithley, is rsyslog a requirement for gluster client environment?
06:27 nigelb atinm: If it were an infra failure, there'd at least be multiple failures that look alike.
06:27 nigelb No two failures look anywhere close to the same.
06:27 mchangir nigelb, the builds started failing for me when I added a man page edit to the patch :)
06:29 nigelb Oh boy.
06:29 atinm mchangir, so you broke the build?
06:30 * mchangir runs for cover
06:30 nigelb misc: when you're up, I need a hand.
06:30 mchangir nigelb, could we reboot the nodes and see if things return to normal ... or has that been done already
06:30 nigelb I haven't yet.
06:31 nigelb I want to talk to either misc or kaushal before I do anything.
06:31 atinm mchangir, why aren't you reverting your change then?
06:32 mchangir atinm, I want to double confirm after node reboots
06:34 nigelb mchangir: link me to your change?
06:35 mchangir wow ... are you serious ... I was just kidding all along
06:35 nigelb Phew.
06:35 nigelb I thought you were serious and then I was unsure.
06:35 nigelb s/serious/kidding/
07:05 pur__ joined #gluster-dev
07:15 kshlm joined #gluster-dev
07:23 atinm nigelb, seems like you solved it? I can see regressions started passing through!
07:24 misc nigelb: yes ?
07:40 penguinRaider joined #gluster-dev
07:40 karthik___ joined #gluster-dev
08:07 sakshi joined #gluster-dev
08:10 josferna joined #gluster-dev
08:14 _shaps_ joined #gluster-dev
08:16 _shaps_ left #gluster-dev
08:39 ppai joined #gluster-dev
08:40 itisravi I'm still getting regressions triggered by other patches voting on my patch.
08:46 Manikandan joined #gluster-dev
08:48 atinm joined #gluster-dev
08:49 aspandey joined #gluster-dev
08:50 nbalacha joined #gluster-dev
08:50 ashiq joined #gluster-dev
08:51 mchangir joined #gluster-dev
08:58 karthik___ joined #gluster-dev
08:59 nigelb itisravi: link me to an example?
09:00 itisravi nigelb: http://review.gluster.org/#/c/14642/
09:00 itisravi nigelb: see the last one: https://build.gluster.org/job/rackspace-regression-2GB-triggered/19492/
09:01 nigelb itisravi: It hasn't voted, only commented.
09:02 nigelb You need to recheck centos-regression I think.
09:02 itisravi nigelb: oh, how can you tell?
09:02 nigelb The commenting, I'm aware of and looking into.
09:02 nigelb There's no -1 in the line.
09:02 itisravi nigelb: okay
09:02 nigelb when there's a vote, there's a +1 or -1 reported.
09:04 pkalever joined #gluster-dev
09:05 itisravi got it.
09:05 Manikandan joined #gluster-dev
09:08 aravindavk joined #gluster-dev
09:09 penguinRaider joined #gluster-dev
09:13 rraja joined #gluster-dev
09:16 mchangir nigelb, misc : would you know what file-system we are running where the Jenkins jobs run?
09:17 nigelb mchangir: The thing that looked like a filesystem failure is actually something else.
09:17 mchangir nigelb, oh, okay
09:17 nigelb Our guess is that it's https://access.redhat.com/solutions/2313911
09:17 nigelb which is basically causing segfault in a lot of applications.
09:18 mchangir I'll take a look ... thanks
09:19 mchangir nigelb, just out of curiosity I'd still like to know the file-system ... if it's not too much to ask
09:21 nigelb That needs misc. He's commuting, I think.
09:23 guest____ joined #gluster-dev
09:23 guest____ I want to use glusterfs as storage in esxi, but I do not have the .ovf file, so I have to mount the glusterfs with nfs. the problem is, the NAS server does not permit creating a thick VM, but I need it. I have searched about synology and found out there is an esxi plugin to install and get the permission to create thick vms. I want to know if there is an esxi plugin for glusterfs to create thick vms??
09:23 skoduri joined #gluster-dev
09:29 nbalacha joined #gluster-dev
09:30 guest____ xavih: Can u help me?
09:30 kshlm joined #gluster-dev
09:31 xavih guest____: what do you need
09:32 gem nigel
09:34 guest____ xavih: thx for reply, my question is:
09:34 guest____ I want to use glusterfs as storage in esxi, but I do not have the .ovf file, so I have to mount the glusterfs with nfs. the problem is, the NAS server does not permit creating a thick VM, but I need it. I have searched about synology and found out there is an esxi plugin to install and get the permission to create thick vms. I want to know if there is an esxi plugin for glusterfs to create thick vms??
09:35 nigelb gem: hi
09:36 gem nigelb, hey! wanted an update on the test machines.
09:36 xavih guest____: I don't have enough knowledge on ESXi. You should ask in gluster-users mailing list
09:40 guest____ xavih: ok, thank u. I have another question,
09:44 guest____ xavih: crypt xlator memory usage is high, so glusterfs gets killed. I have searched a lot and found out the kernel should free the cached inodes, but the kernel does not free them and I have to free the cached inodes with "echo 0>>/proc/sys/vm/drop_caches". Is it the right solution??
09:45 nigelb gem: Let me get some free time. I'm firefighting Jenkins at the moment.
09:45 xavih guest____: kernel frees cached inodes when it needs memory. If you are having memory issues probably you have a memory leak. What version of gluster are you using ?
09:46 xavih guest____: BTW, to force a cache drop, the right command is echo 3 >/proc/sys/vm/drop_caches
09:46 xavih guest____: anyway I don't think this is the solution you need
09:47 guest____ xavih: Yes u are right, I used the command: echo 3 >/proc/sys/vm/drop_caches
09:47 gem nigelb, Okay, Thanks!
09:49 guest____ I have tried version 3.7 and other versions, but all of them had the same result.
09:49 guest____ xavih: How can I find the memory leak in glusterfs?
09:50 guest____ xavih: I have tried version 3.7 and other versions, but all of them had the same result.
09:50 xavih guest____: it's not easy. Have you tested 3.7.11 ?
09:50 xavih guest____: if you are experiencing a memory leak using the latest version, you should file a bug
09:50 guest____ xavih: Yes, it did the same.
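An aside for readers: xavih doesn't prescribe a method above, but a common starting point for gluster memory-leak triage in this era is a statedump, which records per-translator allocation counters. A minimal sketch, assuming a 3.7-era CLI and the default dump directory:

    # ask the volume's processes to write a statedump
    gluster volume statedump <VOLNAME>
    # dumps land in /var/run/gluster by default (tunable via server.statedump-path);
    # take two dumps some time apart and look for "mem" sections whose
    # num_allocs keeps growing: that translator is the likely leaker
    ls /var/run/gluster/*.dump.*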
09:52 mchangir joined #gluster-dev
09:52 atinm joined #gluster-dev
09:53 ashiq joined #gluster-dev
09:53 ashiq joined #gluster-dev
09:54 pkalever left #gluster-dev
09:54 guest____ xavih: sorry, but what do u mean by memory leakage? Is it the same as the memory getting full, which results in glusterfs being killed?
09:54 pkalever joined #gluster-dev
09:55 xavih guest____: it means that someone is getting memory but not releasing it when it should, causing memory usage to go high for that process
09:55 xavih guest____: this can cause the Kernel's OOM Killer to kill some processes under high memory pressure
09:56 pkalever left #gluster-dev
09:58 pkalever joined #gluster-dev
09:58 mchangir kshlm, would you happen to know the file-system type on the systems where the Jenkins jobs are run?
10:03 guest____ xavih: So U mean killing glusterfs is not because of cached inodes, and a memory leak cannot be fixed with the command: echo 3 >/proc/sys/vm/drop_caches. Am I right?
10:04 xavih guest____: a normal behaviour would not cause the kernel to kill any process. If it does, then most probably there's a memory leak. In that case, the echo won't solve the problem either
10:05 xavih guest____: you should file a bug describing your environment and all details you can to reproduce the problem
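For reference, the drop_caches values discussed above, as documented in the kernel's Documentation/sysctl/vm.txt; note that this only discards clean caches and, as xavih says, will not recover memory held by a leaking process:

    sync                               # flush dirty pages first so more cache is droppable
    echo 1 > /proc/sys/vm/drop_caches  # free page cache only
    echo 2 > /proc/sys/vm/drop_caches  # free reclaimable slab objects (dentries, inodes)
    echo 3 > /proc/sys/vm/drop_caches  # free both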
10:06 pkalever1 joined #gluster-dev
10:07 kaushal_ joined #gluster-dev
10:07 kaushal_ mchangir, I don't know, but I guess it's most likely XFS on the linux systems.
10:08 mchangir kaushal_, ok ,thanks
10:12 kaushal_ joined #gluster-dev
10:19 mchangir nigelb, https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/17360/console
10:20 misc grmbl, so the nss stuff also blocks deployment
10:21 kshlm joined #gluster-dev
10:21 mchangir misc: would you know what file-system we are running where the Jenkins jobs run?
10:22 spalai left #gluster-dev
10:23 nigelb misc: aw man.
10:23 nigelb want to make a jenkins job for it? :)
10:24 misc nigelb: no, in fact, ansible did run it, but the file is empty
10:24 misc I wonder if something emptied it
10:25 misc mchangir: / is ext3 /d is xfs
10:25 mchangir misc, and /d is where the jobs are located?
10:26 misc mchangir: no idea
10:26 nigelb huh
10:26 misc I could look later, but for now, I am trying to fix the segfault issue
10:26 nigelb misc: I suspect we did a yum update on these machines recently?
10:27 nigelb which gifted us the segfault?
10:27 nigelb I think that also caused an automake upgrade
10:27 nigelb causing the errors that mchangir is running into.
10:27 nigelb mchangir: I'm guessing these work fine for you on your machine.
10:28 mchangir nigelb, yup ... I'm running fc22 and things are building well
10:28 nigelb mchangir: what's your version of automake?
10:29 mchangir nigelb, automake-1.15-1.fc22.noarch
10:29 nigelb That particular slave machine has automake (GNU automake) 1.14.1
10:29 ira joined #gluster-dev
10:29 misc also, we have a few builders out
10:30 misc slave21.cloud.gluster.org slave22.cloud.gluster.org slave20.cloud.gluster.org
10:30 misc slave20 is full
10:31 misc May 29 13:29:53 slave20 sm-notify[17642]: Already notifying clients; Exiting!
10:31 * misc removes the file
10:32 mchangir nigelb, my RHEL 6.6 (automake-1.11.1-4.el6.noarch) and RHEL 7.2 (automake-1.13.4-3.el7.noarch) virtual machines on my laptop are building well too
10:32 nigelb neither of those are 1.14
10:33 nigelb I recommend trying on a netbsd machine with automake-1.14.1
10:33 misc ok so I pushed the change on all but builders who are down
10:33 misc so I am gonna reboot a few of them
10:36 bfoster joined #gluster-dev
10:38 nigelb mchangir: my bad, that's not what's wrong.
10:39 nigelb I can see the build succeeding on machines with automake 1.14
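A quick sketch of the comparison being done here, checking toolchain versions across builders; the slave hostnames are the ones misc listed earlier, and any reachable list of hosts works:

    for h in slave20.cloud.gluster.org slave21.cloud.gluster.org slave22.cloud.gluster.org; do
        echo "== $h"
        ssh "$h" 'automake --version | head -n1'
    done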
10:53 ashiq joined #gluster-dev
10:54 hchiramm joined #gluster-dev
11:31 kkeithley mchangir: for the client? I wouldn't think so.  RHEL6 and earlier do not have rsyslog
11:31 mchangir kkeithley, okay
11:33 nigelb mchangir: okay, so taking that build machine out of rotation has caused things to go green
11:33 nigelb so that machine was at fault. I'll take a look at it.
11:34 skoduri kkeithley, can you please review and merge http://review.gluster.org/#/c/14657/
11:35 mchangir nigelb, wow! excellent
11:35 kkeithley skoduri: working....   welcome back.
11:36 skoduri kkeithley, thanks :) .. yeah
11:36 kkeithley looked like you had a great holiday
11:37 skoduri yeah ..a bit :)
11:37 kkeithley merged
11:38 mchangir nigelb, so autotools should be running regression tests with gluster builds to qualify for release :P
11:38 skoduri kkeithley++ ..atinm ^^^ done
11:38 glusterbot skoduri: kkeithley's karma is now 122
11:39 misc kkeithley: are you sure ? I see rsyslog on the centos6 builder
11:40 mchangir misc, now that you mention it, I do have it on my RHEL 6.6 virtual machine as well : [root@rhel6-6 ~]# rpm -q rsyslog
11:40 mchangir rsyslog-5.8.10-8.el6.x86_64
11:41 kkeithley mchangir: looking back through the rpm .spec changelogs and git logs, we had an erroneous dependency on rsyslog for a sample rsyslog.conf.  Users that want syslog/rsyslog are free to install it and use it, but we removed the run-time dependency
11:41 kkeithley okay. I just trusted (or misread) ndevos' comment in the BZ.
11:41 kkeithley anyway, we don't need a dependency in glusterfs packages.
11:42 mchangir kkeithley, yeah, I can see the _without_syslog being set in the spec file
11:44 mchangir kkeithley, there are, however, some references to rsyslog left under %post and %postun   ... specifically to restart rsyslog ... and it fails if rsyslog isn't available, since systemctl complains about it
11:45 mchangir kkeithley, things for rhel-7 need to be addressed differently
11:46 kkeithley in the upstream and fedora dist-git .spec files they are wrapped with %if ( 0%{!?_without_syslog:1} )
11:47 mchangir okay, thanks
11:47 kkeithley is that incorrect?
11:47 mchangir that's right
11:49 kkeithley what do we need for RHEL7?
11:50 mchangir if rsyslog is not installed, then the systemctl try-restart invocation emits warnings/errors
11:51 mchangir [root@rhel7-2 ~]# rpm --erase rsyslog-7.4.7-12.el7.x86_64
11:51 mchangir [root@rhel7-2 ~]# systemctl try-restart rsyslog
11:51 mchangir Failed to try-restart rsyslog.service: Unit rsyslog.service failed to load: No such file or directory.
11:51 mchangir [root@rhel7-2 ~]# echo $?
11:51 mchangir 6
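For context, a minimal sketch of the kind of guard being discussed (not the actual glusterfs.spec scriptlet), using the %if wrapper kkeithley quoted; the trailing "|| :" is the usual RPM idiom that keeps a scriptlet from failing when the unit is absent, swallowing the exit code 6 shown above:

    %postun
    %if ( 0%{!?_without_syslog:1} )
    # tolerate a missing rsyslog unit: discard output and force a zero exit status
    /usr/bin/systemctl try-restart rsyslog.service >/dev/null 2>&1 || :
    %endif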
12:01 atinm skoduri, thank you!
12:01 atinm kkeithley++
12:01 glusterbot atinm: kkeithley's karma is now 123
12:02 jiffin Gluster Bug Triage started on gluster-meeting
12:18 luizcpg joined #gluster-dev
12:32 misc mchangir: so, I see that the failing patchset (http://review.gluster.org/#/c/14051/) dates back to April
12:33 misc mchangir: have you rebased on a newer gluster since pushing that ?
12:34 nbalacha joined #gluster-dev
12:34 mchangir misc, I'll try that right away
12:35 mchangir misc, I think the script used to push patches does that every time I submit a patch update
12:36 misc mchangir: mhh, I think it was not the case in the past, and we had some issues that resurfaced because of that, but people are in a meeting
12:42 nbalacha joined #gluster-dev
12:42 kkeithley jiffin++
12:42 glusterbot kkeithley: jiffin's karma is now 40
12:42 jiffin rafi++ kkeithley++ hgowtham++ Saravanakmr++ for attending the meeting
12:42 glusterbot jiffin: rafi's karma is now 49
12:42 glusterbot jiffin: kkeithley's karma is now 124
12:42 glusterbot jiffin: hgowtham's karma is now 24
12:42 glusterbot jiffin: Saravanakmr's karma is now 9
12:56 pkalever joined #gluster-dev
13:00 pranithk11 xavih: I am facing one issue with the latest locking scheme in ec. I have 2-3 ways of fixing it, but I'm not sure which one would be the best way forward. Let me know if you are free today to discuss it...
13:01 pranithk11 xavih: I think it could have been there in older releases also. But I saw it only now..
13:01 misc mhh, so does someone know if the script rebases properly when pushing?
13:01 aravindavk joined #gluster-dev
13:02 xavih pranithk11: I'm leaving now. We can talk tomorrow. You can also send me an email describing the problem. I'll look at it as soon as possible.
13:02 pranithk11 xavih: In that case I will send a patch...
13:03 pranithk11 xavih: We can discuss it tomorrow
13:03 pranithk11 xavih: Easier to discuss a patch :-)
13:03 xavih pranithk11: good :)
13:03 shaunm joined #gluster-dev
13:04 rraja joined #gluster-dev
13:09 ndevos jiffin++ thanks for hosting the meeting!
13:09 glusterbot ndevos: jiffin's karma is now 41
13:10 jiffin ndevos: a patch related to gluster nfs got merged into master: http://review.gluster.org/#/c/14657/
13:11 jiffin it will be helpful if u have a look
13:11 ndevos jiffin: ok, I'll check that out
13:11 ndevos jiffin: do you have any concerns about that change?
13:12 jiffin and notify me if there are concerns
13:13 jiffin ndevos: i don't have any, but your feedback is very valuable
13:15 mchangir joined #gluster-dev
13:16 atinm joined #gluster-dev
13:17 ndevos jiffin: looks ok to me, although I do not like the 'attr_in' name much... a comment in the struct that it is only used during setattr would have been good
13:22 jiffin ndevos: will address in different patch
13:22 jiffin ndevos++ for the quick review
13:22 glusterbot jiffin: ndevos's karma is now 269
13:22 ndevos jiffin: ok, thanks - and no need to backport that one :)
13:23 jiffin ndevos: K
13:41 shyam joined #gluster-dev
14:02 pkalever joined #gluster-dev
14:16 shyam joined #gluster-dev
14:26 kotreshhr joined #gluster-dev
14:29 aravindavk joined #gluster-dev
14:30 pkalever left #gluster-dev
14:35 wushudoin joined #gluster-dev
14:37 wushudoin joined #gluster-dev
14:45 atinm joined #gluster-dev
14:47 mchangir joined #gluster-dev
15:04 nigelb Anyone know what's the cause of this error message?
15:04 nigelb chown: changing ownership of `/build/install/bin/fusermount-glusterfs': Operation not permitted
15:10 misc the fact that this patch is not applied: http://paste.fedoraproject.org/375890/65312078
15:11 shyam joined #gluster-dev
15:13 hagarth joined #gluster-dev
15:16 nigelb kkeithley: ^^ Do you know why that happens
15:16 kkeithley no, doesn't ring any bells
15:20 nigelb It's been happening on all our builds.
15:20 nigelb And we've been ignoring it.
15:20 nigelb I bet that's the real cause of all the "infra" issues.
15:26 misc I wouldn't say 'all'
15:41 pranithk1 joined #gluster-dev
15:44 nishanth joined #gluster-dev
15:45 shubhendu joined #gluster-dev
16:04 pranithk1 joined #gluster-dev
16:15 nbalacha joined #gluster-dev
16:19 penguinRaider joined #gluster-dev
16:19 nbalacha joined #gluster-dev
16:24 hagarth joined #gluster-dev
16:42 mchangir nigelb, a bit late on this but ... what user attempts to chmod u+s on that file?
16:45 mchangir nigelb, only root should be able to do that
16:46 nishanth joined #gluster-dev
16:47 mchangir nigelb, and the chown to root, likewise, will succeed only for the root user
16:48 mchangir nigelb, if the build is being run as non-root user these will fail
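The failure mode mchangir describes is easy to reproduce as an unprivileged user (the path below is illustrative): changing a file's owner requires CAP_CHOWN, which a non-root build user lacks, so it fails with exactly the message from the build log.

    $ touch /tmp/fusermount-glusterfs
    $ chown root /tmp/fusermount-glusterfs
    chown: changing ownership of '/tmp/fusermount-glusterfs': Operation not permitted
    $ chmod u+s /tmp/fusermount-glusterfs   # an owner may set u+s on their own file,
                                            # but it only matters on a root-owned binary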
17:16 jiffin joined #gluster-dev
17:19 penguinRaider joined #gluster-dev
17:19 hagarth joined #gluster-dev
17:20 jiffin joined #gluster-dev
17:40 jiffin joined #gluster-dev
18:52 penguinRaider joined #gluster-dev
18:53 jiffin joined #gluster-dev
18:58 rraja joined #gluster-dev
19:06 jtc joined #gluster-dev
19:30 kotreshhr joined #gluster-dev
20:26 shyam joined #gluster-dev
