
IRC log for #gluster-dev, 2014-06-05


All times shown according to UTC.

Time Nick Message
01:17 tg2 joined #gluster-dev
02:26 nishanth joined #gluster-dev
02:34 bharata-rao joined #gluster-dev
02:55 kkeithley1 joined #gluster-dev
03:08 vpshastry joined #gluster-dev
03:42 kanagaraj joined #gluster-dev
03:57 itisravi joined #gluster-dev
03:58 shubhendu joined #gluster-dev
04:08 bala joined #gluster-dev
04:30 spandit joined #gluster-dev
04:35 vikhyat joined #gluster-dev
04:39 ndarshan joined #gluster-dev
04:50 ppai joined #gluster-dev
04:53 kshlm joined #gluster-dev
05:03 lalatenduM joined #gluster-dev
05:11 systemonkey joined #gluster-dev
05:14 aravindavk joined #gluster-dev
05:21 awheeler joined #gluster-dev
05:30 kanagaraj joined #gluster-dev
05:33 kshlm joined #gluster-dev
05:40 hagarth joined #gluster-dev
05:45 skoduri joined #gluster-dev
05:47 krishnan_p joined #gluster-dev
05:51 aravindavk joined #gluster-dev
06:02 deepakcs joined #gluster-dev
06:08 [o__o] joined #gluster-dev
06:30 nishanth joined #gluster-dev
06:46 raghu joined #gluster-dev
06:49 aravindavk joined #gluster-dev
06:57 atinmu joined #gluster-dev
07:00 ppai joined #gluster-dev
07:02 krishnan_p joined #gluster-dev
07:10 lalatenduM ndevos, https://bugzilla.redhat.com/show_bug.cgi?id=1100204 is logged for 3.5; does that mean this issue is not seen on master?
07:11 glusterbot Bug 1100204: medium, high, ---, lmohanty, NEW , brick failure detection does not work for ext4 filesystems
07:11 lalatenduM ndevos, also I tested on master and I did not see this issue
07:12 ndevos lalatenduM: it is logged against the version in which it was confirmed not to work correctly; it's up to the assigned dev to verify in that version and in master
07:13 ndevos lalatenduM: if the issue also happens in master (I really expect it would), then the bug can get cloned and the master bug should block the 3.5 bug
07:13 lalatenduM ndevos, cool, I think it is working fine on master, updating the bug, you can verify my steps
07:13 ndevos lalatenduM: I really doubt that :)
07:15 ndevos lalatenduM: I expect that you need to make a change in the master branch, get the change merged and then backport it - it would be 3.5.2 candidate
07:16 hagarth joined #gluster-dev
07:17 nishanth joined #gluster-dev
07:19 lalatenduM ndevos, yes agree on backport thing
07:19 lalatenduM ndevos, I have updated the bug, please take a look at my comment in the bug
07:21 awheeler joined #gluster-dev
07:21 ndevos lalatenduM: oh, 'rm -rf /d/testvol-1' is not a valid test
07:21 lalatenduM ndevos, ohh, I am not surprised :), I had my doubts about that
07:21 ndevos lalatenduM: the heal-check does indeed catch that, irrespective of the filesystem used
07:22 ndevos lalatenduM: you'll need to remove the disk, or pull the RAID card
07:23 ndevos yes, people actually complained that the brick process kept running (and causing issues) when the RAID card was pulled from a running system
07:24 lalatenduM ndevos, that would be difficult in a VM I guess, because even if I remove the disk, it will only get reflected on the next reboot
07:25 lalatenduM ndevos, I think the only option is to use device-mapper to load an error table for the disk
07:26 ndevos lalatenduM: depends... you can 'echo remove > /sys/class/block/vdb/device/state' or something like that
07:27 lalatenduM ndevos, cool , will try that. thanks
07:27 ndevos lalatenduM: yeah, device-mapper works too, see http://review.gluster.org/5176 for some notes
07:27 glusterbot Title: Gerrit Code Review (at review.gluster.org)
07:27 ndevos lalatenduM: that echo command is maybe not really correct, but it should be possible in a similar way
07:28 lalatenduM ndevos, ok, will check the review patch and search for the correct command to remove the disk, thanks
07:31 ndevos lalatenduM: you may also be able to remove the disk through virsh, maybe that can force it, and virt-manager might be more 'secure'
07:34 ndevos lalatenduM: 'echo offline > /sys/block/vdb/device/state' should do it
07:35 lalatenduM ndevos, cool, will try it
07:38 ndevos lalatenduM: I've explained it again in the bug, just to have everything in there - and moved the bug to ASSIGNED ;)
07:46 lalatenduM ndevos, awesome :) thank you
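
[Editor's note: a minimal sketch, not part of the log, of the two failure-injection approaches ndevos suggests above. The device /dev/sdb, the mapping name brick-test and the sizes are placeholders; the device-mapper variant only loosely follows the notes referenced at http://review.gluster.org/5176.]

    # Option 1: take a SCSI-backed brick disk offline so further I/O fails
    echo offline > /sys/block/sdb/device/state

    # Option 2: wrap the brick device in a device-mapper linear target first,
    # build the brick filesystem on /dev/mapper/brick-test, then swap in an
    # all-error table to simulate the disk dying under a running brick
    dmsetup create brick-test --table "0 $(blockdev --getsz /dev/sdb) linear /dev/sdb 0"
    # ... create the filesystem and brick on /dev/mapper/brick-test, start the volume ...
    dmsetup suspend brick-test
    dmsetup load brick-test --table "0 $(blockdev --getsz /dev/sdb) error"
    dmsetup resume brick-test
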
08:01 [o__o] joined #gluster-dev
08:13 ndevos oh no, JustinClift, we desperately need your rackspace jenkins slaves
08:14 ndevos with the number of regression tests scheduled, I won't be able to make a beta2 before next week...
08:18 bala joined #gluster-dev
08:38 edward1 joined #gluster-dev
08:43 kdhananjay joined #gluster-dev
09:02 ndevos Humble: do you want to change the tabs to spaces in the libgfapi doc? http://review.gluster.org/7865
09:02 glusterbot Title: Gerrit Code Review (at review.gluster.org)
09:05 ndevos Humble: if not, that's ok too; it has some positive reviews, and I guess I could merge it
09:05 Humble ndevos, may be later
09:05 Humble let this go this way
09:05 ndevos Humble: you mean "merge it!" ?
09:06 Humble ndevos, yes :)
09:06 ndevos :)
09:07 Humble ndevos, s/this/it -> bit delayed though :) .. thanks!
09:07 ndevos Humble: now just send a backport for 3.5 :)
09:07 Humble ndevos, sure !!
09:09 ndevos Humble: hmm, Jenkins is lagging, it can not get merged because of the Verified:-1 from the build system...
09:09 ndevos I've retriggered the smoke test this morning already, but it seems to be waiting in the queue...
09:17 Humble ndevos, ok.. will wait
09:17 Humble as soon as it's merged I will cherry-pick .. thanks
09:18 ndevos Humble: yeah, sorry for the delay - thanks!
09:18 Humble np ... not ur mistake !! :)
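
[Editor's note: a rough sketch of the cherry-pick backport flow being discussed; the branch name and commit hash below are placeholders, and the actual GlusterFS workflow also involves referencing a 3.5 clone of the bug and pushing the change to Gerrit for review.]

    git fetch origin
    git checkout -b backport-7865 origin/release-3.5
    git cherry-pick -x <master-commit>   # -x records the original commit id in the message
    # update the commit message to reference the 3.5 bug, then push the change
    # to Gerrit for review (e.g. via the repository's rfc.sh helper, if present)
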
09:22 hagarth joined #gluster-dev
09:27 nishanth joined #gluster-dev
09:33 vpshastry joined #gluster-dev
09:46 kshlm joined #gluster-dev
10:07 aravindavk joined #gluster-dev
10:09 kkeithley_ ndevos: there was a regression test that hung overnight that I killed early this morning IST, I don't know how long it was hung for. Then after a couple ran successfully, another had only made it halfway through the basic tests in 1.5 hours before I killed it.
10:10 kkeithley_ they seem to be running okay now, but maybe require some hand-holding
10:10 kkeithley_ wish I knew why it's suddenly so brittle
10:19 kkeithley_ maybe no regressions actually ran between the two; it only worked off the bugid and smoke backlog. (And yes, I cleaned the regression workspace when I killed the hung jobs)
10:25 ndarshan joined #gluster-dev
10:30 shubhendu joined #gluster-dev
10:33 shyam joined #gluster-dev
10:33 vpshastry joined #gluster-dev
10:38 kshlm joined #gluster-dev
10:43 krishnan_p joined #gluster-dev
10:44 ndarshan joined #gluster-dev
10:53 atinmu joined #gluster-dev
11:03 edward1 joined #gluster-dev
11:12 nishanth joined #gluster-dev
11:25 kshlm joined #gluster-dev
11:32 bfoster joined #gluster-dev
11:37 krishnan_p joined #gluster-dev
11:48 vpshastry joined #gluster-dev
11:56 atinmu joined #gluster-dev
12:00 _Bryan_ joined #gluster-dev
12:06 itisravi joined #gluster-dev
12:55 hagarth joined #gluster-dev
13:00 shyam joined #gluster-dev
13:24 skoduri joined #gluster-dev
13:25 krishnan_p joined #gluster-dev
13:25 rgustafs joined #gluster-dev
13:33 hagarth joined #gluster-dev
13:41 jobewan joined #gluster-dev
13:59 hagarth kshlm: does this ring a bell --> http://fpaste.org/107419/76502140/ ?
13:59 glusterbot Title: #107419 Fedora Project Pastebin (at fpaste.org)
14:12 wushudoin joined #gluster-dev
14:12 kshlm It looks like the deadlocked quotad bug. https://bugzilla.redhat.com/show_bug.cgi?id=1095585 http://review.gluster.org/7703
14:12 glusterbot Bug 1095585: urgent, urgent, ---, kaushal, MODIFIED , Two instances each, of brick processes, glusterfs-nfs and quotad seen after glusterd restart
14:16 ndevos kshlm: hmm, should that get backported to 3.5 too?
14:16 kshlm looks like it.
14:18 kshlm ndevos, it needs to be backported.
14:18 ndevos kshlm: hmm, ok
14:19 kshlm Also, I kind of remember someone facing another issue after this was fixed, which was also fixed on master.
14:19 kshlm But cannot remember exactly what it was.
14:20 ndevos oh well, we'll wait and see a bug report in that case
14:20 ndevos kshlm: how urgent is that bug you think, should it get included in 3.5.1?
14:21 kshlm It happens only if quota is enabled on some volume.
14:22 kshlm And only happens when glusterd is restarted.
14:22 ndevos kshlm: yeah, Thilam in #gluster ran into that
14:22 kshlm Or if someone is using server quorum, it could happen at other times as well.
14:23 ndevos kshlm: any easy solution to get it working again?
14:23 kshlm Other than disabling quota, no.
14:23 kshlm The fix is quite small and straightforward as well.
14:24 kshlm It will be a quick backport.
14:24 kshlm I'll do it, if needed once I get back home.
14:25 ndevos kshlm: that would be awesome, I'm just cloning the bug and will assign it to you then
14:25 ndevos kshlm: no need to fix it tonight, tomorrow would be fine too :)
14:25 kshlm cool.
14:26 kshlm I'll do it tonight anyway. Otherwise I'll forget.
14:26 * kshlm has been really forgetful this week.
14:26 ndevos hehe, ok
14:44 deepakcs joined #gluster-dev
15:02 shubhendu joined #gluster-dev
15:06 skoduri joined #gluster-dev
15:15 shyam joined #gluster-dev
15:15 shyam left #gluster-dev
15:16 shyam joined #gluster-dev
15:23 lalatenduM ndevos, "echo offline > /sys/block/sdb/device/state" does not work in VMs
15:23 lalatenduM ndevos, in "/sys/block/vdb/device/" we don't have a state file
15:24 aravindavk joined #gluster-dev
15:24 ndevos lalatenduM: hmm, then you should add the disk as a scsi device, not as a virtual block device
15:25 lalatenduM ndevos, yeah good idea
15:25 ndevos lalatenduM: or, use virsh, there probably is a disk-detach command
15:25 lalatenduM ndevos, yeah
15:37 lalatenduM ndevos, good news, the above works for a disk with bus type IDE and raw format; I had virtio and qcow2
15:38 ndevos lalatenduM: ah, cool, good to know!
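
[Editor's note: a hedged sketch of the virsh route suggested above; "gluster-vm" and the "vdb" target are hypothetical names for the guest and the brick disk.]

    virsh domblklist gluster-vm               # find the target name of the brick disk
    virsh detach-disk gluster-vm vdb --live   # hot-unplug the disk from the running guest
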
15:41 ndevos JustinClift: are you playing with Jenkins?
15:41 hagarth who's playing with Jenkins now?
15:42 ndevos it seems that a lot of the regression tests have just errored out
15:42 hagarth yeah, I aborted a test run and that seems to have triggered the distclean problem causing runs to fail
15:43 ndevos hmm
15:43 ndevos when I click on one of the failed tests, the page says 'Jenkins is going to shut down', is that intentional?
15:43 JustinClift Ahhh
15:43 JustinClift That's probably kkeithley
15:44 JustinClift He emailed me to say things are going weird on build.gluster.org, and he's considering rebooting it
15:44 kshlm joined #gluster-dev
15:44 * JustinClift basically said "go for it"
15:44 hagarth JustinClift: would be better to intimate gluster-infra about changes of this nature
15:45 JustinClift hagarth: Now you mention it, yeah.
15:45 JustinClift Oops :(
15:46 ndevos and, maybe a heads up in this channel?
15:48 vpshastry joined #gluster-dev
15:50 JustinClift We'll need to install the updated OpenSSL rpms on our boxes as well.
15:52 bala joined #gluster-dev
15:54 hagarth JustinClift: right
16:02 ndevos JustinClift: are those packages available already?
16:13 aravindavk joined #gluster-dev
16:18 kshlm ndevos, http://review.gluster.org/7995
16:18 glusterbot Title: Gerrit Code Review (at review.gluster.org)
16:18 ndevos kshlm: awesome, thanks!
16:20 ndevos hmm, there is a GlusterBuildSystem2 commenting on that, that can't be right?
16:20 ndevos JustinClift, hagarth: ^ you know about that one?
16:21 hagarth ndevos: GBS2 is a new jenkins instance that we are trying to add
16:22 ndevos hagarth: okay, and it does libgfapi-qemu testing?
16:22 hagarth ndevos: yes and potentially regression tests too
16:22 ndevos well, "does"... it tries to
16:23 ndevos oh, good, it has this in the logs "Sending e-mails to: vbellur@redhat.com" - at least I know who'll fix it :D
16:24 hagarth one more email to forward to someone ;)
16:25 ndevos hehe
16:27 Humble ndevos, that noise was from me
16:28 ndevos Humble: ah, okay, but now that patch is marked as verified:-1, how can that be removed?
16:28 JustinClift ndevos: The packages for CentOS are being distributed to mirrors presently.  Unsure if they're available to ours yet.
16:29 ndevos JustinClift: oh, ok
16:29 JustinClift hagarth: on the note about letting gluster-infra know about stuff, that GlusterBuildSystem2 fits in that category. ;)
16:30 JustinClift I've been trying to motivate myself to setup a new Jenkins master on <something>.cloud.gluster.org, but failing miserably so far (motivation wise) ;)
16:30 Humble ndevos, still checking on it :)
16:30 JustinClift Sounds like you guys are already getting it done instead. :)
16:30 ndevos Humble: no problem :)
16:31 JustinClift hagarth: Is it using OpenJDK 1.7 and the jenkins software from the upstream jenkins repo?
16:31 ndevos and what's with you guys, I'm sure it's past your (except JustinClift's) business hours
16:31 * JustinClift would like us to have a Jenkins solution we can do "yum update" on auto weekly or so
16:32 JustinClift (eg every tuesday or something)
16:35 Humble hagarth, pm ?
16:35 hagarth JustinClift: GBS2 is right now experimental, Humble is toying around with that
16:36 JustinClift hagarth: Cool
16:37 JustinClift Humble: If at all possible, please use the Jenkins RPMs from the upstream Jenkins repo, and also use the openjdk (1.7?) rpm that's in um... EPEL or standard CentOS repos.
16:38 JustinClift Humble: Just saying that so we can avoid having boxes seriously out of date with patches again.
16:38 Humble JustinClift, gotcha :)
16:38 JustinClift :)
16:41 JustinClift Humble: This is possibly useful too: http://fpaste.org/107466/14019864/
16:41 glusterbot Title: #107466 Fedora Project Pastebin (at fpaste.org)
16:41 JustinClift Humble: It's my notes of what needs to be done on Jenkins slave boxes
16:42 JustinClift Humble: It's specifically so they can run the patch regression tests that are on the forge, which are slightly different to the ones on build.gluster.org.
16:44 ndevos JustinClift: we should have an ansible repository for such configurations :)
16:44 JustinClift +1
16:44 JustinClift That was actually why the recent delay
16:44 JustinClift Was going to start learning Ansible.  Then got pulled onto something else.
16:56 ndevos oh, I'm looking at ansible too, looks simple but effective
16:56 JustinClift Yeah, it has that reputation.
16:57 JustinClift The Salt thing that's built on top of it has the reputation of being extremely buggy, so I'd personally avoid that for now.  But Ansible itself seems to have a good rep
17:04 ndevos oh, I would not use something on top yet, I'd like to know how to use/apply it, things on top just make it more difficult in case something breaks
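
[Editor's note: as a taste of what such an ansible setup might look like, a minimal sketch using ad-hoc commands only. The "jenkins-slaves" inventory group is hypothetical, and installing the "jenkins" package assumes the upstream Jenkins yum repository is already configured on the hosts.]

    # run as root (or substitute your preferred privilege-escalation flag)
    ansible jenkins-slaves -u root -m yum -a "name=java-1.7.0-openjdk state=present"
    ansible jenkins-slaves -u root -m yum -a "name=jenkins state=present"
    ansible jenkins-slaves -u root -m service -a "name=jenkins state=started enabled=yes"
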
17:09 ndk joined #gluster-dev
17:11 JustinClift Humble: Do you want the new GlusterBuildSystem2 added to global DNS?
17:22 kanagaraj joined #gluster-dev
17:24 shyam1 joined #gluster-dev
17:32 vpshastry joined #gluster-dev
17:34 systemonkey joined #gluster-dev
17:36 krishnan_p joined #gluster-dev
17:41 JustinClift Looks like build.gluster.org didn't come back up. ;)
17:42 shyam joined #gluster-dev
18:10 skoduri joined #gluster-dev
18:41 lpabon joined #gluster-dev
19:49 lpabon joined #gluster-dev
19:55 JoeJulian Can anyone tell me exactly how rebalance works?
21:13 edward1 joined #gluster-dev
