IRC log for #gluster-dev, 2014-11-13


All times shown according to UTC.

Time Nick Message
00:28 badone joined #gluster-dev
00:30 shyam joined #gluster-dev
00:57 topshare joined #gluster-dev
01:37 bala joined #gluster-dev
01:59 topshare joined #gluster-dev
02:49 topshare joined #gluster-dev
03:01 _Bryan_ joined #gluster-dev
03:02 topshare joined #gluster-dev
03:07 shyam joined #gluster-dev
03:32 bharata-rao joined #gluster-dev
03:32 bharata_ joined #gluster-dev
03:35 topshare joined #gluster-dev
03:43 shubhendu_ joined #gluster-dev
03:45 aravindavk joined #gluster-dev
04:09 glusterbot` joined #gluster-dev
04:12 glusterbot joined #gluster-dev
04:18 kanagaraj joined #gluster-dev
04:18 nishanth joined #gluster-dev
04:24 rafi joined #gluster-dev
04:24 Rafi_kc joined #gluster-dev
04:41 jobewan joined #gluster-dev
04:42 anoopcs joined #gluster-dev
04:43 shubhendu joined #gluster-dev
04:45 jiffin joined #gluster-dev
04:49 hagarth joined #gluster-dev
05:07 raghug joined #gluster-dev
05:13 spandit joined #gluster-dev
05:13 kshlm joined #gluster-dev
05:15 atinmu joined #gluster-dev
05:15 Guest7440 joined #gluster-dev
05:17 soumya joined #gluster-dev
05:25 kdhananjay joined #gluster-dev
05:25 kdhananjay left #gluster-dev
05:28 ppai joined #gluster-dev
05:29 ndarshan joined #gluster-dev
05:30 aravindavk joined #gluster-dev
05:31 lalatenduM joined #gluster-dev
05:37 atalur joined #gluster-dev
05:44 raghug joined #gluster-dev
05:48 bala joined #gluster-dev
05:56 anoopcs joined #gluster-dev
06:02 Humble joined #gluster-dev
06:06 pranithk joined #gluster-dev
06:14 krishnan_p joined #gluster-dev
06:23 soumya joined #gluster-dev
06:24 badone joined #gluster-dev
06:44 lalatenduM joined #gluster-dev
06:55 raghug joined #gluster-dev
07:06 ndarshan joined #gluster-dev
07:15 ppai joined #gluster-dev
07:19 aravindavk joined #gluster-dev
07:33 ndarshan joined #gluster-dev
07:40 nkhare joined #gluster-dev
07:42 aravindavk joined #gluster-dev
07:44 bala joined #gluster-dev
08:15 ndevos hey all, davemc++ created a Gluster Community "company" on LinkedIn, contributors, testers and users can add their role to their profile too now
08:15 glusterbot ndevos: davemc's karma is now 1
08:20 Humble ndevos, I reached out to John mark for the same
08:20 Humble :)
08:21 ndevos Humble: Hah, I asked Dave :)
08:21 Humble There already exists a group called "Friends of Glusterfs"
08:21 Humble I thought of exploring it or coming up with a new group called "Gluster community"
08:21 Humble anyway, it's there now .. :)
08:21 Humble ndevos++
08:21 glusterbot Humble: ndevos's karma is now 52
08:22 Humble davemc++
08:22 glusterbot Humble: davemc's karma is now 2
08:22 ndevos Humble: yes, but a group is different from a company/organization, I think - but I'm no LinkedIn expert
08:22 Humble well, depends :)
08:22 ndevos at least, my profile now shows the Gluster Community logo, and I think thats a nice touch
08:22 Humble I take it as where-ever we are grouped
08:33 bala joined #gluster-dev
08:36 bala joined #gluster-dev
08:37 atinmu I've edited my profile as well :)
08:40 Humble joined #gluster-dev
08:43 deepakcs joined #gluster-dev
08:46 lalatenduM joined #gluster-dev
08:47 vikumar joined #gluster-dev
08:47 lalatenduM joined #gluster-dev
08:52 bala1 joined #gluster-dev
09:10 ndevos atinmu: cool :)
09:10 atinmu ndevos, thanks for informing :)
09:15 spandit joined #gluster-dev
09:18 ndevos atinmu: are you still using slave26.cloud.gluster.org for debugging? we have quite a queue of tests and one more system would help...
09:19 atinmu ndevos, no, u can take it
09:19 atinmu ndevos, I am sorry that I didn't send an official release note :)
09:20 ndevos atinmu: np, I'll add it back to the pool now, thanks!
09:20 shubhendu joined #gluster-dev
09:28 ppai joined #gluster-dev
09:38 hagarth got on the linkedin group .. I think we can pay some more attention to our g+ group too - https://plus.google.com/u/0/communities/110022816028412595292
09:47 krishnan_p joined #gluster-dev
09:51 kkeithley1 joined #gluster-dev
09:51 spandit joined #gluster-dev
10:01 soumya_ joined #gluster-dev
10:09 krishnan_p joined #gluster-dev
10:20 shubhendu joined #gluster-dev
10:31 Humble lalatenduM, ping, pm
10:32 kkeithley_ ndevos: you're adding bugs to the 3.6.1 tracker? don't we need a 3.6.2 tracker at this point?
10:33 ndevos kkeithley_: yes, tried the 3.6.2 tracker, but it does not exist yet - hagarth wanted to create one and move the open bugs from 3.6.1 to 3.6.2
10:36 pranithk ndevos: I updated http://review.gluster.org/#/c/9090/ with relevant info I believe. Could you please let me know if you need more info
10:40 ndevos pranithk: sure, I'll have a look at that later, thanks!
10:40 pranithk ndevos: is it possible to take a look at it later today?
10:43 ndevos pranithk: yes, I'll do that, you should have comments about it tomorrow morning, ok?
10:46 pranithk ndevos: that patch is in critical path, if you know what I mean :-(
10:47 ndevos pranithk: I can look into it in 2-3 hours from now, not sure if I can promise something quicker
10:47 pranithk ndevos: Any time today is fine. Cool I will take a look at it after 3 hours from now. Thanks a lot :-)
10:53 atinmu ndevos, I've a question on backlogs
10:54 atinmu ndevos, what should we do for the BZs which have been marked needinfo on the reporter but no input has come from him/her for a long time
10:55 atinmu ndevos, shouldn't we close these bugs saying inadequate information?
10:55 atinmu ndevos, I can see few of glusterd bugs in similar state
10:55 ndevos atinmu: yes, close them in case the information is really insufficient and we can not do anything about it without the response of the reporter
10:56 atinmu ndevos, the question is how long should we wait for the reporter to get back, is there a guideline which is set or needs to be set
10:57 ndevos atinmu: I'm not sure of any guideline, can you propose something and add it to the Bug Triage page?
10:58 atinmu ndevos, sure, let me shoot a mail in devel & users ML and see
10:58 ndevos atinmu: I'd vote for 1 or 2 weeks, or something like that
10:58 ndevos atinmu++ thanks
10:58 glusterbot ndevos: atinmu's karma is now 4
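
(For reference: the stale-needinfo sweep atinmu proposes above is easy to script. Below is a minimal sketch using the python-bugzilla library; the library choice, the flag field names, and the two-week cutoff are illustrative assumptions, not an agreed project tool.)

    # Sketch: list GlusterFS bugs whose needinfo request has been
    # outstanding for more than two weeks, as candidates for closing
    # with "insufficient data".  Assumes the python-bugzilla library;
    # the flag field names follow Bugzilla's XMLRPC output and may
    # need adjusting for a given server.
    from datetime import datetime, timedelta
    import bugzilla

    CUTOFF = datetime.utcnow() - timedelta(weeks=2)  # ndevos suggested 1 or 2 weeks

    bzapi = bugzilla.Bugzilla("bugzilla.redhat.com")
    query = bzapi.build_query(product="GlusterFS",
                              status=["NEW", "ASSIGNED"],
                              include_fields=["id", "summary", "flags"])
    for bug in bzapi.query(query):
        for flag in getattr(bug, "flags", []):
            if flag.get("name") == "needinfo" and flag.get("status") == "?":
                # XMLRPC DateTime values stringify as e.g. "20141113T10:55:00"
                set_on = datetime.strptime(str(flag["modification_date"]),
                                           "%Y%m%dT%H:%M:%S")
                if set_on < CUTOFF:
                    print("%s  %s" % (bug.id, bug.summary))
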
11:04 ndevos kkeithley_: ah, there was no glusterfs-3.6.2 Alias in the bug, I've set that now so that its easier to find :)
11:08 pranithk ndevos++
11:08 glusterbot pranithk: ndevos's karma is now 53
11:13 ndevos pranithk: thanks for the description, but could you add something like that to the commit message? it helps when others check patches after it has been merged
11:22 pranithk ndevos: Oh the bullet points should also contain the file-name and line numbers?
11:22 soumya_ joined #gluster-dev
11:25 ndevos pranithk: whatever you think makes it easier to understand the change
11:27 kshlm joined #gluster-dev
11:36 kkeithley_ An alias, I hadn't noticed that before. Yes, that's a good idea.
11:40 pranithk ndevos: I really need your inputs Niels. I thought the info I gave already sufficed :-), but it clearly did not. I guess I will update the commit message with line numbers for each of the bullet points, if that is fine
11:42 ndevos pranithk: yes, that should be fine. Details in the review comments are helpful, but for major differences I really want to have it in the commit message (too).
11:43 pranithk ndevos: cool.
11:43 ndevos pranithk++ thanks :)
11:43 glusterbot ndevos: pranithk's karma is now 7
11:58 soumya_ joined #gluster-dev
12:07 lpabon joined #gluster-dev
12:09 nkhare joined #gluster-dev
12:10 pranithk ndevos: done
12:11 ndevos pranithk: awesome, now its waiting for regression testing to finish...
12:12 ndevos pranithk: you'll have someone looking at http://review.gluster.org/9069 too?
12:12 lalatenduM Humble++
12:12 glusterbot lalatenduM: Humble's karma is now 17
12:17 pranithk ndevos: I resubmitted the regression a while back for that one
12:18 pranithk ndevos: I will ask anuradha to resubmit the change
12:18 kshlm joined #gluster-dev
12:19 ndevos pranithk: thanks, maybe it failed because I merged some AFR changes in 3.6 already...
12:22 ndevos uh, not in 3.6, but 3.5, of course :)
12:22 pranithk ndevos: yes, my patch on 3.5
12:22 pranithk ndevos: yes :-)
12:22 pranithk ndevos: okay niels. I am leaving for the day. Thanks a lot for the review.
12:22 pranithk ndevos: cya
12:23 lalatenduM kkeithley, looks like 3.4.6beta2 is exactly the same as 3.4.6GA
12:33 hagarth joined #gluster-dev
12:42 shyam joined #gluster-dev
12:45 soumya_ joined #gluster-dev
12:51 lpabon joined #gluster-dev
12:59 edward1 joined #gluster-dev
13:02 deepakcs joined #gluster-dev
13:44 tdasilva joined #gluster-dev
13:59 Humble lalatenduM++ thanks!
13:59 glusterbot Humble: lalatenduM's karma is now 43
14:16 shyam joined #gluster-dev
14:24 ndevos JustinClift: I wonder if it is doable to create+delete regression test VMs on demand? the testing takes longer and longer, that affects the queue on occasion
14:33 hagarth ndevos: maybe we could consider an openstack/ovirt based system with glusterfs as the backend for VM images
14:33 hagarth that way glusterfs regression tests get tested on glusterfs
14:34 hagarth kkeithley++ for 3.6.2 tracker
14:34 glusterbot hagarth: kkeithley's karma is now 40
14:37 ndevos hagarth: I think we can achieve it with Rackspace too, I just dont know Jenkins that well
14:38 ndevos so, this uses Chef, but we would use Ansible? https://github.com/racker/rax-jenkins
14:40 JustinClift ndevos: Yeah, that's the first thing I tried doing.  Automatic scale-up and scale down that way
14:41 ndevos JustinClift: ah, okay, remember what difficulties you ran into?
14:41 JustinClift ndevos: It wasn't so easy with Rackspace, and at the time they didn't have user permissions on the DNS bits
14:41 JustinClift ndevos: I think a lot of the initial problem was in making the VM's run in Rackspace in the first place
14:41 JustinClift I started out with 1GB instances instead of the current 2GB ones
14:42 JustinClift The 1 GB instances are so slow, that they exposed a *tonne* of race conditions and similar
14:42 ndevos JustinClift: hmm, how about a jenkins-slave ansible config?
14:43 JustinClift The other thing that was a problem, is the Rackspace VM's don't always start up in an ok condition
14:43 JustinClift I needed to log in and check that the VM's actually kicked off the internal setup scripting (from cloud-init)
14:43 JustinClift Then I'd nuke the ones that didn't, and re-create VM's
14:43 JustinClift That happened a lot
14:43 ndevos JustinClift: could you post an email to the infra list with some details? I'd like to have its issues documented and maybe we can find some solutions
14:44 JustinClift eg When starting 10 VM's, 2 of them would be fails
14:44 ndevos for some real rackspace issues, I can find some of their engineers to talk to
14:44 JustinClift ndevos: The problem I have now is that it's all from memory from months ago.  So, I don't have any specific details at hand.  Just generalisations
14:45 ndevos JustinClift: anything would do :)
14:45 * ndevos does not have any experience with it at all
14:46 JustinClift ndevos: And yeah, we can do that.  I think Joe Julian or Louis also knows some people there too, and also offered.  But we'd moved to the static VM's setup, and I wasn't doing them on demand any more by then
14:46 JustinClift ndevos: k, I'll send something through.  Please remind me about it if you don't see something in a few hours (have been getting distracted easily lately) :/
14:47 ndevos JustinClift: we will need some on-demand solution soon, the queue was 7+ earlier today, on 6(?) VMs running - testing now takes 2 hours
14:47 ndevos JustinClift++ thanks, I'll remind you later today or tomorrow :)
14:47 glusterbot ndevos: JustinClift's karma is now 30
14:49 ndevos JustinClift: 6 VMs running tests now, 9 triggered tests waiting
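
(For reference: the on-demand scale-up/scale-down that JustinClift and ndevos discuss above could be driven straight from the Rackspace API. Below is a minimal sketch using the pyrax SDK; pyrax, the environment variable names, the server-name prefix, and the flavor ID are illustrative assumptions — the project's actual tooling at the time was Jenkins plus hand-run setup scripts.)

    # Sketch: scale the pool of regression-test slaves up and down on
    # demand via the Rackspace cloud API, using the pyrax SDK.  The
    # environment variable names, server-name prefix and flavor ID are
    # assumptions for illustration.
    import os
    import pyrax

    pyrax.set_setting("identity_type", "rackspace")
    pyrax.set_credentials(os.environ["RAX_USER"], os.environ["RAX_API_KEY"])
    cs = pyrax.cloudservers

    def scale_up(count, image_id, flavor_id="4"):
        """Boot `count` fresh slaves.  flavor "4" stands in for the 2GB
        class; per the discussion above, 1GB instances were too slow and
        exposed race conditions in the tests."""
        return [cs.servers.create("slave-ondemand-%02d" % i, image_id, flavor_id)
                for i in range(count)]

    def scale_down(prefix="slave-ondemand-"):
        """Delete every on-demand slave once the Jenkins queue drains."""
        for server in cs.servers.list():
            if server.name.startswith(prefix):
                server.delete()

(Per JustinClift's experience above, roughly 2 in 10 freshly booted VMs failed to run their cloud-init setup, so anything like scale_up() would still need a health check before handing slaves to Jenkins.)
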
14:59 shyam joined #gluster-dev
15:13 JustinClift ndevos: Ahhh.  I've just manually scaled us up before when things get busy.  Eg we were running 10 VM's for a while just before the 3.6.0 code freeze point
15:14 JustinClift People were madly submitting stuff ;)
15:14 JustinClift Lets ask Atin if we can have slave26 back for general use
15:15 ndevos JustinClift: already asked, and slave26 is back actively participating
15:20 wushudoin joined #gluster-dev
15:35 JustinClift Cool :)
15:55 raghug joined #gluster-dev
15:57 lpabon joined #gluster-dev
16:10 JustinClift ndevos: Should have the new VM's online in under 20 mins
16:10 * JustinClift had to reinstall Python first
16:11 ndevos JustinClift++ awesome!
16:11 glusterbot ndevos: JustinClift's karma is now 31
16:11 JustinClift ndevos: I've told the scripting to create 3 new VM's
16:11 JustinClift Hopefully all 3 are ok right away, and none of them is a fail
16:12 ndevos JustinClift: nice, that should help for a while :)
16:12 JustinClift Hmmm, if one of them is a fail, I guess we can point Rackspace Support at it for diagnosis in their own time
16:17 JustinClift kshlm: You've been doing stuff with Go lately haven't you?
16:17 JustinClift kshlm: Have you come across gopm yet? http://gopm.io
16:17 JustinClift kshlm: Seems to be the equivalent of pip for Python
16:17 rastar_afk joined #gluster-dev
16:17 JustinClift (but for Go)
16:18 sac`away joined #gluster-dev
16:19 ndevos what's the thing with people not wanting distribution packages anymore?
16:20 kkeithley_ ndevos: you mean like putting a docker image on a coreos box?
16:20 ndevos yes, something like that
16:20 ndevos and like installing python packages with pip, and not have them packaged as rpm
16:26 vikumar joined #gluster-dev
16:27 Humble joined #gluster-dev
16:34 shubhendu joined #gluster-dev
16:40 hagarth joined #gluster-dev
16:49 jobewan joined #gluster-dev
16:52 davemc joined #gluster-dev
17:04 shyam joined #gluster-dev
17:05 soumya_ joined #gluster-dev
17:17 sac`away` joined #gluster-dev
17:17 hchiramm_ joined #gluster-dev
17:17 vikumar__ joined #gluster-dev
17:17 RaSTarl joined #gluster-dev
17:19 davemc hi folks.  From a comment on the blog: is there a quick start doc for 3.6.1, or should the current Quick Start still work?
17:19 jiffin joined #gluster-dev
17:21 JustinClift hagarth: ^ ?
17:21 hagarth davemc: the current quick start is the best that we have
17:22 hagarth I am sure we can do a better job with that (as with most of our content & docs :( )
17:22 davemc k. the quick start seems fairly high priority.
17:23 * JustinClift can find 5 high priorities just looking behind the couch, that's the problem :/
17:23 JustinClift (not literally)
17:23 hagarth hmm, I wonder if we should come up with a weekly top 5 to sort out every week
17:24 * davemc can find 5 priorities per page on the website
17:24 JustinClift Excellent idea.  Lets make that a high priority...
17:24 davemc snicker
17:24 davemc "priority 1. Identify the next 4  priorities"
17:25 JustinClift hagarth: I think my smartarse level is a bit too high today.  It's actually a good idea that we should do. :)
17:25 davemc and while I've got you all here.
17:26 JustinClift We even have Community volunteers in the -infra list, they just need to be pointed at stuff
17:27 davemc what is the thought on removing (or pointing to 3.7) the "proposing new features" sections on the 3.4 and 3.5 feature pages?
17:28 hagarth davemc: we could remove them
17:28 * davemc is trying unsuccessfully to make all 'latest release' links point to the 3.6 family
17:29 davemc hagarth, that would be my first choice, but I do want more suggested features
17:30 davemc But I'm thinking moving the feature request stuff to the top level page may be better
17:30 hagarth davemc: we could have an easily navigable "Suggest a feature" link from our landing page
17:31 davemc wiki landing or static site landing. The gluster.org home page would be best from my view
17:31 davemc but the wiki is _easy_
17:31 hagarth davemc: yeah, gluster.org home page is likely to be more effective.
17:34 davemc will add that to my list of items to talk to the designers about
17:35 hagarth davemc: cool
17:49 JustinClift Ugh
17:50 JustinClift I put the new slave20 into service before changing its internal hostname
17:50 JustinClift No idea if that'll screw up the regression testing
17:50 JustinClift :/
17:55 JustinClift ndevos: Those new slave VM's are online and accepting jobs now
17:55 lalatenduM joined #gluster-dev
17:56 JustinClift It's a good idea for us to keep an eye on them until they've completed one ok, in case I forgot something in the setup
17:56 JustinClift A lot of it is automated, but there is still some stuff I have to do by hand for now
17:56 JustinClift And yeah, I should obviously put that on the wiki ;)
17:58 JustinClift Like forgetting to create the python site-package dir
17:58 * JustinClift does that now
18:01 JustinClift Ok *now* they should be ok
18:02 * JustinClift also retriggered the jobs that just failed on them due to the python gluster package dir permissions
18:08 davemc JustinClift, that exchange made my eyes cross
18:19 JustinClift ?
18:19 JustinClift About the new slave VM's?
18:20 lpabon joined #gluster-dev
18:21 davemc yep
18:21 davemc but my eyes might be crossed cause this is day 2 of Storage BU planning
18:32 jiffin joined #gluster-dev
18:39 JustinClift davemc: We have several (was 7, now 9) VM's in Rackspace that run our regression tests.  Normally 7 is enough to cope with the daily patch submissions and similar.
18:40 JustinClift davemc: However, 1 was out of commission (slave20.cloud.gluster.org), and one had been put aside to investigate a spurious failure in the test (slave26.cloud.gluster.org).
18:40 JustinClift And it turns out 5 VM's don't really meet our needs
18:41 JustinClift So, regression tests started queuing up.  We fixed the one that was out of commission, returned the set-aside one to service, and created an additional 2 VM's (9 in total) to process the backlog and catch things up
18:43 JustinClift Seems to have worked, as there's no longer a queue. ;)
18:44 ndevos JustinClift: very cool, thanks!
18:44 JustinClift http://build.gluster.org <-- The VM's are on the left side list, under "Build Executor Status"
18:44 glusterbot JustinClift: <'s karma is now -4
18:44 JustinClift ndevos: All good :)
18:45 ndevos and if the tests pass, I might not need to stay awake until very late :)
18:45 JustinClift ndevos: 9 is probably overkill for general testing.  Lets leave it a few days and see though, just in case. ;)
18:45 JustinClift We can always hand one back to Atin (if he still needs it) for investigating the failures
18:45 JustinClift ndevos: Which one are you waiting on?
18:45 ndevos JustinClift: yes, I'm not sure why there were so many patches now, I guess the guys in Bangalore are getting back from holidays
18:46 JustinClift Ahhhh
18:46 JustinClift Yeah, that makes sense then
18:46 ndevos JustinClift: I have 3 or 4 that need to pass regression before I can merge them
18:46 JustinClift k
18:46 ndevos and, others need them merged before they can backport to RHS, deadline is tomorrow :)
18:46 JustinClift Ouch
18:47 ndevos yeah, and the delay with regression tests was not helping
18:47 JustinClift Yeah.  I wonder if our Jenkins setup can communicate to a slave VM that's only accessible through IPv6
18:48 JustinClift eg I can setup a few test boxes here in home network that are quicker than VM's to run, and hook them into Jenkins using
18:48 JustinClift s/using/without using/
18:48 ndevos I cant say... now making dinner, will be back later
18:48 JustinClift port mapping
18:48 JustinClift Np
18:49 JustinClift Hmm, I think it's better to leave that as an exercise for later
18:49 JustinClift Need to setup proper DMZ anyway
18:49 JustinClift We could (in theory) reduce the regression run times to about 45 mins tho.
18:50 JustinClift Something for later anyway
19:12 jiffin joined #gluster-dev
19:14 _Bryan_ joined #gluster-dev
19:50 davemc joined #gluster-dev
21:10 _Bryan_ joined #gluster-dev
21:51 davemc joined #gluster-dev
23:03 shyam joined #gluster-dev
23:17 badone joined #gluster-dev
23:55 shyam joined #gluster-dev
