
IRC log for #fuel, 2014-03-11


All times shown according to UTC.

Time Nick Message
00:03 IlyaE joined #fuel
00:11 justif joined #fuel
00:43 rmoe joined #fuel
00:57 crandquist joined #fuel
00:59 crandquist joined #fuel
01:24 geogdog joined #fuel
02:04 GeertJohan joined #fuel
02:04 GeertJohan joined #fuel
02:36 xarses joined #fuel
02:52 richardkiene joined #fuel
03:24 morello joined #fuel
03:26 smackers_mcfuggs fuel experts awake?
03:37 brain461 joined #fuel
04:54 Arminder- joined #fuel
05:06 Arminder joined #fuel
05:14 Arminder- joined #fuel
05:17 IlyaE joined #fuel
05:24 dburmistrov joined #fuel
05:32 jkirnosova joined #fuel
05:41 IlyaE joined #fuel
06:28 vkozhukalov joined #fuel
07:18 alex_didenko joined #fuel
08:10 e0ne joined #fuel
08:34 topochan joined #fuel
08:35 evgeniyl joined #fuel
08:43 anotchenko joined #fuel
09:08 tatyana joined #fuel
09:20 Arminder joined #fuel
09:26 anotchenko joined #fuel
10:04 b-zone joined #fuel
10:05 saju_m joined #fuel
10:07 aglarendil joined #fuel
10:09 warpig joined #fuel
10:31 saju_m joined #fuel
10:32 dburmistrov joined #fuel
10:32 anotchenko joined #fuel
10:42 dburmistrov joined #fuel
10:47 anotchenko joined #fuel
11:05 dburmistrov joined #fuel
11:30 dburmistrov joined #fuel
11:31 dburmistrov left #fuel
11:34 anotchenko joined #fuel
11:36 dburmistrov joined #fuel
11:36 dburmistrov left #fuel
11:47 Dr_Drache joined #fuel
11:50 dburmistrov joined #fuel
11:50 dburmistrov left #fuel
12:02 dburmistrov joined #fuel
12:03 dburmistrov left #fuel
12:09 dburmistrov joined #fuel
12:10 dburmistrov left #fuel
12:16 warpig Hi guys, come across a bit of a UI bug in 4.1...
12:17 warpig Not sure if it's already been discussed, but it looks as though the "Assign Roles" options don't appear under Chrome
12:17 warpig Loaded the same page with FF et voilà, they're there
12:19 evgeniyl jkirnosova: ^
12:20 jkirnosova Hi warpig, can you provide any screenshot?
12:20 warpig yep, can do.
12:21 warpig there are no errors loading unfortunately, it's just the required divs aren't there...
12:22 Dr_Drache warpig, I pointed this out friday.
12:23 Dr_Drache lol, clear your cache.
12:23 Dr_Drache wonder if that guy got ubuntu deployed yet
12:25 warpig schweet!
12:25 warpig sorry, wasn't online on Friday...
12:25 warpig cheers guys
12:25 Dr_Drache warpig, there is an issue, that's just the "fix"
12:25 Dr_Drache warpig, and no
12:26 Dr_Drache didn't mean it like that.
12:27 Dr_Drache sorry, my thoughts are not collected today
12:34 warpig np :o)
12:37 dburmistrov joined #fuel
12:39 e0ne_ joined #fuel
12:41 justif joined #fuel
12:41 Dr_Drache warpig, fighting because I can't deploy ubuntu, the patches that were made in 4.0 so i could, didn't make it
12:42 Dr_Drache justif, how much did I screw up your system?
12:42 Dr_Drache :P
12:49 anotchenko joined #fuel
12:50 TVR___ joined #fuel
12:53 warpig Dr_Drache: were the patches for the disk ordering or something else?
12:55 Dr_Drache warpig, grub issues, and apci/virtual terminals
12:55 Dr_Drache I can manually patch the grub, but the rest. ubuntu is unbootable
12:55 Dr_Drache on 5 machines
12:59 MiroslavAnashkin joined #fuel
13:03 DaveJ__ joined #fuel
13:04 DaveJ__ Hi guys, quick question on fuel.  Is it possible to provision both ubuntu and centos based compute nodes in the same stack?  From the UI it looks like it's one or the other, but Is there options from the CLI to do this ?
13:05 Dr_Drache DaveJ__, no.
13:05 rvyalov joined #fuel
13:05 Dr_Drache and i don't see that as a good idea personally
13:06 anotchenko DaveJ__: Nope, the API doesn't allow that. Distros are tied to releases, and you choose one release when creating an environment.
13:06 dubmusic joined #fuel
13:06 DaveJ__ anotchenko: Dr_Drache:  Cheers guys
13:07 Dr_Drache DaveJ__, may I understand your thought process on that?
13:07 TVR___ so.. has anyone else had issues getting a 3-controller HA cluster up with the new 4.1?
13:07 Dr_Drache I don't see such an environment as well-functioning.
13:08 DaveJ__ Dr_Drache:  Typically we develop and deploy on Redhat.  But we were interested to see how the KVM hypervisor performed on Ubuntu vs Centos
13:08 TVR___ I am getting MCollective call failed in agent 'puppetsync', method 'rsync', failed nodes:
13:08 TVR___ ID: 3 - Reason: Fail to upload folder using command rsync -c -r --delete rsync://10.7.212.45:/puppet/modules/ /etc/puppet/modules/.
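A quick way to narrow a puppetsync failure like this down is to rerun the same transfer by hand from the failing node. The commands below are a minimal sketch, assuming 10.7.212.45 is the Fuel master from the error above:

    # list what the master's rsync daemon actually exports
    rsync rsync://10.7.212.45:/puppet/modules/
    # re-run the exact transfer mcollective attempted, verbosely
    rsync -v -c -r --delete rsync://10.7.212.45:/puppet/modules/ /etc/puppet/modules/

If the manual run also fails, the problem is on the network/rsyncd side rather than in mcollective itself.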
13:08 Dr_Drache DaveJ__, you don't want my opinion on that. :P
13:08 DaveJ__ Dr_Drache: In our non Mirantis cloud, we test ESXi and RedHat KVM hypervisors side by side.
13:09 DaveJ__ so it would be nice to create something similar if we decide to adopt Mirantis.
13:09 Dr_Drache DaveJ__, there isn't enough difference between the two.
13:10 evgeniyl DaveJ__: in theory you can do it via cli, you need to change 'profile' in provisioning data, but I'm not sure if it would work correctly.
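For reference, the CLI route evgeniyl describes would look roughly like the sketch below. The exact subcommands and flags varied between Fuel releases, so treat the syntax (and the node file name) as an assumption rather than the documented 4.1 interface:

    # dump provisioning data for environment 1 into a local directory (syntax assumed)
    fuel --env 1 provisioning --default
    # edit the per-node yaml and change the 'profile' value to the other distro's profile
    vi provisioning_1/node-2.yaml
    # push the modified provisioning data back (flag names may differ per release)
    fuel --env 1 provisioning --upload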
13:10 DaveJ__ Dr_Drache:  Not currently.  I'm really keen to see if ubuntu will include multi-queue support in KVM.  It won't be available until RHEL/CentOS 7 anyway, but it's a pretty key feature for our workloads.
13:10 Dr_Drache but if i had to put #'s to it, redhat is slightly behind
13:10 DaveJ__ Dr_Drache: Really - that's good to know.
13:10 DaveJ__ Cheers
13:11 Dr_Drache TVR___, I'm willing to guess 4.1 wasn't tested after the final build.
13:12 TVR___ do you have or were you able to get a centos 3 controller HA env up yourself?
13:12 Dr_Drache no.
13:12 Dr_Drache not able.
13:12 TVR___ ok.. and I assume.. you tried to, yes?
13:13 Dr_Drache it fails at a different spot than yours.
13:13 TVR___ OK..
13:13 Dr_Drache but, yes tried early this morning.
13:13 Dr_Drache non-HA centOS works, mostly.
13:13 TVR___ so.. here is where I stand.... first attempt worked as in got installed, but could not talk to the ceph backend....
13:14 TVR___ I rebuilt with the redeploy, and that simply failed entirely, as in it had 2 out of 3 controller nodes error and not build
13:15 TVR___ tried 2 complete from-scratch builds since, and both tries had at least one node (controller) fail..
13:15 TVR___ redeploys do not fix it.
13:15 Dr_Drache I haven't attempted a redeploy, not since you told me yesterday of that error.
13:15 TVR___ so, what worked for me with 4.0 is failing with 4.1
13:16 Dr_Drache same here... more or less.
13:16 Dr_Drache and I don't have the patches for 4.0, so I can't go back either.
13:17 TVR___ so I guess when our main man gets on here we will discuss this with him and see what we can do to help... and maybe figure this out...
13:24 Dr_Drache right, we'll see
13:24 Dr_Drache ....main man
13:24 Dr_Drache that's soo 90s... or was that 70s?
13:26 TVR___ forgot the spelling... so went with the slang
13:26 TVR___ heh
13:27 anotchenko joined #fuel
13:28 Dr_Drache pysc!
13:28 Dr_Drache ....nevermind
13:28 Dr_Drache lol
13:42 bogdando joined #fuel
13:47 dburmistrov joined #fuel
13:47 Dr_Drache TVR___, I have noticed that for the last 3 days there has been nearly no activity except a few of us needing help.
13:48 Dr_Drache justif is having the same exact issue I am with ubuntu.
13:48 Dr_Drache and I guess "needing help" is pushing it....
13:48 TVR___ so, you suspect these issues I am seeing, and you guys are seeing, are being actively worked on by the guys then?
13:49 justif Dr_Drache
13:49 justif not much, I think I fubarded it by being impatient
13:50 Dr_Drache TVR___, I think so, but no response one way or another.
13:50 Dr_Drache justif, so it wasn't my edit?
13:50 justif i dont think so
13:50 Dr_Drache did my edit help at all?
13:51 justif nope did the same thing
13:51 Dr_Drache dammit.
13:51 Dr_Drache yours might be the drive enumeration.
13:52 justif possibly, they are old hp g5's
13:52 Dr_Drache then yes.
13:53 Dr_Drache that's exactly the issue.
13:53 Dr_Drache meaning, that's 4+ patches that didn't go into 4.1
13:55 TVR___ I didn't have any issues with my DL360 G5's with 4.0... but have not tried the 4.1 on them
13:55 Dr_Drache it's not all HPs, it's only some Smart Array controllers.
13:55 Dr_Drache that's bad. and they wonder why we asked for RC isos...
13:56 justif mine are the 380's with I *think* the p400 controllers
13:58 jobewan joined #fuel
13:59 Dr_Drache justif, same here
14:00 MiroslavAnashkin Hss! Just got my network port unblocked. Not all the switches can stand my inhuman experiments with networks. Let me check this chat log for the last 3 days first
14:01 jseutter joined #fuel
14:01 jseutter left #fuel
14:01 TVR___ there's the main man
14:02 TVR___ there is unrest in the kingdom me'loard.....
14:02 jseutter joined #fuel
14:09 TVR___ MiroslavAnashkin I am having issues getting the cluster initially set up.... I am trying a few things, but using centos and HA 3 controllers, have you been successful in building one yet?
14:10 MiroslavAnashkin TVR___: Yes, many times. And we even fixed the bug with third controller not installed, while first and second deployed successfully.
14:12 Dr_Drache MiroslavAnashkin, and I'm back to the same ubuntu deploy problems we fixed with the pmanager.py edits, and justif is having the HP drive issue.
14:12 jseutter hi, anyone else see the 4.1 installer work in virtualbox but not make it to the grub screen on real hardware?  I tried a 4.0 and it got past this point..
14:13 TVR___ ok.. good to know... as I am having issues getting it deployed with the exact same parameters that worked with 4.0... once deployed, I was going to test killing a controller but I have not gotten that far yet
14:16 Dr_Drache jseutter, what type of hardware?
14:16 jseutter dell r210
14:17 jseutter Dr_Drache: and a dell r610
14:18 dubmusic left #fuel
14:18 dubmusic joined #fuel
14:22 dubmusic Has anyone installed Fuel 4.1
14:22 dubmusic ?
14:22 warpig just did it...
14:23 Dr_Drache I have
14:23 Dr_Drache I just can't deploy a full deployment
14:23 * warpig smokes cigarette
14:25 MiroslavAnashkin Dr_Drache and jseutter, We have not made a decision on excluding the serial console from the default GRUB options in 4.1. And we still have to add the workaround for Dell machines, the same one you used for 4.0
14:25 dubmusic Hey @jseutter
14:25 dubmusic Stein here
14:26 Dr_Drache MiroslavAnashkin, and grub timeout.
14:26 Dr_Drache and MiroslavAnashkin even if I edit those lines, ubuntu is still unbootable
14:26 Dr_Drache it still causes a system V freeze on computes.
14:27 Dr_Drache and a new error on the controllers.
14:27 MiroslavAnashkin Dr_Drache: Yes, it happens because the Matrox and nVidia graphics drivers are not compatible with the serial console enabled in GRUB
14:28 MiroslavAnashkin So you have to not only add the timeout, but remove the serial console from the GRUB options as well
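As an illustration only (the real change is made in Fuel's generated configuration, not by hand-editing files on the target), the kernel argument change being discussed amounts to something like:

    # before (hangs on some Matrox/nVidia boxes once the serial console is active):
    #   ... console=tty0 console=ttyS0 ...
    # after (serial console removed, nomodeset and rootwait added):
    #   ... console=tty0 nomodeset rootwait ...
    # plus a finite menu timeout so GRUB does not sit waiting on a serial line:
    #   timeout 5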
14:28 Dr_Drache I did.
14:28 Dr_Drache and doesn't change a thing.
14:28 Dr_Drache (on the computes)
14:29 Dr_Drache and on the controller machine, I get a completely new error
14:29 TVR___ MiroslavAnashkin so I went to the boxes that kept failing and rebooted them... then did a redeploy... and now the cluster is up... I also remember this from before, where I needed to reboot a server to have the deploy or redeploy work.... I will try to come up with a pattern so it can be filed as a bug and be replicated.
14:30 Dr_Drache MiroslavAnashkin, the freeze is at the line: "bio: create slab <bio-1> at 1"
14:31 Dr_Drache that's with the nomodeset + rootwait added, and the serial removed in pmanager.py
14:32 MiroslavAnashkin TVR___: Yes, it would be helpful. BTW, have you run the network check before deployment?
14:32 TVR___ yes... and it worked this time.. the neutron check works now
14:32 MiroslavAnashkin Dr_Drache: May I take a look at your resulting pmanager.py?
14:33 Dr_Drache MiroslavAnashkin, I can pull it, or paste the section.
14:33 TVR___ 4.1 network check when using neutron works fantastically ..
14:34 MiroslavAnashkin Dr_Drache: Any way you like, I believe it should also help jseutter with the Dell machines
14:35 Dr_Drache MiroslavAnashkin, http://paste.openstack.org/show/73148/
14:35 Dr_Drache that's the edited one.
14:35 Dr_Drache only line 903 was touched by me.
14:37 TVR___ my environment now works and is up... good
14:38 TVR___ testing controller failure after I test adding a compute node when ENV is under heavy load
14:38 Dr_Drache TVR___, nice.
14:44 MiroslavAnashkin TVR___: Could you please generate a diagnostic snapshot and share it? We are very interested in taking a closer look at the rsync failure.
14:44 TVR___ will do
14:44 TVR___ root
14:49 BillTheKat joined #fuel
14:56 IlyaE joined #fuel
14:58 MiroslavAnashkin Dr_Drache: Please try the following: 1. Find and delete /usr/lib/python2.6/site-packages/cobbler/pmanager.pyc, *.pyo and any precompiled pmanager files.
15:00 Dr_Drache just a .pyo is there
15:00 Dr_Drache do i need to edit that .py?
15:01 Dr_Drache err .pyc was there
15:01 MiroslavAnashkin Dr_Drache: 2. Ensure one more time that you have applied the patch and run `cobbler sync` to update your environment settings with the new partition manager. Or simply create a new environment after you have applied the patch - it does not touch already existing ones.
15:01 MiroslavAnashkin Dr_Drache: Simply remove that .pyo
15:02 Dr_Drache k
15:02 MiroslavAnashkin Dr_Drache: Let python create a new one after you have applied the patch.
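Condensed, the steps MiroslavAnashkin describes look roughly like this on the Fuel master (paths assume the stock 4.1 layout):

    # remove stale precompiled copies so python picks up the patched pmanager.py
    find /usr/lib/python2.6/site-packages/cobbler -name 'pmanager.py[co]' -delete
    # regenerate cobbler's configuration with the patched partition manager
    cobbler sync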
15:03 Dr_Drache k
15:04 anotchenko joined #fuel
15:41 Dr_Drache MiroslavAnashkin, no change
15:42 Dr_Drache still stuck @ system V
15:44 TVR___ compute add was successful.... now to 2 controllers while under load.. for a total of 5 controllers and 2 compute
15:45 TVR___ now to add 2 controllers
15:47 Dr_Drache MiroslavAnashkin, and both controllers with the bio slab error
15:57 dburmistrov joined #fuel
16:10 bogdando joined #fuel
16:17 vkozhukalov joined #fuel
16:24 MiroslavAnashkin Dr_Drache: Does it get stuck with the same error as with 4.0?
16:24 MiroslavAnashkin Dr_Drache: 4.0 stalled with Matrox driver failure.
16:25 MiroslavAnashkin Dr_Drache:  Please share the screen with the message and create new diagnostic snapshot.
16:27 xarses joined #fuel
16:27 e0ne joined #fuel
16:38 designate joined #fuel
16:40 rmoe joined #fuel
16:46 dubmusic_ joined #fuel
16:47 rvyalov joined #fuel
16:52 dubmusic joined #fuel
16:57 dburmistrov joined #fuel
17:20 IlyaE joined #fuel
17:26 justif yay centos deployment issues
17:27 toha joined #fuel
17:27 justif on a clean install of fuel
17:27 justif 4.1, failed to parse the kickstart file: specified nonexistent partition 3 in partition command
17:33 angdraug joined #fuel
17:38 rupsky joined #fuel
17:41 rupsky joined #fuel
17:50 Dr_Drache MiroslavAnashkin, yes, and no
17:53 rupsky joined #fuel
17:59 Dr_Drache MiroslavAnashkin, http://www.sendspace.com/file/v2n6no
18:07 joelgarboden joined #fuel
18:41 MiroslavAnashkin Dr_Drache: And, if possible, please share a screenshot of the error message
18:42 Dr_Drache k
19:07 Dr_Drache MiroslavAnashkin, https://www.dropbox.com/s/9xzfzaul8ldglbz/2014311145523.jpg
19:16 borgil joined #fuel
19:18 bookwar1 joined #fuel
19:28 MiroslavAnashkin Dr_Drache: Great, it looks like a new bug...
19:31 oburkov joined #fuel
19:31 e0ne joined #fuel
19:33 Dr_Drache MiroslavAnashkin,
19:33 Dr_Drache and the controllers are at the same system V as before
19:33 Dr_Drache https://www.dropbox.com/s/b0r2rep68f2xmc4/2014228110850.jpg
19:35 MiroslavAnashkin OK, at least the controllers stopped at the graphics driver as in 4.0. I'll check what was changed in our default GRUB options.
19:43 postru joined #fuel
20:11 warpig joined #fuel
20:24 MiroslavAnashkin joined #fuel
20:30 rupsky joined #fuel
20:40 IlyaE joined #fuel
20:41 justif are there outstanding issues with HP P400 smart array controllers?
20:43 rikhtig joined #fuel
20:43 Dr_Drache justif, i think so
20:43 Dr_Drache MiroslavAnashkin would be able to help you, justif
20:43 justif centos doesn't work, as it fails with a failed-to-find-partition issue
20:44 justif and ubuntu also does not work
20:44 Dr_Drache justif, make a diagnostic snapshot
20:45 Dr_Drache and upload it
20:50 justif https://mail.dr3vil.com/fuel-snapshot-2014-03-14_09-35-48.tgz
21:00 e0ne joined #fuel
21:11 e0ne joined #fuel
21:11 warpig joined #fuel
21:12 warpig justif: silly question, but you've defined a logical drive on the controller?
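For context: if no logical drive is defined on a Smart Array controller, the OS never sees a block device to partition. A rough check/create sequence with HP's hpacucli tool (slot number and RAID level are placeholders) would be:

    # show controllers, arrays and logical drives
    hpacucli ctrl all show config
    # create a logical drive from any unassigned disks (adjust slot and raid level)
    hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=1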
21:12 vkozhukalov joined #fuel
21:14 warpig anyone about who can shed some light on some "advanced" networking options?
21:18 warpig I'm looking at the reference architecture doc http://docs.mirantis.com/fuel/fuel-4.1/pdf/Mirantis-OpenStack-4.1-ReferenceArchitecture.pdf (page 21), but issuing the command in the doc doesn't produce the yaml files, only the directory...
21:19 warpig trying to get multiple VLAN-tagged public networks down to neutron...
21:28 meow-nofer__ joined #fuel
21:28 tatyana joined #fuel
21:31 isAAAc joined #fuel
21:37 rmoe joined #fuel
21:38 alexz joined #fuel
21:38 GeertJohan_ joined #fuel
21:38 brain461 joined #fuel
21:38 bookwar joined #fuel
21:40 justif joined #fuel
21:41 geogdog joined #fuel
21:41 rupsky joined #fuel
21:41 bogdando joined #fuel
21:41 aglarendil joined #fuel
21:41 jeremydei joined #fuel
21:41 mattymo joined #fuel
21:45 jkirnosova joined #fuel
22:03 xarses warpig: fun
22:06 warpig yeah, it can be...
22:06 warpig any ideas?
22:07 warpig from what I've read, the VLAN from the physical switch terminates on the OVS and then is translated to an internal VLAN.
22:08 warpig I've been trying to trunk the VLANs on the OVS side and terminate them at neutron, but I'm not sure if that's the way to go...
22:09 warpig #fuel may not be the forum for this, but the documentation suggests it's a manual config change on fuel before deployment of the nodes.
22:10 xarses warpig: I'd guess that you can configure additional endpoints and add patches for them
22:12 xarses warpig: you will have issues with return routes, as the default route would cause asymmetric routing issues
22:13 xarses let me see if i can tweak something
22:13 rupsky_ joined #fuel
22:14 warpig well, not necessarily.  different tenants are to be given different public/floating subnets.
22:15 warpig I've done quite a bit with puppet, so it would have been great to just tweak the yaml files for the nodes. Unfortunately, they're not being produced as per the docs (unless I've missed something previously)
22:19 xarses downloading the current release to test a theory. But out of the box, even if you get the yaml transforms to work, the puppet module for building the networking settings won't create policy routes, which I'm ~95% sure you will need
22:25 warpig xarses: I haven't quite got my head around the networking config just yet, so bear with me... My understanding was that each tenant has a private and a public/floating address range.  If a VM has a floating IP, the router takes care of the SNAT, so I'm not sure where the policy routes come into play.
22:28 xarses In order to implement the router, the node acting as the router must have access to each public network you want to configure
22:28 dubmusic joined #fuel
22:29 warpig understood.
22:29 xarses neutron uses ip namespaces to implement the routing, which should still need to consult the routing table on the physical interface that they are proxying from
22:30 warpig OK, cool.
22:32 xarses in which case, if you add multiple interfaces the routing table for each interface will need to be comprehensive enough to deliver packets beyond the subnet of the interface, otherwise it will seek other interfaces from the routing table to attempt to deliver the packets
22:32 warpig so, just so you're aware, I'm deploying on HP blades with FlexConnect.  The "public" interface on each blade has the same VLANs trunked down to that interface.
22:32 xarses this is where policy based routing would come into play
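A bare-bones sketch of the policy routes xarses is describing, with made-up addresses and interface names: each extra public subnet gets its own routing table and default gateway, so replies leave via the interface they arrived on.

    # route traffic sourced from the second public subnet via its own gateway
    ip route add default via 192.168.66.1 dev br-ex-2 table 100
    ip rule add from 192.168.66.0/24 table 100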
22:33 warpig OK, so if I was to trunk all VLANs down to the OVS and create a network in neutron something like this:
22:34 warpig net-create --tenant-id d4cc9f301893494bbb637ccc4ae5d013 --provider:network_type=vlan --provider:physical_network=physnet1 --provider:segmentation_id=666 --router:external=true devops_ext
22:34 warpig would that not terminate VLAN 666 as an external network on the router?
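If the net-create above works, the usual follow-up (values purely illustrative) is a DHCP-less subnet on that network plus attaching it as a router's external gateway:

    neutron subnet-create devops_ext 192.0.2.0/24 --name devops_ext_subnet \
        --disable-dhcp --gateway 192.0.2.1 \
        --allocation-pool start=192.0.2.10,end=192.0.2.200
    neutron router-gateway-set <router-id-or-name> devops_ext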
22:35 richardkiene joined #fuel
22:38 xarses oh, hmm you just want to bridge the vlan
22:38 xarses ok, that would be easier
22:43 warpig yep, exactly...
22:44 warpig :oD
22:44 xarses need to finish deploying the iso so i can look at the current transform section, unless you want to paste one of yours
22:44 warpig unfortunately, don't have remote access to our lab, so will have to send it through in the morning...
22:49 Matt_V joined #fuel
22:53 warpig xarses: no hurry...  appreciate the help.
23:10 xarses in   - action: add-patch
23:10 xarses bridges:
23:10 xarses - br-eth1
23:10 xarses - br-ex
23:10 xarses trunks:
23:10 xarses - 0
23:10 xarses add - 10
23:10 xarses after trunks -0
23:10 xarses for whichever vlans you want to add to the trunk
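Pulled together, the fragment xarses pastes above would sit in the node yaml like this (surrounding keys omitted; VLAN 10 stands in for whichever tags you want trunked):

    - action: add-patch
      bridges:
        - br-eth1
        - br-ex
      trunks:
        - 0
        - 10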
23:10 warpig xarses: is that in the Puppet manifests?
23:11 xarses thats the node yaml
23:11 xarses you would have to modify each node-role yaml file for all of the controllers
23:12 warpig does it use hiera?
23:12 xarses hiera-like yaml
23:12 warpig Puppet, that is...
23:13 xarses puppet is sent these yaml files for each role that is run; it's nearly hiera
23:15 xarses that should be all you need
23:15 xarses https://ask.openstack.org/en/question/5659/how-to-use-vlan-in-public-network/
23:41 justif joined #fuel
