Perl 6 - the future is here, just unevenly distributed

IRC log for #fuel, 2015-08-03

| Channels | #fuel index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
01:20 julien_ZTE joined #fuel
01:26 RedShift joined #fuel
01:40 fedexo joined #fuel
02:15 julien_ZTE joined #fuel
02:23 sergmelikyan joined #fuel
02:52 hakimo_ joined #fuel
02:54 skylerberg joined #fuel
03:35 sergmelikyan joined #fuel
03:38 ximepa joined #fuel
03:39 skylerberg joined #fuel
04:05 skylerberg joined #fuel
04:36 fedexo joined #fuel
04:49 skylerberg joined #fuel
04:57 sergmelikyan joined #fuel
05:46 hezhiqiang joined #fuel
06:22 sergmelikyan joined #fuel
06:31 ub joined #fuel
06:45 mkwiek joined #fuel
07:06 e0ne joined #fuel
07:14 mkwiek joined #fuel
07:18 monester joined #fuel
07:43 devvesa joined #fuel
07:52 e0ne joined #fuel
08:10 subscope joined #fuel
08:20 ddmitriev1 joined #fuel
08:29 HeOS joined #fuel
08:30 julien_ZTE joined #fuel
08:34 omolchanov joined #fuel
08:38 julien_ZTE joined #fuel
08:40 gcossu joined #fuel
08:42 tzn joined #fuel
08:42 gcossu Hello. I have a question about ns_IPaddr2 used in Fuel 6.0. I suppose it is a Mirantis version of IPaddr with namespaces...
08:43 xenolog13 which question?
08:43 xenolog13 Your supposition is correct.
08:43 gcossu The question is: is it possible to add multiple VIPs using ns_IPaddr2? I tried, but it didn't work...
08:45 xenolog13 Do you know that ns_IPaddr2 adds the IP address to the network namespace?
08:45 gcossu Yes I know.
08:46 xenolog13 Did you use separate network namespace for your solution?
08:46 gcossu No, the same namespace "haproxy"
08:47 xenolog13 IMHO in the 6.0 it shouldn't work.
08:48 gcossu I noticed that in crm there are iptables configurations both for br-ex and br-mgmt for the public vip and management vip.
08:48 xenolog13 This OCF script can manipulate only one VIP per virtual NIC inside network namespace
08:49 gcossu Ok, in fact I tried to configure multiple, but the configuration between nodes was unstable (messing with VNICs and ARP)
08:51 gcossu So if I want to add multiple (public) VIPs, should I use the "standard" heartbeat IPaddr2?
08:51 xenolog13 Multiple public VIP is an unsupported case for fuel 6.0
08:54 xenolog13 Using more than one VIP per virtual nic inside the same network namespace is a wrong way.
08:54 gcossu What about for fuel 6.1? I noticed that the configuration of CRM is quite different. There is a vrouter and vip__public_vrouter.
08:55 xenolog13 in the 6.1 for passing VIPs to the network namespace we used native linux bridges and veth-pairs. It's possible, if you re-write ns_Ipaddr2 OCF script for it.
08:56 xenolog13 In the 6.0 this case is not possible anyway.
08:56 gcossu So does vrouter allow using more than one vip? or is it just to add connectivity to dns and ntp?
08:56 xenolog13 vrouter lives in another network namespace
08:58 gcossu ok, so coming back to ns_Ipaddr2, it needs changes to support multiple public vips.
08:59 gcossu right?
09:00 xenolog13 yep. There is no support for multiple VIPs for the same network out of the box.
09:01 xenolog13 But in the 6.1 this is possible by design.
09:01 xenolog13 In the 6.0 this solution is impossible.
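For readers following the thread: the one-VIP-per-vNIC constraint comes from how ns_IPaddr2 plumbs a VIP into a namespace. A minimal sketch of that plumbing (requires root; the namespace name "haproxy" is from this discussion, while the veth names and address are made up for illustration):

```shell
# Create the namespace and a veth pair linking it to the host side.
ip netns add haproxy
ip link add hapr-host type veth peer name hapr-ns
ip link set hapr-ns netns haproxy

# Assign the single VIP to the in-namespace end and bring it up;
# a second VIP on the same virtual NIC is what the OCF script can't manage.
ip netns exec haproxy ip addr add 192.168.1.100/24 dev hapr-ns
ip netns exec haproxy ip link set hapr-ns up
```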
09:02 gcossu My use case is this one: I need multiple public IP in order to configure the endpoints with https, and so bind each IP to port 443.
09:02 hyperbaba_ joined #fuel
09:02 kutija joined #fuel
09:03 gcossu So I can try to update my environment using 6.1 OCF files...
09:04 xenolog13 No, it's not enough
09:06 xenolog13 in the 6.0 for passing the VIP to the network namespace the proxy-arp method was used.
09:06 xenolog13 In the 6.1 veth-pairs adds directly to the corresponded bridges.
09:07 xenolog13 moreover, in the 6.0 the base network topology is built over Open vSwitch
09:08 xenolog13 in the 6.1 -- over native linux bridges.
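The two mechanisms xenolog13 contrasts can be sketched as follows (illustrative commands only; the bridge, veth names, and VIP address are made up):

```shell
# 6.0 style: the VIP lives in the namespace, and proxy-ARP on the
# host-facing interface answers ARP queries on its behalf.
echo 1 > /proc/sys/net/ipv4/conf/br-ex/proxy_arp
ip route add 192.168.1.100/32 dev hapr-host

# 6.1 style: the host end of the veth pair is plugged straight into
# a native Linux bridge, so no proxy-ARP indirection is needed.
ip link set hapr-host master br-ex
ip link set hapr-host up
```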
09:08 gcossu I noticed this behaviors, but I didn't catch very well the big picture of all components.
09:08 xenolog13 OCF scripts for 6.0 and 6.1 are incompatible.
09:08 e0ne joined #fuel
09:09 gcossu I see...
09:12 gcossu Thanks for the clarification. So any suggestion in order to add vips in fuel 6.0?
09:13 gcossu or is it not possible?
09:13 sergmelikyan joined #fuel
09:16 gcossu I mean, is there any workaround?
09:17 xenolog13 only a deep rewrite of ns_IPaddr2 may help to solve your problem
09:17 xenolog13 IMHO, You can't solve it by short workaround.
09:20 xenolog13 If you use Neutron case -- the shortest way -- is port ns_Ipaddr2 from 6.1.
09:20 xenolog13 If You use nova-network I see no way.
09:21 gcossu Actually I tried the first one. Porting files from 6.1... it works, but unfortunately it was unstable.
09:22 gcossu My environment is Neutron with VLANs
09:24 gcossu I noticed that ARPs and the interfaces in the namespaces were sometimes inconsistent.
09:29 hyperbaba_ joined #fuel
09:35 xenolog13 which inconsistency do you mean?
09:50 Billias Hi All
09:50 Billias I have a fuel-community installation
09:50 Billias But my main issue is that the master node doesn't open port 8000
09:51 gcossu Well, using 3 controllers, the ARP was sometimes wrong on one node. I suppose that when a crm resource moves from one node to another it leaves inconsistent ARP
09:52 gcossu as you mentioned, maybe it depends on the openvswitch/linux bridge changes between fuel 6.0 and 6.1
09:53 gcossu so using ns_Ipaddr2 of Fuel 6.1 in the 6.0 environment can't solve it.
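The inconsistency gcossu describes can be confirmed by comparing what each node has cached for the VIP (standard iproute2/iputils commands; the VIP address and interface name are made up for illustration):

```shell
# On the node currently holding the VIP: which MAC serves it?
ip netns exec haproxy ip addr show

# On a peer controller: what MAC is cached for the VIP?
ip neigh show 192.168.1.100

# Gratuitous ARP from the VIP holder refreshes stale peer caches
# (this is what IPaddr2-style agents do after a failover).
ip netns exec haproxy arping -U -c 3 -I hapr-ns 192.168.1.100
```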
09:55 gcossu But here I need your help in order to understand... :)
10:03 mkwiek1 joined #fuel
10:03 xarses_ joined #fuel
10:03 eliqiao joined #fuel
10:09 nurla joined #fuel
10:17 e0ne joined #fuel
10:18 xenolog13 gcossu, do you use Centos or Ubuntu ?
10:18 youellet__ joined #fuel
10:41 ub joined #fuel
10:50 kutija joined #fuel
11:19 gcossu xenolog13: Ubuntu
11:26 osryan left #fuel
11:27 ximepa joined #fuel
11:27 jaypipes joined #fuel
11:30 Billias I am booting my first fuel node, and installing it with two interfaces, but port 8000 is not present in my netstat, nor is it accessible from my system.
11:37 ximepa left #fuel
12:00 francois joined #fuel
12:17 Billias anybody can help? I followed the install guides, but I can never access port 8000
12:22 Billias any suggestions on that?
12:24 e0ne joined #fuel
12:31 karume joined #fuel
12:45 warpc__ joined #fuel
13:10 e0ne joined #fuel
13:15 RedShift_ joined #fuel
13:22 kutija joined #fuel
13:25 monester joined #fuel
13:30 mudblur joined #fuel
13:31 mudblur hi all, does anyone know how to install the Cisco APIC plugin on Mirantis FUEL 6.1?
13:32 RedShift_ joined #fuel
13:34 NERvOus joined #fuel
13:51 sergmelikyan joined #fuel
13:51 Ken316 joined #fuel
14:04 Ken316 left #fuel
14:07 claflico joined #fuel
14:46 RedShift_ joined #fuel
14:50 RedShift_ joined #fuel
15:00 RedShift_ joined #fuel
15:03 kutija joined #fuel
15:14 thumpba joined #fuel
15:27 mudblur joined #fuel
15:47 blahRus joined #fuel
15:58 skylerberg joined #fuel
16:12 ub2 joined #fuel
16:15 pmcg So, I've got this controller node that just won't deploy and i'm not sure how to troubleshoot it further. It's discovered correctly, i can add it to the environment but every time i try to deploy the controller fails. 6.1 14.04 juno
16:15 pmcg The only thing I can find in logs is "Discover prevented by /etc/nailgun-agent/nodiscover presence." about 5 times before it finally puts the node in error
16:15 pmcg all the other nodes (between 1 and 6 through various attempts) go into OS install properly
16:15 pmcg any hints would be greatly appreciated. This all worked in 6.0 12.04 Icehouse
16:26 ub joined #fuel
16:27 skylerberg joined #fuel
16:29 jdandrea joined #fuel
16:36 mquin joined #fuel
16:40 richoid joined #fuel
16:42 ub joined #fuel
16:48 kutija joined #fuel
16:59 ub2 joined #fuel
17:01 CTWill joined #fuel
17:02 sergmelikyan joined #fuel
17:02 mquin_ joined #fuel
17:06 ub joined #fuel
17:22 e0ne joined #fuel
17:33 pmcg ok, something simpler: does anyone know where I can find the logic that creates and places /etc/nailgun-agent/nodiscover on a node
17:33 pmcg grep/find in both fuel-web and fuel-nailgun-agent have been unfruitful
17:34 mwhahaha it's over in fuel-astute
17:36 mwhahaha it gets dropped when the node is supposed to be erased, https://github.com/stackforge/fuel-astute/blob/776157f722b13aff5f59bc098cf948793e6498ef/mcagents/erase_node.rb#L172-L177
17:36 mwhahaha did you remove that node or reset the environment at some point? it looks like that would happen if it was unsuccessfully erased
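A quick manual check on the affected node (the path is the one quoted above; removing the flag is only safe if you intend the node to be rediscovered):

```shell
# If the flag survived an unsuccessful erase, remove it and reboot
# so the nailgun agent reports the node to the master again.
if [ -f /etc/nailgun-agent/nodiscover ]; then
    rm /etc/nailgun-agent/nodiscover
    reboot
fi
```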
17:38 pmcg Yeah, all those things - but we've re-pxe'd the node afterward and it still fails to deploy. Is there something I can do other than that to ensure the node is in a clean state?
17:38 mwhahaha ensure the disks are wiped and reboot it
17:39 mwhahaha that's what we do but it's still picking up on the old file
17:39 mwhahaha which seems weird
17:39 pmcg Yeah i don't think it is actually, i think it's a red herring and it's dropping the file in there in anticipation of the provision which fails
17:40 pmcg Expected behavior, just that the node never actually reboots for os installation
17:40 pmcg which is weird because it reboots fine if we reset/delete the openstack environment
17:40 mwhahaha classic provisioning or image based?
17:40 mwhahaha because we don't reboot image based provisioning
17:40 pmcg classic
17:41 pmcg and it's just the one node, we've got 9 others that have no issues. Unfortunately for me this is the only one on the public network, so has to be controller.
17:42 mwhahaha i've not messed with classic much
17:42 e0ne joined #fuel
17:43 pmcg Got it, well if the provisioning process is that much different between the 2 maybe we'll switch if we can't find a resolution.
17:43 pmcg thanks!
17:43 mwhahaha i hear classic was getting dropped in 7
17:44 mwhahaha image based is much faster since we only build the image once
17:44 mwhahaha i'm going to go look into the code a bit but you may want to submit a bug with logs
17:44 mwhahaha https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Test_and_report_bugs
17:44 mwhahaha someone might be able to point you in the right direction if you want to continue with the classic provisioning method
17:45 pmcg Thanks, will do but i'd love to find something incriminating first.
17:45 mwhahaha so the node doesn't reboot at all?
17:46 pmcg not during provisioning, no
17:46 pmcg but env reset/delete it reboots fine
17:46 mwhahaha is there a line in the logs about nodes failing to reboot?
17:47 pmcg not that i've seen no, not in the nodes logs nor in the fuel master. It's always possible i missed it but... i'm pretty confident
17:47 mwhahaha https://github.com/stackforge/fuel-astute/blob/stable/6.1/lib/astute/provision.rb#L102
17:48 mwhahaha that should be the provisioning process and on L130, we should kick an info line about nodes not rebooting
17:49 pmcg this is exactly what I've been digging for, you rock.
17:49 mwhahaha L240 is where the node should have been rebooted
17:49 mwhahaha and of course theres, https://github.com/stackforge/fuel-astute/blob/stable/6.1/lib/astute/provision.rb#L246-L257
17:51 pmcg docker-astute.log, yeah?
17:51 mwhahaha docker-logs/astute/astute.log
17:53 Billias joined #fuel
17:53 pmcg Yup, there it is.
18:01 tzn joined #fuel
18:11 ub joined #fuel
18:38 ub2 joined #fuel
18:44 tatyana joined #fuel
18:45 deckardkain joined #fuel
18:47 deckardkain hi guys. Was wondering if anybody used fuel to provide tenant instances direct vlan access via the public network. So instead of instances automatically getting a private IP they would get an IP on the public network.
18:55 tatyana joined #fuel
19:14 kutija_ joined #fuel
19:17 angdraug joined #fuel
19:37 angdraug joined #fuel
19:40 xarses joined #fuel
19:40 xarses joined #fuel
19:48 angdraug joined #fuel
19:49 Billias joined #fuel
19:51 geekinutah joined #fuel
19:53 alwaysatthenoc joined #fuel
19:56 HeOS joined #fuel
19:59 RedShift_ joined #fuel
20:02 angdraug joined #fuel
20:05 JoeStack joined #fuel
20:09 JoeStack has anyone ever deployed fuel 6.1 successfully with VLAN segmentation in HA mode?
20:14 mquin_ joined #fuel
20:14 warpc__ joined #fuel
20:15 mwhahaha yes many times, what's up?
20:16 JoeStack run always into same issue... after installing the 1st controll node, the other run into timeout
20:17 mwhahaha any specific error message?
20:22 JoeStack the 2nd and 3rd controll nodes stuck with the message
20:23 JoeStack (Haproxy_backend_status[mysql](provider=haproxy)) Get CSV from url 'http://192.168.11.2:10000/;csv'
20:23 JoeStack repeating for about 30 minutes
20:23 JoeStack then aborting the install
20:23 mwhahaha so if the public gateway is not available, the public vip is down which will cause that issue
20:24 JoeStack general message from the installer:
20:24 JoeStack Deployment has failed. Timeout of deployment is exceeded
20:24 mwhahaha yea, verify the gateway for the public vip is pingable from the controller that was deployed
20:24 mwhahaha you should also verify that the public vip is running (crm status)
20:25 JoeStack so my public network is 192.168.1.0/24 and is reachable...
20:25 mwhahaha what network is 192.168.11.0/24?
20:26 JoeStack the 192.168.11.0/24 is management (vlan tagged network)
20:28 JoeStack so I think this network should be generated by the openstack nodes without any external access
20:28 JoeStack maybe I'm wrong
20:29 JoeStack the storage network is 192.168.10.0/24 (also VLAN tagged)
20:29 mwhahaha Ok so then the management vip may not be up
20:29 mwhahaha can your other nodes ping 192.168.11.2?
20:29 mwhahaha via their management interfaces?
20:30 mwhahaha also did you run a network verification prior to deployment?
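The checks mwhahaha suggests can be run from a shell on the already-deployed controller; a minimal version (the VIP address, namespace name, and haproxy stats URL are the ones quoted in this log):

```shell
# Is the management VIP resource started, and on which node?
crm status | grep -i vip

# Can this node reach the management VIP?
ping -c 3 192.168.11.2

# Is the VIP really configured inside the haproxy namespace?
ip netns exec haproxy ip addr show

# Poll the same haproxy CSV endpoint the deployment is stuck on.
curl -s 'http://192.168.11.2:10000/;csv' | head -n 5
```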
20:30 JoeStack network verification was successful
20:30 JoeStack maybe one word about my environment
20:31 JoeStack I try to install fuel and the fuel deployment on VMware nodes
20:31 mwhahaha oh
20:31 mwhahaha there's your problem
20:31 JoeStack FUEL Node eth0 for PXE L2
20:32 JoeStack FUEL Node eth1 public 192.168.1.0/24
20:32 JoeStack gw .1
20:32 mwhahaha did you allow promiscuous mode on the network?
20:32 JoeStack fuel .10
20:32 JoeStack yes i did
20:33 JoeStack so PXE Boot is not the problem. This works well and I find 5 Nodes for installing OS
20:33 mwhahaha yea that's not the issue
20:33 mwhahaha the issue is that vmware is eating your tagged traffic, someone else here had the same problem
20:33 mwhahaha and if i recall it was the network settings in vmware
20:33 mwhahaha https://docs.mirantis.com/fuel/fuel-master/planning-guide.html?highlight=vmware#esxi-host-networks-configuration
20:34 JoeStack all other nodes have one ethernet interface connected to a tagged VLAN vSwitch (allowed VLANs 1-4000)
20:35 JoeStack will have a look at that doc.....
20:37 mwhahaha Verilium may have had a similar issue, I think his solution was to enable portgroup and allowing promiscuous mode
20:37 JoeStack just to clarify, I'm not using FUEL to integrate vCenter... VMware is just the bare metal part
20:37 mwhahaha right
20:38 mwhahaha it's an issue with the network config the vmware vm instances
20:40 JoeStack I still use distributed port groups
20:40 JoeStack for public: VLAN ID 3900 as native VLAN (no tagging)
20:41 JoeStack for any other than PXE tagged VLAN on Trunking Portgroup
20:42 mwhahaha https://docs.mirantis.com/fuel/fuel-master/user-guide.html#port-group-vsphere
20:42 JoeStack so lets doublecheck the Trunking vSwitch
20:42 JoeStack ....
20:42 tzn joined #fuel
20:42 mwhahaha for the security settings, might need to enable forged transmits as well i'm not sure
20:43 JoeStack I did both
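For reference, both vSwitch security options under discussion can be toggled from the ESXi shell with esxcli (standard vSwitch shown; the switch name is illustrative, and distributed port groups are configured through vCenter instead):

```shell
# Allow promiscuous mode and forged transmits on the trunk vSwitch
# so tagged VIP/cluster traffic is not dropped by ESXi.
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch-Trunk \
    --allow-promiscuous=true \
    --allow-forged-transmits=true
```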
20:44 JoeStack but I can try to rename the Trunking vSwitch to br100
20:44 mwhahaha i don't think the name matters
20:44 JoeStack me too ;-)
20:44 mwhahaha You could always switch to gre
20:44 JoeStack but that's not what I need
20:46 mwhahaha https://docs.mirantis.com/fuel/fuel-master/planning-guide.html#fuel-on-vsphere-plan
20:46 mwhahaha https://docs.mirantis.com/fuel/fuel-master/reference-architecture.html#fuel-on-vsphere-arch
20:47 JoeStack I also tried my network config with a tagged public vlan... the verify is successful.. so the tagged frames should go through the network, but it ends up with the same issue
20:48 xarses joined #fuel
20:49 mwhahaha i'm unfortunately at the end of my ability to assist with this one since I don't have vmware to play with.  But the error you are getting is related to the inability of the nodes to communicate with each other over the management network
20:50 JoeStack thank you so far for your assistance... I will try some debugging tomorrow
20:55 bildz is it possible to just create an etherchannel, with 4 nics on the controller/storage, and have fuel push out all the vlan tags?
20:56 mwhahaha i want to say no because of the admin network needing an interface because i don't think pxe works over that but it might work
20:56 bildz thats why you use a native vlan
20:56 mwhahaha then yes? :D
20:57 bildz i've read over the design, but i'm just unsure of how the network changes get "pushed" to a controller, compute, and storage node
20:57 bildz I'm looking at the Nova-Network VLAN Manager design
20:58 bildz thats what I'd like to implement
20:58 mwhahaha you mean neutron vlan right?
20:59 mwhahaha oh no you mean nova-network, so that's getting phased out
20:59 mwhahaha not sure you want to do that
20:59 bildz https://docs.mirantis.com/openstack/fuel/fuel-6.0/reference-architecture.html#nova-network-vlan-manager
21:00 bildz oh it is?
21:00 tatyana joined #fuel
21:00 bildz it's moving to a neutron vlan?
21:01 bildz https://docs.mirantis.com/openstack/fuel/fuel-6.1/planning-guide.html#example-3-ha-neutron-with-vlan-sr-iov-iser
21:01 mwhahaha my understanding is that the openstack community has been trying to kill nova-network for years
21:01 mwhahaha you'd probably want neutron vlan
21:01 bildz can I dual purpose compute/storage?
21:01 mwhahaha if you want
21:01 bildz i have those super micro systems with like 49TB each
21:02 bildz with dual 10GbE
21:02 mudblur joined #fuel
21:03 bildz looks like all the changes are done from fuel and pushed down to the nodes
21:03 bildz hmmmm
21:03 mwhahaha yea so as part of the deployment, fuel sends network configurations to the nodes and it will create the required network interfaces as specified for the node
21:04 mwhahaha not sure on the status of bonding for 10g though, not my thing
21:04 bildz oh sweet. I was thinking i'd have to log into all the boxes and do it
21:05 mwhahaha the whole point of fuel is to try and do as much as possible for the user so you don't have to log into stuff. that being said if you want any customizations to the openstack install like availability zones or other openstack customizations you'd have to do that yourself
21:05 mwhahaha but you should be able to rely on fuel to handle partitioning and network setup as well as an initial service configuration and installation
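As a concrete example of that push model, the per-environment network configuration can be pulled down, edited, and re-uploaded from the Fuel master with the fuel CLI (syntax as in the 6.x series; the environment id is illustrative):

```shell
# Download the network settings Fuel will push to environment 1
# (written locally as a YAML file), then upload any edits back.
fuel network --env 1 --download
# ... edit the downloaded YAML ...
fuel network --env 1 --upload
```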
21:30 RedShift_ joined #fuel
21:54 JoeStack Thanks <mwhahaha> you guided me onto the right track. The deployment was now successful. I enabled promiscuous mode and renamed the vSwitch to "br100". One or both did it :-)
22:19 thumpba joined #fuel
22:41 thumpba joined #fuel
22:55 xarses bildz: if you combine them, you should take steps to ensure that resources are reserved for storage and that you don't suffocate your storage with the compute load
23:08 jhova joined #fuel
23:13 glavni_ninja joined #fuel
23:14 ken316 joined #fuel
23:14 ken316 Question from a newbie: is the expected customization, even for non-public mods like MAC/IP mappings in dhcp/dnsclient for the (PXE/admin) network, to code this in a fuel plugin?
