IRC log for #fuel, 2015-07-09


All times shown according to UTC.

Time Nick Message
00:47 xarses joined #fuel
02:03 ub2 joined #fuel
02:06 youellet_ joined #fuel
02:52 hakimo joined #fuel
03:16 xarses joined #fuel
03:54 sbfox joined #fuel
04:17 neophy joined #fuel
05:56 stamak joined #fuel
06:24 dancn joined #fuel
06:33 ub joined #fuel
06:55 dancn_ joined #fuel
07:20 Miouge joined #fuel
07:38 dancn__ joined #fuel
07:50 Longgeek joined #fuel
08:34 Longgeek joined #fuel
08:48 HeOS joined #fuel
08:54 glavni_ninja joined #fuel
09:05 junkao_ After deploying fuel 6.1, network speed between VMs and the external network is slow. Why?
09:10 aliemieshko_ joined #fuel
09:11 stamak joined #fuel
09:12 sc-rm joined #fuel
09:12 sc-rm any news on fixing this https://bugs.launchpad.net/mos/+bug/1410797 for 6.0?
09:28 e0ne joined #fuel
09:47 neophy joined #fuel
10:40 NERvOus joined #fuel
10:50 NERvOus hi
10:51 NERvOus I'm doing a test deployment of mirantis 6.1 using a ubuntu 14.04 host and mirantis virtualbox scripts
10:52 NERvOus machine has 8 physical cores and 64GB of RAM
10:52 NERvOus I used launch.sh
10:52 NERvOus I got the fuel master up and running
10:52 NERvOus I connected to the web interface of fuel master
10:52 NERvOus there were no slaves running
10:53 NERvOus so I ran the script: slave-nodes-create-and-boot.sh
10:53 NERvOus manually
10:53 NERvOus and got 3 slaves
10:53 NERvOus 1 -> controller, 1 -> storage, 1 -> compute
10:53 NERvOus I clicked "deploy changes"
10:54 NERvOus and after a while got: "Deployment has failed. Method granular_deploy. Deployment failed on nodes 1. Inspect Astute logs for the details"
10:54 NERvOus astute logs -> http://paste.openstack.org/show/358260/
10:56 NERvOus any idea about what went wrong?
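
A minimal sketch of where such a generic granular_deploy failure is usually diagnosed on a Fuel 6.1 master; the container-log paths and the node name are assumptions for this release:

    # on the Fuel master node
    less /var/log/docker-logs/astute/astute.log                          # orchestration / granular_deploy errors
    ls /var/log/docker-logs/remote/                                      # per-node log directories
    less /var/log/docker-logs/remote/node-1.domain.tld/puppet-apply.log  # puppet output for the failed node (name is illustrative)
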
11:14 Longgeek joined #fuel
11:15 barthalion connectivity test failed
11:15 barthalion but to be honest, I don't know how to fix it when virtualbox scripts are used
11:15 barthalion sovsianikov might know
11:15 NERvOus barthalion: connectivity between slaves and fuel?
11:15 NERvOus or just between slaves?
11:16 barthalion no, outside connectivity
11:16 NERvOus to the internet?
11:16 barthalion yeah
11:16 NERvOus ok, assuming that I fix the connectivity issue from vbox VMs to the Internet
11:17 NERvOus can I just click "deploy changes" to retry
11:17 NERvOus or should I start clean and reinstall everything?
11:17 barthalion yes
11:17 barthalion no, 'deploy changes' should be fine
11:17 NERvOus ok, many thanks, will try it after lunch
11:17 barthalion np
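
A minimal sketch of one way to give the VirtualBox host-only networks outbound access from the Ubuntu host, assuming the demo scripts' default 172.16.0.0/24 public subnet and eth0 as the host's uplink (both are assumptions):

    # on the Ubuntu 14.04 host running VirtualBox
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -o eth0 -j MASQUERADE
    # then re-run network verification in Fuel and click "Deploy Changes" again
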
11:20 ub joined #fuel
11:32 Miouge_ joined #fuel
11:45 jaypipes joined #fuel
11:46 hezhiqiang joined #fuel
12:15 dancn joined #fuel
12:21 Miouge joined #fuel
12:22 v1k0d3n joined #fuel
12:25 saibarspeis joined #fuel
12:59 aarefiev joined #fuel
12:59 akupko joined #fuel
12:59 aliemieshko_ joined #fuel
13:00 e0ne joined #fuel
13:05 artem_panchenko joined #fuel
13:10 jaypipes joined #fuel
13:13 rbrooker joined #fuel
13:15 Miouge joined #fuel
13:17 Longgeek joined #fuel
13:18 Miouge joined #fuel
13:26 vladko joined #fuel
13:30 vladko hi,
13:31 vladko installed 6.1 and have a DHCP problem - when running a number of VMs via Horizon, some of them cannot get an IP
13:31 vladko I get this in dnsmasq.log:
13:31 vladko Jul  9 13:11:32 dnsmasq-dhcp[43759]: 2647629351 DHCPDISCOVER(tapeb900bc0-89) fa:16:3e:f8:91:e1 no address available
13:32 vladko the pool is 95% free and there are no duplicate IPs
13:32 vladko running 1 VM at a time works fine
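
A sketch of what is worth checking for the "no address available" symptom, using the neutron CLI of that era; IDs and paths are placeholders:

    # on the controller
    neutron agent-list | grep -i dhcp                  # is the DHCP agent for this network alive?
    neutron subnet-show <subnet-id>                    # does the allocation pool really have free addresses?
    neutron port-list --device-owner network:dhcp      # DHCP ports created for the network
    cat /var/lib/neutron/dhcp/<network-id>/host        # MACs dnsmasq is allowed to answer; missing entries give this error
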
14:09 jhova joined #fuel
14:18 Molk joined #fuel
14:18 Molk Hello all
14:19 Molk I have an openstack environment in HA multi-node mode on Ubuntu 12.04 deployed with fuel 6.0, using neutron GRE
14:20 Molk I would like to know if there is a way to move a qrouter from one controller to another? Can anyone help?
14:20 ssbdmntd joined #fuel
14:21 ssbdmntd Hi, I'm having an issue with deploying single controller install on 6.1
14:21 ssbdmntd looks like public__vip is not coming up and it fails the tests
14:22 ssbdmntd tried disabling/re-enabling it, no go (vip__public    (ocf::fuel:ns_IPaddr2): Stopped in pcs status)
14:22 ssbdmntd any pointers?
14:26 mwhahaha ssbdmntd: can you reach the public gateway?
14:26 mwhahaha vip__public goes down if it can't ping the public gateway
14:28 ssbdmntd mwhahaha: this is it, thanks
14:28 ssbdmntd probably something in the switch configuration
14:28 mwhahaha np
14:29 mwhahaha Molk: i don't know the answer to your question but give me a few minutes to poke around the docs and see if there is anything on how to do that
14:29 ssbdmntd the bootstrapped nodes were able to get to the gw though..
14:29 mwhahaha bootstrapped nodes go through the fuel master for networking
14:29 mwhahaha so you're running into issues once the openstack network deploys
14:29 mwhahaha check vlan tags/switch config
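
A sketch of how to confirm that on a Fuel 6.1 controller; the haproxy namespace and the resource name are what 6.1 typically uses, so treat them as assumptions:

    pcs status | grep vip__public                         # current state of the VIP resource
    ip netns list                                         # expect a 'haproxy' namespace
    ip netns exec haproxy ip addr show                    # is the public VIP plumbed inside it?
    ip netns exec haproxy ping -c 3 <public-gateway-ip>   # roughly the check the ns_IPaddr2 agent performs
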
14:32 dklepikov joined #fuel
14:35 jobewan joined #fuel
14:45 mwhahaha Molk: i think you should be able to use pacemaker to move it
14:46 devvesa joined #fuel
14:47 mwhahaha Molk: it should move if the network fails on the controller, but i'm not completely sure if you can force it over.  if it's a pacemaker service, i would think that you could move the service to another node using standard crm commands
14:48 Molk thanks for your answers
14:49 Molk as it is a balanced service on all controllers, i did not find any other way than banning the resource from the other nodes, so it only runs on the non-banned host
14:49 mwhahaha i think there's a command to migrate w/o banning but i'm not a pacemaker expert
14:49 Molk and it worked that way, command is crm_resource --ban .....
14:49 mwhahaha but yea that's one way to do it
14:49 Molk hmm it seems you can't migrate it, as the resource is running on all controllers
14:50 Molk I tried but did not work
14:50 mwhahaha oh yea if it's one of those things that runs everywhere then no you can't migrate it
14:50 mwhahaha you'd just have to ban it
14:50 mwhahaha is there a reason you don't want it on a particular controller?
14:50 Molk anyway thanks for your help :)
14:51 Molk yeah, a bandwidth problem on one controller, and I don't want end users to be impacted ^^
14:51 mwhahaha ah yea then ban would be your best bet
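
For reference, a sketch of the ban/unban commands discussed above; resource and node names are placeholders:

    pcs resource ban <resource-name> <node-fqdn>                         # force the resource off the busy controller
    crm_resource --resource <resource-name> --ban --node <node-fqdn>     # low-level equivalent Molk used
    pcs resource clear <resource-name>                                   # remove the ban later
    crm_resource --resource <resource-name> --clear
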
14:56 claflico joined #fuel
15:03 alexz joined #fuel
15:09 rmoe joined #fuel
15:12 championofcyrodi ssbdmntd: is there a 3rd party firewall blocking ICMP requests to public gateway?
15:13 championofcyrodi mwhahaha: so it looks like the controller's ARP requests aren't making it to the VLAN(s)
15:13 championofcyrodi they're just being broadcasted on the default LAN
15:13 championofcyrodi (in regard to my LACP bonded controller)
15:13 xarses joined #fuel
15:13 championofcyrodi slaves can chat fine and i can see their arp packets encapsulated in the Management VLAN via 802.1Q
15:14 mwhahaha wonder if we need http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#Configuring_Multiple_ARP_Targets
15:21 ssbdmntd championofcyrodi: there is no fw, i'm trying to debug this by altering nw conf of the bootstrapped node
15:23 kozhukalov_ joined #fuel
15:35 neophy joined #fuel
15:48 stamak joined #fuel
15:52 Miouge joined #fuel
15:55 ddmitriev joined #fuel
16:03 angdraug joined #fuel
16:03 bitblt joined #fuel
16:20 ub joined #fuel
16:22 Miouge joined #fuel
16:39 championofcyrodi mwhahaha: I'm not sure if it's an issue with multiple targets, or if the br-mgmt is just not utilizing the bond0.<vlan_id> interface.
16:39 championofcyrodi my management VLAN is 201...
16:39 championofcyrodi so i have a eth1.201 that is working on all the slaves.
16:39 championofcyrodi and the controller has a bond0.201, but i'm not yet sure how it is being utilized.
16:39 championofcyrodi there are more network components than you can shake a stick at.
16:42 CTWill joined #fuel
16:57 Miouge joined #fuel
17:00 mwhahaha pretty much
17:00 kutija_ joined #fuel
17:06 mattgriffin joined #fuel
17:11 rbrooker joined #fuel
17:14 rbrooker Hello all, I have attempted many times, now with Fuel 6.1, 2 nodes, ubuntu, vlan, but I keep getting the "Method granular_deploy..." error. I've looked it up and the fix was committed a while back. Any idea how I can fix it, or how to pull the patch?
17:15 mwhahaha rbrooker: that error is fairly generic
17:15 mwhahaha that's actually the error you get any time there's a failure during the deploy phase; you have to dig deeper to find the actual cause
17:16 mwhahaha if you can provide more of the logs, we might be able to narrow it down more
17:19 rbrooker in the Astute debug log, it seems to be failing on the puppet connectivity_tests.pp task
17:19 mwhahaha so that points to an issue with your nodes being able to reach the configured repositories
17:20 mwhahaha you either need to switch to local repositories or make sure your nodes that are being deployed to have internet connectivity
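
A sketch of the local-repository route on a Fuel 6.1 master; the exact URLs it produces are an assumption, so check the script's own output:

    # on the Fuel master
    fuel-createmirror                                   # mirror the Ubuntu and MOS repos locally
    # then point Settings -> Repositories of the environment at the master, e.g.
    #   http://<fuel-master-ip>:8080/ubuntu-part        (illustrative path)
    #   http://<fuel-master-ip>:8080/mos-ubuntu         (illustrative path)
    # reset the environment if it was already deployed, update the repos, then deploy
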
17:20 rbrooker The repo urls given are incorrect, although when I attempt to change them on in the dashboard, they still load the same ones
17:21 mwhahaha which ones?
17:21 rbrooker I have also pulled in a local one, although the mos security updates are not being pulled.
17:22 mwhahaha hmm that might be a new issue
17:22 rbrooker http://archive.ubuntu.com/ubuntu/
17:23 mwhahaha that's not the security ones
17:23 rbrooker is the current one
17:23 mwhahaha did you reset your environment prior to updating the repos?
17:23 rbrooker I reset post update
17:23 mwhahaha you should reset, update the repos and then deploy
17:24 rbrooker http://mirror.fuel-infra.org/mos/ubuntu  mos6.1-updates main restricted   | mos6.1-security main restricted  | mos6.1-holdback main restricted
17:25 rbrooker these were the first ones that caused issues,
17:25 rbrooker I removed them, in an attempt to see if all the sections were grouped in one large local repo
17:26 rbrooker the url that works requires the dists to be added after ubuntu/
17:27 mwhahaha i don't think you need to add dists to that
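
For clarity, roughly how the repository entries quoted above map to apt lines on the deployed nodes; the trusty suite and components are assumed from the 6.1 defaults:

    deb http://mirror.fuel-infra.org/mos/ubuntu mos6.1-updates main restricted
    deb http://mirror.fuel-infra.org/mos/ubuntu mos6.1-security main restricted
    deb http://mirror.fuel-infra.org/mos/ubuntu mos6.1-holdback main restricted
    deb http://archive.ubuntu.com/ubuntu/ trusty main universe multiverse
    # the suite ("dists") part is a separate field, not something appended to the URL
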
17:51 e0ne joined #fuel
17:59 stamak joined #fuel
18:02 vladko joined #fuel
18:18 bogdando joined #fuel
18:27 jaypipes joined #fuel
18:28 sgolovatiuk joined #fuel
18:36 mattgrif_ joined #fuel
18:52 HeOS joined #fuel
18:53 mattgriffin joined #fuel
19:04 championofcyrodi mwhahaha: not sure what to make of Configuring Multiple ARP Targets.
19:05 championofcyrodi is this something I would need to modify in the /etc/sysconfig/network-scripts?
19:06 mwhahaha you could try
19:06 mwhahaha but wasn't it a module param
19:07 championofcyrodi oh i see... yes it is a bonding driver option and similar to lacp_rate, defined in the bonding options in ifcfg-bond0
19:09 championofcyrodi this is a bit painful, have no idea what i'm reading the first time through under 802.1q VLAN Support...
19:09 championofcyrodi getting some post-lunch coffee
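
A sketch of what ARP-monitoring options look like in ifcfg-bond0; the values are placeholders, and the kernel bonding docs state ARP monitoring is not supported for 802.3ad (LACP) mode, so it may not apply to this particular bond:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative)
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup arp_interval=1000 arp_ip_target=192.168.0.1,192.168.0.254"
    # arp_interval/arp_ip_target and miimon are mutually exclusive link monitors
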
19:19 rbrooker joined #fuel
19:33 championofcyrodi i think the module is loaded in a dynamic fashion.
19:33 championofcyrodi so no need to rebuild the image or anything
19:33 championofcyrodi e.g. 'bonding' is not in /etc/modules
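
And a sketch of how to confirm which bonding options are actually in effect at runtime, without rebuilding anything:

    cat /proc/net/bonding/bond0                        # mode, link monitor, slave state
    cat /sys/class/net/bond0/bonding/arp_ip_target     # current ARP targets, if any
    cat /sys/class/net/bond0/bonding/miimon            # current MII monitor interval
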
19:52 mattgriffin joined #fuel
20:34 CTWill joined #fuel
20:44 devvesa joined #fuel
20:50 championofcyrodi okay, so after some review... it looks like the vlan encapsulation IS happening for the individual NICs in my link aggregate...
20:50 Longgeek joined #fuel
20:50 championofcyrodi e.g. tcpdump of ARP shows the management vlan encapsulation around each arp request...
20:51 championofcyrodi but i'm not yet sure how all this gets translated to the br-mgmt/bond0/bond0.<vlanid>
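
A sketch of how to watch the same ARP traffic at each layer and see where it stops; interface names are placeholders, and whether br-mgmt is an OVS or a Linux bridge depends on the Fuel network scheme (an assumption):

    tcpdump -e -n -i eth1 vlan 201 and arp     # physical slave of the bond: tagged frames visible?
    tcpdump -e -n -i bond0 vlan 201 and arp    # after aggregation
    tcpdump -e -n -i bond0.201 arp             # after VLAN decapsulation
    tcpdump -e -n -i br-mgmt arp               # on the bridge holding the management IP
    ovs-vsctl show    # or: brctl show         # which ports are actually attached to br-mgmt
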
21:11 ub joined #fuel
21:16 ub2 joined #fuel
21:17 vladko joined #fuel
21:46 devvesa joined #fuel
21:47 Rich joined #fuel
21:51 Longgeek joined #fuel
22:07 kutija joined #fuel
22:38 sbfox joined #fuel
23:38 Longgeek joined #fuel
23:42 xarses championofcyrodi: ping xenolog in the AM, he's in russia, but would be the best to explain how it works
