
IRC log for #fuel, 2014-04-21


All times shown according to UTC.

Time Nick Message
00:56 thehybridtech joined #fuel
01:06 justif joined #fuel
01:10 justif2 joined #fuel
01:11 xarses joined #fuel
01:31 IlyaE joined #fuel
02:09 MaverickHunter joined #fuel
03:09 IlyaE joined #fuel
04:01 IlyaE joined #fuel
05:03 dburmistrov_ joined #fuel
05:14 IlyaE joined #fuel
05:18 justif2 joined #fuel
06:21 Ch00k joined #fuel
07:50 dburmistrov_ joined #fuel
07:54 ADMiNZ joined #fuel
08:24 akislitsky joined #fuel
08:32 akislitsky joined #fuel
09:19 vk joined #fuel
09:32 rvyalov joined #fuel
10:02 vk joined #fuel
10:48 ogelbukh joined #fuel
11:10 meow-nofer joined #fuel
11:19 justif joined #fuel
11:23 tatyana joined #fuel
11:44 evg joined #fuel
11:44 evg ADMiNZ: hi
11:48 aglarendil|nb joined #fuel
12:11 ADMiNZ Hi
12:22 nurla_ left #fuel
12:27 e0ne_ joined #fuel
13:05 BillTheKat joined #fuel
13:08 BillTheKat Does anyone know how to remove an old node from Fuel? I have a node that was replaced due to hardware issues. It shows up in the "fuel node" command, but when I run "fuel node remove --node 56" it returns a 404: Not Found.
13:21 nurla_ joined #fuel
13:37 MiroslavAnashkin BillTheKat: First, try `fuel node list` on the master node to get the obsolete node ID
13:38 MiroslavAnashkin Your exchanged node should be discovered as a new one with a new ID.
13:38 MiroslavAnashkin Then, please try the following:
13:39 MiroslavAnashkin # su - postgres
13:39 MiroslavAnashkin $ psql nailgun
13:39 MiroslavAnashkin nailgun=# delete from pending_node_roles where node=<node id>;
13:40 MiroslavAnashkin nailgun=# delete from node_attributes where node_id=<node id>;
13:40 MiroslavAnashkin nailgun=# delete from nodes where id=<node id>;
13:48 MaverickHunter joined #fuel
13:51 MiroslavAnashkin You may also run `delete from node_roles where node=<node id>;`
13:59 BillTheKat MiroslavAnashkin: perfect THANKS!!!!
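
Collected into one sequence, a sketch of the fix above. It uses 56 as the stale ID (taken from the 404 message earlier); the commands and table names are verbatim from the log, run as root on the Fuel master:

    # Confirm the obsolete ID first; the replacement hardware shows up under a new ID:
    fuel node list

    # Remove the stale records directly from the nailgun database:
    su - postgres
    psql nailgun
    -- then, inside psql:
    delete from pending_node_roles where node = 56;
    delete from node_roles where node = 56;
    delete from node_attributes where node_id = 56;
    delete from nodes where id = 56;
    \q
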
14:00 jobewan joined #fuel
14:09 jaypipes joined #fuel
14:13 MaverickHunter joined #fuel
15:15 crandquist joined #fuel
15:15 thehybridtech joined #fuel
15:17 xarses joined #fuel
15:30 BillTheKat Where does Fuel get its base Ubuntu image from? And where is it kept on the Fuel server?
15:42 thehybridtech joined #fuel
15:43 MiroslavAnashkin BillTheKat: Yes, Cobbler loads images from /var/lib/tftpboot/images
15:43 BillTheKat thanks
15:44 MiroslavAnashkin BTW, the Cobbler UI is accessible at <master node IP>/cobbler_web
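
For reference, a quick look at what Cobbler is serving (the image path is the one given above; the cobbler CLI calls assume a stock Fuel master):

    # Boot images served over TFTP/PXE:
    ls -lh /var/lib/tftpboot/images
    # Distros and profiles registered in Cobbler:
    cobbler distro list
    cobbler profile list
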
15:59 rmoe joined #fuel
16:00 mattymo|home joined #fuel
16:01 dhblaz joined #fuel
16:09 tatyana joined #fuel
16:25 IlyaE joined #fuel
16:37 tatyana left #fuel
17:01 BillTheKat I added in a second external network and everything works, except any time I change any network config (e.g. add/allocate floating IP) I get the following error in /var/log/rabbitmq (below). Any idea as to what is going on? I did not get this before adding in the 2nd external network.
17:01 BillTheKat connection <0.30819.0>, channel 1 - error:
17:01 BillTheKat {amqp_error,precondition_failed,
17:01 BillTheKat "inequivalent arg 'x-ha-policy'for queue 'notifications.info' in vhost '/': received the value 'all' of type 'longstr' but current is none",
17:01 BillTheKat 'queue.declare'}
17:03 xarses BillTheKat: what version of rabbit?
17:06 BillTheKat # dpkg -l | grep rabbit
17:06 BillTheKat ii  rabbitmq-server                      2.8.7-ubuntu0                                       An AMQP server written in Erlang
17:10 justif joined #fuel
17:10 BillTheKat xarses: the dpkg -l output is above
17:11 dburmistrov_ joined #fuel
17:35 BillTheKat xarses: rabbitmq version is 2.8.7
17:36 xarses BillTheKat: yep, I was hoping it was another version. With that one, I'm at a loss for why you see it
18:00 angdraug joined #fuel
18:00 BillTheKat xarses:  can you explain what the error is by any chance?
18:02 xarses BillTheKat: it sounds like it's complaining that the queue is not currently set to ha replication
18:03 xarses should have nothing to do with adding an external network
18:03 xarses you could look up how to manually set the queue back to ha replication for rabbit 2.8.x
18:04 xarses but I thought the consumer did it the way that message is complaining about
18:05 BillTheKat ok
18:16 MiroslavAnashkin Looks like this: http://rabbitmq.1065348.n5.nabble.com/HA-Queue-declaration-td24223.html
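
A sketch of how to inspect and clear the mismatched queue on RabbitMQ 2.8.x, where x-ha-policy is fixed when the queue is declared (the policy mechanism only arrived in 3.0). The rabbitmqadmin step assumes the management plugin is enabled, which the log does not confirm:

    # See which arguments the queue was declared with:
    rabbitmqctl list_queues -p / name arguments | grep notifications.info
    # A 2.8.x queue cannot be switched to HA in place; delete it and let the
    # client re-declare it with x-ha-policy=all:
    rabbitmqadmin --vhost=/ delete queue name=notifications.info
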
18:38 albionandrew joined #fuel
18:38 IlyaE joined #fuel
18:42 albionandrew xarses I have a cluster that is passing most of the health checks; it fails the instance check etc., but when I spark an instance from the command line it runs just fine. I assume there is a networking issue. If I ping from a controller to the master, I see the pings hitting the master because I see a reply in tcpdump; I’m using an interface in the management VLAN to do this. However, from the controller the ping looks like nothing is happening. Any ideas?
18:49 xarses the ping 8.8.8.8 test?
18:52 albionandrew xarses - yes that fails
18:53 albionandrew but even the keypair one too.
18:53 albionandrew I can create a keypair fine so I think it's networking?
18:54 xarses albionandrew: usually the neutron router has a port that is down or is otherwise defunct, or the neutron router itself can't reach the next gateway
18:55 xarses you can test the neutron router by going to the host running neutron_l3_agent
18:55 xarses and then look at the ip namespace
18:55 xarses ip netns list
18:56 xarses if you still only have the one router, it will be the only qrouter-<uuid>
18:56 albionandrew I just looked and in horizon the external gateway and internal interface show as down
18:56 xarses and then you can ip netns exec qrouter-<uuid> [commands] to troubleshoot
18:57 xarses I'd check 'ip -4 a' and 'ping 8.8.8.8' from the namespace as a start
18:57 xarses also check that the namespace can ping the instance
18:57 xarses but you might need the security rule to allow you to ping the instance first
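
The router checks above as one sequence, run on the node hosting neutron_l3_agent (the UUID and instance IP are placeholders to fill in):

    ip netns list                               # expect a qrouter-<uuid> entry
    NS=qrouter-<uuid>                           # substitute the real UUID
    ip netns exec $NS ip -4 a                   # router ports and their addresses
    ip netns exec $NS ping -c 3 8.8.8.8         # can the router reach upstream?
    ip netns exec $NS ping -c 3 <instance ip>   # needs an ICMP security-group rule
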
18:58 albionandrew I did a crm status and it showed the l3agent, I then did the ip netns list and it came back with nothing.
18:59 xarses crm resource show p_neutron-l3-agent ?
19:00 albionandrew [root@node-27 ~]# crm resource show p_neutron-l3-agent
19:00 albionandrew resource p_neutron-l3-agent is running on: node-27
19:00 xarses so there are no namespaces on node-27 then?
19:01 albionandrew ..node-27.ourdomain.com
19:01 albionandrew I’ve just discovered root is full
19:09 albionandrew xarses - Any suggestions on what I should remove? There's lots of log files.
19:10 albionandrew Or what to change so I don't get so much logging
19:13 MiroslavAnashkin Please check /etc/openstack-dashboard/local_settings.py.
19:14 MiroslavAnashkin And turn off unnecessary loggers
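
A hedged sketch of where to look; the DEBUG-to-WARNING change and the web-server restart are assumptions, not steps confirmed in the log:

    # Find the handler levels in the dashboard's LOGGING dict:
    grep -n "'level'" /etc/openstack-dashboard/local_settings.py
    # Raising noisy handlers from 'DEBUG' to 'WARNING' there cuts log volume;
    # restart the web server afterwards (httpd on CentOS, apache2 on Ubuntu).
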
19:15 dhblaz joined #fuel
19:17 albionandrew thanks
19:19 MiroslavAnashkin And please do not remove log directories. Remove only files. Not all OpenStack services are capable of creating directories for logs.
19:20 albionandrew Thanks again
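
One way to free space while keeping every log directory in place (the paths are typical OpenStack locations, assumed rather than taken from the log):

    # Drop rotated and compressed logs only; live files and directories stay put:
    find /var/log -type f -name "*.log.[0-9]*" -delete
    find /var/log -type f -name "*.gz" -delete
    # Empty an oversized live log in place instead of removing it:
    truncate -s 0 /var/log/neutron/*.log
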
19:21 angdraug joined #fuel
19:54 albionandrew MiroslavAnashkin xarses I have space on the problem controller now. ip netns list still comes back with nothing listed
20:03 xarses_ joined #fuel
20:03 xarses_ albionandrew: you will likely need to restart it. But first I would check Galera/MySQL and RabbitMQ
20:11 Kupo24z joined #fuel
20:19 albionandrew left #fuel
20:23 IlyaE joined #fuel
20:42 TVR_ will the 5.0 make the May 9 deadline, do we think?
20:56 andreww joined #fuel
20:57 Kupo24z Is it normal to have packet loss on the controller node during install with Neutron + VLAN?
20:59 blahRus joined #fuel
21:04 albionandrew joined #fuel
21:06 albionandrew xarses - rebooted the controller we were talking about earlier. Fixed the logging. Functional health checks fail. crm status shows node-27 (the one filling up earlier) and node-28 having issues with openvswitch.
21:06 albionandrew ip net list now comes back with a qrouter
21:08 albionandrew Create an instance and volume attach now passing; launch instance, create snapshot, and create user and auth with Horizon pass. The rest, not so good.
21:09 albionandrew should I restart Open vSwitch on all three controllers? They all now say unknown error in crm status
23:06 BillTheKat joined #fuel
23:23 Kupo24z Anyone know of cases where floating IPs are not getting auto-assigned even though it's checked in the settings tab?
23:31 IlyaE joined #fuel
23:32 dhblaz joined #fuel
23:35 crandquist joined #fuel
23:46 xarses Kupo24z: nope; I haven't tested that myself
23:46 xarses feel free to open a bug
23:46 xarses and attach a support bundle
23:52 Kupo24z xarses: It is supposed to change it in nova.conf, correct? On the compute servers? I've got #auto_assign_floating_ip=false and
23:52 Kupo24z #default_floating_pool=nova just after install
23:56 xarses Kupo24z: it's probably set on the controllers
23:57 Kupo24z Yeah, on the controller it has auto_assign_floating_ip=True, weird
23:57 Kupo24z #default_floating_pool=nova though, no default pool
23:58 xarses not sure it would matter
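
To see where the flag actually takes effect, a quick check on a controller (the file path and setting names are from the log; the restart target is an assumption):

    # Lines starting with '#' are commented-out defaults, not live settings:
    grep -nE "auto_assign_floating_ip|default_floating_pool" /etc/nova/nova.conf
    # After editing, restart the nova services that read nova.conf, e.g.:
    service nova-api restart
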
