
IRC log for #fuel, 2013-12-23


All times shown according to UTC.

Time Nick Message
02:54 vkozhukalov joined #fuel
03:32 ArminderS joined #fuel
03:40 ArminderS- joined #fuel
05:25 mihgen joined #fuel
05:46 e0ne joined #fuel
06:06 vk joined #fuel
07:45 e0ne joined #fuel
07:58 SteAle1 joined #fuel
07:59 mihgen joined #fuel
08:21 mattymo|srt joined #fuel
08:28 e0ne_ joined #fuel
08:29 vkozhukalov joined #fuel
09:02 SergeyLukjanov joined #fuel
09:09 mattymo|srt joined #fuel
09:23 e0ne joined #fuel
09:25 e0ne_ joined #fuel
09:26 rvyalov joined #fuel
09:39 e0ne joined #fuel
10:04 SergeyLukjanov joined #fuel
10:08 mrasskazov joined #fuel
10:18 ruhe joined #fuel
10:52 mrasskazov joined #fuel
11:42 ArminderS joined #fuel
11:42 mrasskazov joined #fuel
12:20 miguitas joined #fuel
12:30 SergeyLukjanov joined #fuel
12:32 mattymo|srt joined #fuel
13:01 ruhe joined #fuel
13:49 MiroslavAnashkin h6w: We've checked your diagnostic snapshot. It looks like you installed the master node with default settings and did nothing more, so your settings are the defaults. Please check the 10.20.0.0 network - it should have an accessible DHCP server at 10.20.0.2 and no other DHCP servers.
14:06 ruhe joined #fuel
14:07 Vidalinux joined #fuel
14:20 archon1st joined #fuel
14:21 archon1st hi, im using fuel3.2.1, and just checked the queue: http://paste.openstack.org/show/56365/
14:21 archon1st is that normal?
14:29 MiroslavAnashkin archon1st: Please run this command one more time. BTW, which configuration do you use - HA or non-HA?
14:30 archon1st MiroslavAnashkin: im using multinode HA 3 controller
14:30 archon1st MiroslavAnashkin: all 3 controllers have the same result
14:31 MiroslavAnashkin archon1st: OK, then please check the rabbitmq queue status one more time. If the same status remains for more than 5 minutes, your quantum L3 agent is presumably dead.
14:32 archon1st MiroslavAnashkin: ok will do
14:34 MiroslavAnashkin archon1st: you may also run `crm status` command on any of controllers
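[The queue check discussed above can be scripted as a small shell helper; a minimal sketch, where the 1000-message threshold is an arbitrary illustration, not a Fuel-recommended limit:]

```shell
# Flag RabbitMQ queues whose backlog exceeds a threshold.
# Input: "name messages" rows, as produced on a controller node by:
#   rabbitmqctl list_queues name messages
flag_big_queues() {
    awk -v limit="${1:-1000}" '$2 > limit {print $1, $2}'
}

# On a live controller you would pipe the real listing through it:
#   rabbitmqctl list_queues name messages | flag_big_queues 1000
# and then check agent placement with:
#   crm status
```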
14:38 ArminderS joined #fuel
14:44 archon1st MiroslavAnashkin: ok still the same result
14:45 MiroslavAnashkin archon1st: Well, what `crm status` tells?
14:45 archon1st crm status is ok, everything is still running
14:45 archon1st tested dhcp and l3 connectivity inside vm also ok
14:47 archon1st MiroslavAnashkin: the crm status: http://paste.openstack.org/show/56366/
14:49 archon1st MiroslavAnashkin: quantum agent-list http://paste.openstack.org/show/56367/
14:51 MiroslavAnashkin archon1st: These queues with long IDs in their names are actually single-use queues. So, if the L3 agent dies, or Pacemaker moves it for some reason other than the agent's death, such queues may remain in rabbitmq forever without any impact
14:52 archon1st MiroslavAnashkin: ahh glad to hear that, thanks
14:53 archon1st MiroslavAnashkin: how about the notifications.info? is that normal having that queue size for a long time?
15:23 MiroslavAnashkin archon1st: Hmm, no, it is abnormal
15:24 archon1st MiroslavAnashkin: is there any way to purge/delete it? because the system *looks like* it is working well
15:28 archon1st MiroslavAnashkin: i just created a new vm, and it grew from 1527 to 1533 --> notifications.info 1533
15:43 MiroslavAnashkin archon1st: The notifications.info queue is a kind of trashcan. You may clean it. It is known that Ceilometer in Grizzly creates a lot of messages in this queue, but 3.2.1 does not install Ceilometer.
15:44 MiroslavAnashkin archon1st: or you may check the origin of these messages with rabbitmqadmin tool. http://www.rabbitmq.com/management-cli.html
15:45 archon1st MiroslavAnashkin: ah thanks, will check the doc url
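[The rabbitmqadmin workflow suggested above can be sketched roughly as follows; the purge and list commands are from the management CLI linked in the log, while the helper function is an illustrative addition:]

```shell
# Inspecting and cleaning the notifications.info backlog with the
# RabbitMQ management CLI (http://www.rabbitmq.com/management-cli.html).
# On a controller node you would run, for example:
#   rabbitmqadmin list queues name messages
#   rabbitmqadmin purge queue name=notifications.info
# (exact flags may vary slightly between rabbitmqadmin versions)

# Helper: extract one queue's depth from "name messages" rows.
queue_depth() {
    awk -v q="$1" '$1 == q {print $2}'
}
```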
16:01 angdraug joined #fuel
16:03 SergeyLukjanov joined #fuel
16:16 Vidalinux joined #fuel
16:26 Shmeeny joined #fuel
16:33 ruhe joined #fuel
16:48 IlyaE joined #fuel
17:28 jkirnosova_ joined #fuel
17:33 IlyaE joined #fuel
17:43 Vidalinux joined #fuel
17:43 MiroslavAnashkin h6w: BTW, did you change your IP settings on the master node during installation, or did you run bootstrap_admin_node.sh after the master node was installed?
17:47 ruhe joined #fuel
17:52 SergeyLukjanov joined #fuel
18:03 vk joined #fuel
18:03 MiroslavAnashkin joined #fuel
18:05 e0ne joined #fuel
18:08 MiroslavAnashkin joined #fuel
18:15 vkozhukalov joined #fuel
18:27 mutex joined #fuel
18:34 mutex anyone enabled LDAP for the keystone module in fuel ?
18:34 mutex It is not clear to me where the role assignment is inside the puppet modules
18:34 mutex I mean puppet/cobbler sequence
18:55 MiroslavAnashkin mutex: https://www.mirantis.com/blog/ldap-identity-store-for-openstack-keystone/
18:56 mutex yeah I read that
18:56 mutex I was hoping I could change the puppet class rather than edit N different keystone config files
18:56 mutex the ldap backend code is in the puppet manifests
19:10 MiroslavAnashkin mutex: You may take <masternode>:/etc/puppet/modules/openstack/manifests/nova/controller.pp as an example of how to write a line to nova.conf
19:12 mutex neat
19:12 mutex i'll poke around there
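[One way to "poke around" the manifest mentioned above is to grep it for the resource that writes nova.conf lines; in the upstream puppet-nova module each setting is a `nova_config` resource, which is assumed here to also hold for the Fuel 3.2.x manifests:]

```shell
# List nova_config resources (the puppet-nova way of writing a line
# into nova.conf) found in a manifest fed on stdin, with line numbers.
list_nova_config() {
    grep -n "nova_config" -
}

# On the master node:
#   list_nova_config < /etc/puppet/modules/openstack/manifests/nova/controller.pp
```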
19:16 mihgen joined #fuel
19:38 IlyaE joined #fuel
19:55 mutex onoes
19:55 mutex my console service has failed again
19:55 mutex worked on friday, failure today
19:56 MiroslavAnashkin mutex: btw, what happened with your console service previous time?
19:56 mutex never tracked it down
19:57 mutex just blew away the cluster and reinstalled
19:57 mutex so the same thing has happened this time
19:57 mutex over the weekend console service fails
19:59 MiroslavAnashkin mutex: So, console service works until it reaches some point in time...
19:59 mutex yeah looks that way
20:00 MiroslavAnashkin mutex: Did you leave the console page open over the weekend?
20:00 mutex I might have on my laptop
20:00 mutex but I went home over the weekend, laptop was disconnected
20:01 MiroslavAnashkin mutex: Thanks, I'll try to reproduce it.
20:49 mrasskazov joined #fuel
21:23 mrasskazov joined #fuel
21:30 mutex and previously I just attempted to restart the consoleauth services and it didn't work
21:40 mutex MiroslavAnashkin: FYI, my error from last week http://paste.openstack.org/show/56464/
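[A couple of generic first checks for a failing console service; the log path and service name below are Grizzly-era assumptions, not details taken from the conversation. Note that nova-consoleauth keeps authorization tokens in memory (unless backed by memcached), so restarting it invalidates outstanding console sessions:]

```shell
# Pull the most recent error lines from a service log file.
# Usage: recent_errors <logfile> [count]
recent_errors() {
    grep -i "error" "$1" | tail -n "${2:-20}"
}

# On a controller you might check, for example:
#   nova-manage service list | grep consoleauth
#   recent_errors /var/log/nova/consoleauth.log 20
```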
22:11 e0ne_ joined #fuel
22:11 mutex MiroslavAnashkin: should I file a bug ?
22:23 e0ne joined #fuel
22:26 mrasskazov joined #fuel
22:31 Shmeeny joined #fuel
22:32 albionandrew joined #fuel
22:33 albionandrew What is the relationship between minimum disk size and RAM?
22:33 albionandrew For example if I'm trying to build a node that has 12GB of RAM how big does the disk need to be ?
22:33 teran joined #fuel
22:49 e0ne joined #fuel
22:50 e0ne joined #fuel
22:50 mutex MiroslavAnashkin: I wonder, could the failure of some ovs layer cause this problem ?
22:51 mutex http://paste.openstack.org/show/56478/
22:51 mutex albionandrew: depends on how much hard drive space you need
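[The question about RAM vs. disk is never answered with a formula in the log; purely as an illustration, a common swap-sizing rule of thumb (not necessarily what Fuel's partitioning actually uses) can be sketched as:]

```shell
# Suggested swap size in GB for a given RAM size in GB, using a common
# heuristic: <=2GB RAM -> 2x RAM; <=8GB -> equal to RAM; <=64GB -> half
# of RAM; above that -> a flat 4GB. Illustrative only, not Fuel's logic.
swap_gb() {
    ram=$1
    if [ "$ram" -le 2 ]; then echo $((ram * 2))
    elif [ "$ram" -le 8 ]; then echo "$ram"
    elif [ "$ram" -le 64 ]; then echo $((ram / 2))
    else echo 4
    fi
}
```

Under this heuristic a 12GB-RAM node would get roughly 6GB of swap, on top of whatever space the OS and OpenStack components themselves need.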
22:52 e0ne joined #fuel
22:53 Shmeeny joined #fuel
22:56 mrasskazov joined #fuel
