
IRC log for #fuel, 2014-12-02


All times shown according to UTC.

Time Nick Message
00:01 emagana joined #fuel
00:25 emagana joined #fuel
00:30 emagana joined #fuel
00:49 mattgriffin joined #fuel
01:01 emagana joined #fuel
01:14 rmoe joined #fuel
01:25 xarses joined #fuel
04:36 ArminderS joined #fuel
04:37 ArminderS- joined #fuel
05:20 neophy joined #fuel
05:27 coryc joined #fuel
05:55 monester_laptop joined #fuel
06:20 teran joined #fuel
06:32 emagana joined #fuel
06:39 dklepikov joined #fuel
06:47 ArminderS joined #fuel
07:24 stamak joined #fuel
07:27 subscope joined #fuel
07:35 e0ne joined #fuel
07:36 alexbh joined #fuel
07:36 strictlyb joined #fuel
07:37 teran joined #fuel
07:48 e0ne joined #fuel
07:51 dkusidlo joined #fuel
07:53 corepb joined #fuel
08:03 ArminderS joined #fuel
08:04 dkusidlo joined #fuel
08:07 ArminderS joined #fuel
08:09 seeg joined #fuel
08:09 baboune joined #fuel
08:09 baboune hello
08:09 SergK joined #fuel
08:10 bookwar joined #fuel
08:10 baboune any updates on https://bugs.launchpad.net/fuel/+bug/1378327 log rotation? I can observe on my system that the filesystem partition /var is slowly increasing day by day
08:11 mattymo joined #fuel
08:12 baboune maybe I failed to apply the fix?
08:15 baboune how is the logrotate file change applied to a running environment?
08:15 neophy_ joined #fuel
08:24 mattgriffin joined #fuel
08:37 hyperbaba joined #fuel
08:47 teran joined #fuel
08:49 dkusidlo joined #fuel
08:58 neophy joined #fuel
09:07 hyperbaba joined #fuel
09:18 HeOS joined #fuel
09:22 ddmitriev joined #fuel
09:28 artem_panchenko_ left #fuel
09:28 artem_panchenko_ joined #fuel
09:31 dkusidlo joined #fuel
09:34 sovsianikov joined #fuel
09:35 e0ne joined #fuel
09:35 aarefiev joined #fuel
09:35 avlasov joined #fuel
09:41 dkusidlo joined #fuel
09:42 evgeniyl__ joined #fuel
09:44 emagana joined #fuel
09:58 evg baboune: it's not applied to a running env. But the fix is trivial.
09:59 evg baboune: edit /etc/cron.hourly/logrotate
10:00 evg baboune: s/20-fuel.conf/20-fuel*.conf/
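[Editor's note: a minimal sketch of the fix evg describes, assuming the stock cron job on the Fuel master; the exact path and invocation may differ by release.]
    # /etc/cron.hourly/logrotate -- widen the glob so every 20-fuel*.conf
    # config is rotated hourly, not just 20-fuel.conf:
    /usr/sbin/logrotate /etc/logrotate.d/20-fuel*.conf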
10:01 dklepikov joined #fuel
10:01 aarefiev joined #fuel
10:01 aliemieshko_ joined #fuel
10:01 avorobiov joined #fuel
10:01 sovsianikov joined #fuel
10:01 t_dmitry joined #fuel
10:01 e0ne joined #fuel
10:05 shauno_ joined #fuel
10:08 avlasov joined #fuel
10:08 omelchek joined #fuel
10:13 shauno_ joined #fuel
10:27 emagana joined #fuel
10:28 ddmitriev joined #fuel
10:29 adanin joined #fuel
10:35 monester_laptop joined #fuel
10:37 e0ne joined #fuel
10:54 corepb joined #fuel
10:58 omartsyniuk_ joined #fuel
10:59 omartsyniuk_ joined #fuel
11:22 emagana joined #fuel
11:22 akurenyshev joined #fuel
11:27 dkusidlo joined #fuel
11:32 stamak joined #fuel
12:01 emagana joined #fuel
12:08 teran joined #fuel
12:18 baboune evg: ok. that is what I have done. Despite this, the /var %use keeps on increasing
12:31 baboune more interesting bug. 5.1, two environments. I added 4 compute nodes to env 2. Env 2 says 100% but does not complete. "Fuel env" indicates non-committed changes. Nothing in astute logs. It is stuck at 100%. Known bug? Solution?
12:38 dancn joined #fuel
12:39 evg baboune: there seem to be three bugs related to log rotation, not one. Can you evaluate which log files are to blame?
12:41 evg baboune: about 100% - have these nodes returned from reboot? And there was a bug, let me find it.
12:52 baboune evg: the two biggest log files for a node are nailgun-agent.log (282610355) and sudo.log (41256241). I can try to monitor one node to see which files grow
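[Editor's note: a quick way to do the monitoring baboune proposes; plain coreutils, run on one node, then repeat later and compare the numbers.]
    du -sk /var/log/* | sort -n | tail    # largest log files/dirs now; rerun later to see which grow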
12:55 emagana joined #fuel
12:56 vt102 joined #fuel
13:00 baboune evg: all 4 computes indicate a login prompt. hard to say if they were rebooted.  Should I trigger a reboot?
13:41 alexbh joined #fuel
13:49 emagana joined #fuel
13:49 teran joined #fuel
14:11 Longgeek joined #fuel
14:36 mattgriffin joined #fuel
14:40 rongze joined #fuel
14:43 emagana joined #fuel
14:43 Longgeek joined #fuel
14:50 dkusidlo joined #fuel
14:50 teran joined #fuel
14:53 swordzz joined #fuel
14:54 swordzz Hi. Last week I installed Fuel, and used it to set up 3 nodes (1 Controller, 2 Compute). I'm currently unable to start any instances - they don't boot as the scheduler fails.
14:54 kaliya swordzz: do you have enough resources?
14:54 swordzz I'm also unable to view the System Information through the Openstack dashboard.
14:55 kaliya swordzz: which flavour are you trying to instantiate?
14:55 evg baboune: not the same bug https://bugs.launchpad.net/fuel/+bug/1255426 ? sorry for delay...
14:55 swordzz kaliya: I've got 2 Compute resources, they're fairly powerful machines, and no currently running VMs. The VM I'm setting up is much smaller than they are.
14:55 swordzz It ends up being a medium VM.
14:55 kaliya swordzz: disk space? Medium I think requires 40G
14:56 swordzz I should probably have started by saying this is 6.0 Tech Preview
14:56 swordzz 600GB disks, 500GB for storage
14:56 kaliya Do you have a chance to run `nova hypervisor-stats`?
14:57 swordzz I'm on Ubuntu as well.
14:57 evg baboune: I don't know whether you should reboot or not. You have to check whether it's really deployed well.
14:58 swordzz http://paste.openstack.org/show/17pHTaXHvcSFLwuZ0Znp/
14:58 swordzz kaliya: That's my hypervisor stats
14:58 kaliya swordzz: which scheduler error do you have?
14:58 evg baboune: if you're sure it's ok, you can change the status
14:58 swordzz No valid host was found. Exceeded max scheduling attempts 3 for instance
14:59 swordzz This is after altering scheduler_default_filters to AllHostsFilter
14:59 baboune evg: how do I change status?
15:00 kaliya swordzz: did you try to start the cirros in a tiny?
15:02 swordzz kaliya: I don't entirely follow that suggestion. Cirros?
15:03 swordzz Ah, found it
15:03 swordzz Downloading now, will let you know when I've tried
15:05 ArminderS joined #fuel
15:06 kaliya swordzz: please try to boot a cirros in a m1.tiny, just to be sure that nova is operational
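[Editor's note: a sketch of the smoke test kaliya suggests, assuming a CirrOS image registered in Glance under the name "cirros" and the stock openrc on the controller; <net04-id> is a placeholder for the UUID of the net04 network.]
    source /root/openrc
    nova boot --flavor m1.tiny --image cirros --nic net-id=<net04-id> cirros-test
    nova show cirros-test    # wait for status ACTIVE, or inspect the fault field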
15:06 coryc joined #fuel
15:08 swordzz kaliya: Same error. So you think Nova is not operational? How can I confirm this? All the Nova services are running, and nova-scheduler.log does have output.
15:08 ArminderS left #fuel
15:14 kaliya swordzz: is the technical preview you downloaded from software.mirantis.com or from the nightly builds?
15:14 swordzz https://wiki.openstack.org/wiki/Fuel#Techinical_Preview ISO Image
15:15 kaliya But "No valid host" is a fairly generic error; do you have a stack trace in nova-all.log?
15:17 evg baboune: manually in nailgun db if you're sure it's ok
15:17 swordzz No, nothing at all in nova-all.log - 0 size file. I can get some stack trace though if I repeat it.
15:17 kaliya swordzz: on the controller, length 0?
15:17 swordzz Yes
15:17 swordzz -rw-r----- 1 syslog syslog 0 Nov 28 12:16 nova-all.log
15:18 swordzz Time on the system is correct and set to UTC, so it hasn't been touched for 4 days
15:18 kaliya I would try to boot an instance with --debug
15:19 kaliya But I'm not sure about that filter; AllHostsFilter might not work. Can you restore the original conf and boot an instance, to confirm we have no bug, please? It would be very useful for us
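[Editor's note: a sketch of what kaliya asks for, assuming the filters were changed in /etc/nova/nova.conf on the controller; the stock filter list quoted later in this log is what gets restored. <net04-id> is again a placeholder.]
    # in /etc/nova/nova.conf, revert to the default filters:
    scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
    /etc/init.d/nova-scheduler restart
    nova --debug boot --flavor m1.tiny --image cirros --nic net-id=<net04-id> cirros-test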
15:19 baboune evg: it looks fine. all services report fine.
15:19 baboune evg: so how do I change that in nailgun?
15:20 swordzz OK, will do. So far I've been using the GUI, I'm guessing you want me to do it from the command line now?
15:23 evg baboune: dockerctl shell postgres
15:23 evg baboune: su - postgres
15:23 evg baboune: psql nailgun
15:24 evg baboune: update nodes set status = 'ready', error_type = NULL where id = <NODE_ID>
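[Editor's note: the full sequence evg outlines, run on the Fuel master; <NODE_ID> stays a placeholder for the numeric id shown by `fuel node list`. Only force a node to 'ready' after confirming it really deployed correctly.]
    dockerctl shell postgres     # enter the postgres container
    su - postgres                # switch to the postgres user
    psql nailgun                 # open the nailgun database
    -- then, inside psql:
    UPDATE nodes SET status = 'ready', error_type = NULL WHERE id = <NODE_ID>;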
15:37 emagana joined #fuel
15:38 kaliya swordzz: you can try to restore the original schedulers, and start from Horizon again
15:41 swordzz kaliya: http://paste.openstack.org/show/jQjROcmpiv6eBTe9o3Ry/
15:41 swordzz Sorry for delay, took a while to work the networking out as I'm new to this
15:42 kaliya swordzz: this is in Horizon?
15:42 swordzz Running the command line command also gave me some output. Nothing looked interesting, but do you want that too?
15:42 kaliya Nope. Which schedulers now?
15:42 swordzz The error is copied from Horizon, but I ran it from the command line
15:42 kaliya Did you restart /etc/init.d/nova-scheduler?
15:42 swordzz Default, let me go copy it.
15:42 swordzz scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
15:42 swordzz I did restart it, yes
15:43 swordzz I'm getting my original error now as well, so it's definitely happened
15:43 baboune evg: all nodes are ready
15:44 baboune evg: so that is not the problem
15:44 baboune evg: nailgun=# SELECT id, status, error_type from nodes;
               id | status | error_type
              ----+--------+------------
               50 | ready  |
               54 | ready  |
               68 | ready  |
               69 | ready  |
               70 | ready  |
               52 | ready  |
               73 | ready  |
               51 | ready  |
               71 | ready  |
               72 | ready  |
               48 | ready  |
               49 | ready  |
               59 | ready  |
               74 | ready  |
               57 | ready  |
               67 | ready  |
15:48 kaliya swordzz: nova-manage service list
15:49 swordzz http://paste.openstack.org/show/kqdjbTFp80lGMOo7RzRE/
15:49 swordzz All happy, 2 compute nodes
15:50 kaliya swordzz: which networks do you assign in horizon, when launching instance?
15:50 swordzz net04 is the name, what do you want to know about it? I've also tried net04_ext.
15:51 teran joined #fuel
15:51 teran_ joined #fuel
15:51 kaliya swordzz: very weird, /var/log/nova/nova-scheduler.log is empty as well?
15:52 swordzz No, it has contents and is updating.
15:52 swordzz I haven't seen anything very obvious in there though, or I'd have mentioned it
15:53 swordzz Looking now, it's the RetryFilter which eventually returns 0 hosts
15:53 emagana joined #fuel
15:53 swordzz So it could be the same error, but presented in a different way
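[Editor's note: one way to see which filter eliminates the hosts first; the "returned 0 hosts" message comes from nova.filters, and the stock log path is assumed.]
    grep "returned 0 hosts" /var/log/nova/nova-scheduler.log | tail -5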
15:53 kaliya swordzz: do you have some stack trace?
15:53 kaliya In detail
15:54 swordzz No more than there was in my original paste
15:54 swordzz Sorry, not original, this one: http://paste.openstack.org/show/jQjROcmpiv6eBTe9o3Ry/
15:54 swordzz 2014-12-02 15:29:46.614 12651 DEBUG nova.scheduler.filters.retry_filter [req-f30bb3e8-678a-4ff8-bece-14d26156126b None] Host [u'node-3', u'node-3'] fails. Previously tried hosts: [[u'node-3', u'node-3'], [u'node-2', u'node-2']] host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:42
15:54 swordzz 2014-12-02 15:29:46.614 12651 INFO nova.filters [req-f30bb3e8-678a-4ff8-bece-14d26156126b None] Filter RetryFilter return
15:55 swordzz Those are I think the final logs that mean it stops trying
15:55 kaliya With Cirros?
15:55 swordzz Yes
15:55 swordzz Let me just copy the entire logs from that period
15:56 swordzz http://paste.openstack.org/show/kggh97HCku9FzFIVPkB7/
15:58 kaliya swordzz: are you creating a volume alongside? Or without?
15:58 swordzz Not sure I understand that?
15:58 swordzz So probably without?
15:58 swordzz Volume tab of Horizon shows none if that helps
16:00 swordzz Just been told our images normally use ephemeral storage. Not sure if that also applies to CirrOS or not.
16:01 kaliya So do you have enough space for ephemeral?
16:02 swordzz Compute nodes have 500GB HDD free. Is that what you want to know?
16:04 kaliya swordzz: it depends: are you running Ceph, and Ceph for ephemeral as well?
16:05 swordzz This might be it... Looking at my settings (in Fuel), the "Cinder LVM over iSCSI for volumes" is selected.
16:05 swordzz I don't have a dedicated Cinder node.
16:05 kaliya Ok...
16:05 swordzz Should I have selected the Ceph RBD options?
16:05 kaliya So if you dedicated 500G, it should work
16:05 kaliya No no
16:06 swordzz The comment below that option is "Requires at least one Storage - Cinder LVM node."
16:06 swordzz Which I don't think I have?
16:07 fandi joined #fuel
16:07 kaliya swordzz: please screenshot me the Fuel UI for this environment (nodes list)
16:08 swordzz 1 Controller, 2 Compute. Will do. Where do I put the image when I've done that?
16:10 kaliya Cinder?
16:10 swordzz http://imagebin.ca/v/1jGWZItZRvjJ
16:10 swordzz No, no Cinder nodes
16:13 kaliya swordzz: so you have to assign Cinder. You can combine it with Compute. But you have to redeploy a piece of the environment.
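[Editor's note: a sketch of adding the Cinder role via the Fuel CLI, assuming environment id 1 and node id 2; the same can be done in the Fuel UI followed by "Deploy Changes".]
    fuel --env 1 node set --node 2 --role cinder    # add the cinder role to a node
    fuel --env 1 deploy-changes                     # push the pending changes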
16:14 swordzz OK. That's going to take an hour or so then. Thanks for helping me out here, sorry it's so fundamental. I've obviously missed something big along the way!
16:14 swordzz Thanks
16:14 kaliya swordzz: no problem, come again to chat with us! :)
16:15 kaliya pay attention to the Disk Partitioning
16:15 swordzz While I'm here, how does the licensing for this work? If you download from Mirantis directly you have to say yes to some T+Cs, where the main restriction seems to be testing only and only 10 nodes
16:15 kaliya swordzz: no restrictions. This is open source. You can optionally pay for our highly qualified support on Fuel and OpenStack.
16:15 swordzz Does this apply to the ISO I downloaded directly from Fuel? I ask because there are still references to Mirantis, even in this setup.
16:16 swordzz OK, thanks for letting me know.
16:27 swordzz kaliya: Sorry to continue being a pain, but before I spend an hour doing this I'd like to understand a bit more! Why do we need a Cinder node to hold volumes if we're not planning on having any volumes and will just use ephemeral storage?
16:31 mattgriffin joined #fuel
16:38 angdraug joined #fuel
16:52 teran joined #fuel
16:59 teran joined #fuel
17:00 teran_ joined #fuel
17:05 emagana joined #fuel
17:19 swordzz kaliya: Just waiting for the healthcheck to finish now. When the nodes were removed they weren't fully removed from the controller - it still thinks they're there.
17:20 swordzz This causes the health check to fail as e.g. "nova-manage service list" shows my old nodes as failed, unsurprisingly...
17:20 swordzz Is this known about? Fixed in the latest? Do you want a bug report raised? Obviously it's not going to make the upcoming initial release, but still!
17:28 mattgriffin joined #fuel
17:32 rmoe joined #fuel
17:48 Longgeek joined #fuel
17:53 Longgeek_ joined #fuel
17:58 Longgeek joined #fuel
18:08 jobewan joined #fuel
18:22 emagana joined #fuel
18:24 xarses joined #fuel
18:57 matt_dupre joined #fuel
18:58 mattgriffin joined #fuel
18:59 matt_dupre Hi all - I don't seem to be able to connect to http://fuel-repository.mirantis.com - is it down?
18:59 e0ne joined #fuel
19:00 matt_dupre (I'm trying to build a fuel ISO, and it's failing to download the CentOS packages.)
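[Editor's note: a quick reachability check, nothing Fuel-specific; an empty result means the host is unreachable from your network.]
    curl -sI http://fuel-repository.mirantis.com/ | head -1    # print the HTTP status line, if any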
19:14 angdraug joined #fuel
19:26 e0ne joined #fuel
19:40 angdraug joined #fuel
19:43 rongze joined #fuel
19:58 rahulb joined #fuel
19:58 Longgeek joined #fuel
20:05 e0ne joined #fuel
20:08 ddmitriev1 joined #fuel
20:14 emagana joined #fuel
20:21 emagana joined #fuel
20:24 miroslav_ joined #fuel
20:58 rongze joined #fuel
21:06 angdraug joined #fuel
21:07 ddmitriev1 joined #fuel
21:16 teran joined #fuel
21:24 emagana joined #fuel
21:30 e0ne joined #fuel
21:32 mattgriffin joined #fuel
21:32 emagana joined #fuel
21:48 Longgeek joined #fuel
21:55 emagana joined #fuel
21:57 emagana_ joined #fuel
21:59 rongze joined #fuel
22:01 angdraug joined #fuel
22:38 emagana joined #fuel
22:38 emagana joined #fuel
22:57 Obi-Wan joined #fuel
22:57 xarses joined #fuel
23:07 jobewan joined #fuel
23:28 Bomfunk colleagues: is the https://review.openstack.org/#/c/121139/ patch included in MOS 5.1?
23:42 emagana joined #fuel
23:44 emagana_ joined #fuel
