
IRC log for #fuel, 2014-06-11


All times shown according to UTC.

Time Nick Message
00:00 xarses it's possible that OpenStack generally won't be upgradeable in a stable, hands-off manner for icehouse -> juno. Notwithstanding that, we expect to be able to upgrade from icehouse -> juno in 6.0/6.0.1, however it might be experimental or semi-manual
00:01 Kupo24z1 Would it be safer to upgrade from 5.1 to 6.1 when going from icehouse -> Juno or is the process inherently risky?
00:04 xarses It's possible to upgrade from havana to icehouse now, but it's risky and very manual // hard to automate in a variety of use cases. It looks like it will be much less so in both respects for icehouse to juno, however it won't be obvious until Juno is closer to release and we know which of the improvements around upgrades did, or didn't, land for the release.
00:06 Kupo24z1 Alright, I guess it's best to wait and see
00:06 xarses If we can release something that will upgrade in a safe manner, we will
00:07 xarses We will have our plumbing ready by 6.0
00:07 xarses well, we expect to
00:09 Kupo24z1 cool, thanks a lot for your help
00:10 Kupo24z1 you guys should add an option for a puppet manifest or bash script to run on firstboot of node creation :)
00:10 xarses Kupo24z1: you could write it and propose a patch =P
00:11 Kupo24z1 I suppose, i have to get a failover script for instances first :)
00:12 xarses back in a bit
01:16 rmoe joined #fuel
01:20 xarses joined #fuel
03:47 casanch1 joined #fuel
05:29 anivar joined #fuel
05:30 anivar left #fuel
05:31 e0ne joined #fuel
05:53 odyssey4me joined #fuel
05:58 skul1 joined #fuel
06:15 Nishant joined #fuel
06:16 al_ex joined #fuel
06:16 Nishant left #fuel
07:52 b-zone joined #fuel
08:21 e0ne joined #fuel
08:33 lromagnoli joined #fuel
08:58 artem_panchenko joined #fuel
08:58 dotty joined #fuel
09:01 dotty Hi everyone. I just installed Fuel (via the Mirantis ISO) and noticed there's no login page, everything is available publicly - is there a setting for this somewhere or are you meant to secure it with a firewall or something?
09:33 mattymo dotty, you can set up iptables rules to limit access
09:33 mattymo secure login to Fuel is planned for 5.1
09:34 dotty Right, ok, thanks.
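A hedged sketch of what such iptables rules might look like on the Fuel master node, assuming the Fuel UI listens on TCP port 8000 and that 10.20.0.0/24 is the admin network allowed to reach it (both the port and the subnet are assumptions here, not confirmed above):

    # allow the admin network to reach the Fuel web UI, drop everyone else
    iptables -A INPUT -p tcp --dport 8000 -s 10.20.0.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8000 -j DROP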
09:53 lromagnoli hi, i made this deployment with virtualbox: 3 HA controllers, 2 compute, 3 ceph
09:54 lromagnoli added iptables -t nat -A POSTROUTING -s 172.16.0.0/24 \! -d 172.16.0.0/24 -j MASQUERADE
09:55 lromagnoli on the physical host
09:55 lromagnoli but there is no ping from the nodes to the external network
09:59 lromagnoli added iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE too, as written in the manual
10:21 lromagnoli ok, quite stupid... i forgot echo '1' > /proc/sys/net/ipv4/ip_forward
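For reference, a minimal sketch of the host-side NAT setup being pieced together above, assuming the host's outbound interface is eth0 (illustrative) and the two lab networks are 172.16.0.0/24 and 172.16.1.0/24:

    # enable IPv4 forwarding on the VirtualBox host
    echo 1 > /proc/sys/net/ipv4/ip_forward
    # masquerade traffic leaving the lab networks through the outbound interface
    iptables -t nat -A POSTROUTING -s 172.16.0.0/24 ! -d 172.16.0.0/24 -o eth0 -j MASQUERADE
    iptables -t nat -A POSTROUTING -s 172.16.1.0/24 ! -d 172.16.1.0/24 -o eth0 -j MASQUERADE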
11:12 Arminder joined #fuel
11:13 xarses_ joined #fuel
11:19 aedocw joined #fuel
11:28 mattymo joined #fuel
11:50 alex_didenko joined #fuel
11:55 sanek joined #fuel
12:02 sanek joined #fuel
12:18 ogelbukh joined #fuel
12:19 alex_didenko left #fuel
12:33 lromagnoli in a virtualbox environment using 3 HA controller nodes, 2 compute nodes and 3 ceph nodes, can i test block live migration on ceph?
12:37 lromagnoli i'm asking because in the "create a new openstack environment" wizard, on the page where i chose cinder -> ceph, it reports "ceph backend requires two or more ceph-OSD nodes and KVM hypervisor"
12:38 lromagnoli so that means it is not supported in a qemu environment?
12:38 odyssey4me joined #fuel
12:41 odyssey4me_ joined #fuel
12:53 casanch1 joined #fuel
13:10 akupko joined #fuel
13:49 jaypipes joined #fuel
14:01 scroiset_ joined #fuel
14:02 xarses joined #fuel
14:25 jobewan joined #fuel
14:44 xarses joined #fuel
14:49 blahRus joined #fuel
14:59 xarses_ joined #fuel
15:04 andreww joined #fuel
15:04 albionandrew joined #fuel
15:05 albionandrew xarses christopheraedo MiroslavAnashkin Is there a new ISO/IMG I can get hold of with the patches up to the beginning of the week?
15:21 xarses_ joined #fuel
15:32 xarses mihgen: ^^
15:36 albionandrew Thanks xarses
15:38 MiroslavAnashkin albionandrew: 5.0 or 4.1.1?
15:38 albionandrew 5 please
15:39 MiroslavAnashkin Yesterday we released 4.1.1 only
15:45 MiroslavAnashkin Oh, actually, release happened 18 minutes ago
15:57 e0ne_ joined #fuel
16:28 e0ne joined #fuel
16:40 albionandrew xarses MiroslavAnashkin I patched with http://tinyurl.com/mu8yzmd, I amended the console args with showmenu=yes etc and saw it. But when going through it I did not see the docker interface, so I applied the patch and rebooted. I did not push apply etc. How do I see that menu again?
16:41 xarses fuelmenu
16:50 albionandrew xarses thanks for ^ Should I set the pxe to run off the docker interface?
16:51 xarses no, it should be one of the physical interfaces
16:53 albionandrew So I shouldn’t select the docker interfaces anywhere in that menu?
16:56 xarses not likely
16:57 albionandrew xarses great. thanks
16:57 albionandrew xarses I’m going to use the same set up I used in 4.
17:02 angdraug joined #fuel
17:02 albionandrew xarses I put my settings in, did a “save and quit” but puppet hasn't started? The console says “All changes saved successfully”. Is this because I applied the patch, rebooted, then went to fuelmenu? What do I need to do to get this running?
17:14 albionandrew xarses it looks like my settings have saved to astute.yaml
17:22 albionandrew MiroslavAnashkin: ^ ?
17:48 mutex joined #fuel
18:00 Kupo24z1 xarses: you know of any 3rd parties releasing a compute evacuation script on downed node?
18:02 lromagnoli joined #fuel
18:04 lromagnoli in a virtualbox environment using 3 HA controller nodes, 2 compute nodes and 3 ceph nodes, can i test block live migration on ceph?
18:04 lromagnoli i'm asking because in the "create a new openstack environment" wizard, on the page where i chose cinder -> ceph, it reports "ceph backend requires two or more ceph-OSD nodes and KVM hypervisor"
18:04 lromagnoli so that means it is not supported in a qemu environment?
18:19 MiroslavAnashkin Docker interface configuration has been removed from Fuelmenu in one of the recent patches, along with vethXXX
18:20 MiroslavAnashkin And there is an issue - if you exit the menu, master node deployment continues
18:22 MiroslavAnashkin lromagnoli: You may set the hypervisor type to KVM in new OpenStack environment settings. KVM works slower under VBox, but I think it should work
18:23 lromagnoli is it possible to run KVM in virtualbox?
18:24 lromagnoli on the internet i found that KVM requires VT-x/AMD-V, but VirtualBox does not pass VT-x/AMD-V to the guest operating system.
18:24 MiroslavAnashkin Yes. You have to enable nested virtualization in VBox VM settings
18:26 MiroslavAnashkin And please use the latest VBox version. Some distros have an old VBox in their repos
18:26 lromagnoli my version is 4.3.10-dfsg-1
18:27 lromagnoli too old?
18:27 MiroslavAnashkin No, 4.3 should work fine
18:28 lromagnoli how do i enable nested virtualization?
18:28 MiroslavAnashkin Nested Paging
18:29 MiroslavAnashkin Both VTx/AMD-v and Nested Paging sections are on the System tab of the VM
18:30 lromagnoli ok i found it great
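The same VirtualBox settings can also be toggled from the command line with VBoxManage while the VM is powered off; the VM name below is illustrative:

    # enable hardware virtualization passthrough and nested paging for the guest
    VBoxManage modifyvm fuel-slave-1 --hwvirtex on --nestedpaging on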
18:30 lromagnoli may i ask u something more?
18:31 MiroslavAnashkin Yes, feel free
18:33 lromagnoli do i have to convert the image from qcow2 to raw manually?
18:33 MiroslavAnashkin Better to convert manually.
18:33 lromagnoli let me explain better: i deployed 3 controllers, 2 compute, 3 ceph
18:35 lromagnoli before that i set ceph RBD for ephemeral volumes
18:35 lromagnoli so now my virtual machines boot and live on ceph, right?
18:35 MiroslavAnashkin Ceph converts images on the fly but does it even when it is not necessary. So, please convert the image to RAW with qemu-img first.
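A minimal sketch of that conversion and upload, assuming a local qcow2 image; the file and image names are illustrative:

    # convert the qcow2 image to raw, then upload the raw image to Glance
    qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
    glance image-create --name cirros-raw --disk-format raw --container-format bare --file cirros.raw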
18:37 georgem2 joined #fuel
18:37 lromagnoli will the health check say ok in my config with ceph, or do i have to convert something first?
18:38 MiroslavAnashkin If you selected ceph as backend for both Glance and Cinder - yes, it boots from Ceph
18:38 lromagnoli yes, that's what i have to do to have live migration
18:42 MiroslavAnashkin Do you mean you enabled ceph rbd for ephemeral volumes after your environment has been deployed?
18:43 lromagnoli no i will do it before
18:43 lromagnoli now i will destroy and rebuild all my lab
18:43 Pookz joined #fuel
18:43 MiroslavAnashkin Ah, then OK. Simply convert your image to RAW before uploading to OpenStack
18:44 lromagnoli but in the openstack health check there is the entry "Functional tests. Duration 3 min - 14 min"
18:44 lromagnoli with Launch instance
18:45 lromagnoli is that instance image raw?
18:47 lromagnoli \lromagnoli ciao
18:47 MiroslavAnashkin No, it is qcow2. http://docs.openstack.org/image-guide/content/ch_obtaining_images.html
18:48 lromagnoli ok last question i promise
18:49 lromagnoli with iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE, I masquerade the public floating IPs of my network so my VMs inside openstack can go outside
18:51 lromagnoli if i want to ssh to my vm in openstack, after i enable the security group, is it enough to let the host computer act as a router?
18:52 MiroslavAnashkin Have you additionally shared your external network in OpenStack, in the settings for this network?
18:52 lromagnoli and configure my routes correctly to say that all traffic to 172.16.1.0 has to go to the host IP address on the outside network?
18:53 MiroslavAnashkin Yes, ssh should work in this case, if it is enabled and allowed inside the instance
18:53 lromagnoli ok perfect, tonight i will give it a try, you are very kind
18:53 MiroslavAnashkin np
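A rough sketch of the routing and security-group side of this, assuming the VirtualBox host is reachable from the workstation at 192.168.1.50 and the instances use the default security group (all addresses are illustrative):

    # on the workstation: send traffic for the floating network via the VirtualBox host
    sudo ip route add 172.16.1.0/24 via 192.168.1.50
    # in OpenStack: allow SSH and ping in the default security group
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0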
18:56 Kupo24z1 Having a weird issue with live-migrations, 'node-53 nova-nova.virt.libvirt.driver INFO: Instance launched has CPU info:' appears in the nova-all log on the destination server, however in horizon the instance's node id remains the same. re-trying the migration gives the
18:56 Kupo24z1 'Returning exception The supplied disk path already exists' error
18:57 Kupo24z1 here's the debug info from the command: http://pastebin.mozilla.org/5392246
18:59 Kupo24z1 this worked fine on CentOS, apparently broken on ubuntu
18:59 e0ne joined #fuel
19:01 dhblaz joined #fuel
19:04 albionandrew joined #fuel
19:06 albionandrew xarses MiroslavAnashkin https://bugs.launchpad.net/fuel/+bug/1323354 Does anyone have an initrd I can use?
19:24 xarses albionandrew: the old 2.6.32 initrd discovery image is on the fuel node, there is a document somewhere around how to restore it
19:25 MiroslavAnashkin albionandrew: http://9f2b43d3ab92f886c3f0-e8d43ffad23ec549234584e5c62a6e24.r60.cf1.rackcdn.com/bootstrap-5.0-kernel-2.6.zip
19:25 MiroslavAnashkin http://docs.mirantis.com/fuel/fuel-5.0/release-notes.html#known-issues-in-mirantis-openstack-5-0
19:25 xarses albionandrew: ^ yep, thats it
19:26 MiroslavAnashkin Sorry, our Boss removed the ability to send direct links to the particular issues, so please search Bootstrap kernel issues on certain hardware
19:26 albionandrew MiroslavAnashkin: xarses  great thanks
19:27 Kupo24z1 xarses: you know of any current issues with ubuntu live migrations? I've had two environments with the same issue
19:28 Kupo24z1 however on a centos env, same source iso has no issues
19:32 albionandrew MiroslavAnashkin: xarses it looks like cobbler is not installed? I can see the gui etc but yum list installed. … and whereis do not show cobbler
19:33 xarses albionandrew: everything is in containers in 5.0
19:33 xarses so you need to start by switching to the cobbler container
19:33 xarses dockerctl shell cobbler
19:34 albionandrew dockerctl shell cobbler .. TASK complete thanks. Will try pxe now.
19:40 albionandrew xarses: Fixed. Thanks
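For context, in Fuel 5.0 the master node services run in Docker containers managed with dockerctl, which is why host-level yum and whereis don't show cobbler; roughly:

    # list the containers on the Fuel master node
    dockerctl list
    # open a shell inside the cobbler container
    dockerctl shell cobbler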
20:09 e0ne joined #fuel
20:40 dilyin joined #fuel
20:40 jseutter joined #fuel
20:41 meow-nofer_ joined #fuel
20:56 rmoe joined #fuel
21:44 xarses joined #fuel
21:45 xarses_ joined #fuel
22:22 Kupo24z1 Is evacuation tested on 5.0? getting this error with ceph ephemeral: nova-oslo.messaging.rpc.dispatcher ERROR: Exception during message handling: Invalid state of instance files on shared storage
22:25 Kupo24z1 xarses xarses_ anything else i can check?
22:27 xarses rmoe ^
22:29 Kupo24z1 turns out live migration doesnt work for me on centos either, maybe a non-HA thing
22:29 Kupo24z1 as i have an HA env that works fine on centos
22:35 Kupo24z1 rmoe: here's the log from the destination evacuation node: http://pastebin.mozilla.org/5393181
22:36 rmoe I'm also interested in how live migration failed on your non-ha env, there could be a similar root cause
22:36 Kupo24z1 i can test that real quick let me get the node back online and ceph healthy
22:37 Kupo24z1 ive just been shutting off via IPMI to replicate a power loss and im sure the placement groups are mad at me
22:38 Kupo24z1 this is on a brand new install btw, all ceph boxes checked
22:39 e0ne joined #fuel
22:40 rmoe what is the exact command you used to evacuate?
22:40 angdraug Kupo24z1: looking at nova code around that error message, looks like it's going down a completely wrong path
22:40 angdraug it's raised from rebuild_instance (quoth docstring: Destroy and re-make this instance)
22:40 Kupo24z1 nova evacuate --on-shared-storage ebcf96d6-fdb9-496b-90dd-b31e818b3975 node-61
22:41 rmoe try it without --on-shared-storage
22:41 angdraug sounds like evacuate isn't even trying live migration
22:41 rmoe I see some old reference to evacuation failing for volume backed instances when using that flag
22:41 rmoe it's sort of similar to the ceph case so it's worth a shot
22:41 Kupo24z1 you want me to try live migration first?
22:42 rmoe if you can retry the evacuation lets do that first
22:45 Kupo24z1 alright waiting for nova-compute to report down
22:46 Kupo24z1 fatal error again
22:46 Kupo24z1 different this time, http://pastebin.mozilla.org/5393232
22:46 Kupo24z1 command was  nova evacuate 62dcd7e3-373b-48b3-a9de-81c606767c0a node-63
22:53 Kupo24z1 rmoe: ^
22:53 rmoe I'm not surprised, that was really just a quick shot-in-the-dark
22:53 rmoe I'll check the other live migration error and see if there is something in common here
22:54 Kupo24z1 https://bugs.launchpad.net/nova/+bug/1284709 ?
22:54 Kupo24z1 hmm nvm thats supposed to be included with icehouse
22:55 e0ne joined #fuel
22:56 Kupo24z1 well icehouse-backports, not sure if you guys apply those
22:57 rmoe we don't have that patch
22:58 rmoe I just checked a deployed environment
22:58 Kupo24z1 i am using neutron/GRE so it may be the same thing
23:03 Kupo24z1 rmoe: heres the result of a live migration http://pastebin.mozilla.org/5393257
23:03 Kupo24z1 command nova --debug live-migration ebcf96d6-fdb9-496b-90dd-b31e818b3975 node-59
23:05 angdraug oh wait are you running on ceph or local storage for nova?
23:06 Kupo24z1 according to fuel im running ceph for volumes, glance, and ephemeral (nova)
23:06 angdraug hm
23:06 Kupo24z1 under storage the only two boxes that arnt checked are Ceph RadosGW and Cinder
23:06 Kupo24z1 Cinder LVM*
23:07 Kupo24z1 Normally I do Ceph RadosGW but this time I didn't, figured it wouldn't make a difference
23:07 angdraug yeah, you've got nova on ceph
23:07 angdraug looks like a misleading error message, it's actually instance path but not disk path
23:08 angdraug there's lot of confusion among nova developers about these two concepts unfortunately
23:08 Kupo24z1 I also have the debug message from the command running if you need that
23:09 angdraug looks like one of your previous attempts failed halfway through and left instance path on the destination node behind
23:09 e0ne joined #fuel
23:10 Kupo24z1 Should i make a new instance then try?
23:11 e0ne_ joined #fuel
23:14 Kupo24z1 angdraug: tried with a new instance, no error however it did not migrate
23:14 Kupo24z1 just has the 'INFO: Instance launched has CPU info' line
23:15 Kupo24z1 and i do see a directory for the instance id '9386ff15-7c23-4501-86b0-fbbdcfcd08fb' in /var/lib/nova/instances on the target nova-compute node
23:17 Kupo24z1 I have the debug as well for this one; nova --debug live-migration 9386ff15-7c23-4501-86b0-fbbdcfcd08fb node-59
23:18 angdraug ok, that's new
23:18 angdraug no exceptions in nova log?
23:19 angdraug on either node?
23:23 Kupo24z1 sec
23:24 Kupo24z1 Nope, just a ton of INFO lines in the controller nova-all log
23:25 Kupo24z1 on 23:17 i got <180>Jun 11 23:17:45 node-58 nova-nova.consoleauth.manager AUDIT: Received Token: 084c556b-3810-4412-bc99-3f874c114f05, {'instance_uuid': u'9386ff15-7c23-4501-86b0-fbbdcfcd08fb', 'internal_access_path': None, 'last_activity_at': 1402528665.5617981, 'console_type': u'novnc', 'host': u'192.168.0.9', 'token': u'084c556b-3810-4412-bc99-3f874c114f05', 'port': u'5900'}
23:25 Kupo24z1 however since the migration was done at 23:13 i doubt it's related
23:26 angdraug and it's not migrated?
23:26 angdraug still running on node-58?
23:26 Kupo24z1 correct, never went down and virsh list on the supposed destination node turns up empty
23:26 Kupo24z1 still running on node-65
23:26 angdraug what does instance status say?
23:26 Kupo24z1 58 is the controller
23:27 angdraug ah ok
23:27 Kupo24z1 state is active
23:29 angdraug to continue with CO questions, debug is on in nova.conf, right?
23:29 Kupo24z1 everything is defaults for now, let me enable it.
23:30 Kupo24z1 also deleted the instance directory in /var/lib/nova/instances
23:30 angdraug yeah
23:31 Kupo24z1 what services do i need to restart on the controller after changing nova.conf?
23:31 rmoe api, scheduler, conductor
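Concretely, that would look roughly like the following on an Ubuntu controller (on CentOS the services are prefixed openstack-nova-); a sketch, with the debug flag going into the [DEFAULT] section of nova.conf:

    # /etc/nova/nova.conf
    [DEFAULT]
    debug = True

    # then restart the controller-side nova services
    service nova-api restart
    service nova-scheduler restart
    service nova-conductor restart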
23:33 Kupo24z1 hmm
23:33 Kupo24z1 so i just repeated the same command from earlier and it looks like it still thinks the disk is there, '
23:33 Kupo24z1 DestinationDiskExists: The supplied disk path (/var/lib/nova/instances/9386ff15-7c23-4501-86b0-fbbdcfcd08fb) already exists, it is expected not to exist.'
23:33 Kupo24z1 even after i removed the directory
23:34 Kupo24z1 im gunna make a new instance to get a new ID
23:36 Kupo24z1 same, no error on destination
23:36 Kupo24z1 got this on source server: <179>Jun 11 23:35:48 node-64 nova-nova.virt.libvirt.driver ERROR: Live Migration failure: internal error Attempt to migrate guest to the same host 00000000-0000-0000-0000-00000000efbe
23:38 Kupo24z1 command:  nova --debug live-migration 740eced0-269e-41ba-83fe-c49330764575 node-5
23:38 angdraug something wrong with your uuid rng there :)
23:39 Kupo24z1 yeah
23:41 angdraug any chance it's clashing with other nodes?
23:41 Kupo24z1 Don't see how it could be, its a brand new install
23:41 Kupo24z1 anything i can check for that?
23:43 angdraug should be in the nova db somewhere
23:44 Kupo24z1 sounds ominous
23:44 Kupo24z1 lol
23:45 Kupo24z1 well my instance table has 5 rows with only 2 servers live
23:45 angdraug that would be host table
23:46 angdraug nova host-describe from CLI might also help
23:47 Kupo24z1 angdraug: http://pastebin.mozilla.org/5393478
23:48 angdraug hm, no uuid's there
23:49 xarses the instance uuid's should be listed in nova list output
23:52 casanch1 joined #fuel
23:53 Kupo24z1 rmoe: any more ideas?
23:54 angdraug xarses: it's host uuid that in question
23:57 casanch1_ joined #fuel
