
IRC log for #fuel, 2014-09-25


All times shown according to UTC.

Time Nick Message
00:21 geekinutah joined #fuel
00:32 adanin joined #fuel
00:32 mattgriffin joined #fuel
00:54 adanin joined #fuel
00:57 rmoe joined #fuel
01:06 emagana joined #fuel
01:08 mattgriffin joined #fuel
01:45 emagana joined #fuel
01:52 geekinutah joined #fuel
02:08 jpf_ joined #fuel
02:14 emagana joined #fuel
02:19 adanin joined #fuel
02:28 AKirilochkin joined #fuel
02:34 pasquier-s joined #fuel
02:36 AKirilochkin_ joined #fuel
02:43 jpf joined #fuel
02:44 AKirilochkin joined #fuel
03:03 teran joined #fuel
03:15 emagana joined #fuel
03:16 alex_didenko joined #fuel
03:47 emagana joined #fuel
03:54 harybahh joined #fuel
03:56 AKirilochkin_ joined #fuel
04:04 teran joined #fuel
04:08 ArminderS joined #fuel
04:25 geekinutah joined #fuel
04:47 emagana joined #fuel
04:54 pasquier-s joined #fuel
04:55 vidalinux joined #fuel
05:03 GeertJohan joined #fuel
05:05 teran joined #fuel
05:11 stamak joined #fuel
05:48 emagana joined #fuel
05:54 anand_ts joined #fuel
05:55 anand_ts Dr_drache: are you there?
05:55 harybahh joined #fuel
05:58 anand_ts how can I add servers later if I start the installation with 4 servers now? That is, with 4 servers connected to the switch and the Fuel master installed on one server, it will automatically discover the other nodes - but how do I scale the setup?
06:11 dancn joined #fuel
06:12 dancn joined #fuel
06:21 stamak joined #fuel
06:21 adanin joined #fuel
06:23 tuvenen joined #fuel
06:35 teran joined #fuel
06:36 vidalinux anand_ts, i think this is not possible right now, as of the last time I tried fuel
06:38 flor3k joined #fuel
06:40 anand_ts vidalinux: okay, so if we want to add servers we need to rebuild the entire setup. One more thing: if I activate the free support subscription from Mirantis, it is valid for one month. If I recreate my entire setup, will I get the free subscription again?
06:41 vidalinux anand_ts, no idea
06:41 vidalinux maybe it's time to buy the subscription ;)
06:41 anand_ts vidalinux , okay :)
06:49 emagana joined #fuel
07:03 harybahh joined #fuel
07:13 flor3k joined #fuel
07:27 flor3k joined #fuel
07:37 artem_panchenko joined #fuel
07:44 HeOS joined #fuel
07:46 e0ne joined #fuel
07:47 hyperbaba joined #fuel
07:50 emagana joined #fuel
07:50 e0ne joined #fuel
07:52 emagana joined #fuel
07:56 omelchek joined #fuel
07:59 emagana joined #fuel
07:59 e0ne joined #fuel
08:10 avorobiov joined #fuel
08:10 e0ne joined #fuel
08:11 merdoc kaliya: so, 5.1 released? Seems like someone forgot to fix the text here https://software.mirantis.com/ (%
08:12 kaliya hi merdoc, yes
08:12 kaliya what's wrong
08:12 Alremovi4 joined #fuel
08:12 merdoc kaliya: Choose ‘Upgrade Package’ if you’re upgrading from 5.0 to 5.0.1
08:13 merdoc maybe "from 5.0 to 5.1" now?
08:13 kaliya I get MirantisOpenstack-5.1-Upgrade, what do you get?
08:13 kaliya Oh right you mean the text
08:13 kaliya Sure, you're right, I'll ask the fix
08:14 merdoc thx for release.
08:14 * merdoc going to update 5.0.1
08:14 kaliya thanks to you for always providing such useful points
08:14 kaliya ok let us know how it goes
08:15 merdoc anand_ts: scaling is easy - you only need to boot your new servers via the network so they get autodiscovered. after that you can add them to your env
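[ed: roughly, with the Fuel 5.x CLI - a hedged sketch; flag spellings (--env vs --env-id, --node vs --node-id) vary between releases:]

    fuel node                                          # PXE-booted servers show up in "discover" status
    fuel --env 1 node set --node 5,6 --role compute    # assign the new nodes to an existing environment
    fuel --env 1 deploy-changes                        # deploy only the pending changes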
08:21 teran joined #fuel
08:23 merdoc kaliya: is it possible to switch from non-HA to HA without recreating the whole env?
08:23 baboune joined #fuel
08:24 baboune hi, I applied the upgrade on a 5.0.2 fuel node. the upgrade seems to be stuck in an infinite loop:  2014-09-25 08:23:00 DEBUG 29468 (health_checker) Start ostf checker 2014-09-25 08:23:00 DEBUG 29468 (health_checker) Start rabbitmq checker 2014-09-25 08:23:00 DEBUG 29468 (health_checker) Start cobbler checker   2014-09-25 08:23:00 DEBUG 29468 (health_checker) Start postgres checker 2014-09-25 08:23:00 DEBUG 29468 (health_checker) Star
08:24 kaliya merdoc: at present, unfortunately not through Fuel
08:24 merdoc kaliya: got it
08:24 baboune see http://pastebin.com/c7EiTLY3
08:25 baboune I cleaned up the environment before I applied the patch
08:26 baboune so there are no keystone service anywhere...
08:28 kaliya baboune: try `dockerctl restart keystone` and retry. if it fails again, let's look at dockerctl logs keystone
08:29 merdoc kaliya: where can I find the upgrade manual?
08:30 baboune kaliya: should I just ctrl+C the upgrade?
08:31 kaliya baboune: yes, restart the container, and retry
08:32 baboune wait... it seems to have passed it now... doing some OpenStackUpgrader upgrades
08:32 kaliya merdoc: here http://docs.mirantis.com/openstack/fuel/fuel-5.1/user-guide.html?highlight=upgrade#upgrade-fuel-from-earlier-versions :)
08:35 baboune The loop repeated itself from 8:23 (host time) till 8:30 then stopped
08:36 merdoc kaliya: thx (%
08:41 emagana joined #fuel
08:41 merdoc kaliya: so I need to recreate my env to fix the issue with ephemeral storage?
08:43 kaliya baboune: it would be very useful if you could send us the `dockerctl logs keystone` output after the process has finished
08:43 kaliya merdoc: sorry which issue with ephemeral?
08:44 merdoc kaliya: if I create a flavor with ephemeral storage, and then create an instance from that flavor, I get an error.
08:44 merdoc sec, I'm trying to find related ticket
08:45 kaliya merdoc: flavor or image?
08:45 merdoc kaliya: https://bugs.launchpad.net/mos/+bug/1360000
08:47 merdoc I see that fix is already in the Icehouse master branch. So as I understand it, I need to create a new env with the new Icehouse release from 5.1
08:51 merdoc hmmmm. will try that -> "When you upgrade your Master Node to Fuel 5.1, you get experimental access to the ability to update existing environments to Mirantis OpenStack 5.0.2"
08:52 Longgeek joined #fuel
08:54 baboune kaliya: here is the log: /var/tmp/patch
08:55 baboune kaliya: https://dl.dropboxusercontent.com/u/18136096/mirantis-fuel-upgrade-keystone.log
08:58 teran joined #fuel
09:11 harybahh joined #fuel
09:12 tuvenen joined #fuel
09:14 kaliya thanks baboune
09:15 baboune new problem.. after the successful upgrade I rebooted
09:15 baboune now when accessing fuel UI I get a cloud being drawn over and over
09:16 kaliya baboune: is it a UI issue?
09:16 baboune dont know...
09:17 kaliya baboune: you access the UI and you see cloud boxes repeatedly growing??
09:18 baboune no.. just the one white cloud being drawn on the screen on a grey background
09:20 kaliya try to `dockerctl restart all`
09:21 baboune this: "fuel environment create --name test --release 34 Environment 'test' with id=6, mode=ha_compact and network-mode=nova_network was created!" worked
09:21 baboune now I get to a login page "Fuel for OpenStack"
09:21 kaliya yay
09:21 baboune as u say
09:22 flor3k joined #fuel
09:23 merdoc kaliya: do you know something about murano? i found http://murano-api.readthedocs.org/en/latest/image_builders/windows.html - they say I can use winserv 2012/2008R2. is it possible to do the same things with win7 and other 'desktop' OSes?
09:23 e0ne joined #fuel
09:25 kaliya merdoc: murano isn't intended just for windows
09:25 kaliya merdoc: if you have the win7 image, you can upload to openstack and run it
09:26 merdoc kaliya: I need something more than raw windows 7. I need to create some sort of automation, where you can select the win version and some specific software that will be installed on first instance run
09:28 tdubyk_ joined #fuel
09:29 merdoc *** UPGRADE DONE SUCCESSFULLY. ~15 min
09:31 merdoc lets try experimental features!
09:32 kaliya merdoc: not sure, but maybe it might be done with a Heat stack
09:33 merdoc kaliya: yes, currently I'm trying to do that via Heat. I thought Murano could give me more flexibility
09:35 e0ne joined #fuel
09:41 merdoc seems like the experimental upgrade failed
09:41 merdoc (Puppet::Type::Neutron_net::ProviderNeutron) Neutron API not avalaible. Wait up to 1 sec.
09:41 merdoc Could not prefetch neutron_net provider 'neutron': Can't prefetch net-list. Neutron or Keystone API not availaible.
09:41 emagana joined #fuel
09:42 vtzan ceph-disk activate /dev/mapper/mpath1
09:42 vtzan INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/mapper/mpath1
09:42 vtzan ceph-disk: Cannot discover filesystem type: device /dev/mapper/mpath1: Line is truncated:
09:42 vtzan Disk /dev/mapper/mpath1: 3414GB
09:42 vtzan Sector size (logical/physical): 512B/512B
09:42 vtzan Partition Table: gpt
09:42 vtzan Number  Start   End     Size    File system  Name          Flags
09:42 vtzan 2      1049kB  2147MB  2146MB  xfs          ceph journal
09:42 vtzan 1      2149MB  3414GB  3412GB               ceph data
09:43 vtzan data and journal are on the same disk. is that a problem?
09:43 vtzan trying to activate a SAN multipath device on ceph
09:44 Longgeek joined #fuel
09:44 merdoc vtzan: it would be nice if next time you used pastebin
09:44 sc-rm Trying to run a windows instance on openstack installed with fuel and getting “no valid host found"
09:44 sc-rm it’s running as kvm
09:44 vtzan merdoc: sorry for spamming guys. you're right.
09:45 kaliya sc-rm: that error usually means the scheduler can't find a suitable hypervisor with the resources required to boot the instance
09:45 merdoc sc-rm: check that you have enough ram/cpu/storage on node according to your flavor spec
09:45 kaliya sc-rm: check in the Admin -> Hypervisors
09:46 sc-rm kaliya: that would make sense, but I have nothing else running and the health check comes out positive
09:47 merdoc sc-rm: look at the flavor - how many cpu/ram does it allocate? maybe you don't have that much in metal
09:48 merdoc for example, if you have metal with 4cpu/8gb ram you can't use a flavor that says 'give the new instance 6cpu'
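[ed: the same check from the CLI, as a hedged sketch with the Icehouse-era novaclient:]

    nova flavor-show m1.small     # VCPUs / RAM / Disk the flavor requests
    nova hypervisor-list
    nova hypervisor-show <id>     # compare vcpus vs vcpus_used, free_ram_mb, free_disk_gb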
09:48 merdoc kaliya: by the way, what about overcommitting in 5.1? is it fixed?
09:49 merdoc saaad... Update has failed. Error occurred while running method 'deploy'. Inspect Astute logs for the details
09:49 sc-rm merdoc: http://snag.gy/Wh8LD.jpg
09:50 merdoc sc-rm: and what flavor do you use?
09:51 merdoc also look at the nova log, maybe you'll find something more useful
09:51 sc-rm merdoc: it’s a fresh install of fuel 5.1. I’m trying to use http://snag.gy/NlsCj.jpg
09:51 merdoc not nova, scheduler rather
09:52 sc-rm merdoc: also having trouble just starting a debian instance, which is able to use the small flavor
09:54 merdoc sc-rm: look into logs
09:55 sc-rm merdoc: I tried to run the TestVM with all the flavor types and there was no problem with that
09:56 merdoc kaliya: do you need logs from my unsuccessful attempt at upgrading icehouse to 5.0.2? (%
09:56 merdoc or should I recreate the env from scratch
09:57 sc-rm merdoc: nova-scheduler log says : Setting instance to ERROR state.
09:59 sc-rm merdoc: on the compute instances nova-compute log says “Info cache for instance 93f86fa5-1da7-4d95-aa96-e5dd0a09e130 could not be found” and so on for each instance failed
10:05 sc-rm joined #fuel
10:06 kaliya merdoc: yes please
10:10 merdoc kaliya: http://paste.openstack.org/show/115266/ + while upgrading it seems that neutron-server crashed, and puppet waited while I restarted it
10:15 merdoc second try also failed. ok, I will start from scratch
10:19 kaliya merdoc: metal or vm?
10:20 f13o joined #fuel
10:21 merdoc kaliya: metal
10:25 merdoc kaliya: I found a UI bug in fuel. when I choose Neutron with GRE, then on the network page choose 'Use VLAN tagging' in the 'Public' section, I get the error 'Invalid VLAN ID' and there is no input for entering the vlan id, because it has "style='display:none;'"
10:26 kaliya merdoc: I check
10:27 kaliya merdoc: 5.1?
10:28 merdoc kaliya: yes. upgraded from 5.0.1
10:37 merdoc kaliya: why can't I choose 'Install Mellanox drivers and SR-IOV plugin' on the Settings tab? is it only for neutron with vlan?
10:42 kaliya merdoc: yes, it's for neutron vlan. You can find more info about Mellanox and MOS here http://community.mellanox.com/docs/DOC-1474/
10:42 emagana joined #fuel
10:42 kaliya merdoc: I cannot reproduce your UI issue with GRE and then the VLAN settings, on a 5.1
10:44 merdoc kaliya: thx. and another question - I use ceph for glance/cinder. what is 'Virtual storage' on the disk configuration tab? is it for ephemeral disks?
10:45 merdoc kaliya: ok. I'll try to reproduce it with a different browser/new env. if I succeed I'll tell you
10:50 teran joined #fuel
10:51 teran_ joined #fuel
10:58 merdoc kaliya: I also cannot reproduce that bug in a new browser tab. Looks like it's specific to the chrome cache
10:58 kaliya merdoc: ok
11:03 kaliya merdoc: sorry, where 'virtual storage'?
11:04 vtzan anyone know if ceph automounts the osd device on boot?
11:05 merdoc kaliya: in disk configuration tab of compute node
11:05 merdoc vtzan: yes, automount
11:06 vtzan merdoc: ok thx mate!
11:07 merdoc kaliya: http://i.imgur.com/oGZZwZG.png
11:17 kaliya merdoc: I'm looking into it. Seems we have a gap in the documentation also
11:21 merdoc kaliya: ok. thx
11:27 sc-rm merdoc: so far I managed to fix the debian instances not booting; now I'm back to the windows instance
11:27 vtzan merdoc: i just rebooted and it wasn't mounted
11:28 vtzan but ceph -s shows the osd and its clean
11:29 aleksandr_null joined #fuel
11:29 stamak joined #fuel
11:30 vtzan merdoc: df -h doesn't show the path, so it isn't mounted
11:36 sc-rm merdoc: I used this guide for trying out windows on openstack http://www.cloudbase.it/ws2012r2/
11:38 merdoc sc-rm: I used this one - http://docs.openstack.org/image-guide/content/windows-image.html - they're pretty much the same
11:40 sc-rm merdoc: but then I have to create the image by hand, which is what I wanted to avoid, just to make sure it was not part of the problem
11:42 merdoc sc-rm: it can't be part of the problem in any case. it's more likely a problem with the scheduler
11:43 sc-rm merdoc: nova-scheduler also returned: Filter ImagePropertiesFilter returned 0 hosts
11:43 emagana joined #fuel
11:43 Dr_drache hmmm
11:44 sc-rm merdoc: so, something with the scheduler is the problem, but what :-)
11:44 Dr_drache which scheduler?
11:44 Dr_drache simpler or filter?
11:45 merdoc hm. I can't install compute node with ceph at all.
11:45 Dr_drache weird
11:45 Dr_drache I only use ceph
11:46 pasquier-s joined #fuel
11:47 Dr_drache merdoc, you don't "need" virtual storage if you plan on using ceph for emph disks.
11:47 Dr_drache you can set it at its minimum of 5,120MB
11:48 Dr_drache the need for it was supposed to be removed by now.
11:48 merdoc Dr_drache: yes, exactly what I did.
11:49 Dr_drache if you use ceph for cinder, then IIRC, that space will never be used.
11:52 sc-rm Dr_drache: nova-schedule - either it concludes something wrong or the subprocesses of validating the ImagePropertiesFilter is not liking the image as is
11:52 merdoc which docker container on the master manages dhcp? I want to look into the dhcp.leases file, and maybe remove some unused ips/hostnames
11:55 kaliya merdoc: should be cobbler
11:56 flor3k joined #fuel
11:56 harybahh joined #fuel
12:10 merdoc kaliya: I got an error while provisioning compute+ceph. puppet log http://paste.openstack.org/show/115303/
12:13 Dr_drache merdoc, looks like the config wasn't saved properly
12:13 merdoc yep
12:14 Dr_drache you have over 7 nodes right?
12:15 merdoc Dr_drache: no. I have 4 servers, but fuel does not delete the ips/hostnames after deleting an env, so when I recreate a compute or controller I get a new ip and hostname for it
12:16 Dr_drache umm...
12:16 Dr_drache what?
12:16 Samos123 joined #fuel
12:17 tuvenen_ joined #fuel
12:17 merdoc Dr_drache: when I first ran the openstack deployment my nodes were node-1,2,3,4. when I deleted node-4 and created it again with a different disk config I got node-5
12:18 merdoc now, when I delete all nodes and try to install a fresh openstack, they are named node-6,7,8,9
12:19 merdoc so now I'm trying to find where on the fuel master I can clear the dhcp.leases file
12:19 Dr_drache right
12:20 Dr_drache that's how it always is in fuel
12:20 Dr_drache since version 3.x
12:20 Dr_drache you have to reset all of cobbler to do that.
12:20 Dr_drache not just a single file
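[ed: hedged, for the curious - on a 5.x master the records live inside the cobbler container, and editing them by hand is unsupported:]

    dockerctl shell cobbler                  # enter the cobbler container
    cobbler system list                      # one record per node the master has ever seen
    # cobbler system remove --name=node-4    # possible, but unsupported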
12:22 tuvenen joined #fuel
12:24 merdoc Dr_drache: so it's almost impossible to start naming from node-1 again? (%
12:25 Dr_drache very difficult, redeploy fuel master node?
12:25 Dr_drache someone here can give you the commands, but I am sure they are different than they were (I don't have the old ones)
12:28 Dr_drache merdoc, you happen to know the spot to edit for overcommit ratios?
12:31 Dr_drache found it, was looking in wrong locations
12:35 kaliya merdoc: why do you want to restart naming?
12:37 merdoc kaliya: it's weird when I have only 4 servers and they're named like node-123
12:37 kaliya merdoc: they're like primary keys in a db, just unique names
12:38 kaliya however, a blueprint to permit node renaming is in progress
12:38 merdoc it would be nice if I could change the name in the fuel UI and it would change the hostname
12:39 merdoc because right now the only mapping from "host in UI" to "metal" is the MAC
12:40 Dr_drache kaliya, I'm still wrong, I want to check the overcommit rates in fuel; also, the fuel master node is the gateway for the nodes - how do I fix the routing for the (compute) nodes?
12:42 merdoc Dr_drache: http://i.imgur.com/nO9jsLF.png - in the "Public" section the gateway is my router, and all the nodes' routing points at it, if I remember correctly
12:43 Dr_drache merdoc, doesn't route dns
12:43 merdoc hm?
12:44 Dr_drache problematic if most of my services are DNS based :P
12:44 emagana joined #fuel
12:44 kaliya merdoc: with `ip r` but I should ask if there is some better way
12:45 Dr_drache issue is, the instances are golden, but the nodes can't get out, so I cannot add images with a DNS-based name to download from
12:48 merdoc Dr_drache: 5.1? have you checked 'Assign public network to all nodes' on the settings tab?
12:48 Dr_drache merdoc, you know what sir.
12:48 Dr_drache last deployment didn't get that checked.
12:49 Dr_drache little dumb things.
12:49 e0ne joined #fuel
12:55 Dr_drache merdoc, assuming that works, you happen to know where the overcommit rates are?
12:56 Dr_drache /deployment/puppet/openstack/manifests/nova/controller.pp <- don't see it there
12:56 merdoc Dr_drache: /etc/nova/nova.conf on controller
12:56 merdoc dunno where it's set in puppet
12:57 Dr_drache I'd like to set it in puppet, otherwise you have to edit it every time you scale.
12:58 merdoc Dr_drache: I think you need to grep 'cpu_allocation_ratio'
12:59 Dr_drache yea, going to have to.
13:00 merdoc as far as I can see right now that option is configured only on the controller, so you need to change it once per controller. I'm not sure you will have more than 3 controllers
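[ed: for reference, the overcommit knobs live in /etc/nova/nova.conf on the controllers; a hedged sketch - the exact values Fuel ships (the "8:1:1" mentioned below) may differ from the upstream defaults shown here:]

    [DEFAULT]
    cpu_allocation_ratio=8.0     # virtual CPUs per physical core
    ram_allocation_ratio=1.5     # RAM overcommit
    disk_allocation_ratio=1.0    # used by the DiskFilter
    # restart nova-scheduler after changing these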
13:01 merdoc YAY!!!! Second run succeed!
13:01 Dr_drache merdoc, I will have at least 4 controllers in 6 m
13:03 merdoc HA on 4 controllers? I thought it's better when you have an odd number of controllers
13:05 Dr_drache as far as I've been able to test, HA is good as long as you have AT LEAST 3... I haven't noticed any issues with even numbers past that
13:05 Dr_drache but, I got it to work on 2 even
13:06 Dr_drache looks like we got 8:1:1 by default
13:06 Dr_drache I can live with that
13:06 merdoc Dr_drache: yep, and it's not working by default %(
13:07 Dr_drache what? the over provisioning?
13:07 merdoc yes
13:08 merdoc I have metal with 4 vcpu, default cpu ratio 8, so in horizon I should see 4*8 = 32 vcpu per node
13:08 merdoc but I still see 4vcpu per node
13:09 Dr_drache hmmm, wonder where the bug is on that.
13:09 Dr_drache kaliya?
13:17 thehybridtech joined #fuel
13:25 merdoc ephemeral storage is working! just overcommitting left to deal with, and I'm ready for production use
13:28 tdubyk joined #fuel
13:34 jaypipes joined #fuel
13:39 kaliya merdoc: I'm looking into the nova code and reports: "Virtual CPU to physical CPU allocation ratio"
13:44 emagana joined #fuel
13:50 Dr_drache merdoc, pretty close to where I am at as well
13:57 merdoc looks like I misunderstood the concept of ephemeral storage. according to http://docs.openstack.org/admin-guide-cloud/content/section_storage-and-openstack-compute.html ephemeral storage is not the kind of thing I can reattach between instances, and data on that storage is persistent across reboots, just like root storage
13:59 merdoc so can someone explain to me the difference between root and ephemeral disks in the scope of flavors?
14:00 enik_ ephemeral storage is local hardware storage which disappears after shutdown/migration
14:00 enik_ some kind of temp storage
14:00 mattgriffin joined #fuel
14:01 enik_ root storage can be the same (created from image, disappears after shutdown) or shared storage from ceph/iscsi/fc etc
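[ed: in flavor terms the two are separate size fields; a hedged example with the era's novaclient - the flavor name and sizes are made up:]

    # 2 vcpus, 2048MB RAM, 20GB root disk, plus a 10GB ephemeral scratch disk
    nova flavor-create m1.scratch auto 2048 20 2 --ephemeral 10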
14:03 merdoc enik_: so, if I create some files on an ephemeral disk and then reboot the instance, do those files disappear?
14:04 merdoc and how can I share root storage between instances while using ceph?
14:04 merdoc or what do you mean by "shared storage from ceph/iscsi/fc"
14:09 avorobiov joined #fuel
14:09 kaliya merdoc: about overcommit, there is a confusing point in Horizon https://bugs.launchpad.net/horizon/+bug/1202965
14:10 dkaigarodsev joined #fuel
14:10 Dr_drache merdoc, what are you trying to accomplish with storage?
14:10 enik_ merdoc: reboot, I don't know; shutdown (meaning removing the instance from the server) - yes
14:11 Dr_drache enik_ : I think the term for that is terminate with deletion.
14:12 enik_ merdoc: you should not share storage between instances without using a shared filesystem like gfs; I don't even know if openstack allows sharing remote storage
14:12 Dr_drache merdoc, are you just trying to have a shared disk between instances?
14:13 Dr_drache that can be done by adding a volume and attaching it to the instance.
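[ed: a hedged sketch of exactly that, with the era's clients - ids and names are illustrative:]

    cinder create --display-name data-vol 10               # 10GB persistent volume
    nova volume-attach <instance-id> <volume-id> /dev/vdb
    # later: detach, then attach to a different instance
    nova volume-detach <instance-id> <volume-id>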
14:13 enik_ Dr_drache: thx, I don't know many Openstack terms yet ;)
14:14 Dr_drache enik_ ; it's fine, they confuse me a lot, I'm used to KVM clusters - where delete is delete and terminate is shutdown
14:15 merdoc Dr_drache: no, I'm trying to understand why I need ephemeral storage at all, if I can create a flavor with a large root disk. I need some persistent storage that I can reattach to another instance if needed. And it seems like I only need to create a 'Volume' in horizon and attach it
14:15 Dr_drache right
14:16 Dr_drache ephemeral storage isn't needed, that's the "virtual storage" you shrunk to 5,120MB
14:16 Dr_drache ceph takes care of that.
14:16 Dr_drache but, you should not be using qcow2 formats.
14:18 enik_ merdoc: because it was fundamental for old Cloud systems, root image was only 10GB, there was no shared storage, and apps stored data in: database, object storage, and local disks (ephemeral, temporary)
14:19 Dr_drache with ceph, there is no longer any need for "temporary" disks.
14:19 merdoc Dr_drache: seems like mirantis can work with the qcow2 format as well. they convert it to raw automatically when you use ceph
14:19 enik_ Dr_drache: it depends, I have a large webapp cloud, where almost all instances work only on root(from image)+ephemeral
14:20 Dr_drache merdoc, I don't know how they do it now, I haven't watched the data, but in 4.1, if you used qcow2, it would have to double copy to create a new instance from it.
14:22 Dr_drache enik_ ; I've never gotten into large webapps like that, only direct VM's. the complexity never was ecisped by the "benifits"
14:22 enik_ "ecisped"?
14:22 emagana joined #fuel
14:23 merdoc Dr_drache: i found a description of that magic here - http://ceph.com/cloud/ceph-and-mirantis-openstack/ - section 'THINGS WE’VE DONE', part 6
14:23 Dr_drache enik_ : yea, I can't spell... Eclipsed - overtaken by.
14:23 enik_ Dr_drache: yes for most people root on shared storage is the best and simplest option
14:26 Dr_drache enik_ it's more that the complexities didn't gain you anything. the loads were the same, the storage use was the same. it fell into more work for the same amount of payoff.
14:26 Dr_drache merdoc, right, what I'm saying is, it takes 2x as much storage is all.
14:27 Dr_drache or did in january.
14:29 Dr_drache but we are now on firefly, not dumpling, sooo hard to say what happens now
14:34 merdoc kaliya: ok, so now i will try to create 10-20 instances with 4 vcpu each, with cpu ratio 16 and ram ratio 1.5. let's see what happens
14:36 emagana joined #fuel
14:36 mpetason joined #fuel
14:37 Dr_drache merdoc - go go go sir!
14:38 kaliya merdoc: depending on your scheduler, if there aren't enough resources, nova will allocate as many instances as possible
14:42 merdoc kaliya: yes. it's working! http://i.imgur.com/Jtk236o.png
14:43 kaliya merdoc: if you're subscribed to launchpad please add a +1 to the horizon bug, let's push the guys to fix it :)
14:43 merdoc with cpu 16:1 I may use 192 vcpu on my current metal. and with ram 1.5:1 - 32gb instead of 22gb
14:44 kaliya merdoc: did you configure with ratio 8 or 16 now?
14:44 merdoc kaliya: no, but I can try to register for that (%
14:44 merdoc 16:1 now
14:45 merdoc there's also a bug with the storage calculation
14:59 adanin joined #fuel
15:12 tuvenen__ joined #fuel
15:15 ArminderS joined #fuel
15:16 tuvenen_ joined #fuel
15:20 Longgeek_ joined #fuel
15:29 jobewan joined #fuel
15:29 blahRus joined #fuel
15:58 harybahh joined #fuel
16:00 AKirilochkin joined #fuel
16:03 angdraug joined #fuel
16:10 rmoe joined #fuel
16:31 pasquier-s joined #fuel
16:36 stamak joined #fuel
16:39 adanin joined #fuel
17:08 AKirilochkin joined #fuel
17:13 mattgriffin joined #fuel
17:30 AKirilochkin joined #fuel
17:33 sressot joined #fuel
17:37 AKirilochkin joined #fuel
17:42 e0ne joined #fuel
18:03 jpf joined #fuel
18:03 AKirilochkin joined #fuel
18:06 phreak__ will there be a (micro)update for fuel regarding the bash exploit?
18:06 phreak__ [root@fuel ~]# env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
18:06 phreak__ vulnerable
18:06 phreak__ it's firewalled but..
18:07 Dr_drache I doubt it. it will be fixed of course, but I highly doubt that an update for that alone is required
18:08 phreak__ i can foresee that for some people it will be .. severe?
18:09 phreak__ not jumping on this, it just looks like a 'lack of security'
18:10 mpetason There will be guidelines on what to do to update, which packages to install.
18:11 mpetason It is important so if anything it could be a patch if the packages are not enough.
18:12 phreak__ true, i always wondered why fuel uses an upgrade script instead of a repo
18:12 stamak joined #fuel
18:15 mpetason phreak__: https://bugs.launchpad.net/mos/+bug/1373965
18:16 phreak__ mpetason: thanks, didn't just read that one ;)
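[ed: for the record, the Shellshock remediation was a bash package update; on the CentOS-based Fuel master that is roughly:]

    yum clean all && yum update bash
    # re-run the probe; a patched bash prints only "this is a test"
    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"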
18:17 emagana joined #fuel
18:32 Dr_drache I find the over-reaction to bugs hilarious.
18:33 HeOS joined #fuel
18:40 mpetason So there is over-reaction and then there are security concerns for large companies. While we aren’t that crazy and reacting in an odd way, customers will be a lot more invested in their data and will definitely “over-react”
18:41 mpetason There are real concerns about being targeted, and if it’s a highly publicized bug of course they are going to want a fix now, not later.
18:44 Dr_drache but it's not like this bug was just found today.
18:46 Dr_drache in other news...
18:46 Dr_drache my openstack is broked.
18:46 Dr_drache lol
18:46 Dr_drache File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 108, in schedule_run_instance raise exception.NoValidHost(reason="")
18:47 emagana joined #fuel
18:48 Dr_drache http://i.imgur.com/2QK4zmo.png
18:51 mpetason Are the computes not reporting back enough available resources for you to launch an instance with the specific image you are using? Such as 10gb free on the compute node and your image requires 20gb?
18:51 Dr_drache mpetason, wish it was that simple
18:52 mpetason Did you modify your scheduling filters?
18:52 Dr_drache but I have 3TB of ceph and 48 real cores.
18:52 Dr_drache yes
18:52 Dr_drache I changed ram to 1.5
18:52 mpetason Well I mean in the scheduler. You can change filters to place images only on specific hosts.
18:53 mpetason Did you restart nova-compute after the over-sub update?
18:53 Dr_drache I didn't touch that at all, just changed the allocation before deployment, and then attempted to deploy an instance from a downloaded qcow2.
18:54 Dr_drache I did not restart, I made the change on fuel master before the cluster deployment.
18:54 mpetason Gotcha, check the nova-all.log on the compute nodes to see if they are checking in and updating their resources.
18:55 Dr_drache they are
18:56 Dr_drache and cirrOS seems to create bigger VMs, but that damn ubuntu image, not so much
18:56 mpetason Did you setup Ceph for Ephemeral before deployment?
18:56 Dr_drache sure did
18:57 Dr_drache http://i.imgur.com/cXECU8O.png
18:57 mpetason Is Nova reporting back the correct free disk space?
18:58 mpetason There was a bug where Nova didn’t report back the correct amount of free disk space due to not using Ceph but still checking whichever partition /var/lib/nova/instances was on
18:58 mpetason for 4.1.1/5.0
18:58 Dr_drache well
18:58 Dr_drache that's what's funny
18:59 Dr_drache it's reporting like 5.9GB
18:59 Dr_drache on all the nodes.
18:59 mpetason Which version are you running off of?
18:59 Dr_drache but CirrOS was able to claim 20GB
18:59 Dr_drache 5.1
18:59 mpetason Of Fuel
18:59 mpetason ok
19:00 Dr_drache http://i.imgur.com/If0Q0z4.png
19:00 mpetason does nova service-list show computes as :) instead of XX
19:00 Dr_drache let me get into a shell
19:02 Dr_drache they show as up
19:02 mpetason Ok, is it a custom Ubuntu image or was it downloaded form the ubuntu cloud download page?
19:03 stamak joined #fuel
19:03 mpetason Your resources are reporting back fine then. You said 5.9GB but you meant 5.9 TB, which would assume 1.5 or so per HD
19:04 Dr_drache no, i didn't mean 5.9TB
19:04 Dr_drache but
19:04 Dr_drache that's what it is
19:04 Dr_drache lol
19:04 mpetason If it is a custom ubuntu image and you are launching with a flavor that is too small, there might not be enough space in the flavor
19:05 Dr_drache it was from a "validate your openstack" post
19:05 mpetason In my experience most of the images end up wanting at least small, not tiny
19:05 mpetason Cirros will take Tiny, but CentOS expected small at least
19:05 Dr_drache To download the image to your Glance Image Service, perform the following steps:
19:05 Dr_drache Login to your OpenStack dashboard.
19:05 Dr_drache In the menu on the left-hand side, click Images & Snapshots.
19:05 Dr_drache You should see a list of images. Click Create Image.
19:05 Dr_drache For Name, enter Ubuntu Server 64-bit.
19:05 Dr_drache For Image Location, enter http://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-amd64-disk1.img.
19:05 Dr_drache For Format, select QCOW2 - QEMU Emulator.
19:05 Dr_drache For Minimum Disk, enter 5GB.
19:05 Dr_drache For Minimum Ram, enter 1024MB.
19:05 Dr_drache Click Create Image.
19:05 Dr_drache shit spam.
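[ed: the pasted dashboard steps as a hedged CLI equivalent, using the glance v1 client of that era:]

    glance image-create --name "Ubuntu Server 64-bit" \
      --disk-format qcow2 --container-format bare --is-public True \
      --min-disk 5 --min-ram 1024 \
      --copy-from http://cloud-images.ubuntu.com/lucid/current/lucid-server-cloudimg-amd64-disk1.img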
19:06 mpetason Yeah, so if you launch with a bigger flavor does it still error out?
19:06 Dr_drache that was with a small. will try with a med here in a second
19:07 Dr_drache just trying to validate everything like 5x before I have my meeting.
19:07 mpetason Launch it and check nova service-list to see if any of the computes are reporting as down.
19:07 mpetason There may even be more information in nova-all on one of the controllers regarding the scheduling error.
19:08 Dr_drache is there another version of that command?
19:10 mpetason nova-manage service-list?
19:10 Dr_drache nova-all
19:10 mpetason I think nova-manage service-list is going to be deprecated
19:10 mpetason ohhh
19:10 mpetason nova-all.log
19:10 Dr_drache \ahh, ok
19:10 Dr_drache thought it was a command, and my controller was all like "no sir"
19:11 mpetason it is in /var/log/nova-all.log - but you need to find it on the controller that actually took over and did the scheduling
19:11 Dr_drache lucky me, only one crappy controller right now
19:11 mpetason just grep {uuid-of-instance} /var/log/nova-all.log
19:11 mpetason see if there is anything relevant
19:11 adanin joined #fuel
19:13 Dr_drache there isn't
19:14 Dr_drache just a single line
19:14 Dr_drache lol
19:16 mpetason So you can dig deeper to see what the hypervisors have available, see if the hypervisors think they don’t have resources.
19:17 mpetason Did the ubuntu image work with med?
19:17 Dr_drache no
19:17 Dr_drache it didn't
19:17 mpetason http://docs.openstack.org/cli-reference/content/novaclient_commands.html — starting with hypervisor options
19:17 Dr_drache just deleted it and going to try a known good img. maybe a direct download didn't work
19:17 mpetason nova hypervisor-list
19:18 mpetason nova hypervisor-show {id}
19:18 emagana joined #fuel
19:18 mpetason that should show available resources reported back by the compute nodes
19:22 Dr_drache clean bill of health as far as I can tell, just curious why it shows QEMU when I selected KVM (i know they are the same package)
19:22 mpetason As long as you setup the environment with KVM you are fine. I believe the qemu being reported back is incorrect/a bug with nova
19:23 mpetason As long as you are on physical hardware
19:23 Dr_drache of course.
19:23 Dr_drache just curious is all.
19:23 Dr_drache ok, I think it was an issue with that image
19:23 Dr_drache nope
19:23 Dr_drache more info forthcoming
19:24 Dr_drache Failed to launch instance "u-test": Please try again later [Error: Block Device Mapping is Invalid.].
19:25 mpetason Ok, so how are you launching the image? Are you choosing “launch from image” or are you trying to create a volume with the image?
19:26 Dr_drache "boot from image (creates new volume)
19:27 Dr_drache http://paste.openstack.org/show/115391/
19:29 mpetason Ok so try it without doing that, Just from image without new volume
19:30 mpetason If you are booting from volume you might need to look at quotas for the tenant you are using. Verify that it has enough quota left to create a volume as large as the image you are creating.
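[ed: a hedged way to check that from the CLI with the era's clients:]

    keystone tenant-list             # find the tenant id
    cinder quota-show <tenant-id>    # gigabytes / volumes / snapshots limits
    cinder list                      # compare against volumes already allocated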
19:30 Dr_drache http://paste.openstack.org/show/115393/
19:31 mpetason Weird - InvalidVolume: Invalid volume: status must be 'available' - It is like it is trying to create the volume then put the image into it, but the volume isn’t reporting back available. It is like a race condition.
19:31 mpetason Why are you launching with Volumes?
19:31 mpetason Is there a use case where you need it?
19:32 mpetason If you are using Ceph as a backend for Nova then you can just launch from image, without creating a new volume. It doesn’t get deleted when you shut it down or anything. Plus you are limiting the different apis you have to talk to: you are going through Nova instead of going through nova + cinder
19:33 Dr_drache well, no. there isn't really. I guess
19:33 Dr_drache booting straight from the image is the same error
19:34 mpetason Same error regarding invalid block mapping? Did you modify anything regarding Ceph before you deployed?
19:34 mpetason I haven’t seen any bugs about invalid block mapping so far with 5.1. It doesn’t mean there isn’t one, I just haven’t seen it after a few deployments.
19:35 Dr_drache only thing I modified was I shrunk the virtual disks to 5,120
19:35 Dr_drache and use the size left over for more ceph
19:35 mpetason That is fine as long as it is correctly configured on the backend for ceph. It should be. You can view the configuration on the compute nodes in /etc/nova/nova.conf - look for the section [libvirt] and verify that the settings below it say something like image_type = rbd
19:37 Dr_drache I don't see a [libvirt] section
19:38 Dr_drache or image_type
19:39 Dr_drache or rbd
19:41 mpetason grep rbd /etc/nova/nova.conf
19:41 mpetason should be in there if ceph for nova was selected before deployment
19:41 Dr_drache root@node-3:/etc/nova# grep "libvirt" nova.conf
19:41 Dr_drache libvirt_use_virtio_for_bridges=True
19:41 Dr_drache connection_type=libvirt
19:41 Dr_drache root@node-3:/etc/nova# grep "rdb" nova.conf
19:41 Dr_drache root@node-3:/etc/nova# grep "image" nova.conf
19:41 Dr_drache image_service=nova.image.glance.GlanceImageService
19:41 Dr_drache use_cow_images=True
19:41 Dr_drache root@node-3:/etc/nova# grep rbd /etc/nova/nova.conf
19:41 Dr_drache root@node-3:/etc/nova#
19:41 mpetason Sorry, on a compute node.
19:42 Dr_drache lol
19:42 Dr_drache k
19:42 Dr_drache root@node-2:~# grep rbd /etc/nova/nova.conf
19:42 Dr_drache libvirt_images_type=rbd
19:42 Dr_drache libvirt_images_rbd_pool=compute
19:42 Dr_drache rbd_user=compute
19:42 Dr_drache rbd_secret_uuid=a5d0dd94-57c4-ae55-ffe0-7e3732a24455
19:42 Dr_drache root@node-2:~#
19:42 Dr_drache much betta
19:44 Dr_drache http://paste.openstack.org/show/115396/
19:44 mpetason Well it looks like it set it up. I’m not quite sure what is causing the issue then. If you launch with just an image - not saving to a volume - it gave you the same error?
19:44 Dr_drache don't know how much this means to you
19:44 Dr_drache yes, same error
19:44 Dr_drache takes longer, but same error
19:44 mpetason Ah, that error is different from the one you posted
19:44 mpetason 'qemu-img: error while reading sector 452608: Input/output error\\n'\n"]
19:45 Dr_drache i JUST saw this on the controller
19:45 mpetason Here’s your test - Try cirros with each flavor type, see if it fails
19:45 mpetason If it doesn’t then get another image from ubuntu
19:45 Dr_drache this is the 2nd ubuntu image BTW.
19:46 mpetason What are you uploading it as? qcow2 or raw?
19:46 Dr_drache qcow2
19:46 mpetason http://uec-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img — uploaded as qcow2 - try this
19:47 Dr_drache http://i.imgur.com/lWO4wDs.png
19:47 mpetason Already tried that one?
19:47 Dr_drache yes sir :P
19:47 mpetason How many servers in total do you have?
19:48 Dr_drache just 3 in this test.
19:48 Dr_drache 2 computes
19:49 mpetason So you have a replication factor of 3, did you install ceph on all three nodes?
19:49 Dr_drache yes sir.
19:50 mpetason Were you able to launch cirros with each flavor type?
19:50 mpetason Your best bet may be to re-deploy with 2 as the replication factor, use the computes for ceph + compute, then use the controller as just a controller.
19:51 mpetason It should work out of the box without these errors. There could have been an issue with the deployment. Although it is odd that Cirros works without issues.
19:51 mpetason How many instances are you running currently
19:51 Dr_drache one second
19:51 mpetason So many questions; such is the troubleshooting of openstack
19:52 Dr_drache I thought 3 was the minimum for actually testing out ceph, and 2 was the default for virtualbox.
19:53 Dr_drache http://i.imgur.com/u2TqZYW.png
19:53 Dr_drache all of them created with: boot from image (creates new volume), for consistency
19:54 mpetason So you are testing it out, but you still have a small environment. You probably don’t want it running on the controller as well. But for just testing sure. I’m not a ceph expert, he’s out right now :P
19:55 mpetason http://cloud.centos.org/centos/7/devel/
19:55 mpetason try a centos image
19:55 mpetason just to make sure
19:55 emagana joined #fuel
19:55 mpetason Upload it to Glance, then - Launch from Image.
19:55 mpetason You could try it with the volume backed setup too
19:56 Dr_drache ok
19:56 Dr_drache give me 10 min.
19:56 Dr_drache slow network
20:05 Dr_drache mpetason
20:05 Dr_drache same error
20:05 mpetason Got me then, Maybe someone else can step in and check it out? Or you could try a re-deploy.
20:06 Dr_drache nova.scheduler.host_manager [req-10cd7c23-882f-45bb-b21e-372f930d1af0 None] Host has more disk space than database expected (5930gb > 5694gb)
20:07 Dr_drache dammit. this sucks
20:07 Dr_drache lol
20:07 Dr_drache had none of these issues on nightlys of 5.1 :P
20:07 adanin joined #fuel
20:11 mpetason With the nightly releases were you modifying the disk partitions to give virtual disks less?
20:11 Dr_drache yes
20:12 Dr_drache I was told in version 4, ceph doesn't use them, so to shrink them.
20:12 Dr_drache oslo.messaging._drivers.impl_rabbit [req-6da826b8-fce3-435c-b135-68c9c99fac20 ] AMQP server on 192.168.0.3:5673 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 30 seconds. <--- does that mean anything?
20:13 Dr_drache and I was expecting not to see them by 5.1 at all.
20:13 kupo24z Dr_drache: what version of fuel deployment
20:13 Dr_drache kupo24z 5.1
20:14 kupo24z You are probably running into the same issue i was, had to use new oslo python package
20:14 kupo24z there is an open bug about it.. 1s
20:16 kupo24z https://bugs.launchpad.net/mos/+bug/1371723
20:16 kupo24z What log is that from, nova-api?
20:17 Dr_drache nova-compute on the controller
20:17 kupo24z nova-compute on the controller?
20:17 kupo24z the nova-compute service shouldn't be running on the controllers
20:18 Dr_drache the log from fuel
20:19 Dr_drache mp
20:19 Dr_drache mpetason
20:19 Dr_drache http://paste.openstack.org/show/115404/
20:20 Dr_drache ImageUnacceptable: Image e5075845-16e1-4268-b941-7acba56c09a9 is unacceptable: Size is 8GB and doesn't fit in a volume of size 5GB.
20:20 mpetason that just means you need to make the volume larger than the image you are using
20:20 mpetason so if the image requires 5GB then your volume needs to be like 10-20 or something
20:20 Dr_drache hmmm
20:20 Dr_drache let me try something
20:20 mpetason So if you launch an image with a volume then the volume should be pretty large
20:21 mpetason as in larger than the image, so normally 10gb +
20:25 Dr_drache ok
20:25 Dr_drache WTF
20:25 Dr_drache that worked
20:26 mpetason Odd
20:26 mpetason Alright, I’m getting some food. good luck with your meeting
20:26 Dr_drache I make "device side (GB)" match the disk size in the flavor
20:26 Dr_drache s/side/size
20:27 mpetason So you usually want to match or make larger, because the image will grow-root
20:27 mpetason and expand into it
20:27 mpetason but it at least needs more than the base image size
20:27 Dr_drache how do I know the base image size?
20:28 teran joined #fuel
20:39 miroslav_ joined #fuel
20:55 harybahh joined #fuel
20:55 mpetason just assume linux > 5gb, and windows probably > 20
20:56 mpetason you probably want to give at least 10gb to linux for / anyways though
20:57 Dr_drache yea.
20:57 Dr_drache thanks sir
21:00 mpetason You are welcome. Glad you didn’t need to redeploy.
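[ed: a precise way to answer the base-image-size question above, since qemu-img is already in the picture:]

    qemu-img info trusty-server-cloudimg-amd64-disk1.img
    # "virtual size" is the base image size; the target volume (or flavor
    # root disk) must be at least that large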
21:14 emagana joined #fuel
22:41 emagana joined #fuel
22:43 emagana joined #fuel
22:55 harybahh joined #fuel
23:48 boris-42 joined #fuel
23:49 mattgriffin joined #fuel
