IRC log for #fuel, 2015-05-07

All times shown according to UTC.

Time Nick Message
00:03 rmoe joined #fuel
01:29 xarses joined #fuel
02:27 xarses joined #fuel
02:43 thumpba joined #fuel
02:43 RodrigoUSA joined #fuel
04:20 mattgriffin joined #fuel
04:36 fedexo joined #fuel
04:41 emagana joined #fuel
05:07 mattgriffin joined #fuel
05:33 Longgeek joined #fuel
05:35 emagana joined #fuel
06:14 saibarspeis joined #fuel
06:14 dklepikov joined #fuel
06:21 sbfox joined #fuel
06:23 LiJiansheng joined #fuel
06:29 sbfox joined #fuel
06:30 stamak joined #fuel
06:37 emagana joined #fuel
06:45 alecv joined #fuel
07:02 stamak joined #fuel
07:04 hyperbaba joined #fuel
07:07 Longgeek joined #fuel
07:13 alecv joined #fuel
07:21 Longgeek joined #fuel
07:30 hyperbaba joined #fuel
07:32 emagana joined #fuel
07:32 homegrown joined #fuel
08:08 hyperbaba joined #fuel
08:20 HeOS joined #fuel
08:22 samuelBartel Hi all
08:22 bogdando joined #fuel
08:22 samuelBartel is there any way to configure logrotate during deployment
08:22 samuelBartel in order to limit the *-all.log files?
08:23 e0ne joined #fuel
08:26 emagana joined #fuel
08:26 kaliya hi samuelBartel, this is not configurable in an easy way; you have to hack some of the puppet modules a bit, or add your own
08:28 samuelBartel kaliya, ok thanks, i will make a fuel plugin in order to automate it
08:29 kaliya yes that's a good idea
08:29 kaliya what do you want to limit? the size?
08:34 samuelBartel yes the size
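
For reference, a minimal sketch of the kind of logrotate stanza such a plugin could drop onto the nodes; the path, size cap, and retention below are illustrative assumptions, not something Fuel ships:

    # /etc/logrotate.d/cap-all-logs -- hypothetical drop-in that rotates the
    # aggregated *-all.log files by size instead of by age
    /var/log/*-all.log {
        size 100M        # rotate as soon as the file exceeds 100 MB
        rotate 5         # keep at most five rotated copies
        compress
        missingok
        notifempty
        copytruncate     # keep writing daemons' open file handles valid
    }
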
09:20 emagana joined #fuel
09:23 Miouge joined #fuel
09:57 ddmitriev joined #fuel
10:08 thumpba joined #fuel
10:14 emagana joined #fuel
10:33 hyperbaba joined #fuel
10:37 Miouge joined #fuel
10:54 monester_laptop joined #fuel
10:58 Miouge joined #fuel
10:58 Longgeek joined #fuel
11:00 alecv joined #fuel
11:03 dbuechler joined #fuel
11:03 dbuechler Hi!
11:03 hyperbaba joined #fuel
11:08 emagana joined #fuel
11:17 dbuechler joined #fuel
11:21 ChUsama joined #fuel
11:39 Ves joined #fuel
11:43 ChUsama joined #fuel
11:56 sc-rm joined #fuel
11:57 sc-rm After some time I get this warning on the controllers in the nova-scheduler log “(node-13, node-13.microting.com) ram:20526 disk:1785856 io_ops:0 instances:2 has not been heard from in a while” and for other nodes also
11:57 LanceHaig joined #fuel
11:58 sc-rm This means I’m unable to start new instances. If I manually restart nova-compute on the compute nodes, then it goes away
12:01 sc-rm kaliya: ^^ is very critical and I have seen it now for about 1 month: with fuel 6.0 running for a while, it begins to produce this warning and an inability to start instances until nova-compute is restarted on the compute nodes.
12:02 emagana joined #fuel
12:11 dklepikov sc-rm: Did you check for time drift on the nodes?
12:11 dmellado joined #fuel
12:13 sc-rm dklepikov: So they drift away in time? I thought that was the whole point of having them all use the fuel master’s ntp server, so they were all in sync
12:13 waterkinfe joined #fuel
12:13 sc-rm dklepikov: So far I see that all nodes are having the same date and time set
12:16 sc-rm dklepikov: In the nova-conductor logs I get “nova.scheduler.driver MessagingTimeout: Timed out waiting for a reply to message ID d36c7e91f020406ab5a4369442db7462” sort of messages
12:18 dklepikov sc-rm: what is in the nova-compute and nova-scheduler logs at the same time?
12:20 sc-rm dklepikov: nova-compute logs are not showing signs of problems. It’s only on the controller nodes I get errors/warnings
12:21 sc-rm dklepikov: But the problem goes away when nova-compute has been restarted
12:22 dklepikov sc-rm: do you use ceph
12:22 sc-rm dklepikov: Yep we do
12:22 dklepikov ceph -v
12:24 LiJiansheng joined #fuel
12:24 dklepikov sc-rm: and 'cat /etc/nova/nova.conf | grep disk_cachemodes'
12:25 dklepikov sc-rm: and can you please try creating an instance via the CLI, using the '--debug' option
12:26 dklepikov sc-rm: nova --debug boot --image ...........  --flavor .......   --nic net-id=...
12:26 sc-rm dklepikov: I never created an instance from the CLI, so I have no clue how to do so
12:27 dklepikov sc-rm: and provide with the output
12:27 sc-rm dklepikov: On compute nodes the grep gives: disk_cachemodes="network=writeback","block=none"
12:27 dklepikov sc-rm: 'ceph -v'
12:28 sc-rm dklepikov: ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
12:30 dklepikov sc-rm: nova --debug boot --image IMAGE_ID --flavor FLAVOR_ID --key_name KEY_NAME --nic net-id=NET_ID
12:30 dklepikov nova flavor-list
12:30 dklepikov nova image-list
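
Filled in, a debug boot attempt might look like the following; every ID and name below is a placeholder to be taken from the list commands above:

    # discover real IDs first
    nova flavor-list
    nova image-list
    neutron net-list
    # then boot with --debug to trace each HTTP and RPC round trip
    nova --debug boot test-vm \
      --image <IMAGE_ID> \
      --flavor <FLAVOR_ID> \
      --key_name <KEY_NAME> \
      --nic net-id=<NET_ID>
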
12:35 sc-rm dklepikov: I get the same timeout when doing it in CLI
12:37 sc-rm dklepikov: I could try to restart the nova-conductor or the rabbitmq instances on the controllers?
12:52 sc-rm dklepikov: To me, it looks like a rabbitmq problem
12:57 emagana joined #fuel
13:21 dklepikov sc-rm: what is in the rabbitmq.log?
13:22 sc-rm dklepikov: One of the controllers rabbitmq will not restart
13:22 dklepikov sc-rm: did you apply this - https://bugs.launchpad.net/fuel/+bug/1396946
13:23 sc-rm dklepikov: A lot of stuff, but so far no errors
13:24 sc-rm dklepikov: nope
13:24 dklepikov sc-rm: 'rabbitmqctl cluster_status' post to http://paste.openstack.org/
13:25 sc-rm dklepikov: http://paste.openstack.org/show/216164/
13:26 dklepikov sc-rm: I see only 2 controllers
13:26 sc-rm dklepikov: But it’s missing node-16, which upon a rabbitmq service restart gives this http://paste.openstack.org/show/216173/ and never gets to a running state
13:26 sc-rm dklepikov: No errors, just silence
13:27 dklepikov sc-rm: How did you restart it?
13:27 dklepikov sc-rm: I mean what command
13:27 sc-rm dklepikov: service rabbitmq-server restart
13:27 dklepikov sc-rm: no
13:29 dklepikov sc-rm: please show 'pcs status'
13:29 sc-rm dklepikov: http://paste.openstack.org/show/216175/
13:33 sc-rm dklepikov: last-rc-change=Sat May  2 00:19:14 2015 <<< that is bad I guess
13:37 dklepikov sc-rm: some workaround - stop rabbitmq-server on node-16 (service rabbitmq-server stop)
13:38 sc-rm dklepikov: done
13:39 dklepikov sc-rm: on node-17 'pcs resource disable p_rabbitmq-server' this will stop rabbitmq on all controllers
13:40 dklepikov sc-rm: check 'pcs status' you should see something like http://paste.openstack.org/show/216176/
13:40 dklepikov sc-rm: then node-17 'pcs resource enable p_rabbitmq-server'
13:41 dklepikov sc-rm: this will start rabbitmq-server on all controllers with ocf scripts
13:41 sc-rm dklepikov: Ah
13:42 dklepikov sc-rm: please show 'pcs status'
13:42 sc-rm dklepikov: http://paste.openstack.org/show/216177/
13:42 dklepikov sc-rm: and 'ps ax | grep rabbit' from node-16, and  'rabbitmqctl cluster_status'
13:43 sc-rm dklepikov: Now node-16 is back in
13:43 dklepikov sc-rm: )
13:43 ChUsama joined #fuel
13:44 sc-rm dklepikov: So some of the services can be controlled through the pcs command
13:44 sc-rm dklepikov: Thanks for helping out here :-)
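
To recap the workaround in one place (run as root; p_rabbitmq-server is the Pacemaker resource name used in this Fuel 6.0 HA deployment):

    # on the stuck node (node-16 here):
    service rabbitmq-server stop
    # then on any one controller (node-17 in this log):
    pcs resource disable p_rabbitmq-server   # stops rabbitmq on ALL controllers
    pcs status                               # wait until every rabbitmq clone reports Stopped
    pcs resource enable p_rabbitmq-server    # OCF scripts restart and re-cluster it
    rabbitmqctl cluster_status               # confirm all controllers rejoined
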
13:45 dklepikov sc-rm: Also clean up the old errors:
13:45 dklepikov sc-rm: 'pcs resource cleanup p_neutron-plugin-openvswitch-agent'
13:46 dklepikov sc-rm: 'pcs resource cleanup ping_vip__public'
13:46 dklepikov sc-rm: and please show 'pcs status'
13:47 sc-rm dklepikov: http://paste.openstack.org/show/216180/
13:47 martineg_ left #fuel
13:47 martineg_ joined #fuel
13:48 dklepikov sc-rm: looks good. Please create an instance
13:50 sc-rm dklepikov: I already did that and it boots with no problem now
13:51 dklepikov sc-rm: ok
13:51 sc-rm dklepikov: Thanks again for fixing this problem :-)
13:51 dklepikov sc-rm: Have a nice day.
13:52 samuelBartel joined #fuel
13:52 sc-rm dklepikov: Thanks and you too
13:56 blahRus joined #fuel
14:23 jobewan joined #fuel
14:26 jobewan joined #fuel
14:29 jobewan joined #fuel
14:38 hyperbaba joined #fuel
14:41 emagana joined #fuel
14:42 rmoe joined #fuel
14:57 daniel3_ joined #fuel
14:59 mattgriffin joined #fuel
15:00 ChUsama joined #fuel
15:09 jobewan joined #fuel
15:14 teran joined #fuel
15:15 CheKoLyN joined #fuel
15:17 jobewan joined #fuel
15:23 jobewan joined #fuel
15:32 jobewan joined #fuel
15:33 jobewan joined #fuel
15:39 kozhukalov joined #fuel
15:41 xarses joined #fuel
15:57 angdraug joined #fuel
16:15 stamak joined #fuel
16:42 emagana joined #fuel
17:00 daniel3_ joined #fuel
17:01 emagana joined #fuel
17:06 daniel3_ joined #fuel
17:13 jobewan joined #fuel
17:16 ChUsama joined #fuel
17:17 jobewan joined #fuel
17:17 thumpba joined #fuel
17:20 jobewan joined #fuel
17:21 thumpba joined #fuel
17:22 thumpba_ joined #fuel
17:23 jobewan joined #fuel
17:26 jobewan joined #fuel
17:27 jobewan joined #fuel
17:27 jobewan joined #fuel
17:43 emagana joined #fuel
17:47 championofcyrodi Hi everybody.  So after almost a year, we are FINALLY running stable-ish openstack 24/7 with our developers as end-users.  We've started to automate deployments with cloud-init and puppet so that we can create the whole subnet, router, etc... and 'n' number of nodes, bootstrap everything and start installing our actual 'cloud' software...
17:47 championofcyrodi however, there are rabbitmq timeouts and metadata service issues: sometimes all of the nodes boot and everything is beautiful...
17:48 championofcyrodi other times the master node can't get to the metadata service: 1 node boots and gets an IP, while 2 other nodes ERROR out launching instances with 'timeout' errors.
17:49 championofcyrodi Juno was VERY fast when we first started using it, but it seems to have become less responsive as the weeks roll by.
17:52 championofcyrodi in a nutshell, the most common issue I have seen throughout the entire stack is,
17:52 championofcyrodi "MessagingTimeout: Timed out waiting for a reply to message ID ..."
17:54 angdraug I'll start with a dumb question: you're cleaning out keystone admin tokens on a regular basis, right?
17:54 angdraug and otherwise keep an eye on mysql db size...
17:54 championofcyrodi I am not.
17:55 championofcyrodi e.g. of metadata call failing: http://paste.openstack.org/show/216335/
17:56 championofcyrodi I'm learning as I go and have been more reactive than proactive because of poor planning and management.  However that is not really a technical issue I can solve.
17:56 championofcyrodi I'll look in to cleaning keystone admin tokens now.
17:56 angdraug https://docs.mirantis.com/openstack/fuel/fuel-6.0/operations.html#keystone-token-cleanup
17:58 championofcyrodi this could be exactly why things get 'slower' over the months.  same thing happened w/ our icehouse instance.
17:59 angdraug if you're not cleaning them, it will *definitely* make things slower
17:59 angdraug lookups in tokens table are a part of every single API & RPC call between OpenStack services
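
A minimal sketch of such a cleanup for the SQL token backend; the linked Mirantis page is the authoritative procedure, and the cron schedule here is only illustrative:

    # /etc/cron.d/keystone-token-flush (hypothetical) -- hourly flush of
    # expired tokens via the standard upstream tool
    0 * * * * keystone /usr/bin/keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1
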
17:59 jobewan joined #fuel
18:00 angdraug OpenStack itself has a policy to never delete anything, all records are only marked deleted
18:00 championofcyrodi that makes sense. But I don't understand why this is not addressed as part of the standard keystone configuration workflow.
18:00 championofcyrodi you just answered my question. (policy based decision)
18:00 angdraug yup
18:00 championofcyrodi I guess i need this Percona Toolkit?
18:00 angdraug it's shipped with fuel
18:02 championofcyrodi so i assume i would run this from the fuel server rather than one of my HA controllers? Doesn't seem to be on the PATH
18:05 emagana joined #fuel
18:05 championofcyrodi /var/lib/mysql on the controller is 4 GB
18:05 championofcyrodi with /keystone at 2.2 MB
18:06 e0ne joined #fuel
18:25 championofcyrodi perhaps having a 'keystone_tokens_flushed' table in mysql to 'move' the expired tokens to would prevent the active token table from growing so large. then nothing would be deleted.
18:25 championofcyrodi just a thought
18:27 angdraug you really really don't need expired keystone tokens
18:27 angdraug they're not good for anything
18:28 angdraug you'll need to install percona-toolkit on a controller and then run it there
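
A hedged example of what a Percona Toolkit purge of expired tokens could look like; the DSN, credentials, and batch size are assumptions, so test against a non-production copy first:

    pt-archiver --source h=localhost,u=keystone,p=<PASSWORD>,D=keystone,t=token \
      --purge --where "expires < NOW()" \
      --limit 1000 --commit-each    # delete in small batches to avoid long locks
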
18:39 championofcyrodi that makes sense.
18:39 championofcyrodi thanks angdraug
18:40 angdraug you're welcome
18:52 jobewan joined #fuel
18:53 championofcyrodi well, it seems like it just worked by chance.  a second attempt to re-launch the instances timed out again.  looking in the keystone database, the token table has 0 records :\
18:55 emagana joined #fuel
18:57 steved joined #fuel
18:59 emagana joined #fuel
18:59 steved hello, I'm attempting to pxe boot nodes using a dhcp relay. I've modified the /etc/fuel/astute.yaml file to define my remote network, but the line "ipaddress:" fills in both the router and PXE boot server addresses in cobbler's dnsmasq config. How can I set the router option (remote network router IP) separately so it doesn't get overwritten on fuel reboot?
18:59 steved (fuel 6)
19:01 HeOS joined #fuel
19:03 championofcyrodi angdraug: looks like my install doesn't use mysql for tokens...
19:03 championofcyrodi driver=keystone.token.backends.memcache.Token
19:03 championofcyrodi but memcache instead
19:04 championofcyrodi http://www.sebastien-han.fr/blog/2012/12/12/cleanup-keystone-tokens/
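
One quick way to confirm which backend an install is actually using; the output shown is the memcache driver championofcyrodi reported:

    # which token backend is keystone using?
    grep driver /etc/keystone/keystone.conf
    # driver = keystone.token.backends.memcache.Token
    # -> tokens live in memcached, so the SQL token table stays empty
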
19:06 angdraug if it's not keystone tokens it must be something else, 4gb worth of data will make mysql sluggish
19:07 angdraug if it were only amqp rpc errors and no http api errors, I would also be suspicious of rabbitmq
19:08 championofcyrodi i keep thinking it's rabbitmq dropping messages.  I see handshake_timeout errors almost as often as i see connection requests.  Sometimes even handshake_timeout from 127.0.0.1->127.0.0.1
19:08 angdraug 6.0?
19:08 championofcyrodi yes
19:09 angdraug xarses: do you know anything about dhcprelay problem steved is talking about?
19:10 angdraug rmoe: were there any rabbitmq related bugs in 6.x that could explain championofcyrodi's symptoms?
19:11 wayneeseguin joined #fuel
19:12 championofcyrodi basically i'm getting timeouts every 2 minutes or less, constantly.
19:13 championofcyrodi going to try to catch the process using the ephemeral port when the 'accepting AMQP connection' message appears.
19:15 rmoe the only thing I can think of is all of the work done on the OCF script for rabbit
19:15 rmoe have you lost a controller?
19:15 championofcyrodi no
19:16 championofcyrodi all 3 controllers are the same 3 from the initial HA deployment via fuel UI
19:16 jobewan joined #fuel
19:17 rmoe do the timeouts ever stop or do you see MessagingTimeout every 2 minutes 24/7?
19:17 ChUsama joined #fuel
19:18 championofcyrodi every 2 minutes 24/7
19:19 championofcyrodi the ephemeral ports making the requests seem to belong to a 'python' process... looking deeper, in this one instance it seems to be the neutron-metadata-agent making the request to connect to rabbitmq
19:19 championofcyrodi going to look at a few others and see if it's just the metadata agent or others.
19:20 championofcyrodi hard to catch the port within the 10 second timeout, opening another terminal...
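
Two standard ways to map the ephemeral port in rabbit's 'accepting AMQP connection' log line back to a process; port 54321 below is a placeholder:

    ss -tnp | grep ':54321'              # shows pid/program owning the socket
    lsof -iTCP:54321 -sTCP:ESTABLISHED   # alternative where lsof is installed
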
19:24 championofcyrodi okay, that time it looks like it was 'ceilometer-agent-notification'
19:24 championofcyrodi hold on... i'll paste bin
19:26 championofcyrodi http://pastebin.com/4qj3ku5s
19:27 championofcyrodi so the one happening every 2 minutes seems to be the "/usr/bin/python /usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log"
19:28 championofcyrodi but some of the others are not always from ceilometer.  e.g. the localhost->localhost one was the metadata agent.  so this leads me to think that it's not specific to any one service.
19:29 championofcyrodi maybe it's ceilometer trying to connect so much?
19:29 rmoe do you see errors in any of the logs that look like "Failed to publish message to topic"?
19:29 championofcyrodi i have seen that in some of the nova logs. let me double check...
19:30 championofcyrodi fyi, when we installed this HA, we DID NOT check ceilometer.  however the ceilometer agent notification log has: http://pastebin.com/2yqCvQMq
19:31 championofcyrodi so i believe the constant requests to connect to rabbitmq are failing because it is not configured?
19:31 mwhahaha https://bugs.launchpad.net/mos/+bug/1393505 ?
19:31 championofcyrodi looking for the message publish failures
19:33 rmoe also https://bugs.launchpad.net/mos/+bug/1410797 looks like something you might be running into
19:33 championofcyrodi yup
19:33 championofcyrodi lots of errors in neutron logs with broken pipe and failures to publish message to topics
19:35 championofcyrodi <164>May  7 19:01:29 node-54 neutron-server 2015-05-07 19:01:29.899 5968 WARNING keystonemiddleware.auth_token [-] Authorization failed for token
19:35 championofcyrodi (neutron-all.log)
19:35 championofcyrodi <163>May  7 18:48:06 node-54 neutron-server 2015-05-07 18:48:06.753 5997 ERROR oslo.messaging._drivers.impl_rabbit [-] Failed to publish message to topic 'reply_ede93cf05fbe48c580304472a022a590': [Errno 32] Broken pipe
19:36 championofcyrodi let me restart that ceilometer process and see if that stops the reconnects...
19:38 championofcyrodi well that seems to have helped
19:39 championofcyrodi the ceilometer agent-notification.log is showing publish samples via DEBUG statements.  and I'm not seeing ERROR every minute...
19:39 rmoe with ceilometer not trying to connect constantly, has the neutron situation improved?
19:42 championofcyrodi hmm.. maybe?
19:44 championofcyrodi i'm still seeing some handshake_timeout on node-46 localhost->localhost for the ceilometer process...
19:44 championofcyrodi but only localhost->localhost
19:45 rmoe is cloud-init still flaky?
19:45 rmoe and you're seeing handshake_timeouts on the localhost->localhost side?
19:46 championofcyrodi yes for node-46
19:47 championofcyrodi but the neutron log isn't flooding w/ the token auth failures anymore
19:47 championofcyrodi i'll try to boot my instances with cloud-init and see if there is much improvement.
19:49 championofcyrodi okay, it’s attempting to boot instances and whatnot.  I’m seeing 'accepting' over and over, but not 'error' yet...
19:50 championofcyrodi okay... cloud-init is attempting to hit metadata services...
19:50 championofcyrodi it looks real clean so far though...
19:52 championofcyrodi utils.py Caught exception reading metadata, followed by a bunch of errors now...
19:53 championofcyrodi the instance seems to have picked up the metadata anyway...
19:54 championofcyrodi i launched 3 nodes. 2 built and are up.  1 errored and is "Timed out waiting for a reply to message ID 5ff2b9ce665a4b038258f81a96907c41"
19:55 championofcyrodi it's almost as if as soon as cloud-init tries to hit the metadata service... I get a flurry of heartbeat_timeout
19:55 championofcyrodi (not handshake_timeout)
19:56 championofcyrodi well.. there was a delay for about a minute or two before the flurry of heartbeat_timeout error messages from neutron processes.
19:57 angdraug joined #fuel
19:58 championofcyrodi i did a pcs disable/enable of the neutron metadata service earlier today, since yesterday the rabbitmq master was switched and restarted...
19:58 championofcyrodi well thanks for all your help. I'm going to keep digging and looking into logs and hopefully will find out what is going on.  i feel like maybe i should just bounce each controller.
20:08 teran joined #fuel
20:08 championofcyrodi is there a proper way to reboot controllers? or just a 'sudo reboot' ?
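
The question goes unanswered in the log, but one common approach on Pacemaker-managed controllers (a sketch, not Fuel-specific guidance) is to drain the node first:

    pcs cluster standby node-X     # move Pacemaker resources off the node
    reboot                         # reboot once 'pcs status' shows it drained
    pcs cluster unstandby node-X   # let resources return afterwards
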
20:17 jobewan joined #fuel
20:19 xarses steved: still around?
20:20 steved yep
20:21 xarses so you added the dnsmasq config by hand and it's not being kept across reboots?
20:21 steved right, then I modified the yaml config on the fuel master, and it's pushing the field 'ipaddress' to both the router IP and the PXE boot server
20:22 steved on the cobbler instance
20:23 steved which means my nodes get the correct dhcp address, but have no route to get to the PXE server
20:24 steved and as a note, there isn't a dhcp gateway field in the installer config menu either
20:25 jobewan joined #fuel
20:31 jobewan joined #fuel
20:31 xdeller joined #fuel
20:32 emagana joined #fuel
20:32 xarses can you paste a copy of your dnsmasq template?
20:37 steved http://paste.openstack.org/show/216456/
20:41 steved and the astute.yaml file admin section http://paste.openstack.org/show/216468/
20:41 teran_ joined #fuel
20:52 xarses steved: you should just need to change http://paste.openstack.org/show/216456/ line 33 to the gateway on 10.0.230.x
20:53 xarses in the template
20:53 xarses not the generated file
20:53 xarses oh, this looks like it is the template still
20:53 xarses then we will also need to touch the puppet file that creates this and muck with it slightly so it will rebuild this container for you correctly
20:55 xarses good news is that we have a dhcp_gateway in the upcoming 6.1 release so it should not be as much of a problem for you down the road
20:56 steved when does 6.1 go GA?
20:56 xarses should be avail in the first week of june
20:56 steved built on centos 7?
20:56 jobewan joined #fuel
20:57 xarses no, we weren't able to spend time on CentOS 7
20:57 steved :( -- so I see dhcp-option=net:internal,option:router,<%= @dhcp_gateway %> in dnsmasq.template.erb
20:58 jobewan joined #fuel
20:59 steved arg... I have a meeting in 2 minutes, can you list out what I'll need to change to push the right configs with puppet? I need to run
20:59 xarses it looks like you just need to add 'dhcp_gateway' to the fuel yaml
21:00 xarses I'll provide some links / notes
21:00 jobewan joined #fuel
21:00 xarses doh, this is 6.1
21:00 xarses gotta look back at 6.0
21:01 teran joined #fuel
21:07 xarses steved: looks like you should just be able to take the master copy of https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/nailgun/examples/cobbler-only.pp and replace /etc/puppet/modules/nailgun/examples/cobbler-only.pp with it and add 'dhcp_gateway' key in the yaml you noted
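
An illustrative fragment of the astute.yaml admin section with the new key; all addresses are placeholders, and dhcp_gateway is the only addition, as read by the patched cobbler-only.pp:

    # /etc/fuel/astute.yaml (fragment, hypothetical values)
    ADMIN_NETWORK:
      interface: eth0
      ipaddress: 10.20.0.2
      netmask: 255.255.255.0
      dhcp_gateway: 10.0.230.1   # router handed to PXE clients on the relayed subnet
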
21:12 jobewan joined #fuel
21:14 jobewan joined #fuel
21:15 jobewan joined #fuel
21:16 jobewan joined #fuel
21:21 jobewan joined #fuel
21:23 jobewan joined #fuel
21:27 jobewan joined #fuel
21:29 jobewan joined #fuel
21:39 jobewan joined #fuel
21:42 jobewan joined #fuel
21:45 jobewan joined #fuel
21:51 jobewan_ joined #fuel
21:53 jobewan joined #fuel
21:55 jobewan joined #fuel
21:56 jobewan joined #fuel
22:04 jobewan joined #fuel
22:05 steved thanks xarses that worked
22:05 xarses yay
22:05 xarses good to hear
22:07 blahRus joined #fuel
22:16 jobewan joined #fuel
22:20 jobewan joined #fuel
22:21 jobewan joined #fuel
22:24 jobewan joined #fuel
22:25 jobewan joined #fuel
22:31 jobewan joined #fuel
22:37 dbuechler joined #fuel
22:37 dbuechler Hi!
22:38 dbuechler Anybody active at the moment?
22:39 angdraug there may be, but they'll be hiding until you ask your question )
22:40 jobewan joined #fuel
22:43 dbuechler LOL!  Ok.  I can work with that. ;-)
22:46 dbuechler I currently have a deployed environment - 6 nodes (Juno on CentOS 6.5) HA (3 controllers, 2 ceph storage, 1 compute).  Install ran smoothly, but when I launch a CentOS 7 instance, cloud-init fails "fetching metadata" and I can't SSH into the image.
22:47 dbuechler I can ping it all day long, so I know it's not a networking fault.  Cirros test image launches fine and can curl the url, so metadata service appears to be functioning...
22:47 dbuechler I'm at a loss to explain it.
22:53 dbuechler Oh.  Oops.  Network Topology is Neutron with GRE.  Forgot that little detail.
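
A couple of checks that can narrow this down (a sketch; 169.254.169.254 is the standard metadata address, and the log path is the usual neutron default):

    # from inside a failing instance:
    curl -v http://169.254.169.254/latest/meta-data/
    # on a controller, check the metadata agent's health and recent errors:
    pcs status | grep -i metadata
    tail -n 50 /var/log/neutron/metadata-agent.log | grep -i error
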
23:18 rmoe joined #fuel
23:25 angdraug ping by itself doesn't prove the network is ok; it could be a blocked port or a failing dns lookup
23:25 angdraug do you know what that image's cloud-init is trying to do?
23:26 dbuechler Sadly, I don't.  It's a stock image from cloud.centos.org.  I assume it's supposed to be loading SSH keys...
23:27 dbuechler I *just* launched a CentOS 6 image to see what happens.
23:29 dbuechler Log looks good.  It imported keys, which is a lot better than the CentOS 7 image did.
23:31 dbuechler And I can login to CentOS 6 image.
23:31 dbuechler Oddly enough, the CentOS 6 image is a QCOW2 and I was under the impression that Ceph hated anything that wasn't RAW...
23:42 mattgriffin joined #fuel
23:57 dbuechler I just tried a QCOW2 image of CentOS 7 and it's having the same problem.  I'm guessing it's either a problem in CentOS 7 or in the image builds they're doing.
