
IRC log for #fuel, 2015-02-17


All times shown according to UTC.

Time Nick Message
00:27 Longgeek joined #fuel
01:28 Longgeek joined #fuel
01:30 Longgeek joined #fuel
01:46 codybum joined #fuel
02:01 tobiash joined #fuel
02:49 ilbot3 joined #fuel
02:49 Topic for #fuel is now Fuel 5.1.1 (Icehouse) and Fuel 6.0 (Juno) https://software.mirantis.com | Fuel for Openstack: https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
02:51 mattgriffin joined #fuel
03:07 MarkDude joined #fuel
03:51 emagana joined #fuel
04:29 dhblaz joined #fuel
04:37 emagana_ joined #fuel
05:02 neophy joined #fuel
05:43 MarkDude joined #fuel
07:36 sambork joined #fuel
07:37 ARaza joined #fuel
07:40 ARaza Using Mirantis Fuel I have deployed OpenStack. But now I need to make Fuel itself HA: if the Fuel master goes down it affects the whole OpenStack deployment. My idea is to deploy two Fuel masters and sync them with a single DB (just a suggestion). So my question is: how can we achieve HA for a deployed Fuel master?
07:54 dklepikov joined #fuel
08:12 kaliya joined #fuel
08:34 pal_bth joined #fuel
08:59 ofkoz joined #fuel
09:01 azemlyanov joined #fuel
09:05 ofkoz Hi, does anybody know whether I can remove newly committed images from the local docker repository after doing dockerctl backup?
09:11 ofkoz After a few backups free disk space is very low and I can't do another backup. I removed unused images and free space increased, but I don't know whether that's healthy or not .. ;)
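[A dry-run sketch of the cleanup ofkoz describes: dockerctl backup commits images that pile up on the Fuel master. The image IDs below are placeholders; on a real master you would collect them with `docker images -f dangling=true -q`, and clear the DRY_RUN guard only after eyeballing the list.]

```shell
# DRY_RUN=echo prints the command instead of running it; set DRY_RUN= to apply.
DRY_RUN=echo
# Placeholder IDs; on the Fuel master: IMAGES=$(docker images -f dangling=true -q)
IMAGES="abc123 def456"
$DRY_RUN docker rmi $IMAGES
```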
09:17 duculete joined #fuel
09:19 HeOS joined #fuel
09:22 saibarspeis joined #fuel
09:23 sc-rm dklepikov: Tried to restart and reinstall the calamari instance, but now I have trouble making it do the # sudo calamari-ctl initialize
09:24 sc-rm dklepikov: I’m waiting with this until it’s stable and I don’t have to add a ppa, because it seems like the ppa is broken and causing the above command not to work
09:27 sambork joined #fuel
09:29 tzn joined #fuel
09:33 dklepikov sc-rm: Hello. I do not use calamari.
09:34 kaliya sc-rm: we're evaluating to include calamari in Fuel https://blueprints.launchpad.net/fuel/+spec/add-calamari-gui-for-ceph, but at this very moment this is not supported. So maybe better ask in some #ceph chat :)
09:40 stamak joined #fuel
09:42 evgeniyl___ joined #fuel
09:48 e0ne joined #fuel
09:51 sc-rm kaliya: I hope you do at some point in time. It seems to be really nice for what we need in the long run
10:02 t_dmitry joined #fuel
10:06 kaliya sc-rm: yes I like the project. In fact I filed that BP :)
10:12 ddmitriev joined #fuel
10:32 jskarpet joined #fuel
10:33 jskarpet What's the correct way of triggering Puppet manually after Error ?
10:36 neophy_ joined #fuel
10:37 ofkoz jskarpet: puppet apply -d -v /etc/puppet/manifests/site.pp
10:37 jskarpet That doesn't seem to report back to Fuel web GUI when successful?
10:38 e0ne joined #fuel
10:38 ofkoz Hmm, I don't think so
10:39 ofkoz I use this command only for debug or troubleshooting
10:40 jskarpet Puppet shows no errors, but GUI says it's in error state
10:40 jskarpet logs from first Puppet run only shows ERROR in log
10:40 jskarpet Run no. 2 doesn't
10:40 jskarpet but GUI still report error
10:41 ofkoz Hmm you can delete node and redeploy
10:41 jskarpet Error is on all controllers and all storage nodes
10:41 ofkoz So re-deploy all ?:)
10:41 jskarpet Tried that 4-5 times already
10:41 jskarpet same result
10:41 jskarpet First Puppet run fails somehow, but second (manual) works just fine
10:42 jskarpet I want to manually do what the installed does
10:42 jskarpet so it reports correct state back
10:42 jskarpet *installer
10:43 ofkoz The first Puppet run's logs don't show any clue?
10:44 jskarpet no, because the message is somewhat bogus
10:44 jskarpet Error is related to ordering
10:44 jskarpet e.g. a ceph install failure due to the device already being mounted
10:44 kaliya jskarpet: what's the node role?
10:44 neophy joined #fuel
10:44 jskarpet while the device was formatted during install
10:45 jskarpet kaliya; both controller and storage
10:45 kaliya controller+storage?
10:45 jskarpet no
10:45 jskarpet separate nodes
10:45 kaliya can you share on paste, the `puppet -v -d` run?
10:47 jskarpet http://paste.openstack.org/show/175919/
10:50 monester_laptop joined #fuel
10:52 kaliya jskarpet: is this full? I see it cut at row 706
10:55 aarzhanov joined #fuel
11:02 e0ne joined #fuel
11:19 tzn joined #fuel
11:23 bogdando joined #fuel
11:23 neophy joined #fuel
11:25 e0ne joined #fuel
11:36 bogdando joined #fuel
11:42 jskarpet kaliya: No, it was pasted as full - but seems to have been cut off
11:42 jskarpet but there's just more of the same
11:42 jskarpet no warnings, no errors
11:42 jskarpet and a reporting step at the end
11:55 e0ne joined #fuel
12:15 saibarspeis joined #fuel
12:26 sambork joined #fuel
12:35 jskarpet How do I clear out old logs when resetting an environment?
12:45 alecv joined #fuel
13:11 ofkoz exit
13:38 Philipp_ joined #fuel
13:44 ofkoz joined #fuel
13:49 Philipp_ how do I install security updates for the hosts managed by fuel?
13:50 Philipp_ Is fuel only used for the initial installation and everything else is done manually?
13:53 tzn joined #fuel
14:03 codybum joined #fuel
14:09 sc-rm kaliya: Now I tried to upgrade our fuel master from 5.1 to 6.0 but get this error “Cannot upgrade from Fuel 5.1. You can upgrade only from one of next versions: 5.1.1"
14:09 aarefiev joined #fuel
14:10 kaliya sc-rm: upgrade first to 5.1.1, and then to 6.0
14:10 sc-rm kaliya: how do I get the upgrade file from 5.1 to 5.1.1 can’t seem to find it on mirantis.com
14:15 kaliya sc-rm: software.mirantis.com Software -> Prior Releases (on the right)
14:25 sc-rm kaliya: I can’t see the Prior Releases
14:26 sc-rm kaliya: searching with chrome and chrome finds the text, but I cant see it visually
14:26 championofcyrodi MiroslavAnashkin: We can't choose to add additional mongo servers at all.  Should we just let it run on 1 controller? Or should we manually install a second or third mongodb instance?  Curious what would happen if we removed the controller w/ mongo on it later: would we be able to re-install it?
14:27 sc-rm kaliya: Ah, in the top menu
14:27 championofcyrodi looking for docs that break the telemetry role down a bit and expand on the expectations.
14:34 aarzhanov left #fuel
14:38 championofcyrodi hmmm nova resize shows success on an instance.  but it does not resize.
14:42 championofcyrodi it was booted using a custom flavor that has custom metadata.... of course, resizing to a 'larger' flavor is what I'm attempting.  i think maybe it's time to enable debug logging on the cluster so i can figure out where errors are happening.  currently there are no errors in my nova logs.
14:42 codybum joined #fuel
14:44 championofcyrodi just noticed my instance status switches to 'active' from 'shutoff' when trying to do a resize, but it is still 'shutdown' as the power state. (i'm attempting to resize with the server powered off.)
14:47 codybum I feel like I am beating a dead horse, but I can't be the only person getting almost constant "oslo.messaging._drivers.impl_rabbit [-] Failed to publish message to topic" when using Fuel 6.0.1 in HA.
14:50 codybum There are almost 1000 connections on a single controllers rabbitmq server.  Is this right?  How could it be?
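[To see where ~1000 connections come from, counting them per peer host is a reasonable first step. On a controller the input would come from `rabbitmqctl -q list_connections peer_host`; the printf below stands in with sample addresses so the counting pipeline itself can be shown.]

```shell
# Sample peer hosts; on a controller, pipe in the output of:
#   rabbitmqctl -q list_connections peer_host
printf '10.20.0.3\n10.20.0.3\n10.20.0.4\n10.20.0.3\n' \
  | sort | uniq -c | sort -rn
```

The busiest peers come out on top; a single host holding hundreds of connections usually points at one misbehaving service rather than cluster-wide churn.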
14:52 claflico joined #fuel
14:53 daniel3_ joined #fuel
14:58 mattgriffin joined #fuel
14:58 claflico well crap, neutron deployment failed…again….going back to nova
14:59 codybum claflico: what is your failure?
15:01 claflico last week tried deploying with a single controller and could reproduce that the controller would go offline after its deployment was done, causing the compute and storage node deployment to fail…until the control node was rebooted
15:01 claflico yesterday attempted a redeploy using 2 controllers and HA
15:01 claflico one controller showed deployed but the other just showed that puppet was failing
15:02 claflico I left it and went home
15:02 claflico came in this morning and saw that all had failed
15:03 claflico part of the reason I made the move to neutron was so that I could use the load-balancer plugin but it doesn't show up as an option when doing neutron with HA
15:03 claflico so I'm giving up on neutron for now and going back to nova
15:04 codybum I am also having problems with HA.  I can get things going, but rabbitmq failures are almost constant with neutron.
15:04 codybum Did your single node deployment work well?
15:06 codybum I can tail the rabbitmq logs and see several AMQP connections connecting constantly.  I am not sure why others are not experiencing this; I am running a setup as generic as it gets.
15:07 julien_ZTE joined #fuel
15:07 claflico no, it has worked ok on my home lab of low-end gear but at my work with actual dell servers it doesn't want to
15:07 claflico I'd run a fuel health-check and none of the created resources would be deleted after the tests were done.
15:11 dhblaz joined #fuel
15:12 codybum claflico: That is interesting.. Same things with me.  lab worked great, but on 3 controllers with 64G ram and 20 cores.. things came undone.
15:13 codybum claflico: Do you see lots of rabbitmq failures?  I see them everywhere, but specifically under neutron-server.
15:14 codybum When using the system every few seconds I see a : "oslo.messaging._drivers.impl_rabbit [-] Failed to publish message to topic 'reply_6d8e975eb243430fb2e484ef732488b0': Socket closed" error, under neutron-server.
15:14 codybum I don't know if connections are getting reset or what
15:16 codybum I am wondering if the TCP keepalive timeout is longer than the rabbitmq heartbeat timeout
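[codybum's hunch checks out on defaults: the Linux tcp_keepalive_time default is 7200 seconds (readable on a live node with `sysctl net.ipv4.tcp_keepalive_time`), far longer than typical AMQP heartbeat intervals, so the kernel alone won't notice a dead broker quickly. The values below are illustrative defaults, not read from a real controller.]

```shell
tcp_keepalive_time=7200   # Linux kernel default, seconds
amqp_heartbeat=60         # example oslo.messaging/rabbit heartbeat, seconds
if [ "$tcp_keepalive_time" -gt "$amqp_heartbeat" ]; then
  echo "AMQP heartbeat fires first; kernel keepalive is too slow to help"
else
  echo "kernel keepalive fires first"
fi
```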
15:17 claflico codybum:  haven't dug in enough to where I need to look into the issues yet.
15:17 claflico we're not quite ready to use yet
15:18 codybum I am ready, but apparently the system is not :)
15:18 claflico but I do the install, upload a couple of images, spin up 1-2 small linux instances, and then let it sit for a week or so
15:18 claflico and see how long before the health tests start failing
15:19 claflico usually by the 4th day or so tests start failing
15:21 claflico this is all with nova; i'm not going to install ceilometer this time (with nova) and see how long it stays working
15:23 duculete joined #fuel
15:27 jobewan joined #fuel
15:59 xarses joined #fuel
16:00 codybum joined #fuel
16:04 blahRus joined #fuel
16:06 xarses jskarpet: you can make fuel run the deployment one at a time with the cli 'fuel node --env <id> --nodes <id[,id,...]> --deploy'
16:06 xarses env id is from 'fuel env' node id is from 'fuel nodes'
16:07 xarses jskarpet: you can clear out the old logs in /var/log/docker-logs/remote on the fuel master node
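[Spelled out, the flow xarses describes. The env/node IDs (1, 2 and 3) are placeholders; look yours up first with `fuel env` and `fuel nodes` on the master. The deploy command is only echoed here; run it for real on the Fuel master node.]

```shell
ENV_ID=1          # from: fuel env
NODE_IDS="2,3"    # from: fuel nodes
echo "fuel node --env $ENV_ID --nodes $NODE_IDS --deploy"
# Old logs from reset environments accumulate under
# /var/log/docker-logs/remote/ on the Fuel master and are safe to prune.
```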
16:09 xarses claflico: yes, I would say most people deploy in HA mode, especially since you can deploy with one controller now and add more later. Re the LBaaS plugin, I just don't think it was written for HA support, which is likely why the option was configured to disappear.
16:20 codybum joined #fuel
16:21 codybum ah @xarses.. ready for one more round with rabbitmq issues?
16:28 MarkDude joined #fuel
16:37 codybum for reals
16:38 duculete joined #fuel
16:47 MarkDude joined #fuel
16:47 MarkDude joined #fuel
16:50 duculete joined #fuel
16:50 aarzhanov joined #fuel
16:50 aarzhanov left #fuel
16:58 mattgriffin joined #fuel
16:58 ofkoz_ joined #fuel
17:08 thumpba joined #fuel
17:08 claflico deployment of nova based worked no failures
17:09 codybum Neutron seems to be the devil
17:10 rmoe joined #fuel
17:10 claflico yeah, it just seems too complex
17:10 claflico specially for the SMB market
17:11 claflico flat dhcp would work great for us but was just wanting to try and take advantage of that lb plugin
17:11 claflico rather than mess with my own
17:12 julien_ZTE joined #fuel
17:12 claflico "Autoscaling with native cloudwatch mechanism does not work in Heat when used multi-engine architecture." = doesn't work with HA?
17:13 julien_ZTE joined #fuel
17:14 dhblaz joined #fuel
17:26 jaypipes joined #fuel
17:29 jobewan joined #fuel
17:30 xarses claflico: no, it looks like it should run with ha, can you paste a pic of the issue?
17:33 claflico xarses:  it's in the Fuel Healthcheck
17:33 claflico under "Platform services functional tests"
17:34 claflico under "Check stack autoscaling", it didn't run the test with the above message in red
17:34 xarses claflico: with the lbaas option disappearing
17:34 claflico xarses:  the lbaas option isn't in play since i'm using nova now rather than neutron
17:38 e0ne joined #fuel
17:42 claflico in a nova, HA env, should I be able to ping & SSH to my openstack dashboard IP?
17:47 championofcyrod1 joined #fuel
17:48 championofcyrod1 I was worried about this, and expected it.  but still disappointing.   Trying to add 2 controllers to a single controller in HA with telemetry deployed on it resulted in an error deploying the new controllers;  (/Stage[main]/Openstack::Mongo_primary/Mongodb::Db[ceilometer]/Mongodb_user[ceilometer]) Could not evaluate: Tue Feb 17 17:40:45.789 count failed: { "ok" : 0, "errmsg" : "unauthorized" } at src/mongo/shell/query.
18:03 kupo24z joined #fuel
18:05 angdraug joined #fuel
18:12 emagana joined #fuel
18:22 julien_ZTE joined #fuel
18:49 clauded joined #fuel
18:50 clauded Hi. i have a large snapshot (~1Gb) : is there any dedicated site to upload it?
18:51 clauded It's a diagnostic snapshot...
18:59 kaliya hi clauded usually users share their big snapshots via google drive or dropbox or any equivalent service
19:00 clauded @kaliya: unfortunately, this will explode my Google drive account :(
19:00 kaliya I thought google was gifting 80G and more
19:03 clauded @kaliya: wo oh! I thought I had 1 Gb! What am I gonna do with all that storage? :)
19:14 duculete joined #fuel
19:18 xarses championofcyrod1: where is the mongo roles?
19:19 xarses is/are
19:28 julien_ZTE joined #fuel
19:39 samuelBartel joined #fuel
19:43 clauded left #fuel
20:01 stamak joined #fuel
20:13 mattgriffin joined #fuel
20:22 championofcyrod1 xarses: Initially we did 1 controller w/ HA and the Controller+Telemetry roles.  everything was running smooth.  tried to add two more controllers to the HA fuel instance and got the error above
20:22 championofcyrod1 now all three controllers are in error state and the horizon UI is no longer available
20:25 championofcyrod1 i'm guessing the new mongo nodes can't connect to the existing node due to some security constraint.
20:34 HeOS joined #fuel
20:34 championofcyrod1 should the ceilometer entries in the mongo.yaml have the same metering_secret, user_password, and db_password?
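[An illustrative fragment of what "same values" would mean here. The key names are taken only from the question above, not from a verified mongo.yaml schema; treat this as a sketch of the invariant, not the real file layout.]

```yaml
# Sketch only: every node joining the replica set must carry identical
# ceilometer credentials, or new members fail auth against the existing
# primary (as in the "errmsg" : "unauthorized" error seen earlier).
ceilometer:
  metering_secret: "SAME-ON-EVERY-NODE"
  user_password:   "SAME-ON-EVERY-NODE"
  db_password:     "SAME-ON-EVERY-NODE"
```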
21:06 mattgriffin joined #fuel
21:06 angdraug https://bugs.launchpad.net/fuel/+bug/1308990
21:06 angdraug championofcyrod1: is ^^^ your problem?
21:07 angdraug if that's the case: http://docs.mirantis.com/openstack/fuel/fuel-6.0/operations.html#add-a-mongodb-node
21:12 championofcyrod1 so i'm hoping that deleting the two failed controllers from the fuel UI does not overwrite the information on the pre-existing controller...
21:12 championofcyrod1 even though it says 'Installing Openstack'
21:13 championofcyrod1 on the original controller
21:14 championofcyrod1 well... the horizon UI and all the networking services seem to be back up.
21:14 championofcyrod1 and the mysql data doesnt seem to have been lost.
21:14 championofcyrod1 (phew)
21:16 championofcyrod1 okay... going to read through that bug and the info on deploying additional mongo nodes. thanks angdraug
21:47 stamak joined #fuel
21:49 julien_ZTE joined #fuel
22:06 championofcyrod1 left #fuel
22:22 kupo24z joined #fuel
23:10 mattgriffin joined #fuel
23:12 julien_ZTE joined #fuel
23:33 jobewan joined #fuel
23:59 julien_ZTE joined #fuel
