
IRC log for #fuel, 2015-03-04


All times shown according to UTC.

Time Nick Message
00:48 dmellado joined #fuel
00:49 Obi-Wan joined #fuel
00:57 thumpba joined #fuel
01:51 rmoe joined #fuel
02:47 Longgeek joined #fuel
03:05 thumpba joined #fuel
03:37 xarses joined #fuel
03:41 thumpba joined #fuel
04:15 thumpba_ joined #fuel
04:54 claflico joined #fuel
04:59 gongysh joined #fuel
06:08 Longgeek joined #fuel
06:24 thumpba joined #fuel
06:25 thumpba_ joined #fuel
07:27 dklepikov joined #fuel
07:34 Longgeek joined #fuel
07:36 sc-rm evg: thanks :-)
07:37 sc-rm evg: Now that I have gotten your keystone.tar, what's the next step?
07:46 tzn joined #fuel
08:00 saibarspeis joined #fuel
08:01 vicvinc joined #fuel
08:05 gongysh joined #fuel
08:09 hyperbaba joined #fuel
08:20 e0ne joined #fuel
08:27 stamak joined #fuel
08:37 aliemieshko_ joined #fuel
08:38 Miouge joined #fuel
08:41 maximov joined #fuel
08:42 alecv joined #fuel
08:42 sambork joined #fuel
08:43 thumpba joined #fuel
08:43 HeOS joined #fuel
08:53 adanin joined #fuel
09:02 monester_laptop joined #fuel
09:05 corepb joined #fuel
09:07 JohnGiorno joined #fuel
09:31 e0ne joined #fuel
09:48 tzn joined #fuel
09:53 Longgeek joined #fuel
10:38 andriikolesnikov joined #fuel
10:42 evg sc-rm: hello, sorry i'm out of kb now. Please try "docker load -i keystone.tar"
10:42 evg sc
10:43 evg sc-rm: then "puppet apply --detailed-exitcodes -d -v /etc/puppet/modules/nailgun/examples/host-only.pp"
10:44 evg sc-rm: backup your containers, config and pgsql db. I'm afraid it's not the only issue.
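
[Editor's note: a minimal sketch of the sequence evg describes above; the "docker images" check is an added verification step, not part of evg's instructions.]

    # import the keystone image from the tarball evg provided
    docker load -i keystone.tar
    # verify the fuel/keystone_6.0 image is now listed (sc-rm does this further down)
    docker images | grep keystone
    # re-run the host-only manifest so the container is recreated from the image
    puppet apply --detailed-exitcodes -d -v /etc/puppet/modules/nailgun/examples/host-only.pp
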
11:02 sc-rm evg: docker ps, still has no keystone in the list
11:04 sc-rm evg: What should I backup and how, to transfer to a new install of a fuel master?
11:09 evg sc-rm: docker images?
11:33 thumpba joined #fuel
11:41 Longgeek joined #fuel
11:52 sc-rm evg: in docker images I now have fuel/keystone_6.0                        latest              3d07100014c1        9 weeks ago         213.2 MB
11:52 sc-rm evg: but docker is still looking for the old container: [error] server.go:89 HTTP Error: statusCode=404 No such container: fuel-core-6.0-keystone
11:58 ofkoz_ joined #fuel
12:30 sambork1 joined #fuel
12:32 sambork2 joined #fuel
12:33 sambork joined #fuel
12:35 sambork2 joined #fuel
12:35 sambork3 joined #fuel
12:39 sambork joined #fuel
12:39 sambork1 joined #fuel
12:40 sambork2 joined #fuel
12:41 sambork1 joined #fuel
12:44 azemlyanov joined #fuel
12:52 sambork1 joined #fuel
13:06 e0ne joined #fuel
13:09 mattymo sc-rm, ping
13:10 sc-rm mattymo: hi
13:11 mattymo did your Fuel Master ever run out of disk space
13:11 mattymo ?
13:11 mattymo http://docs.mirantis.com/openstack/fuel/master/operations.html#id97 has steps for recovery
13:12 sc-rm mattymo: yep, it did
13:13 mattymo so what's most likely happened is there was a failed write to the sqlite db of docker itself
13:14 mattymo check the 4th code block where it has a sqlite3 command in it
13:14 mattymo replace postgres with keystone and try it yourself, using fuel-core-6.0-keystone as container_name
13:14 mattymo and then just try dockerctl start keystone
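
[Editor's note: the exact sqlite3 one-liner lives in the operations guide mattymo links above; the sketch below only illustrates the substitution he describes, and the database path and SQL statement are assumptions, not the verbatim docs command.]

    # remove the stale record for the keystone container from docker's own sqlite db
    # (path and table name are assumed here -- copy the real statement from the docs,
    #  swapping postgres for keystone and using fuel-core-6.0-keystone as container_name)
    sqlite3 /var/lib/docker/linkgraph.db \
        "DELETE FROM edge WHERE name='/fuel-core-6.0-keystone';"
    # then bring the container back up
    dockerctl start keystone
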
13:21 sc-rm mattymo: I’ll look at it
13:22 thumpba joined #fuel
13:26 sc-rm mattymo: If I use fuel-core-6.0-keystone I get no id, but if I use keystone I get
13:26 sc-rm mattymo: some output
13:27 mattymo that shouldn't be possible
13:27 mattymo there are no wildcards in the queries
13:28 mattymo are you able to pastebin your console output?
13:30 sc-rm mattymo: ah, wrong block of code, now it worked
13:30 sc-rm mattymo: so, now it’s been deleted from the sqlite db
13:31 mattymo there is only one that contains sqlite3 commands
13:31 sc-rm mattymo: Yep, just me not reading ;-)
13:33 mattymo I should make anchors for every code block so I can point you more specifically
13:33 sc-rm mattymo: So, now that it’s been deleted, then what? :-)
13:33 mattymo <mattymo> and then just try dockerctl start keystone
13:36 sc-rm mattymo: http://paste.openstack.org/show/187689/
13:38 mattymo yeah that's not a good sign
13:38 mattymo try dockerctl logs keystone
13:39 sc-rm mattymo: it just says: Usage: docker logs CONTAINER
13:39 Longgeek joined #fuel
13:40 mattymo so it seems the upgrade went badly
13:40 mattymo what version is in /etc/fuel/version.yaml?
13:41 mattymo also can you send me your /var/log/docker file?
13:41 sc-rm http://paste.openstack.org/show/187697/
13:41 mattymo it will have more details
13:45 sc-rm mattymo: https://drive.google.com/file/d/0B3um7dvw7_L0UlVhNng2dGdmdTg/view?usp=sharing
13:54 championofcyrodi Does anyone have any recommendations on a way to maintain backup of mapping between instances, RBD Images (Volumes, Snapshots, Images),  and their corresponding Ceph objects? (asking in #ceph @ oftc.net as well)
13:54 daniel3_ joined #fuel
13:55 championofcyrodi Maybe just the entire ceph monitor working directory?
13:56 gongysh_ joined #fuel
13:57 sc-rm championofcyrodi: why would you want that mapping out?
14:01 championofcyrodi sc-rm: recently we 'lost' the monitors for ceph and wanted to recover data.
14:02 championofcyrodi I've learned that while OSDs are very resilient, the monitors are crucial.
14:03 championofcyrodi So I'd like to know how to 'backup' a monitor.
14:03 championofcyrodi since i assume they are all active in the consensus?
14:03 championofcyrodi I only need to back up 1
14:04 championofcyrodi since we were using 'ceph' for cinder, glance and nova... it was quite devastating to our RAW volumes in the compute pool.  qcow2 and images were fine and recoverable though.
14:06 championofcyrodi also, I have access to the fuel server and potentially the logs from the 'rolled back' deployment of a third controller that apparently undeployed our 1st and 2nd controller as well.  need to dig those up for xarses
14:06 sambork joined #fuel
14:06 championofcyrodi on the fuel server, anyone know off hand where the 'deployment' logs would be for an environment?
14:07 sc-rm championofcyrodi: That will make sense, hope you get it back
14:08 championofcyrodi sc-rm: I already got the qcow2 images back from an export weeks earlier.  but the RAW volumes seem to be missing the ext4 journaling data... or something like that.
14:08 championofcyrodi fsck seemed to be able to piece the filesystem back together and I got the volume mounted.  but when I would cat a file or cd into some directories, the output would be corrupt or I'd get an input/output error.
14:18 ChrisNBlum joined #fuel
14:19 sc-rm championofcyrodi: what if you log into one of the other remaining controllers, the 1st or 2nd controller?
14:20 anotchenko joined #fuel
14:20 sc-rm championofcyrodi: are they still running and having the ceph installed?
14:20 anotchenko left #fuel
14:20 championofcyrodi no.  they were undeployed and the LVM partition data was deleted.
14:21 championofcyrodi when the deployment of the 3rd controller errored out
14:24 sc-rm championofcyrodi: I don’t know if starting 3 ceph-mon daemons by hand would recreate it. That would be a question better handled by the ceph channel. Last time I had a total ceph-mon failure I did not succeed in recovering from it, but had to trash it all :-(
14:27 sc-rm championofcyrodi: but to answer your backup question in the first place. http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors/
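
[Editor's note: a rough sketch of the approach described in the post sc-rm links, assuming default Ceph paths, a monitor id of "node-1", and sysvinit-style service management; adapt the id and service commands to your cluster.]

    # stop one monitor so its store is quiescent (the remaining mons keep quorum)
    service ceph stop mon.node-1
    # archive the monitor's working directory: monmap, keyring and key/value store
    tar czf /backup/ceph-mon-node-1-$(date +%F).tar.gz /var/lib/ceph/mon/ceph-node-1
    # start it again; it rejoins quorum and catches up from its peers
    service ceph start mon.node-1
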
14:28 blahRus joined #fuel
14:32 sc-rm mattymo: I’m going to get a new server in the next couple of days, then I could install fuel-master v6.0 on it and copy the different files I need from it. Is that a feasible solution?
14:33 mattymo yeah it totally is another option
14:33 mattymo you want the contents of /var/lib/pgsql from postgres, /var/lib/astute from astute, and /var/lib/cobbler from cobbler containers
14:34 mattymo we haven't quite automated that outside of fuel_upgrade script yet
14:37 sc-rm mattymo: So if I shut the server down and boot from a DVD, so nothing is writing to the filesystem, then copy those folders out to external storage, then install a completely new server, boot it from DVD and replace those folders with the ones backed up?
14:38 mattymo in theory, yes
14:38 mattymo but if you reinstall, you can't manage old deployed envs
14:38 mattymo you need to start with an old version of fuel, then upgrade
14:41 sc-rm mattymo: okay, I’ll try, when we get it up and running after the upgrade process is done from 5.0 to 6.0
14:46 Rodrigo_US joined #fuel
14:50 sc-rm mattymo: dockerctl shell postgres su postgres -c "pg_dumpall --clean \
14:50 sc-rm > /root/postgres_backup_$(date).sql" will that not do for the postgres?
14:51 mattymo yeah it does work
14:51 mattymo the dump is safer than just copying the raw data in place
14:56 sc-rm mattymo: Then I’ll do so, but the astute and cobbler data have to be copied
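
[Editor's note: a hedged sketch of copying the astute and cobbler data out alongside the postgres dump; it assumes "dockerctl shell <container> <command>" passes stdout through to the host, which is how it is used elsewhere in this log.]

    # dump the database (safer than copying the raw pgsql files in place, per mattymo)
    dockerctl shell postgres su postgres -c "pg_dumpall --clean" > /root/postgres_backup_$(date +%F).sql
    # tar the astute and cobbler data directories straight out of their containers
    dockerctl shell astute tar czf - /var/lib/astute > /root/astute_backup_$(date +%F).tar.gz
    dockerctl shell cobbler tar czf - /var/lib/cobbler > /root/cobbler_backup_$(date +%F).tar.gz
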
15:01 stamak joined #fuel
15:01 sc-rm mattymo: But how do I get the 5.0 iso? I can only see a 5.0.1 iso
15:02 mattymo :( yeah that's a problem
15:02 sc-rm my latest deploy is an icehouse one, so will 5.1.1 be good enough?
15:02 mattymo not exactly..
15:02 mattymo if you have a 5.0 env and you want to keep managing it, you need to start on that same release
15:02 mattymo we can try to fix keystone some more on your side
15:03 sc-rm if we can fix it, that would be best; at least I now have a backup of astute, cobbler and postgres
15:03 mattymo ok
15:04 sc-rm mattymo: so what would you suggest we try next?
15:05 mattymo sc-rm, docker ps -a | grep keystone        <- does it have any results?
15:05 sc-rm mattymo: http://paste.openstack.org/show/187774/
15:06 sc-rm mattymo: try a classic windows trick and reboot? :-P
15:06 mattymo no way
15:06 mattymo docker rm -f 7190b5a99f85
15:06 mattymo that is some useless container that was created without proper links
15:07 sc-rm okay, done
15:07 mattymo http://paste.openstack.org/show/DRlqaKGtW1mBL8xCUoqQ/
15:07 mattymo ^ try this command sc-rm
15:08 thumpba joined #fuel
15:09 claflico joined #fuel
15:10 sc-rm mattymo: [debug] commands.go:1905 End of CmdRun(), Waiting for hijack to finish.
15:10 anotchenko joined #fuel
15:11 mattymo then docker logs fuel-core-6.0-keystone
15:11 mattymo if it didn't error, that's a good sign
15:11 sc-rm martineg_ http://paste.openstack.org/show/187800/
15:12 sc-rm sorry martineg_  :-)
15:13 sc-rm mattymo: Now I can log in to fuel web again
15:16 mattymo yay
15:16 mattymo dockerctl check all
15:17 sc-rm mattymo: ostf is hanging in the check
15:17 mattymo then restart it: dockerctl restart ostf
15:24 sc-rm mattymo: It’s running, but dockerctl check all is not getting past it
15:24 mattymo then run dockerctl logs ostf
15:24 mattymo it will say some sort of error
15:25 book` joined #fuel
15:27 corepb joined #fuel
15:27 championofcyrodi where is the latest fuel download link? the site seems to have changed and I can't seem to find it at https://software.mirantis.com/ without an email registration
15:27 championofcyrodi was curious if 6.1.1 is available.
15:28 mattymo championofcyrodi, there are community builds available without any registration: https://www.fuel-infra.org/
15:28 championofcyrodi Thanks mattymo
15:28 mattymo 6.1 is what the current master will end up being. 6.1.1 hasn't started any development yet
15:29 championofcyrodi sorry, that's what i meant, 6.1
15:29 championofcyrodi we're running 6.0 now.
15:30 mattymo championofcyrodi, anything else you're looking for today?
15:30 championofcyrodi heh... not at the moment.
15:31 championofcyrodi thanks
15:31 sc-rm mattymo: the settings tab is also not working in fuel-web, so maybe something else needs to be restarted too?
15:31 mattymo browser cache?
15:31 mattymo try clearing cache or restarting browser
15:32 mattymo that's one side effect of upgrading
15:33 sc-rm mattymo: Tried in a different browser, but still same result
15:33 mattymo then you have errors in nailgun... gotta read logs and find out what's wrong
15:34 mattymo dockerctl logs nailgun      and also dockerctl logs nginx
15:34 mattymo maybe something went wrong
15:35 mattymo it's entirely possible you ran into more issues, like postgres db index corruption
15:39 jobewan joined #fuel
15:41 sc-rm mattymo: just restarted and I'm getting this in /var/log/docker http://paste.openstack.org/show/187810/ but this has been there since it ran out of space
15:42 mattymo :( yeah more stuff is broken but those IDs don't easily map back to container IDs
15:42 mattymo something is still corrupted
15:43 sc-rm mattymo: in /var/log/messages I get EXT4-fs (dm-13): warning: mounting fs with errors, running e2fsck is recommended
15:44 mattymo oh that would be it
15:45 mattymo sc-rm, there's directions on fixing that in the link I gave you about an hour ago
15:45 mattymo http://docs.mirantis.com/openstack/fuel/master/operations.html#id97
15:46 sc-rm but which container is dm-13 and dm-14?
15:48 mattymo sc-rm, looks like 4 of them have the telltale error http://docs.mirantis.com/openstack/fuel/master/operations.html#id99
15:49 mattymo hmm maybe not
15:49 daniel3_ joined #fuel
15:50 mattymo cobbler and ostf seem to be most affected
15:51 mattymo and mcollective
15:51 mattymo cobbler requires the actual fsck. the other two can be deleted and recreated
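
[Editor's note: a rough sketch of the two repair paths mattymo distinguishes here; the device name is an example, not a known mapping for dm-13/dm-14 -- identify the real device with lsblk or dmsetup ls before unmounting anything.]

    # ostf and mcollective can simply be rebuilt
    dockerctl destroy ostf;        dockerctl start ostf
    dockerctl destroy mcollective; dockerctl start mcollective

    # cobbler's volume needs an actual filesystem check
    dockerctl stop cobbler
    umount /dev/dm-13              # example device; use the one /var/log/messages complained about
    e2fsck -fy /dev/dm-13
    dockerctl start cobbler
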
15:58 sc-rm mattymo: dockerctl destroy ostf; dockerctl start ostf
15:58 sc-rm mattymo: ERROR: ostf failed to start.
15:59 mattymo more info in /var/log/docker...
15:59 sc-rm mattymo http://paste.openstack.org/show/187826/
15:59 mattymo sc-rm, sorry for the slow resolution here. You seem to have wrecked it harder than most. Usually it's only 1 or 2 things that get jammed up when space runs out
16:00 tzn joined #fuel
16:00 sc-rm mattymo: No problem, as long as it gets back to functional, the time doesn’t matter. OpenStack itself is still running, so as long as the production env is not down I have a lot of time ;-)
16:00 thumpba joined #fuel
16:00 mattymo that should never happen.... the volume is mounted directly
16:01 mattymo on the master node, does yum check-update work okay? if so, then I need the output of docker inspect fuel-core-6.0-ostf and the output of dockerctl shell ostf bash -c 'cat /etc/yum.repos.d/*'
16:02 sc-rm yep, no errors
16:03 sc-rm http://paste.openstack.org/show/187827/
16:04 sc-rm the last command is not returning
16:08 angdraug joined #fuel
16:11 adanin joined #fuel
16:12 sc-rm mattymo: http://paste.openstack.org/show/187855/
16:18 sc-rm mattymo: But I have to go for today, will work more on it tomorrow.
16:19 sc-rm mattymo: Thanks so far for the help. You have some free beer here ;-)
16:26 jaypipes joined #fuel
17:06 xarses joined #fuel
17:17 rmoe joined #fuel
17:20 omolchanov joined #fuel
17:22 eriswans If I’m using ceph for everything (including ephemeral storage), can I zero out Virtual Storage on my compute nodes and give it all to Ceph?
17:27 andriikolesnikov joined #fuel
17:28 aarefiev joined #fuel
17:29 aarefiev joined #fuel
17:31 angdraug no, there are still some cases where this storage is needed, e.g. creating a glance image
17:32 angdraug it will essentially be used as qemu-img scratch space
17:32 aarefiev joined #fuel
17:33 angdraug you can put it on the same volume as your OS root, you shouldn't combine that with OSDs anyway
17:36 eriswans Hmm, okay. I have a weird set of resources available: 2 24-drive filers (hw raid 50 + hot spares) and up to 16 blades that each have two relatively tiny drives in a raid-0. Without dropping ceph replication to 2 I’ll be bottlenecked by the total of the tiny drives in the blades (collectively acting as a third filer)
17:41 e0ne joined #fuel
17:41 MarkDude joined #fuel
17:42 loth joined #fuel
17:43 blahRus joined #fuel
17:49 DaveJ__ joined #fuel
17:49 DaveJ__ Hi - can anyone help me with Fuel HA? I've set up a deployment with 3 controllers. If I modify keystone.conf on one controller, do I need to manually update it on the rest, or is there a way to keep it in sync? Same goes for the nova and neutron.conf files?
17:51 thumpba joined #fuel
18:09 asilenkov_ joined #fuel
18:19 teran joined #fuel
18:22 e0ne joined #fuel
18:54 alecv joined #fuel
18:55 adanin joined #fuel
19:16 e0ne joined #fuel
19:21 xarses joined #fuel
19:21 teran joined #fuel
19:21 HeOS joined #fuel
19:31 corepb joined #fuel
19:36 daniel3_ joined #fuel
19:44 adanin joined #fuel
20:02 jaypipes joined #fuel
20:05 e0ne joined #fuel
20:13 andriikolesnikov joined #fuel
20:16 thumpba joined #fuel
22:22 e0ne joined #fuel
23:08 adanin joined #fuel
23:09 CheKoLyN joined #fuel
23:18 thumpba joined #fuel
23:22 CheKoLyN Hello all, is there a way to change the release_id for an environment? This is in the fuel CLI
