
IRC log for #fuel, 2014-03-03


All times shown according to UTC.

Time Nick Message
00:59 e0ne joined #fuel
01:58 e0ne joined #fuel
02:58 e0ne joined #fuel
03:58 e0ne joined #fuel
04:20 dburmistrov joined #fuel
04:56 anotchenko joined #fuel
04:58 e0ne joined #fuel
05:10 dburmistrov joined #fuel
05:58 e0ne joined #fuel
06:31 saju_m joined #fuel
06:35 e0ne joined #fuel
06:37 anotchenko joined #fuel
07:13 anotchenko joined #fuel
07:26 alex_didenko joined #fuel
08:06 e0ne joined #fuel
08:23 amartellone joined #fuel
08:28 miguitas joined #fuel
08:43 saju_m joined #fuel
08:55 e0ne joined #fuel
09:00 vk joined #fuel
09:23 rvyalov joined #fuel
09:26 anotchenko joined #fuel
09:32 tatyana joined #fuel
09:34 saju_m joined #fuel
09:47 e0ne_ joined #fuel
10:09 warpig joined #fuel
10:28 anotchenko joined #fuel
10:30 meow_nofer joined #fuel
10:32 Rvyalov joined #fuel
10:41 saju_m joined #fuel
10:45 e0ne joined #fuel
10:51 rvyalov joined #fuel
11:05 Ch00k joined #fuel
11:25 saju_m joined #fuel
11:29 Arminder joined #fuel
12:20 Ch00k joined #fuel
12:22 TVR___ joined #fuel
12:32 bookwar joined #fuel
12:46 e0ne joined #fuel
13:13 justif joined #fuel
13:24 e0ne_ joined #fuel
13:36 e0ne joined #fuel
13:39 e0ne joined #fuel
13:41 topochan joined #fuel
13:43 sanek joined #fuel
13:59 dburmistrov joined #fuel
14:22 e0ne joined #fuel
14:28 vkozhukalov joined #fuel
14:42 tatyana joined #fuel
15:21 vkozhukalov joined #fuel
15:30 Dr_Drache joined #fuel
15:31 jobewan joined #fuel
15:44 TVR___ OK.. so super quiet in here....
15:44 TVR___ so does that mean 4.1 was released and all the bugs are fixed?
15:46 Dr_Drache nope
15:46 Dr_Drache last i looked 10 hours
15:46 Dr_Drache BUT
15:46 Dr_Drache i fixed ubuntu
15:46 Dr_Drache or "we" fixed it
15:46 TVR___ cool cool
15:47 Dr_Drache 2 things, 1 timeout of grub, and then virtual com ports
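The log never shows the actual change, but both fixes have a well-known shape on a stock Ubuntu /etc/default/grub (values illustrative, and the serial settings may need to go the other way if grub was hanging on a dead COM port):

    # /etc/default/grub
    GRUB_TIMEOUT=5                # don't sit at the menu indefinitely
    GRUB_RECORDFAIL_TIMEOUT=5     # same, after an unclean shutdown (Ubuntu-specific)
    # mirror the console to the (virtual) serial port as well as the display
    GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
    GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
    GRUB_TERMINAL="console serial"

Then regenerate the config with update-grub.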
15:49 Dr_Drache TVR___, NOW, I forgot how to get dhcp to the test vm.
15:49 Dr_Drache lol
15:49 Dr_Drache of all things
15:54 Dr_Drache TVR___, ok, maybe I missed a step
15:54 Dr_Drache I made a new project.
15:54 Dr_Drache .. subnet DHCP, and a new sec group.
15:58 alex_didenko joined #fuel
16:59 brain461 left #fuel
17:02 brain461 joined #fuel
17:14 Dr_Drache MiroslavAnashkin,
17:27 Ch00k joined #fuel
17:42 Dr_Drache I cannot create any instances
17:53 xarses joined #fuel
17:55 Dr_Drache xarses, so, I'm back to having issues
17:55 Dr_Drache LOL
17:56 Dr_Drache I cannot create an instance that creates a new volume.
17:56 Dr_Drache but, I can create new volumes.
17:57 xarses odd, what do you have for logs
17:57 vkozhukalov joined #fuel
17:58 Dr_Drache which logs would be appropriate?
17:58 Dr_Drache functional tests #2 fail as well
18:00 Dr_Drache "Create volume and attach it to instance" failed and "Launch instance, create snapshot, launch instance from snapshot" failed
18:03 xarses cinder-all.log from the controllers, the nova log from the compute that tried to start the instance
18:03 xarses Dr_Drache: this is 4.0?
18:03 xarses Dr_Drache: back in about 10 min
18:04 Dr_Drache xarses, yes 4.0 - ubuntu, all Ceph options chosen, unchecked the qcow2 option.
18:04 rvyalov joined #fuel
18:19 rvyalov joined #fuel
18:50 melodous joined #fuel
18:57 angdraug joined #fuel
18:59 e0ne joined #fuel
19:30 Dr_Drache xarses :P
19:33 Dr_Drache I have a question
19:36 e0ne joined #fuel
19:37 Dr_Drache I edited the drives so that I only have 5GB of virtual disk.
19:37 Dr_Drache could that be the issue?
19:46 Dr_Drache using the built-in flavors, I can only use small and below, so 20GB... anything bigger fails, and anything totaling over 20GB (two 1GB volumes, then add a 20GB on top of that) also fails
19:49 xarses Dr_Drache: were you able to collect any logs?
19:50 Dr_Drache xarses, I have logs for days, I don't know which you would want.
19:50 Dr_Drache fuel master?
19:50 Dr_Drache just trying step by step to see what happens
19:51 Dr_Drache i've looked through cinder/ceph/nova/glance logs.
19:51 Dr_Drache but I don't see anything that looks like a failure
19:52 xarses I suspect its https://bugs.launchpad.net/fuel/+bug/1260911 http://tracker.ceph.com/issues/5426
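If it is that bug, the constraint behind both links is that Ceph RBD can only clone raw images, so "boot from image (creates a new volume)" breaks when the Glance image is qcow2. A workaround sketch, assuming a local qcow2 file (file and image names are illustrative; glanceclient v1 syntax of the era):

    # convert to raw, then upload the raw copy to Glance
    qemu-img convert -f qcow2 -O raw ubuntu.qcow2 ubuntu.raw
    glance image-create --name ubuntu-raw --disk-format raw \
        --container-format bare --is-public True --file ubuntu.raw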
19:53 Dr_Drache looks like it
19:53 Dr_Drache well, looks similar.
19:53 xarses thats what i thought
19:54 Dr_Drache blah.
19:55 Dr_Drache any way to know for sure?
19:56 xarses so I'd look for cinder-all.log, you might need to disable syslog and restart one, and if you're running HA, stop cinder-volume and cinder-scheduler on the other controllers
19:57 Dr_Drache no cinder-all
20:01 xarses /var/log/cinder-all.log?
20:01 xarses hmm
20:02 Dr_Drache which node? controller?
20:02 Dr_Drache doesn't matter
20:02 xarses controller
20:02 Dr_Drache it's not on any
20:03 Dr_Drache http://paste.openstack.org/show/71870/
20:04 xarses ah, so look in cinder-scheduler.log and cinder-volume.log then
20:06 xarses If there is nothing useful in there, then we need to turn on debug logging and disable the syslog logging
20:08 Dr_Drache debug logging is on AFAIK.
20:10 xarses probably why there isn't cinder-all =)
20:10 xarses then there should be tracebacks in cinder-volume
20:10 Dr_Drache i left it enabled in fuel
20:10 Dr_Drache thought it would help if i ran into problems
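xarses' earlier suggestion, as a sketch against /etc/cinder/cinder.conf on the controller (both options are standard cinder settings; service names assume Ubuntu):

    # /etc/cinder/cinder.conf, [DEFAULT] section
    debug = True          # full tracebacks in the local log files
    use_syslog = False    # log to /var/log/cinder/*.log instead of syslog

    service cinder-volume restart
    service cinder-scheduler restart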
20:15 Dr_Drache xarses, https://www.dropbox.com/s/2b6mgl9bheemags/cinder-volume.log
20:15 e0ne joined #fuel
20:24 xarses Nothing useful looking in there =(
20:25 xarses It appears that cinder-volume thinks everything is working
20:25 xarses so we want to look at the nova-compute log on the compute that tried to start the instance
20:29 Dr_Drache shit
20:29 Dr_Drache huge huge logs
20:29 Dr_Drache 18/26 MB
20:30 Dr_Drache I assume the bigger one is going to be the one that is having issues
20:31 xarses Dr_Drache: we can just cause the error to occur and take the last 1K lines
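One way to do that, assuming the stock log path on the compute node:

    # reproduce the failure, then grab the tail
    tail -n 1000 /var/log/nova/nova-compute.log > /tmp/nova-compute.tail.log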
20:31 Dr_Drache https://www.dropbox.com/s/0bdmgdpg36jsmf2/nc-2.log
20:31 Dr_Drache lol
20:32 Dr_Drache https://www.dropbox.com/s/o1uhdzoxyqafmpj/nc-1.log
20:32 Dr_Drache -2 is the big one
20:32 Dr_Drache "656" is the instance name
20:32 Dr_Drache if it helps
20:38 mutex strange
20:46 Dr_Drache lol
20:46 Dr_Drache I seem to break fun stuff
20:53 rvyalov joined #fuel
20:54 Dr_Drache only thing i see right now
20:54 Dr_Drache is I have a volume that is stuck attaching
21:10 xarses Dr_Drache: is this a HA deployment?
21:10 Dr_Drache xarses, no sir
21:10 Dr_Drache xarses, i just attempted another
21:10 xarses check that cinder-api is running on your controller
21:10 Dr_Drache then tailed the nova-compute.log for you
21:10 Dr_Drache (tail -n 1500) http://paste.openstack.org/show/71894/
21:11 xarses http://paste.openstack.org/show/71895/ from nc-1
21:12 Dr_Drache ahhh
21:12 xarses also check that your compute can reach that url
21:12 xarses Caused by <class 'socket.error'>: [Errno 113] EHOSTUNREACH
21:13 xarses hmm neutron client worked fine
21:13 xarses so it should reach the controller
21:14 Dr_Drache how do you check cinder-api?
21:14 Dr_Drache i know cinder works
21:14 Dr_Drache well, cinder list/etc
21:15 Dr_Drache could this be related to using ceph as object?
21:15 Dr_Drache ceph RadosGW for objects (swift API)
21:15 xarses i think you can also get nova to show the volume list too
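Dr_Drache's question never gets a direct answer in the log; a few standard checks, assuming Ubuntu service names and the default cinder-api port of 8776:

    ps -ef | grep [c]inder-api        # is the process running?
    service cinder-api status         # ditto, via the init system
    curl -i http://<controller>:8776/ # any HTTP response means it's listening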
21:16 Dr_Drache root@node-3:~# cinder list
21:16 Dr_Drache root@node-3:~# nova list
21:16 Dr_Drache +--------------------------------------+----------+--------+------------+-------------+----------+
21:16 Dr_Drache | ID                                   | Name     | Status | Task State | Power State | Networks |
21:16 Dr_Drache +--------------------------------------+----------+--------+------------+-------------+----------+
21:16 Dr_Drache | f9bb48aa-1eb1-4f5c-9efc-52d937fdb362 | stressed | ERROR  | None       | NOSTATE     |          |
21:16 Dr_Drache +--------------------------------------+----------+--------+------------+-------------+----------+
21:16 Dr_Drache sorry, should have used paste
21:17 xarses we want something like nova volume list, but i don't have a running env, so i can't confirm
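For what it's worth, the novaclient of that era did carry such a subcommand (later deprecated in favor of the cinder CLI):

    nova volume-list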
21:19 xarses you might also want to do a ping -s 3008 -c 100 <controller> from the compute. There is another random traceback in the nova log
21:20 xarses 2014-03-03 15:44:26 AUDIT nova.compute.resource_tracker [-]  Free disk (GB): -35
21:21 Dr_Drache LOL
21:21 xarses is /var/lib/nova out of space?
21:21 Dr_Drache on which?
21:21 xarses nc-1
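A direct way to answer the disk question on nc-1 (plain coreutils, nothing Fuel-specific):

    df -h /var/lib/nova              # free space on the backing filesystem
    du -sh /var/lib/nova/instances   # how much nova itself is holding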
21:21 xarses other trace http://paste.openstack.org/show/71897/
21:22 Dr_Drache http://paste.openstack.org/show/71898/
21:24 xarses another random trace http://paste.openstack.org/show/71899/
21:25 xarses freedisk is 5 in that newer log
21:27 Dr_Drache I don't understand
21:27 xarses me either
21:28 xarses you only have 5GB in /var/lib/nova, which should be fine since we are using ceph ephemeral
21:29 Dr_Drache http://imgur.com/e5d0ejA
21:29 Dr_Drache http://imgur.com/etXLxWC
21:29 xarses yep
21:30 Dr_Drache link #2 - i have 2 nodes of that type
21:30 Dr_Drache anyway
21:31 xarses that should be fine
21:31 Dr_Drache I thought so. that's why I set it up like that.
21:31 Dr_Drache lol
21:31 Dr_Drache but it seems to still want that space
21:32 xarses can you do a glance index
21:32 xarses on the controller
21:33 Dr_Drache http://paste.openstack.org/show/71901/
21:39 rmoe joined #fuel
21:41 xarses Dr_Drache: and the output from cinder list?
21:42 Dr_Drache http://paste.openstack.org/show/71907/
21:49 xarses ok, can you walk me through the process you are using to launch the vm or post the cli (if you're using it)
21:50 xarses I spoke with someone and we looked at the code that is generating these messages, they are related to booting from volumes, but you have no volumes
21:50 xarses Dr_Drache: ^
21:52 Dr_Drache ok, well that's simple. using UI, Launch instance -> named instance -> choose flavor -> instance boot source "choose boot from image (Create new volume)" -> security groups -> networking -> "launch"
21:52 Dr_Drache ^ choose image name.
21:53 Dr_Drache leave device size untouched (using flavor values)
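The same launch from the CLI, sketched in Havana-era novaclient syntax (flavor, UUIDs, and the instance name are placeholders; the --block-device key set is an assumption about that client version):

    nova boot stressed \
        --flavor m1.small \
        --block-device source=image,id=<image-uuid>,dest=volume,size=20,bootindex=0,shutdown=remove \
        --nic net-id=<net-uuid>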
21:55 xarses do a 'cinder create 1' and then a 'cinder show <uuid>' about 2 sec later
21:55 xarses from controller
22:00 Dr_Drache http://paste.openstack.org/show/71912/
22:01 xarses odd, does it do this with the TestVM image too?
22:02 Dr_Drache let me see
22:02 Dr_Drache no
22:03 Dr_Drache I have uploaded that ubuntu image both in RAW and qcow2
22:03 Dr_Drache ahhh
22:03 Dr_Drache i created a 40gb
22:03 Dr_Drache worked fine
22:03 Dr_Drache created a 80gb
22:03 Dr_Drache error
22:04 xarses cinder create ?
22:04 Dr_Drache no, via UI
22:04 Dr_Drache it's just faster right now for me
22:12 xarses were you able to do that ping -s 3008 -c 100 <controller> from compute-1?
22:14 Dr_Drache yes
22:14 Dr_Drache 100% came back
22:14 xarses are the working instances still running?
22:15 Dr_Drache average ping .288 ms
22:15 Dr_Drache one is
22:15 Dr_Drache i deleted the others
22:16 xarses can you deploy the TestVM, with macro size, and launch 4, and then dump the list of nodes on the instance list page in horizon? (It shows which node they scheduled on)
22:16 Dr_Drache 4 with macro
22:16 Dr_Drache k
22:17 Dr_Drache micro?
22:17 Dr_Drache or large?
22:19 xarses doesn't really matter if you have enough ram
22:20 Dr_Drache 2 failed 2 didn't
22:21 Dr_Drache http://imgur.com/f5EMyvq
22:22 Dr_Drache xarses, I'm going to have to call it a day here in a few
22:22 xarses Dr_Drache: ok, that shows it for certain, the issue is node-1
22:23 Dr_Drache xarses, I can redeploy in the morning.
22:23 Dr_Drache you think 4.1 will be ready by then?
22:23 xarses OK, does 'ceph -s' show all the OSDs online?
22:24 Dr_Drache yes
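For reference, "all online" in ceph -s means the osdmap line reports every OSD both up and in, roughly like this (counts illustrative):

    health HEALTH_OK
    osdmap e42: 6 osds: 6 up, 6 in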
22:24 xarses ok, Implies that node-1 has a network issue on the management network talking to the controller
22:25 xarses or some other random defect
22:28 Dr_Drache will check on it in the morning
22:28 xarses Dr_Drache: Ok, good luck
