
IRC log for #fuel, 2014-10-21


All times shown according to UTC.

Time Nick Message
00:13 mattgriffin joined #fuel
00:14 emagana joined #fuel
00:16 harybahh joined #fuel
00:24 rmoe joined #fuel
00:59 Arminder joined #fuel
01:21 Longgeek joined #fuel
01:32 xarses joined #fuel
01:55 Arminder joined #fuel
02:01 jpf joined #fuel
02:41 mpetason joined #fuel
02:42 dhblaz joined #fuel
03:10 Longgeek joined #fuel
03:36 dhblaz joined #fuel
03:37 dhblaz joined #fuel
03:41 Arminder joined #fuel
03:54 Arminder joined #fuel
04:16 harybahh joined #fuel
04:18 emagana joined #fuel
05:03 ArminderS joined #fuel
05:30 emagana joined #fuel
05:41 Longgeek joined #fuel
06:06 Arminder joined #fuel
06:09 kozhukalov joined #fuel
06:11 saibarspeis joined #fuel
06:15 Longgeek joined #fuel
06:17 Arminder joined #fuel
06:17 harybahh joined #fuel
06:25 Arminder- joined #fuel
06:27 Arminder joined #fuel
06:27 ArminderS- joined #fuel
06:29 ArminderS joined #fuel
06:30 Arminder- joined #fuel
06:32 ArminderS- joined #fuel
06:32 Arminder joined #fuel
06:34 lordd joined #fuel
06:35 ArminderS joined #fuel
06:36 Arminder- joined #fuel
06:38 Arminder joined #fuel
06:42 Arminder- joined #fuel
06:44 Arminder joined #fuel
06:44 ArminderS- joined #fuel
06:48 ArminderS joined #fuel
06:49 apalkina joined #fuel
06:55 lordd joined #fuel
06:55 Arminder- joined #fuel
06:56 ArminderS- joined #fuel
06:57 harybahh joined #fuel
06:58 Arminder joined #fuel
07:00 ArminderS joined #fuel
07:01 ArminderS- joined #fuel
07:01 Arminder- joined #fuel
07:02 pasquier-s joined #fuel
07:04 sovsianikov joined #fuel
07:05 ArminderS joined #fuel
07:05 Arminder joined #fuel
07:06 azemlyanov joined #fuel
07:07 ArminderS- joined #fuel
07:07 Arminder- joined #fuel
07:11 Arminder joined #fuel
07:12 e0ne joined #fuel
07:13 dklepikov joined #fuel
07:14 Arminder- joined #fuel
07:15 Arminder joined #fuel
07:16 lordd joined #fuel
07:17 dklepikov left #fuel
07:25 sovsianikov joined #fuel
07:26 dklepikov joined #fuel
07:29 dklepikov left #fuel
07:30 dklepikov joined #fuel
07:30 hyperbaba_ joined #fuel
07:30 dancn joined #fuel
07:31 Arminder joined #fuel
07:32 Arminder- joined #fuel
07:33 sc-rm joined #fuel
07:34 dklepikov joined #fuel
07:34 ArminderS joined #fuel
07:37 Arminder joined #fuel
07:38 Arminder- joined #fuel
07:39 sc-rm kaliya: When trying to do a resize of an instance to a bigger flavor I get the following: http://paste.openstack.org/show/122543/
07:39 ArminderS- joined #fuel
07:40 holser joined #fuel
07:49 ArminderS joined #fuel
08:11 kaliya sc-rm: from horizon or do you have the command?
08:14 Arminder joined #fuel
08:36 syt joined #fuel
08:41 merdoc kaliya: I saw that error from horizon several times
08:42 HeOS joined #fuel
08:47 ArminderS- joined #fuel
08:52 ArminderS joined #fuel
08:57 ArminderS- joined #fuel
08:58 sc-rm kaliya: From horizon
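To take Horizon out of the loop, the same resize can be driven from the nova CLI; the instance and flavor names below are placeholders:

```shell
# Resize, poll until the task finishes, then inspect the state.
nova resize my-instance m1.large --poll
nova show my-instance | grep -E 'status|flavor'
# Once the instance reaches VERIFY_RESIZE, confirm (or revert):
nova resize-confirm my-instance
# nova resize-revert my-instance
```

Failures here usually leave a fuller error in the nova-api and nova-scheduler logs than Horizon surfaces.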
09:01 ArminderS joined #fuel
09:02 ArminderS joined #fuel
09:03 ArminderS joined #fuel
09:06 ArminderS joined #fuel
09:08 ArminderS joined #fuel
09:09 ArminderS joined #fuel
09:10 dkaigarodsev joined #fuel
09:12 ArminderS joined #fuel
09:14 ArminderS joined #fuel
09:15 ArminderS joined #fuel
09:17 ArminderS joined #fuel
09:18 ArminderS joined #fuel
09:19 ArminderS joined #fuel
09:22 ArminderS joined #fuel
09:23 Picachu joined #fuel
09:23 Picachu hi
09:23 anand_ts joined #fuel
09:24 ArminderS joined #fuel
09:24 tdubyk joined #fuel
09:24 Picachu Can anyone help me with the haproxy.cfg configured by fuel 3.2.1?
09:24 kaliya Picachu: what's your problem
09:25 ArminderS joined #fuel
09:25 sc-rm I just saw that the “Live migrate” menu entry is no longer available for instances in horizon, why is that?
09:26 Picachu it's about the "backup" option on some services
09:28 kaliya sc-rm: I honestly don't know. Maybe we can ask in #openstack-horizon...
09:28 sc-rm ahe, found it under the admin section, but it also fails
09:28 Picachu our fuel installer has been customized, and I am wondering if this was configured by Mirantis or added by the customization
09:29 Picachu cause when I look at http://docs.mirantis.com/fuel/fuel-3.2.1/reference-architecture.html#ha-logical-setup, all services should be active/active in ha ?
09:29 sc-rm like the resize instance
09:29 ArminderS- joined #fuel
09:30 ArminderS joined #fuel
09:30 adanin joined #fuel
09:30 Picachu this backup option concerns rabbit, quantum and mysql
09:30 Picachu is this default behaviour with fuel 3.2.1 ?
09:34 ArminderS joined #fuel
09:37 e0ne joined #fuel
09:39 ArminderS joined #fuel
09:40 sc-rm merdoc: Did you figure what was the problem or even fixed it?
09:42 merdoc sc-rm: no
09:42 ArminderS- joined #fuel
09:42 sc-rm merdoc: :-( haha
09:43 kaliya sc-rm: seems that live migrate has always been in the Admin panel
09:43 sc-rm merdoc: Then we have to file a bug, but where...
09:43 sc-rm kaliya: Yep, just my bad memory ;-) But it also fails
09:43 ArminderS joined #fuel
09:43 merdoc sc-rm: after 3 attempts (I have 3 nodes) that error disappears
09:43 kaliya merdoc: is this scientifically proved? ;)
09:44 merdoc kaliya: yep (%
09:44 sc-rm merdoc: I’ll try for 3-4 times now and let you know
09:44 ArminderS joined #fuel
09:45 merdoc sc-rm: you need to do #of_nodes + 1 attempts (%
09:46 merdoc it's because the scheduler tries each node and fails
09:46 ArminderS- joined #fuel
09:46 artem_panchenko joined #fuel
09:46 artem_panchenko left #fuel
09:46 artem_panchenko joined #fuel
09:47 sc-rm merdoc: Now it can resize the instance - wtf :-)
09:47 sc-rm merdoc: but live-migrate still fails
09:48 ganso joined #fuel
09:49 ArminderS joined #fuel
09:51 ArminderS joined #fuel
09:51 merdoc sc-rm: it's because the scheduler got the ssh key from the node where you try to resize, but doesn't have the key from the node to which you want to migrate
09:51 sc-rm kaliya: Live-migration “Unacceptable CPU info: CPU doesn't have compatibility.\n\n0\n\nRefer to http://libvirt.org/html/libvirt-libvirt.html#virCPUCompareResult"
09:52 merdoc oh. it's another problem (%
09:52 kaliya maybe related to libvirt_cpu_model
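The compare failure kaliya points at is governed by nova's libvirt CPU options; a hedged sketch of the relevant knobs in nova.conf on the compute nodes (these are the Grizzly/Icehouse-era names in [DEFAULT]; later releases move them into a [libvirt] section):

```ini
[DEFAULT]
# host-model exposes a CPU close to the host's own; custom pins an
# explicit model so heterogeneous hosts pass the compatibility check.
libvirt_cpu_mode=custom
# Only read when libvirt_cpu_mode=custom; pick a model every
# hypervisor in the cluster can provide (kvm64 here is illustrative).
libvirt_cpu_model=kvm64
```

nova-compute has to be restarted on each node for the change to take effect.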
09:52 sc-rm merdoc: ahe, that makes sense, but shouldn't the layers below handle this and just retry?
09:52 ArminderS joined #fuel
09:53 merdoc sc-rm: that's a question for the devs (%
09:55 kaliya sc-rm: https://bugs.launchpad.net/nova/+bug/1082414
09:56 ArminderS- joined #fuel
09:58 Picachu joined #fuel
10:00 teran joined #fuel
10:05 ArminderS joined #fuel
10:09 ArminderS- joined #fuel
10:10 ArminderS joined #fuel
10:11 adanin joined #fuel
10:14 ArminderS joined #fuel
10:15 teran joined #fuel
10:19 teran_ joined #fuel
10:25 evg Picachu: hi, hard to say about 3.2.1. We should ask people who remember it.
10:26 evg Picachu: We've got customers with 3.2.1 but their envs are heavily customised too.
10:31 artem_panchenko joined #fuel
10:39 teran joined #fuel
10:40 teran joined #fuel
10:43 merdoc weird. my dashboard started showing that I have only 12GB of hdd in total. looks like nova forgot that I have ceph with 1.5TB
10:48 artem_panchenko left #fuel
10:49 sc-rm kaliya: I see, but I’m migrating from one node to a 100% identical node, so I think that the changes described are not in the current release
11:10 teran_ joined #fuel
11:17 merdoc kaliya: how to tell nova that I have ceph? all my nodes tell the dashboard that they have only 5GB of storage, located in /var/lib/nova instead of ceph
11:18 aignatov joined #fuel
11:19 merdoc kaliya: I fixed it. someone here told me that images_type must be 'raw'. but according to comments in nova.conf that param is deprecated
11:20 merdoc and I need to use libvirt_images_type that already set to 'rbd'
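merdoc's fix amounts to making nova's libvirt driver put ephemeral disks on RBD; a sketch of the compute-side settings, assuming the older [DEFAULT] option names he quotes (newer releases use images_type under [libvirt]) and a hypothetical pool name:

```ini
[DEFAULT]
libvirt_images_type=rbd
# Pool name and ceph.conf path are assumptions; match your deployment.
libvirt_images_rbd_pool=compute
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
```

With images_type left at raw or qcow2, each node reports only its local /var/lib/nova capacity, which is exactly the 5GB symptom above.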
11:22 aglarendil joined #fuel
11:31 ArminderS- joined #fuel
11:37 strictlyb joined #fuel
11:37 brad[] joined #fuel
11:38 jseutter joined #fuel
11:39 vtzan joined #fuel
11:45 Dr_Drache merdoc, i'm not sure what those settings are supposed to be
11:45 Dr_Drache but they shouldn't have to be manual
11:46 merdoc Dr_Drache: I found it here in disqus about CoW, snapshots and raw
11:47 Dr_Drache merdoc, I was part of that, kinda.
11:47 Dr_Drache it still doesn't "make sense"; that's something fuel should take care of when you make your selections
11:50 Longgeek joined #fuel
11:54 HeOS_ joined #fuel
11:55 HeOS_ joined #fuel
11:55 harybahh joined #fuel
11:59 saibarspeis joined #fuel
12:07 dancn` joined #fuel
12:17 hyperbaba__ joined #fuel
12:18 bdudko joined #fuel
12:54 Picachu joined #fuel
13:09 dlkepikov joined #fuel
13:17 meow-nofer joined #fuel
13:19 ArminderS joined #fuel
13:36 kaliya left #fuel
13:36 kaliya joined #fuel
13:37 lordd joined #fuel
13:47 emagana joined #fuel
14:12 vtzan joined #fuel
14:24 rmoe joined #fuel
14:27 mpetason joined #fuel
14:31 mattgriffin joined #fuel
14:39 adanin joined #fuel
14:40 teran joined #fuel
14:43 rmoe joined #fuel
14:51 jobewan joined #fuel
15:10 hyperbaba joined #fuel
15:11 mattgriffin joined #fuel
15:17 dlkepikov joined #fuel
15:19 blahRus joined #fuel
15:29 teran joined #fuel
15:51 harybahh_ joined #fuel
15:56 harybahh joined #fuel
15:56 emagana joined #fuel
15:58 kupo24z joined #fuel
15:59 harybahh_ joined #fuel
16:03 harybahh joined #fuel
16:03 emagana joined #fuel
16:06 dlkepikov joined #fuel
16:09 rmoe joined #fuel
16:37 teran joined #fuel
16:45 artem_panchenko joined #fuel
16:54 pal_bth joined #fuel
17:08 teran joined #fuel
17:09 teran_ joined #fuel
17:49 emagana joined #fuel
17:53 angdraug joined #fuel
17:53 teran joined #fuel
17:54 emagana joined #fuel
17:56 xarses joined #fuel
18:00 jpf joined #fuel
18:16 xarses joined #fuel
18:24 kupo24z xarses: looks like it still crashed keystone even after those changes :(
18:24 kupo24z eventually it got so bad that it totally locked up and killed all queued VMs
18:24 kupo24z 'too many open files' was the last error i saw
18:24 jpf hey, quick question, are there any cronjobs set up by Fuel 5.1 by default? Specifically at 4am? Would be on a compute or controller node potentially.
18:26 kupo24z I can check my deployment jpf
18:27 kupo24z i just have '30 7    * * *', '* * * * *', '0 * * * *'
18:28 jpf looks the same for me, thanks for checking
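For reference, cron fields read minute, hour, day-of-month, month, day-of-week, so the three entries pasted above decode to 07:30 daily, every minute, and hourly on the hour; none fires at 4am. A sweep like this (assuming root on the node) would catch a 4am job hiding in a per-user crontab or /etc/cron.d:

```shell
# Dump every crontab source, skipping comment lines.
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null | grep -v '^#'
done
cat /etc/crontab /etc/cron.d/* 2>/dev/null | grep -v '^#'
```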
18:39 teran joined #fuel
18:45 pasquier-s joined #fuel
18:51 pasquier-s joined #fuel
18:58 kupo24z xarses: is it normal for some instances to fail to build/timeout if you hit an obscene number creating at once (50+)?
19:02 emagana joined #fuel
19:18 emagana joined #fuel
19:33 Rajbir joined #fuel
19:34 xarses kupo24z: what's the error message, sounds like you have no eligible hosts found
19:35 kupo24z Error creating the virtual interface, on the instance spawning
19:36 kupo24z I've since lowered it below 30 on each set and no errors
19:36 kupo24z not a huge deal
19:37 kupo24z For the script I'm just going to source out a separate cron running every hour that has the token and flavor values already
19:37 Rajbir Hi All,
19:38 Rajbir I'm trying to create a VM image using nova image-create instancename filename but getting the error
19:38 Rajbir ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500)
19:49 chuebner joined #fuel
19:50 chuebner Is there a way in Fuel to kill a cloud environment that is hanging while being deleted?
19:50 chuebner mine is holding 3 nodes hostage
19:51 xarses chuebner: which version of fuel?
19:51 e0ne joined #fuel
19:51 xarses Rajbir: please paste the command line you used
19:51 chuebner 5.0.1 (Mirantis Openstack Express)
19:52 chuebner the environment got stuck deploying, so I deleted it
19:53 chuebner and it is still there after 2 hours, holding on to three nodes
19:55 xarses you can try curl -X DELETE 'http://<fuel-node>:<port>/api/nodes/<node id>' and then restart the node
19:55 xarses idr if 5.0.1 has the user auth, if it does then you need a token first
19:56 chuebner ok, I'll try that
19:56 jpf_ joined #fuel
19:57 Rajbir @xarses::  it's nova image-create vmname vmname_10222014
19:58 Rajbir actually I'm trying to  create a snapshot
19:58 jpf__ joined #fuel
20:06 chuebner xarses, would this not only delete a single node?
20:06 xarses Rajbir: what storage options did you deploy with?
20:06 xarses chuebner: it will only delete a single node, correct
20:06 chuebner i need a whole hung cluster deleted
20:07 chuebner the nodes are only locked in there because the cluster is still 'up'
20:07 xarses chuebner: thats fine, you will delete the nodes individually, then you can use /api/clusters/<cluster id> to remove the cluster config
20:07 chuebner ah
20:08 xarses really, all you need is the nodes removed from the db, and cobbler and then they can be discovered again
20:08 chuebner ok
20:09 xarses the errored cluster is irrelevant after the nodes are removed.
20:09 Rajbir @xarses, nova trace showing the below errors ::
20:09 Rajbir 2014-10-21 13:03:59.422 2909 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 1613, in snapshot_volume_backed
20:09 Rajbir 2014-10-21 13:03:59.422 2909 TRACE nova.api.openstack     del bdm['volume_id']
20:09 Rajbir 2014-10-21 13:03:59.422 2909 TRACE nova.api.openstack AttributeError: __delitem__
20:09 Rajbir 2014-10-21 13:03:59.422 2909 TRACE nova.api.openstack
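The AttributeError in the trace is the classic symptom of calling del obj[key] on something that supports attribute access but not item deletion; under Python 2's old-style class lookup it surfaced as AttributeError: __delitem__ (Python 3 reports the same mistake as a TypeError). A minimal stand-in, with a hypothetical class name, reproduces the failure mode:

```python
class FakeBDM(object):
    """Hypothetical stand-in for nova's block-device-mapping object:
    it carries a volume_id attribute but is not a dict."""
    def __init__(self):
        self.volume_id = "vol-123"

bdm = FakeBDM()
try:
    del bdm["volume_id"]       # the failing operation from the trace
except TypeError as exc:       # Python 2 old-style raised AttributeError here
    error = str(exc)

# The dict-style delete only works on an actual mapping:
bdm_dict = {"volume_id": "vol-123"}
del bdm_dict["volume_id"]      # fine: dict defines __delitem__
```

This is why the upstream fix changes how the mapping is handled rather than just deleting the key, and why editing the file alone does nothing until nova-api is restarted (the old bytecode stays loaded).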
20:09 chuebner not really, i am going to have to show this environment to customers, so a stuck cluster is  gonna look bad
20:10 xarses chuebner: the delete method to the clusters url will dispose of it
20:10 xarses it doesn't spawn the erase nodes task which is what stopped responding
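Put together, the cleanup described here is a handful of DELETEs against the Fuel API; the address, node ids, cluster id, and token are placeholders (on releases with auth enabled, requests need an X-Auth-Token header):

```shell
FUEL=http://10.20.0.2:8000      # assumption: default Fuel admin address/port
TOKEN=changeme                  # only needed when auth is enabled
for id in 4 5 6; do             # ids of the held-up nodes, from `fuel node`
    curl -s -X DELETE -H "X-Auth-Token: $TOKEN" "$FUEL/api/nodes/$id"
done
curl -s -X DELETE -H "X-Auth-Token: $TOKEN" "$FUEL/api/clusters/16"
```

Afterwards the freed nodes still have to be rebooted/re-PXE'd so they re-register as discovered.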
20:11 Rajbir <@xarses> :: https://bugs.launchpad.net/nova/+bug/1166160 similar to  this one
20:13 xarses Rajbir: you are using fuel 5.x? this issue was resolved in havana and 5.x is running icehouse
20:14 Rajbir yes, it's on older version of fuel .
20:15 xarses that issue should only be present in some 3.x versions of fuel
20:15 xarses based on the tags on the bug
20:16 xarses 4.x will have havana, which includes the fix noted in that bug report
20:16 Rajbir not sure really, but getting the same error in nova/api.log
20:16 Rajbir Currently I'm logged in to  controller node.
20:17 xarses you can check if the patch is applied by opening /usr/lib/python2.6/site-packages/nova/compute/api.py
20:17 chuebner does fuel have a way to list node IDs?
20:17 xarses and going to line 1613
20:17 xarses chuebner: fuel node
20:18 xarses the id is the number after node- in the name 'node-10' the id is 10
20:18 Rajbir <@xarses> : checking.
20:18 chuebner permission denied
20:18 xarses Rajbir: you should see https://review.openstack.org/#/c/30110/1/nova/compute/api.py
20:18 chuebner this is in MOX so I am not root
20:18 emagana joined #fuel
20:18 xarses chuebner: cp $(which fuel) . ; chmod +x fuel
20:19 xarses chuebner: file a bug for it, they don't listen to me about it
20:20 chuebner ya, will do
20:25 Rajbir tried to  change the line from   del bdm['volume_id'] to  bdm['volume_id'] = None but getting the same error
20:25 Rajbir File "/usr/lib/python2.6/site-packages/nova/compute/api.py", line 1613, in snapshot_volume_backed
20:25 Rajbir 2014-10-21 13:22:55.210 2909 TRACE nova.api.openstack     bdm['volume_id'] = None
20:25 Rajbir 2014-10-21 13:22:55.210 2909 TRACE nova.api.openstack AttributeError: __delitem__
20:25 chuebner what's the default port for fuel?
20:25 xarses Rajbir: you will have to restart nova-api on all controllers
20:25 xarses chuebner: 8000
20:26 Rajbir <@xarses> :: okay will  do.
20:28 chuebner $ curl -X DELETE 'http://localhost:8000/api/clusters/16'
20:28 chuebner Environment removal already started
20:28 Rajbir <@xarses> :: that has fixed the issue with nova image-create :)
20:28 chuebner the nodes were removed fine
20:28 chuebner but the environment is still stuck
20:29 Rajbir but now getting the error ::  OverLimit: SnapshotLimitExceeded: Maximum number of snapshots allowed (10) exceeded
20:29 xarses Rajbir: i'd still like to know what version of fuel you have, and what storage options you chose
20:29 chuebner and so far the nodes have not shown up in the unallocated node list
20:29 xarses chuebner: that will not cause the node to reboot, you will still have to re-init the nodes
20:29 Rajbir as of now I don't have access to  fuel master
20:30 Rajbir <@xarses> ::  how can I find the information about storage options
20:32 boris-42 joined #fuel
20:33 Rajbir @xarses ::  openstack-cinder-2013.1.1.fuel3.0-mira.2.noarch : maybe this should help
20:33 teran joined #fuel
20:34 teran_ joined #fuel
20:36 chuebner ok
20:36 adanin joined #fuel
20:39 teran joined #fuel
20:44 xarses Rajbir: ah, that explains it, you are on grizzly
20:45 Rajbir yes, I'm on grizzly
20:45 Rajbir @xarses :: OverLimit: SnapshotLimitExceeded: Maximum number of snapshots allowed (10) exceeded  ::  any easy fix for that ?
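The snapshot cap is a cinder quota rather than a bug; it can be raised per-tenant from the client or globally via the default in cinder.conf. The flag and option names below are the Grizzly-era ones and worth double-checking against that deployment:

```shell
# Per-tenant bump (hedged: Grizzly cinderclient flag name):
cinder quota-update --snapshots 50 <tenant-id>

# Or raise the global default and restart cinder-api/cinder-scheduler:
#   /etc/cinder/cinder.conf
#   [DEFAULT]
#   quota_snapshots=50
```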
20:55 jpf joined #fuel
20:56 teran joined #fuel
20:56 HeOS joined #fuel
20:57 jpf joined #fuel
21:01 jpf joined #fuel
21:07 Rajbir @xarses  ::  can I  try https://bugs.launchpad.net/cinder/+bug/1157506 ?
21:08 emagana joined #fuel
21:08 xarses should be possible, you would want do do it as a patch file though
21:09 xarses https://review.openstack.org/gitweb?p=openstack/cinder.git;a=patch;h=5ffed5d2f71215de963c5a54e4f8a0ba48a05803
21:10 Rajbir <@xarses> :: I can try that but I'm not sure if it will affect anything
21:10 Rajbir and do I need to  restart cinder service as well ?
21:11 xarses cinder api, scheduler, and probably volume too
21:11 mattgriffin joined #fuel
21:11 Rajbir Okay.
21:14 Rajbir do I need to  run on controller git fetch https://review.openstack.org/openstack/cinder refs/changes/20/25220/1 && git checkout FETCH_HEAD ?
21:15 xarses Rajbir: you can wget that raw patch above and use patch to apply it to /usr/local/python2.6/site-packages/cinder
21:16 emagana joined #fuel
21:16 Rajbir <@xarses> :: do I need to run the above command to apply that patch ?
21:16 xarses you want to apply the patch more than checkout the branch
21:17 Rajbir ?
21:17 Rajbir git fetch https://review.openstack.org/openstack/cinder refs/changes/20/25220/1 && git format-patch -1 --stdout FETCH_HEAD
21:17 Rajbir I guess above is the correct command
21:17 xarses the git fetch command will check out that branch of code that was proposed in the change
21:18 xarses you can use the url i provided to wget just the patch output
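The wget-plus-patch route looks roughly like this; the URL is the raw patch pasted above, and -p1 assumes the usual a/cinder/... path prefix inside a git-format patch, applied from the installed site-packages tree:

```shell
wget -O cinder-fix.patch \
  'https://review.openstack.org/gitweb?p=openstack/cinder.git;a=patch;h=5ffed5d2f71215de963c5a54e4f8a0ba48a05803'
# Dry-run first, then apply for real against the installed tree.
patch -p1 -d /usr/lib/python2.6/site-packages --dry-run < cinder-fix.patch
patch -p1 -d /usr/lib/python2.6/site-packages < cinder-fix.patch
# Restart cinder-api, cinder-scheduler, and probably cinder-volume after.
```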
21:26 emagana joined #fuel
21:34 emagana joined #fuel
21:37 Rajbir <@xarses> ::  looked at the files and they already seem patched
21:37 Rajbir Is there anything else
21:37 Rajbir I  can check  ?
21:45 chuebner joined #fuel
22:20 emagana joined #fuel
22:59 emagana joined #fuel
