
IRC log for #fuel, 2014-11-25


All times shown according to UTC.

Time Nick Message
00:35 Kupo24z joined #fuel
00:36 Rajbir joined #fuel
00:46 emagana joined #fuel
00:48 emagana_ joined #fuel
01:17 rmoe joined #fuel
01:18 mattgriffin joined #fuel
01:27 emagana joined #fuel
01:47 emagana joined #fuel
01:52 xarses joined #fuel
02:09 teran joined #fuel
02:23 emagana joined #fuel
03:13 coryc joined #fuel
03:39 xarses joined #fuel
03:41 teran joined #fuel
03:50 mattgriffin joined #fuel
04:05 coryc joined #fuel
04:08 xenolog joined #fuel
04:10 akislitsky_ joined #fuel
04:22 ArminderS joined #fuel
04:26 ArminderS- joined #fuel
04:27 ArminderS joined #fuel
04:31 ArminderS- joined #fuel
04:32 ArminderS joined #fuel
04:33 Rajbir joined #fuel
04:46 mattgriffin joined #fuel
05:11 bdudko joined #fuel
05:11 teran joined #fuel
05:12 apalkina_ joined #fuel
05:22 ArminderS joined #fuel
05:24 mattgriffin joined #fuel
05:26 ArminderS joined #fuel
05:27 ArminderS joined #fuel
05:33 ArminderS joined #fuel
05:38 ArminderS- joined #fuel
06:05 emagana joined #fuel
06:28 monester_laptop joined #fuel
06:31 emagana joined #fuel
06:42 Longgeek joined #fuel
06:42 Longgeek joined #fuel
07:00 baboune evg: Thx
07:00 strictlyb joined #fuel
07:00 teran joined #fuel
07:01 baboune kaliya: the correct command might have been as per https://bugs.launchpad.net/fuel/+bug/1383741, nailgun dockerctl shell postgres su - postgres -c "psql keystone -c 'reindex DATABASE keystone;'"
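For anyone hitting the same keystone bug, a minimal sketch of that reindex run step by step (same container and database names as in the command above):

    # on the Fuel master, open a shell inside the postgres container
    dockerctl shell postgres
    # inside the container, reindex the keystone database as the postgres user
    su - postgres -c "psql keystone -c 'REINDEX DATABASE keystone;'"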
07:01 Longgeek joined #fuel
07:08 dklepikov joined #fuel
07:11 Rajbir Hi All,
07:12 Rajbir I'm trying to create a snapshot of an instance using nova image-create and it gets created, but its size shows zero after completion
07:13 Rajbir I've tried looking at the logs but I'm not getting any clue as to what the issue might be
07:13 Rajbir the environment I'm running is Grizzly
07:14 Longgeek joined #fuel
07:22 dklepikov Hello, Rajbir
07:23 dklepikov What do you see running "nova image-list"
07:23 adanin joined #fuel
07:25 Rajbir dklepikov : a moment
07:27 Rajbir nova image-list does show the created image
07:28 Rajbir dklepikov : also, there is no such file as /var/log/glance-all.log in /var/log
07:29 Rajbir all I have is api.log and registry.log in the /var/log/glance folder
07:29 dklepikov you can see the created image details with the command "nova image-show IMAGE_ID"
07:30 Rajbir Yeah, but let me just create a new one
07:31 dklepikov The OpenStack snapshot mechanism allows you to create new images from running instances. This is very convenient for upgrading base images or for taking a published image and customizing it for local use. To snapshot a running instance to an image using the CLI, do this: $ nova image-create <instance name or uuid> <name of new image>
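A minimal sketch of that workflow end to end, with placeholder names for the instance and snapshot:

    # snapshot an instance and wait for the upload to finish
    nova image-create --poll <instance-name-or-uuid> my_snapshot
    # verify the result and its reported size
    nova image-show my_snapshot
    glance image-list | grep my_snapshot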
07:31 Rajbir yep, this is exactly what I'm doing.
07:31 Rajbir and nova image-show does show the created image
07:32 Rajbir OS-EXT-IMG-SIZE:size          | 0
07:32 Rajbir one of the output from nova image-show
07:37 teran joined #fuel
07:39 dklepikov what do you see in "nova image-create"
07:39 dklepikov sorry "glance image-list"
07:40 e0ne joined #fuel
07:40 Rajbir dklepikov : nova image-create  7fdf6ed0-dcc3-49d8-92c4-1604600609e7 "napshot_1" --poll
07:40 Rajbir Instance snapshotting... 100% complete
07:40 Rajbir Finished
07:41 dklepikov is it a big one?
07:42 Rajbir I'm getting nothing for that image in glance image-list
07:42 Rajbir I mean the size column is empty for that image
07:44 dklepikov Does it look like yours? https://bugs.launchpad.net/horizon/+bug/1374931
07:45 Rajbir not exactly
07:47 jkirnosova_ joined #fuel
07:50 akurenyshev left #fuel
07:50 dklepikov https://bugs.launchpad.net/nova/+bug/1381598/
07:51 Rajbir let me check
07:53 Rajbir dklepikov: Actually the main problem is that I'm not able to download the image created via nova image-create, as its size is zero
07:53 Rajbir and no image is being created in /var/lib/glance-images, hence there is nothing available to download.
07:56 Rajbir the environment I'm running on is Grizzly.
07:56 akurenyshev joined #fuel
07:57 e0ne joined #fuel
08:03 e0ne joined #fuel
08:09 stamak joined #fuel
08:11 Rajbir dklepikov: anything you can suggest on this will be very much appreciated
08:12 e0ne joined #fuel
08:15 hyperbaba joined #fuel
08:17 dklepikov nova image-show napshot_1
08:17 dklepikov glance image-list
08:17 dklepikov glance image-list | grep napshot_1
08:18 dklepikov do you have enough disk space for image?
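A quick way to answer that on the controller, assuming the default Glance file backend under /var/lib/glance (the path differs with a Ceph backend):

    # free space on the partition holding the Glance file store
    df -h /var/lib/glance
    # total size of images already stored there
    du -sh /var/lib/glance/images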
08:19 azemlyanov joined #fuel
08:25 Rajbir yep
08:25 Rajbir On controller node , I've 1.5T of free space available.
08:26 Rajbir [root@fuel-controller-01 ~]# glance image-list | grep snapshot_1
08:26 Rajbir | 678e8aba-64cf-4b9a-ac38-5cef6c49c006 | snapshot_1     | raw         | bare             |             | active |
08:28 dklepikov Was 7fdf6ed0-dcc3-49d8-92c4-1604600609e7 running when you made the snapshot?
08:28 Rajbir Nope, I have shutdown the instance and then created the image
08:29 Rajbir but it doesn't work when  the instance is running anyway
08:31 dklepikov I see the same bug was here https://bugs.launchpad.net/nova/+bug/1361487
08:31 dklepikov Try to run the instance and then run image-create
08:32 Rajbir do you mean both at the same time
08:33 Rajbir well, I have already tried creating an image while the instance was running and got the same problem though.
08:33 kaliya baboune: yes, we're packing everything in a comprehensive document
08:33 dklepikov your source instance should be running during "image-create"
08:34 Rajbir okay.
08:34 dklepikov start it and try to create image from it
08:35 Rajbir dklepikov : I'm leaving for the day now, will get in touch  with you once I'm back :)
08:35 Rajbir thanks for looking into this for me, much appreciated.
08:38 dklepikov thanks, let us know the result
08:51 HeOS joined #fuel
08:58 dancn joined #fuel
09:25 e0ne joined #fuel
09:36 Rajbir joined #fuel
10:12 alexbh joined #fuel
10:15 bhi joined #fuel
10:17 adanin joined #fuel
10:23 dancn joined #fuel
10:43 azemlyanov joined #fuel
10:49 monester_laptop joined #fuel
10:54 vtzan joined #fuel
11:02 ignatenko joined #fuel
11:27 stamak joined #fuel
11:41 baboune I am getting this warning: "[425] Run command: 'ntpdate -u $(egrep '^server' /etc/ntp.conf | sed '/^#/d' | awk '{print $2}')' in nodes: ["60", "61", "62"] fail. Check debug output for more information. You can try to fix it problem manually." This is in the astute logs during deployment.  How is that possible as the "computes" should resolve to the fuel master as ntp server?
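The same check can be reproduced by hand on one of the listed nodes, using the pipeline quoted in the warning:

    # print the NTP server(s) configured on the node
    egrep '^server' /etc/ntp.conf | sed '/^#/d' | awk '{print $2}'
    # then attempt the same one-shot sync that astute runs
    ntpdate -u $(egrep '^server' /etc/ntp.conf | sed '/^#/d' | awk '{print $2}')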
11:56 baboune And an error: "ceph-deploy --overwrite-conf config pull node-60 returned 1 instead of one of [0]" : (/Stage[main]/Ceph::Conf/Exec[ceph-deploy config pull]/returns) change from notrun to 0 failed: ceph-deploy --overwrite-conf config pull node-60 returned 1 instead of one of [0] -> deployment failed.  It is centos, multinode, neutron, ceph for cinder, glance default, 5.1.1. Any ideas?
12:05 teran joined #fuel
12:06 teran_ joined #fuel
12:08 ArminderS joined #fuel
12:10 monester_laptop joined #fuel
12:16 akurenyshev left #fuel
12:29 ArminderS joined #fuel
12:30 ArminderS joined #fuel
12:41 corepb joined #fuel
12:42 coryc joined #fuel
12:44 Longgeek_ joined #fuel
12:47 corepb_ joined #fuel
12:47 corepb_ joined #fuel
12:55 bpiotrowski joined #fuel
12:58 teran joined #fuel
12:58 teran_ joined #fuel
13:01 bpiotrowski joined #fuel
13:09 baboune the environment deployment fails only on one node with the "ceph deploy error".  It worked on the other node.  How can I force puppet to run again on the failing node?
13:23 Longgeek joined #fuel
13:25 kaliya baboune: you can remove and re-add the node, what do you think evg?
13:31 omolchanov joined #fuel
13:52 evg baboune: kaliya ..or you can run puppet-pull (if I remember right) on the node
13:57 evg this script will rsync manifests and run them
14:17 kaliya evg: is this documented?
14:19 jkirnosova__ joined #fuel
14:24 evg kaliya: in dev docs
14:38 baboune evg: Will that coordinate with Astute to complete the deployment? Right now the status is failed.
14:40 akupko joined #fuel
14:41 baboune evg: puppet-pull on guilty host triggers this failure: "Error: /Stage[openstack-firewall]/Firewall::Linux/Package[iptables]/ensure: change from 1.4.7-11.mira2 to 1.4.7-11.el6 failed: Could not update: Failed to update to version 1.4.7-11.el6, got version 1.4.7-11.mira2 instead"
14:41 baboune any ideas?
14:46 neophy joined #fuel
14:50 coryc joined #fuel
14:50 sovsianikov joined #fuel
14:51 mattgriffin joined #fuel
14:57 evg baboune: it will just rerun puppet
15:00 jobewan joined #fuel
15:00 evg baboune: what if you redeploy it? fuel --node-id xxx -deploy
15:02 baboune evg: tried it, it fails.. the problem is this https://bugs.launchpad.net/fuel/+bug/1333814
15:02 baboune exactly same behaviour
15:03 baboune but that was abandoned
15:05 baboune should this be part of 5.1 already? it says so in the description and the fix was committed, but my pp file content is different
15:08 evg baboune: it should be as I see
15:08 strictlyb joined #fuel
15:11 evg baboune: it was abandoned for 5.0, not 5.1
15:12 baboune evg: ok but somehow my 5.1 fuel does not have the changes in that patch
15:15 blahRus joined #fuel
15:18 emagana joined #fuel
15:29 baboune I don't get it, what is "node-60" in the error: "(/Stage[main]/Ceph::Conf/Exec[ceph-deploy config pull]/returns) change from notrun to 0 failed: ceph-deploy --overwrite-conf config pull node-60 returned 1 instead of one of [0]" Because this error is triggered on node-62
15:29 baboune ?
15:37 angdraug joined #fuel
15:46 evg baboune: it's described in the bug's discussion. Is node-60 online?
15:47 baboune yes, it is the controller... but I don't understand the discussion then
15:48 baboune evg: and the two patches don't match the reviewed code changes in conf.pp
15:48 neosix joined #fuel
15:49 baboune evg: so what is the proper fix?
15:50 evg baboune: as I understand your case is not the same case as in the bug
15:50 evg baboune: can you try running ceph-deploy ... by hand on the node
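A sketch of running the failing step manually on the problem node (node names taken from the error above; the working directory is an assumption):

    # on node-62, from the directory holding ceph.conf
    cd /etc/ceph
    ceph-deploy --overwrite-conf config pull node-60
    echo $?   # a non-zero exit code reproduces the deployment failure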
15:52 baboune evg: I will try but right now I cleaned up the env... creating a new one
15:57 fandi joined #fuel
16:00 emagana joined #fuel
16:04 emagana_ joined #fuel
16:05 fandi hi
16:05 fandi we have problem when try to delete volume and try to console
16:06 fandi case like this : https://ask.openstack.org/en/question/8360/unable-to-delete-volumes/
16:15 kaliya fandi: you have no relevant logs? Mute?
16:16 fandi kaliya, I'm still generating the logs from Fuel. Do you have a better idea which log to check? Thanks
16:17 kaliya you have to check /var/log/cinder-all on the controller/s
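Once the logs are there, a simple way to narrow them down, assuming the ID of the stuck volume is known (file name as aggregated by Fuel):

    # search the aggregated cinder log on the controller for the stuck volume
    grep <volume-id> /var/log/cinder-all.log | grep -iE 'error|traceback'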
16:19 fandi this is state volume stuck http://paste.openstack.org/show/jqUlsOwNKshkmBXzb7lI/
16:20 fandi kaliya, ok let me check it :) thanks
16:20 kaliya fandi: ceph?
16:21 fandi yups we are using ceph
16:22 kaliya ceph -s is fine?
16:22 fandi kaliya, I only see http://paste.openstack.org/show/138244/
16:22 kaliya fandi: you have to upgrade oslo-messaging
16:23 kaliya 5.1 right? ubuntu or centos?
16:23 fandi kaliya, 5.1 ubuntu
16:24 fandi this is for ceph http://paste.openstack.org/show/138245/
16:24 kaliya Download and install all python-oslo* packages from http://fuel-repository.mirantis.com/fwm/5.1.1/ubuntu/pool/main/ on all nodes and either restart nodes or all openstack services on them.
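A hedged sketch of that fix on one Ubuntu node (the exact .deb file names must be taken from the repository index; the service name is illustrative):

    # after downloading the python-oslo* packages from the repository above
    dpkg -i python-oslo*.deb
    # restart the OpenStack services that use oslo.messaging, e.g. on a compute node
    restart nova-compute
    # ...or simply reboot the node, as suggested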
16:31 emagana joined #fuel
16:33 emagana_ joined #fuel
16:35 neosix hi all, I've got a kernel panic at bootstrap of a fuel-slave. version 6 on virtualbox. can you help me? anyone has the same issue?
17:18 xarses joined #fuel
17:19 izinovik joined #fuel
17:33 evg neosix: hi, never seen this (with fuel bootstrap). What sort of k.panic? It shows kernel's registers?
17:40 e0ne joined #fuel
17:42 neosix hi evg, when booting the slave for bootstrap: RAMDISK: gzip image found at block 0, RAMDISK: incomplete write (22878 != 32768) write error. VFS: Cannot open root device "(null)" or unknown-block(2,0): error -6. after some other information there is a call trace
17:43 neosix dump_stack+0x19/0x21
17:43 neosix panic+0xc4/0x1e4
17:43 neosix ? printk+0x4d/0x4f
17:44 neosix mount_block_root+0x212/0x2c7
17:44 neosix mount_root+0x56/0x5a
17:44 neosix prepare_namespace+0x170/0x1a9
17:45 evg neosix: how much memory do you allocate to VMs?
17:45 neosix kernel_init_freeable+0x265/0x279
17:45 neosix ? rest_init+0x80/0x80
17:45 neosix kernel_init+0xe/0xf0
17:46 neosix ret_from_fork+0x7c/0xb0
17:46 rmoe joined #fuel
17:46 neosix ?rest_init+0x80/0x80
17:47 neosix after installation, 768 for master and 1024 MB for slaves
17:48 neosix on a hint from support I tried to create a slave manually, network boot, sata disk, but same error
17:51 neosix evg: just now, support wrote me that the virtualbox scripts have several not yet fixed bugs when running on windows.
17:51 neosix i'll try to install onto Ubuntu...
17:51 evg neosix: have you enabled hw virtualisation in VB?
17:51 neosix yes
17:52 evg neosix: aaa, windows.....
17:52 neosix yep!! job host........
17:52 evg neosix: the support of vb?
17:53 neosix no, mirantis
17:56 evg neosix: could you try with 2G memory (i'm not sure this is the case, of-course)
17:57 evg neosix: and have you tried previous versions? 5.1? Did it work for you?
17:58 evg neosix: version of VB?
18:00 neosix i didn't try 5.1 'cause i need sahara
18:01 neosix vb 4.3.r20
18:03 evg neosix: could you just try 2G?
18:04 fandi joined #fuel
18:06 e0ne joined #fuel
18:08 neosix i try now, same thing
18:09 neosix thank you evg. i'll try on ubuntu tomorrow. bye
18:14 evg neosix: good luck with ubuntu. bye
18:15 emagana joined #fuel
18:16 e0ne joined #fuel
18:24 emagana joined #fuel
18:31 emagana joined #fuel
18:32 monester_laptop joined #fuel
18:42 jkirnosova__ joined #fuel
18:48 emagana joined #fuel
18:51 xarses joined #fuel
19:03 teran joined #fuel
19:04 teran joined #fuel
19:07 xarses joined #fuel
19:21 emagana joined #fuel
19:29 ignatenko joined #fuel
19:29 agordeev joined #fuel
19:53 dhblaz joined #fuel
19:54 dhblaz I’m having trouble with rabbitmq on an HA 4.0 cluster. Restarting one controller node’s rabbitmq process hangs here:
19:54 dhblaz starting adding mirrors to queues                                     ...
19:54 dhblaz Any suggestions?
19:55 dhblaz Horizon also can’t control the compute nodes (pause, reboot etc don’t work)
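A few standard checks for this situation (a sketch; resource and queue names in a Fuel 4.0 HA cluster may differ):

    # cluster membership as RabbitMQ sees it
    rabbitmqctl cluster_status
    # mirrored-queue state; queues without synchronised slaves point at the hang
    rabbitmqctl list_queues name slave_pids synchronised_slave_pids
    # pacemaker's view of the controllers
    crm status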
20:01 HeOS joined #fuel
20:02 xarses joined #fuel
20:20 emagana joined #fuel
20:21 emagana joined #fuel
20:45 e0ne joined #fuel
21:15 dhblaz Hmmm, my p_neutron-l3-agent service in crm now thinks it is a node
21:31 emagana joined #fuel
21:31 monester_laptop joined #fuel
21:32 emagana joined #fuel
21:55 emagana joined #fuel
21:57 emagana_ joined #fuel
22:09 adanin joined #fuel
22:17 angdraug joined #fuel
22:20 emagana joined #fuel
22:22 emagana joined #fuel
22:34 Rajbir joined #fuel
22:42 e0ne joined #fuel
23:01 adanin joined #fuel
23:01 teran joined #fuel
23:04 emagana joined #fuel
23:27 teran_ joined #fuel
