
IRC log for #fuel, 2014-06-12


All times shown according to UTC.

Time Nick Message
00:30 Arminder- joined #fuel
00:36 LesterPR joined #fuel
00:40 seanfuel joined #fuel
00:40 seanfuel Hello everyone
00:40 seanfuel Has anyone tried using the singlenode deployment option in Fuel?
00:41 seanfuel it may not actually be accessible but it is present in puppet
01:00 xarses joined #fuel
01:32 seanfuel joined #fuel
01:33 seanfuel /join #fuel-dev
01:33 seanfuel err oops
03:38 dhblaz joined #fuel
05:26 jobewan joined #fuel
06:10 al_ex joined #fuel
06:31 odyssey4me joined #fuel
07:24 e0ne joined #fuel
07:45 e0ne joined #fuel
08:24 lromagnoli joined #fuel
08:46 lromagnoli joined #fuel
08:53 e0ne joined #fuel
09:23 artem_panchenko joined #fuel
09:44 e0ne_ joined #fuel
10:00 lromagno_ joined #fuel
10:00 Pookz joined #fuel
10:02 lromagn__ joined #fuel
10:04 lromagn__ just deployed a new fuel server and I want to activate support.. when I press "register product" on the fuel dashboard I go to the mirantis website, enter my username and password, and grab some coffee... it freezes on "checking credentials"
10:05 lromagn__ I tried the same username and password going directly to the software site and I can register correctly
10:14 agordeev joined #fuel
10:17 saibarspeis joined #fuel
10:37 e0ne joined #fuel
11:15 e0ne joined #fuel
11:17 e0ne joined #fuel
11:45 e0ne joined #fuel
12:24 al_ex2 joined #fuel
12:24 e0ne_ joined #fuel
12:33 al_ex joined #fuel
12:36 al_ex joined #fuel
12:38 al_ex2 joined #fuel
12:40 al_ex3 joined #fuel
13:01 agordeev joined #fuel
13:18 lromagn__ in my lab, glance with ceph:
13:18 lromagn__ qemu-img convert -f qcow2 -O raw cirros-0.3.2-x86_64-disk.img  cirros-0.3.2-x86_64-disk.raw
13:18 lromagn__ glance image-create --name="cirros_raw" --disk-format=raw --container-format=bare --is-public=true < cirros-0.3.2-x86_64-disk.raw
13:18 lromagn__ Error communicating with http://172.16.0.2:9292 [Errno 32] Broken pipe
13:19 lromagn__ <150>Jun 12 13:17:07 node-4 glance-glance.registry.api.v1.images INFO: Updating metadata for image 670442a6-bba5-4451-9218-b721ef955abc
13:19 lromagn__ <150>Jun 12 13:17:07 node-4 glance-glance.wsgi.server INFO: 192.168.0.2 - - [12/Jun/2014 13:17:07] "PUT /images/670442a6-bba5-4451-9218-b721ef955abc HTTP/1.1" 200 675 0.162620
13:28 obcecado joined #fuel
15:11 rmoe joined #fuel
15:17 rmoe joined #fuel
15:25 blahRus joined #fuel
15:31 jobewan joined #fuel
15:42 alex_didenko joined #fuel
15:48 albionandrew joined #fuel
15:56 bogdando joined #fuel
16:00 e0ne joined #fuel
16:01 angdraug joined #fuel
16:03 albionandrew On our 4.x deployment we have an option in storage to select the journal size for ceph. Has that option been removed in 5.0?
16:08 albionandrew ignore the above ^ I thought the journal size was set if you chose "load defaults". My mistake.
16:19 Pookz joined #fuel
16:20 Pookz left #fuel
16:25 lromagn__ maybe someone can help me
16:25 lromagn__ in my lab, glance with ceph:
16:25 lromagn__ qemu-img convert -f qcow2 -O raw cirros-0.3.2-x86_64-disk.img  cirros-0.3.2-x86_64-disk.raw
16:25 lromagn__ glance image-create --name="cirros_raw" --disk-format=raw --container-format=bare --is-public=true < cirros-0.3.2-x86_64-disk.raw
16:25 lromagn__ Error communicating with http://172.16.0.2:9292 [Errno 32] Broken pipe
16:25 lromagn__ <150>Jun 12 13:17:07 node-4 glance-glance.registry.api.v1.images INFO: Updating metadata for image 670442a6-bba5-4451-9218-b721ef955abc
16:25 lromagn__ <150>Jun 12 13:17:07 node-4 glance-glance.wsgi.server INFO: 192.168.0.2 - - [12/Jun/2014 13:17:07] "PUT /images/670442a6-bba5-4451-9218-b721ef955abc HTTP/1.1" 200 675 0.162620
16:26 lromagn__ the last two are from glance-all.log
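A broken pipe from the glance endpoint mid-upload usually means the server side dropped the connection, so the useful traces live in the glance-api logs and the ceph backend rather than on the client. A minimal triage sketch, assuming the usual glance log path and the default 'images' RBD pool (both assumptions):

    # is the API endpoint answering at all?
    curl -s -o /dev/null -w "%{http_code}\n" http://172.16.0.2:9292/versions

    # server-side tracebacks around the failure time (log path is an assumption)
    grep -i -A 10 traceback /var/log/glance/api.log | tail -n 40

    # with a ceph backend, confirm the cluster and the glance pool are healthy
    ceph health
    rbd -p images ls    # 'images' is the default pool name, an assumption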
16:37 albionandrew_ joined #fuel
16:54 albionandrew xarses: MiroslavAnashkin Does the Base system have to have 48128 MB? Can we change that? Or is that the required space?
17:00 albionandrew I'm talking about the ceph nodes here ^
17:17 odyssey4me odd - I thought there was already a bug logged for the vnc issue (only one in three vnc sessions work in a 3-node ha cluster)
17:18 mihgen odyssey4me: I've heard about it from angdraug or xarses ..
17:19 xarses odyssey4me: there should be
17:19 odyssey4me aha: https://bugs.launchpad.net/fuel/+bug/1323705
17:32 odyssey4me interesting, even adding memcached_servers into nova.conf does nothing to help
17:35 angdraug odyssey4me: rmoe mentioned he knows how to fix this one, he's going to be back online in ~30 min
17:36 angdraug has to do with memcached not being clustered properly for nova data
17:37 mihgen angdraug: oh, couldn't it be the same issue as with keystone?
17:37 mihgen which I believe I've shown you? with parsing the comma-separated list of memcached servers?
17:40 odyssey4me so one thing I noticed is that memcached_servers is not set in nova.conf at all
17:40 odyssey4me I just tried setting memcached_servers=<controller ip 1>,<controller ip 2>,<controller ip 3> in nova.conf - still doesn't work
17:41 mihgen albionandrew: are you about disk size?
17:41 odyssey4me I also tried setting memcached_servers=<controller ip 1>:11211,<controller ip 2>:11211,<controller ip 3>:11211 - also no go
17:41 odyssey4me shutting down 2 out of 3 of the nova-consoleauth services gets it working every time
17:42 angdraug mihgen: refresh my memory please, where did you see this?
17:42 albionandrew mihgen: we had the same issue in 4.x. I forgot we created a patch. I'm doing a new one for this.
17:45 albionandrew mihgen: I have applied a patch to a docker container… docker is new to me; do I need to do anything to get it to work? Do I restart the container…?
17:47 mihgen albionandrew: I'm not an expert in docker either… http://docs.mirantis.com/openstack/fuel/fuel-5.0/operations.html?highlight=docker#docker-containers-and-dockerctl
17:47 mihgen it may help
17:47 albionandrew mihgen: I’ll just restart the box. Thanks
17:47 mihgen also if you ssh into the container and fix stuff, you don't need to reload anything for sure
17:47 albionandrew mihgen: I’ll do some docker reading this weekend
17:47 mihgen well, you don't need to reload lxc / docker at least :)
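In practice the pattern from those docs looks roughly like the sketch below; the container name is a placeholder, and the restart is only needed if the patched service does not re-read its files on its own:

    dockerctl shell cobbler      # open a shell inside the container
    # ...apply the patch in place, then leave the container...
    exit
    # container names follow the fuel-core-5.0-<name> pattern shown by 'docker ps'
    docker restart fuel-core-5.0-cobbler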
17:57 LesterPR joined #fuel
18:00 LesterPR hi everyone
18:00 albionandrew mihgen: Should the controllers have an option for image storage? On our 4.x cluster I have image storage on the controllers but I don't seem to have that option here. Is there a checkbox I've missed?
18:00 LesterPR is this fuel 5.0 stable or beta?
18:02 LesterPR looks like this IRC channel is no longer updated
18:02 LesterPR lol
18:02 LesterPR the topic says fuel but this name is no longer used? :/
18:03 LesterPR WTF ?
18:03 neophy joined #fuel
18:04 LesterPR ?????
18:04 angdraug LesterPR: this name is very much still used
18:04 xarses joined #fuel
18:04 Topic for #fuel is now Fuel 5.0 for Openstack: http://fuel.mirantis.com/ | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
18:05 odyssey4me rmoe around yet?
18:05 angdraug LesterPR: Fuel is the open-source deployment tool that is used in Mirantis OpenStack distribution
18:05 angdraug this channel is about Fuel
18:05 albionandrew xarses: mihgen I don't see an option to set the image storage size; has that gone in 5.0?
18:06 albionandrew Used to have the option on the controller but I don't seem to be able to find it anywhere now.
18:06 albionandrew is that by design?
18:06 xarses if you are using ceph, yes
18:07 albionandrew xarses: Great Thanks.
18:08 albionandrew xarses: just in case this goes pear-shaped, is there a way to take the settings as they are and create a new deployment? I'm about to deploy my changes but would like, if possible, to be able to just make a small change and go again.
18:08 neophy I have installed Fuel 5.0 and I can access the Fuel 5.0 UI from my network. When I PXE boot my node it gets an IP address from the Fuel Master, but there is no unallocated node count in fuel.
18:09 albionandrew neophy: I think I had that yesterday. It's in the irc logs.
18:09 albionandrew 2 secs I’ll dig it out for you
18:09 xarses albionandrew: if you have problems with the cluster settings / deployment there are stop deployment and reset cluster buttons on the last page (where you can delete the cluster)
18:10 albionandrew neophy: http://irclog.perlgeek.de/fuel/2014-06-11 19:06
18:10 albionandrew xarses: thanks
18:10 Kupo24z1 is neutron/GRE compatible with only 1 NIC?
18:10 LesterPR angdraug, but where can I find fuel 5.0?
18:11 LesterPR it looks like you're now using docker in 5.0
18:11 xarses Kupo24z1: in that you want to use multiple nics or you only have one nic?
18:12 Kupo24z1 Only have 1x 10gb per node
18:12 xarses LesterPR: we are using docker in 5.0, what are you looking for? the services are in docker containers: nailgun, postgres, cobbler, etc.
18:12 LesterPR but it takes longer to start after installation, and when I try to access the GUI at http://ipaddress:8080 I only see files there
18:12 LesterPR don't see the GUI
18:12 xarses LesterPR: the port is 8000 for fuel
18:13 LesterPR xarses, 502 Bad Gateway
18:13 LesterPR :/
18:14 xarses check if the nailgun container is running 'docker ps'
18:14 xarses you can jump into it with 'dockerctl shell nailgun'
18:15 LesterPR docker ps |grep nailgun
18:15 LesterPR 04385178a227        fuel/nailgun_5.0:latest       /bin/sh -c /usr/loca   4 hours ago         Up 4 hours          0.0.0.0:8001->8001/tcp                                                                               fuel-core-5.0-nailgun
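With the container up, a 502 from nginx points at the nailgun app inside it not answering on its port. A short check sequence, as a sketch using the container name from the 'docker ps' output above; restarting the container is what eventually cleared it later in this log:

    # tail the container's output for startup errors
    docker logs fuel-core-5.0-nailgun | tail -n 50

    # or inspect from inside, as suggested above
    dockerctl shell nailgun

    # often a restart is enough
    docker restart fuel-core-5.0-nailgun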
18:17 rmoe odyssey4me: I just tested this on a 5.0 deployment and setting memcached_servers fixes the problem
18:17 rmoe odyssey4me: I restarted nova-api and nova-consoleauth on all controllers
18:17 odyssey4me rmoe - interesting, it's not working for me
18:17 rmoe odyssey4me: then logged out and back into horizon (it didn't work before I logged out)
18:18 odyssey4me rmoe - I've tried with and without memcached_servers, and also just tried setting only one of them
18:18 odyssey4me has there perhaps been some sort of fix introduced since 5.0's release that fixes this?
18:18 angdraug LesterPR: fair point. the link in the topic should be https://wiki.openstack.org/wiki/Fuel
18:19 angdraug xarses: ^
18:19 rmoe odyssey4me: were you still logged into the same horizon session while you were testing?
18:19 odyssey4me rmoe - I'll try logout/login again, but yeah... that didn't help earlier
18:20 odyssey4me rmoe - nope, still not working
18:21 odyssey4me I can actually see it fail in the debug logs
18:21 e0ne joined #fuel
18:21 rmoe what are you seeing fail?
18:21 odyssey4me nova-nova.consoleauth.manager AUDIT: Checking Token: 4c7d885b-ed2d-4224-aeb3-9b32b2ad09f7, False
18:22 odyssey4me only one controller has: nova-nova.consoleauth.manager AUDIT: Checking Token: 4c7d885b-ed2d-4224-aeb3-9b32b2ad09f7, True
18:22 odyssey4me it seems to always be the controller who issued the token
18:22 rmoe that's definitely the symptoms we see when memcached_servers is unset
18:23 odyssey4me ah, maybe this is it - my memcached_servers is set at the end of nova.conf, so it's under the [DATABASE] heading
18:23 odyssey4me let me move it
18:23 rmoe I was just about to ask you that
18:23 rmoe it should be in [DEFAULT]
18:25 LesterPR angdraug, why I get 502 Bad Gateway
18:25 LesterPR nginx/1.0.15 when I try to access Fuel GUI in 5.0
18:26 odyssey4me rmoe - yup, that was it... great
18:26 odyssey4me now how do we patch this fix in?
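For reference, the placement that worked: memcached_servers must sit under [DEFAULT] in /etc/nova/nova.conf on each controller, not under a later section such as [DATABASE]. A sketch with placeholder addresses; after editing, restart nova-api and nova-consoleauth as rmoe described:

    # /etc/nova/nova.conf on each controller (addresses are placeholders)
    [DEFAULT]
    ...
    memcached_servers = 192.168.0.3:11211,192.168.0.4:11211,192.168.0.5:11211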
18:28 xarses angdraug: where?
18:30 Topic for #fuel is now Fuel 5.0 for Openstack: https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
18:34 Kupo24z1 xarses: on page 27 of the reference architecture it has a 2 NIC deployment for Neutron GRE Segmentation Planning, is it possible to do it with 1 NIC?
18:37 xarses Kupo24z1: if you can set a vlan tag for each of the public, mgmt, storage interfaces, then yes
18:38 xarses Kupo24z1: you will need to set the 'default' or 'native' vlan for the switch port to the network for the fuel-admin (PXE) network
18:38 xarses then the tenant instances will be able to use gre
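As an illustration of that switch-side setup (Cisco IOS syntax purely as an example; the interface name and VLAN IDs are placeholders): the node's single port is trunked, fuel-admin/PXE rides untagged as the native VLAN, and public, management, and storage arrive tagged:

    interface TenGigabitEthernet1/0/1
     switchport mode trunk
     ! fuel-admin (PXE) untagged as the native vlan
     switchport trunk native vlan 100
     ! public, mgmt and storage tagged on top (IDs are placeholders)
     switchport trunk allowed vlan 100,101,102,103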
18:39 odyssey4me rmoe - I see some logic in the metadata_api puppet manifest which excludes the memcached_servers option if quantum_netnode_on_cnt
18:40 rmoe that class doesn't get included anymore
18:40 rmoe see openstack/manifests/nova/controller.pp line 259
18:42 dhblaz joined #fuel
18:44 odyssey4me rmoe - uh, are we looking at the same thing?
18:45 rmoe since you asked, I'm guessing not :)
18:45 odyssey4me rmoe - I don't see that manifest here: https://github.com/stackforge/fuel-library/tree/stable/5.0/deployment/puppet/nova/manifests
18:45 odyssey4me and on my fuel controller :259 refers to galera_nodes
18:46 rmoe ah that metadata_api class used to get included in deployment/puppet/openstack/manifests/nova/controller.pp
18:46 rmoe there is a comment in that file that explains why it no longer does
18:46 rmoe the side effect of that is that memcached_servers never get set
18:46 odyssey4me rmoe - yeah, that's what I was starting to figure out
18:47 odyssey4me rmoe - it still gets set on the compute nodes
18:47 odyssey4me as that's in the compute class
18:49 odyssey4me found what you were talking about: https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/openstack/manifests/nova/controller.pp#L256-L265
18:51 albionandrew xarses: I see 2 /volumes/manager.py files on the master. By chance I patched the right one. Is there a way of knowing which one I should patch? I'm talking about /var/lib/docker/devicemapper/mnt 6c029853731411c29c07c7c7c…….
18:51 xarses you would need to patch whichever is in the cobbler container
18:51 albionandrew The reason I patched was to calculate the size needed for the base system differently
18:51 xarses oh, you shouldn't need to do that
18:52 xarses it's imported from yaml
18:52 albionandrew xarses we did it in 4.0 too.
18:52 albionandrew we have a sda and sdb
18:52 xarses and what do you want done?
18:52 albionandrew sdb is local and is 33GB
18:53 albionandrew sda is on a storage array and we want 205G of ceph there.
18:53 xarses are you using ubuntu?
18:53 albionandrew but fuel said we need to have a 48G base system
18:53 albionandrew ubuntu? what's that?
18:53 albionandrew Just kidding
18:54 albionandrew I'll pastebin the patch
18:54 albionandrew xarses http://pastebin.com/sd0F22wb
18:55 albionandrew anyway it seems to be working
18:56 xarses https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L503
18:57 xarses or https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L98
18:58 TVR_ so is making the dashboard https as simple as editing /etc/puppet/modules/horizon/templates/openstack-dashboard.conf.erb  ?
18:58 xarses if you modify that in the nailgun container, and then "manage.py dropdb && manage.py syncdb && manage.py loaddefault" (WARNING: THIS IS DESTRUCTIVE TO THE DB)
18:58 xarses it will take the new size
18:58 albionandrew xarses: thanks
18:59 xarses slightly less impactful to modify
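Putting that together as a sketch (the in-container location of the fixture is an assumption; the dropdb step wipes every environment nailgun knows about, exactly as the warning above says):

    dockerctl shell nailgun
    # edit the base-system volume size in the openstack.yaml fixture
    # (see the two github links above for the relevant keys)
    manage.py dropdb && manage.py syncdb && manage.py loaddefault   # DESTROYS the nailgun DB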
19:00 odyssey4me rmoe - thanks for the help, I've updated the bug report... now someone just needs to get the code in... if it's still unresolved after my leave next week I'll learn how to do the modification and submit the code. I'm still in the very early stages of learning puppet and Fuel as a whole.
19:05 alex_didenko joined #fuel
19:14 e0ne joined #fuel
19:48 e0ne joined #fuel
19:51 LesterPR joined #fuel
19:53 neophy I am configuring my public ip in the networking tab in the fuel 5.0 UI. After changing the Public IP I can't save the settings because the "save" button is not enabled. My browser is Chrome
19:57 Arminder joined #fuel
20:00 e0ne joined #fuel
20:10 LesterPR fixed the issue by restarting the fuel container ;)
20:10 LesterPR docker restart nailgun
20:17 xarses joined #fuel
20:18 boris-42 joined #fuel
20:18 bookwar joined #fuel
20:22 Kupo24z1 xarses: MiroslavAnashkin is there a bug open for VNC issues with HA environments? I have to refresh 3 times before it loads, and it only works in its own window
20:26 rmoe Kupo24z1: See here https://bugs.launchpad.net/fuel/+bug/1323705
20:26 rmoe Comment 36 has the workaround
20:26 rmoe comment 6 rather
20:26 Kupo24z1 angdraug: rmoe re: yesterday's live-migration issue. I've reformatted the fuel master and redeployed with Ubuntu/HA and it looks like the same issue. I think it may be due to my single network interface for all traffic; that's the only real difference between my env that works and this one
20:27 Kupo24z1 rmoe: ty
20:28 Kupo24z1 Normal migration works fine however
20:35 rmoe I'm not sure why having everything on one nic would matter, but I'm going to deploy that way and check it out
20:35 rmoe you're getting the same error message?
20:41 albionandrew xarses: "All the Platform services functional tests" are failing. Is this anything to worry about? Everything else has passed. I ask because I know in 4.x there were issues with tests.
20:43 Kupo24z1 rmoe: Yes '<179>Jun 12 20:42:54 node-8 nova-nova.virt.libvirt.driver ERROR: Live Migration failure: internal error Attempt to migrate guest to the same host 00000000-0000-0000-0000-00000000efbe'
20:44 Kupo24z1 Tried a different instance too, and I get the same error '<179>Jun 12 20:43:40 node-8 nova-nova.virt.libvirt.driver ERROR: Live Migration failure: internal error Attempt to migrate guest to the same host 00000000-0000-0000-0000-00000000efbe'
20:45 albionandrew xarses: I see "Is not implemented for this deployment mode"
20:47 albionandrew xarses: What does it mean?
20:48 Kupo24z1 rmoe: what services need to be restarted after applying the workaround in the VNC bug?
20:48 Kupo24z1 api, scheduler, conductor?
20:49 rmoe api and consoleauth
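A sketch of applying that across an HA cluster from the Fuel master; node names are placeholders and the service names assume Ubuntu packaging:

    # restart nova-api and nova-consoleauth on every controller
    for node in node-1 node-2 node-3; do
      ssh "$node" 'service nova-api restart && service nova-consoleauth restart'
    done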
20:49 rmoe also, I have an ubuntu env deploying right now to check the live-migration issues
20:50 Kupo24z1 cool
20:50 jobewan joined #fuel
20:50 Kupo24z1 I'm using Neutron/GRE with all ceph boxes ticked, if that makes a difference
20:57 e0ne joined #fuel
20:57 xarses joined #fuel
21:26 e0ne joined #fuel
22:59 Kupo24z1 rmoe: any luck?
22:59 rmoe it just finished deploying
22:59 rmoe should know shortly
23:10 rmoe it worked for me
23:11 rmoe have you uploaded a diagnostic snapshot somewhere? being able to go through all of the logs for the deployment and openstack might help shed some light on this
23:16 albionandrew joined #fuel
23:23 Kupo24z1 rmoe: check pm
23:23 rmoe thanks, I'll look through this and see what I can come up with
23:24 Kupo24z1 thanks
