IRC log for #fuel, 2016-07-06

All times shown according to UTC.

Time Nick Message
01:33 fatdragon joined #fuel
01:48 ilbot3 joined #fuel
01:48 Topic for #fuel is now Fuel 8.0 (Liberty) https://www.fuel-infra.org/ | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
01:55 code-R joined #fuel
02:01 elo joined #fuel
02:17 code-R_ joined #fuel
02:19 code-R__ joined #fuel
02:56 fatdragon joined #fuel
03:03 maestropandy joined #fuel
03:22 maestropandy left #fuel
03:48 code-R joined #fuel
04:01 code-R joined #fuel
04:06 gongysh joined #fuel
04:17 code-R_ joined #fuel
05:11 code-R joined #fuel
06:14 tosc_fiberdata mwhahaha, ah okay. i guess we have to stick with the TripleO installer then and try to figure out all the configurations. We'll figure that out
06:48 vkulanov joined #fuel
08:10 noshankus joined #fuel
08:52 bgaifullin joined #fuel
09:13 code-R joined #fuel
09:15 ekosareva joined #fuel
09:16 ikar joined #fuel
09:21 DavidRama joined #fuel
09:21 DavidRama left #fuel
09:39 kaliya joined #fuel
09:41 kaliya joined #fuel
10:13 Vijayendr_ joined #fuel
10:59 romcheg ikalnitsky, akasatkin: guys, could you please core review this? https://review.openstack.org/#/c/337258/
10:59 romcheg It fixes some tests that fail on the CI
11:09 ekosareva joined #fuel
11:14 code-R joined #fuel
12:02 permalac joined #fuel
12:37 kutija joined #fuel
12:49 ekosareva folks, please re-review https://review.openstack.org/#/c/282911/
13:02 nurla joined #fuel
13:11 maestropandy joined #fuel
13:14 maestropandy hi all, I am trying to deploy a Fuel environment with a normal 3-node setup (1 controller, 1 compute, 1 storage). The controller installation has been loading for more than 2 hours, while the compute and storage nodes are already installed. Is there any way to check the ongoing process on the controller node?
13:15 code-R joined #fuel
13:42 nurla joined #fuel
13:45 cartik joined #fuel
13:53 azemlyanov joined #fuel
13:59 maestropandy Here is the Fuel agent log for the controller node that has been pending for a long time: http://paste.openstack.org/show/526563/
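(Aside: on a Fuel 8.0 master a stalled deployment can usually be watched live; the commands and paths below are a sketch and may differ by release.)

    # list deployment tasks and their progress (python-fuelclient)
    fuel task
    # follow the Astute orchestrator log (path assumes the 8.0 container log layout)
    tail -f /var/log/docker-logs/astute/astute.log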
14:01 code-R joined #fuel
14:06 Sketch 13:28:27.419499 #2314] WARN -- : netio.rb:387:in `_init_line_read' PLMC7: Exiting after signal: SignalException: SIGTERM
14:07 Sketch i assume these mcollective warnings can be ignored, since i see them on every node?
14:10 Sketch though i am having installs fail
14:10 Sketch i guess my real question should be: how the heck do i debug failed installs?
14:10 Sketch there is nothing particularly useful at WARNING or higher severity in the logs for the nodes
14:13 Sketch there are a couple of tracebacks about 'Extracting of actor_id failed' on the fuel master:  AttributeError: 'NoneType' object has no attribute 'actor_id'
14:16 code-R_ joined #fuel
14:25 xarses joined #fuel
14:30 aarefiev joined #fuel
14:32 mwhahaha Sketch: what failed?
14:33 mwhahaha generally the puppet logs are a good source of what went wrong
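(A sketch of where those logs land, assuming a Fuel 8.0 master; node-2.domain.tld is a placeholder hostname and the paths are assumptions.)

    # per-node logs are forwarded to the master under docker-logs/remote
    grep -iE 'err(or)?|fail' /var/log/docker-logs/remote/node-2.domain.tld/puppet-apply.log
    # or check the puppet log on the failed node itself
    grep -iE 'err(or)?|fail' /var/log/puppet.log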
14:34 fritchie joined #fuel
14:35 fritchie with a Fuel 9 3-node controller setup, should the VNC console show 'Failed to connect to server (Code: 1006)' when the browser is on an external network?
14:49 aglarendil maestropandy: I am more than sure you have slow network connectivity to repositories. you can either create a local mirror with fuel-mirror utility or use the closest repos
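(For reference, a hedged sketch of the fuel-mirror workflow; the flags follow the Fuel 8.0 docs from memory and should be checked against fuel-mirror --help.)

    # build local copies of the MOS and Ubuntu package repos on the master
    fuel-mirror create -P ubuntu -G mos ubuntu
    # point an environment (id 1 is hypothetical) at the local copies
    fuel-mirror apply -P ubuntu -G mos ubuntu --env 1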
14:49 aglarendil Sketch: which part of deployment do you have failed? and which version of Fuel is it?
14:49 aglarendil fritchie: for VNC console to work you need to have connectivity at least to public IP
14:50 aglarendil e.g. you will have a frame open with a URL of something like http://<public_ip>:6080/blablabla
14:53 fritchie sketch, nothing failed. If I run 'nova get-vnc-console xyz novnc' and go to the URL, I get the 1006 error; however, if I refresh a few times I get a console
14:55 fritchie " handler exception: The token is invalid or has expired" in logs when it fails
14:57 fritchie ie, novnc console only works when the lb sends traffic to 2nd controller (out of the 3)
15:00 Julien-zte joined #fuel
15:00 thumpba joined #fuel
15:08 code-R joined #fuel
15:11 fatdragon joined #fuel
15:18 maestropandy joined #fuel
15:18 thiagolib joined #fuel
15:41 fritchie this was the fix https://bugs.launchpad.net/fuel/+bug/1599559
15:42 mwhahaha fritchie: thanks we'll look at getting that fixed
15:47 ekosareva joined #fuel
15:47 mwhahaha fritchie: so those should be the default values
15:49 mwhahaha oh no, dogpile.cache.null is the default
15:50 mwhahaha fritchie: we don't configure caching for nova, only for keystone_authtoken
15:52 fritchie joined #fuel
15:57 code-R joined #fuel
15:59 code-R_ joined #fuel
16:07 Sketch ok, just deleted my nodes and started over.  defined 3 nodes as role=virt, then tried to deploy VMs to them.  it fails with the error:
16:07 Sketch Provision has failed. Mcollective problem with nodes [{"uid"=>"2", "error"=>"Node not answered by RPC."}], please check log for details
16:08 Sketch hey, i see a puppet log in the UI this time.  i guess that's an improvement.
16:11 Sketch ok, looks like it's probably a network configuration issue. the node can't get to its default route via the bridge interface.
16:13 fritchie mwhahaha: thx, so I really only need to change the cache backend?
16:14 thiagolib I would like to know if it is possible to add a CentOS 6.5 repository to deploy Liberty OpenStack
16:14 mwhahaha it seems that you needed to configure it since we don't
16:14 mwhahaha fritchie: -^
16:15 mwhahaha we were configuring it correctly on the controllers but not the compute nodes, https://review.openstack.org/#/c/338373/
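(That review appears to boil down to giving nova a real oslo.cache backend on the computes; a minimal sketch of the relevant nova.conf section, with placeholder memcached addresses.)

    [cache]
    enabled = true
    backend = oslo_cache.memcache_pool
    memcache_servers = 192.168.0.3:11211,192.168.0.4:11211,192.168.0.5:11211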
16:17 Sketch yep, in fact, the bridge interface is attached to the wrong physical interface
16:18 mwhahaha Sketch: did you run network verification before deploying?
16:20 Sketch not this time around, probably should have.
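(Verification can be re-run from the Networks tab in the web UI before deploying; the CLI form below is an assumption about python-fuelclient.)

    # run network verification for environment 1 (hypothetical id)
    fuel --env 1 network --verify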
16:25 Zer0Byte__ joined #fuel
16:30 fritchie joined #fuel
16:46 kaliya_ joined #fuel
16:54 itsjustme hi all
16:55 itsjustme my deployment complains that nodes have more space than expected. i take it i have to fix this in the db. is there a script that does that, or am i to back up the nova db and manually change things?
16:56 xarses more space than expected?
16:56 xarses thats a new one
16:57 xarses can you share a screenshot? I'm not sure where thats coming from
16:58 fritchie joined #fuel
16:59 mwhahaha wasn't there a bug with 512 block size vs 4k
17:00 itsjustme xarses: thats from nova-scheduler log
17:00 itsjustme im post install already
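(A way to cross-check what the scheduler believes against what the node actually reports, assuming the standard nova CLI; the hypervisor name is a placeholder.)

    nova hypervisor-stats              # aggregate disk/ram figures the scheduler works from
    nova hypervisor-show node-4        # per-hypervisor view
    df -B1 /var/lib/nova               # what the compute node actually has, in bytes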
17:01 Zer0Byte__ hey
17:03 fritchie joined #fuel
17:03 Zer0Byte__ hey mwhahaha
17:03 Zer0Byte__ how are u
17:03 mwhahaha hi2u
17:05 Zer0Byte__ mwhahaha question
17:05 Zer0Byte__ do u have any idea
17:06 Zer0Byte__ why, if i create a new project,
17:06 Zer0Byte__ i can't deploy new vms with a heat template
17:06 Zer0Byte__ because it's failing on neutron
17:06 Zer0Byte__ using a shared network
17:07 mwhahaha any specific error in neutron?
17:07 Zer0Byte__ http://paste.openstack.org/show/526594/
17:07 Zer0Byte__ but i can deploy a normal vm without the template
17:07 Zer0Byte__ and the template works fine on the admin project
17:07 mwhahaha looks like a permissions thing
17:08 Zer0Byte__ i know, but the weird part is why i can create vms without any problem on the same network as instances
17:08 mwhahaha not sure of the fix, but it seems you need to allow the 2nd project to create the resources; they are currently not allowed, so you get an error
17:08 mwhahaha might be a heat thing
17:08 mwhahaha perhaps the heat user doesn't have access
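(One way to chase that, assuming the usual openstackclient; the heat_stack_owner role name and the project/user names are placeholders that depend on the heat policy in use.)

    # compare role assignments between the working and failing projects
    openstack role assignment list --project admin --names
    openstack role assignment list --project newproject --names
    # grant the role in the new project if it turns out to be missing
    openstack role add --project newproject --user someuser heat_stack_owner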
17:26 Sketch ok, fixed the network settings and reprovisioned, now i get a new error:
17:26 Sketch (/Stage[main]/Main/Exec[generate_vms]/returns) change from notrun to 0 failed: /usr/bin/generate_vms.sh /etc/libvirt/qemu /var/lib/nova returned 1 instead of one of [0]
17:27 Sketch if i manually try to start the vm with virsh start, it says:  error: Unable to add bridge br-prv port vnet4: Operation not supported
17:27 Sketch br-prv is UP.  it doesn't have any IP address bound to it, but I assume that is normal.
17:28 Sketch hrm, wait. ifconfig shows br-prv up, but brctl show doesn't list it as a bridge
17:30 Sketch that doesn't seem right?
17:59 Sketch i do notice that's the only interface in the libvirt config which is <model type='openvswitch'/>
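(That is consistent with br-prv being an Open vSwitch bridge: brctl only knows kernel bridges, so OVS bridges never show up there, and adding a port to one needs OVS-aware tooling.)

    # OVS bridges are listed by ovs-vsctl, not brctl
    ovs-vsctl show
    ovs-vsctl list-ports br-prv

(And the libvirt interface for such a bridge needs a virtualport element rather than a plain bridge definition; a sketch:)

    <interface type='bridge'>
      <source bridge='br-prv'/>
      <virtualport type='openvswitch'/>
      <model type='virtio'/>
    </interface>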
18:18 fritchie joined #fuel
18:35 javeriak joined #fuel
18:39 Sketch hrm, is there a problem if you define more than one node with role=virt?
18:53 mwhahaha Sketch: what version are you trying to use reduced footprint with? 8?
18:53 Sketch yep
18:54 mwhahaha and you're following https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-the-reduced-footprint-feature ?
18:54 Sketch yep
18:54 mwhahaha since no one else seems to be offering any ideas let me go see if i can reproduce the problem :D
18:55 Sketch at least mostly...i was not able to upload a network template, but it seems like that shouldn't be required if you set up the network manually via the fuel web UI?
18:55 mwhahaha not necessarily
18:56 mwhahaha that may be the issue
18:57 mwhahaha gimme a bit, i have to download and setup an 8 env
18:58 mwhahaha so we do test it and in 9 it's green
18:59 mwhahaha https://github.com/openstack/fuel-qa/blob/master/fuelweb_test/tests/test_reduced_footprint.py
19:00 mwhahaha well we test the virt role at least
19:01 Sketch what about 8?
19:02 mwhahaha it was green last time it was run
19:03 mwhahaha actually no wrong one
19:03 mwhahaha let me see
19:04 mwhahaha yea it was green last time it was run but that was 4 months ago
19:04 mwhahaha let me try it out
19:13 Sketch actually i'm not even 100% sure i need reduced footprint now, because i installed fuel master into another kvm i have.  could i just deploy straight controller+compute+ceph nodes to bare metal, then migrate fuel over to it?  or are the virt nodes special somehow for hosting controllers?
19:15 mwhahaha or you could just leave it in a kvm instance
19:15 mwhahaha the virt role basically just does that for you
19:16 Sketch the long term plan is to migrate the stuff off of that kvm to turn the hardware into an additional compute node
19:16 Sketch so i'd still need to migrate it eventually :)
19:17 mwhahaha personally, i wouldn't mix the fuel master with the deployed nodes but i guess you could do that
19:18 Sketch i kinda thought that was the point of the reduced footprint
19:18 mwhahaha well i guess it depends on your requirements but the ops/security conscious part of me says nooOoooOooo
19:26 preilly_ joined #fuel
19:29 Jabadia joined #fuel
19:41 Sketch we can't really afford to blow 2-3 nodes just for controllers ATM
19:42 Sketch and the machines should be plenty powerful to dedicate a few cores for controller duties
19:42 Sketch looks like you can't do controller+compute on bare HW with fuel anyway
19:43 mwhahaha correct
19:43 mwhahaha paying for reliability/security
19:43 mwhahaha so like i said, you can do it depending on your requirements but it's not really a wise idea :)
19:45 Sketch i guess the question would be whether the virt nodes can support failover, ceph storage, etc
19:45 Sketch if they do, then it ought to be pretty reliable.  but i haven't gotten far enough to find out ;)
19:47 Sketch it seems like they may not support ceph storage.  i'm also not entirely clear from the docs if the virt node(s) is(are) to remain, or to be replaced by compute nodes after the migration+initial virtual controller setup is done
20:10 mwhahaha no i'm pretty sure it's just a basic kvm instance
20:12 Sketch yeah, that's what i'm suspecting. my coworker seems to think you can install one virt role, provision controllers on it, then provision bare compute nodes, and migrate the controllers over and everything will be happy and redundant.
20:13 Sketch btw, when i try to use a networking template...
20:13 Sketch # fuel --env 1 network-template --upload --dir /root
20:13 Sketch 500 Server Error: Internal Server Error ('NoneType' object has no attribute 'id')
20:13 Sketch happens even if i try one of the examples without modifying it at all
20:15 mwhahaha what's the network template you're using?
20:15 mwhahaha i found https://bugs.launchpad.net/fuel/+bug/1523529 which seems to point to the interfaces needing to not be eth#
20:15 cr0wrx joined #fuel
20:16 mwhahaha the docs for the reduced footprint may not have been updated from 7.0
20:16 mwhahaha Sketch: yea rename the if#: to the correct enp0s# naming scheme
20:16 mwhahaha in your template
20:17 mwhahaha that's a difference between 7 and 8
20:17 Sketch aha, yep
20:18 Sketch now it uploads
20:18 mwhahaha those high quality error messages
20:18 * mwhahaha sighs
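(A minimal sketch of that rename inside the template's nic_mapping section; the layout follows the Fuel 8.0 network template examples, and the enp0s* names are assumptions for this particular hardware.)

    adv_net_template:
      default:
        nic_mapping:
          default:
            if1: enp0s3   # was eth0 in the 7.0-era example
            if2: enp0s4   # was eth1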
20:21 mwhahaha also installing multiple controllers on a single node with the virt roles isn't redundant, because if you lose that host you've lost your entire cluster
20:22 mwhahaha if you wanted to skimp, you could use the virt role on 2 machines and deploy the master/controllers to have some redundancy, but that's why you really should have 3 controllers on actual hardware
20:22 bgaifullin joined #fuel
20:29 cr0wrx anyone know how the swift service is running on mos 8.0? I'm trying to work through an availability zone issue and now using swift returns 503 service unavailable
20:29 cr0wrx I don't see anything under pcs or service commands that look swift-y
20:30 mwhahaha cr0wrx: check haproxy
20:31 cr0wrx radosgw                  FRONTEND       Status: OPEN        Sessions: 0    Rate: 0
20:31 cr0wrx radosgw                  controller1    Status: DOWN/L7STS  Sessions: 0    Rate: 0
20:31 cr0wrx radosgw                  BACKEND        Status: DOWN        Sessions: 0    Rate: 0
20:31 cr0wrx radosgw-baremetal        FRONTEND       Status: OPEN        Sessions: 0    Rate: 0
20:31 cr0wrx radosgw-baremetal        controller1    Status: DOWN/L7STS  Sessions: 0    Rate: 0
20:31 cr0wrx radosgw-baremetal        BACKEND        Status: DOWN        Sessions: 0    Rate: 0
20:32 cr0wrx how do I try to start radosgw? it seems down
20:59 Sesso joined #fuel
21:09 mwhahaha cr0wrx: service radosgw start?
21:13 cr0wrx ignore my stupidity
21:13 cr0wrx tab completion wasn't completing
21:13 cr0wrx I thought it wasn't a thing
21:13 cr0wrx well, it seems to stay up and running, not sure why it wasn't running after changing AZ stuff around for cinder/nova
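(For the record, the checks that sorted this out; haproxy-status.sh is as observed on MOS 8.0 controllers, and on some releases the init script is named ceph-radosgw instead of radosgw.)

    haproxy-status.sh | grep radosgw   # backend status as seen by haproxy
    service radosgw status
    service radosgw start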
21:17 cr0wrx and that fixed my AZ issues. One of these days I'm going to have to buy you a beer, water, or beverage of your choice. thanks again
21:18 mwhahaha pretty sure this entire channel owes me a swimming pool of such things :D
21:22 Sketch mwhahaha: i was a little unclear if you could run controllers on top of compute nodes, or if that was sort of a chicken and the egg situation
21:23 mwhahaha you can't run it within the nova world, but you could probably just run it as a standalone instance via kvm
21:23 mwhahaha not sure if virt+compute roles work
21:23 Sketch i see
21:23 mwhahaha not completely sure
21:24 Sketch that would be ideal if it worked, because then you could have multiple controllers on redundant VM nodes with redundant storage
21:24 Sketch but it sounds like it probably won't work
22:13 DavidRama joined #fuel
22:25 Julien-zte joined #fuel
22:33 ikutukov Please review the CLI patch for custom graphs: https://review.openstack.org/#/c/338584/
22:43 DavidRama joined #fuel
22:56 DavidRama left #fuel
23:23 HeOS joined #fuel
23:45 code-R joined #fuel
