IRC log for #fuel, 2014-07-09


All times shown according to UTC.

Time Nick Message
00:17 IlyaE joined #fuel
00:36 thehybridtech joined #fuel
01:19 mattgriffin joined #fuel
01:22 eshumakher joined #fuel
01:25 xarses joined #fuel
01:59 xarses joined #fuel
02:26 xarses joined #fuel
02:37 IlyaE joined #fuel
03:06 IlyaE joined #fuel
03:24 mattgriffin joined #fuel
03:40 IlyaE joined #fuel
04:12 ArminderS joined #fuel
05:03 IlyaE joined #fuel
05:29 ArminderS- joined #fuel
05:29 xarses joined #fuel
05:37 ArminderS joined #fuel
05:41 ArminderS joined #fuel
05:42 ArminderS joined #fuel
05:44 ArminderS joined #fuel
05:45 ArminderS joined #fuel
05:47 ArminderS joined #fuel
05:48 ArminderS joined #fuel
05:49 ArminderS joined #fuel
05:51 ArminderS joined #fuel
05:52 ArminderS joined #fuel
05:53 ArminderS joined #fuel
05:55 ArminderS joined #fuel
05:58 IlyaE joined #fuel
06:23 ArminderS joined #fuel
06:24 ArminderS joined #fuel
06:25 ArminderS joined #fuel
06:26 ArminderS joined #fuel
06:27 ArminderS joined #fuel
06:28 ArminderS joined #fuel
06:29 ArminderS joined #fuel
06:30 ArminderS joined #fuel
06:30 al_ex8 joined #fuel
06:31 e0ne joined #fuel
06:36 Longgeek joined #fuel
06:36 ArminderS joined #fuel
06:36 e0ne joined #fuel
06:46 e0ne joined #fuel
06:52 e0ne joined #fuel
07:06 IlyaE joined #fuel
07:15 artem_panchenko joined #fuel
07:20 hyperbaba joined #fuel
07:21 pasquier-s joined #fuel
07:33 e0ne joined #fuel
07:40 ArminderS joined #fuel
08:03 ArminderS joined #fuel
08:14 guillaume__1 joined #fuel
08:36 ArminderS joined #fuel
08:43 AndreyDanin joined #fuel
08:52 brain461 joined #fuel
09:06 AndreyDanin joined #fuel
09:10 e0ne joined #fuel
09:18 ArminderS joined #fuel
09:23 e0ne joined #fuel
09:25 sallum joined #fuel
09:31 geekinutah joined #fuel
09:51 ArminderS- joined #fuel
09:58 ddmitriev joined #fuel
09:58 Longgeek joined #fuel
10:09 odyssey4me joined #fuel
10:15 brain461 joined #fuel
10:15 ArminderS joined #fuel
10:17 ArminderS- joined #fuel
10:34 artem_panchenko joined #fuel
10:37 e0ne joined #fuel
10:52 guillaume__1 joined #fuel
11:16 pasquier-s joined #fuel
11:38 Longgeek joined #fuel
12:28 e0ne_ joined #fuel
12:48 e0ne joined #fuel
12:57 pasquier-s joined #fuel
13:09 pasquier-s joined #fuel
13:35 Longgeek joined #fuel
14:12 ilbot3 joined #fuel
14:12 Topic for #fuel is now Fuel 5.0 for Openstack: https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
14:16 crandquist joined #fuel
14:18 dpyzhov joined #fuel
14:18 dpyzhov left #fuel
14:21 jobewan joined #fuel
14:25 mattgriffin joined #fuel
14:27 jaypipes joined #fuel
14:33 wrale trying to debug my neutron "verify networks" failure... the interfaces failing are tagged VLANs .. any tips?  i was able to do this with nova-network
14:34 al_ex9 joined #fuel
14:35 MiroslavAnashkin Are you running Verify Networks before or after the cluster deployment is done?
14:35 wrale before
14:36 wrale oh wait..
14:36 wrale after reset
14:36 wrale environment reset
14:38 al_ex9 joined #fuel
14:42 MiroslavAnashkin Looks like a bug. Could you please generate a diagnostic snapshot? And did the verification work right before you deployed this environment?
14:44 MiroslavAnashkin BTW, you may ssh to 2 of the nodes, create the VLAN interface with the necessary VLAN number between these 2 nodes and check connection with ping
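The manual check MiroslavAnashkin describes can be sketched in shell. Every concrete value here is a placeholder assumption (parent NIC eth1, VLAN tag 102, the 192.168.102.0/24 test addresses); the real commands need root on the two nodes, so this sketch only prints them:

```shell
#!/bin/sh
# Placeholder assumptions -- substitute your own NIC, VLAN ID, and addresses.
PARENT=eth1
VLAN_ID=102
PEER=192.168.102.2
VIF="${PARENT}.${VLAN_ID}"

# Print the commands to run (as root) on node A; on node B, swap the
# local/peer addresses and ping back the other way.
echo "ip link add link ${PARENT} name ${VIF} type vlan id ${VLAN_ID}"
echo "ip addr add 192.168.102.1/24 dev ${VIF}"
echo "ip link set ${VIF} up"
echo "ping -c 3 -I ${VIF} ${PEER}"
echo "ip link del ${VIF}"   # clean up after the test
```

If the tagged ping fails while untagged traffic between the same nodes works, the VLAN tag is being dropped somewhere on the path (switch trunk configuration is the usual suspect).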
14:44 wrale I skipped the pre-verification, because I met catastrophic failure by pre-verifying at some point with nova-network.  I am confident my settings are right, but my first neutron deployment failed with rsync errors.  Will look further into it.
14:44 wrale Thanks
14:48 xarses joined #fuel
14:52 al_ex9 joined #fuel
14:54 dpyzhov joined #fuel
14:54 crandquist joined #fuel
14:54 MiroslavAnashkin Hmm, just checked Neutron+VLAN network after env reset - works well.
14:56 wrale Thanks for the test.. I'll try to track down what's going on.  I'm using GRE tunnels with management and storage on VLANs
14:57 wrale these are the failing networks
15:02 IlyaE joined #fuel
15:04 al_ex10 joined #fuel
15:09 AndreyDanin joined #fuel
15:21 wrale I have errors like the following in the fuel master backend: "2014-07-08 20:31:25  WARNING  [7f68ccf97700] (base) Invalid MAC is specified"  .. These seem related to bug: https://bugs.launchpad.net/fuel/+bug/1305017
15:22 MiroslavAnashkin Yes, but it is just a warning
15:25 MiroslavAnashkin The fix is in this bug, but you have to rebuild initramfs.img
15:25 MiroslavAnashkin https://bugs.launchpad.net/fuel/+bug/1333629
15:30 wrale thanks.. i'm also seeing this in the astute logs: http://paste.openstack.org/show/85794/
15:33 MiroslavAnashkin And for this we have a bug: https://bugs.launchpad.net/fuel/+bug/1322577  And there is fix.
15:34 wrale that's interesting.  i just built this 5.0.1 iso yesterday.. shouldn't the fix be there?
15:35 MiroslavAnashkin It should, but our repository mirrors were broken yesterday.
15:36 wrale are they okay today?
15:37 Longgeek joined #fuel
15:37 MiroslavAnashkin Yes, at least right now.
15:39 ArminderS joined #fuel
15:40 Longgeek joined #fuel
15:41 wrale i looked at the patches for that bug and compared it to the .rb files inside docker on the fuel node..  my install has those patches... i must be running into a different problem..
15:45 wrale my paste shows the retry logic in action.. it retries ten times, per the patch variable.. all ten retries fail.. maybe it has something to do with the key warnings
15:47 MiroslavAnashkin Hmm, then let us ping ^ evgeniyl
15:50 evgeniyl wrale: do you have nodes where rsync failed? Could you please try to run `rsync -c -r --delete rsync://10.20.20.4:/puppet/modules/ /etc/puppet/modules/` command on the one of failed nodes?
15:51 wrale evgeniyl: thanks.. yes.. it seems all nodes are failing here.. will try
15:52 evgeniyl wrale: and also try to use -v flag for rsync, to get more information
15:53 wrale got some skips, but seems to have worked
15:53 wrale on second run with -v: sent 881 bytes  received 99819 bytes  201400.00 bytes/sec
15:53 evgeniyl wrale: could you please copy output and paste it here http://paste.openstack.org/ ?
15:55 wrale http://paste.openstack.org/show/85796/
15:58 evgeniyl wrale: hmm, could you please execute `rsync -v -c -r --delete rsync://10.20.20.4:/puppet/modules/ /etc/puppet/modules/` and then `echo $?` paste output in the same way on http://paste.openstack.org/
15:59 wrale http://paste.openstack.org/show/85797/
15:59 vogelc joined #fuel
15:59 wrale seems odd for it to work :)
16:00 wrale maybe the call out to node-X is the problem?
16:00 wrale the call to tell it to rsync?
16:00 wrale *guess*
16:06 evgeniyl wrale: I have no idea what happened :) MiroslavAnashkin will try to reproduce this issue, it seems to happen on clusters over 10 nodes, I don't believe that rsync cannot handle such an amount of requests, but we need to check to be sure.
16:07 wrale evgeniyl: sounds good.  i hope to find a fix.. lol. i have 60+ compute nodes
16:12 wrale i wonder if there is some good way to stagger the rsync connections to the fuel host.  random number of seconds sleep (within reason)? sleep based on the X in node-X?
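wrale's staggering idea could be sketched in shell as a deterministic per-node delay derived from the node-X hostname (the hostname is hardcoded here for illustration; a real node would use $(hostname)):

```shell
#!/bin/sh
# Derive a start delay from the numeric suffix of the node-X hostname so that
# rsync clients spread out instead of hitting the master simultaneously.
HOSTNAME=node-17                # illustration only; use $(hostname) on a real node
NODE_ID=${HOSTNAME#node-}       # strip the "node-" prefix
DELAY=$((NODE_ID % 30))         # spread starts across a 30-second window
echo "sleeping ${DELAY}s before rsync"
# On a real node, the delayed sync itself would be:
# sleep "$DELAY"
# rsync -c -r --delete rsync://10.20.20.4:/puppet/modules/ /etc/puppet/modules/
```

A deterministic offset (rather than a random sleep) has the advantage of being reproducible when debugging which node hit the server when.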
16:15 MiroslavAnashkin There is a parameter in the orchestrator - it sets the number of nodes deployed in parallel. Default is 50.
16:18 wrale it's strange i didn't see this problem with nova-network
16:19 AndreyDanin joined #fuel
16:19 evgeniyl wrale: here is the parameter which MiroslavAnashkin is talking about https://github.com/stackforge/fuel-astute/blob/master/lib/astute/config.rb#L71 , this file should be in astute container
16:21 wrale i changed it to 5 for a test.. will redeploy and report back.. thanks!
16:22 evgeniyl wrale: yeah, thanks, let us know if it works (or doesn't)
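The change wrale made boils down to a one-line edit in astute's config.rb inside the astute container. The key name max_nodes_per_call is an assumption read off the linked config.rb; verify it against your build before editing. This sketch rehearses the sed edit on a throwaway copy rather than the live file:

```shell
#!/bin/sh
# Rehearse the edit on a stand-in file; on the master you would run the sed
# inside `dockerctl shell astute` against the real config.rb instead.
# NOTE: the key name max_nodes_per_call is an assumption -- check config.rb#L71.
CONF=/tmp/astute-config-demo.rb
printf 'conf[:max_nodes_per_call] = 50\n' > "$CONF"   # stand-in for the real line
sed -i 's/max_nodes_per_call\] = 50/max_nodes_per_call] = 5/' "$CONF"
cat "$CONF"   # -> conf[:max_nodes_per_call] = 5
```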
16:58 wrale no luck
17:05 wrale evgeniyl: i reset and redeployed.. same errors.. i manually logged into all of the nodes and ran the rsync successfully.. :(
17:07 guillaume__1 left #fuel
17:10 e0ne joined #fuel
17:12 ArminderS- joined #fuel
17:17 angdraug joined #fuel
17:19 evgeniyl wrale: ohh... Ok, next assumption, try to use this script http://paste.openstack.org/show/85807/ , save it to some file on the slave where you have this problem and run it like `ruby file.rb`
17:20 evgeniyl wrale: I just copy pasted the code which we use to run rsync command via mcollective which is here https://github.com/stackforge/fuel-astute/blob/e0b90b9892b3d7b826496b776a0abb17d7469d4c/mcagents/puppetsync.rb#L44-L70
17:22 wrale will do, asap
17:22 wrale @ evgeniyl
17:23 mutex joined #fuel
17:28 pasquier-s joined #fuel
17:30 vogelc we are running into a lot of CEPH issues when deploying from new 5.0.1 iso's using stable/5.0.  Is there a more stable release we should be using?
17:34 e0ne joined #fuel
17:39 wrale when did you build your 5.0.1 iso?
17:40 brain461 joined #fuel
17:44 vogelc wrale: Monday 7/7
17:45 wrale vogelc: you may want to rebuild: take a look at the bug i reported: https://bugs.launchpad.net/fuel/+bug/1335628
17:44 vogelc wrale: We had the problem where nova would not start, but a new build fixed that.  whenever we deploy more than two storage nodes, OSDs on node3 and beyond start showing as down.   ticket # 2180
17:53 sanek joined #fuel
17:54 wrale evgeniyl: http://paste.openstack.org/show/85812/
17:55 wrale vogelc: do you have a link to your ticket please?
17:55 wrale is it a bug?
17:56 vogelc wrale: https://support.mirantis.com/requests/2180
17:56 wrale vogelc: i see.. i don't have an account there.. right on
17:57 IlyaE joined #fuel
18:16 wrale evgeniyl: anything left to try before i try nova-network again?
18:22 MiroslavAnashkin wrale: Please try to remove Mongo role from the controllers 1 and 2, but leave it on 3 - and re-deploy.
18:23 wrale will do.. thanks
18:27 vogelc Quick Question - If I make a change to the puppet files on the fuel master, will it be pushed out during deployment?
18:30 IlyaE joined #fuel
18:32 MiroslavAnashkin Yes, puppet manifests are synced to the nodes every deployment
18:42 brain461 joined #fuel
18:47 brain461 joined #fuel
18:49 wrale MiroslavAnashkin: http://paste.openstack.org/show/85817/  Error is connection refused this time.  I ran the ruby snippet on a controller after the deployment failure.  ruby snippet / rsync ran without issue.  Seems like a timing issue.. Clients hit the server before server is ready somehow..  maybe?
18:50 MiroslavAnashkin rsync daemon is always ready...
18:51 wrale thought so, but i'm lost
18:54 Kupo24z1 MiroslavAnashkin: can you confirm if https://bugs.launchpad.net/nova/+bug/1284709 is fixed in 5.0.1?
19:05 wrale i'm going to try with nova-network flatdhcp.. maybe it will help
19:08 brain4611 joined #fuel
19:15 e0ne joined #fuel
19:19 e0ne joined #fuel
19:26 MiroslavAnashkin Kupo24z1: Looks like yes, we updated OpenStack version in 5.0.1 to 2014.1.1
19:30 MiroslavAnashkin wrale: evgeniyl OMG, we run rsync inside container but via xinetd...
19:33 eshumakher joined #fuel
19:34 IlyaE joined #fuel
19:44 wrale MiroslavAnashkin.. :)
19:46 wrale http://www.linuxsv.org/training/l29_linux_xinetd.html  "The xinetd daemon can add a basic level of protection from Denial of Service (DoS) attacks. The following is a list of directives which can aid in limiting the effectiveness of such attacks: "
19:46 e0ne joined #fuel
19:46 wrale max_load maybe enabled?
19:47 MiroslavAnashkin wrale: Please try the following
19:47 MiroslavAnashkin On master node run
19:47 MiroslavAnashkin `dockerctl shell rsync`
19:47 MiroslavAnashkin Inside the rsync container shell
19:47 MiroslavAnashkin `yum install vim`
19:47 MiroslavAnashkin `vim /etc/xinetd.conf` - there is no vi in container
19:47 MiroslavAnashkin Set cps = 150 10 to increase the connection number max value to 150 and save config.
19:47 MiroslavAnashkin Run `service xinetd restart`
19:47 MiroslavAnashkin Type `exit` to exit container shell.
19:48 MiroslavAnashkin Then, try to deploy one more time
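For reference, the cps directive MiroslavAnashkin is changing lives in the defaults block of /etc/xinetd.conf; the surrounding block shown here is illustrative, and the two values mean "max new connections per second" and "seconds to pause the service once that limit is hit":

```
defaults
{
        # cps: first value  = max new connections per second,
        #      second value = seconds xinetd disables the service after the limit
        cps = 150 10
}
```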
19:48 wrale will do so, asap.. awaiting my nova-network provisioning... if it fails, i'll try immediately.. if it succeeds, i'll restart with neutron and do so
19:51 vogelc_ joined #fuel
19:52 mattgriffin joined #fuel
19:52 wrale looks like nova-network doesn't suffer here.. i see "installing openstack"
19:52 wrale (for me)
19:53 wrale maybe it has something to do with murano?
20:06 IlyaE joined #fuel
20:15 dpyzhov joined #fuel
20:25 e0ne joined #fuel
20:29 e0ne joined #fuel
20:30 IlyaE joined #fuel
20:46 e0ne joined #fuel
20:47 wrale my install with nova-network completed. all health checks are fine except those dealing with heat..(the last three checks on the page).. i guess i'll rebuild with neutron again, because i really want/need murano, GRE and heat
20:50 wrale one thing i notice is that horizon is calculating the total storage for the hypervisors all wrong.. i suppose this is because the entire ceph pool is available on all compute nodes.. misleading to the uninitiated, however
20:51 wrale it also says the hypervisor type is qemu when i selected kvm.. this is i guess because kvm and qemu are so closely related
20:57 dpyzhov joined #fuel
21:00 e0ne joined #fuel
21:02 IlyaE joined #fuel
21:27 bookwar left #fuel
21:43 wrale MiroslavAnashkin: tried with neutron and the xinetd.conf edit/restart.. Fails.. Error is : "Exit code: 12, stderr: rsync: read error: Connection reset by peer (104"
21:48 wrale where can i find the astute log on the fuel node?
21:52 IlyaE joined #fuel
21:54 wrale i think we need a place in fuel to set the MTU
22:02 jay-house-hunter joined #fuel
22:12 IlyaE joined #fuel
22:14 Kupo24z1 xarses: MiroslavAnashkin having an issue where nova thinks i only have 98gb because /dev/mapper/vm-nova is mounted as a 98GB filesystem, even though i have 1.5TB in Ceph
22:14 Kupo24z1 thus we can only spin up to 98GB of VM's per hypervisor
22:15 Kupo24z1 is there a way to overwrite or ignore this limit?
22:15 xarses there are over provision limits somewhere
22:15 xarses nova scheduler i think
22:15 xarses there is a conversation about it on the mailing list
22:16 xarses and there is a bug open for ceph
22:16 angdraug Kupo24z1: https://launchpad.net/bugs/1332660
22:16 angdraug Kupo24z1: grab latest nova packages from fuel 5.0.1 mirrors
22:16 wrale MiroslavAnashkin evgeniyl: #rsync person says about this part of the error: "rsync error: error in rsync protocol data stream (code 12) at io.c(764) [Receiver=3.0.9]" .... "it means it got data it wasn't expecting that wasn't part of the rsync protocol"
22:20 Kupo24z1 angdraug: my current mirror is deb http://23.108.33.226:8080/ubuntu/fuelweb/x86_64 precise main how would i locate/get the 5.0.1 files?
22:20 angdraug Kupo24z1: http://fuel-repository.mirantis.com/fwm/5.0.1/ubuntu/pool/main/
22:36 Kupo24z1 angdraug: which nova packages am i updating?
22:37 IlyaE joined #fuel
22:38 angdraug python-nova should be enough, all the code is there
22:39 boris-42 joined #fuel
22:40 Kupo24z1 angdraug: what do i need to restart, just scheduler?
22:43 angdraug tbh don't remember, probably at least api, too
22:43 Kupo24z1 kk thx
22:46 Kupo24z1 angdraug: looks like it's still occurring, installed the new 5.0.1 python-nova on all servers and restarted nova-api, nova-scheduler, and nova-conductor on controllers
22:46 angdraug ah. you should restart nova-compute on computes too
22:47 angdraug nova-scheduler asks computes about available space, so that's where that code runs
22:51 Kupo24z1 Hmm, same problem after restarting nova-compute, 'no valid host found' on vm creation
22:58 Kupo24z1 angdraug: im using this file: python-nova_1:2014.1.1.fuel5.0.1~mira18_all.deb
23:03 angdraug hm, let me download and check that
23:03 angdraug http://fuel-repository.mirantis.com/fwm/5.0.1/ubuntu/pool/main/python-nova_1:2014.1.1.fuel5.1~mira19_all.deb
23:03 Kupo24z1 ah thats the 5.1 package, i just saw the fix commited for 5.0.1 so i got that one
23:04 angdraug nope, see "5.0.1" in the URL?
23:04 angdraug that's a 5.0.1 package
23:04 Kupo24z1 weird, it also says fuel5.1
23:05 angdraug it's the same binary, but all you need to care is which repo it's in
23:05 angdraug it's icehouse both in 5.0 and 5.1 after all
23:08 angdraug did it work?
23:21 wrale MiroslavAnashkin & evgeniyl: There is apparently a temporary fix for the rsync problem: https://answers.launchpad.net/fuel/+question/251076  .. Here's wishing for a very quick permanent fix.
23:21 wrale thanks for your help!
23:30 Kupo24z1 angdraug: it works great, thanks
23:30 Kupo24z1 angdraug: well, sorta, it has the total space listed as 3TB, even though i have 1.5TB in my ceph total
23:30 Kupo24z1 so im guessing it lists my total ceph multiplied by each hypervisor
23:35 mattgriffin joined #fuel
23:37 angdraug yes, every node will report 1.5TB
23:38 angdraug and scheduler, like most of the rest of nova, is still too stupid to take shared storage into account
23:41 Kupo24z1 honestly not a huge deal since if we are running shared storage it's monitored elsewhere and much more closely than individual hypervisor usage
23:48 Kupo24z1 are deployed node hostnames always set to node-$id no matter what you label them in fuel as?
23:51 wrale Do GRE tunnels still traverse the management network?  I saw the word "mesh" in the docs somewhere.  Is it meshed across multiple NICs/LANs?
