
IRC log for #fuel, 2014-01-31


All times shown according to UTC.

Time Nick Message
00:10 xarses joined #fuel
02:11 IlyaE joined #fuel
02:49 rmoe joined #fuel
03:31 ArminderS joined #fuel
03:43 jouston__ joined #fuel
04:07 xarses joined #fuel
04:25 IlyaE joined #fuel
04:36 rmoe joined #fuel
04:52 ArminderS joined #fuel
04:56 mihgen joined #fuel
05:26 vkozhukalov joined #fuel
05:27 IlyaE joined #fuel
06:10 IlyaE joined #fuel
06:18 e0ne joined #fuel
08:06 steale joined #fuel
08:10 miguitas joined #fuel
08:16 e0ne joined #fuel
08:16 vk joined #fuel
08:18 e0ne joined #fuel
08:29 vkozhukalov joined #fuel
08:40 evgeniyl joined #fuel
08:41 mrasskazov1 joined #fuel
08:47 tramp joined #fuel
09:02 rvyalov joined #fuel
09:23 evgeniyl joined #fuel
09:23 steale joined #fuel
09:26 bas joined #fuel
09:38 e0ne joined #fuel
09:42 e0ne_ joined #fuel
10:05 e0ne joined #fuel
10:21 syt Guys, is there any way to allow multiple subnets to be used as public IP ranges when using Fuel?
10:32 evgeniyl joined #fuel
10:34 tatyana joined #fuel
10:34 mihgen joined #fuel
10:51 e0ne joined #fuel
11:02 Bomfunk joined #fuel
11:40 evgeniyl joined #fuel
12:11 evgeniyl joined #fuel
12:13 e0ne joined #fuel
12:14 richardkiene joined #fuel
12:42 bogdando joined #fuel
12:43 TVR_ joined #fuel
12:48 ArminderS joined #fuel
12:59 Dr_Drache joined #fuel
13:09 ruhe_ joined #fuel
13:45 blinky_ghost joined #fuel
13:50 e0ne joined #fuel
14:02 TVR_ I am still having an issue with my instances getting a DHCP lease.... I have an HA setup with 6 nodes, Ceph storage and images, Neutron with VLANs....
14:02 TVR_ anyone else have this issue?
14:03 TVR_ all my agents are happy... and the subnet-list shows the allocation pools are there..
14:17 Dr_Drache hmmm.
14:18 Dr_Drache internal net, or external?
14:18 TVR_ internal net
14:18 Dr_Drache correct me if I'm wrong, but don't they need to be assigned first?
14:18 TVR_ net04 which is my default... I took the default 192.168.111.0/24 from the installer
14:19 TVR_ cfc55789-531f-42db-b15c-7ec68de6365d | net04__subnet     | 192.168.111.0/24 | {"start": "192.168.111.2", "end": "192.168.111.254"}
14:20 Dr_Drache I mean, assigned to instances.
14:21 TVR_ I know I am supposed to be able to assign a floating IP... but from boot, isn't it supposed to get an IP from its corresponding subnet when set..?
14:21 TVR_ nova show f07f916b-c65e-4303-8227-320fdb984c60
14:21 TVR_ net04 network                        | 192.168.111.5
14:22 TVR_ and from the instances in the dashboard, that's the IP it has... so shouldn't that be its IP?
14:22 Dr_Drache with that information, I agree.
14:22 TVR_ cool.. so now to figure out the 'why' it isn't getting assigned...
14:23 Dr_Drache that's the part where I'm now out of opinions.
14:23 TVR_ heh.. cool..
14:31 MiroslavAnashkin TVR_: If you check the instance console log - are there any strings like
14:31 MiroslavAnashkin Sending discover...
14:31 MiroslavAnashkin Sending select for 192.168.111.2...
14:31 TVR_ only "??"
14:31 MiroslavAnashkin Lease of 192.168.111.2 obtained, lease time 120
14:34 TVR_ nova console-log f07f916b-c65e-4303-8227-320fdb984c60
14:34 TVR_ ??
14:36 TVR_ do you have access to an environment? Can you please run `ip netns`, then take that output and run `ip netns exec <that output> ip addr`
14:36 TVR_ please tell me the state of your device for the internal net
14:37 TVR_ my tap devices are state UNKNOWN
14:37 TVR_ I am wondering if that is normal
14:37 TVR_ also.. I have 2 tap devices with inet 192.168.111.2/24 brd 192.168.111.255 scope
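For reference, a minimal sketch of the namespace check being discussed here (run on the controller hosting the DHCP agent; the qdhcp name is hypothetical - substitute your internal network's UUID):

    # list the network namespaces Neutron has created on this node
    ip netns
    # inspect the DHCP namespace for the internal network
    ip netns exec qdhcp-<net04-uuid> ip addr
    # state UNKNOWN on tap devices is usually harmless for dnsmasq ports;
    # two taps holding the same 192.168.111.2/24 address is not, and often
    # points at a stale namespace or a second DHCP agent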
14:39 MiroslavAnashkin TVR_: Do you have working DHCP in your external network?
14:39 TVR_ working? working from neutron, or another dhcp server?
14:40 MiroslavAnashkin Another one
14:41 MiroslavAnashkin External DHCP may be the root cause of the second tap interface
14:43 TVR_ there is no external dhcp server on this network... I do have a floating IP pool, but on a (obviously) different subnet
14:43 Dr_Drache TVR_, I'd have an environment, but I just took mine down this morning to try a redeploy.
14:43 TVR_ ok.. cool... I just thought the UNKNOWN to be odd..
14:44 TVR_ also:   ip netns exec qrouter-7ccac132-51c6-4f79-b773-fd7fddc7f93e ip a   <== does NOT show me a router...
14:51 MiroslavAnashkin TVR_: `ip netns exec qrouter-* ip a` should return the IP address of the gateway, not the IP assigned to the instance
14:52 TVR_ yes... I get the lo: <LOOPBACK,UP,LOWER_UP> device only, without any gw- device
14:53 MiroslavAnashkin http://paste.openstack.org/show/62242/ - My ip netns exec qrouter output
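A sketch of what a healthy router namespace should show, assuming the subnet from this log (the qr-/qg- interface names follow Neutron's convention; the gateway address 192.168.111.1 is an assumption):

    # run on whichever controller hosts neutron-l3-agent
    ip netns exec qrouter-7ccac132-51c6-4f79-b773-fd7fddc7f93e ip a
    # expect, besides lo, a qr-* port carrying the tenant gateway
    # (e.g. 192.168.111.1/24) and a qg-* port on the external network;
    # a namespace containing only lo means the router was scheduled on
    # another node or the agent failed to plug its ports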
14:55 alexz__ joined #fuel
14:57 TVR_ http://pastebin.com/pBW0UKZk
14:57 MiroslavAnashkin TVR_: Please go to the instance overview in the OpenStack dashboard and paste the IP addresses section and Security groups
14:58 TVR_ http://pastebin.com/FYqWer2a
14:59 TVR_ sorry... pasted all
15:03 MiroslavAnashkin Can you see the instance console via VNC?
15:04 MiroslavAnashkin IP assignment happens at the moment of the instance TAP interface creation
15:04 TVR_ yes.. can log in.. all is good... It just doesn't receive an IP.. and if I manually set one, I cannot ping the GW or DHCP server... which makes me suspect it's a routing issue, not a DHCP server issue
15:05 MiroslavAnashkin Ah, OK.
15:06 alexz__ Hi, can someone look/check/confirm/invalidate or comment on this bug https://bugs.launchpad.net/fuel/+bug/1274905 ?
15:06 TVR_ so.. in an HA setup.. where is the router defined (ip netns list) .. is it on all 3 servers? or just the main controller...?
15:07 Dr_Drache joined #fuel
15:07 MiroslavAnashkin TVR_: No, it is only on the single server which currently runs the neutron-l3-agent
15:07 TVR_ ok.. good.. it's only there
15:07 MiroslavAnashkin TVR_: Please check where it is with `crm status`
15:08 TVR_ wait... it is on the wrong server
15:09 TVR_ http://pastebin.com/JpPR0NEu
15:11 TVR_ crm status
15:11 TVR_ http://pastebin.com/L6uTDSGY
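To locate the node that owns the qrouter namespace, something like the following (the p_neutron-l3-agent resource name matches typical Fuel 4.x Pacemaker configs - verify against your own output):

    # on any controller
    crm status | grep l3-agent
    # the "Started node-X" part names the host whose qrouter-* namespaces
    # are live; running ip netns on any other node will show nothing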
15:15 MiroslavAnashkin TVR_: Everything looks OK. Please check `ip netns exec qrouter-7ccac132-51c6-4f79-b773-fd7fddc7f93e ip a` on node-10
15:16 TVR_ http://pastebin.com/Ra1HAcqm
15:16 rmoe joined #fuel
15:19 MiroslavAnashkin alexz__: Yes, it is duplicate of this bug https://bugs.launchpad.net/fuel/+bug/1267431
15:21 Dr_Drache I propose a change to OpenStack: change all naming schemes to fish.
15:22 alexz__ mm, are you sure? In the comment: "also, if manually fix this issue,it did not fix https://bugs.launchpad.net/fuel/+bug/1267431 problem." -- I tried fixing the *.yaml files, but it doesn't fix 1267431
15:23 alexz__ *I mean, I fixed the *.yaml files and then uploaded them to Fuel
15:25 MiroslavAnashkin alexz__: the tenant `admin` is hardcoded into the Puppet manifests, located on the master node at /etc/puppet/modules/
15:26 MiroslavAnashkin alexz__: a fix to the *.yaml files cannot help with it.
15:30 MiroslavAnashkin TVR_: What IP address do you use to SSH to the instance?
15:30 TVR_ I don't.... I go in through console
15:30 alexz__ MiroslavAnashkin: yeah, I know. I mean, is it another bug?
15:31 TVR_ that is the crux of my issue
15:31 ruhe joined #fuel
15:31 MiroslavAnashkin alexz__: No, it is the same bug as I mentioned.
15:32 MiroslavAnashkin TVR_: And what does `ip a` in the console show?
15:32 TVR_ it should be getting 192.168.111.5 and I should be able to assign a floating IP to it and get in through ssh, but no IP seems to get to the instance
15:32 IlyaE joined #fuel
15:33 TVR_ since I manually set an IP, it shows lo and eth0 with IP
15:33 TVR_ state up for both
15:33 MiroslavAnashkin TVR_: No, you should not set the IP manually.
15:33 TVR_ state up for eth0 and state UNKNOWN for lo... sorry
15:34 alexz__ MiroslavAnashkin: hmm, but I think the generator of the yaml files and puppet-apply (at node deployment time) use different modules?
15:34 TVR_ ok doing service network restart now.. one sec
15:34 TVR_ I set it manually to test if I could get to the router.. setting it manually, I cannot get to it
15:35 TVR_ ok.. now ip a shows it as up.. but without an IP .. as it did not receive one
15:35 MiroslavAnashkin alexz__: The orchestrator takes the generated yaml file and calls puppet-apply with the input parameters described in the yaml.
15:38 MiroslavAnashkin TVR_: Do you have DHCP client in your instance?
15:38 TVR_ yes I do
15:38 Dr_Drache MiroslavAnashkin, I have a question: is there a repo, or plans for a repo? I'd assume that'd be the simplest way to get fixes to users who need specific bug fixes, or is there a special way already in place?
15:39 TVR_ dhcp-common rpm
15:39 alexz__ MiroslavAnashkin: yeah, and I see the bug in the yaml generator, before the orchestrator starts. Or am I mistaken?
15:40 TVR_ dhcp-common as there is no dhcp-client package
15:41 TVR_ it is set to BOOTPROTO=dhcp for the interface
15:43 MiroslavAnashkin TVR_: Please check /etc/sysconfig/network-scripts/ifcfg-eth*
15:43 MiroslavAnashkin Inside your instance
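For comparison, a minimal DHCP-configured interface file inside a CentOS-style guest would look roughly like this (a sketch; exact keys vary by image):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=dhcp
    ONBOOT=yes
    NM_CONTROLLED=no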
15:45 TVR_ http://pastebin.com/WbP0DfXi
15:45 TVR_ had to manually type it ..but those are my settings....all of them
15:45 MiroslavAnashkin alexz__: Yes, the admin tenant is hardcoded everywhere, including the yaml generator. So, to fix it one must change both the yaml and the Puppet manifests.
15:46 TVR_ only ifcfg- are lo and eth0
15:47 TVR_ I used this image with my home-rolled puppeted environment using RDO Packstack, and it would boot and get a DHCP IP in that environment.... (my issue was I didn't have HA, and when adding a compute node I couldn't figure out the neutron settings, which is why I abandoned that project and came over to Fuel)
15:48 TVR_ I am 100% positive this image is good and gets an IP from DHCP if presented with one
15:50 TVR_ by manually assigning an IP to eth0, from the subnet of the internal scheme, I cannot ping either the gateway or the DHCP server, so there must be something wrong there..
15:50 MiroslavAnashkin TVR_: Please connect to master node and check /var/log/remote/<IP address of node-8>/neutron-agent-dhcp.log
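The corresponding check on the Fuel master, roughly (the node IP is a placeholder):

    # Fuel aggregates node logs under /var/log/remote/<node IP>/
    tail -n 100 /var/log/remote/<node-8-ip>/neutron-agent-dhcp.log
    # look for dnsmasq DHCPDISCOVER/DHCPOFFER pairs for the instance's MAC:
    # discovers without offers point at dnsmasq; no discovers at all point
    # at the path between the VM and the DHCP namespace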
15:51 alexz__ MiroslavAnashkin: thanks! That's what I wanted to know! And can you say the name/manifest/module which generates the yamls?
15:52 MiroslavAnashkin alexz__: Astute
15:52 alexz__ MiroslavAnashkin: thanks
15:58 Dr_Drache TVR_, your name finally clicked, and now I want one again.
15:58 TVR_ heh.. yes.. I have a 1977 2500M myself
15:59 Dr_Drache I might go find myself a 280
16:01 jouston_ joined #fuel
16:02 TVR_ many out there... I am doing a conversion, as mine is not stock... converting to the 3000S, and I have a 4.9L Land Rover motor with MPFI and a modern ECU that's supercharged... fun toy
16:03 Dr_Drache yeah, I've been a modified Honda guy car-wise myself, but have dabbled in others.
16:08 Dr_Drache envy your selections over on that side of the pond though.
16:08 Dr_Drache some beautiful automobiles.
16:09 TVR_ so I restarted neutron-dhcp-agent maybe 3 times earlier.. which would explain 3 of these logs... but they seem to occur periodically throughout the night as well...
16:09 TVR_ http://pastebin.com/hrbMRWCy
16:14 TVR_ also.. my node-10 L3 agent is giving errors http://pastebin.com/iGZqRYLb
16:19 xarses joined #fuel
16:21 designated joined #fuel
16:35 MiroslavAnashkin TVR_: It is Pacemaker that should start the DHCP agent and select the node where to run it
16:40 TVR_ ok.. so where are we now... where else can I look to get more info?
16:44 MiroslavAnashkin TVR_: Let us do a simple check. Please start a new VM based on the TestVM (Cirros) image with one or both of the internal and external networks and check if it gets an IP
16:45 MiroslavAnashkin TVR_: There are no errors in the DHCP agent logs
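A sketch of that smoke test with the CLI of this era (flavor name and the net-list filter are assumptions):

    # boot a Cirros test instance on the internal network
    NET_ID=$(neutron net-list | awk '/net04 / {print $2}')
    nova boot --flavor m1.tiny --image TestVM --nic net-id=$NET_ID test-z1
    # Cirros logs its udhcpc exchange to the console, matching the
    # "Sending discover..." / "Lease of ... obtained" strings quoted above
    nova console-log test-z1 | grep -iE 'discover|lease'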
16:49 TVR_ creating Z1 now.. with both networks and 40G drive...
16:49 Dr_Drache MiroslavAnashkin, I have a question. I have a DHCP server on my public/external... what errors will that cause? (the DHCP is on a different subnet than my selected IPs)
16:51 Dr_Drache ref : http://paste.openstack.org/show/62262/
16:53 e0ne joined #fuel
16:53 Dr_Drache I assumed I'd be safe, but it looks like I am missing a key piece of networking information.
16:53 TVR_ no DHCP lease received.. ip a shows lo, eth0 and eth1
16:55 TVR_ this is booting from the Cirros image that comes with the Fuel install
17:07 MiroslavAnashkin Dr_Drache: There is one more DHCP server in Neutron. If both DHCP ranges intersect - VMs may get 2 IP addresses. It is a matter of interconnection between the DHCP servers - each should know what the other assigned to the instance
17:08 MiroslavAnashkin If the external DHCP range does not overlap the configured external network - all should work OK
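As a concrete illustration of the non-overlap rule (all addresses invented for the example):

    # external segment 10.20.30.0/24, say:
    #   office DHCP server scope:       10.20.30.10  - 10.20.30.99
    #   Fuel public/floating range:     10.20.30.130 - 10.20.30.254
    # the two allocators can never hand out the same address, so office
    # machines and floating IPs coexist on one L2 segment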
17:14 Dr_Drache ahhh
17:14 Dr_Drache so x.x.30.x for fuel, and x.x.40.x for other dhcp, all is sugar coated?
17:14 mihgen joined #fuel
17:15 Dr_Drache now to deploy on Ubuntu to an HP, and manually fix it, woooo!
17:16 rmoe joined #fuel
17:17 TVR_ if I thought Ubuntu would fix this (and I don't) I would re-roll this cluster.
17:18 Dr_Drache well, it doesn't deploy at all to HPs
17:19 TVR_ ? It worked for my DL360 G5 setup?
17:19 TVR_ maybe a 'too new' driver?
17:20 Dr_Drache doesn't on my 380s
17:20 TVR_ ok..interesting
17:21 TVR_ MiroslavAnashkin ... do you think rebooting my cluster would help.. or cause other issues?
17:21 TVR_ just to make everything 'fresh'
17:24 designated I have a question regarding Fuel. If something fails on the Fuel master and has to be rebuilt, or gets upgraded, what happens to the nodes and the information Fuel had stored? Does everything have to be reprovisioned?
17:24 designated Is there a way to back up your deployments in Fuel so they can be restored later?
17:25 Dr_Drache nope
17:25 Dr_Drache I heard that's in the plan for 4.1
17:25 TVR_ good question.... I would like that answer as well...
17:25 Dr_Drache but as of now, if Fuel dies or reboots,
17:25 Dr_Drache you have to manually do everything to the cluster.
17:25 angdraug joined #fuel
17:25 Dr_Drache unless I tested that wrong.
17:25 designated that's not a good position to be in
17:28 TVR_ during install, asking if a SAN is present and if this will be a new deployment or a rebuild would be a huge step in the right direction.. after all, our puppetmasters here are all off SAN, so knowing what files are touched / created and making sure they are on a SAN wouldn't be that hard to do... I imagine
17:31 Dr_Drache how DO you add a san to this?
17:31 designated I think starting with a simple backup file that can later be imported would be fairly easy to implement
17:31 Dr_Drache designated, it's being implemented.
17:31 TVR_ right now.. manually
17:31 designated Dr_Drache, cool
17:32 Dr_Drache TVR_, by each instance, or node wide?
17:32 Dr_Drache *cluster
17:32 TVR_ no, I was talking about san for the fuel node.... the cluster gets to be on ceph
17:33 TVR_ the ceph nodes can connect to jbods as needed
17:33 Dr_Drache well, I mean, I have SANs that will stay that way (as iSCSI).... I haven't looked at how to add them as storage for the instances.
17:34 TVR_ I haven't got my network up yet.. so I can't speak on that ... yet
17:34 TVR_ heh
17:34 Dr_Drache same.
17:34 Dr_Drache been too busy with networking to go farther.
17:35 Dr_Drache and my HPs
17:35 TVR_ are your instances getting dhcp addresses?
17:35 Dr_Drache are DL360's with 400i SM
17:35 Dr_Drache TVR_, they were, but I'm an Ubuntu place, and I want to actually test with Ubuntu.
17:36 Dr_Drache but I'm using GRE; no matter how much I tried, Fuel/OS was the only thing that didn't do VLANs right for me.
17:36 TVR_ ok.. so they were CentOS.. and getting DHCP leases? From / through Neutron? You running Neutron with VLANs?
17:37 Dr_Drache no, neutron GRE
17:37 TVR_ OK.. so you are GRE then...ok..
17:38 Dr_Drache blah, I hate this. To deploy my cluster on my HPs, I'd have to deploy the controllers first and then manually fix the grub, but to do that I have to set Ceph to zero replication, which means I'm stuck at zero when I deploy computes/storages.
17:39 Dr_Drache if I do it all at once, the timeout kills the deployment
17:46 MiroslavAnashkin Dr_Drache: Your patch. It is under review now, but you may check it anyway. https://github.com/stackforge/fuel-web/commit/04f17482e97c1c7ee12f7f99bafc2dc9dbfc9a95
17:48 Dr_Drache cool. MiroslavAnashkin, I am not bitching or trying to push, just working out how this will all fit me.
17:50 MiroslavAnashkin Dr_Drache: How to apply. 1. Download initramfs.img from master node (/var/www/nailgun/bootstrap/initramfs.img)
17:51 MiroslavAnashkin Dr_Drache: Unpack it somewhere UNDER ROOT ACCOUNT! Root is mandatory, since otherwise all files will change their owner.
17:55 MiroslavAnashkin Apply the patch to /opt/nailgun/bin/agent - it is inside the initramfs
17:56 MiroslavAnashkin Dr_Drache: cd to the directory with unpacked initramfs and run
17:57 MiroslavAnashkin Dr_Drache: `find . -xdev | cpio --create --format='newc' | gzip -9 > <some path where you want to place the new initramfs.img>/initramfs.img`
17:58 MiroslavAnashkin All the actions above should be also performed under root
17:59 MiroslavAnashkin Dr_Drache: How to install the patched initramfs.img back to the master node: http://paste.openstack.org/show/62265/
17:59 richardkiene_ joined #fuel
18:01 Dr_Drache just about done
18:02 Dr_Drache seems my workstation needs moar cores
18:12 vk joined #fuel
18:13 richardkiene joined #fuel
18:19 tsduncan Is it possible to get the source=install/puppet logs for the master node via the API? If so, what value is used for node?
18:40 KresiusMengg joined #fuel
18:41 KresiusMengg left #fuel
18:47 Dr_Drache MiroslavAnashkin, I'm going to check the initramfs again
18:47 Dr_Drache but so far, no go
18:48 Dr_Drache fixed the grub issue, yes, but got the /var/lib/glance issue
18:51 e0ne joined #fuel
19:04 MiroslavAnashkin Dr_Drache: What is the error message?
19:06 MiroslavAnashkin TVR_: Are you using VLAN segmentation? What is your VLAN ID range? Are all the VLAN numbers in this range allowed on your switches?
19:06 TVR_ Neutron L2 configuration?
19:07 MiroslavAnashkin Yes.
19:07 Dr_Drache MiroslavAnashkin, I'll find the error, https://bugs.launchpad.net/fuel/+bug/1260293
19:07 Dr_Drache pretty much that.
19:09 Dr_Drache the lvm isn't created
19:09 TVR_ VLAN range is 1000-1030, all default... and that is not on my switch, no
19:09 MiroslavAnashkin TVR_: If you have 2 more servers - create the same non-HA environment in Fuel UI, add these 2 nodes as controller and compute, set the same network settings and click Verify Networks button. Do not deploy, just verify
19:09 jaypipes joined #fuel
19:10 MiroslavAnashkin Even if you use dedicated non-tagged NICs for each network - you need VLANs enabled on the switch for Openstack virtual networks
19:11 TVR_ gre would solve this?
19:12 TVR_ I can do tunnels rather than VLANs if that would work..
19:15 MiroslavAnashkin Dr_Drache: Please attach a diagnostic snapshot and the output of `sfdisk -d` for all disks, `lvdisplay`, `pvdisplay`, `blkid`, and `cat /etc/fstab`
19:16 MiroslavAnashkin Dr_Drache: Right to this bug
19:16 MiroslavAnashkin Dr_Drache: after that - simply try to re-deploy - it is a floating bug
19:17 vk joined #fuel
19:17 MiroslavAnashkin TVR_: GRE may help. And may not - some configurations with GRE require VLANs enabled as well.
19:18 Dr_Drache MiroslavAnashkin, it's deployed, just shows that error on boot.
19:20 vkozhukalov joined #fuel
19:21 Dr_Drache MiroslavAnashkin, getting the information
19:22 MiroslavAnashkin Dr_Drache: I saw a similar error and it disappeared when I reduced the Glance partition size. Weird, but you may try it as well...
19:22 MiroslavAnashkin Dr_Drache: Cannot figure out whether it was Anaconda magic or it was Glance
19:23 TVR_ Earlier I tried rebooting the cluster and that brought a whole new set of issues... so I am rebuilding the 6 nodes with gre tunnels and will see how that goes
19:25 MiroslavAnashkin TVR_: If you reboot an HA cluster - please reboot the controllers one by one with a 5-minute pause between reboots. HA means there is no switch-off.
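A rolling-reboot sketch of that advice (node names are placeholders; verify cluster health between steps rather than trusting the timer alone):

    # run from the Fuel master; reboot controllers one at a time
    for node in node-8 node-9 node-10; do
        ssh "$node" reboot
        sleep 300   # ~5 minutes between reboots, per the advice above
        # before continuing, confirm resources are Started again, e.g.:
        # ssh <a-live-controller> crm status
    done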
19:26 TVR_ yes.. I just didn't give them enough time in between.. I caught my mistake (better now than in production) only after it began to spiral
19:27 Dr_Drache brb
19:33 Dr_Drache MiroslavAnashkin, information posted
19:35 MiroslavAnashkin Dr_Drache: Thank you!
19:48 IlyaE joined #fuel
19:51 tsduncan_ joined #fuel
20:25 Dr_Drache so, I'm experiencing the same issue as TVR_
20:25 Dr_Drache instances get no IP address
21:22 TVR_ rebuilt my environment with GRE tunnels.... HA.... 6 nodes, Ceph for images and volumes.....
21:22 TVR_ all looks good
21:22 TVR_ IPs are being assigned
21:22 TVR_ dhcp working now
21:23 TVR_ will try VLANs again over the weekend.. but for now.. this is what I certify as good
21:31 xarses joined #fuel
21:48 designated has anyone successfully deployed a 3-controller, Neutron-with-VLAN environment using Fuel?
21:57 alexz__ joined #fuel
