
IRC log for #fuel, 2014-02-06


All times shown according to UTC.

Time Nick Message
00:13 ruhe joined #fuel
01:18 vt102 Has anybody gotten Fuel 4.0 to install on ESXi 5.x virtuals?  Any tips on how to make it work?
01:31 vt102 I seem to have nested hypervisors working; I can run VirtualBox via vagrant on a RHEL 6 box in the same cluster-- vhv.allow = true on all the hypervisors, VM hardware version 9, virtual bits exposed.
01:32 vt102 All networks set to promisc / MAC addr changes / forged transmits = Accept.
01:32 vt102 Fuel networks validate before installing.
01:32 vt102 Just always craps out.
01:34 rmoe joined #fuel
02:00 xarses joined #fuel
05:40 e0ne joined #fuel
05:52 mihgen joined #fuel
05:59 evgeniyl` joined #fuel
05:59 MiroslavAnashkin joined #fuel
06:34 IlyaE joined #fuel
06:58 IlyaE joined #fuel
07:00 rvyalov joined #fuel
07:02 ArminderS joined #fuel
07:02 steale joined #fuel
07:03 pheadron joined #fuel
07:21 IlyaE joined #fuel
07:41 IlyaE joined #fuel
07:47 e0ne joined #fuel
07:55 jouston_ joined #fuel
07:56 MiroslavAnashkin joined #fuel
07:58 AndreyDanin joined #fuel
08:30 bogdando joined #fuel
08:42 mrasskazov joined #fuel
08:50 miguitas joined #fuel
08:59 bookwar joined #fuel
08:59 anotchenko joined #fuel
09:00 vk joined #fuel
09:09 vk_ joined #fuel
09:12 e0ne_ joined #fuel
09:23 evgeniyl joined #fuel
09:55 MiroslavAnashkin joined #fuel
09:57 anotchenko joined #fuel
10:21 ruhe joined #fuel
10:26 jouston_ joined #fuel
10:31 tatyana joined #fuel
10:31 anotchenko joined #fuel
10:49 anotchenko joined #fuel
11:18 jouston_ joined #fuel
11:34 MiroslavAnashkin vt102: Please check the model of the network cards you emulate in ESXi. If there are Intel 1000 cards, change them to AMD. The Intel drivers have multicast traffic optimizations that are incompatible with Neutron
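On ESXi the emulated NIC model is set per adapter in the VM's .vmx file. A minimal sketch of the change MiroslavAnashkin suggests, assuming adapter ethernet0 (the adapter name and the exact model strings depend on your ESXi version — check your VM's actual .vmx):

```
# .vmx fragment (sketch): swap the emulated NIC model for a Fuel node VM.
# "vlance" is VMware's AMD PCnet32 emulation; "e1000" is the Intel model
# being replaced here.
ethernet0.virtualDev = "vlance"
```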
11:45 anotchenko joined #fuel
11:56 Valyavskiy joined #fuel
12:03 pheadron joined #fuel
12:10 Valyavskiy hello guys! I have a question about how nailgun generates the file "astute.yaml". Do I need to create this file manually, or is there a tool for it? Is it actually possible to create this file via nailgun's methods, or can nailgun only generate it during the deployment process?
12:29 anotchenko joined #fuel
12:29 e0ne joined #fuel
12:33 MiroslavAnashkin Valyavskiy: Please check #fuel-dev chat
12:44 TVR__ joined #fuel
12:45 anotchenko joined #fuel
12:46 TVR__ joined #fuel
13:27 Dr_Drache joined #fuel
13:27 Dr_Drache designated,
13:49 anotchenko joined #fuel
13:51 e0ne_ joined #fuel
13:51 Dr_Drache hey, generic question
13:54 Dr_Drache I've had a volume stuck in delete for 12 hours... :P
14:03 Dr_Drache I believe I went to delete it while it was still attached.
14:08 anotchenko joined #fuel
14:27 evgeniyl` joined #fuel
14:56 angdraug joined #fuel
15:03 anotchenko joined #fuel
15:10 anotchenko joined #fuel
15:36 anotchenko joined #fuel
15:59 designated Dr_Drache, what's up?
15:59 Dr_Drache designated, every try that?
16:00 Dr_Drache the glass code
16:00 designated I did some reading briefly of people saying it couldn't be transferred to another account but I never called google.
16:00 Dr_Drache that sucks
16:01 Dr_Drache I've given them away before.
16:02 designated Maybe if I call them I will get a different answer, unfortunately I don't have time right now.
16:02 Dr_Drache that's fair. it's your code unless you can't use it.
16:02 Dr_Drache I won't attempt anything with it
16:02 designated thank you, I'll let you know if I don't wind up using it.
16:04 Dr_Drache now, vlan question if you have time for one.
16:05 designated sure
16:06 Dr_Drache if I tag a single port with more than one VLAN, that port will have access to all the tagged data?
16:07 designated you're referring to a switch port?
16:11 designated if you want to pass more than one vlan tag you will need a trunk from the switch to the node, but that does require the node to tag all traffic leaving the interface; all untagged traffic would get tagged with whatever native vlan you have configured on the switchport.
16:12 Dr_Drache OMG, vlans are soooo complicated :P
16:12 designated not really, think of a vlan as a container
16:12 designated it's a logical container
16:12 designated think of a vlan as a car, frames are the passengers inside of the car and a trunk would be a bridge
16:13 designated bridge would allow multiple cars to cross
16:13 designated therefore trunks allow multiple vlans to traverse a single physical link
16:14 Dr_Drache as you can tell, i'm not a network guy. :P
16:16 Dr_Drache so, there is a default vlan on a switch, that is all data that is not actually tagged. tagged data only goes to the trunk that accepts that tagged data.
16:17 Dr_Drache ports can accept multiple tags.
16:17 Dr_Drache (my terms are perhaps slightly wrong)
16:17 designated let's take a step back for a second so you understand the terminology.
16:18 Dr_Drache I seem to have problems with terminology that renames what things have been for years :P
16:18 designated as frames are placed onto the network, let's say one of two things is going to happen when using vlans
16:19 tatyana left #fuel
16:19 designated either the switchport you're connected to will be configured as an "access" port and assigned a vlan, which means all frames entering that switchport will receive the configured vlan id
16:19 designated that is handled by the switch
16:19 designated or you will have a "trunk" port which will allow multiple vlans to traverse that link
16:20 Dr_Drache ahhh, yes sir.
16:21 designated trunks are traditionally a connection between two switches that allow the vlans to go back and forth, but you can trunk to a node.  if you have a trunk to a node or end device, that device will be responsible for adding the vlan tags as frames exit the interface
16:21 Dr_Drache so, obviously end devices must be vlan aware.
16:21 designated right
16:22 Dr_Drache what about, lower end switches?
16:22 designated in the event a frame leaves the interface untagged, if and only if you have a native vlan configured on the switchport, the untagged frame now gets the configured "native" vlan id
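The access/trunk/native distinction designated describes maps directly onto switch configuration. A minimal Cisco-style sketch (interface names and VLAN IDs are placeholders, not from the log):

```
! Access port: untagged frames in, switch stamps them with VLAN 10
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
!
! Trunk port: carries several tagged VLANs; frames arriving untagged
! fall into the configured native VLAN
interface GigabitEthernet0/2
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 switchport trunk native vlan 99
```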
16:22 designated can you further define lower end switches?
16:23 Dr_Drache like, a "workgroup" netgear 8 port switch. unmanaged.
16:24 Dr_Drache I do have a conference room or 5 that happen to have devices like that in the table.
16:24 designated if it doesn't support vlans then everything is on the same layer 2 segment, there would be no logical grouping
16:24 designated everything we're discussing is based on the assumption the network device supports vlans in the first place.
16:25 Dr_Drache so, I couldn't tag from a end device on something like that, and have it make it to core switches with the tag in place?
16:25 designated only if the switch supports vlans
16:26 Dr_Drache awesome. thank you for the crash course so far.
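On the node side of a trunk, tagging is done with VLAN subinterfaces. A minimal Linux sketch (requires root; the interface name eth0, VLAN ID 100, and address are assumptions):

```shell
# Create a VLAN subinterface so frames leaving it carry tag 100.
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.168.100.2/24 dev eth0.100
ip link set dev eth0.100 up
```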
16:28 xarses joined #fuel
16:28 Dr_Drache I will let you off the hook for now, I have other things that need my attention.
16:29 designated np :)
16:48 dan_a joined #fuel
16:53 rmoe joined #fuel
17:05 e0ne joined #fuel
17:55 angdraug joined #fuel
18:20 IlyaE joined #fuel
18:42 Dr_Drache so, i'm having issues uploading images to the cluster.
18:42 Dr_Drache it usally times out.
18:42 e0ne joined #fuel
18:46 MiroslavAnashkin Dr_Drache: How long does it take before the timeout appears?
18:47 Dr_Drache like 10 min or so.
18:47 Dr_Drache but these images are around 12GB
18:47 Dr_Drache and it doesn't seem to matter where I put them
18:47 MiroslavAnashkin Are you using the UI or the command line?
18:47 Dr_Drache UI
18:48 Dr_Drache I haven't got the keys distributed.
18:49 MiroslavAnashkin There was a bug - the default timeout somewhere in Nova (OpenStack) is 600 seconds, and it does not take into account the timeout settings from /etc/nova/nova.conf
18:49 Dr_Drache ahhh, ok
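One possible workaround for large images is uploading via the CLI rather than the UI. A sketch assuming the glance v1 client of that era and a hypothetical image file name:

```shell
source /root/openrc            # credentials; run this on a controller
glance image-create --name big-image --disk-format qcow2 \
    --container-format bare --is-public True --file big-image.qcow2
```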
18:49 Dr_Drache one other question, I've had a volume showing deleting for ~20 hours
18:50 Dr_Drache happen to know the command to kill that?
18:53 MiroslavAnashkin It depends on volume size and settings. If the volume is big and secure delete is enabled, it may take a long time.
18:54 Dr_Drache 20gb
18:57 Dr_Drache no big deal really. it's only a test cluster.
18:59 ruhe_ joined #fuel
19:02 MiroslavAnashkin Timeout bug. https://bugs.launchpad.net/nova/+bug/1253612 Your volume stalled most probably because the previous operation is in progress.
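For a volume stuck in "deleting", the usual escape hatch is resetting its state and deleting again. A sketch with a placeholder volume ID (`cinder reset-state` exists from the Havana release on — verify it is present on your release):

```shell
source /root/openrc
cinder list                                    # find the stuck volume's ID
cinder reset-state --state error <volume-id>   # clear the stuck "deleting" state
cinder delete <volume-id>
```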
19:03 Dr_Drache so, i should get this volume removed before doing anything else
19:04 MiroslavAnashkin What is your Glance backend?
19:05 Dr_Drache rbd
19:05 Dr_Drache also
19:06 Dr_Drache getting a screenshot of something for you
19:08 Dr_Drache well, a camera shot of the screen.
19:08 Dr_Drache just need to know what log you want.
19:10 Dr_Drache MiroslavAnashkin, https://www.dropbox.com/s/qt625kdzwmznhip/IMAG0111.jpg
19:10 Dr_Drache bad picture
19:10 Dr_Drache someone opened the blinds :(
19:13 MiroslavAnashkin Is VT/VT-X or AMD-V enabled in the BIOS on these servers?
19:13 Dr_Drache it should be.
19:13 Dr_Drache but, you're right, I should check that.
19:14 e0ne joined #fuel
19:16 MiroslavAnashkin Well, actually it is harmless KVM message
19:38 Dr_Drache ok
19:39 Dr_Drache sorted out when the timeout happens
19:39 Dr_Drache I think it's like you said
19:39 Dr_Drache that volume is stalled
19:46 Dr_Drache happen to know why cinder list
19:47 Dr_Drache doesn't work, I don't happen to know the os-name
19:50 MiroslavAnashkin Please run `source openrc` before running OpenStack command line tools
19:52 Dr_Drache openrc: No such file or Directory
19:52 Dr_Drache lol, I get the fun today!
19:52 MiroslavAnashkin It is located in /root on every controller
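The "No such file or directory" confusion above (running `source openrc` on a compute node) can be avoided with a small guard. A sketch, where `run_with_creds` is a hypothetical helper name and /root/openrc is the path Fuel uses on controllers:

```shell
# Source an OpenStack credentials file, then run a command; fail loudly
# if the file is missing (e.g. when run on a compute node by mistake).
run_with_creds() {
    rcfile="$1"; shift
    if [ -f "$rcfile" ]; then
        . "$rcfile"    # exports OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, ...
        "$@"           # e.g. cinder list
    else
        echo "credentials file $rcfile not found; run this on a controller" >&2
        return 1
    fi
}
```

Typical use on a controller: `run_with_creds /root/openrc cinder list`.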
19:52 Dr_Drache ohh...
19:52 Dr_Drache i was a dumbass
19:52 Dr_Drache on a compute
19:54 Dr_Drache is it friday yet?
19:54 MiroslavAnashkin OK, then run `rbd ls` to see what is in Ceph
19:55 MiroslavAnashkin It is still 5 minutes before Friday. At least here, in Moscow.
19:56 Dr_Drache damn, I want russian stuff.
19:58 Dr_Drache still waiting for the rbd ls to show
20:02 Dr_Drache MiroslavAnashkin, still no display from "rbd ls"
20:02 MiroslavAnashkin Hmm, rbd is quite fast.
20:03 Dr_Drache I'd assume it would be
20:03 MiroslavAnashkin I think we need andraug or xarses to debug Ceph.
20:03 MiroslavAnashkin angdraug
20:04 xarses do a ceph -s
20:05 MiroslavAnashkin There is a good chance GRE is working very slowly in your environment and all this slowness comes from a slow network
20:05 xarses start there; none of the rbd commands will work (well) if the cluster state isn't sane
20:05 xarses also you can use https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/ceph
20:06 xarses to guide you through the same bit of steps that we used as we vetted the puppet manifests
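The checks xarses walks through boil down to a short sequence; a sketch, assuming the `images` pool name that appears later in the log:

```shell
ceph -s              # overall cluster state; rbd misbehaves unless HEALTH_OK
ceph health detail   # which placement groups / OSDs are the problem
ceph osd tree        # are all OSDs up and placed under the right hosts?
rbd ls images        # only meaningful once the cluster state is sane
```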
20:06 Dr_Drache ok
20:06 Dr_Drache I have a health warning
20:07 Dr_Drache http://paste.openstack.org/show/62868/
20:07 xarses 'ceph osd tree'
20:09 Dr_Drache all there
20:09 Dr_Drache http://paste.openstack.org/show/62869/
20:10 xarses it looks like you don't have enough OSDs across enough hosts for the CRUSH map to allocate all of the data. It won't let you continue (by default) until the CRUSH map can determine where to place all the replicas
20:10 xarses hmm 2 hosts
20:10 Dr_Drache yea, 2 with replication of 2.
20:11 Dr_Drache test environment
20:11 xarses that's odd, the pages should start peering then
20:12 Dr_Drache [root@node-16 ~]# ceph osd lspools
20:12 Dr_Drache 0 data,1 metadata,2 rbd,3 images,4 volumes,5 compute,
20:12 Dr_Drache i've attempted a few uploads of images, and create a instance from those volumes.
20:12 Dr_Drache but they all failed after the initial upload.
20:13 Dr_Drache so, I just went in reverse deleting.
20:13 xarses yes, they will, since the cluster is in a stuck-until-resolved state
20:13 xarses the 492 pages are the default pages for when you create the cluster
20:13 Dr_Drache well, the testVM makes new instances all day long.
20:14 xarses hmm, that should have been saved into ceph
20:14 xarses odd that it works
20:15 Dr_Drache http://paste.openstack.org/show/62870/
20:15 Dr_Drache don't know if those changes mean anything to you
20:15 xarses hmm i guess it failed one of the osd's
20:19 Dr_Drache sucks
20:21 MiroslavAnashkin joined #fuel
20:22 Dr_Drache xarses, does that require a redeploy?
20:23 xarses which version of fuel is this?
20:24 Dr_Drache 4.0
20:24 Dr_Drache with a patch to the initramfs.img
20:24 xarses can you paste /etc/ceph/ceph.conf and /root/ceph.log from one of the osd hosts
20:25 Dr_Drache give me a min please
20:27 Dr_Drache http://paste.openstack.org/show/62871/
20:29 mrasskazov joined #fuel
20:40 mrasskazov joined #fuel
20:43 vk_ joined #fuel
20:55 xarses blah, this should have just worked out of the box
20:56 Dr_Drache I happen to agree.
20:56 TVR__ If it was easy, everyone would do it
20:56 TVR__ heh
20:56 Dr_Drache of course, I'd also like ubuntu to work out of the box as well.
20:58 Dr_Drache oh well.
20:58 Dr_Drache i'll start another redeploy if needed.
21:01 jouston__ joined #fuel
21:11 Dr_Drache TVR__, I hear you snickering overthere
21:12 TVR__ no, not really... I just got my VLANs and am presently rebuilding as we speak myself
21:14 TVR__ my manager had an issue with deploying keys to his new instance, so I have to verify that works... and after all that I still have yet another test deployment as I need to set up neutron with VLANs and see if that gives me issues now that I actually have VLANs
21:14 Dr_Drache yea
21:15 TVR__ It didn't work last time, but then again, I was using all untagged networks on separate NICs so that in itself could have been the reason...
21:15 Dr_Drache I'm going to have to redeploy to test VLANs here soon.
21:15 TVR__ I was given a switch with all ports having all 4 VLANs on each port....
21:16 Dr_Drache I don't see how that's different than untagged.
21:16 TVR__ well, issued a switch, not given.. but you know..
21:16 Dr_Drache yea
21:16 TVR__ we shall see, we shall see....
21:19 TVR__ I do know Fuel is a pig with its PXE network... I had set up a NEW Fuel server... gave the discovery range 10.7.212.45 - 79 and the Install range 10.7.212.80 to 115, with the Fuel node on 10.7.212.120, and when I tried to deploy 4 nodes, it Error'd as not enough addresses
21:20 Dr_Drache wow
21:20 Dr_Drache does it need a full /24
21:20 TVR__ I ended up with the Fuel server on .45 and the ranges I changed (new Fuel deployment off of the USB stick) ranges of Discovery were 46 - 110 and Install was 111 - 180 and now it is deploying
21:21 TVR__ maybe not a full /24 .. but it needs quite a few
21:22 Dr_Drache xarses, so, as of now, a redeploy is needed?
21:23 xarses Dr_Drache: hmm oh, you conversation sparks something. Are you using Neutron and have a VLAN tag on your Storage or Management network?
21:23 Dr_Drache neutron GRE
21:23 Dr_Drache clean switch
21:24 TVR__ do you have your storage network setting as tagged?
21:24 Dr_Drache I shouldn't
21:24 xarses yes, but are storage or management network tagged?
21:24 TVR__ I made that mistake before
21:24 Dr_Drache no tags.
21:24 Dr_Drache at least according to fuel
21:24 xarses damn
21:25 TVR__ can I suggest pinging each network from each node?
21:25 xarses ok, lets test anyway. from the monitor ping the the management IP and storage IP of each osd, and then from one OSD do the same for the other
21:25 xarses monitor==controller
21:26 TVR__ with mine, the management could talk, so it was installing OK.. and the OSD tree looked good... for a few minutes... but as the storage network could not talk, it kept dropping OSD's
21:26 xarses TVR__: thats what i was thinking
21:26 TVR__ heh.. yea.. And I proved it
21:26 TVR__ heh
21:26 Dr_Drache any easy way to find the ips of all these?
21:27 TVR__ ifconfig | grep <subnet three octets>?
21:27 Dr_Drache guess what network isn't pingable
21:27 Dr_Drache so far
21:28 TVR__ for i in server server server ; do ssh server "ifconfig | grep <subnet three octets>" ; done
21:28 xarses ip -4 a
21:28 TVR__ that was how I got mine
21:28 TVR__ ip -4 ... I like it
21:29 TVR__ ip -4 a | grep inet
21:29 TVR__ I need to read more man pages..heh
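TVR__'s one-liner, written out as a dry-run script. NODES and SUBNET are assumptions to replace with your own node hostnames and the subnet's first three octets:

```shell
#!/bin/sh
# Print the per-node address checks rather than running them, so the
# sketch is safe to dry-run; drop the leading echo to execute over ssh.
NODES="node-16 node-17 node-18"    # placeholder node names
SUBNET="192.168.0"                 # placeholder subnet (first three octets)
for n in $NODES; do
    echo "ssh $n \"ip -4 a | grep inet | grep $SUBNET\""
done
```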
21:30 Dr_Drache everyone is pingable from everyone
21:30 Dr_Drache on all networks
21:30 xarses damn
21:31 Dr_Drache lol
21:31 Dr_Drache if it was easy..... i'd be doing it.
21:32 xarses hehehe
21:33 xarses we can always try popping over to oftc.net #ceph where the ceph people who are smarter than me can help
21:33 xarses it sounds like your deployment should be find
21:33 xarses s/find/fine
21:46 Dr_Drache so, I'd like 4.1 released next week, with all bugs fixed, thanks xarses
21:47 TVR__ nice
21:48 Dr_Drache wow
21:49 Dr_Drache <alfredodeza> Dr_Drache: you are using a *very* old version of ceph-deploy
21:49 Dr_Drache <alfredodeza> I suggest an upgrade :)
21:51 xarses Dr_Drache: fuel 4.1 feature freezes on monday
21:53 Dr_Drache damn,  I don't know what to do now.
21:59 Dr_Drache see you guys in 14 hours!
22:14 IlyaE joined #fuel
22:51 xarses joined #fuel
23:32 pheadron does anyone have a default document for a 3 node install with fuel?
23:40 rmoe_ joined #fuel
23:49 xarses joined #fuel
