
IRC log for #fuel, 2014-03-07


All times shown according to UTC.

Time Nick Message
00:04 piousbox joined #fuel
00:08 e0ne joined #fuel
00:33 e0ne joined #fuel
00:35 e0ne joined #fuel
01:28 rmoe joined #fuel
01:35 e0ne joined #fuel
02:07 ilbot3 joined #fuel
02:07 Topic for #fuel is now Fuel for Openstack: http://fuel.mirantis.com/ | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
02:21 brain4611 joined #fuel
02:25 Leontiy joined #fuel
02:26 manashkin__ joined #fuel
02:52 Srijit joined #fuel
02:53 Srijit logger url
02:53 Srijit Hi
02:54 Srijit I have been trying to set up fuel on VMs..... both the VMs are on my desktop
02:55 Srijit the target node is not able to boot from the PXE server...
03:04 AndreyDanin__ joined #fuel
03:27 meow-nofer_ joined #fuel
03:29 meow-nofer__ joined #fuel
03:33 e0ne joined #fuel
04:20 dhblaz joined #fuel
05:33 e0ne joined #fuel
06:00 Ch00k joined #fuel
06:35 saju_m joined #fuel
07:05 evg joined #fuel
07:30 IlyaE joined #fuel
07:33 e0ne joined #fuel
07:52 alex_didenko joined #fuel
07:52 saju_m joined #fuel
07:52 dburmistrov joined #fuel
07:52 xarses joined #fuel
07:52 1JTAALJWM joined #fuel
07:52 mutex joined #fuel
07:52 bookwar joined #fuel
07:52 Arminder joined #fuel
07:52 23LAAJERZ joined #fuel
07:52 mattymo joined #fuel
07:52 brain4611 joined #fuel
07:52 GeertJohan joined #fuel
07:52 Leontiy joined #fuel
08:00 topochan joined #fuel
08:04 geogdog joined #fuel
08:10 Ch00k joined #fuel
08:13 Ch00k joined #fuel
08:42 e0ne joined #fuel
08:56 e0ne_ joined #fuel
09:10 mihgen joined #fuel
09:24 vk joined #fuel
09:53 evg joined #fuel
09:57 e0ne joined #fuel
10:30 e0ne_ joined #fuel
10:33 anotchenko joined #fuel
10:50 evg joined #fuel
10:57 anotchenko joined #fuel
11:19 topochan joined #fuel
11:22 e0ne joined #fuel
11:30 saju_m joined #fuel
12:00 anotchenko joined #fuel
12:22 anotchenko joined #fuel
12:26 dmit2k joined #fuel
12:38 Ch00k joined #fuel
12:41 e0ne joined #fuel
13:09 dubmusic joined #fuel
13:12 dubmusic Hello.  I am setting up my first Fuel-driven 10-server OpenStack cluster, in Neutron VLAN mode, and I have a few questions regarding the setup and the errors I am getting while trying to add the last controller to the cluster.
13:15 dubmusic On the initial Network setup page under the Neutron L3 setup, there is an internal network defined below the floating range.  In the docs the internal is defined as the Management, which is already defined further up on that page.  What network does that represent, and where does neutron use that network?
13:20 anotchenko joined #fuel
13:20 dubmusic Also, there are conflicting statements in the documents regarding the Private network.  Do I need to add tagged VLANs to that network prior to install, or should it be a single VLAN?
13:24 dubmusic Is there anyone here who could answer a couple of questions?
13:28 justif joined #fuel
13:41 getup- joined #fuel
13:46 dubmusic when configuring networking, what is the neutron l3 internal network used for?
13:49 TVR___ joined #fuel
13:49 TVR___ Man I would like a sneak peek at 4.1
13:49 TVR___ heh
13:51 dubmusic Is 4.1 available for preview?
13:51 dubmusic I have already encountered a few bugs that are resolved in 4.1
13:52 TVR___ heh.. it is a wish of mine... but then again, "if wishes were horses then beggars would ride"
13:53 TVR___ I want to test one feature only... kill the main controller harshly and watch it recover... with that, I can certify it for a first small production build.
13:54 TVR___ the rest of the bugs are inconvenient, but not game stoppers for us...
13:54 dubmusic TVR, would you mind if I asked you a few questions?
13:55 TVR___ sure.. can I lie? heh.. I will try to answer what I can
13:56 dubmusic I have installed a 10-server HA cluster and have 9 out of 10 servers running, with one controller that fails when trying to connect to neutron/keystone.  But first the basic questions
13:56 dubmusic when configuring networking, what is the neutron l3 internal network used for?
13:56 dubmusic it is right below the floating IP spot
13:57 TVR___ ok.. so the L3 (from what I know) is how it builds its virtual networks... the agent is how it connects that / those aspects (in database) to the rest of the cluster.. the agent also spreads the dhcp agent to the rest of the compute nodes as well
13:58 TVR___ ok.. you mean from the dashboard
13:58 dubmusic In the manual it says that internal represents the management network, but I think that is defined further up on the page. Yes from the Fuel UI
13:58 TVR___ I was talking from the neutron agent-list command
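(For reference, a sketch of the command TVR___ means, run on a controller with admin credentials; the columns are from the Havana-era neutron CLI:)

    neutron agent-list
    # one row per agent: agent_type (L3 agent, DHCP agent, Open vSwitch
    # agent), host, and an alive column shown as :-) or xxx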
13:58 dubmusic Sorry
13:58 dubmusic the CLI makes sense
13:59 dubmusic The UI is confusing
13:59 TVR___ ok.. so lets talk FUEL... not OpenStack.. OK
13:59 TVR___ yes?
13:59 dubmusic sorry.  Yes
14:00 TVR___ ok.. so when setting it up... either booting off of USB or PXE and loading the img as the second stage......
14:00 TVR___ you have settings
14:00 TVR___ the dhcp / PXE...
14:01 TVR___ this has install and discovery
14:01 TVR___ those need to be on the same network... and be routable to the internet...
14:01 TVR___ from there it loads
14:01 TVR___ then...
14:01 TVR___ the first part
14:02 dubmusic The install from the fuel side works great
14:02 TVR___ one sec.. let me bring it up
14:02 TVR___ well.. if you don't plan, it will install... but not work..
14:02 TVR___ so I am going over all of it
14:02 dubmusic OK
14:05 dubmusic_ joined #fuel
14:06 dubmusic_ I have a feeling that it represents the IP block which neutron uses to carve off into the VM subnets
14:06 TVR___ yes, yes that is correct
14:06 TVR___ my env is rebuilding.. so I will go menus by memory
14:08 dubmusic and the gateway is common for all internal subnets?  I thought that they would each get a per tenant router
14:08 TVR___ the L3 is the IP block.. so you set the instances into a /24 or if you feel you may have >250 of them, I personally use a /22
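(Quick arithmetic behind those prefixes:)

    # /24 -> 2^(32-24) - 2 = 254 usable addresses
    # /22 -> 2^(32-22) - 2 = 1022 usable addresses, hence /22 when you
    #        expect more than ~250 instances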
14:09 dubmusic OK.  So that setting is definitely broken.
14:09 TVR___ the gateway is for that block... yes... but it is a loose translation of a gateway...
14:09 TVR___ that setting is only for the initial block for the admin account...
14:09 dubmusic OK, but it still exists for that subnet. Could I change that easily after installation?
14:10 TVR___ if you create a new user and project, you will create a whole new network that is none of that.. so let me give an example
14:10 TVR___ initial instances I put on a 192.168.111.0/24 like the default
14:11 TVR___ then I log in as admin and create a project bob and a user bob
14:11 TVR___ then I log in as bob and create a network
14:12 dubmusic OK.  Makes sense.
14:12 TVR___ I will see my ext_04 network and I will create another say... 192.168.100.0/24 network and a router and connect the two
14:13 dubmusic OK
14:13 TVR___ then... as long as my ext_04 network is shared, I can assign floating IPs to the instances...
14:14 evg joined #fuel
14:14 dubmusic Floating, meaning public/internet accessible
14:14 vk joined #fuel
14:14 TVR___ the instances get DHCP from the INT network (192.168.100.0/24) and I assign the floater from ext_04 (from my initially set floating range when I created the environment before I added bare metal to it) and as long as I set an access policy to allow ingress... I can connect from external to the floating IP
14:15 dubmusic OK.
14:15 dubmusic Makes sense
14:15 TVR___ the floating IPs are designed to get you ssh access to a project.... not an instance
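(TVR___'s bob walkthrough rendered as Havana-era CLI calls; the names and the 192.168.100.0/24 CIDR come from the chat, and his ext_04 is presumably fuel's net04_ext, while the credentials file, instance name, and floating address are assumptions:)

    source openrc.bob                                # bob's credentials
    neutron net-create bob-net
    neutron subnet-create bob-net 192.168.100.0/24 --name bob-subnet
    neutron router-create bob-router
    neutron router-interface-add bob-router bob-subnet
    neutron router-gateway-set bob-router net04_ext  # the shared external net
    # a floater plus an ingress rule so ssh from outside actually works
    neutron floatingip-create net04_ext
    neutron security-group-rule-create --direction ingress --protocol tcp \
        --port-range-min 22 --port-range-max 22 default
    nova add-floating-ip my-instance 172.16.0.131    # IP from the create above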
14:15 dubmusic I also had some questions about the VLAN setup
14:16 Dr_Drache joined #fuel
14:16 TVR___ you ~CAN~ give every instance a floater... but that is not the design
14:16 dubmusic right
14:16 Dr_Drache why not?
14:16 Dr_Drache :P
14:17 TVR___ the design is to set up an env of instances... have them do something, and connect to the virtual load balancer through the VIP which can be the floater
14:17 Dr_Drache I guess that works, if that's how your workflow is.
14:17 TVR___ you can set up a floater for every instance... but you had better have a very big pool
14:18 dubmusic The network verification failed at the beginning, but all of the servers installed except one. Now I am concerned that the setup may be faulty, as I cannot ping from controller to controller on the public network
14:18 Dr_Drache dubmusic, in some cases, the cluster blocks ICMP packets.
14:19 Dr_Drache so, no ping for you! :P
14:19 TVR___ also.. all the int network creation (the one for bob of 192.168.100.0/24) can and should be done from the command line if you have several projects, unless you want a PhD in clickology from doing it in the UI
14:19 dubmusic not even an ARP entry, though
14:19 Dr_Drache TVR___, here's a question.
14:19 Dr_Drache how do I have an instance know what its floater is?
14:20 TVR___ uh, oh... I am not the expert here... but will try
14:20 dubmusic I try to stick with the CLI, where I can
14:20 TVR___ ah.. good question...
14:20 Dr_Drache since, I'm "not allowed" to have an instance with 2 eths.
14:20 Dr_Drache 1 int, 1 ext.
14:21 TVR___ so.. a one-time traceroute to the external gateway and use that to create a facter fact is what I did... if the fact exists, don't do it, otherwise create it
14:21 topochan joined #fuel
14:22 TVR___ because you are correct...it is not presented to the instance
14:22 TVR___ I ~assume~ it is dnat or pat to the floater
14:22 Dr_Drache yea.
14:22 Dr_Drache can access the instances via it.
14:22 Dr_Drache but they don't "know who they are"
14:23 Dr_Drache since, the internal network isn't exposed.
14:23 TVR___ so I created a simple facter fact that gets loaded at boot.. but could also be done through an rc.local shell script
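(A loose sketch of that boot-time fact, e.g. from rc.local; the path follows facter's external-facts convention, and the metadata query stands in for TVR___'s one-time traceroute, so treat both as assumptions:)

    FACT=/etc/facter/facts.d/floating_ip.txt
    if [ ! -f "$FACT" ]; then
        # if the fact exists, don't do it; otherwise create it
        ip=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
        [ -n "$ip" ] && echo "floating_ip=${ip}" > "$FACT"
    fi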
14:24 Dr_Drache this network thing is getting to me.
14:24 TVR___ do you have your images automatically expanding onto a new volume when you boot from image and create a volume?
14:24 Dr_Drache that's my general workflow yes.
14:25 Dr_Drache and I prefer to have 2 nics. 1 for general traffic 1 for internal.
14:25 TVR___ ok.. so you already have it working then.. ok.. if not I would share what I did
14:25 Dr_Drache TVR___, my only issue right now is networking.
14:25 Dr_Drache it worked fine before, and now "you're not supposed to do it"
14:26 Dr_Drache frustrating me a bit.
14:26 dubmusic I have a question about the setup of VLANs on the switch prior to install. The network verification fails, always, but the install seems to work
14:26 TVR___ I am still in-process of a wrapper for creating a user, assigning him a project and automatically building the network...
14:26 Dr_Drache dubmusic, vlan verification doesn't work (neutron)
14:27 Dr_Drache should be fixed in 4.1
14:27 TVR___ correct
14:27 TVR___ and...
14:27 Dr_Drache of course, I could never get a working cluster with vlans.
14:27 TVR___ with neutron with VLANs you need to present each VLAN on the switch to be used...
14:27 Dr_Drache all, 500 or so
14:27 Dr_Drache lol
14:27 TVR___ with GRE you only need the switch to know of the one VLAN for the interface
14:28 TVR___ so, VLAN for public, VLAN for storage, VLAN for management and you're good....
14:29 TVR___ PXE can have a VLAN, but that's the other NIC
14:29 Dr_Drache TVR___, if you give an instance (with a completely free Sec-Policy) access to net04_ext and net04, does your instance get an IP from both? (assuming net04_ext has dhcp enabled)
14:29 Dr_Drache TVR___, so, use GRE, but tag the ports?
14:29 dubmusic So I have three NICs: 1 for Fuel, 1 for Private, and 1 for Public (untagged) plus Management and Storage
14:30 TVR___ tag the networks and ports, but use GRE, yes
14:30 Dr_Drache TVR___, that's the way to do it. nice. going to try that on my first 4.1
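(A host-side illustration of "tag the networks and ports, but use GRE": the infrastructure networks ride 802.1q tags on one NIC while tenant traffic is GRE on top of management; the VLAN IDs are made up, and fuel normally writes these interfaces itself:)

    ip link add link eth1 name eth1.102 type vlan id 102   # management
    ip link add link eth1 name eth1.103 type vlan id 103   # storage
    ip link set eth1.102 up && ip link set eth1.103 up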
14:30 TVR___ if both networks assign DHCP... I am not sure what would happen
14:30 dubmusic For the private, I have one native VLAN and 20 tagged VLANs for projects
14:31 alexz joined #fuel
14:31 Dr_Drache TVR___, both nics should have an IP.
14:31 Dr_Drache when you add a network to an instance it gives it a new NIC.
14:31 TVR___ unless you have a specific use case for instances NEEDing a VLAN, I would stay with neutron with GRE as it has been rock solid
14:32 Dr_Drache I concur, GRE is solid.
14:32 dubmusic Are there no issues with GRE and MTU?
14:33 Dr_Drache I don't see why there would be. why would you change MTU?
14:33 alexz joined #fuel
14:34 dubmusic I have not experienced it, but the network guy onsite said that it can complicate things.  Perhaps not.  Does the VLAN non-GRE work at all?
14:34 TVR___ for me, if, and only if I plan jumbo frames, I would set the MTU on the NIC itself for the bare metal... but not anywhere near the instances... and I cannot think of any edge case where I would need to do that
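(TVR___'s jumbo-frame caveat in command form; eth2 as the storage NIC is an assumption:)

    ip link set dev eth2 mtu 9000
    # persistently on the CentOS of that era: MTU=9000 in
    # /etc/sysconfig/network-scripts/ifcfg-eth2, and only on bare metal,
    # never inside the instances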
14:34 dubmusic OK
14:35 Dr_Drache dubmusic, i am sure VLAN non-GRE works for some people.
14:35 Dr_Drache I know I have not gotten it to work properly, and TVR___ has some pains IIRC.
14:36 dubmusic That does not sound encouraging?
14:36 Dr_Drache I bet if you had a network guy working with you, you'd get it going.
14:37 dubmusic VLAN, or GRE?
14:37 Dr_Drache full vlan
14:37 TVR___ the issue with VLANs and not GRE is the huge overhead from an administrative end.... if you are a CCISP or have updated your CCNE every year, you may like the challenge, but I make too many mistakes and don't like the confusion associated with them.
14:37 dubmusic What issues did you encounter with it yourself?
14:37 dubmusic LOL
14:38 Dr_Drache there was a network guy here a week or two back, took him a few days to do it, but he has it going very well.
14:38 Dr_Drache dubmusic, the sheer amount of extra work it requires.
14:38 dubmusic I only need 20 VLANS (read projects)
14:38 TVR___ so, off topic: curious : the handle dubmusic ... like dubstep or mantis?
14:38 Dr_Drache so you'll need like, 25
14:39 Dr_Drache and IIRC the l3 neutron needs a large # of vlans as well
14:39 Dr_Drache like one for every instance.
14:39 dubmusic 20 (private) + Public+Management+Storage+Fuel(Admin)
14:40 dubmusic per instance, or per project
14:40 Dr_Drache by what I was told, it's defaulted to need at least 100
14:40 TVR___ per project, as every instance in a project is on that project's VLAN
14:40 Dr_Drache ^^^
14:40 dubmusic Got it.
14:41 Dr_Drache my terms are crossed this morning
14:41 dubmusic Since this is a POC, I will only need 20, so I will give it a whirl. I can always fall back to GRE
14:42 TVR___ so.. if it is one giant set of developers... and they all share one project... and that is going to be the only project used... neutron with VLANs will only be a few more switch configs.... but that kind of defeats the use of the project aspect
14:43 TVR___ it would, however, give you the ability to completely isolate the traffic from a second project if created later and you added that VLAN to the switch config
14:43 dubmusic It will be about 10 developers, so I preconfigured those ports as trunks with 20 or so other VLANs allowed, along with a native VLAN
14:43 Dr_Drache TVR___, is your network patched with the backport?
14:44 dubmusic Backport...?
14:44 dubmusic The performance back port?
14:45 Dr_Drache the 4.1 to 4.0 networking backport patch.
14:45 Dr_Drache that was posted here, last week or so
14:45 TVR___ so.. use case: general production environment.. no isolation requirements (PCI compliance or the like), go the easy route of neutron with GRE.... some gov entity or financial institution needing isolation of projects for some compliance conformity, use neutron with VLANs and call your CCISP and CCNE for work...
14:46 TVR___ not yet... I am just waiting to use 4.1
14:46 Dr_Drache TVR___, could you humor me with a cirros instance?
14:48 Dr_Drache the file was Fuel_123_network_4.0_to_4.1_backport_ver1.run BTW
14:49 TVR___ I kind of rolled my own and don't bother with the cirros image
14:50 Dr_Drache well, just because it would be fast.
14:51 jobewan joined #fuel
14:52 Dr_Drache just wanted to see if it works for you.
15:00 TVR___ launching works.... it just has no use for me
15:00 TVR___ heh
15:01 Dr_Drache no i mean, the networking portion.
15:04 TVR___ ah.. ok... so I am doing another rebuild, so I will have to get back to you on that
15:07 Dr_Drache no problem
15:27 justif Are there any gotcha's when trying to use CEPH as I have not been able to get it to work
15:27 Dr_Drache joined #fuel
15:27 Dr_Drache ahhh
15:27 Dr_Drache much better
15:27 Dr_Drache 3.13 finally accepts my patches.
15:27 TVR___ what is the setup justif?
15:28 TVR___ 3 controller, one controller... ceph + controller... ceph by itself?
15:28 justif 5 total nodes, 1 controller, 2 compute and 2 ceph
15:28 TVR___ ok.. can I suggest 3 controller + ceph and 2 compute?
15:29 TVR___ 2 ceph boxes cannot make a quorum ... who is master?
15:29 justif I can try that but what will the 3 controllers give me other than HA on the controllers
15:29 justif ok
15:29 IlyaE joined #fuel
15:29 TVR___ it is the reason why you cannot go from 1 mon in ceph to > 1 mon
15:30 TVR___ when it restarts ceph, the mons don't know how to form a quorum and they never start
15:31 justif ah ok, so can I also do 3 controllers, 2 compute and 2 ceph?
15:31 TVR___ best practice when building ceph > 1 mon.... have an odd number
15:31 TVR___ do you see the issue with 2 ceph?
15:31 justif yea
15:32 TVR___ when it starts the 2 ceph nodes... if they are the only ceph nodes... and are mons.... who should be master?
15:33 TVR___ you can have the controllers be ceph nodes without disks
15:33 TVR___ you just need !2
15:33 TVR___ not 2
15:34 justif ok so always an odd number of ceph nodes and the odd man out, so to speak, does not need actual storage
15:35 TVR___ sure... he will simply have a role as a mon
15:36 TVR___ and break ties when quorum is challenged..
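(The quorum state TVR___ describes can be inspected from any mon node with the standard ceph CLI of the period:)

    ceph mon stat        # which mons exist and which are in quorum
    ceph quorum_status   # leader and quorum members, as JSON
    ceph -s              # overall health; a mon that cannot join shows up here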
15:36 justif so technically this should work, http://i.imgur.com/scLtxVS.png
15:36 TVR___ FYI.. I learned that when I was rolling my own ceph clusters without OpenStack, but just as a cluster for storage
15:37 TVR___ so.. select the osd nodes and check the disks are as you like, but yes, that will work
15:38 TVR___ heh.. or should work..
15:39 justif does the ceph node without storage need a ceph partition?
15:40 TVR___ it should not
15:40 justif ok
15:40 justif worst case it wont work again and I try again
15:40 TVR___ now that's the spirit !!
15:41 TVR___ you just encapsulated my career as an engineer...
15:41 Dr_Drache I don't see why you wouldn't have a partition on it.
15:41 justif it required a minimum of 3 gig so I left the default
15:43 TVR___ if he has no disk size.. I can see that
15:43 Dr_Drache true
15:44 Dr_Drache I see it as only benifital if you could have some decent ceph disks on all nodes.
15:44 Dr_Drache ....I should learn to spell
15:45 TVR___ according to mirantis and inktank, the best configuration is to have the controllers and OSDs on the same bare metal and the compute by themselves, as they are resource intensive and the controllers are not so much
15:45 justif how easy is it to expand the OpenStack deployment after the initial deployment
15:46 TVR___ if you have > 7 boxes, 3 controllers by themselves, 3 OSDs by themselves, and a compute... + expand as needed by designation for more
15:46 TVR___ 4.1 will be very easy
15:46 Dr_Drache TVR___, that sounds counterproductive to the openstack ideas.
15:46 TVR___ 4.0 has ~some~ issues when adding controllers... no issues adding ceph
15:47 TVR___ how so?
15:48 TVR___ I was stating for a large installation....
15:48 Dr_Drache dense clusters of hardware. if you start saying "everything on their own" you lose the density.
15:48 TVR___ not really, as you can have a 4U node that holds 8 servers in it...
15:49 Dr_Drache they have done A LOT of work in the last year with openstack, to get away from the requirement of a server for each role.
15:50 TVR___ if you have 40 servers, putting 3 aside that are small for the controllers and then dividing max disk for OSD deployment and MAX CPU / RAM for compute would be standard
15:51 TVR___ then.... disk IO is at its peak, while the compute can do what they do best.. deal with instances... and it makes migration and upgrades an easier issue
15:53 TVR___ besides, if you have the resources, you don't want to have to deal with both a ceph and controller issue at the same time....
15:54 Dr_Drache I wasn't saying just the controller thing, but no OSD on the computes.
15:55 TVR___ I am right now pricing out 4U nodes that have 8 servers in them... 160 physical cores and 4TB RAM for the compute side.... saves 39% in cost and 18% in power....
15:55 TVR___ if you have greater than 7 boxes, the 3 OSD boxes will have huge disk counts and they form a nice quorum
15:56 TVR___ if you want the performance to exceed a 1G network, you need > 15 OSDs from what I see.
15:57 TVR___ unless you use all SSDs
15:57 Dr_Drache all of my high IOPS are going to stay on the SSD SANs.
15:58 TVR___ yea, so with SSDs you can exceed 1G with like 3 of them
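(Rough arithmetic behind those numbers; every figure is a ballpark assumption:)

    # 1 GbE  ~ 125 MB/s theoretical, ~110 MB/s in practice
    # 7.2k HDD ~ 100 MB/s sequential, but journal + replication overhead
    #   cuts effective per-OSD client throughput to maybe 7-10 MB/s
    # 110 / 7.5 => on the order of 15 HDD OSDs to saturate the link
    # one SATA SSD ~ 400+ MB/s, so ~3 of them exceed 1 GbE easily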
15:58 Dr_Drache high IOPS still kills ceph, or vice versa, depending on how you look at it
15:58 TVR___ did you read the intel reports?
15:58 TVR___ great info in them
16:04 mihgen joined #fuel
16:04 Dr_Drache I don't read all the reports. some of them are so biased and full of FUD, it's crazy
16:04 Dr_Drache so, I miss some of them :P
16:06 justif Has anyone shared their architecture of their actual deployment?
16:06 justif link?
16:06 justif or just search intel + openstack
16:12 vkozhukalov joined #fuel
16:13 dubmusic joined #fuel
16:15 TVR___ http://software.intel.com/en-us/blogs/2013/10/25/measure-ceph-rbd-performance-in-a-quantitative-way-part-i
16:16 TVR___ http://software.intel.com/en-us/blogs/2013/11/20/measure-ceph-rbd-performance-in-a-quantitative-way-part-ii
16:19 Dr_Drache ahh, replica=2
16:19 Dr_Drache that inflates the #s.
16:34 richardkiene_ joined #fuel
16:46 crandquist joined #fuel
17:01 e0ne joined #fuel
17:05 IlyaE joined #fuel
17:08 crandquist joined #fuel
17:11 jobewan joined #fuel
17:12 rmoe joined #fuel
17:14 richardkiene_ Is there a particular windows virtio driver version that is recommended for Fuel 4.0 setups?
17:14 xarses joined #fuel
17:15 richardkiene_ Reason I ask, I'm still having an issue where windows guests just stop doing anything if they don't have a good amount of external stimulus
17:16 richardkiene_ I did not have this problem with Fuel 3.0 with virtio v 0.1-65
17:17 richardkiene_ In my production environment I'm using Fuel 4.0 deployment with virtio drivers at v 0.1-74
17:18 Dr_Drache richardkiene_, I think those are the only virtio drivers available.
17:18 Dr_Drache i think the virtio versions are lagging behind KVM dev.
17:18 Dr_Drache is what the issue is
17:21 richardkiene_ Dr_Drache: Gotcha, I'm just trying to figure out why my somewhat dormant windows boxes just stop working until woken up with the console and/or rebooted
17:22 richardkiene_ They'll keep perfect time and work great during the day while they get outside traffic hitting them, but at night when the work goes away, they just stop working
17:22 richardkiene_ If we run an app that writes to the disk throughout the night you can actually see that it stops working and resumes when you wake the box
17:22 xarses richardkiene_: the few of us that run windows inside KVM just use the binaries you can download from fedora
17:23 angdraug joined #fuel
17:23 xarses richardkiene_: never seen the nodes just "stop doing anything" without being poked
17:23 xarses but our use is usually a desktop vm
17:24 richardkiene_ xarses: Yeah it is the strangest thing in the world, there is no kernel event that takes place letting the box know it is going down, or sleeping or anything. It just stops working.
17:24 richardkiene_ internal work is not enough to keep the box alive, there have to be "sufficient" external requests
17:25 richardkiene_ What version of windows have you been successful with?
17:25 joek_garboden joined #fuel
17:25 xarses richardkiene_: your best bet is to ask the murano project guys, they do the most with windows
17:25 vkozhukalov joined #fuel
17:32 IlyaE joined #fuel
17:36 Dr_Drache xarses, those are the same files
17:37 xarses Dr_Drache: that's all i know about, sorry =(
17:37 Dr_Drache xarses, same. I think that's something redhat should know.
17:37 Dr_Drache richardkiene_, are the VMs recently updated?
17:38 richardkiene_ Um they're patched and up to date windows server 2012 and windows server 2012 R2 boxes
17:38 richardkiene_ with the latest virtio drivers
17:39 Dr_Drache richardkiene_, I'd assume so, but you'd never know sometimes
17:39 richardkiene_ we've also created the image using all the cloud base bits
17:39 richardkiene_ so cloud-init, etc.
17:39 Dr_Drache in my case, i JUST got rid of my last NT4 box
17:40 Dr_Drache like, this week
17:40 richardkiene_ found this: https://lists.gnu.org/archive/html/qemu-devel/2013-10/msg02835.html
17:41 richardkiene_ looks like a rtc/hpet issue
17:41 Dr_Drache yea, was it patched?
17:41 richardkiene_ but I've told windows to use the platform clock, instead of HPET
17:41 richardkiene_ was what patched?
17:43 Dr_Drache that bug.
17:45 joel_garboden joined #fuel
17:46 Dr_Drache it seems HPET works better than rtc
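(Where those knobs live; the libvirt stanza is a typical KVM guest clock of the era, not necessarily what fuel generates, and the bcdedit lines are the Windows-side "platform clock" switch richardkiene_ mentions:)

    # on the host, via `virsh edit <domain>`:
    #   <clock offset='utc'>
    #     <timer name='rtc' tickpolicy='catchup'/>
    #     <timer name='hpet' present='no'/>
    #   </clock>
    # inside the windows guest:
    #   bcdedit /set useplatformclock true       (force the platform clock)
    #   bcdedit /deletevalue useplatformclock    (revert to the default)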
17:51 mutex joined #fuel
17:51 IlyaE joined #fuel
18:02 mihgen finally released, folks
18:02 mihgen 4.1 is up
18:02 Dr_Drache bet we can't get a direct link, can we?
18:03 TVR___ please?
18:03 mihgen you can build an ISO yourself from stable/4.1 or register on software.mirantis.com to download :)
18:06 TVR___ grabbing both ISO and IMG
18:06 Dr_Drache TVR___, round 2?
18:06 Dr_Drache :P
18:06 TVR___ there will be many a destroyed-harshly controller this weekend
18:07 Dr_Drache hehe
18:07 Dr_Drache I think I'm going to work the weekend
18:07 Dr_Drache 70 hour week!
18:07 Dr_Drache :P
18:08 TVR___ yea, but testing it is a bit fun for me... I don't know... I kind of like the whole idea of watching stuff survive catastrophe
18:09 mihgen TVR___: looking forward to getting some bug posts from you :)
18:10 TVR___ heh.. will try
18:10 Dr_Drache well, crap.
18:10 mihgen what's the download speed for you btw?
18:11 TVR___ ummm.. slow... but I have a meeting any sec for an hour so, no worries
18:11 Dr_Drache 100K
18:11 mihgen it's all on rackspace CDN.. it's not great I assume, but should be better in a few hours
18:11 mihgen 100k is nothing :(
18:11 Dr_Drache 6 hours to download. that's my "well, crap".
18:11 mihgen well I hope we will create torrent for 5.0 release
18:12 TVR___ spoke with an individual that is working with rackspace and it seems they have been having issues with bringing up an OpenStack environment for them....
18:12 mihgen so you think the 100k is because of your slow connection?
18:12 Dr_Drache mihgen, this is on a 100mbit line.
18:12 TVR___ mine is ~ 5 min on one and ~ 10 min on the other
18:13 Dr_Drache so, it could just be locations.
18:13 TVR___ probably, yea...
18:13 mihgen TVR___: download in 5min? what is your location?
18:13 mihgen Dr_Drache: and what's your location?
18:13 TVR___ watertown... near boston
18:13 Dr_Drache mihgen, michigan.
18:13 TVR___ so far I have pulled 2.3G
18:13 mihgen interesting.
18:14 mihgen if fast was in US and slow in Europe, I could understand
18:14 mihgen but Boston & Michigan… hm
18:14 Dr_Drache how big is the iso?
18:14 Dr_Drache 1.7?
18:15 TVR___ pulling ~3% to ~4% network speed from my 1G nic
18:16 mihgen should be about 1.7G, yep
18:19 Dr_Drache mihgen, I got it going better now
18:19 Dr_Drache 1.2MB/s
18:20 crandquist joined #fuel
18:20 Ch00k joined #fuel
18:21 Dr_Drache mihgen, you have an md5?
18:21 mihgen yep, it's here: http://software.mirantis.com/quick-start/
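(Verifying the download against that page; the ISO file name is a guess:)

    md5sum MirantisOpenStack-4.1.iso
    # compare with the hash published on software.mirantis.com/quick-start/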
18:22 * mihgen heading to the office.. will be in touch soon
18:22 Dr_Drache this david fishman
18:22 Dr_Drache likes sending me emails about mirantis :P
18:22 mihgen =))
18:22 mihgen he is crazy marketing man)
18:26 xarses heh, yes crazy
18:27 Dr_Drache lol
18:27 IlyaE joined #fuel
18:35 mihgen joined #fuel
18:37 Dr_Drache woot, installing
18:37 Ch00k joined #fuel
18:40 crandquist joined #fuel
18:49 Dr_Drache or not.
18:54 Matt_V joined #fuel
19:12 xarses oops?
19:14 Dr_Drache yea
19:14 Dr_Drache it was a user oops
19:15 justif is there a way to update to fuel 4.1 without reinstalling?
19:15 Dr_Drache not at all
19:16 Dr_Drache well, yes there is. but by the time you're done, you'd have saved a few hours by reinstalling
19:16 justif ok
19:16 TVR___ go forward in time.... upgrade to FUEL 6.0, then regress it back to 4.1
19:16 TVR___ heh
19:17 Ch00k joined #fuel
19:17 TVR___ sorry man.. we went over that in depth, and there is a migration path but not an upgrade path
19:17 TVR___ we begged and begged, but it just isn't there yet
19:18 xarses TVR___: fuel upgrades is a 5.0 feature, sorry
19:19 TVR___ yea.. we chatted in depth about it... justif was asking... so first I was a smart-ass, and then I answered him helpfully
19:20 Topic for #fuel is now Fuel 4.1 for Openstack: http://fuel.mirantis.com/ | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
19:21 xarses TVR___: you're welcome to help make it work =)
19:22 xarses TVR___: or propose and implement anything else you are interested in.
19:22 TVR___ god... if only I could code.... I often wonder what would be different in my career if I had stuck with it past BASIC
19:25 Dr_Drache TVR___, same here, past C.
19:25 Dr_Drache well, I would be homeless.
19:25 Dr_Drache I suck at code.
19:26 Dr_Drache xarses,
19:27 Dr_Drache got a bug
19:27 Dr_Drache :P
19:28 Dr_Drache http://imgur.com/08d27J5
19:28 xarses_ joined #fuel
19:29 Dr_Drache xarses_,
19:29 Dr_Drache http://imgur.com/08d27J5
19:33 Dr_Drache bbiab
19:37 Dr_Drache back
19:38 Dr_Drache mihgen, you hiding?
19:39 mihgen Dr_Drache: what's up?
19:39 Dr_Drache http://imgur.com/08d27J5
19:39 Dr_Drache lol
19:40 Dr_Drache 2 installs so far of master node
19:40 mihgen whooops
19:40 mihgen wtf
19:41 mihgen may be browser thing?
19:41 xarses_ joined #fuel
19:41 mihgen very weird
19:41 xarses_ Dr_Drache: what am i looking at?
19:42 mihgen I mean caching
19:42 xarses_ oh, the assigned roles is missing
19:42 xarses_ clear cache
19:42 Dr_Drache it's chrome.
19:42 Dr_Drache LOL
19:43 Dr_Drache I know it couldn't be a REAL bug, just humor.
19:44 mutex so
19:44 mihgen I'm pissed off, really =)
19:44 mutex neutron-server is supposed to be running on all my master nodes right?
19:44 mutex or controller nodes
19:50 Dr_Drache mihgen, xarses, I can't see everything that's changed, but so far, the deploy process seems "more polished"
19:57 crandquist joined #fuel
20:24 xarses Dr_Drache: there is a [X] button to stop the deployment if its in progress
20:24 Dr_Drache yea
20:24 xarses Dr_Drache: also in the UI you can setup bond's from the network page
20:24 xarses You should be able to install ubuntu just fine
20:25 Dr_Drache xarses, nope
20:25 Dr_Drache can't
20:25 TVR___ Dr_Drache.. give me control of the UI... I ~promise~ I won't hit the stop button when it is almost finished.....
20:25 xarses Still, I'll be damned
20:25 Dr_Drache TVR___, doesn't deploy :P
20:25 Dr_Drache xarses, the fixes didn't get put in
20:26 xarses Dr_Drache: what fix?
20:26 Dr_Drache virtual consoles and timeouts
20:26 Dr_Drache in the cobbler script.
20:27 Dr_Drache yeapper, just physically checked the systems
20:28 Dr_Drache 3 controllers are frozen @ grub, 3 computes are frozen at the system V
20:32 Dr_Drache xarses, in Pmanager.py
20:36 IlyaE joined #fuel
20:41 Dr_Drache xarses, line 903; still had the virtual console, and not the rootdelay
20:41 Dr_Drache "rootdelay=90 nomodeset\"/g' /etc/default/grub", True)
20:53 Matt_V can anyone tell me which version of qemu ships with 4.1?
20:54 Matt_V for deployment on ubuntu
20:54 Dr_Drache if I can get it deployed, I can tell you if someone doesn't speak up first
20:55 Matt_V thanks
20:55 vkozhukalov joined #fuel
20:55 Dr_Drache just waiting for the last node.
21:07 TVR___ clear cache in chrome.. does that show roles to assign?
21:08 TVR___ yes.. that does it
21:09 Dr_Drache TVR___, it didn't yet for me. but it's not an actual issue.
21:11 TVR___ I'm out.. might check in over the weekend if anyone's on..
21:12 TVR___ will deff be on this tonight.. my cluster is now-a-building
21:18 Dr_Drache Matt_V,
21:18 Dr_Drache I cannot help you today
21:18 Dr_Drache 4.1 is undeployable for me
21:24 vk joined #fuel
21:30 Dr_Drache xarses, not that it matters, I'm leaving soon, but after editing the pmanager.py file, computes finish install, the controllers freeze @ : bio create slab <bio-1> at 1
21:31 xarses =(
21:32 Dr_Drache yea
21:32 Dr_Drache no idea what changed to make that happen
21:36 Dr_Drache just extremely frustrating to have regressions like this.
21:36 Dr_Drache but it happens
21:43 bookwar1 joined #fuel
21:45 IlyaE joined #fuel
22:04 IlyaE joined #fuel
22:20 dirrector joined #fuel
22:23 mutex ug
22:23 mutex dhcp service says it is out of leases
22:23 mutex really? 20 / 250 IPs used
22:23 mutex must be some other problem
22:35 xarses mutex: whose dhcp, fuel's?
22:35 mutex neutron dhcp
22:35 xarses odd
22:35 mutex i find these processes need to be restarted on a pretty regular basis
22:36 mutex yesterday I created a router via horizon, but it wouldn't appear
22:36 mutex i had to restart neutron-server and l3-agent a couple of times
22:37 xarses mutex: fuel 4.0?
22:40 xarses mutex: if so, you might want to check if you applied the fix for this bug https://bugs.launchpad.net/fuel/+bug/1269334
22:40 mutex yeah
22:40 mutex oh yeah, already applied that one
22:40 mutex it at least made networking function for more than 5 minutes
22:40 xarses you hacked the ocf scripts into each controller?
22:40 xarses ok
22:40 mutex yes
22:40 mutex it's a one-character change
22:41 xarses in 4 files =(
22:41 mutex yeah yeah
22:41 mutex x controllers
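(The rollout mutex describes, sketched as one loop; the node names, file names, and the mirantis ocf path are all assumptions:)

    for n in node-1 node-2 node-3; do
      scp neutron-agent-* $n:/usr/lib/ocf/resource.d/mirantis/
    done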
22:41 mutex anyway, i'm still learning how to debug these problems so I'm sure i'll get more clarity as time goes on
22:42 mutex i have a cron job on fuel that restarts the dhcp agent every hour
22:42 mutex just to make sure
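(Roughly that hourly kick, as a crontab entry on the fuel master; the node name and the pacemaker resource name are assumptions:)

    0 * * * * ssh node-1 'crm resource restart p_neutron-dhcp-agent'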
22:48 IlyaE joined #fuel
22:59 mutex well I'm going to try and create another network/router and see what happens
22:59 mutex but yesterday it totally failed
23:04 piousbox joined #fuel
23:04 piousbox hello all
23:04 piousbox I have a question. Should each compute node be connected to the outside internet?
23:05 piousbox For example, have a network interface that is bridged with the host's?
23:12 mutex yeah man, l3 agent totally did not create a new router
23:12 mutex once I restarted the daemon it worked
23:12 mutex and the router was created
23:27 IlyaE joined #fuel
23:36 albionandrew joined #fuel
23:37 albionandrew left #fuel
23:39 maximya joined #fuel
23:40 albionandrew joined #fuel
23:40 albionandrew left #fuel
23:44 vkozhukalov joined #fuel
23:49 IlyaE joined #fuel
