
IRC log for #fuel, 2014-07-29


All times shown according to UTC.

Time Nick Message
00:21 rmoe joined #fuel
00:32 kupo24z joined #fuel
00:40 KodiakF_athome joined #fuel
00:43 KodiakF_athome I'm assuming the Fuel virtualbox scripts are on the main Mirantis Openstack .iso file after you register for an account?  It wasn't super clear in the docs.
00:44 KodiakF_athome I'm hoping to either adapt the scripts or the extended documentation for Virtualbox into KVM which is what we've all switched to at home and work.
00:44 KodiakF_athome But for now I'll go ahead and do Virtualbox for old time's sake :)
00:46 vidalinux KodiakF_athome, lol
00:46 KodiakF_athome ??
00:46 vidalinux KodiakF_athome, virtualbox sucks a lot
00:46 KodiakF_athome The throwback nod to Virtualbox?
00:47 KodiakF_athome haha yea well it's what Mirantis did the quickstart on and I sure can't blame them since KVM doesn't exactly help out the folks using Macbooks or worse
00:48 KodiakF_athome RHEL 6 all the way here so KVM usually - but that's OK I can power down my VMs for a bit to run Virtualbox to play with fuel
00:49 vidalinux you can do it manually on KVM
00:49 vidalinux create the private networks in virt-manage
00:49 vidalinux virt-manager*
00:50 vidalinux then assign the node interfaces to these networks
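The manual KVM setup vidalinux describes can be sketched with libvirt directly; the network and bridge names below are illustrative assumptions, not from the log:

```shell
# One isolated libvirt network per Fuel segment -- the KVM equivalent of a
# VirtualBox host-only network. Names (fuel-admin, virbr-fueladm) are made up.
cat > /tmp/fuel-admin.xml <<'EOF'
<network>
  <name>fuel-admin</name>
  <bridge name='virbr-fueladm'/>
</network>
EOF
# No <forward> element, so the network stays isolated (PXE/admin traffic only).
# Define and start it, then point each node VM's first NIC at it in virt-manager:
#   virsh net-define /tmp/fuel-admin.xml
#   virsh net-start fuel-admin && virsh net-autostart fuel-admin
```

Repeat with distinct names for the management, storage, and private segments so each maps to its own virtual switch.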
00:51 KodiakF_athome I'll probably do that - I'd like to give back so if cycles permit I'll script it and post it and maybe just maybe mirantis will provide that script as an alternate option for us KVM users
00:53 KodiakF_athome I know my way around virsh so it should be not that difficult - time is the only issue for me - this is strictly for play until mgmt blesses an abstraction on openstack rather than writing puppet mods for our existing infra
00:53 vidalinux I don't think mirantis will do this
00:54 vidalinux most people use virtualbox because it's friendly
01:03 angdraug actually Mirantis already uses KVM in-house in the dev process
01:03 angdraug and we have a guide on how to set it up:
01:03 KodiakF_athome angdraug: you script the setup w/ virsh yet?
01:04 angdraug http://docs.mirantis.com/fuel-dev/devops.html
01:04 angdraug we have a special tool called fuel-devops, based on python-libvirt
01:04 KodiakF_athome oh heck yes
01:04 angdraug it's more hassle to set up than virtualbox but totally worth it
01:04 angdraug I run it on my laptop with Debian/sid, no problemo :D
01:05 angdraug oh, and no, virtualbox scripts are actually not found on the iso itself
01:06 angdraug you can get them from the website as a separate zip, or just grab the source from github
01:06 angdraug https://github.com/stackforge/fuel-main/tree/master/virtualbox
01:07 KodiakF_athome angdraug:  Yea weirdest thing after I logged in for the first time, the download link for the zip turned into a redirect to the main software.mirantis.com site until I opened the quick start page again in a new tab (Midori browser so I blame that...)
01:08 vidalinux angdraug, nice
01:09 angdraug thanks )
01:10 vidalinux i got a strange symptom
01:11 vidalinux I got a strange issue: I configured neutron networking using GRE and everything works fine using a "cross-cable" between the two servers (1 controller and 1 compute), but if I remove the cross-cable and connect the two GRE interfaces via a switch, instances don't get an ip address from the dhcp-agent
01:11 vidalinux no idea why lol
01:13 adanin joined #fuel
01:16 KodiakF_athome It seems kind of like there should be a cluster_size=5 option for HA + cinder...
01:16 KodiakF_athome (in config.sh in the virtualbox quick start)
01:20 KodiakF_athome Does the quickstart honor virtualbox's VM folder preferences?
01:21 KodiakF_athome NM the quickstart scripts are beautiful
01:21 KodiakF_athome echo -e `VBoxManage list systemproperties | grep '^Default machine folder' | sed 's/^Default machine folder\:[ \t]*//'`
01:22 angdraug yeah, even works on CygWin
01:28 KodiakF_athome So Virtualbox has new stuff since like 3 years ago when I switched to KVM - what the heck does one do with the "Extension pack" on Fedora?
01:28 KodiakF_athome I've downloaded it but it's not a zip
01:28 KodiakF_athome NM found Oracle docs
01:32 angdraug sorry can't help much with virtualbox, I've set it up once and happily forgot about it as soon as I got kvm to work
01:33 mattgriffin joined #fuel
01:38 xarses joined #fuel
01:56 KodiakF_athome Ha no worries it was straightforward once I figured out what the heck it wanted.
01:57 KodiakF_athome Alrighty well I'm out for the night.  Looking forward to setting this up later on
02:01 mattgriffin joined #fuel
02:13 adanin joined #fuel
03:05 geekinutah joined #fuel
03:06 geekinutah question on CI for Fuel
03:06 geekinutah are there jobs that actually deploy Fuel on baremetal
03:06 geekinutah ?
03:14 adanin joined #fuel
04:16 adanin joined #fuel
04:21 ArminderS joined #fuel
04:24 vidalinux joined #fuel
04:43 jobewan joined #fuel
04:59 adanin joined #fuel
05:03 adanin joined #fuel
05:10 adanin joined #fuel
06:02 vidalinux joined #fuel
06:04 Longgeek joined #fuel
06:13 justif joined #fuel
07:02 pasquier-s joined #fuel
07:03 taj joined #fuel
07:03 taj joined #fuel
07:07 aglarendil geekinutah: it is better to ask in #fuel-dev channel as it is related to development
07:17 artem_panchenko joined #fuel
07:30 hyperbaba joined #fuel
07:53 al_ex11 joined #fuel
07:54 odyssey4me joined #fuel
07:56 odyssey4me joined #fuel
08:13 adanin joined #fuel
08:20 e0ne joined #fuel
08:28 e0ne joined #fuel
08:31 sc-rm evg_: Then the big question is, does openstack deployed through fuel run the instances with qemu --enable-kvm or without qemu --enable-kvm?
08:51 pasquier-s_ joined #fuel
09:12 e0ne joined #fuel
09:28 evg_ sc-rm: if you've chosen "KVM", yes, qemu --enable-kvm
09:28 sc-rm evg_: cool, then it’s fine with me and just some visual bug in openstack
09:32 e0ne joined #fuel
09:33 evg_ sc-rm: yes, there was a bug launched for nova.
09:35 sc-rm evg_: cool :-)
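A quick way to confirm what sc-rm was asking, assuming shell access to a compute node (a sketch; instance names are up to you):

```shell
# Guests started with KVM acceleration carry -enable-kvm on the qemu command
# line; pure emulation does not. List which modes the running guests use:
ps -eo args | grep -o '\-enable-kvm' | sort -u
# libvirt records the same fact in each guest's XML (type='kvm' vs type='qemu'):
#   virsh dumpxml <instance> | grep -o "domain type='[a-z]*'"
```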
09:38 topochan joined #fuel
10:38 taj joined #fuel
10:54 KodiakF joined #fuel
11:16 odyssey4me joined #fuel
11:32 Longgeek joined #fuel
11:55 Longgeek joined #fuel
12:00 pasquier-s joined #fuel
12:00 adanin joined #fuel
12:04 krisch joined #fuel
12:06 krisch hi, has anyone of you had problems using the PXE booting feature of fuel? I don't get the "node unallocated" notification and the bootstrap centos doesn't have an assigned ip address
12:13 sikor_sxe joined #fuel
12:16 igalic joined #fuel
12:19 krisch i also filled in a request on the mirantis site: https://mirantis.zendesk.com/requests/2361
12:20 HeOS joined #fuel
12:25 evg_ krisch: hi, do you mean the node boots in the bootstrap and then doesn't get IP?
12:27 krisch @evg_: yes, based on a tcpdump it gets an ip address of the correct pool, unfortunately if i login in bootstrap, eth0 has no ip and so no connection to fuel master
12:28 sikor_sxe i have the weird problem that i cannot create instances w/ tiny flavours. i used the neutron/vlan configuration in mirantis fuel 5.0 and receive "Error: Failed to launch instance "9": Please try again later [Error: Error during following call to agent: ['ovs-vsctl', '--timeout=120', 'del-port', 'br-int', u'qvofc6b629b-52']]." when trying to start a tiny instance
12:28 sikor_sxe the same image works with "small" instances, though
12:30 sikor_sxe i suspect a timing issue, as small instances take a while longer to spawn (probably because the hd image needs to be created), some openvswitch stuff is set up meanwhile, while tiny images are spawned too quickly
12:32 sikor_sxe makes sense?
12:33 sikor_sxe when i change the tiny flavour to 20gb root partition it works aswell
12:35 evg_ sikor_sxe: it's, i think, something about the resource restrictions of the flavor/boot image. Check/increase the minRAM or RootDisk parameter.
12:35 sikor_sxe yeah i was tuning that
12:36 sikor_sxe the flavour is vanilla centos 6.5 cloud image
12:36 evg_ sikor_sxe: yes, check image parameters.
12:36 sikor_sxe which worked fine on nova-network setup
12:37 sikor_sxe min_disk & min_ram is both 0
12:37 sikor_sxe so it should work for tiny instances, right?
12:38 evg_ sikor_sxe: right...
12:38 sikor_sxe and when i edit the flavour and make it 20gb it works
12:39 evg_ sikor_sxe: do you mean 20G for root disk?
12:39 sikor_sxe yup
12:40 sikor_sxe 512mb ram and 20gb root
12:40 sikor_sxe 512mb ram and 10gb root does not work
12:40 evg_ sikor_sxe: and what is the size of your vanilla centos?
12:40 sikor_sxe 350mb
12:43 sikor_sxe 15gb root does not work either
12:45 evg_ sikor_sxe: hm, i don't know, maybe you should try setting some reasonable numbers instead of zeros
12:45 sikor_sxe i think i tried that, but i'll do again
12:46 evg_ sikor_sxe: just a supposition
12:51 sikor_sxe nope
12:51 sikor_sxe does not help
12:53 sikor_sxe are there patches from mirantis?
12:57 evg_ sikor_sxe: what patches?
12:58 sikor_sxe i think there is an openstack bug about this
13:05 sikor_sxe https://review.openstack.org/#/c/101090/
13:10 evg_ sikor_sxe: it seems not your case
13:11 krisch joined #fuel
13:13 evg_ sikor_sxe: what is a format of the image your're booting from?
13:14 sikor_sxe qcow2
13:16 evg_ sikor_sxe: what if you run  "qemu-img info your image"
13:19 sikor_sxe ahh
13:19 sikor_sxe thank you :)
13:19 sikor_sxe virtual size: 16G (17179869184 bytes)
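The failure mode sikor_sxe hit reduces to a size comparison: the image's virtual size, as reported by `qemu-img info`, must fit in the flavor's root disk. A sketch of that check, using the numbers from this log:

```shell
# qemu-img info reported: virtual size: 16G (17179869184 bytes).
# A tiny flavor's 10G root disk can't hold it, so the spawn fails; a 20G
# root disk can, which matches what sikor_sxe observed.
image_virtual_size=17179869184                  # bytes, from qemu-img info
flavor_root_disk=$((10 * 1024 * 1024 * 1024))   # tiny flavor's 10G root disk
if [ "$image_virtual_size" -gt "$flavor_root_disk" ]; then
  echo "image needs $((image_virtual_size / 1024 / 1024 / 1024))G, flavor root disk is too small"
fi
```

Setting `min_disk` to 16 on the image (rather than leaving it 0) would presumably have surfaced this as an explicit validation error instead of a failed spawn.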
13:24 adanin joined #fuel
13:29 mattgriffin joined #fuel
13:41 al_ex11 joined #fuel
13:46 geekinutah joined #fuel
14:10 KodiakF Anyone have a good comparison matrix for Mirantis Openstack / Fuel vs Nebula One?  The mgmt here is pushing for Nebula One
14:12 e0ne joined #fuel
14:13 teknoprep joined #fuel
14:13 teknoprep hi all
14:14 teknoprep when i have say 5 nodes and a controller.. it passes network tests no problem
14:14 teknoprep when i have all 40 nodes it fails on all VLANs on ALL nodes
14:14 teknoprep is there a bug with 5.0 ?
14:44 teknoprep ????
14:47 dhblaz joined #fuel
14:49 dhblaz We are about to go live with another deployment does anyone know if a new release is expected by the end of the week?
14:55 ArminderS joined #fuel
15:05 angdraug joined #fuel
15:14 jobewan joined #fuel
15:15 xarses joined #fuel
15:15 teknoprep hey xarses
15:15 xarses hi
15:16 dhblaz joined #fuel
15:17 teknoprep is there an issue with verify networks on fuel for larger number of nodes ?
15:17 TVR_ hey xarses
15:17 TVR_ just saying hello
15:20 xarses teknoprep: not that I'm aware of, but it also would not surprise me
15:21 teknoprep so if i have like 3 - 6 nodes with 1 controller it works most of the time
15:21 teknoprep go up to 15 compute nodes with 1 controller
15:21 teknoprep fails every time
15:21 al_ex11 joined #fuel
15:25 blahRus joined #fuel
15:25 krisch joined #fuel
15:27 xarses hmm, deployed nodes?
15:28 teknoprep none so far
15:28 teknoprep hey so i only have one question about netron l3
15:28 teknoprep does the Internet Network Gateway... does that need to be an actual physical router.. or is that IP address a virtual ip that Neutron manages ?
15:29 teknoprep i am thinking that i may have a few issues because i have assigned the internet network gateway to a Cisco Router for routing that network
15:29 xarses is this the value in the public network range?
15:29 teknoprep i am thinking that i should not have done that
15:29 teknoprep no its internal
15:29 teknoprep 10.50.1.1
15:29 teknoprep our public router is a physical router
15:29 teknoprep for the Floating IP Ranges
15:29 teknoprep and Public IP Ranges Gateway
15:30 TVR_ it only has to be an ip on a device that has routes
15:31 adanin joined #fuel
15:31 teknoprep 10.50.1.1 needs to be an IP address on a physical device ?
15:31 Bomfunk joined #fuel
15:32 TVR_ as example... I have a site with a 4.31.x.x floating IP range, and I have instances boot to either the 10.x.x.x or the 64.71.x.x network... and although the 64.71.x.x is a real CIDR, the .1 gateway has routes to the real world as the router knows of both real blocks
15:33 teknoprep ok so you have used 10.x.x.1 and 64.71.x.1 as the Gatway on a physical router
15:33 teknoprep or the IP on a physical router
15:33 teknoprep neutron does not have a virtual router for the internal network is all i am asking
15:33 xarses starting 54 nodes to test, KSM for the win
15:33 teknoprep for this Internal L3 Gateway IP address
15:34 xarses teknoprep: where is the field that you are setting this value for?
15:35 teknoprep Fuel -> Networks -> Neutron L3 Configuration
15:35 teknoprep Internet Network gateway
15:35 xarses one moment
15:47 teknoprep xarses, thats actually internal network gateway... not internet network gateway
15:47 xarses ok, that one is just the address for the network in internal network cidr
15:47 teknoprep yes i understand that
15:47 teknoprep does it need to be a physical router ?
15:47 teknoprep or does neutron build a virtual router using NAT for me /
15:47 xarses it will be virtual
15:47 teknoprep ahh
15:48 teknoprep so that IP should not be defined anywhere else
15:48 teknoprep i'll change that up now
15:48 xarses it can if you don't care that it's natted
15:48 teknoprep ?
15:48 teknoprep i do want NAT from Neutron
15:48 teknoprep Neutron provides NAT by using that IP address correct ?
15:49 xarses neutron will create a virtual network
15:49 dhblaz_ joined #fuel
15:49 xarses using whichever L2 provider you choose GRE/VLAN
15:49 teknoprep is GRE vs VLAN any better ?
15:49 teknoprep or is it just preference ?
15:50 teknoprep GRE seems to have the ability to have more private networks
15:50 xarses it's a flexibility thing, GRE can have more networks, but can be slightly slower if the network isn't configured well
15:50 teknoprep gotcha
15:50 teknoprep one last thing
15:50 teknoprep have you had any problems when clicking verify networks ?
15:51 teknoprep i have nothing but problems once i get around 10 nodes installed
15:51 xarses so after neutron creates the L2 network for you, you have the option of creating a L3 routing for it
15:51 xarses teknoprep: deployed or still in bootstrap?
15:51 teknoprep bootstrap
15:51 xarses I'm starting to test
15:51 xarses my venv is almost done building
15:51 teknoprep ok
15:52 odyssey4me_ joined #fuel
15:52 teknoprep did you verify networks in bootstrap mode ?
15:52 xarses so the L3 configuration you see on this page is for the default network that fuel will create for you (net04)
15:52 teknoprep gotcha
15:53 xarses the confusing part is that the floating range in this section is actually a sub-section of the public range at the top of the page
15:53 teknoprep no i understand that
15:53 teknoprep i have deployed Nova before so i knew that.. just never used Neutron
15:53 xarses once everything is deployed, you can create more networks using openstack directly
15:54 teknoprep ok
15:54 teknoprep nice
16:01 ArminderS- joined #fuel
16:03 teknoprep i'll bbiab
16:04 teknoprep i am heading home to work instead of the office at this point
16:04 teknoprep later all
16:11 xarses joined #fuel
16:24 odyssey4me__ joined #fuel
16:25 rmoe joined #fuel
16:27 odyssey4me joined #fuel
16:29 teknoprep joined #fuel
16:29 teknoprep hey xarses
16:29 teknoprep did you get everything working ?
16:36 xarses teknoprep: I've reproduced the issue somewhere between 11 and 13 nodes, trying 12 now
16:36 teknoprep w0ot
16:36 teknoprep thats good to know
16:39 xarses ok, 12 fails too
16:40 teknoprep yeah lol
16:40 xarses so it breaks at 12 nodes
16:40 teknoprep w0ot... thats great news
16:40 xarses 11 works
16:40 teknoprep try 11 a few times
16:40 teknoprep it'll fail
16:40 teknoprep 9 compute nodes works.. and fails randomly
16:41 xarses ya, poking the code now
16:41 teknoprep this isn't actually hurting anything is it
16:42 xarses no, it's not required for deployment either
16:42 xarses it just means that you can't pre-validate all of the nic interfaces, which is one of the most common deployment failure reasons
16:42 sikor_sxe hello, where do i change the domain name for a tenant in fuel 5.0?
16:43 sikor_sxe i know i can set the nameserver via the subnet's dhcp server, but there is no way to specify the domain
16:44 xarses sikor_sxe: iirc its a property of the sub_network in openstack
16:45 Longgeek_ joined #fuel
16:45 teknoprep is there an issue when you add the 3rd controller ... all nodes disapear
16:46 sikor_sxe xarses: i don't see it
16:46 teknoprep we are having a lot of issues with Fuel 5.0... getting a bit crazy lol
16:46 sikor_sxe in icehouse
16:47 xarses teknoprep: no, that shouldn't happen, there is a problem with the node data on the 3rd node. I thought we armored the interface from being able to do that anymore
16:47 xarses there will be a traceback in the nailgun log
16:47 teknoprep ok i'll look into it
16:48 teknoprep is there a way to get back to having access to my nodes if that does happen
16:48 teknoprep can i just remove the error'd controller
16:51 xarses from the cli 'curl -X DELETE http://localhost:8000/api/nodes/<node id>' node_id is the first column in the output of 'fuel node' if  fuel node doesn't work then you need to find the node id in the logs where is raised the traceback
16:51 xarses teknoprep: it will have raised a traceback in the nailgun log, its the only thing that can make the nodes list go wonky like that
16:52 xarses it's usually going to be a message about not being able to find an admin interface
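xarses' recovery steps can be sketched end to end; the error-status filter and column layout are assumptions about the `fuel node` output, and the endpoint is the one quoted above:

```shell
# List nodes and pull the ids of error'd ones out of 'fuel node' output
# (assumes a pipe-separated table with the id in the first column):
fuel node 2>/dev/null | awk -F'|' '$2 ~ /error/ {gsub(/ /, "", $1); print $1}'
# Then delete each broken node from Nailgun via the API, as described above:
#   curl -X DELETE http://localhost:8000/api/nodes/<node id>
```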
16:56 dhblaz We are about to go live with another deployment; does anyone know if a new release is expected by the end of the week?
16:57 Longgeek joined #fuel
16:57 teknoprep xarses whats the process of getting the node_id ?
16:57 xarses 'fuel node' on the cli
16:57 teknoprep ahh
16:57 teknoprep thanks lol
16:57 xarses or search the log for the id it raised the Traceback on
16:58 Longgeek_ joined #fuel
16:58 teknoprep fuel node from the CLI of a controller does nothing
16:58 xarses on the fuel master node?
16:59 teknoprep nvm i just tried that
16:59 teknoprep thanks
16:59 sikor_sxe don't you guys use your own domain names?
16:59 sikor_sxe i don't find anything but crude hacks to do so
17:00 sikor_sxe i know it's the same on hp helion, so i guess that's openstack specific
17:00 sikor_sxe instances cannot resolve themselves (using neutron)
17:00 teknoprep xarses, thanks for the help so far man
17:00 sikor_sxe putting stuff in /etc/hosts works sometimes, but for oracle it does not
17:01 xarses teknoprep: i'd like to get access to a support bundle from your fuel node if possible to see what caused that Traceback in the cluster details
17:02 teknoprep its already gone
17:02 teknoprep we reinstalled
17:02 teknoprep if we get it again i'll post
17:02 xarses you re-installed fuel?
17:03 teknoprep yes
17:03 teknoprep it's a VM... we have a snapshot of what it looks like after default install
17:03 teknoprep sorry
17:03 teknoprep VMware VM
17:03 teknoprep we are also running the controllers inside VMware
17:03 teknoprep the Compute / CEPH nodes are all hardware
17:05 xarses so you re-installed (or reverted a snapshot) fuel after the adding of that one controller node caused the cluster contents to disappear?
17:06 xarses teknoprep: thats fine
17:06 teknoprep i didn't do that
17:06 teknoprep someone at the DataCenter did
17:06 teknoprep he just does what he wants...
17:07 teknoprep today we had a talk about how to get this done.. so i am making a log of all issues and working through them 1by1
17:07 teknoprep hopefully we'll get this worked out
17:07 xarses sikor_sxe: correct, it would be an openstack issue. I'm guessing that you want the domain name set by the dhcp server?
17:11 Longgeek joined #fuel
17:12 teknoprep so our controller is up
17:12 teknoprep but the compute storage ceph nodes error'd out
17:13 Longgeek_ joined #fuel
17:15 e0ne joined #fuel
17:17 teknoprep does a default install need internet access ?
17:17 xarses teknoprep: no, it does not need internet access
17:17 angdraug joined #fuel
17:17 teknoprep what about time sync ?
17:17 xarses they will sync to the fuel master node
17:18 xarses so they will all have the same (but possibly wrong) time
17:18 xarses what was the error message for the compute / ceph storage nodes?
17:18 xarses there should be a warning or higher message in the node's puppet log
17:21 jaypipes joined #fuel
17:22 teknoprep which Err would you like to see
17:22 teknoprep tons of them
17:22 teknoprep (/Stage[main]/Ceph::Osd/Exec[ceph-deploy osd prepare]/returns) change from notrun to 0 failed: ceph-deploy osd prepare node-211:/dev/sdb4 returned 1 instead of one of [0]
17:23 teknoprep can i remove these nodes to restart using that curl command you gave me ?
17:28 xarses you shouldn't need to, it also probably wont help with that issue
17:29 xarses can you paste /root/ceph.log from that node
17:33 teknoprep whats the best way to login to these servers over ssh ?
17:33 teknoprep root is blocked
17:35 xarses ssh from the fuel node, it has a ssh key
17:35 xarses as root
17:35 teknoprep yeah i just noticed that
17:35 teknoprep thanks
17:36 teknoprep http://pastebin.com/V1n42cNM
17:36 teknoprep xarses, http://pastebin.com/V1n42cNM
17:41 rmoe teknoprep: looks like you're hitting this bug: https://bugs.launchpad.net/fuel/5.0.x/+bug/1323343
17:52 rmoe sikor_sxe: you're trying to set the FQDN of your VMs?
18:03 teknoprep rmoe, what should i do to fix ?
18:10 e0ne joined #fuel
18:10 teknoprep what is this ? https://fuel-jenkins.mirantis.com/
18:26 xarses dhblaz: we are waiting for resolutions to two critical bugs before releasing 5.0.1, however it's already 3 weeks late, so no solid lead on the release date yet
18:27 xarses 5.1 is due mid to late August
18:28 dhblaz Which bug would be most helpful for me to work on?
18:28 xarses Also, in case you care, we have public nightly builds from master now https://wiki.openstack.org/wiki/Fuel#Nightly_builds
18:28 xarses master is currently pre 5.1
18:29 dhblaz Does that mean that master is a tracking the branch that will release 5.1 or 5.0.1?
18:32 e0ne joined #fuel
18:34 angdraug master branch in fuel-* git repos is tracking whatever is current release focus in LP, which currently is 5.1
18:34 xarses master is allways tracking the next release
18:35 angdraug when 5.1 enters hard code freeze, we'll create stable/5.1 branches in all repos, and master will be tracking 6.0
18:35 xarses 5.0.1 is a stable release so there is a branch for it
18:43 angdraug dhblaz: https://bugs.launchpad.net/fuel/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.milestone%3Alist=63962&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&f
18:43 angdraug ield.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search
18:43 xarses spam
18:43 xarses use pastebin
18:43 xarses ;P
18:44 angdraug how's that: http://tinyurl.com/o9p5ld9
18:44 xarses http://bit.ly/1toriAj
18:44 xarses better
18:45 dhblaz angdraug: this doesn’t focus on 5.0.1 milestone bugs
18:45 angdraug oh, 5.0.1 is just held up by 1 bug
18:46 angdraug https://bugs.launchpad.net/bugs/1340711
18:47 angdraug which is our catch-all for "failover doesn't work, again..."
18:47 dhblaz Its a hard thing to get right
18:48 angdraug yup. we had it mostly right with havana in 4.1.1, icehouse brought a whole new pack of issues
18:48 dhblaz It isn’t clear what is left to do for 1340711
18:49 angdraug if it was clear we would've fixed it by now :(
18:49 angdraug one thing we're trying to do is https://bugs.launchpad.net/nova/+bug/856764
18:49 dhblaz I would argue 4.1.1’s HA brought lower availability than not having the features at all mostly due to running mysql through haproxy with no check that the node could service requests.
18:50 angdraug it's a tricky one, and testing shows that by itself it's still not enough, we've had to fix a bunch of stuff in neutron and cinder on top of that, too
18:50 xarses dhblaz: that is still the case for 5.0 and 5.0.1, it should be fixed in 5.1
18:50 angdraug and it was the case pre-4.1.1, we've always had mysql behind haproxy
18:52 christopheraedo joined #fuel
18:55 dhblaz right and use the ha-proxy mysql ping method which isn’t sufficient for galera
18:56 dhblaz xarses: I thought the fix was to monitor the response time.  Do you know why that got pushed to 5.1?
18:57 gleam joined #fuel
18:57 aedocw joined #fuel
18:59 xarses there wasn't bandwidth to implement the fix that was desired
19:00 xarses AFAICT we are now using percona's cluster-check tool to monitor the state of the galera nodes and proactively remove nodes that are not in ready states from the vips using the admin socket
19:09 dhblaz do you know if expired token flushing has been implemented?  I can’t find a blueprint for it.
19:12 xarses out of the database?
19:13 xarses we aren't using the database to store tokens anymore, they now live in memcache, so they should age out on their own
19:26 dhblaz great
19:42 taj joined #fuel
19:52 teknoprep joined #fuel
19:52 teknoprep hey xarses
19:52 teknoprep any idea wtf is going on ?
19:54 xarses :humms:: the sky is as blue and all the leaves a green. The sun is as warm as a baked potato
19:54 teknoprep lol
19:55 xarses oh, you probably meant something else
19:55 teknoprep yeah but thats fine
19:55 teknoprep nothing is as important as a bit of sarcasm
19:56 xarses what was your question in reference to?
20:05 teknoprep the issues we are having installing
20:18 adanin joined #fuel
20:23 kupo24z joined #fuel
20:23 kupo24z xarses: angdraug do you guys know if its possible to edit user-data post instance creation?
20:36 alpho2k joined #fuel
20:37 alpho2k left #fuel
20:48 xarses kupo24z: what user data?
20:53 kupo24z xarses: ec2 metadata, its in nova.instance user_data
20:54 kupo24z Trying to figure out a way to reset it on rebuild
21:06 teknoprep joined #fuel
21:06 teknoprep xarses, whacha think? i was trying to get this up and running but am having nothing but issues with the ceph-osd installations
21:07 teknoprep should i try and wipe all hdd's on servers that attach to the system
21:08 xarses teknoprep: the solution to the bug that rmoe referenced was that the partitions need to be wiped prior to attempting to redeploy
21:08 teknoprep yeah
21:08 teknoprep what a pita
21:08 xarses before that bug we would only dd the beginning of the disk, not each partition
21:09 teknoprep ok
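The workaround xarses describes for the ceph-deploy bug can be sketched like this; it is destructive, and the device name is an assumption, so verify the disk first:

```shell
# Zero the start of every partition, not just the disk, so stale ceph
# metadata can't survive a redeploy (the failure mode behind bug 1323343).
DISK=/dev/sdb          # illustrative; check with 'fdisk -l' before running
for part in "${DISK}"[0-9]*; do
  [ -e "$part" ] || continue
  dd if=/dev/zero of="$part" bs=1M count=16 conv=fsync
done
# 'wipefs -a' on each partition is a gentler alternative that clears only
# filesystem/raid signatures instead of zeroing data.
```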
21:10 kupo24z xarses: found a workaround with ssh 192.168.0.3 "mysql -e 'update nova.instances set user_data = \"$userData\" where uuid = \"$instance_id\"'"
21:11 kupo24z Looks like it was a blueprint for heat but has since been abandoned
21:11 xarses kupo24z: hmm, never played much with ec2 metadata so go figure
21:12 tatyana joined #fuel
21:17 teknoprep where does fuel master admin server keep the pxe boot settings for specific nodes
21:18 teknoprep i have nodes stuck in an old configuration that is not allowing for the servers to reboot and be added to a new environment
21:19 xarses that is all managed by cobbler, you can use the cobbler cli to remove the node
21:20 xarses off hand, i don't remember the exact syntax but it's something like cobbler node --remove --name=node-<node id>
21:20 teknoprep what if i do not have the node id
21:20 xarses hrm, cobbler system probably now that i think about it
21:20 xarses the name will be the same as the node's hostname
21:21 teknoprep i think i got it
21:21 xarses which is node-<id>
21:21 xarses id is usually found from 'fuel nodes' output
21:21 teknoprep yeah i did cobbler system list
21:21 teknoprep cobbler system remove --name node-208 and so on
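teknoprep's cleanup generalizes to a short loop; the `node-2` name filter matches the nodes in this log but is otherwise an assumption:

```shell
# Remove stale PXE records from cobbler so the nodes can re-register with a
# new environment. Filter deliberately: 'system list' shows every node.
for name in $(cobbler system list 2>/dev/null | grep '^node-2'); do
  cobbler system remove --name "$name"
done
```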
22:03 xarses joined #fuel
22:26 xarses joined #fuel
22:32 KodiakF_athome joined #fuel
22:42 jaypipes joined #fuel
