IRC log for #fuel, 2014-03-04


All times shown according to UTC.

Time Nick Message
00:39 rmoe joined #fuel
00:47 xarses joined #fuel
02:48 xarses joined #fuel
03:32 vkozhukalov joined #fuel
05:21 vkozhukalov joined #fuel
05:50 rvyalov joined #fuel
06:06 Ch00k joined #fuel
06:21 dburmistrov joined #fuel
07:01 bogdando joined #fuel
07:04 Ch00k joined #fuel
07:12 saju_m joined #fuel
07:15 Ch00k joined #fuel
07:17 oleksii_klymenok joined #fuel
07:33 Ch00k joined #fuel
08:12 oleksii_klymenok left #fuel
08:12 oleksii_klymenok joined #fuel
08:15 oleksii_klymenok left #fuel
08:15 oleksii_klymenok joined #fuel
08:15 oleksii_klymenok left #fuel
08:17 oleksii_klymenok joined #fuel
08:17 oleksii_klymenok left #fuel
08:22 tatyana joined #fuel
08:35 Ch00k joined #fuel
08:37 topochan joined #fuel
08:47 oleksii_klymenok joined #fuel
08:51 dburmistrov joined #fuel
09:10 e0ne joined #fuel
09:15 e0ne_ joined #fuel
09:16 e0ne joined #fuel
09:20 e0ne_ joined #fuel
09:20 warpig joined #fuel
09:28 e0ne joined #fuel
09:29 e0ne joined #fuel
09:31 fweyns joined #fuel
09:32 fweyns Hi, greetings from Amsterdam
09:34 e0ne joined #fuel
09:34 rvyalov joined #fuel
09:38 e0ne_ joined #fuel
09:42 saju_m joined #fuel
09:43 evgeniyl fweyns: good morning
09:45 fweyns I was checking if #fuel works and whether I can get my customers directed to this IRC channel
09:45 fweyns but let me introduce myself first
09:45 fweyns I am Frank Weyns, working for the Mirantis Sales Team and based in Amsterdam.
09:47 ogelbukh hi Frank
09:48 fweyns And now I am going to leave you... I have a meeting with a customer. I'll probably join again later this afternoon. Success with Fuel!!!
10:01 fweyns left #fuel
10:04 Arminder- joined #fuel
10:24 anotchenko joined #fuel
10:33 mattymo May the force be with you.
10:34 e0ne joined #fuel
10:35 e0ne_ joined #fuel
10:37 topochan joined #fuel
10:38 baboune joined #fuel
10:39 baboune Hi,  We are still having problems with the second boot when using Ubuntu.  It gets stuck at "booting ..." despite applying the patch
10:39 baboune so we are now running Mirantis v4.0 + 4.1 patch
10:39 baboune Any ideas where to look?
10:52 mattymo baboune, I believe Miroslav was helping you with this, but he's not in the office now
10:52 baboune one of the things we can see is that when using a USB key to get access to the machine after the Ubuntu install, it looks like /dev/grub contains almost no files
10:52 baboune ok.
10:52 mattymo You're the guy who has a cool Matrox video card in your server, right?
10:53 baboune no. we have old HP DL380 and 320 with P400 cards
10:53 mattymo disk issues?
10:54 baboune it happens on two different machines
10:54 baboune and those worked before we re-assigned them to a Proof of Concept using Mirantis Fuel
10:54 baboune I also have a request number for support
10:55 mattymo I'm sorry, but I'm not an expert on your disk issue. I can let Support know you're looking for help if you have a case #
10:55 baboune ... #1468
10:55 baboune it might be easier to do it on irc than via email
10:56 baboune one thing that I am wondering about.  We set the disks as one single RAID 5 volume.  Before we had one volume per disk.  Is one configuration better than the other?
10:57 mattymo all I can recall about Ubuntu disk setup is that the OS volume must be located on the first disk. About RAID for Ubuntu, I'm not too familiar
11:05 baboune we hacked grub on the machine via a USB emergency disk and added the necessary files. the machine then booted
11:05 baboune so it looks like during the Ubuntu install something is not correctly installed
11:08 mattymo that something should be logged to /var/log/remote/$nodehostname/
11:09 mattymo look at debootstrap.log, preseed-log, apt-install.log, base-installer.log, pkgsel.log, user-setup.log, root.log, os-prober.log
11:09 mattymo in-target.log, log-output.log
11:09 mattymo and finish-install.log
11:10 mattymo sorry, I don't know why it gets split into 30 files. We have 3 for CentOS
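
For reference, a quick way to scan those installer logs from the Fuel master (a minimal sketch; the node hostname is illustrative):

    # Installer output is syslogged per node under /var/log/remote/
    ls /var/log/remote/node-1.domain.tld/
    # One pass over all of the files for obvious failures
    grep -iE 'error|fail' /var/log/remote/node-1.domain.tld/*
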
11:10 baboune kk will take a look
11:11 mattymo baboune, I checked in with support. Miroslav is the only one who can help, but again, he hasn't arrived yet
11:20 baboune Hmm, in the root.log there are these weird lines: "2014-03-04T10:21:37.240278+00:00 notice:  in-target grub-mkconfig 2014-03-04T10:21:37.392232+00:00 notice:  in-target grub-mkdevicemap 2014-03-04T10:21:37.552458+00:00 notice:  in-target grub-install $(readlink -f $( (ls /dev/cciss!c0d0) 2>/dev/null) ) 2014-03-04T10:21:40.844318+00:00 notice:  in-target update-grub"
11:21 baboune notice the ! in the drive name
11:21 baboune I think that may be the problem
11:24 mattymo the ! is a huge pain in the butt
11:26 baboune ok? can you be a bit more explicit?
11:26 mattymo give me a minute
11:26 mattymo I will find our centos code to deal with it
11:29 mattymo baboune, it was all rolled into pmanager.py to handle cciss disks
11:55 vkozhukalov joined #fuel
12:02 vkozhukalov baboune, around?
12:03 mattymo he dropped off IRC about 20 minutes ago
12:03 mattymo vkozhukalov, ^
12:09 baboune joined #fuel
12:10 mattymo baboune, vkozhukalov is here now
12:10 mattymo I know he worked on improving disk partitioning for Fuel 4.1 to use disk-by-path instead of by ID
12:11 vkozhukalov baboune, I've read the ticket, and it looks like the patch has not been applied correctly
12:13 vkozhukalov baboune, the problem is that we try to discover hard drives before we install the OS, and we used to look only at /dev/disk/by-path links
12:13 Ch00k joined #fuel
12:13 vkozhukalov baboune, it turned out that those links depend on the version of the kernel booted during discovery or installation
12:14 vkozhukalov baboune, the patch you were talking about should cover this issue by using /dev/disk/by-id links,
12:15 vkozhukalov baboune, which are more likely to be unique and stable across different kernel versions
12:16 vkozhukalov baboune, in /var/log/remote/$(hostname)/root.log there should be something like this:
12:16 vkozhukalov baboune, $(readlink -f $( (ls /dev/disk/by-id/bla-bla || ls /dev/cciss!c0d0) 2>/dev/null) )
12:16 vkozhukalov baboune, instead of $(readlink -f $( (ls /dev/cciss!c0d0) 2>/dev/null) )
12:18 Arminder joined #fuel
12:18 baboune I get this in the root.log: 2014-03-04T10:21:37.240278+00:00 notice:  in-target grub-mkconfig 2014-03-04T10:21:37.392232+00:00 notice:  in-target grub-mkdevicemap 2014-03-04T10:21:37.552458+00:00 notice:  in-target grub-install $(readlink -f $( (ls /dev/cciss!c0d0) 2>/dev/null) ) 2014-03-04T10:21:40.844318+00:00 notice:  in-target update-grub
12:18 baboune so suggestion is to re-apply the patch I guess?
12:19 vkozhukalov baboune, according to the patch we try to find the hard drive by its by-id link and then fall back to trying to find it by its name or by-path link
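
In shell terms, the patched lookup vkozhukalov describes amounts to something like this (a sketch mirroring the quoted log line; "bla-bla" stands in for the real by-id name, and the '!' is how sysfs encodes '/' in block device names):

    # Prefer the stable by-id symlink; fall back to the cciss name.
    # readlink -f resolves whichever path the subshell found.
    grub-install $(readlink -f $( (ls /dev/disk/by-id/bla-bla || ls /dev/cciss!c0d0) 2>/dev/null) )
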
12:19 anotchenko joined #fuel
12:19 mattymo baboune, would you share a link to the patch you got?
12:19 mattymo vkozhukalov and I don't know exactly what you received
12:19 vkozhukalov baboune, yes it would be helpful
12:23 honnix joined #fuel
12:25 topochan joined #fuel
12:27 baboune we will try to reapply the patch first
12:43 baboune this is the link we used to retrieve the patch http://download.mirantis.com/fuelweb/fuel_partition_manager_patch_40_to_41_ver4.run
12:44 Dr_Drache hmmm
12:44 TVR___ joined #fuel
12:44 Dr_Drache that patch IIRC had nothing to do with Ubuntu
12:45 TVR___ received a newsletter... Fuel 4.1 is released?
12:45 Dr_Drache baboune, when you applied the patch, was it (O)verwrite, or (P)atch?
12:45 baboune ok. re-applying the patch did produce a result
12:46 Dr_Drache TVR___, I have no idea, bug tracker hasn't changed in 2-3 days
12:46 baboune our first machine now gets past "booting...", but shows "error: file not found" and drops to "grub rescue:"
12:46 baboune so it seems to be a bit further along
12:48 baboune root.log: 2014-03-04T12:39:26.867707+00:00 notice:  in-target grub-mkconfig 2014-03-04T12:39:27.019726+00:00 notice:  in-target grub-mkdevicemap 2014-03-04T12:39:27.180664+00:00 notice:  in-target grub-install $(readlink -f $( (ls /dev/cciss!c0d0) 2>/dev/null) ) 2014-03-04T12:39:30.423876+00:00 notice:  in-target update-grub
12:48 baboune so the first part before the || is missing:  $(readlink -f $( (ls /dev/disk/by-id/bla-bla || ls /dev/cciss!c0d0) 2>/dev/null) )
12:49 Dr_Drache baboune, you reverted the patch each time you tried to re-patch it, right?
12:49 baboune no /dev/disk/by-id/blablablaba
12:49 Dr_Drache TVR___,
12:49 Dr_Drache " You caught us :)
12:49 Dr_Drache New version will be uploaded soon, but for the moment you can use the previous one from Prior Releases"
12:49 Dr_Drache is what happens when you try to get 4.1
12:50 Dr_Drache I hate marketing bullshit.
12:51 Dr_Drache don't mind that it's not ready, just don't post the links like it is.
12:51 TVR___ heh..
12:51 Dr_Drache baboune, looks like somewhere along the way, you screwed up fuel
12:51 Dr_Drache either by the patch, or some other method.
12:51 saju_m joined #fuel
12:52 TVR___ I received the newsletter... it also included the info that Piston has their 3.0 release available; it surprised me that the newsletter from Mirantis would have news about the 'competition'
12:55 Dr_Drache yea
12:55 Dr_Drache i'm reading that now
12:57 Dr_Drache man, too bad they don't commit code. I like the hyperOS thing.
12:57 Dr_Drache err
12:57 Dr_Drache MicroOS
12:59 Dr_Drache oh well, would have been nice to browse through that git
13:03 mattymo yeah, Piston has been around for a while, but its main contributors only have commits in client libs
13:04 Dr_Drache yea, sounds good (the marketing) but, that means not much.
13:04 mattymo this patch doesn't have disks by id, only by traditional naming
13:05 mattymo One advantage, though, is it's another testing platform to find bugs in OpenStack. devstack can't reproduce realistic scenarios, so 3rd party deployment tools (like Fuel) can uncover lots of bugs in OpenStack to report upstream
13:05 Dr_Drache mattymo, I thought it got switched to /dev/disk/by-path
13:06 mattymo Dr_Drache, it's supposed to be, but in the patch I found in the ticket, it still uses devices by /dev/diskname
13:06 mattymo unless you have a newer one
13:06 Dr_Drache i'd have to look again. but the older one I have was /dev/disk/by-path IIRC
13:07 Dr_Drache v2
13:07 mattymo I can show you what the kickstart should look like with the 4.1 release candidate
13:07 Dr_Drache doesn't currently affect me, but baboune might
13:08 mattymo http://pastie.org/private/wcl6ilnpg5n4c5v3ayyubg
13:08 mattymo Dr_Drache, I wasn't sure if you work with baboune or not
13:08 Dr_Drache I was hoping to download 4.1 today, I have a bunch of redeploy demos for the bosses.
13:08 Dr_Drache mattymo, I barely work with myself, I am a terrible co-worker :P
13:09 Dr_Drache mattymo, "use qcow format for images": I really haven't gotten a solid answer on this. If fully Ceph, should I uncheck this option?
13:11 mattymo raw is better for Cinder as a whole (regardless of backend)
13:12 mattymo but I don't have any data about ceph and qcow2 at my fingertips
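
If raw is preferred for a Ceph/RBD backend, converting an image before upload looks roughly like this (a sketch; filenames are illustrative, glance v1 CLI of that era):

    # RBD stores and clones raw images natively; convert qcow2 first
    qemu-img convert -f qcow2 -O raw ubuntu.qcow2 ubuntu.raw
    glance image-create --name ubuntu-raw --disk-format raw \
        --container-format bare --file ubuntu.raw
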
13:12 mattymo our US folks will be around in a few hours
13:12 mattymo Dr_Drache, where are you located?
13:13 Dr_Drache mattymo, US, Midwest.
13:14 mattymo ok. it's still early in California where our ceph experts are residing
13:15 Dr_Drache yea, I'm just going to run unchecked this time
13:16 baboune I made a test by removing the RAID on the disks so that instead of one RAID 5 volume with multiple disks on each machine, we now have one volume per disk.  The Ubuntu install failed during the partition manager part
13:16 baboune so if the patch was supposed to cover that as well it failed
13:17 Dr_Drache baboune, that would not have changed a thing.
13:17 Dr_Drache it's a bug with disc detection/naming.
13:18 Dr_Drache so, adding more discs doesn't help. :P
13:19 baboune well, it does make a big diff
13:19 baboune if I have only one RAID volume, then the ubuntu installs
13:19 baboune if not, then the partition manager step during the Ubuntu install fails to apply the LVM partition
13:20 baboune so the error is completely different
13:20 Dr_Drache LVM cinder?
13:20 Dr_Drache or ceph?
13:20 baboune cinder
13:20 Dr_Drache ahh, that's why I haven't seen that error.
13:20 Dr_Drache I use full ceph.
13:22 Dr_Drache ceph all the things!
13:22 baboune well, we don't see this by-id
13:22 alex_didenko joined #fuel
13:35 anotchenko joined #fuel
13:43 justif joined #fuel
13:52 TVR___ were the notes or procedure for recovering from a controller failure published in here?
13:52 TVR___ I noticed ceph didn't handle it well, and redeploying the node after a harsh failure is problematic at best.
14:04 anotchenko joined #fuel
14:05 xdeller joined #fuel
14:15 baboune what should the pmanager.py function look like to avoid the bug with !?
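
The '!' itself is the kernel's sysfs encoding of '/' in block device names, so the cciss handling mattymo mentioned in pmanager.py boils down to translating it back (a sketch, not the actual Fuel code):

    # sysfs reports the block device cciss/c0d0 as cciss!c0d0
    echo 'cciss!c0d0' | tr '!' '/'    # prints cciss/c0d0
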
14:23 Dr_Drache ?
14:24 Dr_Drache TVR___, an unplugged network cable causes ceph issues
14:26 TVR___ no, no... doing a harsh dd of the / FS and a reboot causes all sorts of issues.. heh
14:27 TVR___ I am simulating the disk(s) failing on a node in such a way as to be very harsh
14:27 TVR___ I am doing this while simulating load on instances as well....
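
For the record, that kind of harsh failure injection is roughly the following (destructive; only for a disposable test node, and the device name is illustrative):

    # Overwrite the OS disk in place until the kernel panics, then power-cycle
    dd if=/dev/zero of=/dev/sda bs=1M
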
14:27 Dr_Drache TVR___, I mean me LOL
14:28 Dr_Drache why not pull the drive?
14:28 Dr_Drache faster failure.
14:29 TVR___ pulling the drive seems to work better, if you can believe it... maybe when the system senses it has disappeared it copes better for some reason... what is harder to recover from is to dd the drive until it kernel panics, and then reboot the box
14:30 TVR___ that causes ceph and Galera to do all sorts of weird stuff...
14:30 Dr_Drache I just don't seem to understand what type of scenario that is meant to represent.
14:30 TVR___ which is closer to real world... as many drives don't fail completely
14:32 Dr_Drache drive causing kernel panics doesn't seem closer to me.
14:32 Dr_Drache just trying to understand your process here.
14:34 TVR___ when I was at Nuance, we had > 9600 drives..... yes, many simply died nice and quickly.... but ~ 1% of them only got enough bad sectors to give GPFS (the clustered file system we used) a headache, and I was the only one who seemed to be able to fix it when it went into that state... Why? Because I would test for the edge case scenarios
14:34 TVR___ a drive that simply fails is easy
14:35 TVR___ when that 1% happens, I will not be sitting there with an unrecoverable file system, or a need to take down 500 instances
14:35 TVR___ so I test for that
14:35 TVR___ I have seen MANY drives cause kernel panics from a failure of blocks....
14:36 TVR___ so it looks like the first step in the process is NOT to use fuel to remove the node.....
14:36 Dr_Drache I haven't. That's why I said "to me".
14:37 Dr_Drache and you seem defensive, I was just trying to figure out your thought process, since it didn't make sense at the time.
14:37 TVR___ heh
14:37 TVR___ no, no.. not defensive at all....
14:37 Dr_Drache so, we now have a dead node.
14:37 Dr_Drache and it's not re-addable?
14:38 TVR___ it simply brings back ~shudder~ memories of fights with IBM support telling me to recreate the file system as it was unrecoverable and me figuring out they were wrong.... after 27 hours
14:39 Dr_Drache figured with ceph replication, you just re-add a node with storage, and you're back to the races.
14:39 Dr_Drache that's the grand plan!
14:40 TVR___ re-add doesn't seem to 'just work'.... but that was with only one test so far.... as I had to do a rebuild last time.... whatever you do... DON'T restart ceph on the nodes... it will not come back
14:40 TVR___ if it is in a bad state, DON'T restart ceph services
14:41 Dr_Drache i've only "killed" drives so far
14:41 Dr_Drache pull out one or two
14:41 Dr_Drache then shut down the node, install another spare drive, boot up
14:42 Dr_Drache "seems" to be ok
14:42 TVR___ pulled the OS disks?
14:42 Dr_Drache no, the ceph disks
14:42 TVR___ I am dd'ing the disk containing the OS for the head node
14:43 TVR___ Ah, ok
14:43 Dr_Drache for a controller node?
14:43 TVR___ I am simulating a failure from the head node itself
14:43 TVR___ head controller node
14:43 Dr_Drache yea, I thought you were talking about compute.
14:43 TVR___ what good is HA if the main controller dies and it doesn't recover?
14:43 TVR___ heh
14:43 Ch00k joined #fuel
14:44 Dr_Drache well, AFAIK, you need to redeploy.
14:44 Dr_Drache HA in this case means you're not completely screwed
14:44 Dr_Drache openstack doesn't do full HA like that, not yet.
14:44 Dr_Drache the SQL database is the issue; only one node is a master server.
14:45 Dr_Drache at least, that's what I gathered from MiroslavAnashkin when the topic was brought up
14:46 TVR___ if a controller... the one that holds the br-ex:ka  interface (VIP for controller interface) should die horribly, I need it to recover fully...
14:47 Dr_Drache AFAIK that's not possible with fuel.
14:47 Dr_Drache and i think it's an upstream issue.
14:49 tatyana joined #fuel
14:57 jobewan joined #fuel
15:03 anotchenko joined #fuel
15:15 anotchenko joined #fuel
15:23 xdeller joined #fuel
15:27 jobewan joined #fuel
15:27 Dr_Drache MiroslavAnashkin, xarses. I have a networking bug perhaps.
15:30 MiroslavAnashkin Yes, proceed please. While I am deep inside an issue from a customer, I am reading this channel with at least one eye
15:32 Dr_Drache the 2nd network adapter never gets an IP on the instance for me.
15:32 Dr_Drache http://paste.openstack.org/show/72213/
15:32 Dr_Drache 2 instances; the only difference was the order of the 2 networks.
15:32 Dr_Drache (same issue appears with cirros image as well)
15:46 TVR___ it would seem that if the main node that holds the dashboard VIP goes down... even if it is brought back up 5 minutes later, the dashboard does not like it at all
15:48 MiroslavAnashkin Safe VIP transfer between the nodes was the main issue that caused the 4.1 release delay.
15:49 rvyalov joined #fuel
15:55 TVR___ Ah.. so is this fixed in 4.1?
16:27 e0ne joined #fuel
16:37 dburmistrov joined #fuel
16:47 oleksii_klymenok joined #fuel
16:58 anotchenko joined #fuel
17:11 mihgen joined #fuel
17:11 xdeller joined #fuel
17:18 rmoe joined #fuel
17:40 xarses joined #fuel
17:44 rmoe_ joined #fuel
17:58 angdraug joined #fuel
17:59 mihgen joined #fuel
18:05 vkozhukalov joined #fuel
18:11 mihgen joined #fuel
18:26 Ch00k joined #fuel
18:37 e0ne joined #fuel
18:37 rvyalov joined #fuel
18:52 mutex so I am seeing some virtualized windows machines re-request a DHCP address every minute
18:52 mutex is that normal ?
18:52 mutex doesn't seem desirable
18:54 mutex I see the same on linux too
18:55 Dr_Drache MiroslavAnashkin, were you able to look yet? or still busy?
18:57 e0ne joined #fuel
19:16 e0ne joined #fuel
19:17 mutex it is strange, I guess the neutron lease time is set to 120s
19:17 mutex but I saw the request like clockwork every 60s
19:20 Dr_Drache weird
19:20 xarses mutex: if dnsmasq is set to 120, 60 is expected, most clients attempt to renew 1/2 way through the lease
19:21 mutex ah, interesting
19:21 Dr_Drache blah
19:21 mutex and I did see that when I set it to the other value of 86000, the clients renewed somewhere around 35k-40k
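
That lines up with standard DHCP client behaviour: renewal (T1) defaults to half the lease, so a 120 s lease renews at ~60 s and an 86000 s lease at roughly 43000 s. The lease length itself comes from neutron's config (a sketch; option name as in the neutron of that era):

    # On a controller: the lease time handed to dnsmasq
    grep dhcp_lease_duration /etc/neutron/neutron.conf
    # Clients renew at T1 = lease / 2, hence requests every ~60 s for a 120 s lease
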
19:21 Dr_Drache xarses, a redeploy fixed the disk issue
19:28 xarses Dr_Drache: sweet, now you have a network issue?
19:28 Dr_Drache xarses, yes sir.
19:28 Dr_Drache http://paste.openstack.org/show/72213/
19:29 Dr_Drache the first network never gives an IP to the instance.
19:29 xarses whats the a / b issue?
19:29 Dr_Drache 2 instances; the only difference is what order the networks were put in.
19:30 Dr_Drache eth0 always gets the address, eth1 doesn't.
19:31 xarses Net04_ext won't run DHCP unless you have it running elsewhere on the network
19:32 Dr_Drache I enabled DHCP
19:32 xarses ah, ok
19:32 Dr_Drache and have the subnet shared as well.
19:33 xarses and if you dhclient eth1?
19:33 Dr_Drache if you look, the 2nd instance gets a net04_ext IP but not the net04, and the 1st instance is reversed.
19:34 Dr_Drache I can't with that one. Need to redo with cirros; cloud-init doesn't run on instance2
19:34 xarses ok
19:35 Dr_Drache I will make 2 instances from cirros, one in each config
19:36 xarses ok, can you paste the whole console log(s) when thats done
19:36 Dr_Drache yes sir
19:42 Dr_Drache http://paste.openstack.org/show/72286/ instance with net04_ext as eth0
19:43 Dr_Drache http://paste.openstack.org/show/72287/ instance with net04 as eth0
19:45 xarses not sure why eth1 isn't being configured; I'd guess that the cloud-init script isn't configured to bring up the interface. Of note is that on net04_ext metadata isn't working, but on net04 it is
19:46 Dr_Drache neutron GRE
20:02 IlyaE joined #fuel
20:11 mutex in order to get metadata working with net04_ext you have to do some magic
20:11 mutex I set it up a couple of weeks ago
20:17 rvyalov joined #fuel
20:18 Dr_Drache mutex, I have as well
20:18 Dr_Drache forgot what I did, or if it was different than what I already did
20:19 mutex you have to set enable_isolated_metadata = True
20:19 mutex in dhcp_agent.ini
20:20 mutex then you have to REMOVE the gateway part of the routing for any external subnets
20:20 Dr_Drache then how is it routed?
20:20 mutex and instead add a static rule for 0.0.0.0/0 to your actual default gateway
20:20 Dr_Drache ahhh
20:20 mutex there is some weird logic in the code for when to push a metadata route
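
Putting mutex's recipe together, the setup is roughly the following (a sketch; the subnet name and gateway address are placeholders):

    # /etc/neutron/dhcp_agent.ini on each controller, then restart the agent:
    #   enable_isolated_metadata = True
    # Drop the subnet's gateway so neutron treats it as isolated...
    neutron subnet-update net04_ext__subnet --no-gateway
    # ...and restore default routing as a static route instead
    neutron subnet-update net04_ext__subnet --host-routes type=dict list=true \
        destination=0.0.0.0/0,nexthop=172.16.0.1
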
20:20 Dr_Drache BUT
20:21 mutex there is a fix for icehouse already in place
20:21 Dr_Drache that doesn't really apply here.
20:21 mutex ah
20:21 Dr_Drache I can get a route out, if it's on eth0
20:21 Dr_Drache if it's not on eth0 it doesn't even dhcp
20:21 mutex oh dear, that seems like a different problem
20:21 Dr_Drache and, same symptoms with net04
20:21 Dr_Drache http://paste.openstack.org/show/72213/
20:22 Dr_Drache the only difference between those 2 instances is what order the nets are in
20:34 mutex hrm
20:34 mutex you sure this is not a deployment issue ?
20:35 mutex also, have you restarted the dhcp-agent or openvswitch recently?
20:42 Dr_Drache mutex, fresh redeploy.
20:42 Dr_Drache I didn't touch anything restart wise
20:47 strictlyb joined #fuel
21:20 angdraug joined #fuel
21:42 mutex oh dear... http://paste.openstack.org/show/72320/
21:43 Dr_Drache mutex, stop breaking shit
21:43 Dr_Drache :P
21:44 Dr_Drache mutex, at the very least, we make for a better end product
21:44 mutex ha
21:45 mutex I wish I could get more info than "SIGTERM"
21:51 mutex actually these SIGTERMs might be the OCF scripts
21:53 mutex there is code for sending a SIGTERM in those functions
22:14 IlyaE joined #fuel
22:20 mutex I have a sneaking suspicion the q-agent-cleanup.py is causing my dhcp agent to sigterm
22:21 rmoe_ looks like you're seeing this issue: https://bugs.launchpad.net/fuel/+bug/1269334
22:22 mutex probably
22:22 mutex I already ran into this bug in another capacity
22:22 mutex lemme patch this and run some tests
22:24 mutex sad I cannot just use the patches, have to hand-port :-(
22:58 mutex hmmm
22:58 mutex i'm not sure it worked
22:58 mutex still got a sigterm
23:01 mutex but I'll deploy the fix everywhere and see what's up
23:22 rmoe_ you'll need to deploy it to all 3 controllers
23:22 rmoe_ the agents are all tied together, if one dies the others will be restarted
23:22 rmoe_ if you check your l3-agent and ovs-agent logs I bet you'll see sigterms just like you saw for the dhcp agent
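
A quick way to confirm that on each controller (a sketch; log paths per the usual layout of that era):

    # Look for SIGTERMs across the neutron agent logs
    grep -i sigterm /var/log/neutron/*.log
    # The agents are managed by pacemaker, so check cluster state too
    crm_mon -1
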
23:52 WhosJoe joined #fuel
