IRC log for #fuel, 2016-02-24


All times shown according to UTC.

Time Nick Message
00:09 youellet joined #fuel
01:35 v1k0d3n_ joined #fuel
01:35 v1k0d3n__ joined #fuel
01:39 xarses joined #fuel
01:40 xarses joined #fuel
02:48 ilbot3 joined #fuel
02:48 Topic for #fuel is now Fuel 7.0 (Kilo) https://www.fuel-infra.org/ | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
04:37 fedexo joined #fuel
05:48 hezhiqiang joined #fuel
05:53 v1k0d3n joined #fuel
05:56 severion joined #fuel
06:06 v1k0d3n_ joined #fuel
06:25 v1k0d3n joined #fuel
06:26 severion joined #fuel
06:42 javeriak joined #fuel
06:49 javeriak_ joined #fuel
07:21 fzhadaev joined #fuel
07:25 HeOS joined #fuel
07:50 hezhiqiang joined #fuel
08:02 magicboiz joined #fuel
08:02 sc-rm After rebooting one of the controllers I get “node name already occupied epmd-starter” in the logs
08:03 sc-rm epmd: node name already occupied epmd-starter-443584618, which seems to be a known problem, but how do I solve it for Fuel 7.0 RabbitMQ?
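A minimal sketch of the usual workaround for a stale epmd registration, assuming RabbitMQ runs under Pacemaker as on a stock MOS 7.0 controller; the resource name master_p_rabbitmq-server and the epmd commands are assumptions to verify against your own cluster before running:

    # on the affected controller: list the names epmd still holds
    epmd -names

    # take the Pacemaker-managed RabbitMQ down on this node only
    pcs resource ban master_p_rabbitmq-server $(hostname) --wait

    # drop the stale registration; epmd -kill only succeeds once no
    # local Erlang nodes are alive, and epmd restarts automatically
    # the next time an Erlang node starts
    epmd -kill

    # let Pacemaker bring RabbitMQ back on this node
    pcs resource clear master_p_rabbitmq-server $(hostname)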
08:11 zerda joined #fuel
08:16 criss2016 joined #fuel
08:36 hyperbaba joined #fuel
08:36 RageLtMan joined #fuel
08:45 pbrzozowski_ joined #fuel
08:45 criss2016 Hi all
08:45 criss2016 I have a problem starting all docker containers on my Fuel Master server 6.0.
08:46 criss2016 First I observed a disk space problem caused by logs in the nailgun container, which stopped my Fuel UI.
08:46 criss2016 I checked at that moment and only the postgres container was down; all other containers were up.
08:47 criss2016 Then I looked inside the nailgun container and saw that the container_id-json.log had grown to almost 300 GB. I removed this file hoping that restarting all docker containers would solve the issue. Unfortunately it did not.
08:47 criss2016 I tried to restart the postgres container, which was down, but that did not work either.
08:47 criss2016 Error: Cannot start container fuel-core-6.0-postgres: Error getting container xxx from driver devicemapper: Error mounting '/dev/mapper/docker-253:2-600-xxx' on '/var/lib/docker/devicemapper/mnt/xxx': invalid argument
08:47 hezhiqia_ joined #fuel
08:48 criss2016 I followed the Operations Guide “Corrupt ext4 filesystem on Docker container” https://docs.mirantis.com/openstack/fuel/fuel-6.0/operations.html#id114, but without success.
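For later readers, that guide's recovery path boils down to running fsck on the container's backing devicemapper volume. A rough sketch, with the device path taken from the mount error above and container handling via Fuel 6.x's dockerctl helper (verify the exact dockerctl subcommands on your master):

    # stop the broken container
    dockerctl stop postgres

    # repair the ext4 filesystem on the devicemapper volume named in
    # the mount error; the /dev/mapper/docker-... path comes from the log
    fsck.ext4 -fy /dev/mapper/docker-253:2-600-xxx

    # bring the container back and verify it
    dockerctl start postgres
    dockerctl check postgres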
08:54 sc-rm is it safe to do an apt-get dist-upgrade on the nodes?
08:58 zubchick joined #fuel
08:59 Miouge_ joined #fuel
09:15 elo joined #fuel
09:17 magicboiz joined #fuel
09:46 magicboiz joined #fuel
09:59 dnikishov joined #fuel
10:23 javeriak joined #fuel
10:39 Miouge joined #fuel
10:53 magicboiz joined #fuel
11:09 neilus joined #fuel
11:13 t_dmitry joined #fuel
11:27 magicboiz joined #fuel
11:37 Miouge joined #fuel
11:45 mwik Is there any way to make cinder differentiate between mechanical drives and SSDs? E.g. using different volume types. From what I can tell Fuel just lumps all disks together and you can only choose the amount to allocate to cinder and the amount to allocate to virtual storage.
11:48 hezhiqiang joined #fuel
12:03 neilus joined #fuel
12:21 zubchick joined #fuel
13:16 javeriak joined #fuel
13:33 javeriak_ joined #fuel
14:22 severion joined #fuel
14:24 v1k0d3n joined #fuel
14:35 zubchick joined #fuel
14:42 magicboiz joined #fuel
14:54 Miouge mwik: Yes! It is possible; here is an article on how to set that up with Ceph: http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
14:56 Miouge I have been working on a similar setup (SSD vs spinning disks) in my lab. It requires some manual changes in cinder.conf and the Ceph CRUSH map
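For the record, the general shape of that setup as a sketch; the pool names, ruleset IDs, backend names and volume types below are illustrative, not something Fuel generates, and the CRUSH rulesets themselves are built as in the linked article:

    # two Ceph pools, each pinned to its own CRUSH ruleset
    # (one rooted in SSD hosts, one in spinning disks)
    ceph osd pool create volumes-ssd 128 128
    ceph osd pool create volumes-hdd 128 128
    ceph osd pool set volumes-ssd crush_ruleset 1
    ceph osd pool set volumes-hdd crush_ruleset 2

    # in cinder.conf, expose each pool as its own backend, e.g.
    #   [rbd-ssd]  rbd_pool=volumes-ssd  volume_backend_name=ssd ...
    #   [rbd-hdd]  rbd_pool=volumes-hdd  volume_backend_name=hdd ...
    # with both listed under enabled_backends, then tie a volume
    # type to each backend:
    cinder type-create ssd
    cinder type-key ssd set volume_backend_name=ssd
    cinder type-create hdd
    cinder type-key hdd set volume_backend_name=hdd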
14:56 neilus1 joined #fuel
14:59 blahRus joined #fuel
15:02 magicboiz joined #fuel
15:17 magicboiz joined #fuel
15:19 xarses joined #fuel
15:26 ddmitriev joined #fuel
15:30 magicboiz joined #fuel
15:52 j3_ joined #fuel
15:52 j3_ \whois j3
16:05 zubchick joined #fuel
16:08 neilus joined #fuel
16:15 magicboiz joined #fuel
16:24 jcook_ joined #fuel
16:27 Verilium Hmm, using Fuel, has anyone configured things so that compute nodes are deployed in another/second datacenter/location?  What would the requirements be? That all the required VLANs be available/extended at the 2nd location?
16:33 neilus joined #fuel
16:39 samuelBartel joined #fuel
16:44 krypto joined #fuel
16:52 magicboiz joined #fuel
17:06 krypto using the CLI I am executing --deploy on one of the nodes after provisioning, but nothing is happening; the astute logs only show "Process message from worker queue: "null" Got message with payload "null""
17:06 krypto which logs should I check to troubleshoot this?
17:15 elopez joined #fuel
17:31 samuelBartel joined #fuel
17:35 krypto any idea why the node is stuck installing OpenStack forever, with no task in the "running" state?
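In case it helps later readers, a first-pass checklist on the Fuel master; the log paths assume a containerized 7.0-style master, and the env/node IDs are illustrative:

    # was a deployment task actually created, and in what state?
    fuel task

    # follow the orchestration and API logs
    tail -f /var/log/docker-logs/astute/astute.log
    tail -f /var/log/docker-logs/nailgun/app.log

    # re-issue the deployment for a single provisioned node
    fuel --env 1 node --deploy --node 2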
17:46 xarses Verilium: are you intending to deploy over a MAN/WAN link with different latency than the current facility, or is this more akin to another cage or other failure domain within the same LAN?
17:50 gariveradlt joined #fuel
17:52 elo joined #fuel
17:58 Verilium xarses:  It's another physical location, which is "considered" almost the same network; latency isn't an issue.  It finally seems like we should be able to extend the VLANs from dc1 to dc2.  So, I think things should work correctly for Fuel.
17:58 Verilium If I weren't able to extend the VLANs, though, I'm not quite sure how it could have worked.
17:59 xarses ok, so you have a couple of options
17:59 xarses first, you can extend your VLANs; given that they are in the same L2 domain, fuel wouldn't know any difference
18:01 xarses second, you can make use of the multi-rack feature, where you define additional networks to fuel. the only hard requirements from fuel in this regard are that you can forward the PXE requests from the new fuelweb-admin (Admin / PXE) network and that each network has a default gateway specified
18:03 xarses in the second case, you can potentially set this up across links of varying latency; however, some components like ceph are highly sensitive to this
18:04 xarses also in this case, you should place all of the controllers in the same nodegroup (the container for these sets of networks), as we don't have any mechanism built in to migrate the VIP addresses between L3 segments
18:04 xarses you can see an example of this at http://www.yet.org/2015/11/mos7-nodegroup/
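The CLI side of that article, roughly. The nodegroup/network names, IDs, VLANs, CIDRs and gateways below are illustrative, and flag spellings can differ between fuel client versions, so treat this as a sketch rather than a recipe:

    # create a second rack (nodegroup) in environment 1
    fuel nodegroup --create --name rack2 --env 1

    # define its networks; per the hard requirement above, each
    # network needs its own default gateway
    fuel network-group --create --nodegroup 2 --name management \
        --release 2 --vlan 101 --cidr 10.2.1.0/24 --gateway 10.2.1.1
    fuel network-group --create --nodegroup 2 --name storage \
        --release 2 --vlan 102 --cidr 10.2.2.0/24 --gateway 10.2.2.1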
18:11 Verilium Interesting, I'll check it out, thanks xarses!
18:11 Verilium In this particular case, chances are we'll be able to go with option #1, which will keep things pretty simple.  My colleagues need to do some testing later today.
18:11 rmoe joined #fuel
18:17 Verilium Heh, of course, these are all steps pre-deployment.
18:17 Verilium xarses:  Good stuff.  Glad to see it'll be nicely integrated with 8 too.
18:17 xarses oh, it's fancy in 8.0
18:18 xarses it's 'integrated' in 7.0, it's just not in the UI
18:18 xarses and there are a number of enhancements in 8.0 for doing weirder things with them
18:19 * Verilium nods.
18:47 neilus joined #fuel
19:29 sectrgt joined #fuel
19:49 gariveradlt joined #fuel
20:01 HeOS joined #fuel
20:04 gariveradlt joined #fuel
23:19 meow-nofer_ joined #fuel
23:21 krobzaur_ joined #fuel
