
IRC log for #fuel, 2015-05-20


All times shown according to UTC.

Time Nick Message
00:32 pasquier-s joined #fuel
01:05 pasquier-s joined #fuel
01:55 Obi-Wan joined #fuel
02:39 RodrigoUSA joined #fuel
06:08 ub joined #fuel
06:43 sc-rm joined #fuel
06:48 Hulda joined #fuel
07:15 hyperbaba joined #fuel
07:16 kutija joined #fuel
07:35 kutija joined #fuel
07:42 dklepikov joined #fuel
07:55 pbrooko joined #fuel
08:08 bogdando joined #fuel
08:24 alecv joined #fuel
08:31 stamak joined #fuel
08:38 asledzinskiy joined #fuel
08:54 e0ne joined #fuel
09:16 tzn joined #fuel
10:25 teran joined #fuel
11:47 kutija joined #fuel
12:03 TiDjY35 joined #fuel
12:08 tzn joined #fuel
12:12 pasquier-s joined #fuel
12:26 mattgriffin joined #fuel
12:34 azemlyanov joined #fuel
12:36 brain461 joined #fuel
12:49 avorobiov joined #fuel
13:14 kutija joined #fuel
13:37 kutija joined #fuel
14:09 samuelBartel joined #fuel
14:41 kutija_ joined #fuel
15:04 igorbelikov joined #fuel
15:10 e0ne joined #fuel
15:10 rmoe joined #fuel
15:13 pasquier-s joined #fuel
15:15 daniel3_ joined #fuel
15:15 blahRus joined #fuel
15:27 gongysh joined #fuel
15:38 championofcyrodi joined #fuel
15:38 daniel3_ joined #fuel
15:42 pasquier-s joined #fuel
15:58 asledzinskiy left #fuel
16:02 kutija_ I have a problem with MySQL right after a successful deploy
16:02 kutija_ mysql on all controllers won't start
16:02 kutija_ Slave SQL: Error 'Can't drop database 'ost1407'; database doesn't exist' on query. Default database: ''. Query: 'DROP DATABASE ost1407', Error_code: 1008
16:02 kutija_ it gives the same error on all controllers
16:03 Topic for #fuel is now Fuel 5.1.1 (Icehouse) - Fuel 6.0 (Juno) https://software.mirantis.com | https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
16:04 kaliya kutija_: 6.0?
16:04 kutija_ 6.1 #244
16:04 kutija_ also, there is an issue with RabbitMQ, which is not started
16:04 kutija_ right after the deploy finishes
16:04 kutija_ the same goes for heat-api
16:04 kaliya ah, that's because they are development ISOs
16:05 kutija_ I know ;)
16:05 kutija_ these are the nightly builds
16:05 kaliya we don't guarantee they work perfectly yet
16:05 kutija_ I can't install 6.0.1 because I hit an issue with Galera
16:05 kaliya if you have some relevant info, and there isn't already the same bug filed, it would be nice if we file one
16:06 kutija_ kaliya: I can provide you with any information, my deploy is 15 minutes old
16:06 kutija_ and I've encountered the same issue three times in the last 72 hours
16:07 kaliya https://bugs.launchpad.net/fuel/+bug/1427572
16:08 kutija_ that's it
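
For reference, the error quoted above comes from the MySQL slave SQL thread; a minimal diagnostic sketch for checking both the slave and the Galera (wsrep) state on a controller is shown below, assuming PyMySQL is installed and a local root login works (host and credentials are placeholders).

    # Diagnostic sketch: inspect replication and Galera (wsrep) state on a
    # controller's local MySQL instance. Host and credentials are placeholders.
    import pymysql

    conn = pymysql.connect(host="127.0.0.1", user="root", password="",
                           cursorclass=pymysql.cursors.DictCursor)
    cur = conn.cursor()
    try:
        cur.execute("SHOW SLAVE STATUS")
        slave = cur.fetchone()
        if slave:
            # Error_code 1008 from the log above would appear in Last_SQL_Errno.
            print("Slave_SQL_Running:", slave["Slave_SQL_Running"])
            print("Last_SQL_Errno:", slave["Last_SQL_Errno"])
            print("Last_SQL_Error:", slave["Last_SQL_Error"])
        cur.execute("SHOW STATUS LIKE 'wsrep_cluster_status'")
        row = cur.fetchone()
        if row:
            print(row["Variable_name"], "=", row["Value"])
    finally:
        conn.close()
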
16:09 kaliya and for Rabbit, any info from logs?
16:10 kutija_ yes
16:10 kutija_ just a sec
16:11 kutija_ this one is neutron-l3-agent log from controllers
16:11 kutija_ http://paste.openstack.org/show/229514/
16:11 kutija_ and this one is rabbitmq log
16:11 gongysh joined #fuel
16:11 kutija_ http://paste.openstack.org/show/229515/
16:12 kutija_ and it goes on and on until I restarted it manually
16:12 kutija_ which was right after Fuel finished the deploy
16:12 kaliya do you add Ceilometer also?
16:13 kutija_ of course, all services that are dependent on rabbitmq reported errors with it
16:13 kutija_ no
16:13 mattgriffin joined #fuel
16:13 kutija_ I do not have Ceilometer
16:13 e0ne joined #fuel
16:13 kutija_ this is a 6-node HA deploy - 3 controllers, 1 compute, 2 ceph nodes
16:13 kutija_ and three issues ;)
16:13 kutija_ rabbitmq works fine after manual restart
16:13 kutija_ heat-api is in the state stopped after deploy
16:13 kutija_ and it works fine after restart
16:14 kutija_ but mysql is dead
16:14 kutija_ and of course nothing works :)
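
A sketch of the manual workaround described above (check RabbitMQ and start heat-api once the deploy finishes) is below; service names are assumptions for an Ubuntu target (on CentOS the Heat service is typically openstack-heat-api), and in a Fuel HA deployment RabbitMQ may be Pacemaker-managed, in which case a restart would go through crm/pcs instead of the init script.

    # Sketch of the post-deploy workaround described above. Service names
    # are assumptions for Ubuntu; CentOS uses "openstack-heat-api", and a
    # Pacemaker-managed RabbitMQ would be restarted via crm/pcs instead.
    import subprocess

    def run(cmd):
        # Run a command, returning its exit code and combined output.
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.returncode, proc.stdout + proc.stderr

    rc, out = run(["rabbitmqctl", "cluster_status"])
    if rc != 0:
        print("RabbitMQ unhealthy:\n" + out)
        run(["service", "rabbitmq-server", "restart"])

    rc, _ = run(["service", "heat-api", "status"])
    if rc != 0:
        print("heat-api not running, starting it")
        run(["service", "heat-api", "start"])
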
16:14 kaliya we added lots of improvements to rabbit in the last rush
16:15 kaliya I cannot find any reference to your Rabbit trace in the bugs
16:15 kutija_ this is a complete RabbitMQ log
16:16 kutija_ from the controller
16:16 kutija_ http://paste.openstack.org/show/229527/
16:16 kaliya of course your HA is broken somehow
16:18 kutija_ well I'm pretty stuck now
16:18 kutija_ and what about heat-api?
16:22 pbrooko joined #fuel
16:22 kutija_ I'm about to delete this Fuel installation and try some other version
16:22 kutija_ if you need logs I'm willing to provide them
16:22 kutija_ while it's still there :)
16:22 glavni_ninja_ kaliya: you say that you don't guarantee the latest builds... which Fuel version do you recommend?
16:29 jaypipes joined #fuel
16:33 samuelBartel joined #fuel
16:37 e0ne joined #fuel
16:51 kutija joined #fuel
16:54 mwhahaha I think it depends on what you plan on actually running; I was successful in deploying 244. Unfortunately it's not certain which version might work for your configuration
16:57 kutija_ joined #fuel
17:03 mattgriffin joined #fuel
17:20 pasquier-s_ joined #fuel
17:25 teran joined #fuel
17:41 pasquier-s_ joined #fuel
17:43 glavni_ninja_ mwhahaha: 6.0 build 244?
17:50 mwhahaha 6.1 244
17:57 e0ne joined #fuel
18:00 dhblaz joined #fuel
18:01 dhblaz I need a VIP that is shared among VMs for load balancing (pgpool today). I don’t care if it is a private or a public float. Any ideas?
18:22 teran joined #fuel
18:31 mattgriffin joined #fuel
18:42 dhblaz It looks like I can configure pgpool’s watchdog to use any command to bring up/down the VIP.  So I can use a floating IP and use nova remove-floating-ip, add-floating-ip to move it.  Any suggestions on how to do this securely or more robustly than nova with a user password in clear text on the pgpool nodes?
18:42 dhblaz Here is a description of how watchdog works and the config knobs it has: http://www.pgpool.net/docs/pgpool-II-3.2.0/wd-en.html
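
A minimal sketch of such a watchdog up/down command, using the nova CLI calls mentioned above, is below. The floating IP and instance names are placeholders, and credentials are expected in the environment (OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL), e.g. from a sourced openrc, rather than hard-coded in the script.

    #!/usr/bin/env python
    # Sketch of a VIP mover for the pgpool watchdog using the nova CLI.
    # Credentials come from the environment (sourced openrc); the floating
    # IP and instance names below are placeholders.
    import subprocess
    import sys

    FLOATING_IP = "203.0.113.10"                 # placeholder VIP
    PEERS = ["pgpool-node-1", "pgpool-node-2"]   # placeholder instance names

    def take_vip(me):
        # Best-effort removal from the other nodes, then attach to this one.
        for peer in PEERS:
            if peer != me:
                subprocess.call(["nova", "remove-floating-ip", peer, FLOATING_IP])
        subprocess.check_call(["nova", "add-floating-ip", me, FLOATING_IP])

    def release_vip(me):
        subprocess.check_call(["nova", "remove-floating-ip", me, FLOATING_IP])

    if __name__ == "__main__":
        action, me = sys.argv[1], sys.argv[2]    # e.g. "up pgpool-node-1"
        take_vip(me) if action == "up" else release_vip(me)
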
18:57 mwhahaha could create a more restrictive role in keystone that can only do ip assignments and use that
18:57 mwhahaha so if someone got the user/pass they could only do ip manipulations or something
18:58 mwhahaha depending on how you plan on implementing the failover script, if you do something that supports encrypted items (puppet eyaml, chef encrypted databags, ansible vault, etc.) that could prevent it from just lying around
18:59 mwhahaha alternatively using some password service api or something, really depends on what you have available to you
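
One way to keep the password out of clear text on the pgpool nodes, along the lines suggested above, is to store it in an Ansible Vault file and decrypt it only when the failover script runs; the sketch below assumes PyYAML and ansible-vault are installed, and the file name and YAML key are placeholders (the vault password itself still has to be supplied via a prompt, --vault-password-file, or similar).

    # Sketch: load the OpenStack password from an Ansible Vault file at call
    # time instead of storing it in clear text. File name and YAML key are
    # placeholders.
    import os
    import subprocess
    import yaml

    def load_nova_password(vault_file="secrets.yml"):
        decrypted = subprocess.check_output(
            ["ansible-vault", "view", vault_file], text=True)
        return yaml.safe_load(decrypted)["nova_password"]

    os.environ["OS_PASSWORD"] = load_nova_password()
    # ...then invoke the nova CLI as in the failover sketch above.
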
19:47 tatyana joined #fuel
20:08 championofcyrodi if I were to go with a switch like http://www.nextwarehouse.com/item/?841430_g10e (10GbE) for Ceph... and my nodes are 1U Supermicros running CentOS 6... what is the best 10GbE NIC to go with?
20:08 championofcyrodi basically, my entire cluster is on a 1GbE switch w/ VLAN tagging... which of course is running at 100% port utilization, and now my Ceph-backed VM storage is sluggish
20:09 championofcyrodi so I want to upgrade the switch and the NICs
20:09 championofcyrodi (without rebuilding the whole shebang)
20:15 kutija joined #fuel
20:16 pasquier-s_ joined #fuel
20:35 tzn joined #fuel
20:52 mattgriffin joined #fuel
20:52 pasquier-s joined #fuel
20:56 dhblaz championofcyrodi: We use some Broadcom LOM NICs that use the bnx2x driver. They offer good performance, but the driver support wasn’t very good when we set up our cluster and we had to do a lot of work to get the bootstrap image working. Since you won’t require bootstrap you probably won’t have the kind of problem we did.
20:57 championofcyrodi yea, i've seen that a bit around here.
20:57 championofcyrodi yea... cloud migration is a nightmare.
20:58 championofcyrodi i think i might just offload the ~6TB of data to my ZFS store...
20:58 championofcyrodi blow away the entire cluster.
20:58 championofcyrodi then rebuild.
20:58 championofcyrodi w/ newer hardware.
20:58 championofcyrodi seems almost just as fast as mucking w/ nailgun and pxe images and god knows what else.
20:58 dhblaz If you are using ceph
20:58 championofcyrodi am
20:59 dhblaz you should be able to just add more ceph nodes with the fuel gui
20:59 dhblaz then use the ceph map to migrate your data from the weak nodes to the stronger ones
21:00 dhblaz if you want to get rid of the weak nodes altogether, eventually reweight those OSDs to 0, then take them down and remove them.
21:00 dhblaz I don’t work for Mirantis, so you may want to get some more specific instructions from someone who does.
21:00 dhblaz But this is how I would handle it in my cluster.
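
A sketch of that drain-and-remove sequence for a single OSD on a node being retired, using the standard ceph CLI, is below; the OSD id is a placeholder, and you would normally wait for rebalancing to complete (health back to HEALTH_OK) after the reweight before removing the OSD.

    # Sketch of draining and removing one OSD, following the sequence
    # described above. The OSD id is a placeholder; wait for rebalancing
    # to finish after the reweight before taking the OSD out and removing it.
    import subprocess

    def ceph(*args):
        subprocess.check_call(["ceph"] + list(args))

    osd_id = "5"  # placeholder

    # 1. Drain: set the CRUSH weight to 0 so data migrates off this OSD.
    ceph("osd", "crush", "reweight", "osd." + osd_id, "0")

    # 2. Once rebalancing is done, take the OSD out and remove it from the
    #    CRUSH map, auth database, and OSD map (stop the daemon on the node
    #    itself before the final removal).
    ceph("osd", "out", osd_id)
    ceph("osd", "crush", "remove", "osd." + osd_id)
    ceph("auth", "del", "osd." + osd_id)
    ceph("osd", "rm", osd_id)
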
21:25 gongysh joined #fuel
21:44 pasquier-s joined #fuel
21:44 rmoe joined #fuel
21:45 mattgriffin joined #fuel
22:18 pasquier-s joined #fuel
22:20 pasquier-s_ joined #fuel
22:25 samuelBartel joined #fuel
22:29 mattgriffin joined #fuel
22:39 mattgriffin joined #fuel
22:57 pasquier-s joined #fuel
23:04 e0ne joined #fuel
23:10 rmoe joined #fuel
23:15 jaypipes joined #fuel
23:23 pasquier-s_ joined #fuel
23:33 samuelBartel joined #fuel
23:44 e0ne joined #fuel
23:54 pasquier-s joined #fuel
