
IRC log for #fuel, 2016-01-29


All times shown according to UTC.

Time Nick Message
00:35 xarses joined #fuel
00:44 metsuke joined #fuel
00:52 angdraug joined #fuel
01:43 cartik joined #fuel
03:00 e0ne joined #fuel
05:07 reddy joined #fuel
05:56 javeriak joined #fuel
05:58 javeriak_ joined #fuel
06:11 javeriak joined #fuel
06:18 gongysh joined #fuel
06:58 reddy Hi
06:58 reddy i need help in installing Ldap plugin
06:58 reddy i am getting error as below
06:58 reddy [root@fuel plugins]# fuel --user=admin --password=admin123 plugins --install ldap-1.0-1.0.0-1.noarch.rpm
06:58 reddy Loaded plugins: fastestmirror, priorities
06:58 reddy Setting up Install Process
06:58 reddy Examining ldap-1.0-1.0.0-1.noarch.rpm: ldap-1.0-1.0.0-1.noarch
06:58 reddy ldap-1.0-1.0.0-1.noarch.rpm: does not update installed package.
06:58 reddy Error: Nothing to do
06:58 reddy Shell command executed with "1" exit code: yum -y install ldap-1.0-1.0.0-1.noarch.rpm
06:59 reddy Actually first i tried fuel plugins --install without providing authentication.
07:00 reddy can someone help me in removing that package
07:01 xek_ joined #fuel
07:14 reddy I used the --force option and installed the plugin
07:15 reddy thanks
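The recovery reddy describes can be sketched as a dry run; the plugin filename is taken from the log above, and the Fuel 7.0 client accepts `--force` to reinstall over a partial install. The command is printed for review rather than executed:

```shell
# Dry-run sketch of the force reinstall described above.
# Plugin filename comes from the log; adjust for your build.
PLUGIN_RPM="ldap-1.0-1.0.0-1.noarch.rpm"

# Authenticate explicitly so the client does not fall back to anonymous access,
# which is what left the plugin half-installed in the first place.
CMD="fuel --user=admin --password=admin123 plugins --install ${PLUGIN_RPM} --force"
echo "${CMD}"   # review, then run by hand on the Fuel master
```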
07:21 ppetit joined #fuel
07:38 Liuqing joined #fuel
07:43 venkat_ joined #fuel
07:52 bpiotrowski reddy: did you have it installed before?
07:57 krogon joined #fuel
07:59 hyperbaba joined #fuel
08:02 reddy yes i installed it but without providing authentication details
08:04 bpiotrowski I see
08:16 tzn joined #fuel
08:27 tzn joined #fuel
08:32 samuelBartel joined #fuel
08:33 kutija joined #fuel
08:34 pbelamge joined #fuel
08:35 hyperbaba Hello there, I have a question which is bothering me very much. I have fuel master node 5.1.1 and openstack deployed in production. Is it possible to update to a newer version (6 or 7), with an upgrade of the openstack environment as well? I am using ceph as the storage technology for all services.
08:42 Philipp__ joined #fuel
08:49 bpiotrowski ogelbukh: ↑
09:03 reddy Hi . i installed fuel 7.0 and i used default domain name while installing ( domain.tld ) . how can i change it now .
09:08 reddy if i change it in resolv.conf is this sufficient ? or where else do i need to change it ?
09:08 ogelbukh hyperbaba: for 5.1.1 to 6.0/6.1, it is experimental manual procedure with extremely high risk
09:09 ogelbukh it also has lots of limitations and I won't recommend it on production env
09:10 ogelbukh hyperbaba: for your reference, here's the description of the process in operations manual: https://docs.mirantis.com/fuel/fuel-6.0/operations.html#upgrade-environment-experimental
09:12 hyperbaba ogelbukh: So what to do now? It's very bad to get stuck in an old openstack version. How risky is upgrade procedure? Or is there any other way to catch up with current technology?
09:12 ppetit_ joined #fuel
09:13 ogelbukh hyperbaba: first of all, check your configuration by this check list https://docs.mirantis.com/fuel/fuel-6.0/operations.html#architecture-constraints
09:15 hyperbaba ogelbukh: i match everything except the network part i have ovs + tunnels not vlans... is it possible to do it with tunnels?
09:20 ogelbukh hyperbaba: in theory, yes
09:20 ogelbukh hyperbaba: but I would recommend another way
09:20 hyperbaba ogelbukh: please do
09:20 ogelbukh install new (small) cloud side by side with the existing one and move VMs via snapshots
09:20 ogelbukh for example, by tenant
09:21 ogelbukh compact remaining VMs on some computes in the original cloud to free other computes and reinstall them in the new cloud
09:21 ogelbukh repeat until all workloads are moved
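ogelbukh's snapshot-based move can be sketched as a printed sequence of commands, one VM at a time. All names and the new cloud's auth URL are placeholders, and exact nova/glance client syntax varies by release, so treat this as an outline rather than a recipe:

```shell
# Dry-run outline of the side-by-side migration: snapshot a VM in the old
# cloud, export the image, import it into the new cloud, boot from it.
# VM/snapshot names and NEW_CLOUD are example placeholders.
VM="tenant-a-vm01"
SNAP="${VM}-snap"
IMG="${SNAP}.qcow2"

for step in \
  "nova image-create ${VM} ${SNAP}" \
  "glance image-download --file ${IMG} ${SNAP}" \
  "glance --os-auth-url http://NEW_CLOUD:5000/v2.0 image-create --name ${SNAP} --disk-format qcow2 --container-format bare --file ${IMG}" \
  "nova --os-auth-url http://NEW_CLOUD:5000/v2.0 boot --image ${SNAP} --flavor m1.small ${VM}"
do
  echo "${step}"   # printed only; run each by hand per tenant
done
```

Repeating this tenant by tenant, then freeing and reinstalling computes, matches the loop ogelbukh describes above.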
09:22 hyperbaba vm's are not a problem. Big problem is the large object store data in ceph.... what to do with that?
09:24 hyperbaba also what to do with large volumes? We can't easily juggle a couple of hundred GB volumes around.
09:27 ogelbukh btw, do you have ceph osds on separate set of nodes? or they are mixed with other roles?
09:27 hyperbaba also there is a problem in your recommendation with ip addresses... how to manage them on two clouds with same pool?
09:27 hyperbaba mixed roles unfortunately
09:29 Ma1kavian joined #fuel
09:35 ogelbukh hyperbaba: mixed with computes or controllers?
09:36 hyperbaba ogelbukh: ceph mons are on controllers with couple of osd's . Computes are with osd's
09:49 ppetit_ joined #fuel
10:13 gongysh joined #fuel
10:26 tzn joined #fuel
10:28 kaliya joined #fuel
10:42 tkhno- joined #fuel
10:48 reddy joined #fuel
10:48 Liuqing joined #fuel
10:50 jaypipes joined #fuel
11:36 reddy joined #fuel
11:41 ppetit joined #fuel
11:47 ppetit joined #fuel
11:56 kaliya joined #fuel
13:13 bhaskarduvvuri joined #fuel
14:19 e0ne joined #fuel
14:21 krobzaur_ joined #fuel
14:40 Derek joined #fuel
14:42 Guest9315 may I ask one question regarding mirantis installation
14:42 Guest9315 I encountered error
14:42 Guest9315 HTTPConnectionPool(host='archive.ubuntu.com', port=80): Max retries exceeded with url: /ubuntu/ (Caused by <class 'socket.gaierror'>: [Errno -3] Temporary failure in name resolution)
14:42 Guest9315 using virtualbox + ubuntu 14.04
14:42 Guest9315 and Mirantis 7
14:43 Guest9315 not sure if anyone can give me some advice
14:56 mwhahaha looks to be a dns or connectivity issue
15:01 Guest9315 I have passed network validation
15:01 Guest9315 have set static dns and can ping internet like google
15:03 mwhahaha from the master or the deployed nodes?
15:03 Guest9315 both
15:03 Guest9315 I manually add static route
15:06 mwhahaha that error looks like it was part of the network validation process, where did it error?
15:08 Guest9315 no, network validation showed "succeeded"
15:08 Guest9315 this is the weird part :(
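The `socket.gaierror` in the pasted error is a DNS failure on the node doing the download, which network verification does not necessarily catch. A minimal check, run on the failing node; `getent` is used because it exercises the same resolver path (nsswitch plus /etc/resolv.conf) as Python's socket module:

```shell
# Reproduce the name-resolution failure outside of the deployment.
# Hostname is taken from the error message in the log.
HOST_TO_CHECK="archive.ubuntu.com"

if getent hosts "${HOST_TO_CHECK}" >/dev/null; then
  echo "resolution OK"
else
  echo "resolution FAILED - check /etc/resolv.conf and upstream DNS on the Fuel master"
fi
```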
15:09 kutija_ joined #fuel
15:23 fzhadaev1 joined #fuel
15:28 igene joined #fuel
15:30 igene Hi, I need help when booting PXE after installing ubuntu when deploying Openstack. I got a "Cannot get disk parameters" error when booting local boot. I've configured to install base system on a internal SD card on HP DL360p gen8 when I got this error.
15:31 igene If I configure the BIOS to direct boot SD card I'll go in to grub rescue mode
15:34 igene_ joined #fuel
15:37 xarses joined #fuel
15:39 igene_ Hi, I need help when booting PXE after installing ubuntu when deploying Openstack. I got a "Cannot get disk parameters" error when booting local boot. I've configured to install base system on a internal SD card on HP DL360p gen8 when I got this error.
15:39 igene_ If I configure the BIOS to direct boot SD card I'll go in to grub rescue mode
15:39 rmoe joined #fuel
15:47 mwhahaha igene: https://ask.openstack.org/en/question/51090/the-screen-shows-cannot-get-disk-parameters-while-deploying-mirantis-openstack-using-fuel/
15:47 mwhahaha the last comment says something about leaving pxe as first boot but the os disk as second boot, and that fixed the issue
15:48 igene I did leave the os disk as second boot (usb boot for sd card in dl360p gen8), but the issue remains
15:50 mwhahaha yea i'm not sure, the support folks would be the best to answer that
15:51 Billias joined #fuel
16:08 claflico joined #fuel
16:12 ppetit joined #fuel
16:25 igene Anyone got a possible solution?
16:25 dslevin1 joined #fuel
16:38 PandaKing joined #fuel
16:41 PandaKing Having trouble getting lma7 toolkit installed on Kilo in Virtual Box anyone able to assist with a question ?
16:44 krylon360 joined #fuel
16:45 kaliya joined #fuel
16:47 championofcyrodi joined #fuel
16:50 blahRus joined #fuel
16:51 krylon360 so a question on Fuel; I asked this during my OS100 Bootcamp in Oct; however never got a full answer. With Fuel becoming part of the OpenStack infra. Is it possible to bootstrap an existing OpenStack Deployment into Fuel vs having to perform a fresh deployment?
16:57 PandaKing In my short experience with OS Kilo the nodes must seed from the environment and cannot be imported.
17:04 krylon360 well that isn't going to work then. Just got done doing a manual 33 Compute Node + HA Controller, MidoGateway, RMQ, and ZMQ deploy. 42 nodes total.
17:28 PandaKing 3rd time deploying on virtual box.  Same error last 2 times.
17:29 PandaKing uids:
17:29 PandaKing - '4'
17:29 PandaKing parameters:
17:29 PandaKing puppet_modules: puppet/modules
17:29 PandaKing puppet_manifest: puppet/manifests/check_environment_configuration.pp
17:29 PandaKing timeout: 300
17:29 PandaKing cwd: "/etc/fuel/plugins/elasticsearch_kibana-0.7/"
17:29 PandaKing priority: 1500
17:29 PandaKing fail_on_error: true
17:29 PandaKing type: puppet
17:29 PandaKing diagnostic_name: elasticsearch_kibana-0.7.3
17:29 PandaKing .
17:37 PandaKing Is that a puppet log on the controller ?
17:42 PandaKing I see this in the Fuel UI.  logs of type error for Astute.  Error running RPC method granular_deploy: Failed to execute hook 'lma_collector-0.7.3' Puppet run failed. Check puppet logs for details
17:45 mwhahaha it's on node-4
17:45 mwhahaha or you can look in the fuel master in /var/log/docker-logs/remote/node-4*/puppet*.log
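The log path mwhahaha gives can be searched directly on the Fuel master; the node id comes from the conversation, and grepping for "err:" is an assumption about the log format (it matches the error line PandaKing pastes below):

```shell
# Pull only the err-level lines out of a node's puppet log on the Fuel master.
# Path is the one given in the chat; node id 4 is from this deployment.
NODE_ID=4
LOG_GLOB="/var/log/docker-logs/remote/node-${NODE_ID}*/puppet*.log"

# -h suppresses filenames so the errors read cleanly; the fallback message
# fires if the glob matches nothing (e.g. wrong node id).
grep -h "err:" ${LOG_GLOB} 2>/dev/null || echo "no logs matched ${LOG_GLOB}"
```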
17:47 PandaKing Ok I think I see the problem. Thanks !
17:47 PandaKing 116354+00:00 err:  The configured JVM size (1 GB) is greater than the total amount of RAM of the system (993.94 MB).
17:47 mwhahaha that would be a problem :)
17:48 PandaKing Virtualbox  could I use .5 in the JVM size ?  Looks like only 1-32
17:49 mwhahaha i think you can specify to get 1.5G
17:49 mwhahaha where do you see 1-32?
17:50 mwhahaha i know in the fuel virtualbox scripts it can be adjusted in the config.sh
17:53 javeriak joined #fuel
17:54 PandaKing error is from elasticsearch_kibana and it has a minimum of 1Gb memory for the heap.
17:55 PandaKing Settings |  Elasticsearch-Kibana Server Plugin | JVM Heap Size.
17:56 PandaKing Is there a way to change the setting to 512 on the fuel master, or do I need to completely rebuild with larger VMs ?
17:59 mwhahaha oh i'm not sure in the plugin, i'd assume you'd need bigger vms
18:00 javeriak joined #fuel
18:03 PandaKing Yea I know that's the "right way", just trying to get a small test up and running.
18:04 javeriak_ joined #fuel
18:06 PandaKing be back in a couple hours :)
18:20 Sesso_ hello
18:20 Sesso_ when typing keystone endpoint-list i get Authorization Failed: Service Unavailable (HTTP 503)
18:21 Sesso_ all my VMs died and i also cannot login to horizon
18:21 PandaKing T/F ? The Fuel UI, LMA collector plugin has an influxdb username/password which MUST match the Influx DB Plugin settings page lma/*****
18:21 mwhahaha Sesso_: check the haproxy status
18:22 Sesso_ i forgot how ;( its been awhile
18:22 mwhahaha there's an haproxy-status.sh
18:22 mwhahaha it'll show the status of the backends
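A small wrapper around the script mwhahaha mentions; the grep filter assumes the script prints one backend per line with a Status column, as in the `Status: DOWN/L7TOUT` snippets Sesso_ pastes later in this log:

```shell
# Show only the haproxy backends that are NOT healthy.
# haproxy-status.sh ships on Mirantis OpenStack controllers; the script name
# is from the chat, its exact output format is an assumption.
STATUS_CMD="haproxy-status.sh"

if command -v "${STATUS_CMD}" >/dev/null 2>&1; then
  "${STATUS_CMD}" | grep -v UP   # anything left is DOWN, MAINT, or timing out
else
  echo "${STATUS_CMD} not found - run this on a controller node"
fi
```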
18:22 mwhahaha PandaKing: I don't know but I would assume that yes they should match
18:23 Sesso_ it happened at exactly 10am this morning
18:23 mwhahaha network issue?
18:24 Sesso_ ok did haproxy sh
18:24 mwhahaha are the keystone backends DOWN?
18:25 kaliya joined #fuel
18:25 Sesso_ keystone 2 is
18:25 Sesso_ keystone 1 up
18:25 mwhahaha so you'll need to fix keystone 2
18:26 Sesso_ my neutrons are all down also
18:26 Sesso_ except neutron frontend is up
18:26 mwhahaha frontend is haproxy
18:26 mwhahaha so check your services and make sure they are running
18:27 Sesso_ oops its keystone 1 that is all down except for the frontend
18:29 mwhahaha yea so check keystone logs
18:31 Sesso_ 2016-01-29 11:30:59.006 24965 ERROR keystone.common.environment.eventlet_server [-] Could not bind to 192.168.0.7:35357
18:32 Sesso_ in the trace, i get address already in use
18:32 mwhahaha netstat -np | grep 35357 and see what process is using it
18:36 Sesso_ 192.168.0.7:35357       already using it
18:36 mwhahaha yea if you use -p with netstat it'll tell you what process id is using it
18:37 PandaKing lsof -i :<port>
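Both suggestions side by side, as a sketch; port 35357 is keystone's admin endpoint from the bind error above. Run as root so process names are visible to netstat:

```shell
# Find the process holding the keystone admin port (35357, from the log).
PORT=35357

# netstat: -n numeric, -l listeners only, -p show PID/program (needs root).
netstat -nlp 2>/dev/null | grep ":${PORT} " || true

# lsof, as PandaKing suggests; exits non-zero when nothing matches.
lsof -i ":${PORT}" 2>/dev/null || echo "nothing listening on ${PORT}"
```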
18:38 Sesso_ apache is listening on 35357
18:38 mwhahaha try restarting apache perhaps?
18:38 mwhahaha maybe it went to rotate logs and restart and got hung  or something
18:38 Sesso_ on all 3 controllers correct?
18:39 mwhahaha well start with one
18:39 mwhahaha but yea that'll probably be the fix
18:40 Sesso_ it failed with already in use
18:40 Sesso_ do i have to stop keystone first?
18:40 Sesso_ (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:35357
18:41 mwhahaha i thought it was started by apache
18:42 Sesso_ i stopped it and restarted apache. this time it started.
18:43 Sesso_ ehyyyy keystone 1 is up
18:44 Sesso_ still down on ctrl 2 and 3
18:44 mwhahaha yea you'll probably need to do the same thing on the others
18:44 mwhahaha check the logs first
18:49 Sesso_ keystone-1               ctrl1          Status: DOWN/L7TOUT
18:49 Sesso_ that one is the only one down now
18:50 kaliya joined #fuel
18:51 Sesso_ neutron                  BACKEND        Status: DOWN
18:51 Sesso_ i dont think that should be down
18:56 mwhahaha probably not, you'd have to check the logs for that one as well and perhaps restart neutron
18:57 Sesso_ the other is mysqld is down.
18:57 mwhahaha well that's a problem
18:57 mwhahaha what does pcs status report
18:57 krobzaur joined #fuel
18:58 Sesso_ 3 failed actions
18:58 Sesso_ p_neutron-l3-agent_monitor_20000 on ctrl1.bukucloud.com 'not running
18:58 Sesso_ p_mysql_start_0 on ctrl1.bukucloud.com 'unknown error'
18:59 Sesso_ PCSD Status:
18:59 Sesso_ 192.168.0.7: Offline
18:59 Sesso_ 192.168.0.15: Offline
18:59 Sesso_ 192.168.0.3: Offline
18:59 mwhahaha yea that offline stuff is always there
18:59 Sesso_ oh ok
19:00 mwhahaha check the pacemaker log to see if there's additional information around the mysql startup error
19:05 Sesso_ there is an error but i dont understand it
19:05 Sesso_ pacemaker_remoted:  warning: child_timeout_callback: p_mysql_start_0 process (PID 22822) timed out
19:06 mwhahaha might have timed out while starting, is mysql running on the other nodes?
19:07 Sesso_ stopped on all 3 controllers
19:10 mwhahaha so you'll need to try and start it again, i looked through the irc logs and here's some good info https://irclog.perlgeek.de/fuel/2015-01-28#i_10020017
19:18 Sesso_ can crm_resource -r p_mysql --force-start take a while to process?
19:32 Sesso_ it just sits there after running that command above
19:50 Rodrigo_BR joined #fuel
19:50 Rodrigo_BR I configured the cinder driver for netapp with success, I can create volumes, instances using the nfs backend.
19:50 Rodrigo_BR But when I create a new instance using the horizon, the value from internal storage for the hypervisor is marked as in use at the local disk, even not in use.
19:50 Rodrigo_BR The volume for instance is created in nfs shared but the LOCAL STORAGE in hypervisor is updated as used.
19:50 Rodrigo_BR http://paste.openstack.org/show/485479/
19:51 PandaKing ok back again with build fail.  How does one make sure the node names match after build ?  I set node names for each node in the dashboard and made all the configs match but the systems build with a hostname of node-#.domain.tld
19:52 PandaKing Node name settings don't match between the LMA collector and InfluxDB-Grafana plugins. at /etc/fuel/plugins/lma_collector-0.7/puppet/manifests/check_environment_configuration.pp:62 on node node-9.domain.tld
19:53 mwhahaha Sesso_: it may be failing because it's looking for other nodes to be up to bootstrap itself, do you know why mysql failed on all the nodes?
20:06 Sesso_ in crm_mon it shows this
20:06 Sesso_ Failed actions:    p_mysql_start_0 on ctrl1.bukucloud.com 'unknown error' (1): call=206, status=Timed Out, last-rc-change='Fri Jan 29 12:59:45 2016', queued=0ms, exec=300002ms    p_mysql_start_0 on ctrl3.bukucloud.com 'unknown error
20:06 Sesso_ 1 of 3 mysql services are now started
20:20 bhaskarduvvuri joined #fuel
20:23 tatyana joined #fuel
20:46 Sesso_ I'm back up. i think I figured out what happened.
20:46 angdraug joined #fuel
20:58 bhaskarduvvuri joined #fuel
21:01 mwhahaha what happened?
21:02 Sesso_ it seems that there was a rogue mysql process. It was trying to bind to an address that was already in use.
21:02 mwhahaha that's odd
21:03 Sesso_ I killed it. did the pcs resource disable mysql.. then enable
21:04 mwhahaha sounds like a bug in mysql resource, perhaps it was a mysql left over from a previous (re)start or something
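The fix Sesso_ arrived at can be summarised as a dry-run sequence (printed, not executed; `p_mysql` is the resource name from the pcs output above, and the pkill line assumes you have first confirmed with pgrep that the mysqld really is a stray, not the cluster-managed one):

```shell
# Dry-run sketch of recovering from a rogue mysqld under pacemaker.
# Resource name is from the 'Failed actions' output earlier in the log.
RESOURCE="p_mysql"

cat <<EOF
pkill -f mysqld                    # kill the stray process (verify with pgrep first!)
pcs resource disable ${RESOURCE}   # take the resource out of cluster management
pcs resource cleanup ${RESOURCE}   # clear the recorded failures/timeouts
pcs resource enable ${RESOURCE}    # let pacemaker start mysql cleanly again
EOF
```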
21:04 Sesso_ They all started back up. The only thing left is compute 3 says the node is enabled but the instance says that comp 3 is unavailable
21:05 mwhahaha did you try restarting the nova services on that one?
21:05 Sesso_ not yet
21:08 Sesso_ can i restart just one?
21:08 mwhahaha which service is reporting down?
21:08 Sesso_ 38 | nova-compute     | cp3.bukucloud.com   | nova     | enabled | up
21:09 Sesso_ the instance thinks that one is down
21:09 mwhahaha you can just restart the nova-compute service on that node
21:12 Sesso_ horizon still thinks its down
21:15 Sesso_ im trying to live migrate an instance to cp3 and see if it works
21:16 Sesso_ yep cp3 works. that instance doesn't like it though.
21:16 Sesso_ Isn't there a way in CLI to tell it to abandon 3 and start on another node?
21:20 Verilium mwhahaha:  Oh wow, haproxy-status.sh.  Good to know.  And here I was having to setup ssh port forwards to get to haproxy' status page.
21:25 _bhaskarduvvuri_ joined #fuel
21:25 tkhno- joined #fuel
21:45 Sesso_ mirantis tech had me set the state to active with  nova reset-state --active and it booted back up
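The command from Mirantis support, plus the CLI answer to Sesso_'s earlier question about abandoning a host: `nova evacuate` rebuilds an instance on another compute, provided the source host is actually down. Instance and host names below are placeholders; the commands are printed as a dry run:

```shell
# Dry-run sketch: recover an instance stuck in an error state, or rebuild it
# on another compute node entirely. Names are example placeholders.
INSTANCE="my-vm"
TARGET_HOST="cp1.bukucloud.com"

cat <<EOF
nova reset-state --active ${INSTANCE}      # what Mirantis support suggested
nova evacuate ${INSTANCE} ${TARGET_HOST}   # rebuild elsewhere (source host must be down)
EOF
```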
21:48 mwhahaha cool
21:52 Manipulated joined #fuel
21:52 Manipulated how to disable internal keystone
21:52 Manipulated and horizon
22:24 e0ne joined #fuel
22:28 HeOS joined #fuel
22:37 e0ne_ joined #fuel
22:45 e0ne joined #fuel
22:46 e0ne_ joined #fuel
23:58 krogon_ joined #fuel
