
IRC log for #fuel, 2015-07-01


All times are shown in UTC.

Time Nick Message
00:12 Billias joined #fuel
00:43 artem_panchenko joined #fuel
00:49 teran joined #fuel
00:57 eliqiao joined #fuel
00:58 eliqiao left #fuel
01:04 dhblaz joined #fuel
01:45 youellet__ joined #fuel
01:49 artem_panchenko joined #fuel
01:50 dhblaz joined #fuel
01:50 teran joined #fuel
02:10 Longgeek_ joined #fuel
02:24 dhblaz_ joined #fuel
02:41 Longgeek joined #fuel
02:52 teran joined #fuel
02:53 saibarspeis joined #fuel
03:24 stamak joined #fuel
03:44 Longgeek joined #fuel
03:52 teran joined #fuel
04:34 Longgeek joined #fuel
04:47 stamak joined #fuel
04:53 teran joined #fuel
05:22 rmoe joined #fuel
05:23 saibarspeis joined #fuel
05:31 nurla joined #fuel
05:37 ub joined #fuel
05:54 teran joined #fuel
05:59 e0ne joined #fuel
06:00 gongysh_ joined #fuel
06:03 rmoe joined #fuel
06:55 teran joined #fuel
07:01 dklepikov joined #fuel
07:06 bdudko joined #fuel
07:07 monester_laptop joined #fuel
07:14 dancn joined #fuel
07:37 subscope joined #fuel
07:41 artem_panchenko left #fuel
07:44 eliqiao joined #fuel
07:44 eliqiao left #fuel
07:55 teran joined #fuel
07:56 e0ne joined #fuel
08:02 dkusidlo joined #fuel
08:10 teran joined #fuel
08:21 hyperbaba joined #fuel
08:29 HeOS joined #fuel
08:34 stamak joined #fuel
09:00 monester_laptop joined #fuel
09:00 dkusidlo joined #fuel
09:20 dkusidlo joined #fuel
09:26 junkao joined #fuel
09:27 DaniC joined #fuel
09:27 samuelBartel joined #fuel
09:30 DaniC Hi, i'm running Fuel 5.1.1 (Icehouse) and it seems i'm hitting a problem with dnsmasq: "DHCPRELEASE(tap8ecf66b6-72) 192.168.111.24 fa:16:3e:72:04:82 unknown lease". my research points me to upgrading dnsmasq from 2.59 to 2.69. Has anyone tested / can anyone suggest whether it's a good idea to upgrade only that pkg?
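
For anyone hitting the same thing, a hedged sketch of checking and upgrading only that package on a CentOS-based 5.1.1 node; repo availability and the p_neutron-dhcp-agent resource name are assumptions here:

    # Check the currently installed dnsmasq version (CentOS/RHEL assumed)
    rpm -q dnsmasq

    # See whether an update for just this package is available
    yum list updates dnsmasq

    # Upgrade only dnsmasq, leaving all other packages untouched
    yum upgrade -y dnsmasq

    # Restart the DHCP agent so the new dnsmasq binary is picked up;
    # on Fuel HA controllers the agent is typically a Pacemaker
    # resource, so cycle it via pcs rather than the init script
    pcs resource disable p_neutron-dhcp-agent
    pcs resource enable p_neutron-dhcp-agent
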
09:35 e0ne joined #fuel
10:01 hyperbaba Hi, i have 5.1.1 deployed in HA with 3 controllers. Something strange is happening when using horizon. When i choose an operation (regardless of the type) from horizon, the action does not kick in every time and i have to do it twice on the dashboard. For example, when i hit terminate instance, "working..." is shown but the instance is not scheduled for termination. When i repeat the action, the scheduled operation is executed. Has anyone else noticed
10:01 hyperbaba that behaviour? Tried rebooting all controllers (in sequence) and the behaviour is the same.
10:30 dkusidlo joined #fuel
10:49 sergmelikyan joined #fuel
11:15 teran joined #fuel
11:18 teran_ joined #fuel
11:25 sergmelikyan joined #fuel
11:29 sergmelikyan joined #fuel
11:32 sergmelikyan joined #fuel
11:52 sergmelikyan joined #fuel
11:52 sergmelikyan joined #fuel
12:04 sergmelikyan joined #fuel
12:10 samuelBartel joined #fuel
12:12 dkusidlo joined #fuel
12:15 sergmelikyan joined #fuel
12:15 sergmelikyan joined #fuel
12:16 sergmelikyan joined #fuel
12:19 ub joined #fuel
12:24 sergmelikyan joined #fuel
12:25 Longgeek joined #fuel
12:27 sergmelikyan joined #fuel
12:28 dhblaz joined #fuel
12:29 sergmelikyan joined #fuel
12:35 sergmelikyan joined #fuel
12:35 sergmelikyan joined #fuel
12:35 sergmelikyan joined #fuel
12:36 dhblaz joined #fuel
12:46 dkusidlo joined #fuel
12:48 sergmelikyan joined #fuel
12:49 Longgeek joined #fuel
12:59 ssimon joined #fuel
13:03 ssimon Hi! Hope you guys can help. I've installed Fuel 6.1 (Mirantis); the node is up and I can access it by SSH, but there is no access to the web UI, and the node is not listening on port 8000 at all
13:14 Longgeek joined #fuel
13:24 hammondr_ joined #fuel
13:30 dhblaz joined #fuel
13:34 geta-tn joined #fuel
13:35 geta-tn hi all
13:35 geta-tn we hit an important issue; any help/hint would be appreciated
13:36 geta-tn we installed our Mirantis Fuel 5.1.1 in a Proxmox env, and after installing the environment the VM hosting Fuel was stopped
13:38 geta-tn today, after a couple of months, we started Fuel, and the web interface is showing our environment as not changeable, with a REMOVING notification
13:38 geta-tn we are getting a bit worried, is it about to remove the environment?
13:39 geta-tn does anyone have any idea of what is happening?
13:45 geta-tn anyone there?
13:56 aliemieshko_ joined #fuel
14:00 dhblaz joined #fuel
14:10 dkusidlo joined #fuel
14:13 MiroslavAnashkin geta-tn: Please check time on fuel master node.
14:16 sergmelikyan joined #fuel
14:18 geta-tn joined #fuel
14:20 geta-tn hi all
14:20 DrSlump o/
14:21 geta-tn we hit an important issue; any help/hint would be appreciated. we installed our Mirantis Fuel 5.1.1 in a Proxmox env, and after installing the environment the VM hosting Fuel was stopped. today, after a couple of months, we started Fuel, and the web interface is showing our environment as not changeable, with a REMOVING notification. we are getting a bit worried, is it about to remove the environment? (my connection went down earlier)
14:21 geta-tn @MiroslavAnashkin: you asked me to check the time on the Fuel node?
14:22 MiroslavAnashkin Yes.
14:22 aliemieshko ssimon: please run: 'netstat -putna |grep 8000'
14:22 aliemieshko on master node
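
A minimal sketch of the follow-up checks, assuming the Fuel UI is served on port 8000 of the master node (the exact service layout differs between Fuel releases):

    # Is anything listening on the UI port at all?
    netstat -putna | grep 8000

    # Does the UI answer locally? This bypasses any firewall in between
    curl -sI http://127.0.0.1:8000/ | head -n1

    # A missing ACCEPT rule for 8000/tcp would explain SSH working
    # while the web UI does not
    iptables -L -n | grep 8000
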
14:23 MiroslavAnashkin hyperbaba: Please clean up your browser cache. I have even seen frozen buttons in Chrome due to Horizon/browser cache issues.
14:25 sergmelikyan joined #fuel
14:26 geta-tn MiroslavAnashkin: fuel  date -- Wed Jul  1 14:24:53 UTC 2015
14:28 MiroslavAnashkin geta-tn: OK, please run `fuel task` and check the tasks in progress
14:31 geta-tn MiroslavAnashkin: sorry, what do you mean by 'fuel task'?
14:32 rmoe joined #fuel
14:33 claflico joined #fuel
14:54 ilbot3 joined #fuel
14:54 Topic for #fuel is now Fuel 6.1 (Juno) https://software.mirantis.com | WARNING if you upgrade the masternode to 6.1 https://online.mirantis.com/hubfs/Technical_Bulletins/Mirantis-Technical-Bulletin-18-Upgradeto6_1-1.pdf | https://wiki.openstack.org/wiki/Fuel | Paste here http://paste.openstack.org/ | IRC logs http://irclog.perlgeek.de/fuel/
14:55 MiroslavAnashkin ?
14:56 geta-tn MiroslavAnashkin: Should i run `fuel task list` on the fuel node?
14:57 MiroslavAnashkin Yes
14:58 subscope joined #fuel
14:58 sergmelikyan joined #fuel
14:58 MiroslavAnashkin To identify the task IDs which Fuel considers to be running. Then you may delete these tasks
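
As a hedged sketch of that workflow (the delete flag's exact spelling varies between python-fuelclient releases, so treat it as illustrative and check `fuel task --help`):

    # List all tasks Nailgun knows about, with their current state
    fuel task

    # Illustrative: remove a stuck task by ID; verify the exact
    # option name for your client version first
    fuel task delete --task <task-id>
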
14:59 kun_huang joined #fuel
15:01 geta-tn MiroslavAnashkin: Thanks a lot
15:07 Longgeek joined #fuel
15:08 Longgeek joined #fuel
15:14 dkusidlo joined #fuel
15:14 Longgeek joined #fuel
15:25 xarses joined #fuel
15:29 sergmelikyan joined #fuel
15:31 sergmelikyan joined #fuel
15:43 sergmelikyan joined #fuel
15:50 dkusidlo joined #fuel
15:50 sergmelikyan joined #fuel
15:51 championofcyrodi so the whole cluster just died trying to disable and re-enable pcs resources...
15:51 championofcyrodi PCSD Status:
15:51 championofcyrodi Error: no nodes found in corosync.conf
15:52 championofcyrodi pcs cluster status does show 3 nodes configured, 3 expected votes, 29 resources configured...
15:52 championofcyrodi with the proper "current DC"
15:55 aliemieshko can you provide us the output of `pcs status` from one of the controller nodes?
15:56 championofcyrodi aliemieshko: http://paste.openstack.org/show/zEW55gKfXR30MoGPKYpR/
15:57 championofcyrodi this was the original error, which made me choose to try to disable and re-enable the resources
15:57 championofcyrodi http://paste.openstack.org/show/mXCpiMO2i05Q8lXffHlc/
15:58 championofcyrodi using:
15:58 championofcyrodi for i in $(pcs resource | grep clone | awk '{print $3}'); do pcs resource disable $i; done
15:58 championofcyrodi and then again w/ the 'enable' flag
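
For readability, the same disable/enable cycle as a cleaned-up sketch; like the original one-liner, it assumes the third column of `pcs resource` output holds the clone resource name:

    # Disable every clone resource
    for i in $(pcs resource | grep clone | awk '{print $3}'); do
        pcs resource disable "$i"
    done

    # ...and the same loop with `enable` to bring them back
    for i in $(pcs resource | grep clone | awk '{print $3}'); do
        pcs resource enable "$i"
    done
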
15:58 championofcyrodi however, some resources are for services we did not choose to install, like ceilometer
15:59 aliemieshko now only ceilometer is in the 'stopped' state?
16:00 championofcyrodi well.. now public__vip is started, and i can get to the horizon UI (which took a while)
16:01 championofcyrodi but the instances are still unreachable, even though the neutron agents are all reported as 'UP'
16:01 aliemieshko what about other resources ?
16:03 thumpba joined #fuel
16:03 championofcyrodi all of the resources seem to be 'started', but as you can see from the failed actions, there are reports of some of the resources 'timing out'
16:03 aliemieshko and provide us the output of `nova service-list` from one of the controllers
16:04 championofcyrodi with 'unknown error'
16:04 championofcyrodi all of the nova services are up and enabled.
16:04 championofcyrodi we have only been having issues with neutron agents
16:04 aliemieshko what about time ?
16:05 aliemieshko is it  the same on all nodes ?
16:05 jobewan joined #fuel
16:05 championofcyrodi yes
16:06 aliemieshko please provide us again: pcs status
16:07 Miroslav_ joined #fuel
16:07 championofcyrodi it has not changed since the last time i pasted it:
16:07 championofcyrodi http://paste.openstack.org/show/zEW55gKfXR30MoGPKYpR/
16:08 Longgeek joined #fuel
16:11 sergmelikyan joined #fuel
16:13 aliemieshko run on node-46 :     pcs resource cleanup p_mysql
16:13 sergmelikyan joined #fuel
16:13 championofcyrodi done
16:13 championofcyrodi Resource: p_mysql successfully cleaned up
16:14 aliemieshko pcs resource disable p_mysql
16:14 aliemieshko pcs resource enable p_mysql
16:14 Miroslav_ championofcyrodi, if you stopped all resources, Pacemaker sets all the nodes to maintenance mode. Please try `pcs cluster unstandby --all`
16:15 championofcyrodi Miroslav_ ran, there was no output on the console
16:16 championofcyrodi still nothing is reachable via public interface
16:16 Miroslav_ or `crm configure property maintenance-mode=false`. Then please wait 2-3 minutes and check the current status with `pcs status`
16:16 championofcyrodi i'll wait 2-3 minutes first.
16:18 championofcyrodi currently 'neutron agent-list' is now hanging
16:19 championofcyrodi Gateway Timeout (HTTP 504)
16:19 championofcyrodi so it looks like things are now worse. :\
16:19 championofcyrodi HA configuration is turning out to be very difficult to diagnose and troubleshoot.
16:20 championofcyrodi pacemaker/corosync seem to be out of sync with what the process states actually are.
16:20 championofcyrodi of course, as i just said that.
16:20 championofcyrodi now agent-list returned list of agents with many dead
16:21 aliemieshko what about pcs status  ?
16:23 championofcyrodi everything looks as expected... however node-46 is still stopped in regard to mysql
16:24 championofcyrodi is there a way to monitor the 'status' of pcs resource polling or whatever?
16:24 championofcyrodi I understand that there are likely timing issues with running services... but i'm hoping to be able to do more than enable/disable a resource and then wait 5 minutes w/o any context of what is happening.
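
On monitoring: there is no per-operation progress indicator in pcs, but Pacemaker's standard monitor comes close; a sketch using stock tooling:

    # One-shot snapshot of cluster state, including failed actions
    crm_mon -1

    # Continuously refreshing view (Ctrl-C to exit); useful while
    # waiting for a disable/enable cycle to settle
    crm_mon

    # Include per-resource failure counts, which monitor ops increment
    crm_mon -1 --failcounts
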
16:28 sergmelikyan joined #fuel
16:29 Miroslav_ please run `pcs resource cleanup clone_p_mysql` first. Then, somewhere on node 50 or 54, please check Galera sync status with `mysql -e "show status like 'wsrep%';"`
16:30 championofcyrodi did the cleanup, and now I am looking at a table with variable names and values.
16:30 championofcyrodi (from node-54)
16:31 championofcyrodi wsrep_incoming_addresses does not show node-46 in the list
16:32 aliemieshko mysql -e "show global status like 'wsrep_incoming_addresses'"
16:32 Miroslav_ Please post the output from ..wsrep% status
16:34 Miroslav_ And then, run the same command on node-46
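
The full `wsrep%` output is long; these are the status variables that usually matter when judging whether a Galera node is in the cluster and synced (all standard Galera status names):

    mysql -e "SHOW STATUS WHERE Variable_name IN (
        'wsrep_cluster_size',        -- how many nodes the cluster sees
        'wsrep_cluster_status',      -- should be 'Primary'
        'wsrep_local_state_comment', -- should be 'Synced'
        'wsrep_ready',               -- should be 'ON'
        'wsrep_incoming_addresses'   -- every member should be listed here
    );"
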
16:34 championofcyrodi http://paste.openstack.org/show/CkoixpPZUI0LvSxoVdWF/
16:35 championofcyrodi no node-46. re-run the pcs cleanup and then wsrep? or just wsrep? ... and paste the output.
16:35 Miroslav_ Just wsrep on node-46
16:36 championofcyrodi ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (111)
16:36 championofcyrodi mysqld is not running...
16:36 junkao_ joined #fuel
16:36 aliemieshko please run 'df -h' on node-46
16:37 championofcyrodi http://paste.openstack.org/show/iLk65fvLkJ5o1ikrUP0G/
16:37 championofcyrodi no volume on node-46 has more than 25% use
16:39 championofcyrodi free -m
16:39 championofcyrodi looks like node-46 is low on memory... currently 1.9GB free
16:39 championofcyrodi 13910 used.
16:40 championofcyrodi 16 instances of nova-api running at .5 GB each...
16:40 championofcyrodi 7 instances of cinder at 448m each
16:41 championofcyrodi mysql is claiming 2036m
16:41 championofcyrodi but not running?
16:42 championofcyrodi why would node-46's mysql instance be affecting public client connectivity though?
16:43 championofcyrodi especially if have 2 other controllers
16:43 championofcyrodi i guess the neutron data can't be read from mysql...
16:43 championofcyrodi which is why neutron agent-list is timing out...
16:43 championofcyrodi thus l3 agents, etc... cannot work properly.
16:45 Longgeek joined #fuel
16:46 Miroslav_ Neutron may take up to an hour to re-create the DHCP namespaces. Please go to node-50 (the node where the DHCP agent is currently running) and check with `ip netns list`. Then find the DHCP namespace and check its contents with `ip netns exec <namespace> ip a`
16:47 Miroslav_ Then check one more time after 1-2 minutes. If new entities appear in the DHCP namespace, it means the DHCP agent is still reading them from the DB
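
Concretely, the check just described might look like this (the namespace name is a placeholder; qdhcp namespaces are named after the network UUID):

    # List DHCP namespaces; each qdhcp-<network-uuid> serves one network
    ip netns list | grep qdhcp

    # Inspect one of them (the UUID is a placeholder)
    ip netns exec qdhcp-<network-uuid> ip a

    # Re-run a minute or two later; if new interfaces or addresses keep
    # appearing, the agent is still rebuilding the namespace from the DB
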
16:47 championofcyrodi there are 5 instances of qdhcp on node-50 when running `ip netns list`
16:48 samuelBartel joined #fuel
16:48 championofcyrodi so not sure which one to check content w/ `ip netns exec <namespace> ip a`
16:53 Miroslav_ Hmm, how many networks do you have?
16:53 championofcyrodi 1 physical, using vlans
16:53 championofcyrodi oh
16:53 championofcyrodi openstack networks.
16:54 championofcyrodi 4 projects... so that would be 4 instances there...
16:54 championofcyrodi plus the private... so that makes 5
16:54 sergmelikyan joined #fuel
16:54 muadgib joined #fuel
16:56 championofcyrodi yea, nova network-list shows 5
16:57 championofcyrodi Miroslav_ okay, i got the correlation between nova networks and dhcp instances...
16:58 hammondr1 left #fuel
16:58 championofcyrodi in regard to checking the dhcp content, should that just be done with the default admin network?
16:58 championofcyrodi which is where most of my VMs are that are currently not responding to public requests.
16:59 Miroslav_ Yes, you may check the admin network only - we need to find the root cause first.
17:00 championofcyrodi here is the output...
17:00 championofcyrodi http://paste.openstack.org/show/xjiA9eMX5BjwJkiiLHjw/
17:01 championofcyrodi okay, neutron agent-list just worked, showing all agents alive
17:01 championofcyrodi still public access down
17:01 sergmelikyan joined #fuel
17:10 Miroslav_ please check what is in ns_IPaddr2 namespace on node-54
17:11 championofcyrodi [root@node-54 ~]# ip netns exec ns_IPaddr2 ip a
17:11 championofcyrodi Cannot open network namespace: No such file or directory
17:13 Miroslav_ Where is vip__public currently running? And what are the namespaces on that node?
17:15 Miroslav_ `pcs status` shows it is on node-54
17:17 championofcyrodi it is currently running on node-54 as far as i know.
17:18 championofcyrodi i only know from looking at pcs status
17:18 championofcyrodi not sure how else to verify
17:21 Miroslav_ Well, we need to identify the vip__public namespace and check what is inside
17:21 mwhahaha i think that's in the vrouter namespace
17:22 jaypipes joined #fuel
17:23 mwhahaha or is it in the haproxy namespace
17:26 championofcyrodi there is an haproxy namespace...
17:26 mwhahaha i think it's in that one
17:26 championofcyrodi on the vip
17:26 championofcyrodi http://paste.openstack.org/show/cVECGXdM2nPBGIL0YXWA/
17:28 Miroslav_ Please try to ping public VIP from inside the haproxy namespace
17:30 championofcyrodi by 'ping from inside the namespace' do you mean just from the VIP controller itself? or is there a specific flag i need to pass to the ping command?
17:31 mwhahaha ip netns exec haproxy ping <ip>
17:32 championofcyrodi [root@node-54 ~]# ip netns exec haproxy ping 192.168.3.122
17:32 championofcyrodi PING 192.168.3.122 (192.168.3.122) 56(84) bytes of data.
17:32 championofcyrodi 64 bytes from 192.168.3.122: icmp_seq=1 ttl=64 time=0.059 ms
17:34 championofcyrodi 192.168.3.122 is the VIP public ip.
17:35 championofcyrodi so i assume that is what you meant by 'ping public VIP'
17:36 Miroslav_ yes, correct
17:38 championofcyrodi I will be reviewing this chat log for the last 2 hours and what i did, supplemented w/ more googling of docs, to better understand how HAProxy works.  This has been the Achilles' heel of our cluster for quite some time.
17:38 championofcyrodi in the last 3 months, we have seen client SSH sessions just lock up, and then proceed after 5-10 seconds as though nothing happened.
17:39 championofcyrodi rabbitmq connections timing out and not responding...
17:39 championofcyrodi causing other services to intermittently fail.
17:40 championofcyrodi problem is, that it's so random i can't reproduce it intentionally to debug
17:40 mwhahaha ssh sessions shouldn't have anything to do with haproxy
17:41 championofcyrodi what about ssh sessions that connect through the VIP?
17:41 mwhahaha there should be an haproxy-status.sh that should show you the status of all the vips in haproxy
17:41 mwhahaha what service are you sshing to via a vip?
17:41 championofcyrodi running neutron services w/ a VIP seems to route ALL my public traffic through a single controller
17:42 mwhahaha that's not haproxy though
17:42 mwhahaha that would be a function of the vrouter i believe
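
Two hedged ways to see what haproxy thinks of its backends on a Fuel controller; both the helper-script path and the stats-socket location are assumptions, so check the local haproxy.cfg:

    # Fuel ships a status helper on controllers (path may vary by release)
    /usr/bin/haproxy-status.sh

    # Or query haproxy's admin socket directly; the socket path is an
    # assumption, look for the `stats socket` line in haproxy.cfg
    echo "show stat" | socat unix-connect:/var/lib/haproxy/stats stdio
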
17:42 championofcyrodi huh... i just got a ping back!
17:43 championofcyrodi (have had ping running with interval every minute against a VM)
17:43 mwhahaha you might want to check the arp tables
17:44 championofcyrodi i have looked through all that... its such a nightmare for someone not familiar with so much network abstraction
17:45 championofcyrodi okay, so my public connectivity is just suddenly back up...
17:45 Miroslav_ championofcyrodi, What is your Fuel version,  BTW?
17:45 championofcyrodi after about an hour of being down.
17:45 championofcyrodi 6.0
17:45 kaliya joined #fuel
17:46 championofcyrodi we started w/ 5.0, then upgraded to 5.1, then upgraded to 6.0.. but maybe did 5.2 in the process of going to 6.0, can't recall exactly.
17:46 championofcyrodi but the openstack environment was redeployed when we moved to 6.0
17:47 championofcyrodi aka moving all the VMs off icehouse to backup as qcow, then re-imported into juno
17:47 championofcyrodi according to pcs resource, the mysql instance on node-46 is still stopped
17:47 Miroslav_ Oh, I thought it was 5.x.x. Well, it may be a good idea to upgrade all the Pacemaker OCF scripts to the latest version from 6.1. An upgrade guide is in progress, if I remember correctly. 6.1 is much faster in terms of re-assembly speed
17:47 championofcyrodi other than that everything looks fine, and no FAILs at the bottom
17:48 championofcyrodi heh, i'm scared to touch anything until i review the entire chat log and grok whatever troubleshooting we just did.
17:49 Miroslav_ No, do not migrate right now)) it requires a maintenance window.
17:50 championofcyrodi you mentioned it could take an hour to rebuild dhcp tables?
17:50 championofcyrodi and approx an hour went by and everything came back up...
17:50 championofcyrodi what was that about?
17:51 championofcyrodi besides you being a wizard
17:51 Miroslav_ Neutron DHCP agent is a bit slow in reconstructing the namespaces
18:01 championofcyrodi i'm going to watch this i guess
18:01 championofcyrodi https://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/neutron-network-namespaces-and-iptables-technical-deep-dive
18:06 saibarspeis joined #fuel
18:12 championofcyrodi in the video above, the speaker asks if people are familiar with what i think he is saying 'valence' in regard to layer 2 virtualization...
18:13 championofcyrodi but i'm having trouble finding any resources on this via google
18:13 championofcyrodi valets maybe?
18:13 championofcyrodi was hoping to look into that a bit before continuing the video, since most of the audience agreed they were familiar with the concept.
18:14 championofcyrodi i'm familiar w/ OSI and layer 2,  but not... vagrance??
18:14 championofcyrodi like vagrant maybe?
18:17 championofcyrodi so it would seem that vagrant networking just implements a 'provider' interface to support configuring vmware/virtualbox layer 2 virtualization support.
18:17 championofcyrodi which i guess relates to quantum server providing API calls to L2/L3/DHCP neutron agents.
18:17 championofcyrodi within a namespace?
18:23 mwhahaha vlans
18:23 mwhahaha is what he said
18:23 e0ne joined #fuel
18:23 championofcyrodi lol, thanks
18:24 e0ne joined #fuel
18:57 sergmelikyan joined #fuel
19:11 Miroslav_ joined #fuel
19:19 championofcyrodi seems like every 3rd request to mysqldb fails because node-46 is still not running mysql.  is it a good/bad idea to try,
19:19 championofcyrodi `pcs resource clear clone_p_mysql node-46.domain.com`  ?
19:20 championofcyrodi also still getting the 'unknown error' for l3-agent_monitor and openvswitch-agent_monitor at the bottom of pcs status
19:22 Miroslav_ Please run `pcs resource cleanup clone_p_neutron-openvswitch-agent` and `pcs resource cleanup clone_p_neutron-l3-agent` first
19:23 championofcyrodi ping_vip__public_monitor_20000 on node-54.ccri.com 'unknown error', status=Timed Out... 2 minutes ago.
19:24 championofcyrodi but the other two errors we just cleaned up are gone for now.
19:26 MiroslavAnashkin And now please try `pcs resource clear clone_p_mysql node-46.domain.com`
19:32 championofcyrodi Error: clone_p_mysql is not a valid resource, so i just tried 'p_mysql', thinking the 'clone' is not actually on node-46, and it seemed to work.
19:33 MiroslavAnashkin Please wait ~5 mins and check if it really works
19:33 championofcyrodi okay
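
Worth noting the distinction, since both commands came up here: `pcs resource cleanup` resets a resource's failcount and recorded operation history so Pacemaker retries it, while `pcs resource clear` removes the location constraints left behind by `pcs resource move` or `ban`. A sketch:

    # Forget recorded failures of p_mysql so Pacemaker retries it
    pcs resource cleanup p_mysql

    # Drop any move/ban constraints pinning p_mysql away from a node
    # (a no-op if none exist)
    pcs resource clear p_mysql
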
19:38 HeOS joined #fuel
19:42 championofcyrodi looking into one of my instances on a compute node, `ip a` shows the instance-specific devices.  I've worked out that qbrc505b311 is the bridge source for the qemu interface, and that it is connected with tapc505b311.   What is the function of qvoc and qvbc ?
19:43 sergmelikyan joined #fuel
19:45 championofcyrodi or rather, what are qvoc and qvbc?
19:46 MightyFork joined #fuel
19:59 mwhahaha https://www.openstack.org/assets/presentation-media/HK-Openstack-Namespaces1-.pdf page 14
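
For reference, that page describes the per-port interface chain; the device names below come from the log (all sharing the c505b311 port-ID suffix), and `brctl`/`ovs-vsctl` are the standard tools to trace it:

    # tapc505b311 - the instance's NIC as seen by qemu
    # qbrc505b311 - per-port Linux bridge; exists so iptables-based
    #               security groups can filter the traffic
    # qvbc505b311 - veth endpoint on the Linux-bridge side ("b")
    # qvoc505b311 - veth endpoint on the Open vSwitch side ("o"),
    #               plugged into br-int

    # Show the tap and qvb devices attached to the per-port bridge
    brctl show qbrc505b311

    # Confirm the qvo end is a port on the integration bridge
    ovs-vsctl list-ports br-int | grep c505b311
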
20:02 monester_laptop joined #fuel
20:09 stamak joined #fuel
20:18 e0ne joined #fuel
20:40 championofcyrodi mwhahaha: thanks.
20:42 championofcyrodi since the issue I am seeing is not local to 1 specific VM, it would seem the problem is with the layer 2 agent.
20:46 championofcyrodi it would seem my "Network Node" and "Cloud Controller Node" are co-located.
21:04 championofcyrodi so the mysql resource is still not 'up' on node-46... so i just ran the same process, with the same args and as the mysql user, that was running on the other two nodes...
21:04 championofcyrodi 2015-07-01 21:03:18 302 [Note] Recovering after a crash using mysql-bin
21:04 championofcyrodi 2015-07-01 21:03:18 302 [Note] WSREP: Binlog recovery, found wsrep position 70859d2c-c2a9-11e4-8fbd-83ca353d1505:63739042
21:04 championofcyrodi 2015-07-01 21:03:18 302 [Note] WSREP: Binlog recovery scan stopped at Xid event -1
21:04 championofcyrodi 2015-07-01 21:03:18 302 [Note] Starting crash recovery...
21:04 championofcyrodi 2015-07-01 21:03:18 302 [Note] Crash recovery finished.
21:04 championofcyrodi 2
21:05 championofcyrodi and now it is running, WSREP  reports it is synced with the group....
21:05 championofcyrodi and that mysqld is ready for connections
21:06 championofcyrodi which i guess means mysql is fine, and pcs enable/disable is not working properly.
21:07 championofcyrodi pcs status is showing the clone set as all started now too...
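
A hedged reconstruction of what "running the same process with the same args" amounts to; the actual mysqld options come from whatever `ps` shows on a healthy node, so the ones below are placeholders:

    # On a healthy node, capture the exact mysqld command line in use
    ps -ef | grep [m]ysqld

    # On node-46, start mysqld the same way, as the mysql user
    # (paths and options are placeholders for whatever ps showed)
    sudo -u mysql /usr/sbin/mysqld --defaults-file=/etc/my.cnf \
        --wsrep-cluster-address="gcomm://<peer-ips>" &

    # Once WSREP reports 'Synced', let Pacemaker re-learn the state
    pcs resource cleanup clone_p_mysql
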
21:13 teran joined #fuel
21:30 youellet_ joined #fuel
21:44 youellet__ joined #fuel
22:15 xarses joined #fuel
22:55 rmoe joined #fuel
23:03 monester_laptop joined #fuel
