
IRC log for #fuel, 2015-07-27


All times shown according to UTC.

Time Nick Message
00:28 rmoe joined #fuel
02:52 hakimo joined #fuel
04:57 ximepa joined #fuel
05:05 sbfox joined #fuel
05:28 fedexo joined #fuel
06:20 gongysh joined #fuel
06:33 Samos123 morning
06:33 ub joined #fuel
07:09 monester joined #fuel
07:11 mkwiek joined #fuel
07:33 aliemieshko_ joined #fuel
08:00 hyperbaba joined #fuel
08:07 aarefiev joined #fuel
08:23 NERvOus joined #fuel
08:32 e0ne joined #fuel
08:41 [HeOS] joined #fuel
08:41 tzn joined #fuel
08:55 subscope joined #fuel
10:19 kdavyd_ joined #fuel
10:22 kevinbenton joined #fuel
10:22 nurla joined #fuel
10:32 Zoup joined #fuel
10:57 sergmelikyan joined #fuel
11:25 Tigran joined #fuel
11:26 Tigran Hi, could anyone tell me how to connect to mongodb with "admin" rights to reduce the log level?
11:27 Tigran I made a mistake and installed openstack via fuel with debug logging on... for some components I've disabled it manually now, but I still need to switch it off for mongodb
11:33 neophy joined #fuel
11:50 tzn joined #fuel
11:55 tzn joined #fuel
12:01 tobiash joined #fuel
12:22 evg Tigran: Hi, you can find mongodb passwords in /etc/astute.yaml
13:01 jhova joined #fuel
13:17 prmtl joined #fuel
13:30 warpc__ joined #fuel
13:35 teran joined #fuel
13:40 hyperbaba Hi there, I have a very strange problem on my deployed 5.1.1 stack. Every Monday around 2 o'clock PM (there are no corresponding crons) my rabbitmq fails, resulting in a non-working environment. It has happened 3 weeks in a row. There is no increase in load on the openstack in any form. There are no signs of errors in any of the services. Has anybody had the same experience? The system uses ceph for everything (maybe deep scrubbing) and I don't see
13:40 hyperbaba any problems with ceph (monitored fully with zabbix).
13:41 hyperbaba The only thing I can do, besides restarting the crm rabbitmq service, is rebooting one of the 3 controller nodes, after which services go back to a normal state
13:54 sergmelikyan joined #fuel
13:59 ximepa left #fuel
14:29 Tigran @evg: thanks, this file says username, password and database are "ceilometer",
14:30 Tigran however  mongo -u ceilometer -p ceilometer ceilometer
14:30 Tigran Error: 18 { code: 18, ok: 0.0, errmsg: "auth fails" } at src/mongo/shell/db.js:228
14:31 mwhahaha The password should be random, not ceilometer
14:32 mwhahaha if you don't want to look in the file, 'hiera ceilometer' should return a hash with all the info
14:33 evg Tigran: in astute.yaml see "db_password:" in ceilometer section
14:35 evg Tigran: mongo -u admin -p <this field>
14:35 claflico joined #fuel
14:35 mwhahaha he needs to look in global.yaml
14:36 mwhahaha because the password gets generated in there
14:36 mwhahaha it's better to use 'hiera ceilometer'
14:36 mwhahaha because that's what is actually used to configure it
14:41 neophy joined #fuel
14:48 jaypipes joined #fuel
14:55 Tigran @evg, @mwhahaha: thanks, that worked
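
For reference, the working sequence sketched as shell commands; the setParameter/logLevel call is standard MongoDB rather than something quoted above, and <db_password> stands in for the value from the ceilometer hash:

    # the generated credentials, as actually used to configure mongo
    hiera ceilometer

    # connect as admin and lower mongodb's log verbosity
    # (logLevel 0 is the quietest standard setting)
    mongo admin -u admin -p <db_password> \
      --eval 'db.adminCommand({setParameter: 1, logLevel: 0})'
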
14:58 jobewan joined #fuel
15:01 evg mwhahaha: missed your post. using hiera is the right idea
15:02 mwhahaha it's all good
15:18 dklepikov joined #fuel
15:50 thansen joined #fuel
15:52 goukos joined #fuel
16:05 t_dmitry joined #fuel
16:30 xarses joined #fuel
16:33 ashtokolov joined #fuel
17:21 e0ne joined #fuel
17:31 sergmelikyan joined #fuel
17:49 sergmelikyan joined #fuel
17:49 ashtokolov joined #fuel
18:01 ashtokolov joined #fuel
18:02 ashtokolov joined #fuel
18:03 sergmelikyan joined #fuel
18:07 claflico joined #fuel
18:12 tzn joined #fuel
18:15 Verilium Whoever came up with the name for assassind...  Not sure what the process does, but it better be something nasty.
18:37 Verilium Are there any plans to update https://www.fuel-infra.org/plugins/catalog.html with plugins for 6.1?
18:38 Verilium I'll get them from https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/ in the meantime, but just wondering.
18:38 mwhahaha not sure
18:40 Verilium mwhahaha:  Might have found my issue from the other day with my install that was timing out, GRE and all.  I tried to deploy on 3 hardware nodes for controllers and things worked out fine.
18:41 Verilium mwhahaha:  It's installing again on the 3 VMs I had in ESXi.  Chances are the fact that allowing promiscuous mode on the portgroup was off is what interfered.
18:41 mwhahaha ah yea that'd be a problem
18:42 Verilium Ah, nevermind.  Yeah, confirmed, that was it.  I can ping the IP of the haproxy netns from another node now.
18:43 Verilium I keep forgetting about that darn setting...  It already caused me problems previously when I was deploying vanilla openstack.
18:45 Verilium Too bad the network verification test couldn't detect it, heh.
18:49 sergmelikyan joined #fuel
18:58 xarses Verilium: assassin is the subprocess that iterates over the nodes in the db and checks if they have reported in within the report interval and changes their status to offline if it has been too long
18:58 * Verilium nods.
18:58 Verilium Well, the name fits with the task. ;)
18:58 xarses Verilium: the plugins catalog should update when the plugins publish 6.1 support
18:59 xarses If they have support and were not updated then a) file a bug or b) let me know and I will get it updated
18:59 Verilium xarses:  Oh, but I mean, on the mirantis page, they've been published to 6.1, but just not on fuel-infra.org.  At least for those that are at 6.1.
18:59 xarses then someone forgot to update it =/
19:00 Verilium I'm guessing it's just a question of "synchronizing" both pages.
19:00 xarses more or less
19:01 xarses Verilium: on ESXi if all three security modes are not allowed, you will have problems with the bridges that are configured in the instances
19:01 xarses hence your failed deployment
19:03 Verilium xarses:  Yeah.  Figured that out.  The 2 other modes, in my case, were already allowed.  It was really just promiscuous that defaults to reject that needed to be changed.
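
For reference, a sketch of flipping that setting from the ESXi shell; vSwitch0 is an example name, and a portgroup-level override or a distributed switch would need different commands:

    # allow promiscuous mode on the standard vSwitch carrying the Fuel networks
    esxcli network vswitch standard policy security set \
      --vswitch-name=vSwitch0 --allow-promiscuous=true

    # confirm all three security modes
    esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
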
19:04 Verilium Could this be something that the network test would be able to detect somehow though?  I remember seeing a lot of switching to promiscuous mode in the console of the nodes during the testing.
19:17 ub joined #fuel
19:20 jobewan joined #fuel
19:37 mquin joined #fuel
19:37 xarses Verilium: it's not currently set up to do that. Ideally it would completely build the target network (which it should do anyway), but it isn't capable of that currently.
19:38 xarses alternately, it could create an interface in promisc mode and try to send forged packets, but I'm not sure that would be worth the time vs doing the correct thing: fully setting up the destination network and testing it.
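
A manual stand-in for that check, based on the symptom above: the haproxy VIP lives in a network namespace behind a bridge, so its MAC is not the node's own, which is exactly the traffic a promisc-rejecting vSwitch drops. The namespace name matches what this deployment reports; <vip-address> is a placeholder:

    # on a controller: find the VIP inside the haproxy namespace
    ip netns exec haproxy ip addr

    # from another node: if this fails while node-to-node pings work,
    # suspect the hypervisor's promiscuous-mode/forged-transmit policy
    ping -c 3 <vip-address>
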
19:44 Verilium Well, the deploy just finished successfully.  Definitely was the promiscuous mode.
19:47 Verilium xarses:  Yeah, as this is something that's pretty specific to only a vmware environment, I can understand how it's not much of a priority, heh.
19:48 xarses Honestly, it might just be easier to warn you when you add nodes that are detected as being ESXi instances
19:49 angdraug joined #fuel
19:49 * Verilium nods.
19:50 Verilium I had completely forgotten about the ESX setting and was scrambling around wondering if it was some sort of vlan config somewhere.
19:52 monester joined #fuel
19:59 teran joined #fuel
20:30 jeh joined #fuel
20:32 sergmelikyan joined #fuel
20:35 ashtokolov joined #fuel
21:14 bildz how do you check that the fuel services are running?
21:14 bildz I just installed 6.1, but I can't hit the web service on port 8000
21:14 bildz also fuelmenu just sits there at the shell and never executes.  If I Ctrl-D it, it will exit with python errors
21:15 mwhahaha you need to exit the fuelmenu
21:15 mwhahaha then once the system boots, the webservice takes a while
21:15 mwhahaha ~10-15 mins
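
Two quick ways to watch for that, assuming the stock 6.1 master layout where supervisord drives the containers; 10.20.0.2 is the default master IP, so substitute whatever fuelmenu was given:

    # does the UI answer yet?
    curl -sI http://10.20.0.2:8000/ | head -1

    # container/service status on the master
    supervisorctl status
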
21:15 bildz hmmm
21:15 bildz server has been rebooted and up for an hour
21:15 xarses if you ^D fuelmenu then a lot of stuff will not be built
21:16 bildz well the fuelmenu went through all its stuff, but didn't entirely succeed
21:16 Verilium One way I found to get a quick idea of how far along the installation was on the fuel instance is to check whether the docker images are installed/ready or not.
21:16 xarses bildz: do "dockerctl start"
21:17 xarses it will check and force all of the containers to build/start
21:17 xarses but if you bypassed fuelmenu saving its changes you might not have the desired settings made
21:18 bildz postgres is up
21:18 bildz checking rabbitmq and it's responding with 7
21:20 _tzn joined #fuel
21:20 bildz how many numbers does it try?
21:20 xarses it might take a little bit for it to figure out what needs to be done, just let it go
21:20 bildz and is 7 the correct return code?
21:20 bildz k
21:20 xarses up to 120
21:20 bildz thanks
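
A sketch of that sequence for later reference, assuming dockerctl's check subcommand, which takes a container name or "all" like start does:

    # force all of the containers to build/start, as suggested above
    dockerctl start

    # per-container readiness check; the rabbitmq check is the one that
    # polls with the attempt counter, giving up after 120 tries
    dockerctl check all
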
21:34 Alcest joined #fuel
21:50 mquin joined #fuel
21:54 ntpttr joined #fuel
22:03 sergmelikyan joined #fuel
22:49 sergmelikyan joined #fuel
22:57 ntpttr Hey everyone, I'm curious if you know how to set up Fuel behind a proxy. I'm currently using these steps to set up OpenStack: https://docs.mirantis.com/openstack/fuel/fuel-6.1/virtualbox.html, but the network verification fails and I think it's because I'm behind a work proxy; it succeeds when I use the same steps at home.
22:59 ntpttr Also, another unrelated question - when I do the setup at home and get past the proxy issue, the build fails at the very end after installing Ubuntu and OpenStack on each node, saying it failed to execute the hook "puppet", and in the logs the error looks like a failure to communicate with the NTP servers. Do you know what's causing that/how to get around it?
23:06 ntpttr I did make sure that all of the NTP servers were pingable from each node
23:29 sergmelikyan joined #fuel
23:30 xarses ntpttr: the repos require access, I've detailed the current requirements in https://www.lucidchart.com/invitations/accept/be637f5b-0da6-4bea-a6f6-9b4a9fb790db
23:32 xarses the short answer is, there is no support for proxies in 6.1. You may use the create-mirror script on the fuel node and then update the repos (from the settings page) in the env to use your local mirror
23:33 xarses you can configure a squid service on a node, such as the fuel master to connect to your corporate relay for you and then point the environment settings to use the squid server
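
A minimal squid.conf along those lines; proxy.corp.example, the ports, and the 10.20.0.0/24 admin network are all placeholders to adjust:

    # /etc/squid/squid.conf -- relay everything through the corporate proxy
    http_port 3128
    cache_peer proxy.corp.example parent 3128 0 no-query default
    never_direct allow all

    # only let the deployment networks use this relay
    acl fuel_nets src 10.20.0.0/24
    http_access allow fuel_nets
    http_access deny all
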
23:36 ntpttr Okay thank you for the info. Do you know anything about the nodes not being able to communicate with the NTP servers? That wasn't behind a proxy
23:36 xarses ntpttr: I would need the excerpt from the astute.log around the failure to try to see what the exact problem with NTP is. The usual suspects for the failure are: a) the sync took too long and failed, b) the server isn't accessible, c) the drift was really high and resulted in a non-zero return code, d) the ntp server list was accepted in the wrong format and the CLI is rejecting it
23:36 xarses is the task "ntp-sync"?
23:37 xarses erm "sync_time"
23:37 xarses https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/astute/tasks.yaml#L184
23:38 xarses you can run the command by hand on each of the nodes, if you get $? == 0 from all the nodes, then you should be able to click deploy again and it should work
23:38 ntpttr The error that I'm getting is mentioned in this bug report: https://bugs.launchpad.net/fuel/+bug/1458832, but the workaround mentioned didn't seem to work for me. I just ran the commands mentioned as a "predeployment task" on the master node, but the same error happened
23:41 ntpttr thanks for the tip, I'll try that
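
The by-hand version of that check, sketched with ntp.org standing in for the configured server list (this mirrors the task's intent, not necessarily its exact command line):

    # on each target node: -d steps through the exchange without setting
    # the clock, -t caps how long each query waits
    ntpdate -d -t 4 ntp.org

    # 0 from every node means the deploy can be retried
    echo $?
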
23:43 xarses hmm, you failed on ntp-check then hmm
23:44 xarses ya, I need the debug from astute.log around your failure to see if that looks the same
23:45 xarses you can find it in /var/log/docker-logs/astute/astute.log
23:45 xarses it does contain secrets from time to time, so if you care don't share the whole file
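
To pull just that excerpt rather than sharing the whole file:

    # context around NTP-related lines in the astute log on the master
    grep -n -i -B 5 -A 20 'ntp' /var/log/docker-logs/astute/astute.log | less
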
23:45 ntpttr It is the same error, I'm almost positive; I can't reproduce it right now though since I'm behind work proxies and I ran launch.sh again, which destroyed the VMs that would have the log you want
23:46 ntpttr I know that this error was in there, and the traceback was the same after it: ERROR: Unable to communicate with at least one of NTP server, checked the following host(s): ntp.org on node node-5.domain.tld
23:47 tzn joined #fuel
23:47 ntpttr And the stuff after ERR [525] Error running RPC method granular_deploy: Failed to execute hook 'puppet' was all the same too. If that isn't quite enough though I can always reproduce the error when I get home and come back with the full logs
23:48 xarses I just want to ensure it was the task for ntp-check.pp vs sync_time
23:49 xarses there may also be better messages buried in the debug output
23:49 xarses which the UI filters by default
23:50 ntpttr Okay, I'll be back with that info later then and maybe someone can help me out. Thanks a bunch for your time!
23:53 xarses we may be light on responses for the next 4-8 hours as not a lot of people are active in that slot. I might respond if you poke me to get my attention
