
IRC log for #salt, 2015-12-10


All times shown according to UTC.

Time Nick Message
00:00 mcsaltypants ok, ok. so then would you want to setup the repo in /srv/salt with a githook?
00:00 mcsaltypants also, how would you then use a top file? i put a top file to apply my nginx/init.sls to a node but it's not seen by salt.
00:01 snarfy you could set up a git repo in /srv/salt - but that would be functionally equivalent to using gitfs
00:02 mcsaltypants yeah.
00:02 mcsaltypants and i know the doc recommends a separate repo just for the top files but thats not what I had in mind.
00:03 snarfy yeah that's annoying - instead i use gitfs_env_whitelist
00:03 snarfy and whitelist master and base
00:03 snarfy so that it only looks at states
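For context, the setup snarfy describes would look roughly like this in the master config (the repo URL is hypothetical; the whitelist entries match the branch names mentioned):

```yaml
# /etc/salt/master (sketch)
fileserver_backend:
  - git
gitfs_remotes:
  - https://git.example.com/salt-states.git
# only expose these branches/environments to the fileserver
gitfs_env_whitelist:
  - base
  - master
```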
00:05 mcsaltypants excuse my ignorance (and thank you very much for answering my stupid questions) but by whitelisting master and base, does that enable highstate to utilize the top.sls file for the specified repo then?
00:11 mcsaltypants heh, i just tried it and it does reveal the top.sls
00:11 mcsaltypants thanks @snarfy!
00:13 jaybocc2_ joined #salt
00:14 mcsaltypants well, actually, it does see the file when i do a salt '*' cp.list_master but not when i try to do a salt '*' state.highstate.
00:14 snarfy no prob, bob
00:14 mcsaltypants when i do a highstate i get a comment in the verbose output: "Comment: No Top file or external nodes data matches found."
00:14 snarfy what does your repo look like? is there a top.sls at the top level?
00:15 snarfy also.... the states top file determines what states will apply to a node on highstate.
00:19 mcsaltypants well, i had the top.sls in a subdirectory in the repo. then tried moving it to the top. same effect.
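For reference, a minimal top file at the repo root, assuming the state tree contains the nginx/init.sls mentioned earlier (the environment and target here are placeholders):

```yaml
# top.sls, at the top level of the repo
base:
  '*':
    - nginx
```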
00:23 stooj joined #salt
00:26 mcsaltypants and if i do salt '*' state.show_top i see my test minion
00:30 snarfy eh, time for holiday party
00:30 snarfy sorry. time to go back to the docs ;)
00:31 mcsaltypants np
00:31 mcsaltypants thanks
00:31 oida joined #salt
00:39 abednarik joined #salt
00:42 marekb joined #salt
00:44 tristianc joined #salt
00:44 bl4ckcontact joined #salt
00:45 marekb joined #salt
00:47 CaptainMagnus joined #salt
00:50 marekb joined #salt
00:51 flowstat_ joined #salt
00:51 bl4ckcontact is it possible to run more than one state in a reactor sls formula?
00:52 bl4ckcontact i seem to be having a problem getting a vm to first sync custom grains, then run a highstate after a 'created' tag is sent to the master when using salt-cloud to deploy a new VM
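A sketch of what bl4ckcontact describes: a reactor mapping for the salt-cloud 'created' tag, plus a reactor SLS that syncs grains and then fires a highstate. File paths are assumptions, and reactor jobs run asynchronously, so ordering between the two calls is not guaranteed (which may be exactly the problem described):

```yaml
# /etc/salt/master (sketch)
reactor:
  - 'salt/cloud/*/created':
    - /srv/reactor/new_vm.sls
```

```yaml
{# /srv/reactor/new_vm.sls (sketch); both calls target the new VM by name #}
sync_new_vm_grains:
  local.saltutil.sync_grains:
    - tgt: {{ data['name'] }}

highstate_new_vm:
  local.state.highstate:
    - tgt: {{ data['name'] }}
```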
00:55 SeVenSiXseVeN joined #salt
01:03 Diaoul joined #salt
01:04 tristianc joined #salt
01:11 RobertChen117 joined #salt
01:18 dendazen joined #salt
01:23 justanotheruser joined #salt
01:23 inad922 joined #salt
01:23 Darkman802 joined #salt
01:24 TyrfingMjolnir joined #salt
01:26 akhter joined #salt
01:35 evle joined #salt
01:48 subsignal joined #salt
01:48 brianfeister joined #salt
01:49 TyrfingMjolnir joined #salt
01:50 dendazen joined #salt
01:50 cprecioso joined #salt
01:52 cprecioso left #salt
01:52 flowstat_ joined #salt
01:54 bVector_ joined #salt
01:55 TyrfingMjolnir joined #salt
01:55 lkannan joined #salt
01:55 trave joined #salt
01:56 goki joined #salt
01:56 eagles0513875_ joined #salt
01:56 nethershaw joined #salt
02:01 murrdoc joined #salt
02:03 shaggy_surfer joined #salt
02:05 Diaoul joined #salt
02:06 catpigger joined #salt
02:07 murrdoc joined #salt
02:11 RobertChen117 joined #salt
02:14 TyrfingMjolnir joined #salt
02:17 oida joined #salt
02:20 iggy should work
02:21 burp_ joined #salt
02:23 nethershaw joined #salt
02:27 micxjo joined #salt
02:30 baweaver joined #salt
02:33 jaybocc2 joined #salt
02:40 ageorgop joined #salt
02:47 ilbot3 joined #salt
02:47 Topic for #salt is now Welcome to #salt | 2015.8.1 is the latest | Please use https://gist.github.com for code, don't paste directly into the channel | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
02:50 flowstat_ joined #salt
02:55 bhosmer_ joined #salt
02:56 jaybocc2 joined #salt
03:00 favadi joined #salt
03:00 racooper joined #salt
03:03 nafg_ joined #salt
03:08 burp_ joined #salt
03:08 baweaver joined #salt
03:11 nethershaw joined #salt
03:12 falenn joined #salt
03:19 evle joined #salt
03:19 larsfronius joined #salt
03:25 nethershaw joined #salt
03:29 falenn joined #salt
03:29 pcn Does anyone have a method to make salt minions pause, e.g. not run while a deploy is pushed to the master (e.g. git-updating a bunch of different repos; while that's in progress, don't let minions start a run)?
03:31 saltstackbot [reddit-saltstack] Web Frontend https://www.reddit.com/r/saltstack/comments/3w64cx/web_frontend/ - 2015-12-10 - 03:29:03
03:37 brianfeister joined #salt
03:39 favadi joined #salt
03:49 goldbuick__ joined #salt
03:51 clintberry joined #salt
03:52 flowstat_ joined #salt
03:53 TyrfingMjolnir joined #salt
04:00 brianfeister joined #salt
04:02 cyborg-one joined #salt
04:06 clintber_ joined #salt
04:07 iggy about to do some bot maintenance, sorry for any noise
04:11 mosen all good. It's all noise
04:11 kshlm joined #salt
04:20 zsoftich joined #salt
04:22 falenn joined #salt
04:23 ramteid joined #salt
04:24 RobertChen117 joined #salt
04:30 crazyphil joined #salt
04:37 jaybocc2 joined #salt
04:40 nethershaw joined #salt
04:41 oida joined #salt
04:42 bl4ckcontact joined #salt
04:46 Rumbles joined #salt
04:50 Sucks joined #salt
04:51 flowstat_ joined #salt
04:51 jaybocc2 joined #salt
04:53 BlackAle_ joined #salt
04:55 BlackAle__ joined #salt
04:58 RobertChen117 joined #salt
04:59 andrew_ joined #salt
04:59 RobertChen117 joined #salt
05:00 cyborglone joined #salt
05:07 tedbot joined #salt
05:08 tedbot left #salt
05:12 oida joined #salt
05:14 zmalone joined #salt
05:16 dstokes joined #salt
05:22 anmolb joined #salt
05:22 lompik joined #salt
05:33 burp_ joined #salt
05:33 colegatron joined #salt
05:35 falenn joined #salt
05:40 oida joined #salt
05:41 solidsnack joined #salt
05:42 akhter joined #salt
05:42 nethershaw joined #salt
05:43 rotbeard joined #salt
05:51 favadi joined #salt
05:58 oida joined #salt
06:02 solidsnack joined #salt
06:04 clintberry joined #salt
06:04 RobertChen117 joined #salt
06:09 kshlm joined #salt
06:10 jhauser joined #salt
06:19 oravirt joined #salt
06:21 larsfronius joined #salt
06:27 oravirt joined #salt
06:27 oida joined #salt
06:30 oravirt joined #salt
06:37 oravirt joined #salt
06:38 oravirt joined #salt
06:39 favadi joined #salt
06:40 oravirt joined #salt
06:42 oravirt joined #salt
06:45 jimklo joined #salt
06:47 solidsnack joined #salt
06:48 Sucks joined #salt
06:50 flowstat_ joined #salt
06:51 lesternygard joined #salt
06:53 martoss joined #salt
06:53 martoss left #salt
06:53 jaybocc2 joined #salt
06:57 oravirt joined #salt
07:00 lemur joined #salt
07:01 bhosmer_ joined #salt
07:02 calvinh joined #salt
07:05 clintberry joined #salt
07:06 baweaver_ joined #salt
07:10 oravirt joined #salt
07:11 keimlink joined #salt
07:13 hojgaard joined #salt
07:14 oravirt joined #salt
07:15 oravirt joined #salt
07:16 AndreasLutro does anyone have any strategies for letting a salt master (or maybe a group of masters) manage it/themselves?
07:23 sjorge joined #salt
07:23 sjorge joined #salt
07:28 g3cko joined #salt
07:30 nafg joined #salt
07:34 oida joined #salt
07:35 shiin joined #salt
07:39 KermitTheFragger joined #salt
07:52 Beelze joined #salt
07:52 flowstat_ joined #salt
07:59 elsmo joined #salt
08:03 jamesp9 joined #salt
08:05 TTimo joined #salt
08:05 jettero joined #salt
08:05 jettero joined #salt
08:06 nicksloan joined #salt
08:09 hemebond AndreasLutro: salt-formula?
08:14 THE_BOULDER joined #salt
08:15 job my pull request is not getting 5 out of 5 stars because this downstream thing fails https://jenkins.saltstack.com/job/salt-pr-rs-cent6-n/670/console
08:18 MTecknology Is there anything in existence for applying CIS guidelines to a server?
08:18 monokrome joined #salt
08:19 eseyman joined #salt
08:20 Norrland MTecknology: CIS guidelines?
08:20 MTecknology cisecurity.org
08:21 Norrland MTecknology: and what are you trying to achieve?
08:21 MTecknology Norrland: have you ever read their guidelines?
08:21 Norrland MTecknology: no. Not really.
08:24 Norrland MTecknology: I guess everything is possible if you have imagination :)
08:24 AndreasLutro hemebond: no, I mean like - does the salt master also have a minion running? does it talk to itself? do masters treat other masters as minions?
08:24 jamesp9 joined #salt
08:28 zerthimon joined #salt
08:32 MTecknology AndreasLutro: A salt master /can/, and usually does, also run a minion, and the master will control that minion just like any other minion. You /can/ run only a salt master on a server, but then you're not using salt to manage that system.
08:32 oravirt joined #salt
08:33 pingwhitepong joined #salt
08:35 pingwhitepong left #salt
08:35 MTecknology AndreasLutro: I deployed my salt master (first server in the environment). Then I used /srv/salt/{states,data,pillar,top,reactor}/ for the salt master to read from. Then I used salt to do a 100% automated installation of my git server (second in the environment). Then I moved from file system salt stuff to git repos that the salt master read from instead. Then I set up the reactor so that pushes
08:35 MTecknology to the git repos triggered highstate executions.
08:35 yomilk joined #salt
08:36 MTecknology Then I made sure that 100% of 100% of anything and everything done in the environment was done only through pushes to git
08:38 AndreasLutro got no problems setting all that up, just wondering how people were doing management of the salt masters. if you say salt master and minion runs on the same server that answers my question, cheers
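A minimal sketch of the pattern described above (a minion running on the master host, pointed at itself; the minion id is a placeholder):

```yaml
# /etc/salt/minion on the master host (sketch)
master: 127.0.0.1
id: saltmaster01
```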
08:41 MTecknology AndreasLutro: server three was a logging server, fourth was a backup server, then a wlc for my wireless APs, then a minecraft server, and then a development server (building debian packages mostly), and finally a few other fun boxes
08:44 bl4ckcontact joined #salt
08:45 dgutu joined #salt
08:46 BlackAle_ joined #salt
08:47 pingwhitepong joined #salt
08:51 GreatSnoopy joined #salt
08:53 flowstat_ joined #salt
08:53 felskrone joined #salt
08:54 tyler-baker joined #salt
08:58 kbaikov joined #salt
09:02 BlackAle_ joined #salt
09:04 yomilk joined #salt
09:07 DanyC joined #salt
09:07 clintberry joined #salt
09:10 DanyC Hi all, question: does anyone know if 3rd party clients can hook into / push notifications/events via the Salt bus, the same way minions send reactor events back to the master?
09:11 DanyC basically i want a kind of self-healing solution where triggers (events) are created which kick off actions via the Salt bus... same principle as minions <> reactors
09:11 oravirt joined #salt
09:12 oravirt joined #salt
09:12 rofl____ if schedule in my minion config doesnt work, which library do i lack?
09:12 acsir joined #salt
09:12 rofl____ python-dateutil is installed
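For reference, simple interval schedules in the minion config shouldn't need extra libraries (python-dateutil is, as far as I know, only required for `when`-style entries); a minimal sketch:

```yaml
# /etc/salt/minion (sketch): run a highstate every 60 minutes
schedule:
  highstate:
    function: state.highstate
    minutes: 60
```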
09:12 BlackAle__ joined #salt
09:14 bl4ckcontact joined #salt
09:14 chiui joined #salt
09:15 Rumbles joined #salt
09:15 s_kunk joined #salt
09:15 s_kunk joined #salt
09:16 oravirt joined #salt
09:16 DanyC different q - does anyone know if the salt-api is going to be replaced with s'thing else soon?
09:20 teryx510 joined #salt
09:22 keimlink joined #salt
09:23 larsfronius joined #salt
09:25 linjan joined #salt
09:36 slav0nic joined #salt
09:36 stevej joined #salt
09:39 larsfronius joined #salt
09:39 av_ joined #salt
09:40 thalleralexander joined #salt
09:41 iggy DanyC: no
09:41 oida joined #salt
09:41 larsfron_ joined #salt
09:41 DanyC iggy: isn't Saltnado the case ?
09:41 aberdine 'ning. Is there a mechanism to write unit tests for states, similar to what Puppet does with rspec?
09:42 DanyC iggy: https://github.com/saltstack/salt/issues/26505
09:42 saltstackbot [#26505]title: Bring Saltnado up to feature parity with rest_cherrypy | [As has been discussed](https://github.com/saltstack/salt/issues/13698#issuecomment-94056727) now that Tornado is a dep for Salt core it makes sense to move development effort on a REST API to Saltnado. We do not yet have a timeline for this work but we cannot deprecate `rest_cherrypy` until Saltnado has feature parity and identical interfaces so this issue will serve as a place to tr
09:42 iggy DanyC: saltnado is just a means of serving the salt-api
09:42 malinoff joined #salt
09:43 shiin joined #salt
09:44 DanyC iggy: right, i was wrong on that bit. so only the underlying framework on which Salt-api stands on is going to be changed
09:45 iggy that ticket would seem to indicate that, I haven't heard anything about it though (the tornado dep is relatively new)
09:47 DanyC iggy: indeed is new, might take some time till it makes substantial progress
09:49 iggy check the saltpad code/issues if you want to see some of the differences between cherrypy and saltnado
09:50 iggy they are actually fairly minimal from what I've seen
09:50 DanyC iggy: you mean https://github.com/tinyclues/saltpad/ ?
09:51 iggy si
09:51 DanyC iggy: i thought saltnado is based on tornado ...maybe i'm confusing things here
09:52 flowstat_ joined #salt
09:53 iggy it is... there are actually 3 different netapi "backends" they all have small differences to each other
09:54 DanyC iggy: oh boy, get more fingers into the pie :)
09:54 iggy I was pointing out that saltpad can actually talk to all 3, so it's a decent example of small difference you might run into (i.e. if you were trying to use saltnado, but all the docs/code/examples are written for cherrypy)
09:54 Huxley joined #salt
09:54 Huxley Is there any project anyone here knows of, that can assist in converting an existing system into Salt configs?
09:54 Hydrosine joined #salt
09:54 DanyC iggy: right, thanks a bunch !
09:55 malinoff Huxley: http://devstructure.com/blueprint/ not exactly salt configs, but something useful
09:57 Huxley malinoff: Hey that's cool! Thanks
09:58 Huxley I would think that someone would have started something like that for Salt
09:58 Huxley It would be quite useful
09:58 malinoff Huxley: not really
09:58 Huxley Why not?
09:59 iggy the formulas are probably the closest thing... just write some pillar data for each one and off you go
09:59 malinoff Huxley: because you'll have to put much effort into making it actually return adequate information
09:59 malinoff Huxley: just because systems are hard, and consist of many moving components
10:00 iggy and there is more than one way to do... almost everything
10:00 malinoff Huxley: you'd better describe your systems and write scripts from scratch, test them and verify
10:00 Huxley malinoff: Well just to take some of my existing configuration files and determine some of the things automatically, I can't imagine why that wouldn't be a helpful start?
10:00 Huxley iggy: True :-)
10:01 malinoff Huxley: because such tool will generate a lot of garbage
10:01 malinoff which you'll need to filter
10:02 Huxley Of course there would be pieces of garbage - I'm just not understanding why it would be so terrible
10:02 aberdine seems like a good starting point to me
10:02 iggy too hard
10:02 aberdine at least when trying to understand a system
10:02 moderation joined #salt
10:02 Huxley yeah exactly
10:02 Huxley hmm
10:03 aberdine it also allows for more broad brush testing and quicker dev/test turn around
10:03 aberdine after hoovering up the changes it can then be used as the basis for a more refined automation
10:04 iggy nobody is saying you can't do it
10:04 Huxley precisely!
10:04 iggy but nobody has done anything like that in the past
10:04 Grokzen joined #salt
10:04 Huxley k
10:04 Huxley well that is all I was really asking :) so thank you
10:04 The joined #salt
10:04 aberdine etckeeper is another useful tool, but requires more set up
10:05 Huxley yeah I've looked at that before in the past, but I haven't used it a whole lot - I had a similar setup with some scripts and a repo that did something similar
10:06 The_Loeki joined #salt
10:06 iggy one problem is, most people don't go from cobbled together systems to salt in one step
10:07 iggy they've usually got some intermediate step of scripts/scm/etc cobbled together
10:07 iggy so it's fairly easy to go from something like that to salt
10:08 aberdine yeah, I'm not advocating a one step dump to salt and run - that'd be.... fraught
10:09 Huxley yeah I'm not wanting that either :)
10:09 cyteen joined #salt
10:10 Huxley sort of like a tool that generates a 'proposed' salt configuration from an existing server
10:10 aberdine I think we all _want_ it, but know it's not ideal :)
10:10 akhter joined #salt
10:10 totzky joined #salt
10:10 Huxley so you can test and deploy it elsewhere and test and tweak the configs until its what you want
10:10 Huxley aberdine: :-)
10:10 aberdine anything that gets the iteration time down is good by me
10:10 Huxley yeah me too
10:11 aberdine hence my question about unit tests
10:11 aberdine but I'm getting the feeling there isn't a way to do offline tests of salt states
10:11 Huxley hmm, I certainly don't have enough experience with salt yet to know
10:11 aberdine Which is a real shame, as it's been a saving grace for us with Puppet
10:12 aberdine (and the only thing keeping me using it :) )
10:13 Huxley :)
10:14 N-Mi_ joined #salt
10:14 Segfault_ joined #salt
10:14 Huxley I'm sure a comparable solution will present itself soon
10:15 Segfault_ I'm having a small problem.. Is there a way to update salt-minion during a state.highstate without crashing? I have a salt-minion formula that keeps salt-minion up to date, but it crashes every time the salt-minion is updated
10:18 writteno1 joined #salt
10:19 amcorreia joined #salt
10:29 illern_ joined #salt
10:32 Joren joined #salt
10:32 yomilk joined #salt
10:34 The_Loeki @aberdine; just busting in of course, but depending on what you mean by 'offline tests': you can *always* do state.sls <bla> test=True; not offline, but a dry-run
10:34 yomilk joined #salt
10:35 aberdine The_Loeki it's more unit type testing I'm after - making sure that given the same inputs the code produces the same state output
10:35 aberdine something like rspec kind of thing
10:36 aberdine and automatable
10:44 giantlock joined #salt
10:46 oida joined #salt
10:47 inad922 joined #salt
10:48 deus_ex joined #salt
10:48 The_Loeki @aberdine: no such thing AFAIK. There's however plenty of tools in the system to create something like it; e.g. giving pillar data to states, instantiating minions with given pillar and grain data etc.
10:49 The_Loeki other interesting issues https://github.com/saltstack/salt/issues/4352 & https://github.com/saltstack/salt/issues/802, but that's more for schema-kind of validation of data
10:49 saltstackbot [#802]title: Syntax checking | When a salt state contains YAML syntax errors, it is quite hard to find these errors at the moment. It would be great to have a basic syntax check (validation agains a schema) and/or references check (e.g. do the references given in require-statements really exist).
10:49 The_Loeki lol
10:50 aberdine That saved me a click :)
10:52 flowstat_ joined #salt
10:58 Hell_Fire joined #salt
11:06 shiriru joined #salt
11:14 fredvd joined #salt
11:26 oida joined #salt
11:33 colegatron joined #salt
11:35 Hell_Fire When using jinja in SLS, is there any context or state info that is settable across all SLS files? (I'm using a jinja import for a macro that needs to include a state common between calls, but if the package: pkg.installed section is set up twice, salt dies with a duplicate SLS ID. I'm trying something like {% if package_included is not defined %}{% set package_included = True %})
11:37 Hell_Fire I've not found an elegant way to do this yet, SLS level includes don't seem to include macros, I guess I could do include: - package {% from 'package.sls' import macro %}
11:38 Hell_Fire and just double up the include line
11:38 Hell_Fire (one for jinja, one for SLS)
11:39 favadi joined #salt
11:47 abednarik joined #salt
11:50 jaybocc2 joined #salt
12:00 oida joined #salt
12:08 yomilk joined #salt
12:15 tuxx hey guys... i have a state which sets a proxy for apt-get ... unfortunately, the state is the last state which is executed
12:15 tuxx i would like my proxy adding state to be done before performing any other state transitions...
12:15 tuxx i dont know how i could solve that without adding a require: statement to each and every pkg.installed
12:15 tuxx any advice?
12:17 AndreasLutro tuxx: add to the state: - order: 0
12:18 tuxx thanks
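A sketch of tuxx's proxy state with an explicit early order (the proxy URL and file path are assumptions):

```yaml
# ensure the apt proxy is configured before any pkg.installed states run
apt_proxy:
  file.managed:
    - name: /etc/apt/apt.conf.d/01proxy
    - contents: 'Acquire::http::Proxy "http://proxy.example.com:3128";'
    - order: 1
```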
12:19 oravirt joined #salt
12:21 oravirt joined #salt
12:22 tuxx AndreasLutro: what would happen if i had two states both with order: 0?
12:25 favadi joined #salt
12:25 babilen *fight*
12:25 babilen tuxx: They are executed in the order in which they have been defined unless other requisites change that order
12:27 denys joined #salt
12:30 AndreasLutro ^
12:36 AlberTUX1 joined #salt
12:39 jaybocc2 joined #salt
12:39 AlberTUX2 joined #salt
12:40 favadi joined #salt
12:45 erjohnso joined #salt
12:45 yomilk joined #salt
12:49 shiin joined #salt
12:53 dynamicudpate joined #salt
12:54 tpaul joined #salt
12:54 Huxley_ joined #salt
12:54 mrtrosen_ joined #salt
12:56 drags joined #salt
12:57 tercenya joined #salt
12:57 indispeq joined #salt
12:57 borgstrom joined #salt
12:59 sirchtophe joined #salt
13:00 flowstat_ joined #salt
13:00 mapu joined #salt
13:00 cliluw joined #salt
13:11 giantlock joined #salt
13:13 akhter joined #salt
13:14 shiriru joined #salt
13:15 Mandorath joined #salt
13:19 Mandorath Hi guys, i have a salt setup where i pretty much set up a machine to the point where the products have to be installed and configured, for the company where i'm doing an internship. Now i want salt to run the configure step, which is pretty much a bash script, but i need to set some environment variables or the script will fail. Is there an easy way for salt to pass variables from a state into the linux environment?
13:20 Mandorath I do want salt to do it so i can define the variables in a pillar and reuse them later on.
13:20 AndreasLutro Mandorath: cmd.run has an "env" argument
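A sketch of what AndreasLutro means, with the values pulled from pillar as Mandorath wanted (the script path and pillar keys are hypothetical):

```yaml
run_product_configure:
  cmd.run:
    - name: /opt/product/configure.sh
    - env:
      - PRODUCT_HOME: {{ salt['pillar.get']('product:home', '/opt/product') }}
      - LICENSE_KEY: {{ salt['pillar.get']('product:license_key') }}
```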
13:22 tpaul Mandorath: By configure, do you mean like an autotools style configure script to build software?
13:23 Mandorath tpaul: Yes
13:24 quasiben joined #salt
13:24 tpaul Not that it's any of my concern, but it seems odd that this company wants to use configuration management such as salt, but not build binary packages to deploy.
13:25 Hell_Fire Yeh, I'd be building an artifact package up in like a tarball, and use salt to manage the deploy
13:26 tpaul Mandorath: If you want a slight edge over the other interns (if there are any others) you might consider packaging this software (rpm, deb, pkgsrc, etc), then deploy the package with salt
13:26 slav0nic can i use service.running for restart service without watch (or force it every time when state call)?
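One way to force a restart on every run (slav0nic's question goes unanswered in-channel; this is a sketch using module.run, with the service name a placeholder, and m_name being the workaround for the name-argument clash in the old module.run syntax):

```yaml
restart_myservice:
  module.run:
    - name: service.restart
    - m_name: myservice
```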
13:28 Mandorath tpaul: They will, but i have to set up a PoC to show them Salt can do X stuff. I would like to, but it's a lot of software and not a lot of time to do it in. I can choose to rebuild everything, but if i cannot make it work on the first or maybe the second go i have nothing to show.
13:28 jY left #salt
13:28 jY joined #salt
13:28 flowstat_ joined #salt
13:29 Mandorath I guess i could also put the variables in a template, put it on the machine, then give execute permissions and just run it. (with salt of course)
13:29 TyrfingMjolnir joined #salt
13:30 favadi joined #salt
13:30 tpaul Mandorath: That is another approach, similiar to Hell_Fire's suggestion and I understand your situation (PoC)
13:31 JD joined #salt
13:31 Mandorath Also i already stripped the os setup by about .... a bunch of files (estimating over 100 files) and reduced the machine setup from about 30-40 minutes to 10 to 15 minutes using salt and foreman.
13:31 Rumbles If I had 3 different manifests, each one had a file.managed in a certain folder, and I used clean: True for all of them, and applied all of those manifests to one host. Would they respect the files from the other manifests, or would the last one remove the files from the other manifests?
13:32 tpaul Mandorath: That's what salt is good at! What OS are you using, out of curiosity?
13:33 dendazen joined #salt
13:33 Mandorath centos
13:33 Mandorath tpaul: i also have to do a basic setup for Windows machines but most of the machines are CentOS
13:34 _JZ_ joined #salt
13:34 illern joined #salt
13:39 JDiPierro joined #salt
13:47 flowstat_ morning all
13:48 spiette joined #salt
13:49 abednarik joined #salt
13:49 nafg_ joined #salt
13:49 DammitJim joined #salt
13:51 justanotheruser joined #salt
13:52 ashb joined #salt
13:52 oida joined #salt
13:54 xenoxaos joined #salt
13:54 subsignal joined #salt
13:55 gazarsgo joined #salt
13:56 is_null joined #salt
13:56 is_null hi all, how to know, from the minion, that my minion has been accepted by the master?
13:57 is_null i've been doing salt-call test.ping but it turns out that it may pass even if the minion was not accepted, salt-call is not using the minion daemon
13:58 Fabbe is_null: salt-key -L  ?
13:58 is_null Fabbe: that command is to be run on the master, not on the minion
13:58 m0nky joined #salt
13:58 Fabbe is_null: Yes.
13:59 pcn is_null wouldn't pillar.items require the master to participate?
13:59 is_null nice, thanks
14:00 nlb joined #salt
14:00 bharper joined #salt
14:00 zz_Cidan joined #salt
14:00 Cidan joined #salt
14:02 is_null pcn: right, so salt-call pillar.items works but also bypasses the salt-minion daemon, so salt-call pillar.items works even though salt-minion is still not ready
14:04 is_null cause i'm being mistreated by a race condition :)
14:04 larsfronius joined #salt
14:04 is_null so i'd really like to know when the salt-minion daemon is really ready **from the minion machine**
14:04 BogdanR joined #salt
14:06 flowstat_ what's the race condition you're working around?
14:06 is_null i don't want my bootstrap script to exit before the minion daemon is actually connected
14:07 flowstat_ yeah, I feel you there, it's why I went masterless. Could you employ a different strategy for accepting minion keys to avoid this altogether?
14:07 wych joined #salt
14:07 is_null ok so i'll just make reactor touch a file on the minion on presence then
14:07 pkimber joined #salt
14:07 flowstat_ yeah, that could work too
14:08 is_null flowstat_: oh yeah i could, but i'm trying to gain control over my infra :)
14:08 flowstat_ I completely understand, haha
14:08 is_null hhehehe
14:09 oida joined #salt
14:09 JDiPierro joined #salt
14:10 flowstat_ I'm in the position of being the first dev ops guy at a startup after 2 years of devs making the best choice they could, but weighing expediency over everything. So, I really do understand
14:12 xmj haha
14:12 xmj flowstat_: i've done devops (and consulting) for a startup or two whose deployment practices were.. funny :)
14:13 flowstat_ yeah, like I said, these guys (and ladies) are sharp, but ... let's just say they weren't focused on ops
14:13 flowstat_ first and foremost
14:13 xmj always good fun
14:13 flowstat_ oh, and I am actually a dev who just started devops like 6 months ago
14:13 flowstat_ (it's so much better)
14:13 flowstat_ anyways, enough derailment.
14:14 flowstat_ I'm trying to wrap my head around Ryan Lane's AWS workflow
14:14 Rumbles joined #salt
14:14 flowstat_ and what piece to incorporate first
14:15 JD same here.. company been on aws for years.. def not startup
14:15 JD but... maintaining fleet like classic IaaS VMWare stack
14:17 faeroe joined #salt
14:18 viq joined #salt
14:21 flowstat_ ugh
14:22 flowstat_ yeah, we have one VPC, a TON of hardcoded IP addresses to our individual (non-ASG) EC2 instances
14:22 flowstat_ no naming
14:22 clintberry joined #salt
14:22 flowstat_ using a hodgepodge of raw ruby on jenkins, opsworks chef, and even some puppet to deploy stuff
14:22 flowstat_ sometimes with baked AMIs (all amazon linux, of course), some using base images
14:23 flowstat_ but it's okay, my job is really clear: "fix it."
14:23 bhosmer joined #salt
14:24 oida joined #salt
14:28 goldbuick__ YOLO
14:28 Tanta joined #salt
14:29 SunPowered joined #salt
14:30 HyperHorse joined #salt
14:30 malinoff flowstat_: I have a project where you need ruby, nodejs, chef and capistrano to deploy a project written in go
14:30 HyperHorse what is salt?
14:30 flowstat_ holy crap! you win.
14:30 malinoff I always win :(
14:31 job HyperHorse, https://github.com/saltstack/salt
14:31 cpowell joined #salt
14:31 HyperHorse i see.
14:31 tristianc_ joined #salt
14:33 HyperHorse what sort of infrastructure is this software for?
14:33 HyperHorse trains?
14:33 HyperHorse traffic management systems?
14:33 mortis could be
14:33 job operating systems
14:33 HyperHorse power management?
14:34 mortis sure why not
14:34 flowstat_ ah yes, it's a CMS for TMS
14:34 job if you have 10 webservers, and all of them need to have the same apache configuration (or almost the same), you can use salt to ensure all 10 of them are in sync
14:34 HyperHorse how about we use it to lock unpleasant politicians out of society?
14:34 flowstat_ also, can we please pick a less-overloaded acronym?
14:34 morissette joined #salt
14:35 HyperHorse and which acronym might that be?
14:35 HyperHorse lol
14:35 flowstat_ CMS: configuration management system
14:35 flowstat_ also content management system
14:35 flowstat_ and apparently a convention on the Conservation of Migratory Species
14:36 huddy joined #salt
14:36 HyperHorse collection of military statistics? :-P
14:37 xmj also s/system/solution
14:37 mpanetta_ joined #salt
14:37 flowstat_ yep.
14:37 JD compensation maintenance system
14:38 is_null flowstat_: this is the only safe way: http://dpaste.com/2Y8D7E4
14:38 HyperHorse left #salt
14:38 flowstat_ interesting
14:38 flowstat_ there should really be a module function for that
14:39 flowstat_ I can't imagine you're the first person to need to reliably get this information
14:39 xmj "patches accepted"
14:39 flowstat_ yep
14:39 flowstat_ I just wish I was at that point. I'm still trying to put together a plan for a full workflow
14:40 flowstat_ how to get from "I have it working in vagrant locally" to "I have it working in multiple environemtns"
14:40 flowstat_ s/environemtns/environments
14:40 xmj simple, you just scale up vertically and then branch out until you reach the horizontal
14:40 flowstat_ hahahaha
14:41 xmj You're welcome.
14:41 flowstat_ Thanks Jeff Goldblum from Independence Day
14:41 flowstat_ or perhaps Jurassic Park
14:41 AndreasLutro if you can destroy all your vagrant instances and get back to the state you were in less than 5 shell commands I think you can be pretty confident
14:41 AndreasLutro assuming one of your vagrant instances is a salt master
14:41 flowstat_ I'm at that point now
14:42 Trauma joined #salt
14:42 xmj that's just what i said :p
14:42 AndreasLutro then the sky is the limit!
14:42 flowstat_ but then I need to pillarize / templatize my environment variables, then figure out how to deploy them, and hook up the minions to the masters in different environments, and bootstrap them
14:43 flowstat_ where 'them' = my EC2 instances
14:43 flowstat_ preferably with a strong naming convention and sec groups, IAM roles, ASGs, etc etc
14:43 flowstat_ it's a big chunk
14:44 AndreasLutro we're going to be using terraform + cloud-init to spawn and bootstrap instances, install salt minion and make it connect to the master
14:44 AndreasLutro security groups IAM etc is not my domain though :)
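For reference, cloud-init has a salt_minion module that can do the bootstrap AndreasLutro describes; a minimal user-data sketch (the master hostname is a placeholder):

```yaml
#cloud-config
salt_minion:
  conf:
    master: salt.example.com
```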
14:46 numkem joined #salt
14:46 is_null xmj: i would love to contribute but we're so much at WAR getting our salt stuff to a properly tested, working, full CI/CD pipeline ... maybe next year i hope ;)
14:47 xmj mhm
14:50 jaybocc2 joined #salt
14:51 oida joined #salt
14:52 winsalt joined #salt
14:53 Sucks joined #salt
14:54 abednarik joined #salt
14:54 geekatcmu Is there a known bug with salt occasionally crossing the streams on managed, templated files?
14:55 geekatcmu I just had to fix 7 (out of 43) hosts where the managed krb5.conf had really consistent garbage in it.
14:55 geekatcmu It looked a lot like some of the data from the firewall rules (also managed) had gotten mixed in there.
14:56 evle1 joined #salt
14:57 zmalone joined #salt
14:58 is_null xmj: what, you have your overlords under an automatic CD pipeline with tests and CI ? then you did quite some work !!
14:58 is_null cause even using salt-formula there's quite some work :D
14:58 xmj is_null: i didn't do nothing
14:59 xmj you confuse me for someone who does actual work
15:00 oznah joined #salt
15:00 timoguin joined #salt
15:00 favadi joined #salt
15:00 dendazen joined #salt
15:02 AndreasLutro I read a decent article about test pipelines for provisioning recently
15:02 AndreasLutro https://www.amon.cx/blog/tdd-caps-provisioning-with-docker/
15:03 oznah I'm experiencing a 25 sec delay when running salt-call on some minions. Others run immediately. I recently 12/7 upgraded to 2015.8.3 on the master & minions. Any ideas? Is there some cruft I need to clean up on the slow minions?
15:04 perfectsine joined #salt
15:06 jaybocc2 joined #salt
15:06 bhosmer_ joined #salt
15:06 is_null xmj: when you commit something into your saltstack states repo, there's got to be something updating the code for salt-master
15:07 scoates joined #salt
15:08 Brew joined #salt
15:13 Sucks joined #salt
15:16 perfectsine_ joined #salt
15:16 dyasny joined #salt
15:16 flowstat_ joined #salt
15:16 hasues joined #salt
15:17 hasues left #salt
15:18 tarbar joined #salt
15:20 tarbar hello, I would like to use consul as an external data source for pillar, but I see that is not available until 2015.8.0, and I see I have 2015.5.3 on Ubuntu 14.04 LTS. Is there a PPA I should configure so I can access 2015.8.0 from 14.04?
15:20 illern joined #salt
15:21 UForgotten joined #salt
15:21 job deb  http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main
15:21 job there is no ppa
15:22 job you need this key to validate the sigs https://repo.saltstack.com/apt/ubuntu/12.04/amd64/latest/SALTSTACK-GPG-KEY.pub
15:22 tarbar ah, look at that, I have been on `deb http://ppa.launchpad.net/saltstack/salt/ubuntu trusty main`
15:22 rotbeard joined #salt
15:23 job that does not contain the version you want
15:24 job that has 2015.5.3+ds-1trusty1
15:24 tarbar yes, which deb url does the project consider to be the official / channel for stable?
15:24 job i use this
15:24 job deb  http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main
15:24 zmalone repo.saltstack.com is where current releases now live, the other repos were abandoned
15:24 job in /etc/apt/sources.list.d/salt.conf
15:24 job uhm
15:24 job in /etc/apt/sources.list.d/salt.list
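Putting job's lines together, the switch from the old PPA looks like this. It is staged in a scratch dir so the sketch is runnable as-is; on a real box the file is /etc/apt/sources.list.d/salt.list and the key/install steps need root:

```shell
set -e
dir=$(mktemp -d)   # stand-in for /etc/apt/sources.list.d
echo 'deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main' \
  > "$dir/salt.list"
cat "$dir/salt.list"
# on the real box, then trust the signing key and install:
#   wget -qO- https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
#   sudo apt-get update && sudo apt-get install salt-minion
```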
15:24 zmalone there are still issues with repo.saltstack.com packaging though
15:24 is_null AndreasLutro: thanks ;) i myself published a few articles about salt and tdd ;)
15:25 job zmalone, what kind of issues
15:25 tarbar zmalone: thanks for the confirmation
15:26 zmalone On some platforms (rhel), the zeromq dependencies that are provided have been moved to archive/ , and it causes issues for some people.  I'm not on RHEL, but the issues are on the github issue tracker.
15:27 tarbar good to know
15:27 tarbar I'll see how it fares in QA first
15:27 zmalone On Ubuntu, some dependencies that the OS provides at too old of a version are not in the repo, so it throws errors unless you manually install the deps at the right version from pip etc.
15:27 N-Mi_ I have configuration files that I don't want to generate using templates, and which are different for minions. What is the standard way and pathes to store them on the master and then push them on all minions ?
15:27 zmalone Fedora and some old Debian packages have been abandoned altogether for the time being
15:27 _mel_ joined #salt
15:27 zmalone so you can't install on some Deb platforms or any Fedora platform from repo.saltstack.com
15:28 tarbar ouch
15:28 tmclaugh[work] joined #salt
15:28 zmalone Do you mean your QA?  This is the official Salt repo now, and these are the official releases that are post-QA.
15:28 N-Mi_ should i put them a path like this : /srv/salt/myfiles/minionXX/etc/configfile.ini  ?
15:29 tarbar zmalone: yes, I mean that I'll put the "use of this dep repo" through QA
15:29 tarbar zmalone: going this route wants to install mysql, that seems funny
15:30 tarbar *mysql-common
15:32 tmclaugh[work] joined #salt
15:32 oznah any ideas on the 25sec delay ? I posted at 10:03? Thanks for any help
15:38 fisuk left #salt
15:40 debian112 joined #salt
15:41 DammitJim joined #salt
15:41 flowstat_ joined #salt
15:42 tarbar oznah: repost?
15:42 tarbar not seeing it
15:47 teryx510 joined #salt
15:48 oznah I'm experiencing a 25 sec delay when running salt-call on some minions. Others run immediately. I recently 12/7 upgraded to 2015.8.3 on the master & minions. Any ideas? Is there some cruft I need to clean up on the slow minions?
15:48 flowstat_ does salt-call run slower on the slow minions?
15:49 The_Loeki @oznah probably the delay is in connection to your salt master; try salt-call --local to see if that is fast, try salt-call -l debug to see the delay happenin'
15:49 flowstat_ yes, ^ that's the same as what I said, except from someone who actually knows what they're talking about
15:50 alvinstarr joined #salt
15:51 oznah @The_Loeki still runs slow when I run --local
15:51 The_Loeki run it with -l debug or even -l trace; you should see where it hangs
15:53 tmclaugh[work] joined #salt
15:53 oznah @The_Loeki if I run debug I see a pause right after it reads local configs
15:54 flowstat_ what about trace?
15:54 flowstat_ what's your network infrastructure? are you in AWS?
15:55 perfectsine joined #salt
15:55 The_Loeki aight, what exactly is your CLI line?
15:56 oznah network is the same between two test boxes. They are VMs w/ identical network & hardware specs
15:57 oznah pauses after [TRACE   ] Loading core.ip6 grain
15:58 TyrfingMjolnir joined #salt
15:58 drel_ joined #salt
15:58 Sucks joined #salt
15:58 The_Loeki salt-call -l debug --local file.file_exists /etc/passwd
15:58 drel_ hello
15:58 oznah next grain is core.ip4
15:58 drel_ first time here
15:59 oznah @The_Loeki salt-call -l trace pillar.items --local
16:00 AdamSewell joined #salt
16:01 oznah @The_Loeki I prepended time & it took - real    0m25.643s on the slow minion
16:02 oznah @The_Loeki I prepended time & it took - real    0m0.649s on the fast minion
16:03 dendazen joined #salt
16:03 oznah @The_Loeki These are rhel 7 boxes that I use for testing
16:05 nicksloan joined #salt
16:06 The_Loeki execute the exact command i gave you & time it
16:07 The_Loeki (you asked for pillar.items with arg --local, it's therefore still connecting to the master)
16:07 amcorreia joined #salt
16:07 oznah @The_Loeki I did run your command. that's the times I posted above
16:08 abednarik joined #salt
16:08 whytewolf oznah: also, when using -l debug what is the line both before and after the pause [i think you gave before as loading the ipv6 grain
16:09 drel_ I have a jinja template which has a few simple ifs in it ... it runs fine on centos but when I run it against ubuntu host says that my states are not formed as dicts
16:09 oida joined #salt
16:09 oznah @whytewolf with -l trace, it paused between ip6 & ip4 grains
16:10 The_Loeki @oznah no you didnt, salt-call -l debug --local file.file_exists /etc/passwd != salt-call -l debug file.file_exists /etc/passwd --local, which is what you did
16:10 whytewolf oznah: are the host names in /etc/hosts for both the fast and the slow minion
16:11 spuder joined #salt
16:11 oznah @whytewolf with -l debug you see "reading configuration from /etc/salt/minion.d/..." it pauses for 25 secs & then you see LazyLoaded jinja.render
16:11 The_Loeki and @whytewolf is right; this delay mostly boils down to slow/nonexistent DNS resolvers/ Salt masters
16:11 elsmo_ joined #salt
16:13 irctc568 joined #salt
16:13 irctc568 hi
16:14 sdm24 joined #salt
16:15 oznah @The_Loeki my apologies. I ran your command exactly and it came back real    0m25.645s
16:15 oznah @whytewolf I cat'd both /etc/host files & they are the same
16:16 whytewolf the same? meaning they have the current host fqdn in the host file for the system they are on with an ip that is not localhost?
16:16 whytewolf or the same file that is just localhost info
16:16 oznah @whytewolf one entry for 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
16:17 fyb3r joined #salt
16:17 whytewolf okay, so localhost info only.
16:17 whytewolf what happens when you put the host info into the hosts file for the current host?
16:17 brianfeister joined #salt
16:17 oznah @whytewolf yes and a commented out ipv6 loopback. no entry for it's hostname
16:18 oznah @whytewolf let me try
16:18 fyb3r soooo is there any particular reason that a reactor would cause the salt-master process for reactors to go into Uninterruptable Sleep?
16:18 The_Loeki @oznah sorry for misreading your msg then
16:18 fyb3r eventually causing a total crash
16:19 whytewolf fyb3r: only thing i can think of would be if a reacot was local on the master causing the master to fire more events that caused more reactors [reactor infinate loop?]
16:19 oznah @The_Loeki I did have the --local in a different spot. I ran it again though just like you posted it with the same delay
16:19 whytewolf oznah: basicly this sounds like the fqdn grain isn't able to get filled and is taking it's sweet time in attempting
16:20 fyb3r AH
16:20 shaggy_surfer joined #salt
16:20 fyb3r cause i see that once that reactor is getting compiled another zeromq  connection is made
16:20 AdamSewell joined #salt
16:21 whytewolf fyb3r: okay. that is strange. it shouldn't be doing that. unless it restarts
16:21 whytewolf cause that would cause another start event
16:21 fyb3r which i think it is cause the procs act as if they were orphaned
16:22 The_Loeki @oznah, np; again, I share whytewolfs hypothesis; check your resolvers. Maybe some other modules that try to connect / resolve anything?
16:22 fyb3r the strange part is that I have the reactors executing runners which access  couchbase. and after about 2 minutes they totally tank and reactors are no longer being checked for
16:23 oznah @whytewolf I added the entry to /etc/hosts & ran this command "time salt-call -l debug file.file_exists --local"
16:23 clintberry joined #salt
16:23 fyb3r getting ready to go through the 100k lines of logs to see what i can find >_> just curious if there was an issue with zeromq i was unaware of
16:23 oznah @whytewolf It took 25.591s
16:23 The_Loeki @oznah; the --local is again in the wrong spot (but it doesn't seem to matter)
16:23 whytewolf fyb3r: this sounds fubar. check your versions
16:23 fyb3r all match up
16:24 fyb3r :P when do i not bring a problem in here thats not totally fubar lol
16:24 whytewolf fyb3r: point
16:25 jimklo joined #salt
16:25 mattiasr joined #salt
16:26 whytewolf oznah: does either of the hosts have ipv6 setup or at the very least turned on?
16:27 whytewolf also what version of salt?
16:28 flowstat_ quick question: the last loose end I have to tie up in my deployment workflow is that I need to somehow pass a variable to the masterless minions I'm provisioning.
16:28 flowstat_ basically, take a jenkins job variable and eventually allow jinja to reference it
16:28 whytewolf fyb3r: honestly that one stumps me. orphaned threads/procs it shouldn't be creating new connections.
16:28 rotbeard joined #salt
16:28 fyb3r i dont think they are
16:28 oznah @whytewolf checking ipv6. it should be turned off though. I just upgraded on 12/7 to 2015.8.3
16:28 fyb3r i think they are crashing and staying up and salt just creates another to replace it
16:28 flowstat_ the only way I can figure it right now is to set environmental variables, then reference them in /srv/salt/grains
16:29 flowstat_ anyone face a similar issue and have a cleaner solution?
16:29 fyb3r but it eventually causes the box to go into swap
16:29 whytewolf flowstat_: https://docs.saltstack.com/en/latest/topics/pillar/#set-pillar-data-at-the-command-line
16:29 whytewolf flowstat_: set pillars through the cli
16:29 fyb3r use a database backend
16:29 fyb3r :D
16:29 whytewolf [since jenkins is just running running salt-call anyway]
16:30 HappySlappy joined #salt
16:30 whytewolf sdb is another option.
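The CLI-pillar route looks roughly like this (the pillar key and target file are hypothetical); a value passed on the command line is visible to Jinja like any other pillar:

```yaml
# Jenkins step (sketch):
#   salt-call --local state.highstate pillar='{"deploy_env": "staging"}'
#
# consumed in a state file:
deploy-env-marker:
  file.managed:
    - name: /etc/deploy_env
    - contents: {{ salt['pillar.get']('deploy_env', 'dev') }}
```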
16:30 oznah @whytewolf ipv6 appears to be disabled on both
16:31 flowstat_ I'm trying to follow Ryan Lane's approach, which is.. oh yeah, he has a makefile. I would assume that is the piece that sets grains / pillars
16:32 perfectsine joined #salt
16:33 whytewolf flowstat_: I'm not 100% sure how he is doing it. but for temp pillars that seems a logical choice
16:33 quasiben joined #salt
16:33 fyb3r [salt.transport.zeromq][DEBUG   ][31321] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/master', 'es-ifdm01_master', 'tcp://127.0.0.1:4506', 'clear')
16:34 flowstat_ yeah, I think he's using bash to either set grains or pillars, or setting environmental variables, and using jinja to turn those into the necessary grains on the minions before running highstate
16:34 fyb3r that is run immediately after the reactor returns
16:34 whytewolf fyb3r: odd. bug ala report
16:35 whytewolf [zmq is not something i ever got deep into]
16:35 fyb3r maaaaaaaaannnnn.
16:35 fyb3r :P  all good lol
16:35 fyb3r ty though whyte
16:37 whytewolf oznah: is the fqdn_ip6 and fqdn_ip6 grain set on either?
16:39 flowstat_ joined #salt
16:39 tracphil joined #salt
16:40 oznah @whytewolf ipv6 grain is set on both ::1
16:41 whytewolf opps one of those fqdn_ipv6's was supposted to be ipv4
16:41 oznah @whytewolf ipv4 grain is also set.
16:41 oznah @whytewolf I should have also typed fqdn_ipv6 & ipv4 respectively
16:45 whytewolf oznah: time how long this takes on both systems python -c 'import socket; print socket.getaddrinfo(socket.gethostname(),0,socket.AF_UNSPEC,socket.SOCK_STREAM,socket.SOL_TCP, socket.AI_CANONNAME)'
16:46 oznah @whytewolf no real diff. .024s on "slow" minion & .028 on "fast" one
16:47 perfectsine joined #salt
16:47 whytewolf okay, then it isn't the fqdn lookup
16:48 oznah @whytewolf that's good
16:49 whytewolf oznah: did both have the hostna,e and the ip address in them [just have to ask they should have though]
16:50 perfectsine joined #salt
16:51 oznah @whytewolf yes they both returned a list 2, 1, 6, hostname, ip addr, 0
16:51 whytewolf okay. like i said just had to ask
16:52 oznah @whytewolf no prob. I really appreciate the help
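whytewolf's one-liner is Python 2 (`print` as a statement); a Python 3 version that also does the timing itself:

```python
import socket
import time

def time_lookup(name):
    """Time a getaddrinfo() call; a multi-second answer points at DNS."""
    start = time.monotonic()
    try:
        info = socket.getaddrinfo(name, 0, socket.AF_UNSPEC,
                                  socket.SOCK_STREAM, socket.SOL_TCP,
                                  socket.AI_CANONNAME)
    except socket.gaierror as exc:
        info = exc                      # resolution failure is itself the clue
    return time.monotonic() - start, info

elapsed, result = time_lookup(socket.gethostname())
print("%.3fs" % elapsed, result)
```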
16:55 whytewolf oznah: np, unforchantly there is a ton of things that might be causing it. cause it could be happening anywhere in this bit of code https://github.com/saltstack/salt/blob/v2015.8.3/salt/grains/core.py#L1443-L1587
16:55 whytewolf [the ipv4 and ipv6 grains
16:56 whytewolf most of which are calls to this https://github.com/saltstack/salt/blob/v2015.8.3/salt/utils/network.py
16:58 oznah @whytewolf hmmm... I tested same salt-call command on salt-master and is slow also. It hangs at the core.ip6 also.
16:59 oznah @whytewolf think I should rollback the upgrade? ie. go back to 2015.8.0?
17:00 whytewolf I'm not sure it will help. but you can try it.
17:01 oznah @whytewolf maybe I should get an md5 of the core.py file to see if they are diff?
17:02 flowsta__ joined #salt
17:02 whytewolf I would get a md5 of the util.network.py file. 90% of the calls are straight calls to it
17:03 kshlm joined #salt
17:04 whytewolf but if the function that is slow is core.ip6 then this is the function that is being run https://github.com/saltstack/salt/blob/v2015.8.3/salt/utils/network.py#L958-L984
17:04 whytewolf with proto='inet6'
17:04 whytewolf and include_loopback=True
17:06 oznah @whytewolf I'm not finding that file util.network
17:06 whytewolf in? util.network shouldn't be a directory
17:06 Bryson joined #salt
17:07 oznah @whytewolf find /usr -type f -name "util.network*"
17:08 whytewolf oh sorry. so used to salts naming scheme. util/network.py
17:09 oznah @whytewolf oh duhhh!
17:11 oznah @whytewolf just checked. they are the same on both boxes
17:11 bhosmer joined #salt
17:12 fyb3r just realized im executing a runner that executes an execution module.... wonder if thats the problem
17:12 whytewolf oznah: try the downgrade :(
17:12 whytewolf see if it helps
17:13 whytewolf fyb3r: a runner that executes an execution module? custom runner or something like orchestrate?
17:13 fyb3r yeah its a custom runner
17:14 oznah @whytewolf I have a few minions that are still running 2015.8.1. I am going to see if they run quickly or not
17:14 digismack joined #salt
17:15 fyb3r reactor -> sends event data via inline pillar -> runner -> uses pillar to execute grains.items and send return data to couchbase
17:15 grumm_servire joined #salt
17:15 troyready joined #salt
17:15 whytewolf fyb3r: i don't think that should be a problem.
17:16 keepguessing joined #salt
17:16 whytewolf fyb3r: sounds pretty reasonable
17:16 fyb3r well crap
17:16 keepguessing I am getting this error Jinja variable 'None' has no attribute 'lower'
17:16 fyb3r w/e variable name you are using doesnt exist
17:17 keepguessing and this is what is there in the sls {% if pillar.get('enable_cluster_monitoring', '').lower() == 'true' %}
17:17 fyb3r so {{ data['id'] }}, in this case data doesnt exist
17:17 fyb3r nothing is returning them
17:17 jaybocc2 joined #salt
17:17 fyb3r then*
17:17 whytewolf keepguessing: that should be salt.pillar.get or salt['pillar.get']
17:18 whytewolf or pillar['enable_cluster_monitoring']
17:18 whytewolf pillar.get = None
17:19 keepguessing whytewolf: I am reffering to this https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/top.sls
17:19 keepguessing there are multiple instances this is done there.
17:20 whytewolf keepguessing: kubernetes might have tweeked there jinja to accept that. but default salt that does jack all
17:21 keepguessing whytewolf: not sure what you mean jack all.
17:21 whytewolf nothin
17:21 whytewolf nota
17:21 keepguessing whytewolf: ah ok :-)
17:21 keepguessing so how do I fix this issue?
17:21 flowstat_ joined #salt
17:21 fyb3r i may have found the issue whyte. if so I will send a link out for ya
17:21 whytewolf salt.pillar.get
17:22 whytewolf keepguessing: basicly append salt. to the begining of the pillar.get
17:24 stomith joined #salt
17:24 keepguessing okies I will try that thanks.
17:24 AndreasLutro keepguessing: pillar.get only works one level deep
17:24 AndreasLutro salt['pillar.get'] works with nested dictionaries
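One way the original error happens, in plain Python terms (Jinja's `pillar.get` behaves like `dict.get` here): the default only kicks in when the key is absent, so a pillar key that exists with a null value hands `.lower()` a None.

```python
pillar = {"enable_cluster_monitoring": None}   # key present, value null

# The default is NOT used -- the key exists, so .get() returns None:
value = pillar.get("enable_cluster_monitoring", "false")
print(value)                # None
# value.lower() would raise: 'NoneType' object has no attribute 'lower'

# Coercing to a string first covers both the missing and the null case:
safe = (pillar.get("enable_cluster_monitoring") or "false").lower()
print(safe)                 # false
```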
17:25 stomith using the vmware driver, is there a way to remove an individual snapshot from a vm?
17:26 shaggy_surfer joined #salt
17:27 murrdoc joined #salt
17:28 timoguin joined #salt
17:30 oznah @whytewolf downgrading salt did not help
17:33 colegatron joined #salt
17:33 jaybocc2 joined #salt
17:33 hackel joined #salt
17:34 justanotheruser joined #salt
17:34 perfectsine joined #salt
17:35 writtenoff joined #salt
17:37 hackel So with the latest salt-cloud, it tells me the openstack driver has been depricated in favour of nova, yet in the nova docs, it tells me it requires the latest develop branch of salt to be installed?  So should I stick to the openstack driver if I want a stable, production release?
17:37 oznah @whytewolf well crap. I enabled ipv6 and it runs fast now
17:38 rmnuvg joined #salt
17:39 whytewolf oznah: strange with ipv6 disabled it should return faster and report that it isn't enabled.
17:40 zmalone unless it's trying to connect over ipv6, and then trying ipv4 after a timeout caused by broken ipv6 networking
17:40 whytewolf ^
17:40 whytewolf basicly ipv6 isn't being disabled but is just being broken
17:42 oznah @whytewolf uuuugh! I followed the RHEL doc on disabling. maybe I shouldn't do that anymore.
17:44 whytewolf I don't think I ever read the redhat doc on disabiling ipv6. but honestly some of the docs do have some very nasty issues with how things work after words
17:46 whytewolf I do remeber trying to disable it in centos and no matter what an ipv6 address still showed in ifconfig [a sure sign it wasn't disabled]
17:46 andrew_v joined #salt
17:47 whytewolf that was years ago though
17:47 whytewolf havn't bothered with disabiling it since.
17:47 ipmb joined #salt
17:48 oznah @whytewolf I had it disabled (ie. no ipv6 address). drop this in a file in /etc/sysctl.d/ net.ipv6.conf.all.disable_ipv6 = 1 to disable and run sysctl -p /etc/sysctl.d/filename
17:48 AdamSewell joined #salt
17:49 oznah @whytewolf I changed 1 to 0 and reran sysctl and confirmed an ipv6 address and then the salt-call was speedy again
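oznah's toggle, staged in a scratch dir so the sketch is runnable as-is; on a real box the file lives in /etc/sysctl.d/ and the reload needs root (1 disables IPv6, 0 re-enables it):

```shell
set -e
dir=$(mktemp -d)   # stand-in for /etc/sysctl.d
printf 'net.ipv6.conf.all.disable_ipv6 = 1\n' > "$dir/90-ipv6.conf"
cat "$dir/90-ipv6.conf"
# on the real box (as root): flip 1 -> 0 to re-enable, then reload:
#   sysctl -p /etc/sysctl.d/90-ipv6.conf
```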
17:49 Gunslngr4hre joined #salt
17:50 whytewolf humm. that should do it. I remeber that back when i did it it wasn't a kernel setting
17:50 whytewolf it was a stupid flag that went into /etc/sysconfig
17:50 whytewolf I don't even use centos anymore
17:50 whytewolf [redhat]
17:51 fyb3r soooo when i use salt's local client inside a runner it seems to hold it open somehow if the call times out
17:51 oznah @whytewolf I don't have much of a choice. It is still odd that one box it didn't have an issue and the other was slow
17:52 whytewolf oznah: my switch away from redhat wasn't originally a choice either ;)
17:52 whytewolf fyb3r: realllly? interesting.
17:55 whytewolf oznah: it might be possable that the kernel is in an incompleate state for the ipv6 disable and needs to be restarted to clean up all of the configs with ipv6 and the fast box is one that has been restarted after the disable?
17:55 oznah @whytewolf Also interesting that the box that has ipv6 disabled but was fast, runs in about 1/2 the time as the "slow" box with ipv6 now enabled
17:56 oznah @whytewolf possible, let me try that
17:56 murrdoc joined #salt
17:56 Pixionus joined #salt
17:56 Pixionus joined #salt
17:58 PeterO joined #salt
18:00 keepguessing whytewolf: I still get the same error.
18:00 whytewolf keepguessing: does that pillar exist?
18:01 keepguessing no. thats it is using a default value
18:01 keepguessing salt.pillar.get('enable_node_logging', 'False').lower()
18:01 keepguessing this is the change I made.
18:01 keepguessing if it does not exist it would give back a 'False' right.
18:01 fyb3r nice. got zombie procs now lol
18:02 fyb3r though i entered some better exception handling and it might be couchbase giving me issues
18:03 whytewolf keepguessing: yes it would return the string 'False'
18:03 PeterO If I use a state to add a new yum repo location and then try to install a package from that repo how can I get salt to accept the keys. Running it manually I get something like this: http://pastebin.com/unT3dDce just wondering how to accept those PGP keys in a salt state.
18:04 keepguessing whytewolf: what could be causing this failure?
18:05 keepguessing whytewolf: something is returning bool here I believe.
18:06 zmalone PeterO: I thought there were gpgkey options in https://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkgrepo.html that allowed you to accept the keys, although I'm not on a RHEL/Centos platform
18:08 whytewolf keepguessing: I have no idea. I just tossed together a quick test and am not having the same issue as you
18:08 PeterO zmalone: ah hmm.. I'll have to try that out. I was just using a file.managed to drop in the .repo file to yum.repos.d. I'll see if this would work.
18:08 whytewolf keepguessing: here is my test. and funk doens't exist https://gist.github.com/whytewolf/0d736e921dbc01df35e5
18:08 keepguessing could you see in your test if pillar actually returned True instead of "True" [ie bool vs string] how was it responding?
18:10 whytewolf it was returning a string
18:10 av_ joined #salt
18:10 whytewolf what version of salt are you on?
18:10 AdamSewell joined #salt
18:10 keepguessing salt 2015.5.3 (Lithium)
18:11 whytewolf humm. let me dig though the close bugs cause i am on 2015.8.3 and am not having the issue. and i rember there being a bug about that somewhere
18:11 grumm_servire joined #salt
18:11 larsfronius joined #salt
18:12 keepguessing whytewolf: I replaced the above statement to salt.pillar.get('enable_node_logging', False) == True
18:12 keepguessing and it worked
18:13 whytewolf nothing wrong with useing bools.
18:13 keepguessing whytewolf: this is part of kubernetes. Users could set the values to be x="True" or x=True
18:13 keepguessing there should a way to handle this.
18:13 whytewolf keepguessing: I'm sure there was a bug that was fixed between versions that fixed this
18:14 keepguessing whytewolf: sure if you could point me to it I will try to anlayze and see how I can apply that here.
18:17 whytewolf keepguessing: this was the closest thing i can find and it is from earlyer then 2015.5 https://github.com/saltstack/salt/issues/16537
18:17 saltstackbot [#16537]title: Invalid boolean value 'False' from pillar | Minion OS: Windows Server 2008 R2...
18:19 denys joined #salt
18:20 shiin joined #salt
18:22 ViciousLove joined #salt
18:23 hal58th joined #salt
18:23 keepguessing Is salt.pillar.get('me') false if me is not a defined pillar?
18:23 clintberry joined #salt
18:25 keepguessing whytewolf: ?
18:25 whytewolf keepguessing: it shouldn't be no. it should be None
18:26 keepguessing ok. So I could do a if (salt.pillar.get('me') != None and salt.pillar.get('me') == 'apple') do this
18:27 whytewolf keepguessing: != doens't work with None. you have to use is not None
18:28 Brew joined #salt
18:29 keepguessing ah ok.
18:29 shaggy_surfer joined #salt
18:31 bhosmer joined #salt
18:32 baweaver joined #salt
18:33 baweaver joined #salt
18:33 tiadobatima joined #salt
18:33 shaggy_surfer joined #salt
18:35 keepguessing Is there a syntax error here
18:35 keepguessing {% if pillar.get('enable_l7_loadbalancing') is not None and pillar.get('enable_l7_loadbalancing', 'False').lower() == 'glbc' %}
18:36 keepguessing it says "Rendering SLS 'base:kube-addons' failed: Jinja syntax error: no test named 'None'; line 33"
18:36 whytewolf humm. might be a jinja vs python thing.
18:36 pfhorge joined #salt
18:36 whytewolf in python you can't test None with !=
18:36 keepguessing yeah
18:36 kidneb joined #salt
18:37 whytewolf try dropping all of the is not None
18:37 whytewolf {% if pillar.get('enable_l7_loadbalancing') and pillar.get('enable_l7_loadbalancing', 'False').lower() == 'glbc' %}
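For the record, the "no test named 'None'" error is a Jinja spelling issue: `is` introduces a named Jinja test, and Jinja's null test is lowercase `none`. So this variant of the same condition also renders (sketch):

```jinja
{% if pillar.get('enable_l7_loadbalancing') is not none
      and pillar.get('enable_l7_loadbalancing', 'False').lower() == 'glbc' %}
...
{% endif %}
```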
18:46 tiadobatima joined #salt
18:49 bhosmer joined #salt
18:52 notnotpeter joined #salt
18:53 Lionel_Debroux joined #salt
18:53 keepguessing Ah it now fails with Too many functions declared in state '*' in SLS 'top'
18:55 bl4ckcontact joined #salt
18:55 keepguessing this is my top.sls http://paste.ubuntu.com/13902202/
18:56 AndreasLutro are you doing state.sls top?
18:58 keepguessing AndreasLutro: yes. I am trying to see if there is any failure in installation of any of the modules
18:58 AndreasLutro you can't do state.sls on top files
18:58 keepguessing oh ok.
18:58 AndreasLutro do state.highstate instead
18:59 ex-cowboy joined #salt
18:59 keepguessing AndreasLutro: thanks that helped
19:00 solidsnack joined #salt
19:00 ex-cowboy hi, i have trouble with one of my minions. it does not show any pillar data
19:00 ex-cowboy when i run salt 'foo' pillar.items i dont get anything back
19:01 AndreasLutro ex-cowboy: are you use there's a pillar matching the minion name in the pillar top.sls?
19:01 flowstat_ joined #salt
19:01 flowstat_ joined #salt
19:02 ex-cowboy Andreas.utro: and then there was light :-)
19:02 ex-cowboy thanks
19:03 ex-cowboy the regex was off center
19:04 PeterO How do you create an array of strings in a state? I've got this but it doesn't seem to work: http://pastebin.com/RRrQNGmK
19:05 BlackAle_ joined #salt
19:05 PeterO seems to add an extra empty string to the end of the array so it has 3 elements instead of two.
19:06 AndreasLutro PeterO: I guess that's not the end of the state file
19:06 PeterO correct
19:06 AndreasLutro there's probably something else below there that's messing up the yaml
19:07 PeterO AndreasLutro: this is the whole block: http://pastebin.com/aAtyS2iq
19:07 forrest joined #salt
19:08 AndreasLutro looks correct
19:08 PeterO yeah that's what I thought
19:08 AndreasLutro maybe the function doesn't take a list of gpgkeys?
19:08 jaybocc2 joined #salt
19:09 PeterO I was thinking that. I can't find any supporting documentation for or against that.
19:09 PeterO heck the docs don't even say it supports gpgkey at all.. it's only in the example that it shows that.
19:10 zmalone yeah, I noticed that too.
19:10 whytewolf "Additional configuration values, such as gpgkey or gpgcheck, are used verbatim to update the options for the yum repo in question."
19:10 zmalone pkgrepo.managed is a little flakey, which is probably why someone suggested using file.managed
19:10 DammitJim joined #salt
19:11 PeterO whytewolf: so if it's verbatim you think a multiline string would just work then?
19:11 PeterO `gpgkey: | ` ?
19:12 AndreasLutro I can't find any reference to the gpgkey arg in salt's source, so guessing it passes on to some other python lib
19:12 whytewolf PeterO: try it.
19:12 whytewolf what have you got to lose at this point :P
19:13 PeterO That's a great point lol
19:13 larsfronius joined #salt
19:13 baweaver joined #salt
19:14 PeterO if that doesn't work re-reading the docs looks like `key_url` could be synonymous with `gpgkey`
19:15 PeterO but that one is a function
19:15 BlackAle__ joined #salt
19:16 fyb3r i know youre dying to know whyte. seems that the issues i was having were from handling the couchbase exceptions incorrectly.
19:17 whytewolf fyb3r: oh man. was it exceptions bubbling up in salt?
19:18 fyb3r nah it was because i have a document for each minion in the db. nd ive got 2 masters, 165 syndics, 4000 minions. each syndic has anywhere from 2 to 160 minions lol
19:18 fyb3r so as events come in I have reactors firing different runners i made to gather data and keep the documents up to date
19:18 whytewolf lol. yikes.
19:18 fyb3r kinda like realitime monitoring
19:18 fyb3r just sometimes one of the minions are not in the db yet and id get a not found error. i guess because i was returning after that the connection remained open
19:19 fyb3r that and the fact it takes FOREVER to get grains.items back
19:19 fyb3r when you get 120 trying simultaneously the hardware kinda cries
19:20 justanot1eruser joined #salt
19:20 whytewolf understandable. poor hardware
19:21 fyb3r esp sine the single master is a dual core with hyper threading. 3gb ram
19:21 fyb3r hahaha
19:21 whytewolf lol. thats a little bigger [not much] then what i use for my personal at home salt master. that i do my openstack configs off of
19:22 PeterO Yeah multi-line seemed to have worked at least to write the file.
19:22 justanotheruser joined #salt
19:23 whytewolf PeterO: make sure it wrote it correctly. nothing worse then a broken repofile
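The multi-line form that ended up working, roughly (repo name and URLs here are hypothetical); since pkgrepo.managed passes extra options through verbatim, the block scalar lands in the .repo file as two key URLs:

```yaml
example-repo:
  pkgrepo.managed:
    - humanname: Example Repo
    - baseurl: https://repo.example.com/el7/
    - gpgcheck: 1
    - gpgkey: |
        https://repo.example.com/keys/RPM-GPG-KEY-first
        https://repo.example.com/keys/RPM-GPG-KEY-second
```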
19:23 pfhorge joined #salt
19:24 pfhorge How do people normally deal with separating things like dev, stage, and prod environments? Is https://docs.saltstack.com/en/latest/topics/tutorials/states_pt4.html a good representation?
19:25 whytewolf pfhorge: two ways seem most common, enviroments. [like the example you show there] which can be a pain. the other is compleatly seperate masters. I personally go with the later cause it is just easier to maintain
19:26 babilen +1
19:26 pfhorge I've been having a hard time getting all of the file_roots and pillar_roots to do what I want, so it seemed like a good time to ask for more experienced viewpoints
19:27 stomith joined #salt
19:28 whytewolf pfhorge: for what it is worth i have never seen anyone get salt enviroments right the first time. it is a PAIN to figure out how the targeting works. where the top file lives. how they interact.
19:28 pfhorge That sounds like what I'm running into. Things from prod failing to find SLS files in base, that sort of thing.
19:31 whytewolf like i said. I personally use a master per enviroment and just skipping enviroments all together
19:31 GreatSnoopy joined #salt
19:32 whytewolf or, there is always the forget masters all together approch. :P
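The two approaches whytewolf contrasts — the environments from the linked tutorial vs. a master per environment — come down to how the master config is laid out. A minimal sketch of the single-master, multi-environment version (paths are illustrative, not from the conversation):

```yaml
# /etc/salt/master -- sketch of per-environment roots, following the
# states_pt4 tutorial linked above; directory names are illustrative
file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  prod:
    - /srv/salt/prod

pillar_roots:
  base:
    - /srv/pillar/base
  dev:
    - /srv/pillar/dev
  prod:
    - /srv/pillar/prod
```

The master-per-environment alternative skips all of this: each master carries a single `base` and nothing else, which is why whytewolf calls it easier to maintain.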
19:32 zmalone shucks, I hit https://github.com/saltstack/salt/issues/28443 again, even though I know about it.
19:32 saltstackbot [#28443]title: salt.states.user.present creates accounts with a password hash of "None" | salt.states.user.present will create accounts with a password hash of "None" if password: is present, but undefined.  Proper behavior is probably to check for null values, and set the password hash to "!" if password is null.  Using "None" instead of "!" means that password expirations will be in place (because "None" isn't considered to be a special hash, like "!"),
19:33 whytewolf know about it. you wrote the ticket on it :P
19:34 fyb3r definition of LocalClient.cmd timeout: Seconds to wait after the last minion returns but before all minions return.
19:34 fyb3r the hell does that even mean
19:34 whytewolf fyb3r: I have no idea. I never understood that line
19:35 whytewolf I know that timeout does FUNKY magic that doesn't act like any timeout i have ever seen
19:35 fyb3r im guessing thats part of my damn problem too lol
19:36 flowstat_ I think that means "the most amount of time I'll wait between minion responses"
19:36 fyb3r i was thinking so but wasnt sure
19:36 whytewolf that might be it.
19:36 whytewolf odd wording on it though
19:36 fyb3r but it doesnt make sense for cmd with only a single minion
19:37 flowstat_ s/the last minion/the most recent minion response/
19:39 whytewolf fyb3r: well in that case i guess most recent minion response would be from the time the command is run
19:40 whytewolf it just has that option cause localclient.cmd can take multiple targets
19:40 whytewolf but it is worrying that there is not a total timeout
19:40 fyb3r agreed
19:41 fyb3r cause im seeing them stack up still
19:41 fyb3r >_>
19:41 fyb3r though much much slower
19:44 stomith is there any way to have a state which upgrades all packages?
19:45 whytewolf stomith: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html#salt.states.pkg.uptodate
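The `pkg.uptodate` state whytewolf links can be used as a one-stanza SLS; a minimal sketch (state ID and filename are illustrative):

```yaml
# upgrade-all.sls -- sketch using the pkg.uptodate state linked above;
# refresh: True updates the package database before upgrading
upgrade-all-packages:
  pkg.uptodate:
    - refresh: True
```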
19:45 job maybe 'aptpkg.upgrade' ?
19:45 stomith whytewolf, I tried using that. It kept failing for some reason, and I'm not sure how to diagnose *why*
19:46 whytewolf what was the error?
19:46 stomith well, let me try again.
19:46 JD question: When do the official package repositories get updated from the 2015.8 branch on git ?
19:46 murrdoc they get updated
19:46 murrdoc when they get updated
19:46 murrdoc go away
19:46 murrdoc baiting
19:46 murrdoc :)
19:47 zmalone JD typically when a release is announced
19:47 murrdoc sorry, they normally send an email to salt-announce
19:47 murrdoc see /topic
19:47 zmalone unless you are not on the "official" repositories, which happens a lot
19:47 zmalone what repositories are you using?
19:47 JD k.  i been patching manually
19:47 JD github/saltstack/salt
19:47 zmalone sorry, which package repositories
19:48 JD oh the rhel6 yum
19:48 whytewolf JD: epel?
19:48 zmalone repo.saltstack.com or epel or copr?
19:48 stomith whytewolf, it just says, "Upgrade failed." No other message.
19:49 JD repo.saltstack.com/yum/rhel6
19:49 giantlock joined #salt
19:49 whytewolf stomith: try the execution module pkg.upgrade
19:49 whytewolf stomith: also what minion os/distro
19:50 stomith minion is centos
19:51 whytewolf JD: did you perhaps fall into setting up during the 2 week period before they added /latest to the repo?
19:52 stomith argh. and pkg.upgrade returns no result, and doesn't do any upgrades. To any machine. Arrgh.
19:52 * stomith shakes a fist.
19:52 avenda joined #salt
19:53 stomith is this correct syntax? salt 'minion' pkg.upgrade
19:53 stomith ?
19:53 zmalone JD: or https://github.com/saltstack/salt/issues/29477
19:53 saltstackbot [#29477]title: Metadata of repo.saltstack.com/yum/rhel6/ doesn't match packages in repo | Hello there,...
19:53 JD @whyte: if there was a brief window, then yes i found it
19:53 whytewolf stomith: yeah that correct syntax
19:53 brianfeister joined #salt
19:54 Netwizard joined #salt
19:55 whytewolf stomith: final test: what does salt 'minion' cmd.run 'yum -q -y upgrade' return
19:56 bhosmer joined #salt
19:56 aidalgol joined #salt
19:56 job where can i subscribe to salt-announce
19:57 ubaconmecrazy joined #salt
19:57 whytewolf job: https://groups.google.com/forum/#!forum/salt-announce
19:57 ubaconmecrazy i'd greatly appreciate any help with this: http://stackoverflow.com/questions/34210012/saltstack-and-gitfs-no-top-file-or-external-nodes-data-matches-found
19:58 ubaconmecrazy and you'd get some juicy karma out of it
19:58 zmalone JD: yeah, if you fix your repo files to match the current config, you should start getting updates again
19:58 job thanks whytewolf
19:58 zmalone https://repo.saltstack.com/#rhel the baseurl lines are probably all you'll need to change
19:58 ubaconmecrazy figured it'd be easier than trying to explain all the details here
19:59 whytewolf ubaconmecrazy: you didn't post your top file in that so if the error is in your top file no one would know
19:59 ubaconmecrazy oh yeah. thanks! fixing.
20:01 ubaconmecrazy updated. it's pretty basic. just trying to create a simple test for a proof of concept. I'm coming from the puppet world and looking forward to leaving puppet in the dust.
20:03 cyborg-one joined #salt
20:03 whytewolf okay, first - test doesn't exist. given the settings shown it should be - formulas.test. second as a line of testing try salt-run fileserver.dir_list to be sure that the server can see the files
20:04 whytewolf err that should be - formulas.test.test
20:04 whytewolf [thought that said init.sls
20:04 whytewolf ]
20:04 sdm24 ubaconmecrazy, whytwolf: also salt-run fileserver.file_list to see the files, not just directories
20:05 whytewolf ahh yes. I keep forgetting that files are a thing also :P
20:05 iggy ubaconmecrazy: `salt-call cp.list_master` and see what shows up
20:06 sdm24 and maybe what iggy just said. I had to google the specific command haha
20:06 fyb3r decreasing the timeout actually increased the orphans
20:06 ubaconmecrazy k, one sec. thanks.
20:07 whytewolf lol. sdm24 basicly it is the other side of fileserver. to make sure that the master is serving out what the rock is cook... i mean is serving out the filelist not just ingesting it
20:07 sdm24 or salt '*' cp.list_master, if your master and minion are seperate machines
20:07 tkharju joined #salt
20:07 iggy you should definitely have a minion installed on the master, but yeah, for completeness sake
20:08 * whytewolf never gets the fighting against running a minion on master
20:09 sdm24 im not against it, its just for completeness. His examples show salt '*'
20:10 whytewolf never said you were sdm. just a comment i see a lot about minion on master
20:10 sdm24 yep. no worries
20:10 fyb3r sounds naughty
20:10 fyb3r haha
20:11 fyb3r that moment when you're the only one laughing at  your joke.
20:11 ubaconmecrazy ok, first, salt-run fileserver.dir_list:
20:11 ubaconmecrazy - /srv/salt/salt-states - /srv/salt/salt-states/formulas - /srv/salt/salt-states/formulas/test
20:13 whytewolf fyb3r: it's okay. not everyone has a sense of humor. some just work in IT and are just insane
20:13 ubaconmecrazy cp.list_master:     - /srv/salt/salt-states/README.md     - /srv/salt/salt-states/formulas/test/test.sls     - /srv/salt/salt-states/top.sls
20:13 iggy ubaconmecrazy: so your git repo has /srv/salt/salt-states.... in it?
20:14 whytewolf the mountpoint!
20:14 babilen ubaconmecrazy: cp.list_states ?
20:14 whytewolf it's throwing everything off
20:14 ubaconmecrazy oh?
20:14 ubaconmecrazy list states:     - .srv.salt.salt-states.formulas.test.test     - .srv.salt.salt-states.top
20:14 clintberry joined #salt
20:14 whytewolf yeah. mountpoint in gitfs is how it lies in relation to salt://
20:15 whytewolf [not the real filesystem]
20:15 ubaconmecrazy k, i'll uncommment that
20:15 ubaconmecrazy i mean, comment it out
20:15 whytewolf uncomment?
20:15 ubaconmecrazy :D
20:15 racooper joined #salt
20:15 whytewolf lol
20:15 ubaconmecrazy english is my second language
20:15 shaggy_surfer joined #salt
20:16 whytewolf it's okay. typo is my first lang
20:16 fyb3r ^
20:17 ubaconmecrazy OMG it worked! it was the mountpoint! Thank you so much!!
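For reference, the fix above amounts to dropping the per-remote `mountpoint` from the gitfs config. A sketch (the repo URL is illustrative; the stackoverflow post has the real one):

```yaml
# /etc/salt/master -- sketch of the gitfs setup discussed above
fileserver_backend:
  - git

gitfs_remotes:
  - https://example.com/salt-states.git
  # a per-remote mountpoint would have prefixed every file in the repo
  # relative to salt://, e.g.:
  # - https://example.com/salt-states.git:
  #   - mountpoint: salt://srv/salt/salt-states
  # ...which is why top.sls was listed by cp.list_master under that
  # prefix but never found at salt://top.sls by highstate
```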
20:17 gekitsuu joined #salt
20:17 ubaconmecrazy it was bacon me crazy!
20:18 gekitsuu Can someone show me an example of how to read a file from the salt:// file system inside a custom state module?
20:18 iggy gekitsuu: cache it then read from the cachedir
20:19 ekristen joined #salt
20:19 whytewolf ubaconmecrazy: no problem
20:20 iggy gekitsuu: more info will get you better answers btw
20:21 whytewolf ^^
20:21 gekitsuu iggy: I'm writing a salt formula for mariadb and I want user to provide a schema file for each database. I want to read the schema file location from the salt state and then run something like mysql {{ database }} < {{ path to sql file }}
20:22 whytewolf gekitsuu: template engine or no template engine?
20:23 gekitsuu if I use a template engine it'll be jinja but I haven't decided yet
20:23 whytewolf gekitsuu: doesn't have to be jinja.
20:23 gekitsuu Well people here are comfortable with Jinja
20:23 iggy yeah, if you don't need templating, salt['cp.cache_file']() and then read directly out of the cache
20:24 iggy i.e. https://github.com/saltstack/salt/blob/develop/salt/states/file.py#L3623
20:24 akhter joined #salt
20:24 gekitsuu OK I'll take a look at that, thanks!
20:26 whytewolf gekitsuu: this is also an example [this uses the template engine. but that part can be ignored] https://github.com/whytewolf/salt-debug/blob/master/_modules/debug.py
20:27 acsir joined #salt
20:31 gekitsuu Thanks whytewolf I'll take a look at that one too
20:33 roock joined #salt
20:35 keimlink joined #salt
20:36 notnotpeter joined #salt
20:38 akhter_1 joined #salt
20:44 solidsnack joined #salt
20:46 akhter joined #salt
20:47 Hazelesque_ joined #salt
20:47 notnotpeter joined #salt
20:50 baweaver joined #salt
20:50 perfectsine joined #salt
20:53 akhter joined #salt
20:58 cornfeedhobo joined #salt
20:58 ViciousL1ve joined #salt
20:58 hal58th_ joined #salt
21:00 cliluw joined #salt
21:00 cornfeedhobo so, i am sure i am missing something simple, and would appreciate someone pointing me in the right direction.  i am wanting to watch for a process to be finished starting, but salt moves on to other stuff before the service is finished starting because it forks and the original pid is gone. is there a state to wait, in a loop, for something to reach a desired output? e.g. I could curl couchbase every second until it boots up, failing after 30s
21:00 cornfeedhobo of trying
21:01 AndreasLutro joined #salt
21:02 baweaver joined #salt
21:04 cornfeedhobo in my current state, i am only using salt for provisioning, and i have to run salt 3 successive times for my state to be sane :(
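One way to get the poll-until-ready behavior cornfeedhobo describes without a dedicated wait state is to shell out and loop, capping the wait so the state fails instead of hanging. A sketch — the URL, port, and state ID are illustrative, and `timeout` and `curl` must exist on the minion:

```yaml
# sketch: retry every second until couchbase answers, fail after 30s
# (timeout(1) exits nonzero on expiry, which fails the cmd.run state)
wait-for-couchbase:
  cmd.run:
    - name: >-
        timeout 30 sh -c
        'until curl -sf http://localhost:8091/pools; do sleep 1; done'
```

Later states can then `require` this one, so the run stops cleanly if the service never comes up.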
21:07 PeterO bleh... this pkg.installed is making me really angry. I can't get it to `yes | `.
21:07 flowstat_ joined #salt
21:07 whytewolf ... why would pkg.installed need a yes |?
21:07 cornfeedhobo PeterO: i am new to salt, but in everything else i have done, you need to use `expect` to wrap user input
21:08 cornfeedhobo but yea, what whytewolf asked
21:08 whytewolf pkg.install already passes -y to the package manager.
21:08 PeterO whytewolf: still that custom yum.repos.d issue. I've got the file added fine but the the yum install process asks to verify multiple times the key
21:09 bhosmer joined #salt
21:09 cornfeedhobo ooooh. use `rpm --import ...` first
21:09 MindDrive joined #salt
21:09 whytewolf ... what the. this is a badly designed repo is what you are saying
21:09 PeterO Yeah probably
21:09 PeterO rpm import eh?
21:09 cornfeedhobo yeah, chain those suckers. import the key first. you'll be much happier
21:10 whytewolf ^
21:10 PeterO so if all I've been provided is a .repo file to drop into /etc/yum.repos.d/ just rpm --import whatever is in the gpgkey?
21:11 whytewolf also skip_verify should skip gpg verification
21:11 PeterO It sadly doesn't seem to be.
21:11 cornfeedhobo PeterO: yeah, that is what i do. same with ssh hosts; import the host key first before doing git clones, rsync+ssh, etc.
21:12 cornfeedhobo so much less time pulling hair that way
21:13 PeterO Yeah this is what I'm seeing: https://gist.github.com/polds/b1387620c37bfb4106a7
21:13 Hazelesque joined #salt
21:13 PeterO I'll give the import a try
21:14 whytewolf oh yeah the repo with the double keys.
21:14 PeterO Yeah
21:14 s_kunk joined #salt
21:14 s_kunk joined #salt
21:15 bonzibuddy joined #salt
21:16 akhter joined #salt
21:18 whytewolf yeah, grab the keys and import before hand [both of them] will make that process a lot smoother.
21:18 PeterO will give that a try, is there an rpm import state or just cmd.run it?
21:19 PeterO not seeing anything in the pkgrepo docs
21:19 whytewolf that double gpg thing they have going on. not very normal as far as repos go.
21:19 PeterO well as you mentioned... badly designed repo
21:19 whytewolf think you have to cmd.run it. I don't really use redhat based distros anymore
21:20 whytewolf i would see if there is a way to check if it already exists as well so you can put that into an onlyif or unless
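Putting the pieces of this thread together — `rpm --import` first, guarded by an `unless` so it only runs once, then the install — looks roughly like this. Key URL, grep pattern, and package name are all illustrative:

```yaml
# sketch of the chain discussed above: import the repo's GPG key
# before pkg.installed ever invokes yum
import-repo-key:
  cmd.run:
    - name: rpm --import https://example.com/RPM-GPG-KEY-example
    # gpg keys show up as gpg-pubkey packages once imported
    - unless: rpm -q gpg-pubkey --qf '%{SUMMARY}\n' | grep -qi example

install-package:
  pkg.installed:
    - name: examplepkg
    - require:
      - cmd: import-repo-key
```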
21:21 PeterO I'll give that a try.
21:22 Hell_Fire_ joined #salt
21:24 quasiben joined #salt
21:25 ageorgop so, trying to rebuild your salt source package and noticed it's busted.  dpkg-source: error: file ./salt_2015.8.3+ds-1.debian.tar.gz has size 18396 instead of expected 18422
21:25 ageorgop E: pbuilder: Failed extracting the source
21:26 zmalone From repo.saltstack.com?
21:26 ageorgop http://repo.saltstack.com/apt/ubuntu/ubuntu14/2015.8/
21:27 Heartsbane So I have salt-cloud working with VMWare but it keeps thick provisioning my VM's.
21:27 Heartsbane Anyone know where the settings are for that? Or should I just call cedwards?
21:27 ageorgop i can work around it but it's probably something that should be addressed.  I knew something was up when apt-get source didn't work
21:28 PeterO woot! It worked
21:28 PeterO thanks whytewolf and cornfeedhobo
21:28 cornfeedhobo np
21:28 whytewolf PeterO: np
21:29 zmalone ageorgop: https://github.com/saltstack/salt/issues add it to the list?
21:29 ageorgop sure
21:31 Netwizard joined #salt
21:32 ageorgop done, https://github.com/saltstack/salt/issues/29608
21:32 saltstackbot [#29608]title: ubuntu 14.04 salt source package hosted on http://repo.saltstack.com is broken | When attempting to rebuild the salt source package located at http://repo.saltstack.com/apt/ubuntu/ubuntu14/2015.8/  I am getting the following error...
21:35 whytewolf Heartsbane: does not look like there is anything in the code that addresses provisioning type of the disks. guess it is time to pick up that phone
21:36 hightekvagabond joined #salt
21:36 Heartsbane whytewolf: just got off the phone... booo
21:36 Heartsbane basepi: jfindlay: can you ask?
21:37 Heartsbane whytewolf: thick provisioning is going to kill me, especially on dedup
21:38 Heartsbane I just spun up 2.1TB of VM's
21:38 whytewolf Heartsbane: yes it will. but I didn't write the code. I just went and looked at it.
21:39 whytewolf maybe i should put out the disclaimer. I do not work for salt. I am not paid to be here. I do it for luls
21:41 spuder_ joined #salt
21:42 StolenToast yum is a little tricky when it comes to specifying package names for states
21:42 brianfeister joined #salt
21:42 fyb3r ... so this lack of a hard timeout is apparently whats killing me
21:43 drew_ joined #salt
21:44 drew_ I am learning salt on  my own because my company requires me to
21:44 whytewolf fyb3r: https://github.com/saltstack/salt-api/issues/156
21:44 saltstackbot [#156]title: timeout doesn't work | salt-api version: 0.8.4.1...
21:44 fyb3r inside the python module i wrote as a runner, when executing something long running like grains.items, if it doesnt return and "times out" or w/e, the socket and thread stay alive
21:44 drew_ I just got something like
21:44 fyb3r i dont use the api though
21:44 drew_ salt-minion3:     Data failed to compile: ----------     The function "state.highstate" is running as PID 16464 and was started at 2015, Dec 10 12:22:12.478010 with jid 20151210122212478010
21:45 whytewolf fyb3r: you use the python-api
21:45 fyb3r im importing the libraries directly
21:45 whytewolf hence the python-api :P
21:45 fyb3r hey i got tired head
21:45 fyb3r :P
21:45 fyb3r reading through tha tpost :) ty
21:46 flowstat_ I'm trying to tar up a virtualenv and salt-minion setup so they can be pulled down (a la Ryan Lane's setup). I'm not sure how I would get a portable version of salt-minion
21:46 flowstat_ any ideas?
21:46 whytewolf drew_: sounds like it is already running. what do you get when you run salt-run jobs.lookup_jid 20151210122212478010
21:46 murrdoc flowstat_:  use salt bootstrap
21:47 whytewolf flowstat_: iirc they use salt baked into the image. but yeah salt bootstrap in your userdata script should do the job also
21:48 drew_ ID: packages_vim     Function: pkg.installed         Name: vim-enhanced       Result: True      Comment: The following packages were installed/updated: vim-enhanced      Started: 12:22:13.946605     Duration: 23617.869 ms      Changes:                  ----------               gpm-libs:                   ----------                   new:                       1.20.7-5.el7                   old:               vim-common:
21:48 fyb3r the issue isnt that though whyte, cause im passing an int as well :P
21:48 akhter joined #salt
21:48 whytewolf drew_: well that output looks like it is running
21:48 drew_ Summary ------------ Succeeded: 1 (changed=1) Failed:    0 ------------ Total states run:     1
21:49 whytewolf and it pkg.installed vim-enhanced
21:49 fyb3r it actually does time out and returns nothing. but the socket stays live
21:49 drew_ i just ran "salt '*' state.highstate
21:49 drew_ I have 1 master and 3 minions
21:49 drew_ It went through the master
21:50 drew_ but failed on 3 minions
21:50 whytewolf okay. that is one minions return.
21:50 whytewolf sounds like the other minions are still working
21:50 drew_ oh
21:50 whytewolf salt '*' test.ping
21:50 drew_ is it time out?
21:51 drew_ for test.ping
21:51 drew_ all TRUE
21:51 whytewolf salt '*' saltutil.running
21:52 drew_ I just tried to re-run state.highstate
21:52 drew_ and Its all good now
21:52 JDiPierro joined #salt
21:52 whytewolf okay. then yeah it was still running on the other minions
21:52 drew_ and then it timed out
21:53 drew_ ?
21:53 whytewolf basicly salt is an async process. if it returns nothing. doesn't mean the process timed out
21:53 whytewolf it means that the master stopped waiting and let the minions do their work
21:53 sdm24 the master timed out waiting for a response. It got bored and went home
21:53 whytewolf which is why you had to switch to the jobs runner to see what it was doing
21:54 whytewolf pretty much what sdm just said.
21:54 drew_ I got confused since it says "Data failed to compile"
21:54 fyb3r sounds like what might be happening to me
21:54 fyb3r ugh
21:54 whytewolf fyb3r: your problem is a lot more complex :P
21:54 fyb3r why does salt hate meeeeee
21:54 drew_ Thanks Guys!!
21:55 whytewolf fyb3r: cause you don't get enough of it?
21:55 fyb3r guess so hahaha
21:55 whytewolf fyb3r: it wants you to stay in this channel 24/7 and keep us company
21:55 drew_ I just started learning it a week ago
21:55 drew_ and it is interesting
21:55 whytewolf drew_: welcome to salt then
21:56 drew_ Thanks
21:56 drew_ I will need you guys more often for a while
21:56 whytewolf thats why we are here. just remember we are not here to design your infrastructure for you. just help with salt oddities
21:57 drew_ sure!!!
21:58 whytewolf unless money is involoved.
21:58 quasiben joined #salt
21:58 * whytewolf could always use mo'money
21:59 druonysus joined #salt
21:59 druonysus joined #salt
22:00 ALLmight_ joined #salt
22:00 fyb3r there any way to catch exceptions for LocalClient to see if the command times out?
22:02 akhter joined #salt
22:03 jfindlay Heartsbane: whatsoever asking do you require?
22:03 whytewolf fyb3r: internally i don't think so. but you could try wrapping it in a try except block. although you might switch to cmd_async and use get_cli_returns for return data
22:03 fyb3r actually found the exceptions class
22:04 fyb3r yeah ive been debating that but unsure if its going to do any good since it seems to be an issue with the actual command
22:05 brianfeister joined #salt
22:07 subsignal joined #salt
22:09 falenn joined #salt
22:09 * whytewolf really wishes his hardware would get in. i REALLY want to get these nodes up and rebuild my cluster
22:11 sdm24 Is there a way on a master to make a minion run a --local salt-call? Besides salt 'minion' cmd.run 'salt-call --local state.sls localstate'
22:11 fyb3r hm ok so heres the issue. the actual variable is getting nothing. but the threads remain open until there is a return
22:11 fyb3r even though the runner continues
22:11 whytewolf sdm, just cmd.run
22:11 sdm24 oh well, thanks
22:11 sfxandy joined #salt
22:11 fyb3r so i dont know how to get the threads or listeners or w/e they are to die
22:11 sdm24 still easier than remoting into the machine
22:12 fyb3r which is what i thought the timeout would do
22:12 whytewolf fyb3r: put an issue in for a feature request?
22:12 fyb3r might just edit the source myself lol
22:13 whytewolf lol
22:13 murrdoc joined #salt
22:13 dthorman joined #salt
22:13 sk_0 joined #salt
22:14 tawm04 joined #salt
22:14 flebel joined #salt
22:14 jacksontj joined #salt
22:14 ipmb joined #salt
22:14 Edgan joined #salt
22:14 NightMonkey joined #salt
22:14 timoguin_ joined #salt
22:15 jaybocc2 joined #salt
22:16 ageorgop joined #salt
22:17 timoguin joined #salt
22:23 buMPnet joined #salt
22:24 burp_ joined #salt
22:26 Heartsbane jfindlay: I would assume Joseph, since he wrote the last VMWare plugin. But I will throught the github figure out who wrote the new vrealize stuff and ask on the mailing list.
22:26 Heartsbane jfindlay: I figure that will be easier
22:27 Heartsbane I will find a way to blame Seth though.
22:28 fyb3r join #python
22:28 fyb3r awe
22:29 * whytewolf hands fyb3r a /
22:30 shaggy_surfer joined #salt
22:30 fyb3r ty ty
22:31 jfindlay Heartsbane: Nitin wrote a whole new VMstuff cloud driver, you may want to ping him
22:31 jfindlay I'm not entirely clear on what the extent of that is or how all that ecosystem fits together
22:32 fyb3r wow. think theres an error
22:32 baweaver joined #salt
22:32 jfindlay Heartsbane: you're welcome to ping @nmadhok
22:32 fyb3r in client/__init__.py lines 1470 to 1477 it states theres an error that causes 2 threads to be leaked
22:33 whytewolf fyb3r: that does sound like what you are having
22:33 fyb3r it is
22:33 fyb3r the chanel is being left open
22:34 fyb3r channel*
22:34 fyb3r after the function returns
22:34 ALLmightySPIFF joined #salt
22:36 ALLmightySPIFF joined #salt
22:37 jfindlay fyb3r: you are welcome to submit an issue on github and ping cachedout :-)
22:38 fyb3r i gotta get all my ducks in a row first lol. gathering proof currently
22:46 burp_ joined #salt
22:50 bhosmer_ joined #salt
22:51 conan_the_destro joined #salt
22:54 abednarik joined #salt
23:01 JDiPierro joined #salt
23:01 flowstat_ joined #salt
23:04 burp_ joined #salt
23:05 Heartsbane jfindlay: thank you sir
23:07 JDiPierro joined #salt
23:10 bhosmer_ joined #salt
23:12 lompik joined #salt
23:16 JDiPierro joined #salt
23:16 murrdoc joined #salt
23:17 baweaver joined #salt
23:21 shaggy_surfer anyone know what this error means:  2015-12-10 15:19:49,909 [salt.template    ][ERROR   ][1302] Template was specified incorrectly: False, I upgraded to 2015.8.3 (beryllium) and started seeing this error
23:21 hemebond shaggy_surfer: You might need the - template: jinja parameter. Not sure.
23:21 shaggy_surfer I have it specified
23:22 shaggy_surfer I wish it told me which template or file it was referring to
23:22 shaggy_surfer there is nothing around the line errors to indicate what file it's referencing
23:22 hemebond Maybe that's the problem then, you have a syntax error that prevents Salt from knowing the template name.
23:22 whytewolf shaggy_surfer: somewhere you have - template: False
23:22 hemebond DO you have the state pasted somewhere?
23:23 hemebond Oh, there you.
23:23 hemebond * go.
23:25 shaggy_surfer hmm… I recursively searched my salt repo which serves /srv/salt, and no instance found. could it be in a config file that got upgraded to the default?
23:25 gekitsuu iggy I'm trying out the example you sent me, https://github.com/saltstack/salt/blob/develop/salt/states/file.py#L3623 for accessing a file in the salt:// file system but I am getting "NameError: global name '__salt__' is not defined" . I tried using salt and salt-call execution since I found a bug on githup issues mentioning that this can happen when you are using salt-call
23:25 gekitsuu Is there something I need to do to get __salt__ into the modules namespace?
23:25 hemebond shaggy_surfer: When you upgraded, did you keep your master config?
23:25 iggy gekitsuu: paste code
23:25 sdm24 sahggy_surfer: I sometimes get that error too, no idea why, I grepped -r through all of my salt directories and environments, all templates are "- template: jinja"
23:25 shaggy_surfer no
23:26 shaggy_surfer I let it over-write , the I changed to do a master.d/custom.conf
23:26 shaggy_surfer with only the changes I wanted
23:26 whytewolf shaggy_surfer: is it possible you have a module that might be incorrectly calling template?
23:26 gekitsuu k
23:27 shaggy_surfer yeah I found lots of - template: jinja in my recursive search
23:27 gekitsuu iggy : http://pastebin.com/vWUZw2NK
23:27 shaggy_surfer I'll do some more digging , thanks for the feedback
23:28 sdm24 hmm, for me they all occured before nov 30th. Can't remember what I did to fix it
23:28 gekitsuu example of the error in one sec
23:28 iggy gekitsuu: did you add __salt__ = {} to try to fix things (it won't)
23:28 gekitsuu I did, it didn't work without it either
23:28 gekitsuu http://pastebin.com/uhAjPay3
23:29 gekitsuu thought maybe it was an import problem and what ever executes the modules would replace it with the right dict
23:30 iggy it does
23:30 whytewolf doesn't look like any of the salt libs are being included.
23:30 iggy it stuffs those dunder dicts into the module context after it runs the init function
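A toy illustration of what iggy describes — not Salt's actual loader code, just a simulation of its behavior: the loader injects `__salt__` into a custom module's globals *after* importing it, so the module should reference the name at call time and never define `__salt__ = {}` itself (that shadows the injected dict, which is why gekitsuu's workaround couldn't help):

```python
import types

# a "custom execution module" as a user would write it: no
# __salt__ = {} line -- the name resolves when the function runs
MODULE_SRC = """
def schema_path(database):
    # __salt__ is injected by the loader before this is ever called
    return __salt__['cp.cache_file']('salt://%s/schema.sql' % database)
"""

mod = types.ModuleType('mymod')
exec(MODULE_SRC, mod.__dict__)

# what the loader does behind the scenes: stuff the dunder dicts into
# the module's namespace after loading it (cp.cache_file stubbed here)
mod.__dict__['__salt__'] = {
    'cp.cache_file':
        lambda p: '/var/cache/salt/minion/files/' + p.split('://', 1)[1],
}

print(mod.schema_path('appdb'))
# -> /var/cache/salt/minion/files/appdb/schema.sql
```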
23:31 bonzibuddy hey folks
23:32 bonzibuddy im trying to use the node-formula from saltstack-formulas repo
23:32 bonzibuddy im defining a pillar (I think? I'm very rusty) via directory + init.sls
23:32 bonzibuddy ie,
23:32 bonzibuddy api/init.sls  api/{files required for 'api' server}
23:33 bonzibuddy and in init.sls i have, effectively, what they cite here: https://github.com/saltstack-formulas/node-formula/blob/master/pillar.example
23:33 bonzibuddy but i get errors citing "State 'node' in SLS 'api' is not formed as a list
23:33 colegatron joined #salt
23:34 whytewolf bonzibuddy: that sounds like you are putting your pillars in the state tree
23:34 whytewolf [they should be in the pillar tree]
23:34 gekitsuu whytewolf: do I need to import a salt module for this to be runable by salt? My other modules seem to run ok but they don't try to call anything from __salt__
23:34 bonzibuddy whytewolf: could be :)  I have node-formula in /srv/formulas and updated the config in /etc/salt/master as required
23:34 whytewolf gekitsuu: to get __salt__ I belive you do.
23:35 whytewolf bonzibuddy: what is your pillar_roots in your /etc/salt/master [or /etc/salt/master.d/*] look like?
23:35 sdm24 bonzibuddy: formulas will have a pillar.example (or something like that). That file should go in /srv/pillar/, not /srv/state or whatever
23:35 iggy you do not need to import any salt stuff to get the dunder dicts
23:35 gekitsuu whytewolf: tried adding import salt but still got the error. Should I try some other module?
23:36 gekitsuu iggy: K
23:36 gekitsuu iggy: if it matters I'm running salt 2015.8.3 (Beryllium)
23:37 bonzibuddy whytewolf: http://pastebin.com/sVs8GA1s
23:37 bonzibuddy thats the relevant portion of /etc/salt/master
23:38 bonzibuddy no pillar_roots configured... perhaps this is my problem? I was following some howto for the config seen above
23:38 whytewolf bonzibuddy: yeah thats your problem. pillar is seperate from states.
23:38 bonzibuddy okadoke, I will rinse and repeat
23:39 sdm24 bonzibuddy: formulas will have a pillar.example (or something like that). That file should go in /srv/pillar/, not /srv/state or whatever
23:39 bonzibuddy sdm24: are 'pillars' and 'formulas' the same thing?
23:40 sdm24 so /srv/pillar will have yourpillar.sls and a top.sls. Top.sls will say "base: \n  '*':\n  yourpillar"
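The escaped one-liner sdm24 gives, spelled out as actual files — `node` here assumes the node-formula's pillar.example was saved as /srv/pillar/node.sls (the filename is illustrative):

```yaml
# /srv/pillar/top.sls -- assigns the pillar file to all minions
base:
  '*':
    - node
```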
23:40 JDiPierro joined #salt
23:40 sdm24 nope
23:40 bonzibuddy ah
23:40 bfoxwell joined #salt
23:40 sdm24 pillars are pieces of data that you define.
23:40 sdm24 like the checksum of the node
23:40 bonzibuddy sdm24: I just followed https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html
23:40 sdm24 bonzibuddy: https://docs.saltstack.com/en/latest/topics/tutorials/pillar.html
23:40 bonzibuddy the step cited as "Clone or download the repository into a directory:" - except, with node-formula instead of apache-formula
23:41 sdm24 yeah the formulas first confused me because they need the pillar data to be idempotent, but including the pillar in the state files can be confusing
23:42 sdm24 here is another simple pillar example https://docs.saltstack.com/en/latest/topics/pillar/index.html#pillar
23:43 bonzibuddy hmm
23:43 bonzibuddy sdm24: thanks
23:43 sdm24 just follow the first section of that second link. You don't need to worry about all the complicated stuff
23:43 hightekvagabond joined #salt
23:44 sdm24 and when it gets to "srv/pillar/packages.sls", copy that pillar.example in the file instead
23:45 bonzibuddy i'm wondering if pillar is something i need at all here
23:45 bonzibuddy like, i just want to describe a bunch of stuff to install on my api servers - can i just do that in a state?
23:46 sdm24 yeah, otherwise you would need to rewrite the formula to not read the pillar data, which can be more of a hassle
23:47 sdm24 https://gist.github.com/sdm24/374ec7cdce209da026d6
23:47 sdm24 thats what you want I think. replace the spaces in the filenames with /
23:48 sdm24 as you work more with salt, you realize how powerful and helpful pillars can be
23:48 bonzibuddy ok... so if i understand correctly, and I am inferring entirely from your example here
23:49 bonzibuddy the formula i installed - i instantiate that to make a pillar, which i can then use in my state files?
23:49 whytewolf bonzibuddy: the formula is state files. the pillar is config data for those state files.
23:50 sdm24 no. The formula is a collection of state files, and one pillar.example file. That pillar.example files says which data you need in a pillar, and how it is organized. By adding the pillar file, people can easily change one or two lines of pillar data, instead of changing each instance in the formula
23:50 bonzibuddy ahhh oooookkk i think im getting it
23:50 bonzibuddy man this 'pillar' name is not really a good analogy for what this is doing, v a pillar of salt
23:50 baweaver joined #salt
23:51 flowstat_ joined #salt
23:51 sdm24 its a lot to handle at once and very dry
23:51 whytewolf bonzibuddy: once you get the hang of it a lot of things will make sense. but learning whats where is the first steps.
23:53 bonzibuddy thx for your patience whytewolf and sdm24
23:53 whytewolf grains: minion side config data. pillar: master side secure config data. states: descriptions of what state an item should be in.
23:53 sdm24 no problem, glad to help. I was in your spot before
23:53 iggy gekitsuu: I don't really have any ideas atm, but what you have looks correct (aside from the the __salt__ = {} bit, and a missing __virtual__ function)
23:54 whytewolf bonzibuddy: np, we have all been there at some point
23:54 gekitsuu Also bonzibuddy I think sometimes the formulas are overkill. For example if I just wanted to setup one website via apache I'd just make a simple state for that. If I needed to setup 100 then a formula makes more sense
23:54 gekitsuu OK Iggy, I'll keep digging in. Thanks for taking a look
23:54 gekitsuu My hope is to share this out once it's working
23:55 bonzibuddy gektisuu: yeah, but it seems to be the path of least resistance to get the nodejs functionality i need
23:55 bonzibuddy ubuntu repos dont cut the mustard
23:55 gekitsuu might be since I don't know your problem, just wanted to throw it out there :)
