IRC log for #salt, 2015-10-22


All times shown according to UTC.

Time Nick Message
00:00 MindDrive I should state that it's 1000+ over three environments (each with its own Salt masters), so none of them have 1000 minions... yet...
00:00 whytewolf lol. ok
00:00 MindDrive (Looks like 423, 376 and 779, respectively)
00:00 kinetic joined #salt
00:01 MindDrive Hmm, still taking an awfully long time, I wonder if some of the minions are hanging...
00:02 fsteinel_ joined #salt
00:03 breakingmatter joined #salt
00:03 iggy maybe in the future, only use the list from manage.up?
00:03 ahammond MindDrive using syndics?
00:05 MindDrive ahammond: Currently, no.  It's a thought for some possible things, however.
00:05 ahammond MindDrive also, how big are your master boxes?
00:05 MindDrive Iggy: true, I would expect a timeout eventually, though... then again, maybe not with this request.
00:06 MindDrive ahammond: Beefy at this point; talked with a SaltStack engineer via email a while back to get rough figures for what would suffice for the minion numbers and for the most part they've been working well.
00:07 ahammond MindDrive cool. I'd love to see that spec. we have about 300 minions right now but are looking to add a few hundred more. Don't want the master to melt.
00:08 iggy 64 cores + 512G ram should suffice
00:08 iggy python... the new Java
00:08 MindDrive ahammond: The SaltStack engineer told me at my numbers, a pair of masters should work just fine. :)
00:11 whytewolf iggy: if python were truly the new java, even having that hardware wouldn't be enough; you would actually have to tell python to use it, and wonder why it still panics randomly with memory errors
00:18 bfoxwell joined #salt
00:18 jchen joined #salt
00:18 moogyver the spec we got from a salt eng. was that a 2 vCPU, 4 GB box and some high perf storage ( ssd ) should be capable of handling ~8k minions
00:19 * iggy would be shocked
00:19 jchen left #salt
00:19 iggy although I guess it depends what you are doing
00:19 moogyver yeah
00:19 moogyver maybe if you're just running test.ping every 3 hours :D
00:19 geekatcmu You're going to be sad when all 8k minions reconnect because you've shuffled keys.
00:19 iggy linkedin (for instance) basically uses salt for remote execution, which doesn't take a lot of resources
00:19 geekatcmu Do NOT try to use that as a file repository, because that's not going to work terribly well.
00:19 moogyver that was just an example they gave us, in any case.  we have beefy physicals w/ SSD's
00:20 moogyver geekatcmu: we're not..
00:20 geekatcmu good plan
00:20 moogyver iggy: that's all we use it for as well
00:21 moogyver we do all our normal cfg mgmt thru chef.  salt is just our remote execution engine and orchestrator
00:21 moogyver and the plan ( eventually ) is for around ~30k minions
00:21 drawsmcgraw Found this link lying around, thought some here would be interested to know: https://www.reddit.com/r/devops/comments/3po2y7/kickstarter_free_saltstack_devops_course/
00:22 saltstackbot [REDDIT] Kickstarter - Free SaltStack DevOps Course (DevOpsLibrary) (self.devops) | 7 points (100.0%) | 1 comments | Posted by kenerwin88 | Created at 2015-10-21 - 18:48:42
00:22 whytewolf luckily i don't do much so i can get away with absolute low end. physical box, 1 core cpu, 2 gigs of mem, 1tb hdd.
00:22 whytewolf but that is also for a home lab so meh
00:22 MindDrive Right now our use of Salt is primarily for remote execution as well.
00:22 MindDrive Hmm, cmd_iter() was working well, except the timeout (120 seconds) doesn't seem to be taking effect...
00:24 whytewolf of course that home lab is a small openstack setup with 4 boxes [1 controller, 3 compute nodes] that is set up from salt, then has instances launched in it from the same salt
00:26 MindDrive Argh, this is annoying...
00:32 moogyver MindDrive: did you try doing it with cmd_async and then using the runner client to lookup the job id?
00:33 MindDrive moogyver: I'm not sure how that would help.
00:33 MindDrive If the timeout isn't being honored, I don't think the async will help any.
00:33 moogyver ah, I was talking more about how long it was taking, rather than the timeout issue.
00:35 MindDrive It's hanging on non-responsive hosts.
00:36 MindDrive (Which in our dev environment is expected - the developer's own systems are brought down a lot, so they're unreachable.)
00:37 irctc504 joined #salt
00:37 irctc504 does anyone have any recommendations on how to use mutual TLS with salt-api?
00:38 irctc504 Tornado supports it; any reason salt-api doesn't support it??
00:38 iggy irctc504: open an issue with as much info as you can find
00:39 MindDrive Yep, timeout in all the 'cmd*' methods is being ignored.
00:39 MindDrive *sigh*
00:39 iggy the saltnado author is still pretty active (afaik)
00:39 whytewolf MindDrive: you could generate your list of minions to check with manage.up [a runner] and toss that into cmd_batch as a list
00:39 MindDrive whytewolf: Just tried that, even that won't return.
00:39 MindDrive 'ret = local.cmd('*', 'manage.up', timeout=20)' hangs indefinitely.
00:39 iggy 20:03 < iggy> maybe in the future, only use the list from manage.up?
00:39 whytewolf it is a runner, not a module.
00:39 iggy it's a runner, not a client
00:40 moogyver I don't think timeout works like I think it does..
00:40 moogyver >>> l.cmd("*", "test.sleep", arg=[10], timeout=3)
00:40 moogyver that, for instance, doesn't timeout after 3 seconds.
00:40 whytewolf MindDrive: https://docs.saltstack.com/en/develop/ref/clients/index.html#salt.runner.RunnerClient for runners
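[A minimal sketch of the distinction whytewolf and iggy are making: manage.up is a runner, so it goes through RunnerClient on the master rather than LocalClient.cmd(). This assumes the default master config path.]

```python
# Call the manage.up runner from Python, per the RunnerClient docs above.
import salt.config
import salt.runner

opts = salt.config.master_config('/etc/salt/master')
runner = salt.runner.RunnerClient(opts)
up_minions = runner.cmd('manage.up', [])  # list of minion ids that responded
```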
00:41 moogyver and the description of timeout has me scratching my head.
00:41 moogyver timeout -- Seconds to wait after the last minion returns but before all minions return.
00:41 nyx_ joined #salt
00:42 moogyver Someone want to put that into 'moogyver is an idiot' terms? :)
00:43 irctc504 iggy: you suggest i open a ticket? or is there a workaround to get mutual ssl working?? something obvious i missed ?
00:43 iggy irctc504: I imagine it just wasn't a feature that was added (saltnado was initially just looking for feature parity and what the author needed)
00:44 MindDrive I don't see a timeout option for the runner commands.
00:44 whytewolf MindDrive: runners are local to the master
00:45 MindDrive whytewolf: Er, that's kind of a non-answer.  I'm assuming "runner.cmd('manage.up')" is what I want, except that's also hanging indefinitely.
00:45 moogyver MindDrive: https://github.com/saltstack/salt/issues/10940
00:45 saltstackbot [#10940]title: salt command does not respect timeout parameter | `salt -vt1 '*' cmd.run 'sleep 50'` does not result in "Minion did not return"....
00:46 moogyver runner.cmd is the same as async I believe.  I think you have to use cmd_sync on the runner to have it respect a timeout
00:47 whytewolf MindDrive: that shouldn't hang. it isn't doing anything to the minions. it is checking the last time minions returned. most likely one of your earlier commands has locked up the master?
00:49 MindDrive whytewolf: It's possible.  Running the command line version to see if it also hangs.  If so, I'll bounce the Salt master.
00:49 moogyver ah, ok, timeout is making more sense now
00:50 moogyver definitely did not do what I thought it did..
00:53 MindDrive *sigh* Bouncing both Salt masters didn't seem to help.
00:53 MindDrive 'salt-run -t 30 manage.up' is still hanging indefinitely.
00:55 MindDrive Restarted again and going to let the masters have time to reconnect to everything, I think.
00:55 moogyver looks like manage.up actually pings all the minions..
00:56 whytewolf it does? huh. okay I might have been wrong about that
00:56 moogyver https://github.com/saltstack/salt/blob/develop/salt/runners/manage.py#L33
00:56 whytewolf [the one time i don't check the code first]
00:56 moogyver uses that func
00:56 moogyver which is sending test.ping
00:56 moogyver whytewolf: for shame! :P
00:57 whytewolf anyway. I'm off. need to get to a Docker meetup in which they are talking about saltstack.
00:58 moogyver MindDrive: you may have some systems that are stuck trying to retrieve their package list, which is a common thing with RPM ( depending on the version of rhel/centos/fedora ).
00:58 moogyver and from the looks of what timeout does, it's not going to help your case
00:59 fsteinel_ joined #salt
01:00 MindDrive moogyver: I would still hope the timeout would take effect, but I guess not after reading through that ticket you mentioned.
01:00 moogyver well the timeout is taking effect, but it's just using that timeout as a poll value to see if the job is still running
01:01 moogyver it's not a 'hard limit'
01:01 MindDrive Yeah, I know.  I need a hard limit in this case.
01:02 moogyver fire it off async, sleep 60, then fire off kill_job?
01:03 jmickle joined #salt
01:03 MindDrive Ugh.  Programmatically that's going to get ugly.
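[A sketch of what moogyver suggests here: fire the job asynchronously, wait out a hard limit, then kill whatever is still running. saltutil.find_job and saltutil.kill_job are real execution functions; the target, command, and 60-second limit are placeholders.]

```python
# Enforce a hard limit with cmd_async plus saltutil.kill_job.
import time
import salt.client

local = salt.client.LocalClient()
jid = local.cmd_async('*', 'cmd.run', ['some_long_command'])
time.sleep(60)  # the hard limit
# find_job returns per-minion job info, or an empty dict where it's done
running = local.cmd('*', 'saltutil.find_job', [jid])
stragglers = [minion for minion, job in running.items() if job]
if stragglers:
    local.cmd(stragglers, 'saltutil.kill_job', [jid], expr_form='list')
```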
01:04 jmickle hi can anyone give me a hand rendering some pillar data in a template
01:04 jmickle i have a pillar with items as a yaml list
01:04 breakingmatter joined #salt
01:05 jmickle and need to render it as a list for json in a template as ["1","2","3"]
01:06 TaiSHi joined #salt
01:06 MindDrive Okay, that was weird.  My last 'salt-run -t 30 manage.up' finally returned... with a traceback. :-/
01:06 MindDrive (Took about 5 minutes to hit the traceback.)
01:06 TaiSHi joined #salt
01:08 jmickle anyone?
01:08 kinetic joined #salt
01:08 TaiSHi joined #salt
01:08 MindDrive Ugh, looks like my cache is corrupt.
01:11 zmalone joined #salt
01:11 twork joined #salt
01:12 TaiSHi joined #salt
01:13 iggy jmickle: {{ pillar_data | json }} (iirc)
01:14 iggy jmickle: yeah, https://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.jinja.html
01:15 catpigger joined #salt
01:17 jmickle ty
01:20 MindDrive IOError: [Errno 2] No such file or directory: '/var/cache/salt/master/jobs/6a/5e902b12a21a99349ed58f41e3a2ad/jid'
01:21 jmickle MindDrive: so doesnt work
01:21 MindDrive This is preventing me from actually running anything on one of the masters, and I have yet to be able to clear the cache.
01:21 jmickle iggy*
01:21 MindDrive (Any attempt ends up throwing that same error.)
01:21 jmickle iggy: i wanted just the attributes in the yaml list
01:21 jmickle unless im referencing them wrong
01:21 kinetic joined #salt
01:22 iggy jmickle: {{ pillar['listthingy'] | json }} ?
01:22 iggy err {{ pillar_data['listthingy'] | json }}
01:22 jmickle {{ pillar['sensu_subscriptions']['subscriptions'] | json }}
01:23 iggy you might want to be a little more clear what you have to work with
01:23 jmickle do i need to pass anything from the init.sls
01:23 iggy pillar should be available everywhere
01:23 iggy if it's not actually pillar you're working with, then yeah, you might have to pass it to the template
01:24 jmickle iggy: http://pastebin.com/RPixfAMC
01:25 iggy so in that case, sensu_subscriptions is a list, so you can't look it up like that
01:25 jmickle yeah thats the problem i was facing
01:25 jmickle whats the easiest way to do this
01:25 iggy does it only have one item in it? (if so, why lay it out like that)
01:25 jmickle i just need to generate it like that
01:25 jmickle i dont care how the pillar
01:25 MindDrive To whomever: suggestions on how to fix the missing cache directory so the master is actually usable again?
01:26 jmickle is structured
01:26 jmickle but my output needs to be ["1", "2"]
01:26 iggy MindDrive: the master's cache directory is safe to just blow away and let it rebuild
01:26 jmickle is there a better way to do the pillar?
01:26 iggy jmickle: {{ pillar['sensu_subscriptions'][0]['subscriptions'] | json }}
01:26 iggy maybe
01:27 iggy man, I'm starting to get to the point of not having used salt in so long that I feel like I have to test stuff like this
01:27 jmickle lol
01:27 jmickle try coming from a heavy chef person
01:27 jmickle and contributor to salt :-P
01:27 jmickle nothing makes any f'n sense
01:28 iggy try what I put
01:28 jmickle ok trying right now
01:28 iggy if not, you might have to restructure the pillar a little
01:28 hasues joined #salt
01:28 jmickle im fine with that
01:28 hasues left #salt
01:29 MindDrive Iggy: thanks, will do that.
01:29 jmickle that worked iggy
01:29 jmickle thank you!
01:30 iggy *phew*
01:30 jmickle hehe
01:30 iggy that was a close one
01:30 jmickle haha
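[The line that worked, reduced to a runnable Python/Jinja2 sketch. The pillar shape is assumed from the discussion (the pastebin contents aren't in the log): a one-item list whose entry holds a 'subscriptions' list. Salt's jinja renderer ships a json filter; plain json.dumps stands in for it here.]

```python
import json
import jinja2

# Assumed pillar shape, per iggy's reading of the pastebin.
pillar = {'sensu_subscriptions': [{'subscriptions': ['1', '2']}]}

env = jinja2.Environment()
env.filters['json'] = json.dumps  # salt's renderer provides this filter
tmpl = env.from_string(
    "{{ pillar['sensu_subscriptions'][0]['subscriptions'] | json }}")
print(tmpl.render(pillar=pillar))  # -> ["1", "2"]
```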
01:30 jmickle pillars are ridiculously hard
01:30 jmickle mostly jinja
01:30 iggy yaml
01:31 zmalone joined #salt
01:31 jmickle i hate both yaml and jinja
01:31 iggy yaml is weird when it gets anywhere near the level of nesting most people need to have something useful
01:31 jmickle most in this world
01:31 jmickle haha
01:31 jmickle yeah yaml is terrible
01:31 iggy try the python renderer?
01:31 jmickle no i just want something quick
01:31 jmickle lol
01:37 jmickle whats the command
01:37 jmickle to show the states
01:37 jmickle a host would run
01:37 jmickle show.topfile?
01:39 jmickle state.show_top
01:45 falenn joined #salt
01:47 iggy there are a number of tools that can help you find that kind of info
01:47 zwi joined #salt
01:47 iggy state.* is one
01:47 iggy state.highstate test=True (is another option)
01:48 MindDrive Do jobs eventually clear themselves from 'list_jobs' after a time (once they've completed)?
01:49 iggy 24 hours by default
01:49 MindDrive Cool, thanks.
01:49 sunkist joined #salt
01:49 MindDrive Oh look, 'manage.up' is working again.
01:59 MindDrive Okay, so two problems with my current attempt: 1) runner.cmd('manage.up') does return a list of up hosts, but it also dumps them to the screen, which is undesirable, not sure how to fix that, and 2) now everything is serial, which means 443 hosts will take a looooooong time to finish. :)  Is there a way to use that list to spawn all the jobs simultaneously?
02:00 MindDrive Aww crap, never mind about (2).
02:00 MindDrive (I didn't realize target could be a list as well as a string...)
02:01 iggy you might just have to steal the manage.up runner code and edit it to not print()
02:01 MindDrive Iggy: Ugh.  Okay.
02:01 iggy I've never looked at the code to know if that's true... fyi
02:03 kinetic joined #salt
02:03 MindDrive *sigh* "No minions matched the target. No command was sent, no jid was assigned."  I guess a list doesn't work for local.cmd()...
02:06 moogyver list should work for local.cmd..
02:06 moogyver MindDrive: you have to specify expr_form='list'
02:07 moogyver l.cmd(a, fun='test.ping', expr_form='list')
02:07 MindDrive Yeah, I figured that out just as you were saying it... *sigh*
02:09 MindDrive FINALLY IT WORKS.  OH THANK HEAVENS.
02:09 MindDrive Now I can go home. :)
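[Pieced together, the approach that finally worked looks roughly like this; the cmd.run payload is a placeholder.]

```python
# Get the up list from the runner, then target it as a list.
import salt.client
import salt.config
import salt.runner

opts = salt.config.master_config('/etc/salt/master')
up = salt.runner.RunnerClient(opts).cmd('manage.up', [])

local = salt.client.LocalClient()
ret = local.cmd(up, 'cmd.run', ['uptime'], expr_form='list')
```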
02:09 JohnTunison joined #salt
02:12 moogyver hrm, not sure why manage.status is printing..
02:16 kuromagi joined #salt
02:19 dave_den joined #salt
02:25 MindDrive Iggy: https://github.com/saltstack/salt/issues/21392 - looks like I'm not the only one annoyed by the output thing with 'manage.up' :)  However, I do like the somewhat janky workaround!
02:25 saltstackbot [#21392]title: no way to control the output generated when a custom salt runner executes a function | I have a simple cusom salt runner. Here's the code. You can see that i am assigning the result of the runner.cmd function to a variable. Consequently, my expectation is that if i execute the script nothing will be emitted. ...
02:29 moogyver MindDrive: you can set 'quiet' in the Runner.
02:29 MindDrive moogyver: Set it where?
02:29 moogyver https://gist.github.com/sjmh/b321ee2e77e4bd295d8d
02:31 MindDrive Huh, I wonder why that wasn't mentioned in the ticket...
02:31 MindDrive (Unless their particular case ignores that setting (yes, it did work for me, thanks!))
02:32 moogyver not sure - not in the doc either
02:32 moogyver was just reading thru the code
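[A sketch of the workaround the gist apparently shows (assumed contents; per the discussion, quiet is undocumented and was found by reading the code).]

```python
import salt.config
import salt.runner

opts = salt.config.master_config('/etc/salt/master')
opts['quiet'] = True  # suppress the runner's display output (assumed flag)
runner = salt.runner.RunnerClient(opts)
up = runner.cmd('manage.up', [])  # returned, not printed
```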
02:33 rim-k joined #salt
02:35 Furao joined #salt
02:36 ajw0100 joined #salt
02:37 cyborg-one joined #salt
02:38 kinetic joined #salt
02:40 kinetic joined #salt
02:47 falenn joined #salt
02:48 favadi joined #salt
02:52 slimeate joined #salt
02:52 Furao joined #salt
02:59 larsfronius joined #salt
03:02 Rockj joined #salt
03:06 breakingmatter joined #salt
03:10 kermit joined #salt
03:13 kinetic joined #salt
03:21 falenn joined #salt
03:21 clintberry joined #salt
03:23 zwi joined #salt
03:27 Furao_ joined #salt
03:31 Furao joined #salt
03:33 kermit joined #salt
03:33 ajw0100 joined #salt
03:35 TyrfingMjolnir joined #salt
03:36 JohnTunison joined #salt
03:41 nidr0x joined #salt
03:43 kinetic joined #salt
03:43 moogyver joined #salt
03:46 Furao_ joined #salt
03:50 evle joined #salt
03:52 kinetic joined #salt
03:56 nidr0x joined #salt
03:57 TyrfingMjolnir joined #salt
04:01 auzty joined #salt
04:02 zmalone joined #salt
04:22 aparsons joined #salt
04:26 debian112 joined #salt
04:27 kinetic joined #salt
04:34 kinetic joined #salt
04:35 favadi joined #salt
04:41 Furao joined #salt
04:42 malinoff joined #salt
04:46 egil joined #salt
04:50 Furao joined #salt
04:53 favadi joined #salt
04:56 grumm_servire joined #salt
04:58 PeterO joined #salt
04:58 hrumph_ joined #salt
04:58 hrumph_ hello saltians
04:58 hrumph_ posted two new issues a few minutes ago
04:59 hrumph_ https://github.com/saltstack/salt/issues/28197
04:59 saltstackbot [#28197]title: Windows installer not quoting id's with leading 0's | Hi, I'm using the windows installer in silent mode....
04:59 larsfronius joined #salt
05:02 hrumph_ and
05:02 hrumph_ https://github.com/saltstack/salt/issues/28196
05:02 saltstackbot [#28196]title: Salt runner not treating octal values as octal. | Ok,...
05:05 Furao joined #salt
05:06 aparsons joined #salt
05:15 PeterO joined #salt
05:16 zer0def joined #salt
05:21 timoguin_ joined #salt
05:26 jalbretsen joined #salt
05:40 keimlink joined #salt
05:46 katyucha joined #salt
05:52 Furao joined #salt
05:56 Furao joined #salt
05:58 felskrone joined #salt
06:01 felskrone1 joined #salt
06:12 kinetic joined #salt
06:12 impi joined #salt
06:12 kinetic joined #salt
06:17 favadi joined #salt
06:18 Rumbles joined #salt
06:24 TyrfingMjolnir joined #salt
06:25 favadi joined #salt
06:32 Riz joined #salt
06:33 TyrfingMjolnir joined #salt
06:41 sfxandy joined #salt
06:43 jeddi joined #salt
06:54 KermitTheFragger joined #salt
06:56 mr-op5 joined #salt
06:57 falenn joined #salt
07:03 rmnuvg joined #salt
07:07 breakingmatter joined #salt
07:11 flyx left #salt
07:14 georgemarshall joined #salt
07:17 cberndt joined #salt
07:20 ITChap joined #salt
07:22 timoguin joined #salt
07:25 sfxandy joined #salt
07:25 sfxandy morning all
07:28 linjan joined #salt
07:28 kinetic joined #salt
07:29 Grokzen joined #salt
07:33 edulix joined #salt
07:33 eseyman joined #salt
07:33 GreatSnoopy joined #salt
07:39 linjan joined #salt
07:40 pezus hi guys. we wrote a custom grain to get a location to where a server is located. now i want to use that in an sls file and use the return value of that grain to assign to a variable. how do i do so?
07:40 av___ joined #salt
07:45 OliverUK joined #salt
07:48 bhosmer_ joined #salt
07:50 babilen salt['grains.get']('name_your_grain', $DEFAULT_RETURN_VALUE)
07:51 ITChap joined #salt
07:53 kinetic joined #salt
07:55 Rumbles joined #salt
07:56 jhauser joined #salt
07:59 falenn joined #salt
07:59 Furao joined #salt
08:01 larsfronius joined #salt
08:03 chiui joined #salt
08:09 s_kunk joined #salt
08:09 s_kunk joined #salt
08:14 impi joined #salt
08:17 larsfronius joined #salt
08:20 ITChap joined #salt
08:21 chrismckinnel joined #salt
08:23 Furao joined #salt
08:23 MadHatter42 joined #salt
08:23 trph joined #salt
08:23 trph joined #salt
08:27 larsfronius joined #salt
08:30 ericof joined #salt
08:38 bugga joined #salt
08:53 ernetas joined #salt
08:53 ernetas Hey guys.
08:53 ernetas What's the alternative for r10k/Puppetfile in Salt?
08:54 Norrland what is 'r10k'?
08:55 jhauser_ joined #salt
08:55 ernetas It allows an architecture similar to a Gemfile in Ruby. E.g. there's a Puppetfile, in which you define all the Puppet modules (the equivalent of Salt's formulas) with paths to git repositories or so, and then you can automatically fetch all of them at once and combine them into one environment
08:56 fredvd joined #salt
08:56 ernetas Eh, I'm more looking for an alternative to librarian-puppet than r10k actually.
09:00 Norrland mkay
09:00 Furao joined #salt
09:03 Furao joined #salt
09:04 MadHatter42 joined #salt
09:09 kinetic joined #salt
09:09 breakingmatter joined #salt
09:10 jeblair joined #salt
09:12 MadHatter42 joined #salt
09:15 malinoff joined #salt
09:19 roock joined #salt
09:21 armyriad joined #salt
09:30 losh joined #salt
09:31 WildPikachu joined #salt
09:40 tampakrap ernetas: there is none unfortunately
09:42 tampakrap there is a package manager called spm, released quite recently, but there is no puppetfile alternative
09:43 pezus babilen: thanks!
09:43 tampakrap the best thing I could do was to create an sls file to list my formulas and deploy them with salt-call through my CI
09:45 kinetic joined #salt
09:50 babilen pezus: https://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.jinja.html#jinja-in-files details how to call *any* execution function of which salt has many (cf. https://docs.saltstack.com/en/develop/ref/modules/all/index.html )
09:53 amcorreia joined #salt
09:58 ponpanderer joined #salt
09:59 ponpanderer hello
09:59 ponpanderer with the salt python api is there any way to get the job id when not running async?
10:00 ponpanderer as an example "local.cmd('*', 'cmd.run', ['whoami'])" just returns a dict with the minion responses
10:05 malinoff_ joined #salt
10:08 riftman joined #salt
10:10 N-Mi joined #salt
10:10 trapha joined #salt
10:10 trapha joined #salt
10:14 trapha joined #salt
10:14 trapha joined #salt
10:18 fredvd joined #salt
10:19 Furao joined #salt
10:19 kinetic joined #salt
10:20 markm joined #salt
10:20 markm_ joined #salt
10:21 giantlock joined #salt
10:30 stevej joined #salt
10:43 MaZ- joined #salt
10:48 bluenemo joined #salt
10:55 kinetic joined #salt
10:56 slav0nic joined #salt
11:01 evle joined #salt
11:02 ctolsen joined #salt
11:07 trph joined #salt
11:20 murrdoc joined #salt
11:30 kinetic joined #salt
11:34 liwen joined #salt
11:35 giantlock joined #salt
11:49 bhosmer joined #salt
11:50 otter768 joined #salt
11:54 JohnTunison joined #salt
11:55 jcockhren joined #salt
11:57 tzero joined #salt
12:03 traph joined #salt
12:03 traph joined #salt
12:05 kinetic joined #salt
12:11 breakingmatter joined #salt
12:20 pam joined #salt
12:21 xsteadfastx joined #salt
12:22 xsteadfastx i'm trying the salt-minion on windows 10 right now. how is the support for it now? i tried a pkg.refresh_db and an install but i get "unable to locate package". it works for the windows 7 minions
12:22 pam Hello. Is it possible to get salt 2014.1.5 running on Centos 7.1? I need that specific version for the ceph calamari stuff, since it depends on such salt version somehow
12:22 xsteadfastx the minion is version 2015.8.1
12:22 mapu joined #salt
12:24 saffe joined #salt
12:25 tmclaugh[work] joined #salt
12:28 ctolsen joined #salt
12:31 N-Mi joined #salt
12:31 N-Mi joined #salt
12:31 ferbla joined #salt
12:32 murrdoc joined #salt
12:32 otter768 joined #salt
12:34 sfxandy joined #salt
12:34 sfxandy hi everybody
12:35 sfxandy i need a bit of clarification around external Pillars, just to make sure my understanding and interpretation of the documentation is correct.  anyone got any experience with external Pillars?
12:40 kinetic joined #salt
12:41 jalbretsen joined #salt
12:45 babilen Just ask
12:46 babilen sfxandy: Nobody will claim to be an expert only to be put in a position in which the question is too arcane
12:46 anotherZero joined #salt
12:46 zwi joined #salt
12:47 sfxandy am making sure i ask a sensible question first!
12:48 illern joined #salt
12:50 sfxandy this all comes about from the fact we want to steer clear of using nodegroups as they're fiddly to use and require a master restart to pick up any changes.  the thought was then to move everything in an external Pillar source and enable the ext_pillar_first directive so that we can use the external Pillar to look up and make certain decisions about which Pillar structures to target where .... i.e. simulating nodegroups
12:51 sfxandy as i understand it, external Pillar data is mapped only to the node that makes the actual Pillar request?
12:52 toastedpenguin joined #salt
12:52 sfxandy so we can't move everything into the external Pillar and apply that to one or more nodes via compound matching, for example?
12:54 furrowedbrow joined #salt
12:55 toastedpenguin joined #salt
12:58 subsignal joined #salt
12:59 JDiPierro joined #salt
13:03 Guest43368 joined #salt
13:03 bluenemo joined #salt
13:04 roock joined #salt
13:05 bluenemo joined #salt
13:07 bluenemo joined #salt
13:10 murrdoc joined #salt
13:10 rmnuvg joined #salt
13:11 pguinardco1 joined #salt
13:15 cpowell joined #salt
13:15 kinetic joined #salt
13:16 Kraln joined #salt
13:17 codehotter Is there a way to make salt print only the changes, not the unchanged?
13:17 codehotter That's what I thoguht --state-output=changes would do, but it still prints a line for each clean state
13:18 codehotter I get from --state-output=changes the output I would expect from --state-output=mixed, ie terse for unchanged, long for changed
13:19 bfrog joined #salt
13:20 stevej joined #salt
13:21 codehotter Found state_verbose in /etc/salt/master, which I have now set to false. Great! Can I override this on the command line and have a single run be verbose again?
13:22 DanyC joined #salt
13:22 codehotter Found https://github.com/saltstack/salt/pull/26962 awesome
13:22 saltstackbot [#26962]title: Add --state-verbose command line option to salt cmd | This overrides the state_verbose setting that may be set in master config. ...
13:24 DanyC all, is anyone storing pillar data in git and pulling it down using gitfs? if so, has anyone found it reliable? am i correct in saying gitfs pillar was refactored in 2015.8.x?
13:29 Ahlee sfxandy: external pillars are dynamically generated, running from the master
13:29 Ahlee you can -C I@extpillar:foo:bar I@pillar
13:31 JohnTunison joined #salt
13:32 flou joined #salt
13:33 murrdoc joined #salt
13:33 winsalt joined #salt
13:41 DanyC anyone ?
13:43 bfrog joined #salt
13:43 shiriru joined #salt
13:44 sfxandy Ahlee, understand that.  do you use them?
13:44 alvinstarr joined #salt
13:45 DammitJim joined #salt
13:48 Ahlee sfxandy: extensively
13:48 sfxandy ok, and may i ask which provider(s) you use?
13:49 Ahlee We have external pillars that talk to internal APIs, to zookeeper, to aws
13:49 kinetic joined #salt
13:50 jcockhren joined #salt
13:50 sfxandy ok, we're stuck with using MongoDB or redis for our external pillar provider
13:51 Ahlee simple enough since both have very good python clients
13:53 sfxandy can get them working easily enough, but trying to store anything more than a single data structure under each key is tricky.  so for example in our redis proof of concept, we can store a set, a string, or a hash against a key that relates to a specific minion .... but if you wanted a series of data structures stored against that minion i.e. some single key-value pairs, a list, and a hash or set .... it looks like you can't do it
13:54 Ahlee You'd have to add logic to the ext_pillar to return it in the expected data structure (a dict)
13:55 sfxandy hmmmm ok
13:55 Ahlee but, we walk entire trees in zookeeper building the children of nodes. It's relativly heavy, but it is what it is
13:55 JDiPierro joined #salt
13:55 Ahlee at the end of the day you just return {my_new_pillar:whatever}
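[A minimal sketch of Ahlee's point: whatever the backend, an external pillar is just a master-side function that returns a dict. The redis key layout here is illustrative, not a drop-in module.]

```python
# ext_pillar module sketch: merge several redis structures for one minion
# into a single pillar dict (the interface is ext_pillar(minion_id, pillar, ...)).
import redis


def ext_pillar(minion_id, pillar, host='127.0.0.1', port=6379):
    conn = redis.StrictRedis(host=host, port=port)
    data = {
        'tags': list(conn.smembers('minion:%s:tags' % minion_id)),
        'config': conn.hgetall('minion:%s:config' % minion_id),
        'role': conn.get('minion:%s:role' % minion_id),
    }
    return {'my_new_pillar': data}
```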
13:56 breakingmatter joined #salt
13:57 penguin_dan joined #salt
13:58 zmalone joined #salt
13:59 dthom91 joined #salt
14:01 Rumbles joined #salt
14:02 babilen Is there an execution function to get the ssh fingerprint used by a minion?
14:03 babilen (I want to automatically populate known_hosts)
14:05 kaptk2 joined #salt
14:05 zwi joined #salt
14:06 cpattonj joined #salt
14:06 jeffpatton1971 joined #salt
14:06 lasko joined #salt
14:06 cpattonj How is it possible that my test.ping returns minion results that aren't present in my Accepted/Denied/Unaccepted/Rejected keys?
14:06 murrdoc joined #salt
14:07 andrew_v joined #salt
14:07 debian112 joined #salt
14:08 charli joined #salt
14:08 cpattonj I want to apply states to a minion that I can ping but I can't because when I attempt to, it tells me that no minions match the target - I'm assuming since it doesn't exist in my accepted keys.
14:10 zmalone Are you running a salt ping, or the command ping?
14:10 cpattonj salt's test.ping
14:10 cpattonj against the wildcard '*'
14:15 hasues joined #salt
14:18 cpattonj No one? :(
14:18 ferbla Hey everyone, I am seeing this ssh error in my master logs. Any ideas?
14:18 ferbla https://gist.github.com/Ferbla/fde8e5d14ba98e797298
14:18 _JZ_ joined #salt
14:19 JohnTunison joined #salt
14:19 kinetic joined #salt
14:21 bluenemo joined #salt
14:22 _JZ_ joined #salt
14:22 peters-tx Why do nodegroups not work
14:22 peters-tx What is the deal with them?
14:23 peters-tx Anybody have any tips for nodegroups
14:23 peters-tx I forever get the error "Node group <whatever> unavailable in /etc/salt/master"
14:24 dthom91 joined #salt
14:25 sfxandy joined #salt
14:26 hasues left #salt
14:26 kawa2014 joined #salt
14:28 ctolsen joined #salt
14:28 mpanetta joined #salt
14:29 alvinstarr How would I enforce order of rpm installation?
14:33 Deevolution alvinstarr: The order they're listed in your sls file is the order they'll be installed.
14:33 Deevolution alvinstarr: assuming they're each a separate pkg.installed statement.
14:34 Akhter joined #salt
14:34 Akhter joined #salt
14:35 alvinstarr Deevolution:
14:35 alvinstarr Deevolution: Ahhh. I have them listed in a single pkg.installed
14:37 jalbretsen joined #salt
14:38 wych joined #salt
14:38 saffe joined #salt
14:40 laax joined #salt
14:41 laax joined #salt
14:41 Deevolution alvinstarr:  That behavior is going to be more dependent on the actual package management system, then.
14:41 Deevolution i.e. when you use yum to install multiple packages in a single command you don't have  much say in the order in which they're handled.
14:44 alvinstarr Deevolution: are the packages clumped and sent as a single yum command?
14:44 Deevolution The way you have the sls set, yes.
14:45 Deevolution You can easily enforce precedence in it by separating it into multiple pkg.installed commands.  Each of these would be a separate yum command and they would be executed in the order they're listed.
14:45 alvinstarr the problem is dependency resolution
14:45 alvinstarr either they need to be made a single yum install or ordered at least partially.
14:48 fredvd joined #salt
14:48 alvinstarr It looks to be working correctly now. I may have had some other problem that added confusion.
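[The fix Deevolution describes, sketched as a pure-Python (#!py) state; the package names are hypothetical, and the explicit require is what pins the order regardless of how yum batches things.]

```python
#!py
# Two separate pkg.installed states: libfoo is guaranteed to land
# before foo-app because of the require.
def run():
    return {
        'install-libfoo': {
            'pkg.installed': [
                {'name': 'libfoo'},
            ],
        },
        'install-foo-app': {
            'pkg.installed': [
                {'name': 'foo-app'},
                {'require': [{'pkg': 'install-libfoo'}]},
            ],
        },
    }
```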
14:49 Brew joined #salt
14:52 JohnTunison joined #salt
14:53 murrdoc joined #salt
14:56 DanyC_ joined #salt
14:56 quasiben1 joined #salt
14:57 jhauser_ joined #salt
14:58 NachoDuck_ joined #salt
14:58 alvinstarr joined #salt
14:59 copelco_ joined #salt
14:59 esharpmajor_ joined #salt
15:00 Ymage joined #salt
15:00 blu_ joined #salt
15:00 MadHatter42 joined #salt
15:01 marwood_ joined #salt
15:01 sk_0_ joined #salt
15:01 quix joined #salt
15:01 Emantor joined #salt
15:02 JonGretar_ joined #salt
15:03 twodayslate_ joined #salt
15:03 saffe joined #salt
15:04 ujjain- joined #salt
15:04 Vye_ joined #salt
15:04 elkektetet joined #salt
15:04 kidneb_ joined #salt
15:04 horus_plex joined #salt
15:04 llua` joined #salt
15:04 nlb_ joined #salt
15:04 rreboto joined #salt
15:04 collinanderson_ joined #salt
15:04 edulix joined #salt
15:04 mitsuhiko_ joined #salt
15:05 dave_den joined #salt
15:05 rawzone^ joined #salt
15:06 rideh- joined #salt
15:06 gchao- joined #salt
15:09 daemonkeeper So, I've set up salt with eauth (PAM). Somehow I got it working, but every 3rd or so try I get "Failed to authenticate! This is most likely because this user is not permitted to execute commands, but there is a small possibility that a disk error occurred (check disk/inode usage)." and [WARNING ][25954] Authentication failure of type "user" occurred. in the log, BUT the command got executed nonetheless. WTF?
15:10 imanc joined #salt
15:10 daemonkeeper Seems to be the case especially/only for longer lasting commands such as highstate
15:10 sdm24 joined #salt
15:10 teebes joined #salt
15:11 tzero joined #salt
15:12 CaptainMagnus joined #salt
15:12 breakingmatter joined #salt
15:12 techdragon joined #salt
15:12 [vaelen] joined #salt
15:12 dandelo joined #salt
15:13 dfinn joined #salt
15:15 frankS2 joined #salt
15:15 _JZ_ joined #salt
15:17 hardwire joined #salt
15:19 GrueMaster joined #salt
15:19 llua joined #salt
15:23 _JZ__ joined #salt
15:25 RedundancyD joined #salt
15:27 N-Mi_ joined #salt
15:27 alemeno22 joined #salt
15:28 JohnTunison joined #salt
15:31 MK_FG joined #salt
15:31 pezus joined #salt
15:31 seblu joined #salt
15:34 aparsons joined #salt
15:34 clintberry joined #salt
15:37 dthom91 joined #salt
15:42 meye1677 joined #salt
15:44 virusuy joined #salt
15:46 skarn joined #salt
15:49 PI-Lloyd joined #salt
15:49 sjwoodr joined #salt
15:49 ronrib joined #salt
15:50 vieira joined #salt
15:50 bhosmer_ joined #salt
15:52 loque joined #salt
15:52 loque I seems to be having an issue with salt
15:52 sjwoodr I'm having a strange problem - my salt-master is utilizing 100% of the inodes in /var filesystem.  I delete /var/cache/salt/master/jobs (2.5GB of stuff) and 24 hours later 100% of the inodes are used again.
15:52 sjwoodr my salt-master isn't that active, and yet the jobs cache fills up with 634k inodes in < 24h  :(
15:52 dave_den joined #salt
15:53 loque in version salt 2014.1.4 context variables in jinja were being passed down into for loops
15:53 loque the same is not true in version salt 2015.8.0
15:54 loque when using {% set %} jinja in a foor loop I am unable to acces variables from a higher scope
15:54 loque so dynamically generating my salt[publish.publish] command fails
15:55 loque as the variable is not available inside the '{% set %} jinja tag
15:58 flyx joined #salt
15:58 nickermire joined #salt
15:59 sjwoodr perhaps i just need to set keep_jobs to a lower number of hours...
15:59 orion__ joined #salt
16:02 flyx left #salt
16:03 JohnTunison joined #salt
16:03 loque has anyone seen this behaviour
16:04 loque we have a lot of states that use the publish framework to build up a dict that gets passed in to another template
16:09 TyrfingMjolnir joined #salt
16:10 stomith so I'm looking at the instructions for a specific thing - for instance salt.states.pkg. It lists the state, of course, but how does that translate into a cli call?
16:11 stomith salt '*' pkg.installed ?
16:11 whytewolf stomith: salt.modules is for cli. salt.states is state calls.
16:11 JohnTunison joined #salt
16:11 stomith oh.
16:11 stomith well, duh.
16:12 stomith that's good to know :P
16:12 whytewolf also salt '*' sys.doc pkg will give you the cli options
16:12 stomith awesome.
16:12 dijit Hi, I have an odd question.
16:12 dijit I need to automatically set up replication.
16:12 dijit since the server I'm replicating from is always one number lower in the hostname than the one I'm replicating to... (database01 is master, database02 is slave, database03 is a separate master, database04 is a slave of database03, etc.)
16:12 dijit I need to figure out how to variablise that.. anyone have any idea how I can do it cleanly in salt?
16:13 dijit is it better to just have separate pillars for each one and statically define each master?
16:14 whytewolf dijit: personally i use a roles grain for that kind of setting. [i still pass passwords through pillar, targeting on minionid.]
16:14 jeffspeff having an issue with salt and chocolatey. for some reason salt seems to be adding 'help' to the end of the chocolatey.exe command when determining the version, and this is breaking chocolatey completely because 'help' doesn't exist. it just needs to be running 'chocolatey.exe'  http://pastebin.com/0sVDFkHQ  i've looked at the code and i can't find where it's specifying 'help'. https://github.com/saltstack/salt/blob/2015.8/salt/modules/chocolatey.py
16:15 JohnTunison joined #salt
16:16 dijit hm
16:16 dijit ok
16:16 aparsons joined #salt
16:17 dthom91 joined #salt
16:21 wnkz joined #salt
16:23 JohnTunison joined #salt
16:27 sfxandy joined #salt
16:30 aparsons joined #salt
16:30 danlsgiga joined #salt
16:30 knite joined #salt
16:31 danlsgiga hey guys, I'm still struggling to have some way of doing deep merging (aggregate) from pillars without having to use yamlex which has some issues
16:32 danlsgiga anyone have another way of aggregating dicts without overriding conflicting keys?
16:32 OliverUK left #salt
16:33 jmickle joined #salt
16:34 nikogonzo joined #salt
16:35 Fiber^ joined #salt
16:35 grumm_servire joined #salt
16:40 murrdoc use an external pillar
16:41 bhosmer_ joined #salt
16:41 danlsgiga murrdoc: Currently I have my defaults.sls with my yaml dicts and in my map.jinja I merge the defaults with the pillar dict
16:42 danlsgiga murrdoc: the problem is my pillar is overriding the dict in defaults.sls
16:42 jeffspeff can anyone give me a hand with this? https://github.com/saltstack/salt/pull/27747
16:43 danlsgiga murrdoc: so it is merging correctly as long as the key is not equal
16:43 murrdoc oh
16:43 ericof joined #salt
16:43 murrdoc hmm
16:44 murrdoc i uh did ugly things to get around it
16:44 murrdoc made a custom module that uses salts merge dict function
16:44 murrdoc and called that in the map.jina
16:44 murrdoc jinja*
16:44 murrdoc so it merges
16:45 gfa joined #salt
16:46 danlsgiga murrdoc: https://gist.github.com/danlsgiga/230b7b616f303808839c
16:46 Lionel_Debroux joined #salt
16:46 murrdoc https://gist.git.edgecastcdn.net/pkandhari/15e913fbfea90c7b766a
16:47 danlsgiga murrdoc: in this example I gave in my gists... the bind.sls pillar is overriding my defaults and I don't want that behaviour
16:47 gfa joined #salt
16:48 gfa left #salt
16:48 danlsgiga murrdoc: I need it to be added to the dict since the only conflict is the parent dict
16:48 danlsgiga murrdoc: this gist link you sent me is not working
16:49 murrdoc well its not supposed to be a solution
16:49 jeffspeff Ok, i think i've found the issue, though i'm more confused now more than ever. my salt-master is running 2015.8.1 on centos installed via yum using the saltstack repo. when i look at line 105 in /usr/lib/python2.7/site-packages/salt/modules/chocolatey.py it still shows the 'help' and does not reflect the changes of https://github.com/saltstack/salt/pull/27747
16:49 murrdoc its a pointer
16:50 danlsgiga murrdoc: got it, but it could be a workaround for now... the yamlex renderer is too buggy and ugly for me currently
16:51 danlsgiga murrdoc: Are you able to post it to github gist? I can't open this one you sent me
16:52 bluenemo joined #salt
16:55 traph joined #salt
16:57 whytewolf jeffspeff: that looks like it was merged into 2015.5. most likely it hasn't been merged forward yet.
16:57 jeffspeff the code is changed in 2015.5.5 and 2015.8.1 branches
16:57 breakingmatter joined #salt
16:58 forrest joined #salt
17:02 Edgan murrdoc: Should salt-proxy go into a salt-minion or salt-master package?
17:02 sdm24 What is a good worker_threads value to have, for about 50 minions? I just ran a highstate (after the first few errored out that I should increase worker_threads), and now it is running but I am getting a slow return from each minion
17:03 sdm24 even though each minion's highstate duration is short
17:03 KyleG joined #salt
17:03 KyleG joined #salt
17:03 whytewolf jeffspeff: 2015.8 is not the 2015.8.1 "branch" releases are tagged not branched. the 2015.8 branch is the development branch of 2015.8.x. so that change will most likely be in 2015.8.2 [cause the tag for 2015.8.1 does not have the change]
17:04 knite joined #salt
17:04 jeffspeff oh
17:05 jeffspeff how can i go ahead and patch that change on my system to get that fix?
17:05 whytewolf jeffspeff: you can try putting the development version in _modules and sync that to your minions
17:06 jeffspeff will that overwrite the built-in module or will it conflict?
17:07 whytewolf _modules overrides built-in. it is a semi-common thing to use develop modules in place of built-ins.
17:08 Karunamon Hi folks - Old hand at Puppet here doing a salt migration.. wondering what the "right" way is to find out on my master which highstate jobs failed or have problems
17:09 Edgan Karunamon: Ideally, IMHO, you want a dashboard, and you can use foreman(originally for puppet) with salt
17:09 jeffspeff whytewolf, thank you. i'll give it a shot
17:10 tracphil joined #salt
17:11 Karunamon Edgan: Ouch.. had bad experiences with foreman in the past. No way to do it from CLI?
17:11 bougie is there a way to get in real time script output ? When the script is launch with cmd.run in a state. Returners can do it ?
17:14 Edgan Karunamon: I think it is 10x the user experience with a web gui that does graphs/etc. I think you could just parse the master log.
17:14 JDiPierro joined #salt
17:15 zmalone Due to the weird return code stuff, parsing a log is probably the best bet.
17:15 pguinardco1 I do it via cron and sed... No complaints here, if all is good no emails, if something changes or goes wrong it emails me
17:15 zmalone Sometimes things fail and return 0 (Success!), etc.
17:15 Karunamon I'm actually about ready to go down the path of installing salt-eventsd and shoving the data into elasticsearch
17:16 impi joined #salt
17:16 JohnTunison joined #salt
17:17 Edgan Karunamon: What was the bad experience with foreman? You don't have to use half of it's features if you want just the reporting of salt run.
17:17 Karunamon Mostly due to database corruption causing VMs to be deleted. Last time we eval'd it, the conclusion was the project just wasn't mature enough yet
17:18 Edgan Karunamon: Never had that problem. When was this?
17:18 lexter joined #salt
17:18 Karunamon Way earlier this year, maybe february? Probably a corner case of a corner case to be honest.
17:18 ashutoshn joined #salt
17:19 chutzpah joined #salt
17:19 chutzpah joined #salt
17:20 Edgan Karunamon: The two downsides with it are their installer is puppet, and IMHO to be avoided. You would probably need to make your own salt state to install it, or do it by hand. The other is it doesn't cluster, so single point of failure. But if you just use it for the dashboard feature, it is good.
17:20 aparsons joined #salt
17:21 breakingmatter joined #salt
17:22 Edgan Karunamon: I am not aware of any other "good" dashboards that are free.
17:22 dthom91 joined #salt
17:23 kinetic joined #salt
17:25 mpanetta joined #salt
17:28 clintberry joined #salt
17:37 zwi joined #salt
17:40 Akhter_ joined #salt
17:42 jmreicha joined #salt
17:44 JohnTunison joined #salt
17:44 Nazca__ joined #salt
17:45 tmclaugh[work] joined #salt
17:46 jmreicha_ joined #salt
17:47 knite joined #salt
17:51 ashutoshn left #salt
17:53 tmclaugh[work] joined #salt
17:58 JohnTunison joined #salt
17:59 baweaver joined #salt
17:59 zsoftich2 joined #salt
17:59 _ikke_ joined #salt
18:04 iggy Edgan: neither, it should be it's own package now
18:05 timoguin_ joined #salt
18:07 stanchan joined #salt
18:07 JohnTunison joined #salt
18:08 jeffspeff anyone know of a way to manage windows features like telnet client with salt?
18:08 knite joined #salt
18:09 alvinstarr If I specify a source as salt://spam/eggs. Where would the root path for 'spam' be?
18:10 zmalone alvinstarr: your salt root
18:11 babilen alvinstarr: /srv/salt/spam/eggs, but that depends on how you configured https://docs.saltstack.com/en/latest/ref/configuration/master.html#file-roots
18:11 jeffpatton1971 cat /etc/salt/master |grep 'file server roots'
18:11 babilen useless use of cat!
18:13 timoguin joined #salt
18:14 alvinstarr babilen:  Thanks.
18:14 davisj joined #salt
18:15 timoguin_ joined #salt
18:17 kinetic joined #salt
18:17 dthom91 joined #salt
18:19 knite joined #salt
18:19 baweaver joined #salt
18:23 cberndt joined #salt
18:24 CheKoLyN joined #salt
18:24 clintberry joined #salt
18:25 notnotpeter joined #salt
18:25 knite joined #salt
18:28 chupetito joined #salt
18:29 danlsgiga whytewolf: Any roadmap and release date for the 2015.8.2? Any site where I can follow that?
18:29 knite joined #salt
18:29 chupetito hi everyone ... hope you are having a great day. Can someone give me an example of using a salt reaction and/or beacon to restart a service based on detecting a high CPU utilization?
18:30 danlsgiga murrdoc: Could you please put in github gist the merge module you implemented to do deep merging?
18:31 aron_kexp joined #salt
18:34 baweaver joined #salt
18:35 dec joined #salt
18:37 murrdoc i could
18:37 murrdoc it will cost u
18:37 murrdoc 1 pull request !
18:37 murrdoc (totally kidding)
18:37 subsigna_ joined #salt
18:38 dthom91 joined #salt
18:38 murrdoc danlsgiga:  https://gist.github.com/anonymous/745af2e469478d159286
18:38 kant joined #salt
18:38 murrdoc here u go
18:39 murrdoc basically has a utility funcion for merging
18:40 danlsgiga murrdoc: Sweet!
18:40 murrdoc so you write a module
18:40 murrdoc like the first file
18:40 murrdoc and call it from the map.jinja
18:41 danlsgiga murrdoc: Thanks so much... will try it and let you know... It's just a matter of putting it in the /srv/modules/merge folder?
18:44 dthom91 joined #salt
18:45 alainv joined #salt
18:47 Akhter joined #salt
18:48 slav0nic joined #salt
18:48 murrdoc yeah
18:53 JohnTunison joined #salt
18:54 Akhter joined #salt
18:55 dthom91 joined #salt
18:56 danlsgiga murrdoc: the dictupdate is a function that I need to import in the module?
18:59 murrdoc yeah
18:59 murrdoc import salt.utils.dict ? or import salt.utils.dictupdate
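[A sketch of the shape of such a module (the gist's exact contents aren't in the log): a thin wrapper over salt's own dictupdate utility, dropped into _modules/ and synced to the minions. The file name sets the module name, so _modules/merge.py exposes merge.dicts.]

```python
# _modules/merge.py
import salt.utils.dictupdate


def dicts(dest, upd):
    '''Deep-merge upd into dest without clobbering nested keys.
    From a map.jinja:
        {% raw %}{% set merged = salt['merge.dicts'](defaults, overrides) %}{% endraw %}
    '''
    return salt.utils.dictupdate.update(dest, upd)
```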
19:02 ajw0100 joined #salt
19:05 DanyC joined #salt
19:06 PredatorVI joined #salt
19:07 danlsgiga murrdoc: Fantastic!!! Working beautifully and such a simple solution
19:07 sn00py joined #salt
19:07 danlsgiga murrdoc: Thanks a lot!
19:08 PredatorVI I'm running salt 2015.5.3 on ubuntu 14.04 (no newer versions show up via apt-get update) and I can't get salt-api to run anymore.  Initially it looked like an issue with needing tornado, but I installed it.  The process just seems to die without any logs.  Is there a better way to debug it?
19:09 fredvd joined #salt
19:10 teebes joined #salt
19:10 Edgan iggy: official packages put it in minion
19:11 Edgan iggy: but yeah, it should probably be it's own package
19:12 danlsgiga murrdoc: Would be awesome to have this as an option in the pillar merging... something like deep instead of recurse or aggregate (which uses yamlex)
19:13 danlsgiga murrdoc: Will do some tests with a change in https://github.com/saltstack/salt/blob/develop/salt/utils/dictupdate.py to add a new option for deep merging without requiring yamlex
19:14 PredatorVI Here is the output of 'salt-api -l all'  https://gist.github.com/PredatorVI/4336cd761e3f1c47538f
19:15 Rumbles joined #salt
19:15 PredatorVI but no process is found
19:15 PredatorVI does salt-api put debug output in /var/log/salt/master?
19:20 Akhter joined #salt
19:27 kinetic joined #salt
19:29 babilen murrdoc: The latter
19:31 DanyC all, has anyone done anything to encrypt the pillar data? my use case is basically keep the pillar data on github and pull it down via ext_pillar however i haven't got the implementation detail...any help much appreciated
19:34 ajw0100 joined #salt
19:34 DanyC_ joined #salt
19:34 opensource_ninja joined #salt
19:36 kinetic joined #salt
19:36 iggy Edgan: http://repo.saltstack.com/apt/ubuntu/ubuntu14/latest/salt-proxy_2015.8.1+ds-1_all.deb
19:36 iggy Edgan: looks like it's in it's own pkg to me
19:38 Edgan iggy: look at the yum/rpm package
19:38 tapoxi joined #salt
19:39 danlsgiga murrdoc: Hey... looks like a new pillar merging strategy is coming to place https://github.com/saltstack/salt/commit/4af5b5c33f8b082797eb5827f59c4681ff9fba03
19:39 tapoxi hi everyone, new deployment. zeromq or raet?
19:39 danlsgiga murrdoc: This would deprecate your module and choosing recurse_list will do it
19:39 danlsgiga murrdoc: Don't know the release it will come out though
19:40 jeffpatton1971 we're running mesos and want to handle the configuration via salt, we'd like to store the instancename in a pillar, but we have multiple instances of mesos we'd like to configure. we were thinking along the lines of a pillar that was setup like this https://gist.github.com/jeffpatton1971/09ef1ed1d16bb555290f and then a state that does the configuration targeted a nodegroup for each cluster...would that work?
19:41 danlsgiga murrdoc: It is merged into 2015.8, so it was supposed to be released already
19:41 rmnuvg_ joined #salt
19:41 MAHDTech joined #salt
19:41 iggy danlsgiga: that probably won't be in a release until 2016.x
19:42 iggy but Salt has been known to backport shit they shouldn't so maybe before then
19:42 danlsgiga iggy: But it is in the codebase 2015.8 already
19:42 iggy that says develop
19:42 danlsgiga iggy: I mean, this is the 2015.8 branch
19:43 iggy figures
19:43 danlsgiga iggy: https://github.com/saltstack/salt/blob/2015.8/salt/utils/dictupdate.py
19:43 danlsgiga iggy: The code with the implementation is already there
19:43 * iggy gives up wondering why Salt backports stuff like that
19:44 iggy I got it
19:44 forrest jeffpatton1971, sure, you could give each different cluster specific grains so you can do the targeting in the state like you plan
19:44 danlsgiga iggy: It might not be documented yet, but it is in the 2015.8 branch and not in my yum pkg for 2015.8.1 :P
19:45 jeffpatton1971 @forrest, we were thinking nodegroups would be easier to work with, do you think that would work like grains? hoping to pull in something like mesos.{get clustername from pillar}.instance
19:45 iggy the original commit was 10 days ago... 2015.8.1 was 22 days ago
19:46 dthom91 joined #salt
19:46 forrest jeffpatton1971, nodegroups should be fine as well, all depends on how you do your matching.
19:48 danlsgiga iggy: Got it! ;)
19:48 danlsgiga iggy: So, there is going to be a 2018.1.2?
19:48 danlsgiga iggy: oops, 2015.8.2
19:50 cberndt joined #salt
19:50 baweaver joined #salt
19:50 danlsgiga iggy: *in 2015.8.2
19:51 * iggy not a Salt dev, can't speak to release plans
19:51 danlsgiga iggy: lol... ok! thanks anyways!
19:52 orionx_ joined #salt
19:53 Akhter joined #salt
19:54 mapu joined #salt
19:54 FreeSpencer Does require_reboot on network state require the machine to be rebooted before the IP changes?
19:55 DanyC_ any ideas please? thx in advance
19:57 jgee joined #salt
20:02 jeffpatton1971 @forrest ok, so this is where i'm at now... https://gist.github.com/jeffpatton1971/09ef1ed1d16bb555290f thoughts? i'm kindn of stuck in how do I grab the relevant instance name, or am I overthinking?
20:02 Greg__ joined #salt
20:03 cornfeedhobo in case anyone else uses pycharm for other stuff too, and feels like voting; thanks in advance.   https://youtrack.jetbrains.com/issue/PY-17334
20:03 whytewolf cornfeedhobo: how long are you going to continue to advertise that feature request that you seem to be the only one interested in?
20:04 cornfeedhobo whytewolf: it's been less than 24 hours ... your fuse must be tiny
20:05 forrest jeffpatton1971, so are you trying to use two different mesos files here? Or are you planning on populating from the pillar?
20:05 jeffpatton1971 trying to go from pillar
20:05 whytewolf cornfeedhobo: it was a question. and you had pretty much the same response yesterday when you asked if anyone was interested in you putting in the request.
20:06 murrdoc iggy:  https://github.com/saltstack/salt/blob/2015.8/salt/utils/dictupdate.py
20:06 cornfeedhobo whytewolf: sorry, to more accurately answer, i figured this would be the last since i brought it up yesterday at 6pm eastern time (when everyone else was pretty much off work and would not see it)
20:07 forrest jeffpatton1971, Hmm, I can't remember if you can do a nodegroup check in a state, if that's possible it is how I'd match it since nodegroup matching isn't supported in pillar.
20:08 forrest jeffpatton1971, https://github.com/saltstack/salt/issues/25292
20:08 jeffpatton1971 @forrest ya...I think we're kind of stuck at that point now as well...does the minion know what nodegroup it's in?
20:08 forrest jeffpatton1971, I'm not sure, I don't usually use nodegroups. Are you provisioning these as new systems? Or do they exist already?
20:09 cornfeedhobo whytewolf: worst case scenario, i would mention it maybe once more in a few weeks. i will probably also ask for the best place to bring attention to it on reddit, or something.  also, seeing as they have denied 3 previous requests for it, i assume that the problem is vocal support, not lack of interest
20:09 jeffpatton1971 @forrest they already exist
20:09 tercenya joined #salt
20:10 freelock joined #salt
20:11 forrest jeffpatton1971, You could add a grain to the systems in each cluster so they match there, then do the matching based on that.
20:11 jeffpatton1971 so minions don't know what nodegroup they are in?
20:12 forrest jeffpatton1971, I don't know. That might be functionality only the master recognizes when it compiles the data.
20:13 jeffpatton1971 ok, so it sounds like i'm thinking this backwards, if I have the nodegroup defined I can just run grains.set against the group to create a grain mesos_instance
20:16 aron_kexp joined #salt
20:17 RandyT_ greetings
20:17 RandyT_ raised this issue yesterday where deployment is creating two minion keys, one denied.
20:17 RandyT_ Didn't get a response here, so have filed the following issue. https://github.com/saltstack/salt/issues/28229
20:17 RandyT_ would be interested to know if anyone else is seeing this.
20:18 FreeSpencer Seems require_reboot: true on ubuntu doesnt work :(
20:18 forrest jeffpatton1971, Yeah usually I prefer to set grains on the instance regarding certain functionality like that because it's easy (that's why I asked if the instances already existed).
20:18 tapoxi left #salt
20:19 ekkelett joined #salt
20:25 baweaver joined #salt
20:26 stomith joined #salt
20:26 jeddi joined #salt
20:30 sunkist joined #salt
20:31 bhosmer_ joined #salt
20:43 clintberry joined #salt
20:45 Plastefuchs joined #salt
20:47 chupetito joined #salt
20:48 chupetito hi everyone ... can anyone recommend a good resource to develop a custom beacon? I'd like to intercept JMS message queue counts. Any ideas?
20:49 ajw0100_ joined #salt
20:49 Rumbles joined #salt
20:52 PredatorVI When will 2015.8.1 be available via APT?
20:53 zmalone Isn't it available now?
20:53 PredatorVI It wasn't an hour or so ago (at least my apt-get update didn't pull it).
20:53 zmalone https://repo.saltstack.com/apt/ubuntu/ubuntu14/2015.8/ / https://repo.saltstack.com/apt/debian/2015.8/
20:53 zmalone if you are configured to use the ppa, I don't believe that's still being updated
20:54 PredatorVI hmm...is there a new repo then?
20:54 zmalone yes, https://repo.saltstack.com/ :(
20:56 Greg__ left #salt
20:57 babilen chupetito: read the inotify source
20:58 RD_ joined #salt
20:58 RD_ Hey guys... got a small issue here trying to do a search for multiple roles...
20:58 RD_ {% for hostname, host_info in salt['mine.get']('roles:dns-mgr', 'network.ip_addrs', expr_form='grain').items() %}
20:59 RD_ I'll need to get one more role to filter results on roles:dns-mgr to segment dns servers. Stuck on this. Does anyone know how to do it?
21:01 chupetito thanks babilen
21:02 babilen chupetito: I think it is the best approach so far. Not sure if there is a tutorial on it, but let me know if you find one
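[A minimal beacon skeleton along the lines babilen suggests studying (salt/beacons/inotify.py being the reference). The JMS polling helper is hypothetical; the interface, beacon(config) returning a list of event dicts, is the real one.]

```python
# _beacons/jms_queue.py (hypothetical name)
def beacon(config):
    '''Run on the minion's beacon interval; each returned dict is fired
    onto the event bus, where a reactor can pick it up.'''
    events = []
    depth = _poll_queue_depth(config.get('queue'))  # hypothetical helper
    if depth > config.get('threshold', 100):
        events.append({'queue': config.get('queue'), 'depth': depth})
    return events
```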
21:02 whytewolf RD_: use compound matching
21:03 chupetito babilen or anyone else ... are there any more beacons planned for future releases? the list i found of available beacons seems pretty short! I need a few things ... like one for checking CPU.
21:04 RD_ Ill give it a search, Thanks whytewolf.
21:04 babilen chupetito: I would hope so, beacons are totally underutilised and way too overlooked for how awesome they are
21:05 laax_ joined #salt
21:05 babilen RD_: Just to make sure: Don't target anything sensitive by grains (as they can be spoofed easily)
21:06 chupetito babilen: I know right? I mean just yesterday I saw a pretty cool presentation from Salt and they pretty much sold me, and others I am sure, on using the reactor system in conjunction with beacons ... yet, very little out there for beacons as a whole
21:06 kiorky joined #salt
21:07 RD_ babilen: Thanks for the intel. Will try to make it work first, though... : p
21:07 dthom91 joined #salt
21:10 stopbyte joined #salt
21:11 PredatorVI Okay...updated my sources.list.d/saltstack.list file and now:  Failed to fetch http://repo.saltstack.com/apt/ubuntu/ubuntu14/dists/trusty/main/binary-amd64/Packages  404  Not Found
21:11 PredatorVI Trying to follow instructions at: http://repo.saltstack.com/#ubuntu
21:11 zmalone ha
21:11 zmalone they changed it since those were put up
21:12 PredatorVI *swearwords*
21:12 zmalone now you need "2015.5", "2015.8" or "latest" in there
21:12 zmalone https://repo.saltstack.com/apt/ubuntu/ubuntu14/
21:12 zmalone you can figure out the url you want from there
21:12 murrdoc *magic*
21:13 zmalone It looks like http://repo.saltstack.com/#ubuntu is correct right now though
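In other words, the sources line wants the branch segment in the path now; a sketch for Ubuntu 14.04 on the 2015.8 branch, assuming the suite/component layout matches the repo page:

    # /etc/apt/sources.list.d/saltstack.list
    deb http://repo.saltstack.com/apt/ubuntu/ubuntu14/2015.8 trusty main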
21:14 DanyC joined #salt
21:16 RD_ You can go as far as http://repo.saltstack.com/apt/ubuntu/ubuntu14/
21:20 giantlock joined #salt
21:22 danlsgiga hey guys... I'd like to override the salt/utils/dictupdate.py file using the _modules folder... is that possible, so salt would read the .py from _modules instead of the one from its source tree?
21:22 CeBe joined #salt
21:23 nyx_ joined #salt
21:23 murrdoc yeah
21:24 murrdoc i am only answering the is that possible part of your question
21:24 murrdoc MUHAHAHAHAHAHA
21:24 murrdoc mkdir _modules
21:24 murrdoc and put your file there
21:25 bryguy joined #salt
21:25 dthom91 joined #salt
21:25 murrdoc in your file root
21:25 murrdoc danlsgiga
21:25 opensource_ninja joined #salt
21:26 danlsgiga murrdoc: I'm just reading the docs and it seems I need to specify a __virtual__ function returning the stock module name
21:26 danlsgiga murrdoc: https://docs.saltstack.com/en/latest/ref/modules/#virtual-function
21:26 murrdoc it defaults to file name
21:26 murrdoc but yes
21:27 danlsgiga murrdoc: But would this work for salt/utils/dictupdate too?
21:27 danlsgiga murrdoc: Cause dictupdate is not a module
21:27 murrdoc try _utils
21:28 DanyC anyone know if bootstrap-salt.sh can install/upgrade only salt-cloud and not the master/minion?
21:28 danlsgiga murrdoc: Ok
21:29 danlsgiga murrdoc: and how should I name the __virtualname__ ?
21:29 danlsgiga murrdoc: salt.utils.dictupdate?
21:29 murrdoc just dictupdate ?
21:31 PredatorVI I've upgraded my salt-master from 2015.5.3 to 2015.8.1.  I am still unable to get salt-api to start/run.  Running 'salt-api -l all' shows that it gets as far as creating /var/run/salt-api.pid, but then exits with no errors that I can see/find.  I don't know where to look next.  Any suggestions?
21:32 babilen danlsgiga: Why do you want to fork it?
21:33 baweaver joined #salt
21:36 iggy danlsgiga: no, utils != modules
21:36 * iggy catches up to the rest of the conversation
21:39 Cruz4prez joined #salt
21:40 iggy danlsgiga: if /srv/salt/_utils/ doesn't work, you might have to copy the file manually and use `utils_dirs` on the minions
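utils_dirs is a plain minion config option (a list of extra directories searched for util modules), so the manual route would look something like this; the path is a placeholder:

    # /etc/salt/minion
    utils_dirs:
      - /srv/salt-utils   # drop the patched dictupdate.py in here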
21:41 babilen I seem to remember reading something about _utils, it's definitely worth a try
21:41 * babilen is still curious what changes it needs
21:41 iggy "upgrading to latest"
21:41 babilen Too bad we can't easily use it in map.jinja, or can we?
21:42 iggy no (although murrdoc has a module somewhere that allows the use of merge functions)
21:43 babilen saltstack should make all those available as jinja filters too
21:43 babilen There aren't nearly enough jinja filters :)
21:43 RandyT_ question: has anyone run into any deployment problems after installing pip on minion EC2 images?
21:43 babilen murrdoc: ping (merge foo)
21:43 * murrdoc is here
21:43 murrdoc sup baby len
21:43 iggy I'm of the opinion that if you want to use utils, you should probably start writing #!py states
21:44 murrdoc babilen:  sup
21:44 babilen iggy: A bunch of formulas are broken (to some degree) because of this inability. When the os_map is merged into the defaults, nested structures still get overwritten ... we really need a deep and intelligent merge there
21:44 babilen murrdoc: You have to be kidding me
21:44 murrdoc i dont have context
21:45 babilen That's not even close to how my nickname is being pronounced
21:45 murrdoc so no , not kidding
21:45 murrdoc oh that :D
21:45 murrdoc its thursday
21:45 murrdoc #brainfried a lil
21:45 murrdoc so the merge thing
21:45 murrdoc fucking yeah lets do it
21:45 babilen murrdoc: Anyway, iggy just mentioned that "although murrdoc has a module somewhere that allows the use of merge functions"
21:45 murrdoc yeah
21:46 murrdoc its a _module to wrap salt.util.dictupdate
21:46 murrdoc thats it
21:46 murrdoc pretty brilliant in simplicity
21:46 babilen iggy: Sure, #!py is the way to go, but that doesn't help with the current map.jinja pattern we use
21:46 murrdoc #horn tooter
21:46 babilen murrdoc: Is this a bad moment?
21:47 murrdoc no let me straighten up
21:47 murrdoc do you want to see the _module
21:47 babilen sure
21:47 murrdoc aight
21:47 babilen In fact I want you to submit a PR so it can get merged into salt proper ASAP
21:48 jhauser joined #salt
21:48 murrdoc babilen:  https://gist.github.com/anonymous/ccc52efee4721ad2f8e5
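Since anonymous gists tend to rot, here is a guess at the shape of that wrapper: a thin passthrough exposing salt.utils.dictupdate to templates. The merge_lists argument is the 2015.8/develop addition iggy brings up a bit further down:

    # _modules/dictupdate.py -- sketch, not the actual gist
    import salt.utils.dictupdate

    __virtualname__ = 'dictupdate'


    def __virtual__():
        return __virtualname__


    def update(dest, upd, recursive_update=True, merge_lists=False):
        '''
        Usage from a template:
            {% set merged = salt['dictupdate.update'](defaults, overrides) %}
        '''
        return salt.utils.dictupdate.update(dest, upd,
                                            recursive_update=recursive_update,
                                            merge_lists=merge_lists)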
21:48 babilen I mean it shouldn't be too hard to write that wrapper ..
21:48 murrdoc not sure what module it fits under ?
21:48 murrdoc defaults ?
21:48 murrdoc https://github.com/saltstack/salt/blob/develop/salt/modules/defaults.py
21:48 murrdoc that one ?
21:48 murrdoc sure
21:49 babilen Well, lets think about this some more
21:50 geekatcmu Is there a reason why filter_by doesn't handle nested grains the way all the other grain functions do?  e.g. "cornerstone:tags" is recognized as a grain named cornerstone that returns a dict, of which "tags" is one of the keys.
21:50 babilen _utils are .. well .. a bit special in that by nature of not being an execution module they aren't available everywhere. That sucks for dictupdate as it is useful in a lot of other contexts
21:50 babilen It doesn't really fit the "execution module" semantic of "things you might genuinely want to run on your minions", which is, I guess, the reason for the _modules/_utils split in the first place
21:51 babilen I guess the right thing to do would be to provide these as jinja filters (ansible style)
21:52 murrdoc babilen:  https://github.com/saltstack/salt/pull/28235
21:52 babilen That would solve the "it's not available in jinja" problem. It's no issue in #!py naturally and mako isn't any work either
21:53 murrdoc well chime in
21:53 murrdoc and explain why you twisted my arm
21:53 murrdoc by shaming me into that pull
21:55 babilen Already did
21:55 babilen No, the world is better with that, but it is a bit of an ad-hoc solution in that it is easy to implement, without much thought as to how to do this in general
21:57 subsignal joined #salt
21:58 iggy and the reason this originally came up is merge_lists support was added to salt.utils.merge in 2015.8/devel
21:58 babilen I mean the problem is "It is in _utils only because we didn't want it in _modules" but then "Due to the fact that it is in _utils we can't use it easily, but its useful" ... so ...
21:59 stanchan joined #salt
22:00 iggy yeah, I was answering your question from _way_ earlier (why did homie want to override utils/dictupdate)
22:00 babilen http://docs.ansible.com/ansible/playbooks_filters.html more of that
22:01 babilen + _filters/ :)
22:02 viq joined #salt
22:02 RandyT_ Would sure appreciate it if someone could give me some ideas about this issue: https://github.com/saltstack/salt/issues/28229
22:05 iggy I think that would be fantastic
22:07 klocek joined #salt
22:07 superseb joined #salt
22:08 babilen iggy: What exactly?
22:09 murrdoc filters
22:09 babilen https://github.com/saltstack/salt/issues/28236
22:09 baweaver joined #salt
22:10 ranomore1 joined #salt
22:10 RD_ whytewolf: 'G@roles:dns-mgr and G@roles:wan', 'network.ip_addrs', expr_form='compound' did the trick. Thx
22:10 lexter joined #salt
22:11 whytewolf RD_: no problem. glad to help
22:11 whytewolf now if only this headache would go away
22:12 dthom91 joined #salt
22:13 babilen It would be even better if we could just dump https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/filter in there and it would "just work".
22:13 babilen Once that's possible ansible and saltstack should just work on "devop filters" together that can be used in either framework
22:15 RD_ or just a shared plugin system .
22:15 babilen Yeah, that was the idea
22:15 stomith so the dockerng module suggests: "To push or pull images, credentials must be configured. Because a password must be used, it is recommended to place this configuration in Pillar data." Can I still put this in the master file?
22:15 iggy well, now that RH bought ansible, you can expect it to die
22:16 babilen stomith: If it uses config.get rather than pillar.get then yes
22:16 babilen (which it probably does)
22:16 babilen https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.config.html#salt.modules.config.get
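The pillar shape the dockerng docs describe is roughly the following (registry URL and credentials are placeholders); since the module reads it via config.get, the same block should also work in the master or minion config:

    docker-registries:
      https://index.docker.io/v1/:
        email: you@example.com
        username: you
        password: s3cret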
22:16 * babilen has to learn fancybot
22:16 stomith thanks, I'm new. :/
22:17 iggy I forgot I had the bot back in here
22:17 iggy or not
22:17 DanyC I'd appreciate it if anyone can reply to my 2 questions: 1) anyone know if bootstrap-salt.sh can install/upgrade only salt-cloud and not the master/minion?
22:18 babilen Welcome! The point of using pillars for that is to be able to change it quickly without having to restart the master, and to keep it in a datasource that can be targeted specifically to *only* the minions that need that information
22:18 babilen stomith: ^
22:18 ranomore joined #salt
22:18 stomith babilen: thanks!
22:18 DanyC 2) does anyone have an example/idea of how to encrypt/decrypt a pillar password? I looked for an example but no luck
22:19 stomith I'm fighting with states still, haven't even contemplated pillars yet
22:19 babilen stomith: You can probably also place it in the minion's config, but then I'd consider pillars to be the best choice. Why do you want to put it in the master config? (if you simply haven't worked with pillars before and thought "Well, I'll look into that later", that's cool too)
22:19 babilen yeah
22:19 otter768 joined #salt
22:19 stomith babilen, I'm trying to check out dockerng, and that's what it suggests for the configuration. :/
22:20 clefebvre joined #salt
22:20 babilen stomith: Pillars are, at the end of the day, just dictionaries that are sent to specific minions, which is why they are used for sensitive data that shouldn't be available to all minions
22:20 iggy DanyC: no, gpg renderer
22:20 clintberry joined #salt
22:21 stomith okay, like a private class.
22:22 babilen stomith: I'd suggest to at least read https://docs.saltstack.com/en/latest/topics/tutorials/index.html#states (part 1 to part 4) and then https://docs.saltstack.com/en/latest/topics/tutorials/pillar.html
22:22 rob__ joined #salt
22:23 stomith sure. thanks for the suggestion!
22:24 babilen A lot of the things will be clearer afterwards (hopefully) :)
22:24 rob__ I have a question if anyone can help, I have tried searching without success... I am trying to install salt-minion on a redhat 6 server from the salt repository. It is trying to install the dependency python-requests which seems to be pulled from epel for rh6.
22:24 DanyC iggy: thanks. For the gpg renderer, am I right in thinking the flow will be something like: I create my own gpg key on the master, import it on my laptop, git push the encrypted gpg file to git, set up the master's ext_pillar and then the gpg renderer?
22:26 rob__ Has anyone had to install it recently, and and suggestion on where to get this from?
22:26 rob__ have*
22:27 iggy DanyC: never actually used it, that's just the answer for that question
22:28 DanyC iggy: ah i see, thanks for your answer btw.
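DanyC's flow is roughly right. Once the master has the private key in its keydir (/etc/salt/gpgkeys by default), the pillar side is just a renderer shebang plus ASCII-armored values; the ciphertext below is a placeholder for the output of gpg --armor --encrypt -r <master-key-id>:

    #!yaml|gpg

    db_password: |
      -----BEGIN PGP MESSAGE-----
      (armored ciphertext goes here)
      -----END PGP MESSAGE-----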
22:34 clefebvre left #salt
22:34 PredatorVI Any salt-api folks out there who can help me troubleshoot my salt-api process not running?  I'm at a loss.
22:35 RandyT_ Could anyone share with me how the master determines the contact details for a minion?
22:36 keimlink joined #salt
22:36 iggy RandyT_: it doesn't... the minions connect to the master
22:36 RandyT_ I have a minion that is responding to a ping that I know to be down, and another minion not responding that I know to be up.
22:36 RD_ Mixed up ids?
22:37 RandyT_ looking at the log on the minion that is responding, it seems to be happily connected to the master...
22:37 Guest89 joined #salt
22:37 PredatorVI RandyT_:  check that the contents of minion_id match your expected minion names.
22:38 kalessin joined #salt
22:38 murrdoc man jobs.active needs a minion param
22:38 murrdoc COME ON SALT
22:38 RD_ iggy: The master should start it somehow, right? Or is there high-latency polling in the background?
22:38 horus_plex LOL
22:38 RandyT_ PredatorVI: ok, salt minion is confused about name.
22:39 iggy RD_: that's effectively what zmq does
22:39 RandyT_ working through debugging another issue. seems minion deployment might be dependent on pip 7.1.2? vs 7.1?
22:39 moloney joined #salt
22:39 sunkist joined #salt
22:40 RandyT_ anyway, I am way down the rat hole here... will see if I can clean up a bit. Seems that the minion name may be confused by the minion hostname...
22:41 kalessin joined #salt
22:41 moloney I have a weird issue in 2015.8.1 where trying to use "pip.installed" results in an error about pip not being importable. The ubuntu package python-pip is installed and I can open python on the minion and import pip (and pip.req) successfully.  Doing a "pip install -U pip" fixes it. I didn't have this issue in 2015.5, and it doesn't show up on servers updated from 2015.5 to 2015.8.
22:42 baweaver joined #salt
22:47 breakingmatter joined #salt
22:59 otter768 joined #salt
23:00 bhosmer joined #salt
23:00 kinetic joined #salt
23:01 larsfronius joined #salt
23:01 PredatorVI so ... salt-api debugging tips anyone?
23:03 PredatorVI salt-api service won't stay up.  No obvious critical errors.  A full trace does mention missing 'fluent_handler', 'logstash_udp_handler' and 'logstash_zmq_handler' sections, but it goes on to create a PID file anyway, then just goes away.
23:04 Aikar left #salt
23:05 RD_ PredatorVI: strace?
23:05 sunkist joined #salt
23:05 PredatorVI never used it...but will look into it
23:06 kalessin joined #salt
23:15 moloney PredatorVI: You almost certainly want "strace -f ...." to follow forked processes.
23:19 PredatorVI I've run strace w/ -f and have a massive file of stuff I don't know how to decipher.
23:19 zmalone joined #salt
23:22 moloney PredatorVI: yeah it is pretty brute force.  You can start by grepping for errors, but plenty of errors there will be harmless
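A reasonable first pass over a big strace capture, with paths and patterns only as suggestions:

    strace -f -o /tmp/salt-api.strace salt-api -l debug
    # failed syscalls look like "= -1 E..."; ENOENT/EAGAIN are mostly noise
    grep ' = -1 E' /tmp/salt-api.strace | grep -vE 'ENOENT|EAGAIN' | less
    # the last lines of the capture show how the process actually went away
    tail /tmp/salt-api.strace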
23:22 dthom91 joined #salt
23:22 PredatorVI exit code is 0
23:22 PredatorVI :q
23:26 kinetic joined #salt
23:27 dthom91 joined #salt
23:28 douglasb joined #salt
23:34 JohnTunison joined #salt
23:34 RD_ PredatorVI: I would start by redirecting the output to a file and then searching for the missing stuff
23:35 RD_ If you grep straight from stdout, everything around the match is lost (and that context is pretty much what you're looking for)
23:38 Cruz4prez joined #salt
23:39 RD_ PredatorVI: Hey... check it out: https://github.com/saltstack/salt/issues/23342
23:40 SheetiS joined #salt
23:41 RD_ Looks like the fluent_handler and logstash stuff can be ignored
23:43 ahammond is there a way to run cmd.run asynchronously?
23:43 baweaver joined #salt
23:44 ahammond I'm thinking specifically of a way to have the system reboot after the state.highstate completes and tells the master
23:45 ahammond and... I don't have at installed on these systems.
23:45 ahammond but... maybe it's worth adding.
23:45 zmalone Don't all 0mq actions happen async?
23:45 RD_ ahammond: never did, but: salt --async --subset=10 \* cmd.run 'sleep 20m; echo hello'
23:45 zmalone You can run foo; sleep; reboot, or something like that, and the master won't print success, but it should run
23:46 ahammond zmalone the goal is to have the master print success.
23:46 ahammond I can already do it without a success message... but... that buggers up my tracking and auditing. :(
23:47 PredatorVI RD_:  I've redirected to a file, but I don't see anything that jumps out as a problem.
23:48 ahammond I guess I could use an orchestration, but... ugh.
23:49 RD_ ahammond: http://stackoverflow.com/questions/19429441/salt-how-to-get-cmd-async-output-asynchronously
23:50 RD_ PredatorVI: Check the link - those issues are non-issues, unless you're trying to use logstash or fluent_handler, whatever that is
23:50 bastiandg joined #salt
23:50 ahammond 2015.8.1 has modules.system.reboot(at_time=None). I bet that'll solve my problem nicely. :)
23:51 RD_ Cool
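For the record, a sketch against those 2015.8 docs; the exact at_time semantics are platform-dependent (it is handed to the system's shutdown mechanism), and the small delay is what lets the highstate return reach the master before the box goes down:

    salt '*' state.highstate
    salt '*' system.reboot at_time=1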
23:52 RD_ PredatorVI: My system has the same TRACE information. None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler'
23:53 zmalone ahammond: depending on the platform, https://github.com/saltstack/salt/issues/24297
23:54 RD_ ahammond: I guess you can make a reactor for whatever change you decide should restart the server (sudo rm -Rf /etc :) and have it call this reboot method
