
IRC log for #salt, 2017-04-10


All times shown according to UTC.

Time Nick Message
00:07 rem5_ joined #salt
00:21 onlyanegg joined #salt
00:21 inetpro joined #salt
00:22 Praematura joined #salt
00:22 gadams joined #salt
00:26 gadams joined #salt
00:41 cyborg-one joined #salt
00:51 debian1121 joined #salt
00:57 onlyanegg joined #salt
01:17 nikdatrix joined #salt
01:28 rem5 joined #salt
01:46 pipps joined #salt
01:48 ilbot3 joined #salt
01:48 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.6, 2016.11.3 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ (please don't multiline paste into channel) <+> See also: #salt-devel, #salt-offtopic <+> Ask with patience as we are volunteers and may not have immediate answers
02:21 armyriad joined #salt
02:37 packeteer joined #salt
02:42 Klaus_D1eter_ joined #salt
02:46 JPT joined #salt
02:46 Tanta joined #salt
03:08 nikdatrix joined #salt
03:08 gmoro_ joined #salt
03:08 sh123124213 joined #salt
03:18 nikdatrix joined #salt
03:49 evle joined #salt
04:36 rdas joined #salt
04:46 golodhrim|work|3 joined #salt
05:13 calvinh joined #salt
05:20 nikdatrix joined #salt
05:24 armyriad joined #salt
05:30 preludedrew joined #salt
05:31 sh123124213 joined #salt
05:38 DarkKnightCZ joined #salt
05:39 ivanjaros joined #salt
05:44 karlthane joined #salt
05:54 karlthane joined #salt
06:14 felskrone joined #salt
06:20 aldevar joined #salt
06:27 yuhl______ joined #salt
06:42 netcho joined #salt
06:43 ReV013 joined #salt
06:45 do3meli joined #salt
06:46 do3meli left #salt
06:46 Ricardo1000 joined #salt
06:56 dario joined #salt
06:57 jhauser joined #salt
07:08 karlthane joined #salt
07:10 o1e9 joined #salt
07:13 felskrone joined #salt
07:19 wolfpackmars2 joined #salt
07:19 ivanjaros joined #salt
07:22 nikdatrix joined #salt
07:28 dariusjs joined #salt
07:28 pbandark joined #salt
07:32 colttt joined #salt
07:34 DarkKnightCZ joined #salt
07:35 kbaikov_ joined #salt
07:38 JohnnyRun joined #salt
07:39 sh123124213 joined #salt
07:46 karlthane joined #salt
07:47 candyman89 joined #salt
07:51 nikdatrix joined #salt
07:54 colttt joined #salt
08:08 Ricardo1000 joined #salt
08:08 pbandark joined #salt
08:12 mikecmpbll joined #salt
08:12 ronnix joined #salt
08:13 Rumbles joined #salt
08:15 bdrung_work joined #salt
08:18 sh123124213 joined #salt
08:25 systeem joined #salt
08:27 tyler-baker joined #salt
08:28 s_kunk joined #salt
08:40 toanju joined #salt
08:44 Mattch joined #salt
08:48 jas02 joined #salt
08:51 karlthane joined #salt
08:56 schinken joined #salt
09:06 JohnnyRun joined #salt
09:08 dariusjs joined #salt
09:11 Praematura joined #salt
09:14 tkeith joined #salt
09:14 raspado joined #salt
09:14 tkeith Is there a way to include the saltmaster hostname in a jinja template?
09:16 tkeith Nvm, I think it's {{ grains['master'] }}
09:17 Rumbles joined #salt
09:23 karlthane joined #salt
09:23 zulutango joined #salt
09:24 Reverend tkeith: you can test that from the CLI if you want
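A quick sketch of the CLI check Reverend suggests (the minion target is a placeholder):

```shell
# check the value on a minion before relying on it in a template
salt 'web01.example.com' grains.get master
# inside a jinja-rendered template, the same value is available as:
#   {{ grains['master'] }}
```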
09:26 sh123124213 joined #salt
09:26 ronnix joined #salt
09:26 CeBe joined #salt
09:45 karlthane joined #salt
09:58 catpig joined #salt
10:05 mage_ joined #salt
10:07 Reverend anyone know any way of renaming the minion in a 'nice' way ?
10:08 hemebond Nope.
10:08 hemebond Stop service. Delete keys. Update ID. Start service.
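hemebond's four steps, sketched as shell (service name, paths, and IDs are illustrative; whether keys need deleting on the minion, the master, or both depends on the setup):

```shell
# On the minion:
systemctl stop salt-minion
echo 'new-id' > /etc/salt/minion_id   # update the ID
rm -rf /etc/salt/pki/minion           # drop the old keypair so a new one is generated
# On the master, delete the stale key:
salt-key -d old-id -y
# Back on the minion:
systemctl start salt-minion
# Then accept the new key on the master:
salt-key -a new-id -y
```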
10:13 Reverend darnit
10:13 Reverend that's a bit shitty. What I'm doing is removing the minion_id file and restarting the service
10:13 Reverend but it has to timeout from the master before it can do the second highstate
10:13 hemebond That doesn't sound right.
10:16 Reverend hmm, either that then or something in one of my states is taking a fucking LIFETIME to run
10:17 hemebond All jobs should eventually finish. It's only really the CLI tool that times out.
10:17 hemebond So you should be able to check the job for details
10:17 hemebond (including the time taken per state)
10:20 Reverend hmm. it's kinda maybe definitely doing it with reactors though xD
10:20 Reverend i'll need to turn all that off
10:20 Reverend ugh
10:20 Reverend effort
10:20 raspado joined #salt
10:20 hemebond Everything has a job.
10:21 raspado joined #salt
10:21 Reverend :D
10:36 mavhq joined #salt
10:41 candyman88 joined #salt
10:43 amcorreia joined #salt
10:52 dariusjs joined #salt
10:55 karlthane joined #salt
11:01 schinken joined #salt
11:08 ivanjaros joined #salt
11:17 ReV013 joined #salt
11:19 toanju joined #salt
11:20 netcho joined #salt
11:27 sh123124213 joined #salt
11:36 candyman88 joined #salt
11:48 sh123124213 joined #salt
11:50 netcho_ joined #salt
12:09 numkem joined #salt
12:16 felskrone joined #salt
12:19 ronnix joined #salt
12:22 nikdatrix joined #salt
12:27 evle1 joined #salt
12:29 Kelsar joined #salt
12:33 karlthane joined #salt
12:34 jdipierro joined #salt
12:38 dariusjs joined #salt
12:39 Kelsar joined #salt
12:41 cingeyedog joined #salt
12:45 dendazen joined #salt
12:46 alexlist joined #salt
12:47 cingeyed_ joined #salt
12:48 cingeyedog joined #salt
12:48 alexlist joined #salt
12:49 cingeyedog info cingeyedog
12:49 cingeyedog left #salt
12:50 alexlist joined #salt
12:57 racooper joined #salt
12:58 jdipierro joined #salt
12:59 LeProvokateur joined #salt
13:01 candyman88 joined #salt
13:08 ssplatt joined #salt
13:09 jas02 joined #salt
13:12 jdipierro joined #salt
13:14 netcho_ joined #salt
13:16 rdas joined #salt
13:24 mdpolaris joined #salt
13:24 onmeac joined #salt
13:25 jas02 joined #salt
13:27 mdpolaris Hello all. I am having a strange issue. I have a MoM and Syndic architecture, and when I use saltutil.runner from the MoM to execute an orch state on the syndic, the GPG pillar fails to render
13:28 mdpolaris here is the call: salt 'or-syndic-node-test2' saltutil.runner state.orch mods=orch_new_instance
13:28 mdpolaris and here is the error: Rendering SLS 'salt' failed, render error:
13:28 mdpolaris 'config_dir'
13:30 mdpolaris the pillar renders fine from the syndic node so it is not a straight config issue
13:32 Sketch joined #salt
13:33 jas02 joined #salt
13:34 afics joined #salt
13:39 Tanta joined #salt
13:41 jas02 joined #salt
13:44 ivanjaros joined #salt
13:44 jas02 joined #salt
13:46 maxtrick joined #salt
13:46 IRCFrEAK joined #salt
13:48 IRCFrEAK left #salt
13:49 Cottser joined #salt
13:54 PatrolDoom joined #salt
14:00 squishypebble joined #salt
14:01 jdipierro joined #salt
14:05 jas02 joined #salt
14:07 zzzirk joined #salt
14:07 speedlight joined #salt
14:11 toastedpenguin joined #salt
14:13 toastedpenguin joined #salt
14:13 netcho_ joined #salt
14:20 karlthane joined #salt
14:22 raspado joined #salt
14:26 onlyanegg joined #salt
14:31 cyborg-one joined #salt
14:33 XenophonF joined #salt
14:35 edrocks joined #salt
14:37 zer0def any pointers on how i remove routes from an interface with `network` states?
14:40 XenophonF it isn't apparent that you can
14:40 ProT-0-TypE joined #salt
14:40 XenophonF you might have to use cmd.run :(
14:53 babilen Why would you have to routinely remove routes? Who put them there?
14:53 pipps joined #salt
14:53 Pyro_ joined #salt
14:55 sh123124213 joined #salt
14:58 Pyro_ I've got three separate environments for salt: dev, stage, and production.  Each uses the same "base" file_root, and separate file_roots for their respective environment (for promotion).  We have a salt master in each environment, but they are not salted.  The question is what is the best way to salt the salt masters?  Should I have a completely separate salt master environment to maintain the salt-masters... this seems overkill
14:58 Pyro_ , and painful to manage.
14:58 LeProvokateur joined #salt
15:00 XenophonF Pyro_: i manage the master using the master itself
15:00 XenophonF i use salt-formula for all of my salt master/minion/cloud/etc. configuration needs
15:01 smartalek joined #salt
15:02 Pyro_ And that doesn't cause any kind of odd looping issues?
15:02 Pyro_ It just feels funny, like a robot building itself ;)
15:02 XenophonF so my DR plan for re-bootstrapping the master involves deploying a new Unix server, manually installing Salt, manually installing salt-formula, manually configuring my git ext_pillar, and manually running `salt-call state.apply salt.master`
15:03 XenophonF ooops that should read `salt-call state.apply salt.formulas,salt.master`
15:03 XenophonF at that point i can do `salt-call state.apply` and be done with it
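A condensed sketch of the bootstrap XenophonF describes (using the salt-bootstrap installer is an assumption here; the formula targets come from his messages, and the manual steps stay manual):

```shell
# 1. install master + minion (salt-bootstrap is one common route; -M installs the master)
curl -L https://bootstrap.saltstack.com -o bootstrap_salt.sh
sh bootstrap_salt.sh -M
# 2. install salt-formula and configure the git ext_pillar by hand
# 3. converge the master via its own minion, masterless-style:
salt-call state.apply salt.formulas,salt.master
# 4. then a full highstate:
salt-call state.apply
```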
15:04 XenophonF the salt-master process does not perform configuration
15:04 Pyro_ Cool, that makes sense.  I'm wanting to make sure, and test, what to do when AWS kills off my salt-masters.
15:04 XenophonF that's the job of salt-minion
15:04 XenophonF and salt-minion doesn't care whether it's talking to another host or to loopback
15:04 tapoxi joined #salt
15:05 Pyro_ Thanks XenophonF, I think I'm on the same/right/common track as you then.
15:05 XenophonF awesome
15:05 Pyro_ I appreciate the help, and guidance.
15:06 XenophonF my actual bootstrapping process is a little more detailed than that because I have to configure OpenSSH and gpg
15:06 XenophonF again, manually
15:06 XenophonF you're welcome
15:07 XenophonF to give you a concrete example of how i have things set up, here's how my salt-master might be configured: https://github.com/irtnog/salt-pillar-example/blob/master/salt/example/com/init.sls
15:07 Tanta joined #salt
15:08 XenophonF NB: I use FreeBSD so packages and pathnames will look different
15:11 XenophonF and here's the bootstrapping script I wrote to deploy a salt-master in EC2 as part of my COOP/DR plan: https://gist.github.com/xenophonf/d8da7f47ea29d9ad46e7
15:11 Pyro_ Cool, thanks that will help give me some context to go by, then I'm not re-inventing the wheel, which is the whole goal with Salt, Chef, Puppet, ...
15:12 LeProvokateur joined #salt
15:15 XenophonF zer0def: are you trying to configure a route or firewall or something?
15:16 XenophonF if not I'd leave network interface configuration to DHCP/SLAAC
15:17 zer0def XenophonF: add an iface to a bridge which doesn't inherit macs
15:17 XenophonF oh
15:17 XenophonF hm
15:19 XenophonF TBH my approach would be to managed the underlying config files, then restart networking via `order: last`
15:19 XenophonF kinda brittle though
15:20 XenophonF s/managed/manage
15:20 zer0def yeah, i got what you meant
15:21 sh123124213 joined #salt
15:21 XenophonF sorry am a native english speaker so, you know, my english isn't very good
15:21 XenophonF ;)
15:22 * XenophonF blames auto-correct.
15:23 hasues joined #salt
15:23 sarcasticadmin joined #salt
15:26 Inveracity joined #salt
15:38 jas02 joined #salt
15:40 greyeax joined #salt
15:42 tiwula joined #salt
15:48 aldevar left #salt
15:50 jas02 joined #salt
15:54 Trauma joined #salt
15:55 jab416171 it seems like salt isn't respecting my state_output config, I have it set to 'changes' but I'm getting output like this: https://bpaste.net/show/d96b190d7bb4
15:57 mdpolaris Hello all. I am having a strange issue. I have a MoM and Syndic architecture, and when I use saltutil.runner from the MoM to execute an orch state on the syndic, the GPG pillar fails to render. The pillar renders fine when I execute that same orchestration state locally from the Syndic
15:58 zer0def XenophonF: thoughts on how i would approach creating veth pairs using salt, outside of `cmd.run`?
15:59 sh123124213 joined #salt
16:01 jas02 joined #salt
16:02 XenophonF zer0def: I try to do all network interface config at O/S install time because this is one of the few configuration items I don't trust Salt to get right.
16:03 XenophonF same for renaming a Windows host and joining it to a domain
16:03 zer0def as far as getting it right on debian-based systems, it's doing a spectacular job
16:04 XenophonF Like I said, my inclination would be to drop the relevant NetworkManager (or whatever) config files into /etc and then run the Debian-equivalent of `service network restart`.
16:05 XenophonF hang on let me look at the network exec module
16:06 XenophonF you might be able to cobble together something from module.run states
16:06 zer0def well, i just want a veth pair, but now that i'm sorta writing it out, `cmd.run` isn't too horrifying
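zer0def's cmd.run fallback can at least be kept idempotent along these lines (interface names are illustrative; `unless` skips the command once the link exists):

```yaml
veth0-pair:
  cmd.run:
    - name: ip link add veth0 type veth peer name veth1
    - unless: ip link show veth0    # skip once the link already exists
```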
16:08 XenophonF I will sometimes use cmd.script if I have more complicated needs, e.g., https://github.com/irtnog/shibboleth-formula/blob/master/shibboleth/idp/scripts/build.sh
16:09 rem5 joined #salt
16:12 zer0def well, it isn't *THAT* complex to fall back onto `cmd.script`
16:21 Trauma joined #salt
16:22 woodtablet joined #salt
16:24 toastedpenguin anyone made use of AWS sns to run a salt state?
16:28 Praematura joined #salt
16:32 juntalis joined #salt
16:34 raspado joined #salt
16:36 sarlalian joined #salt
16:36 jab416171 anyone able to help?
16:39 gtmanfred toastedpenguin: there is an engine to listen to aws sns, and put stuff on the event stream so that the reactor could run with it
16:39 gtmanfred sorry, i think that might be sqs
16:40 gtmanfred yeah, it is sqs https://docs.saltstack.com/en/latest/ref/engines/all/salt.engines.sqs_events.html#module-salt.engines.sqs_events
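The engine gtmanfred links can be wired up with a master-config snippet along these lines (queue and profile names are placeholders); events from the queue then land on the event bus where a reactor can match them:

```yaml
engines:
  - sqs_events:
      queue: portal-asset-updates      # placeholder queue name
      profile: aws-sqs-credentials     # placeholder credentials profile in master config
```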
16:41 toastedpenguin gtmanfred: ah, SQS, yeah I suppose that would work better than SNS or at least prevent the need to have to figure out how salt listens for incoming emails...
16:42 vexati0n i wish we used SQS at my job
16:42 toastedpenguin trying to get salt to run an s3 sync to multiple web servers anytime someone uploads a new css they want published
16:42 toastedpenguin currently manual
16:42 vexati0n we just do that through bamboo
16:43 speedlight joined #salt
16:43 vexati0n probably not exactly what you're looking for tho
16:44 ChubYann joined #salt
16:45 toastedpenguin vexati0n: just looking to get salt to run the sync anytime S3 gets updated, don't want the sync to run on a scheduled basis
16:45 toastedpenguin gtmanfred: thx, this looks like it will do what we need
16:45 vexati0n yeah. ours is on git commit, so it doesn't just sync on an interval
16:45 vexati0n but it's not S3
16:46 gtmanfred toastedpenguin: awesome!
16:47 toastedpenguin vexati0n: yeah these are developers doing the changes, if it was we'd be using git, unfortunately we have account reps helping clients tweak their portals so they drop the files in a client specific directory
16:47 toastedpenguin at the moment it's on a file share which we have syncing every 10 min to S3, eventually all updates will be direct to s3, so I didn't want to also introduce a scheduled sync on each web server
16:48 toastedpenguin sorry *aren't developers doing the changes
16:48 vexati0n hmm... can you use a cmd.wait along a file.recurse or file.managed somehow ?
16:49 vexati0n i guess i'm sort of shooting in the dark here, idk how you have it all rigged up
16:50 vexati0n https://wri.tt
16:50 vexati0n this is kinda cool
16:50 vexati0n wrong window >>
16:51 toastedpenguin vexati0n: tried that, well at least I tried it for the initial deployment - minion does an initial sync from S3 bucket to a specific directory on the minion, didn't work well; found that salt invoking the aws s3 sync cli handles the sync better and then it only does a diff when stuff changes
16:52 onlyanegg joined #salt
16:52 toastedpenguin so now we install the aws cli on all minions and then execute the aws s3 sync to/from the target directory
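toastedpenguin's approach of having salt invoke the AWS CLI might look roughly like this as a state (bucket, path, and user are placeholders):

```yaml
sync-portal-assets:
  cmd.run:
    - name: aws s3 sync s3://example-portal-assets /var/www/portal
    - runas: www-data      # illustrative user; needs AWS credentials configured
```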
16:53 mdpolaris I am having a pillar rendering issue with gpg, when using a Syndic when orch is called from the MoM
16:54 mdpolaris here is the call: salt 'or-syndic-node-test2' saltutil.runner state.orch mods=orch_new_instance
16:54 mdpolaris I have narrowed down the issue to the gpg_keydir not being found in the gpg renderer. I checked the __opts__ dict and verified that it is not present
16:55 mdpolaris this all works when using the same orch state directly on the syndic
16:59 SaucyElf joined #salt
17:04 jas02 joined #salt
17:07 XenophonF vexati0n: any chance you can post your git commit sync thingy for our general edification?
17:08 Fiber^ joined #salt
17:09 impi joined #salt
17:10 cyborg-one joined #salt
17:11 SaucyElf joined #salt
17:15 edrocks joined #salt
17:16 vexati0n XenophonF: there isn't much to it. It's just a matter of linking Bamboo to the appropriate git repository/branch (if it's in Stash, you can trigger the plan on commit directly, otherwise you can set an appropriate interval to check the repo status). Once the trigger is defined, the action is just an SSH call to the master and a simple command to run the appropriate state.
17:18 vexati0n there's no direct SaltStack module for Bamboo unfortunately so you have to do it through SSH
17:19 jhauser_ joined #salt
17:23 wendall911 joined #salt
17:26 nsoinm joined #salt
17:27 cingeyedog joined #salt
17:29 nsoinm https://gist.github.com/anonymous/5ad8b6de47ec6aec403b91d5f16c4025 I am trying to understand this state file. Device is the id of all the minions that nfsmounts applies to. Then what is the dir?
17:29 mikecmpbll joined #salt
17:29 nsoinm I know it's an enumerate function but I haven't used one before
17:30 ub1quit33 joined #salt
17:31 tkojames joined #salt
17:35 brasticstack joined #salt
17:35 cliluw joined #salt
17:38 brasticstack Hi! Is there a proper way to do a pillar lookup inside an orchestration? I'm using salt.function, but salt.function doesn't seem to expect the output that pillar.get returns.
17:38 snc joined #salt
17:38 ivanjaros joined #salt
17:39 XenophonF joined #salt
17:44 pcn I haven't tried doing that, but maybe if you posted a gist of what you're trying and what you want, some brainstorming could ensue
17:46 jespada joined #salt
17:46 brasticstack I'm trying to do a pillar.get from a minion from within my orchestration. Once that's working, I'll use that data later in the orch. gist coming up
17:46 pipps joined #salt
17:47 onlyanegg joined #salt
17:48 greyeax joined #salt
17:50 numkem joined #salt
17:53 brasticstack @pcn: https://gist.github.com/neutronnnate/4122d923667bd1bf1c3302338afda453
17:54 rem5 joined #salt
17:55 brasticstack From what I'm seeing, pillar.get behaves correctly, but saltmod.function can't seem to parse the output.
17:58 babilen nsoinm: We really don't know without your pillar data
17:58 wendall911 joined #salt
17:58 babilen But I'd guess it is the local mount point
18:00 brasticstack The pillar data is a list containing two ip addresses. That same lookup against the target does work if I run 'salt pillar.get'.
18:00 brasticstack oops, different convo
18:01 greyeax joined #salt
18:05 jas02 joined #salt
18:06 beautivile joined #salt
18:07 brasticstack Just to verify: salt.function _is_ how you're supposed to call execution modules from within an orchestration. Right?
18:07 nixjdm joined #salt
18:09 gtmanfred what exactly are you trying to do? just call pillar.get on a bunch of minions? or do you want to use that data?
18:10 brasticstack I do want to use the data, later in the orchestration. But for now, I'm trying to call pillar.get on a single minion.
18:10 gtmanfred so, you won't be able to use the pillar data like that in the orchestration
18:11 gtmanfred but if you run `salt 'test*.hap*' 'docker:cluster:nodes'` what do you get?
18:11 gtmanfred sorry
18:11 gtmanfred `salt 'test*.hap*' pillar.get 'docker:cluster:nodes'`
18:11 gtmanfred what do you get?
18:12 brasticstack it's a list with two ip addresses. That does work just fine.
18:12 brasticstack I can paste if you'd like
18:13 gtmanfred yeah, then it might not like the return of the salt.function, but just fyi, any data you get back from that won't be usable in other stuff in the orchestration
18:13 brasticstack not even with load_yaml? I've made that work in regular states
18:14 gtmanfred if you want to just read the pillar file sure
18:14 gtmanfred but not if you are going to request the information from the minion
18:14 gtmanfred you might be able to put it in a publish.publish if the minion on the master can query the minion
18:14 gtmanfred but it would need to be in jinja
18:18 brasticstack I swear I've seen examples where the yaml returned from a salt command has been used as data in the state, but I can't find one right away. What does publish.publish do for me? It's the first I've seen mention of that, and I'm not fully grokking the documentation.
18:19 censorshipwreck joined #salt
18:20 gtmanfred it allows you to query other minions
18:20 gtmanfred from minions
18:21 gtmanfred so like the mine, but it isn't stored on the master
18:21 cliluw joined #salt
18:21 gtmanfred https://docs.saltstack.com/en/latest/ref/peer.html
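A sketch of the peer setup gtmanfred is pointing at (the target regex and allowed function are illustrative; the target/pillar key in the comment reuse brasticstack's earlier example):

```yaml
# master config: allow minions to call pillar.get on each other
peer:
  .*:
    - pillar.get

# then, in an SLS rendered on the calling minion:
# {% set nodes = salt['publish.publish']('test*.hap*', 'pillar.get', 'docker:cluster:nodes') %}
```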
18:22 brasticstack interesting! I'll have to dive more into that. Thanks!
18:22 gtmanfred yeah, there is no way to use information from other states in new states
18:22 gtmanfred there is __context__ but that does not do exactly that
18:22 gtmanfred and is only useful in modules
18:23 aldevar joined #salt
18:23 gtmanfred we are in the talking phase to allow returns from states to be used though https://github.com/saltstack/salt/pull/38469
18:23 saltstackbot [#38469][MERGED] [WIP] First run at adding "slots" | I would like to have a review of this idea, it is to allow for...
18:25 numkem joined #salt
18:26 it_dude joined #salt
18:27 brasticstack cool. It's not a big deal for me to hard-code that data for the orchestration I'm making, but in the future looking it up would be much nicer
18:27 brasticstack I'm going to give the publish.publish a shot first though
18:31 Praematura joined #salt
18:34 XenophonF Am I understanding import_json and |json correctly? import_json reads a JSON file into memory and deserializes it into a dictionary, while |json takes a dictionary and serializes it as a JSON-formatted string.
18:34 gtmanfred yes
18:34 gtmanfred no
18:34 gtmanfred opposite
18:34 gtmanfred |json takes a string, and reads it into a dictionary
18:34 XenophonF oh
18:35 gtmanfred other than that, it is correct
18:35 XenophonF so how might I output a JSON string?  i want functionality similar to |yaml_encode or |yaml
18:36 XenophonF that's really weird - the documentation claims that the |json filter should output JSON, not parse it
18:36 XenophonF https://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.jinja.html
18:36 gtmanfred so, |json is like |yaml right?
18:37 gtmanfred oh sorry, that is my bad
18:37 gtmanfred yeah, that is right
18:37 gtmanfred i forget what we did that we loaded the json from a dictionary...
18:37 gtmanfred it must have been load_json
18:37 XenophonF problem is, I'm doing something like {% import_json "file" as file %} and then later {{ file|json }}
18:37 XenophonF (this is in Pillar)
18:38 nsoinm Does a mine.update give all the minion information to the master?
18:38 XenophonF but when I look at the Pillar value, it isn't a string - it's a data structure and output as such by pillar.get
18:38 gtmanfred nsoinm: it runs all the mine_functions and sends it to the master
18:40 XenophonF i guess i'll go back to putting the JSON inline into a single-quoted YAML string or something
18:41 greyeax does anyone know how to handle using homebrew cask in salt?
18:41 Trauma joined #salt
18:41 XenophonF or do i have to chain |json|yaml_encode or |json|yaml_squote?
18:42 nsoinm @gtmanfred so when I do a mine.flush and run a state file https://gist.github.com/anonymous/45d998aa4139eed8c3f1a51d7cd26576 it does not create the files. Do I need to wait the mine_function interval?
18:42 gtmanfred you should not need to.
18:42 gtmanfred but you may need to wait for a bit for the master to store it
18:42 gtmanfred and make it available, because it gets processed through the fileserver like pillars and is asynchronous
18:43 gtmanfred so just because mine.update returns true, doesn't mean that the job is done, the master still has to store the data
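For reference, a minimal sketch of the mine round-trip under discussion (the chosen function is illustrative):

```yaml
# minion config (e.g. /etc/salt/minion.d/mine.conf): publish a function to the mine
mine_functions:
  network.ip_addrs: []

# later, in an SLS rendered after the master has finished storing the update:
# {% set ips = salt['mine.get']('*', 'network.ip_addrs') %}
```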
18:45 Antiarc Is there any way to specify an environment for the rvm.installed state (ie, so I can export RUBY_CONFIGURE_OPTS)?
18:45 Antiarc I'm trawling through the code and don't see any support for that, but am not sure if I'm perhaps missing something more globally supported
18:46 nsoinm So after I do mine.flush do I need to run mine.update or can the state file still work regardless of doing a mine.update?
18:46 XenophonF hm, adding |yaml_squote seems to work
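The pattern XenophonF lands on might look like this in a pillar file (file path and key name are illustrative):

```yaml
{# pillar/example/init.sls -- illustrative path #}
{% import_json "example/app.json" as app %}

# serialize back to JSON, then single-quote it so the YAML pass
# treats it as an opaque string rather than a data structure:
app_config_json: {{ app|json|yaml_squote }}
```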
18:49 gtmanfred Antiarc: probably worth while passing env: to it, and seeing if that gets passed to the cmd.run inside the rvm module
18:50 Antiarc gtmanfred: yeah, that's what I'm trying now
18:50 gtmanfred it does not, but you could add that :)
18:50 gtmanfred we would love a PR with that in it
18:52 Antiarc I'm attempting it via environ.setenv at the moment - not sure that's gonna work, but I'm giving it a shot
18:52 gtmanfred it might, but i would be kind of surprised
18:52 Antiarc I don't know if the cmds inherit the parent process's env or not
18:53 gtmanfred it inherits the environment of the salt minion command
18:53 gtmanfred or should
18:53 Antiarc okay, passing the `update_minion: True` to environ should do it in theory, then
18:53 Antiarc Though that's plenty hacky :)
18:53 gtmanfred what you could do is just set it in a dropin file in /etc/profile.d,
18:54 Antiarc yeah, environ.setenv by itself didn't work
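gtmanfred's /etc/profile.d suggestion as a state sketch (variable and value are illustrative); note a profile.d drop-in only affects login shells, so the minion process itself won't pick it up without further measures:

```yaml
ruby-build-env:
  file.managed:
    - name: /etc/profile.d/ruby_build.sh
    - contents: 'export RUBY_CONFIGURE_OPTS="--with-jemalloc"'   # illustrative value
    - mode: '0644'
```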
18:55 greyeax joined #salt
18:56 netcho_ joined #salt
18:59 XenophonF anyone using boto_iam.policy_present?
19:01 GMAzrael who checks the winrepo-ng for updates?
19:01 sh123124213 joined #salt
19:03 theblazehen joined #salt
19:04 karlthane joined #salt
19:04 XenophonF i'm getting an IndexError oh
19:04 gtmanfred From looking at the issues and stuff, you can submit a PR, but it's UtahDave, twangyboy and TheBigBear on the github repo
19:04 XenophonF policy_document is supposed to be a dictionary
19:04 XenophonF I get it now.
19:05 gtmanfred GMAzrael: https://github.com/saltstack/salt-winrepo-ng/pull/905
19:05 saltstackbot [#905][OPEN] Update Putty.sls | Version 0.68+ moves to a MSI file. Grains for x86/x64 for compatibility and not dealing with setting the program file variable. For loop for new versions can be extended with version numbers. The URL for downloading the file also changed.
19:05 XenophonF well the example is sure wrong
19:06 Antiarc Is there a readme I'm missing on how to get set up to run the salt test suite?
19:06 GMAzrael gtmanfred ?
19:06 Antiarc ah, found it in HACKING.rst
19:06 Antiarc Reading is fundamental!
19:07 gtmanfred GMAzrael: you can help update it
19:07 GMAzrael gtmanfred I created those pull requests
19:08 GMAzrael after fixing them from my bad git pulls
19:09 gtmanfred GMAzrael: ping @TheBigBear
19:09 gtmanfred Antiarc: https://docs.saltstack.com/en/latest/topics/tutorials/writing_tests.html
19:09 Antiarc gtmanfred: fantastic, thank you
19:09 gtmanfred Antiarc: to run all the tests like we do on jenkins, you can run the `git.salt` state from http://github.com/saltstack/salt-jenkins
19:09 gtmanfred I use this script
19:10 gtmanfred Antiarc: https://gist.github.com/gtmanfred/73937f85d57c6a3caacbd6eaaa3de00e
19:10 gtmanfred (change the default user to your git user)
19:10 Antiarc Looks good, thank you
19:11 gtmanfred I only actively run that setup.sh file on centos, so use the other distributions at your own risk
19:11 ProT-0-TypE joined #salt
19:13 amcorreia joined #salt
19:13 pipps joined #salt
19:14 nikdatrix joined #salt
19:14 pcn gtmanfred: this is a trivial spelling change: https://github.com/saltstack/salt/pull/40608
19:14 saltstackbot [#40608][OPEN] Fix confusing typo in thorium start page | The thorium auto deregistration example has a slightly confusing typo.  From my read of the docs it should `s/startreg/statreg` so I'm submitting this PR to make that effective....
19:15 gtmanfred ok, mike or nicole will merge it when they get there
19:16 pcn Thanks
19:17 pcn gtmanfred: Also, I tried pinging you on friday about the changes I'm making to the slack engine.  I was wondering about how much of the work there could/should be library code for chat agents
19:17 gtmanfred ¯\(°_o)/¯ up to you
19:17 gtmanfred just remember that if you make changes in the library code, it would need to be synced in _utils, instead of _engines
19:18 gtmanfred and also that there are some circumstances where it might not get reloaded if it lives in salt.utils
19:18 pcn Maybe I'm using the term library improperly here: https://github.com/pcn/salt
19:18 pcn Sorry, this is the right link: https://github.com/pcn/salt/blob/slack-client-features/salt/engines/slack.py
19:20 gtmanfred right, in salt.utils.slack
19:20 gtmanfred I stand by what I said
19:21 pcn Ah, I'm caught up with you now
19:21 pcn I see.
19:22 pcn Do you think that being able to lookup config from pillars is useful?
19:23 jhauser joined #salt
19:24 jas02 joined #salt
19:27 keltim joined #salt
19:28 mdpolaris when using “saltutil.runner state.orch” where does the __opts__ dict get populated from? I have discovered there are missing values, specifically gpg_keydir is not set
19:29 aldevar left #salt
19:30 jespada joined #salt
19:30 mdpolaris do i need to set the gpg_keydir in the minion config in order for that to be available?
19:34 greyeax joined #salt
19:35 Brew_ joined #salt
19:35 gtmanfred pcn: for engines?  not particularly, because engines 90% of the time run on masters, and the other 10% of the time would run on masterless minions
19:41 Brew_ joined #salt
19:48 Antiarc gtmanfred: https://github.com/saltstack/salt/pull/40610 - never written a PR for Salt before, let me know if there are any particular faux pas I need to correct :)
19:48 saltstackbot [#40610][OPEN] Add `env` support for rvm.installed and rvm.do, `opts` support for .installed | What does this PR do?...
19:58 rem5 joined #salt
19:59 gtmanfred looks good to me, will have to wait till one of our people who merge pull requests gets to it
20:01 pipps joined #salt
20:03 Flying_Panda joined #salt
20:03 Drunken_Panda joined #salt
20:04 Drunken_Panda question re orchestration: are requisites evaluated at that state's apply time or on the master before the orch has run? furthermore, is this check carried out on the minion or the master
20:05 Drunken_Panda eg unless: cmd, is this run on the minion or master for the retcode to decide if the state is run
20:06 gtmanfred it is all ordered at runtime
20:06 gtmanfred at compile time*
20:06 gtmanfred unless and onlyif are run at runtime
20:06 gtmanfred it depends on the requisite
20:06 Drunken_Panda of that state or the orch ?
20:06 gtmanfred both
20:06 gtmanfred if it has to do with the order, it is resolved at compile time
20:07 Drunken_Panda not order just unless and onlyif
20:07 gtmanfred if it has to do with just that one state, or if like onfail changes the path, it is done at runtime
20:07 gtmanfred unless and onlyif are run right before the state is run
20:07 Drunken_Panda cool are these executed on the minion specified in tgt or on the master
20:08 ProT-0-TypE joined #salt
20:08 gtmanfred they are executed on the minion
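A minimal sketch of what gtmanfred describes: the check command runs on the targeted minion immediately before the state body (state and commands are illustrative):

```yaml
# `onlyif` executes `nginx -t` on the minion right before the state runs;
# the reload only happens when the config test exits 0
reload-nginx:
  cmd.run:
    - name: systemctl reload nginx
    - onlyif: nginx -t
```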
20:08 Antiarc gtmanfred: cool, thanks. I'm in no rush - would take a while to bring our infrastructure up to a new point release anyhow :)
20:09 gtmanfred Drunken_Panda: https://github.com/saltstack/salt/blob/develop/salt/state.py#L774
20:09 Antiarc Just figured I can get it in so that next upgrade cycle I can fall back on that rather than custom install scripts
20:09 Drunken_Panda cool gt thanks :D that's how I figured, just having an issue which makes me question my life choices :D
20:12 SneakyPhil joined #salt
20:12 SneakyPhil Hi everybody
20:13 SneakyPhil Does anyone else use the pkgrepo.managed and 2 GPG keys for a single repository? If so, do you have issues when testing your highstate?
20:13 NightMonkey joined #salt
20:14 gtmanfred on what distro?
20:14 SneakyPhil centos 6 and 7
20:15 SneakyPhil Fedora exhibits it too, since it's yum
20:15 SneakyPhil The state always comes back as changing during a highstate test, but the actual highstate returns no change
20:16 theblazehen joined #salt
20:17 gtmanfred i did not know anyone used multiple gpg keys for one repo :/
20:17 gtmanfred it is probably just a logic error in the test=True logic of pkgrepo
20:17 gtmanfred would you open an issue on github? We would also love to have a pr if you don't mind taking a look
20:18 SneakyPhil I'll open a ticket!
20:18 SneakyPhil I'll attempt to figure out the python
20:18 SneakyPhil ty for the responsiveness
20:19 gtmanfred thanks! we really appreciate any help we get from the community!
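For reference, a sketch of the kind of state SneakyPhil is describing — pkgrepo.managed on a yum-based distro with two GPG keys (repo name, URLs, and key paths are invented); yum itself accepts several space-separated URLs in gpgkey, and this combination is what reportedly shows spurious changes under test=True:

```yaml
example-repo:
  pkgrepo.managed:
    - humanname: Example Repository
    - baseurl: https://repo.example.com/el$releasever/
    - gpgcheck: 1
    # two keys for one repo, space-separated
    - gpgkey: file:///etc/pki/rpm-gpg/KEY-ONE file:///etc/pki/rpm-gpg/KEY-TWO
```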
20:21 GMAzrael Is there a plan to do an inplace upgrade from Mitaka to Newton?
20:21 GMAzrael wrong channel
20:22 gtmanfred good luck with that, openstack is always a pain to do inplace upgrades :P
20:22 SneakyPhil I heard the same thing today ^
20:22 SneakyPhil The solution presented was stand up a new cluster and migrate over
20:22 whytewolf I upgraded liberty to mitaka without issue
20:22 whytewolf inplace
20:23 whytewolf using salt :P
20:26 Drunken_Panda right, again with the stupid questions, apologies: I have an unless and an onlyif req both running nginx -t. if nginx -t reports 0 the unless runs so that's ok, but if it reports 0 nothing runs ...
20:27 Drunken_Panda * if it reports 1 nothing runs
20:27 Drunken_Panda on two separate states
20:27 Drunken_Panda in an orch state
20:28 gtmanfred there is a bug
20:28 gtmanfred if unless and onlyif are both specified, unless is the only one used iirc
20:29 Drunken_Panda in the whole orch state?
20:29 Drunken_Panda or only on the same state?
20:30 NightMonkey joined #salt
20:30 Drunken_Panda this has literally been making me question life lol :D
20:32 Drunken_Panda so dropping the check into the individual states would be the workaround I guess?
20:32 gtmanfred on the specific state
20:32 gtmanfred not the whole file
20:32 gtmanfred and not even if you specify multiple states inside one state id
20:32 prg3 joined #salt
20:33 Drunken_Panda This is what im doing: https://gist.github.com/DrunkenAngel/a32d3d5b2f502912b4645e76eed7a712
20:35 Praematura joined #salt
20:36 skatz joined #salt
20:36 Drunken_Panda wait think ive found it :D
20:45 asuina joined #salt
20:46 tapoxi joined #salt
20:47 Drunken_Panda nope, didn't :D
20:47 zfx joined #salt
20:48 Drunken_Panda do you have that bug on gitlab handy gt ?
20:48 asuina If I use "mine_functions:" in pillar with "grains.items:" below it would it just give me all the grain items from the minions? I tried using "- id" to limit grain items from getting everything but my state file still works. The state file relies on grains.items host
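Regarding asuina's question: mine_functions with a bare grains.items does return all grains; to store only selected grains in the mine, grains.item with an argument list is the usual form. A sketch (untested here, grain names illustrative):

```yaml
# pillar (or minion config)
mine_functions:
  grains.items: []    # stores everything grains.items returns
# versus restricting to specific grains:
#  grains.item:
#    - id
#    - host
```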
20:49 gtmanfred Drunken_Panda: that bug does not apply to how you are using onlyif and unless
20:51 onlyanegg joined #salt
20:51 gtmanfred that is on different states
20:51 gtmanfred if you did
20:51 gtmanfred /tmp/test:
20:51 gtmanfred file.touch:
20:52 gtmanfred - onlyif: nginx -t
20:52 gtmanfred - unless: nginx -t
20:52 gtmanfred then that would be the error
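Spelling out the distinction gtmanfred pastes above — the bug only bites when both requisites sit on the same state; splitting them across states avoids it (all IDs and paths hypothetical):

```yaml
# buggy form: with both on one state, unless reportedly ends up
# being the only check consulted
/tmp/test:
  file.touch:
    - onlyif: nginx -t
    - unless: nginx -t

# safe form: one check per state
touch_when_config_ok:
  file.touch:
    - name: /tmp/test-ok
    - onlyif: nginx -t

touch_when_config_bad:
  file.touch:
    - name: /tmp/test-bad
    - unless: nginx -t
```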
20:54 Drunken_Panda fair play
20:55 Tanta joined #salt
20:56 pipps joined #salt
21:00 arif-ali joined #salt
21:04 pipps joined #salt
21:11 NightMonkey joined #salt
21:13 zulutango joined #salt
21:15 nikdatrix joined #salt
21:17 pcn @gtmanfred there doesn't seem to be a start/restart command for engines, so a dynamic config source seems like it's necessary for something like command running permissions and preferences
21:17 gtmanfred you could use sdb then for your config options
21:17 gtmanfred but once it is set, you really shouldn't be needing to change it I don't think
21:18 gtmanfred but again, 90% of people doing this are going to be running the engine on the master, and the master doesn't have pillars
21:22 pcn gtmanfred: I can use local.cmd to get pillar data to the engine, which seems to work fine no matter where the engine is running.  That doesn't require sdb with a separate backend.
21:22 pcn As a user I want to be able to leverage the simple aspects of salt to get this benefit, I think we're looking at the situation differently.
21:28 toastedpenguin setting up the Salt sqs engine, is there a way to verify the salt master is polling the sqs queue?
21:29 hemebond toastedpenguin: Debug logging.
21:29 hemebond Maybe events.
21:30 hemebond Doesn't appear to be any events.
21:34 spiette joined #salt
21:39 PatrolDoom joined #salt
21:44 jas02 joined #salt
21:46 pipps joined #salt
21:47 toastedpenguin hemebond: that helped, didn't have boto installed on the master
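The setup under discussion looks roughly like this in the master config — the sqs_events engine needs boto installed on the master, and the queue and profile names below are hypothetical:

```yaml
# /etc/salt/master.d/engines.conf
engines:
  - sqs_events:
      queue: my-salt-queue     # hypothetical SQS queue name
      profile: sqs_profile     # boto credentials profile defined elsewhere
```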
21:48 keldwud joined #salt
21:48 keldwud joined #salt
21:49 woodtablet i am looking for an idea for a state file. i want to run a script if a file doesn't exist. so I know about file.absent, but how would one run the script if the file didn't exist? (it's svn pull with a particular username and password)
21:50 hemebond woodtablet: cmd.run or cmd.script with an unless or onlyif parameter.
21:51 whytewolf if the script creates the file. cmd.run or cmd.script with creates
21:51 woodtablet ahh
21:51 woodtablet hemebond: thanks!
21:51 woodtablet whytewolf: thanks!
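A sketch of the two suggestions above — either let Salt check for the file with `creates` (best when the script itself produces the file), or use a shell test via `unless` (script name and paths invented):

```yaml
run_svn_pull:
  cmd.script:
    - source: salt://scripts/svn-pull.sh   # hypothetical script
    # skip the script if this file already exists
    - creates: /opt/app/checkout/some-file
    # or, equivalently, with an explicit shell test:
    # - unless: test -e /opt/app/checkout/some-file
```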
21:54 SaucyElf joined #salt
21:55 SaucyElf joined #salt
21:58 willprice joined #salt
22:00 fgimian joined #salt
22:00 pipps joined #salt
22:01 ProT-0-TypE joined #salt
22:08 toastedpenguin is it possible to execute the aws cli from the master or via a state or is there a better approach?
22:08 tkojames Ok I have a dumb question. I have a salt master set up in the default location /srv/salt. When I leave sls files in the salt folder they work. When I try to create a subfolder under salt like /srv/salt/users and run state.highstate, I get an error saying no matching sls found for base. How do I get stuff in the subfolder to work when running highstate?
22:08 hemebond toastedpenguin: There are no boto runner modules that I know of.
22:09 hemebond toastedpenguin: But you could install a minion.
22:10 hemebond tkojames: Have you created a top.sls file?
22:10 hemebond highstate uses top.sls to determine which states to apply.
22:10 toastedpenguin hemebond: I am trying to find a way to sync an s3 bucket when a minion is first deployed, "aws s3 sync C:\ s3://bucket/"
22:10 toastedpenguin was going to install a batch file and call that
22:11 hemebond toastedpenguin: cmd.run can do that for you.
22:11 toastedpenguin let me see if that works
22:12 toastedpenguin might have to be explicit with where the aws program is located
22:12 hemebond Generally, yes.
22:13 hemebond with cmd.run it's best to provide the fully-qualified path to the executables.
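Putting hemebond's advice together for this Windows-minion case — a sketch only; the aws.exe install path is a guess and should be adjusted to wherever the CLI actually lives:

```yaml
sync_to_s3:
  cmd.run:
    # fully-qualified path to the executable, as suggested above;
    # the install location shown here is hypothetical
    - name: '"C:\Program Files\Amazon\AWSCLI\aws.exe" s3 sync C:\ s3://bucket/'
    - shell: cmd
```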
22:14 pipps joined #salt
22:15 edrocks joined #salt
22:16 mikecmpbll joined #salt
22:16 tkojames hemebond: I have the sls file in top file. When I move it from the base folder into a subfolder it fails to find it. When I move it back everything works again.
22:16 hemebond tkojames: Show me you top.sls
22:16 hemebond paste it somewhere (not in IRC)
22:17 whytewolf tkojames: if you have the file under a directory you need to tell the highstate that the file is under a directory. so say you have a file named bar.sls under the directory foo you would access it with foo.bar
22:18 whytewolf you can use init.sls to call just the directory so users/init.sls could be called with - users or - users.init
22:23 tkojames This is my topfile very basic. https://pastebin.com/qmBtcE07
22:24 bbradley joined #salt
22:25 whytewolf tkojames: so you moved frank.sls under users? then it would become users.frank
22:25 bbradley joined #salt
22:26 NightMonkey joined #salt
22:28 heaje joined #salt
22:28 tkojames Whytewolf: Wow that makes perfect sense now.... Thanks for helping me with such a basic question.
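The layout whytewolf describes, spelled out (the user and file names are illustrative):

```yaml
# /srv/salt/top.sls
base:
  '*':
    - users.frank   # maps to /srv/salt/users/frank.sls
    - users         # shorthand for /srv/salt/users/init.sls
```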
22:29 hemebond (have another run through the tutorial)
22:39 hoonetorg joined #salt
22:41 adelcast joined #salt
22:43 pipps joined #salt
22:47 pipps99 joined #salt
22:53 toastedpenguin is there a way to suppress the output when a script is run via cmd.script or at least get less verbose output?
22:53 hemebond It just returns whatever the script does.
22:54 toastedpenguin hemebond: yeah, it's just that in some cases it's rather large output
22:55 toastedpenguin might not be possible, just like to eliminate the 20 sec output
22:55 hemebond Pipe it into /dev/null or something
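Two ways to quiet things down, as a sketch (script paths are hypothetical): the cmd states accept an output_loglevel argument to keep large output out of the logs, or the output can simply be discarded at the shell level as hemebond suggests:

```yaml
run_big_script:
  cmd.script:
    - source: salt://scripts/big-job.sh   # hypothetical script
    - output_loglevel: quiet              # don't log the (large) output

silent_cmd:
  cmd.run:
    # discard output entirely at the shell level
    - name: /usr/local/bin/big-job.sh > /dev/null 2>&1
```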
23:07 jas02 joined #salt
23:12 swa_work joined #salt
23:16 nikdatrix joined #salt
23:25 mosen joined #salt
23:48 sh123124213 joined #salt
23:48 karlthane joined #salt
