
IRC log for #salt, 2016-11-17


All times shown according to UTC.

Time Nick Message
00:05 nicksloan joined #salt
00:20 rem5 joined #salt
00:25 Klas joined #salt
00:31 jas02 joined #salt
00:32 systo joined #salt
00:32 Deliant joined #salt
00:35 mikecmpbll joined #salt
00:38 nickabbey joined #salt
00:43 Renich joined #salt
00:49 pipps joined #salt
00:52 JPT joined #salt
00:52 sjorge joined #salt
00:52 sjorge joined #salt
00:56 twodayslate joined #salt
01:01 watersoul joined #salt
01:01 edrocks joined #salt
01:02 davidone_ joined #salt
01:05 akhter joined #salt
01:10 pdayton joined #salt
01:17 pdayton joined #salt
01:17 xbglowx joined #salt
01:17 pdayton joined #salt
01:19 pdayton joined #salt
01:24 pdayton joined #salt
01:25 pdayton joined #salt
01:32 jas02 joined #salt
01:41 tiwula joined #salt
01:45 fracklen joined #salt
01:47 pdayton joined #salt
01:48 pipps joined #salt
01:49 pdayton joined #salt
01:56 pdayton joined #salt
01:59 netcho joined #salt
02:01 pdayton joined #salt
02:03 pdayton joined #salt
02:07 pdayton joined #salt
02:07 pdayton joined #salt
02:10 pcn Hi, does anyone know if there's been any work on https://github.com/saltstack/salt/issues/10322?
02:10 saltstackbot [#10322][OPEN] How to add EBS volumes by VolumeID in EC2? | This seems to be an AWS feature that never made it to the new EC2 module....
02:19 Nahual joined #salt
02:33 jas02 joined #salt
02:35 MTecknology Does |- in jinja strip trailing/leading whitespace?
02:36 MTecknology err... yaml
02:36 hemebond YAML does strip leading whitespace, yes.
02:36 MTecknology where | does not?
02:37 hemebond That does too.
02:37 hemebond All leading whitespace.
02:37 MTecknology What's the difference?
02:37 hemebond | is just like """ in Python
02:38 MTecknology What is |- ?
02:39 hemebond Oh. Strips whitespace.
02:40 MTecknology cool, thanks!
02:40 hemebond Oh, trailing.
02:40 hemebond I was trying to help someone a while ago with a whitespace issue (they wanted leading spaces).
02:40 hemebond And I couldn't find any method that worked.
02:41 hemebond |- just removes the trailing newline.
02:41 hemebond | leaves it in
02:41 MTecknology ah, we have literal "\n" being printed in configs
02:41 hemebond |+ will end up with two newlines.
02:41 hemebond That's odd.
02:42 MTecknology yup, but I don't want to ask how/why. I'm shooting for a quick/easy solution and this would do it. :P
02:46 MTecknology we'll find out
02:46 MTecknology yup, that'll do :)
02:47 ilbot3 joined #salt
02:47 Topic for #salt is now Welcome to #salt! | Latest Versions: 2015.8.12, 2016.3.4 | Support: https://www.saltstack.com/support/ | Logs: http://irclog.perlgeek.de/salt/ | Paste: https://gist.github.com/ (please don't multiline paste into channel) | See also: #salt-devel, #salt-offtopic | Ask with patience as we are volunteers and may not have immediate answers
02:59 ict4ngo joined #salt
03:00 ict4ngo hello all, trying salt/salt-cloud for the first time
03:01 bastiand1 joined #salt
03:02 evle joined #salt
03:03 ict4ngo with salt-cloud I'm trying to spin up an instance but I get "the key pair does not exist". I know it exists; I just created it and put it in the right folder with 0400 permissions
03:03 edrocks joined #salt
03:03 iggy exists in the cloud provider?
03:03 ict4ngo yup it exists
03:04 hemebond *gasp*
03:04 hemebond MTecknology: |2
03:05 hemebond Then put your spaces and newlines at the start.
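A compact summary of the block-scalar indicators discussed above (keys are illustrative; each comment states the resulting value):

```yaml
keep: |       # "line one\nline two\n"   -- "|" keeps the single trailing newline
  line one
  line two
strip: |-     # "line one\nline two"     -- "|-" strips the trailing newline
  line one
  line two
keep_all: |+  # "line one\nline two\n\n" -- "|+" also keeps trailing blank lines
  line one
  line two

indented: |2  # indentation indicator 2: leading spaces beyond 2 stay in the value
    so this value begins with two leading spaces
```

None of these strip leading whitespace beyond the declared indentation, which is why hemebond's |2 trick can preserve leading spaces.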
03:07 ict4ngo so I went on creating a new pair doing  salt-cloud -f create_keypair  ec2 keyname=createdwithsaltcloud
03:08 ict4ngo so I got the following answer         - https://ec2.us-east-1.amazonaws.com/?Action=CreateKeyPair&KeyName=createdwithsaltcloud&Version=2014-10-01
03:09 ict4ngo sounds legit but I can't see it among my other keypairs
03:11 ict4ngo and salt-cloud -f show-keypair ec2 keyname=createdwithsaltcloud  give me cloud provider alias, for the 'ec2' driver, does not define the function 'show-keypair'
03:12 ict4ngo so I created a keypair but I don't know where it is, and the one that works (I manually tested it) does not enable me to create an instance
03:17 cliluw joined #salt
03:17 ict4ngo going to try another provider just in case
03:30 ict4ngo ah ah ah that's a good joke sorry guys the credentials were on another account it's too late
03:30 cyteen joined #salt
03:30 ict4ngo good night I feel so silly good night
03:33 pdayton joined #salt
03:34 jas02 joined #salt
03:35 hemebond Do people normally license formulas under the Apache license?
03:36 hemebond Apparently so.
03:47 RandyT joined #salt
03:49 akhter joined #salt
03:59 bastiandg joined #salt
04:00 netcho joined #salt
04:06 edrocks joined #salt
04:26 informant1 joined #salt
04:29 pdayton joined #salt
04:32 nickabbey joined #salt
04:34 jas02 joined #salt
04:39 rdas joined #salt
04:41 donmichelangelo joined #salt
04:43 systo joined #salt
04:46 fracklen joined #salt
05:00 iggy I think everybody just copies what they saw there
05:13 onlyanegg joined #salt
05:20 pipps joined #salt
05:23 ub1quit33 joined #salt
05:27 Laogeodritt joined #salt
05:29 impi joined #salt
05:36 jas02 joined #salt
05:52 systo joined #salt
05:56 systo joined #salt
05:57 systo joined #salt
06:01 netcho joined #salt
06:03 bocaneri joined #salt
06:04 systo joined #salt
06:06 systo joined #salt
06:09 systo joined #salt
06:11 systo joined #salt
06:11 awiss joined #salt
06:12 systo joined #salt
06:12 jacksontj joined #salt
06:13 systo joined #salt
06:15 systo joined #salt
06:19 samodid joined #salt
06:22 cliluw joined #salt
06:25 systo joined #salt
06:28 systo joined #salt
06:36 jas02 joined #salt
06:56 jas02 joined #salt
06:59 rdas joined #salt
07:00 rdas joined #salt
07:17 kurzweil joined #salt
07:24 brantfitzsimmons joined #salt
07:25 nidr0x joined #salt
07:27 brantfitzsimmons Any ideas on how to determine that the cmd/command passed to dockerng.running has completed?
07:31 kurztest joined #salt
07:34 kurz joined #salt
07:45 yuhlw______ joined #salt
07:59 jas02 joined #salt
08:02 netcho joined #salt
08:10 edrocks joined #salt
08:21 ronnix joined #salt
08:24 netcho joined #salt
08:31 JohnnyRun joined #salt
08:36 fracklen joined #salt
08:38 fracklen joined #salt
08:46 krymzon joined #salt
08:48 samodid joined #salt
08:57 rdas joined #salt
09:07 mikecmpbll joined #salt
09:09 sh123124213 joined #salt
09:09 keimlink joined #salt
09:14 awiss joined #salt
09:16 bluenemo joined #salt
09:23 florianb joined #salt
09:27 jas02 joined #salt
09:31 keimlink joined #salt
09:33 Rumbles joined #salt
09:35 Mattch joined #salt
09:38 ws2k3 is it possible to add a delay in the cmd.run? so run command on server. sleep 2 seconds then run it on the next server?
09:45 s_kunk joined #salt
09:45 Mattch joined #salt
09:47 florianb ws2k3: the `batch_size`-option is probably what you're searching for: https://docs.saltstack.com/en/latest/topics/targeting/batch.html
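florianb's batch suggestion as a CLI sketch (target and command are placeholders). Batch mode serializes the run rather than inserting a pause; appending a sleep to the command approximates the fixed delay ws2k3 asked about:

```shell
# Run on one minion at a time (-b is short for --batch-size; '25%' also works).
salt -b 1 '*' cmd.run 'uptime'

# Approximate a 2-second gap between servers by sleeping inside the command.
salt -b 1 '*' cmd.run 'service nginx restart; sleep 2'
```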
09:52 samodid joined #salt
09:56 samodid joined #salt
09:57 xet7 joined #salt
10:11 msn2 joined #salt
10:11 amcorreia joined #salt
10:12 ronnix joined #salt
10:16 inad922 joined #salt
10:27 inad922 joined #salt
10:28 jas02 joined #salt
10:32 Derailed joined #salt
10:35 akhter joined #salt
10:43 inad922 joined #salt
10:47 o1e9 joined #salt
11:04 davidone joined #salt
11:12 edrocks joined #salt
11:16 amontalban joined #salt
11:16 amontalban joined #salt
11:18 fannet joined #salt
11:18 armin joined #salt
11:25 fannet joined #salt
11:26 fannet_ joined #salt
11:27 fannet__ joined #salt
11:29 jas02 joined #salt
11:37 armin joined #salt
11:56 felskrone joined #salt
11:57 armin joined #salt
12:05 philhuk joined #salt
12:19 gpampara joined #salt
12:35 numkem joined #salt
12:43 ws2k3 im using cmd.run to execute a script. salt returns retcode -15 to me; what does that mean?
12:46 netcho joined #salt
12:48 manji ws2k3, does your minion log say something useful ?
12:49 ws2k3 manji no not really, it does show what it did but it just stopped, no error, nothing
12:54 ws2k3 manji only thing i could find is that the script stops at this line pkill -f nginx
12:54 ws2k3 thats the last thing salt executed. after that i get the retcode: -15 in salt
12:55 aidin joined #salt
12:55 _weiwae__ joined #salt
12:56 _weiwae__ Hi, I've received a bunch of salt state files from someone, and I'd like to run one on a specific subset of minions that I have setup through salt cloud.  I'm having trouble finding the syntax.
12:57 _weiwae__ Is it something like sudo salt '*mongodb*' monogdb/init.sls ?  (I have mongodb minions and I want to run the state init.sls in my local mongodb folder.)
12:58 hlub I think there is something wrong with the pillar refreshing of version 2016.3.4 when running highstate.
12:58 hlub I've seen several questions here and experienced this a couple of times myself.
12:58 netcho joined #salt
12:58 manji ws2k3, does it pkill nginx? or it fails to do so
12:59 ws2k3 manji it kills nginx successfully, I have placed echo beforekill and echo afterkill before and after the pkill. the beforekill is shown, the afterkill is not
13:00 manji _weiwae__, salt '*mongodb*' state.sls monogdb/init
13:00 manji or just monogdb
13:00 manji be careful with it :p
13:01 _weiwae__ Ok
13:01 _weiwae__ so I have my init.sls file that I want to run currently in ~/states/mongodb/init.sls
13:01 hlub manji: only mongodb if you wish to run the init.sls from that dir
13:01 manji hlub, yep
13:01 _weiwae__ do I need to move the files into etc/salt, or if I run the command from within the ~/states/ folder I should be fine?
13:02 manji _weiwae__,  if you are running this on a salt-master
13:02 _weiwae__ yes, I'm running this from the salt master
13:03 _weiwae__ sudo salt '*mongodb*' test.ping gave me the result I expected.
13:03 haam3r weiwae__ is "~/states/ " configured in your file_roots as well?
13:03 _weiwae__ No, its just files I pulled from my git repo
13:04 manji _weiwae__, you should then check the master file and find the file_roots: key
13:04 haam3r so in /etc/salt/master you have to configure the master to look for the sate files in the right folder..as manji said..look for the file_roots: key
13:05 _weiwae__ ok, I should uncomment out the line that starts with #base: ?
13:06 _weiwae__ or should I be adding my own file_roots: base: - ~/states ?
13:09 _weiwae__ I guess what I'm asking is if I need to create /srv/salt and put files there or if I can just point to my relative home directory
13:10 hlub _weiwae__: you are running salt-master with a non-root user?
13:11 _weiwae__ correct
13:11 _weiwae__ I'm running salt with sudo
13:14 edrocks joined #salt
13:18 _weiwae__ Thanks for your help. Looks like I need to read up on pillars and groups now and config those as well.
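The file_roots change discussed above, as a master-config sketch (the path is hypothetical; use an absolute path, since "~" is not expanded here):

```yaml
# /etc/salt/master -- point the fileserver at the directory holding the states.
file_roots:
  base:
    - /home/weiwae/states   # example path; /srv/salt is the default
```

After editing, restart salt-master, then `salt '*mongodb*' state.sls mongodb` applies mongodb/init.sls as manji described.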
13:22 ws2k3 manji even on a service nginx stop/start it crashes with retcode:-15 but i cant figure out why
13:26 manji ws2k3, what salt version are we talking about?
13:26 manji also, your master and minions are same version ?
13:26 ws2k3 manji 2016-11-17 14:25:16,793 [salt.loaded.int.module.cmdmod][ERROR   ][25547] Command 'pkill -f nginx;/data/script/upgradenginx.sh;service nginx restart' failed with return code: -15
13:27 ws2k3 manji salt 2015.5.3 (Lithium)
13:29 ws2k3 manji minion version Salt: 2016.3.4
13:29 manji oh
13:29 edrocks joined #salt
13:30 amontalban joined #salt
13:30 amontalban joined #salt
13:30 manji ws2k3, there are many loose ends here
13:31 jas02 joined #salt
13:31 ws2k3 what do you mean?
13:31 manji something might not be exiting properly / might not return an exit code salt expects
13:31 ws2k3 manji its not an issue that the minion is another version then the master?
13:31 manji we can't rule out that having different versions is not causing any issues as well
13:32 manji yeah, I was writing just that :p
13:32 ws2k3 hmm strange. i installed master from the apt repo from saltstack, and the minions are installed with the script saltstack provided
13:33 manji what linux version/distro are you running ?
13:33 ernescz joined #salt
13:33 ws2k3 master is debian 7 minion is ubuntu 12.04
13:33 manji both are pretty old :(
13:34 ws2k3 manji yeah i know
13:34 manji well, I'd choose between upgrading
13:34 awiss joined #salt
13:34 manji ori downgrading
13:34 manji or*
13:34 XenophonF ws2k3: install master/minions from repos listed here - https://repo.saltstack.com
13:34 manji and start off by having the same salt version on both
13:34 manji and then check again
13:34 XenophonF dunno if ubuntu 12.04 is still supported
13:35 manji well it should be, it is an LTS release after all
13:37 ws2k3 XenophonF i did install it using that method but i think for ubuntu 12.04 salt provides a different version than for debian 7
13:41 ernescz hello everyone! I have a question about 'salt.states.event.wait' - when an event sent to the master triggers another state through the reactor, does the calling state (that sent the event) have to wait for the new state to finish?
13:41 msn joined #salt
13:42 ernescz or is this the expected behavior - just send the event and continue on?
13:43 cDR joined #salt
13:47 XenophonF ernescz: you might want to ask again in a few hours
13:47 XenophonF or try the mailing list
13:48 ernescz XenophonF: thank you, I'll try again in a few hours.
13:49 manji does anyone know if it is possible to split the roster file into smaller ones ?
13:49 manji or have multiple roster files?
13:49 ernescz Though I'm starting to believe that state.orchestrate runner could be more suited to the task. Thanks.
13:52 Neighbour Is there a way to delay-load-and-parse a statefile until an earlier statefile has been executed? I have a state.sls that installs required software (including a postgres client), and then another state that uses module.run:postgres.psql_query to determine if a database needs an initial seed. But the 2nd statefile won't compile because there's no psql binary present (which is installed by the 1st
13:52 Neighbour statefile)
13:52 AndreasLutro manji: it is possible to specify the roster file with a command line argument, I don't know about having multiple. I know you can use jinja in it so maybe jinja includes could help you
13:52 Neighbour (both statefiles are included in another statefile that initializes the entire minion)
13:52 ronnix joined #salt
13:54 manji AndreasLutro, it crossed my mind
13:55 manji it is all about readability
13:55 awiss_ joined #salt
13:55 manji a looong flat file is not as comfy as eg one file per host
13:56 amontalb1n joined #salt
13:57 aberdine joined #salt
13:58 ws2k3 manji i have upgraded the master version and now master and minion are the same version. but the issue is still there
13:58 manji do a saltutil.sync_all
13:58 manji and pillar refresh etc
13:59 traph joined #salt
13:59 traph joined #salt
13:59 manji although I am not optimistic
14:01 akhter joined #salt
14:03 ws2k3 manji that saltutil.sync_all, was that meant for me?
14:03 manji ws2k3, yes, sorry
14:04 ws2k3 ah np. i dont use pillar i just use cmd.run on the master console
14:04 manji ws2k3, also tell us what happens when you
14:04 manji salt 'server' service.restart nginx
14:06 manji (it will restart your nginx)
14:07 manji ws2k3, also if it is possible, showing us your upgradenginx.sh script would help
14:08 jeddi joined #salt
14:08 manji the error you are getting is weird nevertheless
14:15 ws2k3 manji salt 'server' service.restart nginx worked fine it returned true
14:15 manji so it is not an issue in 'service nginx restart'
14:16 manji damn, I was hoping the init script was doing something dodgy :p
14:16 ws2k3 manji yeah i understand. here is the script http://pastebin.com/DKjfUgLV
14:17 manji ws2k3, I trust it stops on line 22 ?
14:18 manji damn, nothing weird on this script
14:18 ws2k3 manji yes true, it stops executing at line 22
14:18 ws2k3 manji line 22 the pkill -f gets executed and after that nothing, the afterkill echo is not appearing
14:18 manji unless pkill returns something funny
14:19 manji I don't have any other ideas
14:19 manji can you replace pkill with service nginx stop ?
14:20 ws2k3 manji when i run this : salt --batch-size 1 'server' cmd.run "pkill -f nginx;/data/ton/upgradenginx.sh" it immediately stops
14:21 florianb ws2k3: manji: could it be that `pkill` kills the invoking process because of the -f option?
14:21 ws2k3 manji 2016-11-17 15:19:38,361 [salt.loaded.int.module.cmdmod][ERROR   ][17400] Command 'pkill -f nginx;/data/test/upgradenginx.sh' failed with return code: -15
14:21 ws2k3 florianb pkill -f is just for killing processes by name
14:22 ws2k3 lol florianb. im an idiot!
14:22 fracklen_ joined #salt
14:22 ws2k3 florianb of course it kills itself.......
14:22 manji no
14:23 manji wait
14:23 ws2k3 pkill -f nginx would kill the upgradenginx.sh cause nginx is in the name....
14:23 manji oh that
14:23 manji LOL yes
14:24 ws2k3 manji shall we jump off a bridge together now? lol.... thx florianb for pointing that out
14:24 florianb Hehehe.. i'm glad i had a lucky shot! :*D
14:26 manji hehe
14:26 ws2k3 florianb yeah i feel incredibly stupid now. i've already been working on this for 2 hours lol wtf
14:26 manji ws2k3, I was reading that -15
14:26 manji is what upgradenginx.sh
14:26 manji said that happened to it
14:27 manji 15 = SIGTERM
14:27 ws2k3 -15 is a sigterm maybe?
14:27 ws2k3 haha lol okay cool
14:28 manji which, as florianb said, is what pkill sent to your script
14:28 manji so do a service nginx stop :p
14:28 Tanta joined #salt
14:29 ws2k3 manji i have to do a pkill cause i have some servers where the init script is screwed, and there it can't do service nginx stop
14:29 ws2k3 but pkill did its job just fine. it did its job a little too well
14:29 manji oh I see
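The -15 manji decodes above is how Python reports a child killed by a signal: subprocess stores the negative signal number in returncode, and Salt's cmdmod surfaces it as the retcode. A minimal sketch:

```python
import signal
import subprocess

# Start a long-running child, then send it SIGTERM -- as `pkill -f nginx`
# did to the upgradenginx.sh script above (its command line matched "nginx").
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGTERM)
proc.wait()

# For a signal-killed child, Popen.returncode is the negative signal number.
print(proc.returncode)  # -15 on POSIX, since SIGTERM == 15
```

So retcode -15 means the command was SIGTERMed, not that it exited with an error status of its own.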
14:31 ws2k3 im gonna try again
14:32 jas02 joined #salt
14:32 armguy joined #salt
14:34 ws2k3 manji worked as a charm great :D
14:34 POJO joined #salt
14:34 manji hahaaa
14:34 manji :D
14:34 florianb great! :D
14:35 ws2k3 thx SO much manji and florianb!
14:37 brantfitzsimmons Is there a way to execute commands in a running Docker container that make use of environment variables defined when starting the container using dockerng.running?
14:38 brantfitzsimmons When attempting to use them in a command sent to the container via dockerng.run the env. vars. are not set.
14:40 Tanta it's a different environment
14:40 Tanta when you use containers, they are contained, and don't share environments with the managing host
14:41 Tanta you might be able to pass in some vars, though, using dockerng.run: name: "export foo={{ var1 }}; export bar={{ var2 }}; command"
14:41 catpig joined #salt
14:44 aphor brantfitzsimmons you could run a minion inside a container..
14:44 aphor People misunderstand Docker.
14:45 whatevsz joined #salt
14:45 brantfitzsimmons So, the environment variables that are sent during the container creation process in dockerng.running don't set them permanently in the container.
14:45 whatevsz hey guys! is there any way to easily apply pillar data only to a single minion, without using external pillars?
14:46 aphor Solomon Hykes hates systemd and init and packages.. so he reinvented both.
14:46 whatevsz i dont really want to litter my pillar/top.sls with entries for every single minion ...
14:47 brantfitzsimmons Nice.
14:48 aphor whatevsz: I can't even imagine a world where such a thing could exist. Can you describe what you're wishing for?
14:49 brantfitzsimmons Tanta: I think I'll use your suggestion and export the vars as a prefix to the command.
14:49 gtmanfred whatevsz: you would either need to use top.sls, or an external pillar
14:49 brantfitzsimmons aphor: I'll keep your suggestion in mind for longer running containers.
14:49 whatevsz ok, so i have a DNS master and slave pair. both inherit generic DNS data from a common .sls pillar
14:49 brantfitzsimmons Thanks all.
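Tanta's export-prefix workaround as a CLI sketch. The container and script names are hypothetical; the idea is to inline the values into the command itself rather than relying on the creation-time environment being visible to the shell dockerng.run executes:

```shell
# Hypothetical container/script names; the exported values come from the
# command line (or from jinja in a state), not from the container's env.
salt 'dockerhost*' dockerng.run my_app_container \
    "sh -c 'export APP_ENV=production; /opt/app/seed.sh'"
```

For longer-lived containers, aphor's suggestion of running a minion inside the container avoids the problem entirely.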
14:49 whatevsz but for one i have to pass 'role: master', and for the other 'role: slave'
14:50 whatevsz i now have a dns1.example.com.sls containing the first, and a dns2.example.com containing the second
14:50 whatevsz and i apply them explicitly to those minions in the pillar top.sls
14:50 whatevsz this is cumbersome
14:51 whatevsz stuff like varstack makes that easy, but as i said, i don't want to use external pillars
14:51 gtmanfred whatevsz: you could use pillarstack
14:51 gtmanfred it is based on varstack
14:51 gtmanfred ahh, well, then that is going to be tough
14:51 gtmanfred https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.stack.html
14:52 whatevsz yup, i already tried pillarstack and it worked
14:52 whatevsz still, external pillar :D
14:52 gtmanfred so, this is the reason that external pillars were created
14:53 gtmanfred is to give you more flexibility in your pillar assignments
14:53 whatevsz yeah i guess so ... i just really liked the way you could apply pillars to minions
14:53 whatevsz this was the only thing missing :D
14:54 gtmanfred yeah, using the regular pillar stuff, you have to use top.sls
14:54 whatevsz oh, you can combine the default pillar and varstack right?
14:54 gtmanfred sure
14:54 gtmanfred it will use both
14:54 whatevsz hmm that sounds nice
14:55 whatevsz and pillarstack is bundled with salt ...
14:55 whatevsz i'm tempted :D
14:55 gtmanfred yes
14:55 gtmanfred and you can use ext_pillar_first, so that the pillarstack stuff is rendered first, and can be used to target in the pillar/top.sls
14:55 whatevsz ok, i'll go that way
14:55 whatevsz thank you very much!
14:55 whatevsz super helpful
14:55 gtmanfred no problem!
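The combination gtmanfred describes, as a master-config sketch (the stack path is hypothetical):

```yaml
# /etc/salt/master
ext_pillar:
  - stack: /srv/stack/stack.cfg   # pillarstack entry point (example path)

# Render external pillars before pillar/top.sls, so keys produced by
# pillarstack can be used for targeting in the regular pillar.
ext_pillar_first: True
```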
15:01 nickabbey joined #salt
15:01 nicksloan joined #salt
15:02 florianb Huh - does anybody know why exactly on one minion grains.fetch is missing? (`'grains.fetch' is not available.`)
15:03 gtmanfred what version of salt are you on?
15:03 gtmanfred salt \* test.version
15:03 florianb 2015.5.3 (Lithium)
15:03 XenophonF grains.get
15:04 viq cmarzullo: sorry, missed you yesterday. I am trying to figure out how to test the states I'm writing, and how to tie it all together. I have a monolithic git repo with all my states and pillars in subdirectories. I am still debating whether test-kitchen is the best tool to use, as it doesn't allow you to properly test multi-node setups. Also debating what tools to use for the tests themselves, both testinfra and
15:04 XenophonF is there a fetch method?
15:04 gtmanfred grains.fetch wasn't added till 2016.3
15:04 viq inspec are interesting
15:04 XenophonF ah
15:04 florianb ah..
15:04 florianb :-O - thanks a lot. i somehow expected it to be up to date.. :/
15:05 cmarzullo right on viq
15:05 cmarzullo We had switched to formula-based so it's a little more broken up.
15:05 cmarzullo But I do have a co-worker who has done multi system testing with test-kitchen.
15:05 promorphus joined #salt
15:06 cmarzullo I've been using server spec. It's enough. for now. But we'd like to stay python with test-infra.
15:06 aphor Can you do some jinja includes in top.sls without offending the gods?
15:06 XenophonF aphor: depends - what are you planning?
15:07 aphor I was thinking about @whatevsz ugly pillar/top.sls problem.
15:07 gtmanfred aphor: he got it figured out, was going to just use pillarstack and pillar/top.sls at the same time
15:07 hasues joined #salt
15:08 hasues left #salt
15:08 aphor ... assuming it's a fair trade between fugly file headaches and salt-master CPU.
15:08 gtmanfred but yes, you can do some jinja in top.sls, it is just like any other sls file
15:09 aphor gtmanfred: well, I guess I wasn't asking if one CAN as much as whether one SHOULD or SHOULD NOT.
15:09 gtmanfred ¯\(°_o)/¯
15:09 viq cmarzullo: we're only starting, and I'm the only person with any kind of salt experience here; I don't want to make them go to 10 different repos to make changes. Also, each repo would have to have a hook pushing out to the salt master, since the master is external but the repo server is internal.
15:09 aphor Does anyone have any bad experiences to share WRT jinja-ed up pillar/top.sls?
15:09 gtmanfred the minions grains should be available in the file... but 90% of what I would do in there is just targeting, so you have access to all that stuff
15:09 viq cmarzullo: I would be interested in learning how the multi-node tests with test-kitchen were done
15:10 Electron^- joined #salt
15:11 fracklen joined #salt
15:11 XenophonF aphor: i haven't used jinja in pillar/top.sls, but i use it in states/top.sls, https://github.com/irtnog/salt-states/blob/master/top.sls
15:12 fracklen_ joined #salt
15:13 XenophonF though TBH I could probably get away with just YAML and cross-references
15:13 mpanetta joined #salt
15:14 sh123124213 joined #salt
15:17 mpanetta joined #salt
15:18 Rumbles Hi, I'm trying to get a state to apply some settings depending on the hostname of the machine. I currently get None appended to a matching file instead of the content of the pillar, as the "host" variable in my state isn't substituted as I hoped. Can anyone tell me if there is a way to do this that will work? https://paste.fedoraproject.org/483825/
15:19 racooper joined #salt
15:20 XenophonF Rumbles: I use PCRE matches to target Pillar SLS IDs - https://github.com/irtnog/salt-pillar-example/
15:20 preludedrew joined #salt
15:20 gtmanfred Rumbles: mailgun is awesome
15:21 XenophonF Rumbles: you're better off doing something like {% for host in salt['pillar.get']('mailgun', []) %}
15:22 remyd1 joined #salt
15:22 XenophonF Rumbles: the fqdn grain is a string, not a list
15:22 Rumbles do you wrk for mailgun gtmanfred ?
15:22 Rumbles :)
15:22 gtmanfred i work for salt
15:22 gtmanfred i used to work for rackspace though
15:23 gtmanfred and did the automation that deployed mailgun settings to managed cloud boxes
15:23 Rumbles XenophonF, I know it's a string, but if in should work on strings also... right?
15:23 XenophonF no
15:23 XenophonF a string is a list of characters
15:23 gtmanfred in works on strings
15:23 XenophonF use ==
15:23 gtmanfred at least in python
15:24 Rumbles yeah that's what I thought
15:24 * Rumbles tested in python console to be sure
15:24 XenophonF i'll double check the jinja implementation of the in operator
15:24 XenophonF i've only ever used == for string comparison
15:24 gtmanfred >>> 'this' in 'this bitch is crazy'
15:24 gtmanfred True
15:25 remyd1 Hi folk
15:25 gtmanfred hola
15:25 irated heylo
15:25 Rumbles hi
15:25 irated OH HAI!!
15:25 remyd1 how to you handle file_roots on multi-master configuration ?
15:25 remyd1 =)
15:25 irated s/to/do/
15:26 remyd1 yes
15:26 gtmanfred make sure the file_roots are in the master configuration and that they have the files synced somehow
15:26 gtmanfred use rsync, nfs, git repos
15:26 gtmanfred or just use an external filesystem like gitfs
15:26 irated git repos for the wins
15:26 irated yeah
15:26 remyd1 Yeah, I was thinking using git
15:26 irated what ^^ said
15:26 remyd1 but you can't manage it with the master configuration directly, no?
15:26 irated git will allow you to do ci/cd with peer reviews and pull requests.
15:26 remyd1 grr
15:27 remyd1 you can't
15:27 irated you can
15:27 irated here is an example
15:27 gtmanfred yeah, you have to manage the master configuration directly
15:28 remyd1 but normally you can do it for gitfs_remotes and fileserver_backend, but not the root_files directly ?
15:28 gtmanfred hrm?
15:28 Rumbles XenophonF, I'm not sure how those examples would apply in this situation :/
15:28 remyd1 what I mean is that you can do it in your formulas with file://
15:28 irated our file_roots is the gitfs iirc
15:28 irated sec
15:28 gtmanfred remyd1: salt:// ?
15:28 remyd1 but not directly for the formulas itself
15:29 remyd1 yes, salt://
15:29 gtmanfred salt:// can be used anywhere, it just references the file in your fileserver
15:29 gtmanfred which has a hierarchy based on the order of fileserver_backends
15:29 XenophonF well color me surprised
15:29 gtmanfred and then file_roots or gitfs_remotes
15:29 XenophonF https://github.com/pallets/jinja/blob/390c3cec2bbab50d0ba276d8fea61e27d582172a/jinja2/nodes.py#L50
15:29 irated file_root is more for local file inclusion as far as i know
15:29 gtmanfred XenophonF: :)
15:29 gtmanfred remyd1: https://docs.saltstack.com/en/latest/ref/file_server/
15:30 XenophonF Rumbles: i misunderstood your question
15:30 Rumbles np :)
15:30 Rumbles I probably didn't ask it so well :)
15:30 XenophonF Rumbles: but my comments about simplifying your for loop and pillar lookup stand
15:30 XenophonF use [] as the default value and combine everything into one line
15:30 XenophonF oh wait a sec
15:30 XenophonF that's not a list, it's a dictionary
15:31 irated Anyone know of a good way to have test driven states or atomic states?
15:32 Rumbles irated, do you mean a state that only runs if a test passes?
15:32 XenophonF Rumbles: I'd do it this way: {% for host, settings in salt['pillar.get']('mailgun', {})|dictsort %}
15:32 khaije1 joined #salt
15:32 XenophonF or even better: {% for host, settings in salt['pillar.get']('mailgun', {})|dictsort if host in salt['grains.get']('fqdn') %}
15:32 jas02 joined #salt
15:32 Rumbles so that would return an empty dict if there was nothing there, what does the dictsort bit do?
15:33 irated Rumbles: no, more of a health check after, and if the health check fails it reverts
15:33 XenophonF then you can call settings['user'] and settings['password']
15:33 irated transactional updates so to speak
15:33 XenophonF Rumbles: http://jinja.pocoo.org/docs/dev/templates/#dictsort
15:34 remyd1 gtmanfred, so you have a formula on another master which is deploying a new master (/srv/salt) and the "salt://" in this formula -> = gitfs ?
15:35 remyd1 a state which is deploying a master
15:35 Rumbles ah yeah that would be easier XenophonF, I will try
15:36 gtmanfred i don't follow what you are trying to do?
15:36 Rumbles sorry irated not sure what you mean by "transactional updates"?
15:36 XenophonF Rumbles: see also http://jinja.pocoo.org/docs/dev/templates/#loop-filtering
15:36 remyd1 I want to sync the file_roots using git
15:37 Rumbles ta
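XenophonF's suggestions assembled into one sketch. The pillar layout and file path are hypothetical; the loop-filter form from above keeps only the entry whose key appears in this minion's fqdn:

```yaml
# Hypothetical pillar:
#   mailgun:
#     web01.example.com: {user: 'postmaster@example.com', password: 'secret'}
{% for host, settings in salt['pillar.get']('mailgun', {})|dictsort
      if host in salt['grains.get']('fqdn') %}
mailgun_sasl_{{ host }}:
  file.append:
    - name: /etc/postfix/sasl_passwd
    - text: smtp.mailgun.org {{ settings['user'] }}:{{ settings['password'] }}
{% endfor %}
```

With the `{}` default and the loop filter, minions without a matching pillar entry simply render nothing instead of appending None.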
15:37 irated Rumbles: http://askubuntu.com/questions/630261/what-is-meant-by-transactional-updates
15:37 remyd1 or I will use rsync/inotify
15:37 remyd1 but I prefer the git method
15:37 gtmanfred why not just use gitfs and not use /srv/salt at all?
15:38 gtmanfred remyd1: https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
15:38 irated gtmanfred: ++
15:38 irated we are moving to that model completely
15:38 gtmanfred then you don't have to care about file_roots?
15:38 remyd1 because I am using both for now
15:38 remyd1 ok
15:38 gtmanfred so, you either need to use rsync/nfs/gitrepo to sync the file_roots then, or youneed to just move all to gitfs
15:39 irated gtmanfred: you can have both
15:39 irated the gitfs will take precedence over the file_roots
15:39 gtmanfred irated: no it doesn't
15:39 irated we are doing it that way currently :)
15:39 gtmanfred it depends on the order you put them in fileserver_backends
15:39 goudale joined #salt
15:39 goudale hi all !
15:39 irated oh, maybe thats why
15:39 gtmanfred if roots is first, then /srv/salt will take precedence
15:40 remyd1 I will try the full gitfs way
15:40 gtmanfred yeah, that is what i would recommend
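The full-gitfs setup gtmanfred recommends, as a master-config sketch (the repo URL is hypothetical). Every master points at the same repository, so file_roots never needs to be synced by hand:

```yaml
# /etc/salt/master
fileserver_backend:
  - git     # gitfs; listed first, so it wins on path collisions
  - roots   # optional fallback to /srv/salt

gitfs_remotes:
  - https://git.example.com/salt-states.git
```

Note gitfs also needs a Python git provider (GitPython or pygit2) installed on each master.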
15:41 goudale I have `test: True` in all of my `/etc/salt/minion.d`. I also have `startup_states: highstate` in all my `/etc/salt/minion.d`. Obviously, now the highstate at startup is executed with test=True, so it is pretty useless. Is there any way to enforce test=False for my startup_state ?
15:41 remyd1 gtmanfred, thanks
15:41 gtmanfred goudale: i don't believe so
15:42 irated gtmanfred: you're correct, it's because of our configurations
15:42 Rumbles is it possible to print a jinja variable out to stdout (or something) to see what it got from the pillar when you run state.highstate?
15:42 irated :)
15:42 gtmanfred :)
15:42 XenophonF Rumbles: not easily
15:43 Rumbles hmmm
15:43 XenophonF there's state.show_sls
15:43 gtmanfred Rumbles: i do a file.managed and just dump the whole jinja environment to a file
15:43 XenophonF but that's post-YAML-parsing
15:43 Rumbles I tried that and it didn't try to add the lines that are in the pillar
15:43 gtmanfred lemme see if I can find that function that does that
15:44 gtmanfred Rumbles: https://docs.saltstack.com/en/latest/topics/jinja/index.html#debugging
15:44 irated cmd.run: echo {{ pillar.get('') }} would work wouldnt it?
15:45 XenophonF no
15:45 XenophonF you'd end up with parse errors
15:45 gtmanfred just use the show_full_context() and print it to the contents: in a file.managed
15:45 gtmanfred /tmp/files:
15:45 gtmanfred file.managed:
15:45 gtmanfred - contents: {{show_full_context()}}
15:45 woodtablet joined #salt
15:45 gtmanfred and then run it and check /tmp/files
15:46 gtmanfred and you will have a dictionary of everything available in that file
15:46 irated oh thats cool
15:46 irated :)
15:46 XenophonF wouldn't you have to pipe that to the |yaml filter?
15:46 gtmanfred you could, but you don't have to
15:46 XenophonF or does show_full_context escape its output automatically?
15:46 gtmanfred it will just print out a one line dictionary to the file
15:46 XenophonF ah gotcha
15:46 * XenophonF is paranoid about jinja-to-yaml escaping
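Assembled from the chat fragments above, the debugging state looks like the sketch below; adding the yaml filter (per XenophonF's escaping concern) is optional but keeps the dumped context safely quoted. The file name is only an example:

```yaml
# Dump the entire Jinja context into a file so you can see every
# variable available to the template (pillar, grains, salt, etc.).
/tmp/files:
  file.managed:
    - contents: {{ show_full_context() | yaml }}
```

Apply the SLS containing this state, then inspect /tmp/files on the minion.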
15:47 cmarzullo viq sorry had stand up.
15:48 cmarzullo we make our formulas pretty much 90% driven by pillar. We feel the formulas shouldn't really change that much once 'complete'
15:49 hehnope joined #salt
15:49 cmarzullo re multisystem kitchen stuff. It's not too bad. Lemme see what I can dig up. My co-worker made a test-kitchen ELK integration project. Pulls in all the formulas, stands up all the pieces and ensures all the pieces are working together.
15:49 hehnope hello! As a chef user... How does one upload states, etc to salt master? Did I miss some key docs?
15:49 gtmanfred you don't have to upload them like you do to chef
15:49 gtmanfred just put them in /srv/salt
15:50 gtmanfred hehnope: https://docs.saltstack.com/en/latest/ref/file_server/
15:50 tapoxi joined #salt
15:50 gtmanfred hehnope: https://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html
15:51 XenophonF hehnope: and once you're comfortable with the basics, you can learn how to store them in git repos, s3 buckets, and so on
15:51 hehnope I want to maintain all salt states, etc in git; so i'm just confused on how to link this to salt master
15:52 sarcasticadmin joined #salt
15:52 cmarzullo viq: fundamentally you make different test-kitchen 'suites' that are each piece of your infra. Then you can apply the various pillar states to each of them.
15:52 * Rumbles scratches his head
15:52 gtmanfred hehnope: gitfs
15:52 Rumbles well, show_full_context() gives a lot of info :)
15:52 gtmanfred hehnope: https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
15:53 gtmanfred Rumbles: yup, everything you could possibly call in jinja in a state file
15:53 keltim joined #salt
15:53 hehnope so it looks like file_roots is the main config file that i'd point to the git local fs correct? I'd then configure a cron to update regularly?
15:53 gtmanfred hehnope: nah, just use gitfs
15:53 gtmanfred check the last link i gave you https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
15:53 edrocks joined #salt
15:53 XenophonF by default salt-master will check for changes and update every 60 seconds
15:54 XenophonF if you're clever you can setup a webhook and have commits trigger the update sooner
15:54 gtmanfred when the event loop runs* (which defaults to every 60 seconds)
15:54 hehnope sweet this should do what i want; thanks!
15:54 gtmanfred no problem
15:54 XenophonF oh i didn't realize it was the main event loop that triggered it
15:54 gtmanfred yeah, the maintenance event loop
15:55 gtmanfred or the maintenance thread
15:55 gtmanfred XenophonF: https://docs.saltstack.com/en/latest/ref/configuration/master.html#loop-interval
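A minimal master-side gitfs configuration matching this discussion might look like the sketch below. The repo URL is a placeholder, and the per-remote root option serves a subdirectory of the repo as salt://, as gtmanfred describes for his own setup later:

```yaml
# hypothetical /etc/salt/master.d/gitfs.conf
fileserver_backend:
  - gitfs          # list 'roots' before 'gitfs' if /srv/salt should take precedence
gitfs_remotes:
  - https://github.com/example/salt-states.git:
    - root: states # serve the repo's states/ subdirectory as salt://
# the maintenance thread refreshes gitfs every loop_interval seconds
loop_interval: 60
```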
15:55 XenophonF danke!
15:55 hehnope now then, for my other question... Chef published a way to maintain a repo; https://docs.chef.io/chef_repo.html; does salt have something similar?
15:56 XenophonF that's in the gitfs tutorial, too
15:56 gtmanfred a document? maybe check out the fileserver doc https://docs.saltstack.com/en/latest/ref/file_server/ or https://docs.saltstack.com/en/latest/ref/states/top.html
15:56 gtmanfred maybe?
15:57 inad922 joined #salt
15:57 XenophonF if you want to keep it simple, just commit your top.sls file (state targeting info) plus state data (other .sls files &c) to the master branch
15:57 hehnope i basically want to integrate chef workflow into salt workflow for a team (git > git push > merge > etc)
15:57 XenophonF hehnope: you're also welcome to take a look at my personal state repo
15:57 XenophonF https://github.com/irtnog/salt-states
15:58 XenophonF https://github.com/irtnog/salt-pillar-example
15:58 gtmanfred hehnope: you will probably also want external pillars like gitpillar https://docs.saltstack.com/en/latest/topics/development/external_pillars.html
15:58 gtmanfred https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.git_pillar.html
15:58 gtmanfred then just manage it like regular git
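A sketch of the git_pillar setup being suggested — the repo URL and branch are placeholders:

```yaml
# master config: pull pillar data from a git repo instead of /srv/pillar
ext_pillar:
  - git:
    - master https://github.com/example/salt-pillar.git
# set this to True if external pillar should be evaluated before pillar_roots
ext_pillar_first: False
```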
15:58 XenophonF in my salt-states repo, i have one branch for each DTAP phase
15:58 gtmanfred you can have tests hooked up to pull requests,  and use kitchen-salt
15:59 XenophonF +1
15:59 * XenophonF wants kitchen-salt and jenkins integration so bad he can almost taste it.
16:00 cmarzullo XenophonF: We do that.
16:00 cmarzullo works nice
16:00 XenophonF cmarzullo: anything public?
16:00 gtmanfred i need to sit down and play with kitchen-salt soonish
16:00 cmarzullo kinda. I mean I wrote the saltscaffold tool. It'll generate a kitchen-ci.yml.
16:01 cmarzullo our jenkins just invokes that, which spins up cloud instances and runs through tests.
16:01 gtmanfred neat
16:01 XenophonF omg that sounds cool
16:01 XenophonF that's exactly what i want
16:01 gtmanfred i would really like to do it with docker containers
16:01 cmarzullo Yeah we have like probably 30+ formulas that all run through jenkins.
16:01 gtmanfred and hook it into circleci
16:01 XenophonF automatic merge from dev->test and nightly CI tests that merge test->stage if passing
16:01 cmarzullo yep yep.
16:01 cmarzullo we do that.
16:01 irated XenophonF: yes we are looking to do that too
16:02 irated working on a promotion model to make it more IaC and follow the Software lifecycle
16:02 XenophonF cmarzullo: can you re-use your CI tests with your production monitoring infrastructure?
16:02 * Rumbles hangs his head
16:02 cmarzullo it does the merge into our git repos. which then make their way to masters.
16:02 cmarzullo our next step, which we've already figured out, has jenkins invoking the salt-api to do 'things'
16:02 Rumbles that worked perfectly XenophonF it just took me a long while to realise that :)
16:03 remyd1 gtmanfred, could you share a working gitfs_{base,remotes,roots} definitions from your config file (removing sensitive parts) ?
16:03 cmarzullo I'm not sure what you mean about re-using our CI test.
16:03 gtmanfred mine is pretty simple
16:03 gtmanfred remyd1: https://github.com/gtmanfred/blog-sls/blob/master/minion.d/fileserver.conf
16:03 XenophonF so i figure all of the same functional/integration tests I'd run in Jenkins would also be good for monitoring production
16:03 cmarzullo crap late for mtg. I'll be back.
16:03 XenophonF ttyl
16:05 remyd1 I have to go. Thanks gtmanfred but this is for your minions. I mean the same thing with the mountpoint for the salt:// root on master conf
16:05 gtmanfred remyd1: it is the exact same as for a master, that is a masterless minion
16:06 XenophonF https://github.com/cmarzullo/saltscaffold
16:06 remyd1 so you do not define any mountpoint for your states ?
16:06 gtmanfred i don't need to?
16:06 gtmanfred i just set the root to states/ in the git repo...
16:07 remyd1 ok
16:07 gtmanfred you could make it have a separate mountpoint in your fileserver, but... i don't have a need for it because i just use it as salt:// and it is my only fileserver_backend
16:08 XenophonF that reminds me to submit the packages i made for kitchen-salt etc. to freebsd-ports@
16:09 remyd1 ok, so your top.sls is in the top of your repository
16:09 remyd1 nevermind, I understood
16:09 remyd1 =)
16:09 gtmanfred yar
16:10 remyd1 have to go, bye ;)
16:10 remyd1 thx
16:13 fracklen joined #salt
16:14 fracklen joined #salt
16:15 akhter Ladies and gentlemen of the world.
16:15 akhter Google is down for my region :'(
16:15 raspado joined #salt
16:16 gtmanfred unfortunate
16:20 akhter Google is back!
16:22 gtmanfred http://i.imgur.com/ph1fd.gif
16:23 orionx joined #salt
16:24 nickabbey joined #salt
16:28 jas02 joined #salt
16:33 cmarzullo XenophonF: yeah that's the one. I did figure out the whole pip publishing. so it's there. But huge caveat I'm a terrible coder.
16:35 gtmanfred cmarzullo: have you ever used kitchen salt with circleci?
16:35 gtmanfred I am looking at https://github.com/zuazo/kitchen-in-circleci
16:36 gtmanfred or do you have an example of it being used with like circleci or travisci?
16:36 Salander27 joined #salt
16:37 cmarzullo I haven't used it with circle ci. But I think it should be doable.
16:37 cmarzullo lemme look at that link
16:38 cmarzullo oh it's using the docker.
16:40 gtmanfred it looks like travisci can use your ec2 creds though /shrug
16:40 cmarzullo I have concerns about doing the tests in docker. You get into weird situations where you are fighting docker and it's not representative of what's live. (unless you do docker for everything)
16:40 cmarzullo Like managing services with systemd. That's a huge workaround. Like the one talk at saltconf16
16:41 gtmanfred yeah
16:41 gtmanfred but less than ideal
16:41 cmarzullo You gotta fake out so much it just seems meh.
16:41 cmarzullo It's easy for me though since I don't have to pay for my cloud resources. :)
16:41 gtmanfred it is important to note that docker on circleci uses the lxc backend and not libdocker
16:42 gtmanfred https://circleci.com/docs/docker/#docker-exec
16:42 gtmanfred and it has the init system running in their centos/ ubuntu containers
16:43 gtmanfred so it is a pretty good shot that it will just work
16:43 cmarzullo took a peek at travis, they have VM based provisioning. You take a hit on spinning up the vm of course.
16:44 lompik joined #salt
16:45 Brew joined #salt
16:46 fracklen joined #salt
16:47 gtmanfred cmarzullo: found one using docker https://github.com/saltstack-formulas/docker-formula/blob/master/.kitchen.yml
16:48 cmarzullo and it's using the testinfra verifier.
16:48 darioleidi_ joined #salt
16:48 gtmanfred whats that?
16:49 cmarzullo instead of serverspec. It's to verify that salt did what it was supposed to do.
16:49 gtmanfred ahh neat
16:49 gtmanfred and it looks like it is using travis
16:49 cmarzullo And that the system did what you expected.
16:50 gtmanfred cool
16:50 cmarzullo for example we wrote a haveged formula that managed all the things. But then we wrote a serverspec test to check the output of ps haveged
16:51 cmarzullo turned out that debian's systemd unit file wasn't picking up the arguments like it was supposed to.
16:51 gtmanfred neat yeah i want to run my states and make sure that they go through and deploy my blog, and i can make sure that each blog post gets rendered
16:51 o1e9 joined #salt
16:51 cmarzullo so salt was doing exactly what we told it. but it wasn't having the intended effect.
16:51 viq cmarzullo: ah, and depend on all of them being up at the same time, I guess that can work
16:51 * viq has to run for now
16:54 cmarzullo have you checked out saltscaffold, gtmanfred? you should be able to spin up a formula and do a kitchen verify. it'll do some simple states.
16:54 cmarzullo it creates a skeleton of a formula that installs screen, drops a dummy config file in /tmp and ensures they're all there.
17:06 sdelic joined #salt
17:08 notnotpeter joined #salt
17:08 gtmanfred yeah, i have looked at it
17:09 gtmanfred i need to try it when i get around to redoing my blog setup
17:09 gtmanfred i think i got it working on travisci
17:10 gtmanfred and as an fyi, cmarzullo docker on travis sets it up with systemd running   Step 2 : ENV container docker
17:16 edrocks joined #salt
17:16 sh123124213 joined #salt
17:18 gtmanfred cmarzullo: can you double check my .kitchen.yml cause it isn't seeing my top file in blog-sls directory https://github.com/gtmanfred/blog-sls/blob/master/.kitchen.yml#L11
17:18 armin joined #salt
17:19 gtmanfred hrm, acutally i think i know what i need to do
17:23 nickabbey joined #salt
17:23 bltmiller joined #salt
17:24 netcho joined #salt
17:25 onlyanegg joined #salt
17:26 pipps joined #salt
17:27 Miouge joined #salt
17:28 pipps joined #salt
17:28 jas02 joined #salt
17:28 gtmanfred neat, i can just test it in my local docker
17:28 gtmanfred yeah i am going to have to play with this a lot more
17:30 samodid joined #salt
17:30 heaje joined #salt
17:32 pipps joined #salt
17:33 joe__ joined #salt
17:46 aphor https://mirceaulinic.net/2016-11-17-network-orchestration-with-salt-and-napalm/ <-- nice writeup
17:47 aphor Anyone doing NAPALM based net equipment config mgmt using salt?
17:55 nickabbey joined #salt
17:58 cscf Is there a good way to construct a pillar list with each entry in a different file?
17:59 whytewolf good? no
18:01 whytewolf you might want to look at the different settings for pillar_source_merging_strategy
18:10 Edgan joined #salt
18:12 fracklen joined #salt
18:13 anotherzero joined #salt
18:14 darioleidi_ joined #salt
18:15 nidr0x joined #salt
18:16 sh123124213 joined #salt
18:17 netcho joined #salt
18:19 cmarzullo gtmanfred: sorry was afk.
18:20 gtmanfred cmarzullo: so, it all works if I drop .kitchen.yml in the salt/ directory, but i can't get it to work from the directory above
18:20 gtmanfred no worries :) whenever you have time
18:22 Trauma joined #salt
18:24 Miouge joined #salt
18:24 cmarzullo lookin... here's a .kitchen for my rsyslog formula.
18:24 cmarzullo https://gist.github.com/cmarzullo/16130df1b9c7501add94d39b38bf60ca
18:25 cmarzullo We generally have the pillar outside the .kitchen just so we can reuse it in different places.
18:25 cmarzullo Also makes copy pasta into the prod pillar easier.
18:25 nickabbey joined #salt
18:25 cmarzullo We aren't doing the run highstate or salt_file_root
18:25 gtmanfred hrm, i got it to work in travis like this
18:25 gtmanfred https://github.com/gtmanfred/blog-sls/blob/master/.travis.yml#L13
18:25 cmarzullo yeah that'll work just fine.
18:25 gtmanfred so not a huge deal
18:26 gtmanfred but it would be nice to have kitchen.yml in the root of the git repo
18:26 cmarzullo yeah. fer sure.
18:27 gtmanfred i tried fiddling with salt_file_root and is_file_root and collection_name
18:27 cmarzullo yeah I'm not sure where it's mapping into the container.
18:27 impi joined #salt
18:28 gtmanfred it just copies it to /tmp/kitchen/srv/salt, so i have a /tmp/kitchen/srv/salt/salt, like it was a formula... but yeah /shrug
18:29 gtmanfred oh shit, i think i actually have an idea
18:29 cmarzullo for us, using full vms. it drops the whole thing into /tmp/kitchen/srv/salt
18:29 jas02 joined #salt
18:29 mikecmpbll joined #salt
18:29 cmarzullo ya ya
18:33 dxiri joined #salt
18:33 dxiri hey everyone! quick question, I am using salt-cloud to provision some VMs into openstack, and the last step I need to do is reboot them
18:34 dxiri but I am getting this: [ERROR   ] There was an error actioning machines: reboot() got an unexpected keyword argument 'call'
18:34 dxiri I get that when running salt-cloud -a reboot centos6_test_hpc
18:36 SaucyElf joined #salt
18:36 gtmanfred nope, that doesn't work either... hrm... i opened an issue on kitchen-salt
18:39 akhter joined #salt
18:39 akhter joined #salt
18:40 Edgan gtmanfred: https://github.com/saltstack/salt/issues/37746   Can https://github.com/saltstack/salt/pull/34974 be backported to 2016.3.x?
18:40 saltstackbot [#34974][MERGED] Regen the thin tarball option | What does this PR do?...
18:40 VR-Jack3-H joined #salt
18:42 gtmanfred ask in the pull request
18:44 Edgan ok
18:44 gtmanfred cmarzullo: neat, once that is fixed, it will be a little better, but it works for now https://travis-ci.org/gtmanfred/blog-sls
18:45 cmarzullo nice. pretty exciting.
18:46 cmarzullo I love the test-kitchen.
18:46 gtmanfred now i need to learn to write testinfra
18:46 cmarzullo I can work locally really fast and then promote. And everyone on the team can replicate what I do.
18:46 netcho joined #salt
18:47 Edgan cmarzullo: You recommend using test-kitchen and travis-ci together?
18:47 gtmanfred neat, there is even a salt backend to testinfra
18:47 sh123124213 how can I restrict minions from uploading and downloading files from the fileserver? I would only want the commands to come from saltmaster
18:47 cmarzullo https://github.com/ssplatt/saltstack-infratest-module
18:47 edrocks joined #salt
18:47 Edgan cmarzullo: We were just discussing testing of salt code this morning here. I brought up test-kitchen.
18:47 cmarzullo Edgan: I can't recommend it. Not for any reason, just haven't done it. I'm using jenkins and test-kitchen to launch against our cloud provider and ensure our formulas work there.
18:48 Edgan cmarzullo: we have jenkins
18:49 cmarzullo cool. yeah we have two .kitchen.yml files. one that works in our local dev env (mac) and another called kitchen-ci.yml and jenkins uses that.
18:50 cmarzullo basically KITCHEN_YAML='./.kitchen-ci.yml' kitchen test as the build
18:51 llua does anyone have an idea what could cause this: https://github.com/saltstack/salt/issues/33708 ?
18:51 saltstackbot [#33708][OPEN] visudo check command leaves cache file in /tmp | Description of Issue/Question...
18:52 Edgan cmarzullo: Thanks for the saltstack-infratest-module link. Any other goodies?
18:52 cmarzullo you know about saltscaffold?
18:52 cmarzullo we use that to ensure when writing formulas our team's formulas all kinda look the same.
18:53 Edgan cmarzullo: yes, but I basically have my own style and I just cp -a foo bar and rewrite
18:53 cmarzullo heheh
18:53 cmarzullo yeah lots of that here too.
18:55 Edgan cmarzullo: I also have 3-4 variations of the style
18:56 cmarzullo Yeah I don't care too much if the team diverges from the style, as long as there's a reason besides I don't like it.
18:56 cmarzullo But having style / formula guidelines helps.
18:56 sp0097 joined #salt
18:57 Edgan cmarzullo: I care about the style, because we have a very coherent style. The goal is to make things predictable.
18:57 cmarzullo absolutely.
18:57 Edgan cmarzullo: I end up doing clean up later.
18:58 Edgan cmarzullo: My biggest pet peeve is duplicating the same setting across formulas, when it is something like a url.
18:58 cmarzullo We have a couple up mockup states that we use.
18:59 cmarzullo stuff that falls outside the formula but we want to ensure is in place.
19:00 Edgan cmarzullo: One actual example is we use a service called pubnub. People need the settings for it in multiple formulas, and were duplicating them. This includes api keys in pillars. So I made a map.jinja for pubnub to be the source of truth, and that dedups it.
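The map.jinja source-of-truth pattern Edgan describes can be sketched like this — the pillar layout, key names, and defaults are all placeholders, not his actual setup:

```jinja
{# pubnub/map.jinja — single place that defines the shared settings #}
{% set pubnub = salt['pillar.get']('pubnub', {
    'publish_key': 'CHANGE-ME',
    'subscribe_key': 'CHANGE-ME',
}) %}

{# any formula that needs the settings imports the map instead of
   duplicating the pillar lookups:
   {% from "pubnub/map.jinja" import pubnub with context %} #}
```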
19:00 ronnix joined #salt
19:00 cmarzullo yeah dependency management isn't quite there yet in test-kitchen.
19:01 cmarzullo But I think that's changed now.
19:01 cmarzullo think my co-worker figured it out.
19:01 cmarzullo lemme look
19:01 ssplatt joined #salt
19:02 cmarzullo yeah he solved it with git submodules and using the dependency section under provisioner in .kitchen.yml
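A sketch of what that might look like in .kitchen.yml, assuming the dependency formulas are checked out as git submodules — the formula names and paths are hypothetical, so verify the exact dependency keys against the kitchen-salt docs for your version:

```yaml
provisioner:
  name: salt_solo
  formula: myformula
  dependencies:                         # hypothetical submodule checkouts
    - name: otherformula
      path: ./deps/otherformula-formula
```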
19:02 Edgan cmarzullo: yeah, that sounds like a thought I had
19:03 Edgan cmarzullo: We started with just formulas and pillars
19:03 cmarzullo yeah us too.
19:03 Edgan cmarzullo: Then for "our" code we started breaking out formulas into the code repo, so they could track branches.
19:03 Edgan cmarzullo: I symlink to the other directories from formulas
19:04 Edgan cmarzullo: But we have to check out all the repos in the jenkins job
19:04 Edgan cmarzullo: My thought is that it would be easier to use submodules
19:04 jmedinar joined #salt
19:04 cmarzullo There's certainly more work to do. But we had to get back to our day jobs. lol. We want to improve more on our workflow.
19:05 cmarzullo there's a number of things that are 'sub-optimal' but it's working for now.
19:05 cmarzullo gtmanfred: https://github.com/ssplatt/infratest-formula/blob/master/.travis.yml
19:06 cmarzullo looks like ssplatt did similar to what you were trying.
19:06 jmedinar Hi all. I just configured a new minion with the latest salt-minion version and got the following error while trying to start related to the name
19:06 jmedinar Starting salt-minion daemon: [ERROR   ] Error parsing configuration file: /etc/salt/minion - expected '<document start>', but found '<block mapping start>'
19:06 jmedinar in "<string>", line 103, column 1:
19:06 jmedinar id: _uat_shared_app2_cnw_bz_dst_
19:06 jmedinar ^
19:06 ssplatt does parallel builds
19:06 ssplatt https://travis-ci.org/ssplatt/infratest-formula
19:06 cmarzullo ^^
19:07 gtmanfred nice
19:08 jmedinar got it the error was not there but way above in the file sorry my mistake ;)
19:08 nicksloan is there a way to specify a watch with multiple things to watch?
19:08 cmarzullo no worries jmedinar, I spent three hours yesterday before i realised '-b' != '-d'
19:08 gtmanfred yes
19:08 nicksloan like, run this state if A and B occur
19:08 gtmanfred nicksloan: make the watch a list
19:08 gtmanfred oh
19:08 gtmanfred not multiple like that
19:09 gtmanfred but if a or b
19:09 nicksloan gtmanfred: hmm
19:09 gtmanfred you would have to string them together
19:09 nicksloan that's what I was afraid of
19:09 gtmanfred you cant do an explicit and
19:09 nicksloan gtmanfred: how so?
19:10 nicksloan never mind, I think I get what you mean
19:10 gtmanfred like b would have to watch a, and then if b also changes, then you know that a changed
19:11 nicksloan use case was for managing systemd services. If the service file changed and systemctl daemon-reload ran, then restart the service.
19:11 nicksloan idea was to cut down on "when any service file changes, restart the service"
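nicksloan's use case can usually be expressed without an explicit "and" by chaining requisites, roughly as below. All names are hypothetical; service.systemctl_reload is the reload function in Salt's systemd service module (note that newer Salt versions may issue the daemon-reload automatically):

```yaml
myservice-unit:
  file.managed:
    - name: /etc/systemd/system/myservice.service
    - source: salt://myservice/files/myservice.service

# reload systemd only when the unit file actually changed
myservice-reload:
  module.wait:
    - name: service.systemctl_reload
    - watch:
      - file: myservice-unit

# restart the service on unit-file changes, after the reload has run
myservice:
  service.running:
    - enable: True
    - watch:
      - file: myservice-unit
    - require:
      - module: myservice-reload
```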
19:11 raspado hi all if we have a global match for minions in our top file for example in http://pastebin.com/SCpdf3xP , do the minions load everything in '*' first before it matches the role?
19:12 gtmanfred it will run everything in the '*' then everything in its role
19:12 gtmanfred but if you assign a role in the '*' states, it will not match a role
19:12 raspado its okay how its set now though right?
19:12 gtmanfred yeah
19:13 xet7 joined #salt
19:13 gtmanfred it will run it from the top down
19:13 bluenemo joined #salt
19:13 raspado okay thx gtmanfred!
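A top.sls of the shape being discussed, with hypothetical state names and a role grain match — everything under '*' applies to every minion, and within each match the states run top-down:

```yaml
base:
  '*':
    - common        # every minion runs this first
  'roles:webserver':
    - match: grain
    - nginx         # matched minions then run their role states
```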
19:14 Miouge joined #salt
19:14 dxiri guys, any ideas on my issue? or pointers on where to look?
19:15 dxiri getting this while trying to reboot an openstack VM with salt-cloud: [ERROR   ] There was an error actioning machines: reboot() got an unexpected keyword argument 'call'
19:18 raspado dxiri: $ salt-cloud --action=show_instance centos6_test_hpc
19:18 gtmanfred dxiri: which driver are you using?
19:18 dxiri gtmanfred: nova
19:18 dxiri I switched that from "openstack" since the tool itself mentioned openstack driver was soon to be deprecated
19:19 gtmanfred i don't see a reboot function in the nova driver
19:19 gtmanfred so i don't think you can do that...
19:19 gtmanfred actually, it is namespacing from libcloud
19:19 gtmanfred ... hrm
19:19 gtmanfred yeah, idk ,that should work
19:20 dxiri gtmanfred: http://pastebin.com/raw/CzdXP7s5
19:20 dxiri sry..meant raspado
19:20 dxiri that's the output of the command you mentioned
19:20 gtmanfred ¯\(°_o)/¯ i don't know, i have never tested reboot with nova
19:20 dxiri let me switch back to openstack and try
19:20 gtmanfred so yeah, it might not work, i will add it to the things to test during my nova rewrite
19:21 gtmanfred dxiri: also, we extended the life of openstack until nitrogen, by then i should have it completely rewritten and using shade.... then in a couple of releases we will get rid of the nova driver... just a long way around because of the need to deprecate things :/
19:24 pcn What's the RIoT tag mean in saltstack github issues?
19:24 gtmanfred it is the team that works on cloud and other integration stuff at salt
19:24 raspado dxiri: i tried with my nova driver
19:24 gtmanfred pcn: https://docs.saltstack.com/en/latest/topics/development/labels.html
19:24 raspado get the same thing, try -a stop then -a start
19:24 fracklen joined #salt
19:24 raspado well actually dont because it doesnt work either :)
19:25 dxiri with the openstack driver it breaks completely
19:25 pipps joined #salt
19:25 pcn Aha, thanks gtmanfred
19:25 dxiri says the VM isn't running
19:27 dxiri raspado: yes doesn't work :P
19:27 dxiri Invalid Actions:
19:27 dxiri ----------
19:27 dxiri nova.stop:
19:27 dxiri - centos6_test_hpc
19:28 dxiri so any workarounds? wait? any other driver I can try?
19:28 dxiri I don't mind even if its beta, this is a lab
19:30 raspado couldnt you just do salt 'centos6_test_hpc' cmd.run "shutdown -r now"
19:30 pipps joined #salt
19:30 jas02 joined #salt
19:32 raspado dxiri: ^
19:33 Sketch s/-r//
19:33 raspado ah yes
19:34 Sketch https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.nova.html
19:34 Sketch it doesn't look like there is a nova.stop
19:35 Sketch maybe you want nova.suspend?
19:35 Sketch doesn't look like there's a shutdown option, maybe they figure you could just use cmd.run for that
19:41 Brew joined #salt
19:41 IdoKaplan joined #salt
19:42 fracklen joined #salt
19:44 bluenemo joined #salt
19:45 IdoKaplan Hi, I'm using "file.managed" with Jinja and I would like to uppercase the grains.id. For example, the grain id is xx10 and I would like to use {{ grains.id }} and get "XX10". Can you please help?
19:46 Sketch {% filter upper %}{{ grains.id }}{% endfilter %}
19:48 gtmanfred {{grains.id|upper}}
19:48 gtmanfred http://jinja.pocoo.org/docs/dev/templates/#upper
19:48 IdoKaplan Sketch: Thank you! I would also like to uppercase only the first letter, i.e., to get "Xx10"
19:48 gtmanfred IdoKaplan: {{grains.id|capitalize}}
19:49 gtmanfred http://jinja.pocoo.org/docs/dev/templates/#capitalize
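Both filters side by side in a hypothetical templated state (the file name is just an example):

```yaml
/tmp/id-demo:
  file.managed:
    - template: jinja
    - contents: |
        upper:      {{ grains.id | upper }}        {# xx10 -> XX10 #}
        capitalize: {{ grains.id | capitalize }}   {# xx10 -> Xx10 #}
```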
19:51 IdoKaplan gtmanfred: Thank you!!
19:52 Sketch i guess {{grains.id|upper}} would probably work too
19:52 bluenemo joined #salt
19:52 gtmanfred :)
19:53 Sketch in my defense, i pulled that from something i wrote ages ago, when i was just starting to learn jinja ;)
19:53 gtmanfred :D
19:53 promorphus joined #salt
19:53 mohae_ joined #salt
19:56 bluenemo joined #salt
20:06 jab416171 joined #salt
20:07 s_kunk joined #salt
20:13 sebastian-w joined #salt
20:15 pipps joined #salt
20:17 tapoxi so, upgrading minions from yum
20:18 tapoxi has anyone written some sort of exec module around this to make it less sucky?
20:19 nkuttler hm. is that not straight forward?
20:19 gtmanfred as in minions disconnecting?
20:19 gtmanfred salt \* pkg.upgrade salt-minion... all the package commands in systemd systems run with systemd-run and set a specific scope, so that the minion process does not get killed anymore
20:19 gtmanfred if you are using a non systemd system, then there is no other solution
20:21 Sketch gtmanfred: i found that upgrading 2016.3.x on systemd seemed to work
20:21 Sketch so i assumed they had fixed that recently
20:21 gtmanfred yar, terminalmage did that
20:23 rashford joined #salt
20:25 tapoxi gtmanfred: awesome I was wondering why that worked on systemd
20:31 jas02 joined #salt
20:33 edrocks joined #salt
20:39 Miouge joined #salt
20:44 weylin joined #salt
20:45 mohae joined #salt
20:46 pipps joined #salt
20:49 dxiri joined #salt
20:50 pipps99 joined #salt
20:53 dxiri joined #salt
20:53 whatevsz_ joined #salt
20:53 whatevsz_ hey guys
20:54 whatevsz_ if i have the "normal" pillar_roots pillar defined, as well as an external pillar, what is the evaluation order?
20:54 whatevsz_ background: i want to use pillar data from the external pillarstack pillar in the "normal" top.sls pillar file
20:59 dxiri raspado: can't do the cmd.run stuff cause I need the reboot first so the minion becomes operational, we are using a custom kernel so need to reboot into that first
21:13 sh123124213 joined #salt
21:21 Aleks3Y joined #salt
21:23 netcho joined #salt
21:24 ALLmightySPIFF joined #salt
21:24 raspado sucks
21:29 teclator_ joined #salt
21:32 nickabbey joined #salt
21:35 ernescz joined #salt
21:37 nZac joined #salt
21:37 ernescz hello everyone! My second try for this question :)
21:37 ernescz It's about 'salt.states.event.wait' - when an event sent to the master triggers another state through the reactor, does the calling state (that sent the event) have to wait for the new state to finish?
21:38 ernescz or is this the expected behavior - just send the event and continue on its merry way?
21:38 iggy I haven't looked at the code, but the general premise of event driven anything is fire and forget
21:40 ernescz Ok, I see. Then how would one achieve reliable results from another state on another minion and act on those results?
21:41 mpanetta joined #salt
21:41 babilen Reliable results to do what?
21:42 ernescz for example, a new minion is set up (bootstraped and runs highstate) and all the others need to know its new IP (with salt-mine) and open the firewall for it (another state)?
21:43 ernescz and the new minion does not proceed with its highstate until it knows for sure that all the rest have opened the firewall
21:43 babilen You can switch back and forth with different events
21:43 iggy minion A fires event, reactor calls state on minion B, state on minion B does stuff and then puts result somewhere (sdb, mine, consul, etc), state on minion B fires event, reactor runs state on minion A
21:43 babilen And that wouldn't be a highstate, but a succession of different SLS that are each triggered by distinct events
21:43 babilen Think of SLS as states and events as edges
21:44 ernescz hmm... never thought of it that way
21:44 babilen It's my mental model
21:45 babilen So things would tie into each other in the way iggy just described
21:46 gtmanfred whatevsz: default is pillar_roots then external_pillar unless you set ext_pillar_first: True
21:46 babilen A --- some-event ---> B ---- other-event ----> C ...
21:46 babilen And each of A would fire "some-event" at the end, B would fire "other-event" at the end and so on
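One hop of the chain iggy and babilen describe can be sketched as below — the event tag, SLS paths, and state names are all made up, so check the Salt reactor docs for the exact syntax on your version:

```yaml
# on the new minion: fire an event when it is ready (salt.states.event)
notify-master:
  event.send:
    - name: myorg/minion-ready

# /etc/salt/master.d/reactor.conf: map the event tag to a reactor SLS
reactor:
  - 'myorg/minion-ready':
      - /srv/reactor/open_firewall.sls

# /srv/reactor/open_firewall.sls: run a state on the other minions
open_firewall:
  local.state.apply:
    - tgt: '*'
    - arg:
      - firewall.allow_new_minion
```

Each reactor-triggered SLS would end by firing its own event, forming the A -> B -> C succession above.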
21:47 ernescz this might actually work. Thanks! And another question - does salt.orchestration try to achieve something like this? Or am I totally wrong with that assumption?
21:48 Kurzweil Hello, is this the right place to ask a question about using salt.virt to launch a kvm host?
21:50 gtmanfred sure
21:50 Brew joined #salt
21:51 gtmanfred Kurzweil: whatcha got?
21:51 Kurzweil I'm trying to launch a machine with test line and am getting an error:
21:51 Kurzweil salt 'kvmmachine' virt.init 4 2048 salt://debian-8.qcow2
21:51 gtmanfred can you paste the error to gist.github.com?
21:51 Kurzweil sure
21:51 iggy ernescz: orchestration would actually do the "wait for something to finish" bit, but (imo) it's much easier to debug individual states that are called by reactor than a large orch job
21:53 Kurzweil https://gist.github.com/kurzweilbarr/2bca70f817996ef1d4384e052541d70d
21:54 gtmanfred Kurzweil: so you will have to use an image template instead of a disk profile, so the 4th argument you are passing is just an image
21:54 gtmanfred https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.virt.html#salt.modules.virt.init
21:54 izrail joined #salt
21:56 Kurzweil Hmm okay. I'm not familiar with exactly what it's looking for then.
21:57 gtmanfred hrm, yeah i am not quite sure why that is failing actually
21:57 Kurzweil Yea, it has me stumped. I've seen examples that work that look like mine.
21:58 gtmanfred have you been through the salt-run tutorial? https://docs.saltstack.com/en/latest/topics/tutorials/cloud_controller.html
21:58 Kurzweil yeah
22:00 gtmanfred hrm, yeah that is super weird, it shouldn't be a problem i don't think
22:00 ernescz thank you guys for the info! You have been amazingly helpful as always :)
22:00 gtmanfred can you try running salt 'kvmmachine' virt.init 4 2048 image=salt://debian-8.qcow2
22:00 gtmanfred oh
22:00 Kurzweil nope. But I can
22:00 gtmanfred i know why
22:00 gtmanfred Kurzweil: you are not passing a name
22:00 gtmanfred Kurzweil: salt kvmmachine virt.init <name> 4 2048 salt://debian-8.qcow2
22:01 gtmanfred so image is set to None, and it tries to use disk_profiles looking at the code, which kvm can't do
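Spelling out the fix gtmanfred identifies: `virt.init` takes the VM name, CPU count, and memory as its first positional arguments, with the image after them. Omitting the name shifts everything left, so `image` ends up as None and Salt falls back to disk profiles. A sketch of the corrected call (the `testvm` name is illustrative):

```sh
# name, cpu, mem, then the image; passing image= explicitly avoids
# positional mix-ups
salt 'kvmmachine' virt.init testvm 4 2048 image=salt://debian-8.qcow2
```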
22:01 Kurzweil doh....
22:02 Kurzweil But I have a new error. It's multi-line so I'll do a gist
22:02 Kurzweil https://gist.github.com/kurzweilbarr/efd52baefdb2563160791855728c31fe
22:02 Kurzweil do I need " " maybe?
22:03 gtmanfred oh, replace <name> with an actual name?
22:03 gtmanfred salt kvmmachine virt.init vm1 4 2048 salt://debian-8.qcow2
22:03 gtmanfred did you run it exactly as I gave it to you or did you sub out <name> for a name?
22:04 Kurzweil I did: salt 'kvm' virt.init testvm 4 2048 salt://debian-8.qcow2
22:04 gtmanfred ah, hrm, yeah idk, that is odd that something there would resolve as a bool
22:05 gtmanfred can you run salt-call -l debug virt.init testvm 4 2048 salt://debian-8.qcow2 on the minion and see if it gives you another error?
22:05 Kurzweil Yeah, the example that comes up in the error looks like what I typed too.
22:05 Kurzweil okay sure.
22:06 cyborg-one joined #salt
22:07 Kurzweil A lot more information in here. I'll look through it.
22:07 gtmanfred cool
22:08 Kurzweil Okay to post one line errors directly?
22:08 gtmanfred yeah
22:08 Kurzweil [ERROR   ] Unable to cache file 'salt://debian-8.qcow2' from saltenv 'base'.
22:09 gtmanfred you have debian-8.qcow2 at /srv/salt/debian-8.qcow2 on your master yeah? or in an external filesystem?
22:10 Kurzweil yeah, on the master in that spot.
22:10 gtmanfred salt-call cp.cache_files salt://debian-8.qcow2 ?
22:11 gtmanfred sorry, cache_file
22:11 Kurzweil same error as above.
22:12 gtmanfred and you see it in the master fileserver with salt-call cp.list_master ?
22:13 Kurzweil No, it's not there.
22:13 pmcg joined #salt
22:13 gtmanfred what are your fileserver_backends and file_roots options set to on the master?
22:13 pipps joined #salt
22:17 Kurzweil fileserver_backend: is just -git and -roots
22:18 Kurzweil file_roots: base: - /srv/salt/states
22:18 Kurzweil Do I maybe just need to move the image into the states directory?
22:19 POJO joined #salt
22:19 gtmanfred ahh, so move the qcow2 file into the states directory
22:19 gtmanfred yes :)
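The underlying issue: `salt://` URLs resolve relative to `file_roots`, so the image has to live under one of those directories. With Kurzweil's config from the log, the relevant master settings look like this:

```yaml
# /etc/salt/master
fileserver_backend:
  - git
  - roots

file_roots:
  base:
    - /srv/salt/states

# salt://debian-8.qcow2 therefore maps to /srv/salt/states/debian-8.qcow2,
# not /srv/salt/debian-8.qcow2 -- hence the cache failure
```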
22:19 ThomasJ joined #salt
22:20 Kurzweil Okay, no error, but no response yet.
22:20 Kurzweil I feel good about it though, since every other try failed right away.
22:21 gtmanfred :)
22:23 Kurzweil hm.. the master log has: An extra return was detected from minion <minion_host_name>, please verify the minion, this could be a replay attack
22:23 Kurzweil every 10 seconds
22:23 gtmanfred that i have no idea
22:23 Kurzweil Okay, I'll google for that one.
22:24 gtmanfred it is also worth checking with salt-call -l debug because that will print out all the information as it happens on the minion
22:24 Kurzweil Thanks so much for your help on the other problem though. That got me much much closer.
22:24 gtmanfred no problem :+1:
22:29 Kurzweil fyi, that other issue was resolved by stopping the minion and clearing out /var/cache/salt/minion/*
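Kurzweil's fix for the "extra return ... possible replay attack" warning, spelled out as a hedged sequence. This assumes a systemd host (adjust the service commands for your init system); the minion cache is safe to clear and is rebuilt on the next start:

```sh
# Run on the affected minion
systemctl stop salt-minion
rm -rf /var/cache/salt/minion/*
systemctl start salt-minion
```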
22:31 KajiMaster joined #salt
22:31 gtmanfred cool
22:33 jas02 joined #salt
22:40 mosen joined #salt
22:41 ProT-0-TypE joined #salt
22:59 perfectsine joined #salt
23:08 bigjazzsound joined #salt
23:14 JPT joined #salt
23:25 netcho joined #salt
23:26 keltim joined #salt
23:27 sp0097 joined #salt
23:30 netcho joined #salt
23:31 vexati0n joined #salt
23:32 vexati0n Q: there any way to update /etc/hosts AND use the updated hosts in the same SLS without rerunning the state?
23:33 sp0097 joined #salt
23:33 jas02 joined #salt
23:41 tkharju joined #salt
23:42 gtmanfred cmarzullo: i did something weird... https://github.com/gtmanfred/blog-sls/blob/master/salt/test/integration/default/testinfra/test_nginx.py#L43 ... didn't want to bind it to a port with forward...
23:54 goudale joined #salt
23:57 pipps joined #salt
23:58 pipps joined #salt