
IRC log for #salt, 2017-03-15


All times shown according to UTC.

Time Nick Message
00:11 fracklen joined #salt
00:13 gtmanfred whiteinge: pretty sure i told him to use sdb yesterday
00:15 mysticjohn joined #salt
00:15 whiteinge we need better sdb docs with examples.
00:16 gtmanfred https://docs.saltstack.com/en/latest/topics/sdb/
00:16 gtmanfred the problem with our docs is that we have to have examples for so many things, because it is easy, and can do SOO many things
00:17 gtmanfred https://docs.saltstack.com/en/latest/ref/sdb/all/salt.sdb.vault.html
00:17 gtmanfred I think we could add a tutorial about using sdb... but he probably still wouldn't use it
00:19 gtmanfred whiteinge: i agree with whytewolf, and we need to document all the places that config.get is used, because for all those settings, we could put the configuration in pillars or in sdb
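For readers hunting for the sdb example being asked about here, the documented pattern boils down to two pieces; the profile name 'mysecrets' and the keyring driver are just illustrative:

    # master (or minion) config: define an sdb profile
    mysecrets:
      driver: keyring
      service: system

    # then anywhere config.get / pillar data is read, reference it as an sdb URI:
    mypassword: sdb://mysecrets/mypassword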
00:31 msn joined #salt
00:35 hoonetorg joined #salt
00:37 Renich_ joined #salt
00:40 Renich joined #salt
00:59 sh123124213 joined #salt
01:00 felskrone joined #salt
01:06 Renich joined #salt
01:17 Shirkdog joined #salt
01:17 Shirkdog joined #salt
01:30 hoonetorg joined #salt
01:36 feld joined #salt
01:57 Tanta joined #salt
02:15 msn joined #salt
02:26 jas02 joined #salt
02:28 Whissi joined #salt
02:33 shoemonkey joined #salt
02:33 evle joined #salt
02:36 debian112 joined #salt
02:39 Whissi Hi! I am looking for an example showing how to store data in pillar (list of users with options like ssh-key) and a state accessing only a subset of the list.
02:39 Whissi In other words: I want to maintain all users in a single location but I want to be able to specify that users A, B and C will be added to authorized_keys on minion1 and minion3 while only user B should be present in authorized_keys on minion2.
02:39 Whissi If I update the definition of user B (only one file to edit), minion1, minion2 and minion3 should get updated on the next highstate run.
02:39 leonkatz joined #salt
02:46 hemebond Whissi: Use Jinja to loop through a list of usernames.
02:46 hemebond You will need to import the user list file if it's just YAML.
02:46 hemebond {%- for name in ['bob', 'janet', 'carol'] %}{{ users[name] }}.......
02:46 Whissi hemebond: Ah OK. I think I got the idea.
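A fuller sketch of what hemebond describes, assuming the user data lives in a YAML file under the file roots with an 'ssh_key' field per user; the path and key names are illustrative, not from the channel:

    {% import_yaml 'users/users.yaml' as users %}
    {% for name in ['bob', 'janet', 'carol'] %}
    {{ name }}_authorized_key:
      ssh_auth.present:
        - user: {{ name }}
        - name: {{ users[name]['ssh_key'] }}
    {% endfor %}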
02:48 sh123124213 joined #salt
02:48 Tanta you might want to check out the users formula
02:49 trent_ joined #salt
02:51 icebal joined #salt
02:52 Kruge joined #salt
02:54 Whissi Tanta: Well, this is the thing I don't understand at the moment:
02:55 Whissi https://github.com/saltstack-formulas/users-formula/blob/master/pillar.example -- This is the pillar data I would add to my base pillar, right? So I would have 3 users (auser, buser, cuser) in my "database".
02:55 Tanta yep
02:55 Tanta that will at least show you how to loop through pillar values and do stuff
02:55 Whissi But when viewing https://github.com/saltstack-formulas/users-formula/blob/master/users/init.sls ... the users/map.jinja will populate *all* users from that database, right?
02:55 Tanta the logic part you will have to construct yourself
02:55 Whissi So my question is:
02:56 Whissi How do I specify that buser from that "database" will get on minion3 only?
02:56 Whissi But auser and cuser will get on minion1 and 2...
02:56 Tanta {% if 'minion3' in salt['grains.get']('id') %}
02:56 Tanta something like that would work if it's simple, and just 3 servers
02:57 Whissi Hard coding minions into the state sounds ugly, not?
02:58 Whissi I'd like to store the information "put user X on minion Y" in the pillar for that specific minion
02:58 Tanta then make a more complicated pillar structure with booleans and other internal state values
02:58 Tanta and a more complicated loop in your users state
02:58 Whissi Do you have an example for such a solution?
03:01 Tanta http://pastebin.com/raw/qQHz5Sk1
03:01 Tanta here's what mine looks like
03:01 hemebond Whissi: Pillar data is targeted exactly the same way as states.
03:02 hemebond Or have some sort of jinja file that contains a lookup for the minion name.
03:02 hemebond Or create an sls file that contains the jinja list like I showed above.
03:04 Whissi Tanta: So this is your "general db"? Then how do you make sure that a specific minion will only get user2?
03:06 Whissi hemebond: Yeah, will look into this. It sounded like Tanta knew an existing example for my problem.
03:07 Tanta well you could add an entry to the dict: only_on: [ 'minion1', 'minion2' ]
03:08 Tanta for each user, at that level, you could specify which hosts to target
03:08 Tanta or have a 'whitelist' of users listed in the minion's pillar, and only add the ones that match
03:08 Tanta that would be another easy way to control it
03:10 Whissi Mh, interesting idea.
03:10 ahrs joined #salt
03:16 Whissi Tanta: Yeah, I'll take that road. Iterating over all users, check if the user is in a minion specific dict. If yes, I'll call ssh_auth.present otherwise ssh_auth.absent. So I'll also solve the problem how I make sure that a user gets removed...
03:17 Tanta cool, Whissi, good luck
03:18 Whissi It isn't a performance problem to check ~1.000 users for ssh_auth.absent each time, is it?
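A rough sketch of the whitelist approach Whissi settles on; the pillar keys 'users' and 'ssh_whitelist' and the 'ssh_key' field are assumptions. The whitelist pillar is targeted per minion, the user "database" pillar is shared by all:

    {% set whitelist = salt['pillar.get']('ssh_whitelist', []) %}
    {% for name, info in salt['pillar.get']('users', {}).items() %}
    {{ name }}_ssh_key:
      ssh_auth.{{ 'present' if name in whitelist else 'absent' }}:
        - user: {{ name }}
        - name: {{ info['ssh_key'] }}
    {% endfor %}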
03:19 debian112 joined #salt
03:34 DEger joined #salt
03:34 icebal joined #salt
03:36 DEger_ joined #salt
03:37 icebal_ joined #salt
03:46 justanotheruser joined #salt
03:50 sp0097 joined #salt
03:52 msn joined #salt
03:54 Praematura joined #salt
03:59 djgerm1 joined #salt
04:01 Klaus_Dieter joined #salt
04:07 stooj joined #salt
04:07 catpig joined #salt
04:12 stooj joined #salt
04:18 stooj joined #salt
04:24 pipps joined #salt
04:27 stooj joined #salt
04:28 MajObviousman joined #salt
04:33 Klaus_D1eter_ joined #salt
04:36 pipps joined #salt
04:53 rdas joined #salt
04:53 honey_ joined #salt
04:53 stooj joined #salt
04:54 sp0097 joined #salt
04:59 packeteer joined #salt
05:03 stooj joined #salt
05:22 fracklen joined #salt
05:23 sh123124213 joined #salt
05:32 impi joined #salt
05:42 Corey joined #salt
05:42 gnomethrower joined #salt
05:46 golodhrim|work joined #salt
06:08 antpa joined #salt
06:13 cyborg-one joined #salt
06:35 shoemonkey joined #salt
06:53 Ricardo1000 joined #salt
07:02 DanniZqo joined #salt
07:04 antpa joined #salt
07:04 KingOfFools joined #salt
07:05 scristian joined #salt
07:21 Ricardo1000 Hello
07:22 Ricardo1000 Does salt-master have a built-in scheduler?
07:23 Ricardo1000 To run jobs (formulas) on a schedule
07:23 Ricardo1000 on selected minions
07:24 yuhl______ joined #salt
07:29 k_sze[work] joined #salt
07:32 antpa joined #salt
07:33 paant joined #salt
07:46 golodhrim|work joined #salt
07:53 jhauser joined #salt
07:57 sh123124213 joined #salt
08:04 muxdaemon joined #salt
08:11 theblazehen2 joined #salt
08:11 theblazehen2 i/join #lxcontainers
08:11 theblazehen2 Uh.. Sorry. Not vim.
08:11 fracklen joined #salt
08:14 JohnnyRun joined #salt
08:15 dariusjs joined #salt
08:16 toanju joined #salt
08:28 antpa joined #salt
08:36 shoemonkey joined #salt
08:37 jas02 joined #salt
08:40 colegatron joined #salt
08:40 mbologna joined #salt
08:42 Rumbles joined #salt
08:44 candyman88 joined #salt
08:47 jas02 joined #salt
08:49 PhilA joined #salt
08:51 Electron^- joined #salt
08:56 teclator joined #salt
08:56 debian1121 joined #salt
09:01 toanju joined #salt
09:02 debian112 joined #salt
09:21 debian112 joined #salt
09:22 Ricardo1000 Where should I create the scheduler sls file for the minion?
09:23 AndreasLutro Ricardo1000: best to put it in pillars
09:23 AndreasLutro https://docs.saltstack.com/en/latest/topics/jobs/#scheduling-highstates
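The linked page amounts to pillar data along these lines (the 60-minute interval is only an example):

    schedule:
      highstate:
        function: state.highstate
        minutes: 60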
09:25 debian1121 joined #salt
09:28 Salander271 left #salt
09:29 s_kunk joined #salt
09:29 ronnix joined #salt
09:29 Ricardo1000 ok
09:30 Ricardo1000 AndreasLutro: What's the difference between grains and pillars?
09:31 AndreasLutro Ricardo1000: grains are defined by the minion, pillars are defined by the master
09:34 Ricardo1000 AndreasLutro: Salt is very obscure for me, after other CM tools :(
09:34 AndreasLutro Ricardo1000: what have you used in the past?
09:34 Ricardo1000 AndreasLutro: too many abstraction layers
09:34 Ricardo1000 AndreasLutro: CFengine
09:34 dariusjs joined #salt
09:35 Ricardo1000 AndreasLutro: Also the documentation is very inexplicable
09:35 AndreasLutro mm not familiar with cfengine at all so can't make any analogies
09:36 colegatron Ricardo1000, had the same impression when started with it. with a bit of persistence you will end loving it.
09:36 ronnix joined #salt
09:36 Ricardo1000 AndreasLutro: CFEngine is the grandfather of puppet
09:37 debian112 joined #salt
09:37 Ricardo1000 colegatron: Maybe you can point me to some docs about the philosophy of salt?
09:38 Ricardo1000 colegatron: to understand the developers' logic
09:38 debian112 joined #salt
09:39 colegatron for me it was hard to grasp. I also found the documentation very hard for newcomers. there are things, after more than one year using it, that I do not know/understand/use (rosters, salt mine, beacons)
09:40 r3m_ joined #salt
09:40 babilen Ricardo1000: Salt is, at its heart, a message bus on which configuration management is built. You won't interact with the message bus a lot at the beginning, but will learn to love it once you start building reactive infrastructure that responds to changes in your setup.
09:40 colegatron but I started using it as a remote execution tool. it was fun. a bit later I found myself writing states to deploy things and avoiding bash scripts because of the declarative approach of a state
09:41 debian1121 joined #salt
09:42 colegatron I find chef/puppet/cfengine are not as flexible and powerful as salt is, but I've only used them through a couple of tutorials each, so maybe I am wrong
09:42 debian1121 joined #salt
09:42 Ricardo1000 colegatron: I have an issue, what should the minion do if the master is unavailable?
09:42 r3m_ Hi there. I have a state which is returning many errors. How can I ignore these errors so the result is "true" instead of "fail"? In fact, I want to ignore these errors only if the return message in stderr contains a specific string. How can I do that?
09:42 babilen Ricardo1000: For CM you have essentially two building blocks: 1. Execution modules and 2. States. The former are Python modules that perform specific actions (such as installing a package) and can also be called interactively from the command line. The latter combine modules with logic to check which actions (if any) should be performed. So for a state that ensures that a package is installed you'd find that
09:42 babilen it calls an execution module to check if ...
09:42 babilen ... it has been installed and, if not, another function to install it.
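To make the two building blocks concrete, a minimal comparison (the minion id and package name are placeholders):

    # execution module, called ad hoc from the master:
    #   salt 'web01' pkg.install vim
    # state, declaring the desired result; it uses the pkg execution
    # module under the hood and only acts if vim is missing:
    vim:
      pkg.installed: []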
09:43 colegatron Ricardo1000, it is the same with chef if you work in a master approach. you can work in a masterless way with salt if you want (I started as masterless)
09:43 babilen Ricardo1000: If the master is unavailable you can't use salt at that moment. Do your boxes often become unavailable?
09:44 colegatron I guess you can setup a high available salt master setup, but I have saltmasterless state to deploy the saltmaster and my states are in a git repo, so I don't worry too much about :)
09:44 colegatron if the master (noHA) breaks, I replace it quickly.
09:45 r3m_ @colegatron so, you can use gitfs. That's what I am doing
09:46 colegatron yes. but to be honest it is a bit annoying to have to refresh manually the states and pillars (if you don't want to wait for the automatic update, which is the common case).
09:47 babilen colegatron: You could increase the automatic refresh rate
09:47 colegatron babilen it is at 10s :)
09:47 Ricardo1000 colegatron: I wanna build an environment where minions can work in two modes at the same time. The main mode is master-slave, and if the master is unavailable they switch to masterless, keep their current states via scheduler settings, and wait until the master comes back
09:47 colegatron I am thinking to just deploy the git repos inside the /srv/salt folder and update manually there, and only push to the repo when the state I'm working on is polished
09:47 debian112 joined #salt
09:48 Ricardo1000 colegatron: to return to master-slave mode
09:48 r3m_ No idea for my specific problem ? bash scripting ?
09:48 antpa joined #salt
09:48 colegatron Ricardo that's not possible as far as I know, and will lead you to have to synchronize minion configs and service states which will be very error prone, imho
09:49 debian112 joined #salt
09:49 colegatron Ricardo1000, just a question: why you don't want a master running always?
09:49 babilen One *might* be able to hack a setup like that, but I don't really see it as advantageous
09:49 colegatron babilen, I absolutely agree, but I would add that would be very problematic in the long term
09:49 colegatron (could)
09:50 colegatron as far as I understand (correct me if I am wrong), salt traffic through zeromq is encrypted. right?
09:50 Ricardo1000 colegatron: It is one of the possibilities which I should anticipate
09:50 Ricardo1000 colegatron: a CM system should always be working
09:51 babilen Ricardo1000: What would be the adverse effect of the master becoming unavailable? I mean it is not that the minion will suddenly lose all the states that have been applied already
09:51 Ricardo1000 colegatron: But if the master fails, the CM system stops in salt
09:51 colegatron Ricardo1000, I would recommend to setup a high available master (cheap $5 servers are enough if your infrastructure is not huge) instead to do tricks
09:51 babilen What's the negative effect you are trying to mitigate?
09:51 N-Mi joined #salt
09:51 N-Mi joined #salt
09:52 Ricardo1000 babilen: All states should be verified
09:53 Ricardo1000 babilen: every time
09:53 babilen What do you mean by that?
09:55 debian1121 joined #salt
09:55 Ricardo1000 babilen: The master server may be broken, or a switch may be broken; all clients should beep their states
09:55 Ricardo1000 beep => keep
09:55 babilen Why would they lose them?
09:56 Ricardo1000 babilen: What do you mean ?
09:56 colegatron states are not lost if master fails. minions continue configured as last time you applied states to them
09:56 AndreasLutro your server isn't magically going to reset its configuration if the salt master disappears
09:56 colegatron but of course you can re-apply a state if the master is offline
09:56 colegatron (you can't)
09:56 babilen If a package is installed it won't be removed just because the master goes on holiday
09:57 DEger joined #salt
09:57 debian112 joined #salt
09:58 Ricardo1000 colegatron: Yes, but scheduled jobs will be lost
09:58 colegatron Ricardo1000, yes. magic does not exist yet in salt :)
09:58 babilen damn!
09:58 AndreasLutro what are you thinking to schedule? if you want scheduled jobs to run regardless of salt you should just make a cronjob
09:58 babilen But scheduled jobs won't be lost
09:58 Ricardo1000 AndreasLutro: As practice shows, it may happen :)
09:59 colegatron babilen, are they executed by a  minion scheduler? i thought it was scheduled by master
09:59 AndreasLutro magic may happen?
09:59 Ricardo1000 babilen: Someone can log into a server and remove a package by hand
09:59 colegatron AndreasLutro, for someone that does not knows what's going on, yes :-)
09:59 AndreasLutro :)
09:59 Ricardo1000 babilen: server should verify and reinstall it
10:00 babilen So you want to ensure that the highstate is reapplied every k minutes?
10:00 AndreasLutro Ricardo1000: even if you set up cfengine or puppet to run every 1 hour there's still potentially 59 minutes of time when the server is broken. same if your salt master goes down
10:01 debian112 joined #salt
10:01 babilen With the difference that you can configure salt to react to changes immediately if the master is still running
10:01 Ricardo1000 babilen: Yes, every k minutes
10:01 babilen But then: If you really can't build a HA architecture to ensure that your master is still available, then run masterless
10:02 muxdaemon joined #salt
10:02 Ricardo1000 AndreasLutro: cfengine verifies its state every 5 minutes
10:02 babilen I'd personally prefer the former setup, but if you can't ensure network connectivity or node availability at all then you obviously can't use a master-client setup
10:02 babilen Ricardo1000: You can schedule a highstate run on whatever schedule you like
10:03 AndreasLutro Ricardo1000: then there's still 4 minutes of potential brokenness. you're never going to foolproof yourself against everything
10:03 AndreasLutro Ricardo1000: if a vital package gets uninstalled then some sort of monitoring should pick up on that server's services not responding properly anyway and you should be able to react on that
10:04 debian1121 joined #salt
10:05 colegatron Ricardo1000, I think you're suffering in the mental step of choosing a new CM tool :) that's all :)
10:05 Ricardo1000 babilen: Are scheduled jobs applied on the master, or can they be moved to minions?
10:05 gnomethrower Hey, not sure when it changed
10:05 gnomethrower but whoever worked on the new SaltStack documentation: you are a goddamn rock star and I love you
10:05 colegatron Ricardo1000, you can setup cronjobs in the minion.
10:06 colegatron Ricardo1000, regular cron jobs, I mean. You can build the highstate to set up cronjobs in the minion. if the master goes down, those schedules will work anyway
10:06 Ricardo1000 colegatron: do you mean system cron or salt-minion build-in cron ?
10:06 colegatron system cron
10:07 babilen Ricardo1000: You have to decide between a master/minion or masterless model. The latter can be done with "local file roots" (states are copied to the minion and applied locally) or via SSH.
10:07 Ricardo1000 colegatron: It looks like a crutch :)
10:07 colegatron Ricardo1000, you're asking to do High Availabilty without high availability. :)
10:07 babilen A schedule is (I think, haven't double checked) applied by the minion who, naturally, can't apply a state in a master/minion model if the master is away
10:07 Ricardo1000 colegatron: :))
10:07 colegatron just do HA :-)
10:08 babilen Is there really no way for you to ensure that k boxes are online and reachable?
10:08 babilen You might want to have a word with your infrastructure provider
10:08 colegatron why you don't do it? price? time? knowledge?    price can be $5 per master, time not sure, knowledge is in #salt :)
10:08 Ricardo1000 babilen: :)
10:09 debian112 joined #salt
10:09 ronnix joined #salt
10:09 colegatron babilen or just run a provider_switch :)
10:09 aldevar joined #salt
10:10 babilen I mean I totally understand your desire to safeguard against everything, but .. at the end of the day .. you can't do that.
10:10 Ricardo1000 colegatron: Thanks
10:10 colegatron welcome
10:10 babilen If "master goes away and that's catastrophic" is really such a massive problem in your setup then, by all means, use a masterless setup
10:10 Ricardo1000 babilen: You are right, but risks should be minimal
10:11 colegatron Ricardo1000, what can happen in your case? server failure or network failure?
10:11 babilen My impression is that configuring a HA setup and using a minion/master setup is doable and it would be unlikely that all your redundant masters are unavailable
10:11 Ricardo1000 colegatron: both
10:12 colegatron do you use cloud servers (whatever it is)?
10:12 Ricardo1000 colegatron: no
10:12 colegatron well, that's the problem :-)
10:13 colegatron (typical consultant's answer) hahah, just joking.
10:14 colegatron depending on physical hardware is a problem. I guess you don't have openstack or any other on-premises cloud solution
10:14 debian112 joined #salt
10:14 colegatron maybe go masterless is the best in your case, as babilen pointed out
10:15 debian1121 joined #salt
10:16 babilen In a way I can't believe that a HA master setup would fail without other bits of the infrastructure being affected (as in: All other nodes also)
10:16 colegatron but I hate to say that, honestly. setting up a master on a(nother) server is a matter of minutes, restoring all the minion keys and so on...
10:16 babilen But then I don't know Ricardo1000's setup and, as I said, if you want to safeguard 100% against a missing master: Don't use one
10:17 babilen using a master has massive advantages in salt as it allows you to tap into the *real* power of salt: its message bus
10:17 cmarzullo ^^
10:18 debian1121 joined #salt
10:18 aldevar left #salt
10:19 ronnix joined #salt
10:20 debian112 joined #salt
10:21 colegatron Ricardo1000, just for curiosity, why are you thinking to leave cfengine?
10:25 ksk Hola. I wanted to use mysql_user.present in one of my formulas, but on executing I get "State 'mysql_user.present' was not found in SLS 'mysql-formula'" - Am I doing it wrong? thx..
10:26 amcorreia joined #salt
10:27 antpa joined #salt
10:28 debian1121 joined #salt
10:28 colegatron pastebin
10:28 babilen ksk: The mysql module requires you to install the mysqldb Python module as documented on https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.mysql.html
10:28 ksk was missing python-mysqldb package. nevermind.
10:28 ksk thanks babilen :)
10:29 babilen Install it in a state and set "reload_modules: True" in the pkg.installed state
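What babilen describes would look roughly like this; the mysql_user state is only there to show the require, and its values are placeholders:

    python-mysqldb:
      pkg.installed:
        - reload_modules: True

    someuser:
      mysql_user.present:
        - host: localhost
        - password: changeme
        - require:
          - pkg: python-mysqldb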
10:29 babilen Hmm .. wouldn't it be great if the could define requisites with wildcards/matchers?
10:30 babilen So .. an easy way to say "mysql.* states require this pkg.installed state" without having to tinker with each of those states?
10:30 babilen *if we
10:32 debian112 joined #salt
10:32 colegatron but that's what the dependency system is for, isn't it?
10:34 babilen Yeah, exactly. I was just thinking of an easier way to express those dependencies than having to add "require: - pkg: python-mysqldb" everywhere
10:34 babilen And compound matchers for state names/name arguments/...
10:35 debian1121 joined #salt
10:36 colegatron ummm, it is like setup a 'parallel' requisites system. extra complication in my opinion. - require: [ sls: this-state ] feels enough for me
10:36 shoemonkey joined #salt
10:38 dariusjs joined #salt
10:38 babilen colegatron: Yeah, but if you have hundreds of states with a specific requirement it's a pain in the arse to modify them all
10:38 debian112 joined #salt
10:39 colegatron yep, I can agree with that. maybe someone has that amount of states with that dependency ;-)
10:39 debian112 joined #salt
10:39 JohnnyRun joined #salt
10:40 colegatron is there any way to restart nginx if (any) file has been added/updated/removed in the /etc/nginx/conf.d folder?
10:40 morsik just use watch or watch_in.
10:40 babilen Configure a beacon to notify you and trigger a file.recurse/whatever on events?
10:41 babilen (assuming you mean 'modified on the box')
10:41 colegatron babilen, "killing flies with a cannon" we say here :)
10:41 kedare joined #salt
10:41 babilen Us Germans typically kill sparrows with cannons
10:41 colegatron I was thinking maybe I was forgetting some option/argument in a file state
10:42 babilen But how would your state know if somebody deletes a file?
10:42 babilen (on the minion that is)
10:43 babilen If you just want to react to changes in a salt state then use watch/watch_in or listen/listen_in as mentioned by morsik
10:43 debian1121 joined #salt
10:43 colegatron yep, I understand the problem. maybe adding a 'watch_this_folder' in the state, and on the next state/highstate run salt could see the difference between both moments
10:43 babilen I had assumed that you wanted to trigger an action if somebody performs manual modifications on the salt minion
10:44 babilen You could also use something like file.recurse with clean: True and schedule a state run every k minutes
10:44 colegatron nope. I want to put an unspecified number of files in a folder and expect the 'nginx' state to detect a change and reload/restart
10:44 babilen You can do that with a beacon + reactor
10:45 colegatron yep. but I have been delaying to play with beacons+reactors because I've not spare time :)
10:45 debian1122 joined #salt
10:45 babilen How will you "put" those files there?
10:45 colegatron Ricardo1000, that's another example of the salt's power  :)
10:46 debian112 joined #salt
10:47 colegatron oh wait. sorry, with an extend: - watch would be enough if I'm able to traverse the file list
10:47 babilen If you use salt to 'put' those files there you can watch/listen the state(s) that are responsible for doing so, yeah
10:48 babilen nginx won't react to manual changes on the minion (outside of salt) however
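A sketch of the watch pattern morsik and babilen mention, assuming the conf.d files are themselves deployed by salt (paths and source are placeholders):

    nginx_vhosts:
      file.recurse:
        - name: /etc/nginx/conf.d
        - source: salt://nginx/conf.d
        - clean: True    # also removes files dropped from the source

    nginx:
      service.running:
        - reload: True
        - watch:
          - file: nginx_vhosts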
10:49 debian112 joined #salt
10:51 colegatron babilen, sure :) manual changes are not allowed.
10:52 babilen streng verboten!
10:54 debian1121 joined #salt
10:59 Praematura joined #salt
11:03 debian112 joined #salt
11:07 promorphus joined #salt
11:09 debian1121 joined #salt
11:11 debian112 joined #salt
11:12 Ricardo1000 colegatron: I wanna leave cfengine, because it has too weak support from developer
11:12 rylnd babilen: i read german. that is also verboten ;)
11:12 colegatron Ricardo1000, lucky you.
11:13 colegatron Ricardo1000, you're in the right #place
11:14 debian112 joined #salt
11:14 golodhrim|work joined #salt
11:15 Rumbles joined #salt
11:16 golodhrim|work joined #salt
11:17 colegatron the state service.running: [ reload: True, watch: [ file: /etc/nginx/conf.d/mysite.conf ] ] is not failing when nginx does not start (mysite.conf contains a syntax error)
11:17 colegatron any idea how to catch the issue?
11:20 Ricardo1000 colegatron: You should set up a standalone state to check the nginx syntax and make it required
11:20 N-Mi joined #salt
11:20 N-Mi joined #salt
11:21 debian112 joined #salt
11:22 colegatron I'm looking at that approach, but a cmd.run "sudo service nginx configtest" always returns as 'changed', so it does not fail.
11:23 colegatron I'm looking at the 'stateful: true' argument, but it seems a twist
11:24 debian1121 joined #salt
11:24 AndreasLutro maybe add an `onlyif: nginx -t`
11:25 colegatron AndreasLutro, you're great, tnx! :)
11:26 debian112 joined #salt
11:26 AndreasLutro though I'm not sure if an onlyif applies to watch statements. but you can try
11:26 cyteen joined #salt
11:27 colegatron probably will depend on what is checked first, the watches or the onlyif
11:27 colegatron will see
11:28 felskrone joined #salt
11:28 debian112 joined #salt
11:29 colegatron nah, what i've said is silly. it should work.
11:29 colegatron let's see what salt thinks about it
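One way to express "only reload when the config actually parses", along the lines of Ricardo1000's separate check state; paths are placeholders and this is only a sketch:

    mysite_conf:
      file.managed:
        - name: /etc/nginx/conf.d/mysite.conf
        - source: salt://nginx/mysite.conf

    nginx_configtest:
      cmd.run:
        - name: nginx -t
        - onchanges:
          - file: mysite_conf

    nginx:
      service.running:
        - reload: True
        - watch:
          - file: mysite_conf
        - require:
          - cmd: nginx_configtest    # a failing nginx -t now fails the run visibly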
11:34 jas02 joined #salt
11:35 debian1121 joined #salt
11:39 Ricardo1000 Is anyone using salt in huge environments (more than 10000 hosts)?
11:41 Ricardo1000 Also interested in how much the minion and master processes utilize cpu, memory and i/o under load
11:41 AndreasLutro it uses 7
11:41 AndreasLutro (that's a joke, how much it uses depends entirely on your setup)
11:41 debian112 joined #salt
11:42 Ricardo1000 for example, if the master receives 10000 connections from minions, how much cpu, memory and i/o will it use
11:42 Ricardo1000 when pushing states to all of them
11:42 colegatron I want to work with you mate!
11:43 colegatron :D
11:43 Ricardo1000 colegatron: :)
11:43 colegatron I've only seen >10k servers in blog posts
11:43 debian112 joined #salt
11:44 Ricardo1000 colegatron: Maybe you have seen some graphics with load ?
11:44 colegatron (sorry for the offtopic hahaha)
11:44 colegatron yep, but my infrastructure is below 50 servers
11:45 colegatron cpu is about 15% average on an aws t2.micro instance and it is also running the slack bots
11:47 colegatron on the saltmaster I mean
11:48 jas02 joined #salt
11:48 Rumbles joined #salt
11:48 _Cyclone_ joined #salt
11:49 debian112 joined #salt
11:50 ny joined #salt
11:54 armyriad joined #salt
11:55 dariusjs joined #salt
11:55 debian112 joined #salt
11:56 ronnix joined #salt
11:56 Ricardo1000 colegatron: Is the cpu load on the master constant, or does it come and go?
11:57 ny Good morning everyone. I am having a problem when using the salt vmware cloud module. I have set up the following file on the salt master /etc/salt/cloud.providers.d/vmware.conf and I have run salt-cloud -f test_vcenter_connection to confirm that I can connect to any of the vcenters defined in vmware.conf. When running salt-cloud to reset a vm this works fine.
11:58 ravenx joined #salt
11:58 DEger joined #salt
11:58 ny I have a minion daemon also running on the same server as the master and I have tried creating a state file that uses module.run  to reset a vm and this just fails with the following: Module function cloud.action threw an exception. Exception: Either an instance or a provider must be specified
11:58 oaken_chris joined #salt
11:59 colegatron Ricardo1000, graph in priv
11:59 ny my state file looks like this:
11:59 ny reset-minion:
11:59 ny module.run:
11:59 ny - name: cloud.action
11:59 ny - fun: reset
12:00 ny - instance: <target vm hostname>
12:00 ny - provider: vmware
12:01 ny as above I have specified a provider but whenever I try to run this salt reports the following error : "Module function cloud.action threw an exception. Exception: Either an instance or a provider must be specified."
12:01 swills joined #salt
12:01 debian1121 joined #salt
12:03 cyborg-one joined #salt
12:04 johnkeates joined #salt
12:04 ravenx can someone help me with this problem:
12:04 ravenx https://paste.debian.net/920009/
12:04 ravenx i'm trying to use a combination of pillar roles and nodegroups to do targetting
12:04 ravenx however, I seem to be stuck
12:05 AndreasLutro your connect_app/dev.sls seems to be missing a space after the colon
12:07 debian112 joined #salt
12:08 ravenx hmm, let me see
12:08 JohnnyRun joined #salt
12:09 ravenx okay, so i fixed it to connect_app_role: dev with one space and it still gives me that no minions match :(
12:09 debian112 joined #salt
12:09 ravenx the funny thing was that i followed loosely this tutorial:  https://docs.saltstack.com/en/latest/topics/tutorials/states_pt4.html
12:09 ravenx it worked if i remove everything and anything connect_app related
12:10 ravenx and only leave in the tasks_app.init stuff
12:10 AndreasLutro you may need to refresh pillars on the minion itself
12:10 AndreasLutro maybe there's some pillar cache on the master as well, not sure
12:11 catpig joined #salt
12:11 ravenx let me try
12:11 ravenx actually, is it safe to assume that i can have two things under the "base"
12:12 AndreasLutro oh
12:12 AndreasLutro didn't notice that. no, that would cause a yaml parse error
12:12 ravenx gah.
12:12 ravenx maybe that's what is happening
12:12 debian1121 joined #salt
12:12 AndreasLutro maybe. check your master logs
12:13 ravenx could it be my file_roots tho? https://paste.debian.net/920010/
12:13 ravenx please excuse the line numbers
12:13 AndreasLutro erm, maybe
12:13 AndreasLutro I don't use saltenvs so I don't know about that
12:13 AndreasLutro but specifying both /srv/salt and /srv/salt/dev could cause issues
12:13 debian1121 joined #salt
12:14 ravenx i have been getting this though:
12:15 ravenx https://paste.debian.net/920013/
12:15 AndreasLutro well, that's because you're missing a colon after "connect-app-int"
12:15 ravenx ah i see
12:15 AndreasLutro you might want to read up on yaml syntax in general
12:15 ravenx well this is also strange for me, i dont remember having a top.sls file  with the server 'one' in it.
12:16 debian112 joined #salt
12:18 ravenx omfg... i think i got it working.
12:23 debian112 joined #salt
12:30 netcho_ joined #salt
12:34 debian1121 joined #salt
12:37 shoemonkey joined #salt
12:43 debian112 joined #salt
12:46 debian112 joined #salt
12:46 catpig joined #salt
12:48 PhilA joined #salt
12:51 debian1121 joined #salt
12:56 jas02 joined #salt
12:57 debian112 joined #salt
13:00 debian1121 joined #salt
13:00 cowyn_ joined #salt
13:04 debian112 joined #salt
13:05 cowyn joined #salt
13:06 gableroux joined #salt
13:06 debian1121 joined #salt
13:08 ronnix joined #salt
13:09 debian1122 joined #salt
13:12 sh123124213 joined #salt
13:14 catpig joined #salt
13:15 ssplatt joined #salt
13:16 debian112 joined #salt
13:17 racooper joined #salt
13:18 jas02 joined #salt
13:19 debian112 joined #salt
13:21 swills joined #salt
13:25 Cottser joined #salt
13:27 debian1121 joined #salt
13:30 Whissi Can anybody help? Trying to access a list in pillar (well, at the moment it looks like it is a dict) but I am getting "Rendering SLS 'base:tests.foo' failed: Jinja variable 'dict object' has no attribute 'test_files'" -> http://pastebin.com/raw/MAKFf9dw
13:31 hemebond .items()
13:32 Whissi hemebond: You mean changing 'for test_file in test_info["test_files"]' into 'for test_file in test_info["test_files"].items()'?
13:32 hemebond Yeah
13:33 Whissi Then I'll get "Jinja variable 'list object' has no attribute 'items'"
13:33 hemebond oh
13:34 DammitJim joined #salt
13:36 debian112 joined #salt
13:36 promorphus joined #salt
13:37 hemebond maybe try items() instead of iteritems()
13:38 paant joined #salt
13:39 dev_tea joined #salt
13:39 Whissi Doesn't change anything :( If I keep the .items() in test_info I still get the list object error. If I remove it I am back to the dict object error.
13:39 debian1121 joined #salt
13:40 hemebond What? So if you take your original code and replace iteritems with items it's the same?
13:40 Whissi Yes, I changed 'for test_name, test_info in salt["pillar.get"]('tests', {}).iteritems()' into 'for test_name, test_info in salt["pillar.get"]('tests', {}).items()'
13:43 fracklen joined #salt
13:43 hemebond I get the same error in python.
13:43 babilen Sure, a list doesn't have ".items()", that would be for dictionaries
13:43 brousch__ joined #salt
13:43 hemebond Wait, that might just be my yaml parsing fail.
13:43 debian112 joined #salt
13:44 babilen My guess would be that the 'tests' pillar results in a list
13:44 Whissi From my understanding test_files is a "list", not a dict. But I don't know how to iterate...
13:45 hemebond The problem is test_info doesn't have that key.
13:45 babilen for foo in test_files
13:45 hemebond Try just printing out test_info
13:45 mikecmpbll joined #salt
13:45 Whissi Like all the examples for lists just have "for foo in pillar['key']"
13:46 debian1122 joined #salt
13:47 babilen fwiw, I don't see an error in the pasted information
13:47 WKNiGHT joined #salt
13:48 babilen What do you get if you iterate over values in 'test_info' ?
13:48 Whissi hemebond: Key is there, http://pastebin.com/raw/Np5kqmqZ
13:48 Whissi That's the output of '{{ test_info|pprint }}'
13:49 hemebond Something strange going on there.
13:49 babilen Whissi: What does for foo in test_info.get('test_files', 'OHNOWEREDOOMED') {{ foo }} give you?
13:49 dariusjs joined #salt
13:50 ravenx is it possible to use jinja, or an if statement to check "do something if this directory exists" on salt?
13:50 babilen Exists where?
13:50 ravenx on my home
13:50 ravenx ~
13:50 babilen I don't think that's very idiomatic, but you could use onlyif with test -d ...
13:51 babilen To clarify: I don't think it's idiomatic salt to react to a given state on the minion .. you should tell it whether that directory should exist or not :)
13:52 ravenx true.
13:52 ravenx well my team wants to deploy using symlinks....
13:53 ravenx and that's why i gotta do this.  we want to keep a backlog of the last 5 successful builds.  if that version exists on the local machine, update the symlink, otherwise deploy the new version
13:53 ravenx which is yucky, using symlink.
13:54 mpanetta joined #salt
13:54 Whissi babilen: http://pastebin.com/raw/YhuwHks9
13:55 debian112 joined #salt
13:55 babilen So, we are, obviously, doomed
13:55 Tanta joined #salt
13:55 babilen Whissi: But that means that test_info does *not* have the 'test_files' key .. could you insert 'test_info' again?
13:55 babilen Something is fishy
13:58 debian112 joined #salt
13:59 johnkeates joined #salt
13:59 DEger joined #salt
14:00 Whissi babilen: Mh, now I am lost. Looks like I got the expected output: http://pastebin.com/raw/zaNL9G09
14:01 babilen Did you refresh pillar data in the interim?
14:01 babilen I really couldn't spot anything wrong in your original paste (apart from the fact that you could code a bit more defensively)
14:01 Whissi Yes. But now I also re-created the files.
14:02 oaken_chris joined #salt
14:04 debian1121 joined #salt
14:05 sjoerd_ joined #salt
14:07 debian112 joined #salt
14:07 rem5_ joined #salt
14:07 Whissi babilen, hemebond: Oh, I am so sorry. :(   I found the problem: The test data I showed you weren't complete, just a subset.  In just *one* of the additional data sets, I had set "test-files" instead of "test_files" and this data set broke everything.
14:08 hemebond ah
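The "code a bit more defensively" babilen suggested earlier could look like this; with .get() the stray test-files key would have rendered as an empty loop instead of killing the whole SLS:

    {% for test_name, test_info in salt['pillar.get']('tests', {}).items() %}
    {% for test_file in test_info.get('test_files', []) %}
    # ... use {{ test_name }} / {{ test_file }} here ...
    {% endfor %}
    {% endfor %}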
14:08 sgo_ joined #salt
14:10 sgo_ I'm facing troubles getting a minion to talk to master. I just installed the salt-minion package (2015.8.8+ds-1, both on minion and master), but the minion fails to connect with this error message - http://pastebin.com/a1ykP5Lf
14:10 sgo_ also, no key shows up on the master when I run "salt-key -L", so I'm not sure what key to delete if there doesn't exist any.
14:10 sgo_ any help would be appreciated!
14:11 sgo_ (also tried starting the saltmaster in open mode but that didn't help either)
14:12 Neighbour sgo_: Are you sure the key isn't listed under "Rejected" (which also shows the rejected keys in a different color)?
14:12 CrummyGummy joined #salt
14:12 sgo_ Neighbour, all 4 sections (accepted/denied/unaccepted/rejected) are empty.
14:13 sgo_ also, all the minion_* directories inside /etc/salt/pki are empty.
14:13 sgo_ super weird
14:13 debian1121 joined #salt
14:13 Neighbour and /etc/salt/pki/master/minions_rejected/ on the master?
14:14 sgo_ also empty
14:14 netcho_ joined #salt
14:14 Neighbour that is weird indeed...is there anything in the salt master logfiles about the key rejection?
14:15 mpanetta joined #salt
14:15 Neighbour (or try running the master with -l debug and see if anything about the key rejection scrolls past)
14:15 sgo_ funny thing is, I don't see anything in the log files, except for a few config warnings.
14:15 sgo_ that's a good idea. I'll try it quickly.
14:16 debian1122 joined #salt
14:17 sgo_ the minion ID format was invalid
14:17 ronnix joined #salt
14:17 * sgo_ facepalms
14:17 Neighbour Ahh, sneaky...well at least you found it :)
14:18 sgo_ yeah, it was a good idea to run salt master with -l debug.
14:18 Neighbour Maybe open an issue about it on github so it'll be easier for others to spot in the future
14:18 sgo_ thanks
14:18 sgo_ yes, I'll do that now
14:18 feld joined #salt
14:18 Neighbour thanks :)
14:19 debian112 joined #salt
14:21 mikecmpbll joined #salt
14:22 sarcasticadmin joined #salt
14:22 debian112 joined #salt
14:22 kedare joined #salt
14:24 DammitJim joined #salt
14:24 debian112 joined #salt
14:26 debian1121 joined #salt
14:27 Praematura joined #salt
14:30 fracklen joined #salt
14:31 debian112 joined #salt
14:31 evle1 joined #salt
14:34 debian112 joined #salt
14:35 foodoo joined #salt
14:35 JohnnyRun joined #salt
14:39 _two_ joined #salt
14:41 debian112 joined #salt
14:43 m4rx joined #salt
14:44 Rumbles joined #salt
14:44 cowyn joined #salt
14:44 debian1121 joined #salt
14:45 raspado joined #salt
14:46 foodoo joined #salt
14:46 jas02 joined #salt
14:47 debian112 joined #salt
14:48 cowyn joined #salt
14:49 johnkeates joined #salt
14:50 brakkisath joined #salt
14:51 fracklen joined #salt
14:52 pipps joined #salt
14:52 debian112 joined #salt
14:55 debian1121 joined #salt
14:55 keltim joined #salt
14:57 nledez joined #salt
14:57 nledez joined #salt
14:58 debian112 joined #salt
15:01 pipps joined #salt
15:02 nikdatrix joined #salt
15:04 o1e9 joined #salt
15:05 debian112 joined #salt
15:06 czeq joined #salt
15:08 Praematura joined #salt
15:11 Brew joined #salt
15:11 jas02 joined #salt
15:11 nikdatrix left #salt
15:11 debian1121 joined #salt
15:14 nikdatrix joined #salt
15:15 mavhq joined #salt
15:15 XenophonF How do I get the Jinja renderer to include the context for errors?
15:16 XenophonF I'm trying to run users-formula on a Windows VM, but it's throwing an error I can't track down.
15:16 cowyn joined #salt
15:17 debian112 joined #salt
15:19 ronnix joined #salt
15:19 jauz XenophonF: I haven't seen a users-formula for Windows, yet. Where did you find yours? *would be curious to see how it works*
15:19 debian1121 joined #salt
15:20 LondonAppDev joined #salt
15:22 jauz All I've seen so far is: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.win_useradd.html
15:22 debian112 joined #salt
15:23 XenophonF I'm using https://github.com/saltstack-formulas/users-formula
15:24 babilen Same here
15:24 XenophonF But my question is more generic - is there a way to get more details out of the Jinja renderer?
15:24 XenophonF These errors are awfully opaque.
15:24 debian1121 joined #salt
15:24 k3dare joined #salt
15:27 debian112 joined #salt
15:27 onlyanegg joined #salt
15:29 debian112 joined #salt
15:31 ronnix joined #salt
15:33 debian1121 joined #salt
15:34 vegardx left #salt
15:35 toastedpenguin joined #salt
15:38 stooj joined #salt
15:38 XenophonF oh man the way i had to find the line with the error was stupid
15:39 XenophonF for those of you searching the web for jinja render or renderer error or exception troubleshooting:
15:39 debian112 joined #salt
15:39 XenophonF add a syntax error of your very own to the template in question
15:39 XenophonF like '{{ asdf }}'
15:39 XenophonF and then do a binary search: put that erroneous bit of code in the middle of the file, and move it halfway up or halfway down
15:40 XenophonF use file.managed or something to render the template without executing it
15:40 XenophonF e.g., salt-call state.single file.managed name='c:\temp.txt' source=salt://users/init.sls template=jinja
15:41 XenophonF if you don't find the error, move it halfway up or down in the remaining text (as appropriate)
15:41 XenophonF think binary search from CS100
15:41 XenophonF in my case, the error is here:
15:41 XenophonF https://github.com/saltstack-formulas/users-formula/blob/master/users/init.sls#L42
15:41 toastedpenguin anyone used salt-cloud or salt with AWS autoscaling or attempted to get salt to provide similar functionality to AWS autoscaling?
15:41 XenophonF i guess the value of current is False or something
15:43 KingJ joined #salt
15:43 Valfor joined #salt
15:43 Valfor joined #salt
15:44 roock joined #salt
15:44 PatrolDoom joined #salt
15:45 kshlm joined #salt
15:46 PatrolDoom joined #salt
15:46 DEger joined #salt
15:48 XenophonF babilen: do you use users-formula on Windows?
15:48 JohnnyRun joined #salt
15:49 babilen No, I don't have to touch MS Windows
15:51 Whissi joined #salt
15:59 scsinutz joined #salt
16:01 impi joined #salt
16:03 debian1121 joined #salt
16:04 sp0097 joined #salt
16:05 XenophonF ah oh well
16:05 XenophonF thanks
16:05 XenophonF i think i'm just going to run NET USER manually for now
16:06 feliks is there a way to hide certain states from state.highstate?
16:06 feliks like some cmd.run that are only cluttering the output?
16:06 Hipikat joined #salt
16:07 debian112 joined #salt
16:08 dariusjs joined #salt
16:09 Sketch feliks: not that i know but i find setting "state_output: changes" in your config is nice
16:10 debian112 joined #salt
16:11 debian112 joined #salt
16:11 colegatron I think it should be handled with salt environments, but I've never used them, so just to confirm:
16:12 pipps joined #salt
16:12 colegatron I need to update some states but in the meantime also need some users to be able to continue to run the states
16:12 dendazen joined #salt
16:13 colegatron my states are loaded through gitfs to the default (base) environment. does anyone know how it is supposed to be handled?
16:15 colegatron to be more concise: I need to modify the nginx state while it is being deployed on servers. until now I was the only one running states, but now there is someone else doing it and I want to avoid clashing
16:17 feliks Sketch: thanks for the tip, but unfortunately it doesn't help me with all the bloat ;)
16:17 feliks there are some things that "change" even if they don't. some way to hide that would be nice
16:17 debian1121 joined #salt
16:17 Sketch i try to make them not change instead
16:19 ronnix joined #salt
16:19 sgo_ joined #salt
16:20 nikdatrix quick question, and a very basic one. the file top.sls default path should be /srv/salt?
16:20 colegatron nikdatrix, just the file_root or gitfs root
16:21 debian112 joined #salt
16:22 nikdatrix ok gotcha, is the file_roots a parameter in the master conf?
16:22 nikdatrix sorry, i'm just getting into salt
16:26 babilen nikdatrix: You can configure it, but the default would be /srv/salt which makes the default location of the top.sls file /srv/salt/top.sls
16:26 babilen If you want to have it elsewhere you can configure it to your liking
16:26 babilen https://docs.saltstack.com/en/latest/ref/configuration/master.html#file-roots
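For reference, the default from the linked page, spelled out (only needed in the master config if you want something different):

    file_roots:
      base:
        - /srv/salt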
16:27 debian112 joined #salt
16:28 nikdatrix Great, thanks! i'll check the link.
16:28 pipps joined #salt
16:28 diagnostuck joined #salt
16:28 muxdaemo_ joined #salt
16:28 ub1quit33 joined #salt
16:29 scsinutz joined #salt
16:29 babilen nikdatrix: Are you unhappy with /srv/salt ?
16:29 swills joined #salt
16:30 debian112 joined #salt
16:30 bantone joined #salt
16:30 jholtom joined #salt
16:31 nikdatrix no. actually i'm using a vanilla config on a dockerized salt
16:31 nikdatrix i'm just having an issue
16:31 nikdatrix No matching sls found for 'custom_tools.sls' in env 'base'
16:31 nikdatrix i added the custom_tools.sls with vim and curl. just to test....
16:33 ajolo joined #salt
16:33 debian112 joined #salt
16:34 babilen nikdatrix: Ah, make that "custom_tools" rather than "custom_tools.sls"
16:34 babilen (in your top file that is)
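i.e. the top file names the sls module without its extension, roughly:

    # /srv/salt/top.sls
    base:
      '*':
        - custom_tools    # served from /srv/salt/custom_tools.sls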
16:35 exegesis joined #salt
16:36 mavhq joined #salt
16:38 debian112 joined #salt
16:39 michiel joined #salt
16:39 nikdatrix Ahhhh i get it !
16:39 nikdatrix thanks babilen! is working now
16:39 shoemonkey joined #salt
16:39 babilen Wonderful :)
16:40 debian1121 joined #salt
16:41 nikdatrix salt has very complete docs, but it is difficult to get started when you don't know what you are looking for
16:41 babilen Did you work through https://docs.saltstack.com/en/getstarted/config/ already?
16:42 babilen https://docs.saltstack.com/en/getstarted/ is a good entry point for the various basic tutorials
16:44 nikdatrix i've just found it... i'm following it now :D
16:45 jas02 joined #salt
16:46 Felipe joined #salt
16:47 woodtablet joined #salt
16:48 nikdatrix_ joined #salt
16:48 debian112 joined #salt
16:49 pidydx joined #salt
16:49 pidydx Is 2017.X coming soon?
16:50 leonkatz joined #salt
16:50 debian112 joined #salt
16:50 Trauma joined #salt
16:51 woodtablet left #salt
16:52 nikdatrix_ left #salt
16:54 babilen pidydx: I wouldn't think so
16:57 whytewolf soon is relative.
16:57 jas02 joined #salt
16:57 Inveracity joined #salt
16:57 whytewolf don't think they have even tagged a rc yet
16:59 Edgan joined #salt
16:59 woodtablet joined #salt
16:59 mdhas joined #salt
17:01 muxdaemon joined #salt
17:01 sgo_ joined #salt
17:02 pidydx what is the proper way to install/enable this? https://github.com/carlpett/salt-vault
17:02 debian112 joined #salt
17:02 pidydx I put it in my file_roots, but it doesn't seem to be playing nice
17:03 candyman88 joined #salt
17:03 whytewolf you have to sync after you put it in your file roots
17:03 whytewolf salt '*' saltutil.sync_all
17:04 pidydx sync from the minion?
17:05 whytewolf actaully since there is also a runner you need to also sync the master with salt-run saltutil.sync_all
17:06 whytewolf doesn't matter if you salt-call or salt [local or master respectivly]
17:07 pidydx I actually think I am having a different problem at the moment, but does saltutil.sync_all pull down all the latest state/pillar stuff without trying to run it?
17:07 whytewolf it just syncs the dunder directories
17:08 whytewolf and yes does do a pillar render/grains cache update
17:08 debian112 joined #salt
17:08 whytewolf it does not do a state run though
17:09 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.sync_all
17:10 tberc joined #salt
17:13 pidydx ugh yeah, I am having config issues, not sync issues
17:13 debian112 joined #salt
17:15 winsalt joined #salt
17:18 rem5 joined #salt
17:24 lclemens joined #salt
17:26 madhadron joined #salt
17:31 debian1121 joined #salt
17:31 chowmeined joined #salt
17:33 debian112 joined #salt
17:34 foundatron_ joined #salt
17:35 ronnix joined #salt
17:35 madhadron Anyone have experience using the s3 external pillar? It seems to clobber my local pillar when I try to configure it.
17:36 teclator joined #salt
17:36 jrgochan joined #salt
17:37 jrgochan Hello. Is it possible to configure a returner based off not only the Job ID, but the type of function being run?
17:41 debian1121 joined #salt
17:44 jas02 joined #salt
17:46 debian112 joined #salt
17:49 aldevar joined #salt
17:50 debian112 joined #salt
17:50 spaceman_spiff joined #salt
17:51 debian112 joined #salt
17:51 jauz jrgochan: I set my returner to only trigger on commands where I add --return slack ... for example, rather than having the config set to trigger the returner on all activity. Is that sort of what you're looking for?
17:51 aldevar left #salt
17:52 spaceman_spiff Hello, I'm trying to call salt['cmd.run'] from an execution module, but I need the retcode and stderr too. So I tried to import salt.modules.cmdmod._run, but it obviously doesn't work because it can't access __grains__. Is there a solution?
17:53 debian112 joined #salt
17:54 DammitJim joined #salt
17:55 debian112 joined #salt
17:57 jas02 joined #salt
18:00 ChubYann joined #salt
18:00 ALLmightySPIFF left #salt
18:01 PatrolDoom recently debian pkg "ferm" now has a menuconfig asking to start @ boot. i thought salt automagically did all that via, "service", module?
18:01 PatrolDoom erm basically the change w/ ferm pkg breaks ferm state
18:01 PatrolDoom hrm... ill try something
18:05 catpig joined #salt
18:06 debian1121 joined #salt
18:07 jas02 joined #salt
18:09 xet7 joined #salt
18:15 debian112 joined #salt
18:16 sgo_ joined #salt
18:16 jas02 joined #salt
18:23 scsinutz joined #salt
18:25 debian112 joined #salt
18:25 whytewolf PatrolDoom: if the package is asking for something while being installed on a debian or debian like system you want to look at the debconf tools to setup an answer to that menu before the package is installed
18:26 whytewolf spaceman_spiff: if you need cmd.run with all the extras take a look at cmd.run_all
18:26 whytewolf spaceman_spiff: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cmdmod.html#salt.modules.cmdmod.run_all
18:27 jrgochan jauz: I'm using a reactor to run when a minion connects for the first time after a kickstart build
18:27 jrgochan http://pastebin.com/3PEhn7UE
18:27 whytewolf PatrolDoom: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.debconfmod.html
18:27 jrgochan I'd like to only receive messages from my returner during that command.
18:27 jrgochan But I'm not quite sure what local.state.apply corresponds to, and how I may add a returner to it
18:28 jrgochan I've got a config in my /etc/salt/master to send various return jobs through smtp
18:28 debian1121 joined #salt
18:29 fracklen joined #salt
18:29 rylnd whytewolf: you have an idea what could be the issue with https://gist.github.com/jbfriedrich/ceeddb2b19bc8b28f9dfa065bbb33fcc?
18:29 darioleidi joined #salt
18:31 XenophonF i am about to throw a chair or something at my Windows minions
18:31 pipps joined #salt
18:31 XenophonF salt-formula hangs when running a file.recurse state in salt.minion
18:31 gmoro joined #salt
18:32 whytewolf rylnd: humm. i would say the fact add_host requires a name. and the saltmod.runner only works with **kwargs
18:34 rylnd is there a way to use the runner in my case at all? is it worth opening an issue for that?
18:34 whytewolf causing a conflict with the - name variable
18:34 whytewolf i would say yes open an issue. saltmod.runner looks way too limiting.
18:34 debian112 joined #salt
18:35 whytewolf of course, that being said there is a ddns state module and execution module
18:35 mdhas joined #salt
18:35 PatrolDoom joined #salt
18:35 whytewolf https://docs.saltstack.com/en/latest/ref/states/all/salt.states.ddns.html#module-salt.states.ddns
18:36 whytewolf have a minion on your master and run that with the tgt of your master and changes will still come from your master. as a work around
18:36 whytewolf instead of as a runner
18:37 rylnd whytewolf: yeah that is what i am using right now as a workaround
18:37 rylnd whytewolf: just feel a little bit dirty haha
18:37 rylnd whytewolf: i will open an issue for this. thanks!
18:38 whytewolf actually i normally avoid most runners if there is a state module or exacution module.
18:38 debian112 joined #salt
18:38 winsalt whytewolf, is there clean=True in the file.recurse?
18:38 whytewolf winsalt: yes.
18:39 whytewolf clean: true
18:39 winsalt I think there is an issue with that parameter on Windows
18:39 * whytewolf shrugs. I'm like a bad maid. I don't do windows ;)
18:40 winsalt https://github.com/saltstack/salt/issues/36802
18:40 saltstackbot [#36802][OPEN] using clean=True parameter in file.recurse causes python process to spin out of control | Description of Issue/Question...
18:40 whytewolf well. that would be a problem with clean true
18:40 shoemonkey joined #salt
18:41 rylnd whytewolf: why are you avoiding runners if possible?
18:41 debian1121 joined #salt
18:42 whytewolf rylnd: because why force something in the master context when i can abuse the minion context
18:42 whytewolf not to mention the added benefit that if i want to move functionality off my master i just have to change a tgt
18:43 rylnd whytewolf: ah ok. i havent thought about it from this way
18:43 s_kunk joined #salt
18:48 fracklen joined #salt
18:48 quique joined #salt
18:49 brakkisa_ joined #salt
18:49 debian112 joined #salt
18:51 debian112 joined #salt
18:54 debian112 joined #salt
18:55 brakkisath joined #salt
18:57 exegesis Guys, quick question. Is it possible to add two cron entries for the same script?
18:57 exegesis Say every 15 minutes and at reboot:   cron.present:
18:57 exegesis - user: root
18:57 exegesis - minute: '*/15'
18:57 exegesis - special: '@reboot'
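One way to get both entries is two cron.present states with distinct identifiers; the script path is a placeholder, and depending on the Salt version the @reboot entry may handle identifier differently, so test it:

    myscript_every_15:
      cron.present:
        - name: /usr/local/bin/myscript.sh
        - user: root
        - minute: '*/15'
        - identifier: myscript_every_15

    myscript_at_reboot:
      cron.present:
        - name: /usr/local/bin/myscript.sh
        - user: root
        - special: '@reboot'
        - identifier: myscript_at_reboot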
19:00 debian1121 joined #salt
19:01 colegatron joined #salt
19:01 cyborg-one joined #salt
19:02 fracklen joined #salt
19:02 debian112 joined #salt
19:05 keltim joined #salt
19:06 Tanta joined #salt
19:11 eldad joined #salt
19:14 eldad hi all :) does Salt support cloud orchestration at the infrastructure level? the closest thing I found was salt-cloud map file
19:14 debian112 joined #salt
19:14 eldad the let you specify all the machines you like salt to create in single file
19:14 eldad *that
19:15 eldad But if I'm not wrong, map files don't maintain state , if I'm changing the file - adding/removing machines, does salt knows to only apply the changes?
19:16 eldad I'm looking for something like Terraform capabilities
19:16 debian112 joined #salt
19:18 cmarzullo thought you were describing terraform.
19:18 cmarzullo hehh
19:19 debian1121 joined #salt
19:19 eldad Yeah, I'm looking to do something similar with SaltStack :)
19:21 exegesis joined #salt
19:23 pipps joined #salt
19:25 brokensyntax joined #salt
19:26 haam3r joined #salt
19:30 leonkatz joined #salt
19:35 djgerm1 is there a common place to look for when one is getting "No Top file or external nodes data matches found." when running state.apply?
19:37 whytewolf eldad: no, map files only describe systems that saltcloud will build. all state setup is done through orchestration/reactors/highstates/etc/etc/etc now. if you remove a machine from a map and run the map with a cleanup setup it will remove that machine and clean it out of salt-keys which will stop it from running any states
19:38 whytewolf however. you can use orchestration and the cloud state module to handle the same functionality that map does. linking that functionality into orchestration
19:40 N-Mi joined #salt
19:40 N-Mi joined #salt
19:41 brakkisath joined #salt
19:41 eldad whytewolf: thanks! are you familiar with any sls file that demonstrate such thing?
19:42 gableroux_ joined #salt
19:42 whytewolf djgerm1: unfortunately that is a rather generic error. it can mean a few things. the 2 biggest normally are a) the top file doesn't exist where the minion can see it (it defaults to salt://top.sls) b) the minion does not belong to anything in the top file.
19:44 whytewolf eldad: unfortunately no. I don't do a lot with salt-cloud right now. hopefully this might help. https://docs.saltstack.com/en/latest/ref/states/all/salt.states.cloud.html
19:45 eldad Ok, thanks again!
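A minimal orchestration sketch of what whytewolf describes, using the cloud state plus a highstate; the profile and vm names are placeholders:

    # /srv/salt/orch/provision.sls -- run with: salt-run state.orchestrate orch.provision
    web1:
      cloud.profile:
        - profile: ec2_centos7

    web1_highstate:
      salt.state:
        - tgt: web1
        - highstate: True
        - require:
          - cloud: web1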
19:45 scsinutz joined #salt
19:46 quique is there a salt solution that copies salt formulas locally onto the saltmaster in an automated way? I don't like gitfs because development, fiddling and testing are slow when done in multiple places.
19:46 whytewolf humm, i did misspeak. if you run a map with a cleanup step it will DELETE anything that doesn't belong in the map...
19:47 whytewolf quique: clever use of git.latest and a web hook?
19:48 jauz joined #salt
19:49 quique whytewolf: cool, I think that will work thanks!
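A sketch of the git.latest idea (repo URL, branch and target path are placeholders): a state applied to a minion running on the master keeps the formula checked out locally, and a webhook-driven reactor can re-apply it on every push.

    users-formula:
      git.latest:
        - name: https://github.com/saltstack-formulas/users-formula.git
        - target: /srv/formulas/users-formula
        - branch: master
        - force_reset: True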
19:50 whytewolf quique: also, I know you don't like gitfs. however i use this orchestration to speed up a lot of the waiting time involved with testing it https://github.com/whytewolf/salt-phase0-states/blob/master/orch/salt-core-update.sls
19:54 quique whytewolf: looks interesting, so you have your git repo cloned on your local workstation? make a change, push, then run this orch and then a highstate?
19:54 whytewolf yeap, or as the mysql orch in the same directory shows. I might just call that orchestration from another orchestration and just make it one call
19:55 whytewolf after i get farther with that project i actually intend to have github send a web hook to salt to run that orchestration on every post commit
19:57 whytewolf only caveat with the orchestration calling it: if the orchestration itself is new or updated by that call, it could cause issues with it working
19:57 quique yeah, spiral
19:58 quique nice.
20:05 s_kunk joined #salt
20:07 MasterNayru joined #salt
20:10 brakkisath joined #salt
20:12 KajiMaster joined #salt
20:14 PatrolDoom joined #salt
20:15 sgo_ joined #salt
20:16 jhauser joined #salt
20:19 brakkisa_ joined #salt
20:20 spaceman_spiff whytewolf: awesome, thanks a lot -- also I need to sleep because the answer was in the fucking manual under my nose, sorry :)
20:21 whytewolf spaceman_spiff: no problem :)
20:25 jas02 joined #salt
20:27 antpa joined #salt
20:28 pipps joined #salt
20:28 aldevar joined #salt
20:28 aldevar left #salt
20:30 scsinutz joined #salt
20:33 antpa joined #salt
20:34 icebal23 joined #salt
20:41 shoemonkey joined #salt
20:42 paant joined #salt
20:46 mdhas joined #salt
20:47 onlyanegg joined #salt
20:49 eldad joined #salt
20:49 exegesis joined #salt
20:50 jauz Anyone here have experience using Saltpad? Seems not to have been updated since sometime last year.
20:50 s_kunk joined #salt
20:50 s_kunk joined #salt
20:51 toastedpenguin can salt make outbound API calls to some other application?
20:52 eldad toastedpenguin: you mean call webhooks?
20:52 nikdatrix joined #salt
20:53 eldad https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.http.html
20:53 toastedpenguin eldad:  suppose its that, I would like salt to add/remove upstream servers from nginx+ using their API, so when salt-cloud deploys a new server it makes the call to nginx+
20:53 toastedpenguin I can have reactor initiate the call based on a salt-cloud beacon listener
20:54 whytewolf toastedpenguin: you mean like https://docs.saltstack.com/en/latest/ref/states/all/salt.states.http.html
20:56 whytewolf most "API"'s are really nothing more then a fancy web interface.
20:56 eldad_ joined #salt
20:56 whytewolf if you need something else, there is always cracking open a python ide and writing a module/state/etc. to handle it
20:57 toastedpenguin whytewolf:  yeah the nginx+ api for adding upstream servers is curl 'http://127.0.0.1:8080/upstream_conf?add=&upstream=upstream-servert&server=10.1.1.2'
20:57 whytewolf oh so http.query should be able to handle it fine
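A minimal http.query state sketch mirroring the curl call above (the host, upstream name and server address are placeholders; the state only passes if the endpoint returns a 200):

    add-upstream-server:
      http.query:
        - name: http://127.0.0.1:8080/upstream_conf?add=&upstream=upstream-server&server=10.1.1.2
        - status: 200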
20:58 eldad_ trying to call boto_rds module using salt-run salt.cmd runner and getting this error: https://gist.github.com/eldadru/3f9902c05de09b5d649d29e18798d1fe
20:59 eldad_ any ideas?
20:59 whytewolf oh wow. that ... looks like the loader blowing up. i have never seen that before.
21:00 toastedpenguin nginx was trying to push me towards using AWS autoscaling groups but I don't want to lose salt-cloud deployments
21:01 gableroux joined #salt
21:01 whytewolf toastedpenguin: you could do both. there is the aws autoscale reactor that someone wrote that calls the aws deployment stuff so it is half autoscale half salt-cloud
21:02 toastedpenguin whytewolf: I did see it, but I have been trying to wrap my head around how that hybrid approach works with how we deploy
21:02 toastedpenguin deploying Windows/IIS servers
21:02 eldad_ I'm trying to use boto for cloud operations that salt-cloud doesn't support, like creating security groups etc. any other method?
21:03 whytewolf toastedpenguin: that i couldn't tell you. I tend to stay away from windows.
21:03 gableroux_ joined #salt
21:03 whytewolf eldad_: typically i run those operations through an orchestration with a minion that resides on the master as the target, instead of trying to go through the runner.
21:04 toastedpenguin we have windows deployments buttoned up with salt-cloud and state files, have a beacon and reactor doing the AD add/remove, which is why the autoscaling group approach didn't appear to work for us
21:05 eldad_ whytewolf: that sounds really wrong - installing a minion on the master itself just to run commands on/from the master machine
21:06 eldad_ salt's support for local master execution seems to be pretty poor =\
21:06 whytewolf eldad_: it actually isn't that wrong. the master software is not minion-based software. it isn't meant to be treated as such
21:07 whytewolf it is mostly the communication hub for minions
21:08 whytewolf salting the master is a very common practice
21:08 eldad_ Ok, I was unfamiliar with that, I'll give it a try, thanks :)
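A sketch of the pattern whytewolf describes, assuming a minion with id saltmaster runs on the master host and has boto credentials configured; the sls names, group details and region are placeholders.

    # orch/aws_prereqs.sls -- run with: salt-run state.orchestrate orch.aws_prereqs
    create_secgroup:
      salt.state:
        - tgt: saltmaster
        - sls: aws.secgroups

    # aws/secgroups.sls -- applied on the master's minion, which has the boto libraries
    web_secgroup:
      boto_secgroup.present:
        - name: web
        - description: allow http from anywhere
        - rules:
          - ip_protocol: tcp
            from_port: 80
            to_port: 80
            cidr_ip:
              - 0.0.0.0/0
        - region: us-east-1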
21:10 jauz Is there something wrong here with my arguments for this command? Returning false, previous user.add was true and functioning otherwise. https://gist.github.com/jonasbach/b17e90b47bd8ed71697dcdb92d355f9d
21:10 jauz (user.update) windows minion.
21:12 mdhas joined #salt
21:12 bocaneri joined #salt
21:13 bocaneri joined #salt
21:13 whytewolf jauz: humm. I don't see anything wrong with it. was there anything else besides just a false? maybe try the command on the minion with salt-call and add -l debug to see what is happening on the backend
21:14 jauz Will do. -- And no, only return was the bool False from the job. :(
21:18 jauz This minion's path didn't get applied on minion install -- what should the PATH be on Windows for Salt?
21:19 whytewolf i have no idea
21:19 * whytewolf and windows just don't mix
21:22 bocaneri joined #salt
21:22 whytewolf humm, the actualy salt employees are very quiet today. wonder if they are getting reay to tag an rc
21:23 whytewolf so many typos in that, so little time
21:24 whytewolf it is odd having a laptop whose specs blow away each of the controller nodes on my personal openstack cluster
21:27 jas02 joined #salt
21:29 jdipierro joined #salt
21:32 brakkisath joined #salt
21:33 leonkatz joined #salt
21:41 sp0097 joined #salt
21:43 brakkisath joined #salt
21:44 jauz Unfortunately I'm tinkering with Salt in a 100% Windows environment, so I'll be getting some dirt on me no matter what. :P Maybe I'll be able to come up with some good stuff to help some other poor souls working with Winions.
21:44 * jauz contemplates calling them "My winions!"
21:45 whytewolf i like winions ... you must call them this from now on
21:45 * jauz will make it so.
21:46 jauz Although it's kind of an oxymoron since they decidedly don't win very often at most things.
21:49 jauz That said, the ease of gathering information about hosts/deploying settings/software is still better through Salt than doing it manually or falling back on images, etc... We're also a small enough shop that we're looking at something cheap and light-weight. Salt fits the bill quite well so far, imo. Other "superior" Windows management platforms I've come across are too expensive or require more to learn/use. If I need to learn, I'd rather Python.
21:50 jas02 joined #salt
21:50 hemebond You don't have AD and Group Policy?
21:50 brakkisath joined #salt
21:50 jauz We do for our office, not for our clients out in the world.
21:51 brakkisath joined #salt
21:51 Tanta salt is just like any other programming language with a stdlib (states) that contains all the functions you really need
21:51 Tanta it can do just about anything that I have needed, too
21:53 jauz Yeah, between the winrepo, modules.win*, and  modules.chocolatey.... Looking fairly promising so far.
21:59 onlyanegg joined #salt
22:02 vegasq joined #salt
22:03 brakkisath joined #salt
22:05 jauz joined #salt
22:08 sarcasticadmin joined #salt
22:08 DEger joined #salt
22:15 Tanta I did my first fully-salted iptables config
22:15 Tanta it's really nice
22:15 jauz Nice! Congrats, sounds slick.
22:16 whytewolf gratz.
22:16 jauz I figured out my Winion user-add / administrator problem! :D
22:16 Tanta I use AWS + security groups usually for border firewalling, but I am slowly adding host-based firewalls that match and may restrict more
22:17 jauz Windows won't let you commit potentially damaging changes to Administrator users, so you need to do.... user.add [user details (don't specify the admin group yet)] > user.update [details/options for the user] > user.chgroups [add the user to Administrators,Users, append True] -- Tada! Works! :D
22:17 * jauz dances.
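For reference, the same sequence as salt CLI calls -- a sketch only, with the minion id, username and password as placeholders:

    salt 'winion1' user.add deployuser password='S3cret!'
    salt 'winion1' user.update deployuser fullname='Deploy User'
    salt 'winion1' user.chgroups deployuser groups='Administrators,Users' append=True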
22:18 jrgochan Hello. I'm using an smtp returner right now, but I'd kind of like the minions to output results of state runs into a file in the /root/ directory on their machine. Any way to accomplish this?
22:18 pipps joined #salt
22:20 hemebond jrgochan: Would https://docs.saltstack.com/en/latest/ref/returners/all/salt.returners.local_cache.html#module-salt.returners.local_cache be close enough?
22:20 hemebond Oh wait, there's https://docs.saltstack.com/en/latest/ref/returners/all/salt.returners.rawfile_json.html#module-salt.returners.rawfile_json
22:21 jrgochan hrm. I suppose that would just be in the minion config. I could give it a quick shot. Thanks
22:21 hemebond You need an entry in the minion config, yes. But you also have to specify, when you run the command, that you want it to use that returner.
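If the option name in the docs linked above is rawfile_json.filename, the minion config entry would be something like the line below (the path is a placeholder matching jrgochan's goal), and the job then has to be run with --return rawfile_json:

    rawfile_json.filename: /root/salt_results.txt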
22:24 fracklen joined #salt
22:32 Guest45616 joined #salt
22:36 antpa joined #salt
22:39 jauz joined #salt
22:42 shoemonkey joined #salt
22:43 Eugene A naive following of https://repo.saltstack.com/#debian yields a dependency error for python-zmq. This is a brand-new Debian 8.7.1 install. not sure what I could possibly do wrong (I'm normally not a Debianite)
22:44 Eugene Is it likely to be a problem with the salt repo, Debian, or just packaging nonsense in general?
22:47 netcho_ joined #salt
22:50 brakkisath joined #salt
22:50 jauz Eugene: Could this be related? https://www.bountysource.com/issues/28635712-bootstrap-fails-with-debian-8-2-docker-image
22:57 jrgochan Really not sure how to get this reactor state to output the run of this highstate to a file.. any thoughts? http://pastebin.com/76iFJUkz
22:57 jrgochan I'd like to get it to send results of these runs to /root/soemthingorother.txt
22:57 hemebond Do you need the -- in --return?
23:00 whytewolf jrgochan: - ret: syslog [outside of arg]
23:00 scsinutz joined #salt
23:02 jrgochan no entries in syslog
23:02 jrgochan http://pastebin.com/RuGnJfef
23:02 jrgochan http://pastebin.com/iAtR9tyY
23:05 whytewolf does --return syslog work?
23:05 whytewolf [not in that context, but in general]
23:06 jrgochan not particularly. I swapped the minion config over to rawfile_json and it seems to work fine
23:06 jrgochan only problem is it's in a nasty format
23:06 whytewolf that would be json :P
23:06 jrgochan haha. indeed
23:06 jrgochan do you know if salt.returners.rawfile_json has a prettyprint?
23:06 jrgochan or if there's a better local file returner?
23:07 whytewolf it does not appear to, although you could probably just use jq
23:08 jrgochan jq you say?
23:08 whytewolf https://stedolan.github.io/jq/
23:08 whytewolf pretty decent product for playing around with json data
23:09 jrgochan [root@cmtest ~]# cat salt_results.txt | jq .
23:09 jrgochan parse error: Invalid numeric literal at line 1, column 12
23:09 jrgochan is unhappppppy
23:09 whytewolf odd
23:10 jrgochan is complaining about a :
23:10 whytewolf I have used jq with salt's json output before
23:10 brakkisath joined #salt
23:11 rpb joined #salt
23:12 dev_tea joined #salt
23:13 whytewolf [--out json, maybe the json local output is different]
23:14 antpa joined #salt
23:14 jrgochan http://pastebin.com/4KuhUMED
23:14 jrgochan so like this is my reactor file?
23:15 whytewolf no, completely different thing. not applicable here
23:16 whytewolf --out takes the output from the returner on the cli and outputs it to the user. it has nothing to do with any of the internals
23:19 shoemonkey joined #salt
23:19 whytewolf honestly, I'm not good with local returners. I work pretty much entirely with centralized logging systems.
23:20 jrgochan i'd kind of like to, but my boss wants results output to the minions
23:20 jrgochan thanks though. perhaps I'll figure something out
23:20 whytewolf i would say the best bet would be to figure out what is wrong with the syslog returner. get that working and it will be readable
23:21 whytewolf the ret module should at least help a bit with that
23:21 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.ret.html
23:22 whytewolf oh, nevermind
23:22 whytewolf that might get you decent output of the json returner
23:24 jrgochan hrm. according to jsonformatter.curiousconcept.com my output needs to have its json strings wrapped in double quotes instead of single quotes
23:30 whytewolf humm, well double quotes are what the json spec requires. but i remember that jq shouldn't have issues with single or double quotes
23:38 jrgochan hrm. I tried YAJL as well and it's also complaining about the single quotes
23:43 jrgochan some salt-minion -l debug
23:43 jrgochan http://pastebin.com/WXKRgy7b
23:43 brakkisa_ joined #salt
23:44 jas02 joined #salt
23:44 scsinutz joined #salt
23:45 PatrolDoom joined #salt
23:50 sh123124213 joined #salt
23:52 justanotheruser joined #salt
23:58 jrgochan nice. I patched the file a bit and got it to output to syslog
23:58 jrgochan still in a messy format though
23:59 dev_tea joined #salt
