IRC log for #salt, 2017-05-01


All times shown according to UTC.

Time Nick Message
00:00 packeteer joined #salt
00:03 quarcu joined #salt
00:04 eseyman joined #salt
00:10 quarcu joined #salt
00:25 baffle joined #salt
00:43 Tanta joined #salt
00:55 justanotheruser joined #salt
01:03 rem5_ joined #salt
01:04 edrocks_ joined #salt
01:49 ilbot3 joined #salt
01:49 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.6, 2016.11.4 <+> Support: https://www.saltstack.com/support/ <+> SaltStack Webinar on Carbon, Nitrogen, and Enterprise 5.1 on May 18, 2017 https://goo.gl/PvsOvQ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers
02:08 zerocool_ joined #salt
02:46 JPT joined #salt
02:49 oida joined #salt
02:51 icebal joined #salt
03:07 prg3 joined #salt
03:28 miruoy_ joined #salt
03:28 icebal joined #salt
03:38 evle1 joined #salt
03:44 bantone can someone direct me on how to change the values of an already existing configuration file using a salt state
03:44 bantone for example I want to change a directive in sshd_config
03:44 bantone but I want to retain that in a .sls file
03:44 bantone i know i can put it in a file on the salt master and do file.managed and push it
03:45 Guest73 joined #salt
03:45 bantone but i'm asking how to do that in an existing state file
03:47 bantone nm figured it out
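
bantone never shared the fix, but a minimal sketch of one common approach, using a file.replace state to edit the directive in place (the directive, pattern, and service name here are illustrative):

    sshd_permit_root_login:
      file.replace:
        - name: /etc/ssh/sshd_config
        - pattern: '^#?PermitRootLogin\s+.*'
        - repl: 'PermitRootLogin no'
        - append_if_not_found: True

    sshd_service:
      service.running:
        - name: sshd
        - watch:
          - file: sshd_permit_root_login
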
04:28 cyborg-one joined #salt
04:38 icebal joined #salt
04:46 Shados joined #salt
04:51 Shados I notice that in dev templating/rendering support has been added to the file_tree ext_pillar. While this is nice, it does break existing setups with render_default and renderer_blacklist/whitelist *not* set (file tree dict is built but not populated with file contents). Is there a renderer I can set to have the old behaviour of just loading file contents as string values?
04:58 whytewolf Shados: not setting those should be the current default behaviour. otherwise when dev is released, backwards compatibility goes out the window. file a bug report if it is not working as expected
05:07 edrocks joined #salt
05:09 fracklen joined #salt
05:44 icebal joined #salt
06:37 jas02 joined #salt
07:01 fracklen joined #salt
07:02 debian1121 joined #salt
07:10 preludedrew joined #salt
07:15 zulutango joined #salt
07:29 jas02 joined #salt
07:31 LeProvokateur joined #salt
07:33 do3meli joined #salt
07:33 do3meli left #salt
07:34 do3meli joined #salt
07:34 Trauma joined #salt
07:34 do3meli left #salt
07:46 masber joined #salt
07:58 mikecmpbll joined #salt
08:07 LeProvokateur joined #salt
08:29 LeProvokateur joined #salt
08:39 jdipierro joined #salt
08:43 kjsaihs joined #salt
08:44 Trauma joined #salt
08:46 eseyman joined #salt
08:57 ronnix joined #salt
09:01 Rumbles joined #salt
09:08 gnomethrower joined #salt
09:09 edrocks joined #salt
09:10 sjorge joined #salt
09:15 CrummyGummy_ joined #salt
09:19 CrummyGummy_ Morning, any idea what I'm doing wrong here?
09:19 CrummyGummy_ branch: grains['host'][6:]
09:19 CrummyGummy_ I'm trying to assign the substring to the branch variable.
09:20 CrummyGummy_ it seems to be applying it literally
09:20 Neighbour CrummyGummy: use this: branch: {{ grains['host'][6:] }}
09:20 CrummyGummy_ thanks :)
09:20 Neighbour that tells the jinja templating engine to interpret what's there instead of not interpreting it (and applying it literally)
09:21 CrummyGummy_ No revision matching 'grains['host'][6:]' exists in the remote repository
09:22 Neighbour ok, then try this: branch: {{ salt['grains.get']('host')[6:] }}
09:24 CrummyGummy_ isn't that the same as your first suggestion?
09:24 Neighbour it's subtly different :)
09:25 Neighbour instead of using the jinja variable 'grains', it'll use the jinja variable 'salt' and use the "grains.get" module.function
09:26 Trauma joined #salt
09:28 CrummyGummy_ oh, ok, I'm almost done testing the first suggestion.
09:29 fracklen_ joined #salt
09:33 Neighbour They should be the same (in your case), but I prefer the 2nd method, since that will also allow you to get nested grains entries, and allow you to specify a default if it's not found (though that's not really an issue in your use case)
09:39 CrummyGummy_ Good to know, I'll use that then. I'm sure it'll help me when digging through old config for examples.
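
For reference, the two forms Neighbour compares, side by side in an .sls (the fallback value is illustrative):

    # plain grain lookup via the jinja 'grains' dict
    branch: {{ grains['host'][6:] }}
    # the same thing via the 'salt' dict, with a default if the grain is missing
    # branch: {{ salt['grains.get']('host', 'unknown-host')[6:] }}
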
09:40 LeProvokateur joined #salt
09:46 rubenb Hi, Does salt-minion have an option to show the (interpreted) config?
09:48 cDR joined #salt
09:50 cDR Hello, anyone else here having python Cryptodome issues running salt 2016.11.4 minions on CentOS 6? (from repo.saltstack.com)
09:52 cDR error when running salt-minion -l all; TypeError: initializer for ctype 'struct $MPZ' must be a list or tuple or dict or struct-cdata, not cdata 'struct $MPZ *'
09:52 cDR any help welcome
09:54 Tanta joined #salt
10:00 Trauma_ joined #salt
10:02 sjorge joined #salt
10:02 Trauma__ joined #salt
10:16 fracklen joined #salt
10:28 mikecmpbll joined #salt
10:29 fracklen joined #salt
10:55 jas02 joined #salt
11:10 edrocks joined #salt
11:14 jas02 joined #salt
11:22 lorengordon joined #salt
11:25 oida joined #salt
11:26 sjorge joined #salt
11:37 nidr0x joined #salt
11:46 nickadam joined #salt
11:53 eseyman joined #salt
12:01 skeezix-hf joined #salt
12:13 Inveracity joined #salt
12:13 amcorreia joined #salt
12:15 jdipierro joined #salt
12:18 thinkt4nk joined #salt
12:19 feld joined #salt
12:24 sjorge joined #salt
12:29 XenophonF rubenb: the state module has a show_sls function
12:30 XenophonF that's the output of the state compiler, the in-memory representation of the state data
12:37 XenophonF alternatively, if you want to debug rendering stuff, you can use the file.managed state/file.manage function to render things to a temp file
12:37 LeProvokateur joined #salt
12:37 XenophonF it's a little clunky
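
A minimal sketch of the temp-file trick XenophonF mentions (source path and target are illustrative):

    debug_render:
      file.managed:
        - name: /tmp/rendered.conf
        - source: salt://myapp/files/app.conf
        - template: jinja
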
12:41 jas02 joined #salt
12:57 edrocks joined #salt
13:00 Tanta joined #salt
13:01 ssplatt joined #salt
13:10 Praematura joined #salt
13:15 aldevar joined #salt
13:16 Straphka joined #salt
13:17 rubenb XenophonF: I'm looking for the 'calculated' minion config. Something is going wrong with salt-connectivity on the latest version.
13:18 eseyman joined #salt
13:20 rubenb salt-minion --versions-report shows the Tornado package differing on the offending servers.
13:20 hoonetorg joined #salt
13:21 evle joined #salt
13:22 brousch__ joined #salt
13:39 numkem joined #salt
13:42 Avery[m] left #salt
13:43 Avery[m] joined #salt
13:45 PatrolDoom joined #salt
13:59 fracklen joined #salt
14:03 jvelasquez joined #salt
14:10 XenophonF rubenb: what do you mean by the calculated minion config? the packages it intends to install or something?
14:10 Brew joined #salt
14:10 XenophonF you can always call state.apply with test=True to see what might change
14:11 XenophonF e.g., on the minion: `salt-call state.apply test=True`, on the master: `salt minion-id state.apply test=True`
14:15 mpanetta joined #salt
14:16 oida joined #salt
14:19 prg3 joined #salt
14:23 feld joined #salt
14:26 keltim joined #salt
14:26 mpanetta joined #salt
14:30 keltim anyone using salt-cloud extensively with ec2? is there a good write-up of its current capabilities anywhere?
14:33 rubenb XenophonF: I mean the stuff in /etc/salt/minion(.d/*)
14:34 nkuttler keltim: the api docs?
14:37 cyborg-one joined #salt
14:39 DEger joined #salt
14:40 XenophonF rubenb: oh you mean dump the final minion config - good question, I'm not sure
14:40 XenophonF i only have the one file in /etc/salt/minion.d/ overriding the defaults
14:41 Neighbour XenophonF: isn't that `salt minion test.get_opts` ?
14:42 rubenb Neighbour: Hero.
14:42 XenophonF Neighbour: FTW
14:42 Neighbour np :)
14:42 XenophonF https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.test.html#salt.modules.test.get_opts
14:44 armyriad joined #salt
14:45 daxroc Afternoon all
14:48 sp0097 joined #salt
14:48 DammitJim joined #salt
14:48 daxroc What's the best way / best practice for gathering a list of data based on minion data? would it be better to make the call when the data is needed at the state level vs at the pillar level?
14:48 daxroc I've been using the mine but seems like accessing the mine in a pillar for that data is a very bad approach as it gets compounded by the number of nodes that pillar is attached to times the number of calls to the mine
14:49 feliks joined #salt
14:51 XenophonF keltim: I use salt-cloud to deploy instances in EC2 all the time.
14:51 XenophonF What do you want to know?
14:53 XenophonF In one case I have a CentOS 7 salt-master running as an EC2 instance itself, so I'm using an instance profile (== dynamic credentials) to give it the necessary permissions.
14:53 XenophonF that's at work
14:53 XenophonF at home I'm running a FreeBSD 10 salt-master, so I'm using an IAM user account with a static API key
14:54 XenophonF I'm also using https://github.com/saltstack-formulas/salt-formula to manage my salt-masters' configs.
14:54 prg3 joined #salt
14:54 XenophonF Unfortunately, the provided salt.cloud SLS lacks a few key packages.
14:54 XenophonF So I wrote a supplemental state that installs the missing stuff
14:54 XenophonF https://github.com/irtnog/salt-states/blob/development/salt/cloud/ext.sls
14:56 cyteen joined #salt
14:56 XenophonF I deploy both Linux and Windows EC2 instances.
14:56 XenophonF https://github.com/irtnog/salt-states/tree/development/salt/files
14:56 XenophonF ignore the cloud.deploy.d stuff - I don't use it any more and need to trash those files (sorry - been busy)
14:57 armguy joined #salt
14:58 sarcasticadmin joined #salt
14:59 XenophonF This is the salt.cloud SLS config (but I might need to update my example config - again, I've been busy)
14:59 XenophonF https://github.com/irtnog/salt-pillar-example/blob/master/salt/example/com/init.sls#L292
15:09 VR-Jack2-H joined #salt
15:10 jdipierro joined #salt
15:12 Sarphram joined #salt
15:15 aneeshusa joined #salt
15:17 numkem joined #salt
15:17 Praematura joined #salt
15:17 tercenya joined #salt
15:32 anotherzero joined #salt
15:33 dezertol joined #salt
15:36 sp0097 joined #salt
15:38 amcorreia joined #salt
15:51 viq joined #salt
16:03 XenophonF daxroc: depending on your use case, maybe you should write a custom grain?
16:03 XenophonF as I understand it, Salt Mine is really about minions exchanging configuration data
16:03 raspado joined #salt
16:04 cscf I use Mine to export ssh host keys so a state common.ssh can file.manage /etc/ssh/ssh_known_hosts
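
cscf doesn't show the config, but a sketch of that pattern, assuming ssh.host_keys as the mine function behind an aliased mine entry (all names are illustrative):

    # minion config or pillar: publish the public host keys to the mine
    mine_functions:
      public_host_keys:
        mine_function: ssh.host_keys
        private: False

    # in the jinja template behind file.managed for /etc/ssh/ssh_known_hosts
    {% for minion, keys in salt['mine.get']('*', 'public_host_keys').items() %}
    # ... emit one known_hosts line per key for {{ minion }} ...
    {% endfor %}
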
16:05 daxroc I've a custom grain - exporting it to the mine. It's doing the mine lookup in a pillar that's killing the salt-master. 20 minions with mine lookups in a pillar causes a backup on the queue from what I can tell
16:05 daxroc those lookups fail (timeout) and the cycle continues until it falls over smelling of burnt pillars and states
16:06 onlyanegg joined #salt
16:06 sjorge joined #salt
16:07 daxroc use the mine they said - it would be easy  ... Third-degree burns and I'm dehydrated with all this salt :D
16:08 cscf daxroc, you are using jinja in pillar to get mine data?  Why not just use it directly?
16:08 whytewolf that doesn't sound right. pillar shouldn't be hitting the mines for every pillar call. only every highstate.
16:10 tiwula joined #salt
16:11 daxroc So it's a jinja pillar with ~20 mine runners in it and it's assigned to 19 minions (works great on 1) but when I try an orchestration across all it starts to time out. And will kill the master if you execute a mine.update in the orchestration. (2 core 8Gb ram on the master)
16:13 whytewolf humm, something doesn't sound right about that. when i had a lower-end salt master [2 core, 2GB ram] i didn't have that kind of problem with hundreds of mines
16:14 daxroc whytewolf: any tweaking to the config ?
16:14 whytewolf none ...
16:14 scoates left #salt
16:15 mdc_ joined #salt
16:15 whytewolf my configs are generally very generic.
16:15 mdc_ left #salt
16:15 mdc joined #salt
16:16 greyeax joined #salt
16:18 lompik joined #salt
16:18 whytewolf also mine.update shouldn't timeout with only 20 minions.
16:18 mdc I'm deploying nodes on AWS with salt-cloud and map files, is there a way to append grains or replace grains defined in the cloud.profiles.d/*.conf? What I have doesn't seem to be working…
16:19 daxroc whytewolf: https://gist.github.com/daxroc/0ddb4b8fb00572eb3711554a0b787e58 is how I'm doing the mine.get
16:19 Bryson joined #salt
16:20 eseyman joined #salt
16:22 whytewolf daxroc: okay. but you also said mine.update is timing out. which has nothing to do with your mine.get. mine.update updates the master mine cache. which is what mine.get reads.
16:22 seanz joined #salt
16:23 whytewolf mine.update having issues might mean your mine_functions are taking too long to process
16:23 lorengordon joined #salt
16:24 daxroc So looking at the bus during the orchestration the gets seem to cause a storm and then subsequent runners fail; they just happen to be mine.updates I think
16:27 daxroc It's definitely the gets causing it - changing the pillar map to only one minion stops the storm
16:29 whytewolf what version are you on?
16:29 jas02 joined #salt
16:30 woodtablet joined #salt
16:33 dh____ joined #salt
16:33 jhujhiti_ joined #salt
16:33 nebuchad` joined #salt
16:34 om2_ joined #salt
16:35 miruoy joined #salt
16:35 jhujhiti joined #salt
16:36 Ahlee_ joined #salt
16:36 canci_ joined #salt
16:36 utahcon__ joined #salt
16:36 wwalker_ joined #salt
16:36 demonkeeper joined #salt
16:36 utahcon_3 joined #salt
16:36 SamYaple_ joined #salt
16:37 yidhra_ joined #salt
16:38 dh joined #salt
16:39 simmel_ joined #salt
16:39 tvinson_ joined #salt
16:40 whytewolf cause looking at the code, the runner version of mine.get should not be querying the minions at all. it just calls the mine_get util code which checks the mine cache, which exists on the master. the minion version of mine.get might cause problems in large environments [which 20 minions with a mine wouldn't be] since it looks like it downloads the mine_cache every check of mine.get but that isn't the runner
16:40 quarcu_ joined #salt
16:40 whytewolf version.
16:42 Gabemo joined #salt
16:42 heyimawesome joined #salt
16:42 Vye joined #salt
16:42 TomJepp joined #salt
16:42 CustosLimen joined #salt
16:42 fracklen joined #salt
16:43 chutzpah joined #salt
16:43 hexa- joined #salt
16:43 daxroc joined #salt
16:43 zifnab joined #salt
16:43 Joe630 joined #salt
16:44 mrueg joined #salt
16:44 skeezix-hf joined #salt
16:45 greyeax is there a good way to do a "reverse" file.recurse?
16:45 whytewolf reverse?
16:45 whytewolf delete a directoy?
16:45 greyeax like, to remove the files in the destination directory based on the source directory
16:45 jas02 joined #salt
16:46 greyeax i've been pushing files to a folder using file.recurse
16:46 greyeax but i now need to remove them, and only them
16:47 edrocks joined #salt
16:47 oida joined #salt
16:47 daxroc whytewolf: v 2016.11.2 (Carbon)
16:48 lorengordon joined #salt
16:49 NeoXiD joined #salt
16:50 whytewolf daxroc: from what i am reading it shouldn't be creating any kind of storm. the runner version of mine.get calls util.minions.mine_get which looks like it only reads in the mine cache.
16:51 whytewolf the mine cache exists on the master
16:51 whytewolf [it is what mine.update updates]
16:51 moy joined #salt
16:52 daxroc Hym let me capture this
16:52 m0nky joined #salt
16:52 copelco joined #salt
16:52 ToeSnacks joined #salt
16:52 numkem joined #salt
16:53 daxroc I see "Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased" quite a lot
16:56 lorengordon joined #salt
16:56 whytewolf humm. that sounds like the master is under heavy load. what else is the master doing?
16:56 whytewolf and do you notice any io issues on the master?
16:56 tapoxi joined #salt
16:57 daxroc Yeah it's dying, need to reduce worker threads to - is 50 way too high?
16:57 daxroc to 8
16:57 tercenya joined #salt
16:58 whytewolf have you tried the default of 5
16:59 jdipierro joined #salt
16:59 whytewolf greyeax: there really isn't anything like that. closest i could think of is file.recurse with clean:true and the files you don't want deleted in it
17:00 greyeax hrm
17:00 greyeax what would be the best way of going about it you think?
17:00 greyeax i know which files i want to get rid of, and i can totally glob them
17:00 sjorge joined #salt
17:00 jdipierro joined #salt
17:00 whytewolf humm. file.absent
17:01 whytewolf not sure it can do globs though
17:02 greyeax ill give it a shot
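
A sketch of the file.absent route (paths are illustrative); file.absent takes exact paths, so the globbing has to happen at render time in jinja, e.g. with file.find:

    {# expand the glob on the minion at render time, then remove each match #}
    {% for f in salt['file.find']('/opt/app/dropped', name='*.conf', type='f') %}
    remove_{{ f }}:
      file.absent:
        - name: {{ f }}
    {% endfor %}
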
17:02 rlatimore joined #salt
17:06 lorengordon joined #salt
17:16 daxroc whytewolf: salt is in a race condition now - config from git  but upon startup threads are too high ...
17:16 * daxroc shot self in foot
17:23 pipps joined #salt
17:32 daxroc whytewolf: It's taking ~25s to render pillar.items for a given minion with the Mine lookups
17:32 pipps99 joined #salt
17:32 StolenToast does jinja have a 'contains' operator, or a way to check for a substring?  if $output contains 'letters'
17:33 whytewolf StolenToast: in
17:33 StolenToast thanks
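
For example, in a template or .sls:

    {% if 'letters' in output %}
    contains_letters: True
    {% endif %}
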
17:33 tpaul joined #salt
17:34 MTecknology I have a cron.present state with "- identifier: foo" When I run a highstate, the cron entry is created but no comment is created. I can create that comment, but after running a highstate the comment is removed.
17:34 pipps99 joined #salt
17:35 Trauma joined #salt
17:37 robawt joined #salt
17:38 xet7 joined #salt
17:39 tpaul Did I imagine that salt had a way to manage embedded devices that can't handle running a minion?
17:40 MTecknology salt-ssh
17:40 numkem joined #salt
17:40 whytewolf tpaul: salt-ssh -raw or proxy_minion
17:40 tpaul proxy_minion sounds familiar, thanks whytewolf
17:41 * MTecknology needs to get familiar with proxy_minion and start using it to manage switch configs
17:42 MTecknology this feels like a regression bug, but I struggle to imagine a unit test hasn't been written for this.
17:45 tpaul proxy_minion is exactly what I need.
17:46 Trauma_ joined #salt
17:49 sarcasticadmin joined #salt
17:52 impi joined #salt
17:53 sarlalian joined #salt
17:58 pipps joined #salt
17:59 fracklen joined #salt
18:00 Inveracity joined #salt
18:06 icebal joined #salt
18:07 czchen joined #salt
18:07 hillna joined #salt
18:09 oida joined #salt
18:11 schemanic_ joined #salt
18:13 onlyanegg joined #salt
18:14 taylorbyte joined #salt
18:16 pipps joined #salt
18:18 onlyanegg joined #salt
18:19 tercenya joined #salt
18:19 SalanderLives joined #salt
18:20 SalanderLives joined #salt
18:22 wendall911 joined #salt
18:23 cyborg-one joined #salt
18:24 tpaul left #salt
18:25 kiltzman joined #salt
18:26 LeProvokateur joined #salt
18:27 cscf What monitoring systems are people using?  Any interesting integration with Salt?
18:27 hexa- prometheus, exposing exporters via grains and rendering them as static configuration
18:28 Neighbour Zabbix, and I'm working on completely revamping the zabbix state and module currently in salt
18:31 mvensky joined #salt
18:31 txmoose joined #salt
18:32 cscf hexa-, Neighbour  are they working well for you?
18:33 hexa- yes
18:33 Neighbour except when it isn't working, sure :) But fortunately it works most of the time :)
18:33 Neighbour It was a bit of a battle with selinux at first in order to get everything smoothed out though
18:34 cscf Neighbour, CentOS?
18:34 Neighbour yep, on AWS
18:35 KyleG joined #salt
18:35 KyleG joined #salt
18:35 xet7 joined #salt
18:37 rmohta joined #salt
18:37 chutzpah joined #salt
18:38 rmohta Hi.. I was trying to write a new custom module for our application. We use salt-ssh. Followed steps mentioned in the documentation site but unable to get the custom module working. Below is what I have done
18:39 rmohta In the master config, I have file_roots:   base:     - /home/saltygan/salty-gan/salt/states
18:39 MTecknology running that thing in debug isn't helping. The state produced looks correct.
18:39 onlyanegg joined #salt
18:39 MTecknology cscf: prometheus is the one that has my interest; but I haven't played with it yet
18:39 LeProvokateur joined #salt
18:40 rmohta Created a simple python file named rohit.py and it has a no-arg function called hello(). Calling rohit.hello doesn't work. Am I missing something?
18:40 cscf We are currently running an older version of Nagios; it works but is kinda lacking.  We have a custom API that I want to trash.
18:40 cscf rmohta, did you put it in a _modules directory?
18:40 wangofett joined #salt
18:41 rmohta Yes. I have the python file in /home/saltygan/salty-gan/salt/states/_modules
18:42 rmohta @cscf: Did saltutil.sync_all, then tried to execute the module
18:43 MTecknology http://dpaste.com/1H43JR1 <-- is there any reason this shouldn't produce a cron entry with a valid identifier? Instead, it's producing no identifier at all.
18:47 wangofett joined #salt
18:47 MTecknology ... apparently -special: breaks -identifier  :S
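
The dpaste is only linked above; an illustrative reconstruction of the shape of state being discussed (names and command are hypothetical):

    nightly_report:
      cron.present:
        - name: /usr/local/bin/nightly-report.sh
        - user: root
        - special: '@daily'
        - identifier: nightly_report
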
18:50 onlyanegg joined #salt
18:52 icebal joined #salt
18:53 wangofett joined #salt
18:53 nixjdm joined #salt
18:57 sjorge joined #salt
18:58 wangofett joined #salt
19:03 noobiedubie joined #salt
19:04 filippos joined #salt
19:06 MTecknology state.call_listen() is a scary place :(
19:08 onlyanegg joined #salt
19:10 cscf MTecknology, yeah, I don't use @daily, it's simpler to just put the numbers in
19:13 schemanic_ joined #salt
19:19 Praematura joined #salt
19:19 Trauma_ joined #salt
19:22 fracklen_ joined #salt
19:28 Trauma_ joined #salt
19:31 jdipierro joined #salt
19:41 raspado joined #salt
19:43 mpanetta joined #salt
19:45 pipps joined #salt
19:46 MTecknology BAM!!!!
19:48 MTecknology identifier isn't even being passed to the state.
19:49 MTecknology not that it's the only problem, but it appears to be one of the problems
19:54 Trauma_ joined #salt
19:56 MTecknology cscf: apparently set_special() is a really minimal and stripped down version of set_job() and doesn't have 80% of the logic that set_job does.
20:00 ahrs joined #salt
20:01 mpanetta joined #salt
20:02 sarcasticadmin joined #salt
20:03 Tanta joined #salt
20:06 onlyanegg joined #salt
20:13 MTecknology dangit... I can't file a bug
20:13 cscf Can you pass mysql config data in pillar instead of /etc/salt/minion?
20:14 cscf like for mysql_database.present
20:14 whytewolf cscf: yes
20:14 whytewolf cscf: I do it all the time
20:14 whytewolf [there is very little i actually put in /etc/salt/minion]
20:14 cscf whytewolf, cool, I thought so, but I don't see it in the docs: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.mysql.html#module-salt.modules.mysql
20:15 MTecknology set_job() is where -special: logic should have gone, but instead, it's being handled a function higher
20:15 MTecknology set_special() is a stripped down and bastardized version of set_job() ... afaict anyway
20:16 MTecknology https://github.com/saltstack/salt/issues/38425
20:16 saltstackbot [#38425][OPEN] cron identifier not added (not supported?) when 'special' used | Description of Issue/Question...
20:17 whytewolf cscf: odd, they changed the doc. it used to say yeah do it in pillar
20:17 cscf whytewolf, in pillar, is it mysql.host: 'localhost' or mysql: \n host: 'localhost' ?
20:18 whytewolf mysql.host: 'localhost'
20:18 cscf Kinda odd, isn't it?
20:18 whytewolf if you are in debian just use the maint file
20:19 whytewolf mysql.default_file: '/etc/mysql/debian.cnf'
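
A pillar sketch of those keys (values are illustrative; the key names mirror the minion-config options the mysql module reads):

    mysql.host: 'localhost'
    mysql.port: 3306
    mysql.user: 'root'
    mysql.pass: 'supersecret'
    # or, on Debian, point at the maintenance credentials file instead:
    # mysql.default_file: '/etc/mysql/debian.cnf'
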
20:19 daxroc whytewolf definitely looks like a storm on the mine.gets - gets to a point where they slow down and exceed the runner timeout which seems to compound the issue. Items in the queue are added faster than they are being dealt with
20:19 MTecknology cscf: What is it you're trying to do?
20:20 cscf MTecknology, trying to use the state mysql_database.present
20:20 cscf with pillar for mysql variables rather than /etc/salt/minion
20:20 cscf I think I got it
20:20 whytewolf daxroc: so file a bug
20:21 MTecknology I think I had salt managing the config file with the credentials, pulling the password from pillar
20:21 whytewolf cscf: basically the mysql module just uses config.get for getting the options it is expecting
20:21 cscf whytewolf, yeah, I just think the syntax for putting config.get stuff in pillar is odd, having literal . in the name
20:22 whytewolf cscf: then why isn't it odd that it is that way in the config?
20:23 whytewolf it is consistent with the behaviour
20:23 cscf whytewolf, I mean, everywhere else x.y means x: y:
20:23 whytewolf cscf: no, '.' means nothing in pillar
20:24 cscf true, that's for sls files I guess
20:24 whytewolf sls file pathing
20:24 pipps joined #salt
20:30 MTecknology {% set freq = pillar.get('foo', 'weekly') %} {% if freq == 'weekly' %}...{% elif freq == 'daily' %}...{% elif ......
20:30 MTecknology vs.  - special: "@{{ pillar.get('foo', 'weekly') }}"
20:31 jas02 joined #salt
20:31 cscf "Access denied for user 'root'@'localhost'" dangit
20:32 daxroc whytewolf: not sure if it is .. seems like my use is a code smell. The mine.gets don't return quickly enough and time out, breaking the orchestration. Seems like pushing them down into the states might work. I'm really open to suggestions tho. My orchestration is: install some stuff - use a custom grain to export data to the mine and then read that data from the
20:32 daxroc mine into a pillar. Is there a nicer way to manipulate the data in the mine? making it available to pillars?
20:34 wendall911 joined #salt
20:34 whytewolf daxroc: that's the thing. mine in pillar actually should be less of a draw on the system than mine in the state. mine in pillar should NOT be touching the minions. it just grabs the cache data from the master, which is where it is being rendered.
20:35 whytewolf mine in states will query the master causing a reconnection for every mine
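
A sketch of the pillar-side pattern being debated (target, mine function, and key names are illustrative):

    # pillar/db_servers.sls - rendered on the master, so mine.get only reads the cached data
    {% set mined = salt['mine.get']('db*', 'network.ip_addrs') %}
    db_servers:
    {% for minion, addrs in mined.items() %}
      - {{ addrs | first }}
    {% endfor %}
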
20:35 cyteen joined #salt
20:36 fracklen joined #salt
20:36 daxroc But if the mine.get is based on a compound match - does that matter or should that just be served from the cache too?
20:37 fracklen joined #salt
20:37 daxroc I can send a sighup to turn on profiling right ?
20:39 onlyanegg joined #salt
20:49 edrocks joined #salt
20:53 pipps joined #salt
20:57 PatrolDoom joined #salt
20:58 cmarzullo can you do a highstate but exclude certain states?
21:00 schemanic_ joined #salt
21:01 dezertol not that I know of on the command line but you can just copy your top.sls to something.sls and comment out the stuff you don't want
21:01 dezertol and just run state.top something.sls
21:01 dezertol instead of state.highstate
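
A sketch of that workaround (state names are illustrative); running salt '*' state.top something.sls then applies only what is left uncommented:

    # something.sls - a copy of top.sls with the unwanted states commented out
    base:
      '*':
        - core
        - users
        # - monitoring
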
21:04 onlyanegg joined #salt
21:07 Vasya666 joined #salt
21:08 pipps joined #salt
21:08 phobosd____ joined #salt
21:18 MTecknology Based on what I was reading this morning, I would expect it to be possible, but never tried.
21:19 thinkt4n_ joined #salt
21:20 Praematura joined #salt
21:21 wangofett joined #salt
21:25 onlyanegg joined #salt
21:26 MTecknology that assumes the exclusion isn't stripped in the same way it is for identifier...
21:27 daxroc Can an orchestration set an expected timeout on salt.state?
21:29 daxroc whytewolf: I believe this might be my issue https://github.com/saltstack/salt/issues/18564 - The gets keep executing in the background but the orchestration salt.state has failed and has a knock-on effect on later orchestrations
21:29 saltstackbot [#18564][OPEN] salt-run state.orchestrate fails because it tries to run a second task while a first is ongoing | To start off with, this problem seems somehow related to disk speed. We can reliably reproduce the problem on sata backed virtual machines, but not on ssd backed virtual machines. (Two different openstack flavors on for the rest the same hardware.)...
21:32 Tanta hello all
21:32 Tanta I hope you feel better now
21:34 Tanta speak freely
21:42 candyman88 joined #salt
21:42 xet7 joined #salt
21:48 prometheus_ joined #salt
21:49 prometheus_ left #salt
21:55 bbbryson joined #salt
21:56 tercenya joined #salt
21:58 pipps joined #salt
22:01 candyman88 joined #salt
22:01 pcn joined #salt
22:09 oododa joined #salt
22:11 Tanta I love you all and I wish you all a great day
22:12 Tanta I wish you the best
22:13 Tanta all good now?
22:15 * whytewolf is confused
22:19 kyotejones joined #salt
22:22 gtmanfred https://github.com/saltstack/salt/issues/40997
22:22 saltstackbot [#40997][OPEN] Allow Salt Master/Minion and Salt-ssh to work together | Description of Issue/Question...
22:24 shalkie joined #salt
22:24 gtmanfred thoughts?
22:26 N-Mi joined #salt
22:26 N-Mi joined #salt
22:28 jdipierro joined #salt
22:32 Yoda-BZH joined #salt
22:33 Yoda-BZH joined #salt
22:37 pipps joined #salt
22:44 fracklen joined #salt
22:47 chowmeined joined #salt
22:48 cyteen joined #salt
22:53 woodtablet interesting
22:53 woodtablet reading the original issue now
22:54 gtmanfred there is currently no crossover between salt-ssh and salt, but it would be nice if, when you are running the salt master, it could just run the roster and trigger a salt-ssh connection to the minions instead of publishing to the pub stream
22:55 woodtablet yes
22:55 woodtablet that is pretty cool, and makes sense
22:55 woodtablet thanks !
22:56 woodtablet gtmanfred - if i see a feature added in salt-develop like a month ago, would that be in 2016.11.4? or, as the date implies, is the current stable release from November 2016?
22:57 gtmanfred it has to be made against the 2016.11 branch
22:57 gtmanfred 2016.11.4 is the 4th release of the major release that was made in november 2016
22:58 gtmanfred well, 4th minor update to the 2016.11.0 major release
22:58 woodtablet ohhhh
22:58 MTecknology My PR has been sitting around for a full week without any comment at all. That's not normal. :(
22:58 gtmanfred bug fixes should be made to older branches, new features should be made against develop
22:58 gtmanfred MTecknology: mike was traveling last week, i would expect it to get in this week
22:59 MTecknology ah, sweet! Thanks for getting me excited. :)
22:59 woodtablet Mtecknology - bribe them with beer..
22:59 woodtablet gtmanfred - btw thanks for making salt so awesome
23:01 gtmanfred :blush:
23:05 candyman88 joined #salt
23:07 bigjazzsound joined #salt
23:22 Praematura joined #salt
23:23 relidy joined #salt
23:24 daxroc wtf I'm now getting duplicate data in a pillar - this is awesome, more time spent debugging salt vs writing config mgmt code, that's a win ...
23:25 gtmanfred are you using multiple environments?
23:26 daxroc No, some pillar items have duplicated values no idea why
23:26 prg3 joined #salt
23:27 gtmanfred it is worth noting that if you have multiple pillar files with the same dictionary keys, they get merged, not overwritten https://docs.saltstack.com/en/latest/topics/pillar/#pillar-dictionary-merging
23:28 gtmanfred a setting for pillar merge lists was also added in 2015.8 https://docs.saltstack.com/en/latest/ref/configuration/master.html#pillar-merge-lists
23:28 gtmanfred outside of that, i can't think of a reason that it wouldn't follow how it is described in those documents
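
An illustration of the merge behaviour gtmanfred describes (file names and keys are hypothetical):

    # pillar/app/ports.sls
    app:
      port: 8080

    # pillar/app/owner.sls
    app:
      owner: deploy

    # a minion assigned both files in top.sls sees the merged dictionary:
    # app:
    #   port: 8080
    #   owner: deploy
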
23:37 daxroc OK so walking through this .. I've a pillar assigned once via top.sls and pillar_merge_lists: True in the master config. The formula itself is merging using the defaults/map.jinja pattern. Time to clear the cache I guess
23:42 onlyanegg joined #salt
23:53 asyncsec joined #salt
23:56 oida_ joined #salt
23:59 daxroc is there a quick way to disable reactors and engines ?
