
IRC log for #salt, 2018-03-29

| Channels | #salt index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:12 dendazen joined #salt
00:24 K0HAX I think I found the problem.. I need to use `ssh_gateway:`, `ssh_gateway_user:`, and `ssh_gateway_key:` under `gateway:` under the host.
00:24 K0HAX also.. I need to update the version of netcat on the gateway host. :)
00:25 K0HAX the version of netcat included in CentOS 7 doesn't have the '-z' flag
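For reference, the gateway settings K0HAX describes sit under a `gateway:` key inside the host's salt-cloud profile. A sketch (provider name, hostnames, user, and key path are all hypothetical):

```yaml
my-host:
  provider: my-ec2-config
  gateway:
    ssh_gateway: gw.example.com
    ssh_gateway_user: ec2-user
    ssh_gateway_key: /etc/salt/keys/gateway.pem
```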
00:35 zerocoolback joined #salt
01:20 karlthane joined #salt
01:22 jas02 joined #salt
01:56 ilbot3 joined #salt
01:56 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.4 <+> RC for 2018.3.0 is out, please test it! <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
01:58 jas02 joined #salt
02:10 jas02 joined #salt
02:12 karlthane joined #salt
02:13 exarkun joined #salt
02:21 shiranaihito joined #salt
02:23 karlthane joined #salt
02:23 tiwula joined #salt
02:37 zerocoolback joined #salt
02:41 jas02 joined #salt
03:01 evle2 joined #salt
03:33 zerocoolback joined #salt
03:38 karlthane joined #salt
03:42 zerocoolback joined #salt
03:48 nbari_ joined #salt
03:48 Arendtse1 joined #salt
03:51 fxhp joined #salt
03:51 tom[] joined #salt
03:52 Gabemo joined #salt
03:52 valkyr2e joined #salt
03:54 gtmanfred joined #salt
03:55 v0rtex joined #salt
04:00 zerocoolback joined #salt
04:03 cswang joined #salt
04:14 evle1 joined #salt
04:21 _xor joined #salt
04:24 justanotheruser joined #salt
04:25 justanotheruser joined #salt
04:39 FL1SK joined #salt
04:55 indistylo joined #salt
05:01 oida joined #salt
05:06 Deadhand joined #salt
05:15 zerocoolback joined #salt
05:17 zerocoolback joined #salt
05:23 sauvin joined #salt
05:25 thelocehiliosan joined #salt
05:37 aruns joined #salt
05:42 armyriad joined #salt
05:57 sjorge joined #salt
06:10 wongster80 joined #salt
06:13 tyx joined #salt
06:31 armyriad joined #salt
06:39 oida joined #salt
06:45 lkthomas joined #salt
06:49 AvengerMoJo joined #salt
06:51 aldevar joined #salt
06:54 lkthomas joined #salt
07:02 xet7 joined #salt
07:07 LGee left #salt
07:10 Hybrid joined #salt
07:10 jas02 joined #salt
07:13 exarkun joined #salt
07:19 aviau joined #salt
07:20 cewood joined #salt
07:26 KolK joined #salt
07:32 Ricardo1000 joined #salt
07:35 KolK left #salt
07:42 jas02 joined #salt
08:06 nicodemoose joined #salt
08:10 DanyC joined #salt
08:12 aruns__ joined #salt
08:20 aruns joined #salt
08:22 jas02 joined #salt
08:34 DanyC joined #salt
08:37 inad924 joined #salt
08:39 zerocoolback joined #salt
08:44 pf_moore joined #salt
08:47 losh joined #salt
08:48 shpoont joined #salt
08:54 Mattch joined #salt
08:54 aruns joined #salt
09:03 chowmeined joined #salt
09:08 aruns joined #salt
09:10 kukacz_ joined #salt
09:15 briner joined #salt
09:42 kukacz_ joined #salt
09:49 shpoont joined #salt
10:04 StolenToast joined #salt
10:05 elektrix joined #salt
10:08 Edgan joined #salt
10:09 thelocehiliosan joined #salt
10:14 nixjdm joined #salt
10:15 aarontc joined #salt
10:27 alex-zel joined #salt
10:28 inad924 joined #salt
10:28 alex-zel Is it possible to use jinja to create a roster file for salt-ssh?
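The flat roster file for salt-ssh passes through Salt's default jinja|yaml renderer, so a templated roster along these lines should work (host names and addresses hypothetical):

```yaml
{% for n in range(1, 4) %}
web{{ n }}:
  host: 10.0.0.{{ n }}
  user: root
{% endfor %}
```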
10:45 oida joined #salt
10:47 shpoont joined #salt
10:59 oida joined #salt
11:04 fabos joined #salt
11:10 DanyC joined #salt
11:14 jas02 joined #salt
11:15 froztbyte anyone here at the moment that knows a fair bit about the internals of the gitfs stuff?
11:16 jas02 joined #salt
11:16 froztbyte some specifics I'm wondering about:
11:16 froztbyte why does file vs pillar root appear to be updated differently? is this maybe just an impression from pillar refreshes taking a while?
11:17 froztbyte why do the _normal_ (interval-backed) updates not show up on the event feed?
11:25 evle1 joined #salt
11:33 jas02_ joined #salt
11:35 jas02__ joined #salt
11:35 inad924 joined #salt
11:43 dendazen joined #salt
11:59 strobelight joined #salt
12:04 briner joined #salt
12:10 briner_ joined #salt
12:13 dendazen joined #salt
12:13 Nahual joined #salt
12:14 exarkun joined #salt
12:16 DanyC joined #salt
12:24 tyx joined #salt
12:25 saltnoob58 joined #salt
12:25 skeezix-hf joined #salt
12:27 saltnoob58 hi. Is there a way to append variables to pillar in a convenient fashion automatically? Like if I have an orchestrate sls that collects a variable and writes it to pillar. But if i run it against a different set of minions, the old set remains in the pillar too?
12:28 saltnoob58 then i could use that pillar in a blockreplace or something about all the previously collected minions
12:28 strobelight_ joined #salt
12:30 briner_ joined #salt
12:32 jas02 joined #salt
12:32 jas02_ joined #salt
12:35 inad924 joined #salt
12:41 evle joined #salt
12:46 jas02 joined #salt
12:58 mahafyi joined #salt
13:04 inad924 joined #salt
13:04 thelocehiliosan joined #salt
13:11 gh34 joined #salt
13:16 racooper joined #salt
13:22 dijit joined #salt
13:22 dijit Hola, anyone know an easy way to query based on a "sub" grain?
13:25 zer0def "subgrain" or grain named "sub"?
13:25 dijit subgrain
13:26 dijit imagine I make a custom grain that outputs a dict, salt seems to render it nicely.
13:26 zer0def `grains.get a:b` where b is a child of a
13:26 dijit hm. I'm trying to target based on it
13:26 zer0def salt -C 'G@a:b' test.ping
13:26 dijit salt -G 'topgrain:key:value' test.ping
13:26 dijit aha
13:27 zer0def either of those, yeah
13:27 dijit doesn't seem to match
13:27 dijit salt -G 'datacenter:code:us-east' grains.items
13:27 dijit No minions matched the target. No command was sent, no jid was assigned.
13:27 dijit ERROR: No return received
13:28 zer0def start iteratively, but i'm pretty sure something's amiss with value interpretation
13:28 KyleG joined #salt
13:28 KyleG joined #salt
13:28 zer0def also try with complex matching
13:28 dijit oh.. now it worked.
13:28 dijit case-sensitive.
13:28 dijit whups, thanks zer0def!
13:28 zer0def that's ok, glad you got it figured out
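zer0def's `grains.get a:b` lookup is just a colon-delimited walk of the nested grains dict, and, as dijit found, the key comparison is case-sensitive. A rough sketch of that traversal (the sample grains dict is hypothetical):

```python
def traverse(data, path, default=None, delimiter=":"):
    """Walk a nested dict by a colon-delimited key path, the way
    `salt '*' grains.get a:b` resolves a sub-grain."""
    cur = data
    for key in path.split(delimiter):
        if not isinstance(cur, dict) or key not in cur:
            return default
        cur = cur[key]
    return cur

grains = {"datacenter": {"code": "us-east", "rack": "r12"}}
print(traverse(grains, "datacenter:code"))               # us-east
print(traverse(grains, "datacenter:CODE", "no match"))   # no match: keys are case-sensitive
```

The same case-sensitivity applies when targeting, which is why `salt -G 'datacenter:code:us-east'` only matched once the value's case was right.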
13:31 briner_ joined #salt
13:32 jas02 joined #salt
13:39 shpoont joined #salt
13:40 edrocks joined #salt
13:41 thelocehiliosan joined #salt
13:50 indistylo joined #salt
14:00 cgiroua joined #salt
14:02 exarkun What's with the lists of single-property objects where a single multi-property object would make at least as much sense and usually more?  eg the configuration for file.managed: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#operations-on-regular-files-special-files-directories-and-symlinks
14:03 babilen What?
14:04 Whissi joined #salt
14:04 DammitJim joined #salt
14:11 exarkun Why is the configuration for file.managed like [{"source": "..."}, {"user": ...}, ...] instead of {"source": "...", "user": ..., ...}
14:13 schemanic joined #salt
14:18 thelocehiliosan joined #salt
14:27 inad924 joined #salt
14:30 jrenner joined #salt
14:38 cgiroua joined #salt
14:41 tzero exarkun: I'm no expert by a long shot (just started learning Salt recently), but maybe it has something to do with the modules being analogous to Python functions?
14:42 tzero still, not exactly an answer to your question, but I wondered that too
14:45 alex-zel joined #salt
14:46 alex-zel hello, I'm having issues using git backend, I keep getting "GitError: failed to start SSH session: Unable to exchange encryption keys"
14:47 alex-zel I've tried to install pygit2 using a newer libgit2 and libssh but that didn't help
14:59 whytewolf exarkun: because when it was first designed, it was file: ['managed', {"source": "..."}, {"user": "..."}, ...]. the file.managed you know is actually still that, just a shortcut for it. now, representing that in yaml is easy, but representing file: ['managed', {"source": "...", "user": "...", ...}] in yaml looks confusing to those who are just learning
15:00 whytewolf also, dicts are difficult to order in earlier pythons while lists are always ordered
15:02 tzero nice!
15:03 exarkun okay, so pretty much history
15:04 whytewolf history, backwards compatibility, and how it works under the hood. but yeah
15:05 exarkun thanks
15:06 spiette joined #salt
15:07 whytewolf np
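whytewolf's point, written out: the familiar `file.managed` form is sugar for the older list form, which is why each argument is its own single-key dict (paths and values hypothetical):

```yaml
# Familiar short form:
/etc/motd:
  file.managed:
    - source: salt://motd
    - user: root

# Roughly what it expands to internally:
/etc/motd:
  file:
    - managed
    - source: salt://motd
    - user: root
```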
15:26 jas02 joined #salt
15:33 exarkun joined #salt
15:38 aldevar joined #salt
15:46 GrisKo joined #salt
15:49 dezertol joined #salt
16:10 aldevar left #salt
16:12 aldevar joined #salt
16:26 tiwula joined #salt
16:32 gareth_ joined #salt
16:49 froztbyte hrm, now that some more people are online, gonna repost my earlier question
16:49 froztbyte 11:15:22 < froztbyte> anyone here at the moment that knows a fair bit about the internals of the gitfs stuff?
16:49 froztbyte 11:16:44 < froztbyte> some specifics I'm wondering about:
16:49 froztbyte 11:16:56 < froztbyte> why does file vs pillar root appear to be updated differently? is this maybe just an impression from pillar refreshes taking a while?
16:49 froztbyte 11:17:24 < froztbyte> why do the _normal_ (interval-backed) updates not show up on the event feed?
16:57 major Is there something special that needs to be done to get a custom module to work w/ a salt proxy minion? The logs report the module is being copied into the module cache, no errors, but it isn't working.
16:57 immune2iocane joined #salt
16:59 jas02 joined #salt
17:14 cewood joined #salt
17:22 fxhp joined #salt
17:23 tzero froztbyte: I'm reading about pillar now, there's a blurb about the generation of pillar data being computationally expensive
17:24 froztbyte link?
17:26 tzero https://docs.saltstack.com/en/latest/topics/pillar/index.html#pillar-in-memory
17:32 ymasson joined #salt
17:41 mahafyi_ joined #salt
17:42 mahafyi_ left #salt
17:42 mahafyi joined #salt
17:46 whytewolf froztbyte: the updates don't show up in the event feed because they are not done by generated events. they are done by the loop_interval. second, git_pillar and gitfs are 2 different things. git_pillar uses the code from gitfs. but it handles a lot of things seperatly after the fact.
17:46 kojiro joined #salt
17:48 DD joined #salt
17:51 froztbyte Yeah, I found the latter (it's somewhat evident in docs)
17:51 froztbyte whytewolf: so, my next question then
17:51 froztbyte Why don't those updates emit events about themselves?
17:51 whytewolf because they are not event driven
17:52 froztbyte I get that they're not initiated from there, but why no event/signal to indicate their start or conclusion?
17:52 froztbyte whytewolf: driven, sure. They're still events in the system, however.
17:53 froztbyte It makes sense for information about them to be present on the event bus
17:54 whytewolf not really. they are a construct purely in the master. if we generated events for everything the master does that doesn't move to the minions, the event bus would be more flooded than it already is
17:54 whytewolf the event bus is for communication about what happens between the minions and the master.
17:54 MTecknology more flooded than the titanic!
17:55 froztbyte whytewolf: is there specific documentation about that ("what the event bus is for")
17:55 major whytewolf, soo .. question .. I should just be able to toss my <module>.py into _modules/, do a modules_sync, "see" the module get copied into the client, and then it should "just work" .. right? I'm not seeing parse errors or exceptions, just being told the module isn't valid
17:55 froztbyte ?*
17:56 froztbyte whytewolf: because off the bat that sounds "well that's dumb" and like a design choice of note
17:56 whytewolf major: I don't know I don't know what kind of module your module is. it could need to go in _proxy if it is a proxy module
17:56 major whytewolf, both
17:57 major was trying to mimic the modules/nxos.py for testing the methods in the proxy module
17:57 froztbyte I can accept it if it's clearly documented to be that, but my very next step would be opening a ticket to change every single reference of "the event bus" to "bus for communications with the master"
17:57 froztbyte Because one of these things is distinctly not like the other
17:57 major mind you .. the proxy module is working, in so much as it contacts the LXD server and collects grains about the container.  Working on service/package management at this point
17:58 major which was sort of why I was trying to support the CLI module so I can test stuff like: salt 'container1*' lxd.cmd sendline 'ls'
17:58 sarlalian joined #salt
18:00 MTecknology froztbyte: Are you complaining about functionality, or are you complaining about the documentation not saying that the exact thing you're trying to do won't show up?.. vs. the list of things that /do/ show up and are documented?
18:00 froztbyte Yeah, this came up for me due to a lack of functionality
18:00 froztbyte Brb
18:00 edrocks joined #salt
18:01 froztbyte Right, so
18:02 MTecknology froztbyte: I think you missed my point. You are complaining about a thing that is not documented not working, vs. looking at the list of what /is/ supported and assuming what is not documented is not supported.
18:02 froztbyte Right now, I can't have anything reacting to updates in gitfs/pillarfs in any meaningful way outside of "make a webhook listener", and that's kinda dumb
18:03 MTecknology what a lovely attitude you have... you're sure to get lots of people wanting to help
18:03 froztbyte Webhooks aren't always possible
18:03 froztbyte MTecknology: uh, what?
18:03 MTecknology or required
18:04 kojiro Is there any way to make a state not happen using extend? We create /etc/nginx/conf.d/default.conf using file.managed in one sls, and in one particular case I want that file not to exist.
18:04 rollniak_ joined #salt
18:05 froztbyte MTecknology: I don't know why you're involving yourself here, or where your impression is coming from. But it's not helpful at all. Please read what's being asked about, or don't involve yourself.
18:05 MTecknology kojiro: you could probably use something like "- onlyif: false"
18:05 kojiro ah, yeah, that's an interesting idea
18:06 kojiro Oh... but I know why I didn't do that -- I want to guarantee the file doesn't exist. Basically, I want `file.managed` in most cases, but `file.absent` in one particular case. However, without creating a constant create-delete cycle
18:07 MTecknology What's the criteria? Does it come with a pastebin of what you're working on?
18:07 dezertol joined #salt
18:10 rollniak__ joined #salt
18:12 kojiro https://gist.github.com/kojiromike/baa0573c392fc5004cf89f44b3831dc0 MTecknology
18:13 MTecknology kojiro: you're using a bunch of formulas, aren't you?
18:13 kojiro er, I don't know what you mean by formula
18:13 kojiro I'm using salt-ssh, if that's relevant...
18:14 tzero simple question, don't intend to derail the discussion: if I have environment-specific pillar data, is it required to configure the minions with "pillarenv: prod"?
18:14 MTecknology no formulas makes this easier..
18:14 whytewolf kojiro: an extend can only modify or add. it would be easier to extend a file.absent to be a file.managed than the other way around
18:14 major kojiro, https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html
18:14 MTecknology kojiro: I'd suggest *ALWAYS* removing the default, and then choose to add a relevant config if the config should be there.
18:15 kojiro OK, this pretty much aligns with my expectations. Thanks!
18:15 MTecknology kojiro: fwiw- https://github.com/MTecknology/saltstack-demo/blob/master/states/sys/pkgs/nginx.sls
18:16 MTecknology line 55+
18:18 major whytewolf, https://github.com/major0/salt-lxd-proxy trying to get the module to work from the CLI,  of which I totally copied nearly all of it from https://github.com/saltstack/salt/blob/develop/salt/modules/nxos.py
18:19 major the module, not the proxy
18:19 whytewolf kojiro: https://gist.github.com/whytewolf/ad0e4855f4e9bc39d3e82d2de5c78f65
18:19 q1x joined #salt
18:22 Sketch joined #salt
18:27 kojiro oh! derp, another solution would be to use onlyif: false in combination with file.absent
18:27 kojiro idk why I didn't think of that before
18:27 kojiro thanks, whytewolf and MTecknology
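MTecknology's "always remove the default" pattern, roughly (package name and path as on a stock nginx install; the state IDs are hypothetical):

```yaml
nginx:
  pkg.installed

remove-default-vhost:
  file.absent:
    - name: /etc/nginx/conf.d/default.conf
    - require:
      - pkg: nginx

# ...then separate file.managed states add back only the vhosts
# that should exist on this minion.
```

This avoids the create-delete cycle kojiro mentions: the default is unconditionally absent, and desired configs are managed explicitly.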
18:28 whytewolf major: try changing the virtualname of the module. you might be stepping on the tails of the _proxy module
18:31 whytewolf major: have you looked at https://github.com/saltstack/salt/blob/develop/salt/proxy/nxos.py to check to make sure you didn't miss anything with your _proxy file?
18:33 major well, I used the samples originally, and the proxy side of things are working
18:35 dhwt joined #salt
18:37 major though ... that said .. a vimdiff between the two looks like a fairly close match .. everything generally looks to match in the outer structure of everything .. just the inner workings of the methods differ
18:40 DD left #salt
18:40 DD joined #salt
18:43 DD left #salt
18:50 major whytewolf, yah, just went through and restructured stuff a bit better to match the nxos.py proxy and module and compared it all again .. it all looks right, so I dunno..
18:50 major and .. at least the proxy works
18:51 whytewolf yeah I'm not sure either. one thing i have never had to do was write a proxy module.
18:51 whytewolf although i should think about trying it. i have a lot of things in my home network that could benefit from it
18:51 major it isn't bad really .. though I wish there was a complete API available somewhere that told me everything I "could" implement :P
18:52 major I keep finding random proxy minion modules that have alternate interfaces that don't seem to be readily documented
18:54 major I am also thinking about adding some _engines work to this for sending events when container states change
18:54 exarkun joined #salt
18:54 major such as when they are migrated between hosts
19:02 MTecknology I've been needing to play with salt-proxy, but too much other stuff has taken priority.
19:02 aldevar joined #salt
19:10 tzero would anyone mind taking a look at what I'm failing to realize about a basic pillar setup? https://paste.ee/p/DIZ2J
19:11 tzero it looks like that when pillarenv: is set on a minion, it looks for the pillar only in the matching pillar_roots entry
19:14 skeezix-hf left #salt
19:15 MTecknology tzero: where else would you expect it to hit?
19:15 MTecknology pillar is a master side thing
19:17 major is there something you need to do with the states in order to get a module to actually load?
19:18 tzero MTecknology: I guess "base" is used in different places, and pillar_roots.base specifies directories to look for when that minion config is not explicitly set?
19:19 tzero e.g. in */pillar/top.sls, what do "base" and "*" signify?
19:19 MTecknology tzero: I think you're confusing/mashing master and minion configs
19:19 MTecknology pillar is on the master, that's it
19:20 cholcombe joined #salt
19:21 tzero right, then am I doing the correct thing by setting the env and roles grains, in order for the minion to know which states to request from the master, and from which environment?
19:22 tzero for context, I'm coming from a puppet setup where we had multiple environments, and would just copy/paste modules to promote the configs from one to another; I think the same can be more elegantly accomplished with Salt
19:24 MTecknology With gitfs, branches become environments, and matching is a matter of correctly formatting your top.sls.
19:25 MTecknology master -> base && * -> *
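Spelling out MTecknology's mapping for a pillar top.sls: the top-level keys are environment names (matching pillar_roots entries, or branches under gitfs), and the keys under them are minion targets (grain name/value hypothetical):

```yaml
base:
  '*':
    - common

prod:
  'env:prod':
    - match: grain
    - prod_settings
```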
19:26 tzero I tried to use gitfs a couple days ago, and ultimately wound up hitting a bug in one of the upstream packages; the alternative was using GitPython(?) with HTTP auth, and we have to use 2FA, rendering that impossible
19:26 MTecknology I require 2FA for git, except from the salt master
19:27 MTecknology requiring 2FA for system accounts.... icky. :(
19:27 MTecknology btw- still reading through this
19:28 MTecknology lots 'n lots of distractions around here
19:28 major whytewolf, bah! found it ... and it was a serious stupid on my part
19:29 major whytewolf, I still had the old LXD-Formula configured in my gitfs roots...
19:29 major of which .. I seriously don't think that formula should be used..
19:33 edrocks joined #salt
19:35 Hybrid joined #salt
19:37 exarkun I'm having a lot of trouble understanding https://docs.saltstack.com/en/latest/ref/states/all/salt.states.cmd.html#should-i-use-cmd-run-or-cmd-wait
19:39 tzero exarkun: cmd.run runs every time, cmd.wait only watches another resource and runs when it changes
19:40 exarkun Runs what?
19:40 tzero the command
19:40 exarkun What's with the `file.managed` in the `cmd.wait` example?
19:41 exarkun And what's with "The preferred format is using the onchanges Requisite, which works on cmd.run as well as on any other state."?  Sounds like I shouldn't use cmd.wait.
19:41 tzero it specifies that /usr/local/bin/postinstall.sh is a file resource managed by salt
19:42 tzero salt states seem like "inside-out" ansible playbooks
19:42 tzero it's name > type > attributes instead of type > name > attributes
19:46 jas02 joined #salt
19:51 exarkun I'm trying to make a command run when a managed file changes and I can't figure it out.
19:51 KingJ joined #salt
19:52 exarkun Any examples of that kind of thing around?
19:53 babilen exarkun: cmd.wait is pretty obsolete now that onchanges{_in} work as expected
19:53 babilen So, read up on https://docs.saltstack.com/en/latest/ref/states/requisites.html
19:54 tzero yeah, the _in things are cool
19:54 exarkun I think I read that
19:55 Hybrid joined #salt
19:55 babilen So, you want a cmd.run state and a file.managed one with onchanges/onchanged_in (depending on which state you pick)
19:56 exarkun Maybe my difficulty is that I have no idea how to track down why it didn't do what I expected
19:56 babilen http://paste.debian.net/1017575/
19:56 tzero exarkun: this is how I use it: https://paste.ee/p/M4VJG
19:56 babilen (as an example)
19:56 DD joined #salt
19:57 exarkun that doesn't look an awful lot different from what I have, https://gist.github.com/exarkun/3b598483a770249df903f6e88949a260
20:01 babilen And?
20:01 exarkun And "start introducer" doesn't run when I state.apply
20:03 tzero does tahoe.cfg change when that happens?
20:03 exarkun I'm using gitfs, I checked in some bogus changes to tahoe.cfg to try to make it happen
20:08 exarkun ah
20:08 exarkun I deleted the `creates` argument now it works.
20:08 exarkun But my debug technique is "randomly delete something, state.apply, see if it worked, repeat until it works"
20:09 exarkun I would like to be able to do something a bit smarter than that
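Putting babilen's and tzero's advice together, the working pattern looks roughly like this (paths and service name hypothetical). Note that a `creates:` argument on the cmd.run short-circuits it whenever that file already exists, regardless of onchanges, which is what masked the trigger here:

```yaml
tahoe-config:
  file.managed:
    - name: /etc/tahoe/tahoe.cfg
    - source: salt://tahoe/tahoe.cfg

start-introducer:
  cmd.run:
    - name: systemctl restart tahoe-introducer
    - onchanges:
      - file: tahoe-config
```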
20:14 mauli joined #salt
20:18 tzero if it makes you feel any better, I'm in the same boat, but as you can see my questions above haven't exactly demonstrated mastery... there's also `salt -l debug` for additional verbosity
20:21 MTecknology tzero: you have an environment root within an environment root?
20:23 briner_ joined #salt
20:24 tzero MTecknology: yeah, I did for a while; it didn't work so well. I ended up realizing that pillar_roots is kind of like $PATH. in /srv/salt/pillar/top.sls, I have a list of { 'env:foo' : [ {match: grain}, foo ] } definitions
20:24 tzero then, in /srv/terraform/pillar/, there's foo.sls, prod.sls, etc.
20:24 MTecknology tzero: that gist says you still do
20:25 MTecknology different environments don't come from arbitrarily renaming top.sls...
20:30 tzero yeah, I took this approach which is at least working -> https://paste.ee/p/kPD7m . However, I think this also means that each environment's pillar data isn't strictly separate anymore, right?
20:31 MTecknology correct
20:33 major hah .. I just figured out how the _modules/ are expected to interact with the _proxy/ .. waaaay too simple..
20:35 bbradley joined #salt
20:50 aldevar left #salt
20:59 rollniak_ joined #salt
21:02 rlefort joined #salt
21:06 xet7 joined #salt
21:09 alj[m] joined #salt
21:11 DanyC joined #salt
21:11 rlefort joined #salt
21:16 rlefort joined #salt
21:26 Edgan tzero: You can't trust grains for anything secure, like passwords.
21:28 dd joined #salt
21:31 strobelight joined #salt
21:49 tzero Edgan: right, I just use grains since those can be provisioned at boot with Terraform. then, those grains can determine which parts of salt to run. That was the thought, anyway. I'm still trying to figure out how to parameterize the environments though without using the -match:grain \n $environment in pillar/top.sls
21:59 gmoro joined #salt
22:04 Edgan tzero: I have a way
22:05 Edgan tzero: The id grain is hardcoded, aka is secure. So you can slice and dice the hostname into jinja variables, and treat them like grains.
22:08 Edgan tzero: The matching syntax isn't as clean, but I was thinking of making a pull request to add a grain-like matching syntax for jinja variables.
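The hostname-slicing Edgan describes, assuming a hypothetical `<role>-<env>-<nn>` naming scheme, looks roughly like this in a jinja-rendered pillar top.sls:

```yaml
{% set env = grains["id"].split("-")[1] %}
base:
  '*':
    - common
{% if env == "prod" %}
  '{{ grains["id"] }}':
    - prod_settings
{% endif %}
```

Because the `id` grain comes from the minion config rather than from arbitrary minion-set grains, it can be trusted for this kind of targeting.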
22:15 shpoont joined #salt
22:23 mrueg joined #salt
22:26 dd__ joined #salt
22:31 cgiroua joined #salt
22:47 dynamicudpate joined #salt
23:04 froztbyte whytewolf: okay, so, really, is that statement/design/whatever documented somewhere?
23:17 oida joined #salt
23:25 rcvu joined #salt
23:29 whytewolf froztbyte: no. however point to an event in salt that has 0 to do with interaction between a master and the minion.
23:43 major whytewolf, well .. I mean .. you can use the salt bus for almost anything .. health monitoring, statistics gathering, etc... you don't have to reserve it for master/minion...
23:43 major alarms
23:44 major you could have snort or something throw alerts to the bus and have everything else respond w/ configuring firewall rules...
23:44 whytewolf those are not built in events
23:44 whytewolf froztbyte: wants an item handled by loop_interval to fire an event.
23:45 major oh...
23:45 major Then I retract my statement
23:45 major :)
23:54 exarkun joined #salt
23:58 rcvu joined #salt
