
IRC log for #salt, 2018-03-16


All times shown according to UTC.

Time Nick Message
00:32 Trauma joined #salt
01:01 mannefu joined #salt
01:09 lkthomas_ left #salt
01:10 lkthomas joined #salt
01:14 theloceh1liosan joined #salt
01:42 shiranaihito joined #salt
01:57 hemebond left #salt
02:22 hemebond joined #salt
02:27 zerocoolback joined #salt
02:57 ilbot3 joined #salt
02:57 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.4 <+> RC for 2018.3.0 is out, please test it! <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
03:09 Guest73 joined #salt
03:30 tzero joined #salt
03:37 sh123124213 joined #salt
04:03 Psi-Jack_ joined #salt
04:09 onlyanegg joined #salt
04:10 Psi-Jack joined #salt
04:50 Guest73 joined #salt
04:59 Guest73 joined #salt
05:16 Psi-Jack joined #salt
05:26 sauvin_ joined #salt
05:37 Psi-Jack left #salt
05:38 Psi-Jack joined #salt
05:39 Psi-Jack left #salt
05:40 Psi-Jack joined #salt
05:42 sh123124213 joined #salt
05:43 Psi-Jack left #salt
05:44 Psi-Jack joined #salt
06:03 armyriad joined #salt
06:08 armyriad joined #salt
06:10 Udkkna joined #salt
06:11 zerocoolback joined #salt
06:29 zerocoolback joined #salt
06:36 zulutango joined #salt
07:02 sjorge joined #salt
07:14 exarkun joined #salt
07:20 sjorge joined #salt
07:25 indistylo joined #salt
07:26 mavhq joined #salt
07:38 sh123124213 joined #salt
07:47 aldevar joined #salt
08:11 Hybrid joined #salt
08:27 hoonetorg joined #salt
08:32 Tucky joined #salt
08:33 zulutango joined #salt
08:36 ffledgli1g joined #salt
08:39 ffledgli1g joined #salt
08:39 ffledgli1g Hello, I'm having some difficulty configuring salt-master and salt-api to talk to each other via TCP for IPC; strace-ing the API server tells me it's still trying to connect to the master via an IPC socket, although the master itself seems to be correctly listening on localhost ports. Has anyone set up something like this before? The `ipc_mode: tcp` in the master config file doesn't seem to affect the API server itself
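For reference, the master-side setting ffledgling describes, as a minimal sketch; the two ports are the documented defaults that come into play when ipc_mode is switched to tcp:

    # /etc/salt/master
    ipc_mode: tcp
    tcp_master_pub_port: 4512    # default event pub port when ipc_mode is tcp
    tcp_master_pull_port: 4513   # default event pull port when ipc_mode is tcp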
08:40 jrenner joined #salt
08:40 Pjusur joined #salt
08:41 hemebond ffledgling: Are you trying to set it up in a fancy way or something?
08:41 ffledgling hemebond: fancy as in?
08:41 hemebond Are you not using the defaults?
08:42 ffledgling hemebond: For TCP? Not quite, just `ipc_mode: tcp` and using the default ports
08:42 hemebond Are you using cherrypy or something with the API?
08:42 ffledgling Yes
08:42 hemebond I'm not sure what that ipc_mode is for.
08:43 hemebond My rest_cherrypy config just has a port and host.
08:43 hemebond Oh, that'll be for the cherrypy server itself.
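For context, a minimal rest_cherrypy block of the kind hemebond mentions looks something like this (port and certificate paths are illustrative):

    # /etc/salt/master.d/api.conf
    rest_cherrypy:
      port: 8000
      host: 0.0.0.0
      ssl_crt: /etc/pki/tls/certs/localhost.crt
      ssl_key: /etc/pki/tls/certs/localhost.key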
08:43 ffledgling Yeah, this is for master<->api communication
08:44 hemebond I don't think I have any settings explicitly for that.
08:44 hemebond Okay so the default for ipc_mode is "ipc"
08:44 hemebond Except on Windows.
08:44 ffledgling Correct, which I've changed in the config to `tcp`
08:44 hemebond Are you on Windows?
08:45 ffledgling I'm on debian
08:46 hemebond Any errors in the logs?
08:47 ffledgling hemebond: nope, nothing that seems relevant
08:47 ffledgling Just an INFO: <request> and then a 503
08:47 hemebond Is that in the master log?
08:47 darioleidi joined #salt
08:49 ffledgling hemebond: the api log, the master doesn't show anything because the api doesn't reach it at all
08:49 hemebond Oh.
08:49 ffledgling The strace on the api says it tries to reach it via an ipc socket that doesn't exist and the master is not listening on
08:53 hemebond Can you paste the full error?
08:53 hemebond Or is it literally just the HTTP request with a 503?
08:54 mannefu left #salt
08:54 ffledgling Literally just `[INFO    ] 172.16.54.0 - - [16/Mar/2018:08:54:27] "POST /login HTTP/1.1" 503 755 "" "curl/7.47.0"`
08:55 hemebond Strange that there's nothing in other logs even in debug logging.
09:00 hemebond Is the master actually listening on 451* ?
09:00 hemebond i.e., 4512 and 4513
09:01 hemebond Just out of curiosity, why are you using TCP?
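A quick way to answer that listening question on the master (assuming iproute2's ss is available on the Debian host):

    # list TCP listeners on the 451x IPC ports
    ss -tlnp | grep -E ':451[0-9]'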
09:02 ThomasJ joined #salt
09:03 mikecmpbll joined #salt
09:04 cewood joined #salt
09:12 scooby2 joined #salt
09:14 samodid joined #salt
09:24 yidhra joined #salt
09:34 Hazelesque joined #salt
09:35 ffledgling hemebond: well, basically I don't have a shared dir that I can use to mount the ipc sockets for both processes at the moment
09:38 Yamakaja joined #salt
09:41 aruns__ joined #salt
09:44 hemebond Sounds interesting. So your API process can't see any of the runtime files of the master?
09:46 mikecmpb_ joined #salt
09:56 aruns__ joined #salt
09:57 ffledgling hemebond: nop
09:57 ffledgling *nope
09:58 hemebond That's pretty annoying.
09:58 hemebond Sounds like one of those horrible Docker things.
09:58 hemebond *Docker-type
09:58 ffledgling hemebond: Haha, more or less, yes.
09:58 ffledgling Although I don't necessarily think it'd be annoying if I could get tcp to work
09:58 yuhl joined #salt
09:59 hemebond So did you find if the master was listening on those ports?
09:59 ffledgling It is, yes.
10:02 yidhra joined #salt
10:02 djinni` joined #salt
10:16 pf_moore joined #salt
10:16 zerocoolback joined #salt
10:24 mikecmpbll joined #salt
10:28 mikecmpbll joined #salt
10:30 Edgan joined #salt
10:56 zerocoolback joined #salt
11:02 prasant joined #salt
11:11 prasant joined #salt
11:11 evle joined #salt
11:12 prasant Hi, I'm getting this err: The Salt Master has rejected this minion's public key!,... after I have deleted the minion from master...
11:12 prasant I know that if I regenerate the minion keys then it will start working again
11:13 prasant How can I work around this problem without regenerating minion keys?
11:13 prasant joined #salt
11:14 babilen prasant: Look into /etc/salt/pki and track down that minion's key
11:14 prasant @babilen:  I cannot delete the minion keys... I want the master to accept the same keys from minion again...
11:14 prasant Is it possible?
11:15 prasant any configuration on the master?
11:15 babilen prasant: So, did you find your minion key in a subdir there?
11:17 prasant yes.. it's in /etc/salt/pki/minion/ [*.pub & *.pem]... If I delete and then restart the minion then it's fine... but I want it to work without regenerating keys on the minion
11:18 babilen /etc/salt/pki/minion/ is not being used by saltstack. Do you mean the key is in /etc/salt/pki/minions ?
11:19 babilen (and only there)
11:20 babilen I'm referring to the master in case you are describing the situation on the minion
11:28 babilen prasant: Did you look on the minion or the master? And if the former: What's the situation on the master?
11:30 prasant @babilen... Now I have looked at the master... after I delete the keys from the master they are indeed removed.. that I have verified
11:30 prasant I will describe again in steps...
11:31 prasant Step 1: New minion with id say "min1" asking master to accept keys...
11:31 prasant Step 2: Master accepts "min1" and can ping it
11:31 prasant Step 3: Master deleted "min1" from accepted list
11:32 prasant Step 4: Minion "min1" will send the notification to master again...
11:32 prasant Step 5: Master will accept "min1" the second time
11:33 zerocoolback joined #salt
11:33 prasant Step 6: BAM... I get the error on the minion and the minion kills itself.. the error is: "The Salt Master has rejected this minion's public key!.. To repair this issue, delete the public key for this minion on the Salt Master and restart this minion.... Or restart the Salt Master in open mode to clean out the keys. The Salt Minion will now exit."
11:33 babilen How do you delete in step 3? How do you accept in step 4? What's the situation in /etc/salt/pki (for that minion) after 3. and 4. and 5.
11:34 prasant I use "sudo salt-key -d min1" for deleting the key...
11:36 babilen ok
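For reference, the salt-key workflow being stepped through here, all run on the master:

    salt-key -L        # list keys by state: Accepted, Denied, Unaccepted, Rejected
    salt-key -d min1   # delete min1's key (step 3 above)
    salt-key -a min1   # accept min1's pending key again (step 5 above)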
11:38 msmith joined #salt
11:38 onslack <msmith> is the minion restarted between 3 and 4?
11:39 prasant @onslack... tried both.. (a ) restarted (b ) without restarting
11:39 babilen prasant: Did you manage to look into /etc/salt/pki ?
11:40 babilen I guess the situation after step 5 is most interesting, but the other questions are likely helpfuol also
11:40 onslack <msmith> i'm wondering if the minion dies at the point of 4 rather than after accept
11:40 babilen *speling
11:40 babilen Also include information on service restarts
11:41 onslack <msmith> babilen: i see what you mean about the silly quoting from the onslack bot. i suspect it would be better to identify the slack user's name some other way
11:41 prasant @babilen... I'm looking and will answer your queries
11:42 prasant After step 4: the minion id appears in : /etc/salt/pki/master/minions_pre/
11:43 prasant After step 5: the minion id appears in : /etc/salt/pki/master/minions/
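Those paths map onto salt-key's lists; the standard master-side layout is:

    /etc/salt/pki/master/minions/           # Accepted
    /etc/salt/pki/master/minions_pre/       # Unaccepted (pending)
    /etc/salt/pki/master/minions_denied/    # Denied (new key for an already-known id)
    /etc/salt/pki/master/minions_rejected/  # Rejected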
11:44 onslack <msmith> i have to ask, does the minion name appear in any other sections in the output from `salt-key -L` ? like 'denied' or 'rejected' ?
11:44 prasant nope
11:44 onslack <msmith> that error message makes me think the master has two keys for the same minion id
11:45 onslack <msmith> more specifically, that's the error that would occur if it had
11:45 onslack <msmith> and the master logs would also indicate that
11:47 onslack <msmith> are you able to run the master with debug logging enabled? that may help shed some light on what's happening
11:51 babilen I had exactly the same suspicion which is why I was curious about the situation in /etc/salt/pki
11:51 babilen I would have expected it to be in _rejected or other directories
11:53 babilen prasant: And just to make sure: the key that's in /etc/salt/pki/master/minions after step 2 is no longer present *anywhere* in /etc/salt/pki, it is the same as the one present in /etc/salt/pki/master/minions_pre/ and /etc/salt/pki/master/minions/ after steps 4 and 5 respectively, and you also did not restart the minion after step 3?
11:54 onslack <msmith> prasant: also are you testing with a new minion, perhaps spun up inside docker/container/vm, and as a result the new minion is generating a new key?
11:54 onslack <msmith> heh, same idea again
12:02 saltnoob58 joined #salt
12:04 evle joined #salt
12:05 saltnoob58 hi. I want to run a salt state that, from the master, runs wget/curl example.com/minionshostname and then uses the response in a config template for only that minion. What's the best way to do that?
12:05 saltnoob58 i'd just curl in a custom grain but i want to do it from the master
12:05 saltnoob58 not run curl on each minion
12:09 onslack <msmith> depends when you want it to run. normal render occurs on the minion only, so a jinja call is out. the only other way would be to run it in advance and store the result in something you can query, perhaps sdb
12:09 onslack <msmith> pillar is rendered on the master, but would you want the call run every time the pillar is refreshed?
12:11 thelocehiliosan joined #salt
12:16 Nahual joined #salt
12:17 zerocoolback joined #salt
12:18 saltnoob58 probably
12:20 onslack <msmith> then jinja in a pillar sls would probably be a solution, even if it's not the most elegant way
12:20 prasant @babilen.. the keys are not present anywhere after step 3... and then I restart the minion...
12:21 onslack <msmith> are you changing the minion at all, for example deleting the keys?
12:22 zer0def for me it sounds like you want an sls that's called via `salt-run state.orchestrate`, since that's run on the master (and that's where you can place the jinja curl call), then use that value as an additional pillar passed into a `state.sls` state targeting a particular minion
12:24 onslack <msmith> nice, much better ^^ :)
12:24 zer0def there's also a lot of breadth to do any sort of data mangling before you even pass it onto the minion
12:26 zer0def that's in case your curl call returns values related to multiple minions, you could just pass on the returned data with minion IDs as keys and their relevant data as values, then refer to that as `{{ additional_pillar[grains['id']]['blah'] }}`
12:27 zer0def but that *does* expose curl-call-related information related to other minions, which might be a concern
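A minimal sketch of what zer0def describes; the file name, URL, target, state name, and pillar key are all hypothetical:

    # /srv/salt/orch/fetch_and_apply.sls -- run with:
    #   salt-run state.orchestrate orch.fetch_and_apply
    # The jinja below renders on the master, so the HTTP call happens there.
    {% set resp = salt['http.query']('https://example.com/minion1', text=True) %}

    apply_config:
      salt.state:
        - tgt: minion1
        - sls:
          - myconfig
        - pillar:
            fetched_value: {{ resp.get('text', '') | json }}

The template applied by myconfig can then read the value from `pillar['fetched_value']`.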
12:27 saltnoob58 man i miss ansible where you can just run a local_action and register the result
12:28 saltnoob58 i get all the weird requirements when im working with salt
12:28 zer0def `local_action` being run from your machine, not the one you're managing, correct?
12:28 saltnoob58 yes
12:28 zer0def because if that's the case, what i've described is basically the same thing
12:29 s7rZBOODV joined #salt
12:29 saltnoob58 well, maybe, but im a very simple person and im very confused by the need to run an extra orchestral state and pillar to populate a template differently for each minion :)
12:30 zer0def well, the extra orchestral state isn't exactly necessary
12:30 saltnoob58 it's like saying something in assembly is basically the same thing as a python library. Sure, they do one thing, but no one man should have all that power
12:30 zer0def the goal here is to pass additional data related to the call through `state.sls`'s `pillar` kwarg, you could do that from shell just as well
12:30 zer0def (i'm assuming it's a `state.sls` call, though)
12:32 saltnoob58 if it's a kwarg supplied during state.sls runtime, like from a command line, i think i have to pass on this task to the guy running powershell which calls all the states for a complex provisioning thing. Because the guy who was supposed to convert it to orchestrator sls quit :(
12:33 saltnoob58 but the sad part is either way have to go "out" of salt to do it
12:33 zer0def i'm not sure how you have to go "out" of salt
12:34 onslack <msmith> the orchestrator state can call http.query entirely inside salt
12:35 onslack <msmith> your requirement was run something on the master, then pass that into the minion. that needs two processes. it's up to you what form the first takes in order to pass the pillar data to the second
12:36 onslack <msmith> the two suggestions are orchestrate or command-line
12:36 zer0def pretty sure all the pieces you need exist in salt: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.http.html https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html https://docs.saltstack.com/en/latest/ref/states/all/salt.states.saltmod.html
12:36 onslack <msmith> orchestrate is inside salt, command-line isn't
12:37 zer0def if you're unwilling to do it through orchestration, your best bet would be to store the curl result as a shell var, then use it in a `state.sls` or `state.highstate` call
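That shell-var variant might look like this (URL and pillar key again hypothetical):

    # fetch on the master, then hand the value to the minion as extra pillar
    val=$(curl -s https://example.com/minion1)
    salt 'minion1' state.sls myconfig pillar="{'fetched_value': '$val'}"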
12:39 saltnoob58 maybe saying 'out of the state' better describes what I meant. And unfortunately I have to use the framework already in place, where an entirely separate piece calls states over api
12:40 zer0def that's fine, you just replace the api call running an execution module function for one that runs a runner function
12:41 zer0def and if you're running multiple states over api, odds are you're in need of orchestration, anyway
12:43 saltnoob58 there is orchestration, just not salt orchestrator orchestration. And I need lots of things, maybe I'll get them next year. Until then have to make chocolate icecream out of poop for management who really wants a tasty treat :)
12:45 saltnoob58 but basically i need to make a pillar, write into it a jinja={{curl command}} and then when calling a state invoke the pillar and it will refresh itself for every minion that calls it?
12:45 zer0def oh, salt orchestrate calls can be just a piece of what you need overall and so far, the changes you'd have to do aren't as invasive, imho; the way you're describing your issue with your last statement sure sounds like someone's playing with your flexibility to see where you break :P
12:45 prasant @babilen...there is another simple way to reproduce the same error:
12:46 zer0def you're not "invoking" pillars, because pillars are data
12:46 prasant Step 1: Minion "min2" sends notification to master but master does not accept the keys and the keys are in "unaccepted keys"
12:46 saltnoob58 and it wont get race conditions if lots of minions try to use the pillar and thus update it?
12:47 zer0def you shove curl's return into a pillar in addition to existing pillars through the `pillar` kwarg in a `state.*` call
12:47 zer0def please refer to the links i've provided, so you get a better idea
12:47 prasant Step 2:  Minion "min2" keys are regenerated, the minion is rebooted, and the new keys are published to the master
12:47 zer0def especially the "state" execution module
12:47 prasant Step 3: now master has "min2" under "denied list" and "unaccepted list"
12:48 prasant Step 4: Accept "min2" and the error appears on minion
12:49 prasant the error is: "[CRITICAL] The Salt Master has rejected this minion's public key!    To repair this issue, delete the public key for this minion on the Salt Master and restart this minion.   Or restart the Salt Master in open mode to clean out the keys. The Salt Minion will now exit."
12:50 onslack <msmith> prasant: that's precisely the condition i was describing earlier. that's expected behaviour if the minion's key changes
12:51 prasant how to bypass this issue..???
12:51 prasant any workaround?
12:51 onslack <msmith> don't ever change the minion key
12:52 prasant @onslack.... it can happen...
12:52 prasant I enabled the setting "preserve_minion_cache: False" on the master
12:52 prasant still the problem does not go away
12:53 onslack <msmith> if you're saying you have a normal use case for a minion to be recreated then you'll have to go to lengths to ensure that the master and minion are kept in sync correctly
12:53 aldevar joined #salt
12:53 zerocoolback joined #salt
12:54 prasant @onslack... most of the time we will try to keep the keys in sync... but there are some corner cases when it does happen... how can I work around the situation.. any config on the master?
12:56 onslack <msmith> i can't answer that without knowing how you're getting into a situation that the master wasn't intended to encounter. a minion should always keep the key it's added with, and any time when that doesn't happen is an error
12:57 onslack <msmith> if you have a short-lived minion, for example a temporary container, then perhaps you need to consider not using a long-term minion
12:57 saltnoob58 whoever is changing the keys should at the same time delete old keys and accept the new ones
12:58 onslack <msmith> the lengths i'm thinking of involve carefully constructed orchestration of wheel to perform key management
12:59 prasant @onslack, @saltnoob58: this is probably a feature of salt for safety purposes.. is there a way to bypass this feature....
12:59 onslack <msmith> other than turning off all key security?
13:00 saltnoob58 on the master run a cron with salt-key -A every minute
13:00 onslack <msmith> every time someone seems to want to force salt to do something in an unusual way, i advise them to instead look at how salt would prefer you to do it
13:00 onslack <msmith> saltnoob58: that wouldn't clear out the keys in error
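For completeness, "turning off all key security" as msmith puts it corresponds to the master's auto_accept option; a sketch, with the caveat that it removes key vetting entirely and still won't help once a conflicting key for the same id has already been accepted:

    # /etc/salt/master -- accept every submitted minion key without review
    auto_accept: True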
13:01 zer0def couldn't presence events be used? i think they contain the list of responsive minions
13:02 prasant @saltnoob58: I want salt to accept the keys and not reject the minion if previously it was deleted or minion keys are regenerated..
13:03 prasant if the keys are in the accepted list then salt master should be able to communicate with the minion ..
13:03 indistylo joined #salt
13:04 evle3 joined #salt
13:09 onslack <msmith> the master will not talk to a minion that's in the rejected list. it's the key that counts, not the minion id
13:10 aruns__ joined #salt
13:12 cyteen joined #salt
13:14 prasant @onslack.. what's the master behaviour.. if I delete a minion key (from the accepted list) and the same minion with the same id & key is accepted again... will the master then communicate with the minion?
13:15 mikecmpbll joined #salt
13:17 babilen prasant: I'm still not exactly sure how you end up with a "rejection" in the first scenario if you remove the minion first *and* its key is nowhere to be found in /etc/salt/pki
13:18 babilen At that point *any* key that is presented by that minion will be treated as a new key just like the first time you accepted it
13:18 babilen You run into problems if minion keys change and the master is not expecting that
13:18 onslack <msmith> that's not what's happening or you wouldn't be getting this error
13:19 babilen I know, which is why I'm surprised that the minion key is nowhere to be found in salt-key's output or /etc/salt/pki after the removal and prior to the error
13:21 babilen The second scenario is obvious, but the error we initially discussed sounds as if it shouldn't occur as the master knows nothing of earlier minion's anymore
13:22 thelocehiliosan joined #salt
13:22 babilen *keys
13:24 onslack <msmith> or there's a race condition in between one minion going down and another coming up
13:24 babilen prasant: And yeah, in the case of "delete minion key (from accepted list) and the same minion with same id & key is accepted again" the master will communicate
13:24 onslack <msmith> but like i said, if that's all it was then you wouldn't be getting this error, so there must be something else happening
13:25 babilen indeed
13:35 rkhadgar joined #salt
13:37 dynek left #salt
13:42 aldevar joined #salt
13:58 nixjdm joined #salt
14:00 cgiroua joined #salt
14:03 indistylo joined #salt
14:18 sh123124213 joined #salt
14:24 nixjdm joined #salt
14:46 om2 joined #salt
14:55 zerocoolback joined #salt
14:55 oida joined #salt
15:07 tiwula joined #salt
15:24 dendazen joined #salt
15:30 theloceh1liosan joined #salt
15:31 ecdhe joined #salt
15:37 zerocoolback joined #salt
15:39 dave_den joined #salt
15:39 dave_den left #salt
15:41 jsmith0012 joined #salt
15:44 jsmith0012 got an odd interaction on a windows minion i am trying to understand/work around: i am trying to run plink in the background to form an ssh tunnel. i can get it to start but when run by salt it does nothing, when run via command line it runs just fine
15:46 jsmith0012 the salt command looks kinda like this: salt 'windows.10.test@lab' cmd.run_bg "plink.exe -ssh -pw ************* -P 22 -N -R 33333:127.0.0.1:4900 user@example.com "
15:47 jsmith0012 have not dealt with salt on windows before.  is there something i am missing? or some tricks?
15:51 zerocoolback joined #salt
15:51 jsmith0012 near as i can tell plink does a tcp handshake with the server and stops.   could this be some kind of env issue in windows?
15:52 onslack <msmith> i suspect that plink stores config in the registry, are you running salt as the same user you're manually testing as?
15:55 futuredale_ joined #salt
15:55 simonmcc_ joined #salt
15:57 descrepes joined #salt
15:57 felixhummel_ joined #salt
15:57 ffledgli1g joined #salt
15:57 Psy0rz_ joined #salt
15:57 darvon_ joined #salt
15:57 kiorky_ joined #salt
15:57 cwright_ joined #salt
15:57 sybix_ joined #salt
15:57 pfalleno1 joined #salt
15:58 upb_ joined #salt
15:58 dev_tea_ joined #salt
15:58 iggy_ joined #salt
15:59 ipsecguy_ joined #salt
16:01 wych42 joined #salt
16:01 bendoin_ joined #salt
16:02 stewgoin- joined #salt
16:02 ksa_ joined #salt
16:03 haam3r_ joined #salt
16:04 lungaro joined #salt
16:04 Deliant joined #salt
16:07 tobiasvdk joined #salt
16:07 jsmith0012 not yet. ill give it a shot
16:09 ingy1 left #salt
16:09 ingy joined #salt
16:09 AvengerMoJo joined #salt
16:10 mpanetta joined #salt
16:10 m0nky joined #salt
16:12 jsmith0012 it crashed the salt minion service.
16:12 onslack <msmith> i wasn't suggesting changing the user salt runs as, i was asking if you were running plink as that user
16:13 jsmith0012 like using runas="name" password="sasdfadf"
16:13 jsmith0012 in the salt command
16:13 jsmith0012 huh... the service came back on its own after about 1 min.
16:17 onlyanegg joined #salt
16:17 DanyC joined #salt
16:18 jsmith0012 i assume this is what you mean: here is my full example salt 'windows.10.test@lab' cmd.run_bg "plink.exe -ssh -pw ************* -P 22 -N -R 33333:127.0.0.1:4900 user@example.com "runas="name" password="sasdfadf"
16:18 cro- joined #salt
16:19 bachler joined #salt
16:19 onslack <msmith> that stands a better chance of working the same as running plink as that user, yes :)
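That invocation with the quoting tidied up (credentials are obviously placeholders):

    salt 'windows.10.test@lab' cmd.run_bg \
        'plink.exe -ssh -pw ********* -P 22 -N -R 33333:127.0.0.1:4900 user@example.com' \
        runas='name' password='sasdfadf'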
16:19 bachler Anyone that can point me to a way to add route53 records with salt?
16:20 onslack <msmith> <https://docs.saltstack.com/en/latest/ref/states/all/salt.states.boto_route53.html>
16:21 onslack <msmith> <https://docs.saltstack.com/en/latest/ref/states/all/salt.states.boto3_route53.html>
16:21 bachler SALT.STATES.BOTO_ROUTE53 just seems to verify that stuff is there? not actually add it? At least in my case. I am trying to debug that state, it returns true but the record is not in the AWS console
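For reference, a minimal sketch from the documented boto_route53.present interface (zone, name, and value are hypothetical; it needs boto plus AWS credentials or a profile configured):

    add_example_record:
      boto_route53.present:
        - name: www.example.com.
        - zone: example.com.
        - record_type: A
        - value: 192.0.2.10
        - ttl: 300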
16:22 jsmith0012 thx onslack i think that did the trick.... more or less.  the remote port forward did not work but i see in the log it authenticated.
16:23 onslack <msmith> salt can automate many things, but often those things have to already work :)
16:23 jsmith0012 the only odd part is the salt job hangs. like it's still sitting there waiting for a return even though it's running in windows.
16:24 onslack <msmith> strange. cmd.run_bg really shouldn't
16:24 onslack <msmith> bachler: i haven't used it, i just searched the docs
16:25 jsmith0012 yea it is strange. it should have timed out by now or something
16:28 Zachary_DuBois joined #salt
16:29 jsmith0012 and there is my remote port binding... attached to the tcp6 loop back address >< well that is not a salt issue
16:29 jsmith0012 still has not returned
16:30 bendoin_ left #salt
16:36 jsmith0012 is there a way to STOP a job?  it's at 10 minutes even though the program is running and happy on the minion
16:36 zerocoolback joined #salt
16:36 GrisKo joined #salt
16:37 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.kill_job
16:37 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.kill_all_jobs
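Usage is roughly this (the jid is made up; real ones can be listed with salt-run jobs.active):

    salt 'windows.10.test@lab' saltutil.kill_job 20180316163600123456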
16:39 Guest73 joined #salt
16:45 aldevar joined #salt
16:52 justanotheruser joined #salt
16:53 justanotheruser joined #salt
16:58 Trauma joined #salt
17:06 yidhra joined #salt
17:45 zerocoolback joined #salt
17:54 prasant joined #salt
18:01 prasant joined #salt
18:18 yuhl joined #salt
18:20 onlyanegg joined #salt
18:34 edrocks joined #salt
18:37 krusolu joined #salt
18:47 yuhl joined #salt
18:59 mikecmpbll joined #salt
19:01 zerocoolback joined #salt
19:11 cewood joined #salt
19:33 dendazen joined #salt
19:39 aldevar joined #salt
19:46 ritz joined #salt
19:52 yidhra joined #salt
19:54 thelocehiliosan joined #salt
20:05 edrocks joined #salt
20:10 ddg joined #salt
20:18 thelocehiliosan joined #salt
20:19 onlyanegg joined #salt
20:25 yuhl_ joined #salt
20:29 KingJ joined #salt
20:30 ymasson joined #salt
20:45 brokensyntax joined #salt
20:45 dendazen joined #salt
20:50 mrBen2k2k2k__ joined #salt
21:05 viq joined #salt
21:19 Hybrid joined #salt
21:31 cgiroua joined #salt
21:40 thelocehiliosan joined #salt
21:42 cyteen joined #salt
21:49 mavhq joined #salt
21:49 gswallow joined #salt
22:02 shpoont joined #salt
22:07 ymasson joined #salt
22:07 cewood joined #salt
22:07 tobiasvdk joined #salt
22:07 Yamakaja joined #salt
22:07 sjorge joined #salt
22:07 magnus1 joined #salt
22:07 monokrome joined #salt
22:07 ekkelett joined #salt
22:07 StolenToast joined #salt
22:07 averell joined #salt
22:07 rideh joined #salt
22:07 pppingme joined #salt
22:07 coldbrewedbrew_ joined #salt
22:07 rickflare joined #salt
22:07 mr_kyd joined #salt
22:07 babilen joined #salt
22:07 Whissi joined #salt
22:07 basepi joined #salt
22:07 bd joined #salt
22:07 dkehn joined #salt
22:07 Kelsar joined #salt
22:07 copec joined #salt
22:07 tys101010 joined #salt
22:07 pcn joined #salt
22:07 sjl_ joined #salt
22:07 jrklein joined #salt
22:07 ey3ba11 joined #salt
22:09 monokrome joined #salt
22:10 sayyid9000 joined #salt
22:12 cgiroua joined #salt
22:13 nledez joined #salt
22:13 yuhl_ joined #salt
22:14 onlyanegg joined #salt
22:14 shpoont joined #salt
22:23 sh123124213 joined #salt
22:30 thelocehiliosan joined #salt
22:43 shpoont joined #salt
22:47 heaje joined #salt
22:48 onlyanegg joined #salt
22:56 aldevar joined #salt
23:06 sh123124213 joined #salt
23:30 Hybrid joined #salt
23:32 gswallow joined #salt
23:51 thelocehiliosan joined #salt
23:55 cgiroua joined #salt
