
IRC log for #salt, 2017-04-03


All times shown according to UTC.

Time Nick Message
00:05 masber joined #salt
00:56 samkottler left #salt
01:05 Pyro_ joined #salt
01:06 Pyro_ joined #salt
01:11 nikdatrix joined #salt
01:23 XenophonF joined #salt
01:40 mpanetta joined #salt
01:51 catpiggest joined #salt
01:56 Pyro_ joined #salt
01:58 jas02 joined #salt
02:37 sp0097 joined #salt
02:40 baikal joined #salt
02:45 evle joined #salt
02:46 JPT joined #salt
02:56 Klaus_D1eter_ joined #salt
03:09 justan0theruser joined #salt
03:12 nikdatrix joined #salt
03:17 sp0097 left #salt
03:41 mpanetta_ joined #salt
04:02 preludedrew joined #salt
04:02 jas02 joined #salt
04:16 stooj joined #salt
04:37 armguy joined #salt
04:37 samodid joined #salt
05:03 nikdatrix joined #salt
05:14 nikdatrix joined #salt
05:15 DarkKnightCZ joined #salt
05:15 dendazen joined #salt
05:19 DEger joined #salt
05:25 rdas joined #salt
05:28 DEger joined #salt
05:36 IRCFrEAK joined #salt
05:37 IRCFrEAK left #salt
05:43 IRCFrEAK joined #salt
05:44 IRCFrEAK left #salt
05:55 Derailed joined #salt
05:58 colttt joined #salt
06:00 do3meli joined #salt
06:06 Straphka joined #salt
06:13 golodhrim|work joined #salt
06:15 felskrone joined #salt
06:20 jas02 joined #salt
06:22 babilen joined #salt
06:22 babilen joined #salt
06:24 cyteen joined #salt
06:31 djgerm joined #salt
06:32 jas02 joined #salt
06:37 jas02 joined #salt
06:38 aldevar joined #salt
06:48 ronnix joined #salt
06:51 Ricardo1000 joined #salt
06:52 viq joined #salt
06:56 nikdatrix joined #salt
06:59 dario joined #salt
07:06 debian1121 joined #salt
07:07 ronnix joined #salt
07:11 o1e9 joined #salt
07:15 debian112 joined #salt
07:19 tru_tru joined #salt
07:21 jhauser joined #salt
07:28 tongpu joined #salt
07:29 JohnnyRun joined #salt
07:38 cyborg-one joined #salt
07:38 ReV013 joined #salt
07:42 mpanetta joined #salt
07:43 prg3 joined #salt
07:44 candyman88 joined #salt
07:48 candyman89 joined #salt
07:53 mikecmpbll joined #salt
07:54 alvin_ joined #salt
08:02 Rumbles joined #salt
08:02 kbaikov joined #salt
08:09 cyteen joined #salt
08:10 bdrung_work joined #salt
08:21 cyteen joined #salt
08:27 zulutango joined #salt
08:28 DEger joined #salt
08:30 Mattch joined #salt
08:31 candyman88 joined #salt
08:32 inad922 joined #salt
08:35 Annihitek joined #salt
08:36 robinsmidsrod I'm trying to make a simple file.line sls entry, and I can't find info on how to specify the "before:" value as a regular expression instead of a simple string?
08:36 robinsmidsrod I've tried "exit 0", "^exit 0$", ^exit 0$ and /^exit 0$/, but none of them work...
08:37 robinsmidsrod if you have a better suggestion for adding to /etc/rc.local before the final exit 0 line, feel free to suggest a better alternative
08:38 DEger joined #salt
08:38 robinsmidsrod this is my current try: https://gist.github.com/robinsmidsrod/fc5d14d3432d39b56ed3dee830da4189
08:41 yuhl______ joined #salt
08:47 Ricardo1000 Hello, world :))
08:48 DEger joined #salt
08:52 jhauser_ joined #salt
08:53 candyman88 joined #salt
08:56 cliluw joined #salt
09:04 cyteen_ joined #salt
09:04 DEger joined #salt
09:09 haam3r robinsmidsrod: "exit 0" occurs twice in the file, and that seems to be the problem, not your regex. Also you can lose the match line in your current try
09:10 robinsmidsrod haam3r: yes, which is why I try to match against "^exit 0$" to ensure I only match the one on a line by itself
09:11 robinsmidsrod but apparently that doesn't match anything (zero matches)
09:11 robinsmidsrod haam3r: thanks about the match line being superfluous
09:12 DEger joined #salt
09:17 nikdatrix joined #salt
09:21 ronnix joined #salt
09:22 haam3r robinsmidsrod: Have you thought about switching over to file.replace or templating the entire file?
09:23 cmarzullo ^^
09:23 robinsmidsrod haam3r: templating the entire file is not really an alternative, as people might need to have manual edits in it
09:24 robinsmidsrod but using file.replace is doable, if you can achieve the same thing, of adding a line before the exit 0 at the end, without disturbing anything else before that line
09:25 DEger joined #salt
09:29 DEger_ joined #salt
09:33 robinsmidsrod haam3r: how would you do it with file.replace?
09:46 haam3r robinsmidsrod: Seems that file.replace uses the same re.search function, so I'm guessing that would actually fail the same way
09:47 robinsmidsrod haam3r: well, thanks anyway
09:47 robinsmidsrod is this related to multiline matching?
09:48 robinsmidsrod does re.search support setting multiline using "(?m)..." construct?
09:48 robinsmidsrod and does it support \A, \Z, \z when using multiline?
10:03 DEger joined #salt
10:03 haam3r robinsmidsrod: It seems it's a thing with re.search that the file is turned into a string and then the first occurrence from there is taken
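The multiline question above can be checked directly against Python's `re` module. A quick sketch (the rc.local content here is made up for illustration, but mirrors the "exit 0 appears twice" situation from the discussion):

```python
import re

rc_local = """#!/bin/sh -e
# rc.local
echo "exit 0 appears here in a comment"
exit 0
"""

# Without the multiline flag, ^ and $ only anchor at the start/end of the
# whole string, so '^exit 0$' fails against a file read in as one string.
assert re.search(r"^exit 0$", rc_local) is None

# With (?m), ^ and $ match at every line boundary, so only the standalone
# 'exit 0' line matches -- not the one buried inside the echo.
m = re.search(r"(?m)^exit 0$", rc_local)
assert m is not None and m.group(0) == "exit 0"
```

As for the other question: Python's re supports \A and \Z (there is no \z), and both keep anchoring to the whole string even under (?m).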
10:05 robinsmidsrod haam3r: *sigh* - a bug that needs to be reported?
10:06 megamaced joined #salt
10:06 jas02 joined #salt
10:07 hoonetorg joined #salt
10:14 cmarzullo managing lines in files is an antipattern. you gonna run into trouble in the long run.
10:14 DEger joined #salt
10:16 kuromagi joined #salt
10:16 jas02 joined #salt
10:16 DEger joined #salt
10:18 rideh joined #salt
10:18 robinsmidsrod cmarzullo: sure it is, but I don't really feel like creating both a systemd unit and an upstart script just to be able to remove the troublesome Hyper-V vmbus host to guest time sync feature (I use ntpd)
10:19 robinsmidsrod which is why I just try to add a simple bash script to disable it on startup
10:19 robinsmidsrod most of the time I use file.managed to keep my files in check
10:20 haam3r robinsmidsrod: Actually after a bit more digging, file.replace does work...file.line just does some weird things...there is already a pretty long issue up in github for file.line
10:20 haam3r robinsmidsrod: in the mean time, i answered your gist with how to do the same thing with file.replace
10:21 haam3r commented*
10:22 jas02 joined #salt
10:30 Rumbles joined #salt
10:31 dwfreed joined #salt
10:33 robinsmidsrod haam3r: got a link to that issue?
10:34 robinsmidsrod haam3r: about the gist comment, wouldn't that add in the line every time you run it?
10:39 haam3r robinsmidsrod: https://github.com/saltstack/salt/issues/27295
10:39 saltstackbot [#27295][OPEN] salt.modules.file.line documentation improvements | See https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.file.html#salt.modules.file.line....
10:39 haam3r robinsmidsrod: seems so yes :(
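One common way to keep the file.replace approach idempotent is to guard it with `unless`, so the replacement only fires when the marker line is absent. A sketch (the state ID and script path are made-up placeholders; file.replace's default flags already include MULTILINE, so the `^...$` anchors work per-line):

```yaml
# Sketch: insert a line before the final 'exit 0' in /etc/rc.local,
# but only if it is not already there.
add-timesync-disable:
  file.replace:
    - name: /etc/rc.local
    - pattern: '^exit 0$'
    - repl: '/usr/local/bin/disable-timesync.sh\nexit 0'
    - unless: grep -q disable-timesync.sh /etc/rc.local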
10:44 jas02 joined #salt
10:48 dendazen joined #salt
10:49 Praematura joined #salt
10:54 mavhq joined #salt
10:55 DEger joined #salt
10:58 do3meli joined #salt
11:02 DEger joined #salt
11:04 amcorreia joined #salt
11:05 GnuLxUsr joined #salt
11:12 jas02 joined #salt
11:16 DEger joined #salt
11:17 ReV013 joined #salt
11:23 lorengordon joined #salt
11:23 ikarpov joined #salt
11:26 catpig joined #salt
11:33 evle1 joined #salt
11:38 Pyro_ joined #salt
11:41 toanju joined #salt
11:48 devtea joined #salt
11:48 Neighbour Has anyone gotten fileserver.s3fs working from a master with integrated IAM-role?
11:49 Neighbour I keep getting 'No AWSAccessKey was presented.'-errors
12:09 nikdatrix joined #salt
12:09 abednarik joined #salt
12:19 numkem joined #salt
12:20 Neighbour apparently this is caused by https://github.com/kennethreitz/requests/issues/2949 ...does anyone have an idea on how to handle this?
12:20 saltstackbot [#2949][OPEN] Session's Authorization header isn't sent on redirect | I'm using requests to hit developer-api.nest.com and setting an Authorization header with a bearer token. On some requests, that API responds with an 307 redirect. When that happens, I still need the Authorization header to be sent on the subsequent request. I've tried using `requests.get()` as well as a session....
12:22 Neighbour (since AWS redirects from <bucketname>.s3.amazonaws.com to <bucketname>.s3-<region>.amazonaws.com)
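The usual workaround for that requests issue is to subclass `Session` and override `rebuild_auth`, the hook that normally strips the `Authorization` header on cross-host redirects. A sketch of the general technique, not something s3fs-specific:

```python
import requests

class KeepAuthSession(requests.Session):
    """Session that keeps the Authorization header across cross-host redirects.

    By default, requests' rebuild_auth() deletes the Authorization header
    when a redirect points at a different host (the behaviour described in
    requests issue #2949). Overriding it to do nothing keeps the header,
    which is what the AWS bucket-to-region redirect needs.
    """
    def rebuild_auth(self, prepared_request, response):
        pass  # keep the original headers, including Authorization
```

Whether this can be wired into fileserver.s3fs without patching Salt is another matter; the class above only shows the requests-level fix.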
12:28 lorengordon joined #salt
12:32 KennethWilke joined #salt
12:38 DarkKnightCZ joined #salt
12:45 evle joined #salt
12:49 haarp joined #salt
12:51 XenophonF joined #salt
12:52 haarp hello. is it possible to pass ssh options to salt-ssh on the command line? i don't want to make temporary edits to my ssh config file for a salt-ssh run. the doc does not mention any such option
12:57 ronnix joined #salt
13:00 s_kunk joined #salt
13:07 DarkKnightCZ joined #salt
13:09 edrocks joined #salt
13:10 tkharju joined #salt
13:19 demize haarp: https://github.com/saltstack/salt/blob/a33b8f81f7a29d0da54a88240771a41f489954c1/salt/utils/parsers.py#L2873
13:20 GMAzrael joined #salt
13:21 demize Erhm, wait, that's wrong.
13:21 GMAzrael Anyone getting a 301 error with the WinRepo-ng?
13:22 demize haarp: So yeah, was added to develop 4 months ago, not released yet.
13:22 jas02 joined #salt
13:27 brousch__ joined #salt
13:29 Deliant joined #salt
13:31 remyd1 joined #salt
13:35 remyd11 joined #salt
13:41 GMAzrael anyone?
13:44 remyd1 joined #salt
13:45 icebal joined #salt
13:45 _Cyclone_ joined #salt
13:45 haarp thanks, demize, good to know
13:46 PatrolDoom joined #salt
13:48 racooper joined #salt
13:53 Tanta joined #salt
13:54 ivanjaros joined #salt
13:56 icebal joined #salt
13:57 icebal- joined #salt
13:59 Rumbles_ joined #salt
14:00 DarkKnightCZ joined #salt
14:02 mpanetta joined #salt
14:06 sarcasticadmin joined #salt
14:06 Rumbles joined #salt
14:10 promorphus joined #salt
14:12 aldevar left #salt
14:14 aldevar joined #salt
14:19 mpanetta joined #salt
14:24 ernescz joined #salt
14:30 gtmanfred honestly: i am not going to be back around until next week, I am moving to Utah this week
14:30 gtmanfred demize: wanna take a look at an issue for me with salt-bootstrap and arch?
14:32 wonko21 joined #salt
14:32 tapoxi joined #salt
14:33 promorphus joined #salt
14:33 wonko21 joined #salt
14:34 demize gtmanfred: I probably could when I get home, or tomorrow.
14:35 gtmanfred demize: good, right now it doesn't work and I don't have time to look at it :) bug fixing for 2016.11.4
14:35 gtmanfred <3 kthxbai!
14:35 demize Haha
14:35 demize gtmanfred: Any more specific issue than that? ;p
14:36 jdipierro joined #salt
14:36 Pyro_ joined #salt
14:36 gtmanfred not really unfortunately
14:37 gtmanfred it is failing in our archlinux branch tests
14:37 gtmanfred https://jenkins.saltstack.com/job/branch_tests/job/2016.9/job/salt-2016_9-linode-arch/
14:37 Pyro_ joined #salt
14:38 Pyro_ joined #salt
14:39 Brew joined #salt
14:39 Pyro_ joined #salt
14:39 Pyro_ joined #salt
14:40 Pyro_ joined #salt
14:41 demize What a useful log... ;p
14:42 demize Would be useful if it grabbed `systemctl status`/the salt log if it failed to start.
14:42 ernescz hey guys! I was wondering - is it possible to "embed" (for a lack of a better word) one Pillar into another one? Something like this - https://gist.github.com/anonymous/854f5d34e32a4c03d26c0010db3ea74c
14:44 ernescz This works with info from grains, somehow fails from another Pillar.
14:47 gtmanfred ernescz: only if the pillar.get you are using is from an ext_pillar
14:47 honestly gtmanfred: heh
14:47 gtmanfred ernescz: https://docs.saltstack.com/en/latest/ref/configuration/master.html#ext-pillar-first
14:48 honestly gtmanfred: you're moving in with saltstack? :P
14:48 gtmanfred honestly: what did you need? this is probably the last I will be online until then
14:48 honestly It was about the salt-ssh ticket
14:48 gtmanfred basically, my fiancée got a job in salt lake, so I have to move away from sunny texas
14:48 gtmanfred honestly: which one?
14:48 honestly oh I see
14:48 honestly gtmanfred: It doesn't matter now, since it's fixed
14:48 demize gtmanfred: Oi, is the Jenkins job configuration open?
14:49 gtmanfred honestly: dope :)
14:49 gtmanfred demize: it is not
14:49 honestly gtmanfred: unless it isn't, which I'll only find out once I'm back at work :P
14:49 gtmanfred :)
14:50 gtmanfred demize: it runs this command to bootstrap the server INFO: Command line: '/tmp/.saltcloud-a6af813e-ee12-497c-a4bb-399c8ad45840/deploy.sh -c /tmp/.saltcloud-a6af813e-ee12-497c-a4bb-399c8ad45840 -ZD git v2016.3.1'
14:51 demize Yeah, but what exactly does deploy.sh do, I wonder.
14:51 ernescz gtmanfred: thanks, I'll look into it :)
14:52 gtmanfred demize: it is bootstrap.saltstack.com or https://github.com/saltstack/salt-bootstrap
14:53 gtmanfred which is what I want you to take a look at :)
14:53 GMAzrael Anyone getting a 301 error with the WinRepo-ng?
14:53 gtmanfred 301 is not an error it is a redirect
14:54 demize Ah, didn't know -Z was a bootstrap arg.
14:54 demize Would be less confusing if it wasn't called deploy.sh :p
14:54 gtmanfred ¯\(°_o)/¯
14:56 ivanjaros3916 joined #salt
14:56 edrocks joined #salt
14:56 phx__ joined #salt
14:56 promorphus joined #salt
14:58 brakkisath joined #salt
15:02 demize uh... hm..
15:02 demize gtmanfred: It works in a fresh nspawn container... >.>
15:02 demize So knowing how the VM it's running in is set up would be helpful :p
15:07 demize gtmanfred: So I think your setup might be broke instead. *shrug*
15:08 promorphus joined #salt
15:08 demize And since it's linode based on the log, it wouldn't surprise me if the Linode images are bork somehow again.
15:09 demize Get me a disk image or something where it breaks and I'll look into it more though. ;)
15:10 GMAzrael gtmanfred: salt-run winrepo.update_git_repos is reporting: [ERROR   ] Failed to checkout master from winrepo remote 'https://github.com/saltstack/salt-winrepo-ng.git': remote ref does not exist
15:11 DEger joined #salt
15:11 sp0097 joined #salt
15:12 Pyro_ joined #salt
15:16 gtmanfred remove the .git? because that is what it is being redirected to
15:17 mbologna joined #salt
15:22 heaje joined #salt
15:24 mbologna joined #salt
15:25 Deevolution joined #salt
15:26 bantone joined #salt
15:30 Heartsbane joined #salt
15:30 phx joined #salt
15:35 Deevolution After moving our Salt master to new hardware, any targeting other than by name is taking a really long time.  Anyone have ideas as to where to investigate?
15:37 GMAzrael gtmanfred: editing master file didn't fix the problem
15:37 remyd1 Deevolution, you should try to reinstall your master and import the pkis
15:37 remyd1 <https://docs.saltstack.com/en/latest/topics/tutorials/multimaster.html>
15:37 Trauma joined #salt
15:40 Deevolution Does it matter that there's only a single master?
15:40 Deevolution We moved from one to the other.
15:43 remyd1 No, just backup your configuration, including the pkis, then reinstall it.
15:44 remyd1 and then, restart your minions
15:44 remyd1 but be careful with the "restart" part
15:44 remyd1 you could lose the connection
15:45 Deevolution Yeah, we've been through that.
15:45 cachedout joined #salt
15:51 aldevar left #salt
15:53 armonge joined #salt
15:54 armonge joined #salt
16:05 _JZ_ joined #salt
16:05 abednarik joined #salt
16:06 ckonstanski joined #salt
16:07 promorphus joined #salt
16:08 pipps joined #salt
16:09 leonkatz joined #salt
16:12 Pyro_ joined #salt
16:16 edrocks joined #salt
16:18 justan0theruser joined #salt
16:18 onlyanegg joined #salt
16:19 numkem joined #salt
16:21 jdipierro joined #salt
16:24 jas02 joined #salt
16:25 Pyro_ joined #salt
16:26 Pyro_ joined #salt
16:28 impi joined #salt
16:28 woodtablet joined #salt
16:29 Aikar joined #salt
16:29 Aikar joined #salt
16:30 DammitJim joined #salt
16:31 kiorky_ joined #salt
16:33 brakkisa_ joined #salt
16:38 pipps joined #salt
16:38 pipps joined #salt
16:40 promorphus joined #salt
16:43 leonkatz joined #salt
16:46 Rumbles joined #salt
16:46 abednarik joined #salt
16:50 jdipierro joined #salt
16:50 om3 joined #salt
16:54 om2 joined #salt
16:54 brakkisath joined #salt
16:56 Pyro_ joined #salt
17:01 tkharju joined #salt
17:02 tom[] joined #salt
17:02 xet7 joined #salt
17:02 Praematura joined #salt
17:05 promorphus joined #salt
17:09 candyman88 joined #salt
17:10 tom[] joined #salt
17:11 cyteen joined #salt
17:14 Pyro_ joined #salt
17:21 cyborg-one joined #salt
17:24 Klaus_D1eter_ joined #salt
17:25 ckonstanski joined #salt
17:27 DarkKnightCZ joined #salt
17:28 SaucyElf joined #salt
17:29 SaucyElf joined #salt
17:29 cscf Is there a good way to get "time since last state.apply" for a minion?  Should I just make a state that records it?
17:29 DammitJim ugh, ever since I messed around with /etc/sysctl.conf, it seems I can't communicate very well
17:30 cscf DammitJim, changes should be made in .d directories when possible. What's wrong, though?
17:31 DammitJim added this to sysctl.conf: net.ipv4.ip_nonlocal_bind=1
17:31 DammitJim and I don't see anything in the /var/log/salt/minion anymore
17:31 cscf DammitJim, ah yes, I had to do that for haproxy
17:31 DammitJim or do I have another problem?
17:31 cscf DammitJim, really? That's odd.  Not sure it's related.
17:31 DammitJim yeah, setting up haproxy
17:32 cscf well, did you try commenting that out and rebooting?
17:32 cscf fwiw, I'm running haproxy in LXC so I ended up putting that sysctl change in a post-up on the interface
17:33 cscf Our LXC containers don't seem to read /etc/sysctl.conf
17:34 DammitJim weird, it's returning now
17:35 DammitJim maybe I have network problems *sigh*
17:35 DammitJim can't get anything done!!!
17:35 cscf Does anyone know, if you use file.managed with - contents: , does jinja still work?
17:35 Tanta it does not
17:36 cscf k thanks
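Inline `- contents:` is not passed through the renderer, so if templating is needed the usual route is a `source` file rendered with `template: jinja`. A sketch with made-up paths and context:

```yaml
/etc/example.conf:
  file.managed:
    - source: salt://example/files/example.conf.jinja
    - template: jinja
    - context:
        listen_port: 8080
```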
17:36 hasues joined #salt
17:38 tiwula joined #salt
17:42 renoirb joined #salt
17:43 juntalis joined #salt
17:49 PatrolDoom joined #salt
17:51 tercenya joined #salt
17:53 wendall911 joined #salt
17:55 promorphus joined #salt
17:55 it_dude joined #salt
17:57 felskrone joined #salt
17:57 nixjdm joined #salt
17:58 promorphus joined #salt
17:59 Inveracity joined #salt
18:02 ChubYann joined #salt
18:09 ssplatt joined #salt
18:10 moloney joined #salt
18:11 moloney I have an external pillar where I would like to expand a "role" to a list of minions. I specify roles in pillar data and normally I would use "mine.get" to do this, but that doesn't work in an external pillar. How should I do this?
18:14 moloney I guess I could build a salt.client.LocalClient and then do something like "test.ping" and target based on the pillar I want.  Is that the best way?
18:17 prg3 joined #salt
18:19 woodtablet hello
18:19 whytewolf moloney: which ext_pillar is it?
18:20 whytewolf hello woodtablet
18:20 moloney whytewolf: Custom one I am writing
18:22 whytewolf moloney: oh fun. well you COULD build in the salt jinja into your cutom pillar to allow you to "mine.get" [btw works fine in git_pillar just have to use the salt runner version]
18:22 whytewolf or. use mine.get in a runner
18:23 whytewolf https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.mine.html
18:23 whytewolf in your pillar function i mean
18:23 moloney whytewolf:  Just noticed the "mine" runner.  How would I call that from the pillar function?  Are runner available in __salt__
18:24 jdipierro joined #salt
18:25 woodtablet Hello All, I am looking for suggestions. I am looking for an alternate pillar idea for individual minions. i have formulas and salt states, where i want some minions to have certain pillar data (for example ssl certs). currently i am using file_tree. (https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.file_tree.html ) this works, but i have to make individual files for every freaking key. i think the reason i went with file_tree
18:26 whytewolf i believe they are. if not you can use saltutil.runner to call it
18:26 moloney whytewolf: awesome, thanks for the help
18:27 whytewolf woodtablet: you cut off after "i think the reason i went with file_tree i"
18:27 woodtablet oh bah
18:27 whytewolf 255 char limits :P
18:27 woodtablet RE: i think the reason i went with file_tree initially was to give pillar data to certain hosts, while sometimes no specific pillar data for other hosts (they use the default pillars). I do not want a database for an ext_pillar. Suggestions ?
18:28 woodtablet thanks whyte
18:28 DEger joined #salt
18:28 whytewolf why not target minion in a top file?
18:29 woodtablet whytewolf: what do you mean ?  i have a abunch of apache servers, and i want to give them individual pillar variables
18:30 whytewolf 'base': 'apache01': - webconfigs.common - webconfigs.apache01
18:30 woodtablet whytewolf: hmm
18:31 woodtablet whytewolf: is that a state file or a pillar ?
18:31 whytewolf pillar
18:31 whytewolf https://docs.saltstack.com/en/latest/topics/pillar/
18:31 woodtablet whytewolf: hmm.. ok i ll try that
18:31 whytewolf pillar has it's own top file
18:31 woodtablet whytewolf: ok i ll look at that and try again, tahnks whytewolf
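whytewolf's one-line example above, spelled out as a pillar top file (minion ID and pillar file names are illustrative):

```yaml
# /srv/pillar/top.sls
base:
  'apache01':
    - webconfigs.common
    - webconfigs.apache01
```

Each minion only receives the pillar files it is targeted with here, so per-host data like certs can go in a per-minion file.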
18:32 ameneau joined #salt
18:34 ecdhe I have 10 states.  If any of them changes, I need to reload a daemon, so I'll add a state that reloads the daemon with 10 onchanges_in states.  However, if more than one of the 10 states changes, I only need to reload the daemon once.  Is this default behavior or should I use an "order" attribute?
18:35 whytewolf that is the default behavour.
18:35 pipps joined #salt
18:35 jas02 joined #salt
18:36 whytewolf it will check all 10 before the onchanges[_in] runs
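That pattern as a sketch — one reload state watching several config states; the state IDs and daemon name are placeholders:

```yaml
reload-mydaemon:
  cmd.run:
    - name: systemctl reload mydaemon
    - onchanges:
      - file: mydaemon-config-one
      - file: mydaemon-config-two
      # ...one entry per watched state; even if several of them
      # change in the same run, the reload fires only once
```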
18:41 leonkatz joined #salt
18:41 ecdhe thanks whytewolf.  I've been prototyping an OpenBSD router, it's actually fun.  Step 1: manually configure to taste.  Step 2: get configuration under control with salt.
18:41 Roelt_ joined #salt
18:41 feld joined #salt
18:41 Inveracity joined #salt
18:42 aarontc joined #salt
18:43 DEger joined #salt
18:44 moloney whytewolf: I got it working. Ended up doing "from salt.runner import RunnerClient" then create a runner inside the external pillar function "runner = RunnerClient(__opts__)" and finally I can do "runner.cmd('mine.get', ...)"
18:44 whytewolf nice
18:45 DEger joined #salt
18:49 edrocks joined #salt
18:49 nikdatrix joined #salt
18:50 ahrs joined #salt
18:51 jdipierro joined #salt
18:55 prg3 joined #salt
18:59 jdipierro joined #salt
19:01 jdipierro joined #salt
19:07 nidr0x joined #salt
19:08 jdipierro joined #salt
19:08 ivanjaros joined #salt
19:18 pipps joined #salt
19:24 keldwud joined #salt
19:24 keldwud joined #salt
19:28 toanju joined #salt
19:29 edrocks joined #salt
19:33 promorphus joined #salt
19:35 DarkKnightCZ joined #salt
19:38 DanniZqo joined #salt
19:39 lorengordon joined #salt
19:43 Flying_Panda joined #salt
19:43 Drunken_Panda joined #salt
19:44 Drunken_Panda hey guys what requisite can I use to only run if another state was successful ? I thought of require_in but that seems to run even if the state returns non-zero
19:45 whytewolf Drunken_Panda: are you looking for onchanges?
19:45 Drunken_Panda Would that pick up changes to a docker image ?
19:46 whytewolf require runs if a state returns true or changed. onchanges runs only if a state changed
19:46 gtmanfred it would if the docker image gets rebuilt.
19:46 Drunken_Panda if it tries but fails would it still run
19:46 gtmanfred no
19:46 Drunken_Panda ahh cool
19:46 gtmanfred if you use requires and onchanges definitely
19:46 Drunken_Panda thought it would count the failure as a change
19:47 whytewolf but fail isn't change fail is fail
19:47 gtmanfred right, pretty sure that require isn't needed
19:47 Drunken_Panda would it be onchanges_in: state im watching
19:47 Drunken_Panda or just onchanges ?
19:48 whytewolf onchanges_in would but an onchanges into the state listed
19:48 whytewolf s/but/put
19:49 gtmanfred onchanges is the direction as require
19:50 jas02 joined #salt
19:50 whytewolf thought you were packing gtmanfred
19:50 whytewolf :P
19:52 DammitJim so, I just did a: sudo salt -v -l trace <minion> test.ping
19:53 DammitJim and I get a bunch of [TRACE   ] _get_event() waited 0 seconds and received nothing
19:53 DammitJim then finally Minion did not return
19:53 Drunken_Panda onchanges_in still seems to run even though the state gets non-zero, will post a code snippet :p
19:53 DammitJim the minion on the other hand prints to the log: 2017-04-03 15:51:05,055 [salt.transport.zeromq][DEBUG   ][1930] SaltReqTimeoutError, retrying. (2/3)
19:54 Drunken_Panda https://gist.github.com/DrunkenAngel/136a8159e0b0f7963f275fa29b869029
19:55 whytewolf Drunken_Panda: sooo, you want image_build to only run if the cmd.run returns that it changed? [which will be any time it runs with out error]
19:56 Drunken_Panda only want the cmd.run to run if image_build ran
19:56 Drunken_Panda with no errors
19:56 whytewolf drop the _in
19:57 whytewolf also that should be - docker: Image_build
19:57 whytewolf err - dockerng: Image_build
19:57 Drunken_Panda for onfail or for both ?
19:57 whytewolf both
19:58 whytewolf the _in means hey put this status on the other state so that it is watching this state
19:58 cscf Somebody here showed me how to use some salt functions to check if the host is in a given subnet.  Anyone know how?
19:58 Drunken_Panda ahh so its inverse
19:58 Drunken_Panda not failure in : blah
19:59 Drunken_Panda Perfect thank you seems to have done it :P
20:00 whytewolf cscf: network.in_subnet
20:00 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.network.html#salt.modules.network.in_subnet
20:00 cscf whytewolf, thanks!
20:00 cscf So useful!
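In an SLS or template, that check looks something like this (the subnet and repo definition are invented for illustration):

```jinja
{% if salt['network.in_subnet']('10.0.0.0/8') %}
internal-mirror:
  pkgrepo.managed:
    - name: deb http://mirror.internal/debian stretch main
{% endif %}
```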
20:00 DammitJim is it normal to have to set the timeout in /etc/salt/master to 300?
20:00 Drunken_Panda I always find it amazing how I get more value out of this IRC than enterprise support for other products :p
20:01 abednarik joined #salt
20:03 whytewolf lol, we try :)
20:03 whytewolf DammitJim: normal? no.
20:03 whytewolf but if network latency is an issue it could be
20:04 cscf I raised it to 15, plenty so far
20:05 cscf whytewolf, do you have any suggestions for tracking how long it's been since a minion got a full state.apply ?
20:06 Drunken_Panda cscf im lazy so have it date stamp the motd when it runs a highstate :P
20:07 cscf Drunken_Panda, I slapped something together with cmd.run that wrote to /var/log/salt/track_state_apply, and was applied directly from top.sls, so it wouldn't get triggered by running partial applies.  But it's tricky to compare times
20:07 DammitJim DAMMIT
20:07 cscf I'd like to, for example, warn when it's been more than a few days
20:08 cscf I'd prefer not to script any more date-parsing than needed
20:08 Drunken_Panda pretty sure there's a util I think ... 1 mo pulling out the good ol salt-training manual
20:09 Drunken_angel joined #salt
20:10 abednarik joined #salt
20:10 Drunken_Panda na not that I see you could just grab mtime of /var/cache/salt/minion
20:10 cscf Drunken_Panda, will every state run update that?
20:11 Drunken_Panda seems to be a mike place answer on google groups https://groups.google.com/forum/#!topic/salt-users/1OIJZbKw9DI
20:11 whytewolf cscf: you could sniff the job cache. but honestly a better setup would be to touch a file at the end of every run and grab the time of it
20:11 cscf whytewolf, yeah, I was trying a cmd.run "date -I > /var/log/salt/track_state_apply "
20:11 pipps joined #salt
20:12 Drunken_Panda I personally just jinja salt[cmd.run](DATE) into motd which is set to apply its state last
20:12 whytewolf I would just use https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.touch
20:12 cscf But then I want a command that can be run on the master, that can parse the dates and tell you what's old
20:12 Drunken_Panda always gets updated
20:12 whytewolf since it would update the time the last touch on the file was
20:12 whytewolf works well in non windows enviroments
20:13 cscf Yeah, that would work too
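The file.touch suggestion as a state — touch a marker file at the end of every run (the path is arbitrary):

```yaml
state-run-marker:
  file.touch:
    - name: /var/cache/salt/last_state_apply
    - order: last
```

The file's mtime then records when the last full run finished; `order: last` keeps it from firing before the rest of the highstate.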
20:13 Drunken_Panda Could you not write a beacon for that for the minion to notify the master ( no update in 24hrs) then a reactor to apply /send emil
20:13 Pyro_ joined #salt
20:13 cscf Now what's an easy way for the salt master to report any minions that have a time older than x?
20:14 whytewolf there really isn't one
20:14 cscf Sounds like a runner, right?
20:14 sjorge joined #salt
20:14 whytewolf well would need to get that info to the master on a regular schedule first
20:15 cscf mine?
20:16 cscf Also, is there a way I could get the file.touch to not report changed?
20:17 cscf Since it's really not a change I want to have pointed out all the time.  Minor importance, tho.
20:19 whytewolf well. can't get the file.touched to not report a change cause it is a change it changed the time.
20:19 cscf Yeah, I guess
20:20 cscf But it's not changing the system, so it'd be nice if I could ignore it
20:20 whytewolf also you could have the file.touch fire an event and have something on the master that listens for that event
20:21 woodtablet whytewolf: hey, i am back (trying to assign pillars to minions), and i ll wait for you to finish.
20:21 whytewolf hey woodtablet welcome back
20:21 woodtablet ^o^
20:22 DammitJim what the uggggghhhhhh
20:22 DammitJim so frustrating when your minions aren't responding to the master!
20:23 denys joined #salt
20:24 whytewolf cscf: doh, can't belive i didn't think of this before. a custom grain
20:26 cscf whytewolf, oh ok I'll look into those
20:27 tkojames joined #salt
20:27 whytewolf staterun grainstamp: grains.present: - name: last_run_start: {{salt.system.get_system_date_time()}}
20:27 whytewolf or something like that
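Unflattened, that one-liner comes out roughly as below (grain name and state ID are arbitrary; `system.get_system_date_time` may not be available on every platform):

```yaml
record-last-state-run:
  grains.present:
    - name: last_state_run
    - value: '{{ salt['system.get_system_date_time']() }}'
```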
20:30 whytewolf oh lovely, thundar clouds
20:36 cscf I'm trying to replace a symlink , /etc/resolv.conf -> /run/resolvconf/resolv.conf, with a plain file.managed.  But if I remove resolvconf, then it's a broken link, and file.managed won't overwrite it
20:37 cscf ah, follow_symlinks: False
20:37 whytewolf yeah was just looking that up cause i knew there was a way to not use symlinks just couldn't rember the name of it off the top of my head
20:38 cscf worked great.  File.managed is a very nice state
20:38 cscf Then again, it's like 1/3 of Salt's usage XD
20:40 whytewolf well, depends on the company. could be 1/3 for some users 1/1 for some, and 1/10 for others that dig deep into the salt for different things.
20:41 jdipierro joined #salt
20:41 cscf So, to make this custom grain, I make a python script in file_roots/_grains, that returns a dict, right?
20:41 cscf "If compiling the information requires that system commands be run, then putting this information in an execution module is likely a better idea."
20:41 whytewolf sorry you can go that route. i was just saying a state that creates a grain that gets updated anytime you run a state run
20:42 cscf by "system commands" they mean running a shell?
20:42 whytewolf useing salt.state.grains.present
20:42 onlyanegg joined #salt
20:42 cscf whatever you think is best
20:42 whytewolf simpler is better
20:43 cscf Well, which is simpler? lol
20:44 whytewolf lol, i would say a state would be simpler. no need for custom python code
20:44 whytewolf just jinja to get the time
20:46 cscf whytewolf, a state that does what, sorry?
20:46 cscf fires an event?
20:46 whytewolf no, just updates a grain
20:46 nikdatrix joined #salt
20:46 whytewolf since you can target grains
20:47 whytewolf and search them
20:47 cscf ah ok.  So a state applied to '*' that sets the grain "last_state_run" to $DATE ?
20:47 whytewolf although i can still see some problems with targeting
20:47 whytewolf pretty much yet
20:47 whytewolf yes
20:47 cscf yeah, you can't really target "last_state_run" < yesterday, can you?
20:48 cscf no date compares
20:48 whytewolf no, but you can at least get the grains and do some clever bash to generate a list of minions
20:48 cscf perhaps just use jinja to set a sort of "fresh" "old" binary, then you could target
20:49 cscf well, no\
20:49 cscf not if you're only setting it on a state run
20:49 whytewolf you know you could bypass this whole thing. and just setup a schedule to run highstate every now and then so it always is fresh :P
20:50 cscf whytewolf, yeah, but I'm not fond of automatic foot-pistols
20:50 tapoxi joined #salt
20:50 whytewolf yeah. but i'm not fond of hackery
20:50 whytewolf either
20:51 cscf fair
20:52 woodtablet so many good giphs for that too
20:52 woodtablet (shooting foot)
20:54 abednarik joined #salt
20:54 woodtablet whytewolf: so i put my pillar there where you suggested mostly. i think i am doing roles right. 'I@roles:apache-server': - match: compound - apache
20:54 jas02 joined #salt
20:55 whytewolf ohhh, your roles are in pillar?
20:55 woodtablet and the apache pillar looks like this: include: - webconfigs.minion1
20:55 woodtablet whytewolf: yes, in an external pillar (stacks)
20:55 djsly joined #salt
20:56 whytewolf did not know that... here is the issue. pillar can not use pillar
20:56 woodtablet whytewolf: but all the minions are getting the pillar, not just minion1
20:56 djsly hey guys, hopefully someone might be able to help.
20:56 djsly we are having issues running salt-ssh, we are getting `State 'grains.present' was not found in SLS 'salt.minion'`
20:57 djsly if we run the same state using salt
20:57 djsly the state `grains.present` is found...
20:58 prg3 joined #salt
20:58 woodtablet whytewolf: how should i do this if i dont want all the pillars on all the minions? i was thinking that when i get hundreds of nodes the pillar would be just stupid big
21:00 whytewolf woodtablet: it will be, you might be better off switching to nodegroups. honestly I have never tried them for targeting in pillar. but i do know one thing you can not do in pillar is use pillar for anything. since it becomes a recursive mess. although it should be erroring not giving it everywhere.
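[editor's note] The constraint whytewolf describes is that the pillar top file is evaluated while pillar is still being compiled, so an `I@` (pillar) match there is self-referential. Matching on grains or nodegroups works instead. A sketch, with all names illustrative except `apache` from the conversation:

```yaml
# pillar/top.sls (sketch): compound matches here can use grains (G@) or
# nodegroups (N@), but not pillar data (I@), since pillar does not exist
# yet at this point.
base:
  'G@roles:apache-server':
    - match: compound
    - apache
  'N@webservers':            # nodegroup defined in the master config
    - match: nodegroup
    - webconfigs
```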
21:00 onlyanegg joined #salt
21:01 cscf nodegroups are good.  Only downside is salt-master reload.
21:01 keldwud joined #salt
21:01 haam3r Hi! Quick question: can you schedule an orchestrate run to happen at a later time from a reactor?
21:01 whytewolf yeah, that reload is why i often hesitate to mention them.
21:01 cscf I manage mine from the states dir, and file.managed into /etc/salt/master.d/nodegroups.conf
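[editor's note] A sketch of cscf's approach, so the nodegroup definitions live in git alongside the states; source path and ownership are illustrative:

```yaml
# Ship nodegroup definitions into master.d from the salt fileserver.
# Note the master must still be restarted to pick up changes, which is
# the downside mentioned above.
/etc/salt/master.d/nodegroups.conf:
  file.managed:
    - source: salt://master/files/nodegroups.conf
    - user: root
    - group: root
    - mode: 644
```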
21:04 whytewolf haam3r: you mean like set off an at job that will run the orchestration??? [which actually is what i would suggest]
21:04 whytewolf cscf: good practice. :)
21:05 keltim joined #salt
21:05 haam3r whytewolf: yeah that should work too...know of any docs for that?
21:05 cscf whytewolf, it keeps it all in git, and also lets me use a jinja shortcut for our domain
21:05 djsly so if I run this `salt "host1" state.sls salt.minion test=True` I get this Grain salt_managed_pkgs:salt-minion is set to be added from the `grains.present` state. but when I run `salt-ssh "host1" state.sls salt.minion test=True` the state looks like it is missing `State 'grains.present' was not found in SLS 'salt.minion'`
21:06 woodtablet cscf and whytewolf: how would switching to nodegroups help? wont all the nodes in the nodegroup get the pillar data for the whole nodegroup ?
21:06 djsly this is a default salt state... both salt and salt-ssh are running the same version `2016.11.1`
21:06 cscf woodtablet, I thought you wanted to be able to divide your minions into roles?
21:06 whytewolf woodtablet: yes. what exactly is your end goal? if you want to target 1 minion in a list of minions you HAVE to target 1 minion
21:06 cscf woodtablet, a "apache" nodegroup would do that.
21:07 woodtablet cscf: i wanted the pillar data for one machine to hopefully show up only on that one machine
21:08 whytewolf woodtablet: then you need to target that 1 minion. there is no getting around that
21:08 cscf woodtablet, you *could* use this to help, though: https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.file_tree.html#module-salt.pillar.file_tree
21:08 cscf It may or may not be useful
21:08 pipps_ joined #salt
21:08 whytewolf he was using that
21:08 s_kunk joined #salt
21:08 woodtablet whytewolf: how do i do that then in a sane way? if i have 100 web servers, how do i get each host its ssl cert data ?
21:09 woodtablet cscf: i am using it, but it's a bit of a pain because each key has to be in its own separate file
21:10 zzzirk joined #salt
21:13 whytewolf file_tree would most likely be the easiest. or mix pillar and file_tree. use pillar for common data and file_tree for data that needs to go to each minion.
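[editor's note] The file_tree setup being discussed looks roughly like this, per the linked docs: one file per pillar key under a per-minion directory, so data only reaches the minion whose id matches. All paths and key names below are illustrative:

```
# master config
ext_pillar:
  - file_tree:
      root_dir: /srv/pillar/file_tree

# layout: hosts/<minion_id>/<key> -- the file's contents become the value
# of pillar[<key>] on that minion only
/srv/pillar/file_tree/
  hosts/
    minion1/
      ssl_cert      # -> pillar['ssl_cert'] on minion1 only
      ssl_key
```

This is also why "each key has to be in its own separate file": the filename is the pillar key.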
21:14 woodtablet whytewolf and cscf: so i am already using stacks for my roles, what about if i use it for minion specific pillars ? see minion_id ? https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.stack.html#module-salt.pillar.stack
21:14 woodtablet whytewolf: ok, that what it looks like to me too, thanks for confirming ^_~
21:16 whytewolf the real easiest answer would be to switch to a database-driven pillar. but i do remember you saying you didn't want to go that route
21:16 woodtablet =D
21:18 woodtablet whytewolf: but.. just in case, can you link me to database pillar ? i ll read it and see
21:20 whytewolf not sure which db you would rather read. so here is a few
21:20 whytewolf https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.mysql.html#module-salt.pillar.mysql
21:20 whytewolf https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.pillar_ldap.html#module-salt.pillar.pillar_ldap
21:20 whytewolf https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.sql_base.html#module-salt.pillar.sql_base
21:20 whytewolf https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.sqlite3.html#module-salt.pillar.sqlite3
21:21 whytewolf https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.vault.html#module-salt.pillar.vault
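[editor's note] For a flavor of what the database route looks like, a sketch based on the sql_base-family docs linked above; the table and column names are illustrative, and the linked pages should be consulted for the exact config shape of each driver:

```yaml
# master config sketch: each row keyed by minion id becomes pillar data
# for that minion.
ext_pillar:
  - mysql:
      fromdb:
        query: 'SELECT pillar, value FROM pillars WHERE minion_id = %s'
```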
21:21 haam3r whytewolf: can you point me at any docs or examples about setting off an at job to run the orch?
21:22 whytewolf haam3r: unfortunately not
21:22 woodtablet thanks whytewolf ^_~
21:22 leonkatz joined #salt
21:23 whytewolf haam3r: closest i can get is an example of using at to restart saltstack when trying to update saltstack from within saltstack
21:25 whytewolf wait ... omg they changed the example in the faq away from using at
21:25 hemebond :-O
21:25 haam3r whytewolf: ack...i found docs for state.at and modules.at, I think I can cook something up from those
21:25 haam3r whytewolf: okey...what example?
21:26 ekristen joined #salt
21:26 whytewolf haam3r: it's gone. the restart-using-states example used to have a cmd.run that would pop a command into at that would run 1 min after the state ran
21:27 whytewolf but somewhere in the last couple of weeks since i last looked they changed it to a nohup which is very different
21:27 ekristen for a syndic master to work, do I have to put all the gitfs and pillar gitfs configs on the syndic masters as well or just the top level master need it?
21:28 hemebond https://docs.saltstack.com/en/2016.3/faq.html#what-is-the-best-way-to-restart-a-salt-daemon-using-salt
21:28 whytewolf ahhh they still have it in the 2016.3 example ...
21:28 whytewolf yeah
21:29 haam3r whytewolf: I've been using this: https://gist.github.com/haam3r/e85a9cba677a79cf46ce9a9405090c86
21:29 whytewolf which is the new method
21:30 haam3r yeah noticed it now in hemebond -s link
21:30 whytewolf and is also off topic as we were talking about how to schedule a single orchestrate run for x time after a reactor fires off.
21:31 haam3r true that :D
21:31 whytewolf lol
21:32 whytewolf anyway. i would suggest the at method. I'm unsure if the scheduler can handle single runs yet.
21:33 haam3r there was an option for scheduler to schedule a job to run only once...it's more of a question of putting it inside a reactor sls....but I'll try out the at version and see where it takes me
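[editor's note] A sketch of the at-based approach being suggested, using the old-style reactor syntax current in 2016.x. It assumes a minion running on the master (targeted here as 'saltmaster') with at(1) installed; the file name, target, orchestration name, and delay are all illustrative:

```yaml
# /srv/reactor/delayed_orch.sls (hypothetical): when the reactor fires,
# queue the orchestration via at(1) so it runs later rather than
# immediately in the reactor.
queue_orchestrate:
  local.cmd.run:
    - tgt: 'saltmaster'
    - arg:
      - "echo 'salt-run state.orchestrate orch.my_runbook' | at now + 5 minutes"
```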
21:33 pipps joined #salt
21:34 haam3r thanks for the tips whytewolf
21:34 whytewolf no problem :)
21:35 haam3r getting late in my part of the world so...laters :)
21:36 ProT-0-TypE joined #salt
21:43 q1x joined #salt
21:44 Tanta joined #salt
21:44 censorshipwreck joined #salt
21:46 DanyC joined #salt
21:54 prg3 joined #salt
21:56 pipps joined #salt
21:59 pipps joined #salt
22:01 tom29739 joined #salt
22:03 jas02 joined #salt
22:19 cliluw joined #salt
22:23 mschiff joined #salt
22:23 mschiff joined #salt
22:27 mikecmpbll joined #salt
22:28 jdipierro joined #salt
22:35 onmeac joined #salt
22:40 hasues left #salt
22:43 renoirb joined #salt
22:48 dendazen joined #salt
22:49 pipps99 joined #salt
22:51 ckonstanski left #salt
23:05 DEger joined #salt
23:05 justanotheruser joined #salt
23:07 jas02 joined #salt
23:10 icebal joined #salt
23:19 DEger joined #salt
23:22 q1x joined #salt
23:24 prg3 joined #salt
23:26 Guest93 joined #salt
23:27 leonkatz anyone run cloud.map_run from orchestration
23:28 leonkatz i'm getting map_run() takes exactly 1 argument (0 given)
23:28 leonkatz but i'm passing an arg: /path/to/file
23:32 nikdatrix joined #salt
23:35 whytewolf cloud.map_run is a runner, right? salt.runner doesn't have an arg or kwarg option. it instead takes **kwargs
23:35 whytewolf so in theory it should be something like https://gist.github.com/whytewolf/2a5b585efaca83e14bbf1b7b8b980e43
23:39 DEger joined #salt
23:46 leonkatz let me try that i also wanted to pass in parallel: true
23:49 icebal joined #salt
23:51 leonkatz that worked thank you whytewolf
23:51 whytewolf no problem :)
23:51 leonkatz is there somewhere that i can look that up to know so i can understand for next time
23:51 leonkatz about runners
23:53 whytewolf well orchestration uses this state module https://docs.saltstack.com/en/latest/ref/states/all/salt.states.saltmod.html
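[editor's note] For the record, the working shape whytewolf's gist points at is roughly this: in an orchestrate SLS, salt.runner forwards any extra keys as **kwargs to the runner. The map path is the placeholder from the conversation, and the kwarg name `path` is an assumption based on the "takes exactly 1 argument" error:

```yaml
# orchestrate sketch: pass runner arguments as plain keys, not arg/kwarg.
run_cloud_map:
  salt.runner:
    - name: cloud.map_run
    - path: /path/to/file
    - parallel: True
```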