
IRC log for #salt, 2018-04-27


All times shown according to UTC.

Time Nick Message
00:00 onslack joined #salt
00:20 eseyman joined #salt
00:20 masuberu joined #salt
00:26 renoirb joined #salt
00:29 jhujhiti osrelease grain on freebsd minions is "proxy" as of 2018.3.0?
00:29 jhujhiti seems like kind of a critical bug
00:46 masber joined #salt
00:48 cgiroua joined #salt
01:27 masuberu joined #salt
01:54 stooj joined #salt
01:56 ilbot3 joined #salt
01:56 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2017.7.5, 2018.3.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
02:01 shiranaihito joined #salt
02:04 bigjazzsound joined #salt
02:28 edrocks joined #salt
02:44 pauldalewilliams joined #salt
03:13 xet7 joined #salt
03:19 chowmeined joined #salt
03:30 portunus joined #salt
03:30 portunus left #salt
03:45 dendazen joined #salt
03:59 dendazen joined #salt
04:39 sh123124213 joined #salt
04:58 xMopx had an odd issue with salt today
04:58 xMopx for ages ive had a schedule entry to run highstate every 15 mins
04:59 xMopx one day we overloaded the master because of nooby/dumb api usage, and had to restart the master
04:59 xMopx metrics-based monitoring suddenly started recording that highstate results were being returned every 10 minutes
05:00 MTecknology that's odd indeed.. if anything, I'd expect it to fall behind or get wedged
05:01 xMopx the minion log showed the schedule working correctly. but on the event bus returns came at the 10 minute interval
05:01 xMopx coincidentally, the systems these minions were on were being upgraded this week
05:02 xMopx like, `apt-get update -y` (and later $product upgrade)
05:02 xMopx the minion version didnt change but i didnt look at like, common salt libs
05:02 xMopx either way it was all fucked until i upgraded the master
05:03 xMopx I really don't understand what the exact issue was, but the newer master seemed to solve it. Despite the version number never being less than that of any minion
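05:03 * For reference, a minion-config schedule entry like the one xMopx describes (highstate every 15 minutes) would look roughly like this sketch — the job name and file path are illustrative, not from the log:

```yaml
# /etc/salt/minion.d/schedule.conf -- minimal sketch of a recurring highstate
schedule:
  highstate_every_15m:
    function: state.apply   # run a full highstate
    minutes: 15             # interval between runs
```

The event-bus returns monitored here would normally line up with this interval, which is what made the observed 10-minute cadence surprising.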
05:04 AssPirate joined #salt
05:06 J0hnSteel joined #salt
05:20 masuberu joined #salt
05:32 xet7 joined #salt
05:37 sauvin joined #salt
05:44 eseyman joined #salt
05:45 toanju joined #salt
05:46 toanju joined #salt
05:47 masber joined #salt
05:48 Hybrid joined #salt
05:54 armyriad joined #salt
05:54 VR-Jack3-H joined #salt
06:03 c4rc4s joined #salt
06:06 pcdummy joined #salt
06:14 onovy joined #salt
06:26 chowmein__ joined #salt
06:27 GrisKo joined #salt
06:30 sauvin_ joined #salt
06:40 cbosdonnat joined #salt
06:40 xet7 joined #salt
06:50 briner joined #salt
07:06 toanju joined #salt
07:10 cablekev1n joined #salt
07:10 Hybrid joined #salt
07:23 inad922 joined #salt
07:23 xet7 joined #salt
07:29 Elsmorian joined #salt
07:30 alexlist joined #salt
07:35 alexlist joined #salt
07:36 Pjusur joined #salt
07:40 hiroshi joined #salt
07:44 xet7 joined #salt
07:46 Ricardo1000 joined #salt
07:52 jrenner joined #salt
07:56 DanyC joined #salt
08:03 exarkun joined #salt
08:15 briner joined #salt
08:17 xet7 joined #salt
08:22 cyp3d joined #salt
08:33 Elsmorian joined #salt
08:47 alexlist joined #salt
08:59 xet7 joined #salt
09:15 inad922 joined #salt
09:18 Waples_ joined #salt
09:30 rollniak joined #salt
09:43 inad922 joined #salt
09:50 exarkun joined #salt
10:06 stooj joined #salt
10:24 jerematic joined #salt
10:35 DanyC joined #salt
10:37 DanyC_ joined #salt
10:50 colegatron joined #salt
10:50 colegatron left #salt
11:16 Elsmoria_ joined #salt
11:19 v12aml joined #salt
11:27 Elsmorian joined #salt
11:48 dendazen joined #salt
12:04 gladia2r joined #salt
12:09 mchlumsky joined #salt
12:10 gladia2r hi - trying to use check_cmd under file.managed for a simple check if file exists, like:  - check_cmd: "/usr/bin/test -f /some/path"
12:11 gladia2r however salt seems to add some extra tmp file there and test fails
12:12 gladia2r https://gist.github.com/gladia2r/77736d0e7df2914466632ee2b3d1c278
12:14 Ricardo1000 joined #salt
12:16 hemebond gladia2r: have you pasted your state somewhere?
12:17 mchlumsky joined #salt
12:18 miruoy joined #salt
12:19 gladia2r @hemebond: just updated the same gist with the state as well | not sure why there is that "/tmp/__salt.tmp.BkMgju"
12:20 hemebond What are you trying to accomplish with check_cmd?
12:21 gladia2r well in short I want a file.managed only if some other file exists, so I'm trying to check an existence of a file with check_cmd
12:21 hemebond If you're just trying to not run the state if the file exists, then you want to use "unless" or "onlyif"
12:21 hemebond check_cmd "The specified command will be run with an appended argument of a temporary file containing the new managed contents."
12:21 hemebond So it's there to allow you to customise the comparison.
12:22 hemebond You want "unless" or "onlyif"
12:22 gladia2r great, but "unless" or "onlyif" works with file.managed? i'm using it with cmd.run's
12:23 hemebond https://docs.saltstack.com/en/latest/ref/states/requisites.html
12:23 hemebond They're global
12:25 DammitJim joined #salt
12:28 gladia2r thanks @hemebond, works like a charm -
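12:28 * The fix hemebond describes — gating file.managed on another file's existence with the global `onlyif` requisite rather than `check_cmd` (which appends a temp-file argument) — looks roughly like this; paths and state ID are placeholders:

```yaml
# sketch: only manage the file if /some/path already exists on the minion
managed-file:
  file.managed:
    - name: /etc/example.conf
    - source: salt://example/example.conf
    - onlyif: /usr/bin/test -f /some/path
```

`unless` works the same way with the condition inverted; both are global requisites, so they apply to any state module, not just cmd.run.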
12:29 colttt joined #salt
12:30 Nahual joined #salt
12:30 hemebond 👍
12:42 sjorge joined #salt
12:45 briner joined #salt
12:52 AngryJohnnie joined #salt
12:57 Ricardo1000 joined #salt
13:07 DammitJim joined #salt
13:15 sjorge joined #salt
13:18 Hybrid joined #salt
13:18 ooboyle joined #salt
13:19 miruoy joined #salt
13:24 cgiroua joined #salt
13:28 ooboyle_ joined #salt
13:31 racooper joined #salt
13:34 nixjdm joined #salt
13:36 sjorge joined #salt
13:37 AngryJohnnie joined #salt
13:42 ooboyle_ joined #salt
13:43 ooboyle_ left #salt
13:44 ooboyle joined #salt
13:45 ooboyle_ joined #salt
13:45 gh34 joined #salt
13:47 briner joined #salt
13:51 edrocks joined #salt
13:54 ooboyle l
14:00 Hybrid joined #salt
14:02 sjorge joined #salt
14:06 colttt_ joined #salt
14:15 gswallow joined #salt
14:20 stooj joined #salt
14:40 Ricardo1000 joined #salt
14:45 tiwula joined #salt
14:47 Elsmoria_ joined #salt
14:51 onslack <ryan.walder> anyone run into `reload_grains` not reloading grains properly?
14:52 zer0def you mean `sync_grains`?
14:52 onslack <ryan.walder> nope, `reload_grains` as per <https://docs.saltstack.com/en/2015.8/ref/states/index.html#reloading-modules>
14:53 onslack <ryan.walder> basically, installing zfs, once installed the zfs grain does some detection, then the zfs module/state checks if the grain is there, if not it errors out
14:54 zer0def that's *modules*, not *grains* and i think you can bolt on the `refresh_modules` boolean kwarg to just about any state
14:54 onslack <ryan.walder> it sets the grain to `true` but doesn't see it until a 2nd `state.apply` even when using `reload_grains`
14:54 onslack <ryan.walder> will check thanks
14:54 zer0def definitely works with `pkg.latest` and `pkg.installed`
14:54 onslack <ryan.walder> oh wait, I have that
14:55 onslack <ryan.walder> will try a different order
14:55 zer0def also, not `refresh_modules`, but `reload_modules`
14:55 onslack <ryan.walder> grains then modules
14:56 onslack <ryan.walder> yeah, i had the top, now trying the bottom. <https://hastebin.com/umunoruguh.vbs>
14:56 onslack <ryan.walder> once vagrant remakes the vm...
14:57 onslack <ryan.walder> hmm, nope
14:58 Elsmorian joined #salt
14:58 onslack <ryan.walder> wait, does the `reload_x` apply before the thing it's attached to?
14:58 onslack <ryan.walder> in my case reloading before installing zfs?
15:00 zer0def you should attach it to pkg.installed or pkg.latest, since after that state zfs modules *should* be available
15:01 onslack <ryan.walder> yeah that's what I thought
15:01 zer0def you can do it later, but that's the earliest point you can rely on
15:01 onslack <ryan.walder> i'm sticking in a dummy cmd.watch
15:02 onslack <ryan.walder> and attaching them to it, see if that works
15:08 peters-tx Has anyone seen "salt -d" error out on their system?  https://gist.github.com/PeterS242/41f00de7f2a328478849ef2cb54330c4
15:11 zer0def does it crap out when you add targeting?
15:13 peters-tx zer0def, Well, I've always done just "salt -d"; although if I add a target it errors out still, same error message
15:14 peters-tx I'll dig in Issues and see if it's already there, just wanted to see if the channel gets the same error
15:15 zer0def i'd hang around for a bit longer, perhaps someone's capable of being more helpful, as far as i can tell `ret[host]` is somehow a string, not a dict
15:16 aMaZing0x41 joined #salt
15:24 peters-tx zer0def, interestingly "salt-call -d" works fine
15:25 onslack <ryan.walder> zer0def: I think it's an issue with the zfs grain/module itself basically making it unable to be reloaded in a state run
15:25 dxiri joined #salt
15:27 peters-tx zer0def, this issue illustrates my problem (and is closed / cannot reproduce) https://github.com/saltstack/salt/issues/31481
15:28 peters-tx Anyways, odd.  I guess I'll just move on
15:31 dimeshake ssc.saltstack.com still down :(
15:31 Elsmorian joined #salt
15:31 cwright joined #salt
15:42 zer0def ryan.walder: so apparently ZFS support is dependent on a grain, so after installation, you might need to reload/restart the minion
15:45 onslack <ryan.walder> it works fine after the package installation, so need 2 `state.apply`'s for a full run
15:45 onslack <ryan.walder> I've raised a bug as it's basically a race condition with the reloads
15:46 onslack <ryan.walder> the state module isn't even really fully released so it's not the end of the world, was just playing with it
15:47 AngryJohnnie joined #salt
15:52 zer0def well, if you take a peek at how zfs' execution module's `__virtual__()` function is implemented, you'll notice it's dependent on the `zfs_support` grain being a true-value, so reloading grains after installation should set you up
15:53 onslack <ryan.walder> i have been, tried `['reload_grains', 'reload_modules']` and `['reload_modules', 'reload_grains']` both with no luck
15:54 zer0def i'm not sure whether `reload_grains` is a legitimate kwarg, which would imply putting a grain refresh state between installation and whatever you're executing
15:54 onslack <ryan.walder> it is according to the docs
15:55 zer0def the same one you've linked, ya?
15:55 onslack <ryan.walder> <https://github.com/saltstack/salt/issues/35387>
15:55 onslack <ryan.walder> been in since 2014
15:56 onslack <ryan.walder> <https://docs.saltstack.com/en/latest/ref/states/requisites.html#reload>
15:56 zer0def oh yeah, just grepped through the codebase and found it
15:56 onslack <ryan.walder> yeah, that's usually my first port of call
15:57 AngryJohnnie joined #salt
15:59 zer0def i guess grains *are* refreshed on-minion, but there's a separate copy of them fed into a state run, which isn't refreshed, hence it trips over
16:00 onslack <ryan.walder> yeah, kinda defeats the point of `reload_grains` imho ;)
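16:00 * For context, the pattern being attempted in this thread — attaching the reload kwargs to the install state so the zfs execution module's `__virtual__()` can see the `zfs_support` grain — is roughly this sketch (package name is an assumption and varies by distro):

```yaml
# sketch: reload grains and modules after installing the zfs packages
zfs-packages:
  pkg.installed:
    - name: zfsutils-linux   # assumed Ubuntu package name
    - reload_grains: true
    - reload_modules: true
```

Per the discussion above, at the time this still required a second `state.apply`: the minion refreshes its grains, but the state run keeps a separate copy that is not refreshed mid-run.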
16:25 ooboyle I need my minions to state.apply when they start up. I see that adding 'startup_states: 'highstate' to the minion config file makes them do this: https://docs.saltstack.com/en/latest/ref/configuration/minion.html
16:25 ooboyle Is there a way to set this from the master? Like via some execution module or something?
16:25 ooboyle If not, what's the best way to uncomment that line in all my minions from the master?
16:25 ooboyle I have linux and Windows minions
16:28 zer0def ryan.walder: oh, i'm inclined to agree with that, never knew or found a need for it
16:29 zer0def ooboyle: you're probably looking for a reactor on a minion start https://docs.saltstack.com/en/latest/topics/event/master_events.html
16:29 pbuell joined #salt
16:31 ooboyle zer0def: and you would do this to change the line in the minon file or as a replacement for making the change in the minion file?
16:32 zer0def oh that's master-side
16:33 zer0def https://docs.saltstack.com/en/getstarted/event/index.html
16:38 ooboyle zer0def so it looks like I'm trying to match a salt/auth event, is that correct?
16:40 zer0def i'd think `salt/minion/<MID>/start`
16:41 ooboyle zer0def ah yes, i see it now when watching state.event
16:41 ooboyle zerodef ok, thanks. i'll play with this a bit. haven't touched reactors yet
16:42 ooboyle zer0def I assume I can use wildcards for matching the minion IDs?
16:42 zer0def yeah
16:42 ooboyle zer0def , thanks
16:43 zer0def the tutorial is short enough to skim examples from it without going much into description
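16:43 * The reactor setup zer0def points at — running a highstate whenever a minion starts, instead of editing `startup_states` in every minion config — is roughly two pieces; the file paths here are conventional examples, not mandated:

```yaml
# /etc/salt/master.d/reactor.conf -- match the minion start event
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/startup_highstate.sls
```

```yaml
# /srv/reactor/startup_highstate.sls -- target the minion that just started
startup_highstate:
  local.state.apply:
    - tgt: {{ data['id'] }}
```

The wildcard in the event tag is what allows matching all minion IDs, as discussed above.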
16:48 Deliant joined #salt
16:53 droid joined #salt
16:56 JacobsLadd3r joined #salt
17:15 jamtoast joined #salt
17:25 JacobsLadd3r joined #salt
17:26 druonysus joined #salt
17:28 eculver joined #salt
17:31 mauli joined #salt
17:32 eculver left #salt
17:33 dxiri joined #salt
17:33 mritchie1 joined #salt
17:37 mritchie2 joined #salt
17:48 gmoro joined #salt
17:54 AngryJohnnie joined #salt
17:55 gmoro joined #salt
18:02 ponyofdeath joined #salt
18:04 Elsmorian joined #salt
18:10 exarkun joined #salt
18:22 ecdhe joined #salt
18:33 stooj joined #salt
18:42 Trauma joined #salt
18:43 cgiroua joined #salt
18:44 cgiroua_ joined #salt
18:45 crux-capacitor can anyone tell me if what I'm doing here will work? this is in a pillar file: https://ghostbin.com/paste/erv4v
18:47 whytewolf no, for several reasons. can't use pillar.get with in pillar. and file.file_exists would be checking the master not the minion as pillar is rendered on the master.
18:48 crux-capacitor ah...dang
18:48 crux-capacitor thanks for looking
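18:49 * Since pillar is rendered on the master (whytewolf's point above), one common workaround for a minion-side file check is to move the condition into the state itself with a global requisite — a hypothetical sketch, paths made up:

```yaml
# sketch: check the minion-side file in the state, not in pillar
run-when-marker-present:
  cmd.run:
    - name: /usr/local/bin/do-thing.sh
    - onlyif: /usr/bin/test -f /etc/marker
```

Alternatively, a custom grain can surface the minion-side fact so pillar templating can branch on `grains` instead of minion files.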
18:49 dendazen joined #salt
18:56 pppingme joined #salt
18:56 ooboyle zer0def works like a charm. Added a sync_grains cmd too for my custom grain dependencies
19:05 ymasson joined #salt
19:09 Edgan MTecknology: Weren't you running ahead with 18.04?
19:17 Trauma joined #salt
19:23 MTecknology Edgan: yup, for the new environment we're pushing 18.04
19:24 Edgan MTecknology: You been just using the xenial repo for everything?
19:24 MTecknology Our upgrade process is going to be rebuild fresh systems on 18.04 pointing at new salt master.s :)
19:24 MTecknology for salt, ya
19:24 MTecknology I haven't needed any other external repos (yet).
19:29 Edgan MTecknology: I use a list of them
19:29 dendazen joined #salt
19:30 rcvu :q
19:33 MTecknology Edgan: a list of them meaning many external repos?
19:33 Edgan MTecknology: Looks like repos are more on top of bionic than I expected
19:33 Edgan MTecknology: yes
19:33 MTecknology that doesn't surprise me a whole lot
19:35 Trauma joined #salt
19:35 MTecknology I actively avoid non-distro/external repos because I've been burned too many times by garbage. Even distros can have horrible/ugly/terrible packages (bitlbee), but it's much less often.
19:38 MTecknology packages like nginx... the upstream version of that used to be installable alongside debian-maintained nginx packages because the conflicts stuff wasn't set correctly. It made a horrible mess that people asked about all the time in #nginx.
19:40 rollniak joined #salt
19:41 Edgan MTecknology: But you just live without, or make your own packages?
19:46 MTecknology yes
19:47 MTecknology A lot of times, I fined better alternatives to software by accepting living without.
19:47 MTecknology find*
19:50 Edgan MTecknology: mongodb, nginx, docker, git, elasticsearch, oracle java, jenkins, pritunl(vpn)
19:55 MTecknology nginx is packaged for debian
19:55 MTecknology as is git..
19:55 MTecknology and mongodb
19:56 MTecknology oh, and so is docker
19:57 MTecknology heck- I was the package maintainer of nginx in debian and ubuntu for a number of years
20:07 rcvu joined #salt
20:13 Edgan MTecknology: nginx, git, and docker outdated. Mongodb, you want to control the version independent of the distro
20:14 MTecknology When I need something newer that's packaged in debian, I'll pull from testing.
20:14 AngryJohnnie joined #salt
20:14 MTecknology That's rarely a need, though. In most cases, I don't need the newer features and I get security updates.
20:16 Edgan MTecknology: you pull testing into ubuntu?
20:16 MTecknology into debian
20:19 MTecknology I actually respect ubuntu as a server os about as much as fedora for the same purpose
20:19 Edgan MTecknology: No more sysv-rc in bionic?
20:20 MTecknology hurray for systemdipshit...
20:20 toanju joined #salt
20:20 Edgan MTecknology: In Silicon Valley nine out of ten startups use Ubuntu. I don't care for it, but it is reality for work.
20:20 MTecknology I'm aware... which is why I'm stuck with supporting it. Doesn't mean I need to respect it.
20:21 MTecknology My current client is in palo alto
20:22 oida joined #salt
20:38 mchlumsky joined #salt
20:55 dendazen joined #salt
20:59 motherfsck joined #salt
21:26 Edgan MTecknology: Making a bionic salt-master with the hot off the presses bionic AMI
21:41 DammitJim joined #salt
21:52 MTecknology there's a bionic ami now? yay!
21:52 MTecknology or is that one that you created?
22:00 Edgan MTecknology: official
22:00 Edgan MTecknology: Came out this morning
22:01 MTecknology Nice!
22:02 Edgan MTecknology: Looks like bionic build debs now use .xz in a way that breaks my apt repo tool, Artifactory. I am having to rebuild on xenial for bionic.
22:02 DammitJim joined #salt
22:11 cgiroua joined #salt
22:19 tyx joined #salt
22:22 AngryJohnnie joined #salt
22:24 mauli joined #salt
22:35 om2 joined #salt
22:39 tyx joined #salt
23:14 mauli_ joined #salt
23:30 AngryJohnnie joined #salt
23:38 Edgan MTecknology: Looks like they included a snap for https://github.com/aws/amazon-ssm-agent by default in the bionic ami :\
23:51 zulutango joined #salt
