IRC log for #salt, 2017-07-28


All times shown according to UTC.

Time Nick Message
00:14 fritz09 joined #salt
00:16 dendazen joined #salt
00:19 pabloh007fcb hello all, not sure where to report this, the documentation on docker_container on version 2017.7.0 states you can remove docker-py and install docker using pip on minions. However if you do this all docker functions fail with this message     Comment: State 'docker_container.running' was not found in SLS 'apps' Reason: 'docker_container'
00:20 pabloh007fcb https://docs.saltstack.com/en/latest/ref/states/all/salt.states.docker_image.html
00:20 woodtablet left #salt
00:21 mugsie joined #salt
00:21 mugsie joined #salt
00:22 pabloh007fcb complete error Comment: State 'docker_container.running' was not found in SLS 'apps'  Reason: 'docker_container' __virtual__ returned False: 'docker.version' is not available.
00:23 whytewolf place to report that https://github.com/saltstack/salt/issues
00:35 masber joined #salt
00:38 magnus left #salt
00:38 skorpy2009 joined #salt
00:51 Yamazaki-kun joined #salt
00:53 nethershaw joined #salt
00:54 squishypebble joined #salt
01:08 masber joined #salt
01:13 smartalek joined #salt
01:25 noobiedubie joined #salt
01:35 mbrgm joined #salt
01:40 debian112 joined #salt
01:43 debian1121 joined #salt
01:52 ilbot3 joined #salt
01:52 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.6, 2017.7.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers <+> The call for speakers for SaltConf17 is now open: http://tinyurl.com/SaltConf17
02:03 prg3 joined #salt
02:14 gnomethrower joined #salt
02:21 debian112 joined #salt
02:35 zerocoolback joined #salt
02:40 mbuf joined #salt
02:45 debian1121 joined #salt
02:57 opnsrc left #salt
03:04 _JZ_ joined #salt
03:04 donmichelangelo joined #salt
03:06 squishypebble joined #salt
03:07 sh123124213 joined #salt
03:09 tom29739 joined #salt
03:45 Guest73 joined #salt
03:48 masber joined #salt
04:02 chrism1963 joined #salt
04:05 chrism1963 left #salt
04:05 chrismarlow9 joined #salt
04:05 chrismarlow9 Is it possible to use salt environments (multiple file_roots) on a masterless environment?
04:09 Sarph joined #salt
04:11 TiredCats joined #salt
04:12 TiredCats left #salt
04:14 justan0theruser joined #salt
04:21 svij3 joined #salt
04:24 Sarphram joined #salt
04:35 zerocoolback joined #salt
04:37 Ni3mm4nd joined #salt
04:41 Diaoul joined #salt
04:43 Twiglet joined #salt
04:44 systeem joined #salt
04:49 golodhrim|work joined #salt
04:49 svij3 joined #salt
04:58 tobstone joined #salt
05:00 jesusaur joined #salt
05:26 Bock joined #salt
05:38 NV joined #salt
05:46 joe_n joined #salt
05:56 cachedout joined #salt
05:57 notakai I'm trying to use the openssh-formula openssh.config on a Raspberry Pi.
05:58 notakai The logs show file.managed dying with Comment 'Failed to commit change: [Errno 2] No such file or directory: /tmp/__salt.tmp.qENL90'
06:00 druonysus joined #salt
06:01 justanotheruser joined #salt
06:06 aldevar joined #salt
06:10 do3meli joined #salt
06:10 do3meli left #salt
06:16 sh123124213 joined #salt
06:30 samodid joined #salt
06:37 cachedout joined #salt
06:38 Ricardo1000 joined #salt
06:59 twooster notakai: I had that problem in a different formula, and it was a result of using 2017.7
06:59 twooster notakai: When I stepped back to 2016, it went away.
07:00 twooster I didn't have the time to trace it further than that
07:01 CrummyGummy joined #salt
07:13 baffle joined #salt
07:14 jhauser joined #salt
07:15 inad922 joined #salt
07:16 Rumbles joined #salt
07:17 Hybrid joined #salt
07:20 masber joined #salt
07:44 usernkey joined #salt
07:44 preludedrew joined #salt
07:47 KevinAn2757 joined #salt
07:53 MasterNayru joined #salt
07:54 aphistic joined #salt
07:59 aphistic joined #salt
08:02 mike25de joined #salt
08:10 pbandark joined #salt
08:14 _KaszpiR_ joined #salt
08:15 twooster Does anyone know if there are any builds for Wheezy with a Salt 2016 version?
08:15 twooster I have legacy systems that I'm struggling to maintain since directives have changed a lot from 2015->2016
08:17 viq twooster: https://repo.saltstack.com/apt/debian/7/amd64/
08:17 twooster sadly it's armhf :(
08:18 twooster or -- are there instructions to build/install from source or packages or something?
08:18 twooster documentation doesn't seem to have anything for that, and bootstrap says "file a ticket"
08:18 viq I believe bootstrap script can do that for you
08:19 twooster When i try with -P git v2016.11.6, it says no packages available
08:19 twooster hmm, let me look at bootstrap help one more time
08:19 viq https://github.com/saltstack/salt-pack
08:21 viq https://docs.saltstack.com/en/latest/topics/development/hacking.html
08:22 viq As for bootstrap, according to https://docs.saltstack.com/en/latest/topics/tutorials/salt_bootstrap.html it seems you need to tell it to use git, so something like "bootstrap git v2016.11.6"
08:23 twooster Trying that. And trying it with -P, so that it uses pip packages. It still exits saying there are no packages for my distribution.
08:23 twooster * ERROR: repo.saltstack.com doesn't have packages for your system architecture: armhf.
08:23 twooster Which seems weird since I'm specifying -P
08:32 jeddi joined #salt
08:36 k_sze[work] joined #salt
08:38 pjs_ joined #salt
08:39 _KaszpiR_ joined #salt
08:40 cliluw joined #salt
08:41 armyriad joined #salt
08:45 Naresh joined #salt
08:53 joe_n joined #salt
08:54 joe_n joined #salt
08:55 joe_n joined #salt
08:55 joe_n joined #salt
08:56 Guest73 joined #salt
08:56 joe_n joined #salt
08:57 joe_n joined #salt
08:58 joe_n joined #salt
08:59 joe_n joined #salt
09:01 Eagleman7 joined #salt
09:06 Eagleman7 How does one define multiple grains when using salt -G:  salt -G 'osfinger:Windows-10' 'manufacturer:FUJITSU' test.ping?
09:09 viq Eagleman7: you need a complex matcher for that, -C
09:09 viq Eagleman7: so, assuming you want an AND there, this would be "salt -C 'G@osfinger:windows-10 and G@manufacturer:fujitsu' test.ping"
09:11 Eagleman7 ah great, thanks!
09:12 Eagleman7 Is it also possible to get the NIC name/version?
09:12 Eagleman7 On windows
09:13 viq I'm not sure what you're asking
09:14 Eagleman7 I am looking for realtek network cards on windows minions, including the version of the network card, so: RealTek 8168 for example
09:15 viq will grains.items show that to you?
09:16 Eagleman7 No
09:16 Eagleman7 well not the specific version
09:18 viq do you know a command that would give you that information?
09:18 Eagleman7 http://lpaste.net/3315046541500088320
09:19 Eagleman7 Can't be filtered on tho, since it's blue in the output
09:20 viq which, the "Realtek PCIe GBE Family Controller", or?
09:20 Eagleman7 Yes
09:21 viq try something like "salt -G 'hwaddr_interfaces:realtek*' test.ping"
09:21 Eagleman7 http://i.imgur.com/2v8w8Kn.png
09:21 Eagleman7 It is not going to match
09:22 viq why not?
09:22 kiorky joined #salt
09:24 Eagleman7 Because the output is in blue, and you can only match on output that's green, not sure what they call these outputs
09:26 viq ok, my tests seem to confirm what you're saying
09:31 viq "salt -G 'hwaddr_interfaces:bge0:*' grains.item hwaddr_interfaces" works for me though
09:32 viq so it could be that "salt -G 'hwaddr_interfaces:Realtek PCIe GBE Family Controller:*' test.ping" would work for you
09:47 stankmac1 left #salt
09:47 stankmack joined #salt
10:00 Reverend joined #salt
10:11 smartalek joined #salt
10:20 Kelsar joined #salt
10:22 mbrgm is there a way to tell salt to apply a specific state and/or pillar, regardless of the environment?
10:22 mbrgm i.e. for all of 'base', 'dev', 'staging' etc.?
10:31 CrummyGummy joined #salt
10:37 Rumbles joined #salt
10:39 vishvendra joined #salt
10:40 vishvendra How to create multiple virtual machines using salt-cloud command
10:40 vishvendra #salt-cloud -p profile-name vmname
10:40 vishvendra can we write multiple virtualmachine names
10:40 vishvendra ?
10:41 hemebond vishvendra: Yes I think it can take a list of names
10:41 vishvendra It's not accepting..
10:41 vishvendra Actually when I am writing command: salt-cloud -p <profilename> vmname1 vmname2
10:42 hemebond Maybe it's just the API then.
10:42 vishvendra after creation of vm1 it's not doing anything
10:42 hemebond Internally I'm pretty sure it accepts a list.
10:43 vishvendra hemebond: what list? Basically how?
10:43 hemebond Sorry, never tried :-)
10:44 hemebond https://docs.saltstack.com/en/latest/ref/cli/salt-cloud.html
10:44 hemebond Says you just give it a space-separated list of names.
10:44 hemebond If it's not creating all the VMs then it's a bug.
10:45 joe_n joined #salt
10:50 vishvendra hemebond: this is not creating all the vm's
10:50 vishvendra Where to raise the bugs?
10:51 hemebond https://github.com/saltstack/salt/issues
10:55 hemebond Have you tried running it with `-l debug` ?
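A map file is another way to spin up several VMs in one call besides listing names on the CLI; a minimal sketch, reusing the profile-name/vmname placeholders from above (the file path is arbitrary):

    # /etc/salt/cloud.map
    profile-name:
      - vmname1
      - vmname2

    # then run it; -P provisions the VMs in parallel
    # salt-cloud -m /etc/salt/cloud.map -P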
11:02 kedare joined #salt
11:04 masber joined #salt
11:15 xet7 joined #salt
11:28 inad922 joined #salt
11:28 KingOfFools Hey guys. Can someone give a hint on a Jinja-related question? I have a dict and want to format a SQL query from it. I'm trying to do it like this: {% for k, v in dict_name.iteritems() %} {% set string_name = string_name + k + v %} {% endfor %}. But the string doesn't include what I concatenate into it inside the code blocks ({% %}).
11:29 hemebond KingOfFools: I'm not sure Jinja allows you to update a variable that easily.
11:29 hemebond I believe it's one of its "rules".
11:29 hemebond Also, you concatenate strings in Jinja using ~
11:31 KingOfFools hemebond: hm, like this "{% set asd = asd ~ dsa %}" ?
11:31 hemebond Can you not just make the loop output directly rather than using a variable?
11:31 hemebond That will work but I don't think you can then update its value. Would have to test it.
11:32 KingOfFools hemebond: I need to cut last comma in query
11:32 KingOfFools hemebond: i'm trying to iterate over dict and generate VALUES blocks
11:33 hemebond Have a look in the Jinja documentation for joiner()
11:34 hemebond If I understand correctly you're basically trying to join a list?
11:34 hemebond (except the values are in a dict)
11:34 KingOfFools hemebond: dict
11:36 KingOfFools hemebond: i have a dict, for example ({id: string}). And want to generate it into INSERT INTO blabla VALUES (id, string), (id, string), (id, string)
11:40 tacoboy joined #salt
11:46 xet7 joined #salt
11:47 xet7 joined #salt
11:47 xet7 joined #salt
11:48 xet7 joined #salt
11:48 xet7 joined #salt
11:51 Rumbles joined #salt
11:53 joe_n joined #salt
12:00 Neighbour Does the salt-master have a datastore that's mutable @runtime? __opts__ doesn't work because you can't reload changes in the configfiles, and I don't know of any alternatives
12:01 joe_n joined #salt
12:02 hemebond KingOfFools: joiner()
12:03 hemebond And don't try to put everything into a variable, just output directly.
12:03 hemebond If you have to collect it first, then you'll have to use a macro.
12:04 CrummyGummy joined #salt
12:05 KingOfFools hemebond: think i found a greate thing. '{% if loop.last %}'
12:05 KingOfFools great
12:07 mbrgm how can I run a command and then parse the yaml output into a variable?
12:07 Ni3mm4nd joined #salt
12:08 KingOfFools hemebond: so i just do formatting with {% for i in bla bla %} {{ i }}, {% if loop.last %} {{ i }} {% endif %} {% endfor %}
12:08 hemebond {% set comma = joiner(',') %}{% for k, v in d.items() %}{{ comma() }}{{ k }}, {{ v }}{% endfor %}
12:09 KingOfFools hemebond: so thereis no comma at the very end
12:09 hemebond Shouldn't be. That's what joiner() is supposed to do.
12:10 hemebond There is an example and explanation in the Jinja docs. Did you find it?
12:12 KingOfFools hemebond: dont think it would help
12:13 hemebond It's like the Jinja way to do "if this is the last item in the list don't put a joiner"
12:15 KingOfFools hemebond: lemme check
12:21 KingOfFools hemebond: looks like it's really not what i'm looking for. This thing escapes in the first iteration, not in the last
12:22 cgiroua joined #salt
12:22 KingOfFools *skips
12:23 hemebond Well, I use it to create `failover://(tcp://blah:1234, tcp://foo:1234)` things
12:23 hemebond So it seems to do what you're trying to do.
12:25 KingOfFools hemebond: oh, im retarded, sorry. I see now :D Should use it like in "put comma as prefix, but not in first time" not as "postfix"
12:25 hemebond ya
12:26 hemebond *suffix
12:26 KingOfFools hemebond: yea
12:26 KingOfFools hemebond: thanks mate, that's a cool clean solution
12:26 hemebond ????
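A minimal sketch of the joiner() pattern hemebond describes, assuming a dict named rows (an illustrative name) whose keys/values become the VALUES tuples; the table name is also illustrative:

    {%- set comma = joiner(', ') -%}
    INSERT INTO mytable VALUES {% for k, v in rows.items() -%}
      {{ comma() }}({{ k }}, '{{ v }}')
    {%- endfor %};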
12:29 ssplatt joined #salt
12:36 XenophonF joined #salt
12:51 jdipierro joined #salt
12:56 kedare_ joined #salt
12:56 filippos joined #salt
13:11 justanotheruser joined #salt
13:15 ntropy joined #salt
13:22 justanotheruser joined #salt
13:27 kedare_ joined #salt
13:29 drawsmcgraw joined #salt
13:50 cachedout joined #salt
13:56 tapoxi_ joined #salt
14:04 mbrgm left #salt
14:13 squishypebble joined #salt
14:16 bowhunter joined #salt
14:25 brieken joined #salt
14:29 noobiedubie joined #salt
14:29 borisrieken joined #salt
14:31 borisrieken left #salt
14:31 borisrieken joined #salt
14:33 borisrieken I'm a Salt noob and I have a problem with win_powercfg. What is the best way to get help as an open source Salt user?
14:37 kedare joined #salt
14:37 notakai twooster: Looking at the code, it looks like this might occur if using salt-master to manage salt-minion on the same box.
14:44 DammitJim joined #salt
14:44 doubletwist joined #salt
14:47 fritz09 joined #salt
15:01 KingOfFools How can I group salt-ssh hosts to target em by group?
15:02 jdipierro joined #salt
15:03 _JZ_ joined #salt
15:04 shanesveller left #salt
15:04 btorch on a multi-master-pki setup, could any master be used even if the "primary one" hasn't failed over? or is it better to just do a multi-master setup
15:06 MTecknology What does "multi-master-pki" mean?
15:06 sarcasticadmin joined #salt
15:06 MTecknology What failover setup do you have in place now?
15:11 lordcirth_work borisrieken, asking here is generally useful
15:11 btorch MTecknology: I don't, I was just reading on it and trying to decide
15:12 btorch https://docs.saltstack.com/en/latest/topics/tutorials/multimaster_pki.html
15:16 DammitJim joined #salt
15:17 MTecknology btorch: gotcha, I haven't read that before. I've just done a master and four syndic masters and had a custom grain that selected two predictably-random hosts to connect to. I could always lose two non-adjacently-named hosts and not lose salt chatter.
15:24 tiwula joined #salt
15:25 MTecknology heh, interesting link. I just kept /etc/salt/{master,pki/master}/* in sync and didn't deal with key signing, but that looks nifty.
15:31 DammitJim joined #salt
15:32 high_fiver joined #salt
15:34 thebignoob joined #salt
15:37 thebignoob hey guys, i'm running into a little issue running highstate on a few servers and i can't seem to figure out a way to troubleshoot the issue "Passed invalid arguments to state.highstate: expected string or buffer": https://pastebin.com/KKuMuyM6
15:37 thebignoob i'm targeting a minion explicitly and with grains, both render the same message
15:38 thebignoob i was wondering if there's any tips for troubleshooting this, as my google-fu is a bit lacking right now
15:40 whytewolf what versions are your master and your minion? also to troubleshoot liberal use of -l debug on the master and the minion
15:42 thebignoob @whytewolf, thanks for the feedback. the master: Salt 2015.5.3; and the minion: Salt 2015.8.8
15:44 whytewolf https://docs.saltstack.com/en/latest/faq.html#can-i-run-different-versions-of-salt-on-my-master-and-minion
15:44 thebignoob running salt-call -l debug state.highstate on the minion certainly is much more useful now
15:44 thebignoob and i'll get salt master on the same minion release
15:48 thebignoob hmmm, same behavior on same versions, 2015.8.8
15:49 pbandark1 joined #salt
15:52 Ni3mm4nd joined #salt
15:52 whytewolf how about state.apply test=True [should throw the same error as it just runs state.highstate internally.]
15:52 woodtablet joined #salt
15:52 whytewolf also were you seeing the same issue with salt-call on the minion?
15:54 karlthane joined #salt
15:55 thebignoob yeah exact same behavior doing salt-call locally and on master, even using the test run flag @whytewolf, i do have a new pastebin with a stacktrace at the end of a debug salt-call highstate: https://pastebin.com/8kgxzJ9z
15:55 thebignoob thanks for your help again
15:56 whytewolf humm, that looks like something in your python is borked. fnmatch.py should not be throwing a trace
15:57 thebignoob i've blown away the host and reprovisioned it a few times to make sure there was some unknown change
15:57 thebignoob wasn't*
16:00 whytewolf is any other command failing on this host except highstate? such as can you run some of the states directly?
16:00 thebignoob the funny thing is, i can run individual states on the host. This one in particular i'm running an nginx config that has been generated using values in salt mine
16:00 thebignoob i'm able to run all the states included in highstate
16:01 thebignoob afaik
16:01 thebignoob something funky going on in the top file i guess?
16:01 whytewolf maybe, check state.show_top
16:02 thebignoob it's able to render that correctly
16:02 aldevar left #salt
16:02 thebignoob i guess i'll go through that list
16:02 whytewolf start.show_highstate
16:03 thebignoob "'start.show_highstate' is not available." awww shucks
16:03 thebignoob sounds like i need a newer version
16:03 whytewolf sorry pre coffee brain
16:03 whytewolf state.show_highstate
16:03 thebignoob it's free help, I'm more than thankful either way
16:04 thebignoob hmm, that seems to render fine
16:04 whytewolf humm. try your highstate with test='True'
16:04 thebignoob well, something happened
16:05 thebignoob i was able to run a highstate after that
16:05 thebignoob didn't run 100% successfully, but the behavior is gone
16:05 whytewolf ... nothing i had you do should have changed anything it was all just test info
16:06 thebignoob welp, I will continue to investigate I guess, thanks for taking the time though
16:06 whytewolf maybe cache ...
16:06 thebignoob that's the only thing i could really think of
16:06 thebignoob still bizarre though
16:07 whytewolf very
16:07 thebignoob one of these, better nuke the server and spin it up one more time just to be sure
16:07 johnkeates joined #salt
16:07 pbandark1 joined #salt
16:11 johnkeates joined #salt
16:15 jdipierro joined #salt
16:17 gmoro joined #salt
16:21 dxiri joined #salt
16:25 jespada joined #salt
16:26 wendall911 joined #salt
16:27 sp0097 joined #salt
16:28 jholtom joined #salt
16:34 vishvendra joined #salt
16:40 jdipierro joined #salt
16:51 pdayton joined #salt
16:53 Inveracity joined #salt
17:01 mpanetta joined #salt
17:02 jdipierro joined #salt
17:09 yidhra joined #salt
17:10 sp0097 joined #salt
17:17 skatz joined #salt
17:18 skatz Anybody know where the salt-provided rpm for python 2.7 is? I don't see it in http://repo.saltstack.com/yum/redhat/7/x86_64/archive/2017.7.0/
17:19 whytewolf skatz: um, as redhat 7 comes with python 2.7 it shouldn't need it
17:19 skatz oh lol
17:19 skatz should check the 6 repo then :)
17:19 skatz haha thanks
17:19 whytewolf no problem :)
17:19 astronouth7303 i wish there was a better way to check what versions of packages redhat & friends have in their repos.
17:20 astronouth7303 (I have this problem with a project i'm coredev on, and trying to figure out what version is part of what LTS can be surprisingly hard)
17:21 whytewolf astronouth7303: for the most part check https://rpmfind.net/linux/RPM/index.html
17:22 whytewolf it doesn't cover all repos, but it does get fairly indepth
17:23 astronouth7303 whytewolf: i'm not part of the cult of the hat. Where's redhat on that list?
17:24 whytewolf they actually don't show up in the list. but if you search for a package that you are interested in, redhat/centos shows up
17:25 whytewolf oh, huh they changed it
17:25 whytewolf damn it.
17:25 whytewolf it USED to show core packages for centos [which are typically binary compat with redhat]
17:28 preludedrew joined #salt
17:35 Guest73 joined #salt
17:38 astronouth7303 yeah, our problem was trying to decide if we should drop Py3.4 support because debian stable (currently stretch) is now 3.5, and trying to get data on the various rh versions is tricky
17:41 whytewolf ohhh, that is even more fun cause python 3 support in redhat is scl. [which btw i think has python 3.3, 3.4 and 3.5]
17:41 whytewolf man i hate scl.
17:42 astronouth7303 yup. trying to figure it out with nobody familiar with redhat is a good time
17:42 woodtablet joined #salt
17:43 woodtablet left #salt
17:43 vishvendra joined #salt
17:43 Guest73 joined #salt
17:43 whytewolf https://www.softwarecollections.org/en/scls/
17:44 astronouth7303 i didn't even know that was a thing
17:44 whytewolf you do now ;)
17:46 sp0097 joined #salt
17:46 astronouth7303 so, wait, no released redhat or centos has py3 as part of the primary repos?
17:48 viq WTF? 3 VMs running on host where salt-master is running, I can run "salt-call state.apply" on them just fine, but master reports them as down, and restarting minion process doesn't help
17:49 astronouth7303 VM internal network weirdness?
17:49 whytewolf firewall issues on one of the two ports?
17:50 viq maaaybe, though 2 out of 3 VMs are in theory on the bridged interface with public IPs
17:50 astronouth7303 the part I find really weird is that they can access the fileserver but not the command broadcasting
17:50 astronouth7303 but both of them go over the zmq event bus?
17:53 whytewolf astronouth7303: to answer your question. yes python3 is not in the default repos for redhat.
17:53 ChubYann joined #salt
17:53 astronouth7303 huh, til
17:54 viq meh, restarted minion again, I see the auth on event bus, yet still is reported as down and doesn't respond to test.ping
17:54 yidhra joined #salt
17:55 whytewolf viq try teleneting from the minion to both 4505 and 4506 or tcptraceroute
17:56 viq whytewolf: yup, telnet works
17:57 whytewolf have you restarted the master?
17:57 viq I've rebooted that host today. I guess I can try restarting the process, though some hosts work
17:58 whytewolf well I'm just wondering if the zmq is hung up on those hosts or something odd like that
17:58 whytewolf maybe clear the master cache as well
18:00 viq 'rm -rf /var/cache/salt/master/*' or something gentler?
18:01 whytewolf well, that should be safe, do you have any custom runners or anything like that that might need to be reinstalled?
18:01 swills joined #salt
18:01 viq no
18:02 viq stop && rm && start and let's see
18:02 whytewolf then yeah that should be safest, if you use git_pillar or gitfs you might need to refresh their caches
18:03 viq oh, hm, I wonder if there's a chance I ran out of inodes :
18:03 viq drwxr-xr-x   2 _salt _salt 51363840 07-28 19:59 publish_auth
18:03 viq and yeah, now all do respond
18:04 viq thanks
18:04 whytewolf yeap, was most likely an inode issue :)
18:06 viq oh, sorry, misread up for down for some reason... no, they still don't respond
18:06 whytewolf oh
18:06 whytewolf humm
18:06 viq and I have some parrots that *demand* attention, so I'll be back in a bit ;)
18:07 druonysus joined #salt
18:07 druonysus joined #salt
18:12 tapoxi_ joined #salt
18:12 doubletwist So any advantage/caveat to assigning roles to hosts in pillar vs assigning as a custom grain for each host?
18:14 astronouth7303 it really depends on how you've set up pillars and such. I'm using grains, but I can see pillars being easier for some installations to manipulate (esp if you're using an ext_pillar)
18:14 MTecknology I prefer not using roles at all
18:14 whytewolf advantage for pillar-based: while assigning the role you can assign the pillars that go with that role. caveat for pillar-based: you can't target on that role in the pillar top file. advantage for grains: you can target those roles in pillar. caveat: it's a security issue to target on grain roles in pillar, since grains are set by the minion.
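A minimal sketch of the grain-based variant described above; the role name, pillar file and paths are illustrative. The minion carries a roles grain, and the pillar top file targets it:

    # /etc/salt/grains on the minion (or set it with grains.append)
    roles:
      - webserver

    # pillar/top.sls
    base:
      'roles:webserver':
        - match: grain
        - webserver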
18:15 noobiedubie joined #salt
18:15 doubletwist Haven't set anything up yet, I'm just testing out some simple things [total salt noob] and trying to figure out how it might work best in our environment
18:16 doubletwist in our case, the host names are fairly consistent containing info on the app name, server type [ie app or web] and environment [dev/test/uat/prod]
18:16 whytewolf soooo, why the need for roles at all then?
18:16 astronouth7303 you can just match the hostname everywhere and use neither grains nor pillars
18:16 doubletwist True I guess
18:17 whytewolf the only case i have seen them make even the most basic sense is when naming is off, lost in the desert.
18:17 pjs_ left #salt
18:17 doubletwist heh
18:18 pjs joined #salt
18:20 astronouth7303 it's what i plan on doing, actually, because I learned cloud from AWS where yes, node names are arbitrary IDs
18:20 doubletwist There is a possibility I may have to start handling arbitrary host names
18:20 doubletwist hard to tell at this point - which of course makes planning difficult.
18:21 doubletwist I'm going to push against it though as I'm not a fan of completely un-informative host names
18:22 whytewolf no one should be. i should be able to look at a host name and know what it is doing.
18:22 whytewolf at a very basic level at least
18:22 Ni3mm4nd joined #salt
18:22 whytewolf also minion_id doesn't have to be hostname
18:22 whytewolf it defaults to that but it can be different
18:23 doubletwist Well, I can see that being difficult [though not impossible] in a really large scale
18:25 woodtablet joined #salt
18:26 doubletwist The other tricky bit I need to figure out is datacenter location which is not included in the hostname or fqdn
18:26 whytewolf large scale typically can break out into dotted nomenclature. so in large scale environments you can have things like web00001.dev.nyc.example.com
18:27 doubletwist mainly for things like systems in tx get one set of resolvers, systems in ca get another set.
18:27 whytewolf different subnets?
18:27 astronouth7303 i'm using pillars to hand out that data, and assigning pillars based on subnets
18:27 viq doubletwist: oh, ours are informative - they tell you exactly in which DC, rack, slot they are. Tells you nothing of the role though :P
18:28 doubletwist Yeah I think I can use subnets
18:28 doubletwist we only have a few physical hosts. The rest are all virtual
18:28 doubletwist so physical location is mostly moot
18:30 viq we have some VMs named as "third vm on such and such host" ;)
18:30 whytewolf viq: that seems like an odd naming scheme. what if you move that vm?
18:32 viq Or at least RevDNS has that
18:32 astronouth7303 i could see setting up an external pillar as a metadata service
18:32 viq whytewolf: just another step in an arduous manual process, I guess
18:33 * whytewolf doesn't envy viq
18:33 MTecknology I like a vm host telling me the physical location in its name, but otherwise I don't include anything about the DC in a hostname because I like being able to float HA boxes back and forth between DCs as I see fit.
18:34 viq whytewolf: though to be fair, those are old machines, on manually managed KVMs. We've mostly moved to openstack and proxmox by now
18:35 whytewolf ahh, ok. that makes sense then
18:37 woodtablet whytewolf: how did you do that action ? /action isnt working
18:37 whytewolf /me
18:37 woodtablet ahhh, thanks!
18:38 nixjdm joined #salt
18:38 * woodtablet has a big grin
18:39 whytewolf hehe
18:41 viq huh, that's what tcpdump shows on minion restart https://pbot.rmdir.de/2qkV503462uw6W4HRsynWQ
18:43 viq (some duplicates possible as master is listening on the same interface that minion is bridged to)
18:45 viq uh, what? Running "salt-minion -l debug" from CLI, it exits, last line being "great scott! serious repercussions on future events!"
18:45 viq with exit code 250
18:46 whytewolf um,
18:46 * viq starts pondering whether it's a salt thing or an OpenBSD thing
18:47 whytewolf i think that might be an openbsd thing.
18:47 whytewolf cause that error doesn't exist in salt
18:47 viq yeah...
18:47 whytewolf although the error makes me thing about check the time
18:47 whytewolf s/thing/think
18:47 astronouth7303 yeah, i can't find "great scott" in the salt code base
18:48 viq https://github.com/openbsd/src/blob/a9128774a26980a02b10602a2f3fcb3c5930abee/lib/librthread/rthread.c#L363
18:48 _KaszpiR_ joined #salt
18:49 whytewolf ... they used that error for a pthread error? ugh, thats like a perfect error for a time sync issue
18:50 viq :D
18:50 * viq looks up the reference, and finally gets it ;)
18:51 whytewolf hehe
18:53 sh123124213 joined #salt
18:54 viq https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/lib/librthread/rthread.c?rev=1.95&content-type=text/x-cvsweb-markup is the commit message for this :D
18:54 jimklo joined #salt
18:55 whytewolf lol, guess you get to change it :P
18:56 whytewolf ... cvs?
18:56 whytewolf yikes. didn't think anything was still using cvs
18:56 viq OpenBSD devs see no real reason to switch
18:58 whytewolf https://www.cvedetails.com/vulnerability-list/vendor_id-442/CVS.html [and the most stable release for it is 2009]
18:58 viq I don't believe that's quite the same CVS
18:59 viq https://github.com/openbsd/src/commits/5271000b44abe23907b73bbb3aa38ddf4a0bce08/usr.bin/cvs if you feel like going spelunking ;)
19:00 _KaszpiR_ joined #salt
19:00 whytewolf ugh, they forked cvs.... which is something you do when you are using a dead system ...
19:00 doubletwist ok, so I know from top.sls if I reference "- common.packages" it will look in ./common/packages/init.sls"
19:00 noraatepernos joined #salt
19:01 doubletwist but if I want to reference it from common/init.sls ?? to reference ./packages/init.sls ?
19:01 whytewolf doubletwist: common.packages will look in common/packages.sls and if that doesn't exist will look in common/packages/init.sls
19:02 Rumbles joined #salt
19:02 whytewolf then common/init.sls should include packages
19:02 whytewolf oh, missed the dot: include common.packages
19:02 woodtablet left #salt
19:02 viq from common/init.sls you can also include ".packages"
19:03 skatz joined #salt
19:03 whytewolf oh yeah forgot they added the "."
19:03 cachedout joined #salt
19:03 doubletwist I'm really having trouble with the sls syntax
19:04 whytewolf how so?
19:04 doubletwist when I need to have the 'base:' at the beginning [or some other identifier] for one and when I don't
19:04 astronouth7303 that should be just topfiles
19:04 doubletwist when I need a leading dash or not
19:04 whytewolf base: goes in the top file
19:04 viq doubletwist: there's two syntaxes. There's top.sls. And there's everything else.
19:04 whytewolf otherwise it isn't needed
19:05 astronouth7303 you need dashes for the arguments of your state blocks
19:05 doubletwist ok I think I have this one working
19:06 whytewolf doubletwist: think of it like this: when something says it needs a list you need the dash. if it says it needs a dict you don't
19:06 doubletwist ok
19:06 doubletwist Thanks
19:07 whytewolf most things are lists
19:10 whytewolf humm, guess that doesn't cover list of dicts but those are easy just - key: value
19:13 astronouth7303 large swaths of salt yaml are dict-ish: it's a list of single-key dicts because there's the occasion that you want to have non-dict items (I think it's some kind of include mechanism?)
19:13 doubletwist Yeah, I'm not a strong programmer, so it's a bit of a learning curve. I've mostly been doing bash scripting for the last 10 years
19:14 astronouth7303 follow the examples, and if it complains, try it with dashes
19:15 whytewolf think of dicts as bash labeled arrays and lists as numbered arrays [sorry there is more difference between them but that's the closest i can think of]
19:16 astronouth7303 that's not bad for a 10 word description
19:17 whytewolf for the most part it starts as a dict with the state id, followed by a dict for the module to use, followed by a list of dicts for options to the module.
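An annotated sketch of that structure; the state ID and package name are illustrative:

    install_editor:        # state ID (a dict key)
      pkg.installed:       # module.function to run (another dict key)
        - name: vim        # the options: a list of single-key dicts
        - refresh: True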
19:21 doubletwist Ok so what am I missing here - it's not pulling info from pillar
19:21 doubletwist http://paste.lopsa.org/187
19:22 astronouth7303 doubletwist: the pillar topfile refers to pillar sls files, not the pillar keys themselves (i think)
19:22 whytewolf astronouth7303: correct. the pillar data goes in files that get called by top. top is ONLY for targeting.
19:22 doubletwist ok
19:22 viq doubletwist: what astronouth7303 said, you don't put data in top.sls
19:22 doubletwist ok I think lemme try
19:23 viq your top.sls would end with "- resolver" and then you'd have resolver.sls with the data
19:23 astronouth7303 actually... i think the _syntax_ of the pillar topfile is nearly identical to that of the state topfile
19:23 astronouth7303 (the context and what it's referring to is completely different, though)
19:23 doubletwist okay yeah that did work. I get it. :) Thanks!
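A minimal sketch of that split, using the resolver example from this discussion (the key names are illustrative, not necessarily the formula's real ones): top.sls only targets, the data lives in resolver.sls.

    # pillar/top.sls
    base:
      '*':
        - resolver

    # pillar/resolver.sls
    resolver:
      use_resolvconf: true
      nameservers:
        - 192.0.2.53
        - 192.0.2.54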
19:24 doubletwist So can I put in the resolver.sls the 'logic' to set the variables based on the subnets? Or would that have to go in top.sls pillar?
19:25 viq doubletwist: up to you
19:25 viq doubletwist: you can assign different resolver_something.sls in top.sls, or you can have logic inside the single resolver.sls
19:26 doubletwist ok. Then I'd prefer to keep top.sls relatively clean, i'll put it in resolver.sls
19:26 whytewolf yeah it is up to you, personally i use top file for most of the targeting which is what you are describing. and have different files that get included based on that info.
19:26 whytewolf as rendering is expensive.
19:26 viq And pillars are rendered on master
19:26 doubletwist Well in this case *everything* has to get a resolver config
19:27 doubletwist the subnet just determines which nameservers it gets
19:27 viq Oh, yeah, that reminds me I still want to see what the difference in time would be in a state with a lot of loops and a lot of data grabbed from mine between the state written in yaml and json
19:27 astronouth7303 i have a pillar file for each data center, and then select pillars by subnet in the top file
19:28 whytewolf viq a lot of mine calls will really rally the master.
19:28 xet7 joined #salt
19:29 viq rally?
19:29 whytewolf brain fart mid type. rack, punish, exterminate
19:30 viq ah
19:30 Rumbles joined #salt
19:31 viq I'm generating config for icinga and prometheus, it's ~35 hosts, I think 10-15 mine values for each, for each of the two configs. State takes about 45 seconds to a minute to apply
19:31 whytewolf that's not bad. i wouldn't want to run that in a thousand node setup but at ~35 it is manageable
19:33 Shirkdog joined #salt
19:33 viq Any other ideas how to get various bits of information about other hosts to generate such config?
19:33 doubletwist So would something like this work?
19:33 doubletwist http://paste.lopsa.org/188
19:34 doubletwist nm, looks like that's a no
19:34 whytewolf viq not really. unless they are grouped together somehow. such as if it is grains data grabbing a lower grain so you can parse them together. without the extra mine call
19:35 astronouth7303 i'm mostly using the mine for ad-hoc service discovery
19:35 systeem joined #salt
19:35 whytewolf doubletwist: I'm not exactly sure what you were trying to do there.
19:36 Twiglet joined #salt
19:36 viq doubletwist: https://pbot.rmdir.de/FYGtg5t5_w_b8a7QccjDQw
19:36 doubletwist I'm trying to set the bulk of the options once to apply to all the systems, and then the nameserver part only to the hosts that match [by datacenter]
19:36 astronouth7303 that still looks like it's mixing a pillar topfile with pillar data
19:36 viq erm, right, let me correct that bit
19:37 doubletwist viq: Ah - I see, that makes senes
19:37 doubletwist sense
19:37 drawsmcgraw joined #salt
19:37 viq doubletwist: https://pbot.rmdir.de/UkfEV8S0F6USsLJKfgsWGQ is what your resolver.sls should look like
19:38 Rumbles joined #salt
19:38 astronouth7303 that'll produce pillar data like 'resolver.use_resolvconf'
19:39 viq or even https://pbot.rmdir.de/KbSIGlEVe_QE5jVjZoMq2A
19:39 hashwagon joined #salt
19:39 viq yeah, and then you can iterate over say 'resolver:nameservers' to set them
19:40 doubletwist well I'm using an existing module for the resolvers.
19:40 viq astronouth7303: ":" is the separator in pillar and grain data, not "."
19:40 astronouth7303 I think it depends on how you access it
19:41 lionel joined #salt
19:42 doubletwist viq: The way you referenced the 'datacenter' grain though - that would depend on me setting the grain on the minion? Or could I set that in a pillar?
19:43 astronouth7303 you can't select on a pillar in the pillar
19:43 whytewolf doubletwist: you HAVE to set a grain for that, you cannot do any kind of matching on pillars within pillar
19:43 viq doubletwist: you would need to set a grain on a minion - which you could do from a state. But that's just an example, you could be matching subnets there as well
19:43 oida_ joined #salt
19:43 doubletwist ok so I'll have to match those on a list of subnets then
19:43 yidhra joined #salt
19:44 viq yeah. It's an example on how you could structure it, and how to match things. You need to figure out what you want to match on.
19:44 whytewolf {% if salt.match.ipcidr('10.20.0.0/8') %}
19:44 astronouth7303 my pillar topfile is https://pastebin.com/gEGFb2Pj
19:44 doubletwist yeah no that's great. Very very helpful. Thanks
19:45 cliluw joined #salt
19:46 whytewolf ... you use a lot of compound matching astronouth7303 :P
19:46 viq And I don't believe that those compound matchers are needed
19:46 astronouth7303 it's not strictly necessary, but i like doing it from the get-go just so i'm in the habit
19:46 whytewolf not a 'match: grain' or 'match: ipcidr' among the lot :P
19:47 astronouth7303 (it's probably my CSS experience showing or something)
19:47 viq https://docs.saltstack.com/en/latest/ref/states/top.html#advanced-minion-targeting
19:47 ssplatt joined #salt
19:47 astronouth7303 just so i'm used to reading compound matchers and it's easier if I do need them
19:48 whytewolf it isn't needed, and adds maybe milli or even microseconds to the run so it's not that big of a deal.
19:48 astronouth7303 CPU time is usually the least important computer resource
19:48 whytewolf well, it matters when the run starts taking longer than the refresh time of pillar. but other than that it is kewl
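For reference, a sketch of the two top-file styles being compared here; the subnets and pillar file names are illustrative:

    base:
      # plain ipcidr matcher
      '10.20.0.0/24':
        - match: ipcidr
        - datacenter_tx
      # the same idea written as a compound matcher
      'S@10.40.0.0/24':
        - match: compound
        - datacenter_ca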
19:49 seffyroff joined #salt
19:50 aldevar joined #salt
19:50 racooper joined #salt
19:52 doubletwist I'm trying to find this info before asking I swear! :)
19:52 doubletwist what's the syntax to match ipcidr in those if statements? Since it doesn't have an index to match like the grain?
19:53 doubletwist I can find instances of matching it like astronouth7303's top.sls but not in an if statement
19:53 whytewolf it uses cidr syntax and matches against any ip on a server
19:53 whytewolf {% if salt.match.ipcidr('10.20.0.0/8') %}
19:54 whytewolf {% if salt.match.ipcidr('10.0.2.0/24') %}
19:54 whytewolf would be the first one
19:54 nixjdm joined #salt
19:55 whytewolf https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing ... i hate throwing people into the deep end
19:55 doubletwist yeah I get CIDR's - I just wasn't clear on the function[?] to call and the syntax for it in the if statement.
19:55 doubletwist I  hadn't seen 'salt.match.ipcidr' mentioned in the place I'd searched
19:56 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.match.html
19:56 whytewolf salt is a dict with a shortcut for all of the execution modules
19:56 whytewolf salt.module.function
19:56 whytewolf or salt['module.function']
19:56 doubletwist So if wI want to match multiple subnets, do I need to use 'salt.match.compound'?
19:57 whytewolf yes
19:57 whytewolf such as salt.match.compound('S@10.20.0.0/24 or S@10.40.0.0/24')
19:58 doubletwist awesome
20:02 colttt joined #salt
20:10 aldevar joined #salt
20:14 viq or {% if salt.match.ipcidr('one') or salt.match.ipcidr('two') or salt.match.ipcidr('three') %}
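Pulling those pieces together, a sketch of what the conditional inside resolver.sls could look like; the subnets, nameserver addresses and key names are illustrative:

    resolver:
      use_resolvconf: true
      nameservers:
    {% if salt.match.compound('S@10.20.0.0/24 or S@10.40.0.0/24') %}
        - 10.20.0.53
        - 10.40.0.53
    {% else %}
        - 192.0.2.53
        - 192.0.2.54
    {% endif %}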
20:51 doubletwist Yeah I'm definitely leaning toward salt at this point
20:51 doubletwist vs ansible
20:54 nixjdm joined #salt
20:57 astronouth7303 so, want to learn more about how salt works? just stream the event bus to your terminal and watch it go by
20:59 llua do you get cool points like compiling on gentoo?
20:59 sp0097 joined #salt
20:59 astronouth7303 possibly. I think i need to improve this, though, to better filter events
21:09 MTecknology llua: points aren't given for being excited about having used gentoo
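One documented way to stream the event bus from the master, as suggested above:

    # run on the master; prints each event as it arrives
    salt-run state.event pretty=True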
21:16 socket-_ Can anyone help me understand why pkg.install will install/remove 7zip, but not winzip? Here are my repos and attempts. https://apaste.info/4xqD
21:17 whytewolf socket-_: my guess looking at the info is that the full name doens't match
21:18 socket-_ what do you mean
21:20 whytewolf i mean the full name listed there must not be the full name that shows up in the windows subsystem that is checked.
21:21 whytewolf i don't use windows so have no idea what the full name should be
21:21 whytewolf but if the full name doesn't match it won't be able to find winzip to uninstall it. hence your error
21:22 aldevar left #salt
21:23 socket-_ I can try changing the full name, but the 7zip full name is "7zip" according to pkg.list_pkgs but in the winrepo it's called 7-zip 16.04 (x64 edition)
21:23 whytewolf this isn't about list_pkgs
21:24 whytewolf this is how WINDOWS sees the package
21:24 whytewolf list_pkgs will list it as salt sees it
21:25 socket-_ so for remove to work the full name needs to match how windows sees it?
21:25 whytewolf yes
21:25 whytewolf full_name on line 48 of your post
21:26 whytewolf i believe capitalization also matters
21:26 sh123124213 joined #salt
21:27 whytewolf most likely it is something like WinZip 18.5
21:27 socket-_ yeah, windows shows it as WinZip 18.5
21:27 socket-_ which matches my winrepo full_name 'WinZip 18.5'
21:28 whytewolf not on what you posted
21:28 whytewolf full_name: 'winzip 18.5.11112 (x64 edition)'
21:28 whytewolf thats what you posted
21:29 socket-_ k, let me recommit my changes and try again
21:31 socket-_ while i'm waiting for the db_refresh.. why does install work if the name mismatches, but not remove
21:31 whytewolf because salt sees that something installed.
21:32 whytewolf assumes it was right
21:32 whytewolf but when it is looking for the package to remove it can't find it because it is looking for what is in full_name
21:34 whytewolf in linux you do get errors on install also with that issue. as the verification is more in depth
21:36 Ni3mm4nd joined #salt
21:37 socket-_ whytewolf: thanks! it looks like it's working now with that fullname change
21:38 whytewolf no problem :)
21:38 Ni3mm4nd joined #salt
21:38 socket-_ as far as you know is there a way to wildcard it? like I don't know which version of winzip will be installed, so the fullname might be WinZip 18 or Winzip 18.5 ...
21:39 socket-_ do i just have to have my repo define all winzip releases, and pkg.remove winzip will go through all of them?
21:39 whytewolf that i don't know. like i said i don't use windows
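A sketch of the winrepo definition shape being discussed, with full_name matching what Windows itself reports (WinZip 18.5 per the exchange above); the version key, installer path and flags here are illustrative, not WinZip's real ones:

    winzip:
      '18.5':
        full_name: 'WinZip 18.5'   # must match the name Windows reports
        installer: 'salt://win/repo-ng/winzip/winzip185-64.exe'
        install_flags: '/S'
        uninstaller: 'C:\Program Files\WinZip\uninstall64.exe'
        uninstall_flags: '/S'
        msiexec: False
        reboot: False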
21:54 Praematura joined #salt
21:54 nixjdm joined #salt
21:57 doubletwist SO what am I missing here? It should match the first set [RedHat family major rev 7] but i'm only getting the package from the final 'else'.
21:58 whytewolf doubletwist: ? care to gist up what you have
21:58 doubletwist http://paste.lopsa.org/189
21:58 doubletwist oops meant to paste that :)
21:58 doubletwist You mean you aren't psycic?
21:58 doubletwist psychic even
21:59 vexati0n has anyone used the Cron module with salt-api? The docs give a list of possible arguments, but those arguments don't work with salt-api, it just says "Passed invalid arguments to cron.rm_job: rm_job() takes at least 2 arguments"
22:00 whytewolf doubletwist: what version are you on?
22:01 doubletwist 2017.7.0
22:01 whytewolf grains["osmajorrelease"] == 7
22:01 whytewolf try that
22:01 astronouth7303 vexati0n: how are you calling it?
22:01 doubletwist wouldn't that match debian 7 as well?
22:02 whytewolf i meant get rid of the double quotes around 7 not get rid of the entire thing :P
22:02 doubletwist ah
22:02 whytewolf also around 6
22:03 vexati0n astronouth7303: I figured out that "arg" can be "user", and the rest of them should be inside a kwarg array. but ... that doesn't work either. it (cron.rm_job) returns with 'absent', but doesn't actually delete anything.
22:03 vexati0n just adds the "#lines below here are managed by Salt" comment, but it leaves the existing job above that intact
22:04 astronouth7303 try arg as a list of positional arguments
22:04 doubletwist whytewolf: that fixed that part. Thanks!
22:04 vexati0n i need to match the scheduling items (minute, hour, etc), but salt acts as if the array i'm passing it doesn't match those... even though there's no possible other way to interpret what i'm giving it
22:05 whytewolf doubletwist: no problem. basically osmajorrelease is a number, you were comparing it to a string
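A sketch of the corrected comparison; on 2017.7 osmajorrelease is an integer, so 7 and 6 are unquoted (the package names are illustrative):

    {% if grains['os_family'] == 'RedHat' and grains['osmajorrelease'] == 7 %}
      {% set pkgs = ['man-db', 'vim-enhanced'] %}
    {% elif grains['os_family'] == 'RedHat' and grains['osmajorrelease'] == 6 %}
      {% set pkgs = ['man', 'vim-enhanced'] %}
    {% else %}
      {% set pkgs = ['vim-enhanced'] %}
    {% endif %}

    base_packages:
      pkg.installed:
        - pkgs: {{ pkgs }}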
22:05 astronouth7303 vexati0n: kwargs or pargs should work, if the pargs match the documented order?
22:05 * astronouth7303 checks the source
22:06 whytewolf vexati0n: does it work outside of salt?
22:06 whytewolf errr salt-api not salt
22:06 vexati0n it works on cli
22:06 astronouth7303 yeah, args in the source match args as documented
22:07 astronouth7303 vexati0n: are you familiar with python calling conventions?
22:07 vexati0n not especially
22:07 squishypebble joined #salt
22:08 doubletwist whytewolf: Now why would it show as an error now for the packages that are already installed?
22:08 astronouth7303 vexati0n: basically, there's positional arguments and keyword arguments. positional arguments are in order, and keyword arguments by name. pargs always fill left-to-right, and kwargs fill in the remainder. They can't conflict.
22:08 vexati0n sure
22:08 vexati0n nothing is conflicting here
22:08 vexati0n it doesn't complain about arguments, it just doesn't do anything.
22:09 vexati0n it says "absent" as if the thing that very much does in fact exist, doesn't exist.
22:09 whytewolf doubletwist: error in what way?
22:09 whytewolf doubletwist: does name given match how linux package manager of choice has the name listed in it's database?
22:10 astronouth7303 vexati0n: i believe salt-api accepts a list of pargs on `arg` and a mapping of kwargs on `kwarg`
22:10 vexati0n yes
22:10 vexati0n both approaches do the same thing
22:10 doubletwist it's giving me this error: http://paste.lopsa.org/paste
22:10 vexati0n a list in the right order in "arg", or an array of keys and values in kwarg
22:10 astronouth7303 vexati0n: ok, good. it's being consistent
22:10 vexati0n either way, it just says 'congrats i did the thing' but doesn't do the thing
22:11 whytewolf wrong link doubletwist
22:11 doubletwist first time it ran it listed about half the packages [which were already installed]. Now that all the packages are installed it shows the error for all 25 listed
22:11 doubletwist http://paste.lopsa.org/190
22:11 doubletwist sorry long day
22:12 whytewolf a package named man?
22:12 doubletwist yes, contains man pages
22:12 astronouth7303 vexati0n: if you ask cron.raw_cron or cron.list_tab, does it return what you expect?
22:12 doubletwist or rather the binary to read them
22:12 whytewolf ...
22:12 squishypebble joined #salt
22:13 whytewolf on what os?
22:13 vexati0n astronouth7303: yes, i am retrieving a list of cron items as i'd expect
22:13 doubletwist EL7
22:13 vexati0n with list_tab
22:13 doubletwist and EL6
22:13 doubletwist there's also a 'man-pages' package
22:13 astronouth7303 vexati0n: including the item you're trying to remove?
22:13 astronouth7303 huh
22:13 astronouth7303 i've never had a problem with discrepancies between `salt` and salt-api
22:13 astronouth7303 (i'm using it every day)
22:14 whytewolf doubletwist: you mean man-db?
22:14 whytewolf because man. doesn't exist
22:14 doubletwist # rpm -qa man
22:14 doubletwist man-1.6f-39.el6.x86_64
22:14 vexati0n yes, it lists the item just as with anything else. it isn't even a complicated line, it's literally just "50 1 1 * * echo "test" > /dev/null 2>&1"
22:14 whytewolf interesting.
22:14 whytewolf does yum show man?
22:15 doubletwist ok I see
22:15 doubletwist exists in EL6 not in 7
22:15 whytewolf btw. yes that is the man pages not the program to read them
22:15 whytewolf the program that reads them is insert pager of choice
22:16 usernkey joined #salt
22:16 doubletwist I see, I was misreading the error
22:16 doubletwist my bad
22:16 whytewolf no problem :)
22:16 doubletwist removed that package from the list of EL7 packages and it works with no error.
22:16 whytewolf yay!
22:16 doubletwist And I think that's my sign to sign off for the weekend and pick it up on Monday :)
22:16 doubletwist Thank for all the assistance here
22:17 whytewolf always good to end on a high note
22:17 whytewolf have a good weekend
22:17 astronouth7303 vexati0n: i'm not able to reproduce? Are you sure the command is exact? I'm not sure how fuzzy it is in its matching
22:18 astronouth7303 vexati0n: appears to be not fuzzy at all, including about whitespace
22:19 vexati0n yeah. i took the same command and changed it from rm_job to set_job --- which works, and creates *exactly the same line*
22:19 astronouth7303 how exact?
22:19 astronouth7303 whitespace? line endings?
22:20 vexati0n yes. exactly
22:21 vexati0n as exact as you can get without making neil degrasse tyson roll his eyes
22:21 astronouth7303 that seems wrong. Might want to file a github issue about that?
22:21 astronouth7303 if they're byte-for-byte identical and it doesn't recognize the first, that sounds like a bug
22:21 vexati0n i also just noticed that list_tab ... won't list the line salt just created
22:21 vexati0n so i ... give up.
22:22 astronouth7303 even if they're not byte-for-byte, it's still probably a bug, just not a blatantly-wrong one
22:22 cyborg-one joined #salt
22:23 vexati0n ohhhhhhhhh
22:23 vexati0n shite
22:23 vexati0n it's because salt refuses to manage records it didn't create for fear of stepping on someone's toes, i guess
22:24 vexati0n i was only managing jobs in the 'pre' output
22:24 astronouth7303 how would it know, though?
22:24 vexati0n but salt's stuff goes into 'crons'
22:24 astronouth7303 ah
22:24 vexati0n it's based on where salt's warning comment is in the cron file
22:24 astronouth7303 i didn't know there was a distinction
22:24 astronouth7303 ok
22:24 vexati0n everything after the warning, salt will manage
22:24 astronouth7303 that makes sense
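For reference, a sketch of the lowstate being sent through salt-api (rest_cherrypy-style /run call, auth fields omitted); the minion id and user are illustrative, while the job fields follow the cron line quoted above. As worked out here, cron.rm_job only considers entries below Salt's "lines below here are managed by Salt" marker:

    client: local
    tgt: 'someminion'
    fun: cron.rm_job
    arg:
      - root                              # user
      - 'echo "test" > /dev/null 2>&1'    # cmd
    kwarg:
      minute: '50'
      hour: '1'
      daymonth: '1'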
22:27 mjn12 joined #salt
22:33 mjn123 joined #salt
22:34 noraatepernos joined #salt
22:35 nic47222 joined #salt
22:42 Ni3mm4nd joined #salt
22:45 nic47222 left #salt
22:46 themikenicholson joined #salt
22:47 themikenicholson Running into an odd issue where saltstack thinks a managed file is a binary file but only on one minion
22:47 themikenicholson Is there a way I can see the file (which is generated by rendering a jinja template) without actually installing it on the minion?
22:49 Deliant joined #salt
22:49 whytewolf themikenicholson: closest you can do with what comes with salt is cp.get_template which will render it on the minion into a path you choose.
22:49 whytewolf this module does the same thing but returns the rendered info to the master https://github.com/whytewolf/salt-debug
22:51 themikenicholson will give that a try... it's really odd. It fails to commit the managed file because the tempfile it is trying to commit doesn't exist
22:52 whytewolf try clearing the minion cache
23:01 themikenicholson joined #salt
23:02 themikenicholson whytewolf: cleared minion cache with rm -rf /var/cache/salt/minion/files/base/*
23:03 themikenicholson then reran salt-call state.apply, same issue
23:03 whytewolf did you stop and start the minion before and after that?
23:03 themikenicholson ah, no
23:04 whytewolf also, check to make sure you are not running into a linux security thing such as selinux or apparmor
23:05 whytewolf if after all that, use salt debugging with -l all to see if any more info can be gleaned. and post a ticket in salt's github issues
23:15 tapoxi_ joined #salt
23:27 sh123124213 joined #salt
23:29 Guest73 joined #salt
23:33 ssplatt joined #salt
23:43 Ni3mm4nd joined #salt
