
IRC log for #salt, 2017-08-08


All times shown according to UTC.

Time Nick Message
01:23 usernkey joined #salt
01:26 noobiedubie joined #salt
01:52 ilbot3 joined #salt
01:52 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.6, 2017.7.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers <+> The call for speakers for SaltConf17 is now open: http://tinyurl.com/SaltConf17
02:13 zerocool_ joined #salt
02:14 zerocool_ joined #salt
02:14 onlyanegg joined #salt
02:16 dh joined #salt
02:18 _KaszpiR_ joined #salt
02:19 matti joined #salt
02:19 matti joined #salt
02:20 Ahlee joined #salt
02:23 gtmanfred joined #salt
02:23 gnomethrower joined #salt
02:24 rome_390 joined #salt
02:24 ekkelett joined #salt
02:25 Puckel_ joined #salt
02:25 vexati0n joined #salt
02:25 NightMonkey joined #salt
02:32 stevednd basepi: are you still part of the salt team?
02:36 keldwud joined #salt
02:41 onlyanegg joined #salt
02:42 gnomethrower joined #salt
02:50 onlyanegg joined #salt
02:54 evle2 joined #salt
02:56 neilf__ joined #salt
03:02 onlyanegg joined #salt
03:15 donmichelangelo joined #salt
03:19 onlyanegg joined #salt
03:38 rm_jorge joined #salt
03:47 WINrmMinionInsta joined #salt
03:52 WINrmMinionInsta left #salt
04:01 vexati0n does anyone have any idea why returns from salt-minion on Windows are soooo slow? I have tried setting multiprocessing to False, as I saw in a GitHub issue, but that actually made the problem worse.
04:01 donmichelangelo joined #salt
04:09 Lionel_Debroux_ joined #salt
04:09 sh123124213 joined #salt
04:46 kiorky joined #salt
05:15 mbuf joined #salt
05:18 Bock joined #salt
05:29 felskrone joined #salt
05:29 cyborg-one joined #salt
05:31 Mokilok joined #salt
05:33 sturlik joined #salt
05:41 impi joined #salt
05:43 sturlik joined #salt
05:43 hemebond left #salt
05:46 Ni3mm4nd joined #salt
05:51 sgo_ joined #salt
05:52 oida joined #salt
05:55 zulutango joined #salt
06:00 Mokilok Hey, guys. I'm pretty new to Salt. Does anyone have any suggestions on where I should look for best practices for managing multiple separate environments with it (e.g. as an MSP)?
06:06 Ni3mm4nd_ joined #salt
06:20 sturlik joined #salt
06:23 do3meli joined #salt
06:23 do3meli left #salt
06:24 Mokilok joined #salt
06:26 Mokilok joined #salt
06:28 gmoro joined #salt
06:34 jhauser joined #salt
06:35 sh123124213 joined #salt
06:43 high_fiver joined #salt
07:03 sturlik joined #salt
07:04 Hybrid joined #salt
07:05 babilen joined #salt
07:08 frdm joined #salt
07:11 o1e9 joined #salt
07:14 usernkey joined #salt
07:42 robin2244 joined #salt
07:43 sturlik joined #salt
07:53 robin224_ joined #salt
07:54 robin224_ joined #salt
07:56 preludedrew joined #salt
08:09 robin2244 joined #salt
08:09 pbandark joined #salt
08:11 robin2244 joined #salt
08:12 robin2244 Hi all, can someone help? I want to check whether a grain value begins with a given prefix, but I can't find the right syntax in the documentation
08:12 robin2244 {% if grains['ip4_interfaces']['eth0'] == '192.168%' %} -> for example, something like this: I want to check the eth0 address, and if it begins with 192.168 I will run one file.replace, otherwise I will run a different pattern/replace in the file.replace
08:14 Ricardo1000 joined #salt
08:14 mbuf left #salt
08:15 mike25de robin2244: hi. off the top of my head... i can think of this: ... i will do a pastebin..
08:20 babilen robin2244: Are you sure you want to hardcode eth0 and compare networks based on string? You could use network.in_subnet as a much more portable solution.
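babilen's network.in_subnet suggestion might look roughly like this in an sls file (the CIDR, target file, pattern, and replacement are all invented for illustration):

```jinja
{# True if any interface on this minion sits inside the given CIDR #}
{% if salt['network.in_subnet']('192.168.0.0/16') %}
internal-resolver:
  file.replace:
    - name: /etc/resolv.conf          {# illustrative target file #}
    - pattern: '^nameserver .*'
    - repl: 'nameserver 192.168.1.1'
{% endif %}
```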
08:20 mike25de https://gist.github.com/anonymous/1afdf704960a4c19976ed81ce352fb5b
08:21 mike25de babilen: awesome :)   robin2244 : babilen always has the best solution. My pastebin is just a workaround for your hardcoded stuff.
08:22 babilen It depends, but with https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/ and networks being specified with CIDR anyway, I thought I'd mention it
08:22 mike25de well done babilen ! thanks for the tip
08:23 babilen https://docs.saltstack.com/en/latest/topics/jinja/index.html#std:jinja_ref-regex_match is a new feature in 2017.7.1 that might come in handy too
08:23 babilen err, 2017.7.0
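If the filter works like re.match (anchored at the start of the string), robin2244's prefix test might be written like this on 2017.7.0; note that ip4_interfaces values are lists, so this picks the first address (an assumption on my part):

```jinja
{% if grains['ip4_interfaces']['eth0'][0] | regex_match('192\.168\..*') %}
  {# ...internal-network pattern/replace goes here... #}
{% endif %}
```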
08:26 mike25de babilen: it's always a pleasure to get tips from you... I HAD NO idea about that page... :P
08:27 mike25de lucky i just upgraded to 2017.7.0
08:27 babilen Ah, enjoy all the new features then :)
08:31 Mattch joined #salt
08:36 robin2244 Thanks a lot @babilen @mike25de i will try :-)
08:54 Mokilok joined #salt
09:41 zerocool_ joined #salt
09:43 zerocool_ joined #salt
09:56 N-Mi joined #salt
09:56 N-Mi joined #salt
09:58 DammitJim joined #salt
09:59 Tucky joined #salt
10:06 aldevar2 left #salt
10:09 TyrfingMjolnir_ joined #salt
10:09 mike25de i am a bit confused about  onchanges and watch statements... i am not sure why i should use watch anymore...? can someone enlighten me? I am reading the docs but still not sure :)
10:11 ahrs joined #salt
10:12 smartalek joined #salt
10:13 lorengordon joined #salt
10:14 noirgel joined #salt
10:14 babilen mike25de: They differ in their behaviour as to when the state is executed. onchanges would only execute if the other state has changes -- https://docs.saltstack.com/en/latest/ref/states/requisites.html#requisite-overview summarises those differences
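A minimal sketch of that difference, assuming a file.managed state with the ID nginx-conf exists elsewhere in the sls (all names and the command here are invented):

```yaml
nginx:
  service.running:
    # watch: this state runs on every highstate; if nginx-conf additionally
    # reports changes, the service is restarted (mod_watch behaviour)
    - watch:
      - file: nginx-conf

rebuild-cache:
  cmd.run:
    - name: /usr/local/bin/rebuild-cache   # illustrative command
    # onchanges: this state runs ONLY when nginx-conf reports changes
    - onchanges:
      - file: nginx-conf
```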
10:15 noirgel Hi
10:15 noirgel I'm still unable to fix the issue I mentioned the other day
10:15 babilen mike25de: They also differ in that a state would fail if a required/watched state fails
10:15 noirgel I tried with both 2017.7.0 and 2016.11.6, and both are producing this error when I set a master_job_cache.
10:15 noirgel https://gist.github.com/uaalto/a7983f8f547894dbcab5924c5f5d2a22
10:16 jwon_ joined #salt
10:17 zulutango joined #salt
10:17 rathier joined #salt
10:18 descrepes_ joined #salt
10:18 jerrcs joined #salt
10:20 zerocool_ joined #salt
10:23 zerocool_ joined #salt
10:23 mike25de thanks babilen !
10:42 zerocool_ joined #salt
11:10 noirgel Any tip on how I could debug that?
11:16 babilen noirgel: It's not entirely clear how/why you triggered that
11:30 honestly anybody here have ubuntu 16.04 (with python 2.7.12) *and* 17.04 (with python 2.7.13) around to help me test a bug repro? I think I found a bug that only happens on minions running python 2.7.12, just want someone to double-check that for me
11:33 honestly here's a repo that you can check out and run "salt-ssh localhost ..." from: https://github.com/duk3luk3/salt-ssh-minimal/tree/salt-bug-jinja-color-codes
11:33 honestly run state.sls colorcode.{good,bad,motd}
11:35 honestly and tell me if you just get jinja errors or if you get "Unable to serialize output to json"
11:36 honestly the only way I can reproduce it is with colorcode.motd and ubuntu 16.04...
11:46 LeProvokateur joined #salt
11:48 rgrundstrom babilen: are you here
11:48 rgrundstrom ?
11:51 babilen hjälp!
11:52 mike25de :)
11:52 rgrundstrom :)
11:52 rgrundstrom That is one way of telling where you are from :)
11:55 rgrundstrom https://gist.github.com/anonymous/f0bc647b982ef1d45f689b55f2a099d8#file-gistfile1-txt Anyone that can tell me what is wrong with my config here? Error in the bottom.
11:57 mike25de rgrundstrom: maybe for install_nrpe_pushconf you need to add a require... so it is triggered only once make_nrpe_dir has been created
11:58 babilen rgrundstrom: You want https://gist.github.com/anonymous/f0bc647b982ef1d45f689b55f2a099d8#file-gistfile1-txt-L86-L89 to be individual - file: '/etc/nagios/nrpe.cfg' entries
11:58 mike25de i think it is best practice to use require and onchanges / watch
11:58 babilen ? lunch
11:59 Reverend enjoy!
11:59 Reverend me too :D time for greggs <#
11:59 Reverend <3 *
11:59 mike25de rgrundstrom: and salt.states.file.managed(name, source=None, ... not names
12:01 babilen Reverend: Greggs, really?
12:01 rgrundstrom Well mike25de, Reverend: You are both wrong actually :) adding "file:" to each file under watch actually got it working :)
12:02 babilen \o/
12:02 zerocool_ joined #salt
12:03 babilen rgrundstrom: You can think of state references as tuples, namely <state module, state id/name> -- This also explains why you can only have a single "file/service/..." state under the same ID
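In other words, a watch entry names that tuple of state module and ID (the IDs and paths below are invented):

```yaml
nrpe-config:
  file.managed:
    - name: /etc/nagios/nrpe.cfg
    - source: salt://nrpe/nrpe.cfg     # illustrative source

nrpe-service:
  service.running:
    - name: nrpe
    - watch:
      - file: nrpe-config    # resolves to the tuple (file, nrpe-config)
```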
12:03 N-Mi joined #salt
12:03 N-Mi joined #salt
12:04 mike25de so you can use names to chain together? - i did not know that :)
12:04 zerocool_ joined #salt
12:05 mike25de i did not see the "names" in the docs :P
12:06 rgrundstrom mike25de: A lot of trial and error before i got that running :)
12:07 mike25de rgrundstrom: well done! :)
12:07 KingOfFools is there any API to get a colorful yaml dump of a salt job result?
12:07 rgrundstrom mike25de: Putting all the file.managed together saved me a lot of coding.
12:09 mike25de rgrundstrom:  I SEE ... i have to remember this :P
12:13 noirgel babilen: just doing a highstate or even test.ping will trigger the issue, as long as the rawfile_json returner is set as the master_job_cache
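For reference, the master setting noirgel is describing is just this line in the master config:

```yaml
# /etc/salt/master
master_job_cache: rawfile_json
```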
12:16 numkem joined #salt
12:29 mage_ joined #salt
12:34 ssplatt joined #salt
12:52 Ni3mm4nd joined #salt
12:53 robin2244 joined #salt
12:54 Ni3mm4nd joined #salt
12:54 robin2244 Hi, it's me again :-S ^^
12:55 robin2244 Can someone say how to replace text with file.replace based on grains?
12:56 robin2244 like this (it's just a concept ^^) -> https://gist.github.com/anonymous/b6705dd07d413854987da1701e43b110
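Fleshing out robin2244's concept, one hedged sketch (the file path, pattern, and replacement values are all invented) could be:

```jinja
{# pick a replacement based on the first eth0 address #}
{% if grains['ip4_interfaces']['eth0'][0].startswith('192.168.') %}
{%   set repl = 'listen 192.168.0.10' %}
{% else %}
{%   set repl = 'listen 0.0.0.0' %}
{% endif %}

fix-listen-address:
  file.replace:
    - name: /etc/myapp/myapp.conf      {# illustrative file #}
    - pattern: '^listen .*'
    - repl: '{{ repl }}'
```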
13:04 Reverend babilen yes greggs! they do the best chicken baguettes :D
13:14 mike25de can one return the Result of a state to a jinja variable?
13:25 simonuk1 joined #salt
13:26 N-Mi joined #salt
13:27 _JZ_ joined #salt
13:30 Puckel_ joined #salt
13:34 robin2244 mike25de : is that for me ?
13:34 pbandark joined #salt
13:38 simonuk1 Hi, this may be a silly question. I have searched Google, but how do you use modules that aren't the built-in ones? What should I be searching for in the docs etc.?
13:39 robin2244 I have tried this too https://gist.github.com/anonymous/5a471f5cd36b40a23e2f89f6a0c60fa5 but I think there is one thing I don't understand
13:43 pualj joined #salt
13:45 racooper joined #salt
13:45 justanotheruser joined #salt
13:46 justanotheruser joined #salt
13:48 kiorky joined #salt
13:57 Ni3mm4nd joined #salt
14:02 cgiroua joined #salt
14:02 edrocks joined #salt
14:07 Kelsar joined #salt
14:07 scooby2 joined #salt
14:09 high_fiver joined #salt
14:10 DammitJim joined #salt
14:11 edrocks joined #salt
14:13 simonuk1 Hi, this may be a silly question. I have searched Google, but how do you use modules that aren't the built-in ones? What should I be searching for in the docs etc.?
14:14 Ahlee joined #salt
14:24 Naresh joined #salt
14:25 dunz0r simonuk1: Start here https://docs.saltstack.com/en/latest/ref/modules/
14:27 stevednd whytewolf: I found this issue yesterday from 11/2014 regarding that orchestration require issue just in case you were curious https://github.com/saltstack/salt/issues/18564
14:28 stevednd setting gather_job_timeout to a higher value helps, but probably isn't foolproof
14:36 fatal_exception joined #salt
14:38 swills joined #salt
14:38 swills joined #salt
14:39 ekristen joined #salt
14:40 KingOfFools Hey guys. I have an orch file in which I'm running a few states. I'm passing pillar to the states, and I'm passing pillar to the job when I run that orch file (salt-run state.orch orchfile pillar='{"blabla": "blabla"}'). So it looks like I should first do {% set pil = salt['pillar.get']('blabla') -%} in the orch file and then the same thing in the state file that is run by that orch file, right?
14:41 KingOfFools And pass pillar from orch to each state.
14:42 high_fiver joined #salt
14:43 KingOfFools But I don't really need the pillar in the orch file. I need it in the states it runs.
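One reading of KingOfFools' setup: the orchestrate runner receives the override pillar, and salt.state can forward it explicitly, so the minion-side states see it as ordinary pillar (the sls and target names are invented):

```jinja
# orch/deploy.sls, run with:
#   salt-run state.orch orch.deploy pillar='{"blabla": "value"}'
deploy-web:
  salt.state:
    - tgt: 'web*'
    - sls: webapp
    - pillar:
        blabla: {{ salt['pillar.get']('blabla') }}
```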
14:44 swills joined #salt
14:44 swills joined #salt
14:47 swills XenophonF: pyvpx: OMG new FreeBSD is so much better than 4.x
14:47 swills ps, i didn't file that bug, see the Reported by line. i just claimed it for commit
14:48 smartalek joined #salt
14:50 aldevar joined #salt
14:50 KingOfFools swills: are you using FreeBSD 4? o_O
14:50 aldevar left #salt
14:50 swills KingOfFools: no, why?
14:51 swills i mean, i did, years ago, but no, not now, why would i do that? why would you ask that?
14:52 KingOfFools swills: I was just wondering why would you use FreeBSD nowadays :) nvm, i guess, sorry
14:52 KingOfFools swills: FreeBSD 4*
14:52 swills KingOfFools: well, it's a really great OS now. so many great features now.
14:54 KingOfFools swills: I was wondering about 4 version of FreeBSD. It's super old now, right? Tahoe, Reno, all that stuff
14:55 swills KingOfFools: Tahoe and Reno are releases of BSD 4.x, which is not even the same thing as FreeBSD.
14:55 KingOfFools swills: oh yeah, right. My bad. But FreeBSD 4 is still super old, isnt it?
14:55 swills 4.3BSD-Tahoe came out in 1988. FreeBSD 4.3 came out in 2001.
14:56 swills i mean, that's 13 years apart. and yes, 4.x is now very old and out of date.
14:57 swills that said, i did hear of someone who does hosting recently who still has a customer running it. i was as horrified as you are. I mean, so many security issues, not just in the OS... and they're missing out on so much...
14:58 swills i run 12-CURRENT and update monthly, personally
14:59 Ni3mm4nd joined #salt
14:59 KingOfFools swills: yes, that's what i meant. Donno why I brought BSD here. u_u
15:00 sarcasticadmin joined #salt
15:00 swills KingOfFools: i mentioned it responding to XenophonF and pyvpx who were discussing a salt bug
15:01 KingOfFools swills: i just heard that out of context and was kinda interested, sorry :D
15:01 LeProvokateur joined #salt
15:01 swills KingOfFools: no need to apologize. :)
15:01 pualj joined #salt
15:03 jmiven joined #salt
15:03 robin224_ joined #salt
15:06 high_fiver joined #salt
15:06 raspado joined #salt
15:07 noirgel joined #salt
15:09 fatal_exception joined #salt
15:10 onlyanegg joined #salt
15:15 willprice joined #salt
15:16 noirgel_ joined #salt
15:16 LeProvokateur joined #salt
15:18 noirgel joined #salt
15:19 robin2244 joined #salt
15:20 lordcirth_work #offtopic very handy bash history tweak: http://northernmost.org/blog/flush-bash_history-after-each-command/
15:21 pualj_ joined #salt
15:29 Brew joined #salt
15:31 XenophonF swills: FreeBSD 4.x is a bit... dated.
15:32 swills XenophonF: indeed
15:32 mpanetta joined #salt
15:33 XenophonF I'm getting ready to upgrade my Salt master to FreeBSD 11.
15:33 XenophonF Actually migrating from a VM to a physical server.
15:34 XenophonF newcons kind of surprised me
15:35 bowhunter joined #salt
15:35 XenophonF kids and their graphical boots, harumphf I say, HARUMPHF
15:37 pualj_ joined #salt
15:39 Inveracity joined #salt
15:40 fritz09 joined #salt
15:44 edrocks joined #salt
16:16 stanchan joined #salt
16:18 mschroeder joined #salt
16:20 stevednd has anyone experienced weird return values when running cmd.run? I'm running "pkill -f scheduler" and salt is saying that it failed with return code -15. If I run it manually on the minion it returns just fine with a 0 or 1. -15 isn't even a valid pkill return code according to its man page
16:25 stevednd I should also note that salt is running the command, as the process is indeed being killed. salt just thinks something is wrong for some reason
16:26 Edgan joined #salt
16:27 smartalek joined #salt
16:27 onlyanegg joined #salt
16:30 Lionel_Debroux_ joined #salt
16:32 pualj_ joined #salt
16:40 babilen stevednd: pkill probably kills itself as you match on scheduler (which is also in the pkill command)
16:41 babilen And you then get the signal in question
16:41 babilen Try running "pkill -9 -f scheduler" -- Should get "9" back
16:42 babilen Why do you use "-f" ?
16:45 edrocks joined #salt
16:45 babilen pkill sends SIGTERM / 15 by default, so that would match
16:45 leonkatz joined #salt
16:46 leonkatz Does anyone know where I can find documentation for saltstack enterprise?
16:46 stevednd babilen: without -f it does not kill the process
16:46 noraatepernos joined #salt
16:46 stevednd I don't fully understand why, but that was the only way I could kill it
16:47 babilen stevednd: What's the process's /proc/pid/stat ?
16:47 babilen (the one you want to kill)
16:47 babilen And do you get 9 back if you run pkill with that signal?
16:48 stevednd actually it's pkill -f resque-scheduler that I'm running. I forgot the resque- part before in my haste
16:48 stevednd do I get that on the command line, or from salt?
16:48 whytewolf stevednd: most likely scheduler is a script or something where the process is not actually named scheduler. the -f looks for your command in the full command instead of just the process name
16:48 stevednd on command line pkill works as expected and returns 0
16:48 babilen Which, I assume, also matches the cmdline of the Python process run via subprocess
16:49 babilen So, run it with "-9" and observe the behaviour and give us the process's /proc/pid/stat (of the one you want to kill)
16:49 babilen Or just run pgrep to see which processes are matched
16:50 babilen In fact, do all these things :)
16:52 Ni3mm4nd joined #salt
16:59 stevednd babilen: https://gist.github.com/dnd/dce75cdc164efbbf1847c1642b30f8cd
16:59 swills XenophonF: haha :)
17:00 swills XenophonF: wait until you see the silly patch Adrian did to put Beastie pictures up for the number of CPU cores :D
17:01 wendall911 joined #salt
17:02 stevednd I think what whytewolf said is the case for needing the -f, because of the way it's run: `bundle exec rake environment resque:scheduler`
17:02 bildz I'm trying to create an orchestrator sls file to kill a process id:  https://pastebin.com/Z7kv5f9F    I've been following the docs and am not sure where I am going wrong.  Anyone have a second to check?
17:03 stevednd if that's the case is there any option other than to ignore the pkill command result since it's likely clobbering itself?
17:04 whytewolf bildz: - arg: is indented too much, it should be at the same indentation level as - tgt
17:04 bildz whytewolf: absolutely correct!  Thank you it works!
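For anyone hitting the same trap, the fixed shape is roughly this (the target and command are invented, following whytewolf's indentation point):

```yaml
kill-process:
  salt.function:
    - name: cmd.run
    - tgt: 'target-minion'
    - arg:                       # same indentation level as tgt
      - pkill -f some-process
```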
17:04 babilen stevednd: You'd kill "bundle", but that might overgenerate too
17:05 babilen stevednd: The reason why it "fails" though is as I suspected: You also kill the process that's running the pkill
17:05 stevednd yeah, there's a bunch of other stuff thats started up with "bundle", so that's a no go
17:05 babilen stevednd: Maybe consider making this a "proper" service (with systemd unit file and shit)
17:06 babilen Dinner now .. good luck!
17:08 noraatepernos joined #salt
17:08 babilen Problem is that whatever you match on will inadvertently be present in the cmdline of the process started by subprocess, so "pkill -f" via salt will always result in this
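The -15 itself is a Python subprocess convention rather than a pkill exit code. A small sketch, assuming a Linux box with bash, shows a child killed by SIGTERM (exactly what happens when pkill matches its own shell) being reported with a negative returncode:

```python
import signal
import subprocess

# Simulate pkill's pattern matching its own shell: the child is
# terminated by SIGTERM instead of exiting normally.
proc = subprocess.Popen(['bash', '-c', 'kill -TERM $$'])
proc.wait()

# Python reports "killed by signal N" as returncode -N, hence Salt's -15.
print(proc.returncode)
```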
17:10 ChubYann joined #salt
17:17 stevednd right, that's what I was saying above with the clobbering
17:17 stevednd or my dumb self could just use the module function ps.pkill
17:18 stevednd we're not going to talk about why I wasn't using that in the first place...
17:18 sjorge joined #salt
17:29 DammitJim joined #salt
17:34 nixjdm joined #salt
17:36 wendall911 joined #salt
17:37 ecdhe I was watching a host interface using wireshark and saw that one of the VMs was trying to resolve the AAAA record for host "salt".  I shutdown all my VMs and it went away.  Reenabling them one at a time showed it's the salt master host that's originating these queries.
17:37 ecdhe The queries are every 60 seconds.
17:37 ecdhe I can restart the master's minion, but the log doesn't show anything like "can't find host."
17:38 ecdhe I added ipv6: False to /etc/salt/minion, but the machine still issues the queries.
17:39 ecdhe How can I get it to stop?
17:39 lordcirth_work ecdhe, does the minion have an IPv6 address, even a link-local?  If IPv6 is on but unconfigured, disabling it would probably stop the attempt.
17:41 ecdhe yes, it does have a link-local address. The AAAA query itself is over v4.
17:41 pabloh007fcb Hello everyone, I seem to get an error when trying to use the docker_network.present salt state. I'm on the latest version of salt, 2017.7.0: https://gist.github.com/pabloh007/df28da12c0f3e1dec9b01d7843dd25c5
17:45 leonkatz joined #salt
17:48 ecdhe Interestingly, if I stop the salt-minion on the salt master, the 'AAAA salt' queries continue.  But they stop when I disable the salt-master (systemctl stop salt-master.service)
17:52 raspado joined #salt
17:52 smartalek joined #salt
17:55 sjorge joined #salt
18:00 LostSoul joined #salt
18:06 edrocks joined #salt
18:07 colabeer joined #salt
18:07 ecdhe Why would salt-master need to reach 'salt' over ipv6?
18:08 astronouth7303 ecdhe: what version?
18:08 ecdhe When I issue "systemctl start salt-master" I see 45 queries all issued at once for the AAAA record of 'salt' .  I only have 4 minions.
18:09 ecdhe astronouth7303: 2017.7.0
18:16 robin2244 joined #salt
18:17 spiette joined #salt
18:17 ecdhe astronouth7303: nothing's broken, but seeing that dns request makes me think I'm one answered-query away from getting pwnd
18:19 astronouth7303 i can't imagine why the _master_ is querying for itself
18:19 astronouth7303 (`salt` is the default master if none is configured)
18:21 ecdhe further, the host that the salt master is running on has a proper fqdn of the form salt.domain.com
18:28 ssplatt maybe add ‘127.0.0.1 salt.domain.com salt localhost’ to your /etc/hosts
18:29 ssplatt or ‘search domain.com’ to your resolv.conf
18:31 ssplatt doesn’t explain why the master process is doing a dns search for itself. i’d only expect teh minion to look for ‘salt’ to find the master.
18:35 ssplatt could be grains or the mine doing something, but grains should also be through the minion process. not sure about the mine
18:35 whytewolf mine and grains are both through the minion
18:35 ssplatt thought so
18:36 whytewolf only thing i can think of is tying the pki to the masters domain.
18:37 ssplatt the other thing you could do, if you aren’t using ipv6 is to disable it completely in sysctl
18:37 willprice joined #salt
18:40 nixjdm joined #salt
18:41 robin2244 joined #salt
18:44 pualj_ joined #salt
18:55 XenophonF anyone here use boto_s3_bucket?
18:55 XenophonF it's failing to create/update buckets with a 403 error about a HeadBucket operation
18:57 twooster XenophonF: you probably don't have the right bucket permissions?
18:58 twooster never used it from salt, but have certainly ran into that in other conditions
18:59 XenophonF the salt master had administrator access via an instance profile
19:00 XenophonF i seem to be able to replicate the error using the awscli and a separate IAM account
19:00 XenophonF probably not a salt issue, then
19:01 twooster Maybe check if the bucket has an explicit deny policy set on it
19:01 twooster That's overriding the allow in the instance profile. Good luck :)
19:02 XenophonF the bucket doesn't exist yet
19:04 pualj_ joined #salt
19:08 XenophonF oh
19:08 XenophonF I think I got it.
19:08 XenophonF hm, no
19:09 brianthelion joined #salt
19:09 XenophonF I set s3.key/s3.keyid/s3.buckets on all minions, but on the master, I override key/keyid to be `use-instance-role-credentials`.
19:09 XenophonF So that's not it.
19:10 edrocks joined #salt
19:10 XenophonF A HEAD Bucket operation should return 404 (not found).
19:14 XenophonF hm, maybe that is it
19:14 robin2244 joined #salt
19:14 XenophonF yeah, i think this is a configuration issue
19:15 ecdhe ssplatt: removing '127.0.0.1  salt  salt' from /etc/hosts introduces loads of queries for salt  A and AAAA records.
19:15 high_fiver joined #salt
19:15 ecdhe Looks like the hosts file is the surest way to suppress this.
19:17 ssplatt ecdhe: i said ‘add’ i thought
19:18 ssplatt if its just doing searches for “salt” then you don’t have your search domain set too, i believe. which was the line to add in resolv.conf
19:19 ecdhe ssplatt: I removed the v4 'salt' entry to test, but I added ::1 to resolve the issue.
19:19 ecdhe I don't want to rely on the search domain.  I don't want my master EVER reaching out to another server for help finding itself.
19:21 lordcirth_work Personally I don't leave any minions unconfigured such that they reach out to 'salt'.  I put the master hostname in the config before start.
19:21 ssplatt that’s when you make a dummy for your internal stuff and tell your dns server to never forward requests for the dummy domain.  like, everything is internal.local.
19:23 ssplatt we also set our master: but there have been instances when things don't go as planned and the minion ends up looking for 'salt'
19:23 ssplatt anyway.
19:23 ssplatt avoidable with cnames too
19:27 Morrolan joined #salt
19:27 ecdhe lordcirth_work: I'm the same as you: all my minions are configured.  This is the salt-master process that is "reaching out" for "salt"
19:28 whytewolf did it actually do anything other then a dns query?
19:30 aboe[m] joined #salt
19:30 ecdhe whytewolf: no, but usually you can't open a socket 'til you know the IP... I could set an IP in the hosts file and see what it does next, I suppose!
19:31 whytewolf ...
19:32 whytewolf you gave it an ip
19:33 whytewolf [localhost]
19:35 ecdhe whytewolf: yes, but my current wireshark instance isn't positioned to debug traffic on localhost.  I just set "salt" to 10.10.10.10.  Now I can watch to see if the master generates any packets in that direction.
19:36 whytewolf ... could have just tcpdumped and saved packets. then read that packet dump into wireshark.
19:36 whytewolf but what ever works for you
19:38 whytewolf even a cursory netstat would have shown tcp based connection attempts.
19:38 ecdhe netstat -an
19:39 ecdhe I know how to use tcpdump too.  But I had this wireshark window up, and it didn't require me to scp pcapng files.
19:39 ecdhe ...
19:40 nixjdm joined #salt
19:40 ecdhe there does not appear to be ANY traffic generated to the 10.10.10.10 'salt' address, even after setting it in the hosts file and restarting salt-master.
19:40 whytewolf didn't think there would be
19:40 leonkatz joined #salt
19:41 ecdhe So why try to resolve it every 60 seconds?
19:41 whytewolf every 60 seconds i don't know. but socket connections are not the only reason to know what the ip address of a domain name is
19:42 ecdhe Really?
19:42 whytewolf yes. really. i have seen too many programs use it as validation of a binding address. or as a "what is my ip / here is my host" check
19:43 ecdhe ...or honeypot environment detection.
19:44 vtolstov joined #salt
19:44 ecdhe I'd prefer that a process like this should operate under the parameters I give it, rather than autodiscovering new ones.
19:45 smartalek joined #salt
19:45 whytewolf i once said that exact same thing to the programer of SABnzb
19:45 vtolstov Hi! i'm newbie to salt (comes from chef) i have 5 data centers with different networks slightly different os versions. and mostly identical group of servers with roles like db-cluster, gateway-servers, web-servers, dns-servers, compute nodes.
19:45 eightyeight joined #salt
19:45 vtolstov in chef world - i have cookbooks and override dc specific stuff via attributes.
19:45 whytewolf although he actually was overriding it if the ip i gave didn't match the dns address
19:46 vtolstov in salt world i have states, pillar, grains...
19:46 whytewolf don't forget modules ...
19:46 whytewolf and orchestration
19:47 whytewolf and beacons
19:47 whytewolf and reactors
19:47 vtolstov yes,
19:47 whytewolf and engines.
19:47 whytewolf and i can go on
19:47 vtolstov where i need to put dc and role specific stuff?
19:47 vtolstov in pillar ?
19:47 whytewolf bingo
19:47 vtolstov so i want states like ipxe
19:48 vtolstov in one dc i have 192.168 new
19:48 vtolstov in other 172.16
19:48 vtolstov *new -> net
19:48 whytewolf sounds like you want ipcidr style matching for dc info
19:48 whytewolf https://docs.saltstack.com/en/latest/topics/targeting/ipcidr.html
19:48 vtolstov i'd prefer regexp matching maybe, or grains role stuff
19:49 whytewolf ...
19:49 whytewolf please do not go down the "grains roles" route
19:49 vtolstov why ?
19:50 pualj_ joined #salt
19:50 whytewolf security. grains can be changed on a minion, and the master is your source of truth. if you have anything security based in pillar and a grain match that selects it, anyone who compromises a minion can data mine all of the pillar data
19:50 aboe[m] vtolstov:  you could try something like https://github.com/bbinet/pillarstack
19:53 vtolstov whytewolf: targeting on ip is difficult for me - i sometimes need to assign an ip based on role, and sometimes i need to look up by role and get an ip address
19:54 vtolstov aboe[m]: nice but as i understand have security related problems if minion compromised?
19:54 whytewolf only if targetting by grains.
19:54 whytewolf maybe think about nodegroups
19:54 whytewolf https://docs.saltstack.com/en/latest/topics/targeting/nodegroups.html
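A hedged sketch of what whytewolf is pointing at (the CIDRs and group names are invented): nodegroups can combine ipcidr matches with minion-id globs, keeping targeting on master-side data:

```yaml
# /etc/salt/master
nodegroups:
  dc1: 'S@192.168.0.0/16'                # ipcidr matcher
  dc2: 'S@172.16.0.0/12'
  dc1-web: 'S@192.168.0.0/16 and web*'   # ipcidr plus minion-id glob

# top.sls
base:
  dc1-web:
    - match: nodegroup
    - webserver
```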
19:55 whytewolf also a common scheme is hostname manipulation
19:55 vtolstov whytewolf: what you mean?
19:56 whytewolf using a descriptive hostname on a system to encode information about what it does.
19:56 whytewolf then use globbing on the id as a way of determining role info
19:57 vtolstov whytewolf: but what if a user changes the hostname on the minion?
19:57 whytewolf changing the hostname doesn't automatically change the minion_id
19:57 vtolstov for example i have a minion web app, and an attacker gets root on it and changes the hostname and minion_id ?
19:57 whytewolf and once the minion_id is changed a new key has to be issued to that minion.
19:57 vtolstov wow, thats cool
19:57 coredumb the minion_id grain cannot be overridden
19:58 coredumb although the fqdn grain could
19:58 vtolstov but on my test system minion_id equal with hostname...
19:58 vtolstov does this always like that or no?
19:58 coredumb vtolstov: because that's the default
19:59 coredumb you can change minion_id in the config
19:59 coredumb not at the grain level
19:59 vtolstov so if an attacker changes the minion id, how can i catch this on the master ?
20:00 coredumb as whytewolf said, changing minion_id in the config and restarting the minion would require a new key to be accepted on the master
20:01 whytewolf um coredumb id grain CAN be changed
20:01 whytewolf [i just tested this]
20:01 coredumb O_o
20:01 coredumb last time I tested it could not :O
20:01 whytewolf custom grain script
20:02 coredumb oh
20:02 coredumb mmmh
20:02 coredumb sounds like a major issue
20:02 whytewolf you don't need to do that from a master either. but custom scripts take precedence.
20:03 coredumb should be reported imo
20:03 coredumb matching id is supposed to be the only secure matching way ...
20:03 whytewolf you are not matching minion_id if you are matching the grains['id']
20:04 whytewolf they are two different things
20:05 whytewolf https://gist.github.com/whytewolf/4c56147d4d2e5e242b81bad43b3bc9a7
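whytewolf's gist boils down to something like the following custom grain module (the filename and return value here are invented): dropped into _grains/ on a minion, it shadows the core id grain, which is why grain-based targeting can't be trusted.

```python
# _grains/spoof.py -- custom grain scripts load after the core grains,
# so a compromised minion can make grains['id'] say anything it likes.
def spoofed_id():
    # every public function's returned dict is merged into the grains
    return {'id': 'trusted-minion-01'}
```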
20:06 johnkeates joined #salt
20:09 DammitJim joined #salt
20:11 vtolstov last question now for vim: i'm install salt-vim
20:11 vtolstov and when writing own state module i get errors like: E0602 undefined name '__salt__' [pyflakes]
20:11 vtolstov how can i shut it off? =)
20:11 johnkeates what is want
20:11 whytewolf um ...i have never seen that.
20:12 whytewolf want is a desire to acquire something you do not currently possess johnkeates
20:12 XenophonF where is _get_conn() defined or imported in salt/modules/boto_s3_bucket.py?
20:13 pipps joined #salt
20:13 whytewolf i don't see it def
20:14 XenophonF me neither but i think i found the constant i was looking for in salt.utils.aws.IROLE_CODE
20:14 whytewolf although 49-52 describe turning off pylint for valid non-assignment code
20:15 whytewolf so most likely doesn't need to be def :P
20:15 XenophonF I really hate the more magical parts of the Salt codebase.
20:17 whytewolf talk to Ryan_Lane about that one. but i am pretty sure that is python magic not just salt magic.
20:17 XenophonF that's what i mean
20:17 XenophonF python magic
20:18 XenophonF i need to read through the core sources
20:19 XenophonF i don't generally look at salt or salt.utils
20:21 whaity joined #salt
20:21 Ryan_Lane what's up? :)
20:21 Ryan_Lane ah
20:22 vtolstov i check some google links and not find solution =(
20:22 whytewolf vtolstov: pretty sure that salt-vim requires salt
20:23 whytewolf [as it most likely validates through salt]
20:23 whytewolf so it is either don't use it or make sure the python that vim is using has access to salt
20:24 vtolstov i'm sure: i run vim and salt-vim on the same system, and it has salt =)
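The E0602 vtolstov hits comes from pyflakes not knowing that the Salt loader injects `__salt__` at runtime. One common workaround (illustrative, not the only option; a per-line `# noqa` comment works too) is to stub the dunder at module top:

```python
# Stub silences pyflakes' "undefined name '__salt__'" warning; the Salt
# loader replaces this with the real execution-module mapping at runtime.
__salt__ = {}

def touched(name):
    '''Minimal illustrative state function that would use the dunder.'''
    ret = {'name': name, 'result': True, 'changes': {}, 'comment': ''}
    # At runtime this would call an execution module, e.g.:
    # __salt__['file.touch'](name)
    return ret
```

The stub is harmless in production because the loader overwrites the module-level name before any state function runs.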
20:24 Ryan_Lane XenophonF: https://github.com/saltstack/salt/blob/develop/salt/utils/boto.py#L259-L276
20:24 Ryan_Lane https://github.com/saltstack/salt/blob/develop/salt/utils/boto3.py#L296-L313
20:24 whytewolf vtolstov: i do it all the time :)
20:24 XenophonF ah thanks Ryan_Lane
20:24 coredumb whytewolf: for some reason I thought the id grain was linked to minion_id and could not change, and when I tested from the grains config, it did not
20:25 Ryan_Lane XenophonF: https://github.com/saltstack/salt/blob/develop/salt/modules/boto_s3_bucket.py#L104-L107
20:25 Ryan_Lane it's called during the init
20:25 Ryan_Lane note that we basically never directly import things from utils
20:25 Ryan_Lane but use the __utils__ dunder
20:25 Ryan_Lane I wish everything in salt that used utils did this
20:25 vtolstov whytewolf: maybe i need to set some path variables for salt-vim ?
20:26 whytewolf coredumb: nope. it is a script that builds it, which means it is loaded after /etc/salt/grains and will override anything there. and a custom grains script will override the built-in grain script
20:27 coredumb so can't be trusted
20:27 whytewolf vtolstov: i don't use any. but you can see if there is anything funky in my "crappy" vimrc https://github.com/whytewolf/dotfiles/blob/master/vim/vimrc
20:27 jhauser_ joined #salt
20:27 whytewolf coredumb: exactly
20:28 coredumb damn
20:29 patrek joined #salt
20:29 whytewolf the closest i get to targeting with it is to fill in the glob spot in a top file. that way it is still targeting on minion_id, which won't match the grain['id']
20:30 whytewolf although now that i think about it. that could be pretty bad also.
20:30 whytewolf just need someone to put grains['id'] as '*' and it will always match that minion.
20:32 vtolstov whytewolf: thanks i found the root of my issues.
20:33 vtolstov next question - in the chef world i use librarian or berkshelf to get cookbooks from remote git repos
20:33 vtolstov in the saltstack world, as i understand it, i need to use gitfs ?
20:34 coredumb whytewolf: so you can't override from /etc/salt/grains but can from the minion config directly or a custom grain script
20:34 whytewolf you can also check out directly. but i do recommend learning gitfs anyway. it is a useful tool
20:35 whytewolf coredumb: yes. because of the order of operations. /etc/salt/grains is loaded before the internal grains scripts run, and then the custom grains scripts run
20:35 whytewolf basically, config file -> internal scripts -> custom scripts
20:36 XenophonF Ryan_Lane: I wish there were better code style docs.  I'll try to start using the __utils__ dunder in my own code.
20:36 vtolstov whytewolf: maybe i'm missing something, but how can i specify a ref/branch/tag ?
20:36 whytewolf vtolstov: https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#per-remote-configuration-parameters
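Per the gitfs docs whytewolf links, branches and tags map to fileserver environments, and per-remote parameters pin the details. A master-config sketch (the repo URL is hypothetical):

```yaml
# Master config sketch (repo URL hypothetical). Each branch/tag of the
# remote becomes a fileserver environment; per-remote parameters control
# which branch serves as 'base' and which subdirectory is exposed.
fileserver_backend:
  - gitfs

gitfs_remotes:
  - https://github.com/example/example-formula.git:
    - base: develop      # serve the 'develop' branch as the base env
    - root: salt         # only expose this subdirectory of the repo
```

This is roughly the librarian/berkshelf role vtolstov asks about: the master pulls formulas from remote git repos instead of vendoring them locally.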
20:36 coredumb whytewolf: guess I should read more internal code before making assumptions >_<
20:38 noraatepernos joined #salt
20:40 pualj joined #salt
20:41 nixjdm joined #salt
20:42 XenophonF well i'm going to try setting s3.keyid/s3.key to None in the master-specific pillar
20:42 Ryan_Lane XenophonF: we require it for the boto modules
20:43 Ryan_Lane elsewhere in salt we don't
20:43 Ryan_Lane the point of the utils dunder is for backwards compat
20:43 pipps joined #salt
20:43 Ryan_Lane I can take a util from master and drop it into an old version of salt, then drop in new boto modules
20:43 XenophonF and if that doesn't get the salt master's minion to use the instance profile, i'll add a conditional to the default pillar that removes those keys just on the master
20:43 Ryan_Lane if the modules directly import salt.utils.blah, then it's not backportable
20:43 XenophonF that's pretty cool, actually
20:44 Ryan_Lane XenophonF: what are you trying to get working, re the s3 stuff?
20:44 Ryan_Lane auth?
20:45 vtolstov thanks for the answers, i'm off to the docs =)
20:45 XenophonF yeah i have an IAM account with read-only access to S3 that I use on all my minions for s3fs
20:45 pipps joined #salt
20:45 XenophonF but on the master, I want it to use the instance profile instead
20:46 XenophonF so i have a default SLS and a master-specific SLS, both in Pillar, that set this up and then override it on the master
20:47 XenophonF i originally had s3.keyid and s3.key set to "use-instance-role-credentials"
20:47 XenophonF but for some reason that isn't working
20:47 XenophonF anyway, it isn't a salt problem but a configuration problem
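XenophonF's setup can be sketched as two config fragments (key values hypothetical). The fallback he wants is documented behavior for Salt's s3 utils: when no keyid/key is configured, credentials are pulled from the EC2 instance metadata service, i.e. the instance role:

```yaml
# Ordinary minions: read-only IAM user credentials (values hypothetical).
s3.keyid: AKIAEXAMPLEKEYID
s3.key: examplesecretkey

# On the salt master's own box: omit both keys entirely. Salt's s3 code
# then falls back to IAM instance-role credentials from instance metadata,
# rather than needing a sentinel value like "use-instance-role-credentials".
```

Omitting the keys, rather than setting them to a placeholder string, is likely why his original attempt did not work.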
20:48 XenophonF I'll probably file a PR against salt-formula later to have its various templates ignore settings that are set to none, so that i have a cleaner way to remove configs from the defaults
20:49 XenophonF Ryan_Lane: you wouldn't happen to have experience troubleshooting SaltReqTimeoutError exceptions, would you?
20:52 Ryan_Lane ah. gotcha
20:53 Ryan_Lane XenophonF: I use masterless :)
20:57 Ni3mm4nd joined #salt
20:59 Ni3mm4nd joined #salt
21:00 pipps joined #salt
21:03 wendall911 joined #salt
21:14 XenophonF oh right
21:14 pipps joined #salt
21:16 KingOfFools Why is there such a difference in async mode between the console commands and the API? I can run both salt and salt-run with --async, but in the API RunnerClient.cmd_async requires external_auth while LocalClient.cmd_async does not
21:23 XenophonF I'll try to find some time tomorrow to submit a PR to salt-formula.
21:23 XenophonF I want to be able to unset config items and have them disappear from the configs
21:24 XenophonF without having to resort to adding conditionals in SLS files unrelated to a given service or host
21:32 scottk_ joined #salt
21:33 scottk_ What is the best way to change the ip address of a machine? it's complicated, but we are migrating VM's from one host to another, and they need to have a new ip address.
21:38 MajObviousman scottk_: honestly? Change the config file and reboot
21:38 MajObviousman well, easiest
21:38 MajObviousman no handling of salt-minion before and after, no need to schedule a cron or systemd-run
21:38 MTecknology ya.. salt's portion would be to update the config and not reboot. The rest, I wouldn't automate.
21:40 scottk_ yeah...I just don't want to have to log into each one and do that.
21:40 pipps joined #salt
21:41 nixjdm joined #salt
21:43 coredumb then do it from salt
21:43 coredumb :D
21:43 coredumb cmd.run will do just fine
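Besides cmd.run, Salt ships a network.managed state (salt.states.network) that can write the interface config for scottk_'s case, leaving only the restart/reboot to schedule. An illustrative state with made-up addresses:

```yaml
# ip_change.sls sketch (addresses hypothetical). network.managed writes
# the OS interface configuration; applying it does not by itself bounce
# the interface on all platforms, so pair it with a scheduled restart.
eth0:
  network.managed:
    - enabled: True
    - type: eth
    - proto: static
    - ipaddr: 192.0.2.10
    - netmask: 255.255.255.0
    - gateway: 192.0.2.1
```

This keeps the change declarative and repeatable across the migrated VMs instead of hand-editing each config over cmd.run.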
21:46 pipps joined #salt
21:47 leonkatz joined #salt
21:50 scottk_ thanks.
21:54 cgiroua joined #salt
22:10 pipps joined #salt
22:16 g4rlic joined #salt
22:17 g4rlic left #salt
22:23 LeProvokateur joined #salt
22:27 Mokilok joined #salt
22:38 Gabe_ joined #salt
22:40 nixjdm joined #salt
22:45 MTecknology coredumb: except it wouldn't be just cmd.run to the minion. You'd have to direct the hosts to move the machine and if you're not already set up to do that, it's not a small task.
22:47 basepi stevednd: sorry for late reply. I left salt beginning of last year. I'm working at Adobe now, where we use salt and I built hubblestack more or less on top of salt.
22:48 pipps joined #salt
22:49 coredumb MTecknology: sure, but I believe it was only needed to change the IP and that the VM migration was outside of salt's scope
22:50 MTecknology coredumb: so then you agreed with me? :)
22:51 MTecknology basepi: and now you work on flash?!
22:52 coredumb MTecknology: guess I am :)
22:58 basepi MTecknology: *shudders* hell no
22:58 basepi I'm in their Utah office, which is all marketing analytics, primarily from their acquisition of Omniture.
22:59 MTecknology flash is at least officially deprecated :D
23:00 MTecknology :D !!! :D      https://blogs.adobe.com/conversations/2017/07/adobe-flash-update.html      :D !!! :D
23:00 Ni3mm4nd joined #salt
23:08 basepi yup yup! we are as excited as anyone!
23:18 leonkatz joined #salt
23:28 leonkatz joined #salt
