
IRC log for #salt, 2017-07-24


All times shown according to UTC.

Time Nick Message
00:04 Psy0rz joined #salt
00:05 blu_ joined #salt
00:05 blu__ joined #salt
00:21 mosen joined #salt
00:37 thinkt4nk joined #salt
00:38 onlyanegg joined #salt
00:56 mavhq joined #salt
00:57 Vaelatern joined #salt
01:16 tellendil joined #salt
01:18 tellendil joined #salt
01:29 tellendil joined #salt
01:31 tellendil joined #salt
01:37 Ni3mm4nd joined #salt
01:42 tellendil joined #salt
01:51 tellendil joined #salt
01:52 ilbot3 joined #salt
01:52 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.6, 2017.7.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers <+> The call for speakers for SaltConf17 is now open: http://tinyurl.com/SaltConf17
01:56 mavhq joined #salt
02:02 XenophonF brianthelion: I think you want the highstate outputter, salt/output/highstate.py, for state data.
02:04 brianthelion yeah, i think you're right. the real question is where to insert the call to the formatter. i assume that's the last thing that happens before the output is dumped to the terminal
02:06 XenophonF what specifically do you think is broken?
02:06 XenophonF a state function should return a specific error result
02:06 brianthelion I filed a bug report with some details https://github.com/saltstack/salt/issues/42493
02:07 Vaelatern joined #salt
02:07 brianthelion basically, the docker module treats docker images like minions and runs state in them via remote execution of salt-call
02:08 brianthelion or "remote" if you prefer
02:09 XenophonF If I were you, I'd add a lot more detail to the bug report.
02:09 brianthelion if that process blows up in any way you end up with a giant blob of unformatted salt output in your "comment" line
02:09 XenophonF There's no example of the hard-to-read output, for example.
02:09 zerocoolback joined #salt
02:10 XenophonF That's a pretty neat state module!  I might have a use for it.
02:12 brianthelion yeah, you're right about the details. i will do it at the office tomorrow
02:13 brianthelion it's definitely a cool module. i added pillar support in my local fork, which is handy
02:13 brianthelion it's kinda crippled without it
02:37 brianthelion XenophonF: I bet this is the problem https://docs.saltstack.com/en/latest/ref/modules/#outputter-configuration. They updated the docker namespace recently and I bet this got missed
02:37 brianthelion gonna dig in the source code and see what's up
02:46 brianthelion or not.... very few actual instances of "__outputter__" appearing in the source. is it not idiomatic?
02:47 JPT_ joined #salt
02:48 indigoblu joined #salt
02:52 jeddi joined #salt
02:56 edrocks joined #salt
03:06 tkojames joined #salt
03:08 donmichelangelo joined #salt
03:27 vishvendra joined #salt
03:31 XenophonF joined #salt
03:34 Ni3mm4nd joined #salt
03:46 pmcg joined #salt
04:20 nielsk joined #salt
04:22 armyriad joined #salt
04:26 Ni3mm4nd joined #salt
04:26 mavhq joined #salt
04:33 Ni3mm4nd joined #salt
04:41 evle joined #salt
04:55 golodhrim|work joined #salt
04:56 svij3 joined #salt
04:58 edrocks joined #salt
05:20 vishvendra1 joined #salt
05:26 beardedeagle joined #salt
05:35 onlyanegg joined #salt
05:36 sh123124213 joined #salt
05:50 jrklein joined #salt
05:58 mavhq joined #salt
05:58 do3meli joined #salt
05:58 do3meli left #salt
06:02 ivanjaros joined #salt
06:04 pualj joined #salt
06:06 colttt joined #salt
06:07 tobstone joined #salt
06:14 mbuf joined #salt
06:25 ninjada joined #salt
06:29 Tgrv joined #salt
06:32 aldevar joined #salt
06:37 apofis joined #salt
06:43 sh123124213 joined #salt
06:53 onlyanegg joined #salt
06:58 sh123124213 joined #salt
07:00 edrocks joined #salt
07:05 Ricardo1000 joined #salt
07:08 pbandark joined #salt
07:11 cyteen joined #salt
07:17 Hybrid joined #salt
07:20 Guest58742 joined #salt
07:25 Guest58742 left #salt
07:27 jhauser joined #salt
07:30 ninjada_ joined #salt
07:31 daemonkeeper joined #salt
07:35 daks joined #salt
07:43 jeddi joined #salt
07:50 onmeac joined #salt
07:50 mikecmpbll joined #salt
07:51 samodid joined #salt
07:54 cablekev1n joined #salt
07:58 _KaszpiR_ joined #salt
07:58 DoomPatrol sigh, why did i refactor a broken users ssh state
07:58 DoomPatrol it's now more broken
07:58 DoomPatrol for w/e reason "%h" isn't expanding & it doesn't like when key files have multiple keys
07:59 k_sze[work] joined #salt
08:10 rpb joined #salt
08:11 _KaszpiR_ joined #salt
08:13 bdrung_work joined #salt
08:24 Tucky joined #salt
08:25 jalaziz joined #salt
08:28 Mattch joined #salt
08:37 Naresh joined #salt
08:39 rascal999 joined #salt
08:54 Ricardo1000 joined #salt
09:10 joe_n joined #salt
09:16 ivanjaros joined #salt
09:17 darioleidi joined #salt
09:21 mikecmpbll joined #salt
09:25 capn-morgan joined #salt
09:25 high_fiver joined #salt
09:25 ronnix joined #salt
09:27 jalaziz joined #salt
09:28 ronnix joined #salt
09:30 gmoro_ joined #salt
09:32 jalaziz_ joined #salt
09:32 zulutango joined #salt
09:39 noraatepernos joined #salt
09:46 zl70 joined #salt
09:47 cro joined #salt
10:01 edrocks joined #salt
10:04 ahrs joined #salt
10:04 cable joined #salt
10:06 cablekev1n joined #salt
10:13 cablekev1n Is there a way to pin the repository that salt-cloud uses to install a minion?
10:16 kedare joined #salt
10:17 zl70 left #salt
10:35 cro joined #salt
10:37 netcho joined #salt
10:37 netcho hi all
10:38 pbandark1 joined #salt
10:44 netcho after upgrading to 2017.7. my boto states stopped working
10:45 smartalek joined #salt
10:46 netcho Comment: State 'boto_vpc.internet_gateway_present' was not found in SLS 'aws_infra.vpc'
10:46 netcho Reason: 'boto_vpc' __virtual__ returned False: The following libraries are required to run the boto_vpc state module: boto >= 2.8.0 and boto3 >= 1.2.6.
10:46 netcho https://hastebin.com/ofecicaxay.sql
10:46 netcho and i have boto installed
11:08 inad922 joined #salt
11:08 vishvendra joined #salt
11:08 cro joined #salt
11:09 haam3r_ netcho: you have both boto and boto3 with the required versions available to salt?
11:12 jalaziz joined #salt
11:17 kbaikov_alt joined #salt
11:17 kbaikov_alt left #salt
11:18 kbaikov_alt joined #salt
11:19 kbaikov_alt left #salt
11:31 coredumb joined #salt
11:35 GMAzrael joined #salt
11:36 GMAzrael joined #salt
11:38 Reverend doesn't look like it! :)
11:40 joe_n joined #salt
11:47 cablekev1n I'm sorry for bashing, but did anyone see my question? :)
11:56 kyle joined #salt
11:59 JPT joined #salt
12:00 haam3r_ cablekev1n: salt-cloud uses a bootstrap script to install the cloud host...you have to modify that to provide a specific repo
12:02 haam3r_ cablekev1n: The script should be in '/etc/salt/cloud.deploy.d' on the master
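A minimal sketch of a cloud profile pointing at a custom deploy script, per haam3r_'s suggestion (the profile, provider, image and script names below are placeholders, not taken from this log):

    my-profile:
      provider: my-ec2-provider
      image: ami-xxxxxxxx
      size: t2.micro
      script: my-bootstrap     # a copy of bootstrap-salt.sh, edited to pin the repo,
                               # dropped into /etc/salt/cloud.deploy.d/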
12:04 edrocks joined #salt
12:05 netcho haam3r_:  yes, boto and boto3 are there
12:07 netcho it worked on 2016
12:07 netcho same state, same minion
12:07 cablekev1n I'll check it out haam3r_ thnx
12:09 ecdhe joined #salt
12:14 snc joined #salt
12:14 ccha about onchanges, is the list an 'or' or an 'and'?
12:16 cablekev1n There is no cloud.deploy.d
12:18 cablekev1n I've only got this: https://pastebin.com/ZitdheVF
12:20 ecdhe joined #salt
12:26 zerocoolback joined #salt
12:31 sh123124_ joined #salt
12:35 _JZ_ joined #salt
12:40 thinkt4nk joined #salt
12:42 ProT-0-TypE joined #salt
12:53 cablekev1n Any clue for me haam3r_ ? :)
13:00 XenophonF cablekev1n: you have to create that directory
13:00 XenophonF and then put your script in it
13:00 bridger joined #salt
13:01 bridger left #salt
13:01 XenophonF for example - https://github.com/irtnog/salt-states/tree/development/salt/files/cloud.deploy.d/irtnog
13:04 XenophonF I'm not sure I ever got those to work.  ;)
13:20 cgiroua joined #salt
13:21 racooper joined #salt
13:25 dev_tea joined #salt
13:25 brianthelion joined #salt
13:28 mavhq joined #salt
13:32 Inveracity joined #salt
13:33 apofis joined #salt
13:38 cro joined #salt
13:42 openfly mmm
13:43 darioleidi joined #salt
13:45 Slimmons joined #salt
13:46 Slimmons Is there a .deb package for salt-master?  I'm about to have to install salt-master in a location that won't ever have the internet.  All I've been able to find is this, and the .deb looks like a 2015 version of salt-master (haven't confirmed that).  https://pkgs.org/download/salt-master
13:50 dnull You can d/l the package with apt-get and all the dependencies for offline install if I remember correctly
13:50 dnull 1 sec while I verify
13:53 dnull https://wiki.debian.org/AptMedium
13:53 dnull Looks like you can
13:55 Dimo joined #salt
14:02 cablekev1n Ah nice, thanks XenophonF
14:02 cablekev1n I'll check
14:03 numkem joined #salt
14:03 Slimmons Thanks dnull.
14:03 dnull Slimmons: np
14:05 lorengordon joined #salt
14:05 ronnix joined #salt
14:09 ahrs joined #salt
14:13 lompik joined #salt
14:23 cro joined #salt
14:25 sarcasticadmin joined #salt
14:26 perfectsine joined #salt
14:32 cro joined #salt
14:33 Cottser joined #salt
14:47 bwgartner joined #salt
14:48 bryang joined #salt
15:06 amiskell joined #salt
15:23 JPT So ... i may have some issues with salt on freebsd regarding the version of urllib3: https://paste.debian.net/977940/
15:23 onlyanegg joined #salt
15:23 JPT Right now i am trying to figure out how to solve this.
15:24 astronouth7303 JPT: installed salt with python or the system package manager or pip?
15:24 viq JPT: and pkg search urllib ?
15:25 viq astronouth7303: last line suggests system packages
15:26 astronouth7303 my guess is that FreeBSD's system urllib3 is now 1.22, which setuptools is rejecting
15:26 viq $ pkg search urllib
15:26 viq py27-urllib3-1.21.1            HTTP library with thread-safe connection pooling, file post, and more
15:26 viq JPT: works on mine
15:27 viq ah, though let me see after the pkg upgrade
15:28 JPT :)
15:28 JPT I installed it through the freebsd pkg repos
15:28 JPT At first i thought the saltstack repo was not matching the freebsd 11 repo, so i removed the package and the repo and installed the salt minion from the freebsd repo. Nothing changed.
15:29 viq JPT: yeah, after upgrading urllib I see the same
15:29 JPT I like to run pkg update + pkg upgrade on a regular basis, so this issue basically appeared over night
15:29 viq aye, like they do
15:30 JPT So ... do you have suggestions on how to solve this? :)
15:30 astronouth7303 i think you have to file an issue on the github project, so they update setup.py with the new version of urllib3
15:30 viq JPT: bug report time!
15:30 viq Or actually two - one for saltstack, and another for freebsd, I guess
15:31 JPT Let's hear them :)
15:31 JPT Since i "only" have 10 machines to take care of, it should be fairly easy to get them a custom solution
15:32 viq JPT: as in, you should report two issues - one with saltstack reporting it doesn't work with urllib3 1.22, and another for freebsd so they pull in the change once salt has it.
15:32 JPT Okay, sounds good.
15:33 Tucky joined #salt
15:35 cgiroua joined #salt
15:35 kerrick joined #salt
15:38 astronouth7303 especially since we should have a 2017.7.1 release pretty soon here
15:38 JPT Oh, that's good to hear. There is also an issue with git.latest that got already fixed on master.
15:42 astronouth7303 master or 2017.7? salt uses version branches, so stuff on master won't appear until 2017.11
15:42 JPT :|
15:42 JPT I'll need to check that
15:42 astronouth7303 https://docs.saltstack.com/en/latest/topics/development/contributing.html#which-salt-branch
15:45 JPT Okay, the issue is done. If you notice anything that i should add/change, please tell. https://github.com/saltstack/salt/issues/42511
15:45 JPT I feel like there's not that much to say about this one, so it feels a bit too short.
15:46 astronouth7303 nope, that's about right
15:46 JPT :)
15:47 astronouth7303 urllib3 released a new version on friday, saltstack needs to test it and update their requirements
15:48 JPT Okay, so about the other issue i mentioned: I assume that an issue will be fixed in 2017.7.1 if it has the label "2017.7.1" on it?
15:49 JPT In that case, the git thing will be in it for us soon. :)
15:49 JPT (This one: https://github.com/saltstack/salt/issues/42381 )
15:49 cgiroua joined #salt
15:53 astronouth7303 JPT: looks like, https://github.com/saltstack/salt/pull/42471 is the actual commit
15:54 JPT Yeah, it's just about two lines that were forgotten on the way.
15:55 JPT I fixed that one manually on one affected system that relies on it. So when the update comes, all the other machines are fine, too.
15:55 astronouth7303 whytewolf was right. More RCs were needed.
15:55 JPT It's pretty hard to get a good test coverage on all of these features
15:56 astronouth7303 yes.... but some of the issues i've found are really basic stuff
15:56 astronouth7303 like, calling a state that uses pillars from orchestrate ( https://github.com/saltstack/salt/issues/42403 )
15:57 astronouth7303 (seriously? nobody tried that?)
15:57 woodtablet joined #salt
15:58 JPT state.orchestrate ... i would need to read the docs about that first.
15:58 JPT Our setup involves only about ~70 machines, most of them get some basic states + pillar data. There are one or two custom grains and we use salt-cloud to spawn new machines that then get some basic stuff applied through reactor.
15:58 JPT That's basically it for now.
15:59 eprice joined #salt
15:59 JPT Okay, so state.orchestrate basically takes care that states are applied in a certain order?
16:00 astronouth7303 i think of it as the scripting language of salt
16:00 aldevar left #salt
16:00 astronouth7303 i use it as the entry point for continuous deployment
16:00 JPT Could get interesting in the future. For now i'm happy even when i have to run a highstate 2-3 times in order to get everything done.
16:01 astronouth7303 so my CI/CD systems calls into salt orchestrate, and it handles the details of what needs to be updated and which minions need to be poked
16:02 JPT I guess that can reduce load if a lot of machines need to be taken care of but only a small number needs some poking
16:03 fatal_exception joined #salt
16:05 raspado joined #salt
16:08 astronouth7303 it just seems like the smart thing to do? Dunno.
16:08 noobiedubie joined #salt
16:09 noobiedubie hi all
16:09 JPT hey
16:12 pbandark1 joined #salt
16:15 fritz09 joined #salt
16:18 bryang joined #salt
16:18 kerrick joined #salt
16:19 ivanjaros joined #salt
16:20 viq yeah, orchestration sounds sexy, but I didn't get a chance to play with it yet
16:20 svij3 joined #salt
16:20 viq astronouth7303: what do you use for CI/CD?
16:21 astronouth7303 gitlab
16:21 viq cool, I'm starting to play with that
16:22 astronouth7303 (i'm running gitlab's CI through a shell runner, because $dayjob hates containers, but that means I can just bounce from the CI to salt through `sudo` and a special `salt-orchestrate` script.)
16:22 astronouth7303 most people will need to use salt-api to accomplish the same thing.
16:22 viq btw, a question from a friend trying to use gitlab-ci - do you know how to make gitlab deploy something to *all* nodes in an env?
16:23 astronouth7303 i'm using salt, so i pass the environment name to the orchestrate script and it handles it
16:23 viq as in can you make gitlab-ci runners do it, without using something external?
16:24 * viq nods
16:24 astronouth7303 i'm keeping the commit hash in a pillar, so the orchestrate script just updates the pillar (using the pillar_roots wheel), tells the master to refresh pillars, and then highstates the minions of that environment
16:24 astronouth7303 (the environment and role are both grains)
16:25 astronouth7303 you could do all that with `salt` calls from the CI script, but I like minimizing what the CI script has to know, since it's salt's job to manage all these details
16:26 jimklo joined #salt
16:27 astronouth7303 so to make it multi-instance, make sure you're matching not on a specific minion but on an id glob or grain or whatever
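A minimal orchestration sketch of the flow astronouth7303 describes (the grain name, state IDs and pillar key are illustrative assumptions, not his actual script):

    # /srv/salt/orch/deploy.sls
    {% set env_name = pillar.get('environment', 'staging') %}

    refresh_pillar_data:
      salt.function:
        - name: saltutil.refresh_pillar
        - tgt: 'environment:{{ env_name }}'
        - tgt_type: grain

    highstate_env:
      salt.state:
        - tgt: 'environment:{{ env_name }}'
        - tgt_type: grain
        - highstate: True
        - require:
          - salt: refresh_pillar_data

The CI job would then call something like `salt-run state.orchestrate orch.deploy pillar='{"environment": "prod"}'`, either directly or through salt-api.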
16:32 doradus joined #salt
16:34 hatifnatt joined #salt
16:40 perfectsine joined #salt
16:41 hatifnatt Hello. If I include one state (let's call it B) that has a jinja variable into another (let's call it A), will that variable from B be available in state A?
16:42 bwgartner joined #salt
16:44 bgartner joined #salt
16:45 bgartner left #salt
16:45 swills joined #salt
16:45 swills joined #salt
16:46 tapoxi joined #salt
16:47 Guest73 joined #salt
16:48 meca Hey I'm working on https://github.com/saltstack/salt/issues/36466 and I have a question about the code. Where/Can I do that?
16:49 jimklo_ joined #salt
16:49 viq meca: don't ask to ask, just ask ;)
16:52 meca gtmanfred: Maybe can help me out here. If I add the `__context__` from the `_apply` method to return when called by `self.functions[func](*args, **kwargs)` in utils/schedule.py, I can add the return code to the scheduler `ret`.
16:53 meca I don't get how `__context__` is supposed to be passed here in the first place.
16:55 armyriad joined #salt
16:59 dev_tea joined #salt
17:00 drawsmcgraw joined #salt
17:04 vishvendra joined #salt
17:05 jimklo joined #salt
17:13 sarcasticadmin joined #salt
17:18 ChubYann joined #salt
17:25 samodid joined #salt
17:27 XenophonF anyone else using Salt over high latency/low bandwidth network links?
17:27 XenophonF i have a master co-located in eu-west-1 with minions in data centers in west and east Africa
17:27 XenophonF and i'm seeing SaltReqTimeout errors as well as minions failing to return with not connected errors
17:28 XenophonF maybe I should deploy syndics in country?
17:28 fatal_exception joined #salt
17:28 XenophonF or standalone masters
17:30 XenophonF it might also be our threat protection stuff
17:30 XenophonF we're running Meraki MX routers in country, and we've seen the security filtering stuff interfere with other things like Windows Update
17:30 astronouth7303 syndics would probably be a good idea. I think i've heard of people using salt to manage geographically remote nodes (eg, by cell link)
17:33 xsaltd joined #salt
17:35 twooster XenophonF: I've seen the same thing on our remote nodes
17:35 twooster It may be security filtering, but we have flappy nodes pretty constantly. I think cell links maybe don't behave so nice with long-lived low-bandwidth connections.
17:36 gtmanfred meca: __context__ is added by the loader, anything loaded by the loader should have a __context__
17:36 XenophonF These connections are all terrestrial fiber (with a smattering of long-haul microwave backup links).
17:36 twooster Have you tried setting up the ping_interval?
17:36 XenophonF We have a data center in Chennai (south India), which has 40 Mbps to a national backbone, and it's exhibiting the same problems
17:37 XenophonF about 300 ms rtt between there and Dublin
17:37 XenophonF no I haven't but that's a good idea
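The option twooster mentions lives in the minion config; a minimal sketch (only ping_interval was suggested above, the TCP keepalive settings and the values shown are illustrative assumptions):

    # /etc/salt/minion
    ping_interval: 2           # minutes between minion -> master keepalive pings (0 = disabled)
    tcp_keepalive: True
    tcp_keepalive_idle: 60     # seconds of idle time before OS-level keepalives start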
17:38 fatal_exception joined #salt
17:38 twooster Hmm. That does sound like something else, but I haven't debugged this enough, just seen the problem.
17:40 astronouth7303 i think that if a link goes down, the minion will recover, but it'll miss any commands issued to it in the mean time, right?
17:40 viq correct
17:41 viq (hopefully recover)
17:41 astronouth7303 recover as in will reconnect to the master when the link comes back up
17:42 meca gtmanfred: the __context__ exists but is empty at the end of the scheduler's `handle_func`. But when I display the `__context__` during the highstate execution it is being filled correctly. It's just lost when going between the two files. Maybe a dummy PR could illustrate what I mean better?
17:43 censorshipwreck joined #salt
17:43 astronouth7303 but ZeroMQ is pub/sub where all the filtering/selection is done on the minion. ie, the master doesn't queue missed commands when minions are offline
17:46 twooster I've definitely seen, even on fairly local (local office to in-region AWS) connections, minions go offline rather rapidly. My only thought was aggressive connection termination by a router.
17:47 lorengordon joined #salt
17:47 XenophonF thanks for the hint twooster
17:47 XenophonF i'm putting that change request in now
17:48 Guest73 joined #salt
17:48 XenophonF i think long term i'm going to deploy syndics
17:49 twooster Cool. Also, if you take a look around, ZMQ has a history of spottiness with unreliable connections. Heartbeating seems to be the usual fix. Good luck :)
17:50 astronouth7303 sooo... my workstation has a status bit that reports the results of `manage.status`. I think I accidentally made my laptop a critical part of keeping salt going in light of spotty links (which i mostly have in theory)
17:52 meca gtmanfred: This is what I mean: https://github.com/mecavity/salt/commit/9a6d996cebc30b5a75841a55528949650a9b6842
17:55 meca https://github.com/mecavity/salt/commit/8d229d0b8734e97edcab62484964a74a6bae28fe *
18:02 xer0x joined #salt
18:02 gtmanfred hrm, i do not know.  I would recommend sending an email to the salt-users mailing list
18:02 hatifnatt I'm still not sure about jinja variables and import. To get a Jinja variable from one state into another, is a state include enough or do I need a Jinja import?
18:04 astronouth7303 you would need to do jinja-level things
18:04 astronouth7303 jinja has no concept of states, and states don't have any clue about jinja
18:05 druonysus joined #salt
18:05 hatifnatt astronouth7303: so it's like state include "renders" the file first and then includes the result?
18:06 xer0x Salt n00b question: can I run something like `salt '*' ssh.set_auth_key root file://id_new_key.pub` instead of salt://key.pub ?  I guess I'm asking if there is a way to grab an external file.
18:06 astronouth7303 hatifnatt: i'm not following the question?
18:07 iggy you shouldn't be trying to get jinja variables from files with state declarations in them
18:07 iggy you need to put those in a file with no states, and then import from the other file
18:11 hatifnatt astronouth7303: I mean state include doesn't 'source' the file directly, it renders it first and then includes the result.
18:12 astronouth7303 it's more like a dependency? but i guess you could say that
18:12 astronouth7303 when you have an `include:` declaration in your sls state file, the state engine will also apply all the state in the referenced file
18:13 ddoback joined #salt
18:14 gtmanfred xer0x: you should just be able to do /id_new_key.pub
18:14 gtmanfred and it will pull it from the minions filesystem
18:14 hatifnatt Yes I know. States from included file will be available, but looks like jinja variables will not.
18:15 astronouth7303 oh yeah, no. Jinja is basically an entirely different layer
18:17 gtmanfred what include does is just render the other state file, and basically drop the entire rendered dictionary into the current state file, nothing else, no yaml anchors and no jinja
18:18 dev_tea joined #salt
18:18 ddoback cmd.run stateful return example: echo "changed=yes comment='something has changed' whatever=123" - can the "whatever" be accessed from another state?
18:19 gtmanfred no
18:19 hatifnatt gtmanfred: Thanks, 'render' is key word, I think.
18:20 gtmanfred ddoback: there has been talk about adding something called slots that would allow it, but it doesn't exist yet
18:20 gtmanfred you would need to do your cmd.run in jinja, and then access the output from the module output
18:20 gtmanfred if it needs to be run after some state, then you would need to use the orchestrate runner to do all the setup and then run a second state that would do the cmd.run in the jinja
18:21 ddoback ah... Thanks gtmanfred
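A minimal sketch of the Jinja-level workaround gtmanfred describes (the command, state ID and file path are placeholders):

    {% set whatever = salt['cmd.run']('/usr/local/bin/compute-whatever') %}

    record_whatever:
      file.managed:
        - name: /etc/myapp/whatever.conf
        - contents: "whatever={{ whatever }}"

Because the Jinja runs at render time, the command executes before any states are applied; hence the suggestion to split the work across an orchestrate run when ordering matters.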
18:21 lorengordon joined #salt
18:21 lompik joined #salt
18:24 hatifnatt astronouth7303: Thanks. I tested Jinja import - works like expected.
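A minimal sketch of the pattern iggy and astronouth7303 outline - variables kept in a state-free Jinja file and imported where needed (file and variable names are made up for illustration):

    # settings.jinja - variables only, no state declarations
    {% set app_user = 'myapp' %}
    {% set app_home = '/opt/myapp' %}

    # app/init.sls
    {% from 'settings.jinja' import app_user, app_home %}

    app_home_dir:
      file.directory:
        - name: {{ app_home }}
        - user: {{ app_user }}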
18:27 doradus joined #salt
18:27 xer0x gtmanfred: thanks, I'll try that
18:27 noraatepernos joined #salt
18:28 dev_tea joined #salt
18:31 gtmanfred np
18:31 doradus joined #salt
18:32 doradus joined #salt
18:35 doradus Hey guys, the "install_salt.sh" script (version 2017.05.24) installs Salt version 2016.3.4-84 on opensuse. running a few states throws some puzzling warnings, e.g. "'force' is an invalid keyword argument for 'archive.extracted' ..... your approach will work until Salt Carbon is out" ...   are there known issues with saltstack and opensuse?
18:36 jkjk joined #salt
18:45 watersoul joined #salt
18:45 cgiroua joined #salt
18:49 doradus okay, I removed 'force' (even though it's documented in the API) and added if_missing after seeing the notice "If if_missing is not defined, this state will check for name instead. If name exists, it will assume the archive was previously extracted successfully and will not extract it again."  which explains why my archive was not extracting.
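For reference, a minimal sketch of the state doradus ends up with (the paths and source are placeholders; this mirrors the 2016.3 behaviour quoted in the notice above):

    extract_myapp:
      archive.extracted:
        - name: /opt/myapp
        - source: salt://files/myapp.tar.gz
        - if_missing: /opt/myapp/bin/myapp   # only extract when this path does not yet exist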
18:52 colabeer joined #salt
19:00 theorem joined #salt
19:00 perfectsine joined #salt
19:01 Ni3mm4nd joined #salt
19:02 impi joined #salt
19:05 beardedeagle joined #salt
19:17 dev_tea joined #salt
19:17 Guest73 joined #salt
19:18 apofis joined #salt
19:19 dev_tea joined #salt
19:20 sjorge joined #salt
19:26 ricardo122 joined #salt
19:27 ricardo1232 joined #salt
19:31 thinkt4nk joined #salt
19:35 ProT-0-TypE joined #salt
19:40 dev_tea joined #salt
19:41 oida_ joined #salt
19:41 Hybrid joined #salt
19:45 dev_tea joined #salt
19:49 dev_tea joined #salt
19:50 theorem joined #salt
19:51 thinkt4nk joined #salt
19:51 theorem this may be a silly question --- what is the right way to ensure that an endpoint meets specific criteria ?  In puppet this is a "state" and it's enforced by the agent.
19:52 theorem does salt do something similar ?  or do I need to keep pushing a particular state to endpoints for that behavior to work ?
19:52 lordcirth_work theorem, why do you need it?  Do you expect your servers to randomly change?
19:53 lordcirth_work But yes, you can just regularly push state.apply to minions if you want.
19:53 jimklo joined #salt
19:53 astronouth7303 a large chunk of salt usage is based on states?
19:53 dev_tea joined #salt
19:53 theorem lordcirth_work: no, but it seems we are having trouble standing up new endpoints -- they are missing a bunch of dependencies.
19:53 bowhunter joined #salt
19:53 dev_tea joined #salt
19:53 astronouth7303 under salt, you would add those dependencies to your state
19:53 lordcirth_work ^
19:54 lordcirth_work If your state needs to do other things first, then do them
19:54 theorem right, so it's just a matter of missing the dependencies to the state.
19:54 lordcirth_work If they are common deps, you might want to make a separate state and include them.
19:55 theorem lordcirth_work: it's things like "missing a config file" ,  "missing a package"
19:56 noobiedubie joined #salt
19:56 theorem if the config file is not part of a package, is there a standard way to check?  I feel like "find if this file exists" seems like a kludge.
19:56 aneeshusa joined #salt
19:56 dev_tea joined #salt
19:57 lordcirth_work theorem, a standard way to check if a file exists, or if it's in a package?
19:58 theorem lordcirth_work: package puts down a default, we apply a new config with a bunch of settings and keys and kick over the process to use the new config.
19:58 lordcirth_work theorem, right, that's pretty standard, so what's the problem you're having with that?
19:58 dev_tea joined #salt
19:59 whytewolf theorem: are you doing all of that with cmd.run instead of say file.managed pkg.installed and the like?
19:59 theorem I just wondered if there's a better way to check if the file exists -- or if there's a better logical way?
19:59 theorem whytewolf: yes, cmd.run
19:59 whytewolf why do you care if the file exists?
19:59 lordcirth_work theorem, there is a state file.exists, but I suspect you don't need it
19:59 lorengordon joined #salt
19:59 lordcirth_work theorem, why are you using cmd.run to install packages?
19:59 whytewolf see. if you are putting the file there with cmd.run you kind of already went outside of salt.
20:00 theorem whytewolf: the new endpoint needs the config file to work properly -- this is almost a "monitoring" case, where we need to figure out which endpoints are missing the config and apply it.
20:01 lordcirth_work theorem, and how do you test if it's missing the config?
20:01 heaje joined #salt
20:01 lordcirth_work You can probably just use file.managed for all of this
20:01 theorem cmd.run  -- feels like a hack.
20:01 whytewolf cmd.run IS a hack
20:01 theorem got it ;-)
20:02 whytewolf cmd.run is a last resort for anything that doesn't have a module or the module doesn't do everything you need.
20:03 theorem ok
20:03 whytewolf are you using salt states or are you just running this through your own bash scripts?
20:03 theorem that one I need to check on.
20:03 theorem I didn't write this ..
20:04 whytewolf how do you execute what you currently have?
20:05 lordcirth_work I have 3088 lines of states, and 6 match 'cmd.run'.  And I'm not very happy about some of them.
20:05 theorem ok, checking into this a bit more ..
20:06 theorem looks like .... file.managed is in use most places ..
20:06 whytewolf okay, then you are using states.
20:07 whytewolf file.managed checks if the file exists; if it doesn't, it creates it, and if it does, it updates it to whatever it should be based on all the input data
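A minimal sketch of replacing the cmd.run steps with the states whytewolf and lordcirth_work are recommending (the package name, paths and source are placeholders):

    myapp_pkg:
      pkg.installed:
        - name: myapp

    /etc/myapp/myapp.conf:
      file.managed:
        - source: salt://myapp/files/myapp.conf
        - require:
          - pkg: myapp_pkg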
20:07 astronouth7303 theorem: the salt introductory tutorials are pretty good. It's how I got started ~1mo ago
20:08 whytewolf agreed. the tutorial is how i started ... 4 years ago it seems
20:15 theorem thanks !
20:16 theorem is there a good way to restart a systemd service ?
20:16 theorem we are using cmd.run for that in a bunch of places.
20:16 lordcirth_work theorem, you want to restart if another state changed something?
20:17 pcdummy joined #salt
20:17 pcdummy joined #salt
20:17 whytewolf if you are using the command line see if service.restart works. in a state make sure it is running with service.running. then have it watch something else.
20:17 lordcirth_work If you use service.running, you can say: - watch: file: apache_conf
20:17 lordcirth_work where apache_conf is the name of the file.managed state that controls the config file
20:17 theorem hmm
20:18 whytewolf here is the documentation about watch https://docs.saltstack.com/en/latest/ref/states/requisites.html#watch
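Putting lordcirth_work's example together, a minimal sketch of the watch pattern (apache is just the illustration used above; names are placeholders):

    apache_conf:
      file.managed:
        - name: /etc/apache2/apache2.conf
        - source: salt://apache/files/apache2.conf

    apache2:
      service.running:
        - enable: True
        - watch:
          - file: apache_conf

Any change reported by the apache_conf state triggers a restart of the service in the same run.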
20:18 theorem thanks
20:18 whytewolf this is kind of where salt has a decent set of options, and while it can get confusing, using them correctly can be really powerful.
20:23 lordcirth_work Once I'm finished developing a state, I destroy the container and re-create to make sure all the ordering is correct and it works in one run. Sometimes takes a couple of tries with big ones
20:27 theorem ok
20:32 theorem ok, here's a use case ..
20:33 theorem if I wanted to add a route only when a VPN connection gets established, and take the route down when the VPN is not established.  Is this something I should manage with Salt ?
20:33 theorem or systemd ?
20:35 jimklo joined #salt
20:37 astronouth7303 how many machines does the change need to happen on?
20:37 theorem it should happen when the VPN goes up or down ...
20:38 theorem so, the machine loses connectivity between the VPN up and down.
20:38 theorem this means the minion becomes disconnected.
20:38 brianthelion joined #salt
20:41 ntropy theorem: what vpn client are you using?  it doesn't allow you to manage routes?
20:42 whytewolf that is kind of what i was wondering. as that is typically a vpn function. it should be doing that cleanly
20:42 theorem ah, hmm .
20:42 theorem openvpn
20:45 ntropy openvpn client manages the routes pushed from the server, so you don't have to do it yourself
20:45 theorem yes
20:45 theorem I see that now
20:45 theorem looks like "route"
20:45 ntropy and if you want to add an arbitrary route, then i think it can trigger external scripts on connect
20:45 ntropy and likewise on disconnect
20:45 theorem right
20:49 jimklo joined #salt
20:58 ivanjaros joined #salt
21:03 onlyanegg joined #salt
21:03 seanz joined #salt
21:04 aneeshusa joined #salt
21:06 perfectsine joined #salt
21:07 censorshipwreck joined #salt
21:11 Guest73 joined #salt
21:18 cyborg-one joined #salt
21:21 Guest73 joined #salt
21:27 mavhq joined #salt
21:28 cro joined #salt
21:37 debian112 joined #salt
21:41 rpb joined #salt
21:46 _KaszpiR_ joined #salt
21:46 onlyanegg joined #salt
21:49 apofis joined #salt
21:58 coredumb joined #salt
21:59 Ni3mm4nd joined #salt
22:01 Ni3mm4nd joined #salt
22:03 Ni3mm4nd joined #salt
22:11 sjorge joined #salt
22:20 Guest73 joined #salt
22:28 Guest73 joined #salt
22:33 Guest73 joined #salt
22:37 hemebond joined #salt
22:38 ecdhe joined #salt
22:39 sarcasticadmin joined #salt
22:41 kerrick joined #salt
22:49 Hybrid joined #salt
22:54 coredumb joined #salt
23:03 Ni3mm4nd joined #salt
23:04 coredumb joined #salt
23:11 coredumb joined #salt
23:21 kerrick joined #salt
23:22 dxiri joined #salt
23:22 dxiri @gtmanfred I have a pull request for the VNC issue we were looking at :)
23:22 dxiri https://github.com/saltstack/salt/pull/42530
23:23 dxiri my first pull request ever
23:23 dxiri hopefully I didn't fuck it up
23:24 whytewolf shouldn't that be against one of the earlier dev trees, such as 2016.11 or 2017.7, instead of devel
23:24 dxiri not sure, I can change it if needed
23:25 whytewolf basically, if working on a bug you should always go against the earliest stable devel branch that has the bug.
23:25 whytewolf and the main devel branch is for "next major release"
23:26 whytewolf they can always backport it. as this comes up often
23:26 dxiri I remember I found that under 2016.11 so can I change it now?
23:27 * whytewolf shrugs. you could cancel the PR and make a new one.
23:27 om2 joined #salt
23:35 debian1121 joined #salt
23:37 kerrick joined #salt
23:40 mosen joined #salt
23:43 Hybrid joined #salt
23:47 kerrick joined #salt
23:52 ProT-0-TypE joined #salt
23:58 woodtablet left #salt
23:58 mavhq joined #salt
