IRC log for #salt, 2018-03-20


All times shown according to UTC.

Time Nick Message
00:02 WesleyTech joined #salt
00:12 KevinAn275773 joined #salt
01:20 exarkun joined #salt
01:28 zerocoolback joined #salt
01:47 shiranaihito joined #salt
01:52 onlyanegg joined #salt
01:57 [R]mu joined #salt
02:15 LeProvokateur joined #salt
02:25 swa_work joined #salt
02:28 relidy joined #salt
02:56 ilbot3 joined #salt
02:56 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.4 <+> RC for 2018.3.0 is out, please test it! <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
03:00 tiwula joined #salt
03:03 hasues joined #salt
03:28 zerocoolback joined #salt
04:06 hemebond left #salt
04:25 indistylo joined #salt
05:30 yuhl joined #salt
05:59 Hybrid joined #salt
05:59 aruns joined #salt
06:04 wongster80 joined #salt
06:17 Hybrid joined #salt
06:18 tyx joined #salt
06:31 Guest73 joined #salt
06:44 masber joined #salt
06:47 onslack <haam3r> ,0
06:47 masber joined #salt
07:00 zerocoolback joined #salt
07:24 colttt joined #salt
07:40 tyx joined #salt
07:43 Guest73 joined #salt
07:46 dograt joined #salt
07:49 om2 joined #salt
07:57 aldevar joined #salt
08:01 rgrundstrom joined #salt
08:09 Yamakaja joined #salt
08:09 CrummyGummy joined #salt
08:14 cewood joined #salt
08:17 Hybrid joined #salt
08:26 Tucky joined #salt
08:29 rawzone joined #salt
08:30 Ricardo1000 joined #salt
08:36 Pjusur joined #salt
08:58 mikecmpbll joined #salt
09:01 jrenner joined #salt
09:14 Hybrid joined #salt
09:14 baffle joined #salt
09:29 Naresh joined #salt
09:34 gmoro joined #salt
09:47 xorben1981 joined #salt
09:48 xorben1981 left #salt
09:48 nfahldieck joined #salt
09:49 nfahldieck Hi, how can I delete every empty line of a file with 'file.line'? I can't seem to get it working ...
09:50 stan joined #salt
09:53 Mattch joined #salt
09:57 pf_moore joined #salt
09:58 hemebond joined #salt
10:14 Guest73 joined #salt
10:18 babilen nfahldieck: Might just want to use file.replace
10:34 * rgrundstrom who wrote this... This is madness... Wait... It was me. Two weeks ago.
10:41 onslack <msmith> nfahldieck: perhaps use file.replace instead?
10:41 onslack <msmith> and then i read babilen's reply. i'm awake, honest
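(A file.replace state that strips blank lines might look like this -- a minimal, untested sketch with a hypothetical state ID and target path; file.replace applies patterns with re.MULTILINE by default, so '^' matches at each line start:)

    remove_blank_lines:
      file.replace:
        - name: /etc/example.conf      # hypothetical target file
        - pattern: '^[ \t]*\n'         # empty or whitespace-only lines
        - repl: ''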
10:50 foobardy joined #salt
11:01 * babilen coffees onslack
11:02 edrocks joined #salt
11:08 masber joined #salt
11:10 anthonyshaw joined #salt
11:12 masuberu joined #salt
11:23 evle joined #salt
11:33 Guest73 joined #salt
11:54 J0hnSteel joined #salt
12:12 Nahual joined #salt
12:12 aviau joined #salt
12:13 saintpablo joined #salt
12:14 gmoro_ joined #salt
12:19 indistylo joined #salt
12:20 inad922 joined #salt
12:21 XenophonF anyone have a formula for pf on FreeBSD/OpenBSD they'd like to share?
12:22 DammitJim joined #salt
12:27 J0hnSteel joined #salt
12:31 indistylo joined #salt
12:36 exarkun What's the correct way to maintain salt configuration in vcs such that deploying updates is as easy as possible?
12:38 XenophonF gitfs works well for me
12:38 babilen +1
12:39 XenophonF I've modeled our Salt environments/Git branches after a DTAP workflow.  YMMV.
12:39 onslack <msmith> branches for environments also allow automated testing and merge request-based approval as additional benefits :)
12:40 XenophonF cf. https://github.com/irtnog/salt-states and https://github.com/irtnog/salt-pillar-example
12:40 XenophonF I don't use branches with Pillar data yet, but I'd like to someday.
12:41 deadpoet joined #salt
12:43 evle1 joined #salt
12:47 exarkun The references I could find for gitfs weren't enough for me to build up a very good idea of what it actually does.
12:47 exarkun From these responses, it sounds like it _does_ let you have your state and pillar files in a git repository and allow the salt master to find and use them?
12:47 exarkun So I'll try to learn more about it.
12:48 onslack <msmith> definitely. aiui the master caches head and only updates when requested
12:48 exarkun Sounds great
12:49 onslack <msmith> we have state and pillar in the same repo, although the winrepo stuff is currently split out as i haven't gotten around to testing if it'll work from a sub-folder
12:50 XenophonF exarkun: if you look at my repos (linked above), you'll see a working configuration
12:50 indistylo joined #salt
12:50 XenophonF I'm using salt-formula, users-formula, etc.
12:50 XenophonF the pillar example repo is probably due for an update but its pretty close to my actual configs
12:51 exarkun I think the code in those repos might require some higher-level understanding than I currently have
12:51 XenophonF if something doesn't make sense, ask away
12:51 exarkun It looks like some state and pillar top definitions but I don't see how they get glued in to a master
12:52 exarkun I assume it has something to do with putting "gitfs" into the master config but I don't see master config in here
12:53 XenophonF These repos get loaded by the Salt Master config here - https://github.com/irtnog/salt-pillar-example/blob/master/salt/example/com/init.sls#L234
12:53 XenophonF cf. https://github.com/saltstack-formulas/salt-formula
12:54 XenophonF and cf. https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
12:55 exarkun that tutorial is what I've just started taking a closer look at.  Those other github links look exciting, thanks for those.
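(For reference, a minimal master config wiring up gitfs might look like the following -- a sketch; the repo URLs are placeholders, and gitfs maps git branches to Salt environments by default:)

    # /etc/salt/master
    fileserver_backend:
      - git
      - roots

    gitfs_remotes:
      - https://github.com/example/salt-states.git

    # Pillar from a git repo as well, per the discussion above
    ext_pillar:
      - git:
        - master https://github.com/example/salt-pillar.git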
12:55 edrocks joined #salt
12:56 XenophonF I need to update the example Pillar data.
12:56 XenophonF One issue I ran into with winrepo-ng is that the salt://win/repo and salt://win/repo-ng folders need to be available in all environments.
12:57 XenophonF The solution is easy but weird-looking: list /srv/salt or /usr/local/etc/salt/states in all environments.
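(Concretely, "list /srv/salt in all environments" translates to master config along these lines -- a sketch assuming default paths and a single extra environment:)

    file_roots:
      base:
        - /srv/salt
      dev:
        - /srv/salt/dev
        - /srv/salt          # so salt://win/repo-ng resolves in 'dev' too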
12:58 XenophonF I also recommend encrypting your Pillar data if you're going to store it with a third party.
12:58 exarkun Certainly if it contains anything that needs to remain secret... For now, though, this project has in-house git hosting.  And I don't have any Pillar secrets yet.
12:59 exarkun I have like two items in Pillar, one is a public ip address and the other is a port number. ;)
12:59 XenophonF :)
12:59 XenophonF I also sometimes need to distribute large files, and Git isn't the appropriate storage back end.
12:59 XenophonF so those I put into S3 and access via s3fs
12:59 Pjusur joined #salt
13:00 XenophonF so my actual fileserver_backend is set to ['s3', 'git', 'roots']
13:00 exarkun Have you seen https://git-lfs.github.com/ ?
13:00 XenophonF yes
13:01 babilen git-annex
13:01 XenophonF I discounted both git-lfs and git-annex thinking that I wouldn't want to include those in working copies of the repo
13:01 XenophonF but I could see where one might
13:02 babilen git-annex is really quite handy for sharing/distributing "file collections"
13:02 babilen Use it for my mails, downloads, documents, ... and share with others
13:02 babilen No experience with git lfs
13:02 XenophonF I'll have to take another look at it.
13:03 XenophonF TBH the most difficult part about using Git with Salt is that it makes the learning curve that much steeper.
13:04 babilen But Salt is well renowned for being very easy to learn ;)
13:04 XenophonF I say this with love in my heart for both tools, but Salt is like the Emacs of configuration management.
13:05 XenophonF extensible, powerful, etc., but man does it look weird at first ;)
13:07 exarkun Yea, it would be cool if it had a nice flat learning curve... but good software rarely seems to work that way.  It took me years to learn to be as effective as I am now with emacs.  And I wish I could have just picked it up first day and been proficient.  Maybe someday someone will figure out how to make software like that.
13:08 exarkun I still appreciate emacs and use it for 95% of editing.
13:09 inad922 joined #salt
13:09 kojiro I think there is an entire research domain about this, and based on what little I know of it, I think it's unlikely that there is an intersection there.
13:10 tyx joined #salt
13:11 XenophonF msmith: why do you combine Pillar data with states in the same repo?
13:12 * XenophonF is curious
13:12 * XenophonF also thinks it's weird to talk about themselves in the third person.
13:13 exarkun well, you're typing "/me"; third-person is only a rendering issue.
13:13 * XenophonF Grimlock!
13:13 kojiro heh, "msmith" is very close to my real name, so that was surprising for a second.
13:13 XenophonF LOL
13:14 XenophonF I wonder what the name tags look like on the other side of the Slack bridge.
13:14 kojiro I don't even know what glyphs those other-dimensional beings use for writing.
13:14 XenophonF it's markdown-ish
13:18 babilen I wonder what SaltStack would say if I were to implement a bot that posts every /r/subreddit post/comment to the salt-user mailing list
13:19 babilen "Bringing the communities together"
13:19 XenophonF heh
13:19 XenophonF brb
13:20 kojiro babilen: I think that would make a lot of posters cross.
13:20 kojiro #wordplay #this-website-is-free
13:23 jose1711 joined #salt
13:24 gh34 joined #salt
13:24 jose1711 hello, salt .. --out=json does not really return a json i could feed directly to python's json.loads. how to fix that?
13:24 kojiro jose1711: what does it return? Multiple jsons?
13:25 exarkun jose1711: You can read it with json.load() instead.
13:25 exarkun loads() requires a string that is a single complete json object
13:25 exarkun load() reads from a file-like object until it has a single complete json object and leaves the rest in the file-like object
13:25 kojiro til
13:25 aruns__ joined #salt
13:25 exarkun so, get your data in a file-like object and call load() on it until you run out of data
13:26 kojiro json.load(StringIO...)
13:26 jose1711 so each iteration of output i will get a new dictionary object, right?
13:27 exarkun ...unless I am mixing it up with the yaml module
13:27 jose1711 okay, but what if i want to have a dictionary like this: { 'minion1': {'output': 'foo'}, 'minion2': {'output':  'bar'}}
13:27 exarkun but if I am then just --output=yaml instead :)
13:27 exarkun yaml also has native support for multi-object documents, so maybe it's a better choice anyway?
13:28 exarkun jose1711: you'd have to merge them yourself, which is probably pretty easy since it seems like the output is a bunch of dicts with one top-level key that is the minion name
13:29 jose1711 ok, got it. thanks
13:29 babilen jose1711: jq -s '.[] ' ?
13:29 exarkun it looks like I probably was thinking of the yaml module, fwiw :/  I'm not seeing support in the json module for what I thought it did.
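(For what it's worth, the stdlib can walk concatenated JSON objects with json.JSONDecoder.raw_decode -- a sketch, with a hypothetical target and function:)

    import json
    import subprocess

    # salt prints one JSON object per returning minion on stdout
    out = subprocess.run(
        ["salt", "*", "test.ping", "--out=json"],
        capture_output=True, text=True, check=True,
    ).stdout

    merged = {}
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(out):
        if out[idx].isspace():          # skip whitespace between objects
            idx += 1
            continue
        obj, idx = decoder.raw_decode(out, idx)
        merged.update(obj)              # each obj is {"minion-id": result}

    print(merged)                       # e.g. {'minion1': True, 'minion2': True}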
13:31 racooper joined #salt
13:31 kojiro tidl
13:31 kojiro >:)
13:54 XenophonF back
13:54 tyx joined #salt
14:00 dave_den joined #salt
14:03 dave_den left #salt
14:03 gh34 joined #salt
14:04 cgiroua joined #salt
14:04 onslack <msmith> XenophonF: we don't have any reason to separate them at this time. maybe as the project grows
14:05 dave_den joined #salt
14:05 dave_den left #salt
14:20 Kelsar joined #salt
14:30 edrocks joined #salt
14:30 GrisKo joined #salt
14:58 DammitJim joined #salt
15:05 XenophonF gotcha
15:07 _JZ_ joined #salt
15:10 hoverbear joined #salt
15:15 Ted___ joined #salt
15:17 mahafyi joined #salt
15:17 mahafyi Debian testing (buster) does not install salt-common due to python-tornado being 5.0.0.1 ( which is too high a version, from the errors i see ) and I was wondering if someone here might have a .deb for salt-minion that will run on debian testing
15:18 hoverbear mahafyi: FreeBSD has the same problem
15:19 XenophonF had - they committed a patch over the weekend
15:19 XenophonF the fix should be available via portsnap now
15:19 hoverbear XenophonF: Was still a problem yesterday because the build wasn't done Q_Q
15:19 XenophonF ah :(
15:19 mage_ hoverbear: it's fixed in ports
15:20 hoverbear Oh cool I can try =D
15:20 XenophonF I'm running my own Poudriere instance so I got the patched version built yesterday.
15:20 mage_ same here :)
15:20 hoverbear I'm trying to figure out how to get terraform and salt to play nice since Salt can't seem to manage anything but DNS records and droplets on DigitalOcean. :(
15:22 hoverbear mage_: So I have a salt master that can bootstrap itself right now except that after accepting the `salt-key` and running `salt "*" state.apply` I get that the minion did not return
15:22 hoverbear mage_: Happy to share what I have if I'm farther along than you
15:27 XenophonF hoverbear: what do you mean by "play nice"?
15:28 inad922 joined #salt
15:28 hoverbear XenophonF: So that I can run terraform and any new machines will pop up as minions, have their keys accepted, and get their state applied
15:29 tiwula joined #salt
15:29 XenophonF salt-cloud doesn't work for you?  I thought DO was a supported provider.
15:31 hoverbear XenophonF: It only does VMs and DNS records. I need to manage more than that. :(
15:32 onslack <msmith> if you mean the minions on those guests, then those are independent of the host. if not, what else do you need?
15:33 hoverbear msmith: I'm not really sure what you mean?
15:33 onslack <msmith> i'm asking for more information on what it is you want to achieve but believe you can't
15:34 onslack <msmith> a brief use case might help
15:35 hoverbear msmith: I would like to have my salt master accept the minion's key (I have this done), but if I accept the key, then sleep for a few seconds, and run `salt "*" state.apply` I get "Minion did not return. [No response]"
15:35 inad923 joined #salt
15:35 onslack <msmith> that's normal, minions take time to retry after the key is accepted
15:35 onslack <msmith> if this is manual then simply increase the delay. if it's automated then perhaps you need to look at using reactor to hook the minion connected event
15:35 babilen You can do that with reactors or run highstates on a schedule
15:36 mahafyi can someone point me to a resource on how to compile salt-minion from source?
15:36 hoverbear msmith, okay, so once I accept a key there is no way for me to know when I can start applying state?
15:36 onslack <msmith> yes, reactor
15:37 hoverbear msmith: Hm, okay, I'll investigate this.
15:37 hoverbear Thanks!
15:37 mikecmpb_ joined #salt
15:37 Guest73 joined #salt
15:38 onslack <msmith> if you want to get even more complicated then you can use orchestration and reactor to hook the key request event, somehow determine whether to accept it, and then wait for the connected event to perform the highstate :)
15:39 onslack <msmith> but perhaps learn the basics first ;)
15:41 hoverbear msmith Yeah. :) I'm still learning. Trying to make a fairly automated cluster
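(A bare-bones sketch of that simple reactor case -- highstate a minion when it (re)connects; the file paths are hypothetical, the event tag is the standard minion start event:)

    # /etc/salt/master.d/reactor.conf
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/highstate.sls

    # /srv/reactor/highstate.sls
    highstate_on_start:
      local.state.apply:
        - tgt: {{ data['id'] }}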
15:42 onslack <msmith> tbh if you're creating a minion on a newly-provisioned vm then seeding the minion key would probably be better, as that saves the step of accepting the key
15:43 rlefort joined #salt
15:43 hoverbear msmith: It's quite difficult to plan "pre-actions" with Terraform. :(
15:43 onslack <msmith> i don't know that provider, but i suspect that having salt do it in the first place would work... :D
15:45 syd_salt joined #salt
15:49 babilen You can also look into startup states
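(Startup states are a single minion config option, e.g.:)

    # /etc/salt/minion
    startup_states: highstate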
15:54 dendazen joined #salt
15:56 syd_salt Hi all, I have a set of minions that have salt-minion and salt-common installed. They work fine for, say, test.ping, cmd.run, etc., but I cannot sync a custom module to them; -l debug doesn't show much, just retries. Anyone think of anything I can check or try?
15:56 syd_salt The same custom module works elsewhere, all it's doing is returning hello
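(Such a module would normally be a single file under _modules/ in the master's file roots, synced out with saltutil.sync_modules -- a sketch assuming default file_roots and a hypothetical module name:)

    # /srv/salt/_modules/hello.py
    def hello():
        '''Callable as: salt '*' hello.hello'''
        return 'hello'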
15:58 onslack <msmith> on the minion, try `salt-call saltutil.sync_modules -l all` and be prepared for some logs :)
15:58 Guest73 joined #salt
15:59 MTecknology hoverbear: When I was working with a lot of very *very* remote deployments: when the minion tried to connect, if the name was "$template-id", the key would be denied. When someone ran the deploy script, it set the minion_id/hostname, deleted salt and ssh keys, and restarted both services. When an event hit the bus for pending auth, I kicked off a script from reactor that would attempt to
15:59 MTecknology do some verification of the host and then accept the key. I also had a reactor set up so any minions coming online would run a highstate.
15:59 MTecknology hoverbear: dunno if that helps you or not, but it worked well for me.
16:00 hoverbear MTecknology: I'm working on doing the reactor highstate thing. :) Thanks.
16:01 hoverbear babilen: Oh, hey, that looks like it might work as well.
16:03 babilen hoverbear: It really depends on what you want, my suggestion would be to read the documentation on reactors and startup states and take it from there
16:06 dkehn_ joined #salt
16:06 Karachi joined #salt
16:11 pppingme joined #salt
16:12 onslack <ryan.walder> hoverbear: For TF I made my own bootstrap which basically sets the id, adds a minimal minion config, requests a key from the salt API, starts the minion, and runs a highstate
16:13 hoverbear ryan.walder: Do you have your terraform state managed on the salt master?
16:17 mikecmpbll joined #salt
16:26 Karachi left #salt
16:27 syd_salt So I see what's wrong with my minions... they're loading custom modules from /var/cache/salt/minion/extmods/modules/ and not /var/cache/salt/minion/files/base/
16:27 syd_salt Which is where they are in other minions
16:28 Nahual joined #salt
16:28 onslack <msmith> i believe that the former is the target and the latter is the cached source from <salt://_modules/>
16:29 syd_salt I see
16:29 sauerkirsch 2018-03-20 16:11:28.920621 I | http: TLS handshake error from 127.0.0.1:60907: remote error: tls: unknown certificate authority
16:29 beardedeagle joined #salt
16:29 sauerkirsch oh, sorry
16:30 syd_salt When I manually copy the module to those locations it works. So it seems the module sync just isn't working for these minions
16:30 BitBandit joined #salt
16:30 onslack <msmith> do the debug logs from `saltutil.sync_modules` shed any clues?
16:33 syd_salt Not really, it checks if the job is still running 3 times and then - [DEBUG   ] retcode missing from client return
16:34 pppingme joined #salt
16:34 onslack <msmith> run from the master or the minion?
16:35 syd_salt master
16:35 nixjdm joined #salt
16:35 syd_salt when running from the master it removes what I manually copied
16:35 onslack <msmith> try it from the minion using `salt-call` instead, just in case the running minion is behaving oddly
16:37 syd_salt I do have some errors when doing that...
16:37 syd_salt [WARNING ] Failed to import states pip_state, this is due most likely to a syntax error.
16:37 syd_salt and a few - sh: 0: getcwd() failed: No such file or directory
16:38 cewood joined #salt
16:39 onslack <msmith> that helps, even if all it tells us is that something is wrong :)
16:39 onslack <msmith> do you know the state it's failing on?
16:41 syd_salt I think it was just me, I was still in the modules dir it removed... when I cd out and run again I do not get any errors
16:41 onslack <msmith> so the sync claims to have succeeded. did it? :)
16:44 syd_salt No it doesn't
16:44 onslack <msmith> was that modules or all?
16:45 syd_salt modules
16:45 syd_salt it shows it creating the directories etc
16:45 syd_salt [INFO    ] Creating module dir '/var/cache/salt/minion/extmods/modules'
16:45 syd_salt [INFO    ] Syncing modules for environment 'base'
16:45 syd_salt [INFO    ] Loading cache from salt://_modules, for base)
16:45 syd_salt But they don't exist after
16:46 onslack <msmith> ok, on that minion, what does `salt-call cp.list_master` show as in `_modules/` ?
16:47 onslack <msmith> it'll list other files, e.g. states, perhaps grep for _modules
16:49 onslack <msmith> what i'm expecting is that the module you want to sync is listed. if it's not listed then that's why it's not syncing :)
16:50 syd_salt it's not listed...
16:50 syd_salt it is on a working minion
16:50 onslack <msmith> so the problem is on the master, not the minion
16:50 onslack <msmith> ooc does it show up if you run the same command on a different minion?
16:50 syd_salt yeah
16:51 onslack <msmith> do you have more than one master?
16:52 syd_salt No
16:53 onlyanegg joined #salt
16:54 onslack <msmith> on the master does it show up if you run `salt-run fileserver.file_list` ?
16:54 mahafyi where can I get, and how do I install, the 2018.3.0 RC?
16:58 syd_salt yeah it does. I don't want to waste more of your time, I might just have to dig around.
16:59 masuberu joined #salt
17:00 edrocks joined #salt
17:00 onslack <msmith> a quick `fileserver.update` might be worthwhile
17:00 onslack <msmith> ok. good luck :)
17:01 syd_salt Thanks for your help though, I think I might call it a day and start smashing my head off it tomorrow morning.
17:03 onslack <msmith> no worries. i may be a volunteer but i learn something every day :)
17:04 onslack <ryan.walder> hoverbear: Sorry got dragged into a meeting
17:04 onslack <ryan.walder> What do you mean by TF state managed by salt?
17:05 onslack <ryan.walder> if you mean TF run by salt no, it's run by jenkins usually
17:06 onslack <ryan.walder> with the terraform.tfstate file kept in some swift storage
17:08 * kojiro is very interested in how other people are using terraform and salt together
17:09 MTecknology I use salt-cloud instead of terraform
17:11 onslack <ryan.walder> I can't share the TF code but the bootstrap for minions is:<https://gist.github.com/ryanwalder/90c5de0b59be0ea9b3f542202eabc765>
17:12 onslack <ryan.walder> then we have a separate TF for management (salt + consul) and separate for prod, dev, qa etc...
17:12 onslack <ryan.walder> (all connecting to the one salt master)
17:14 MTecknology my bootstrap.. https://gist.github.com/MTecknology/66ce7c7f148fc9da936bcf26cc572cd7
17:14 onslack <ryan.walder> So basically a root module per env with dependencies set between machines (i.e. bring up 3x DB machines, run salt (magically clustering), then bring up the frontend, which requires the DBs to be up and all provisioning complete (so highstate, mostly))
17:14 MTecknology I should put error handling in there.
17:16 zer0def MTecknology: that's a lot of explicitly installed libs, looks like the example from saltconf'15
17:16 onslack <ryan.walder> why not just wrap the official bootstrap?
17:16 onslack <msmith> that's purged, not installed :)
17:16 zer0def you could always `set -e` at the beginning
17:16 onslack <ryan.walder> make a cleaner image ;)
17:16 zer0def oh, welp
17:17 onslack <ryan.walder> packer to install the minimal os + salt, tf to bring them up and bootstrap, salt to configure
17:17 MTecknology ryan.walder: because the official bootstrap is a massive kludge of hacks that cares only about one thing and leaves you guessing about what it actually did?
17:18 onslack <ryan.walder> I disagree but sure
17:18 zer0def you mean the default salt-bootstrap.sh? because there's a plethora of others in `salt/cloud/deploy`, frequently per-distro
17:19 MTecknology "zer0def> MTecknology: that's a lot of explicitly installed libs,"
17:19 dezertol joined #salt
17:19 MTecknology zer0def: you saw that was purge, right?
17:19 zer0def yeah, msmith reminded me of this
17:27 zer0def as far as terraform goes, you don't need it when using salt; in fact, i find using it actively limiting (don't get me started on HCL's limitations)
17:29 onslack <ryan.walder> yeah, that's not true at all.
17:29 onslack <ryan.walder> TF is way more powerful than salt/salt-cloud
17:29 MTecknology I've heard terraform is more flexible overall, but it does lack salt integration
17:29 onslack <ryan.walder> HCL is utter toss though
17:30 onslack <ryan.walder> and it has a ton of its own problems
17:30 MTecknology salt-cloud is picking up steam, though. It's infinitely better than it was a year ago.
17:30 onslack <ryan.walder> it's still all static
17:30 onslack <msmith> i suspect it's also being deprecated in favour of cloud-specific modules
17:30 onslack <ryan.walder> unless you mean the state modules
17:30 onslack <ryan.walder> which is just a hack
17:31 zer0def i'm yet to experience the flexibility of terraform, then, hence the opinion i hold atm
17:32 MTecknology msmith: I think "merged" is closer to what I'm seeing.
17:32 MTecknology salt-cloud isn't going anywhere
17:32 zer0def however, in the brief moment in which you've described it, ryan.walder, the expressions "utter toss", "its own problems", "it's still all static" and "just a hack" got a mention, which makes me *significantly* question TF's flexibility
17:33 inad923 joined #salt
17:33 onslack <ryan.walder> the first 2 reference TF/HCL, the 2nd 2 reference salt ;)
17:35 onslack <ryan.walder> TF is right on the money functionality-wise, however HCL is hot garbage
17:35 tiwula joined #salt
17:35 zer0def i wouldn't say using cloud modules is hacky at all
17:35 MTecknology I got to the point where I can define a VM in Netbox and then run "salt-cloud -p <fqdn>" and just wait for things to happen. It'll be added to DNS, if it has aliases, those will also be added, if it's part of a load balanced cluster, my load balancers will get updated, a user will be created on the backup server for it to back up, and an ssh key will be installed, etc.
17:35 Nahual MTecknology: I dream of that scenario. Someday.
17:36 MTecknology someday I'm gonna take the time to automate "salt-cloud -m ..."
17:36 zer0def you're basically served the same functionality salt-cloud does in either a state or reaction
17:36 onslack <ryan.walder> now try and spin up an entire infrastructure of thousands of machines and tell me that; you'll have plenty of time to write your reply while it slowly loops over things one by one
17:36 MTecknology salt-cloud supports parallel executions
17:36 onslack <ryan.walder> or try and use data from one piece of infrastructure in another
17:37 MTecknology that's easy with orchestration and/or sdb
17:37 zer0def i was about to mention the omission of parallel provisioning
17:37 onslack <ryan.walder> like a set of IPs in an ELB
17:37 MTecknology AWS can suck my dog crack
17:37 MTecknology dog's*
17:37 zer0def that's actually something i've already made a repo on for presenting purposes, so i'm heavily inclined to disagree
17:37 onslack <ryan.walder> the platform is irrelevant, it's the example
17:38 MTecknology then it's a bad example
17:38 onslack <ryan.walder> ok, use an IP from one machine in an API call
17:38 onslack <msmith> i haven't scaled up to that magnitude yet so i can't say much, but aren't these the kind of problems that any system will have at significant scale?
17:39 onslack <ryan.walder> zer0def, all in a single run?
17:39 MTecknology orch can handle that, and thorium can handle it better
17:39 zer0def also, i like how "just a hack" was mentioned in context of using cloud module calls, but yet salt-bootstrap.sh somehow didn't make the cut
17:39 zer0def yes, ryan.walder
17:39 MTecknology You're free to use whatever you want, but your arguments against salt-cloud seem quite uneducated.
17:39 onslack <msmith> opinions. like assholes. everyone's got one. be gentle, people.
17:40 MTecknology eh, that probably came across less friendly than I meant it to
17:40 zer0def either through an orchestrate or thorium run (i'm yet to wrap my head around thorium)
17:40 onslack <ryan.walder> well I would love to see some example code as that would likely change my mind
17:41 zer0def ryan.walder: https://github.com/zer0def/salt-reactive-aws-in-15
17:41 onslack <ryan.walder> given the complete dearth of examples for such things in salt
17:41 onslack <ryan.walder> great, thanks. I'll give it a read later.
17:42 zer0def actually, i think i don't have explicit salt-cloud calls to AWS
17:42 onslack <msmith> our env is setting up to use provider-independent kubernetes, managed by salt. i'm told we could merge swarms across providers, including in-house, with ease
17:42 onslack <ryan.walder> I'm sure it'll give me enough of an idea of how you're handling things
17:42 zer0def but the barebones for setting up a two-zone VPC with ASGs and reactions is there
17:42 MTecknology I forgot.. when is saltconf this year? I should see about digging into my project again. I took a bit of a long mental break from it.
17:43 zer0def the example is horrid, because ASGs compete with Salt over provisioning authority, so it probably should react to CloudWatch instead and the reactor should then just call a cloud.create runner or something
17:46 zer0def MTecknology: i've seen promos in documentation's footer, September
17:47 MTecknology ah, nice.. plenty of time!
17:50 tyx joined #salt
17:50 MTecknology zer0def: that massive list of purged packages is because I'm kinda anal about keeping systems as lean and clean as possible ... except that I'm also still stuck on using debian.
17:51 MTecknology it started because cloud-init was massive and had massive dependencies and the string kept unraveling.
17:51 zer0def MTecknology: yeah, i've seen the presentation from '15, assumed you might've been unsure about the systems you've inherited, are keeping some sort of an image and cleaning it up on deployment
17:51 MTecknology my presentation?
17:52 zer0def yeah
17:52 MTecknology oh... I'm sorry.
17:52 zer0def that's ok, i sympathise with it, done similar work a couple of times already, too
17:52 onlyanegg joined #salt
17:53 zer0def just maybe not to the scale you were doing it back then
17:53 MTecknology I meant that you had to watch it. :P
17:53 zer0def oh… yeah, been in the same state you were during it, as well
17:54 zer0def nothing to be sorry about, just reinforces points made
17:54 MTecknology I'm gonna do /way/ better for my next presentation. :D
17:55 zer0def hah
17:55 MTecknology for starters, I'm gonna wear clothes that fit!
17:56 zer0def on merits alone i agree with it
17:56 MTecknology thanks :)
17:57 zer0def everything else i could care less about
17:57 zer0def well, couldn't.
17:59 MTecknology I'm planning on a demo where I can connect to my home network, open netbox, add a brand new service, and have another terminal open doing dig lookups, waiting for the production address to magically exist and instantly be reachable.
18:00 zer0def sounds pretty easy
18:00 zer0def well, assuming you have enough time and willpower
18:00 MTecknology should be - but it's gonna be a whole lot of components that I just talked up that will need to work perfectly.
18:01 MTecknology and .. we all know how demos go. I'll have a pre-recorded copy, just in case.
18:01 anthonyshaw joined #salt
18:01 zer0def they probably stand a better chance than working with AWS
18:03 zer0def i mean, AWS is fine, just slow and pricy at times, perks of the magnitude they're operating at
18:03 MTecknology I actually just ranted in #salt-offtopic today about how much I hate aws
18:04 zer0def oh, then i should probably move the convo there
18:11 JawnAuz joined #salt
18:11 bluenemo joined #salt
18:11 JawnAuz joined #salt
18:25 mikecmpbll joined #salt
18:29 xet7 joined #salt
18:34 Nahual So it appears the salt-syndic setup begins to fail when a master_job_cache is specified as pgjsonb. If it's left as local_cache, the setup works as expected with the MOM able to query minions registered to the salt-syndic nodes.
18:35 Nahual I get a tornado exception on self.mminion.returners with the pgjsonb setup. I am currently checking open issues for something related.
18:35 tyx joined #salt
18:39 pcdummy Nahual: is it tornado 5.x?
18:43 Nahual pcdummy: 4.2.1.
18:46 gswallow joined #salt
18:54 onlyanegg joined #salt
19:15 gforgx joined #salt
19:21 mavhq joined #salt
19:29 lordcirth_work joined #salt
19:48 Aleks3Y joined #salt
19:50 ymasson joined #salt
19:51 Trauma joined #salt
19:57 tiwula joined #salt
20:00 dave_den joined #salt
20:00 dave_den left #salt
20:00 Guest73 joined #salt
20:02 masber joined #salt
20:04 onlyanegg joined #salt
20:13 jas02 joined #salt
20:19 DammitJim joined #salt
20:26 petererer joined #salt
20:49 Edgan Can someone explain to me why "{% if salt['pillar.get']('foo') is defined %}" wouldn't act like "{% if salt['pillar.get']('etl_app:db') %}" when the pillar doesn't exist? A pillar.get "defines" a variable even when the pillar doesn't exist?
20:53 Guest73 joined #salt
20:53 hemebond Not sure why it doesn't work but I always use {% if salt['pillar.get']('etl_app:db', false) %}
20:54 hemebond *why that doesn't work
20:55 Edgan hemebond: I switched to version one to get the behavior I expect
20:55 DammitJim how does one check if a pillar value doesn't exist?
20:55 DammitJim not defined?
20:56 hemebond Well you can't really because it's not a Jinja variable.
20:56 hemebond I mean it should equate to false because it will be an empty value.
20:56 Edgan DammitJim: You use pillar.get('foo') instead of pillar['foo']
20:56 hemebond You can use `in` I think.
20:56 Edgan DammitJim: Otherwise you get an error
20:56 hemebond {% if 'foo' in pillar %}
20:57 xet7 joined #salt
20:57 Edgan probably
20:57 Edgan But pillar.get is meant for this
20:57 hemebond Yes, it's better to use .get()
20:57 DammitJim the thing is that I'm getting the pillar data at the beginning of the state
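(To summarise the thread: pillar.get never raises and returns '' (or the default you pass) when the key is missing, so its result is always "defined" -- test the value's truthiness instead. A sketch with a hypothetical state:)

    {# '' is falsey, so this block is skipped when the pillar key is absent #}
    {% if salt['pillar.get']('etl_app:db') %}
    etl_app_db_conf:
      file.managed:
        - name: /etc/etl_app/db.conf    # hypothetical path
        - contents_pillar: etl_app:db
    {% endif %}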
20:58 Aleks3Y left #salt
21:03 schemanic joined #salt
21:05 gforgx Hi all! Could someone please help me with salt-runner?
21:05 gforgx I'm running:
21:05 gforgx salt 2017.7.4 (Nitrogen)
21:05 gforgx I'm using napalm proxies and they seem to work fine. E. g.:
21:06 gforgx # salt XXX net.mac | grep AC:1F
21:06 gforgx AC:1F:6B:02:FC:F5
21:06 gforgx ...
21:07 MTecknology use a pastebin next time
21:08 gforgx I'm sorry, I thought it was ok to post 2 output lines here.
21:09 gforgx However when I try to run salt-runner net.findmac using a MAC address it just shows 'None' and finishes quickly. It doesn't seem like if it performs some lookups.
21:10 MTecknology I'm counting eight lines where one would have been sufficient.
21:10 aldevar left #salt
21:11 MTecknology What is it you tried that doesn't work?
21:13 gforgx Sorry, what did you mean?
21:13 MTecknology You shared something that does work, and then just said something else doesn't.. but you didn't actually share what that something is.
21:14 gforgx Running net.mac (just an example) with salt does fetch MAC address table from the router.
21:15 gforgx Running net.findmac with salt-runner doesn't work.
21:15 MTecknology salt-runner runs on the master, so what you're saying doesn't make sense without context.
21:18 gforgx I'm trying to debug the code; _get_mine('net.mac') returns an empty dict.
21:18 ponyofdeath joined #salt
21:18 onlyanegg joined #salt
21:21 gforgx 'target' in net_runner_opts is '*'. AFAIU, it should query all proxies then.
21:30 dezertol joined #salt
21:53 xet7 joined #salt
21:59 kojiro joined #salt
22:09 onlyanegg joined #salt
22:11 whytewolf gforgx: what command are you running? what configs are you using? and what runners are you trying to call?
22:11 whytewolf you are leaving out WAY too much context
22:14 kojiro Is there any way to rewire salt to run in a single-process mode for debugging?
22:14 kojiro Like instead of going through Multiprocessing, enable single-process pdb?
22:14 gforgx You mean salt-master?
22:14 kojiro I mean salt-ssh, sorry
22:16 gforgx @whytewolf Sorry, I'll try to get all the configs together now and paste them to the bin - there isn't really much I have changed from the default.
22:22 cro joined #salt
22:22 gforgx That's all the configuration I modified by myself: https://gist.github.com/gforg-x/2c248cc190ebd9651023ffeb1d45e8e2
22:23 gforgx In /srv/salt/pillar I have .sls for all routers and top.sls.
22:23 gforgx I can execute commands using `salt` on all of them.
22:27 whytewolf okay, salt commands actually reach out to the minion proxies and query them directly.
22:29 gforgx What might be wrong with the master not querying minion proxies when using runner?
22:30 whytewolf did you follow the directions at the top of https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.net.html about settings for the master?
22:31 whytewolf esp. the mine_functions part?
22:31 whytewolf [btw. the runner DOES NOT QUERY the proxies directly]
22:32 gforgx That seems to be the issue :-) It wasn't obvious to me. Thank you for pointing that out! I'll try it now.
22:45 thelocehiliosan joined #salt
22:52 gforgx I've added mine_functions and mine_interval to `/etc/salt/minion`, how can I check whether mines are refreshed?
22:55 gforgx Nevermind, it works now.
22:55 gforgx whytewolf, thanks a lot!
22:56 whytewolf np
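(For anyone hitting the same wall: the net runner reads from the salt mine rather than querying the proxies live, so each proxy minion needs mine config along these lines -- function list per the net runner docs, interval value arbitrary; `salt '*' mine.update` forces a refresh and `salt '*' mine.get '*' net.mac` shows what is cached:)

    # proxy minion config or pillar
    mine_functions:
      net.mac: []
      net.arp: []
      net.ipaddrs: []
      net.lldp: []
      net.interfaces: []

    mine_interval: 30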
23:03 gforgx One more question. What are these null (when using JSON output) and None (table output)? They get printed on stdout and this breaks JSON.
23:03 gforgx https://gist.github.com/gforg-x/95d328202166a7420d170adca03e60b3
23:03 masber joined #salt
23:04 masuberu joined #salt
23:13 mechleg left #salt
23:20 whytewolf I don't know.
23:20 dh joined #salt
23:21 whytewolf you can always run the command with -l debug
23:22 dh__ joined #salt
23:28 monokrome joined #salt
