IRC log for #salt, 2018-02-26

All times shown according to UTC.

Time Nick Message
00:01 tiwula joined #salt
00:06 cewood joined #salt
00:08 pcdummy joined #salt
00:33 onlyanegg joined #salt
00:53 onlyanegg joined #salt
00:58 edrocks joined #salt
01:27 oyvindmo joined #salt
01:35 karlthane joined #salt
01:55 zerocoolback joined #salt
02:35 mk-fg joined #salt
02:58 ilbot3 joined #salt
02:58 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.3 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
03:12 onlyanegg joined #salt
03:17 gnomethrower joined #salt
03:49 onlyanegg joined #salt
03:53 lompik joined #salt
03:56 JPT joined #salt
03:58 pcdummy Hi, I'm the original author of https://github.com/saltstack-formulas/lxd-formula - as I have some time now I want to add salt-cloud support or something like it; has anyone got a starting point for me?
03:59 evle joined #salt
04:01 hemebond You want to add "salt-cloud" to it?
04:02 pcdummy basically I want to saltify "LXD Containers" as requested here: https://github.com/saltstack-formulas/lxd-formula/issues/3
04:06 pcdummy The main problem I'm thinking about is how to retrieve the minion's pub key once it's done.
04:06 hemebond That's not really something for a formula to do, imo.
04:06 hemebond or maybe it is and I don't really understand what you're suggesting.
04:08 pcdummy This formula creates containers; once they're created the user can saltify them - I want to ship an easy HOWTO on doing so with salt WITHOUT auto_accept :)
04:24 hemebond Hmm. That would mean having a minion tell the master to accept a minion key.
05:20 indistylo joined #salt
05:20 karlthane joined #salt
05:40 golodhrim|work joined #salt
05:52 karlthane joined #salt
05:57 indistylo joined #salt
06:01 gnomethrower joined #salt
06:16 karlthane joined #salt
06:44 eseyman joined #salt
06:48 aruns joined #salt
06:58 aruns__ joined #salt
07:01 indistylo joined #salt
07:04 evle joined #salt
07:07 karlthane joined #salt
07:15 pcdummy some way to generate the key on the master, seed it to the client, and then accept it
07:15 pcdummy or, less secure - accept the next key with that hostname.
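
One way to do the "generate on the master and seed it" approach is salt-key's preseed workflow; a minimal sketch, where the minion id and the lxc push step are assumptions about how the formula would deliver the files:

```sh
# On the master: generate a keypair for the new container
salt-key --gen-keys=container-a --gen-keys-dir=/tmp/keys

# Pre-accept it by dropping the public key into the accepted-keys dir
cp /tmp/keys/container-a.pub /etc/salt/pki/master/minions/container-a

# Seed the pair into the container before salt-minion first starts
lxc file push /tmp/keys/container-a.pem container-a/etc/salt/pki/minion/minion.pem
lxc file push /tmp/keys/container-a.pub container-a/etc/salt/pki/minion/minion.pub
```
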
07:15 swa_work joined #salt
07:21 doubletwist joined #salt
07:29 coredumb pcdummy: easiest way imo is to use the API
07:30 MTecknology something about that seems like a sub-optimal idea
07:35 MTecknology vs. using a salt-cloud module that connects and does as needed to create/launch/bootstrap/destroy.
07:35 jhauser joined #salt
07:40 rgrundstrom joined #salt
07:45 Yoda-BZH joined #salt
07:46 pualj joined #salt
07:48 aldevar joined #salt
07:57 jas02 joined #salt
07:59 colttt joined #salt
08:16 Tucky joined #salt
08:19 aviau joined #salt
08:24 Hybrid joined #salt
08:32 masber joined #salt
08:35 inad922 joined #salt
08:37 sergeyt joined #salt
08:40 cewood joined #salt
08:53 darioleidi joined #salt
09:00 mikecmpbll joined #salt
09:00 v12aml joined #salt
09:24 antpa joined #salt
09:26 Tucky joined #salt
09:29 tys101010 joined #salt
09:31 Mattch joined #salt
09:31 hoonetorg joined #salt
09:46 Tucky joined #salt
09:48 esteban joined #salt
09:57 JPT joined #salt
10:04 cewood joined #salt
10:10 Tucky joined #salt
10:12 pcdummy MTecknology: thought the same; do you know if I need to implement all of the API functions or will it work with only parts of them: https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/digitalocean.py ?
10:16 pualj joined #salt
10:20 karlthane joined #salt
10:26 sergeyt joined #salt
10:28 mage_ any idea how, when I'm firing a custom event on a minion, I could get the "output" of the reactor?
10:37 mechleg joined #salt
10:44 colegatron joined #salt
10:52 baffle joined #salt
10:53 dRiN joined #salt
10:58 colegatron is there any way to display all the states (and the order they're in) that are going to be run within a state.sls?
11:01 mage_ state.show_sls ?
11:01 colegatron omg. really? going to check
11:01 mage_ yep, take a look at the "order"
11:05 baffle joined #salt
11:06 sergeyt joined #salt
11:08 colegatron thank you a lot
11:10 syd_salt joined #salt
11:14 colegatron mage_: seems there is no way to display it sorted by "order", is there?
11:18 mage_ I don't think so ..
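
For reference, a minimal example of the state.show_sls suggestion (the SLS and minion names are placeholders); each state in the output carries its computed order value, which is what mage_ is pointing at:

```sh
salt 'web01' state.show_sls mystate
# or directly on a minion:
salt-call state.show_sls mystate
```
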
11:22 Pjusur joined #salt
11:28 zerocoolback joined #salt
11:28 sergeyt joined #salt
11:39 sergeyt joined #salt
12:25 buumi joined #salt
12:35 sergeyt joined #salt
12:44 tys101010 joined #salt
12:46 jas02 joined #salt
12:46 nickadam joined #salt
12:56 tys101010 joined #salt
13:06 gmoro joined #salt
13:07 sergeyt joined #salt
13:07 evle1 joined #salt
13:13 zerocoolback joined #salt
13:13 Tucky joined #salt
13:18 Nahual joined #salt
13:21 Tucky joined #salt
13:23 syd_salt2 joined #salt
13:31 jas02 joined #salt
13:34 gh34 joined #salt
13:36 sergeyt joined #salt
13:39 pualj joined #salt
13:46 jas02 joined #salt
13:47 mikecmpbll joined #salt
13:53 aruns__ joined #salt
14:00 lounagen joined #salt
14:03 sergeyt joined #salt
14:04 edrocks joined #salt
14:05 mchlumsky joined #salt
14:07 XenophonF joined #salt
14:11 BitBandit joined #salt
14:13 saltnoob58 joined #salt
14:23 Whissi joined #salt
14:28 sergeyt joined #salt
14:30 cgiroua joined #salt
14:34 racooper joined #salt
14:35 justanotheruser joined #salt
14:38 JoshL- left #salt
14:38 JoshL- joined #salt
14:38 JoshL- left #salt
14:44 _JZ_ joined #salt
14:46 pualj joined #salt
14:52 sergeyt joined #salt
15:01 sergeyt joined #salt
15:10 gadams joined #salt
15:20 exarkun joined #salt
15:34 lordcirth_work joined #salt
15:35 thelocehiliosan joined #salt
15:41 ddg joined #salt
15:51 emerson left #salt
15:55 jeffspeff joined #salt
16:10 ddg joined #salt
16:10 JohnnyRun joined #salt
16:18 zerocoolback joined #salt
16:23 edrocks joined #salt
16:24 heaje joined #salt
16:28 tiwula joined #salt
16:34 onlyanegg joined #salt
16:36 Tucky joined #salt
16:37 Pjusur joined #salt
16:39 rivyn joined #salt
16:40 rivyn I am able to use postgres_cluster.present without issue.  However, trying to use postgres_cluster.absent produces the following error:  State 'postgres_cluster.absent' was not found in SLS 'postgresql.remove';  Reason: 'postgres_cluster' __virtual__ returned False: Unable to load postgres module.  Make sure `postgres.bins_dir` is set
16:49 Sacro So if I wanted to wait 1 hour between states, is there any way other than test.sleep?
16:53 rivyn the problem seems to be that it depends on postgres packages being installed, and the SLS is also removing the packages.  How can I make the postgres_cluster.absent only run if there is a postgresql package installed?
16:55 Sacro rivyn: state to install the postgresql package
16:55 Sacro Or at least check it's installed
16:56 Nahual Sacro: Is scheduling them an hour apart via the job scheduler an option or are you looking to block during the sleep?
16:56 Sacro Nahual: no need to block
16:56 Sacro I just want to only schedule it once
16:57 Sacro Basically, run an 'add' state, wait one hour (or a given time period) and then run a 'remove' state
16:57 Nahual You can schedule a one time job.
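
A sketch of that one-time-job idea, assuming the schedule module's `once` parameter; the job name, SLS, and timestamp are made up:

```sh
# run the 'remove' state once, an hour from now, on the targeted minion
salt 'web01' schedule.add remove-later function='state.sls' \
    job_args="['myapp.remove']" once='2018-02-26T18:00:00'
```
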
16:57 rivyn I got it working but it seems stupid.
16:58 rivyn my remove.sls has to install the postgresql-common package (in case it's not already installed) so postgres_cluster.absent can run, then it removes the package
16:59 Sacro rivyn: can you not test=True on the package install?
16:59 Sacro Or maybe instead use cmd.run `which psql`
16:59 rivyn hm?
16:59 rivyn I can paste the SLS
17:00 rivyn https://gist.github.com/caseyallenshobe/c9b0a8f56a85fe4f4eb461745c16809a
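
The gist itself isn't reproduced here, but a hypothetical sketch of the alternative being discussed - gating the cluster removal on the package actually being present via a jinja check, instead of reinstalling it first (package and cluster names are assumptions):

```yaml
{% if salt['pkg.version']('postgresql-common') %}
remove_main_cluster:
  postgres_cluster.absent:
    - name: main
    - version: '10'
{% endif %}

postgresql_removed:
  pkg.purged:
    - pkgs:
      - postgresql-common
    {% if salt['pkg.version']('postgresql-common') %}
    - require:
      - postgres_cluster: remove_main_cluster
    {% endif %}
```
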
17:02 theloceh1liosan joined #salt
17:10 MTecknology pcdummy: I'm not sure I follow "implement all of the APIs or will it work with parts"
17:11 cewood joined #salt
17:14 MTecknology pcdummy: I suspect you're going to have an extremely difficult time with this because you're trying to manage lxd directly, instead of using some middleware that provides a nice central access point with an api.
17:15 pcdummy MTecknology: first, thanks for helping out! What would you suggest I do?
17:15 MTecknology ditch the manual management because it's painfully silly in 99% of situations?
17:16 pcdummy MTecknology: I dream of an open-source "Container as a Service" solution with saltstack as the executor
17:16 MTecknology k?
17:16 pcdummy MTecknology: you know cpanel, froxlor and co?
17:17 MTecknology so... turnkey for lxd
17:17 pcdummy yeah
17:18 MTecknology that sounds horrible, but doesn't really change my suggestion
17:18 pcdummy horrible?
17:18 MTecknology it sounds like more people running more garbage that they don't understand in any way
17:18 MTecknology which is apparently "the way of the standard devops"
17:18 pcdummy ahh that, well salt will manage containers
17:19 MTecknology anyway- how terrible it sounds doesn't really matter.
17:19 pcdummy https://drawstack.github.io/qxjoint/qxjoint_demo/ <-- this is what the UI "could" look like (this is a non working preview)
17:20 pcdummy MTecknology: I put a lot of weight on your opinion; I know you know a lot about salt and stuff.
17:20 MTecknology If you want to sanely manage lxd containers, you need a management interface, and salt is not that.
17:21 pcdummy MTecknology: salt is just the executor - nothing else
17:21 pcdummy drawstack sends a command -> create container "A" -> salt executes and reports back
17:22 MTecknology you mean it's just for remote execution?
17:22 pcdummy a little more; I'm thinking about generating pillar from drawstack (or better, writing a plugin for salt).
17:23 pcdummy MTecknology: you don't have ONE bare-metal server but many, and you don't have just one task (create container) - you also update the router and so on.
17:25 MTecknology You're re-inventing a lot of wheels.. using some of the least efficient tools for various jobs
17:25 pcdummy hmm not what i wanted to hear... :)
17:25 MTecknology I can see why it might sound nice, but that's a lot of excellent tools used for the wrong jobs.
17:26 pcdummy How would you do that, mate?
17:30 onlyanegg joined #salt
17:31 MTecknology s/lxd/lxc/, and I'd use proxmox. s/lxd/kvm-only/, and I'd start learning whatever I've been recommended in the past.   Having that nice central management interface with a standardized API means you can write a salt-cloud module to interact with your virtualization solution's api.  From there, I have netbox set up as my inventory management solution. I have an ext_pillar that pulls data
17:32 MTecknology from the api, which populates a minion's pillar data (since masters can't have pillar data). Then I have orchestration stuff in place to create/destroy (highstate DNS and LB, then run salt-cloud, or the reverse)
17:33 pcdummy ok, i see
17:33 MTecknology from there, you can build whatever shiny front-end you want to correctly populate your inventory management system
17:33 pcdummy I want to reinvent the wheel, just for LXD
17:34 aldevar left #salt
17:34 MTecknology but why build a one-size-fits-my-setup-only solution when you can build something universal?
17:35 MTecknology well.. one-size-fits-my-setup-only-until-i-want-to-change-things
17:37 pcdummy I want containers because they use far fewer resources than VMs; I want an easy UI for end users - where they pick their PHP/Python/Ruby/etc. version and so on.
17:37 pcdummy Managed containers or unmanaged - unmanaged only for trusted users.
17:38 MTecknology but that doesn't answer my question
17:39 pcdummy And i don't want Docker.
17:40 pcdummy Hmm not sure about your question if i understood.
17:40 MTecknology well.. docker is garbage so that's something we agree on
17:41 pcdummy You make me think; maybe it's easier to add lxd support to proxmox and orchestrate that over salt.
17:42 MTecknology or just use lxc?
17:42 pcdummy unprivileged lxc, if proxmox does that
17:42 MTecknology iirc, that's a simple toggle
17:42 pcdummy I see
17:42 pcdummy LXD does what it does really well.
17:43 pcdummy But it's heavily duct-taped to Ubuntu
17:43 MTecknology The formula I gave you would put drawstack as the end-user UI that populates inventory management. It would need to do nothing else.
17:45 pcdummy MTecknology: thanks for the talk, I'd love to buy you a beer :)
17:46 pcdummy TODO now is: look at Netbox, see if I can bring LXD to Proxmox - connect it.
17:46 MTecknology inventory notifies salt of changes, salt refreshes pillar and modules and such, salt tells the minion on the master to refresh pillar, salt tells minion to run salt-cloud, salt-cloud hits whatever provider it's configured to talk to for that node, and hands deploy/setup to bootstrap.
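
A rough sketch of the "inventory notifies salt" hop in that chain, wiring a custom event tag to an orchestration through the reactor; the tag, file paths, and orchestration name are invented for illustration, not MTecknology's actual setup:

```yaml
# /etc/salt/master.d/reactor.conf
reactor:
  - 'inventory/node/created':
    - /srv/reactor/provision_node.sls
```

```yaml
# /srv/reactor/provision_node.sls
provision_new_node:
  runner.state.orchestrate:
    - args:
      - mods: orch.provision_node
      - pillar:
          event_tag: {{ tag }}
          event_data: {{ data|json() }}
```
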
17:48 MTecknology and for the love of all that is holy, do not start hacking against the de-facto bootstrap.sh. Even better, don't read it... ever.
17:49 Nahual Now I have to read it.
17:49 * MTecknology bids farewell to Nahual's soul.
17:49 Nahual You assume I had one in the first place!
17:50 Nahual I'm not going to use it, I just want to see it.
17:50 MTecknology I want to make a "ginger" joke, but don't want to accidentally offend...
17:50 Nahual I'm not a ginger.
17:51 Nahual That is a whole lotta shell script.
17:52 MTecknology yup! and it's all a lot of "well if that failed, let's try this" type logic
17:52 MTecknology my script- if things failed it trashes and halts the system -- https://gist.github.com/MTecknology/66ce7c7f148fc9da936bcf26cc572cd7
17:53 Nahual We have a small python script that does some API calls, grabs the git branch, unpacks it, salt-call --local, donezo.
17:54 MTecknology sounds like chef-solo
17:54 Nahual I just recently put the finishing touches on it with regards to spinning up a salt-master as the first in an environment.
17:54 Nahual Oh I'm sure there are plenty of tools that do something like that.
17:54 Nahual Ideally when we finally get around to not needing it, everything will just be an image.
17:55 MTecknology Oh!! We have a new guy starting today! I'm going to be teaching him salt, and then he's going to help me retire the old-style nodes/init garbage! :D
17:56 MTecknology (a 24k line sls file, sharing 100% of minion pillar with 100% of minions...)
17:57 Nahual What the hell?
17:57 PMS joined #salt
17:58 MTecknology it gets worse if you see the custom module that was written to parse the data- https://gist.github.com/MTecknology/e682d42278e1114b93b2ca6a10162b6e
17:59 n0xff joined #salt
18:00 MTecknology That old.py is actually after some improvements I made, like making the default value of .get() be the second arg, and actually using the imported delim.
18:00 MTecknology Nahual: wanna come work with us?! :D
18:01 Nahual Not today, I'll stick with our methods for now :)
18:01 Nahual I have the advantage that we are designing and engineering the system from a fresh base.
18:02 MTecknology well.. I'm getting pretty fresh 'round here
18:02 Nahual We have an API we utilize for backend DB pillar and required scope for provisioning.
18:02 Nahual Then we have pillarstack which takes that required scope and works with it as need be.
18:03 Nahual This last portion of provisioning hardware is becoming a pain, but it's not too bad.
18:04 MTecknology 80/20?
18:04 Nahual We're mostly hardware.
18:09 MTecknology I gotta finish deciding how I want pillar data working. The old monolithic static mess included a "defaults:" section that had all the defaults for everything. In my personal setup, I just pass the default to pillar.get(). I don't like all minions needing a copy of all defaults and the people managing these systems would screw up a find/replace... so I'm thinking of removing defaults from
18:09 MTecknology pillar and following the formula model.
18:10 MTecknology the only problem is that I would expect a fair amount of pushback for it being "too confusing"
18:10 Nahual That is where I love me some pillarstack: set my defaults, and overrides can be specified as a feature flag or a variable in the DB, done.
18:12 MTecknology I already have the logic for a stack of defaults in pillar being merged to dc-overrides, and then cluster-specific, and then node. I think the node-specific overrides will become a silly thing of the past when I'm done torturing this guy.
18:12 MTecknology err.. initiating* this guy
18:12 IdoKaplan joined #salt
18:12 lordcirth_work MTecknology, the normal pillar model is to have thing: foo defined for all as a default, then override it
18:12 lordcirth_work Not have a 'defaults' key.
18:12 MTecknology right- I was describing the old way
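
A concrete illustration of that model, reusing the postgresql:version key that comes up later in the channel (file names are hypothetical): the default lives in a pillar file assigned to everyone, the override in a file assigned only to the minions that need it, and states just pillar.get the key.

```yaml
# pillar/postgresql/defaults.sls - assigned to '*' in the pillar top file
postgresql:
  version: '10'

# pillar/postgresql/legacy.sls - assigned only to minions needing the override
postgresql:
  version: '9.5'

# in a state:
# {% set pg_version = salt['pillar.get']('postgresql:version', '10') %}
```
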
18:12 IdoKaplan Hi, is it possible for highstate to execute even if there is "ERROR: Pillar failed to render"? So whatever can be compiled will execute. Thanks, Ido
18:13 MTecknology something $oldboss was extremely proud of and refused to let me remove.
18:13 MTecknology IdoKaplan: look at your master log
18:14 IdoKaplan I know why the state is not running, I'm asking if it's possible that highstate will work even if there is an issue with the pillars
18:14 MTecknology not likely
18:17 IdoKaplan MTecknology: Thank you for the follow up. Are you sure that there is no way?
18:17 Aikar joined #salt
18:18 MTecknology maybe with --local, but if you still get an error, you'd either have to supply pillar in the command or you're out of luck.
18:20 IdoKaplan MTecknology: Is it possible to execute --local from the salt master server?
18:20 MTecknology huh?
18:21 wongster80 joined #salt
18:21 MTecknology Why would you need --local if you're pushing the command from the master?.. it's connected and is therefore not local.
18:22 IdoKaplan MTecknology: So I'm not sure that I understand your recommendation. Is there a way to ignore "pillar failed to render" so highstate will execute anyway?
18:23 MTecknology my recommendation is to fix the render error
18:23 IdoKaplan :)
18:23 IdoKaplan MTecknology: I will, but there are "baseline" states that I want to always execute.
18:25 IdoKaplan MTecknology: and the error is from another department in R&D that's working with the server. I "supply" the baseline and another department works on other states
18:25 MTecknology then don't run their states
18:25 MTecknology or in this case... why on earth do they have access to edit pillar data?
18:26 rivyn how do I specify a salt pillar value on the command-line to override something that would be in postgresql.version?
18:26 IdoKaplan MTecknology: I don't run their states. I try to execute "state.sls". because they need to edit values...
18:27 MTecknology sounds like a security nightmare
18:27 rivyn Using the docs, I can only figure out how to use pillar='{"version": "10"} and I can get it from pillar_get with name version but don't know how to get it with name postgresql:version
18:28 MTecknology rivyn: a ':' in pillar.get is for separating keys, so you would need {pgsql: {version: 10}}
18:30 rivyn pillar='{"postgresql": {version": "9.5"}}'?  MTecknology - it doesn't work
18:30 MTecknology "doesn't work" is an extremely unhelpful remark..
18:30 rivyn it doesn't pass in the value specified
18:30 MTecknology prove it
18:30 rivyn at the top of install.sls is:  {% set postgresql_version = salt['pillar.get']('postgresql:version', '10') %}
18:31 rivyn and it's ending up set as 10 even with that input
18:31 rivyn not 9.5
18:31 rivyn I don't know how to prove it
18:31 rivyn but it's not working for me
18:32 Pjusur joined #salt
18:32 rivyn I want there to be a default version installed unless it is overridden
18:32 MTecknology HAH!!
18:32 MTecknology I see your mistake
18:32 MTecknology Yes, you need to prove this... try pillar.data
18:33 rivyn what do you mean?
18:33 rivyn what is the mistake?
18:33 MTecknology prove that the pillar your passing is rendering the data you expect
18:33 rivyn how?
18:33 MTecknology try pillar.data
18:33 rivyn where?
18:33 MTecknology on the cli?
18:34 rivyn for someone who complains about remarks being extremely unhelpful, you sure are indirect.
18:35 MTecknology because I'm not spoon feeding you the exact command you need to run?
18:37 rivyn because you uselessly mocked me with laughter while meanwhile refusing to just point out the error
18:37 n0xff joined #salt
18:38 MTecknology I laughed because I thought it was funny when I noticed what's wrong. I still find that moment of realization rather funny.
18:38 MTecknology I honestly thought you could continue using your reasoning skills to fill in the whole command.
18:39 rivyn sorry I don't understand this crap yet.
18:41 MTecknology Prove your pillar input, using pillar.data.  <-- I /really/ don't like bottle-feeding answers, especially when I know the person is capable of using the same troubleshooting skills again in the future.
18:43 MTecknology salt-call pillar.data pillar='{"postgresql": {version": "9.5"}}'
18:44 rivyn the mistake wasn't any easier to see from that than it was the original line I mistyped... *shrugs*
18:44 rivyn whatever
18:45 MTecknology your quotes are messed up
18:46 rivyn yeah, I got that
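
For the record, the corrected quoting (the missing double quote before "version" was the problem); checking with pillar.data first confirms what the state will actually see. The SLS name is assumed from the install.sls mentioned above:

```sh
salt-call pillar.data pillar='{"postgresql": {"version": "9.5"}}'
salt-call state.sls postgresql.install pillar='{"postgresql": {"version": "9.5"}}'
```
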
18:48 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.4 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
18:49 rivyn Is there a good overview of how salt states are supposed to be written/structured somewhere?
18:50 cliluw joined #salt
18:50 MTecknology pcdummy: fwiw- I've been working on basically a universal implementation of that design I described. I want basically every component to be interchangeable. So, if you have Device42 already in your organization, you can pull from that instead of Netbox. There's an in-progress PR for Netbox to add triggers to hit an API when certain things change. I've been relaxing my project until that
18:50 MTecknology gets merged.
18:51 cewood joined #salt
18:55 MTecknology pcdummy: Oh! It's also worth noting- The salt-cloud support for proxmox is ... pretty much unsupported.  If you go with proxmox, you'll be stuck helping me try to figure that out.
19:08 zer0def quick question regarding multi-master - do all masters need to have the same keypair and signature in `master_type: failover`?
19:08 MTecknology typically, yes
19:09 IdoKaplan MTecknology: About the "pillar failed to render" issue: can "- ignore_missing: True" help me?
19:10 zer0def so putting all of the masters' pubkeys on minions wouldn't work, if they're using separate keypairs?
19:10 MTecknology it's not a problem with missing pillar, it's a problem with render errors
19:11 MTecknology I don't imagine failover would work very well if your masters were using different sets of keys.
19:11 IdoKaplan MTecknology: So you have no idea how it can be ignored?
19:11 MTecknology fix the bloody error... it's an error, not a warning
19:11 zer0def if they're signed by the same authority or if the minions recognize all of them, why not?
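
For context, a minimal minion-config sketch for failover multi-master; the documented setup has all masters share the same master keypair (or the same signing key when master_sign is used), which is what the hesitation about distinct keypairs is about. Hostnames are placeholders:

```yaml
# /etc/salt/minion.d/failover.conf
master:
  - master1.example.com
  - master2.example.com
master_type: failover
master_alive_interval: 30
# if master-key signing is in use, every master must share the signing keypair:
# verify_master_pubkey_sign: True
```
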
19:13 MTecknology I told you ways around it but you didn't seem to like those options.
19:15 IdoKaplan MTecknology: Maybe I didn't understand your options. I'm trying to find a way to execute other states without fixing the pillar render issue.
19:15 MTecknology use --local
19:16 ymasson joined #salt
19:16 chutzpah joined #salt
19:17 zer0def i'm confused, MTecknology, are you referring to someone chatting from kiwiirc?
19:18 IdoKaplan MTecknology: Can you please elaborate and share an example?  not sure I understand
19:19 MTecknology zer0def: I'm responding to IdoKaplan?
19:19 zer0def yeah, kiwiirc, carry on
19:19 sayyid9000 joined #salt
19:21 MTecknology IdoKaplan: Nah, either you need to read the docs for --local, or you need to fix the error. I'm done beating this horse.
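
What the --local suggestion can look like in practice, as a sketch: run the baseline states masterless on the minion, which sidesteps the master-side pillar render error but assumes the baseline SLS files (and any pillar they need) are available in the minion's local file_roots/pillar_roots:

```sh
salt-call --local state.sls baseline
# pillar values can also be injected on the command line if needed:
salt-call --local state.sls baseline pillar='{"role": "web"}'
```
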
19:21 zer0def you ready to beat on mine? ,'-)
19:22 zer0def although i should probably just try experimentally
19:23 MTecknology zer0def: Does salt support a CA for minion certs?
19:23 jas02 joined #salt
19:23 MTecknology I can't say I recall ever reading about it, but it wouldn't surprise me.
19:24 zer0def doesn't seem so, from the limited set of key generation options `salt-key` provides
19:25 MTecknology so- that kinda rules out that question you had for me, eh?
19:25 zer0def there's still the option of putting all of the masters' pubkeys onto all the minions, even if it's a bit more tedious
19:25 zer0def but i figure i'll check once i get a breather
19:26 MTecknology does salt-minion support multiple master keys?
19:26 zer0def that's the question, hopefully will find the time to poke this bear at the tail end of the week
19:27 MTecknology that was kinda along the same lines as "I'm pretty confident the answer is no, but being wrong also wouldn't surprise me."
19:28 zer0def so far my confidence is built on nothing else than assumption, which doesn't go too far.
19:29 MTecknology that actually sounds like it could have really bad security implications
19:30 zer0def being able to gobble multiple master identities or are we talking about confidence?
19:30 MTecknology multiple master keys
19:31 MTecknology I could see it working, but I can also picture all the inevitable screw-ups that lead to an entire infrastructure compromise.
19:31 zer0def yeah, i'm not hopeful in this regard, either, since it's the worse of the two presented options
19:33 MTecknology You already have to manage minion keys, what extra work is it to include master keys?
19:35 zer0def when put that way, you're right, basically just another keypair per master
19:35 MTecknology another?
19:36 zer0def assuming distinct keypairs for each master
19:36 MTecknology right- that's what I'm saying you should /not/ do
19:37 zer0def that much i get, i'm just a bit uncomfy with it, need to let it settle in
19:37 MTecknology uncomfy to you is a less secure option?
19:38 zer0def well, ideally i'd prefer to see CA signing
19:38 MTecknology You already have to keep /etc/salt/pki/minion/* in sync between hosts. In your scenario, you need to keep file state in sync, but ignore the actual contents of the file.
19:38 MTecknology err..  /etc/salt/pki/master/minion/*
19:40 MTecknology minion*/*  **
19:40 tom[] joined #salt
19:43 edrocks joined #salt
19:46 n0xff joined #salt
19:51 pcdummy MTecknology: ok, thanks
19:55 pcdummy MTecknology: so your route would be "UI" -> "NetBox" -> salt -> "Proxmox", right?
19:55 MTecknology salt-master does not get pillar data and I have my salt-cloud map in pillar.. so I have more steps..
19:56 pcdummy True
19:56 pcdummy MTecknology: how does Proxmox scale, you just install another node with proxmox and go, right?
19:57 pcdummy Well, I'll test it out, learn it.
20:00 aldevar joined #salt
20:00 MTecknology UI -> netbox -> (for me)API middleware -> salt-event -> salt-master(reactor) -> salt-master(orch) -> salt-master(gitfs, fileserver, salt * saltutil..., salt 'prd-salt-00...' cloud.profile) -> cloud.profile(provision) -> set up keys -> call custom bootstrap.sh -> bootstrap.sh(configure keys, strip bloatware, install salt-minion, openvpn setup/launch) -> salt-minion(highstate)
20:04 pcdummy MTecknology: do you share parts of your research? :)
20:05 sh123124213 joined #salt
20:06 MTecknology yes 'n no... I've rambled about it a lot in -ot and I /do/ want to eventually share a complete solution.
20:08 MTecknology I'm not making any public project out of it just yet because it's in such flux, but if you want to help me build this, I'd be more than happy to start sharing my solutions so that you can test your shiny UI.
20:08 MTecknology How's your py?
20:11 `mist joined #salt
20:15 pcdummy MTecknology: good to great
20:15 pcdummy MTecknology: i do a lot with python.
20:16 MTecknology I very desperately want to believe your excellence... so I will.
20:16 pcdummy MTecknology: I've made this for example: https://github.com/saltstack-formulas/lxd-formula/blob/master/_modules/lxd.py
20:19 MTecknology I didn't actually want to look, but I did. That's solid.
20:20 MTecknology (don't expect the same from my mess)
20:21 MTecknology I s'pose.. this is more a -ot thing, ya?
20:25 pcdummy ye
20:26 Trauma joined #salt
20:28 jas02 joined #salt
20:30 inad922 joined #salt
20:40 inad922 joined #salt
20:43 BackPort joined #salt
20:51 BackPort left #salt
20:53 MyGitIsDown joined #salt
20:54 MyGitIsDown I noticed salt minion 2017.7.4 was released but doesn't contain PR 45716, which fixes some issues we have.  Will that PR be included in a future 2017.x build or should we wait for 2018.3?
20:57 MTecknology it should end up in the next point release
20:57 MTecknology it probably just didn't meet the deadline
20:59 lordcirth_work MyGitIsDown, it was merged into saltstack:2017.7 so if there is a point release it will be in there
21:03 evilrob joined #salt
21:04 MyGitIsDown Thanks
21:06 aldevar left #salt
21:09 gtmanfred MyGitIsDown: as the release notes said, 2017.7.4 is a critical-issue fix; anything that wasn't specifically picked for this release will be in 2017.7.5
21:27 jas02 joined #salt
21:28 pcdummy gtmanfred: I have a special question, what do you think about embedding salt/cloud/cli.py into a state module so you can call it with pillar data?
21:31 pcdummy from what I found, it's not that hard to embed the salt-cloud cli into a state, apart from all the side effects that might come up.
21:32 jas02 joined #salt
21:33 rivyn joined #salt
21:35 gtmanfred there is already a cloud module that uses CloudClient just like salt.cloud.cli does
21:35 jas02 joined #salt
21:35 gtmanfred https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cloud.html
21:35 gtmanfred and you can put your provider and profile information in pillar data, and use that
21:35 gtmanfred https://docs.saltstack.com/en/latest/topics/cloud/config.html#pillar-configuration
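
Putting those two links together, a hedged sketch: provider and profile definitions under a `cloud` pillar key, and the cloud execution module spinning up an instance from a profile (provider, profile, and VM names are made up):

```yaml
# pillar assigned to the minion that will run the cloud module
cloud:
  providers:
    my-do:
      driver: digitalocean
      personal_access_token: <token>
  profiles:
    small-ubuntu:
      provider: my-do
      image: ubuntu-16-04-x64
      size: s-1vcpu-1gb
```

```sh
salt 'cloud-runner' cloud.profile small-ubuntu new-vm-01
```
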
21:39 pcdummy Thank you.
21:48 jas02 joined #salt
21:49 jas02 joined #salt
21:51 jas02 joined #salt
21:51 Edgan gtmanfred: is 12.04 still supported officially by Saltstack?
21:52 dxiri joined #salt
21:53 edrocks how would you do `require_in: cmd: somename`?
21:53 edrocks for a cmd.run
21:58 MTecknology pretty much that, with a hyphen and a newline
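
i.e. roughly this shape, with placeholder IDs and commands around the somename example:

```yaml
push_config:
  file.managed:
    - name: /etc/myapp/app.conf
    - source: salt://myapp/app.conf
    - require_in:
      - cmd: somename

somename:
  cmd.run:
    - name: /usr/local/bin/reload-myapp
```
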
21:58 gtmanfred Edgan: no
21:58 gtmanfred it went end-of-life over a year ago
21:59 MTecknology gtmanfred: are you saying we should upgrade our firewalls?
21:59 gtmanfred are you using 12.04 for your firewall?
21:59 MTecknology no... not "mine"
21:59 gtmanfred gross
22:00 MTecknology I'm planning on seeing that fixed this year.
22:00 gtmanfred I have a friend that was still using 10.10 in 2013... he had a hell of a time upgrading
22:01 dimeshake gtmanfred: o/
22:01 Edgan gtmanfred: yeah, found the official list. What do you think of removing 12.04 and rhel5 from the repos? They aren't supported anymore.
22:03 gtmanfred hi dimeshake
22:04 gtmanfred Edgan: i don't think we have any desire to do that. We just don't plan on releasing any new versions of the packages for them
22:04 gtmanfred people still use those packages
22:04 gtmanfred and we have no desire to completely cut them off
22:04 gtmanfred because they pay a lot of money.
22:11 pualj joined #salt
22:19 onlyanegg joined #salt
22:19 thelocehiliosan joined #salt
22:29 masber joined #salt
22:41 thelocehiliosan joined #salt
22:44 `mist joined #salt
22:46 MTecknology that made me chuckle a little :)
22:46 oeuftete joined #salt
22:46 hemebond Just upgraded to 2017.7.4 and now I have a new minion that for some reason finds no states, unlike its brethren.
22:46 pualj_ joined #salt
22:47 onslack joined #salt
22:47 MTecknology typo in the name?
22:48 hemebond Shouldn't be. The only difference is the number on the end.
22:48 lompik joined #salt
22:48 hemebond And I use wildcards on the end of my targeting for that.
22:48 hemebond AAAXXX##
22:50 MTecknology state.show_highstet?
22:50 sayyid9000 joined #salt
22:50 MTecknology I'm gonna pretend I pronounce it that way and it wasn't a typo.
22:50 hemebond I've done show_top and it's just empty for that particular minion.
22:51 masuberu joined #salt
22:52 hemebond show_highstate is the same
22:54 onslack <gtmanfred> cp.list_master ?
22:56 hemebond Wow that's a big list.
22:57 hemebond Restarted the minion and still no luck.
22:57 MTecknology gtmanfred: did salt-cloud stuff get another massive functionality overhaul in develop, or was it mostly just moving things around?  (like cloud/__init__: class SaltCloud() -> cloud/cli)
22:58 cgiroua joined #salt
23:02 hemebond Is it possible to test the targeting from the master?
23:04 hemebond Pillars are working fine.
23:19 masuberu joined #salt
23:21 hemebond #39463 is also still an issue in 2017.7.4, at least on my Debian minions.
23:23 MTecknology state.show_top is probably the best tool I know of for checking the matching
23:23 MTecknology maybe also salt.match.glob
23:23 hemebond Trace logging hasn't shown any issues either.
23:23 MTecknology salt.match *
23:23 hemebond [ERROR   ] No contents found in top file
23:23 hemebond Will test that now.
23:24 hemebond Wait... how would I test with that?
23:24 MTecknology -C
23:25 MTecknology nah, I guess I don't know, but it sounded like a possibility
23:26 hemebond salt aaaxxx* match.glob "aaaxxx*"
23:27 hemebond All came back True
23:27 MTecknology including the new one?
23:28 MTecknology what you did was about the same as xxx* test.ping
23:29 hemebond Yeah, even the new minion.
23:29 hemebond New minion works fine for all execution modules.
23:29 hemebond I've enabled TRACE logging on the master.
23:29 hemebond Hopefully it gets quiet enough for me to test with.
23:34 hemebond Would existing minions have cached the top or anything like that?
23:34 hemebond I'm wondering if it's because I'm using merge_all and multiple environments.
23:35 hemebond But then I assume it would break on existing minions.
23:36 MTecknology it'd be cached in the file server cache
23:36 MTecknology /var/cache/salt
23:37 hemebond I'll try clearing that and see if it breaks the minion.
23:39 hemebond Nope, still works.
23:40 hemebond I can see the new minion reading/fetching the top.sls files and other states.
23:47 hemebond Oh for goodness sake. Nevermind. For some reason an old version got installed.
23:47 hemebond What a half-wit.
23:49 MTecknology ouch..
23:49 ipsecguy joined #salt
23:49 hemebond I say "for some reason" but the reason is I'm an idiot.
23:50 MTecknology I finally have salt-cloud attempting to create a VM in my proxmox cluster. This is an exciting day.
23:50 hemebond Nice.
23:50 MTecknology It fails, but the cool thing is it tried. :P
23:50 hemebond This was the thing where you were creating them manually before?
23:51 hemebond Some sort of local cloud thing but you manually created the instances?
23:52 MTecknology nope- my end goal is to have inventory be the authoritative source for everything that exists; define a VM/VPS in inventory and wait for the email saying it was created (with a copy of the highstate output).
23:54 hemebond Oh neat.
23:54 MTecknology I think I convinced pcdummy to work some awesome magic to make salt-cloud support pillar data. :D
23:55 ponyofdeath joined #salt
23:57 onlyanegg joined #salt
