
IRC log for #salt, 2018-03-19


All times shown according to UTC.

Time Nick Message
00:03 Guest73 joined #salt
00:06 zerocoolback joined #salt
00:12 tiwula joined #salt
00:18 exarkun joined #salt
00:33 Zachary_DuBois joined #salt
00:33 DammitJim joined #salt
00:35 cswang joined #salt
00:42 zerocoolback joined #salt
00:44 Zachary_DuBois joined #salt
01:35 om2 joined #salt
01:49 shiranaihito joined #salt
01:49 mk-fg joined #salt
01:49 mk-fg joined #salt
02:21 KevinAn275773 joined #salt
02:39 zerocoolback joined #salt
02:56 ilbot3 joined #salt
02:56 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.4 <+> RC for 2018.3.0 is out, please test it! <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
03:43 mk-fg joined #salt
03:43 mk-fg joined #salt
03:49 JPT joined #salt
04:04 zerocoolback joined #salt
04:05 zerocoolback joined #salt
04:06 zerocoolback joined #salt
04:07 zerocoolback joined #salt
04:07 zerocoolback joined #salt
04:34 indistylo joined #salt
05:04 motherfsck joined #salt
05:10 ponyofdeath joined #salt
05:46 Vaelatern joined #salt
05:53 Guest73 joined #salt
06:05 zulutango joined #salt
06:11 colegatron joined #salt
06:29 AvengerMoJo joined #salt
06:30 aruns joined #salt
06:55 jrj joined #salt
07:01 shpoont joined #salt
07:05 bachler Has anyone set up a mongodb cluster with salt? I am wondering how best to go about setting up replication for my 3 mongodb servers set up by salt. Could anyone point me in the right direction?
07:07 bachler I *think* that I want to be able to have a salt state that somehow executes a query to set up replication between the 3 nodes. I just got started with salt and mongodb.
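One low-tech way to handle the "execute a query" part is a cmd.run state guarded by unless, applied to a single designated node. A rough sketch, assuming the mongo shell is available on the minion; the state id, replica set name and hostnames are placeholders:

    # Initiate a 3-member replica set once, from one designated node.
    # Hostnames and the "rs0" name are placeholders.
    mongodb-replset-init:
      cmd.run:
        - name: >
            mongo --quiet --eval 'rs.initiate({_id: "rs0", members: [
            {_id: 0, host: "mongo1.example.com:27017"},
            {_id: 1, host: "mongo2.example.com:27017"},
            {_id: 2, host: "mongo3.example.com:27017"}]})'
        - unless: mongo --quiet --eval 'rs.status().ok' | grep -q '^1'

The unless guard keeps the state idempotent, and running it on only one of the three nodes avoids racing rs.initiate() from every member at once.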
07:35 Ricardo1000 joined #salt
07:36 swa_work joined #salt
07:42 indistylo joined #salt
07:47 Guest73 joined #salt
08:06 darioleidi joined #salt
08:25 Guest73 joined #salt
08:30 Tucky joined #salt
08:34 Hybrid joined #salt
08:43 Pjusur joined #salt
08:44 jrenner joined #salt
08:49 mikecmpbll joined #salt
08:57 `mist joined #salt
09:01 cewood joined #salt
09:01 zulutango joined #salt
09:12 zerocoolback joined #salt
09:16 pf_moore joined #salt
09:30 Mattch joined #salt
09:31 aldevar joined #salt
09:32 hoonetorg joined #salt
09:35 zerocoolback joined #salt
09:35 dograt joined #salt
09:37 Udkkna joined #salt
09:39 JPaul joined #salt
09:41 indistylo joined #salt
09:44 zerocoolback joined #salt
09:45 zerocoolback joined #salt
09:45 synical joined #salt
09:45 synical joined #salt
09:46 zerocoolback joined #salt
09:47 zerocoolback joined #salt
09:50 zerocoolback joined #salt
09:55 nebuchadnezzar hello
09:55 bdrung_work joined #salt
09:58 nebuchadnezzar I'm setting up an environment where each minion is PXE booted, when I boot a VM server the minion key is generated at first start, so after a reboot I have the following message: “Authentication attempt from nebula121.eole.lan failed, the public keys did not match. This may be an attempt to compromise the Salt cluster.”
09:58 nebuchadnezzar what is the best practice? Autoremove the key? Or generate a predefined key directly into the image in which case all minions will share the same key?
09:58 onslack <msmith> minions are expected to keep using the same key once it has been authorised on the master
09:59 hemebond "VM server" as in a VM or a server that runs the VMs?
09:59 nebuchadnezzar onslack: thanks
10:00 onslack <msmith> fyi onslack is the name of the bot
10:00 nebuchadnezzar :-D
10:00 nebuchadnezzar So is it OK to share the same key pair on all PXE boot minions?
10:01 onslack <msmith> how are you authorising the keys on the master?
10:01 hemebond No.
10:02 nebuchadnezzar I'm using “auto_accept: True”
10:02 hemebond Why are your minions getting new keys?
10:02 onslack <msmith> so is it fair to say that security isn't your concern for this master?
10:02 onslack <msmith> hemebond: because they're regenerating them on each boot
10:03 hemebond But why?
10:03 nebuchadnezzar I saw two ways: 1) generate key on boot with auto_accept and not keeping key in cache, 2) seed the key pair in the base image and share the same key pair across all minions
10:04 onslack <msmith> because they're using pxeboot, so i'm guessing there's no preserved state between boots to store a key in
10:04 nebuchadnezzar hemebond: because I'm setting up an infrastructure where all servers are PXE booted
10:04 nebuchadnezzar msmith: you are right
10:05 hemebond Yeah, I had servers PXE booting too and didn't have to worry about keys being regenerated each boot.
10:05 onslack <msmith> if they have any persistent storage whatsoever then i'd use that for the key. if they have literally none then you have few choices available
10:05 onslack <msmith> hemebond: how did you keep the keys?
10:05 hemebond So these are ephemeral servers?
10:05 hemebond msmith the servers were built. Like any normal server.
10:05 yuhl_ joined #salt
10:06 hemebond So they had disks.
10:06 hemebond (virtual disks)
10:06 onslack <msmith> so not pxeboot apart from for the initial setup?
10:06 hemebond Correct.
10:06 onslack <msmith> that would be best
10:06 hemebond VM starts up, PXE boots to install OS, Salt takes over and installs the rest.
10:06 nebuchadnezzar hemebond: yes, I build LTSP images shared between all my OpenNebula hypervisors, the same base image for all of them; I need to define what is common (i.e. generated/configured during the build of the image) and what is done at boot
10:07 nebuchadnezzar hemebond: the problem is that when I use keys generated at boot, the same machine (same FQDN) will have different keys across reboots
10:08 onslack <msmith> then the machines need to have local storage at the very least
10:08 hemebond So when a VM is rebooted or shut down it is effectively destroyed?
10:08 nebuchadnezzar it's bare metal boot in fact, since I'm setting up KVM hypervisors
10:08 hax404 joined #salt
10:08 * hemebond is now more confused.
10:08 nebuchadnezzar hemebond: yes, the LTSP squashFS images are loaded into RAM
10:08 * hemebond wonders how it can be a VM _and_ bare metal.
10:09 onslack <msmith> vmhost
10:09 onslack <msmith> hence hypervisor
10:09 nebuchadnezzar msmith: so you think that seeding the key pair is the simplest?
10:09 hemebond msmith that doesn't really explain it.
10:10 hemebond It's a bare metal install without local storage?
10:10 nebuchadnezzar hemebond: the physical machines have no local disk, they are just PXE booted using a SquashFS image loaded in RAM
10:10 hemebond I gotcha.
10:10 nebuchadnezzar to make compute nodes
10:10 onslack <msmith> unless you can give the minion a constant key - either locally using storage, or remotely using a server of some kind - salt will not manage keys correctly
10:10 hemebond How are you creating and destroying the VMs?
10:11 nebuchadnezzar hemebond: I'm managing the hypervisor for now
10:12 hemebond Does salt send an event when it's shutting down?
10:13 `mist joined #salt
10:13 onslack <msmith> if it does, then reactor linked to wheel to remove the key would indeed work
10:15 hemebond Yeah, that's how I manage my cloud instances.
10:16 sjorge joined #salt
10:16 onslack <msmith> alternatively make sure each minion comes up with a unique id, and time out inactive minions
10:16 onslack <msmith> more of a hacky workaround, but still
10:20 nebuchadnezzar thanks for the hints
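If the minion does emit an event on shutdown (the channel wasn't sure it does), the reactor-plus-wheel idea could look roughly like this; the event tag, paths and state id below are assumptions, not confirmed behaviour:

    # /etc/salt/master (sketch) - map a hypothetical shutdown event to a reactor sls
    reactor:
      - 'salt/minion/*/stop':
        - /srv/reactor/delete_key.sls

    # /srv/reactor/delete_key.sls (sketch) - drop the stale key so the
    # next boot can be auto-accepted again
    delete_stale_minion_key:
      wheel.key.delete:
        - args:
          - match: {{ data['id'] }}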
10:42 hoverbear Awww provisioning with salt-cloud on freebsd is broken because `swig` is called `swig30`
10:44 onslack <msmith> symlink? ;)
10:59 Guest73 joined #salt
11:20 edrocks joined #salt
11:24 nickadam joined #salt
11:35 stewgoin- left #salt
11:39 zerocoolback joined #salt
11:39 zerocoolback joined #salt
12:07 Guest73 joined #salt
12:19 Nahual joined #salt
12:20 DammitJim joined #salt
12:37 aruns__ joined #salt
12:42 exarkun are the debian salt packages broken?
12:44 nku works for me
12:44 exarkun fresh debian stretch install, https://gist.github.com/exarkun/9fcf721ac308ce6456d943d681646d86
12:44 onslack <msmith> <https://static3.fjcdn.com/comments/Blank+_b585d53af922cec7ac31bacbe3cf37c2.jpg>
12:45 onslack <msmith> the likely culprit is right in the error `you have held broken packages`. which ones are held and why?
12:47 exarkun `apt-mark showhold` has no output
12:49 onslack <msmith> quick check, what error do you get if you try to install one of those dependencies manually?
12:50 exarkun python-crypto is already the newest version (2.6.1-7build2).
12:50 exarkun wait sorry ignore that
12:50 exarkun E: Package 'python-crypto' has no installation candidate
12:51 exarkun I guess some repo is not enabled.
12:52 edrocks joined #salt
12:52 nebuchadnezzar hemebond, msmith: just as a note, seeding the minion key is working, I can 1) use the same key pair for all hosts, 2) set “auto_accept=False” \o/
12:53 onslack <msmith> they've got different minion id's tho, right?
12:54 onslack <msmith> i mean, it's not the most secure way, but if it works for your case
12:56 exarkun no, python-crypto is in main.  so I don't know.
12:57 onslack <msmith> you have run `apt-get update` recently, right? :)
12:59 exarkun of course
12:59 ecdhe joined #salt
12:59 exarkun I found the problem.  There is no repo configured at all.
12:59 onslack <msmith> what does `apt-cache search python-crypto` output?
12:59 onslack <msmith> that would do it
12:59 exarkun Too bad I can't install salt so that salt can ensure my sources.list is configured correctly.
13:00 onslack <msmith> if you've got an ssh server on that machine then you could use `salt-ssh` ;)
13:00 onslack <msmith> or bootstrap
13:02 nebuchadnezzar msmith: yes, different minion_id
13:03 nebuchadnezzar I'm starting up my environment, I may change the way it's done afterward but it's not my top priority
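For the seeding route that worked above: the key pair can be generated once (salt-key --gen-keys=minion) and baked into the image at the minion's default pki directory. A sketch of the image-build states, with placeholder source paths:

    # Bake one pre-generated key pair into the PXE/LTSP image so every
    # boot presents the same identity. Source paths are placeholders.
    /etc/salt/pki/minion/minion.pem:
      file.managed:
        - source: salt://pxe/keys/minion.pem
        - user: root
        - group: root
        - mode: '0400'

    /etc/salt/pki/minion/minion.pub:
      file.managed:
        - source: salt://pxe/keys/minion.pub
        - user: root
        - group: root
        - mode: '0644'

On the master, each expected minion id can then be pre-accepted by copying the same minion.pub to /etc/salt/pki/master/minions/<minion_id>, which is what allows auto_accept to stay off.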
13:06 exarkun msmith: thanks
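On the sources.list point, keeping the repo definition under Salt's control - once salt-ssh or a bootstrapped minion can reach the box - is what pkgrepo.managed is for; the list-file path in this sketch is an arbitrary choice:

    # Ensure the Debian stretch main repo is present.
    debian-stretch-main:
      pkgrepo.managed:
        - name: deb http://deb.debian.org/debian stretch main
        - file: /etc/apt/sources.list.d/debian.list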
13:17 Naresh joined #salt
13:19 ProT-0-TypE joined #salt
13:25 aruns__ joined #salt
13:25 exarkun A couple weeks ago, this configuration worked on some EC2 hosts I tried it on.  https://gist.github.com/exarkun/466e28d2fbb02c7a29b442d37c0d6239
13:26 exarkun Today it fails on some new hosts
13:26 exarkun Why isn't the pillar data found?
13:27 exarkun ugh that paste got corrupted
13:28 exarkun fixed
13:31 thelocehiliosan joined #salt
13:32 eekrano joined #salt
13:35 `mist joined #salt
13:50 aruns__ joined #salt
13:54 nebuchadnezzar Is there any maintainer of systemd-formula here? I wonder why the timezone pillar is “timesyncd:timezone” and not “systemd:timesyncd:timezone” and why “systemd/timesyncd:files_switch” is not “systemd:timesyncd:files_switch”. Any hints?
13:54 nku is there some hook mechanism that runs a local script before applying e.g. a highstate?
13:55 nku i want to make sure that i'm on a specific salt branch when pushing to prod
13:57 ThomasJ joined #salt
14:00 BitBandit joined #salt
14:02 theloceh1liosan joined #salt
14:05 onslack <msmith> exarkun: i don't see pillar/top.sls
14:07 exarkun indeed, there isn't one.  And I can't remember if there ever was one (but seriously this totally worked before somehow).
14:07 exarkun Okay, I added it and now it works. :/
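For reference, the minimal shape of the pillar/top.sls that was missing; the included pillar file name is a placeholder for whatever the gist actually defines:

    # pillar/top.sls (sketch)
    base:
      '*':
        - mypillar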
14:08 ebbex When using orchestration, how can I have a command run on only one of my minions? (install php-app on multiple boxes, but only one of them needs to run 'create database tables')
14:11 onslack <msmith> like this? <https://docs.saltstack.com/en/latest/topics/orchestrate/orchestrate_runner.html#function>
14:11 ebbex I suppose I can specify a single host to run the command on, separate from the common ones, but I'd rather have a way for salt to figure out which of the minions are up and pick one at random. That is, if there is such a thing?
14:12 onslack <msmith> you have to define what you want salt to do. `pick one at random` isn't really a definition :)
14:13 zer0def ebbex: you could just get a list of minions a particular sls would run on, pick one at random and pass it in through an additional pillar, then compare the minion id in the sls against the randomized minion
14:13 ebbex onslack: similar to ansible's run_once. You're installing on a group of boxes, but this function needs to run on only one of them. (if you take one of them out of the group, another one does the command.)
14:13 onslack <msmith> if it really doesn't matter at all then perhaps you could write something to pick one deterministically using jinja
14:13 cgiroua joined #salt
14:14 onslack <msmith> fyi onslack is the bot
14:14 onslack <msmith> i'm not aware of anything specific like that, no
14:15 zer0def what i've described could be implemented in the jinja of an orchestration sls
14:16 ebbex zer0def: Yeah, that's probably along the lines of something I'd like to do yeah.
14:18 ThomasJ joined #salt
14:28 ebbex zer0def: How can I get a list of minions that are running a particular sls? Is it somewhere in grains?
14:29 zer0def you'd probably use mine using the same targeting you use for the deployment sls to call something like, i dunno, `test.ping`, then select one at random from the list of keys returned
14:31 ebbex Ok, now an even more stupid question: 'mine' as in a keyword for some functionality similar to 'pillar' and 'grain', or 'mine' as in 'your'? (did you provide a gist link i missed?)
14:32 zer0def yes, it's Salt Mine, basically periodically calls a defined set of functions against minions
14:32 zer0def https://docs.saltstack.com/en/latest/topics/mine
14:33 racooper joined #salt
14:33 DammitJim joined #salt
14:35 onslack <msmith> if it's in an orch state, then you can probably use `wheel.minions.connected`
14:36 onslack <msmith> <https://docs.saltstack.com/en/latest/ref/wheel/all/salt.wheel.minions.html>
14:36 zer0def does it have targeting capabilities?
14:36 onslack <msmith> never used it, but i expect it'll list all minions
14:36 indistylo joined #salt
14:37 onslack <msmith> if you want targeting specifically then i'm not sure what to search for
14:40 zer0def i usually approached these sorts of situations with `{{ salt['mine.get'](<targeting>, 'test.ping', **kwargs) }}`, although i guess `test.ping` can provide stale results, so joining on `wheel.minions.connected` could provide benefit in more dynamic environments
14:41 onslack <msmith> i guess if the minions are definitely up then you could always shell to a `salt` command!
14:41 onslack <msmith> tho if you're going that far then a custom module may be called for
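Putting zer0def's mine.get suggestion into an orchestration sls might look roughly like this, choosing the first of the sorted list rather than a truly random one (deterministic, as msmith suggested); the grain target, sls name and command are placeholders:

    # orch sls sketch: deploy everywhere, then run the one-off step on a
    # single deterministically-chosen minion
    {% set app_minions = salt['mine.get']('role:php-app', 'test.ping', tgt_type='grain') | list | sort %}

    install_php_app:
      salt.state:
        - tgt: 'role:php-app'
        - tgt_type: grain
        - sls: php-app

    create_database_tables:
      salt.function:
        - name: cmd.run
        - tgt: {{ app_minions | first }}
        - arg:
          - /usr/local/bin/create-database-tables
        - require:
          - salt: install_php_app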
14:47 eekrano_ joined #salt
15:01 Guest73 joined #salt
15:11 exarkun joined #salt
15:12 tiwula joined #salt
15:21 onslack <nerigal> hi guys, anyone know the syntax of the sls for the tuned-adm state? there is no decent example in the documentation on changing the tuned profile
15:22 Guest73 joined #salt
15:27 exarkun What's with the crazy number of lists in the config example here? https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#operations-on-regular-files-special-files-directories-and-symlinks
15:27 dezertol joined #salt
15:27 exarkun Is that really how you do it?  Did someone not realize you can have more than one item in a key/value mapping?
15:33 onslack <msmith> which example? which list?
15:36 lordcirth_work Why does file.line in replace mode not match regexes beginning with '^'?  It works in other salt regexes
15:37 lordcirth_work If I match on 'vt_handoff=', there are too many matches, but if I use '^vt_handoff=' it doesn't match, even though it should
15:38 onslack <msmith> is there whitespace at the start?
15:39 lordcirth_work Nope, it's /etc/grub.d/10_linux , vt_handoff="1"
15:41 evle joined #salt
15:42 lordcirth_work I'll just use file.replace, I'm not even sure what line's mode replace is for
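The file.replace form of the same edit, for reference - file.replace defaults to multiline matching, so the ^ anchor applies per line; the replacement value below is only a guess at the intended change:

    vt-handoff:
      file.replace:
        - name: /etc/grub.d/10_linux
        - pattern: '^vt_handoff="1"'
        - repl: 'vt_handoff="0"'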
15:44 Guest73 joined #salt
16:17 zerocoolback joined #salt
16:21 zerocoolback joined #salt
16:40 zerocoolback joined #salt
16:43 stewgoin joined #salt
16:54 onlyanegg joined #salt
17:05 edrocks joined #salt
17:11 mpanetta joined #salt
17:29 IPvSean joined #salt
17:36 wongster80 joined #salt
17:41 dezertol joined #salt
17:51 hemebond joined #salt
17:52 justanotheruser joined #salt
17:52 justanotheruser joined #salt
18:12 evle3 joined #salt
18:19 cewood joined #salt
18:19 shpoont joined #salt
18:25 ymasson joined #salt
18:29 wongster80 joined #salt
18:34 edrocks joined #salt
18:38 schemanic joined #salt
18:59 nixjdm joined #salt
19:02 schemanic Hello. Is it bad form to call a grain module from pillar? I want to set a value equal to the hostname of the machine
19:05 nkuttler schemanic: why not just access the grain in the state file?
19:19 MTecknology %reload PbinAdmin
19:19 MTecknology ... oops, wrong channel
19:23 Guest73 joined #salt
19:42 aldevar joined #salt
19:44 hammer065 joined #salt
19:56 lordcirth_work Terminology question: Is an individual 'pkg.installed' a "state"?  What is the term for a tree, ie "common.ssh" that includes many things?  Is it also only a "state"? Thanks
20:00 shpoont joined #salt
20:04 shpoont joined #salt
20:10 MTecknology lordcirth_work: yup, that's a state. One state maintains the state of one thing. SLS files usually have multiple states. Those get included in the state tree.
20:11 zer0def so usage of, say, `pkg` and `service` in the same state id makes them a composite state or two separate ones?
20:11 lordcirth_work MTecknology, ok, so what's the term for a group of states/SLS files that you target like 'common.ssh' from top?
20:11 hemebond schemanic: I use the grains cache in pillars.
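A small illustration of that approach - the requesting minion's grains are available while the master renders its pillar, so a pillar sls can simply interpolate them (the pillar keys and path here are made up):

    # pillar sls sketch
    backups:
      target_dir: /srv/backups/{{ grains['fqdn'] }}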
20:11 zer0def (i'd argue the latter, since the combination of state module and state id have to be unique)
20:12 lordcirth_work zer0def, well, I never do that anyway, so idc :P
20:13 zer0def so you're only asking about the distinction between single states and SLS files
20:13 whytewolf lordcirth_work: I think the answers you are looking for are "state stanza" and "state file"
20:14 whytewolf [sometimes refered to as SLS files]
20:14 lordcirth_work whytewolf, thanks! And to refer to a group, like 'common.ssh' which is a directory with init.sls pulling in all its contents?
20:14 whytewolf formula
20:14 whytewolf which is a collection of state files
20:15 lordcirth_work I've been thinking of formulas as things that are in their own repo, but I guess that's wrong
20:15 whytewolf formulas can be separate repos. but they don't have to be. they are just a loose-knit collection of state files that belong together
20:15 Trauma joined #salt
20:16 lordcirth_work Ok, that's what I want then.  Thanks!
20:16 zer0def well, SLS files stopped being exclusively about states quite some time ago
20:16 * MTecknology likes to think of formulas as a very special type of collection of state files and other misc. stuff.
20:16 whytewolf yes SLS files also do pillars and orchestration and reactors. pretty much anything that can end in .sls
20:17 zer0def SLS all the things!
20:18 whytewolf official terminology for a state file is "SLS module" but that i think can be confusing
20:18 whytewolf see https://docs.saltstack.com/en/latest/glossary.html
20:20 exarkun joined #salt
20:39 mavhq joined #salt
21:04 shpoont joined #salt
21:08 ecdhe joined #salt
21:10 lordcirth_work whytewolf, I thought 'module' exclusively referred to things that were python code, like execution modules
21:14 zer0def wanted to make that remark, but that's just being a bit obsessive about terminology
21:17 whytewolf that is why i find it confusing
21:20 zer0def i figure being obsessive about terminology is valid only in cases when differences in interpretation actively cause damage
21:24 whytewolf agreed
21:32 shpoont joined #salt
21:41 zer0def joined #salt
21:42 dezertol joined #salt
21:44 Aleks3Y joined #salt
21:48 dave_den joined #salt
21:48 thelocehiliosan joined #salt
21:48 dave_den left #salt
21:49 shpoont joined #salt
22:02 Hybrid joined #salt
22:02 shpoont joined #salt
22:05 ProT-0-TypE joined #salt
22:05 ProT-0-TypE has the jinja filter strftime been eliminated in 2018.3? Jinja syntax error: no filter named 'strftime'; line 16
22:07 dezertol joined #salt
22:11 mavhq joined #salt
22:17 whytewolf it should still be there. make sure dateutil is installed. if it is, file a bug report.
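For reference, the usual forms of the filter; the literal date string is arbitrary, and parsing an explicit string (rather than None for "now") is the part that needs the extra date-parsing libraries:

    {# jinja sketch #}
    {% set build_stamp = None | strftime('%Y-%m-%d %H:%M') %}
    {{ '2018-03-19' | strftime('%d %b %Y') }}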
22:21 Hybrid joined #salt
22:26 kojiro joined #salt
22:28 kojiro I keep getting errors related to deploying the thin.gz when I try to do a `salt-ssh ... state.apply` on a system. How do I debug this?
22:28 kojiro `WARNING: Unable to locate current thin  version: /var/tmp/.ubuntu_52fa97_salt/version.`
22:29 kojiro `[ERROR   ] ERROR: Failure deploying thin, retrying: /usr/bin/scp` and then a _hash
22:40 shpoont joined #salt
22:44 onlyanegg joined #salt
23:07 robawt wuddup salt folks.
23:07 robawt can I specify a revision for a gitfs remote?
23:10 robawt I see I can specify a ref on a per-environment basis.  this should be fine for my uses.  thanks
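The per-saltenv form looks roughly like this in the master config; the repo URL and branch names are placeholders:

    gitfs_remotes:
      - https://github.com/example/salt-states.git:
        - saltenv:
          - dev:
            - ref: develop
          - production:
            - ref: master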
23:10 shpoont joined #salt
23:10 kojiro robawt: You are welcome! Glad I could help! o_O
23:11 robawt i couldn't have done it without you kojiro
23:11 kojiro :D
23:13 kojiro AHA! so salt-ssh tries to deploy the thin.gz _before_ it does sudo. Thus, if permissions of the target are broken and salt-ssh cannot deploy the thin as $user, it will fail, obscurely, with "Failed to deploy thin", etc!
23:14 kojiro I am glad to have gotten some learnings out of this
23:19 thelocehiliosan joined #salt
23:27 masber joined #salt
23:48 mikecmpbll joined #salt
23:50 Edgan kojiro: Another fun one is if you patch it but keep the version number the same, it will keep using the old cached code until you manually clear the cache locally.
23:51 kojiro heh
23:51 Edgan kojiro: A project I really need to get back to is a new salt-ssh mode that sets up a local temp master, and copies the minion code across to connect back across an ssh tunnel. It would solve a lot of problems.
23:52 kojiro ++
