
IRC log for #salt, 2018-04-16


All times shown according to UTC.

Time Nick Message
00:00 onslack joined #salt
00:01 Deadhand joined #salt
00:41 JacobsLadd3r joined #salt
00:43 Felgar joined #salt
00:46 JacobsLadd3r joined #salt
01:05 lkthomas joined #salt
01:34 JacobsLadd3r joined #salt
01:57 ilbot3 joined #salt
01:57 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2017.7.5, 2018.3.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
01:58 shiranaihito joined #salt
02:00 masber joined #salt
02:36 zerocoolback joined #salt
02:54 JPT joined #salt
03:25 zerocoolback joined #salt
03:49 masber joined #salt
04:03 zerocoolback joined #salt
04:13 crux-capacitor joined #salt
04:17 orichards joined #salt
04:57 zerocoolback joined #salt
05:31 DanyC joined #salt
05:35 zerocoolback joined #salt
05:35 eseyman joined #salt
05:44 Ricardo1000 joined #salt
05:50 zerocoolback joined #salt
06:00 lompik joined #salt
06:02 DanyC joined #salt
06:03 colttt joined #salt
06:05 zerocoolback joined #salt
06:15 aravind joined #salt
06:16 swa_work joined #salt
06:19 aravind hi, I am looking for some guidance/advice. We have a working salt setup that sets up a box on AWS pretty well. We install packages, configure them etc just fine. We are now looking to convert this to an AMI setup and are stumped as to how we can keep our investment in Salt and keep the benefits from using AMIs on AWS.
06:20 aravind We would like to make an AMI from a fully configured AWS instance, but we want to bring it up in the future and only run states that change app configs on the box. We don't need to do things like install packages etc, since those would have already been done previously.
06:22 aravind We looked into artificially inserting excludes in formulas, etc.. but it all seems very kludgy. I was hoping others here might have dealt with similar issues and how you worked around this. I appreciate any pointers.
06:24 zer0def well, you'd definitely need some sort of templating to provide a minion id through user data and have the reactor highstate the minion whenever a minion connects to the master (there's an event for this)
06:24 zer0def are data rates so prohibitive on AWS that you'd rather use AMIs, instead of provisioning your machines from a base image?
06:25 aravind it's more for the sake of wanting to isolate our deployments from external things like apt repos, pypi servers etc.
06:26 aravind we could mirror all of those things internally, but this (making AMIs) seemed like a good intermediate step.
06:26 zer0def so avoiding routing to the internet, got it
06:27 aravind yeah, once we have an AMI.. the machine is good to go.  Configs on the machine however will have to change when we make instances from it.
06:32 zer0def actually, you don't even have to template out the minion id, since salt-minion reads /etc/salt/minion_id, so you'd have to uniquely populate that file on instance launch before salt-minion starts, then just go through https://docs.saltstack.com/en/getstarted/event/reactor.html and https://docs.saltstack.com/en/latest/topics/event/master_events.html (you're looking at `salt/minion/<MID>/start` in your case) to
06:32 zer0def set up a reactor on the master which would either highstate your configs or otherwise react accordingly
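A minimal sketch of the setup zer0def describes, assuming the reactor sls lives at /srv/reactor/aws-start.sls (the path and state ID are illustrative):

    # master config: react to new minions connecting
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/aws-start.sls

    # /srv/reactor/aws-start.sls: highstate the minion that just started
    highstate_new_instance:
      local.state.apply:
        - tgt: {{ data['id'] }}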
06:34 aravind well.. getting the minion id into minion conf is not the problem. We can totally script around that. The issue is things like this: config_file_a (contains either minion id, or instance ip, stuff that changes) depends on package a, which depends on apt-repo a. Highstating the minion (with the id fixed) will now trigger this entire chain.
06:35 aravind if that external apt repo is down.. the entire thing fails.. defeating the point of making this AMI.
06:35 aviau joined #salt
06:35 zer0def so `pkgrepo.present` into `pkg.installed` into `file.managed`, the first two, if existing previously, shouldn't try to reach out to the net, since they ought to be satisfied during image creation
06:37 lompik joined #salt
06:37 aravind yeah.. there is still the case of us declaring apt-repo a (for package a).. maybe it does the right thing and will not update the repo upon highstate.
06:38 zer0def if you don't use `refresh: true` during `pkg.installed` or `pkg.latest`, you should be fine
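A rough sketch of that chain (the repo state function is `pkgrepo.managed`; the repo, package and file names here are hypothetical). With refresh: False, and the repo and package already baked into the AMI, a highstate should only end up touching the config file:

    app_repo:
      pkgrepo.managed:
        - name: deb http://repo.example.com/apt xenial main   # already present in the AMI

    app_pkg:
      pkg.installed:
        - name: app-package
        - refresh: False          # don't force an apt update on every highstate
        - require:
          - pkgrepo: app_repo

    app_conf:
      file.managed:
        - name: /etc/app/app.conf
        - source: salt://app/files/app.conf.jinja
        - template: jinja         # can render the minion id, instance ip, etc.
        - require:
          - pkg: app_pkg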
06:39 aravind this is just one particular example. The question in my mind was a bit more general, crudely put: is there an easy way for us to do the file.managed (and service.running) parts while ignoring the rest.
06:40 zer0def if you're concerned about dependencies reaching out to the 'net, i'd probably decouple logic between image creation and instance provisioning
06:42 zer0def could be as simple as having a condition in your sls files picking up on some sort of hint on what stage is currently being executed
06:44 aravind yeah, I think your suggestion to make things pkg.installed should help. I am just dreading the whole adding if statements everywhere part.
06:45 zer0def well, it doesn't have to be a jinja condition, it might as well be one set of sls files called for image creation and another set called during highstate
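One way to sketch that split without duplicating state definitions: build the include list from a flag that is only set while baking the image (the pillar key and sls names are hypothetical):

    # app/init.sls
    include:
    {% if salt['pillar.get']('build_phase') == 'image' %}
      - app.packages      # repos + packages, image-build phase only
    {% endif %}
      - app.config        # files and services, applied on every highstate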
06:46 aravind yup.. yup, that was the other path I was considering, splitting things into two phases.
06:46 zer0def that would probably be cleaner
06:46 rominf joined #salt
06:47 aravind it's just agonizing to think of the work involved in separating all of the nice dependencies that we so painstakingly put in there, and chopping them up.
06:47 aravind no silver bullets here :(
06:47 rominf joined #salt
06:47 zer0def well, you are technically providing a guarantee that everything up to package installation is provided by the AMI
06:48 aldevar joined #salt
06:48 tyx joined #salt
06:48 zer0def it should be a clean slice
06:49 rominf joined #salt
06:49 aravind zer0def: yup, *should be* :) Thanks for the suggestions.
06:50 zer0def well, if it isn't a clean slice, things *may* be overcomplicated somewhere
06:50 aravind hehe, they are :(
06:51 briner joined #salt
06:51 aravind we got a bit carried away with includes...
06:51 aravind and then requiring those and so on..
06:51 zer0def well, that would've worked otherwise
06:52 aravind yup.
06:52 zer0def sounds more like an exercise in commenting things out accordingly
06:58 tyx joined #salt
06:59 lompik joined #salt
07:00 Elsmorian joined #salt
07:09 aldevar joined #salt
07:15 Pjusur joined #salt
07:16 mikecmpbll joined #salt
07:26 orichards joined #salt
07:27 orichards joined #salt
07:27 awerner joined #salt
07:28 orichards joined #salt
07:34 Hybrid joined #salt
07:39 sploenix left #salt
07:47 demize joined #salt
07:47 SMuZZ joined #salt
07:55 AvengerMoJo joined #salt
07:55 mikecmpbll joined #salt
07:56 darioleidi joined #salt
07:56 briner joined #salt
07:59 darioleidi joined #salt
08:05 rollniak1 joined #salt
08:07 Cadmus joined #salt
08:14 briner joined #salt
08:17 briner joined #salt
08:18 zerocoolback joined #salt
08:20 zerocoolback joined #salt
08:24 briner_ joined #salt
08:33 mikecmpb_ joined #salt
08:33 rominf I have git_pillar called saltstack_infra with nodes/init.sls in it. I've used "mountpoint: infra" and put "'*': infra.nodes" into top.sls. I'm getting "[CRITICAL] Pillar render error: Specified SLS 'infra.nodes' in environment 'gitfs' is not available on the salt master". How do I debug git_pillar?
09:03 bvcelari joined #salt
09:14 ipsecguy joined #salt
09:18 zerocoolback joined #salt
09:18 Yamakaja joined #salt
09:19 babilen rominf: You want "'*': - infra.nodes" rather than "'*': infra.nodes"
09:20 babilen If you do that and remove the mountpoint option, does it work with "- nodes" ?
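For reference, the configuration rominf describes would look roughly like this (the repo URL is hypothetical); the top.sls lives inside the pillar repo:

    # master config
    ext_pillar:
      - git:
        - master https://git.example.com/saltstack_infra.git:
          - mountpoint: infra

    # top.sls at the repo root
    base:
      '*':
        - infra.nodes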
09:26 zerocoolback joined #salt
09:51 zerocoolback joined #salt
10:05 zulutango joined #salt
10:09 crux-capacitor joined #salt
10:16 OliverUK1 joined #salt
10:17 Elsmorian joined #salt
10:18 xet7 joined #salt
10:30 jxs1 joined #salt
10:34 mikecmpbll joined #salt
10:41 darkalia Hello here, I'm having a very strange issue with a saltstack environment in docker containers
10:42 darkalia I start a master on container, with "auto_accept: True" and I start the minion in another container, with the master ip referenced in the minion config
10:42 darkalia I get a lot of SaltReqTimeoutError and it can take up to 2 full minutes before my minion key is accepted
10:43 darkalia I tried with salt 2016.11 or 2017.7 and there is no difference.
10:44 Elsmorian @darkalia I probably don't have any new ideas but I am soon to be transitioning to using containerised salt masters so I am interested in this. Are you firing this up from a compose file? And are you using host-based networking or passing it through via the docker port mapping?
10:45 darkalia I can see that the master takes up to 2 minutes to bind its socket
10:46 darkalia Elsmorian: I'm using it as the remote execution for some tests. I was first using a custom network with the salt master known as "salt" for all containers spawned in that network
10:47 darkalia When I suddenly started to have this issue, I assumed the DNS lookup was extra overhead and I switched to passing the master ip address as a build-arg for my new container, since I start the master and then dockerized minions are spawned at will
10:49 darkalia either that or use --add-host when running the container, but these options didn't exist when I started this project
10:49 darkalia (docker 1.9)
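For context, the two containers darkalia describes boil down to configs along these lines (the address is a hypothetical value passed in at build/run time):

    # /etc/salt/master in the master container
    auto_accept: True

    # /etc/salt/minion in each minion container
    master: 172.17.0.2
    id: test-minion-01      # or populate /etc/salt/minion_id before salt-minion starts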
10:49 gmoro joined #salt
10:53 Elsmorian Ah ha. Hmm, have you / can you try with a later docker version? In my limited experience early docker versions had a fair few odd bugs and that is a pretty old version :/
10:53 darkalia Oh I think docker is the culprit in fact
10:53 darkalia I'm on 18.03 CE right now
10:53 darkalia This setup was working before
10:54 darkalia I just can't find the logic about the why
10:54 darkalia I even added everything written by salt as "VOLUME" in the Dockerfile to be sure to not use the graph for runtime data
10:55 Elsmorian Ahh, sorry I thought you were using the old version, not that that was where you started from :)
10:56 darkalia I'm writing a github issue to summarize everything, might be a bug or me being dumb, but I just need to fix this thing and I don't want to rely on preseeded minion for this stuff :)
11:03 zerocoolback joined #salt
11:04 zerocoolback joined #salt
11:06 Elsmorian darkalia: Cool, would you mind pinging me a link when you post that? Would be good to keep an eye on it in case I run into something similar
11:08 OliverUK1 left #salt
11:08 OliverUK joined #salt
11:09 darkalia Sure
11:11 OliverUK Hello, I am trying to use file.blockreplace and match the end of the block as an empty line, is there a way of doing this? TIA
11:13 zerocoolback joined #salt
11:26 Yamakaja joined #salt
11:32 evle1 joined #salt
11:35 rominf babilen: Yes, it's "'*': - infra.nodes". Yes, it works without the mountpoint option with "- nodes".
11:35 aldevar joined #salt
11:41 OliverUK Or even if someone knows how I can configure the last line as an empty line that would be great
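OliverUK's empty-line question goes unanswered in the log; for reference, the usual explicit-marker form of the state looks like this (the target file and content are hypothetical):

    manage_block:
      file.blockreplace:
        - name: /etc/example.conf
        - marker_start: "# BEGIN salt managed zone"
        - marker_end: "# END salt managed zone"
        - content: |
            setting_a = 1
            setting_b = 2
        - append_if_not_found: True   # add the block (with markers) if it's missing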
11:52 nbari Hi, any idea how to install salt-minion on a ppc64?
11:52 DanyC joined #salt
11:54 user3281 joined #salt
11:59 StolenToast joined #salt
12:01 StolenToast joined #salt
12:07 zer0def nbari: my idea would be to create a python2 virtualenv and take the init scripts from the packaging appropriate to your init system
12:08 zer0def from that point onward, install salt in said virtualenv and cross your fingers
12:12 nbari will give it a try, thanks
12:16 briner_ joined #salt
12:17 robinsmidsrod I just upgraded to salt-master 2018.3.0 on Ubuntu 14.04 and I'm getting these errors from the zfs grain: https://gist.github.com/robinsmidsrod/bd7fa6f19db694a5a4b7bbd5e500937a
12:18 robinsmidsrod how can I fix this issue?
12:19 robinsmidsrod the problem is mainly related to this: [ERROR] Command '/sbin/zpool list -H -p -o name,size' failed with return code: 2
12:19 robinsmidsrod [ERROR] output: invalid option 'p'
12:22 zerocoolback joined #salt
12:24 Nahual joined #salt
12:32 oida joined #salt
12:33 crux-capacitor joined #salt
12:33 obimod joined #salt
12:35 obimod hey all
12:35 obimod might anyone have a tip for how to import a var from a rendered jinja file (i.e. global_vars.jinja) into a #!py state file?
12:36 obimod i've tried __salt__['jinja.render']('global_vars.jinja')
12:37 viq joined #salt
12:37 _KaszpiR_ joined #salt
12:41 gswallow joined #salt
12:44 obimod ah, __salt__['slsutil.renderer'](abs_path_to_file)
12:45 obimod very nice :)
12:50 justanotheruser joined #salt
12:52 dendazen joined #salt
13:02 cgiroua joined #salt
13:04 cgiroua_ joined #salt
13:05 gh34 joined #salt
13:22 cgiroua joined #salt
13:29 BitBandit joined #salt
13:39 exarkun joined #salt
13:39 lompik joined #salt
13:48 jhauser joined #salt
13:48 jerematic joined #salt
13:51 exarkun joined #salt
13:55 alex-zel joined #salt
13:55 alex-zel is it possible to configure the salt reactor via pillars?
14:11 briner_ joined #salt
14:16 babilen alex-zel: Sure -- https://github.com/saltstack-formulas/salt-formula/blob/master/pillar.example#L111-L113 is an example
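Going by the linked pillar.example, the salt-formula renders pillar data under the salt:master key straight into the master config, so the reactor pillar mirrors the master's reactor option; roughly (the event tag and sls path are illustrative):

    salt:
      master:
        reactor:
          - 'salt/minion/*/start':
            - /srv/reactor/minion-start.sls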
14:16 gswallow_ joined #salt
14:21 GrisKo joined #salt
14:23 alex-zel tnx
14:23 heaje joined #salt
14:23 gswallow joined #salt
14:28 zerocoolback joined #salt
14:29 dijit Hey, anyone using salt-cloud with google cloud? I'm trying to create a disk on provision... I can't seem to get the syntax of 'volumes' to work, based on the EC2 configurations.
14:41 dijit https://docs.saltstack.com/en/latest/topics/cloud/aws.html#cloud-profiles
14:41 dijit https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/gce.py#L2531
14:42 dijit > sudo salt-cloud -p test-profile test-instance
14:42 dijit [ERROR   ] There was a profile error: malformed node or string: [{'size': 100}]
14:44 racooper joined #salt
14:51 cgiroua joined #salt
14:53 cgiroua joined #salt
14:56 tiwula joined #salt
15:03 alex-zel does thorium not support GitFS?
15:04 MTecknology I imagine it would..
15:05 alex-zel can't find anything in the docs, only states and pillars
15:08 MTecknology If I had to venture a guess, without looking anything up, I'd guess salt://_thorium/ would be a good place.
15:10 alex-zel _thorium is for custom modules, I'm talking about /srv/thorium/ via git
15:12 zerocoolback joined #salt
15:14 cgiroua_ joined #salt
15:16 cgiroua__ joined #salt
15:18 cgiroua joined #salt
15:19 cgirou___ joined #salt
15:24 scarcry joined #salt
15:25 nbari I am trying to install salt using "pip install salt"; after printing the message "Collecting salt" it exits with code 1
15:25 dijit what message
15:33 exarkun joined #salt
15:33 nkuttler nbari: log? traceback?
15:34 nbari currently on a mac getting: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)",)': /packages/11/52/29a7b924e495f22764603aa095ab41b2c4a952a4b57ad5689dd863e575ff/salt-2018.3.0.tar.gz
15:35 nkuttler yeah, that doesn't reply to me either
15:35 whytewolf that sounds like an issue with pypi
15:36 cgiroua joined #salt
15:37 noobiedubie joined #salt
15:38 nbari any other way to install ?
15:38 nbari probably git ?
15:40 whytewolf git, you said you were on mac so homebrew or macports is a common way.
15:41 das_deniz left #salt
15:41 nbari I want to install salt on a power9 but was testing on a mac and got the same behavior, so it seems more like an issue with pypi.org
15:41 nbari I just gave it a try; it started to fetch some packages and then again I got a 404
15:44 whytewolf if you are running aix on that power9 you might want to look into enterprise. there are AIX packages
15:44 nbari currently using redhat7.4
15:44 nbari any chance there are some packages?
15:45 whytewolf http://repo.saltstack.com/#rhel
15:46 nbari found one, 2015.5.10, working :-)
15:46 nbari any chance it is updated to 2018?
15:46 DanyC joined #salt
15:47 whytewolf ... repo.saltstack.com has 2018.3.0
15:48 nbari ok let me give it a try; I just added the repo and I'm getting https://repo.saltstack.com/yum/redhat/7/ppc64le/latest/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
15:49 DanyC joined #salt
15:49 whytewolf doh, ever forget an important detail ... i did, arch. yeah we don't build power9 arch
15:50 whytewolf only arch we have for redhat 7.4 is x86_64
15:50 nbari don't know why I was able to get the 2015 one
15:50 whytewolf pulled from a different repo somewhere i guess.
15:51 whytewolf yum should tell you the repo
15:52 RobSpectre joined #salt
15:53 RobSpectre Hey gang - getting bit by the new pip bug: https://github.com/saltstack/salt/issues/46971
15:53 RobSpectre Anyone know how to keep a virtualenv state from upgrading pip to version 10?
15:55 alex-zel do i need to restart salt-master after updating the reactor?
15:55 nbari whytewolf: I found the package in the epel repo
15:57 whytewolf alex-zel: in theory no, in practice yes.
15:57 alex-zel seems like configuring the reactor via pillars doesn't work
15:59 om2 joined #salt
15:59 dezertol joined #salt
16:02 noobiedubie alex-zel: do you see your event in the output of salt-run state.event pretty=True?
16:03 alex-zel yeah, but no state is activated
16:03 noobiedubie make sure your reactor on your salt-master matches what's coming over the wire
16:03 noobiedubie post your reactor.conf with its location and your actual reactor file with its location, as well as the state file it is supposed to run
16:04 noobiedubie paste.debian.net
16:06 alex-zel no reactor.conf, I'm trying to use pillars for that
16:06 noobiedubie ?
16:06 alex-zel with reactor.conf I can catch the event and run a state
16:07 MTecknology It's still config
16:07 Jumbobazman joined #salt
16:07 Deadhand joined #salt
16:08 noobiedubie you can have a reactor file in a pillar, but reactor.conf is different: it's how the salt master knows about your events, specifically the ones you want it to react to
16:08 dijit Hey, anyone using salt-cloud with google cloud? I'm trying to create a disk on provision... I can't seem to get the syntax of 'volumes' to work, based on the EC2 configurations.
16:08 dijit https://docs.saltstack.com/en/latest/topics/cloud/aws.html#cloud-profiles
16:08 dijit https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/gce.py#L2531
16:08 dijit > sudo salt-cloud -p test-profile test-instance
16:08 dijit [ERROR   ] There was a profile error: malformed node or string: [{'size': 100}]
16:09 alex-zel https://github.com/saltstack-formulas/salt-formula/blob/master/pillar.example#L111-L113
16:09 noobiedubie dijit: is this for your root drive by any chance
16:09 dijit nah, I'm trying to add an additional drive.
16:09 MTecknology noobiedubie: I assume he means setting master config values from pillar
16:09 alex-zel according to this I can configure the reactor via pillars
16:10 alex-zel my salt-master is running in a container, I don't want to kill it and create a new one every time I change reactor.conf
16:10 MTecknology docker* container.. I assume
16:10 dijit can you pass the directory to the host?
16:10 alex-zel yeah docker
16:10 dijit pass the config* directory to the host.
16:12 noobiedubie you don't have to kill it, simply restart the service
16:12 whytewolf alex-zel: that is a formula. that will create a reactor.conf file
16:13 whytewolf https://github.com/saltstack-formulas/salt-formula/blob/master/salt/files/master.d/reactor.conf <=- the jinja template that formula uses for that reactor pillar example
16:14 noobiedubie alex-zel: yeah not sure what you want to do, are you as whytewolf and MTecknology are saying just trying to dynamically generate your reactor.conf based on a state?
16:15 jeremati_ joined #salt
16:15 DammitJim joined #salt
16:16 alex-zel no, all I want is to configure reactor.conf from pillars
16:16 alex-zel and not have to restart salt-master for it to take effect
16:17 whytewolf alex-zel: that isn't going to happen.
16:17 jerematic joined #salt
16:17 alex-zel well that's disappointing
16:18 alex-zel sure I can generate reactor.conf using pillars and a state, but having to restart the master kinda sucks
16:18 whytewolf try to not restart the master. there are a lot of configs that can be done on the master that don't need a restart.
16:19 alex-zel I'm already doing this with salt-cloud, creating a provider+profile+map during an orchestration, but this doesn't require a restart
16:19 shakalaka joined #salt
16:20 whytewolf honestly the reactor processes don't even start until you have configured them for the first time.
16:22 Mousey joined #salt
16:22 Mousey greetings salty IRCers
16:23 dijit greets
16:24 dijit for my question, I found an answer here: https://github.com/saltstack/salt/pull/22785
16:32 alex-zel ok I added a new event to reactor.conf, I'm going home, and if tomorrow salt-master still won't recognize my event then I can be 100% sure that it needs to be restarted on every reactor.conf change
16:41 k1412 joined #salt
16:42 JacobsLadd3r joined #salt
16:48 jerematic joined #salt
16:51 zerocoolback joined #salt
16:56 JacobsLadd3r joined #salt
17:03 cgiroua_ joined #salt
17:13 exarkun joined #salt
17:17 AngryJohnnie joined #salt
17:20 ce1010 joined #salt
17:24 ce1010 left #salt
17:24 ce1010 joined #salt
17:25 ce1010 left #salt
17:26 rollniak joined #salt
17:29 cgiroua joined #salt
17:47 ymasson joined #salt
17:52 cliluw joined #salt
17:56 cgiroua_ joined #salt
18:01 zerocoolback joined #salt
18:02 zerocoolback joined #salt
18:17 pf_moore joined #salt
18:17 swa_work joined #salt
18:21 zer0def quick question - how do i place values in the `[DEFAULT]` section, when using `file.serialize` with `formatter: configparser`?
18:22 zer0def tried with empty string as section key, but that associates values in it to `[DEFAULT]` *and* creates an empty string section
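The [DEFAULT] question stays open in the log; for reference, the usual shape of the state zer0def is using looks like this (the file name and keys are hypothetical):

    app_ini:
      file.serialize:
        - name: /etc/app/app.ini
        - formatter: configparser
        - dataset:
            section_a:
              key_one: value_one
            section_b:
              key_two: value_two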
18:22 rollniak joined #salt
18:43 crux-capacitor is it ok to have 2 commands in an SLS that both have a require requisite on the same command (defined further up in the SLS)?
18:44 zer0def yes
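For reference, a minimal sketch of that pattern, with hypothetical script paths; both later states require the same earlier one, each under its own unique ID:

    prepare_realm:
      cmd.run:
        - name: /opt/app/prepare.sh

    configure_realm:
      cmd.run:
        - name: /opt/app/configure.sh
        - require:
          - cmd: prepare_realm

    register_realm:
      cmd.run:
        - name: /opt/app/register.sh
        - require:
          - cmd: prepare_realm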
18:44 aldevar joined #salt
18:47 crux-capacitor OK, im getting the following error when attempting this state run: " ID 'run_this_script' in SLS 'application.realm' contains multiple state declarations of the same type "
18:47 AngryJohnnie joined #salt
18:47 crux-capacitor https://ghostbin.com/paste/brr3e
18:49 Hybrid joined #salt
18:51 whytewolf crux-capacitor: i see nothing wrong with what was posted.
18:52 crux-capacitor yea i thought it was fine, too...
18:53 exarkun joined #salt
18:53 shakalaka joined #salt
18:54 sjorge joined #salt
18:54 whytewolf just tested and (i don't know about your script) but it runs fine for me
18:55 whytewolf the issue is most likely in what you didn't post
18:57 RobSpectre Most of my states are dependent on salt managed virtualenvs and failing as a result of this bug (https://github.com/saltstack/salt/issues/46971). Does any one know how to pin pip for a virtualenv state?
18:58 crux-capacitor ah whoops, accidentally had the name of the next stanza commented
18:58 crux-capacitor :D
18:58 MTecknology wouldn't you just need to specify a version in the dependencies?
18:58 MTecknology pip==9.x, or something like that
18:59 RobSpectre @mtecknology it appears that the virtualenv state doesn't use the pip version on the system but auto-upgrades to latest.
18:59 RobSpectre Can't find anything in the documentation on preventing that.
19:00 briner_ joined #salt
19:00 stooj joined #salt
19:00 whytewolf --system-site-packages?
19:01 RobSpectre @whytewolf Did try setting that to True - appears it upgrades pip anyway. :/
19:02 whytewolf kick pypi in the nether regions for forcing an upgrade that breaks things?
19:03 RobSpectre Looking like that line is going around the block this morning.
19:03 whytewolf i mean one of the only other options i can think of would be --no-pip and then install pip manually
19:05 RobSpectre @whytewolf Def a lot of gymnastics to solve the problem, but appreciate you lending some consideration nevertheless. Will give that a swing.
19:07 Hybrid joined #salt
19:08 whytewolf hmm, maybe a custom bootstrap script. but that goes WAY outside of my experience with virtualenv https://virtualenv.pypa.io/en/stable/reference/#extending-virtualenv
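One possible shape of that kind of workaround, sidestepping the virtualenv/pip states entirely and pinning pip inside the venv right after creation (the paths and version pin are hypothetical, not taken from the log):

    app_venv:
      cmd.run:
        - name: virtualenv /opt/app/venv
        - creates: /opt/app/venv/bin/pip

    app_venv_pin_pip:
      cmd.run:
        - name: "/opt/app/venv/bin/pip install 'pip<10'"
        - onchanges:
          - cmd: app_venv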
19:11 stooj joined #salt
19:52 shanth joined #salt
19:53 shanth we are looking into using salt-ssh on a few minions; we are going to make a salt user and run commands via sudo. will commands work if the user has /sbin/nologin as their shell?
20:00 tyx joined #salt
20:06 scooby2 joined #salt
20:06 MTecknology I have a state in one file (ferm: file.managed) that I want to set a listen on from another sls (sshguard: pkg.installed). I want to use sshguard: pkg.installed: - listen_in: ferm. However, when that file changes, I'm told the state wasn't found in that sls. I know it's not in that sls, but I know the state is included in the highstate.   I end up with this return- http://dpaste.com/1PAC4MF
20:12 alfie left #salt
20:13 MTecknology apparently I was doing something screwy with requisites, but I don't understand what. :(
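The log doesn't resolve what was screwy; for reference, a sketch of the cross-sls listen_in shape MTecknology describes (the state IDs here are hypothetical, not his actual files) — the listen_in target is referenced by state module and ID:

    # ferm/init.sls
    ferm_conf:
      file.managed:
        - name: /etc/ferm/ferm.conf
        - source: salt://ferm/files/ferm.conf

    # sshguard/init.sls
    sshguard:
      pkg.installed:
        - listen_in:
          - file: ferm_conf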
20:23 shanth with salt-ssh is there a way i can make it use both an ssh key and a password? we use two factor auth on all servers
20:24 defsan joined #salt
20:37 lordcirth_work shanth, curious, what method do you use to set up two-factor before Salt?
20:39 shanth we manage most of our machines with salt, lordcirth_work, so we simply push sshd_config with salt. for certain reasons we cannot use salt-minion to manage certain hosts, so we must use salt-ssh. the problem is that we use two-factor ssh and salt-ssh can't log in
20:39 shanth it seems that salt-ssh is limited in using either an ssh key or a password
20:39 shanth i need both
20:40 lordcirth_work sounds like a pull request is needed
20:42 shanth yeah we are kinda boned. we can't manage really important hosts now
20:42 lordcirth_work Did you just enable two-factor without testing?
20:43 shanth we had to enable two factor because of PCI standards
20:43 shanth we dont have a choice in the matter
20:43 lordcirth_work Ah I see.
20:45 DanyC joined #salt
20:46 hiroshi joined #salt
20:50 Hybrid joined #salt
20:54 Edgan shanth: Sad, absolute two-factor is anti-automation :(
20:54 shanth true Edgan
20:55 shanth lordcirth_work: it appears that the root cause of my issue may be solved in the latest version of Salt. being able to specify the SOURCE_ADDRESS for the minion
20:56 shanth sadly FreeBSD only has 2017.7.4_1 and i need version 2018.3.0.
20:56 Edgan shanth: How do you have a deploy user in something like Jenkins that uses ssh to deploy something with absolute two factor?
20:57 shanth we dont use jenkins
20:58 Edgan shanth: You have no tool like Jenkins? Team City? Bamboo? etc?
20:59 shanth bamboo
21:00 Edgan shanth: ok, so how can bamboo ssh deploy something with two factor? The person has to enter the two factor code for that moment into the job?
21:00 shanth apparently for the time being that one has an exception where it doesn't use two factor
21:02 Edgan haha, so get a second one :) Use that as the precedent.
21:03 shanth might have to
21:03 whytewolf that is pretty much how PCI goes. filing exceptions that are logged for things that just cannot do what PCI expects everything to do. it is more paperwork than followable standard
21:04 whytewolf still easier then dealing with gambling boards for online products.
21:05 shanth true
21:05 whytewolf s/then/than
21:10 mrueg joined #salt
21:13 oida joined #salt
21:22 Pepperish joined #salt
21:23 shanth whytewolf: what is the difference between salt versions 2017 and 2018? is 2018 like experimental?
21:27 whytewolf no, they are both considered stable. features go into develop so all new features are in the higher number release. but more time has gone into keeping the older release stable. so it generally is more stable.
21:28 whytewolf as we get closer to another major release the older version will only get CVE fixes
21:28 shanth i really really really need the source_address feature from 2018 in 2017 https://docs.saltstack.com/en/latest/ref/configuration/minion.html#source-address
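For reference, on 2018.3.0+ that option goes in the minion config; a minimal sketch with hypothetical addresses:

    # /etc/salt/minion
    master: salt.example.com
    source_address: 10.0.0.5    # local address the minion uses when connecting to the master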
21:29 whytewolf that doesn't look like a salt-ssh feature
21:29 whytewolf it is minion config
21:29 shanth i know
21:29 shanth i wouldn't have to use salt-ssh if i had source_address available
21:30 shanth because then we could manage certain minions
21:30 whytewolf ok, well unfortunately that is a feature, so it went into develop, which is why it is in 2018.
21:30 shanth wait what?
21:31 whytewolf features go into develop. only bug fixes go into branches.
21:31 shanth ahhh
21:31 shanth 2017 is only bug fixes? and 2018 is features?
21:31 whytewolf actually 2018.3 is only bug fixes now also.
21:31 MTecknology 2018.x is released, no new features
21:31 shanth i wonder if 2018 will ever come to freebsd
21:32 whytewolf ask christer.edwards
21:32 shanth :) i was going to email him actually
21:32 whytewolf he is behind in 2017.7 as well
21:33 shanth yea he is
21:33 shanth hand me the pokin stick
21:34 aldevar left #salt
21:34 dh joined #salt
21:34 whytewolf there are alternative installs. such as git or pip.
21:35 whytewolf but in a PCI environment you might not want to even begin to explore those
21:35 wongster80 joined #salt
21:36 shanth alternative installs of salt?
21:36 shanth like forks people have made?
21:36 whytewolf no, like other ways of installing salt.
21:36 shanth ah
21:36 whytewolf it is just python after all
21:36 shanth good idea
21:36 shanth i will look into that
21:39 shanth is there a guide for that, whytewolf, or is that no man's land using git?
21:40 whytewolf closest thing i can think of is https://docs.saltstack.com/en/2016.11/topics/development/hacking.html
21:40 whytewolf sorry that is an old version https://docs.saltstack.com/en/latest/topics/development/hacking.html
21:40 shanth just what i needed thanks
21:41 DanyC joined #salt
21:48 DanyC joined #salt
21:51 gmoro joined #salt
22:00 awerner joined #salt
22:15 User__ joined #salt
22:15 mikecmpbll joined #salt
22:46 DanyC joined #salt
23:24 stooj joined #salt
23:27 exarkun joined #salt
