
IRC log for #salt, 2017-07-21


All times shown according to UTC.

Time Nick Message
00:08 woodtablet left #salt
00:09 joe_n joined #salt
00:12 dxiri thanks for the responses guys!
00:20 dxiri_ joined #salt
00:21 jmickle joined #salt
00:22 mosen joined #salt
00:23 Guest73 joined #salt
00:30 zuppa joined #salt
00:33 masber joined #salt
00:44 jeddi joined #salt
00:44 jmickle joined #salt
01:24 ssplatt joined #salt
01:25 cyteen joined #salt
01:26 Nahual joined #salt
01:46 edrocks joined #salt
01:48 ilbot3 joined #salt
01:48 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.6, 2017.7.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers <+> The call for speakers for SaltConf17 is now open: http://tinyurl.com/SaltConf17
01:50 noobiedubie joined #salt
02:03 _KaszpiR_ joined #salt
02:09 _KaszpiR_ joined #salt
02:26 Guest73 joined #salt
02:36 Guest73 joined #salt
02:39 zulutango joined #salt
02:46 beoseph_stallin[ joined #salt
02:46 Guest73 joined #salt
02:49 svij3 joined #salt
02:50 Guest73 joined #salt
02:53 leonkatz joined #salt
03:05 Zachary_DuBois joined #salt
03:06 vishvendra joined #salt
03:10 vishvendra1 joined #salt
03:12 zerocool_ joined #salt
03:14 zerocool_ joined #salt
03:16 onlyanegg joined #salt
03:17 dxiri joined #salt
03:22 dxiri_ joined #salt
03:25 cgiroua joined #salt
03:28 woodtablet joined #salt
03:29 dxiri joined #salt
03:35 cyborg-one joined #salt
03:39 woodtablet left #salt
03:41 dxiri joined #salt
03:47 edrocks joined #salt
03:48 evle joined #salt
03:49 donmichelangelo joined #salt
04:01 dxiri joined #salt
04:06 dxiri joined #salt
04:09 svij3 joined #salt
04:23 mavhq joined #salt
04:27 cgiroua joined #salt
04:30 svij3 joined #salt
04:31 Shirkdog joined #salt
04:32 hoonetorg joined #salt
04:49 beardedeagle joined #salt
04:52 mosen joined #salt
04:52 Shirkdog joined #salt
04:59 tobstone joined #salt
05:10 kerrick joined #salt
05:11 kerrick joined #salt
05:16 joe_n joined #salt
05:21 high_fiver joined #salt
05:23 onlyanegg joined #salt
05:35 high_fiver joined #salt
05:42 vishvendra1 joined #salt
05:49 edrocks joined #salt
05:53 preludedrew joined #salt
05:53 mavhq joined #salt
05:53 onlyanegg joined #salt
05:59 wiggy joined #salt
06:02 socket- joined #salt
06:02 vishvendra joined #salt
06:06 do3meli joined #salt
06:06 do3meli left #salt
06:06 absolutejam Oh dear
06:07 sh123124213 joined #salt
06:07 absolutejam I never knew about the file tree external pillar
06:08 absolutejam I write my own hacky module and shit too
06:09 aldevar joined #salt
06:12 chowmeined joined #salt
06:15 lompik joined #salt
06:16 impi joined #salt
06:21 inad922 joined #salt
06:38 wiggy left #salt
06:38 gnomethrower joined #salt
06:47 high_fiver joined #salt
06:48 vishvendra1 joined #salt
06:50 aldevar1 joined #salt
06:56 zerocool_ joined #salt
07:00 kboratynski joined #salt
07:01 kboratynski Hey. Question.
07:01 kboratynski Mine targeting by grains is not working for me.
07:01 kboratynski http://wklej.org/hash/f84a5bd7676/
07:01 kboratynski Example here.
07:01 kboratynski Any suggestions?
07:03 kboratynski Moreover: targeting by salt is working fine. http://wklej.org/id/3222117/
07:06 Hybrid joined #salt
07:08 svij3 joined #salt
07:09 aldevar joined #salt
07:09 lompik joined #salt
07:12 * whytewolf hands kboratynski an s
07:13 whytewolf i do believe it is grainS not grain
07:14 coredumb Good morning
07:15 whytewolf well, it is morning. doubt it is good as i haven't been to bed yet and i am making a midnight snack
07:16 kboratynski @whywolf: nope; http://wklej.org/id/3222122/
07:17 kboratynski whytewolf: nope; http://wklej.org/id/3222122/
07:17 whytewolf does mine.get '*' network.ip_addrs work
07:17 kboratynski Yup.
07:17 hemebond expr_form=grain
07:17 whytewolf shouldn't need the expr_form
07:18 whytewolf on the cli
07:18 kboratynski Yes. ;-)
07:18 kboratynski Well, nope.
07:18 kboratynski Yes - for your sentence, it should not be required.
07:18 hemebond Oh wait. tgt_type=grain
07:18 kboratynski From master?
07:18 kboratynski Or minion?
07:18 whytewolf either.
07:19 coredumb whytewolf: :D
07:19 hemebond Is roles a string?
07:19 coredumb hey folks that use the cloud on a daily basis
07:19 coredumb got a question for you
07:20 whytewolf hemebond: the string he is trying is the same between a mine.get and a normal -G target
07:20 coredumb do you have your salt-master that is deploying in the cloud using salt-cloud as a cloud instance itself?
07:21 hemebond whytewolf: Yeah, but I've seen people try to match a list grain using a string before.
07:21 hemebond Just thought I'd check that first.
07:21 whytewolf kboratynski: try mine.get 'G@roles:elasticsearch-master' network.ip_addrs
07:21 hemebond Since it's called "roles"
07:21 whytewolf hemebond: yes, but if it is working in a normal target it isn't a string. and he already demonstrated that it works in a normal target
07:22 Guest73 joined #salt
07:22 hemebond Ah I missed that.
07:22 kboratynski http://wklej.org/id/3222125/
07:22 kboratynski Not working.
07:23 kboratynski Grains are called roles.
07:23 whytewolf no, grains are not called roles. your roles grain has a list
07:24 kboratynski Ach! Yes.
07:24 kboratynski http://wklej.org/id/3222127/
07:24 kboratynski And they are visible.
07:24 whytewolf also sorry, for some reason i forgot that mine.get doesn't default to compound. add compound to the end of that last mine.get i suggested
07:25 hemebond coredumb: My salt-master is in the cloud.
07:25 whytewolf kboratynski: what version of salt?
07:25 MTecknology I hate IT guys saying "the cloud" :(
07:26 MTecknology Are you talking about your OwnCloud instance running in your basement or aws or esxi?
07:26 hemebond salt-run mine.get 'os:Debian' ipv4 grain <<< works for me (ipv4 being an alias)
07:26 kboratynski http://wklej.org/id/3222128/
07:27 hemebond MTecknology: "cloud" means "out where I don't have to worry about it"
07:27 hemebond :-)
07:27 MTecknology hemebond: to you it does, but not to everyone
07:28 hemebond Actually I had this dilemma earlier today. How would _you_ refer to a remotely-hosted-by-someone-else cloud platform stuff?
07:28 MTecknology It's cringeworthy how many people talk about owncloud/nextcloud being their cloud. I've seen people expect to run HA nginx on nextcloud hosts
07:28 MTecknology meaning, they thought they were going to get HA on a single host
07:28 kboratynski whytewolf: ^
07:28 whytewolf so latest...
07:29 whytewolf hemebond: have you updated yet?
07:29 hemebond whytewolf: Nooooope.
07:29 hemebond Still on 2016.3.6 until I fix the AWS/EC2 regression.
07:29 MTecknology hemebond: I borrow VM space from a VPS provider, you can typically just name AWS or EC2.. unless you're in a sales room
07:29 whytewolf MTecknology: when I say cloud i always mean either the openstack setup i have at work, where yes we get HA, or my home openstack setup, where again i get a milder form of HA but still have HA
07:30 Ricardo1000 joined #salt
07:30 beardedeagle coredumb: I have my master in the "cloud" and it deploys masters/syndics to the cloud that are in turn configured to deploy minions to the cloud
07:30 beardedeagle some day I hope to achieve true recursion
07:30 kboratynski whytewolf: Suggestions?
07:30 whytewolf wonder if this is yet another little bug in 2017.7
07:32 MTecknology beardedeagle: it's not really recursion, it's more like speaking smurf
07:32 beardedeagle cloud == openstack here at godaddy
07:33 whytewolf ugh, kboratynski put in a bug report
07:33 whytewolf i just tested it and yes it is a bug
07:34 whytewolf i really hope 2017.7.1 drops soon. cause there are a ton of things wrong with this release
07:34 beardedeagle whytewolf: love salt, but isn't there always?
07:34 whytewolf there are always minor little things. but this release has been wtf crazy
07:35 beardedeagle well you add a language update and things tend to go sqwiffy
07:35 MTecknology I'm starting to just avoid the .0 releases entirely.
07:35 whytewolf salt.state orchestrations completely dropping all pillars, now mine.get targeting not working with other tgt_types
07:35 MTecknology beardedeagle: schwifty*
07:36 beardedeagle too much rick and morty...or not enough?
07:36 whytewolf ohhhh, interesting. kboratynski i do have a work around for you and i think i know what happened
07:37 whytewolf and this is silly
07:37 MTecknology I'm too tired to be awake. G'night friends and bits!
07:37 whytewolf good night MT have a good one
07:37 kboratynski whytewolf ?;-)
07:37 usernkey joined #salt
07:37 whytewolf kboratynski: add tgt_type=grain expr_form=grains
07:37 whytewolf [you need both]
07:38 whytewolf ack though i nuked that s off the end of grain
07:38 whytewolf tgt_type=grain expr_form=grain
07:38 coredumb beardedeagle: that's inception like
07:39 whytewolf in the move to localize to tgt_type a check must have got missed somewhere.
07:39 beardedeagle "api changes that may or may not work"
07:40 beardedeagle should be a section
07:40 coredumb gnite MTecknology
07:40 aldevar1 joined #salt
07:41 kboratynski whytewolf: But with salt-call or just sall?
07:41 whytewolf kboratynski: that doesn't matter.
07:41 snarked joined #salt
07:42 whytewolf this isn't something i would work into your code either. cause hopefully that bug will be squashed quick
07:42 kboratynski ERROR executing 'mine.get': The following keyword arguments are not valid: tgt_type=grain
07:43 Zachary_DuBois joined #salt
07:43 kboratynski salt '*' mine.get tgt='G@roles:elasticsearch-master' fun='network.ip_addrs' tgt_type='grain'
07:44 whytewolf all of your minions are running 2017.7 right?
07:44 kboratynski whytewolf: Yup.
07:45 whytewolf in my test environment this worked
07:45 whytewolf salt '*' mine.get 'server_id:1852694057' 'mgmt_network' tgt_type=grains expr_form=grain
07:45 whytewolf but you need both tgt_type and expr_form
07:45 whytewolf missing one and it doesn't work
07:46 whytewolf but that error I only saw on minions that were running earlier versions of salt
07:48 whytewolf interesting. once i got it to work, i didn't have to resort to that method. just having grain worked
07:49 whytewolf something about the cache must not like it.
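The workaround whytewolf lands on can be sketched as a single CLI call. This is a best-effort reconstruction from the discussion (the roles grain and elasticsearch-master value come from kboratynski's pastes), not a verified fix; as noted above, the bug was expected to be squashed in 2017.7.1:

```shell
# 2017.7.0 workaround: pass BOTH the new kwarg (tgt_type) and the
# deprecated one (expr_form); either alone reportedly fails.
salt '*' mine.get \
    tgt='roles:elasticsearch-master' \
    fun='network.ip_addrs' \
    tgt_type=grain expr_form=grain
```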
07:49 kerrick joined #salt
07:51 kboratynski Dayyym!
07:51 kboratynski root@ip-10-186-9-174:~# salt-minion --version salt-minion 2015.8.8 (Beryllium) root@ip-10-186-9-174:~#
07:51 edrocks joined #salt
07:51 kboratynski I did not add the salt installation to user_data in elasticsearch.
07:54 whytewolf humm. yeah once i get 1 good return from the mine it works.
07:55 whytewolf anyway. it is 1 in the morning and i have an 8am call to be on.
07:55 ahrs joined #salt
07:56 onlyanegg joined #salt
07:59 mikecmpbll joined #salt
08:01 jeddi joined #salt
08:03 aldevar joined #salt
08:03 Ricardo1000 joined #salt
08:03 kboratynski Being on-call. :S
08:08 _KaszpiR_ joined #salt
08:09 aldevar1 joined #salt
08:10 noraatepernos joined #salt
08:22 pbandark joined #salt
08:23 usernkey joined #salt
08:44 dxiri joined #salt
08:50 absolutejam anyone used file_tree external pillar?
08:50 absolutejam It seems exactly what I want, except that it doesn't seem to be merging dicts
08:57 onlyanegg joined #salt
09:07 absolutejam So, it only seems to respect a single nodegroup
09:07 absolutejam Does that sound right?
09:13 inad922 joined #salt
09:14 Deliant joined #salt
09:15 coredumb absolutejam: that's a good question I've not used it more than with a single nodegroup
09:16 coredumb and I don't use it anymore
09:22 absolutejam it merges the `hosts` and `nodegroups` keys
09:22 absolutejam but doesn't seem to do 2 nodegroups...
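For reference, a minimal master-side sketch of the file_tree external pillar absolutejam is describing; the root_dir path is hypothetical, and per-nodegroup merging is exactly the behavior being questioned here:

```yaml
# /etc/salt/master.d/pillar.conf (path hypothetical)
ext_pillar:
  - file_tree:
      root_dir: /srv/pillar/file_tree
# Pillar data is then read from directories such as:
#   /srv/pillar/file_tree/hosts/<minion_id>/
#   /srv/pillar/file_tree/nodegroups/<nodegroup_name>/
```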
09:22 ivanjaros joined #salt
09:23 mavhq joined #salt
09:30 cyteen joined #salt
09:33 zerocool_ joined #salt
09:53 jhauser joined #salt
10:09 Score_Under I think I've asked this before, but is it possible to order two states between two different SLS files, but not include one in the other?
10:12 dxiri joined #salt
10:12 twooster Score_Under: I think `require` might "just work" referencing states defined in other files? have you tried it?
10:13 twooster Of course if the other file isn't included, I would expect that to blow up.
10:14 viq you can also require an sls, though not sure whether that will work without include
10:16 twooster If you want a "B requires A if and only if A has been defined", that's a bit trickier. I think that could be accomplished by a third `.sls` required by both, and judicious use of `require` and `require_in`.
10:17 pbandark joined #salt
10:17 twooster Other options are depending on guaranteed ordering (order of inclusion/definition) or even explicitly setting the `order` field
10:25 Score_Under I think order will be the easiest solution for now. I'll keep the intermediate SLS solution in mind too though, since that could come in quite handy.
10:25 Score_Under thanks for the ideas
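The `order` approach Score_Under settled on, as a hedged sketch (state IDs and commands are hypothetical): `order: 1` runs early and `order: last` runs late regardless of which SLS file defines the state, with no include needed between the files:

```yaml
# one.sls
runs_first:
  cmd.run:
    - name: echo "I run early"
    - order: 1

# two.sls (no include of one.sls required)
runs_last:
  cmd.run:
    - name: echo "I run late"
    - order: last
```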
10:32 Ricardo1000 joined #salt
10:38 Dev0n joined #salt
10:40 smartalek joined #salt
10:43 dxiri joined #salt
10:49 jmiven joined #salt
10:54 Electron^- joined #salt
10:58 onlyanegg joined #salt
10:59 cyborg-one joined #salt
11:01 bluenemo joined #salt
11:55 edrocks joined #salt
12:02 eee_ joined #salt
12:13 dxiri joined #salt
12:19 justanotheruser joined #salt
12:24 cgiroua joined #salt
12:25 cgiroua joined #salt
12:27 justanotheruser joined #salt
12:27 lompik joined #salt
12:28 sjorge joined #salt
12:33 xet7 joined #salt
12:37 zuppa joined #salt
12:53 mavhq joined #salt
12:54 cgiroua joined #salt
12:56 ssplatt joined #salt
12:57 jdipierro joined #salt
12:59 onlyanegg joined #salt
13:00 ecdhe joined #salt
13:02 rawkode joined #salt
13:03 rawkode Afternoon
13:03 kedare joined #salt
13:03 kedare Hi all :)
13:03 kedare Anyone know where to find the Salt minion setup files for Windows for 2017.7 ? The links from the documentation are dead: https://docs.saltstack.com/en/latest/topics/installation/windows.html
13:04 kedare Well I found this one also : https://repo.saltstack.com/windows/Salt-Minion-2017.7.0-Py2-AMD64-Setup.exe
13:04 kedare But maybe the documentation should be updated ?
13:05 zuppa_1 joined #salt
13:08 drawsmcgraw joined #salt
13:12 Slimmons joined #salt
13:13 Slimmons Anybody had any real luck running GUI applications on windows minions without the interactive desktop?
13:24 mbuf joined #salt
13:24 jdipierro joined #salt
13:27 gmoro joined #salt
13:28 thinkt4nk joined #salt
13:28 dxiri joined #salt
13:29 dxiri joined #salt
13:30 cgiroua joined #salt
13:32 high_fiver joined #salt
13:33 Ch3LL kedare: thanks for the heads up. im gonna put in a fix
13:34 cyteen joined #salt
13:37 gmoro_ joined #salt
13:38 kedare Ch3LL, Thanks :)
13:39 ivanjaros joined #salt
13:40 rawkode joined #salt
13:45 racooper joined #salt
13:48 Ch3LL kedare: np here it is https://github.com/saltstack/salt/pull/42452 once thats merged and docs build it will show up
13:51 kedare Great :)
13:56 edrocks joined #salt
13:57 GMAzrael joined #salt
13:57 sjorge joined #salt
14:00 onlyanegg joined #salt
14:05 strobelight joined #salt
14:09 snarked Anyone seen an issue with Windows 2008/2012 VM’s crashing when migrated?  RHEL/CentOS VM’s are fine.  On oVirt 4.1.2
14:09 snarked sorry, wrong channel
14:19 numkem joined #salt
14:26 DarrenLowe joined #salt
14:26 amiskell Is anyone using salt 2017.x and inverted iptables rules?
14:27 amiskell Ever since upgrading to the 2017 release, one of my iptables rules which uses a invert fails because it's trying to use two !'s
14:28 evle joined #salt
14:30 noobiedubie joined #salt
14:31 mpanetta joined #salt
14:34 onlyanegg joined #salt
14:42 mpanetta joined #salt
14:49 sjorge joined #salt
14:49 SalanderLives joined #salt
14:51 gmoro joined #salt
14:55 high_fiver joined #salt
14:59 high_fiver joined #salt
15:01 _JZ_ joined #salt
15:03 tkojames joined #salt
15:06 high_fiver joined #salt
15:08 tkojames So we use sls files for user ssh access to servers. It works well. I am trying to automate the process of adding a new user. I have an external source I want to send from. I got that part figured out. The issue is I have an sls file set up with postdata variables that I passed from my external source. Is it possible to have this sls create a new sls with the data passed to it, so every time the sls gets the post data a new sls is created with the
15:08 tkojames data sent via the api? I am able to pass the data to the sls but I need a unique sls made each time the event is fired. Or is there a better way to do this?
15:08 btorch morning
15:10 sarcasticadmin joined #salt
15:11 hasues joined #salt
15:11 hasues left #salt
15:11 fredric joined #salt
15:11 btorch has anyone experienced this https://pastebin.ca/3845141 ... I think it's some issue with salt/loader.py and perhaps mostly with python2.7/socket.py
15:12 btorch it happens when starting up the salt minion, running states, running commands, syncing grains etc. anytime it loads the grains
15:13 high_fiver joined #salt
15:15 high_fiver joined #salt
15:15 Rick_ joined #salt
15:15 fritz09 joined #salt
15:17 high_fiver joined #salt
15:19 high_fiver joined #salt
15:19 inad922 joined #salt
15:22 astronouth7303 ok, so i'm using salt to do the actual deploy, but it looks like we want a more sophisticated multi-node deployment strategy than just "update everything and pray". How are people managing this and what tools do they use?
15:22 jesusaurum joined #salt
15:23 high_fiver joined #salt
15:23 dfinn joined #salt
15:24 Rick_ joined #salt
15:24 fritz09 joined #salt
15:25 fritz09 joined #salt
15:26 aldevar1 left #salt
15:28 high_fiver joined #salt
15:36 ekristen joined #salt
15:37 sjorge joined #salt
15:37 Electron^- joined #salt
15:47 AvengerMoJo joined #salt
15:48 ivanjaros joined #salt
16:00 zer0def astronouth7303: are you using orchestration? my first idea would be to use `state.orchestrate` with some sort of stagger and batching
16:00 kulty joined #salt
16:00 zer0def i think the salt term for "stagger" is "splay", though
16:00 astronouth7303 yeah, we're currently using orchestration to do all the scripted actions (update the pillar, highstate the right set of minions, etc
16:01 zer0def well, you could run state.sls instead of highstate
16:02 jmickle joined #salt
16:02 astronouth7303 the trick is that i feel like the batcher needs more awareness of healthchecks, instance status, batch group status, deployment successfulness (ie, applying policies detecting if the new version is actually bad), and generally coordinating between monitoring, load balancing, and salt
16:02 astronouth7303 highstate does the right thing on minion level
16:04 hashwagon joined #salt
16:05 astronouth7303 i'm really enjoying salt for doing all the low-level configuration, but i'm not sure what to do about high-level automation
16:05 viq reactors then?
16:05 viq I think thorium has something towards that
16:05 astronouth7303 viq: reactors are stateless, though?
16:05 viq astronouth7303: I believe thorium can be made aware of state
16:05 dxiri joined #salt
16:05 onlyanegg joined #salt
16:06 hashwagon Hey, what's the best way to rename a minion's hostname via salt command or state from master?
16:06 astronouth7303 and a lot of the stuff i'm looking towards doing has state (autoscaling with hysteresis & noise handling; sophisticated deploy strategies with health, monitoring, rollback, etc)
16:06 astronouth7303 hashwagon: network.system if you're on redhat. /etc/hostname if you're on debian.
16:07 viq astronouth7303: https://docs.saltstack.com/en/latest/topics/orchestrate/orchestrate_runner.html#more-complex-orchestration
16:07 debian112 joined #salt
16:07 viq hashwagon: remember you need to change the minion ID as well, on both minion and master, if you want that reflected as well
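astronouth7303's answer could be sketched as a state; the hostname here is a placeholder, and this assumes a systemd distro where `hostnamectl` exists (per viq's point, the minion ID is a separate concern):

```yaml
# set_hostname.sls -- hedged sketch; web01.example.com is hypothetical
set_hostname:
  cmd.run:
    - name: hostnamectl set-hostname web01.example.com
    - unless: test "$(hostname)" = "web01.example.com"
```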
16:07 Tim_ joined #salt
16:08 viq astronouth7303: feel free to tell me I don't know what I'm talking about, as that is true ;) Just some bits and pieces left from reading through various docs
16:08 astronouth7303 viq: that's cool, but doesn't really give me a sense of "fleet-wide coordination"
16:08 zer0def i'm thinking astronouth7303 would make a lot of use from `onchanges`, `onfail` and `prereq` requisites
16:09 astronouth7303 a pretty normal way to manage versions is to set a pillar with the version, right?
16:09 astronouth7303 eg, a commit hash
16:10 edrocks joined #salt
16:11 fredrick_c joined #salt
16:11 zer0def i would say so, it does fit fairly nicely into the role
16:11 zer0def if you'd like to revert on particular hosts, i'd probably just associate `onfail` states that do the rollback
16:12 fredrick_c Hi
16:12 fredrick_c salt-master 2016.3.3 (Boron) how do I have a file root for different branches of git?
16:16 vishvendra joined #salt
16:18 svij3 joined #salt
16:20 astronouth7303 zer0def: 1. failure doesn't just mean "deploy failed", could also mean "this version is bad" or a few other things. 2. How does salt know what version to rollback to?
16:21 astronouth7303 all salt knows is the version it was trying to obtain
16:21 jmickle joined #salt
16:22 frew I have some files that I don't actually need to exists, but would be good if they did, that are prerequisites for some states
16:22 zer0def astronouth7303: i used to abuse the fact that jinja ran prior to states being executed to obtain current sha on a given machine
16:24 scarcry joined #salt
16:24 zer0def ran an execution modules within jinja, to be specific
16:25 mikecmpbll joined #salt
16:25 fatal_exception joined #salt
16:27 astronouth7303 but it's not like you can recurse within salt to actually implement the rollback
16:27 astronouth7303 whence, wanting an external system to manage it
16:29 astronouth7303 also, still only covers actual deployment failures
16:30 hashwagon Can the salt master pass a set variable from a shell script to a minion? like -- salt 'myminion' cmd.run "hostnamectl set-hostname $systemname"  ? It doesn't  seem to work for me.
16:30 zer0def astronouth7303: that depends on your deployment process, and if we're talking recursion, my thoughts immediately go to using the event bus and reactors, although there are usually workarounds to make your process non-recursive
16:31 astronouth7303 "recursion" in this case means "deploying the new version failed, re-run deployment with the old version"
16:31 astronouth7303 because you still have to do a bunch of secondary things as part of deployment (reload services, regenerate files, etc)
16:31 zer0def re-run it globally or just on the machine that failed?
16:31 astronouth7303 depends on policies and nature of the failure?
16:33 astronouth7303 i think it would be not unreasonable to say "deploy failed, nuke the instance"
16:33 zer0def if it's on the machine that failed, you could probably get away with `onfail` states; if globally, i would send an event and setup a reactor that would queue anotherrunner module execution
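zer0def's per-machine `onfail` idea could look roughly like this; the deploy script path and pillar keys are hypothetical, and as discussed above this only covers actual deployment failures, not "the new version is bad" policy decisions:

```yaml
deploy_app:
  cmd.run:
    - name: /usr/local/bin/deploy.sh {{ pillar.get('app_version') }}

# runs only if deploy_app fails on this minion
rollback_app:
  cmd.run:
    - name: /usr/local/bin/deploy.sh {{ pillar.get('previous_version') }}
    - onfail:
      - cmd: deploy_app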
16:36 cro joined #salt
16:42 beardedeagle joined #salt
16:43 vishvendra1 joined #salt
16:47 davidtio joined #salt
16:49 AvengerMoJo joined #salt
16:50 astronouth7303 thorium actually looks like it can do this kind of logic, although the "provisional" status isn't encouraging
16:51 astronouth7303 i also wonder if it'd just be simpler to use the py renderer in a reactor
16:54 zer0def i'm not quite sure about your use case, but you can run at least execution and runner modules from within a reactor sls' templating
16:55 MTecknology The best thing to do with the reactor is shift things to an orchestrate event unless what you're doing is really simple
16:55 zer0def but this roughly equates to running thorium
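MTecknology's "reactor hands off to orchestrate" pattern, sketched with a hypothetical event tag and file paths (reactor syntax as of 2017-era releases):

```yaml
# master config: map an event tag to a reactor SLS
reactor:
  - 'myapp/deploy/failed':
    - /srv/reactor/rollback.sls

# /srv/reactor/rollback.sls: keep the reactor dumb and hand the
# real logic to an orchestration runner
start_rollback:
  runner.state.orchestrate:
    - mods: orch.rollback
```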
16:56 astronouth7303 yeah, it looks like i can use thorium registers to track everything
16:57 MTecknology zer0def: thorium was/is meant to be a successor, eliminating the old problems
16:57 astronouth7303 but it crossed my mind that it might be easier to track all the state within real python and just use something like https://yorm.readthedocs.io/en/latest/ to save state
16:58 zer0def MTecknology: figured as much, just haven't gotten around to using it yet :)
16:58 MTecknology I haven't either..
17:03 frew Is there a way to define a requisite that is basically a generic onlyif ?
17:04 MTecknology on_changes?
17:05 frew well I don't want to pollute the normal case with phantom changes
17:05 zer0def i wouldn't consider "onchanges" as "onlyif", since it really depends on the "if"
17:05 frew you know what I mean?
17:05 zer0def frew: then do reverse requisites?
17:06 zer0def as in `onchanges_in`, `watch_in` or `require_in`, depending on your particular case
17:06 frew oh good idea
17:06 frew so the "if" thing requires the other stuff to run
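The reverse-requisite idea zer0def describes, sketched (state IDs, paths and service name are hypothetical): the gating state injects itself into the dependent state's requisites via the `_in` form:

```yaml
config_file:
  file.managed:
    - name: /etc/app/app.conf
    - source: salt://app/app.conf
    # equivalent to app_service declaring "watch: config_file",
    # but written from the "if" side:
    - watch_in:
      - service: app_service

app_service:
  service.running:
    - name: app
```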
17:06 frew I still can't quite keep the requisite stuff in my head
17:08 whytewolf then keep the requisite page open when ever you are using them for reference
17:08 frew oh I have it open :)
17:08 frew it's more that because I don't have it deeply in my brain, the things I can do with them are always a kind of surprise
17:08 frew (a pleasant surprise but still)
17:08 zer0def well, salt is quite broad in its scope, i have tens of tabs with references to different components
17:09 frew yeah
17:09 whytewolf i'm just really good with google :P
17:09 frew hah
17:09 vexati0n joined #salt
17:09 zer0def and i still happen to ask for things that are hidden away in parts of the docs and don't remember to google-fu with site:docs.saltstack.com
17:09 whytewolf and tend to jump between the docs and the code.
17:11 Guest73 joined #salt
17:20 woodtablet joined #salt
17:21 woodtablet whytewolf: i am mentioning this because of what you said about Jeff Sessions yesterday, and because its crazy, Sean Spicer resigned this morning!
17:22 woodtablet i literally just logged in to tell you this =D lol
17:22 whytewolf yeah i saw.
17:22 whytewolf lol
17:25 xet7 joined #salt
17:26 kerrick joined #salt
17:28 edrocks joined #salt
17:31 fredrick_c salt-master 2016.3.3 (Boron) how do I have a file root for different branches of git?
17:32 whytewolf fredrick_c: file root? you mean the gitfs_root setting? or just multiple git repos in the same enviroment?
17:33 astronouth7303 i figured having salt://master pointed to @master, salt://dev pointed to @dev
17:34 whytewolf okay so mount point then?
17:34 high_fiver joined #salt
17:35 whytewolf kind of need a little help helping you as file_root actually doesn't mean anything in gitfs
17:37 whytewolf hummm now that i think about it. there is no clean way to map environments to other branches. the only map i know about is to base
17:41 swa_work joined #salt
17:46 Inveracity joined #salt
17:48 high_fiver joined #salt
17:53 high_fiver joined #salt
17:54 fredrick_c ok so file_root is not correct how do I do the same in gitfs?
17:54 edrocks joined #salt
17:55 whytewolf what are you trying to accomplish
17:55 WN1188 joined #salt
17:55 whytewolf the dev branch pointing at the dev enviroment?
17:56 fredrick_c in top trying to {% if (grains['saltenv'] == 'qa')%}
17:56 whytewolf ...
17:57 astronouth7303 so linking the saltenv to the git branch
17:57 whytewolf first. unless you are setting that grain it doesn't exist ever.
17:57 whytewolf it is just saltenv no grains info
17:58 fredrick_c I am setting the grain.
17:58 whytewolf ok
17:58 fredrick_c but when I try a salt-run fileserver.file_list saltenv=qa
17:58 fredrick_c it has nothing
17:58 fredrick_c for the qa branch
17:59 astronouth7303 grains['saltenv'] isn't linked to the actual saltenv
17:59 astronouth7303 it's just a grain that happens to be named saltenv
17:59 astronouth7303 jinja templating won't change what is/isn't available from the salt fileserver
18:00 fredrick_c interesting, but the other environments production/test work
18:00 astronouth7303 how are your gitfs_remotes set?
18:01 fredrick_c Just to the repo
18:01 astronouth7303 by default, saltenvs are mapped to branch names
18:02 astronouth7303 (base -> master)
18:02 whytewolf which is all you should need. do you have a qa branch?
18:02 fredrick_c I do
18:02 fredrick_c oh crud I see it somone did G instead of Q
18:02 fredrick_c Thanks
18:03 astronouth7303 whytewolf: for the record, https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#per-saltenv-configuration-parameters looks like you can set some fairly complex mount point and mapping configurations
18:03 whytewolf welp, love it when they end up being something simple like that
18:03 whytewolf astronouth7303: yeah forgot about the saltenv setting
18:04 whytewolf but that is only in 2016.11 and up
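For the record, the default mapping discussed here is simply branch name == saltenv, with `base` mapped to `master`. A minimal sketch with a hypothetical repo URL:

```yaml
# master config sketch
fileserver_backend:
  - gitfs

gitfs_remotes:
  - https://example.com/org/salt-states.git
# branches master, qa, dev become saltenvs base, qa, dev, so
#   salt-run fileserver.file_list saltenv=qa
# lists files from the qa branch (assuming that branch exists).
```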
18:04 WN1188 joined #salt
18:04 fredrick_c Yes thank you, I hate it.  But at least I did not have to upgrade to fix.
18:04 thinkt4nk joined #salt
18:07 WN1188 left #salt
18:10 WN1188 joined #salt
18:13 wn1188_ joined #salt
18:20 kerrick_ joined #salt
18:21 beardedeagle joined #salt
18:23 mpanetta joined #salt
18:26 kerrick_ joined #salt
18:30 cyborg-one joined #salt
18:32 overyander joined #salt
18:38 ecdhe joined #salt
18:46 jumble_ joined #salt
18:48 mpanetta joined #salt
18:58 WN1188 i've got salt-cloud working great w/ digital ocean. trying to get things going w/ gce and am running into auth issues.
18:59 WN1188 any salt-cloud / gce users around?
18:59 druonysuse joined #salt
18:59 druonysuse joined #salt
18:59 druonysus joined #salt
19:03 ivanjaros joined #salt
19:06 debian112 joined #salt
19:06 Aikar joined #salt
19:08 WN1188 anybody?
19:08 WN1188 left #salt
19:13 WN1188 joined #salt
19:14 WN1188 hello?
19:14 WN1188 left #salt
19:16 Edgan WN1188: I don't use salt-cloud since it just does instances, and load balancers in the case of GCE. There is a working group that is effectively going to rewrite salt-cloud to do way more.
19:17 ChubYann joined #salt
19:21 edrocks joined #salt
19:23 coredumb Edgan: like what?
19:23 dxiri guys, quick question, is there a way to set the hostname on a VM to be the same as the name that shows on virsh list?
19:24 swills joined #salt
19:24 swills joined #salt
19:25 WN1188 joined #salt
19:25 WN1188 hello?
19:27 astronouth7303 dxiri: depends on your VM and how you're spinning them up
19:28 whytewolf coredumb: the goal is to rework the salt-cloud system to be more flexible. as well as use modules actually in salt instead of being a separate sub system.
19:28 noraatepernos joined #salt
19:28 WN1188 anyone around to discuss salt-cloud gce auth issues?
19:29 whytewolf WN1188: it helps if you actually post errors you are seeing. and what you have done to try figuring out what the issue is with your auth issues.
19:31 Aikar hello, is there any way to stop salt 2016 from using 2017 on minions...
19:32 Aikar were still on 2016, but when we just provisioned a new machine, it used a 2017 minion which doesnt work with 2016 master
19:32 whytewolf Aikar: proper use of https://docs.saltstack.com/en/latest/topics/cloud/deploy.html#deploy-script-arguments and knowing how the bootstrap options work
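The fix Aikar confirmed below amounts to pinning the bootstrap script to the master's release line via `script_args` in the cloud profile. Profile and provider names here are hypothetical; the `stable <version>` form is a salt-bootstrap argument:

```yaml
# cloud profile sketch
web-server:
  provider: my-ec2-provider
  script: bootstrap-salt
  script_args: stable 2016.11
```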
19:33 WN1188 sure. here you go: https://gist.github.com/WN1188/4cf947acafc4034998732c389cfd5cd2
19:33 WN1188 i've got a salt master w/ salt-cloud using digital ocean with no problems at all. running into the above error when using gce as a cloud provider.
19:34 whytewolf i don't see an error
19:34 whytewolf i see a oauth setup
19:36 WN1188 i've followed the steps outlined at https://docs.saltstack.com/en/latest/topics/cloud/gce.html
19:37 Edgan coredumb: For AWS, use the existing salt boto cloud to do as much as possible in a more salt-cloud like way
19:37 whytewolf ok ... have you gone to the website that is posted in the link and then entered the code that website gives you into the part that says enter code?
19:39 whytewolf Edgan: don't forget the plan to create a new type of "module" and move the boto_* modules and any other cloud style module into that so that those modules can be used as runners/execution modules/cloud modules etc.
19:39 oida_ joined #salt
19:40 dxiri astronouth7303: VM is centos7 with minion installed, using virt.init to spin it up
19:41 astronouth7303 what's your VM system?
19:42 dxiri KVM
19:42 astronouth7303 dxiri: i would expect that the instance name is whatever name you gave virt.init
19:42 WN1188 yes, the url is a link to a 400 error: redirect_uri_mismatch and directions to set up an oauth consent screen.
19:43 dxiri astronouth7303: I thought so as well, but no :( all vms come out as localhost.localdomain
19:43 whytewolf oauth consent screen is what you are looking for
19:43 WN1188 but this is for a service account. the docs make zero mention of this.
19:43 astronouth7303 oh, the actual hostname? Depends on how you're initializing the image. You can use states to set it, or install-time scripts, or ....
19:44 astronouth7303 (i use salt-cloud, which handles some of this for you)
19:45 Aikar whytewolf: thanks, that worked
19:45 Aikar why doesn't salt automatically use the master's version?
19:45 Aikar though I guess salt-cloud doesn't know for sure if it has a master :/
19:46 drawsmcgraw left #salt
19:48 whytewolf Aikar: because salt-cloud doesn't have to run on the master. it can be run anywhere, independent of salt
19:49 debian112 joined #salt
19:50 whytewolf WN1188: humm. I am finding references that say the gce docs are out of date, with a link to new docs that returns a 404
19:52 whytewolf this is kind of old also but might help https://github.com/GoogleCloudPlatform/compute-video-demo-salt/
19:52 wavded joined #salt
19:53 WN1188 yeah, i've seen that. thanks
19:53 wavded For a particular Jinja file, I would like to get the IP addresses (in grains) of all the servers matching a particular grain.  salt[grains.get] appears to just get my server, what is the recommended way to do that?
19:54 wavded my server = the minion currently executing
19:54 whytewolf wavded: https://docs.saltstack.com/en/latest/topics/mine/
19:54 xet7 joined #salt
19:54 wavded whytewolf: ok, i did look at that, apparently I would need to configure all the minions to export a function that does that?
19:55 wavded in `mine_functions`
19:55 whytewolf wavded: yes. i recommend pillar for that, as it saves having to restart the minion
19:55 astronouth7303 i do that, too. the biggest hassle is unrolling lists
19:57 wavded recommend pillar for which part?  Storing the IP addresses?  or storing mine functions?
19:57 whytewolf the mine function
19:57 wavded ok gotcha
19:57 whytewolf one sec looking for a gist that kind of gives a short cut
19:57 noraatepernos joined #salt
19:57 wavded ok thx
19:57 whytewolf man i need to clean up my gists
19:58 WN1188 whytewolf: any suggestions on additional debugging? this works wonderfully with digital ocean. i've just got a real need to use gce.
19:58 whytewolf WN1188: -l debug
19:58 whytewolf and -l trace
19:59 whytewolf on any command you run with salt will give more detailed info about what it is doing
19:59 whytewolf wavded: https://gist.github.com/whytewolf/eff4a15f0eaa8d5354a3
19:59 smartalek joined #salt
20:01 whytewolf [that gist is kind of old; they have added a couple of helpful features to mine for debugging since then]
20:02 whytewolf such as mine.valid, which actually lists the mine_functions you have
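[editor's note: the pillar-driven mine setup described above can be sketched as follows. Interface and grain names are illustrative; note that older Salt releases spell the target-type argument `expr_form` rather than `tgt_type`.]

```yaml
# Pillar fragment (assumed layout): every minion receiving this
# pillar publishes its IPs to the mine, with no minion restart.
mine_functions:
  network.ip_addrs:
    - eth0

# In a Jinja template, collect those IPs from all minions matching
# a grain; mine.get returns a dict of minion ID -> list of IPs:
# {% for minion, ips in salt['mine.get']('role:web', 'network.ip_addrs', tgt_type='grain').items() %}
#   {{ minion }}: {{ ips | join(', ') }}
# {% endfor %}
```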
20:03 DammitJim joined #salt
20:06 kerrick_ joined #salt
20:07 WN1188 whytewolf: thanks. https://gist.github.com/WN1188/dbdc88aa1c2d1f5c72a80abaac0ec0de & https://gist.github.com/WN1188/738b31b73688dafa9ba76537c2bc0b42
20:10 dxiri astronouth7303: can you elaborate on those options?
20:11 astronouth7303 set up a pillar for `network.ip_addrs`. I do this as a poor-mans service-discovery
20:11 dxiri with a state, how would I do that (set the hostname from the vm name)?
20:12 astronouth7303 I'm using variations on salt['mine.get']('*', 'network.ip_addrs') all over my state, but it returns a dict mapping minion ID -> list of IPs
20:12 astronouth7303 if your minion is redhat, use network.system. On debian, set the /etc/hostname, /etc/mailname, and kick /etc/init.d/hostname.sh
20:13 astronouth7303 on other stuff, dunno
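[editor's note: the RedHat route astronouth7303 mentions can be sketched with the network.system state. This is an untested example; the `apply_hostname` flag is assumed to be available in the Salt version in use.]

```yaml
# Set the minion's hostname from its minion ID on a RedHat-family
# box; apply_hostname also applies it live instead of only writing
# the sysconfig file.
set-hostname:
  network.system:
    - enabled: True
    - hostname: {{ grains['id'] }}
    - apply_hostname: True
```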
20:13 wavded whytewolf: thanks much. out of curiosity, why isn't querying multiple minions via grains.get supported?
20:13 whytewolf WN1188: did you set up the p12 as a .pem file, and have that in service_account_private_key: "<location of pem file>"?
20:14 whytewolf wavded: because grains is only ever meant to be local data.
20:14 astronouth7303 wavded: grain.get doesn't query minions. It runs on a minion and asks the local grain set. Try the salt mine instead.
20:14 Guest73 joined #salt
20:14 kerrick_ joined #salt
20:14 thinkt4n_ joined #salt
20:15 whytewolf WN1188: also, what version of libcloud are you using?
20:17 whytewolf wavded: mine also enables a lot more sharing than grains alone would ever provide.
20:19 raspado joined #salt
20:19 WN1188 whytewolf: python-libcloud                       0.20.0-1
20:20 raspado do cloud profiles support jinja  statements?
20:20 whytewolf WN1188: okay. then you should be able to use the json formatted p12; however it still supports the pem format, which might be a better option. either way, the service_account_private_key needs to point to it
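[editor's note: for reference, the GCE provider block under discussion looks roughly like this, per the linked salt-cloud GCE docs. Project ID, email, and key path are placeholders; `service_account_private_key` must point at the converted .pem (or JSON) key file.]

```yaml
# /etc/salt/cloud.providers.d/gce.conf (example values)
gce-config:
  driver: gce
  project: "my-project-id"
  service_account_email_address: "1234567890@developer.gserviceaccount.com"
  service_account_private_key: "/etc/salt/my-gce-key.pem"
```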
20:21 raspado sorry, heres a better example, is this possible? https://gist.github.com/anonymous/23bec6090c53516b175487e90df60e0b
20:22 dxiri joined #salt
20:22 WN1188 it fails in the same fashion with the pem file.
20:22 whytewolf raspado: I do not think it currently supports rendering
20:22 raspado darn ok thx whytewolf
20:22 astronouth7303 you'd have to do different profiles for each combination
20:24 whytewolf WN1188: try creating a new service account user using google's own docs. https://developers.google.com/identity/protocols/OAuth2ServiceAccount and try that account
20:24 WN1188 whytewolf: i'll try it. thanks.
20:26 dxiri astronouth7303: thanks! will try that out!
20:30 aldevar joined #salt
20:37 Hybrid joined #salt
20:37 dxiri joined #salt
20:40 WN1188 whytewolf: that did it. i purged the service account and saw that there was a default compute engine service account. i used that one instead. this must have been an issue related to permissions associated w/ the service account.
20:41 whytewolf WN1188: i was thinking that [which was why i suggested the google docs and maybe creating a new user]
20:43 Cottser joined #salt
20:50 WN1188 whytewolf: very much appreciated.
20:54 kerrick_ joined #salt
20:57 GMAzrael_ joined #salt
21:00 kerrick_ joined #salt
21:06 rpburkholder joined #salt
21:06 rpb joined #salt
21:12 kerrick joined #salt
21:15 justanotheruser joined #salt
21:17 _KaszpiR_ joined #salt
21:30 mt5225 joined #salt
21:31 mt5225 joined #salt
21:32 dxiri joined #salt
21:37 Ryan_Lane folks from saltstackinc, have we considered using projects in the salt repo?
21:37 Ryan_Lane I really wouldn't mind having a project for the boto modules
21:48 jimklo joined #salt
22:10 CampusD joined #salt
22:15 avinson joined #salt
22:17 avinson left #salt
22:17 avinson joined #salt
22:17 avinson stuck on this one for a while.. trying to use https://github.com/carlpett/salt-vault and getting "No module named crypt"
22:17 GMAzrael joined #salt
22:18 avinson but crypt is definitely available
22:34 kerrick_ joined #salt
22:36 kerrick_ joined #salt
22:38 onlyanegg joined #salt
22:41 WildPikachu joined #salt
22:42 WildPikachu Hi guys, I'm looking into storing minion config in pillars, is there a preferred way to create one sls file per minion and have like 2-3 lines of config in the pillar top.sls to load the minion sls file?
22:44 WildPikachu I can probably use jinja in the top.sls file and use import, but finding the variable with the name of the minion is my problem
22:45 WildPikachu I looked at grains.id , but a grain can be modified on the minion machine, right? even after the key is accepted
22:45 MTecknology WildPikachu: that depends on what you want to achieve but that is definitely a potential solution
22:45 MTecknology grains['id'] is kinda special and probably won't change, but you can also use salt.match.glob() instead of testing grains.id
22:46 WildPikachu MTecknology, hello again! :)
22:48 MTecknology hello
22:50 whytewolf i think what you are looking for is something like this
22:50 whytewolf https://gist.github.com/whytewolf/ffdfa467810d2714214bf6054a6eb008
22:51 viq I was about to paste https://pbot.rmdir.de/EzbTXdHLmGfcQFkkMc4_QA ;)
22:51 WildPikachu whytewolf, you are a star!
22:51 WildPikachu my only concern is if id can be changed in minion_id on the minion
22:52 viq WildPikachu: but then you'd need to accept the change on the master as well
22:52 whytewolf how often do you do that?
22:52 WildPikachu perfect viq
22:52 WildPikachu whytewolf, I don't ... but if some twit does I wouldn't want it pulling the wrong pillar data :)
22:52 WildPikachu if it must be accepted again on salt master, then its perfectly ok
22:52 kerrick_ joined #salt
22:52 whytewolf the one i posted still targets based on minion_id, so if the minion_id doesn't match grains['id'] then you don't get a leak of data
22:53 WildPikachu thanks whytewolf :)
22:53 viq Ah, so mine would?
22:54 whytewolf viq: yeap, since you are matching on '*'
22:54 viq interesting, thanks
22:54 kerrick_ joined #salt
22:54 viq Though I don't get how salt.grains.get('id') is different
22:55 whytewolf it isn't
22:55 whytewolf it is that i put it in both the file get and the target
22:55 viq aah
22:55 WildPikachu target matches on minion id, grain matches on grain id ... its really clever :)
22:56 viq whytewolf: nice, thank you
22:56 rpb joined #salt
22:56 whytewolf and i missed a }}
22:56 WildPikachu yea :D
22:57 viq "left as exercise to the reader" ;)
22:57 whytewolf although viq i do like the replace . with _
22:57 viq Yeah, since salt doesn't like . in names
22:57 WildPikachu in names of sls files viq ?
22:57 viq yeah
22:58 viq Since it uses . as path separator
22:58 WildPikachu *thinks*, I am sure I can string replace that with something else
22:58 whytewolf yeah, to add it to mine you would add the .replace to the second one, not the one in the target
22:58 viq So say /srv/salt/mysql/cluster/master.sls you'd call ass mysql.cluster.master
22:59 MTecknology as*
22:59 WildPikachu | replace('.','_')   should work, right?
22:59 viq And I think if you'd have dir/some.path.with.dots.sls salt would just get confused at dir.some.path.with.dots - on the other hand, I've seen people use / in sls paths, "it didn't work in my times" ;P
23:00 edrocks joined #salt
23:00 MTecknology file.name.sls usually works fire
23:00 MTecknology fine*
23:00 viq hm
23:01 viq anyway, I should be asleep, g'night y'all
23:01 MTecknology I only know that because of $client
23:01 WildPikachu well, nothing like testing pushing it live :)
23:01 MTecknology env: ProTest
23:01 whytewolf g'night viq
23:02 whytewolf and yes WildPikachu replace is a valid jinja filter so that should work
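[editor's note: the pattern worked out in this exchange can be sketched as a pillar top.sls. File layout (`hosts/`) is an assumed example; the key point is that the target still matches on the raw minion ID, while the included file name swaps dots for underscores so salt's `.`-as-path-separator doesn't get confused.]

```yaml
# pillar top.sls: one sls per minion, e.g. minion "web01.example.com"
# loads hosts/web01_example_com.sls
base:
  "{{ grains['id'] }}":
    - hosts.{{ grains['id'] | replace('.', '_') }}
```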
23:02 WildPikachu testing dots first :D
23:03 cyteen joined #salt
23:05 WildPikachu tab vs. space will always be the bane of my life ;)
23:06 whytewolf i generally use 2 spaces in everything. but i do get how that annoys some.
23:06 WildPikachu vim expandtab ftw
23:06 whytewolf yeap
23:07 WildPikachu yea, so using a hostname like this    aa-01-bbb.cc.de12345.fff   (.sls) looks like it fails to find the pillar
23:09 whytewolf yeah that would try and look in {pillar_root}/aa-01-bb/cc/de12345/fff/init.sls [or fff.sls]
23:09 MTecknology I'm on hold, I may as well fumble on over to my work laptop and see what they do there.
23:09 WildPikachu *thinks*
23:11 snarked_ joined #salt
23:12 Guest73 joined #salt
23:15 WildPikachu thanks again guys :)
23:15 whytewolf np
23:18 Guest73 joined #salt
23:20 MTecknology I lied, $client also s/./_/
23:22 whytewolf one of the more interesting things i saw with the dot thing and fqdn's was reversing the order and building a path hierarchy. but i don't remember the syntax
23:23 hemebond split, reverse, join
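[editor's note: hemebond's split/reverse/join idea as a Jinja one-liner, hedged reconstruction since whytewolf didn't recall the exact syntax. A minion ID like `web01.dc1.example.com` becomes the include `com.example.dc1.web01`, i.e. the file `{pillar_root}/com/example/dc1/web01.sls`.]

```yaml
# pillar top.sls: build a reversed path hierarchy from the fqdn
base:
  '*':
    - {{ grains['id'].split('.') | reverse | join('.') }}
```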
23:26 WildPikachu for states, if I have      - test/abc     will it look for test/abc.sls first? then  test/abc/init.sls?
23:28 whytewolf humm, i do not remember the order, and have always avoided having 2 files that matched the same method
23:28 whytewolf i think you have it correct. but don't quote me
23:31 wedgie joined #salt
23:31 MTecknology ya, that's correct
23:31 WildPikachu ok, now to figure out how if I can override pillar values in other pillars, in the same matched host, or if I must use an _override variable
23:32 MTecknology makes it easy to mkdir foo; mv foo.sls foo/init.sls and then start adding more crap
23:32 WildPikachu yea, I normally put all my jinja2 templates, if there are any, in their own dir to keep things nice and neat
23:32 MTecknology WildPikachu: you were already given an example of doing that for defaults
23:32 MTecknology you make formula-style stuff.
23:33 MTecknology I keep files in a separate git repo that mostly matches the file system
23:33 MTecknology https://gist.githubusercontent.com/MTecknology/7ecdae1440b06b60e7cec5c6723a04d8/raw/6091d84eba5a5edcaf1250adddd0e05432559772/salt_structure
23:35 WildPikachu MTecknology, I'll read back
23:42 WildPikachu ah MTecknology I think my use case is slightly different, I am setting say variable 'aaa' first in the pillar, then redefining it in a pillars/hostname.sls file :), I'll change how I implement it
23:45 MTecknology I've learned a lot about people that use phrases like that.
23:45 WildPikachu hopefully good things :)
23:46 MTecknology nope
23:46 WildPikachu I shall read the docs further then
23:52 druonysus_ joined #salt
23:53 WildPikachu https://github.com/saltstack/salt/issues/3991  <= has some workarounds for what I'm trying to do :)
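[editor's note: one common workaround for the defaults-then-override problem, in the formula style MTecknology alluded to, is to merge a defaults dict with per-host pillar data in the state itself rather than redefining the same pillar key twice. All names below are illustrative; `merge=True` on `pillar.get` is assumed to be available in the Salt version in use.]

```yaml
{# defaults live in the state; pillars/hostname.sls only overrides keys #}
{% set defaults = {'port': 8080, 'workers': 4} %}
{% set conf = salt['pillar.get']('myapp', default=defaults, merge=True) %}

myapp-config:
  file.managed:
    - name: /etc/myapp.conf
    - contents: |
        port={{ conf.port }}
        workers={{ conf.workers }}
```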