
IRC log for #salt, 2017-02-01


All times shown according to UTC.

Time Nick Message
00:03 nZac joined #salt
00:17 dendazen joined #salt
00:23 __number5__ joined #salt
00:25 eThaD joined #salt
00:26 tercenya joined #salt
00:26 eseyman joined #salt
00:47 eThaD joined #salt
00:52 beardedeagle joined #salt
01:05 edrocks joined #salt
01:06 MTecknology how2do state.is_in_highstate ??!
01:07 MTecknology I'm wanting to run " salt '*' something " to figure out if the service: readahead state would be part of the highstate execution... drawing a blank to figure this one out
01:08 cyborg-one joined #salt
01:10 xbglowx joined #salt
01:17 whytewolf salt '*' state.show_highstate | grep thingamabob?
01:18 MTecknology whytewolf: wouldn't that only get me the lines where that thingamabob is present and not include the host?
01:18 whytewolf that is correct.
01:19 whytewolf humm. salt '*' cmd.run 'salt-call state.show_highstate | grep thingamabob' [not sure if that will work]
01:20 whytewolf might need a python_shell= true or something like that.
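
A rough sketch of the two approaches whytewolf outlines above; 'thingamabob' is just a placeholder for the state of interest, and python_shell=True is needed because of the pipe:

    # grep the rendered highstate from the master (loses the minion id on matching lines)
    salt '*' state.show_highstate | grep thingamabob

    # or run the grep on each minion, so every match stays grouped under its minion id
    salt '*' cmd.run 'salt-call state.show_highstate | grep thingamabob' python_shell=True
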
01:25 MTecknology ouchy...
01:26 MTecknology whytewolf: 1) thanks! 2) proof I'm frustrated with this env 3) off to tomorrowland!!!
01:27 whytewolf yay tomorrowland!
01:28 eThaD joined #salt
01:32 skullone did you guys see Gitlabs disaster?
01:32 skullone https://docs.google.com/document/d/1GCK53YDcBWQveod9kfzW-VCxIABGiryG7_z_6jHdVik/pub
01:32 skullone they deleted their DB :o
01:33 k_sze[work] joined #salt
01:34 whytewolf did not see that...
01:37 juanito joined #salt
01:39 Eugene Oh no shit
01:41 Eugene And this is why you use orchestration tools
01:49 whytewolf yeap
01:50 eThaD joined #salt
01:52 debian112 joined #salt
01:52 whytewolf although this line speaks volumes "So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place."
01:52 whytewolf setup backups.... but never test if they can be restored?
01:56 brokensyntax joined #salt
02:03 dendazen joined #salt
02:04 cryptolukas joined #salt
02:05 austin_ joined #salt
02:06 WesleyTech joined #salt
02:06 austin_ i need to be able to store passwords. obviously pillars are the way. however, it still means i need to back up those passwords somewhere. what are people doing to handle that ?
02:07 austin_ i have to assume even with gpg, you'd not want to push those to gh
02:11 austin_ well... it seems it is in fact ok according to some quick google hits
02:11 austin_ anyone have any further thoughts on that ?
02:11 whytewolf well, you could always use an internal git repo instead of github.
02:12 debian112 joined #salt
02:12 whytewolf or a database
02:13 austin_ true. valid point
02:14 austin_ i guess i could also add something on s3
02:14 austin_ ugh. security. so much work
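
One common answer to austin_'s question is Salt's gpg renderer, so only ciphertext is ever committed; a minimal sketch, assuming the keyring lives on the master (the pillar key and file name are made up):

    # /srv/pillar/secrets.sls
    #!yaml|gpg

    db_password: |
      -----BEGIN PGP MESSAGE-----
      (ciphertext produced with: gpg --armor --encrypt -r <master key id>)
      -----END PGP MESSAGE-----
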
02:24 rpb joined #salt
02:31 eThaD joined #salt
02:32 Nahual joined #salt
02:33 druonysus_ joined #salt
02:34 evle joined #salt
02:37 rpb joined #salt
02:46 debian112 joined #salt
02:48 ilbot3 joined #salt
02:48 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.5, 2016.11.2 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ (please don't multiline paste into channel) <+> See also: #salt-devel, #salt-offtopic <+> Ask with patience as we are volunteers and may not have immediate answers
03:03 Aleks3Y joined #salt
03:04 debian112 joined #salt
03:10 Eugene Backups are useless. Restores are what matter
03:10 Eugene It sounds like they're learning a lot about orchestration of large-scale systems, the hard way
03:13 eThaD joined #salt
03:23 aw110f joined #salt
03:26 aw110f_ joined #salt
03:35 sagerdearia joined #salt
03:35 eThaD joined #salt
03:36 skullone whats crazy - postgres has some of the best and easiest backup and recovery options of all the open source databases
03:36 skullone WAL archiving, lockless backups, streaming replication, replication reservations, and so on
03:48 mavhq joined #salt
03:49 cacasmacas joined #salt
03:56 onlyanegg joined #salt
03:59 cacasmacas joined #salt
04:01 brail joined #salt
04:05 mosen joined #salt
04:07 edrocks joined #salt
04:16 eThaD joined #salt
04:28 beardedeagle joined #salt
04:37 beardedeagle joined #salt
04:38 eThaD joined #salt
04:40 ravi____ joined #salt
04:48 rdas joined #salt
04:50 cacasmacas joined #salt
04:51 armyriad joined #salt
05:19 eThaD joined #salt
05:32 BlackMaria joined #salt
05:41 preludedrew joined #salt
05:55 impi joined #salt
06:00 barmaley joined #salt
06:04 netcho joined #salt
06:17 beardedeagle joined #salt
06:28 mavhq joined #salt
06:28 Gnomethrower Re: GitLab incident - ouch!!
06:28 Gnomethrower always test your backups...
06:31 mavhq joined #salt
06:34 denkijin joined #salt
06:40 lord2y joined #salt
06:53 netcho_ joined #salt
06:59 Praematura joined #salt
07:00 bocaneri joined #salt
07:01 netcho_ joined #salt
07:20 mpanetta joined #salt
07:25 eThaD joined #salt
07:30 felskrone joined #salt
07:33 sh123124213 joined #salt
07:39 netcho_ gj gitlab
07:45 cyborg-one joined #salt
07:45 JohnnyRun joined #salt
07:47 eThaD joined #salt
07:47 impi joined #salt
07:53 aw110f joined #salt
07:56 felskrone joined #salt
07:57 sh123124213 joined #salt
08:03 sh123124213 joined #salt
08:04 CrummyGummy joined #salt
08:08 usernkey joined #salt
08:14 aw110f_ joined #salt
08:17 Rumbles joined #salt
08:17 sh123124213 joined #salt
08:21 NV joined #salt
08:25 teclator joined #salt
08:27 mikecmpbll joined #salt
08:28 samodid joined #salt
08:28 eThaD joined #salt
08:31 eThaD joined #salt
08:37 ivanjaros joined #salt
08:57 lasseknudsen2 joined #salt
08:58 mikecmpbll joined #salt
09:07 netcho_ joined #salt
09:09 edrocks joined #salt
09:16 Rumbles joined #salt
09:25 Dominik_ joined #salt
09:26 BigSafari joined #salt
09:27 BigSafari left #salt
09:30 netcho joined #salt
09:32 toanju joined #salt
09:39 Cadmus Yikes, I feel sorry for the person who made a screwup under sudo, hands up if you've never done that *keeps own hands down*
09:40 impi joined #salt
09:46 Reverend yeah man
09:46 Reverend someone may have just lost their job
09:46 Reverend we just witnessed the death of someones career
09:46 Cadmus Back to salt, I'm writing a statement to install and configure a service, all bog standard stuff. But part of it changes if another set of statements are installed, is there a way to say {% if minion.has_statement("foo") %} sort of thing?
09:50 s_kunk joined #salt
09:52 s_kunk joined #salt
09:52 s_kunk joined #salt
09:52 ReV013 joined #salt
09:59 Firewalll joined #salt
10:01 netcho_ joined #salt
10:11 Reverend has statement?
10:12 Reverend oh
10:12 Reverend that was a pseudo example haha. \facepalm
10:13 Cadmus Reverend: Yeah, I'm finding it really hard to find the right words for any of this this morning
10:14 Cadmus I mean the 'right' answer is probably node groups or something
10:14 Reverend Cadmus: I don't think so. I've enquired about this before. If you want to see if something is happening, you need to wrap it in a grains.append :/
10:14 Reverend then match on the grain
10:14 Reverend that's what I'm doing with PHP versions
10:14 AndreasLutro Cadmus: no, you'd have to implement something yourself. what I do is specify states in pillars, use that to template top.sls, then I can do {% if 'mysql.server' in pillar.services %}
10:15 Cadmus Okay, I'll see about implementing that myself somehow, thanks for the confirmation.
10:16 AndreasLutro https://bpaste.net/show/4f02bf4ce1aa for example
10:18 nfahldieck joined #salt
10:19 Cadmus I think in my instance the easiest thing will be to check for the existence of the pillar for the other set of statements, I need some data from there anyway.
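
A sketch of the pattern AndreasLutro describes; the pillar key 'services' and the state names are only illustrative:

    # pillar assigned to the minion
    services:
      - nginx
      - mysql.server

    # /srv/salt/top.sls, templated from that pillar
    base:
      '*':
        {% for svc in pillar.get('services', []) %}
        - {{ svc }}
        {% endfor %}

    # inside another state, branch on what else is applied
    {% if 'mysql.server' in pillar.get('services', []) %}
    include:
      - mysql.client
    {% endif %}
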
10:20 nfahldieck hi, I'm wondering what "mkmnt" in states.mount.mounted exactly does? I have a file.directory as a requirement, so I ensure the mountpoint exists in another state. Do I need "mkmnt"?
10:24 lasseknudsen joined #salt
10:25 babilen nfahldieck: You don't in that case
10:25 nfahldieck babilen: thankx
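
A minimal sketch of the arrangement nfahldieck describes, where the mountpoint is guaranteed by file.directory so mkmnt can stay off (device and paths are placeholders):

    /data:
      file.directory:
        - makedirs: True

    mount-data:
      mount.mounted:
        - name: /data
        - device: /dev/sdb1
        - fstype: ext4
        - mkmnt: False
        - require:
          - file: /data
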
10:27 amcorreia joined #salt
10:43 eThaD joined #salt
10:56 mavhq joined #salt
11:05 darioleidi joined #salt
11:15 eThaD joined #salt
11:16 sh123124213 joined #salt
11:22 remy joined #salt
11:23 remy I don't know how to change the output format for a salt-api call
11:23 remy curl -sSk https://localhost:8000 -H 'Accept: application/json' -H 'X-Auth-Token:d89...a4d' -d client=local -d tgt='perle-minion' -d fun=cmd.run_all -d arg="ls /tmp"
11:23 tonthon joined #salt
11:24 tonthon Hi
11:24 remy Like with the --out option for the CLI
11:24 remy salt perle-minion cmd.run_all "ls /tmp" --out pprint
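
As far as I know there is no exact --out switch on the API side; rest_cherrypy picks the serialization from the Accept header instead, so asking for YAML is the closest equivalent (token shortened as in the original):

    curl -sSk https://localhost:8000 \
        -H 'Accept: application/x-yaml' \
        -H 'X-Auth-Token: d89...a4d' \
        -d client=local -d tgt='perle-minion' \
        -d fun=cmd.run_all -d arg='ls /tmp'
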
11:25 tonthon I just upgraded salt-master to the 2016-11-2 version and I'm facing the following error : "'IOLoop' object has no attribute 'make_current'"
11:25 tonthon any idea where it could come from ?
11:26 tonthon (current python-zmq package version is 14.4, python-tornado is 4.4.2)
11:30 tonthon ok, I've got it
11:31 tonthon I found a zmq installation in version 13 (open a python prompt, >>> import zmq >>> print(zmq.__file__, zmq.__version__) )
11:33 * viq grumbles and tries to figure out how to work around https://github.com/saltstack/salt/issues/39100
11:33 saltstackbot [#39100][OPEN] salt-run fileserver.update Exception | Description of Issue/Question...
11:38 * babilen thanks viq and whytewolf for testing the new release
11:38 viq :P
11:40 manji hey all
11:40 manji I have two reactors when a minion starts
11:41 manji a) a state that installs its grain file in /etc/salt/grains
11:41 eThaD joined #salt
11:41 manji b) the typical   local.saltutil.sync_all:
11:41 manji - tgt: {{ data['id'] }}
11:42 manji when a minion starts, a) is executed
11:42 manji but when b) runs I keep getting
11:42 manji return": "Passed invalid arguments to saltutil.sync_all: argument of type 'NoneType' is not iterable\n\n
11:43 manji here are the events:
11:43 manji https://paste.ofcode.org/dhcHTMyNUxJPBT9Dm43wR9
11:43 babilen http://paste.debian.net, https://gist.github.com, http://sprunge.us, … please
11:44 babilen Ah, very good :)
11:44 manji babilen,  :p
11:45 manji hmm I just saw that the returner from a)
11:45 babilen manji: Which version of salt is that? I can't really see anything wrong with your configuration
11:45 manji babilen, 2016.11.1
11:46 babilen (at least with the bits you've shared so far)
11:46 babilen You saw that the returner from a) did what?
11:46 manji I saw it after it failed to do sync_all
11:46 manji let me update the paste
11:47 cryptolukas joined #salt
11:47 manji https://paste.ofcode.org/WAjYYCgybeayN5UDJm92Ti
11:47 manji (updated)
11:48 dendazen joined #salt
11:48 Rumbles joined #salt
11:49 manji but even if /etc/salt/grains was not present at that particular moment
11:49 manji sync_all shouldnt fail like that
11:50 manji aah great
11:50 manji if I restart the minion service
11:50 manji (and trigger the reactor)
11:51 manji sync_all runs properly
11:51 manji ...
11:52 manji but it always fails on the first run
11:53 eThaD joined #salt
11:54 manji I tried to find a "by the book" way to automatically set custom grains
11:54 manji and save them under /etc/salt/grains
11:54 manji to no avail
11:54 manji unless I missed something
11:55 Straphka joined #salt
11:57 viq manji: I admit not having read what you wrote fully, but I set pillars, and then push some of them as grains as well, with something like https://pbot.rmdir.de/iVzbfMOQay4llJA4wgm0RA
11:58 manji hm interesting approach
11:58 manji I suppose each one of us has come up with a hacky solution
11:59 viq Though those grains I treat as "I want everything managed centrally, but have a local copy if for some reason I want to look on machine but salt-master is down or node can't talk to it"
12:00 usernkey1 joined #salt
12:03 viq manji: also I believe saltutil.sync_all is for syncing grains in the sense of "I wrote a bit of python to return a grain for me", not in the sense of https://docs.saltstack.com/en/latest/ref/states/all/salt.states.grains.html#salt.states.grains.present
12:04 manji viq, I think it's a bit vague in general
12:05 viq indeed
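
A sketch of the pillar-to-grain copy viq describes, using grains.present (which persists to /etc/salt/grains on the minion); the 'role' key is purely illustrative:

    set-role-grain:
      grains.present:
        - name: role
        - value: {{ pillar.get('role', 'unassigned') }}
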
12:16 netcho_ joined #salt
12:21 scristian joined #salt
12:33 impi joined #salt
12:35 tonthon left #salt
12:36 evle joined #salt
12:53 _Cyclone_ joined #salt
12:56 kettlewell joined #salt
13:02 eThaD joined #salt
13:04 __number5__ joined #salt
13:04 preludedrew joined #salt
13:05 coldbrewedbrew_ joined #salt
13:05 concernedcitizen joined #salt
13:08 saintaquinas[m] joined #salt
13:08 tyler-baker joined #salt
13:12 edrocks joined #salt
13:19 jholtom joined #salt
13:21 edrocks joined #salt
13:24 eThaD joined #salt
13:25 rdas joined #salt
13:27 ALLmightySPIFF joined #salt
13:31 edrocks joined #salt
13:34 catpig joined #salt
13:34 usernkey joined #salt
13:37 tongpu joined #salt
13:38 alvinstarr joined #salt
13:41 GnuLxUsr joined #salt
13:41 jholtom joined #salt
13:45 dendazen joined #salt
13:46 numkem joined #salt
13:48 ALLmightySPIFF joined #salt
13:53 Vaelatern joined #salt
13:53 Miouge joined #salt
13:56 o1e9 joined #salt
13:57 jholtom joined #salt
13:58 ssplatt joined #salt
14:00 vj4 joined #salt
14:05 XenophonF is there a way to use salt to copy a file from a minion to a master?
14:08 dlloyd module cp.push looks to. https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cp.html#salt.modules.cp.push
14:08 netcho joined #salt
14:08 kojiro joined #salt
14:11 gladia2r joined #salt
14:12 XenophonF thanks for the clue
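
A short sketch of the cp.push route dlloyd points to; note that the master has to allow it with file_recv first (minion id and path are placeholders):

    # /etc/salt/master
    file_recv: True

    # then, from the master
    salt 'web01' cp.push /etc/nginx/nginx.conf
    # the file ends up under the master cachedir, e.g.
    # /var/cache/salt/master/minions/web01/files/etc/nginx/nginx.conf
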
14:16 XenophonF df -h
14:16 XenophonF hah whoops sorry
14:16 beardedeagle joined #salt
14:28 new2salt joined #salt
14:29 new2salt Hi - I've created a state file to install nrpe - all works, but now I need to include a check if fw port 5666 is open and, if not, open it. What's the best way to do this? I've got the following command: salt 'min' cmd.run 'firewall-cmd --list-ports' - I was thinking of getting the output from this into a state somehow?
14:40 _JZ_ joined #salt
14:46 cscf new2salt, https://docs.saltstack.com/en/latest/ref/states/all/salt.states.firewall.html https://docs.saltstack.com/en/latest/ref/states/all/salt.states.firewalld.html
14:48 nickabbey joined #salt
14:55 new2salt cscf, thanks for the link - so I can see how I can use that to set the rules. How would I run a check to only set the rule if the port is not open?
14:56 amagawdd joined #salt
14:56 Tanta joined #salt
14:57 viq new2salt: you want to learn about idempotency ;)
14:58 viq new2salt: "make sure this port is allowed" means, make sure. If it already is, then there is nothing to do.
14:59 new2salt viq, ok got you. I'll give that a go - thanks
15:00 viq Same with all other states. 'pkg.installed' doesn't try to install package again if it is already installed.
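
A rough sketch for new2salt's NRPE port, assuming the firewalld.present state linked above; the zone name is a guess, and note Cadmus's later complaint in this log that firewalld.present prunes ports it does not manage:

    nrpe-port:
      firewalld.present:
        - name: public
        - ports:
          - 5666/tcp
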
15:01 new2salt On some of our servers we don't have firewalld running - so if I set this state on one of these, what would happen?
15:02 blueyed joined #salt
15:03 racooper joined #salt
15:04 netcho joined #salt
15:04 viq https://github.com/saltstack/salt/blob/4c2d4159fd8705d1b3bb8f90500635f8ccec4f22/salt/states/firewalld.py
15:04 viq If not present, it will error saying that state is unavailable. Present but not running - I don't know.
15:05 viq What will happen when you try to run firewalld commands when it's not running?
15:05 netcho joined #salt
15:05 netcho joined #salt
15:05 toastedpenguin joined #salt
15:05 blueyed How do I use module.wait with dockerng.signal?  [2975] threw an exception. Exception: wrapper() takes at least 1 argument (0 given).  Seems to be this issue?! https://github.com/saltstack/salt/issues/31486
15:05 saltstackbot [#31486][OPEN] dockerng.start cannot be used from module.run | To recreate this issue:...
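
A hedged sketch of the usual workaround for blueyed's error: module.run/module.wait already use name for the function, so a function argument literally called name normally has to be passed as m_name (worth verifying against the linked issue; the container and watch target are invented):

    reload-container:
      module.wait:
        - name: dockerng.signal
        - m_name: my_container
        - signal: HUP
        - watch:
          - file: my_container_config
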
15:06 abednarik joined #salt
15:07 brousch__ joined #salt
15:08 mpanetta joined #salt
15:09 teclator joined #salt
15:10 ReV0131 joined #salt
15:17 swills joined #salt
15:17 Cadmus Firewalld is giving me such a headache right now, it changed from non-pruning to pruning, and you can't stop it pruning ports, so my idempotency just went into the sea :(
15:17 samodid joined #salt
15:22 usernkey joined #salt
15:28 nZac joined #salt
15:28 netcho_ joined #salt
15:30 ALLmightySPIFF joined #salt
15:33 bowhunter joined #salt
15:37 abednarik joined #salt
15:38 daxroc Can you view the generated pillar from a map.jinja easily ?
15:38 tiwula joined #salt
15:43 samodid joined #salt
15:46 keltim joined #salt
15:56 zulgabis joined #salt
15:57 usernkey1 joined #salt
16:00 onlyanegg joined #salt
16:05 st8less joined #salt
16:13 jjjjj joined #salt
16:14 lompik joined #salt
16:15 samodid joined #salt
16:15 tercenya joined #salt
16:16 MeltedLux joined #salt
16:19 jay_M left #salt
16:19 debian112 joined #salt
16:20 hasues joined #salt
16:23 teclator joined #salt
16:23 eThaD joined #salt
16:26 sarcasticadmin joined #salt
16:27 lionel joined #salt
16:30 jaemin joined #salt
16:32 jaemin left #salt
16:33 swa_work joined #salt
16:33 Jay_M joined #salt
16:33 Jay_M left #salt
16:34 Jay_M joined #salt
16:35 DammitJim joined #salt
16:36 Brew joined #salt
16:38 eThaD joined #salt
16:40 PatrolDoom joined #salt
16:40 debian112 joined #salt
16:41 hasues left #salt
16:42 bowhunter joined #salt
16:43 Sarph joined #salt
16:49 edrocks joined #salt
16:53 swa_work joined #salt
16:55 tapoxi joined #salt
16:56 lompik joined #salt
16:57 beardedeagle joined #salt
16:59 Inveracity joined #salt
17:05 eThaD joined #salt
17:14 impi joined #salt
17:16 SpX joined #salt
17:18 heewa joined #salt
17:18 heewa I'm getting intermittent unresponsiveness from a salt minion. Periodically a salt call to a minion will hang and eventually fail with `Minion did not return. [No response]` but if I immediately retry, it'll work. But then maybe not the next time, immediately after.
17:18 heewa Anyone experience this, or have any advice for it?
17:20 leonkatz joined #salt
17:20 Trauma joined #salt
17:21 nickabbey joined #salt
17:24 cryptolukas joined #salt
17:27 swa_work joined #salt
17:29 PatrolDoom joined #salt
17:30 PatrolDoom joined #salt
17:30 PatrolDoom joined #salt
17:32 abednarik joined #salt
17:32 samodid joined #salt
17:39 ivanjaros joined #salt
17:40 bakins joined #salt
17:42 sh123124213 joined #salt
17:43 megamaced joined #salt
17:46 eThaD joined #salt
17:50 Eugene heewa - yes, I see that pretty commonly. I tend to run `salt foo test.ping` before firing off a state.apply
17:50 swa_work joined #salt
17:50 Eugene In my environment it seems to be tied to packet loss on my VPN tunnel or across the open internet, which is a TCP-unfriendly place
17:51 hackel joined #salt
17:52 emartens joined #salt
17:54 tercenya joined #salt
17:54 kojiro joined #salt
17:59 nickabbey joined #salt
18:09 Miouge joined #salt
18:09 aric49 joined #salt
18:12 SaucyElf joined #salt
18:13 s_kunk joined #salt
18:15 overyander joined #salt
18:15 debian112 joined #salt
18:16 SaucyElf joined #salt
18:18 mikecmpbll joined #salt
18:18 swa_work joined #salt
18:21 swa_mobil joined #salt
18:25 djgerm1 joined #salt
18:26 foundatron joined #salt
18:27 DanyC joined #salt
18:28 eThaD joined #salt
18:29 Praematura joined #salt
18:36 aw110f joined #salt
18:40 ekristen so I want to be able to register a server with another server, but to do that I need to generate a one time use auth token on the first server, is there a way to do that with the mine or something else like that to dynamically call something that runs on another server then use that return value on the other server
18:40 gtmanfred publish.publish
18:40 xet7 joined #salt
18:40 barmaley joined #salt
18:40 gtmanfred ekristen: https://docs.saltstack.com/en/latest/ref/peer.html
18:41 ekristen ok, that sounds familiar, any example’s I can look at?
18:42 gtmanfred i don't have any
18:43 gtmanfred actually, i have one from a long time ago
18:43 gtmanfred ekristen: https://github.com/gtmanfred/salt-states/blob/d0493902e675d87d14d55aae20b857728d2f725a/hosts/init.sls
18:43 gtmanfred i used it to get network.ip_addrs,back when the mine was having problems
18:44 ekristen sounds like I need to write a custom module
18:44 gtmanfred yar, that would be a good way to make sure you only get back the string for the one time auth
18:45 swa_work joined #salt
18:45 ekristen been a long time since I’ve done that …
18:45 ekristen guess I’ll take a look
18:45 ekristen thanks gtmanfred
18:45 gtmanfred no problem
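
A rough sketch of the peer-publish flow gtmanfred suggests; token.generate stands in for the custom module ekristen would write, and the minion ids are placeholders:

    # /etc/salt/master - let serverB publish calls to the token module
    peer:
      serverB.example.com:
        - token.generate

    # rendered on serverB, e.g. inside an sls or template
    {% set result = salt['publish.publish']('serverA.example.com', 'token.generate') %}
    {% set token = result.get('serverA.example.com') %}
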
18:45 foundatron Hi, can anyone tell me what the difference is between https://github.com/saltstack-formulas and https://github.com/salt-formulas
18:46 foundatron is one more  "official"  than the other?
18:46 sh123124213 joined #salt
18:46 Edgan joined #salt
18:46 gtmanfred salt-formulas is not us
18:48 foundatron I was hoping to use a formula to manage salt... and then got confused whether to use https://github.com/saltstack-formulas/salt-formula or https://github.com/salt-formulas/salt-formula-salt
18:49 foundatron ok, thanks @gtmanfred
18:49 foundatron that certainly makes things simpler
18:50 eThaD joined #salt
18:50 bowhunter joined #salt
18:57 djgerm joined #salt
18:58 tercenya joined #salt
19:05 nZac joined #salt
19:05 juanito_ joined #salt
19:05 Eugene Ran yum-update this morning, getting a stack trace from salt-minion. https://vomitb.in/M9AabvZDab. Did I screw something up or is it time to file a bug on Github? This is a CentOS 7 machine, using the saltstack repo.
19:08 Tanta_G joined #salt
19:09 edrocks joined #salt
19:11 edrocks is `file.line` a good option for ensuring lines are in your bash_profile?
19:11 mpanetta joined #salt
19:12 Brew joined #salt
19:12 nidr0x joined #salt
19:19 nZac joined #salt
19:26 djgerm1 joined #salt
19:31 cyborg-one joined #salt
19:31 eThaD joined #salt
19:31 ssplatt edrocks: or file.managed to manage the whole file
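
Sketches of both directions: file.managed owns the entire file, while file.append (not mentioned above, but a common alternative to file.line) only guarantees its own lines; paths and contents are examples:

    # ensure specific lines exist, leave the rest of the file alone
    bash-profile-lines:
      file.append:
        - name: /home/edrocks/.bash_profile
        - text:
          - 'export EDITOR=vim'
          - 'export PATH=$PATH:$HOME/bin'

    # or own the entire file, as ssplatt suggests
    bash-profile:
      file.managed:
        - name: /home/edrocks/.bash_profile
        - source: salt://users/files/bash_profile
        - user: edrocks
        - mode: '0644'
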
19:38 heewa Eugene: My problem is that even `salt foo test.ping` randomly will hang or finish immediately, even on the minion running on its own master box, as the only minion (no other nodes).
19:38 Eugene Cool.
19:39 heewa Hmm, looks like exactly every other time! What a weird pattern.
19:39 leonkatz joined #salt
19:43 WaffleWiz joined #salt
19:45 ChubYann joined #salt
19:45 gableroux joined #salt
19:45 WaffleWiz howdy folks-- hoping someone can help me. I'm trying to set multiple "file_root(s)" in "file_client: local" mode
19:47 djgerm1 if i have a bunch of grains I want to apply to a bunch of minions, should I just write a state?
19:48 spuder joined #salt
19:48 Eugene You want to bulk-load grain data onto minions?
19:49 gableroux_ joined #salt
19:49 leonkatz joined #salt
19:50 WaffleWiz how about this: what do folks do for unit tests on their states/formulae? I'm coming from Chef and Chefspec and need a way to test complex states without having to spin up VMs
19:53 cmarzullo WaffleWiz: I use test-kitchen
19:54 cmarzullo same same as chef. make formulas testable. Use server spec to verify changes and reduce introdution of new bugs when people change the formula.
19:54 WaffleWiz with kitchen-salt? I'm hoping for something where I neither have to A) be running the tests on the target machine B) turn on a VM and apply the state
19:55 cmarzullo B is a pretty big blocker.
19:55 cmarzullo if you aren't testing on a vm. how do you know it'll work like you want on a vm?
19:55 WaffleWiz it's less about knowing that it works on a vm, more about being able to use complex logic in the states and ensure that the state does what you want
19:56 ssplatt WaffleWiz: and thats when you’d apply the things to a vm to see if it does what you asked it too…no?
19:56 WaffleWiz an example would be building a url of a file to download: you write a unit test to make sure it builds a viable url with an array of inputs that might exist in the state
19:56 cmarzullo what renderer are you using.
19:57 WaffleWiz I'm just using the defaults, jinja/yaml
19:58 cmarzullo So you are mostly concerned about jinja rendering?
19:58 cmarzullo you can render out the sls file without applying it.
19:59 WaffleWiz do you know of an existing tool that would let me render out the sls files, and then run unit tests against that output?
20:00 cmarzullo state.show_highstate
20:01 cmarzullo but we do all that in the vm.
20:01 cmarzullo with test-kitchen
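
The render-without-applying step cmarzullo mentions, sketched; show_sls renders a single sls while show_highstate renders everything assigned to the minion:

    # render a single sls for this minion without applying it
    salt-call state.show_sls myformula.install

    # or render the full highstate from the master
    salt 'minion1' state.show_highstate --out=yaml
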
20:01 lpl joined #salt
20:02 WaffleWiz are you just using vagrant/virtualbox for the kitchen VMs?
20:02 cmarzullo yeah for local development and testing.
20:02 cmarzullo Then it gets pushed to CI which then tests the states against our cloud provider
20:02 cmarzullo so we can catch any differences between bento and cloud provider vms.
20:03 WaffleWiz so the CI is also spinning up a VM
20:03 cmarzullo yes.
20:04 cmarzullo there was a saltconf talk about testing formulas in docker. But I felt like you have to fight docker so much. Like running services and stuff.
20:07 denys joined #salt
20:07 beardedeagle joined #salt
20:09 WaffleWiz blech
20:09 cmarzullo indeed. for me, test-kitchen is a very nice way to write formulas.
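
A minimal .kitchen.yml sketch of the test-kitchen + kitchen-salt setup cmarzullo describes; keys and box names vary by kitchen-salt version, so treat this as a starting point only:

    driver:
      name: vagrant

    provisioner:
      name: salt_solo
      formula: myformula
      state_top:
        base:
          '*':
            - myformula

    platforms:
      - name: ubuntu-16.04

    suites:
      - name: default
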
20:10 snarfy^ joined #salt
20:11 DammitJim joined #salt
20:13 eThaD joined #salt
20:13 ssplatt i do think i saw someone had a python script to display rendered sls files for debugging. not sure if it was shared on the net tho.
20:13 whytewolf ssplatt: it is
20:14 ssplatt whytewolf: links?
20:14 ssplatt pls
20:14 whytewolf https://github.com/whytewolf/salt-debug
20:14 ssplatt ya thats it.  thanks
20:14 ssplatt it’s a crappy tool.
20:14 ssplatt heh
20:14 lpl joined #salt
20:14 whytewolf yeah even the author thinks so ;)
20:15 ssplatt looks like i already had it starred too
20:15 druonysus_ joined #salt
20:16 ssplatt still requires a salt env…
20:16 Sammichmaker joined #salt
20:17 lpl joined #salt
20:17 whytewolf yeah. it uses everything built into salt to render the sls.
20:18 impi_ joined #salt
20:19 oida joined #salt
20:21 whytewolf it is difficult to get out of needing a salt environment because of all the things that salt does to render something. from so many different places.
20:24 snarfy^ joined #salt
20:24 DammitJim is there an easy way to get salt to read a directory and create variables from that to fill in a file.managed?
20:24 WaffleWiz indeed
20:25 WaffleWiz whytewolf: thus me asking about setting file_roots earlier
20:27 whytewolf WaffleWiz: well you can set multiple file_roots, the setting is exactly like it is on the master, just on the minion ... the hard part is local doesn't really support environments
20:28 WaffleWiz it seems like --file-root only exposes the ability to set a single such file_root(s) options using salt-call
20:28 WaffleWiz was more my dilemma, but I think I'll have to get into actually building a file client in python for this
20:29 whytewolf ahh --file-root ... yeah that is a whole 'nother ball game
20:30 gableroux_ joined #salt
20:34 nixjdm joined #salt
20:34 eThaD joined #salt
20:40 overyander joined #salt
20:41 swa_work joined #salt
20:43 StolenToast joined #salt
20:45 tapoxi anyone running into the userdata templating bug in salt-cloud and know of workarounds?
20:45 tapoxi https://github.com/saltstack/salt/issues/33194
20:45 saltstackbot [#33194][OPEN] salt-cloud: EC2 userdata template error | Description of Issue/Question...
20:49 dyasny joined #salt
20:53 StolenToast I'm having trouble getting reclass working with salt.  Reclass seems to be working fine but there's some disconnect on the salt side: http://reclass.pantsfullofunix.net/salt.html
20:53 StolenToast reclass-salt gives me properly formatted data
20:53 beardedeagle joined #salt
20:54 snergster joined #salt
20:57 SaucyElf_ joined #salt
20:58 PatrolDoom joined #salt
21:01 whytewolf StolenToast: not sure if it matters but the salt documentation for reclass shows an extra setting. storage_type: yaml_fs
21:01 whytewolf https://docs.saltstack.com/en/latest/ref/tops/all/salt.tops.reclass_adapter.html
21:01 whytewolf the reclass documentation looks to have been written around 0.17 which was AGES ago now
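
A sketch of the master config the linked adapter docs describe, wiring reclass into both master_tops and ext_pillar; the inventory path is an assumption:

    # /etc/salt/master
    master_tops:
      reclass:
        storage_type: yaml_fs
        inventory_base_uri: /etc/reclass

    ext_pillar:
      - reclass:
          storage_type: yaml_fs
          inventory_base_uri: /etc/reclass
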
21:02 ponyofdeath hi, how can I check if a item is in grains first beforei try to use it.. doing if manufacturer in grains and grains['manufacturer'] == 'DigitalOcean' does not seem to work
21:03 StolenToast I'll try it out, the python doc for reclass made it sound optional
21:03 beardedeagle you specifically want to know if DigitalOcean is in grains['manufacturer']?
21:06 beardedeagle if 'DigitalOcean' in salt['grains.get']('manufacturer', {}) ?
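
beardedeagle's check as it would sit in a template or sls; the likely bug in ponyofdeath's original attempt is the unquoted grain name, since Jinja needs 'manufacturer' as a string (the state below is just a placeholder):

    {% if salt['grains.get']('manufacturer', '') == 'DigitalOcean' %}
    on-digitalocean:
      cmd.run:
        - name: echo "running on a DigitalOcean droplet"
    {% endif %}
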
21:07 impi joined #salt
21:12 StolenToast whytewolf: my general problem is that after testing this example reclass setup and (afaik) integrating it with salt (via the master config) I get "no contents found in top file"
21:12 StolenToast but reclass should now be serving as the top file, giving the same information as with "reclass-salt --top"
21:13 viq StolenToast: I think varstack seems more promising to me, reclass doesn't seem to have seen much development lately
21:13 StolenToast I can look, I haven't put too much effort into this
21:14 StolenToast I just want a good inventory manager for salt and other things
21:14 viq But that's just my opinion without having played with either
21:14 viq Yeah, I would like something like that too...
21:15 StolenToast I'll take the one that works
21:15 viq It's interesting what they're doing at Adobe, from the presentations I've seen, but it's unlikely that will be released, if for no other reason than that it's been made for their specific internal needs
21:16 eThaD joined #salt
21:17 cmarzullo foreman has some salt integrations. It's been a while since I'velooked though.
21:17 whytewolf cmarzullo: i don't think forman has a master_tops setup though
21:17 whytewolf which is what the end goal of this would be
21:17 viq Ah, yeah, I want to look at that as well, though my first interest is in the reporting part of it
21:18 cmarzullo hmmmm master_top
21:18 cmarzullo do want.
21:19 cmarzullo looks like it does.
21:19 cmarzullo https://theforeman.org/plugins/foreman_salt/7.0/index.html#2.1.2SaltMasterConfiguration
21:20 whytewolf ahh ext_nodes ...
21:21 viq http://tumblr.github.io/collins/  looks interesting too, though I don't think there's salt integration
21:21 StolenToast what I really want is something to simply read lists of nodes and assign grains
21:22 StolenToast I don't really feel like getting into defining my own inheritence hierarchy
21:22 NightMonkey joined #salt
21:23 cmarzullo assign grains? grains don't change that often. Are you doing roles with grains?
21:23 StolenToast I want to divide my nodes into environments better
21:23 StolenToast I have a "my first salt" setup right now but its becoming an untennable mess, I need to re-build it in a smarter way
21:24 cmarzullo be careful about grains for that though.
21:24 StolenToast I want something that will easily let me assign nodes to roles and environments for salt to deal with
21:24 StolenToast is there are better way?
21:25 cmarzullo https://www.lutro.me/posts/dangers-of-targetting-grains-in-salt
21:25 MTecknology StolenToast: what kinda setup do you have now? Most people's first attempt is usually kinda... not pretty
21:25 cmarzullo you can use grains for that type of targeting. But you need to be careful.
21:25 StolenToast MTecknology: it's one big "base" env with a ton of .sls files in vaguely-grouped subfolders like "software"
21:26 whytewolf and the problem? :P
21:26 StolenToast first of all I'd like to partition my centos6 and centos7 nodes, and then I have some further sorting
21:26 whytewolf just kidding
21:26 StolenToast it's getting messy with nodes that are similar in some ways but different in others
21:27 whytewolf personally i mostly do roles in pillars
21:27 whytewolf not grains
21:27 MTecknology StolenToast: centos6 vs c7 is more a job for grains rather than a completely separate set of states
21:27 * MTecknology doesn't use "roles"
21:27 StolenToast yeah I tried that, i think with the configurations I have to do I would prefer to just keep them in a separate env
21:27 cmarzullo +1 for pillar.
21:28 StolenToast how do you do it in the pillar?
21:28 implicitnewt joined #salt
21:28 StolenToast "cluster_A: <nodes>"?
21:28 nickabbey joined #salt
21:28 MTecknology StolenToast: I've found that most people don't take the time to build a nice clear picture of what their ideal setup is, it's always based around wedging crap into crap and producing crap^2
21:29 StolenToast yes and that's what I have now
21:29 StolenToast I've drawn up a little diagram of my future setup
21:29 MTecknology ... ninja
21:29 cmarzullo I use the minion id for targeting in pillar to determine what pillar / states minion gets
21:29 MTecknology is that future based on everything being ideal or is it based on lots of things currently there?
21:30 cmarzullo most minion_ids follow a pattern
21:30 cmarzullo my_app1 myother_app2
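
A sketch of the pillar top file pattern cmarzullo is describing, keyed off minion-id prefixes (ids and pillar files are invented):

    # /srv/pillar/top.sls
    base:
      'my_app*':
        - roles.my_app
      'myother_app*':
        - roles.myother_app
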
21:30 StolenToast it's just a different folder hierarchy, really
21:30 StolenToast I would transplant all my current states and probably change a few
21:31 MTecknology StolenToast: so then you're basically just re-rolling what you have now
21:31 implicitnewt has anyone used salt-cloud with AWS govcloud successfully?  Seem to have some issues with the endpoint definition and accessing information
21:31 StolenToast MTecknology: yeah but in a more sustainable form, I think
21:31 DanyC joined #salt
21:32 StolenToast The main problem I have with targeting minions by id is that I didn't name every node and so sometimes patterns MOSTLY work but will pickup undesirable systems with similar names
21:32 StolenToast like "node1-node10" and also "node101" which exists for some reason
21:32 MTecknology I don't like trying to pick things up dynamically from hostnames. I try to avoid it
21:33 BlackMaria_netsp joined #salt
21:33 StolenToast yeah I don't have that many nodes, i would really like to simply have a hand-built list of groupings (and I tried node groups, I couldn't get them to work)
21:33 cmarzullo hostnames do not need to equal minion_id
21:33 StolenToast isn't the minion_id from the hostname?
21:33 MTecknology I want names to be meaningful so they can be plugged in and easily looked at, but I avoid things like grabbing the "dev/prod/whatever" handle
21:33 cmarzullo if you don't provide one.
21:34 cmarzullo if you using the bootstrap I think -A sets the minion_id
21:34 cmarzullo or just change /etc/salt/minion_id restart the minion and accept the key
21:35 MTecknology lemme grab an old top file from $previous_employer
21:35 StolenToast still I am interested in this kind of software for other purposes, too, so I'd like to look into integrating something like reclass at this point, as long as I'm rebooting
21:35 BlackMaria_netsp left #salt
21:35 cmarzullo TBH if your hosts are all 'artisanally crafted' with wide divergence of metadata you're gonna be in for a rough time.
21:35 whytewolf I think one of the other things is i don't often use top files for states... I have more orchestration files
21:36 StolenToast I've got like 7 main clusters but then a smaller smattering of specific nodes
21:36 StolenToast gah I couldn't get orchestration working either
21:36 StolenToast <- bad at salt
21:36 cmarzullo it takes time.
21:36 promorphus joined #salt
21:37 beardedeagle joined #salt
21:37 whytewolf yeah orchestration is something that ... is kind of an art piece when it works. but can be a pain in the ass getting working
21:38 MTecknology and pain in the ass to troubleshoot
21:38 kojiro show_highstate is nice
21:39 StolenToast so bottom line is I want to define my groups of nodes by hand (individually) and then have these groupings map to a top file for salt
21:40 whytewolf huh, i never had a problem troubleshooting orchestration. mostly because i tend to break it down into different parts
21:40 MTecknology StolenToast: that sounds terrible
21:40 StolenToast why?
21:40 MTecknology StolenToast: this is a top file from $old http://dpaste.com/0P8XYB7
21:41 debian112 joined #salt
21:41 StolenToast maybe some context will help, these are HPC nodes so I've got "Cluster A" of say 130 nodes that are all identical
21:42 StolenToast the members of this cluster might change but the configuration will apply to all of them
21:43 StolenToast but old host naming conventions means that simple patterns like "clustera_*" will pickup unwanted nodes that don't belong to the cluster
21:43 alrayyes left #salt
21:44 * MTecknology strongly dislikes this concept of "roles"
21:44 MTecknology err... meant to type more before enter... hold back the flames!
21:44 whytewolf lol
21:45 whytewolf roles are a workaround for nodegroups not being the most robust devices in salt's arsenal
21:45 whytewolf honestly salt needs a more robust node metadata service all around
21:46 whytewolf which is what i like master_tops and ext_pillars that work together for
21:47 DanyC joined #salt
21:48 StolenToast I'm waiting for MTecknology to say something
21:48 StolenToast cuz it's about time to go and think about this again tomorrow
21:49 MTecknology It seems to breed this mentality of roles are a thing you attach to a system and then use those roles to tell the system everything that it's gonna be doing. It promotes doing shit like 'ussr4580 is going to be role apt-proxy and galera-cluster'. If you live in chef-land, those role attributes are easily misplaced (and subsequently lost) when re-deploying a box. Worse yet, is that *EVERYWHERE*
21:49 MTecknology I've seen it, people look at servers as things that need to stick around and never be recreated because something might break. In places where I see this roles concept dropped, things are more a docker-style mentality where servers may come and they may go, but need to remain in a known state.
21:49 MTecknology That's one of the reasons I've made it policy in my home environment that no server is up for longer than one year.
21:50 MTecknology after one year, it's time to re-deploy the sucker.   That includes the salt master and backup host.
21:50 StolenToast I would love for everything to sort itself out but I don't know how to make that happen
21:51 MTecknology StolenToast: gotta start with a future goal. What is the IDEAL setup? Not based on what you have now, but what's the cleanest and simplest way to do organize crap.
21:51 StolenToast but for example these clusters aren't specific, they get added and removed all the time
21:51 dxiri joined #salt
21:52 MTecknology StolenToast: get that clear picture, and start toying around with it in a lab. Figure out the most simplistic way of targeting and if needed, move that to ext_pillar.
21:53 MTecknology If you can cleanly organize what you have, then figure out where the overlap is. You'll /probably/ find that ~99% of it does... no need to re-create that for every box.
21:54 MTecknology StolenToast: then you can use pillar to generate the special things pertinent to each box or cluster of boxes and do things like this - https://gist.github.com/MTecknology/a7138375b14ea9c9561eee659114c00b
21:54 StolenToast so how do I reliably target nodes when the hostnames are not reliable?
21:54 MTecknology exact same states, nothing changes between any web server except pillar data.
21:54 MTecknology your hostnames are not reliable?
21:54 StolenToast I already have an annoyingly complex python grain to help identify which nodes are actual members of a cluster and which just happen to share a similar name
21:55 MTecknology remember... I said "*IDEAL*"
21:55 whytewolf not having a reliable baseline is kind of a bad way to even start.
21:55 MTecknology what's picture perfect?
21:55 StolenToast yeah but I can't change what happened before I was hired
21:56 MTecknology When you know what's picture perfect, it's much easier to figure out creatively simple ways to bridge gaps
21:56 * MTecknology sighs
21:56 StolenToast I've drawn what I think is a good, totally new structure and am working from that, but maybe it's not ideal
21:57 * MTecknology can share top.sls from home (for most things)...
21:58 * MTecknology is planning a death.lustfield.net project that triggers when I die and does things like archives my website to github pages, updates DNS records, creates a dns record that tells my systems to wipe themselves clean, etc.
21:59 whytewolf huh, one of my end goals is to have a beacon that nukes a server and rebuilds another in it's place in case the user count goes higher then 0 on it
21:59 Tanta joined #salt
21:59 whytewolf but that is a bit down the road
22:00 MTecknology whytewolf: don't kill me... but it burns
22:00 MTecknology than*
22:00 MTecknology also, that's really cool
22:00 whytewolf lol/ not going to kill you. my grammer sucks
22:02 MTecknology I think this is sanitized well enough...
22:02 MTecknology StolenToast: https://gist.github.com/MTecknology/ed2d5b3175f29ba08cea989742158da4
22:03 cacasmacas joined #salt
22:03 MTecknology StolenToast: you don't /have/ to use yaml for your sls either
22:03 whytewolf or jinja
22:03 MTecknology could just as easily pull from a database like d42 if that's what you /really/ ... /really/ wanna do
22:04 MTecknology maybe not "as easily" .. but pretty easily
22:04 MTecknology hey... d42 supports adding roles
22:05 xbglowx joined #salt
22:07 beardedeagle joined #salt
22:08 MTecknology StolenToast: anyway... ya, you can't fix what you don't see and to really see the mess, you need to see what it /should/ be. When you stay in the muck, all you see is muck. From a higher view, what works becomes clear and rising out of that muck becomes much easier.
22:11 swills_ joined #salt
22:12 MTecknology OMG!!! I just ran across this!  From only three years ago.  https://gist.github.com/MTecknology/560509f5253cf50e28cb  <-- wanted each minion to connect to a "random" two syndic masters and always select those same boxes again. I used drdb to keep the keys in sync.
22:14 dendazen joined #salt
22:17 whytewolf oh, on a further note. if you haven't tried salt environments yet. you might not want to base your new setup on them. they can be .... a pain. hard to define. hard to keep locked apart. typically most places i know switch to multiple masters that each control a separate environment
22:18 preludedrew joined #salt
22:18 StolenToast nothing is as it seems
22:19 nickabbey joined #salt
22:20 Chris_ joined #salt
22:20 swills_ joined #salt
22:21 Chris_ Oo so much Salt Admins. awsame...
22:22 MTecknology StolenToast: salt is flexible enough to let you do whatever the crap works best in your environment. The bigger challenge is understanding the environment and how salt /should/ fit into it.
22:22 whytewolf ^ this, so much this
22:23 MTecknology I've seen waaay too many people over-engineer these complicated solutions for all sorts of different scenarios. They persist doing things that way because "we need the flexibility!!" but... they don't. They just reinvented pillar in a really bad way that is probably now the root cause of the master never being able to keep up.
22:24 PatrolDoom joined #salt
22:26 MTecknology ... now that I think about it...
22:27 whytewolf huh that is the goal behind a pillar user class i am in the middle of working on. so i don't have 30 different ways of defining users to different states. i just have a single type of user dict that should make looking up pillar data for a user easy. and because the pillar doesn't rely on jinja in pillar it also makes it quicker and easier to maintain.
22:28 leonkatz joined #salt
22:28 hrumph joined #salt
22:29 MTecknology I ... think I just found a way to get any node able to display all pillar data for all minions
22:29 whytewolf ...
22:29 MTecknology (in $env)
22:29 whytewolf :(
22:30 * MTecknology coughs
22:30 MTecknology yup... it works
22:30 whytewolf that. needs a security bug report
22:31 MTecknology it's not a bug in salt, it's a bug in the way these guys re-created pillar within pillar
22:31 DEger joined #salt
22:31 whytewolf ohhhhhhh
22:31 whytewolf it is in $clients code. not salt ... gotcha
22:31 MTecknology ya
22:31 honestly "re-created pillar within pillar"
22:32 honestly uhhhh
22:32 foundatron i was like ohhhh snap
22:33 whytewolf honestly: recreating pillar within pillar seems to be a common thing... that most people don't understand they are doing until it is too late
22:33 * MTecknology grumbles
22:33 MTecknology I just raised my concern and was told this was all by design
22:34 whytewolf wtf
22:34 whytewolf insecurity by design?
22:35 MTecknology soooooooooooo much data being transfered *EVERY* single time that module is requested.
22:36 MTecknology the sls file alone is 360K and whatever that gets compressed to is being pushed and processed every single time..
22:36 MTecknology but it's fine because "that's how I designed it"
22:37 whytewolf ugh
22:37 whytewolf surprised it works at all
22:37 MTecknology eh.. I shouldn't share more about it, but it gets worse
22:38 honestly whytewolf: being hobbled by salt-ssh's terrible state means often taking a step back and thinking about how to achieve something in the proper way
22:38 honestly that helps I suppose (:
22:40 whytewolf MTecknology: yeah. don't need to keep dragging it through the Muck.
22:40 whytewolf i did used to work for a company that didn't use pillar at all. and put passwords into grains.
22:41 whytewolf they also wrote a custom module that would install some packages. and if the packages were in the proper state before it ran would throw an error.
22:42 MTecknology heheh... ya, that kinda thing
22:43 dxiri hi everyone, I am trying to follow the steps here: https://docs.saltstack.com/en/latest/topics/tutorials/cloud_controller.html
22:43 dxiri but I am getting ''virt.hyper_info' is not available.'
22:43 dxiri is there anything special I need to do
22:44 dxiri according to that doc I shouldn't need to install anything additional
22:44 swills__ joined #salt
22:44 whytewolf github
22:44 whytewolf ack!
22:44 whytewolf wrong window
22:45 skullone has anyone used the 'slack engine' with salt?
22:46 whytewolf dxiri: i don't see a hyper_info function in the virt runner
22:48 whytewolf dxiri: a list of functions in the virt runner https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.virt.html#module-salt.runners.virt
22:49 dxiri found the corresponding function, looks like its host_info now :)
22:49 dxiri would be good to update that document
22:56 protoplasm joined #salt
22:59 protoplasm Hi all, does Salt have a way to re-execute all states once a given state has changed?
22:59 protoplasm Say you manage some config files in minion.d/, and would like salt to re-run once it sees that a config has changed. Otherwise you have to run salt twice.
23:02 xbglowx joined #salt
23:07 MTecknology protoplasm: out of curiousity, are you dropping roles into grains in the minion.d/something.conf file/
23:07 MTecknology ?*
23:09 protoplasm Hmm no, right now I'm just configuring the salt-minion using salt itself.
23:09 protoplasm I'm not sure what you mean by 'dropping roles into grains'
23:10 protoplasm We are using grains (ec2 tags) to provide our instances.
23:12 MTecknology protoplasm: why does salt need to re-run a highstate when the salt-minion config changes?
23:12 MTecknology why not just reload the service on change?
23:15 protoplasm In this particular case, we need to set the system providers to 'systemd' across all of our states.
23:15 protoplasm We run on centos7 and use a mix of systemd and daemontools services
23:16 protoplasm but for some reason the default providers for services do not play well with us.
23:16 protoplasm Without setting default service provider to systemd, salt does not know how to handle systemd-based services like ntpd.
23:17 protoplasm So we have to run it twice currently, first pass to add the config, second pass to manage those services.
23:17 protoplasm (for those systems that do not yet have this fix.)
23:18 MTecknology seems like a better job for grains
23:19 protoplasm Oh I see what you mean.
23:22 protoplasm But that'd mean we'd still have to specify the system provider everywhere we currently have a service right?
23:23 protoplasm Whereas a minion configuration file applies to all states.
23:25 brent__ joined #salt
23:27 xbglowx joined #salt
23:29 MTecknology protoplasm: sure, but one requires you modify the config file and restart execution and the other does not
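
One concrete version of the "reload the service on change" suggestion from 23:12: manage the provider override as a minion config snippet and restart the minion when it changes, rather than running the highstate twice. A hedged sketch only; the filename is arbitrary, and restarting the minion from inside its own run has well-known caveats:

    salt-minion-providers:
      file.managed:
        - name: /etc/salt/minion.d/providers.conf
        - contents: |
            providers:
              service: systemd

    salt-minion:
      service.running:
        - enable: True
        - watch:
          - file: salt-minion-providers
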
23:30 brent__ joined #salt
23:31 protoplasm How are you managing your salt master/minion configuration currently? Do you put them into Salt as well?
23:32 brent__ joined #salt
23:32 beardedeagle joined #salt
23:33 brent__ joined #salt
23:34 zenchiken joined #salt
23:34 abednarik joined #salt
23:37 Ryan_Lane any core devs around? I have a design problem I'd like to get a second set of eyes on
23:37 Ryan_Lane it's for the boto_* modules, and it's a new pattern I'd like to apply
23:37 PatrolDoom joined #salt
23:37 Ryan_Lane (if you're a frequent contributor to boto_* modules, feel free to discuss with me as well :) )
23:45 smcquay joined #salt
23:47 sh123124213 Ryan_Lane: maybe you should go ahead and say what you need
23:48 Ryan_Lane but I like to ask to ask. it's the irc way!
23:48 Ryan_Lane :)
23:48 whytewolf lol, hey Ryan_Lane long time no chat
23:48 Ryan_Lane whytewolf: howdy
23:49 Ryan_Lane mostly this has to do with attempts to avoid yak shaving
23:49 Ryan_Lane we're going to introduce a new option to a state module, and for upstreaming purposes it makes sense for it to be set to false
23:50 Ryan_Lane but we need this to default to true in every single place the state is used in our own infra
23:50 Ryan_Lane and that's in hundreds of repos
23:51 Ryan_Lane every single boto module have an argument called "profile", that's a dict. right now it defines things like the aws region, and aws credentials (if you don't set those up elsewhere)
23:51 Ryan_Lane my thought was to also make it possible to define defaults in the profile
23:51 Ryan_Lane another thought was to add a default pillar key that can be used to set the default
23:52 Ryan_Lane which is a similar pattern we have, but that pattern makes the pillar key redefinable, too
23:52 whytewolf well there is always the config.get style setup
23:52 Ryan_Lane maybe I'm answering my own question and I should just use the same pillar style pattern
23:52 Ryan_Lane yeah. the pillar pattern uses config.get
23:53 Ryan_Lane so it doesn't necessarily need to be pillar. it can be in the config or in grains
23:53 Ryan_Lane that's probably the best option
23:53 whytewolf yeah. i would have to agree.
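
A sketch of where a default would live under the config.get pattern being agreed on here; the key name is invented, and the module side would fetch it with __salt__['config.get'], which falls back through minion config, grains and pillar:

    # pillar (or grains, or minion config) consulted by config.get
    boto_asg.new_option_default: True
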
23:53 MTecknology I just re-deployed my laptop using a new SSD. In theory, after "apt-get install salt-minion; cat endsalt.domain.tld >> minion.d/master.conf; salt-call state.highstate" I /should/ be able to just drop my /home dir back in and be to the exact place I was at before I screwed up this morning.
23:53 MTecknology wish me luck!
23:53 whytewolf GL MTecknology
23:53 Ryan_Lane thanks for rubber ducking that with me :)
23:54 whytewolf np
23:54 sh123124213 Ryan_Lane : I would go with pillar, you want to run states with different profiles depending on the server they are on or just have default profile for all servers ?
23:54 justanotheruser joined #salt
23:54 Ryan_Lane we can define the profile per state
23:54 Ryan_Lane (and we do)
23:55 heewa joined #salt
23:55 Ryan_Lane I actually have an example I use for my own sites I guess
23:55 Ryan_Lane https://github.com/ryan-lane/ryandlane.com/blob/master/salt/orchestration/states/aws.sls
23:56 MTecknology Ryan_Lane: heh.. quite the state id
23:56 Ryan_Lane state IDs as docs is a great pattern IMO :)
23:57 MTecknology I've seen it, but never quite like that
23:57 MTecknology what's it look like when you want to depend on one of those then?
23:57 Ryan_Lane we don't
23:57 Ryan_Lane well, that's not necessarily true
23:57 Ryan_Lane when we use listen_in, we use the name, not the id
23:58 Ryan_Lane I guess maybe that's not true either :D
23:58 whytewolf lol
23:59 Ryan_Lane it would be something like: listen_in... - boto_asg: Ensure {{ grains.workers.web.cluster_name }} asg exists
23:59 Ryan_Lane we basically never use watch, or requires
