
IRC log for #salt, 2013-11-01


All times shown according to UTC.

Time Nick Message
00:08 druonysus joined #salt
00:10 __number5__ wibberwock: will include all sls matched
00:10 wibberwock thanks!
00:19 justlooks joined #salt
00:20 zzzirk joined #salt
00:23 gatoralli joined #salt
00:31 cbloss joined #salt
00:35 bhosmer joined #salt
00:56 zzzirk joined #salt
01:03 shinylasers joined #salt
01:07 mufa left #salt
01:07 redondos joined #salt
01:07 redondos joined #salt
01:17 jacksontj do we have any idea when 0.17.2 is going to be cut?
01:19 jacksontj with 0.17.1 i get some nasty stack traces from the master's worker processes
01:20 indymike joined #salt
01:21 nahamu jacksontj: I think UtahDave said sometime next week.
01:21 jacksontj ok :)
01:21 yano joined #salt
01:23 jacksontj just running into issues with https://github.com/saltstack/salt/issues/8134
01:23 jacksontj i tried to cherry pick the fix in, but that didn't fix it
01:24 jacksontj and if i set the id on the minion it doesn't fix the issue either
01:24 fatbox joined #salt
01:37 xinkeT joined #salt
01:39 mapu joined #salt
01:47 prooty joined #salt
01:53 cewood joined #salt
01:55 ipmb joined #salt
02:00 patrek joined #salt
02:02 oz_akan_ joined #salt
02:10 Teknix joined #salt
02:25 Teknix joined #salt
02:26 oz_akan_ joined #salt
02:34 jcockhren anyone been playing with lxc module?
02:35 oz_akan_ joined #salt
02:45 nmistry joined #salt
03:07 redondos joined #salt
03:07 redondos joined #salt
04:09 pdayton joined #salt
04:17 elfixit joined #salt
04:19 pdayton1 joined #salt
04:27 druonysus joined #salt
04:27 druonysus joined #salt
04:27 anuvrat joined #salt
04:28 elfixit joined #salt
04:30 CheKoLyN joined #salt
04:44 elfixit joined #salt
04:55 jcockhren Are syndics not supposed to be visible to their master?
04:55 jcockhren syndic masters*
05:03 druonysus joined #salt
05:16 anuvrat joined #salt
05:21 cachedout joined #salt
05:50 druonysus joined #salt
05:50 druonysus joined #salt
05:59 g4rlic left #salt
06:11 gildegoma joined #salt
06:14 Destro joined #salt
06:31 ajw0100 joined #salt
06:32 Niichan joined #salt
06:35 tallpaul joined #salt
06:36 MTecknology joined #salt
06:36 carmony joined #salt
06:36 pmrowla joined #salt
06:38 amahon joined #salt
06:39 druonysus joined #salt
06:39 druonysus joined #salt
06:48 MTecknology joined #salt
06:48 tallpaul joined #salt
06:48 carmony joined #salt
06:49 pmrowla joined #salt
06:49 oleksiy joined #salt
07:00 tallpaul joined #salt
07:00 throwanexception joined #salt
07:00 MTecknology joined #salt
07:01 pmrowla joined #salt
07:01 carmony joined #salt
07:02 [ilin] joined #salt
07:07 pdayton joined #salt
07:07 az87c joined #salt
07:12 xinkeT joined #salt
07:18 carmony_ joined #salt
07:18 Nexpro1 joined #salt
07:27 Destro joined #salt
07:30 ml_1 joined #salt
07:36 linuxnewbie joined #salt
07:36 linuxnewbie joined #salt
07:44 slav0nic joined #salt
07:50 bhosmer joined #salt
08:01 balboah joined #salt
08:02 Katafalkas joined #salt
08:02 linjan_ joined #salt
08:04 sebgoa joined #salt
08:07 dave_den joined #salt
08:26 ml_11 joined #salt
08:32 gasbakid joined #salt
08:37 ipmb joined #salt
08:49 patrek_ joined #salt
08:51 EvaSDK_ joined #salt
08:51 nahamu_ joined #salt
08:52 alexandr1l joined #salt
08:53 eqe joined #salt
08:53 zz___ joined #salt
08:53 Zethrok_ joined #salt
08:54 eskp_ joined #salt
08:54 eskp_ joined #salt
08:54 octagona1 joined #salt
08:54 robins joined #salt
08:54 Ymage_ joined #salt
08:55 tonthon_ joined #salt
08:59 eightyeight joined #salt
08:59 MK_FG joined #salt
09:06 monokrome joined #salt
09:09 tseNkiN joined #salt
09:13 jkleckner joined #salt
09:25 rjc joined #salt
09:28 oleksiy joined #salt
09:29 krissaxton joined #salt
09:30 krissaxton left #salt
09:34 az87c joined #salt
09:34 bhosmer joined #salt
09:35 rawzone joined #salt
09:38 slav0nic hello, why does salt-bootstrap on centos6.4 install version 0.16.4 and not 0.17?
09:43 az87c joined #salt
09:44 az87c_ joined #salt
09:50 rjc joined #salt
10:04 redondos joined #salt
10:11 micko joined #salt
10:12 giantlock joined #salt
10:15 ravibhure joined #salt
10:15 pkimber joined #salt
10:21 dave_den joined #salt
10:25 srage joined #salt
10:26 nhanpt-rad joined #salt
10:27 nhanpt-rad anyone?
10:27 nhanpt-rad Hello, anyone there?
10:29 nhanpt-rad @@
10:31 dave_den joined #salt
10:33 linuxnewbie joined #salt
10:33 linuxnewbie joined #salt
10:34 redondos joined #salt
10:34 bhosmer joined #salt
10:39 malinoff joined #salt
10:59 [ilin] joined #salt
10:59 [ilin] joined #salt
11:15 patrek joined #salt
11:21 ggoZ joined #salt
11:36 jslatts joined #salt
11:39 redondos joined #salt
11:39 redondos joined #salt
11:47 lemao joined #salt
12:08 blee joined #salt
12:11 gldnspud joined #salt
12:22 aleszoulek joined #salt
12:24 bhosmer joined #salt
12:35 g3cko joined #salt
12:39 krissaxton joined #salt
12:40 timoguin joined #salt
12:41 jslatts joined #salt
12:44 vkurup joined #salt
12:46 imaginarysteve joined #salt
12:46 gasbakid joined #salt
12:46 pdayton joined #salt
12:48 urtow joined #salt
12:56 TonnyNerd joined #salt
13:06 racooper joined #salt
13:08 oz_akan_ joined #salt
13:08 Gifflen joined #salt
13:08 Gifflen_ joined #salt
13:09 oz_akan_ joined #salt
13:10 ipmb joined #salt
13:10 krissaxton joined #salt
13:20 juicer2 joined #salt
13:23 oleksiy joined #salt
13:24 canci joined #salt
13:27 snuffeluffegus joined #salt
13:28 snuffeluffegus joined #salt
13:29 Kholloway joined #salt
13:32 anthrope joined #salt
13:35 pass_by_value joined #salt
13:37 btorch morning
13:40 redondos joined #salt
13:40 redondos joined #salt
13:43 mua joined #salt
13:48 quickdry21 joined #salt
13:49 cbloss joined #salt
13:52 mua joined #salt
13:52 tempspace Good morning
13:52 btorch morning
13:52 m_george left #salt
13:53 pass_by_value Morning btorch and tempspace
13:54 kaptk2 joined #salt
13:55 whiteinge seanz: i have seen those @<rev> refs but not for every branch. i'm not sure why they're created
14:01 kermit joined #salt
14:02 krissaxton joined #salt
14:02 xt So.. nested pillars are broken in 0.17.1 but works in latest git, anyone know anything about this?
14:08 aptiko joined #salt
14:09 anthrope Hey, folks.  How does 'watch' work with commands?
14:10 anthrope Specifically, I've got a pip.installed state that passes the 'upgrade: True'.
14:10 anthrope On its own, it executes the state every single time to check for an upgrade.
14:10 anthrope I want to gate that state so it doesn't always run.
14:11 anthrope I've got a script that returns 0 if an upgrade is required and 1 otherwise.
14:11 Jared_ joined #salt
14:11 anthrope - watch: -cmd: myscript doesn't seem to work because if the script fails, the state fails
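[editor's note: gating a state on a check script is usually done with `onlyif`/`unless` rather than `watch`, since a failing watch command fails the state. A sketch under assumptions: the package name and script path are hypothetical, and `onlyif` on non-cmd states requires a Salt release newer than the 0.17 discussed here, where it was limited to cmd states.]

```yaml
# Sketch (hypothetical names). 'onlyif' runs the state only when the
# command exits 0, which matches anthrope's script that exits 0 when
# an upgrade is required. On 0.17-era Salt, onlyif/unless only worked
# on cmd.* states; newer releases accept them on any state.
myapp:
  pip.installed:
    - name: myapp
    - upgrade: True
    - onlyif: /usr/local/bin/upgrade-needed.sh
```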
14:11 Jared_ Good morning!
14:12 lineman60 joined #salt
14:12 aptiko left #salt
14:13 mwillhite joined #salt
14:13 bhosmer joined #salt
14:14 lineman60__ joined #salt
14:15 Guest79642 I've got Salt up and running on a portion of my network and it's running fine, but I've received word from another person in my company that he wants to go with Ansible. Although my boss is with me on Salt, I've been tasked to sell it. Does anyone have a side-by-side feature comparison that would help me make my case?
14:15 baffle I need to set the weight of an instance in keepalived in a configuration file (jinja template). Is there a good way to f.ex calculate the weight based on the last octet of the IP? :-P
14:15 baffle I.e. two servers share the configuration file but need to have a different value for weight..
14:16 baffle Hmm, one is master, one is backup as well. Maybe I should just force it by using a grain or something..
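[editor's note: baffle's "weight from the last octet" idea can be sketched in the template itself. Assumptions: the standard `ip_interfaces` grain is populated, the interface is `eth0`, and keepalived's per-instance knob is `priority`.]

```jinja
{# Sketch: derive the keepalived priority from the last octet of
   eth0's first IP. Grain name and interface are assumptions. #}
{% set ip = grains['ip_interfaces']['eth0'][0] %}
priority {{ ip.split('.')[-1] }}
```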
14:21 d0ugal_ joined #salt
14:21 d0ugal joined #salt
14:21 elfixit joined #salt
14:22 jumperswitch joined #salt
14:26 micah_chatt joined #salt
14:29 vkurup xt: i don't know anything about it, but wondering if it is a fix for this? https://github.com/saltstack/salt/issues/7625
14:32 opapo joined #salt
14:33 brentsmy_ joined #salt
14:33 brentsmyth joined #salt
14:34 hazzadous joined #salt
14:34 jcockhren ok. Been playing with the lxc module. I can now successfully create an lxc and bootstrap it with salt. Now I just need to have it registered as a minion
14:34 mannyt joined #salt
14:34 amahon joined #salt
14:35 jcockhren it seems much easier to make a custom template than to rely on lxc.init's install and seed arguments
14:37 aberant joined #salt
14:44 xt vkurup: thanks a lot, that's exactly what I was looking for
14:45 alunduil joined #salt
14:46 bhosmer joined #salt
14:47 vkurup xt: you're welcome. i'm waiting to upgrade til it's fixed, so it's good to know that it's in git now
14:47 pdayton joined #salt
14:47 xt I got pretty mad about it
14:47 xt hehe
14:48 xt every time salt upgrades, something breaks horribly
14:48 jcsp joined #salt
14:49 newellista joined #salt
14:49 vkurup don't tell me that, i'm a new user :)
14:49 ja_ joined #salt
14:54 btorch hmm, so salt + pillar/jinja doesn't support all of the jinja features? http://jinja.pocoo.org/docs/templates/
14:55 btorch I'm trying to get my head around salt and pillar now. trying to create an ini style config dynamically
14:57 aptiko joined #salt
15:00 jcockhren let's say there's the setup: host1(master) -> host2(master&syndic) -> host3(minion)
15:00 jcockhren How do we make host2 visible to host 1?
15:01 Brew joined #salt
15:01 jcockhren on this topology, running salt '*' test.ping only returns results from host3
15:01 xt vkurup: then again, Im a very old user, so maybe it really got better recently :-)
15:01 jcockhren (when ran from host1)
15:03 jcockhren running salt-minion on host2 seems to block/supersede their syndic function
15:03 jcockhren so only then can host2 be visible to host1
15:03 newellista joined #salt
15:04 jcockhren with that said, assuming only syndic, any function that globs for matches will hang for a bit b/c salt looks through its accepted keys
15:04 krissaxton joined #salt
15:05 jcockhren host2 has an accepted key on host1 but is not visible and thus can't be directly controlled
15:05 jcockhren (like a normal minion)
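[editor's note: a sketch of jcockhren's host1 → host2 (syndic) → host3 topology. This is an assumption-laden outline, not confirmed config from the channel.]

```yaml
# /etc/salt/master on host2: point its local master at host1.
syndic_master: host1

# host2 then runs both salt-master and salt-syndic. A syndic only
# relays jobs to its own minions (host3); for host2 itself to answer
# test.ping from host1, it also needs a salt-minion with
# 'master: host1' in /etc/salt/minion, keyed on host1.
```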
15:05 whidbeywalker joined #salt
15:06 tempspace How does gitfs handle a git repo that can't be contacted (say, github is being DDoS'd...again...)
15:08 timoguin tempspace, pretty sure it logs an error (or warning?) that the remote can't be contacted.
15:08 timoguin but it should continue operating if it's already cached
15:09 ctdawe joined #salt
15:12 tempspace I'm trying to figure out if I can use two identical repos at two different services, and if one goes down, have it not be a problem
15:12 cachedout joined #salt
15:15 timoguin tempspace, hmm... well i know it will look through a list of remotes searching for state files, for example.
15:15 timoguin so i would think if you listed both in the gitfs_remotes it would try them both
15:15 timoguin but that's just a guess
15:17 my_mom joined #salt
15:17 tempspace yeah that was my guess too
15:17 tempspace was hoping someone here actually did it
15:21 berto- joined #salt
15:24 Nexpro joined #salt
15:25 m_george|away joined #salt
15:25 btorch anyone know how to stop newline char before and after a jinja print statement ?
15:26 m_george left #salt
15:26 cnelsonsic joined #salt
15:27 whiteinge btorch: http://jinja.pocoo.org/docs/templates/#whitespace-control
15:27 whiteinge the syntax is a tad obtuse...
15:27 btorch I tried that
15:28 tempspace So what is the main allure of using gitfs rather than a local folder containing data from the git repo, is it just that it's auto polling every minute or whatever?
15:28 btorch I've been trying several things from that page but when I do the state.highstate it complains about it
15:29 whiteinge btorch: pastebin?
15:31 btorch whiteinge: I think I got it now .. I was only trying '+' and not the '-' one ... testing
15:32 * btorch this is awesome :)
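[editor's note: the whitespace-control trick btorch landed on looks roughly like this; section and key names are made up for illustration.]

```jinja
{# A '-' inside a Jinja tag strips whitespace (including the newline)
   on that side of the tag; '+' forces it to be kept. Without the '-'
   markers each {% for %} line would leave a blank line behind. #}
[section]
{%- for key in ['first', 'second'] %}
{{ key }} = 1
{%- endfor %}
```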
15:33 brianhicks joined #salt
15:36 renoirb Did you succeed in using gitfs for both pillars and states, tempspace?
15:37 tempspace renoirb: I've never used gitfs, just trying to understand why I'd want to
15:38 renoirb The main reason you would want that is that you have a local workspace to edit, then you push to the remote git. Then the production environment can read from that remote git, sync their copy locally then you can use salt with them.
15:38 tempspace Right, but I have that right now by having a git repo living in my filesystem
15:38 renoirb So you can have more than only one environment and use git to store the manifests.
15:38 CheKoLyN joined #salt
15:39 renoirb Right, but ideally you should not change things on a live system.
15:39 tempspace I don't, I just run 'git pull' on the master after things pass through our testing
15:40 renoirb Work somewhere else, on a branch, code, then run the things, make it work, when happy, merge-rebase to master. Push to remote. Then in production pull then deploy.
15:40 renoirb sure. so you do it manually, that's your point.
15:40 renoirb :)
15:40 tempspace Right
15:40 tempspace So is it just the auto polling I'd be gaining?
15:40 renoirb So, yeah, I am now wondering the same as you now then :)
15:41 renoirb BTW, i'm not expert on Salt stack and git.
15:41 renoirb I see one thing tempspace
15:42 renoirb Imagine your deployment server is also managed by salt. A deployment server can then be created by adding appropriate grains and adding gitfs repositories in a master.d/file.conf
15:42 baffle How do I refer to a dict-entry in a grain? I would assume it was something like {{ grains['something']['entry'] }} but that doesn't seem to work..
15:42 renoirb So the re-build of a deployment server in a given environment is only a few files
15:42 renoirb few as in more or less one
15:42 renoirb more or at least one
15:43 renoirb baffle: is it in a state?
15:43 tempspace baffle: try salt['grains.get']('SOMETHING:ENTRY')
15:43 baffle renoirb: No, jinja template.
15:43 Kizano_droid joined #salt
15:43 Kizano_droid Hi all
15:44 baffle Oh.
15:44 tempspace {{ salt['grains.get']('SOMETHING:ENTRY') }} more specifically
15:45 renoirb baffle: …  oh, yes, sure.  state file called template using default rendering engine with template: jinja.
15:45 Kizano_droid how does salt handle differences between data and Logic?
15:45 renoirb I think you can do too {{ salt['grains.get']('SOMETHING:ENTRY', 'DEFAULT VALUE') }}
15:45 Kizano_droid I feel like I often have to have to program my set up whenever I use puppet....
15:46 Kizano_droid Hiera was cool, but it still doesn't quite merge arrays right....
15:46 renoirb Kizano_droid: If I recall correctly Hiera was also managing git repositories in Puppet, right?
15:46 forrest joined #salt
15:48 ctdawe joined #salt
15:48 baffle Hmm, I don't get anything actually.. Just "grains['something']" will give me "[{'role': 'MASTER'}, {'weight': 100}]" .. But salt['grains.get']('something.role') is blank.
15:49 Kizano_droid renoirb: you could, if git was the underlying layer in conf management....
15:49 baffle something:role I mean.
15:50 pentabular joined #salt
15:50 baffle salt['grains.get']('something') gives me same as grains['something']
15:51 mwillhite joined #salt
15:51 anthrope forrest: I left work and forgot to say thank you for your help yesterday.  My bad.
15:51 dave_den baffle: looks like it's an array of dicts.
15:51 dave_den [{
15:52 forrest anthrope, no worries! Did that work out for you? Or were you able to find a better way to do it?
15:52 sijis joined #salt
15:52 sijis is there a way to specify a list of minions with a file? .. instead of using salt -L server1,server2 ?
15:52 ddv joined #salt
15:53 anthrope I ended up doing the cmd to manually stop and start the service.
15:53 anthrope (bah)
15:54 forrest hah, well at least it worked, it sucks that restart doesn't work.
15:54 redondos joined #salt
15:54 redondos joined #salt
15:56 baffle dave_den: I'd assume I could do something like grains['something']['role'] but that doesn't work..
15:57 jumperswitch joined #salt
15:57 dave_den baffle: it does, but you have a list of dicts in 'something'.
15:57 newellista joined #salt
15:57 dave_den something is a list, not a dict.
15:57 foxx joined #salt
15:57 foxx joined #salt
15:58 anthrope Is there a way to get salt to explode on parameters it doesn't recognize?
15:58 anthrope I found myself mistyping "require" in one of my states and being puzzled as to why it wasn't being respected.
15:58 dave_den sijis: yes, use your shell's variable substitution to feed the list of minions from the file in the proper format.
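[editor's note: dave_den's shell-substitution suggestion for sijis might look like this; the file path and minion ids are hypothetical.]

```shell
# Hypothetical file with one minion id per line:
printf 'server1\nserver2\nserver3\n' > /tmp/minions.txt

# paste -s joins all lines into one; -d, separates them with commas,
# producing exactly the format 'salt -L' expects:
TARGETS="$(paste -sd, /tmp/minions.txt)"
echo "$TARGETS"    # server1,server2,server3

# Then target the list:
#   salt -L "$TARGETS" test.ping
```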
15:59 anthrope Or rather, is there a good reason to allow arbitrary YAML?
15:59 dave_den anthrope: right now there's not a way to check the yaml against a schema definition within salt
15:59 renoirb baffle, because the syntax of salt['grains.get']('something:role')… with :
15:59 forrest anthrope, there's been some slow progress on a few types of states to 'explode' when it finds an error, and let you know what the actual issue is with the yaml (as opposed to just a python error)
16:00 renoirb salt['function call']('function params')
16:00 forrest I don't know what the status of that is, I think the contribution was just coming from a community member.
16:00 cheus joined #salt
16:00 dave_den forrest: yeah, but that's with malformed yaml. anthrope has valid yaml but a misspelled entry
16:01 forrest dave_den, good point!
16:01 dave_den so unless he is misspelling a required argument for a module, it won't explode
16:02 tempspace baffle: what rennoirb said, it's a : not a .
16:02 baffle dave_den: Oh, maybe I'm setting up a list instead of a dict somewhere..
16:02 baffle dave_den: When defining the grain. :)
16:03 dave_den baffle: exactly :)
16:03 jkleckner joined #salt
16:03 dave_den you probably have ' - ' in front of your 'role' in 'something'
16:03 dave_den just take that out
16:03 dave_den and remove it from 'weight', too.
16:05 cheus Hi. Is there any particular recipe for using salt for continuous delivery when each application's salt configuration is found in the application source tree? I'd like to avoid having to build a monolithic state tree for all of our applications; ideally I'd like to think we could use salt remote execution commands to upload post-build application files and a local state tree, then use a (remotely executed) --local salt-call to build the app state, set perms,
16:05 cheus etc
16:05 jalbretsen joined #salt
16:05 baffle dave_den: I was doing grain.setval something '( 'role': 'MASTER', 'weight': '100' )'   instead of '{ 'role': 'MASTER', 'weight': '100' }'
16:06 dave_den baffle: yeah, that'll do it :)
16:06 baffle grains[''][''] worked now.
16:07 baffle Any reason to use salt['grains.get'] instead of just grains[] btw?
16:07 krissaxton joined #salt
16:07 dave_den baffle: salt['grains.get'] uses the salt grains module http://docs.saltstack.com/ref/modules/all/salt.modules.grains.html  grains[] is a straight python dict
16:08 whiskybar joined #salt
16:09 dave_den cheus: sure, you could bundle all of the state files in the application's source tree and just execute salt-call --local in your deploy process
16:09 dave_den cheus: i don't have an example to show you, tho - sorry
16:10 cheus dave_den, That seemed to make sense but what threw me was all of the extra config options (eg, files_root), that have to be put into the minion config. If my minion isn't truly masterless, how do I 'dynamically' pass it a new top?
16:12 dave_den baffle: just think of salt[] as  salt['module.function']('function arguments')
16:12 baffle dave_den: I guess the main benefit you get from using the module is that you can supply a default.
16:12 dave_den where the modules are http://docs.saltstack.com/ref/modules/all/index.html
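[editor's note: once baffle's grain is a dict rather than a list of dicts, both access styles from this exchange work; the default value shown is illustrative.]

```jinja
{# Plain dict access vs. the grains.get execution module: the module
   takes a colon-delimited path and, optionally, a default value. #}
{{ grains['something']['role'] }}
{{ salt['grains.get']('something:role', 'BACKUP') }}
```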
16:13 dave_den cheus: where's the top file coming from?
16:14 newellista joined #salt
16:16 KyleG joined #salt
16:16 KyleG joined #salt
16:16 smccarthy joined #salt
16:16 AdamSewell joined #salt
16:16 cheus dave_den, I haven't really decided. I assumed we'd use a gitfs backend for the majority of the master/minion pieces but hadn't really sketched out much beyond that. The server's generic top would be there ignorant of the application top which I assume would be packaged with the application files
16:17 dave_den cheus: are you just starting with salt altogether?
16:17 Ryan_Lane joined #salt
16:18 cheus dave_den, in this setting, yes. Been using it extensively solely for config. management in another environment but this one we're starting fresh
16:18 dave_den you may need to play with it a bit to get a feel for how it all works, then that may help you decide how to design your application deployments
16:18 cheus also the first time trying it for app deployment
16:19 micah_chatt joined #salt
16:19 dave_den if your minion is already connected to a central master, i'm not quite sure why you want to try to do --local deployments?
16:20 dave_den sounds like you would just create deployment state files that can do the minion config based on pillar information to supply branch/tag/version info, etc.
16:21 dave_den cheus: also, check out overstate if you haven;t already. it can be handy for deployment scenarios across multiple minions
16:21 dave_den gotta run - back later!
16:21 troyready joined #salt
16:21 pentabular joined #salt
16:24 green_salter joined #salt
16:25 anuvrat joined #salt
16:25 jimallman joined #salt
16:28 fllr joined #salt
16:29 jacksontj joined #salt
16:30 bemehow joined #salt
16:31 ctdawe joined #salt
16:35 foxx joined #salt
16:35 sijis dave_den: ok. i'll give that a shot
16:35 sijis thanks
16:35 sijis left #salt
16:40 dustyfresh joined #salt
16:45 karlgrz joined #salt
16:45 nmistry joined #salt
16:46 karlgrz Hey all, I have a salt state that pulls from a git repo. How would I go about restarting nginx and uwsgi AFTER that pull is made?
16:46 UtahDave joined #salt
16:47 karlgrz Do I have to include the state in my nginx and uwsgi states? I'd prefer to not have to do that, since I could have many, many sites after a while and I'd hate to keep updating those every time a new site is configureg
16:47 karlgrz *configured
16:48 forrest karlgrz, I'd suggest using 'watch_in'
16:48 forrest https://github.com/gravyboat/hungryadmin-sls/blob/master/salt/hungryadmin/app.sls#L36
16:48 forrest there's an example
16:51 jkleckner joined #salt
16:51 karlgrz worked beautifully, exactly what I wanted. Thanks!
16:53 Kizano_droid left #salt
16:53 forrest karlgrz, np!
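[editor's note: the `watch_in` pattern forrest pointed karlgrz at, sketched with hypothetical repo URL, paths, and service names. `watch_in` lets each site's git state inject itself into the services' watch lists, so the nginx/uwsgi states never need editing when a new site is added.]

```yaml
https://github.com/example/mysite.git:
  git.latest:
    - target: /srv/mysite
    - watch_in:
      - service: nginx
      - service: uwsgi

nginx:
  service.running:
    - enable: True

uwsgi:
  service.running:
    - enable: True
```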
16:53 linjan__ joined #salt
16:55 druonysus joined #salt
16:57 mpanetta joined #salt
17:00 mpanetta joined #salt
17:01 UtahDave forrest++
17:02 ctdawe joined #salt
17:05 pentabular joined #salt
17:07 xmltok joined #salt
17:07 krissaxton joined #salt
17:13 ctdawe joined #salt
17:20 terminalmage joined #salt
17:21 btorch hmm state.highstate only goes over init.sls ?
17:25 pentabular joined #salt
17:25 brimpa joined #salt
17:27 dustyfresh is it possible to use the minion's id as the variable in a salt state file?
17:29 amckinley joined #salt
17:30 EugeneKay Yes; see the grains system.
17:30 jacksontj in the state module there is a show_top function, but that just shows the compiled top file-- not the matches for the minion
17:30 jacksontj i thought that did/used to?
17:32 jacksontj btorch: it should go over all states assigned in top.sls
17:32 jacksontj dustyfresh: yea, it should be available as a grain i believe
17:33 dustyfresh thanks for getting me pointed in the right direction guys! :)
17:36 xmltok from my understanding when writing a formula you should only put the platform specific variables into a centralized map.jinja. what about defaults for other pillar values, they all go into the individual sls and template files?
17:37 karlgrz When using a git checkout, how do I require that in another step?
17:37 karlgrz require: ssh://git@github.com/username/repo.git?
17:37 karlgrz or do I have to give that a separate name?
17:38 UtahDave xmltok: that's one way of doing it.
17:38 bhosmer joined #salt
17:39 ajw0100 joined #salt
17:39 xmltok seems bad to have it all spread out everywhere, especially since if i want to change things in the future ill probably want to change it on all nodes. it makes more sense to have one default pillar data location
17:43 xmltok i like the map.jinja stuff for the variables, even if they're not OS specific, it seems like a good idea
17:43 mwillhite joined #salt
17:44 whiteinge xmltok: that same pattern is awesome for any/all places you find it useful to cut down on repetition or spreading info out over many files
17:45 whiteinge pillar is a great place for it :)
17:45 dave_den karlgrz: you will use '- require:\n    - git: state_id_declaration'
17:46 whiteinge i still find the pattern useful even if i'm only working in a single .sls file (state or pillar) to just make a map dictionary at the top of the file
17:46 whiteinge if i'm working with lots of conditional data or lots of repeat data
17:46 xmltok im thinking even for default values that are going into my templates too
17:46 karlgrz dave_den: cool, thanks!
17:47 karlgrz dave_den: I think i might just give it a shorter name, then use name: ssh://blabla in the git state
17:47 dave_den karlgrz: yep, you can do that
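[editor's note: karlgrz's "shorter name plus `name:`" approach, sketched with hypothetical ids and paths: the checkout gets a short state id, the URL goes in `name`, and other states require it via `- git: <state id>`.]

```yaml
mysite-code:
  git.latest:
    - name: ssh://git@github.com/username/repo.git
    - target: /srv/mysite

mysite-perms:
  cmd.run:
    - name: chown -R www-data:www-data /srv/mysite
    - require:
      - git: mysite-code
```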
17:47 forrest hey whiteinge, for the saltstack-formulas repo, if we have a new formula to add, do we just email you? I thought there were docs somewhere that explained how to get them added, but I can't find them
17:48 xmltok when using filter_by you get a merge option, is there anything like that for non-platform specific grains? i want to have my map.jinja have two merges, one that matches on OS and one that doesnt
17:48 xmltok i can already see where it might be good to have in the chef formula (https://github.com/saltstack-formulas/chef-formula/blob/master/chef/map.jinja)
17:48 dustyfresh so, to declare a variable in my state file for the id of a minion using grains would be like... {% set app_name = grains['id'] %}
17:50 Ahlee are returners the best way to detect failures? I.e. I just had a command fail on two of 200 minions
17:50 Ahlee completely bs failure (out of inodes), but i really need to know when that happens.
17:50 whiteinge xmltok: filter_by and the merge arg should work for any grain. it takes a second arg that specifies which grain you're interested in filtering on (defaults to os_family)
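[editor's note: the two-merge map.jinja xmltok described can be sketched like this. The `role` grain, pillar keys, and values are assumptions; `filter_by`'s second argument selects the grain to filter on, and `merge` overlays pillar data.]

```jinja
{% set os_map = salt['grains.filter_by']({
    'Debian': {'pkg': 'chef'},
    'RedHat': {'pkg': 'chef'},
}, merge=salt['pillar.get']('chef:lookup')) %}

{% set role_map = salt['grains.filter_by']({
    'master': {'client_interval': 60},
    'node':   {'client_interval': 1800},
}, grain='role', merge=salt['pillar.get']('chef:role_lookup')) %}
```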
17:51 bemehow_ joined #salt
17:51 whiteinge forrest: i'll add you to the saltstack-formulas github organization, then you can transfer the repo to the org
17:51 xmltok i guess i still need to find a grain that will match everything everytime
17:51 Ahlee I know i've asked this before, but my memory is crap and I didn't write it down
17:51 forrest ok cool, thanks whiteinge, it's one I haven't tested yet on 0.17 so I'll have to try and get to that this weekend.
17:52 whiteinge sounds good!
17:52 bemehow joined #salt
17:52 whiteinge ooc, what's the formula for?
17:55 forrest fail2ban
17:55 bemehow_ joined #salt
17:55 forrest it's an old one I put together a while back while experimenting
17:55 forrest just been busy and haven't gotten around to testing it out.
17:55 jesusaurus forrest: ooh, we should compare fail2ban formulae
17:55 forrest jesusaurus, https://github.com/gravyboat/fail2ban-formula It's just a bare bones one
17:56 forrest just Debian/RedHat right now.
17:56 forrest but it doesn't modify the configs, just drops them in
17:56 jesusaurus mine is pretty simple too, just ssh and nginx basics
17:57 jesusaurus https://github.com/jesusaurus/hpcs-salt-state/tree/master/fail2ban
17:58 forrest ahh yea, yours is a lot more complex than mine
17:58 forrest I tried making mine just a 'plug and play' style, so people could modify from there.
17:59 jesusaurus i went with the approach of "include what you need", so anything that includes `nginx` should also include `fail2ban.nginx`
15:59 xmltok is there an easy way to dump my pillars and other info from my jinja sls? like i want to know if my map.jinja is creating the things i am looking for. i just get random keys not found and it's trial and error to figure out where that is coming from
18:01 krissaxton joined #salt
18:01 bemehow joined #salt
18:03 jesusaurus forrest: that jinja map thing you're doing is nifty
18:03 forrest yea that seems to be the new standard
18:03 forrest or it has been for a bit
18:03 forrest it's really cool
18:03 forrest makes my states a LOT cleaner.
18:04 ddv joined #salt
18:13 amahon joined #salt
18:13 ctdawe joined #salt
18:16 jcsp1 joined #salt
18:16 newellista joined #salt
18:19 bemehow_ joined #salt
18:19 jacksontj seems that i've found a bug in the compound matching in the top files
18:20 jacksontj let me check if its on develop before i get too far into this
18:20 jacksontj looks like it
18:20 jacksontj so, if you use a compound matcher in a top file it will not match correctly-- it defaults to glob
18:23 xmltok joined #salt
18:27 druonysus joined #salt
18:28 blee joined #salt
18:30 anti_ joined #salt
18:30 jslatts joined #salt
18:31 quickdry21_ joined #salt
18:31 jacksontj oh, wait no you just have to supply match: :D
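[editor's note: the fix jacksontj found, sketched in a hypothetical top.sls: a compound matcher in the top file needs an explicit `- match: compound`, otherwise the expression is treated as a glob.]

```yaml
base:
  'web* and G@os:Ubuntu':
    - match: compound
    - webserver
```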
18:34 xinkeT joined #salt
18:34 Brew joined #salt
18:38 josephholsten joined #salt
18:41 Kholloway joined #salt
18:42 lineman60__ joined #salt
18:42 Boohbah joined #salt
18:46 opapo joined #salt
18:47 Katafalkas joined #salt
18:48 nmistry joined #salt
18:50 Katafalk_ joined #salt
18:50 jdenning joined #salt
18:50 aberant joined #salt
18:54 imaginarysteve joined #salt
18:54 brianhicks joined #salt
18:56 sebgoa joined #salt
18:57 ctdawe joined #salt
18:57 ajw0100 joined #salt
19:00 bemehow joined #salt
19:00 wibberwock joined #salt
19:00 wibberwock docs.saltstack.com down?
19:01 Corey wibberwock: No, it is not.
19:01 Corey It works fine here, perhaps your DNS is broken?
19:01 Corey Or perhaps you typo'd it?
19:01 lineman61 joined #salt
19:01 wibberwock i'm getting a domain not claimed page
19:02 Cidan joined #salt
19:02 Corey wibberwock: Yeah, I'm not. :-) From... three locations now.
19:02 dave_den works for me
19:02 Cidan joined #salt
19:03 bemehow_ joined #salt
19:03 wibberwock just fixed for me, weird.
19:04 zz_Cidan joined #salt
19:05 cowmix joined #salt
19:07 ashtonian joined #salt
19:07 jcsp joined #salt
19:11 srage joined #salt
19:14 bemehow_ joined #salt
19:15 Brew1 joined #salt
19:15 notanumber joined #salt
19:18 Brew joined #salt
19:18 seanz Greetings. Is there any "salt" way to set environment variables prior to a cmd.run?
19:19 notanumber So, I figured out the line that was making sudo fail yesterday.  For some reason, having a `user: - present - shell: /bin/bash` does it.
19:19 notanumber Without out that, all works properly.
19:20 dave_den seanz: you can set environement variables during cmd.run
19:21 seanz dave_den: You mean by setting them just before the command?
19:21 seanz VAR=value command_to_run   ?
19:21 dave_den no, by setting the env argument
19:21 seanz Oh...checking the docs again...
19:22 oleksiy joined #salt
19:22 seanz Ah, thanks, dave_den! There it is.
19:23 whiteinge wibberwock: sorry for the docs hiccup. i made two dns changes at the same time instead of waiting on the one before making the other
19:25 dave_den seanz: e.g. salt 'host' cmd.run env='{MY_ENV: blah}' 'echo $MY_ENV'
19:27 seanz dave_den: That makes sense. What about states that don't support env?
19:27 seanz Such as postgres.
19:27 seanz Would you recommend a cmd.run prior to that, just to set the variables?
19:28 dave_den salt won't pick up those variables
19:29 dave_den there's probably an alternative way to specify your settings, like a .postgres file
19:31 seanz dave_den: Thanks. That's the conclusion I'm arriving at.
19:33 dave_den np
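[editor's note: the state-file form of dave_den's CLI example for seanz, as a sketch. In 0.17-era Salt, cmd.run's `env` argument took a dict as shown; later releases prefer a list of single-key dicts.]

```yaml
show-env:
  cmd.run:
    - name: echo $MY_ENV
    - env:
        MY_ENV: blah
```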
19:36 mr_chris What would cause having three or more salt-minion instances running at a time?
19:38 dave_den mr_chris are any of them zombie or defunct?
19:38 mr_chris dave_den, They are not.
19:39 dave_den do you see three network connections to your master on port 4505 on that minion?
19:39 dave_den or 4506
19:39 jacksontj joined #salt
19:40 dave_den it's possible the old process is not being killed by whatever script or service is doing salt-minion restart
19:40 whiteinge seanz: if you feel a command should take the env argument and doesn't, do file a ticket. many commands call out to cmd.run anyway so passing an arg along is a quick addition.
19:40 mr_chris dave_den, I'll have to check when it happens again.
19:40 zz_Cidan joined #salt
19:40 mr_chris I'm going to remove everything from my top.sls file and re-add it one at a time to find the culprit.
19:41 mr_chris The more we add the worst these odd problems seem to get.
19:41 mr_chris *worse
19:41 seanz whiteinge: Thanks! I may do that when I get back from lunch.
19:41 pentabular joined #salt
19:42 ckao joined #salt
19:42 amahon joined #salt
19:43 pentabular joined #salt
19:43 amahon joined #salt
19:45 mr_chris dave_den, Three connections on 4506
19:47 hazzadous joined #salt
19:49 oleksiy joined #salt
19:51 dave_den mr_chris: how was the minion installed?
19:51 mr_chris dave_den, CentOS. Yum from epel repo.
19:52 dave_den does `service salt-minion restart` spawn yet another minion?
19:52 dave_den you can also check on the proc dir for each PID which might shed some light.
19:53 dave_den e.g.  cat /proc/<PID>/environ
19:53 dave_den and cat /proc/<PID>/status
19:53 newellista joined #salt
19:53 mr_chris OK.
20:00 brianhicks joined #salt
20:02 carmony Nice job on the new site? :)
20:02 carmony and that ? should have been a . :)
20:02 carmony its friday, and I cannot type
20:05 btorch hmm isn't salt '*' state.highstate supposed to return an output at all times ?
20:06 dave_den btorch: check you master and minion logs for errors
20:06 whiteinge also check salt-run jobs.list_jobs to see if highstate actually ran
20:07 whiteinge (in the background)
20:07 pentabular joined #salt
20:07 dave_den am i crazy, or wasn't salt-ssh included in 0.17.1 for ubuntu precise?
20:08 forrest salt-ssh is a different package dave_den
20:08 forrest it's not packed in with the master by default
20:08 dave_den ah, thx forrest
20:08 forrest np, it should just be named 'salt-ssh'
20:08 btorch whiteinge: thanks .. checking logs
20:08 timoguin joined #salt
20:08 jcockhren forrest: do you use syndic any?
20:09 imaginarysteve joined #salt
20:09 dave_den forrest: yep, got it installed now
20:09 forrest dave_den, awesome
20:09 forrest jcockhren, no I don't
20:09 jcockhren ok.
20:10 forrest Are you trying to get it working?
20:10 jcockhren yeah. so here's what I'm doing:
20:10 jcockhren host1(master) -> host2(syndic&master) -> lxc(minon & container on host2)
20:11 jcockhren host1 can talk and direct lxc just fine
20:11 forrest interesting, are you actually able to get lxc to work properly since the syndic is on the same host?
20:11 forrest but not host2
20:11 forrest even though the keys are accepted and everything?
20:12 forrest do the ports on host2 have to be opened?
20:12 forrest the docs don't say
20:12 jcockhren forrest: just works
20:12 jcockhren I have one issue
20:12 jcockhren and I think think it's a design issue
20:12 jcockhren host2 isn't visible as a minion to host1
20:13 forrest really??
20:13 jcockhren host1 keys are present and accepted, but when running syndic, it only passes to host2's minions
20:13 forrest so right now you have the syndic_master option set in the master on host2?
20:13 jcockhren yes
20:13 forrest but host1 doesn't see the key
20:13 jcockhren host1 sees the key just fine BUT
20:14 jcockhren salt '*' test.ping doesn't ping the host2 itself. Only host2's containers
20:15 forrest when you created the container, it just used the key associated with host2 itself? So the container didn't get an 'extra' key?
20:16 jcockhren no. (if I understand you correctly)
20:16 jcockhren lxcs are the minions of host2, so that means their keys are visible from host2 only
20:16 jcockhren host1 knows only of host2's key and uses that
20:16 pentabular1 joined #salt
20:16 Kholloway joined #salt
20:17 newellista_ joined #salt
20:17 forrest but when running salt, only the container is affected
20:17 forrest not the host itself
20:17 jcockhren yes
20:17 forrest that's really interesting
20:17 jcockhren I couldn't find a way to talk to the host
20:17 jcockhren so I attempted to start a minion service as well
20:18 forrest but it DID generate a key, which you had to add.
20:18 forrest you can't even ping host2 or anything?
20:18 forrest via salt
20:18 jcockhren nothing
20:18 GradysGhost joined #salt
20:18 GradysGhost Hey everyone
20:18 forrest hi
20:18 GradysGhost I have a weird non-critical that I just want to verify
20:19 GradysGhost I have a managed file in a salt state. When I run a dry run highstate, no changes-to-be get reported.
20:19 GradysGhost However, if I do a show_highstate, the file is listed out correctly.
20:19 jcockhren if I run salt-minion on host2, then syndic stops and host2 starts acting like a normal minion
20:19 jcockhren forrest: ^
20:19 forrest jcockhren, how odd.
20:20 GradysGhost And if I run the highstate for real, the change gets correctly made, but the output from the salt command is empty, save for the target list.
20:20 jcockhren the syndic service is still running btw
20:20 forrest jcockhrne, look at this: https://groups.google.com/forum/#!msg/salt-users/_dr_OOIzzJY/agrVKnjGVnUJ
20:20 forrest *jcockhren
20:20 forrest not identical
20:20 forrest but similar
20:21 forrest GradysGhost, so you're not seeing the actual states get applied?
20:22 forrest but they get applied
20:22 forrest and when you say 'dry run' do you mean with the test option?
20:22 GradysGhost Well, I see the effects of the application. The file gets created (and other things defined in the state). It's just that the output of the salt command is just a list of the targets, indicating no changes being made.
20:22 GradysGhost And yes, I mean with test=True
20:22 veetow joined #salt
20:22 GradysGhost But the output is the same with/without that switch.
20:23 GradysGhost I suppose I should get you version data, eh?
20:23 veetow is it possible to get a remote shell via salt?
20:23 GradysGhost 0.16.4
20:23 forrest veetow, you can run remote command execution with salt.
20:23 pentabular joined #salt
20:23 GradysGhost veetow: You can do cmd.run 'your command'
20:23 GradysGhost Not really interactive
20:23 veetow forrest:  GradysGhost: interactive is what i'm asking about
20:24 veetow like, give me a bash shell on the remote machine
20:24 forrest not as far as I know
20:24 jcockhren hmm
20:24 GradysGhost Is there a reason SSH won't work for you?
20:24 jcockhren lol
20:24 xmltok joined #salt
20:25 veetow i'm quite familiar with ssh thanks
20:25 veetow suffice it to say i have a use case with an existing system mgmt software that i'm trying to replicate with salt
20:26 forrest GradysGhost, I'm running a masterless minion on one of my boxes that is 0.16.4
20:26 forrest salt-call --local state.highstate
20:26 forrest is returning all the state status
20:26 forrest *es
20:26 forrest veetow, if you don't mind me asking which app provides that functionality?
20:26 blee_ joined #salt
20:27 forrest and how does it handle logging for security purposes?
20:27 veetow opsware via global shell
20:27 forrest ahh ok
20:27 veetow rosh
20:27 veetow very very handy tool
20:28 veetow if it's technically feasible but doesn't exist as a function yet, i'm happy to hack at it
20:28 forrest so does that support logging into multiple machines at the same time with an interactive shell, then executing commands across all systems?
20:29 jcockhren veetow: hack that
20:29 ajw0100 joined #salt
20:30 jcockhren veetow: or simulate "interactive" with a kind of a salt specific sub-shell
20:30 oleksiy joined #salt
20:30 jslatts is it possible to target all the minions for a specific syndic master?
20:30 GradysGhost Far be it from me to determine what goes into salt, as I don't contribute to code or anything, but there are already many existing applications to handle the one-input-to-many-terminals issue. Did the opsware stuff use this in a unique way that something like Fabric or Terminator doesn't?
20:31 xmltok joined #salt
20:31 karlgrz joined #salt
20:31 karlgrz salt.modules.rvm.do
20:31 karlgrz is there a way to target a directory?
20:31 karlgrz I want to run a rake task in a specific directory
20:31 jcockhren jslatts: that's a good question. It's command specific
20:31 jcockhren (module specific)
20:32 veetow i don't mean cluster ssh
20:32 veetow i mean, give me a remote shell on a single box
20:32 jslatts jcockhren: i'm trying to figure out how to make AWS autoscaling work with syndic inside a VPC. mainly targeting. i suppose i'll have to make sure they have a predictable hostname
20:33 veetow really what i want is like:
20:33 jcockhren jslatts: some modules go based on the masterofmaster accepted key list, which doesn't make that much sense in syndic since the masterofmaster doesn't see the syndic's minions' keys
20:33 veetow salt 'somehost' shell.interactive
20:33 veetow or whatever
20:33 jcockhren veetow: do it
20:33 jslatts hrm
20:34 veetow i'll give it a go
20:34 veetow essentially, it should run a login shell and not exit
20:34 jcockhren salt '*' shell.interactive opens it up on all minions
20:34 jslatts jcockhren: so masterofmaster doesn't show minion keys?
20:34 jcockhren then goes to a '>'
20:34 veetow yeah we'd have to deal with that
20:34 jcockhren veetow: then modules can be ran like:
20:35 jcockhren > test.ping
20:35 pentabular joined #salt
20:35 veetow yeah that would be cool but i'm really interested in a login shell
20:35 veetow not a salt-shell per se
20:36 jcockhren veetow: make an issue, so we all can vet the idea and pitch in
20:36 jcockhren jslatts: no. the masterofmaster doesn't see the syndic keys by default
20:36 veetow will do when i flesh out my own thoughts a bit
20:36 veetow jcockhren: thanks
20:36 jcockhren veetow: np
20:37 jcockhren jslatts: mistyped. the masterofmaster doesn't see the syndic *minion* keys by default
20:37 mwillhite joined #salt
20:37 jslatts jcockhren: should a test.ping hit the syndic minion keys>
20:37 jslatts ?
20:37 jslatts (currently debugging syndic not seeming to take commands from MoM)
20:38 jcockhren jslatts: it'll hit the syndic's minions just fine.
20:38 jcockhren jslatts: yeah. that's exactly what forrest and I were just talking about
20:38 jcockhren not being able to control he syndic itself
20:38 jcockhren the*
20:38 jcockhren I think minions should have a "syndic" grain
20:39 cachedout FWIW, the new Salt Stack website is now online for those who are into that sort of thing: http://www.saltstack.com
20:39 jslatts well, i can't get it to even relay a command yet
20:40 jcockhren that way, matching can know where the syndics are and whatnot.
20:40 jcockhren jslatts: make sure you enable "order_masters" in the masterofmaster
20:40 jcockhren (config)
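The topology being discussed comes down to a few config lines; a sketch assuming host1 is the master of masters and host2 runs the syndic (hostnames are illustrative):

```yaml
# /etc/salt/master on host1 (master of masters)
order_masters: True

# /etc/salt/master on host2 (runs both salt-master and salt-syndic)
syndic_master: host1

# /etc/salt/minion on host2's lxc minions
master: host2
```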
20:41 jcockhren brb
20:41 jslatts yeah, did that
20:41 jslatts is there any trickery to key acceptance order?
20:41 veetow be back folks -- thanks
20:41 veetow left #salt
20:42 isomorphic joined #salt
20:44 ajw0100 joined #salt
20:46 opapo joined #salt
20:46 karlgrz So what's best practice to call rake from a state?
20:47 karlgrz I'm trying rvm.do within a state, but it keeps giving an error
20:47 josephholsten joined #salt
20:47 karlgrz State rvm.do found in sls sitestate is unavailable
20:47 lineman60 joined #salt
20:48 forrest karlgrz
20:48 jcockhren jslatts: not that I know of
20:48 forrest you're looking at the module: http://docs.saltstack.com/ref/modules/all/salt.modules.rvm.html
20:48 forrest you need to look at the state docs: http://docs.saltstack.com/ref/states/all/salt.states.rvm.html
20:48 forrest not all module items are available in states.
20:48 jcockhren jslatts: syndic just needs to accept its minion's key and the masterofmaster needs to accept the syndic's key
20:49 karlgrz gotcha, ok. That makes sense
20:49 jslatts thats what i thought'
20:49 karlgrz So then I need to just use a cmd or something?
20:49 jcockhren jslatts: make sure both the syndic and master services are running on the syndic
20:49 dezertol joined #salt
20:49 forrest if what the state supports won't do it for you, then yes I'd suggest to just use a command.
20:50 karlgrz Basically, all I'm trying to do is call rake generate in an octopress folder after it runs git pull. cmd should be fine for that, I just thought the rvm.do would be the way to go. I'll try just using cmd, thanks!
20:50 forrest karlgrz, yea np, I know that it's sometimes confusing when you see the module items instead of the state stuff :)
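For the rake-after-git-pull case, cmd.run's cwd argument targets the directory the command runs in; a sketch with hypothetical paths and state IDs:

```yaml
# hypothetical: run `rake generate` inside the octopress checkout
octopress-generate:
  cmd.run:
    - name: rake generate
    - cwd: /srv/octopress
```

A require or watch on the git state would order this after the pull.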
20:51 dezertol long story short I'm new to salt trying to install memcached. yum info memcached shows the package, but.. memcached:\n  pkg.installed fails with "Cannot retrieve metalink....." anyone seen that before?
20:51 jslatts jcockhren: they are. just not able to see anything get relayed to the syndic. trying various debug logs now
20:51 karlgrz forrest, no worries...it is a bit confusing, but if I actually read what I was looking at instead of just plowing ahead I wouldn't be in this predicament ;-)
20:51 forrest haha
20:51 brianhicks joined #salt
20:52 forrest dezertol, can you paste your state file into something like github gist or pastebin or whatever you prefer?
20:53 dezertol forrest: http://pastebin.com/vqvfDCjj
20:54 dezertol for the most part things seem to be working good for me.. just hung up on this at the moment
20:54 jcockhren jslatts: debug the syndic process
20:54 jcockhren salt-syndic -l debug
20:54 forrest dezertol, what happens when straight off the command line you do yum install memcached?
20:54 mwillhite joined #salt
20:54 forrest because that looks like a repo issue
20:54 dezertol it installs just fine
20:54 dezertol no issues doing it manually with yum
20:54 forrest interesting..
20:54 jcockhren jslatts: salt-debug -l all
20:55 jslatts thats what i'm doing
20:55 jcockhren as well
20:55 jslatts ah
20:55 dezertol even yum search and yum info show the pkg
20:55 jcockhren mis type
20:55 jcockhren not salt-debug. my bad
20:55 jslatts ah
20:55 jslatts i was gonna say, haven't seen that before :)
20:55 jcockhren I WISH there was there
20:55 jcockhren that*
20:56 dezertol the boxes are default amazon aws linux images..
20:56 dezertol they look like some centos/redhat hack..
20:57 dezertol other packages like screen and vim-enhanced work fine
20:57 forrest dezertol, looks like this was an issue last year: https://github.com/saltstack/salt/issues/1555
20:57 forrest but the associated fix: https://github.com/saltstack/salt/commit/bdffc64e1b42d5bdb7cb7180316dd46fe6f1feaf doesn't exist any longer. I don't know if that is a change or what
20:57 forrest I'd open an issue on it.
20:57 forrest explaining what you've encountered.
20:58 dezertol k
20:58 dezertol wonder if I can do a salt cmd to run yum instead of using the yum api
20:58 forrest try it and see what happens
20:58 forrest it might be specific to the salt api
20:59 redondos joined #salt
20:59 forrest err not the salt api, the yum api
20:59 forrest friday brain doesn't work very well apparently
21:00 dezertol that had the issue
21:00 dezertol ya that issue seemed to suggest that it was the yum api that salt was calling
21:00 dezertol so might be able to bypass it by having salt run the system yum command..
21:00 * dezertol insert hack here
21:00 forrest dezertol, yea agreed.
21:01 forrest Just make sure to put an issue in, then we can at least get an answer.
21:01 jslatts jcockhren: so i can run commands directly on the syndic (test.ping) but its syndic process does not seem to be able to auth with the masterofmasters
21:02 ashtonian joined #salt
21:02 jslatts just keeps waiting for the key to be accepted
21:05 Ryan_Lane joined #salt
21:06 brentsmyth joined #salt
21:07 alunduil joined #salt
21:07 brentsmy_ joined #salt
21:07 echos joined #salt
21:08 pentabular joined #salt
21:14 jslatts it does say "Setting up the Salt Syndic Minion "None"" when starting up... wonder if thats the issue
21:15 bhosmer joined #salt
21:18 dave_den jslatts: that means the syndic doesn't know its own minion id.
21:19 jslatts dave_den: shouldn't it just use the hostname?
21:19 jumperswitch joined #salt
21:19 dave_den it should, but for giggles, try setting it explicitly in the config
21:19 jslatts k
21:21 jslatts well, it logged out the correct name, but still doesn't seem to be accepted by the master
21:21 bhosmer_ joined #salt
21:21 dave_den when you run salt-syndic -l debug, what does it show?
21:22 jcockhren jslatts: ah. i know the cause
21:22 jslatts dave_den: "Waiting for minion key to be accepted by the master."
21:22 jcockhren jslatts: set the minion id in the config
21:23 xmltok joined #salt
21:23 jslatts jcockhren: i just tried that
21:23 dave_den jslatts: have you accepted the key in the master?
21:23 jslatts dave_den: i did when the minion initially started
21:23 jcockhren you should be able to accept the key under the id you just created
21:23 jslatts maybe i need to delete and retry it
21:23 dave_den jslatts: delete and restart syndic
21:23 jslatts do they need to be different?
21:23 dave_den delete the key on master i mean
21:26 newellista joined #salt
21:27 UtahDave joined #salt
21:27 jslatts k
21:27 pdayton joined #salt
21:27 karlgrz I have a bunch of states in top.sls - how can I run just one of those on the minion? What I want is something like "salt-call --local state.mysinglestate"
21:27 dave_den karlgrz: salt-call state.sls mystatename
21:28 karlgrz dave_den: perfect, thank you!
21:28 dave_den http://docs.saltstack.com/ref/modules/all/salt.modules.state.html#salt.modules.state.sls
21:28 dave_den np
21:29 jslatts syndic and minion need to run on the syndic right?
21:29 jcockhren jslatts: syndic and master
21:29 gldnspud joined #salt
21:30 jslatts oh. well that explains iut
21:30 jslatts it
21:31 jcockhren \o/
21:31 Cidan joined #salt
21:31 jslatts bah. closer, but not working still. at least syndic process shows comm with master
21:33 jcockhren watching both processes on the syndic, you should see the flow of commands when run from the masterofmaster
21:33 pears is the preferred way to assemble a file on a minion from multiple pieces file.managed with source=None, plus file.append with a list of the sub-files to build it out of?
21:33 jslatts I see this: "[DEBUG   ] Command details: {'tgt_type': 'glob', 'jid': '20131101213305652992', 'tgt': 'autoscaletest', 'ret': '', 'to': 4, 'user': 'sudo_ubuntu', 'arg': [], 'fun': 'test.ping'}"
21:34 jcockhren jslatts: from which process?
21:34 jslatts which looks good but it doesn't run
21:34 jslatts that's from the syndic process on the syndic
21:34 nmistry joined #salt
21:34 jcockhren ok. is the id of its minion set?
21:34 jcockhren (and key accepted), my bad if we've been through this
21:35 jslatts and [INFO    ] Got return from autoscaletest for job 20131101213502661087 on master
21:35 jcockhren gtg. jslatts sorry. fundraiser
21:35 jslatts so it looks like it is hitting the minion
21:35 akoumjian whiteinge: Trying to set up halite. What's the simplest way to setup a user account for halite?
21:35 jslatts np. thanks for the help
21:36 flebel joined #salt
21:37 mr_chris More odd problems. When running from cron, the salt-minion on half of my servers will fail on "Minion failed to authenticate with the master, has the minion key been accepted?" But if I run salt-call manually it works. In both cases, manual and cronned I'm running, "/usr/bin/salt-call state.highstate > /var/log/salt/salt-call.log"
21:38 notanumber Wow.  So a ton of debugging later, and I still have no idea why this (http://dpaste.com/1437918/) breaks sudo in my Vagrant.
21:38 notanumber Remove that, and all the corresponding "require" statements, and everything works perfect.
21:41 gldnspud_ joined #salt
21:41 ajw0100 joined #salt
21:42 gldnspud joined #salt
21:45 erasmas joined #salt
21:47 oleksiy does salt-ssh require msgpack-python on target host to execute state.highstate? got the following error: [CRITICAL] Unable to import msgpack or msgpack_pure python modules
21:50 goodwill whiteinge, UtahDave : ping
21:52 UtahDave goodwill: pong!
21:53 goodwill UtahDave: we are going to need your help for a few minutes so our heads do not explode
21:53 goodwill erasmas: can you please describe the problem
21:53 dizzyd joined #salt
21:54 dizzyd hallo...I'm having a very weird situation where some of my machines are not able to see all the others via mine.get
21:54 dizzyd any suggestions for how to force the master to rebuild what it thinks the world is?
21:55 carmony notanumber: why are you trying to manage that vagrant user?
21:56 notanumber Just wanted to ensure they were setup with the correct shell.  Can't recall why that was added, to tell you the truth.
21:56 erasmas UtahDave, whiteinge: we're on salt 0.16.3 and we noticed that the highstate lets you use non-existent states. if I say machine Foo has the states apache and pizza, where pizza doesn't exist, I don't get any error (it just silently skips over pizza)
21:56 bemehow joined #salt
21:57 erasmas UtahDave, whiteinge: this happened when using salt-call (masterless minion) and also when remote executing highstate from a master. it's concerning because it means typos or other non-existent states give no warnings
21:57 UtahDave erasmas: yeah, this has been fixed in 0.17.1
21:58 carmony notanumber: alright.... you ready for some awesomeness?
21:59 carmony notanumber: https://github.com/saltstack/salt/blob/develop/salt/states/user.py#L159
21:59 erasmas UtahDave: do you have any kind of ongoing support for 0.16.x? 0.17 introduced several breaking changes so we can't upgrade easily
21:59 goodwill UtahDave: 0.17 broke everything in the china cabinet
21:59 goodwill UtahDave: including itself
22:00 carmony notanumber: so it looks like if no groups are specified
22:00 goodwill UtahDave: do you backport any fixes?
22:00 carmony it removes the user from any non-default groups
22:00 forrest goodwill, erasmas, you could backport the fixes manually :\
22:00 notanumber So, the user is being removed from the sudoers?
22:00 carmony notanumber: I would guess so
22:01 btorch can anyone point me to a doc location or give me an example of using salt functions in sls files
22:01 goodwill forrest: right after I'll kill myself
22:01 forrest yea thus the :\
22:01 carmony lets see if I can repo this
22:01 btorch this is not working {% if salt['file.file_exits']('/etc/swift/account.ring.gz') %}
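The function name in that snippet is misspelled; the execution module function is file.file_exists (note the second "s"). A corrected sketch:

```jinja
{% if salt['file.file_exists']('/etc/swift/account.ring.gz') %}
{# states that depend on the ring file go here #}
{% endif %}
```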
22:02 bemehow joined #salt
22:02 UtahDave goodwill: we backport fixes to the recently released branch
22:02 goodwill UtahDave: what does that mean?
22:02 goodwill UtahDave: silently skipping over states is a pretty bad bug
22:02 UtahDave goodwill: so right now all fixes get backported from the develop branch to the 0.17 branch.
22:03 micko carmony: would be awesome if you specify ssh keys :P
22:03 UtahDave 0.17.2 will be released probably next week
22:03 erasmas UtahDave: so practically speaking, we're saying 0.16 is forever broken and no longer supported?
22:03 UtahDave goodwill: we backport bug and security fixes for our enterprise customers for 18 months.
22:04 goodwill UtahDave: I hear ya for the enterprise users ... but is 0.16 abandoned now?
22:04 goodwill for everyone else?
22:05 UtahDave Yeah, we only backport fixes to the last release branch
22:05 notanumber carmony: Seems the fix, for me at least, is to **not** try to create that user.  Just assume they exist.
22:06 kermit joined #salt
22:06 carmony notanumber: yup
22:06 goodwill UtahDave: so 0.17, then 0.16 is abandoned right away?
22:06 goodwill UtahDave: so 0.17 is out, then 0.16 is abandoned right away?
22:06 carmony honestly, you could probably change your pillar value to vagrant 2
22:06 carmony vagrant2
22:06 notanumber Which is going to mean I need to rethink my user setup to be smarter when dealing between production and vagrant
22:06 carmony and it'll work
22:06 notanumber This way, I was able to just set the username in pillar and be done with it.
22:07 carmony because vagrant handles the ssh keys and user logins by default
22:07 carmony notanumber: as long as you are using a vagrant compliant base bos
22:07 carmony box*
22:07 goodwill UtahDave: there is no support period?
22:07 ccase joined #salt
22:07 Katafalkas joined #salt
22:07 goodwill UtahDave: this is going to hurt but I still love you
22:07 goodwill UtahDave: 0.17 was broken like hell when it came out
22:08 goodwill UtahDave: we could run it AT ALL
22:08 goodwill UtahDave: we could NOT run it AT ALL
22:08 goodwill even hostname matching was broken
22:08 UtahDave goodwill: It's a lot of work to backport to just to the recent release.
22:08 goodwill yeah, but abandoning it right away is not a good idea
22:08 renoirb Hey, i have a question regarding jinja templates, conditionals and context
22:09 goodwill UtahDave: you guys needs a period of a few months where 0.16 is supported for security or critical fixes
22:09 goodwill while people migrate
22:09 forrest what's up renoirb
22:09 renoirb I have a state file that pass along - context: (…) coming from pillars
22:09 renoirb Inside the template
22:10 notanumber Would it be safe/make sense to just wrap the user.present in a `{% if 'productname' in grains and grains['productname'] != 'VirtualBox' %}`
22:10 renoirb - template: jinja (of course)
22:10 notanumber Or is that really bad...
22:10 notanumber Feels dirty
22:10 xmltok hmm, so i have a strange problem. in my sls i am doing a pillar.get with a default value of yes, but that 'yes' becomes True when its rendered in my template
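YAML 1.1 parses bare yes/no/on/off as booleans, which is the usual cause of what xmltok describes: the Jinja default stays a string, but when the rendered sls is parsed as YAML, an unquoted yes becomes True. Quoting the rendered value keeps it a string; a sketch with a hypothetical pillar key:

```jinja
{# quote the rendered value so the YAML parse keeps the string 'yes' #}
some_option: "{{ salt['pillar.get']('myapp:some_flag', 'yes') }}"
```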
22:10 renoirb forrest: let me make a pastebin
22:10 UtahDave goodwill: Yeah, we do the best we can with the resources we have.
22:16 xmltok joined #salt
22:17 goodwill UtahDave: do you know where the fix for detecting missing states is?
22:17 goodwill UtahDave: maybe we can backport it to 16
22:18 renoirb Hey forrest have a look at https://gist.github.com/renoirb/7272880#file-site-conf-jinja-L12
22:18 renoirb I highlighted the question in the gist
22:19 goodwill UtahDave: or is it too much to backport?
22:20 UtahDave goodwill: let me search for it
22:21 josephholsten joined #salt
22:25 anti_ joined #salt
22:26 forrest renoirb, hmm
22:26 forrest renoirb, I mean you could write some jinja in there that's actual python
22:27 renoirb My background is not in python, I get confused about whether it is type-changing or anything
22:27 forrest so you could do if type(args.magento) is str
22:27 renoirb oh, yeah?
22:27 dizzyd so, after a bit of digging it appears that some of the entries in my salt mine are missing a key "grains.item" whereas "grains.items" is present
22:27 dizzyd but it's only for some hosts
22:27 forrest I believe jinja only can't do generators and more complex stuff renoirb, so that should work
22:27 dizzyd and there appears to be no pattern to it
22:28 renoirb in {{ }} or {% %}? (not sure if it is an echo or it has to be {% … %} since we want to execute python)
22:28 forrest {% %}
22:28 pears jinja can't do a lot of stuff
22:28 forrest {{ }} is for values usually
22:28 forrest pears, yea but it should be able to do a type check right?
22:28 renoirb {{}} is implicitly an echo. In Twig (PHP)
22:29 oleksiy joined #salt
22:29 pears {{ }} in jinja evaluates an expression and uses the result in that spot
22:29 renoirb My problem forrest  is the test   should it be == True, or == true
22:29 pears True
22:29 renoirb or args.whatever is sameas(true)
22:29 forrest yea True
22:29 renoirb so many ways to write similar syntax. i'm lost :/
22:29 renoirb ok
22:29 renoirb fair enough
22:29 renoirb i'm testing many ways and sometimes I have half successes
22:29 renoirb :/
22:30 renoirb success
22:30 forrest that's the fun of poking around
22:30 renoirb anyway
22:30 forrest well, sometimes
22:30 pears it's a constrained python syntax, so not everything is possible, but it should at least be valid python
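One caveat with the type() suggestion above: Jinja doesn't expose Python's type() builtin by default, but its builtin tests cover the case being discussed. A sketch, using the args.magento name from renoirb's gist:

```jinja
{% if args.magento is string %}
{# single value #}
magento_site: {{ args.magento }}
{% elif args.magento is iterable %}
{# handle a list of values instead #}
{% endif %}
```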
22:31 renoirb Funnily enough, this state i am debugging was working fine and now it just does not react the same way.
22:31 renoirb And I did not change the salt stack version.
22:31 forrest :\
22:31 forrest that's a bummer
22:31 renoirb I caught code rot
22:31 renoirb :/
22:33 forrest that's Friday for ya
22:33 renoirb yaay
22:36 modafinil is there a sort of idiomatic way to replace puppet? (really my first goal, before building out using the neat useful things) i.e. 'run everywhere, often' (even if its just cron salt '*' state.highstate)
22:39 forrest you could do that modafinil, I don't have it in a large prod environment anywhere, so I don't know if other people have better solutions.
22:40 modafinil okay -- im building out a new sort of poc for our medium(?) sized environment (~200 servers) -- hate puppet with a fiery passion and think salt is the right tool for the job -- just not sure of the quickest path to a drop in replacement :D
22:41 brianhicks joined #salt
22:44 xmltok joined #salt
22:47 sebgoa joined #salt
22:50 redondos joined #salt
22:50 redondos joined #salt
22:50 oz_akan_ joined #salt
22:51 oz_akan_ joined #salt
22:51 pentabular joined #salt
23:01 jacksontj i'm trying to add a compound matcher-- and it looks like its changed a bit since i was in here last
23:02 jacksontj i am testing locally-- and i keep getting "no minion matched the target..."
23:02 jacksontj but the compound match command is never sent to the minion
23:02 jacksontj i assume this has to do with the salt mine somehow-- but i'm not sure where its doing the filtering
23:03 pears has anyone ever done anything crazy like making each minion's pillar data be a list of operations, and then dynamically generating the salt states from the pillar data?
23:04 bemehow joined #salt
23:05 lineman60 joined #salt
23:05 dezertol @forrest on that yum/salt/amazon linux ami issue .. if I uncomment baseurl and comment out mirrorlist in the epel.repo it fixes it..
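dezertol's workaround amounts to flipping two lines in /etc/yum.repos.d/epel.repo; an illustrative fragment (the exact URLs vary by EPEL release):

```ini
[epel]
name=Extra Packages for Enterprise Linux
# workaround: comment out the dynamic mirrorlist...
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
# ...and use the static baseurl instead
baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
```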
23:05 forrest dezertol, ahh ok
23:05 forrest it must only check the baseurl value
23:05 dezertol not sure.. why...
23:05 forrest still might be worth opening an issue on
23:05 dezertol but it does
23:05 dezertol I did already
23:05 forrest what's the #?
23:06 dezertol https://github.com/saltstack/salt/issues/8226
23:06 forrest cool
23:06 josephho_ joined #salt
23:14 xmltok joined #salt
23:14 pentabular1 joined #salt
23:20 mwillhite joined #salt
23:28 oz_akan_ joined #salt
23:28 my_mom joined #salt
23:34 wibberwock how does one do a watch chain in salt?  x runs, if there are changes y and z run, if there no changes y and z do not run
23:34 wibberwock but y always runs before z
23:35 forrest did you try using watch_in wibberwock?
23:35 bemehow joined #salt
23:35 wibberwock i'm having trouble ordering y and z
23:35 wibberwock if i make z require y, doesn't that mean y will always run, regardless of x?
23:35 forrest watch_in: y in z triggers y, if y runs watch_in: z triggers z
23:36 forrest that's why I'm suggesting to try watch_in
23:36 forrest not sure if that will work with commands staggered like that though
23:36 forrest but it's worth a shot!
23:36 wibberwock alright ill give it a try
23:39 forrest cool, let me know if it works, I haven't tried doing that before.
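One way to express the x → y → z chain wibberwock describes is cmd.wait with watch, which only fires when a watched state reports changes, plus a require to order y before z; a sketch with hypothetical state IDs and script paths:

```yaml
x:
  cmd.run:
    - name: /usr/local/bin/step_x.sh

y:
  cmd.wait:
    - name: /usr/local/bin/step_y.sh
    - watch:
      - cmd: x

z:
  cmd.wait:
    - name: /usr/local/bin/step_z.sh
    - watch:
      - cmd: x
    - require:
      - cmd: y
```

The require only orders z after y; it does not force y to run when x reports no changes.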
23:39 ajw0100 joined #salt
23:41 erasmas left #salt
23:43 bemehow_ joined #salt
23:45 pentabular joined #salt
23:48 mpanetta joined #salt