
IRC log for #salt, 2014-09-29


All times shown according to UTC.

Time Nick Message
00:00 mrlesmithjr joined #salt
00:03 thedodd joined #salt
00:10 Singularo joined #salt
00:18 glyf joined #salt
00:23 nebuchadnezzar joined #salt
00:24 oz_akan joined #salt
00:26 N-Mi joined #salt
00:26 luminous joined #salt
00:30 thayne joined #salt
00:41 cromark joined #salt
00:42 anotherZero joined #salt
00:43 glyf joined #salt
00:43 luminous left #salt
00:44 luminous joined #salt
00:50 SaveTheRb0tz joined #salt
00:51 retr0h joined #salt
00:51 SaveTheRb0tz joined #salt
00:59 aquinas joined #salt
01:00 Ryan_Lane joined #salt
01:00 Ryan_Lane hi. I want to do remote execution, but have salt minions look locally for files. so, I added a master, but I have file_client: local. The master is ignored, though
01:01 Ryan_Lane is this not possible?
01:06 otter768 joined #salt
01:12 mgw joined #salt
01:14 jslatts joined #salt
01:14 Supermathie joined #salt
01:24 jonatas_oliveira joined #salt
01:25 oz_akan joined #salt
01:26 malinoff joined #salt
01:27 acabrera joined #salt
01:29 otter768 joined #salt
01:35 mgw joined #salt
01:45 bhosmer joined #salt
01:49 schristensen joined #salt
01:52 logix812 joined #salt
01:56 minaguib joined #salt
02:00 n8n joined #salt
02:02 Ahrotahntee anyone know how to include empty values in a pillar? when I leave them blank they come out as 'None' and it busts the config
02:02 malinoff Ahrotahntee, have you tried somevar: ''
02:02 malinoff ?
02:03 Ahrotahntee didn't try single quotes, testing now
02:04 Ahrotahntee bingo, I'm an idiot, thanks malinoff
02:04 malinoff :)
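
[Editor's note: a minimal pillar sketch of malinoff's fix; the key names are placeholders. YAML parses a bare empty value as null, which Salt renders as None, while a quoted empty string stays empty:]

    # pillar/example.sls (hypothetical)
    myapp:
      log_prefix: ''     # quoted empty string renders as '' in templates
      # log_prefix:      # left blank it parses as null and comes out as None
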
02:08 n8n joined #salt
02:09 scuwolf joined #salt
02:11 Ahrotahntee god I love SaltStack
02:11 Ahrotahntee that and salt-cloud, saves me a ton of work
02:15 mgw joined #salt
02:17 thayne joined #salt
02:22 malinoff joined #salt
02:22 ajolo joined #salt
02:26 oz_akan joined #salt
02:33 oz_akan joined #salt
02:36 mgw joined #salt
02:39 TyrfingMjolnir joined #salt
02:42 bezeee joined #salt
02:43 TyrfingMjolnir_ joined #salt
02:43 ramishra joined #salt
02:55 pdayton1 joined #salt
03:03 dude051 joined #salt
03:14 nhubbard joined #salt
03:17 kermit joined #salt
03:21 pdayton1 joined #salt
03:23 dccc_ joined #salt
03:27 semarie joined #salt
03:33 bhosmer joined #salt
03:42 dalexander joined #salt
03:45 aparsons joined #salt
03:46 pdayton1 joined #salt
03:49 mgw joined #salt
03:57 yidhra joined #salt
03:58 pdayton1 joined #salt
04:00 thayne joined #salt
04:04 dalexander joined #salt
04:09 mosen joined #salt
04:14 aparsons joined #salt
04:18 malinoff joined #salt
04:19 bhosmer joined #salt
04:22 aparsons joined #salt
04:27 jchen joined #salt
04:34 manfred joined #salt
04:41 wedgie joined #salt
04:46 Schmidt joined #salt
04:52 ramteid joined #salt
04:55 JordanTesting joined #salt
05:01 n8n joined #salt
05:06 aparsons joined #salt
05:12 aparsons joined #salt
05:12 thayne joined #salt
05:16 fragamus joined #salt
05:22 bhosmer joined #salt
05:32 ram_ joined #salt
05:33 oz_akan joined #salt
05:33 bkrram joined #salt
05:35 linjan joined #salt
05:35 oz_akan_ joined #salt
05:38 bkrram joined #salt
05:39 bkrram Hi
05:39 bkrram Just started on trying to integrate salt into our product
05:40 bkrram I was trying to find a way to programmatically add minion keys from python
05:40 fragamus joined #salt
05:40 bkrram I can list all keys using wheel.call_func('key.list_all')
05:41 bkrram But I cannot seem to get the accept to work syntactically
05:41 bkrram I tried wheel.call_func(fun='key.accept', kwargs={'match': '%s'%m})
05:41 bkrram where m is an unaccepted key host
05:42 bkrram but it bombs saying it requires at least one arg and zero given
05:42 catpigger joined #salt
05:42 bkrram Any thoughts on what I am doing wrong?
05:42 n8n joined #salt
05:43 malinoff bkrram, try wheel.call_func('key.accept', kwargs={'match': '%s'%m})
05:44 jeremyb joined #salt
05:45 bkrram Malinoff, I just tried that and got this response :
05:45 bkrram Traceback (most recent call last):
05:45 bkrram File "s.py", line 17, in <module>
05:45 bkrram print wheel.call_func('key.accept', kwargs={'match': '%s'%m})
05:45 bkrram File "/usr/lib/python2.6/site-packages/salt/wheel/__init__.py", line 68, in call_func
05:45 bkrram f_call = salt.utils.format_call(self.w_funcs[fun], kwargs)
05:45 bkrram File "/usr/lib/python2.6/site-packages/salt/utils/__init__.py", line 870, in format_call
05:45 bkrram used_args_count
05:45 bkrram salt.exceptions.SaltInvocationError: accept takes at least 1 argument (0 given)
05:45 malinoff bkrram, use http://pastie.org
05:46 bkrram Sorry, my bad..
05:47 malinoff try  wheel.call_func('key.accept', args=('%s'%m,))
05:49 bkrram Just tried that and got the same response.. http://pastie.org/9603490
05:50 malinoff bkrram, no, you called the old one
05:50 malinoff http://pastie.org/9603490#3
05:50 malinoff Don't use kwargs
05:51 bkrram Sorry, that was the old pastie. Same response for args as well.. Here is the output : http://pastie.org/9603497
05:55 malinoff bkrram, use s3="http://s3.amazonaws.com"
05:55 malinoff the protocol matters
05:55 malinoff i guess you should report the bug
05:55 malinoff s3_url*
05:55 malinoff oos
05:55 malinoff oops*
05:55 malinoff wrong chat :)
05:55 malinoff sorry
05:56 malinoff will investigate your issue shortly
05:56 colttt joined #salt
05:58 malinoff bkrram, have you tried: wheel.call_func('key.accept', match='%s'%m) ?
06:00 kingel joined #salt
06:00 bkrram Ah, that worked! Thanks malinoff!
06:01 malinoff bkrram, you're welcome
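
[Editor's note: the working call in context. A sketch of accepting pending minion keys via the wheel client, assuming the standard master config path; in this Salt version the plain keyword form works where the args/kwargs forms above did not:]

    # accept_pending_keys.py (sketch)
    import salt.config
    import salt.wheel

    opts = salt.config.master_config('/etc/salt/master')
    wheel = salt.wheel.Wheel(opts)

    # key.list_all returns accepted, rejected and pending (minions_pre) keys
    for m in wheel.call_func('key.list_all')['minions_pre']:
        wheel.call_func('key.accept', match=m)
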
06:06 kasey joined #salt
06:08 otter768 joined #salt
06:14 mgw joined #salt
06:21 askhan joined #salt
06:22 davidone joined #salt
06:25 aparsons joined #salt
06:26 jhauser joined #salt
06:27 mgw left #salt
06:27 Deevolution joined #salt
06:30 aquinas joined #salt
06:30 aquinas_ joined #salt
06:30 aparsons joined #salt
06:32 mechanicalduck_ joined #salt
06:35 ocdmw joined #salt
06:36 oz_akan joined #salt
06:39 bmcorser joined #salt
06:39 sirtaj joined #salt
06:40 ndrei joined #salt
06:41 dalexander joined #salt
06:43 ntropy joined #salt
06:44 aparsons joined #salt
06:49 tld_wrk joined #salt
06:53 flyboy joined #salt
06:55 rattmuff joined #salt
06:57 duncanmv_ joined #salt
07:00 masm joined #salt
07:01 Sweetshark joined #salt
07:01 martoss1 joined #salt
07:09 stephanbuys joined #salt
07:10 Katafalkas joined #salt
07:11 bhosmer joined #salt
07:12 felskrone joined #salt
07:15 tld_wrk joined #salt
07:18 aparsons joined #salt
07:22 intellix joined #salt
07:22 blackhelmet joined #salt
07:24 aparsons joined #salt
07:30 lcavassa joined #salt
07:35 davet joined #salt
07:36 kasey_ joined #salt
07:36 oz_akan joined #salt
07:37 aparsons joined #salt
07:42 kingel joined #salt
07:46 tld_wrk joined #salt
07:54 Katafalk_ joined #salt
08:02 jaimed joined #salt
08:11 che-arne joined #salt
08:13 jalaziz joined #salt
08:18 tld_wrk joined #salt
08:21 darkelda joined #salt
08:21 darkelda joined #salt
08:24 alanpearce joined #salt
08:25 Twiglet_ joined #salt
08:27 Twiglet_ joined #salt
08:29 Guest36216 joined #salt
08:36 the_drow hi guys can anyone help me with the failure in https://jenkins.saltstack.com/job/salt-pr-build/8599/
08:36 the_drow It seems that moto doesn't mock dhcp options for some reason
08:37 oz_akan joined #salt
08:45 CaptinHokk joined #salt
08:49 lcavassa joined #salt
08:50 totte joined #salt
08:50 jdmf joined #salt
08:54 felskrone joined #salt
08:59 bhosmer joined #salt
09:00 yomilk joined #salt
09:02 istram joined #salt
09:02 kiorky joined #salt
09:13 workingcats joined #salt
09:16 krissaxton joined #salt
09:16 PI-Lloyd joined #salt
09:18 ramishra joined #salt
09:24 brain5ide joined #salt
09:25 akafred joined #salt
09:27 tinuva joined #salt
09:31 krissaxt_ joined #salt
09:34 CycloHex joined #salt
09:35 che-arne joined #salt
09:36 CycloHex I'm trying to render a config file with jinja in order to keep my passwords from being plain text in the file on my master. However, after highstating I get an error: Unable to manage file: Jinja variable dict object has no element Undefined; line 13. On line 12 I use the same pillar call, so on line 13 it should work as well, no? Ask me if you need any info about my file
09:36 ggoZ joined #salt
09:37 CycloHex also I already wiped cache and flushed
09:38 oz_akan joined #salt
09:42 yomilk_ joined #salt
09:43 agend joined #salt
09:44 tld_wrk joined #salt
09:44 bhosmer joined #salt
09:48 CycloHex Ok, fixed. I think I was calling my pillar in the wrong way ({{ pillar['foo']['bar']['baz'] }}) while it should've been {{ salt['pillar.get']('foo:bar:baz') }}
09:51 giantlock joined #salt
09:58 kiorky joined #salt
10:00 Outlander joined #salt
10:01 istram CycloHex: gj! please note that salt['module.func']('data') is the recommended way, and not only for pillar requests :)
10:02 CycloHex istram, thanks! I'll keep that in mind
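
[Editor's note: the two access styles side by side as a template sketch. salt['pillar.get'] takes a colon-delimited path plus an optional default, so a missing key yields the default instead of a Jinja Undefined error:]

    {# raises "Jinja variable dict object has no element Undefined" if a key is missing #}
    password: {{ pillar['foo']['bar']['baz'] }}

    {# recommended: colon-delimited lookup with a fallback default #}
    password: {{ salt['pillar.get']('foo:bar:baz', 'changeme') }}
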
10:03 CycloHex Another problem: say I'd like to run a script after a pkg is installed, and the script removes itself after finishing. Whenever I call a highstate, the file will be copied over again. How can I make sure it isn't copied over again?
10:06 bhosmer joined #salt
10:10 krissaxton joined #salt
10:13 jakubek joined #salt
10:14 tld_wrk joined #salt
10:15 stephanbuys joined #salt
10:15 cmthornton joined #salt
10:15 jakubek hello! has anyone created a workaround for tunnelling a proxy server over ssh to a salt-minion and using it there as http_proxy & https_proxy? my idea is to create an ssh tunnel with '-R 3128' and then use it for all pkg operations. is this a good or bad idea? maybe there is an easier solution for installing packages on minions without internet connection :-)
10:16 glyf joined #salt
10:21 mortis__ would it be an idea to have a --use-local-cache param for the salt-call command? I mean, we're using git, so right now I have to commit every little thing when trying and failing (push to git, pull on the master, pull on the minion and test again, OR clone somewhere else and test locally). Would be nice to just try and fail on the minion with the cached files and only remove the cache param when it works, then do the git stuff. i dunno
10:22 krissaxton joined #salt
10:22 mortis__ this is all dev-env, so its where i initially create the slses and such
10:23 mortis__ or maybe i can actually just point --file-root to cache :o
10:24 mortis__ naah
10:24 mortis__ cause i wouldnt have the top.sls then
10:25 mortis__ ofc, i CAN do salt-call state.sls /var/cache/something/something --local .)
10:25 mortis__ :)
10:25 cmthornton mortis__: are you just trying to test your salt states?
10:25 mortis__ yeah
10:26 mortis__ cause im not sure what fails
10:26 cmthornton use `fail_hard: true` in your minion config?
10:26 mortis__ so instead of going via git for every little , or # or !
10:26 mortis__ hmmm
10:26 mortis__ fail_hard, unfamiliar with that one
10:26 mortis__ checking it out
10:26 cmthornton might be failhard without an underscore
10:27 mortis__ - failhard: True
10:27 cmthornton I usually test 1 state at a time by just running `salt-call -l debug state.sls <state name>`
10:28 cmthornton `failhard: true` can be in the master config (or minion config if you have a masterless setup)
10:29 mortis__ that can be handy
10:30 cmthornton looking at the docs, looks like failhard can also be applied to an individual state, but for testing I usually apply it globally
10:30 mortis__ i'll look into it, thanks :)
10:32 cmthornton you're welcome. If you come up with a better way of testing, let me know. It's a bit tedious testing each state one by one, but that's been the most effective way for me so far
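
[Editor's note: a sketch of the two failhard scopes cmthornton mentions; the package state below is a placeholder. Globally it goes in the master (or masterless minion) config, per-state it is an argument on the state itself:]

    # /etc/salt/master (global)
    failhard: True

    # or on an individual state
    install_nginx:
      pkg.installed:
        - name: nginx
        - failhard: True    # abort the run as soon as this state fails
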
10:34 diegows joined #salt
10:39 oz_akan joined #salt
10:44 vbabiy joined #salt
10:45 dark_hel1et joined #salt
10:48 bhosmer_ joined #salt
10:54 ndrei joined #salt
10:54 tld_wrk joined #salt
10:58 krissaxton joined #salt
10:59 CycloHex joined #salt
11:14 lcavassa joined #salt
11:14 scottpgallagher joined #salt
11:17 krissaxton joined #salt
11:18 rdorgueil joined #salt
11:18 rdorgueil joined #salt
11:23 chase_ exit
11:29 baniir joined #salt
11:42 bhosmer joined #salt
11:47 bkrram Quick question on watch. If I have a watch on a file for a service, then do I still need to run a state.highstate for it to go and check this file and restart the service?
11:47 CeBe joined #salt
11:49 joehoyle joined #salt
11:51 krissaxton joined #salt
11:52 marnom joined #salt
11:54 hietler joined #salt
11:56 duncanmv_ joined #salt
11:58 CeBe joined #salt
11:58 teunb joined #salt
11:59 intellix joined #salt
11:59 ginger_tonic joined #salt
12:01 Daviey joined #salt
12:01 ginger_tonic hi
12:02 ginger_tonic I would like to know if there is any salt module to restart a service when its configuration file is changed?
12:04 ginger_tonic Say the configuration file of the samba service, smb.conf, is changed; I want the service to get restarted automatically.
12:04 ginger_tonic Is this possible using any salt module?
12:04 krissaxton joined #salt
12:06 ze- ginger_tonic: you can do something with the states, not with external changes.
12:06 ze- like if you have a state to manage the file
12:07 ze- you can have an other state to restart the service if any changes have been made (and not if it was already correct)
12:07 ginger_tonic oh ok
12:08 ze- something like: service.running: { reload: True, watch: file: .../smb.conf, }
12:08 ginger_tonic ok
12:08 ze- (written here, not tested; probably needs writing on multiple lines.. check service, and watch :)
12:08 ze- and of course, you can have it watch multiple dependencies. :)
12:08 ginger_tonic ok
12:09 ginger_tonic I haven't looked into states
12:09 ginger_tonic Thanks a lot
12:11 ze- oh. if you aren't using states, that service thing probably isn't what you are seeking :)
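
[Editor's note: ze-'s one-liner expanded into a properly indented state, untested as noted in the log; the smb.conf path and service name are assumptions:]

    smb_conf:
      file.managed:
        - name: /etc/samba/smb.conf
        - source: salt://samba/smb.conf

    smbd:
      service.running:
        - reload: True
        - watch:
          - file: smb_conf
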
12:13 logix812 joined #salt
12:13 bkrram ze- I just tried something like this for ntp and it does not seem to restart when I change the minion's ntp.conf file
12:14 ze- bkrram: it only triggers when the previous state makes changes, not when you change the file manually.
12:15 krissaxton joined #salt
12:15 bkrram So how does one force a state change? Basically, I want to tell it to restart ntpd whenever the ntp.conf changes
12:15 ze- salt is not the solution.
12:16 ze- you need a tool that watches for changes on filesystem, and does action for it.
12:16 ninkotech joined #salt
12:17 bkrram Hmm.. Would a workaround be to specify a source and then change the source file to trigger a copy and restart?
12:19 che-arne joined #salt
12:19 cmthornton inotify tools might be better for monitoring a file for changes
12:19 cmthornton then restarting a service
12:21 geekatcmu bkrram: The canonical solution for your problem is "the tool you use to change ntp.conf kicks ntpd"
12:21 geekatcmu If you're changing ntp.conf by hand, you're doing it wrong.
12:25 mndo joined #salt
12:25 bkrram geekatcmu: I understand but we're providing a web based tool to change some ntp config info. When this is done, we need to push the changes to a bunch of machines so they are all in sync.
12:26 bkrram I tried changing the source in file_roots and then do a state.highstate. This pushes the new file to all the minions AND restarts the service so works as we wanted :)
12:26 avn joined #salt
12:27 geekatcmu perfect
12:27 ggoZ joined #salt
12:27 to_json joined #salt
12:28 Nazzy *sigh* let me ask the obvious question
12:28 Nazzy oh, he left, figures
12:29 Nazzy the question you should have asked him is "how are you changing the ntp.conf file? through salt or externally?"
12:30 cpowell joined #salt
12:31 Nazzy cause I didn't see a single statement that implied either definitely
12:33 jonatas_oliveira joined #salt
12:33 jonatas_oliveira joined #salt
12:36 XenophonF joined #salt
12:37 bhosmer joined #salt
12:37 ginger_tonic nazzy: "how are you changing the ntp.conf file? through salt or externally?" - through salt
12:39 snuffeluffegus joined #salt
12:41 acabrera joined #salt
12:42 Nazzy ginger_tonic, I meant that for bkrram, however ... if you're using states you need requisites, if you're using execution modules you'll need to either setup a separate process on the box or call salt with the instruction after modifying the file
12:43 ginger_tonic call salt with the instruction after modifying the file - Thank you
12:44 semarie joined #salt
12:48 the_drow If anyone wants to help testing out the new boto vpc module I'd appreciate it. I'll even buy you a beer https://github.com/saltstack/salt/pull/16095
12:48 vejdmn joined #salt
12:49 bhosmer joined #salt
12:50 ange joined #salt
12:50 ange hi
12:51 laxity joined #salt
12:51 ange anyone got some experience or content about using orchestrate to replace capistrano ? something similar to : https://mxey.net/deploying-rails-with-ansible/
12:52 shiin joined #salt
12:52 babilen salt orchestrate?
12:53 ange yes
12:53 babilen Anything in particular that you'd like to know?
12:53 ange (or am I completely mistaking the role of salt orchestrate)
12:53 babilen I'm sure that there is a person that has experience with that, but that probably won't help you much.
12:54 ange the idea is to get rid of capistrano to deploy a Rails app as we now have 10+ hosts to deploy to
12:55 ange I believe that salt stack could do a better job at this scale and bigger
12:56 shiin I cant apt-get install salt-minion on a new squeeze installation, having added the repos as the manual suggests here: http://docs.saltstack.com/en/latest/topics/installation/debian.html. the error is this:  salt-minion : Depends: salt-common (= 2014.1.10+ds-1~bpo70+1) but it is not going to be installed and                Depends: python-zmq (>= 13.1.0) but it is not going to be installed
12:56 CycloHex IS it possible to install packages which aren't in the distro's repo's?
12:56 babilen orchestrate is being used to model dependencies between minions/nodes (to ensure that services that are deployed on different nodes can work together)
12:56 vejdmn joined #salt
12:56 ange so basically : check out specific tag or last commit from git_repo@master , run some scripts, compile assets, run some more scripts, switch link from old release to new, restart unicorn instances softly
12:56 Katafalkas joined #salt
12:56 CycloHex I meant to ask whether it's possible to do this without using cmd.run.. I'd like to do this from state files...
12:57 to_json joined #salt
12:57 babilen CycloHex: Sure, pkg.install - source: can point to any URL and you can configure third-party repositories with pkgrepo.managed
12:57 mapu joined #salt
12:57 ange babilen: I see
12:57 CycloHex ohh, i thought I'd read something about pkg not being able to install anything but pkgs that are in the repo.. Thanks babilen
12:58 ange not certain that it fits what I am trying to do then
12:58 babilen ange: Sure, but what you describe doesn't require orchestrate. That simply requires a specific order of states that you would express with requisites in salt. See http://docs.saltstack.com/en/latest/ref/states/requisites.html and, say, http://docs.saltstack.com/en/latest/ref/states/all/salt.states.git.html
12:58 miqui joined #salt
12:59 babilen CycloHex: I am referring to sources in pkg.installed http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html
13:00 yomilk joined #salt
13:00 ange babilen: ok, thanks; any good keywords that would describe this kind of usage that would help me find examples ?
13:00 CycloHex babilen, thank you!
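
[Editor's note: a sketch of both options babilen names; the package name, URL and repo details are placeholders:]

    # install a package straight from a URL
    mypkg:
      pkg.installed:
        - sources:
          - mypkg: http://example.com/pool/mypkg_1.0_amd64.deb

    # or configure a third-party repository first
    example_repo:
      pkgrepo.managed:
        - name: deb http://repo.example.com/debian wheezy main
        - key_url: http://repo.example.com/repo.gpg
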
13:01 babilen ange: "deployment saltstack" or "git deploy saltstack" comes to mind
13:01 bhosmer_ joined #salt
13:02 oz_akan joined #salt
13:02 ange babilen: yep, that's what I thought and tried ;)
13:02 babilen ange: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.gem.html might come in handy to install gems (take a look at other states) and then run some cmd.run to do whatever you need to do.
13:03 jslatts joined #salt
13:03 ange babilen: nice, thanks !
13:03 babilen ange: I would approach it from a bit of a different angle first: Implement (in salt) everything that is needed to run and install your software and then start thinking about the best way to upgrade to new versions.
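
[Editor's note: a very rough shape of the deploy flow ange describes, using ordinary requisites rather than orchestrate; the repo URL, paths and commands are all hypothetical:]

    app_code:
      git.latest:
        - name: git@example.com:app.git
        - target: /srv/app
        - rev: master

    compile_assets:
      cmd.wait:
        - name: bundle exec rake assets:precompile
        - cwd: /srv/app
        - watch:
          - git: app_code

    unicorn:
      service.running:
        - reload: True
        - watch:
          - cmd: compile_assets
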
13:04 wendall911 joined #salt
13:06 rawtaz does anyone know if https://github.com/saltstack/salt/issues/16128 is being worked on? :o
13:06 ekristen joined #salt
13:07 rawtaz i was trying out ansible, but it turns out that it *requires* sshpass when you want to use a password for the ssh authentication. and the response by the ansible ppl is that i should instead use a provider that lets me inject a key at the time the VPS is created, so i dont have to use a password the first time i connect with ansible
13:07 rawtaz so now im back to poking at salt-ssh since i dont want to use sshpass, heh
13:09 racooper joined #salt
13:13 jeddi joined #salt
13:18 rypeck Anyone successfully using SaltStack to manage AIX machines?
13:19 rypeck I'm doing some Googling and I see some discussions back in 2012 - unsure where things stand now.
13:19 mpanetta joined #salt
13:19 to_json joined #salt
13:19 to_json joined #salt
13:20 glyf joined #salt
13:21 elfixit joined #salt
13:22 nitti joined #salt
13:25 nitti joined #salt
13:28 lcavassa joined #salt
13:29 krissaxton joined #salt
13:29 kasey_ joined #salt
13:33 manfred rypeck: they are currently working on packaging the enterprise version for aix, no idea about the community.
13:37 yidhra joined #salt
13:42 che-arne joined #salt
13:42 thayne joined #salt
13:42 martoss joined #salt
13:42 gmcwhistler joined #salt
13:46 brandon_ joined #salt
13:49 steve1 joined #salt
13:50 rypeck manfred: thanks!
13:51 Deevolution joined #salt
13:52 wnkz joined #salt
13:53 rawtaz sooo. does anyone have any idea why salt-ssh in /usr/local/Cellar/saltstack/HEAD/lib/python2.7/site-packages/salt/utils/thin.py line 142 is trying to  os.chdir(os.path.dirname(top))  on a top that is a path to a *file* and not a dir?
13:53 rawtaz looking in /usr/local/Cellar/saltstack/HEAD/libexec/lib/python2.7/site-packages it seems that some of the packages are dirs while some are files. the msgpack one is a zip archive.
13:54 rawtaz im wondering if there's something wrong with the installation (e.g. that the zip archive .egg files should have been unpacked) or if the problem is more likely in how thin.py handles these files/folders
13:54 rawtaz (this is in relation to https://github.com/saltstack/salt/issues/16128)
13:54 krissaxton left #salt
13:57 fxdgear joined #salt
14:02 Supermathie joined #salt
14:03 wnkz___ joined #salt
14:03 dude051 joined #salt
14:04 desertigloo joined #salt
14:05 wnkz joined #salt
14:05 joehoyle joined #salt
14:08 danielbachhuber joined #salt
14:13 perfectsine joined #salt
14:13 wnkz joined #salt
14:13 vejdmn joined #salt
14:14 rawtaz meh, noone :P
14:17 aparsons joined #salt
14:19 ajprog_laptop joined #salt
14:22 aparsons joined #salt
14:23 CeBe joined #salt
14:26 kaptk2 joined #salt
14:26 ckao joined #salt
14:27 supplicant joined #salt
14:28 supplicant I'm wondering if there is a way to target hosts based on uptime. I've tried looking at the grains, but it doesn't seem like uptime is there
14:30 hobakill joined #salt
14:31 TheThing joined #salt
14:31 rawtaz might be a good contribution? :)
14:33 ndrei joined #salt
14:33 babilen supplicant: What might that be useful for?
14:34 ange is there any salt "contractors" around ?
14:36 rypeck manfred: any idea what enterprise pricing runs per node? Other option my colleague is looking at is puppet enterprise for AIX support - that runs about $100 a node.
14:36 manfred no idea
14:37 manfred you would need to contact their sales stuff.
14:37 supplicant babilen: trying to locate servers that were started today
14:39 fannet good morning - has anyone had luck using pygit2 on 2014.1.10 ?
14:40 iggy another good one would be hosts that maybe need a kernel upgrade
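
[Editor's note: uptime is not a stock grain in this era, but a custom grain dropped into _grains/ would make this targeting possible; a Linux-only, untested sketch:]

    # _grains/uptime.py (hypothetical)
    def uptime():
        '''Expose system uptime in whole days as a grain.'''
        with open('/proc/uptime') as f:
            seconds = float(f.read().split()[0])
        return {'uptime_days': int(seconds // 86400)}

Servers started today could then be targeted with: salt -G 'uptime_days:0' test.ping
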
14:41 iggy fannet: I don't think it was supported till 2014.7.x
14:42 iggy nvm, I read the docs wrong... it just says as of 2014.7.x, these backends are available and it's in the list
14:43 Katafalkas joined #salt
14:44 iggy nope, I was right, that support was added in 2014.7.x
14:46 fannet what is the recommended gitfs provider for 2014.1.10 then
14:47 fannet gitpython?
14:47 Katafalkas joined #salt
14:47 iggy that's the _only_ one from everything I can find
14:48 iggy "Beginning with version 2014.7.0, both pygit2 and Dulwich are supported as alternatives to GitPython. "
14:48 iggy that implies that prior to 2014.7.x, only gitpython was supported
14:48 iggy at least to me
14:49 hobakill rypeck: i have some idea of costs. we just got a quote from salt a few weeks back.
14:51 scottpgallagher joined #salt
14:52 hobakill rypeck: we were quoted $140/node starting at 250 nodes. the fewer nodes, the higher the price. YMMV
14:54 fannet iggy: am I correct in thinking that file_roots should not be defined with gitfs being the provider?
14:54 rawtaz hobakill: per year?
14:54 rallytime joined #salt
14:56 iggy fannet: you can use both if you have fileserver_backend set to -git -roots
14:56 fannet right but I only want my states to come from git
14:57 btorch joined #salt
14:57 iggy then yeah, you can leave file_roots out completely
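
[Editor's note: the master config iggy describes, sketched with a placeholder remote. With only git listed, file_roots is never consulted:]

    # /etc/salt/master
    fileserver_backend:
      - git

    gitfs_remotes:
      - git://github.com/example/salt-states.git
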
14:58 hobakill rawtaz: yes
14:58 rawtaz thats a lot of cash for one node (out of 250+) :) i assume there's a bunch of support in there as well
14:59 TOoSmOotH joined #salt
15:00 hobakill rawtaz: i'd assume so as well. we're still pondering the offer.
15:00 btorch left #salt
15:00 rawtaz i can imagine :D
15:00 fannet one more thing - whenever I define gitfs_root: then salt-call craps out with a 'failed to compile'
15:01 alanpearce_ joined #salt
15:01 ggalvao joined #salt
15:04 imanc joined #salt
15:04 fannet scratch that -got it working (I had a syntax error)
15:04 iamtew I think it's supposed to be: gitfs_remotes
15:04 conan_the_destro joined #salt
15:06 rawtaz it's you, iamtew
15:06 Deevolution Good morning.  Anyone know if it's possible to define ext_pillars by environment (i.e. possibly nest then in the pillar_roots environments)?
15:06 SheetiS joined #salt
15:07 jslatts joined #salt
15:08 iamtew rawtaz: aah my mistake, wrong option
15:08 iamtew I is confused..
15:09 patarr joined #salt
15:09 patarr joined #salt
15:09 snuffop joined #salt
15:10 Daviey joined #salt
15:11 bhosmer joined #salt
15:11 ericof joined #salt
15:12 wnkz joined #salt
15:12 dccc_ joined #salt
15:12 zooz joined #salt
15:13 zooz hi
15:13 zooz can I use gitfs with masterless salt?
15:15 the_drow zooz: no. It's not supported yet
15:15 wnkz_ joined #salt
15:20 bezeee joined #salt
15:21 wnkz_ joined #salt
15:23 manji joined #salt
15:26 aparsons joined #salt
15:28 zooz thanks the_drow
15:29 zooz what options do I have of running masterless salt on ec2?
15:29 zooz I need salt to provision some persistent datastore instances (elasticsearch, mongo, etc)
15:29 zooz nothing too complicated
15:30 zooz the tricky bit is how do I get salt state files on to minions securely?
15:30 jalbretsen joined #salt
15:31 seanz joined #salt
15:32 tinuva joined #salt
15:32 aparsons joined #salt
15:34 thayne joined #salt
15:34 helderco joined #salt
15:34 anotherZero joined #salt
15:34 thedodd joined #salt
15:35 StDiluted joined #salt
15:35 kingel joined #salt
15:35 Katafalk_ joined #salt
15:35 jamesf_ script (as necessary) + git pull as part of provisioning?
15:36 zooz that would work
15:36 ajprog_laptop joined #salt
15:36 zooz the tricky bit is auth I guess
15:36 CeBe joined #salt
15:36 anotherZero joined #salt
15:37 giantlock joined #salt
15:37 tcotav getting a key over there?
15:38 hasues joined #salt
15:38 zooz yeah, minions should be able to git pull regularly via cron job or something
15:40 martoss1 joined #salt
15:40 tcotav yeah, or on demand depending on frequency of changes
15:41 zooz I guess I could upload to s3
15:41 shiin I cant apt-get install salt-minion on a new squeeze installation, having added the repos as the manual suggests here: http://docs.saltstack.com/en/latest/topics/installation/debian.html. the error is this:  salt-minion : Depends: salt-common (= 2014.1.10+ds-1~bpo70+1) but it is not going to be installed and                Depends: python-zmq (>= 13.1.0) but it is not going to be installed, any suggestions?
15:41 iggy if you already have multiple instances, why not use a normal "with master" setup?
15:42 zooz iggy, I just need salt for very small amount of instances configuration, mainly persistent data stores
15:42 zooz everything else runs inside containers
15:42 zooz so having salt master is just a complexity which I am trying to avoid
15:42 wnkz joined #salt
15:43 iggy but you want all the benefits of one (i.e. updating git when necessary, etc.)
15:43 helderco Hey guys… I’m suddenly hitting a “No matching sls found for ‘sites’ in env ‘dev’” error. http://pastebin.com/LXxb1Eki … any help in solving this?
15:44 fannet its probably a syntax error in your SLS helderco
15:44 CeBe1 joined #salt
15:44 bezeee joined #salt
15:45 zooz iggy, that would be nice, but I am just gathering information what's possible with least amount of effort
15:46 helderco fannet: can’t see anything wrong… any way to pinpoint?
15:46 tcotav iggy: not sure what other benefits you're talking about.  you could set up a git pull loop at the command line even for that one bit.
15:46 basepi the_drow: sorry I never pinged you, I was mostly offline this weekend.
15:46 fannet helderco: did you edit any state files before this started happening
15:46 tligda joined #salt
15:46 the_drow Oh, how are you?
15:46 tcotav this way, zooz's stuff is small and self-contained
15:47 the_drow I need to ask some questions about the vpc module I am creating but it seems that no one is available
15:47 to_json joined #salt
15:47 the_drow basepi: but now you are
15:47 the_drow basepi: I'll brb in a sec.
15:48 basepi the_drow: cool.  I'll be here.
15:48 helderco fannet: yes, but nothing special… I’m looking at the changes with git but not finding anything wrong
15:49 iggy I just keep seeing tons of people expecting masterless to solve all the world's problems... and it's not going to
15:49 fannet try reverting to the last stable and slowly step in the changes
15:50 fannet also you could try blowing away your cache and restarting salt-master
15:50 tcotav iggy: totally agree with that.  It comes up a lot in the channel.
15:50 iggy it has fairly well documented restrictions (compared to a normal setup)... I guess lack of gitfs support should be added to that list
15:51 tcotav however, I thought what zooz was doing could be done via masterless (minus the gitfs...  and maybe that's because I never ever use that)
15:53 helderco fannet: yeah, I’ll look into it, thanks
15:54 the_drow basepi: back. I need someone to review the module and comment about the style.
15:54 the_drow basepi: Should the module always return true/false or an object when creating?
15:54 helderco not cache
15:55 the_drow basepi: Currently the documentation doesn't use sphinx features such as return type or argument description/type. Is that intentional?
15:55 basepi the_drow: let's move to PMs so we don't clutter this channel
15:55 the_drow k
15:55 wnkz_ joined #salt
15:59 jslatts joined #salt
16:00 wiqd joined #salt
16:01 helderco fannet: found it :) I renamed an include when I didn’t yet rename the files
16:01 fannet ;)
16:01 iggy helderco: what version of salt?
16:01 helderco 2014.1.10
16:02 iggy okay, I'm pretty sure 2014.7 has better reporting of failures like that
16:02 linjan joined #salt
16:05 hobakill i will be so happy when 2014.7 hits epel! :)
16:06 wnkz joined #salt
16:07 SheetiS I'm thinking I want to stop using EPEL all together and start using my local yum repo, so I can control when new versions get rolled up into RPMs
16:08 hobakill SheetiS: i've had similar thoughts. EPEL is too slow
16:09 conan_the_destro joined #salt
16:09 nitti joined #salt
16:09 thedodd joined #salt
16:10 SheetiS That and I always get a little nervous when things are installed from EPEL to production servers anyhow.  I've had a couple of bad experiences.
16:11 philipsd6 joined #salt
16:14 elextro joined #salt
16:14 elextro Hi, has anyone seen this error before:  Could not access /var/cache/salt/master. Path does not exist.
16:14 StDiluted does /var/cache/salt exist?
16:14 StDiluted or /var/cache/salt/master
16:15 elextro Yes, I have verified that /var/cache/salt/master does indeed exist
16:15 iggy permissions?
16:16 iggy like say... non-root user and /var/cache/salt directory doesn't have +x
16:16 elextro What do they need to be set as?
16:16 iggy or any other directory above that
16:16 elextro Hmm...let me check it out
16:16 StDiluted mine is owned by root and is 755
16:17 iggy if you're trying to run salt as non-root, there's some documentation on how to get that up and running
16:17 elextro This error appeared after running ' salt '*' state.sls orchestration.galera_cluster' as root
16:17 elextro :/
16:18 smcquay joined #salt
16:18 jindo joined #salt
16:18 rallytime joined #salt
16:19 smcquaid joined #salt
16:20 SheetiS elextro: if you are actually running an orchestration, salt-run state.orchestrate is probably how you want to do it. (http://docs.saltstack.com/en/latest/topics/tutorials/states_pt5.html#orchestrate-runner)
16:21 nitti joined #salt
16:22 __TheDodd__ joined #salt
16:23 elextro Okay let me give that a shot. That's my problem. I'm not sure why i wasn't using salt-run
16:23 elextro I guess I wasn't thinking. Thanks @SheetiS
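
[Editor's note: the shape of an orchestration file as run via the runner SheetiS points to; the target and sls name are guesses based on elextro's command:]

    # /srv/salt/orchestration/galera_cluster.sls (sketch)
    setup_galera:
      salt.state:
        - tgt: 'galera*'
        - sls: galera

Run from the master with: salt-run state.orchestrate orchestration.galera_cluster
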
16:23 desposo joined #salt
16:23 to_json1 joined #salt
16:24 kivihtin joined #salt
16:24 martoss joined #salt
16:25 to_json2 joined #salt
16:25 dalexander joined #salt
16:26 thedodd joined #salt
16:27 shiin my problem got resolved with squeeze-lts.
16:27 shiin left #salt
16:30 KyleG joined #salt
16:30 KyleG joined #salt
16:31 ndrei joined #salt
16:31 aparsons joined #salt
16:33 rihannon joined #salt
16:34 aparsons joined #salt
16:36 davet1 joined #salt
16:37 wt joined #salt
16:41 troyready joined #salt
16:42 wnkz_ joined #salt
16:42 under_dog5000 joined #salt
16:43 under_dog5000 Anyone have experience installing ruby with salt?
16:43 aparsons joined #salt
16:43 murrdoc joined #salt
16:44 babilen What's your real question?
16:44 SheetiS under_dog5000: wanting to use rvm or via packages (say debs or rpms) or even just a direct target via source?  There are several ways to handle that.
16:44 under_dog5000 Yes, I'm using the rvm module in salt.
16:44 under_dog5000 I'm familiar with the process, but having an error when trying to launch unicorn
16:45 wnkz joined #salt
16:45 under_dog5000 Specifically (and it's the strangest thing), when I run a salt-call state.highstate, unicorn runs just fine and the whole app is up. When I run state.highstate from a master, unicorn fails to start
16:46 LBJ_6 joined #salt
16:46 rmnuvg joined #salt
16:46 fannet joined #salt
16:46 fannet anyone know why salt-cloud for GCE won't accept the FQDN for a host ex: salt-cloud -p myprofile myhost.fqdn.com  ? passing just the hostname w/out FQDN works
16:47 under_dog5000 Unicorn is being run from an init script, and when I launch the script locally, everything works out fine. I cd to the working directory before starting up the process. Again, with salt-call it works fine (locally) and with state.highstate run from the master, it can't find the process
16:48 under_dog5000 I think it's a ruby-esque issue (and I know very little about ruby and deploying ruby apps) but I can't seem to nail down what is different about the salt-call state.highstate and the salt 'target*' state.highstate
16:49 SheetiS under_dog5000: have you tried to run the highstate from the master with '-l debug' to see if it gives you any useful information?  Also general information like versions of masters and minions (and do they match?) might be helpful.
16:49 LBJ_6 How to understand the difference between salt state and salt module?
16:50 ajolo joined #salt
16:50 wnkz_ joined #salt
16:52 under_dog5000 SheetiS: Yes, I've run salt-call from the master as such (salt 'target*' cmd.run 'salt-call -l debug state.highstate') and it fails. The issue is that it doesn't find the unicorn process (unicorn is installed as a gem, fyi)
16:52 spookah joined #salt
16:53 under_dog5000 stderr:                       /etc/init.d/unicorn: line 25: kill: (9432) - No such process                       /usr/share/ruby/vendor_ruby/2.0/rubygems/dependency.rb:296:in `to_specs': Could not find 'unicorn' (>= 0) among 4 total gem(s) (Gem::LoadError)
16:53 babilen under_dog5000: Why don't you run the highstate from the master?
16:54 babilen (as in "salt 'target' state.highstate")
16:54 SheetiS babilen: beat me to the question
16:54 under_dog5000 I do
16:54 under_dog5000 It fails when ran from the master with just targeting (salt 'target*' state.highstate)
16:54 babilen How so?
16:55 under_dog5000 And it fails when running salt call from the master (salt 'target' cmd.run 'salt-call state.highstate')
16:55 under_dog5000 Fails in the sense that it cannot find the unicorn gem (the app server), so unicorn never starts and can't accept any requests from Nginx (since it isn't running)
16:55 under_dog5000 [12:54] <under_dog5000> stderr:                       /etc/init.d/unicorn: line 25: kill: (9432) - No such process                       /usr/share/ruby/vendor_ruby/2.0/rubygems/dependency.rb:296:in `to_specs': Could not find 'unicorn' (>= 0) among 4 total gem(s) (Gem::LoadError)
16:56 scottpgallagher joined #salt
16:56 under_dog5000 It only works when running salt-call state.highstate locally on the minion
16:56 luminous under_dog5000: what do your logs say?
16:56 SheetiS under_dog5000: my guess is that the proper ruby via rvm isn't selected when using salt 'target*' state.highstate. do you get a different result when you do 'salt <target> cmd.run "rvm list rubies"' vs when you run it via salt-call directly on the minion?
16:56 babilen under_dog5000: Would it be possible to paste the entire output of the highstate run + your states to http://refheap.com ?
16:56 under_dog5000 SheetiS: Let me try that.
16:56 SheetiS since salt-call would have your local profile variables loaded.
16:56 SheetiS since you'd be directly on the machine.
16:56 under_dog5000 babilen: Sure, just be a moment
16:57 under_dog5000 Ok, let's see...
16:57 under_dog5000 Very likely that's it, just be a sec though
16:58 jaimed joined #salt
16:58 SheetiS if this doesn't get anywhere, a full pastebin/refheap of output as babilen suggested would be the next step.
17:00 murrdoc has anyone used this ? https://github.com/saltstack-formulas/salt-formula, i am trying to figure out if the intended use is using salt-ssh or does the author want to use it using salt state call
17:00 under_dog5000 Ok, yeah, I get the right rubies back when I do salt 'target' rvm.list |_       - ruby       - 2.1.2       - True  (but I get nothing back when doing it locally Function rmv.list is not available)
17:00 under_dog5000 I'll get the pastebin up
17:01 babilen murrdoc: Yes, plenty of people used that formula before and its usage is not tied to ssh
17:01 babilen under_dog5000: You are aware of the typo rmv vs. rvm, aren't you?
17:02 murrdoc sweet
17:02 under_dog5000 No, I wasn't, but thank you. Here is the proper output: rvm rubies  =* ruby-2.1.2 [ x86_64 ]  # => - current # =* - current && default #  * - default local:     |_       - ruby       - 2.1.2       - True
17:02 murrdoc so install salt-minion on the minion and then apply that state ?
17:02 babilen murrdoc: exactly
17:02 murrdoc :| cool
17:02 murrdoc thanks babilen
17:03 `tek joined #salt
17:03 wnkz_ joined #salt
17:03 psubwayd joined #salt
17:03 `tek left #salt
17:04 alexwh joined #salt
17:05 che-arne joined #salt
17:05 kingel joined #salt
17:06 * iggy really needs to PR his changes to that formula
17:07 wnkz joined #salt
17:07 kingel joined #salt
17:08 bezeee joined #salt
17:08 Katafalkas joined #salt
17:09 ndrei joined #salt
17:09 notpeter_ joined #salt
17:09 murrdoc so the steps are install salt, change config to point to master, then on master apply state to setup the minion
17:10 murrdoc using this formula
17:10 erjohnso joined #salt
17:11 logix812 joined #salt
17:11 iggy it works well with salt-cloud (which does the salt master setup for you)
17:12 murrdoc yeah that it does
17:12 iggy or we have a little boot script setup in gce that does it
17:12 murrdoc cos the master sets up the minion and installs the key and so on
17:13 iggy same concept, we just didn't have any luck with salt-cloud and gce (and we have a time crunch)
17:13 murrdoc whats your pull request
17:14 murrdoc or where is it
17:14 iggy mostly dealing with salt-cloud setup from pillar data
17:14 iggy github.com/iggy/salt-formula maybe?
17:15 murrdoc who knows
17:15 murrdoc only one way to find out
17:15 iggy it's a bit dirty spread over too many commits that need to be squashed
17:15 murrdoc https://github.com/iggy/salt-formula/compare/saltstack-formulas:master...master
17:16 aparsons joined #salt
17:16 iggy as you can see... needs to be squashed
17:16 iggy that was like my 3rd day at this new job, didn't have a good dev env setup
17:17 iggy so just kept committing a bunch of crap
17:17 aparsons joined #salt
17:18 eunuchsocket joined #salt
17:18 under_dog5000 Ok, the refheap can be viewed here: https://www.refheap.com/90895
17:18 under_dog5000 babilen and SheetiS ^
17:19 Ryan_Lane joined #salt
17:20 LBJ_6_ joined #salt
17:21 Phibs joined #salt
17:21 perfectsine joined #salt
17:21 Phibs are there packages around for 2014.7 ?
17:21 manfred not really
17:21 manfred needs to be installed from git or pip for testing
17:22 manfred 2014.7 is still in rc
17:22 manfred rc2 now
17:22 drawks morning all
17:22 Ahrotahntee g'day
17:23 murrdoc the packaging scripts are in the repo
17:23 murrdoc jenkins it up
17:24 mechanicalduck joined #salt
17:26 unstable left #salt
17:27 Phibs ok thanks
17:27 akl joined #salt
17:27 martoss joined #salt
17:27 baniir joined #salt
17:28 Phibs I'm running 2014.1.10 and upon restart it uses like 5000% cpu and load avg hits 3000, and I only have 1200 minions
17:28 mechanicalduck joined #salt
17:28 Phibs wondering if 2014.7 fixes any of that
17:30 murrdoc one master for 1200 minions ?
17:30 ggalvao joined #salt
17:30 ggalvao good afternoon, guys :)
17:30 Phibs murrdoc: yeah?
17:31 Gareth morning morning
17:31 murrdoc cool
17:31 Phibs murrdoc: http://static.squarespace.com/static/524cf70fe4b05018590c3fb3/5265a546e4b0bc5cd29c7c71/52662139e4b08e763cc036b1/1383242522692/SaltStack%20speed%20banner%20text.jpg?format=1500w
17:31 Phibs you saying thats not true?
17:31 Phibs :)
17:31 murrdoc i am impressed is all
17:31 Phibs it shits itself a lot :)
17:31 n8n joined #salt
17:31 murrdoc you tried the master-master setup ?
17:32 Phibs that's the plan
17:32 Phibs was the goal all along, just haven't had the chance yet
17:32 iggy or master-master-master-master setup
17:32 Phibs yeah
17:32 murrdoc cos that did get better in 2014.7
17:32 Phibs all the masters and get my syndic on
17:32 philipsd6 joined #salt
17:32 Phibs we only use salt for orchestration, puppet for cfg mgmt
17:32 iggy yeah, I'm surprised a single master is holding up at all with that many minions
17:33 Phibs I'm not sure why it wouldn't ?
17:33 Phibs given its just processing an MQ ?
17:33 jonatas_oliveira joined #salt
17:33 under_dog5000 Ok, the refheap can be viewed here: https://www.refheap.com/90895
17:33 Phibs Maybe the salt master should be c++ ;0
17:34 iggy our salt-master with ~40 minions is pushing 2.5-5% cpu at idle... doing the math... our master wouldn't hold up under that
17:34 Phibs nod
17:34 Phibs what uses all the cpu?
17:34 iggy never cared to look
17:34 Phibs what if i was amazon with like 200K hosts, how many salt masters would I need :)
17:35 murrdoc masterless!
17:35 murrdoc sorta srs
17:35 Phibs ;)
17:35 jonatas__ joined #salt
17:35 Ryan_Lane s/sorta/completely/
17:36 Phibs So salt seems to scale like puppet then :(
17:36 kingel joined #salt
17:36 Ryan_Lane though I've heard there's users with 100k or so minions
17:36 Ryan_Lane https://github.com/saltstack/salt/issues/16215 <-- that would be my solution
17:36 Ryan_Lane local file client, but still doing remote execution
17:36 iggy my master for that deployment is also 2vcpu/7G mem
17:37 Phibs Mine has 8 cpus and 64G of ram
17:37 n8n joined #salt
17:37 StDiluted currently I’m using a  micro instance for my master on AWS, lol
17:37 Phibs it just doesn't seem to scale properly
17:37 iggy so you could scale that vertically for a while
17:37 prosper_ joined #salt
17:37 Phibs I think the MQ part might be a tad buggy
17:37 iggy I would definitely file bugs
17:37 holler left #salt
17:37 holler joined #salt
17:37 dude051 joined #salt
17:37 Ryan_Lane I'd likely also use an external job cache (and external master cache)
17:37 iggy I'm sure most testing/use doesn't happen on setups that size
17:38 Phibs indeed
17:38 Ryan_Lane then try putting the masters behind a load balancer
17:38 holler hello, I am using salt for vagrant provision script and its hanging out salt.highstate
17:38 holler "Calling state.highstate... This may take awhile"
17:38 holler how can I set the provision script to output more information?
17:38 StDiluted check your minion log?
17:38 Phibs iggy: saltstack enterprise claims 'scale' ;0
17:38 holler StDiluted: where is that? /var/log?
17:39 Ryan_Lane well, I've heard anecdotally that there's someone using 100k minions
17:39 manfred holler:  add -D to the script_args in /etc/salt/cloud so that the bootstrap script gives debug information
17:39 StDiluted holler: /var/log/salt/minion
17:39 StDiluted on the minion
17:39 Ryan_Lane basepi: https://github.com/saltstack/salt/issues/16215 <-- any feelings on this?
17:39 Phibs I'll assume using python to 'scale' is probably not the right choice, but obviously meets most people's needs
17:40 holler StDiluted: It's masterless minion, there is no /var/log/salt ...
17:40 Ryan_Lane my masterless minion has a /var/log/salt
17:40 Ryan_Lane it's the default location
17:40 Ryan_Lane and all the defaults make it write there
17:40 dude051 joined #salt
17:40 Ryan_Lane ugh. right. vagrant
17:41 Phibs Sounds like RAET vs 0MQ for 2014.7 should help too
17:41 tristianc joined #salt
17:41 manfred yes
17:41 Daviey joined #salt
17:41 manfred raet is super nice
17:41 Phibs 0MQ has more bugs than a roach motel
17:41 manfred and there are a lot more things that will be coming once 2014.7 is stabilized and they can go back to expanding the uses of raet
17:41 Phibs nod
17:41 murrdoc raet is udp
17:41 manfred (supposedly raet will allow them to finally have multitenancy in the portal)
17:42 manfred murrdoc:  correct
17:42 Phibs hmmm
17:42 Phibs interesting design choice....
17:42 murrdoc cant trust udp
17:42 murrdoc ( heh )
17:42 Phibs yeah not quite sure that was a good idea
17:42 manfred to use udp?
17:42 Phibs right
17:42 manfred the point is to get rid of the overhead of tcp
17:43 StDiluted ah, vagrant. sorry. um, not sure then
17:43 manfred because they are already doing the checking on seeing if jobs are reaching the minions, no reason to have the overhead of tcp doing the same thing
17:43 manfred as well as other reasons
17:43 Phibs manfred: checking how ?
17:43 manfred salt always checks if a minion is still running the jobs...
17:43 Ryan_Lane holler: I decided the salt provisioner was inflexible enough that I just used a shell provisioner
17:43 manfred same way it checks for if the highstate is still being run
17:43 Ryan_Lane and had it install salt, then run it how I wanted it
17:43 Phibs gotcha
17:43 manfred Phibs: https://www.youtube.com/watch?v=SI5J43UkarM
17:43 tcotav holler: same here -- used my own installer on vagrant
17:44 Ryan_Lane I was able to get the salt provisioner working, but there's an undocumented option to make it show its output
17:44 Ryan_Lane I think it's .verbose = true?
17:44 Ryan_Lane on whatever the config variable is
17:44 Phibs manfred: thansk
17:44 manfred np
17:45 holler Ryan_Lane: salt.verbose = True is set, but when it gets to "calling salt.highstate..." on fellow dev's machine it just hangs
17:45 Phibs haha who is on the left
17:45 holler of course it works fine on my computer
17:45 holler trying to help debug since I am the one that championed using salt :D
17:45 tcotav holler: the other thing I did was create a custom image with salt pre-installed and had the devs use that
17:45 Ryan_Lane holler: it must be something other than true, then
17:45 prosper_ joined #salt
17:45 Ryan_Lane I had to look into the source to find the option
17:46 manfred Phibs, tom
17:47 Phibs heh
17:47 QiQe joined #salt
17:47 Phibs awkward.
17:48 QiQe hello everybody, I'm using the iptables.append state and for some reason the same iptables rule is being added every time I run salt-call on the node
17:49 QiQe so I have n duplicated lines when I list the rules
17:49 manfred QiQe:  are you on rhel 5/6 or ubuntu 10.xx?
17:49 QiQe rhel
17:49 manfred 5 or 6?
17:49 QiQe 6
17:49 manfred don't use the iptables module
17:49 manfred iptables on rhel 6 did not have iptables --check, so we are doing regex matching on iptables-save
17:50 jalaziz joined #salt
17:50 manfred and iptables changes around the way that the rule is written, so it is very difficult to match
17:50 QiQe got it, thanks manfred
17:50 manfred you should manage it in /etc/sysconfig/iptables using jinja
17:50 manfred QiQe:  https://github.com/saltstack/salt/issues/12455
17:51 QiQe thanks manfred for the quick response
17:52 manfred np
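
[Editor's note: the alternative manfred recommends, sketched; the template path is a placeholder:]

    iptables_config:
      file.managed:
        - name: /etc/sysconfig/iptables
        - source: salt://iptables/files/iptables.jinja
        - template: jinja

    iptables:
      service.running:
        - watch:
          - file: iptables_config
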
17:53 n8n joined #salt
17:57 ggoZ joined #salt
17:57 aparsons joined #salt
17:58 chasehiccups joined #salt
17:58 jslatts joined #salt
17:59 otter768 joined #salt
17:59 btorch joined #salt
17:59 aparsons joined #salt
18:00 hobakilllll joined #salt
18:00 aparsons joined #salt
18:01 oz_akan joined #salt
18:01 LBJ_6 joined #salt
18:01 bhosmer joined #salt
18:01 aarontc joined #salt
18:03 viq joined #salt
18:04 tristianc joined #salt
18:05 rjc joined #salt
18:05 LBJ_6 joined #salt
18:09 LBJ_6 joined #salt
18:09 to_json joined #salt
18:10 mapu joined #salt
18:10 chrisjones joined #salt
18:13 Vye I just configured salt-minion to run as a non-root user. I'm surprised that cmd doesn't automatically try to use sudo. Is there a way to configure this?
18:14 druonysus joined #salt
18:14 druonysus joined #salt
18:15 tyler-baker joined #salt
18:15 micah_chatt joined #salt
18:15 n8n joined #salt
18:16 LBJ_6 joined #salt
18:16 bhosmer_ joined #salt
18:17 ajolo joined #salt
18:19 trevorj Hi all, what's the best way to get salt-cloud to provision already existing EC2 instances?
18:19 oz_akan joined #salt
18:20 StDiluted http://salt-cloud.readthedocs.org/en/latest/topics/config.html#saltify
18:20 manfred don't use read the docs
18:20 StDiluted sorry
18:20 StDiluted was the link that came up
18:20 trevorj manfred: why not?
18:20 Vye It looks like Alan provided a hack in a salt-users thread... but surely there is a supported way to do this? Docs seem mute on the subject (tho. sudo is referenced).
18:20 StDiluted http://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.saltify.html
18:20 Vye https://groups.google.com/forum/#!topic/salt-users/0D0fUjmFxAU
18:21 manfred readthedocs isn't updated anymore
18:21 manfred use docs.saltstack.com
18:21 ange ah
18:21 trevorj manfred: oh wow, I didn't know that, thanks
18:21 cpowell joined #salt
18:21 ange good thing to spot ...
18:21 ange thanks manfred
18:21 manfred you could also use the salt-run cloud.bootstrap in 2014.7
18:21 manfred actually
18:21 manfred it is cloud.create
18:21 trevorj Why don't we take the readthedocs.org site down then?
18:21 trevorj It seems to be what comes up most commonly in google search results
18:22 manfred trevorj:  the best way to do it is just populate and send the vm_ data through the salt-run cloud.create https://github.com/saltstack-formulas/ec2-autoscale-reactor/blob/master/reactor/ec2-autoscale.sls#L12
18:22 manfred like joseph does in that reactor (you would need to populate it on the command line though)
18:22 TheoSLC joined #salt
18:23 TheoSLC Greetings.
18:23 StDiluted oh right on that’s awesome
18:24 saurabhs joined #salt
18:24 StDiluted i had not seen the autoscale reactor formula
18:24 manfred the salt-cloud runner can be used for automatically bootstrapping images based on autoscale events
18:24 trevorj manfred: Nice, so I'm just running the cloud.create event
18:25 ange nice
18:25 StDiluted what setup does the minion need to do the highstae automatically
18:25 manfred StDiluted:  this is the one that just uses the cloud cache events https://github.com/saltstack-formulas/salt-cloud-reactor
18:25 manfred StDiluted:  startup_states: highstate
18:25 trevorj manfred: That's what I was looking for
18:25 manfred http://docs.saltstack.com/en/latest/ref/states/startup.html
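
[Editor's note: where startup_states lives, sketched as a cloud profile so freshly bootstrapped minions highstate themselves on first start; profile and provider names are placeholders:]

    # /etc/salt/cloud.profiles (sketch)
    web-server:
      provider: my-ec2
      minion:
        startup_states: highstate
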
18:25 ndrei joined #salt
18:25 hjubal joined #salt
18:25 hjubal joined #salt
18:26 TheoSLC I'm having a serious problem with my salt mine. The data.p file on the master cache for each minion keeps vanishing (cause unknown); this causes false positives in minion matching for mine.get, which is destroying my application clusters. I'm hoping to get more eyes on the problem https://github.com/saltstack/salt/issues/15673
18:26 trevorj manfred: Do you happen to know if this will still look in cloud.maps for information such as what key to use?
18:26 trevorj manfred: key being SSH key for deployment
18:26 michael joined #salt
18:27 manfred I would have to dig into it, and I am busy today with vulnerbility stuff flying around, so I won't have time
18:27 StDiluted manfred: does it matter which one? Is it just a matter of preference?
18:27 manfred StDiluted:  if you use ec2, you can use the ec2 sns messages to trigger the ec2 reactor
18:28 manfred instead of having to schedule a salt-cloud -F every 10 minutes, you can have it be immediately upon the server coming online
18:28 perfectsine joined #salt
18:28 under_dog5000 Is there any way to run a command in salt and pass it certain env variables to run the command against? For instance, I have a command I need to run, but I want it to run with certain env variables exported. Simply running source variables.file and then running the command doesn't seem to be working?
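
[Editor's note: this question goes unanswered in the log, but the cmd states accept an env argument, which avoids relying on sourcing a file first; all names below are placeholders:]

    run_with_env:
      cmd.run:
        - name: bundle exec unicorn -c config/unicorn.rb -D
        - cwd: /srv/app
        - env:
          - RAILS_ENV: production
          - GEM_HOME: /usr/local/rvm/gems/ruby-2.1.2
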
18:28 StDiluted yeah, i do use ec2 so that would be preferable
18:28 manfred in the end, they are not meant to be used directly from the repository, just as examples to create your own from
18:28 StDiluted yeah i get that
18:28 manfred then the ec2 one should just work ™
18:29 manfred http://aws.amazon.com/sns/
18:29 LBJ_6 joined #salt
18:31 murrdoc
18:31 StDiluted looks like salt API has to be set up, is there anythign i need to know about that?
18:31 Guest97968 Hey guys, looking to get this working and am having trouble figuring out how to write the states. Suggestions? https://gist.github.com/Supermathie/ac5899acd60c74a7240c
18:31 manfred not really, just follow the guide (that is still on readthedocs, soon to be moved to docs.rackspace.com)
18:31 trevorj StDiluted: It walks you through it in the README.md of that repo IIRC
18:31 philipsd6 joined #salt
18:31 StDiluted ok
18:32 thedodd joined #salt
18:32 fragamus joined #salt
18:33 otter768 joined #salt
18:34 prosper__ joined #salt
18:34 hjubal left #salt
18:35 otter768 joined #salt
18:35 Supermathie (reposting with reclaimed nick) Hey guys, looking to get this working and am having trouble figuring out how to write the states properly. Suggestions? https://gist.github.com/Supermathie/ac5899acd60c74a7240c
18:37 kingel joined #salt
18:37 jergerber joined #salt
18:39 manfred Supermathie:  get rid of these two lines in your state
18:39 manfred 'windowsdomain:ad2012r2.storagelab.netdirect.ca':
18:39 manfred - match: grain
18:39 manfred - testenviron:
18:39 manfred grains.present:
18:39 manfred - value: ad2012r2
18:39 Supermathie Which two lines?
18:40 manfred all five lines, one second
18:40 manfred Supermathie:  http://ix.io/ey9
18:40 manfred that is your state
18:40 manfred and it will be applied to any minion that has the grain os: Windows
18:41 Supermathie except that I want to match on that grain similar to http://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html#preparing-the-top-file
18:42 manfred the matching goes in your top.sls file
18:42 manfred not the state file
18:42 manfred which you have
18:42 manfred base:
18:42 manfred 'os:Windows':
18:42 manfred - match: grain
18:42 manfred - windows
18:42 kingel joined #salt
18:42 manfred salt \* state.highstate will apply the windows.sls file to any minion that has os: Windows
18:44 alexthegraham joined #salt
18:44 Supermathie isn't the 'top' file just a normal state file? Can you not do any matching in other state files?
18:44 manfred it is not
18:44 manfred it is special for highstates
18:44 perfectsine joined #salt
18:44 manfred your matching goes in the top.sls file for your highstates
18:45 smcquay joined #salt
18:45 Supermathie ok so any grain matching has to be done in top.sls, in other state files I'd have to use jinja templating?
18:46 manji joined #salt
18:46 troyready joined #salt
18:47 Supermathie or put *all* matching logic into top.sls (which doesn't feel very modular)
18:47 manfred yeah, if you want to apply a state to a server, and only have one of the states apply based on grains… you would need to block out the states using jinja templating
18:47 manfred you could always separate it out into different files and manually run each sls against the servers
18:47 manfred salt -G 'os: Windows' state.sls windows
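A minimal sketch of the jinja gating manfred describes above, with hypothetical state IDs (w32time is the standard Windows time service):

    {% if grains['os'] == 'Windows' %}
    time-service:
      service.running:
        - name: w32time
    {% else %}
    time-service:
      service.running:
        - name: ntpd
    {% endif %}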
18:49 perfectsine joined #salt
18:51 rihannon left #salt
18:53 aynik joined #salt
18:53 alexthegraham Curious if I should file a bug report for this: state.show_top shows states that will be applied but includes states that are specifically excluded by other states.
18:53 doriftoshoes joined #salt
18:55 Supermathie manfred, gotcha! Thanks, working as https://gist.github.com/Supermathie/ac5899acd60c74a7240c#file-solution. Not pretty but it doesn't need to be yet :D
18:55 kingel joined #salt
18:55 manfred nice
18:56 kingel joined #salt
18:58 alexthegraham A very simple example: https://gist.github.com/alexthegraham/74c620bf140db01d61e2
18:58 kingel joined #salt
18:58 alexthegraham I don't know if it's a bug or intentional, though.
18:59 rap424 joined #salt
18:59 StDiluted I’m noticing that even though I have sync_after_install: grains in my cloud profile, the custom grain I have in /srv/salt/states/_grains/ is not getting synced… I’m having to do it manually. Any ideas?
18:59 analogbyte hi, does anybody see why the following snippet fails? can't I reference "dynamic" id's when using watch statements?
18:59 analogbyte https://paste.selfnet.de/zJ6Dz/django
19:00 Vye Opened an issue #16233 for my sudo woes.
19:00 stubee joined #salt
19:00 analogbyte the error is:     Illegal requisite "[OrderedDict([('file', '/etc/supervisor/conf.d/selftest_runner.conf')])]", please check your syntax.
19:00 kingel joined #salt
19:03 rypeck analogbyte: looks to me like you are doing a watch inside of a require.
19:03 rypeck shouldn't the watch be on the same level as the require? not "under" it?
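For reference, requisites sit as siblings directly under the state function rather than nested inside one another; a minimal sketch using the file from the error (the pkg and service names are assumptions):

    supervisor:
      service.running:
        # require and watch are siblings at the same level
        - require:
          - pkg: supervisor
        - watch:
          - file: /etc/supervisor/conf.d/selftest_runner.conf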
19:03 baniir joined #salt
19:04 ggalvao hey guys, can I do something like mysql.pass: {{ salt.evolux.get_mysql_password() }} on a managed configuration file?
19:04 ggalvao I wrote a custom module doing some grepping on a configuration file and would like to echo it back to a minion conf file
19:05 analogbyte rypeck: well, I guess I'll go and shoot myself now, thanks :D spent like 2 hours looking at that block without realizing that...
19:05 glyf joined #salt
19:05 kasey joined #salt
19:06 iggy TheoSLC: We are seeing the same thing... adding info to the bug report
19:06 LBJ_6 joined #salt
19:07 Supermathie ggalvao, you can turn on templating for the managed  config file similar to what I do here: https://gist.github.com/Supermathie/ac5899acd60c74a7240c#file-solution
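A minimal sketch of that approach (target and source paths are hypothetical; evolux.get_mysql_password is ggalvao's custom module):

    /etc/myapp/minion.conf:
      file.managed:
        - source: salt://myapp/files/minion.conf
        - template: jinja

    # inside salt://myapp/files/minion.conf:
    mysql.pass: {{ salt['evolux.get_mysql_password']() }}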
19:08 ggalvao Supermathie: ty, I'll look into it!
19:10 StDiluted can startup_states not be set from /etc/salt/cloud.providers ?
19:10 jhauser joined #salt
19:10 fannet manfred: have you ever used GCE with salt-cloud ?
19:11 rallytime joined #salt
19:11 manfred i have not
19:12 fannet ok - anyone else here ever use GCE w/ salt cloud?
19:12 LBJ_6 joined #salt
19:12 StDiluted i use it with AWS
19:12 rypeck analogbyte: I feel your pain - to be fair - it isn't a very helpful error.
19:12 murrdoc joined #salt
19:12 iggy we tried it... what problems are you seeing?
19:12 manfred fannet:  outside of just the main configuration to use it, salt-cloud is written so that the majority of the configurations should be generic between providers
19:13 fannet ya except it doesn't work
19:13 fannet DO and AWS work no problem
19:13 thayne joined #salt
19:13 fannet but libcloud throws an exception when I use GVE
19:13 fannet *gce
19:14 fannet keep getting profile error.
19:15 fannet followed the salt-docs to the T
19:15 fannet http://pastebin.com/c7AwnjBU
19:15 manfred open an issue on github, provide sanitized configurations for your cloud.providers.d file and your cloud.profiles.d file and someone will take a look
19:16 fannet ya @UtahDave was digging into it on friday so was hoping to catch him on here before doing that to see if he had better luck
19:17 StDiluted has anyone who used salt-cloud gotten the sync_after_install to work with grains in <salt_root>/_grains ?
19:17 iggy fannet: let us know the issue number, I'd like to subscribe to it as well... we had the same (and other) problems
19:17 Katafalkas joined #salt
19:18 fannet will do. for now I may just use gcloud to spin up instances and then push the salt-minion bootstrap to them
19:18 iggy you can specify a startup script with gcloud
19:20 iggy I wrote a (horrifying) python script that we use to do our deployment until salt-cloud+gce improve a bit
19:20 iggy or until we have more time to try diagnosing all the issues we had
19:22 nitti joined #salt
19:24 tempspace fannet: what version of libcloud are you on
19:24 manfred i wrote a simple cloud-init script (each cloud server gets the same key because I have salt running on a completely encapsulated cloud network only and running in open mode, so you would have to compromise my account to get a server built in that cloud network) and just spin up servers, and cloud-init installs the minion, and then starts it, dropping in startup_states: highstate
19:24 manfred then i have a rackspace monitoring plugin that checks the load of my entire environment
19:24 manfred and scales servers based on that average load average
19:25 manfred (has to do funky things, because the monitoring only hits the webhooks when it changes states, so the monitoring plugin returns a warning status every other return, so that it can continue to scale)
19:25 StDiluted I am pretty much at a point where autoscale can be automated, except that the custom grain is not getting synced automatically, and the startup_states declaration in my cloud.providers file is not getting on the minion
19:25 mechanicalduck joined #salt
19:25 manfred where do you have it declared in cloud.providers?
19:25 tempspace fannet: I believe it's a known issue that 0.15.1 will throw that error if you happen to have that version
19:25 manfred why not put it in /etc/salt/cloud as
19:25 manfred minion:
19:25 manfred startup_states: highstate
19:26 manfred master: salt.something.com
19:26 StDiluted i have it like that in cloud.providers
19:26 iggy a highstate should do a sync first
19:26 StDiluted didnt know i could put it in /etc/salt/cloud
19:26 StDiluted I’ll try that
19:26 manfred cloud.providers.d/provider.conf?
19:26 manfred it is in the minion block under the provider?
19:26 StDiluted iggy: yeah, you would think so
19:26 StDiluted yes
19:26 manfred (i am not sure if it can be in providers.d, but it can be in profile.d or /etc/salt/cloud)
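Assembled with the YAML indentation that chat strips (salt.something.com is manfred's placeholder):

    # /etc/salt/cloud
    minion:
      startup_states: highstate
      master: salt.something.com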
19:27 StDiluted oh never mind
19:27 StDiluted ha
19:27 fannet iggy: mind sharing? :)
19:27 StDiluted lol
19:27 StDiluted i stuck it in the one provider but not the other
19:27 manfred boom
19:27 iggy fannet: possibly in a bit... I'd have to do a bit of sanitizing first
19:28 StDiluted that will work, now the only thing is the grain sync
19:28 StDiluted not sure why it’s not syncing grain
19:28 fannet cool
19:28 StDiluted but the grain I’m using determines which states get on the box
19:28 StDiluted so by the time it runs it, if it syncs on first highstate
19:28 StDiluted it has already decided not to include the stuff based on the EC2 tags on the box
19:28 aparsons joined #salt
19:29 fannet I'm stepping through the GCE provider code to see if I can get a better idea of where its failing...
19:29 tempspace fannet: did you see above?
19:29 iggy fannet: have you tried different libcloud versions?
19:29 fannet ah sorry missed that
19:29 fannet let me check
19:30 aparsons joined #salt
19:30 fannet Version: 0.15.1
19:31 tempspace there you go
19:31 Supermathie Is there a "nicer" way of doing this: - name: {{ {'RedHat': 'ntpd', 'Debian': 'ntp'}.get(salt['grains.get']('os_family'),'ntpd') }}
19:31 tempspace you can either downgrade or use latest from git
19:31 tempspace and you should be all set
19:31 fannet ok thanks I'll give it a shot
19:31 StDiluted Supermathie: I think you can use maps
19:32 StDiluted Supermathie: for instance: https://github.com/saltstack-formulas/apache-formula/blob/master/apache/map.jinja
19:32 SheetiS Supermathie: StDiluted is correct.  Most of the saltstack formulas if not all use a map.jinja that handles that.
19:32 fannet tempspace : is there a pref. version
19:33 tempspace fannet: I believe 0.14.1
19:34 tempspace fannet: Latest from git works too
19:34 aparsons joined #salt
19:35 Supermathie ah I see thanks. Same kind of thing but doing it *once* at the top.
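A minimal sketch of that map.jinja pattern, assuming an ntp formula layout (grains.filter_by keys off os_family by default):

    {# ntp/map.jinja #}
    {% set ntp = salt['grains.filter_by']({
        'Debian': {'service': 'ntp'},
        'RedHat': {'service': 'ntpd'},
    }) %}

    {# ntp/init.sls #}
    {% from "ntp/map.jinja" import ntp with context %}
    ntp-service:
      service.running:
        - name: {{ ntp.service }}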
19:36 StDiluted should sync_after_install: all work with grains in <salt_root>/_grains
19:36 aparsons joined #salt
19:36 jalaziz joined #salt
19:37 rypeck I'm looking to monitor a whole slew of salt minions - do some e-mail notifications as well - any tips on open source software that could do this?
19:38 Supermathie rypeck, ♥ Zabbix
19:38 StDiluted I use Check_MK to do that
19:38 StDiluted or nagios
19:39 rypeck know of anything that ties directly in with salt?
19:39 rypeck I would like to use salt to run my checks instead of installing additional software on each minion
19:40 StDiluted no, but there are salt-formulas for nagios
19:40 StDiluted https://github.com/saltstack-formulas/nagios-formula
19:40 StDiluted etc
19:40 fannet tempspace:  is this the right repo? https://git-wip-us.apache.org/repos/asf?p=libcloud.git
19:41 tempspace @fannet https://github.com/apache/libcloud
19:41 utahcon is there a way to make salt-master run a command (like add a machine to a gluster trusted pool) without writing something like a custom reactor?
19:41 mechanicalduck_ joined #salt
19:41 Katafalk_ joined #salt
19:42 perfectsine_ joined #salt
19:43 Ahrotahntee rypeck: nagios is pretty good; nrpe is also easy to deploy using salt
19:44 ahale joined #salt
19:49 fannet tempspace: I think its working
19:50 gzcwnk joined #salt
19:50 vxitch left #salt
19:51 gzcwnk hi, any idea what is causing this when I run highstate, http://pastebin.com/pspj8pq4   ?
19:52 stubee joined #salt
19:53 manfred gzcwnk:  which version of salt are you on?
19:54 fannet tempspace: all right it fired up the VM on GCE - now how do I get the authentication right so salt can actually do the bootstrapping
19:54 fannet [ERROR   ] Authentication failed: status code 255
19:54 cpowell joined #salt
19:55 jcockhren fannet: when you figure that out, update the docs ok? deal? deal.
19:55 jcockhren ;)
19:55 manfred gzcwnk:  that error usually happened with older versions of salt, specifically if the minions version was newer than the master
19:55 rap424 joined #salt
19:56 fannet LOL
19:57 fannet how do I update the docs - there are some serious things that need updating w/ regards to the GCE article
19:57 perfectsine joined #salt
19:57 jcockhren fannet: fork then PR. it's written in restructured text
19:57 SheetiS http://docs.saltstack.com/en/latest/topics/development/contributing.html like this says
19:58 jcockhren DO IT
19:58 jcockhren :D
19:59 fannet I will only do it once its 100% working
19:59 SheetiS My only PR's have been documentation updates so far.  Usually related to seeing a cool feature and learning the hard way that it's not in 2014.1.xx
19:59 jcockhren fannet: https://github.com/saltstack/salt/issues/11669
20:00 gzcwnk manfred, salt-2014.1.10-4.el6.noarch
20:01 gzcwnk could be
20:02 fannet jcockhren - without digging into the source does that mean I can just pass the new style API key or it has not been implemented?
20:02 gzcwnk this looks to be the latest rpm available
20:02 tempspace fannet: I don't have a working salt-cloud at the moment, but I have my configs looking...
20:03 manfred yeah, still trying to get the 2014.1.11 rpms out
20:03 gzcwnk I have the same versions on minion and master
20:03 gzcwnk if i run the webserver.sls via top.sls it runs fine, not if I put in in top.sls
20:04 gzcwnk if i run the webserver.sls via state it runs fine, not if I put it in top.sls
20:04 gzcwnk oops typo on the first
20:05 Supermathie gzcwnk, I recently learned that top.sls and the others have entirely different interpretations.
20:05 rdorgueil joined #salt
20:05 rdorgueil joined #salt
20:05 Katafalkas joined #salt
20:05 fannet I used the service_account_private_key method
20:07 gzcwnk they may well have, the Q is I can see no reason it's busted, so I'm stumped on a fix  :(
20:08 tempspace fannet: I'm trying to get everything setup to see if it still works
20:08 fannet sweet
20:09 fbettag evening
20:10 fbettag how can i do a contains on salt['grains.get']('roles', []) ?
20:10 fbettag if i wanted to check if for example webserver was in there
20:10 LBJ_6 joined #salt
20:10 SheetiS {% if 'webserver' in salt['grains.get']('roles', []): %}
20:11 hardwire joined #salt
20:11 SheetiS err minus the :
20:11 SheetiS I got all python there for a second
20:11 VictorLin joined #salt
20:11 jcockhren fannet: sorry. I don't remember what worked and what didn't been too many months. I need to refresh. Had to move on to other things
20:11 fbettag SheetiS: thanks
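Assembled with SheetiS's correction applied (the stray colon removed):

    {% if 'webserver' in salt['grains.get']('roles', []) %}
    {# webserver-only states here #}
    {% endif %}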
20:12 kingel joined #salt
20:13 ndrei joined #salt
20:13 fannet seems the instance is created but then salt can't ssh in to finish bootstrapping. my guess is I need to pass in the ssh keys as part of the initial setup
20:14 iggy you need a key that is in your project metadata and that salt knows
20:15 iggy ssh_username ssh_keyfile in your profiles
20:15 manfred you should set ssh_key_file and ssh_key_name, ssh_key_file is the file on the server you are running salt-cloud from, the full path, so that salt.utils.cloud.bootstrap() can use that to ssh to the server, and ssh_key_name (or whatever gce uses) is the setting used to tell the api which key to put on the server.
20:15 aparsons joined #salt
20:15 manfred ugh, it is ssh_keyfile
20:15 manfred lame
20:15 kedo39 joined #salt
20:15 iggy and ssh_username
20:15 iggy gce doesn't allow root ssh
20:15 manfred consistency would be awesome
20:16 fbettag SheetiS: and if i wanted to check for a partial string/regexp?
20:16 fbettag on a string
20:16 prosper_ joined #salt
20:17 iggy fannet: if you followed the howto, it runs you through setting up that key
20:17 fannet iggy - yes I have a service account key
20:17 iggy different
20:18 iggy service account key is how you talk to the google api
20:18 fannet right
20:18 iggy ssh key is what you talk to the actual instance with
20:18 fannet I also have ssh_keyfile defined
20:18 iggy the howto sets up both of them
20:18 fannet I will try passing them as metadata instead
20:19 tempspace fannet: I have:
20:19 Katafalkas joined #salt
20:19 tempspace ssh_username: gceuser
20:19 tempspace ssh_keyfile: /path/to/ssh.pem
20:21 fannet ya thats how I had it... it says in the docs you can also pass with metadata so let me try that
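A sketch of where those keys sit in a cloud profile (provider name, image, and size are hypothetical; a real GCE profile needs more settings than this):

    # /etc/salt/cloud.profiles.d/gce.conf
    gce-instance:
      provider: my-gce-provider
      image: debian-7
      size: n1-standard-1
      ssh_username: gceuser
      ssh_keyfile: /path/to/ssh.pem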
20:21 mapu joined #salt
20:22 vejdmn joined #salt
20:22 iggy where does it say that?
20:22 fannet http://docs.saltstack.com/en/latest/topics/cloud/gce.html
20:23 SheetiS fbettag: jinja is a little weak at partial string matching.  you might want to handle it differently.  http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.match.html#salt.modules.match.grain_pcre if applying for a specific state might work depending on what you need.
20:24 to_json joined #salt
20:25 jcockhren last night I learned that the linode salt-cloud provider doesn't allow periods in VM names
20:25 jcockhren the hard way
20:26 jcockhren the crazy part: it'll start the vm with their API, but not provision it
20:27 fragamus joined #salt
20:28 TheoSLC iggy:
20:28 TheoSLC iggy: I'm sorry to hear you're having this same issue.  Hopefully more eyes will resolve it soon.
20:29 iggy I tried to pick out any common features
20:29 iggy hopefully that sheds some light on things
20:29 DaveQB joined #salt
20:32 tempspace fannet: You do need to add it to the metadata
20:32 druonysuse joined #salt
20:32 druonysuse joined #salt
20:32 iggy I never did
20:32 thedodd joined #salt
20:33 tempspace iggy: I had to add the public key to the GCE metadata console to get it working
20:33 iggy wait, are you talking about adding the ssh key to the project metadata or adding the username/key to the profile?
20:33 KyleG1 joined #salt
20:33 iggy okay, yeah, I did that
20:34 yomilk joined #salt
20:34 tempspace yeah, once I did that, I was all good
20:35 fannet Ya I was trying to do both at once but no love
20:35 fannet I added it manually via GCE console
20:35 KyleG2 joined #salt
20:37 iggy make sure the private key doesn't have open permissions
20:37 iggy ssh will fail if it's more than chmod 400 or so
20:37 utahcon are reactor states not the same as normal states?
20:38 utahcon can reactor file.append or do I have to run something like cmd.state.file.append?
20:38 fannet its working!
20:38 DaveQB joined #salt
20:39 peters-tx joined #salt
20:39 dalexander joined #salt
20:39 forrest joined #salt
20:39 aparsons joined #salt
20:40 jcockhren fannet: word?!
20:40 fannet thanks a bunch guys.. that was driving me nuts
20:40 fannet so for whatever reason the metadata thing didn't work
20:40 jcockhren fannet: could you update that issue? maybe a quick bullet list of things to do/watch out for to get it going
20:41 fannet but once I added it manually it was happy. Yep will do
20:41 jcockhren :D
20:41 to_json joined #salt
20:41 ekristen joined #salt
20:46 monokrome joined #salt
20:46 kickerdog joined #salt
20:46 kickerdog Quick question, right now salt-cloud says, "install_red_hat_enterprise_linux_7_stable_deps" when trying to deploy a RHEL7 vm, how do I switch it to git exactly?
20:48 jonatas_oliveira joined #salt
20:48 kickerdog oops
20:48 kickerdog echoerror "Stable version is not available on RHEL 7 Beta/RC. Please set installation type to git."
20:49 iggy ./bootstrap -h
20:50 fannet jcockhren -https://github.com/saltstack/salt/issues/11669  updated
20:51 jonatas__ joined #salt
20:51 yomilk_ joined #salt
20:52 fannet one last issue which i dunno if it's the right place but using a FQDN for the host does not work
20:52 fannet spotdns30.9.mydomain.net  will fail where spotdns30 will not
20:53 jonatas_oliveira joined #salt
20:53 manfred kickerdog:  http://docs.saltstack.com/en/latest/topics/tutorials/salt_bootstrap.html
20:53 jcockhren fannet: look. this is great
20:54 jcockhren thanks again
20:54 Heartsbane joined #salt
20:55 fannet sure. I"ll document the hostname issue as a separate comment
20:56 forrest fannet, the hostname issue might already be known
20:57 UtahDave joined #salt
20:59 fannet forrest - any links?  It's a real bugger because our hosts get built by salt based on subdomain (the subdomain is the region) and so that's kind of messing us up.
20:59 forrest fannet, I'm looking at the issues now, I don't see one but I remember someone bringing this up a few weeks ago
21:00 forrest UtahDave, do you remember that issue where there were hostname problems when you did something like spotdns30.9.mydomain.net?
21:01 Supermathie forrest, Aren't numeric subdomains a no-no?
21:02 forrest Supermathie, for Salt? Or in general?
21:02 fannet based on who's standard :) ?
21:02 forrest Let's not get into a discussion of hostname convention
21:02 kickerdog manfred: I got that, but in the cloud.profile would I do "script: bootstrap-salt git" or script: bootstrap -h"?
21:02 forrest all I care is that the hostname isn't supported :P
21:02 fannet hahahaha
21:02 jeward joined #salt
21:02 manfred kickerdog:  you do script_args: git develop
21:03 kickerdog ah, thanks
21:03 manfred kickerdog:  script: only refers to which file to use, not the args for it
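A sketch of the profile keys in question (profile name, provider, and image are hypothetical):

    rhel7:
      provider: my-provider
      image: my-rhel7-image
      script: bootstrap-salt
      script_args: git develop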
21:03 UtahDave forrest: Hm. I thought that problem had gotten ironed out
21:03 Supermathie fannet, based on rfc, I can't remember if entirely numeric subdomains (or host names) were permitted
21:03 forrest UtahDave, yeah so did I, thus why I can't remember. Maybe it's only in the latest release?
21:03 forrest or release candidate that is
21:04 mordonez joined #salt
21:05 fannet According to google/stackoverflow:  RFC1034 was written in 1987 and superseded by, at the very least, RFC1123 (dated 1989) which includes in this update to section 2.1: The syntax of a legal Internet host name was specified in RFC-952 [DNS:4]. One aspect of host name syntax is hereby changed: the restriction on the first character is relaxed to allow either a letter or a digit.  ;)
21:05 aparsons joined #salt
21:05 Supermathie fannet, yes, but not sure if *entirely* numeric is allowed.
21:05 iggy the question was is a completely numeric nibble allowed
21:05 cmthornton I believe the restriction originally existed because a hostname could look like an IP address
21:06 iggy subdomain or whatever
21:06 fannet blah I don't care it works just fine across the entire interwebs ;)
21:07 iggy underscores used to work pretty widely too
21:07 Supermathie fannet, so does IPv4 but that doesn't mean it's good :D
21:07 ggalvao ggalvao
21:07 ggalvao damn Command + F failed
21:07 ggalvao :D
21:07 iggy but they were technically illegal and now enough dns servers don't allow them
21:07 Supermathie ggalvao, Pika!
21:07 ggalvao hahahahaha Mathie
21:07 ggalvao :P
21:07 ksalman i am trying to set the ntp server on windows via a state file and the peerlist in the output is empty, why is that? https://gist.github.com/anonymous/7a55c7cc48754de022ac
21:08 ksalman i am guessing i need to use something other than m_name?
21:08 kballou joined #salt
21:08 Supermathie ksalman, http://docs.saltstack.com/en/latest/ref/states/all/salt.states.ntp.html
21:09 ksalman huh.. i didn't realize there was a state.ntp for windows
21:09 ksalman thanks
21:09 fannet well the domains are internal only and our DNS packages have no problem with it :-)~
21:10 thedodd joined #salt
21:12 aynik joined #salt
21:12 iggy well... python's socket.gethostbyname works for that, but I doubt salt is using that
21:13 Supermathie huh… not that it necessarily works…
21:13 zakm joined #salt
21:13 fannet well getting back to the GCE bug - even if I remove the numeric subdomain a FQDN still is a no go
21:14 ggoZ joined #salt
21:14 Supermathie ksalman, in case you get hit by it: https://github.com/saltstack/salt/issues/13062
21:15 ksalman Supermathie: thanks
21:15 perfectsine joined #salt
21:16 chrisjones joined #salt
21:16 MK_FG joined #salt
21:17 stubee joined #salt
21:19 desertigloo joined #salt
21:21 pdayton joined #salt
21:22 aurynn I think my search-fu is insufficient; does Salt do templated files (not state files), that I reference in state files via salt:// ?
21:23 aurynn similar to a puppet template() directive
21:24 kickerdog manfred: I tried the args version, but it appears that bootstrap-salt is configured to use the wrong URL path… is http://mirrors.kernel.org/fedora-epel/beta/7/x86_64/epel-release-7-0.2.noarch.rpm but needs to be http://mirrors.kernel.org/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm
21:24 aurynn salt://<file>.jinja maybe?
21:25 aurynn ah. found it.
21:25 manfred kickerdog: update the bootstrap script with salt-cloud -u
21:25 aurynn :)
21:25 dude051 joined #salt
21:25 kickerdog on line bootstrap-salt.sh:2381
21:26 beneggett joined #salt
21:26 bmonty joined #salt
21:27 kickerdog ah
21:27 manfred kickerdog:  should be fixed in the newer version
21:27 manfred https://raw.githubusercontent.com/saltstack/salt-bootstrap/stable/bootstrap-salt.sh
21:27 manfred just run salt-cloud -u to grab it and drop it in /etc/salt/cloud.deploy.d/bootstrap-salt.sh
21:31 ndrei joined #salt
21:31 brianwk joined #salt
21:32 brianwk hey i'm trying to figure out what i'm doing wrong...
21:32 brianwk salt was recently updated, and it can't find any of my states in /srv/salt now
21:32 brianwk things like test.ping and cmd.run work fine, state.sls just says the state isn't found
21:32 brianwk did something change recently that would break my config?
21:38 smcquay joined #salt
21:38 utahcon seems no matter how I start a state in my reactor sls file I get the following error: "'ReactWrap' object has no attribute 'state'
21:38 retrospek joined #salt
21:41 SheetiS utahcon: http://docs.saltstack.com/en/latest/topics/reactor/ look for cmd.state.sls inside of this.  There's a decent example.
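Following that docs page, a minimal reactor sketch (event tag, file path, and state name are hypothetical); the cmd. prefix is the fix for the ReactWrap error, since reactor sls files call wrapped functions like cmd.state.sls rather than state.* directly:

    # /etc/salt/master
    reactor:
      - 'myapp/deploy':
        - /srv/reactor/deploy.sls

    # /srv/reactor/deploy.sls
    deploy_app:
      cmd.state.sls:
        - tgt: {{ data['id'] }}
        - arg:
          - myapp.deploy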
21:41 StDiluted this formula: https://github.com/saltstack-formulas/ec2-autoscale-reactor says it needs ‘develop’ branch, is that current, or will it work with Hydrogen?
21:41 iggy brianwk: did the master config get changed in any way? what's the output of state.show_top? state.show_sls foo?
21:41 KyleG joined #salt
21:41 KyleG joined #salt
21:43 n8n joined #salt
21:43 utahcon SheetiS: thanks, that is helpful
21:45 SheetiS utahcon: no problem.
21:45 SheetiS StDiluted: the Readme on that was only updated 3 months ago.  I'd probably guess that 2014.7 would be the target and 2014.1 would not work with some part of that.
21:46 trevorj Is saltify not part of 2014.1.10?
21:47 SheetiS trevorj: It appears in the 2014.1 branch -> https://github.com/saltstack/salt/blob/2014.1/salt/cloud/clouds/saltify.py
21:48 trevorj Hmm, when I try to use it with a cloud map it tells me "The profile 'make_salty' is defining 'saltify' as the provider. Since there's no valid configuration for that provider, the profile will be removed from the available listing"
21:49 trevorj But, in the docs, it says it doesn't require any provider configuration.
21:50 krissaxton joined #salt
21:55 murrdoc joined #salt
21:59 MK_FG joined #salt
22:01 krissaxt_ joined #salt
22:02 kusams joined #salt
22:05 wt joined #salt
22:05 BaT joined #salt
22:08 jonatas_oliveira joined #salt
22:08 trevorj Ok, so I made a provider anyway that only has a provider: saltify element
22:09 trevorj Now it gets semi further, but now it says that it's "already running"
22:09 jonatas_oliveira joined #salt
22:09 trevorj Of course it's already running, I'm using saltify
22:09 UtahDave trevorj: Hey, could you open an issue on that?  I think that was an oversight since it behaves differently than the other providers
22:10 trevorj UtahDave: Yeah, I will. Does anyone currently use saltify?
22:10 trevorj UtahDave: I'm unsure how they could get past this point as it is
22:10 UtahDave I haven't used it in quite a while
22:11 trevorj Gotcha. Could it be that it's looking at my ec2 salt-cloud provider that contains this instance?
22:11 trevorj When I do a -Q/query it shows the instance information from my other provider
22:11 trevorj even though I'm using my map that points it to saltify
22:12 UtahDave that's possible. I'm not sure how the internals work.
22:13 UtahDave I have plans to make a salt-ssh master/minion deployer to be the one true way(tm)   :)
22:13 trevorj yeah, it appears that all provider hosts are coallesced by name at some point
22:14 trevorj Which is odd I'd say?
22:14 trevorj I'll open a bug for this
22:14 trevorj s/coallesced/coalesced/
22:15 aquinas joined #salt
22:16 trevorj It also looks like it merges instances upon query by name
22:17 trevorj So if you have more than one instance in EC2 tagged with the same name, it only returns one
22:17 salt_qq joined #salt
22:18 UtahDave Yeah, you can't have multiple servers with the same name/id in Salt
22:18 salt_qq Hello!  quick question while using cmd.script module.  when using runas=, how do you tell it to use all of the user's environment variables
22:18 salt_qq or load the .profile or bashrc
22:19 kballou joined #salt
22:19 trevorj UtahDave: In the case of salt-cloud finding multiple on the backend what about appending the instance ID to the name when listing multiple with the same name?
22:21 StDiluted salt_qq: source .profile && execute_thing?
22:21 thedodd joined #salt
22:21 dusel joined #salt
22:23 salt_qq okay so just source .profile in the script
22:23 ajolo joined #salt
22:23 StDiluted so if I wanted to give 2014.7 a try, I would have to build that from source, yeah?
22:24 salt_qq a lot of the scripts are pre-developed and it would require a decent amount of editing, but if that's the recommended way
22:24 salt_qq i assumed there was another flag similar to runas= or env= that would tell it to source the user's profile
22:25 StDiluted salt_qq, I don’t think there’s a way to get env vars when using a cmd.script
22:25 salt_qq thank you
22:26 StDiluted if you’re doing a cmd.run, you could use - name: source ~/.profile && script.sh
22:26 nitti joined #salt
22:27 StDiluted looks like you can set particular env vars
22:27 StDiluted http://docs.saltstack.com/en/latest/ref/states/all/salt.states.cmd.html#module-salt.states.cmd
22:27 aquinas joined #salt
22:27 salt_qq so that would first require copying the file over or does cmd.run allow you to specify salt://path_to_script?
22:27 aquinas_ joined #salt
22:27 salt_qq I did see that.  But there are multiple env's and at some point it defeats the purpose... just login to host.  haha
22:28 StDiluted yeah, you would have to manage the files separately
22:28 salt_qq cool, I'll go that route!
22:30 StDiluted if you made the scripts jinja templates, you could put your env vars in pillars
22:30 StDiluted maybe a lot of editing
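A sketch of the cmd.run route StDiluted suggests (paths, user, and variable are hypothetical; note the state takes user rather than the module's runas=, and source needs bash, hence the shell override):

    run-deploy-script:
      cmd.run:
        - name: source /home/appuser/.profile && /opt/scripts/deploy.sh
        - user: appuser
        - shell: /bin/bash
        # individual variables can also be set explicitly
        - env:
          - DEPLOY_ENV: production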
22:31 jslatts joined #salt
22:33 MugginsM joined #salt
22:35 jalaziz joined #salt
22:35 wt joined #salt
22:37 wallmani joined #salt
22:37 wallmani i was told that i can put salt in coffee, is that really going to improve the taste if the coffee tasted bitter before?
22:39 aurynn A bit of salt will take the edge off the bitterness, yes
22:40 murrdoc with tequila
22:40 wallmani salt + tequilla in my morning coffee?
22:40 trevorj wallmani: of course, how else are you going to get through the day
22:41 trevorj wallmani: gotta maintain that ballmer peak
22:41 wallmani oh, cool thanks!
22:41 wallmani that's true
22:41 wallmani i'll remember to keep a thing of salt every time i go to starbucks
22:41 wallmani their coffee is burnt
22:41 wallmani left #salt
22:43 mordonez joined #salt
22:44 mosen joined #salt
22:45 beneggett joined #salt
22:46 utahcon if I am correct... I can have a minion call fire.event, have the master react by running a runner and having that runner do actual cli commands... right?
22:46 utahcon s/fire.event/fire.event_master/
22:46 utahcon and by fire.event I mean event.fire_master (one of those days)
22:51 druonysus joined #salt
22:52 Ryan_Lane basepi: is there anything bad about how I implemented that use case?
22:53 basepi Ryan_Lane: Nope. It's solid, I actually like the use case a lot, it's just not a use case around which salt was architected.
22:53 basepi So it may take some work.
22:53 Ryan_Lane from what I saw in the code it's the only area needed to make it work
22:54 Ryan_Lane so, the next logical step for masterless + remote execution is for minions to be able to use gitfs/s3fs/etc :)
22:54 Ryan_Lane then when you call highstate it would sync from those locations and then run highstate
22:55 Ryan_Lane which would be pretty awesome
22:55 LBJ_6 joined #salt
22:55 Ryan_Lane most of my recent thoughts about how to scale things out have been related to deployment of salt code and etcd or zookeeper clusters for shared config
22:56 Ryan_Lane which work pretty well with ext_pillar + etcd module
22:56 Ryan_Lane or etcd returner
22:56 wallmani joined #salt
22:58 Ryan_Lane then the salt master is only responsible for remote execution and event handling, which should be really scalable
22:58 Ryan_Lane especially if the masters could be put behind a load balancer
22:58 StDiluted that would be sweet
22:58 StDiluted Ryan, have you tried the ec2-autoscale reactor stuff?
22:59 Ryan_Lane putting them behind a load balancer would also remove a single point of failure
22:59 Ryan_Lane StDiluted: I'm not using salt-cloud
22:59 StDiluted ah, yeah, that’s right
22:59 Ryan_Lane that reactor requires salt-cloud
22:59 StDiluted gotcha
22:59 Ryan_Lane there's not a really great way to handle autoscaling without salt-cloud right now
22:59 StDiluted I want to try it out but it looks like it requires 2014.7
22:59 Ryan_Lane since the auth data doesn't include a minion IP
22:59 StDiluted I am using salt-cloud, so I think it shuold work for me
23:00 Ryan_Lane I should really open an issue for that
23:01 StDiluted how long is 2014.7 expected to be in RC for?
23:01 murrdoc till 2015.1 ( I KEED)
23:02 StDiluted lol
23:02 StDiluted I’m wondering if i should just move my system to 2014.7
23:02 StDiluted I’m not doing anything crazy
23:03 iggy they are running a lot more internal tests this time than last release
23:03 aparsons_ joined #salt
23:03 iggy I doubt much is going to change from the last RC if that's what you're wondering
23:04 StDiluted well, I want to try the ec2-autoscale reactor stuff, and it looks like it needs develop
23:04 StDiluted so
23:04 StDiluted trying to decide what to do
23:06 mr_chris joined #salt
23:07 StDiluted how difficult is it to move to 2014.7?
23:07 iggy theoretically your salt server should be pretty easy to roll back out... just go for it, if it fails, roll back to the good version
23:08 iggy I used the same config on both
23:08 murrdoc bad idea iggy
23:08 murrdoc bad idea
23:08 bhosmer joined #salt
23:08 murrdoc fire up a few vms
23:08 murrdoc put the new master on one
23:09 murrdoc put new salt on one vm
23:09 murrdoc and one vm on old salt
23:09 murrdoc test
23:09 murrdoc or do what iggy said
23:09 murrdoc and order pizza and beer
23:10 murrdoc :P
23:10 iggy I guess I take it for granted that some people might try to roll it out to their production machines without any testing
23:10 iggy that was not what I was implying
23:10 murrdoc i done did it
23:10 murrdoc in the past
23:11 murrdoc back when mongo didn't know that each change of the replica set config
23:11 murrdoc would lead to lost connections for up to 1 minute
23:11 murrdoc that was fun
23:11 utahcon what is the proper syntax for runner_dirs: in /etc/salt/master?
23:14 murrdoc runner_dirs: ['path1','path2']
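In YAML list form in the master config (the path is hypothetical):

    # /etc/salt/master
    runner_dirs:
      - /srv/salt/runners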
23:17 Outlander joined #salt
23:18 utahcon thanks UtahDave
23:18 utahcon thank murrdoc
23:18 murrdoc o/
23:19 wallmani \o
23:21 utahcon so can you pass arguments using salt-run?
23:25 mharrytemp joined #salt
23:32 kermit joined #salt
23:32 kermit joined #salt
23:32 mharrytemp hey howdy. n00b question. I've been hitting my head against the docs for a while, and I can't figure out why this SLS file: https://gist.github.com/nickmarden/f47715b41ebcfa60b6c6 doesn't trigger an nginx reload after an update to /etc/nginx/nginx.conf
23:34 iggy watch
23:34 forrest mharrytemp, yeah as iggy said you need to use watch, not listen
23:34 iggy err... that's weird
23:35 mharrytemp @iggy are you suggesting that I should use watch instead of listen? I thought the only difference was that listen delayed the restart/reload until in the end of the run
23:35 mharrytemp iggy are you suggesting that I should use watch instead of listen? I thought the only difference was that listen delayed the restart/reload until in the end of the run
23:35 bezeee joined #salt
23:35 mharrytemp (damn it. wrong chat client!)
23:35 forrest mharrytemp, which docs are you looking at by the way?
23:35 mharrytemp http://docs.saltstack.com/en/latest/ref/states/requisites.html#listen-listen-in
23:35 utahcon does true == True?
23:35 utahcon I don't think it does
23:35 forrest mharrytemp, which release of salt are you running?
23:36 mharrytemp salt 2014.1.10 (Hydrogen)
23:36 forrest mharrytemp, yeah that's why, listen isn't in until 2014.7.0
23:36 forrest there's a small footnote under the location you linked
23:36 mharrytemp oh weird, I would have expected it to complain that it was an invalid method.
23:36 forrest and 2014.7.0 isn't released yet :P
23:36 iggy I was about to say the same thing
23:37 forrest mharrytemp, yeah not sure why it doesn't error, I thought it would as well!
23:37 iggy I actually usually use watch_in on my config file, but I'm not sure why that makes more sense to me
23:37 mharrytemp goodness knows what that .listen call did. Probably blew up a small town outside of Philadelphia.
23:38 iggy microwave power satellite burned down the city?
23:38 mharrytemp something like that.
23:38 forrest iggy, I also do that because then each individual item is segmented, makes more sense to my brain *shrug*
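A minimal sketch of the watch_in arrangement iggy and forrest describe (the source path is hypothetical):

    nginx:
      service.running:
        - enable: True

    /etc/nginx/nginx.conf:
      file.managed:
        - source: salt://nginx/files/nginx.conf
        # watch_in declares the watch from the config file's side
        - watch_in:
          - service: nginx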
23:39 beneggett joined #salt
23:39 mharrytemp weirdly, I thought I had upgraded to the latest-and-greatest bootstrap-salt.sh and was no longer installing 2014.1.10, but if you say the requisite (there's that word again) version hadn't been released yet...
23:39 mharrytemp I wasn't hung up on listen vs. listen_in, in fact in my "real" usage I was planning to use listen_in.
23:39 mharrytemp I get the factorization value of that.
23:39 iggy you can force bootstrap to install it using git, but by default it's still going to do 2014.1.x
23:40 mharrytemp My gist was simply trying to reduce it to the simplest possible case by using listen and staying within a single file
23:40 aparsons joined #salt
23:40 jslatts joined #salt
23:40 dccc_ joined #salt
23:41 hlohi joined #salt
23:41 hlohi word
23:41 LBJ_6 joined #salt
23:41 bhosmer joined #salt
23:45 mechanicalduck joined #salt
23:47 mharrytemp iggy & forrest: changing from listen to watch did the trick for me _within the same state file_. When I try to trigger the similar behavior elsewhere with watch_in, it doesn't seem to work. And yes, I am calling "include" on my nginx state in the other file
23:47 mharrytemp gist(s) coming up
23:49 mharrytemp ok nginx SLS is https://gist.github.com/nickmarden/665c339f4116bf58c65a and jenkins SLS is https://gist.github.com/nickmarden/c72bcb18ef79657b30a8
23:49 oz_akan joined #salt
23:49 n8n joined #salt
23:49 forrest mharrytemp, your service name is wrong, use reload-nginx, not nginx
23:50 forrest mharrytemp, for the jenkins-http-proxy
23:50 mharrytemp before when I was using the implicit name (that is, "service" was a state of the "nginx" top-level ID) it also didn't work
23:50 mharrytemp but let me give that a whirl
23:52 forrest mharrytemp, good luck, gotta head to the gym
23:52 mharrytemp thanks a lot forrest!
