
IRC log for #salt, 2014-04-24


All times shown according to UTC.

Time Nick Message
00:00 otsarev_home whiteinge: requests go to the root servers. Try setting dnscache up with the cache size set to zero.
00:00 otsarev_home dnscache would walk down from the root dns servers
00:00 otsarev_home great if you have several delegated zones
00:01 cruatta joined #salt
00:02 KyleG joined #salt
00:04 it_dude joined #salt
00:06 UtahDave joined #salt
00:07 ziirish joined #salt
00:08 doanerock joined #salt
00:08 gw joined #salt
00:08 gw joined #salt
00:09 Guest17373 joined #salt
00:10 tttt_ joined #salt
00:11 Guest17373 joined #salt
00:11 itslinky joined #salt
00:15 halfss joined #salt
00:16 chrisjones joined #salt
00:18 itslinky i'm seeing an unexpected connection out when running salt-call on 2014.1.3 and 2014.1.0 minions...strace output:
00:18 itslinky # strace -e connect salt-call test.ping
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory)
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory)
--- SIGCHLD (Child exited) @ 0 (0) ---
--- SIGCHLD (Child exited) @ 0 (0) ---
connect(4, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory)
00:18 itslinky connect(4, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("188.25.73.210")}, 16) = -1 EINPROGRESS (Operation now in progress)
00:18 itslinky is the line that is concerning me
00:19 KyleG you got a problem with romania?
00:19 itslinky with minions running 2014.1.3 installed via rpm from epel testing...there are 2 more IPs
00:20 itslinky i hear romania is great
00:20 KyleG i want to go one day for sure
00:20 KyleG that's weird though, by the way
00:20 itslinky one of the other IPs goes back to linode.com
00:21 UtahDave itslinky: I wonder if that was the external ip grain that got added and then stripped out.
00:21 UtahDave It was calling out to a service that would tell you what your external ip was.  i had them move that out of Salt proper and into Salt contrib recently
00:21 UtahDave I don't know why that was ever accepted into Salt proper.
00:21 itslinky interesting
00:22 it_dude joined #salt
00:22 itslinky @UtahDave: do u know of a way to not load grains during a salt-call?
00:23 UtahDave itslinky: yeah, I just did a reverse lookup and that was the website  ipecho.net
00:23 itslinky ok...that makes me feel better
00:24 UtahDave that got removed.  You can pull that out yourself from the grains directory if you'd like.
00:24 itslinky thats first on my list for tomorrow...we're blocking outgoing traffic and the 3 calls were adding 9s to runs
00:25 itslinky each call had a 3s timeout
00:25 itslinky thanks!!!
00:25 UtahDave yep.  I knew that would happen.
00:25 UtahDave back in a bit
00:27 linuxlewis joined #salt
00:28 linuxle__ joined #salt
00:31 it_dude joined #salt
00:32 halfss joined #salt
00:33 linuxlewis joined #salt
00:36 faldridge joined #salt
00:38 possibilities joined #salt
00:39 kermit joined #salt
00:41 it_dude joined #salt
00:41 rgbkrk_ joined #salt
00:43 halfss joined #salt
00:48 UtahDave joined #salt
00:52 linuxlew_ joined #salt
00:53 mpanetta joined #salt
00:54 mpanetta joined #salt
00:54 xinkeT joined #salt
00:55 linuxle__ joined #salt
00:57 kevinbrolly joined #salt
00:58 ze- joined #salt
01:03 mikemar10 joined #salt
01:04 herzi_ joined #salt
01:05 thayne joined #salt
01:06 smcquay joined #salt
01:08 it_dude joined #salt
01:09 halfss joined #salt
01:11 feiming joined #salt
01:12 GradysGhost joined #salt
01:17 jeremyfelt joined #salt
01:17 jeremyfelt left #salt
01:19 Luke_ joined #salt
01:23 linuxlewis joined #salt
01:29 linuxlew_ joined #salt
01:32 GradysGhost joined #salt
01:34 tr_h joined #salt
01:35 doanerock joined #salt
01:37 ZombieFeynman joined #salt
01:38 rocket joined #salt
01:39 rocket how can I put the salt-master in the foreground so the pdb works?
01:41 scalability-junk joined #salt
01:41 doanerock joined #salt
01:41 bmcorser joined #salt
01:41 Ryan_Lane1 rocket: run it directly as root
01:41 Ryan_Lane1 rocket: unless you pass in -d it won't daemonize
01:41 rocket it seems that its spawning workers or something?
01:42 Ryan_Lane1 ah. right.
01:45 Ryan_Lane1 rocket: http://docs.saltstack.com/en/latest/ref/configuration/master.html#worker-threads
01:45 Ryan_Lane1 you can adjust the number
01:45 Gareth joined #salt
01:45 rocket set it to 0?
01:45 it_dude joined #salt
01:46 ZombieFeynman joined #salt
01:53 AdamSewell joined #salt
01:54 ZombieFe_ joined #salt
01:54 it_dude joined #salt
01:56 Networkn3rd joined #salt
01:57 stanchan joined #salt
01:58 ZombieFeynman joined #salt
01:59 doanerock joined #salt
01:59 schimmy joined #salt
02:00 schimmy1 joined #salt
02:03 it_dude joined #salt
02:05 cyrusdavid joined #salt
02:06 schimmy joined #salt
02:09 mateoconfeugo joined #salt
02:10 mapu joined #salt
02:12 logix812 joined #salt
02:15 ghanima joined #salt
02:16 ghanima hey guys anyone on
02:16 mateoconfeugo joined #salt
02:17 ghanima left #salt
02:19 dangra joined #salt
02:21 it_dude joined #salt
02:23 joehh1 just looking at the external_ip grain - is there anyway to disable it from config?
02:28 dangra_ joined #salt
02:31 it_dude joined #salt
02:32 ghanima joined #salt
02:33 ghanima hello all just curious if anyone on the chat that has upgraded to the new version of saltstack
02:33 manfred plenty of people
02:33 mgw joined #salt
02:38 mway joined #salt
02:46 mgw joined #salt
02:49 it_dude joined #salt
02:53 meteorfox joined #salt
02:54 xl1 joined #salt
02:58 it_dude joined #salt
03:02 possibilities joined #salt
03:03 renoirb I'm wondering if there will be Ubuntu 14.04 (Trusty) packages?
03:05 manfred joehh1: ^^
03:06 manfred salt 2014.1.3+ds-1trusty1
03:06 manfred looks like there already is one
03:06 ajw0100 joined #salt
03:06 manfred in ppa:saltstack/salt
03:07 it_dude joined #salt
03:08 whiteinge joehh1: you can replace the external_ip grain module with an empty module: touch /srv/salt/_grains/external_ip.py && salt '*' saltutil.sync_grains
03:10 whiteinge rocket: if you're looking to debug a module with pdb you can use salt-call, which will drop you into pdb. if you're looking to debug a salt-master process, i think you're out of luck
03:10 catpiggest joined #salt
03:10 whiteinge using pdb with execution modules or state modules via salt-call works great though
03:11 TyrfingMjolnir joined #salt
03:12 Ryan_Lane joined #salt
03:18 bhosmer joined #salt
03:29 possibilities joined #salt
03:29 UtahDave joined #salt
03:35 mgw joined #salt
03:36 AdamSewell joined #salt
03:38 thayne joined #salt
03:44 it_dude joined #salt
03:46 ajw0100 joined #salt
03:53 it_dude joined #salt
04:02 it_dude joined #salt
04:03 travisfischer joined #salt
04:20 UtahDave joined #salt
04:20 ajw0100 joined #salt
04:21 it_dude joined #salt
04:26 travisfischer joined #salt
04:27 mateoconfeugo joined #salt
04:29 malinoff joined #salt
04:30 it_dude joined #salt
04:34 smcquay joined #salt
04:39 it_dude joined #salt
04:43 otsarev_home joined #salt
04:53 cruatta joined #salt
04:55 layer3switch joined #salt
04:57 it_dude joined #salt
04:58 joehh1 whiteinge: thanks - I'm thinking about at a packaging level - I see it as really useful in certain circumstances, but not on by default
05:02 meteorfox joined #salt
05:02 cruatta joined #salt
05:06 mateoconfeugo joined #salt
05:07 it_dude joined #salt
05:12 layer3switch joined #salt
05:20 ravibhure joined #salt
05:25 it_dude joined #salt
05:29 gwmngilfen|afk joined #salt
05:35 black123 joined #salt
05:35 black123 left #salt
05:36 epcim joined #salt
05:38 linuxlewis joined #salt
05:42 ajw0100 joined #salt
05:44 black123 joined #salt
05:45 black123 left #salt
05:47 ajprog_laptop joined #salt
05:52 malinoff joined #salt
05:58 ndrei joined #salt
06:01 it_dude joined #salt
06:08 anuvrat joined #salt
06:11 ndrei joined #salt
06:11 it_dude joined #salt
06:16 malinoff joined #salt
06:20 it_dude joined #salt
06:21 gildegoma joined #salt
06:25 malinoff joined #salt
06:27 stanchan joined #salt
06:37 TamCore Any ideas regarding this error? http://paste.tamcore.eu/34176e9a8.txt Related minions are running Ubuntu 10.04, but this happens only on 4 out of ~50 systems.
06:38 it_dude joined #salt
06:47 it_dude joined #salt
06:55 bhosmer joined #salt
06:58 happytux_ joined #salt
07:02 schimmy joined #salt
07:02 slav0nic joined #salt
07:02 slav0nic joined #salt
07:04 ndrei joined #salt
07:05 schimmy1 joined #salt
07:05 stanchan joined #salt
07:08 CeBe joined #salt
07:10 UtahDave joined #salt
07:13 harobed joined #salt
07:14 Kenzor joined #salt
07:21 rawzone_ joined #salt
07:23 gw joined #salt
07:23 jalaziz joined #salt
07:28 jpaetzel joined #salt
07:29 googolhash joined #salt
07:31 fragamus_ joined #salt
07:36 srage_ joined #salt
07:38 srage__ joined #salt
07:42 it_dude joined #salt
07:43 layer3switch joined #salt
07:45 it_dude_ joined #salt
07:47 topochan joined #salt
07:50 kadel joined #salt
07:56 ede joined #salt
07:56 aleszoulek joined #salt
07:58 rjc joined #salt
07:58 cruatta joined #salt
07:59 bram_ joined #salt
08:00 cruatta joined #salt
08:00 bram_ hi all... I'm running this rather simple state ( http://pastebin.com/Pi2T3eyu ) and when the second one is executed it exits with "Desired working directory "/tmp/solr/" is not available"... Anyone got an idea why?
08:02 bram_ even though the second state explicitly calls for a file requirement (/tmp/solr/pom.xml) it looks like the first state is never executed
08:02 bram_ ah, no, it is executed, but after the first state... weird!
08:04 bram_ HAH! typo... require, not requires... :)
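(Editor's note: for anyone hitting the same typo, a hedged sketch of what the corrected state might look like — the IDs, paths, and command below are made up, since the original pastebin is gone; the key point is that the dependency keyword is `require`, not `requires`.)

```yaml
# Hypothetical reconstruction: with "requires:" the ordering
# declaration is not recognized, so the states run out of order.
fetch-pom:
  file.managed:
    - name: /tmp/solr/pom.xml
    - source: salt://solr/pom.xml

build-solr:
  cmd.run:
    - name: mvn package
    - cwd: /tmp/solr/
    - require:
      - file: fetch-pom
```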
08:12 it_dude joined #salt
08:13 ravibhure joined #salt
08:15 babilen bram_: Happened to me all the time at the beginning too as I find "foo state requires the following states" to be more in line with what I consider to be correct English than "foo state require the foll..."
08:16 bram_ yeah...
08:16 it_dude joined #salt
08:16 bram_ I'm bitten by that too
08:17 bram_ by the way, how do you guys debug adding new states to salt?
08:18 bram_ I find that I constantly have to go back to a VM snapshot and retry as sometimes things are not what they seem, especially the dependencies are sometimes difficult to get right
08:18 babilen bram_: I test them from my Git dev branch in vagrant
08:19 bram_ yeah, running vagrant here too...
08:19 bram_ still
08:20 babilen What are you missing?
08:20 bram_ well, once you *add* something there is no way to go back, right?
08:20 bram_ salt should have a --reverse or something :D
08:20 bram_ hehe
08:21 babilen Yeah, a "revert to state as it was k minutes ago would be great" .. for everything :)
08:22 giannello joined #salt
08:22 babilen bram_: I am typically not too concerned with that as I primarily test if my highstate achieves what I want it to achieve. Quite often even start with an unprovisioned box and take it from there.
08:23 donatell0 joined #salt
08:25 bram_ last Q, ... are there any "recommended workflows" out there? there are so many different ways of organizing states, grains, pillars, ... it's a bit daunting to choose sometimes
08:25 bram_ for example, I hear a lot of people talk about gitfs in here, ...
08:26 babilen Yes, I use that. I don't think that there is a single recommended workflow (both in state organisation and structuring your work), but a couple of best practices seem to have emerged.
08:27 bram_ well, best practices then :)
08:28 babilen I am quite fond of formulas and treat most of my states as if I would write a formula. The structure is detailed in http://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html
08:29 babilen This entails that the state is properly namespaced ( foo/init.sls foo/subservice.sls ...) and is mostly configured through pillars (both state behaviour and configuration file settings)
08:31 babilen You can find a collection of formulas on https://github.com/saltstack-formulas/
08:33 babilen bram_: That covers my "general" states. In terms of configuring boxes I tend to describe services I want to offer rather than "hardcode" how they are being offered so that I can target minions with, say, "foo-web1.myhoster.com: \n - foo.co.uk" with "foo.co.uk combining *all* states and configuration adaptations (apart from the pillar) that allow a minion to serve that website. In there I typically include various formulas or my own states and adapt ..
08:33 babilen ... them to my needs
08:36 babilen bram_: So if that service requires, say, some directories for its webcontent I would have a foo.co.uk/init.sls that includes foo.co.uk/dirs.sls which takes care of creating those. They would then be required_in the apache service that I included in either the init.sls or (if more adaptations are needed) in foo.co.uk/httpd.sls (to not hardcode that I am using apache)
08:37 babilen And so on ... I should probably note that I haven't used salt for very long, but I like this way of thinking about my minions in the gist of "you offer $SERVICE" rather than "do this and this and this and this, which we will then call $SERVICE"
08:37 bram_ reading the formulas docs, and ... it looks like there is also a bestpractices in the docs...
08:38 babilen yeah the whole "Writing Formulas" section discusses how to structure them. I try to adhere to that as much as I can.
08:39 babilen I also only use a single map.jinja that I include everywhere to ensure that things can be changed in a single place rather than everywhere
08:40 babilen Does that help a little?
08:46 bram_ it does
08:46 bram_ thx!
08:46 gmoro joined #salt
08:47 yomilk joined #salt
08:52 giantlock joined #salt
08:59 bram_ what's preferred, {{ salt['pillar.get']('mysql:lookup:password') }} or {{ pillar["mysql"]["lookup"]["password"] }} ?
09:00 malinoff bram_, if you are worried about a missing key, the first one is better
09:00 bram_ "are aware" or "want to be aware" ?
09:02 bram_ as a noob I'm a bit horrified by the syntax of the first :-) I would rather write something like salt.pillar.get('mysql:lookup:password') or maybe even pillar.get('mysql:lookup:password'), but accessing a function inside a dict seems... weird
09:02 bezaban some cases require the first iirc
09:03 bram_ I'm sure there is a technical reason why it's written the way it is, but it doesn't look particularly user friendly
09:04 bram_ the same syntax should be used inside regular templates right?
09:04 bram_ i.e. managed files with - template: jinja
09:06 babilen bram_: I don't use that at all, but prefer: pillar.get('foo:bar:baz', DEFAULT)
09:06 babilen s/:bar:baz//
09:09 yomilk joined #salt
09:14 ggoZ joined #salt
09:14 millz0r joined #salt
09:18 layer3switch joined #salt
09:20 sdlarsen joined #salt
09:20 yomilk joined #salt
09:33 layer3switch joined #salt
09:41 bhosmer joined #salt
09:47 malinoff babilen, the problem here is ':'-separated keys are not included in python
09:47 malinoff bram_, sorry
09:48 malinoff You can't just pillar.get('mysql:lookup', default)
09:49 malinoff You must pillar.get('mysql', {}).get('lookup', {}).get('password', 'error') which is horrible
09:49 bhosmer joined #salt
09:49 malinoff But if you are sure that all keys will be present, you can simply use the second (pure python) approach
09:49 malinoff The biggest disadvantage of the first one is the performance
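(Editor's note: malinoff's point can be sketched in plain Python. This is an illustrative re-implementation of colon-delimited traversal — roughly what `salt['pillar.get']` does for you — not Salt's actual code.)

```python
def traverse(data, key, default=None, delimiter=':'):
    """Walk nested dicts with an 'a:b:c' style key, pillar.get-style.

    Returns `default` as soon as any segment is missing, which is
    exactly the chain of .get(..., {}) calls malinoff calls horrible.
    """
    for part in key.split(delimiter):
        if isinstance(data, dict) and part in data:
            data = data[part]
        else:
            return default
    return data

pillar = {'mysql': {'lookup': {'password': 's3cret'}}}
print(traverse(pillar, 'mysql:lookup:password'))    # s3cret
print(traverse(pillar, 'mysql:lookup:port', 3306))  # 3306
```

The direct `pillar["mysql"]["lookup"]["password"]` form skips this walk (hence malinoff's performance remark) but raises an error the moment any key is absent.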
09:51 marnom joined #salt
09:52 marnom is the apache formula working for anyone? https://github.com/saltstack-formulas/apache-formula - I keep getting "Rendering SLS 'apache' failed, render error:
09:52 marnom Jinja variable 'id' is undefined; line 32"
09:59 gw_ joined #salt
09:59 scott_w joined #salt
10:14 azieger joined #salt
10:15 azieger hey there!
10:15 azieger I'm new to salt and have a problem with file.managed
10:16 marnom whatsup
10:16 azieger I want to copy a file to the systemroot in windows
10:16 azieger and therefore I would like to use the environment variable %SystemRoot%
10:16 azieger but it tells me that it is no absolute path
10:17 azieger so how can I use environment variables in that case? :(
10:17 jayfk joined #salt
10:20 marnom azieger: interesting question, unfortunately I don't have an answer\
10:21 azieger Hmm okay, thanks anyway! I'll keep on searching...
10:25 viq azieger: you could see if that's available in grains somewhere
10:31 marnom I need to update a loadbalancer when a web*.domain.com server comes online. I've already created a reactor and a dynamic nodegroup, but how can I loop over these nodegroups & generate the config for the loadbalancer? :\
10:31 marnom I need something like {% for webserver in nodegroup['webservers-backend'] %} but then in valid syntax :)
10:37 crashmag joined #salt
10:44 dstanek_zzz joined #salt
10:47 bram_ hey guys, I have a (python) script that should be run on a minion, but I don't need the file containing the script to stay on the minion. What's the easiest way to do that?
10:48 yomilk joined #salt
10:48 viq bram_: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cmdmod.html#salt.modules.cmdmod.script  ?
10:48 bram_ hmm, it's a python script and it needs the path to a virtualenv though...
10:49 bram_ i.e. it needs to be executed as /path/to/virtualenv/bin/python myscript.py
10:50 viq bram_: shell=/path/to/virtualenv/bin/python
10:50 bram_ oh cool :)
10:50 bram_ thx!
10:51 bram_ any simple way of escaping command line arguments (which will come from the pillar)?
10:51 slav0nic joined #salt
10:51 slav0nic joined #salt
10:53 Sypher hi all! anyone using gitfs with environments? I'd like to use salt formulas via gitfs but the "branch to env" mapping system gets in the way, expecting stuff to be in the branch 'dev' rather than the master which is the only available one
11:11 bram_ viq, http://pastebin.com/eJFPCQzR gives me http://pastebin.com/jWKFmGMe ... very weird error...
11:15 viq bram_: oh, you want to do it from states? http://docs.saltstack.com/en/latest/ref/states/all/salt.states.cmd.html#salt.states.cmd.script is the correct doc then. What you pasted seems valid, but I haven't done that myself really, so no idea at the moment
11:17 viq And I'm too much up to my elbows with syslog-ng at the moment to look deeper into this
11:20 bram_ hmm, weird, still getting the error
11:21 bram_ anyone else have a second to look at this? http://pastebin.com/dqSW7Fmy
11:28 patrek_ joined #salt
11:36 bram_ the problem was using the virtualenv as a shell...
11:36 bram_ if I put the virtualenv in the script with a shebang there was no problem
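(Editor's note: a hedged sketch of the setup bram_ ends up with — the state ID and source path are invented. `cmd.script` copies the script from the fileserver, runs it, and removes the temporary copy afterwards, which also answers the original "don't leave the file on the minion" requirement.)

```yaml
# myscript.py itself starts with the virtualenv's interpreter:
#   #!/path/to/virtualenv/bin/python
# rather than passing it via the "shell" argument, which caused
# the error bram_ saw.
run-myscript:
  cmd.script:
    - source: salt://scripts/myscript.py
```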
11:42 viq cool
11:44 Shenril joined #salt
11:44 halfss_ joined #salt
11:44 halfss_ joined #salt
11:49 Sypher any jinja wizards in here?
11:49 malinoff Sypher, what's wrong?
11:49 Sypher nothing wrong, just looking for someone to help me out a bit
11:52 chiui joined #salt
11:54 babilen Hmm, I just encountered something pretty unpleasant -- I use the following state to mount a NFS share (and switch from a backup) -- http://paste.debian.net/95420/ -- I introduced static1-lighttpd-stop, the require_in: - service: lighttpd and also changed the device (different IP) -- As usual I ran a test=True run before and was *very* surprised to find that the (old) NFS share was no longer mounted. As you can see: The nfs-fstab-no-old-static1 was ...
11:54 babilen ... actually executed despite the "test=True" and I have no idea why that happened.
11:55 topochan joined #salt
11:55 babilen Luckily we were able to simply roll out the change immediately (praise "salt -b 5 ..."), but I would have expected that highstate runs with test=True wouldn't change anything.
11:56 babilen This is on a 2014.1.3 master with 2014.1.1 minions
11:57 malinoff Sypher, you should ask the question if you want to get an answer
11:58 babilen The output you see at the bottom is that of "salt-run jobs.lookup_jid 20140424132036407438" btw (so I am sure that it belongs to that job)
11:58 Sypher i was wondering whether this works: {%- set majorversion = version | split('.', 2) %}
11:58 Sypher the input is '10.0.10'
11:58 Sypher should work right?
11:59 hhenkel Hi all, anyone familiar with salt-api?
12:00 Sypher i am hhenkel
12:00 Sypher *i am, hhenkel
12:00 babilen heh
12:01 logix812 joined #salt
12:01 hhenkel Sypher: I'm trying to use print_job but I fail with an error.
12:01 malinoff Sypher, there is no "split" filter in built-in jinja filters
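(Editor's note: to spell out malinoff's answer — vanilla Jinja2 has no `split` *filter*, but it does let you call Python string methods directly, so Sypher's line can be written as a method call instead.)

```jinja
{# input '10.0.10' -> majorversion '10' #}
{%- set majorversion = version.split('.')[0] %}
```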
12:02 hhenkel Sypher: If I try the example with lookup_jid it works.
12:03 hhenkel Sypher: I use the same syntax and the error is {"return": "print_job takes at least 1 argument (0 given)"}
12:03 Sypher looks like you don't provide it with the jobid hhenkel?
12:03 hhenkel Example is taken from: http://salt-api.readthedocs.org/en/latest/ref/netapis/all/saltapi.netapi.rest_wsgi.html#usage-examples
12:06 hhenkel Sypher: https://www.refheap.com/79626
12:06 ndrei joined #salt
12:09 bastion1704 joined #salt
12:10 DaveQB joined #salt
12:11 marnom is it possible to populate a managed file with the hostnames in a nodegroup? I can't figure out how to loop over it (how to reference the nodegroup)..
12:12 hhenkel Sypher: Or is there a better way to fetch the job results from the master?
12:13 hhenkel Problem with using returners is in my opinion, that I only get results if the client was reachable.
12:13 Sypher i'll have to dive into our apiclient's source to see whether we have implemented that / are using that, don't recall from the top of my head
12:14 Damoun marnom: use the fqdn in grains ?
12:15 hhenkel Sypher: Would be interested, as I'm looking for a way to visualize stuff in some way.
12:16 marnom Damoun: you mean something like this? {% for host in salt['pillar.get']('fqdn') %} I have grains with which I can target the minions already... but I'm having trouble updating the loadbalancer config with all backend webservers
12:16 marnom I made a reactor which fires when a new web* server comes online and fires a highstate on my loadbalancers
12:16 babilen Okay, i filed a bug report about this. Rather suboptimal behaviour -- https://github.com/saltstack/salt/issues/12259
12:17 sabayonuser_ joined #salt
12:17 Damoun marnom: mmmh, no, fqdn give the hostname, not a list so it's not what you want
12:18 marnom Damoun: I'm using file.managed to write a file on the loadbalancer
12:19 marnom and I would ideally use a loop to generate the config: loop over all web* fqdn's and put the hostnames in the file.. shouldn't be too difficult with Salt's flexibility but I'm having trouble figuring out syntax
12:22 Debolaz Do I need to do anything special to make the mysql_database/user/etc states available? Install python-mysqldb in another state perhaps?
12:23 toastedpenguin joined #salt
12:23 taion809 joined #salt
12:25 hhenkel Sypher: Okay, looked in the code...seems like there is some difference between documentation and code.
12:25 hhenkel The parameter is job_id and not jid in my case. Gotta look what is currently in upstream
12:27 halfss joined #salt
12:27 hhenkel Okay, seems like the variable name was changed 6 days ago.
12:28 gadams999 joined #salt
12:28 ckao joined #salt
12:29 krak3n` joined #salt
12:29 toastedpenguin1 joined #salt
12:34 Damoun marnom: try {% for host in salt['mine.get']('nodegroup:group1', 'grains.fqdn', 'grains') %} (and replace group1 by your nodegroup name)
12:40 marnom Damoun: thanks, testing now. Looking good!
12:42 marnom I'm assuming I will need to enable the salt mine to use this :)
12:42 Damoun yes
12:43 jslatts joined #salt
12:43 bastion1704 joined #salt
12:43 Damoun I think you can add a default value so the deployment doesn't break if the nodegroup does not exist
12:44 marnom hmm to enable the salt mine, I need to define some mine_functions... which mine_functions would I need to work with your example above Damoun?
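(Editor's note: a sketch of one way the pieces might fit together, extrapolating from Damoun's example. The mine function, targeting expression, and file layout here are assumptions from the conversation, not tested config.)

```yaml
# /etc/salt/minion (or pillar) on the web* minions:
# publish the fqdn grain to the salt mine.
mine_functions:
  grains.item:
    - fqdn

# Then in the loadbalancer's managed-file template (Jinja), loop over
# the mine data, mirroring Damoun's grain-based nodegroup match:
#
#   {% for host, data in salt['mine.get']('nodegroup:group1', 'grains.item', 'grains').items() %}
#   server {{ data['fqdn'] }};
#   {% endfor %}
```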
12:45 vejdmn joined #salt
12:46 feiming joined #salt
12:47 diegows joined #salt
12:48 dangra joined #salt
12:57 sdlarsen is it possible to see what command hangs when a highstate never finishes?
13:03 cast joined #salt
13:05 tyler-baker joined #salt
13:05 doanerock joined #salt
13:06 Tekni joined #salt
13:06 xintron Is there some way to check if a sls file exists (pillar/<foo>.sls) so that I can load dynamically in my pillar/top.sls and only if it's available?
13:07 mpanetta joined #salt
13:11 pydanny joined #salt
13:12 ngealy2 Can someone explain why Salt Best Practices suggests: Don't use grains for matching in your pillar top file for any sensitive pillars.
13:12 mpanetta joined #salt
13:12 Shenril joined #salt
13:16 ajprog_laptop joined #salt
13:22 topochan joined #salt
13:23 vejdmn joined #salt
13:23 HeadAIX joined #salt
13:24 Debolaz joined #salt
13:24 wendall911 joined #salt
13:25 scott_walton joined #salt
13:27 faldridge joined #salt
13:28 ekristen joined #salt
13:30 faldridg_ joined #salt
13:32 Nazca joined #salt
13:32 Nazca joined #salt
13:38 mapu joined #salt
13:39 vejdmn joined #salt
13:40 mapu joined #salt
13:40 markd joined #salt
13:41 vejdmn joined #salt
13:41 quickdry21 joined #salt
13:51 Networkn3rd joined #salt
13:52 EWDurbin ngealy2 grains can be added to any compromised pillar
13:52 EWDurbin errr machine
13:54 [MT] where has dave_den been? :(
13:57 Ahlee Is there a way to specify a default environment for the salt cli?  I had thought it would use __opts__['environment'], but it doesn't appear to
13:57 thayne joined #salt
13:58 ngealy2 @EWDurbin so what is a safe way to match?
13:58 che-arne joined #salt
14:01 dccc joined #salt
14:02 ndrei joined #salt
14:06 HeadAIX joined #salt
14:11 arapaho joined #salt
14:12 rojem joined #salt
14:13 halfss joined #salt
14:15 thedodd joined #salt
14:17 TyrfingMjolnir joined #salt
14:17 ngealy2 @EWDurbin is it safe to match on the master id?
14:17 |o||o| joined #salt
14:18 danielbachhuber joined #salt
14:18 |o||o| newbie question... how do I install a salt master on an amazon linux instance in ec2?
14:20 Ahlee yum install salt-master
14:20 |o||o| yum isn't aware of the package
14:20 Ahlee Add epel repo, yum install salt-master
14:20 Ahlee i'm relatively certain amazon linux is just a centos knockoff
14:21 cruatta joined #salt
14:21 pdayton joined #salt
14:21 |o||o| tried the epel thing, no luck
14:21 |o||o| it is a centos clone
14:22 cruatta_ joined #salt
14:23 Ahlee https://aws.amazon.com/amazon-linux-ami/faqs/ tells me epel.repo is disabled by default on Amazon's distro
14:23 topochan joined #salt
14:23 Ahlee so, sed -ie s/enabled=0/enabled=1/g /etc/yum.repos.d/epel.repo
14:23 Ahlee and try again
14:23 Ahlee actually
14:23 Ahlee you probably shouldn't do that
14:24 Ahlee as i have no idea what that repo looks like or how many testing/whatever repos it includes
14:24 Ahlee or i guess yum --enablerepo=epel
14:25 |o||o| trying the enablerepo now
14:25 yomilk joined #salt
14:25 pdayton joined #salt
14:26 |o||o| that worked... now to the next debacle!
14:26 |o||o| thanks
14:28 UtahDave joined #salt
14:28 funzo joined #salt
14:29 mateoconfeugo joined #salt
14:35 taterbase joined #salt
14:40 UtahDave joined #salt
14:40 babilen Hmm, my jinja-fu is too weak. I try to read a value from a pillar and fallback to generating it from grains (with modifications) if it isn't set. I use something like http://paste.debian.net/95457/ and would like the fallback to be $HOSTNAME-fd essentially.
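(Editor's note: one way to express babilen's fallback in a single expression — the pillar key `fd:name` is made up, and `~` is Jinja's string-concatenation operator.)

```jinja
{# use the pillar value if set, otherwise fall back to "<host>-fd" #}
{%- set fd_name = salt['pillar.get']('fd:name', grains['host'] ~ '-fd') %}
```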
14:40 jrdx joined #salt
14:45 pdayton joined #salt
14:49 AviMarcus joined #salt
14:54 [MT] salt-syndic1.networkservices.good-sam.com
14:54 [MT] salt-syndic2.networkservices.good-sam.com
14:54 [MT] salt-syndic3.networkservices.good-sam.com
14:54 [MT] salt-syndic4.networkservices.good-sam.com
14:54 [MT] sorry...
14:56 ngealy2 I would like to know how to security match in the pillar top file.  If you cannot use Grains, what are your options?
14:56 babilen ngealy2: "security match" ?
14:56 ngealy2 securely
14:57 jcockhren still
14:57 babilen Indeed :)
14:57 babilen ngealy2: Do you want to make sure that you don't target the wrong minion by mistake? You can, naturally, target by minion id which is even the default.
14:58 jcockhren http://docs.saltstack.com/en/latest/topics/targeting/compound.html
14:58 babilen But then you can use everything that is detailed in http://docs.saltstack.com/en/latest/topics/targeting/
14:58 babilen ngealy2: Which problem are you trying to solve?
14:58 babilen And why does nobody know how to solve my jinja problem? ;)
14:59 ngealy2 @babilen So, if the minion is compromised, the master id is the only thing I can rely on?
15:00 ngealy2 Basically, I don't want a QA server to get a Production database password.
15:00 UtahDave ngealy2: the minion id is cryptographically secure to match on
15:01 babilen It has to be
15:02 [MT] Well... this is a new issue since updating salt. http://dpaste.com/1794625/
15:03 babilen You would expect /var/cache/salt/master/gitfs/remote_map.txt to not be a directory :)
15:03 [MT] that's what I'd expect
15:04 ml_1 joined #salt
15:04 shawnjgoff joined #salt
15:06 [MT] rm -rf /var/cache/salt/master/gitfs/ <-- fixed it
15:06 babilen As always.
15:06 babilen I should integrate that into the init script
15:06 shawnjgoff When I run a command, I get reports from several minions, but not all, then salt just hangs forever. If I send a SIGINT, salt tells me the job didn't finish running and gives me a job id. This happens for every job I run now. If I do salt-run jobs.list_jobs, I get a long list of jobs.
15:07 shawnjgoff Is there a way to figure out what job is hung and how to fix or kill it?
15:07 jeremyBass1 joined #salt
15:07 babilen shawnjgoff: Can you run "salt-run jobs.lookup_jid $JID" for the job id returned earlier?
15:08 babilen How could I retrieve data from a pillar and use grains['host']-fd (e.g. foo-fd for host foo) if it isn't set?
15:09 shawnjgoff babilen, that shows a list minions with nothing after them. http://sprunge.us/LUdF
15:09 CeBe joined #salt
15:09 AdamSewell joined #salt
15:09 AdamSewell joined #salt
15:10 shawnjgoff babilen, Ah, I tried one of the other job IDs from the list given by list_jobs and got back something more interesting: http://sprunge.us/YhYj
15:15 ipmb joined #salt
15:16 [MT] What's the best way to keep reactor and other such info in sync with syndic servers?
15:17 tligda joined #salt
15:18 mdasilva joined #salt
15:18 mdasilva hey all
15:18 mdasilva is the webinar happening?
15:18 ccase joined #salt
15:19 gildegoma left #salt
15:20 ldlework joined #salt
15:20 Debolaz joined #salt
15:20 Voziv joined #salt
15:20 babilen shawnjgoff: That doesn't look too healthy -- Which salt version is that?
15:21 Voziv When configuring ext_pillar to use git as a source, does the repo need to appear under gitfs_remotes ?
15:21 shawnjgoff salt 2014.1.0 on the master
15:21 ajw0100 joined #salt
15:21 shawnjgoff I'm looking for if there is a way to get it from the clients without cmd.run
15:21 timoguin Voziv: no, gitfs_remotes is just for the state/filesystem backend
15:22 Voziv ok, thanks
15:23 shawnjgoff versions: http://sprunge.us/gPEJ
15:24 tligda mdasilva: webinar is in 37 minutes, right?
15:24 [MT] what webinar is that?
15:24 babilen Voziv: It does not
15:24 AviMarcus so.. I asked about docker yesterday, I think. I've played with it a bit (although I think it kept dying on kernel 3.0, and I have to upgrade my desktop for that and tons of other things)... so, what parts do you have docker do, and what do you have salt do?
15:25 babilen shawnjgoff: You have some fairly old clients there that are definitely not compatible with your master version (e.g. jenkins.jetengine.net) -- Could it be those that are hanging/experiencing the problem?
15:25 jalbretsen joined #salt
15:27 shawnjgoff Could be. I'll try updating them
15:27 babilen shawnjgoff: Please do :)
15:28 dramagods joined #salt
15:28 ndrei joined #salt
15:29 linuxlewis joined #salt
15:29 it_dude joined #salt
15:31 [MT] shawnjgoff: no, it doesn't
15:31 tligda webinar: http://www.saltstack.com/saltstack-events/saltstack-demo-webinar
15:31 [MT] shawnjgoff: gitfs_remotes is for states, NOT pillars (I learned this by things breaking not long ago)
15:32 tligda AviMarcus: I have played with docker and salt a little bit.
15:32 shawnjgoff [MT], are you replying to the right person?
15:32 AviMarcus tligda, thoughts?
15:32 tligda AviMarcus: As far as configuration, I use the Dockerfile to get the master installed and the minion installed. Then salt for everything after that.
15:32 [MT] shawnjgoff: nope...
15:33 tligda AviMarcus: I'm currently having issues getting the minion to talk to the master reliably when they are running in their own containers on the same machine.
15:33 [MT] shawnjgoff: I meant to reply to Voziv but now I see he got his answer anyway
15:33 AviMarcus tligda, I wanted to leverage docker for isolated dev -- that way it won't make my machine a mess
15:33 _gothix_ joined #salt
15:33 JasonSwindle joined #salt
15:34 tligda AviMarcus: Isolated dev of salt modules, states, formulas, etc.? Or isolated dev of some other system?
15:34 JasonSwindle UtahDave: HOWDY!
15:34 mateoconfeugo joined #salt
15:34 mateoconfeugo joined #salt
15:34 AviMarcus tligda, that I can run my code, etc, on my local machine
15:34 JesseC joined #salt
15:34 Gareth morning
15:35 tligda AviMarcus: And your code is salt stuff (configuration of systems, remote execution, etc.), or something else?
15:36 gothix joined #salt
15:39 gothix how would I make a call with the API to delete a host from the master? curl -sik https://localhost:8000/run -H "Accept: application/json" -d client='local' -d fun='key.delete' -d match='myserver' -d password='xxxx' -d username='salt-user'
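[Editor's note: gothix's curl is close, but key.delete is a wheel-client function (it runs on the master), not a local one, and the /run endpoint also wants an eauth backend named. A sketch, with host and credentials as placeholders:]

```shell
# key management goes through client='wheel', not 'local',
# and salt-api needs -d eauth=<backend> (e.g. pam) for /run
curl -sik https://localhost:8000/run \
    -H 'Accept: application/json' \
    -d client='wheel' \
    -d fun='key.delete' \
    -d match='myserver' \
    -d username='salt-user' \
    -d password='xxxx' \
    -d eauth='pam'
```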
15:41 AviMarcus tligda, yes, much of my stuff is in salt or at least in git.
15:43 tligda AviMarcus: For developing and testing salt stuff, I've been prioritizing getting it all working using vagrant and virtualbox. It's not as hot as docker, but it's easier for me to relate to. My plan is that once I have the vagrant and virtualbox part working, I'll see what I can integrate docker to do.
15:44 AviMarcus tligda, docker is supposed to be much more light weight
15:44 AviMarcus I started a virtual box server last night; it took like 10 mins to manually run through setup of the .iso (was I doing it wrong?). With docker it takes 5 seconds to start a new instance
15:46 tligda AviMarcus: True, but in my case I spent a long time trying to get docker to work the way I wanted it to and ended up going back to virtualbox. I'll spend more time on it, but I need to get some stuff working now. I can't wait a few more days to figure out all the docker stuff.
15:46 AviMarcus :)
15:46 tligda AviMarcus: And anyway, the docker "lightweightedness" is true in some ways, but it's still heavy in certain ways too.
15:46 tligda AviMarcus: You still have to download the images.
15:47 tligda AviMarcus: You have to learn the Dockerfile stuff that's not useful for anything else.
15:47 tligda AviMarcus: And just understanding how to run processes inside docker containers is a whole other thing to learn.
15:47 AviMarcus I haven't done any dockerfile stuff yet... still not sure how it fits in. Maybe just to use on my dev machine to  keep everything isolated.
15:48 ngealy2 left #salt
15:50 Voziv how can I get a file from pillar? I'm looking to store things like SSL certificates within pillar instead of my salt formula
15:51 timoguin Voziv: pillar doesn't yet support storing files
15:52 fllr joined #salt
15:53 kaptk2 joined #salt
15:55 UForgotten so, question… I don’t see much info out there about keeping salt minion logs under control, how are people dealing with rotating salt minion’s logs (not logs for other stuff, for salt itself)?  Is there a signal you can send to have it rotate the log, or do you have to fully restart the minion daemon?
15:56 AviMarcus UForgotten, use logrotate?
15:56 UForgotten AviMarcus: We can, but how do we cleanly/properly hup the salt-minion daemon so it doesn’t crash or interrupt processing?
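[Editor's note: one answer to UForgotten's HUP concern is to sidestep signaling entirely with logrotate's copytruncate, which rotates the file in place so salt-minion never has to reopen it. A sketch, with the retention policy as a placeholder; copytruncate can drop a few lines written during the copy.]

```
# /etc/logrotate.d/salt-minion (sketch)
/var/log/salt/minion {
    weekly
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```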
16:07 TheRhino04 joined #salt
16:10 tru_tru joined #salt
16:12 or1gb1u3 joined #salt
16:13 yomilk joined #salt
16:17 linuxlewis joined #salt
16:22 timoguin hmm, think i found a bug with the cmd module. got something that works fine if the minion is ran in the foreground, but fails when running as a service
16:23 KyleG joined #salt
16:23 KyleG joined #salt
16:25 jimklo joined #salt
16:25 [MT] 10k minions per master?..
16:26 KyleG whoa
16:26 jmlowe1 joined #salt
16:26 KyleG That's a lot of minions
16:27 KyleG Although
16:27 [MT] that's apparently the recommended maximum per master
16:27 jmlowe1 Is there a known problem with 2014.1.3 on rhel5?
16:27 KyleG Well
16:27 meteorfox joined #salt
16:29 UtahDave jmlowe1: what version of zmq do you have?
16:29 jmlowe1 the stock one, 2.3.something
16:30 jmlowe1 it was working before I updated my master
16:30 repl1cant anyone here using rest_wsgi.py behind apache, nginx, or Lighttpd?
16:30 [MT] I think there are a lot of issues with zmq 2
16:30 ml_1 joined #salt
16:31 jmlowe1 any easy way around getting zmq 3.* on rhel 5 derivatives other than building new packages?
16:33 gothix anyone working with the api to delete minions from the master?
16:34 viq [MT]: yeah, I heard linkedin runs around that many minions per master
16:34 |o||o| .
16:34 [MT] viq: I heard they have two masters, one per dc
16:35 stritzel joined #salt
16:35 [MT] they use chef too
16:35 viq didn't hear that part
16:36 [MT] iirc, they're trying to move more and more to salt, but a lot of lower system stuff is done with chef or something like that
16:37 [MT] it was too long ago and I can't remember anymore
16:37 srage joined #salt
16:37 schimmy joined #salt
16:39 schimmy1 joined #salt
16:39 yusuket joined #salt
16:44 jordan_getweave joined #salt
16:45 jordan_getweave left #salt
16:51 [MT] If I have a bunch of syndic servers, how do I manage keys? Every syndic server will need to approve every key?
16:53 vejdmn joined #salt
16:58 jimklo trying to figure out a way to make a minion checkout a repo from mercurial, but then remove the credentials after the fact, but protect credentials in the process... I wouldn't want someone to be able to do a salt-call on the minion and fetch the password from the pillar on the master.  Any ideas on how to do this?
16:59 mgw joined #salt
16:59 possibilities joined #salt
17:00 TheRhino04 joined #salt
17:00 timoguin jimklo: using https auth, i'm assuming?
17:00 jimklo timoguin: yes, but bound to ldap
17:01 timoguin jimklo: i could see setting an SSH key and then removing it, but not sure about send a user/pass combo to the minion
17:02 jimklo timoguin: that was my thought too... but was thinking it could have a similar issue
17:02 jimklo at least it wouldn't have the user/pass in cleartext
17:02 [MT] Who gave that webinar?
17:03 haroldjones joined #salt
17:04 timoguin jimklo: i'm curious as to why the minion can't have access to the repo though
17:04 timoguin if it's already cloning it
17:06 thedodd joined #salt
17:09 jimklo timoguin: it's somewhat of a screwy idea I had.... we have an extremely complicated project... which has an equally complicated developer environment config... so instead of making each dev install the whole config... we create a developer VM... but I also need to test the VM on the real hardware (since it has some custom bits)... so my thought, since the source tree is about 2G, is that it would be nice to be able to pre-checkout the tree, then remove
17:09 jimklo the creds used to checkout, but then replace the creds with bogus ones so the dev using the VM would replace them with their own.
17:09 bhosmer joined #salt
17:10 jimklo in this process we are also evaluating migrating from ubuntu to rhel due to a data center security requirement
17:11 timoguin so you basically want to clone it with temp creds that the dev will then replace with their own?
17:11 joehillen joined #salt
17:11 jimklo yep
17:11 JasonSwindle joined #salt
17:12 timoguin i'd say do a default, read-only ssh key to clone it, and remove the key from the machine after cloning.
17:13 timoguin but like you said, that still kinda has the same issue of still being available as pillar data
17:13 JasonSwindle joined #salt
17:13 arthabaska joined #salt
17:13 jimklo one solution is if I can pass an extra yaml file on the cli that contains my creds during the update... that file can be local to the master... then someone trying to access that info from a salt-call couldn't do it.
17:14 kermit joined #salt
17:14 jimklo but not sure if there's a way to do that
17:16 linuxlew_ joined #salt
17:17 joehillen joined #salt
17:17 jaimed joined #salt
17:17 ajolo joined #salt
17:17 haroldjones joined #salt
17:18 jalaziz joined #salt
17:19 yusuket joined #salt
17:20 linuxlewis joined #salt
17:20 haroldjo_ joined #salt
17:21 linuxlew_ joined #salt
17:22 it_dude joined #salt
17:22 shawnjgoff How can I ask a minion for its name?
17:23 zain nicely
17:23 manfred grains.item minion_id ?
17:23 shawnjgoff :-)
17:24 diegows joined #salt
17:25 ravibhure joined #salt
17:25 shawnjgoff Hm. salt-call isn't working, is there another way?
17:25 Ryan_Lane joined #salt
17:25 haroldjo_ left #salt
17:26 shawnjgoff I'm getting this: The Salt Master has cached the public key for this node, this salt minion will wait...
17:27 krak3n` joined #salt
17:27 shawnjgoff What I'm trying to do is make sure any key for this minion is rejected so I can try to generate a new key and accept it.
17:29 thayne joined #salt
17:30 shawnjgoff I also can't use key.finger because that requires salt-call, which isn't working.
17:31 srage joined #salt
17:31 Networkn3rd joined #salt
17:31 shawnjgoff Got it! I used cat /etc/salt/pki/minion/minion.pub | grep -v BEGIN | grep -v END | md5sum and compared that to the keys in the master.
17:32 shawnjgoff It's up and working now.
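[Editor's note: shawnjgoff's shell pipeline (`cat minion.pub | grep -v BEGIN | grep -v END | md5sum`) can be sketched in Python for comparing a minion's public key against the copies cached on the master. This mimics that exact pipeline (newlines included in the hash); note salt-key's own `-f` fingerprint may use a different digest, so compare like with like.]

```python
import hashlib

def pub_key_digest(pem_text):
    """md5 of a PEM public key body, mimicking
    `grep -v BEGIN | grep -v END | md5sum` (newlines kept)."""
    body = "".join(
        line + "\n"
        for line in pem_text.splitlines()
        if "BEGIN" not in line and "END" not in line
    )
    return hashlib.md5(body.encode()).hexdigest()
```

Running it on /etc/salt/pki/minion/minion.pub and on the master's cached copy of the same key should give identical digests.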
17:32 Gordonz joined #salt
17:33 bhosmer joined #salt
17:33 anuvrat joined #salt
17:33 Ryan_Lane1 joined #salt
17:34 whiteinge Voziv: for distributing things like SSL certs via pillar you can make use of YAML's multiline strings. you can embed the cert directly in the .sls file or you can pull in an external file using jinja
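[Editor's note: whiteinge's suggestion — a YAML literal block in pillar, written out with file.managed — looks roughly like this. The pillar key, cert contents, and destination path are placeholders; the contents_pillar option was available by the 2014.1 era.]

```yaml
# pillar/ssl.sls (sketch) -- the | literal block preserves newlines
ssl_cert: |
  -----BEGIN CERTIFICATE-----
  MIIC...snipped...
  -----END CERTIFICATE-----

# state side: render the pillar value straight to disk
/etc/ssl/certs/example.crt:
  file.managed:
    - contents_pillar: ssl_cert
    - mode: 644
```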
17:36 ndrei joined #salt
17:37 thedodd joined #salt
17:39 pydanny joined #salt
17:41 [MT] whiteinge: for running the syndic servers, the syndics are masters, so I need to keep the pki stuff in sync, right? It seems like I could use nfs to keep data on the master in sync with files on the syndics, or maybe another system that keeps duplicates on the client in case the master goes down. I'm having issues getting clients to authenticate with the syndics.
17:41 A||SySt3msG0 joined #salt
17:42 Ryan_Lane [MT]: they aren't masters
17:42 Ryan_Lane they just pass through calls
17:42 Ryan_Lane it's treated like another minion
17:42 Ryan_Lane so, you don't need to sync the keys with it
17:43 Ryan_Lane or, do you mean that yours syndics are also masters?
17:43 Ryan_Lane [MT]: if it's the latter, I wouldn't use NFS, since that would defeat the purpose of having multiple masters. NFS outage = total salt outage
17:43 [MT] Ryan_Lane: not from what I was reading or what whiteinge and jcockhren told me yesterday
17:44 [MT] salt-master is a dependency of salt-syndic too
17:44 whiteinge syndics are not masters but each syndic manages the keys of minions below it
17:44 Ryan_Lane ah. it looks like it passes calls from one master to another master
17:45 whiteinge since you're load-balancing multiple syndics, you will need to manually keep the minion keys in sync between them
17:45 [MT] I can manage keeping keys in sync, that shouldn't be too bad.
17:45 Ryan_Lane whiteinge: it would be really nice to have a pluggable key system
17:45 Ryan_Lane keys in zookeeper or etcd would be really nice
17:45 whiteinge yes it would
17:45 jmlowe1 left #salt
17:46 bhosmer joined #salt
17:46 Ryan_Lane whiteinge: https://github.com/saltstack/salt/issues/5752 ;)
17:46 [MT] This is what I have happening - http://dpaste.com/1794891/
17:47 [MT] I have all keys in sync across all syndics which were pushed from the master
17:47 aurigus joined #salt
17:47 [MT] /etc/salt/pki/master is mirrored on every one of them
17:48 [MT] I'm doing something dumb... I know I am. I just don't know what it is.
17:48 [MT] the master, the syndics, an the minion used for testing are on 2014.1.3
17:51 ajw0100 joined #salt
17:51 whiteinge Ryan_Lane: ah, nice!
17:51 whiteinge ...er, sorta. ...been there for a while
17:52 Ryan_Lane whiteinge: what has been?
17:52 Ryan_Lane the key subsystem?
17:52 whiteinge that feature request
17:52 Ryan_Lane oh. yeah :)
17:52 Ryan_Lane wikimedia would love you if you fixed that :D
17:52 Ryan_Lane I actually don't need it at lyft
17:53 Ryan_Lane hm. I wonder if it would be possible to load balance masters behind an ELB if the keys were handled
17:54 [MT] what's ELB?
17:54 [MT] I'm trying to do it behind an F5
17:54 Ryan_Lane amazon elastic load balancer
17:54 KyleG joined #salt
17:54 KyleG joined #salt
17:55 Ryan_Lane the job system is problematic, even if keys are handled
17:55 Ryan_Lane data returning to the masters would also be an issue
17:56 Ryan_Lane whiteinge: this is also a good example of why returners are nice :D
17:56 [MT] I don't get this... the keys are completely in sync, but I can only authenticate to one of the syndics
17:56 Voziv whiteinge: Yeah I noticed that, I'm going to try that in a bit
17:57 * whiteinge puts another tick on the chalkboard under "Returners"
17:58 Voziv Returners?
17:58 [MT] oooooooh..... the minions weren't reloading their keys because some processes got stuck open
17:58 [MT] BAM!
17:58 sroegner joined #salt
17:59 Voziv [MT]: http://www.nytimes.com/images/blogs/tvdecoder/posts/1107/emeril.jpg
18:00 halfss joined #salt
18:01 thedodd joined #salt
18:01 whiteinge :D
18:03 dramagods joined #salt
18:03 [MT] This is my newest problem... http://dpaste.com/1794967/
18:05 Gareth 'lo
18:06 JasonSwindle left #salt
18:06 * whiteinge tips his hat in Gareth's direction
18:06 allanparsons joined #salt
18:07 allanparsons any of you guys in LA?
18:08 pjs I am.. offices in DTLA 5th & Spring
18:11 [MT] ok... I can have the minion connect to any one of the syndic servers directly, but connecting to the LB address causes that error I pasted (http://dpaste.com/1794967/). Any idea what might be breaking? I'm sure it's the LB that's causing the issue, but trying to understand what's causing this.
18:11 Gareth whiteinge: morning...ish :) hows it going?
18:11 Gareth allanparsons: I'm in the LA area..
18:13 ajolo_ joined #salt
18:13 whiteinge [MT]: wish i could help there but i'm not sure what zeromq needs from a LB :-/
18:14 whiteinge Gareth: good. just woke up but i'm ready for lunch... and you?
18:14 jero joined #salt
18:14 Ryan_Lane [MT]: I'm betting multiple calls are occurring
18:16 Gareth whiteinge: not bad :)
18:19 John____ joined #salt
18:21 John____ anyone ever have issues getting --list-images working on joyent?
18:22 joehoyle joined #salt
18:23 Darnoth joined #salt
18:23 joehoyle hey, is there any module or somehting that will let me add new masters to all minsions, like salt \* saltutil.addmaster my-master.mydomain.com
18:24 whiteinge joehoyle: for multi-master? or are you decomissioning an older master in favor of a new one?
18:24 joehoyle whiteinge: multimaster
18:24 joehoyle I have done it before via a hacky 'cmd.run echo '- mymase > /etc/salt/minion' or whatever
18:25 whiteinge that would be a nice addition. i think your hack is the best way to do that for now though
18:25 leron joined #salt
18:25 druonysus joined #salt
18:25 druonysus joined #salt
18:26 joehoyle whiteinge: if I wanted to do this properly, I'd write a custom module I presume?
18:27 whiteinge i'd suggest using file.comment to comment out the "master: <somemaster>" in /etc/salt/minion and using file.managed to write a new /etc/salt/minion.d/master.conf file with the new value
18:27 whiteinge that would make it easier to change/modify in the future
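[Editor's note: whiteinge's file.comment / file.managed suggestion, as a state-file sketch joehoyle could push to all minions. Master hostnames are placeholders; the `contents` option of file.managed is assumed available on these minion versions.]

```yaml
# comment out the old single-master line in the main config...
comment_old_master:
  file.comment:
    - name: /etc/salt/minion
    - regex: ^master:

# ...and keep the multi-master list in minion.d, easy to change later
multi_master_conf:
  file.managed:
    - name: /etc/salt/minion.d/master.conf
    - contents: |
        master:
          - master1.example.com
          - master2.example.com
```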
18:27 [MT] whiteinge: Ryan_Lane: the hash that binds one minion to one syndic wasn't set up right, so during zmq session establishment the minion started authenticating to one syndic and then tried to continue with another, which doesn't work so well
18:27 joehoyle ahh cool, havn't used minion.d before, that makes sense
18:28 [MT] so now that much is fixed, but now I get this far (http://dpaste.com/1794987/) and it hangs
18:30 whiteinge joehoyle: (in case it's helpful) a good way to restart the salt minion after making that change is to use atd to schedule the service restart, say one minute in the future. that's a tad more reliable than using service.restart directly.
18:30 ajw0100 joined #salt
18:30 joehoyle whiteinge: ah, why would that be ore reliable? or rather, why is service.restart not reliable?
18:32 ajw0100 joined #salt
18:32 whiteinge when you use service.restart to restart salt, sometimes the minion daemon is interrupted half-way through the process
18:32 whiteinge using atd does the restart as a separate process which tends to have better results for restarting salt using salt
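[Editor's note: the atd trick whiteinge describes is a one-liner; scheduling the restart out-of-band lets the minion finish reporting the job result before it dies. The service name and delay are placeholders for whatever the platform uses.]

```shell
salt '*' cmd.run 'echo service salt-minion restart | at now + 1 minute'
```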
18:33 druonysus joined #salt
18:33 druonysus joined #salt
18:33 [MT] It's neat that salt can restart salt, but it's understandable that it can be a bit buggy
18:33 whiteinge it's a hard problem to solve. we've taken a few cracks at it, but it still needs work
18:34 joehoyle ahh ok, makes sense
18:35 whiteinge [MT]: is that minion hanging or waiting on whatever syndic it's connected to to send out a command?
18:35 UtahDave joined #salt
18:35 leron left #salt
18:35 jimklo joined #salt
18:36 [MT] whiteinge: hanging, if it finishes connecting, it'll do a lot more chatter before it's waiting for a command
18:37 Gordonz joined #salt
18:37 whiteinge hm :-/
18:39 Ryan_Lane [MT]: you may want to consider doing source hash load balancing, rather than round robin
18:39 [MT] Ryan_Lane: that's what I'm doing now
18:40 thedodd joined #salt
18:41 Ryan_Lane oh? then all your communication should go to/from the same nodes
18:41 Networkn3rd joined #salt
18:44 kickerdog joined #salt
18:45 kickerdog my m1.small AWS salt-master has a loadavg of 9.02 very consistently, I think it's time for an upgrade.
18:46 doddstack joined #salt
18:47 marnom joined #salt
18:48 marnom Hey guys, just started using the salt mine today and am wondering... does the data in it ever get purged/removed if it's old?
18:58 ndrei joined #salt
19:01 wincus joined #salt
19:03 bastion1704 joined #salt
19:04 lyddonb_ joined #salt
19:06 sabayonuser__ joined #salt
19:07 Nazzy_ joined #salt
19:07 Nazzy_ joined #salt
19:07 [MT] Ryan_Lane: whiteinge: So... I walked away and came back and have something that's actually useful! http://dpaste.com/1795050/
19:08 dramagods joined #salt
19:08 simonmcc_ joined #salt
19:08 otsarev joined #salt
19:09 [MT] but salt-call test.ping works, that authenticates with the master, right?
19:10 goki joined #salt
19:10 [MT] salt-call state.highstate will not work
19:12 [M7] joined #salt
19:13 Jarus joined #salt
19:13 octarine joined #salt
19:13 akoumjian joined #salt
19:13 jbub joined #salt
19:13 wiqd joined #salt
19:14 NullWagon joined #salt
19:14 Chrisje joined #salt
19:14 herzi_ joined #salt
19:20 chrisjones joined #salt
19:20 allanparsons Gareth... you're at wireddrive?
19:23 [MT] it's kinda tough to use wireshart to debug salt connections since it's all encrypted, but I can at least see that any request made by the minion has been responded to
19:23 diegows joined #salt
19:26 zain lol, wireshart
19:27 JasonSwindle joined #salt
19:27 Gareth allanparsons: wired drive?
19:27 allanparsons yes
19:28 Gareth Nope. and I'm not sure what that is :)
19:28 allanparsons oh nm... a guy named Kyle
19:28 allanparsons @Gareth, I'm looking for a devops guy in LA to hire.
19:28 allanparsons with experience with SaltStack + AWS
19:31 mgw joined #salt
19:32 DaveQB joined #salt
19:35 ajw0100 joined #salt
19:35 possibilities joined #salt
19:40 pydanny joined #salt
19:41 kballou joined #salt
19:44 bhosmer joined #salt
19:47 linuxlewis joined #salt
19:49 jimklo joined #salt
19:49 [MT] heh... with only one box behind the load balancer, it works; with multiple, it doesn't
19:50 [MT] I lied... still broked
19:51 [MT] whiteinge: if I don't stick this behind a load balancer and just have a list of masters, will a minion randomly choose which master to connect to?
19:52 rocket joined #salt
19:52 rocket what do I need to just have a management client for salt?  eg I would like to run salt commands but not have my laptop be managed by a particular salt infrastructure
19:55 mike joined #salt
19:56 jalaziz joined #salt
19:56 harobed_ joined #salt
19:56 programmerq anyone know if the saltstack meetup video from Tuesday is up anywhere?
19:59 whiteinge [MT]: no, minions connect to all masters in that configuration
19:59 manfred rocket: you can do that, you just have your salt master be your laptop and everything sync to it, ideally, you would want a master constantly up though.
19:59 manfred rocket: you want salt-master
20:00 rocket so you can only do this from the salt-master?
20:00 whiteinge rocket: an alternate tack is to use salt-api and the in-alpha Pepper CLI script: https://github.com/saltstack/pepper
20:00 whiteinge pepper is still pretty rough though
20:01 halfss joined #salt
20:01 rocket bummer :p
20:01 [MT] whiteinge: one minion will maintain a connection to multiple masters?
20:01 whiteinge yes
20:01 jaimed joined #salt
20:03 whiteinge rocket: i don't want to talk you out of trying pepper. it's rough but it's usable for the basic stuff
20:03 [MT] right now, I have one master behind the LB and the minion key is making it to that master, the minion can tell that the key was pending and could tell when it got accepted, but after that, the SaltReqTimeoutError starts happening.
20:04 [MT] this definitely has to be the fault of the LB and isn't going to have anything to do with whether or not it's having issues connecting to different masters (but it was having that issue)
20:04 rocket whiteinge: I understand .. for what I need I can get by being on the master for now .. I will follow pepper for the future
20:04 [MT] I have an idea...
20:06 druonysus joined #salt
20:06 druonysus joined #salt
20:10 dramagods joined #salt
20:11 abe_music joined #salt
20:13 pssblts joined #salt
20:14 mgarfias joined #salt
20:17 John____ does there happen to be a known bug with the joyent provider? list-locations works fine, but I can't get a list of valid images. returns empty.
20:18 mgarfias i can't seem find appropriate dox for how to do this.  I need to configure haproxy, with info from my app servers.  How do i access that info from my haproxy state?
20:21 ajw0100 joined #salt
20:22 jimklo using file.directory state... i get an error that says... "No directory to create /workspace/foo in" and yet I've set mkdirs: True
20:22 pydanny joined #salt
20:22 kballou joined #salt
20:23 jimklo anyone have any ideas why?
20:25 it_dude_ joined #salt
20:27 Ryan_Lane1 joined #salt
20:27 happytux joined #salt
20:28 srage_ joined #salt
20:28 otsarev_home joined #salt
20:29 Ahlee What's nssm?  Salt minion appears to run underneath it on windows, and it appears to be dropping security privileges or something, but it's removing the ability to log, see network shares, etc
20:30 jalaziz joined #salt
20:32 jaimed joined #salt
20:33 Ahlee nevermind, nssm is just what's providing the services 'wrapper'; running as a system account is what's preventing it from seeing shares/writing logs/etc
20:36 xinkeT joined #salt
20:37 ajw0100 joined #salt
20:45 eliasp does anyone know of a way to deal with states which require a reboot? how to track the system state/whether a reboot was done/have other states depend on the reboot?
20:46 eliasp I'm currently adding some functionality to the win_dns_client.py module to be able to set the primary DNS suffix which is advertised on DHCP leases for dynamic DNS updates but it requires a reboot to succeed… how'd I communicate this "reboot required" state as a result of the operation?
20:46 dfinn1 joined #salt
20:47 Ahlee eliasp: i schedule reboot last at 5 minutes after now to allow states to return, and use shutdown -r +5
20:47 Ahlee i don't know of a non-hacky way involving external pillars being updated to do it
20:48 eliasp Ahlee: well, scheduling a reboot is no option for me as I also control regular clients which have to be rebooted manually by users on demand
20:49 dfinn1 I think this is probably pretty simple but I'm not finding it in the docs, probably not looking for the right thing.  How would I run commands from my salt master against a certain environment?  for example we have a dev env configured in top and I want to run some commands against all of those servers and only those.
20:50 eliasp dfinn1: salt your-minions foo.bar saltenv=dev
20:50 che-arne joined #salt
20:50 Ahlee i believe he's looking for how to only target if environment=dev, though
20:50 dfinn1 yes ^
20:51 Ahlee is environment available in pillar? I know it's not a grain
20:51 dfinn1 i just tried this but it got everything
20:51 dfinn1 salt '*' saltenv=pqa cmd.run ifconfig
20:51 Ahlee i have an individual master per environment, sorry dfinn1
20:51 dfinn1 crap
20:51 dfinn1 that's less than ideal
20:51 dfinn1 for a smaller env
20:52 Ahlee it's got it's pros and cons, indeed
20:52 dfinn1 it would seem like there must be a way to do this.  I can't be the first to look for this functionality
20:52 rocket if I am using gitfs is there a way to have the salt-master pickup the file changes immediately based on some command?
20:53 JasonSwindle left #salt
20:53 eliasp rocket: salt-run fileserver.update
20:53 Ahlee rocket: salt-run fileserver.update
20:54 eliasp Ahlee: reboot handling is an open issue: https://github.com/saltstack/salt/issues/6792
20:54 Ahlee dfinn1: salt -I master:environment:dev test.ping
20:55 dfinn1 is that a L or an eye?
20:55 Ahlee eye
20:55 Ahlee it's a pillar match
20:55 dfinn1 [prod][root@salt-master01 (salt-master) salt]# salt -i master:environment:pqa test.ping
20:55 dfinn1 Usage: salt [options] '<target>' <function> [arguments]
20:55 dfinn1 salt: error: no such option: -i
20:55 Ahlee capital
20:56 eliasp dfinn1: see also: http://docs.saltstack.com/en/latest/topics/targeting/compound.html
20:56 dfinn1 No minions matched the target. No command was sent, no jid was assigned.
20:56 Ahlee dfinn1: salt minion pillar.items, verify you see master options
20:56 it_dude joined #salt
20:56 Ahlee it's enabled by default, but can be disabled
20:57 dfinn1 looks like it's there
20:57 Ahlee then you should be able to match on it with -I, it's just getting the depth right
20:58 joehoyle joined #salt
21:01 jaimed joined #salt
21:01 [MT] WOOHOO!!!!!!!!!!!!!! :D :D :D :D
21:03 rawzone joined #salt
21:04 Ryan_Lane joined #salt
21:06 [MT] whiteinge: Ryan_Lane: GOT IT!!!! There's a setting in the LB to reuse connections with the server and using that after the authentication bit happens makes it go frowny face. However, making the LB establish new connections to the syndics every time makes it happy face. This is going to be one heck of an interesting setup and I need to make sure keys, ext_mods, and other data keep in sync. Beyond
21:06 [MT] that, it seems to be running solid. :D
21:08 Teknix joined #salt
21:08 jimklo what's the best way to make a directory path (and all parent directories if they don't exist)? I've tried file.directory state and it seems to fail
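[Editor's note: jimklo's earlier file.directory failure (20:22) is consistent with the option name being off — the file.directory state takes `makedirs`, not `mkdirs`, to create missing parents. A sketch, with the path and ownership as placeholders:]

```yaml
/workspace/foo/bar:
  file.directory:
    - makedirs: True    # creates /workspace and /workspace/foo if absent
    - user: root
    - mode: 755
```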
21:08 Ryan_Lane [MT]: once you get it running well you should write a blog post ;)
21:10 anuvrat joined #salt
21:10 [MT] Ryan_Lane: it needs to be done - don't let me forget
21:12 [MT] nope... it's not running solid
21:13 [MT] damnit... what now?
21:13 ajw0100 joined #salt
21:15 shawnjgoff I have a minion that didn't have the hostname setup correctly, so its id was localhost.localdomain. I've fixed it; I've verified that socket.getfqdn is returning the hostname, but when I start the salt-minion, it is still using localhost.localdomain.
21:15 shawnjgoff I know I can override it in the config file, but I'd rather get it working correctly.
21:16 fragamus_ joined #salt
21:17 JasonSwindle joined #salt
21:18 possibilities joined #salt
21:20 timoguin shawnjgoff: so you fixed the minion id... did you delete the key from the master before starting up again?
21:21 shawnjgoff There isn't a key with this id.
21:21 timoguin is there one still hanging around with the old id though?
21:21 shawnjgoff This key isn't on the server.
21:21 shawnjgoff There is.
21:21 timoguin delete that and see if it re-auths with the new minion id
21:22 ajw0100 joined #salt
21:22 shawnjgoff Yep - it submitted a new key, but still with localhost.localdomain, which is what I expected.
21:23 timoguin i want to say the minion id is cached somewhere as well, but i'm not seeing evidence of that in /var/cache/salt
21:24 timoguin i'd also clear out /etc/salt/pki/minion on the minion
21:24 timoguin that will cause it to regenerate its key before it tries to auth with the master
21:24 Teknix joined #salt
21:26 shawnjgoff I stopped the minion, cleared the keys, deleted the unapproved key from the master, restarted the minion. It's still using localhost.localdomain.
21:27 shawnjgoff grep -r 'localhost' /var/cache/salt gave me nothing.
21:27 shawnjgoff hm... there aren't any files there anyway.
21:27 timoguin okay... it's cached at /etc/salt/minion_id
21:27 timoguin seems like that would make more sense in /var/cache...
21:27 shawnjgoff Ah...
21:28 shawnjgoff That did it!
21:28 timoguin :)
21:29 Ryan_Lane1 joined #salt
21:29 shawnjgoff Thanks.
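[Editor's note: the full fix shawnjgoff and timoguin converged on, as a recap to run on the minion — after deleting the stale key on the master with salt-key. Paths are the stock ones; service commands vary by distro.]

```shell
service salt-minion stop
rm -f /etc/salt/minion_id                     # the cached minion id
rm -f /etc/salt/pki/minion/minion.pem \
      /etc/salt/pki/minion/minion.pub         # force a fresh keypair
service salt-minion start                     # re-auths under the new id
```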
21:30 joehoyle joined #salt
21:30 rojem joined #salt
21:35 xinkeT joined #salt
21:38 Ahlee Am I seeing this correct, and https://github.com/saltstack/salt/commit/4af873dcaffc463020e044e6b12b3862fa3a335a adds a return code for the run in addition to the returned values?
21:44 timoguin Ahlee: yea, so if there's any failure in the highstate run you will get a non-zero exit code
21:44 Ahlee freaking fabulous.
21:44 Ahlee assuming that's also true for state.sls
21:45 timoguin i'd assume so
21:45 dfinn1 left #salt
21:45 Ahlee yeah, tracing now
21:49 linuxlew_ joined #salt
21:56 mgarfias how would i go about getting an array of ip addresses that match a specified grain
21:59 tligda joined #salt
22:00 kermit joined #salt
22:01 joehoyle joined #salt
22:02 halfss joined #salt
22:03 Cidan joined #salt
22:04 Ahlee salt -G <grain> grains.item ipv4
22:04 Cidan joined #salt
22:04 Cidan joined #salt
22:04 Ahlee so salt -G virtual:physical grains.item ipv4
22:04 mgarfias inside a template, will that give me a list?
22:04 Ahlee will spit back all ipv4 IP addresses on physical
22:05 Ahlee Should
22:05 Cidan joined #salt
22:05 gothix_ joined #salt
22:05 millz0r joined #salt
22:05 zz_Cidan joined #salt
22:07 mgarfias oh, i'm only getting data from the minion that's configuring the module i'm messing with
22:07 mgarfias i need other hosts' data too
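[Editor's note: getting other hosts' data into a template — e.g. mgarfias's haproxy case from 20:18 — is what the salt mine is for: minions publish a function's result to the master, and any template can query it with mine.get. A sketch using the 2014-era expr_form kwarg; the grain target, function, and ports are placeholders.]

```yaml
# minion config (or pillar): publish each app server's addresses
mine_functions:
  network.ip_addrs: []
```

```jinja
# haproxy.cfg template: pull every matching minion's mine data
{% for server, addrs in salt['mine.get']('roles:appserver',
                                         'network.ip_addrs',
                                         expr_form='grain').items() %}
    server {{ server }} {{ addrs[0] }}:8080 check
{% endfor %}
```

If the grain was added after the minions started, `salt '*' mine.update` refreshes what's in the mine.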
22:07 mrj joined #salt
22:16 dramagods joined #salt
22:18 Teknix joined #salt
22:23 A||SySt3msG0 joined #salt
22:24 jaimed joined #salt
22:32 Teknix joined #salt
22:33 Luke_ joined #salt
22:35 tligda joined #salt
22:37 meteorfo_ joined #salt
22:37 elfixit joined #salt
22:38 Gareth hrm. some tweaking and I've managed to break schedule.py :)
22:41 Gareth ah. rogue tab.
22:41 UtahDave I thought I sensed a disturbance in the Force...
22:41 Gareth ...as if millions of jobs suddenly screamed out in terror and were suddenly silenced? :)
22:43 UtahDave exactly, Gareth.   :)
22:44 xinkeT joined #salt
22:56 faldridge joined #salt
22:56 jslatts joined #salt
23:00 tyler-baker_ joined #salt
23:00 mgw1 joined #salt
23:00 tyler-baker_ left #salt
23:01 tonthon joined #salt
23:01 it_dude joined #salt
23:01 pfallenop joined #salt
23:01 yusuket_ joined #salt
23:01 djaime joined #salt
23:02 TheRhino_ joined #salt
23:02 trevorj joined #salt
23:02 BogdanR joined #salt
23:02 Teknix joined #salt
23:03 ajolo__ joined #salt
23:03 mgw joined #salt
23:04 mgw joined #salt
23:04 Psi-Jack_ joined #salt
23:05 sijis joined #salt
23:05 sijis joined #salt
23:05 wendall911 joined #salt
23:05 Luke_ joined #salt
23:05 _gothix_ joined #salt
23:06 zz_cro joined #salt
23:07 mgw joined #salt
23:07 mgw joined #salt
23:09 nebuchadnezzar joined #salt
23:09 mgarfias i have a grain, and i don't think its getting into the mine.  how can i best go about figuring out why?
23:11 faldridge joined #salt
23:14 ajw0100 joined #salt
23:14 mgw joined #salt
23:15 happytux_ joined #salt
23:17 dramagods joined #salt
23:18 zz_Cidan joined #salt
23:21 patrek joined #salt
23:23 elfixit joined #salt
23:26 fllr joined #salt
23:27 fllr joined #salt
23:27 ajw0100 joined #salt
23:27 xinkeT joined #salt
23:31 Ryan_Lane1 joined #salt
23:34 halfss joined #salt
23:40 A||SySt3msG0 joined #salt
23:44 nebuchad` joined #salt
23:51 bhosmer joined #salt
23:54 logix812 joined #salt
23:55 logix812 provisioning a new box, I am getting this error: got an unexpected keyword argument 'saltenv'
23:55 logix812 googling around for an answer, thought I would try here too
23:56 logix812 happens on all my pkg.installed:
23:56 logix812 2014.1.3 is my version
23:57 Teknix joined #salt