
IRC log for #salt, 2014-09-05


All times shown according to UTC.

Time Nick Message
00:01 SheetiS joined #salt
00:02 shaggy_surfer joined #salt
00:02 nyx joined #salt
00:03 n8n joined #salt
00:09 n8n joined #salt
00:19 n8n joined #salt
00:25 icebourg joined #salt
00:26 rome joined #salt
00:29 kingel joined #salt
00:31 rome joined #salt
00:37 invsblduck joined #salt
00:39 bezeee joined #salt
00:42 audreyr left #salt
00:42 TyrfingMjolnir joined #salt
00:44 rome joined #salt
01:01 shaggy_surfer joined #salt
01:10 n8n joined #salt
01:12 KaaK joined #salt
01:13 smkelly When I run salt '*' state.highstate against my environment (1 host, still building things), it times out before I get the results of the run. My states are extremely simple; checks for a package and manages like two files. Any suggestions on where I should be looking for why it is so slow? Or is this normal?
01:14 KaaK what is the idiomatic way to handle a state that is required by a large number of other states (or perhaps all of them)?
01:14 KaaK do I have to explicitly add the `require` in all my dependent states? or is there a cleaner shortcut?
01:16 KaaK more concretely, i've got a state for installing several packages -- officially most of the rest of my states depend on these packages being installed -- but it is quite tiresome adding the require to every state
01:18 __number5__ KaaK: just make sure that state has order before any one depends on it, http://docs.saltstack.com/en/latest/topics/tutorials/states_ordering.html
01:19 KaaK __number5__, the `order` keyword?
01:20 oz_akan joined #salt
01:20 __number5__ KaaK: that or put it in the first included sls and at the top of that file
01:22 Furao joined #salt
01:22 KaaK __number5__, one last fuzzy bit -- are order numbers global? or local only to a sls file?
02:26 __number5__ KaaK: global, read the link above and run a few state.show_lowstate/state.show_highstate and you'll have a better idea
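The pattern __number5__ is pointing at looks roughly like this -- a minimal sketch of a state forced to run before everything else via `order` (package names are illustrative):

    # common.sls -- runs before any state that lacks an explicit order
    common-packages:
      pkg.installed:
        - order: 1
        - pkgs:
          - git
          - curl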
01:27 xcbt joined #salt
01:27 KaaK __number5__, thanks
01:27 yomilk joined #salt
01:29 kingel joined #salt
01:34 oz_akan joined #salt
01:41 nyx joined #salt
01:45 tom__ joined #salt
01:46 bhosmer joined #salt
01:46 vbabiy joined #salt
01:47 jnials joined #salt
01:50 n8n joined #salt
01:54 tmh1999 joined #salt
01:56 jalaziz_ joined #salt
01:59 diegows joined #salt
02:02 tmh1999 joined #salt
02:11 aparsons joined #salt
02:12 aparsons_ joined #salt
02:14 XenophonF joined #salt
02:14 XenophonF hey are keys in pillar case-preserving?
02:15 manfred i do believe they are case sensitive
02:15 XenophonF OK
02:15 XenophonF good
02:16 ramishra joined #salt
02:16 XenophonF thanks manfred
02:19 tmh1999 joined #salt
02:19 XenophonF any idea of how complex a key in pillar can be?
02:20 XenophonF like, could it be a pathname, like ``/foo/bar: baz qux``?
02:21 Hipikat XenophonF: arbitrary data, as deep as you’d like it
02:21 errr joined #salt
02:21 XenophonF i guess i could rtfs or try it out :)
02:21 Hipikat XenophonF: the string ‘/foo/bar’ with a value ‘baz qux’ is perfectly acceptable
02:21 XenophonF awesome
02:22 errr in the vmware cloud provider am I understanding that you can provide the url in all of the following formats: 10.1.1.1 10.1.1.1:443 https://10.1.1.1:443 https://10.1.1.1:443/sdk 10.1.1.1:443/sdk
02:22 aparsons joined #salt
02:25 aparsons_ joined #salt
02:28 BrendanGilmore joined #salt
02:29 rome joined #salt
02:29 XenophonF so, if you saw something like http://paste.debian.net/119425/ in a pillar, would you be able to figure out what's going on?
02:29 XenophonF i guess i'm also assuming the reader is familiar with Apache web server configs
02:31 Hipikat XenophonF: yeah, that’s nice. :) starting with a concise but self-explanatory pillar structure and then engineering a formula to make it do stuff is the way to go
02:31 TTimo joined #salt
02:31 rome joined #salt
02:34 XenophonF i'm thinking about making a pillar called apache:vhosts:<website>:sections instead
02:34 jalaziz joined #salt
02:34 XenophonF and then writing the jinja template such that it just transposes the YAML directly into an Apache config
02:34 XenophonF like, completely generic
02:35 XenophonF so  apache:vhosts:www.example.com:sections:Directory:/ would automatically turn into <Directory "/"></Directory> in the matching config file
02:37 Hipikat if you want *completely* generic you could change ‘apache’ for ‘web_server’ and have web_server:server_type = ‘Apache’ and in the top init.sls include: - {{ web_server.server_type }} and write formulas for different web servers :D
02:37 XenophonF or ``apache:vhosts:www.example.com:sections:DirectoryMatch:/var/www/test*:Options: Indexes`` would end up rendered as ``<DirectoryMatch "/var/www/test*">Options Indexes</DirectoryMatch>``
02:37 XenophonF hah!
02:37 XenophonF i'm kind of going there
02:37 XenophonF once i have an apache formula that i like, i'm going to tackle iis
02:37 Hipikat but yeah starting small and refactoring towards genericness is generally better than getting ahead of yourself
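A rough sketch of the pillar-to-config transposition XenophonF describes (vhost name, section names and directives are illustrative, and the template fragment assumes a `vhost` variable passed into the template -- only one possible way to walk the structure):

    # pillar
    apache:
      vhosts:
        www.example.com:
          sections:
            Directory:
              /:
                Options: Indexes

    # jinja fragment in the vhost config template
    {% for section, paths in vhost['sections'].items() %}
    {% for path, directives in paths.items() %}
    <{{ section }} "{{ path }}">
    {% for key, value in directives.items() %}
      {{ key }} {{ value }}
    {% endfor %}
    </{{ section }}>
    {% endfor %}
    {% endfor %}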
02:38 andrej__ Can I use something like  salt['match.pcre']('ps[^.]+\.com') and grains['os'] == 'CentOS' as a condition in an SLS?
02:38 XenophonF i want this apache formula to be better than the one in saltstack-formulas
02:38 aparsons joined #salt
02:38 andrej__ I get "unexpected end of template"
02:40 XenophonF andrej__, can you  post the error on paste.debian.net or something?
02:40 andrej__ Can do, gimme a min
02:40 andrej joined #salt
02:41 nitti joined #salt
02:42 andrej XenophonF http://pastebin.com/9847Z2Nf
02:44 iggy smkelly: one problem I was having was a bug at one point... apt-get update was being run for every package... should be fixed in recent versions
02:46 Ryan_Lane joined #salt
02:47 joehoyle joined #salt
02:48 joehoyle left #salt
02:49 aparsons joined #salt
02:49 iggy smkelly: you can also run salt-call -l debug... to get a better idea of what's going on
02:50 rallytime joined #salt
02:52 smkelly This is FreeBSD. And only one package. But thanks!
02:52 iggy well then...
02:52 iggy ;)
02:52 iggy it was just something I ran into
02:53 iggy still the salt-call -l debug bit is helpful
02:53 smkelly It is definitely helpful to be aware of, and the -l debug thing is good
02:53 smkelly Thanks
02:53 XenophonF andrej__: do you have an {% endif %} to match the {% if ... %}?
02:54 Hipikat and even more useful for debugging info is running salt-master and salt-minion in the foreground with -l debug :) i wish i’d learnt that a loooong time before i did
02:54 XenophonF oh man that's the truth
02:54 andrej Ugh ... I'm missing the opening curly from the endif
02:54 XenophonF salt-call -l debug is so helpful too
02:54 hoohaha joined #salt
02:54 andrej Thanks man ...
02:54 XenophonF great!
02:55 XenophonF those tiny little syntax errors are what usually get me :)
02:55 smkelly Looks like it might be the zfs module doing it
02:55 jkaye joined #salt
02:56 andrej I guess I was just dumbfounded and staring at the line it pointed to, rather than seeing the bigger picture
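For reference, the working version of the conditional andrej was after looks roughly like this (the state inside the block is illustrative):

    {% if salt['match.pcre']('ps[^.]+\.com') and grains['os'] == 'CentOS' %}
    centos-ps-hosts-only:
      pkg.installed:
        - name: mlocate
    {% endif %}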
02:56 hoohaha is it possible to install the 'salt' program on a workstation?  I'd like to be able to manage my minions from my workstation without having to ssh in to my master, first.  can this be done?
03:05 yomilk joined #salt
03:08 cherry joined #salt
03:09 icebourg joined #salt
03:09 TyrfingMjolnir joined #salt
03:10 n8n joined #salt
03:10 icebourg joined #salt
03:13 halfss joined #salt
03:14 cherry Does anyone have an openstack-with-saltstack project?
03:18 kingel joined #salt
03:20 ipmb joined #salt
03:23 ramishra joined #salt
03:26 skullone im putting my openstack work into salt as i go
03:26 skullone oh he left
03:27 malinoff joined #salt
03:27 skullone been looking at all these frameworks like foreman and such, and its just too much to do what i need
03:27 skullone ive got baremetal provisioning with my own scripted pxe stuff
03:29 ramishra joined #salt
03:35 anitak joined #salt
03:48 BrendanGilmore joined #salt
03:57 tmh1999 joined #salt
03:57 schimmy joined #salt
03:58 halfss_ joined #salt
03:59 schimmy1 joined #salt
04:00 thayne joined #salt
04:03 schimmy joined #salt
04:25 ramteid joined #salt
04:26 rypeck joined #salt
04:27 vbabiy joined #salt
04:31 aparsons joined #salt
04:33 vbabiy joined #salt
04:33 aparsons joined #salt
04:36 aparsons_ joined #salt
04:38 Ryan_Lane joined #salt
04:39 Ryan_Lane joined #salt
04:40 yomilk joined #salt
04:40 schimmy joined #salt
04:44 ajolo joined #salt
04:44 schimmy joined #salt
04:45 aurynn joined #salt
04:45 rypeck joined #salt
04:47 TTimo joined #salt
04:48 schimmy1 joined #salt
04:49 kingel joined #salt
04:49 jalbretsen joined #salt
04:54 icebourg joined #salt
04:56 jkaye joined #salt
04:58 ramishra_ joined #salt
05:03 Ryan_Lane joined #salt
05:09 ajw0100_ joined #salt
05:11 ramishra joined #salt
05:19 ramishra joined #salt
05:22 njs126 joined #salt
05:31 jambocomp joined #salt
05:37 murrdoc joined #salt
05:39 Ryan_Lane joined #salt
05:39 marco_en_voyage joined #salt
05:40 emostar is there a way to see the salt minion log when we call state.highstate from the master?
05:40 emostar other than ssh'ing into the minion and tailing the log of course...
05:41 esogas_ joined #salt
05:44 esogas_ joined #salt
05:44 felskrone joined #salt
05:46 vbabiy joined #salt
05:50 felskrone joined #salt
05:51 jdmf joined #salt
05:51 esogas_ joined #salt
05:51 robinsmidsrod joined #salt
05:51 xcbt joined #salt
05:52 esogas_ joined #salt
05:54 robinsmidsrod joined #salt
06:00 yomilk joined #salt
06:04 kingel joined #salt
06:05 lietu salt -v '*' cmd.run 'uname -a' says "Minion did not return", but if I run salt master in interactive mode with -l debug I can clearly see, "Got return from hostname for job <the_job_id>"?
06:06 lietu oh wtf .. a while later I saw "this salt-master instance has accepted 1 minion keys." and then the command works
06:08 lietu also, it seems salt-cloud can't fully set up centos minions on GCE because they come with "PermitRootLogin no" by default .. any ideas on how to work around this? can I set salt-cloud to use a non-root user to ssh in and sudo to install the minion?
06:08 ThomasJ|d I'm looking to deploy saltstack in a multisite mixed enterprise environment with active directory, and was looking into handling redirection to local saltmaster using SRV records, but I'm unable to find any information on this regarding salt. Anyone touched on something similar?
06:09 esogas_ joined #salt
06:16 halfss joined #salt
06:17 esogas_ joined #salt
06:19 jalbretsen joined #salt
06:23 kingel joined #salt
06:28 jhauser joined #salt
06:32 jalaziz joined #salt
06:33 kingel_ joined #salt
06:35 kingel joined #salt
06:35 agend_ joined #salt
06:35 mapet joined #salt
06:39 yomilk joined #salt
06:45 Sweetshark joined #salt
06:47 lcavassa joined #salt
06:48 esogas_ joined #salt
06:50 TTimo joined #salt
06:52 aparsons joined #salt
06:53 melinath joined #salt
06:53 esogas_ joined #salt
06:56 agend_ joined #salt
06:56 aparsons_ joined #salt
07:00 esogas_ joined #salt
07:01 tomspur joined #salt
07:02 tomspur joined #salt
07:05 jayfk joined #salt
07:10 bhosmer joined #salt
07:12 lietu is it possible for me to use salt-cloud to pass in e.g. google compute engine metadata to all the instances it creates?
07:13 lietu I'm trying to work around the issue that the instances it creates don't actually seem to end up with salt installed, thus they won't connect to the salt master, etc.
07:13 lietu I could do this, via a startup-script metadata that would just install salt minion on the machine
07:16 esogas_ joined #salt
07:17 esogas_ joined #salt
07:18 chiui joined #salt
07:22 MrTango joined #salt
07:27 slav0nic joined #salt
07:29 melinath joined #salt
07:32 fredvd joined #salt
07:34 lietu hmm .. when salt-cloud is SSH-ing into the newly booted VM, what ssh key does it use?
07:36 lietu ok, so there's an ssh_key_file apparently
07:39 tmh1999 joined #salt
07:41 aparsons joined #salt
07:41 scalability-junk joined #salt
07:41 rogst joined #salt
07:42 aparsons_ joined #salt
07:43 n8n joined #salt
07:44 kingel_ joined #salt
07:56 intellix joined #salt
07:56 sectionme joined #salt
07:58 yomilk joined #salt
08:08 lionel joined #salt
08:08 thehaven joined #salt
08:12 viq joined #salt
08:17 deepz88 joined #salt
08:17 scarcry joined #salt
08:18 TTimo joined #salt
08:18 alanpearce joined #salt
08:23 runiq joined #salt
08:36 halfss_ joined #salt
08:45 kermit joined #salt
08:50 ekristen joined #salt
08:54 runiq Okay, I'm at a loss: https://gist.github.com/robodendron/be24f3165b0145529051
08:56 runiq Does anybody know why Jinja thinks the imported dicts have a value of "None"?
08:57 jkaye joined #salt
09:03 CeBe joined #salt
09:09 intellix joined #salt
09:18 lietu I'm really starting to lose hope with salt-cloud and GCE .. after like a week got to the point where I can actually create VMs with it, now I can't get salt-cloud to SSH in to the machines successfully, GCE will not create SSH keys in the project metadata, and the gcloud tool is broken so I can't create a startup script .. this is madness
09:19 mariusv joined #salt
09:22 linjan joined #salt
09:23 marnom lietu: yeah it can be a lot of work to get it to work reliably, I'm using it with vSphere, Proxmox and Digital Ocean now but it was quite a lot of work to get there
09:24 jcockhren lietu: last I checked the GCE docs for salt-cloud were massively out of date. AWS, DO and linode were the easiest to get working
09:25 jcockhren (for me)
09:30 micko joined #salt
09:31 mariusv joined #salt
09:32 lietu it's not just that the salt-cloud + GCE docs are out of date, but e.g. the gcloud tool is crashing, new GCE instances don't get the SSH keys defined in project metadata, etc.
09:32 thayne joined #salt
09:32 babilen runiq: Are you sure that you can import multiple things at the same time?
09:32 lietu and just getting salt-cloud installed to work at all with GCE .. oh my god, took ages, mainly due to HORRIBLE error messages from the salt components and lack of basic error checks
09:35 ianmcshane joined #salt
09:36 runiq babilen: I should, the Jinja docs say so. I also get the same error when I only import one thing, though – doesn't matter which one. :/
09:36 runiq Uh, wait – what does 'grains.filter_by' filter on by default=
09:37 runiq "os" or "os_family"?
09:38 runiq Alright, I am officially an idiot – I always thought it filtered by os, but it filters by os_family. I used "CentOS" when I should have used "RedHat". Sorry for the noise, people. :/
09:38 babilen runiq: Okay, os_family
09:39 babilen That was my next guess, but then I have very little exposure to both CentOS and Arch and therefore wasn't sure whether it was the correct os_family
09:39 babilen Glad you figured it out
09:40 rofl____ so when will there be built packages for 2014.7?
09:40 rofl____ it would be a lot easier to start using the RCs
09:41 giantlock joined #salt
09:42 lietu ok, so with a project-wide startup-script metadata in GCE, I managed to get it to boot instances with a "salt" user and an authorized SSH key that I have on the salt master, configured ssh_key_file for salt-cloud on /etc/salt/cloud and ssh_username on /etc/salt/cloud.profiles, still salt-cloud is failing authentication to the VMs ..
09:45 bhosmer joined #salt
09:48 CeBe1 joined #salt
09:50 felskrone joined #salt
09:51 ingwaem joined #salt
09:51 Nexpro1 joined #salt
09:53 ingwaem good morning all
09:53 ingwaem with the latest version now incorporating api, how does one go about getting rest_cherrypy working again on debian?
09:54 ingwaem i have a ton of code using port 8000 to connect
10:05 n8n joined #salt
10:07 ajprog_laptop joined #salt
10:09 yomilk joined #salt
10:15 geekmush3 joined #salt
10:15 MK_FG joined #salt
10:18 TheThing joined #salt
10:19 TTimo joined #salt
10:20 ndrei joined #salt
10:33 yomilk joined #salt
10:33 MrTango joined #salt
10:41 ggoZ joined #salt
10:45 istram joined #salt
10:46 ianmcshane joined #salt
10:56 CeBe1 joined #salt
10:59 masm joined #salt
11:11 martoss joined #salt
11:12 CeBe1 joined #salt
11:13 scalability-junk joined #salt
11:13 CeBe1 joined #salt
11:16 blarghmatey joined #salt
11:18 tkharju2 joined #salt
11:19 VSpike joined #salt
11:20 VSpike Stupid question #1 .. In http://docs.saltstack.com/en/latest/topics/tutorials/walkthrough.html#the-first-sls-formula it talks about /srv/salt/ ... is there anything special about that location? I don't have it
11:21 Outlander joined #salt
11:21 mackstic1 The default configs assume that your salt tree will sit under /srv on the master
11:21 mackstic1 If it doesn't exist, just create it
11:24 VSpike mackstic1: thanks
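The default mackstic1 mentions corresponds to roughly this stock setting in the master config:

    # /etc/salt/master
    file_roots:
      base:
        - /srv/salt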
11:28 ianmcshane joined #salt
11:29 bhosmer joined #salt
11:29 lietu if I run salt with vagrant, what kind of a filter can I use in top.sls to match the vagrant VM? I tried with vagrant setting the hostname as "dev" and using 'dev*', but vagrant provision says "no top file or external nodes data matches found"
11:31 diegows joined #salt
11:31 marnom I don't know if dev* requires it to be longer than dev... try with just 'dev'?
11:33 lietu I think that's the first thing I tried, but not 100% so retrying with that
11:33 lietu yea, no, doesn't work
11:34 Sp00n is the id of the minion also dev?
11:34 * lietu shrugs
11:34 Sp00n its trying to match id, not hostname
11:35 Sp00n so my guess would be its not
11:35 lietu how do I find that out from inside the vm?
11:35 lietu oh yeah, this is masterless btw
11:36 Sp00n i dont know what that is, but your minion should have an id specified in the minion file
11:36 Sp00n /etc/salt/minion
11:36 lietu it is a single vm, set up with vagrant, no salt master .. /etc/salt/minion has no id in it
11:36 lietu it says "file_client: local"
11:36 lietu nothing else
11:37 lietu I could use vagrant to set up a grain and maybe match that in top.sls, no idea how that works tho, but I'll try and figure it out
11:38 moos3 anyone have a good talk or walkthrough on pillars and multiple environments such as dev, stage, prod ?
11:38 tmh1999 joined #salt
11:42 TTimo joined #salt
11:44 hobakilllll joined #salt
11:47 elfixit joined #salt
11:48 lietu ok, the grains didn't actually work, but pillar data does .. I can set pillar data {"vagrant": "vagrant"} in vagrant config, then match against that with 'vagrant:vagrant': - match: pillar
11:48 lietu allows me to use one top.sls for my "real" environments and vagrant VMs pretty easily
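What lietu ends up with is roughly the following top.sls match (the non-vagrant target and the state names are illustrative):

    # top.sls (masterless; applied via salt-call --local state.highstate)
    base:
      'vagrant:vagrant':
        - match: pillar
        - devbox
      'web*':
        - webserver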
11:49 RomtamK joined #salt
11:49 dvestal joined #salt
11:50 RomtamK left #salt
11:52 ggoZ joined #salt
11:53 vbabiy joined #salt
11:54 intellix joined #salt
11:55 vbabiy joined #salt
11:56 Furao joined #salt
12:00 Furao joined #salt
12:01 tkharju3 joined #salt
12:04 Furao joined #salt
12:12 elfixit joined #salt
12:13 workingcats__ joined #salt
12:15 lietu can I use salt to recursively copy a directory from the salt master, outside the salt root, to a minion?
12:17 felskrone you mean outside the file_root? if yes, then no :-)
12:17 felskrone what basically happens is that the minion tries to copy the files, and it does that via salt://, so the paths need to be below the file_root somewhere
12:18 marnom lietu: if you put the directory in GIT, you could..
12:21 _ikke_ Is salt-ssh still considered alpha?
12:26 Setsuna666 joined #salt
12:31 blackhelmethacke joined #salt
12:33 lietu felskrone: right, well that sucks .. I guess I'll just set up an optional step for non-vagrant machines and set up the mounts in a specific way on vagrant machines
12:33 lietu marnom: that's not quite the same thing
12:35 hobakill joined #salt
12:35 lietu the issue here is, that I use vagrant for development, where I have the ability to e.g. mount /src to a directory outside the VM and provide the source code there .. now when deploying via a "real master", I'd rather have the code deployed on the salt master from my CI/CD system, and then have salt distribute it .. it would be easy for the latter case for me to just put the source folder under salt roots, or just symlink, but I can't do it the same way i
12:37 lietu the only "solution" I can come up with is that on the master set up a symlink from /srv/salt/roots/src to /src and add a step on the non-vagrant machines to copy that directory over to /src first .. that way I can then leave all the other states refer to /src/* stuff
12:40 mackstic1 Depending on how you feel about matters that extra step could just be wrapped in a jinja if statement that checks if this is a live machine or not
12:41 mackstic1 And then a depend line that is again wrapped in the same if check in the state that handles the deployment
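mackstic1's suggestion could look roughly like this -- a sketch that reuses the vagrant pillar flag from above and assumes a salt://src symlink under the master's file_roots:

    {% if salt['pillar.get']('vagrant') != 'vagrant' %}
    deploy-src:
      file.recurse:
        - name: /src
        - source: salt://src    # e.g. /srv/salt/src symlinked to /src on the master
    {% endif %}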
12:42 vejdmn joined #salt
12:44 alanpearce joined #salt
12:47 sectionm1 joined #salt
12:48 nyx joined #salt
12:50 miqui joined #salt
12:50 yomilk joined #salt
12:52 TTimo joined #salt
12:52 miqui joined #salt
12:58 ekristen joined #salt
12:58 simmel joined #salt
12:59 cpowell joined #salt
13:01 jkaye joined #salt
13:01 martoss joined #salt
13:02 brandon___ joined #salt
13:02 hobakill if i want to keep the winrepo.p file outside of the salt 'environment' can i specify that in the Windows Software settings?
13:02 hobakill eg: win_repo_cachefile: '/etc/salt/whatever/winrepo.p'
13:03 rome joined #salt
13:03 oz_akan joined #salt
13:03 blarghmatey joined #salt
13:03 oz_akan joined #salt
13:07 ndrei joined #salt
13:09 rome joined #salt
13:10 halfss joined #salt
13:12 hobakill i really really really really hate windows pkg repos.
13:13 ericof joined #salt
13:14 dude051 joined #salt
13:14 XenophonF joined #salt
13:15 bdf joined #salt
13:17 sastorsl joined #salt
13:18 sastorsl can i test whether an sls file exists before I include it?
13:18 sastorsl Say if I want to include a .custom.sls, but only if it exists.
13:19 TTimo are you familiar with chocolatey hobakill
13:19 rome joined #salt
13:19 TTimo with the templating I'm pretty sure you could ?
13:19 jkaye joined #salt
13:19 sastorsl since it almost made me giggle, I guess not.
13:19 TTimo I mean .. ultimately you can just write straight up python to spit state .. so if all else fails use os.path.exists
13:19 hobakill TTimo: i am...but it doesn't seem to work on my version on salt. i have an issue ticket on it
13:20 hobakill TTimo: even if it did - i don't know how to integrate chocolatey into a state file.
13:20 TTimo http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.chocolatey.html
13:20 dvestal joined #salt
13:20 TTimo there's a state module for it
13:22 hobakill TTimo: isn't "state module" a conflicting term?
13:23 TTimo yeah you're right .. that part still confuses me :)
13:23 hobakill TTimo: i need to make sure certain application are installed on windows boxes and i can't do that with anything but a winpkg repo AFAIK
13:24 hobakill and i can't get my minions to see the repo i set up... because salt hates me. :(
13:24 dccc joined #salt
13:25 VSpike Someone integrated chocolatey and salt? Apart from the obvious jokes, that's got to be quite useful. I have a lot of manual build recipes to convert to salt states, and the Windows ones tend to use chocolatey
13:26 sastorsl So, I can write something like '{% if pillar[envid] is defined %}', but can I do something similar for testing if a state exists?
13:27 hobakill VSpike: it would be if it worked properly
13:27 simmel joined #salt
13:29 nyx__ joined #salt
13:29 bobby_ joined #salt
13:33 snuffeluffegus joined #salt
13:34 Supermathie joined #salt
13:35 bobby_ i'm having an issue writing a _module, anyone have a moment?
13:35 Supermathie Heyo. What's the recommended way to clean up old jobs from the master? I'm running out of inodes.
13:35 Supermathie sure bobby_ :)
13:37 pressureman joined #salt
13:43 sastorsl Trying to understand "watch". I know I can watch i.e. "file: /my/directory", but can I watch if _anything_ happened in a state, say a "watch: mystate" in a "substate.sls" ?
13:44 sastorsl and only run "substate.sls" if "mystate" made changes.
13:44 shoma joined #salt
13:46 mage_ joined #salt
13:53 VSpike hobakill: same goes for chocolatey itself. It a bit of a crap shoot as to what works and what doesn't :/
13:56 mage_ hello
13:57 rome joined #salt
13:59 SheetiS sastorsl: I am pretty sure you can "-watch:\n  - sls: mystate" and if mystate.sls applies changes it will do what you want.
14:01 SheetiS 2014.7.0 also adds onchanges which will do that even better methinks.
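A minimal sketch of the pattern SheetiS describes (the command and file names are illustrative; cmd.wait only fires when something it watches reports changes):

    # substate.sls
    include:
      - mystate

    rebuild-after-mystate:
      cmd.wait:
        - name: /usr/local/bin/rebuild.sh
        - watch:
          - sls: mystate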
14:01 quickdry21 joined #salt
14:01 mage_ I plan to use a deployment tool soon and I can't choose between salt and ansible. I've read a lot of reviews and apparently both are awesome. So stupid question: why salt over ansible ?
14:02 mage_ also can salt replace fabric for deployment ? or the way to do is to use both in parallel ?
14:02 bobby_ here is the problem i'm having: http://pastebin.com/Mg9wHLjJ
14:02 bobby_ attempted to see if there was an open/closed issue related to ti but couldn't find anything
14:03 babilen bobby_: That's just beatiful (apart from the horrible pastebin, use http://refheap.com)
14:03 racooper joined #salt
14:03 babilen *beautiful
14:03 babilen bobby_: What happens if you just return eggs as ex? (as opposed to "str(eggs)")
14:03 bobby_ https://www.refheap.com/89847
14:03 babilen bobby_: That's crazy, let me reproduce that locally. Which version of salt is this?
14:03 nyx joined #salt
14:03 blarghmatey joined #salt
14:03 bobby_ https://www.refheap.com/89848
14:03 bobby_ versions ^
14:03 halfss joined #salt
14:03 majoh left #salt
14:03 giantlock joined #salt
14:03 halfss_ joined #salt
14:03 babilen mage_: I've evaluated both and listened to people who had a *lot* more experience with them and it was mostly: "Salt is easier to extend if you want to do things that aren't available yet while ansible's upstream is *very* opinionated and you often end up doing things 'their way'". I'll try to find a great writeup on both, but for some reason I don't have it in my bookmarks :-/
14:03 babilen bobby_: Oh, that's quite old. Would it be a problem to reproduce that on .10 ?
14:03 hobakill i have no idea how to integrate the windows package repo if i use gitfs for my environments. absolutely no idea. windows packaging is the bane of all of these config mgmt programs.
14:03 mage_ babilen: okay..
14:03 mr_chris joined #salt
14:03 bobby_ babilen: I'll attempt to upgrade and reproduce.
14:03 mage_ atm I'm using fabric to deploy with a script like https://dpaste.de/3BFX and I wonder if I can achieve the same in salt
14:03 bobby_ babilen: This particular master has ~1,000 minions reporting to it all running 2014.1.0. Do you know offhand if there will be any issues with 2014.1.0 minions reporting to a 2014.1.10 master until we can get those minions upgraded?
14:03 mage_ can I use salt in ssh mode (without having to install a daemon on each server) ?
14:03 mage_ as for Ansible
14:03 XenophonF mage_, it's possible using salt-ssh
14:03 XenophonF but i have run into problems getting it to work on FreeBSD
14:03 XenophonF i think the issue is with sshpass
14:03 babilen bobby_: You would definitely have to upgrade the master first, but I cannot promise you that it won't be an issue at all. Give me a second, I'll try to reproduce it in my vagrant testsetup.
14:03 XenophonF not salt
14:03 mage_ XenophonF: I'm also using FreeBSD
14:03 XenophonF hm... how to get Salt to automatically generate a X.509 key-pair and the associated CSR...
14:03 mage_ is FreeBSD support good ?
14:03 XenophonF generally, it's great!
14:03 mechanicalduck joined #salt
14:03 XenophonF except for the fact that just about every formula in the salt-formulas repo is written for Linux
14:03 XenophonF :(
14:03 NV joined #salt
14:03 XenophonF so at least for me, sshpass doesn't work, and i want to be able to use salt-ssh with a username and password
14:03 XenophonF i figure that if i have to manually install a ssh authorized key, i might as well go ahead and manually install the salt minion
14:03 hobakill VSpike: i'd do anything to just get this working ONCE.
14:04 Supermathie haven't used ansible, but I understand it does EVERYTHING over ssh? That's just one option in salt.
14:04 ajprog_laptop joined #salt
14:04 Supermathie bobby_, holy crap ok I have to try that out
14:05 babilen bobby_: Works fine here: https://www.refheap.com/89850
14:05 Supermathie bobby_, are you really running that from the command line like that? or are you escaping the arguments
14:06 babilen mage_: You can use salt SSH, but it will be much slower and there might be rough edges. Is it a problem to install the minion on your boxes?
14:06 sastorsl SheetiS: You are absolutely right
14:06 bobby_ Supermathie: really running it like that via CLI
14:06 bobby_ babilen: Are you on 2014.1.0 or a later version?
14:06 babilen bobby_: And, fwiw, that should actually be: salt '*' deploy.spam '['asdf','abcdefghijklmnopqrztuvwxyz']'
14:06 sastorsl SheetiS: re "watch:\n - sls: mytest"
14:07 babilen bobby_: That is with 2014.0.10
14:07 babilen bobby_: err, salt '*' deploy.spam "['asdf','abcdefghijklmnopqrztuvwxyz']" rather
14:07 babilen .1.10 *sigh*
14:08 Supermathie bobby_, you have a file named 'r' in the directory from which you're running the command
14:08 bobby_ Supermathie: ... yes
14:08 Supermathie http://paste.ubuntu.com/8260008/
14:08 babilen bobby_: You have to escape the argument (as shown above)
14:08 hobakill i hope UtahDave shows up today.
14:09 Supermathie characters like [] are shell metacharacters you have to escape that as babilen says
14:09 Supermathie echo ['asdf','abcdefghijklmnopqrztuvwxyz'] → r
14:09 babilen Essentially the same as with *
14:09 babilen indeed
14:09 ndrei joined #salt
14:10 SheetiS sastorsl: Good deal :)
14:10 bobby_ Supermathie: Not sure why I didn't think of that :(
14:10 bobby_ Thanks to the two of you for all your help.
14:10 Supermathie bobby_, because sometimes you need to take a step back and ask someone AHHHH WHAT AM I DOING WRONG and they can see the answer ;)
14:11 sastorsl SheetiS: Made my day :-) I'm fighting with some of the concepts, the old bash/sed/awk-scripter I am - only doing some projects in python.
14:11 wangofett Anybody know/have the problem with a windows minion appear to just... not return
14:11 wangofett I get [DEBUG   ] Checking whether jid 20140905090755367980 is still running
14:11 wangofett and it does that over and over
14:11 Supermathie wangofett, running powershell command?
14:11 wangofett with [DEBUG   ] jid 20140905091127640398 found all minions
14:11 wangofett trying to do state.highstate
14:11 smcquay joined #salt
14:12 SheetiS sastorsl: I'm definitely an old bash/sed/awk guy, but I really have started to dig python over the last couple of years.  It's now my go-to for most things as python is on all the machines I manage anymore.
14:12 babilen np, rock on. And I would recommend planning that upgrade. It shouldn't be much harder than upgrading the master and then running "salt '*' pkg.install pkgs='["salt-common", "salt-minion"]'" to upgrade the minions.
14:12 wangofett Supermathie: I restarted the salt minion service, too
14:12 babilen bobby_: ^^
14:12 jkaye joined #salt
14:12 Supermathie wangofett, try running it from the command line with debug on, that helps me find problems all the time
14:13 perfectsine joined #salt
14:13 XenophonF I personally would like to use salt-ssh to bootstrap the minion install.
14:13 XenophonF but like I said, on my Salt master I can't get sshpass to work
14:13 sastorsl SheetiS: My way too, only I'm still doing a lot of awk, logparsing, etc. Wrote some python stuff to clean up a Maildir based (courier) mailserver. Fun project.
14:13 wangofett Supermathie: you mean `$ salt 'myminion' state.highstate -l debug` ? That
14:14 wangofett that's what's showing me the aforementioned lines
14:16 geekmush1 joined #salt
14:16 anitak joined #salt
14:16 berto- joined #salt
14:16 wangofett looking for a process on the minion that I can kill... but I'm not finding anything promising
14:17 to_json joined #salt
14:17 bobby_ babilen: Thanks, we'll look into upgrading. We had issues when upgrading between various versions of 13.x and 16.x which is why we chose to stick with 2014.1.0 everywhere. It may be time to upgrade again. Thanks again for everything.
14:19 hobakill XenophonF: you got a second to help me out with some winrepo questions? you were really helpful last time i ran into issues.
14:21 Supermathie wangofett, I mean stop the service on the windows minion then start it up from cli with 'salt-minion -l debug'
14:21 babilen bobby_: We've been really quite happy with .10 and I consider it a reasonably solid release.
14:22 d3vz3r0 joined #salt
14:22 mpanetta joined #salt
14:22 rallytime joined #salt
14:23 ramishra joined #salt
14:23 workingcats joined #salt
14:24 orev joined #salt
14:25 peters-tx joined #salt
14:26 bhosmer joined #salt
14:27 simmel joined #salt
14:27 debian112 joined #salt
14:28 rome_ joined #salt
14:31 vejdmn1 joined #salt
14:32 orev joined #salt
14:33 tmh1999 joined #salt
14:38 dvestal joined #salt
14:39 picker joined #salt
14:40 icebourg joined #salt
14:40 orev joined #salt
14:41 hobakill is there a file:/// type format for win_repo_cachefile?
14:44 pjs joined #salt
14:45 kermit joined #salt
14:46 jergerber joined #salt
14:46 vejdmn joined #salt
14:47 eunuchsocket joined #salt
14:48 rome joined #salt
14:48 XenophonF hobakill: sure whats up?
14:48 XenophonF sorry i was afk
14:49 rome_ joined #salt
14:49 XenophonF mage_: salt + poudriere = freebsd config management happiness :)
14:50 XenophonF hobakill: I'm not sure I understand your question.
14:50 hobakill XenophonF: no worries. i am using gitfs for my environments and it totally borked my winrepo that i worked so hard to fix. now i've made a mess of it and unsure how to fix it
14:50 XenophonF wow that sucks!
14:50 esogas_ joined #salt
14:50 XenophonF so how did it break your winrepo?
14:51 hobakill XenophonF: what confuses me is the /etc/salt/master file now
14:51 XenophonF OK
14:51 eunuchsocket Hi all, I'm confused about file.managed.  My state manages /etc/sysconfig/snmpd.  If the file doesn't exist it's created (as expected); however, if I edit the file on the minion and run a highstate the local changes aren't reverted back to the master's version.  Is this normal?
14:51 hobakill XenophonF: because /srv/salt/win/blahhhhh doesn't exist anymore... that file structure is gone. i have no local environments on the salt master. they live in our internal git
14:52 martoss joined #salt
14:52 esogas left #salt
14:52 hobakill so win_repo_mastercache: ... no idea how to set that up now.
14:53 mage_ XenophonF: I'm already using poudriere .. it's awesome :)
14:53 XenophonF hobakill: how do you reference the gitfs repo? is it a URI like gitfs://something?
14:53 iggy anybody ever hooked salt up to riemann or graphite for events?
14:54 hobakill XenophonF: looks like this: https://www.hobapolis.com/paste/?b4d1f64682d56df5#YFCngZzjOTHTHWsELfxQAnIZfCHIwJYKK7fWjn9cIjA=
14:56 XenophonF gotcha
14:56 longdays joined #salt
14:56 XenophonF so you can't use win_gitrepos for this?
14:57 UtahDave joined #salt
14:57 XenophonF i thought gitfs was for pillar or something, but i'm not sure because i don't (yet) use it
14:57 toastedpenguin1 joined #salt
14:57 hobakill XenophonF: i sure can and that works fine but when i generate the winrepo.p file or whatever it's called, the minions can't see it cuz i have no idea how to point those minions to that file.
14:57 UtahDave gitfs is for your salt states.  There is also a git pillar for your pillar data
14:57 XenophonF ah
14:57 XenophonF OK
14:58 Musurp joined #salt
14:58 Gareth morning morning
14:58 hobakill XenophonF: so the win software repo looks like this. https://www.hobapolis.com/paste/?f9955dd8f1287e45#e/5Hw2xOnvfB5e/V4ZCuklxdGS0P0Dglc8/RJ4MdWUI=
14:59 hobakill XenophonF: but then how do i tell the minions to look at /etc/salt/winrepo/winrepo.p .... that's the crux of this issue.
14:59 hobakill (i think)
14:59 XenophonF hobakill: what happens if you comment out win_repo and win_repo_mastercachefile?
14:59 XenophonF and just use win_gitrepos?
14:59 UtahDave hobakill: Hm. I don't remember for sure, but I think your winrepo needs to be in your file_roots
15:00 XenophonF oh
15:00 UtahDave hobakill: let me check the code
15:00 XenophonF def listen to utahdave on this one :)
15:00 hobakill XenophonF: for sure.
15:00 hobakill UtahDave: that's fine....so then i ....what... create a folder in my win branch for repos and the minions will see it....?
15:01 rome joined #salt
15:01 eunuchsocket adding pastebin of state to my question above... http://pastebin.com/E0V5aEEk
15:02 hardwire joined #salt
15:03 ndrei joined #salt
15:04 UtahDave hobakill: ok, so in your minion's config, all you need is    win_repo_cachefile     That should be the path in your file_roots to the repo cachefile
15:04 UtahDave hobakill: the default is this:    win_repo_cachefile: 'salt://win/repo/winrepo.p'
15:05 hobakill UtahDave: even though i'm using gitfs? doesn't salt://win/repo/winrepo.p point to a 'hard' file on the server?
15:05 UtahDave no, it doesn't
15:05 babilen UtahDave: Sorry, what is the procedure for my own PRs that I could merge myself into saltstack-formulas? I just fixed two issues with the nagios formula, but assumed that somebody else would merge the PRs or what is the procedure/workflow there?
15:05 UtahDave gitfs abstracts away the fact that the files are in git.
15:05 UtahDave nothing else cares
15:06 UtahDave babilen: you've already opened PRs directly on those repos?
15:06 rome_ joined #salt
15:07 hobakill UtahDave: is that the same in the master file then? win_repo: and win_repo_mastercachefile need be .... what.... it can't be /srv/salt/win  because that doesn't exist on the server if you use gitfs
15:07 UtahDave hobakill: Now for your master configs...  You can probably have win_repo anywhere you want.  win_repo_mastercachefile needs to be in your file_roots. (that's what the minions point to)
15:08 UtahDave hobakill: good point.  I haven't used the windows repo with gitfs. gitfs came after the win repo was originally built
15:08 hobakill UtahDave: ok but again, that location doesn't exist
15:08 UtahDave Let me ask someone who has worked more with the gitfs internals
15:08 UtahDave just a minute
15:09 Musurp Hey, I have a question about output from "salt-call --local". Anyone seen any requests (or know of a setting) for a more minimal output thats between "INFO" and "WARNING". Something that kind of goes down the line of "Running state xyz".
15:09 Musurp I find "INFO" to be too spammy, but would like something more simplistic for things like vagrant box.
15:10 babilen UtahDave: Yes (e.g. https://github.com/saltstack-formulas/nagios-formula/pull/8 ) -- I could merge them myself, but I simply wanted to clarify the workflow.
15:10 hobakill thanks UtahDave
15:10 kingel joined #salt
15:10 Supermathie Musurp, notice is between warning and info
15:10 UtahDave hobakill: starting a conversation right now. I'll let you know what I find out
15:10 UtahDave babilen: just a sec
15:10 babilen sure
15:13 Musurp salt-call: error: option -l: invalid choice: 'notice' (choose from 'info', 'all', 'critical', 'trace', 'garbage', 'error', 'debug', 'warning', 'quiet')
15:13 XenophonF brb all
15:13 Supermathie oh I thought that followed the standard syslog levels
15:13 errr UtahDave: Im working on porting the vmware driver from pysphere to pyvmomi (the lib VMware open sourced and supports) and Im curious about the vmware config allowed. Are these all really valid ways of defining the url to vsphere? 10.1.1.1 10.1.1.1:443 https://10.1.1.1:443 https://10.1.1.1:443/sdk 10.1.1.1:443/sdk
15:14 Musurp Yeah i would have hoped for the same, notice would have been perfect
15:14 aparsons joined #salt
15:15 ndrei joined #salt
15:19 kingel joined #salt
15:20 UtahDave babilen: sorry nobody noticed those pull reqs.  We generally get notified of them and act on them. must have slipped through the cracks.
15:21 UtahDave babilen: generally we like a second set of eyes to go over a pull req, even for SaltStack employees.
15:21 babilen UtahDave: No, I just (some minutes ago) opened them. I am a contributor on saltstack-formulas and could merge them myself (I could even push those changes directly) -- Just wasn't sure about the proper etiquette
15:21 babilen That is what I figured.
15:21 thayne joined #salt
15:22 babilen So pushing to my own repo and opening the PRs (and then waiting for another member to merge them) is correct then?
15:22 UtahDave babilen: gotcha.  If it's something really simple, like a docstring spelling fix, then it's probably ok to just merge yourself, but in general we appreciate letting someone else look it over and actually do the merge.
15:23 UtahDave babilen: Yeah, that would be the best.
15:23 babilen +1 -- they aren't rocket-surgery, but that is what I figured. Thanks!
15:24 UtahDave Thank you! we really appreciate your help, babilen
15:24 UtahDave errr: That's what we've found working with several different enterprise customers. It seems that the url changes depending on how the install was done.
15:24 UtahDave errr: I'm very interested in your work with pyvmomi!
15:25 UtahDave hobakill: OK, I think I have a couple of options for you.  You ready?
15:25 hobakill UtahDave: you bet
15:26 UtahDave OK, so I think you have 2 options really.
15:27 UtahDave 1. Enable roots as your second fileserver backend. So then all your other stuff will still be in git, but your cache file can be on the salt master's file system and still be available somewhere at salt://win/repo/wherever/winrepo.p
15:27 icebourg left #salt
15:27 errr UtahDave: cool I just wanted to make sure, Ill make sure to maintain backwards compat then. Not sure if you remember me or not. Im Michael Rice, we met at the Rackspace hack-a-thon I was there most of that with Jordan Rinke that day. Ill be sure to keep you posted.
15:28 UtahDave 2. Use the winrepo runners to create the winrepo.p cache file as normal, but then take that winrepo.p file and commit it to your git repo such that salt://win/repo/whatever/winrepo.p is valid for the minion
15:28 UtahDave hobakill: basically, all the minion cares about is that it can reach winrepo.p from somewhere on salt://
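For reference, the settings being discussed and their stock values look roughly like this (paths are the documented defaults, not necessarily hobakill's layout):

    # /etc/salt/master
    win_repo: /srv/salt/win/repo
    win_repo_mastercachefile: /srv/salt/win/repo/winrepo.p   # must live under file_roots

    # /etc/salt/minion
    win_repo_cachefile: 'salt://win/repo/winrepo.p'          # how minions fetch the cache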
15:29 UtahDave errr: Oh, yeah! I do remember!
15:29 UtahDave errr: Glad to have your help porting to that driver!
15:30 rome joined #salt
15:30 hobakill UtahDave: what does salt:// actually represent, anything? if i want to keep it in /etc/salt/win/repos would i set it to salt://etc/salt/win/repos?
15:31 pdayton joined #salt
15:32 UtahDave when the salt-minion goes to pull a file from the salt master, it always comes from somewhere like salt://a/path
15:32 UtahDave a/path represents a directory or file in the Salt Master's file roots
15:32 UtahDave file_roots by default is  /srv/salt/
15:32 JordanTesting What are you breaking now errr ?
15:33 aparsons joined #salt
15:33 hobakill UtahDave: yeah. ok. i'll customize it. thanks for the help.
15:33 UtahDave so salt://a/path/to/file.txt    will pull the file from  /srv/salt/a/path/to/file.txt    on the salt master
15:34 wendall911 joined #salt
15:34 UtahDave errr: I'm pretty sure you have people on staff with more vmware expertise than we do.  Do you know why the url to vsphere would change?
15:35 dabb joined #salt
15:35 ajprog_laptop joined #salt
15:35 vejdmn1 joined #salt
15:36 hobakill UtahDave: last question. what should be file_roots be called... base?
15:36 hobakill UtahDave: won't that mess up my gitfs?
15:36 UtahDave I'm not sure what you mean by your first question
15:37 hobakill file_roots:
15:37 hobakill base:
15:37 hobakill - /etc/salt/
15:37 UtahDave hobakill: to your second question, you can use gitfs and roots at the same time
15:37 hobakill or should i rename "base" to something like "winrepo"
15:37 scarcry joined #salt
15:37 UtahDave whoa, don't make /etc/salt/ your file_roots    you're going to make all your config files available to your minions
15:38 hobakill UtahDave: ok - /srv/salt..... that's fine.
15:38 UtahDave hobakill: http://pastebin.com/kJVEbyFE
15:39 UtahDave put that in your master config. That will allow gitfs and the regular file based fileserver, with a preference for files in gitfs
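The paste is gone, but option 1 presumably amounts to something like this (a sketch; the git remote is illustrative):

    fileserver_backend:
      - git     # gitfs checked first
      - roots   # then the local file_roots on the master

    gitfs_remotes:
      - git@git.example.com:salt/states.git

    file_roots:
      base:
        - /srv/salt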
15:39 hobakill UtahDave: yep. that that much. i'm just wondering if the 'base' naming convention makes a difference
15:39 dabb is it possible to reference a nodegroup inside a nodegroup definition, like default: '* and not nodegroup:testgroup1' ?
15:39 UtahDave hobakill: that's just defining where a specific environment pulls its files from
15:40 hobakill UtahDave: ok i'll take this knowledge and see what happens. thanks again.
15:40 rome joined #salt
15:41 UtahDave you're welcome, hobakill!
15:42 errr UtahDave: its not so much the url as it is the lib that handles it. The url to the actual vsphere sdk is always https://something/sdk where https could be http. In pyVmomi we construct the url in the lib so you pass in just the host, port, scheme then we take care of the rest.
15:42 UtahDave dabb: the nodegroup list uses compound matching to create the nodegroup. I'm not sure if the compound matcher supports nodegroups or not
15:43 dabb UtahDave: thank you
15:43 UtahDave errr: ok, I see.
15:43 UtahDave dabb: you're welcome. Go ahead and try it and see if it works.
15:44 ndrei joined #salt
15:44 errr UtahDave: the way pyVmomi works is similar to their perl and java libs. So you pass in something like connection(host='10.1.1.1',port=443,user='root',passwd='vmware')
15:44 justyns joined #salt
15:44 justyns joined #salt
15:45 Ozack joined #salt
15:45 justyns joined #salt
15:46 aparsons joined #salt
15:48 Ahlee so i have a pillar['value'] that may or may not be defined.  Is there a way to check for definition without specifying a default?
15:48 debian112 Is it a problem to run pillars inside of the following: /srv/salt/base/pillar?  I am not worried about sensitive information.
15:48 UtahDave {% if 'value' in pillar %}
15:48 Ahlee i.e. I don't want to salt['pillar.get']('value'), '' as I don't want the ''
15:48 Ahlee ah
15:48 UtahDave debian112: that should be fine
15:48 Ahlee thanks UtahDave, so obvious it hurts
15:49 UtahDave :)  you're welcome.
15:49 debian112 @UtahDave: thanks
15:50 kingel joined #salt
15:50 halfss joined #salt
15:50 tligda joined #salt
15:50 UtahDave debian112: you're welcome!
15:55 hobakill UtahDave: sadly still no dice. pkg.available_version isn't returning a valid package in my repo
15:56 catpig joined #salt
15:56 XenophonF back
15:56 pdayton joined #salt
15:57 debian112 Is there a way to tell a salt state to use every *.sls in a directory: for example  base.nodes.*.sls?
15:58 hobakill UtahDave: master file: https://www.hobapolis.com/paste/?1261e3d0a6b34f4b#aA1/m/zR3zMScI1EwUCoR1isCvZDeDjp2xV84mGT/i4=
15:58 rome joined #salt
15:59 hobakill UtahDave: windows minion info: https://www.hobapolis.com/paste/?e957532bf517f732#t7MCyyKyNtTMPQ3Xrctc/xJ6KZ+J9XU4XhL1DJo3rKw=
16:00 spookah joined #salt
16:00 giantlock joined #salt
16:02 kingel joined #salt
16:02 musurp joined #salt
16:03 UtahDave debian112: you'd probably have to have an sls file that used some jinja to add all the .sls files in a directory to an include directive
16:03 troyready joined #salt
16:03 debian112 UtahDave leaning towards that; Thanks. I thought I would asked
16:04 UtahDave cool
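A rough sketch of the jinja-built include UtahDave suggests, assuming the `cp.list_states` execution function is available to the template:

    # base/nodes/init.sls
    include:
    {% for state in salt['cp.list_states']() %}
    {% if state.startswith('base.nodes.') and state != 'base.nodes.init' %}
      - {{ state }}
    {% endif %}
    {% endfor %}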
16:04 UtahDave hobakill: did you execute the winrepo runner functions to compile the winrepo.p cache file?
16:04 dvestal joined #salt
16:04 hobakill UtahDave: yes
16:05 UtahDave did those give you any errors?
16:05 hobakill nope
16:06 ndrei joined #salt
16:08 UtahDave hobakill: what's the output of      salt 'windowsminion' pkg.list_available
16:08 UtahDave also, did you run    salt 'windowsmachine' pkg.refresh_db     ?
16:08 quickdry21 joined #salt
16:10 aparsons joined #salt
16:11 hobakill UtahDave: 1 - nothing returns . 2 - yes.
16:12 possibilities joined #salt
16:13 kingel joined #salt
16:15 KyleG joined #salt
16:15 KyleG joined #salt
16:15 UtahDave hobakill: sorry for asking obvious questions. I'm just trying to make sure we didn't skip anything
16:15 UtahDave did you restart the salt-master service after modifying the config file?  and the salt-minion service after modifying the config file?
16:16 hobakill mmm. likely yes but i can do that again UtahDave
16:16 hobakill UtahDave: can i start the minions from the master on windows?
16:17 juicer2 joined #salt
16:17 UtahDave hobakill: what version of the minion are you using?
16:17 aparsons_ joined #salt
16:18 hobakill UtahDave: 2014.1.10 master and a mix enviro of 1.7 and 1.10 on the windows minions
16:18 Ahlee hrm.  No joy on that UtahDave, TypeError: argument of type 'StrictUndefined' is not iterable
16:18 Ahlee https://gist.github.com/jalons/f552fb25ce858807c1da
16:18 pssblts joined #salt
16:19 UtahDave hm. probably not. we fairly recently improved the service state to allow the minion to restart itself on windows
16:19 Ahlee guess i'll try {% set %} and is defined
16:19 UtahDave you'll have to send a cmd.run 'Restart-Service salt-minion' shell=powershell
16:19 hobakill i'm on the box itself. it's not tricky do it manually ATM UtahDave
16:20 chitown have a question about pkgrepo....
16:20 chitown i have been messing with a state so some boxes have a /etc/apt/sources.list.d/foo.list file
16:20 UtahDave ahlee try   {% if 'jiraversion' in pillar['deploy_pillar'] %}
16:20 chitown BUT... i forgot to add the key
16:20 UtahDave %}
16:20 gurpgork joined #salt
16:21 hobakill UtahDave: regardless - the restarting of services didn't do anything.
16:21 chitown i have moved that to pkgrepo.managed, but it doesnt run if foo.list exists
16:21 Ahlee thought i had tried that iteration
16:21 chitown it doesnt detect that the key isnt there, it just looks for the file
16:21 Ahlee let me try again.
16:21 Ahlee that'd make sense
16:21 Ahlee since pillar is just just a dict, and in {} should work with jinja
16:22 Ahlee That did it
16:22 Ahlee i love being dumb
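The check that finally worked for Ahlee boils down to roughly this (the surrounding state is illustrative; the pillar keys are the ones from the discussion):

    {% if 'deploy_pillar' in pillar and 'jiraversion' in pillar['deploy_pillar'] %}
    report-jira-version:
      cmd.run:
        - name: echo "jiraversion is {{ pillar['deploy_pillar']['jiraversion'] }}"
    {% endif %}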
16:22 bob_dobbs joined #salt
16:22 hobakill UtahDave: FWIW on the minion, C:\salt\var\cache\salt\minion\files\win\ doesn't show that repo.p file at all
16:24 snuffeluffegus joined #salt
16:25 hobakill UtahDave: it does, however, show my gitfs file(s) as i would expect
16:25 juicer2 Is there an open bug or any info on the 'salt going to sleep issue' with salt-run manage.status returning hosts as down when they're not? (subsequent runs of manage-status will eventually show all hosts as up)
16:26 audreyr joined #salt
16:26 aparsons joined #salt
16:27 nitti joined #salt
16:29 UtahDave hobakill: ok, let me set up a test environment here real quick
16:29 hobakill UtahDave: sound groovy
16:30 aparsons joined #salt
16:30 UtahDave juicer2: try changing your   random_reauth_delay to 10  in your minion config and restarting the minion service
16:31 marco_en_voyage joined #salt
16:31 perfectsine_ joined #salt
16:31 juicer2 UtahDave: thx, will try that
16:33 schimmy joined #salt
16:34 Setsuna666 joined #salt
16:36 thayne joined #salt
16:36 linjan joined #salt
16:36 marco_en_voyage joined #salt
16:42 forrest joined #salt
16:43 metaphore joined #salt
16:45 aparsons joined #salt
16:45 halfss joined #salt
16:45 schimmy joined #salt
16:46 rap424 joined #salt
16:46 intellix joined #salt
16:47 hobakill UtahDave: ok weird. i found the winrepo.p file on the minion but it put it somewhere somewhat unexpectedly
16:47 schimmy1 joined #salt
16:48 hobakill UtahDave: C:\salt\var\cache\salt\minion\files\base\win\repo  ... but nothing else lives in that directory other than the .p file.
16:49 perfectsine joined #salt
16:53 notpeter_ joined #salt
16:55 shalkie joined #salt
16:58 martoss joined #salt
16:58 shaggy_surfer joined #salt
17:01 sectionme joined #salt
17:02 melinath joined #salt
17:02 jonbrefe joined #salt
17:03 rap424 joined #salt
17:03 hobakill i think i killed UtahDave
17:03 ndrei joined #salt
17:04 jonbrefe Quick question. I have several environments set in the top.sls. How can I execute something using cmd.run using the environment set in that file? I know the variable name now is saltenv but how can it be defined?
17:04 forrest hobakill, nah, he probably had to leave his desk, he's like a ninja
17:08 deepz88 joined #salt
17:10 Ahlee wait, the move to saltenv changes env= specified on the command line?
17:10 Ahlee i thought saltenv was just internal representation?
17:11 KevinMGranger jonbrefe: not too sure but can't you just jinja it into the cmd?
17:11 possibilities joined #salt
17:12 whytewolf joined #salt
17:12 chrisjones joined #salt
17:12 aparsons joined #salt
17:13 aparsons_ joined #salt
17:17 jonbrefe KevinMGranger. I probably don't have a lot of jinja skills. I did an exercise and it gave me something I didn't expect.
17:17 justyns joined #salt
17:17 jonbrefe salt '*' cmd.run  template=jinja "echo  {{saltenv}}"
17:17 jonbrefe and every single server returned base and not the right value set on the top.sls
17:18 v0rtex based on https://github.com/saltstack/salt/issues/8180 and https://github.com/saltstack/salt/issues/10054 I'm assuming to get a useful return code for now I will need to run output through grep such as: salt-call mysql.db_exists 'oauth' --out=txt | grep 'True'
17:18 v0rtex or is there something better I should do?
17:19 v0rtex in my case I'm using it in the 'unless' parameter of a cmd.script state
17:19 catpigger joined #salt
17:20 forrest v0rtex, you will have to, unfortunately there has't been a ton of work done on making the output cleaner and configurable
17:20 v0rtex k, cool - just making sure there isn't something obvious that I've missed
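In state form, v0rtex's workaround looks roughly like this (script path follows the example in the discussion; the grep keeps the exit code of the unless check meaningful):

    create-oauth-db:
      cmd.script:
        - source: salt://mysql/create_oauth_db.sh
        - unless: salt-call mysql.db_exists oauth --out=txt | grep -q True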
17:22 drawks harumph, I wish jenkins was faster at testing PRs
17:23 forrest drawks, we all do
17:23 * drawks waits patiently for tests to pass
17:23 forrest drawks, but jenkins is just slow in general :
17:23 catpiggest joined #salt
17:24 TheoSLC joined #salt
17:24 thayne joined #salt
17:24 TheoSLC Greetings
17:24 drawks yeah I'm not a big fan of the interface either. too many places to click :) We've been using travis over on the graphite/carbon project and I much prefer it
17:28 TheoSLC I've been upgrading salt from the Fedora EPEL repo.  But it seems each release from EPEL has several bugs.  Is it advisable to use the EPEL release of Salt?
17:28 catpig joined #salt
17:29 forrest TheoSLC, every release has some bugs, the salt guys work hard to fix them, but there's a lot of code that still doesn't have tests
17:29 forrest TheoSLC, so sometimes shit breaks between releases unfortunately.
17:29 forrest TheoSLC, that's why the current release is taking so long, they are doing a lot more testing than previous versions
17:29 XenophonF EPEL isn't so great, either
17:30 XenophonF i've run into a number of issues with packages in it
17:30 octarine joined #salt
17:30 iggy it depends
17:30 XenophonF the version of Salt in EPEL seems to be OK, though
17:30 TheoSLC Is there a better repo for Salt, or should I go bleeding edge?
17:30 drawks 390 modules and states and only 51 unit test files
17:31 drawks seems coverage has a LONGGGGGGG way to go
17:31 rome joined #salt
17:31 forrest TheoSLC, that package will be the same one regardless of the repo you get it from, terminalmage packages up all the RPM stuff. You can go bleeding edge if you want to try TheoSLC but it can be risky.
17:31 drawks I'm familiar with this situation from graphite, but this codebase is much larger and varied
17:33 jkaye joined #salt
17:33 forrest I agree, and so will the whole Salt team, they've actually been building out the QA team over the past few months very slowly
17:33 shaggy_surfer joined #salt
17:33 forrest so it is becoming a much more important priority
17:34 nahamu TheoSLC: in terms of "bleeding edge" you could go with the 2014.7 branch that's being stabilized, or there's always the develop branch which is probably far more "bloody"...
17:34 TheoSLC that's good
17:35 catpigger joined #salt
17:35 forrest TheoSLC, sorry I don't have a better answer, I know it can be frustrating.
17:35 TheoSLC the latest from EPEL is 2014.1.10-4  that seems very old now, and that was only released a week ago.
17:39 catpiggest joined #salt
17:39 TheoSLC I see.. 2014.1 and 2014.7 are the current active branches.
17:39 TheoSLC so 2014.1 is current stable and 2014.7 is current development?
17:39 murrdoc joined #salt
17:41 oz_akan joined #salt
17:41 aparsons joined #salt
17:42 nahamu correct
17:42 forrest Ryan_Lane, why didn't you tweet this at splung?
17:42 nahamu I haven't watched develop closely, but in theory if anyone was working on new features not planned for 2014.7 they would be going into develop.
17:42 forrest *splunk
17:43 forrest TheoSLC, 2014.1.10 is the latest stable
17:43 Ryan_Lane I honestly think no one at splunk reads their twitter account
17:43 forrest TheoSLC, is it not in epel? Their approval process is so stupid
17:43 forrest Ryan_Lane, well, you already know they don't know how to read documentation
17:43 Ryan_Lane :D
17:44 TheoSLC no 2014.1.10-4 is the current EPEL release
17:44 shaggy_surfer joined #salt
17:44 aparsons joined #salt
17:45 murrdoc morning
17:46 Ryan_Lane TheoSLC: right, so it's 2014.1.10 and the 4th version of the package for that version
17:46 perfectsine joined #salt
17:48 TheoSLC Does anybody know how to fix this small problem?  https://github.com/saltstack/salt/issues/15548
17:49 forrest TheoSLC, nope, but can you do salt --versions-report, and put that in the ticket, then tag @terminalmage.
17:49 TheoSLC forrest: sure
17:50 n8n joined #salt
17:50 murrdoc Ryan_Lane:  i may or may not have derailed your issue btw
17:50 Ryan_Lane heh
17:51 murrdoc all i was trying to say was that I would like to  have a blacklist for states
17:51 UtahDave hobakill: sorry, I had to jump on a conference call
17:51 aparsons joined #salt
17:51 jalaziz joined #salt
17:51 murrdoc and if Highstate was in the blacklist then blacklist all states
17:51 murrdoc my bad man
17:51 murrdoc feel free to course correct
17:51 nyx joined #salt
17:52 hobakill UtahDave: no problem
17:52 drawks ah damnit
17:53 drawks stupid python 2.6
17:54 UtahDave TheoSLC: I have a state for that.  Just a second
17:54 UtahDave TheoSLC: https://gist.github.com/UtahDave/eb64e806328e5ebab5b7
17:54 hobakill UtahDave: this is an example of my WebDeploy thing. https://www.hobapolis.com/paste/?508189e7210aa853#2oP8AELQepjqNYJ8dJ58IqY8f+rOeNTUtTAvoKY1zzE=
17:55 UtahDave TheoSLC: be warned though, this just suppresses the warning. It doesn't fix the underlying library that is nagging you to upgrade
17:55 davedash joined #salt
17:55 eunuchsocket left #salt
17:57 patrek joined #salt
17:58 drawks what's the general time between PR->pass->merge for this project?
17:58 forrest drawks, depends on the release
17:58 drawks I.E. should I expect a bug fix for the current rc that passes tests to be merged same day?
17:59 ndrei joined #salt
17:59 UtahDave drawks: the full test suite takes around 30 minutes to execute
17:59 t4ank joined #salt
17:59 forrest drawks, it would be merged into develop, and usually the same day.
17:59 drawks k cool
17:59 UtahDave drawks: it depends on how complicated your PR is and also the workload of those evaluating the PRs
17:59 perfectsine_ joined #salt
17:59 drawks py2.6 compatibility is the bane of my existence
18:00 forrest drawks, to put it in perspective, there's only 13 PRs right now
18:00 UtahDave but we do try to be as responsive as possible to those PRs. They're often merged the same day
18:00 forrest drawks, UtahDave is being modest :P
18:00 drawks 2.7 was released in Jul 2010, you'd think the world could manage an upgrade by now ;)
18:00 forrest drawks, are you on cent 6?
18:00 t4ank is there a way to run a salt command only on nodes where another command comes back true? I.e. I want to run cmd.run "cat /etc/passwd | grep evi", and if that succeeds then run another command
18:00 drawks debian wheezy
18:01 TheoSLC UtahDave: thanks, but it scares me to modify the platform like that.  I've read that python warnings can be controlled.  could we make this a master/minion config setting?   I can blame Amazon for the first warning, but the second warning is from salt's use of RandomPool.
18:01 aparsons joined #salt
18:02 perfectsine joined #salt
18:03 deepz88 joined #salt
18:03 TheoSLC UtahDave: or perhaps not, I don't see that RandomPool is imported in the salt code.  I'll just work on fixing the platform.
18:04 hobakill UtahDave: https://www.hobapolis.com/paste/?a49e67782b1b9e89#8uWw+C5UixiY+aXFWv+ECKFrXULBSciNau9a3Dm6lOA= is interesting in that clearly the winrepo.p is moving over as expected.
18:05 UtahDave hobakill: yeah. I just got my test server spun up.  Just a moment
18:05 aparsons joined #salt
18:06 aparsons_ joined #salt
18:10 forrest joined #salt
18:11 rypeck joined #salt
18:11 cpowell joined #salt
18:11 druonysus joined #salt
18:12 jkaye joined #salt
18:12 druonysus joined #salt
18:12 druonysus joined #salt
18:13 TTimo joined #salt
18:15 UtahDave hobakill: OK, with the default settings, I was just able to install Firefox and vlc, no problem.  Let me try modifying to look more like your setup
18:15 mr_chris UtahDave, I have some salt formulas I would like to submit for inclusion. On my GitHub account, should I structure it as salt-formulas/{formula1, formula2}, or should each formula be its own repo?
18:15 UtahDave each formula is its own repo.
18:15 hobakill UtahDave: well i reverted to default settings. :(
18:16 mr_chris Thanks.
18:16 forrest mr_chris, each formula should be its own repo like the other ones in https://github.com/saltstack-formulas
18:16 mr_chris forrest, OK. Thanks.
18:16 UtahDave thank you, mr_chris!  Looking forward to seeing what you've been working on
18:16 forrest mr_chris, give me a heads up when you need a repo created to fork off of and I can make it
18:16 marco_en_voyage joined #salt
18:16 forrest or you can talk to UtahDave, but that slacker has conference calls and (rental) sports cars to drive
18:17 mr_chris UtahDave, Will do. It's a different way of managing users and grains. I'm looking forward to sharing it.
18:17 UtahDave cool
18:17 mr_chris forrest, Is UtahDave having conference calls while driving said sports cars?
18:17 hobakill WTF. UtahDave i was able to pkg.install WebDeploy but i still don't see it under pkg.available
18:18 forrest mr_chris, I don't think so, he's a responsible driver
18:18 UtahDave hobakill: try pkg.available_version WebDeploy
18:19 hobakill WTF. UtahDave i was able to pkg.install WebDeploy but i still don't see it under pkg.available
18:19 hobakill oops
18:19 UtahDave salt windowsminion sys.list_functions pkg    will give you the pkg functions that are available on the platform
18:19 hobakill i literally have no idea why it's working now UtahDave
18:20 UtahDave does it show up with    salt windowsminion pkg.available_version WebDeploy    ?
18:21 hobakill UtahDave: yes it does but my other test repo, salt-minion, doesn't show all the versions, only 2014.1.10 and i have a 2014.1.7 on top of that in the init.sls file
18:23 UtahDave hobakill: Yeah, pkg.get_repo_data is only showing me the newest version of each piece of software
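For reference, a winrepo package definition that advertises more than one version looks roughly like the sketch below. The field names follow the documented Windows repo format, but the URLs, flags, and versions here are made up for illustration and are not taken from hobakill's actual init.sls:

    salt-minion:
      '2014.1.10':
        full_name: 'Salt Minion 2014.1.10'
        installer: 'http://example.com/Salt-Minion-2014.1.10-x86-Setup.exe'
        install_flags: '/S'
        uninstaller: 'http://example.com/Salt-Minion-2014.1.10-x86-Setup.exe'
        uninstall_flags: '/S'
        msiexec: False
        reboot: False
      '2014.1.7':
        full_name: 'Salt Minion 2014.1.7'
        installer: 'http://example.com/Salt-Minion-2014.1.7-x86-Setup.exe'
        install_flags: '/S'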
18:24 hobakill UtahDave: i really have no idea why it's working now but maybe i should just back away slowly......
18:24 tempspace Has anybody heard if somebody has started working on a consul external pillar provider yet? I know etcd is set to go out in the next release
18:24 UtahDave tempspace: I haven't heard of one.
18:25 UtahDave hobakill: :) I'm sorry this has been frustrating.  Polishing the windows repo stuff is on my long list of projects
18:26 t4ank is there a way to run, via the salt command line, a command on nodes based on the success of a first command, e.g. salt cmd.run "cat /etc/passwd | grep evid", and if that is true then run something else?
18:26 hobakill UtahDave: well thank you so much for your help. tell your bosses you need more money and resources to help you out! salt's windows support is why we are probably choosing salt over, say, puppet.
18:26 rome joined #salt
18:27 UtahDave hobakill: :)  I'll definitely do that.  We're actually going to be hiring some more windows devs over the next few months.
18:28 UtahDave t4ank: you'd probably want  salt cmd.retcode 'your command' && salt  cmd.run 'the next command'
18:28 UtahDave t4ank: no wait, that's not quite right
18:28 UtahDave cmd.retcode returns the command's return code.
18:29 t4ank I only want to run the second command on the nodes the first command was sucessfull on.
18:30 marco_en_voyage joined #salt
18:30 UtahDave I think there's a --retcode-passthru option or something so you can use && to only run if the return code is happy.
18:30 UtahDave let me find that
18:31 QuinnyPig Hmm, specifying a version string to pkg.installed *should* force a downgrade, right?
18:31 marco_en_voyage left #salt
18:32 QuinnyPig (Presuming of course that the lower version is still present on the yum repository, that is...)
18:32 UtahDave QuinnyPig: only if the package manager supports it.
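As a sketch, pinning a version in a state looks like the following (the package name and version string are illustrative). Whether asking for an older version actually triggers a downgrade depends on the package manager, as UtahDave notes, and on that older version still being present in the repository:

    nginx:
      pkg.installed:
        - version: 1.4.7-1.el6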
18:33 aparsons joined #salt
18:33 halfss joined #salt
18:36 UtahDave t4ank: right now, your best bet would be to use  cmd.run_all with --out json    and figure out in your script if they're all successful.
18:36 UtahDave I can go into more detail in a little bit.   I have to run to a meeting with the big boss.
18:36 t4ank ok thanks
18:36 jayfk joined #salt
18:37 t4ank new to salt. we have it set up but only been using it very minimally and trying to not destroy thousands of nodes.
18:37 QuinnyPig UtahDave: Which yum does, last I checked.
18:37 hobakill UtahDave: remember. tell big boss your customers want more Windows love! :)
18:38 rome joined #salt
18:41 aparsons joined #salt
18:42 mr_chris UtahDave or forrest. Is the LICENSE file required or will you add that when it's forked?
18:42 forrest mr_chris, I'll have to add one when I generate the repo
18:42 mr_chris I won't bother, then.
18:42 mr_chris With that file I mean.
18:42 murrdoc joined #salt
18:42 forrest mr_chris, then if you can just copy it from another formula repo for yours and overwrite, that would be great
18:42 ajolo joined #salt
18:42 mr_chris OK.
18:42 forrest thanks
18:43 forrest I can add it later, just adds more work for me
18:43 ajolo joined #salt
18:46 aparsons joined #salt
18:50 shoma joined #salt
18:51 cpowell joined #salt
18:54 rome joined #salt
18:59 nyx joined #salt
19:00 melinath joined #salt
19:03 n8n joined #salt
19:06 n8n_ joined #salt
19:07 sectionme joined #salt
19:10 younqcass joined #salt
19:11 CatPlusPlus joined #salt
19:12 mr_chris UtahDave, forrest Here's the first one. https://github.com/llamallama/reverse-grains-formula
19:12 mr_chris This is my first time making a formula. I'm not sure if I followed the conventions correctly.
19:13 mr_chris The problem it is solving:
19:13 mr_chris Where I work, we started with a basic home rolled pillar based grain setup. The pillar used jinja to specify what went where. So a host/grain format.
19:14 mr_chris With this new setup, you can specify them as grain/value/compoundMatch. So it's a grain/host format.
19:14 debian112 Anyone has an example of storing different pillar information on two nodes? Then calling that data via a state.
19:14 mr_chris So if you want to see which servers have the role "database" you just find database and look at the matches.
19:18 mr_chris debian112, In the states you call the pillar data with {{pillar['pillarName']['pillarData']}}
19:19 mr_chris debian112, To put it on different node, you create the pillars then apply them via the pillar top.sls file.
19:19 mr_chris *nodes
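Put together, the pattern mr_chris is describing looks roughly like this; the file names, targets, and values are made up for illustration:

    # /srv/pillar/dns.sls
    dns:
      primary: 192.168.100.10

    # /srv/pillar/top.sls
    base:
      'web*':
        - dns

    # in a state or template on the matched minions
    {{ pillar['dns']['primary'] }}
    {{ salt['pillar.get']('dns:primary', '8.8.8.8') }}   # same lookup, with a fallback default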
19:19 picker joined #salt
19:20 perfectsine joined #salt
19:21 mr_chris UtahDave, forrest I'm open to critique with that formula.
19:24 debian112 mr_chris: can I see any example if you have something?
19:24 mr_chris debian112, I learned it from the examples in http://docs.saltstack.com/en/latest/topics/tutorials/pillar.html
19:25 debian112 mr_chris: ok thanks
19:30 perfectsine joined #salt
19:42 catpig joined #salt
19:43 possibilities joined #salt
19:44 t4ank when you run salt from the command line is there a way to only display the nodes that the command executed true?
19:44 t4ank I want a list of all nodes where say where a file exists
19:48 murrdoc joined #salt
19:48 aparsons joined #salt
19:49 iggy t4ank: --out=txt and grep?
19:51 t4ank thanks - no way to display them on the command line only
19:51 bhosmer joined #salt
19:53 rap424 joined #salt
19:56 iggy not that I've seen
19:56 iggy who is gravyboat?
19:57 Ryan_Lane joined #salt
19:57 melinath joined #salt
19:57 ckao joined #salt
19:59 oz_akan joined #salt
20:00 mr_chris UtahDave, forrest Here's the second formula I want to submit. https://github.com/llamallama/reverse-users-formula
20:00 mr_chris I know it partially duplicates functionality of the existing users formula, but the existing one didn't fit a very specific need my company has.
20:01 mr_chris We have a lot of users and matching in Jinja was getting unwieldy. This allows us to define hosts under users rather than users under hosts.
20:02 QuinnyPig How would I invoke pkg.latest from the command line?
20:02 murrdoc mr_chris:  does this match on pillars too
20:03 mr_chris murrdoc, Yes. Compound matches.
20:03 murrdoc oooooh nice
20:03 mr_chris So for a single host you could do "G@id:example.com"
20:03 lionel joined #salt
20:03 mr_chris You can even do AND and OR statements
20:03 murrdoc why did you call it reverse-users ?
20:04 murrdoc cos of the host/user and user/host thing ?
20:04 mr_chris Because it's defined in opposite order compared to https://github.com/saltstack-formulas/users-formula
20:04 mr_chris Yes.
20:04 aparsons joined #salt
20:05 mr_chris You can also do things like "G@roles:web and G@roles:database" for matches.
20:05 mr_chris Or even regex via E@
20:06 mr_chris It's just using __salt__["match.compound"]() to do the matching.
20:06 murrdoc yeah
20:07 murrdoc no sudo tho
20:07 mr_chris murrdoc, Not yet. I wanted it to do one thing and one thing well.
20:07 mr_chris Might add sudo later.
20:07 metaphore joined #salt
20:07 aparsons joined #salt
20:09 wangofett So... because salt isn't using the builtin zip/tarfile modules (yet), and I don't want to download yet another program, I wrote a module to do the unzipping... however, I want it to be dependent on another state but I can't get it to work right
20:09 UtahDave QuinnyPig: salt 'minionid' state.single fun=pkg.latest name=vim
20:09 QuinnyPig UtahDave: Ah, thank you.
20:09 QuinnyPig Never had to do that before. :-)
20:09 wangofett that is, I'm doing `file.managed` to download a file. I only want to unzip the file if the `file.managed` state actually changes something
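The usual way to express "only run this when file.managed actually changed something" in that era of Salt is cmd.wait with a watch requisite (onchanges exists in later releases). A minimal sketch with made-up paths, using plain unzip via cmd.wait rather than wangofett's custom module:

    /tmp/app.zip:
      file.managed:
        - source: salt://files/app.zip

    unpack-app:
      cmd.wait:
        - name: unzip -o /tmp/app.zip -d /opt/app
        - watch:
          - file: /tmp/app.zip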
20:10 aparsons joined #salt
20:10 QuinnyPig UtahDave: What's the salt-call equivalent?
20:10 forrest mr_chris, oh hmm you're using the pyrenderer. whiteinge are you around?
20:11 QuinnyPig Duh. salt-call state.single fun=pkg.latest name=vim
20:11 UtahDave salt-call state.single fun=pkg.latest name=vim
20:11 UtahDave doh!
20:12 QuinnyPig UtahDave: I have a fascinating bug report for oyu. Let me gist it.
20:12 UtahDave cool
20:13 murrdoc joined #salt
20:13 QuinnyPig Okay, now it's nondeterministic. Some runs work, some fail. Hmm.
20:13 QuinnyPig WTF have they done here? :-)
20:13 jkaye joined #salt
20:15 forrest mr_chris, I want to double check with whiteinge regarding his opinion on those two formulas. I don't know if he's ok with using pyrenderer stuff., should be, but I want to be sure.
20:21 mr_chris forrest, Understood.
20:21 mr_chris I chose that way because the logic in Jinja would have been unreadable.
20:21 halfss joined #salt
20:21 forrest mr_chris, totally understandable
20:21 delinquentme joined #salt
20:21 delinquentme salt $CONTROL_MINION cp.push /etc/munge/munge.key <<< why doesn't this run in a shell script
20:21 delinquentme but runs perfectly fine in actual bash?
20:22 forrest delinquentme, sometimes shell scripts work differently than on the command line, is it giving a salt error?
20:22 murrdoc joined #salt
20:22 delinquentme additionally any other command I run with $CONTROL_MINION ... executes just fine such as: salt $CONTROL_MINION cmd.run 'sudo /usr/sbin/create-munge-key'
20:22 delinquentme forrest, nah it simply quietly doesn't return any information / feedback from the node
20:23 forrest delinquentme, weird, you're running that script as the root user?
20:23 forrest or a sudo'd user
20:26 Ryan_Lane joined #salt
20:27 ajolo joined #salt
20:28 delinquentme forrest correct. logged in as root and the file running that script has chmod 0755
20:28 patarr joined #salt
20:29 patarr joined #salt
20:29 forrest delinquentme, weird
20:29 forrest delinquentme, can you try to add -l debug to that salt command?
20:29 forrest see if anything gets logged
20:30 delinquentme and ive verified that the "file_recv: True" is on /etc/salt/master config file
20:30 delinquentme wait ! is there a way to get the location of the master config file?
20:30 delinquentme specifically the one currently being used by salt?
20:32 cpowell joined #salt
20:35 delinquentme forrest, salt $CONTROL_MINION cp.push /etc/munge/munge.key doesn't seem to like an additional -l in there
20:35 delinquentme salt $CONTROL_MINION -l cp.push /etc/munge/munge.key  ....Nor....  salt $CONTROL_MINION cp.push /etc/munge/munge.key -l
20:35 gothix joined #salt
20:36 drawks https://github.com/saltstack/salt/pull/15549 any chance someone can tell me from the jenkins output what exactly failed here?
20:36 forrest it should be at the end -l debug
20:36 drawks it looks like it failed in the lint pass, but doesn't appear to indicate why or where
20:37 possibilities joined #salt
20:37 forrest drawks, http://jenkins.saltstack.com/job/salt-pr-lint/6981//console
20:37 forrest drawks, past that, I don't know.
20:37 rome joined #salt
20:37 aparsons joined #salt
20:38 blarghmatey joined #salt
20:38 rap424 joined #salt
20:39 Rasathus joined #salt
20:39 UtahDave drawks: something weird happened. I'm going to close and reopen the PR to kick off the test again
20:39 drawks k
20:42 possibilities joined #salt
20:44 chrisjones joined #salt
20:45 ndrei joined #salt
20:48 aparsons joined #salt
20:49 Rasathus Hi, Im looking for a few suggestions as to the best approach for assigning pillar data dynamically.
20:49 Rasathus Somehow I would like to make a dynamic list of minion roles available to salt, and then use this to control which pillar data is available to each minion.
20:50 Rasathus Initially my thought was to distribute the role list via pillar, but I discovered it wasn't possible to specify an evaluation order, which results in jinja attempting to evaluate None types. I don't mind writing modules if necessary, but I could do with some advice on the best approach.
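One common way to get the effect Rasathus describes, sketched with illustrative names: publish the roles as a custom grain on each minion, then match on that grain in the pillar top file so the pillar data is only compiled for minions that hold the role.

    # /etc/salt/grains on the minion
    roles:
      - web
      - database

    # /srv/pillar/top.sls
    base:
      'roles:web':
        - match: grain
        - webserver
      'roles:database':
        - match: grain
        - database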
20:52 delinquentme forrest, done! [DEBUG   ] get_returns for jid 20140905205157332472 sent to set(['woe-01']) will timeout at 20:52:02
20:52 delinquentme [INFO    ] jid 20140905205157332472 minions set(['woe-01']) did not return in time
20:54 forrest delinquentme, oh weird, add -t 60
20:54 forrest to your command
20:57 jalaziz joined #salt
20:58 aparsons joined #salt
20:59 delinquentme forrest, i just changed all the timeouts to 10 seconds -- bad idea?
20:59 forrest delinquentme, not really, the timeout is just the time salt will wait to hear back from the minion before it drops you to the command line (job still runs)
21:02 shaggy_surfer joined #salt
21:02 aparsons joined #salt
21:06 rglen joined #salt
21:06 kballou joined #salt
21:08 n8n joined #salt
21:13 intellix joined #salt
21:15 quist joined #salt
21:16 KyleG joined #salt
21:16 KyleG joined #salt
21:16 quist left #salt
21:16 debian112 First time working with pillars here: any help would be great: http://paste.debian.net/119550/
21:17 oz_akan_ joined #salt
21:17 jab416171 joined #salt
21:17 quist joined #salt
21:24 jalaziz joined #salt
21:27 aparsons joined #salt
21:29 forrest debian112, can you sanitize and provide the output from your pillar?
21:29 debian112 forrest sure
21:32 perfectsine joined #salt
21:33 debian112 I changed to:  salt ['pillar.get'] and at least it runs now, but it's not returning the right info
21:33 forrest debian112, salt['pillar.get']('allbox:dns1', 'default')
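That lookup is normally used from a template rendered by a state; a minimal sketch, in which only the 'allbox:dns1' key comes from debian112's pillar and the file names and fallback value are illustrative:

    # state
    /etc/resolv.conf:
      file.managed:
        - source: salt://dns/resolv.conf.jinja
        - template: jinja

    # dns/resolv.conf.jinja
    nameserver {{ salt['pillar.get']('allbox:dns1', '8.8.8.8') }}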
21:36 debian112 forrest: http://paste.debian.net/119552/
21:36 forrest looks ok to me
21:36 aparsons joined #salt
21:37 debian112 when it runs it returns wrong info: http://paste.debian.net/119553/
21:37 delinquentme is a .saltrc file necessary? and I cant seem to find examples online
21:38 Zero3 joined #salt
21:38 forrest no
21:38 forrest delinquentme, ignore that error
21:38 forrest debian112, what info is wrong?
21:38 forrest oh it's not picking up the default
21:39 forrest actually, why aren't you just referencing the pillar data in the templated file?
21:39 forrest debian112, is that pillar file in /srv/salt/top.sls?
21:39 debian112 yeah: it should return: 192.168.100.20 for dns3
21:39 shaggy_surfer joined #salt
21:39 delinquentme any idea what the default timeout on $ salt-cloud -P -m cloud.map is?
21:40 debian112 forrest its here: /srv/salt/greentoads_staging/local_pillar/nodes/allbox_greentoads_net.sls
21:40 Zero3 Good evening salt people. Random question of the day: What could cause a minion to fail to cache the _states directory? The debug log says it's caching it, but the .py file within it is not being transferred to the cache folder and thus not used :(
21:41 debian112 pillar_roots: - /srv/salt/greentoads/local_pillar
21:42 debian112 I mean:  /srv/salt/greentoads_staging/local_pillar
21:44 chitown have a problem with apt pkg install
21:44 chitown [DEBUG   ] Results of YAML rendering:
21:44 chitown OrderedDict([('datadog_agent', OrderedDict([('pkg', ['installed', OrderedDict([('refresh', False)])])]))])
21:44 chitown [INFO    ] Executing state pkg.installed for datadog_agent
21:44 chitown [INFO    ] Executing command 'apt-get -q update' in directory '/home/craig'
21:44 chitown refresh = False
21:48 forrest debian112, but you have a top in that directory which is including the other file? Seems to me like the pillar just isn't being found
21:48 forrest chitown, what's the issue?
21:50 debian112 yes there is a top file: /srv/salt/greentoads_staging/local_pillar/top.sls
21:51 debian112 I will investigate more over the weekend, and report my finds on Monday
21:51 mrlesmithjr joined #salt
21:51 forrest debian112, ok, have a good one
21:52 mrlesmithjr can anyone here help me out with salt-cloud and vsphere setup? It would be much appreciated
21:53 jalaziz joined #salt
21:54 delinquentme [ERROR   ] Salt request timed out. If this error persists, worker_threads may need to be increased.
21:54 chitown forrest: you can see in the next line that it *IS* updating the pkg list
21:54 chitown not a huge deal... it just makes it butt slow :)
21:55 delinquentme I currently have worker threads set to .... 40 ... is this too many? I mean im still getting issues
21:56 mr_chris joined #salt
21:58 UtahDave delinquentme: try setting   random_reauth_delay: 10    in your /etc/salt/minion config
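For reference, the two settings being juggled here live in different config files (the values are just the ones mentioned in the conversation), and the minion needs a restart to pick up its change:

    # /etc/salt/master
    worker_threads: 40

    # /etc/salt/minion
    random_reauth_delay: 10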
21:58 UtahDave mrlesmithjr: it's been a while since I've used the vsphere driver, but I can try to help.  What problem are you running into?
21:59 mrlesmithjr Ah....@UtahDave :) Remember me?
21:59 mrlesmithjr LOL
22:00 murrdoc joined #salt
22:01 mrlesmithjr UtahDave: I have setup my cloud.providers.d/vsphere.conf with info - When I run salt-cloud I get the following......cloud provider alias was not loaded since 'vsphere.get_configured_provider()' could not be found
22:01 murrdoc joined #salt
22:01 UtahDave hm. what version of salt are you on?
22:02 mrlesmithjr I have installed pysphere
22:02 mrlesmithjr verified I can connect to vcenter using that
22:02 UtahDave mrlesmithjr: I think you confused me with Matt!  :)
22:02 mrlesmithjr Yeah I think so too
22:02 mrlesmithjr :)
22:02 murrdoc joined #salt
22:03 mrlesmithjr wondered that the other day as well
22:03 mrlesmithjr Just remembered him being from Utah :) LOL
22:03 UtahDave Yeah, his office is just down the hall from me.
22:04 smcquay joined #salt
22:04 mrlesmithjr salt 2014.1.10 (Hydrogen)
22:05 UtahDave I think vsphere isn't in that version yet.
22:05 UtahDave it will be part of the 2014.7 release
22:05 mrlesmithjr well that might make sense then
22:06 UtahDave there should be a better error, though!
22:06 TheoSLC joined #salt
22:06 smcquay_ joined #salt
22:06 UtahDave You might be able to just drop this file in your clouds directory and it might work:  https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/vsphere.py
22:07 mrlesmithjr let me try that
22:07 mr_chris joined #salt
22:08 blarghmatey joined #salt
22:08 mrlesmithjr put it under cloud.providers.d/ ?
22:09 halfss joined #salt
22:11 n8n joined #salt
22:12 murrdoc joined #salt
22:16 possibilities joined #salt
22:16 UtahDave mrlesmithjr: no, you'll have to find where salt is installed and drop it in the clouds directory with the other cloud drivers
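Roughly, that means placing the file next to the other cloud drivers inside the installed salt package. A sketch, assuming a standard system-wide Python install (the exact path varies by distro and install method):

    cd "$(python -c 'import os, salt.cloud.clouds; print(os.path.dirname(salt.cloud.clouds.__file__))')"
    curl -O https://raw.githubusercontent.com/saltstack/salt/develop/salt/cloud/clouds/vsphere.py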
22:17 murrdoc joined #salt
22:21 saltedpeanuts joined #salt
22:22 unstable joined #salt
22:22 rome joined #salt
22:23 unstable My saltmaster crashes once a week, and needs a restart. What is a good way to debug this? or profile what the problem is.
22:23 ramishra joined #salt
22:24 UtahDave unstable: what version of salt? what os?
22:25 UtahDave hey, unstable. I have to head out.  Could you open an issue on github with all the details of your situation?  salt-master --versions-report   salt 'minionid' test.versions_report    ?
22:26 UtahDave Sorry
22:28 unstable Salt: 2014.1.7, python 2.6.6, centos 6.5
22:32 jalaziz joined #salt
22:33 mr_chris joined #salt
22:35 murrdoc joined #salt
22:36 ajprog_laptop joined #salt
22:38 rome joined #salt
22:50 logix812 joined #salt
22:56 seblu joined #salt
22:58 mrlesmithjr UtahDave: Now getting this error https://gist.github.com/mrlesmithjr/381536d13f86dcb89d3f
22:58 forrest mrlesmithjr, he left for the day
22:59 mrlesmithjr forrest: K
23:01 Gareth unstable: 2014.1.7 is a few revisions behind.  are you able to upgrade to 2014.1.10?
23:05 rome joined #salt
23:14 halfss joined #salt
23:16 delinquentme salt won't be using multiple master files, right?
23:18 perfectsine joined #salt
23:20 rome joined #salt
23:22 aquinas_ joined #salt
23:24 aquinas__ joined #salt
23:29 rome joined #salt
23:33 winmutt joined #salt
23:39 mpanetta joined #salt
23:42 ecdhe joined #salt
23:42 winmutt complete noob question, ive done the walkthrough, setup a base and dev file_root and some related sls
23:42 winmutt how do i apply all of them? at work we use puppet, i am looking for the equiv of puppet agent -t i guess
23:43 winmutt the minion doesnt seem to be doing anything
23:43 Zero3 salt '*' state.highstate
23:43 Zero3 Should push all states to all minions
23:44 Zero3 For a pull instead, do "salt-call state.highstate" on the minion
23:44 perfectsine joined #salt
23:45 Zero3 winmutt: I was confused about this the other day when I started too
23:45 winmutt how come the minion is not doing this on its own?
23:45 winmutt puppet agent runs every 5min, do i need to cron something?
23:46 Zero3 apparently not how salt works.. Nothing will be pushed/pulled unless you ask it to
23:46 winmutt and thanks!
23:46 winmutt interesting
23:46 Zero3 yeah, I've seen some people put it in cron
23:46 perfectsine joined #salt
23:46 Zero3 I expected it to auto-sync instantly too. Guess it has some dangers...
23:47 TTimo joined #salt
23:49 Zero3 One reason might be that highstate is slooooow. Probably faster than puppet and others, but still slow and probably takes too many resources on a production server. Salt doesn't seem to handle diffs/updates, only full "highstate" runs
23:51 winmutt i kinda dig it
23:51 winmutt seems like you would cron some things, like ssh authkeys/users
23:51 winmutt if you werent using ldap
23:54 winmutt Succeeded: 9
23:54 winmutt Failed:    0
23:54 winmutt babysteps!
23:54 Zero3 congratz
23:54 younqcass joined #salt
23:55 blarghmatey Does anyone have suggestions on how to retrieve the hostnames and ip addresses from a set of servers based on grain data to be used in a pillar?
23:56 blarghmatey In particular, I'm trying to set up a MongoDB replica set and want to retrieve the hostname and IP of each of the members.
23:56 winmutt "Succeeded: 9"
23:56 winmutt Failed:    0
23:56 winmutt er
23:56 winmutt You could also of course run state.highstate from the master once the minion checks in for the first time as well.
23:57 winmutt how can i do this programmatically
23:59 Zero3 winmutt: Check out startup_states in the minion config file as well. You can do an automatic highstate when the daemon starts on the minion it seems
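The setting Zero3 mentions is a one-liner in the minion config; with it in place the minion runs a highstate each time the salt-minion daemon starts (a cron job calling salt-call state.highstate being the other option discussed above):

    # /etc/salt/minion
    startup_states: highstate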
23:59 jkaye joined #salt
