
IRC log for #salt, 2014-09-25


All times shown according to UTC.

Time Nick Message
00:00 loz-- joined #salt
00:02 jalaziz joined #salt
00:05 Vye rawtaz: You can pass relative paths with the "-c" switch but I doubt the config will allow that.
00:05 rawtaz yeah i cant get any of them examples to work :/ seems it wants absolute paths
00:06 rawtaz the cachedir and pki_dir in the config file too
00:09 jslatts joined #salt
00:11 glyf joined #salt
00:14 rawtaz hm, getting this when i run a simple test against one node, using HEAD: https://pastebin.mozilla.org/6609222
00:14 neonixcoder joined #salt
00:15 neonixcoder left #salt
00:16 halfss joined #salt
00:21 tnachen joined #salt
00:26 Damon joined #salt
00:26 gfa joined #salt
00:27 polliard Ok, another newb question.  How do I "list" all states that are "resolvable"?  I.e. I can see in the salt-master that it's "fetching" the gitfs formulas.  However, when running a highstate it says it can't find a matching sls in my "common"
00:27 polliard I am running with test=true and get the "no matching sls" message, but is there a way to simply say on the master: show me all "resolvable" states?
00:29 rawtaz do you guys think https://pastebin.mozilla.org/6609222 is worth making an issue for? or did i do something really obviously wrong?
00:31 Daemonik joined #salt
00:31 polliard rawtaz: on line 22 in your pastebin it is trying to change to the directory, your ls shows it's a file
00:31 polliard rawtaz: since I am new, I'm trying to make sure I am understanding what I am seeing in your log
00:32 TyrfingMjolnir joined #salt
00:33 rawtaz polliard: yeah, i agree. i opened the thin.py file as well, where the error happens, and it's going through a set of top folders.
00:34 polliard rawtaz: I am digging around to see if I can see what's happening; I don't see anything obvious, but I may be missing something in your salt config.
00:35 halfss joined #salt
00:37 polliard rawtaz: dumb question, was this a brew install?
00:38 tnachen joined #salt
00:40 bhosmer joined #salt
00:42 tnachen joined #salt
00:44 jensnockert joined #salt
00:44 rawtaz polliard: yes, HEAD
00:47 elfixit joined #salt
00:52 iggy polliard: state.show_top, state.show_sls, and I occasionally use cp.get_file_str (or whatever it is)
00:52 yomilk joined #salt
00:52 polliard iggy: thanks, giving that a try.
00:52 iggy I think show_low_sls is supposed to be helpful sometimes, but I only remember using it once
00:53 iggy history | grep get_file
00:53 iggy http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html
00:54 iggy I think most of the stuff (that doesn't make changes to your system) in that module is supposed to be debug stuff
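The inspection functions iggy mentions can be sketched as master-side commands. This is illustrative only and needs a running master/minion pair; the minion id "web1" and the SLS name "common" are placeholders:

```shell
# Show which SLS files the top file assigns to the minion:
salt 'web1' state.show_top

# Render a specific SLS (here "common") without applying it:
salt 'web1' state.show_sls common

# Render the low-state data (the fully compiled state chunks):
salt 'web1' state.show_low_sls common

# Fetch the raw file text from the salt fileserver:
salt 'web1' cp.get_file_str salt://common/init.sls
```

None of these make changes to the minion, so they are safe for the kind of "what can the master actually resolve" debugging polliard is asking about.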
00:55 jalaziz joined #salt
00:57 polliard iggy: thanks for the information, more confluence examples for me to write :)
00:57 glyf joined #salt
00:58 tnachen joined #salt
00:58 iggy I generally have this one open all day long: http://docs.saltstack.com/en/latest/salt-modindex.html
00:59 rawtaz filing an issue now :)
01:02 nitti joined #salt
01:02 rawtaz https://github.com/saltstack/salt/issues/16128 in case anyone is interested.
01:02 halfss joined #salt
01:03 mordonez joined #salt
01:07 polliard rawtaz: of course we are interested, and thank you
01:08 fllr joined #salt
01:08 polliard iggy: I can understand that, my issue is I am working on some servicenow integration with salt stack and am having to learn servicenow, remember ajax, and learn salt all while still being effective
01:08 polliard iggy: too many pages open at the same time!
01:08 rawtaz polliard: no problem! none at all :)
01:12 thedodd_ joined #salt
01:14 thedodd joined #salt
01:15 debian112 anyone know how I can view pillars in different environments?
01:16 debian112 I wish this worked: salt-call pillar.get server_gt saltenv='greentpa'
01:16 kedo39 joined #salt
01:18 tedski http://pastie.org/9592312  i'm having issues getting the scheduler to run... i have added this to my pillar
01:18 mordonez joined #salt
01:19 tedski master and minion running 2014.1.10
01:20 gmcwhistler joined #salt
01:21 murrdoc joined #salt
01:21 VictorLin joined #salt
01:24 perfectsine joined #salt
01:25 bhosmer_ joined #salt
01:29 anotherZero joined #salt
01:30 anotherZero joined #salt
01:32 Ryan_Lane joined #salt
01:32 perfectsine_ joined #salt
01:33 jalbretsen joined #salt
01:35 Outlander joined #salt
01:39 kusams joined #salt
01:45 anotherZero joined #salt
01:45 possibilities joined #salt
01:46 to_json joined #salt
01:46 ramishra joined #salt
01:46 to_json1 joined #salt
01:52 jslatts joined #salt
01:54 tristianc joined #salt
01:57 n8n joined #salt
01:58 glyf joined #salt
01:59 saggy joined #salt
02:02 bigred_ joined #salt
02:05 DaveQB joined #salt
02:06 bigred_ joined #salt
02:07 bigred_ left #salt
02:11 hasues joined #salt
02:15 sherbs_tee joined #salt
02:16 thedodd joined #salt
02:24 rallytime joined #salt
02:26 huleboer joined #salt
02:30 possibilities joined #salt
02:32 jensnockert joined #salt
02:35 active8 joined #salt
02:35 otter768 joined #salt
02:41 dalexander joined #salt
02:43 jalaziz joined #salt
02:46 TyrfingMjolnir joined #salt
02:48 sudarkoff joined #salt
02:48 jslatts joined #salt
02:57 delinquentme LocalClientEvent PULL socket URI: ipc:///var/run/salt/master/master_event_pull.ipc
02:57 delinquentme I keep getting my attempt to highstate hanging at this event
03:00 jaimed joined #salt
03:02 TyrfingMjolnir joined #salt
03:02 bezeee joined #salt
03:03 thedodd joined #salt
03:05 halfss joined #salt
03:09 thedodd joined #salt
03:12 halfss_ joined #salt
03:14 jbub joined #salt
03:15 bytemask joined #salt
03:18 dmick joined #salt
03:18 dmick https://launchpad.net/~saltstack/+archive/ubuntu/salt-depends looks like it might have been useful once, but seems to have lost things like libzmq3; anyone active there still?
03:19 dmick whiteinge?
03:20 mordonez joined #salt
03:29 ajolo joined #salt
03:37 sudarkoff joined #salt
03:38 Ryan_Lane joined #salt
03:40 kusams joined #salt
03:42 kore joined #salt
03:57 ramishra joined #salt
03:57 GnuLxUsr joined #salt
04:01 malinoff joined #salt
04:08 forrest joined #salt
04:09 thayne joined #salt
04:10 lynxman joined #salt
04:19 mordonez_ joined #salt
04:19 GnuLxUsr joined #salt
04:21 jensnockert joined #salt
04:23 mordonez__ joined #salt
04:24 mordonez___ joined #salt
04:26 mordonez____ joined #salt
04:33 thayne joined #salt
04:35 kermit joined #salt
04:38 Outlander joined #salt
04:42 zhou_ joined #salt
04:43 possibilities joined #salt
04:46 askhan joined #salt
04:46 felskrone joined #salt
04:51 GnuLxUsr joined #salt
04:52 tligda joined #salt
04:55 zhou_ can anyone tell me why grains.item gives me an old value? I wrote a custom grain; it returned the dict {"a": 123, "b": 456}. Then I changed my grain to return the dict {"a": 123} and ran saltutil.sync_all, but I can still see "b: 456" with the grains.item b command. Why??
04:58 ramteid joined #salt
05:03 mordonez_____ joined #salt
05:03 TyrfingMjolnir joined #salt
05:06 iggy I usually just wait a while, clear caches, etc... haven't found exactly what makes it start working
05:13 murrdoc joined #salt
05:14 yomilk_ joined #salt
05:20 sherbs_tee joined #salt
05:22 thayne joined #salt
05:30 lacrymology joined #salt
05:30 lacrymology 'pip install' is case-insensitive, so 'pip install django' works although the official name of the package is 'Django'. Now, while 'pip uninstall django' works also, state.high '{"django": {"pip": ["removed"]}}' fails, because it is case-sensitive. Would you call this a bug?
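The mismatch lacrymology describes can be shown without salt at all. A minimal local sketch: the `normalize` function below just imitates pip's project-name normalization (lowercase, runs of `-`, `_`, `.` collapsed to `-`) and is not part of pip or salt; a plain dict key, as in state.high data, is compared verbatim:

```shell
#!/bin/sh
# Illustrative imitation of pip's name normalization:
normalize() { printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | sed -E 's/[-_.]+/-/g'; }

# pip's view: "Django" and "django" normalize to the same project name
[ "$(normalize Django)" = "$(normalize django)" ] && echo "pip: same package"

# a verbatim dict-key comparison (as in state.high) sees two different strings
[ "Django" = "django" ] || echo "state key: no match"
```

Running it prints "pip: same package" followed by "state key: no match", which is exactly the asymmetry being reported.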
05:36 delinquentme joined #salt
05:41 esogas` joined #salt
05:47 jeremyBass joined #salt
05:48 modafinil__ joined #salt
05:48 masm joined #salt
05:49 mordonez______ joined #salt
05:49 octarine joined #salt
05:49 fxdgear_ joined #salt
05:50 blackjid joined #salt
05:50 zhou_ left #salt
05:54 catpiggest joined #salt
05:55 mordonez_______ joined #salt
05:56 mordonez________ joined #salt
06:02 Outlander_ joined #salt
06:03 thayne joined #salt
06:07 sherbs_tee joined #salt
06:10 jensnockert joined #salt
06:10 zhou_ joined #salt
06:10 jensnockert joined #salt
06:11 murrdoc joined #salt
06:12 ramishra joined #salt
06:13 SpX joined #salt
06:17 zhou_ @iggy    so, is that a bug, or did I just use grains the wrong way?
06:21 murrdoc joined #salt
06:28 ramishra joined #salt
06:34 jalaziz_ joined #salt
06:36 lacrymology joined #salt
06:41 zz_Cidan joined #salt
06:44 TheThing joined #salt
06:46 flyboy joined #salt
06:46 yomilk joined #salt
06:50 bhosmer_ joined #salt
06:55 tnachen joined #salt
06:55 Sweetsha1k joined #salt
06:56 mosen joined #salt
06:58 akafred joined #salt
07:02 ghartz joined #salt
07:03 mndo joined #salt
07:03 oyvjel joined #salt
07:04 mindlessdemon joined #salt
07:04 lcavassa joined #salt
07:04 mindlessdemon joined #salt
07:05 delinquentme joined #salt
07:05 mindlessdemon joined #salt
07:06 mindlessdemon joined #salt
07:07 mindlessdemon joined #salt
07:08 mindlessdemon joined #salt
07:08 mindlessdemon joined #salt
07:09 the_drow_ joined #salt
07:09 mindlessdemon joined #salt
07:11 mindlessdemon left #salt
07:12 oyvjel joined #salt
07:13 kingel joined #salt
07:13 jcockhren everyone probably should do a -> salt '*' pkg.install bash refresh=True
07:14 jcockhren given the exploit exposed today. just sayin...
07:15 the_drow_ exploit?
07:15 TheThing http://arstechnica.com/security/2014/09/bug-in-bash-shell-creates-big-security-hole-on-anything-with-nix-in-it/
07:15 TheThing ^ this exploit
07:15 the_drow_ Is ryan lane here?
07:16 jcockhren http://seclists.org/oss-sec/2014/q3/685
07:16 jcockhren yeah. I wonder how this affects salt-ssh
07:18 the_drow_ hehe it seems that I'm not vulnerable to this attack :)
07:18 jcockhren github is
07:18 jcockhren well... was
07:18 the_drow_ ohhh crap
07:18 jcockhren git over ssh uses forced commands
07:19 sastorsl joined #salt
07:19 jcockhren I imagine that salt-ssh could be affected as well.
07:20 zhou_ can anyone tell me why grains.item gives me an old value? I wrote a custom grain; it returned the dict {"a": 123, "b": 456}. Then I changed my grain to return the dict {"a": 123} and ran saltutil.sync_all, but I can still see "b: 456" with the grains.item b command. Why??
07:22 TheThing jcockhren++
07:22 jhauser joined #salt
07:23 jensnockert joined #salt
07:24 the_drow_ zhou_: Call http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.refresh_modules
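The refresh sequence the_drow_ points at can be sketched as minion-side commands. Illustrative only (requires a salt minion install); note that a key that was removed from the grain module may additionally linger in the minion's grains cache:

```shell
# Run on the minion after editing the custom grain module:
salt-call saltutil.sync_grains      # copy the changed grain module to the minion
salt-call saltutil.refresh_modules  # reload execution modules and grains
salt-call grains.item b             # re-query; a removed key may still show cached data
```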
07:25 mordonez joined #salt
07:26 TheThing for those that wanna check if they're vulnerable for the bash bug: salt '*' cmd.run "env x='() { :;}; echo vulnerable' bash -c \"echo this is a test\""
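The salt command above just fans out the standard local check via cmd.run; run directly on a single host it looks like this (CVE-2014-6271: a vulnerable bash executes trailing code while importing a crafted function definition from the environment):

```shell
# Define a crafted function in the environment, then start bash.
# A patched bash prints only "this is a test"; a vulnerable bash
# also prints "vulnerable" first, because it executes the trailing
# command while parsing the exported function.
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```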
07:26 kiorky joined #salt
07:28 martoss joined #salt
07:31 mordonez_ joined #salt
07:32 pduersteler joined #salt
07:35 saurabhs left #salt
07:36 PI-Lloyd joined #salt
07:36 delinquentme do salt-masters use state files to configure what they themselves have installed?
07:36 TheThing You know it's a good morning when you wake up to big warnings about huge security hole or something
07:36 fredvd joined #salt
07:36 jcockhren heh
07:36 TheThing barely had my coffee
07:37 jcockhren you're gonna need it
07:37 TheThing thanks jcockhren for the command :b
07:37 TheThing hehe
07:37 maboum joined #salt
07:38 jcockhren When heartbleed was announced, 15 min later I was fixing the things
07:38 TheThing lol
07:38 jcockhren long long long 2 days
07:38 jcockhren my friends and I started working on it before it became a 'thing'
07:39 TheThing Is seclist.org best way to be subscribed to these kinds of news?
07:39 babilen TheThing: I already fixed it yesterday and had a big smile on my face when I could answer "Yes, naturally" to my colleague's question "Did you upgrade bash already?" a minute after coming into work.
07:39 TheThing hehe
07:39 TheThing nice going babilen
07:39 * babilen hugs salt
07:39 jcockhren went into the office like: 'hey (ops) guys, watch me do this'
07:39 tinuva joined #salt
07:39 jcockhren want the logs? want the env vars? here!
07:40 babilen Somebody should release a version of http://xkcd.com/208/ for salt
07:40 malinoff jcockhren, 30% 'minion did not respond'
07:40 jcockhren there seems to be a communication issue even between mismatch minor versions
07:41 rogst joined #salt
07:41 jcockhren 2014.1.10 masters and 2014.1.7 minions don't seem to have consistent communication
07:41 TheThing malinoff: jcockhren, 30% 'minion did not respond' <— looks like somebody is gonna have a long day
07:41 malinoff jcockhren, i'm not asking, this is what will be received after 'hey ops guys, watch me do this'
07:41 jcockhren yeah
07:41 malinoff just joking :)
07:41 TheThing lol
07:41 jcockhren haha
07:42 babilen I haven't had any problems like that ever since .10
07:42 malinoff So am I, haven't had any problems since 0.17.5 :)
07:42 jcockhren https://twitter.com/enigma0x3/status/514967993328365569
07:42 jcockhren related ^
07:42 pduersteler ouch..
07:43 DaveQB joined #salt
07:43 babilen malinoff: You are just here for joking anyway
07:43 mordonez__ joined #salt
07:43 jcockhren right
07:43 TheThing hahaha
07:43 malinoff babilen, nope, i try to help people sometimes, but most of the time i'm read-only
07:44 alanpearce_ joined #salt
07:44 TheThing I thought you were write-only <_<
07:44 jcockhren I think I'm going to have to make all new amis
07:44 TheThing people write to you and you never reply >_>
07:45 j-saturne joined #salt
07:46 malinoff TheThing, I'm replying right now
07:46 babilen Lets move on, I should not have started this.
07:46 TheThing Only because of the bash vulnerability. Someone must have clearly hacked you <_<
07:47 TheThing or because there's a glitch in the matrix :b
07:47 jcockhren s/the matrix/bash/
07:48 spo0nman joined #salt
07:57 ghartz joined #salt
07:59 thayne joined #salt
08:02 sastorsl joined #salt
08:02 CycloHex joined #salt
08:03 rjc joined #salt
08:03 ndrei joined #salt
08:04 TheThing joined #salt
08:08 thayne joined #salt
08:13 kavakava joined #salt
08:19 CeBe joined #salt
08:19 scalability-junk how would someone go about updates that need a migration?
08:20 scuwolf joined #salt
08:20 scalability-junk would I add some migration call on each update and therefore do the upgrade "manually"? Aka change the software version in the state, edit the migration to the latest needed, and then let the state run automatically or manually.
08:20 scalability-junk or are there other best practices to consider?
08:21 scalability-junk should one build version specific migration states and then run the one designated to a specific version.
08:21 intellix joined #salt
08:21 scalability-junk so having migration scripts for version v1, v2, v3 and it uses the assigned version number to lookup the right migration script and if none is provided use the default one? or something like that?
08:23 nullscan joined #salt
08:24 nullscan hi
08:24 TheThing hello
08:24 nullscan is anyone using salt for ie. php software deployment?
08:25 nullscan of your own code hosted in a git repo in general?
08:25 nullscan something like fabric or capistrano etc
08:25 nullscan i was thinking of writing a custom module that would do that but i was wondering if anyone has any other ideas
08:26 malinoff nullscan, build deb/rpm packages and use apt/yum to deliver your software
08:26 TheThing I've done php software deployment using salt, yes. Had to set up custom github user that salt would set up on remote machine
08:26 malinoff nullscan, salt will help on both steps
08:26 TheThing that way, I could do any git commands I wanted on my private repo's
08:26 TheThing which is how I deployed my php site
08:27 nullscan malinoff: yes that is one option but building a deb from the code base we have would be impractical, it would be a 200Mb deb file :p
08:28 malinoff nullscan, may i ask, do you have 200mb of php code you wrote yourself?
08:28 malinoff Or is most of it framework code?
08:28 mindlessdemon joined #salt
08:28 pduersteler Anyone running salt with git on debian? As soon as I add git as fs backend, I can't do state.highstate anymore, even when not referencing files from git. the minion connection times out, highstate gives no output, and from then on I have to restart the minion in order to get it working again. https://gist.github.com/pduersteler/a7955992f96995d482ef
08:29 nullscan TheThing: yes i was thinking something along those lines, but i would like to have it as a state so that i can combine it with reactors etc and have a minimal state file to call it
08:29 malinoff nullscan, generally it is ok to separate the framework core and custom themes/plugins to different packages and install them separately
08:29 nullscan malinoff: no i do not write it myself :p
08:29 malinoff nullscan, that's how we deploy php apps
08:29 nullscan its php + js + a huge number of app specific ini files etc etc
08:30 CycloHex pduersteler: have you restarted the salt-master after adding the git remote?
08:30 nullscan and it adds up to 200mb because we carry the frameworks too because they are customised quite a lot
08:30 pduersteler CycloHex: yep
08:30 malinoff nullscan, i think it's time to refactor the code
08:31 nullscan lol, it's long overdue but it's not up to me I'm afraid
08:31 malinoff nullscan, well, of course you can just clone the repo and so on, but this way is *very* error-prone
08:32 jcockhren heh
08:32 nullscan that is how we do it now anyway using capistrano but i desperately want to get rid of it
08:32 jcockhren there's a 2nd biggie vulnerability for another major crypto lib
08:32 TheThing|temp joined #salt
08:32 CycloHex Has anyone here used salt-cloud with digital ocean? If so, is it normal that it takes up to 20 minutes to just deploy a minion through salt-cloud? And it doesn't even install salt-minion :s
08:32 scalability-junk malinoff: why not build docker images from the code and use the different layers to minimize the need to distribute the images?
08:32 TheThing|temp but what malinoff said is probably the best approach
08:33 nullscan docker containers are something we are evaluating atm
08:33 nullscan so we are not ready to jump on that just yet
08:33 malinoff scalability-junk, uh-uh, docker is evil. It does not solve any problem, it just gives you a lot of new bugs and problems
08:33 olenz joined #salt
08:33 jcockhren CycloHex: I had that issue once, but it was due to something wrong on their end. the droplet would just hang and never 'start'
08:33 jcockhren happened to me last week
08:34 CycloHex If I use vagrant with digital ocean it's up in 2 minutes :s
08:34 nullscan plus i am really reluctant to add yet another layer in the entire stack
08:34 scalability-junk malinoff: sounds like I found someone giving me an answer to why someone would use docker instead of salt to deploy stuff...
08:34 olenz Hi everybody! Is there a way to find out what functions were executed on a certain minion? Basically a "jobs.list_jobs", but locally on a minion?
08:34 scalability-junk malinoff: so why would one use docker (except for versioned images)?
08:35 nullscan scalability-junk: its easier when you are creating staging enviroments
08:35 nullscan to just have docker containers
08:35 nullscan but for production i am really sceptical
08:35 jcockhren NSS -> http://seclists.org/oss-sec/2014/q3/673
08:37 olenz In general, I wonder whether there is any concept on how the results of running functions/states/commands are stored and can be accessed afterwards?
08:38 olenz To my understanding, whenever executing a function, I can specify the returner, but if I want to access the results afterwards, how can I do that?
08:38 olenz jobs.find_jobs only works for jobs started from the master, doesn't it?
08:41 scalability-junk nullscan: but why is docker in production bad? malinoff
08:42 nullscan we are not at the point where we are evaluating performance yet but it seems kind of a messy solution to add another layer to the server setup
08:42 nullscan for our infrastructure anyway
08:43 scalability-junk depends on the setup. could make the size issue less severe and instead of using packages you would use images. but I am by no means an expert still trying to figure out the next generation of the infrastructure.
08:47 malinoff scalability-junk, i think docker is bad everywhere, because docker promotes that one service should be in one container - so instead of managing services you manage containers. Again, it does not solve the problem of *managing services*
08:48 N-Mi joined #salt
08:48 malinoff scalability-junk, and generally you have tons of services on a single host - postfix, nginx, httpd, uwsgi, logstash, etc, etc
08:50 malinoff scalability-junk, and imagine there is a vulnerability in e.g. bash. To update its version you must update ALL CONTAINERS WITH BASH, which means actually *ALL* containers you have
08:50 malinoff scalability-junk, does it sound like a good solution?
08:51 scalability-junk malinoff: depends on the way you build containers and deploy them.
08:52 scalability-junk I mean if you build them through a CI like jenkins you could just retrigger a build of all your last commits and letting docker pull in the latest bash version and then have your salt distribute the containers for example.
08:52 scalability-junk malinoff: I mean service separation is something awesome in my opinion. But I'll probably go with salt and use vms/docker without worrying about them.
08:52 scalability-junk only to use less overhead mostly.
08:52 scalability-junk But not decided yet
08:53 laxity joined #salt
08:54 aquinas joined #salt
08:54 nullscan scalability-junk: i think that using a framework/tool like salt only to distribute docker containers kinda loses the point of configuration management frameworks altogether
08:55 nullscan scalability-junk: i mean you have the tools to manage single services on thousands of systems in the blink of an eye nowadays, why use it only to distribute text files that will then pull data from another repository many times over for the same thing?
08:56 scalability-junk nullscan: you wouldn't distribute the text files, but the actual images.
08:56 nullscan i am not sure about this whole app containers too, i have just started looking into it but i think that its a bit of an overkill
08:56 scalability-junk these images are working without dependency and therefore are more useful than salt alone.
08:57 malinoff scalability-junk, is using jenkins only to update bash everywhere simpler than just running "salt \* pkg.latest bash"?
08:57 scalability-junk updating can introduce unexpected results, so instead of updating stuff you could create a new vm/image and bootstrap the new version with salt (should be done that way, as the result is actually reproducible, though not as fast as with a snapshot like an image)
08:57 scalability-junk malinoff: but with that you alter the system and a rollback gets harder.
08:58 malinoff scalability-junk, do you ever rollback to versions with vulnerabilities?
08:58 nullscan scalability-junk: that is why you have staging envs, to try out new versions of e.g. apache
08:58 scalability-junk malinoff: if they break my online shop, which doesn't use bash queries... yeah
08:59 scalability-junk nullscan: did staging never fail for you?
08:59 nullscan actually, no
08:59 malinoff scalability-junk, so you shouldn't run that pkg.latest command. That's all
08:59 malinoff scalability-junk, we're actually arguing about "one command vs CI system"
09:00 malinoff I bet on one command
09:00 nullscan well, if it comes down to testing e.g new php version with the current codebase then i can always update gradually and see what happens in prod despite of jenkins builds etc
09:00 scalability-junk malinoff: could be more. it's more about the bigger picture of having versioned images, which actually run without dependencies and having versioned config management with dependencies you could perhaps miss in 2 months or are down or whatever.
09:01 nullscan scalability-junk: and to downgrade services such as apache etc, i can always teke the system offline, uninstall the service and run the previous version of my state to bring it back to the previous version
09:01 scalability-junk nullscan: only when you don't use pkg.latest for example
09:02 scalability-junk and as I said imagine you rely on a piwik build, which is unavailable when you wanna run salt.
09:02 nullscan scalability-junk: i never use pkg.latest for critical services
09:02 malinoff scalability-junk, I have versioned config management without docker - such a system is called "git"
09:02 scalability-junk I mean sure you could just snapshot your bootstrapped vm after each salt run.
09:03 scalability-junk malinoff: so no external dependencies at all, which are not versioned?
09:03 nullscan scalability-junk: i have to agree with malinoff, git is what i rely on too
09:03 scalability-junk malinoff: that's great, but as I said, having it run-ready even in 5 years (in theory) is better than having the instructions for how to produce something running, in my opinion.
09:03 malinoff scalability-junk, of course there are external dependencies, which are handled by my deployment system
09:04 malinoff scalability-junk, there is no system without a dependency
09:04 kingel joined #salt
09:04 malinoff App depends on OS, OS depends on hardware, hardware depends on electricity, etc, etc
09:04 scalability-junk I see docker containers more like binary blobs that can run even in x months without any need for dependencies.
09:05 scalability-junk malinoff: agreed, but that's taking it a bit too far ;)
09:05 malinoff scalability-junk, it's not true, you will have dependencies - other docker containers
09:05 malinoff because if your app needs let's say postfix, you will have to have a postfix container
09:06 scalability-junk which is available in a running form either ;)
09:06 scalability-junk but yeah I get your points, I just wanted to stretch the discussion into interesting levels
09:06 nullscan scalability-junk: what happens when your new shiny varnish lets say, does not play well with your old nginx ?
09:06 nullscan :)
09:07 scalability-junk In my opinion the best way would be to have salt states, use vms or containers and snapshot them after each bigger change and therefore fallback if something doesn't work
09:07 malinoff so I don't see the point of using docker containers if you still have dependency management, service management and other painful stuff; more than that, you have to build a CI/CD system to handle docker
09:07 nullscan i know stupid example, but humor me for arguments sake
09:07 scalability-junk nullscan: use the old container, but when that happens you have the same issues with salt.
09:07 nullscan scalability-junk: i was referring to the "independent" state of containers :)
09:07 scalability-junk malinoff: true, but CI/CD system is available anyway so are the others mostly.
09:08 scalability-junk I mean why do you use packages instead of just distributing code via git?
09:08 malinoff scalability-junk, really?
09:08 jcockhren uh oh
09:08 scalability-junk nullscan: fair enough
09:09 malinoff scalability-junk, because yum install myapp is simpler than `git clone https://blah.git; git checkout vA.b.c; make; make install; do-something-else-to-install-this-shitty-app`
09:09 scalability-junk malinoff: which could be handled with a salt state as it's available anyway
09:09 scalability-junk instead you need a build environment and dependency management for your packages ;)
09:09 scalability-junk a package repo etc.
09:10 nullscan scalability-junk: the ports system is a well established system that has worked for ages; it's not the only package distribution system available :)
09:10 malinoff scalability-junk, right, but these tools are mature things, and they won't break
09:10 scalability-junk docker run myapp is simpler than running `make install etc. pp`
09:11 scalability-junk malinoff: but still you acknowledge that they could be used in the same way and would have similar benefits in making the infrastructure easier to install ;)
09:11 malinoff scalability-junk, no, it's not about "docker run myapp" it's more about "docker run dep1; docker run dep2; docker run dep3; docker run myapp"
09:11 VSpike What will the file.append state do if the file doesn't exist? Will it create it?
09:11 glyf joined #salt
09:12 giantlock joined #salt
09:12 jcockhren not understanding how/why the conversation got to docker vs. system packaging
09:12 che-arne joined #salt
09:12 scalability-junk malinoff: but you have to have dependency management anyway; it doesn't matter if it manages docker run dep1 or yum install dep1
09:12 malinoff so am i :)
09:12 scalability-junk you still have different services on different systems and you have to coordinate that even with the current system
09:13 malinoff scalability-junk, yum has dependency management from-the-box
09:13 scalability-junk jcockhren: probably because of the similarities
09:13 jcockhren seems to me, that system-packaging is a dependency of using docker
09:13 scalability-junk malinoff: does that include setting up the link to a mysql server, memcache server etc.
09:14 scalability-junk jcockhren: ^^
09:14 malinoff scalability-junk, it is not dependency management, it is a configuration step, which is handled by salt/ansible/whatever
09:14 scalability-junk We are circling :D
09:15 pduersteler Okay, python-git breaks my setup. dulwich is available, but since saltstack-wheezy provides 2014.1.10 rather than 2014.7.x, the gitfs_provider setting is not implemented, and pygit2 has no deb package and I'd like to not compile dependencies etc by myself. Best way to get the git fsbackend running on a debian? upgrade to an unstable dist version?
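For reference, the master-config shape under discussion looks roughly like this; the repo URL is a placeholder, and the commented gitfs_provider line only exists from 2014.7.x onward (on 2014.1.x the provider cannot be selected, which is pduersteler's problem):

```yaml
# /etc/salt/master -- illustrative only; the repo URL is a placeholder
fileserver_backend:
  - git
  - roots            # keep the default file roots as a fallback

gitfs_remotes:
  - https://example.com/salt-states.git

# 2014.7.x and later only; has no effect on 2014.1.x:
#gitfs_provider: pygit2
```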
09:15 scalability-junk So conclusion could be that docker has pros in less overhead compared to vms. Other pros can't be considered as salt is needed anyway so just use salt :P
09:15 scalability-junk seems like a good conclusion in the salt channel :D
09:16 jcockhren there are different types of dependencies
09:16 jcockhren package deps, service deps, app deps
09:17 jcockhren all those have states related to them:
09:17 jcockhren package deps: installed, uninstalled, removed
09:17 jcockhren service deps: running, stopped, restart(ing)
09:18 jcockhren app deps: package deps, connections, etc
09:19 jcockhren regardless of vms or container, you need a state to where you want that "system" to converge upon
09:20 jcockhren given the layering, it seems to me that each layer alone has to be managed to have some sense of isolation.
09:21 scalability-junk so package deps can be handled with packages for example
09:21 jcockhren yes
09:21 scalability-junk app deps at least can partly be handled with docker, but also with scripts (salt, bash etc.)
09:21 fbretel joined #salt
09:21 scalability-junk the question is what is easier or less error prone.
09:21 jcockhren _easier_ right?
09:21 scalability-junk as for service and app deps a config management should be used anyway so scripts could be available anyway
09:22 fbretel hi all, have any simple example of making a state fail ?
09:22 scalability-junk jcockhren: yeah easier is not the best word ^^
09:23 TheThing|temp joined #salt
09:23 jcockhren I argue that a config management system should probably touch each layer, but serving a different purpose
09:23 TheThing|temp jcockhren: I just realized that updating bash still doesn't fix the vulnerability :-/
09:24 jcockhren TheThing|temp: right it's a partial fix
09:24 jcockhren there are still vulnerable code paths
09:24 TheThing|temp indeed
09:24 jcockhren fun times
09:25 TheThing|temp time to lock down my ssh on my server for now
09:25 TheThing|temp I really should get VPN installed
09:26 wnkz joined #salt
09:26 TheThing|temp yeah, very fun times :)
09:26 malinoff jcockhren, thanks! I think i should save your explanation somewhere
09:26 jcockhren scalability-junk: even if system packaging is used for handling package deps, config management has to be useful enough to handle the dependencies of the layers themselves
09:27 nihe joined #salt
09:27 jcockhren for example, a security update is released.
09:27 malinoff I don't have enough english skills to say everything i want to say :)
09:27 jcockhren a single docker container has no clue to restart a related service on another coreOS node
09:28 jcockhren however, salt would have an understanding of the distribution of containers amongst the nodes
09:29 jcockhren and know which services to restart. and given the versioning of containers, then CI/CD can repackage the container for later
09:29 scalability-junk jcockhren: true, but that's not something this layer (docker/container) needs to understand either, just as it's not the purpose of the package management to restart a remote mysql server.
09:30 malinoff scalability-junk, that's why salt is more "infrastructure manager" than just a "configuration manager"
09:30 jcockhren right, exactly.
09:30 malinoff the same for ansible
09:31 malinoff because salt knows both how to configure an app and how to restart a service
09:31 scalability-junk Which would actually allow docker being used to reduce the moving parts salt has to handle. But it introduces black boxes of operation for salt, which in itself could be bad.
09:31 scalability-junk I mean there are always pros and cons.
09:31 jcockhren the orchestration of the behaviors of the layers is what you're really trying to manage
09:31 jcockhren the boundaries between what's important is different for each layer
09:32 yomilk joined #salt
09:33 olenz fbretel: how about salt '*' cmd.run '/bin/false'?
09:33 jcockhren in the words of stahmna at puppet labs, "There are degrees of automation"
09:33 babilen joined #salt
09:33 babilen ddd333
09:34 olenz well, that's not a state, but you get the idea
09:35 fbretel olenz: thx but I'm looking for a state to fail, not a cmd/module
09:35 scalability-junk jcockhren: With varying degrees of complexity, with more automation resulting in more complex tools to fulfil this automation. Or at least I would say so.
09:36 yomilk_ joined #salt
09:36 jcockhren it _could_ but that's the hard way
09:36 nnion joined #salt
09:36 jcockhren that doesn't follow the unix philosophy
09:37 jcockhren a tool should be small and do one thing well
09:37 jcockhren in the case of larger software, that _tool_ should be restricted to doing one thing
09:37 jcockhren then you need inputs, outputs and a way to pipe things
09:38 olenz well, use a state "/bin/false: cmd.run"
09:38 jcockhren for example `dmesg | tail`
09:38 olenz fbretel: use a state "/bin/false: cmd.run"
09:38 jcockhren what is your pipe in infrastructure management?
09:38 olenz fbretel: that should fail
09:38 jcockhren is it that tool. salt
09:39 jcockhren (in this room)
09:40 jcockhren does something very well... send commands elsewhere. delegate work to other systems (vms, containers, cloud providers, hypervisors )
09:40 jcockhren and it can receive commands and do something with them, with the scheduler and reactor parts
09:41 jcockhren doesn't _have_ to be complex
09:41 jcockhren just the separation of responsibility has to be clearly defined
09:41 fbretel olenz: perfect! :)
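The failing state olenz describes can be sketched as a minimal SLS (the state ID is illustrative): cmd.run fails the state whenever the command exits non-zero.

```yaml
# fail.sls - a state that always fails, because /bin/false exits 1
always-fails:
  cmd.run:
    - name: /bin/false
```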
09:43 CycloHex when trying to deploy a minion over salt-cloud, I get the error that the deploy.sh failed.. Probably because it created a dir but that failed... I ran the script 3 times, I keep getting the same error
09:45 scalability-junk jcockhren: yeah but salt is not really one thing. It does a lot more. Same with systemd, which on purpose doesn't try to be in line with the unix philosophy.
09:46 jcockhren scalability-junk: right it _can_ do more. but for your needs, whatever they are, you have to actively _restrict_ what it should do
09:46 ndrei joined #salt
09:46 VSpike That's strange. file.append complains if the file doesn't exist. It doesn't appear to have a parameter to force creation of the file, but it *does* have a makedirs parameter which says "If the file is located in a path without a parent directory, then the state will fail. If makedirs is set to True, then the parent directories will be created to facilitate the creation of the named file. "
09:47 jcockhren mysql for example
09:47 jcockhren just b/c it's a relational db doesn't mean it has to be considered the source of truth for your data
09:48 scalability-junk jcockhren: so you are saying instead of using salt to manage and orchestrate the infrastructure I shouldn't use it for service discovery via reactor or mine. Or use salt for monitoring purposes?
09:48 VSpike Ok, Setting makedirs causes it to create the file too. Badly named parameter + documentation fail I guess
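VSpike's finding sketched as an SLS (path, text and state ID are illustrative): with makedirs: True, file.append creates the missing parent directories, and in practice the target file as well.

```yaml
# Hypothetical path; makedirs: True also causes the file to be created
append-note:
  file.append:
    - name: /srv/notes/motd.txt
    - text: "managed by salt"
    - makedirs: True
```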
09:48 scalability-junk I mean why use a different tool, when the one you are familiar with does the same or perhaps even better integrated?
09:48 scalability-junk I mean sure there are questionable things like what odoo is doing. They implemented a website and ecommerce module that barely works, when existing software already does that quite well.
09:49 jcockhren yeah. I'm saying pick something b/c no one tool does everything well
09:49 blackhelmet joined #salt
09:49 jcockhren it does things at varying degrees
09:49 scalability-junk jcockhren: you just dislike me trying to fuel more discussions :D
09:49 jcockhren even in salt, some parts of the codebase mature slower than others
09:50 jcockhren heh. nah. these conversations on practical salt usage should happen
09:50 Outlander joined #salt
09:50 jcockhren of course, I'm only speaking from my perspective
09:51 TheThing joined #salt
09:51 mindlessdemon joined #salt
09:52 scalability-junk I agree that you better choose a tool today than try to find the right tool tomorrow. But I give myself 2-3 days for bigger revamps of infrastructure to evaluate a few options, discuss them etc. (I love discussions you can learn so much)
09:52 jcockhren many parts of your infrastructure should be pluggable.
09:52 olenz VSpike: I just reread the following mail, which is closely related to your problem: https://groups.google.com/forum/#!topic/salt-users/DyygWzt3HK8
09:53 jcockhren and I agree, it takes time to find that balance
09:53 mindlessdemon joined #salt
09:53 jcockhren start with what you have, then iterate on it
09:53 scalability-junk jcockhren: pluggable yeah, but you mostly need a central and governing structure to make something pluggable.
09:53 scalability-junk It's hard to have your config management and dependency stuff pluggable, when it's so customized and work-intensive to switch.
09:54 mindlessdemon joined #salt
09:54 jcockhren when I say pluggable, I mostly mean you can replace it with something better later
09:54 scalability-junk monitoring service switching should be one simple config edit in the base state sure, or the webserver should be pluggable, but with pluggability you introduce other pain points.
09:54 VSpike olenz: ah indeed. My solution is better though :)
09:54 jcockhren for example, puppet uses hiera
09:54 mindlessdemon joined #salt
09:55 scalability-junk jcockhren: you can always do that. The pain associated with it is just varying or don't you agree?
09:55 jcockhren a company can switch to another tool that uses hiera
09:55 mindlessdemon joined #salt
09:55 jcockhren I mean... that's our jobs eh? to iterate on version 1
09:55 olenz VSpike: Yes, it is
09:56 jcockhren what database schema is perfect at v1?
09:56 jcockhren none
09:56 scalability-junk jcockhren: mine is more setting up version 1 :P the current infrastructure is more like 0.01 :P
09:56 diegows joined #salt
09:56 jcockhren release management
09:57 jcockhren your infrastructure _is_ versioned
09:57 jcockhren and breaking changes have to be introduced with care just like with any other software project
09:58 jcockhren they have to be planned and rolled out in pieces
09:58 glyf joined #salt
09:58 CycloHex when trying to deploy a minion over salt-cloud, I get the error that the deploy.sh failed.. Probably because it created a dir but that failed... I ran the script 3 times, I keep getting the same error
10:00 scalability-junk jcockhren: the plan is more to migrate the old infrastructure to a completely new one. Aka building every service with salt from scratch as right now there is nothing more than some bash scripts. So nothing pluggable to build on.
10:00 jcockhren scalability-junk: do it in pieces
10:01 bhosmer_ joined #salt
10:01 jcockhren you'll never ship if you try to automate every single thing at once
10:01 scalability-junk There are enough services to migrate so each service is my piece.
10:02 jcockhren plan it out. like... do all backups first
10:02 scalability-junk First thing is to automate the host setup, then each service container/vm
10:02 jcockhren or that ;)
10:02 scalability-junk after that yeah service gets logging, monitoring, tracking etc. pp
10:02 jcockhren define 'automate'
10:02 scalability-junk and hopefully in 10 years it's working :P
10:03 jcockhren b/c you can do logging without touching other stuffs
10:03 scalability-junk jcockhren: have a versioned setup script instead of manual setups.
10:03 jcockhren segmenting your services is a good start
10:03 jcockhren then grouping them together for specific host roles
10:03 scalability-junk Everything is already separated so that's good. I can gradually move over each vm to a new salt based infra.
10:04 jcockhren yep.
10:04 jcockhren or each service type
10:04 jcockhren (for all machines)
10:04 jcockhren like... firewall setup
10:04 kingel joined #salt
10:05 jcockhren done... secured... now pick another something
10:05 fbretel left #salt
10:06 scalability-junk Yeah the thing is I would love to have each service stand alone to a degree so actually the different services or states shouldn't care about setting up monitoring etc. but have one service inject itself into every setup.
10:06 Rory left #salt
10:06 scalability-junk I think it's much more modular, but it seems to introduce a lot of issues. So adding a monitoring require or something to the base state seems better.
10:07 jcockhren that's cool too. if per vm works for you. then do that
10:07 jcockhren just doing all the things at once may be a bit much at first
10:07 scalability-junk jcockhren: especially with the learning curve of salt :P
10:07 scalability-junk will start with local bootstrap scripts and move to salt master and etcd orchestration later on.
10:08 jcockhren word
10:08 scalability-junk I mean at least having versioned setup scripts would be a huge improvement.
10:09 VSpike I'm wondering how suitable salt is for deployment of your own code & applications to servers. I know there are a lot of special tools for automated deployment, but is there any good reason *not* to use salt for the job if you are already using it to build the servers?
10:09 VSpike Of course you could take the approach that you deploy by building new servers with the new code on them and once they are proven, throw away the old ones
10:11 scalability-junk VSpike: which is sometimes considered the best way.
10:11 scalability-junk With updating you can have leftovers, which interfere or make your setup not reproducible.
10:11 VSpike scalability-junk: yes. Possibly a problem with Windows in the mix, because licenses. But Windows is always a problem.
10:12 VSpike Fine if you're doing it on a cloud where the licences are covered
10:12 scalability-junk VSpike: I'm from the linux world so that doesn't matter :P
10:13 VSpike scalability-junk: me too, but forced at the moment reluctantly to deal with a mixed environment
10:13 scalability-junk One thing you have to keep in mind with not using new servers is that when you remove a package install, for example, you should add a cleanup state to actually uninstall it.
10:13 VSpike Yeah, good point
10:14 scalability-junk so when you change something, which needs a cleanup task, if the server is not recreated cleanup the mess ;)
10:14 scalability-junk It makes it slightly more reproducible.
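The cleanup idea above, sketched as a state (package name and state ID are hypothetical): dropping a pkg.installed entry does not uninstall anything on servers that are not rebuilt, so an explicit pkg.removed is needed.

```yaml
# Hypothetical package; pairs with removing its old pkg.installed state
cleanup-oldpkg:
  pkg.removed:
    - name: oldpkg
```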
10:15 VSpike I'm wondering how it would work in practice for workflow. Hopefully, devs will not be making changes all the time to qa or production environments, but I can see that they'd want to make changes to dev environments a lot. So they either have to learn to use salt-stack to do it, or they have to be trained to record and report all changes back to ops so they can be folded back into the states.
10:19 scalability-junk VSpike: not sure if you need to change so much in dev environments, but I would try to go the let them learn salt and edit states instead of relying on communication.
10:19 yomilk joined #salt
10:20 VSpike That would be better, I agree
10:20 scalability-junk you want versioned, reproducible environments... use versioning everywhere
10:20 scalability-junk VSpike: I would say not better, but necessary.
10:21 VSpike Part of the problem is the code, and part of the problem is the amount of config files
10:22 VSpike Code deployment is reasonably simple ... config files are harder
10:23 VSpike We have a lot of components and they are all quite dumb, but have a lot of dependencies on other services, all of which have to be wired up by the configuration files
10:23 VSpike Devs often reconfigure stuff on the fly, even on the live environment, to manage load or route around problems
10:24 scalability-junk VSpike: which should not be done with salt or at least not usually.
10:24 VSpike Yeah, quite ... that seems like a bad fit for salt. So, really you wouldn't want salt to write the config files in the first place
10:24 scalability-junk as every change needs to be added to the script later on. So it's (in my opinion) better to live edit a versioned script and manually let it be executed earlier perhaps, than to just change things on the fly.
10:25 VSpike Really, the architecture is wrong but that's hard to go back and fix. I'm almost thinking I need a service manager that knows all the components, where they are and what they connect to. It would watch the processes and manage their config files
10:25 scalability-junk VSpike: no you don't want someone manually writing the config ;)
10:25 VSpike It would be configured either by database or by yaml/json chunks
10:25 scalability-junk I think for you it needs to be more a cultural change than an infrastructure change.
10:26 VSpike That way, when you build a new environment with salt, it can tell the config manager about the env and its initial layout and the config manager could write all the config files. After that, changes would be made via the config manager
10:26 scalability-junk Yeah you just have to prevent manual edits. So removing ssh access for most of the engineers would be a great way for them to get into the habit to not change things on production :P
10:26 VSpike It would then audit all changes... and you could provide pretty front-ends for it if desired
10:27 scalability-junk Yeah the initial version has to be written by someone, but as you said that can be done via json/yaml or another external pillar "database"
10:30 VSpike Can anyone tell me what I'm doing wrong here? https://bpaste.net/show/f1cdcf1fd4f9
10:30 glyf joined #salt
10:30 VSpike calling state.highstate complains in the same way about win.global not being found, but it's definitely there
10:31 VSpike Is it a problem perhaps that it only contains includes?
10:36 SpX joined #salt
10:44 nihe joined #salt
10:45 jensnockert joined #salt
10:46 CycloHex joined #salt
10:47 nihe joined #salt
10:47 peters-tx joined #salt
10:49 ramishra joined #salt
10:49 ghanima joined #salt
10:56 pduersteler joined #salt
11:00 kingel joined #salt
11:03 duncanmv joined #salt
11:06 mrlesmithjr joined #salt
11:14 ndrei joined #salt
11:18 rawtaz that CeBe guy is a real quitter, that's for sure! ;)
11:18 scottpgallagher joined #salt
11:20 mage_ joined #salt
11:20 pduersteler question: Docs state that you only have to apply firewall rules to minions. So the master basically just publishes commands and minions receive them through polling, or how is it actually working? I have to put that into "management words" somehow and need to fully understand that thing
11:21 CeBe joined #salt
11:24 eliasp joined #salt
11:25 jY pduersteler: you open the ports on the master
11:26 jY the minions connect to master and the connection is kept open for zeromq to talk
11:27 pduersteler jY: so the docs basically just mean "make sure that minions can talk to the master through a possible firewall".
11:27 ggoZ joined #salt
11:28 rawtaz or just "make sure that the minions can connect to the master" :-)
11:28 jY yes you open 4505 and 4506 for the master
11:29 intellix joined #salt
11:31 scalability-junk pduersteler: what you mean is probably to have firewall or networking rules to only allow access to these ports from the master ip so no other master, at least without ip spoofing, can take control. (there is a lot more, but that is a way to prevent the first layer of break-in)
11:31 VSpike I'm really confused by this. Is there something special about the name 'global.sls' ?
11:32 pduersteler thx guys :)
11:33 rawtaz scalability-junk: what do you mean *from* the master? isnt the communication from minions to master, on some random high port on the minions, to 4505 and 4506 on the master?
11:34 scalability-junk rawtaz: that controlling access could be restricted to the master node. But that depends on the setup.
11:34 scalability-junk It could prevent minions talking to each other etc. pp
11:35 scalability-junk So I would usually go with a private config network firewalled to the other networks
11:36 otter768 joined #salt
11:39 pduersteler I'm interested in it as it may be used to maintain boxes deployed to networks outside of our control, but still with access to maintain and get / put data.
11:39 pduersteler hence my question
11:47 CycloHex Is it possible to automatically let a minion do a salt-call state.highstate after being deployed?
11:48 honestly i'd put it into a cronjob with @reboot
11:48 otter768 joined #salt
11:51 VSpike Can I safely delete C:\salt\var\cache\salt\minion ?
11:54 scalability-junk pduersteler: Would probably go for vpn into your config network... there is crypto etc. but another layer with auth control might be good.
11:54 CycloHex honestly: Isn't there a salt-function or module to do this?
11:54 scalability-junk then you at least don't have to open the maintenance network to the whole internet, but just to all your customers via vpn
11:55 scalability-junk pduersteler: if these setups are bigger you could use a master per customer and control the master with your central master and only use one connection between your master and the master in the customer datacenter
11:55 scalability-junk would make it faster latency wise and more scalable.
11:55 scalability-junk take a look at multiple master setups in the docs
11:56 honestly CycloHex: cron.file
11:56 pduersteler scalability-junk: basically it's just one minion per external network. the problem is more that every network has its own vpn, which, in the current "manual labor" setup, prevents any automatism at all, and we can't manage anything right now without switching vpn's (mixed software- and hardwareboxes- nets)
11:57 scalability-junk pduersteler: one solution would be for these external minions to use salt-ssh
11:57 scalability-junk so you actually don't need more than ssh access.
11:58 scalability-junk would make it much easier probably.
11:58 CycloHex honestly: thanks, I'm looking into Scheduler of salt atm, might use this, although the cron @reboot is what I need, since I don't want my minion to highstate every xSeconds
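The minion-side option CycloHex is after is startup_states; a sketch (assuming the default /etc/salt/minion config path) that runs highstate once when the minion process starts, rather than on a recurring schedule:

```yaml
# /etc/salt/minion - run state.highstate once at minion startup
startup_states: highstate
```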
11:58 pduersteler scalability-junk: thanks for the hint, will look into that
12:04 yomilk joined #salt
12:04 micah_chatt joined #salt
12:08 cheus joined #salt
12:10 micah_chatt joined #salt
12:10 yomilk joined #salt
12:17 to_json joined #salt
12:25 ndrei joined #salt
12:26 nyx joined #salt
12:29 salt-n00b joined #salt
12:29 glyf joined #salt
12:31 alanpearce_ joined #salt
12:33 brandon__ joined #salt
12:41 vejdmn joined #salt
12:42 j-saturne joined #salt
12:46 ndrei joined #salt
12:48 j-saturne joined #salt
12:56 yomilk joined #salt
12:59 oeuftete joined #salt
12:59 tligda joined #salt
13:00 bhosmer_ joined #salt
13:02 micah_chatt_ joined #salt
13:02 duncanmv joined #salt
13:05 blarghmatey joined #salt
13:05 ghartz joined #salt
13:05 scuwolf joined #salt
13:06 fredvd_ joined #salt
13:06 n1ck3 joined #salt
13:06 mdekkers joined #salt
13:07 tligda joined #salt
13:07 ndrei joined #salt
13:07 racooper joined #salt
13:07 gmcwhistler joined #salt
13:08 scuwolf joined #salt
13:09 patrek joined #salt
13:11 cpowell joined #salt
13:14 hasues joined #salt
13:16 darkelda joined #salt
13:16 mpanetta joined #salt
13:16 flyboy82 joined #salt
13:16 CycloHex Hello, I have my minion set up and in the config file it has a line which says: startup_states: highstate. Only problem is that it doesn't get in a highstate.. Later when I tried to manually salt-call state.highstate I got the following error: [WARNING ] SaltReqTimeoutError: Waited 60 seconds
13:16 CycloHex Minion failed to authenticate with the master, has the minion key been accepted?' But my keys are accepted, so that isn't the problem :s
13:18 halfss joined #salt
13:22 flyboy82 joined #salt
13:23 bluenemo joined #salt
13:28 bluenemo hi guys. I have an if statement in my recipes that checks and compares grains:osfinger, so I can have those states only be executed on distros I wrote and tested them for. If that if fails, I want to print some text without executing a state. I thought about just using {% print string %}, which wont work somehow: http://paste.debian.net/123030/
13:29 ghartz joined #salt
13:32 Zachary_DuBois joined #salt
13:35 bluenemo is there a preferred way to print some string without using a state?
13:35 halfss joined #salt
13:36 nitti joined #salt
13:37 nitti joined #salt
13:38 CycloHex has anyone else had problems with the startup_states: highstate
13:38 CycloHex ?
13:38 micah_chatt joined #salt
13:42 bluenemo CycloHex, can you be more specific?
13:44 CycloHex I deploy a minion via salt-cloud, in my /etc/salt/cloud config I added 'minion: startup_state: highstate'. But once the minion is deployed it doesn't pull the top.sls... When I log in to my minion and check the /etc/salt/minion file startup_state: highstate is in here.
13:44 dude051 joined #salt
13:45 diegows joined #salt
13:47 ramishra joined #salt
13:48 jhujhiti joined #salt
13:49 perfectsine joined #salt
13:49 ndrei joined #salt
13:52 diegows has anyone published what salt users have to do about the bash issue?
13:52 diegows if not, I can do it :)
13:52 babilen diegows: "salt '*' pkg.install bash refresh=True" is essentially what you want to run.
13:53 diegows yes, I know
13:53 diegows but I'd like to give a link to my colleagues/clients :)
13:53 diegows and we have to add a state with pkg.latest too
13:55 blarghmatey joined #salt
13:56 diegows well, I'll write something in my blog :)
13:56 mgw joined #salt
13:56 diegows we should have an execution module to do pkg.update ${PKG}
13:56 diegows I'm lazy to write refresh=True :)
13:57 rallytime joined #salt
13:58 istram joined #salt
13:58 nitti joined #salt
13:59 * honestly tries to figure out how to get salt-minions onto debian armel
13:59 thehaven joined #salt
14:00 ajprog_laptop joined #salt
14:00 mordonez__ joined #salt
14:00 babilen I honestly don't know
14:01 babilen honestly: What's your problem? 0.17.5+ds-1~bpo70+1 is in wheezy-backports and 2014.1.10+ds-2 in both sid and and jessie. Are you referring to salt's repository? Does that not contain packages for that architecture?
14:01 * babilen checks
14:02 honestly babilen: mmm, didn't have backports enabled
14:02 babilen joehh: Any hope for 2014.1.10 in wheezy-backports?
14:02 VSpike Can anyone tell me what I'm doing wrong here? https://bpaste.net/show/f1cdcf1fd4f9
14:03 honestly my salt-master version is 2014.1.10-1precise1
14:03 babilen honestly: I wouldn't necessarily recommend to run 0.17.5+ds-1~bpo70+1, but there does not seem to be a specific problem that prevents you from running salt on armel. Are you referring to saltstack's third-party repositories?
14:03 iggy is there any kind of "salt jobs" site?
14:03 perfectsine joined #salt
14:04 honestly babilen: I'm just trying to figure shit out
14:04 VSpike Here's how the file_roots are configured, and the directory listing https://bpaste.net/show/9bb3aa283bcb
14:04 babilen honestly: Okay, could you paste the output of "apt-cache policy salt-minion" to http://refheap.com please?
14:05 KennethWilke joined #salt
14:05 ghartz joined #salt
14:06 babilen honestly: And are you aware of http://docs.saltstack.com/en/latest/topics/installation/debian.html ? The third-party repository referenced in there contains armel packages.
14:06 honestly I'm using that now
14:06 babilen Okay, backtrack to the question before that.
14:07 babilen (or answer them sequentially)
14:07 honestly well
14:07 honestly Im
14:07 honestly I'm assuming this is what you want to see?   Candidate: 2014.1.10+ds-1~bpo70+1
14:07 honestly and since those versions mostly match I'll install that
14:08 babilen There should have been more output than that, but that would imply that you can install a "salt-minion" package on your box. You'd do that with "apt-get install salt-minion".
14:08 honestly oh bleh, it has weird depends
14:08 honestly babilen: I'm not a complete noob :P
14:08 * babilen learned to never assume anything
14:09 babilen "weird depends" ?
14:09 babilen The version running on your master and the candidate version on the minion is exactly the same (from salt's perspective)
14:10 honestly it need a version of python-zmq that isn't available
14:10 honestly probably need backports enabled
14:10 babilen It does indeed.
14:11 VSpike This is so strange. Why can I call all of the states from the minion apart from the global.sls which includes them all?
14:12 babilen honestly: Not on wheezy though. Could you paste the output of "apt-get -t wheezy-saltstack install -s salt-minion", "apt-cache policy", "apt-cache policy salt-minion python-zmq" to http://refheap.com ?
14:13 kusams joined #salt
14:14 vbabiy joined #salt
14:14 babilen VSpike: Mind pasting base/win/global.sls ? And you are sure that you can call *all* included states independently?
14:14 ericof joined #salt
14:14 oz_akan joined #salt
14:14 honestly babilen: https://www.refheap.com/1a3c5631090698dae034a3f70
14:15 ggalvao joined #salt
14:18 VSpike babilen: I found the issue. One of the included states was missing .. but it just didn't tell me that :)
14:18 babilen honestly: Okay, first thing I'd like to ask you is to replace "ftp.debian.org" with "http.debian.net" in your sources.list (or a geographical mirror such as ftp.CC.debian.org with CC in {jp,tw,de,uk,us,...} (country code)). It is curious that 13.1.0-1~bpo70+1 is in wheezy-backports and not in wheezy-saltstack (I'll investigate why in a second), but you should be able to install python-zmq from there.
14:18 babilen VSpike: It never does that. (I don't like that either, but that is why I asked if you can *really* call all included states independently)
14:19 honestly babilen: I poked aptitude to try harder and it worked out that it can install python-zeromq to resolve the dependency problem.
14:19 VSpike babilen: thanks :) I'll know for next time
14:20 babilen honestly: You could have also passed "-t wheezy-backports" to raise the priority of that, but still ...
14:20 diegows http://www.woitasen.com.ar/2014/09/salt-and-the-bash-security-issue/
14:20 ramishra joined #salt
14:20 babilen diegows: I don't like pkg.latest in my states, but ta
14:21 diegows I like feedback
14:21 diegows what option do we have in cases like bash, openssl and other critical security issues?
14:21 babilen (you should have made that pkg.installed with refresh: True)
14:22 babilen diegows: In my world a highstate run should preferably be a no-op in the sense that changes will have to be pushed explicitly. In particular upgrades of packages should be performed deliberately and not on every highstate run.
14:22 ckao joined #salt
14:23 dhwty joined #salt
14:23 diegows I agree, but I prefer to be sure that critical updates are applied
14:23 diegows they are just special cases
14:24 babilen They will be on newly provisioned boxes if you ensure that you refresh *once*. It's just that the package shouldn't be upgraded if it has been installed already. (you'd do that manually with "salt '*' pkg.install bash refresh=True")
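babilen's approach can be sketched as a state (the ID is just the package name): pkg.installed keeps highstate a no-op once bash is present, and the actual upgrade is pushed deliberately with `salt '*' pkg.install bash refresh=True`.

```yaml
# Ensures bash is installed (refreshing the package db first), but does
# not upgrade it on every highstate run the way pkg.latest would
bash:
  pkg.installed:
    - refresh: True
```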
14:24 ggalvao I believe special cases could be solved with 'salt '*'', couldn't they?
14:24 babilen yeah
14:25 nullscan left #salt
14:27 ggalvao does anyone here know any way to run a mysql.query command scheduled on the master instead of having to put it on the minion config file?
14:28 KennethWilke ggalvao: that module's functions allow passing those values as keyword arguments
14:28 ggalvao yeah, that's not the issue
14:28 ggalvao when put on schedule on the master it won't run
14:28 ggalvao gives '[INFO    ] Invalid function: extensions in job mysql.query. Ignoring.'
14:29 ggalvao extensions = job name
14:29 ggalvao from the documentation it says scheduled jobs on the master can only invoke 'runner functions'
14:29 ggalvao and I am not sure how to circumvent this problem
14:29 KennethWilke hmm, i'm not familiar with that bit
14:30 ggalvao kk ty anyway :)
14:30 KennethWilke np
14:30 patrek joined #salt
14:31 ramishra joined #salt
14:31 timoguin ggalvao: you'd need to create a custom runner that calls that module. examples here: https://github.com/saltstack/salt/tree/develop/salt/runners
14:31 timoguin or actually....
14:31 timoguin you could make your master a minion of itself
14:32 timoguin and just schedule it normally to run on your "minion"
14:32 wnkz__ joined #salt
14:32 ggalvao hmm
14:32 ggalvao but I need it to run on every minion connected, actually
14:32 anotherZero joined #salt
14:34 ggalvao I'll look into a custom runner, ty timoguin
14:34 timoguin ggalvao: you shouldn't need one if you're just trying to run a module on the minions
14:34 timoguin take a look at the scheduler docs: http://docs.saltstack.com/en/latest/topics/jobs/schedule.html
14:34 ggalvao I did
14:35 timoguin i'd schedule it via highstate, or a state, or maybe the minion config file
14:35 ggalvao and I already ran that one on the minions but by configuring the scheduled tasks on each minion
14:35 ggalvao how would I do it via a state?
14:35 timoguin and you want to make a simpler config on the master so it's just one command?
14:35 ggalvao can I enclose a 'schedule' clause on a file.sls?
14:35 timoguin http://docs.saltstack.com/en/latest/topics/jobs/schedule.html#states
14:36 timoguin I've never actually used the scheduler, so I'm just looking at the docs
14:36 ggalvao I saw that. Not sure how I can call 'mysql.query' from a state
14:36 timoguin function: mysql.query
14:36 ggalvao I am actually a bit confused conceptually right now
14:37 ggalvao because running this mysql query doesn't look like a 'state' to be achieved
14:37 ggalvao but I guess I could make a state and run the function from there
14:37 ggalvao I'll test this right now, ty :)
14:38 timoguin you could also just do a pretty simple schedule (maybe even cron job) on the master that will call salt 'minions' mysql.query
14:39 ggalvao yeah. that would probably be much simpler, yeah
14:40 timoguin state seems a bit too heavy for what you're trying to do
14:40 Jarus joined #salt
14:40 mschiff joined #salt
14:40 mschiff joined #salt
14:41 eunuchsocket joined #salt
14:43 jab416171 joined #salt
14:43 jdmf joined #salt
14:45 ghartz joined #salt
14:45 thayne joined #salt
14:45 CeBe joined #salt
14:47 SheetiS joined #salt
14:47 SheetiS joined #salt
14:47 jonatas__ joined #salt
14:49 kaptk2 joined #salt
14:51 ggalvao timoguin, just to clarify
14:51 ggalvao Salt execution modules are different from state modules and cannot be called directly within state files. You must use the module state module to call execution modules within state runs.
14:51 ggalvao you cannot call mysql.query inside a state definition
14:51 ggalvao :(
14:52 ggalvao I guess the only options now are cron and custom runner
14:52 timoguin You can. You just have to use module.run if it's inside an SLS.
14:52 ggalvao hm
14:52 timoguin That lets you call any of the available execution modules inside an SLS.

14:52 KennethWilke http://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html#module-salt.states.module
14:52 timoguin Which you should generally keep to a minimum.
14:52 KennethWilke for what he's mentioning
14:53 ramishra joined #salt
14:53 KennethWilke there's also ways to do it in jinja or mako
14:53 debian112 joined #salt
14:53 KennethWilke though as timoguin that should probably be used sparingly
14:54 KennethWilke as he said*
14:54 ggalvao nice
14:54 ggalvao I didn't know module.run
14:54 ggalvao that should suit the problem, I guess
14:54 KennethWilke completely understandable :p there are a bunch of modules!
14:54 ggalvao thanks a bunch, guys
14:54 pdayton joined #salt
14:54 KennethWilke no problem
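A minimal SLS sketch of the module.run approach linked above; the state ID, database name, and query are made-up placeholders. Keyword arguments after `name` are passed through to the execution module:

```yaml
run_report_query:
  module.run:
    - name: mysql.query
    - database: appdb                         # placeholder database
    - query: "SELECT COUNT(*) FROM sessions"  # placeholder query
```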
14:54 timoguin so many module
14:55 KennethWilke such great
14:55 ggalvao wow
14:55 ggalvao =p
14:55 timoguin haha teamwork
14:55 ggalvao hahahha
14:55 KennethWilke lol
14:58 jaimed joined #salt
14:59 glyf joined #salt
14:59 jalbretsen joined #salt
15:03 TyrfingMjolnir joined #salt
15:03 kusams_ joined #salt
15:11 ajolo joined #salt
15:13 gmcwhistler joined #salt
15:13 timoguin used salt-ssh for the first time yesterday to patch our non-salted servers. worked like a charm!
15:16 wendall911 joined #salt
15:16 wendall911 joined #salt
15:24 perfectsine joined #salt
15:26 debian112 anyone running multi-environments?
15:28 kingel joined #salt
15:28 jslatts joined #salt
15:28 ajolo joined #salt
15:29 nitti_ joined #salt
15:30 thayne joined #salt
15:31 kusams joined #salt
15:32 ndrei joined #salt
15:33 dccc joined #salt
15:34 mapu joined #salt
15:35 Katafalkas joined #salt
15:38 jdmf I want to set gid=uid within user.present, but this will not work. Currently I perform group.present first to make it work. Is there a way to make uid=gid with only using user.present?
15:40 perfectsine joined #salt
15:41 ndrei joined #salt
15:41 timoguin jdmf: use jinja variables maybe: {% set uid = 4000 %}
15:41 timoguin and then {{ uid }}
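A sketch of the jinja-variable approach: the group still has to exist first, but a single variable keeps uid and gid in sync (the name `deploy` and the id 4000 are placeholders):

```yaml
{% set uid = 4000 %}

deploy:
  group.present:
    - gid: {{ uid }}
  user.present:
    - uid: {{ uid }}
    - gid: {{ uid }}
    - require:
      - group: deploy
```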
15:43 jdmf timoguin: never used jinja, would you happen to have a few examples to point me in the right direction?
15:44 smcquay joined #salt
15:44 SheetiS http://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.jinja.html has lots of examples
15:44 PI-Lloyd beat me to it by a second :p
15:44 SheetiS :D
15:48 to_json joined #salt
15:49 StDiluted joined #salt
15:51 wangofett joined #salt
15:51 pdayton joined #salt
15:52 StDiluted joined #salt
15:54 ndrei joined #salt
15:54 Gareth morning morning.
15:54 wangofett anyone experience salt minions hanging/freezing on ubuntu 14.04?
15:55 UtahDave joined #salt
15:55 wangofett it seems to coincide with problems executing states
15:55 wangofett e.g. 2014-09-24 17:06:34,043 [salt.state       ][ERROR   ] patient_portal: ERROR (abnormal termination)
15:56 wangofett then it plugs up the ports and salt can't communicate
15:56 jemejones joined #salt
15:56 diegows joined #salt
15:56 Gareth UtahDave: morning.
15:56 UtahDave morning, Gareth!
15:57 TheThing joined #salt
15:57 perfectsine joined #salt
15:57 wangofett UtahDave: you ever hear of a failed state execution plugging up ports?
15:58 wangofett at least that's what it appears to be doing
15:58 tligda joined #salt
15:58 UtahDave No, I've never heard anything like that.  Can you pastebin the output of salt 'minion' test.versions_report and the state that's causing this? and the output on the cli?
16:00 wangofett It seems to be intermittent, I'm still trying to figure out what is causing it - I thought I had narrowed it down to the minion (nmap -sS -q -p 4505-4506 salt.master shows the ports as filtered)
16:00 VSpike A london meetup, hey? Good idea
16:01 VictorLin joined #salt
16:01 wangofett typically as I restarted the minion service life was happy for a minute
16:02 ndrei joined #salt
16:03 to_json joined #salt
16:03 wangofett UtahDave: https://gist.github.com/waynew/aee4052323bf357f4d81
16:08 halfss joined #salt
16:09 n8n joined #salt
16:11 VSpike If you want your states to decide what a machine is going to be (e.g. web, db, mail) in a way that's portable across lots of environments (i.e not depending on AWS tags or similar) is a structured hostname the best way to do it?
16:12 VSpike I've always been a bit wary of naming schemes. Apart from emacs vs. vi they are one of the best ways to start a religious war
16:12 KennethWilke yeah that's probably all down to personal taste
16:13 KennethWilke i like to target by structured hostnames mysql
16:13 KennethWilke myself* rather
16:13 KennethWilke though as you can tell i was thinking of mysql-n* nodes while answering that
16:13 VSpike Heh finger macro
16:13 wangofett UtahDave: I just updated that gist with nmap/lsof output
16:14 KennethWilke thought i also think setting a grain like 'role' is a pretty good solution too
16:14 KennethWilke though*, blah can't type today
16:14 SheetiS I inherited an unmanaged environment, so I have a 'grains formula' that has custom 'role' grains and then 'categories' of roles that tell each machine what it should get.  Unfortunately I couldn't rely on a naming scheme, and this was quicker to deploy out.
16:15 VSpike So when provisioning a machine, how do you set those grains.. by hand? Or does your provisioning tool do it?
16:15 KennethWilke custom grains can be added as a file on the minion you create during provisioning
16:15 SheetiS I have a single pillar that I keep all of my systems in.  I pre-populate this pillar myself.
16:15 KennethWilke as well they can be set via execution modules or state modules. ie `salt '*' grains.setval role web`
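Once a `role` grain is set as described, states can be matched on it in the top file — a sketch where the role names and SLS targets are placeholders:

```yaml
base:
  'role:web':
    - match: grain
    - webserver
  'role:db':
    - match: grain
    - database
```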
16:16 SheetiS I don't like grains on my minions that weren't managed themselves, so I do it a little 'weird' I guess.
16:16 KennethWilke nah i think you're solution is also good
16:16 KyleG joined #salt
16:16 KennethWilke your*
16:16 KyleG joined #salt
16:17 KennethWilke but that's one of the things about salt, it's not going to enforce a paradigm that's going to incite religious war
16:17 KennethWilke if they can help it
16:17 SheetiS I loved the flexibility to come up with something that worked for me.
16:18 KennethWilke hear, hear!
16:18 VSpike There are a few remnants of an old naming system here, designed by a networks guy, so I have a few names like LN-GR-L3-WR01 .. for london, goswell road, level 3, something Router 01
16:18 KennethWilke computers don't tell me what to do, i tell them what to do!
16:18 n1ck3 joined #salt
16:18 iggy the voices tell me what to do
16:19 VSpike My tapeworm tells me what to do
16:19 VSpike Or SA-OS-PC-DC01 .. for Salisbury, Old Sarum, Portway Centre, Domain Controller 01. Which no longer makes any sense as we moved office.
16:19 dalexander joined #salt
16:19 aparsons joined #salt
16:20 felskrone joined #salt
16:20 VSpike WAN Router? I dunno
16:21 VSpike SheetiS: can you explain a bit more how your system works? It sounds interesting
16:21 timoguin WHOA Router? WOW Router?
16:21 TheThing joined #salt
16:22 arnoldB is this still #salt ? :)
16:22 KennethWilke lols
16:22 VSpike salt-ish
16:22 n8n joined #salt
16:22 ndrei joined #salt
16:23 elfixit joined #salt
16:23 SheetiS VSpike: this is a (poorly written) blog post I made on what I'm doing and why http://devop.ninja/configuration%20management/2014/08/25/the-grains-conundrum/.  I have a basic version of the formula used here: https://github.com/rfairburn/salt-grains-formula
16:23 CeBe joined #salt
16:24 arnoldB SheetiS: first comment: +1 for the TLD :)
16:24 ecdhe I need to target a file that I don't know the name of -- but I do know a pattern for it.
16:25 ecdhe How can I set the name of a file.managed dynamically?
16:27 arnoldB SheetiS: are you aware of the possible security risk when defining roles on client side?
16:27 timoguin ecdhe: you mean something like file-12324t2311.txt on the minion where you want to manage the file but don't know the last part of the filename?
16:27 to_json joined #salt
16:28 Katafalkas joined #salt
16:29 VSpike SheetiS: that's very  nice
16:29 SheetiS arnoldB: right now I use an orchestration for all highstates that refreshes the grains formula immediately before a highstate as part of what it does.  (to keep from having to run a highstate twice with the methodology I use).  This leaves a VERY narrow window for someone to change the grains.
16:29 wangofett UtahDave: the plot thickens - changing the logfilelevel to debug seems to ameliorate my situation
16:30 wangofett spoke too soon
16:30 rihannon joined #salt
16:30 ecdhe timoguin, yes, like that.
16:30 ecdhe It's actually only a single digit that I don't know.
16:30 mpanetta SheetiS: Thanks for the example links :)
16:32 wangofett hmmm... or maybe it *is* working, it just appears not to be because it doesn't return info to the master & master gives up waiting
16:32 jonatas_oliveira joined #salt
16:33 rihannon left #salt
16:35 ecdhe timoguin, I can easily target the file in bash; a pattern like /sys/devices/mydev* will get it, but salt doesn't like using a pattern as a name in file.managed.
16:37 smcquaid joined #salt
16:37 timoguin ecdhe: you can do some module calls with jinja. one sec.
16:38 wangofett UtahDave: ahah. Looks like it's just taking /forever/ to return and things have been timing out
16:38 wangofett like... even test.ping
16:38 notpeter_ joined #salt
16:38 sudarkoff joined #salt
16:39 timoguin ecdhe: something like this: https://gist.github.com/timoguin/51778e6ecd64ca5ab775
16:39 timoguin not tested
16:39 ecdhe thanks timoguin!
16:39 StDiluted_ joined #salt
16:40 aparsons joined #salt
16:41 VictorLin joined #salt
16:42 ecdhe timoguin, in file.find(name='*pattern*'), the docs say the name field is a "path-glob".
16:42 ecdhe Is that full regex?
16:43 ecdhe My file has a period '.' in it, so I don't know if I need to escape it.
16:44 SheetiS I bet it is a simple glob similar to if you would run 'find / -name \*globbed.name\*' from the command line
16:44 SheetiS I would guess not to start with
16:45 arnoldB SheetiS: if it's useful for you could try Foreman which supports external node classification for Salt for some weeks: https://github.com/theforeman/foreman_salt/wiki
16:45 arnoldB SheetiS: here you can control the roles via the master
16:46 arnoldB SheetiS: host groups and so on
16:46 timoguin ecdhe: name is just a glob, but it supports passing iname, regex, iregex, type, and a handful of others
16:46 thayne joined #salt
16:46 timoguin the options that the normal find command supports
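A hedged sketch of the render-time lookup timoguin's gist describes: resolve the real path with `file.find` at template time, then feed it to `file.managed`. The directory, glob, and source path are placeholders:

```yaml
{% set matches = salt['file.find']('/sys/devices', name='mydev*') %}
{% if matches %}
{{ matches[0] }}:
  file.managed:
    - source: salt://mydev/config    # placeholder source
{% endif %}
```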
16:50 SheetiS arnoldB: Thanks I will take a look at it sometime soon.  I'm in a bit of a time crunch right now for some things, so I might not have time to evaluate for implementation right now.  Maybe I can try and include it with upgrade planning for 2014.7 or something.
16:50 capricorn_1 joined #salt
16:50 ecdhe thanks timoguin, this is working very very well!
16:51 timoguin great!
16:52 esogas_ joined #salt
16:56 wnkz joined #salt
16:58 kingel joined #salt
17:01 notpeter_ joined #salt
17:02 bezeee joined #salt
17:05 troyready joined #salt
17:06 jindo joined #salt
17:06 mgarfias joined #salt
17:06 aparsons_ joined #salt
17:08 ndrei joined #salt
17:09 aparsons joined #salt
17:11 sherbs_tee joined #salt
17:12 QuinnyPig UtahDave: https://github.com/saltstack/salt/issues/16147 <-- Okay, what idiotic thing have I done now? :-)
17:13 * UtahDave looking
17:13 holler joined #salt
17:15 aparsons_ joined #salt
17:21 murrdoc joined #salt
17:23 perfectsine joined #salt
17:24 UtahDave hm.  Ok, testing.   I also edited your post with ```   around each section for readability
17:24 diegows joined #salt
17:25 QuinnyPig UtahDave: Ah, thanks.
17:25 holler hello, I am new to salt and trying to get a grasp on where to start... I am working on a django as api project and using vagrant with basic provision shell script atm.. recently my friend showed me salt and I want to try it out.. My use case right now is I want to have developers be able to spin up a vagrant box and have all packages and configurations automated.. There would be a develop version for local dev and probably a staging/production version..
17:25 holler where to start for first case of local developer ready-to-go vagrant box?
17:27 ghanima joined #salt
17:27 QuinnyPig holler: https://docs.vagrantup.com/v2/provisioning/salt.html is a good start.
17:28 holler QuinnyPig: ok so I have that open but I first notice this: Your minion file must contain the line file_client: local in order to work in a masterless setup.
17:28 holler where does the minion file go?
17:28 vukcrni joined #salt
17:30 to_json1 joined #salt
17:30 QuinnyPig holler: /etc/salt/minion
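A minimal masterless `/etc/salt/minion` sketch; the `file_roots` path shown is the conventional default and may differ per setup:

```yaml
file_client: local
file_roots:
  base:
    - /srv/salt
```

States are then applied locally with `salt-call --local state.highstate`.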
17:36 cpowell joined #salt
17:37 scoates joined #salt
17:38 UtahDave QuinnyPig: I'm testing on the latest in 2014.7 branch and it's not working for me either, though I'm getting a different error
17:38 QuinnyPig UtahDave: Fascinating.
17:39 kickerdog joined #salt
17:42 UtahDave QuinnyPig: Does it ask you if you want to deploy the salt-ssh key?
17:44 holler QuinnyPig: one thing Im trying to figure out is do I keep all of my deployment config files out of my application repo? or do I include a /salt folder with all of that in the repo? use case is a developer setting up new local dev environment
17:48 aparsons joined #salt
17:48 jforest joined #salt
17:48 Ryan_Lane joined #salt
17:50 n1ck3 joined #salt
17:50 Oxf10e joined #salt
17:51 beneggett joined #salt
17:51 esogas_ joined #salt
17:51 QuinnyPig UtahDave: It does.
17:51 cedwards joined #salt
17:51 UtahDave OK, so I think we're seeing the same behavior
17:51 QuinnyPig UtahDave: In this environment, I don't. :-
17:52 UtahDave I said yes, input my password and it successfully deployed the key and things worked perfectly after that
17:52 UtahDave but just using the password isn't working.  I'm doing a little more testing
17:53 MatthewsFace joined #salt
17:53 QuinnyPig UtahDave: Yeah, that's my experience as well.
17:56 jonatas_oliveira joined #salt
17:56 bhosmer_ joined #salt
18:03 viq joined #salt
18:03 debian112 I got a pillar question for multi-environments
18:04 debian112 how can I view pillars in different environments?
18:04 jalaziz joined #salt
18:05 perfectsine joined #salt
18:05 delinquentme joined #salt
18:06 iggy we had unending problems with pillars and multi-environments... we eventually just gave up on it for now
18:08 smcquay joined #salt
18:08 ndrei joined #salt
18:09 ajolo joined #salt
18:09 halfss joined #salt
18:10 kballou joined #salt
18:11 jY debian112: i used salt hiera
18:11 debian112 jY. I have been trying to view pillars in another environment, but nothing works
18:11 debian112 salt-call pillar.get server_gt saltenv='greentpa'
18:12 debian112 that would be nice
18:12 debian112 if that worked
18:12 UtahDave QuinnyPig: That should be fixed today, most likely.
18:12 UtahDave iggy and debian112 is there an open issue on that pillar environment issue?
18:13 iggy I think there's one dealing with gitfs pillars and only the first one being read or something
18:13 chrisjones joined #salt
18:14 iggy I don't really open tickets since when I do they just get closed as being too broad instead of asking me to clarify things
18:15 Daviey joined #salt
18:15 UtahDave iggy: The SaltStack project has closed your tickets for being too broad?
18:15 iggy si
18:15 Ryan_Lane iggy: ouch, really? can you point me at some?
18:15 Ryan_Lane that's not good behavior
18:16 UtahDave iggy: really?  We really make a big effort not to do that. Which tickets did that happen on ?
18:16 Ryan_Lane I haven't had that happen to me yet (and I open a shitload of bugs)
18:16 saggy i am getting an error on my formula- can someone help?
18:17 UtahDave iggy: Hm. found it.
18:17 ggalvao guys, can I use something like
18:17 ggalvao /var/www/index.html:                        # ID declaration
18:17 ggalvao file:                                     # state declaration
18:17 ggalvao - managed                               # function
18:17 ggalvao - source: salt://webserver/index.html   # function arg
18:17 ggalvao - require:                              # requisite declaration
18:17 ggalvao - pkg: apache                         # requisite reference
18:17 ggalvao but for minion conf?
18:17 saggy Rendering SLS "base:states.ea.users.corp" failed: Conflicting IS 9
18:18 iggy ggalvao: you can use something like a pastebin
18:18 saggy Rendering SLS "base:states.ea.users.corp" failed: Conflicting ID 9
18:18 debian112 @UtahDave
18:18 timoguin ggalvao: yes, you can manage the minion config
18:18 ggalvao iggy: sorry :)
18:18 ggalvao timoguin, nice, ty
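A sketch of managing the minion config as a salted file, as timoguin confirms is possible; the source path is a placeholder, and the service name can differ per distro (restarting the minion from within a run it is applying also has caveats):

```yaml
/etc/salt/minion:
  file.managed:
    - source: salt://salt/files/minion   # placeholder source

salt-minion:
  service.running:
    - watch:
      - file: /etc/salt/minion
```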
18:18 debian112 @UtahDave: I don't think there is a ticket for that
18:18 debian112 how can I open one?
18:19 UtahDave debian112: Would you mind opening an issue for that?
18:19 UtahDave Yeah, that would be really helpful.  Include as much info as you can so we can try to reproduce it.
18:19 UtahDave saggy: that means you have two ID declarations with the same name
18:19 debian112 Where do I go to open one?
18:20 UtahDave debian112: https://github.com/saltstack/salt/issues/new
18:20 UtahDave iggy: sorry about that. I'm going to reopen that issue.
18:20 SheetiS UtahDave: have that link permanently in your clipboard 'just in case'? :D
18:20 UtahDave I should!  :)
18:20 Katafalkas joined #salt
18:20 iggy keep in mind that ticket was opened my second day using salt-cloud and I didn't have a clue what info was required for it
18:20 saggy oh - i thought about that - but can't find any such id. I have ids starting with 9 but they are 6 digits long, none are just 9
18:21 iggy UtahDave: as I stated in that bug, we abandoned salt-cloud
18:21 iggy I really have no interest in pursuing it
18:22 UtahDave saggy: Yeah, I doubt it's named 9.  I think we've improved the error reporting in newer versions of Salt.
18:22 UtahDave iggy: OK. Sorry about that. I'll make sure this gets addressed internally.
18:23 saggy ok so it will be a little tricky trying to guess that. never mind i will search my salt formulas
18:24 iggy honestly, as someone who's worked on open source projects for years, I fully understand that was a terrible bug report, but I honestly had no idea which way was up with salt-cloud to know what I did need
18:24 Ryan_Lane iggy: even if so, it's better to ask for followup info, rather than to close the issue.
18:25 debian112 @UtahDave: https://github.com/saltstack/salt/issues/16154
18:27 UtahDave debian112: awesome, thanks!  I wanted to make sure that doesn't get lost
18:27 debian112 @UtahDave thanks!
18:28 to_json joined #salt
18:30 aparsons joined #salt
18:32 aparsons_ joined #salt
18:33 aparsons joined #salt
18:34 diegows any simple tool to recommend to do auto deploy?
18:34 jergerber joined #salt
18:34 diegows I don't need CI, buildbot, something like that
18:34 diegows just a simple notification that something change in the repo to update an instance
18:34 diegows simple, I could write it but I'm lazy with that too :)
18:34 debian112 any idea how I can use pillars with: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.host.html
18:35 blackhelmet joined #salt
18:36 Ryan_Lane diegows: well, if you have web-hooks you can use salt-api + reactors
18:36 Ryan_Lane github, for instance, will send a web-hook on changes to a repo
18:36 Ryan_Lane salt-api can consume that web-hook and a reactor could take an action based on the hook
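A sketch of the reactor wiring Ryan_Lane describes — the hook tag, reactor path, target, and SLS name are all hypothetical:

```yaml
# master config: map the webhook event tag to a reactor SLS
reactor:
  - 'salt/netapi/hook/deploy':      # hypothetical salt-api hook tag
    - /srv/reactor/deploy.sls

# /srv/reactor/deploy.sls: run a deploy state on matching minions
update_checkout:
  local.state.sls:
    - tgt: 'web*'                   # hypothetical target
    - arg:
      - myapp.deploy                # hypothetical SLS name
```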
18:37 timoguin I'm about to implement auto-deploy with web-hooks and a Flask app that ingests them and fires events up to the salt master
18:37 * diegows loves salt every day more and more
18:37 diegows Ryan_Lane, thanks!
18:37 Ryan_Lane timoguin: why not salt-api?
18:37 Ryan_Lane diegows: yw
18:38 timoguin Ryan_Lane: because my master is inside a VPC in a private subnet, and i don't want to open it up.
18:38 Ryan_Lane ah
18:38 Ryan_Lane timoguin: did you see: http://scm.io/blog/hack/2014/08/salt-event-hub/
18:38 diegows or a reverse proxy may be :)
18:38 timoguin Ryan_Lane: dude!
18:39 timoguin no i hadn't seen that.
18:39 Ryan_Lane :)
18:39 Ryan_Lane I still think the same can be done in salt-api
18:39 Ryan_Lane salt-api can send events
18:39 timoguin Hubot is something we were wanting to do too. I already wrote the flask piece to ingest the hooks, just not the firing events piece
18:40 timoguin Doesn't salt-api run on the master though?
18:40 Ryan_Lane hm. does it?
18:40 Ryan_Lane it may
18:40 timoguin yea it does
18:41 Ryan_Lane it would be nice if that could be decoupled some :)
18:41 nitti joined #salt
18:41 iggy ^
18:41 iggy that's one reason we aren't using it
18:41 timoguin so i'll have a teeny instance sitting on a public subnet that will fire the events
18:42 iggy hmmm, teeny instances
18:42 timoguin teeeeeny-tinnnny
18:42 iggy is that gce?
18:43 timoguin nah AWS
18:43 iggy wherever it is, I like the naming scheme!
18:44 cedwards left #salt
18:48 mordonez__ joined #salt
18:51 jeffspeff joined #salt
18:51 jeffspeff how can i get the total hdd size of each hdd in a minion?
18:52 jeffspeff any google search involving the word "salt" and hdd/hard drive/etc gives a bunch of results about using table salt to better align the bits on a platter for better space utilization.
18:52 UtahDave jeffspeff: does   salt \* disk.usage   give you the info you're looking for?
18:52 UtahDave hm. that seems to be just filesystem info, actually.
18:54 jeffspeff UtahDave, that helps actually. is there a way to parse those size values into something more readable than blocks?
18:58 UtahDave jeffspeff: Hm. I don't think that module has any options for that... yet
18:59 ggalvao am I right to assume that you can only specify a returner on a scheduler if it's running on a minion?
19:00 kickerdog is there a state for MS SQL Server?
19:00 timoguin nope
19:00 kickerdog nuts
19:01 UtahDave I think there's a returner for MS SQL server, though
19:01 kickerdog oh?
19:01 UtahDave which is kind of backwards.  Usually we have execution modules and states first, then returners
19:02 UtahDave Hm. Actually I'm not seeing it in the docs
19:02 timoguin it's the odbc returner
19:02 UtahDave ah, ok.  I thought we had someone build that
19:03 timoguin https://github.com/saltstack/salt/blob/develop/salt/returners/odbc.py
19:03 timoguin looks like it'd be pretty simple to turn it into a module / state
19:03 kermit joined #salt
19:04 jslatts joined #salt
19:04 perfectsine joined #salt
19:07 pdayton joined #salt
19:11 Katafalkas joined #salt
19:13 to_json joined #salt
19:15 ggalvao guys, how can I call module.run on a master (on a scheduler) and pass a returner at the same time?
19:16 ggalvao apparently I am only able to setup a returner if the scheduled job will run on a minion
19:16 UtahDave ggalvao: The problem is that by definition a returner runs on a minion
19:17 ggalvao I see.
19:17 ggalvao alright
19:18 jalaziz joined #salt
19:18 ggalvao I had set 5 scheduled sql queries on /etc/salt/minion
19:18 ggalvao and I was trying to do the same but calling them from the master
19:18 ggalvao I guess there is not easy way to do this, then?
19:18 ggalvao *no
19:19 johtso joined #salt
19:20 UtahDave ggalvao: well, you could run a minion on the salt master
19:22 skyler_ UtahDave: I had an idea I wanted to run by you to find out if anything like this exists: a testing framework built in salt to verify that the states work properly.
19:23 skyler_ For example, you could run a command like `salt-run state.run_tests` and then it would run commands on minions and make sure that they have the expected output.
19:23 skyler_ So you could have a directory /srv/tests that has your tests. Use a top file to match tests to minions, then make sure that the minions have the capabilities you want them to.
19:24 wangofett skyler_: in theory, that's how you setup your state files and when they run \o/
19:24 gmcwhistler joined #salt
19:25 skyler_ wangofett: Say I have a minion with state X and it works. Then I add state Y to the minion and the functionality of state X stops working in the minion. I don't think that the burden is on state X to test for this or state Y.
19:30 skyler_ wangofett: Also, are we really supposed to write out states to test themselves? If I have a state for an email server, should it really send a test email from one user to another, then delete the email? This is the kind of thing that you might test, but it would seem like a lot of clutter in a state.
19:30 timoguin skyler_: check out kitchen-salt. states are applied to vagrant hosts and tests are written in RSpec.
19:31 beneggett joined #salt
19:31 ggalvao UtahDave: but that minion on the salt master wouldn't be able to broadcast the sql query to all minions, would it?
19:32 timoguin ggalvao: commands are broadcast to all minions by default
19:32 UtahDave ggalvao: not unless you set up peer communication
19:32 timoguin and each minion determines if it matches and needs to run
19:33 UtahDave skyler_: Yeah, we'd love to have something like that. We've had some design discussions around thtat idea, but haven't had the bandwidth to make it happen.
19:33 skyler_ timoguin: Thanks, I will have to take a closer look at that.
19:34 thedodd joined #salt
19:34 timoguin skyler_: it's a plug-in for Test Kitchen, which is popular for testing chef cookbooks/recipes
19:37 skyler_ UtahDave: Cool, I will keep my eyes open for any development in that direction. If I have time, I might try to make a proof of concept or start working on such a system.
19:38 UtahDave skyler_: cool!  I'd love to see whatever you come up with.
19:41 debian112 I am trying to set /etc/hosts with: server 192.168.21.2
19:41 murrdoc joined #salt
19:41 debian112 any ideas? but I want to use pillars
19:44 UtahDave debian112: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.host.html#module-salt.states.host
19:47 aparsons joined #salt
19:47 spookah joined #salt
19:47 aparsons_ joined #salt
19:55 skyler_ Say I have a state that has a pillar that defines its database. The pillar contains a dictionary "databases". Now say I want to add another pillar for a different state to the same minion that also has a "databases" dictionary. Is there anyway to cleanly merge them instead of getting an error?
19:55 skyler_ It would be nice to be able to use the same name so that I can just iterate over that dictionary in my mysql state.
19:56 timoguin skyler_: if the keys are unique, they'll be merged. so you can have databases:\n - foo in one SLS and databases:\n - bar in another
19:56 timoguin they'll both be combined under the databases dict
19:58 skyler_ timoguin: Awesome, I thought I would get an error for some reason.
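For illustration, two pillar files with unique keys under the same top-level dict; the file names and contents are placeholders. If both are assigned to a minion in the pillar top file, `pillar.get databases` returns a single merged dict containing both keys:

```yaml
# pillar/webapp.sls (hypothetical)
databases:
  app_db:
    user: app

# pillar/blog.sls (hypothetical)
databases:
  blog_db:
    user: blog
```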
19:59 dmick left #salt
20:05 debian112 @UtahDave: I saw that, but when I try to use a pillar it just return exactly what I put there, and not the pillar info
20:06 debian112 I was thinking that it didn't support pillars there?
20:06 perfectsine joined #salt
20:07 martoss joined #salt
20:07 lpmulligan joined #salt
20:07 dusel joined #salt
20:08 UtahDave debian112: you can use jinja most places.  can you pastebin what you tried?
20:09 martoss1 joined #salt
20:10 jslatts joined #salt
20:10 kermit joined #salt
20:12 jalaziz joined #salt
20:13 holler is it possible to use git repositories with a masterless minion? with gitfs?
20:13 murrdoc joined #salt
20:15 micah_chatt_ joined #salt
20:18 XenophonF joined #salt
20:19 timoguin holler: not at the moment there isn't.
20:20 holler timoguin: thanks... Im trying to figure out how to install mysql-server and client in a masterless setup and not sure what to do
20:20 holler I got nginx working
20:20 iggy why do you need masterless?
20:20 holler new to salt, trying to set up a vagrant + salt fully loaded dev box
20:20 UtahDave holler: you can use the git states to check out your repos for you
20:21 holler iggy: its just me and 2 other devs atm and I dont see why we'd need a master atm... Im thinking more of a vagrant provision that creates/installs everything in one step for dev environment
20:22 holler but that said maybe Im doing it wrong?
20:22 n1ck3 joined #salt
20:23 timoguin Makes plenty of sense.
20:23 timoguin What are you having trouble with?
20:26 dude051 joined #salt
20:27 jaimed joined #salt
20:28 perfectsine joined #salt
20:31 martoss1 left #salt
20:42 bhosmer_ joined #salt
20:43 murrdoc joined #salt
20:44 tkharju joined #salt
20:54 thedodd joined #salt
20:56 thedodd joined #salt
20:56 thedodd joined #salt
20:57 XenophonF left #salt
20:57 dstokes seeing this error when trying to kill a job w/ saltutil.kill_job <jobid>:
20:57 dstokes "OverflowError: long too big to convert"
20:57 dstokes anybody else see this ever?
20:59 cromark joined #salt
21:00 thedodd joined #salt
21:01 aparsons joined #salt
21:05 oz_akan joined #salt
21:05 chrisjones joined #salt
21:06 thedodd joined #salt
21:07 SpeeR joined #salt
21:08 SpeeR we've run a simple state against 180 VM's and now I can't get the salt master to start running again
21:09 SpeeR is there a job queue or something that I can clear to get it to respond?
21:09 TheThing joined #salt
21:10 xDamox Does anyone know when Helium will go GA?
21:10 skyler_ I want to require a cmd.wait state. That is to say, I want the cmd.wait to be executed with the require prereq instead of using watch because I need it to execute before I run my other state. Is there a way to do this?
21:10 Ryan_Lane xDamox: I keep hearing 2 weeks
21:11 Ryan_Lane but I've been hearing that for a few weeks now
21:11 xDamox Woooo :D
21:11 xDamox Ahh hehe
21:12 bhosmer_ joined #salt
21:13 dude051 joined #salt
21:15 to_json joined #salt
21:15 tkharju joined #salt
21:18 aquinas joined #salt
21:18 kingel joined #salt
21:21 chrisjon_ joined #salt
21:24 giantlock joined #salt
21:24 bhosmer joined #salt
21:25 mndo joined #salt
21:27 perfectsine joined #salt
21:28 debian112 I am trying to get the IP address of my host, any reason why this is not working: salt '*' network.interface_ip eth0
21:28 fxhp joined #salt
21:30 ajprog_laptop joined #salt
21:30 wt joined #salt
21:31 wt anyone know how to properly run code once in a custom pillar?
21:31 babilen fyi: new bash packages just hit the mirrors for Debian, get them while they are hot!
21:31 wt I could put the code in the __virtual__, I guess, but that seems wrong
21:31 wt I need to make sure a couple of directories are created.
21:31 SheetiS debian112: it looks like network.interface_ip was added for 2014.7 and the documentation did not reflect that.
21:31 SheetiS what version are you running?
21:31 iggy debian112: grain.get fqdn_ipv4?
21:32 debian112 2014.1.10
21:32 SheetiS grains.get on fqdn_ipv4 is usually an alternative
21:32 iggy crap, grains
21:33 SheetiS It would be a built-in grain rather than a custom one in that case.
21:33 iggy I always mess up grains/pillars/states/etc and their -s versions
21:33 SheetiS it looks like it is fqdn_ip4 as well
21:34 iggy yeah, well, I can't be right all the time...
21:34 iggy or even twice apparently
21:34 SheetiS iggy: I mess those up too
21:35 SheetiS I had to test it to be sure
21:35 debian112 @UtahDave: http://paste.debian.net/123135/
21:36 debian112 it  returns this in the /etc/hosts file: grains['fqdn_ipv4']server2.gt
21:36 UtahDave debian112:     - ip: {{ grains['fqdn_ipv4'] }}
21:37 * Gareth mutters about Jenkins
21:37 SheetiS isn't it fqdn_ip4?
21:37 rawtaz make fun of it
21:37 iggy it is
21:38 iggy debian112: I messed it up the first time, it's fqdn_ip4
21:38 debian112 iggy does that target eth0?
21:38 perfectsine joined #salt
21:38 debian112 I have some servers that will have two interfaces
21:39 debian112 I just want eth0
21:39 iggy no, there is a grain for that too I think
21:39 iggy it grabs whatever matches the hostname of the system
21:39 iggy but look in salt-call -g for other options that may be more appropriate
21:40 SheetiS you can also use network.interfaces['eth0']['inet'][0]['address'] or something ugly like that
21:40 SheetiS the grain is probably cleaner
21:41 mapu joined #salt
21:41 iggy I think the grain will be cached too (vs lookup every time)
21:41 rawtaz what does it mean when an issue is "added to the Blocked milestone"? is it that the issue is considered blocking the next release, or that the issue itself is blocked so it should NOT block the next release?
21:41 SheetiS grains.items['ip_interfaces']['eth0'][0]
21:41 SheetiS something like that
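[editor's note] For reference, the two grain shapes being discussed above (a sketch; an `eth0` interface is assumed): `fqdn_ip4` is a list of addresses the FQDN resolves to, while `ip_interfaces` maps interface names to lists of addresses. In a Jinja-templated sls or template file:

```jinja
{# first address the minion's FQDN resolves to #}
{{ grains['fqdn_ip4'][0] }}

{# first address bound to eth0, regardless of hostname resolution #}
{{ grains['ip_interfaces']['eth0'][0] }}
```

The second form is the one that targets a specific interface, which is what debian112 wanted on multi-homed servers.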
21:42 nitti_ joined #salt
21:42 ntropy joined #salt
21:43 debian112 ok sheetis and iggy I will try
21:45 SheetiS UtahDave: If I submit a PR for documentation update on the network module to show versionadded for some of the things that are new for 2014.7, do you guys prefer those to go against develop or 2014.7 right now?
21:45 UtahDave 2014.7
21:45 UtahDave rawtaz: i think it means that something is keeping the issue from being worked on.
21:46 rawtaz oh ok
21:47 UtahDave rawtaz: is there one in particular that you're asking about?
21:47 kermit joined #salt
21:47 rawtaz UtahDave: yeah https://github.com/saltstack/salt/issues/16128 , but it's progressing :) was just curious
21:50 UtahDave rawtaz: just checked. it was a mistake. It just got moved to Approved
21:50 rawtaz oh ok :)
21:52 beneggett joined #salt
21:54 debian112 iggy, @UtahDave, SheetiS: ok I end up doing this: http://paste.debian.net/123145/
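[editor's note] The paste at paste.debian.net/123145 has since expired; a minimal sketch of the kind of state being built (hostname taken from the log, using the corrected `fqdn_ip4` grain name, which holds a list):

```yaml
# manage an /etc/hosts entry from the fqdn_ip4 grain
server2.gt:
  host.present:
    - ip: {{ grains['fqdn_ip4'][0] }}
```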
21:54 ntropy joined #salt
21:55 jalaziz joined #salt
22:08 delinquentme joined #salt
22:09 forrest joined #salt
22:13 rawtaz joined #salt
22:14 delinquentme if I want to pull a salt file and unzip it... would I do this in a state file?
22:14 UtahDave sure, there's the archive.extracted state
22:16 Gareth delinquentme: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.archive.html#module-salt.states.archive
22:17 delinquentme is this preferable to the untar_file: / module.run
22:18 delinquentme per : http://stackoverflow.com/questions/20046520/how-do-i-call-archive-tar-in-salt
22:18 kingel joined #salt
22:19 UtahDave I would think so.  But whatever works.
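[editor's note] A minimal archive.extracted sketch (name and source are hypothetical; on 2014-era releases `archive_format` is required, an http/ftp source also needs `source_hash`, and gzipped tarballs may need `tar_options`):

```yaml
unpack-app:
  archive.extracted:
    - name: /opt/app/
    - source: salt://files/app.tar.gz
    - archive_format: tar
    - tar_options: z
    - if_missing: /opt/app/app
```

This replaces the `untar_file` / `module.run` pattern from the Stack Overflow answer linked above.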
22:21 holler in the use case of local developers spinning up a vagrant box that is provisioned with salt to an exact state with all packages and a django project installed, would it make sense to use a masterless setup? or would the host machine be the master and the vagrant box the minion?
22:23 aquinas_ joined #salt
22:23 iggy use docker instead of vagrant... then your host has salt and you use the dockerio module
22:25 murrdoc use a masterless setup
22:27 DaveQB joined #salt
22:31 holler why docker vs vagrant?
22:31 Outlander joined #salt
22:31 holler ps Im getting this error and cant figure out why? http://dpaste.com/2KPNDEY
22:32 jindo joined #salt
22:32 iggy lighter and (in my experience) more portable
22:32 holler http://dpaste.com/38D32M5
22:32 iggy but it's definitely not for everybody
22:33 iggy bad indentation
22:33 cromark joined #salt
22:33 holler ah I can tell youre right doh
22:35 holler iggy: is docker for linux only? Im on osx
22:35 iggy hmmm
22:35 iggy I'm not really sure
22:35 holler (my friend showed me lxc over the weekend and that was so fast compared to vagrant
22:36 iggy yeah, docker is just a fancy wrapper around lxc/etc.
22:36 iggy I know it can use things other than lxc
22:36 loz-- joined #salt
22:36 iggy I don't know if that includes any non-linux technologies
22:37 tcotav I use masterless salt on a vagrant'd ubuntu image with salt pre-installed in base
22:38 micah_chatt joined #salt
22:38 tcotav its a good way to dev, imo.  mount your /srv/salt and /srv/pillar and off you go
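[editor's note] The masterless workflow tcotav describes boils down to two pieces (paths shown are the conventional defaults): point the minion at local file roots, then apply states with `salt-call --local`.

```yaml
# /etc/salt/minion -- masterless: read states and pillar from local disk
file_client: local
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar
```

With /srv/salt and /srv/pillar mounted into the box from the host, `salt-call --local state.highstate` applies everything without a master.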
22:39 rogst_ joined #salt
22:45 smcquay joined #salt
22:45 glyf joined #salt
22:46 smcquay joined #salt
22:48 murrdoc +1
22:53 yomilk joined #salt
22:53 holler tcotav: have you installed mysql ever with it? any resources you know of appreciated
22:56 pdayton joined #salt
23:03 aparsons joined #salt
23:03 fannet joined #salt
23:04 fannet evenin.  I'm trying to launch an instance on Google Compute Engine using salt-cloud and it's spitting out: InvalidRequestError: {u'domain': u'global', u'message': u"Required field 'value' not specified", u'reason': u'required'}
23:04 felskrone joined #salt
23:05 ajprog_laptop joined #salt
23:05 iggy I want to say that just means you've got some field wrong
23:06 iggy double, triple check your configs
23:06 iggy turn up debugging
23:07 fannet thanks iggy - noticed you had the same error once :)  - http://irclog.perlgeek.de/salt/2014-09-08   do you remember what it was? I used the salt documentation as an example
23:08 iggy I don't remember specifically what it was
23:09 iggy just kept banging on different config settings and it started working
23:10 fannet lol ok thanks - here's what I have: http://pastebin.com/v4np3k9E
23:10 iggy but then we ran into other problems with salt-cloud and ended up dumping it
23:10 darrend joined #salt
23:10 catpig joined #salt
23:10 fannet its been rock solid for us @digitalocean so far
23:11 UtahDave Yeah, GCE is still newish
23:11 UtahDave fannet: what version of Salt are you using?
23:11 fannet salt-cloud 2014.1.10 (Hydrogen)
23:12 UtahDave fannet: can you pastebin the output you're getting?
23:13 jonatas_oliveira joined #salt
23:14 fannet http://pastebin.com/sA2XBDV0
23:14 chrisjones joined #salt
23:14 mosen joined #salt
23:15 subha joined #salt
23:15 UtahDave fannet: let me get set up real quick to test
23:15 fannet sure thing
23:16 auser joined #salt
23:16 aparsons joined #salt
23:17 auser left #salt
23:17 troyready joined #salt
23:18 fannet my providers.d/google.conf :  http://pastebin.com/M2UWXLTM    my profiles.d/google.conf : http://pastebin.com/v4np3k9E
23:19 rawtaz i dont suppose mr hatch is in here?
23:20 smcquay joined #salt
23:20 UtahDave rawtaz: nope
23:21 delinquentme how about running a ./configure && make && make install ?
23:21 delinquentme from a state file
23:24 Gareth delinquentme: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.cmd.html#module-salt.states.cmd
23:24 Gareth cmd.run or cmd.script would be the best options.
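[editor's note] A sketch of delinquentme's build via cmd.run (paths and binary name are hypothetical; the `unless` guard keeps the build from re-running on every highstate):

```yaml
build-myapp:
  cmd.run:
    - name: ./configure --prefix=/usr/local && make && make install
    - cwd: /usr/local/src/myapp
    - unless: test -x /usr/local/bin/myapp
```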
23:27 aparsons_ joined #salt
23:32 fannet @UtahDave - looks like the disk gets created in GCE but fails before it creates the instance
23:34 delinquentme any way to get feedback to the master on the status of the highstate event?
23:34 fannet you can request the job id
23:35 fannet http://docs.saltstack.com/en/latest/topics/jobs/
23:36 fannet salt '*' saltutil.find_job <job id>
23:37 fannet salt '*' saltutil.running if you need to find all the Job ids
23:50 lahwran joined #salt
23:53 auser joined #salt
23:59 freelock joined #salt
