
IRC log for #salt, 2014-11-13


All times shown according to UTC.

Time Nick Message
00:04 whiteinge joehh: (i'm catching up on salt-users) did you get past the docs build failure for the content-type headers?
00:07 iggy baconbeckons: paste the state and logs somewhere maybe
00:07 nitti joined #salt
00:08 baconbeckons iggy: the state is out of salt-formula. i’ll post some logs and the relevant state in a moment
00:13 ajolo_ joined #salt
00:13 mdasilva joined #salt
00:19 beneggett joined #salt
00:20 conan_the_destro joined #salt
00:22 alexr joined #salt
00:24 baconbeckons iggy: and others, this the part of the logs where the state fails http://dpaste.com/1GKA3SA
00:24 forrest joined #salt
00:25 b1nar1_ joined #salt
00:25 baconbeckons the first time it runs, it never gets to the part that says “[INFO    ] Fetching file from saltenv 'saltdev', ** done ** 'salt/files/key’”
00:28 baconbeckons if i look back in the entire logs, i see that other files were fetched before this one
00:33 jeddi joined #salt
00:34 alexr joined #salt
00:35 meylor joined #salt
00:38 ThrillScience joined #salt
00:44 ThrillScience Hello!
00:44 ThrillScience Is there any way from "Vagrant" to tell Salt which "environment" to use?
00:45 ThrillScience I want to have the same salt files, and different vagrant files to start up different classes of machine
00:50 jalbretsen joined #salt
00:53 baconbeckons ThrillScience: i set the hostname to something like “vagrant-servername”, then i have states that look for “vagrant-*” or “vagrant-web-*” or whatever would be appropriate
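A top file matching on hostname patterns like baconbeckons describes might look roughly like this (a sketch; the role names and state names are hypothetical):

```yaml
# /srv/salt/top.sls -- hypothetical sketch of hostname-glob targeting
base:
  'vagrant-*':
    - common
  'vagrant-web-*':
    - webserver
```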
00:54 rojem joined #salt
00:54 ThrillScience The problem is bootstrapping my master
00:54 ThrillScience (Thanks, BTW)
00:55 baconbeckons ThrillScience: i use the vagrant provisioner which installs the needed salt packages
00:55 ThrillScience I see. I'm doing it in one step now--bringing up the "master" as a "masterless minion" along with all the other "master" software I want on it unrelated to salt
00:55 baconbeckons ThrillScience: the master/minion config files are set in the provisioner, then the high state takes over which also is used to configure the salt master itself using the salt-formula
00:56 p0rkmaster left #salt
00:56 ThrillScience but maybe I should do that in two steps....first bring up a salt master
00:56 ThrillScience and the minions with vagrant
00:56 ThrillScience and once they're up, use salt commands to do the rest....
00:56 baconbeckons ThrillScience: i was doing it masterless at first. then i wanted to use gitfs which doesn’t work masterless. then i found out that gitfs wasn’t going to work for me, but i still run it with a master
00:57 baconbeckons are you using a vagrant multi box?
00:57 ThrillScience I'm trying to....
00:57 ThrillScience I don't have a lot invested in Vagrant
00:57 mdasilva joined #salt
00:57 ThrillScience will salt-cloud handle this better?
00:57 ThrillScience I just wanted a simple way to bring AWS boxes up and down
00:57 baconbeckons well, those are two different things
00:58 baconbeckons ahh, in my case, i’m using vagrant for local Vms, not for aws
00:58 MrFuzz joined #salt
00:58 joehh whiteinge: yes, thanks, I changed the code-block:: http to code-block:: text and all was good
00:58 jalaziz joined #salt
00:58 baconbeckons for AWS, i use cloud formation to bootstrap part of the VPC, then i let the salt master take over and use salt-cloud to start the rest of the servers
00:59 ThrillScience Yeah, I'm beginning to realize this.
00:59 ThrillScience Don't try to use Vagrant's ability to issue a salt state.highstate, just use it to spin the boxes up with salt installed
01:00 ThrillScience however I do need to configure them a bit, the master server needs a DNS that will resolve "salt", etc
01:00 whiteinge joehh: good deal. we're kicking around the idea of including pre-built HTML in the sdist tarball so packages don't have to deal with that anymore
01:00 ThrillScience using the 10.1.10.x range I'm using in my VPC
01:00 baconbeckons if you use the salt formula to manage the saltmaster/salt cloud, the way that they use maps/providers/profiles is a little confusing… i’ve been meaning to put in a pull request to change it
01:01 joehh unfortunately that won't fly for "official" debian packages - they need to be built from source
01:01 ThrillScience I have to get the machines named and the DNS set up before I can issue salt commands across it
01:01 baconbeckons ThrillScience: i don’t use DNS to resolve salt, i just tell the minions to connect by ip
01:01 baconbeckons i provide the ip using salt mine
01:02 murrdoc interesting
01:02 baconbeckons oops, i don’t use salt mine there, salt cloud deals with that for me
01:02 baconbeckons but i do everything with ips and salt mine in other places in the stack
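The "ips and salt mine" approach baconbeckons mentions can be sketched like this; mine_functions and mine.get are real Salt features, but the interface name and target glob below are hypothetical examples:

```yaml
# minion config (or pillar) -- publish this minion's IPs to the mine
# ('eth0' is a hypothetical interface name)
mine_functions:
  network.ip_addrs:
    - eth0
```

A template rendered on another minion could then read the published addresses with something like `salt['mine.get']('web*', 'network.ip_addrs')`.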
01:02 joehh I currently have to strip out the pre minifed js files (bootstrap etc) add the source and the minify as part of the package build process
01:02 ThrillScience ok! I need to read up on this a bit more
01:02 baconbeckons maybe i’ll setup dns sometime soon :)
01:02 ThrillScience I like to use names so I can reconfigure things. We're spinning up a LOT of machines
01:03 baconbeckons ThrillScience: how many is a lot?
01:03 ThrillScience about 400 in 4 different locations
01:04 genediazjr joined #salt
01:04 ThrillScience that would be for peaks, though
01:05 joehh trying to track down exact requirement now
01:06 baconbeckons ThrillScience: are you letting AWS auto scale for you?
01:06 whiteinge joehh: holy cow. what do they consider source?
01:06 whiteinge everything? code, docs, etc?
01:06 ThrillScience no
01:07 ThrillScience we will spin them up based on our own instrumentation
01:08 baconbeckons it sounds like you are going to want to spend some quality time with salt-cloud then
01:08 baconbeckons you should probably consider using multiple salt masters for failover
01:09 whiteinge joehh: do you rebuild the manpages too or do you use the pre-built ones that ship in salt/doc/man?
01:10 Cottser|away joined #salt
01:10 joehh I rebuild them too
01:11 joehh these days, they are pretty trouble free
01:11 whiteinge dang. that's dedication :)
01:11 joehh whiteinge: pretty much, everything must be built from the "preferred form for modification" to ensure that it can be built
01:11 joehh by the tools in the archive and is truly "free"
01:12 joehh Typically, it is only an issue for major releases where there are lots of changes
01:13 joehh Once I figure out what the problem is, it is easy to patch, dquilt (part of the build process) takes care of patching the source at the right time
01:13 joehh and maintaining those patches between minor releases is pretty easy
01:14 whiteinge do you also need to stick to libs & tool versions that are available on the distro version you're building for?
01:15 joehh strictly yes, each "official" build is done within an environment (chroot or vm) of the actual distro version
01:15 * whiteinge nods
01:16 joehh I cheat a little for lucid (ubuntu 10.04) but I wouldn't be able to do that for debian
01:16 whiteinge well my hat is off to you, sir
01:16 joehh :)
01:16 murrdoc do you build the debian packages or the ubuntu ones too
01:16 joehh The other aspect is I have to remove any non free or uncompilable files from the archive before uploading
01:17 joehh murrdoc: both
01:17 murrdoc interesting, and you do the same chroot setup for those too
01:17 murrdoc those being, ubuntu packages
01:18 joehh yes, though mostly for ubuntu I upload a source package to launchpad
01:18 joehh but sometimes I use my chroots
01:18 joehh If you look just below http://anonscm.debian.org/cgit/pkg-salt/salt.git/tree/debian/repack#n58
01:19 joehh you can see how I remove the windows nssm.exe file from the sdist as I am unable to rebuild it.
01:20 joehh the rules are pretty pedantic, but the tools tend to support automating most of it once you've solved the problem
01:20 joehh and lintian will tell you if you are "breaking" any (most) of the rules
01:20 murrdoc man this is interesting
01:21 joehh for example: https://lintian.debian.org/tags/source-contains-prebuilt-windows-binary.html
01:21 BrendanGilmore joined #salt
01:21 murrdoc and then you use git-buildpackage /
01:22 joehh yes, specifying the branch and the distro with cowbuilder to build inside the relevant chroot
01:22 murrdoc cowbuilder ?
01:23 joehh copy on write builder - makes a fast copy of the chroot, does the build then throws it away
01:23 iggy I've been using docker lately
01:23 iggy for pkg building
01:24 scarcry joined #salt
01:24 murrdoc i am lazy, fpm recipes and jenkins slaves for the various distros
01:24 joehh That would work, though I haven't managed to get docker set up on any of my machines yet
01:24 murrdoc just the two actually
01:24 Eugene Hehe, "cow builder"
01:25 murrdoc iggy do u have a salt package docker
01:26 joehh is there a jenkins plugin for package building or do you just use a cript
01:26 joehh script
01:26 murrdoc script
01:26 murrdoc well i do a git clone  of the tag and cd into the directory and run make
01:26 murrdoc so it assumes that the repo ships with a make file
01:27 murrdoc or cmake
01:27 iggy no, so far it's postgres 9.4, backported collectd, java, $company_apps, and a few other things
01:27 murrdoc u on precise ?
01:27 iggy me? wheezy
01:27 murrdoc cos for collectd there is a good ppa on launchpad
01:27 murrdoc ah
01:28 murrdoc setting up collectd to monitor isilon with snmp this week
01:29 iggy ugh... isilon
01:30 iggy I still have a script laying around somewhere that screen scraped quota info from it and emailed evil doers
01:30 iggy I can only hope they improved the need for that since then
01:34 MugginsM joined #salt
01:34 bhosmer joined #salt
01:34 mbrgm joined #salt
01:39 yomilk joined #salt
01:39 mosen joined #salt
01:47 borgstrom joined #salt
01:51 mapu joined #salt
01:53 paull1953 joined #salt
01:54 TheThing joined #salt
01:55 thayne joined #salt
01:58 mapu joined #salt
02:00 igorwidl_ joined #salt
02:04 mapu joined #salt
02:07 linjan joined #salt
02:10 kickerdog joined #salt
02:14 Nazca__ joined #salt
02:18 rypeck joined #salt
02:18 beneggett joined #salt
02:19 rojem joined #salt
02:20 malinoff joined #salt
02:25 linjan joined #salt
02:28 ajolo_ joined #salt
02:28 pipps joined #salt
02:29 beneggett joined #salt
02:30 possibilities joined #salt
02:32 MaZ-_ joined #salt
02:35 Leonw_ joined #salt
02:35 rojem joined #salt
02:35 thayne joined #salt
02:36 Damon_ joined #salt
02:37 Gareth joined #salt
02:38 Damoun joined #salt
02:48 thayne joined #salt
02:50 jalaziz_ joined #salt
02:59 perfectsine joined #salt
03:01 chutzpah I'm getting unit test failures: test_rendering_includes (unit.pydsl_test.PyDSLRendererTestCase) ... "Rendering SLS 'base:aaa' failed: Unknown include: Specified SLS base: yyy is not available on the salt master in saltenv(s:( base "
03:02 chutzpah version 2014.7.0
03:02 iggy unit tests... what are those
03:04 murrdoc hah
03:08 gfa joined #salt
03:09 igorwidl_ is there a way to make changes to a file, but not restart a service that is watching that file?
03:09 mdasilva joined #salt
03:09 nitti joined #salt
03:11 beneggett joined #salt
03:11 iggy igorwidl_: don't watch it? or use reload=True in the service stanza
03:12 iggy I mean it's easy enough to comment out the watch lines for a second
03:13 igorwidl_ iggy: yeah, i was hoping for a cleaner way, ie like pass an argument through the command line
03:14 gfa how could i match by env? i have defined many environments on master, top.sls but i cannot run salt 'env:dev' test.ping
03:14 gfa or salt 'G@env:dev' test.ping
03:15 baconbeckons joined #salt
03:16 iggy igorwidl_: nope
03:16 iggy kind of defeats the purpose of the watch
03:16 iggy gfa: test.ping saltenv=dev ?
03:17 possibilities joined #salt
03:18 gfa iggy: no
03:19 iggy yeah, only some things take saltenv
03:19 igorwidl_ iggy: restarts can be disruptive. Sometimes its nice to be able to make changes that are not critical and can wait until next server reboot
03:20 iggy don't know what to tell you...
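iggy's `reload=True` suggestion, sketched as a state (all names here are hypothetical); with `reload: True` the service is reloaded rather than fully restarted when the watched file changes, and dropping the watch entirely avoids even that:

```yaml
# hypothetical sketch: reload instead of restart on config change
myservice-config:
  file.managed:
    - name: /etc/myservice/myservice.conf
    - source: salt://myservice/files/myservice.conf

myservice:
  service.running:
    - reload: True
    - watch:
      - file: myservice-config
```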
03:22 gfa i've made a pillar key, called pillar_env with the  value of the environment, so i can salt -I 'pillar_env:dev' test.ping
03:22 gfa is not nice but it works :)
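gfa's workaround amounts to a per-environment pillar file along these lines (the path is a hypothetical example):

```yaml
# /srv/pillar/dev/env.sls -- a pillar key naming the environment
pillar_env: dev
```

Minions in that environment can then be targeted with pillar matching, as in the log: `salt -I 'pillar_env:dev' test.ping`.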
03:22 mdasilva my gitfs_remote woes have returned
03:23 bhosmer joined #salt
03:23 mbrgm joined #salt
03:23 iggy gfa: you can limit minions to a specific environment... I _think_ that shows up in grains if you do it that way
03:23 mdasilva salt-run -l debug fileserver.update returns a Gitfs received 0 objects
03:24 mdasilva this is after stopping the salt-master and blowing away the /var/cache/salt/master/* files
03:25 racooper joined #salt
03:25 gfa iggy: i don't see the env in the grains.items output
03:26 iggy I said if you limit the minion to a specific env (via the minion config file
03:28 renoirb joined #salt
03:28 gfa ohhh i see
03:29 iggy and I also said _think_
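If the option iggy is thinking of is the minion-side environment pin, it would look roughly like this in 2014.x minion config (hedged; verify the option name and its effect on grains against your version's configuration reference):

```yaml
# /etc/salt/minion -- restrict this minion to a single environment
environment: dev
```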
03:30 TTimo joined #salt
03:32 yomilk joined #salt
03:35 Ryan_Lane joined #salt
03:35 Ryan_Lane joined #salt
03:40 vbabiy joined #salt
03:41 moos3 can anyone help me with a gitfs issue
03:46 mbrgm joined #salt
03:51 mdasilva moos3: im having gitfs issues myself
03:51 mdasilva whats urs
03:53 otter768 joined #salt
03:56 moos3 mdasilva: it seems like its not even pulling them down
03:57 moos3 it says fetching but when you run highstate on a minion it fails and says no state found
03:58 iggy cp.list_master
03:58 funzo_ joined #salt
03:59 moos3 cp.list_master ?
03:59 iggy run it... it'll tell you if things are syncing
04:00 moos3 interesting
04:01 moos3 i get no output at all from that
04:01 jeffspeff joined #salt
04:01 renoirb joined #salt
04:04 iggy then you've definitely got gitfs issues
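For reference, a minimal gitfs master configuration looks roughly like this (the repository URL is a hypothetical placeholder); after changing it, restart the salt-master and re-check syncing with `cp.list_master`:

```yaml
# /etc/salt/master -- minimal gitfs sketch
fileserver_backend:
  - git
gitfs_remotes:
  - https://github.com/example/salt-states.git
```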
04:04 jalaziz joined #salt
04:06 denstark joined #salt
04:06 jpaetzel joined #salt
04:06 mikepea joined #salt
04:06 Kyle joined #salt
04:06 goki______ joined #salt
04:06 thunderbolt_ joined #salt
04:06 grepory joined #salt
04:06 scalability-junk joined #salt
04:06 octarine joined #salt
04:07 JordanTesting joined #salt
04:07 basepi joined #salt
04:07 mihait joined #salt
04:07 CaptTofu joined #salt
04:08 codekobe_ joined #salt
04:08 fxdgear joined #salt
04:08 neilf______ joined #salt
04:08 abele_ joined #salt
04:08 akoumjian_ joined #salt
04:09 whiteinge_ joined #salt
04:09 mattl_ joined #salt
04:09 munhitsu___ joined #salt
04:09 DenkBret1l joined #salt
04:09 vividloop_ joined #salt
04:09 antonw_ joined #salt
04:09 imanc_ joined #salt
04:09 joeyparsons_ joined #salt
04:09 eclectic_ joined #salt
04:09 Hipikat_ joined #salt
04:09 akitada_ joined #salt
04:10 simonmcc_ joined #salt
04:11 gamingrobot_ joined #salt
04:11 balltongu_ joined #salt
04:11 CryptoMe1 joined #salt
04:11 brucewang joined #salt
04:11 HuleB joined #salt
04:12 goodwill_ joined #salt
04:12 darvon_ joined #salt
04:12 emostar_ joined #salt
04:12 tmh_ joined #salt
04:12 cb_ joined #salt
04:12 tempspace_ joined #salt
04:12 IOMonste1 joined #salt
04:12 vbabiy_ joined #salt
04:12 hotbox1 joined #salt
04:12 smkelly_ joined #salt
04:12 lahwran_ joined #salt
04:12 georgemarshall joined #salt
04:12 renoirb_AFK joined #salt
04:12 GothAck joined #salt
04:12 Guest57036 joined #salt
04:13 xenoxaos joined #salt
04:13 IOMonste1 joined #salt
04:13 sylphid joined #salt
04:14 ahale joined #salt
04:14 Wagahai joined #salt
04:14 Tahm joined #salt
04:14 keekz joined #salt
04:17 crane joined #salt
04:17 genediazjr joined #salt
04:19 Nazzy joined #salt
04:19 ldlework joined #salt
04:20 TrafficMan joined #salt
04:20 mfournier joined #salt
04:21 chutzpah joined #salt
04:22 Ryan_Lane joined #salt
04:22 _ikke_ joined #salt
04:23 shalkie joined #salt
04:25 thomasmckay joined #salt
04:26 bhosmer joined #salt
04:30 yomilk_ joined #salt
04:36 lude joined #salt
04:38 stan_k joined #salt
04:39 kickerdog joined #salt
04:43 kickerdog1 joined #salt
04:44 balltongu joined #salt
04:51 thayne joined #salt
05:01 spookah joined #salt
05:02 garthk meh; winrepo still unreliable
05:03 yomilk joined #salt
05:03 garthk salt-run winrepo.genrepo… see JSON description of my packages… salt \* pkg.refresh_db… salt \* pkg.available_version packagename… nothing. Not a sausage.
05:04 asyncsrc joined #salt
05:11 bhosmer joined #salt
05:26 jalbretsen joined #salt
05:27 delinquentme joined #salt
05:34 baconbeckons joined #salt
05:35 felskrone joined #salt
05:39 otter768 joined #salt
05:44 spookah joined #salt
05:48 ndrei joined #salt
05:50 goodwill left #salt
05:51 thayne joined #salt
05:56 TTimo joined #salt
05:59 jpaetzel joined #salt
06:04 _JZ_ joined #salt
06:09 ice799 joined #salt
06:13 snuffeluffegus joined #salt
06:13 philipsd6 joined #salt
06:21 rap424 joined #salt
06:25 philipsd6 joined #salt
06:31 jpaetzel_ joined #salt
06:33 tligda joined #salt
06:39 yomilk joined #salt
06:42 cbaesema joined #salt
06:47 catpiggest joined #salt
06:49 oyvjel joined #salt
06:51 jhauser joined #salt
07:00 bhosmer joined #salt
07:02 pipeep joined #salt
07:02 yomilk_ joined #salt
07:04 mohae joined #salt
07:11 colttt joined #salt
07:11 borgstrom joined #salt
07:12 philipsd6 joined #salt
07:20 flyboy joined #salt
07:23 skamithi_ joined #salt
07:26 jdmf joined #salt
07:34 yomilk joined #salt
07:36 slav0nic joined #salt
07:44 saravanans joined #salt
07:46 shorty_mu joined #salt
07:48 lcavassa joined #salt
07:51 zlhgo does salt-ssh have python api?
07:51 ramishra joined #salt
07:54 Ancient joined #salt
07:55 zlhgo hello~~
07:55 TTimo joined #salt
07:57 aw110f joined #salt
08:00 aw110f_ joined #salt
08:04 rypeck joined #salt
08:14 philipsd6 joined #salt
08:15 srage joined #salt
08:15 j-saturne joined #salt
08:16 Leonw joined #salt
08:17 trikke joined #salt
08:17 saravana_ joined #salt
08:18 saravan__ joined #salt
08:23 saravanans joined #salt
08:26 wvds-nl joined #salt
08:26 saravana_ joined #salt
08:27 saravanans joined #salt
08:33 PI-Lloyd joined #salt
08:40 Mso150 joined #salt
08:42 saravana_ joined #salt
08:45 ramishra joined #salt
08:46 kickerdog joined #salt
08:47 saravanans joined #salt
08:49 bhosmer joined #salt
08:51 intellix joined #salt
08:52 __gotcha joined #salt
08:54 godber joined #salt
08:56 TTimo joined #salt
09:00 saravana_ joined #salt
09:02 TyrfingMjolnir joined #salt
09:04 ckao joined #salt
09:05 alexr__ joined #salt
09:05 oyvjel1 joined #salt
09:05 iwishiwerearobot joined #salt
09:06 __gotcha joined #salt
09:12 JlRd joined #salt
09:13 alexr__ joined #salt
09:16 ThomasJ Hrm, bootstrap seems broken for Debian Jessie
09:18 godber joined #salt
09:19 babilen jessie does not have to be installable at all times and there is nothing we can do without further details
09:20 babilen Ah, ECHAN. But we still need more details :)
09:23 ThomasJ Working on tracking down what happens
09:25 viq MTecknology: what is this "free time" you speak of? ;)
09:27 babilen Time that you realise later should have been spent on doing something important that you forgot about
09:28 saravana_ left #salt
09:30 godber joined #salt
09:30 philipsd6 joined #salt
09:31 viq Though, I guess, prioritizing. Right now close to the top of list of things I'll be spending time on is installing windows on my machine so I can play some games.
09:33 SpX joined #salt
09:33 GnuLxUsr joined #salt
09:34 agend joined #salt
09:37 iamtew good morning :)
09:38 iamtew short question regarding gitfs_remotes and its backend; does it make any difference in which backend I use? I'm on centos
09:39 N-Mi_ joined #salt
09:39 viq iamtew: which salt version? IIRC not all are available with 2014.1
09:39 iamtew I'm using salt-master-2014.7.0-3.el7.noarch from EPEL
09:41 viq iamtew: http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html seems to have some differences listed
09:42 babilen iamtew: Some per-remote settings are, in particular, only available with pygit2
09:42 iamtew right, ok
09:42 iamtew well I'm mainly looking to have read access from my master
09:42 babilen Which, unfortunately, still hasn't been packaged for Debian it appears.
09:42 iamtew some of my formulas are forked on github
09:42 iamtew and I access them via https, just pull you know
09:43 viq seems that if you want password auth you need pygit2
09:43 iamtew and then I have personal stuff in private repositories that is access using the ssh things
09:43 iamtew from gitlab.com (because unlimited free private repos :))
09:43 viq But otherwise it seems you should be fine with either
09:43 iamtew so it's more of a personal taste thing then in this case it seems
09:43 babilen That's my impression too, but I am in no position to actually compare the three projects and their code.
09:44 viq Or decision which set of bugs can you live with ;)
09:44 babilen It might be that, once I look at, say, dulwich's codebase that I would swear to never go near it again
09:45 iamtew well I got GitPython and dulwich available in the upstream repositories on this machine, so I guess that rules out pygit2 for now :)
09:45 iamtew they're both in EPEL
09:46 iamtew and GitPython is more up-to-date in terms of versions from upstream
09:46 genediazjr joined #salt
09:46 iamtew so I think I have my answer :)
09:46 iamtew thanks guys
09:46 viq Anyone using 2014.1 minions with 2014.7 master ? Seems like 2014.7 already made it to EPEL but not to debian repos yet
09:47 nocturn joined #salt
09:48 bersace joined #salt
09:49 diegows joined #salt
09:50 godber joined #salt
09:52 shookees joined #salt
09:52 slav0nic joined #salt
09:53 alexr__ joined #salt
09:54 babilen viq: Not EPEL-testing ?
09:56 babilen viq: And you can use "deb http://debian.saltstack.com/debian RELEASE-testing main" with release in [squeeze,wheezy,jessie,unstable] if you want the, yet unreleased, Debian packages.
09:57 genediazjr joined #salt
10:03 viq babilen: no, normal epel
10:03 babilen Hmm, I wonder why they decided to move it
10:05 mbrgm joined #salt
10:06 oyvjel joined #salt
10:06 genediaz_ joined #salt
10:07 nocturn joined #salt
10:10 godber joined #salt
10:11 ujjain joined #salt
10:12 linjan joined #salt
10:13 Micromus So, why no ipv6 support on debian.saltstack.com!?
10:14 philipsd6 joined #salt
10:15 baconbeckons joined #salt
10:15 wnkz joined #salt
10:18 j-saturne joined #salt
10:22 ]V[ joined #salt
10:24 P0bailey joined #salt
10:25 P0bailey joined #salt
10:26 alexr___ joined #salt
10:26 iwishiwerearobot joined #salt
10:27 karimb joined #salt
10:29 alexr___ joined #salt
10:30 CeBe joined #salt
10:30 __gotcha joined #salt
10:38 bhosmer joined #salt
10:47 GnuLxUsr joined #salt
10:57 TTimo joined #salt
10:59 Neco_ anyone got a good tutorial on scheduling?
11:04 Ironhand joined #salt
11:09 alanpearce joined #salt
11:10 wvds-nl joined #salt
11:11 Micromus Is there no way to dualstack a salt-master? "Whether the master should listen for IPv6 connections. If this is set to True, the interface option must be adjusted too (for example: "interface: '::'")"
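Per the config documentation Micromus quotes, an IPv6-enabled master would be configured roughly like this; whether binding `'::'` also accepts IPv4 connections (i.e. true dual-stack on one socket) depends on the OS, e.g. the `net.ipv6.bindv6only` sysctl on Linux:

```yaml
# /etc/salt/master -- listen on IPv6 per the quoted docs
interface: '::'
ipv6: True
```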
11:17 giantlock joined #salt
11:23 fredvd joined #salt
11:27 __gotcha joined #salt
11:28 oyvjel1 joined #salt
11:30 geekatcmu joined #salt
11:35 Neco_ http://docs.saltstack.com/en/latest/ref/states/all/salt.states.win_update.html that first example errors out unless I missed something, "ID 'updates' in SLS 'win_update' contains multiple state declarations of the same type"
11:46 genediazjr joined #salt
11:46 karimb joined #salt
11:48 karimb joined #salt
11:49 ramishra joined #salt
11:51 CeBe1 joined #salt
11:58 canci joined #salt
12:01 __gotcha joined #salt
12:09 genediazjr joined #salt
12:10 Miq joined #salt
12:13 mbrgm joined #salt
12:17 genediazjr joined #salt
12:18 Miq joined #salt
12:19 mbrgm joined #salt
12:20 alexr__ joined #salt
12:24 intellix joined #salt
12:26 alexr__ joined #salt
12:26 JlRd joined #salt
12:27 bhosmer joined #salt
12:27 genediazjr joined #salt
12:32 spo0nman joined #salt
12:34 ndrei joined #salt
12:36 bhosmer joined #salt
12:36 giounads joined #salt
12:36 giounads hi guys
12:37 giounads i get the following error when trying to get values from a pillar
12:37 giounads Unable to manage file: Jinja error: 'NoneType' object is not iterable
12:37 giounads indeed some values of the pillar are empty
12:37 giounads how do you overcome this issue? any ideas?
12:38 saravanans joined #salt
12:43 babilen salt['pillar.get']('some:value:that:might:be:none', [])
12:44 diegows joined #salt
12:51 giounads babilen: thing is that i am itereting a list taken from the pillar, where some values are empty
12:52 giounads emails= {% for portal, data in salt['pillar.get']('portal:users').items() %} {% for mail in data.mainmail %}{{ mail }},{% endfor %}{% endfor %}
12:52 giounads but mainmail value might be empty
12:53 babilen data.get('mainmail', [])
12:53 babilen and also: salt['pillar.get']('portal:users', {}).items()
12:53 j-saturne joined #salt
12:54 jaimed joined #salt
13:01 brayn joined #salt
13:01 giounads thanks babiles!
13:02 giounads this also did the trick:     data.mainmail|default("", True)
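Combining both guards from this exchange, a defensive version of the loop might look like the sketch below. Note that when a pillar value is present but empty, YAML yields None rather than a missing key, so `data.get('mainmail', [])` alone would still return None; `or []` (like giounads' `default("", True)`, whose second argument applies the default to falsy values) covers that case:

```jinja
emails = {% for portal, data in salt['pillar.get']('portal:users', {}).items() -%}
  {%- for mail in data.get('mainmail') or [] -%}{{ mail }},{%- endfor -%}
{%- endfor %}
```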
13:05 bhosmer joined #salt
13:06 wincus joined #salt
13:16 glyf joined #salt
13:24 j-saturne joined #salt
13:24 zekoZeko i see there's a Ubuntu PPA for 2014.7 now, what's the best way to upgrade from 2014.1? Master first, then add states for the new PPA and upgrade? Will old minions know how to talk to new master?
13:31 miqui joined #salt
13:33 aqua^mac joined #salt
13:33 johtso joined #salt
13:34 johtso joined #salt
13:35 Ahlee Yes, minions can talk to newer masters, and you must upgrade the master first
13:39 bhosmer joined #salt
13:41 zlhgo joined #salt
13:41 zekoZeko great. it seemed reasonable, but I couldn't find anything in the release notes or the docs about this. I'm sure I must have read about it in a blog post or something a while ago.
13:42 TyrfingMjolnir joined #salt
13:45 ajolo_ joined #salt
13:48 erjohnso joined #salt
13:53 thawes joined #salt
13:56 genediazjr joined #salt
13:57 giantlock joined #salt
13:59 alexr___ joined #salt
13:59 TTimo joined #salt
14:00 primechuck joined #salt
14:01 alexr__ joined #salt
14:02 genediazjr joined #salt
14:06 thawes joined #salt
14:06 spo0nman joined #salt
14:07 cpowell joined #salt
14:09 racooper joined #salt
14:11 peters-tx joined #salt
14:11 tmh1999 joined #salt
14:12 gngsk joined #salt
14:12 pdayton joined #salt
14:16 bhosmer_ joined #salt
14:16 nitti joined #salt
14:17 ajolo__ joined #salt
14:17 istram joined #salt
14:18 _prime_ joined #salt
14:19 thawes joined #salt
14:22 MTecknology viq: :P
14:22 diegows joined #salt
14:25 viq ;)
14:31 ajolo__ joined #salt
14:32 elfixit joined #salt
14:32 perfectsine joined #salt
14:33 mpanetta joined #salt
14:34 tmh1999 joined #salt
14:35 cpowell joined #salt
14:35 younqcass joined #salt
14:36 TTimo joined #salt
14:37 elfixit1 joined #salt
14:39 Ahlee who wants to talk pillar caching?  I had some invalid pillar values sneak in that broke some states.  I refresh pillar, it says success (well, None, but that's another story).  I can query the pillars directly and they're fine, but if i call the values from a state it'll still show the bad values
14:39 genediazjr joined #salt
14:41 lcavassa joined #salt
14:42 flebel joined #salt
14:44 mgw joined #salt
14:46 XenophonF joined #salt
14:46 XenophonF howdy y'all
14:46 mpanetta mornin XenophonF
14:46 genediazjr joined #salt
14:46 XenophonF so i'm using github/gitfs for states, but i need to deploy a few large files that i can't store on github
14:46 TOoSmOotH joined #salt
14:46 XenophonF like, big tgz files
14:47 XenophonF i thought that i'd just put them into /srv/salt, and i see them when i issue a "cp.list_master" command on the minion
14:47 XenophonF but the states fail with what look like file not found errors
14:48 mpanetta Whats the exact error?
14:48 mpanetta Can you paste it?
14:48 XenophonF this is the state https://github.com/ibrsp/salt-states/blob/development/dbaasp/init.sls
14:49 XenophonF and this is the error https://bpaste.net/show/1669b3781399
14:49 mpanetta XenophonF: Why tar_options v?
14:49 teebes joined #salt
14:50 XenophonF was copying the example from the archive.extracted docs
14:51 asyncsrc joined #salt
14:52 XenophonF i assumed that salt would merge the contents of /srv/salt and the git master branch, but perhaps that isn't the case
14:54 viq XenophonF: and your /srv/salt is listed in your file_roots ?
14:54 XenophonF yup
14:55 XenophonF i've added the following to the default master config https://github.com/ibrsp/salt-states/blob/production/salt/files/master-overrides.conf
14:55 viq joined #salt
14:55 viq joined #salt
14:56 XenophonF and cp.list_master shows the files i want to deploy
14:56 viq XenophonF: how about setting up something like nginx and serving them that way?
14:56 XenophonF if i use a salt:// URL in one environment, will it grab files in another environment?
14:57 XenophonF because the state's in my 'development' environment, but the files are in the base env.
14:58 XenophonF viq: the only reason i haven't done that is because i don't know how to handle authorization
14:58 XenophonF the minions are scattered all over the internet
14:58 oyvjel joined #salt
14:59 XenophonF maybe i should use s3...
14:59 viq ssl cert? could be the same across all your minions, would provide you with same level of control as normally any minion can access any file
14:59 XenophonF yeah
14:59 XenophonF actually s3fs might be a good answer, too
15:00 viq Ha, just set up a test env to see pillar merging on 2014.7 in action, Just Works (tm)  :D
15:01 cofeineSunshine (tm) - DD
15:01 cofeineSunshine (tm) - :DD
15:02 quantumriff joined #salt
15:02 briffle_ joined #salt
15:02 __gotcha joined #salt
15:03 babilen viq: I will use that so hard :)
15:03 viq Yeah, we were waiting for that to be able to manage users the way we want
15:03 viq ie. easily assign particular users or groups of them to machines
15:07 seydu joined #salt
15:08 jalbretsen joined #salt
15:09 housl joined #salt
15:09 babilen That's my first usecase too
15:10 babilen Although I've started using the reverse-users-formula lately
15:11 XenophonF i'm just going to serve these files out of s3 over http
15:11 kaptk2 joined #salt
15:13 thawes joined #salt
15:14 srage_ joined #salt
15:14 QiQe joined #salt
15:14 mortis_ anyone having trouble with salt-api not starting after upgrading to saltversion .13?
15:14 mortis_ anyone else*
15:15 eightyeight joined #salt
15:17 conan_the_destro joined #salt
15:18 Ahlee 0.13?
15:18 OnTheRock joined #salt
15:21 OnTheRock joined #salt
15:25 Guest48612 joined #salt
15:27 genediazjr joined #salt
15:31 mortis_ 2014.1.13, sorry
15:31 jngd joined #salt
15:32 mortis_ we had salt-api running on 2014.1.10, but after upgrading and restarting the salt-master it stopped working
15:32 techdragon joined #salt
15:33 whiteinge mortis_: which netapi module are you using? if you're using rest_cherrypy what version of CherryPy? also, any errors in the master log?
15:33 Leonw joined #salt
15:33 mortis_ whiteinge: rest_cherrypy yeah
15:33 mortis_ 2.3.0-3
15:34 alexr___ joined #salt
15:35 mortis_ grabbing logs now
15:35 Ahlee anybody familiar with what lives in /var/cache/salt/master/tokens ?
15:35 Ahlee # find . | wc -l
15:35 Ahlee 304253
15:35 whiteinge Ahlee: O_O
15:35 Ahlee hard on the ole inode
15:35 Ahlee s
15:35 whiteinge sheesh. those are supposed to expire every 12 hours
15:36 Ahlee so if i blow it away, minions will need to re-auth?
15:36 whiteinge mortis_: rest_cherrypy requires CherryPy > 3. any chance that package was changed/downgraded recently?
15:36 whiteinge Ahlee: no, those are eauth tokens
15:36 nitti joined #salt
15:37 mortis_ whiteinge: i dont think so, but i just now got an error for a custom module in the logs when starting the api, im gonna take a look
15:37 Ahlee gotcha.
15:37 _prime_ Ahlee: I'm seeing something similar: 306532 (2014.1.13 master)
15:37 bigpup joined #salt
15:37 mortis_ might be herp derp
15:37 viq Ahlee: 3 of the masters I have access to have 0 files in that dir, and they have active minions
15:37 ajolo__ joined #salt
15:37 bigpup does anyone know why the link for the salt documentation is broken?
15:37 _prime_ 66% inode usage, ouch
15:37 bigpup http://docs.saltstack.com/en/latest/
15:37 bigpup the pdf is 1k
15:38 Ahlee oh well, another cron state to delete files with an access time older than one day
15:38 Ahlee and away we go
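The cron state Ahlee has in mind might be sketched like this (the state id and schedule are invented; `-atime` matches his access-time criterion, and it's worth dry-running the `find` without `-delete` before trusting it):

```yaml
# Hypothetical state for the master: purge stale eauth tokens nightly.
clean_master_tokens:
  cron.present:
    - name: find /var/cache/salt/master/tokens -type f -atime +1 -delete
    - user: root
    - minute: 0
    - hour: 3
```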
15:38 whiteinge bigpup: yeah. that one's waiting on me...
15:38 whiteinge Ahlee: mind filing a quick ticket with your salt-master version so that one doesn't fall off our radar?
15:39 Ahlee whiteinge: sure.  But only since _prime said he had the same under 2014.1.13
15:39 mortis_ getting this in the logs, but in the config ive defined rest_cherrypy : 2014-11-13 16:38:54,152 [rest_wsgi        ][ERROR   ] Not loading 'rest_wsgi'. 'port' not specified in config
15:39 mortis_ and port also
15:40 mortis_ it is defined
15:40 mortis_ it reads the right configfile too
15:40 mortis_ i can see it with -l debug
15:40 iggy do you have all the proper prerequisites for rest_cherrypy?
15:40 mortis_ iggy: well, it worked before upgrading salt
15:40 mortis_ but yes, i got cherrypy installed
15:41 TOoSmOotH joined #salt
15:41 iggy oh... downgrade?
15:41 mortis_ hehe
15:41 alexr__ joined #salt
15:41 mortis_ rather find out whats wrong
15:41 mortis_ :)
15:41 whiteinge mortis_: you can ignore the warning about rest_wsgi if you're using rest_cherrypy
15:42 mortis_ oh ok
15:42 iggy that's why I suggested downgrading
15:42 iggy if you downgrade and it still doesn't work, then it's not salt
15:42 mortis_ true
15:42 Ahlee whiteinge: https://github.com/saltstack/salt/issues/18055
15:42 mortis_ good point, i should try
15:42 whiteinge Ahlee: tyvm
15:42 viq ale__: I've been looking at your a-dash-of-salt, I find it interesting, but I have a question - do you know of any way of protecting ES traffic?
15:43 genediazjr joined #salt
15:43 iggy ipsec
15:43 VSpike I have a question .. don't you get nervous having a tool in your command line where salt '*' system.reboot would kill your entire infrastructure in a moment? :) It's a very sharp knife to leave lying around.
15:43 VSpike Makes me think of https://twitter.com/devops_borat/status/41587168870797312
15:44 whiteinge haha
15:44 iggy for i in `cat ~/.ssh/known_hosts | cut -f1 -d' '` ; do ssh $i reboot ; done
15:44 iggy no different really
15:45 viq VSpike: though how is that different from " for i in `cat server_list` ; do ssh $i 'sudo halt -p' & done" ? ;)
15:45 viq yeah, what he said
15:45 iggy except for the fact that you can actually limit what commands people can run with salt if you want
15:45 VSpike True, I guess
15:46 whiteinge VSpike: for sensitive environments, i'd recommend using client_acl on the production master and only whitelist "safe" modules.
15:46 whiteinge a decent work-flow is to only allow states to run on production machines, so that everything executed there must be written up as an .sls file and go through the normal version-control workflow of pull-requests and code-reviews.
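A master-config sketch of that whitelist idea (the `deploy` user is hypothetical; `client_acl` was the option name in salt of this era):

```yaml
# /etc/salt/master -- only let the 'deploy' user run state functions;
# everything not listed here is denied for that user.
client_acl:
  deploy:
    - state.sls
    - state.highstate
```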
15:46 ale__ viq, you could proxy (nginx?) or there are some ES plugins that allow for auth
15:46 VSpike whiteinge: that's a good idea
15:47 viq ale__: yeah, I was just thinking of proxying with nginx and requiring client certs... But I have no idea whether salt supports serving client certs before sending traffic
15:47 XenophonF left #salt
15:47 felskrone joined #salt
15:47 ale__ i doubt that that's part of the current ES returner
15:48 viq yeah
15:48 viq Maybe stunnel...
15:49 whiteinge mortis_: find any rest_cherrypy errors? is your installed CherryPy 3.x.x?
15:50 TOoSmOotH joined #salt
15:54 genediazjr joined #salt
15:54 Kenzor joined #salt
15:56 tmh1999 joined #salt
15:57 genediazjr joined #salt
15:58 StDiluted joined #salt
16:01 laderhiton joined #salt
16:03 CeBe1 joined #salt
16:03 SheetiS joined #salt
16:05 bhosmer_ joined #salt
16:05 __gotcha joined #salt
16:07 rawzone joined #salt
16:08 TheoSLC joined #salt
16:09 TheoSLC 2014.7 just dropped in EPEL like a bomb!
16:09 TheoSLC I look forward to all of the fixes (and new breaks)
16:11 unpaidbi1l Anyone know of any issues with centos 5 and salt?  I'm getting some very high CPU usage and long runs on CentOS5 vs CentOS6.  Working on identifying the cause now.
16:11 gothix joined #salt
16:11 unpaidbi1l versions are: Salt: 2014.1.10 Python: 2.6.8 (unknown, Nov 7 2012, 14:47:34) Jinja2: 2.5.5 M2Crypto: 0.21.1 msgpack-python: 0.1.12 msgpack-pure: Not Installed pycrypto: 2.3 PyYAML: 3.08 PyZMQ: 14.4.1 ZMQ: 3.2.5
16:12 jba-123 joined #salt
16:12 ramishra joined #salt
16:12 iggy well, you've got updated ZMQ bits which is usually the first thing to hit people on EL5
16:12 unpaidbi1l yeah, i went through all the versions yesterday to make sure i was running at least minimum required
16:12 genediazjr joined #salt
16:13 unpaidbi1l i was running an unsupported zmq but the upgrade didnt make a difference unfortunately
16:13 repl1cant joined #salt
16:13 mapu joined #salt
16:15 jba_ joined #salt
16:15 unpaidbi1l the centos 6 system is 64 bit, centos 5 is 32 bit, they're identically configured xen machines other than that
16:16 flebel joined #salt
16:16 rawzone joined #salt
16:20 thedodd joined #salt
16:20 kermit joined #salt
16:20 b1nar1 joined #salt
16:24 jba_ hi, I'm a beginner with salt. I'm trying to use a pillar with a masterless setup, but it looks like my state can't see my pillar; it does work with a master/minion setup. Any idea ? I use the 2014.1.13 version not the 2014.1.11 (https://github.com/saltstack/salt/issues/16210)
16:25 genediazjr joined #salt
16:25 iggy can you see the pillar data in pillar.items?
16:26 jba_ yep
16:28 forrest joined #salt
16:29 forrest joined #salt
16:30 iggy if so, I got nothing... maybe someone more familiar with masterless will speak up
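For anyone following along, a minimal masterless setup needs roughly this in /etc/salt/minion (the paths shown are the defaults), with states then run via `salt-call --local state.highstate`:

```yaml
# /etc/salt/minion -- masterless mode: read files and pillar locally
file_client: local
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar
```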
16:32 kickerdog joined #salt
16:33 Lingo_ joined #salt
16:34 jba_ ok thanks for trying. I will try to downgrade to 2014.1.12
16:34 viq Whom do I whine at regarding documentation?
16:35 iggy there wasn't a 2014.1.12
16:35 iggy it was tagged, but never released as it had problems
16:35 rypeck joined #salt
16:35 jba_ ah ..
16:35 kickerdog joined #salt
16:37 jalbretsen joined #salt
16:37 TOoSmOotH joined #salt
16:40 hobakill joined #salt
16:43 murrdoc joined #salt
16:43 murrdoc fluff http://xkcd1446.org/#0
16:45 thawes joined #salt
16:45 viq I have some servers that have multiple interfaces. How would I go about getting the IP address of one that (whichever is easier) has default route / is used to communicate with the salt master ?
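One portable answer to viq's question, independent of any particular salt module (a sketch: connecting a UDP socket sends no packets, it just makes the kernel pick the source address for the route to the given host):

```python
import socket

def outbound_ip(dest="198.51.100.1", port=4506):
    """Return the local address the kernel would use to reach dest.

    The default dest is a documentation address; in practice you would
    pass the salt master's IP.  4506 is salt's default return port.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest, port))  # UDP connect: no packet actually sent
        return s.getsockname()[0]
    finally:
        s.close()
```

Something like this could live in a custom grain or execution module on the minions.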
16:45 ramishra joined #salt
16:46 Lingo_ Hey guys, having some really bad problems with versions for supporting libraries.
16:47 Rucknar Think we've got a bit of a mismatch and it's causing us issues with minions disconnecting
16:47 Rucknar http://pastebin.com/rQ1KtpDZ
16:48 Rucknar When we upgrade python-zmq from 2.2 to 14.3, we see a lot of our Redhat 5 boxes lose connection with the master
16:48 Rucknar Wondering if it's because of the ZMQ version on the master only being 3.2.2?
16:49 AubreyF joined #salt
16:49 TOoSmOotH joined #salt
16:49 Rucknar Any help appreciated, been a long day :(
16:50 TheoSLC I'm trying to use {{ variables }} in my pillar file.  but the values are not being set.  should this work?
16:51 viq TheoSLC: and the variables are taken from where?
16:51 tligda joined #salt
16:52 murrdoc Rucknar:  so what did you do
16:52 murrdoc like can we run through the steps
16:52 Rucknar Thanks murrdoc, steps below:
16:53 hal58th TheoSLC: Maybe you can pastebin us an example and ask a more specific question
16:53 TheoSLC viq: the variables come from the 'defaults' option of an 'include' for the pillar
16:53 AubreyF Anyone have tips on debugging errant templates?
16:53 AubreyF (1) What's the right way to see the final template Jinja is generating based upon the commands/etc embedded in it?
16:53 TheoSLC see section 5.4 http://docs.saltstack.com/en/latest/topics/pillar/
16:53 Rucknar 1. Monday our patching updated python-zmq from 2.2 to 14.3 (old, i know) from the redhat EPEL repo. Since monday, our RH5 servers seem to have been dropping connections to the salt master, requiring a manual restart to resolve.
16:53 AubreyF (2) Is there an easy way to run a single template instead of calling the highstate and watching hundreds of checks scroll by?
16:54 AubreyF sudo salt-call -l all state.template '/srv/salt/redis/init.sls' works great for single-file templates, but fails for more complicated ones. " - The 'include' declaration found on '/srv/salt/redis/init.sls' is invalid when rendering single templates"
16:54 murrdoc ah Rucknar have you updated the master ?
16:54 Rucknar 2. Today we restarted the minions and downgraded the package back to 2.2, all has been quiet since then
16:54 murrdoc have to update the master first
16:54 hal58th TheoSLC: I think I need to see what's going on to give my opinion
16:55 Rucknar murrdoc: Does the master use python-zmq? Regardless it was updated at the same time and restarted
16:55 murrdoc yeah they should all be using the same thing
16:55 TheoSLC hal58th: here is the example http://paste.ubuntu.com/8990040/
16:56 igorwidl is there a way to see what states are applied to each minion?
16:56 Rucknar murrdoc: Okay, wasn't aware. That being said though, they would have been the same version at the time. We haven't downgraded the master as it's on redhat 6 (haven't seen the issue on 6 boxes)
16:57 viq igorwidl: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html  one of the show ones
16:57 whiteinge any yum people know why running `yum install zeromq` on an 64-bit machine would try to install the i386 version (in addition to trying to install the 64-bit version)? presumably my local repo is messed up. not sure where to look though.
16:57 murrdoc Rucknar:  and python versions match up ? like its all python 2.6
16:57 mpanetta whiteinge: Your yum configs may be set to install both x86_64 and i386 packages
16:57 Rucknar whiteinge: Some 64 repo's do contain 32 bit packages due to wierd dependency issues.
16:58 tligda1 joined #salt
16:58 whiteinge AubreyF: (1) your best bet for that is to run ``state.show_sls <thesls>`` and look at the resulting data structure. we've got an open issue for displaying the result of the Jinja step only.
16:58 ndrei joined #salt
16:58 whiteinge AubreyF: (2) ``state.sls`` is best for this
16:58 igorwidl viq: this hsould do, thanks
16:59 Rucknar murrdoc: Yep. versions : http://pastebin.com/X4GbSgbN
16:59 vlcn ugh, I've run into a really problematic issue
17:00 vlcn It became necessary to change the IP of my salt master. I confirmed that all minions were resolving the new A record and then removed the 'old' interface from the master
17:00 AubreyF Thx Whiteinge, exactly what I was looking for
17:00 vlcn so far out of about 450 minions, only 4 are showing as being 'up' after about 20 minutes
17:00 whiteinge mpanetta, Rucknar: thanks. checking.
17:01 vlcn anyone have any idea on what would be causing this to happen?
17:02 thayne joined #salt
17:03 Ozack1 joined #salt
17:03 saravanans joined #salt
17:03 mpanetta whiteinge: No problem.
17:03 vlcn the really odd thing is that I can see them authenticating to the master but they do not respond to manage.up or test.ping
17:03 hal58th TheoSLC: I think you are calling the variable wrong possibly. Are the pillar variables in a subtree possibly? Like {{ pillar['nexus']['repo_url'] }}
17:03 Rucknar vlcn: Have they ever or is this a new problem?
17:04 vlcn Rucknar, it's a result of removing the interface with the 'old' IP from the master
17:04 hal58th TheoSLC: Can you pastebin your pillar file for nexus-pillar-gen?
17:04 alexr joined #salt
17:04 iggy hal58th: {{ salt['pillar.get']('nexus:repo_url') }}
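For readers new to the colon notation iggy used: `pillar.get('nexus:repo_url')` descends one nested dict per colon-separated segment. A toy reimplementation of the lookup (not salt's actual code):

```python
def colon_get(data, key, default=None, delimiter=":"):
    """Traverse nested dicts: colon_get(p, 'a:b') ~ p['a']['b'], with a default."""
    for part in key.split(delimiter):
        if isinstance(data, dict) and part in data:
            data = data[part]
        else:
            return default
    return data
```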
17:05 Rucknar vlcn: I do remember seeing this somewhere before a few months back. do the minions address the master via DNS or IP?
17:05 vlcn DNS
17:05 vlcn I confirmed that the minions were resolving the new IP before making the switch
17:05 Rucknar is the check_dns enabled in the minion config?
17:05 mpanetta I've started a github api _module/_state, are there any guidelines for layout that I should be aware of?
17:05 hal58th iggy: Sure, just giving him a simple example.
17:06 vlcn Rucknar, it's set to the default (which should be true)
17:06 iggy mpanetta: pep8
17:06 mpanetta I want to contribute it back.
17:06 mpanetta That is all?
17:06 mpanetta Sweet heh
17:06 mpanetta How should I lay out the directory structure so you can just pull it in as a git repo?
17:07 iggy there aren't many hard rules yet... make it look like the code that already exists and you should be fine
17:07 mpanetta Ok cool
17:07 Rucknar vlcn: have any of the minions been restarted?
17:08 mpanetta iggy: Are there any examples that you are aware of that I can look at for ideas?
17:08 vlcn Rucknar, I restarted one of them as a test
17:08 vlcn started working immediately
17:08 Rucknar vlcn: https://github.com/saltstack/salt/issues/8540 https://github.com/saltstack/salt/issues/10032
17:08 vlcn what I don't understand is how minions can be showing as authenticated in the master log
17:08 Rucknar vlcn: Seems they don't fully support that scenario just yet
17:08 SheetiS joined #salt
17:09 vlcn that seems completely insane
17:09 iggy mpanetta: if the entire salt repo isn't enough, there's also salt-contrib
17:09 mpanetta haha
17:09 hal58th vlcn: That's a bummer man.
17:09 Rucknar Agree, that was a while ago tho so could have changed. You on 2014.1.13 + ?
17:09 mpanetta I could not find anything that defines a _module or _state dir yet
17:09 thawes joined #salt
17:10 ramishra joined #salt
17:10 vlcn Rucknar: 2014.1.13
17:10 miqui joined #salt
17:11 Rucknar vlcn: I guess one of the experienced guys around here will know if that's still a limitation
17:11 Rucknar vlcn: sorry can't be of much more help, just remember seeing those tickets
17:11 vlcn Rucknar, I understand, thanks for the links
17:11 desposo joined #salt
17:12 hal58th vlcn: If you can reverse your thing to get the salt master back up, you can do this to restart salt-minions.  http://pastebin.com/vJh3Dfi5
17:12 vlcn fortunately I don't think this is the precise issue I'm seeing.
17:13 wincus joined #salt
17:16 zlhgo joined #salt
17:17 iggy mpanetta: you just make the directory... throw some code in there, and bam
17:18 iggy mpanetta: I wrote a pretty simple _grains file that's in salt-contrib
17:18 pipps joined #salt
17:18 diegows joined #salt
17:18 quist joined #salt
17:18 iggy https://github.com/saltstack/salt-contrib/blob/master/grains/gce.py  <--- just throw that file in /srv/salt/_grains and off you go
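A minimal custom grain in the same spirit, reading roles from a local file rather than GCE tags (the file path and grain name are invented for illustration; the loader calls grain functions with no arguments, so the path default is what gets used in practice):

```python
# Hypothetical /srv/salt/_grains/roles.py -- sync with saltutil.sync_grains.
def roles(path="/etc/salt/roles"):
    """Return a 'roles' grain: one role per non-empty line of the file."""
    try:
        with open(path) as f:
            return {"roles": [line.strip() for line in f if line.strip()]}
    except IOError:
        return {"roles": []}
```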
17:18 mpanetta iggy: Taking a look at salt-contrib now
17:19 iggy okay, if you have actual questions, just ask
17:19 thayne joined #salt
17:19 iggy fwiw, any of salt's normal modules could be put into /srv/salt/_modules
17:20 iggy in fact a few people do that to "backport" newer modules to current versions of salt
17:20 mpanetta iggy: Yeah, I understand how to use the modules and write them, but I was not sure if it were possible to make it so I could get it to work out of the box as a gitfs remote.
17:21 mpanetta iggy: Yeah I have done that with th environment module.
17:21 _JZ_ joined #salt
17:21 iggy our main salt_states repo has an _grains and _modules folder
17:21 hasues joined #salt
17:22 hasues left #salt
17:22 sxar joined #salt
17:22 iggy just create the folder at the top level and bob's your uncle
17:22 mpanetta iggy: Perfect, thanks!
17:22 whiteinge vlcn: what OS and version of zmq on master & minions?
17:22 murrdoc dont forget sync_all
17:24 hobakill left #salt
17:24 pipps99 joined #salt
17:26 __JZ__ joined #salt
17:27 JeremyR joined #salt
17:28 mbrgm joined #salt
17:29 KyleG joined #salt
17:29 KyleG joined #salt
17:31 SheetiS joined #salt
17:32 capricorn_1 joined #salt
17:33 __gotcha joined #salt
17:33 conan_the_destro joined #salt
17:34 marshalls Is anyone using the mysql formula with Centos 7?
17:35 repl1cant joined #salt
17:37 j-saturne joined #salt
17:37 TheoSLC hal58th: just a follow-up: the render issue is due to the fact that my included pillar is a custom python pillar.  when I include a normal yaml pillar, the variables from defaults are available.  What I am trying to do may not be possible.
17:38 hal58th TheoSLC: Ah okay, can't help you out with that then. I am no expert on python.
17:39 __gotcha joined #salt
17:39 smcquay joined #salt
17:40 wendall911 joined #salt
17:42 devnull_ joined #salt
17:42 skyler joined #salt
17:43 alanpearce joined #salt
17:44 TheoSLC There used to be a document on writing custom pillars in python.  I can't find it.  Anybody know where it is?
17:46 troyready joined #salt
17:46 robawt TheoSLC: all pillar is custom, what're you trying to accomplish, maybe that could help us better point you in the right direction?
17:47 robawt also the tutorial has a good basic walkthrough
17:47 SheetiS so 2014.7.0 is in epel as of today instead of just in epel testing ;-)
17:48 Rucknar ooohh, now just waiting for 2014.7.1 so we can compound match :)
17:48 TheoSLC robawt: my yaml pillars are rendering with variables passed by the defaults option in includes. However, my pillar written in all python is not rendering with those same variables.
17:49 TheoSLC robawt: see http://paste.ubuntu.com/8990040/  thanks
17:49 SheetiS I'm not using compound matching in the places where it is disabled in 2014.7.0, so I'm 100% ok to upgrade after I run in my test environment for a while.
17:49 bigpup does anyone know why I would be having issues running salt against the saltmaster
17:50 bigpup i am trying the command salt salt test.ping and it is just hanging
17:50 yetAnotherZero joined #salt
17:51 Rucknar SheetiS: Is it only coumpound matching within pillars?
17:51 hal58th bigpup: Can you go the other direction? sudo salt-call pillar.items work on the minion?
17:51 iggy Rucknar: that's only for mine and publish.publish use
17:51 SheetiS It's compound matching in mine and publish only I thik
17:51 SheetiS *think
17:52 Rucknar Ah, thanks SheetiS. Might give it a shot
17:52 iggy so as long as you aren't using something like this, you're fine: {% set dbhost = salt['mine.get']('G@tags:db and G@tags:primary', 'network.interfaces', 'compound') %}
17:52 TheoSLC robawt: I'm think I need to follow these rules http://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.py.html
17:53 meylor joined #salt
17:53 bhosmer joined #salt
17:55 bigpup hal58th the minion and the master are the same box. Do you want me to call that command?
17:55 bhosmer_ joined #salt
17:56 hal58th bigpup: Sure why not. Also can you do a 'ps xa | grep salt-minion' and give me the output
17:58 igorwidl i'd like to classify nodes on salt master instead of using grains file on nodes, but I'd rather not use nodegroup. What would be my options? I'd love to use pillars, but not sure how it would look like
17:58 bigpup 13736 ?        Ss     0:00 /usr/bin/python /usr/bin/salt-minion
17:59 bigpup salt-call pillar.items hangs as well
17:59 iggy igorwidl: reclass?
17:59 iggy what do you mean by classify nodes?
17:59 hal58th bigpup: Looks like salt-minion is up and running. Anything in /var/log/salt/minion.log? I have to run to a meeting for a while. Maybe someone else can help.
17:59 bigpup ahh wait
18:00 bigpup that salt-call pillar.items cause it to request a key
18:00 bigpup i just authorized it
18:01 Ryan_Lane joined #salt
18:01 bigpup well the key is there now
18:01 bigpup but its still hanging
18:02 bigpup 2014-11-13 11:45:14,226 [salt.minion      ][WARNING ] Master hostname: salt not found. Retrying in 30 seconds
18:02 igorwidl iggy:  for example node004 has a role: database-server, this would let me use match in top.sls to assing states based on the role
18:03 iggy igorwidl: where did the role come from?
18:05 igorwidl iggy: that is what i am trying to figure out. How to "assign" a role to a node.
18:05 aparsons joined #salt
18:05 robawt igorwidl: using an exterior DB isn't a bad idea for that
18:06 iggy it all depends on your setup
18:06 robawt something lightweight like etcd or heavier like MariaDB would do the trick
18:06 SheetiS igorwidl: I put my roles in grains and then manage the grains via a state.  An external pillar of some type (etcd, database, something else) would also be viable options.
18:06 mpanetta I've used the mongo ext_pillar to do something similar.
18:06 SheetiS robawt++ :D
18:06 Morbus joined #salt
18:07 thayne joined #salt
18:07 iggy we personally use GCE, so we have nodes tagged with roles, then we have a custom _grains module that pulls an instance's tags and stuffs them in a "roles" grain, then we target using that grain
18:07 robawt twinsies
18:07 jcockhren iggy: same but for non-gce
18:07 jcockhren :D
18:08 iggy yeah, any cloud provider that offers some sort of instance accessible metadata could do similar
18:08 _JZ_ joined #salt
18:10 robawt or just run your own etcd in a container somewhere
18:10 forrest joined #salt
18:11 robawt anything you depend on you may want to control
18:12 patarr left #salt
18:13 igorwidl i'm not a programmer so that complicates things for me. I think a simple yaml file could do. example http://pastebin.com/uPbUP3cC .
18:13 patarr joined #salt
18:13 patarr joined #salt
18:14 iggy by that logic, cloud providers should just all close up shop
18:14 iggy I just accidentally clicked an ad on that pastebin...
18:14 igorwidl how would i pull this data, and make it into grain/pillars for a host
18:14 ndrei joined #salt
18:14 iggy fuck pastebin.com... I hope it's owner forgets to renew the name
18:15 robawt iggy: cloud providers close up shop?  referencing ownership/control?
18:16 robawt i guess i meant more than a passive participation, maybe run your own or have atleast 2 synched instances.
18:16 robawt not everyone can run their own HW
18:17 iggy nor should they... but there's also no reason to setup etcd (or mongo or one of the million other tools that could do the job) when the functionality exists for you already
18:17 iggy assuming it does
18:17 iggy which it doesn't sound like it does in this case
18:18 robawt agreed iggy
18:18 TTimo joined #salt
18:19 iggy but now I'm all pissed off that I clicked on an ad on that god-forsaken website
18:19 iggy so I'm out
18:20 igorwidl now that i look at etcd i think it will do the trick
18:20 robawt iggy: i feel like that's a reason why so many dynamic web framework tutorials show off a pastebin-like tutorial
18:20 robawt just to help them get off pastebin
18:21 devnull_ hi, is there any documentation on raet and its dependencies?   i'm aware of putting 'transport:  raet' in the salt-master & minion configs.. but when i do that, and restart salt, i get dependency errors.   i'm trying to work through them, but it would be helpful if there was a list or something that i could reference.   thanks in advance
18:22 bhosmer joined #salt
18:23 berserk joined #salt
18:23 glyf joined #salt
18:27 hal58th bigpup: You need to set the master in you salt-minion configuration. Default is "master". Set it to the IP or localhost.
18:28 hal58th Sorry, default is "salt"
18:29 bigpup oik
18:29 bigpup let me try that
18:30 gothix Anyone work with the event system?
18:31 TOoSmOotH joined #salt
18:31 babilen gothix: Sure, lots of people
18:33 gothix babilen, I want to key on a presence event but am not familiar with socket programming. http://docs.saltstack.com/en/latest/topics/event/master_events.html#presence-events what is "new"? is it data I supply, or data that is returned?
18:33 meylor1 joined #salt
18:34 bigpup crap
18:34 glyf joined #salt
18:34 bigpup its still just hanging
18:34 bigpup hal58th any other ideas?
18:35 druonysus joined #salt
18:35 druonysus joined #salt
18:35 gothix babilen, and do i have to cron my script to be able to detect when something happens? Seems like I will need to constantly check.
18:38 devnull_ disregard, found it
18:39 babilen gothix: I'll be with you in a few minutes
18:39 gothix babilen, k
18:43 babilen gothix: Okay, back
18:43 babilen I would recommend to grab the eventlisten.py script and run it on your master. If you are using 2014.1.* you should grab it from that branch rather than develop. (let me know if you can't find it)
18:45 babilen Events are constantly coming in and the master will check if it has some reactors defined for each incoming event. If you want to use presence events you will have to enable them explicitly and new would be data['new'] and a list that contains all minion ids of minions that are "new" (i.e. that recently became available/present)
18:45 thawes joined #salt
18:46 babilen So you don't have to run a script or something, just follow the instructions in the reactor doc to write a reactor that matches the events you want to match and then calls execution modules on, well, whichever minion you want to call them on.
18:46 babilen What are you trying to do?
18:46 gothix babilen, All im looking to do is run a bash one liner when a new server is built. So it can be added to ldap
18:47 gothix is there a better way i dont have reactor set up and i would like to keep it simple
18:48 robawt gothix: make a simple salt state and apply it to all machines
18:48 babilen gothix: Okay, "new server is built" sounds as if you want to listen for auth events rather than presence ones. You would then call "cmd.run" with the command you want to run.
18:48 robawt create a lock file when the process is done, and have the script in the salt state check for that file, if it's been ran it won't run again
18:48 babilen But if you could give us more information we might be able to suggest something simpler than the reactor system.
18:48 gothix can i run cmd.run on the master
18:48 babilen If you have a minion on the master, sure
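A sketch of the reactor wiring babilen describes, assuming a minion runs on the master (the minion id, script path and SLS path are all hypothetical):

```yaml
# /etc/salt/master -- run a reactor SLS on minion auth events:
reactor:
  - 'salt/auth':
    - /srv/reactor/new_minion.sls

# /srv/reactor/new_minion.sls -- on key acceptance, run the LDAP script
# via the minion running on the master ('master-minion' is made up):
{% if data['act'] == 'accept' %}
add_new_host_to_ldap:
  local.cmd.run:
    - tgt: 'master-minion'
    - arg:
      - /usr/local/bin/add-to-ldap.sh {{ data['id'] }}
{% endif %}
```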
18:49 robawt masters now implicitly run a minion process I thought?
18:49 babilen Oh, do they?
18:49 robawt i read that somewhere on the doc
18:49 hal58th bigpup: What's it say in the salt-minion log?
18:49 babilen "now" == 2014.7 ?
18:50 robawt now < 2014.7
18:50 babilen :)
18:51 gothix my goal is to run a bash script located on the master server when a new host comes online and meets criteria such as grains.get environment == development
18:52 babilen I don't think you have access to grains in the reactor
18:52 babilen (which is yet another example why using grains for things like this is not a splendid idea)
18:53 gothix thats all i have to key off off is the grain on the minion
18:53 babilen gothix: run eventlisten.py -- you will see exactly what data you have available and what kind of events are being fired
18:54 TTimo joined #salt
18:54 gothix okay I will. I wish there was a way to run a local script in a state instead of only on the minion :(
18:55 Nilium joined #salt
18:55 gothix babilen, would think there would be a local option
18:58 mpanetta babilen: Instead of using grains for a role type match, what would be better?  minion specific pillar?
18:58 babilen Or nodegroups
18:59 mpanetta Hmm, i've not looked at nodegroups...
18:59 babilen But yeah .. I mean grains are, IMHO, minion local data. Targeting based on minion-supplied data is not a good idea, as anybody who can edit grains on your minion can change the targeting (and you wouldn't want to maintain the grains on your minions anyway)
19:00 babilen If you are making sensitive data available to your minions based on the grains they *claim* to have, you will have a data leak sooner rather than later.
19:01 babilen And then, why do that in the first place? One example where it does make sense is if your virtualisation platform supports "tags" (which are then part of the minion local information), but I would much rather maintain much more explicit matching/targeting/grouping on the master only.
19:01 rap424 joined #salt
19:01 Nilium joined #salt
19:02 mpanetta Problem is that does not necessarily work for autoscale. Would hate to have to modify pillar/formulas every time you add something
19:02 spookah joined #salt
19:02 babilen The whole "grains for roles" is, IMHO, a bad idea and developed when "old school" sysadmins wanted to "request" (i.e. bottom up) states from the master.
19:02 mpanetta However ext pillar seems to alleviate that...
19:03 babilen You can just write your data in database and be done with it.
19:03 mpanetta Yep
19:03 mpanetta Problem is, even that matches on a grain.  The minion ID
19:03 babilen Making data available (including private keys and whatnot) based on what *they* tell you is ludicrous.
19:03 mpanetta Yeah but there is no other way in salt to do it.
19:04 babilen The minion id is fixed and you will have exchanged keys once when you accepted the minion.
19:04 mpanetta Esp since the minion ID can be changed
19:04 mpanetta Do the keys contain that?  I was not aware.
19:04 babilen You would have to reaccept their keys if you change the minion id
19:04 babilen I'd much rather maintain lists of "groups: [minion1, minion2, ....]" on the master and trust that than use grains.
19:05 _prime_ fwiw we use an external pillar for roles, which define what states get run, and it works very well.  We use pepa templates to define other pillars used by states based on roles as well.
19:05 babilen And how do you maintain those grains to begin with? Log into each minion? Set them during provisioning?
19:05 geekatcmu sync from master
19:06 babilen I mean I totally understand where it comes from and how grains are useful, but I just don't like it. This is information that should be maintained on the master (or in an external, trusted datasource the master has access to) than the minions themselves.
19:06 babilen geekatcmu: How do you sync them from master?
19:06 geekatcmu There's a state for that
19:06 quist left #salt
19:06 bhosmer joined #salt
19:07 babilen (I know)
19:07 babilen And how do you target that?
19:07 mpanetta Hmm
19:07 babilen Or rather: why not use the *same* targeting for things you would use the grain for to begin with?
19:07 TTimo joined #salt
19:07 geekatcmu Dunno, someone else wrote the code.  I *think* it's just an external pillar.
19:07 babilen And why does that information has to live on the minion?
19:08 babilen geekatcmu: No, you target the state just like every other state
19:08 babilen _prime_: Yeah, that is IMHO a much more sane approach
19:08 geekatcmu Because grain matching is built-in and trivial, and leads to more elegant top.sls
19:08 babilen So is pillar matching
19:09 geekatcmu Maybe I'll look at that.
19:09 mpanetta Thinking about it... If someone has already broken in to your minion and can change arbitrary grains (meaning they have root) you already have a much bigger issue on your hands at that point...
19:09 babilen You should maintain that information in pillar or an ext pillar (for which you can use everything: http://docs.saltstack.com/en/latest/ref/pillar/all/)
19:10 micah_chatt joined #salt
19:10 babilen mpanetta: Yes, but at that point only one minion would have been compromised
19:10 jdmf joined #salt
19:10 mpanetta How do you compromise a minion from another minion?
19:10 babilen And we have plenty of boxes on which customers have sudo, but should still not be able to see other peoples private keys, ssl certificates, ...
19:10 mpanetta Ah
19:11 mpanetta In our env everything is locked down, nobody has access.
19:11 mpanetta Different use case I guess
19:11 babilen Or "please give me database state for $OTHERCUSTOMER" or something like that.
19:11 riftman joined #salt
19:11 babilen Still, you wouldn't want to use information defined *on* the minions for this. There is no need for that.
19:12 Gareth morning morning
19:12 babilen Write it to a database and target based on data from an external pillar if simple node lists in nodegroups or pillars aren't working for you.
19:12 goodwill joined #salt
19:13 babilen I mean it isn't too hard to write "if 'develop' in salt['pillar.get']('roles', [])" or target based on pillar values in the top.sls
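For comparison with grain targeting, pillar-based matching in top.sls looks like this (role and state names invented):

```yaml
# top.sls -- assign states by a pillar value the *master* controls
base:
  'roles:database':
    - match: pillar
    - database
```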
19:14 babilen (sorry, I'm trying to take a very definite stance here so that this position becomes more obvious. In real life I can see it, but just as design pattern I don't think that this information should be maintained locally on the minion)
19:14 babilen Not saying that I have everything sorted out :D
19:14 mpanetta It is a definite area that requires thought
19:15 mpanetta I need to look at node groups
19:15 mpanetta Ok, already don't like nodegroups :P
19:16 mpanetta You have to restart the master for any changes to take effect :(
19:16 babilen It's just that a lot of people are adopting grains when they start using salt because: 1. It fits in with their well known "log into box, do things" approach to systems administration (not appropriate with salt) 2. It is well documented and advocated in the salt community 3. It is easy and pleasant to use
19:16 babilen IMHO a sane approach uses an external pillar for that
19:17 babilen (e.g. write you node → group mappings in a database and be done with it)
19:17 babilen There is nothing wrong with all those points above and I really do understand them, but I think that, by and large, it is not the right approach.
19:17 chitown any saltstack guys around?
19:17 mpanetta Ahh, yeah I kinda started doing that in my playground
19:18 mpanetta Only problem is I ended up with a lot of db rows with the same exact data in them except the node ID
19:18 mpanetta Which I guess is ok, but isn't very DRY.
19:18 mpanetta Maybe it is a limit of the mongodb ext pillar?  You can only match on node ID...
19:18 babilen I just think that "maintain the information in a datasource on the master (or accessible by it)" is the nicer, more secure and easier to maintain approach in the long run.
19:18 peters-tx Fedora EPEL 6 has 2014.7.0-3.el6, but the Topic says that 2014.1.13 is the latest; is the Topic out-of-date?
19:19 babilen mpanetta: Have minion ids and groups in one table each and join them?
19:19 babilen But as I said: I don't have everything figured out, just a very strong feeling that "this is wrong"
19:19 mpanetta Ahh!  Hmm...
19:20 mpanetta Yeah I can see where you are coming from.
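babilen's two-table suggestion can be sketched with sqlite: per-group data is stored once and minions join against it, instead of repeating the same row per minion id. The schema and data here are invented for illustration:

```python
# Sketch of the join babilen suggests; schema and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE groups (name TEXT PRIMARY KEY, states TEXT);
    CREATE TABLE minions (id TEXT PRIMARY KEY,
                          grp TEXT REFERENCES groups(name));
    INSERT INTO groups VALUES ('web', 'nginx,php'), ('db', 'postgres');
    INSERT INTO minions VALUES ('web-01', 'web'), ('web-02', 'web'),
                              ('db-01', 'db');
""")

def states_for(minion_id):
    """Look up a minion's state list via the group join."""
    row = conn.execute(
        "SELECT g.states FROM minions m "
        "JOIN groups g ON m.grp = g.name WHERE m.id = ?",
        (minion_id,)).fetchone()
    return row[0].split(",") if row else []
```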
19:20 babilen peters-tx: 2014.7 has been released to pypi and packages are trickling in. That version has not yet been officially released so: Both
19:20 babilen mpanetta: I seem to be rather on my own with that though :)
19:21 peters-tx babilen, Ok
19:21 mpanetta babilen: Well, to me it makes sense in that, custom grains data can't be trusted because it can't be verified.
19:21 linjan joined #salt
19:21 mpanetta Like you said, someone could modify it (given access)
19:22 babilen peters-tx: I am not sure why those packages made it into EPEL (I think that happened by mistake) as I thought that, for now, packages were uploaded to the respective testing repositories and would be moved to the "stable" repos around the official announcement.
19:23 babilen mpanetta: Yeah, that's one point. But then: You also have to maintain that information. And you really wouldn't want to login to your minions to set those! So you are at the point of "How do I target the states to set my grains?" and I would argue that you might use that targeting to begin with.
19:23 babilen But then pillars are easy to maintain and you have access to a bazillion external pillars ...
19:23 mpanetta babilen: I set them either at build time with salt-cloud or by using grains.setitems
19:24 mpanetta So no logging in required...
19:24 mpanetta Hmm
19:24 kballou joined #salt
19:25 babilen mpanetta: I am a bit split where the virtualization environment can actually do that, but my concerns still apply. In short I haven't quite made up my mind, but so far I tried not to introduce that into my environment (in particular because I have some minions on which globally untrusted people have root)
19:25 mpanetta Yeah that is a very good reason to not do it.
19:26 babilen IMHO it would be nicer for salt-cloud to be able to set "foo: bar" in "pillar adaptor X" during provisioning and you would then target based on that.
19:26 babilen Same interface, different place to store data.
19:26 babilen I think that is my main problem. This data should not be stored on the minion
19:27 babilen grains are minion local bits of information (ram, fqdn, ip addresses, ...)
19:28 mpanetta what I was doing was storing the list of states that needed to be run in mongodb, indexed by minion_id
19:28 mpanetta Only thing I did not like about that was the DRY issue I stated before...
19:28 mpanetta But if I modified the mongodb ext_pillar to be able to match on arbitrary pillar data...
19:28 mpanetta Instead of just node id...
19:28 mpanetta Hmm...
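A sketch of the ext_pillar mpanetta is considering: match on a pillar value (a hypothetical "role") rather than the minion id, with a plain dict standing in for the mongodb lookup. Salt calls a module-level `ext_pillar(minion_id, pillar, *args)` for each entry configured under `ext_pillar` on the master:

```python
# Sketch of a custom ext_pillar keyed on a hypothetical 'role' pillar
# value instead of the minion id; ROLE_STATES stands in for mongodb.

ROLE_STATES = {
    "web": {"states": ["nginx", "php"]},
    "db": {"states": ["postgres"]},
}

def ext_pillar(minion_id, pillar, *args):
    # one row of per-role data instead of one row per minion: DRY
    return ROLE_STATES.get(pillar.get("role"), {})
```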
19:29 babilen Which probably wouldn't have caused you any issues, but was just "ugly" presumably?
19:29 mpanetta I think it would work.
19:29 mpanetta yep
19:29 mpanetta just 'ugly'
19:29 babilen yeah, I can see how you would think that
19:29 mpanetta It worked perfectly in fact :P
19:29 babilen I just think its less ugly than "ask the minion which states it wants" :)
19:29 mpanetta Yeah
19:29 mpanetta It is
19:30 desposo joined #salt
19:30 TTimo joined #salt
19:30 Ryan_Lane babilen: heh, you and I disagree on a lot :)
19:30 babilen I am very "top-down" these days when it comes to salt. I, naturally, exploit grains whenever that is appropriate (different software for different hardware, virtualization environments, ...), but not for deciding which states to apply to which minion.
19:31 Ryan_Lane I <3 using grains
19:31 Ahlee i </3 grains.
19:31 babilen Ryan_Lane: I think that is good. This is one area in which I have, for once, strong feelings and it is always good if you can argue about things.
19:31 Ryan_Lane with the knowledge that protecting pillars with grains will lead to data leakage
19:31 Ryan_Lane of course, I also think using a master for the most part is a bad idea
19:31 babilen But do you understand the points I am making?
19:31 Ahlee they get stuck too often, you have to trust a server to have the value, up until very recently there was no way of appending, so it was destructive
19:32 Ryan_Lane I protect my pillars in S3 using IAM policy
19:32 Ryan_Lane and run masterless
19:32 bhosmer joined #salt
19:32 Ryan_Lane and everything in my states (and orchestration) uses grains for service-name, environment, region, etc.
19:32 Ahlee I run master, with states configured to update pillar and file_roots from an internal git repo
19:32 Ryan_Lane I generate the grains based on the hostname, which is defined from our autoscaling group names
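Ryan_Lane's hostname-to-grains generation might look roughly like this; the `<service>-<environment>-<region>-<nn>` naming scheme is an assumption for illustration, not his actual convention:

```python
# Hypothetical sketch: derive grains from an autoscaling-style hostname
# of the form "<service>-<environment>-<region>-<nn>".
import socket

def hostname_grains(hostname=None):
    if hostname is None:
        hostname = socket.gethostname()
    parts = hostname.split("-")
    if len(parts) < 3:
        return {}  # hostname doesn't follow the scheme
    return {
        "service_name": parts[0],
        "environment": parts[1],
        "region": parts[2],
    }
```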
19:32 babilen Ryan_Lane: I would love to learn your environment one day as I have the feeling that it would be great to learn "the other way" :)
19:33 Ryan_Lane babilen: http://ryandlane.com/blog/2014/08/26/saltstack-masterless-bootstrapping/
19:33 Ryan_Lane I have a bunch of blog posts about what we're doing :)
19:33 Ahlee i can't grok the idea of masterless salt, but we use salt 99% for RPC and 1% for systems config :)
19:34 Ryan_Lane the further I go along the less I need a master. we /may/ eventually add a master for remote execution
19:34 aparsons joined #salt
19:34 rap424 joined #salt
19:34 babilen I love my master ...
19:34 * babilen laughs
19:34 Ryan_Lane etcd/zookeeper are *much* better than mine
19:34 babilen Ah, this is great. Thanks guys for being such an awesome community.
19:35 SheetiS :D
19:35 Ryan_Lane we'll probably release a fun daemon for zk/etcd + salt for service discovery at some point
19:35 babilen I like fun
19:35 SheetiS Ryan_Lane: I'd very much look forward to seeing that. :D
19:36 Ryan_Lane the biggest problem with salt's master is that it can't be relied on
19:36 Ryan_Lane zk/etcd can be
19:36 Ryan_Lane even with a full availability zone outage
19:37 Ryan_Lane maybe the next release of salt will fix this, but it still won't be as mature as zk, for instance
19:38 * babilen should play with zk and consul soonish
19:41 Ryan_Lane I wish salt had zk support like it has etcd support
19:42 Ryan_Lane it does have a very small amount of support: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.zk_concurrency.html
19:42 Ryan_Lane that state is *awesome*
19:45 chitown are there any guidelines on how often one should update grains?
19:45 wincus left #salt
19:46 chitown i was told that originally they were mean for relatively static data
19:46 chitown that seems to have changed
19:46 Ryan_Lane they are meant for that
19:46 Ryan_Lane are they being used in some non-static way now?
19:46 chitown well, you can set them more dynamically
19:46 Ryan_Lane they get updated automatically on highstate runs
19:46 chitown but, not sure if thats advised or not
19:46 Ryan_Lane if you set a grain, it will automatically reload your grains
19:47 Ryan_Lane if it doesn't that's a bug that needs to get fixed
19:47 chitown yes. thats all working. im just trying to determine if thats really "tbe best course of action"
19:47 Ryan_Lane I think it's a good pattern
19:47 chitown i am using grains to determine if a particular host needs an upgrade
19:48 Ryan_Lane chitown: take this for example: http://ryandlane.com/blog/2014/09/02/saltstack-patterns-grainstate/
19:48 chitown a runner is updating grains as it determines that the upgrade on the given machine is complete
19:48 Ryan_Lane and this: http://ryandlane.com/blog/2014/08/29/a-saltstack-highstate-killswitch/
19:48 Ryan_Lane chitown: yeah, that seems like a reasonable approach
19:48 chitown ya, thats what im doing :)
19:48 chitown cool
19:49 chitown thanks!
19:49 Ryan_Lane yw
19:49 murrdoc did the killswitch ship ryan ?
19:49 Ryan_Lane murrdoc: no, but the approach I'm using works :)
19:49 bhosmer joined #salt
19:49 murrdoc cool cool
19:49 murrdoc anyone written custom functions for salt.mine ?
19:49 chitown murrdoc: lol. that was my next question :)
19:50 murrdoc TOO SLOW
19:50 murrdoc :D
19:50 bastion1704 joined #salt
19:51 murrdoc well from the looks of it any module can be a mine function
19:51 chitown thats super cool :)
19:51 iggy you just have to enable it
19:51 murrdoc in the minion conf yeah
19:52 iggy we have a custom module as a mine function
19:54 felskrone joined #salt
19:55 murrdoc is there a required format that the function has to return ?
19:55 iggy not that I've seen
19:55 iggy ours is a dict
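A sketch of a custom module used as a mine function, per iggy's description: the module and function names here are hypothetical, and the return value has no required format (a dict is simply convenient to query later):

```python
# Hypothetical custom module (e.g. _modules/netinfo.py) used as a mine
# function; the return value is unconstrained, a dict is just handy.

def interfaces():
    """Return data for the mine to cache; a real implementation would
    read this from the system instead of hardcoding it."""
    return {"eth0": {"ip": "10.0.0.5", "mtu": 1500}}
```

It would then be enabled in the minion config with `mine_functions: {netinfo.interfaces: []}`, after which `mine.get` can query it like any built-in function.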
19:56 thedodd joined #salt
19:57 glyf joined #salt
19:58 elfixit joined #salt
19:59 MrFuzz joined #salt
20:01 alexr joined #salt
20:04 Gareth murrdoc: I added some code awhile back, which should go on Lithium, to disable state runs using a very similar approach to Ryan_Lane's idea.
20:05 murrdoc can i see the pull request ? or the change
20:05 murrdoc maybe we can put it into our core
20:05 Gareth let me dig it up.
20:06 CeBe1 joined #salt
20:06 Gareth murrdoc: https://github.com/saltstack/salt/pull/16011
20:07 murrdoc https://github.com/garethgreenaway this is awesome!
20:07 murrdoc :)
20:07 murrdoc i cosign that sentiment
20:08 Gareth :)
20:08 murrdoc so have to specify grains
20:09 murrdoc state_runs_disabled grain has to list out all the disabled states ?
20:09 Gareth yeah.
20:10 murrdoc so to disable a state, update grain and restart minion
20:10 murrdoc ++
20:11 Gareth if you use the grain module to add it, you shouldn't have to restart.
20:18 Gareth murrdoc: there too: https://github.com/saltstack/salt/commit/a948da60d57addeb14b205ddc10084a69f7d5615 https://github.com/saltstack/salt/commit/c6dae7013f6bf965c88f64e5d182fa52ece4d1ce
20:21 ghanima joined #salt
20:21 ghanima hello all
20:22 ndrei joined #salt
20:22 littleidea joined #salt
20:27 StDiluted joined #salt
20:28 murrdoc thats not in the pull ?
20:31 Gareth murrdoc: those additions were in an earlier pull request, the second one was a new approach to disabling.
20:31 murrdoc this is slated for lithium tho
20:32 Gareth Yup.
20:32 vlcn man, today is not my day
20:32 srage joined #salt
20:33 vlcn https://gist.github.com/kelchm/1b6d3a859b3299a2e4a9
20:35 tkharju joined #salt
20:46 perfectsine joined #salt
20:48 giantlock joined #salt
20:51 ghanima was wondering if I can ask your opinion about how to go about this. I have nrpe agents installed on all my salt minions and I want to be able to access the data generated via nrpe agents. I found an article that shows how you can expose these agents via salt mine, but the problem is that I am not sure how to represent the data
20:52 ghanima the config.get function seems to expect some type of string but I am not sure how the data is represented
20:52 ghanima just want to get some guidance, when I create data to be stored in the mine, on how I should represent that data in order to query it
20:52 davet joined #salt
20:52 littleidea joined #salt
20:53 alexr__ joined #salt
20:55 aparsons_ joined #salt
20:56 murrdoc where do you want this data to go
20:57 ghanima just want them to be sorted in salt mine for now
20:57 ghanima and be able to query them via salt
20:58 murrdoc you have to write a custom module to get that data from nrpe
20:58 murrdoc then write a minion config entry to pull that data in
20:58 vlcn how can I figure out what is actually causing this?  I didn't change anything in pillar -- only change was an upgrade to 2014.7
20:58 vlcn https://gist.github.com/kelchm/3f9db4e576f6142c1ccc
20:58 murrdoc its all dictionaries all the way
20:58 glyf joined #salt
20:59 ghanima so there is an indivdual key -> val pair
21:00 ghanima so every nrpe check should be its own entry in the dictionary I presume
21:03 murrdoc yeah
21:07 housl joined #salt
21:08 iggy ghanima: just out of curiosity... why?
21:09 Rucknar joined #salt
21:10 thedodd joined #salt
21:11 ghanima iggy: sorry don't mean to be rude but are you asking why I am trying to get nrpe data
21:11 ghanima or why I care about the format
21:12 ghanima To the latter question
21:12 iggy take your pick
21:12 ghanima I am not sure once I have that data in salt.mine
21:12 ghanima how to properly query it
21:12 ghanima I am executing a shell command and having it spit out all of this data
21:12 iggy mine.get('target', 'function')
21:13 iggy where function matches what you set in mine_functions in the minion config
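Putting iggy's two pieces together, assuming a hypothetical `nrpe_checks.results` custom function:

```yaml
# minion config sketch; 'nrpe_checks.results' is a hypothetical custom
# function name
mine_functions:
  nrpe_checks.results: []
mine_interval: 15    # minutes between mine refreshes

# queried later from a state or template with:
#   {% set checks = salt['mine.get']('web*', 'nrpe_checks.results') %}
```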
21:14 ghanima So the command I am running is check_mk_agent and its output is multiline output that has some level of delineation. When I run my function
21:14 iggy but yeah... why do you even want all that data
21:14 ghanima it shows all the data produced by the function
21:14 iggy "why do you want that data" "because it shows me the data"
21:14 iggy ...
21:15 iggy I'm trying to think about it this way... what are you actually trying to achieve, not how are you trying to achieve it
21:15 iggy some people get tunnel vision
21:15 ghanima For now I want the ability to cache monitoring metrics so I don't have to call the nrpe agent and reduce the amount of calls I make on the minion
21:16 iggy but... why?
21:17 ghanima to reduce the amount of calls I make on the minion
21:17 iggy but what are you doing that is going to require so many calls on the minion?
21:17 ghanima the time that it takes to execute every NRPE check and wait for a response
21:17 iggy I feel like a dentist right now
21:18 ghanima has been problematic. I have 6 to 16 checks per host on avg with 720 hosts
21:19 ghanima was thinking it would be better to configure a polling mechanism and set that to a salt mine
21:19 iggy if you are trying to use salt as a monitoring system... don't... if you're doing this for some other reason... I don't really get it, but whatever
21:19 ghanima iggy: No I am not using this as a replacement for monitoring, but I would like to to create salt states that check if a service is running or not and take action if it is not
21:20 ghanima I am not there yet but that is my intended goal.
21:20 ja-s joined #salt
21:22 ghanima iggy: to your point about not using salt to replace monitoring: why? the documentation on the event bus and overstate seems like it will have the ability to submit and query events in realtime
21:22 iggy I don't see how running all that stuff you had the other day helps toward that goal (at least not any more than the service module or other salt built-in functionality)
21:23 ghanima iggy: because executing the command on the minion keeps timing out when trying to get a status across all the machines in my environment.
21:23 iggy it surely can handle events and schedules and just about anything else you throw at it... that doesn't necessarily mean it's good at handling the quantity of data that a monitoring system is going to generate
21:24 ghanima iggy: I am trying to find ways to reduce the amount of times I make a minion call if I don't neccessarily need realtime data
21:24 ghanima sorry don't mean to be difficult just trying to explain my pickle
21:24 jalbretsen joined #salt
21:24 iggy but you're talking about reacting to service state... you _want_ real time data for that
21:25 ghanima iggy: so does saltstack have a whitepaper as to the limits of scale for events and scheduling
21:25 iggy you're just getting that data wrong
21:25 Mso150 joined #salt
21:25 ghanima if there is a best practice would be happy to try to limit to that
21:26 iggy this is an awkward answer, but if you don't know if you can hit Salt's limits... you probably can't
21:26 nitay joined #salt
21:26 nitay how do I use salt mine to get ec2 public address?
21:27 nitay mine.get network.ipaddrs etc all give private ips
21:27 spookah nitay: use the ec2 api?
21:27 iggy I don't mean that to sound condescending... but by the time you get to the scale that Salt can't handle, you know enough about Salt to know what it can/can't handle
21:27 nitay spookah: what do u mean?
21:27 iggy nitay: there's something in salt-contrib to do it
21:27 iggy although I'm not a big fan of how it works... but it does work
21:28 nitay iggy: got a pointer?
21:28 aquinas joined #salt
21:28 smcquay joined #salt
21:28 iggy the salt-contrib git repo?
21:28 aquinas_ joined #salt
21:28 iggy uhh... github.com/salt/salt-contrib I think?
21:28 nitay an example of how to use it meant
21:29 iggy saltstack/salt-contrib
21:29 iggy always forget it's saltstack not salt
21:30 ghanima iggy: fair statement, but if your previous statement was to suggest that I should make sure the amount of data I am transmitting into salt scales appropriately, I suspect that this analysis has been done on some level
21:30 iggy https://github.com/saltstack/salt-contrib/blob/master/grains/ec2_info.py drop that in /srv/salt/_grains (or wherever) and then you'll have the ec2 info in your grains
21:31 iggy ghanima: I doubt an in depth analysis has been done... but there are a few large companies that have hit certain limits of salt (some zeromq limits, some salt memory usage limits, python GIL limits, etc.)
21:32 spookah nitay: GET http://169.254.169.254/latest/meta-data/local-ipv4 for private address
21:32 spookah nitay: GET http://169.254.169.254/latest/meta-data/public-ipv4 for the public address
21:33 twobitsp1ite Seth? I forgot what you said your nick was... This is Isaac.
21:33 platoscave joined #salt
21:34 mbbc joined #salt
21:34 twobitsprite whiteinge: Seth?
21:34 nitay spookah: yeah i meant within salt, thx anyways
21:34 nitay iggy: awesome that works, thx!
21:35 spookah nitay: execute the command on the host?
21:37 roolo joined #salt
21:38 thayne joined #salt
21:42 whiteinge twobitsprite: hi, Isaac
21:45 toddnni joined #salt
21:46 twobitsprite whiteinge: ahh, good I remembered it... anyway, just wanted to let you know after fixing that .el5 in the sls it worked like a champ
21:47 _JZ_ joined #salt
21:47 beneggett joined #salt
21:49 platoscave joined #salt
21:50 kermit joined #salt
21:52 giannello joined #salt
21:54 giannello left #salt
21:55 whiteinge twobitsprite: bam!
21:55 whiteinge Thanks for the update.
21:56 thedodd joined #salt
21:57 mgw joined #salt
21:58 robawt whiteinge: wuddup wuddup
21:58 bigpup joined #salt
21:59 TheoSLC robawt: Fixed my python pillar problem.  Most of my script wasn't in def run().  So my passed variables were not available.  Fixed by placing my entire script in def run(). :)
22:01 CeBe joined #salt
22:01 robawt nice TheoSLC
22:01 robawt sorry i didn't get back
22:01 CeBe joined #salt
22:04 babilen TheoSLC: You can actually use lots of functions. It's just that all the dunder dictionaries (__salt__, __pillar__, ...) will be monkey patched later on (yeah, I know) and are not available at evaluation time, so you cannot use them in module level variables (they, essentially, all have to be within the scope of a function)
22:04 babilen s/pillar// in this case
22:04 kermit joined #salt
22:05 kermit joined #salt
22:05 whiteinge robawt: hey, man! How goes?
22:05 robawt whiteinge: goes well :D
22:06 bhosmer joined #salt
22:07 mohae joined #salt
22:08 istram joined #salt
22:09 Rucknar joined #salt
22:14 bigpup joined #salt
22:16 eightyeight joined #salt
22:17 linjan joined #salt
22:22 Singularo joined #salt
22:25 perfectsine joined #salt
22:30 bigpup joined #salt
22:34 perfectsine joined #salt
22:36 perfectsine_ joined #salt
22:38 ghanima is there a way to restart minions from the master I presume using the service function is the best way
22:40 murrdoc http://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.manage.html
22:42 ghanima murrdoc: is the manage runner function available in 2014.1.13
22:42 murrdoc uh i d think so
22:43 murrdoc the page lists if/when something isnt ready for a version
22:43 ghanima From my master I am getting this error Function 'manage.not_present' is unavailable
22:43 ghanima I am running the command
22:43 ghanima salt-run manage.not_present
22:46 murrdoc hmm then its probably not in your version
22:46 yomilk joined #salt
22:47 bigpup joined #salt
22:48 iggy the docs are incomplete on that one (it is definitely not listed in the docs for 2014.1.x)
22:48 iggy I certainly don't feel like doing a PR for it at this point
22:48 iggy I'd rather they hurried up and got 2014.7.1 out the door than fixing docs issues for old versions
22:49 nitti_ joined #salt
22:51 jagardaniel hi, just a small question :) how do i pass a variable from a reactor into the state that it is executing?
22:55 iggy figure out what data is available (generally using the event_listen.py script), and then set that in the arg list
22:55 bhosmer joined #salt
22:57 gildegoma joined #salt
22:59 dave_den joined #salt
23:00 diegows joined #salt
23:03 bigpup joined #salt
23:08 jagardaniel iggy: i'm more curious how the syntax should look like. I'm using webhooks to trigger the reaction (it looks like this now http://pastie.org/private/lbjnny7avj99qxroaorg)
23:09 jagardaniel so basically, i just want to pass a post-value into the state that i'm calling (states.game.instance)
23:09 smcquay_ joined #salt
23:09 intellix joined #salt
23:09 smcquay_ joined #salt
23:09 sfadam joined #salt
23:11 pdayton joined #salt
23:13 rattmuff joined #salt
23:16 redkrieg joined #salt
23:17 housl joined #salt
23:19 bigpup joined #salt
23:19 redkrieg Hi all, I've been fighting with a salt state today and I'm at my wit's end as to why it isn't working.  I'm using file.replace and looking for the string below.  When I use this with re.sub it works every time, but I can't get it to work with salt.  I did have a yaml parse problem that I resolved by subbing \\ for \.
23:20 redkrieg title CentOS \(2.6.32[^\n]+\n\s+root[^\n]+\n\s+kernel\s+/vmlinuz-2.6.32[^\n]+\n\s+initrd\s+/initramfs-2.6.32[^\n]+\n
23:20 Rucknar joined #salt
23:20 redkrieg in my salt state it appears as     - pattern: "title CentOS \\(2.6.32[^\\n]+\\n\\s+root[^\\n]+\\n\\s+kernel\\s+/vmlinuz-2.6.32[^\\n]+\\n\\s+initrd\\s+/initramfs-2.6.32[^\\n]+\\n"
23:20 whiteinge jagardaniel: the best way is to use the ``pillar`` kwarg to state.sls
23:21 redkrieg does anyone see anything obvious I'm doing wrong?
23:21 whiteinge jagardaniel: example here (ctrl-f search for 'kwarg') http://docs.saltstack.com/en/latest/topics/reactor/index.html
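Applied to jagardaniel's webhook reactor, the kwarg approach would look roughly like this (the target and payload keys are guesses based on the paste):

```yaml
# reactor sls sketch: forward webhook POST data to the state run as
# pillar ('game*' target and payload keys are guesses from the paste)
run_game_instance:
  local.state.sls:
    - tgt: 'game*'
    - arg:
      - game.instance
    - kwarg:
        pillar:
          post_data: {{ data['post'] }}
```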
23:22 jagardaniel ah i see! thank you whiteinge :)
23:22 nitay left #salt
23:24 rattmuff joined #salt
23:25 whiteinge redkrieg: instead of quoting the whole thing and escaping values, try yaml's multiline syntax instead
23:25 whiteinge it would at least simplify your escaping
23:27 redkrieg whiteinge: thanks, I didn't know you could do that in yaml.  giving it a shot now
23:29 whiteinge got booted from the wireless...
23:29 whiteinge https://www.irccloud.com/pastebin/nxV1eViW
23:29 whiteinge redkrieg: ^^ example
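The pastebin may rot, but the shape of the fix is a YAML block scalar: the regex (redkrieg's, verbatim) goes in unquoted with single backslashes. The state id, file path and replacement are assumptions:

```yaml
# block-scalar sketch; '|-' takes the line verbatim, so no
# backslash-doubling is needed
grub_kernel_cleanup:
  file.replace:
    - name: /boot/grub/grub.conf
    - pattern: |-
        title CentOS \(2.6.32[^\n]+\n\s+root[^\n]+\n\s+kernel\s+/vmlinuz-2.6.32[^\n]+\n\s+initrd\s+/initramfs-2.6.32[^\n]+\n
    - repl: ''
```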
23:31 younqcass_ joined #salt
23:31 redkrieg whiteinge: awesome, I'm trying a highstate with that now
23:34 marv__ joined #salt
23:35 bigpup joined #salt
23:36 rattmuff joined #salt
23:41 yomilk_ joined #salt
23:43 marv__ joined #salt
23:45 redkrieg whiteinge: that did it.  thank you so much, I've been banging my head against this for hours
23:45 whiteinge woot!
23:45 whiteinge yaml strikes again!
23:46 whiteinge hm. we should add that to the docs...
23:46 hal58th whiteinge: I totally never thought about doing that. Some guy had a similar problem yesterday because he had a } in his pattern.
23:47 redkrieg this article would probably be a good place...  I combed it looking for an answer earlier: http://docs.saltstack.com/en/latest/topics/troubleshooting/yaml_idiosyncrasies.html
23:48 rattmuff joined #salt
23:49 whiteinge i am convinced YAML will play a role in the End of Days. but i do not know if that role is for good or for evil...
23:49 robawt whiteinge++
23:49 robawt skynet will be configured by YAML
23:49 MugginsM joined #salt
23:49 murrdoc as it should be
23:49 murrdoc :D
23:49 murrdoc sup whiteinge !
23:49 murrdoc (puneet from training in lax)
23:50 murrdoc just saying hi
23:50 whiteinge hey!
23:50 murrdoc hows things man
23:50 murrdoc testing out 2014.7.0
23:50 murrdoc when is the compound matching coming for pillars
23:51 bigpup joined #salt
23:52 whiteinge murrdoc: matching pillar values inside a compound command?
23:53 murrdoc i have a use case, where based on the os, i want to use separate pillars
23:53 murrdoc i was hoping to do this in the top.sls in pillars
23:54 murrdoc i may have misread http://docs.saltstack.com/en/latest/topics/releases/2014.7.0.html
23:54 iggy top.sls compound/pillar matching should be fine
23:54 pdayton joined #salt
23:54 babilen yeah, already use that
23:55 whiteinge ah, yeah. that note just affects mine and peer. you can use compound matching and OS-specific grains in pillar and in the pillar top file now.
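So the OS-based pillar assignment murrdoc wants can go straight into the pillar top file (pillar file names invented):

```yaml
# pillar top.sls sketch: OS-specific pillar via grain/compound matching
base:
  'os:Debian':
    - match: grain
    - debian_settings
  'G@os:CentOS and G@osrelease:6*':
    - match: compound
    - centos6_settings
```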
23:55 murrdoc just awesome
23:55 whiteinge that warning caused a bit of a stir because it's easy to mis-read  :)
23:56 murrdoc 'Compound and pillar matching for normal salt commands are unaffected.' this is new
23:57 iggy yes, he went back and clarified it a bit
23:58 iggy the original version was actually very misleading
23:58 iggy (as was his original mention of it in the IRC channel for which we got our torches and pitchforks)
23:59 murrdoc his ?
23:59 iggy I shan't name names
