
IRC log for #salt, 2014-12-02


All times shown according to UTC.

Time Nick Message
00:01 TK_ joined #salt
00:06 wt joined #salt
00:06 wt Is anyone having problems with the gitfs pillar module in 2014.7.0?
00:09 genediazjr joined #salt
00:11 Eugene Nothing really mattress
00:14 bfoxwell joined #salt
00:15 linhm oops my bad. error in the pem file.
00:16 beneggett joined #salt
00:16 beneggett How can i use pillar data within pillar files?
00:17 beneggett I'd like to use a user's name in paths, etc. Like => app_directory: /home/{{ pillar['app']['name'] }}/application or something
00:18 beneggett I know how to use grain info in pillars, just not pillar info
00:18 wt beneggett, you can't really do that in the pillar files
00:18 wt you could do that in states though
00:19 viq joined #salt
00:23 wt I am getting a bunch of this in logs: If no other git process is currently running, this probably means a
00:23 wt git process crashed in this repository earlier. Make sure no other git
00:23 wt process is running and remove the file manually to continue.
00:23 wt 2014-12-02 00:20:23,846 [root                                       ][ERROR   ] Unable to checkout branch prod: 'git checkout origin/prod' returned exit status 128: fatal: Unable to create '/var/cache/salt/master/pillar_gitfs/36e614ed288b2ad4ec04f1a110686acf79fb717918e8d8d9c08f905c37472ace/.git/index.lock': File exists.
00:23 wt If no other git process is currently running, this probably means a
00:23 wt git process crashed in this repository earlier. Make sure no other git
00:23 wt process is running and remove the file manually to continue.
00:23 wt 2014-12-02 00:20:23,846 [root                                       ][ERROR   ] Unable to checkout branch prod: 'git checkout origin/prod' returned exit status 128: fatal: Unable to create '/var/cache/salt/master/pillar_gitfs/36e614ed288b2ad4ec04f1a110686acf79fb717918e8d8d9c08f905c37472ace/.git/index.lock': File exists.
00:23 wt shoot, sorry
00:24 wt I didn't mean to paste that many lines
00:24 notnotpeter Is anyone doing basic automated testing with their SaltStack configs? Specifically something like a jenkins job which runs on commit catching trivial jinja or pillar whitespace errors?
00:24 wt And my salt-master is slow.
00:24 gwmngilfen joined #salt
00:25 nitti joined #salt
00:28 Ouzo_12 joined #salt
00:30 wt I think there is something else wrong with s3fs
00:35 hal58th1 beneggett: pillar look ups do not work in pillar. You'd have to recursively compile your pillar data somehow and Salt just can't do that.
00:35 TheThing joined #salt
00:36 beneggett hal58th1: thanks. That's what I was afraid to realize
00:36 beneggett hal58th1: sounds like I'm better off using grain info, or doing some fancy includes
00:36 aurynn grains feed pillars feed states is kind of how you have to go
00:36 beneggett aurynn: The order of the saltstack
00:36 beneggett aurynn: thanks
00:37 hal58th1 beneggett: both solutions work. good luck
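(A minimal sketch of the "grains feed pillars" direction aurynn mentions, since pillar .sls files are rendered with Jinja and can read grains; the grain name and paths below are illustrative placeholders, not anything from the channel:)

    # hypothetical /srv/pillar/app.sls -- the 'app_user' grain is an assumption
    app:
      name: {{ grains.get('app_user', 'nobody') }}
      directory: /home/{{ grains.get('app_user', 'nobody') }}/application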
00:38 aqua^mac joined #salt
00:39 huleboer joined #salt
00:42 hal58th joined #salt
00:46 mattjb joined #salt
00:48 tkharju joined #salt
00:50 wavis joined #salt
00:50 abele joined #salt
00:51 lkannan joined #salt
00:51 CaptTofu joined #salt
00:51 Yufei joined #salt
00:52 akafred joined #salt
00:54 otter768 joined #salt
00:55 gladiatr joined #salt
00:56 alexthegraham joined #salt
00:56 druonysus joined #salt
00:57 alexthegraham Anyone around who's got some good details on the 'pkg' module? I've got a few questions.
01:00 gladiatr what os/distribution?
01:00 alexthegraham CentOS.
01:00 aqua^mac joined #salt
01:00 gladiatr I wouldn't consider myself an expert, but what's your situation?
01:01 alexthegraham I'm trying to create a state that'll install a package from config'd repos if it's available, and run a command to install it from an NFS share if it's not.
01:01 TK_ joined #salt
01:01 gladiatr are you running 2014.7?
01:02 alexthegraham But since pkg.available_version returns blank if the package is installed and blank if the package doesn't exist, I'm struggling to find a distro-independent way to test whether the package is available before running the command.
01:02 alexthegraham Some minions are 2014.7, some are not there yet.
01:02 cpowell joined #salt
01:02 alexthegraham I've seen the "onfail" addition.
01:03 alexthegraham Which is awesome, but since not all minions have it yet...
01:04 alexthegraham It would be ideal for pkg.available_version to return the available version regardless of whether it's installed, but it doesn't.
01:04 gladiatr I hear you.  Probably that way because of the sheer amount of data that would be tossing around the event bus :)
01:05 alexthegraham I'd argue that passing accurate data around is better than passing *no* data around, but that's moot at this point. Suggestions?
01:06 gladiatr gimme a sec here.
01:09 alexthegraham This is what I've got now: https://gist.github.com/alexthegraham/9074072e268c323e00bc
01:10 gladiatr ok, so this is kinda gross, but you can do something along the lines of: {% set pkg_available_version = salt['cmd.run']('repoquery [packagename] --queryformat "%{VERSION}"') %} at the top of your state.sls, then wrap your pkg.installed pieces in the appropriate part of a jinja if/else
01:12 xenoxaos joined #salt
01:12 gladiatr yeah.  You could plug that right in to set the pkg_available_version.
01:13 alexthegraham Is that distro independent, though? I'm looking to use one state across RHEL, SUSE, and Debian systems.
01:14 gladiatr lol.  No.  That's why I asked what distro you were using earlier.  At the moment, I'm not quite a complete idiot when it comes to apt/dpkg things, and I haven't touched a SuSE box in over a decade
01:14 gladiatr Very much yum-specific.
01:14 alexthegraham Lucky you.
01:14 gladiatr lol
01:16 basepi joined #salt
01:16 alexthegraham When salt works, it's helping me forget all the distro-specific commands, but there are still a number of things that don't work the way I'd expect and cause headaches. Querying for available packages is definitely one of them.
01:16 ingwaem joined #salt
01:17 gladiatr looks like aptitude has the following incantation that would provide the same info: aptitude search --display-format '%V' --disable-columns "<package name>"
01:18 gladiatr alexthegraham, heh.  I hear you.  The real fun is going to be when I turn to automating my OpenBSD network systems--that is going to be quite the adventure.
01:18 ingwaem joined #salt
01:18 alexthegraham Yeah, I *could* set up a series of if/else statements and use a bunch of distro-specific commands to query for available packages, but I'm using Salt to get away from that nonsense.
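(For reference, a rough sketch of the jinja branching gladiatr suggested, combining the repoquery and aptitude incantations above; the package name, NFS path and variable name are placeholders and this is untested:)

    {% set pkg_available_version = '' %}
    {% if grains['os_family'] == 'RedHat' %}
    {%   set pkg_available_version = salt['cmd.run']('repoquery mypackage --queryformat "%{VERSION}"') %}
    {% elif grains['os_family'] == 'Debian' %}
    {%   set pkg_available_version = salt['cmd.run']("aptitude search --display-format '%V' --disable-columns mypackage") %}
    {% endif %}

    {% if pkg_available_version %}
    mypackage:
      pkg.installed: []
    {% else %}
    install-mypackage-from-nfs:
      cmd.run:
        - name: sh /mnt/nfs/packages/install-mypackage.sh   # placeholder NFS install command
        - unless: rpm -q mypackage || dpkg -s mypackage      # placeholder "already installed" check
    {% endif %}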
01:19 superted666 joined #salt
01:19 alexthegraham Any advice, basepi?
01:19 ingwaem joined #salt
01:20 gladiatr Well, as I like to admonish myself (often and vigorously of late), if Salt doesn't do exactly what I need it to do, it's because I haven't written that piece yet :)
01:20 alexthegraham Hah. So true.
01:20 ingwaem joined #salt
01:22 aurynn sounds like a job for a custom repository!
01:22 aurynn let the OS handle it :)
01:22 alexthegraham Or I could just re-write the OS not to need this package.
01:22 alexthegraham :)
01:23 robawt custom repos go a long way
01:23 robawt especially since salt makes it easy to manage repositories
01:26 alexthegraham robawt, while that may be true, what I'm really looking for is a distro-agnostic way to query available repos about whether a package is available, regardless of whether it's installed. That doesn't appear to be an option as of now.
01:27 gladiatr robawt, the glories of implementing a system based on such a device (salt) as one learns it.  Every 2-3 days, the urge to trash the whole thing and start over must be fought off when "Yes, This Is Truly The Magic I Need" moments arise.
01:27 robawt ha
01:27 robawt just remember config management is supposed to help you, not drive you crazy
01:28 notnotpeter alexthegraham: You could use something like map.jinja to specify the platform specific requirements/package names and methods to query them.
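(A minimal map.jinja along the lines notnotpeter suggests, keyed on os_family via grains.filter_by; the package names are illustrative assumptions:)

    # map.jinja -- hypothetical per-platform lookup table
    {% set pkgmap = salt['grains.filter_by']({
        'RedHat': {'pkg': 'mypackage-rpm-name'},
        'Debian': {'pkg': 'mypackage-deb-name'},
        'Suse':   {'pkg': 'mypackage-suse-name'},
    }, default='RedHat') %}

A state file would then import it with {% from "myformula/map.jinja" import pkgmap with context %} and install {{ pkgmap.pkg }}.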
01:28 ingwaem` joined #salt
01:30 jimklo_ joined #salt
01:30 alexthegraham Anyone have any speculation about the justification behind pkg.available_version returning blank if the current version is already installed?
01:31 aurynn I suspect it's a bug
01:32 MugginsM joined #salt
01:32 TK_ joined #salt
01:34 alexthegraham It's in the documentation that it's supposed to behave that way, I just can't imagine why.
01:34 ingwaem joined #salt
01:37 gladiatr 165 # available_version is being deprecated
01:37 gladiatr 166 available_version = latest_version
01:37 gladiatr modules/pkgin.py
01:39 ingwaem joined #salt
01:42 cpowell joined #salt
01:42 gladiatr alexthegraham, It makes some sense, if you think about it in terms of a state system expressing "This is what that minion should be"--the whole declarative vs. imperative thing.
01:44 ingwaem joined #salt
01:45 gladiatr unfortunately, especially when it comes to commercial (mal)ware, it isn't always so very straightforward to figure out a good way to interact with it declaratively.  I think that's where the (strong but not intransigent) philosophy of putting those parts together ourselves rather than having them in salt's core comes from.
01:47 alexthegraham I'll keep plugging away. Thanks for your help.
01:47 gladiatr Best of luck in your effort!
01:48 ingwaem joined #salt
01:50 huleboer joined #salt
01:50 ingwaem joined #salt
01:53 ingwaem joined #salt
01:53 ingwaem left #salt
01:55 ingwaem joined #salt
01:57 Zachary_DuBois If I wanted to say `for ip in grains['ip4_interfaces']['eth1']` in jinja, but I want to get that grain from all of the minions, how would I go about that?
01:57 wt joined #salt
01:59 bhosmer_ joined #salt
02:00 superted666 joined #salt
02:01 genediazjr joined #salt
02:04 rbstewart joined #salt
02:05 RedMEdic joined #salt
02:05 RedMEdic Hi!
02:06 RedMEdic It's so rare to find other people online who share my love of table salt
02:06 gladiatr Zachary_DuBois, have you looked at mines yet?
02:07 Zachary_DuBois No
02:07 gladiatr check this page out: http://docs.saltstack.com/en/latest/topics/mine/
02:08 gladiatr The mine allows you to specify functions whose output will be collected periodically by the salt-master and made available by way of the mine execution module.
02:12 Zachary_DuBois Ah
02:12 Zachary_DuBois Ok
02:12 Zachary_DuBois Thanks!
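(For the record, a small sketch of the mine setup for the eth1 question above; network.ip_addrs, mine_functions and mine_interval are the documented pieces, the interval and target are assumptions:)

    # minion config (or pillar): publish eth1 addresses to the mine
    mine_functions:
      network.ip_addrs:
        - eth1
    mine_interval: 60

    # then, in a template on any minion, iterate over everyone's reported addresses
    {% for minion_id, addrs in salt['mine.get']('*', 'network.ip_addrs').items() %}
    {%   for ip in addrs %}
    # ... use {{ ip }} here ...
    {%   endfor %}
    {% endfor %}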
02:20 ingwaem joined #salt
02:28 malinoff joined #salt
02:30 superted666 joined #salt
02:32 johanek joined #salt
02:42 genediazjr joined #salt
02:48 superted666 joined #salt
02:54 otter768 joined #salt
02:59 false joined #salt
03:00 false left #salt
03:02 superted666 joined #salt
03:07 Mso150 joined #salt
03:15 anotherZero joined #salt
03:15 TheThing joined #salt
03:18 elfixit joined #salt
03:25 Outlander joined #salt
03:26 ericof joined #salt
03:35 CeBe1 joined #salt
03:37 superted666 joined #salt
03:47 otter768 joined #salt
03:47 atbell joined #salt
03:50 hellome joined #salt
03:52 capricorn_1 joined #salt
03:58 Ryan_Lane joined #salt
04:04 jab416171 joined #salt
04:11 smcquay joined #salt
04:12 nitti joined #salt
04:19 sumpos joined #salt
04:24 ndrei joined #salt
04:25 lahwran joined #salt
04:28 superted666 joined #salt
04:36 Furao joined #salt
04:52 TK_ joined #salt
04:54 rap424 joined #salt
04:56 desposo joined #salt
04:58 kormoc Hey folks. I know I can configure the output of the salt-call command via /etc/salt/minion, but how do I configure the output of the salt command itself?
05:01 Furao kormoc: you can't
05:01 kormoc Hrm. Fair 'nuff. Thanks
05:01 Furao if you need formula for diamond we have extensive one :P
05:01 kormoc Heh, just starting to play with it all. We'll see :)
05:03 Furao you can write your own salt-call command with the python salt client and hardcode the output
05:03 kormoc well, I just switched from using raw salt-call commands (masterless) to using the master/minion setup
05:04 kormoc and I really liked being able to customize the salt-call output
05:04 ndrei joined #salt
05:04 Furao maybe a default outputter setting has been added to salt in 2014.7
05:05 kormoc sadly, salt --config-dir= also appears to not work
05:05 kormoc *sadly
05:06 Furao salt -c /path/to/confdir works for me
05:06 Furao you need a minion file in /path/to/confdir
05:07 TK_ joined #salt
05:08 kormoc Furao, https://gist.github.com/kormoc/3281b62a2acb859af517
05:09 johanek joined #salt
05:11 Furao the master is running on your osx box?
05:11 kormoc nope, but it's resolvable via 'salt'
05:11 Furao ah ok! you can use salt on the master
05:11 Furao but even in master-minion setup, on a minion you must use salt-call
05:12 Furao https://gist.github.com/bclermont/8016880ae02276fbd89d
05:12 kormoc oh!
05:12 Furao but this is an internal doc to hack salt and run locally, it might not apply to you
05:12 Furao but can give you some trick
05:13 kormoc Certainly. Thanks!
05:20 TK_ joined #salt
05:24 kermit joined #salt
05:35 TK_ joined #salt
05:40 Flusher joined #salt
05:40 atbell joined #salt
05:44 kormoc Furao, So then, am I correct in understanding that I can't from one minion request a task to be run on another minion?
05:47 Furao not directly like that
05:47 Furao but you can use salt reactor
05:48 Furao let’s say you have a monitoring system that detects that some cluster is broken and you have a salt .sls that fixes it; you can send a salt event back to the master, which will pick it up, trigger a new task and fix the cluster
05:49 malinoff kormoc, or you can configure peer communication
05:49 Furao malinoff peer.peer can be used for that
05:50 Furao but it feels dangerous, so I never turn it on
05:50 cpowell joined #salt
05:50 malinoff Furao, sure, it's just another option
05:51 malinoff you can write a dangerous .sls for reactor though
05:51 gladiatr joined #salt
05:51 Furao yes, but at least it’s “locked” on the master :)
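(A bare-bones sketch of the reactor flow Furao outlines; the event tag, paths and target are made-up placeholders:)

    # fired from the minion that notices the problem:
    #   salt-call event.fire_master '{"cluster": "db"}' 'mycorp/cluster/broken'

    # master config: map that tag to a reactor sls
    reactor:
      - 'mycorp/cluster/broken':
        - /srv/reactor/fix_cluster.sls

    # /srv/reactor/fix_cluster.sls: have the master run the fixing state
    fix_cluster:
      cmd.state.sls:
        - tgt: 'cluster*'
        - arg:
          - cluster.fix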
05:52 kormoc I'm basically playing with replacing Ansible. I have playbooks that I want to run on my laptop that modify state on my NAS.
05:52 ramteid joined #salt
05:52 malinoff lol, i've switched from salt to ansible :)
05:54 Furao if someone can rewrite salt in golang I’ll switch to that
05:54 malinoff why go?
05:54 malinoff asyncio is cool enough
05:54 kormoc malinoff, gotta play with them all to understand it :)
05:55 Furao faster, smaller footprint, concurrencies
05:55 Furao cp.
05:55 atbell joined #salt
05:55 Furao cp.* module is so slow
05:55 Furao our test suite takes almost 2 days to run
05:55 malinoff Furao, it's not python's fault :)
05:58 TyrfingMjolnir joined #salt
06:04 Furao well python isn’t fast (not pypy)
06:04 catpigger joined #salt
06:05 malinoff it can be, if the code is lazy
06:06 malinoff does salt work on pypy?:)
06:06 Furao i think that to gain speed benifit from pypy it need some changes
06:06 Furao benefits
06:07 malinoff maybe somebody tried that
06:09 Flusher joined #salt
06:09 Furao pypy is good at speeding up python code built for concurrency, gevent/tornado style
06:09 Furao http://morepypy.blogspot.com/2014/11/tornado-without-gil-on-pypy-stm.html
06:09 Furao salt uses multiprocessing
06:10 malinoff well, because of jit it can speed up all things, even django - http://speed.pypy.org/
06:10 jcockhren "even django" >_>
06:11 Furao I did some tests on few of our django apps and it was indeed faster
06:11 Flusher joined #salt
06:11 Furao but i had to fight with some dependencies
06:11 jcockhren seriously, I've been curious about salt on pypy for some time
06:11 Furao maybe zmq won’t work
06:11 malinoff jcockhren, i mean, pypy is not tied with async programming
06:11 malinoff pyzmq works on pypy
06:12 malinoff http://stackoverflow.com/questions/11831145/installing-zeromq-under-pypy
06:12 jcockhren malinoff: oh?! it does?
06:12 Furao oh
06:12 jcockhren hmmm
06:12 malinoff :D
06:13 malinoff i don't think salt will work on pypy though, due to its codebase
06:13 malinoff too many dirty hacks
06:15 malinoff https://github.com/saltstack/salt/blob/develop/salt/modules/cmdmod.py#L221 i don't think this will work on pypy as on cpython
06:19 superted666 joined #salt
06:20 dagrizbo_ joined #salt
06:21 jhauser joined #salt
06:22 pduersteler joined #salt
06:45 nethershaw joined #salt
06:57 Katafalkas joined #salt
07:00 superted666 joined #salt
07:03 colttt joined #salt
07:05 ericof joined #salt
07:22 hojgaard joined #salt
07:28 eject_ck joined #salt
07:33 Katafalkas joined #salt
07:34 lcavassa joined #salt
07:37 TK_ Can't establish a websocket connection to the salt api using websocket.create_connection('wss://localhost:8080/all_events/305209df3096fc09d0199b8d0d008853')
07:38 TK_ the api uses rest_tornado.
07:38 JlRd joined #salt
07:38 TK_ can anyone help me ?
07:40 Outlander joined #salt
07:48 Ryan_Lane joined #salt
07:53 felskrone joined #salt
07:54 aquinas joined #salt
07:56 slafs joined #salt
07:57 slafs left #salt
07:59 __gotcha joined #salt
08:02 bhosmer_ joined #salt
08:05 colttt joined #salt
08:07 \ask joined #salt
08:10 eject_ck joined #salt
08:10 lb1a joined #salt
08:15 Katafalk_ joined #salt
08:24 ice_ joined #salt
08:25 Outlander joined #salt
08:27 ice_ joined #salt
08:27 bersace left #salt
08:27 ice_ hello
08:33 fredvd joined #salt
08:35 eject_ck joined #salt
08:36 Katafalkas joined #salt
08:36 NaPs joined #salt
08:37 shorty_mu joined #salt
08:41 Flusher joined #salt
08:42 __gotcha joined #salt
08:42 PI-Lloyd joined #salt
08:43 dnai23 joined #salt
08:47 genediazjr joined #salt
08:47 AirOnSkin Hmm, the docs for salt.states.archive state for tar_options: "Required if used with archive_format: tar, otherwise optional. It needs to be the tar argument specific to the archive being extracted, such as 'J' for LZMA or 'v' to verbosely list files processed. Using this option means that the tar executable on the target will be used, which is less platform independent. Main operators like -x, --extract, --get, -c and -f/--file should not be
08:47 AirOnSkin used here."
08:47 monkey661 joined #salt
08:48 AirOnSkin Up until now I've used: tar_options: z
08:48 AirOnSkin That doesn't seem to work anymore. Now I NEED to use tar_options: -xz
08:49 AirOnSkin Only works that way. Are they not up to date with the docs? Is anyone aware that this behavior changed?
08:49 intellix joined #salt
08:50 oyvjel joined #salt
08:51 karimb joined #salt
08:54 Mso150 joined #salt
08:59 Furao AirOnSkin: someone changed the state and didn’t update the doc
09:00 Furao i’m the original author of that state and -x was implicit
09:03 badon joined #salt
09:03 Furao I don’t remember why I didn’t use the tarfile and zipfile python modules to make it cross-platform
09:05 AirOnSkin Furao: I see. Well, I don't mind either way... I just think it should be correct in the docs. I'd like to report the error somewhere, but couldn't find the appropriate place...
09:05 Furao github issue
09:05 Furao or create a PR
09:05 __gotcha joined #salt
09:05 AirOnSkin Furao: Ah, ok. Will do...
09:05 Furao you’re on 2014.7 ?
09:05 genediazjr joined #salt
09:08 Furao https://github.com/saltstack/salt/blob/2014.7/salt/states/archive.py#L191
09:08 Furao i remember why i didn’t use tarfile/zipfile: I needed support for rar
09:09 mfournier joined #salt
09:09 AirOnSkin Ah, somebody already reported it: https://github.com/saltstack/salt/issues/13077#issuecomment-62273805
09:10 AirOnSkin Furao: yes, I'm on 2014.7.0
09:13 pduersteler joined #salt
09:15 malinoff Furao, rarfile wasn't suitable for that?
09:16 Furao rarfile isn’t in stdlib
09:17 Furao or it was to support xz
09:17 Furao i did that 2 years ago i don’t remember much :)
09:20 pduersteler Hi all. I have some trouble putting pillar data into git. states work from git. Initially, I entered the wrong git repo. The change didn't affect anything so I dropped /var/cache/salt/master/pillar_gitfs. Now when starting the server via -l debug, I always see "could not update from remote <correct git repo>: fatal: git repo <earlier wrong git repo> does not seem to be a repository". Any hint?
09:22 geekatcmu joined #salt
09:22 pduersteler never mind, solved, restarted the master and cleared the cache dir beforehand again
09:23 AirOnSkin I'm in need of a second pair of eyes... I have the following state to setup vmware-tools in a minion: http://hastebin.com/duwujivahi.sm
09:24 AirOnSkin It works well for installing the tools, but now it seems, the second block is run even though i required ... wait.. I think I just answered my own question while writing this
09:25 AirOnSkin Ah no, see, the first block has an unless statement... but it seems the second block still gets executed eventhough it has a requirement in it...: http://hastebin.com/ahidejidij.vhdl
09:26 goal if I am using reclass as an enc, how can still target minions based on grains. Since I can't do all the 'P@os:(RedHat|Debian.*)' sort of stuff
09:26 Furao AirOnSkin: that formula will download and extract that file every time you run it
09:26 Furao don’t you want to run this only once or on upgrade?
09:27 AirOnSkin Furao: I only want to run it if it isn't installed (so yeah, only once). That's what the unless is for...
09:27 Furao then you need cmd.wait instead of cmd.run
09:28 Furao and watch archive: vmware-tools-source
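(Roughly what that looks like as a state; a reconstruction, since the hastebin paste isn't reproduced here, with placeholder source, hash and paths:)

    vmware-tools-source:
      archive.extracted:
        - name: /tmp/vmware-tools
        - source: salt://vmware/VMwareTools.tar.gz              # placeholder source
        - source_hash: md5=00000000000000000000000000000000     # placeholder hash
        - archive_format: tar
        - tar_options: -xz
        - if_missing: /etc/init.d/vmtoolsd    # skip once the tools are installed

    install-vmware-tools:
      cmd.wait:
        - name: /tmp/vmware-tools/vmware-tools-distrib/vmware-install.pl -d
        - watch:
          - archive: vmware-tools-source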
09:28 AirOnSkin Huh? Ok... I'll read into that...
09:28 genediazjr joined #salt
09:28 AirOnSkin But why doesn't the unless work?
09:28 madduck goal: you can still use those matchers, although I think it makes more sense to match on pillar data from reclass, e.g. address hosts with classes redhat_node|debian_node or whatever you may call them.
09:29 goal madduck: my example was poor. I actually want to match on some hardware grains
09:29 madduck goal: the OS of a node should hardly ever be the function of a node, it should be the other way around, i.e. you define the OS from the function, not the other way around ;)
09:30 Furao AirOnSkin: for archive yes, but your cmd.run is not running an equivalent of unless
09:30 AirOnSkin test -f /etc/init.d/vmtoolsd should return true on a re-run of the state, which would lead to the first block not being executed and all the following ones as well (since they're dependent on each other)
09:30 malinoff madduck, ansible is still a no-no for reclass?:)
09:30 madduck goal: ah, okay… but you *can*. Why do you think reclass disables grains matching?
09:31 glyf joined #salt
09:31 TK_ Can't establish  websocket connection to salt api use websocket.create_connection('wss://localhost:8080/all_events/305209df3096fc09d0199b8d0d008853'),
09:31 AirOnSkin Furao: Sorry for not getting it, but cmd.run requires the archive block, and the archive block shouldn't run... but it does, you can see that in the output of highstate...
09:31 dvestal joined #salt
09:31 TK_ Can't establish  websocket connection to salt api use websocket.create_connection('wss://localhost:8080/all_events/305209df3096fc09d0199b8d0d008853'),
09:31 TK_ the api  I use rest_tornado.
09:31 N-Mi_ joined #salt
09:31 goal madduck: I think this is down to my misunderstanding, but all the salt docs say that targeting minions via globbing or regex is done in the top file. And with reclass there is no longer a topfile.
09:31 TK_ can anyone help me ?
09:32 madduck malinoff: I've just tried to sync up with the ansible community again, but mdehaan apparently still does not want me in the community.
09:32 malinoff madduck, sadly
09:32 malinoff madduck, i've tried to send some patches and share ideas, "we don't need it" is the answer
09:32 madduck malinoff: mdehaan and ansible make a few poor choices IMHO and they are not open to the community, which makes my motivation to do anything for/on ansible zero by now
09:32 madduck goal: you should try it out!
09:33 goal in the topfile?
09:33 madduck i think you can have a topfile too, IMHO
09:33 malinoff madduck, i've read your articles about ansible community, and now i'm definitely agree
09:34 AirOnSkin malinoff: could you share those links? I'm still in evaluation of Ansible vs. Salt and it would help me to get some other inptus
09:34 malinoff AirOnSkin, links about what?
09:35 AirOnSkin "articles about ansible community"
09:35 malinoff AirOnSkin, just type "salt vs ansible" in google
09:35 AirOnSkin malinoff: did that. never read anything that was specifically talking about the community
09:36 malinoff http://ryandlane.com/blog/2014/08/04/moving-away-from-puppet-saltstack-or-ansible/
09:36 malinoff http://jensrantil.github.io/salt-vs-ansible.html
09:36 AirOnSkin malinoff: thanks :)
09:36 madduck http://thefsb.tumblr.com/post/94644144890/ansible-reminds-me-of-phps-salad-days
09:37 malinoff madduck, ha!
09:37 madduck to be fair, I think that there are some serious problems with Salt too, but less so with the community than with the codebase
09:39 malinoff yeah
09:39 malinoff madduck, i'm using ansible now, and i see that it is just a wrapper around good-old-bash scripting
09:39 malinoff which is slow, and sad, and very annoying about all these "ssh_exchange" errors
09:39 madduck ansible is really quite bad in many ways, but that does not make the NIH syndromes in Salt better, e.g. crypto and module loading, and pubsub is also just a bad choice for system maintenance, IMHO
09:39 malinoff joined #salt
09:39 madduck malinoff: 02 10:39 < madduck> ansible is really quite bad in many ways, but that does not make the NIH syndroms in  Salt better, e.g. crypto and module loading, and pubsub is also just a bad choice for  system maintenance, IMHO
09:40 malinoff madduck, pubsub is actually ok, but not with the single aes key shared with all minions
09:41 malinoff madduck, will you be interested in a lazy, pluggable CM system?
09:41 VSpike Does file.recurse preserve the permissions on the files?
09:41 VSpike Or rather, should it
09:41 madduck it's not okay. When I command a robot, I don't publish an order and expect it to subscribe to it; I expect immediate feedback
09:41 madduck malinoff: yes, and we already have a few ideas that we've been discussing in #reclass.
09:42 madduck malinoff: basically, build it from components, and the next thing would be what I call an ssh-based botnet.
09:42 madduck http://madduck.net/blog/2013.02.01:a-botnet-for-configuration-management/
09:42 malinoff madduck, yes, i saw that article
09:42 malinoff madduck, though i'm building a system with pluggable transport, so it can be amqp, ssh, even a database
09:43 madduck that, plus reclass for inventory and you have a fancy remote executor; then you can start creating modules (like ansible) and finally you create a sort of expectation management system/declarative enforcement daemon
09:43 madduck malinoff: interesting, is there a link? pluggable is always good, though I think it's more important to have one proven, efficient, secure and flexible transport than a choice at this level tbh
09:44 madduck and I see SSH as being that transport
09:44 malinoff madduck, amqp is faster and has messaging patterns out of the box
09:44 malinoff madduck, take a look at the kombu library
09:45 madduck malinoff: how is amqp faster than an established SSH link?
09:47 felskrone joined #salt
09:47 malinoff madduck, to run a command on a single agent i guess they're equal, but for pub/sub, or rpc, you will have to implement these patterns for ssh transport using multiprocessing/whatever, which is slower than using an amqp broker
09:48 Steeltip joined #salt
09:48 Steeltip hi @ all
09:48 finnzi joined #salt
09:50 malinoff joined #salt
09:50 madduck malinoff: it also seems more reliable; for my use case, reliability is more important than speed.
09:50 malinoff madduck, sorry, internet goes bad sometimes
09:51 madduck same here ;)
09:51 malinoff madduck, i don't say that ssh is bad transport and amqp is good, i'm just saying that sometimes it is better to use first and sometimes it is better to use second
09:52 malinoff I'm building my system to have a simple switch between those transports, so you can choose the one you need *right now*
09:52 goal madduck: oh, shame. I was barking up the wrong tree, so to speak. I had tried using the topfile still, but it didn't appear to work. Turns out that's because compound matchers using PCRE don't like spaces in the expression. Replacing with \040 works fine.
09:53 malinoff I like ansible's simplicity - all you need to start is to run pip install ansible - and here you go, you can immediately start to write playbooks
09:54 malinoff but of course in complex environments ansible has a lot of weaknesses
09:55 malinoff so i'd like to have a system, which allows me to start from very simple setup, with no agents, authentication, etc, and then allows me to extent it by installing additional components
09:56 malinoff extend*
09:56 sieve joined #salt
09:56 malinoff Jenkins is one of such systems (and I *love* jenkins)
09:56 sieve joined #salt
10:00 intellix joined #salt
10:04 Outlander joined #salt
10:05 heaumer hi; i'm having a weird issue with a home-made module
10:05 heaumer having a simple function like def test(): return __salt__['cmd.run']("some shell", output_loglevel='trace')
10:06 heaumer the call (salt myserver module.test) "locks"
10:06 heaumer the "some shell" gets executed
10:07 heaumer actually, someshell starts a perl's Daemon::Generic; "perl /a/path start"
10:07 heaumer having a test2() which calls "perl /a/path stop" works fine however
10:07 heaumer "perl /a/path start &" leads to the same result
10:08 heaumer any idea where it comes from / how to debug this?
10:08 goal are there any known issues with zmq on EL5 variant distros? (eg. zeromq-2.1.9-1.el5), as I have EL5 minions ending up not connected after a period of time
10:09 madduck joined #salt
10:09 babilen goal: Could you be more specific on the "not connected" part?
10:10 goal babilen:      Minion did not return. [Not connected]
10:10 goal minion process still running on the minion
10:10 goal tcpdump both sides, no communications occur
10:12 gladiatr joined #salt
10:13 dRiN joined #salt
10:13 babilen goal: Does restarting the minion resolve that issue?
10:13 goal I haven't done that yet, trying to debug before getting the hammer out
10:15 babilen sure
10:15 shorty_mu joined #salt
10:16 goal So, there's no known issue as such? I seem to recall reading about problems with older zmq versions and salt in the past
10:25 babilen goal: A short google came up with two reports: https://github.com/saltstack/salt/issues/16518 and https://github.com/saltstack/salt/issues/17278
10:28 Chris_Sybaritic joined #salt
10:29 TK joined #salt
10:32 pduersteler There's something I don't get at the moment. How am I supposed to split pillar data across hosts? Let's say I want to manage mysql users and I have two servers running things, each with their own respective users.
10:32 gildegoma joined #salt
10:33 moderation joined #salt
10:33 babilen pduersteler: Target different data to each host
10:34 pduersteler babilen: where does that targeting happen? In the pillar top.sls, meaning I have e.g. a database.sls for each host and then include the one for each host?
10:35 babilen pduersteler: You might like something like: https://www.refheap.com/94280
10:36 viq pduersteler: you can either have separate pillar files, and target them via top.sls, or inside the pillar file have jinja ifs, and return different data depending eg on minion id
10:36 babilen I write a lot of my pillars in Python as that allows me to target the data I want for each minion easily ...
10:36 babilen (basically the same idea as what I exemplified in the paste, just with Python, itertools, dictionary .update(), ...)
10:37 pduersteler Yes, these were the ways I thought I could make it happen. So i just have to find my "taste" now ^^ thanks babilen and viq
10:37 babilen But yeah, a single pillar file with different values that you target to each host is the most basic approach
10:39 babilen I mean it really depends on what you want. I tend to end up with pillars quite often that are essentially static except for one value ... I'd use something like https://www.refheap.com/94281
10:39 mick3y goal: there are known issues with EL5
10:40 babilen The previous example wasn't correct btw (you would have to use a for loop and iterate over the k,v in the dictionary to render proper YAML)
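(Since the refheap pastes aren't reproduced in the log, two illustrative reconstructions of the approaches babilen and viq describe; minion ids, paths and users are placeholders:)

    # /srv/pillar/top.sls -- variant 1: a separate pillar file per host
    base:
      'db1.example.com':
        - mysql.db1
      'db2.example.com':
        - mysql.db2

    # /srv/pillar/mysql/init.sls -- variant 2: one file branching on the minion id
    {% if grains['id'] == 'db1.example.com' %}
    mysql_users:
      - alice
    {% else %}
    mysql_users:
      - bob
    {% endif %}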
10:40 mick3y goal: i had to recompile a newer version of zmq, otherwise the minions would lose connectivity after a period of inactivity
10:40 xsteadfastx joined #salt
10:41 babilen mick3y: Is that #16518 or another bug?
10:43 ckao joined #salt
10:43 babilen (that would imply that it is simply the horribly old ZMQ version that comes with those distributions)
10:43 mick3y babilen: it is horribly old version of zmq by default :)
10:43 goal mick3y: what about using the COPR repo zmq4 ?
10:44 mick3y goal: we're using only epel
10:45 mick3y babilen: 2.1.x if i remember correctly
10:45 goal surely better than building your own, though. I guess it depends on your config though
10:45 babilen mick3y: exactly
10:45 mick3y goal: it depends i guess. we can afford to maintain a handful of packages ourselves - zeromq and sudo are two of them
10:46 goal I've made some iptables changes to see if that helps, but it probably is zmq being poor. It should handle these situations, I'm guessing
10:46 CeBe joined #salt
10:46 babilen Curious that such an old version is still in actively supported stable releases - aren't there newer versions available from the equivalent of what are "backports" in Debian?
10:47 babilen I mean 3.2.3 is in the soon-to-be-oldstable and the new release (jessie) will have 4.0.5 and I don't mean to imply that Debian is particularly bleeding-edge.
10:49 sieve1 joined #salt
10:49 mick3y babilen: shocking. i know. but redhat doesn't tend to be bleeding edge when it comes to packages :)
10:50 mick3y babilen: also i'm not sure if the package is in it's base repository anyways
10:51 babilen Oh, so you would have to resort to third-party sources anyway? Are there different levels of "officialness" in these third-party sources that force you to stick to those that come with the old version?
10:51 mick3y babilen: exactly
10:51 mick3y babilen: especially if you want to get newer software like php for example
10:52 babilen If I learned something in the last few years then it is "PHP is *always* the wrong version"
10:52 mick3y babilen: i much prefer debian but their LTS is not L enough for us here. although they've extended it now
10:52 mick3y babilen: matter of your religious views i guess ;-)
10:53 goal stability isn't it. People don't want version creep.
10:53 babilen squeeze-lts is a best effort restricted to a subset of packages (not the entire 25k+) anyway
10:53 goal software collections tries to solve that somewhat
10:53 goal and generally with things like PHP there are always alternatives (IUS for example).
10:54 babilen mick3y: Yeah, just wanted to understand the issue (you can probably tell that I am way more familiar with apt based distributions)
10:57 mick3y babilen: not really an issue - just the way things are with RH/CentOS. instead of bleedin' edge they go for stability
10:57 mick3y but I think I'm digressing from the topic of this channel
10:58 babilen We are indeed, sorry for the noise
10:58 oyvjel joined #salt
11:04 eject_ck Hi, from documentation it's not clear if I can use salt-cloud with ESXi 5.5 free or not?
11:04 eject_ck anybody uses it ? http://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.vsphere.html
11:05 eject_ck I've added provider details for my ESXi host
11:05 eject_ck salt-cloud salt.cloud.clouds.vsphere.avail_images
11:05 eject_ck [INFO    ] salt-cloud starting
11:05 eject_ck Usage: salt-cloud: error: Nothing was done. Using the proper arguments?
11:05 giantlock joined #salt
11:08 genediazjr joined #salt
11:09 glyf joined #salt
11:16 thawes joined #salt
11:19 __gotcha joined #salt
11:33 genediazjr joined #salt
11:34 bhosmer joined #salt
11:36 tafa2 joined #salt
11:36 __gotcha joined #salt
11:39 sieve joined #salt
11:39 sieve joined #salt
11:40 sieve joined #salt
11:40 sieve joined #salt
11:40 sieve joined #salt
11:41 sieve joined #salt
11:42 BigBear joined #salt
11:42 TyrfingMjolnir joined #salt
11:45 Chris_ joined #salt
11:46 oyvjel joined #salt
11:47 BigBear Hi there , is it busy now, or am I hearing only the crickets?
11:47 sieve Does anyone have a workaround for this problem? https://github.com/saltstack/salt/issues/18581
11:48 sieve Container 'docker-registry' cannot be started: TypeError: start() got an unexpected keyword argument 'restart_policy'
11:48 diegows joined #salt
11:49 BigBear is there anybody on that can confirm that they do indeed run a salt-minion from macports on a Mac OS X (Mavericks or Yosemite)?
11:50 analogbyte hey, anybody got salt's scheduling to work? I configured it in my minions' configs, but it is only working on the minion with the master on it... any suggestions?
11:50 BigBear i am trying to install the latest released salt from macports (salt @2014.1.13) and it fails activating almost all of the files (same on Mavericks as on Yosemite)
11:59 monkey661 left #salt
11:59 oyvjel1 joined #salt
12:02 bhosmer joined #salt
12:03 dvestal joined #salt
12:03 tafa2 joined #salt
12:04 bhosmer joined #salt
12:12 ericof joined #salt
12:14 jespada joined #salt
12:15 otter768 joined #salt
12:18 johtso__ joined #salt
12:19 tristianc joined #salt
12:21 analogbyte I found out: using explicit times in salt scheduling requires python-dateutil to be installed on the minions...
12:25 felskrone joined #salt
12:26 bhosmer joined #salt
12:27 hcl2 joined #salt
12:27 glyf joined #salt
12:36 AirOnSkin Furao: So, I had lunch and my thoughts could settle. I understand what you meant earlier about cmd.wait and a watch statement now. Thanks :)
12:42 Furao great :)
12:43 Furao i have some old (1.5 year) formulas that you can look and understand the basic of wait/watch and some other basics: https://github.com/bclermont/states
12:44 AirOnSkin Cool! Thanks :) Will definitely have a look at them
12:44 Furao https://github.com/bclermont/states/blob/master/states/graylog2/server/init.sls#L54
12:45 JlRd joined #salt
12:45 thawes joined #salt
12:47 Katafalkas joined #salt
12:52 Roee joined #salt
12:53 Roee Hi all how are you ?
12:53 Roee I have a qestion
12:53 seanz joined #salt
12:53 Roee I would like to run "status.cpuinfo" but to grep only some arguments
12:54 thawes joined #salt
12:54 Roee is there is a way to do so ?
12:54 manji joined #salt
12:56 ericof joined #salt
12:57 babilen Roee: Where do you want to do that?
12:58 Roee Hi
12:58 Roee would like to run this command from Master on the minion
12:58 goal if, in one state file I configure a pkg to be installed and a service to be running, then in another i modify the config file of that service and wish the service to then restart/reload, how would this be done?
12:58 babilen goal: watch the service in the file's state
12:58 Roee e.g. : salt TestSalt status.cpuinfo
12:58 babilen (or use listen_in if that is appropriate)
12:59 babilen Roee: That is not currently supported from what I can tell.
12:59 babilen (you can, naturally, grep for it -- *maybe* a custom returner/outputter would work, but *shrug*, haven't played with that)
12:59 Roee there is no way to do let's say : salt TestSalt status.cpuinfo [ ' processor' ]
13:00 babilen The module doesn't support it from what I can see
13:00 Roee :(
13:00 Roee thanks !
13:00 paha joined #salt
13:01 babilen But then a "salt '*' status.cpuinfo|grep -A1 microcode" works fine
13:01 TK joined #salt
13:01 babilen (for example)
13:02 AirOnSkin Can someone explain this to me? I'm trying something very simple: Clean a directory, place files in it, don't clean the managed-by-salt files on a rerun of highstate: http://hastebin.com/lujoreneru.avrasm
13:02 AirOnSkin But the managed-by-salt files get deleted and transferred on every highstate
13:02 thawes joined #salt
13:03 babilen AirOnSkin: Why don't you simply use a file.recurse with clean: True?
13:04 AirOnSkin babilen: I wasn't aware that recurse had a clean option ... until now ;)
13:04 babilen And the file will, naturally, have to be retransferred/remanaged if you just deleted it
13:04 AirOnSkin Yeah, but it doesn't get deleted by me, but by the file.directory state...
13:05 AirOnSkin The docs say "Make sure that only files that are set up by salt and required by this function are kept. If this option is set then everything in this directory will be deleted unless it is required." which led me to believe that if I require the file.directory state the file.managed would stay there
13:06 AirOnSkin especially the " everything in this directory will be deleted unless it is required"
13:09 __alex joined #salt
13:12 tkharju joined #salt
13:13 AirOnSkin But yeah, it does work better. Just replaced 65 lines with 10
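(For anyone following along, the replacement has roughly this shape; the paths are placeholders, not AirOnSkin's actual state:)

    /etc/myapp/conf.d:
      file.recurse:
        - source: salt://myapp/files/conf.d
        - clean: True           # remove anything in the directory that salt does not manage
        - include_empty: True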
13:13 goal babilen: I added '- watch: - service: net-snmp' to my file.append, but it didn't reload the service when it changed the file
13:14 Morbus joined #salt
13:14 thawes joined #salt
13:15 JoeHazzers how do you guys manage service definitions for monitoring systems and the like?
13:16 JoeHazzers i mean, how do you put them in your states? re-write the same sort of state every time, use a macro, pillar data?
13:17 thawes joined #salt
13:17 babilen goal: It should
13:17 babilen (more details!!!)
13:18 goal does the service also need to have a watch for the file.append ?
13:20 slafs joined #salt
13:20 slafs left #salt
13:22 goal it does the append just fine, but doesn't seem to do anything about the watch on the service
13:24 hobakill joined #salt
13:26 hobakill guys - have there been reported key issues with 2014.7.0? i have a set of seemingly completely random minions that continually lose communication with the master. the only way they can re-communicate is if i wipe the key on the minion/master and reinstall salt-minion ... it's frustrating and i didn't have this issue before the 2014.7.0 update.
13:27 Valdo joined #salt
13:28 thawes joined #salt
13:28 AirOnSkin JoeHazzers: Mine looks like this: http://hastebin.com/xalarigixa.sm
13:28 AirOnSkin But I'm still learning ...
13:30 JoeHazzers i was thinking more along the lines of managing configuration files for something like consul or sensu
13:30 AirOnSkin Oh, I see. Misread that ... sorry
13:30 JoeHazzers i realised my ambiguity after the fact :)
13:34 babilen goal: Please paste more information (state(s) + output)
13:37 SheetiS joined #salt
13:38 babilen hobakill: Which platform?
13:43 thawes joined #salt
13:46 iotako joined #salt
13:46 hobakill babilen, both lin and win
13:48 babilen Check the bug tracker, but I haven't heard nor experienced anything like that. I had to restart some minions, but a reinstall really wasn't necessary. Did you switch transports?
13:48 Svake joined #salt
13:53 hobakill babilen, no. but i'm finding something else/new. seems like the minions have pids hanging. the one similarity all these minions have is "salt-minion dead but pid file exists"
13:53 hobakill i might create an ansible group to mass-restart the minion on those boxes.
13:54 babilen salt-ssh ?
13:55 nyx_ joined #salt
13:55 JoeHazzers when we used puppet, we had a cron job that made sure that puppet was running
13:55 JoeHazzers any reason you couldn't do the same for salt?
13:56 thawes joined #salt
13:57 babilen .oO( Don't use salt scheduler for that )
13:57 glyf joined #salt
13:58 goal babilen: it should be watch_in, of course, not watch
13:58 * goal smash head
13:59 jaimed joined #salt
14:00 babilen goal: Sure
14:00 babilen (could have told you that if you had pasted your actual state)
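(The working shape, reconstructed since goal's state was never pasted; file contents and names are placeholders. The append pushes a restart to the service via watch_in, which is equivalent to putting a watch on the service state itself:)

    net-snmp:
      service.running:
        - enable: True

    snmpd-config:
      file.append:
        - name: /etc/snmp/snmpd.conf
        - text: 'rocommunity public'    # placeholder config line
        - watch_in:
          - service: net-snmp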
14:05 Svake joined #salt
14:06 felskrone joined #salt
14:07 sieve joined #salt
14:08 cpowell joined #salt
14:11 genediazjr joined #salt
14:13 thawes joined #salt
14:16 otter768 joined #salt
14:21 nitti joined #salt
14:24 perfectsine joined #salt
14:24 BigBear joined #salt
14:25 mpanetta joined #salt
14:25 faust joined #salt
14:26 eject_ck joined #salt
14:26 mikkn joined #salt
14:27 shookees joined #salt
14:30 KaaK_ joined #salt
14:30 cpowell joined #salt
14:35 dynamicudpate joined #salt
14:43 blaffoy joined #salt
14:46 jdesilet joined #salt
14:47 gebi joined #salt
14:47 rawkode joined #salt
14:48 rawkode Afternoon
14:48 blaffoy Hi, I'm trying to configure salt to restart a service every night using salt.utils.schedule. I can configure the service to restart at a specific interval using the field "seconds: 60" (to restart every minute say)
14:49 aqua^mac joined #salt
14:49 blaffoy But I would like to restart every night at a specific time.
14:49 blaffoy The documentation (http://docs.saltstack.com/en/latest/topics/jobs/schedule.html) indicates that this should be achievable with the "when:" keyword
14:49 blaffoy But this is not working for me
14:50 Deevolution joined #salt
14:51 blaffoy I've tested "when: 14:30pm" and the like, but with no luck.
14:51 blaffoy Actually, it just occurred to me that the problem might be that I'm mixing 24-hour clock with am/pm signifiers. I'll test 2:53pm
14:52 moapa blaffoy: 2:53pm, then you are still mixing am/pm with 24hr format
14:53 blaffoy moapa. Am I? What's the correct format for 7 minutes to three in the afternoon?
14:53 moapa In 24hr format it's 14:53
14:53 moapa without any am/pm
14:53 rawkode Is there a way to say "require this OR this?"
14:54 blaffoy Right, but 2:53pm should specify non-24hour time, right?
14:54 moapa yes
14:54 blaffoy The examples in the documentation are formatted like "when: 5:00pm"
14:55 blaffoy So I would think that 2:53pm should work.
14:55 moapa Mm
14:55 JoeHazzers any help/insight into this headache would be useful: https://groups.google.com/forum/#!topic/salt-users/qEyHpach1q0
14:55 JoeHazzers regarding modular configuration files
14:56 eject_ck joined #salt
14:56 ericof joined #salt
14:59 __number5__ joined #salt
14:59 blaffoy Okay, I tried in the format "when: 2:53pm", but that had the same lack of response as "when: 14:53pm". So that doesn't solve my problem
15:00 JoeHazzers debug it, log it.
15:01 blaffoy And I also tried "when: 14:53" (without the pm), and the salt-minion log start complaining about not being able to serialize a msgpack message. So I'm pretty sure that's not a valid state configuration.
15:01 blaffoy Does anybody have a working example of a time based scheduler in salt?
15:03 housl joined #salt
15:04 moapa blaffoy: so, with 14:53 the minion actually produced some output?
15:04 dude051 joined #salt
15:04 moapa and with 2:53pm it was just silent ?
15:06 wincus joined #salt
15:07 cleme1mp joined #salt
15:08 mick3y quick question: what's your approach to updating minion package?
15:11 mick3y don't be shy
15:11 blaffoy moapa: with 14:53 yeah, I saw errors in C:\salt\var\log\salt\minion, that looked like: 2014-12-02 14:58:26,785 [salt.payload     ][CRITICAL] Could not deserialize msgpack message: In an attempt to keep Salt  running, returning an empty dict.This often happens when trying to read a file not in binary mode.Please open an issue  and include the following error: Data is not enough.
15:12 blaffoy And with 2:53pm nothing happens
15:12 iggy blaffoy: are you using 2014.7?
15:12 blaffoy I run saltutil.refresh_pillar after updating the scheduler in both cases
15:12 blaffoy iggy: No. I'm on 2014.1.11
15:13 moapa blaffoy: Well, then.. atleast you know which time format to use now?
15:13 crane annyone seen this before? TypeError encountered executing cmd.run: argument of type 'NoneType' is not iterable. See debug log for more info.
15:13 iggy I thought that when stuff was added after 2014.1, but I could be wrong
15:13 JoeHazzers crane: see the debug log for more info.
15:13 crane seeing this on just one minion. it is up2date with the others and a service restart did not brought any help
15:13 blaffoy iggy: You're right.
15:14 blaffoy iggy: It's right there in documentation "New in version 2014.7.0."
15:14 blaffoy My mistake
15:14 blaffoy I really need to learn to read. :-/
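(For later readers, a minimal time-based job of the sort blaffoy is after; it needs 2014.7.0+ and python-dateutil on the minion, as analogbyte noted earlier, and the job and service names are placeholders:)

    # minion config (or pillar under 'schedule:')
    schedule:
      nightly_restart:
        function: service.restart
        args:
          - myservice
        when: '11:59pm'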
15:15 iggy it'll be nice when they have the docs fixed so it's easier to view different versions again
15:15 iggy *cough*@ops*cough*
15:15 tristianc joined #salt
15:16 smcquay joined #salt
15:16 zerthimon joined #salt
15:16 blaffoy Cool, it looks like I'm updating my salt version. That'll be fun.
15:17 crane crap, my log is full of tracebacks
15:17 crane https://www.refheap.com/94288
15:17 KaaK_ joined #salt
15:22 blaffoy mick3y: by "updating minion package", do you mean updating the version of saltstack running on a minion?
15:22 blaffoy I'm just about to try that myself.
15:25 xenoxaos joined #salt
15:26 lynxman joined #salt
15:27 JasonSwindle joined #salt
15:27 cleme1mp joined #salt
15:28 OnTheRock joined #salt
15:29 pacopablo joined #salt
15:29 vectra joined #salt
15:30 ggrieves joined #salt
15:33 mick3y blaf: yeah. and restarting the minion
15:33 mick3y blaffoy*
15:34 blaffoy Yep, well I'm going to try and follow the instructions here: http://docs.saltstack.com/en/latest/topics/tutorials/esky.html
15:36 blaffoy I think.
15:36 blaffoy I've never used esky, and I don't know exactly what a "frozen app" is.
15:37 JasonSwindle left #salt
15:37 JasonSwindle joined #salt
15:38 BigBear joined #salt
15:40 PI-Lloyd hey guys, having a small issue with adding grains to a minion from the salt master - "salt 'somehost' grains.setval environment production" - this works fine, however trying to run any other commands on the minion after returns - "Minion did not return. [Not connected]" - Any ideas as to why using setval is causing minions to fail to return after successfully setting the grain?
15:42 blaffoy mick3y: The trick seems to be to place "update_url: http://salt/minion-updates" and  "update_restart_services: ['salt-minion']" in your minion config file, which can be set by sls. Then to deploy the new salt version to  http://salt/minion-updates, then run salt "*" saltutil.update
15:42 arapaho joined #salt
15:42 blaffoy However, I don't know yet what file to place at the update_url. It's not the exe installer, it seems.
15:42 glyf joined #salt
15:43 ajolo joined #salt
15:44 elfixit joined #salt
15:44 blaffoy `python setup.py bdist_esky` looks like it should work. I'll try it out and let you know how it goes.
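(Summarizing the pieces blaffoy has pulled out of the docs; the URL is the one from the discussion, and whatever is served there has to be an esky build, not the plain installer:)

    # minion config
    update_url: http://salt/minion-updates
    update_restart_services:
      - salt-minion
    # then, from the master:
    #   salt '*' saltutil.update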
15:45 Ozack1 joined #salt
15:47 ggrieves newbie trying to get started: Is there any other documentation other than on their website?  Is it just me or does their docs lack any logical order?
15:47 conan_the_destro joined #salt
15:48 khalieb joined #salt
15:48 iggy logical is different to different people
15:49 iggy it was definitely written by software developers
15:50 UtahDave joined #salt
15:50 iggy as someone who's been using open source software for a while now, the docs are better than most
15:50 eliasp ACK
15:51 iggy they have training classes for a reason though
15:51 ggrieves heh: "Section 3.3.1.8 Next Reading" takes you to Section 3.2.6..      and the whole thing starts with masterless, then leads to the next major section that starts with "standalone"
15:51 iggy it's a rather large and complex system, kind of hard to cover that much content concisely for everyone
15:52 ggrieves yeah, I've seen some bad ones, but I'm just having trouble following it and needed to rant for a quick second
15:52 blaffoy Right, having read through the documentation here (http://docs.saltstack.com/en/latest/topics/installation/windows.html) on how to prepare an esky build, now I don't really want to do it. Installing a dozen or more dependencies for a build distribution process that might not even work doesn't seem like a good use of my time.
15:53 UtahDave ggrieves: Thanks for pointing those problems out to us.
15:53 UtahDave blaffoy: do you need a custom build?
15:53 iggy fwiw, I got started by skimming the docs, then just getting started, I've since gone back and read most of the docs at this point I'm sure
15:53 lz-dylan I find the docs _mostly_ searchable, but would be thrilled to plunk down $49 for an ebook someday :)
15:53 blaffoy UtahDave: no I want to upgrade my minions from 2014.1.11 to 2014.7
15:54 iggy lz-dylan: there are pdf/epub/etc versions on readthedocs
15:54 ggrieves I would buy a book
15:54 Gareth morning morning
15:55 blaffoy UtahDave: from what I've read here: http://docs.saltstack.com/en/latest/ref/configuration/minion.html, it looks like the easiest way to distribute an updated minion is to use an esky_build
15:55 UtahDave joined #salt
15:55 cjohn joined #salt
15:55 UtahDave blaffoy: ah and you want to do an esky upgrade instead of installing over the top?
15:56 babilen ggrieves: I'd start with http://docs.saltstack.com/en/latest/topics/tutorials/index.html and read 3.1, 3.2, 3.3 up until 3.3.7 and continue with 3.4.8. Once you are done with that I'd take a look at http://docs.saltstack.com/en/latest/ref/states/top.html and http://docs.saltstack.com/en/latest/topics/targeting/ before I finish the course with http://docs.saltstack.com/en/latest/topics/best_practices.html and ...
15:56 babilen ... http://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html
15:56 StDiluted joined #salt
15:56 blaffoy UtahDave: from reading the docs, I thought that the esky route was a good way of distributing an upgrade. I've never used esky before, so I don't know how it differs from a regular build.
15:57 blaffoy If I have to set up my own build environment, then I won't bother
15:57 iggy ggrieves: I would also suggest watching some of thatch's videos on youtube... I used to turn them on when I was working out and picked up a lot of good info that way
15:57 JasonSwindle UtahDave: Howdy Howdy Howdy
15:58 blaffoy I'll figure out a way to update all my minions using the regular installer.
15:58 UtahDave blaffoy: Esky upgrades are still kinda new. I've successfully done them, but I haven't made them available for public consumption yet
15:58 UtahDave hey, JasonSwindle!
15:58 thawes joined #salt
16:01 TK joined #salt
16:01 blaffoy UtahDave: cool. I'll hold off on trying it. I'll just see what happens if I `salt "*" cmd.run "Salt-Minion-0.17.0-Setup-amd64.exe /S /master=salt"`. :-)
16:02 UtahDave blaffoy: should be fine. make sure to add your minion id in there.  It might wipe out your minion config
16:02 UtahDave blaffoy: test before you do that in production.  :)
16:03 blaffoy UtahDave: yup! trying it now. I'll be back soon enough if I run into any problems. Cheers!
16:03 UtahDave blaffoy: once I've got the esky builds tested to my satisfaction, then I'll be hosting the esky upgrades on our website and your windows minions can be upgraded to the latest version with just a salt command
16:03 UtahDave thanks!
16:04 sieve joined #salt
16:04 sieve joined #salt
16:05 huddy after modules have been synced, should the minion need to be restarted? I've observed it requiring a restart a couple of times today
16:05 sieve joined #salt
16:05 sieve joined #salt
16:06 _JZ_ joined #salt
16:07 intellix joined #salt
16:07 UtahDave huddy: It shouldn't need to be restarted. what version of salt and which os are you on?
16:08 huddy 2014.1.13, redhat. think it might be a bug then?
16:10 lb1a is there something that generates some kind of document/report from the gathered grains info of all managed minions?
16:11 UtahDave huddy: possibly.  could you test on 2014.7.0? It may have been fixed already
16:11 lb1a like an inventory list or something like that
16:11 UtahDave lb1a: there's not anything more than just something like      salt \* grains.items
16:12 UtahDave You could create a runner that creates a report like you're looking for.
16:12 huddy UtahDave: hmm think it might be this: https://github.com/saltstack/salt/issues/14763
16:12 huddy I can't just upgrade in live, but i've tested on a test box and it's fine which is 2014.7.0
16:13 Frank_I joined #salt
16:13 lb1a UtahDave, thanks that's totally ok for me. is it possible to get that info in a json format to further process it?
16:13 huddy so yeah, it's a bug :)
16:13 UtahDave lb1a: yeah.     salt \* grains.items --out json
16:14 Frank_I Can someone point me in the right direction to download salt, i.e. the packages?
16:14 UtahDave Frank_I: which OS?
16:14 Frank_I Centos 6.5
16:15 iggy epel?
16:15 bytemask joined #salt
16:15 lb1a UtahDave, thank you very much
16:16 Frank_I Yes it is for testing iggy
16:16 Frank_I We want to migrate from Puppet by next year
16:16 UtahDave Frank_I: I'd start here: http://docs.saltstack.com/en/latest/topics/installation/rhel.html
16:17 otter768 joined #salt
16:17 Ryan_Lane joined #salt
16:18 msciciel hi, is there any way to recover/create the file /var/cache/salt/master/.dfn ? i found that the master is trying to read this file but it's missing
16:18 iggy Frank_I: I was saying salt is in the epel repo
16:19 UtahDave msciciel: can you pastebin the error you're seeing?
16:21 msciciel UtahDave: the problem is solved; i found it via strace a few days ago but now this file exists. Is this file created some time after the salt-master starts?
16:22 UtahDave msciciel: Yeah, I think so. It's probably not a problem if it isn't there, unless you're getting an error.  It's probably a cache of some sort
16:23 druonysus joined #salt
16:23 xenoxaos joined #salt
16:23 msciciel UtahDave: i was curious because strace shows a lot of tries to read this file. Thanks
16:24 Rory_ joined #salt
16:29 jimklo joined #salt
16:29 RedundancyD joined #salt
16:30 toplessninja joined #salt
16:32 Frank_I [root@null-0018fe26c7ca]/home/isabelfa# salt-key -A
16:32 Frank_I The key glob '*' does not match any unaccepted keys.
16:32 Frank_I Any idea why I am getting this?
16:32 hasues joined #salt
16:33 hasues left #salt
16:33 babilen Frank_I: Because there are no new keys to accept
16:33 BigBear joined #salt
16:33 Gareth Frank_I: no unaccepted keys?
16:34 ggrieves do salt-key -L and see if there are any
16:34 Frank_I Accepted Keys:
16:34 Frank_I Unaccepted Keys:
16:34 Frank_I null-0018fe26c7ca.corning.com
16:34 Frank_I Rejected Keys:
16:34 ggrieves if there ought to be some, make sure the minion knows the master address
16:34 JasonSwindle left #salt
16:34 ggrieves source: I forgot to tell my minions my master once
16:35 babilen Frank_I: Why do you think that you should have minions in that list?
16:35 Frank_I I am just following the documentation
16:35 babilen But there is an unaccepted one (i.e. "null-0018fe26c7ca.corning.com")
16:36 Frank_I babilen, that's my test machine
16:36 babilen does "salt-key -a 'null-0018fe26c7ca.corning.com'" work?
16:37 Frank_I I am running from that machine
16:38 aqua^mac joined #salt
16:38 babilen So you are using a minion on the master. That is perfectly normal.
16:38 Frank_I Yes I am
16:38 nafg joined #salt
16:38 smcquay joined #salt
16:39 babilen so, can you accept the key with the command I just mentioned?
16:40 Frank_I 1 min
16:41 Ryan_Lane joined #salt
16:41 Frank_I Key Accepted
16:41 bytemask joined #salt
16:41 babilen good, I wonder why '*' didn't match your minion id though
16:42 Frank_I now I can to the test ping :)
16:42 Gareth joined #salt
16:42 Frank_I Now I can do the test ping*
16:43 RedundancyD joined #salt
16:44 TyrfingMjolnir joined #salt
16:45 cjohn joined #salt
16:45 StDiluted joined #salt
16:45 Frank_I babilen I have to install the salt-minion on the client machines, right?
16:47 Frank_I Like puppet master and puppet agent?
16:47 swa_work joined #salt
16:47 jrluis1 joined #salt
16:51 jimklo joined #salt
16:51 cjohn joined #salt
16:51 arapaho joined #salt
16:57 babilen Frank_I: yes, exactly. You install the salt-minion and edit /etc/salt/minion to point to the master (if necessary)
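A minimal sketch of the /etc/salt/minion setting babilen is describing (the master address below is a placeholder, not a real host):

    # /etc/salt/minion
    master: salt.example.com    # hostname or IP of your salt-master

After editing, restart the salt-minion service and accept the new key on the master with salt-key.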
16:57 BigBear joined #salt
16:58 pr_wilson joined #salt
17:01 druonysuse joined #salt
17:01 druonysuse joined #salt
17:01 TK joined #salt
17:03 kzx joined #salt
17:06 arapaho joined #salt
17:16 zlhgo_ joined #salt
17:19 glyf joined #salt
17:20 manji joined #salt
17:23 elfixit joined #salt
17:30 jalbretsen joined #salt
17:30 Svake_ joined #salt
17:31 aparsons joined #salt
17:31 linjan joined #salt
17:31 Svake_ hello everyone, is there any way to display all loaded formulas on a minion?
17:32 SheetiS salt-call state.show_top
17:32 SheetiS from the minion
17:32 TK_ joined #salt
17:32 Svake_ thanks!
17:33 SheetiS that should show everything that will apply on a highstate or whatever.
17:35 debian112 joined #salt
17:37 kzx left #salt
17:38 micah_chatt joined #salt
17:39 blaffoy Hmmm... i upgraded to 2014.7 on a test master/minion pair. It seems to have broken some of my sls configuration. "State 'pkgrepo.managed' found in SLS 'mongodb' is unavailable"
17:39 blaffoy Any reason that pkgrepo.managed wouldn't be available?
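For reference, a minimal pkgrepo.managed state looks roughly like this (the repo id and URL below are hypothetical, yum-style); a state showing up as "unavailable" often means the underlying package/repo execution module failed to load on that minion, for example because a required python binding is missing:

    mongodb-repo:
      pkgrepo.managed:
        - humanname: MongoDB Repository
        - baseurl: https://repo.example.com/mongodb/    # hypothetical URL
        - gpgcheck: 0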
17:40 wt joined #salt
17:41 troyready joined #salt
17:41 wendall911 joined #salt
17:43 debian112 anyone had salt-minion 2014.1.10 (Hydrogen) just stop working?
17:43 debian112 salt-master 2014.1.10 (Hydrogen)
17:45 imanc_ joined #salt
17:46 TyrfingMjolnir joined #salt
17:46 Ahlee you mean minion exists in process list, but not responding?
17:46 Ahlee logging level, and anything in logs?
17:47 felskrone joined #salt
17:49 debian112 DEBUG   ] Reading configuration from /etc/salt/minion [INFO    ] Using cached minion ID from /etc/salt/minion_id: server1.net [DEBUG   ] Configuration file path: /etc/salt/minion [DEBUG   ] Reading configuration from /etc/salt/minion [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem [WARNING ] SaltReqTimeoutError: Waited 60 seconds Minion failed to authenticate with the master, has the minion key been accepted?
17:49 debian112 wow that is ugly
17:50 Ahlee accept the key on the master
17:50 debian112 the server has been running for over a year and working fine
17:52 Katafalkas joined #salt
17:53 eliasp I've had the same when I upgraded my minions from 2014.1.x to 2014.7.x but was unable to reproduce it after more than 24h… it seems this was caused here by stale cache objects on the minion
17:53 desposo joined #salt
17:54 wnkz joined #salt
17:54 eliasp so try to wipe the minion cache manually (stop minion; empty /var/cache/salt/minion/*; start minion) to see if this fixes the problem for you
17:55 eliasp you could also try to preserve the old cache to have data for reproducing and possibly fixing this issue
17:55 manji joined #salt
17:56 debian112 eliasp: tried it
17:56 debian112 same thing
17:56 debian112 even the master is unable to send out a test.ping
17:56 Ahlee You're positive the minion can reach the master?
17:56 eliasp debian112: yeah, exactly what happened here… minion is completely non-responsive
17:56 debian112 Ahlee, tcpdump
17:56 Ahlee telnet from the minion to port 4505 on the master, make sure something isn't blocking it
17:57 debian112 and telnet to that port
17:57 Ahlee huh
17:57 debian112 all that works
17:57 Ahlee what version of zeromq?
17:58 debian112 ZMQ: 3.2.3
17:58 aparsons_ joined #salt
17:58 debian112 Salt: 2014.1.10          Python: 2.7.3 (default, Jan  2 2013, 13:56:14)          Jinja2: 2.6        M2Crypto: 0.21.1  msgpack-python: 0.1.10    msgpack-pure: Not Installed        pycrypto: 2.6          PyYAML: 3.10           PyZMQ: 13.1.0             ZMQ: 3.2.3
17:59 eliasp I had around ~1MB of log entries per minute per minion on my master because of this… most of the lines were simply "Minion failed to authenticate with the master"
17:59 eliasp good luck with that!
18:01 jswanson_ joined #salt
18:01 TK_ joined #salt
18:02 wnkz joined #salt
18:02 debian112 any more suggestions would be greatly welcomed
18:04 conan_the_destro joined #salt
18:04 forrest joined #salt
18:07 cjohn joined #salt
18:07 thawes joined #salt
18:08 cpowell joined #salt
18:09 wnkz joined #salt
18:10 Ahlee nothing in master log about any key issues with the minion?
18:10 Ahlee i mean, there's always the nuclear option: stop the minion, blow away the pki directory, delete the key on the master, start again and see what happens
18:11 cpowell joined #salt
18:12 debian112 Ahlee, yeah I hear ya. This is very strange.
18:13 thawes joined #salt
18:13 debian112 even my salt-ssh commands stopped working
18:13 MatthewsFace joined #salt
18:14 cjohn joined #salt
18:14 murrdoc joined #salt
18:15 Ahlee now that's really strange
18:15 Ahlee well
18:16 Ahlee does salt-ssh use the salt pki stuff?
18:16 SpX joined #salt
18:18 debian112 I don't think it is using ssh keys?
18:18 debian112 I can ssh fine to the server
18:18 otter768 joined #salt
18:19 Ahlee I still want to say that ssh auth is separate from salt auth
18:19 Ahlee but, i used salt-ssh once as a 'neat, that works'
18:19 debian112 SaltReqTimeoutError: Waited 3 seconds
18:19 debian112 I see this in the minion logs
18:20 ajolo joined #salt
18:20 debian112 2014-12-02 10:05:59,182 [salt.crypt                               ][WARNING ] SaltReqTimeoutError: Waited 60 seconds
18:20 shaggy_surfer joined #salt
18:21 hal58th debian112: Can you run this command on the master and look for the two ports to be open, 4505 and 4506.   sudo netstat -tulpn
18:21 debian112 tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      2752/python
18:21 debian112 tcp        0      0 0.0.0.0:4506            0.0.0.0:*               LISTEN      2740/python
18:22 hal58th debian112: Looks like you already tested that anyways with a telnet.
18:22 Ahlee just one minion misbehaving?
18:22 debian112 everyone of them
18:23 debian112 about 300
18:23 hal58th Any errors in the master.log?
18:23 Ahlee and this just happened after upgrade to 2014.1.10?
18:23 hal58th What's your version of the minions and master? Are they different versions?
18:23 Ahlee if memory serves, there were versions after 2014.1.10, no?
18:23 troyready joined #salt
18:23 hal58th yeah, up to 1.13
18:24 Ahlee might want to go up again to whatever latest 2014.1.x was
18:24 debian112 they are all 2014.1.10
18:24 Ahlee sounds like master is issue, then, if it's all minions
18:24 debian112 yeah
18:25 Ahlee kill it all, make sure it's dead, restart
18:25 debian112 I have done a full restart
18:25 druonysuse joined #salt
18:25 druonysuse joined #salt
18:25 debian112 and the same
18:25 hal58th did you make sure all the master processes were dead?
18:25 Ahlee yeah, i have a lot of issues with processes hanging around on 0.17.5
18:25 cjohn joined #salt
18:26 Ahlee though that's also likely due to supervisord trying to be overly helpful on restarting
18:26 Nazca__ joined #salt
18:26 aqua^mac joined #salt
18:27 thawes joined #salt
18:33 Nazca joined #salt
18:35 yetAnotherZero joined #salt
18:38 yetAnotherZero I'm working through http://docs.saltstack.com/en/latest/topics/tutorials/states_pt4.html#practical-example and I think I'm having trouble sorting some of the literals vs substitutions in the example because when I try to run the highstate it says no minions match the target.  Anyone care to help me a little?
18:38 debian112 not sure, what else to check
18:39 thawes joined #salt
18:39 yetAnotherZero i have two minions 'm1' and 'm2', and from the example i assumed that the references to 'web*prod*' etc. were for minion matching, so i replaced them with 'm1'
18:39 meylor joined #salt
18:40 hal58th yetAnotherZero, did you specifically call the minion 'm1' or is that the host name? It uses the FQDN by default
18:40 murrdoc what's the output of salt-key -A
18:40 hal58th debian112: There are no errors or anything that changed in the master log when the minions stopped responding?
18:40 debian112 I can't let the puppet team know about this in the company, since I have been the stealer of their puppet users
18:41 murrdoc debian112:  ?
18:41 Ahlee debian112: yeah, I'm out of ideas, too.  This just cropped up, or did it crop up after an upgrade?
18:41 yetAnotherZero I'm using the minion names: `salt-key -L`; m1 m2
18:41 Mso150 joined #salt
18:41 debian112 just happened
18:42 hal58th debian112: You run through this document? http://docs.saltstack.com/en/latest/topics/troubleshooting/#troubleshooting-the-salt-master
18:42 Ahlee maybe the minions had their minion_id changed?
18:42 debian112 it's been working fine over a year
18:42 Ahlee well, 2014.1.10 isn't a year old
18:42 Ahlee so something significant has changed in the last year
18:43 debian112 let me check more on it
18:43 debian112 will report more findings
18:43 murrdoc debian112:  run this https://github.com/saltstack/salt/blob/develop/tests/eventlisten.py
18:44 murrdoc on your master
18:44 hal58th yetAnotherZero: Pastebin the output of "sudo salt-key -L" and your top.sls file.
18:44 hal58th warcraft
18:44 hal58th woops, wrong window
18:44 debian112 murrdoc ok running it
18:45 debian112 ipc:///var/run/salt/master/master_event_pub.ipc
18:45 debian112 is the only thing that popped up
18:45 murrdoc run salt '*' test.ping
18:46 murrdoc on the mater
18:46 murrdoc master*
18:46 rap424 joined #salt
18:46 bdrung_work joined #salt
18:48 bdrung_work hi, salt throws a UnicodeDecodeError: https://paste.debian.net/134635/
18:48 yetAnotherZero hal58th: http://www.pastebucket.com/72447
18:49 Ahlee bdrung_work: it also will with windows minions that report the ® in the name
18:49 debian112 ok, so more events are happening
18:49 debian112 murrdoc
18:49 debian112 I see all the minions
18:49 Ahlee debian112: do you have a default returner defined?
18:49 bdrung_work Ahlee, the minion name contains only a-z and 0-9
18:49 murrdoc what's a state you can run safely, debian112?
18:50 Ahlee bdrung_work: the minion processes way more than just the name.
18:50 debian112 default returner standard out
18:50 Ahlee your state is making salt look at a file that contains a character it can't handle
18:51 hal58th yetAnotherZero: Did you do the thing about it with setting up your "file_roots"?
18:51 jfroot joined #salt
18:51 bdrung_work i have a vimrc: file.managed rule and this file contains a utf-8 character
18:51 murrdoc debian112:  from a minion run salt-call state.highstate
18:51 hal58th *thing above it
18:51 murrdoc assuming you can run it fine
18:52 Ahlee bdrung_work: is your vimrc file a jinja template or straight file.managed?
18:52 debian112 murrdoc running a state run on the master using the local minion
18:52 bdrung_work LANG=en_US.UTF-8 is set on the minion. so why does salt want to convert the file to ascii?
18:52 yetAnotherZero hal58th yes, but I will paste the file_roots section in case i borked it
18:52 eliasp bdrung_work: this problem was fixed in git (utils.sdecode was introduced to deal with these unicode issues)
18:52 bdrung_work Ahlee, it's a  straight file.managed file
18:53 Ahlee huh.  I've only had issues with jinja templates
18:53 debian112 DEBUG   ] Reading configuration from /etc/salt/minion [INFO    ] Using cached minion ID from /etc/salt/minion_id: server1.net [DEBUG   ] Configuration file path: /etc/salt/minion [DEBUG   ] Reading configuration from /etc/salt/minion [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem [WARNING ] SaltReqTimeoutError: Waited 60 seconds Minion failed to authenticate with the master, has the minion key been accepted?
18:53 debian112 murrdoc is what I get on the local master
18:53 jfroot Hey gang.. does anyone know a good way to stage/test changes before pushing to production with Salt? ie.. would like changes to be tested before pushing to production as a fuckup could potentially nuke a large number of machines
18:53 manji joined #salt
18:54 yetAnotherZero hal58th: thanks for that.  that's where the problem *probably* is.  I didn't keep the roots names the same by the time I finished...
18:54 Ahlee jfroot: All changes go through change control and are pushed first to Staging, with a separate salt master for UAT, which again is a separate master from production
18:54 eliasp jfroot: that's why you have environments in Salt…
18:54 Ahlee and we still fuck up, often.
18:54 hal58th yetAnotherZero: That would make sense. I also recommend doing something really really simple at first
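For context, a minimal sketch of a master config and top.sls that line up (paths and state names are hypothetical; m1/m2 are the minion ids from the discussion above):

    # /etc/salt/master
    file_roots:
      base:
        - /srv/salt

    # /srv/salt/top.sls
    base:
      'm1':
        - webserver    # hypothetical state: /srv/salt/webserver.sls or webserver/init.sls
      'm2':
        - database     # hypothetical state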
18:55 Ahlee jfroot: I'm currently setting up a state test environment so on commit a new VM is created, and states are run against it.  It won't work with everything, but it's close
18:55 Ahlee then it's up to whoever is in charge of merging to make sure the state isn't going to do bad things
18:56 jfroot we currently use bcfg and for testing we will pull down the repo. and run a local copy of bcfg2 on a test instance. So it sounds similar.
18:57 Ahlee http://bcfg2.org/ ?
18:57 murrdoc debian112:  salt-key -A
18:57 thawes joined #salt
18:58 bdrung_work eliasp, can you point me to this commit or where there multiple commits for it?
18:58 eliasp bdrung_work: yeah, one moment
18:59 debian112 murrdoc there is nothing in Unaccepted, all the keys are accepted
18:59 murrdoc now comes the fun part
18:59 murrdoc pick a minion
18:59 debian112 I even tried deleting a key
18:59 murrdoc stop it , like on the server
18:59 debian112 ok
18:59 murrdoc nuke /var/cache/salt
18:59 murrdoc start salt-minion
19:00 murrdoc salt-call test.ping
19:00 murrdoc salt-call state.highstate
19:01 eliasp bdrung_work: https://bpaste.net/show/e8a208455713
19:01 TK_ joined #salt
19:02 debian112 murrdoc
19:02 debian112 so I deleted the key of a host
19:02 debian112 and ran a state
19:03 druonysuse joined #salt
19:03 druonysuse joined #salt
19:03 debian112 the key got added because we use autosign
19:03 debian112 but after that
19:03 debian112 it hangs
19:03 debian112 here
19:03 debian112 Loaded minion key: /etc/salt/pki/minion/minion.pem [DEBUG   ] Decrypting the current master AES key [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
19:04 debian112 I wonder, will it stop working if the master key has expired?
19:05 murrdoc keys expiring was an issue with 2014.1
19:05 murrdoc but nuking cache normally handled it
19:05 spookah joined #salt
19:06 aparsons joined #salt
19:08 manji left #salt
19:10 ajolo UtahDave: o/
19:12 dave_den joined #salt
19:14 druonysus joined #salt
19:14 thawes joined #salt
19:16 wnkz_ joined #salt
19:16 gadams joined #salt
19:18 Ahlee up master's log level to trace, something's funky on it
19:22 bmonty joined #salt
19:22 debian112 new findings:
19:22 debian112 If I run the master: salt-master -c /etc/salt/master -l debug
19:23 wnkz__ joined #salt
19:23 debian112 the minion will at least run and fail
19:23 debian112 local:     Data failed to compile: ----------     No matching sls found for 'users.labs_accounts' in env 'base'
19:23 murrdoc well tada
19:23 debian112 but when I run it like this:
19:24 thawes joined #salt
19:24 debian112 salt-master -c /etc/salt/ -l debug
19:24 debian112 the minions fail
19:24 debian112 again
19:26 babilen So, do you have file_roots/users/labs_accounts{.sls,/init.sls} ?
19:26 eliasp this still shouldn't make the minion completely unresponsive… (not even test.ping is working)
19:27 babilen oh, absolutely
19:27 eliasp for me this issue mysteriously disappeared after 24h
19:27 kermit joined #salt
19:28 ericof joined #salt
19:29 cads joined #salt
19:30 Svake joined #salt
19:30 thawes joined #salt
19:32 unpaidbi1l do any of you have any tricks for quickly checking highstate in test mode to see if any changes would be made?
19:32 unpaidbi1l i have hundreds of servers and need a quick way to do it - i've got some grep/sed stuff i do now but i wasn't sure if there's an easier way to summarize
19:34 kballou joined #salt
19:34 debian112 test.ping works
19:34 debian112 when the master is running like this:
19:34 debian112 salt-master -c /etc/salt/master -l debug
19:35 debian112 this makes me think that something is messed up here? /etc/salt
19:36 eliasp unpaidbi1l: --state-output=terse?
19:36 thawes joined #salt
19:37 riessen joined #salt
19:37 davet joined #salt
19:38 Mso150_p joined #salt
19:41 unpaidbi1l that still outputs everything that's in the correct state
19:41 unpaidbi1l i couldn't find any 'only show changed' type of option
19:42 hal58th debian112 maybe some sort of permissions? That's very odd…
19:42 vectra joined #salt
19:42 debian112 hal58th I am checking that now
19:44 eliasp unpaidbi1l: did you try "--state-output=changes"?
19:46 thawes joined #salt
19:47 unpaidbi1l yeah, that one is a lot less output but it still shows all the results for unchanged states (Result: Clean)
19:48 unpaidbi1l i haven't messed around with that one much though.. and i can grep -v 'Result: Clean' to limit it to changes only - that's way better than my current method
19:49 eliasp unpaidbi1l: there's also 'state_verbose', but IIRC this is only configurable through your master config and isn't parsed by the 'salt' command line
19:49 eliasp unpaidbi1l: so you could set in your master config "state_verbose: False"
19:50 unpaidbi1l cool, this is pretty much exactly what i was trying to find, thanks a ton
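A sketch of the master config settings discussed here (both are documented options; the values shown are just one reasonable combination):

    # /etc/salt/master
    state_verbose: False     # hide states that came back clean/unchanged
    state_output: changes    # or terse / mixed

A dry run can then be kicked off with something like: salt '*' state.highstate test=True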
19:51 vectra joined #salt
19:51 thawes joined #salt
19:52 Steve7314 joined #salt
19:52 unstable joined #salt
19:53 jaimed joined #salt
19:53 felskrone joined #salt
19:54 djaime joined #salt
19:54 thawes joined #salt
19:55 Steve7314 I'm trying to translate ansible playbooks to salt.  In ansible, I have a playbook that runs a command.  Depending on the result, it may restart a service.  In salt, how do you define a state to run based on the stdout from the execution of a previous one?
19:56 murrdoc http://docs.saltstack.com/en/latest/ref/states/requisites.html
19:56 murrdoc search for onlyif
19:57 Ahlee or watch, if the command can be translated into a state, rather than a command line
19:58 Steve7314 thanks!  <reading>
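A minimal sketch of the pattern murrdoc and Ahlee are pointing at, with a hypothetical check script, guard condition, and service name:

    # run the command only when the guard says it is needed,
    # then restart the service whenever the command reports changes
    apply-tuning:
      cmd.run:
        - name: /usr/local/bin/apply-tuning.sh       # hypothetical script
        - unless: grep -q tuned /etc/myapp/state     # hypothetical guard condition

    myapp:
      service.running:
        - watch:
          - cmd: apply-tuning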
19:59 unstable How do I determine what values go inside -L? for say salt --verbose --out txt -L 'some_host' cmd.run 'echo $HOSTNAME'
19:59 unstable ?
19:59 Ahlee -L is list, so you need to pass it a list of hosts
19:59 Ahlee I believe that's comma separated
20:00 Ahlee yeah, comma separated, no spaces
20:00 Ahlee salt -L 'host1.domain.tld,host2.domain.tld' test.ping
20:01 TK_ joined #salt
20:01 Ahlee the -L is a matcher, and matching is the most flexible (and thus, probably the most frustrating) part of salt.  How you target minions varies greatly, depending on your requirements
20:02 Ahlee http://docs.saltstack.com/en/latest/topics/targeting/index.html goes over targeting in depth
20:03 unstable Ahlee: On the minion side, where is the host domain?
20:04 unstable eg, where is the minion id
20:04 Ahlee unstable: minion side, /etc/salt/minion_id should contain the minion_id
20:05 Ahlee it should also match the key name visible on the master with salt-key -L
20:05 jcsp joined #salt
20:05 unstable cat /etc/salt/minion_id
20:05 unstable cat: /etc/salt/minion_id: No such file or directory
20:06 unstable heh. :(
20:06 Ahlee hence should ;)
20:06 unstable There is a grains file
20:06 Ahlee you might be able to find it in /var/log/salt/minion
20:07 Ahlee what version of salt?
20:07 unstable /etc/salt exists
20:07 unstable ii  salt-minion                         2014.7.0+ds-2precise3
20:07 unstable ii  salt-common                         2014.7.0+ds-2precise3
20:09 Ahlee dunno then.  minion_id has been populated since at least somewhere in the 0.16s if memory serves
20:10 dude^2 joined #salt
20:12 unstable o I see, weird the id is slightly different from the fqdn
20:13 Ahlee unstable: do you have append_domain set in /etc/salt/minion?
20:13 dude051 joined #salt
20:14 Gareth joined #salt
20:15 aqua^mac joined #salt
20:18 skyler joined #salt
20:18 thawes joined #salt
20:19 otter768 joined #salt
20:19 fxhp joined #salt
20:22 debian112 ok guys
20:22 hasues joined #salt
20:22 hasues left #salt
20:23 debian112 Ahlee, murrdoc, hal58th, eliasp
20:24 * Ahlee waits with bated breath
20:24 debian112 I upgraded the master and it started working again: salt-master 2014.7.0 (Helium)
20:25 Ahlee well, yay working.
20:25 druonysus joined #salt
20:25 lz-dylan in state.dockerio.pulled smart enough to login to dockerhub with creds in pillar? module.dockerio.pull seems to do so, but I'm having issues with state getting 500 access denials
20:25 lz-dylan s/in/is
20:25 debian112 well I cloned the production master and upgraded it
20:26 kickerdog joined #salt
20:27 Frank_I joined #salt
20:27 Frank_I Question.
20:27 Frank_I When I use the command salt '*' root salt://ssh/keyfile
20:27 murrdoc joined #salt
20:27 Frank_I I get  'root' is not available.
20:27 Frank_I I do not know why.
20:27 giantlock joined #salt
20:28 Ahlee root is not a valid command
20:28 Ahlee are you trying to copy the file served from the salt fileserver named keyfile to /root/ on all minions?
20:29 debian112 are many people using: 2014.7?
20:29 Frank_I what I want to do is ssh to the minion from master
20:29 kickerdog debian112: I've switched all my clusters to 2014.7
20:29 Ahlee Frank_I: are you trying to use salt-ssh?
20:29 Frank_I yes
20:30 debian112 kickerdog any problems?
20:30 Ahlee All I know is you'll want to change the command from salt to salt-ssh, from there I'm not fit to comment further on salt-ssh
20:30 lz-dylan debian112: after waiting so long for .7 I think a lot of us jumped pretty quick :)
20:31 kickerdog I use a few different distros so I ended up building my own repo so everyone could get the package, but aside from conf file updates everything works alright.
20:31 lz-dylan debian112: I found the syntax for state.archive:options changed to require 'x' for untar
20:31 kickerdog RPMs are missing some deps.
20:31 kickerdog winexe and samba-client aren't required.
20:31 Frank_I ahlee I am trying to understand the salt.modules.ssh module
20:31 Frank_I http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.ssh.html#module-salt.modules.ssh
20:32 kickerdog apache-libcloud and netaddr also are missing deps
20:32 Ahlee Frank_I: ok.  Do you have a minion with an accepted key?
20:33 Frank_I yes
20:33 Ahlee ok, i see your issue, salt '*' root salt://ssh/keyfile is straight from the docs
20:34 unstable Ahlee: Thanks for the help, I appreciate it!
20:34 unstable It's all working now
20:34 unstable fqdn was wrong
20:34 Ahlee i think that needs to be salt '*' ssh.check_key_file root salt://ssh/keyfile
20:34 Ahlee unstable: fantastic!
20:34 unstable left #salt
20:35 Frank_I That will copy the files to the minion
20:35 Frank_I ?
20:35 Frank_I or just check
20:35 Ahlee Frank_I: afaik, no, that's just going to check
20:35 debian112 I use salt-ssh I can share an example if you like
20:36 Ahlee i'd have to read up on modules/ssh.py to comment further Frank_I, and unfortunately I don't have time currently
20:36 pipeep joined #salt
20:36 Frank_I ahlee I got it now.
20:36 Steve7314 I've been looking into the conditionals like onlyif, unless, and watch (per my previous question about translating ansible playbooks to salt).  It looks like if I want to conditionally run based on the output of a previous command, I need to wrap it.  To translate an ansible "changed_when: 'some text' in results.stdout", I need to wrap the original command either by something that will return t/f in the unless or onlyif declaration, or in a stateful shel
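A minimal sketch of that stateful-shell wrapping, assuming a hypothetical script whose last line prints key=value pairs such as changed=yes comment='...':

    apply-config:
      cmd.run:
        - name: /usr/local/bin/apply-config.sh    # hypothetical; must end by printing e.g. changed=yes comment='updated'
        - stateful: True

A watch requisite on this cmd state (for example from a service.running state) will then only fire when the script reports changed=yes.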
20:36 Frank_I Thank you.
20:38 A||SySt3msG0 joined #salt
20:38 balltongu joined #salt
20:38 thawes joined #salt
20:39 spookah joined #salt
20:40 UtahDave hey, ajolo!
20:40 A||SySt3msG0 joined #salt
20:40 Gareth UtahDave: howdy
20:40 UtahDave hey, Gareth!
20:41 cads joined #salt
20:41 lz-dylan re: above question: dockerhub auth appears to work OK with state.highstate. onwards to debugging other things!
20:41 Gareth UtahDave: :) hows it going?
20:42 aparsons_ joined #salt
20:42 xenoxaos joined #salt
20:42 UtahDave pretty good! I had a pretty nice Thanksgiving break and I just got back from lunch with Tom.
20:42 UtahDave How about you, Gareth?
20:42 druonysuse joined #salt
20:42 UtahDave Oh, I just finished some consulting work for an enterprise customer ahead of schedule, too. So that was nice.
20:43 lz-dylan I'm trying to have a docker container-based service come up on initial highstate, but since docker-py isn't installed initially & saltstack minions don't recognize it in the same run it's installed, I end up having to state.highstate twice. anyone else looked into this? i'm thinking of having reactor run pip.installed on minion creation, but it feels like there must be a better way.
20:43 Gareth UtahDave: doing good :) it's a bit wet out here in SoCal today.
20:43 dude051 joined #salt
20:44 UtahDave lz-dylan: Yeah, I've run into that same issue.
20:45 UtahDave Gareth: nice to hear. Everything's usually so dry down there.
20:46 debian112 I guess there is no root cause for the 2014.1.10+ds-1~bpo70+1 issue on my end
20:46 thawes joined #salt
20:46 EugeneKay joined #salt
20:46 debian112 I find it pretty wired to just stop working
20:47 Gareth UtahDave: yeah.  definitely need the rain.
20:47 debian112 weird
20:47 lz-dylan UtahDave: have you tried something that you like? jinja conditionals dependent on a grain set to say 'yeah, I've got docker-py installed now', maybe?
20:49 micah_chatt lz-dylan: I’m in AWS, so my solution has been to have my user-data script auto install a few things like: docker-py, boto, awscli before salt
20:49 UtahDave lz-dylan: Yeah, set minion to highstate on startup. jinja conditional checking for docker. if no docker, install docker and reboot. If docker, then don't bother with installing docker nor rebooting.
20:49 micah_chatt I know it's not exactly the ‘best’ solution, but for those basics it works
20:50 debian112 UtahDave. Have you seen a problem where the master just stopped working: version: . salt-master 2014.1.10 (Hydrogen)
20:50 UtahDave lz-dylan: But for most packages Salt can use them even if they've just been installed. we should probably open a bug for that
20:50 lz-dylan micah_chatt: that's not a bad way to do it. I've covered 2/3rds of that by just appending '-p python-boto -p awscli' or something along those lines to salt-cloud's options.
20:50 UtahDave actually, not reboot, but restart the minion
20:51 UtahDave debian112: do you now why it stopped?
20:51 debian112 that is what we've all been trying to determine
20:51 debian112 the master is running
20:51 debian112 ports open
20:51 vectra joined #salt
20:51 debian112 can telnet to ports
20:51 MugginsM joined #salt
20:52 UtahDave debian112: any zombie processes?
20:52 debian112 but can't do any state runs or test.ping
20:52 debian112 rebooted the node
20:52 UtahDave what's the output you get if you try a state run or test.ping?
20:52 debian112 should have taken care of that
20:52 lz-dylan UtahDave: good to know! when you say 'if no docker, install docker and restart minion', does that interrupt the highstate run? (I _want_ it to, for context.)
20:53 lz-dylan (haven't had to kick over minion mid-run yet)
20:53 UtahDave yeah, lz-dylan, that does interrupt the highstate.
20:53 debian112 @UtahDave: minion is already accepted and has been running for over a year: http://paste.debian.net/134667/
20:53 UtahDave unless you set    startup_states: highstate in the minion config, when the minion comes up it won't continue the highstate
20:54 UtahDave debian112: how many minions?  are minions all the same version as the master?
20:54 debian112 this happens on all 300 minions
20:54 debian112 yes
20:54 debian112 all same
20:54 lz-dylan ......and if you _do_ set that, it'll continue, or it'll restart highstate? either is fine, and I didn't know that option existed. cool! thanks :)
20:54 UtahDave lz-dylan: it will restart the whole highstate
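For reference, the minion config option UtahDave is referring to (startup_states is a documented setting):

    # /etc/salt/minion
    startup_states: highstate    # re-run highstate automatically whenever the minion (re)starts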
20:55 lz-dylan good deal. how're you kicking over minions? just cmd.run something?
20:55 UtahDave debian112: I think that has to do with the master getting flooded by the minions. There are some options you can set in the minions' configs that should alleviate this.  Also, newer versions of Salt deal with this better
20:56 UtahDave debian112: can you upgrade to 2014.1.13?
20:56 UtahDave lz-dylan: on Ubuntu I just used  service.restart salt-minion
20:56 debian112 is it  on the repos?
20:57 debian112 how can I check the flooding?
20:57 lz-dylan UtahDave: gotcha. that'd work. thanks so much!
20:57 debian112 I took a clone of the production server
20:58 debian112 and upgraded to 2014.7 and it started working again
20:58 debian112 but I know 2014.7 is the new kid on the block
20:59 BigBear joined #salt
20:59 tristianc joined #salt
20:59 debian112 I use Debian if that tells you how I roll. LOL
20:59 UtahDave debian112: here's a commit where some of those settings' defaults were changed:  https://github.com/saltstack/salt/pull/13210/files
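These are the kinds of documented minion settings usually tuned when a master gets flooded by many minions; a sketch with illustrative values only, not recommendations:

    # /etc/salt/minion
    acceptance_wait_time: 10    # seconds between authentication retries
    random_reauth_delay: 60     # spread re-auth attempts over a random 0-60s window
    recon_default: 1000         # ZeroMQ reconnect back-off in milliseconds
    recon_max: 10000
    recon_randomize: True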
20:59 UtahDave debian112: I was wondering...  ;)
21:00 debian112 @UtahDave what do you suggest?
21:00 debian112 upgrade to: 2014.1.13
21:00 debian112 ?
21:01 UtahDave I would at least upgrade to 2014.1.13, which is the latest on that release branch
21:01 debian112 i didn't see it on the repo?
21:01 UtahDave I really like 2014.7. I've been doing all my custom enterprise customer work on 2014.7
21:01 debian112 all I see is 2014.7
21:02 rlarkin we were going to wait for a point release for 2014.7 , but we're going to go forward with what's there
21:02 UtahDave debian112: in git look for the v2014.1.13 tag
21:02 rlarkin because of lxc support
21:02 rlarkin and cloud in general
21:03 rlarkin debian112: there is a debian repo btw , and .13 and 2014.7 are both available as .deb files
21:03 Mso150_p joined #salt
21:04 debian112 what is the path to the repos?
21:04 debian112 or URL
21:04 rlarkin debian.saltstack.com/debian
21:04 ello_govna joined #salt
21:05 rlarkin debian.saltstack.com actually if you want the front page with notes/comments
21:05 cpowell joined #salt
21:05 rlarkin yeah, and the key , you might want that
21:06 rlarkin I'm still trying to get a container to deploy with salt-cloud -m , I've got it down to two variables in the profile ( script: and deploy: )
21:07 rlarkin I haven't been able to get script to actually use my script , and it seems that if deploy is True the container creation will fail 100 % of the time
21:07 debian112 @UtahDave and rlarkin stand by going to make the move to 2014.1.13
21:07 rlarkin that was a very painless move for us, but I did have to clean old salt first.
21:07 rlarkin debian112 are you using wheezy?
21:08 debian112 yes
21:08 rlarkin k, wait
21:08 rlarkin apt-get remove salt-master salt-minion salt-common salt-cloud salt-ssh salt-syndic # You probably don't have salt-ssh and salt-syndic, but just in case
21:09 rlarkin apt-get autoremove #say yes and let all those python packages get removed
21:09 debian112 yeah I do have salt-ssh and salt-syndic
21:09 rlarkin apt-get remove msgpack-python python-crypto python-jinja2 python-m2crypto python-mako python-pip python-yaml python-zmq
21:09 rlarkin ^^ remove all those in case autoremove doesn't get them
21:10 rlarkin apt-get -t wheezy-backports install python-zmq
21:10 rlarkin apt-get -t wheezy-backports install python-requests
21:11 rlarkin rm /usr/lib/python2.7/dist-packages/salt/*pyc
21:11 debian112 ok I am taking a new clone of production, and will test the upgrade
21:11 rlarkin after all that reinstall salt
21:12 rlarkin there's some overkill there I'm sure, but we had some problems and doing that made them all go away
21:12 karimb joined #salt
21:13 debian112 rlarkin: ok, what about for the minions?
21:13 rlarkin oh, we don't update minions, we destroy them
21:13 debian112 you must be running containers or something?
21:14 rlarkin containers for CI and developer machines, aws instances for production
21:14 ajolo UtahDave: hey ! how's it going ?
21:14 rlarkin I've been deploying containers with a script so far, I'm trying to get lxc to work with salt-cloud
21:15 debian112 We use salt for everything, KVM, Xen, AWS and Physical servers
21:15 rlarkin nice
21:16 Ryan_Lane joined #salt
21:16 debian112 I would like to move to multi-master and salt-syndic
21:17 debian112 just need to find the time
21:17 Ryan_Lane -_- you need to use require/require_in when you use accumulators?!
21:17 rlarkin right now I'm stuck on the script: and the deploy: options in my cloud profile for lxc.  I can't get script: anyscript to work at all, and deploy must be set to False or the container will not be created.  I don't know why
21:18 Ryan_Lane or, let me ask: wtf does 'Orphaned accumulator' mean?
21:19 Ryan_Lane seems that's only there if require_in isn't used
21:19 Ryan_Lane that's really, really lame
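For reference, a minimal sketch of the accumulator pattern (file names and text below are hypothetical); the require_in pointing at the file.managed state is what ties the accumulator to a consumer, which appears to be what the orphaned-accumulator warning is complaining about:

    hosts-entries:
      file.accumulated:
        - filename: /etc/hosts
        - text: '10.0.0.5 app01.example.com'    # hypothetical entry
        - require_in:
          - file: hosts-file

    hosts-file:
      file.managed:
        - name: /etc/hosts
        - source: salt://files/hosts.jinja      # hypothetical template that consumes the accumulator
        - template: jinja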
21:22 perfectsine joined #salt
21:23 StDiluted joined #salt
21:25 erjohnso joined #salt
21:26 ingwaem joined #salt
21:29 debian112 UtahDave and rlarkin: upgrading to 2014.1.13 didn't help
21:30 debian112 should I also mention that my master is a VM
21:30 debian112 8 core, 16GB-RAM
21:31 rlarkin jobs
21:31 rlarkin we had problems where the master would die, and a ps grep would show everything as <defunct>.  cleaning up old jobs in /var/cache fixed that.
21:32 rlarkin or seemed to anyway.
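Job cache retention can also be capped in the master config rather than cleaned by hand; a sketch (keep_jobs is a documented option, value in hours):

    # /etc/salt/master
    keep_jobs: 24    # prune cached job data older than 24 hours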
21:34 vectra joined #salt
21:35 debian112 at least test.ping started working
21:35 g3cko joined #salt
21:35 debian112 again
21:35 debian112 rlarkin how many minions do you have?
21:36 rlarkin our biggest environment has only about 20 minions
21:36 rlarkin we have lots of salt-masters
21:36 qihou_ joined #salt
21:37 debian112 oh ok, you have a master per environment?
21:37 rlarkin yeah
21:37 rlarkin some machines have a master and a minion both running.
21:37 debian112 we have a multi-tenant master with 300 minions
21:38 qihou_ hello, anyone know how i can manage permissions for socket files?
21:38 debian112 I wonder if i have too many minions on that one master
21:38 qihou_ the file.managed module seems to only work with normal files
21:38 thawes joined #salt
21:39 Ahlee I have ~1k minion on a VM master, 10G ram, 8vCPU
21:39 Ahlee 50 worker threads
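worker_threads is the master config knob behind that; a sketch with the value Ahlee mentions:

    # /etc/salt/master
    worker_threads: 50    # number of MWorker processes answering minion requests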
21:40 Mso150_p joined #salt
21:44 badon joined #salt
21:45 debian112 well, upgrading to 2014.1.13 didn't help, but moving to 2014.7 did
21:46 debian112 Ahlee what version are you using?
21:46 Ahlee 0.17.5
21:47 canci joined #salt
21:47 debian112 oh ok, the devs here want more features so that forced us to upgrade
21:47 Ahlee i hear ya
21:47 Ahlee It'd be nice to upgrade
21:47 Ahlee but, i have no compelling reason to force that battle
21:48 debian112 @UtahDave any idea?
21:48 debian112 moving to 2014.7 worked
21:49 UtahDave debian112: have you checked your inode usage?
21:49 dynamicudpate joined #salt
21:50 debian112 I have not
21:50 debian112 looks fine
21:50 rlarkin if you don't clean old jobs and your filesystem was made with defaults, you can run out of inodes
21:51 aparsons joined #salt
21:51 thawes joined #salt
21:52 aparsons joined #salt
21:52 rlarkin Looks like this is what I'm needing to make lxc work: https://github.com/saltstack/salt/pull/18433
21:52 debian112 you mean jobs: /var/cache/salt/master/jobs/
21:52 rlarkin yeah
21:52 murrdoc 'Sorry, but you have wasted your time. I am rebasing over develop and continuing the work myself.'
21:52 murrdoc hah
21:53 rlarkin well, drama aside, it looks like when it's done I'll be able to deploy containers with cloud (I hope?)
21:54 rlarkin in the meantime  I think I'll embed bootstrap logic into the lxc template itself
21:56 Ice-x joined #salt
21:58 snuffeluffegus joined #salt
22:00 aparsons joined #salt
22:01 TK_ joined #salt
22:03 OnTheRock joined #salt
22:04 mpanetta_ joined #salt
22:04 cpowell_ joined #salt
22:04 aqua^mac joined #salt
22:04 fridder joined #salt
22:06 aparsons_ joined #salt
22:07 druonysus joined #salt
22:13 smcquay joined #salt
22:16 thawes joined #salt
22:16 carmony what is a safe way to issue a command to all my minions to restart?
22:16 carmony like restart the salt-minion service
22:18 shaggy_surfer joined #salt
22:19 rager salt '*' service.restart salt-minion
22:19 rager carmony:
22:19 carmony For some reason I thought that would cause the salt-minion to fail to return, but it looks like it doesn't have that issue :P
22:20 otter768 joined #salt
22:23 jdesilet joined #salt
22:24 rickh563 joined #salt
22:24 hal58th1 joined #salt
22:26 dagrizbox joined #salt
22:26 thawes joined #salt
22:29 thawes joined #salt
22:29 JlRd joined #salt
22:29 dagrizbox joined #salt
22:30 dRiN joined #salt
22:30 bhosmer_ joined #salt
22:31 fishdust joined #salt
22:31 g3cko joined #salt
22:34 rickh563 joined #salt
22:36 UtahDave hey, rickh563
22:36 rickh563 Hi UtahDave
22:39 thawes joined #salt
22:43 kballou joined #salt
22:45 thawes joined #salt
22:51 thawes joined #salt
22:53 maze joined #salt
22:53 maze evening
22:53 hal58th1 UtahDave, I asked a little while ago about some example questions for the SSCE exam. Did you get a chance to look into that?
22:54 maze when using 2014.7.0 with raet as a non-root user, I get permission denied errors in ioflo trying to do a self.ss.sendto(), any clue where to look?
22:54 maze upgraded from 2014.1.something
22:54 dynamicudpate joined #salt
22:55 maze ah, if the minion on the same system runs as root, it can't communicate with a non-root master on the same node .. lovely
22:56 superted666_ joined #salt
22:58 Nazzy joined #salt
22:58 Nazzy joined #salt
22:59 schristensen joined #salt
22:59 claytron joined #salt
22:59 pmcg joined #salt
23:00 ingwaem` joined #salt
23:00 glyf joined #salt
23:01 TK_ joined #salt
23:02 vectra joined #salt
23:02 bhosmer_ joined #salt
23:02 Heartsbane joined #salt
23:02 Heartsbane joined #salt
23:06 pacopablo joined #salt
23:11 sumpos joined #salt
23:12 shaggy_surfer joined #salt
23:13 thawes joined #salt
23:14 genediazjr joined #salt
23:15 Outlander joined #salt
23:18 ajolo joined #salt
23:18 thawes joined #salt
23:21 g3cko joined #salt
23:29 thawes joined #salt
23:33 pr_wilson joined #salt
23:34 thawes joined #salt
23:37 davet joined #salt
23:37 Outlander joined #salt
23:37 karimb joined #salt
23:37 davet joined #salt
23:38 genediazjr joined #salt
23:38 JordanTesting joined #salt
23:40 thawes joined #salt
23:41 murrdoc joined #salt
23:45 thawes joined #salt
23:46 echoplexion joined #salt
23:50 StDiluted joined #salt
23:52 nitti_ joined #salt
23:53 aqua^mac joined #salt
