
IRC log for #salt, 2014-07-31


All times shown according to UTC.

Time Nick Message
00:04 allanparsons anyone have a good state for nodejs?
00:05 retrospek ^(?:uat)?(?:hr|db)01[ab](?:\..+)$ - allows hostnames, hostnames with subdomains or fqdn
00:06 TheThing joined #salt
00:07 agliodbs ah, salt pcre supports the non-greedy modifier?
00:08 agliodbs wait, just non-grouping.  why do I care about non-grouping?
00:08 jslatts joined #salt
00:14 hellome joined #salt
00:15 andrej retrospek - that WON'T accept just a hostname, you're non-groupingly forcing a dot
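andrej's objection can be checked directly in Python: as posted, the trailing `(?:\..+)$` group is non-capturing but still mandatory, so a bare hostname never matches; making the group optional with `?` accepts all three forms retrospek wanted. (This is a standalone sketch of the pattern, not Salt's own matcher.)

```python
import re

# Pattern as posted: non-capturing, but the dot-plus-domain part is required,
# so a bare hostname like "hr01a" will not match.
posted = re.compile(r'^(?:uat)?(?:hr|db)01[ab](?:\..+)$')

# Making the domain group optional accepts hostname, subdomain, or FQDN.
fixed = re.compile(r'^(?:uat)?(?:hr|db)01[ab](?:\..+)?$')

assert posted.match('hr01a') is None                 # bare hostname rejected
assert posted.match('db01b.example.com') is not None # FQDN accepted
assert fixed.match('hr01a') is not None              # bare hostname now accepted
assert fixed.match('uatdb01b.prod.example.com') is not None
```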
00:16 ghartz joined #salt
00:21 jpl1079 joined #salt
00:31 nyx joined #salt
00:38 jpl1079 left #salt
00:38 scoates joined #salt
00:43 ericof joined #salt
00:43 helderco joined #salt
00:46 notbmatt joined #salt
00:46 zandy joined #salt
00:48 faulkner joined #salt
00:48 beneggett joined #salt
00:49 steveoliver joined #salt
00:50 to_json joined #salt
00:51 chamunks joined #salt
00:52 nyx_ joined #salt
00:55 steveoliver joined #salt
00:57 notbmatt joined #salt
00:58 crane joined #salt
01:02 nyx_ joined #salt
01:04 dmick joined #salt
01:05 dmick hi guys.  What should I look for first when I restart the salt master and then start getting "Failed to authenticate, is this user permitted to execute commands?" even in response to "salt '*' test.ping"?   (adding -l garbage doesn't give any answers.)
01:06 dmick (running daemons and salt command as root)
01:06 dmick minion isn't logging anything
01:08 badon joined #salt
01:09 arthabaska joined #salt
01:10 dmick https://github.com/saltstack/salt/issues/12248 hm
01:12 yomilk joined #salt
01:13 monokrome joined #salt
01:16 Ryan_Lane dmick: are you root?
01:16 Ryan_Lane oh. you just mentioned that
01:16 d3vz3r0 joined #salt
01:16 dmick yep
01:17 Ryan_Lane is your disk full?
01:17 dmick when it fails like that it's not trying to contact the minion according to tcpdump
01:17 dmick raising nofiles from 1024 to 2048 doesn't help
01:17 Luke joined #salt
01:17 dmick .....and...it just worked, without changing anything
01:17 dmick <gah>
01:18 gzcwnk it hates you
01:18 dmick shell loop: intermittent
01:18 Ryan_Lane does your master log have any info?
01:18 gzcwnk i'd suggest slashing your wrists and becoming a MSVP
01:19 dmick Ryan_Lane: I've tried salt -l garbage
01:19 Ryan_Lane dmick: I mean your actual master logs
01:19 gzcwnk what version of salt?
01:19 dmick and when it fails it looks just like when it succeeds, right up to the failure
01:19 Ryan_Lane that's the logs for the salt cli
01:19 Ryan_Lane /var/log/salt/master
01:19 dmick 2014.1.7
01:20 gzcwnk ok....im on 7-3
01:20 dmick and, no, nothing in the log for success or failure
01:20 gzcwnk that maybe just redhat/epel
01:20 dmick don't have debug very high
01:20 dmick 2014.1.7+ds-1trusty1 to be more specific ATM
01:22 gzcwnk ive had similar things but with older versions
01:22 gamedna joined #salt
01:22 nmistry joined #salt
01:23 * dmick considers some python hacking.  I've seen this too; the last time it was when I'd installed the precise packages on trusty
01:23 dmick was hoping that was it
01:23 dmick but no, these are actually the trusty packages
01:23 andrej joined #salt
01:23 dmick wonder if there is zmq debug I can enable
01:25 andrej joined #salt
01:28 andrej Hmmm ... so to find out why I don't get a response for salt-call -l debug mine.get oob-01\* grains.item main_ip I enabled garbage output on the master ... looks like it never gets interrogated (and local debug on the minion doesn't suggest any problems).
01:28 andrej How does salt-call get mine data w/o talking to the master?
01:30 otter768 joined #salt
01:30 andrej And, more importantly, a single grain that I added to functions via http://pastebin.com/eCJ60cMa ...
01:30 andrej how do I extract main_ip on the minion
01:31 andrej salt-call mine.get oob-01\* grains.items lists it
01:31 andrej but I don't need all the data
01:31 retrospek grains.item main_ip
01:38 dmick Ryan_Lane: setting log_level_logfile to debug shows similar information on the master as salt -l debug does
01:38 steveoliver joined #salt
01:38 dmick that is, noting the config files, PUB/PULL sockets, and *_out modules
01:38 dmick and then failure
01:39 dmick every so often it hangs for a while and then exits, and then there's more logging
01:39 dmick so it's like it hasn't gotten to the "send the command" bit at all yet
01:40 manfred andrej: does salt-call grains.item main_ip work on each individual minion?
01:40 Ryan_Lane weird
01:41 dmick I spent a little time stepping thru the Python client code before
01:41 dmick I'll try that again and see just what's failing.  It is as that issue says: something comes back empty, which is interpreted as auth failure, but I suspect it's not actually an auth failure
01:49 zandy joined #salt
01:50 nyx joined #salt
01:52 andrej manfred : nope, doesn't return anything but local: and a line of dashes
01:53 conan_the_destro joined #salt
01:55 manfred andrej: if it doesn't work with grains.item, it won't work with the mine either
01:57 manfred andrej: you could set it up to do grains.get instead of grains.item
02:03 andrej manfred : I'll give that a shot ... syntax should be the same?
02:03 dccc joined #salt
02:04 manfred kind of
02:04 spidermo joined #salt
02:04 andrej manfred : "if it doesn't work with grains.item, it won't work with the mine either" maybe I didn't quite understand what you were asking ...
02:05 andrej Oic ...
02:05 andrej I do
02:05 andrej and I hadn't tried
02:05 andrej gimme a sec
02:06 andrej ok, I lied ... I tried salt-call grains.item main_ip on two minions, and it works just fine
02:08 manfred andrej: so, i am still kind of leaning towards that that won't work
02:08 andrej heh
02:08 manfred because salt-call mine.get is only returning a dictionary
02:08 manfred it doesn't actually do the function
02:09 andrej ok ...
02:09 andrej so, if the main_ip is in the mine, how do I get it out? :)
02:09 manfred you get it out by using jinja, and parsing the dictionary in your state file
02:09 otter768 joined #salt
02:10 manfred {% set grains = salt['mine.get']('*', 'grains.items') %}
02:10 manfred {% for minion in grains %}
02:10 manfred something something
02:10 manfred {{ minion['main_ip'] }}
02:10 manfred {% endfor %}
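A plain-Python sketch of what that Jinja loop walks over (the sample data is made up): `mine.get` returns a dict keyed by minion id, so iterating it yields the ids, and the grain has to be pulled from the value — `grains[minion]['main_ip']` rather than `minion['main_ip']` as in the snippet above.

```python
# Hypothetical shape of salt['mine.get']('*', 'grains.items'):
# a dict mapping minion id -> that minion's grains dict.
mine = {
    'oob-01a': {'main_ip': '10.0.0.11', 'os': 'Arch'},
    'oob-01b': {'main_ip': '10.0.0.12', 'os': 'Arch'},
}

# Iterating a dict yields its keys (minion ids), so index back into
# the dict to reach each minion's grain value.
ips = {minion: mine[minion]['main_ip'] for minion in mine}
print(ips)  # {'oob-01a': '10.0.0.11', 'oob-01b': '10.0.0.12'}
```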
02:11 oz_akan_ joined #salt
02:11 andrej So this is something that can't be done interactively for testing?
02:11 manfred https://github.com/saltstack/salt/blob/develop/salt/modules/mine.py#L162
02:11 manfred andrej: i mean... if you do grains.get ... you will have a dictionary back...
02:12 manfred https://github.com/saltstack/salt/blob/develop/salt/modules/mine.py#L162
02:12 andrej Right
02:12 andrej I understand that
02:12 manfred andrej: if you get the dictionary back, you know what to use...
02:12 manfred so ... what are you testing?
02:12 manfred you know the thing you want is in the dictionary
02:12 manfred and you know how to get it
02:13 manfred what you are wanting is just not the way that mine.get works
02:13 andrej Well ... I don't want to do a real live run and overwrite my current icinga config files w/ something potentially invalid
02:13 manfred that is why you have a testing environment to test the new states in
02:13 andrej Heh
02:14 manfred before pushing to your production environment
02:14 andrej In an ideal world (or if we were a bank or IRS) we might
02:14 manfred andrej: vagrant exists for a reason
02:15 agliodbs joined #salt
02:15 andrej :}
02:16 manfred if you are really worried about it, that is the correct way to test them
02:16 andrej How easy are DEBs or RPMs to install on arch?
02:18 andrej Ooooh ... there's a vagrant pacman package
02:18 andrej thanks manfred
02:19 dccc_ joined #salt
02:19 manfred yar
02:29 Shish_ joined #salt
02:33 dude051 joined #salt
02:33 andrej Sweet lord, that was too easy
02:33 andrej thanks manfred, seriousl
02:33 andrej seriously
02:33 manfred np
02:34 andrej instant vagrant fanboy here ;D
02:34 manfred i don't like a lot of things about it, but for just spinning up something quick to test stuff, it is pretty solid
02:35 terminalmage joined #salt
02:35 terminalmage left #salt
02:37 Singularo joined #salt
02:41 mgw joined #salt
02:43 dude051 joined #salt
02:44 dude051 joined #salt
03:00 dmick so, for what it's worth, all the way down in payload.SREQ.send, the packet is sent to pyzmq, and the poll returns a '\xa0'
03:00 dmick I wonder if kicking the tires with a 'set the serializer to pickle' is worth an experiment
03:05 notbmatt "set the camera to potato!"
03:06 garthk joined #salt
03:14 dmick heh
03:16 bezeee joined #salt
03:17 mapu joined #salt
03:20 beneggett joined #salt
03:25 fxhp joined #salt
03:25 ramishra joined #salt
03:28 davet joined #salt
03:31 Outlander joined #salt
03:35 animosity_ joined #salt
03:35 DaveQB joined #salt
04:04 ramishra_ joined #salt
04:05 oz_akan joined #salt
04:18 beneggett joined #salt
04:26 m1crofarmer joined #salt
04:32 gzcwnk left #salt
04:32 gzcwnk joined #salt
04:34 gzcwnk when I am running salt I'm finding sometimes a state completes in seconds, other times the same thing takes many minutes or times out, why would I see such a huge variation?
04:35 gzcwnk like this, http://pastebin.com/sQqnRdnd
04:38 matthiaswahl joined #salt
04:39 malinoff joined #salt
04:43 oz_akan joined #salt
04:48 ramishra joined #salt
04:49 ghartz joined #salt
04:49 ramteid joined #salt
04:52 rgarcia_ joined #salt
04:53 matthias_ joined #salt
04:53 JPaul joined #salt
04:53 v0rtex joined #salt
04:54 snuffeluffegus joined #salt
04:55 snuffels joined #salt
05:00 jalaziz joined #salt
05:02 snuffeluffegus joined #salt
05:02 snuffeluffegu2 joined #salt
05:04 ajw0100 joined #salt
05:06 jhauser joined #salt
05:06 armonge joined #salt
05:07 snuffeluffegus joined #salt
05:14 ramishra joined #salt
05:14 kermit joined #salt
05:17 beneggett joined #salt
05:23 Sauvin joined #salt
05:23 dstokes gzcwnk: if your minion doesn't return, then the run will take <timeout> to finish. if it does, it will only take as long as the state takes to complete
05:27 bmatt dstokes: can you define for me "doesn't return"?
05:43 xcbt joined #salt
05:44 oz_akan joined #salt
05:45 oz_akan_ joined #salt
05:50 ml_1 joined #salt
05:57 beneggett joined #salt
05:59 aquinas joined #salt
06:00 aw110f joined #salt
06:00 roolo joined #salt
06:04 davet joined #salt
06:17 aw110f joined #salt
06:29 mgw1 joined #salt
06:30 mgw joined #salt
06:30 mgw joined #salt
06:31 mgw joined #salt
06:31 mgw joined #salt
06:32 mgw joined #salt
06:32 badon joined #salt
06:32 mgw joined #salt
06:33 mgw1 joined #salt
06:33 mgw joined #salt
06:42 zions joined #salt
06:46 oz_akan joined #salt
06:51 kjkoster5489 joined #salt
06:52 badon joined #salt
07:10 xcbt joined #salt
07:11 thehaven_ joined #salt
07:12 matthiaswahl joined #salt
07:14 ml_1 joined #salt
07:17 alanpearce joined #salt
07:17 linjan joined #salt
07:17 Hipikat joined #salt
07:20 Hipikat joined #salt
07:23 linjan joined #salt
07:40 intellix joined #salt
07:41 ggoZ joined #salt
07:42 martoss joined #salt
07:46 sverrest joined #salt
07:47 oz_akan joined #salt
07:51 totte joined #salt
07:59 bhosmer joined #salt
08:04 totte joined #salt
08:06 ramishra joined #salt
08:08 darkelda joined #salt
08:12 aw110f joined #salt
08:13 ghartz joined #salt
08:18 yomilk joined #salt
08:20 shorty_mu joined #salt
08:23 sontek_ joined #salt
08:24 shorty_mu Hi everyone. I'm trying to run a Masterless Minion. What I do:
08:24 shorty_mu - install salt-minion via yum
08:24 shorty_mu - setup a new minion configuration:
08:24 shorty_mu file_client: local
08:24 shorty_mu file_roots:
08:24 shorty_mu base:
08:24 shorty_mu - /vagrant/salt/saltstack-states
08:24 shorty_mu pillar_roots:
08:24 shorty_mu base:
08:24 shorty_mu - /vagrant/salt/saltstack-pillars
08:24 shorty_mu - then I restart the Minion and it fails:
08:24 shorty_mu [salt.crypt][CRITICAL] The Salt Master has rejected this minion's public key!
08:24 shorty_mu What am I doing wrong?
08:24 poogles joined #salt
08:35 CeBe joined #salt
08:38 CeBe1 joined #salt
08:48 oz_akan joined #salt
08:53 poogles joined #salt
08:57 sectionme joined #salt
08:59 xzarth joined #salt
09:03 mackstick joined #salt
09:04 shorty_mu Although I added "file_client: local" the minion shows with debug logging enabled: "2014-07-31 10:58:46,329 [salt.minion][DEBUG   ] Attempting to authenticate with the Salt Master at 127.0.0.1"
09:06 ntropy i dont think you can run minion as a daemon w/o a master
09:06 ntropy you can call minion like so: 'salt-call --local state.highstate'
09:08 shorty_mu ?? DOH! That's kinda embarrassing. You're right. Everything's working fine without a running minion.
09:08 shorty_mu Is it possible to point that out in the docs?
09:09 shorty_mu I never thought of masterless == no daemon. My bad.
09:09 shorty_mu And many thanks @ntropy
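Pulling the pieces of shorty_mu's working setup together: a local file_client in the minion config (paths are the ones from the transcript), applied with ntropy's `salt-call --local state.highstate` instead of running the salt-minion daemon.

```yaml
# /etc/salt/minion — masterless operation, no daemon needed
file_client: local
file_roots:
  base:
    - /vagrant/salt/saltstack-states
pillar_roots:
  base:
    - /vagrant/salt/saltstack-pillars
```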
09:10 ramishra joined #salt
09:12 che-arne joined #salt
09:15 dcmorton joined #salt
09:19 andrej joined #salt
09:21 alanpearce joined #salt
09:25 chamunks joined #salt
09:26 faulkner joined #salt
09:26 alanpearce joined #salt
09:36 ramishra_ joined #salt
09:40 giantlock joined #salt
09:43 sectionme Anyone seen mine.get returning details for minions without the required grains set? Versions and example availible here http://paste.debian.net/112882/
09:43 helderco joined #salt
09:48 oz_akan joined #salt
09:49 shorty_mu left #salt
09:55 ghaering joined #salt
10:06 MrTango joined #salt
10:10 helderco joined #salt
10:10 alanpearce joined #salt
10:19 ramishra joined #salt
10:25 Outlander joined #salt
10:37 superted joined #salt
10:38 yomilk joined #salt
10:38 superted Is salt capable of using grain information within a command string? for example... salt -E server1 file.copy /var/log/wtmp /opt/logs/wtmp.server1
10:38 superted But with server1 generated by a grain?
10:39 superted salt -E server1 file.copy /var/log/wtmp /opt/logs/wtmp.{{grains.id}}
10:39 superted Type thing...
10:46 helderco joined #salt
10:49 oz_akan joined #salt
10:57 diegows joined #salt
10:59 intellix joined #salt
11:01 jacksontj joined #salt
11:05 simonmcc_ if anybody here is going to OpenStack Summit in Paris in November & would like to learn more about how we use salt with test-kitchen, you could vote for my talk: https://www.openstack.org/vote-paris/Presentation/ci-cd-pipeline-to-deploy-and-maintain-an-openstack-iaas-cloud :) Thank you!
11:12 alanpear_ joined #salt
11:18 jacksontj joined #salt
11:26 ramishra joined #salt
11:44 intellix joined #salt
11:47 williamthekid joined #salt
11:55 ramishra joined #salt
12:11 jas-_ joined #salt
12:19 nyx joined #salt
12:19 thayne joined #salt
12:20 scoates joined #salt
12:30 che-arne joined #salt
12:36 alanpearce joined #salt
12:37 slav0nic joined #salt
12:44 ghartz joined #salt
12:44 vejdmn joined #salt
12:46 pclermont joined #salt
12:48 sgate1 joined #salt
12:49 cpowell joined #salt
12:56 ajprog_laptop joined #salt
12:57 mapu joined #salt
13:01 bhosmer joined #salt
13:01 oz_akan joined #salt
13:08 mpanetta joined #salt
13:14 thedodd joined #salt
13:14 blarghmatey joined #salt
13:14 slav0nic joined #salt
13:14 matthia__ joined #salt
13:14 helderco_ joined #salt
13:14 geekmush1 joined #salt
13:14 racooper joined #salt
13:14 pclermont_ joined #salt
13:14 rlarkin|2 joined #salt
13:14 Shish_ joined #salt
13:14 clone1018_ joined #salt
13:14 notbmatt joined #salt
13:14 zemm_ joined #salt
13:14 Fa1lure joined #salt
13:14 rmnuvg_ joined #salt
13:15 xzarth joined #salt
13:15 hopthrisC joined #salt
13:15 Sacro joined #salt
13:15 bensons_ joined #salt
13:15 harkx joined #salt
13:15 aqua^lsn joined #salt
13:15 workingcats joined #salt
13:15 darrend joined #salt
13:15 ashb joined #salt
13:15 mirko joined #salt
13:15 jakubek joined #salt
13:15 jeddi joined #salt
13:16 carmony joined #salt
13:16 nlb joined #salt
13:16 micko joined #salt
13:16 beardo joined #salt
13:16 notpeter_ joined #salt
13:16 dimeshake joined #salt
13:16 nliadm joined #salt
13:16 whitepaws joined #salt
13:16 KaaK_ joined #salt
13:16 ronc joined #salt
13:17 bhosmer joined #salt
13:17 gmoro joined #salt
13:17 anteaya joined #salt
13:18 Kalinakov joined #salt
13:19 FeatherKing joined #salt
13:20 xzarth joined #salt
13:24 Deevolution joined #salt
13:34 hopthrisC left #salt
13:34 VSpike joined #salt
13:34 racooper Is there any option to print the jid at the start of a job run?
13:37 VSpike Is there a good summary online somewhere of what you can and can't with with Windows minions in salt? I couldn't find one
13:37 viq racooper: yes, add -v to flags
13:38 viq racooper: as in 'salt -v \* test.ping'
13:38 analogbyte joined #salt
13:38 VSpike I'm guessing you'll end up with a bit of a frankenstein's monster of salt + powershell scripts, in most practical situations
13:40 racooper thanks viq
13:41 racooper though that gives continual output during a job run
13:41 DaveQB joined #salt
13:43 gmoro joined #salt
13:45 xsteadfastx joined #salt
13:46 viq that it does
13:46 viq Or you could ^C out of a running job and that would give you jid
13:46 FeatherKing yeah ^C immediately
13:46 intellix joined #salt
13:47 FeatherKing will print the whole lookup command
13:48 FeatherKing i have a weird one on a bootstrapped minion, the service dies right away with "RSAError: no start line"
13:49 viq FeatherKing: check the pem files, sounds to me like one of them is malformed
13:50 FeatherKing yeah empty
13:50 FeatherKing odd
13:50 nyx joined #salt
13:50 FeatherKing thanks viq
13:52 zandy joined #salt
13:54 quickdry21 joined #salt
13:55 ericof joined #salt
13:56 dude051 joined #salt
13:59 eliasp joined #salt
14:03 helderco joined #salt
14:08 khaije1 joined #salt
14:12 scoates joined #salt
14:14 toddnni joined #salt
14:14 ipmb joined #salt
14:14 aquinas joined #salt
14:15 aquinas left #salt
14:18 housl joined #salt
14:19 jalbretsen joined #salt
14:19 jeddi joined #salt
14:21 dccc_ joined #salt
14:26 dariusjs joined #salt
14:27 oz_akan joined #salt
14:27 mechanicalduck_ joined #salt
14:27 dariusjs Hey I've been struggling to find out how to enclose such a command in a cmd.run: wevtutil el | Foreach-Object {wevtutil cl "$_"}
14:27 dariusjs I already have double quotations, how do I escape them?
14:28 sectionme Tried \"? :/
14:28 dariusjs oh im a dummy, single quotes were enough
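For the record, wrapping the whole PowerShell pipeline in single quotes (as dariusjs found) leaves the inner double quotes intact in YAML. A hypothetical state might look like the following; the state id is made up, and the `shell` argument is an assumption — on older Salt versions you may instead need to invoke `powershell -Command` explicitly in the command string.

```yaml
clear-event-logs:
  cmd.run:
    - name: 'wevtutil el | Foreach-Object {wevtutil cl "$_"}'
    - shell: powershell
```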
14:28 toastedpenguin joined #salt
14:31 cpowell_ joined #salt
14:32 ericof joined #salt
14:32 ajprog_laptop joined #salt
14:35 cpowell joined #salt
14:37 Ozack1 joined #salt
14:38 zandy joined #salt
14:39 toastedpenguin joined #salt
14:41 conan_the_destro joined #salt
14:44 to_json joined #salt
14:49 thedodd joined #salt
14:50 to_json joined #salt
14:56 kballou joined #salt
14:58 bigl0af joined #salt
15:02 bigl0af joined #salt
15:04 elfixit joined #salt
15:05 bigl0af joined #salt
15:07 rallytime joined #salt
15:09 jslatts joined #salt
15:14 UtahDave joined #salt
15:15 UtahDave left #salt
15:15 wendall911 joined #salt
15:20 thayne joined #salt
15:23 jaimed joined #salt
15:29 arthabaska joined #salt
15:33 eliasp upgraded yesterday all Win 7 minions which were still on 2014.1.1 to 2014.1.7, but now jobs just get stuck on them… A simple 'state.highstate' which is usually finished within ~30 seconds is now stuck for nearly 20 minutes… has anyone ever seen something like this?
15:33 kballou joined #salt
15:34 alanpear_ joined #salt
15:37 m1crofarmer joined #salt
15:37 scoates joined #salt
15:37 patarr how can i have different pillar values for different base environments?
15:38 patarr ah nvm, stupid question :)
15:38 m1crofarmer joined #salt
15:41 thedodd joined #salt
15:45 KaaK_ i've got a large dictionary in pillar that i'd like to break up into separate files -- but keeping the same namespace rooted with the key 'environment'. Is there any recipe for merging these dictionaries while keeping their namespaces intact?
15:45 tligda joined #salt
15:46 eliasp KaaK_: try some Jinja-magic like here: https://github.com/saltstack-formulas/samba-formula/blob/master/samba/map.jinja
15:48 sectionme joined #salt
15:55 gq45uaethdj26jw6 rallytime: re: https://github.com/saltstack/salt/issues/13365, I experience this bug even when my gogrid node successfully is created. when ret.update(data) is executed, data is a Node object, not a dictionary. same bug or different bug?
15:57 rap424 joined #salt
15:58 rallytime gq45uaethdj26jw6 if I recall correctly, I think the node is created on GoGrid just fine, but then salt has issues after that and can't connect.
15:58 rallytime are you able to get past that point?
16:00 xcbt joined #salt
16:01 gq45uaethdj26jw6 hold on, let me paste my error. i added some debugging, comparing to another provider
16:01 alanpearce joined #salt
16:01 gq45uaethdj26jw6 rallytime: after the node is created, i can issue salt commands to it just fine. it craps out when trying to move on to the next node in the map though
16:02 gq45uaethdj26jw6 rallytime: actually, i have a test running as we speak, it should finish in about 1m
16:03 rallytime the error I was seeing was connecting to the first node. Salt can't connect because port times out.
16:04 gq45uaethdj26jw6 rallytime: right, the error has the same root cause though. when executing ret.update(data), data is supposed to be a dict, however it is a Node object instead, therefore a bad parameter to update(). other providers accurately provide a dict to update()
16:05 rallytime yeah that makes sense. sounds like the same error to me. Mind posting your findings to that issue so we can have more info to help debug?
16:05 gq45uaethdj26jw6 sure. oddly enough it was working for me before. i think i actually applied a local patch to fix it and then updated salt and forgot how i had patched it
16:06 gq45uaethdj26jw6 i'll post some info when i get back from lunch
16:07 rallytime oh that's a bummer that you had it working and now it's broken again! thanks for your detective work here, though gq45uaethdj26jw6!
16:08 rallytime post that info and if you recover your patch, feel free to submit a PR so we can fix this one for good.
16:09 ghartz joined #salt
16:13 laubosslink joined #salt
16:14 jaimed joined #salt
16:18 s8weber joined #salt
16:18 VSpike Someone here just opined that if you want to know about Saltstack you should ask the "product people" and not the devs, because the devs will tell you something "completely different" ... the implication being "wrong".
16:18 Comradephate joined #salt
16:19 VSpike Funny... I'd usually assume the exact opposite when seeking information on a product
16:21 patarr is there a module that can do things like ulimit? Or would you probably just have a cmd.run in a state?
16:21 jalbretsen joined #salt
16:21 KyleG joined #salt
16:21 KyleG joined #salt
16:22 gothix joined #salt
16:29 sxar joined #salt
16:37 poogles left #salt
16:38 ntropy patarr: doesnt look like there is a built in module - http://salt.readthedocs.org/en/latest/ref/modules/all/
16:39 ntropy otoh, its only a pull request away
16:40 smcquay joined #salt
16:44 zandy joined #salt
16:45 thayne joined #salt
16:48 patarr but then again, it's just a file.managed limits.conf
16:48 patarr and there's a sysctl module.
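patarr's own conclusion, sketched as a state (the source path is illustrative): manage limits.conf as a file rather than shelling out to ulimit, since ulimit only affects the calling shell.

```yaml
# Hypothetical state: ship a limits.conf instead of running ulimit in cmd.run
/etc/security/limits.conf:
  file.managed:
    - source: salt://system/files/limits.conf
    - user: root
    - group: root
    - mode: 644
```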
16:50 lietu I've got a machine running fedora 20, which uses systemctl, but I also have a few "legacy" services using "service servicename action" .. if I try to use those services in salt, I get errors saying "The named service X is not available" .. any ideas?
16:51 notbmatt maybe build a hack with cmd.wait, or cmd.run+unless/onlyif?
16:51 lietu ah, there is a bug report about this from 29 days ago which is still open
16:52 dstokes is sync_grains the proper way to update the salt master grain cache?
16:52 lietu hmm .. yea, I guess I'll do some condition for fedora and scripts
16:52 lietu *script it
16:53 thedodd joined #salt
16:54 joehillen joined #salt
16:55 SteveJ1729 joined #salt
16:56 chrisjones joined #salt
16:56 troyready joined #salt
16:56 Gareth morning
16:57 ashirogl joined #salt
16:58 MatthewsFace joined #salt
16:58 troyready joined #salt
16:59 forrest joined #salt
17:00 armonge joined #salt
17:02 armonge joined #salt
17:03 armonge joined #salt
17:05 ml_1 joined #salt
17:05 thayne joined #salt
17:07 patarr will the salt user.present automatically create a group for a user you define?
17:07 patarr I notice that you can set uid and gid, but in order to set gid properly wouldn't a group have to be defined first?
17:07 forrest patarr, correct, you should create the group
17:08 forrest patarr, here's configuration of users via pillar: https://github.com/saltstack-formulas/users-formula
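Sketching forrest's answer as states (names and ids here are hypothetical): create the group first, then require it from the user so the gid resolves.

```yaml
deploy-group:
  group.present:
    - name: deploy
    - gid: 1500

deploy-user:
  user.present:
    - name: deploy
    - uid: 1500
    - gid: 1500
    - require:
      - group: deploy-group
```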
17:10 bezeee joined #salt
17:11 patarr thanks forrest
17:11 forrest patarr, yea np
17:11 patarr is there integrated support in the salt binaries for salt-formulas or do I just clone them and do what I do?
17:12 forrest patarr, the formulas are just sets of states, so just drop them in like you would normal states, and fill out the appropriate pillar data.
17:12 eliasp patarr: clone them and keep a local copy/your own fork around in either FS or GitFS
17:13 patarr i need to experiment with those fs things.
17:13 patarr We may switch to git soon so that will be awesome.
17:14 forrest patarr, yea gitfs is great, we use it at work
17:14 forrest other than the 1 minute wait for the refresh (which is only a big deal if you're dealing with impatient idiots) it's amazing
17:14 dude051 joined #salt
17:14 patarr what refresh? LIke a git pull?
17:15 bezeee +1 on that...I experimented with gitfs yesterday after someone suggested it to me here and it's pretty darn nice
17:15 forrest patarr, yea gitfs checks every 1 minute for updates in the repos and then pulls them in
17:15 eliasp well, there's "salt-run fileserver.update"  which takes care of the refresh on-demand
17:15 dude051 joined #salt
17:15 eliasp so if you're impatient, just run "salt-run fileserver.update"
17:15 bezeee if you read the gitfs walkthrough there's a snippet on how to add a post-receive hook to do that
17:16 forrest yea you can, I just don't do so :P
17:16 forrest it's 1 minute
17:16 forrest no big deal, I have a million things to work on
17:16 bezeee forrest: do you keep the pillar and states in separate git repos?
17:16 bezeee seems like keeping the pillars separate would be good since you could then merge between branches in your states repo
17:17 eliasp I have 1 repo for my SLS, one for my top.sls and one for all pillars
17:18 gothix joined #salt
17:18 bezeee why do you keep the top.sls in a separate repo?
17:18 eliasp I'd recommend to keep top.sls in a separate repo (out of your SLS repo), otherwise you'll have to make sure top.sls is always consistent between your branches … otherwise things will get messy
17:18 eliasp bezeee: because top.sls is merged into a single top.sls from all environments/branches
17:19 eliasp bezeee: by keeping it separate, you have 1 top.sls in a repo with just a base/master branch
17:19 eliasp bezeee: makes things waaaaay more simple
17:19 bezeee sure...that makes sense, but one suggestion from the docs is to only keep it in the master branch
17:19 bezeee "the recommended setup is to have the top.sls file only in the master branch and use environment-specific branches for state definitions."
17:19 eliasp bezeee: well, once you create a new branch based off master, it's automatically there as well
17:19 bezeee very true
17:20 eliasp and if you remove it manually in this new branch, things get out of sync… merging the other branch back into master etc. will become really messy
17:20 bezeee I'm deciding on our set up here at work in the next week...so hearing other's set ups is very helpful
17:20 gq45uaethdj26jw6 rallytime: http://pastebin.com/Zc93W3tm This is what I get. Issue only occurs when I run a map with gogrid hosts. Error does not exist when launching a single instance by profile. There are actually a number of other bugs with the gogrid deploy script.
17:24 aw110f joined #salt
17:24 conan_the_destro joined #salt
17:28 wigit joined #salt
17:28 rallytime gq45uaethdj26jw6 that might be worth opening a new issue that references the one I opened, along with your --versions-report. I wasn't able to get any instances running on GoGrid at the time of filing that issue. I might have had something wrong in my configuration files, but that nasty traceback shouldn't be happening either.
17:29 rallytime I know there is a current effort to try and clean up these tracebacks with different providers, so the more info you give, the better!
17:30 dccc_ joined #salt
17:36 smcquay joined #salt
17:37 jaimed joined #salt
17:37 gq45uaethdj26jw6 rallytime: Alright, if you have a need to get gogrid working, let me know. I have some 30 instances running that I put up through salt-cloud. Spent a lot of time debugging...
17:38 smcquay joined #salt
17:38 Ryan_Lane joined #salt
17:39 rallytime gq45uaethdj26jw6 awesome! thanks for the debugging work. I'll give you a holler when I circle back around to writing those salt-cloud tests for gogrid if I still have trouble.
17:40 kermit joined #salt
17:42 guest11232 joined #salt
17:44 druonysus joined #salt
17:44 druonysus joined #salt
17:46 gmoro joined #salt
17:46 ckao joined #salt
17:47 eliasp where are the build-files for the Windows builds of Salt? https://github.com/saltstack/salt-windows-install is quite outdated and unlikely the code used for the latest builds…
17:48 forrest eliasp, https://github.com/saltstack/salt/tree/develop/pkg/windows
17:48 forrest it's in the main repo now
17:48 eliasp forrest: thanks a lot
17:48 forrest eliasp, yea np
17:48 eliasp maybe somebody could update the description of the old repo and point to the new location…?
17:49 forrest eliasp, I'm already planning on doing so. I just want to confirm with Dave before I do
17:49 forrest since he'll have to accept the PR
17:49 eliasp great, thanks :)
17:49 forrest yup
17:51 matthiaswahl joined #salt
18:01 seventy3_away joined #salt
18:03 thayne joined #salt
18:03 to_json joined #salt
18:04 notbmatt what's your concern regarding security?
18:04 notbmatt the minion can't manipulate the master
18:06 forrest notbmatt, that's not technically true. If you KNEW that some sort of secure pillar data was available for a specific grain, you could write a state to get that grain and dump the data.
18:06 forrest But in that scenario you'd have to be managing minions that aren't under your control, which seems kind of dumb anyways.
18:06 notbmatt right
18:07 notbmatt but that requires auto-acceptance of keys *and* the ability to predict useful minion IDs
18:07 notbmatt (and no collision)
18:07 forrest exactly
18:07 notbmatt I've actually been meaning to write a way to automate that process
18:07 bmatt but haven't gotten around to it :)
18:14 zandy joined #salt
18:15 roolo joined #salt
18:18 dstokes salt bootstrap broken again? /cc bmatt
18:18 bmatt dstokes: not afaik; you want me to test?
18:19 dstokes seeing the same error as the other day re: salt-api
18:19 bmatt testing
18:21 kermit joined #salt
18:22 bmatt dstokes: you're using the stable version?
18:23 bmatt dstokes: can't repro
18:23 dstokes using vagrant provisioner, not sure if it's using stable or dev
18:23 dstokes strange
18:23 dstokes i'll keep digging. thx
18:23 bmatt using version at https://raw.githubusercontent.com/saltstack/salt-bootstrap/stable/bootstrap-salt.sh
18:24 bmatt fyi, Vagrant uses the script that lives in .../embedded/gems/gems/vagrant-1.6.3/plugins/provisioners/salt/bootstrap-salt.sh
18:24 thedodd joined #salt
18:26 forrest are you guys actually using vagrant in production?
18:26 bmatt I use it for testing
18:27 bmatt I can iterate my states in a tiny test cluster pointed at my own state repo(s), and then promotion to production is a PR and a merge
18:29 forrest ahh
18:29 dstokes bmatt: what's that path relative to?
18:29 bmatt dstokes: on OS X, /Applications/Vagrant/
18:29 bmatt anything else idk :)
18:30 dstokes ah, got it. looks like it's using stable
18:30 mapu joined #salt
18:31 dstokes * ERROR: salt-api was not found running
18:31 to_json joined #salt
18:31 bmatt yeah, that's the message from a few days ago
18:31 bmatt I looked through the commit logs and don't see a regression
18:31 dstokes salt-api is installed along w/ upstart script
18:33 doddstack joined #salt
18:34 bmatt ah, so s0undt3ch made salt-api installation implicit in master installation now?
18:35 forrest he shouldn't have
18:35 bmatt agreed
18:35 forrest or it would be extremely rare for him to do something like that
18:35 forrest he is not a fan of it
18:35 forrest lol
18:36 bmatt of salt-api?
18:36 forrest no no, of adding deps to the bootstrap that aren't absolutely required
18:36 forrest the install is meant to be as minimal as possible
18:37 bmatt right, except he wasn't willing to back out the salt-api test itself, so maybe he wants it to be installed?
18:37 forrest I doubt it.
18:37 bmatt je ne sais pas
18:37 forrest you can message him
18:37 forrest he's on IRC even though he's not in this channel
18:37 forrest though he may not be around for a bit
18:39 kermit joined #salt
18:39 kermit joined #salt
18:43 dstokes unrelated: anybody else been seeing logging weirdness on 2014.7? restarting minions sometimes causes the new proc to not write anything to default minion log
18:44 dstokes process is running, cpu is working, just no log output
18:44 zandy joined #salt
18:44 dstokes wait, apparently only error output
18:45 dstokes nvm, think this is a config mishap on my end
18:46 zandy_ joined #salt
18:46 robawt anyone have more than one git repo for ext_pillar?
18:46 * robawt highfives forrest
18:47 forrest hey robawt
18:50 FeatherKing would this be some valid jinja for checking a return code from a cmd.run
18:50 FeatherKing {% if salt['cmd.run']("grep --quiet string file",'') == 0%}
18:51 FeatherKing will jinja be aware that grep returns 0 on success?
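For reference, a sketch of what FeatherKing is after: `cmd.run` returns the command's *output* as a string, so comparing it to 0 never matches; `salt['cmd.retcode']` returns the exit status instead. The file path and state ID below are placeholders.

```yaml
{# Sketch: cmd.retcode returns the shell exit status (0 on grep match).
   /etc/example.conf and the state ID are illustrative only. #}
{% if salt['cmd.retcode']('grep --quiet string /etc/example.conf') == 0 %}
string-present:
  cmd.run:
    - name: echo "found it"
{% endif %}
```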
18:52 allanparsons joined #salt
18:54 allanparsons for the unless in cmd.run, how do you check for >=?
18:54 allanparsons like:  unless: /bin/app --version -gte {{ version }}
18:54 helderco joined #salt
18:56 snuffeluffegus joined #salt
18:57 patarr joined #salt
18:57 patarr joined #salt
18:58 dstokes allanparsons: bash math is weird ;)
18:58 dstokes also i believe the flag is -ge
18:59 allanparsons >= :)
18:59 dstokes i.e. `unless: "$(/bin/app --version)" -ge "{{ version }}"`
18:59 dstokes maybe..
18:59 allanparsons nope...  - unless: [[ "/usr/local/bin/app --version | awk '{print $2}' >= '{{ app_version }}'" ]]
19:04 kjkoster5489 joined #salt
19:04 dstokes allanparsons: that do it?
19:04 allanparsons yea
19:05 dstokes won't that always be truthy? the conditional contains a string.. (double quotes)
19:07 dstokes oh, unless jinja / yaml is removing those
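A sketch of the version check as a state, with everything hedged: `/usr/local/bin/app`, the install script, and `app_version` are placeholders. `unless` is run through the shell and exit status 0 skips the state; since `-ge` only compares integers, a `sort -V` trick handles dotted version strings (if the required version sorts lowest, the installed version is at least that new).

```yaml
# Sketch only: paths and app_version are placeholders.
# unless: exit 0 means "requirement already met, skip this state".
install-app:
  cmd.run:
    - name: /usr/local/bin/install_app.sh
    - unless: >
        test "$(printf '%s\n' '{{ app_version }}'
        "$(/usr/local/bin/app --version | awk '{print $2}')"
        | sort -V | head -n1)" = '{{ app_version }}'
```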
19:11 m1crofarmer joined #salt
19:12 beneggett joined #salt
19:15 peters-tx joined #salt
19:17 laubosslink joined #salt
19:33 xsteadfastx joined #salt
19:34 austin987 joined #salt
19:36 beneggett joined #salt
19:37 ipmb joined #salt
19:43 abe_music joined #salt
19:43 zooz joined #salt
19:45 zandy joined #salt
19:46 ghartz joined #salt
19:53 matthiaswahl joined #salt
20:01 Comradephate joined #salt
20:03 poogles joined #salt
20:12 stevednd Does jinja have any capacity to only include a file if it exists?
20:14 dude051 joined #salt
20:14 dave_den joined #salt
20:17 dstokes there a way to stream the stdout of a minion job?
20:21 jslatts joined #salt
20:23 martoss joined #salt
20:27 bezeee joined #salt
20:28 stevednd joined #salt
20:30 stevednd joined #salt
20:32 bensons_ stevednd: havent found one, if you do - please share :)
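For what it's worth, Jinja itself has a construct for stevednd's question: `ignore missing` turns a failed include into a no-op. The template name below is a placeholder.

```yaml
{# `ignore missing` makes the include silently do nothing when the
   file can't be found; 'optional.jinja' is a placeholder name. #}
{% include 'optional.jinja' ignore missing %}
```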
20:34 bezeee joined #salt
20:37 scoates joined #salt
20:37 vejdmn joined #salt
20:38 cpowell joined #salt
20:40 gzcwnk I seem to have jobs not finishing, just returning {}, how do I fault-find this pls?
20:41 gzcwnk i ran this at 5pm last night and 8am this morning, http://pastebin.com/CTWebTX0
20:42 beneggett joined #salt
20:43 stevednd gzcwnk: salt-run jobs.lookup_jid <jid>
20:44 zandy joined #salt
20:45 cpowell joined #salt
20:45 stevednd if you're still getting nothing, I would suggest stopping the salt-minion on the machine, connect via ssh, and run `salt-minion -l debug` then from your master rerun your command and see what happens on the minion
20:49 patarr joined #salt
20:49 patarr joined #salt
20:51 thayne joined #salt
20:52 gzcwnk vmware so i have a console
20:54 bhosmer joined #salt
20:55 gzcwnk runs perfectly
20:55 gzcwnk whatever was causing the seizure has disappeared, but im getting it frequently
20:57 vejdmn joined #salt
20:58 zooz joined #salt
20:59 cheus joined #salt
21:01 robawt forrest: you mess with gitfs?
21:01 robawt i'm painting myself into an edge case
21:03 forrest robawt, we use gitfs here
21:03 forrest but only with one environment
21:03 robawt forrest: i've got two pillar repos i want to use, and it appears to only grab the first for ext pillar :\
21:04 forrest robawt, ahh, we don't use it for ext pillars, all that data is in reclass.
21:04 forrest robawt, but yea, I don't know if it can support multiple repos which are combined like that
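The master-config shape robawt is attempting looks like the following (repo URLs are placeholders); whether git_pillar of that era would merge both entries or only use the first was exactly the open question.

```yaml
# Placeholder repo URLs. The channel's question: does git_pillar merge
# both repos, or silently take only the first entry?
ext_pillar:
  - git: master git://example.com/pillar-one.git
  - git: master git://example.com/pillar-two.git
```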
21:05 cachedout joined #salt
21:05 robawt forrest: ok next bit is to dig through the code
21:06 robawt so if i disappear for a few weeks look for me
21:06 forrest robawt, might be worth seeing if there is an issue that exists as well
21:06 forrest robawt, no way man, it's dangerous out there
21:06 forrest I'll put up a poster for you
21:06 forrest but that's it
21:06 forrest :P
21:06 kermit joined #salt
21:07 amontalban joined #salt
21:08 MatthewsFace joined #salt
21:10 robawt haha thanks
21:12 seanz joined #salt
21:12 seanz akoumjian: ping
21:12 seanz akoumjian: I've got some questions about the salty-vagrant plugin, if you have a few minutes.
21:13 forrest seanz, I haven't seen Alec around in a while
21:13 seanz forrest: Ah...dang.
21:13 forrest seanz, yea was going to ask him about some vagrant issues people were having earlier this week
21:14 seanz forrest: I guess I'll just have to figure it out myself then.
21:14 forrest seanz, heh
21:14 robawt seanz: greetings!
21:14 seanz robawt: Greetings!
21:14 robawt salty vagrant is dead
21:14 robawt salt is now built into vagrant
21:14 seanz What about salty-vagrant? :)
21:15 robawt wha wha
21:15 seanz I'tis?
21:15 robawt :)
21:15 robawt yep, been that way since around/before my fuzzy-octo-hipster talk
21:15 tligda joined #salt
21:15 jslatts joined #salt
21:16 seanz robawt: Since you're familiar with our version woes, here's the situation: vagrant does a shallow clone, and the number of commits in salt is now so numerous that the 0.16.4 version we've been using isn't cloned.
21:16 seanz I guess I'll just use a custom salt provisioning script.
21:16 seanz That'll work.
21:17 robawt wow
21:17 robawt you guys should upgrade
21:17 * robawt ducks
21:17 seanz haha! Preaching to the choir...
21:18 forrest I think one of my test vms is still running 0.16.4
21:18 forrest I should upgrade that...
21:18 seanz forrest: At least you *can* upgrade it.
21:18 robawt seanz: can you do a git install and point to the tag instead?
21:18 seanz We've pinned ourselves for fear of future changes.
21:18 robawt or i think that's what you're doing
21:18 robawt double check
21:18 seanz robawt: Yes, the configuration points to the tag.
21:18 seanz The tag is now out of scope and doesn't get cloned.
21:19 seanz I was thinking it would be better to use the ppa instead, anyway.
21:19 seanz That would be cleaner.
21:19 seanz Install the ppa and install salt through a custom provisioner.
21:19 seanz Provisioning script, that is.
21:19 robawt i am not sure you can point to it.  you should use a SHELL provisioner in vagrant, and point to a script that does the manual 16-4 ppa
21:20 seanz That's what I'm saying.
21:20 robawt i was looking at the same issue for pointing to our local repos
21:20 robawt yep, do that
21:20 robawt :D
21:20 seanz salt.bootstrap_script = "salt_provisioner.sh" - something like that.
21:20 seanz Whatever it's called.
21:20 robawt seanz: https://docs.vagrantup.com/v2/provisioning/shell.html
21:20 robawt NO
21:20 forrest seanz, it's my 5 dollar VM, I can do whatever I want :P
21:20 robawt get rid of salt provisioner completely
21:20 robawt use the vagrant shell provisioner
21:21 seanz robawt: ...but then we lose the ability to have vagrant call a highstate automatically.
21:21 seanz Though nobody uses that...someone may want to at one point.
21:21 robawt seanz: no you don't
21:21 seanz I don't?
21:21 robawt you just add another inline to highstate after the install
21:21 robawt just like salt does
21:21 robawt you're just doing it 'manually'
21:21 robawt (via the shell provisioner manually, not by typing it)
21:21 seanz Oh...got it! I think I'm into that idea.
21:22 seanz No, I get what you're saying.
21:22 robawt yep, that way you can comment it out when you don't want highstate
21:22 dmick i.e. "vagrant ssh -c 'salt-call --local state.highstate'"
21:22 seanz I don't need --local though, right?
21:22 * dmick does that for vagrant-hosted jenkins builds
21:22 robawt local?
21:22 seanz dmick: Oh, sorry.
21:22 dmick well, I do that because it's masterless in my case
21:22 dmick but yeah
21:22 seanz I didn't realize you said that.
21:23 seanz Looks like a solid option. I'll give it a try.
21:23 robawt local doesn't matter on masterless minion
21:23 seanz Thanks robawt, forrest and dmick.
21:23 robawt 9/10 times y'all are running masterless minion vagrant salt, so it shouldn't be needed
21:23 seanz robawt: How would *you* know that? :)
21:23 dmick robawt: yeah, good poitn
21:23 robawt seanz: i'm a spy
21:23 forrest seanz, I don't think I did much, but you're welcome
21:24 robawt i still owe forrest a beer
21:24 dmick one gets in the habit
21:24 forrest robawt, hah
21:24 seanz forrest: You told me akoumjian hasn't been around. That saved me hours of pinging him.
21:24 forrest seanz, ahh
21:24 seanz Years, really.
21:26 doddstack joined #salt
21:28 Linuturk in regards to the Windows State Modules
21:28 Linuturk are the only ones that actually work with windows prefixed by win_*
21:28 Linuturk or, do other standard ones like file and extract work as well?
21:28 Linuturk s/extract/archive
21:28 eliasp Linuturk: others work as well
21:29 eliasp Linuturk: win_* are mostly modules/states with win-specific functionality
21:29 bhosmer joined #salt
21:29 Linuturk is there a compatibility matrix somewhere that I can reference?
21:29 Ozack1 joined #salt
21:29 akoumjian Sorry
21:29 Linuturk or, just try what I think I need and hope for the best
21:30 akoumjian seanz: I am on a very different timezone these days
21:30 seanz akoumjian: It's quite alright. :)
21:30 eliasp Linuturk: just run "salt some-win-minion -d module" … if it responds, it should work on windows
21:30 Linuturk good idea eliasp thanks
21:30 akoumjian seanz: Get your issue resolved?
21:31 seanz akoumjian: I think so. Given our circumstances, I'm just going to go with a custom provisioner script. Thanks for responding, though.
21:32 akoumjian seanz: Sounds good
21:34 patarr joined #salt
21:34 patarr joined #salt
21:35 ml_1 joined #salt
21:36 seanz robawt: The provisioner script is dramatically faster. :)
21:39 DaveQB joined #salt
21:44 zandy joined #salt
21:45 ksalman how can i get get past this "The function "state.sls" is running as PID 2488 and was started at 2014, Jul 27 23:08:22.433375 with jid 20140727230822433375"
21:45 ksalman i tried restarting the minion but i still get this error
21:49 thedodd joined #salt
21:51 beneggett joined #salt
22:00 allanparsons @ksalsman... you can kill the minion on the box running.  one liner:  "or ii in `ps -ef | grep -i salt | grep -v grep | awk '{print $2}'`; do sudo kill -9 $ii; done"
22:01 allanparsons ksalman ^
22:01 allanparsons ... "for ii in ...."
22:02 ksalman allanparsons: i have a lot of minions so i can't really login to various minions and kill the service
22:03 allanparsons you must've deployed something that's hanging
22:03 allanparsons do you know what box is hanging?
22:04 dave_den can your minions accept commands from the master?
22:06 dave_den salt '*' saltutil.kill_job 20140727230822433375
22:07 ksalman i'll try that thanks
22:07 mechanicalduck joined #salt
22:08 MatthewsFace joined #salt
22:11 ipmb_ joined #salt
22:12 Luke joined #salt
22:12 mechanicalduck joined #salt
22:18 seanz left #salt
22:24 bhosmer joined #salt
22:30 mosen joined #salt
22:30 Ozack1 joined #salt
22:30 stevednd joined #salt
22:36 robawt hey folks
22:36 robawt i think i found it
22:37 robawt https://github.com/saltstack/salt/blob/develop/salt/pillar/git_pillar.py#L224
22:38 robawt does this mean there can be only one git repo for ext_pillar
22:38 robawt ?
22:39 helderco joined #salt
22:44 Outlander joined #salt
22:44 zandy joined #salt
22:44 ajprog_laptop joined #salt
22:46 gzcwnk is there a saltstack module to allow exporting of an nfs module?
22:47 gzcwnk and mounting and setting up fstab?
22:49 gzcwnk nfs mount point i mean
22:49 eliasp gzcwnk: states.mount
22:50 gzcwnk yeah im looking at that, cant see how its done from that page
22:51 eliasp gzcwnk: there are loads of examples: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.mount.html
22:51 eliasp … ok, just one… but that's clear enough
22:54 gzcwnk hmm not really i want a remote mount
22:54 gzcwnk but i also want to configure nfs server to a) run on certain ports, b) export the share
22:55 gzcwnk is that done with something else?
22:56 dmick states.mount is for mounting, surely
22:56 gzcwnk also that example doesnt say its editing fstab
22:56 dmick see persist arg to states.mount.mounted
22:56 gzcwnk yes, but what options for remote mounting and editing fstab?
22:57 dmick what do you mean "remote mounting"?  An NFS mount is remote by definition, right?
22:57 eliasp gzcwnk: a "remote mount" is no different than a local
22:57 eliasp gzcwnk: - device: server:/export
22:57 gzcwnk so how do i specify the remote mount?
22:57 gzcwnk ah right
22:57 gzcwnk ta
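Putting eliasp's and dmick's answers together as one sketch (server name and paths are placeholders): `mount.mounted` with `persist: True` both performs the NFS mount and writes the matching fstab entry.

```yaml
# Sketch: NFS mount with fstab persistence. nfsserver:/export/data and
# /mnt/data are placeholders. persist: True manages the fstab entry;
# mkmnt: True creates the mount point if it's missing.
/mnt/data:
  mount.mounted:
    - device: nfsserver:/export/data
    - fstype: nfs
    - opts: defaults
    - persist: True
    - mkmnt: True
```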
22:58 gzcwnk is there a module to set up a nfs server?
22:59 dmick all I see is nfs3, which is pretty slim
22:59 gzcwnk i cant find much
22:59 dmick http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.nfs3.html
22:59 gzcwnk guess i should put in a request?
22:59 tracphil joined #salt
22:59 dmick or make one out of cmd.run's
23:00 forrest you guys know you can call modules from states right?
23:00 gzcwnk yeah cmd.run i guess
23:00 forrest http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html
23:00 dmick yeah.  nfs3 isn't very rich tho
23:00 manfred forrest: the other one
23:00 gzcwnk i might ask if it can be expanded
23:00 gzcwnk they can only say take a running jump
23:01 manfred forrest: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html
23:01 forrest manfred, http://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html
23:01 manfred that is the one you wanted
23:01 forrest yea thanks for the correction :P
23:01 forrest too many hits in my history
23:01 manfred yes
23:01 manfred !states.module
23:01 forrest !help
23:01 wm-bot4 I'm a documentation bot. To control me, please use #salt-bot to avoid channel spam. See this URL for my commands: http://meta.wikimedia.org/wiki/WM-Bot
23:02 manfred !states.module.run
23:02 wm-bot4 http://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html#salt.states.module.run
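A minimal sketch of what states.module.run does, i.e. calling an execution-module function from a state file. `mine.update` is just a convenient real function to demonstrate with; any execution function could go in `name`.

```yaml
# Sketch: module.run bridges state files and execution modules.
# mine.update is used purely as a demonstration target.
refresh-mine:
  module.run:
    - name: mine.update
```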
23:02 forrest you're lucky the bot was down for three+ days when you weren't around manfred
23:03 forrest otherwise you'd be number 2
23:03 forrest :P
23:03 manfred heh
23:03 adamfisk joined #salt
23:03 phx joined #salt
23:03 bezeee joined #salt
23:04 robawt !states.module.git
23:04 dkellz joined #salt
23:04 robawt i call shenannigans!
23:05 manfred !states.git.latest
23:05 wm-bot4 http://docs.saltstack.com/en/latest/ref/states/all/salt.states.git.html#salt.states.git.latest
23:05 dkellz greetings!
23:05 manfred robawt: there are only three commands in the module state, run wait and mod_watch
23:05 adamfisk hey ya'll i'm a salt newbie and am trying to provision some servers using existing an salt config from a colleague on paternity leave, and I keep getting the following on cloudmaster after a new cloudmaster is setup: root@cloudmaster:~# salt Traceback (most recent call last):   File "/usr/local/bin/salt", line 5, in <module>     from pkg_resources import load_entry_point   File "/usr/lib/python2.7/dist-packages/pkg_resources.py", lin
23:06 cnelsonsic joined #salt
23:06 adamfisk sorry didn't all come through -- key is the last line: pkg_resources.DistributionNotFound: apache-libcloud
23:06 manfred adamfisk: how are you deploying salt? With salt-bootstrap or something else?
23:07 dkellz I want to provision/maintain a set of laptops running Ubuntu in my classroom.  Can somebody point me in the right direction?
23:07 manfred -L with salt-bootstrap will install apache-libcloud, but you just need the apache-libcloud python module
23:08 manfred dkellz: just start with the beginning walkthrough
23:08 manfred dkellz: http://docs.saltstack.com/en/latest/topics/tutorials/walkthrough.html
23:08 adamfisk good question =)..hold on...it's using a fairly lengthy python script to first create a server on Digital Ocean...looks like the key line is the following, so I believe that's bootstrap correct?
23:08 manfred dkellz: and the states tutorial 1-> 5 http://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html
23:08 adamfisk util.ssh_cloudmaster("sudo SALT_VERSION=%s ./bootstrap.bash" % config.salt_version, ".log")
23:09 adamfisk so maybe I should just add -L in there...
23:09 dkellz gracias!!
23:10 manfred adamfisk: i don't see SALT_VERSION in https://bootstrap.saltstack.com so i don't know what your bootstrap.bash script is
23:10 manfred https://github.com/saltstack/salt-bootstrap
23:10 manfred http://docs.saltstack.com/en/latest/topics/tutorials/salt_bootstrap.html
23:14 nickg what do you guys think about an interval or delay option for batch processing on the CLI?
23:15 forrest nickg, what purpose would that serve?
23:15 KaaK_ is it possible to include files via pillar from a salt state. e.g. keeping an SSL key in pillar, and referencing it via a salt state?
23:16 nickg forrest: restarting applications.. sometimes they take X seconds to get up to speed.  so i want to let each server's app fully launch before i move on
23:16 manfred KaaK_: of course
23:16 nickg just because the service is running it may not be accepting new connections just yet
23:17 ghartz joined #salt
23:17 manfred nickg: then your service is written wrong
23:17 manfred it shouldn't fork to the back ground until it is ready
23:17 forrest nickg, Fair enough, that was the only situation I could think of, but usually only shitty (IE Java) services are like that :P
23:17 nickg well, think of mysqld innodb rollbacks, etc
23:18 forrest hmm
23:18 adamfisk thanks manfred: looks like it's just grabbing it from configy.py, which is setting the version to 2014.1.4
23:18 forrest nickg, I think it would be good to have as an option
23:18 forrest really there's no negative of having it
23:18 nickg the only downside i can think of is bloat
23:18 forrest yea
23:18 forrest but it won't be much
23:18 nickg but it shouldn't impact anything other than lines of code
23:18 forrest honestly it would be better if there was dynamic checking
23:19 forrest so you could have a batch_service.sls that would wait till certain criteria had been met or something before moving to the next batch
23:19 nickg yeah
23:19 forrest but your solution is quicker
23:19 nickg idk if i have enough time to write that tonight though :)
23:19 manfred you can just build in a wait
23:19 forrest nickg, yea I say go for it
23:19 manfred don't do your wait in the batch
23:20 manfred what you need is a state that makes sure the service is up and responding
23:20 nickg i was just thinking of one additional option, default 0, that if specified it would sleep after executing in batch.py
23:20 manfred i disagree that it should be part of batch, because it could be used to affect other states
23:20 manfred nickg: like this http://docs.saltstack.com/en/latest/ref/states/all/salt.states.tomcat.html#salt.states.tomcat.wait
23:20 manfred so that it just sits there and waits for tomcat to be done starting
23:20 manfred then the state succeeds
23:20 dmick yeah.  it seems to me that 'delay' is always the worst solution
23:21 dmick when you have no other options
23:21 manfred don't build a delay into batch
23:21 manfred just make a like...
23:21 manfred service.wait
23:21 KaaK_ manfred, can you point me in the right direction for docs? I'm going through the online docs for pillar and not really seeing anything of the sort
23:21 manfred that can run a command, over an over for a set timeout, until it succeeds
23:21 nickg heh well, the downside here is this is a cmd.run not a service. i guess i should have specified that earlier
23:22 nickg the easiest method to isolate for the conversation was the example of a service
23:22 adamfisk hey manfred sorry took some digging -- installing with pip install salt==$SALT_VERSION
23:22 manfred KaaK_: drops the thing in here https://github.com/mparker1001/loadtester-salt/blob/master/loadmaster/init.sls#L91
23:22 manfred KaaK_: drop the key here https://github.com/mparker1001/loadtester-salt/blob/master/loadmaster/pillar.example#L10
23:22 manfred nickg: so?
23:22 manfred nickg: make a new module
23:22 manfred nickg: name it something, all you need is something that just waits and checks a command
23:23 nickg i just felt it could be valuable across many modules, hence the use of batch.
23:23 manfred so
23:23 manfred what I was saying was not just for services
23:23 manfred you could make a new module
23:23 forrest Seems to me one of the main focuses was to insert a delay between batches
23:23 forrest not on a per state basis
23:23 manfred that just sits there and checks a command
23:23 manfred forrest: but that is useless and won't really solve anything
23:23 manfred that is just a delay
23:23 forrest correct
23:23 manfred whereas if you tie it to a state... you don't need the delay at the very end
23:23 manfred you know that the service is started
23:24 KaaK_ manfred, ahh... so there is no file reference -- just put the file contents in. thanks for the links!
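There is in fact a file-reference-free middle ground for KaaK_'s case: `file.managed` can pull file contents straight out of pillar with `contents_pillar`. The pillar path and target file below are placeholders.

```yaml
# Sketch: key material lives in pillar (ssl:key is a placeholder pillar
# path) and never touches the fileserver; file.managed writes it out.
/etc/ssl/private/example.key:
  file.managed:
    - contents_pillar: ssl:key
    - user: root
    - group: root
    - mode: '0600'
```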
23:24 forrest well, in that case you might as well run a cmd.run that does a sleep x
23:24 forrest lol
23:24 manfred where as with a batch and a delay... what if it takes just a bit longer this one time
23:24 manfred nickg: in the case of the batch wait... just do
23:24 manfred waiting:
23:24 manfred cmd.run:
23:24 nickg forrest: heh yeah that avoids the debate with the community on whether or not my time is worth it :)
23:24 manfred name: sleep X
23:24 manfred - order: last
23:24 forrest nickg, :P
23:24 manfred and run a sleep last
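manfred's fragment, assembled into valid YAML (the 30-second value is arbitrary): a sleep ordered last, so the state run, and hence each batch, doesn't finish until the pause elapses.

```yaml
# manfred's sketch assembled. order: last pushes this after everything
# else in the run; the duration is arbitrary.
waiting:
  cmd.run:
    - name: sleep 30
    - order: last
```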
23:25 dmick wait...default python-zmq on Ubuntu Trusty is using zmq 2.2.0?  really?
23:25 manfred the delay on a batch doesn't add any value
23:25 nickg manfred: yes, true, but i dont relaly need it unless im in batch mode :)
23:25 forrest crappy solution for a crappy situation *shrug*
23:25 manfred nickg: put it in the state file with your cmd.run, so it only gets included if that one is run
23:25 nickg if im manually executing on one server i dont need the delay.  i only need it if i execute in batch
23:26 manfred the real solution would to have a check state, that just checks if something is running, over and over, for a timeout number of seconds
23:26 manfred then you know it is running
23:26 manfred and then you know it is in the state
23:26 manfred nickg: but then our thing won't be in that state, so it is lying
23:27 nickg the states are generally broken anyway  but thats probably an ubuntu issue not salts
23:27 manfred it shouldn't finish running the state or highstate until everything is done
23:27 manfred waiting for an arbitrary amount of time doesn't tell you if your thing finished
23:28 nickg manfred: yes you are correct, its arbitrary.
23:28 manfred i don't see the point in adding the delay
23:28 manfred it shouldn't finish the highstate until everything is done
23:28 nickg manfred: well this isnt a highstate command :)
23:28 manfred s/highstate/state
23:28 manfred it applies to both
23:29 manfred there shouldn't be a need to delay running the next batch, as soon as the state run is done, it should be in that state
23:29 dmick grr, no, pip vs apt
23:29 jesusaurus does anyone here know the story with m2crypto? is it still maintained? did the maintainers fall into oblivion?
23:29 nickg manfred: i wont disagree
23:29 manfred jesusaurus: no idea
23:30 manfred jesusaurus: raet doesn't use m2crypto though
23:31 manfred nickg: so the solution is not to add the delay on batch
23:31 nickg manfred: correct. it's not the best solution, so it's not a solution for the community
23:32 manfred cool
23:32 manfred nickg: i am going to write that sleep/watch state
23:32 Luke joined #salt
23:32 manfred nickg: technically you could just use cmd.run
23:32 manfred and have your command do the loop
23:33 forrest manfred, that's not as clean though
23:33 manfred correct
23:33 forrest it's way better to have like a watch that checks the command for expected output
23:33 manfred yar
23:33 manfred that is what I am writing
23:33 manfred cmd.watch?
23:33 forrest so I can do command: ps | grep service expected: service, so then if it returns blank you're good to go
23:33 forrest how would cmd.watch resolve that?
23:34 nickg dare i even say this? heh
23:34 manfred forrest: i am going to write it
23:34 forrest manfred, gotcha
23:34 nickg i have to write my own cmd function because this thing I have to run doesn't end immediately anyway
23:34 forrest lol
23:34 manfred give it a command to run in a while loop until timeout, and when it returns the correct status, the state ends
23:34 nickg it's background process.  so i can hack a sleep into this i suppose
23:34 manfred unless it hits the timeout
23:34 manfred nickg: meh, gimme a bit
23:35 forrest why is your service so bad nickg :P
23:35 forrest why doesn't it start correctly
23:35 nickg forrest: not mine :)
23:35 nickg https://github.com/saltstack/salt/issues/6691
23:35 forrest nickg, EXCUSES! :D
23:35 nickg im just going to lean on this discussion for that portion
23:35 nickg right now my custom hack was to schedule an at job for the next minute, but that fails when i need batch and delays :)
23:36 forrest yea
23:36 forrest I agree it should be supported
23:36 forrest it's just a lot of edge cases to find for poorly designed services.
23:36 nickg yes
23:36 forrest good products, bad startup/shutdown processes
23:36 nickg the toughest thing to deal with in life is dealing with non-perfect solutions
23:36 forrest yep
23:36 manfred nickg: gimme a bit, my dinner just showed up
23:36 forrest manfred, you're at home already?
23:36 tempspace yo!
23:36 manfred forrest: i am at the bar
23:36 nickg we were discussing this the other day with driving and turns on red
23:36 forrest manfred, ahh
23:37 forrest hi tempspace
23:37 manfred i was looking for something to hack on tonight anyway
23:37 forrest manfred, should hack on something non IT related
23:37 forrest round out dem skills
23:37 nickg manfred: no worries, unfortunately i have to run by the hospital to see the mrs so i wont be back online for a few hours.  if you do have the time to put something together can you msg me soi can read it later tonight?
23:37 manfred nickg: what are you on github?
23:38 forrest nickg, tell your wife not to have panic attacks about poorly designed services, it's ok, there's nothing she can do about it
23:38 manfred i will tag you
23:38 nickg manfred: that was the dicsussion about background processes and cmd.run
23:38 forrest 6691 manfred
23:38 manfred ok
23:38 nickg thats not my thread
23:38 manfred i will tag that
23:40 nickg manfred: i am ngaugler. my only valuable contribution has been to repair the at state module, incidentally because of the need to run atq because of background processes
23:41 manfred heh
23:41 nickg so poorly designed services did serve a good purpose, at least in that case :)
23:44 zandy joined #salt
23:50 helderco joined #salt
23:52 Outlander_ joined #salt