
IRC log for #salt, 2013-09-11


All times shown according to UTC.

Time Nick Message
00:01 sibsibsib_ joined #salt
00:05 ISO8601 joined #salt
00:09 ninkotech joined #salt
00:13 gamingrobot joined #salt
00:21 druonysuse joined #salt
00:21 druonysuse joined #salt
00:23 jaequery joined #salt
00:25 Katafalkas joined #salt
00:37 bemehow joined #salt
00:38 Gwayne joined #salt
00:41 oz_akan_ joined #salt
00:43 superflit joined #salt
00:43 krissaxton1 joined #salt
00:47 Jahkeup joined #salt
00:55 lineman60 joined #salt
00:55 whiteinge craig: sorry, i didn't get you were asking about that IP check in salt-api. adding firing an event to that function should be a ~2 line diff
00:55 whiteinge i just replied to your reply on the ML
00:56 mgw joined #salt
00:57 whiteinge i don't know the state of firing auth events in salt-core, but given your deadline you could just add the one-liner to Login() in salt-api until we can get this in to salt proper
00:59 honestly hey whiteinge
00:59 whiteinge hi
00:59 honestly how do you test salt-formulas? do you have some kind of VM setup?
01:00 whiteinge we've got a jenkins server in place that spins up VMs using salt-cloud
01:00 honestly oh cool
01:00 whiteinge we're planning to perform integration tests on formula just by running each formula and looking for failures
01:01 sibsibsib_ salt cloud is super sick btw :D
01:01 whiteinge :D
01:01 honestly whiteinge: could I set that up locally?
01:02 whiteinge honestly: the pieces are in place to make that happen except for getting formula on and off that server. i'm working on it now
01:03 whiteinge honestly: probably. i don't have a sense of how much work that would be. the script we're using is https://github.com/saltstack/salt/blob/develop/tests/jenkins.py
01:04 bhosmer joined #salt
01:04 honestly okay
01:04 honestly I'll be building a new computer tomorrow
01:05 honestly hopefully all the parts will be good
01:05 honestly and I'll have a blazingly fast machine :D
01:07 whit joined #salt
01:08 whiteinge neat!
01:09 mivv joined #salt
01:10 mivv Hi all, I'm trying out Salt for the first time tonight and having a bit of an issue setting grains. I'm trying to set multiple roles for a server, but when I do salt 'roles:webserver' I'm getting nothing back
01:12 whiteinge mivv: what is the command you're using to set the values?
01:13 mivv I set the values in the /etc/salt/grains file directly on the minion
01:13 mivv when I run salt '*' grains.items it does show up in the roles section
01:13 whiteinge oh, i see
01:14 whiteinge when you target by that grain you need to use the -G flag
01:14 mivv oh awesome!
01:14 mivv thanks so much whiteinge
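The grain setup and -G targeting described above can be sketched as follows; the role names are hypothetical:

```yaml
# /etc/salt/grains on the minion (restart salt-minion after editing)
roles:
  - webserver
  - memcache

# from the master, target the grain with the -G flag:
#   salt -G 'roles:webserver' test.ping
```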
01:16 bhosmer_ joined #salt
01:16 mivv whiteinge: is there a best practice for setting grains? Should I be setting them all from the master?
01:18 whiteinge there are a ton of ways to do it because there are a ton of different use-cases/deployments. i'd recommend going with whatever is the least work™ for you :)
01:18 whiteinge if you want to keep the list entirely on the master, you should look into using Pillar instead
01:19 whiteinge you can match via pillar in a similar fashion
01:19 mivv whiteinge: Okay thanks I will try looking into that now, I'd prefer to have everything on the master
01:21 whiteinge when you get pillar set up with your roles, use salt -I "roles:webserver" to target
01:21 whiteinge http://docs.saltstack.com/topics/pillar/index.html#targeting-with-pillar
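A minimal pillar-based version of the same targeting, kept entirely on the master; the 'web*' minion glob and role names are hypothetical:

```yaml
# /srv/pillar/top.sls
base:
  'web*':
    - roles

# /srv/pillar/roles.sls
roles:
  - webserver

# refresh pillar data, then target from the master:
#   salt '*' saltutil.refresh_pillar
#   salt -I 'roles:webserver' test.ping
```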
01:22 pdayton joined #salt
01:23 mivv whiteinge: Thanks! Trying to get an understanding of this and get my VPS set up with it so that hopefully I can guide them a little at work as we setup new servers there
01:23 whiteinge sounds cool! let us know if you have any questions as you go
01:27 liuyq joined #salt
01:30 copelco i'm not certain of how i'm supposed to use overstate. do i only run that once during the initial provision and then just use highstate afterwards?
01:37 whiteinge copelco: it's entirely up to you. make a bunch of overstate files and use 'em whenever/however it's useful  :)
01:38 whiteinge basically, i'd make an overstate any time i have state files that need to run one-after-the-other
01:38 copelco yeah, that's what i need to setup a web and db server.
01:39 whiteinge yeah, that's a perfect example. another: i could see using an overstate to do a software deployment. distribute tarballs, shut down servers, install tarballs, bring servers back up
01:39 copelco so it's supposed to be used in conjunction with highstate?
01:41 whiteinge it can be used to orchestrate multiple state runs, it can also be used without just by calling functions directly
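A minimal overstate sketch for the web-and-db case discussed above; the globs and sls names are hypothetical:

```yaml
# /srv/salt/overstate.sls
databases:
  match: 'db*'
  sls:
    - postgres

webservers:
  match: 'web*'
  sls:
    - apache
  require:
    - databases

# run the stages in dependency order with:
#   salt-run state.over
```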
01:42 sgviking joined #salt
01:43 krissaxton joined #salt
01:44 copelco huh, ok
01:44 mwillhite joined #salt
02:00 jbunting joined #salt
02:05 sixninetynine joined #salt
02:15 neomaan joined #salt
02:15 neomaan left #salt
02:16 neomaan joined #salt
02:16 neomaan left #salt
02:21 xl1 joined #salt
02:21 racooper joined #salt
02:29 tomtomtomtom joined #salt
02:31 anteaya_ joined #salt
02:32 liuyq joined #salt
02:33 vbabiy Hey, what is the best way to handle this case? In development I install all my services on one machine, so I have a lot of requirements, but when I push it out to production it gets split up onto 4 machines, so I can't have those requirements. Do I need to rewrite those states for each env? Or do I just have to templatize them like crazy?
02:33 josephholsten joined #salt
02:34 forrest vbabiy, I like to split my stuff into dev/test/prod
02:34 vbabiy env for your states?
02:34 forrest right
02:34 forrest breaking the top file out like this: http://docs.saltstack.com/ref/states/top.html
02:35 forrest so basically I might have /apache/init.sls
02:35 forrest and all that does is install apache
02:35 forrest but then I have /test/init.sls, which includes apache, then includes unique hostname configuration etc.
02:36 forrest then you can use those items in your top to just break it out easily
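The layout forrest describes can be sketched like this; the hostname globs and environment names are hypothetical:

```yaml
# /srv/salt/apache/init.sls -- just installs apache
apache:
  pkg:
    - installed

# /srv/salt/test/init.sls -- includes apache, then adds test-only config
include:
  - apache

# /srv/salt/top.sls -- break environments out by target
base:
  'test-*':
    - test
  'prod-*':
    - prod
```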
02:36 forrest depending on how you configure it you can make it really modular
02:36 vbabiy so /apache/init.sls is in the base env?
02:37 forrest it just installs apache in my example
02:37 forrest because our other states (based on environment) add in unique host files, configurations, etc.
02:37 vbabiy okay I think I am understanding
02:38 forrest and you don't have to do it that way if you don't want
02:38 forrest I just hate to repeat writing code
02:38 forrest or the worst case scenario, going back later and trying to fix it because of a config change where things are chained together :\
02:38 vbabiy Same here
02:39 vbabiy extend a state to include a requirement in one env?
02:39 vbabiy like in dev make sure the db is there before talking to it
02:39 forrest right
02:39 forrest you could extend depending on how you have it configured
02:40 vbabiy lol I didn't know extend was a thing, good choice of words :D
02:40 forrest Yea I haven't messed with extend much yet
02:40 forrest there have been some oddities I've noticed from people in the IRC.
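A sketch of extend for the dev case above (make sure the db is up before the app talks to it); the state ids are hypothetical, and extend can only modify ids pulled in via include:

```yaml
# dev/init.sls
include:
  - webapp
  - postgres

extend:
  webapp:
    service:
      - require:
        - service: postgres
```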
02:43 pdayton joined #salt
02:43 deepakmd_oc joined #salt
02:44 vbabiy forrest do you happen to have a good example of using env?
02:45 forrest https://github.com/terminalmage/djangocon2013-sls
02:45 forrest in that example it's not an environment, but a project called foo
02:45 forrest but it's the same general design if that makes sense.
02:45 krissaxton joined #salt
02:45 vbabiy thanks I will look over it
02:46 forrest np
02:47 auser hey all
02:47 auser it's been a while
02:47 forrest hey
02:47 vbabiy never heard of overstate :)
02:48 forrest overstate is awesome
02:48 forrest http://docs.saltstack.com/ref/states/overstate.html
02:52 NV hrm, can i use match: pillar in a a pillar top.sls? (assume that the pillar it's matching on is defined earlier)
02:52 NV I'm guessing not?
02:53 forrest I have no idea
02:53 forrest if you try it let me know
02:55 NV doesn't appear to work, was wondering if there was a trick or something :P
02:55 forrest gotcha
02:57 redmoxie joined #salt
03:01 lvicks joined #salt
03:04 fengyu_ joined #salt
03:05 berto- joined #salt
03:08 Nexpro joined #salt
03:12 vipul joined #salt
03:13 jbunting joined #salt
03:14 MefA joined #salt
03:15 pass_by_value joined #salt
03:18 josephholsten joined #salt
03:19 josephholsten are there any particularly great modules I should check out?
03:32 jheise joined #salt
03:43 oz_akan_ joined #salt
03:48 jheise_ joined #salt
03:52 lynxman joined #salt
03:52 lynxman joined #salt
03:55 xl1 joined #salt
03:56 Furao joined #salt
03:56 \ask hi everyone -- we're making a dashboard for our operations team to do a handful of operations/changes on our clusters. We were thinking of using Salt "underneath the covers" since we use Salt to configure everything already.
03:57 \ask Is the salt-api code ready for use for this sort of thing (with a layer on top we make) or should we just call the salt program inside our HTTP API?
03:57 Furao \ask: I built a salt-cloud webui and inventory in django that does that. and I use salt-api for all communication with salt
04:03 \ask Furao: great, perfect -- that's what I was looking for. Is the new salt dashboard using the API, too, or does it use the python bindings/API?
04:05 Katafalkas joined #salt
04:07 Furao new salt dashboard?
04:17 xl2 joined #salt
04:19 emocakes joined #salt
04:21 xl1 joined #salt
04:25 auser there is one with AngularJS \ask
04:25 auser \ask: https://github.com/saltstack/halite
04:28 Jahkeup_ joined #salt
04:31 mwillhite joined #salt
04:34 efixit joined #salt
04:36 Gareth gg~./w 6
04:36 Gareth erm
04:43 \ask auser: yeah, that's the one I was thinking of. I just wanted to sanity check that salt-api isn't deprecated or something already. :-)
04:43 \ask thanks!
04:43 xl1 joined #salt
04:43 Katafalkas joined #salt
04:44 Katafalkas joined #salt
04:45 josephholsten are there any community formulas that depend on other community formulas?
04:48 josephholsten also, is there a good pattern for testing a formula?
04:48 LarsN joined #salt
04:49 chesty joined #salt
05:01 Ryan_Lane joined #salt
05:09 jaequery joined #salt
05:12 craig whiteinge: saw your email. THANKS! soo easy! love it
05:13 forrest josephholsten, I don't believe any of the community states include other community states, most of them are pretty straightforward in the salt-formulas.
05:13 forrest Regarding testing: http://docs.saltstack.com/ref/states/testing.html
05:14 forrest I don't know if there's something equivalent to puppet parser validate (basic syntax check and such on a per state basis), if there is I haven't seen anyone mention it.
05:17 forrest does that answer your questions josephholsten?
05:20 rberger joined #salt
05:20 forrest oh he left, well what a waste of keystrokes
05:21 rberger left #salt
05:28 andrew_seattle joined #salt
05:29 Katafalkas joined #salt
05:39 berto- joined #salt
05:40 Rorgo joined #salt
05:40 redbeard2 joined #salt
05:52 jalbretsen joined #salt
05:53 nonuby joined #salt
05:53 nonuby joined #salt
05:55 whiteinge \ask: salt-api is stable and the REST API will remain backward-compatible
05:56 whiteinge that said, salt-api is being merged into salt proper (salt-cloud too) so there will be a little code shuffling
05:56 whiteinge the goal is that the only thing the merge will mean for users of salt-api is one less package to install (just salt instead of salt and salt-api)
05:57 whiteinge halite is not using the REST API that salt-api provides, but rather the parts of salt-api that have already been merged into salt core
05:59 whiteinge in other words, halite will not depend on the salt-api package and will ship with salt 0.17
06:00 whiteinge your best bet right now is to program against the REST API that salt-api provides and just keep an eye on the mailing list to stay on top of the merge progress.
06:04 whiteinge craig: the event stuff has changed a lot between 0.16 and (the upcoming) 0.17. if you get stuck, don't hesitate to ask questions
06:06 joehh whiteinge: thanks for that summary - good to know for packaging 0.17
06:06 \ask whiteinge: ah, that makes sense. What's the schedule for having more of the API merged in?
06:07 \ask oh, you just told -- 0.17. Is there a schedule for 0.17? :-)
06:08 whiteinge \ask: i don't know the exact timeline. when it happens the current REST API will be moved into salt verbatim and will keep the exact same API interface, you just won't need to install the salt-api package
06:10 \ask ok, great.
06:10 middleman_ joined #salt
06:10 whiteinge the first 0.17 RC will probably be late this week or next week. the salt-api integration is not ready for public use so i'd stick with salt-api for now
06:10 whiteinge that said, if you want to build on top of halite instead of doing your own thing, definitely wait for 0.17
06:13 whiteinge joehh: i *think* packaging for salt 0.17 will be nearly identical to the current process. the halite project will remain a separate repo but the built files will be included with salt's source-package. hopefully that means packagers will just need to include a few more static files
06:14 tuxIO joined #salt
06:21 abele joined #salt
06:23 joehh great - good to hear
06:28 davidone joined #salt
06:30 NV so, my gift to anyone who uses salt, wishes for a central way to define roles for a host that doesn't require restarting of the salt master, allows the full flexibility of matching that top.sls does, allows you to target based upon said roles and automagically includes any pillar sls files that have the same name as the role (no need to repeat yourself now!) -- http://pastie.org/private/fa3kqtt6xzc6ksttp3uc2g
06:44 josephholsten joined #salt
06:45 josephholsten forrest: still around?
06:45 josephholsten well, I saw your answer in the logs. I appreciate it.
06:52 balboah_ joined #salt
06:52 Koma joined #salt
06:53 aantony joined #salt
06:55 aantony what is the practical difference between config.option and config.get?
06:58 Damoun joined #salt
07:00 sfello joined #salt
07:01 josephho_ joined #salt
07:01 ml_1 joined #salt
07:02 josephho_ anyone used the npm.bootstrap state?
07:05 vaxholm joined #salt
07:12 ggoZ joined #salt
07:12 linjan joined #salt
07:14 olaf38 joined #salt
07:15 nonuby joined #salt
07:22 bejer joined #salt
07:25 HumanCell1 joined #salt
07:26 HumanCell1 joined #salt
07:28 renoirb joined #salt
07:28 stefw` left #salt
07:29 bhosmer joined #salt
07:38 jpcw joined #salt
07:40 bdf does the maintainer of the freebsd port of salt or pyzmq lurk around here by chance?
07:42 krissaxton joined #salt
07:42 xl1 joined #salt
07:44 ronc joined #salt
07:47 auser joined #salt
07:55 az87c joined #salt
07:56 bemehow joined #salt
07:56 az87c_ joined #salt
08:16 nonuby joined #salt
08:35 ml_1 joined #salt
08:35 Ryan_Lane joined #salt
08:53 bitmand_ left #salt
09:01 mech422 joined #salt
09:03 mech422 Hi all - just want to make sure I'm not missing something: file.recurse is incapable of preserving owner/perms when transferring files to a minion?  So I guess I'd have to use file.managed with each individual file ?
09:06 xl1 You can specify user/dir_mode/file_mode in the state
09:07 mech422 xl1: right - but thats global to all the files in the directory right ? you can't for instance have some stuff 0755 and some stuff 0600 (for say secret keys that should be readable)
09:07 fredvd joined #salt
09:08 mech422 err - that should read "secret keys that should NOT be readable"...
09:09 Ryan_Lane joined #salt
09:10 xl1 I guess so. Maybe you can define 2 file.recurse states, one for the regular files and one for secret ones, both target the same directory, with separate source
09:11 mech422 oh - that would be nicer then doing each file! thanks!
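xl1's suggestion as a sketch: two file.recurse states with distinct ids can point at the same target directory, each with its own source tree and modes. The paths and modes here are hypothetical:

```yaml
myapp-public:
  file.recurse:
    - name: /etc/myapp
    - source: salt://myapp/public
    - dir_mode: 755
    - file_mode: 644

myapp-secrets:
  file.recurse:
    - name: /etc/myapp
    - source: salt://myapp/secret
    - dir_mode: 700
    - file_mode: 600
```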
09:18 vaxholm joined #salt
09:27 tuxIO joined #salt
09:30 bemehow joined #salt
09:37 mech422 left #salt
09:39 derelm joined #salt
09:43 kamal_ I'm trying to add a salt minion (fresh server), but it exits with "[CRITICAL] The Salt Master has rejected this minion's public key!"
09:43 kamal_ I haven't rejected the key and there's nothing in /etc/salt/pki/master/minions_rejected (salt-master)
09:50 fredvd joined #salt
10:01 felixhummel joined #salt
10:08 pdayton joined #salt
10:09 xl1 joined #salt
10:10 pdayton1 joined #salt
10:14 Katafalk_ joined #salt
10:19 lemao joined #salt
10:21 krissaxton joined #salt
10:22 kamal_ Huh, weird. I changed the hostname and now it works fine.
10:24 vaxholm joined #salt
10:34 xl1 joined #salt
10:49 TheCodeAssassin joined #salt
10:54 krissaxton joined #salt
10:54 jbunting joined #salt
10:54 pdayton joined #salt
10:55 TheCodeAssassin joined #salt
10:57 durnik joined #salt
10:58 bemehow_ joined #salt
11:00 ronc joined #salt
11:02 ninkotech joined #salt
11:03 jslatts joined #salt
11:06 bhosmer joined #salt
11:09 ronc joined #salt
11:10 Ryan_Lane joined #salt
11:21 bhosmer joined #salt
11:32 copelco joined #salt
11:34 aptiko "State file.copy found in sls ... is unavailable". I'm using Salt 0.16.4.
11:46 jbunting joined #salt
11:46 sibsibsib_ joined #salt
11:46 joehh aptiko: did the same sls file work previously?
11:47 aptiko joehh: No, it has never worked, it's the first time I'm trying it (after I added the file.copy, that is).
11:49 joehh I'll fire up some vms and try myself
11:49 Furao 0.16.4?
11:51 joehh yeah 0.16.4 has been made available for packaging - it is released for some distros/oses where salt has less control over when things get uploaded
11:51 Furao changelog?
11:51 joehh it hasn't been announced formally yet - still waiting on a couple oses and an issue to be finalised in debian
11:53 joehh re changelog, just bugfixes as far as I am aware - the diffs are all pretty small and minor
11:53 carlos_ joined #salt
11:54 aptiko joehh, Furao: I'm using the Debian packages.
11:57 joehh thought so - that is one we've traditionally had less control over timing - though things are now much quicker
12:02 mwillhite joined #salt
12:06 Furao joined #salt
12:09 joehh aptiko: I get the same...
12:09 aptiko joehh: Meanwhile I've worked around by using cmd.run.
12:10 aptiko file.copy is a bit more elegant, but otherwise there isn't much it does you can't do with cmd.run.
12:11 aptiko Ah, it's more portable.
12:14 joehh aptiko: it doesn't appear in the code. Checking grep ^def /usr/lib/python2.7/dist-packages/salt/states/file.py
12:14 joehh it doesn't appear
12:16 Furao joined #salt
12:17 joehh checking specific docs (https://salt.readthedocs.org/en/v0.16.3-0/ref/states/all/salt.states.file.html#module-salt.states.file), it is not in 16.3/4
12:17 joehh it is in https://salt.readthedocs.org/en/latest/ref/states/all/salt.states.file.html#module-salt.states.file
12:17 joehh which i think is generated from develop
12:17 joehh other interesting link is https://github.com/saltstack/salt/commit/4251341a7b2dd5f80a2da0afb1554c584c2db692
12:18 oz_akan_ joined #salt
12:18 joehh which suggests it is a feature added a month ago, so in the 0.16.x timeframe and unlikely to be included
12:18 uomobonga joined #salt
12:18 joehh good to see you've worked around
12:18 aleszoul3k joined #salt
12:21 ronc joined #salt
12:24 unicoletti_ joined #salt
12:24 blee joined #salt
12:25 nielsbusch joined #salt
12:26 Furao joined #salt
12:27 fredvd joined #salt
12:30 aptiko salt-call --local state.highstate   <- When I run this I don't know if there was an error, unless I scroll up and try to find any red text. The green stuff always appears regardless of --log-level. I wonder whether I can work around that.
12:32 aptiko Hmm, >/dev/null doesn't help either; the red message goes to stdout as well.
12:33 aptiko Maybe I'll try salt-call --local state.highstate || echo "There has been an error"
12:33 nonuby joined #salt
12:33 nonuby joined #salt
12:33 joehh There is a verbose_somthingorother option in the master config file
12:33 xl1 There is state_verbose and state_output in configuration
12:34 joehh you don't need to restart the master after changing it
12:34 joehh xl1: that is a more accurate recollection than mine :)
12:34 aptiko salt-call --local state.highstate || echo "There has been an error"  <- Doesn't help either. salt-call returns 0 all the time.
12:35 copelco joined #salt
12:36 aptiko joehh: Yes, state_verbose=False did it, thanks.
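The options mentioned above, as they would appear in the master config (or the minion config, when running salt-call --local):

```yaml
# /etc/salt/master
state_verbose: False   # only report states that failed or made changes
state_output: terse    # one summary line per state instead of the full block
```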
12:37 Furao joined #salt
12:39 aleszoulek joined #salt
12:45 LLckfan joined #salt
12:47 Furao joined #salt
12:47 mastrolinux joined #salt
12:48 mastrolinux I have changed the server for my salt-master while preserving the same hostname and changed DNS
12:48 mastrolinux now my minion are no more able to contact the new master unless I manually restart the salt-minion service
12:49 mastrolinux why is this happening? does the minion daemon retry the dns resolution or not?
12:49 mastrolinux I cannot restart the agent on 100 nodes :( I needed to write a fabric script for it, this is annoying. solutions?
12:50 LLckfan I have a friend who was without power for three days and even though their power is back on she thinks her underarms smell. Is there anything that someone would have in a house that she can use to get rid of the smell?
12:52 StDiluted joined #salt
12:56 oz_akan_ joined #salt
12:57 pdayton joined #salt
12:57 Furao joined #salt
12:57 oz_akan_ joined #salt
12:58 brianhicks joined #salt
13:04 Gifflen joined #salt
13:05 mapu joined #salt
13:06 anteaya_ joined #salt
13:07 juicer2 joined #salt
13:07 Furao joined #salt
13:10 jslatts joined #salt
13:10 mgw joined #salt
13:12 Jahkeup joined #salt
13:17 Furao joined #salt
13:20 tuxIO joined #salt
13:22 cnelsonsic joined #salt
13:25 qba73 joined #salt
13:26 jeff__ joined #salt
13:27 racooper joined #salt
13:27 Furao joined #salt
13:27 toastedpenguin joined #salt
13:29 krissaxton joined #salt
13:31 redbeard2 joined #salt
13:33 mastrolinux LLckfan: I mean well architected solutions in client side, not just external fixes
13:33 matanya joined #salt
13:44 m_george|away joined #salt
13:44 kaptk2 joined #salt
13:45 Furao joined #salt
13:46 jaequery joined #salt
13:55 Furao joined #salt
13:56 pakdel joined #salt
13:56 pakdel Hi all
13:56 mgw joined #salt
13:56 pakdel Does anyone know where should I put my custom runner module?
13:57 pdayton left #salt
13:57 pakdel I would like to avoid polluting /usr/lib/python2.6/site-packages/salt/
13:57 bemehow joined #salt
13:58 bemehow_ joined #salt
13:59 pakdel sorry guys... found it: runner_dirs: []
14:06 jhermann joined #salt
14:07 mgw joined #salt
14:09 kermit joined #salt
14:10 alunduil joined #salt
14:10 [diecast] joined #salt
14:14 imaginarysteve joined #salt
14:20 gildegoma joined #salt
14:24 avienu joined #salt
14:24 krissaxton joined #salt
14:25 danielbachhuber joined #salt
14:29 pmrowla joined #salt
14:30 EugeneKay mastrolinux - did you copy the master keys?
14:30 mohae joined #salt
14:31 tyler-baker joined #salt
14:33 mgw joined #salt
14:36 teskew joined #salt
14:36 mastrolinux yes
14:37 EugeneKay I would think it would reconnect to the new DNS record, but I don't know for sure
14:37 EugeneKay APparently not :-p
14:39 micah_chatt joined #salt
14:39 UtahDave joined #salt
14:41 honestly pah, gitlab.
14:42 honestly they want me to include tests with my pull request.
14:43 micah_chatt_ joined #salt
14:44 micah_chatt joined #salt
14:45 mannyt joined #salt
14:50 abe_music joined #salt
14:50 jalbretsen joined #salt
14:50 backjlack joined #salt
14:58 GradysGhost joined #salt
15:03 mastrolinux left #salt
15:03 Jahkeup joined #salt
15:04 jpeach joined #salt
15:05 jaequery joined #salt
15:08 lineman60 joined #salt
15:09 ckao joined #salt
15:12 austin987 joined #salt
15:14 JoAkKiNeN joined #salt
15:15 tuxIO_ joined #salt
15:16 dave_den joined #salt
15:16 jbunting joined #salt
15:22 bemehow joined #salt
15:22 whit joined #salt
15:26 xmj pah, testing
15:29 opapo joined #salt
15:31 mgw Is it possible to ensure a certain state has been applied before anything else, without having to explicitly say so in everything else? I need to add an apt-proxy before any packages are installed.
15:31 UtahDave - order: 1
15:31 forrest joined #salt
15:32 mgw UtahDave: thanks, I was trying to find the docs on order
15:34 UtahDave yep!
15:36 mgw ah, here it is: http://docs.saltstack.com/ref/states/ordering.html
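A sketch of the order: 1 approach for the apt-proxy case above; the source path is hypothetical:

```yaml
# apt-proxy.sls -- order: 1 makes this state run before everything else
/etc/apt/apt.conf.d/01proxy:
  file.managed:
    - source: salt://apt/01proxy
    - order: 1
```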
15:37 djn I guess I am seeing a quite severe bug: https://paste.selfnet.de/q1o/ this happened trying to upgrade to 0.16.4, and all minions that had this happen lost connection to their master.. can somebody confirm this behaviour?
15:38 djn and by lost connection I mean that they are not even being reconnected -> I have to ssh all hosts manually
15:39 forrest what happens when you try running that manually on the box
15:39 UtahDave djn: what os?
15:39 unicoletti_ joined #salt
15:40 devinus joined #salt
15:40 felskrone hey dave, long time no see :-)
15:40 UtahDave hey, felskrone!
15:40 UtahDave How was your holiday?
15:40 m_george left #salt
15:41 mmilano joined #salt
15:41 felskrone pretty good, lots of sun, lots of sand, a little sun-burn here and there but thats a given with my geeky-white skin :-)
15:42 felskrone i was wondering if you have any comments regarding this: https://github.com/felskrone/salt-eventsd
15:42 djn UtahDave: debian wheezy
15:42 mapu joined #salt
15:42 felskrone thomas stopped answering on the mailing-list a while ago, cant tell if he is/was interested or not
15:43 djn UtahDave: on a host I use to experiment with another salt-master, it seems the salt-master package was upgraded (didn't use the --just-print there) before it broke, but salt-common was not
15:44 forrest did you restart the salt-master djn?
15:45 djn forrest: which one? the main one or the experimentation one with the now unsatisfied dependency?
15:45 alexandrel is there a way to tell minions: restart your salt-minion?
15:46 UtahDave felskrone: that looks really cool!
15:47 forrest the one with the deps issue
15:47 UtahDave felskrone: I haven't dug too deep yet, but that is looking awesome.
15:47 djn yes, thats how I noticed it was garbled... fixed it now using apt directly
15:47 joehh djn: https://github.com/saltstack/salt/issues/7151
15:48 UtahDave djn: joehh has been updating the debian packages recently.  He just cut a second release to fix this issue.
15:48 forrest alexandrel, salt 'host' service.restart <service name>
15:48 Furao joined #salt
15:48 alexandrel forrest: thanks
15:48 forrest np
15:48 forrest here's the docs if you're curious http://docs.saltstack.com/ref/modules/all/salt.modules.service.html#module-salt.modules.service
15:49 UtahDave felskrone: Yeah, sorry about that. With all the crazy growth we've been having Tom is way behind on his email
15:49 gmoro joined #salt
15:49 forrest Just chain him to his desk UtahDave
15:50 forrest or find him a secretary to start screening email
15:50 djn joehh: not cool :/ UtahDave, when will the ssh transport finally make it into a release so I can fix stuff like that without visiting every host?
15:50 UtahDave djn: yes, in 0.17
15:50 felskrone UtahDave: i dont blame him at all, im working on it anyways because we need it here. but i'm aiming at getting it integrated into salt and for that i need comments, criticism, whatever applies from your (saltstack's) point of view :-)
15:51 UtahDave djn: I'd also recommend testing upgrades on one or two systems first
15:51 djn joehh: also notable that you do not actually have to upgrade it, --just-print breaks it too...
15:52 UtahDave felskrone: that's cool. I'll try to get Tom to look at it. I'm sure he won't be able to today though.
15:52 joehh not sure what you mean by --just-print
15:53 djn UtahDave: yeah, learned that the hard way now... Anyways, it's kind of frustrating that every time I start to rely on salt something goes wrong/unstable
15:54 alexandrel djn: it's not even @ version 1 yet :)
15:54 djn joehh: calling apt-get with that options, it's the same as doing --dry-run, -s or something
15:55 felskrone UtahDave: dont rush him too much :-), as i said, im working on it anyways and think, that some users might actually like it :-)
15:55 joehh djn: apologies there - changes were made to the init script to ensure that when /etc/init.d/minion stop was called, all processes were stopped
15:55 UtahDave djn: I'm sorry that's been frustrating. We're working to iron those things out.  joehh has been doing an awesome job getting salt debian packaging working well.
15:55 joehh This was tested fairly extensively, but I missed testing on the case where upgrades were done within salt
15:57 djn alexandrel: I noticed ;P people are somehow using it on a really large scale and trust in it, so I tend to feel kind of safe with only ~40 hosts
15:58 joehh djn: was that done via cmd.run? just trying to understand the way people try to upgrade salt to add to testing scenarios
16:00 nebuchadnezzar joined #salt
16:01 unicoletti_ left #salt
16:01 QauntumRiff joined #salt
16:02 djn joehh: https://paste.selfnet.de/q1o/ this is the error log from before, the command in there [apt-get --just-print dist-upgrade] was run using cmd.run
16:02 djn while matching by grain, but I don't think that matters
16:06 joehh djn: that is odd, had you run any commands before
16:06 joehh I could see that occuring if you had tried doing an upgrade via salt then
16:07 joehh restarted minion, then ran --just-print, but otherwise, I don't see how you would have arrived there
16:07 joehh is that roughly what occurred?
16:08 djn joehh: I have to ask the one who actually run it about that, a minute
16:08 QauntumRiff for pillars, is it possible to use them in a config file to pass to the client? for example, I want to setup fail2ban with custom ports, where it might be none, or might be many.  can I do this with the /etc/fail2ban/jail.local file, and push it out?  http://pastebin.com/6ZxXLbxU
16:08 vaxholm joined #salt
16:09 QauntumRiff i've only ever used pillars in the state files before
16:09 EugeneKay Sure, you can do that.
16:09 djn btw, if there are known issues with upgrades, I suggest you point to it on the release notes page ;) would have saved me some time ;)
16:09 jdenning joined #salt
16:10 tuxIO joined #salt
16:10 UtahDave QauntumRiff: yep!
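A sketch of the pillar-driven jail.local idea; the pillar keys and ports are hypothetical, and the jinja template is rendered on the minion against that minion's pillar data:

```yaml
# /srv/pillar/fail2ban.sls
fail2ban:
  ssh_ports:
    - 22
    - 2222

# /srv/salt/fail2ban/init.sls
/etc/fail2ban/jail.local:
  file.managed:
    - source: salt://fail2ban/jail.local
    - template: jinja

# salt://fail2ban/jail.local (the jinja template itself):
#   {% if pillar.get('fail2ban', {}).get('ssh_ports') %}
#   [ssh]
#   enabled = true
#   port = {{ pillar['fail2ban']['ssh_ports']|join(',') }}
#   {% endif %}
```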
16:10 Lue_4911 joined #salt
16:11 joehh djn: In most cases, we try to co-ordinate releases and have notes on the release notes
16:11 UtahDave djn: we just got the packages in place and are going to announce the release today.  You got caught in between.  sorry about that.
16:11 joehh in the past, we have not had much control over the timing for debian. Updates got deployed much quicker than previously
16:12 HumanCell joined #salt
16:12 joehh Hopefully we can co-ordinate better next time
16:12 jetblack joined #salt
16:13 djn UtahDave: ah okay, unlucky as always ;) what do you mean, packages in place? for 0.17? because apticron has been bugging me about 0.16.4 since monday
16:14 juicer2 joined #salt
16:14 djn joehh: okay, I just talked with the guy who ran the command, and he only ran [alt -G 'virtual:kvm' cmd.run 'apt-get upgrade -y'
16:14 djn ]
16:15 joehh djn: That would trigger the issue
16:16 djn s/alt/salt/ so it seems it is a salt thing to run the command with --just-print? because the minions' logs indicate just that... and the apt db lock comes from the other apt command? can you make sense of that?
16:16 joehh I'm not familiar enough with that bit of the code to comment yet
16:18 joehh regarding the timing, I pushed the update to debian.saltstack.com on monday, expecting an announcement about then
16:18 joehh This issue was identified a day or so later and we've been trying to identify the best way to deal with it and co-ordinate the
16:18 joehh announcement since
16:19 joehh I apologise for not letting debian users know directly in advance - I should have sent the email I sent a little while ago as soon as possible
16:21 ggherdov joined #salt
16:21 ggherdov joined #salt
16:23 UtahDave djn: I'd also recommend a testing environment before upgrading production servers.
16:23 KyleG joined #salt
16:23 KyleG joined #salt
16:24 redondos joined #salt
16:24 djn joehh: okay, maybe I'll look at it this evening, could be this is another thing... I shouldn't have updated so eagerly I guess ;D thanks for your work
16:24 joehh djn: a quick review of the code indicates the just-print thing is part of salt's process
16:25 joehh for upgrading
16:26 mgw from a custom runner, what would be the best way to monitor for when a minion comes online?
16:27 UtahDave mgw: I think there's a tag that gets sent up the event bus when a minion comes online
16:28 mgw any idea how I can get at that from a runner?
16:29 mgw UtahDave: ^
16:29 djn UtahDave: yeah, as mentioned before we now have another testing master, but we didn't put that to use yet
16:29 UtahDave Hm. well, you'd have to have it listen on the event bus.  Are you having the runner reboot the minion?
16:29 UtahDave djn: cool
16:29 mgw UtahDave: yes, it's actually booting the minion for the first time
16:30 mgw I'm looking for a better way than waiting 5s and then trying test.ping
16:31 UtahDave Yeah, you can listen on the event bus for the auth tags from the minion. I've never tried it from a runner, but I'm guessing it would be possible
16:31 UtahDave mgw: have you looked here?  http://docs.saltstack.com/topics/event/index.html
16:32 mgw no, that should do it though
16:32 mgw thanks!
16:33 UtahDave yep!
16:37 troyready joined #salt
16:43 djn is there a known problem with cp.get_file? it isn't working for me right now (but did, in a prior version)
16:44 djn salt host cp.get_file salt:///etc/fstab /tmp
16:47 micah_chatt I just put in a PR adding documentation to the py renderer, https://github.com/saltstack/salt/pull/7180
16:47 micah_chatt does anyone have any suggestions on it?
16:49 tomviner joined #salt
16:49 Thiggy joined #salt
16:50 mapu joined #salt
16:50 ipmb joined #salt
16:53 craig whiteinge: i have noticed that a number of "tag" events are not being fired
16:54 QauntumRiff for jinja templates for files (ie, fail2ban, httpd.conf, etc) are they rendered on the master, or minion?
16:55 QauntumRiff ie, does it send the template to the minion, and have it fill it out, or does it parse it, and then send it?
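For the record, file templates like these are rendered on the minion: the minion fetches the raw template from the master's file_roots and fills it in locally against its own grains and pillar data. A minimal sketch of such a state (the paths and file names here are illustrative):

```yaml
# Illustrative only: jinja rendering of this template happens on the
# minion, using that minion's own grains/pillar values.
/etc/fail2ban/jail.local:
  file.managed:
    - source: salt://fail2ban/jail.local.jinja
    - template: jinja
```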
16:55 craig i am guessing that the support is pretty basic in 0.16.3
16:56 berto- joined #salt
16:59 baniir joined #salt
16:59 juicer2 joined #salt
17:01 whit joined #salt
17:02 mapu joined #salt
17:02 g4rlic So, interesting question..  When I install bind via salt, the named user and group isn't created.  When I install it manually via yum install, it is..
17:02 g4rlic Does salt do something that would tell it not to run any scripts inside the RPM?
17:04 mgw joined #salt
17:04 Thiggy Can you pass arguments to runners?
17:05 Thiggy e.g., salt-run mything.do_stuff arg1
17:06 Thiggy derp, you super can: https://github.com/saltstack/salt/blob/develop/salt/runners/jobs.py#L54
17:09 jeffmendoza joined #salt
17:15 mapu joined #salt
17:15 juicer2 joined #salt
17:16 superflit joined #salt
17:25 Ryan_Lane joined #salt
17:25 devinus joined #salt
17:27 Ryan_Lane1 joined #salt
17:28 QauntumRiff my pillar/jinja problems are solved, when I fix syntax errors in my template, imagine that :)
17:30 KennethWilke joined #salt
17:30 tuxIO joined #salt
17:34 bhosmer_ joined #salt
17:36 Ryan_Lane joined #salt
17:46 josephholsten joined #salt
17:51 rorski joined #salt
17:51 Ahlee mission accomplished. Bosses are simultanously in love with and terrified of saltstack
17:54 jpeach is overstate the right saltstack equivalent to puppet stages?
17:55 jpeach I want to set up yum repositories prior to running any other states ...
17:55 baniir joined #salt
18:01 druonysus joined #salt
18:01 druonysus joined #salt
18:02 jpeach actually, it looks like the order option is the right way?
18:07 pdayton joined #salt
18:07 forrest jpeach, why don't you just use a require
18:07 nebuchadnezzar joined #salt
18:08 jpeach forrest: because then I would have to manage a require for every pkg I want to install ( and know which repo to require)
18:08 [diecast] joined #salt
18:08 forrest oh I thought you meant you just wanted to set up the repo itself
18:08 jpeach that might work if I could require_in to pkg: *
18:08 devinus joined #salt
18:08 jpeach no, I want to guarantee that the yum repo is configured before any package installation takes place
18:09 pdayton joined #salt
18:10 forrest ok, so you could populate a giant list of the rpms, then loop through it with a pkg.installed, and a - require on the repo
18:10 jpeach I think ordering is the right way? http://docs.saltstack.com/ref/states/ordering.html
18:11 jpeach set order:1 on the repo states?
18:11 forrest auto_ordering isn't in till 0.17
18:12 jpeach auto ordering would only help if everything was in one giant sls file though
18:12 forrest right
18:14 forrest so what I'd do is {% set all_rpms = salt['pillar.get']('base_rpms') %}
18:15 jpeach hmm, the order option appears to work correctly in 0.16.3
18:16 jpeach if I see it go south, then maybe I'll hit it with something heavy like that :)
18:16 jpeach forrest: thanks!
18:16 forrest then require_in: {% for rpm in all_rpms %} - pkg: {{ rpm }}
18:16 forrest and then in pillar you populate the list
18:16 forrest Cool, if it works it works
18:16 forrest but it's pretty easy to do something like what I listed, you just have to populate that info in pillar
18:17 forrest obviously with proper indentation/line formatting
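Written out with proper indentation, the two approaches discussed above would look roughly like the following; the repo details and the pillar key `base_rpms` are assumptions taken from the conversation, and either mechanism alone is sufficient — they are shown together only for comparison:

```yaml
{% set all_rpms = salt['pillar.get']('base_rpms', []) %}

# jpeach's approach: order: 1 makes the repo state run before any
# state that has no explicit order.
base-repo:
  pkgrepo.managed:
    - humanname: Base repo
    - baseurl: http://repo.example.com/el6/x86_64/
    - gpgcheck: 0
    - order: 1
{% if all_rpms %}
    # forrest's alternative: require_in the repo into every pkg state
    # generated from the pillar list.
    - require_in:
{% for rpm in all_rpms %}
      - pkg: {{ rpm }}
{% endfor %}
{% endif %}

{% for rpm in all_rpms %}
{{ rpm }}:
  pkg:
    - installed
{% endfor %}
```

with pillar providing the list, e.g. `base_rpms: [bind, bind-chroot]`.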
18:21 g4rlic any clues on why salt may not execute scripts in rpms?
18:21 forrest g4rlic, so the script runs when you do a yum install on the box, but not when you install it from salt?
18:21 ml_1 joined #salt
18:21 g4rlic forrest: yep
18:22 g4rlic Wanna see a pastebin with the output and init.sls for that state?  Because I'm stumped.
18:22 forrest that seems weird, I thought that salt just handed the process off to the server. Can you try to run it through with debug?
18:22 g4rlic Salt-call has a switch that will give me even more output?  Sweet. Let me see what that does.
18:23 forrest yea if it's on a masterless minion just do salt-call --local state -l debug
18:23 forrest if it's all local that is
18:26 imaginarysteve joined #salt
18:26 g4rlic well it's sort of local
18:26 g4rlic in the sense that the master is configured by its own minion and selector.
18:27 forrest oh I see
18:38 freelock joined #salt
18:39 pipps joined #salt
18:41 dave_den joined #salt
18:44 auser joined #salt
18:47 TheCodeAssassin joined #salt
18:47 freelock Hi,
18:48 freelock I have an EC2 instance that I would like to start up and stop on a schedule, was thinking of using salt-cloud for this
18:49 freelock All the docs I'm seeing are to use salt-cloud to provision machines (which we'll want to do next), but I'm not finding a quick how to get salt-cloud aware of existing ec2 instances
18:49 freelock is there a resource written up on how to do this quickly?
18:49 pipps joined #salt
18:50 DanGarthwaite joined #salt
18:52 freelock also, I installed salt-cloud package on ubuntu, do I need anything in /etc/salt/cloud, or can I just drop configs into /etc/salt/cloud-providers.d/*.conf?
18:55 baniir joined #salt
19:06 jpeach joined #salt
19:07 [diecast] joined #salt
19:08 g4rlic forrest: http://pastebin.centos.org/4286/ <-- this is what it *should* look like when you install named.
19:15 g4rlic forrest: and this is what it looks like after using Salt: http://pastebin.centos.org/4291/
19:15 g4rlic I get the feeling I'm doing something wrong in the sls.
19:15 g4rlic it's complaining about a group not being available.
19:15 g4rlic Which, it's not..  when bind gets installed from Salt, it's done without creating the named user and group, as it does normally with yum.
19:15 g4rlic That's why I was asking if salt did anything strange with its invocation of yum install.
19:17 aleszoulek joined #salt
19:17 forrest that's really weird, I'm not sure why it would do that
19:18 lawlquist left #salt
19:19 forrest g4rlic
19:19 forrest can you try to just do an install from salt via the commandline?
19:22 baniir joined #salt
19:24 freelock how would I specify an existing ec2 instance to manage via salt-cloud (e.g. for the "start", "stop" functionality?)
19:24 forrest just a salt 'host' pkg.install ["bind-chroot", "bind"]
19:24 tuxIO joined #salt
19:24 forrest I don't know offhand freelock
19:25 cron0 joined #salt
19:26 sfello joined #salt
19:26 QauntumRiff a few weeks ago, when I would add a new minion, I would push a minion config file, and the service would restart.  I would then have to run state.highstate again.  Now, with 16.3, it doesn't seem to be timing out.  the master thinks the minion is still running it, but the minion doesn't seem to care
19:27 QauntumRiff is there something new I need to do?
19:29 QauntumRiff salt-minion:
19:29 QauntumRiff pkg:
19:29 QauntumRiff - installed
19:29 QauntumRiff service:
19:29 QauntumRiff - running
19:29 QauntumRiff - watch:
19:29 QauntumRiff - file: /etc/salt/minion.d/logging.conf
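QauntumRiff's paste, with the YAML indentation that IRC stripped restored (the file state for logging.conf was presumably in a part of the sls that wasn't pasted):

```yaml
salt-minion:
  pkg:
    - installed
  service:
    - running
    - watch:
      - file: /etc/salt/minion.d/logging.conf
```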
19:29 bhosmer joined #salt
19:30 pipps joined #salt
19:34 ronc joined #salt
19:34 zach Has anyone done their config files into a DB?
19:34 QauntumRiff zach: I keep thinking it would be much less typing to figure that out.  and easier to build a web front end.
19:34 EugeneKay Define "DB". A git repo is technically a database
19:35 zach QauntumRiff: that's what I'm toying with...for user management
19:35 EugeneKay Do you mean pulling Pillar data from a database?
19:35 zach type username, generate homedir, generate password hash, and select from a series of servers with a checkbox, push to servers
19:36 QauntumRiff You mean you don't like hand editing 20 different text files? What? :)
19:36 zach exactly
19:36 zach Dynamically generated state files
19:36 zach or even a hook that connects to a DB
19:36 QauntumRiff that would probably make you a hero.
19:36 EugeneKay Sounds like you want to build a small script to put it into your pillar data ;-)
19:37 EugeneKay And run a state.sls against the modified servers
19:37 zach kind of
19:37 EugeneKay I'm pretty sure there's a way to get Pillar data from a SQL DB, but I've not toyed with it. I find the YAML layout to be plenty good for my needs.
19:37 jwon left #salt
19:37 zach So the idea was to have an interface, either web or shell ... that our CSO can just update for when we onboard/term employees
19:38 QauntumRiff EugeneKay: it is.. but it can be very repetitive
19:38 QauntumRiff ie, add one user to 20 servers, out of the 150.
19:38 QauntumRiff or add some new pillar data to 30 different servers
19:38 zach yea, I do it really hacky right now. I do if pillar['localhost'] == blah or pillar['localhost'] == blah2
19:38 zach in one really large state file
19:39 zach and change the users shell from sbin/nologin to /bin/bash
19:39 zach it's gross
19:39 zach and a nightmare to manage
19:39 jbunting joined #salt
19:39 QauntumRiff for users, I just use ldap, but there are other places that could be nice
19:39 djn you can use the salt tops system, there is support for externel encs like cobbler and also for mongodb
19:39 zach that's a WIP for us
19:39 abe_music joined #salt
19:40 zach We can't change how we authenticate until our audit is done
19:40 zach we're right smack in the middle of it
19:40 QauntumRiff although, as a short term fix, you could do targeting.  IE, in your top.sls, have 'roles:prodDB' - match: pillar - users.prodDBUsers
19:40 QauntumRiff or something like that
19:41 EugeneKay Many:many mappings are FUN!
19:41 QauntumRiff so you can add user bob to all production db servers, but not web
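QauntumRiff's pillar-targeting suggestion, written out as a top.sls fragment (the pillar key and sls name are illustrative):

```yaml
# top.sls: apply users.prodDBUsers only to minions whose pillar
# contains roles: prodDB
base:
  'roles:prodDB':
    - match: pillar
    - users.prodDBUsers
```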
19:41 mapu joined #salt
19:47 mannyt joined #salt
19:48 nu7hatch joined #salt
19:49 nu7hatch what's the best way to share utility functions between multiple modules, or better between modules and grains?
19:49 nu7hatch i have a redis connection creator that wanna use to load grains and to write some custom modules around
19:53 SEJeff_work nu7hatch, Create a utility module :D
19:54 copelco i'm running into some difficulty with overstate and salt mines. basically i want to have access to the DB ip address so the web server can connect to it. but for some reason when i run overstate, the pgbouncer.ini file is empty: https://gist.github.com/copelco/6528865 -- any ideas?
19:55 jbunting joined #salt
19:55 copelco by empty, i mean, it just prints "; hiii"
19:56 whit joined #salt
19:56 nu7hatch SEJeff_work: how can I do that? just adding a module under _utilities? and by what name can i import it? docs say nothing about it :/
19:56 SEJeff_work nu7hatch, Honestly, I'd be tempted to just make a small python library and distribute that to my servers or something
19:56 SEJeff_work if you want to share it between modules AND grains,
19:57 nu7hatch that's slightly overkill
19:57 SEJeff_work I don't think what you're asking to do is really supported cc: terminalmage
19:57 StDiluted joined #salt
19:58 nu7hatch i have just one function to share, well 2, _redis_or_die() and _key(), first gives me a redis instance, pings (and eventually raises error), second builds a key name based on some grains
19:58 nu7hatch building a python lib for that is waay too much
19:59 ronc joined #salt
20:00 Jahkeup joined #salt
20:01 baniir how can i use states.mysql_grants to grant permissions to create databases
20:04 imaginarysteve joined #salt
20:08 jaequery joined #salt
20:13 opapo joined #salt
20:14 rgarcia_ joined #salt
20:14 qba73 joined #salt
20:16 forrest any luck g4rlic
20:18 * terminalmage materializes
20:18 terminalmage so what's up?
20:18 forrest I feel like there wasn't enough fanfare for your appearance
20:18 forrest no billowing smoke
20:18 terminalmage haha
20:18 SEJeff_work *fireworks*
20:19 SEJeff_work \o\
20:19 SEJeff_work |o|
20:19 SEJeff_work /o/
20:19 forrest yea there we go
20:19 SEJeff_work the wave!
20:19 terminalmage nu7hatch: what exactly are you looking to do?
20:19 terminalmage hehe
20:19 mapu joined #salt
20:19 pdayton left #salt
20:19 SEJeff_work <nu7hatch> what's the best way to share utility functions between multiple modules, or better between modules and grains?
20:19 terminalmage was on the way back from a late lunch
20:19 SEJeff_work terminalmage, ^^
20:19 SEJeff_work <nu7hatch> i have just one function to share, well 2, _redis_or_die() and _key(), first gives me a redis instance, pings (and eventually raises error), second builds a key name based on some grains
20:19 SEJeff_work terminalmage, ^^
20:20 terminalmage yeah I saw that... he basically wants a function in a "common" module
20:20 ccase joined #salt
20:20 terminalmage I'm thinking the best way is to write that and put it in the pythonpath
20:20 terminalmage and import it
20:20 forrest baniir, I'm not sure the myslq.grants docs only shows select, insert, update, or all privileges
20:20 forrest and I don't know what 'all' includes, would need to look at the code.
20:21 terminalmage Salt doesn't, to my knowledge, have a way to load arbitrary python modules
20:21 terminalmage in the same way that it does custom modules, grains, etc from _modules, _grains, etc.
20:22 terminalmage that would seem to me to be a vulnerability vector anyway
20:22 terminalmage though, there's nothing stopping one from writing custom modules that do bad things to your minions
20:22 terminalmage so yeah
20:23 nu7hatch terminalmage, SEJeff_work actually now i realized that what i wanted to do doesn't really work. What i wanna do is to have a redis storage where i can put various information about the nodes and have it available via grains or pillars, and have it editable via my custom module
20:23 terminalmage ahh ok
20:23 auser joined #salt
20:23 SEJeff_work nu7hatch, Like the job cache?
20:23 nu7hatch i want to have there information like roles assigned to each node, each node's strength, apps that being deployed there etc
20:24 SEJeff_work Oh a config management database
20:24 SEJeff_work gotcha
20:24 nu7hatch yeah
20:24 SEJeff_work Yeah everyone builds their own version of that
20:24 terminalmage btw folks, 0.16.4 should be in epel-testing shortly
20:24 terminalmage it was just pushed to testing this morning
20:24 nu7hatch something somewhat like openshift but way way waaay smaller
20:24 terminalmage has to sync to the mirrors
20:24 SEJeff_work terminalmage, I can give it kudos
20:24 SEJeff_work to speed up the process
20:25 terminalmage SEJeff_work: not sure if that's necessary at this point, it's been accepted into testing and everything
20:25 SEJeff_work it waits 2 weeks or until it has X number of kudos to move from testing to stable
20:25 terminalmage oh, well testing -> stable is a different matter
20:26 terminalmage yeah, if we can speed up getting it into stable that'd be nice
20:26 nu7hatch so narrowing my question, is it possible to make grains (or pillars) dynamically loaded from redis? because the grains part doesn't seem to work for me here
20:26 terminalmage nu7hatch: grains no, they're static data compiled when the minion starts
20:27 terminalmage there are several external pillar plugins
20:27 terminalmage not sure if redis is among them
20:27 SEJeff_work terminalmage, Extra karma makes it go out of testing sooner: https://apps.fedoraproject.org/packages/salt
20:27 SEJeff_work notice: 0 karma
20:28 terminalmage ahhh
20:28 nu7hatch ok
20:28 terminalmage nu7hatch: http://docs.saltstack.com/ref/pillar/all/index.html
20:28 terminalmage no redis external pillar
20:28 nu7hatch thanks terminalmage and SEJeff_work
20:28 mapu joined #salt
20:28 terminalmage nu7hatch: I'm not familiar with redis, but it's probably doable
20:28 terminalmage just won't be available until 0.18.0 at the earlier
20:28 terminalmage s/earlier/earliest/
20:29 nu7hatch hmm mongo could work for me too, and writing an external plugin would do the job as well
20:29 nu7hatch thanks
20:29 terminalmage feel free to open an issue
20:29 terminalmage no prob
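For reference, an external pillar is wired into the master config via the `ext_pillar` option; the sketch below is for the mongo external pillar that nu7hatch is considering, and the connection options and collection name are assumptions that should be checked against that module's docs before use:

```yaml
# /etc/salt/master (sketch; option names per the mongo ext_pillar module)
mongo.host: localhost
mongo.db: salt

ext_pillar:
  - mongo:
      collection: pillar_data
```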
20:29 mohae forrest & baniir: all privileges
20:29 nu7hatch yeah, i can write one and hand it over, you may make use of it if you like it
20:31 shinylasers joined #salt
20:32 mohae err all privileges == grant all privileges on <targetdb>@<host> etc
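In state form, the grant mohae describes might look like this (user, host, and database are placeholders; 'all privileges' includes CREATE, which is what's needed to create databases):

```yaml
appuser_grants:
  mysql_grants.present:
    - grant: all privileges
    - database: '*.*'
    - user: appuser
    - host: localhost
```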
20:34 juicer2 joined #salt
20:34 terminalmage nu7hatch: cool, just fyi though, it looks like there is not yet a way to use that custom pillar module without copying it to the location where the rest of salt is installed
20:35 terminalmage under /usr/lib/python....../site-packages/salt/pillar/
20:35 terminalmage so while you're writing it, you'd need it to be there to be available. unless I'm overlooking something
20:36 QauntumRiff nu7hatch, you and zach need to talk
20:37 mohae_ joined #salt
20:40 jpeach joined #salt
20:40 curtisz joined #salt
20:40 GradysGhost Am I to understand that I can use pkg.install with the source variable pointed at a salt resource (like an RPM) and it will install that RPM on a minion? http://docs.saltstack.com/ref/modules/all/salt.modules.yumpkg.html#salt.modules.yumpkg.install
20:41 terminalmage GradysGhost: yes but you have to use a python data structure
20:41 terminalmage like in the CLI example
20:41 terminalmage easier to do it in a states
20:41 terminalmage *state
20:42 GradysGhost I was just going to ask how to do it with states.
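The state form terminalmage points at uses the `sources` argument, a list of package-name-to-path mappings (the package name and path here are placeholders):

```yaml
mypkg:
  pkg.installed:
    - sources:
      - mypkg: salt://rpms/mypkg-1.0-1.el6.x86_64.rpm
```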
20:42 curtisz ahoy... i have a problem, the solution to which is not obvious. i am running salt-master on a centos machine (v0.16.4) and trying to use minions on FreeBSD 9.1-RELEASE. salt-minion in /usr/src/sysutils/salt is v0.10.4, and of course these versions cannot talk to the version on my salt-master. i cannot find a solution to upgrade or patch ports. is there a resource that i could use to patch/upgrade my minions to be able to talk to the master? thank you for y
20:44 juicer2 joined #salt
20:44 GradysGhost curtisz: Have you tried installing all the gnu utils for Solaris and compiling?
20:44 curtisz gradysghost: no i have not
20:44 * GradysGhost hates Solaris's lack of compatibility with normal *nix tools
20:45 g4rlic forrest: OK, I tried that command (sorry I was eating lunch)
20:45 GradysGhost I did Solaris support for two years and ran into your problem (not with salt, specifically) several times. By the time I left, I had a 1,000+ line BASH script to download and compile an AMP stack for Solaris.
20:45 g4rlic results were, unexpected.
20:46 GradysGhost I don't know much about salt's compatibility with Solaris, as I'm a salt noob myself, but having some Solaris experience, that's what I would try if I couldn't find an actual Solaris release of the latest minion.
20:46 curtisz can you please explain why solaris' gnu utils on freebsd 9.1-release will help me with version incompatibility inside salt?
20:47 GradysGhost I just meant that you would likely need the gnu headers/source to compile against
20:47 GradysGhost although... salt's all python, right? So maybe you don't need to compile?
20:47 g4rlic forrest: http://pastebin.centos.org/4296/
20:47 GradysGhost Please ignore me, I'm a dumbass.
20:48 forrest ok so even running the install through salt without any state info, still screws it up
20:48 forrest that's very odd to me
20:48 g4rlic it's odd to me too.
20:48 g4rlic I was hoping to see the command being executed, I suppose I can try it under strace..
20:48 cedwards curtisz: I maintain the FreeBSD port for sysutils/py-salt*. What issues are you having upgrading the port?
20:49 terminalmage GradysGhost: it's in the docs
20:49 terminalmage the syntax for using the 'sources' param in states
20:50 GradysGhost terminalmage: I found it and am trying it atm, will query again if I encounter problems.
20:50 terminalmage cool
20:50 curtisz cedwards: hello! thank you for your help. i can't seem to find the patch that will get my minions on freebsd to talk to a master running 0.16.4. can you point me to a resource that will help me do this?
20:50 curtisz cedwards: also, thank you for the work you do. we're a freebsd shop in love with what salt can do.
20:51 cedwards curtisz: you say the version in /usr/ports/sysutils/py-salt is 0.10.4 still? sounds like you've got an out of date ports tree..?
20:51 robertkeizer joined #salt
20:51 curtisz haha yes that may be the problem
20:52 forrest hey g4rlic
20:52 forrest is selinux enabled?
20:52 cedwards curtisz: 'portsnap fetch update' and you should end up at 0.16.4.
20:52 curtisz cedwards: doh... thank you :)
20:52 isomorphic joined #salt
20:52 cedwards curtisz: the port has since moved from sysutils/salt to sysutils/py-salt, fyi.
20:53 curtisz cedwards: cool, thank you!
20:53 cedwards s0undt3ch: ping re: bootstrap
20:53 terminalmage SEJeff_work: working on initial macos user management right now
20:53 g4rlic forrest: yes, it is.
20:54 cedwards curtisz: let me know if you have any other issues.
20:54 forrest https://github.com/saltstack/salt/issues/6735
20:54 forrest boom!
20:54 forrest selinux problem with salt, already filed as an issue
20:54 SEJeff_work terminalmage, nifty
20:54 terminalmage forgot if that was something you mentioned wanting or not
20:54 SEJeff_work terminalmage, Bah, I don't use OS X
20:54 SEJeff_work I like the idea of it, a pretty unix for desktops
20:54 terminalmage neither do I
20:54 terminalmage hehe
20:54 SEJeff_work but I'm linux to the absolute core
20:54 SEJeff_work I don't even dual boot windows on a single one of my 5 machines at home
20:54 terminalmage yeah, MacOS is a very visually-appealing prison
20:54 SEJeff_work only linux
20:55 terminalmage word
20:55 abe_music joined #salt
20:55 SEJeff_work but for Becca? She loves her macbook because it syncs super well with her iPwn
20:55 SEJeff_work which is why I got it for her explicitly
20:55 forrest cedwards, what's up with the bootstrap?
20:55 g4rlic forrest: Balls, thank you.  I'm looking for the suggested "Workarounds".
20:55 cedwards forrest: just curious if my BSD changes ever made it into stable
20:55 terminalmage yeah, my wife liked my macbook... if she gets another laptop that's probably what I'll end up needing to get for her
20:55 terminalmage she cannot stand Windows 8
20:56 terminalmage makes me very proud
20:56 forrest all your pulls are back in for the last month from what I see.
20:57 terminalmage cedwards: hey, while you're here, is there any danger from using both ports and pkgng on the same box?
20:57 terminalmage since they're different package managers
20:57 forrest here you go g4rlic: http://docs.saltstack.com/topics/troubleshooting/index.html#salt-and-selinux
20:57 cedwards terminalmage: if both are detected you'll likely want to set WITH_PKGNG=YES in the make.conf
20:58 terminalmage cedwards: ahh, so that will register ports that you install into pkgng's database?
20:58 cedwards terminalmage: yes
20:58 terminalmage cool
20:58 terminalmage I've been trying to step up my BSD knowledge
21:00 forrest cedwards did you ever look at this issue? https://github.com/saltstack/salt-bootstrap/issues/178
21:01 cedwards forrest: I don't think I ever saw that ticket. reading..
21:01 g4rlic forrest: looks like lack of an SELinux salt_minion_t and associated policy has been around for a year.
21:01 g4rlic :\
21:01 cedwards forrest: seems like something like that should be fixed with the new paths fix coming in 0.17.0?
21:01 g4rlic I See the workaround, but it's ugly. and I'm going to have to figure out how to bootstrap that.
21:02 forrest g4rlic, yea it's a bit ugly.
21:02 forrest you could always just run it as a command though
21:03 forrest and require the command be run prior to the package installation
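forrest's suggestion as a sketch, reusing the type change from Salt's SELinux troubleshooting docs. Treat this as illustrative only: chcon relabels the file, and the already-running minion process keeps its old context until restarted, which is exactly the complication raised later in this discussion.

```yaml
# Sketch only: relabel salt-minion so RPM scriptlets can run, and make
# the bind pkg state wait for it.
selinux-rpm-context:
  cmd.run:
    - name: chcon -t rpm_exec_t /usr/bin/salt-minion
    - require_in:
      - pkg: bind

bind:
  pkg:
    - installed
```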
21:03 forrest cedwards, I don't know. I'm unaware of a path fix in 0.17.0, just wanted to bring it up since it's been hanging around for a while
21:03 * cedwards digs for the link
21:03 forrest if you have any docs on the path fix, I'd be interested to read about it though :P
21:03 g4rlic forrest: I see that, but there's a *lot* of rpm's with %scripts, and disabling SELinux wholesale on my nodes is not an option for what they're running.
21:04 g4rlic So I have to find a way to ensure that salt-call gets the correct rpm_t context
21:04 forrest ?
21:04 g4rlic a priori
21:04 forrest you don't need to disable it
21:04 forrest # chcon -t system_u:system_r:rpm_exec_t:s0 /usr/bin/salt-minion
21:04 forrest you're just changing the context for that salt-minion/master processes
21:04 forrest err call
21:04 g4rlic Which I have to do before this system ever installs any packages.
21:05 forrest yea
21:05 aantony joined #salt
21:05 forrest so what's the first thing you do on your system when you build it with salt?
21:05 g4rlic Which means, since this is the salt master, either I have to do this in Kickstart, or find a way to ensure that before any other salt states are executed, this context gets corrected.
21:05 g4rlic OK, so here's the steps..
21:06 g4rlic My laptop has dhcpd, tftp, httpd, all configured to pass out the kickstart for the salt master.
21:06 forrest oh I see you're running this off your local box.
21:06 g4rlic it also has the salt and pillar directories available for access during kickstart to seed the box.
21:06 g4rlic no, I'm describing to you how I built the salt master in question. ;)
21:06 mannyt joined #salt
21:06 forrest oh ok
21:07 g4rlic My laptop was, for all intents and purposes, the god node, that gave life to this little cluster.
21:07 cedwards forrest: https://github.com/saltstack/salt/pull/6890
21:07 g4rlic it installed the salt master via kickstart, and copied a working /srv/salt tree into it
21:07 g4rlic Once that node rebooted after the kickstart was complete, it used its salt minion to configure itself.
21:07 cedwards forrest: if i recall, s0undt3ch made those changes to the bootstrap as well..
21:08 mortis i just got my amiga1200 online, when are you gonna make a salt amiga minion? :P
21:08 g4rlic So I'm thinking that the best solution for this, is to ensure that the initial change of the context is done in Kickstart, and there needs to be a salt-state to ensure that it maintains the correct context.
21:08 g4rlic Does that sound plausible?
21:09 forrest that does sound plausible yes
21:10 forrest it's too bad you can't run the cmd, then stop the process so the changes get applied, then re-run :\
21:10 oz_akan_ joined #salt
21:11 mgw1 joined #salt
21:12 mapu joined #salt
21:12 g4rlic forrest: that would be even uglier.
21:12 forrest I know
21:12 forrest lol
21:12 g4rlic I wish I knew more about SELinux policies, or I'd write it.  :\
21:12 g4rlic Hell, maybe that will be one of my q4 goals..
21:12 forrest yea I don't either, I messed with them for my RHCE, but there are big changes between 5 and 6 as well.
21:13 g4rlic Right, and we're all 6 in this particular cluster.
21:13 g4rlic I remember how to use the basics of audit2why and audit2allow to help generate policies, but that's not a comprehensive solution.
21:15 forrest 6 is way better at least
21:15 forrest much simpler
21:15 forrest still sucks though
21:16 forrest cedwards, Hmm, yea I'm not sure then. Maybe the user just forgot to close the issue?
21:17 forrest and yes, s0undt3ch did make those changes
21:17 forrest at least he hardcoded them for freebsd
21:18 forrest so it passes the options through
21:18 matanya joined #salt
21:19 aantony would anybody be able to help me understand the direction the project wants to go with respect to retrieving config items from config.option versus config.get?
21:20 aantony i'm seeing some inconsistencies with some config items.   like when things should be "module.key" or "module:key"
21:21 aantony the latter only being retrievable from config.get
21:24 redondos joined #salt
21:24 GradysGhost Ok. I have this state file (http://pastebin.com/UdXxichL), which yields this output on the master (http://pastebin.com/06A2wQPY), and this output on the minion (http://pastebin.com/rgKM5hfc)
21:24 GradysGhost I know the minion correctly gets the resource. It's sitting on that server.
21:25 brianhicks joined #salt
21:25 GradysGhost But it looks like the minion runs rpm -qp, which should just display info about the package, and salt interprets this as failure when the intent is to install the thing, not look at its metadata.
21:27 forrest GradysGhost, what happens if you install straight from the minion? Can you pull that down?
21:27 GradysGhost You mean if I run `rpm -i $packagefile` on the minion?
21:28 GradysGhost ah fuck me, it lacks deps
21:28 GradysGhost Figures.
21:28 Ryan_Lane joined #salt
21:29 forrest :P
21:29 forrest I'm kinda surprised that pkg.installed is using rpm as opposed to yum, since yum would address the deps
21:30 GradysGhost It looks like there was an issue filed a while back (around v 0.10 or so) pushing for rpm support, and I suspect that's when this change was made.
21:30 Ryan_Lane1 joined #salt
21:30 forrest what's the issue #?
21:31 forrest I mean if that's the case you should be able to specify, with yum being the default option :\
21:31 forrest no one wants to deal with a failed install because RPM is too stupid to pull in the deps
21:31 GradysGhost I believe this is the one I looked at earlier: https://github.com/saltstack/salt/issues/2291
21:31 forrest cool
21:31 GradysGhost Yeah, a `yum localinstall` would probably be the better option.
21:32 forrest I'm blaming terminalmage
21:32 forrest his is the first comment suggesting to use rpm -qpi :P
21:32 baniir joined #salt
21:32 GradysGhost But it also looks like salt.modules.yumpkg doesn't have a localinstall function. http://docs.saltstack.com/ref/modules/all/salt.modules.yumpkg.html#module-salt.modules.yumpkg
21:33 baniir attempting to use mysql_grants I get: Access denied for user 'debian-sys-maint'@'localhost'
21:34 pipps joined #salt
21:39 anteaya joined #salt
21:41 g4rlic yum localinstall is the new way forward in redhat land, as far as I've read.
21:42 g4rlic Later fedora's start whining at you if the rpm database was modified outside of yum. :\
21:42 forrest GradysGhost, the sources work fine, because it downloads the package to the minion, it just needs to use yum localinstall instead of rpm -qip
21:42 forrest g4rlic, I've been using localinstall for the last few iterations of cent/rhel, I never use rpm if I don't have to for that stuff.
21:42 GradysGhost Yeah, it's not an issue with getting the file to the remote box at all.
21:43 forrest Nop
21:43 forrest *e
21:43 forrest GradysGhost, that might be worth opening an issue on
21:43 GradysGhost It's clearly in that cache folder.
21:43 GradysGhost Okay, I'll do so.
21:43 forrest yea, the deps just don't get resolved, which is dumb.
21:43 forrest Make sure to reference that old RPM one
21:43 forrest and be like 'rabble rabble'
21:43 GradysGhost I'll make sure to mention that.
21:44 g4rlic iunderstandthatreference.gif
21:44 matanya joined #salt
21:44 forrest heh
21:46 auser joined #salt
21:49 rjc joined #salt
21:50 krissaxton joined #salt
21:50 troyready joined #salt
21:53 GradysGhost forrest: How's this? https://github.com/saltstack/salt/issues/7184
21:53 forrest I'd remove the outrage at the bottom, otherwise good stuff.
21:53 forrest :P
21:54 GradysGhost hah, you told me to put it there!
21:54 forrest Well, I consider the issue enough 'rabble rabble'
21:58 nomad_ joined #salt
22:00 forrest GradysGhost, it looks like https://github.com/saltstack/salt/blob/develop/salt/modules/yumpkg.py#L578
22:00 UtahDave joined #salt
22:01 nomad_ hello, trying to find some additional examples of salt runners
22:01 jeffmendoza joined #salt
22:01 nomad_ need to create a script to use mine.send and mine.get to pull some IPs from minions and can't get the syntax right
22:03 forrest GradysGhost,      |  installLocal(self, pkg, po=None, updateonly=False)
22:03 forrest |      handles installs/updates of rpms provided on the filesystem in a
22:03 forrest |      local dir (ie: not from a repo)
22:03 forrest that's the module help data for the python yum module
22:04 GradysGhost ok, how would this be invoked from the master? yumpkg.install and provide a source?
22:05 * GradysGhost checks the time
22:05 GradysGhost yep
22:05 GradysGhost packitin o'clock
22:05 forrest maybe yea?
22:05 GradysGhost see you guys later. Again, thanks for all the support!
22:05 forrest I don't know, I've never seen yumpkg used in a state before
22:05 troyready joined #salt
22:05 GradysGhost heh, ok, I'll toy with it tomorrow
22:05 forrest have a good one
22:05 GradysGhost later
22:05 forrest sounds good
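[The rpm-vs-yum dependency issue discussed above can be sketched as follows. This is an illustrative helper, not Salt code: it only builds the two command lines being compared and does not run them; `local_install_cmd` is a hypothetical name.]

```python
# `rpm -i` installs a local package file without resolving missing
# dependencies, while `yum localinstall` pulls deps from the enabled
# repos -- which is why GradysGhost's install failed under plain rpm.
def local_install_cmd(pkg_paths, resolve_deps=True):
    if resolve_deps:
        # yum resolves missing dependencies from configured repositories
        return ["yum", "-y", "localinstall"] + list(pkg_paths)
    # plain rpm fails if any dependency is not already installed
    return ["rpm", "-i"] + list(pkg_paths)
```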
22:07 efixit joined #salt
22:08 Thiggy @nomad can you put what you have in a pastebin? I just started messing with runners yesterday...
22:08 Thiggy ./cc @nomad_
22:10 nomad_ its kinda a mess, just trying to run this command using client.cmd -- salt '*' mine.get '*' network.ip_addrs -- and it does not work within client.cmd, think its just syntax, just trying to find more examples other than what's in github
22:12 nomad_ have it working using fabric to run the command but trying to do it within saltstack
22:14 Ryan_Lane joined #salt
22:14 ksalman joined #salt
22:15 Thiggy @nomad_ gotcha, Lemme PM you
22:17 nomad_ thanks Thiggy, cant get web client to let me respond in PM
22:18 Thiggy Refresh that gist I sent you, I think that's what it'd be like
22:19 isomorphic joined #salt
22:20 jaequery joined #salt
22:20 nomad_ Perfect, thanks Thiggy that was exactly what I needed
22:20 oz_akan_ joined #salt
22:21 Thiggy It work?
22:21 nomad_ yeap my syntax was way off ;(
22:23 Thiggy groovy, glad it worked
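[A sketch of what nomad_ was after. The LocalClient call (shown in the comments) is real Salt API; `mine.get` returns a dict of `{minion_id: return_data}`, and the flattening helper below is a hypothetical post-processing step, not part of Salt.]

```python
# Roughly what a runner would do:
#
#   import salt.client
#   client = salt.client.LocalClient()
#   ret = client.cmd('*', 'mine.get', ['*', 'network.ip_addrs'])
#
# ret maps each responding minion to its mine data; collect the IPs:
def flatten_mine_ips(mine_ret):
    """Collect every IP reported by every minion into one sorted list."""
    ips = set()
    for minion, addrs in mine_ret.items():
        ips.update(addrs or [])
    return sorted(ips)
```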
22:25 s0undt3ch forrest, cedwards: I'm here now
22:25 forrest it's all good s0undt3ch
22:25 s0undt3ch forrest: Great!
22:25 s0undt3ch :)
22:26 forrest we were talking about whether that freebsd issue could be closed that's 2 months old on the bootstrap due to the new pathing stuff
22:26 forrest I did an @you on github so I'm sure you'll see it
22:28 mmilano joined #salt
22:34 djn can I use cp.get_file to send a file thats not in my fileroot to a minion?
22:35 forrest pretty sure you can't djn
22:35 forrest because salt won't know about any files outside that file root
22:37 kermit1 joined #salt
22:40 g4rlic forrest: do you have any pointers on how to use cmd.run inside of an SLS file?  The saltstack docs are, well, less than clear to me on the matter.
22:41 pipps joined #salt
22:41 forrest are you looking at this one?
22:41 forrest http://docs.saltstack.com/ref/states/all/salt.states.cmd.html
22:42 g4rlic Aye
22:43 g4rlic oh crap
22:43 g4rlic nvm.
22:43 forrest ok
22:43 g4rlic Assume I'm blind
22:43 forrest :P
22:43 g4rlic The doc there though does make a slight error
22:43 forrest where?
22:43 g4rlic the comments at the top of the code block for the SLS example aren't actually marked as comments.
22:44 g4rlic Run only if myscript changed something:
22:44 jslatts joined #salt
22:44 forrest no that's fine
22:44 g4rlic That's not a comment?
22:44 forrest run only if myscript changed something is the name
22:44 forrest that's the 'reference'
22:44 forrest or whatever you wanna consider it
22:44 forrest the command that will actually be run is - name: echo hello
22:44 forrest so it will run echo hello
22:45 g4rlic I get it now.  Derp.  Thank you for all your help today. :)
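[Illustrative only: the structure Salt parses out of the docs example g4rlic was reading, shown here as the equivalent Python data. The top-level key is the state ID (a label), not a comment; the command actually executed comes from the `name` argument.]

```python
# What the SLS example compiles down to, conceptually:
sls = {
    "Run only if myscript changed something": {  # state ID, i.e. a label
        "cmd.run": [
            {"name": "echo hello"},  # the command that actually runs
        ],
    },
}
state_id = next(iter(sls))
command = sls[state_id]["cmd.run"][0]["name"]  # "echo hello"
```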
22:45 forrest np man, the joys of being on a bridge call for production problems all day
22:45 * g4rlic knows that feel
22:47 alunduil joined #salt
22:51 g4rlic also, fwiw, the correct context in centos 6 is: system_u:object_r:rpm_exec_t:s0
22:51 g4rlic system_r is an invalid role I think.
22:51 forrest oh nice
22:51 djn forrest: It worked some versions ago with selt:///etc/fstab for example I think, but now its definitly not..
22:51 forrest rhel5 vs rhel6 hooray
22:51 forrest djn, hmm I'm not sure then, I've never tried to reference a file outside the root
22:52 forrest g4rlic, if you want to file an issue on that I can update the docs when I get home.
22:52 BrendanGilmore joined #salt
22:53 g4rlic forrest: I didn't realize that was part of the official docs?  I thought you just did it off the top of your head.
22:53 forrest You'd think though that salt:///etc/fstab would still be looking at whatever salt considered the root.
22:53 g4rlic also, chcon doesn't need -t
22:54 forrest oh no http://docs.saltstack.com/topics/troubleshooting/index.html#salt-and-selinux
22:54 forrest I was just looking there
22:54 forrest as if I'm going to waste brain space memorizing all the selinux commands :P
22:55 StDiluted joined #salt
22:55 HumanCell joined #salt
22:56 g4rlic oh.
22:56 g4rlic Well, yes, those docs are at least incorrect for CentOS 6
22:56 micah_chatt joined #salt
22:56 g4rlic and Fedora afaik.
22:57 forrest yea I imagine they were written with rhel5 in mind based on the age of that issue.
22:58 g4rlic Which project should I file the bug under?
22:58 g4rlic github.com/saltstack/salt ?
22:58 forrest g4rlic, yep
22:59 forrest just note that the commands are wrong, link the page, as well as the commands that worked if you would
22:59 g4rlic Absolutely.
22:59 forrest and I'll spin up an instance (hopefully) tonight to confirm functionality on cent5/6 with those and doing a yum install on bind.
22:59 forrest and bind-utils right?
23:07 g4rlic bind and bind-chroot
23:08 g4rlic btw, it doesn't *appear* to be helping in CentOS 6.
23:08 forrest interesting
23:08 forrest and terrible
23:08 g4rlic I'm actively trying to debug it, I'll let you know what I find in the next hour or so.
23:08 forrest sounds good, I'll probably head home in the next 30 minutes or so, but I'll log back on as soon as I get home.
23:10 g4rlic Ok.
23:10 g4rlic Wait, actually, I think I see where I may have screwed up.
23:11 kermit joined #salt
23:11 forrest :( selinux
23:12 g4rlic https://github.com/saltstack/salt/issues/7185  <-- created a documentation issue.
23:12 g4rlic that look good?
23:13 forrest looks good to me.
23:13 forrest you confirmed that works I assume?
23:13 forrest or found what you had wrong in the command?
23:14 williamthekid_ joined #salt
23:14 g4rlic forrest: found what I'd done wrong.  The command on the docs is broken in CO6.4 for sure.
23:15 g4rlic but I had forgotten to change the context for salt-call in addition to salt-minion.
23:15 forrest ahh that makes sense
23:16 forrest cool, I'll work on testing it tonight and getting the docs updated, thanks g4rlic
23:16 g4rlic And since I have to do it in kickstart *and* a salt state, I got a bit of a copypasta error.
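[A sketch of the fix g4rlic describes: setting the full context (no `-t`, per his earlier note) on both salt-minion and salt-call. The context string comes from the discussion above; the binary paths and helper name are assumptions, so verify them on your own system. The helper only builds the command line.]

```python
# CentOS 6 context from the discussion; paths are assumed defaults.
CONTEXT = "system_u:object_r:rpm_exec_t:s0"
BINARIES = ["/usr/bin/salt-minion", "/usr/bin/salt-call"]

def chcon_cmd(context=CONTEXT, paths=BINARIES):
    # chcon accepts a full context as its first argument
    return ["chcon", context] + list(paths)
```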
23:16 forrest heh
23:16 druonysus joined #salt
23:16 g4rlic No problem, thanks for helping me get this sorted.
23:16 forrest yea np
23:16 forrest glad we could figure it out
23:17 AviMarcus joined #salt
23:19 g4rlic I'm just ashamed it was a year old documented bug that bit me.
23:19 g4rlic it also means I have to rekick the salt master.  who knows how many scripts didn't get called when the machine was built. :(
23:19 forrest lol
23:19 forrest yea I've never experimented with writing an selinux policy module
23:20 forrest it would be nice if that was done
23:20 g4rlic Indeed it would.
23:20 g4rlic Being the rare selinux "proponent", I should probably learn how to write one, and just get it done.
23:20 g4rlic Shouldn't take more than a day or two.
23:21 forrest yea it doesn't seem that bad, I just don't know enough to know how you create one and then get it put in. Does it come with the RPMs you install? How does that work, etc.
23:21 forrest just not sure
23:21 g4rlic IIRC, there's a massive default policy that ships with fedora/centos.
23:21 g4rlic I'm not sure how to wedge in a custom policy outside of that framework.
23:21 forrest Hmm, do they accept contributions?
23:21 forrest would be nice if salt just 'worked' out of the box
23:21 g4rlic probably?
23:22 g4rlic You know..
23:22 g4rlic Fedora 19 ships Salt
23:22 forrest If I knew anyone that worked at RHEL I'd see about how to do that.
23:22 g4rlic without going to EPEL
23:22 forrest hmm
23:22 g4rlic I wonder if this is fixed in FEdora 19 somehow?
23:22 forrest I don't know
23:22 g4rlic I'm going to go pull the SRPM and see what I can find.
23:23 forrest Cool
23:23 forrest if not I THINK that digital ocean has fedora 19 vms available now, so I could just spin up one of those when I bring up the centos ones for testing
23:23 bhosmer joined #salt
23:25 forrest I wonder if you just submit it as a 'bug', that they would fix it?
23:26 g4rlic I'm running F19 now
23:27 forrest nice
23:27 g4rlic I can just install it and look at the contexts.
23:27 forrest sounds good
23:27 forrest that's basically what I would do, it would cost me like 5 cents
23:28 g4rlic Ok
23:28 forrest if you wanna check it though, more power to you
23:28 g4rlic so F19 doesn't appear to have an explicit type set for the salt binaries.
23:28 g4rlic system_u:object_r:bin_t:s0
23:28 forrest bummer
23:28 g4rlic That's the context
23:28 forrest I don't know if that would work
23:28 forrest my selinux foo is not good
23:28 g4rlic I don't know either.
23:29 g4rlic but I figure can't be too hard to try.
23:29 g4rlic I can point this system at the salt master
23:29 g4rlic hang tight.
23:29 forrest cool
23:31 g4rlic that doc issue I opened look good though?
23:31 forrest yea looks fine to me
23:32 forrest already emailed it to myself to look at later tonight
23:34 juanlittledevil joined #salt
23:34 forrest Man basepi might beat me to updating this g4rlic :P
23:34 NV hrm, any way to enable nazi-mode for ssh_auth state? A quick look at the source tells me no but I might be missing something obvious to someone who has used salt a tad longer than I :P
23:34 forrest he's already got a comment logged.
23:34 basepi forrest: heh
23:35 basepi forrest: i'm pretty quick if i'm already working on triage and a new e-mail comes through.  ;)
23:35 anteaya_ joined #salt
23:35 forrest hah gotcha
23:35 basepi but feel free to work on it -- i just filed it away, i'm not actually actively looking into it.
23:37 forrest yea I'll take care of it when I get home
23:38 jesusaurus is anyone here using git_pillar with environments?
23:39 NV nazi mode as in only listed keys should be in the authorized_keys file, any keys not known to salt would be removed
23:40 NV jesusaurus: I was using git_pillar, found it to be a tad broken (failed to update a few times) so now I'm just using a git checkout
23:40 mohae joined #salt
23:41 forrest There's nothing for removing keys that salt doesn't know about NV.
23:41 NV mhmm
23:41 forrest however you could maybe write up some logic to check the value of the file versus salt, or you could clear the file every time, then have salt re-add the keys
23:41 forrest so technically it's 'only' adding the keys it knows about.
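[A hypothetical helper for the approach forrest sketches: diff the minion's authorized_keys against the keys Salt manages, so anything Salt doesn't know about can be removed before re-adding. The function name and logic are illustrative, not an existing Salt API.]

```python
def keys_to_purge(authorized_lines, managed_keys):
    """Return authorized_keys lines that Salt does not manage."""
    managed = set(managed_keys)
    return [
        line for line in authorized_lines
        # skip blanks and comments; flag anything not in the managed set
        if line.strip() and not line.startswith("#") and line not in managed
    ]
```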
23:42 jesusaurus NV: are you sure it wasnt just salt caching issues? I turn off caching for states and pillars because it always bites me
23:42 forrest I'm outta here NV, I'll be online later though, let me know how that works.
23:43 forrest or if you find a good solution
23:43 NV jesusaurus: ooh? what option is that?
23:43 g4rlic too slow. :(
23:45 jesusaurus NV: in the master config: minion_data_cache: False
23:45 jesusaurus see also: job_cache: True
23:46 NV iirc that doesn't change pillar caching?
23:49 cjh in the latest version of salt when using pip are other people seeing this error: [ERROR   ] Can't parse line '## FIXME: could not find svn URL in dependency_links for this package:
23:49 cjh i see that for every pip package i specify
23:54 jesusaurus NV: according to the comment in the config file, its both grains and pillar data
23:54 jesusaurus https://github.com/saltstack/salt/blob/develop/conf/master#L100
23:55 anteaya__ joined #salt
23:57 blee_ joined #salt
