
IRC log for #salt, 2017-08-23


All times shown according to UTC.

Time Nick Message
00:11 woodtablet left #salt
00:21 Church- joined #salt
00:37 christopherl-sf_ joined #salt
00:45 GMAzrael joined #salt
00:52 aleph- joined #salt
00:54 Church- joined #salt
00:57 ssplatt joined #salt
00:58 pipps joined #salt
00:59 whytewolf XenophonF: test=True is a state.highstate thing. not a salt proper thing.
01:00 LostSoul joined #salt
01:05 iggy each state should implement test=True logic
01:05 iggy but yeah, not all of them do
01:13 pipps joined #salt
01:15 pipps99 joined #salt
01:16 Church- joined #salt
01:30 ixs joined #salt
01:34 neilf__ joined #salt
01:36 cgiroua joined #salt
01:52 ilbot3 joined #salt
01:52 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.7, 2017.7.1 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers
01:58 omie888777 joined #salt
02:03 ssplatt joined #salt
02:36 zerocoolback joined #salt
02:40 evle joined #salt
02:44 wryfi joined #salt
02:45 GMAzrael joined #salt
02:46 shanth_ joined #salt
02:57 Church- joined #salt
02:57 Church- joined #salt
03:06 GMAzrael joined #salt
03:38 donmichelangelo joined #salt
03:39 onlyanegg joined #salt
03:41 ninjada joined #salt
03:45 bigjazzsound_ joined #salt
03:45 skorpy2009 joined #salt
03:59 onlyanegg joined #salt
04:04 jeddi joined #salt
04:29 sh123124213 joined #salt
04:52 Church- joined #salt
04:57 onlyanegg joined #salt
05:02 leev joined #salt
05:02 Hazelesque joined #salt
05:02 robinsmidsrod joined #salt
05:02 frygor_ joined #salt
05:02 davedash joined #salt
05:02 phobosd__ joined #salt
05:02 munhitsu_ joined #salt
05:02 LordOfLA joined #salt
05:03 bbradley joined #salt
05:03 fl3sh joined #salt
05:03 todder joined #salt
05:03 dragon788 joined #salt
05:03 doriftoshoes__ joined #salt
05:03 hax404 joined #salt
05:03 hexa- joined #salt
05:03 ople_ joined #salt
05:03 basepi joined #salt
05:03 brousch joined #salt
05:03 petems joined #salt
05:03 djural joined #salt
05:04 manji joined #salt
05:04 shadoxx joined #salt
05:04 justyns joined #salt
05:04 systeem joined #salt
05:04 mrueg joined #salt
05:04 tom[] joined #salt
05:04 gmacon joined #salt
05:04 Vye joined #salt
05:04 aarontc joined #salt
05:05 jor joined #salt
05:06 kukacz joined #salt
05:06 marcinkuzminski joined #salt
05:06 coldbrewedbrew joined #salt
05:06 coldbrewedbrew joined #salt
05:07 nledez joined #salt
05:07 nledez joined #salt
05:07 coldbrewedbrew_ joined #salt
05:07 Deliant joined #salt
05:07 GMAzrael joined #salt
05:09 Savemech joined #salt
05:10 preludedrew joined #salt
05:19 Diaoul joined #salt
05:22 Felgar joined #salt
05:35 impi joined #salt
05:36 zerocool_ joined #salt
05:38 felskrone joined #salt
05:38 zerocoolback joined #salt
05:50 aldevar joined #salt
05:55 _KaszpiR_ joined #salt
06:07 hoonetorg joined #salt
06:08 onlyanegg joined #salt
06:15 seena_e joined #salt
06:15 michelangelo joined #salt
06:16 Tucky joined #salt
06:18 sh123124213 joined #salt
06:21 twiedenbein joined #salt
06:22 seena_e Hi
06:22 whytewolf hi
06:22 seena_e I have upgraded salt from version 2016.3 to 2016.11 on Amazon Linux. https://gist.github.com/seenae/61d63338b32cc3941842a31243555077
06:23 seena_e But the version shows up differently in managed.version and test.version from the salt master
06:23 CEH joined #salt
06:23 seena_e Sorry wrong gist
06:24 whytewolf seena_e: stop the minion software. and then make sure it is actually stopped. if it is still running kill it then start it again
06:25 seena_e I had a similar issue with another server, hence I've already done that
06:26 seena_e https://gist.github.com/seenae/61d63338b32cc3941842a31243555077
06:26 seena_e updated the gist
06:27 whytewolf check the ip on both, make sure you are talking to the same host.
06:27 whytewolf [minion id does not have to be the hostname it can be anything]
06:30 armyriad joined #salt
06:32 masber joined #salt
06:36 seena_e I have removed the salt-key and re-added it, then tried
06:36 seena_e I am communicating with the same server
06:37 whytewolf yeah, no reason to remove and re-add a server for that
06:38 seena_e yea just to confirm
06:38 seena_e salt 'hostname' cmd.run 'salt-call grains.get saltversion' gives me the new version
06:39 seena_e strange ..
06:39 whytewolf when you use 'ps -ef | grep salt-minion' does the executable for salt-minion show the same location as `which salt-minion`?
06:39 Ricardo1000 joined #salt
06:39 whytewolf should be something like /usr/bin/python /usr/bin/salt-minion
06:40 seena_e hmm there is a difference
06:42 whytewolf sounds like you know where to start looking now.
06:43 seena_e yea checking init
07:00 s0undt3ch joined #salt
07:02 tsia joined #salt
07:08 AnotherNick joined #salt
07:09 sh123124213 joined #salt
07:09 CEH joined #salt
07:09 GMAzrael joined #salt
07:11 toanju joined #salt
07:18 fl3sh joined #salt
07:20 ninjada joined #salt
07:21 _KaszpiR_ joined #salt
07:21 ninjada_ joined #salt
07:26 jeddi joined #salt
07:28 cofeineSunshine hi
07:29 cofeineSunshine I'm using serverdensity for monitoring servers. Managed to write state to add newly created server to SD. Now I'm looking for a solution how to remove from SD when instance is deleted with salt-cloud.
07:42 ninjada joined #salt
07:42 coredumb cofeineSunshine: Normally you should be able to match an event on instance destruction
07:44 cofeineSunshine coredumb: yea, looks like
07:44 jhauser joined #salt
07:44 cofeineSunshine reactor is on
07:44 cofeineSunshine now I'm looking into serverdensity_device state
07:45 cofeineSunshine there is no state "not_monitored" opposing to "monitored" state
07:46 cofeineSunshine so, if I want to have that state I have to write it
07:46 cofeineSunshine so the question would be
07:46 cofeineSunshine what is a painless way to add my own module/state to the saltstack source
07:47 cofeineSunshine now I have installed saltstack on master via deb package
07:48 iggy copy the existing one to $file_roots/_modules (or _states) and add whatever functionality you need
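As a minimal sketch of what iggy describes, a custom state module dropped into `$file_roots/_states` is just a Python file whose functions return a result dict. Everything below is illustrative: the module name `serverdensity_ext`, the state name `not_monitored`, and the execution-module call `serverdensity_device.delete` are assumptions, not documented Salt APIs.

```python
# _states/serverdensity_ext.py -- hypothetical custom state module.
# A Salt state function returns a dict with name/result/changes/comment.
# In a real deployment the loader injects __salt__ and __opts__; here we
# fall back to stubs so the shape can be exercised standalone.

def not_monitored(name):
    """Ensure the device `name` is no longer monitored (hypothetical)."""
    ret = {"name": name, "result": True, "changes": {}, "comment": ""}

    opts = globals().get("__opts__", {"test": False})
    funcs = globals().get("__salt__", {})

    # Honour test=True: report what would change without doing it.
    if opts.get("test"):
        ret["result"] = None
        ret["comment"] = "Device {} would be removed from monitoring".format(name)
        return ret

    # 'serverdensity_device.delete' is an assumed helper, not a real API.
    delete = funcs.get("serverdensity_device.delete", lambda n: True)
    if delete(name):
        ret["changes"] = {"monitored": {"old": True, "new": False}}
        ret["comment"] = "Device {} removed from monitoring".format(name)
    else:
        ret["result"] = False
        ret["comment"] = "Failed to remove {}".format(name)
    return ret
```

After a `saltutil.sync_states`, such a function would be callable from SLS as `serverdensity_ext.not_monitored`.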
07:51 JohnnyRun joined #salt
08:06 shanth_ joined #salt
08:08 Tucky joined #salt
08:08 pualj joined #salt
08:18 shanth_ joined #salt
08:19 blucena joined #salt
08:21 Hybrid joined #salt
08:29 twiedenbein joined #salt
08:31 jab416171 joined #salt
08:32 Mattch joined #salt
08:35 zerocoolback joined #salt
08:38 yuhl joined #salt
08:47 __number5__ joined #salt
08:47 mage_ joined #salt
08:48 mage_ hello
08:49 mikecmpbll joined #salt
08:50 gnomethrower joined #salt
08:50 JPT joined #salt
08:50 mage_ I have a stupid question: is there a difference between those two https://gist.github.com/silenius/393a11962c498299e16e43345804b89e ?
08:51 mage_ and what's the recommended approach between the two ?
08:52 davedash joined #salt
08:52 dragon788 joined #salt
08:52 phobosd__ joined #salt
08:52 munhitsu_ joined #salt
08:52 doriftoshoes__ joined #salt
08:52 babilen mage_: Impossible to say .. the former allows you to re-use settings for multiple machines and "compose" their settings easily, whereas the latter makes it clear that all (this is the important bit!!!) settings pertaining to somemachine are in that SLS
08:53 babilen I personally don't like deeply nested includes for pillars as they make it hard to see where specific data is coming from
08:53 mage_ so you prefer the former from my paste ? :)
08:54 mage_ and in terms of merging there are no differences ?
08:54 babilen If I had to adopt one, that would be my choice, yeah
08:54 mage_ thank you :)
08:54 babilen Merging might be easier to predict if you have various includes all over the place
08:54 babilen err
08:54 babilen *HARDER!
09:01 capn-morgan joined #salt
09:01 cofeineSunshine joined #salt
09:02 hojgaard joined #salt
09:03 yuhl joined #salt
09:03 mikecmpb_ joined #salt
09:15 StolenToast joined #salt
09:33 dijit salt-formulas generally have includes though
09:33 mikecmpbll joined #salt
09:35 schasi joined #salt
09:40 mishanti1 joined #salt
09:56 babilen dijit: For states, yes
09:56 babilen In the foo.* semantic
09:57 babilen You simply can't say "One is strictly better than another"
09:59 twooster Any idea why a `mine_function` to cmd.run would behave differently for globbing than using `salt .. cmd.run 'ls /somewhere/*'` on the command line?
10:00 twooster Particularly, I have `- mine_function: cmd.run   - cmd: cat /etc/ssh/ssh_host_*_key.pub` and I get a literal cat: '/etc/ssh/ssh_host_*_key.pub': No such file or directory in response
10:01 nickadam joined #salt
10:01 twooster Whereas running it via `salt` on the command line works just fine (and yes, I'm singlequoting the cat command i'm sending on command line)
10:09 zulutango joined #salt
10:12 twooster Ah. I see. that's really crappy. python_shell is set to True if it's being run as a `salt` job, but False otherwise. That's really confusing.
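twooster's finding comes down to who expands the glob: with a shell involved (`python_shell=True`) the shell expands `*` before the command runs; without one, the literal pattern reaches the command. A small standalone demonstration (temporary files are created just for the example):

```python
# Glob expansion is done by the shell, not by the command itself.
import os
import subprocess
import tempfile

d = tempfile.mkdtemp()
for fname in ("host_a_key.pub", "host_b_key.pub"):
    with open(os.path.join(d, fname), "w") as f:
        f.write(fname + "\n")

pattern = os.path.join(d, "host_*_key.pub")

# With a shell (analogous to python_shell=True): the pattern is expanded.
with_shell = subprocess.run("ls " + pattern, shell=True,
                            capture_output=True, text=True)

# Without a shell (analogous to python_shell=False): ls receives the
# literal string containing '*' and finds no such file.
without_shell = subprocess.run(["ls", pattern],
                               capture_output=True, text=True)

print(sorted(with_shell.stdout.split()))   # both files listed
print(without_shell.returncode)            # non-zero: no such file
```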
10:38 Ricardo1000 joined #salt
10:42 nich0s joined #salt
10:43 hoonetorg joined #salt
10:45 arc__ joined #salt
10:48 GMAzrael joined #salt
10:48 aogier joined #salt
10:49 aogier hi !
11:01 cofeineSunshine I wrote custom runner
11:01 cofeineSunshine I can call it from salt-call
11:01 cofeineSunshine can I call that runner from my state.sls?
11:07 zerocoolback joined #salt
11:12 debian112 joined #salt
11:15 sjorge joined #salt
11:17 Church- joined #salt
11:29 minion_ joined #salt
11:35 bartuss7 joined #salt
11:37 bartuss7_ joined #salt
11:40 bartuss7 joined #salt
11:40 bartuss7 left #salt
11:45 bartuss7 joined #salt
11:45 bartuss7 Hello. How can I get access to files located in /mnt/storage on the master? When I use salt://mnt/storage the state fails. If I'm right, when using salt:// the state tries to find files in file_roots, which is /srv/salt
11:46 Reverend bartuss7 salt:// means "/srv/salt" or wherever you configured it.
11:46 Reverend bartuss7 you could move your mnt i guess to /srv/salt/mnt/something ?
11:46 bartuss7 i can create symbolic link to this location
11:47 Reverend hmm. i dunno if that'll work... but try it and let me know :D
11:54 gmoro joined #salt
11:55 bartuss7 symbolic links do not work, but I also use the salt-master VM to share files via http, so I can specify the location as http://<master-ip>/path/to/file
12:00 twooster bartuss7: it's surprising that they do not work -- there's a `fileserver_followsymlinks` setting in the master conf that should allow it; could it be a permissions issue, if you're running the master not as root?
12:00 twooster bartuss7: you could probably also use a bind-mount to achieve this
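The two options twooster mentions both correspond to master-config settings; a fragment like this shows them (paths taken from the conversation, values illustrative):

```yaml
# /etc/salt/master (fragment) -- two ways to expose /mnt/storage:

# 1) let the fileserver follow symlinks placed under /srv/salt
fileserver_followsymlinks: True

# 2) or serve the directory as an extra file root; note a file at
#    /mnt/storage/foo.txt is then reachable as salt://foo.txt,
#    not salt://mnt/storage/foo.txt
file_roots:
  base:
    - /srv/salt
    - /mnt/storage
```

Either change requires a salt-master restart to take effect.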
12:01 arc__ joined #salt
12:04 zerocoolback joined #salt
12:08 Naresh joined #salt
12:10 GMAzrael joined #salt
12:11 Nahual joined #salt
12:13 evle joined #salt
12:13 TRManderson joined #salt
12:13 dnull joined #salt
12:14 skrobul joined #salt
12:14 McNinja joined #salt
12:14 djural joined #salt
12:27 Lionel_Debroux_ joined #salt
12:27 magz0r joined #salt
12:31 sh123124213 joined #salt
12:36 gh34 joined #salt
12:41 hammer065 joined #salt
12:42 bartuss7 twooster: it looks like the symbolic link works. The problem occurred because I tried to run state.apply using a different saltenv but did not specify another file_roots, so salt could not find the path to this location
12:51 vtolstov joined #salt
12:53 vtolstov Hi. I'm trying to use some salt formulas from github, and have a question - how can I avoid updating the master config to add each new formula dir? Is it possible to wildcard the dir or something?
12:55 babilen You can copy them all into the same directory, but I'd recommend explicitly enabling them in the configuration
12:56 babilen (we use GitFS for that and manage the configuration via salt-formula)
12:58 Miouge joined #salt
12:59 vtolstov @babilen: what do you mean by copy to the same directory?
13:04 swills_ joined #salt
13:04 swills joined #salt
13:04 swills joined #salt
13:11 pualj joined #salt
13:12 swills joined #salt
13:13 vtolstov for my use case formulas stored in /srv/salt/formulas
13:15 vtolstov and I don't want to restart the master after each formula is added
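babilen's GitFS suggestion looks roughly like this in the master config. The repo URLs below are only examples of formula repositories; any git remotes would do:

```yaml
# /etc/salt/master (fragment) -- serve formulas straight from git,
# avoiding a file_roots entry per formula checkout (example repos)
fileserver_backend:
  - gitfs
  - roots

gitfs_remotes:
  - https://github.com/saltstack-formulas/nginx-formula.git
  - https://github.com/saltstack-formulas/users-formula.git
```

Adding a new remote still means editing the config, but the formula content itself updates from git without touching /srv/salt.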
13:24 mamalos joined #salt
13:24 mamalos hey everybody!
13:25 mamalos A quick question: When using a decorator like memoize that caches data in a dict, how long is this cache present? (until the minion is exited?)
13:25 mamalos (memoize can be found in salt/utils/decorators/__init__.py)
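To mamalos's lifetime question: a dict-based memoize decorator keeps its cache in process memory, so it lives until the hosting Python process (the minion daemon, or a one-shot salt-call) exits. A simplified sketch of the pattern, not the actual Salt implementation:

```python
# Simplified memoize: the cache is a module-level dict, so it persists
# for the lifetime of the Python process that imported this module.
import functools

_cache = {}

def memoize(func):
    @functools.wraps(func)
    def wrapper(*args):
        key = (func.__name__, args)
        if key not in _cache:
            _cache[key] = func(*args)
        return _cache[key]
    return wrapper

calls = []

@memoize
def expensive(x):
    calls.append(x)   # record real invocations
    return x * 2

expensive(3)
expensive(3)          # second call served from _cache
print(len(calls))     # 1 -- the wrapped function ran once
```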
13:38 racooper joined #salt
13:38 ssplatt joined #salt
13:39 edrocks joined #salt
13:39 rpb joined #salt
13:43 lkolstad joined #salt
13:46 kbaikov joined #salt
13:48 bartuss7 left #salt
13:50 tapoxi joined #salt
13:53 rgrundstrom joined #salt
13:56 rgrundstrom Good afternoon
14:01 babilen Heya
14:02 noobiedubie joined #salt
14:02 rgrundstrom When I use a for loop and then a file.managed like so: https://gist.github.com/anonymous/90e5e2f5ee5fe2eabba8290d5495bf77#file-gistfile1-txt it seems that the watch on file: does not like that. Solutions?
14:02 cgiroua joined #salt
14:03 mchlumsky joined #salt
14:06 rgrundstrom I don't want to put the service.running inside the for loop, for that would cause the service to restart A LOT... and it takes a lot of time
14:06 rgrundstrom babilen: You are awesome at salt. Got any ideas?
14:07 rgrundstrom Just spotted my watch has the wrong ID but you get my point.
14:10 babilen :)
14:11 babilen What's the actual error?
14:12 lkannan joined #salt
14:12 rgrundstrom one minute
14:15 rgrundstrom Error : https://gist.github.com/anonymous/b1e976c368d15995373d90e3b493468f#file-gistfile1-txt
14:17 rgrundstrom again.... typos in the error message.... But then again... you get the point.
14:19 vtolstov I partially solved my issue with a custom state that downloads a tar archive with a specific revision from github and unpacks it to /srv/salt/formulas
14:20 vtolstov so file_roots: contains /srv/salt/formulas
14:20 vtolstov and all works fine
14:20 vtolstov but a restart of salt-master is needed, as I understand
14:21 obimod joined #salt
14:21 obimod good morning
14:24 Church- joined #salt
14:25 babilen rgrundstrom: Your state definitions aren't unique by id, but by name attribute .. either move the ID into the loop and generate a state each, or reference by name
14:26 babilen push_configuration_{{ bar }}:
14:26 babilen I also don't see the state where you define the requisite, but I guess the above is the issue
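babilen's fix, sketched as an SLS fragment (the names, paths, and pillar-free item list are all made up): each generated file state gets a unique ID, and a single service state outside the loop watches them by those IDs, so the service restarts once at most.

```jinja
{# hypothetical SLS: one file.managed per item, one service restart #}
{% set configs = ['foo', 'bar'] %}
{% for item in configs %}
push_configuration_{{ item }}:
  file.managed:
    - name: /etc/myapp/{{ item }}.conf
    - source: salt://myapp/files/{{ item }}.conf
{% endfor %}

myapp_service:
  service.running:
    - name: myapp
    - watch:
{% for item in configs %}
      - file: push_configuration_{{ item }}
{% endfor %}
```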
14:27 _KaszpiR_ joined #salt
14:27 Negi joined #salt
14:34 rgrundstrom I might have found the problem..... It's far fetched but maybe
14:35 viderbit joined #salt
14:35 DammitJim joined #salt
14:35 rgrundstrom nope that was not the problem.
14:37 Negi left #salt
14:37 NegiLXXXVIII joined #salt
14:37 NegiLXXXVIII hi. i'm trying to set a grain from within a module. therefore i use __salt__['grains.setval'](key=x,val=y). but somehow the grain is not set
14:37 NegiLXXXVIII does anybody know why the grain is not set?
14:38 numkem joined #salt
14:40 rgrundstrom babilen: I was wrong... There was some other stuff inside the for loop that the check did not like... Moved them inside its own for loop and presto :)
14:41 babilen What was it?
14:41 babilen which lines that is?
14:42 sh123124213 left #salt
14:45 rgrundstrom I did not have it in the gist for security reasons
14:46 babilen heh
14:47 babilen Do we need a #salt nda? ;)
14:51 rgrundstrom joined #salt
14:51 rgrundstrom And back
14:51 rgrundstrom Got disconnected.
14:51 sarcasticadmin joined #salt
14:55 SaucyElf_ joined #salt
14:55 _JZ_ joined #salt
14:57 mamalos /exit
14:59 rgrundstrom Next problem.... But i think this is a bug: https://gist.github.com/anonymous/cc8053f7a51cebded2174f6c5649f9ef#file-gistfile1-txt
15:00 Hybrid joined #salt
15:01 rgrundstrom minion version: 2016.11.5
15:06 rgrundstrom Never mind.... I have to update to latest version before doing anything else
15:06 rgrundstrom And it is soon time to go home.... Have a great day everyone.
15:07 schasi u2
15:09 DammitJim is there an easy way for me to generate a csv file from pillar data of a minion?
15:09 DammitJim and create a state for it? :D
15:10 onlyanegg joined #salt
15:13 nullwit joined #salt
15:18 sh123124213 joined #salt
15:21 fatal_exception joined #salt
15:23 sp0097 joined #salt
15:23 edrocks joined #salt
15:24 Brew joined #salt
15:26 Hybrid joined #salt
15:29 fatal_exception joined #salt
15:30 nullwit Hi all, I have a minion that is about 50/50 whether it comes back with the queries I send it or comes back with "Minion did not return". Any idea what would cause the minion to be so intermittent??
15:31 LinkRage joined #salt
15:34 sh123124213 nullwit: what are you running ? module ? state ?
15:34 nullwit test.ping
15:34 TomJepp nullwit: I had that with questionable WAN links
15:34 TomJepp solved it by enabling keepalives
15:35 sh123124213 nullwit: do you have a syndic ?
15:36 nullwit @TomTepp okat thanks, I will try and figure out how to do that.....really new with Salt
15:36 sh123124213 does the minion receive the events ?
15:39 nixjdm joined #salt
15:39 Miouge left #salt
15:56 Church- joined #salt
15:59 btorch did salt 2017.7.0 or 2017.7.1 change external_pillars cache location ?
16:02 nullwit @sh123124213 no i do not have a syndic....ive configured keep alives and still having No Response issues
16:02 edrocks joined #salt
16:05 christopherl-sf joined #salt
16:05 mpanetta joined #salt
16:05 sh123124213 nullwit: is master-minion on a wan link or lan ?
16:05 numkem joined #salt
16:06 sh123124213 I would check if minion receives the request first by tailing the log
16:06 nullwit lan....ill try tailing the log
16:06 sh123124213 I'm guessing both master/minion are linux
16:06 fritz09 joined #salt
16:06 omie888777 joined #salt
16:06 nullwit naw, minion is windows, master centos
16:06 sh123124213 ahm
16:07 sh123124213 did you use the default package of salt for installation of the minion ?
16:07 nullwit yea, all except the keepalive now
16:08 sh123124213 you don't have multimaster or something right ?
16:08 nullwit no, single master with 4 minions (2 windows/2 centos)....it seems to be working a little better now (I think the keepalives are actually sending)
16:09 sh123124213 in a lan there should be no issues unless you have network problems
16:09 woodtablet joined #salt
16:09 sh123124213 what is the latency from master to minion ( ping -c 1 minion )
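The keepalive fix TomJepp mentioned earlier lives in the minion config; a fragment like this enables it (the timing values are illustrative, not recommendations):

```yaml
# /etc/salt/minion (fragment) -- TCP keepalives, useful on links that
# silently drop idle connections
tcp_keepalive: True
tcp_keepalive_idle: 300    # seconds of idle before the first probe
```

The minion needs a restart after changing its config.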
16:15 DammitJim how do I write a file on the fly and store it in a minion?
16:15 DammitJim or do I have to store it in the master and then file.managed it?
16:16 joecrowe joined #salt
16:20 swills joined #salt
16:20 swills joined #salt
16:22 Church- joined #salt
16:24 joecrowe Anyone here have any experience with setting up Syndic with the salt masters?
16:25 sh123124213 joecrowe: what do you need ?
16:26 joecrowe I am trying to figure out if the /etc/salt/master file should be similar to the "master of masters" file with the syndic bits set up stated here https://docs.saltstack.com/en/2016.11/topics/topology/syndic.html , or if I just need those bits.
16:27 joecrowe on the Syndic Master.
16:29 woodtablet i just set up a syndic master last week, i am not sure if i am the best person to answer, but it is going to be pretty similar
16:29 numkem joined #salt
16:29 woodtablet for the file base
16:30 woodtablet with the master of masters naturally saying order_masters: true, while the syndic won't have that
16:30 woodtablet it works better than i thought too
16:31 woodtablet i mean expected
16:31 hammer065 Is there a way to remove the salt masters minion cache?
16:32 hammer065 Or a way to say the master to update it forcefully?
16:36 joecrowe woodtablet: so the Syndic Master should have the same information aside from the order_masters bit ?
16:36 onlyanegg salt-call saltutil.clear_cache ?
16:37 woodtablet i believe so, and the syndic server needs this syndic_master: $Master_of_masters
16:38 joecrowe Cool. Thanks.
16:38 joecrowe How is syndic working out for you currently?
16:38 woodtablet you are welcome
16:38 woodtablet great
16:39 joecrowe Yeah I have servers that are over seas and I'd like to have a local master to those servers.
16:39 woodtablet salt calls from the master of masters are a little slower, but it solved my other problems
16:39 woodtablet ya, i have that same setup
16:39 sh123124213 This should speed up syndic->master connectivity -> https://github.com/saltstack/salt/pull/43053
16:41 woodtablet ah that is why its slow, i see
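The topology woodtablet and joecrowe are discussing boils down to two small config fragments (the hostname is a placeholder):

```yaml
# Master of masters, /etc/salt/master (fragment):
order_masters: True

# Syndic node, /etc/salt/master (fragment) -- point at the master
# of masters; the salt-syndic daemon also runs on this node:
syndic_master: mom.example.com
```

Apart from these keys, the syndic's master config can mirror a normal master's, as discussed above.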
16:41 hammer065 onlyanegg: I want to clear the cached data of the minions on my master, not the masters cache on a/the minion/-s
16:41 hammer065 For ex. I'm working with grains and when they change on the client they don't get updated on the server
16:42 hammer065 Which is annoying since that's basically ruining the use of grains
16:46 whytewolf hammer065: what do you mean they don't get updated on the server?
16:47 hammer065 I'll send a screenshot explaining the issue
16:47 hammer065 One sec
16:50 pipps joined #salt
16:50 sh123124213 hammer065: you mean this https://github.com/saltstack/salt/issues/17204?
16:51 hammer065 https://puu.sh/xhGYm/53e6ad490a.png
16:52 hammer065 I guess this explains what the general problem is
16:52 woodtablet hammer you could try this > salt '*' saltutil.sync_all && salt-run saltutil.sync_all
16:52 MTecknology accidentally locked myself out.. time for  salt '<host>' cmd.run 'iptables -F sshguard'  :P
16:53 woodtablet mtecknology - ya.. its a life saver =D
16:54 hammer065 Apparently sync_all resyncs the grains; now its working as expected
16:54 hammer065 ¯\_(ツ)_/¯
16:54 woodtablet =D
16:54 obimod :)
16:54 hammer065 Also strike the salt-run part, saltutil.sync_all doesn't exist for salt-run
16:55 whytewolf hammer065: sync_all does sync the grains. also, in the future: clearing the master cache would not have actually fixed your issue, it would have made it worse until the grains were synced
16:55 whytewolf what version are you on
16:55 hammer065 salt 2015.8.8 (Beryllium)
16:56 whytewolf ahh, a very old one
16:56 whytewolf [saltutil for runners was added later. but wasn't needed for this case anyway]
16:57 swills joined #salt
16:57 swills joined #salt
16:57 whytewolf but if you ever DO want to clear the grains cache on the master. salt-run cache.clear_grains
16:57 hammer065 Well but it definitely refreshes the cache, since I was calling saltutil.sync_all before running the command which changed the grain
16:58 hammer065 Well, just have to execute it twice then ¯\_(ツ)_/¯
16:58 hammer065 Or once after the command
16:58 whytewolf yeah was going to say just move it to after
16:59 pipps joined #salt
17:04 cholcombe joined #salt
17:05 pipps joined #salt
17:05 cholcombe o/ salt.  I was using salt a few years ago and it seems things have changed a bit since the last time i looked at it
17:06 cholcombe it seems /etc/salt/master now specifies /srv/salt/dev/services and states as well as prod/services and states.  What's the difference between services and states?
17:08 Vaelatern joined #salt
17:08 whytewolf that is an example of how someone might set up a file_root. there is no difference.
17:08 whytewolf [and that example has been there since at least 0.17.5
17:08 whytewolf ]
17:09 cholcombe i see
17:09 pipps99 joined #salt
17:12 tapoxi_ joined #salt
17:14 relidy joined #salt
17:15 pipps joined #salt
17:22 pipps joined #salt
17:32 pipps joined #salt
17:34 brousch left #salt
17:35 DammitJim can one create a salt state where a custom module is called and a file is generated and saved on the minion without having to be saved on the master?
17:36 GMAzrael joined #salt
17:37 whytewolf that sentence doesn't make a lot of sense ... you mean even the custom module doesn't exist on the master?
17:38 riftman joined #salt
17:39 skatz joined #salt
17:40 flebel joined #salt
17:40 skatz Is saltutil.sync_all supposed to use the value of "environment" in the minion config file if saltenv isn't specified on the command line? It looks like it uses "base" if you don't specify saltenv on the command line.
17:41 skatz (This is with gitfs btw if that makes a difference)
17:41 Miouge joined #salt
17:43 whytewolf it is supposed to sync from all environments configured in the top files. but if it can't determine based on the top files it will sync from base.
17:44 sh123124213 how do I cache grains on the master and avoid calling the minion directly when I run 'salt -L minion grains.item os'  ?
17:45 whytewolf sh123124213: um... salt 'anything' always calls a minion
17:46 sh123124213 ok, any other way I can get those cached data ?
17:46 sh123124213 salt-run cache.grains
17:47 whytewolf yeah. that will crab all of the grains that the master has on a minion.
17:47 sh123124213 I hope you mean grab
17:48 whytewolf i do. slip of the fingers
17:48 sh123124213 :p
17:48 whytewolf :P
17:49 sp0097 joined #salt
17:50 impi joined #salt
17:50 DammitJim whytewolf, oh no, I"m saying custom module because I created one to get pillar info and put it on the csv file I want to put on the minion
17:52 DammitJim so, I guess my general question is... is there a way to run a state that will pull pillar data to be saved on a file on the minion
17:53 DammitJim the data from pillar is just a list
17:53 sh123124213 DammitJim: You can try to loop through pillar data with jinja
17:54 sh123124213 and have a cmd.run that pipes data to a file
17:54 whytewolf DammitJim: file.serialize
17:54 sh123124213 dunno if there is a state that can do it except cmd.run
17:54 DammitJim sh123124213, I already have a custom module that pulls the data from pillar and "prints it"
17:54 DammitJim the next step was to either write it to a file on the master, then file.managed to the minion
17:55 DammitJim or find a way to do this directly on the minion
17:55 cholcombe whytewolf: do we still have to configure the minion to run highstate on cron or is that a thing of the past now?
17:56 sh123124213 so I know for a fact that you can loop through the pillar data. the only thing I'm not sure is if you can use some state to append the data to a file
17:56 DammitJim whytewolf, oh, file.serialize doesn't store the file on the master, huh?
17:56 whytewolf cholcombe: you can put it in cron. or use the internal scheduler...
17:56 cholcombe ok
17:58 tacoboy joined #salt
17:58 duckdanger joined #salt
17:58 whytewolf how many years ago did you run saltstack. you keep asking about things that have been in it for a long time. [like since 2014 at least]
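whytewolf's file.serialize suggestion for DammitJim's pillar-to-file question looks roughly like this; the pillar key `myapp:rows` and the target path are made up for the example:

```yaml
# hypothetical SLS -- render pillar data onto the minion directly,
# no intermediate file on the master
write_pillar_data:
  file.serialize:
    - name: /etc/myapp/data.json
    - dataset_pillar: myapp:rows
    - formatter: json
```

file.serialize emits structured formats like JSON or YAML, not CSV; for a CSV file the usual fallback is looping over the pillar list with Jinja inside a file.managed `contents` block.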
17:58 tacoboy joined #salt
18:00 Church- joined #salt
18:05 relidy joined #salt
18:09 DanyC joined #salt
18:19 mikecmpbll joined #salt
18:20 pipps joined #salt
18:27 pipps joined #salt
18:32 bowhunter joined #salt
18:34 AnotherNick joined #salt
18:43 ChubYann joined #salt
18:47 pipps99 joined #salt
18:54 mpanetta joined #salt
18:56 scooby2 joined #salt
19:10 DammitJim joined #salt
19:12 sarcasticadmin joined #salt
19:12 Hybrid joined #salt
19:13 brianthelion joined #salt
19:18 kjsaihs joined #salt
19:25 hashwagon joined #salt
19:31 englishm_work joined #salt
19:33 Hybrid joined #salt
19:41 toanju joined #salt
19:49 edrocks joined #salt
19:49 JohnnyRun joined #salt
19:52 pipps joined #salt
19:55 meandme joined #salt
19:57 systemexit joined #salt
20:02 sp0097 joined #salt
20:09 sarcasticadmin joined #salt
20:16 shanth_ i think it was your idea but i finally created a pillar file for each host whytewolf. definitely was the best solution and very easy to manage a bunch of custom data now
20:18 pipps joined #salt
20:18 noobiedubie joined #salt
20:22 eseyman joined #salt
20:28 pipps joined #salt
20:35 relidy joined #salt
20:41 vodik joined #salt
20:44 onlyanegg joined #salt
20:49 omie888777 joined #salt
20:56 coredumb1 joined #salt
21:20 cholcombe will a pillar match of 'hostname-a[17-30]' work with glob?
21:20 cholcombe i'm trying to give it a list of hostnames like i would in bash
21:21 whytewolf that should work yes
21:22 whytewolf [as long as it ends on a number]
21:22 cholcombe so anything with hostname-a17 through hostname-a30 should match
21:22 cholcombe cool
21:22 whytewolf https://docs.saltstack.com/en/latest/topics/targeting/globbing.html
21:23 cholcombe ah yeah i see it under web[1-5].  thanks!
21:29 debian112 joined #salt
21:29 cholcombe whytewolf: i seem to be tripping over something with pillar.  I have a base: '*' match in my top.sls in /srv/pillar/top.sls.  On the master when i do salt '*' pillar.items I see nothing for my 1 minion
21:29 whytewolf can you gist up more of what you have? the top file and the pillar sls files
21:30 cholcombe sure
21:31 cholcombe whytewolf: https://gist.github.com/cholcombe973/0b4caef925837d1f361dc3f476293e52
21:32 cholcombe oh and top.sls is in /srv/pillar/
21:32 whytewolf yeah, kind of figured you had that in the right place :)
21:33 cholcombe never know :D
21:33 cholcombe i'm working on a tiny amount of sleep today because of my toddler
21:33 organized_mayhem joined #salt
21:33 cholcombe i didn't uncomment /srv/pillar in the /etc/salt/master because i figured it was the default
21:34 tacoboy joined #salt
21:35 whytewolf just to be safe [it works in my tests but is worth a shot anyway] does salt 'pistore-ho-b[17-32]' test.ping return the minion?
21:35 cholcombe i also tried changing the glob to a '*' and that didn't help
21:35 cholcombe lets see
21:36 whytewolf ahh okay. yeah if it isn't showing with '*' then something else is up
21:36 cholcombe `No minions matched the target. No command was sent, no jid was assigned.
21:36 cholcombe ERROR: No return received`
21:36 cholcombe yeah i think that's the problem
21:37 whytewolf humm, it shouldn't have problems with that matcher
21:37 cholcombe that's interesting
21:37 cholcombe works fine with the full minion name
21:38 whytewolf is there anything after the 17 in the minion name?
21:38 cholcombe yeah
21:38 cholcombe it's .domain.name
21:38 whytewolf then try pistore-ho-b[17-32]*
21:38 cholcombe that also didn't work
21:39 cholcombe i'm running the latest stable salt.  2017.7.1
21:39 whytewolf https://gist.github.com/whytewolf/d09fea4341d78184c44a4308f713a7c9
21:39 whytewolf that was me testing it [also with 2017.7.1]
21:39 cholcombe yeah we basically have the same thing
21:40 cholcombe i wonder if it has to be pistore[1-3]*
21:40 cholcombe haha that worked
21:40 whytewolf did you change the minion name?
21:40 cholcombe no
21:40 cholcombe the glob isn't matching 17-32
21:41 cholcombe it's matching the first number
21:41 cholcombe that doesn't make sense
21:41 whytewolf humm. might need a bug report on that. that isn't how that should work
21:41 gtmanfred do 'pistore-hob[1-3][0-9]*'
21:41 cholcombe i was thinking maybe it has to be like [1-3][7-2]
21:42 cholcombe yeah let me try that
21:42 cholcombe yeah your last one worked
21:42 cholcombe [1-3][0-9]*
21:42 gtmanfred yeah, the glob expansioncan only do single digits it seems
21:42 cholcombe yeah
21:42 gtmanfred you can always use pcre minionid
21:42 cholcombe i've seen that other places as well
21:42 whytewolf unfortunately that's going to match 10-39, not 17-32
21:43 cholcombe right haha
21:43 whytewolf yeah pcre might be a better choice
21:43 gtmanfred salt -E 'pistore-hob[17-32].*'
21:43 cholcombe that's a Perl regular expression?
21:44 whytewolf yeap
21:44 cholcombe that doesn't match either
21:44 cholcombe lemme find a reg expression tester online
21:46 btorch left #salt
21:47 vodik joined #salt
21:53 whytewolf pistore-ho-b(1[7-9]|2[0-9]|3[0-2]).*
21:54 cholcombe i found it
21:54 cholcombe salt 'pistore-ho-b[17,32]*' test.ping
21:54 cholcombe that works
21:54 whytewolf unless you have 18
21:54 whytewolf 19
21:54 cholcombe oh but it also catches things less than 17
21:54 whytewolf 20
21:54 cholcombe dammit haha
21:54 whytewolf it will catch 17 and 32
21:55 cholcombe the regex101.com site says it'll catch things in between also
21:55 cholcombe yeah that's def not right
21:55 whytewolf I'm on that site
21:56 cholcombe yeah yours works haha
21:56 cholcombe much appreciated
21:56 cholcombe ahh yeah i see what you did there.
21:57 cholcombe 17-19 or 20-29 or 30-32
21:57 whytewolf yeap
21:57 cholcombe tricky regex's
21:57 whytewolf pcre apparently also treats [] as a single-character match, but it has a way to make up for that
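[editor's note: whytewolf's alternation works by splitting 17-32 into sub-ranges that single-character classes can express; a quick check with Python's `re` module (same PCRE-style syntax Salt's `-E` matcher uses) — note that inside a class, as in the earlier `[17,32]` attempt, digits and the comma are literal characters, which is why that pattern over-matched]

```python
import re

# 1[7-9] -> 17-19, 2[0-9] -> 20-29, 3[0-2] -> 30-32
pattern = re.compile(r"pistore-ho-b(1[7-9]|2[0-9]|3[0-2]).*")

matched = [n for n in range(1, 50) if pattern.match(f"pistore-ho-b{n}")]
print(matched[0], matched[-1])  # 17 32
```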
21:58 hashwagon I'm getting a "Comment: mount: special device /dev/sdb1 does not exist" when it does in fact exist on the minion and is mountable. Any recommendations?
21:59 whytewolf are you running your minion as root?
21:59 hashwagon It's an ubuntu 16.04 system. I'm signed in as a regular user now, but ran sudo -i.
22:00 debian112 joined #salt
22:00 whytewolf i meant the minion daemon not you. [unless you are doing salt-call then you should be root also]
22:00 whytewolf anyway, add -l debug
22:01 whytewolf to a salt-call version of the command you are running
22:01 hashwagon salt-minion is running as root
22:02 oida joined #salt
22:03 oida joined #salt
22:09 hashwagon Here's the output: https://pastebin.com/ibhTHU1X
22:10 whytewolf that is from salt, need it from salt-call on the minion.
22:10 Guest73 joined #salt
22:11 Edgan joined #salt
22:20 hashwagon Strange, I'm getting some kind of a key conflict. [CRITICAL] The Salt Master has rejected this minion's public key! I'm re-authing the key now.
22:20 oida joined #salt
22:21 whytewolf interesting. wonder if you have more than one minion with the same minion id somehow.
22:22 hashwagon I've been wiping this minion and reloading as a testing bed. I'm not showing any authentication requests on the master yet. Hmm.
22:23 whytewolf restart the minion software it will force the auth request
22:23 hashwagon Did a reboot first, just restarted the salt-minion service. Normally it's pretty quick. I'll give it a minute.
22:26 hashwagon journalctl | grep salt-minion is showing [WARNING ] Minion received a SIGTERM. Exiting. The Salt Minion is shutdown. Minion received a SIGTERM. Exited.
22:26 oida joined #salt
22:27 hashwagon systemctl status salt-minion is showing: active (running)
22:27 hashwagon Interesting stuff.
22:27 whytewolf check your minion logs
22:31 oida joined #salt
22:38 oida joined #salt
22:39 hashwagon The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
22:40 hashwagon the minions hostname isn't showing up in salt-key -L
22:41 hashwagon I see it showing up on the master now
22:41 hashwagon It was taking a bit longer
22:43 oida joined #salt
22:44 hashwagon Anyway, I think I got it. Thanks for the suggestions.
22:45 whytewolf no problem :) have a good day
22:46 omie888777 joined #salt
22:49 hasues joined #salt
22:49 hasues left #salt
22:50 hasues joined #salt
22:55 hasues I'm trying to find the right way of using "accept_keywords" from portage_conf.flags. http://dpaste.com/0J7R7DA.  It looks like YAML does not like the use of **.  How should I denote this?
22:56 whytewolf '**'
22:56 hasues Hm, okay, I tried "" instead of ' '.  Let me give that a shot.
22:57 whytewolf I'm just going by what a yaml lint site is telling me
22:57 hasues Well, based on the complaint from the paste, that sounds right
22:57 hasues Bah, it still throws an exception :(
22:59 brianthelion joined #salt
23:00 whytewolf if '**' doesn't work. it isn't yaml
23:00 hasues Probably.  I could understand why it may trip Salt up, so I'm trying to see if there is another way of saying what I want done.
23:01 jessexoc joined #salt
23:05 cholcombe can i put my sysctl's in a big list in an sls file?  I have a bunch and it'll probably grow over time
23:07 cholcombe nvm i figured it out :)
23:07 cholcombe silly brainfarts
23:08 whytewolf they happen :)
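[editor's note: for the record, one way to keep a growing set of sysctls in a single place is to loop over a pillar dict; the pillar key and file layout below are made up for illustration, not from the log]

```yaml
# /srv/salt/sysctl/init.sls (hypothetical layout)
{% for name, value in salt['pillar.get']('sysctl_settings', {}).items() %}
sysctl-{{ name }}:
  sysctl.present:
    - name: {{ name }}
    - value: {{ value }}
{% endfor %}

# pillar example:
# sysctl_settings:
#   vm.swappiness: 10
#   net.core.somaxconn: 4096
```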
23:10 hasues Okay, the ' ' was the fix.  Thanks again whytewolf.
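[editor's note: the quoting fix works because `*` is YAML's alias indicator, so a bare `**` fails to parse as a scalar; a minimal sketch of the distinction, with generic keys rather than the actual portage_config structure]

```yaml
# Unquoted, YAML reads ** as an alias reference and raises a parse error:
# accept_keywords:
#   - **        # invalid YAML
# Single-quoted, it is just the literal string "**":
accept_keywords:
  - '**'
```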
23:10 whytewolf np hasues. I take it the issue was caching between tests?
23:11 hasues I'm not sure.  I think Salt may not have understood a result coming back from portage.  I rewrote my mask another way, and it seemed to work (after I wiped out all my portage configuration and allowed Salt to rebuild it).
23:12 hasues The exception was coming back because Portage was asking to have specific accept keywords on a specific version of the package, and the global should have caught it, but it's fixed.
23:12 whytewolf nice, least it is fixed
23:13 hasues You can use portage mask, unmask, and accept keywords in Gentoo, but one shouldn't necessarily have to.  Perhaps that is what is causing it.  At least it allowed me to refine what I was saying :)
23:14 CrummyGummy joined #salt
23:20 cholcombe so I'm checking out: https://github.com/salt-formulas/salt-formula-ceph/blob/master/ceph/osd.sls and it looks like i have to specify the id's by hand.  Is that right?  I have some clusters with over 1100 disks.  That's gonna get crazy looking
23:22 cholcombe on the readme that looks correct.  So the pillar file will easily exceed 6000 lines haha
23:22 whytewolf that does look correct. i don't use formulas personally.
23:24 cholcombe yeah i'd like to figure out something a little more creative
23:24 cholcombe like eat every disk in the server that's not used for root and ask ceph for an ID to use
23:24 whytewolf i don't see why you couldn't do that. just not with the formula ;)
23:25 cholcombe yeah i'm just loosely following the formula to give myself a head start
23:25 GMAzrael joined #salt
23:25 cholcombe i'm hoping to wow my coworkers who are using ansible
23:25 shanth_ is there way to do a pillar.get pillarname and return the minion that meet a certain setting? i want to do pillar.get domain for minions that are example.com
23:25 cholcombe i'll be like watch. check the server into salt and you're done.  there is no step #2 lol
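[editor's note: cholcombe's "eat every disk not used for root" idea could be sketched with the `disks` grain instead of a hand-written pillar; everything below is hypothetical — the root-disk name, the skip logic, and the `ceph-volume` command (older Ceph releases used `ceph-disk` instead) are assumptions, not from the log]

```yaml
# Hypothetical sls: create an OSD on every block device except the
# root disk, letting Ceph assign the OSD id itself.
{% set root_disk = 'sda' %}
{% for disk in grains.get('disks', []) if disk != root_disk and not disk.startswith('loop') %}
osd-{{ disk }}:
  cmd.run:
    - name: ceph-volume lvm create --data /dev/{{ disk }}
    - unless: ceph-volume lvm list /dev/{{ disk }}
{% endfor %}
```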
23:26 whytewolf shanth_: um, huh. your question broke my head in trying to wrap my head around the logic there
23:27 whytewolf are you storing minion id's in a pillar?
23:27 shanth_ ah whoops long day - meant grains
23:27 shanth_ i just wanna do salt '*' grains.get domain=example.com
23:28 whytewolf oh you want something like salt -G 'domain:example.com' grains.get domain?
23:28 shanth_ yeah
23:28 shanth_ i guess i could do salt -G 'domain:example.com' as well - brain is fried and couldn't think of how to do it
23:28 CmndrSp0ck joined #salt
23:29 whytewolf it is okay. my head hasn't stopped hurting this month
23:30 CmndrSp0ck joined #salt
23:36 ssplatt joined #salt
23:37 mchlumsky joined #salt
23:54 oida joined #salt
23:56 oida joined #salt
23:59 cyborg-one joined #salt