
IRC log for #salt, 2016-12-07


All times shown according to UTC.

Time Nick Message
00:01 jrgochan no sort of {% for file in dir %} type thing?
00:01 whytewolf jrgochan: are you failing to grasp that that logic doesn't work because the filesystem doesn't exist on the minion where the state will render?
00:02 buu jrgochan: why can't you copy defaults then copy the customs on top of them?
00:02 whytewolf buu: that won't be ideal as the two states will always fight
00:02 jrgochan buu: that's essentially what I'm trying to do, but in one file. I don't want to make a file.managed statement for each file in /srv/salt/files
00:02 buu jrgochan: You can do that in one file..
00:02 buu whytewolf: Well, yes, but it sounds better than the other options so far..
00:03 whytewolf buu: thats just 2 recurses
00:03 buu yes it is
00:03 whytewolf one for the default and then a second for the custom
00:03 jrgochan I tried writing up something like that, but couldn't get the custom to overwrite when needed
00:03 whytewolf the custom should require the default
00:04 jrgochan http://pastebin.com/1Zb7sXaH
00:04 whytewolf oh please don't use order
00:04 whytewolf it really tramples on state ordering
00:04 jrgochan roger. order deleted
00:05 whytewolf order: last is only for in-case-of-emergency use
00:05 whytewolf anyway in copy_etc_custom. add - require: - file: copy_etc
00:07 jrgochan hrm. it's still preferring the default over the custom
00:07 jrgochan http://pastebin.com/1Ud5Aa9x
00:09 whytewolf this would be so much simpler if you actually made templates of your files so that the defaults and custom could exist in the same space
00:09 keimlink joined #salt
00:10 whytewolf [hint: recurse can take a template flag]
00:11 Sammichmaker im familiar with terraform, puppet, a little bit of chef.. but die-hard python fan so I want to give saltstack a good trial in a test env.  Also need to look at salt-cloud..? oh well, to the docs!
00:11 khaije1 is there a difference between nodename and id?
00:12 jrgochan ahhhhhh. I had the minion in test mode... it didn't seem to want to apply the custom changes, but after test=False-ing the minion it worked just fine
00:12 jrgochan cool
00:12 jrgochan whytewolf: Thanks!
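A minimal sketch of the two-recurse approach discussed above, with the custom tree layered on top of the defaults via require; the paths and state IDs are illustrative (the real ones are in the pastebins), and a template flag can be added if the files are Jinja templates:

    copy_etc:
      file.recurse:
        - name: /etc/myapp              # illustrative target
        - source: salt://files/etc      # defaults
        # - template: jinja             # per the hint above, if the files are templated

    copy_etc_custom:
      file.recurse:
        - name: /etc/myapp
        - source: salt://files/etc_custom
        - require:
          - file: copy_etc

As noted further down, the two states will keep fighting: the default pass rewrites customised files on every run, so each highstate reports a diff.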
00:14 whytewolf khaije1: in theory no, in practice sometimes.
00:14 whytewolf khaije1: most of the differences come up when spinning up with salt-cloud and looking for events on the bus
00:16 jas02 joined #salt
00:18 dendazen joined #salt
00:18 khaije1 whytewolf: OK cool, I like it better as a device label than minion/id
00:19 khaije1 whytewolf: thanks!
00:19 khaije1 /afk
00:27 jrgochan one last thing i suppose. Is there any way to tell a state to not output anything when applied?
00:27 jrgochan or when errored
00:27 whytewolf not really
00:28 jrgochan kk
00:28 whytewolf you can change the outputter to a less verbose one. but you will still get most of the info
00:29 jrgochan good to know. Looks like every time I run this copy and copy_custom it does the default change, then the custom change. so it will always output a diff even if the file is in sync with the custom
00:29 whytewolf yes, that is what i meant by they will fight
00:30 jrgochan hrm... seems as if they don't always though. odd
00:34 jas02 joined #salt
01:01 nickabbey joined #salt
01:08 woodtablet left #salt
01:09 raspado does making changes to /etc/salt/cloud require a salt-master restart?
01:11 hemebond raspado: Possibly not.
01:11 hemebond Though I would recommend doing stuff in cloud.*.d/
01:11 hemebond I've found that changes are usually available immediately.
01:12 lws joined #salt
01:23 nidr0x joined #salt
01:28 lws joined #salt
01:28 raspado sweet seems to be true
01:34 raspado so setting stable 2.3.0.4 in /etc/salt/cloud worked like a champ
01:34 raspado oops 2016.3.4
01:35 jas02 joined #salt
01:43 raspado can /etc/salt/cloud be put in cloud.conf.d ?
01:52 amcorreia joined #salt
01:52 bocaneri joined #salt
01:54 hemebond Don't know, I've not edited that one.
01:54 hemebond I do everything in providers and profiles.
01:56 nZac joined #salt
02:02 raspado yeah same here, theres no documentation on cloud.conf.d either
02:05 XenophonF joined #salt
02:12 catpigger joined #salt
02:18 jas02 joined #salt
02:18 raspado seems like this will work in the cloud providers config too
02:32 heewa joined #salt
02:35 raspado what is the best way to upgrade the salt masters
02:35 raspado im on 2016.3.4, id like to get up to carbon 2016.11.0
02:38 raspado does the new carbon release support defining public ips via salt-cloud?
02:49 Tech01x joined #salt
02:51 Tech01x Hi, I am still struggling with understanding state IDs and dictionaries
02:52 Tech01x I am using the apache formula
02:52 jas02 joined #salt
02:53 Tech01x and to get the configuration stuff to work, I put directives like:   - apache.config   - apache.no_default_vhost   - apache.vhosts.standard
02:53 Tech01x in the top.sls under 'web*':
02:53 Tech01x that works
02:53 Tech01x but then I create a webconfig directory
02:54 Tech01x use - webconfig in the top.sls
02:54 Tech01x and then create an init.sls
02:54 Tech01x I get lost
02:55 Tech01x in the init.sls I include another sls
02:55 Tech01x and I get an error
02:56 Tech01x ID webconfig in SLS is not a dictionary
02:56 Tech01x the type of webconfig ... is not formatted as a dictionary
02:59 evle joined #salt
03:08 MeltedLux joined #salt
03:08 devster31 joined #salt
03:08 sh123124213 joined #salt
03:10 mpanetta joined #salt
03:10 AndreasLutro joined #salt
03:13 AvengerMoJo joined #salt
03:20 raspado cool so it seems like i dont need to pass script: bootstrap-salt with script_args? just passing script_args seems to work just fine
03:29 fgimian joined #salt
03:31 debian112 joined #salt
03:39 bastiand1 joined #salt
03:45 bfrog_ joined #salt
03:46 GordonTX joined #salt
04:01 Jimlad joined #salt
04:07 Jimlad joined #salt
04:13 plinnell joined #salt
04:13 jas02 joined #salt
04:19 swills joined #salt
04:21 jas02 joined #salt
04:31 faizy_ joined #salt
04:37 cro joined #salt
04:38 amontalb1n joined #salt
04:38 NightMonkey joined #salt
04:42 DEger joined #salt
04:43 justanotheruser joined #salt
04:45 cro joined #salt
04:50 DEger joined #salt
04:50 jacksontj joined #salt
04:58 preludedrew joined #salt
05:00 fleaz joined #salt
05:08 bfrog_ joined #salt
05:10 sh123124213 joined #salt
05:13 mpanetta_ joined #salt
05:14 mauli_ joined #salt
05:14 Ryan_Lane_ joined #salt
05:14 shawnbutts_ joined #salt
05:15 jas02 joined #salt
05:15 qman joined #salt
05:15 saltstackbot joined #salt
05:15 kuromagi^ joined #salt
05:15 czchen_ joined #salt
05:15 phtes joined #salt
05:15 djural_ joined #salt
05:15 hacks_ joined #salt
05:15 nickadam joined #salt
05:15 johtso_ joined #salt
05:16 MeltedLux joined #salt
05:17 systeem joined #salt
05:17 Laogeodritt joined #salt
05:17 esharpmajor joined #salt
05:17 HRH_H_Crab joined #salt
05:17 arapaho joined #salt
05:18 TRManderson joined #salt
05:18 doriftoshoes joined #salt
05:18 imanc joined #salt
05:18 bmcorser joined #salt
05:19 CaptTofu joined #salt
05:19 robawt joined #salt
05:19 godlike joined #salt
05:19 godlike joined #salt
05:19 gmacon joined #salt
05:19 bVector joined #salt
05:20 manji joined #salt
05:20 onovy joined #salt
05:20 daks joined #salt
05:20 xenoxaos joined #salt
05:20 KingJ joined #salt
05:20 LordOfLA joined #salt
05:21 evilrob joined #salt
05:21 rideh joined #salt
05:21 frew joined #salt
05:21 Jarus joined #salt
05:21 shalkie joined #salt
05:23 linovia joined #salt
05:24 nZac joined #salt
05:24 jas02_ joined #salt
05:25 MeltedLux joined #salt
05:26 al joined #salt
05:28 izibi joined #salt
05:28 ThomasJ|m joined #salt
05:28 hax404 joined #salt
05:29 jcl[m] joined #salt
05:29 tongpu joined #salt
05:31 lubyou_ joined #salt
05:31 hemebond joined #salt
05:33 lubyou__ joined #salt
05:43 madbox joined #salt
05:45 lilvim joined #salt
05:46 Bryson joined #salt
05:47 madboxs joined #salt
05:48 madboxs joined #salt
05:56 GordonTX joined #salt
06:05 pipps joined #salt
06:09 pipps99 joined #salt
06:10 pipps_ joined #salt
06:12 pipps joined #salt
06:15 pipps joined #salt
06:18 IgorK__ joined #salt
06:21 londo joined #salt
06:21 felskrone joined #salt
06:23 sjorge joined #salt
06:23 sjorge joined #salt
06:25 sh123124213 joined #salt
06:31 gnord joined #salt
06:31 pipps joined #salt
06:32 [SYN\ACK] joined #salt
06:33 jab416171 joined #salt
06:33 dtsar joined #salt
06:34 pipps joined #salt
06:35 pipps joined #salt
06:36 jas02 joined #salt
06:40 pipps joined #salt
06:45 cyborg-one joined #salt
06:47 pipps joined #salt
06:48 pipps joined #salt
06:52 madboxs joined #salt
06:53 pipps joined #salt
07:00 irctc218 joined #salt
07:01 irctc218 Hi there. Anybody here?
07:03 ecdhe joined #salt
07:05 pipps joined #salt
07:06 pipps joined #salt
07:09 nZac joined #salt
07:10 preludedrew joined #salt
07:10 preludedrew joined #salt
07:13 Antiarc joined #salt
07:15 ProT-0-TypE joined #salt
07:22 fracklen joined #salt
07:27 nidr0x joined #salt
07:30 keimlink joined #salt
07:33 sh123124213 joined #salt
07:35 impi joined #salt
07:36 babilen joined #salt
07:37 jas02 joined #salt
07:55 fracklen joined #salt
07:55 hemebond yes
07:55 fracklen joined #salt
07:58 ecdhe joined #salt
07:59 fracklen joined #salt
07:59 lubyou_ joined #salt
08:00 sarlalian_ joined #salt
08:01 J0hnSteel joined #salt
08:01 guerby joined #salt
08:01 lionel joined #salt
08:01 pfallenop joined #salt
08:02 irctc218 hemebond: while running a batch run over some thousand servers, what's the best way to see which ones have failed?
08:02 buu joined #salt
08:05 smkelly joined #salt
08:05 stooj joined #salt
08:05 UForgotten joined #salt
08:05 N-Mi joined #salt
08:06 APLU joined #salt
08:06 hemebond Does it not show you in the results?
08:06 hemebond I've not done batch commands before but I assumed the output was the same.
08:07 hemebond If you mean how to collect that information to make failures easy to find, I would probably use JSON output and parse that with something like jq.
08:09 irctc218 hemebond: yes it outputs that, but the output is very long. OK, so I've hacked around that with jq. I thought there might be a ready-to-run solution.
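For reference, the approach discussed here would look something like salt -b 100 '*' state.apply --out=json (standard CLI options), then filtering the per-minion JSON with jq for entries containing "result": false. For thousands of minions, shipping the returns to Elasticsearch via a returner (as suggested a few lines further down) scales better than parsing console output.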
08:09 dariusjs joined #salt
08:10 MaZ- joined #salt
08:18 yuhl_ joined #salt
08:21 lubyou_ joined #salt
08:25 samodid joined #salt
08:25 toanju joined #salt
08:27 teclator joined #salt
08:32 q1x joined #salt
08:34 nethershaw joined #salt
08:34 jas02 joined #salt
08:34 dariusjs joined #salt
08:40 iggy for that many boxes, I'd be sending the output to elasticsearch or something
08:48 ecdhe joined #salt
08:48 ecdhe joined #salt
08:49 fracklen Hi. I'm having trouble with salt-cloud. The bootstrap no longer creates the /etc/salt/minion.d/99-master.conf, but places all config in /etc/salt/minion. Here's the minion log - https://gist.github.com/fracklen/04356a4831d787a8cc5cdddfd535239a - help?
08:49 lubyou_ joined #salt
08:49 fracklen And btw - it doesn't connect to the master :)
08:50 lubyou_ joined #salt
08:51 Rumbles joined #salt
08:51 lubyou_ joined #salt
08:52 lubyou_ joined #salt
08:53 lubyou_ joined #salt
08:53 lubyou_ joined #salt
08:54 lubyou_ joined #salt
08:55 lubyou_ joined #salt
08:58 hemebond fracklen: So the minion doesn't get configured?
08:58 hemebond Which version of Salt?
08:59 fracklen 2016.3.4 - both master and minion
08:59 hemebond And it just started failing?
08:59 fracklen The minion does get configured - in a way
08:59 fracklen yep
08:59 hemebond In a way?
09:00 fracklen Config now goes in /etc/salt/minion
09:01 hemebond Then why can't it connect?
09:01 hemebond Does it have the proper master address?
09:01 fracklen hemebond: I've added a comment to the gist with the config file after bootstrap
09:01 fracklen yes
09:02 hemebond Can you telnet from the minion to the master on 4505 and/or 4506?
09:04 fracklen hemebond: Awesome - no, this appears to be a networking issue... Thanks!
09:04 hemebond 👍
09:04 fracklen Can you explain why the config structure has changed?
09:04 mikecmpbll joined #salt
09:05 hemebond I haven't a clue :-) I use a custom bootstrapping system.
09:14 Rumbles joined #salt
09:15 jas02 joined #salt
09:24 nZac joined #salt
09:27 Rumbles joined #salt
09:31 Rumbles joined #salt
09:39 s_kunk joined #salt
09:41 keimlink joined #salt
09:44 afics joined #salt
09:46 ronnix joined #salt
10:02 onlyanegg joined #salt
10:05 jhauser joined #salt
10:14 madboxs joined #salt
10:18 sjorge joined #salt
10:18 sjorge joined #salt
10:22 lubyou_ joined #salt
10:24 N-Mi_ joined #salt
10:30 ecdhe joined #salt
10:36 jas02_ joined #salt
10:39 pipps joined #salt
10:43 pipps joined #salt
10:46 lcsiki joined #salt
10:47 McNinja joined #salt
10:47 lcsiki left #salt
10:48 skrobul joined #salt
10:50 gladia2r joined #salt
10:56 SteamWells joined #salt
10:57 kutenai joined #salt
10:58 amcorreia joined #salt
10:59 ronnix joined #salt
11:02 shawnbutts joined #salt
11:03 Rumbles joined #salt
11:04 gladia2r hi, maybe its basic stuff, I just don't get it apparently yet :) - any advice is appreciated here on this: https://gist.github.com/gladia2r/25d115d0604511036cb98fc58ea7d42e (folder deploy based on grains) - I'm not really finding appropriate docs for this scenario
11:05 hemebond gladia2r: Your grain is a list.
11:05 hemebond But you're testing for a string.
11:06 hemebond {%- if 'blue' in grains['type'] %}
11:09 gladia2r Ah! I see - gotcha, thanks much @hemebond ! (works)
11:09 hemebond 👍
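A small sketch of the fix hemebond describes, assuming a list-valued grain named type and an illustrative folder deploy (names are not from the gist):

    {% if 'blue' in grains.get('type', []) %}
    deploy_blue_files:
      file.recurse:
        - name: /opt/app/blue
        - source: salt://app/blue
    {% endif %}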
11:09 aidin joined #salt
11:12 djural_ joined #salt
11:14 nickadam joined #salt
11:19 monrad joined #salt
11:21 Reverend morning everyone
11:23 CrummyGummy joined #salt
11:26 nZac joined #salt
11:28 jas02 joined #salt
11:30 pipps99 joined #salt
11:30 pipps99 joined #salt
11:31 pipps99 joined #salt
11:31 mikecmpb_ joined #salt
11:37 jas02_ joined #salt
11:42 mauli joined #salt
11:44 Trauma joined #salt
11:44 tom29739 joined #salt
11:45 Kelsar joined #salt
11:57 sebastian-w joined #salt
12:03 onlyanegg joined #salt
12:14 madboxs joined #salt
12:20 ronnix joined #salt
12:23 mikecmpbll joined #salt
12:29 Flying_Panda joined #salt
12:29 Drunken_Panda joined #salt
12:30 Drunken_Panda anyone played around with windows updates and salt? how do you write a statefile for windows? I've found the modules and can run them on the cli, just can't seem to write a state as it can't find them https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.win_update.html
12:34 martoss joined #salt
12:36 madboxs joined #salt
12:37 Reverend gtmanfred - the web ui on salt... can you edit sls in there? :s
12:37 Bico_Fino joined #salt
12:38 DEger joined #salt
12:43 amontalban joined #salt
12:50 Bico_Fino joined #salt
12:50 Miouge joined #salt
12:53 pipps joined #salt
12:56 Reverend @babilen - those reactor / beacons worked a treat btw. thank you!
12:56 Reverend the only problem is removing old SSL's, but I've put some validations in there to help mitigate that
13:04 amontalban joined #salt
13:16 LondonAppDev joined #salt
13:17 Bico_Fino joined #salt
13:18 nZac joined #salt
13:21 faizy_ joined #salt
13:27 pipps joined #salt
13:30 XenophonF Drunken_Panda: I've been messing around with Salt-driven Windows Update configuration.
13:30 XenophonF Frankly, I'm starting to think that using Salt to manage a policy.inf file is probably the better way to do things.
13:31 XenophonF The Windows Update API is a mess of COM and legacy Win32.
13:34 XenophonF As for domain members, I think that one should use GPO settings where available.
13:38 madboxs joined #salt
13:39 jas02_ joined #salt
13:40 Ahlee joined #salt
13:41 Ahlee joined #salt
13:41 krymzon joined #salt
13:43 Drunken_Panda I was writing a state which would also add and remove from f5lb at same time so would rather keep with salt api tbh
13:43 Drunken_Panda do you have a sample state you call for win update ?
13:44 XenophonF do you mean something that would trigger an update check?
13:44 XenophonF b/c that'd be easy: cmd.run "wuauclt /detectnow"
13:44 Drunken_Panda anything. can't seem to figure out how to declare windows update as a function: 'win_update.install_updates' was not found in SLS 'windows.init'
13:45 XenophonF i think you're confusing execution modules with state modules
13:45 XenophonF https://docs.saltstack.com/en/latest/ref/states/all/salt.states.win_update.html
13:46 Drunken_Panda perfect thanks :D
13:46 XenophonF i didn't realize salt added this state module - nice!
13:46 Drunken_Panda <3
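A minimal sketch of a state using the win_update state module linked above; the category string is an assumption lifted from the module docs, and the available skips keys should be checked there as well:

    install_critical_updates:
      win_update.installed:
        - categories:
          - 'Critical Updates'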
13:46 numkem joined #salt
13:49 LondonAppDev joined #salt
13:53 g3cko joined #salt
13:57 jas02 joined #salt
13:59 madboxs joined #salt
14:02 cmarzullo joined #salt
14:02 XenophonF I'm working on a state formula that deploys WSUS.
14:02 XenophonF I don't have anything ready for public consumption yet.
14:03 XenophonF I'm about 10% of the way through writing a Python interface to the Microsoft.Update.ServiceManager COM+ API
14:04 onlyanegg joined #salt
14:04 Drunken_Panda ooo Id love to get my hands on it when your done
14:04 pipps joined #salt
14:04 XenophonF the idea being that you could do local msu installs and things like that
14:04 XenophonF my driver was putting RSAT on all Windows boxes in my network
14:04 Drunken_Panda I just dont want to login to windows boxes :D
14:04 Drunken_Panda or any boxes
14:05 XenophonF but that's only delivered as an update for Windows Vista/7/8/8.1 and maybe 10
14:05 Reverend can beacons be defined in an sls, or does it need to be in a pillar?
14:12 XenophonF Drunken_Panda: I've got an AD formula in the works, too.
14:12 XenophonF it will end up managing AD DS, AD FS, AD CS, AD RMS, AD LDS, DNS, and WINS
14:12 XenophonF https://github.com/irtnog/active-directory-formula
14:13 XenophonF bear in mind that it's a work in progress
14:13 amontalban joined #salt
14:13 amontalban joined #salt
14:20 madboxs joined #salt
14:30 disaster123 joined #salt
14:30 disaster123 Is there any chance to load .sls file with . in their names / structure? Like host/FQDN/roles.sls
14:31 cmarzullo seems like a recipie for. . . disaster
14:32 Rumbles joined #salt
14:33 ronnix joined #salt
14:34 disaster123 cmarzullo: what's wrong with this?
14:34 nicksloan joined #salt
14:35 cmarzullo mostly making a play on your username. But in the top file the period denotes the directory path. Just seems like lots of things can go wrong.
14:35 Saltguest joined #salt
14:36 disaster123 cmarzullo: but I have some pillar data that is minion-dependent. The idea was to include by minion_id or FQDN, but both contain dots.
14:36 Drunken_Panda too much escaping makes it horrible to read
14:36 disaster123 what's the alternative?
14:36 XenophonF disaster123: make the paths like this - for something.example.com, create something/example/com/roles.sls
14:36 cmarzullo use underscores in your sls and in yor pillar/state top you can do a jinja replace.
14:36 XenophonF or better yet, something/example/com/init.sls
14:37 XenophonF then you can assign "something.example.com" in your top file and it looks pretty
14:37 XenophonF for example, https://github.com/irtnog/salt-pillar-example/blob/master/top.sls#L94
14:37 cmarzullo {{ grains['fqdn'] | replace('.','_') }}
14:37 AvengerMoJo joined #salt
14:38 XenophonF come on cmarzullo, let's come up with a third option for this guy ;)
14:38 XenophonF isn't that the joke? ask two engineers, get three opinions? ;)
14:39 Drunken_Panda :p
14:39 cmarzullo {{ grains['fqdn'] | replace('.example.com','') }}
14:39 XenophonF haha
14:39 XenophonF even better
14:39 cmarzullo heheh
14:40 cmarzullo copied from actual production code /whelp
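Putting cmarzullo's replace idea in context: a top file (state or pillar) can include one SLS per minion, with the dots in the minion id swapped for underscores in the file name (e.g. hosts/web01_example_com.sls for web01.example.com; the hosts/ directory is just an example, and every targeted minion needs a matching file):

    base:
      '*':
        - hosts.{{ grains['id'] | replace('.', '_') }}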
14:40 jas02_ joined #salt
14:41 madboxs joined #salt
14:44 racooper joined #salt
14:45 hwtt joined #salt
14:52 stooj joined #salt
14:55 amontalban joined #salt
14:55 amontalban joined #salt
14:56 LondonAppDev joined #salt
14:56 nickabbey joined #salt
15:00 dendazen joined #salt
15:02 madboxs joined #salt
15:02 nickabbey joined #salt
15:10 cscf cmarzullo, is there a subdomain?  Because otherwise that's grains['host']
15:10 disaster123 cmarzullo> the replace one looks good so far. If I went the directory-structure way I'd need to reverse the FQDN to get com/example/name - the other way round makes no sense. But then I would need to reverse the array in jinja
15:16 anotherzero joined #salt
15:18 DEger joined #salt
15:19 cmarzullo cscf: host gives me only the host name. where fqdn is with the domain.
15:19 cscf cmarzullo, yeah, but then you're stripping the domain out.  Unless there's a subdomain, like I said
15:20 cmarzullo that was the goal. to strip the domainout.
15:20 whytewolf cscf: the hostname is not always the first part of the fqdn. it is likely. but there are other things that can be hostname that have nothing to do with fqdn
15:20 cscf whytewolf, can you give an example?
15:20 cmarzullo https://github.com/jbussdieker/hostname-game
15:20 cscf I don't recall seeing an fqdn that doesn't start with hostname
15:20 whytewolf a host that uses more then tcp/ip
15:21 whytewolf such as AppleTalk; it's a lot less likely nowadays
15:21 whytewolf but it was not always the case
15:21 numkem joined #salt
15:22 pipps joined #salt
15:23 whytewolf I have seen hosts that had the hostname set to some idx protocol but the fqdn was the tcp/ip protocol and they were completely different
15:23 Sketch i'd use host if you don't want the domain (though as previously mentioned, if you wanted subdomain, that wouldn't work)
15:24 bluenemo joined #salt
15:26 cmarzullo {{ grains['fqdn'] | replace('.example.com','') | replace('.','_') }}
15:26 cmarzullo where fqdn = host1.dallas.example.com and you want the file to be host1_dallas
15:26 cmarzullo lots of ways to skin that cat.
15:27 cmarzullo we had lots of inconsistencies when starting out. But now I try to do everything by minion id
15:32 viq I guess I'll ask here as well, maybe someone will be able to provide a hint on how to use test-kitchen in that setup
15:33 viq I'm trying to figure out how to use test-kitchen with kitchen-salt where in the same directory where I have .kitchen.yml I have dirs states/ and pillars/ and those are the state and pillar roots as the normal salt-master sees them (via gitfs)
15:33 viq But for now everything I tried makes me have to call eg states.baseline instead of baseline
15:35 nethershaw joined #salt
15:35 ub1quit33 joined #salt
15:35 cmarzullo hmmm
15:38 cmarzullo I presume you've used the dependencies hash in .kitchen.yml?
15:39 cmarzullo it'll copy formulas (directories) into you vms.
15:39 viq Ah, no, not yet
15:39 * viq goes to try
15:39 cmarzullo there's also pillars-from-files where you can specifiy pillar for each guest.
15:39 cmarzullo https://github.com/simonmcc/kitchen-salt/blob/master/provisioner_options.md
15:40 cmarzullo sometimes the stuff isn't super documented and ya have to go code diving.
15:41 viq yeah, I didn't think to try dependencies yet, so far I was trying salt_file_roots and formula
15:41 roock joined #salt
15:41 jas02_ joined #salt
15:42 cmarzullo you could also use some of the vagrant provision options to sync the whole directory over then set some of the salt_*_root vars
15:43 cmarzullo I usually set is_file_root: true and have a mockup/init.sls which does some pre stuff that's not really part of the formula. but does ssl generation and stuff for testing.
15:43 ronnix joined #salt
15:47 CeBe_ joined #salt
15:47 viq cmarzullo: do you maybe have a sample of the dependencies hash? kitchen complains about https://pbot.rmdir.de/NiUF4kzRhiqt6vXRZQJsJQ saying it's not valid YAML
15:48 CeBe joined #salt
15:49 cmarzullo yeah. I'll have to trim it down though. gimie a sec.
15:49 onlyanegg joined #salt
15:50 cmarzullo https://gist.github.com/cmarzullo/5cb94d4a10aa500a1de175fc290f8643
15:50 cmarzullo it's not your use case.
15:50 viq Thank you!
15:50 cmarzullo but basically that stands up a whole pile of systems. sets up an ELK stack and runs tests across it ensuring all our formulas work.
15:50 cmarzullo all the formulas/* are gitsubmodules.
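For viq's layout, the kitchen-salt dependencies entries are roughly of this shape; the keys are taken from the provisioner_options.md linked above, but the names and paths here are guesses at the local directories, so treat this as an untested sketch:

    provisioner:
      name: salt_solo
      dependencies:
        - name: baseline
          path: ./states/baseline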
15:53 Drunken_Panda https://docs.saltstack.com/en/latest/ref/states/all/salt.states.win_update.html How would I specify for salt not to skip downloaded updates
15:56 pipps joined #salt
15:57 DEger joined #salt
15:59 Drunken_Panda tried skips: -download: False but no change in behaviour
16:00 sarcasticadmin joined #salt
16:05 VR-Jack-H joined #salt
16:05 madboxs joined #salt
16:06 Deliant joined #salt
16:09 amontalban joined #salt
16:09 amontalban joined #salt
16:10 VR-Jack2-H joined #salt
16:13 lompik joined #salt
16:14 viq cmarzullo: simonmcc: for reference, this seems to work https://pbot.rmdir.de/x1b5j-UMC8QunTGlVjYWhg
16:16 cmarzullo awesome!
16:21 nickabbey joined #salt
16:24 LondonAppDev joined #salt
16:24 dwfreed joined #salt
16:26 madboxs joined #salt
16:27 Heartsbane joined #salt
16:30 DammitJim joined #salt
16:33 _JZ_ joined #salt
16:36 seanz joined #salt
16:37 seanz Question about salt masters. Do people generally keep their salt states in a repo and then just symlink salt/pillar to /srv? Is there an insanely better way to do it?
16:39 Sketch we keep states (and pillars) in repos, then check out the repos in /srv/salt and /srv/pillar
16:39 Sketch and generally just modify/commit them in place, which may not be the safest thing to do.
16:39 amontalb1n joined #salt
16:39 Sketch probably safer to modify/test elsewhere, then commit and check out in prod after testing
16:40 cmarzullo ^^ is a great way to start
16:40 Drunken_Panda gitfs and prod and test branch :D
16:40 weiwae joined #salt
16:40 Sketch Drunken_Panda: yeah, separate branches would probably be even better
16:41 Drunken_Panda added benifit of only killing your test enviroment when a windows admin commits :D
16:41 Sketch heh
16:42 weiwae Hi, I'm trying to read all the ips of my minions and edit the hosts files for each one to point to the other minions so I can access them by name instead of ip. However, I'm having a hard time wrapping my head around how mines work. Anyone know a good tutorial or link or can explain it easily?
16:42 jas02_ joined #salt
16:43 weiwae another option is to override the name that ec2 gives each VM, but I didn't see how to do that.
16:44 LondonAppDev joined #salt
16:47 madboxs joined #salt
16:56 plinnell joined #salt
16:58 scsinutz joined #salt
17:01 Bryson joined #salt
17:06 lws joined #salt
17:06 XenophonF don't you think DNS would be a better solution?
17:06 nZac_ joined #salt
17:07 samodid joined #salt
17:08 madboxs joined #salt
17:11 Bico_Fino joined #salt
17:12 onlyanegg joined #salt
17:13 Brew joined #salt
17:15 Miouge joined #salt
17:18 weiwae I'm not sure how to automate that.
17:19 weiwae I have a salt-cloud map which creates the minions and provisions the machines.
17:19 ronnix joined #salt
17:23 woodtablet joined #salt
17:25 tapoxi weiwae: I'm setting the minion id via grain, then running hostnamectl set-hostname grains['id'] and restarting the minion
17:25 tapoxi its a hack but works
17:27 _Cyclone_ joined #salt
17:27 bltmiller joined #salt
17:28 madboxs joined #salt
17:29 weiwae tapoxi Ok, help me understand please. Let's say I use a map to create three VMs. One is called "web", one is called "cache" and one is called "db". (I actually have more than this, and multiple instances of each type but that is ok).
17:29 Trauma joined #salt
17:30 weiwae I want in the /etc/hosts file for each VM to have something like 123.45.67 web1 , 234.56.78 cache1 etc.  but if the current minion is web1, then it should say 127.0.0.1
17:30 tapoxi weiwae: http://hastebin.com/cajowiyusi.css
17:32 weiwae my understanding is that to do what I just said, I need to have the state read the mine and either use the ip, or if the id is equal to the current minion use 127?
17:32 tapoxi weiwae: you could do some jinja where {% if grains['id'] in grains['hostname'] %} {% set ip = '127.0.0.1 %}
17:32 tapoxi bad jinja probably but thats the idea
17:32 weiwae I have that part done, but currently I have the ips of the other machines hard coded
17:33 weiwae let me show you what I have, one sec
17:33 tapoxi you could loop through pillar data (or mine data) and when it comes to its own name output 127.0.0.1
17:33 anotherzero joined #salt
17:37 weiwae http://hastebin.com/afupojopav.php
17:37 nicksloan joined #salt
17:38 weiwae My question is how to do this better and also to pull the ip dynamically instead of having it hard coded.
17:39 atoy3731 joined #salt
17:39 atoy3731 Hi all.. What is the difference in the master config file between 'file_roots' and 'pillar_roots'?
17:40 atoy3731 Or are they the same thing, and anything added to either gets merged?
17:40 tapoxi weiwae: I haven't touched mine, unfortunately. but what I think you can do is have them enter their hostname (or id) and the contents of grains['ipv4'] and then loop through the mine data instead of have a single entry per host
17:41 Edgan joined #salt
17:42 debian112 joined #salt
17:42 weiwae atoy3731 I'm new to salt but my understanding is that file_roots is for files you might want to copy over, like a conf file, vs  a pillar which is just data written in yaml
17:42 LondonAppDev joined #salt
17:42 jas02 joined #salt
17:43 weiwae tapoxi Thanks. Yeah that is my plan in theory. But I'm having a hard time wrapping my head around how that data is actually put into the mine, and how to pull it out. :(
17:43 tapoxi weiwae: so its per-minion it looks like, mine.get '*' will return all mine data for each minion. how you access that in jinja is another question
17:43 Reverend joined #salt
17:44 jas02_ joined #salt
17:44 atoy3731 weiwae: Ah, thanks. So, for instance, I'd define my templates/static files in my file_roots, then the data to parse into those templates in the pillar_roots?
17:45 weiwae atoy3731 Yeah that is my understanding. The files btw you would access like salt://templates/static while pillar data is accessed via salt['pillars']['my_data']
17:45 onlyanegg joined #salt
17:46 atoy3731 Got it.. Makes sense. Thanks!
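The two are configured separately on the master and are not merged; a conventional layout looks like this (paths are the usual defaults, adjust to taste):

    file_roots:
      base:
        - /srv/salt
    pillar_roots:
      base:
        - /srv/pillar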
17:47 tapoxi weiwae: i'm guessing {{% for item in salt['mine.get']('*')('mine-name') %}}
17:47 weiwae mine-name would be like ipv4 or web1?
17:48 tapoxi similar to a grain, so it would be ipv4
17:48 tapoxi the '*' being get the mine data from all minions and then we loop through it
17:49 jas02 joined #salt
17:49 Drunken_Panda anyone played with salt for windows updates? I've got a state but it doesn't seem to do anything - it simply replies that it's done but hasn't actually installed anything. The packages are already on the system, I just want to install them
17:49 Drunken_Panda https://gist.github.com/anonymous/487d9e5af7a911d1dec6dae3c037a861
17:50 raspado joined #salt
17:50 jas02 joined #salt
17:50 faizy joined #salt
17:50 raspado what is the master.py responsible for?
17:50 tapoxi although you might be able to do that with grains too, not sure
17:51 whytewolf weiwae: https://gist.github.com/whytewolf/eff4a15f0eaa8d5354a3
17:51 Lionel_Debroux joined #salt
17:52 whytewolf the '*' is just a standard target interface
17:52 tapoxi raspado: I think that's the entrypoint for the salt master, why?
17:53 raspado just got an error during provisioning a host in openstack and it referenced master.py as part of the error, found it kinda odd
17:53 weiwae like this? http://hastebin.com/uhiqituroz.php
17:53 whytewolf weiwae: no
17:53 tapoxi raspado: what's the error?
17:53 whytewolf weiwae: salt['mine.get']('*','ipv4')
17:53 raspado the host got deleted so i no longer see the error -_-
17:54 whytewolf raspado: better question is which master.py are you refering to
17:54 weiwae crap, have to run to a meeting, brb. Thanks for your help.
17:54 weiwae whytewolf other than the syntax errors, is that basically what I want to do?
17:55 Aleks3Y joined #salt
17:55 jas02 joined #salt
17:57 whytewolf weiwae: here is a snippet i use with a similar mine
17:57 whytewolf https://gist.github.com/whytewolf/59269fd5eabe2c586a4abb6162fde945
17:57 Salander27 joined #salt
17:58 raspado whytewolf: ok we got the error it says No valid host was found. Exceeded max scheduling attempts 3 for instance <long numbers>. Last exception Traceback (most recent call last) File /usr/share/python/nova/lib/python2.7/site-packages/nova/compute/manager.py
17:59 raspado it got far enough to actually build the host in openstack but the host shows that message as a fault
17:59 weiwae whytewolf thanks.. so terse I'll have to go over it later to understand it :( But thank you for the direction!
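Fleshing out the mine pattern for the /etc/hosts case: each minion first publishes an address to the mine (via pillar or minion config), then a state loops over mine.get and writes one hosts entry per minion. The mine function and interface here are assumptions; whytewolf's gists show the exact pattern he uses.

    # pillar (or minion config): what each minion publishes to the mine
    mine_functions:
      network.ip_addrs:
        - eth0                      # assumed interface

    # state SLS: one hosts entry per minion, 127.0.0.1 for the minion itself
    {% for minion, addrs in salt['mine.get']('*', 'network.ip_addrs').items() %}
    hosts_entry_{{ minion }}:
      host.present:
        - name: {{ minion }}
        - ip: {{ '127.0.0.1' if minion == grains['id'] else addrs[0] }}
    {% endfor %}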
18:00 mohae joined #salt
18:01 nickabbey joined #salt
18:01 justanotheruser joined #salt
18:01 whytewolf raspado: that error .. is openstack related... looks like openstack wasn't able to schedule the spinning up of the instance ... you need to dig further into openstack to know why. also i don't see reference to master.py in it :P
18:01 raspado okay perfect thanks
18:03 whytewolf I often see things like that when there is a problem in your amqp bridge... or if the compute service broke
18:04 whytewolf and why am i giving out openstack advice ... i get paid tens of dollars for that :P
18:05 raspado hahah
18:06 ronnix joined #salt
18:08 jas02 joined #salt
18:11 Drunken_Panda tens of dollars :P
18:16 eprice joined #salt
18:17 nidr0x joined #salt
18:18 wendall911 joined #salt
18:18 sagerdearia joined #salt
18:27 scsinutz1 joined #salt
18:28 lws joined #salt
18:29 telx joined #salt
18:29 wwalker joined #salt
18:31 rickflare joined #salt
18:31 madboxs joined #salt
18:31 sh123124213 joined #salt
18:32 sh123124213 any ideas on how to bundle python installation with salt for minions on centos5-6-7
18:32 dijit use the package manager?
18:33 nickabbey joined #salt
18:33 sh123124213 i want to have python inside the salt rpm together with the rest of the dependencies
18:33 Sketch you're building your own rpm?
18:34 cscf sh123124213, why?  rpms are supposed to be single packages
18:34 mikecmpbll joined #salt
18:34 cscf just bake all the rpms into the image or something
18:35 sh123124213 I don't want to use the systems python
18:35 sh123124213 I would want salt-minion to use the python which is inside the rpm
18:36 Drunken_Panda question does the salt-minion for windows install win32com ect by itself
18:36 Drunken_Panda ?
18:37 Sketch sh123124213: you can build your own rpm that installs both salt and python, if that's what you really want.  nobody is going to build it for you, because it's a strange scenario that few people would want.
18:37 sh123124213 exactly like windows package does it that bundles everything and donsn't use the system's python
18:37 sh123124213 I didn't ask for anybody to build my rpm
18:37 sh123124213 I asked for ideas
18:37 Sketch sh123124213: if you have a non-system python installed, it would probably be easier just to rebuild the salt rpms (and deps) against that instead.
18:39 nicksloan joined #salt
18:40 raspado is it possible to see the quotas via salt-cloud?
18:44 jas02 joined #salt
18:44 ronnix joined #salt
18:44 Rumbles joined #salt
18:45 CruX__ left #salt
18:49 _Cyclone_ joined #salt
18:50 huleboer joined #salt
18:51 whytewolf raspado: no, quota info is not seen from salt-cloud. hmm, and apparently the salt execution module for nova also doesn't have quota info.
18:51 whytewolf but really that info is easy to get outside of salt anyway
18:51 nZac joined #salt
18:52 raspado yeah its simple, "nova absolute-limits"
18:52 madboxs joined #salt
18:53 whytewolf i was thinking about openstack quota show
18:53 ponyofdeath hi, anyone know why i need to specify postgres.bins_dir: in my minion dir all of a sudden?
18:53 ponyofdeath seems that the minion is not detecting where the postgres bins are
18:54 jas02 joined #salt
18:54 cscf What's the best way to output a file for debugging after jinja has been run?  I think there's a saltutil command for it?
18:56 whytewolf cscf: 2 methods. cp.get_template https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cp.html#salt.modules.cp.get_template or through my homemade module https://github.com/whytewolf/salt-debug
18:56 cscf whytewolf, thanks
18:58 whytewolf ponyofdeath: i think there were some changes made in the postgres module during the 2016.11 update. have been hearing a lot about postgres modules not detecting binaries
18:58 bltmille_ joined #salt
18:59 nZac joined #salt
19:06 catpig joined #salt
19:08 raspado is it possible to build a node without using a map file using salt-cloud? I want to build a specific flavor
19:09 raspado to test potential quota limit issues
19:09 SaucyElf joined #salt
19:10 whytewolf raspado: yes, instead of useing -m use -p
19:10 raspado ahh nice thx!
19:10 whytewolf [need to give a profile of course]
19:10 weiwae sh123124213 we run python inside of virtualenv to keep everything isolated.
19:12 raspado the profile being the size of the image?
19:12 sh123124213 weiwae: ye makes sense. I don't see a reason to use virtualenv though
19:12 raspado so in this case i2.cw.largessd-16 ?
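For reference, -p takes a profile name rather than a flavor; the flavor goes inside a profile definition, along these lines (provider and image names are placeholders), after which something like salt-cloud -p big_test testnode01 builds the node:

    # /etc/salt/cloud.profiles.d/big_test.conf (placeholder name)
    big_test:
      provider: my-openstack          # must match a provider in cloud.providers.d
      size: i2.cw.largessd-16
      image: CentOS-7                 # placeholder image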
19:13 madboxs joined #salt
19:13 sh123124213 weiwae : I want to avoid having to install any extra packages and having the minion in one rpm with the compiled binaries for python depending on the OS
19:13 sh123124213 .. OS version
19:13 tercenya joined #salt
19:14 sh123124213 as long as nothing is compiled salt wouldn't complain and I would just have to change the salt-minion file to add the python envs
19:15 sh123124213 together with the path on where to load python
19:16 sh123124213 my problem is that src packages in salt only come with python2.6 and the spec file needs it
19:16 jrgochan hello all. we've got a dataset that's roughly 9GB that needs to be synced somewhat routinely across all of our machines. Anyone know the most efficient way of doing that? Would a file.recurse be fine?
19:16 sh123124213 I only tried on centos6 but I'm guessing its the same for 7
19:16 buu jrgochan: rsync
19:17 sh123124213 jrgochan: check syncthing
19:18 jrgochan cool cool. so nothing built into salt and stick with standard linux toolset?
19:18 whytewolf jrgochan: yeah. honestly with something that big you would be better off with rsync or compress and using archive tools.
19:19 scsinutz joined #salt
19:19 sh123124213 I would go with a network file system actually
19:20 whytewolf sh123124213: not always practical, if the system has files that change on machines differently that would be a non starter for a network filesystem
19:20 buu jrgochan: there's a state.rsync.sychronize
19:21 buu Excepts spelled correctly
19:21 buu Also file.recurse is hilariously slow
19:21 cscf What does recurse use, anyway?  scp?
19:21 whytewolf file.recurse is slow because it actually runs a lot of processing on each file.
19:21 jas02 joined #salt
19:21 cscf no, it wouldnt be scp
19:22 whytewolf file.recurse uses the internal salt cp tools
19:22 cscf whytewolf, even if you don't use templating?  What processing is done?
19:22 krymzon joined #salt
19:23 whytewolf cscf: mostly hashing and checks per file
19:23 buu also it has to copy every single file to the minion
19:23 jrgochan hrm. Yeah. I'm currently rsyncing, but was wondering if salt had something built in. I'll check out the rsync state. Thanks!
19:23 whytewolf not just copy every file. but it does it one at a time :P
19:24 buu yeah
19:24 buu it's amazing
19:24 buu I have a like, 5mb git repo and it takes like 2 minutes to copy it over a lan
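The state buu mentions is rsync.synchronized; a minimal sketch, assuming the dataset is reachable from the minion at a local or mounted path (the rsync runs on the minion, so salt:// sources don't apply here):

    sync_dataset:
      rsync.synchronized:
        - name: /srv/dataset                # destination on the minion
        - source: /mnt/shared/dataset/      # assumed path visible to the minion
        - delete: True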
19:24 yidhra joined #salt
19:25 cscf So I'm using - unless: 'sudo -u www-data ./occ status | grep "installed: true"' , and that command run manually returns 0, but the state always runs?
19:26 whytewolf that is odd.
19:27 whytewolf i would say toss a -l debug with salt-call on that
19:27 buu does unless consider exit 0 to be true?
19:28 buu also what does sudo return
19:28 whytewolf yes, unless is just looking at the return code
19:28 whytewolf scsinutz: ... have you tried changeing ./occ to the full path?
19:29 whytewolf sorry scsinutz that was meant for cscf
19:29 jrgochan Any experience with "Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased." ?
19:29 jrgochan Looks like for big jobs they continue to run in the background, but salt returns a time out
19:30 buu jrgochan: Last time I got that error I had actually broken salt master
19:30 jrgochan buu: good to know. I'll try salt-master in -l debug and see what it says
19:30 cscf whytewolf, you're right, I'm an idiot and copy-pasted that
19:31 buu So
19:32 buu Why can't state.rsync handle rsync daemon connections using ::
19:33 whytewolf most likely yaml rendering
19:33 whytewolf [just a guess]
19:34 madboxs joined #salt
19:36 AbyssOne joined #salt
19:36 buu !
19:36 buu whytewolf: tell me more
19:36 Drunken_Panda seems the windows update.list_updates returns no pending updates even though the system has 4 updates pending
19:37 buu And why does salt master always take exactly 60 seconds to report an error from a minion?
19:37 roberto_ joined #salt
19:37 buu Comment: Source directory saltrsync@192.168.1.14::buu/.vim/ was not found.
19:37 roberto_ howdy
19:37 whytewolf buu: toss it into a yaml parser and see if it is trying to parse it into another dict
19:38 roberto_ quick question, I am new to the cherrypi for salt api
19:38 whytewolf buu: it doesn't take 60 seconds for me :P
19:38 roberto_ it looks like I have to authenticate with `Some user:passwd
19:38 roberto_ to generate a token
19:38 buu whytewolf: what setting did you change?
19:38 whytewolf buu: none
19:38 roberto_ I did not see where to get such creds - :P
19:39 buu whytewolf: well.. something is weird
19:39 whytewolf roberto_: https://docs.saltstack.com/en/latest/topics/eauth/index.html
19:39 DEger joined #salt
19:39 jas02 joined #salt
19:40 buu whytewolf: Where does salt '*' cp.get_template salt://path/to/template /minion/dest
19:40 buu put the template?
19:41 whytewolf buu: /minion/dest on the minion
19:41 buu Passed invalid arguments to cp.get_template: coercing to Unicode: need string or buffer, bool found
19:41 buu wtf
19:41 buu sudo salt '*' cp.get_template salt://common/init /home/buu/
19:42 buu What's wrong with my cp?
19:42 ivanjaros joined #salt
19:42 whytewolf i don't see anything wrong with it. try it on the minion with salt-call
19:42 roberto_ thank you whytewolf
19:42 whytewolf [i don't personally do the cp.get_template"
19:42 buu oh
19:43 buu I need to specify a destination *filename* as the second argument
19:43 buu Not a destination *directory*
19:43 whytewolf this is why i wrote a module that outputs to where ever i am calling instead of saving the file on the minion
19:43 buu This is a very reasonable solution
19:44 buu "To simply return the file contents instead, set destination to None. This works with salt://, http://, https:// and file:// URLs. The files fetched by http:// and https:// will not be cached.
19:44 buu wtf does that mean
19:44 whytewolf um. i have no idea
19:44 buu Wait, can I use file:// instead of salt:// anywhere?
19:45 whytewolf there are some places file:// doesn't work
19:45 whytewolf unless they fixed them
19:45 whytewolf i remember file.recurse doesn't like file://
19:46 buu -_-
19:46 buu ha ha ha
19:48 theologian joined #salt
19:48 toanju joined #salt
19:49 buu whytewolf: goo.gl/nGTKyLcontent_copyCopy short URL
19:49 buu =/
19:49 buu uh
19:49 buu https://goo.gl/nGTKyL
19:49 buu Thanks google
19:49 nicksloan joined #salt
19:50 whytewolf okay. yaml rendering is fine :P i have no idea why the state.rsync doesn't work then
19:50 XenophonF joined #salt
19:50 buu Because someone is a monster
19:51 theologian hey, we are talking about how to set up dev, qa, staging and prod using salt, but can not find a good best practice. does anyone have a good doc/blog on how a good salt dev workflow should happen in git? how does one test salt configs before pushing to master?
19:51 buu How about why it takes 60 seconds to return from my state.apply call?
19:51 buu theologian: Have you read the state tutorials where they cover creating roots for dev/qa/prod?
19:52 whytewolf buu: for that you might want to increase the log_level_logfile on both a minion and master and check the logs on both while operating
19:52 Drunken_Panda theologian: we use gitfs mounted on test, prod and staging branches, and each root has a top file which manages test/prod/staging servers
19:53 whytewolf theologian: there isn't really one best practice in salt. however, that being said, a lot of people tend to use a separate master per environment
19:53 Drunken_Panda that way masters match: test to test config, stag to stag, prod to prod, etc.
19:54 Drunken_Panda when you're happy, push from stag to prod
19:55 theologian buu: this one? https://docs.saltstack.com/en/latest/ref/states/top.html#how-top-files-are-compiled
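A sketch of the gitfs-per-branch setup described here, in the master config (the repo URL is a placeholder): each branch of the repo becomes a Salt environment of the same name, and the branch named by gitfs_base (default master) serves the base environment.

    fileserver_backend:
      - git
    gitfs_remotes:
      - https://git.example.com/salt-states.git
    gitfs_base: prod            # branch to serve as the base environment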
19:55 madboxs joined #salt
20:04 plinnell joined #salt
20:04 buu HOLY SWEET JESUS
20:04 buu  hs  buu  ~  salt  sudo salt '*' cmd.shell cmd='RSYNC_PASSWORD=43ffdd22f24e4e1b3cf94398e8b26350 rsync -va saltrsync@192.168.1.14::buu/.vim/ /home/buu/.vim' shell=/bin/bash
20:04 buu This is an outrage
20:06 Sketch the weird characters in your prompt?
20:06 buu haha
20:06 buu No those ate leet
20:06 buu *are
20:06 buu Look at that horrible code just to run an rsync
20:06 buu AND IT'S NOT EVEN ENCRYPTED AUGH
20:07 Sketch rsync over ssh is your friend
20:12 plinnell joined #salt
20:16 madboxs joined #salt
20:16 toanju joined #salt
20:18 juanito joined #salt
20:20 stanchan joined #salt
20:20 djgerm joined #salt
20:21 djgerm hello! I want to spin up a new machine with AWS cloud formation and have the key automatically accepted by the master. Any thoughts there?
20:21 mohae_ joined #salt
20:22 juanito hey guys, would anyone know why, when using salt-api (cherrypy), it gives us back a token which is a session_id that is then used to query the real token in the ongoing api calls? Why don't we just send back a salt token id?
20:22 VR-Jack-H joined #salt
20:23 juanito could be helpfull if we want to use the token for different calls and check against salt-master that the token is valid, with a cherrypi session_id makes it harder
20:27 writtenoff joined #salt
20:28 onlyanegg joined #salt
20:36 madboxs joined #salt
20:41 krymzon joined #salt
20:42 onlyanegg joined #salt
20:43 nickabbey joined #salt
20:44 tercenya joined #salt
20:51 hemebond juanito: What do you mean?
20:51 hemebond What is a "salt token id"?
20:53 tapoxi djgerm: why not salt-cloud?
20:56 juanito joined #salt
20:57 ivanjaros joined #salt
20:57 hemebond djgerm: If you create a file, with the name of the new minion, in /etc/salt/pki/master/minions_pre/ it will automatically accept that minion.
20:58 madboxs joined #salt
20:59 scsinutz joined #salt
21:14 onlyanegg joined #salt
21:18 madboxs joined #salt
21:19 rylnd is anyone in here deploying within vmware, using saltstack and pxeboot or kickstart?
21:21 raspado why do the minions have a master file in /var/log/salt?
21:21 Sketch i have deployed within kvm using salt and pxeboot/kickstart
21:21 hemebond raspado: They shouldn't.
21:21 Sketch salt didn't do any of the vm building though, that was done manually
21:21 raspado hemebond: oh?
21:22 TlostSWE joined #salt
21:22 hemebond None of my minions have a master log on them.
21:23 raspado yeah its weird, the master log is 0kb
21:24 whytewolf raspado: by chance did you install the salt-master rpm by accident?
21:24 raspado hmmm i hope not
21:24 raspado on the minions?
21:24 whytewolf yeah
21:25 whytewolf like giving salt-cloud the make_master:true option
21:25 pcn is anyone familiar with this? https://gist.github.com/pcn/019e4f939398bd34fc313226be9dfce7
21:25 pcn redis returner having trouble deleting expired data I guess
21:25 raspado http://pastebin.com/upzJE17y nope, looks like just salt.noarch and salt-minion are present
21:27 whytewolf humm. actually now that i look at one of my minions i do see a /var/log/salt/master on a minion [that isn't a master]
21:27 raspado is the minion a diff version than the others?
21:27 whytewolf no,
21:30 raspado whytewolf: same rpms as I do?
21:30 raspado rather, packages
21:31 pipps joined #salt
21:32 whytewolf ahh according to whatprovides it is because it is installed by salt.noarch
21:32 monrad left #salt
21:32 amontalban joined #salt
21:32 raspado should salt.noarch be deployed by the provisioning process?
21:32 whytewolf yeah
21:33 pipps joined #salt
21:33 whytewolf salt.noarch is like mysql-common it is just the files that are common between all the different subpackages
21:33 mpanetta joined #salt
21:33 raspado ahh ok
21:34 raspado i suppose having the master log is acceptable in this case then
21:34 cscf What happens if state B requires A, and A has an "unless"
21:34 dxiri joined #salt
21:34 cscf *? Is A considered to have occurred regardless?
21:35 debian112 joined #salt
21:35 dxiri hey guys, for some reason I can't figure out, doing salt '*' test.ping gets me duplicate output for some nodes
21:35 dxiri any clues on what could the problem be?
21:35 whytewolf require doesn't trigger on changes; it triggers on the required state being current or changed.
21:36 whytewolf dxiri: couple of things. some minions are running the minion more than once... or you have minions with the same pki and minion id
21:36 cscf whytewolf, just wondering if the require was why my state is executing even though it's 'unless' should prevent it
21:37 whytewolf cscf: no
21:37 whytewolf it isn't the reason
21:37 irctc685 joined #salt
21:37 whytewolf the state will show up but will say that the unless was met, so no changes occurred, and will show success
21:38 cscf Yeah, that's what I thought...
21:38 whytewolf and my spelling just took a noise dive with that one. maybe i am having a strok
21:38 cscf I have  'sudo -u www-data /var/www/nextcloud/occ status | grep "installed: true"  '
21:39 dxiri whytewolf: you mean there may be 2 instances for the same minion on the same box?
21:39 cscf manually, grep returns 0
21:39 whytewolf dxiri: yes.
21:39 dxiri how would that happen?
21:39 nmccollum joined #salt
21:39 irctc685 Hi folks - question about requisites. I see https://docs.saltstack.com/en/latest/ref/states/requisites.html says you can "require" an entire sls file. Does that apply to other requisites too? can I "watch" or "onchanges" an entire sls file?
21:39 jhauser joined #salt
21:40 whytewolf cscf: manually doesn't mean squat. as there could be any number of subtle differences between you being a user and salt being a daemon
21:40 cscf whytewolf, well, do you know a good way to debug?
21:40 cscf just salt-call?
21:40 whytewolf salt-call -l debug
21:40 rylnd Sketch when you deployed with kickstart, who managed the deployment of VMs? i mean, right now i have a VM template that i clone and have profiles and maps in salt-cloud that determine which type of instance gets cloned from which template and has which ip and so and so forth. how did you guys keep track of that?
21:41 whytewolf also running the command through cmd.retcode with python_shell=True
21:41 krymzon joined #salt
21:41 cscf [DEBUG   ] output: /bin/bash: -c: line 0: unexpected EOF while looking for matching `"'
21:41 cscf Let me guess, Salt wraps the command in "" ?
21:42 nmccollum Hello.  I'm having an issue where I issue a state.apply to a minion and retrieve the error "Minion did not return".  When I run the command again, I get an error stating that "The function state.apply is running as PID blah and was started at date".  Is there a way I can continue watching that state.apply and retreive the stdout and stderr for that previous command?
21:42 bltmiller joined #salt
21:42 whytewolf cscf: for that you might have to go to the code
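One guess at the unexpected-EOF error above: nested double quotes in the unless string. Quoting the whole value once and using single quotes inside avoids that; the cmd.run name below is a placeholder, only the unless line matters:

    nextcloud_occ_step:
      cmd.run:
        - name: sudo -u www-data /var/www/nextcloud/occ upgrade    # placeholder command
        - unless: "sudo -u www-data /var/www/nextcloud/occ status | grep 'installed: true'"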
21:43 irctc685 @nmccollum check out https://github.com/saltstack/salt/issues/18201
21:43 saltstackbot [#18201][MERGED] Question: How to tell when a job has finished | I am writing a GUI (see https://github.com/mclarkson/obdi) but I'm having trouble finding out when a job has finished, specifically when I do a state.highstate in the background using cmd_async....
21:43 XenophonF nmccollum: increase the timeout on your salt command
21:43 XenophonF e.g., salt -t 300 minion state.apply
21:43 keimlink joined #salt
21:44 irctc685 in short use a salt runner to check what jobs are running, get the jid, and get details of that specific job
21:44 whytewolf nmccollum: https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.jobs.html
21:46 nmccollum The state.apply actually takes about 45 minutes.
21:46 whytewolf dxiri: sorry, back to your problem. it could happen any number of ways. a process not dying on a salt-minion restart. or something going wrong with the startup script. it used to happen a lot more.
21:47 whytewolf nmccollum: with that long a process it might be easier to just toss it completely async and use the jobs runner to keep an eye on it
21:47 dxiri whytewolf: does the minion registers its PID somewhere? other than ps output?
21:47 nmccollum Sometimes it will state that minion did not return a response nearly immediately, and randomly.
21:47 dxiri whytewolf: so I know which one to kill
21:47 whytewolf dxiri: best way is just stop the minion and kill all other salt minions running
21:48 dxiri whytewolf: to answer my own question, its on /var/run/salt-minion.pid
21:48 dxiri whytewolf: but yeah I agree
21:51 buu so is there some way to do an encrypted rsync without ssh?
21:51 buu i dont want the minions to have access to the master
21:52 hemebond ?
21:52 cmarzullo make a package. install the package. fpm for the win
21:52 Sketch you could give them access to a restricted account
21:52 buu mmm
21:53 buu this may be the wrong approach
21:53 whytewolf nmccollum: "salt --async '*' state.apply" it will return with something like "Executed command with job ID: 20161207135116901951" take that job id and use salt-run jobs.lookup_jid 20161207135116901951
21:53 Sketch it also probably doesn't have to be the master at all...
21:53 buu i wanted to avoid the remote.machines being able to ssh into my local master
21:54 cscf nmccollum, 'watch' may be useful
21:54 buu Sketch: now can you answer why state.rsync doesnt support :: rsync daemon addrs?
21:54 cscf buu, NFS?
21:54 whytewolf ...
21:54 Sketch buu: nope.  i've never used state.rsync
21:54 buu cscf: terrifying
21:54 cscf buu, why, reliability?
21:55 buu also security
21:55 buu and speed
21:56 XenophonF buu: ipsec
21:56 whytewolf buu: a attach a magnet to a pole and throw it at the disks
21:56 XenophonF that's assuming rsync doesn't do encryption itself
21:56 buu pretty sure it doesnt
21:56 whytewolf rsync has no built in encryption
21:57 XenophonF then ipsec
21:57 XenophonF build the SAs/SPDs such that only rsync traffic between the minions is transported
21:57 cscf sounds complex
21:57 XenophonF not really
21:57 XenophonF should be pretty straightforward
21:57 cscf why not sftp ?
21:58 whytewolf buu: a. forget the state.rsync :P b. push from the master instead of pulling from a minion.
21:58 cscf or even wget --recursive lol, over TLS
21:58 cscf or yeah, just put up with the speed of file.recursive
21:58 ecdhe vagrant can bootstrap salt and set a pillar variable, "is_vagrant".  However, when I ssh into my vagrant box, salt-call pillar.items | grep vagrant shows me nothing.
21:59 ecdhe This might be more of a vagrant issue, but has anyone run into this?
21:59 cscf ecdhe, did you try running saltutil.refresh_pillar, just in case?
21:59 XenophonF cscf: he doesn't want to give minions ssh access
21:59 buu oh duh i can rsync TO a remote host
21:59 buu but can i make salt do that?
21:59 whytewolf buu: with orchestration i would say yes
22:00 cscf Is file.recurse really so slow you can't use it?
22:00 buu yes
22:00 cscf how big is this dataset?
22:00 jas02 joined #salt
22:00 buu it takes over 80s on a 4mb git repo over lan
22:00 XenophonF buu: does the dataset have to live on the salt master?
22:00 Sketch git will be slower than copying files
22:00 buu yes
22:00 ecdhe cscf, the output of that showed me that 'local' went from False to True... but no is_vagrant
22:00 XenophonF why not put it on a staging server or something?
22:01 buu because this is a home network setup
22:01 XenophonF or in a chroot/jail/container/whatever kids are calling it this week?
22:01 cscf buu, and how long does it take to git clone?
22:01 buu cscf: dunno, didnt try, but rsync was about 5seconds
22:02 cscf buu, you are trying to copy a git repo to another place.  Using git is the obvious way to do this.  unless you want to copy other things?
22:02 buu XenophonF: this is a very overengineered way to sync my .vimrc to random boxes i maintain
22:03 cscf buu, why does your vimrc need a whole git repo?
22:03 whytewolf buu .... this is all for a frigging dotfiles?
22:03 buu cscf: it's not literally A git repo, it's a folder with a bunch of subdirs, some of which came from git clone
22:03 XenophonF LOL
22:03 nmccollum Another issue I have been having is that once I started using the salt file transfer (file:///path/to) my salt commands seem to take a minimum of 40 seconds, even for very simple things.  My path is a symlink to a parallel file system.  Is there a known bug causing delays?
22:03 cscf buu, make a copy that doesn't have the .git directories, and file.recurse that
22:03 buu also, fuck phone keyboards
22:04 buu whytewolf: well, the vimrc was fine but then i tried to recurse my .vim folder and sadness
22:04 XenophonF buu: how big is the dataset
22:05 cscf nmccollum, are you using file.recurse with a lot of small files?
22:05 whytewolf buu: how much custom stuff is in your .vim folder that can't be done with something like Vundle?
22:05 buu originally my .vim folder was 350mb (don't ask) and that made .recurse cry
22:05 Sketch buu: if what you're copying is a git repo, you might also consider using salt.states.git
22:05 XenophonF buu: as a diehard emacs user myself, I salute you
22:05 buu lol
22:05 whytewolf seriously. I install a .vimrc file and then just run a command and vim installs the rest
22:06 nmccollum cscf:  No, just pushing about 25 configuration files directly.
22:06 buu there's some stuff Vundle won't get, and at that point you might as well copy everything
22:06 whytewolf XenophonF: ahh emacs, nice operating system with a sucky text editor :P
22:06 XenophonF no doubt!
22:07 XenophonF my favorite full-screen shell
22:07 cscf nmccollum, file.managed? the salt-cp commands do multiple transactions per file, and clustered fs's usually introduce a bit of per-tx latency
22:07 buu is that what emacs-vim mode is for?
22:07 buu slime?
22:07 XenophonF you betcha!
22:07 XenophonF elpy too
22:07 buu nice
22:07 nmccollum cscf:  using file.managed: \ - source: salt://nfs-mount/path/to/file.conf
22:07 XenophonF god slime and elpy are the bestest
22:07 cscf nmccollum, what atime mode is the mount using?
22:07 XenophonF and tramp
22:08 nmccollum cscf: noatime
22:08 buu so how do I make salt orchestrate run commands on the master?
22:08 XenophonF isn't it invoked using salt-run?
22:08 whytewolf buu: install a minion on the master then target it like any minion
22:08 nmccollum cscf:  wait, let me doublecheck
22:08 cscf nmccollum, what's the ping time to the nfs host?
22:08 whytewolf also you can use runners
22:09 nmccollum cscf:  0.439 ms
22:09 whytewolf pretty much any time the question is "how do I do X on the master", the answer is, most of the time: install a minion on the master.
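A rough orchestration sketch of the "install a minion on the master and target it" approach; the minion ID saltmaster, the rsync paths, and the remote host are all hypothetical, and an SSH key from the master to the remote box is assumed:

    # /srv/salt/orch/push_dotfiles.sls
    # run with: salt-run state.orchestrate orch.push_dotfiles
    push_dotfiles_from_master:
      salt.function:
        - name: cmd.run
        - tgt: saltmaster            # the minion installed on the master itself
        - arg:
          - rsync -a /srv/dotfiles/ remotebox:/home/buu/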
22:09 cscf nmccollum, just curious, what clustered fs are you using?
22:10 nmccollum cscf: I just checked, it's relatime. The clustered fs is GPFS. Using the GPFS NFS mount before GPFS is installed.
22:11 buu whytewolf: that s
22:11 cscf nmccollum, what do you mean, before it's installed?
22:11 buu that sounds saner than runners
22:11 nmccollum cscf:  I'm using salt to completely provision a CentOS 7 node from a base minimal install to a fully working compute node.  Part of the install process is installing and compiling GPFS for the node's kernel during install.
22:11 buu also why does this documentation use tgt:
22:11 ProT-0-TypE joined #salt
22:12 nmccollum cscf:  GPFS will push an NFS mount.
22:13 cscf nmccollum, so how are you using its mount before it's installed?
22:13 buu thats just confusing
22:13 nmccollum cscf:  Mounted the filesystem via NFS.
22:14 whytewolf buu: because orchestration uses targeting.
22:14 buu whytewolf: can my master minion get a list of minion hosts?
22:16 * whytewolf shrugs never tried
22:16 nmccollum cscf: Either way, the salt-master doesn't have GPFS installed, it's just following a symlink to the NFS-mounted GPFS. Just after I told it to use file:///path/to/ it somehow started getting 40+ second delays on even simple commands that run in mere milliseconds.
22:17 tobiasBora Hello,
22:18 buu whytewolf: salt-run manage.up
22:19 tobiasBora I would like to know if there exists a proper way to deploy salt using a git repository. For the moment I'm doing dirty stuff using git hooks that unpack the git repo and rsync it with /srv/... And to copy it into /srv/ I have some problems with rights (root...), so I don't know if there is a good way to proceed...
22:20 nmccollum Here's an example of what I am talking about:  http://hastebin.com/refozureba.pas
22:21 madboxs joined #salt
22:22 buu hm
22:22 buu i should fix my dns stuff
22:23 pipps joined #salt
22:24 foundatron_ joined #salt
22:24 whytewolf nmccollum: that looks like an iowait problem
22:25 nmccollum whytewolf:  Normally I would agree, but the system is very responsive.
22:26 whytewolf except for this task ... which is using that space?
22:27 buu yay
22:28 bltmiller joined #salt
22:32 KajiMaster joined #salt
22:32 ProT-0-TypE joined #salt
22:35 nmccollum So, I ran a command that doesn't touch the NFS mounted filesystem at all.. still has a 40 second delay.
22:37 whytewolf yet if that NFS isn't mounted, the 40 second delay goes away?
22:38 whytewolf which is what I gathered from what you had said earlier
22:38 nmccollum Hmmm... It'd be tricky to unmount that NFS mount on that node.
22:38 nmccollum I'm almost 100% sure that it's not the NFS mount.  That mount is hella fast.  It's something to do with how salt file daemon talks to the NFS mount.
22:39 whytewolf the file daemon only sees it as a filesystem
22:39 whytewolf it doesn't care what filesystem it is
22:39 whytewolf it is just using standard unix subsystems for opening files
22:40 whytewolf fopen, fclose
22:40 whytewolf etc.
22:40 nmccollum Yeah, but it's doing something to push the file to the salt-minion
22:40 nmccollum Somewhere in that, it's causing a 40 second delay.
22:41 whytewolf are you mounting this on /tmp
22:41 nmccollum this?
22:41 whytewolf the nfs share
22:42 nmccollum Salt's /srv/salt has a symlink inside that points to /diskless which is my global NFS mount on a few hundred machines.
22:42 nmccollum That way I can use salt's file:// to push a file
22:42 irctc338 joined #salt
22:42 sh123124213 joined #salt
22:42 madboxs joined #salt
22:43 irctc338 all, have a question on the salt orchestration runner. How would I call a module?
22:43 whytewolf irctc338: salt.function
22:43 irctc338 thanks
22:43 irctc338 so, I could do boto_ec2.create_tags: salt.module: - arg: my module args here?
22:44 whytewolf nmccollum: so you are not pulling a file, you are using local filesystem mechanisms.
22:44 whytewolf salt isn't even involved in the transfer
22:45 whytewolf file:// means salt thinks the file is local to the minion
22:45 whytewolf irctc338: salt.function not salt.module
22:45 irctc338 yeah sorry ! ha
22:45 whytewolf irctc338: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.saltmod.html#salt.states.saltmod.function
22:45 irctc338 how to not specify a tgt?
22:45 whytewolf you can't
22:46 whytewolf it is orchestration it needs a target
22:46 irctc338 so this is making an aws call, not to minions so...
22:46 irctc338 I guess it doesn't matter what is in here
22:46 whytewolf irctc338: boto_ec2.create_tags on the command line still needs a target also
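A hedged sketch of the salt.function form whytewolf points to; the target, instance ID, and tag values are made up, and the targeted minion is assumed to have boto credentials configured:

    # orchestration sls, e.g. /srv/salt/orch/tag.sls
    tag_instances:
      salt.function:
        - name: boto_ec2.create_tags
        - tgt: saltmaster              # any minion with AWS credentials will do
        - arg:
          - i-0123456789abcdef0        # resource id(s) to tag
        - kwarg:
            tags:
              Environment: staging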
22:46 jas02 joined #salt
22:47 ProT-0-TypE joined #salt
22:47 irctc338 whats whytewolf!
22:47 irctc338 thanks
22:47 whytewolf I'm whytewolf :P
22:47 irctc338 not sure how that became whats
22:47 irctc338 ugh
22:47 irctc338 wtf
22:48 irctc338 thx!
22:48 irctc338 later :)
22:48 whytewolf hehe
22:48 whytewolf have a good one
22:49 nmccollum whytewolf:  That's contrary to the salt documentation then.
22:49 whytewolf nmccollum: salt:// is remote file:// is local
22:51 nmccollum ... you're right, i've transposed those in my head.
22:51 nmccollum I'm using salt://file/to/blah
22:52 nmccollum I'm doing something like:  /etc/ntp.conf:   file.managed:     - source: salt://diskless/node-cfg/allnodes.cent7.2/etc/ntp.conf
22:52 nmccollum When I first started using salt, three days ago... I wasn't pulling files like that.  It could pull a file in milliseconds.
22:53 whytewolf are these masterless?
22:53 nmccollum No, there is a salt-master
22:53 nmccollum ...and to be fair I could be doing all of this ass backwards.
22:54 whytewolf so you have this nfs share that has all your salt files.... and you are sharing it with all of your minions, even though they are still going to be requesting the file from the master and completely ignoring that nfs share?
22:56 whytewolf salt:// means that the minion will be requesting the file from either the local file_roots [on a masterless system] or from the master through the cp module functionality, which will copy the file down over the transport layer, save it to the cache, and request a hash every time a state is run
22:56 whytewolf also. every salt command still checks the state tree..
22:57 whytewolf so if that nfs is slow on the master you will have a slow time
22:57 whytewolf regardless of how fast it is on the minion
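A small sketch of the distinction being drawn here; the paths and state IDs are illustrative only:

    # salt:// - the minion requests the file from the master's file_roots
    # over the salt transport (or from local file_roots when masterless)
    ntp_conf_from_master:
      file.managed:
        - name: /etc/ntp.conf
        - source: salt://diskless/etc/ntp.conf

    # file:// - the file is read from the minion's own filesystem;
    # the master and its fileserver are not involved
    ntp_conf_from_local_disk:
      file.managed:
        - name: /etc/ntp.conf
        - source: file:///diskless/etc/ntp.conf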
22:58 nmccollum NFS is fast everywhere on this system.
22:58 whytewolf on the master?
22:58 nmccollum So my reasoning behind this is that I don't want to mount and then later unmount the NFS mount constantly.
22:58 nmccollum Yes.
22:58 ecdhe Using salted-vagrant, /etc/salt/minion contains "file_client: local", but when I start the vagrant machine, salt hangs for the longest time, and /var/log/salt/minion has several lines that say "error while bringing up minion for multi-master"
22:58 whytewolf what is the point of the mount at all?
22:59 nmccollum The NFS mount is a nearly unused DDN SFA7700 and two redundant gridscalers.  It's never going to be a bottleneck for a 1GB line.
22:59 whytewolf is there anything being done out of it not salt based?
22:59 DEger joined #salt
22:59 ecdhe I didn't specify multi-master... how can I override its search for the master? Isn't "file_client: local" supposed to do that for me?
22:59 hasues joined #salt
23:00 nmccollum whytewolf:  The mount is a globally accessible location that contains pretty much everything.
23:00 whytewolf "pretty much everything" is vauge and doesn't answer the question
23:01 XenophonF nmccollum: why not just go masterless if you're going to do it that way?
23:01 whytewolf also, having actually used NFS across a couple hundred servers in a large complex, I can say that yes, it will bottleneck on a 1Gb line
23:01 nmccollum So, forgive the noob question since I've only been doing salt for a few days now.  But what advantage do I have of going masterless?
23:02 Edgan nfs on everything is one kernel bug away from disaster
23:02 bltmiller tobiasBora: I saw you had a question about salt via git repo – did you ever get your question answered?
23:02 ecdhe nmccollum, principally, there's no master to maintain.
23:02 hemebond nmccollum: I think there are benefits around load, not having to run a master, scaling easier.
23:02 ecdhe nmccollum, it's great for testing too.
23:03 nmccollum The only thing that NFS is used for in my environment is to get base configuration files before GPFS and Infiniband is active.
23:03 hemebond I don't see the benefits myself. I think having a master is great.
23:03 whytewolf Edgan: having actually seen NFS lose networking coherence in a solaris machine and swamp an entire network in arp traffic I can say it passed the 1 bug limit
23:03 tobiasBora bltmiller: No. I found something with reactor, but maybe it's not the better way to proceed
23:03 madboxs joined #salt
23:04 whytewolf nmccollum: are you actually using it that way, though? it sounds like the only system actually using the NFS is the master
23:04 Edgan whytewolf: yeah, I have seen nfs mounts work, ok, but seen enough of it going bad that it would be my last choice. At home I use smb instead of nfs.
23:04 nmccollum Yes, the master is following a symlink to an NFS mount to push files via the salt:/// file.managed
23:04 whytewolf iscsi.... I'm not a fan of cifs or smb either
23:05 Edgan whytewolf: I have used iscsi between two hosts, but smb is good for HTPC stuff
23:05 irctc529 joined #salt
23:05 nmccollum There's gotta be a bug that causes the salt-master to hang for nearly exactly 40 seconds every time it asks for something following that symlink.
23:05 bltmiller tobiasBora: what was your situation with root permissions?
23:06 irctc529 hey whytewolf, I got one more question: States run on minions and orch runners on masters. But I am trying to figure out how to fire a runner after a state executes
23:06 scsinutz joined #salt
23:06 tobiasBora bltmiller: What do you mean ? I have all the permission I want since it's my server.
23:06 irctc529 basically, I am doing some elevated things minions don't have access to
23:06 Edgan whytewolf: I was happier when I merged my two storage machines into one machine by going from 1tb drives(one machine) and 2tb drives(second machine) to 4tb drives(single machine).
23:06 whytewolf irctc529: use orch to push the state then have the orch require that state on a salt.runner
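A rough sketch of that pattern; the state name, target, and runner are placeholders:

    # orchestration sls, run with: salt-run state.orchestrate orch.deploy
    apply_app_state:
      salt.state:
        - tgt: 'web*'
        - sls:
          - app

    elevated_followup:
      salt.runner:
        - name: manage.up            # placeholder runner; substitute the elevated task here
        - require:
          - salt: apply_app_state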
23:07 bltmiller tobiasBora: disregard, I misinterpreted your issue. carry on! :)
23:07 nmccollum All of the config files that it's pushing total to 320KB.
23:07 whytewolf Edgan: yeah i was pretty happy when i moved 4 1TB drives into a NAS device that did iSCSI
23:07 irctc529 hmm..
23:08 irctc529 so when a minion initiates state.highstate, is there no way to get the orch runner going?
23:08 nmccollum I'm just saying, the moment I added that symlink under /srv/salt/ it added 40 seconds to every command.
23:08 hasues left #salt
23:08 Edgan whytewolf: I have been tempted to switch to 6tb or 8tb drives, but I still have so much free space that it hasn't been worth it. 4tb*9 raidz2
23:08 whytewolf nmccollum: file a bug if you think there is a bug. I just don't see how there is a bug in salt when it is using the standard filesystem items in python.
23:09 whytewolf Edgan: my media NAS has 4 6TB drives. so much space
23:09 s_kunk joined #salt
23:09 nmccollum This is the NFS mount:  gs1:/mnt/homeapps/diskless  129T   65T   64T  51% /diskless
23:10 whytewolf although if I can scrape the cash together I'm looking at Seagate's IronWolf drives... they have a 10TB one
23:10 Edgan whytewolf: Hitachi just announced 12tb non-smr drives :)
23:11 whytewolf Edgan: do they have a NAS version? I need those IOPS damn it
23:11 dxiri joined #salt
23:11 nmccollum Thank you for your guys help so far.  I do have further questions if I haven't been too annoying.
23:12 tobiasBora By the way, what is the state of the "bootstrap" script? Does it install a version which is upgradable using apt-get or not?
23:12 Edgan whytewolf: http://www.tomshardware.com/news/wd-sandisk-ssd-qlc-nand,33143.html
23:12 whytewolf tobiasBora: if you use the stable option it uses the repo.saltstack.com version
23:13 Edgan whytewolf: 390/186 read/write IOPS at QD32
23:13 nmccollum Is there a way to have a minion automatically check in with the master to ensure that it is in the correct state?
23:13 tobiasBora whytewolf: is there an arm version in this repo ?
23:14 whytewolf nmccollum: look up the scheduler... it will allow you to have it run highstate at set intervals.
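A minimal sketch of that scheduler, as minion config (it can also live in pillar); the 60-minute interval and splay value are arbitrary:

    # /etc/salt/minion.d/schedule.conf
    schedule:
      periodic_highstate:
        function: state.highstate
        minutes: 60
        splay: 300      # optional jitter so minions don't all hit the master at once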
23:14 whytewolf does not look like it
23:14 whytewolf Edgan: nice
23:15 whytewolf Edgan: my only problem is that HDD prices don't seem to be dropping. 4TB drives are still over $100
23:15 nicksloan joined #salt
23:16 Edgan whytewolf: big SSDs are coming. I am looking forward to when SSD prices really drop and we stop buying HDs
23:17 whytewolf Edgan: I know Synology just announced a new NAS device that is set up to only take SSDs and is optimized strictly for SSD
23:17 Edgan whytewolf: looks like for relative crazy money($1,374.99) you can get 4tb SSD, and I know Samsung has already announced 16tb SSDs
23:19 whytewolf ugh... bigger hard drive or start saving for the 10GbE network
23:20 Awesomecase joined #salt
23:24 dendazen joined #salt
23:25 Edgan whytewolf: how many machines do you want to switch to 10GbE?
23:25 xbglowx joined #salt
23:26 Klas joined #salt
23:26 whytewolf not many. only about 7 hosts, 2 NAS, 3 switches, and one firewall
23:27 whytewolf also needs to be copper so that the systems I don't want to switch still work
23:28 ecdhe whytewolf, you could just do point to point from your storage server to your... VM host... If you can do it all with a couple of NICs, no need for a 10GbE switch.
23:28 nmccollum So, I have 200 machines named node[1-200] that I need blah.sls applied to, and 10 machines named nodelogin[1-10] that I need login.sls applied to. I'd like to just run one sls file that automatically applies states based on what the nodename is. I figured this would be done in a top.sls file but I can't seem to get it to work.
23:29 whytewolf ecdhe: yeah, openstack wouldn't take too kindly to that ;)
23:30 whytewolf one of the NAS to one of the hosts wouldn't be a problem though; I could remove one switch out of that
23:31 nethershaw joined #salt
23:31 whytewolf nmccollum: that would need to be done in jinja in the file since you want to keep it in one file, if you split the file into 2 you could do it in top
23:31 ecdhe okay, I'm stumped... why is my masterless node searching for a master even though I told it "file_client: local"?
23:32 whytewolf did you start the minion?
23:32 whytewolf if so shut it down
23:32 nmccollum whytewolf:  the blah.sls actually follows a bunch of other sls files.
23:32 whytewolf nmccollum: ew. an include dependency hell tree huh. yeah jinja that up.
23:33 Sammichmaker joined #salt
23:33 ecdhe "[ERROR][6539] Error while bringing up minion for multi-master.  Is master at salt responding?"
23:34 whytewolf ecdhe: masterless minions don't need the salt-minion daemon running
23:34 whytewolf in fact it shouldn't run
23:34 nmccollum I'm just trying to avoid having to do "# salt 'node[1-200]' state.apply blah && salt 'nodelogin[1-10]' state.apply login"
23:34 whytewolf [unless something changed recently that I wasn't told about]
23:34 whytewolf nmccollum: ohhhhhhhh
23:35 whytewolf one second
23:35 pipps joined #salt
23:35 nmccollum I figured it was something simple and I wasn't explaining it well.
23:35 ecdhe If I set "master: localhost" in /etc/salt/minion, the error changes to "is master at localhost running?"
23:35 madboxs joined #salt
23:35 nmccollum My brain is about frizzled today
23:35 madboxs joined #salt
23:35 ecdhe So it's reading /etc/salt/minion, but still ignoring "file_client: local"
23:36 whytewolf nmccollum: https://gist.github.com/whytewolf/b0b79516916e6d4346d88ee5f0d49f41
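The gist itself isn't reproduced in the log; a top.sls of the usual shape for what nmccollum described earlier (names taken from his question) would look roughly like this. One caveat worth flagging: glob bracket classes match a single character, so 'node[1-200]*' only matches names whose next character is 0, 1, or 2; a pcre match is shown as an alternative:

    # /srv/salt/top.sls (must sit at the root of file_roots)
    base:
      'node[1-200]*':
        - blah
      'nodelogin*':
        - login
      # regex targeting sidesteps the single-character glob class:
      # 'node\d+(\..*)?$':
      #   - match: pcre
      #   - blah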
23:36 ecdhe I didn't run the salt minion, vagrant started it for me.  The command it ran is "/usr/bin/python /usr/bin/salt-minion"
23:36 amontalban joined #salt
23:36 amontalban joined #salt
23:37 ecdhe I can see from ps -ax that salt-minion then called "salt-call saltutil.sync_all"
23:37 nmccollum whytewolf:  I had that verbatim a minute ago and it bombed out on me.  Lemme wait for all these jobs to finish and i'll try again
23:37 whytewolf ecdhe: in your Vagrantfile do you have salt.masterless = true
23:37 whytewolf ?
23:38 ecdhe whytewolf, nope!  I'll give that a try.
23:38 ecdhe It has worked for years without that.
23:38 whytewolf ecdhe: it was literally the first thing I saw on the vagrant help page
23:39 ecdhe Well it's worked for years because vagrant passes in the minion file with "file_client: local" which normally overrides the master-seeking behavior
23:40 whytewolf ecdhe: file_client: local has nothing to do with the minion daemon being spun up when it shouldn't be
23:40 whytewolf that daemon is confusing the commands
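A small shell sketch of the masterless flow being described, assuming file_client: local is already set in /etc/salt/minion (in Vagrant terms, salt.masterless = true in the Vagrantfile arranges roughly this):

    # stop the stray daemon that keeps looking for a master
    sudo systemctl stop salt-minion     # or: sudo service salt-minion stop
    # masterless runs go through salt-call --local instead
    sudo salt-call --local state.apply test=True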
23:42 mosen joined #salt
23:42 nmccollum whytewolf: http://hastebin.com/iruwahuseh.pas
23:43 nmccollum 10/10, I'm probably doing something dumb.
23:43 whytewolf nmccollum: ... that isn't a top.sls
23:43 nmccollum Does it literally need to be named top.sls ?
23:44 whytewolf nmccollum: it does. or you need to specify that the master top is different
23:44 whytewolf it also needs to be in the root of the file_roots
23:44 nmccollum executed with : salt -t 3000 '*' state.apply top test=True  ?
23:44 whytewolf no, just state.apply
23:45 nmccollum so just salt state.apply ?
23:45 whytewolf salt -t 3000 '*' state.apply test=True
23:46 scsinutz joined #salt
23:46 ecdhe whytewolf, no joy, the host is still trying to find multi-masters
23:46 ecdhe blah
23:46 nmccollum several hundred nodes replied correctly, minus one that said " Comment: No Top file or external nodes data matches found."
23:47 ecdhe I even tried changing the node name from "boot" to "booter" in case there was some conflict with loading a python module or something.
23:47 whytewolf nmccollum: does the one that didn't work match the naming scheme?
23:48 nmccollum same naming scheme everything else has.
23:48 whytewolf so it should match dmc[1-200]* or be named dmcvlogin1
23:49 nmccollum for example, dmc2 worked, dmc34 did not
23:49 nmccollum Just one wanted to be a special snowflake.
23:50 whytewolf strange... wonder if its cache just wasn't updating for some reason
23:50 whytewolf ecdhe: I'm not sure then.
23:51 ecdhe I'm trying a few things just to be sure.  The checking for masters happens *after* my pillars and states get rendered.
23:52 ecdhe So it's possible this is happening because one of my pillar variables has overwritten some behavioural parameter
23:52 whytewolf have a pillar value for file_client?
23:53 scsinutz joined #salt
23:53 ecdhe I have TWO vms in this vagrant file, and one of them boots, bootstraps salt, gets its states+pillar, and gets provisioned nicely.  The other just hangs trying to find a master.
23:53 ecdhe Let me check pillar
23:54 ecdhe `salt-call --local pillar.items | grep file_client` shows nothing.
23:54 whytewolf it was a long shot; I didn't think that variable uses config.get anyway
23:55 whytewolf maybe set log_level_logfile: all
23:57 nmccollum [DEBUG   ] compound_match dmc34.blah.edu ? "dmc[1-200]*" => "False"
23:57 nmccollum uh huh
23:57 whytewolf okay ... that ... should be bug reported
23:58 whytewolf last i checked 34 is somewhere between 1-300
23:58 whytewolf err 1-200
23:58 whytewolf think I'm getting tired
23:58 nmccollum Yeah, i'm done for the day.  Got a link to salts bugzilla or whatever?
23:59 whytewolf https://github.com/saltstack/salt/issues
