
IRC log for #salt, 2016-11-16


All times shown according to UTC.

Time Nick Message
00:04 pipps joined #salt
00:07 edrocks joined #salt
00:07 mosen joined #salt
00:09 jas02 joined #salt
00:17 pipps joined #salt
00:17 mirko joined #salt
00:17 pipps99 joined #salt
00:18 KajiMaster joined #salt
00:23 toastedpenguin joined #salt
00:25 Klas joined #salt
00:32 jhujhiti joined #salt
00:33 XenophonF schemanic: sorry I missed you this afternoon - feel free to ping me tomorrow if you have further questions about salt-formula
00:34 XenophonF schemanic: i have a master bootstrap script that might aid you
00:35 Valfor joined #salt
00:35 Valfor joined #salt
00:40 jhujhiti joined #salt
00:40 Edgan XenophonF: I think salt-ssh is the way to go for master bootstrapping.
00:42 * MTecknology just saw a few hundred crows from work. According to Hitchcock, I'm now about to die a miserable and degrading death.
00:43 MTecknology wrong channel...
00:44 binocvlar It amused me all the same MTecknology ;)
00:48 jeddi joined #salt
00:51 netcho joined #salt
00:51 XenophonF Edgan: my approach is to set up salt-formula locally, with temporary top.sls files and Pillar keys
00:51 XenophonF one state.apply later, i have a fully functional salt-master from scratch
00:52 XenophonF YMMV and salt-ssh sounds like a great approach
00:52 XenophonF so does masterless
00:56 arif-ali joined #salt
01:03 Edgan XenophonF: I have the same with salt-ssh -i 'salt-master-hostname' state.highstate
01:03 Edgan XenophonF: Salt-ssh is a wrapper around salt-call
01:05 raspado_ hi all whats the command to show all registered minions on a salt master?
01:05 Edgan raspado_: salt-key -L
01:06 raspado_ is there one that shows whats actively registered and responding
01:06 Edgan raspado_: salt '*' test.ping
01:07 fannet joined #salt
01:10 pdayton joined #salt
01:10 jas02 joined #salt
01:11 raspado_ Edgan: salt-run manage.up seems to be a nice one
01:11 pdayton1 joined #salt
01:13 Edgan raspado_: not bad
01:14 Edgan raspado_: test.ping will tell you which didn't respond
01:14 raspado_ yep
01:15 raspado_ i like test.ping thx
01:15 pipps joined #salt
01:16 pipps99 joined #salt
01:18 raspado_ is it possible to add the minion name
01:18 raspado_ as the FQDN
01:18 raspado_ instead of the shortname?
01:20 babilen Add it where?
01:21 raspado_ hmm disregard
01:21 raspado_ thx babilen
01:22 awiss_ joined #salt
01:26 XenophonF raspado_: iirc Salt uses the normal Python OS interface to get the hostname, so if it's set correctly in the OS, it'll be right in Salt
01:26 raspado_ does it use /etc/hostname?
01:26 XenophonF or you can override it by writing the desired FQDN to /etc/salt/minion_id (or /usr/local/etc/salt/minion_id)
01:26 XenophonF that depends on the operating system
01:27 XenophonF e.g., on RHEL7/CentOS7, you have to configure systemd-hostnamed
01:27 XenophonF while on Ubuntu 14.04, iirc you have to set the FQDN in /etc/hosts
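Either mechanism XenophonF describes — writing the desired name to the minion_id file or letting the OS report it — can be sketched as a minion config fragment (the FQDN is illustrative):

```yaml
# /etc/salt/minion.d/id.conf — the `id` option has the same effect as writing
# the desired name to /etc/salt/minion_id: it overrides the OS-derived hostname
id: web01.example.com
```

After changing the id, restart the minion and re-accept its key on the master, since the minion presents itself under the new name.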
01:27 iggy back to the question of "add it where?"
01:27 raspado_ nice thx XenophonF
01:29 babilen raspado_: What are you trying to achieve? It's hard to help you if you are still looking, but say "hmm disregard"
01:29 babilen :)
01:30 raspado_ babilen: hacking the gibson
01:31 babilen I doubt it
01:31 raspado_ ;) was able to see fqdn another way
01:34 bowhunter joined #salt
01:36 amontalban joined #salt
01:36 amontalban joined #salt
01:39 rashford joined #salt
01:41 woodtablet left #salt
01:46 jeddi joined #salt
01:47 debian112 joined #salt
01:51 fracklen joined #salt
01:58 samodid joined #salt
01:59 akhter joined #salt
02:04 snc joined #salt
02:08 edrocks joined #salt
02:09 mswart joined #salt
02:12 notnotpeter joined #salt
02:12 jas02 joined #salt
02:13 catpigger joined #salt
02:14 nicksloan joined #salt
02:22 netcho joined #salt
02:33 evle joined #salt
02:36 onlyanegg joined #salt
02:46 netcho joined #salt
02:46 bowhunter joined #salt
02:48 ilbot3 joined #salt
02:48 Topic for #salt is now Welcome to #salt! | Latest Versions: 2015.8.12, 2016.3.4 | Support: https://www.saltstack.com/support/ | Logs: http://irclog.perlgeek.de/salt/ | Paste: https://gist.github.com/ (please don't multiline paste into channel) | See also: #salt-devel, #salt-offtopic | Ask with patience as we are volunteers and may not have immediate answers
02:56 notnotpeter joined #salt
02:57 hasues joined #salt
02:57 hasues left #salt
03:02 bastiandg joined #salt
03:08 fannet joined #salt
03:31 systo joined #salt
03:34 orionx joined #salt
03:35 orionx_ joined #salt
03:39 amontalban joined #salt
04:09 edrocks joined #salt
04:10 jimklo joined #salt
04:14 jas02 joined #salt
04:15 systo joined #salt
04:20 pipps joined #salt
04:28 informant joined #salt
04:30 jimklo joined #salt
04:34 systo joined #salt
04:42 donmichelangelo joined #salt
04:44 netcho joined #salt
04:53 akhter joined #salt
04:57 systo joined #salt
05:08 orionx joined #salt
05:09 fannet joined #salt
05:14 jas02 joined #salt
05:15 impi joined #salt
05:20 onlyanegg joined #salt
05:41 amontalban joined #salt
05:42 bocaneri joined #salt
05:48 ivanjaros joined #salt
05:50 rai_ joined #salt
05:53 fracklen joined #salt
05:54 rdas joined #salt
06:00 xbglowx__ joined #salt
06:02 jimklo joined #salt
06:03 awiss joined #salt
06:03 shalkie joined #salt
06:04 awiss_ joined #salt
06:15 jas02 joined #salt
06:41 Miouge joined #salt
06:45 rem5_ joined #salt
06:45 netcho joined #salt
06:51 Miouge joined #salt
07:00 jumzter joined #salt
07:04 raspado joined #salt
07:15 justan0theruser joined #salt
07:16 jas02 joined #salt
07:26 keimlink joined #salt
07:27 ivanjaros joined #salt
07:29 fracklen joined #salt
07:30 fracklen joined #salt
07:32 nidr0x joined #salt
07:37 pdayton joined #salt
07:43 amontalban joined #salt
07:43 amontalban joined #salt
07:50 akhter joined #salt
08:05 fracklen joined #salt
08:05 rem5 joined #salt
08:07 babilen joined #salt
08:13 impi joined #salt
08:13 edrocks joined #salt
08:14 netcho joined #salt
08:15 fracklen joined #salt
08:18 jas02 joined #salt
08:19 fracklen joined #salt
08:26 JohnnyRun joined #salt
08:27 toanju joined #salt
08:33 darioleidi joined #salt
08:39 ronnix joined #salt
08:43 samodid joined #salt
08:51 ronnix joined #salt
08:56 Rumbles joined #salt
08:56 netcho joined #salt
08:57 davidone joined #salt
09:02 Yee joined #salt
09:02 Yee Hi
09:03 Yee we have two nodes multi-master setup
09:04 Yee when one of the master down, salt '*' test.ping not returning the minion status
09:05 Yee " Minion did not return. [No response]" this is what i get
09:05 Yee but minion is running in that server "salt-minion (pid  1656) is running..."
09:10 fannet joined #salt
09:10 mikecmpbll joined #salt
09:14 Yee is this good time to ask question
09:16 lempa joined #salt
09:18 hemebond Yee, are you sure the minion is connected to the active master?
09:18 Reverend check minion logs :)
09:18 jas02 joined #salt
09:18 Yee in my setup i have only two servers master and minion running on both servers
09:18 hemebond Yee: Minions only connect to one master at a time.
09:19 impi joined #salt
09:19 Yee when both servers up and running the test.ping return correct status
09:19 Yee but when one server down i supposed to get one minion response when checking test.ping
09:20 hemebond Minion only connects to one master. If master goes down, minion is not connected.
09:21 netcho joined #salt
09:21 Yee hemebond: not clear; i am running multi-master environment, two servers both running master and minion
09:22 hemebond Yee: Unless it has changed recently, masters don't communicate.
09:22 Yee only one server is running the other one is down
09:22 hemebond There is no "multi-master" to the best of my knowledge.
09:22 hemebond Do you have a link to the documentation you used to configure your multi-master setup?
09:22 Yee Aah, there is
09:23 Yee https://docs.saltstack.com/en/latest/topics/tutorials/multimaster.html
09:26 Yee [salt.minion ][ERROR   ][1656] Error while bringing up minion for multi-master. Is master 10.246.130.42 responding?
09:26 Yee getting this error
09:27 hemebond Is that the live master or dead master?
09:27 Yee i didnt power-on the second master
09:27 Yee once boot the second master everything become ok
09:28 hemebond I'm still reading that page you linked to, but so far it hasn't mentioned automatic fail-over or communication between the masters.
09:29 hemebond Also, is that IP the dead master or the live master?
09:29 Yee the above ip is for dead master
09:29 hemebond Is it the first in the minion config?
09:30 Yee multi-master concept is for high-availability so if one master dead i should have all the functionality perform all of the operations
09:30 s_kunk joined #salt
09:30 Yee nope
09:30 hemebond Yeah, but it's not automatic.
09:30 Yee master:   - 10.246.130.40   - 10.246.130.42
09:30 hemebond (as far as I know, even after reading that page)
09:31 hemebond Do you have "master_type: failover" in the minion config?
09:31 Yee no
09:31 AndreasLutro "multi-master concept is for high-availability" no
09:31 AndreasLutro multi-master is very dumb
09:32 hemebond ^ ????
09:32 AndreasLutro don't assume anything
09:32 hemebond multi-master is really just running two masters.
09:32 hemebond There is no communication between them.
09:32 hemebond I'm not even sure how one master can send events to minions not connected, but it says it does. *shrug*
09:33 AndreasLutro where does it say that?
09:33 Yee yes i understood, we can have some external file_servers to keep the configuration between masters
09:33 hemebond "When using a multi-master setup, all masters are running hot, and any active master can be used to send commands out to the minions."
09:34 AndreasLutro right, in that scenario, the minions connect to all available masters simultaneously
09:34 hemebond Ah.
09:34 Yee so why the alive master not sending commands to minion in my setup
09:34 AndreasLutro but in that case you'll have problems with double publishing
09:34 AndreasLutro so you need to do stupid stuff like only configure reactors on 1 master
09:34 Yee confusing
09:34 hemebond Yee: If you restart the minion does it connect to the live master?
09:34 AndreasLutro Yee: the minion probably isn't connected to the master
09:35 Yee how can i check whether the minion connected to master
09:35 Yee ok let me restart minion
09:35 hemebond salt-run manage.up
09:35 AndreasLutro ^ run that on all masters
09:35 hemebond Or run the minion in debug mode "salt-minion -l debug"
09:35 AndreasLutro see which one has the minion
09:37 Yee salt-run manage.up does not return any info
09:38 Yee i did restart minion but test.ping still same
09:38 Yee no response from minion
09:40 AndreasLutro if you ssh to the minion and run `salt-call test.ping` does that return true?
09:40 hemebond Can you telnet from the minion to the live master?
09:41 Yee not sure there is some miscommunication. i have two servers only, both masters and minion also running in the same server
09:41 hemebond ?
09:43 Yee hemebond: sorry if i missed something to explain the issue
09:44 Yee let me reiterate
09:44 Yee in my setup i have only two servers both are salt masters and minion also running on both
09:45 amontalban joined #salt
09:45 amontalban joined #salt
09:45 yuhlw______ joined #salt
09:46 N-Mi joined #salt
09:46 N-Mi joined #salt
09:46 hemebond And the masters can talk to each other over 4505 and 4506?
09:46 Yee when only one server is up and running, the minion is not responding
09:46 hemebond [dead master live minion] -> 4505 -> [live master live minion] ?
09:47 theredcat joined #salt
09:48 Yee hemebond: one master is not alive at the moment
09:48 hemebond I know.
09:48 Yee when both alive yes they are ok
09:48 hemebond But the minion is still running on that server, correct?
09:48 hemebond [dead master live minion]
09:48 hemebond No?
09:49 Yee i can see response from both minion when run "salt '*' test.ping" from both masters
09:49 Yee yes minion and master running
09:49 chron0 joined #salt
09:49 chron0 ahoy
09:49 hemebond [___dead___ master, live minion]
09:50 yuhlw______ joined #salt
09:51 Yee any idea why, when only one server is up and running, the minion on that server is not responding
09:55 hlub joined #salt
10:01 toanju joined #salt
10:04 Lionel_Debroux joined #salt
10:05 JohnnyRun joined #salt
10:06 amontalban joined #salt
10:06 amontalban joined #salt
10:10 Yee if multi-master is supported, why is salt checking whether the other master is up or not?
10:10 Yee 2016-11-16 18:05:26,426 [salt.minion      ][ERROR   ][4467] Error while bringing up minion for multi-master. Is master at 10.246.130.42 responding?
10:10 Yee it didnt make sense
10:11 hemebond Yee: AndreasLutro said the minions connect to every master in the list.
10:11 hemebond Not sure why your minions aren't connecting to the live master.
10:11 Yee ooh ok so the current running minion check the other dead master status
10:11 Yee got it
10:14 Yee i do see the connection are established between master and minion in tcp level
10:15 Yee tcp        0      0 ::ffff:10.246.130.40:52234  ::ffff:10.246.130.40:4505   ESTABLISHED
10:15 Yee tcp        0      0 ::ffff:10.246.130.40:4505   ::ffff:10.246.130.40:52234  ESTABLISHED
10:15 Yee tcp        0      0 ::ffff:10.246.130.40:4506   ::ffff:10.246.130.40:37936  ESTABLISHED
10:16 hemebond So the local minion is connected.
10:17 hemebond But "salt-run manage.up" returns nothing on that master?
10:18 ws2k3 joined #salt
10:18 ws2k3 joined #salt
10:19 ws2k3 joined #salt
10:19 ws2k3 joined #salt
10:19 jas02 joined #salt
10:20 ws2k3 joined #salt
10:20 Yee hemebond: yes you are correct
10:20 amcorreia joined #salt
10:21 hemebond What about "salt-call test.ping" ?
10:21 Yee it return True
10:22 hemebond salt-run manage.alived
10:22 hemebond ?
10:23 Yee 'manage.alived' is not available.
10:23 hemebond Huh. What version are you running?
10:23 mohae joined #salt
10:23 Yee salt 2015.5.10 (Lithium)
10:23 hemebond Oh.
10:34 Yee it is weird; i started the second server and then shut it down again. Now i can see "salt '*' test.ping" return the correct response
10:34 hemebond I wonder if the minion was trying one master at a time or something.
10:35 kbaikov joined #salt
10:35 Yee the first server is return True and the second one is "No response"
10:35 hemebond And not trying the second until the first was up.
10:39 Yee this is problematic; say both masters are shut down for maintenance, and when they are started one of them fails to boot for some reason, then the surviving master does not function as normal
10:39 hemebond Well you might need to play with the retry settings.
10:40 hemebond I have a single master but configure my minions to work in failover mode and they will retry constantly.
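The failover mode hemebond mentions can be sketched as a minion config (the master IPs are taken from the discussion above; the interval values are illustrative):

```yaml
# /etc/salt/minion — sketch, not a definitive setup
master:
  - 10.246.130.40
  - 10.246.130.42
master_type: failover       # connect to one master at a time, fail over on loss
master_alive_interval: 30   # seconds between master liveness checks
retry_dns: 30               # keep retrying name resolution instead of giving up
```

Without `master_type: failover`, a minion given a list of masters runs hot/hot against all of them, which is the behaviour discussed above.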
10:43 kbaikov joined #salt
10:48 JohnnyRun joined #salt
10:49 netcho_ joined #salt
10:50 netcho joined #salt
10:54 awiss joined #salt
10:58 netcho joined #salt
11:00 Rubin joined #salt
11:01 raspado joined #salt
11:10 mavhq joined #salt
11:11 Lionel_Debroux joined #salt
11:11 fannet joined #salt
11:13 bbbryson joined #salt
11:15 edrocks joined #salt
11:19 fleaz joined #salt
11:20 jas02 joined #salt
11:26 Yee hemebond: is this a salt limitation?
11:27 hemebond Yee: I don't know enough about running multiple masters. I only run a single master.
11:28 Yee Got it; how can i further query about this?
11:28 weylin joined #salt
11:28 hemebond Post to the mailing list or hang around for someone who has used it a lot.
11:30 Yee Thanks; where i can find the mailing list
11:30 hemebond https://saltstack.com/community/
11:32 Yee Thanks a lot
11:32 hemebond Good luck ☺
11:35 davidone ... and you'll see when you (want to) have multiple syndic nodes :)
11:36 hemebond Syndic works well?
11:38 hemebond Wait... does that allow minions to reconnect to another master?
11:45 felskrone joined #salt
11:47 weylin joined #salt
11:49 msn joined #salt
11:53 KingOfFools joined #salt
11:53 impi joined #salt
11:55 Mani_ joined #salt
11:55 Mani_ Hi
11:55 Mani_ Master is losing minion connectivity after sometime
11:55 Mani_ Has any one faced this problem?
11:55 xbglowx__ joined #salt
11:57 Reverend so, we're looking at running letsencrypt on an HA cluster... any way to get the output of a file from a minion and put it into a pillar?
12:04 XenophonF isn't Salt Mine the canonical answer?
12:05 Reverend probably
12:05 mikecmpb_ joined #salt
12:07 Reverend I really don't think letsencrypt is designed for the network we've created here.
12:07 Reverend i have the feeling it's for folks who have a single webserver with no HA, and just want an SSL easy as pie.
12:08 AndreasLutro just do SSL termination at a single point in your infra and you don't need to copy certs everywhere
12:08 TyrfingMjolnir joined #salt
12:08 om2 joined #salt
12:09 Reverend AndreasLutro: that defeats the point of HA + scalability
12:09 chron0 anyone using nginx-formula? how do you force it to use the nginx repos?
12:09 Reverend SPOF SSL termination is a bad idea.
12:14 Reverend joined #salt
12:15 Reverend not sure if anyone saw my idea thanks to netsplit... but awesome idea is awesome.
12:15 oida joined #salt
12:16 saltsa joined #salt
12:16 stupidnic joined #salt
12:17 mTeK joined #salt
12:17 tehsu joined #salt
12:19 netcho joined #salt
12:21 jas02 joined #salt
12:25 fizmat joined #salt
12:35 davidone hemebond: no. syndic nodes allow you to have a more complex topology
12:38 emaninpa joined #salt
12:38 tuxx joined #salt
12:42 mswart left #salt
13:12 impi joined #salt
13:17 edrocks joined #salt
13:22 jas02 joined #salt
13:22 _JZ_ joined #salt
13:26 edrocks joined #salt
13:27 DammitJim joined #salt
13:33 netcho joined #salt
13:34 krymzon joined #salt
13:36 systo joined #salt
13:37 Edulogardo joined #salt
13:40 amontalban joined #salt
13:40 amontalban joined #salt
13:45 _aeris_ joined #salt
13:47 IgorK__ joined #salt
13:49 promorphus joined #salt
13:54 raspado joined #salt
13:54 oida joined #salt
13:55 GnuLxUsr joined #salt
13:59 akhter joined #salt
14:07 komputes joined #salt
14:08 toastedpenguin joined #salt
14:08 brokensyntax joined #salt
14:15 StolenToast joined #salt
14:19 mohae_ joined #salt
14:22 jas02 joined #salt
14:25 subsignal joined #salt
14:25 ronnix joined #salt
14:27 Xopher joined #salt
14:28 sjoerd_ joined #salt
14:35 DammitJim joined #salt
14:40 racooper joined #salt
14:41 mohae joined #salt
14:42 sjoerd_ Hi all, I'm having a fight with merging variables from my pillar and a defaults.yaml, using a map.jinja. I've got no clue what's going wrong. Who knows what I'm doing wrong here? http://pastebin.com/6cN2Vs6P
14:45 mikecmpb_ joined #salt
14:47 XenophonF just a sec
14:52 XenophonF sjoerd_: take a look at this relatively simple example - https://github.com/irtnog/openssh-formula/tree/master/ssh
14:52 XenophonF the contents of defaults.yaml should be identical to what you might put into a pillar .sls file
14:52 bowhunter joined #salt
14:53 XenophonF in the pillar .sls file, you might only override a few of the default settings
14:53 XenophonF but you'd structure the pillar .sls file identically
14:53 XenophonF for example - https://github.com/irtnog/openssh-formula/blob/master/pillar.example#L37
14:54 XenophonF now as for your map.jinja file, it's completely wrong
14:54 nicksloan joined #salt
14:54 sjoerd_ but it seems a reasonable idea to be able to override the vars from defaults with things from pillar, but not needing to have every single thing in the pillar?
14:55 XenophonF that's what i'm saying
14:55 XenophonF you're calling salt.grains.filter_by() wrong
14:56 sjoerd_ I've been mucking with it for a while, it's a mess ;)
14:56 XenophonF the calling convention is that the first arg is a dictionary mapping different possible values of the targeted grain (os_family by default) to some kind of value
14:56 XenophonF in the case of salt formulas, that value is a dictionary
14:56 XenophonF and it's supposed to be structured the same as the _value_ of your top level dict
14:56 XenophonF so for example, https://github.com/irtnog/openssh-formula/blob/master/ssh/map.jinja
14:57 XenophonF you can see that i have key-value pairs for each O/S family
14:57 XenophonF Arch, Debian, RedHat, and so on
14:57 sjoerd_ Yes I noticed those in several places
14:58 sjoerd_ Internally I've only got one flavour at the moment though, so i tried to ignore that
14:58 XenophonF and that the _value_ is a dictionary structured the same as the dict to which `ssh:` points in YAML
14:59 XenophonF so .../ssh/defaults.yaml has a dictionary with one key, `ssh` and a value that is also a dictionary
14:59 XenophonF it is that second-level dictionary that you're modifying
14:59 impi joined #salt
14:59 XenophonF so you can see that dictionary has a key named `packages` with a list value
15:00 XenophonF by default, the list value is nil
15:00 XenophonF er, []
15:00 sjoerd_ Indeed, let me bring my example back so something that looks more sane and give that another try
15:00 scoates joined #salt
15:00 XenophonF the third step is to merge the two values
15:00 XenophonF hence the call to default_settings.ssh.update()
15:01 XenophonF *NOT* a call to default_settings.update()
15:01 XenophonF the final step is to merge settings from pillar
15:02 XenophonF so that merges the _value_ of the `ssh` Pillar key with the _value_ of `default_settings.ssh` a/k/a `default_settings['ssh']`
15:03 XenophonF and it returns the merged value (a dictionary) as the variable `ssh_settings`
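The three steps XenophonF walks through can be sketched as a map.jinja (file names and keys follow his openssh-formula example; the os_family values are illustrative):

```jinja
{#- ssh/map.jinja: sketch of the defaults + os_family + pillar merge #}
{%- import_yaml "ssh/defaults.yaml" as default_settings %}

{#- steps 1 and 2: pick per-os_family values and fold them into the
    second-level dict, i.e. default_settings.ssh, NOT default_settings #}
{%- set os_family_map = salt['grains.filter_by']({
      'Debian': {'packages': ['openssh-server']},
      'RedHat': {'packages': ['openssh-server']},
    }, grain='os_family') %}
{%- do default_settings.ssh.update(os_family_map) %}

{#- step 3: merge the value of the `ssh` pillar key over the merged defaults #}
{%- set ssh_settings = salt['pillar.get']('ssh',
      default=default_settings.ssh, merge=True) %}
```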
15:03 nicksloan joined #salt
15:05 ALLmightySPIFF joined #salt
15:05 johnkeates joined #salt
15:05 ALLmightySPIFF joined #salt
15:06 * sjoerd_ is reading through your openssh formula...
15:06 ALLmightySPIFF joined #salt
15:09 Tanta joined #salt
15:10 XenophonF i apologize in advance for my jinja config file templates
15:12 sjoerd_ So in default_settings.ssh.update() the 'ssh' is the key of the to-merge dict, right
15:12 mpanetta joined #salt
15:13 fannet joined #salt
15:15 colegatron joined #salt
15:15 babilen sjoerd_: That call would update the dictionary at default_settings.ssh with nothing
15:16 babilen (assuming the datastructure there is a dictionary)
15:18 babilen sjoerd_: https://github.com/saltstack-formulas/template-formula is a good, well, template
15:18 sjoerd_ yes I found that one a while back
15:18 babilen Argh .. who fucked up the "lookup" pillar in there again?
15:19 sjoerd_ one of the things that I wanted is to merge lists if possible
15:19 XenophonF i don't like that template
15:19 sjoerd_ which it didn't seem to be
15:20 XenophonF so
15:20 raspado joined #salt
15:20 XenophonF there's a couple of approaches
15:20 sjoerd_ and then I found this whole thread as well: https://github.com/saltstack/salt/issues/28606#issuecomment-211537586
15:20 saltstackbot [#28606][OPEN] How to override nested parameters in map.jinja | If I have a defaults.yaml structure that looks like this (nested):...
15:20 XenophonF you could write your own exec module to handle merge cases not handled by the default .update() method
15:21 XenophonF e.g., https://github.com/irtnog/apache-formula/blob/master/_modules/apache_formula_helper.py
15:21 sjoerd_ couldn't really make heads or tails of that whole bug string and got kinda lost in the woods
15:21 XenophonF (that's my apache-formula, not the saltstack-formulas one)
15:21 XenophonF the other approach is that when extending things like lists or second-level dictionaries, you copy the entire default value into pillar
15:22 XenophonF which is simpler
15:22 sjoerd_ Yes, that seemed weak to me :|
15:22 mpanetta_ joined #salt
15:22 XenophonF well, i can understand that
15:22 keltim_ joined #salt
15:22 XenophonF here's a way to justify it:
15:22 XenophonF POLA
15:22 keltim joined #salt
15:22 sjoerd_ que?!
15:23 XenophonF if you override a second-level value and then discover later that it isn't overridden, but merged
15:23 XenophonF that's a violation of the principle of least astonishment ;)
15:23 XenophonF plus how would you signal override vs. merge?
15:23 sjoerd_ Polish KISS?
15:23 XenophonF hehe
15:23 jas02 joined #salt
15:23 sjoerd_ Yes well, that was an afterthought. I already went down the rabbithole
15:25 sjoerd_ I got annoyed by ACLs for my postgres formula, and needing to copy paste a pretty long list into the pillar
15:25 sjoerd_ so I thought, let's merge
15:26 babilen One can use https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.defaults.html
15:26 pdayton joined #salt
15:27 babilen It also shouldn't be too hard to get that to merge lists
15:27 babilen Maybe ship your own execution module based on that and submit a suitable PR
15:27 pdayton1 joined #salt
15:27 ponyofdeath joined #salt
15:27 babilen In the end we really need a jinja filter for that
15:27 babilen I'll write that next year
15:27 babilen User defined jinja filter support
15:28 sjoerd_ My main issue is that my python fu is weak. I never got around to it and always seem to go back to perl.
15:28 XenophonF i dunno
15:28 sjoerd_ Salt is hopefully my method to get over the hump
15:28 XenophonF i think merging second-level values is a bad idea
15:28 XenophonF b/c how do you signal merge-vs-override?
15:28 sjoerd_ global setting?
15:28 XenophonF terrible idea
15:28 XenophonF what if i want both?
15:29 XenophonF better to merge manually
15:29 sjoerd_ according to that bug thread i linked earlier it's already in the works
15:29 * XenophonF shrugs
15:29 edrocks joined #salt
15:29 sjoerd_ something about jinja filters, but i dont' know how/what that is yet
15:30 XenophonF http://jinja.pocoo.org/docs/dev/templates/#filters
15:30 XenophonF it's a Jinja feature
15:30 XenophonF YAML and Jinja and ZeroMQ and so on have lives outside of Salt.
15:31 XenophonF If you're writing formulas, it's definitely worth your time to skim through the Jinja template designer doc
15:31 sjoerd_ Yup, /glares @ ansible
15:31 XenophonF LOL
15:32 sjoerd_ it's a lot to take in after years with cfengine
15:33 * sjoerd_ shrugs
15:33 anotherzero joined #salt
15:35 manji in salt-ssh
15:36 manji I am trying to get the ip address of all minions
15:36 manji for munin
15:36 manji so I have a munin.conf template, which simply says
15:36 manji
15:36 manji {%- for host,data in salt['mine.get']('*', 'grains.items').items() %}
15:36 manji [{{data.id}}]
15:36 manji address {{data.ip_interfaces.eth0[0]}}
15:36 manji use_node_name yes
15:36 manji {%- endfor %}
15:37 manji damn sorry for not pasting that online
15:37 manji anyway, tl;dr this returns an empty dictionary
15:38 manji on the other hand, in the same installation
15:38 sjoerd_ XenophonF: thanks for the help, I'll try it all tomorrow. It's already going home time here
15:38 manji I have an sls where I call the exact same function, and it works
15:39 XenophonF ttyl sjoerd_
15:39 anotherzero joined #salt
15:41 pdayton joined #salt
15:45 dunz0r Can I require a state from another state?
15:46 XenophonF you mean like a requisite?
15:46 gtmanfred the full state file? or just one block?
15:46 dunz0r gtmanfred: The full state.
15:46 gtmanfred yeah
15:46 XenophonF there's an sls requisite you can use
15:46 gtmanfred put sls: <service.apache>
15:46 gtmanfred and you can require the service/apache.sls state file
15:46 dunz0r Ah, then I understand the documentation correctly :)
15:47 dunz0r It goes from the root, right? So if my state is in foo/bar/baz/ I'd do sls: foo.bar.baz?
15:47 gtmanfred yeah, the same way you would reference it in top.sls
15:47 XenophonF https://docs.saltstack.com/en/latest/ref/states/requisites.html#require-an-entire-sls-file
15:47 XenophonF yes
15:47 XenophonF assuming that's foo/bar/baz/init.sls
15:48 dunz0r XenophonF: Yeah, reading that as well :)
15:48 gtmanfred or foo/bar/baz.sls
15:49 dunz0r gtmanfred: XenophonF Thanks!
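The requisite discussed above can be sketched like this (state and package names are illustrative; note the sls must also be included before it can be required):

```yaml
# assumes foo/bar/baz/init.sls (or foo/bar/baz.sls) exists under the file roots
include:
  - foo.bar.baz

my-app:
  pkg.installed:
    - require:
      - sls: foo.bar.baz
```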
15:52 XenophonF manji: sorry - no idea why that's not working
15:52 XenophonF wait
15:52 manji I will test a little more and submit an issue
15:52 XenophonF is grains.items intended to be a function call?
15:52 XenophonF or a dictionary lookup?
15:53 manji it is in my mine_functions
15:53 XenophonF wouldn't you be looking for a key name there? like `foo:bar`?
15:54 manji no
15:54 manji the line works when I call it from a state file
15:55 manji if there is an issue, it is when the file module is rendering a file
15:57 netcho joined #salt
15:59 tvinson joined #salt
16:00 BetaFrey joined #salt
16:00 KingOfFools joined #salt
16:01 VR-Jack2-H joined #salt
16:02 snergster joined #salt
16:03 remyd1 joined #salt
16:05 fyarci joined #salt
16:05 nicksloan joined #salt
16:05 remyd1 Hi. I have to configure some minions on another network (HPC cluster). I have a gateway which is already a salt minion. Should I transform it in a salt master or is there any way to turn into some kind of salt proxy, with reactor or whatever ?
16:05 XenophonF manji: why don't you pass the results of the mine.get call to the file template via a context variable, instead?
16:06 XenophonF like this
16:06 XenophonF hosts: {{ salt['mine.get']('*', 'grains.items')|yaml }}
16:06 winsalt joined #salt
16:07 XenophonF cf. the context argument to file.managed or file.recurse, https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.managed
16:07 manji hmm let me try that
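XenophonF's suggestion amounts to moving the mine.get call into the state file and handing the result to the template via `context` (paths and names are illustrative):

```jinja
# munin/server.sls — sketch; the template loops over `hosts`
# instead of calling mine.get itself
/etc/munin/munin.conf:
  file.managed:
    - source: salt://munin/files/munin.conf.jinja
    - template: jinja
    - context:
        hosts: {{ salt['mine.get']('*', 'grains.items') | yaml }}
```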
16:10 racooper howdy. I'm trying to create a pkgrepo state for a yum repo, with an embedded password coming from a GPG-encrypted pillar.  unfortunately, in the pkgrepo state I can't seem to get it to insert the decrypted pillar data, it tries to include the full ascii-armored block instead. Since it's not just a file, is there any way to specify "template: jinja" in the pkgrepo state? https://gist.github.com/racooper/ee0d0d011596da6c0527c929d30d108a is what I'm working with right now.
16:12 beowuff joined #salt
16:14 orionx joined #salt
16:19 amcorreia joined #salt
16:19 irctc094 joined #salt
16:20 irctc094 can salt migrate vmware vm to aws cloud and vice versa?
16:24 jas02 joined #salt
16:24 nickabbey joined #salt
16:25 Rumbles joined #salt
16:26 gtmanfred it cannot
16:26 gtmanfred not direct migration
16:26 gtmanfred what you would do is configure the vmware vm with salt, and then you could apply the states to a vmware vm or an aws cloud  instance
16:26 tercenya joined #salt
16:26 viq test-kitchen + kitchen-salt + gitfs question. I currently have a git repo, let's call it saltstack. In it I have states/ and pillars/ directories, and salt configured to use that. I'm trying to figure out how to put test-kitchen files either in root of the repo or in say tests/
16:27 tiwula joined #salt
16:29 pdayton joined #salt
16:29 Brew joined #salt
16:30 irctc094 Thanks
16:30 debian112 joined #salt
16:33 tapoxi joined #salt
16:34 ronnix joined #salt
16:34 netcho keep getting: Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased.
16:34 netcho for test.ping 20 minions
16:35 netcho first only 3 minions returned and now it timesout
16:36 netcho i was running in debug mode when my connection dropped from master... after i logged back in it's timing out
16:37 LordOfLA joined #salt
16:38 jeddi joined #salt
16:39 darix joined #salt
16:39 impi joined #salt
16:42 pdayton joined #salt
16:42 Salander27 joined #salt
16:44 jimklo joined #salt
16:46 onlyanegg joined #salt
16:52 lubyou_ joined #salt
16:52 jschoolcraft joined #salt
16:59 scoates joined #salt
16:59 winsalt anyone run into this and have a better solution? https://github.com/saltstack/salt/issues/10852
16:59 saltstackbot [#10852][OPEN] requiring sls file containing only includes fail | I have the following file structure:...
17:02 whytewolf have a nop state in your blank sls file
17:02 whytewolf https://docs.saltstack.com/en/latest/ref/states/all/salt.states.test.html#salt.states.test.nop
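whytewolf's workaround, sketched: give the include-only sls a single no-op state so that other files can require it (the included state name is illustrative):

```yaml
# contents of the otherwise include-only .sls file
include:
  - some.other.state

placeholder:
  test.nop: []
```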
17:04 DammitJim what is the command I can use to check what states will be called when I try to run highstate on a minion?
17:04 DammitJim without running highstate test=true
17:05 whytewolf !state.show_highstate
17:05 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#salt.modules.state.show_highstate
17:09 DammitJim thanks!
17:12 cmarzullo tell me more viq. I use test-kitchen quite a bit.
17:13 DammitJim the weirdest thing just happened
17:13 gtmanfred DammitJim: state.show_top
17:13 DammitJim salt was trying to run a state for a minion that shouldn't be run
17:13 DammitJim show_highstate didn't show such state
17:13 edrocks joined #salt
17:13 DammitJim nor running state.highstate test=true
17:13 whytewolf is it possable it was an include?
17:14 DammitJim but earlier when using regular expressions, it showed that a ton of changes would have to be made to the server
17:14 whytewolf although show highstate should show includes
17:14 DammitJim an include such as a requirement by another state?
17:14 whytewolf yes
17:14 DammitJim this is my command... maybe I'm using -E improperly...
17:15 DammitJim oh, nevermind
17:15 DammitJim I just saw the problem
17:15 DammitJim I wasn't running highstate
17:15 DammitJim blah... such an idiot
17:15 whytewolf ...
17:15 DammitJim I need to be careful when checking things
17:15 whytewolf :P
17:15 DammitJim if salt could only save me from my stupidity!
17:15 relidy joined #salt
17:17 raspado is it safe to run state.apply on minions like such sudo salt 'dev-*' cmd.run 'salt-call state.apply'
17:17 gtmanfred raspado: just run sudo salt 'dev-*' state.apply
17:18 gtmanfred why would you do salt-call?
17:18 raspado oh nice thx
17:18 raspado well, I've always done it from the minion
17:18 raspado from the minion how would it be done?
17:18 gtmanfred ahh yeah anything you did from salt-call, just target, and use that same module
17:18 gtmanfred salt-call state.apply if you are on the minion
17:18 gtmanfred salt \* state.apply if you are on the master
17:18 raspado ahh kk
17:18 raspado thx gtmanfred
17:18 gtmanfred np
17:20 XenophonF racooper: what does `salt-call pillar.get jenkins-repo-pass` show?
17:25 jas02 joined #salt
17:26 nickabbey joined #salt
17:30 sarcasticadmin joined #salt
17:30 XenophonF don't actually post your password in the clear ;)
17:31 whytewolf but IRC just knows it is a password and posts splats.... see I'll type my password after this:  *******
17:31 whytewolf [just kidding]
17:35 racooper XenophonF,  it's showing the GPG ascii-armored block. so I've got something wrong somewhere, because a similar setup works on a different minion.
17:35 gtmanfred hunter2
17:35 XenophonF yeah, that's weird, b/c the gpg renderer should fix that
17:35 XenophonF any errors in the master log file?
17:37 XenophonF as an aside: you should use the |format and |yaml_encode filters when substituting strings like that
17:37 woodtablet joined #salt
17:37 racooper duh..I should have checked there first. "no secret key". I may have crypted with the wrong key then
17:37 XenophonF something like {{ "foo%sbar"|format(value)|yaml_encode }}
17:37 XenophonF ah fantastic - glad you found the error
17:37 XenophonF i don't know if this is best practice, but when i encrypt values, i always include myself
17:38 XenophonF that way i can decrypt the blobs
17:38 XenophonF problem is, i don't know how to scale that up to multiple users
17:38 XenophonF i'm thinking that salt-pillar-vault is probably the more scalable solution
17:39 * XenophonF shrugs
17:41 racooper yeah...I crypted it with the password manager key and not the salt master key....oops.
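[Editor's note: for context on the exchange above — the gpg renderer only decrypts values when the master's keyring holds the matching private key, hence racooper's "no secret key" error. A minimal pillar sketch, with a hypothetical key name and the ciphertext elided:]

```yaml
#!jinja|yaml|gpg

# Hypothetical pillar key. Encrypt with the master's public key
# (gpg --armor --encrypt -r <master-key-id>); if the master lacks
# the private key, the ASCII armor is passed through verbatim,
# which is exactly the symptom racooper saw.
jenkins-repo-pass: |
  -----BEGIN PGP MESSAGE-----
  ...
  -----END PGP MESSAGE-----
```

When templating such a value into YAML elsewhere, XenophonF's filter advice applies: wrap it as `{{ "prefix%s"|format(value)|yaml_encode }}` so quoting stays valid.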
17:43 jeddi joined #salt
17:55 DEger_ joined #salt
17:57 KingOfFools joined #salt
17:57 fannet joined #salt
18:00 pipps joined #salt
18:01 pipps joined #salt
18:04 CrummyGummy joined #salt
18:05 edrocks joined #salt
18:22 pipps joined #salt
18:39 Edgan joined #salt
18:43 edrocks joined #salt
18:43 tercenya joined #salt
18:57 nickabbey joined #salt
18:59 pipps joined #salt
19:02 nicksloan joined #salt
19:03 ghaering joined #salt
19:07 nidr0x joined #salt
19:07 sh123124213 joined #salt
19:09 promorphus joined #salt
19:10 pipps joined #salt
19:10 edrocks Hello, I just moved my salt master to a different machine. I successfully accepted all my minion's salt keys, but when I try `sudo salt '*' test.ping` I receive `Minion did not return. [No response]`. Could this be a firewall issue if salt-keys worked?
19:11 edrocks I also confirmed all of the salt version are the same(2016.3.3)
19:11 cscf edrocks, the minions probably don't trust the new salt-master's key
19:11 ghaering left #salt
19:12 gtmanfred also make sure that the minions didn't completely shut down during the process of accepting the new keys
19:12 edrocks cscf: I think you're right. I remember now that I had to delete some master key file before when I last moved. Thanks!
19:13 edrocks Yea "[ERROR   ][7203] Invalid master key"
19:16 pipps joined #salt
19:19 pipps_ joined #salt
19:20 MajObviousman humm, my understanding of how service.running works is that it will collect all watched states that trigger it and restart the service just one time
19:20 MajObviousman but I've got the following in my minion log: [salt.loaded.int.module.cmdmod][ERROR   ][25513] output: Job for sssd.service failed because start of the service was attempted too often. See "systemctl status sssd.service" and "
19:20 edrocks MajObviousman: Doesn't it do nothing if it is running with the same config?
19:20 MajObviousman pah
19:21 MajObviousman yes, unless you've got a watch declaration
19:21 MajObviousman then it will bounce the service for you
19:21 MajObviousman at least, this is my understanding of how things are to work
19:21 tercenya joined #salt
19:23 ronnix joined #salt
19:23 MajObviousman aha yep: "watch can be used with service.running to restart a service when another state changes ( example: a file.managed state that creates the service's config file ). More details regarding watch can be found in the Requisites documentation."
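[Editor's note: the watch requisite MajObviousman quotes looks roughly like this in an SLS file; service and file names here are hypothetical:]

```yaml
# restart the service whenever its managed config file changes
myapp_conf:
  file.managed:
    - name: /etc/myapp/myapp.conf
    - source: salt://myapp/myapp.conf

myapp:
  service.running:
    - enable: True
    - watch:
      - file: myapp_conf
```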
19:26 jas02 joined #salt
19:37 Edgan edrocks: restart minions, and make sure their hostnames are right, unless you have created the dns entry for the hostname "salt"
19:37 edrocks Edgan: I got it working. Just had to delete the master keys and restart minions
19:37 Trauma joined #salt
19:38 XenophonF MajObviousman: the service should only get restarted once
19:38 cyborg-one joined #salt
19:39 MajObviousman is what I expected. Thank you for confirmation
19:43 s_kunk joined #salt
19:51 WanderSoul joined #salt
19:52 WanderSoul Greetings programs
19:52 xbglowx__ joined #salt
19:52 * XenophonF checks for his identity disc.
19:53 * WanderSoul hold it out as he doesn't want to be derezzed
19:55 WanderSoul So I have what I think is a dumb question...I'm looking at https://docs.saltstack.com/en/latest/ref/publisheracl.html and I wish to add a user to be able to execute all commands but when I follow all the instructions, the user's key doesn't exist and I still can't run salt commands as the user
19:55 netcho joined #salt
19:56 WanderSoul I made sure that the 5 directories are 755 and readable by the user, restart salt-master, and still no dice
19:57 strobelight joined #salt
20:02 nicksloan joined #salt
20:03 edrocks joined #salt
20:03 raspado for a salt-master my specs are 4 CPU and 15GB ram, hosting about 200-300 salt minions
20:04 raspado think this is a good spec?
20:04 cscf MajObviousman, that error means that the service is failing to start and is crashlooping
20:04 MajObviousman ahh
20:05 MajObviousman so that confirms a suspicion I had
20:05 nicksloan joined #salt
20:05 MajObviousman first: that sssd is a pile of shirt
20:05 cscf MajObviousman, 'systemctl status sssd'
20:05 cscf or 'journalctl -xe'
20:05 theblazehen_ joined #salt
20:06 WanderSoul <3
20:07 WanderSoul joined #salt
20:11 iggy XenophonF: MajObviousman: watch restarts it each time... listen only restarts once
20:11 gtmanfred but it is also important that watch restarts it immediately, and listen restarts at the very end
20:12 cscf really??
20:12 gtmanfred yes
20:12 plup joined #salt
20:12 sebastian-w joined #salt
20:12 cscf Would that explain why sometimes salt says they failed to restart when multiple changes are made, but the service is working?
20:13 iggy that probably has more to do with init system returning bad return values
20:13 gtmanfred :) i usually always use listen, unless my restart change is going to be needed for something else that is happening later in the states
20:13 iggy (at least in my experience)
20:13 cscf But could it be that being restarted several times in one second causes the fail?
20:14 strobelight joined #salt
20:15 MajObviousman hummmm
20:15 MajObviousman this is important news
20:16 cscf sed -i 's/watch/listen/'
20:16 iggy it seems unlikely that you would hit multiple watches in the same second, but I don't know what systemctl's timeframe for the check is... completely possible that watches cause that
20:17 nicksloan joined #salt
20:18 MajObviousman let me gist this state file and maybe you can point out what needs to change?
20:18 MajObviousman that'd be way easier than talking abstractly
20:19 iggy
20:19 cscf hmm, listen doesn't reorder
20:20 iggy it moves the action to the very end of the state run (I'm not sure it actually reorders, but the action definitely happens at the end)
20:21 whytewolf ^
20:21 cscf but the docs say it doesn't reorder the rest of the state
20:22 cscf Like service.running would execute partway through, and then the restart later
20:22 cscf Which is not what one would normally want, I think
20:22 iggy it only restarts if the listen'ed state changes
20:23 iggy (which is what I expect to happen)
20:23 gtmanfred cscf: sure it is, part way through, you just want to make sure it is up, but if the config changes, listen runs the mod_watch command which would restart the service at the very end of the highstate
20:23 cscf right, but I would want the service start to be delayed until its config states have run
20:24 gtmanfred 90% of state runs, your service will already be running, only first one will it have started and then been restarted at the very end
20:24 cscf I guess having a service crash and then restart on deployment isn't too important
20:24 gtmanfred you should do your first restart after all the config has been done anyway, there is no reason for it to crash
20:25 iggy and in most distros/service combinations, just installing the service is going to start it
20:25 gtmanfred that is false, only deb/apt-get/dpkg does that
20:25 whytewolf personally i normally put the service.running at the bottom of the sls after the config stuff anyway.
20:25 gtmanfred and it is fucking annoying
20:25 iggy pretty sure gentoo does as well
20:25 iggy and alpine
20:25 gtmanfred i haven't seen alpine do it
20:26 gtmanfred but that is dumb, don't start the service until i have a chance to configure it... cause then i can't just check if the port is listening to know if it can be added to the load balancer :(
20:26 whytewolf if gentoo does it is a new thing. [last 6 years since i touched anything gentoo]
20:27 gtmanfred i would find it surprising if gentoo does it, cause they are very big on you doing everything
20:27 iggy I'm too lazy to compile shit for days to find out
20:27 gtmanfred heh
20:27 cscf lol days
20:27 gtmanfred same
20:27 gtmanfred emerge @world
20:27 jas02 joined #salt
20:27 cscf you forgot the -j :P
20:28 gmacon joined #salt
20:28 gtmanfred -j1000
20:28 whytewolf days? small system. i ran gentoo on a Pentium II 400 MHz. I was lucky to get the base system up in a week
20:29 whytewolf man i'm glad i don't fiddle with that anymore
20:29 theblazehen_ joined #salt
20:30 Edgan gtmanfred: I am trying to use the salt module cassandra_cql.cql_query after cassandra_cql.create_user. With cassandra_cql.create_user I give it a user/pass to login to cassandra with. When I then try to do cassandra_cql.cql_query, I also give it a different user/pass, but it seems to be using the user/pass from cassandra_cql.create_user. Is there a way to reset module state?
20:30 gtmanfred i have no idea
20:30 gtmanfred have never used cassandra
20:31 Edgan gtmanfred: I mean more at the salt level, resetting module state
20:31 gtmanfred i have no idea how to reset it, but it is probably using the __context__ to store the connection to cassandra, and isn't clearing it
20:32 gtmanfred yup, it is
20:32 gtmanfred looks like it only stores one session https://github.com/saltstack/salt/blob/develop/salt/modules/cassandra_cql.py#L227
20:33 cscf whytewolf, gentoo is pretty decent at 3.2Ghz with 6 cores :)
20:33 * MajObviousman clears several hurdles and finally gets people out of his doorway enough to post the gist
20:33 MajObviousman https://gist.github.com/anonymous/6c816c395039b5d9386ed9ed4bd26fd8
20:34 gtmanfred Edgan: you would need to open an issue requesting that cassandra put multiple connections in __context__ based on which variables are passed as user, host etf
20:34 gtmanfred etc*
20:34 whytewolf I'm sure if i was bored enough i could wipe one of my comp systems, get 12 cores and 128 gigs of mem
20:34 MajObviousman we've got 2015.05 installed, since that's what EPEL has
20:34 gtmanfred 2015.5 is no longer supported, highly recommend using https://repo.saltstack.com
20:34 MajObviousman you're saying I just need to change watch_in to listen_in ?
20:35 cscf 2015 is missing several features, too
20:35 gtmanfred but yes, you could use listen_in in 2015.5, it was added in 2014.7
20:35 gtmanfred https://docs.saltstack.com/en/latest/ref/states/requisites.html#listen-listen-in
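[Editor's note: the listen variant under discussion, sketched with hypothetical names. Unlike watch, which fires the restart immediately after each watched change, listen collects triggers and fires mod_watch once at the very end of the run:]

```yaml
myapp_conf:
  file.managed:
    - name: /etc/myapp/myapp.conf
    - source: salt://myapp/myapp.conf

myapp:
  service.running:
    - enable: True
    # deferred: one restart at the end of the run, even if
    # several listened-to states report changes
    - listen:
      - file: myapp_conf
```

The inverse form, `listen_in` on the file.managed state, is equivalent apart from direction, as confirmed in the log above.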
20:36 mikecmpbll joined #salt
20:36 MajObviousman yep
20:36 cscf I just tried to change some watches to listens, and I got errors
20:36 cscf big traceback, then AttributeError: 'str' object has no attribute 'iteritems'
20:36 MajObviousman I advocated for us to track latest release via pip, but I was overruled
20:36 MajObviousman we're allergic to software which doesn't come from packages, maybe
20:36 strobelight joined #salt
20:37 whytewolf software out of package can be a pita sometimes.
20:37 gtmanfred MajObviousman: we package our own rpm repository on repo.saltstack.com that you could add
20:37 MajObviousman yeeeeuuuuuppp
20:37 gtmanfred lots of different packages
20:37 MajObviousman gtmanfred: you beautiful person you
20:37 MajObviousman ofc the director of security is out until after US Thanksgiving break
20:37 * MajObviousman sighs
20:37 gtmanfred heh
20:37 gtmanfred US Thanksgiving is the best thanksgiving, i am smoking a whole turkey, got my meat injector today
20:38 MajObviousman I recently acquired a Traeger grill/smoker that I'd like to use for smoking turkeys
20:38 gtmanfred nice
20:38 MajObviousman but I'm still new to it and seems like a big jump to try and get it figured out before the whole family expects it to be working
20:38 gtmanfred i have a weber smokey mountain bullet smoker, but i wanna get an offset barrel smoker when I get a house
20:38 * MajObviousman bought a pair of whole chickens to try it out on first
20:39 MajObviousman yeah I was leaning towards that MSW. I bought a Masterbuilt electric last year and it died the second run
20:39 gtmanfred MajObviousman: beer can chicken is the best, just don't use the beer can, get a holder and just use that, it turns out better
20:39 MajObviousman noted, thanks
20:39 gtmanfred MajObviousman: http://amazingribs.com/tips_and_technique/debunking_beer_can_chicken.html
20:39 MajObviousman that was the first thing my wife said too, "We can do beer can chicken!"
20:39 MajObviousman love that site
20:39 gtmanfred yar
20:39 MajObviousman picked out a wireless thermometer from their recommendations. Early Christmas present.
20:42 say_what joined #salt
20:42 MajObviousman functionally, there's no distinction between listen and listen_in, correct?
20:43 whytewolf besides direction?
20:43 MajObviousman yes, besides direction
20:43 say_what Hey guys, I'm trying to install the salt-minion on a Windows 2008 Server. I need to configure a proxy_host and proxy_port which I had to add in to the minion.conf file. Starting up the minion in debug and running a packet capture seems to indicate that the minion is ignoring the proxy_host setting. Is this expected?
20:43 gtmanfred MajObviousman: correct
20:44 gtmanfred say_what: it only uses the proxy_host setting for http connections, not connections to the master?
20:46 say_what gtmanfred: Thanks. Is there no support for a proxy host for master connectivity then?
20:47 gtmanfred the minion needs a direct connection to a master to listen on the event bus, it is a zeromq connection, not an http connection
20:48 XenophonF hey i've got a weird problem on debian-family operating systems
20:48 gtmanfred you could use a syndic to do that... or you could use something like this maybe... https://pypi.python.org/pypi/salt-broker/0.2.5
20:48 XenophonF i'm trying to enable both ypbind and rpcbind in one state
20:48 XenophonF https://github.com/irtnog/salt-states/blob/development/nis/client.sls#L40
20:49 say_what gtmanfred: Thank you!
20:49 XenophonF the list referenced there is ['rpcbind', 'ypbind']
20:50 whytewolf ... service.running doesn't have a names setting.
20:50 XenophonF the problem is, when salt runs `systemctl is-enabled rpcbind.service`, it misinterprets the result, which is `output: indirect`, to mean that rpcbind is already running
20:50 XenophonF the names kwarg is special
20:50 XenophonF works with every state
20:51 XenophonF anyway, my point is that rpcbind isn't running
20:52 XenophonF so ypbind fails to start
20:52 XenophonF which is super bizarre
20:52 XenophonF hm, maybe i have the wrong service name
20:53 XenophonF nope - doesn't matter
20:53 XenophonF rpcbind isn't running
20:53 XenophonF ypbind seems to be aliased to nis
20:53 XenophonF anyway, salt's not starting the rpcbind service
20:54 XenophonF and systemd isn't starting rpcbind when the ypbind service tries to start
20:54 whytewolf can you start rpcbind correctly with out salt?
20:54 XenophonF yes
20:54 WanderSoul joined #salt
20:55 XenophonF if i run `systemctl start rpcbind && systemctl start ypbind`, everything works properly, e.g., `ypcat passwd` works
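[Editor's note: the state XenophonF links (nis/client.sls) uses the names kwarg, which expands a single declaration into one state per listed name — roughly this sketch:]

```yaml
# one declaration, expanded by Salt into a separate
# service.running state for each entry under names
nis_client_services:
  service.running:
    - names:
      - rpcbind
      - ypbind
    - enable: True
```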
20:55 WanderSoul Has anyone set up a non-root user to use salt commands?
20:55 anotherzero joined #salt
20:55 XenophonF WanderSoul: yes - long time ago, using PAM external auth
20:55 XenophonF (I wish Salt supported SAML ECP.)
20:55 WanderSoul Any chance anyone has done with local rhel accounts?
20:56 XenophonF no, this was on FreeBSD using NIS and pam_krb5
20:56 Tanta I did with sudo
20:56 XenophonF otherwise the salt command needs to be setuid-root in order to read the shadow password file
20:56 Tanta the easiest way to do that is nopasswd with sudo and explicit definition of the commands
20:57 gtmanfred WanderSoul: i have setup publisher_acl recently
20:57 WanderSoul so just run the command as `sudo <salt command>` after you set up the /etc/salt/master to look at the publisher_acl?
20:57 gtmanfred all i did was follow that docs page you linked, so i am not sure i will be much help
20:57 gtmanfred you shouldn't need sudo if you have publisher_acl set
20:57 WanderSoul hm....
20:57 gtmanfred try running with -l debug, and see what happens
20:57 WanderSoul same thing
20:57 WanderSoul tried that earlier
20:58 pipps joined #salt
20:58 gtmanfred of course it is the same thing, but are there any error messages
20:58 gtmanfred or like... nope this user can't see that file
20:59 WanderSoul I got the error where the user can't write to /var/log/salt/master
20:59 WanderSoul but I simply changed permissions on the file and it worked
21:00 XenophonF i'm going to call this a bug in salt's service state
21:00 WanderSoul but I updated the salt master publisher_acl part to include my user, restarted salt-master and still no dice
21:01 WanderSoul I made sure that user would write in the listed 5 directories, nope, computers hate me
21:01 XenophonF computers hate this one weird trick!
21:01 XenophonF sorry couldn't help myself
21:02 WanderSoul lol
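[Editor's note: a sketch of the publisher_acl config WanderSoul is after, with a hypothetical user name. Besides the master config, the unprivileged user needs access to the cache, run, and log paths — the /var/log/salt/master permission error above is the typical failure mode:]

```yaml
# /etc/salt/master -- hypothetical user; restart salt-master after editing
publisher_acl:
  wandersoul:
    - test.ping
    - state.*
```

With this in place the user runs e.g. `salt '*' test.ping` directly, no sudo, as gtmanfred notes above.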
21:02 whytewolf XenophonF: any chance this bug could be part of the problem? https://bugs.launchpad.net/ubuntu/+source/rpcbind/+bug/1558196
21:11 pipps joined #salt
21:11 Hybrid joined #salt
21:13 bltmiller joined #salt
21:14 swa_work joined #salt
21:14 bltmiller I think I've asked this before, but am I able to iterate over members of a nodegroup?
21:15 bltmiller e.g. {% for minion in nodegroup %}
21:15 XenophonF whytewolf: yeah, that looks like it
21:15 XenophonF i'm going to try the workaround
21:17 gtmanfred bltmiller: i believe this can be done on 2016.3, but you need access to the salt runner on the master
21:18 gtmanfred and you could use something like salt-run cache.grains which takes the tgt=<nodegroup> tgt_type=nodegroups and can return a list of minions that the master has grains for
21:18 gtmanfred actually, works on 2015.8 too
21:18 gtmanfred neat
21:18 bltmiller gtmanfred: ooooh!
21:19 rashford joined #salt
21:19 gtmanfred i think it is 2016.3 that has the manage.up that can target on different tgt_types
21:19 Rumbles joined #salt
21:19 gtmanfred which would probably be more useful
21:19 gtmanfred nope, that isn't available until 2016.11
21:20 kiorky joined #salt
21:20 bltmiller d'awww
21:20 gtmanfred but you could still use cache.grains
21:20 gtmanfred bltmiller: but you would need peering available for the minion that is rendering it to run saltutils.runner + cache.grains runner on the masters minion, or you could publish that data into the mine...
21:21 bltmiller hmm, do I need peering even if this is for an orchestration thing?. essentially trying to orchestrate a docker swarm cluster initialization. 1st minion creates swarm and join-token, then all 2..N minions join existing swarm using that token
21:22 gtmanfred the orchestration is rendered on the master, and is a runner, so if {{ salt}} is in there, it should be the runner modules
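[Editor's note: a sketch of what gtmanfred describes — because orchestration SLS files render on the master, `salt['cache.grains'](...)` resolves to the runner. Targets, nodegroup, and sls names are hypothetical, and on 2016.3 the kwarg is expr_form rather than tgt_type (see the exchange further down):]

```yaml
# orch/swarm_join.sls -- run with: salt-run state.orchestrate orch.swarm_join
{% set cached = salt['cache.grains'](tgt='swarm', expr_form='nodegroup') %}

{% for minion_id in cached %}
join_{{ minion_id }}:
  salt.state:
    - tgt: {{ minion_id }}
    - sls: docker.swarm_join
{% endfor %}
```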
21:22 Edgan gtmanfred: on the same topic, https://github.com/saltstack/salt/blob/develop/salt/modules/cassandra_cql.py  lines 227-231. If I simply comment them out, it works.
21:23 gtmanfred Edgan: yes, i know, that is what I was saying
21:23 gtmanfred so you need to open a feature request to be able to cache multiple different connections in the module
21:23 gtmanfred cause it doesn't do that right now
21:23 Edgan gtmanfred: but is that the right solution, or should it be advanced?
21:23 gtmanfred cache multiple connections
21:23 gtmanfred based on host/user/port passed to the _connect
21:23 Edgan gtmanfred: no caching is better than crappy caching in my book
21:24 gtmanfred well, you aren't going to get them to remove the caching of the conection in the __context__
21:24 gtmanfred so sure it is an immediate solution, but open an issue
21:26 cyborg-one joined #salt
21:29 jas02 joined #salt
21:32 Edgan gtmanfred: done
21:32 amontalban joined #salt
21:32 amontalban joined #salt
21:33 bltmiller gtmanfred: to be clear, I don't see a tgt_type kwarg for cache.grains. did you mean expr_form?
21:33 manji XenophonF, the mine thing earlier you proposed, worked
21:33 manji thanks !
21:35 DEger joined #salt
21:38 anotherzero joined #salt
21:38 gtmanfred bltmiller: yeah that
21:39 pdayton joined #salt
21:40 bltmiller gtmanfred: hmm, yeah even in that case something like this is failing: https://gist.github.com/blaketmiller/785b6c3e3210ed261ebdc73537e3b65a (where `saltmaster` is a nodegroup)
21:40 bltmiller (on 2016.3.4)
21:41 gtmanfred hrm, i don't know
21:48 fracklen joined #salt
21:48 Edgan gtmanfred: Is there anything like sleep 30 without running it manually with cmd.run?
21:48 gtmanfred test.sleep?
21:49 Edgan gtmanfred: yep, that will do
21:50 nickabbey joined #salt
21:51 nickabbey joined #salt
21:51 Edgan gtmanfred: Thank you for your help.
21:51 akhter joined #salt
21:52 pipps joined #salt
21:52 relidy left #salt
21:56 netcho joined #salt
21:56 XenophonF glad to hear it, manji
21:59 ekristen joined #salt
22:00 mpanetta joined #salt
22:00 bltmiller gtmanfred: I'm not a Salt dev, but this line is checking for the class attribute of _check_nodegroup_minions, and it seems that _check_nodegroup_minions() simply doest
22:00 bltmiller *doesn't exist
22:00 bltmiller https://github.com/saltstack/salt/blob/509be70eaf88b11fd23f3d27fd5d52b79677dd76/salt/utils/minions.py#L629
22:01 rml joined #salt
22:01 MajObviousman can anyone point me towards an example extension module?
22:01 * MajObviousman is finding the existing doc wanting
22:02 gtmanfred extension module?
22:02 MajObviousman err execution module
22:02 MajObviousman like a custom one
22:02 gtmanfred they look exactly like regular execution modules
22:02 gtmanfred nothing special
22:02 MajObviousman well, that's what I thought
22:02 whytewolf _any module in salt_
22:02 MajObviousman I copied the rsync one into the path defined in extension_modules, then did a saltutil.sync_all
22:02 MajObviousman no dice
22:03 MajObviousman well, I renamed it first
22:03 gtmanfred don't put it in extension_modules
22:03 whytewolf _modules
22:03 gtmanfred put it in _modules in your fileserver, so like /srv/salt/_modules/
22:03 MajObviousman that's where I had it first
22:03 * MajObviousman moves it back
22:03 gtmanfred then do the sync_all
22:03 gtmanfred bltmiller: that would do it
22:04 gtmanfred hrm...
22:04 whytewolf MajObviousman: https://github.com/whytewolf/salt-debug something like that
22:04 akhter joined #salt
22:05 bltmiller gtmanfred: shall I file a bug? :D
22:06 gtmanfred yes please :)
22:06 MajObviousman and when I do the sync_all, I would expect to see some blip for the modules part, but I don't see it
22:06 gtmanfred where is _modules?
22:07 gtmanfred also, make sure that it isn't already on the minion at /var/cache/salt/minion/extmods/modules/<name>.py
22:07 MajObviousman /srv/salt
22:07 gtmanfred yeah, make sure that it isn't already in extmods on the minion
22:08 MajObviousman verified not present. In fact, the entire directory is empty
22:08 gtmanfred you have your module at /srv/salt/_modules/<name>.py?
22:08 whytewolf did you redefine file_roots in your master config?
22:08 gtmanfred and that^^
22:08 akhter joined #salt
22:10 MajObviousman mmmmyep there it is
22:10 whytewolf basically minions should see the files as salt://_modules/<name>.py
22:10 MajObviousman I thought custom execution modules would be like pillar and be apart from environment
22:10 MajObviousman ok, thanks folks
22:10 gtmanfred np
22:10 MajObviousman knew it was something simple. Which is why it is so maddening
22:11 MajObviousman if I remove a custom execution module and then re-call the sync, does it remove it over there too?
22:11 gtmanfred no
22:11 gtmanfred well, i don't think so
22:11 MajObviousman so then, the way to delete is file.absent I suppose?
22:12 gtmanfred it is worth testing, but i don't think it clears it
22:12 * MajObviousman gets out his lab coat
22:12 MajObviousman the answer is: yes it does delete it
22:13 gtmanfred awesome
22:13 whytewolf well i guess with a name like sync that makes sense. it isn't put_things_there
22:13 MajObviousman I think it's doing a file.recurse with clean=True
22:13 pipps joined #salt
22:15 MajObviousman interesting. Not sure if this is a 2015.05 bug or not, but a sync_modules doesn't pick it up. Have to do a sync_all
22:15 gtmanfred pretty sure it isn't doing that
22:15 gtmanfred but it might have the same logic
22:15 MajObviousman fair enough
22:18 debian112 joined #salt
22:24 bltmiller gtmanfred: for your consideration: https://github.com/saltstack/salt/issues/37742 :)
22:24 saltstackbot [#37742][OPEN] Cannot match on nodegroup when checking minions | Description of Issue/Question...
22:25 NightMonkey joined #salt
22:25 gtmanfred I will look at it in the morning if ch3ll doesn't get to it this afternoon
22:25 armguy joined #salt
22:25 bltmiller thanks for your assistance!
22:25 gtmanfred no problem, thanks for opening the issue and digging :)
22:29 jas02 joined #salt
22:34 akhter joined #salt
22:36 pipps joined #salt
22:38 pipps joined #salt
22:40 pipps joined #salt
22:42 gtmanfred good luck everyone, i am ducking out a little early today to go meet up with a friend o/ cyall tomorrow
22:42 bltmiller ????
22:52 mk-fg joined #salt
22:52 debian112 joined #salt
22:55 Rumbles joined #salt
22:57 bluenemo joined #salt
23:02 Pulp joined #salt
23:04 kunersdorf joined #salt
23:06 pipps joined #salt
23:10 ProT-0-TypE are the pillars cached somewhere in the minion?
23:11 Edgan ProT-0-TypE: Doesn't look like it
23:12 kunersdorf https://gist.github.com/anonymous/d2b10906ef3530ba3d2f8ae237a3db4b
23:12 kunersdorf getting this error: saltstack Rendering SLS failed: Jinja variable "No first item, sequence was empty."
23:12 kunersdorf that first line of code works fine
23:12 pdayton joined #salt
23:12 kunersdorf can I not do 3 set in a row like that?
23:12 ProT-0-TypE thanks edgan, I just wanted another confirmation
23:15 Edgan kunersdorf: You would get that error if any of the files doesn't exist
23:15 kunersdorf thank you.
23:17 Edgan kunersdorf: {% if somefile is defined %}
23:18 Edgan kunersdorf: or   {% if somefile is not none %}
23:18 Edgan kunersdorf: might help
23:18 kunersdorf ok, very cool
23:18 Edgan kunersdorf: Negative values can be a pain
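[Editor's note: Edgan's guard, sketched in an SLS template with a hypothetical pillar key. Jinja's `|first` raises "No first item, sequence was empty." on an empty sequence, so test the value before taking its first item:]

```yaml
{% set items = pillar.get('some_list', []) %}
{% if items %}
first_item:
  test.nop:
    - name: {{ items|first }}
{% endif %}
```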
23:20 Bryson joined #salt
23:23 hasues joined #salt
23:23 hasues left #salt
23:27 aw110f joined #salt
23:31 pipps joined #salt
23:32 pipps99 joined #salt
23:33 heaje joined #salt
23:35 bltmiller joined #salt
23:37 raspado hi all how do i know which salt cloud provider file is being used?
23:41 xbglowx joined #salt
23:44 iggy each provider should have a different name, so it'd be whichever file had that provider name
23:46 raspado so we have two
23:47 raspado one in /etc/salt/cloud
23:47 raspado and another one in /srv/salt/salt-config/cloud
23:47 raspado i take it the one thats actually working is /etc/salt/cloud
23:48 iggy unless you changed the default config to look in /srv/ for cloud configs, it's looking in /etc
23:48 raspado gotcha thx
23:51 cyteen joined #salt
23:57 netcho joined #salt
23:57 cyteen joined #salt
