
IRC log for #salt, 2016-07-26


All times shown according to UTC.

Time Nick Message
00:00 infrmnt joined #salt
00:01 amcorreia joined #salt
00:03 justanot1eruser joined #salt
00:06 wryfi this is strange
00:06 wryfi on a new minion
00:07 wryfi i'm seeing
00:07 wryfi [ERROR   ] The renderer "pyobjects" is not available
00:07 wryfi followed by salt falling on its face with a pyobjects file
00:07 wryfi why would the pyobjects renderer be unavailable?
00:08 wryfi nevermind, coworker installed a *very* outdated salt-minion package
00:08 wryfi ;)
00:09 brotatochip joined #salt
00:09 johnkeates left #salt
00:10 infrmnt1 joined #salt
00:12 mTeK Edgan I don't think I can do this in FreeBSD. I'd rather just have eth0 vnet0 wlan0 but I don't think that's the case even with Linux since the switch to systemd.
00:13 brotatochip joined #salt
00:18 erewok joined #salt
00:20 erewok Q: If you are using `file.managed` with a jinja templatized file, is pillar data automatically available to be interpolated inside that template?
00:21 erewok or must you always pass in pillar variables via `context`?
00:21 iggy erewok: yes
00:21 erewok thanks
00:21 iggy I almost never use context
00:21 iggy hate the misdirection
00:22 murrdoc yup
00:22 erewok do you have to reference them using {{ pillar['foo'] }}
00:22 erewok or can you just do {{ foo }}?
00:22 flowstate joined #salt
00:23 iggy pillar (preferably {{ salt['pillar.get']('foo') }})
00:23 erewok I was planning to put together a set of test files to try this out, but thought it might be quicker to ask those who know.
00:23 erewok thank you
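
[ed: A minimal sketch of the pattern iggy describes: pillar data is available inside a Jinja-templated file.managed source without passing context. File names and pillar keys here are hypothetical.]

    # state file
    /etc/myapp.conf:
      file.managed:
        - source: salt://myapp/files/myapp.conf.jinja
        - template: jinja

    # myapp/files/myapp.conf.jinja -- pillar is reachable without context
    listen_port = {{ salt['pillar.get']('myapp:port', 8080) }}
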
00:28 subsignal joined #salt
00:29 krymzon joined #salt
00:31 lorengordon anyone using the augeas module with salt?
00:33 lorengordon it seems unnecessarily difficult to me, and i've been toying with the idea of extending augeas support along the lines of http://augeasproviders.com/ for puppet
00:33 lorengordon just wondering if anyone would be interested or if it's just me :)
00:42 Tyrm joined #salt
00:46 perfectsine joined #salt
00:47 ssplatt joined #salt
00:48 rem5 joined #salt
00:51 perfectsine_ joined #salt
00:51 MTecknology 19:51 < MTecknology> would that produce an iggy?
00:52 iggy ENOCONTEXT
00:52 MTecknology eno?..
00:53 aarontc joined #salt
00:53 MTecknology I didn't include anything more about you than what's in that one line, but they agreed that, yes, cooking an egg in thermite will produce an iggy.
00:54 Nahual joined #salt
01:00 flowstate joined #salt
01:01 DaveQB joined #salt
01:10 hemebond ZachLanich: It is possible to get the Virtualbox Salt provisioner to work, but it isn't straight-forward.
01:10 hemebond I managed to get it "working" a couple of weeks ago but haven't played with it since.
01:11 PerilousApricot joined #salt
01:15 orion joined #salt
01:16 orion Hi. Does anyone know what I have to do to get rid of this warning?: /usr/lib/python2.7/dist-packages/salt/beacons/__init__.py:55: DeprecationWarning: Beacon configuration should be a list instead of a dictionary.
01:16 orion I'm using salt 2016.3.1 (Boron) on Ubuntu Trusty.
01:17 orion I structured my beacon configuration exactly as the documentation stated.
01:19 amcorreia joined #salt
01:21 fgimian joined #salt
01:32 Sammichmaker joined #salt
01:41 tuxx joined #salt
01:49 hoonetorg joined #salt
01:51 rem5 joined #salt
01:56 ssplatt joined #salt
01:57 ZachLanich hemebond: Sometime, if you would, could you help me get it working? It's kind of crucial to getting my local version of my cloud running for testing reasons.
01:58 PerilousApricot joined #salt
01:59 hemebond ZachLanich: It was mostly a case of trying to understand the Virtualbox installer and making sure it, along with all its dependencies, were installed correctly. I found that even when I installed it and could import it, I was still required to install some other libs.
01:59 hemebond I think running salt-cloud in debug mode let me find all the dependencies.
02:02 DaveQB joined #salt
02:05 flowstate joined #salt
02:12 kshlm joined #salt
02:14 bastiandg joined #salt
02:17 benji__ joined #salt
02:17 benji__ Hey all... question. Is there a way to trigger a highstate from the saltmaster when new nodes are registered?
02:18 benji__ I know it's possible from the minions to call a highstate, but I'm wondering if that's something I can do from the master.
02:20 hemebond benji__: Yes, using beacons and reactors you can listen for and react to the "minion has started/joined" event to call a highstate.
02:21 benji__ I'm not too versed with beacons...
02:21 hemebond Good luck :-)
02:21 benji__ Would it require some minion configuration
02:21 benji__ ?
02:21 benji__ Or master only?
02:22 hemebond I don't believe it will require minion config changes to achieve what you want here.
02:23 benji__ Interesting. Thanks for your suggestion.
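
[ed: A sketch of the reactor half of hemebond's suggestion; minions emit a start event onto the master's bus by default, so no beacon is needed for this case. Paths and names are assumed, not from the discussion.]

    # /etc/salt/master
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/start_highstate.sls

    # /srv/reactor/start_highstate.sls
    highstate_on_start:
      local.state.apply:
        - tgt: {{ data['id'] }}
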
02:23 hemebond ????
02:25 ssplatt is it possible to ‘grains.get ipv4’ and do a cidr filter in the same way that i can mine.function: network.ip_addrs: cidr: 192.168.0.0/16 ?
02:25 evle joined #salt
02:26 hemebond ssplatt: I would be surprised if you could.
02:26 ssplatt so if i want to use a specific subnet ip range in pillar, i can do like bind_to: {{ salt['grains.get']['ipv4', cidr: …] }}
02:26 hemebond You could just check/filter in Jinja.
02:27 hemebond I don't believe you can, that's why I'd be surprised if you _could_ do that.
02:27 ssplatt or possible match.filter_by?
02:27 racooper joined #salt
02:27 hemebond Hmm, or a map?
02:28 hemebond I think I saw something in the Jinja docs about filtering a list based on a map.
02:28 hemebond Remember you can check/test against slices of strings in Jinja.
02:28 hemebond Not quite as robust as a cidr check but something.
02:29 ssplatt maybe i can just use the mine filter. then pull that back out instead of trying to use grains.
02:29 ssplatt the problem is on some hosts its easy to just do “eth1"
02:29 ssplatt but others have an alias, like eth0:1
02:29 ssplatt and some are vlan eth0.100
02:31 krymzon joined #salt
02:36 ssplatt hemebond: attempting {{ grains[‘ipv4’]|select(‘192.168’) }}...
02:36 hemebond Oh, nice.
02:36 ssplatt lol.    <generator object _select_or_reject at 0x7f32e6d40910>
02:37 ssplatt got placed.
02:37 ZachLanich hemebond: Thanks. I will try that sometime.
02:37 ZachLanich On another note folks, I'm having trouble getting cp.push to work. I've enabled file_recv and verified that the minions respond fine for other things. Here's the gist of my debug output: https://gist.github.com/zlanich/cd00fa729cab347f8a39cc90fcbbd3e9
02:37 hemebond ZachLanich: If you're still around in ~4 hours I can see what I have installed for it.
02:38 hemebond ZachLanich: You restarted your master, yeah?
02:38 ZachLanich hemebond: It's almost 11pm here, so I'll probably be asleep by then, but tomorrow would work too.
02:38 ZachLanich hemebond: Yea, I've even done a complete vagrant halt/up
02:39 hemebond Is your master running in a VM?
02:40 hemebond ssplatt: That string in select("") should be a test.
02:40 ZachLanich Yea, everything is based off of the salt-vagrant demo on SaltStack's site/tutorials
02:40 ssplatt hmm.
02:40 hemebond You'll want something like equalto
02:41 hemebond Or just use a regular loop with a comparison.
02:41 ZachLanich hemebond: Great, I did a service salt-master restart, now I'm getting this: https://gist.github.com/zlanich/7ec9cbf864406020c019752b0e731b0f
02:42 hemebond ZachLanich: It'll take a minute or two for master and minions to talk to each again after it comes up.
02:42 ZachLanich hemebond: Ok, I'll wait and try again.
02:46 hasues joined #salt
02:46 tristianc joined #salt
02:46 hasues left #salt
02:55 bretth joined #salt
02:57 ssplatt hemebond: decided to just go with a “if grains…eth1 is defined…” blob
02:58 ssplatt kind of annoying and ugly, but is working.
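
[ed: Two hedged alternatives to special-casing interface names. In a state or template rendered on the minion, the network module accepts the same cidr filter the mine call uses; in a pillar (rendered on the master) the requesting minion's grains are still available, so a plain Jinja prefix test works. The subnet is assumed.]

    # state/template rendered on the minion
    bind_to: {{ salt['network.ip_addrs'](cidr='192.168.0.0/16') | first }}

    # pillar-safe: loop over the grain with a string comparison
    {%- for ip in grains['ipv4'] if ip.startswith('192.168.') %}
    bind_to: {{ ip }}
    {%- endfor %}
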
03:03 antpa joined #salt
03:13 Tyrm joined #salt
03:15 ZachLanich hemebond: Waiting worked for cp.push'ing that file, but I just did a vagrant halt/up for another reason, now neither of my minions are responding and it's been over 5mins. This is getting exhausting. The vagrant setup is so inconsistent.
03:19 hemebond ZachLanich: Some sort of DNS issue perhaps? Or network interface issue? Can you telnet from a minion to the master on 4505 and 4506?
03:22 flowstate joined #salt
03:22 kus joined #salt
03:22 ZachLanich hemebond: Would telnet 192.168.50.11:4505 suffice to test? I haven't used the telnet command much.
03:23 hemebond "telnet 192.168.50.11 4505"
03:23 ZachLanich hemebond: Connection Refused on both
03:23 hemebond So your master is not running or not listening or the network connectivity is broken.
03:24 hemebond "netstat -anp | grep LISTEN" on the master
03:24 ZachLanich http://dp.wecreate.com/hNBC
03:25 raspy_ joined #salt
03:25 hemebond Okay so it is running and listening which suggests some sort of Virtualbox networking issue.
03:26 raspy_ joined #salt
03:26 ZachLanich Ok, here's my vagrant file. Very simple, so idk that there are any issues in there: https://gist.github.com/zlanich/c1faf9409423979b923fd143ceeac78a
03:27 hemebond Wait your master is .10 not .11
03:27 ZachLanich I read somewhere that double NAT can cause connectivity issues with minions, but I'm not 100% sure what that even means lol
03:27 hemebond So you should be telnet-ing to 192.168.50.10 4505
03:28 ZachLanich I thought I was telnetting to a minion to test connectivity to it?
03:28 hemebond No, from minion to master.
03:28 hemebond Minions connect to the master, never the other way around.
03:29 ZachLanich Ok, telnet successfully connected to master from both minions
03:30 ZachLanich But still no response with test.ping or commands.
03:30 hemebond Then you'll need to run the commands with debug output. "-l debug" I think.
03:30 hemebond And check the minion log in /var/log/salt/minion
03:31 ZachLanich The minion log on the master?
03:31 hemebond On the minion.
03:31 ZachLanich If it doesn't respond, how would it log anything?
03:31 hemebond You will also want to check the master log (on the master) for errors, /var/log/salt/master
03:31 tonybaloney joined #salt
03:32 hemebond The minion might have fetched the command but been unable to respond or process it.
03:32 hemebond This is just how I debug issues. Chuck the master and minion into debug mode (update config and restart the service) then watch the logs.
03:32 tonybaloney afternoons
03:33 hemebond Remember the master doesn't send the command to the minion, it puts it into a queue and the minion fetches it.
03:33 hemebond You could also do a "netstat -anp | grep ':4505'" on the minion to see if it thinks it's connected.
03:36 ZachLanich hemebond: Ok, so there's this now: http://dp.wecreate.com/DitC
03:36 hemebond There you go :-)
03:36 ZachLanich Which is weird, because the keys used for this entire setup are in my folders and the same ones are used every vagrant up
03:36 ZachLanich So that's weirdly inconsistent, and I don't think the key actually changed, so what's the deal?
03:37 hemebond I don't see the master pem/pub mentioned in your Vagrant config.
03:37 ZachLanich Yea, it's in there: http://dp.wecreate.com/1ar6I
03:37 hemebond Which means you possibly recreated your master VM./
03:37 hemebond Oh I see.
03:38 hemebond Do an md5sum check on the pem and pub on the master VM and compare to the files you have in that directory.
03:39 hemebond That'll tell you if it's actually using your files or not.
03:40 hemebond You have "salt.no_minion = true" and also "salt.minion_key = ...". Do they not conflict?
03:43 antpa joined #salt
03:44 ZachLanich I think salt.no_minion means don't install the minion service on the master
03:44 hemebond Correct. But you seem to be providing a pem+pub for the minion.
03:44 ZachLanich I'm working on md5sum. I have to find all the key files lol
03:45 hemebond /etc/salt/pki/master/master.pem
03:45 hemebond (on the master)
03:48 ZachLanich Idk lol. It's just how the tutorial came. I'm not 100% sure how all the keys are distributed. I know the seed_master is for seeding the minion pubs onto the master, but I don't think the minion_key & minion_pub keys are actually part of the master config. I think those are an accident and don't do anything.
03:48 hemebond It looks like it. Neither here nor there anyway.
03:50 ZachLanich I'm just going to regenerate the keys lol
03:51 hemebond Am I missing something or is there nothing in your minion configs that puts the master pub onto them?
03:52 ZachLanich I'm assuming the vagrant salt provisioner does it for us.
03:53 hemebond Hmm, not sure if that was my experience. Been a while since I used Vagrant to provision VMs.
03:53 aswini1 joined #salt
03:53 ZachLanich If I delete the VMs, and vagrant up, minions respond right away.
03:54 hemebond Oh of course. The minions will just use what they're given. derp.
03:54 hemebond And you've already pre-approved the minions on the master.
03:54 ZachLanich Yea, it's seeded in this case
03:57 ZachLanich I deleted the master pub key on the minions and I'm doing a full vagrant reload now to see if it fixes it.
03:58 ZachLanich Is it normal to never get any output with service salt-minion restart? I just feel like it does nothing.
03:58 tonybaloney has anyone managed to get a salt-master running in a docker container yet?
03:58 hemebond Output? What kind of output?
03:58 tonybaloney bcos docker
03:58 subsignal joined #salt
03:59 hemebond tonybaloney: I shudder to think.. :-)
03:59 ZachLanich hemebond: Well, when you restart the master's service, you get 2 lines saying it stopped and started. You don't get anything when restarting the minion's service.
04:00 hemebond Though I suppose running it into some container that then mounts your state store could make sense. But it's so easy to install and get running.
04:00 tonybaloney I'm thinking about using salt for a whole bunch of home-automation and I wanted to theorise quickly how easy that would be
04:00 hemebond ZachLanich: Oh I see. I've never thought about it.
04:01 ZachLanich hemebond: OMFG, now I'm back to cp.push not working! Will it ever end?!?: https://gist.github.com/zlanich/cd00fa729cab347f8a39cc90fcbbd3e9
04:01 hemebond tonybaloney: I think I saw an issue on github about making a minion all-in-one or standalone package.
04:02 ZachLanich Last time I restarted the master service, it fixed it, and Idk wtf I should have to do that after a fresh vagrant up???
04:02 hemebond ZachLanich: Did you restore your master config changes?
04:03 tonybaloney also, for anyone who extends salt often, I'm looking for feedback on this feature I proposed https://github.com/saltstack/salt/pull/34921
04:03 saltstackbot [#34921][OPEN] Introduce a template system for extending SaltStack open | What does this PR do?...
04:03 ZachLanich Hmmm. It's supposed to pull it from my mounted folder, but it seems vagrant failed to do so...
04:04 ninjada joined #salt
04:04 hemebond ZachLanich: You might have more fun just making your local workstation the Salt master and use VMs for minions only.
04:09 ninjada hey guys, having a fail moment making a state to install nvm via curl and then running nvm install.. no matter what i try (like sourcing the nvm location and having it added to .bashrc) it won't recognise the nvm command. always get stderr: /bin/bash: nvm: command not found
04:10 hemebond ninjada: Are you trying to run nvm from a state?
04:13 ninjada two cmd.runs basically, installing nvm, then running nvm install --lts
04:13 hemebond Can you not just use the full path to the nvm executable?
04:14 ninjada nope, same err
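
[ed: nvm is a shell function sourced into the session, not a binary, which is why even a full path gives "command not found" from cmd.run's non-interactive shell. A hedged sketch of the usual workaround, install path assumed.]

    install-lts-node:
      cmd.run:
        # source nvm.sh first so the shell function exists, then call it
        - name: bash -c '. /root/.nvm/nvm.sh && nvm install --lts'
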
04:15 ZachLanich hemebond: Ugh, I feel like doing that on a Mac would be easier than Windows, but still a huge pita. I found a workaround I'm trying. Apparently, I'm not the first person to have Vagrant Salt fail to place a provided config file.
04:15 badon_ joined #salt
04:15 hemebond Oooh, Windows. Never thought of that.
04:16 hemebond Still, once you've got the master up and running there shouldn't be many issues after that.
04:17 hemebond Shouldn't need to recreate the master VM at any point.
04:17 ZachLanich hemebond: I'm on Mac
04:18 hemebond Not sure how much easier that is when installing and running Python stuff :-)
04:18 Disorganized_ joined #salt
04:20 Pulp joined #salt
04:21 ZachLanich Yea, I'd prefer just to get this working lol. I'd like to eventually get the virtualbox provider working and just configure a Master, then use cloud profiles to provision out my varnish server(s), several minions, etc for testing almost identically to what I'll be putting up in Digital Ocean. Right now, I'm just working on getting state files and pillars made for the web boxes, since I'm new to salt.
04:22 hemebond I don't know if salt-cloud can be run from a non-master server. Something to bear in mind. I run salt-cloud from my salt-master.
04:23 flowstate joined #salt
04:24 tonybaloney hemebond, it can
04:24 hemebond The provider uses the local (to salt-cloud) Virtualbox stuff.
04:24 hemebond Oh it can, cool.
04:24 hemebond How does it auto-accept the minion keys?
04:25 tonybaloney I've just added some digital ocean DNS support btw,
04:25 tonybaloney don't know if that helps
04:26 ZachLanich I'd like to be able to run the exact cloud/state configs on my vagrant/vb setup as I do my DO cloud eventually for local testing, just swap out the provider for vb on my local and do on the cloud. I'm assuming that's possible?
04:27 ZachLanich I feel like I'll run into networking issues somewhere, and I'm no networking pro, but I'll figure it out with some help ;)
04:27 tonybaloney so test the salt-cloud infra states first then deploy them to a master?
04:27 hemebond Well, you might want your Salt master to be a little more permanent than a local VM.
04:31 ZachLanich I will have a permanent master on DO, but I will have an identical master in my vm too.
04:31 ZachLanich Completely separate setups, but nearly identical
04:32 hemebond Oh I see.
04:32 hemebond I use a single master for all environments.
04:32 ZachLanich It will allow me to play with salt and figure out everything on my local, then push those same state/cloud configs to my real master and assume they'll work pretty well.
04:32 hemebond So it hadn't occurred to me that you'd have a master in DO.
04:32 hemebond Makes sense.
04:33 ZachLanich It'll allow me to break things without panic lol
04:33 hemebond Sure.
04:36 elias_ joined #salt
04:59 ZachLanich hemebond: Now I'm getting "Data failed to compile" on state.apply and I've literally commented out everything in the top.sls file...
04:59 hemebond ZachLanich: It's a pillar, yeah?
05:00 ZachLanich Maybe...
05:00 msn joined #salt
05:00 msn hey all.
05:00 hemebond If you just comment out everything in the top.sls it will, at least, throw a warning.
05:00 ZachLanich I haven't changed any of the pillars since it worked last though...
05:00 hemebond I'
05:00 msn Still stuck at a problem. no errors in either master or minion but the salt-state is not being read
05:00 hemebond I'll need more info.
05:00 hemebond msn: Will need more info for that too :-)
05:01 msn hemebond: i know let me put it all together
05:02 ZachLanich hemebond: I commented out everything in the top.sls pillar dir too :(
05:02 ZachLanich This is blowing my mind.
05:03 hemebond ZachLanich: You don't really want an empty top.sls. But still, I need more info, e.g., the full error message.
05:04 msn state top file : http://paste.debian.net/785274/  state.highstate http://paste.debian.net/785276/  pillartop http://paste.debian.net/785275
05:05 ZachLanich hemebond: Not very useful lol: http://dp.wecreate.com/w0bP
05:05 hemebond msn: "Function: no.None" haven't used that module before :-D
05:06 msn where is it getting that
05:07 hemebond msn: What is your file_roots and where on the master file system are these top.sls files?
05:08 msn they are in /srv/salt
05:08 hemebond Also what is the minion ID?
05:08 hemebond Oh, nevermind, I see it.
05:08 ZachLanich hemebond: I get this on the minion, but delayed like 30secs: http://dp.wecreate.com/16VUr
05:10 hemebond msn: Both your pillar and state top.sls are in /srv/salt?
05:10 msn yes
05:10 msn they are in /srv/salt/state and /srv/salt/pillar
05:10 hemebond How can you have two files called top.sls in the salt directory?
05:10 hemebond Ah, right.
05:10 hemebond What is your file_roots?
05:10 hemebond And pillar_roots?
05:10 msn and master file_roots: /srv/salt/state
05:11 msn pillar_roots: /srv/salt/pillar
05:11 hemebond Is that the exact config or are you leaving out the "base" lines?
05:11 msn i am leaving out the base lines
05:11 msn just as ec
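
[ed: For reference, with the base lines msn says he is leaving out, those settings normally read:]

    # /etc/salt/master
    file_roots:
      base:
        - /srv/salt/state
    pillar_roots:
      base:
        - /srv/salt/pillar
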
05:11 ZachLanich hemebond: Looks like that "The minion function caused an exception" error is happening every 5mins, not so much as a result of the failed compile. So there seems to be no log info for the failed compile.
05:12 hemebond ZachLanich: Yeah, looks like a minion issue. Chuck your minion into debug mode (edit minion config to have debug logging) and watch the log file.
05:13 hemebond That message suggests the minion didn't return or is taking longer than the salt master wants to wait.
05:13 hemebond msn: This was working, yeah? This config?
05:13 msn hemebond: found the prblem
05:13 msn thanks for the help
05:13 hemebond ????  what was it?
05:13 msn problem was fileserver_backend
05:13 hemebond Ah
05:13 msn it was set to git i don't know why
05:14 msn spent 2 days banging my head with that
05:16 ZachLanich hemebond: It seems it's not even successfully pulling the new sls files down. They're stale on the minion. Thoughts? (when you get a min)
05:16 hemebond ZachLanich: Hmm, try "salt 'minion' saltutil.sync_all"
05:18 hemebond Then re-run your command.
05:18 hemebond Oh.
05:19 hemebond Actually if something fails to compile the command will usually fail outright.
05:19 hemebond And won't get to the minion. So the minion won't get new states and pillars.
05:19 hemebond I need to see your top.sls files and any relevant pillars and states.
05:19 msn thanks heme
05:19 msn i will be back again :)
05:20 rdas joined #salt
05:21 tonybaloney joined #salt
05:23 flowstate joined #salt
05:29 ZachLanich hemebond: /srv/salt/top.sls: http://dp.wecreate.com/1g10a - /srv/pillar/top.sls: http://dp.wecreate.com/AHqz
05:30 ZachLanich Literally nothing.
05:30 hemebond ZachLanich: What is the command you're trying to run?
05:30 hemebond Was it a state.apply?
05:30 hemebond What is the full line you're running?
05:31 ZachLanich salt 'minion1' state.apply -l debug
05:31 hemebond Ah I see the problem :-)
05:31 ZachLanich I sure freaking hope so lol
05:31 ZachLanich I'm about to go play in traffic lol
05:31 hemebond state.apply, without an actual state, is the same as state.highstate. It reads top.sls to find the states and pillars to apply to the minion and applies them.
05:32 ZachLanich Ok
05:32 hemebond But your top.sls files are empty... so it has nothing to do :-)
05:32 ZachLanich So I prolly had a mistake in my sls files, then commented everything out and it broke it that way too lol
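
[ed: For reference, a minimal top.sls that gives state.apply something to do; the state name matches the nothing.sls ZachLanich creates shortly after.]

    # /srv/salt/top.sls
    base:
      '*':
        - nothing
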
05:35 antpa joined #salt
05:37 ZachLanich hemebond: Still failing. srv/salt/top.sls: http://dp.wecreate.com/sgud - /srv/salt/nothing.sls - http://dp.wecreate.com/17hem
05:38 hemebond Will need the error/output.
05:38 ZachLanich http://dp.wecreate.com/10Vo7
05:40 impi joined #salt
05:41 ZachLanich For the record, my master is Ubuntu 14.04 and my minions are 16.04
05:41 hemebond There's something else going on here. How much have you customised your minion and master configs?
05:41 hemebond But they're the same version of Salt, yeah?
05:42 ZachLanich Yea, vagrant installs salt from the latest bootstrap
05:42 ZachLanich The only thing I've changed in the master config is file_recv = True
05:43 hemebond What if you do...
05:43 ZachLanich Minion configs are the same as provided for the demo, and all this shit was working earlier
05:43 hemebond salt -l debug 'minion1' state.apply nothing
05:43 ZachLanich Minus I adjusted the KeepAlive settings for the minions cause they were dropping off when I first started playing with the demo and it seemed to fix it.
05:44 ZachLanich AND I literally have to run these commands 2-3 times, cuz the first couple tries, I get this: http://dp.wecreate.com/1hB5x
05:45 ZachLanich Still failed to compile on salt -l debug 'minion1' state.apply nothing
05:45 hemebond You shouldn't be having any network-related issues with local Virtualbox VMs.
05:45 ZachLanich There have been a few minor mentions of it, but no real solid "AHAs"
05:45 hemebond Is your pillar top.sls still "empty"?
05:46 ZachLanich Not anymore.
05:46 ZachLanich I literally had it running perfectly with all my pillars earlier though
05:46 ZachLanich I haven't changed them
05:46 hemebond Then I'll need to see that too. The top.sls with the pillar file/s
05:46 ZachLanich I had mysql installed and configured for RDBMS between master/minion, and it worked flawlessly right after a fresh vagrant up
05:47 ZachLanich Can't we just get rid of the pillars for now to narrow it down?
05:47 hemebond Yip
05:47 hemebond You can make that top.sls empty.
05:47 hemebond "empty"
05:47 ZachLanich Why did you quote that?
05:48 hemebond Because the file isn't really empty, it's just all commented out so it might as well be empty.
05:48 ZachLanich Ok
05:48 tonybaloney joined #salt
05:48 badon_ joined #salt
05:49 ZachLanich Still failed
05:49 ZachLanich Pillar "empty": http://dp.wecreate.com/1deCD
05:50 ZachLanich This is blowing my mind right now. I hope everyone else doesn't have this much trouble with Salt
05:52 hemebond salt-run manage.versions
05:53 ZachLanich On master?
05:53 fracklen joined #salt
05:53 hemebond Yeah
05:53 ZachLanich http://dp.wecreate.com/155Ve
05:53 ZachLanich I don't see minion 1
05:53 hemebond Well that's a problem.
05:54 hemebond Try restarting the salt-minion service on minion1
05:54 ZachLanich Yea, it's having all sorts of issues. I tried telnet and netstat again earlier and everything checked out there.
05:54 ZachLanich Oh, there's minion1, but 2 is gone lol: http://dp.wecreate.com/jE9p
05:55 ZachLanich Haven't restarted the service yet
05:55 hemebond uhh
05:55 ZachLanich Both of them just keep dropping out like crazy
05:55 hemebond Check the IP on the minion VMs.
05:55 hemebond Jump on each one locally and check the IPs.
05:55 tonybaloney have you checked the obvious stuff?
05:56 tonybaloney network latency, packet loss, disk space?
05:56 hemebond tonybaloney: Local VMs running under VirtualBox.
05:56 hemebond Having network issues and really shouldn't be.
05:56 ZachLanich Idk how to do all that lol, but if you want me to, guide me
05:57 hemebond ZachLanich: Log on to the VMs (which I think you are already) and run "ip addr"
05:57 ZachLanich http://dp.wecreate.com/1h1Al
05:58 ZachLanich Is it possible that there isn't enough disk space in one of the spots the minions or master is caching/writing to?
05:58 hemebond Well, I see two interfaces. That might mean a routing issue.
05:58 hemebond df -h
05:58 tonybaloney route -na
05:58 hemebond Will show disk usage.
05:58 Miouge joined #salt
05:58 tonybaloney sorry thats windows
05:58 ZachLanich df -h: http://dp.wecreate.com/17oNd
05:59 tonybaloney dammit brain
05:59 tonybaloney route -n
05:59 hemebond ZachLanich: Where is the "ip addr" for the minion2?
05:59 ZachLanich route -na - Invalid arg a: http://dp.wecreate.com/4z0B
05:59 POJO joined #salt
06:00 ZachLanich minion2 IP as expected: http://dp.wecreate.com/1atZ9
06:00 ZachLanich I don't think I've run any states on that minion2 yet
06:01 ZachLanich route -n: http://dp.wecreate.com/1l4xq
06:01 hemebond There we go :-)
06:02 ZachLanich same result on both minions
06:02 ZachLanich I see .0 in both. Is that the culprit?
06:02 hemebond Yeah, so your default gateway is the 10.0.2.2
06:03 hemebond Oh hang on, your master is a VM too.
06:03 ZachLanich Yea
06:03 hemebond netstat -anp | grep :450
06:03 hemebond On the minions.
06:03 ZachLanich route -n same on master too
06:04 ZachLanich http://dp.wecreate.com/134uV & http://dp.wecreate.com/Yj8X
06:04 hemebond Ah, that's right, you're using 192.168.50.10 for the master IP
06:04 ZachLanich I don't see a 4506. Is that an issue?
06:04 hemebond But the gateway is on the other interface.
06:05 hemebond 4506 is created as needed.
06:05 ZachLanich Ok
06:05 hemebond So here's what you do: delete the 10.* interface (eth0) on all the VMs.
06:05 hemebond That's the easiest way to do it for now.
06:05 ZachLanich Can I just comment it out for future reference?
06:05 hemebond If there is a file for it, sure.
06:07 hemebond I think this is the issue anyway.
06:08 hemebond I remember struggling to get Vagrant to _not_ create that additional interface.
06:10 aswini joined #salt
06:11 ub joined #salt
06:11 mohae joined #salt
06:12 ZachLanich Shoot, I used sudo ip link set down eth0 and it killed my ability to access the box lol
06:12 ZachLanich I'm reloading vagrant now
06:12 ZachLanich Not sure if it'll fix it or not.
06:12 hemebond If you restart the VM it'll bring the interface back up, no?
06:13 ZachLanich I'm hoping lol
06:14 ZachLanich I think I have to edit a file to permanently disable it, but after losing access to the box, am I permanently losing my access by editing /etc/rc.local to disable eth0?
06:14 ShaolongTan joined #salt
06:14 hemebond The interface config is in /etc/network/interfaces no?
06:15 Tyrm joined #salt
06:15 hemebond You should be able to access the VM over the 192.168.50.* IP address.
06:15 hemebond Try that first.
06:15 ZachLanich I access it via vagrant ssh master
06:16 ZachLanich I'm not 100% sure what keys it's using, if any
06:16 hemebond Ah. 10.* is the Vagrant management interface.
06:16 ZachLanich So should I still be disabling things? lol
06:16 hemebond Hmm, maybe we'll change the route.
06:17 hemebond What is the 192.168.50.* IP address on your local machine?
06:17 ZachLanich Which one? lol 10,11 or 12?
06:18 ZachLanich master, m1 or m2?
06:18 hemebond On your local machine.
06:18 hemebond The machine running Virtualbox.
06:18 ZachLanich Not sure how to find that out
06:20 ShaolongTan Hello everybody, I hit an error when I try to use the python api. Please see the output below.
06:20 ShaolongTan >>> import salt.client
                  >>> local = salt.client.LocalClient()
                  >>> local.cmd('*', 'test.ping')
                  Traceback (most recent call last):
                    File "<stdin>", line 1, in <module>
                    File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 558, in cmd
                      **kwargs)
                    File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 313, in run_job
                      raise SaltClientError(general_exception)
                  salt.exceptions.SaltClientError: Salt
06:21 hemebond ShaolongTan: Please paste that somewhere else and link to it here.
06:21 hemebond !paste
06:21 hemebond !pastebin
06:21 hemebond erg
06:21 hemebond http://paste.debian.net
06:22 flowstate joined #salt
06:22 ZachLanich hemebond: I think I'm done for the night lol. Will you be on tomorrow?
06:22 hemebond I likely will be. Will be at work though until around this time.
06:23 ZachLanich Ok. I'll try to catch up with you at some point. Thanks for all your help. This is ridiculous lol
06:23 hemebond :thumbsuo:
06:23 hemebond 👍
06:25 subsignal joined #salt
06:31 kawa2014 joined #salt
06:31 jtang joined #salt
06:37 antpa joined #salt
06:39 TyrfingMjolnir joined #salt
06:42 krymzon joined #salt
06:45 atmosx Hello, can I execute two states one after another with 1 command? Like: salt 'target' state.sls state1 state2 ?
06:46 hemebond atmosx: I believe so.
06:46 atmosx and the order is kept right?
06:46 hemebond Uh, pass. Never used it but I would assume so. Easy to test :-)
06:46 hemebond Use state.apply btw.
06:47 hemebond It's a comma-separated list of states.
06:47 hemebond Oh wait.
06:47 hemebond Are you talking about states or state files?
06:47 hemebond You can apply state files.
06:47 atmosx state files
06:49 hemebond Well the order that states run in will be determined by the actual states and their dependencies, but I would _assume_ the order you provide on the CLI is the order the files will be applied.
06:49 hemebond It's not stated explicitly in the docs.
06:50 atmosx hemebond thanks for the help, I think I'll go the traditional way launching once cmd after the other
06:51 hemebond No ambiguity that way :-)
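
[ed: The comma-separated form hemebond mentions would be `salt 'target' state.apply state1,state2`. Untested here, and as noted the ordering guarantee isn't spelled out in the docs, hence atmosx's choice to run them one after the other.]
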
06:55 atmosx :thumbsup:
06:55 hemebond 👍
06:56 atmosx works when you do it, doesn't work for me though (textual) why?!
06:56 atmosx lol
06:56 hemebond I use Pidgin with the text replacement plugin. When I type thumbsup with the colons it replaces the text with the unicode symbol :-)
07:00 catpig joined #salt
07:12 ninjada joined #salt
07:24 flowstate joined #salt
07:25 Brijesh1 joined #salt
07:28 huyby joined #salt
07:29 kawa2014 joined #salt
07:33 tonybaloney joined #salt
07:34 jhauser joined #salt
07:36 auzty joined #salt
07:36 fredvd joined #salt
07:36 Rumbles joined #salt
07:38 ivanjaros joined #salt
07:42 manji joined #salt
07:42 ninjada joined #salt
07:43 tonybaloney joined #salt
07:44 fracklen joined #salt
07:44 fracklen joined #salt
07:46 fracklen joined #salt
07:48 fracklen joined #salt
07:48 ribx joined #salt
07:48 av_ joined #salt
07:53 lero joined #salt
07:55 ninjada joined #salt
07:57 saltuser joined #salt
08:01 clouddale joined #salt
08:02 copelco joined #salt
08:11 Bryson joined #salt
08:13 Brijesh1 can we do auto-scaling using salt
08:19 rdas joined #salt
08:20 antpa joined #salt
08:23 flowstate joined #salt
08:24 ronnix joined #salt
08:25 GreatSnoopy joined #salt
08:26 tonybaloney joined #salt
08:26 Rumbles joined #salt
08:29 JPT joined #salt
08:29 yuhlw_ joined #salt
08:30 hemebond Brijesh1: You can react to events.
08:30 hemebond Some people use those events to make Salt provision new servers I'm sure.
08:31 babilen https://www.packtpub.com/books/content/how-to-auto-scale-your-cloud-with-saltstack
08:31 sagerdearia joined #salt
08:33 rdas joined #salt
08:34 babilen I use file.recurse to copy a directory to a minion, but its permissions are all fucked up. How do I set it? http://paste.debian.net/785296/
08:34 ozux joined #salt
08:35 Bryson joined #salt
08:35 ninjada joined #salt
08:37 tonybaloney joined #salt
08:38 AndreasLutro o_o
08:38 AndreasLutro report a bug then add a separate file.directory state for fixing permissions I guess
08:41 babilen Lovely, eh?
08:41 * babilen blames GitFS
08:43 s_kunk joined #salt
08:46 lubyou joined #salt
08:49 AndreasLutro considering git only stores the executable bit I doubt that's the problem
08:49 ajw0100 joined #salt
08:49 babilen Well, I'm not actually sure, but I didn't expect that behaviour.
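
[ed: A sketch of the workaround AndreasLutro suggests: let file.recurse copy, then pin permissions with a separate file.directory state. Paths and modes are hypothetical.]

    copy-files:
      file.recurse:
        - name: /etc/myapp
        - source: salt://myapp/files

    fix-perms:
      file.directory:
        - name: /etc/myapp
        - dir_mode: 755
        - file_mode: 644
        - recurse:
          - mode
        - require:
          - file: copy-files
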
08:59 ribx joined #salt
09:11 POJO_ joined #salt
09:13 manji babilen, lol
09:13 babilen https://github.com/saltstack/salt/issues/34945
09:13 saltstackbot [#34945][OPEN] file.recurse breaks directory permissions | Description of Issue/Question...
09:13 babilen Let's see
09:14 huyby left #salt
09:17 manji (and sensible)
09:17 manji hahahaha
09:17 jtang joined #salt
09:17 manji babilen, did you have a peek at the code that produced those perms?
09:17 manji they are really funny
09:17 babilen No
09:17 babilen Where would I find it?
09:17 babilen (and do I really want to see it?)
09:18 manji I'd guess /usr/lib/python2.7/dist-packages/salt/modules/file.py
09:18 manji lets see
09:20 tonybaloney joined #salt
09:22 babilen "int('{0}'.format(mode)) if mode"
09:23 flowstate joined #salt
09:24 manji I am looking at the state file as well
09:24 manji (maybe I am wrong) but at some point it says:
09:24 manji # Make sure that leading zeros stripped by YAML loader are added back
09:24 manji dir_mode = __salt__['config.manage_mode'](dir_mode)
09:24 manji file_mode = __salt__['config.manage_mode'](file_mode)
09:25 Rumbles joined #salt
09:25 manji salt.modules.config.manage_mode(mode)
09:25 manji https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.config.html
09:25 manji maybe it fucks up when it "normalises" it
09:25 antpa joined #salt
09:26 babilen Ah .. yeah
09:26 babilen That would result in 02755 wouldn't it?
09:26 babilen fuckers
09:26 manji hmm what mode is 02755 ?
09:27 babilen It's undefined, but they "intify" and "stringify" the mode in random places, so who knows what octal value falls out at the end?
09:27 manji yeah weird
09:31 babilen manji: But it's almost certainly due to that
09:32 tonybalo_ joined #salt
09:43 KingOfFools_ joined #salt
09:43 flowstate joined #salt
09:48 msn joined #salt
09:48 adelcast joined #salt
09:51 msn this is my iptables pillar http://paste.debian.net/785311/  using the saltstack formula as is, but the rules i get in the end are http://paste.debian.net/785312/ and for some reason the service rules are not being read
09:52 babilen Which formula are you referring to?
09:52 msn saltstack formula iptables
09:52 babilen https://github.com/saltstack-formulas/iptables-formula ?
09:52 msn yes
09:53 babilen Okay, so you have defined that pillar. Has it been targeted to the minion in question? Could you run "salt 'yourminion' pillar.get firewall" ? How did you apply the state to the minion?
09:54 msn my pillar definition top file http://paste.debian.net/785275
09:54 msn state file is same
09:55 babilen What do you mean by "state file is same" ?
09:55 msn top.sls in both is exactly the same for pillar and state
09:56 msn except mysql.client part but that's not being used here
09:56 babilen Where exactly are you targeting your "firewall" pillar and the "iptables" state? What does "salt 'yourminion' pillar.get firewall" return?
09:58 garphy joined #salt
09:58 msn nothing. minions keep returning nothing randomly, didn't figure out where i am getting that from
09:58 babilen What is the actual output of that command?
09:59 msn Minion did not return. [Not connected]
09:59 msn
09:59 babilen Well .. you might want to solve that issue first
09:59 msn on minion salt-call -l debug state.highstate runs fine
09:59 msn well i have not been able to figure where that problem occurs from
09:59 msn it randomly works and sometimes doesn't
10:00 babilen You should, it really shouldn't behave like that
10:00 msn well nothing in logs for me to check
10:00 babilen But okay, what does "salt-call pillar.get firewall" give you on the minion?
10:00 subsignal joined #salt
10:02 msn http://paste.debian.net/785315/
10:03 babilen How about "salt-call state.sls iptables" ?
10:04 msn and the pillar.items on master for firewall gets same result
10:04 flowstate joined #salt
10:04 guedressel joined #salt
10:04 babilen You said that that results in "Minion did not return. [Not connected]" -- So that is only intermittent?
10:05 msn yes
10:05 msn if i run a full pillar.items it works
10:05 ninjada joined #salt
10:05 babilen So, what does ""salt-call state.sls iptables" give you?
10:06 msn http://paste.debian.net/785316/
10:08 babilen Okay, that seem to have worked
10:09 POJO joined #salt
10:10 tonybaloney joined #salt
10:13 babilen msn: What's the problem?
10:15 msn its applying the whitelisting rule
10:15 msn but not the other service rules
10:17 babilen Why would it?
10:17 babilen You don't set strict mode nor any ips_allow or block_nomatch and whatnot
10:18 babilen Did you study the state?
10:18 msn yes it states block_nomatch either service specific or global
10:18 jtang joined #salt
10:20 babilen I am not familiar with that formula, but you might want to read https://github.com/saltstack-formulas/iptables-formula/blob/master/iptables/init.sls and figure out which pillar data you have to set to achieve what you want to achieve
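
[ed: For comparison, the shape the formula's pillar.example uses for per-service rules, recalled from the repo and untested; a hedged sketch, not msn's actual data.]

    firewall:
      enabled: True
      strict: True
      services:
        ssh:
          block_nomatch: False
          ips_allow:
            - 192.168.0.0/24
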
10:22 flowstate joined #salt
10:24 KingOfFools_ Hello guys! Am I right saying that I can't bootstrap salt-minion inside an lxc-container with salt.states.lxc.present? I want to get a few lxc-containers on each server with a specific role and get salt in them to configure. And I'm kinda confused whether I should use states or execution modules (or both) to get the job done.
10:27 antpa joined #salt
10:28 flowstate joined #salt
10:37 sfxandy joined #salt
10:47 babilen KingOfFools_: You might be looking for https://docs.saltstack.com/en/latest/topics/cloud/lxc.html
10:48 ninjada joined #salt
10:48 sfxandy hi everybody
10:50 sfxandy ok, struggling with creating a reactor SLS here.  does anybody know how to trigger a Pillar refresh and a Salt mine update via a reactor SLS please?
10:51 AndreasLutro what've you tried so far?
10:52 sfxandy well, this is the thing .... not entirely sure where to start.  as its running on the master I would have figured it was something to do with 'local',
10:52 sfxandy cant quite figure out what local has access to...
10:52 AndreasLutro saltutil.refresh_pillar and mine.update certainly don't run on the master
10:53 KingOfFools_ babilen: I maybe don't understand the cloud principle correctly, but it seems like i should create a provider for each lxc-node. What if there are like 30 servers? Create a provider option for each?
10:53 sfxandy do they not AndreasLutro
10:54 sfxandy so when you issue a salt 'foo*' saltutil.refresh_pillar, does the master merely signal the targeted minions to do a refresh?
10:54 AndreasLutro local. is referencing salt's localclient, poorly named, basically anything after local. is the same module/function that you would issue with a `salt name-of-minion` command
10:55 sfxandy ah ok
10:55 AndreasLutro yes sfxandy
10:55 AndreasLutro just look at the "local.state.apply" examples here https://docs.saltstack.com/en/latest/topics/reactor/
10:58 sfxandy ok so if I'm on a minion, I can trigger a pillar refresh by doing a 'salt-call saltutil.refresh_pillar'
10:58 sfxandy and it comes back 'local: True'
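
[ed: Putting AndreasLutro's pointers together, the reactor SLS sfxandy is after would look roughly like this; the target pattern is assumed.]

    # /srv/reactor/refresh.sls
    refresh_minion_pillar:
      local.saltutil.refresh_pillar:
        - tgt: 'foo*'

    update_mine:
      local.mine.update:
        - tgt: 'foo*'
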
11:01 om joined #salt
11:07 Satyajit joined #salt
11:12 haole joined #salt
11:12 haole how do I get better output out of a pkg.install run? I'm getting this bizarre report: http://kopy.io/uLnWb
11:12 antpa joined #salt
11:17 manji haole, look at the minion and master logs
11:17 manji you'll find more infos there
11:20 haole manji: thanks
11:20 jtang joined #salt
11:21 flowstate joined #salt
11:29 ninjada joined #salt
11:29 lilvim joined #salt
11:33 amcorreia joined #salt
11:42 ninjada joined #salt
11:43 haole I'm using a third-party formula which has a defaults.yaml file, and some values in there must be changed... what concept am I missing to understand where and how to change them?
11:43 Deliant joined #salt
11:46 babilen haole: You typically change them by passing in suitable pillar data. The convention to use "foo:lookup:SETTING_TO_OVERRIDE" is commonly used for os specific settings, while foo:SETTING_TO_OVERRIDE would be used for "normal" ones.
11:47 haole babilen: thanks! gonna look into it
11:48 babilen If you have a link it might be easy to check
11:49 M-liberdiko joined #salt
11:52 MadHatter42 joined #salt
11:54 felskrone joined #salt
11:55 haole babilen: https://github.com/haolez/postgres-formula
11:55 babilen That's https://github.com/saltstack-formulas/postgres-formula
11:56 babilen (I'd use the official source, rather than someones fork)
11:56 babilen What do you want to override?
11:59 haole babilen: the fork is for myself, it was advised in the documentation :)
12:00 haole I'm trying to use the upstream repo with version 9.5, but I'm getting some inconsistent results
12:00 haole I'll pastebin
12:00 babilen Ah! Sure, you are haolez :)
12:00 babilen Sorry .. I thought you references some random third person :)
12:00 numkem joined #salt
12:01 haole :)
12:01 haole here is my pillar file: http://kopy.io/MZg6t
12:01 haole and here is the output of state.apply: http://kopy.io/J3li2
12:01 haole as you can see, there is a mix of 9.5 and 9.3 (default) in there (I want 9.5)
12:02 subsignal joined #salt
12:02 haole am I correctly specifying the pkg.installed version? and is there a way to revert what state.apply does for a single state?
12:03 ninjada joined #salt
12:03 babilen You are not .. one second please
12:05 rdas joined #salt
12:07 babilen haole: My guess is that you want to define "postgres:lookup:pkg" in your pillar with a value such as "postgresql-9.5" (assuming that's the name of the package in the upstream repository
12:07 hemebond left #salt
12:08 babilen https://github.com/saltstack-formulas/postgres-formula/blob/master/pillar.example#L9
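
[ed: The override babilen suggests, written out as pillar data; the package name assumes the upstream repository's naming.]

    postgres:
      lookup:
        pkg: postgresql-9.5
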
12:08 palica joined #salt
12:10 jtang joined #salt
12:11 palica hello all, I have problems with file.managed timing out. https://gist.github.com/palica/f99da031a90ff2b4132e724929f8904e
12:13 impi joined #salt
12:14 haole babilen: gonna try! thanks
12:16 toanju joined #salt
12:18 Tyrm joined #salt
12:21 M-cpt joined #salt
12:21 necronian joined #salt
12:21 M-MadsRC joined #salt
12:25 rem5 joined #salt
12:25 ribx joined #salt
12:30 remyd1 joined #salt
12:31 remyd1 Hi. I would like to fire an event from one of my minions, modify a pillar from this event, and then apply a state. Is there any example on how to do this? The documentation is not really helpful on how to achieve that.
12:37 AndreasLutro if you want to permanently modify a pillar through a reactor you'll have to set up some external pillar that talks to a database or api, and then have your reactor update that
12:41 sagerdearia joined #salt
12:45 gh34 joined #salt
12:45 remyd1 Can I use any kind of orchestrate method to watch for a particular event on my master ? If the event appear do this, then do that...
12:47 teryx510 joined #salt
12:47 babilen remyd1: That's essentially how reactors work
12:52 remyd1 Ok. So from my minion, I am doing that kind of stuff: "salt-call event.send 'container-refresh' '{memory: 48, cores: 16}'"
12:53 cpn joined #salt
12:54 remyd1 What is the event I should look at ?
12:54 remyd1 'container-refresh' ?
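
[ed: A sketch of the master side for that custom tag. With event.send the payload lands under data['data'] and data['id'] is the sending minion; the file paths and the pillar hand-off via kwarg are assumptions.]

    # /etc/salt/master
    reactor:
      - 'container-refresh':
        - /srv/reactor/container_refresh.sls

    # /srv/reactor/container_refresh.sls
    resize_container:
      local.state.apply:
        - tgt: {{ data['id'] }}
        - kwarg:
            pillar:
              memory: {{ data['data']['memory'] }}
              cores: {{ data['data']['cores'] }}
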
12:54 averell joined #salt
12:56 cpn Hi! I'm having a struggle with the loader. What determines if an execution module is available while rendering pillars? I'm getting "Rendering exception occurred: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'my-module'" when putting {{ salt['my-module.method']('foo', 'bar') }} in a pillar sls
12:57 cpn I tried dumping salt.keys() into a dict, and it looks like just about every other module is available (~1200 items)
12:58 cpn Also, putting the same call (salt['my-module.method']('foo', 'bar')) in a state-file works, so it's just in pillars that I'm missing something
13:00 AndreasLutro cpn: pillars are rendered by the salt master, which uses different custom module loading mechanics than the minion
13:02 cpn AndreasLutro: Thanks! I followed the code, but as far as I can tell it still uses the same pillar.get_pillar method?
13:03 cpn Also, I'm unable to understand what a module needs to do to be marked for inclusion?
13:03 AndreasLutro extension_dirs
13:03 ssplatt joined #salt
13:03 Erik-P joined #salt
13:05 Erik-P when i try to run any salt * module i get this https://gist.github.com/erikpar/772d78624e27f924248990726f4942ad
13:05 fracklen joined #salt
13:07 v12aml joined #salt
13:07 teryx510 joined #salt
13:08 yuhlw_ joined #salt
13:09 cpn AndreasLutro: Sorry, a bit slow on the uptake here... You mean that the master does not read from extension dirs? (which are the underscore-prefixed ones like _modules, right?)
13:09 flowstate joined #salt
13:10 AndreasLutro cpn: I misremembered the name. https://docs.saltstack.com/en/latest/ref/configuration/master.html#extension-modules
13:10 ninjada joined #salt
13:11 racooper joined #salt
13:11 ninjada joined #salt
13:12 v12aml joined #salt
13:12 cpn Oh, didn't know about that one. Will have a look. Thanks!
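
[ed: A sketch of the extension_modules wiring AndreasLutro points at. Custom execution modules the master should see while rendering pillars go in a modules/ subdirectory of that path; the path itself is assumed, and the master needs a restart afterwards.]

    # /etc/salt/master
    extension_modules: /srv/salt/extmods

    # then place the module at /srv/salt/extmods/modules/my_module.py
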
13:13 mikecmpbll joined #salt
13:15 Erik-P Hii all need help with salt-master error https://gist.github.com/erikpar/772d78624e27f924248990726f4942ad
13:15 fracklen Erik-P can you show the reactor config?
13:17 v12aml joined #salt
13:18 sroegner joined #salt
13:19 remyd1 Is there a way to change a pillar value from a salt command (eg. a kind of 'pillar.set') or through a state (file.managed won't be easy in my case )?
13:20 v12aml joined #salt
13:22 v12aml joined #salt
13:23 fracklen Erik-P: This is usually a problem when you have the reactor clause followed by a bunch of if blocks. When none of the if-blocks match, it isn't valid yaml. Try assuming that all if-blocks are missing: either the "reactor:" clause should disappear, or a fallback reactor should match, e.g. "/nothing/to/match/here"
13:24 subsignal joined #salt
13:25 cpn AndreasLutro: That worked, thanks!
13:25 babilen remyd1: You can do that with external pillars (e.g. databases) against which you run queries. There is no generic "pillar.set".
13:26 remyd1 babilen, ok :'(
13:26 v12aml joined #salt
13:27 adulteratedjedi joined #salt
13:27 fracklen remyd1: You could use custom grains instead - there is the grains.setval
13:27 fracklen I've used that for dynamically updating minion data
13:28 remyd1 fracklen, I already knew that, but that is an option, thx :)
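
[ed: For reference, the grains.setval route fracklen mentions is a one-liner; the key and values here are hypothetical, and the value persists in the minion's grains file.]

    salt 'minion1' grains.setval container_resources '{"memory": 48, "cores": 16}'
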
13:29 babilen remyd1: What are you trying to achieve?
13:30 remyd1 babilen, I have a cluster and I will add some nodes on my cluster through lxd. My master is managing the reinstall of the minions, and then, on each minion, I create some containers that will join my cluster
13:30 remyd1 Right now, it is working, but I would like to add a possibility from the host to change the amount of resources for the container hosted on it
13:31 remyd1 So the minion should contact the salt master to refresh these dynamic values and then refresh that value on the master job scheduler
13:31 remyd1 not easy
13:31 babilen Who decides that this change needs to happen?
13:32 remyd1 some people that booked the host
13:32 babilen Okay, and they communicate that per mail, web interface or telepathy?
13:33 remyd1 a shell script :)
13:33 fracklen joined #salt
13:33 babilen It might make sense to keep that information in a database and make it available as external pillar
13:33 remyd1 the book scheduling is done through a web UI
13:33 ninjada joined #salt
13:33 remyd1 Yes
13:34 remyd1 Or I use grains + log files
13:34 remyd1 Do you use any DB for ext_pillar babilen ? If so, what do you use ? sqlite db ?
13:35 babilen I don't
13:35 ninjada joined #salt
13:35 Hybrid joined #salt
13:36 subsignal joined #salt
13:36 jtang joined #salt
13:36 Tanta joined #salt
13:36 subsignal joined #salt
13:37 fracklen joined #salt
13:37 lilvim joined #salt
13:38 Brijesh1 joined #salt
13:38 Guest46238 joined #salt
13:39 fracklen joined #salt
13:40 tho_ joined #salt
13:42 mapu joined #salt
13:43 tho joined #salt
13:45 west575 joined #salt
13:49 v12aml joined #salt
13:50 dyasny joined #salt
13:51 _JZ_ joined #salt
13:51 dyasny joined #salt
13:53 ronnix joined #salt
13:54 flowstate joined #salt
13:56 Erik-P fracklen this is the problem, i don't have a reactor
13:56 perfectsine joined #salt
14:01 orion Hi. Does anyone know why I keep getting timeouts from the salt master when I'm executing commands... ON the salt master?
14:01 pcdummy https://groups.google.com/forum/#!topic/salt-users/E5ly7igBX1M
14:02 orion https://gist.github.com/centromere/9f2355572e68e131409b4db6aacaac45 <-- I'm on 2016.3.1, Ubuntu Trusty.
14:02 Cadmus joined #salt
14:02 Cadmus Hello, very quick question, does batch mode round up or down?
14:04 babilen orion: Anything funky about that setup?
14:04 babilen pcdummy: Cool :)
14:04 babilen Cadmus: Try it with test.ping ?
14:04 * babilen can't remember, but would guess "down"
14:06 Cadmus babilen: You're right, seems to be down, to a min of 1
14:06 Cadmus Which is really the behaviour you want I guess
14:06 * babilen nods
14:06 Satyajit joined #salt
14:07 PerilousApricot joined #salt
14:08 orion babilen: Not that I can think of.
14:08 orion Is there any advanced debugging I can do?
14:08 babilen orion: Normal bare metal machines on perfectly working network?
14:08 orion babilen: EC2.
14:09 protoz joined #salt
14:09 orion There's no reason I would suspect a networking problem though. I am /on/ the salt master.
14:09 babilen orion: And you can contact the master on port 4505/4506 (tcp) ?
14:09 babilen Ah, this is the same box?
14:09 bltmiller joined #salt
14:09 orion Yes. I can successfully connect to 127.0.0.1:4505/4506 TCP.
14:09 babilen Could there be some firewall involved? Can you telnet to that?
14:10 pcdummy babilen: please let me know when you have anything i could make better with the LXD formula.
14:10 babilen And you have "master: 127.0.0.1" in your minion conf?
14:11 sinwalt joined #salt
14:11 babilen orion: Does "salt-call -ldebug test.ping" work?
14:12 babilen Which IP does salt01-test resolve to? Do you have that entry in your /etc/hosts ? Can you connect to salt01-test:4505/4506 ? (and so on)
14:12 babilen But you, presumably, accepted the minion's key beforehand?
14:13 babilen salt01-test_master != salt01-test as well
14:14 Cadmus left #salt
14:15 orion "salt-call -ldebug test.ping" does not work.
14:16 orion babilen: salt01-test resolves to 10.0.3.254, which is both in the hosts file and is assigned to eth0 on the salt master.
14:16 babilen Can you telnet to 10.0.3.254:4505 / 4506 ?
14:17 orion Yes.
14:18 babilen orion: It's just that we only had these kind of problems when the networking between master and minion was broken for some reason (mismatched MTU on different paths, firewalls, ...)
14:18 babilen I don't actually know what causes it for you, but it boiled down to "master and minion can't speak to each other" in the end
14:19 orion Sure, but at no time should packets ever leave the box.
14:19 orion I am /on/ the master itself.
14:21 babilen Yeah, I understand .. they shouldn't leave the box at all
14:21 orion In fact, tcpdump shows that communication is occurring when I restart salt-minion.
14:21 babilen And you were obviously able to accept the minion key to begin with
14:21 bltmiller joined #salt
14:21 babilen Which version of salt is this?
14:22 babilen "salt --versions-report"
14:22 orion babilen: https://gist.github.com/centromere/eb8a1af2291566a2219ad54cd8df108a
14:23 corichar joined #salt
14:23 v12aml left #salt
14:24 orion Now this is weird. I ran test.ping and it worked, but then I ran it again immediately after and it didn't work.
14:24 babilen orion: Okay, could you shut down the minion then delete /var/cache/salt along with /etc/salt/pki/minion/minion_master.pub and start the it again?
14:25 babilen *the minion
14:25 bowhunter joined #salt
14:27 orion babilen: https://gist.github.com/centromere/f1bb246ee0d67b2cee009167b3e36277
14:28 orion Do you mean to say that I should delete /var/cache/salt/minion? If I delete /var/cache/salt, I will also delete /var/cache/salt/master.
14:30 cpn Any ideas on how I can detect if a module is being called during pillar generation or not?
14:30 hasues joined #salt
14:30 raspy_ joined #salt
14:30 hasues left #salt
14:30 babilen orion: Yes, minion would be better
14:31 babilen https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.cache.html → clear_all could also be used
14:33 cpn One idea would be to figure out if it is the minion or master that is executing, but so far I haven't figured out anything that is available from the dunders that tells me
14:35 ivanjaros joined #salt
14:35 babilen orion: It's just caches .. there is nothing of value there
14:36 Tyrm joined #salt
14:36 babilen bbl
14:37 cpn Ah, __opts__['__role'] looks perfect :)
14:37 cpn it's either 'master' or 'minion'
14:37 cpn In case someone else needs something similar some day
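
[ed: cpn's finding as it would sit in a custom execution module; a minimal sketch, function name hypothetical.]

    def where_am_i():
        '''
        Report which daemon loaded this module. __opts__ is injected
        by the Salt loader; '__role' is 'master' or 'minion'.
        '''
        return __opts__.get('__role')
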
14:38 jtang joined #salt
14:38 mapu joined #salt
14:39 orion babilen: Done.
14:40 orion The problem persists.
14:41 manji orion, did you happen run some salt command, and then pressed ctrl-c ?
14:41 lilvim joined #salt
14:41 manji there was a bug about this
14:41 orion manji: No.
14:42 orion It just worked, then I ran it again <5 seconds later and it timed out.
14:42 manji is the machine ok? load etc
14:42 orion 07:43:05 up 11 days, 21:03,  3 users,  load average: 0.03, 0.03, 0.05
14:42 Andrew joined #salt
14:44 fracklen joined #salt
14:45 alvinstarr joined #salt
14:45 alvinstarr left #salt
14:48 flowstate joined #salt
14:50 dfinn joined #salt
14:50 tapoxi joined #salt
14:51 mikecmpbll joined #salt
14:52 Xevian_ joined #salt
14:53 teryx510 joined #salt
15:01 cyborg-one joined #salt
15:09 armyriad joined #salt
15:11 ronnix joined #salt
15:12 Andrew joined #salt
15:12 ub joined #salt
15:15 patarr joined #salt
15:23 autofsckk joined #salt
15:25 corichar joined #salt
15:29 tapoxi joined #salt
15:29 tapoxi anyone using a formula for LDAP/PAM on centos?
15:32 ssplatt joined #salt
15:35 mpanetta joined #salt
15:36 corichar1 joined #salt
15:39 jtang joined #salt
15:43 corichar1 left #salt
15:44 av_ joined #salt
15:44 corichar joined #salt
15:46 MTecknology import salt.modules.cmdmod \n mount_output = __salt__['cmd.run']('cat /proc/mounts')
15:46 * MTecknology weeps quietly to himself
15:48 Tanta joined #salt
15:49 lubyou joined #salt
15:49 brent_ joined #salt
15:50 ronnix joined #salt
15:52 babilen MTecknology: Really?
15:52 babilen But then .. it's really not *that* outrageous :)
15:53 babilen salt '*' mount.active ?
15:55 MTecknology babilen: this is in a grain. I didn't mention that. The grain loads cmdmod so that it can use salt to run a system command to read a file.
15:55 MTecknology with open('/proc/mounts') as file:   <-- the much more betterer option
15:57 babilen Oh, definitely
15:58 beardedeagle joined #salt
15:59 MTecknology That's really the least of the concerns with what I'm reading, though. Other stuff, like try: except: ... except there's no final exception handling. In most cases, it'll handle exactly one exception and let the rest trickle up.
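A minimal sketch of the plain-Python alternative MTecknology prefers, written as a custom grain in _grains/; the grain key and error handling are assumptions:

    import logging

    log = logging.getLogger(__name__)


    def mounts():
        '''Custom grain: mount points read straight from /proc/mounts,
        with no shelling out through cmd.run.'''
        mountpoints = []
        try:
            with open('/proc/mounts') as fh:
                for line in fh:
                    # format: device mountpoint fstype options dump pass
                    mountpoints.append(line.split()[1])
        except (IOError, OSError):
            log.warning('could not read /proc/mounts')
        return {'mounts': mountpoints}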
16:00 cyborg-one joined #salt
16:01 Tyrm_ joined #salt
16:02 ronnix_ joined #salt
16:03 rem5 joined #salt
16:05 bltmiller joined #salt
16:10 west575 joined #salt
16:10 Tyrm joined #salt
16:11 mpanetta_ joined #salt
16:20 flowstate joined #salt
16:21 PerilousApricot joined #salt
16:21 traph joined #salt
16:23 ZachLanich joined #salt
16:24 jhauser joined #salt
16:25 Tyrm joined #salt
16:25 ingslovak joined #salt
16:28 ronnix joined #salt
16:30 ingslovak Hey guys. So, today we upgraded from 2015.8.5 to 2016.3.1 but things went south pretty soon; no minion is getting its Pillar data. We use the "environment" option in minion config files to confine groups of minions, and for some reason this var is not propagated to the master in __opts__ when a minion requests its pillar data. Can anyone point me to the code responsible for sending minion opts to the master? Thanks a lot
16:30 ronnix joined #salt
16:31 ingslovak Yeah and that all is relevant because we use the "legacy Git pillar" mode, to be able to dynamically use environments mapped to Git branches
16:35 writtenoff joined #salt
16:37 corichar1 joined #salt
16:39 jenastar joined #salt
16:39 jtang joined #salt
16:39 MTecknology ingslovak: dumb question.. did you update the master before the minions?
16:41 ingslovak MTecknology: yes, not my first upgrade. But compared to a parallel set of servers that still run 2015.8.5, the opts["environment"] is sent correctly, whereas in the updated world it is None
16:42 oida joined #salt
16:44 ingslovak so I've tapped into the SyncWrapper zeromq initialization on the minion and the "environment" opt gets up there fine. Looks like master will discard it somehow
16:48 brotatochip joined #salt
16:49 onlyanegg joined #salt
16:50 ingslovak So the question now is, where on the way from receiving the minion request to initializing Git pillar would the master reset/lose __opts__['environment'] from the minion?
16:51 MTecknology This is the extent I can help :(  https://github.com/saltstack/salt/pull/33764
16:51 saltstackbot [#33764][MERGED] Merge instead of update pillar overrides | An earlier commit (made in develop the 2015.8 release branch was...
16:51 MTecknology probably 100% unrelated
16:52 antpa joined #salt
16:55 rdas joined #salt
16:57 mpanetta joined #salt
16:58 ingslovak thanks for the effort, but unfortunately my problem is a level above: the pillar data isn't even cloned from the Git repo (apart from the "master" branch), since the env is missing.
17:01 IdoKaplan joined #salt
17:02 aw110f_ joined #salt
17:03 ageorgop joined #salt
17:07 aswini joined #salt
17:08 brotatochip joined #salt
17:09 corichar joined #salt
17:10 Edgan joined #salt
17:11 sagerdearia joined #salt
17:11 IdoKaplan Hi, I'm using "file.managed" and I have an issue with a python list. Can you please help? http://pastebin.com/YFbixPz4
17:12 rem5 joined #salt
17:12 ponyofdeath hi, what is the recommended way to use the output from one state in other states, as a way to control whether they run?
17:12 ponyofdeath setting an environment variable and then checking whether it's true or false in the other states?
17:13 impi joined #salt
17:13 pcdummy IdoKaplan: do i see it right that the output formatting is wrong?
17:13 Perilous_ joined #salt
17:13 pcdummy IdoKaplan: best is to use a jinja template for the output, do you do that?
17:14 ingslovak ponyofdeath: see https://docs.saltstack.com/en/latest/ref/states/requisites.html , covers 99% of those scenarios
17:15 pcdummy ponyofdeath: when you can let the state fail (red output) then you can simply do "require" as ingslovak pointed out.
17:15 DammitJim joined #salt
17:16 IdoKaplan pcdummy: yes, of course, i'm using Jinja. The output is different. In option 1 the output is shown as a list - ['appfe']. In option 2 the output is shown not as a list - appfe. I would always like to get the output without the list - appfe.
17:16 pcdummy IdoKaplan: give your jinja template
17:17 bltmiller joined #salt
17:17 pcdummy IdoKaplan: ahh i see
17:18 pcdummy you read grains.roles and always want a list; then you have to define them as a list in /etc/salt/grains
17:18 ponyofdeath pcdummy, ingslovak : in this example https://bpaste.net/show/e6f4cc15fc01 i am trying to run the init only if that psql command does not work. but if the jchem_psql service is not running, the psql check won't work either: the init process requires the service to not be running, and since the require stops the service before the unless runs psql, the check fails
17:19 Miouge joined #salt
17:19 bowhunter joined #salt
17:19 IdoKaplan pcdummy: I want the output to not be a list, even if /etc/salt/grains is configured to use a list.
17:20 ponyofdeath is there a way to pull that check out into a state that runs first, and make the others conditional on whether it failed or not
17:20 PerilousApricot joined #salt
17:21 pcdummy ponyofdeath: "watch" helps here
17:21 pcdummy ponyofdeath: use that instead of start/stop_jchem_psql
17:21 whytewolf joined #salt
17:21 pcdummy ponyofdeath: not sure about your question
17:21 pcdummy ponyofdeath: watch: https://docs.saltstack.com/en/latest/ref/states/requisites.html#watch
17:22 rem5 joined #salt
17:23 pcdummy ponyofdeath: that solves your question, right?
17:23 ponyofdeath not sure what watch does
17:23 ponyofdeath even after reading that
17:24 pcdummy ponyofdeath: it restarts a service for example when something changes
17:24 ponyofdeath it returns what has changed during a salt
17:24 ponyofdeath run?
17:24 ponyofdeath well that is not what i want
17:24 ponyofdeath i want the service to be stopped
17:24 ponyofdeath or the init function will fail
17:24 pcdummy ponyofdeath: you switch services?
17:25 ponyofdeath nope same service
17:25 ponyofdeath the service has a start stop and init
17:25 ponyofdeath init creates a directory
17:25 ponyofdeath init will not run if service is already running
17:25 ponyofdeath i want to run init on first install, but not after the psql check i have returns 0
17:26 ponyofdeath that is why in the cmd.run i require service to be stopped
17:26 * pcdummy is thinking
17:26 ponyofdeath but i don't need that cmd.run to run once the service has been initialised, which is why i have the unless psql query
17:26 pcdummy there is for sure an easy solution
17:27 ponyofdeath the unless fails though, since it's run after the require
17:27 pcdummy ponyofdeath: cant you just check for that directory?
17:27 ponyofdeath nope, i guess i could touch a file and check for that file
17:28 ponyofdeath or set an environment variable
17:28 pcdummy touch file sounds best
17:28 ponyofdeath use a better check but was wondering if there was something else u guys know of
17:28 pcdummy i'm not the ultra pro here, may someone else knows more
17:28 ponyofdeath pcdummy: cool no worries thanks for your help!
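A minimal sketch of the touch-file guard just discussed, using the creates argument of the cmd state; state IDs, commands, and paths are hypothetical:

    init_jchem_psql_store:
      cmd.run:
        - name: jchem-psql init && touch /var/lib/jchem/.initialised
        # skipped on every run after the marker file exists
        - creates: /var/lib/jchem/.initialised
        - require:
          - service: stop_jchem_psql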
17:29 pcdummy ponyofdeath: you could watch the deb install
17:29 pcdummy ponyofdeath: when deb install changes -> init -> start
17:30 ponyofdeath pcdummy: yeah let me try that
17:31 pcdummy ponyofdeath: but be warned, when you upgrade versions it will run again
17:31 ponyofdeath yup i want to use the onfail
17:32 flowstate joined #salt
17:33 ponyofdeath have a psql state and then if it fails run init
17:33 ponyofdeath does that sound good
17:34 JPT joined #salt
17:34 pcdummy yes if you set that onfail for all 4 states.
17:34 pcdummy i would also restart those states
17:34 pcdummy in chronological order
17:37 ponyofdeath https://bpaste.net/show/1346aa51ae34 pcdummy that looks good?
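The bpaste links have since expired; a hedged reconstruction of the onfail wiring under discussion, with state IDs from the conversation and hypothetical commands:

    check_jchem_psql_init:
      cmd.run:
        - name: psql -c 'SELECT 1 FROM jchem_structures' jchemdb

    init_jchem_psql_store:
      cmd.run:
        - name: jchem-psql init
        # run only when the check state above fails
        - onfail:
          - cmd: check_jchem_psql_init

Note that onfail fires only when the watched state actually fails, and the failing check still turns the run red, which matches the confusion that resurfaces later in this log.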
17:38 IdoKaplan pcdummy: do you have any suggestion?
17:38 pcdummy IdoKaplan: you overwrite grains with your own template?
17:38 pcdummy IdoKaplan: where do you use those grains?
17:39 pcdummy IdoKaplan: in a jinja template you could check for iterable (or something like that) and convert it in jinja
17:40 pcdummy ponyofdeath: you applied "onfail" only to init, that's intended?
17:40 jtang joined #salt
17:40 IdoKaplan pcdummy: i'm using grains in a file. i'm not sure that I understand your idea.
17:40 s_kunk joined #salt
17:41 pcdummy IdoKaplan: you write those grains somewhere?
17:41 pcdummy IdoKaplan: what do you do with grains?
17:41 Garo_ joined #salt
17:41 IdoKaplan pcdummy: I would like to print the grains into a file, and the output should always be without the list.
17:42 pcdummy IdoKaplan: then use "file.managed" with template: jinja and convert "grains.roles" in that template
17:43 IdoKaplan pcdummy: This is what i'm doing... I think that the issue is not understood. I will try to explain again.
17:43 pcdummy IdoKaplan: give a full example please
17:43 pcdummy with both the "state" and the jinja template of it
17:44 IdoKaplan pcdummy: please see option 1 - http://pastebin.com/YFbixPz4, the grain is a list, right?
17:44 antpa joined #salt
17:44 IdoKaplan pcdummy: option 2- the grains is not a list, right?
17:44 pcdummy IdoKaplan: yep
17:45 irctc554 joined #salt
17:45 IdoKaplan pcdummy: cool, and if I use "{{ grains.roles }}" in option 1 - the output will be print as ['appfe']. right?
17:46 Miouge joined #salt
17:46 pcdummy right
17:46 pcdummy IdoKaplan: i write a pseudo code for you.
17:47 IdoKaplan pcdummy: I would like to print it as appfe and not ['appfe']
17:47 babilen grains.roles[0] ?
17:47 cmarzullo {{ grains.roles | join(' ') }} will join the list. But you'll probably need to do one thing if it's a list and another if it's not.
17:48 babilen Ah, multiple entries
17:49 pcdummy IdoKaplan: https://bpaste.net/show/267454c5b481
17:49 pcdummy IdoKaplan: that's what you want? (it's completely untested).
17:49 toanju joined #salt
17:49 om joined #salt
17:50 cmarzullo that's tricky pcdummy cause strings are iterable. You specifically want to check if it's a list.
17:50 pcdummy cmarzullo: ohh :/
17:50 babilen Shouldn't it always be a list?
17:50 pcdummy no
17:50 pcdummy one time its a string one time its a list
17:50 pcdummy and he doesn't want to change grains.roles
17:51 pcdummy cmarzullo: should have known that :/(
17:51 * babilen mentions that grains for roles are a bad idea and shuts up about it
17:51 irctc554 Question: I'm thinking about setting up large states to run only once if a highstate is called on a machine. Is it kosher to use jinja with an if/else to do that sort of a thing, or is there maybe a better way I'm not seeing?
17:52 bass joined #salt
17:52 pcdummy IdoKaplan: update https://bpaste.net/show/5f9fd72309f9
17:52 mpanetta joined #salt
17:53 babilen irctc554: You could target that by some custom grain and set that grain at the end of your "large state" .. that way it won't be targeted again
17:53 cmarzullo pcdummy: IdoKaplan: https://bpaste.net/show/0bc1707d42c1
17:54 cmarzullo that'll join if it's a list otherwise just spit out the value.
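cmarzullo's paste has also expired; a minimal reconstruction of the template logic being described, assuming a grain named roles that may arrive as either a string or a list:

    {#- join if it's a list, otherwise emit the value as-is #}
    {%- if grains['roles'] is list %}
    roles: {{ grains['roles'] | join(' ') }}
    {%- else %}
    roles: {{ grains['roles'] }}
    {%- endif %}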
17:54 pcdummy cmarzullo: so jinja knows about "is list" :)
17:54 pcdummy cmarzullo: learned something, ty
17:55 cmarzullo yeah it's not documented. But works anyway.
17:55 pcdummy cmarzullo: what is a list (list, tuple, set - all together) ?
17:55 cmarzullo Well, not explicitly documented.
17:56 cmarzullo hmm not sure. But there are examples for is iterable, so people often reach for that.
17:56 cmarzullo I pulled that by printing out the type of a variable. Think I got stuck cause I didn't know what my variable type was. So I printed it out. Showed up as list. So tried is list
17:57 brotatochip joined #salt
17:58 cmarzullo There's a salt module that will print out the data type.
17:59 cmarzullo I think it was test.arg_type
17:59 cmarzullo https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.test.html
18:00 antpa joined #salt
18:00 cmarzullo irctc554: I'd need more info to understand what you are asking
18:00 cmarzullo you should design your states so they are idempotent.
18:00 cmarzullo won't matter if it highstates multiple times.
18:00 antpa joined #salt
18:01 bltmille_ joined #salt
18:02 PerilousApricot joined #salt
18:03 antpa joined #salt
18:04 tapoxi joined #salt
18:06 IdoKaplan pcdummy and cmarzullo: thank you very much. I tried cmarzullo's simple state and it's working, but i have a new problem - if there is more than one role, the output will print both roles. I would like to always print the first role in the list
18:07 EzeQL joined #salt
18:08 cmarzullo instead of join use first
18:08 cmarzullo http://jinja.pocoo.org/docs/dev/templates/#first
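Combining the two suggestions, a one-line variant that always yields a single role; the grain name is assumed from the earlier pastes:

    {{ (grains['roles'] | first) if grains['roles'] is list else grains['roles'] }}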
18:09 ponyofdeath pcdummy: yeah, here is what i have come up with https://bpaste.net/show/c238aa0b0c87 but this does not work after the second run. https://bpaste.net/show/0cc739bdf6fa basically saying that the onfail req did not change, which it obviously does
18:13 cmarzullo your cmd.run needs a guard. Only run if the state on line 53 fails.
18:14 IdoKaplan cmarzullo: 10x!! working!
18:14 IdoKaplan i'm very happy, thank you and pcdummy
18:14 rem5 joined #salt
18:15 cmarzullo something like cmd.run: - name: init db - unless: db check fails
18:15 ponyofdeath cmarzullo: i have the guard.. its the onfail: no?
18:15 cmarzullo happy to help IdoKaplan
18:15 cmarzullo should be in the same state id
18:15 cmarzullo If I understand what you are doing.
18:15 cmarzullo only init the db if a sql command fails.
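A minimal sketch of the single-state-id pattern cmarzullo is describing; the commands are hypothetical:

    init_jchem_psql_store:
      cmd.run:
        - name: jchem-psql init
        # the guard runs first: skip the init when the query succeeds
        - unless: psql -c 'SELECT 1 FROM jchem_structures' jchemdb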
18:16 ponyofdeath hmm, not sure about state ids. basically trying to get the state init_jchem_psql_store to run only when check_jchem_psql_init fails
18:17 cmarzullo yeah you can put that all in the same state id using unless
18:17 ponyofdeath cmarzullo: well i can't, because the check requires the service to be up
18:17 ronnix joined #salt
18:18 ponyofdeath the stop_jchem_psql is called in one, and the check requires the service to be up
18:19 cmarzullo I think I'm tracking.
18:19 cmarzullo So the jchem-psql init has to be run with the DB down?
18:20 cmarzullo Like if your select statement fails stop the db and do the init?
18:21 Netwizard joined #salt
18:23 subsigna_ joined #salt
18:24 edrocks joined #salt
18:26 Miouge joined #salt
18:26 bluenemo joined #salt
18:27 Bryson joined #salt
18:27 sfxandy joined #salt
18:34 ronnix joined #salt
18:37 corichar joined #salt
18:37 amcorreia joined #salt
18:40 mohae joined #salt
18:41 jtang joined #salt
18:43 alvinstarr joined #salt
18:50 irctc554 Thanks babilen for your suggestion
18:52 Miouge joined #salt
18:57 nZac joined #salt
18:57 barajasfab joined #salt
18:59 GreatSnoopy joined #salt
19:00 ingslovak joined #salt
19:09 rick__ joined #salt
19:10 rick__ Hi, is there an easy way to run a test against a git branch separate from the production branch?
19:10 rick__ so I can test my states before committing to my prod repo
19:11 cmarzullo rick__: http://unicolet.blogspot.it/2016/05/a-not-so-short-guide-to-tdd-saltstack.html
19:12 rick__ Very cool
19:12 cmarzullo that's a good blog about breaking your states down into testable formulas. It's the method I use and we have jenkins spin up vms and test everything with test-kitchen.
19:13 Miouge joined #salt
19:14 rick__ was hoping for a more basic salt-call -t --branch:<git branch> <machine name>
19:14 cmarzullo heheh yeah it's a little more involved than that.
19:14 cmarzullo You could have a separate configuration file and point salt-master at that.
19:15 CeBe joined #salt
19:15 rick__ if it was not for several people pointing at the same server
19:16 cmarzullo Yeah there's that.
19:16 jenastar joined #salt
19:16 cmarzullo Also environments.
19:17 cmarzullo You use them so your different state directories are tied to different envs
19:17 rick__ ??
19:17 CeBe joined #salt
19:17 cmarzullo in your top file you have base:, but you can also have dev:, prod:, etc
19:18 rick__ are there docs around the environments?  I actually really like the blog for building a better test environment.
19:18 cmarzullo https://docs.saltstack.com/en/latest/ref/states/top.html#environments
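A hedged sketch of that setup: each environment gets its own file root in the master config, and the top file targets per environment; paths and match patterns are hypothetical:

    # /etc/salt/master
    file_roots:
      base:
        - /srv/salt/base
      dev:
        - /srv/salt/dev

    # /srv/salt/base/top.sls
    base:
      '*':
        - core
    dev:
      'minion-dev*':
        - webserver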
19:18 rick__ cool that can fix the short term
19:18 joshin joined #salt
19:19 cmarzullo I'm very happy with the test-kitchen workflow. we have dozens of formulas that we can match together for whatever.
19:20 rick__ I love that but it is more than I can scope for the quarter.
19:20 rick__ Currently just trying to get salt-master recoverable in a DR, with syndic and api all up and running
19:20 protoz joined #salt
19:21 cmarzullo yeah takes some time. we started breaking out services from our monolithic state tree.
19:22 rick__ find any good tricks to recover salt-master?  We have all the configs/keys in a git repo.  Would love to be able to use a syndic server to build it.
19:23 cmarzullo naw not yet. There was a good salt talk last saltconf about a dude who pushes all his keys into s3. So when a minion is (re)provisioned it checks there and makes keys if it can't find itself.
19:23 rick__ Yup found lots of examples of that.
19:24 rick__ Just trying to find an easy way to auto-configure the master. All it really needs is the master config file. So I might just do an install that copies that and restarts.
19:25 cmarzullo I have a salt formula that pulls all the pieces in. Doesn't manage the keys though.
19:25 cmarzullo clones things where they need to go etc.
19:25 protoz joined #salt
19:26 cmarzullo for newly provisioned boxes we have a webhook to auto add and revoke keys.
19:26 rick__ For master or just minions?
19:27 rick__ minions I have working great, similar to above I store in s3.
19:27 rick__ Master is my problem child
19:27 cmarzullo we do master also. Does all api, syndics.
19:27 cmarzullo It's actually quite easy; it's just pillar.
19:28 cmarzullo then I do a {{ saltmaster_config | toyaml }} or whatever in the config file.
19:28 rick__ cool, and how do you build it if the master is down?
19:30 cmarzullo Like rebuild if gone? Haven't tackled that yet in prod. Have too much day job stuff to do. The master ain't critical. If it goes down I'll do the pet thing and bring him back. We have backups and stuff.
19:31 cmarzullo But I do do that in our vagrant project. Basically drops in enough of a master config file and enough of the formulas to highstate the master.
19:31 cmarzullo Were I to spend more time I'd bake it into an image, or have a provisioning script.
19:33 rick__ That is what I am working on. A provisioning script that installs git, pulls in the needed files and then installs.
19:33 rick__ Thanks for the test-kitchen link, that is awesome
19:33 cmarzullo great. it's really a fun way to develop states.
19:34 viq Haven't read the link yet, but I also am playing with test-kitchen, in combination with testinfra and {kitchen,vagrant}-lxc
19:35 cmarzullo Yeah we'd love to move from server spec to testinfra. Keep it more python like.
19:35 jgarr joined #salt
19:35 lero joined #salt
19:36 viq I should learn python first ;)
19:36 viq I heard good things about InSpec, and goss seems interesting
19:38 cmarzullo haven't looked at them. test-kitchen, kitchen-salt and serverspec are 'enough' for now. We want to improve it. But can't spend all day automating our automation system :/
19:39 viq "who automates the automation automation automation?!" ;)
19:39 rick__ So, salt-ssh: is there a way to have salt build a roster of its available minions?
19:39 cmarzullo viq: turtles all the way down.
19:40 rick__ viq: the robots
19:40 viq rick__: AFAIK you have to provide it - though you could build it using other tools
19:40 viq roboturtles
19:41 tapoxi save me from myself #salt, is this sensible? http://hastebin.com/kazepewose.sm
19:41 rick__ Yup, kind of figured it would be that way. But since it already knows its minions I was thinking there might be a way.
19:41 cmarzullo tapoxi: no.
19:41 tapoxi see thats why I stopped :P
19:41 cmarzullo is that your top file?
19:41 tapoxi no its a sls
19:42 tapoxi base.sls
19:42 jtang joined #salt
19:42 tapoxi problem is git is installed with base.packages, and the users formula I'm including needs to run after
19:43 edrocks joined #salt
19:44 cmarzullo that should all be in the top file.
19:44 viq rick__: ...how does it know the minions?
19:44 cmarzullo imo
19:44 rick__ I just want it as a backup in case things go bad.
19:45 rick__ so I want all my current minions to also be in a roster file
19:45 viq rick__: ah. Then maybe you could build it with mine data?
19:45 tapoxi you're probably right cmarzullo
19:45 rick__ That is what I am thinking
19:49 viq rick__: be aware that I have no experience with mine and people here have been saying it's not reliable for them.
19:50 rick__ Great.
19:50 viq And be aware that mine will not work across syndics
19:50 viq (at least to my knowledge)
19:56 MTecknology Upcoming SaltStack meetups: Make sure to attend an upcoming SaltStack meetup in Las Vegas, Los Angeles, Phoenix, Zurich, or Orange County, Calif.
19:57 * MTecknology grumbles
19:57 * MTecknology just left LA
19:58 viq Well, there's one on this side of the pond, though still half a continent away
19:59 sagerdearia joined #salt
19:59 MTecknology pond?
19:59 * MTecknology is currently in mountain view
20:00 ageorgop joined #salt
20:01 viq MTecknology: the body of water spanning 4 time zones :P
20:03 M-cpt left #salt
20:04 MTecknology ooooh
20:04 MTecknology viq just made a yo mamma joke..
20:09 ponyofdeath cmarzullo: sorry, yes that is correct.
20:09 elias_ joined #salt
20:11 jgarr I frequently get to a state where all of my minions are "Not Connected" when I try to salt '*' test.ping them. Usually I can restart the master and they come back (once they re-auth) but then they slowly start to go away again. I'm checking with salt-call manage.down and the number keeps going up (every ~30 minutes) until they're all down
20:12 jgarr I'm on 2016.3.1-1 mostly on redhat. I'm running the server in debug mode but there's way too much noise (auth validations) Anyone know what could be causing this?
20:13 jgarr I don't remember having this problem with my PoC server with a lot fewer nodes. The server doesn't run highstate anywhere, just used for module commands
20:18 ajw0100 joined #salt
20:21 cmarzullo seems like networking to me. Your box can handle that many open connections? (ulimit)
20:23 cyborg-one joined #salt
20:23 cmarzullo ponyofdeath: without knowing your app it's hard to suggest an idempotent way to do what you want. In mysql land, I install the software, load the schema if it's not there, and ensure the service is running.
20:24 cmarzullo it's odd to me that you have to do your db init with the service stopped.
20:24 ponyofdeath cmarzullo: well i just need to understand why onfail is not working as i think it should
20:24 ponyofdeath why is the state failing but not triggering the init state which has the onfail:
20:27 bowhunter joined #salt
20:27 cmarzullo the state (i lost the pastebin) is probably not returning correct return codes. So maybe the command isn't 'failing'
20:27 cmarzullo run the command by hand to make it fail, then 'echo $?'
20:27 stickmack jgarr: do you have an ASA in the path somewhere? Or a really old version of zeromq? I've seen similar failure modes with either of those
20:29 jgarr stickmack: ASA? The only zmq on the box came from the salt repo when I installed everything on a new server last week
20:30 stickmack ASA firewall
20:30 jgarr zeromq3-3.2.5 is installed
20:31 stickmack should be new enough then
20:31 jgarr no firewall on the box. All minions are in the datacenter with the server. I get a constant stream of "auth request" and "auth accepted" in the master log (using -l info)
20:32 ponyofdeath cmarzullo: the output is showing the state as red, so it is failing
20:32 ponyofdeath cmarzullo: but afterwards the state that has onfail in it says "State was not run because onfail req did not change"
20:34 stickmack how often is the same minion trying to request authentication?
20:34 jgarr cmarzullo: $(cat /proc/sys/fs/file-max) is 2433369 and $(lsof | wc -l) is 82602
20:35 jgarr stickmack: the stream is constant (~5000 minions). I didn't change any defaults on the minion config
20:36 * jgarr is reading through the scaling document now
20:36 stickmack more curious if the same minions are trying to re-auth constantly, i.e. they're losing their channel to the master
20:37 jgarr that could be. There's maybe no way of deciphering that in all the re-auths
20:38 stickmack grep 'Authentication request from' master | awk {'print $1" "$8'} | sort | uniq -c | sort -n
20:38 stickmack or something to that effect
20:38 jgarr I think the only thing I changed on the server was increasing the thread count to 50 and adding eauth
20:39 protoz joined #salt
20:39 viq huh, 5k minions? nice...
20:40 west575 joined #salt
20:40 krymzon joined #salt
20:41 jgarr stickmack: whoa, yep, some have LOTS of auth attempts
20:41 jgarr 1000+
20:41 oida_ joined #salt
20:42 stickmack that seems excessive
20:42 jgarr a majority have ~3-5, which probably correlates to master restarts
20:42 jtang joined #salt
20:43 jgarr looks like most of the hosts with high counts are in a remote datacenter. maybe timeout?
20:43 brotatochip joined #salt
20:43 stickmack possibly
20:44 stickmack does the local minion log on the high auth attempt nodes have anything interesting to say?
20:45 cmarzullo ponyofdeath: I'm running out of ways to suggest that your state file is not the right approach. I've got 1000s of states written and haven't had to use onfail.
20:46 MTecknology http://dpaste.com/336E89A  <-- is this a known bug? If it's in the correct state, it shouldn't report changes
20:47 ponyofdeath cmarzullo: what do you suggest as an alternative?
20:47 MTecknology cmarzullo: How ya been buddy?
20:48 MTecknology Assuming I know you because the world isn't /that/ small...
20:48 eseyman joined #salt
20:49 cmarzullo Living the dream MTeck. it's probably me. At least I think it's me.
20:49 hemebond joined #salt
20:49 viq Or so the voices tell you
20:49 MTecknology cmarzullo: Chris?
20:49 cmarzullo aye
20:49 cmarzullo dave?
20:50 jgarr stickmack: one of the systems had an old master key. I thought I cleaned that out everywhere, but I guess I missed some. I'll make another sweep and hopefully it stops some of the hosts with lots of re-auths
20:50 * MTecknology is Michael Lustfield
20:50 cmarzullo ah ha! thought it might have been you.
20:50 MTecknology :)
20:52 flowstate joined #salt
20:52 stickmack I'm curious what ratios others use for their worker_threads. I've generally stuck to 10:1 (minions:workers), as anything lower has led to responsiveness issues when stating minions
20:53 MTecknology I always size it to the box, not the minions
20:53 jgarr 10:1? whoa that's a lot of worker threads. I'm 100:1
20:53 MTecknology I think I tend to sit closer to 500:1 :P
20:54 stickmack yeah, why I'm asking
20:54 MTecknology As a rule of thumb, I do ten workers on a master and -b 8 if I want to hit everything
20:54 haole joined #salt
20:55 sagerdearia joined #salt
20:55 haole I'm trying to use the phabricator formula, but even when using the pillar.example file, I get a weird YAML: http://kopy.io/C897m (formula at https://github.com/bougie/salt-phabricator-formula/blob/master/pillar.example)... can someone point me in the right direction?
20:55 stickmack guess that's a difference
20:56 stickmack when I want to splatter all of the minions I don't want to wait around
20:57 ttw joined #salt
20:57 jgarr stickmack: same here. My master is also a pretty big box so I can use lots of threads
20:57 arif-ali joined #salt
20:58 ttw Hi everyone, does anyone have experience with the postgres module ?
20:58 ttw im looking for the equivalent of  `ALTER SCHEMA public OWNER TO "username";` for a given database.
20:59 ttw I tried dbname : schemas:      public:        owner: username  but it doesn't seem to do anything
20:59 nshttpd I'm trying to migrate to salt and I'm running up against getting the ordering right, such that the minion will know custom grain information at start so it can properly pull in pillar data needed in templates after highstate.
21:00 nshttpd in the /etc/salt/master.d/reactor.conf  is `salt/minion/*/start` the place to put the sync_grains.sls
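A minimal sketch of that wiring, assuming the reactor SLS targets the minion that just started; the paths and the reactor SLS name are hypothetical:

    # /etc/salt/master.d/reactor.conf
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/sync_grains.sls

    # /srv/reactor/sync_grains.sls
    sync_custom_grains:
      local.saltutil.sync_grains:
        - tgt: {{ data['id'] }}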
21:01 cmarzullo nshttpd: that's the problem with grains. Not good for changing data like that.
21:01 nshttpd so if I have an instance I'm autoscaling up with a specific tag on it
21:01 nshttpd to define what it is/does
21:02 nshttpd so that the proper software will be installed on it. how should that be configured so auto-scaling will happen automagically?
21:02 cmarzullo match based on minion_id?
21:02 nshttpd ie, if it's a kubernetes-worker-node or fluentd-node
21:03 cmarzullo We moved away from that cause I didn't like the ability for the minion to determine its fate. Box gets rooted. Set the grain to role: payroll_db
21:03 cmarzullo cue long night.
21:03 jgarr another question I have that I couldn't find an answer to. If I have multiple masters and have the api configured only on one of them. If I run pepper commands against the api server will there be any limitations to what minions I can access? I'm assuming minions authenticate to all masters in a multi-master setup so pepper '*' test.ping should still get everything right?
21:04 antpa joined #salt
21:04 nshttpd in cloud stuff where you may need different configs based on region or zone though.
21:05 nshttpd you're suggesting all of that gets encoded into the hostname
21:05 nshttpd or a salt-master per region and zone?
21:05 nshttpd so you co-ordinate configurations across multiple masters?
21:06 brotatochip joined #salt
21:06 cmarzullo All the data for the host is in pillar. The hostname determines what pillar you get.
21:06 cmarzullo We have strong separation between data about states and the states themselves.
21:06 ribx joined #salt
21:08 nshttpd so wildcard based on host basename then?
21:09 cmarzullo mostly. in the pillar itself we'll do other jinja trickery if needed. 'if ip in Europe IP range, include other pillar file'
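A minimal sketch of that pillar-side trickery, swapping the pseudo-check for the network.in_subnet execution module; the subnet and the included file are hypothetical:

    {#- pillar top-level sls #}
    {%- if salt['network.in_subnet']('10.20.0.0/16') %}
    include:
      - region.europe
    {%- endif %}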
21:10 nshttpd which is what I was trying to do with the tags on the instance template definition and providing it as a grain
21:10 johnkeates joined #salt
21:11 cmarzullo I'm pretty sure people are successful with that path too. Can you bake it into your ami?
21:12 nshttpd not amazon. gcp.
21:13 mikecmpbll joined #salt
21:13 nshttpd I've got the _grains/gce.py making it down. it's just some ordering setup that I'm missing.
21:14 cmarzullo not sure m8.
21:16 flowstate joined #salt
21:16 nshttpd but in theory, in a master.d/reactor.conf file .. the 'salt/minion/*/start' should get fired before the first highstate at initial boot, correct?
21:18 ageorgop joined #salt
21:20 mohae_ joined #salt
21:24 Sarphram joined #salt
21:26 rodr1c joined #salt
21:26 rodr1c joined #salt
21:26 packeteer joined #salt
21:26 jwon joined #salt
21:26 chutzpah joined #salt
21:26 vaelen joined #salt
21:27 esharpmajor joined #salt
21:33 west575 joined #salt
21:43 jtang joined #salt
21:44 Tyrm joined #salt
21:49 jgarr how can I delete all denied keys?
21:49 jgarr salt-key -d --include-denied was my first guess but I think I need a glob match
21:50 west575_ joined #salt
21:50 oida joined #salt
21:55 manji joined #salt
21:56 brent_ joined #salt
21:57 jenastar joined #salt
22:01 oida joined #salt
22:01 alexhayes joined #salt
22:02 AdamSewell joined #salt
22:02 garphy`aw joined #salt
22:02 AdamSewell when trying to group machines together in node groups, i'd like to do something such as if (A OR B) AND C - how would i accomplish that?
22:03 hemebond AdamSewell: https://docs.saltstack.com/en/latest/topics/targeting/compound.html
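A hedged example of the compound syntax from that page for "(A or B) and C", using hypothetical grain values:

    # ad hoc from the CLI
    salt -C '( G@role:A or G@role:B ) and G@env:C' test.ping

    # or as a nodegroup in the master config
    nodegroups:
      my_group: '( G@role:A or G@role:B ) and G@env:C'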
22:08 oida joined #salt
22:08 AdamSewell hemebond, yes but it seems what I'm doing isn't working
22:08 AdamSewell even after reading that document, i actually started there.
22:09 hemebond Well it should be possible. Have you pasted the info somewhere?
22:11 sagerdearia joined #salt
22:16 AdamSewell hemebond, scratch that. seems to be working now.
22:16 hemebond ????
22:18 majuscule joined #salt
22:24 jtang joined #salt
22:29 cyborg-one joined #salt
22:29 ninjada joined #salt
22:32 Tyrm joined #salt
22:38 ninjada joined #salt
22:53 ninjada joined #salt
22:53 ninjada joined #salt
22:54 antpa joined #salt
22:57 SteamWells joined #salt
23:01 tonybaloney joined #salt
23:01 cyborg-one joined #salt
23:02 ninjada joined #salt
23:04 orion Hi. Does anyone know why, after clearing /var/cache/salt/minion *on the salt master* I am having intermittent connectivity issues, as displayed here?: https://gist.github.com/centromere/f1bb246ee0d67b2cee009167b3e36277
23:04 hemebond orion: Probably a network issue, no?
23:04 hemebond Where is salt01-test running?
23:05 orion hemebond: EC2, but I want to make it clear that I am running these commands on the salt master itself.
23:05 orion The packets never leave the box.
23:06 hemebond orion: Minions fetch tasks from the master, the master doesn't send the task to the minions.
23:06 hemebond So the master puts the "test.ping" onto the queue for salt01-test to fetch, but salt01-test hasn't fetched it or responded fast enough.
23:07 orion salt01-test /is/ the master.
23:07 hemebond You could try a longer timeout on the command; I think the default is like 5 seconds or something.
23:07 hemebond LOL, oh I see.
23:07 hemebond Sorry about that.
23:07 orion Moreover, the problem is intermittent. That gist I sent you... those commands were sent in a span of 30 seconds.
23:07 ALLmightySPIFF joined #salt
23:07 hemebond Have you put the minion into debug mode and watched its log?
23:08 hemebond What is the networking config for the box? Could it be a routing issue?
23:09 orion I don't see how it could be a routing issue, because the packets should never leave the box.
23:09 Xopher joined #salt
23:09 orion Moreover, why would it work sometimes and not others in a span of 30 seconds?
23:10 hemebond Not sure, just throwing things out there.
23:10 hemebond I don't run a minion on my master so I've not had to debug this kind of issue before.
23:10 orion hemebond: https://gist.github.com/centromere/70c0a5e5e6152af202f9ab04d07b2b73 <-- salt minion on master debug log.
23:11 hemebond Same message a user last night was seeing; that seemed to be a routing issue.
23:11 hemebond Is there any way the minion could be using both the eth0 and loopback interfaces?
23:12 hemebond Which interface is the minion connected to?
23:12 hemebond netstat -anp | grep ':450'
23:12 orion Do the logs I showed you present that information?
23:12 hemebond "The minion failed to return the job information for job"
23:13 hemebond "This is often due to the master being shut down or overloaded."
23:13 orion 2016-07-26 16:09:12,309 [salt.crypt ][DEBUG ][26954] Initializing new AsyncAuth for ('/etc/salt/pki/minion', 'salt01-test', 'tcp://127.0.1.1:4506')
23:13 orion Is the last part of this log line what you're looking for? ^
23:13 hemebond Can you show me 'route -n'?
23:14 hemebond That message I copied suggests the minion was unable to connect to the master.
23:14 orion hemebond: https://gist.github.com/centromere/f418d5b209ea816a9e658b136ceb36cc
23:14 hemebond Since network connectivity shouldn't be an issue, there might be a routing issue.
23:14 johnkeates joined #salt
23:15 hemebond What is the master ID used in the minion config?
23:15 hemebond netstat would show you if the minion and master are only talking over the loopback (127.0.1.1)
23:16 hemebond You could perhaps force it, by editing /etc/hosts, to use the eth0 interface.
23:18 orion hmm
23:19 orion In the minion config I have: "master: salt01-test"
23:19 Edgan Anyone use test.check_pillar with something more complex than the example in https://docs.saltstack.com/en/latest/ref/states/all/salt.states.test.html ?
23:19 orion In /etc/hosts I have: "10.0.3.254 salt01-test.node.consul salt01-test"
23:20 antpa joined #salt
23:20 orion And eth0's inet addr is 10.0.3.254.
23:21 hemebond Was that already there?
23:22 orion Yes.
23:22 hemebond Okay, what if we change it to 127.0.1.1?
23:22 hemebond Can we try that?
23:23 orion Yes, one moment.
23:23 hemebond (and restart the minion and master services of course)
23:24 hemebond I suppose you could, instead, put 127.0.1.1 as the master ID in the minion config.
23:24 Bryson joined #salt
23:25 krymzon joined #salt
23:25 mohae joined #salt
23:29 armguy joined #salt
23:30 orion It seems that the salt-minion isn't connecting to the master.
23:30 hemebond Is there any way salt01-test would resolve to 127.0.1.1?
23:31 hemebond If your /etc/hosts has the eth0 IP then the log should show the minion connecting over that interface.
23:31 orion "PING salt01-test (127.0.1.1) 56(84) bytes of data."
23:32 hemebond So there's no line in /etc/hosts like "127.0.1.1 salt01-test"?
23:32 orion No, that's exactly what's there. If there wasn't, the line I just pasted wouldn't exist.
23:32 nZac joined #salt
23:33 hemebond I don't understand.
23:33 hemebond Oh you just put it in.
23:33 orion I pinged salt01-test on the command line and the ping utility resolved salt01-test to 127.0.1.1.
23:33 hemebond But it wasn't in /etc/hosts before, correct?
23:33 orion That's what you were looking to accomplish.
23:33 orion No.
23:34 ninjada joined #salt
23:34 hemebond The master is listening over 0.0.0.0?
23:34 hemebond You can telnet to localhost 4505 and 4506?
23:34 orion yes
23:34 hemebond 127.0.1.1 4505 and 4506?
23:35 orion tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      29963/python
23:35 orion (Same for 4506)
23:35 orion Yes.
23:35 hemebond But the minion can't connect...
23:36 hemebond What's the output of
23:36 hemebond salt-run manage.versions
23:36 hemebond Oh
23:36 orion Correct. I am able to telnet to ports 4505/4506 on 127.0.0.1, 127.0.1.1, and 10.0.3.254.
23:36 hemebond That won't work properly yet.
23:36 hemebond But the minion has not connected still?
23:37 orion Correct.
23:37 orion Something is strange though.
23:38 orion "salt 'salt01-test' test.ping" is the command I'm executing. "Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/master', 'salt01-test_master', 'tcp://127.0.0.1:4506', 'clear')" is the output.
23:38 orion From the command (master).
23:38 orion Here's the output from the minion: "[salt.transport.zeromq][DEBUG   ][31171] Initializing new AsyncZeroMQReqChannel for ('/etc/salt/pki/minion', 'salt01-test', 'tcp://127.0.1.1:4506', 'clear')"
23:38 orion Why does the master log indicate 127.0.0.1, yet the minion log indicates 127.0.1.1?
23:39 orion /etc/hosts has these entries: 127.0.0.1 localhost; 127.0.1.1 salt01-test salt01-test.node.consul
23:39 hemebond One is possibly using localhost, the other resolving the DNS name to 127.0.1.1
23:39 sjmh joined #salt
23:40 hemebond What if you put 127.0.1.1 as the master ID?
23:40 hemebond I think an IP works too.
23:40 sjmh is there a way w/ the jobs runner to view jobs that DON'T match a filter?
23:44 hemebond orion: Could also try making all those entries 127.0.0.1
23:44 orion Yes, standby.
23:45 subsignal joined #salt
23:45 orion I just made the change. Now, /etc/hosts has only one entry for 127.0.0.1, and it reads as follows: 127.0.0.1 salt01-test salt01-test.node.consul
23:46 orion There is no entry for 127.0.1.1, nor 10.0.3.254.
23:46 nZac joined #salt
23:46 PerilousApricot joined #salt
23:46 orion Restarting master and minion services now.
23:49 orion I have new data.
23:49 orion Sort of.
23:49 orion The minion successfully connects to the master.
23:49 orion I am able to execute "salt 'salt01-test' test.ping" successfully *twice*.
23:49 orion On the third try, it times out.
23:51 hemebond Are your master and minion configs basically default? Any customisations?
23:52 hemebond Is there anything on the server that might stop or throttle traffic?
23:52 hemebond I suppose using localhost should get around that anyway.
23:52 hemebond Oh, what if you do "salt-call test.ping"?
23:53 hemebond And what about "salt 'salt01-test' test.ping -t 300"?
23:53 subsignal joined #salt
23:54 ZachLanich joined #salt
23:54 hemebond What if you just keep doing a regular "ping" test for a minute or two?
23:54 ZachLanich hemebond: Where are you!?!?! :P
23:54 hemebond ZachLanich: Morning :-D
23:54 orion "salt-call test.ping" worked the first time, and when I executed it immediately after, it timed out.
23:54 hemebond Yikes.
23:55 ZachLanich It's 8pm here :P. I decided that with all the issues I'm having, I'm going to halt my effort to get vagrant working and move straight to DO and see if all my issues go away. That will confirm that my issues are with either the Ubuntu image I'm using or with Vagrant/VB
23:55 orion hemebond: https://gist.github.com/centromere/e8d0f5c0f9a67384a15b5c7c3535fc6c
23:55 ZachLanich hemebond: &
23:55 ZachLanich hemebond: ^ *
23:56 ninjada joined #salt
23:56 hemebond orion: Did you test using the eth0 IP address?
23:56 hemebond In the minion config (as the master ID) and also the /etc/hosts file?
23:56 hemebond ZachLanich: Sounds good :-)
23:57 ninjada joined #salt
23:58 flowstate joined #salt
23:58 orion hemebond: New data.
23:59 orion In the gist I just sent you... it worked on the third automated try: SaltReqTimeoutError, retrying. (1/3) ... SaltReqTimeoutError, retrying. (3/3) ... local: True
