
IRC log for #salt, 2015-12-09


All times shown according to UTC.

Time Nick Message
00:03 lompik joined #salt
00:04 marekb joined #salt
00:07 SeVenSiXseVeN joined #salt
00:08 otter768 joined #salt
00:12 adelcast joined #salt
00:17 bhosmer joined #salt
00:18 marekb joined #salt
00:18 boargod2 joined #salt
00:20 pdx6 joined #salt
00:20 otter768 joined #salt
00:20 Shirkdog joined #salt
00:22 Riz joined #salt
00:23 darvon joined #salt
00:24 alvinstarr joined #salt
00:24 renoirb joined #salt
00:26 edrocks__ joined #salt
00:32 mdupont joined #salt
00:33 Zachary_DuBois joined #salt
00:36 AdamSewell joined #salt
00:36 ekristen joined #salt
00:36 JDiPierro joined #salt
00:38 Rumbles joined #salt
00:39 cpowell joined #salt
00:42 bhosmer joined #salt
00:42 akhter joined #salt
00:43 oida joined #salt
00:48 keepguessing joined #salt
00:49 keepguessing Hi I am new to salt. I am debugging a master/minion setup. I have a package's sls, and while running it fails. I tried doing a state.sls check for the package but could not find much info.
00:49 keepguessing I wanted to know if there is a log file or something that I could use to find out why the package installation using salt failed?
00:50 keepguessing I do not right now need help in debugging this particular issue, but I need something specific to this issue.
00:50 keepguessing help?
00:50 keepguessing s/specific/not specific/
00:51 JDiPierro keepguessing: The minion's log file should have the output of why the state failed. Check /var/log/salt/minion on the minion
00:51 keepguessing In other words how to debug failures in salt?
00:51 keepguessing JDiPierro: ah thanks. will check it.
00:51 JDiPierro The output of the failed state should hopefully give you a hint as to what's wrong. If you want to paste the output into pastebin I can take a peek at it.
00:53 whytewolf keepguessing: there are log files in /var/log/salt [both minion and master]. you can also use -l debug and -l trace during your salt call. if you need something that might be happening on the minion, use salt-call -l debug
00:55 keepguessing can I specifically force installation of a package?
00:55 keepguessing with salt-call ?
00:55 keepguessing I will actually try all these out first.
00:55 keepguessing Thanks JDiPierro and whytewolf
00:57 whytewolf keepguessing: salt-call works just like salt 'minion'
00:57 jaybocc2_ keepguessing: salt-call pkg.install 'pkgname'
00:57 whytewolf but from the minion side
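For reference, a minimal sketch of the debugging flow described above (the state name 'packages' is hypothetical):

    # on the minion: apply the state locally with debug logging
    salt-call -l debug state.sls packages

    # inspect the logs on either side
    tail -f /var/log/salt/minion   # on the minion
    tail -f /var/log/salt/master   # on the master

    # force-install a single package from the minion, as jaybocc2_ suggests
    salt-call pkg.install <pkgname>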
01:02 keepguessing ok I ran the pkg install and it ran successfully. I think it did not get installed because some dependency was not satisfied.
01:02 keepguessing Is there a way to print the order of package install?
01:05 edrocks joined #salt
01:10 cberndt joined #salt
01:17 RobertChen117 joined #salt
01:21 pppingme joined #salt
01:37 hasues left #salt
01:37 falenn joined #salt
01:48 cyteen joined #salt
01:50 jesusaurus joined #salt
01:57 dendazen joined #salt
02:00 burp_ joined #salt
02:02 wangofett Does anyone know how to improve the pkg.installed time on Centos7 for selinux when the package is already installed?
02:02 wangofett it takes a whopping 45s to run right now
02:03 wangofett My initial thought is to use an 'unless'
02:07 wangofett (looks like I'm installing policycoreutils-python and -devel)
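A sketch of the 'unless' idea: guard the slow pkg.installed call with a quick rpm query so the state is skipped when the packages are already present (package names taken from the log; the state ID is made up):

    selinux-tools:
      pkg.installed:
        - pkgs:
          - policycoreutils-python
          - policycoreutils-devel
        # skip the whole state when both packages are already installed
        - unless: rpm -q policycoreutils-python policycoreutils-devel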
02:08 cwyse joined #salt
02:08 RobertChen117 salt orchestration uses multiple sls files, one depending on another. But my situation is using only one sls file for a very huge site deployment. How do I use orchestration within the same sls file?
02:13 akhter joined #salt
02:15 wangofett RobertChen117: I'd break up the sls. You can use requires, but that's probably less maintainable
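One way to express the dependency after breaking up the sls, sketched with hypothetical file and target names, is an orchestrate runner file:

    # /srv/salt/orch/site.sls -- run with: salt-run state.orchestrate orch.site
    deploy_db:
      salt.state:
        - tgt: 'db*'
        - sls: db

    deploy_web:
      salt.state:
        - tgt: 'web*'
        - sls: web
        - require:
          - salt: deploy_db    # web only runs after db succeeds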
02:18 oida joined #salt
02:22 edrocks joined #salt
02:23 catpigger joined #salt
02:30 keepguessing jaybocc2_: Thanks for your suggestion earlier. Is there a way to get the order in which packages are installed?
02:36 cberndt joined #salt
02:40 anmolb joined #salt
02:41 _JZ_ joined #salt
02:45 ageorgop joined #salt
02:47 hightekvagabond joined #salt
02:52 evle joined #salt
03:00 tristianc joined #salt
03:02 mapu joined #salt
03:07 marekb joined #salt
03:08 clintberry joined #salt
03:11 anmolb joined #salt
03:11 jgelens joined #salt
03:13 quasiben joined #salt
03:14 clintberry joined #salt
03:18 writtenoff joined #salt
03:19 nZac joined #salt
03:22 oida joined #salt
03:35 p joined #salt
03:37 larsfronius joined #salt
03:37 marekb joined #salt
03:38 cyborg-one joined #salt
03:42 marekb joined #salt
03:47 ekristen joined #salt
03:47 cwyse joined #salt
03:49 RobertChen117 joined #salt
03:52 racooper joined #salt
03:55 cyborglone joined #salt
03:56 clintberry joined #salt
03:59 clintber_ joined #salt
03:59 hightekvagabond1 joined #salt
03:59 dendazen joined #salt
04:02 hightekvagabond joined #salt
04:04 jaybocc2 joined #salt
04:10 colegatron joined #salt
04:17 hightekvagabond joined #salt
04:24 pirulo joined #salt
04:25 pirulo Does any one know how to get the boot lun 0 of the multipath using salt?
04:28 AdamSewell joined #salt
04:30 cwyse joined #salt
04:33 JDiPierro joined #salt
04:33 ramteid joined #salt
04:38 brianfeister joined #salt
04:40 brianfei_ joined #salt
04:43 hightekvagabond joined #salt
04:43 RobertChen117 joined #salt
04:46 Vaelatern joined #salt
04:53 hightekvagabond joined #salt
04:54 cwyse joined #salt
04:58 hightekvagabond joined #salt
05:06 bhosmer joined #salt
05:08 malinoff joined #salt
05:08 burp_ joined #salt
05:09 ramteid joined #salt
05:11 burp_ joined #salt
05:11 rdas joined #salt
05:30 hightekvagabond1 joined #salt
05:43 nZac joined #salt
05:48 ajw0100 joined #salt
05:49 hightekvagabond joined #salt
05:53 oida joined #salt
06:01 calvinh joined #salt
06:08 Bryson joined #salt
06:09 tehsu joined #salt
06:09 ITChap joined #salt
06:09 nidr0x joined #salt
06:16 foundatron joined #salt
06:16 RobertChen117 joined #salt
06:19 hightekvagabond joined #salt
06:22 aw110f joined #salt
06:24 Bryson joined #salt
06:28 brianfeister joined #salt
06:32 armguy joined #salt
06:38 hightekvagabond joined #salt
06:44 cpowell joined #salt
06:46 jaybocc2 joined #salt
06:52 hightekvagabond joined #salt
06:52 RobertChen117 joined #salt
07:04 hightekvagabond joined #salt
07:07 keimlink joined #salt
07:10 jaybocc2 joined #salt
07:17 hightekvagabond joined #salt
07:20 AndreasLutro joined #salt
07:26 lemur joined #salt
07:28 baweaver_ joined #salt
07:31 hightekvagabond1 joined #salt
07:33 Sucks joined #salt
07:33 felskrone joined #salt
07:35 hightekvagabond joined #salt
07:37 hightekvagabond joined #salt
07:39 larsfronius joined #salt
07:41 acsir joined #salt
07:41 stooj joined #salt
07:42 ITChap joined #salt
07:43 KermitTheFragger joined #salt
07:45 cpowell joined #salt
07:47 lemur joined #salt
07:48 baweaver_ joined #salt
07:50 oida joined #salt
07:59 solidsnack joined #salt
08:03 elsmo joined #salt
08:06 Fabbe joined #salt
08:11 jamesp9 joined #salt
08:11 joren_ joined #salt
08:12 rdas joined #salt
08:17 brianfeister joined #salt
08:17 cberndt joined #salt
08:24 ITChap joined #salt
08:25 slav0nic joined #salt
08:26 cwyse joined #salt
08:28 elsmo joined #salt
08:30 job how do people deal with IPv6-only hosts? I cannot access repo.saltstack.com via IPv6
08:38 eseyman joined #salt
08:39 brianfeister joined #salt
08:42 schinken An installed package (unifi) requires the JAVA_HOME env var set to run. The package provides a systemd unit file. How would you add "setting environment" to your saltfile?
08:42 schinken Append it to the unit file? environ.setenv failed - I guess that's only relevant to the minion
08:43 Norrland schinken: isn't that handled in the unifi.service file for systemd?
08:43 schinken Nope: https://gist.github.com/schinken/277f3f23edbccc19e60c
08:44 schinken ah. The /usr/lib/unifi/bin/unifi.init has a function which should set the java_home but doesn't ...
08:44 schinken I must be blind ... hm
08:44 Norrland which dist?
08:45 schinken debian jessie
08:45 Norrland ok
08:46 Norrland afaik that was set in the /etc/init.d/unifi file. But not sure what they've done when/if transitioning to systemd.
08:47 AndreasLutro huh. some pillar data is present if I do `salt minion pillar.items` but not if I do `salt minion pillar.get pillar_name`
08:47 schinken https://gist.github.com/schinken/37538e130729c2b3b358
08:47 schinken Norrland: looks like this function fails
08:47 schinken damnit
08:49 schinken it's looking for openjdk.. interesting
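For the original question, a hedged sketch of setting JAVA_HOME via a salt-managed systemd drop-in (the JDK path is an assumption, and this presumes a separate service state with the ID 'unifi'):

    unifi-java-home:
      file.managed:
        - name: /etc/systemd/system/unifi.service.d/java.conf
        - makedirs: True
        - contents: |
            [Service]
            Environment=JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
        # a systemctl daemon-reload is also needed before the restart picks this up
        - watch_in:
          - service: unifi    # restart unifi when the drop-in changes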
08:52 vvoody joined #salt
08:57 om joined #salt
08:58 RobertChen117 joined #salt
09:01 Grokzen joined #salt
09:06 Qlawy joined #salt
09:07 GreatSnoopy joined #salt
09:07 Qlawy Hi guys
09:08 Qlawy I have a repository in gitfs. Can I have formulas in subdirectories too?
09:08 bhosmer_ joined #salt
09:08 Qlawy or 1 directory in git root = 1 formula?
09:13 Bryson joined #salt
09:16 chiui joined #salt
09:18 Rumbles joined #salt
09:18 TyrfingMjolnir joined #salt
09:24 jaybocc2 joined #salt
09:26 armyriad joined #salt
09:28 Joren__ joined #salt
09:28 oida joined #salt
09:32 shiin joined #salt
09:34 jhauser joined #salt
09:34 s_kunk joined #salt
09:34 s_kunk joined #salt
09:35 shiin When I restart my salt-master and try to test.ping a host right after, it results in "Minion did not return" - Is there a way to avoid having to wait 15 seconds before trying something?
09:36 CaptainMagnus joined #salt
09:36 hemebond joined #salt
09:37 joren joined #salt
09:37 hemebond Is repo.saltstack.com a bit broken?
09:38 thalleralexander joined #salt
09:39 hemebond nvm, forcing the upgrade worked
09:39 hemebond Spoke too soon.
09:44 tampakrap joined #salt
09:45 keimlink joined #salt
09:46 om joined #salt
09:46 cpowell joined #salt
09:47 nZac joined #salt
09:49 AlberTUX1 joined #salt
09:50 babilen hemebond: forcing?
09:50 Qlawy left #salt
09:51 hemebond babilen: Didn't work so I downloaded and installed the deb manually before trying the salt-minion install again. Worked that way.
09:51 babilen That sounds bad and shouldn't have been necessary
09:51 AlberTUX2 joined #salt
09:52 * babilen wonders if there will ever be properly maintained packages now that SaltStack alienated all packagers
09:52 om joined #salt
09:53 rdas joined #salt
09:55 hemebond One of my minions is in a bad state. The 2015.8.1+ds-1 package is gone and I need to reinstall it.
09:55 hemebond Are they removing old versions from the repo?
09:55 babilen Debian repositories only contain a single version (by design)
09:56 hemebond Really?
09:56 hemebond That's... huh.
09:56 babilen If you want to support multiple "branches" you'd have to create multiple packages (with different names), suites or entire repositories
09:57 babilen Well, in Debian you have three distributions (stable, testing and unstable) and each only contains a single version of a package. The underlying assumption is that you always want the newest version, as that version has been properly tested, and that, if you are running stable, only security updates are being made available
09:57 babilen The "properly tested" part is crucial here
09:58 hemebond That seems odd since you can pin versions in apt and stuff.
09:59 hemebond And specify a specific version.
09:59 favadi joined #salt
09:59 babilen You can do that, but it is being used to ensure that a package is not being updated (and then you should have just set it on hold)
09:59 babilen Pinning is way too often abused
10:00 LotR well, if you never run apt-get clean, you can find the .deb you had installed in /var/cache/apt/archives
10:00 babilen Sure
10:01 hemebond oh, there it is!
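The recovery path outlined above (cached .deb, version pin, hold), as shell; the cached filename is an assumption based on the version mentioned:

    # reinstall the exact version from the local apt cache
    dpkg -i /var/cache/apt/archives/salt-minion_2015.8.1+ds-1_all.deb

    # or ask apt for a specific version, then hold it so it is not upgraded
    apt-get install salt-minion=2015.8.1+ds-1
    apt-mark hold salt-minion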
10:01 fredvd joined #salt
10:01 babilen hemebond: Were you simply looking for the .deb?
10:02 hemebond Yeah, I can't upgrade because the install is in a bad state, but I can't remove it because it wants me to reinstall first.
10:02 elsmo joined #salt
10:02 babilen Can you be more specific about "bad state" ?
10:02 hemebond That's all Apt told me.
10:03 nafg joined #salt
10:03 hemebond For some reason the source for the minion was "/var/lib/dpkg/status"
10:03 babilen Okay, that sounds as if dependencies are missing. What does "aptitude search '~b'" give you? What about "dpkg --configure -a" ?
10:03 hemebond So I'm not sure what happened there.
10:04 babilen That just means that it is being installed locally (and that the status file knows about that version)
10:04 babilen Ah, and include the output of "apt-cache policy" and "apt-cache policy salt-minion"
10:05 hemebond Sorry, before you'd even made that first comment I'd reinstalled and purged.
10:05 oida joined #salt
10:05 babilen Okay
10:05 babilen There is little anybody can do to prevent problems like that if it isn't even clear/reported what the problem is
10:05 hemebond I suspect my automated installation process was a bit off when I created this server.
10:06 hemebond I suspect my fault somewhere.
10:06 hemebond Oh, and the repo issue (size mismatch) was a different minion.
10:07 hemebond These minions seem to be in a bit of a state so I'm trying to tidy them up.
10:07 cyborg-one joined #salt
10:09 TyrfingMjolnir joined #salt
10:09 catpiggest joined #salt
10:10 colegatron joined #salt
10:12 jaybocc2 joined #salt
10:13 joren joined #salt
10:13 mekstrem joined #salt
10:18 yomilk joined #salt
10:25 denys joined #salt
10:26 DanyC joined #salt
10:26 sfxandy joined #salt
10:26 larsfronius joined #salt
10:27 DanyC Hi all, is anyone able to explain/provide a tip to understand how salt-cloud auto-accepts the minion's key? I don't have any auto-accept on the salt master nor any reactors defined
10:28 om joined #salt
10:28 oida joined #salt
10:30 LondonAppDev joined #salt
10:31 DanyC does Salt-Cloud have an in-built reactor ?
10:31 larsfron_ joined #salt
10:35 shiin joined #salt
10:37 om joined #salt
10:40 giantlock joined #salt
10:40 N-Mi_ joined #salt
10:43 TyrfingMjolnir joined #salt
10:45 av_ joined #salt
10:46 bhosmer joined #salt
10:47 cpowell joined #salt
10:47 oida joined #salt
10:52 linjan joined #salt
10:52 thalleralexander joined #salt
10:59 amcorreia joined #salt
11:02 Edgan joined #salt
11:02 ericof joined #salt
11:10 cyteen joined #salt
11:15 oida joined #salt
11:19 yomilk joined #salt
11:31 DanyC joined #salt
11:33 Konkas joined #salt
11:33 s_kunk joined #salt
11:33 s_kunk joined #salt
11:48 dendazen joined #salt
11:48 RobertChen117 joined #salt
12:02 rotbeard joined #salt
12:06 mr-op5 joined #salt
12:08 denys joined #salt
12:14 DanyC how can you guys check in a bash script if there are keys in an unaccepted state? i know "salt-key -l <arg>" will return the info but i can't base it on the exit code
12:19 mattiasr joined #salt
12:20 DanyC you can even do this bit: "salt-key -l pre --out raw" will return {'minions': ['salt-master']}, but does anyone know how to check in bash if there is a value associated with it?
12:24 jaybocc2 joined #salt
12:24 giantlock joined #salt
12:25 favadi joined #salt
12:29 AndreasLutro DanyC: you could just grep for it I guess
12:32 falenn joined #salt
12:34 AndreasLutro DanyC: if you use salt-cloud on the master, I think it pregenerates the keys on the master, accepts the key, then sends it to the minion
12:35 akhter_1 joined #salt
12:35 DanyC AndreasLutro: correct that is the case, i tested and saw using debug
12:39 jaybocc2 joined #salt
12:44 keimlink joined #salt
12:46 Joren joined #salt
12:48 cpowell joined #salt
12:51 quasiben joined #salt
12:52 mattiasr joined #salt
12:55 rb_ joined #salt
12:58 DanyC AndreasLutro: or for the search bit you could do this: salt-key -l acc --out json | jq '.minions[]' 2>/dev/null
12:59 DanyC AndreasLutro: or salt-key -l pre --out json | jq '.minions[]' 2>/dev/null. If the output is null then don't care; otherwise it will return the key value, which is the minion_id
12:59 tkharju joined #salt
13:00 DanyC AndreasLutro: i was trying to get jq working for try/catch error but didn't work
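A small bash sketch of the check DanyC describes, keyed off the 'minions' field shown above (jq required):

    # non-empty output means at least one key is pending acceptance
    pending=$(salt-key -l pre --out json | jq -r '.minions[]' 2>/dev/null)
    if [ -n "$pending" ]; then
        echo "unaccepted keys: $pending"
        exit 1
    fi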
13:07 oida joined #salt
13:13 toastedpenguin joined #salt
13:23 evle joined #salt
13:24 p joined #salt
13:25 mattiasr joined #salt
13:33 DanyC joined #salt
13:34 DanyC joined #salt
13:36 akhter joined #salt
13:36 scoates joined #salt
13:38 dendazen joined #salt
13:40 toastedpenguin joined #salt
13:42 anotherZero joined #salt
13:45 marekb joined #salt
13:48 edrocks joined #salt
13:49 oravirt joined #salt
13:51 subsignal joined #salt
13:52 oravirt joined #salt
13:53 favadi joined #salt
13:56 lompik joined #salt
13:57 marsdominion joined #salt
13:57 marsdominion joined #salt
13:59 tpaul joined #salt
14:02 andrew_ joined #salt
14:03 traph joined #salt
14:03 traph joined #salt
14:03 fredvd joined #salt
14:04 oravirt joined #salt
14:05 oravirt joined #salt
14:06 Mandorath joined #salt
14:07 JDiPierro joined #salt
14:10 flowstate_ joined #salt
14:10 mattiasr joined #salt
14:10 flowstate_ anyone have a sec to help a noob out with initial state / topfile setup?
14:11 flowstate_ I'm trying to add a topfile in /srv/salt, which does nothing but reference my tested state file '/srv/salt/tomcat.sls'
14:12 flowstate_ however, I keep getting: Comment: State '*.tomcat' was not found in SLS 'top' Reason: '*.tomcat' is not available.
14:14 flowstate_ summary and topfile contents can be found https://gist.github.com/flowstate/5868c17aa2c0bc908d54
14:14 oravirt joined #salt
14:17 morissette joined #salt
14:17 flowstate_ this is a better link, included the basic shape of 'tomcat.sls' https://gist.github.com/flowstate/9152d34082114c992825
14:17 flowstate_ any help would be much appreciated
14:17 winsalt joined #salt
14:19 giantlock joined #salt
14:21 tmclaugh[work] joined #salt
14:21 brianfeister joined #salt
14:23 oravirt joined #salt
14:24 flowstate_ aaaaaand nevermind
14:24 flowstate_ I was trying to call top using state.sls, which I'm assuming doesn't bring in all your other states
14:24 flowstate_ state.highstate works just fine
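For reference, the shape that works: the top file maps targets to state files and is applied with state.highstate (calling state.sls top instead makes salt look for a state named 'top'):

    # /srv/salt/top.sls
    base:
      '*':
        - tomcat    # applies /srv/salt/tomcat.sls

    # then, from the master:
    #   salt '*' state.highstate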
14:25 protoz joined #salt
14:26 anmolb joined #salt
14:27 racooper joined #salt
14:27 numkem joined #salt
14:27 mpanetta_ joined #salt
14:27 numkem joined #salt
14:28 andrew_v joined #salt
14:28 oravirt joined #salt
14:28 Tanta joined #salt
14:30 oravirt joined #salt
14:35 winsalt_ joined #salt
14:37 oravirt joined #salt
14:40 TyrfingMjolnir joined #salt
14:41 marekb joined #salt
14:42 _JZ_ joined #salt
14:43 justanotheruser joined #salt
14:45 perfectsine joined #salt
14:46 dyasny joined #salt
14:47 AlberTUX1 joined #salt
14:48 _JZ_ joined #salt
14:49 zmalone joined #salt
14:49 oravirt joined #salt
14:50 justanotheruser joined #salt
14:51 jaybocc2 joined #salt
14:52 oravirt joined #salt
14:59 oravirt joined #salt
15:04 malinoff joined #salt
15:04 JDiPierro joined #salt
15:06 jaybocc2 joined #salt
15:10 bhosmer joined #salt
15:11 bhosmer_ joined #salt
15:11 marsdominion joined #salt
15:11 oravirt joined #salt
15:11 keimlink joined #salt
15:14 oravirt joined #salt
15:17 timoguin joined #salt
15:19 oravirt joined #salt
15:19 marekb joined #salt
15:19 nZac joined #salt
15:20 oravirt joined #salt
15:25 mattiasr joined #salt
15:26 hightekvagabond joined #salt
15:31 bhosmer joined #salt
15:31 colegatron joined #salt
15:31 bhosmer joined #salt
15:31 colegatron hi
15:32 mapu joined #salt
15:34 StolenToast I'm trying to glob a list of minions like so: "salt 'qcd10g0[105|303]*'"
15:34 StolenToast but it targets every node beginning with qcd10g0 instead.  Is the string wrong?
15:35 StolenToast wait I think I might know, do the brackets suggest an optional arg as they do in regex?
15:35 StolenToast The primary problem is they all end in the domain name and I'd like to not type that out every time
15:36 cswang__ joined #salt
15:37 zmalone If all your hosts have the same domain, you could drop it from your minion config and refer to them by short name.
15:37 ericof_ joined #salt
15:37 StolenToast well the minions get their hostname automagically from dns
15:37 StolenToast so I would have to SET their hostnames to the shortened version, I guess
15:37 zmalone I think you want (105|303) though
15:38 StolenToast that's the regex version, which I tried also and got the same result
15:38 StolenToast actually that targets no minions
15:38 lothiraldan joined #salt
15:38 zmalone "qcd10g0105*,qcd10g0303*" may work
15:39 zmalone but yeah, it sounds like regex issues,
15:39 StolenToast I know it will work but it defeats the point of the globbing.  I actually have about a dozen nodes to do
15:39 lumtnman joined #salt
15:39 StolenToast well I am one stable version old, I think I should upgrade
15:39 oravirt joined #salt
15:45 protoz joined #salt
15:45 DammitJim joined #salt
15:46 akhter joined #salt
15:46 marsdominion joined #salt
15:46 marsdominion joined #salt
15:47 honestly The * makes it optional if you're using regex matching.
15:47 protoz joined #salt
15:48 StolenToast specifically the * being there at the end? What I want is to match the hosts' trailing domain names
15:48 whytewolf shouldn't that be [105,303] not [105|303]
15:49 StolenToast yeah I mistyped, but what you just mentioned is what I executed
15:49 StolenToast salt "qcd10g0[105,303]*" state.highstate
15:50 protoz joined #salt
15:50 StolenToast replacing the * with the domain name results in no minions
15:51 whytewolf just tried it here. worked fine for me https://gist.github.com/whytewolf/99719e72140e9ca81174
15:51 whytewolf there must be something else going on
15:51 StolenToast I wonder if this is some dns incongruity I don't know about, I'll look into that
15:52 whytewolf StolenToast: do they show up in salt-key and salt-run manage.up
15:52 bhosmer joined #salt
15:52 StolenToast lemme check, there should be no issues there as I am running in open mode atm
15:52 whytewolf reason i am asking is targeting doesn't use dns
15:53 StolenToast yeah but the hosts get their minion names from dns, don't they?
15:53 Brew joined #salt
15:53 StolenToast or at least they do here, no editing in the minion config
15:53 whytewolf yes. but they don't have to
15:53 whytewolf and that doesn't mean that dns is used for targeting
15:53 StolenToast but the minion name set by DNS is what's used
15:53 StolenToast at least that's what I think is going on here
15:54 whytewolf so you think they don't have the names you think they should
15:54 whytewolf which would show up in salt-keys and manage.up
15:54 StolenToast I did but I don't think so any more because a specific, single-node state just failed
15:54 StolenToast salt might be borked right now
15:55 zmalone salt-key -L and salt "*" test.ping should give you a fuller picture of what is going on.
15:55 StolenToast oooh I got it, it's stupid
15:55 StolenToast it was what I suspected, an extra little nugget in the domain name for these specific hosts
15:55 StolenToast thanks for the input whytewolf and zmalone
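For reference, the matcher variants discussed above (the full domain is hypothetical). In a default glob, [...] is a character class matching single characters, so explicit lists or PCRE are the usual escape hatches:

    # list matching: exact minion IDs, comma-separated
    salt -L 'qcd10g0105.example.com,qcd10g0303.example.com' test.ping

    # PCRE matching: -E treats the target as a real regex
    salt -E 'qcd10g0(105|303)' test.ping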
15:58 pzipoy joined #salt
16:00 ggoZ joined #salt
16:01 sdm24 joined #salt
16:03 VSpike_ joined #salt
16:05 cpowell joined #salt
16:05 spuder joined #salt
16:06 linjan_ joined #salt
16:08 Sucks joined #salt
16:08 bfoxwell joined #salt
16:09 illern joined #salt
16:09 cpowell joined #salt
16:10 Heartsbane joined #salt
16:10 Heartsbane joined #salt
16:11 Dev0n I'm trying to file.replace with a require for file.touch if that file doesn't exist; how do you actually require a nested function, since most examples say to use the main ID?
16:13 whytewolf Dev0n: can't have more than one file.* item in a single state anyway.
16:13 Dev0n ahh whytewolf, that might be the case
16:13 Dev0n just came across this https://groups.google.com/forum/#!topic/salt-users/ONes7jCbEa0
16:14 winsalt_ does anyone know how to clear out minion keys? There's nothing in the master pki directories, but the minion still says it's cached somewhere
16:14 Dev0n and I guess there isn't a way to tell file.replace to create the file if it doesn't exist
16:14 Dev0n nothing in docs
16:14 Sketch joined #salt
16:15 mullein joined #salt
16:16 whytewolf Dev0n: most people tend to use file.managed so it gets more love. file.replace with the option to create a file might be needed. you could put in a ticket.
16:16 oida joined #salt
16:16 brianfeister joined #salt
16:16 Dev0n great whytewolf, thanks
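A sketch of a two-state workaround, using file.managed with replace: False to create the file only when it is missing (path and pattern are made up):

    ensure-file:
      file.managed:
        - name: /etc/example.conf
        - replace: False        # create if absent, never touch existing content

    edit-file:
      file.replace:
        - name: /etc/example.conf
        - pattern: '^key=.*'
        - repl: 'key=new-value'
        - require:
          - file: ensure-file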
16:19 beardedeagle joined #salt
16:22 lumtnman joined #salt
16:22 AdamSewell joined #salt
16:23 emaninpa joined #salt
16:23 murrdoc joined #salt
16:24 shaggy_surfer joined #salt
16:24 mapu joined #salt
16:27 nZac joined #salt
16:28 spuder joined #salt
16:28 mapu joined #salt
16:29 flowstate_ joined #salt
16:29 flowstate_ hi
16:31 DanyC joined #salt
16:33 hightekvagabond joined #salt
16:33 DanyC i'm trying to install salt on a CentOS 6.7 using https://repo.saltstack.com/yum/rhel6/ and i get this error on python-zmq-14.5.0-2.el6.x86_64, which requires libzmq.so.4()(64bit)
16:33 DanyC is something screwed on the repo ?
16:34 zmalone I think people were reporting bad manifests on that repo, let me hunt down the issue
16:34 Ahlee DanyC: python-zmq doesn't have an include for zeromq, ensure you install it too
16:34 zmalone https://github.com/saltstack/salt/issues/29477
16:34 saltstackbot [#29477]title: Metadata of repo.saltstack.com/yum/rhel6/ doesn't match packages in repo | Hello there,...
16:35 zmalone sorry, s/manifest/metadata/g
16:35 DanyC Ahlee: until a few days ago i was installing the same way and it was working, since i'm using the bootstrap-salt.sh script
16:35 Ahlee zmalone: is right
16:35 Ahlee er
16:36 Ahlee DanyC: zmalone is right
16:36 DanyC Ahlee: hence i've done nothing except re-run the same stuff over and over again.
16:36 Ahlee welcome to "Yet another bug in saltstack"
16:36 zmalone a few days ago was when 2015.8.3 was released
16:38 protoz joined #salt
16:39 jhauser_ joined #salt
16:40 DanyC Ahlee: zmalone thanks guys
16:41 DanyC i'm all in tears as i spent a few good hours understanding/reviewing the code/AWS
16:41 DanyC damn boy...why on earth is so hard .....ggrrrrr
16:41 zmalone Keeping an eye on https://github.com/saltstack/salt/issues will save you a lot of heartburn with Salt
16:42 zmalone (unfortunately)
16:42 DanyC zmalone: trust me, i'm doing that everyday (instead of reading the news to see what is happening in the world, i'm reading that list)
16:44 oida joined #salt
16:44 amcorreia joined #salt
16:53 tristianc_ joined #salt
16:55 whytewolf zmalone: unfortunately the issues stem from a growing-pains type issue. salt has started to take off, and with it a ton of PRs and not enough people paying attention to what is getting accepted. they only see that it passed the tests. not checking if it breaks backwards compat. or even checking that the tests are actually verified, and testing everything they should.
16:55 streetmapp joined #salt
16:56 streetmapp hi folks, using salt-ssh and executing a state against multiple minions. however, i'm failing because i'm running out of space on the master in /tmp. Is there a way to change the directory the files there get written to?
16:56 whytewolf the repo thing is just an extension of that. it needed to be done. but was rushed out the door. and is incomplete.
16:59 whytewolf maybe salt needs to hire more ops [of the non-dev kind] to actually verify how these systems are being set up.
17:00 DanyC whytewolf: and i guess that is the case indeed. today i've changed my title into _monkey pipeline man_ trying to catch these issues
17:01 lothiraldan joined #salt
17:03 * whytewolf offers his years of experience to salt. all i ask is remote work and the same amount of pay i currently make ;)
17:04 murrdoc :D
17:04 murrdoc whytewolf:  sounds like forrest
17:05 whytewolf yeah well forrest isn't dumb. there is no reason for any of these positions to need to be in an office.
17:06 whytewolf streetmapp: sorry. unfortunately I don't use salt-ssh, so I have no experience with it to help.
17:07 whytewolf the dumbest position i ever held was at IGT. I came into an office. to watch servers that could not live in the US due to regulations
17:08 jimklo joined #salt
17:08 flowstat3 joined #salt
17:09 flowstat3 hello all
17:09 flowstat3 wow, getting an actual irc client to work on el capitan is a PITA
17:10 AlberTUX2 joined #salt
17:10 whytewolf you tried? I just loaded an ssh client and used a linux server
17:11 DanyC flowstat3: i'm running limeChat on El capitan w/o any prob. (it was installed on previous version of OS X)
17:12 flowstat3 hmm, I may try limeChat
17:12 flowstat3 out of curiosity, is anyone using salt in AWS?
17:13 flowstat3 erm, I mean anyone currently here, not like in the world
17:13 geekatcmu flowstat3: I've been using irssi across 4 versions of OSX now, including El Capitan, without a single problem.
17:13 geekatcmu ANd if you like GUIs, Textual is nice.
17:13 geekatcmu (not free, though)
17:13 flowstat3 welp, maybe it's just me then, haha
17:14 geekatcmu I've looked at LimeChat.  It doesn't fit my workflow, but it doesn't seem to suck much, either.
17:14 akhter joined #salt
17:14 protoz joined #salt
17:14 geekatcmu whytewolf: You know IGT is essentially gone now, right?
17:15 geekatcmu They got bought by some European outfit.
17:15 whytewolf kind of. I got out like a month before that merger took place.
17:15 geekatcmu good for you
17:15 whytewolf and they do still exist. a lot of the old guard are still there
17:15 geekatcmu I have a couple friends at the Reno location who survived the transition.
17:18 flowstat3 if anyone is using salt in the public cloud, I'm curious as to your workflow for deployment / setting grains on minions
17:18 akhter flowstat3: In what sense?
17:18 akhter I'm using it in cloud.
17:18 flowstat3 I just started at a semi-mature startup as the first devOps guy, so it's all greenfield
17:18 akhter Not exacty public.
17:19 flowstat3 and the first thing I'm tackling is configuration with salt
17:19 flowstat3 and they're currently using Jenkins for deployment
17:19 flowstat3 now, I guess I could set jenkins up as a salt master and do setvals on minions for things like software versions, etc
17:19 AlberTUX1 joined #salt
17:19 akhter Does salt have a plugin for jenkins?  We use teamcity here.
17:19 whytewolf flowstat3: http://ryandlane.com/blog/
17:20 whytewolf also hang out in here. Ryan Lane stops by now and then. and pretty much wrote the book on running salt in aws [masterless]
17:20 flowstat3 akhter: I haven't gotten to that step yet, I'm in early days, just setting up environments and testing salt with vagrant
17:21 flowstat3 interesting... masterless
17:21 flowstat3 do you still get the same functionality in terms of remote execution?
17:21 tpaul joined #salt
17:21 akhter I don't run masterless since I need control from a central location for Teamcity.
17:21 whytewolf I'm not one to speek on the issue.
17:21 flowstat3 k
17:21 flowstat3 I'll go do some digging
17:22 akhter Same, I've never ran masterless.
17:22 whytewolf I have thought about something like it for my next openstack setup. master/minion for the openstack setup itself, but going masterless for the minions inside it
17:22 akhter I can see it be useful for small environments.
17:23 flowstat3 yeah, I'm in a mixed environment
17:23 whytewolf akhter: Ryan lane works at Lyft
17:23 whytewolf it is something like 30k nodes IIRC
17:23 flowstat3 there is a monolithic app that isn't at scale, and then there're a ton of microservices with big horizontal scale
17:23 flowstat3 so I'll definitely see both sides of it
17:23 akhter Wow, yeah, I don't have nearly that many nodes.
17:24 akhter Not sure how you would target a wide variety of minions/agents with masterless.
17:24 akhter I'd love to see that config.
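For the curious, the core of a masterless setup is small; a sketch (the paths are the stock defaults, not from the log):

    # /etc/salt/minion
    file_client: local
    file_roots:
      base:
        - /srv/salt

    # then states are applied locally, with no master involved:
    #   salt-call --local state.highstate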
17:24 flowstat3 exactly
17:24 flowstat3 I'm super curious
17:24 flowstat3 because at that scale, he's definitely had to do so
17:24 whytewolf https://www.youtube.com/watch?v=94TSWxyapuU
17:24 flowstat3 I was going to go HA master in each VPC, salt-cloud for deployment, etc etc
17:25 flowstat3 welp, that's just the perfect link
17:25 flowstat3 tyvm
17:25 whytewolf yavw
17:26 shaggy_surfer joined #salt
17:26 jeffspeff is there a command i can run from a minion to check if it is currently running a state or anything?
17:26 shaggy_surfer joined #salt
17:27 akhter saltutil.is_running?  I think that'll check highstate only though.
17:27 akhter But do check the modules within saltutil.
17:29 jeffspeff akhter, is that something that runs locally on the minion or is that a command you run from the master?
17:29 whytewolf salt-call saltutil.is_running is on the minion
17:29 akhter Not sure if salt-call has them, I only use them from the master.
17:29 flowstat3 you should be able to run it from either place, using salt-call on the minion, right?
17:29 flowstat3 ...
17:29 akhter There you go, salt-call can use it.
17:30 jeffspeff ok, can anyone else say whether is_running will only show highstate or will it show if the minion is busy with anything?
17:30 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.is_running
17:30 whytewolf it requires knowing the name of what would be running, in order to check whether it is running
17:31 akhter https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.is_running
17:31 akhter whytewolf: You beat me to it.
17:31 jeffspeff thanks
17:31 whytewolf saltutil.running might be better as it will list everything salt is currently doing
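The two calls side by side, as whytewolf and akhter describe them:

    # list every job salt is currently running on this minion
    salt-call saltutil.running

    # check whether a specific function is running (you must name it)
    salt-call saltutil.is_running state.highstate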
17:31 notnotpeter joined #salt
17:31 colegatron joined #salt
17:33 akhter This video is pretty awesome. I take my statement back, when using heavy autoscaling, masterless is probably the way to go.
17:34 akhter Err, terrible english "Autoscaling heavily"
17:35 whytewolf akhter: actually even if not autoscaling. honestly you avoid a lot of headache compared with multimaster for scale. most of the downsides of multimaster nullify the benefits of the master/minion setup
17:36 akhter When you say multi-master, you mean using syndic?
17:36 akhter I've still yet to fully utilize that...
17:36 whytewolf actually i mean having dual masters.
17:37 whytewolf HA masters
17:37 whytewolf but syndic does have some of the same problems
17:42 job is there a way to use a grain from another server to generate a piece of config on 'this' server?
17:42 Dev0n if I want cmd.run to stop throwing on errors, using output_loglevel: silent is the way to go right?
17:42 whytewolf job: mine
17:43 flowstat3 yeah, I'm going to model my approach after this
17:43 job are all grains automatically minable?
17:43 whytewolf job: https://docs.saltstack.com/en/latest/topics/mine/
17:43 flowstat3 including some of the culture points, as I'm tasked with setting that up
17:43 whytewolf job: no, you have to define your grains
17:43 job thank you
17:43 flowstat3 and I want our devs to be comfortable with their own infrastructure
17:44 flowstat3 I smell a culture meeting.
17:44 whytewolf flowstat3: yeah. it is defintly a good way of keeping devs comfortable with what is going on.
17:44 flowstat3 I just moved from dev to dev ops, at my former company (bazaarvoice) where each team had a dev ops person
17:44 flowstat3 to a startup where I'm the 8th engineer
17:45 flowstat3 so ... a bit of culture shock
17:45 flowstat3 and I want to make sure I set expectations and culture so that as we grow things don't get out of control
17:45 xmj "no... don't ssh into the production servers.. we have scripts to do that"
17:46 oida joined #salt
17:46 murrdoc joined #salt
17:46 flowstat3 well it's interesting, because the company was moving along quite well for 2 years before I got here, so at least the full-stack devs are happy doing ops stuff
17:46 whytewolf xmj: personally I keep a dream ideal of: delete and recreate the server if logins >= 1
17:47 xmj keep dreaming
17:47 xmj :p
17:47 flowstat3 yeah, that's a funky thing here: some of our servers are cattle, some are pets
17:47 whytewolf actually on my own stuff i have it.
17:47 xmj when would you ever have >1 user on your own stuff
17:47 xmj anyway, OT :p
17:48 whytewolf lol. I'm an old op. so I had to train myself to not do that
17:48 whytewolf and I have a couple of servers it doesn't apply to.
17:48 flowstat3 I've got the opposite perspective, being a dev up until about 6 months ago
17:49 Fiber^ joined #salt
17:50 whytewolf xmj: my own stuff is public and i have friends in IT. so yes it is possible that my user load COULD be higher than 1
17:50 whytewolf as i give them access to some projects
17:50 hightekvagabond1 joined #salt
17:51 whytewolf [the nice thing about owning my own personal openstack cluster]
17:51 xmj uh huh!
17:53 Bryson joined #salt
17:54 whytewolf the bad part is the electric bill
17:54 ekkelett joined #salt
17:54 flowstat3 I'm super close to having our VP talked into OpenStack within the next year
17:54 flowstat3 we just need to nail down our scaling projections
17:54 whytewolf flowstat3: what ever you do. DO NOT GO MIRANTIS!
17:55 flowstat3 I will take that at face value, because you're not the first
17:55 job how do i view a list of everything that can be mined?
17:55 cyteen joined #salt
17:56 whytewolf unfortunately I can't give tips on better-supported openstack. although i might lean towards HP or redhat
17:57 job there is grains.ls and grains.items, maybe something liket that for 'mine'?
17:57 whytewolf job: unfortunately I do not know of anything that grabs mine data as freely as items does for grains and pillars
17:58 job so i have this in a pillar:
17:58 job mine_functions:
17:58 job grain.get:
17:58 job - fqdn_ip4
17:58 murrdoc is the file called mine.conf
17:58 job - fqdn_ip6
17:58 murrdoc (amirite)
17:59 job how will i know what the key is to query/mine for?
17:59 whytewolf the key is grain.get
17:59 Rumbles joined #salt
17:59 akhter I hear more and more openstack every day, they have to be gaining on AWS...
17:59 geekatcmu the "easy" button is "login to the master and look for mine.p in the master cache directory.  There'll be a lot of them.
18:03 job it should probably be grains.get
18:03 whytewolf job: yeah
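The working form of that pillar, using mine function aliases so each grain is mined under its own key (grains.get takes one grain per call, so the two-item list above would be read as key plus default):

    mine_functions:
      fqdn_ip4:
        - mine_function: grains.get
        - fqdn_ip4
      fqdn_ip6:
        - mine_function: grains.get
        - fqdn_ip6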
18:04 murrdoc joined #salt
18:04 ageorgop joined #salt
18:05 hal58th joined #salt
18:06 oida joined #salt
18:06 khaije1 joined #salt
18:06 ViciousLove joined #salt
18:06 lemur joined #salt
18:06 nZac joined #salt
18:07 khaije1 testing my thinking ... If I want a way to compose several stanzas, I can use a "noop" test state with the relevant requires, yes?
18:10 khaije1 OK, I see it's test.nop actually ( https://docs.saltstack.com/en/latest/ref/states/all/salt.states.test.html#salt.states.test.nop ) nice :)
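A sketch of that composition idea: a no-op state that other states can require as a single handle (all IDs are hypothetical):

    app-ready:
      test.nop:
        - require:
          - pkg: app-packages
          - service: app-service

    # elsewhere, depend on the whole group at once:
    #   - require:
    #     - test: app-ready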
18:11 * babilen thought you were referring to that to begin with
18:11 babilen What do you mean by "compose" ?
18:14 Dev0n hey, in the dockerng state, if I have an image_present state with force: False but a watch for a git: repo, then the image should only be built when that git repo changes right?
18:14 babilen Dev0n: I think it would only be built *once*
18:14 babilen But why don't you try it?
18:15 Dev0n babilen, it wasn't in the docs so just wanted to check here first
18:15 emaninpa joined #salt
18:16 ajw0100 joined #salt
18:16 conan_the_destro joined #salt
18:16 babilen My expectation would be that image_present is satisfied as soon as *any* image is present (regardless of its version). But then I don't know how that state reacts to watch statements being triggered.
18:16 Dev0n ok, I'll give watch a try and see what happens
18:16 babilen force: True would definitely rebuild everytime (even if there are no changes)
18:17 Dev0n yea I've been using that
18:17 Dev0n but it seems wasteful
18:17 protoz joined #salt
18:17 babilen Sure .. I mean, in the end you can use two states (one with force and onchanges), but see how a watch statement works here
18:18 babilen Or you could remove the image if the git has changes (prereq)
18:19 GreatSnoopy joined #salt
18:20 jaybocc2 joined #salt
18:20 Dev0n The former sounds good; the latter means I would have to stop a running container, which could lead to a bit of downtime, which I want to avoid. I was thinking if watch fails then simply do cmd.run to build, but I'll give watch a go
18:20 khaije1 Is there a way to require and extend at the same time?
18:20 babilen Please don't
18:21 brianfeister joined #salt
18:21 Dev0n was that please don't for cmd.run or khaije1's?
18:22 babilen khaije1: But then: What are you trying to do and achieve? How did that work and what are you having problems with? Provide applicable output and states on one of http://refheap.com, http://paste.debian.net, https://gist.github.com, http://sprunge.us, http://dpaste.de, …
18:22 Dev0n ahh k
18:22 babilen Dev0n: That was aimed at khaije1
18:22 Dev0n cool
18:22 babilen It just sounds like something that is hard to understand later on (provided it works and whatever 'it' is exactly)
18:27 hightekvagabond joined #salt
18:27 khaije1 babilen: the idea is to separate generic config states and more specific states
18:28 doriftoshoes_ joined #salt
18:29 khaije1 if I can include the specifics of a particular config set by extending a general-case state, then combine these specific states into a file, it'll provide a clean "local-ization" of config specifics and allow the growth of more generic states.
18:31 LondonAppDev joined #salt
18:31 shaggy_surfer joined #salt
18:33 Dev0n babilen, you were right, TypeError: image_present() got an unexpected keyword argument '__reqs__'
18:34 Dev0n doesn't seem to like watch
18:35 shaggy_surfer joined #salt
18:37 oida joined #salt
18:38 Dev0n babilen, just wondering how the two state approach would work here? (your first suggestion)
18:38 khaije1 babilen: does my explanation of purpose make sense?
18:40 protoz joined #salt
18:41 AlberTUX1 joined #salt
18:45 hightekvagabond joined #salt
18:47 zmalone joined #salt
18:48 tristianc_ joined #salt
18:50 job what is wrong with https://github.com/NLNOG/ring-salt/blob/master/pillars/mine/init.sls#L7-L9 causing this error https://p.6core.net/p/TCoBZDHT3MkH1MKlBtJQdhTi
18:50 aw110f joined #salt
18:50 job the two whitespaces before the second fqdn_ip4 ?
18:51 geekatcmu or 2 missing spaces, depending on where that key is supposed to be
18:52 tbaker57 joined #salt
18:52 geekatcmu See also, "why I hate salt's usage of YAML"
18:52 DanyC joined #salt
18:54 iggy yeah, run that through a yaml parser and you'll probably see what's wrong
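The parser check iggy suggests, as a one-liner (works for plain-YAML sls files; ones using Jinja need rendering first, e.g. via state.show_sls):

    python -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' pillars/mine/init.sls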
19:04 AdamSewell joined #salt
19:07 denys joined #salt
19:09 murrdoc joined #salt
19:09 SunPowered joined #salt
19:11 perfectsine joined #salt
19:12 bhosmer_ joined #salt
19:12 khaije1 speaking of Salt's use of YAML, (which I actually like, but notably isn't required), could someone describe why 4 spaces are sometimes required for indents?
19:12 SunPowered hey, I'm testing the new ext_pillar features in 2015.8.x. I have a top file repo and a pillar data repo. No local pillar_roots are defined. I created a minion with specified environment 'dev'. I have pillar data in the dev branch of that repo and the target defined in the top file, but no pillar.items are returned when querying
19:14 SunPowered is there something I should be watching out for?  I tried to define the pillar repo in the ext_pillar config section both using the generic method without matching branches to environments and explicitly as well, both with the same result.  What could I be missing?
19:15 SunPowered I am using https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.git_pillar.html#configuring-git-pillar-for-salt-releases-2015-8-0-and-later as a guide for my configuration
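For comparison, the 2015.8-style git_pillar config from the linked docs, mapping a branch to an environment (URLs are placeholders):

    ext_pillar:
      - git:
        # top file repo: master branch serves the base environment
        - master https://git.example.com/pillar-top.git:
          - env: base
        # data repo: dev branch serves the dev environment
        - dev https://git.example.com/pillar-data.git:
          - env: dev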
19:16 geekatcmu khaije1: it's all about dict/list nesting.
19:18 solidsnack joined #salt
19:19 spiette joined #salt
19:20 hightekvagabond joined #salt
19:20 babilen Dev0n: You could have a state with "force: True" and use https://docs.saltstack.com/en/latest/ref/states/requisites.html#onchanges so that it only becomes "active" if your git state has changes
19:20 Deevolution left #salt
19:21 babilen Well .. you don't even need two states then now that I think about it
19:21 Deevolution joined #salt
19:21 babilen Unless the "force: True" overrides the "only apply if the required states generate changes" of the "onchanges" requisite, but I don't think it would
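A sketch of that single-state version: force a rebuild, but gate the whole state on the git checkout having changes (names and URLs are made up; untested against dockerng):

    app-repo:
      git.latest:
        - name: https://git.example.com/app.git
        - target: /srv/build/app

    app-image:
      dockerng.image_present:
        - name: app-image
        - build: /srv/build/app
        - force: True          # always rebuild when the state runs...
        - onchanges:           # ...but only run it when the checkout changed
          - git: app-repo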
19:22 job ok, i can fetch a 'mined grain' like so https://p.6core.net/p/hAKxQr8N3LhCCfEyrBGu3clA
19:23 job but trying to use it https://github.com/NLNOG/ring-salt/blob/master/hostsfile/init.sls#L22 results in an error
19:23 job Rendering SLS 'base:hostsfile' failed: Jinja variable No first item, sequence was empty.
19:23 babilen khaije1: I think you are trying to implement some OO patterns that aren't necessarily applicable in salt. I'd write generic states and keep all the specifics in pillars.
19:23 protoz joined #salt
19:24 DanyC joined #salt
19:24 pieter` joined #salt
19:26 akhter Does anyone know if you can pass access and secret key to state boto_vpc via cloud.profiles?  https://docs.saltstack.com/en/latest/ref/states/all/salt.states.boto_vpc.html#module-salt.states.boto_vpc
19:27 akhter "It's also possible to specify key, keyid and region via a profile, either passed in as a dict, or as a string to pull from pillars or minion config:"   --  Not sure what profile this is referring to.
19:28 babilen job: Several things: 1. Do not use grains for addresses, but define a mine alias for the network (specified in CIDR) you actually care about 2. salt['mine.get'](name, 'fqdn_ip4').items() would give you ('amazon04.ring.nlnog.net', ['176.34.57.117']) -- The first element in that would be 'amazon04.ring.nlnog.net' rather than '176.34.57.117' ..
19:29 babilen Not sure at the moment why it is empty though, but I'm sure there is a reason for that too
19:30 flowstat_ joined #salt
19:31 khaije1 babilen: I expect to do that but was hoping this would provide a useful suppliment
19:31 LondonAppDev joined #salt
19:31 job the mine.get(Y, X) function should return thingy 'X' from minion Y, right?
19:31 babilen What I typically do is: I define something like: https://www.refheap.com/112521
19:32 babilen Yeah, but you still get {'hostname': VALUE, 'hostname2': OTHERVALUE, ....}
19:32 babilen (even if your target expression only matches a single host)
19:32 job right
19:33 babilen So .items() gives you tuples like the one I showed above.
19:33 job in your example, where does addr_mine_function come from
19:33 babilen Still not sure why it is empty (and not 'amazon04.ring.nlnog.net', of which the first would be 'a'), but we might be able to find that out too ...
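The rendering babilen describes, sketched as a Jinja loop over the mine (grain values are lists, hence the |first):

    {# one hosts-file line per minion, first mined address wins #}
    {% for host, addrs in salt['mine.get']('*', 'fqdn_ip4').items() %}
    {{ addrs|first }} {{ host }}
    {% endfor %}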
19:33 izrail joined #salt
19:33 jaybocc2 joined #salt
19:33 babilen job: I set that in a pillar
19:34 job outside your pasted example?
19:34 babilen You can just hardcode it in the state if you want, but I like to keep a 'net' pillar in which I define certain things
19:34 forrest joined #salt
19:34 babilen job: yeah, it is an unrelated pillar
19:34 forrest sdm24, you around?
19:34 wangofett Question about watches - do they refer to other states? And do they need to be in the same state file?
19:35 job i am very new to salt, so for me this is more of an exercise to understand how things interact with each other
19:35 babilen job: That way I don't have to change my states if the network a minion is in for specific things changes.
19:35 forrest wangofett, They can watch states, and if you wanted to reference something in another file, it would need to be part of the include.
19:35 babilen job: Sure, don't worry. Feel free to hardcode the addr_mine_function directly in there.
19:36 ViciousL1ve joined #salt
19:37 hal58th_ joined #salt
19:37 sinonick joined #salt
19:37 babilen job: I try to keep *all* dynamic bits in a pillar. That way I don't have to touch my states just because I suddenly want to change the matcher for workers behind a nodebalancer (for example)
19:37 Netwizard joined #salt
19:38 babilen I just change the foo.net_tgt value in the pillar
19:38 babilen Hmm, I think I use [0][0] in that particular state in lieu of |first|first ... :)
19:39 job i figured it out: the grain to be mined was not yet populated in all minions matching amazon*
19:39 job this resolved my issue
19:39 job root@master01:~# salt 'amazon*' cmd.run 'sudo service salt-minion restart'
19:40 job this state has no ipv6 support? https://docs.saltstack.com/en/latest/ref/states/all/salt.states.host.html
19:41 job root@coloclue01:~# grep amazon /etc/hosts | tail -1
19:41 job ::ac1f:19e3amazon09.ring.nlnog.net
19:41 cberndt joined #salt
19:41 wangofett forrest: I've got this setup but it doesn't seem to work: https://gist.github.com/waynew/b0f4ee4ed32e2fd84200
19:41 hackerman joined #salt
19:41 job i guess ipv6 in and ipv4 garbage out
19:41 babilen job: That should not have been necessary. A simple https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.mine.html#salt.modules.mine.update is what you want
19:41 job good to know for next time
19:41 babilen job: And, fwiw, a salt-minion service restart can be done with "salt '*' service.restart salt-minion" ;)
19:42 wangofett babilen: does that work now? I know I used to have to do some hacky things in the past
19:42 babilen job: ipv6? newfangled magic!
19:42 babilen wangofett: You mean the at stuff?
19:42 wangofett babilen: yeah
19:43 babilen wangofett: I never had problems with this ever ... (on debian)
19:43 wangofett Lucky you :)
19:43 sdm24 forrest: yeah, not really checking the channel. Whats up?
19:44 babilen wangofett: I don't know why nor do I understand why others needed that. Would be nice to actually get to the bottom of that
19:45 wangofett babilen: I know I ran into the problem enough to make me shy of it. AFAICT it happens because the minion process that's calling service.restart is the service itself. So when it gets SIGTERM or whatever it is it dies before returning
19:45 wangofett at least that's my best guess
19:45 grumm_servire joined #salt
19:46 wangofett Anyone know wny this state wouldn't restart/reload the httpd service? https://gist.github.com/waynew/b0f4ee4ed32e2fd84200
19:47 wangofett (running on centos7, and salt 2015.8.1 FWIW)
19:47 babilen wangofett: yeah, that is my understanding of the issue too, but something must work better on Debian then. Which distribution did you encounter this on?
19:47 babilen Ah ...
19:47 babilen Guess that answers that question
19:48 wangofett actually it was when I was running on ubuntu, I think
19:48 wangofett it's been at least 6 months or more since I've tried, though
19:48 wangofett hm
19:48 babilen wangofett: /etc/httpd/* doesn't match /etc/httpd/conf.d/myconf.conf is my guess .. make that /etc/httpd/conf.d/*
19:49 babilen wildcards in requisites are speshul
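A sketch of the exact-name form babilen suggests (the source path is hypothetical; note wangofett reports below that this alone did not fix their case):

    myconf:
      file.managed:
        - name: /etc/httpd/conf.d/myconf.conf
        - source: salt://httpd/myconf.conf

    httpd:
      service.running:
        - watch:
          - file: /etc/httpd/conf.d/myconf.conf   # exact name, not a wildcard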
19:50 babilen Does anybody know if there will be 2015.8 packages for wheezy anytime soon?
19:50 forrest sdm24, did you get your shirt yet?
19:50 babilen I guess I should switch to simply installing from git tags
19:50 sdm24 forrest: nope not yet. You?
19:50 forrest Nope
19:50 sdm24 :9
19:50 sdm24 :(
19:50 forrest It's definitely been 3 weeks
19:50 babilen Any tips on how to switch from the packages to git based deployment? (using salt-formula fwiw)
19:50 sdm24 yeah I think i remember getting an email ~ 3 weeks ago that they were coming, but not a specific "your shirt has shipped" email
19:51 forrest sdm24, Yeah they were supposed to ship within 3 weeks I thought, but yeah no shipping details, no shirt
19:51 forrest it's been over a month
19:52 forrest babilen, The formula currently doesn't support git baed installs
19:52 babilen I know, but I use it to install/configure salt at the moment
19:52 amcorreia joined #salt
19:52 forrest babilen, You could just create a state that calls the bootstrap if you want and pass the git flag and options
19:52 babilen I don't really see us running 2015.5.3 for long and I *have* to do $something about that
19:53 forrest babilen, Oh for a bug or something?
19:53 babilen And as packages are, obviously, not forthcoming (nor the packaging code) I'm starting to think about other options
19:53 forrest sdm24, They posted this a few days ago: https://www.digitalocean.com/company/blog/looking-back-at-hacktoberfest/
19:53 forrest babilen, Packages for what? I have packages for 2015.8
19:54 babilen forrest: https://docs.saltstack.com/en/latest/topics/releases/2015.5.6.html is a security release .. not feeling comfortable without that
19:54 forrest sdm24, Hacktoberfest T-shirts are in the mail as of 11.24! :)
19:54 babilen forrest: Debian wheezy, squeeze .. some Ubuntu releases
19:54 sdm24 cool!
19:54 forrest I still don't have mine though, lol
19:54 sdm24 I figured that they had more people than they expected, so that would take a while
19:54 forrest Are they going by donkey?
19:55 forrest It's still been 15 days
19:55 forrest babilen, I'm seriously confused
19:55 babilen Why?
19:55 forrest babilen, http://repo.saltstack.com/apt/ubuntu/14.04/amd64/2015.5/
19:56 wangofett babilen: well, I just tried doing my restarts on my personal salt server with debian and it seemed to work fine. Takes some seconds for the minion to come back online *shrugs*
19:56 babilen Yes, that's 14.04 Ubuntu
19:56 forrest what are you on?
19:56 forrest 12?
19:56 zmalone Debian
19:56 zmalone https://repo.saltstack.com/apt/debian/
19:56 babilen Primarily wheezy is the problem
19:56 SunPowered babilen: The repository packages are really old
19:56 babilen exacly
19:57 zmalone I think there also used to be some non-x86 packages which are now gone, and that makes the deb people mad
19:57 babilen That is the problem
19:57 SunPowered ah
19:57 babilen Along with fedora IIRC
19:57 forrest jfindlay, Is there any support to package up salt for (still supported) versions of Debian such as wheezy? If not, why not? It's still a supported OS.
19:57 Gareth o/
19:57 zmalone babilen is on the mailing list, but Fedora is gone altogether, and is supposed to be back within a year
19:57 forrest hey Gareth
19:58 Gareth forrest: hey hey.  hows things?
19:58 wangofett babilen: I changed my file to `- file: /etc/httpd/conf.d/*` and still nada
19:58 forrest Gareth, Doing well! I saw the sky for about 30 minutes before the clouds rolled in so that was awesome
19:58 babilen wangofett: If you use the filename exactly?
19:58 Gareth forrest: haha. joys of Seattle living :)
19:58 SunPowered well it's annoying. I resorted to bootstrapping all my minions from git directly
19:58 forrest Gareth, Yep! How are you?
19:58 babilen SunPowered: How did you switch?
19:59 Gareth forrest: doing well.  in NY at the moment.
19:59 forrest babilen, Is the structure of the apt packaging file a lot different between weheezy and jessie?
19:59 forrest Gareth, Nice, get some of that delicious halal cart food!
19:59 * babilen should just package it himself .. joehh's packaging is still available and I'm a DM
19:59 SunPowered babilen: For existing nodes I ended up doing it manually, though I have a small enough cluster
19:59 forrest babilen, joehh hasn't packaged anything in quite a while, from what I understand he's been too busy with school to do it.
19:59 forrest but the salt team should have taken over packaging that
20:00 SunPowered for new nodes, I just updated my salt-cloud script args to specify the version explicitly
20:00 forrest wheezy is NOT a EOL'd release, so there should be a package
20:00 forrest was hoping jfindlay could weigh in
20:00 toastedpenguin joined #salt
20:00 ageorgop joined #salt
20:00 SunPowered forrest: The saltstack PPA is more recent (2015.8.0 I believe)
20:01 babilen forrest: I am well aware of joehh's situation, but 2015.8 was released quite some time ago, there have been security releases on 2015.5, and there are still no packages for a bunch of supported Debian releases. And, no, I can't upgrade every single minion to jessie just for this ..
20:01 * wangofett has never had much luck using the packages
20:01 jfindlay forrest: it's only a matter of resources
20:01 forrest SunPowered, Correct, but no support for wheezy.
20:01 babilen jfindlay: What do you need?
20:01 forrest babilen, The older releases are EOL, so I'm sorry to say but fuck them, wheezy however is not, so it should be supported.
20:02 babilen forrest: yeah
20:02 jfindlay I would love to have packages for all supported platforms
20:02 intel joined #salt
20:02 jfindlay this is something I have been pushing for for a long time
20:02 babilen jfindlay: Hosting? (I'd happily host that for free if that gets me packages) -- Work on the packaging code? (where is the stuff for jessie so that we can sort that out)
20:02 forrest babilen, They should be able to host them on the same server, it's just a directory creation. Packaging code lives here: https://github.com/saltstack/salt/tree/develop/pkg
20:03 jfindlay babilen: once we release our package builder states, we're hoping to get more community involvement in the packaging process
20:03 forrest I don't see joehh's stuff in there though
20:03 jfindlay s/hoping/needing/
20:03 babilen jfindlay: Any idea when those states will be published? I had hoped that it would have happened by now ..
20:03 jfindlay I'll talk with Dave to see what the status is
20:04 forrest jfindlay, Thanks, if you guys don't have time than maybe babilen can grab joehh's files for the wheezy code, it should be minimal effort to change them to support it
20:04 babilen I mean this was all fun and nice for a couple of weeks, but now there are CVEs that aren't closed and so on ;)
20:04 forrest babilen, Agreed, can you grab joe's packaging files by chance?
20:04 forrest I know he WAS packaging for wheezy, so it should be around
20:04 babilen I like the idea of repo.saltstack.com, but I'd simply like to see the packaging beforehand and, well, it needs support for more platforms.
20:04 xmj CVEs, what for?
20:05 ekristen joined #salt
20:05 babilen In the end this is just beating cowbuilder into submission and a few changed dependencies.
20:05 forrest right
20:05 babilen xmj: https://docs.saltstack.com/en/latest/topics/releases/2015.5.6.html
20:05 jfindlay forrest, babilen: yeah, this is definitely an issue.  Let me talk with utahdave and see what his position is
20:05 xmj woopsies
20:06 xmj hm, we're on 2015.8.3
20:06 forrest jfindlay, Sounds good, babilen seems pretty gung ho about it, so maybe we can force some free work out of him ;)
20:06 jfindlay honestly, no one is more frustrated than me about how slow this has been going
20:06 wangofett babilen: nope. Using the name of the file doesn't help. Also removed the other watch, just in case
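(A minimal sketch of the watch pattern being debugged here, with hypothetical file and service names: a watch requisite matches the ID or name of another state in the same run, globs included, so the watched path has to be managed by some file state in the rendered highstate.)

    httpd-vhost-conf:
      file.managed:
        - name: /etc/httpd/conf.d/vhost.conf
        - source: salt://httpd/files/vhost.conf

    httpd:
      service.running:
        - enable: True
        - watch:
          - file: httpd-vhost-conf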
20:06 aidalgol joined #salt
20:07 forrest jfindlay, Good to know
20:07 lemur joined #salt
20:08 forrest This is a pretty big community concern in terms of getting stuff packaged up, I'm sure many people would update the files if they were pulled from a location for the builds in the repo.
20:08 aw110f joined #salt
20:08 babilen jfindlay: Much appreciated, but I just don't understand what is keeping you guys. And, honestly, I'd rather tinker with a couple of dependencies and code than complaining here and on salt-users .. well .. s/complaining/begging/
20:08 jfindlay babilen: if you can get wheezy packages, we may even be willing to host them for you
20:08 jfindlay but I'll have to get UtahDave to make the final decision there
20:08 babilen jfindlay: You mean if I package 2015.8.* manually and send you the .dsc ?
20:09 jfindlay babilen: a repo would be preferred :-)
20:09 Shirkdog joined #salt
20:09 babilen I have hosting myself :)
20:09 * jfindlay knows little about deb packaging magic
20:09 dimeshake can someone verify I'm using service.mod_watch and onchanges correctly?  http://sprunge.us/XTJj
20:09 forrest babilen, Hosting it on the main repo.saltstack site should be easy, it's just another dir.
20:09 dimeshake idea being to restart nginx if group memberships have changed
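(A minimal sketch of the onchanges pattern dimeshake describes, with hypothetical group and user names -- the service state only fires when the watched group state actually reports changes:)

    www-group:
      group.present:
        - name: www-data
        - addusers:
          - deploy

    nginx:
      service.running:
        - enable: True
        - onchanges:
          - group: www-group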
20:09 forrest If aptly is being used it's about a 5 second process to get a wheezy package on there.
20:09 brianfeister joined #salt
20:09 job who here is responsible for repo.saltstack.com ?
20:10 aw110f joined #salt
20:10 forrest The salt devs are.
20:10 babilen But wouldn't it be easier to just take whatever you guys use to create the jessie packages for repo.saltstack.com, define a new cowbuilder environment, write a suitable pbuilderrc and whatnot for wheezy?
20:10 job any of the salt devs here?
20:10 zmalone I think the person who directly does repo.saltstack.com packaging doesn't watch this channel.
20:10 snuggie joined #salt
20:10 jfindlay job: I
20:10 jfindlay babilen: whatever is easiest for you is fine with me
20:11 job jfindlay, is it feasible to also serve repo.saltstack.com content over IPv6 and add an IPv6 AAAA DNS record?
20:11 felskrone joined #salt
20:11 job jfindlay, it is a bit of a challenge to fetch packages from repo.saltstack.com on IPv6-only hosts at this moment
20:11 babilen jfindlay: Sounds good, but where do I find the code that you use for building for jessie?
20:11 snuggie Hello everyone. Total n00b to salt so please forgive me. Every example I see with the salt mine does a for loop with a single function in your mine.get. Is there any way to get two functions in a single for loop?
20:11 spuder_ joined #salt
20:11 snuggie I want to grab the IP address from a server and a grain
20:11 jfindlay job: I think so, but we'd have to look into it.  I think it's on DO, currently, which seems to be more v6 capable than other possibilities
20:12 job jfindlay, yes, digitalocean should support IPv6
20:12 job https://www.digitalocean.com/community/tutorials/how-to-enable-ipv6-for-digitalocean-droplets
20:12 jfindlay job: would you file a github issue against saltstack/salt and ping me on it?
20:12 job will do
20:12 jfindlay thanks
20:12 fredvd joined #salt
20:12 job thank you :)
20:14 babilen snuggie: Just make two mind.get calls, save that dictionary in a variable {% set foobar = salt['mine.get'](.....) %} and then foobar.get(HOSTNAME) later
20:14 babilen mine.get
20:14 babilen Still working on mind.get
20:14 babilen :)
20:15 snuggie that's a good workaround
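(A minimal sketch of babilen's suggestion, assuming hypothetical targets and that network.ip_addrs and grains.items are configured as mine functions on the minions:)

    {% set addrs = salt['mine.get']('web*', 'network.ip_addrs') %}
    {% set info  = salt['mine.get']('web*', 'grains.items') %}
    {% for host in addrs %}
    {{ host }}: ip={{ addrs[host][0] }} role={{ info.get(host, {}).get('role', 'unknown') }}
    {% endfor %}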
20:15 job jfindlay, https://github.com/saltstack/salt/issues/29580
20:15 saltstackbot [#29580]title: Serve repo.saltstack.com content over IPv6 as well | ping @jfindlay ...
20:16 job thank you salty
20:17 quasiben joined #salt
20:19 lompik joined #salt
20:20 shaggy_surfer joined #salt
20:22 int3l joined #salt
20:22 murrdoc joined #salt
20:25 oida joined #salt
20:26 hightekvagabond joined #salt
20:26 solidsnack joined #salt
20:26 giantlock joined #salt
20:28 JDiPierro joined #salt
20:29 flowstat_ anyone currently in chat have a full aws workflow working?
20:29 flowstat_ saltstack + aws, I mean
20:30 wangofett you mean where salt brings up your aws instance?
20:30 forrest flowstat_, Lots of people do, Ryan_Lane wrote up a blog post on it as well: http://ryandlane.com/blog/2014/08/26/saltstack-masterless-bootstrapping/
20:30 flowstat_ that could be one of them, but even using something like Jenkins
20:30 flowstat_ forrest, I'm reading his blog right now, but I'm not sure that I want to bite that much off all at once
20:31 flowstat_ all of my work is greenfield, and I'm the only ops guy
20:31 forrest What are you trying to achieve?
20:31 forrest step one, get rid of jenkins /troll
20:31 flowstat_ going from 0 infrastructure to infrastructure as code
20:31 s_kunk joined #salt
20:31 s_kunk joined #salt
20:31 forrest flowstat_, You could start here: https://github.com/gravyboat/docka-docka-docka
20:31 flowstat_ which is obviously huge and relatively meaningless
20:31 forrest Yeah, that's too wide to weigh in on
20:32 flowstat_ yep
20:32 flowstat_ which is why I'm starting out just trying to get a sane saltstack stack up
20:32 flowstat_ for our monolithic web app
20:32 snuggie left #salt
20:32 forrest You might want to write up a spec first with the requirements you need to achieve and how the parts have to interact, that would help figure out how you can incorporate salt
20:32 flowstat_ and I have a working highstate setup with a master, working locally in vagrant
20:32 brianfeister joined #salt
20:33 flowstat_ well, where I'm really floundering is in the deployment part of it
20:33 forrest Which portion? For after your tests run?
20:33 flowstat_ yep
20:33 forrest set up a post deploy hook that hits the salt master via the reactor
20:33 forrest I have an example in that docka-docka-docka repo I linked.
20:33 flowstat_ nice!
20:33 flowstat_ yeah, I haven't gotten to reactor yet
20:33 flowstat_ I will dive into that repo
20:33 flowstat_ thanks for the help
20:34 forrest Cool. It's pretty helpful for using post success hooks on code runs.
20:34 forrest Yeah np
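(A sketch of the shape of such a post-deploy hook, using the classic reactor syntax of this era; the event tag, target, and state name are assumptions, and forrest's repo may wire it differently. A CI job could raise the event from a minion with something like `salt-call event.send 'ci/build/success'`.)

    # /etc/salt/master -- map an event tag to a reactor SLS
    reactor:
      - 'ci/build/success':
        - /srv/reactor/deploy.sls

    # /srv/reactor/deploy.sls -- apply a deploy state to the web minions
    deploy_on_success:
      local.state.sls:
        - tgt: 'web*'
        - arg:
          - webapp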
20:34 forrest You do have to have a master for the reactor currently as a heads up
20:34 forrest if you were planning on running masterless.
20:34 flowstat_ honestly, I'm not sure about it
20:34 forrest okay
20:34 forrest No reason to overload right now then
20:35 flowstat_ I'm going to have such a mixed environment, and it's a startup, so I think it might be best for now to have remote execution
20:35 flowstat_ since we don't have multiple services/dev teams
20:35 forrest remote execution in terms of no salt on the systems?
20:35 flowstat_ for Ryan, it seems as though that choice was a reflection of their org and culture
20:35 forrest Yeah
20:35 flowstat_ cutting myself off from remote execution via a master seems like it would hamstring me right now
20:36 flowstat_ and I need to be as agile as possible while I set the groundwork for our team expansion
20:36 forrest Totally understandable
20:36 flowstat_ anyways, thanks again
20:36 forrest np, good luck
20:38 UForgotten joined #salt
20:39 solidsna_ joined #salt
20:40 flowstat_ forrest, what are you using to set the minion config in docka-salt?
20:41 flowstat_ I guess whatever your CI solution is?
20:41 flowstat_ or I guess you could set it in some kind of tarballed artifact and pull it down during bootstrapping... hmmm
20:41 forrest flowstat_, Yeah use whatever is provisioning the system.
20:41 flowstat_ k
20:42 forrest Use that to install the salt packages, and drop the minion config in; in aws unfortunately that's cloudformation
20:43 flowstat_ yeah, or salt cloud, I guess
20:43 flowstat_ oh yeah, last thing: how do you handle pki?
20:44 flowstat_ I have to admit being a bit confused by it, beyond the trivial setup I have with vagrant
20:44 forrest in terms of key acceptance?
20:44 flowstat_ yep
20:44 forrest For that I just had it set to auto accept.
20:45 flowstat_ given what the repo does, that makes sense
20:45 forrest Yeah, it was just a basic example. I didn't get paid to work on it.
20:45 forrest Just to show that it CAN be done, because no one had a working full setup of the process.
20:45 flowstat_ well, I'm grateful you did, this is invaluable
20:45 flowstat_ yeah, that's the way with ansible and salt, I've found
20:46 forrest That's great to hear! I was annoyed when I couldn't find something similar, lol
20:46 flowstat_ lots of "if you run into this, do this" sort of snippets, and a bunch of 'hello world' articles
20:46 flowstat_ but nothing attacking the full stack
20:46 forrest Yeah exactly
20:46 mapu joined #salt
20:47 snuggie joined #salt
20:47 zmalone Be careful with auto-accept, it makes it easy for hosts to impersonate other hosts, which is really dangerous if you are keeping any api keys, ssl keys, passwords, password hashes, or other secrets in salt.
20:47 zmalone (which isn't to say that you are)
20:47 forrest zmalone, Agreed.
20:47 perfectsine_ joined #salt
20:47 forrest zmalone, are you out in AWS?
20:48 forrest How are you handling key acceptance in an automated fashion when systems are provisioned?
20:48 flowstat_ yep, having it anywhere but local or in a sandbox makes my inner security nerd cringe
20:48 zmalone I have AWS hosts, but I'm not using Salt to create the EC2 instances.
20:48 forrest gotcha
20:48 flowstat_ yeah, actually, how would that work?
20:48 flowstat_ I'm assuming you salt-call on the minions
20:49 pmcg joined #salt
20:49 flowstat_ so if they're not yet in 'unaccepted'
20:49 flowstat_ and you can't, therefore, accept them
20:49 flowstat_ how do you run highstate on them once you accept them?
20:49 forrest once the master accepts the minion you're set to go.
20:50 forrest and can highstate or reactor or whatever as necessary.
20:50 Tanta I run masterless now
20:50 flowstat_ so if the workflow is 'minion salt-call -> master holds key in unaccepted -> tell master to accept -> ????? -> highstate runs on minion'
20:50 forrest here we go
20:50 forrest https://www.reddit.com/r/saltstack/comments/3jeggp/automating_key_acceptance/
20:50 saltstackbot [REDDIT] Automating key acceptance (self.saltstack) | 3 points (81.0%) | 20 comments | Posted by MrBooks | Created at 2015-09-02 - 20:15:16
20:50 Tanta I like the masterless setup a little better
20:50 flowstat_ Tanta, you don't miss remote execution?
20:50 Tanta I can do that via other means
20:50 forrest salt-cloud still seems like the best option honestly
20:51 forrest flowstat_, the remote execution aspect is easy with salt-ssh: https://github.com/gravyboat/salt-ssh-example
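(For reference, a minimal salt-ssh roster along the lines of that example repo -- host name, IP, and user are hypothetical. With it in place, `salt-ssh 'web1' state.highstate` gives remote execution without a running master.)

    # /etc/salt/roster
    web1:
      host: 10.0.0.11
      user: admin
      sudo: True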
20:51 Tanta I'm using autoscaling across 5 different VPCs within AWS, so a single salt-master is not really possible without a bunch of nasty hacks
20:51 Tanta it's a harder up-front design, but I think it's a lot more flexible in the long run
20:51 flowstat_ oooh, thanks Forrest
20:51 forrest flowstat_, Yeah there are some good options in there.
20:51 forrest Tanta, I just want the reactor in masterless.
20:51 Tanta when I had one logical network, the salt-master was great
20:52 forrest Tanta, If you want it as well and want to +1 this: https://github.com/saltstack/salt/issues/15265 /shameless plug
20:52 saltstackbot [#15265]title: Bring reactor system to minions | Right now reactors are a master-only feature, but it would be really nice to be able to react locally to events, especially if not using a master. For instance, it would be really awesome to react to udev events or to etcd events.
20:52 Tanta I've never used reactor forrest, not sure what it's for
20:52 flowstat_ Tanta, how do you provision? Do you just use userdata to set grains?
20:53 flowstat_ and set configs, etc
20:53 forrest Tanta, It's so you can react to events, so if your test suite passes, it auto deploys or whatever via a POST hook.
20:53 forrest it's pretty sick
20:53 Tanta I inspect the ASG name, and then set a standard ID, and the rest flows from there
20:53 Tanta all done with a bash script via user data + curl
20:54 flowstat_ k
20:54 flowstat_ I'm really debating the whole master / masterless thing
20:54 flowstat_ given where we're at atm, I think I'll go with the easy thing with an eye towards transitioning
20:54 whytewolf if you are thinking of HA masters then you are better off masterless
20:54 forrest The biggest downside of no master to me is the loss of the reactor. It's really the only thing that's part of the master that I care about at all.
20:55 flowstat_ working on a PR?
20:55 flowstat_ :D
20:55 forrest Nope.
20:55 forrest Tom needs to finish it up, he's stuck at some point and I haven't been able to get more details out of him to see what's left
20:55 pzipoy left #salt
20:55 flowstat_ ahh, gotcha
20:55 forrest And no way am I spending my unpaid time working on it, lol.
20:55 Tanta I see forrest, the way I usually do that is encode a branch and commit hash or tag for Git in pillar
20:56 forrest I already do enough for free :)
20:56 Tanta and check from cron to see if there are any new commits, and then deploy
20:56 Tanta it's a completely self-driven pull process
20:56 forrest Tanta, Yeah that's what I've done in the past as well, it just sucks if the cron doesn't run often enough, or your deploy process is slow due to how the app builds
20:56 forrest 'your code deployed when it passed the tests' is better than 'your code deploys every 15 minutes'
20:57 Tanta I like the cmd_wait, module_wait, and state with onchanges: blocks
20:57 Tanta the Git update is what triggers an entire sequence of 'events'
20:57 Tanta it's a little ghetto but it accomplishes the same thing
20:57 forrest Yeah and that totally works, it just doesn't deploy immediately, which is a big deal for some devs
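(A minimal sketch of the pull-based deploy Tanta describes -- repo URL, pillar key, and service name are hypothetical; a cron job running `salt-call state.sls deploy` would drive it, and the service only restarts when the checkout actually changes:)

    app-checkout:
      git.latest:
        - name: https://example.com/org/app.git
        - rev: {{ salt['pillar.get']('app:rev', 'master') }}
        - target: /srv/app

    app-service:
      service.running:
        - name: app
        - onchanges:
          - git: app-checkout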
20:58 whytewolf if reactors start working for masterless. then beacons start working for masterless also. which would allow a system to become self healing. or even have the ability to auto expand a drive based solely on new drive space being added
20:58 forrest 'I can deploy as soon as the tests finish manually' versus 'I have to wait up to 15 minutes for this system to deploy'
20:58 forrest It's a stupid argument, but what are you going to do? What's the response when you're the one at 'fault' because the devs are held up by deployment times?
20:58 Tanta our test server doesn't even care about tests, it just grabs whatever is at HEAD in master every 3 minutes
20:58 Tanta you could do that with Jenkins in a masterless setup though
20:58 forrest whytewolf, Yep, it would be sick. I've been begging for it since Ryan made the ticket.
20:58 forrest Tanta, That's what we did
20:59 forrest Deploy took a while, meant we had to scale back how often things were deployed
20:59 Tanta what kind of application?
20:59 forrest ruby on rails.
20:59 Tanta oh man
20:59 Tanta compiling assets and what not
20:59 forrest Exactly
21:00 flowstat_ could be worse, you could be trying to replace a ruby file, opsworks chef, and random puppet all somehow 'managed' by jenkins
21:00 flowstat_ just sayin
21:00 forrest True
21:00 Tanta I ripped out a Puppet + RoR setup and rewrote it in salt
21:00 jfindlay that doesn't sound too fun :-)
21:00 flowstat_ haha
21:00 Tanta we used Capistrano for deployments though
21:00 forrest Tanta, Good plan.
21:00 forrest ehhh
21:01 Tanta it was an old Rails 2.3 app that was 5 years in development
21:01 flowstat_ yeah, I'm at a startup, and luckily they're all really awesome devs who understand what ops is like
21:01 flowstat_ and they made the choices they did for expedience
21:01 flowstat_ but still.... jenkins
21:01 forrest Yeah, I don't like jenkins
21:01 forrest The hope is that by the time your jenkins server needs to be that beefy, you can rip it out and just pay for Travis CI
21:01 whytewolf there are worse things out there than jenkins
21:02 forrest There are, but jenkins is just so janky it sucks
21:02 Tanta I like Jenkins
21:02 Tanta it's a big fancy bash executor
21:02 forrest lol
21:02 flowstat_ hahahaha
21:02 whytewolf lol
21:02 forrest That doesn't clean up after itself and takes 20 minutes to restart ;)
21:02 Tanta it's 'enterprise' quality
21:03 Tanta it beats using Oracle + Hudson though
21:03 forrest I'll agree with that.
21:03 forrest But that's like saying 'hmm, this dog turd is better than this human turd'
21:03 forrest they both still stink
21:03 whytewolf ... isn't hudson just the paid version of jenkins? or has enough of the code split by now? [i haven't looked at hudson in a LONG time]
21:04 Tanta I believe the history is that Jenkins forked from Hudson and a new team took over
21:04 whytewolf well jenkins is the old name.
21:04 whytewolf at least I remember hearing about jenkins before the split
21:05 Tanta https://en.wikipedia.org/wiki/Hudson_%28software%29
21:05 saltstackbot [WIKIPEDIA] Hudson (software) | "Hudson is a continuous integration (CI) tool written in Java, which runs in a servlet container, such as Apache Tomcat or the GlassFish application server. It supports SCM tools including CVS, Subversion, Git, Perforce, Clearcase and RTC, and can execute Apache Ant and Apache Maven based projects, as..."
21:05 Tanta this has the history
21:05 Tanta it was a Sun product called Hudson, then Oracle ate Sun, and the maintainers got mad and forked it
21:05 whytewolf yeah I'm on it now reading
21:05 Tanta kinda like MariaDB / MySQL
21:06 whytewolf well except MySQL actually is still a pretty solid product and oracle hasn't completely trashed it
21:07 lemur joined #salt
21:07 whytewolf I think why mysql survived and hudson didn't is that mysql had a larger following than hudson did. cause CI was just starting to take off when the sun/oracle thing happened. but mysql was deeply entrenched already
21:07 flowstat_ aaaaaand derailed
21:08 whytewolf just like an amtrak
21:08 whytewolf anyway. salt is good
21:08 flowstat_ yep
21:08 Tanta Salt is a joyful tool
21:09 flowstat_ Tanta, how are you managing your secrets?
21:09 racooper and...to bring this back to a salt discussion...issue I mentioned the other day.  why a pkgrepo.managed doesn't create the repofile, but the execution module pkg.mod_repo does....
21:09 racooper this is what I'm seeing happen: https://gist.github.com/racooper/15631d1c1f1c2abf6627
21:09 whytewolf the thing i love about salt is there are SO many ways to do anything. the thing i hate about salt is that there ARE so many ways to do anything
21:09 Tanta I put a file on s3 that gets pulled down to /etc/salt/grains and attach read permissions for the bucket to an instance profile
21:10 Tanta the path and sha1 hash get encoded in pillar
21:10 flowstat_ I've hit that wall which occurs just after "holy crap, salt is amazing, I've got everything running perfectly locally!" and "Yep, my cloud orchestration is actually working"
21:10 whytewolf racooper: I have never seen that problem. it always created the files for me
21:10 flowstat_ and before*
21:10 flowstat_ k
21:10 flowstat_ and the pillar is defined where?
21:10 flowstat_ are you using GitFS?
21:11 Tanta side-by-side with the Salt states in Git
21:11 flowstat_ k
21:11 Tanta the master repo just contains 'salt' and 'pillar' directories in the top-level -- I clone it to a local working copy, and then symlink /srv/pillar  and /srv/salt to the Git repository
21:12 forrest racooper, I feel like I had this issue recently..
21:12 flowstat_ interesting
21:12 Tanta with that setup you don't put any secrets in pillar, just non-secure variable data
21:12 forrest racooper, What distro?
21:12 whytewolf racooper: also i moved away from centos based about a year ago. so honestly don't know if something changed that broke it. do you have an issue open?
21:13 forrest ubuntu I'm assuming?
21:13 whytewolf I doubt it is ubuntu if it is using yum :P
21:13 racooper no, this is CentOS. and no, I haven't opened an issue yet, I was trying to see if I missed something first
21:14 whytewolf let me see if i can find my old configs for some of my old centos boxes
21:15 forrest racooper, Check out my update to your gist
21:15 whytewolf hummm none of my old setups use - name for pkgrepo.managed
21:16 forrest Mine do
21:16 whytewolf in centos?
21:16 forrest yes
21:16 whytewolf I use it for ubuntu
21:19 forrest racooper, Did that work?
21:19 whytewolf wait... why is yumpkg.py in extmods?
21:19 * forrest shrugs
21:19 whytewolf racooper: are you using a custom yumpkg.py?
21:20 protoz joined #salt
21:20 racooper when you said that I just realized. it was testing a previous bug issue...looks like I never removed it. let me work on that and see what I can do with it.
21:20 forrest lol
21:21 forrest Let me know if that change I added works, I can't remember if the gpgkey stuff supports non-local values
21:21 forrest thus why I always use key_url
21:21 forrest racooper, Depending on the results you get, it might be worth adding to this issue I've created: https://github.com/saltstack/salt/issues/28720
21:21 saltstackbot [#28720]title: pkgrepo returns error that does not accurately describe where an issue is. | Today I spent some time troubleshooting this:...
21:21 edrocks joined #salt
21:21 forrest Regarding poor reporting from pkgrepo.managed
21:22 racooper gpgkey is just passed verbatim to the repo file, so salt really doesn't do anything with it directly otherwise.
21:22 forrest gotcha
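(A minimal sketch of a pkgrepo.managed state for a yum-based distro like the one being debugged -- repo name, URLs, and key location are hypothetical; as racooper notes, gpgkey is passed through to the repo file verbatim:)

    example-repo:
      pkgrepo.managed:
        - humanname: Example Repo
        - baseurl: http://mirror.example.com/centos/$releasever/os/$basearch/
        - gpgcheck: 1
        - gpgkey: http://mirror.example.com/RPM-GPG-KEY-example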
21:22 racooper hah. this was the custom yumpkg.py: https://github.com/saltstack/salt/issues/11121
21:22 saltstackbot [#11121]title: pkg.list_upgrades broken on Oracle Enterprise Linux? | On an Oracle Linux server: ever since upgrading to 2014.1.0 from EPEL, pkg.list_upgrades is no longer reporting upgrades available for this server.  however, yum check-updates locally shows three....
21:24 whytewolf hehe.
21:25 viq joined #salt
21:25 viq joined #salt
21:25 protoz joined #salt
21:28 subsignal joined #salt
21:28 newsun joined #salt
21:28 newsun greetings
21:28 newsun I am setting up a vagrant+salt implementation and I've run into a little snag...
21:28 hightekvagabond joined #salt
21:30 newsun I have a jinja .yaml file that I'm loading into both a state file and a pillar using {% import_yaml...%} syntax, only it seems to be looking in the same directory as the pillar or state
21:30 newsun I have /srv/salt and /srv/pillar dirs for those
21:31 newsun I was hoping to use a /config dir next to them to load my jinja file from
21:31 newsun anyone have suggestion on how to load this path?
21:31 racooper forrest, whytewolf, removing the old debug module and running with current code fixed it.  Thanks for catching that.
21:31 forrest racooper, That was all whytewolf
21:32 whytewolf racooper: that was just a "thats odd" observation
21:33 conan_the_destro joined #salt
21:33 whytewolf newsun: if you mean things like config files that are used for file imports and whatnot, you need to add a line for that directory to your base_root so it gets sucked up into the salt:// filesystem
21:33 aidalgol joined #salt
21:34 forrest newsun, In addition look at how things are handled via the defaults.yaml file: https://github.com/saltstack-formulas/salt-formula/blob/master/salt/defaults.yaml
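(What whytewolf is gesturing at -- he corrects himself to file_roots further down -- would look something like this in the master config, or the minion config when masterless; the extra directory name is hypothetical. With it in place, {% import_yaml 'configVars.yaml' %} in a state resolves against salt://; jinja in pillar files resolves against pillar_roots instead, which is likely why the same import behaves differently there.)

    file_roots:
      base:
        - /srv/salt
        - /srv/config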
21:35 snuggie I'm having a problem forming a dictionary in jinja so I think I'll ask my original question. Is there any way to output two different functions with one mine.get call? I want to return an IP address and a grain
21:36 whytewolf snuggie: thats easy, NO you can not get two mine items with one call.
21:36 snuggie whytewolf: so how would I create a line in a config file with two different mine functions?
21:37 whytewolf snuggie: by setting variables before you get to the line config item and using them in it?
21:38 brianfeister joined #salt
21:40 whytewolf {% set host_ip = salt.mine.get('minion','host_ip')[0] %}{% set the_grain = salt.mine.get('minion','grains_iwant')[0] %}
21:40 whytewolf [adjusted to your use, of course]
21:40 solidsnack joined #salt
21:40 snuggie yeah, the problem is I want an array of servers
21:40 snuggie this is for zookeeper's zoo.cfg
21:41 TyrfingMjolnir joined #salt
21:41 snuggie which requires the hostname (gotten from mine), the zkid (gotten from grain), and the IP address (gotten from mine)
21:41 snuggie but there are 5 lines in a row for the 5 servers
21:41 snuggie so you have to build an array of dicts
21:41 whytewolf are the minion_id = fqdn?
21:42 snuggie I think it ends up being the hostname, not fqdn, though that seems to work
21:42 linjan joined #salt
21:42 snuggie like I have for server, addrs in salt['mine.get']("role:%s" % grains['role'], 'network.ip_addrs', expr_form='grain').items()
21:42 snuggie which gets me the hostname and the ip address
21:42 snuggie and the zkid is stored in a grain
21:44 UForgotten joined #salt
21:45 whytewolf okay, i don't feel like testing this. but something more like {% set host[currenttargetedminion]['ip'] = salt.mine.get(currenttargetedminion,'ip_function')[0] %}
21:45 clintberry joined #salt
21:45 aw110f joined #salt
21:47 whytewolf or switch to pyobjects for better use of python
21:47 snuggie ok, let me give it a try
21:47 snuggie thanks
21:47 whytewolf but basically build an array beforehand to use
21:47 whytewolf dict. sorry
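(A minimal sketch of building that dict up front for something like zoo.cfg; 'zk_id' is a hypothetical mine alias publishing snuggie's grain, and the ports are zookeeper's usual defaults. Salt's jinja enables the `do` extension, so the dict can be populated in a loop:)

    {% set ips = salt['mine.get']('role:zookeeper', 'network.ip_addrs', expr_form='grain') %}
    {% set ids = salt['mine.get']('role:zookeeper', 'zk_id', expr_form='grain') %}
    {% set servers = {} %}
    {% for host, addrs in ips.items() %}
    {%   do servers.update({host: {'ip': addrs[0], 'id': ids.get(host)}}) %}
    {% endfor %}
    {% for host, info in servers.items() %}
    server.{{ info.id }}={{ info.ip }}:2888:3888
    {% endfor %}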
21:47 flowstat_ joined #salt
21:48 aw110f joined #salt
21:51 whatapain joined #salt
21:52 AndreasLutro joined #salt
21:54 aw110f joined #salt
21:54 bhosmer_ joined #salt
21:55 newsun forrest: I'm unsure what you refer to in the 'base_dir' is this something I set in my Vagrantfile or elsewhere?
21:56 forrest It's set in the config for salt.
21:57 aw110f joined #salt
22:01 hightekvagabond1 joined #salt
22:03 zmalone My /var/log/salt log files are full of errors about pyOpenSSL being too old after I moved to 2015.8.3, but I can't figure out what's triggering them.  Has anyone else hit this? (I already opened https://github.com/saltstack/salt/issues/29581 )
22:03 saltstackbot [#29581]title: Complaints about pyOpenSSL version on Saltstack 2015.8.3 | When 2015.8.3 is installed on Ubuntu 14.04 from repo.saltstack.com, I get the following error on run:...
22:06 newsun forrest: unsure where I set that, as salt is preinstalled with the box I'm using with Vagrant.
22:06 oida joined #salt
22:07 forrest If you're going through the tutorial I believe it explains some portion of that, if not: https://docs.saltstack.com/en/latest/ref/configuration/minion.html
22:07 spuder joined #salt
22:07 whytewolf zmalone: nope although someone else was asking about it though. I think the ubuntu version of pyOpenSSL is 0.13.something. I don't have the issue cause i have 0.15.1 installed through pip
22:08 zmalone Weird, I'd expect the Salt packages to include the version of pyOpenSSL they depend on, like they do with other dependencies that the OS doesn't provide
22:09 whytewolf I would too.
22:09 whytewolf it is odd
22:09 whytewolf I'm looking at the repo and they don't have anything for openssl in their repo
22:10 AndreasLutro the dependency is optional
22:10 jrgochan joined #salt
22:10 whytewolf shouldn't flood logs if optional
22:10 AndreasLutro and forcing an upgrade for a security-focused package like that could potentially compromise a system
22:10 newsun forrest: do you mean root_dir? https://docs.saltstack.com/en/latest/ref/configuration/minion.html#root-dir
22:11 AndreasLutro it probably wouldn't but that's probably the philosophy behind it
22:12 zmalone Other packages, like the zeromq package, are overwriting the system packages, so it's probably just an oversight.
22:12 newsun or is this FILE_ROOTS base: ?
22:13 AndreasLutro true, but zmq is nowhere near as widely used as openssl
22:13 forrest newsun, whytewolf made the comment regarding base_root, I assume root_dir.
22:13 whytewolf python-future?
22:13 newsun oic yeah sorry I lumped those together
22:13 whytewolf did i say base_root? I think i meant file_root
22:14 forrest whytewolf, Yeah you did, WHAT AN AMATEUR ;)
22:14 oida joined #salt
22:14 whytewolf lol
22:15 whytewolf sorry, juggling 6 different screens between 3 systems.
22:15 solidsnack joined #salt
22:16 toastedpenguin joined #salt
22:16 zmalone pyOpenSSL isn't OpenSSL, it's a wrapper for OpenSSL.
22:16 zmalone (which you probably know)
22:17 timoguin joined #salt
22:17 edrocks joined #salt
22:18 AndreasLutro even if it's just bindings, I still think it's better to leave it alone
22:18 AndreasLutro if you really want to fix it you can just add a pip.installed state
22:19 AndreasLutro but then the salt maintainers aren't responsible for anything going wrong, which is probably what they were aiming for in the first place :p
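(A minimal sketch of the pip.installed fix AndreasLutro mentions; the version floor is an assumption based on what the x509 modules reportedly want. reload_modules makes the running salt process pick up the new library without a restart:)

    pyopenssl:
      pip.installed:
        - name: pyOpenSSL >= 0.14
        - reload_modules: True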
22:22 brianfeister joined #salt
22:24 brianfeister joined #salt
22:25 burp_ joined #salt
22:26 timoguin joined #salt
22:27 overyander joined #salt
22:28 zmalone looks like it slid in with https://clinta.github.io/salt-x509-details/ , and the dep probably wasn't noticed
22:28 cberndt joined #salt
22:30 newsun whytewolf: I could not get the file_roots to work, however I do have a workaround that is good enough for my case. Basically I just need to get data from outside a pillar to trickle down, so I'm just setting a couple more pillar vars that pass into the state via pillar data
22:31 newsun so now I just keep my configVars.yaml file in my pillar dir and load this into the pillar with jinja and the rest trickles down as needed.
22:33 whytewolf ahh kewl. least you got it working for you
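(A minimal sketch of newsun's workaround -- file and key names are hypothetical; since configVars.yaml lives under the pillar root, import_yaml inside a pillar SLS can find it:)

    # /srv/pillar/webapp.sls
    {% import_yaml 'configVars.yaml' as cfg %}
    webapp:
      listen_port: {{ cfg.listen_port }}
      log_level: {{ cfg.get('log_level', 'info') }}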
22:34 whytewolf zmalone: ahh that does look like the case as x509 is a pretty recent addition to salt. [and was needed]
22:37 abednarik joined #salt
22:37 joshin joined #salt
22:37 joshin joined #salt
22:43 toastedpenguin joined #salt
22:45 lemur joined #salt
22:47 perfectsine joined #salt
22:47 job i have the feeling my CIDR restriction is being ignored https://github.com/NLNOG/ring-salt/blob/master/pillars/mine/init.sls#L6
22:48 job https://p.6core.net/p/DNYDhMXCdLxguZdHQk8sVhTM
22:48 job i would not expect fe80::1031:50ff:fe04:cd4c there, it doesn't fall within 2000::/3
22:51 hightekvagabond joined #salt
22:51 faeroe joined #salt
22:51 whytewolf job: does salt-call network.ip_addrs6 cidr='2000::/3' work as you expect?
22:52 whytewolf also those :: could be throwing off the yaml
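(For reference, a mine entry along the lines of the pillar linked above, with the cidr value quoted so the colons can't confuse the YAML parser; the alias name is hypothetical:)

    mine_functions:
      public_v6:
        mine_function: network.ip_addrs6
        cidr: '2000::/3'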
22:52 flowstat_ joined #salt
22:52 job https://p.6core.net/p/ffc0iCX62REktq2BqtZ1Iy4v
22:53 whytewolf hummm. that looks like a no
22:54 whytewolf that might be a reason to post a bug report cause that looks completely broken
22:55 whytewolf [don't think a lot of testing has been done with salt in ipv6 environments]
22:56 job yeah, this doesn't look like it has support for ipv6 https://github.com/saltstack/salt/blob/develop/salt/utils/network.py#L886-L904
22:58 HeinMueck joined #salt
22:59 FredFoo joined #salt
23:00 whytewolf well ipaddress is the python3 version of the lib
23:01 FredFoo joined #salt
23:01 whytewolf if it cannot import ipaddress from python3 it uses this one https://github.com/saltstack/salt/blob/develop/salt/ext/ipaddress.py
23:01 job oh, right, py3
23:01 job how do i get salt to output all the versions
23:01 mosen joined #salt
23:01 job like i see in many bug reports
23:02 whytewolf salt --versions or salt-call test.versions
23:02 whytewolf [first one is master, second is minion]
23:03 flowstat_ joined #salt
23:05 job i opened https://github.com/saltstack/salt/issues/29585
23:05 saltstackbot [#29585]title: cidr argument in salt.modules.network.ip_addrs6() is broken | The 'cidr' argument to `salt.modules.network.ipaddrs6()` appears to be broken. The documentation states: "*Providing a CIDR via 'cidr="2000::/3"' will return only the addresses which are within that subnet.*", however I observed the following on a server with IPv6 on eth0:...
23:05 wryfi i have multiple logstash servers, and i want to randomly assign which one of them my hosts send logs to. is there an easy way to get a random value from a list in jinja?
23:05 whytewolf you can also pass -l debug to a salt-call to find more in-depth info about what salt is doing.
23:06 yomilk joined #salt
23:07 snarfy joined #salt
23:07 whytewolf wryfi: a fancy use of |random
23:07 canci joined #salt
23:08 wryfi whytewolf: yup, i just found that in the jinja docs. should have looked before asking. ;)
23:08 job {{ ['a','b','c']|random }} => 'c'
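(A minimal sketch applied to wryfi's case -- the pillar key is hypothetical. Note that |random re-rolls on every render, so the chosen server can change between highstates; deriving an index from the minion id instead would make the pick stable per host:)

    {% set logstash_server = salt['pillar.get']('logstash:servers', [])|random %}
    host: {{ logstash_server }}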
23:08 ronrib joined #salt
23:09 Sucks joined #salt
23:09 whytewolf wryfi: thats always how it goes.
23:09 whytewolf ;)
23:10 yomilk joined #salt
23:11 falenn joined #salt
23:12 snarfy wheeee. I upgraded my master from 2015.8.1 to 2015.8.3 and gitfs with dulwich backend stopped working
23:12 snarfy and now i can see no evidence in the logs
23:12 snarfy it appears to be updating correctly according to the logs
23:12 snarfy and the pillar gitfs works swimmingly
23:13 snarfy but it cannot find any of my states anymore.
23:13 abednarik joined #salt
23:14 Striki joined #salt
23:14 andrew_v joined #salt
23:15 larsfronius joined #salt
23:17 Micromus joined #salt
23:18 mikepea joined #salt
23:19 moderation joined #salt
23:20 quasiben joined #salt
23:21 yomilk joined #salt
23:23 flowstat_ joined #salt
23:27 falenn joined #salt
23:28 job whytewolf, i found the bug
23:29 solidsnack joined #salt
23:29 quasiben joined #salt
23:29 mcsaltypants joined #salt
23:29 brianfeister joined #salt
23:29 mcsaltypants hello
23:30 whytewolf job: nice!
23:30 borgstrom joined #salt
23:30 mcsaltypants can someone help me with a couple gitfs saltstack questions?
23:32 colegatron joined #salt
23:33 job strategically removing 3 characters seemed most appropriate https://github.com/saltstack/salt/pull/29587
23:33 saltstackbot [#29587]title: Fix the 'cidr' arg in salt.modules.network.ip_addrs6() | The two arguments to ip_in_subnet() should be swapped around, but...
23:35 whytewolf oh nice
23:36 whytewolf very simple fix
23:38 job mmm, did i file the PR against the proper branch?
23:41 whytewolf humm, that one should probably have been against 2015.8 but you would have to rebase your fix to be against 2015.8 also [cause you don't want everything from devel going into 2015.8]
23:41 job so file a second PR?
23:42 whytewolf or ask them to back merge it
23:42 whytewolf I would go with the lazy option myself
23:42 job they can just cherry-pick it into 2015.8
23:43 whytewolf yeah
23:44 lemur joined #salt
23:46 snarfy mcsaltypants, maybe but i'm having gitfs issues as of this mornin
23:47 mcsaltypants hey @snarfy
23:48 snarfy something in 2015.8.3 caused it to stop working for me.
23:48 snarfy with the dulwich backend
23:49 mcsaltypants well, i'm new to gitfs. I have my master config set to pull data from my remote, which i think it does correctly. and when i add a file to my github repo and run this on my master it appears to pick up the new file:
23:49 mcsaltypants my-test-minion1.domain:
    - srv/salt/salt-states/formulas/nginx/init.sls
    - srv/salt/salt-states/formulas/nginx/init2.sls
    - srv/salt/salt-states/formulas/nginx/nginx.conf
23:49 mcsaltypants my command: salt my-test-minion1.domain cp.list_master
23:50 mcsaltypants but i also have a mount point set up in my master config. Shouldn't the file be replicated on my master on the mount point automatically? Or do i have to do something further to make that happen?
23:50 mcsaltypants i'm using pygit2
23:51 flowstat_ joined #salt
23:51 burp_ joined #salt
23:58 snarfy mcsaltypants, nope
23:58 snarfy it's mostly stored in memory
23:58 snarfy with a cache in /var/cache/salt/master
23:58 mcsaltypants oh ok, that clears some things up
23:59 snarfy when the mount point is specified in the master config, you can use the files in /srv/salt as a backup
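(A minimal sketch of the kind of gitfs setup being discussed -- the remote URL and mountpoint are hypothetical. gitfs serves files out of the master's cache under /var/cache/salt/master, so nothing is written to /srv/salt; keeping the roots backend alongside it leaves /srv/salt available as the fallback snarfy mentions:)

    # /etc/salt/master
    fileserver_backend:
      - git
      - roots

    gitfs_provider: pygit2
    gitfs_remotes:
      - https://github.com/example/salt-states.git:
        - mountpoint: salt://salt-states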
