Time | Nick | Message |
00:00 |
|
jalaziz_ joined #salt |
00:00 |
|
stanchan joined #salt |
00:03 |
|
shaggy_surfer joined #salt |
00:05 |
|
Guest3774 joined #salt |
00:08 |
robm |
Thanks for the response number5, I am using it already, but was curious if there was something for limits. I will use the method that iggy mentioned. |
00:08 |
|
aron_kexp joined #salt |
00:13 |
|
ajw0100 joined #salt |
00:15 |
|
vu joined #salt |
00:16 |
|
theologian joined #salt |
00:16 |
|
ekristen joined #salt |
00:17 |
|
otter768 joined #salt |
00:18 |
|
dude051 joined #salt |
00:19 |
|
dude051 joined #salt |
00:23 |
|
viq joined #salt |
00:24 |
|
peters-tx joined #salt |
00:25 |
|
scott2b joined #salt |
00:27 |
andrej |
What could be causing an intermittent behaviour where the master will sometimes echo out results for a salt command, and sometimes it won't? |
00:29 |
|
conan_the_destro joined #salt |
00:29 |
|
lesel left #salt |
00:29 |
|
dusel joined #salt |
00:31 |
hal58th |
minions aren't responding fast enough? |
00:31 |
|
bluenemo_ joined #salt |
00:32 |
|
aqua^mac joined #salt |
00:33 |
|
scott2b joined #salt |
00:34 |
iggy |
yeah, throw a couple test.pings in there first |
00:36 |
andrej |
they come back within 0.995 - 1.005 seconds |
00:37 |
andrej |
I keep seeing message authentication failures on both the master and the minion(s) |
00:38 |
iggy |
check issues on github and see if any of the common fixes help that |
00:38 |
andrej |
heh |
00:40 |
|
GabLeRoux joined #salt |
00:45 |
|
scott2b joined #salt |
00:45 |
|
bhosmer_ joined #salt |
00:59 |
Tritlo |
Would you recommend ansible or salt for personal use? |
01:03 |
|
KyleG joined #salt |
01:03 |
|
KyleG joined #salt |
01:03 |
markmari_ |
I would like to use gitfs to transfer a dir tree to some machines, so my salt code is /srv/www: file.recurse: - source: salt://tos/www |
01:03 |
markmari_ |
is that possible? when I do a salt call salt is reporting that none of the sources are there |
01:05 |
|
jimklo_ joined #salt |
01:05 |
markmari_ |
there aren't any sls files in that machine, but I'm trying to keep from installing git and ssh keys, etc. on these webservers |
01:06 |
markmari_ |
and I want my files inside the VPC when the box comes up so they come online faster |
01:12 |
|
jer_ joined #salt |
01:17 |
|
tkharju joined #salt |
01:20 |
|
aqua^mac joined #salt |
01:26 |
__number5__ |
file.recurse can only copy stuff *inside* your salt states tree |
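To make __number5__'s point concrete: with gitfs configured on the master, the git repo is merged into the Salt fileserver, so a `salt://` source can reference repo paths without git or keys on the minions. Below is markmari_'s state written out in the same state-dict style smkelly pastes later in the log; the `tos/www` path is from his message, and the gitfs option names (`fileserver_backend`, `gitfs_remotes`) are real master config keys, though the exact repo URL would be his own.

```python
# markmari_'s state as a state-style dict. For salt://tos/www to resolve,
# the master must have 'git' in fileserver_backend and the repo listed
# under gitfs_remotes; 'tos/www' is then relative to the repo root,
# not to /srv/salt on the minion or master.
www_tree = {
    '/srv/www': {
        'file.recurse': [
            {'source': 'salt://tos/www'},  # served by gitfs, not a local tree
        ],
    },
}

assert 'file.recurse' in www_tree['/srv/www']
```

If `salt-call` reports that none of the sources are there, the usual suspects are a missing `git` entry in `fileserver_backend` or a path that is not at the repo root the master checked out.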
01:28 |
|
scott2b joined #salt |
01:28 |
|
mohae joined #salt |
01:29 |
|
GabLeRoux joined #salt |
01:33 |
fxhp |
cmd.append('{0}'.format(repository)) <- that sort of looks funky |
01:34 |
fxhp |
could we just cast to str? |
01:34 |
fxhp |
is this a style thing? |
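fxhp's question has a concrete answer. On Python 3 the two spellings are interchangeable for ordinary values, so it is purely style there; on Python 2, which Salt targeted at the time, `str()` raised on non-ASCII unicode while a unicode format string did not, which is one plausible reason the codebase preferred `.format()`. A small sketch (the Python 2 behaviour is described in comments, since this runs on Python 3):

```python
# '{0}'.format(x) vs str(x): equivalent on Python 3 for plain values.
repository = 42
assert '{0}'.format(repository) == str(repository) == '42'

# On Python 2 they differed for unicode input:
#   str(u'caf\xe9')            -> UnicodeEncodeError ('ascii' codec)
#   u'{0}'.format(u'caf\xe9')  -> u'caf\xe9', unchanged
# so the format-based spelling was the safer one for possibly-unicode
# values; on Python 3 casting with str() is equally fine.
```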
01:36 |
|
aparsons joined #salt |
01:36 |
|
ZafarHussaini joined #salt |
01:43 |
|
nitti joined #salt |
01:46 |
|
nethershaw joined #salt |
01:49 |
|
keeth joined #salt |
01:49 |
|
zanhsieh joined #salt |
01:50 |
|
eliasp joined #salt |
01:52 |
|
shaggy_surfer joined #salt |
01:55 |
|
druonysus joined #salt |
01:58 |
|
yetAnotherZero joined #salt |
01:58 |
|
yetAnotherZero joined #salt |
02:00 |
|
msheiny joined #salt |
02:02 |
|
cberndt joined #salt |
02:03 |
|
scott2b left #salt |
02:04 |
|
jasonrm joined #salt |
02:07 |
|
yomilk joined #salt |
02:08 |
|
brendanashworth joined #salt |
02:08 |
|
brendanashworth left #salt |
02:13 |
|
fllr joined #salt |
02:16 |
|
ckao joined #salt |
02:17 |
|
otter768 joined #salt |
02:20 |
|
schlueter joined #salt |
02:22 |
|
schlueter1 joined #salt |
02:26 |
|
speed145a joined #salt |
02:40 |
|
eligos joined #salt |
02:40 |
fxhp |
https://github.com/russellballestrini/salt/commit/bc517c0e1c73a8f6f19fae8da2aa6255b7b5090c |
02:41 |
fxhp |
some reason cmd.run doesn't let this work |
02:45 |
|
s51itxsyc joined #salt |
02:45 |
|
dude051 joined #salt |
02:45 |
|
dude051 joined #salt |
02:47 |
|
ilbot3 joined #salt |
02:47 |
|
Topic for #salt is now Welcome to #salt | SaltConf 2015 is Mar 3-5! http://saltconf.com | 2014.7.1 is the latest | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/ |
02:50 |
|
eightyeight joined #salt |
02:52 |
|
echtish joined #salt |
02:57 |
|
yomilk_ joined #salt |
03:03 |
|
mohae joined #salt |
03:04 |
|
yetAnotherZero joined #salt |
03:04 |
|
markmarine joined #salt |
03:07 |
|
Furao joined #salt |
03:09 |
|
markmarine joined #salt |
03:12 |
|
TyrfingMjolnir joined #salt |
03:13 |
|
elfixit1 joined #salt |
03:15 |
|
yetAnotherZero joined #salt |
03:15 |
|
fllr joined #salt |
03:16 |
jeddi |
I'm trying to do a file.managed for a file with utf-8 characters in it - a '.toprc' file as it happens. I appear to be hitting this bug: https://github.com/saltstack/salt/issues/16651 ... I can't really bump up to the most recent version on master and minions. Any clues on a workaround for managing that file in the short-term? |
03:17 |
jeddi |
The traceback I'm seeing (with file.managed) is: http://rn0.me/show/DZen0JLjSis8GUzFDZck/ |
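The class of failure in saltstack/salt#16651 is a byte/unicode mismatch while building the diff that `file.managed` shows. This is not Salt's actual code, just an illustration of the failure mode and of the decode-first shape of the eventual fix: decoding both sides as utf-8 (with a replacement fallback) before diffing avoids the crash.

```python
# Rough illustration of a unicode diff crash and a decode-first workaround.
import difflib

old = 'Cpu\xa0states'.encode('utf-8')  # a .toprc-style line with non-ASCII
new = 'Cpu states'.encode('utf-8')

# A naive ASCII decode of utf-8 bytes raises, roughly the shape of the bug:
try:
    old.decode('ascii')
    raised = False
except UnicodeDecodeError:
    raised = True
assert raised

# Decoding as utf-8 first (errors='replace' as a belt-and-braces fallback
# for genuinely mixed content) lets difflib produce a diff without choking:
diff = list(difflib.unified_diff(
    old.decode('utf-8', errors='replace').splitlines(),
    new.decode('utf-8', errors='replace').splitlines(),
))
assert len(diff) > 0
```

As a short-term workaround on an older Salt, side-stepping the diff entirely (for instance shipping the file via a state that does not render a content diff) avoids hitting the traceback at all.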
03:19 |
|
Mso150 joined #salt |
03:23 |
|
CeBe1 joined #salt |
03:32 |
|
tligda joined #salt |
03:34 |
|
battaglia joined #salt |
03:37 |
|
Mso150_e joined #salt |
03:38 |
|
markmarine joined #salt |
03:39 |
|
Mso150 joined #salt |
03:41 |
|
forrest joined #salt |
03:42 |
|
markmari_ joined #salt |
03:44 |
|
tligda joined #salt |
03:44 |
|
schristensen joined #salt |
03:45 |
|
agend joined #salt |
03:49 |
|
Furao joined #salt |
04:00 |
|
jer_ joined #salt |
04:04 |
|
bhosmer joined #salt |
04:05 |
|
cotton joined #salt |
04:17 |
|
jerematic joined #salt |
04:19 |
|
otter768 joined #salt |
04:20 |
|
druonysus joined #salt |
04:21 |
|
monkey66 joined #salt |
04:25 |
|
vbabiy joined #salt |
04:36 |
|
monkey66 left #salt |
04:38 |
|
jimklo_ joined #salt |
04:46 |
|
Ryan_Lane joined #salt |
04:48 |
|
aurynn joined #salt |
04:48 |
|
Micromus joined #salt |
04:51 |
|
cberndt joined #salt |
05:00 |
|
tomspur joined #salt |
05:02 |
|
smcquay joined #salt |
05:03 |
|
smcquay joined #salt |
05:13 |
|
gmoro joined #salt |
05:13 |
|
timbyr_ joined #salt |
05:19 |
|
ajw0100 joined #salt |
05:22 |
|
ajw0100 joined #salt |
05:31 |
|
yomilk joined #salt |
05:34 |
|
rome_390 joined #salt |
05:36 |
|
felskrone joined #salt |
05:48 |
joehh2 |
debian packages for 2014.7.1 going up to debian.saltstack.com now |
05:53 |
joehh2 |
ubuntu going up to ppa shortly |
05:54 |
__number5__ |
cool, thanks joehh2 |
06:01 |
|
Saltn00b joined #salt |
06:06 |
Saltn00b |
Anyone hanging out here tonight? I'm running into a problem with a new installation of Salt that I was hoping someone could help with. |
06:07 |
Saltn00b |
I set up salt-ssh and it was working great. After playing around with it, I used the bootstrapper to install the full version onto my master, so that I could try out installing the agent on my minions. |
06:07 |
Saltn00b |
Ever since I did the full install on the master, though, salt-ssh doesn't work. I get: |
06:07 |
Saltn00b |
NameError: global name 'msgpack' is not defined |
06:08 |
Saltn00b |
I know this is a known bug that's fixed in the latest version, but I can't get it to disappear. I've tried reinstalling, but no luck. |
06:08 |
Saltn00b |
Is there a way to completely uninstall salt, then reinstall? |
06:11 |
jeddi |
Saltn00b: what distro are you on? if you're on debian or derivative - dpkg --purge salt-minion salt-common should do it. |

06:11 |
jeddi |
i'm assuming salt state and pillar files are stored in a dvcs, somewhere safe. |
06:11 |
jcockhren |
"safe" |
06:11 |
jcockhren |
sorry. couldn't resist. ;) |
06:12 |
jeddi |
safer than on the system you're about to purge stuff from. |
06:13 |
jcockhren |
yep. |
06:13 |
jeddi |
safety is always relative. :) |
06:13 |
Saltn00b |
I'm on Amazon Linux. Same packages with yum? |
06:13 |
Saltn00b |
And yes, my config files are safe. :) |
06:13 |
jeddi |
dpkg purge *usually* leaves data files around, and from memory usually snots config files unless they've been changed ... but it's still something i'm wary of. |
06:13 |
jeddi |
I'm not sure what Amazon Linux means. |
06:14 |
jeddi |
If you mean EC2, then that doesn't define the distro. |
06:15 |
|
mikeywaites joined #salt |
06:15 |
Saltn00b |
It's the Amazon Linux distro on EC2 (as opposed to the other distros they offer). It's their own CentOS-based distro. |
06:16 |
jeddi |
aha ... okay. been a while since i used ec2. |
06:16 |
jeddi |
Saltn00b: not sure what the yum equiv for dpkg --purge would be - but it can't be hard to determine? have you already tried flushing the entire contents away? sounds like you have. |
06:17 |
Saltn00b |
No worries. I appreciate the help! |
06:17 |
Saltn00b |
That's the next step. I just wanted to make sure that I wasn't missing something obvious, like "install_salt.sh --uninstall" or something.... |
06:18 |
|
Furao joined #salt |
06:19 |
|
otter768 joined #salt |
06:19 |
|
timbyr_ joined #salt |
06:20 |
|
favadi joined #salt |
06:25 |
|
calvinh joined #salt |
06:27 |
|
mikkn joined #salt |
06:28 |
|
Furao joined #salt |
06:28 |
|
Pixionus joined #salt |
06:29 |
jeddi |
yeah - stick with the package management tools first ... |
06:31 |
Saltn00b |
Will do, thanks. Another quick question -- the bug report on this issue describes a workaround where they cleared the thin client code on the minions. Is there a command to force a thin client update, or do I have to ssh into each of the devices and delete the cached version? |
06:32 |
jeddi |
ooh - no idea, sorry. i've not used the salt-ssh stuff at all yet. |
06:32 |
jeddi |
i'm a strictly master/minion kind of chap. |
06:32 |
jeddi |
simple life, and all that. |
06:33 |
Saltn00b |
I'm starting to think that may be the smart way to go.... :) |
06:34 |
|
markmarine joined #salt |
06:34 |
jeddi |
i think on the provisioning front, there's some advantages to salt-ssh ... but ... i really haven't looked into it, as i say, or automating provisioning with salt at all, in fact. |
06:36 |
Saltn00b |
The nice thing about salt-ssh is that I was able to set it up on my master and immediately talk to minions, without having to install anything on them. |
06:37 |
Saltn00b |
They're behind firewalls and connecting to the master via ssh tunnels, so I wrote a roster plugin to get the list of connected devices and dynamically generate the roster. |
06:37 |
|
markmari_ joined #salt |
06:37 |
Saltn00b |
I figured I'd get up and going like that, then set up the minions using salt-ssh. |
06:38 |
jeddi |
the master is behind a firewall, or the minions are? the minions only need to be able to talk to the master , remember. |
06:38 |
Saltn00b |
Only the minions are behind the fw. They can contact the master fine. |
06:39 |
jeddi |
ah, then they'll be fine to 'phone home' and use vanilla master / minion comms (no need for salt-ssh) |
06:39 |
|
calvinh joined #salt |
06:39 |
Saltn00b |
Yes, that's the eventual goal. I just wanted to kick the tires without having to install agents on all the minions. |
06:39 |
jeddi |
though there are other reasons for using salt-ssh apparently. as I say, haven't used it, haven't needed to. |
06:39 |
jeddi |
ahh - gotcha. |
06:40 |
Saltn00b |
I was actually trying out the minion install when the problem happened. Unfortunately, the "upgrade" from just salt-ssh to the full salt install seemed to torpedo the thin client. |
06:40 |
Saltn00b |
I'm still hopeful it's pilot error, though, and not something more insidious with running both at once. |
06:40 |
jeddi |
i wouldn't think so ... you said there's a known bug, is that relating to conflicts with salt-ssh and salt-minion? |
06:40 |
jeddi |
concurrent, same client machine, i mean? |
06:41 |
|
Ryan_Lane joined #salt |
06:42 |
Saltn00b |
https://github.com/saltstack/salt/issues/7913 |
06:42 |
jeddi |
I'm stuck trying to work around this utf-8 problem where file.managed tries to show a diff of the file, and chokes ... weirdly file.managed works just fine on straight up binary files without choking. wonder if there's a way of telling file sls to not try to be smart about it (just do a binary diff, or whatever python equiv it's doing under the hood). nothing doc'd. |
06:42 |
Saltn00b |
The thin client was expecting the msgpack module, which won't necessarily be there. |
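The `NameError: global name 'msgpack' is not defined` that Saltn00b hits is the classic symptom of an unguarded optional import. The usual pattern (and the shape of the fix for this class of bug, though not the literal salt-ssh code) is to guard the import and fail with a clear message instead of a late NameError; `install_msgpack_hint` below is an illustrative helper, not a Salt API.

```python
# Guarded-import pattern for an optional dependency like msgpack.
try:
    import msgpack
    HAS_MSGPACK = True
except ImportError:
    msgpack = None
    HAS_MSGPACK = False

def pack(obj):
    """Serialize obj with msgpack, or explain what is missing."""
    if not HAS_MSGPACK:
        raise RuntimeError(
            'msgpack is not installed; install the python msgpack '
            'bindings on this host and regenerate the thin tarball'
        )
    return msgpack.packb(obj)
```

With this shape, a host missing the library gets an actionable error the first time `pack()` is called, rather than a NameError from deep inside the serializer.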
06:43 |
jeddi |
aha... msgpack, yeah, rings a bell. |
06:43 |
jeddi |
not as much fun as it used to be to get zmq running on a raspberry pi ... but fun nonetheless, I'm sure. |
06:43 |
Saltn00b |
Funny, that's exactly what my minions are.... |
06:44 |
jeddi |
ah. |
06:44 |
Saltn00b |
The Pi side worked swimmingly, though. I installed Salt on one of them at the same time I upgraded the master, and afterwards that was the only one that would respond. |
06:45 |
Saltn00b |
No doubt because Salt installed msgpack. |
06:45 |
jeddi |
i used to have four of them ... but kept on giving them away to relations to run xbmc on (etc) ... i need to order another handful. the way i've done my state files means it takes ages to compile and then implement initial runs on those poor little devices. |
06:45 |
jeddi |
you're using raspbian wheezy? |
06:45 |
|
jimklo_ joined #salt |
06:46 |
Saltn00b |
Yes, for now. We may move to a custom Arch build eventually, but for now Wheezy is getting it done. |
06:47 |
|
calvinh_ joined #salt |
06:47 |
jeddi |
ack. |
06:47 |
Saltn00b |
We've got about 2 dozen deployed in our customers' houses. That's why I'm looking at Salt -- it's quickly becoming a pain to ssh into each of them every time we need to do an upgrade. |
06:47 |
jeddi |
i could imagine! |
06:49 |
|
ajw0100 joined #salt |
06:49 |
|
stoogenmeyer_ joined #salt |
06:49 |
|
calvinh_ joined #salt |
06:49 |
jeddi |
i'm starting on a project to run a half dozen of them around the perimeter of the property with onboard cameras and the 'motion' package, to keep an eye on things, as it were. been pondering the (non-salt) problem of distributing everything each unit sees as soon as it sees it, in case that one device is compromised. |
06:50 |
jeddi |
i'm pre-configuring with salt - though it's probably as easy to just make a gold 8gb sd card build and replicate then change hostname / ip - but this way i can periodically upgrade them. |
06:50 |
jeddi |
but, yeah, as i say, watch out for the massive performance hit you take when your recipes get lengthy. i've bumped the timeouts up a bit for minions to respond. |
06:50 |
|
krelo joined #salt |
06:51 |
felskrone |
Saltn00b: depending on what you need to do, dsh might help you already and it's easier to set up than salt http://www.netfort.gr.jp/~dancer/software/dsh.html.en |
06:52 |
felskrone |
it's nothing compared to salt, but you can manage groups of hosts |
06:55 |
Saltn00b |
We're doing the golden image SD card thing right now. It's fine for bootstrapping the business, but exponentially grows into a huge pain. |
06:56 |
|
calvinh joined #salt |
06:56 |
joehh2 |
packages for ubuntu precise and trusty have gone to launchpad ppa |
06:56 |
Saltn00b |
And thanks for the advice -- I'll keep an eye on the timeouts. As it is, even the testing I was doing with salt-ssh was slow. Ping responses were taking 20-30s. Another reason why I wanted to try using minion agents. |
07:00 |
jeddi |
Saltn00b: you may find the slightly hacky way of using salt to push out major stuff primarily, and perhaps ship out sh / py / etc scripts that then do the actual work ... may be slightly more performant. but, well, i've found once the boxes are configured, performance isn't so bad. |
07:01 |
jeddi |
felskrone, Saltn00b: that might be a good way of doing the initial provisioning |
07:02 |
|
colttt joined #salt |
07:03 |
|
smkelly_ joined #salt |
07:04 |
felskrone |
yeah, maybe :-) |
07:08 |
|
AndreasLutro joined #salt |
07:10 |
Saltn00b |
felskrone: Thanks for the tip. I'm looking at that site now, and I'll definitely download it and try it out. |
07:10 |
|
TheThing joined #salt |
07:11 |
|
stoogenmeyer_ joined #salt |
07:12 |
Saltn00b |
jeddi: That makes sense. We don't do too many changes to device configuration. It's primarily code updates, and those are not too big; the overhead of doing all of the connections is the biggest concern. Salt is showing promise for that, and I've already been able to pull some pretty useful reporting information using salt-ssh. |
07:13 |
|
krelo joined #salt |
07:14 |
jeddi |
i do some code distributions, but usually pulling (at the minion) from a read-only gitolite repo ... rather than distributing files / packages directly from the salt master. still, single location, i guess. |
07:15 |
|
markmarine joined #salt |
07:15 |
Saltn00b |
I'm outta here gentlemen. Thanks again for the help! |
07:22 |
|
bfoxwell joined #salt |
07:26 |
|
KermitTheFragger joined #salt |
07:26 |
|
Furao joined #salt |
07:31 |
|
flyboy joined #salt |
07:31 |
|
slafs joined #salt |
07:33 |
|
Morbus joined #salt |
07:35 |
|
mikeywaites joined #salt |
07:35 |
|
krelo joined #salt |
07:37 |
|
toanju joined #salt |
07:38 |
|
slafs left #salt |
07:38 |
|
Auroch joined #salt |
07:39 |
|
laax joined #salt |
07:47 |
|
jerrcs joined #salt |
07:48 |
|
glyf joined #salt |
07:50 |
|
ralala joined #salt |
08:03 |
|
hebz0rl joined #salt |
08:04 |
smkelly |
I'm running 2014.7.0 on FreeBSD 10 and have a state {'zabbix_agentd': {'__env__': 'base', '__sls__': 'zabbix', 'service': [{'enable': True}, {'restart': True}, {'require': [{'pkg': 'zabbix24-agent'}]}, 'running', {'order': 10001}]}. If the service is enabled in rc.conf but not running, teh state doesn't start the service but just errors with: Service zabbix_agentd is already enabled, and is dead |
08:04 |
smkelly |
Any idea what I'm doing wrong? |
08:05 |
|
bhosmer_ joined #salt |
08:06 |
smkelly |
hm, hold that thought. the service may be failing to start |
08:06 |
smkelly |
yup |
08:06 |
smkelly |
question retracted; I'm dumb. |
08:11 |
|
fredvd joined #salt |
08:12 |
|
lb1a joined #salt |
08:15 |
|
timbyr_ joined #salt |
08:15 |
|
gmoro joined #salt |
08:17 |
|
malinoff joined #salt |
08:18 |
|
krelo joined #salt |
08:20 |
|
otter768 joined #salt |
08:23 |
|
kawa2014 joined #salt |
08:24 |
|
TheThing joined #salt |
08:28 |
|
eseyman joined #salt |
08:32 |
|
_blackjid joined #salt |
08:34 |
egil |
I have a situation where I installed develop on my master. Now I want to go back to stable but I'm having some issues .. |
08:35 |
egil |
1) I tried bootstrapping stable version. salt --versions-report still showed develop |
08:35 |
egil |
2) I uninstalled the master and tried to bootstrap it again |
08:35 |
egil |
now salt-master refuses to startup |
08:35 |
egil |
any tips? |
08:36 |
|
zadock joined #salt |
08:43 |
|
bluenemo joined #salt |
08:43 |
|
bluenemo joined #salt |
08:48 |
|
intellix joined #salt |
08:52 |
|
trikke joined #salt |
08:54 |
|
krelo joined #salt |
08:59 |
jeddi |
'yellow denotes a future expected change in configuration' -- does this mean syntax, or that my state files are going to change something? because the text next to each of my yellow paragraphs is 'is in the correct state', which indicates that entity won't be adjusted. |
09:00 |
jeddi |
egil: tail -f /var/log/salt/master & .... and /usr/bin/salt-master |
09:00 |
|
karimb joined #salt |
09:00 |
jeddi |
oh, maybe 'which salt-master' - to confirm you are running the one you think you are. and/or salt-master --version |
09:03 |
|
catpig joined #salt |
09:07 |
|
jtang joined #salt |
09:07 |
|
krelo joined #salt |
09:07 |
|
dkrae joined #salt |
09:09 |
grrrrr |
I'm testing salt-ssh and just can't get it working... followed this tutorial http://www.giantflyingsaucer.com/blog/?p=5061 and getting "No hosts found with target * of type glob" after running "sudo salt-ssh '*' test.ping" |
09:09 |
|
trikke joined #salt |
09:10 |
grrrrr |
any ideas? |
09:10 |
grrrrr |
sudo salt-ssh --roster |
09:10 |
grrrrr |
heheesa |
09:13 |
egil |
jeddi: the problem is, now salt-master wont start and there is No log entry |
09:14 |
|
I3olle joined #salt |
09:16 |
joehh2 |
egil: is a master still running? |
09:16 |
joehh2 |
ps or netstat to see if something is listening on the port |
09:17 |
|
wnkz joined #salt |
09:21 |
egil |
joehh2: no ports are taken, and no salt services running |
09:23 |
egil |
running /usr/bin/salt-master i get a stacktrace: |
09:23 |
egil |
ImportError: No module named cli.caller |
09:24 |
|
N-Mi joined #salt |
09:24 |
|
N-Mi joined #salt |
09:30 |
jeddi |
egil: so you're sure master's not running? then what happens if you bump up logging level to debug (/etc/salt/master) and then launch salt-master directly in the foreground (/usr/bin/salt-master ... as opposed to init.d / service / systemctl) |
09:30 |
|
ramteid joined #salt |
09:30 |
jeddi |
aha - got it - stacktrace. |
09:30 |
egil |
jeddi: it wont start |
09:30 |
jeddi |
sorry - catching up slowly :) |
09:31 |
jeddi |
did you remove the packages via your package manager? and then check if there's any salt remnants on the box? |
09:31 |
egil |
I tried removing using apt, some files remained though |
09:31 |
egil |
I can try and delete everything again |
09:35 |
|
davidone joined #salt |
09:35 |
davidone |
hi all |
09:35 |
davidone |
is there a way to force a propagation to minions of files managed by salt? |
09:37 |
|
jhauser joined #salt |
09:42 |
joehh2 |
egil: that is probably best |
09:42 |
joehh2 |
davidone: file.managed? |
09:42 |
jeddi |
egil: dpkg --purge is your friend. |
09:43 |
egil |
joehh2: Think I've managed to remove everything now (wow, does salt leave traces everywhere), but do you think it's best to use bootstrap or just use apt? |
09:43 |
egil |
jeddi: thanks! |
09:43 |
egil |
never thought of that one |
09:50 |
davidone |
joehh2: salt minion state.sls file.managed? |
09:51 |
jeddi |
egil: apt ... always package management tools if you have the option. |
09:51 |
|
felskrone joined #salt |
09:52 |
jeddi |
davidone: --> http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.managed |
09:53 |
egil |
jeddi: yeah, but doesn't bootstrap try to use package manager if available? |
09:54 |
|
mikkn joined #salt |
09:55 |
egil |
jeddi,joehh2: thanks guys, its working again now |
09:56 |
egil |
Salt: 2014.7.1 |
09:56 |
egil |
now to try and update minions :) |
09:56 |
jeddi |
egil: glad to hear it. :) |
09:57 |
jeddi |
egil: ... not sure about bootstrap's inner workings. haven't really had to defer to it. |
09:57 |
egil |
jeddi: I'll remember dpkg --purge though :) |
09:58 |
jeddi |
egil: dpkg --purge is a tremendously handy thing to have in your toolkit. knowing to look at /var/lib/dpkg/info/*list (where * is your package name) is also handy ... |
09:58 |
davidone |
jeddi: so if I have something like this: http://pastebin.com/zbkheN5g |
09:58 |
davidone |
which command shoud I run on the master to update /usr/local/etc/clamd.conf on minions? |
09:58 |
jeddi |
won't necessarily show you generated configuration and data files, but will tell you if there's remnants sticking around. similar to 'dpkg -l | grep -v ^ii' |
09:58 |
davidone |
salt '*' state.sls file.managed /usr/local/etc/clamd.conf ? |
09:59 |
egil |
jeddi: yeah, I saw that now when looking for traces of salt, so thanks for that. I learned something new :) |
09:59 |
jeddi |
davidone: no, you'd refer to the state (file) name. so, if that file you pasted was called 'foo.sls' you'd run 'salt '*' state.sls foo test=True' |
09:59 |
jeddi |
and once you looked at the output, and see that it isn't doing something Surprising (!), you'd then run that again without the 'test=True' bit |
10:00 |
jeddi |
davidone: does that make sense? |
10:00 |
davidone |
so if I have that stanza in a bigger file with many instructions, I cannot 'isolate' the file.managed block |
10:00 |
jeddi |
egil: apt-get / apt-cache do 95% of the things you need. occasionally ... dpkg needs to be summoned. |
10:01 |
jeddi |
davidone: no. the trick typically there, as I understand it, is to break things up into separate files. you can have sub-directories that refer to groupings of activities - for instance i have /srv/salt/base/ - and within that an 'init.sls' which includes a bunch of other files in there. so I can, if I want, run state.sls base, or I can run state.sls base.root-config-files (for example). |
10:02 |
jeddi |
of course, the counter to this approach is that dependencies need to be referenceable from within each file (or at least you need to be able to deal with on-screen complaints / errors if you've got a configuration type state, that refers back to a separate state file that may do the package installation, as one example) |
10:03 |
jeddi |
davidone: but .. in general terms, break things into logical groupings, small groupings go into files in a sub-directory under /srv/salt, and larger groupings are those sub-directories. of course, this is merely how I do it ... and may not be Best Practice <tm>. |
10:03 |
jeddi |
dinner beckons. |
10:11 |
|
Furao joined #salt |
10:12 |
|
trikke joined #salt |
10:13 |
|
akafred joined #salt |
10:20 |
|
TyrfingMjolnir joined #salt |
10:21 |
|
otter768 joined #salt |
10:23 |
|
Cidan joined #salt |
10:25 |
davidone |
jeddi: yeah, same feelings here |
10:25 |
davidone |
ty |
10:28 |
|
aquinas joined #salt |
10:30 |
|
fxhp joined #salt |
10:30 |
|
karimb joined #salt |
10:31 |
|
karimb joined #salt |
10:41 |
|
ekle joined #salt |
10:41 |
ekle |
hi, i have an endless loop of "Running scheduled job: __master_alive" and "/usr/lib/python2.7/dist-packages/salt/modules/config.py:136: DeprecationWarning: pillar_opts will default to False in the Lithium release" after salt '*' status.master |
10:41 |
ekle |
any suggestions ? |
10:53 |
|
harkx joined #salt |
10:54 |
|
teogop joined #salt |
11:03 |
|
calvinh joined #salt |
11:05 |
|
giantlock joined #salt |
11:23 |
|
Grokzen joined #salt |
11:27 |
|
che-arne joined #salt |
11:27 |
|
AndreasLutro joined #salt |
11:27 |
|
intellix joined #salt |
11:33 |
|
hojgaard joined #salt |
11:34 |
|
krelo joined #salt |
11:38 |
|
zadock joined #salt |
11:44 |
|
paulm- joined #salt |
11:44 |
paulm- |
Can you say "require x OR y" instead of the usual "require x AND y"? |
11:45 |
|
Morbus joined #salt |
11:46 |
|
Zachary_DuBois joined #salt |
11:46 |
|
bytemask joined #salt |
11:48 |
|
APLU joined #salt |
11:49 |
jeddi |
When I run a state.sls or highstate, there's a stack of yellow-paragraphs with 'this is in the correct state', and they obviously don't change anything. Any idea why they're coming up yellow? These are exclusively file.managed and file.recurse stanzas. |
11:49 |
|
aqua^mac joined #salt |
11:49 |
|
shel3over joined #salt |
11:53 |
joehh2 |
jeddi: guessing you are running 2014.7.1 with test=true? |
11:54 |
joehh2 |
probably also debian/ubuntu.... recently upgraded... |
11:54 |
joehh2 |
I believe it is a regression in 2014.7.1 |
11:54 |
|
karimb joined #salt |
11:55 |
joehh2 |
https://github.com/saltstack/salt/issues/18312 |
11:56 |
|
calvinh_ joined #salt |
11:59 |
|
trikke joined #salt |
11:59 |
_ether_ |
paulm-: depending on the structure of your sls files and how they are included, you may achieve this with require_in. |
12:03 |
|
giantlock joined #salt |
12:04 |
|
agend joined #salt |
12:05 |
|
vbabiy joined #salt |
12:07 |
|
bhosmer joined #salt |
12:12 |
|
intellix joined #salt |
12:13 |
|
felskrone joined #salt |
12:15 |
|
colttt joined #salt |
12:17 |
paulm- |
Is it possible to use mysql_database states with only salt-ssh? |
12:18 |
JDog |
Hi. When I use the virtualenv salt state, how do I install the virtual environment as not the root user? |
12:22 |
_ether_ |
JDog: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.virtualenv_mod.html#salt.states.virtualenv_mod.managed did you try the user kwarg? |
12:22 |
|
otter768 joined #salt |
12:23 |
JDog |
No - - have got that page up and didn't see that listed -- silly me. Thanks! |
12:23 |
|
bhosmer joined #salt |
12:26 |
|
]V[ joined #salt |
12:28 |
|
monkey661 joined #salt |
12:39 |
|
Furao joined #salt |
12:40 |
|
xsteadfastx joined #salt |
12:44 |
|
wnkz joined #salt |
12:45 |
|
favadi joined #salt |
12:49 |
jeddi |
joehh2: that very much looks like it ... and I see you're on the task, and in AU also. nicely done. :) |
12:49 |
I3olle |
Hey there. Is it possible to use the return output from a .py file in a .sls formula? I would like to have the output of the following simple script and then use a for loop to generate some files on the master |
12:49 |
I3olle |
import salt.config |
12:49 |
I3olle |
import salt.runner |
12:49 |
I3olle |
opts = salt.config.master_config('/etc/salt/master') |
12:49 |
I3olle |
runner = salt.runner.RunnerClient(opts) |
12:49 |
I3olle |
grains = runner.cmd('cache.grains', []) |
12:49 |
I3olle |
pillar = runner.cmd('cache.pillar', []) |
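To answer I3olle's question: yes, the runner output is ordinary Python data. `cache.grains` and `cache.pillar` return `{minion_id: data}` mappings, so generating files on the master is just a loop over that dict. In the sketch below a canned sample dict stands in for the `runner.cmd('cache.grains', [])` result (so the loop itself is runnable without a master); the minion ids and grain keys are illustrative.

```python
# Looping over cached grains to generate one file per minion on the master.
import os
import tempfile

grains = {  # stand-in for runner.cmd('cache.grains', [])
    'web1': {'os': 'Debian', 'ipv4': ['10.0.0.11']},
    'web2': {'os': 'Debian', 'ipv4': ['10.0.0.12']},
}

outdir = tempfile.mkdtemp()
for minion_id, g in grains.items():
    path = os.path.join(outdir, '{0}.conf'.format(minion_id))
    with open(path, 'w') as fh:
        fh.write('# generated for {0} ({1})\n'.format(minion_id, g['os']))

assert sorted(os.listdir(outdir)) == ['web1.conf', 'web2.conf']
```

The same dict can equally be fed into a jinja-rendered `.sls` via an external pillar or a custom module, if the files need to be produced as part of a state run rather than a standalone script.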
12:50 |
jeddi |
joehh2: it's not just that the stanzas are yellow, and saying 'result: none', but that they appear at all with test=True ... they clog up the output. usually i run test=True before any production run, just to make sure a small sls / pillar change won't bork things savagely ... and expect only a few paragraphs of output, of course. anyway - will wait for an updated package - thank you! |
12:52 |
|
jakubek joined #salt |
12:52 |
jakubek |
any idea how to pass http_proxy to all state.sls? |
12:52 |
|
tomspur joined #salt |
12:54 |
jakubek |
i want to run pkg.install in all states over http_proxy because my server doesn't have access to the internet and i need to tunnel through a proxy |
13:01 |
|
felskrone joined #salt |
13:06 |
|
bhosmer joined #salt |
13:11 |
|
intellix joined #salt |
13:12 |
|
I3olle joined #salt |
13:17 |
|
felskrone1 joined #salt |
13:25 |
|
jerematic joined #salt |
13:33 |
|
]V[_ joined #salt |
13:34 |
|
favadi left #salt |
13:38 |
|
aqua^mac joined #salt |
13:43 |
|
adilroot joined #salt |
13:44 |
|
rypeck joined #salt |
13:49 |
jeddi |
all 'handling users and groups via pillar' tutorials seem to assume you want the same set of users on every host. |
13:49 |
|
cotton joined #salt |
13:50 |
|
xsteadfastx joined #salt |
13:53 |
babilen |
jeddi: Take a look at https://github.com/saltstack-formulas/users-formula |
13:55 |
egil |
Running a local master and some minions i Azure: I loose contact with minions all the time |
13:55 |
egil |
if I run test.ping maybe 10 times, they respond again |
13:55 |
|
dude051 joined #salt |
13:55 |
egil |
has anyone seen this behaviour? |
13:58 |
|
elfixit joined #salt |
13:59 |
|
nitti joined #salt |
14:00 |
|
jeremyr joined #salt |
14:00 |
|
Ymage joined #salt |
14:01 |
|
jerematic joined #salt |
14:04 |
|
racooper joined #salt |
14:04 |
|
pestouille joined #salt |
14:04 |
pestouille |
Hi |
14:05 |
|
GabLeRoux joined #salt |
14:05 |
|
ptinkler joined #salt |
14:06 |
pestouille |
I don’t know if it’s possible with Salt to do this : state A depends on B and C, and I would like state C to reload the service in B but only if B is installed... |
14:06 |
ptinkler |
can salt states/directory paths not have dots in them? |
14:07 |
ptinkler |
i.e. is /home/vagrant/.virtualenvs/myapp: an illegal name for a state? |
14:08 |
pestouille |
you can have ID with that name (I think) |
14:08 |
pestouille |
but state file must always have .sls extension |
14:08 |
Ahlee |
It's a valid name. What error are you getting? |
14:08 |
ze- |
well, the ':' is probably not part of the name :) |
14:09 |
ze- |
pestouille: I don't get what you are trying to do exactly (from your example with A B C) |
14:09 |
ptinkler |
"Rendering SLS python.python-pip failed, render error: expected '<document start>', but found '<block mapping start>'" is the error |
14:09 |
ptinkler |
and points to the start of my next state in the file, which I don't think there's anything wrong with |
14:09 |
pestouille |
ze-: I have A (php5) with B (apache) and C (newrelic) dependencies |
14:09 |
pestouille |
If I install newrelic (or change its configuration file) I need to reload Apache |
14:10 |
Ahlee |
ptinkler: can you gist or similar the state? It's likely a syntax issue |
14:10 |
pestouille |
but… If I install php-fpm instead of mod-php for php, I must tell newrelic to reload php-fpm instead of Apache |
14:10 |
ptinkler |
sure I'll write one, one mo |
14:10 |
|
mdasilva joined #salt |
14:10 |
ze- |
A, B and C are pkg.install? |
14:10 |
pestouille |
yep |
14:11 |
pestouille |
I have included service.running also in B |
14:11 |
ze- |
that's a B', other state :) |
14:11 |
pestouille |
It worked well with watch_in in newrelic state but… it doesn’t know if it must reload apache2 or php-fpm |
14:11 |
ze- |
so, have B' watch C. service.running: reload: True watch: C |
14:11 |
pestouille |
exactly ! |
14:12 |
ptinkler |
Ahlee: http://pastebin.com/10DXKCwL |
14:12 |
ze- |
how do you install php-fpm or apache2 ? |
14:12 |
pestouille |
so I might have two declarations for newrelic |
14:12 |
pestouille |
? |
14:12 |
Ahlee |
ptinkler: testing |
14:12 |
ze- |
pestouille: I guess you have a state installing php-fpm or mod-php, depending on something ? |
14:12 |
pestouille |
nop |
14:13 |
pestouille |
i have in my top.sls => - php5-fpm (or mod-php) |
14:13 |
ze- |
well, your "something" is in top.sls |
14:13 |
ze- |
you can never have php5-fpm and mod-php loaded at once |
14:14 |
ze- |
get a "php-service" state (same name) in both files, reloading either apache or php-fpm. |
14:14 |
pestouille |
ze-: that s it |
14:14 |
ze- |
and your newrelic can watch_in that state name. |
14:14 |
|
_prime_ joined #salt |
14:14 |
pestouille |
ze-: let me show you a pastebin of all of this :) |
14:17 |
pestouille |
http://pastebin.com/MH4xDhv2 |
14:18 |
pestouille |
at the end, when I change newrelic.ini I should reload either apache2 or php5-fpm depending on what has been previously installed... |
14:18 |
Ahlee |
ptinkler: so what is python-pkgs? |
14:18 |
|
JDiPierro joined #salt |
14:19 |
jeddi |
babilen: woo - thanks for that link - it's certainly got much more complex than the last time i trawled through the saltstack-formulas repo. mind, at 230 lines, that init.sls is pretty frightening :) |
14:19 |
ptinkler |
Ahlee: just installing python libs, so it installs python-pip, python-dev, python-virtualenv and build-essential |
14:20 |
ptinkler |
the whole thing works when I remove the dots from the python-pip sls, which is why I assumed it was that |
14:20 |
Ahlee |
strange |
14:20 |
Ahlee |
what version? |
14:20 |
pestouille |
ptinkler: If you have a doubt on /home/vagrant/.virtualenvs/myapp |
14:20 |
pestouille |
you can change this to virtualenv-myapp: |
14:20 |
pestouille |
- name: /home/vagrant/.virtualenvs/myapp |
14:20 |
|
subsignal joined #salt |
14:21 |
babilen |
jeddi: We have been using it in production for quite some time and it is working great though |
14:21 |
pestouille |
but it should work the same in both situations |
14:21 |
Ahlee |
also the whitespace before /home/vagrant |
14:21 |
ptinkler |
pestouille: I'll try that now |
14:21 |
Ahlee |
you're offset one space on the first stanza, not offset on second |
14:21 |
Ahlee |
it's invalid yaml |
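For reference, a consistently indented version of the state under discussion might look like the sketch below. The state function and requirements path are assumptions, not ptinkler's actual file; the point is the uniform two-space indentation — mixing a one-space offset in one stanza with none in the next is what produces the "expected '&lt;document start&gt;'" render error.

```yaml
# Sketch: virtualenv state with uniform two-space indentation throughout.
# The dotted path is a legal state ID; only inconsistent indentation breaks it.
/home/vagrant/.virtualenvs/myapp:
  virtualenv.managed:
    - requirements: salt://myapp/requirements.txt  # hypothetical path
    - require:
      - pkg: python-virtualenv
```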
14:22 |
xmj |
oh noes :( |
14:22 |
ze- |
pestouille: you have include troubles. |
14:22 |
jeddi |
babilen: nice. will have a deeper ponder on the morrow - it's a bit much for my after-midnight elderly brain to take in now. i do recall previously trying to work out how to get ssh keys handled sanely between hosts, given they're a function of both where you are, and where you're configuring (if that makes sense). |
14:22 |
pestouille |
ze-: probably :) |
14:22 |
ze- |
php5/web includes newrelic/php which includes php5/fpm |
14:22 |
|
cpowell joined #salt |
14:22 |
ze- |
so, you install both, php5-fpm and mod php |
14:22 |
ptinkler |
Ahlee: ah, stupid me :( |
14:23 |
pestouille |
ze-: indeed… the include was a requirement for the « watch_in » |
14:23 |
|
otter768 joined #salt |
14:23 |
Ahlee |
ptinkler: don't worry about it, a linter is sorely needed |
14:24 |
ptinkler |
yeah it's working now :) thanks |
14:24 |
ze- |
pestouille: i don't see any easy way where you can state.sls newrelic.php directly. |
14:25 |
pestouille |
I was thinking of some kind of global var using pillar. But It sounds ugly |
14:25 |
ze- |
pestouille: but get 2 different files, that both provide php (apache & mod php - or - apache & php-fpm), and both should provide a php: service, but restarting the adequate service |
14:25 |
pestouille |
it should be a common case that salt might address, but I can’t think of an elegant way to do it |
14:25 |
|
xmj left #salt |
14:26 |
pestouille |
the problem is not with php & apache |
14:26 |
pestouille |
it's the third dependency, on newrelic |
14:26 |
|
vbabiy joined #salt |
14:26 |
pestouille |
I already have two different php sls files |
14:26 |
pestouille |
one for php-fpm and another for apache |
14:27 |
ze- |
pestouille: get the same state name with different action. |
14:27 |
pestouille |
I thought state IDs must be unique across all states |
14:28 |
ze- |
pestouille: yeah, but it's not a problem if you can only include one of those. |
14:28 |
ze- |
globally unique, and fails if you try to install both (mod-php & fpm) |
14:28 |
|
redzaku joined #salt |
14:33 |
|
karimb joined #salt |
14:33 |
|
mpanetta joined #salt |
14:35 |
ze- |
pestouille: http://pastebin.com/UNbP5P2H |
14:35 |
ze- |
and in top, you can either include php.mod or php.fpm to install the version you want. |
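The pattern ze- describes — the same state ID defined in two mutually exclusive files, each restarting the right service — could look roughly like this. File names, paths, and the state IDs are illustrative, not pestouille's actual tree:

```yaml
# php/fpm.sls -- only ever included alongside php5-fpm, never mod-php
php-service:
  service.running:
    - name: php5-fpm
    - reload: True

# php/mod.sls would define the *same* ID pointing at apache instead:
# php-service:
#   service.running:
#     - name: apache2
#     - reload: True

# newrelic can then watch_in the shared ID without knowing which is active:
newrelic-ini:
  file.managed:
    - name: /etc/php5/conf.d/newrelic.ini   # hypothetical path
    - source: salt://newrelic/newrelic.ini
    - watch_in:
      - service: php-service
```

Because top.sls only ever includes one of the two files, the globally unique ID constraint is satisfied while newrelic stays agnostic about which service it reloads.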
14:37 |
MTecknology |
viq: I don't imagine you ever found time to poke at gitlab-ci perhaps? |
14:38 |
|
gngsk joined #salt |
14:39 |
|
keeth joined #salt |
14:43 |
|
_prime_ joined #salt |
14:44 |
|
pestouille joined #salt |
14:45 |
pestouille |
ze-: thank you very much |
14:45 |
pestouille |
ze-: I’m going to test that |
14:46 |
|
glyf joined #salt |
14:48 |
|
faliarin joined #salt |
14:52 |
|
andrew_v joined #salt |
14:52 |
|
trikke joined #salt |
14:52 |
viq |
MTecknology: no, I didn't yet, but there's a much better chance now than last year |
14:53 |
MTecknology |
:D |
14:54 |
MTecknology |
I haven't touched it either. :( |
14:54 |
|
FRANK_T joined #salt |
14:54 |
pestouille |
ze-: it works… you rock :) |
14:54 |
|
giantlock joined #salt |
14:55 |
FRANK_T |
Do you guys know how salt manages yum packages? For example, I created a yum group that installs a lot of things, and the manual command is yum groupinstall @packagename |
14:55 |
MTecknology |
viq: will I see you at saltconf? |
14:55 |
FRANK_T |
Do you know hot salt handle those packages? |
14:55 |
viq |
MTecknology: wrong side of the pond |
14:55 |
FRANK_T |
how salt handle those packages* |
14:55 |
MTecknology |
bummer |
14:56 |
|
Ouzo_12 joined #salt |
14:58 |
|
Brick joined #salt |
14:59 |
|
hax404 joined #salt |
15:00 |
|
kaptk2 joined #salt |
15:02 |
|
housl joined #salt |
15:02 |
|
krelo_ joined #salt |
15:08 |
|
twellspring joined #salt |
15:11 |
|
dude^2 joined #salt |
15:12 |
|
dude051 joined #salt |
15:13 |
|
favadi joined #salt |
15:14 |
|
Brew joined #salt |
15:17 |
|
mdasilva joined #salt |
15:20 |
|
bostonq joined #salt |
15:22 |
|
Saltn00b joined #salt |
15:24 |
iggy |
FRANK_T: for the most part, salt calls the yum command line util and passes through a lot of what you tell it |
15:27 |
|
aqua^mac joined #salt |
15:28 |
FRANK_T |
iggy how do I add that to my .sls list |
15:29 |
|
pdayton joined #salt |
15:30 |
FRANK_T |
I have a lot of packages, like 50, and 10 groups. I am able to install the packages |
15:30 |
|
tomh- joined #salt |
15:30 |
FRANK_T |
but my problem is the groups that I have |
15:30 |
FRANK_T |
I do not know if they did implement this https://github.com/saltstack/salt/issues/5504 |
15:31 |
|
aquinas joined #salt |
15:31 |
iggy |
it's in the docs |
15:32 |
FRANK_T |
Give me the link if you can please. I cant find that. |
15:34 |
|
monkey66 joined #salt |
15:35 |
|
smcquay joined #salt |
15:36 |
iggy |
you aren't trying very hard |
15:36 |
iggy |
http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.yumpkg.html#salt.modules.yumpkg.group_install |
15:37 |
|
tkharju joined #salt |
15:38 |
FRANK_T |
iggy I thought that this is for a group of nodes |
15:39 |
FRANK_T |
I read this document |
15:39 |
iggy |
presumably, everything in salt is for a group of nodes |
15:39 |
iggy |
or at least has the possibility of being such |
15:39 |
|
ccarney_ROCC joined #salt |
15:40 |
FRANK_T |
I do not know if I am not asking the correct question. |
15:40 |
|
ccarney_ROCC left #salt |
15:40 |
twellspring |
looking for the best way to install minion on existing servers. salt-cloud + saltify, bootstrap-salt.sh, something else? |
15:41 |
FRANK_T |
iggy https://www.refheap.com/fee14917c2c5664562c129b45 |
15:41 |
|
lytchi joined #salt |
15:42 |
lytchi |
win 25 |
15:42 |
lytchi |
oups sorry :> |
15:42 |
FRANK_T |
If I install the one with the @ manually I just have to do yum groupinstall "package name" |
15:43 |
FRANK_T |
I do not have a problem with the one without @ |
15:43 |
iggy |
there's no such option as names to pkg.installed |
15:43 |
iggy |
oh, so there is |
15:43 |
iggy |
but what you really want is pkgs |
15:47 |
FRANK_T |
and iggy this works for me |
15:48 |
FRANK_T |
https://www.refheap.com/a8dd2b9afd5e0a5c4e83dfe2b |
15:48 |
|
chrism_ joined #salt |
15:49 |
|
chrism_ left #salt |
15:50 |
|
chris_m_ joined #salt |
15:50 |
|
jtang joined #salt |
15:52 |
chris_m_ |
morning |
15:52 |
chris_m_ |
ran this command: ./salt.sh '*' test.ping -b 50 reports back on all 102 minions, and then just hangs. any ideas? |
15:55 |
|
glyf joined #salt |
15:56 |
|
paulm- joined #salt |
15:56 |
|
tkharju joined #salt |
15:56 |
FRANK_T |
chris_m_ try salt '*' test.ping |
15:57 |
FRANK_T |
hy -b 50 |
15:57 |
FRANK_T |
why -b 50? |
15:58 |
chris_m_ |
only want to report back so many at a time, like the scaling documentation suggests. could be an overload to have them all report back at once. |
15:58 |
iggy |
chris_m_: salt-run manage.down |
15:59 |
FRANK_T |
got it. |
15:59 |
|
Brew1 joined #salt |
16:00 |
|
ajw0100 joined #salt |
16:00 |
|
SheetiS joined #salt |
16:01 |
manfred |
just booked my travel to saltconf |
16:03 |
|
kermit joined #salt |
16:04 |
|
trikke joined #salt |
16:04 |
|
glyf joined #salt |
16:04 |
|
thedodd joined #salt |
16:05 |
|
pestouille joined #salt |
16:06 |
|
stqism_ joined #salt |
16:07 |
chris_m_ |
thank-you iggy. will research that one |
16:07 |
|
SheetiS1 joined #salt |
16:10 |
FRANK_T |
iggy You were right I was able to install the group using pkg.group_install |
16:14 |
|
pestouille_ joined #salt |
16:14 |
FRANK_T |
I was able to do it like this salt '*' pkg.group_install SC_Lustre |
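If the same group should be applied from a state file rather than ad hoc, one option is to call the execution module through module.run. This is a sketch — the state ID is made up, and SC_Lustre is the group name from the discussion above:

```yaml
# Invoke pkg.group_install from an SLS file. module.run's own 'name' is the
# function to call, so the function's conflicting 'name' argument is passed
# with the m_ prefix (m_name).
install-lustre-group:
  module.run:
    - name: pkg.group_install
    - m_name: SC_Lustre
```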
16:14 |
|
jimklo joined #salt |
16:15 |
|
brian__ joined #salt |
16:15 |
|
bluenemo joined #salt |
16:15 |
|
bluenemo joined #salt |
16:16 |
|
rojem joined #salt |
16:16 |
|
dmorrow joined #salt |
16:17 |
iggy |
see you there manfred |
16:17 |
|
jer_ joined #salt |
16:17 |
|
jimklo joined #salt |
16:18 |
manfred |
I am pumped. |
16:18 |
dmorrow |
hello everyone, I was wondering: does anyone know of anywhere official that historical RPMs are stored? I found some on http://rpmfind.net but was wondering if there is a saltstack official one? |
16:22 |
iggy |
saltstack has never published rpms |
16:22 |
iggy |
but the spec file is in the source. They are easy to make |
16:22 |
|
clintberry joined #salt |
16:24 |
|
otter768 joined #salt |
16:24 |
|
chris_m_ left #salt |
16:25 |
|
jimklo joined #salt |
16:26 |
|
quickdry21 joined #salt |
16:29 |
|
forrest joined #salt |
16:30 |
|
conan_the_destro joined #salt |
16:30 |
|
glyf joined #salt |
16:33 |
|
lb1a joined #salt |
16:33 |
|
josephleon joined #salt |
16:38 |
|
Furao joined #salt |
16:38 |
|
Ozack-work joined #salt |
16:39 |
dmorrow |
ok cool thanks |
16:41 |
|
mdasilva joined #salt |
16:42 |
|
JDiPierro joined #salt |
16:42 |
|
hasues joined #salt |
16:42 |
|
hasues left #salt |
16:44 |
|
tkharju joined #salt |
16:45 |
|
anotherZero joined #salt |
16:45 |
|
jalbretsen joined #salt |
16:46 |
|
felskrone joined #salt |
16:46 |
|
tomspur joined #salt |
16:48 |
|
tligda joined #salt |
16:48 |
|
_JZ_ joined #salt |
16:51 |
|
Andre-B joined #salt |
16:54 |
|
paulm- joined #salt |
16:56 |
|
pestouille left #salt |
17:01 |
|
StDiluted joined #salt |
17:02 |
|
hebz0rl joined #salt |
17:04 |
|
desposo joined #salt |
17:04 |
hobakill |
anyone doing any sssd/winbind/whatever ad joins via salt? |
17:04 |
|
keeth joined #salt |
17:05 |
|
tligda1 joined #salt |
17:06 |
|
lothiraldan joined #salt |
17:06 |
|
AlexStraunoff joined #salt |
17:06 |
|
timoguin joined #salt |
17:06 |
|
tligda1 joined #salt |
17:07 |
|
cpowell joined #salt |
17:09 |
|
theologian joined #salt |
17:10 |
|
spookah joined #salt |
17:12 |
|
lothiraldan_ joined #salt |
17:15 |
|
TyrfingMjolnir joined #salt |
17:15 |
|
aqua^mac joined #salt |
17:20 |
|
bluenemo joined #salt |
17:20 |
|
bluenemo joined #salt |
17:21 |
|
wendall911 joined #salt |
17:22 |
|
KyleG joined #salt |
17:22 |
|
KyleG joined #salt |
17:26 |
|
lothiraldan joined #salt |
17:27 |
|
pdayton joined #salt |
17:27 |
|
Deevolution joined #salt |
17:28 |
|
agend joined #salt |
17:28 |
|
jla joined #salt |
17:29 |
hobakill |
any chance we can get 2014.7.1 out of EPEL testing and into stable? i realize no one here has direct access but i think upvotes help. |
17:30 |
|
Guest89335 joined #salt |
17:30 |
forrest |
hobakill: do you have the link? |
17:30 |
forrest |
hobakill: I can log in and upvote |
17:32 |
hobakill |
forrest, i don't. i always lose it. let me find and post it. in the meantime i just upgraded from epel testing. :) |
17:32 |
forrest |
heh |
17:32 |
|
TyrfingMjolnir joined #salt |
17:32 |
iggy |
push saltstack to move the packaging in house sooner than later |
17:32 |
hobakill |
iggy, amen |
17:32 |
|
vectra joined #salt |
17:33 |
|
aparsons joined #salt |
17:33 |
iggy |
(and preferably with separate repos for the major versions... which I think they do for ubuntu or debian already) |
17:33 |
|
Ryan_Lane joined #salt |
17:34 |
|
mdasilva joined #salt |
17:34 |
hobakill |
forrest, https://apps.fedoraproject.org/packages/salt-master |
17:35 |
|
kitp joined #salt |
17:35 |
hobakill |
forrest, https://admin.fedoraproject.org/updates/salt-2014.7.1-1.el6 |
17:35 |
hobakill |
not sure which one you want |
17:36 |
forrest |
I'm looking at all of them, ugh I always forget how to upvote these... |
17:36 |
|
paulm-- joined #salt |
17:36 |
|
rodder joined #salt |
17:37 |
forrest |
2015-01-28 05:53:59 |
17:37 |
forrest |
This update has reached 7 days in testing and can be pushed to stable now if the maintainer wishes |
17:37 |
forrest |
hobakill: ^ |
17:37 |
|
timoguin joined #salt |
17:37 |
hobakill |
i thought it was 14 for some reason |
17:37 |
rodder |
Hello everyone. Does anyone know of a way to use an entire subnet in the /etc/salt/autosign.com |
17:37 |
rodder |
autosign.conf |
17:38 |
forrest |
hobakill: Nah, that was just the non cent versions anyways |
17:38 |
rodder |
id like to auto accept any keys coming from a kickstart network. |
17:38 |
iggy |
rodder: I'm pretty sure autosign accepts globs |
17:38 |
iggy |
but I've only ever seen it used with names, so... |
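Since autosign entries are matched as shell-style globs against minion IDs (not source IPs, so CIDR won't work), one workaround is to give kickstarted machines a predictable ID pattern and autosign on that. The patterns below are examples only:

```
# /etc/salt/autosign.conf -- one pattern per line, matched against the
# minion ID, not the connecting IP address
kickstart-*
*.provision.example.com
```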
17:38 |
forrest |
hobakill: should be there the day after tomorrow, it has zero upvotes, so mine won't help much :P |
17:38 |
rodder |
I've tried the usual CIDR notation |
17:38 |
forrest |
just going to let it hang out. |
17:38 |
rodder |
no love |
17:39 |
hobakill |
forrest, sounds good. |
17:39 |
|
monkey661 joined #salt |
17:40 |
hobakill |
forrest, one of these days i'm going to learn how to package an RPM from the dev branch ... haven't had much luck so far |
17:41 |
forrest |
hobakill: did you grab the spec and try rpmbuild and it didn't work? |
17:41 |
hobakill |
forrest, yeah it kept looking for old-ass versions of salt that, frankly, i wasn't sure if i needed to tar up and put in the directory |
17:42 |
forrest |
hobakill: ahh yeah I'm not sure either, if terminalmage shows up today, you could ask him if you can't figure it out. |
17:42 |
hobakill |
forrest, sounds good. i should also spend more than 45 minutes trying to get it to work! |
17:43 |
forrest |
The joys of RPMs, where 45 minutes with a pre-made (working) file doesn't result in a working install |
17:43 |
hobakill |
no joke. |
17:43 |
hobakill |
another quick question. can we upgrade windows salt minions via salt yet? i've been manually doing them all and it's a real pain |
17:44 |
|
neogenix joined #salt |
17:45 |
|
josephleon joined #salt |
17:45 |
|
shaggy_surfer joined #salt |
17:51 |
|
aw110f joined #salt |
17:54 |
|
neogenix_ joined #salt |
17:57 |
|
Ouzo_12 joined #salt |
18:01 |
forrest |
hobakill: I honestly don't know, I rarely touch the windows stuff. |
18:01 |
|
saltine joined #salt |
18:01 |
|
desposo joined #salt |
18:03 |
Saltn00b |
Good morning everyone. Can anyone help me with a salt-ssh problem? If I install from bootstrap, then when I try to run salt-ssh, it fails on the minions with "NameError: global name 'msgpack' is not defined". |
18:03 |
Saltn00b |
The versions report returns Salt: 2015.2.0-307-g1fc9a52 |
18:03 |
Saltn00b |
If I install from pip, it works fine. |
18:04 |
Guest78488 |
Saltn00b: sounds like the git install doesn't pull in all the dependencies for you |
18:04 |
hobakill |
forrest, you are lucky |
18:04 |
|
RedundancyD joined #salt |
18:04 |
Saltn00b |
In that case, versions report returns Salt: 2014.7.1 |
18:04 |
Saltn00b |
Guest78488: That's my thought, but the minions shouldn't need msgpack (or anything) for salt-ssh. |
18:05 |
Saltn00b |
There's a similar issue open about this (https://github.com/saltstack/salt/issues/7913), but it was fixed in 2014.7.1. |
18:05 |
Saltn00b |
Could the later code have introduced a regression? |
18:06 |
|
holler joined #salt |
18:07 |
|
pdayton joined #salt |
18:08 |
holler |
hello, I just started getting an error when doing vagrant up (w/salt provision) to rackspace.. ive tried reverting to past commits and same thing.. nothing seems to have changed that I know of so not sure what's causing the error? UnicodeEncodeError: 'ascii' codec can't encode character |
18:08 |
|
gmoro joined #salt |
18:08 |
|
timbyr_ joined #salt |
18:09 |
|
bhosmer_ joined #salt |
18:09 |
|
dgiagio joined #salt |
18:10 |
iggy |
holler: A. salt is python2 only B. python2 has shit unicode/charset handling |
18:11 |
holler |
iggy: would it be anything relating to my actual application code? or something in the provisioner? I havent changed any of the salt files |
18:11 |
Guest78488 |
Saltn00b: good question! |
18:11 |
Guest78488 |
Saltn00b: ping one of the salt devs/ops |
18:11 |
Ryan_Lane |
iggy: the next version of salt supports python3 ;) |
18:12 |
iggy |
"supports" |
18:12 |
|
glyf joined #salt |
18:12 |
iggy |
I know they've done a lot of work, but it's a big project to claim "full support" |
18:13 |
iggy |
holler: if python2 is the default on the systems you are using, then my guess would be that you had some unicode characters sneak in somewhere (often from c&p'ing from a webpage and your ' turns into the unicode version that looks like `) |
18:16 |
|
monkey66 joined #salt |
18:17 |
holler |
iggy: crap... this seems like it could be a needle in a haystack@ |
18:17 |
holler |
! |
18:17 |
holler |
the weird thing is that my local vagrant/salt works |
18:18 |
holler |
the main difference between the local version and rackspace version is rackspace pulls the actual code from github and sets it up wheras the local just symlinks to the host machine |
18:19 |
holler |
how could I find unicode character in a file tree? |
18:19 |
|
Mso150 joined #salt |
18:20 |
|
cpowell joined #salt |
18:23 |
|
Auroch joined #salt |
18:23 |
iggy |
holler: throw all your yaml in http://yaml-online-parser.appspot.com/ (with jinja variables filled in appropriately) |
18:24 |
|
shaggy_surfer joined #salt |
18:25 |
|
otter768 joined #salt |
18:30 |
|
mdasilva joined #salt |
18:31 |
|
rap424 joined #salt |
18:33 |
|
schlueter joined #salt |
18:37 |
|
salten joined #salt |
18:38 |
salten |
hello, when I try to use dockerio states/modules on a set of hosts, I get 'docker.foo' is unavailable. I have docker installed, docker-py installed (with pip, even install --upgrade), and the salt-minions refreshed, what am I missing? |
18:40 |
forrest |
salten: which version of salt are you using? |
18:42 |
salten |
forrest: salt-minion 2014.7.0 (Helium) |
18:43 |
salten |
forrest: ok, so I'm probably referencing states/modules from develop |
18:43 |
salten |
or more recent version |
18:43 |
forrest |
salten: That's what I was wondering as well |
18:43 |
forrest |
but I see this salten: https://github.com/saltstack/salt/blob/v2014.7.0/salt/states/dockerio.py |
18:44 |
forrest |
so maybe you are using something that doesn't exist in that release? |
18:44 |
salten |
I will find out |
18:44 |
salten |
what is the latest stable? |
18:46 |
|
timoguin joined #salt |
18:48 |
|
hax404 joined #salt |
18:49 |
|
timoguin joined #salt |
18:49 |
|
hax404 joined #salt |
18:50 |
|
conan_the_destro joined #salt |
18:50 |
|
adnauseaum left #salt |
18:50 |
|
toanju joined #salt |
18:54 |
|
Guest99681 joined #salt |
18:55 |
GabLeRoux |
Hey there, How would I install php module php_curl (that's for legacy code support). php-mysql worked (pkg: - installed), but I can't seem to find how I could install php_curl with salt |
18:57 |
GabLeRoux |
Oh well, that's not a yum package, that's why |
18:58 |
iggy |
salten: try with docker-py-0.5 (I think latest is more recent and people have had issues with it) |
18:58 |
|
BigBear joined #salt |
19:02 |
Grokzen |
Is raising a exception from a module method the only way to say that something within the method failed? Can i somehow set some retcode or say success=False? |
19:02 |
iggy |
I think most modules just return false |
19:02 |
iggy |
exceptions are frowned upon |
19:03 |
|
jimklo joined #salt |
19:03 |
|
jimklo joined #salt |
19:04 |
|
josephleon joined #salt |
19:04 |
|
ajw0100 joined #salt |
19:04 |
|
aqua^mac joined #salt |
19:06 |
Grokzen |
iggy, mkay |
19:06 |
salten |
iggy: thanks for the pointer |
19:10 |
|
Mso150 joined #salt |
19:11 |
Grokzen |
iggy, Just returning False does not change the success or retcode in the event that is transmitted on the eventbus, it just sets the 'return' field to False |
19:11 |
iggy |
well, that's not what you said to begin with... |
19:12 |
iggy |
I'm 98% sure there isn't a way |
19:12 |
Grokzen |
it was not O.o |
19:13 |
iggy |
reactors/schedulers, etc are notoriously bad about error handling |
19:15 |
|
hax404 joined #salt |
19:17 |
|
druonysus joined #salt |
19:24 |
|
josephleon joined #salt |
19:24 |
|
davet joined #salt |
19:25 |
jla |
my salt-master for just a handful (several) minions is using 1.3g virt and 1.2g res and every minute or so pegs at 99% cpu for several seconds. Does that sound normal? |
19:26 |
|
spootly joined #salt |
19:27 |
|
kitp joined #salt |
19:28 |
iggy |
cpu pegging maybe (depends on cpu, etc.) |
19:28 |
iggy |
memory usage, no |
19:30 |
iggy |
my master with 40+ minions is using less than 512M of mem |
19:30 |
iggy |
res |
19:30 |
iggy |
virt is much higher, but I don't ever pay attention to virt |
19:31 |
jla |
I'll have to re-check what I'm doing. The salt/master log is empty, nothing in messages, daemon, or dmesg related to salt or looking out of the ordinary. I didn't even think I was doing that much with salt yet. |
19:31 |
jla |
thanks |
19:32 |
iggy |
jla: oh, you know one time I had a problem with something like that... We have a git tree that holds our "dist" files (packages, code archives, etc.), and when I used to have it as a gitfs backend mem usage was through the roof |
19:32 |
|
ALLmightySPIFF joined #salt |
19:35 |
|
davet joined #salt |
19:36 |
|
druonysuse joined #salt |
19:39 |
|
glyf joined #salt |
19:40 |
|
berserk joined #salt |
19:42 |
jla |
I had to go check the configs because I had started into salt a couple months ago then got busy and am trying to dive in again. My master doesn't have any git stuff defined. The fileserver_backend is all commented out so defaults |
19:42 |
iggy |
well, there goes that idea :/ |
19:44 |
jla |
I remember thinking about trying multi-master but according to my minion configs I'm not doing that either. What's the best way to see what the master is doing? |
19:44 |
|
druonysus joined #salt |
19:44 |
|
druonysus joined #salt |
19:45 |
|
johanek joined #salt |
19:45 |
|
kitp joined #salt |
19:45 |
|
jeremyr joined #salt |
19:48 |
|
Guest99681 joined #salt |
19:48 |
|
shaggy_surfer joined #salt |
19:48 |
|
bhosmer__ joined #salt |
19:49 |
|
supersheep joined #salt |
19:49 |
|
arif-ali_ joined #salt |
19:49 |
|
lionel_ joined #salt |
19:49 |
|
mik3 joined #salt |
19:50 |
|
iggy__ joined #salt |
19:51 |
|
toanju joined #salt |
19:51 |
|
philipsd6_ joined #salt |
19:52 |
|
GnuLxUsr_ joined #salt |
19:52 |
|
scoates_ joined #salt |
19:52 |
|
malinoff joined #salt |
19:53 |
|
rbjorkli1 joined #salt |
19:55 |
|
mtanski_ joined #salt |
19:56 |
|
smkelly_ joined #salt |
19:56 |
|
MK_FG joined #salt |
19:56 |
|
monkey661 joined #salt |
19:56 |
|
MK_FG joined #salt |
19:57 |
|
Tahm joined #salt |
19:57 |
|
iamtew joined #salt |
19:58 |
|
rhand joined #salt |
19:58 |
|
catpig joined #salt |
19:58 |
|
al joined #salt |
19:58 |
|
whytewolf joined #salt |
19:58 |
|
lamasnik joined #salt |
19:58 |
|
xist joined #salt |
19:59 |
|
Alan_S joined #salt |
19:59 |
|
Mso150_t joined #salt |
20:00 |
|
thedodd joined #salt |
20:00 |
|
shaggy_surfer joined #salt |
20:00 |
|
malinoff_ joined #salt |
20:02 |
Grokzen |
iggy__, Apparently you can set the retcode with __context__["retcode"] = 1337 inside the module function |
20:02 |
Grokzen |
that should be enough to handle errors |
20:10 |
|
giantlock joined #salt |
20:14 |
|
BigBear joined #salt |
20:18 |
|
aparsons joined #salt |
20:18 |
|
chris_m_ joined #salt |
20:19 |
|
quickdry21 joined #salt |
20:19 |
chris_m_ |
afternoon all (those in EST) |
20:19 |
|
johtso joined #salt |
20:20 |
chris_m_ |
qq: instead of using --config-dir every time I execute a salt command (since I am running as a non-root user), is there any master profile where I can auto-define this config path |
20:20 |
|
urtokk joined #salt |
20:20 |
|
toastedpenguin joined #salt |
20:20 |
|
felskrone joined #salt |
20:21 |
ThomasJ |
chris_m_: Not sure, but could you not solve it with an alias? |
20:22 |
|
gattie joined #salt |
20:23 |
|
toastedpenguin joined #salt |
20:23 |
chris_m_ |
yes, that would work too. but, is there any environment properties file (like .bashrc or .profile) that salt reads before executing any commands - would be better :) |
20:24 |
ThomasJ |
Absolutely, can't help you there though :\ |
20:24 |
chris_m_ |
thx Thomas for your help |
20:26 |
|
otter768 joined #salt |
20:26 |
|
supersheep_ joined #salt |
20:27 |
|
serenecloud left #salt |
20:30 |
|
Morbus joined #salt |
20:31 |
|
andrew_v joined #salt |
20:32 |
|
redzaku joined #salt |
20:33 |
chris_m_ |
Thomas - there are two variables that store the environment properties location: SALT_MASTER_CONFIG & SALT_MINION_CONFIG |
20:34 |
|
mr_chris joined #salt |
20:34 |
chris_m_ |
is it as simple as setting these in my user profile to what I need? and then, I don't have to worry about adding --config-dir every time I run salt? |
20:34 |
|
jimklo joined #salt |
20:38 |
ThomasJ |
easiest way to test: export SALT_MASTER_CONFIG='' and then just try :) |
20:38 |
|
Mso150 joined #salt |
20:38 |
ThomasJ |
With the desired path of course |
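In other words, something like the following in the non-root user's ~/.bashrc (the paths are placeholders for wherever the user-owned config tree lives) should make the --config-dir flag unnecessary:

```shell
# Point the salt CLI tools at a user-owned config tree instead of /etc/salt.
export SALT_MASTER_CONFIG="$HOME/salt/etc/master"
export SALT_MINION_CONFIG="$HOME/salt/etc/minion"
```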
20:39 |
|
aparsons joined #salt |
20:40 |
|
zadock joined #salt |
20:41 |
|
big_area joined #salt |
20:42 |
|
ALLmightySPIFF joined #salt |
20:42 |
|
durana joined #salt |
20:44 |
|
Mso150_f joined #salt |
20:45 |
|
cberndt joined #salt |
20:46 |
|
josephleon joined #salt |
20:48 |
|
bluenemo joined #salt |
20:48 |
|
bluenemo joined #salt |
20:52 |
|
jimklo joined #salt |
20:53 |
|
aqua^mac joined #salt |
20:53 |
|
ALLmightySPIFF joined #salt |
20:54 |
|
Mso150 joined #salt |
21:01 |
|
Ozack-work joined #salt |
21:03 |
|
kermit joined #salt |
21:03 |
|
kermit joined #salt |
21:04 |
|
conan_the_destro joined #salt |
21:05 |
|
laax joined #salt |
21:06 |
|
murrdoc joined #salt |
21:07 |
|
snuffychi joined #salt |
21:08 |
andrej |
Is there a way to only capture data pertaining to a certain minion in the master's log? |
21:10 |
iggy |
minions don't log to the masters log |
21:11 |
iggy |
you can use a returner to get the output somewhere else useful |
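For example, the redis returner can be enabled per call (`salt '*' test.ping --return redis`) or wired into the minion config; the host and database values below are placeholders:

```yaml
# /etc/salt/minion -- ship job returns to redis instead of only the master
redis.db: '0'
redis.host: redis.example.com
redis.port: 6379
```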
21:12 |
|
twellspring joined #salt |
21:13 |
|
hal58th1 joined #salt |
21:19 |
|
jtang joined #salt |
21:20 |
|
badon joined #salt |
21:21 |
|
CeBe joined #salt |
21:23 |
|
CeBe joined #salt |
21:25 |
|
snuffychi joined #salt |
21:26 |
|
druonysus joined #salt |
21:26 |
|
druonysus joined #salt |
21:32 |
|
josephleon joined #salt |
21:35 |
|
_2_Maggie joined #salt |
21:35 |
_2_Maggie |
hey guys :D |
21:40 |
|
mdasilva joined #salt |
21:42 |
|
josephleon joined #salt |
21:43 |
* iggy |
ponders when 2015.2 will actually be released |
21:44 |
jtang |
i dont suppose anyone here uses salt-virt as a cloud controller? |
21:44 |
jtang |
I seem to have run into a bunch of regressions that I'm not sure if others have come across |
21:44 |
iggy |
lots of people do |
21:44 |
Ryan_Lane |
salt-virt? |
21:45 |
Ryan_Lane |
lots of people use salt-cloud, I haven't seen many who use salt-virt |
21:45 |
jtang |
specifically runners.virt |
21:45 |
|
Brick joined #salt |
21:45 |
iggy |
oh, that |
21:45 |
iggy |
yeah, you're on your own |
21:45 |
jtang |
the virt.modules are a bit broken as well in 2014.7.1 |
21:45 |
jtang |
at least for me on ubuntu they are a bit borked |
21:46 |
jtang |
iggy, its convenient for spinning up vm's on my laptop for testing |
21:47 |
Ryan_Lane |
I'm sure there's others using it |
21:47 |
Ryan_Lane |
I've seen people talk about it once or twice in here |
21:47 |
jtang |
its just a little frustrating with these regressions |
21:48 |
Ryan_Lane |
jtang: you should open an issue on github |
21:48 |
Ryan_Lane |
in the saltstack/salt repo |
21:48 |
|
mosen joined #salt |
21:48 |
Ryan_Lane |
I know the saltstackinc folks use it |
21:49 |
jtang |
Ryan_Lane, i've already logged it |
21:49 |
Ryan_Lane |
ah. gotcha |
21:49 |
jtang |
I'm trying to track down where the problem is |
21:53 |
|
bfoxwell joined #salt |
21:54 |
rbjorkli1 |
Hi, does anyone know the .sls syntax for the docker.running argument links? |
21:55 |
rbjorkli1 |
I've been trying: |
21:55 |
rbjorkli1 |
- links: |
21:56 |
rbjorkli1 |
/linked_container:/container_with_link/alias |
22:00 |
|
josephleon joined #salt |
22:00 |
iggy |
rbjorkli1: space is important in yaml (I don't know anything about docker.running, but that doesn't look right in a strictly yaml sense) |
22:02 |
|
jimklo_ joined #salt |
22:03 |
|
andrew_v joined #salt |
22:05 |
|
yomilk joined #salt |
22:06 |
rbjorkli1 |
iggy: You are correct, with 4 more spaces it's now almost doing what I wanted |
22:07 |
|
shaggy_surfer joined #salt |
22:08 |
iggy |
rbjorkli1: it looks like whatever you specify there gets passed all the way down to docker.utils.create_host_config, which wants a dict or list it looks like |
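Based on that, a links stanza would likely need to be a mapping of linked container to alias. This is a sketch only — the container names are invented, and the exact shape docker.running accepted varied across dockerio releases:

```yaml
web_container:
  docker.running:
    - container: web
    - links:
        db_container: db   # {linked_container: alias}, per create_host_config
```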
22:09 |
|
smcquay joined #salt |
22:11 |
|
jimklo joined #salt |
22:14 |
|
clintberry joined #salt |
22:17 |
|
hellerbarde joined #salt |
22:26 |
|
glyf joined #salt |
22:26 |
|
otter768 joined #salt |
22:34 |
|
jimklo joined #salt |
22:37 |
|
druonysus joined #salt |
22:42 |
|
aqua^mac joined #salt |
22:44 |
|
dude051 joined #salt |
22:48 |
|
lothiraldan joined #salt |
22:48 |
|
dude051 joined #salt |
22:54 |
|
josephleon joined #salt |
22:56 |
|
smcquay joined #salt |
22:59 |
|
conan_the_destro joined #salt |
23:00 |
|
rogst joined #salt |
23:07 |
|
GabLeRou_ joined #salt |
23:10 |
|
glyf joined #salt |
23:21 |
the_lalelu |
the gpg renderer is nice. thx again salt upstreams. ;) |
23:27 |
|
crane joined #salt |
23:32 |
Edgan |
I am trying to get a redis.conf's contents to depend on a pillar existing or not. The pillar depends on the value of a grain. It all looks right, except it doesn't behave as expected. http://pastebin.com/FWJHUdaG |
23:33 |
|
kermit joined #salt |
23:33 |
|
kermit joined #salt |
23:34 |
iggy |
you have include redis.config, but the only file you have there is /srv/pillar/config.sls |
23:35 |
Edgan |
iggy: that is a typo on my part, it is correct in reality |
23:35 |
iggy |
and you have top targeting redis and no redis file |
23:35 |
Edgan |
http://pastebin.com/rmaMxTf7 |
23:36 |
Edgan |
If it didn't find slave, then it wouldn't write anything in redis.conf |
23:36 |
|
josephleon joined #salt |
23:36 |
Edgan |
I expect to write slaveof on 02 and 03, but it does it on 01 too |
23:37 |
stevednd |
grains['id'] is a system grain, and I'm pretty sure you can't overwrite it |
23:37 |
iggy |
don't use defined |
23:37 |
stevednd |
nor should you try to |
23:37 |
stevednd |
that's the actual minion id grain |
23:37 |
Edgan |
stevednd: I am not trying to overwrite it. I am just trying to if != it |
23:37 |
stevednd |
call your instance_id or something |
23:38 |
iggy |
foo=False <-- that's still defined |
23:38 |
Edgan |
iggy: yes, but on top it is slave: True on 01 |
23:38 |
stevednd |
Edgan: yes, but given what you posted, if 'id' cannot be overridden then grains['id'] will never equal the '01' you have |
23:38 |
Edgan |
iggy: I expect it not to be defined on 01 |
23:39 |
iggy |
if '01' in grains['id'] |
23:39 |
stevednd |
grains['id'] will equal redis-01.foo.com or whatever your actual minion id is |
23:39 |
iggy |
or not in |
23:39 |
Edgan |
iggy: != is only for integers? |
23:39 |
stevednd |
so either use another grain, or do as iggy suggested |
23:40 |
stevednd |
if not '01' in grains['id'] |
23:40 |
iggy |
no, but as stevednd said, your id isn't going to be 01, it's redis-01.foo.com |
23:41 |
Edgan |
iggy: I don't know where you get that. I think your suggestion will work. |
23:45 |
Edgan |
iggy: ok, so that worked, but http://docs.saltstack.com/en/latest/topics/tutorials/pillar.html suggests what I did should have worked, {% if grains['os_family'] == 'RedHat' %} |
23:46 |
|
tomh- joined #salt |
23:46 |
Edgan |
iggy: unless is there some inconsistent difference between == and != |
23:48 |
stevednd |
Edgan: there is no inconsistent difference between the two. In python, you place `not` before the condition. `if not 'something' == yourvar` |
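The substring check iggy suggested can be sketched as a Jinja conditional in the redis.conf template — the minion id pattern (redis-01.foo.com) and the slaveof target are assumptions based on this conversation:

```jinja
{# Sketch: assumes minion ids like redis-01.foo.com, so '01' only
   matches the first node; 02 and 03 get the slaveof line #}
{% if '01' not in grains['id'] %}
slaveof redis-01.foo.com 6379
{% endif %}
```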
23:49 |
rudi_s |
Hi. I'm new to salt and have a general question about packages and services. At the moment my "template" to create and start a service looks like this: https://pbot.rmdir.de/EZNKpCXk-wo3nm5KI7IkiA - if the service depends on e.g. config files, I add watch_in: service: nslcd to the config file. - Is this the recommended way to install/start/enable/restart a service or should I use a different way? Thanks. |
23:50 |
Edgan |
stevednd: That is just silliness. != should equal not == |
23:50 |
rudi_s |
Also, is the require: pkg: nslcd necessary? |
23:50 |
rudi_s |
(And while I'm at it, this creates quite some boilerplate, is there a recommended way to shorten this?) |
23:51 |
|
anotherZero joined #salt |
23:55 |
|
neilf______ joined #salt |
23:56 |
|
scalability-junk joined #salt |
23:57 |
|
meylor joined #salt |
23:59 |
hal58th1 |
rudi_s I believe you want to break that into two separate states. Then the require on the service state would make more sense |
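A sketch of the split hal58th1 suggests — separate pkg, file, and service states, with the service requiring the package and watching the config file so it restarts on changes (the config path and source are assumptions):

```yaml
# Sketch: paths and source location are placeholders
nslcd:
  pkg.installed: []

/etc/nslcd.conf:
  file.managed:
    - source: salt://nslcd/nslcd.conf
    - require:
      - pkg: nslcd

nslcd-service:
  service.running:
    - name: nslcd
    - enable: True
    - require:
      - pkg: nslcd
    - watch:
      - file: /etc/nslcd.conf
```

The watch on the file state replaces the watch_in rudi_s mentioned; either direction works, this just keeps all the wiring in one place.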