
IRC log for #salt, 2015-10-02


All times shown according to UTC.

Time Nick Message
00:01 subsignal joined #salt
00:05 murrdoc1 joined #salt
00:06 ahammond or... not. I can't get __salt__ to work. Nor __pillar__. I've tried the bootstrapping workaround at https://github.com/saltstack/salt/blob/develop/salt/grains/core.py to get access to pillar.get
00:06 ahammond so... I've updated my gist with current status: https://gist.github.com/ahammond/5d78b8a7362baf5ef804
00:07 ahammond note, this module exists specifically to support a custom grain.
00:07 ahammond iggy: ^
00:08 ahammond anyone?
00:10 murrdoc joined #salt
00:10 joe_n joined #salt
00:11 iggy ahammond: how are you testing this? I see a shebang at the top that looks like you're trying to run this outside of salt
00:11 Guest37533 joined #salt
00:14 ahammond salt-call saltutil.sync_all; salt-call syncthing.data
00:15 ahammond should I remove the #! ?
00:16 baweaver joined #salt
00:17 iggy nah, was just making sure
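For what it's worth, the shebang question matters more than it looks: `__salt__` is never defined inside the module file itself; Salt's loader injects it as a module-level global when the module is loaded through salt/salt-call, so running the file directly via its shebang can never work. A minimal illustration (the module and pillar names below are made up):

```python
# Sketch of why __salt__ only exists under salt/salt-call: the loader
# injects it as a global at load time. Run directly via a shebang, no
# injection happens and the name raises NameError.

def data():
    # Look up a pillar value through the injected function table.
    return __salt__['pillar.get']('syncthing:data', {})

# Outside Salt, simulate the loader's injection with a stub:
__salt__ = {'pillar.get': lambda key, default=None: {'folders': []}}
print(data())  # -> {'folders': []}
```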
00:20 hal58th_ joined #salt
00:22 murrdoc joined #salt
00:22 invalidexception joined #salt
00:24 orion What salt state do I want if I want to copy an entire directory recursively from one location on the minion to another?
00:28 kevinquinnyo orion: maybe the rsync module?
00:28 kevinquinnyo just a guess
00:35 orion I mean, I could simply do a `cp -r` but it doesn't feel nice.
00:36 bastiandg joined #salt
00:43 Bryson joined #salt
00:43 invalidexception joined #salt
00:50 opensource_ninja joined #salt
01:07 John_Kang joined #salt
01:11 pm90_ joined #salt
01:13 joe_n joined #salt
01:21 joe_n joined #salt
01:25 ashirogl joined #salt
01:25 ashirogl joined #salt
01:32 invalidexception joined #salt
01:32 msx joined #salt
01:38 bfoxwell joined #salt
01:47 msx joined #salt
01:47 breakingmatter joined #salt
01:51 viq joined #salt
01:55 catpiggest joined #salt
02:04 larsfronius joined #salt
02:06 jodv joined #salt
02:11 zmalone joined #salt
02:12 kevinquinnyo so i have a really annoying problem.  I need to install a package that has a broken init script, and as part of the package's post-installation it tries to start the service, which fails
02:12 kevinquinnyo because the init script is broken
02:12 kevinquinnyo i have my own replacement init script and the package installs and starts just fine
02:12 kevinquinnyo but salt will always error out the first time i run the state
02:13 pcn What distro?
02:13 kevinquinnyo ubuntu 14.04 LTS
02:13 kevinquinnyo the repo is from the official varnish repo
02:13 kevinquinnyo they mixed up the exit status in the start_varnish function in the init script, so it starts, but exits 1
02:14 pcn But it's started by the package?
02:14 kevinquinnyo yeah if you do apt-get install varnish, the last thing it does is try to start the service
02:14 pcn Try putting this in place: https://jpetazzo.github.io/2013/10/06/policy-rc-d-do-not-start-services-automatically/
02:15 pcn If your policy-rc.d exits non-0, then apt/dpkg won't try to bring up the service
02:15 kevinquinnyo pcn thanks
02:16 zmalone joined #salt
02:17 pcn Make sure it doesn't inhibit things you do want to start - Have it be something like [ "$1" == "varnish-cache" ] && exit 101
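pcn's policy-rc.d trick, sketched below (written under /tmp purely for illustration; the real file belongs at /usr/sbin/policy-rc.d, and the exact service name dpkg passes as $1 may differ from "varnish"):

```shell
# Exit 101 means "action forbidden"; dpkg/invoke-rc.d then skips
# starting that service automatically, while everything else is allowed.
cat > /tmp/policy-rc.d <<'EOF'
#!/bin/sh
[ "$1" = "varnish" ] && exit 101
exit 0
EOF
chmod +x /tmp/policy-rc.d
/tmp/policy-rc.d varnish; echo "varnish: $?"   # varnish: 101
/tmp/policy-rc.d apache2; echo "apache2: $?"   # apache2: 0
```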
02:18 favadi joined #salt
02:18 whytewolf http://upstart.ubuntu.com/cookbook/#disabling-a-job-from-automatically-starting
02:18 whytewolf thats for a per job instance.
02:18 favadi joined #salt
02:22 jodv joined #salt
02:24 zmalone You could install the init script first, I think Ubuntu defaults to not overwrite files that are already in place.
02:24 zmalone Then when Varnish installed, your script could be left.
02:26 kevinquinnyo zmalone: good point
02:29 _JZ_ joined #salt
02:29 aw110f joined #salt
02:29 johnkeates left #salt
02:32 zmalone It probably makes sense to hassle Varnish to publish varnish-cache-debian on https://github.com/varnish?page=1, so you can make a pull request to fix the init script.
02:32 zmalone They publish varnish-cache-redhat, which contains those init files, but it looks like varnish-cache-debian is a private repo right now.
02:33 zmalone I'm guessing it's an oversight.
02:34 zmalone https://github.com/varnish/pkg-varnish-cache/issues/1
02:35 zmalone oh, I see
02:35 zmalone git://git.varnish-cache.org/varnish-cache-debian.git
02:35 zmalone Those probably should still end up on github.
02:36 zmalone I guess we'll see what the explanation for keeping them on an "internal" git server is.
02:36 msx joined #salt
02:42 joe_n joined #salt
02:48 otter768 joined #salt
02:56 jalaziz joined #salt
03:02 falenn joined #salt
03:08 timoguin joined #salt
03:12 kevinquinnyo1 joined #salt
03:15 kevinquinnyo can you put an orchestration inside an orchestration
03:22 invalidexception joined #salt
03:26 opensource_ninja joined #salt
03:36 ageorgop joined #salt
03:36 rpx joined #salt
03:39 jodv joined #salt
03:40 otter768 joined #salt
03:42 mohae_ joined #salt
03:50 zmalone joined #salt
03:53 ramteid joined #salt
03:54 ageorgop1 joined #salt
04:00 jodv joined #salt
04:19 und1sk0 joined #salt
04:33 pm90_ joined #salt
04:48 ageorgop joined #salt
04:51 Jl193 joined #salt
04:51 Jl193 Hello
04:53 NV joined #salt
05:07 mehakkahlon joined #salt
05:08 joe_n joined #salt
05:13 clintberry2 joined #salt
05:15 jodv joined #salt
05:29 edulix joined #salt
05:32 felskrone joined #salt
05:36 freelock joined #salt
05:37 svinota joined #salt
05:45 rhodgin joined #salt
05:47 stanchan joined #salt
05:48 rpx joined #salt
05:49 iggy kevinquinnyo: you can put runners in orchestrate... but should you
05:51 chiui joined #salt
05:51 moogyver anyone used a dfs ( like glusterfs ) to keep their /srv dir synced across multiple masters/syndics?
05:56 colttt joined #salt
05:57 kevinquinnyo iggy: the idea behind that was, i have an orchestration already for percona-xtradb-cluster, which necessitates some procedural logic, but if a new node comes online, i want the salt event listener to spawn another orchestration that builds it out in a particular order so that it can join the load-balanced cluster properly
05:58 kevinquinnyo and that orchestration would need to invoke the percona-xtradb-cluster orchestration, among other things (the rest of the states) with some order of operations there as well
06:03 moogyver kevinquinnyo: couldn't you just create a reactor event to start the orchestrate?
06:06 larsfronius joined #salt
06:07 moogyver you could also use event.send in an orchestrate to send an event back to the master to fire another orchestrate off
06:08 kevinquinnyo moogyver: i didnt think about doing it like that
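moogyver's reactor idea, sketched (the event tag, file paths, and orchestration SLS name here are all hypothetical, and reactor syntax varies a little between Salt versions):

```yaml
# /etc/salt/master.d/reactor.conf -- map a custom event tag to a reactor SLS
reactor:
  - 'mycluster/node/up':
    - /srv/reactor/build_node.sls

# /srv/reactor/build_node.sls -- hand off to the orchestrate runner
build_node:
  runner.state.orchestrate:
    - mods: orch.new_node

# a minion (or another orchestration, via event.send) fires the event with:
#   salt-call event.send 'mycluster/node/up'
```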
06:09 cberndt joined #salt
06:10 kevinquinnyo moogyver: as far as your question earlier, why not just use gitfs?
06:11 kevinquinnyo you're using version control for your salt stuff already?  or no?
06:11 moogyver kevinquinnyo: it's on the table.  it's just a large infrastructure ( 200 or so users, 29k minions and about 20 syndics )
06:11 moogyver yeah, we do, although we don't use any of salt's state stuff
06:12 moogyver we just use salt for orchestration and adhoc commands/scripts
06:12 moogyver we're a chef shop :/
06:12 geekatcmu Funny, our chef guys use chef for orchestration
06:13 moogyver it's meh at it.  chef push jobs are also slow, and don't scale.
06:13 moogyver and don't allow adhoc
06:13 kevinquinnyo 29 thousand minions?
06:14 moogyver yeah
06:14 kevinquinnyo good lord
06:14 moogyver i think linkedin is around 70k?
06:14 moogyver or they were at saltconf anyways
06:14 geekatcmu yeah, our chef-using team has maybe 50 servers, and they don't really do ad-hoc.
06:15 evle joined #salt
06:16 moogyver kevinquinnyo: re the gitfs, i'm just unsure as to how well it would scale to that many users/syndics.  we have an internal github setup, but it requires authentication for any action ( clone/view included.. )
06:17 AndreasLutro joined #salt
06:18 kevinquinnyo well you could setup a machine user in say, github with ssh key auth of course
06:18 kevinquinnyo but i have no experience dealing with that kind of scale
06:18 mehakkahlon joined #salt
06:18 kevinquinnyo so i dont know
06:19 moogyver i don't either ( not yet.. )
06:19 moogyver :)
06:19 moogyver we're still in the planning/lab phase, trying to phase out our other product that currently does bare-metal provisioning and adhoc jobs
06:20 kevinquinnyo i dont think the planning/lab phase ever ends
06:20 moogyver lol
06:20 kevinquinnyo it just gets better over time
06:24 rim-k joined #salt
06:25 kevinquinnyo gitfs *seems* like the way to go for that at first glance -- your 20 syndics would just pull salt code down -- you can set it up pretty granularly to pull specific branches, or releases
06:26 kevinquinnyo as opposed to using glusterfs to just sync them all -- that seems like a headache, and an anti-pattern
06:26 kevinquinnyo again at first glance
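For reference, the gitfs setup kevinquinnyo sketches is only a few lines of master config (the repo URL and branch name are placeholders; per-remote options need a reasonably recent Salt):

```yaml
# /etc/salt/master.d/gitfs.conf -- syndics serve states straight from git
fileserver_backend:
  - git
gitfs_remotes:
  - ssh://git@git.example.com/salt-states.git:
    - base: release    # map a release branch to the 'base' environment
```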
06:26 moogyver agreed, although we could do something like gitfs on the master-of-masters
06:26 moogyver and then glusterfs the syndics
06:27 moogyver since no one is going to be making modifications on the syndics
06:27 kevinquinnyo oh that's true, if the syndics are all identical
06:28 joe_n joined #salt
06:29 kevinquinnyo honestly with only 20, and (i assume) very few changes to the files, i wonder if you could just hack the syndic file replication with rsync, you know as iteration #1
06:29 markm joined #salt
06:31 CeBe joined #salt
06:32 kevinquinnyo anyway, it's over my head.  i've had success with glusterfs though.  it's pretty straight-forward, but if you need to change the "replica count" it can get weird.  for full replication like you want to do it should just work
06:33 invalidexception joined #salt
06:37 joe_n joined #salt
06:37 moogyver good to know.  thanks kevinquinnyo.  i've also been looking at something like csync2, which seems to be a bit like a smarter rsync.
06:41 KermitTheFragger joined #salt
06:45 kevinquinnyo moogyver: just remember that perfect is the enemy of good
06:45 kevinquinnyo i'm in no place to give advice, but that i know
06:45 moogyver haha
06:46 moogyver if i can get all this working and replacing the other product, i'll be happy.  doesn't need to be perfect, just better.
06:46 moogyver but i'm expecting twins in feb., so I'm on a short timeline :)
06:49 kevinquinnyo congrats!
06:49 kevinquinnyo i'm in the same boat sans the twins, and the scale of the project
06:49 kevinquinnyo good luck
06:49 kevinquinnyo with all of that ;)
06:54 felskrone moogyver: are you the guy with the multiple BUs requiring their own master on the multimaster discussion on github?
07:03 nadley hi all
07:04 mattiasr joined #salt
07:04 Ashcroft joined #salt
07:06 llua joined #salt
07:06 llua joined #salt
07:07 lb joined #salt
07:08 Ymage joined #salt
07:25 rim-k joined #salt
07:31 felskrone1 joined #salt
07:36 Grokzen joined #salt
07:38 mehakkahlon joined #salt
07:42 aqua^c joined #salt
07:43 TyrfingMjolnir joined #salt
07:43 boargod joined #salt
07:47 rpx joined #salt
07:53 chiui joined #salt
07:54 eseyman joined #salt
08:01 elsmo joined #salt
08:04 cb joined #salt
08:05 Rumbles joined #salt
08:11 thefish joined #salt
08:13 thalleralexander joined #salt
08:15 scoates_ joined #salt
08:17 Xevian joined #salt
08:19 losh joined #salt
08:20 s_kunk joined #salt
08:22 impi joined #salt
08:25 kawa2014 joined #salt
08:25 boargod2 joined #salt
08:36 rofl____ is it possible to force a highstate run even if it is already running?
08:36 rofl____ im running highstate to verify the code in a post-commit
08:36 rofl____ so committing is blocked by highstate
08:37 GreatSnoopy joined #salt
08:38 rofl____ even using test=True doesn't work.. which maybe it should?
08:39 AndreasLutro doubt it, you can't reliably test if a highstate would make any changes if another highstate is already running
08:41 AndreasLutro maybe you can switch your post commit to use show_highstate
08:41 rofl____ need to do some manual linting then
08:41 AndreasLutro or, I see that highstate has a queue arg
08:41 AndreasLutro which defaults to False
08:41 AndreasLutro Instead of failing immediately when another state run is in progress, queue the new state run to begin running once the other has finished.
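The queue argument AndreasLutro quotes can be passed straight on the command line, e.g. (example invocation):

```shell
# Queue behind a running highstate instead of failing immediately
salt-call state.highstate test=True queue=True
```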
08:44 mattiasr joined #salt
08:44 rofl____ AndreasLutro: not bad
08:54 lootic joined #salt
08:55 lootic Hello! I'm trying to use extends in providers but it gives me the following error:  The 'ec2' cloud provider entry in 'db_windows' is trying to extend data from 'windows:ec2' though 'ec2' is not defined in 'db_windows'
08:57 lootic In - extends under "db_windows" I've written "windows:ec2", and I've got a provider called "windows" in that very same file, which is what I'm trying to extend
08:57 lootic anyone who can tell me what I'm doing wrong?
09:00 svinota joined #salt
09:06 larsfronius joined #salt
09:08 larsfron_ joined #salt
09:12 dynamicudpate joined #salt
09:12 mattiasr joined #salt
09:13 notnotpe_ joined #salt
09:13 mohae joined #salt
09:13 rawzone^ joined #salt
09:13 toddnni_ joined #salt
09:14 tedski- joined #salt
09:14 aqua^c_ joined #salt
09:15 sastorsl_ joined #salt
09:15 SaveTheRb0tz joined #salt
09:15 cberndt joined #salt
09:15 Grok joined #salt
09:16 crd_ joined #salt
09:17 baffle_ joined #salt
09:17 babilen_ joined #salt
09:17 jbub_ joined #salt
09:17 lude1 joined #salt
09:17 erjohnso_ joined #salt
09:18 msx1 joined #salt
09:18 erjohnso_ joined #salt
09:18 wm-bot4118 joined #salt
09:18 daemonkeeper joined #salt
09:18 Puckel_ joined #salt
09:19 bernieke joined #salt
09:20 ekkelett joined #salt
09:20 ekkelett joined #salt
09:20 nihe joined #salt
09:21 eightyeight joined #salt
09:21 drags joined #salt
09:21 denys joined #salt
09:22 ropes joined #salt
09:22 sifusam joined #salt
09:22 danemacmillan joined #salt
09:35 zer0def joined #salt
09:47 ashirogl joined #salt
09:49 sgargan joined #salt
09:52 cberndt joined #salt
09:58 zerthimon joined #salt
09:59 rpx joined #salt
10:00 cyborg-one joined #salt
10:11 mehakkahlon joined #salt
10:12 TyrfingMjolnir joined #salt
10:15 mattiasr joined #salt
10:22 joe_n joined #salt
10:30 Xevian joined #salt
10:30 fgimian joined #salt
10:42 giantlock joined #salt
10:46 rogst joined #salt
10:55 izibi how do i output a state file on a minion after jinja rendering is done?
10:57 babilen_ Can you be more specific about "output" ?
11:02 malinoff joined #salt
11:02 Grok Hi. Is there a restriction somewhere such that you should not run cmd.run "pkill -f ..."?
11:03 Grok whenever i run that from cli it exits out with no return data
11:04 babilen Grok: pkill has no return data, does it?
11:04 babilen In that it doesn't have any output that is
11:04 Grok well the problem is really that if i run something like cmd.run "pkill -f ...; sleep 3; echo foobar"
11:04 Grok it exits out instantly and the second command is not run
11:05 Grok it is not even run on the target machine
11:05 babilen How do you know that it isn't executed?
11:05 Grok pkill is executed but none of the subsequent commands
11:05 mattiasr joined #salt
11:05 Grok if i kill apache2
11:05 Grok it dies
11:06 Grok but if i would replace "echo foobar" with a "service apache2 start" it will not start
11:06 babilen Could you try passing "shell: /bin/bash" ?
11:06 keimlink joined #salt
11:07 izibi babilen: just print it for debugging
11:08 Grok babilen, you mean cmd.run 'pkill -f /usr/sbin/httpd2-prefork; sleep 3; service apache2 start' shell=/bin/bash ?
11:08 babilen izibi: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#salt.modules.state.show_sls
11:09 babilen Grok: 'salt '*' cmd.run "sleep 3 ; echo 'foo'"' works fine here, what do you get? And are you sure that 'service' and so on are in PATH ?
11:10 izibi babilen: already found that but it doesn't quite do what i want as it prints the already processed yaml
11:10 babilen Grok: That is also not the example you gave earlier, but differs significantly
11:10 Grok babilen, yes, that exact command works fine for me to, it only fails when i use pkill first
11:10 babilen izibi: So, what do you want exactly?
11:11 izibi babilen: the yaml directly after executing jinja
11:11 babilen Grok: What do you get if you run that?
11:13 babilen izibi: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cp.html#salt.modules.cp.get_template ← is that what you are looking for? I am, as of now, not aware of a way to run a *single* renderer rather than the whole pipeline (which gives you jinja|yaml by default)
11:14 babilen (or get_file_str in there)
11:14 Grok babilen, See the output https://gist.github.com/Grokzen/e89325ab7ce1bc632fa1
11:14 Grok i get nothing :P
11:15 babilen Grok: It might be specific to pkill and the way salt executes commands through subprocess
11:16 izibi babilen: yes, get_template does exactly what I was looking for. thanks
11:18 babilen izibi: Great, sorry .. it is never quite clear whether people asking want the "show_*" or the "get_*" variants :)
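izibi's goal (the SLS after only the jinja pass) can be run from the minion like so (the SLS path and destination are examples):

```shell
# Render only the template (jinja) step of an SLS and write the result out;
# cp.get_template defaults to the jinja renderer
salt-call cp.get_template salt://mystate/init.sls /tmp/rendered.sls
```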
11:19 X67r joined #salt
11:19 babilen Grok: https://github.com/saltstack/salt/blob/develop/salt/modules/ps.py#L243 might come in handy
11:20 babilen Grok: But then: what is the *actual* problem you are trying to solve?
11:20 Grok restart apache from commandline :P
11:21 monkey66 joined #salt
11:21 Grok ye i know about pkill, just found it weird that the command above did not work O.o
11:21 babilen "salt '*' service.restart apache"
11:21 Grok can't do that yet
11:21 babilen How so?
11:21 Grok we have a apache that have a broken init.d script :P
11:21 babilen Well .. then fix the bloody init script
11:21 Grok atleast for the stop part
11:21 babilen Do you need a working one?
11:21 Grok we are in newer version of our platform, but legacy systems can't be upgraded yet
11:22 babilen Sounds broken
11:22 Grok yepp
11:22 Grok well well, i think i will get around it with the pkill module
11:22 babilen Still: Fix the init script
11:23 babilen I mean working around the actual problem is just nonsensical and will simply delay the inevitable
11:23 babilen In the end you have so much cruft in your system that nobody dares to throw away that you will never dare to upgrade
11:23 babilen But then that might already be the case ;)
11:23 Grok i have no control over when they want to touch the legacy systems and change them
11:23 Grok i want to but you know bureaucracy in the way :P
11:24 babilen So, just to get this straight: You are running a webserver on a very old version of $DISTRIBUTION, but upgraded the Apache thereon manually. You furthermore use a totally broken init script and can neither fix that nor upgrade to the current stable release of that distribution?
11:25 babilen What keeps you from, at least, using a fitting init script? Say you pulled apache 2.4 from jessie into, let's be extreme here, bo and are still using the bo init script. Why can't you, in that example, also use the init script for jessie?
11:25 babilen (although I doubt that you would get Apache 2.4 running on bo easily)
11:26 Grok ha :P it sounds something like it is over here
11:26 giantlock joined #salt
11:27 Grok it is even apache 2.2
11:27 Grok upgrading i think is planned within a year or so
11:27 Grok if all goes well
11:27 babilen Can you be more specific or are you not allowed to provide that information? If so that's fine too, but it sounds as if you are looking for a solution to a problem you shouldn't even have in the first place and instead of actually fixing that problem you are applying multiple layers of duct tape to keep it all from falling apart
11:30 Grok the customer i work for has a large platform that we help build, they upgrade their entire prod environment (10k units, ~1k servers) once a year to the next platform. until the next platform is in place with a working apache init.d script, operations tried to run that command to manually restart apache on a lot of servers at the same time to do some certificate change
11:31 Grok it is not nice but that is how the customer does it :P
11:31 babilen okay, $cruft_from_the_olden_days
11:31 Grok yepp
11:31 babilen But then .. i still have the feeling as if the way to go forward is to roll out a working version of the init script
11:31 Grok i am glad i got salt in atleast :P
11:31 babilen Oh, I totally believe you
11:32 Grok that was a battle of its own
11:32 babilen Don't worry, I do understand the situation you are in, but I am simply no friend of piling cruft upon cruft upon cruft simply because some stupid rule keeps you from actually fixing a problem
11:33 babilen And when I see "pkill -f /usr/sbin/httpd2-prefork; sleep 3; service apache2 start" that just screams "fix me"
11:33 Grok yepp :p
11:34 babilen All that being said: I don't know why you can't run that command (you should) and I guess that it has to do with pkill's inner workings and how that affects Python's subprocess module
11:34 babilen Feel free to file a bug on the salt bugtracker, but I wouldn't expect this to be an easy fix .. but then, it might be and other people might just think of something appropriate
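A likely explanation, offered here as an assumption rather than anything confirmed in the channel: pkill -f matches against full command lines, and the shell that cmd.run spawns carries the pkill pattern in its own argv, so the shell kills itself before the rest of the line runs. A standalone demonstration:

```shell
# The outer sh's argv contains pattern_xyzzy_12345, so pkill -f matches
# the shell itself and terminates it; the echo never executes.
sh -c 'pkill -f pattern_xyzzy_12345; sleep 1; echo survived'
# (no output)
```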
11:35 felskrone1 hey Grok, still using eventsd? :-)
11:36 Grok felskrone1, yes we are
11:36 felskrone1 ok, just wondering :-)
11:36 Grok it works really nice
11:37 felskrone1 have you ever had an issue with the daemon not receiving events anymore?
11:38 felskrone1 its still running and dumping its stats into the stats file, just not receiving events anymore
11:38 Grok hmm
11:38 Grok i do not think i have got that yet
11:38 Grok atleast nothing we have detected
11:39 felskrone1 hm damn, it does not happen regularly, only rarely, but i can't figure out what causes it
11:39 Grok the only problem i still get is that last bug i filed
11:40 Grok where multiple workers that used the same tag duplicated the events :P
11:40 Grok but i worked around that in my worker instead
11:40 felskrone1 yeah, i have a few things i still want too, but hardly the time with a pregnant wife and working etc.
11:40 Xevian joined #salt
11:40 Grok :]
11:41 Grok i know the feeling
11:41 breakingmatter joined #salt
11:41 felskrone1 but im getting a new colleague some time soon, that will hopefully ease the pressure at work a bit so i can come back to doing more salt related development
11:42 Grok sounds nice
11:42 Grok eventsd will work hard in our environment in the future, will be fun to see how it performs in high scale environment
11:49 joe_n joined #salt
11:53 felskrone1 have you set up any kind of dashboard yet?
11:54 Grok nope
11:54 Grok that will prolly be a project for next year
11:54 Grok there is a big logging/monitoring project that will be done by then so everything will be included then
11:57 felskrone1 we're using graylog for that, works really well and you can even monitor the threads created/joined, which is the most important value in my opinion :-)
11:59 Grok mkay, will have to look at it
12:00 Grok i think the current plans is to use shinken
12:00 Grok but we will see
12:01 felskrone1 at a glance i think graylog and shinken are two different things, graylog is log-aggregation while shinken is monitoring
12:02 felskrone1 well, different topic for a different day :-)
12:02 Grok ah
12:02 Grok ye it might be some other solution for log-aggregation, i dunno yet :P
12:02 Grok not my project so the future will tell
12:03 felskrone1 btw, what version of eventsd are you running? the latest stable or from develop?
12:04 Grok salt-eventsd==0.9.2 currently
12:04 felskrone1 oh never mind, i have not pushed the latest changes to devel yet
12:04 Grok we have not yet touched that component in a while because it currently works as is
12:04 felskrone1 i thought i had, but im getting older obviously :-)
12:05 Grok :p
12:05 felskrone1 nice to hear that it works so well for you :-)
12:06 Grok ye it is a very nice and easy way to deal with events and the things to do after them
12:07 Grok it offloads the salt master with so much processing
12:09 felskrone1 that was one of the main ideas :-)
12:09 Grok yepp
12:09 felskrone1 i still think that doing everything in one daemon is not the way to go
12:10 DammitJim joined #salt
12:10 Grok are u thinking about a producer/consumer setup, like how rabbitmq & celery work
12:11 DammitJim is salt used in wide area networks where the IP address might not be the same? like with devices that have a cell modem?
12:11 rpx joined #salt
12:13 felskrone1 hm, sort of, i prefer having transport and processing separated to keep things simple. you could do everything eventsd does in the master itself without a problem, but having to restart all my masters just because i changed the name of an event is something i just cant do
12:14 otter768 joined #salt
12:14 felskrone1 but im still running 2014.7, not sure how much those parts have evolved lately
12:16 Grok if you think about some major redesign, please consider the problem someone posted about saving events so they can be reprocessed if a worker fails or something
12:17 Grok i see that as a major feature that is needed in the future
12:17 ponpanderer joined #salt
12:17 Puckel_ joined #salt
12:18 felskrone1 will do, but dont expect this any time soon unless my boss decides that we need that :-)
12:18 Grok yepp :P
12:19 rogst joined #salt
12:19 lionel joined #salt
12:20 felskrone1 btw, a good example of what i really do not like is what systemd does with everything in one daemon :-)
12:20 monkey66 joined #salt
12:21 N-Mi joined #salt
12:22 fxhp joined #salt
12:23 otter768 joined #salt
12:29 babilen Is there an easy way (akin to _modules) to ship new extension_modules for the master?
12:29 babilen I have a utils module that I like to use in pillars and states and it is a hassle to synchronise them
12:33 Trauma joined #salt
12:34 dendazen joined #salt
12:36 babilen (In a sense: How would I keep them in GitFS or do I *really* have to write a state for each module and/or keep a repo checkout on the master?!)
12:37 mr-op5 joined #salt
12:39 shiriru joined #salt
12:43 Grok joined #salt
12:44 eliasp felskrone1: ssssh, don't tell anyone: qlist sys-apps/systemd | grep /bin/ | wc -l → 33 :)
12:45 Antiarc joined #salt
12:47 zer0def quick question in relation to requisites - if state1 fails and state2 has `onfail: state1`, do states requiring state1 still fail as a result of their requisite failing?
12:52 ninkotech joined #salt
12:52 mattiasr joined #salt
12:52 Antiarc joined #salt
12:53 subsignal joined #salt
12:56 mpanetta joined #salt
13:01 ksj how do I force a single command to be executed last? I've tried order: 999999 and it just executes it first. (I've also tried order: 1 and get the same thing)
13:01 ksj I can't easily do requisite statements, because in this context it doesn't make any sense
13:02 babilen zer0def: I would expect them to, yeah
13:03 racooper joined #salt
13:04 AndreasLutro ksj: order: last
13:07 ksj AndreasLutro: thanks
13:07 cpowell joined #salt
13:09 AndreasLutro ksj: states with a numeric order will always take precedence over non-ordered states, that's why 999999 was executed first
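In state form, AndreasLutro's suggestion looks something like this (the state ID and command are made up):

```yaml
# 'order: last' sorts after every other state, numbered or not
final_notify:
  cmd.run:
    - name: /usr/local/bin/notify-done.sh
    - order: last
```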
13:09 bezaban joined #salt
13:11 breakingmatter joined #salt
13:14 pm90_ joined #salt
13:16 zer0def babilen: how would i work around this behavior? for example, states depend on the success of either state1 or state2, where state2 is run `onfail: state1`?
13:21 babilen I don't think that you can express that in terms of requisites as you lack the predicates for that
13:21 babilen But then: Why don't you verify the behaviour first .. I haven't tried it
13:22 zer0def ok, then i'm a bit lost… say, i want to pull in a git repository only when its dir doesn't exist… my initial thought would be to check for whether the repo exists and if not, git.latest it
13:22 babilen Why would it exist?
13:23 zer0def because it might've been pulled in before
13:23 babilen And "its dir does exist" means what? That it has been cloned earlier?
13:23 zer0def yup
13:24 babilen Well, then you can still use git.present
13:24 babilen ah, no .. that does something else IIRC
13:24 zer0def it initializes a git repository, if it doesn't exist there before
13:25 babilen So, you are trying to ensure that a clone isn't updated once you've checked it out?
13:25 zer0def in this case, yes.
13:25 pm90__ joined #salt
13:25 AndreasLutro just use - unless: test -f /path/to/repo/.git
13:26 zer0def sorta felt odd, but i guess that's a solution
13:26 AndreasLutro that being said I think git.latest/git.present has been changed to allow only an initial clone in 2015.8
13:27 babilen You have multiple force_* options
13:27 AndreasLutro oh and my unless should be test -d
13:28 zer0def yeah, i know
13:28 zer0def i guess i overthought the issue
13:29 babilen I find the solution with unless to be perfectly viable
13:29 zer0def oh yeah, that's totally viable, not a doubt
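Pulling the thread together, a sketch of the one-time clone (repo URL and paths are placeholders):

```yaml
# Clone once; the unless guard skips the state entirely on later runs
clone_app_repo:
  git.latest:
    - name: git@git.example.com:org/app.git
    - target: /srv/app
    - unless: test -d /srv/app/.git
```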
13:29 babilen It's a bit weird that you have a "clone *any* revision, but never change it" requirement, but if that's how it is then that's how it is :)
13:30 zer0def yeah, something else is making sure that the repo is at a required version, so i only need this for new machines
13:31 winsalt joined #salt
13:31 babilen And "something else" would break on highstates?
13:31 zer0def and even then, i'll probably need to track the "current" revision somehow, but that's another story
13:31 zer0def let's say the way the git repository is managed can be questionable
13:32 impi joined #salt
13:32 babilen fwiw, I have a customer that needed a couple of repositories around. Our life became *a lot* easier when we replaced all their old cron-job-and-whatnot cruft with salt completely
13:33 edrocks joined #salt
13:34 zer0def oh that's an intended state, but for the time being, i've sorta inherited an infra and am trying to wrap it in salt
13:34 zer0def first, so that it reflects the current state, before i start making reasonable changes
13:35 JDiPierro joined #salt
13:35 perfectsine joined #salt
13:36 perfectsine joined #salt
13:36 perfectsine joined #salt
13:37 perfectsine joined #salt
13:39 mapu joined #salt
13:40 zer0def thanks for the aid AndreasLutro, babilen
13:41 parapov joined #salt
13:42 rpx joined #salt
13:42 protoz joined #salt
13:47 jemejones joined #salt
13:51 I joined #salt
13:54 pravka joined #salt
13:55 zmalone joined #salt
13:56 dekstroza joined #salt
13:57 RandyT_ 8.8.8.8: 8 (8)
13:57 elfixit joined #salt
13:57 JDiPierro 8D
13:58 RandyT_ good morning :-)
13:58 JDiPierro Good Morning 8)
13:59 jdesilet joined #salt
13:59 dekstroza hey there, somewhat of a saltstack newbie here, looking for a bit of help, or at least a hint in the right direction...Have two types of servers: master (only one) and a pool of slaves...Master has a grain called host_ip, which is its ip address, and I am trying to figure out how to make this information available on the slaves, perhaps via mine?
14:00 babilen dekstroza: That is exactly how you do it
14:00 dekstroza problem being all examples of mine show 'builtin functions'
14:00 dekstroza and i cant find anything on how to expose info from grain on a server which has been assigned a specific role
14:00 dyasny joined #salt
14:02 zwi joined #salt
14:02 babilen dekstroza: https://www.refheap.com/110201
14:03 babilen I wouldn't use grains for that as you can't easily get the IP you want from within there
14:04 scoates joined #salt
14:04 dekstroza i did see that, in my case i have more than a few interfaces on the server, and i have to figure out the default one (by looking at what the default route is, over which interface, and what the ip of that interface is), which i have working fine and exposed as a custom grain, which i call host_ip
14:05 dekstroza what confuses me is how to expose that through mine, as all examples show, similar to yours mine_function: network.ip_addrs, where network.ip_addrs is built in
14:05 babilen You don't care about interfaces, but networks
14:06 babilen You want the IP from a specific network. The interface that is being used for that is just an implementation detail that doesn't matter
14:06 andrew_v joined #salt
14:07 babilen Or is that not fixed in your case and $something configures something and you have to reconstruct this information from the routing table?
14:07 zer0def ok, a different question - can i add a state to a state_id i'm extending?
14:08 babilen dekstroza: But to answer your question: You would use "grains.item" as mine_function with the name of the grain as argument
14:08 pravka joined #salt
14:09 kawa2014 joined #salt
14:09 dekstroza network.ip_addrs will return all addresses on this server, except for localhost... and i would need to figure out which of them is the one associated with the default route - beside the point, i got that thing working... question was more along the lines of: can i do something like this:
14:10 sgargan joined #salt
14:10 dekstroza mine_function: grains.host_ip
14:10 babilen dekstroza: I'd still go down the "assign one mine function per network (identified by CIDR)" route and then use that information .. That way you can easily say things like: "I want communication between services to happen in foo_network rather than anything that hardcodes interfaces
14:10 dekstroza where host_ip is my custom grain
14:10 babilen dekstroza: No, grains.item with host_ip as argument
14:10 dekstroza ahh i get it...both suggestions
14:11 babilen dekstroza: And no, network.ip_addrs *IF CALLED IN THE WAY I SHOWED YOU* would only return a single IP, namely the one in the given CIDR range
14:11 dekstroza will try out that
14:11 dekstroza yes, i realize that now...
14:11 babilen Which is a lot more flexible than hardcoding "eth0" which will bite you if you ever happen to have eth1 for that
14:11 dekstroza that bit slipped past me when i read the documentation
14:12 dekstroza perfect...thanks a million @babilen
14:13 babilen https://www.refheap.com/110202 exemplifies mine functions for multiple private networks
14:14 dekstroza yes, I got it...got it working your way with specifying CIDR
14:14 dekstroza awesome, thanks a million
14:14 babilen I typically include the 10_network_addr, 192_168_network_addr and 172_16_network_addr for 10.0.0.0/8, 192.168.0.0/16 and 172.16.0.0/12 networks respectively on every minion and add other (more semantic) subnets as I see fit.
14:15 babilen That way you can easily say "something in 10.0.0.0/8, whatever the minion has" but also be more specific if you have smaller networks for certain situations
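The setup babilen describes can be sketched as minion config. This is a hedged sketch, not his exact files: the function names mirror his `10_network_addr` naming scheme, and `host_ip_mine` shows dekstroza's custom grain exposed through `grains.item` as suggested.

```yaml
# /etc/salt/minion.d/mine.conf — illustrative names
mine_functions:
  10_network_addr:
    - mine_function: network.ip_addrs
    - cidr: 10.0.0.0/8
  192_168_network_addr:
    - mine_function: network.ip_addrs
    - cidr: 192.168.0.0/16
  # the custom grain, published via grains.item with the grain name as argument
  host_ip_mine:
    - mine_function: grains.item
    - host_ip
```

Other minions can then read the published value in a template, e.g. `salt['mine.get']('*', '10_network_addr')`.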
14:15 TooLmaN joined #salt
14:17 babilen In fact the call I make in states is more like "salt['mine.get'](grains['id'], net.addr_mine_function).values()" or "salt['mine.get'](lb.backend_tgt, net.addr_mine_function)" in which both the target expression and mine function names are being set in the pillar
14:18 khaije1 Is there a jinja variable that will give some info about the stanza or sls name?
14:18 XenophonF joined #salt
14:18 sgargan joined #salt
14:19 khaije1 I vaguely remember something like {{ slspath }} or some such ...
14:20 babilen dekstroza: In that example "lb.backend_tgt" contains a suitable target expression to target all workers behind a load balancer, while "net.addr_mine_function" would be 10_network_addr for example.
14:21 Xevian joined #salt
14:21 babilen khaije1: {{ source }}
14:22 lb babilen, i should rename to lb1a :D the lb get highlighted way too often :D
14:22 khaije1 awesome, cheers babilen
14:22 babilen lb: haha :)
14:23 babilen One might say "You get pounded" ;)
14:23 perfectsine joined #salt
14:23 bhosmer joined #salt
14:23 protoz joined #salt
14:23 XenophonF hey all - i'm getting the error "you need to allow pip based installations" when running salt-cloud, deploying ubuntu 14.04 on ec2
14:23 XenophonF does that ring a bell for anyone else?
14:23 babilen "-P"
14:24 TheoSLC joined #salt
14:24 babilen (cf. https://docs.saltstack.com/en/latest/topics/tutorials/salt_bootstrap.html )
14:24 XenophonF i'm invoking salt-cloud like `salt-cloud -p profile instance-name` - is that where I need to use the -P flag?
14:25 XenophonF a couple of days ago I was able to run the same command without any errors
14:25 XenophonF where would I set this flag for salt-bootstrap?
14:25 oravirt joined #salt
14:25 quasiben joined #salt
14:25 XenophonF https://gist.github.com/xenophonf/fd89addebec86e181ea5 is the transcript, if that helps
14:28 XenophonF hm, i guess --script-args?
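XenophonF's guess is on the right track: in salt-cloud the bootstrap flags go into the profile via `script_args`. A sketch, with provider, image, and size as placeholders (only `script_args: -P` is the point here):

```yaml
# /etc/salt/cloud.profiles.d/ubuntu.conf — placeholder values
ubuntu-1404:
  provider: my-ec2-provider
  image: ami-xxxxxxxx
  size: t2.micro
  script: bootstrap-salt
  script_args: -P   # allow the bootstrap script to use pip-based installations
```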
14:31 Akhter joined #salt
14:32 Akhter joined #salt
14:45 kawa2014 joined #salt
14:55 Brew joined #salt
14:57 perfectsine joined #salt
14:59 berserk joined #salt
14:59 TheoSLC joined #salt
15:00 Akhter joined #salt
15:04 patchedmonkey joined #salt
15:05 XenophonF i wonder what the heck changed.  salt-cloud worked perfectly for me earlier this week.  now, with debugging enabled i see profile errors.  not sure what they mean.
15:08 perfectsine joined #salt
15:13 hasues joined #salt
15:13 iamnota joined #salt
15:13 iamnota am
15:13 iamnota hi
15:13 hasues left #salt
15:15 iamnota when i execute a command via "cmd.run", does it run in administrator mode?
15:15 XenophonF iamnota: it's run in the same security context as the salt-minion process, by default
15:15 iamnota (on windows)
15:17 iamnota thank you. but how then do i execute a command as the administrator?
15:17 XenophonF it's actually running with more privileges than an administrator, since by default it's running in the LocalSystem security context
15:18 XenophonF i don't understand your question, iamnota
15:18 XenophonF do you want the command to be run under a particular user account?
15:18 iamnota i have the error
15:18 iamnota [ERROR   ] Salt request timed out. If this error persists, worker_threads may need to be increased. Failed to authenticate!  This is most likely because this user is not permitted to execute commands, but there is a small possibility that a disk error occurred (check disk/inode usage).
15:19 XenophonF that's not enough information - please post the complete transcript of the command to something like gist.github.com
15:19 keltim joined #salt
15:22 XenophonF like, are you invoking cmd.run from the minion or from the master?
15:22 XenophonF what command are you using?
15:22 iamnota in windows i can right-click on cmd and press "execute as administrator".
15:22 XenophonF under what account are you logged into?
15:23 XenophonF if you want to try executing cmd.run from the minion, you'd use salt-call, same as on linux
15:23 XenophonF so you need to open command prompt or powershell as an administrator
15:23 XenophonF cd to wherever you installed salt (if that directory isn't already in the path)
15:24 XenophonF then run `salt-call cmd.run "dir c:\"`
15:24 XenophonF and it should work just fine
15:25 iamnota the master installed in linux. minion - windows. i have a program on windows and i want execute it.
15:25 iamnota ok. i will try
15:26 XenophonF from your linux master you'd do the same thing, only using salt
15:26 XenophonF again, you'd have to run it with elevated privileges
15:26 pcdummy joined #salt
15:26 pcdummy joined #salt
15:26 druonysus joined #salt
15:27 XenophonF the linux equivalent of "Run as administrator" is called sudo (waves hands like a Jedi to gloss over the details that make this analogy... inexact)
15:27 iamnota no
15:27 XenophonF so `sudo windows-minion-id cmd.run "dir c:\"`
15:27 XenophonF er i mean `sudo salt windows-minion-id cmd.run "dir c:\"`
15:27 iamnota i need privileges in windows
15:27 patchedmonkey joined #salt
15:28 pm90_ joined #salt
15:28 XenophonF no, you don't
15:28 iamnota minion on windows can't execute setup.exe
15:28 iamnota i want install a programm in silent mode
15:29 pm90__ joined #salt
15:29 debian112 joined #salt
15:31 XenophonF salt-minion on windows runs as localsystem by default, so it's already got admin rights---technically, it is more powerful than an administrator account
15:31 iamnota salt -v 'asdasdas' cmd.run 'C:\dasdas\setup.exe -i silent'
15:31 Akhter joined #salt
15:32 XenophonF prefix that command with sudo
15:32 iamnota ok. ty. then problem somewhere else.
15:32 XenophonF sudo -v 'asdasdas' cmd.run 'C:\dasdas\setup.exe -i silent'
15:32 XenophonF er, sudo salt ...
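As a state, the silent install being discussed would look roughly like this. Since salt-minion runs as LocalSystem on Windows by default (as XenophonF notes), no extra elevation is needed; the path is the one from the conversation:

```yaml
install_program_silently:
  cmd.run:
    - name: 'C:\dasdas\setup.exe -i silent'
    # cmd.run also accepts runas/password if the command must run
    # under a specific Windows account rather than LocalSystem
```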
15:35 ageorgop joined #salt
15:37 edulix joined #salt
15:42 bhosmer joined #salt
15:43 RedundancyD joined #salt
15:46 DammitJim joined #salt
15:46 DammitJim I just rebuilt a server with the same server name
15:46 DammitJim I deleted the keys from the master
15:47 DammitJim what do I need to do on the minion to re-ask the master to be accepted?
15:47 bhosmer joined #salt
15:50 zmalone configure the /etc/salt/minion file to point to the master, and restart the salt-minion
15:51 DammitJim zmalone, I already had that
15:51 DammitJim I just deleted the minion folder where the keys were
15:51 DammitJim restarted and that seems to have worked
15:58 DammitJim I am running a highstate test=true for a new minion, but it's not returning
15:59 DammitJim what can I do to troubleshoot this?
15:59 DammitJim oh, it just returned saying: Minion did not return
15:59 DammitJim jobs.active returns Running with an id
15:59 bhosmer joined #salt
16:01 favadi joined #salt
16:02 alemeno22 joined #salt
16:02 favadi joined #salt
16:03 hal58th joined #salt
16:04 favadi joined #salt
16:06 favadi joined #salt
16:07 berserk joined #salt
16:07 favadi joined #salt
16:11 moogyver joined #salt
16:12 moogyver felskrone1: yeah, that's me ( re: multimaster )
16:15 fivehole joined #salt
16:22 breakingmatter joined #salt
16:27 stanchan joined #salt
16:29 polishdub joined #salt
16:29 ajw0100 joined #salt
16:30 anotherZero joined #salt
16:30 jefferyharrell joined #salt
16:34 bhosmer joined #salt
16:35 jvv joined #salt
16:38 murrdoc joined #salt
16:39 Akhter joined #salt
16:41 KyleG joined #salt
16:41 KyleG joined #salt
16:42 Akhter joined #salt
16:44 jdesilet joined #salt
16:45 jmreicha joined #salt
16:45 invalidexception joined #salt
16:45 khaije1 awesome ==> https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.pillar_ldap.html
16:47 geekatcmu ... only if the team that runs your LDAP infrastructure can produce a five-nines service.
16:47 writtenoff joined #salt
16:51 RedundancyD ldap is one of those really critical things that no one spends any time learning or maintaining. It's just treated like the bastard stepchild.
16:51 markm joined #salt
16:53 RedundancyD Kinda like the poor lone developer who maintains ntp. Would love help, donation, anything. But if it ever broke the WHOLE world would fall apart.
16:55 geekatcmu pretty much
16:56 babilen RedundancyD: Yeah, I am so happy that we have someone around who is *really* into LDAP
16:58 jmreicha_ joined #salt
16:59 larsfron_ joined #salt
17:00 forrest joined #salt
17:01 forrest joined #salt
17:03 _JZ_ joined #salt
17:03 favadi joined #salt
17:04 aparsons joined #salt
17:05 bhosmer joined #salt
17:05 favadi joined #salt
17:08 Tanta joined #salt
17:10 markm joined #salt
17:11 JDiPierro joined #salt
17:11 iggy Don't need ldap, just get salt to admin all your users
17:13 writtenoff joined #salt
17:16 dgarstang joined #salt
17:16 dgarstang Is there a way to have a master automatically accept a minion's key?
17:16 bluenemo joined #salt
17:18 favadi dgarstang: https://docs.saltstack.com/en/latest/ref/configuration/master.html#auto-accept
17:18 forrest dgarstang: https://docs.saltstack.com/en/latest/ref/configuration/master.html#auto-accept but I would recommend against it unless your network is locked down tight.
17:18 forrest DAMN YOU FAVADI
17:18 forrest ;)
17:18 favadi :)
17:18 favadi anw, avoid it at all costs
17:18 forrest Agreed
17:19 berserk joined #salt
17:20 dgarstang favadi/forrest: Our instances configure themselves on boot. What would you suggest then?
17:20 iggy the better way is to have whatever deploys the minion do it (salt-cloud f.ex.)
17:21 dgarstang In an autoscaling environment, or any environment where instances are fully self provisioning, having to manually accept keys doesn't scale.
17:22 irdan joined #salt
17:24 moogyver you can precreate the keys
17:24 irdan is there a way to pass flags to the 'file.remove' command to replicate the functionality of 'rm -rf' ?
17:24 dgarstang moogyver: Sure, but we're using CloudFormation and chef. Not sure how either of those would help
17:25 favadi dgarstang: please don't, it is #salt
17:25 favadi :)
17:25 dgarstang favadi: sorry?
17:25 favadi anw, you can have paused VM instances that are already accepted
17:26 favadi and boot them as needed
17:26 favadi that's what I do
17:26 moogyver dgarstang: you could send a call from CF to the salt-reactor when a new VM is provisioned to precreate the minion key and place it somewhere available.  and in the chef recipe that provisions the cookbook and installs the salt-minion could have an attribute that points to the URL of the keypair for the cookbook to grab/install.
17:26 moogyver er, the chef cookbook that does the provision, sorry.
17:27 favadi irdan: file.absent (state) will do that
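favadi's answer as a minimal state — `file.absent` removes a file or directory recursively, the declarative equivalent of `rm -rf` (path is a placeholder):

```yaml
remove_old_dir:
  file.absent:
    - name: /tmp/olddir   # placeholder path
```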
17:28 moogyver dgarstang: i'm just suggesting an alternative.  if everything is pretty locked down, i'd just do auto-accept myself.
17:28 moogyver :)
17:28 iggy dgarstang: salt-api+salt.wheel.key
17:28 moogyver ah, hadn't even thought about using the wheel stuff
17:29 moogyver easier to do then
17:29 khaije1 Ideally we'll use LDAP to help with targeting.
17:30 forrest yeah I'd go with salt-api
17:30 dgarstang I think I'm gonna have to go with ansible just to avoid the complexity
17:30 irdan favadi: any suggestions if you're running an old enough version of salt that file.absent isn't an available command?
17:30 Ryan_Lane dgarstang: it's not necessary to manually accept keys
17:31 Ryan_Lane there's numerous things you could do for this
17:31 favadi irdan: upgrade then
17:31 irdan favadi: lol awesome
17:31 forrest file.absent isn't available? How old is your release, holy crap man
17:31 Ryan_Lane for instance, you write a reactor that sees new key requests and checks via boto to see if the instance exists
17:31 Twiglet I have a very crude method of dealing with external pillars failing; set failhard in the options and make a test like: if not pillar.get('test:value'); then /bin/false, order 1; end if
17:32 Ryan_Lane or... you could allow instances to write to a location in S3, then allow the master to read from that location
17:32 alemeno22 joined #salt
17:32 szhem joined #salt
17:32 Ryan_Lane then IAM policy is your trust
17:32 invalidexception joined #salt
17:32 Ryan_Lane similarly, you could write to dynamodb instead, if you prefer dynamo's fine-grained iam
17:32 iggy irdan: file.absent has been there since 0.11... how old of a version are you using?
17:33 Ryan_Lane anyway, for all those cases, have a reactor that calls a runner. the runner will return true/false based on whether the minion is allowed
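A sketch of the reactor approach Ryan_Lane outlines. The validation step is the part he leaves to you (checking via boto/S3/dynamo whether the minion is legitimate) and is only hinted at in a comment here; the `wheel.key.accept` call is the standard reactor key-acceptance mechanism:

```yaml
# /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/auth':
    - /srv/reactor/auth-pending.sls

# /srv/reactor/auth-pending.sls
# Pending keys arrive with act == 'pend'. A real setup would first call a
# custom runner (hypothetical) that validates the instance via boto and
# only accept the key if it returns true.
{% if data['act'] == 'pend' %}
accept_new_minion:
  wheel.key.accept:
    - match: {{ data['id'] }}
{% endif %}
```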
17:33 irdan iggy: running 2015.X iirc - I'm pretty sure someone decided that 'absent' was too powerful to allow in our environment
17:33 forrest Ryan_Lane: I don't think you wrote a blog post on this at any point right? I wasn't able to find anything with a quick search, but for some reason I thought you did. Maybe you've just explained it a few times in the IRC.
17:34 DammitJim how do you guys deal with updates? Like I am using a template for my smb.conf
17:34 forrest irdan: Well get whoever has the right permissions to run the state.
17:34 DammitJim what if samba changes something?
17:34 Bryson joined #salt
17:34 Ryan_Lane dgarstang: alternatively use salt-ssh, or go completely masterless (like we do)
17:34 iggy irdan: so it's not that you're using an old version it's a completely different problem
17:34 Twiglet DammitJim: vagrant/jenkins
17:34 Ryan_Lane forrest: nope. we're not using a master at all, so didn't write a post
17:34 DammitJim Twiglet, what do you mean?
17:34 forrest DammitJim: Hope that the app isn't complete shit and their new conf doesn't support the old at all, then update the conf.
17:34 Twiglet test with the latest salt when it comes out, see what breaks
17:35 DammitJim Twiglet, ok, so you are in a constant flux of testing
17:35 DammitJim got it
17:35 forrest Ryan_Lane: Laaaame, should do it anyways
17:35 Ryan_Lane DammitJim: samba shouldn't change anything, except across distro upgrades
17:35 forrest DammitJim: How many apps exist that are worth using that break their confs completely between changes? If anything they should say 'deprecated'
17:35 DammitJim Ryan_Lane, that was just an example
17:35 DammitJim that's encouraging, forrest
17:35 DammitJim thanks
17:35 Ryan_Lane well, in general distros keep all things stable :)
17:35 JDiPierro Twiglet: What do you use for testing states? We just started playing around with serverspec.
17:36 iggy s/general/theory/
17:36 Ryan_Lane but anyway, I use conditionals to switch between the versions
17:36 forrest DammitJim: I'll put it this way, been using salt for a few years now, never run into that problem where I wasn't aware of issues via the logs.
17:36 ageorgop joined #salt
17:36 Ryan_Lane until I've fully upgraded, then I remove the conditionals and assume the old version never existed
17:36 DammitJim forrest, you mean, you are reviewing the minion's logs to check stuff all the time?
17:36 DammitJim or which logs?
17:36 Twiglet DammitJim: Kinda, I do all my salt state dev in vagrant so when a new version comes out I'll test things there, we also test all out states in vagrant with jenkins so when a new version comes out we'll just re-run the jenkins jobs
17:36 Ryan_Lane forrest: too much work :)
17:36 irdan left #salt
17:37 Ryan_Lane forrest: also, I gave a pretty good implementation plan to saltstack inc about how to make this work transparently in AWS
17:37 forrest DammitJim: Most places I've logged our logs for apps (the app should be reporting config issues on service start) to a central location, so you notice when there are a lot of errors.
17:37 pmcg joined #salt
17:37 Ryan_Lane but the key/auth stuff isn't pluggable
17:37 forrest Ryan_Lane: Cool
17:37 DammitJim forrest, I gotta get on that
17:37 Ryan_Lane http://ryandlane.com/blog/2015/06/16/custom-service-to-service-authentication-using-iamkms/
17:37 deus_ex joined #salt
17:38 Ryan_Lane dgarstang: btw, you should check out the boto_* salt state modules
17:38 forrest DammitJim: Look at papertrail, it's pretty awesome if you're willing to pay depending on the log volume you have.
17:38 Ryan_Lane they're pretty excellent. way better than what's in ansible and pretty competitive with what's in terraform
17:38 Ryan_Lane (I actually think it's better than terraform's AWS support, but I'm biased)
17:38 forrest Ryan_Lane: What?? It's not like you wrote them or something... ;)
17:38 DammitJim thanks forrest .... looking into returners first to "capture" everything that we do from the masters
17:38 Ryan_Lane forrest: not all of them ;)
17:39 forrest DammitJim: Good plan! Have you looked at that open source gui that was being worked on? I can't remember if that uses returners to output job errors.
17:39 forrest Ryan_Lane: Enough of them to be biased.
17:39 DammitJim I have not
17:40 forrest DammitJim: https://github.com/tinyclues/saltpad I haven't looked at it very much AT ALL because we run masterless.
17:40 forrest They definitely return the result of jobs though, you can see it on that first screenshot
17:40 forrest errors and everything
17:40 forrest which is nifty
17:40 DammitJim looks cool
17:41 iggy the dev is also pretty active/responsive
17:41 DammitJim and not only that, but this guy has done an excellent job documenting how to get it working
17:41 DammitJim hopefully it's as easy as he says
17:41 iggy (although it'd be nice if he was in here... but then he probably wouldn't get anything done)
17:42 forrest iggy: That's why a lot of the salt devs had to bounce out of here.
17:42 forrest They were answering too many questions, offering dat free support.
17:42 DammitJim I thought they have at least 1 dev in here at all times
17:43 iggy we try not to bother them when they are here
17:43 iggy so as to not make them not come here anymore
17:43 DammitJim why do they come in here for, then?
17:43 DammitJim I"m messing with you... I understand, though
17:44 scoates joined #salt
17:46 markm_ joined #salt
17:53 invalidexception joined #salt
17:54 perfectsine joined #salt
17:58 Tanta joined #salt
17:59 Akhter joined #salt
17:59 larsfronius joined #salt
17:59 jalbretsen joined #salt
18:01 Ryan_Lane forrest: well of course :)
18:02 Ryan_Lane forrest: really, though the salt boto modules support more things and don't do magical shit, like "oh, you need to change attribute x of an autoscale group? well, I'm going to need to delete that for you first."
18:02 Ryan_Lane forrest: you're doing masterless?
18:02 Ryan_Lane didn't realize that
18:02 forrest Ryan_Lane: Yeah.
18:03 Ryan_Lane sequentially ordered, or with requisites?
18:03 notnotpeter joined #salt
18:04 forrest requisites.
18:05 khaije1 How do I specify the order that salt looks up pillar data when using multiple backends, ie roots/pillar and gitfs ... in other words whats the Pillar equivalent of 'fileserver_backend'?
18:07 kwork joined #salt
18:07 sbogg joined #salt
18:09 Ryan_Lane forrest: boooo :D
18:10 Ryan_Lane otherwise high-five :D
18:11 forrest Ryan_Lane: I don't hate requisites like you do
18:11 timoguin joined #salt
18:11 forrest Because I think huge state files are gross.
18:13 notnotpeter joined #salt
18:13 murrdoc joined #salt
18:14 baweaver joined #salt
18:15 Ryan_Lane :D
18:15 Ryan_Lane most of our state files are pretty small
18:15 Ryan_Lane we do ordered includes
18:15 notnotpeter joined #salt
18:16 twiedenbein joined #salt
18:17 SunPowered joined #salt
18:18 denys joined #salt
18:26 quasiben joined #salt
18:27 JDiPierro Ryan_Lane: I'm trying to move towards that but our states are a huge mess :\ it's a very slow process.
18:29 forrest Ryan_Lane: We primarily use ordered includes, but there are still some requires that exist so things make sense. I rewrote all of it a while back so it acts in a much more ordered include way
18:32 Ryan_Lane JDiPierro: yep. it can be a bit difficult, especially if you're requiring across modules
18:32 JDiPierro ... everywhere
18:33 Ryan_Lane that's one of the biggest reasons we don't use requisites. it's confusing when people do requisites across modules
18:33 Ryan_Lane "Now where in the hell is this thing I'm requiring?"
18:33 Ryan_Lane especially when people refactor, because things just randomly break
18:33 JDiPierro Our states are so nasty... no use of map.jinja so {%set ... %} everywhere. module calls in the state to get configuration info. Pretty much every pillar is available to '*'
18:33 Ryan_Lane :D
18:34 Ryan_Lane we're doing masterless, so all pillars are available to all states, but only portions of things are deployed
18:34 Ryan_Lane and secrets come from a secret management system
18:35 hal58th_ joined #salt
18:35 RandyT_ Ryan_Lane: sorry to butt in here, but curious if you can share what you are using for secret management. I've been noodling that Vault integration would be a nice Salt add-on
18:36 pfallenop joined #salt
18:36 pfallenop joined #salt
18:38 markm joined #salt
18:39 JDiPierro RandyT_: There's a feature request for that already :) https://github.com/saltstack/salt/issues/27020
18:41 forrest RandyT_: We use S3
18:42 forrest then IAM rules determine what servers can access which buckets.
18:42 RandyT_ Is it possible to pull ssh key from s3 and not store  on local master?
18:42 RandyT_ I've tried to get that to work but have not been successful
18:45 Ryan_Lane RandyT_: something we wrote in-house
18:45 Ryan_Lane I'm in the process of open sourcing it
18:45 Ryan_Lane it's only usable if you're 100% in AWS
18:45 RandyT_ Ryan_Lane: cool, isn't everyone? :-)
18:46 Ryan_Lane our system solves the chicken/egg problem of auth and master encryption key management
18:46 Ryan_Lane by using KMS and IAM
18:46 Ryan_Lane it's basically an extension of AWS
18:46 RandyT_ awesome, been wracking my brain on that one for a few weeks...
18:47 Akhter joined #salt
18:47 RandyT_ Ryan_Lane: will that code appear on your github?
18:50 zmalone joined #salt
18:51 meeteOrite joined #salt
18:53 meeteOrite I am trying to get the mysql ext_pillar working without any success.  My logs tell me the query is executing on the master, but nothing shows up in pillar.items.  The documented code on the site for the mysql pillar did not work, is there a good example out ther?
18:54 Ryan_Lane RandyT_: it'll appear in Lyft's github
18:55 meeteOrite I've already read the code and instructions in the mysql.py file, which didn't help much either
18:56 dgarstang joined #salt
18:57 forrest meeteOrite: I can't think of any examples off the top of my head. Can you see if you can get it working, and if not (or if you do) create an Issue/PR (might be worth looking at existing issues) to update the docs?
18:57 ageorgop joined #salt
18:58 meeteOrite forrest: I can give it a shot, is there a good way of tracing the process that I might not know about?
18:59 forrest meeteOrite: I don't have much familiarity with the mysql ext_pillar unfortunately. I'd look at issues to see if anyone already did this.
19:03 scoates joined #salt
19:04 perfectsine joined #salt
19:04 zmalone joined #salt
19:05 scoates joined #salt
19:05 GreatSnoopy joined #salt
19:06 orion What's the best way to execute an arbitrary command within a virtualenv via a salt state?
19:07 forrest orion: as in an execution or an install via pip?
19:07 forrest what's the command?
19:07 orion pybabel compile -D foo -d locale/
19:08 forrest Gotcha, I'd probably just use cmd.run
19:08 XenophonF is anyone else having trouble with the latest salt-bootstrap on ubuntu 14.04?
19:09 XenophonF i keep getting the error "salt-minion was not found running", whether i run it myself or invoke it via salt-cloud
19:09 forrest orion: Just set the cwd
19:09 forrest XenophonF: I get that on centos as wel.
19:09 forrest XenophonF: *well
19:09 orion forrest: Sure, but it needs to be run within a virtualenv context.
19:10 forrest XenophonF: https://github.com/saltstack/salt-bootstrap/issues/648
19:10 forrest orion: uhh yeah? So set the cwd to the virtualenv dir, then reference the pybabel executable directly down in the virtualenv directories.
19:10 scoates joined #salt
19:11 forrest so you'd have ./bin/env/pybabel_path
19:11 forrest or whatever.
19:11 orion forrest: The only thing about that is that pybabel will fail to find dependencies installed within the venv if the whole shell context has not been activated with bin/activate./
19:12 forrest orion: Oh really? That's shitty, it doesn't reference the local python stuff? Gross.
19:12 forrest orion: Did you already try a cmd.run that activates the venv?
19:13 orion Will two successive cmd.run invocations use the same shell context?
19:13 orion I think I might have to write a script and deploy it.
19:13 Jeff_ joined #salt
19:14 forrest orion: No that won't work with two runs, you'd have to do something like set the cwd, then `source env/bin/activate; pybabel compile -D foo -d locale/`
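forrest's suggestion as a state — activate the venv and run pybabel inside a single `cmd.run` shell (the venv path is a placeholder):

```yaml
compile_message_catalogs:
  cmd.run:
    - name: 'source bin/activate && pybabel compile -D foo -d locale/'
    - cwd: /path/to/venv     # placeholder venv directory
    - shell: /bin/bash       # `source` is a bash builtin, not POSIX sh
```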
19:14 rim-k joined #salt
19:14 Guest55625 How do I keep a nickname?
19:14 orion ah
19:15 iggy XenophonF: there was a bug open about it iirc
19:15 orion forrest: I'll give that a try.
19:15 forrest orion: Cool.
19:15 forrest iggy: I already linked that bug.
19:15 Sketch Guest55625: https://freenode.net/faq.shtml#contents-userregistration
19:16 aron_kexp joined #salt
19:16 XenophonF so no workaround, eh?
19:17 iggy XenophonF: forrest: I was thinking more about https://github.com/saltstack/salt/issues/25270 ... but might be a different problem
19:17 Guest55625 I'm trying to use a custom grain (ec2_tags.py from salt-contrib).   It works locally on the minion when I run salt-call grains.get, but I get nothing on the master.  Where should I look to figure out what's going on?
19:17 forrest XenophonF: That's still iggy using the bootstrap script, so I imagine they are related.
19:17 XenophonF iggy, that looks like the error i'm getting
19:18 ALLmightySPIFF joined #salt
19:18 kwork joined #salt
19:18 kwork joined #salt
19:18 forrest Guest55625: give https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.sync_grains a shot.
19:18 forrest Make sure your grains are synced
19:19 Guest55625 I synced the grains.  They're cached on the minion and it runs ok on them with salt-call
19:21 thefish joined #salt
19:23 XenophonF iggy, forrest, i've got an error in the minion log about the master having rejected the minions key
19:23 XenophonF so maybe it's not a bootstrap issue
19:23 forrest Guest55625: What about sync_all? Maybe since they are custom you need to use sync_all.
19:24 Guest55625 Ah.  I had put the aws credentials in the minion config.  I needed to restart the minion for it to pick up those configs.
19:24 forrest Guest55625: Ahh okay.
19:24 mehakkahlon joined #salt
19:25 druonysus joined #salt
19:25 druonysus joined #salt
19:30 Cottser joined #salt
19:31 orion forrest: I ended up doing cmd.script. Thanks for your help.
19:33 LeProvokateur joined #salt
19:35 breakingmatter joined #salt
19:36 Cottser joined #salt
19:36 Akhter joined #salt
19:36 canci joined #salt
19:36 forrest orion: Glad you were able to find a solution.
19:37 Guest55625 If I want to add a function to a module (aptpkg), should I copy the whole module to /srv/salt/_modules, or is there a better way?
19:39 KennethWilke joined #salt
19:39 druonysus joined #salt
19:40 Cottser joined #salt
19:41 mitsuhiko joined #salt
19:43 Dev0n joined #salt
19:43 forrest Guest55625: Copy the whole thing to your _modules directory
19:43 forrest Guest55625: And then if it's a good change create a PR with some tests so it can go into mainline salt.
19:44 Guest55625 thanks
19:44 forrest For sure
19:47 jefftang joined #salt
19:50 chiui joined #salt
19:50 druonysus joined #salt
19:50 druonysus joined #salt
19:53 falicy joined #salt
19:56 baweaver joined #salt
19:58 bhosmer_ joined #salt
19:58 mapu joined #salt
19:59 baweaver joined #salt
19:59 JDiPierro joined #salt
19:59 N-Mi joined #salt
20:07 pm90_ joined #salt
20:08 tkharju joined #salt
20:08 perfectsine joined #salt
20:11 pm90_ joined #salt
20:12 giantlock joined #salt
20:17 jdesilet joined #salt
20:19 viq joined #salt
20:21 baweaver joined #salt
20:21 jab416171 joined #salt
20:28 mehakkahlon joined #salt
20:33 mehakkahlon joined #salt
20:34 alemeno22 joined #salt
20:35 subsignal joined #salt
20:36 rim-k joined #salt
20:38 mehakkahlon joined #salt
20:42 murrdoc joined #salt
20:42 druonysus joined #salt
20:43 mehakkahlon joined #salt
20:48 mehakkahlon joined #salt
20:53 mehakkahlon joined #salt
20:57 RandyT_ Curious if anyone can share experience or howto on setting up a windows package manager.
20:58 RandyT_ I've looked at the docs, but first command outputs an error for me...
20:58 RandyT_ must be some basic starting point that I have yet to achieve
20:58 mehakkahlon joined #salt
20:59 ahammond I have a custom grain. It works on server1, it does not work on server2. I've tried saltutil.sync_all, saltutil.clear_cache and saltutil.refresh_modules     any other suggestions?
21:00 voileux joined #salt
21:03 mehakkahlon joined #salt
21:05 Bryson joined #salt
21:08 ashirogl1 joined #salt
21:08 mehakkahlon joined #salt
21:10 subsignal joined #salt
21:10 Ryan_Lane Guest25336: are you not using IAM roles?
21:10 sgargan joined #salt
21:10 Ryan_Lane Guest25336: if you use IAM roles then you don't need to embed credentials anywhere. everything should just use the IAM role credentials from the metadata service
21:11 mehakkahlon joined #salt
21:12 protoz joined #salt
21:12 Akhter joined #salt
21:16 edrocks joined #salt
21:23 pcn I'm running into this problem again: https://gist.github.com/pcn/6f8b244256f0974f1683
21:24 pcn I have 2 minions that can render that now, and one that can't.
21:24 pcn I've tried to clear the cache for that minion, and I've tried looking at the grains and pillars of one that works and the one that doesn't, and I can't find anything that would make a difference.
21:26 bluenemo is there any way to transfer files that are not present on the salt master from minion one to minion two using salt?
21:26 pm90__ joined #salt
21:28 baweaver joined #salt
21:32 shiriru joined #salt
21:33 iggy bluenemo: minionfs
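A minimal sketch of the minionfs setup iggy suggests: the master is configured to accept pushed files and serve the `minion` fileserver backend, after which files pushed from one minion become available to the others under `salt://<minion-id>/<path>` (paths illustrative):

```yaml
# /etc/salt/master
file_recv: True
fileserver_backend:
  - roots
  - minion
```

Then on the source minion: `salt-call cp.push /etc/openvpn/client1.crt`.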
21:33 ageorgop joined #salt
21:33 kevinquinnyo joined #salt
21:34 sgargan joined #salt
21:39 eliasp isn't minionfs more or less dead?
21:39 chiui joined #salt
21:43 breakingmatter joined #salt
21:44 zmalone joined #salt
21:47 Akhter joined #salt
21:49 bluenemo iggy, eliasp I want to generate openvpn server / client certs and propagate those to other servers (as in run multiple vpn servers that communicate among each other). So far I thought about just using glusterfs for /etc/openvpn with geo replication, as in having one "master" openvpn server that generates certs to /etc/openvpn, and then have salt manage all vpn servers and create supervisord / systemd config for starting the assigned vpns
21:50 eliasp bluenemo: hmm, wouldn't simply using the x509 state solve all that for you?
21:50 bluenemo the task is to have 4 vpn processes for the "monitoring vpn", and have 2 each on one server
21:51 geekatcmu Am I not finding the ticket where 2015.8.0 seems to have broken git.config_set ?
21:51 eliasp bluenemo: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.x509.html
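A minimal sketch of the x509 state linked above — generate a private key and a self-signed cert on the minion (paths, subject, and validity are illustrative):

```yaml
/etc/pki/vpn.key:
  x509.private_key_managed:
    - bits: 4096

/etc/pki/vpn.crt:
  x509.certificate_managed:
    - signing_private_key: /etc/pki/vpn.key
    - CN: vpn.example.com
    - days_valid: 90
```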
21:51 bluenemo wow that one is hardcore nifty
21:51 geekatcmu The docs pretty clearly show is_global -> global, but trying to use that results in errors.
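For reference, the documented 2015.8 form geekatcmu is referring to looks roughly like this, with `global: True` replacing the older `is_global` argument (values illustrative; he reports this errors out in practice):

```yaml
user.email:
  git.config_set:
    - value: ops@example.com
    - global: True
```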
21:52 bluenemo hm cant do password protection for priv keys, can it?
21:54 bluenemo hm ok thats awesome. I'll take that. thanks for the hint eliasp !! :)
21:54 eliasp bluenemo: it can't, but as those are generated locally anyways, this wouldn't change much
21:54 hemphill joined #salt
21:54 thefish joined #salt
21:55 eliasp bluenemo: yw
21:55 eliasp bluenemo: have fun! :)
21:55 bluenemo I'm really excited about how this state plays out :)
21:55 bluenemo no I need password keys for road warrior laptops
21:56 eliasp bluenemo: ah, ok… well, might be relatively easy to extend the x509 state + module to support passphrases for priv keys
21:56 bluenemo I actually wrote a state module for easy-rsa - or my fork from easy-rsa.. Guess this state module makes mine obsolete. mine however also does lots of stuff, i'm excited which has more features. I'll read the code tomorrow
21:56 bluenemo yeah I guess so, yes
22:00 Ryan_Lane bluenemo: gluster is never the answer :)
22:00 bluenemo ah using m2crypto. cool. yeah thats my thing then :) Finally a module for openssl :)
22:01 bluenemo Ryan_Lane, why?
22:01 larsfronius joined #salt
22:01 eliasp Ryan_Lane: or to phrase it differently: if "glusterfs" is the answer, you're asking the wrong questions :)
22:01 Ryan_Lane distributed filesystems are never the answer :)
22:01 bluenemo for this use case I agree that the module is way cooler :D but in general?
22:01 Ryan_Lane I've had day long outages due to gluster
22:01 bluenemo how do you implement a failover fs for a cluster?
22:01 Ryan_Lane I've also had day long outages due to nfs
22:01 Ryan_Lane I don't use a shared fs :)
22:01 Ryan_Lane object storage ftw
22:02 bluenemo ceph?
22:02 Ryan_Lane or swift
22:02 Ryan_Lane ceph is a bit scary
22:02 eliasp ever tried to provide NFS backed SMB volumes… this is the pure joy of locking subsystem collisions…
22:02 Ryan_Lane at wikimedia we tried it for a long while as an alternative to swift
22:02 bluenemo what caused the problems with gluster and nfs?
22:02 Ryan_Lane we had numerous "well, we lost all the data in the cluster" moments
22:02 bluenemo hm no idea about smb..
22:03 Ryan_Lane bluenemo: cascading failures are the issue with using distributed fs
22:03 bluenemo did you use it for php hosting file backend?
22:03 Ryan_Lane gluster goes down, then everything using it does too
22:03 bluenemo hm well if the whole gluster - cluster goes down, yes
22:03 Ryan_Lane also, gluster/nfs/etc aren't encrypted
22:03 Ryan_Lane so if you're passing sensitive data over it, it's being passed in the clear
22:04 bluenemo hm gluster geo replication is encrypted.
22:04 Ryan_Lane but is peer to peer in a single cluster also encrypted?
22:04 Ryan_Lane and is the client to gluster also encrypted?
22:04 Ryan_Lane client to gluster would be my concern
22:04 bluenemo but NFS well thats true, yes. I currently run two php sites on amazon where on one I use two gluster in rep mode which share to 4 workers via NFS - one gluster + 2 workers per availability zone
22:05 bluenemo and one "normal" nfs setup with a standard nfs server.
22:05 Ryan_Lane why not EFS?
22:05 Ryan_Lane or sync from S3? :)
22:05 bluenemo its not available in germany yet
22:05 Ryan_Lane ah :(
22:05 bluenemo but yeah, want it! ;)
22:05 Ryan_Lane S3 is incredibly reliably
22:05 bluenemo about mount -t glusterfs idk, dont think its encrypted.
22:05 eliasp yeah, I want EFS for eu-west-1 too ;(
22:06 bluenemo :D
22:06 Ryan_Lane *reliable
22:06 bluenemo what caused your glusterfs to fail?
22:06 bluenemo i'm like days away of putting the site into production :D
22:07 pfallenop joined #salt
22:07 bluenemo imho worst case scenario here would be split brain
22:07 bluenemo also if gluster fu**s up completely, i'll stop gluster and share via normal NFS, as the gluster "brick" is just a regular xfs
22:08 Ryan_Lane bluenemo: I've had so many failures
22:08 Ryan_Lane so many
22:09 pcn iggy do you know of any minion bugs that could be related to this: https://gist.github.com/pcn/6f8b244256f0974f1683
22:09 Ryan_Lane the first was storing KVM images on a glusterfs, which was a bad idea, plus a fuse bug
22:09 bluenemo I didn't use s3 so far.. how do you use that as a shared fs? s3fuse?
22:09 mehakkah_ joined #salt
22:09 Ryan_Lane bluenemo: aws cli s3 sync
22:09 Ryan_Lane err
22:09 bluenemo hm ok, the doc says dont do that :/
22:09 Ryan_Lane bluenemo: awscli s3 sync
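The sync Ryan_Lane means is a plain awscli call run on each instance — pull the current release from the bucket down to the web root (bucket name illustrative):

```
aws s3 sync s3://my-deploy-bucket/current/ /var/www/ --delete
```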
22:09 Ryan_Lane the docs say not to do syncs?
22:09 bluenemo no they say dont do kvm on gluster ;)
22:09 bluenemo they say do like streaming stuff
22:10 bluenemo if i'm not mistaken
22:10 Ryan_Lane they weren't saying that when I used it
22:10 Ryan_Lane they were actually encouraging it. maybe after my horrible outage they stopped saying that
22:10 bluenemo #gluster also said php isnt the perfect idea for gluster, but using NFS it performs kinda similar to a regular NFS server
22:10 bluenemo with cachefilesd its kinda ok imho
22:10 bluenemo I just didnt want to go pacemaker / corosync with NFS
22:10 Ryan_Lane then I was using it for multi-tenancy by creating a gluster project for every tenant
22:10 bluenemo ah ok. hm
22:10 Ryan_Lane that doesn't scale
22:11 Rumbles joined #salt
22:11 Ryan_Lane the third time was just a normal usage of gluster in the cloud for files
22:11 bluenemo multi-AZ you mean?
22:11 Ryan_Lane just to store images across a cluster
22:11 Ryan_Lane wasn't in AWS, but it didn't matter
22:11 Ryan_Lane I had one node fail and it took me hours to fix that
22:11 Ryan_Lane I had to rebuild the cluster, basically
22:11 bluenemo uh
22:11 bluenemo hm
22:11 Ryan_Lane have you dealt with a split-brain yet?
22:12 bluenemo in a test scenario
22:12 Ryan_Lane their recovery tools are the absolute worst :)
22:12 bluenemo felt kinda ok
22:12 rim-k joined #salt
22:12 bluenemo there is this show the split brain files feature
22:12 Ryan_Lane I had to write something using salt to help
22:12 bluenemo :D
22:12 bluenemo uh
22:12 Ryan_Lane when you have thousands of split brained files....
22:12 bluenemo thats hard. what was your setup?
22:12 bluenemo hm thats bad, yes
22:13 Ryan_Lane I had the normal distributed setup
22:13 bluenemo two nodes replication?
22:13 Ryan_Lane but when i tried to replace a node it just wouldn't work
22:13 Ryan_Lane yeah
22:13 bluenemo as in add a third, then remove one?
22:13 bluenemo or remove one add a new one?
22:13 Ryan_Lane anyway, if you're just hosting PHP, push an artifact to S3 and have your nodes do s3 syncs to pull them
22:13 bluenemo how long was that ago?
22:13 Ryan_Lane it was a while ago
22:13 Ryan_Lane maybe 2 years
22:14 bluenemo ah ok. about that s3 - i'm totally new to that. do you mount that? :D
22:14 Ryan_Lane no need to mount
22:14 Ryan_Lane you treat it like rsync
22:14 bluenemo is there like mount -t s3 /var/www or sth? how does it work with my php-fpm?
22:14 Ryan_Lane it copies the files to your instance
22:14 bluenemo sry I dont follow
22:14 Ryan_Lane from S3
22:14 bluenemo aaah ok
22:14 bluenemo i see
22:15 bluenemo hm
22:15 bluenemo so its not exactly instant, is it?
22:15 Ryan_Lane basically, you deploy to S3 and have your instances pull the files from S3 to their local disks
22:15 bluenemo how do you trigger syncs?
22:15 bluenemo as a hook, right?
22:15 Ryan_Lane http://ryandlane.com/blog/2015/04/02/saltconf15-masterless-saltstack-at-scale-talk-and-slides/
22:15 bluenemo :)
22:15 Ryan_Lane http://ryandlane.com/blog/2014/08/26/saltstack-masterless-bootstrapping/
22:16 bluenemo ah yes i watched that already, nice talk!
22:16 Ryan_Lane my posts/slides/talk describe it a lot better than I can in chat :)
22:16 Ryan_Lane thanks
22:16 bluenemo aah ok. I guess I didnt listen closely at that part. but yeah, that one was very nice indeed! :)
22:16 Ryan_Lane basically I always assume that deploys are forwards and backwards compatible
22:16 bluenemo it got me thinking about my current way to do orchestration a lot
22:17 Ryan_Lane you should definitely be using the boto_* state modules :)
22:17 bluenemo hehe yes
22:17 bluenemo I'm getting into that atm.. end of year business is booming so its not that much time to learn new skills atm sadly
22:18 bluenemo at night in my free time i'm currently hacking on my openvpn formula ;)
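A small sketch of the boto_* states Ryan_Lane mentions — e.g. managing an IAM role and a security group declaratively (all names and the region are illustrative):

```yaml
my-app-role:
  boto_iam_role.present:
    - region: eu-west-1

my-app-sg:
  boto_secgroup.present:
    - description: app security group
    - rules:
      - ip_protocol: tcp
        from_port: 443
        to_port: 443
        cidr_ip: 0.0.0.0/0
    - region: eu-west-1
```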
22:18 Ryan_Lane I really want to open source our vpn stuff :)
22:19 bluenemo what can yours do?
22:19 Ryan_Lane is your vpn for users, or service to service?
22:19 bluenemo I'm working on a way to have you say "clones: 5" and then it deploys 3 on server A and 2 on server B, clients configs know both servers and instance ports and vpns push DNS updates to bind.. and so on :)
22:20 Ryan_Lane ah. ok
22:20 Ryan_Lane all of my stuff is for managing openvpn itself, not the salt stuff. our salt code is relatively naive
22:20 bluenemo well kind of for everything. the idea is to have multiple vpns - sth like "monitoring vpn", "admin vpn", backup vpn.. then scale the backup vpn for rsync usage over multiple servers and vpn processes as they dont thread.
22:21 bluenemo hm well i wrote a hacky module for easy-rsa but with x509 i'll exchange that for the x509 module
22:21 Ryan_Lane heh. we manage all different groups through a single openvpn instance
22:21 Ryan_Lane by managing their ip ranges
22:21 Ryan_Lane we have a daemon that implements the openvpn management protocol
22:21 bluenemo single vpn server instance? as in process?
22:21 Ryan_Lane yeah
22:21 bluenemo does that scale?
22:21 Ryan_Lane we segregate user groups by ip range
21:21 bluenemo without multiple my backups kill my monitoring and so on
22:22 bluenemo also one instance can only take 250 clients
22:22 bluenemo ah ok. do you do client-to-client?
22:22 Ryan_Lane yeah. we have multiple servers
22:22 Ryan_Lane one per az. we can also add more ports
22:22 bluenemo i disabled "client-to-client" in the server confs, enabled ip.forward and use iptables for that
22:22 Ryan_Lane the daemon can connect to multiple vpn processes
22:22 bluenemo that way I can say foo.admin-vpn.com can ssh to bar.backup.vpn
22:22 Ryan_Lane and manage the ip range between them
22:22 bluenemo but not vice versa
22:23 bluenemo ah ok
22:23 bluenemo i see :)
22:23 Ryan_Lane we're all aws, so no need for client to client
22:23 bluenemo all servers in the same subnet / vpc?
22:23 Ryan_Lane aws will do vpc peers
22:24 bluenemo ah ok I understand
22:24 bluenemo can you publish that please? :D that would be very interesting for me atm :)
22:24 fivehole joined #salt
22:24 pm90_ joined #salt
22:24 Ryan_Lane heh. I do eventually need to make a blog post
22:25 pm90___ joined #salt
22:26 bluenemo Another thing about s3 Ryan_Lane, how do you handle uploads with that? as in if my php writes to /var/www, how do I push those to the other workers?
22:28 bluenemo as for this I wouldnt know how to implement a hook. I guess there is a push command too - but then stuff like file locks come to mind. How do you handle that?
22:28 bfoxwell joined #salt
22:29 aristedes joined #salt
22:29 aristedes left #salt
22:31 mehakkahlon joined #salt
22:34 KyleG1 joined #salt
22:35 baweaver joined #salt
22:38 bluenemo eliasp, the x509 state does generate me a /fo/bar/bla.crt if its missing - but it doesnt manage to distribute exactly that file across multiple minions, does it?
22:40 Ryan_Lane bluenemo: there's libraries for php to write directly to S3
22:40 Ryan_Lane there's not really any locking for S3. you'd need to use redis or something along those lines to handle locks
22:40 Ryan_Lane or dynamodb
22:41 Ryan_Lane or do locks in your db
22:41 Ryan_Lane if you really wanted to, you could have clients write directly to S3 by making signed urls :D
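In practice you would generate such signed URLs with an SDK (e.g. boto's `generate_url`), but the legacy S3 SigV2 scheme is simple enough to sketch with only the standard library — everything below is illustrative, and real code should prefer an SDK, which also handles the newer SigV4 signing:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def presign_v2(bucket, key, access_key, secret_key, expires_in=3600):
    """Build a legacy SigV2 pre-signed GET URL for an S3 object.

    Illustrative only: production code should use boto/boto3, which
    also supports SigV4 (required in some newer AWS regions).
    """
    expires = int(time.time()) + expires_in
    # V2 string-to-sign: method, Content-MD5, Content-Type, Expires, resource
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return ("https://%s.s3.amazonaws.com/%s"
            "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
            % (bucket, quote(key), access_key, expires,
               quote(signature, safe="")))
```

A client handed such a URL can fetch the object until the expiry time without ever holding AWS credentials itself, which is what makes direct-to-S3 client access possible.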
22:42 adelcast left #salt
22:43 bluenemo hm. sounds like too much trouble. I want EFS.
22:44 bluenemo actually I want a nice open source solution for clusterfs. so far gluster did look ok even though its REALLY slow when using the native client. NFS however is kinda ok, a little slower than the normal NFS server.
22:45 bluenemo Ceph is nice but it's too big for my current usecase - two servers in two aws availability zones..
22:47 bluenemo do you remember back in the day with DRBD + OCFS2, watchdoged by corosync / pacemaker? Not sth like that again ;)
22:56 notnotpeter joined #salt
23:02 * geekatcmu has done that and hated it
23:05 otter768 joined #salt
23:20 jmreicha joined #salt
23:27 TheoSLC joined #salt
23:27 toastedpenguin joined #salt
23:30 pm90_ joined #salt
23:30 SunPowered hey all, I have a minion in a fail loop. It fails as there was a key error in the pillar data, which crashes it (should it?).  On a restart, zeromq sends the cached job again, which subsequently crashes the minion
23:30 SunPowered I have deleted the cached minion files, but the job is still being run.
23:31 SunPowered Is there a way to reset the zeromq cache?
23:37 SunPowered minion versions report: http://paste.ubuntu.com/12642738/
23:45 keimlink joined #salt
23:46 SunPowered I ended up blowing away the minion and starting again
23:49 solidsnack joined #salt
23:52 keimlink_ joined #salt
23:56 zmalone joined #salt
23:56 otter768 joined #salt
