
IRC log for #salt, 2014-04-23


All times shown according to UTC.

Time Nick Message
00:01 * whiteinge looks
00:01 whiteinge wow. no
00:01 whiteinge that is really hard to read
00:02 whiteinge Ryan_Lane: i'll bet money you can safely ignore that one
00:02 Ryan_Lane heh
00:02 Ryan_Lane yeah, I'd imagine so. I think it's triggered by a class I'm including (OrderedDict)
00:03 pssblts joined #salt
00:03 pydanny joined #salt
00:04 gw joined #salt
00:04 gw joined #salt
00:05 gw joined #salt
00:05 gw joined #salt
00:07 joehoyle joined #salt
00:08 redondos joined #salt
00:09 UtahDave1 joined #salt
00:09 ajolo joined #salt
00:11 austin987 joined #salt
00:12 whiteinge ah, i didn't see that error at the top
00:13 whiteinge Ryan_Lane: you might want to change how you're importing OrderedDict
00:13 Ryan_Lane whiteinge: oh?
00:13 whiteinge from the error i'm guessing you're just doing a ``from salt.utils import OrderedDict`` kind of thing?
00:14 Ryan_Lane yeah
00:14 Ryan_Lane that's what I see elsewhere too?
00:14 Ryan_Lane from salt.utils.odict import OrderedDict
00:14 Ryan_Lane that's what I'm doing
00:15 Ryan_Lane what should I be doing instead?
00:17 whiteinge Ryan_Lane: try this instead:
00:17 linuxlew_ joined #salt
00:17 whiteinge import salt.utils.odict
00:18 whiteinge or
00:18 whiteinge import salt.utils.odict as odict
00:18 funzo joined #salt
00:18 whiteinge salt's loader system won't try to introspect the OrderedDict class that way
00:19 Ryan_Lane then use OrderedDict as odict.OrderedDict?
00:19 haroldjones joined #salt
00:19 whiteinge yeah
00:20 joehillen joined #salt
00:20 Ryan_Lane ah, I see. the other examples use import salt.utils before that line
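
A minimal sketch of the import style whiteinge suggests above, as it would look inside a custom execution module (the module and function names here are hypothetical):

    # _modules/example.py -- hypothetical custom module
    # Import the module, not the class, so salt's loader does not
    # try to introspect OrderedDict while loading the module.
    import salt.utils.odict as odict

    def ordered_example():
        data = odict.OrderedDict()
        data['first'] = 1
        data['second'] = 2
        return data
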
00:22 joehillen joined #salt
00:31 stevendgonzales joined #salt
00:33 JasonSwindle joined #salt
00:33 Heartsbane whiteinge: remember all those times I verbally berated you and inflicted bodily harm
00:34 Heartsbane whiteinge: put those tears on your twitter feed
00:34 whiteinge Heartsbane: how about a goofy picture of me grinning like an idiot instead?
00:35 Heartsbane Seen that... like only moments ago
00:35 whiteinge ok. i'll work on adding the tears thing then
00:37 tyler-baker joined #salt
00:38 TyrfingMjolnir joined #salt
00:43 Heartsbane thanks
00:49 NV joined #salt
00:52 Luke_ joined #salt
00:55 TyrfingMjolnir joined #salt
01:02 BrendanGilmore joined #salt
01:05 [diecast] joined #salt
01:07 calvinhp_mac joined #salt
01:11 taterbase joined #salt
01:14 meteorfox|lunch joined #salt
01:20 [diecast] joined #salt
01:29 xzarth joined #salt
01:31 possibilities joined #salt
01:32 Networkn3rd joined #salt
01:33 Gordonz joined #salt
01:34 stevendgonzales joined #salt
01:35 joehoyle joined #salt
01:46 stevendgonzales joined #salt
01:49 mateoconfeugo joined #salt
01:49 jnials joined #salt
01:51 jmpf joined #salt
01:54 jeremyfelt joined #salt
01:56 danielbachhuber joined #salt
02:00 joehoyle joined #salt
02:05 mgw joined #salt
02:13 Juon joined #salt
02:14 Juon I need help
02:14 Juon <Juon> to install in my debian t
02:14 Juon the tor browser
02:14 Juon Hiii
02:21 thayne joined #salt
02:21 CeBe joined #salt
02:29 dancat joined #salt
02:37 lahwran joined #salt
02:37 haroldjones joined #salt
02:43 ewong_ joined #salt
02:46 mgw joined #salt
02:48 tyler-baker joined #salt
02:48 tyler-baker left #salt
02:50 joehoyle joined #salt
02:50 HeadAIX joined #salt
02:59 mateoconfeugo joined #salt
03:01 pydanny joined #salt
03:01 joehoyle joined #salt
03:03 meteorfo_ joined #salt
03:10 tligda joined #salt
03:11 catpigger joined #salt
03:15 tligda joined #salt
03:20 mgw joined #salt
03:27 bhosmer joined #salt
03:28 tyler-baker joined #salt
03:29 halfss joined #salt
03:29 funzo joined #salt
03:33 schimmy joined #salt
03:36 schimmy1 joined #salt
03:41 ajolo joined #salt
03:42 srage_ joined #salt
03:43 thayne joined #salt
03:43 chrismoos joined #salt
03:47 stevendgonzales joined #salt
03:48 StDiluted joined #salt
03:48 ajolo joined #salt
03:49 ajw0100 joined #salt
03:50 meteorfox|lunch joined #salt
03:50 StDiluted joined #salt
03:52 mgw joined #salt
04:00 stanchan joined #salt
04:13 smcquay joined #salt
04:30 funzo joined #salt
04:38 redondos joined #salt
04:38 redondos joined #salt
04:38 anuvrat joined #salt
04:45 schimmy joined #salt
04:47 StDiluted joined #salt
04:51 googolhash joined #salt
04:57 malinoff joined #salt
05:04 Tekni joined #salt
05:04 stevendgonzales joined #salt
05:06 pfallenop joined #salt
05:11 TyrfingMjolnir joined #salt
05:14 linuxlewis joined #salt
05:17 doanerock joined #salt
05:26 schimmy joined #salt
05:28 srage joined #salt
05:30 joehoyle joined #salt
05:30 funzo joined #salt
05:31 schimmy joined #salt
05:32 epcim joined #salt
05:32 epcim_ joined #salt
05:33 smcquay joined #salt
05:34 mgw joined #salt
05:37 l0x3py joined #salt
05:39 aleszoulek joined #salt
05:39 taion809 joined #salt
05:43 ravibhure joined #salt
05:50 anuvrat joined #salt
05:53 garthk joined #salt
05:55 Daemonik joined #salt
06:05 garthk joined #salt
06:08 redondos joined #salt
06:19 svs joined #salt
06:28 zain_ joined #salt
06:28 Ryan_Lane joined #salt
06:29 funzo joined #salt
06:51 it_dude joined #salt
06:51 scott_walton joined #salt
06:56 rjc joined #salt
07:04 harobed joined #salt
07:10 fishernose In bash, I often run: chmod 644 /boot/vmlinuz-`uname -r`
07:11 fishernose I want to do a file.managed: - mode: 644 instead
07:11 fishernose But the file name changes on me with every OS update.
07:11 fishernose Any ideas on how to write a state with a file name that's a moving target?
07:15 Kenzor joined #salt
07:20 TyrfingMjolnir joined #salt
07:24 xintron joined #salt
07:30 funzo joined #salt
07:30 xintron I'm building a dev solution (with vagrant) for my team. Is there any information on how I can interact with apt and add custom repositories (ppa) to my sources.list?
07:51 fishernose xintron, see here: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkgrepo.html
07:51 xintron ty
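
A minimal sketch of the pkgrepo.managed state the linked docs describe, adding a PPA on Ubuntu before a package is installed (the PPA, package and file names are placeholders):

    # repos/init.sls -- hypothetical state file
    example-ppa:
      pkgrepo.managed:
        - ppa: saltstack/salt          # PPA added under sources.list.d
        - require_in:
          - pkg: example-package       # make sure the repo exists before installing

    example-package:
      pkg.installed: []
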
07:51 scott_w joined #salt
08:01 babilen Can I get more fine grained control over the exact time when a service is being stopped and started? We have to perform a couple of steps while a service is down and before it can be started. I could either stop the service manually and run that particular state (or highstate) or use some clever salt magic that allows me to say "perform these if changed and take down the service beforehand, once finished start the service again"
08:01 babilen Haven't really seen anything like that. Does it exist or am I better off just doing this manually?
08:02 gw joined #salt
08:04 mike25de babilen: can that be done with the ... watch_in or smth like that?
08:04 mike25de i am just saying.. .
08:05 babilen Well, that would allow me to essentially express "restart service $FOO once you've performed these states" -- but that would restart the service at the end whereas I don't want it to be active while the changes are performed
08:05 Flusher joined #salt
08:05 babilen I have the feeling as if it would be much easier to simply do this manually, but thought I'd ask :)
08:05 babilen "manually" that is
08:06 viq terminalmage: awesome, thanks for upgrading halite :)
08:10 epcim joined #salt
08:16 bezaban could you possibly use order: and execute a state with running enabled: false first, then the others and finally a state that takes the service up? Or will chaining like this not work and confuse salt
08:18 bezaban I'm not all that sure it will work.  In an answer to babilen
08:19 bezaban or state_auto_order.  I'll give it a try
08:21 bezaban oh. state_auto_order is a master config option
08:22 babilen bezaban: Background to this is that I'm switching an NFS mount that is being written to by the service and I would like the service to be down while the switchover is being performed (the loadbalancer will simply use a different node in the interim) so that the service doesn't attempt to write to an unavailable NFS share
08:23 babilen Could I just write a new state "foo-service-down" and require that in the mount.mounted state that takes care of the NFS mount ?
08:24 jalaziz joined #salt
08:26 bezaban babilen: that's what I am wondering, but not sure if salt will register the conflicting states
08:28 babilen Ah, each day brings a new interesting problem :)
08:31 funzo joined #salt
08:32 joehoyle joined #salt
08:35 bezaban babilen: prereq
08:35 bezaban http://docs.saltstack.com/en/latest/ref/states/requisites.html
08:35 bezaban from the looks of it. "  This requisite allows for actions to be taken based on the expected results of a state that has not yet been executed. In more practical terms, a service can be shut down because the prereq knows that underlying code is going to be updated and the service should be off-line while the update occurs."
08:36 bezaban learned something new today as well :)
08:38 bezaban not entirely sure how the service is taken back up
08:41 millz0r joined #salt
08:41 sverrest joined #salt
08:45 zions joined #salt
08:46 zions Hello there, I have a question regarding pillars
08:46 babilen bezaban: Awesome! Thank you. The service will be taken up as I have another service state that will ensure that. If they are not ordered properly I can always sprinkle in more requisites
08:47 zions suppose I have a pillar named foo with an attribute bar, and I want to override the attribute bar in a custom pillar file, how do I go about that?
08:47 bezaban babilen: yeah, might require 'manual' ordering to make sure it is taken back up after.  That will prove useful to me too :)
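
A sketch of the prereq pattern from the requisites doc bezaban quotes: the service is shut down only when the mount state is about to make changes, and a separate running state brings it back up afterwards (all names below are placeholders):

    stop-foo-before-remount:
      service.dead:
        - name: foo
        - prereq:
          - mount: /srv/foo-data     # only fires if the mount state reports pending changes

    /srv/foo-data:
      mount.mounted:
        - device: nfsserver:/export/foo
        - fstype: nfs

    foo-service:
      service.running:
        - name: foo
        - require:
          - mount: /srv/foo-data     # brought back up once the new mount is in place
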
08:49 bezaban zions: can be done in the sls using 'context' I think.  I am doing that in multiple ways depending on where I use the values and not quite settled on a good way
08:49 bezaban what is it for?
08:49 kedo39 zions: based on https://github.com/saltstack/salt/issues/3991 , it doesn't look like that is possible :(
08:50 jalaziz joined #salt
08:51 bhosmer joined #salt
08:52 kadel joined #salt
08:53 zions kedo/bezaban: I want to customize a set of contacts for monitoring. I was looking at the example given at this gist: https://gist.github.com/UtahDave/3785738 however since I just want to drop the data it's kind of awkward to do all this looping
08:54 babilen kedo39: Ah, my favourite issue
08:55 babilen zions: Look into reclass for functionality like that: http://reclass.pantsfullofunix.net/salt.html (and the rest of that document)
08:56 zions kedo39: I saw a mention to use import_yaml, which might be useful to some extent, thanks.
08:56 kedo39 oof, reclass looks hacky
08:56 zions babilen: I will look into this, looks interesting yet more complicated.
08:56 giannello joined #salt
08:56 zions Thanks all for your tips!
08:59 honestly joined #salt
09:00 babilen zions: I would be really happy if issue/3991 would finally be implemented, but I don't expect that to happen in the next couple of weeks - reclass is, naturally due to being a separate system, more complicated (you have to learn it after all), but is I guess the way to get access to those features
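
A rough sketch of the import_yaml workaround zions mentions: pull a shared defaults file into a pillar SLS and override one key before emitting it (file names and keys are hypothetical, and the flow-style dump on the last line is fragile for complex values):

    {# pillar/foo/init.sls -- hypothetical #}
    {% import_yaml "foo/defaults.yaml" as foo_defaults %}
    {% do foo_defaults.update({'bar': 'overridden-value'}) %}
    foo: {{ foo_defaults }}
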
09:01 stevendgonzales joined #salt
09:11 ggoZ joined #salt
09:11 hvn joined #salt
09:14 flub joined #salt
09:17 giantlock joined #salt
09:19 Daemonik joined #salt
09:20 gw_ joined #salt
09:25 madduck kedo39: hacky?
09:26 zaz\ left #salt
09:27 kedo39 rather, it seems like functionality that should be baked into salt, without requiring third party stuff
09:27 _mel_ joined #salt
09:27 kedo39 since it seems like quite a few people want this functionality in salt
09:32 funzo joined #salt
09:37 yomilk joined #salt
09:39 Kenzor joined #salt
09:46 arthurlutz joined #salt
09:46 arthurlutz is the google search on the docs really necessary ? (i preferred the old search)
09:48 babilen kedo39: I completely agree
09:48 anuvrat joined #salt
09:50 Ryan_Lane joined #salt
09:58 pdayton joined #salt
10:04 gildegoma joined #salt
10:12 aleszoulek joined #salt
10:18 jcsp joined #salt
10:19 Ryan_Lane joined #salt
10:19 babilen Are any of the salt developers aware of a working vagrant setup for testing local versions? I'd like to contribute, but wouldn't want to without testing the code properly.
10:20 babilen Just read http://docs.saltstack.com/en/latest/topics/development/hacking.html and am about to start on implementing that on top of vagrant. Thought that the process might have been documented somewhere already which would spare me some time. :)
10:21 viq babilen: what do you mean by "local versions" ?
10:21 babilen viq: Sorry, should have been more precise about that. I am referring to the local git branch on which i am working.
10:22 viq babilen: maybe https://github.com/agoragames/saltstack-sandbox ?
10:23 viq babilen: also I guess read what flags are available to salt-bootstrap
10:25 babilen viq: I was considering going down the virtualenv route
10:25 viq babilen: I'm programming illiterate, therefore also not quite familiar with proper development procedures. Kind of guessing and googling here ;) Though I have played with vagrant, but for sysadminy stuff
10:27 babilen Sure - I use virtualenvs a lot for Python development, but haven't worked on something as complex as salt before. Thought that some "best practices" exist for setting up your development/test environment.
10:28 babilen I will adapt saltstack-sandbox to my needs and work on the virtualenv setup on the salt master therein.
10:28 viq Probably, though USians are still asleep ;)
10:29 viq https://groups.google.com/forum/#!topic/salt-users/yZK7EecEp7M
10:29 babilen About time for salt devs to become a more geographically diverse group :)
10:33 viq babilen: although I guess it wouldn't differ much from using any other virtualenv development environment with vagrant
10:33 funzo joined #salt
10:33 mikkn babilen: I recommend using their docker images
10:33 epcim joined #salt
10:33 joehoyle joined #salt
10:34 babilen mikkn: Could you elaborate on that?
10:34 mikkn http://salt.readthedocs.org/en/v2014.1.1/topics/tests/index.html
10:34 viq mikkn: https://index.docker.io/u/saltstack/ ?
10:34 mikkn yeah, those
10:34 mikkn We're using them for testing states in, it's very convenient in contrast to vagrant
10:35 viq hm
10:35 babilen mikkn: How would that help me with developing salt itself?
10:35 viq though I'm also testing stuff on BSDs, docker is not an option there
10:35 viq babilen: because you can run different version of salt on those
10:35 mikkn You get your own source code in an environment where you could link it from the host machine very easily
10:36 mikkn And you can also destroy the changes they've made by just restarting the docker
10:36 viq But yeah, I guess docker for testing linux parts can be easier
10:36 viq mikkn: are you driving the docker images with vagrant, or otherwise?
10:36 mikkn viq: Not sure I understand the question...
10:37 mikkn viq: Docker is like vagrant, but it uses linux containers rather than a full VM
10:37 viq mikkn: how do you start up and define what should be in the dockers you start
10:37 viq mikkn: yeah, but there is vagrant-docker too ;)
10:37 mikkn viq: I must have missed that :)
10:38 mikkn Docker has its own file format for specifying what should go in, a Dockerfile
10:38 viq huh, http://docs.vagrantup.com/v2/provisioning/docker.html
10:38 viq though it's a different beast to what I'm talking about
10:39 mikkn viq: Have you used Docker?
10:39 viq hah, https://github.com/fgrehm/docker-provider - "NOTICE: This plugin is no longer being maintained as its functionality has been merged back to Vagrant core and will be available with Vagrant 1.6+."
10:40 viq mikkn: spent maybe half an hour with it, close enough to "no"
10:40 bmcorser joined #salt
10:41 mikkn viq: Docker uses LXC containers, which is basically a chroot with a purpose built file system and kernel flags for controlling how much it can touch on the kernel. Docker wraps this, pretty much like Vagrant wraps Virtualbox, KVM and so on
10:41 babilen mikkn: So you essentially prefer docker over vagrant for testing your setup locally? What are your main reasons for that? I am not particularly attached to vagrant (and don't really like virtualbox) so I am open to alternative approaches, but what I really care about is an easy way to test salt in a standardised way.
10:42 mikkn I use docker, because a docker boots and tears down practically instantly
10:42 babilen ack
10:42 viq babilen: docker is faster and more lightweight, though it's pretty much only linux-on-linux, while with vagrant you have full power of virtualization, so (almost) able to run anything-on-anything
10:42 mikkn So, I'm probably done testing my states when someone using vagrant just got his VM started :D
10:42 babilen viq: I only need Linux on Linux (and will for some time to come)
10:42 mikkn But yes, it only tests linux on linux
10:43 entil joined #salt
10:43 mikkn You can run docker on mac too, i think. I haven't looked into how that works, though.
10:43 viq I have some FreeBSD and OpenBSD boxes in the mix, so for that docker is not an option...
10:43 mikkn Ah, they run it in a virtualbox
10:43 babilen mikkn: How does docker deal with networking between master and minions and provisioning multiple boxes?
10:44 viq With vagrant being able to control docker, I wonder if I could have it spin up some machines via docker and some via virtualbox/libvirt
10:44 mikkn It sets up an address space at 172.17.0.0/24 on the machine, which can communicate with eachother
10:44 mikkn viq: Most probably. :)
10:44 babilen viq: I've never managed to get libvirt running witout problems on vagrant
10:44 mikkn babilen: It creates tap interfaces on the real host device with that address space
10:44 mikkn So they automatically get internet and so on
10:45 babilen mikkn: okay, that's fine
10:45 mikkn Docker is a bit crude, I'll admit. I need to run them in privileged mode, which gives them pretty much unhindered access to the kernel
10:46 entil I was looking at salt-ami-cloud-builder
10:46 mikkn But that's only because I need to create sparse files to fake devices
10:46 AviMarcus joined #salt
10:46 entil just joined so I probably missed out on a lot of conversation, but it seems almost related :>
10:46 AviMarcus Hiya folks. It's been a while since I caught up on salt...
10:47 AviMarcus What's the most used option for creating a dev machine, locally, on my linux machine? e.g. salt + docker or salt + virtual something?
10:47 babilen viq, mikkn: Okay, thanks for the pointers. I'll read up on them and will play a little.
10:47 AviMarcus I'm not really too familiar with any of the options.
10:47 babilen AviMarcus: heh
10:47 AviMarcus I've played a tiny bit with oracle virtualbox
10:47 viq AviMarcus: you just came at the end of a conversation about exactly that ;)
10:47 mikkn AviMarcus: I think the most popular is Vagrant, but I'm a big proponent of using Docker unless you need to try it out on more than linux. :)
10:48 AviMarcus nice.
10:48 entil http://docs.logilab.org/salt-ami-cloud-builder/#why-you-need-salt-ami-cloud-builder looks like it could be nice
10:48 babilen AviMarcus: I use vagrant to provision a salt master and two minions ... mikkn just suggested docker as a lighter alternative
10:48 entil but it hasn't been developed in a while...
10:48 viq entil: that's if you use amazon, I'm a cheapskate
10:48 viq entil: also have a look at packer.io
10:48 AviMarcus mikkn, no, just linux
10:48 entil viq: I found packer but I don't think it knows how to build from salt states
10:49 entil I'd hate to have rules separate from salt to build a packer-based image which includes salt for future management
10:49 viq entil: hm, not by itself... though you probably could instruct it to install salt and run it locally, then package the machine
10:49 AviMarcus babilen, I'm looking specifically for a local test environment. Actually, I'd do it on my own computer but I'm running an older version so packages aren't available anymore and it's a PITA to upgrade the OS, tons of stuff I've changed..
10:49 entil I currently have a slightly tuned AMI which connects to the salt master and sets things up, only thing is that it's really really slow
10:50 entil so I'm going to use that as a base to reproduce preconfigured images
10:50 mikkn entil: You basically need any cloud boot image that has cloud-init in it and can accept userdata. You can ask it to install salt in the userdata field.
10:50 entil but I'm slightly worried about what to use, when there are so many options
10:50 mikkn I was looking at the exact same thing 6 months ago, and that seems to be "the" way of doing it now
10:50 viq entil: I heard that DigitalOcean is among the cheapest
10:51 entil mikkn: sure, but if it's an empty image it takes a while for it to configure itself and get all the relevant software deployed, whereas having a preconfigured AMI takes like a minute or something
10:51 AviMarcus brb...
10:52 babilen AviMarcus: I am running vagrant locally and use it to provision a salt master + some minions. The salt master tracks my "dev" branch via GitFS which pulls from a special "vagrant" bare repo to which I push. Once I am happy with my changes I merge them into my master branch and push that to the bare repo used by the production salt-master.
10:52 mikkn entil: In my setup, it takes around 30 seconds for the full setup of Salt through cloud-init, I think that was a worthy tradeoff
10:52 entil mikkn: that's actually fast! does that include deploying your app on it as well?
10:52 babilen AviMarcus: I really like that setup, but am not attached to vagrant at all and even dislike it as it forces me to use virtualbox which is just slow
10:52 mikkn entil: Ah, no. Just actually setting up salt in the boot process and injecting an accepted key
10:53 entil yeah ok, that's fast for me too, though I have something lighter and more suited to my company than cloud-init in there
10:53 mikkn entil: We have a local opencloud though
10:53 mikkn openstack*
10:53 entil it's after that when things start to get slow, the actual updating of the image's configs and deploying the app - that's why I'd like to create images beforehand for everything ;)
10:53 mikkn I get confused :)
10:54 babilen mikkn: One more question (but I should probably take it to #docker soon) -- How can I share data between the host box and docks (? what's the term ?) ?
10:54 mikkn babilen: Volume is the term
10:54 mikkn http://docs.docker.io/use/working_with_volumes/
10:54 babilen mikkn: Great, thanks
10:54 AviMarcus I'm trying to read up on docker
10:55 AviMarcus and funnily, I use GIT to serve the salt config, but it doesn't use GITfs. I started using a long time ago and didn't fully migrate to salt stuff.
10:55 mikkn What I do is that I mount my gitfs repository in /srv/salt and then I do a local salt run rather than a minion-master salt run
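
A sketch of the workflow mikkn describes: bind-mount the state tree into a container as a volume and run a masterless highstate inside it (the image name and host path are assumptions):

    # on the host: share the local state tree with the container
    docker run -i -t -v /home/me/salt-states:/srv/salt salttest/debian-7 /bin/bash

    # inside the container: apply the states without a master
    salt-call --local state.highstate
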
10:55 babilen vagrant with libvirt would be nice, but it's simply not working properly right now
10:55 entil mikkn: but have you ever made AMIs of your setup?
10:55 entil just out of curiosity
10:55 AviMarcus I'll be back to ask questions about docker, maybe...
10:56 mikkn entil: Ah, no. I have a custom made debian image with cloud-init and a few other things prepped, though. But I use the same for everything, so I never did a prepared image for a specific role
10:56 entil mikkn: how long does it take for you to deploy, on average?
10:57 entil depending on the ec2 it can take up to 20 minutes here, which is barely acceptable, but not for long :/
10:58 stevendgonzales joined #salt
10:58 mikkn entil: I've never had a run that takes that long
10:58 entil ok
10:59 entil I actually optimized it a bit by running a bootstrap tarball of some python stuff and it's faster for really usable ec2s but now I got my mind set on custom AMIs ;)
10:59 mikkn I do it in two steps though, first I provision a machine and add it in to salt, this may take up to 3 minutes until it is up and running as a bare machine (depending on how long since I last did a dist-upgrade on the image)
10:59 mikkn Usually it takes 30-60 seconds until it's up and running as a bare machine, though
10:59 entil and then you do the deployment somehow else
10:59 entil I guess
11:00 mikkn Then it can take up to 10 minutes to run a full highstate run including formatting disks and a large portion of time downloading packages
11:00 stevendgonzales joined #salt
11:00 mikkn But we're setting up our own mirror repository to remove the download times
11:00 entil yeah, if this wasn't about python/django it'd be nice to deploy by apt-get
11:01 mikkn We actually have a pypi-mirror too
11:01 entil the one thing I've found is that salt is not very good with deployments
11:01 entil or maybe that's unfair, but it's a bit cumbersome to do some things
11:02 entil this is another reason why I'd want salt to do *only* configuration management... though I need to figure out how to do both salt and deployments on the custom ami...
11:02 entil like, after I gut out the deployment states from my salt states
11:02 mikkn entil: I'm having the same problem, actually. Deployments is a very weird beast, though.
11:03 entil argh SIGINT irl @ work bbiab &
11:03 gildegoma joined #salt
11:05 JonGretar joined #salt
11:09 entil mikkn: we use fabric for post-salt deployments
11:09 entil I wrote this little frameworklet to model setups
11:10 entil but it has the hassle of maintaining both salt states and fabric definitions, and like we established, the initial setup isn't particularly fast
11:10 entil if we were using load-based autoscaling, as we might in the future, having a delay of 10-20 minutes would be painful
11:10 entil and for staging it'd be trying on the developers' patience to wait for that long
11:12 mike25de left #salt
11:19 jrdx joined #salt
11:29 anuvrat joined #salt
11:33 funzo joined #salt
11:34 joehoyle joined #salt
11:42 diegows joined #salt
11:42 mikkn entil: Yeah, 10-20 minutes is way too much. My pain point is probably 5 minutes or so.
11:44 malinoff joined #salt
11:45 harobed joined #salt
11:49 sgviking joined #salt
11:49 entil mikkn: absolutely, but this is not a small task to tackle :E
11:50 bhosmer joined #salt
11:51 viq There are some things (eg there's a cookbook for chef that does that) that distributes stuff via torrents, I wonder how easy it would be to do with salt
11:52 entil doesn't facebook deploy one huge binary using bittorrent or am I mistaken?
11:53 viq I heard of twitter doing that, no idea about facebook
11:53 entil doesn't suit us anyway, not now, but it's still an interesting approach
11:53 elfixit joined #salt
11:59 babilen mikkn: How do you implement a master/minion setup on docker in such a way that they talk to each other?
12:06 easylancer joined #salt
12:09 patrek joined #salt
12:09 bastion1704 joined #salt
12:13 Pingman joined #salt
12:14 mikkn babilen: Not sure I understand which part of it you're asking for. Like what addresses they talk to eachother over? How it automatically is linked together? I only know that they can use the 172.17.0.0/24-span. I myself as I mentioned am only doing local state runs for testing, not master/minion runs
12:14 mikkn SaltStack use docker for testing master/minion setups as well, though for the integration testing
12:15 babilen mikkn: Do you know where they keep their dockerfiles for that?
12:15 mikkn https://github.com/saltstack/docker-containers
12:15 mikkn https://index.docker.io/u/salttest/
12:17 mikkn http://salt.readthedocs.org/en/v2014.1.1/topics/tests/index.html
12:17 mikkn There you can find some more info
12:18 babilen I don't quite see how/where that configures a minion
12:18 mikkn This is the cookie: curl -L http://bootstrap.saltstack.org | sh -s -- git develop
12:18 mikkn https://raw.github.com/saltstack/salt-bootstrap/stable/bootstrap-salt.sh
12:18 mikkn with commands "git develop"
12:22 babilen doesn't that just bootstrap salt within the container ? I mean where in, say, https://index.docker.io/u/salttest/debian-7/ is a minion being configured?
12:23 mikkn The minion is being configured through the bootstrap script
12:23 thayne joined #salt
12:23 mikkn babilen: You can use the bootstrap script to send in the master IP yourself too
12:24 mikkn Command line switch -A to the bootstrap script, and -i for setting the minion id
12:24 dmorrow_ joined #salt
12:24 mikkn https://github.com/saltstack/salt-bootstrap/blob/develop/bootstrap-salt.sh#L177 for full usage instructions. :)
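
Putting those switches together, a hedged sketch of the bootstrap invocation for a containerised minion (the master address and minion id are placeholders):

    # -A points the minion at the master, -i sets the minion id,
    # "git develop" installs salt from the develop branch
    curl -L http://bootstrap.saltstack.org | sh -s -- -A 172.17.0.1 -i minion1 git develop
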
12:25 babilen Okay, I guess I have to play with it until I realise how these bits fit together and how I could use it to test my own git branch.
12:27 TyrfingMjolnir joined #salt
12:27 ckao joined #salt
12:28 mikkn babilen: To give you a brief description of how we use it: We have a script called "statest" which is run both outside to start the docker and inside the docker to run the highstate again. The docker itself runs a masterless minion and statest basically wraps three commands, the first is a copy of the state tree from the mount point to the point which salt reads from
12:28 mikkn the second is to write a custom top file with only one line: '*': [ cmd line mentioned states here ]
12:29 mikkn And the third is to execute salt-call --local state.highstate
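
A rough shell sketch of the three steps mikkn just listed; the paths and argument handling are assumptions, not the actual statest script:

    #!/bin/sh
    # usage: ./statest web,ssh   (comma-separated list of states as $1)
    # 1. copy the mounted state tree to where the minion reads it
    cp -r /mnt/states/. /srv/salt/
    # 2. write a one-line top file targeting everything with the requested states
    echo "base: {'*': [$1]}" > /srv/salt/top.sls
    # 3. apply it against the masterless minion
    salt-call --local state.highstate
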
12:29 stevendgonzales joined #salt
12:29 babilen I still don't quite understand how to start two different containers from a single dockerfile, but I will figure it out eventually :)
12:29 mikkn Ah, you can give them names. :)
12:30 mikkn Docker is as said a little rough. And they also don't really describe some things very well
12:30 toastedpenguin joined #salt
12:31 mikkn If you don't have time to learn something new (a problem I know well) there are fewer new concepts in a Vagrant setup, and Vagrant has been around for longer so the edges aren't quite as rough. :)
12:31 stevendgonzales joined #salt
12:32 tessellare joined #salt
12:34 babilen mikkn: Well, I have time (the rest of my life), but I would prefer to *not* spend it on learning something if I don't end up using it eventually. The concept of docker appeals to me and I realise that I should use it to test salt itself. Once I've accomplished that I can happily hack on salt and live happily ever after.
12:34 funzo joined #salt
12:35 joehoyle joined #salt
12:36 babilen mikkn: The second part is a bit more complicated in that I am thinking about implementing what I have in vagrant right now in docker, but I am just not sure how to do that. My requirements for that are more or less: I want to be able to setup a master and as many different minions as I have "minion classes" in whatever salt setup I am testing. (depends on each customer) -- I then want the master to grab its salt states from GitFS (dev branch) and ...
12:37 babilen ... perform a highstate ... I would also like to be able to inspect the result of that highstate run on the minions.
12:37 babilen mikkn: I will definitely spend the time to learn how to run my development branch of salt in docker (which seems to be pretty well documented) and I guess that it won't take me too long to wrap my head around it.
12:38 babilen OTOH I am simply not sure how to replicate my vagrant setup in docker right now and guess that doing so would take longer
12:41 mikkn babilen: Ah, well in that case! I think the guys in #docker can answer some of those questions, the bootstrap-script for salt covers a lot of what you want for bigger setups as well, the primary thing you need to figure out is how to get and feed the master IP into the other dockers. Then you can use the -i switch for the bootstrap script to assign the correct matching hostnames :)
12:42 jslatts joined #salt
12:44 jforest 2014.1.3 is still not in regular epel, only testing epel, any idea how long it takes to move stuff to regular epel?
12:45 nuun joined #salt
12:45 nuun left #salt
12:48 doanerock joined #salt
12:49 babilen mikkn: yeah, I'll focus on getting to grips with the whole "develop *on* salt" docker setup for now and will continue to test our salt setups in vagrant for now until I have enough time to evaluate the benefits of using docker for that
12:50 mortis is there any way in salt to print only the changes since last run?
12:50 mortis like showing only Result: Changed
12:52 mike25ro joined #salt
12:56 calvinhp_mac joined #salt
12:56 mapu joined #salt
12:59 viq mortis: try salt --state-output=changes host state
12:59 mortis viq: hmmmm interesting :)
12:59 joehoyle joined #salt
13:00 viq mortis: http://docs.saltstack.com/en/latest/ref/output/all/salt.output.highstate.html
13:00 mortis viq: thanks :)
13:02 mortis so: salt "*dev" state.highstate --state-output=changes
13:02 mortis should work?
13:02 mortis it gives the clean states also
13:03 viq yes, but on single line
13:04 mortis thing is, i wanna modify the smtp_return returner to mail me with changes only
13:04 mortis its my im looking at it
13:05 mortis its why*
13:05 mortis i guess i will have to write a tempfile and use diff
13:09 dpac|away joined #salt
13:09 patrek My git repo is only available by https. Looking for the documentation to setup gitfs to store/send the right credentials.
13:10 viq https://user:pass@git.example.com/repo  ?
13:11 funzo joined #salt
13:11 ipmb joined #salt
13:11 quickdry21 joined #salt
13:16 rtucker joined #salt
13:18 racooper joined #salt
13:20 babilen state_output=changes really should be the default
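
For reference, the change babilen makes on every master is a single line in the master config; the same behaviour can be requested per run with --state-output=changes as viq showed earlier:

    # /etc/salt/master
    state_output: changes
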
13:20 mpanetta joined #salt
13:28 patrek Thanks viq, was too simple.
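
A sketch of what that looks like in the master config, assuming the gitfs backend accepts credentials embedded in the remote URL as viq suggests (host, repo and credentials are placeholders):

    # /etc/salt/master
    fileserver_backend:
      - git
    gitfs_remotes:
      - https://user:password@git.example.com/salt-states.git
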
13:29 mortis babilen: agree
13:30 mortis so salt doesnt have something to find a diff between to highstate runs?
13:30 mortis between two*
13:30 mortis my brain and/or fingers dont work correctly today -.-
13:31 Ryan_Lane joined #salt
13:32 babilen mortis: It's the first thing I change on every master and my fellow admins scream if I forgot to do it :)
13:32 mortis :D
13:32 mortis so you have a lot of masters babilen ?
13:33 babilen mortis: A few ... We use one master per customer to keep things separate
13:33 mortis aha
13:34 mortis babilen: do you also have different environments per customer? like dev/qa/prod?
13:35 babilen mortis: I am not using environments in salt, but test different git branches in vagrant. I found that the whole "branch == environment" in GitFS to be quite incompatible with my git workflow
13:35 mnaser i think there is a problem with the doc generation.  http://docs.saltstack.com/en/latest/ -- keep refreshing and watch the commit version change :\
13:36 giannello is there a way to specify the Amazon VPC in the cloud profiles definition?
13:36 mortis babilen: right ... yeah we've been struggling to find a good way to implement it when it comes to dealing with stageing
13:37 chiui joined #salt
13:38 babilen mortis: What I do right now is to use a test setup (in vagrant) that tracks the dev branch as master in GitFS (by setting "gitfs_base: dev")
13:39 grim76 joined #salt
13:39 mortis babilen: so each dev runs his own vagrant-setup then?
13:40 patrek_ joined #salt
13:40 babilen mortis: Unfortunately there is no comparable setup for pillars yet which is why I have a "normal" (i.e. non git extpillar) which is, in fact, a checkout of the dev branch
13:40 babilen mortis: yeah
13:40 mortis ah yeah
13:41 jraby joined #salt
13:41 babilen mortis: Well, I offer them a playground on a preconfigured box too, but the majority of our (small) team is currently just running it locally on their workstations
13:41 mortis babilen: should work well
13:42 babilen mortis: mikkn got me thinking about docker and automating testing of our salt deployments even further, but so far that is what we use
13:42 faldridge joined #salt
13:42 mortis hehe babilen ...so many opertunities
13:43 babilen mortis: The main point I wanted to bring across is that setting "gitfs_base: FOO" allows you to treat "FOO" as a real development branch that is *not* synonymous with "salt environment"
13:43 babilen Which fits my git workflow a lot better (I can easily fork as many feature branches as I want and can easily merge back and forth)
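
A sketch of the master-side setting babilen is describing on the vagrant test master: gitfs serves the dev branch as base, so "dev" stays an ordinary git branch rather than becoming a salt environment (the remote URL is a placeholder):

    # /etc/salt/master on the test master
    fileserver_backend:
      - git
    gitfs_remotes:
      - file:///srv/git/salt-states.git    # the bare "vagrant" repo babilen pushes to
    gitfs_base: dev
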
13:44 Ahlee is orchestration not honoring env= a feature or a bug?
13:44 faldridge joined #salt
13:45 [MT] All of my reactors seem to have stopped reacting... :(
13:46 [MT] My master config hasn't changed, the reactor sls files haven't changed, I still see the event coming through...
13:46 GradysGhost joined #salt
13:49 anuvrat joined #salt
13:50 jcsp joined #salt
13:51 Ahlee I jsut restart the master when that happens (0.17.5)
13:51 Ahlee very less than ideal
13:51 [MT] a restart didn't help
13:51 elfixit joined #salt
13:53 Ahlee interesting
13:54 abe_music joined #salt
13:56 munhitsu_ hi there, any prediction on Helium release?
13:56 [MT] http://dpaste.com/1792877/ <-- this is what I have
13:56 srage_ joined #salt
13:58 HeadAIX joined #salt
13:59 Luke__ joined #salt
14:00 taterbase joined #salt
14:00 danielbachhuber joined #salt
14:02 munhitsu_ I was tempted by htpasswd state
14:05 AviMarcus so... what's the interplay between salt and docker?
14:05 AviMarcus It seems like you can interactively run a new docker, then save it and then run it.
14:06 AviMarcus seems to kinda obviate the need for much of the salt stuff. But there's no real log, then, of the clean commands to run from the base
14:06 kaptk2 joined #salt
14:08 halfss joined #salt
14:11 pdayton joined #salt
14:12 jeremyBass1 joined #salt
14:15 sroegner joined #salt
14:19 babilen Where do I file (suspected) bugs in https://index.docker.io/u/salttest/ ? I am running into the following issue: https://www.refheap.com/79114
14:19 joehoyle joined #salt
14:22 thedodd joined #salt
14:22 Networkn3rd joined #salt
14:22 sandbender1512 joined #salt
14:23 epcim joined #salt
14:26 rgbkrk joined #salt
14:29 calvinhp_mac joined #salt
14:30 abe_music joined #salt
14:32 srage joined #salt
14:33 mgw joined #salt
14:34 jeremyfelt joined #salt
14:34 gothix /msg nickserv identify #%fsmo@17
14:35 sm1ly joined #salt
14:35 babilen gothix: Change your password *now*, never do that in a channel and consider configuring CertFP -- http://freenode.net/certfp/
14:35 gothix thanks
14:35 cro joined #salt
14:35 gothix left #salt
14:36 * bronsen needs to set up certfp as well..
14:36 toddnni joined #salt
14:36 easylancer joined #salt
14:37 sm1ly joined #salt
14:37 [MT] You can use your nickserv password as the server password and it'll authenticate you with that.
14:37 meteorfox|lunch joined #salt
14:38 calvinhp_mac joined #salt
14:39 alunduil joined #salt
14:39 conan_the_destro joined #salt
14:41 anuvrat joined #salt
14:42 cro_ joined #salt
14:43 mapu joined #salt
14:43 stephanbuys joined #salt
14:45 wendall911 joined #salt
14:46 feiming joined #salt
14:47 feiming Hi guys,how do i split string with jinja2?
14:49 gothix joined #salt
14:51 Networkn3rd joined #salt
14:53 mgw joined #salt
14:55 wendall911 joined #salt
14:55 cro joined #salt
14:56 [MT] I have a lot of salt/auth events happening. It has a template id so I don't know where that box actually is. How can I find an IP address the event is originating from?
15:01 colinbits joined #salt
15:03 UtahDave joined #salt
15:03 ocdmw joined #salt
15:05 vbabiy joined #salt
15:07 [MT] I think I've hit the point where I need syndic servers. Restarting my master sends the load average to 8 on a 2 core box
15:07 feiming [MT]: maybe u can start your master under debug mode
15:08 feiming [MT]: salt-master -l debug
15:08 viq [MT]: there were some fixes for that in 2014.1.3
15:08 jalbretsen joined #salt
15:08 TyrfingMjolnir joined #salt
15:09 jeremyfelt joined #salt
15:09 [MT] feiming: starting my master in debug causes things to fail in all sorts of horrible and terrible ways
15:09 [MT] viq: I'll give that a go
15:10 tyler-baker joined #salt
15:11 [MT] lovely... apparently my apt cacher is broken
15:11 StDiluted joined #salt
15:11 Philip joined #salt
15:12 tligda joined #salt
15:13 Philip good morning!
15:13 elfixit joined #salt
15:13 it_dude joined #salt
15:14 gw joined #salt
15:14 gw joined #salt
15:15 Philip I've got a weird thing happening this morning: Minion running locally on the master, salt-call calls are failing, but targeting itself with salt 'name-of-master' works just fine
15:16 UtahDave morning!
15:16 Philip getting minion failed to auth with master, but the key is in accepted keys, and obviously works for salt .. calls
15:17 mateoconfeugo joined #salt
15:18 Gordonz joined #salt
15:18 [MT] heh... that'll cause the apt cache issues... file system got messed up
15:19 Gordonz joined #salt
15:19 ZombieFeynman joined #salt
15:20 UtahDave Philip: are you using sudo when making the salt-calls?
15:21 Philip @UtahDave yeah, sudo on both, and it's an intermittent problem that I haven't put my finger on
15:21 Kenzor joined #salt
15:21 Philip lets say my minion and master are called `master`
15:21 it_dude joined #salt
15:21 Philip `sudo salt-key` shows `master` in accepted keys
15:22 abe_music joined #salt
15:22 Philip sudo salt-call grains.items will fail in the way I described, while sudo salt 'master' grains.items will work as intended
15:22 raizyr joined #salt
15:25 vejdmn joined #salt
15:25 [MT] viq: 2014.1.3 isn't available for debian unstable yet... :(   I wanted to be lazy and not build the package myself
15:26 tligda Philip: Perhaps try deleting the key from the master and re-adding it?
15:27 Philip tligda I've done this, deleted the key on disk too and restarted minion to generate a new one, master is set to auto accept
15:27 viq [MT]: huh, I do have .3 on wheezy boxes
15:27 tligda Philip: That's the right way to do it!
15:27 viq Philip: is that the only minion connected to that master?
15:28 Philip @vig no I've got a bunch of others that all do fine
15:28 Philip so there must be something about the minion running on the same box as the master I imagine
15:28 viq Philip: stop minion on master, remove the key, and see if the key reappears with the minion there stopped
15:28 Gareth morning.
15:28 viq Philip: no, there isn't, I am personally doing that on 5 machines and they all always report
15:29 viq (for the limited use I do)
15:29 Philip @viq yeah I have this running on other setups as well just fine which is why its so strange
15:29 [MT] viq: ignore me... I'm being a dummy
15:30 it_dude joined #salt
15:30 viq [MT]: is that the one where I'm supposed to argue or nod and agree? ;)
15:31 pdayton joined #salt
15:31 jimklo joined #salt
15:31 Philip @viq this always works and eventually goes back to not working on this particular machine, so now I'm back to a working environment, I'll try and see what causes it to start failing
15:32 Philip figured I'd ask here in case anyone had seen this behavior and I was doing something obviously wrong
15:33 [MT] viq: nod and agree.. I wasn't actually thinking for that one
15:33 * [MT] has 2014.1.3 on the master now
15:34 [MT] load average is now 1
15:34 viq [MT]: I think the fix was actually for the minions, to stagger somewhat how they reconnect
15:34 peters-tx joined #salt
15:35 [MT] part of the problem is that 180 of these boxes are told to run state.highstate when they reconnect
15:36 viq Yeah, that can be a problem. I wonder if there's a setting for that
15:38 [MT] a setting to stagger that?
15:38 meteorfox|lunch joined #salt
15:38 viq yeah
15:38 [MT] probably not... it's in the reactor formula
15:39 cruatta joined #salt
15:39 it_dude joined #salt
15:40 viq hmmm
15:40 opapo joined #salt
15:40 CeBe joined #salt
15:41 cro_ joined #salt
15:42 viq sorry, right now fighting with syslog, can't look more into that
15:44 cro joined #salt
15:45 thayne joined #salt
15:48 it_dude joined #salt
15:50 cruatta joined #salt
15:55 smcquay joined #salt
15:56 iptables joined #salt
15:56 redondos joined #salt
15:56 redondos joined #salt
15:57 yano joined #salt
15:57 ksalman when I use --output-file with sys.doc it still prints everything to the screen instead of saving it to a file, what gives?
15:57 cruatta joined #salt
15:57 ksalman sudo salt ENG-KA2G5080S5U sys.doc --output-file=/tmp/foo
15:58 cruatta joined #salt
15:58 it_dude joined #salt
15:58 meteorfox|lunch joined #salt
15:59 iptables hi - I am fighting with iptables.insert state: http://pastebin.com/ZvkYf29P it seems to be tied to the position param which is afaik required input but it does not behave like plain iptables. if this has to be a unique position its bad design (because the different states for different services will not know about each other and should not be related), and 0 is not allowed, and 1 will be out of bounds, and -1 like in the internal modul
15:59 iptables allowed
15:59 iptables any pointers?
16:01 rgbkrk joined #salt
16:01 kermit joined #salt
16:03 sandbender1512 insert requires a position, append does not, just like normal iptables at the cli
16:03 sandbender1512 you can use iptables.append if that satisfies your need...
16:04 sandbender1512 also can setup a custom chain and jump to that from your default (ie: INPUT, whatever) and stick your explicitly-ordered rules in there and/or append to that
16:04 iptables not really - as I want the fw rules for services (ssh, bind etc) to be tied to these states, and the last thing to append is a log and drop rule
16:05 sandbender1512 slightly confused, so you want the rules to be added/dropped as the service in question comes up/down?
16:06 iptables actually same problem with chain as well, as jumping to the custom chain will have to be positioned within INPUT as well
16:06 sandbender1512 you can jump to custom chains from a static position that never changes, and add/remove from (each) custom chain as necessary
16:07 iptables sandbender1512: see my pastebin, the firewall rules is tied to the openssh state, I have similar rules for other services (bind, ntp whatever)
16:07 sandbender1512 ie: sshd is disabled, remove all rules from your SSH chain and replace with a DROP... sshd is enabled, remove all rules and replace with your ssh-speicfic rules in the SSH chain, etc
16:07 iptables so not a central iptables state for all services
16:07 sandbender1512 sec
16:07 it_dude joined #salt
16:08 bastion1704 joined #salt
16:08 iptables (actually doing this on the command line outside salt works just fine)
16:08 sandbender1512 iirc that error is related to a bug which has already been fixed in the development branch/should make it to latest stable soon if it hasn't already
16:09 iptables that is good news - I did not find an issue for it when I glanced through github
16:09 sandbender1512 if you setup your iptables "globally" to have a custom chain that gets jumped to which is empty by default, you can then just iptables.append the rules attached to each of your states
16:09 iptables I run: salt-call --version salt-call 2014.1.3
16:09 iptables sandbender1512: that is what I want to avoid
16:10 iptables especially as I want to package the rules into the different forumulas and encourage reuse outside of my context.
16:10 iptables i.e. global would be ok for my system, but not my neighbour, or at work, or at salt-side X which I do not control.
16:11 KyleG joined #salt
16:11 KyleG joined #salt
16:11 sandbender1512 well you can always insert at the 0 position ie: for INPUT to add an arbitrary set of rules... although networking-wise that wouldn't be terribly efficient, usually you want quick matches at the top like --established and stuff :/
16:11 [diecast] joined #salt
16:12 sandbender1512 the problem is everyone's firewall will be different anyways... think about the issue outside of the salt context for a sec - how would you package a generic firewall rule that could be installed on anyone's iptables setup directly and guaranteed to work?
16:13 sandbender1512 pretty tough, other than just inserting to the 0 position so your 'generic' rule is guaranteed to be it before... anything existing before it's added :)
16:14 iptables yes - but it will work fairly ok for most cases (e.g. non-forwarding firewalls)
16:14 chrisjones joined #salt
16:14 iptables so tpyically for simple boxes on the net just doing a few tasks
16:14 sandbender1512 well, if you're only worried about simple boxes as your audience, just INSERT to 0 of INPUT and you're good to go?
16:14 iptables anyways, what I want to achieve is to insert at the start of INPUT w/o having to know exactly how many rules will be in total
16:15 sandbender1512 ^^^
16:15 sandbender1512 insert to 0 will do what you want
16:15 sandbender1512 (or 1? I believe iptables rule numbering is 0-based but not positive you'd have to check)
16:15 sandbender1512 either way - just insert it at the beginning
16:15 iptables sandbender1512: that is what I tried which does not work, but I can try a newer version of salt. do you have a commit for "iirc that error is related to a bug which has already been fixed in the development branch/should make it to latest stable soon if it hasn't already"
16:15 iptables ?
16:16 it_dude joined #salt
16:16 sandbender1512 move the doubled paren after strip() to the end of the string literal
16:16 sandbender1512 line 328 of states/iptables.py in your install
16:17 mgw joined #salt
16:17 sandbender1512 (sorry don't have the github commit hash handy/would have to dig)
16:18 joehillen joined #salt
16:19 sandbender1512 iptables numbering is 1-based, I just checked, so if you insert at 1 you should be good
16:19 iptables yes
16:20 * iptables running hacked version
16:21 iptables Failed:     0
16:21 iptables great!
16:21 iptables thanks!
16:21 sandbender1512 nps :)
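
For reference, a sketch of an iptables.insert state once that fix is in place, inserting at position 1 of INPUT; the rule itself is a placeholder modelled on the state docs, not the pastebin being discussed:

    open-ssh:
      iptables.insert:
        - table: filter
        - chain: INPUT
        - position: 1
        - jump: ACCEPT
        - match: state
        - connstate: NEW
        - proto: tcp
        - dport: 22
        - save: True
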
16:22 viq nps? Nopes Per Second? ;)
16:22 Debolaz joined #salt
16:23 gildegoma joined #salt
16:24 sandbender1512 haha
16:24 Topic for #salt is now Welcome to #salt | 2014.1.3 is the latest | SaltStack trainings coming up in SLC/NYC/London: http://www.saltstack.com/training | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
16:25 it_dude joined #salt
16:26 iptables http://en.wikipedia.org/wiki/Net_Promoter :)
16:27 scott_walton joined #salt
16:29 svs_ joined #salt
16:30 schimmy joined #salt
16:35 schimmy joined #salt
16:37 maschinetheist joined #salt
16:38 maschinetheist Hi...Can salt-master run on different OS (FreeBSD) than its minions (CentOS)?
16:38 jimklo trying to use the file.managed state... can one use the - name: {{ source_tar }}, where source_tar == salt://mystate/myfile.tgz?  doesn't seem to work wondering if there is a 'trick' to making it work
16:38 jcockhren maschinetheist: yes
16:40 maschinetheist thanks
16:41 maschinetheist Is there anything special that needs to be done on the master or is it pretty transparent then?
16:41 Corey maschinetheist: In what sense?
16:41 Corey Most software abstracts away the underlying OS. And frankly, BSD and Linux are pretty similar already.
16:42 maschinetheist ah cool
16:42 maschinetheist was just wondering if there were any settings to change in salt configs before i dig into thise
16:42 maschinetheist thanks!
16:43 Corey Nope. :-)
16:45 maschinetheist awesome
16:46 haroldjones joined #salt
16:47 maschinetheist left #salt
16:55 ZombieFeynman joined #salt
16:56 mgw joined #salt
16:56 APLU joined #salt
16:57 linuxlewis joined #salt
16:59 stanchan joined #salt
17:01 jimklo joined #salt
17:02 halfss joined #salt
17:02 it_dude joined #salt
17:02 fishernose In bash, I often run: chmod 644 /boot/vmlinuz-`uname -r`
17:03 fishernose I want to do a file.managed: - mode: 644 instead
17:03 fishernose But the file name changes on me with every OS update.
17:03 fishernose Any ideas on how to write a state with a file name that's a moving target?
17:03 ZombieFe_ joined #salt
17:03 rawtaz ha
17:03 rawtaz aha*
17:04 ajolo joined #salt
17:04 [MT] I'm wondering if I should fire up a bunch of syndic servers and set up a load balancer to split that traffic.
17:04 jaimed joined #salt
17:07 jimklo is there a way to store a tgz file in a pillar and reference it from the state?
17:07 ajolo_ joined #salt
17:08 jcockhren jimklo: the actual file? Pillars are typically key:value stores
17:08 jcockhren jimklo: you'd want to put the tgz file in a supported filesystem backend
17:08 jcockhren then reference the file using the 'salt://' syntax in a state
17:09 jimklo jcockhren: okay... I tried doing that by storing the file in my state...
17:09 shm_get how does one go about diagosing a minion-side exception ? http://pastebin.com/VmhftfZK
17:09 scarcry joined #salt
17:10 jimklo however, jcockhren, I tried putting my 'salt://' syntax into a variable and the file.managed state complains of a non-absolute path.
17:10 jcockhren jimklo: gist your state. I'll take a look
17:11 linuxlewis joined #salt
17:11 jimklo jcockhren: thanks.. http://pastebin.com/A4WuJcyU
17:11 fishernose How can I write a state where the file.managed filename comes from a bash function?
17:11 it_dude joined #salt
17:12 MrTango joined #salt
17:13 logix812 joined #salt
17:13 jimklo fishernose: afaik, if have the absolute filename in a variable, you should be able to set the 'name' parameter of file.managed using jinja
17:13 jcockhren jimklo: your file.managed state is incorrect
17:13 possibilities joined #salt
17:13 jcockhren 'name' is the location of where you want to put the file
17:14 jcockhren http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#module-salt.states.file
17:14 jcockhren jimklo: ^
17:14 jcockhren the 'salt://' convention is used with the 'source' key
17:14 jimklo ah... yes
17:15 jimklo ahh... so fishernose... not name... then source parameter... :P I had it reversed myself...
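
The corrected shape of jimklo's state, with name as the destination path on the minion and source as the salt:// URL (the destination path is a placeholder; the source comes from the paste above):

    deploy-tarball:
      file.managed:
        - name: /opt/myapp/myfile.tgz          # where the file should land on the minion
        - source: salt://mystate/myfile.tgz    # where salt's fileserver finds it
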
17:17 shm_get fishernose if your source is the result of a bash function.. it is not managed is it ?  I guess a cmd.run cp should do the trick no ?
17:18 Ryan_Lane joined #salt
17:18 fishernose shm_get, it's actually the target that I want to change.
17:18 fishernose On the minion, I want to set the permissions of the running kernel to '644'
17:19 bmcorser I am finding lots of issues trying to do something that I would expect is a fairly common use case
17:19 travisfischer joined #salt
17:19 fishernose On the minion's commandline, I can type chmod 644 /boot/vmlinuz-`uname -r`
17:19 bmcorser That is provisioning a preseeded master and minion using vagrant
17:19 shm_get fishernose isn't uname a Grain already ?
17:19 fishernose shm_get, that would make sense--I'll look into that!
17:19 bmcorser Are there known issues with this process
17:20 fishernose I mostly use grains for OS family/version targetting.
17:20 fishernose I hadn't considered looking there for kernel version.
17:20 bmcorser I find the docs somewhat contradictory in places, ie. I am confused as to whether I should be using vagrant's built in salt provisioner or the external plugin ...
17:20 it_dude joined #salt
17:21 ajw0100 joined #salt
17:21 shm_get fishernose kernel and kernelrelease are rain apparently
17:21 bmcorser Also should I be using the bootstrap-salt script from the github repo or the one provided by the provisioner
17:22 jrdx joined #salt
17:22 fishernose shm_get, what is rain?
17:22 shm_get fishernose on my box kernelrelease is the same than uname -r
17:22 shm_get fishernose s/rain/grain/
17:23 fishernose Ahhh -- thanks!
17:23 shm_get iow  salt '<machine>' grains.item kernelrelease
17:24 bmcorser Nothing seems to work
17:24 bmcorser Or nothing salty-vagrant looks like it might be able to do seems to happen for me
17:25 akoumjian bmcorser: They are the same thing. Use the built in one
17:25 fragamus_ joined #salt
17:25 fishernose shm_get, you solved my problem!
17:25 fishernose Thanks!
17:26 shm_get fishernose you're welcome :-)
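
The state fishernose ends up with, sketched with the kernelrelease grain standing in for uname -r; replace: False keeps the kernel image's contents untouched and only enforces the mode:

    /boot/vmlinuz-{{ grains['kernelrelease'] }}:
      file.managed:
        - mode: 644
        - replace: False
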
17:26 bmcorser akoumjian: I was reading about issues with the names of preseeded keys, but even after installing vagrant from the source, my master box doesn't get preseeded
17:26 bmcorser or is this an issue i should raise in #vagrant
17:27 jimklo hmm now I'm getting "Minion failed to authenticate with the master, has the minion key been accepted? " any thoughts?
17:28 shm_get fishernose out of curiosity: why do you want your kernel to be 'writeable' by anyone ?
17:29 fishernose shm_get, 644 is just readable by everyone--and the short answer is "guestmount"
17:30 bmcorser jimklo: you using salty-vagrant?
17:30 arthabaska joined #salt
17:30 shm_get fishernose is is readable by everyone bu writeable by someone (the owner) :-)
17:30 druonysus joined #salt
17:30 druonysus joined #salt
17:30 shm_get fishernose yeah I know if someone is root a missing w flags is not going to stop them... but still :-)
17:31 jimklo bmcorser: no... however I'm using rhel5 inside vmware
17:31 fishernose shm_get, oh, I see what you're saying.
17:31 bmcorser is your master a virtual machine
17:31 fishernose I think it's writeable by root so root can remove it.
17:32 jimklo it has an 'old' zmq... not sure if I can upgrade it yet or not... DIACAP reasons
17:32 fishernose Every time Ubuntu updates the kernel it adds it to /boot...
17:32 arthabaska greetings everyone...quick question--are there any caveats or best practices around managing a salt master's system configuration?
17:32 fishernose At some point you might want to remove some of them.
17:33 fishernose But Ubuntu's security policy is making the kernels 0600, and I need at least +r for userland.
17:33 fishernose Guestmount is such a wonderful tool--instead of using loopback devices, you can mount up whole hard drive images in userland--as long as you can read that kernel.
17:34 Luke_ joined #salt
17:34 fishernose This is especially handy for automating disk image preparation from a makefile in such a way that your build system doesn't need to run with root privileges.
17:35 foxx joined #salt
17:35 fishernose So guestmount is why I wanted to change the file mode of the kernel.
17:36 shm_get fishernose ok.. thanks for indulging my curiosity :-)
17:38 happytux joined #salt
17:39 shm_get I'm kind of stuck on http://pastebin.com/VmhftfZ  (running a highstate on that minion) any suggestions ?
17:40 peters-tx shm_get: "This paste has been removed!"
17:42 shm_get peters-tx humm.. I can still see it here... oh, I dropped a letter in cut and paste... http://pastebin.com/VmhftfZK
17:44 druonysuse joined #salt
17:44 druonysuse joined #salt
17:44 dwiden joined #salt
17:45 happytux joined #salt
17:46 socks__ joined #salt
17:46 Heartsbane terminalmage: ping
17:46 dwiden This isn't strictly a salt question, but it will help with my salt deployment.  I am trying to use vagrant to spin up minion VMs and then use salt for provisioning.  My minions are on a server and communicate using the host-only adapter (vboxnet0).  However, when I vagrant up, my host only adapter is on the "vboxnet2" network.  Does anyone have experience fixing this?  I quickly googled it, but no solutions worked
17:47 sandbender1512 basepi: pull request created... got another tiny one coming in a min
17:48 terminalmage Heartsbane: pong
17:48 pydanny joined #salt
17:49 ravibhure joined #salt
17:49 happytux joined #salt
17:49 terminalmage Heartsbane: you summon me and then leave me hanging? bad form
17:50 terminalmage now you owe me another beer
17:50 bastion1704 joined #salt
17:51 Heartsbane Sorry
17:51 Heartsbane I had someone want my attention
17:52 terminalmage ooh, mr popular over her
17:52 terminalmage *here
17:52 terminalmage :)
17:52 Heartsbane Always someone busting my bawlz
17:52 terminalmage haha
17:52 terminalmage so what's up?
17:52 Heartsbane salt -v staging.example.com state.sls derp_staging_setup pillar="{release_version:'6.0.13-15192'}"
17:53 Heartsbane That does not seem to be overriding the release_version
17:53 akoumjian Latest vagrant stable should have it working.
17:53 Heartsbane am I doing something wrong
17:53 akoumjian bmcorser: FWIW, setting up a masterless minion for local dev is a much simpler option
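The masterless setup akoumjian mentions is mostly one minion-config line plus salt-call; a minimal sketch, assuming states live in the default /srv/salt:

    # /etc/salt/minion
    file_client: local

States are then applied locally with salt-call --local state.highstate, no master required.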
17:53 terminalmage no, that should work... maybe you need to quote 'release_version' ?
17:54 terminalmage or put a space after the :
17:54 terminalmage or both
17:54 Heartsbane k hold on
17:54 basepi sandbender1512: woot!
17:54 sandbender1512 :D
17:55 terminalmage Heartsbane: yeah the space is a problem
17:55 terminalmage the lack of space that is
17:55 terminalmage causes a pyyaml exception
17:55 terminalmage just tested
17:55 sandbender1512 also fyi: as soon as I'm done with this pull request submitting, I'm gonna start digging into that ssh package stuff we discussed, so may ping you on that here and there for the next little bit ;)
17:56 Heartsbane k
17:56 terminalmage Heartsbane: and the 'release_version' doesn't need to be quoted, it works either way
17:56 terminalmage I tend to quote just to avoid weird behavior
17:56 Heartsbane wow silly spaces
17:56 terminalmage yup
17:57 terminalmage Heartsbane: for future reference: import yaml, then do a yaml.safe_load("{release_version:'6.0.13-15192'}")
17:57 terminalmage do that in the python shell
17:57 it_dude joined #salt
17:57 Heartsbane Thanks
17:57 terminalmage and you'll see how it gets loaded
17:57 Heartsbane I think that did the trick
17:58 Heartsbane syntax is always a killer
17:58 terminalmage we do some custom loading so safe_load won't replicate what salt does 100%
17:58 terminalmage but, it'll catch things like this for sure
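To make terminalmage's point concrete, here are the two forms side by side as YAML; per his test above, the first raises a PyYAML error while the second loads as a one-key mapping:

    # no space after the colon: PyYAML cannot parse this as a key/value pair
    {release_version:'6.0.13-15192'}

    # space after the colon: loads as {'release_version': '6.0.13-15192'}
    {release_version: '6.0.13-15192'}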
17:58 bmcorser akoumjian: :/
17:58 bmcorser cheers tho
17:58 [MT] Any ideas what's going on here? http://dpaste.com/1793173/  Most any command I run ends up with that same issue. From a single minion salt-call will sometimes finish after a very long delay. The load average on the master is almost nothing, ~1/2 mem free, no disk wait, nothing actively running, etc. My master config - http://dpaste.com/1793175/
17:58 terminalmage Heartsbane: ok, I'm going to go get a late lunch, good luck man
17:59 jaimed joined #salt
18:00 AdamSewell joined #salt
18:00 AdamSewell joined #salt
18:01 [MT] I wonder if I just need to upgrade minions and maybe this issue will go away.
18:03 [MT] If I'm going to upgrade, I'm curious if I should set up syndic servers. Maybe 4-6 syndic servers sitting behind a load balancer.
18:03 sandbender1512 Gareth: here?
18:03 Heartsbane terminalmage: thanks man
18:05 [MT] I don't like that I have to manually update salt on every minion. It's going to especially suck for windows boxes. :(
18:05 mapu joined #salt
18:06 it_dude joined #salt
18:07 Ryan_Lane [MT]: why would you need to manually update?
18:07 Heartsbane terminalmage: when you get back can you confirm that was merged into salt-minion 2014.1.0
18:08 Heartsbane Cause it is not overriding
18:08 Ryan_Lane salt * pkg.install salt-minion won't work? or whatever the same upgrade method is for using the windows version?
18:09 [MT] Ryan_Lane: that's broken in the past, maybe it'd work
18:10 diegows can i use templates in pillar files?
18:11 diegows well.... just trying :)
18:11 [MT] Ryan_Lane: I'm also not able to reach many of the minions and I can't figure out why. I wonder how hard it is to upgrade windows salt-minion
18:11 Ryan_Lane ah
18:12 Ryan_Lane [MT]: this is a good reason to have a scheduled highstate run on minions (always back up remote execution with eventual consistency)
18:12 Ryan_Lane then you can set the salt version in your state config
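A sketch of the pattern Ryan_Lane describes: a scheduled highstate in the minion config plus a state that pins the salt-minion package (the interval and version shown are illustrative):

    # /etc/salt/minion
    schedule:
      highstate:
        function: state.highstate
        minutes: 60

    # somewhere in the states that minion gets
    salt-minion:
      pkg.installed:
        - version: 2014.1.3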
18:13 AdamSewell joined #salt
18:13 AdamSewell joined #salt
18:13 [MT] I've been wanting to do that. It's definitely planned once I have salt managing everything I want it to manage.
18:13 ksalman when i use --output-file with sys.doc it still prints everything to the screen instead of saving it to a file, what gives?
18:13 [MT] right now highstate breaks some things
18:13 ksalman sudo salt ENG-KA2G5080S5U sys.doc --output-file=/tmp/foo
18:14 Ryan_Lane [MT]: ah
18:14 [MT] Ryan_Lane: wanna take a look at my longer line of whining and see if anything looks wrong there? (two dpaste links)
18:16 Ryan_Lane [MT]: what version of salt are you using?
18:16 Ryan_Lane I think there was some authentication flood bug
18:16 [MT] 2014.1.3 on the master, various on the minions
18:16 Ryan_Lane ah. dunno, then
18:16 [MT] that bug was a client side fix, wasn't it?
18:20 Gareth sandbender1512: I am now.
18:20 opapo joined #salt
18:21 sandbender1512 pull request 12239, for that dport/etc thing yesterday, just FYI :)
18:21 bmcorser akoumjian: i wrote up my issue :)
18:21 bmcorser https://github.com/mitchellh/vagrant/issues/3523
18:22 aw110f joined #salt
18:22 sandbender1512 I modded the sportS and dportS lines as well, which should be fine but I'm not utilizing those in my recipes/states so SLIGHTLY less sure vs. the non-plural ones
18:22 Gareth sandbender1512: nice.
18:22 sandbender1512 anyways, no action required, just letting you know it's up :)
18:22 arthabaska joined #salt
18:23 aw110f Hi, I'm using Atlassian CI tool Bamboo, to orchestrate deployment of rpm package.  Does anyone have any experience with that?
18:25 xmj offtopic i feel that orchestrate is becoming the un-word of 2014.
18:25 it_dude joined #salt
18:26 aw110f I made the Bamboo agent that our software builds on into a salt-minion, and I want it to target other minions to install packages
18:26 cruatta joined #salt
18:26 [MT] If I were to have say 6 syndic servers behind a load balancer and have the load balancer distribute connections to the syndic servers, would I likely get around this issue I'm running into?
18:26 aw110f i turned on peer communication
18:26 [MT] I'm noticing that eventlisten.py is seeing no events
18:28 [MT] ooooh... shiny... things are happening after dropping to 20 worker_threads and restarting;
18:28 cruatta_ joined #salt
18:29 aw110f @xmj any synonym of orchestrate you know of that's trendy for 2014?
18:29 [MT] it takes a looooong time for a restart to clean up all those connections. I think that's enough selling point for syndic servers.
18:30 xmj aw110f: for the lulz: we're pushing for conducting.
18:30 [MT] reactor still isn't working right
18:31 TheRhino04 joined #salt
18:32 [MT] HAHAH!!!!  All the whining about the reactor and it's fine. Someone made a typo
18:41 patrek joined #salt
18:42 ggoZ joined #salt
18:43 jaimed joined #salt
18:43 it_dude joined #salt
18:44 analogbyte joined #salt
18:48 mgw joined #salt
18:48 eykd joined #salt
18:49 joehillen joined #salt
18:50 halfss joined #salt
18:52 eykd joined #salt
18:53 eykd Is anyone here familiar with Vagrant’s salt provisioner?
18:53 jcockhren eykd: I am
18:54 eykd I’m trying to set up a masterless minion VM, but it’s not using the config I’m specifying w/ `salt.minion_config`
18:54 jcockhren eykd: gist your vagrant filr
18:54 jcockhren file*
18:55 eykd On it.
18:56 [MT] Can the batch setting be configured at the master level?
18:56 eykd jcockhren: https://gist.github.com/eykd/e2238a6a65a251c319de
18:58 jcockhren eykd: if you ssh into the vagrant, you're saying the minion file isn't present?
18:58 jcockhren (usually at /etc/salt/minion in linux)
18:59 jcockhren also, rename the file from minion.conf to just minion.
18:59 eykd jcockhren: Yes, it’s a different file.
18:59 [MT] WOOHOO!!!! I just saw salt-run manage.down actually finish!!!!! :D :D
18:59 aw110f Is there a limit on the number of git repositories in gitfs_remotes?  We want to have each of our git repos manage their own set of files and we have a lot.
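For reference, the master-side config aw110f is describing is just a list of remotes; a sketch with placeholder repository URLs:

    # /etc/salt/master
    fileserver_backend:
      - git
    gitfs_remotes:
      - git://github.com/example/web-states.git
      - git://github.com/example/db-states.git
      - https://git.example.com/ops/base-states.git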
19:00 eykd [MT]: Congrats. :)
19:01 eykd jcockhren: there’s something at /etc/salt/minion but it’s different from the local file I’m specifying.
19:01 [MT] thanks; this is one hell of a mess; it's definitely time to get salt upgraded to latest on all minions and get syndic boxes set up
19:02 ajw0100 joined #salt
19:02 jcockhren eykd: make the changes above then do a `vagrant provision`
19:06 redondos joined #salt
19:07 rgbkrk_ joined #salt
19:08 eykd jcockhren: You know what, I think maybe the problem was that the provisioner only copies the file once, but doesn’t update it ever again. I think I had an old version on the VM. Running a `destroy` and `up` seems to have solved the issue.
19:08 Heartsbane terminalmage: forget looking that up. I figured it out again
19:09 Heartsbane totally my fault syntax again.
19:09 jcockhren eykd: ah
19:10 jcockhren eykd: could you file an issue about this at https://github.com/mitchellh/vagrant ?
19:10 jcockhren eykd: tag me in it. @jcockhren
19:10 it_dude joined #salt
19:11 eykd jcockhren: OK, will do. I’ll make sure it’s really a problem, first. :)
19:13 TheRhino04 Hey guys
19:13 TheRhino04 just curious, why are some wheel functions not available from the WheelClient?
19:14 Bplotnick joined #salt
19:14 mgw joined #salt
19:15 Bplotnick Hi - is there a way to have salt run highstate repeatedly until there are zero errors? There are some scenarios that have a time component and we would like a retry mechanism
19:15 TheRhino04 for example, wheel.key.gen and wheel.key.gen_accept are not available from the w_func list
19:16 [MT] welp... I definitely need to upgrade minions because the master can't talk to them anymore. I wasn't looking to start with syndic boxes yet but now's the most sensible time I'll ever see... *grumble*
19:16 [MT] fine salt... make me do things
19:16 [MT] I think everything is working though, so no more complaining. :)
19:17 danielbachhuber joined #salt
19:20 cruatta joined #salt
19:26 pdayton1 joined #salt
19:28 eykd jcockhren: https://github.com/mitchellh/vagrant/issues/3524
19:29 kermit joined #salt
19:29 jcockhren eykd: thanks. I'll look into it
19:30 eykd jcockhren: Thanks for the help.
19:32 zombie__ joined #salt
19:33 schimmy joined #salt
19:33 JesseCW joined #salt
19:33 Ryan_Lane1 joined #salt
19:34 mattmtl joined #salt
19:35 mattmtl left #salt
19:36 schimmy1 joined #salt
19:38 basepi joined #salt
19:40 possibilities joined #salt
19:45 Bplotnick joined #salt
19:46 mgw joined #salt
19:46 sroegner joined #salt
19:46 [diecast] joined #salt
19:50 mattmtl joined #salt
19:50 pssblts joined #salt
19:51 mattmtl left #salt
19:52 jimklo joined #salt
19:54 schimmy joined #salt
19:58 [diecast] joined #salt
20:01 schimmy1 joined #salt
20:04 Bplotnick joined #salt
20:05 Bplotnick I guess I could just do a while loop in bash but it's pretty ugly. I would rather have a native retry capability
20:07 gildegoma_ joined #salt
20:09 whiteinge [MT]: (re: your question from earlier on upgrading/restarting Salt using Salt) see the techniques in this ticket (including Windows at the bottom): https://github.com/saltstack/salt/issues/7997
20:09 whiteinge might be helpful for what you're working on
20:11 mik3 left #salt
20:13 foxx[a] joined #salt
20:15 it_dude joined #salt
20:22 rgbkrk joined #salt
20:24 it_dude joined #salt
20:25 anuvrat joined #salt
20:25 faldridg_ joined #salt
20:25 bhosmer joined #salt
20:26 ajw0100 joined #salt
20:26 Networkn3rd joined #salt
20:28 redondos joined #salt
20:28 redondos joined #salt
20:30 KyleG Is anyone automatically generating ascii motd's with salt?
20:30 taterbase joined #salt
20:30 KyleG like automatically creating cool looking MOTD with the server name/environment in easy to see ASCII text
20:30 ngealy2 joined #salt
20:31 ngealy2 Can you assign nodegroups using pillar information?
20:32 ngealy2 You can do it with grains, but I don't see a way to do it with pillar
20:32 ngealy2 nodegroups:
20:32 ngealy2 aio: 'G@environment_type: aio'
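A hedged sketch of what ngealy2 is after: if memory serves, the compound matcher's I@ prefix targets pillar data the way G@ targets grains, so a pillar-based nodegroup in the master config could look like this (reusing ngealy2's key and value):

    # /etc/salt/master
    nodegroups:
      aio: 'I@environment_type:aio'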
20:33 it_dude joined #salt
20:33 jeremyfelt left #salt
20:36 faldridge joined #salt
20:37 nube_ joined #salt
20:38 AdamSewell_ joined #salt
20:38 halfss joined #salt
20:39 ipmb joined #salt
20:39 vbabiy_ joined #salt
20:39 ixokai- joined #salt
20:40 TheRhino04 why aren't wheel.key.gen_accept and wheel.key.gen available from the Wheel Client?
20:40 fridder joined #salt
20:42 it_dude joined #salt
20:43 ifnull_ joined #salt
20:44 ndrei joined #salt
20:45 otsarev joined #salt
20:45 jalaziz joined #salt
20:48 fxhp joined #salt
20:49 druonysuse I think I remember there was a way to set salt's scheduler to run at some random offset time but I have been unable to locate the docs on how to set this up. How can I configure this?
20:50 bhosmer joined #salt
20:50 AdamSewell joined #salt
20:51 it_dude joined #salt
20:52 ixokai- joined #salt
20:53 patrek_ joined #salt
20:54 sgviking joined #salt
20:57 possibilities joined #salt
20:57 Gareth druonysuse: https://github.com/saltstack/salt/pull/12153
20:59 druonysuse Gareth: so the option is splay?
20:59 anuvrat joined #salt
20:59 Gareth druonysuse: Nod.  Only in develop at the moment though.
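For reference, the splay option from that pull request looks roughly like this in the minion config once it lands (develop-only at the time of this log; job name, interval, and splay window are illustrative):

    schedule:
      highstate:
        function: state.highstate
        minutes: 60
        splay: 600    # delay each run by a random 0-600 seconds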
21:00 druonysuse Gareth: Okay. Good to know
21:00 druonysuse Gareth: we are running 2014.1
21:01 pssblts joined #salt
21:02 druonysuse Gareth: Thank you very much :)
21:02 possibil_ joined #salt
21:04 harobed joined #salt
21:04 Gareth druonysuse: no worries :)
21:06 TheRhino04 I guess no one knows?
21:06 TheRhino04 haha
21:10 arya joined #salt
21:11 arya Hi there.
21:11 arya we are hit by this issue badly https://github.com/saltstack/salt/issues/12185
21:11 arya there is a stuck job and I cannot kill it in any way
21:11 Gareth TheRhino04: are you talking about a wheel module?
21:11 arya is there a way to kill the job/history manually on master?
21:12 TheRhino04 yeah
21:12 TheRhino04 Only a select few of the key sub-module's methods are available via the Wheel Client
21:12 TheRhino04 is this by design?
21:12 TheRhino04 or a bug?
21:13 bmcorser how can i send pillar data to minions
21:13 Gareth TheRhino04: that I do not know.  Not familiar with that one.
21:13 doanerock joined #salt
21:14 bmcorser i would like to use it for targeting, i.e. pillar['roles'] = ['webserver', 'database']
21:14 bmcorser rather than globbing on the minion_id
21:15 TheRhino04 Thanks Gareth
21:15 TheRhino04 I'll keep researching
21:20 arya bmcorser: I think you are dealing with a chicken and egg problem. depending on where you host, there might be some post bootstrap mechanism on the machines that you can use to populate some pre-required files. I know AWS uses cloud-init. Not sure what you use.
21:21 Ryan_Lane1 bmcorser: assuming you've already loaded the pillars on the minions you can then use the pillars to target
21:22 Ryan_Lane1 so, if your pillar top file matches by glob, a highstate run on the minions will load the pillars
21:22 Ryan_Lane1 then you can target via the pillar
21:23 Ryan_Lane1 alternatively, you could match via grains in the pillars, then a highstate will pull the pillars, allowing you to target that way. matching pillars via grains is dangerous, though, you shouldn't put any sensitive info into pillars protected by grains
21:23 Ryan_Lane1 in your situation, if you can inject the grains into the minion on boot (like via cloud-init) you can target via grains
21:29 [MT] syndic servers... basically just there to manage extra clients, right? that way minions talk to syndics, and syndics talk to the master?
21:30 bmcorser i see Ryan_Lane1, so grains are user-configurable like minion_ids and pillars, not just info about the box?
21:30 Ryan_Lane1 @bmcorser yes, you can set them via the grains module, the minion config, or via grain modules
21:30 bmcorser *they aren't just info about the box
21:31 Ryan_Lane1 but they exist on the box, unlike pillars, which exist on the master
21:31 Ryan_Lane1 i rely heavily on grains for targeting
21:31 Ryan_Lane1 but again, trusting them for anything sensitive is bad, since the minion can change them
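A minimal sketch of static grains set the way Ryan_Lane1 describes, in the minion config (grain names and values are made up):

    # /etc/salt/minion
    grains:
      roles:
        - webserver
        - database
      environment_type: prod

Targeting then works with something like ``salt -G 'roles:webserver' test.ping``, with the caveat above: the minion controls its own grains, so don't gate anything sensitive on them.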
21:32 bmcorser so there is one set of pillars for the whole infrastructure? the docs seemed to suggest that minions could have their own set of pillars
21:33 bmcorser gah i need to watch more tutorial videos :)
21:34 Ryan_Lane1 bmcorser: you have a top file for pillars
21:34 Ryan_Lane1 where you can say "minionx gets these sets of pillars"
21:34 Ryan_Lane1 and "miniony get these other sets of pillars"
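A sketch of the pillar top file being described (minion ID globs and pillar SLS names are made up):

    # /srv/pillar/top.sls
    base:
      'minionx*':
        - common
        - webserver
      'miniony*':
        - common
        - database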
21:35 ajw0100 joined #salt
21:36 bmcorser ah
21:36 bmcorser like the "dev" "prod" etc in the docs
21:39 jimklo is there a way to pass a password to the hg state?  I need to checkout a repo on the minion, but then remove the credentials... so they may be reset by the local user...
21:39 ajprog_laptop1 joined #salt
21:42 thart joined #salt
21:46 hhenkel Hi all, I'm thinking of having a simple webfrontend with some management "compatible" pie charts showing various stats like the distribution of kernel versions and such. What would be the best way to fetch such information?
21:50 hhenkel I thought about using salt-api to fetch the grains, but did not give it a try. Also I'm not completely sure how this will scale if more than one user is fetching information.
21:52 anuvrat joined #salt
21:54 [MT] WOW! syndic servers are dead simple!
21:54 jaimed joined #salt
21:54 possibilities joined #salt
21:55 westurner joined #salt
21:56 it_dude joined #salt
21:56 [MT] I don't need to also set up git repos on the syndic server that the master uses?
21:56 jcockhren [MT]: no
21:57 jcockhren what I do is also make the syndics minions as well
21:57 jcockhren therefore, they get all the states and pillar stuff
21:58 cruatta_ joined #salt
21:58 jcockhren no sure others do it though
21:58 jcockhren not sure how* ^
21:58 [MT] that's what I was planning to do - same as I keep the master a minion of itself
21:58 [MT] I use salt to make changes to salt :)
21:59 cruatta joined #salt
22:00 westurner @MT with https://github.com/saltstack-formulas/salt-formula ?
22:01 linuxlew_ joined #salt
22:02 [MT] nope, but what I have looks much like that
22:02 pydanny joined #salt
22:04 [MT] I'm trying to get to the point where 100% of non-dev work is done via git
22:05 it_dude joined #salt
22:05 Ahlee is there a way to purposefully fail a state?
22:05 cruatta joined #salt
22:05 Ahlee I want to test my detection of unexpected failures, so of course I can't make a state fail when i want to
22:05 whiteinge Ahlee: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.test.html#salt.states.test.fail_with_changes
22:05 [MT] state_fail: cmd.run: - name: false
22:05 [MT] or that
22:06 [MT] ya, that way is much better
22:06 whiteinge Ahlee: you can pull that state module into your /srv/salt/_states folder
22:06 whiteinge since it's new
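For reference, both approaches as SLS: the cmd.run one-liner [MT] pasted and the dedicated test state whiteinge linked (state IDs are arbitrary; note the quotes so 'false' stays a command rather than a YAML boolean):

    force-a-failure:
      cmd.run:
        - name: 'false'

    always-fails:
      test.fail_with_changes:
        - name: force-a-failure-with-changes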
22:06 Ahlee neat
22:06 Ahlee thanks whiteinge, that's pretty awesome.
22:07 kermit joined #salt
22:08 [MT] whiteinge: thanks for that link earlier! It's not going to help me in this particular case because I'm basically ripping things apart and making these changes really really burn, but in the future that will be incredibly helpful.
22:08 * [MT] caused a lot of hurt for himself
22:09 whiteinge haha
22:09 whiteinge doh :)
22:10 [MT] I'm moving the salt server to a new subnet, adding four syndic servers, setting up the load balancer (F5 BigIP) to lb to those four boxes, and then updating salt on every minion and manually configuring it to point to the lb address.
22:10 whiteinge hhenkel: grabbing grains or running executions to gather data that isn't in grains is a great way to do that. if you're concerned about people triggering expensive checks too often, set up a returner and have the web interface pull the data you need from whatever database
22:11 [MT] whiteinge: sound mostly sane?
22:12 whiteinge [MT]: yeah, nice. sounds like a good project...but i see your point ;)
22:12 westurner With today's develop branch, when I run 'python setup.py test' within an Ubuntu 12.04 virtualenv as a normal user I have:
22:12 westurner """FAILED (total=1060, skipped=140, passed=901, failures=14, errors=5)"""
22:12 westurner * http://jenkins.saltstack.com builds appear to be passing without
22:12 westurner errors or failures, and with significantly fewer skips
22:12 westurner * Is that normal for locally running the tests?
22:12 westurner * Is there an environment variable I should be setting?
22:12 westurner https://github.com/saltstack/salt/blob/develop/Contributing.rst#fixing-issues
22:12 westurner http://docs.saltstack.com/en/latest/topics/development/hacking.html
22:12 whiteinge [MT]: is that for scaling or redundancy purposes?
22:12 hhenkel whiteinge: Okay, that sounds reasonable. are you aware of any existing returner webfrontend (mysql or postgres)?
22:13 Ahlee lol, i love the messages whiteinge
22:14 Ahlee That's amazing
22:14 westurner Also, my github contributions block chart looks like a turtle and I'm tempted not to commit for the next 10 days or so.
22:14 whiteinge hhenkel: i'm not aware of an existing front-end but if you use a returner you can pull the data back out with the jobs runner via calls to salt
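A hedged sketch of the returner half of that suggestion, using the MySQL returner's options in the master config (hostnames and credentials are placeholders; option names are as I recall them from the docs, so double-check the returner documentation):

    # /etc/salt/master
    mysql.host: 'db.example.com'
    mysql.user: 'salt'
    mysql.pass: 'secret'
    mysql.db: 'salt'
    mysql.port: 3306

Job data can then be pulled back out with the jobs runner, e.g. ``salt-run jobs.list_jobs``.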
22:14 [MT] whiteinge: scaling mostly, I seem to be smacking my head pretty hard with a single master where 380 servers are almost 100% managed by salt; I have to tune the master and I'm only going to be adding more. It seems like 2014.1.3 fixed some of these issues, but I'm deciding that this is the point where I'm going to move to having syndic servers. I was going to have to do it at one point eventually
22:14 [MT] anyway.
22:15 [MT] It'll also help massively with reconnects.
22:15 whiteinge Ahlee: haha, yeah :)
22:16 rgbkrk joined #salt
22:16 hhenkel whiteinge: okay, thanks for the info. Good night.
22:16 whiteinge [MT]: makes sense.
22:16 [MT] Right now, when I restart salt-master, the box is basically brought to its knees for 10-30 minutes while everything reconnects. With the syndic servers, I'll be able to restart salt-master on the master without them all pounding the same box
22:16 whiteinge [MT]: is that the case even on 2014.1.3?
22:17 [MT] I'd have to update all the minions to find out...
22:18 [MT] at that point we run into all the weird issues I'm fighting right now which are causing me so many headaches and I've completely lost communication with most of the minions anyway
22:18 jimklo_ joined #salt
22:18 whiteinge bleh :(
22:18 [MT] :P
22:19 [MT] It's something that I can't really blame salt for. This one is my fault. I really should have planned ahead and thought through some of the changes I made and ... not gonna play the game anymore.
22:19 [MT] OH! Question!
22:19 anuvrat joined #salt
22:19 [MT] If I trigger gitfs sync on the master, will the syndic boxes need to do anything for the change or will they just transparently grab things from the master?
22:20 whiteinge [MT]: syndics won't automatically grab anything from the master
22:20 whiteinge you'll have to trigger a sync on them separately
22:21 [MT] oh..
22:21 whiteinge it's common to have a minion daemon running on syndics for that reason. that way you can control the syndics as a group
22:21 whiteinge from the master
22:21 rgbkrk joined #salt
22:22 [MT] will it be the same gitfs.file_sync that they need?
22:23 [MT] or... they'll pull from the master and won't actually be configured to pull gitfs remotes, right?
22:23 it_dude joined #salt
22:23 westurner left #salt
22:24 whiteinge syndics don't get any files or configuration from the master automatically. if you configure them with gitfs remotes they'll sync themselves automatically on the normal gitfs timer (~60 secs?). or you can trigger manually with the fileserver runner
22:24 whiteinge http://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.fileserver.html#module-salt.runners.fileserver
22:24 westurner joined #salt
22:25 bhosmer joined #salt
22:26 [MT] ok... so a salt-syndic needs to be configured the same as the master as if it's another master, it just also has the extra syndic piece that allows them to sit between the master of masters and minions?
22:26 * whiteinge nods
22:26 [MT] that makes much sense now :)
22:26 whiteinge syndic is literally just a pass-through mechanism for commands
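A sketch of the wiring whiteinge is describing (hostnames are placeholders): the top-level master marks itself as a master of masters, and each syndic box carries a full salt-master config of its own (gitfs_remotes, pillar_roots, and so on duplicated) plus the syndic pointer:

    # on the master of masters: /etc/salt/master
    order_masters: True

    # on each syndic box: /etc/salt/master
    syndic_master: master-of-masters.example.com

Both salt-master and salt-syndic then run on the syndic box, plus a salt-minion if you want to drive the syndics from the top master as jcockhren described earlier.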
22:26 elfixit joined #salt
22:27 halfss joined #salt
22:27 [MT] I was way over-complicating what they did then
22:27 [MT] whiteinge: THANKS!!!
22:27 whiteinge np :)
22:30 jraby joined #salt
22:32 redondos joined #salt
22:32 [diecast] joined #salt
22:32 it_dude joined #salt
22:41 jraby joined #salt
22:43 pydanny joined #salt
22:48 redondos joined #salt
22:48 redondos joined #salt
22:49 cewood joined #salt
22:51 it_dude joined #salt
22:58 linuxlewis joined #salt
23:00 jgarr joined #salt
23:03 tligda joined #salt
23:04 gothix_ joined #salt
23:08 peno joined #salt
23:09 it_dude joined #salt
23:10 patrek joined #salt
23:18 it_dude joined #salt
23:21 TheRhino04 Just curious, is there any repository for older versions of Salt?
23:21 TheRhino04 We are using 0.17
23:21 TheRhino04 but are currently at 0.17.1
23:21 kermit joined #salt
23:21 TheRhino04 0.17.4 has functions that we would like to have, but the yum pkgs don't appear to be publicly available
23:22 oleg joined #salt
23:22 oleg Hi guys
23:22 Guest16343 I suppose I found the reason SaltStack works very slowly in some cases (including event processing)
23:22 Guest16343 https://github.com/saltstack/salt/issues/12246
23:22 Guest16343 What do you think about that?
23:24 Guest16343 boost from 4 minutes to 8 seconds
23:25 possibilities joined #salt
23:26 taion809 joined #salt
23:26 cruatta joined #salt
23:28 cruatta_ joined #salt
23:34 jslatts joined #salt
23:35 Shish joined #salt
23:36 it_dude joined #salt
23:37 alunduil joined #salt
23:43 ajw0100 joined #salt
23:47 whiteinge Guest62618: yes! that has been bugging the crap out of me this week
23:47 whiteinge i usually don't see the problem but i'm on a weird network
23:47 TOoSmOotH joined #salt
23:47 jalbretsen joined #salt
23:47 APLU joined #salt
23:47 whiteinge TheRhino04: you can find old RPMs on koji
23:48 lahwran_ joined #salt
23:48 mgw joined #salt
23:48 Dattas joined #salt
23:48 whiteinge TheRhino04: http://koji.fedoraproject.org/koji/packageinfo?packageID=13129
23:48 thunderbolt joined #salt
23:48 TheRhino04 whiteinge: Awesome
23:48 TheRhino04 thank you
23:49 nlb joined #salt
23:50 otsarev_home whiteinge: so, now we know the exact reason for the bad performance :)
23:50 Jahkeup joined #salt
23:51 otsarev_home With slow DNS we have dramatically bad speed
23:51 otsarev_home So, I suppose, SaltStack or one of its dependencies makes too many nslookup calls
23:51 otsarev_home And dns_cache: False does not help....
23:51 whiteinge i think that's exactly what i'm seeing. exactly. i'm playing with my dns now to try and verify
23:52 whiteinge veery annoying :)
23:52 otsarev_home I see several hundred DNS requests during a single "salt-call state.highstate"
23:52 tyler-baker joined #salt
23:52 otsarev_home In the debug log you see it as AES requests
23:52 seblu joined #salt
23:52 otsarev_home I found some issues on github with exactly same logs
23:52 otsarev_home But problem not in crypto
23:52 otsarev_home problem in DNS :)
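One possible stopgap while that gets chased down, assuming the master sits at a stable IP: pin its name in /etc/hosts on the minions so nothing waits on a slow resolver (hostname and address are placeholders):

    pin-salt-master:
      host.present:
        - name: salt.example.com
        - ip: 10.0.0.10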
23:53 whiteinge huh. do you know where in the code that AES logging is coming from?
23:54 otsarev_home No
23:54 jrdx joined #salt
23:54 otsarev_home Let me check my logs / google... I will show typical example
23:55 otsarev_home [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
23:55 otsarev_home [DEBUG   ] Decrypting the current master AES key
23:55 otsarev_home [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
23:55 otsarev_home [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
23:55 otsarev_home [INFO    ] Executing state user.present for asdf
23:55 otsarev_home [INFO    ] User asdf is present and up to date
23:55 otsarev_home [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
23:55 otsarev_home [DEBUG   ] Decrypting the current master AES key
23:55 otsarev_home [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
23:55 otsarev_home [DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
23:55 otsarev_home https://groups.google.com/forum/#!topic/salt-users/DM0XzVtSws4
23:55 otsarev_home 4th message here
23:56 otsarev_home https://github.com/saltstack/salt/issues/10798
23:56 otsarev_home This is!
23:56 otsarev_home Exactly!
23:57 otsarev_home [INFO    ] AES payload received with command _file_hash
23:57 otsarev_home [INFO    ] AES payload received with command _ext_nodes
23:57 otsarev_home [INFO    ] Clear payload received with command _auth
23:57 druonysuse joined #salt
23:57 otsarev_home [INFO    ] Authentication request from gateway
23:57 druonysuse joined #salt
23:57 otsarev_home [INFO    ] Authentication accepted from gateway
23:57 otsarev_home [INFO    ] Clear payload received with command _auth
23:57 otsarev_home [INFO    ] Authentication request from gateway
23:57 otsarev_home [INFO    ] Authentication accepted from gateway
23:57 otsarev_home [INFO    ] AES payload received with command _file_list
23:57 otsarev_home [INFO    ] AES payload received with command _file_list
23:57 otsarev_home [INFO    ] AES payload received with command _file_list
23:57 otsarev_home [INFO    ] AES payload received with command _file_list
23:57 otsarev_home [INFO    ] AES payload received with command _file_list
23:57 otsarev_home [INFO    ] AES payload received with command _file_list
23:57 otsarev_home [INFO    ] AES payload received with command _file_hash
23:57 otsarev_home A lot of messages like this
23:58 [MT] otsarev_home: next time, use dpaste.com and don't flood the channel
23:58 KyleG1 joined #salt
23:58 otsarev_home [MT]: I am sorry
23:59 otsarev_home whiteinge: https://github.com/saltstack/salt/issues/12246 I added log with my comments to github
23:59 whiteinge otsarev_home: this is making sense. sorry, i have to run afk. i'll be back
23:59 linuxlew_ joined #salt
23:59 whiteinge ah, nice. thanks for adding that to the issue
23:59 otsarev_home whiteinge: so, you just need pure DNS performance :)
23:59 meteorfo_ joined #salt
