
IRC log for #salt, 2015-06-01


All times shown according to UTC.

Time Nick Message
00:02 litwol joined #salt
00:02 litwol Hello
00:06 spookah joined #salt
00:10 mgw joined #salt
00:13 mgw joined #salt
00:15 solidsnack joined #salt
00:26 litwol interesting
00:27 litwol i'm using ssh_known_host (and accompanying ssh module ssh.recv_known_host) to control my minion .. well known hosts :). however i am running into a problem where ssh.recv_known_host gets fingerprint in format different from what i get when i run ssh-keygen -l -f ..
00:29 litwol ohhhhhhhhhhhhhhh
00:30 mosen hehe, sounds solved
00:33 litwol no
00:34 litwol salt retrieves old-format fingerprints.. xx:xx:xx:xx....
00:34 litwol while ssh-keygen produces new fingerprints
00:34 mgw left #salt
00:34 litwol SHA256:XXXXXXXXXXXXXXXx...
00:35 litwol trying to debug right now how to force salt to use a different fingerprint schema, or maybe something else ?
00:37 mosen maybe just a different hash type option
00:38 litwol yeah
00:39 litwol i've tried providing the 'enc' param. doesn't change a thing
00:40 litwol ssh.recv_known_host always returns xx:xx:xx:xx... format
00:42 litwol found it
00:42 litwol https://github.com/saltstack/salt/blob/develop/salt/modules/ssh.py#L223
00:43 litwol fingerprints are always base64
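For reference, the state litwol is working with is ssh_known_hosts.present, which compares the fetched host key against a fingerprint given in the state; salt's ssh module renders that fingerprint in the old colon-separated hex form. A minimal sketch, with a placeholder host and fingerprint:

    github.com:
      ssh_known_hosts.present:
        - user: root
        - enc: ssh-rsa
        - fingerprint: 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48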
00:43 cheus joined #salt
00:53 schuckles Can anyone recommend an example function that calls cloudformation via boto?
00:54 schuckles Im sorta' new with python/salt and having a little trouble trying to write a script
00:55 schuckles joined #salt
01:01 badon_ joined #salt
01:03 ajw0100 joined #salt
01:33 c10 joined #salt
01:42 tkharju joined #salt
01:48 ilbot3 joined #salt
01:48 Topic for #salt is now Welcome to #salt | 2015.5.1 is the latest | Please use https://gist.github.com for code, don't paste directly into the channel | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
02:03 thehaven joined #salt
02:13 aidin_ joined #salt
02:26 vexati0n hey how would i write a state that makes my wife make me a sammich
02:31 mosen I think you're looking for chef
02:36 vexati0n zing!
02:37 ALLmightySPIFF joined #salt
02:38 vexati0n i'm proud of myself, today i got a whole contraption set up to contact all my remote minions and get them to reconnect over openvpn
02:38 vexati0n it's sort of ugly but it works
02:39 hasues joined #salt
02:39 hasues left #salt
02:50 evle joined #salt
02:50 solidsnack joined #salt
02:52 favadi joined #salt
02:52 snkr joined #salt
02:54 snkr hello - need help running salt-api in Python3
02:56 snkr same code runs without issues in python 2.7, but in python 3 i get
02:56 snkr File "/usr/local/lib/python3.4/dist-packages/salt/config.py", line 1927, in get_id     if name.startswith(codecs.BOM):  # Remove BOM if exists TypeError: startswith first arg must be str or a tuple of str, not bytes
02:56 snkr import salt.client   local = salt.client.LocalClient() whoami = local.cmd('*', 'cmd.run', ['whoami'])  print (whoami)
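snkr's snippet, laid out readably (the same calls, nothing added):

    import salt.client

    # connect through the local master and run cmd.run on every minion
    local = salt.client.LocalClient()
    whoami = local.cmd('*', 'cmd.run', ['whoami'])
    print(whoami)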
02:58 zer0def joined #salt
03:01 joeto joined #salt
03:04 solidsna_ joined #salt
03:11 bin_005_k joined #salt
03:13 badon joined #salt
03:18 jcristau joined #salt
03:19 snkr joined #salt
03:19 timoguin joined #salt
03:30 Peeko joined #salt
03:32 bhosmer_ joined #salt
03:33 evle1 joined #salt
03:34 otter768 joined #salt
03:47 Aidin joined #salt
03:47 aidin_ joined #salt
03:51 CryptoMe1 Evening Everyone! I've been toying with the beacon + reactor system tonight, and I'm falling short on reactors, and the 'tgt' option. I'm trying to target the minion where the event came from, but I'm getting this error: SaltRenderError: Jinja variable 'dict object' has no attribute 'id'
03:51 CryptoMe1 I've tried using data['id'] in several locations, but it doesn't seem to get picked up by the reactor.
03:54 CryptoMe1 Here are the relevant details: http://pastebin.com/Edvvfrwk
03:54 CryptoMe1 If anyone has some insight, I'd greatly appreciate it!
03:55 Aidin joined #salt
03:55 aidin_ joined #salt
03:57 CryptoMer Oh, I should mention i'm running 2015.5.0, on Ubuntu 12.04
03:59 vexati0n eh. i tried setting up reactor and ran into an error that has stumped enterprise support for a week
04:00 vexati0n so idk. lol.
04:00 CryptoMer lol
04:00 Aidin joined #salt
04:03 vexati0n as for the 'dict object has no attribute' error, i get that when doing highstate on some of my minions that have failed updating to later versions of salt
04:04 vexati0n but obviously that's a different problem
04:11 CryptoMer ya. I wish it wasn't.
04:11 CryptoMer I'm trying it on a system that is on 2015.5.1, see if that magically fixes it haha.
04:19 vexati0n do you have the relevant section of your master conf that specifies the reactor command?
04:27 CryptoMer Sorry was setting it up on another system. Sure, I'll paste it in a moment.
04:27 CryptoMer http://pastebin.com/YvgJkXKj
04:33 debianguy joined #salt
04:35 vexati0n ooh! i see the problem!
04:35 vexati0n ..not really. it looks fine to me.
04:35 CryptoMer gah!
04:35 CryptoMer you got my hopes up!
04:35 vexati0n yeah that's pretty much exactly what mine is, only i get a "too many arguments" error instead of the jinja one you get.
04:35 CryptoMer haha
04:36 CryptoMer It will work if I specify '*' as the tgt
04:36 vexati0n ew
04:36 CryptoMer But then it fires off a highstate on each of my minions, which is less than ideal.
04:36 vexati0n i'm new with this jinja voodoo so i'm probably just not seeing what the problem is
04:39 debianguy Hi, I have a command that runs correctly from the command line (redacted): salt some_minion rsync.rsync rsync://user@host/some/dir/ /opt/iota/reflux delete=True update=True passwordfile=/etc/some.file . I am trying to migrate this into a state.sls file. Here is an example of what the config I am trying to use looks like: https://gist.github.com/banyanleaf/7008934a8a0eb5ccced0.  I received an error saying that rsync.rsync is not found.
04:40 debianguy Any tips or suggestions would be most appreciated, thanks.
04:42 CryptoMer debianguy: states use the states module. I don't see 'rsync' as a function of that module: http://docs.saltstack.com/en/latest/ref/states/all/
04:43 debianguy Hrm good point, so general modules like: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.rsync.html are not usable in states then?
04:43 debianguy in other words this is a cli only module?
04:43 CryptoMer that would be my take on things.
04:44 CryptoMer There's probably a way to get it to work from a state, but off the top of my head, I wouldn't know what it was.
04:45 debianguy thanks a lot. I guess a follow up, We switched to rsync from file.recurse because of extreme delays in processing directories with a couple hundred files in them. By removing them from our base dirs and rsync'ing them over we see 1-2 minute salt jobs instead of 12-15min. We are running latest  2015.5.1 if that helps. Is there a better option for syncing multiple large directories?
04:46 debianguy At this point it seems like a lot of command runs to rsync is my only option for good performance and handling in a state file
04:46 CryptoMer hmm, that's a tough one.
04:46 debianguy ok I will look at the source again with an eye on non-state modules running in a state, that is very helpful
04:46 CryptoMer good luck!
04:46 debianguy thanks a lot!
04:47 rdas joined #salt
04:50 debianguy Looking at the docs for the rsync module again, it does say right at the top: This data can also be passed into pillar. Options passed into opts will overwrite options passed into pillar.
04:50 debianguy so I think that is likely a workable solution
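The "non-state module in a state" idea debianguy circles back to above is usually done with the module.run state. A minimal sketch reusing his CLI arguments; whether every rsync.rsync keyword passes through cleanly on his version is an assumption to verify:

    sync_reflux:
      module.run:
        - name: rsync.rsync
        - src: rsync://user@host/some/dir/
        - dst: /opt/iota/reflux
        - delete: True
        - update: True
        - passwordfile: /etc/some.file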
04:52 vexati0n we do massive syncs over rsync (~1000 minions each caching a directory of 450GB) and we had to just write a separate sync script in bash and just orchestrate it through salt.
04:52 CryptoMer vexati0n: using cmd.script?
04:52 vexati0n yes
04:52 debianguy ok thanks vexation, that is the scope of the data I am talking about
04:52 CryptoMer that's not a bad idea.
04:53 vexati0n all our minions are remote, too, and the system works (more or less) reliably. as long as some yahoo doesn't shut off the minions but i can't find a Salt State that electrocutes end users.
04:54 debianguy I think we likely end up there, I was just hoping to control all dirs through states
04:54 debianguy but it just does not seem realistic
04:54 CryptoMer vexati0n: sounds like a pull request!
04:56 vexati0n yeah. all the docs say the fileserver isn't really intended to handle that kind of load.
04:57 ramteid joined #salt
04:57 rdas joined #salt
04:58 debianguy having rsync as a state module would be nice
04:58 debianguy and would really solve a lot of these types of issues
05:00 debianguy I do have a second ongoing issue/question I am trying to work through. Running a DNS server in a saltstack-managed docker container is having port binding issues. I think this may be more of a docker-py issue but if you have any experience with docker/docker-py/saltstack, I would love some info on this one: http://stackoverflow.com/questions/30522816/issue-with-saltstack-docker-py-port-binding-tcp-and-udp-to-the-same-port
05:02 seev docker seems so nasty to work with
05:02 debianguy Ha I feel docker is the easy stuff and saltstack is the very painful stuff
05:02 debianguy in truth though I think it is docker-py that is the real thorn in most peoples side
05:02 debianguy docker from command line is trivial
05:02 seev I haven't worked with containers since OpenVZ
05:04 debianguy yeah we are getting to the point of writing our file sync with rsync outside of salt, and removing the use of dockerio in favor of cmd.runs. I just start to wonder if saltstack is the right tool for us
05:04 debianguy I feel like it should be, I want it to be, but the pain factor has been high
05:04 aarontc joined #salt
05:05 vexati0n the only thing i know about dockers is they're nice pants.
05:05 debianguy ha
05:06 debianguy yeah whenever I ask my saltstack/docker/docker-py port binding question all I get is crickets, seems like running dns or other multiprotocol applications in a saltstack managed container is quite uncommon
05:09 mosen I'm neither an expert in salt or docker, but ill take a look anyways :)
05:11 vexati0n i'm not an expert in anything. but salt makes things fun. like while a bunch of people spent 6 months banging their head against setting up SCOM, i just quietly built all the functionality they were trying to come up with into Salt and turned it on.
05:11 Norrland SCOM?
05:11 vexati0n systemcenter
05:12 vexati0n they were like "uh.. this automates stuff right? let's try that." because.. "windows shop"
05:12 Norrland ah, windows thingies
05:12 vexati0n one of those places where anytime something goes wrong, they hunt down the nearest open source program and blame it
05:12 debianguy I work for a company that has an inhouse system managing over 10k servers and we are trying to migrate to salt, high pain factor so far
05:12 mosen debianguy: some bits are better than others
05:13 CryptoMer debianguy: I've found the initial learning curve is easy, but once you start using the 'more advanced' stuff (read: anything but simple states), the config becomes more difficult.
05:13 debianguy really sync'ing dirs and mapping ports are the two things I really need, and the two things I have the most trouble with
05:13 mosen debianguy: lemme read through the source for dockerio on the port mapping thing
05:14 vexati0n i work for a company that has absolutely nothing at all to manage anything whatsoever, and we're trying to go from that to "devs push a button and a server happens"
05:15 debianguy thanks mosen, that would be great. I have looked at it and it was not obvious/trivial. I do believe the salt dockerio module does not care and the docker-py module on the minion being used is where the trouble is. But I have yet to confirm or deny this
05:15 CryptoMer vexati0n: It's unfortunate how many companies are in that state.
05:15 vexati0n it is, if you don't have people who like the old-west atmosphere.
05:15 vexati0n not us though. nothing but outlaws and cattle wranglers.
05:16 debianguy we have a working system but are trying to unburden ourselves from maintaining/building/managing our own organically grown, not well planned or thought-out in-house system in favor of a stable open source project
05:17 CryptoMer debianguy: Saltstack will get you there. Most config mgmt systems would get you partially there.
05:17 debianguy without a doubt so far salt is our best chance, and we do have about 10% of our servers running through salt right now, in just the non-optimal way so to speak, lots of cmd.runs, etc
05:17 CryptoMer However, salt's built on that remote execution pluggable system, so it makes it quite excellent in that regard.
05:17 CryptoMer It could be that the proper module for what you want simply hasn't been written yet.
05:17 debianguy we are working the docker.running port binding issue by doing a cmd.run docker run
05:18 CryptoMer IMHO, there's no shame in using cmd.run to get things working initially.
05:18 debianguy I agree CryptoMer, and we are at that point, but we would like to evolve to using the state modules before deciding if we are going to proceed
05:18 mosen debianguy: ok.. gonna hafta spin up a busybox image or something with tcp/udp 53 and replicate it.. there's nothing obvious on the salt side, maybe it is docker-py
05:20 vexati0n i got almost everything we do written using practically nothing BUT cmd.run. mainly because i didn't have time to learn more than that by the time we needed it. but the remote execution alone is worth the trouble.
05:20 debianguy mosen, I am with you 100%, When I looked at the dockerio code there was nothing obviously wrong there
05:20 debianguy the interesting thing is the order, it only ever maps tcp whether udp is listed as port 1 or port 2 in my example
05:20 debianguy so I am sure there has gotta be an if tcp: somewhere
05:21 mosen ah so theyre not just using the first item.. yeah there has to be some kind of filtering for just tcp ports bound
05:21 debianguy well that was super generic but yeah they are using tcp as a value to act on that has a preference over udp
05:21 debianguy let me pull the docker-py code out again, I have only looked at it briefly but it was non-obvious iirc
05:22 mosen lemme just write a stupid container state that does nothing much
05:25 debianguy Thanks so much everyone for the feedback and help, I really appreciate it
05:27 debianguy In retrospect it is silly of me that I did not previously strip out all my other configs and just try the port binding, I should have
05:27 debianguy because my state files are non trivial
05:28 debianguy (I am doing this now)
05:34 mosen just realised that even though I've used a lot of salt and docker.. i havent really dealt with dockerio module :)
05:34 mosen i might take a quick look at the docker-py api
05:35 otter768 joined #salt
05:36 debianguy Here is a dumbed-down sls file I am about to test with: https://gist.github.com/banyanleaf/b0d41037d2734b59497c
05:37 debianguy bllah bad copy/paste wrong ports
05:37 debianguy but you get the idea hopefully
05:37 debianguy for docker-py source: https://github.com/docker/docker-py/blob/master/docker/client.py#L857
05:38 mosen ah right. i had pulled but not installed.. and i thought maybe it only operated with create / start
05:39 debianguy there is no cleanup in that example so if the container already exists it will complain but it will work for a new container
05:40 favadi joined #salt
05:40 mosen oh gosh.. I'm going to need an actual bind container i think. Dont have anything that EXPOSE's udp
05:41 p66kumar joined #salt
05:41 debianguy yeah I am using ubuntu:vivid and apt-get installing the knot dns server
05:41 debianguy in the image
05:42 pkc joined #salt
05:43 debianguy tbh I don't think it matters though. If you expose those two ports I believe it will map regardless of if you have something listening on them, but I am not 100% on that
05:44 pkc Is there a way to store result of salt-call to some variable?
05:44 mosen it didnt seem to run without the container exposing the ports in the first place. not sure. Anyways ill just grab some ISC bind
05:44 debianguy I have my simplified test running now
05:45 mosen does it show bindings for both?
05:47 debianguy I am seeing the same behavior with my simplified state file: 0.0.0.0:53->53/tcp, 53/udp
05:47 mosen ok
05:48 mosen I wonder if this is why they wrote dockerng :)
05:48 mosen still pulling bind
05:49 debianguy If I docker run the container from the command line with -p 53:53 -p 53:53/udp it works fine
05:49 debianguy Yes I have been following dockerng closely lol
05:51 debianguy from a docker run I see the ports as: 0.0.0.0:53->53/tcp, 0.0.0.0:53->53/udp
05:51 djdeaf joined #salt
05:52 aparsons joined #salt
05:55 aparsons_ joined #salt
05:56 debianguy fwiw I have tried multiple versions of docker-py, currently we are running: docker-py==1.2.2
05:57 aparson__ joined #salt
06:02 mosen phew ok, same issue replicated
06:02 debianguy thanks so much for testing it, at least I am not crazy
06:02 debianguy well in this regard
06:03 debianguy I will file an issue with the docker-py project and see if they can lend a hand, I can't thank you enough for the help
06:03 mosen might not be them
06:04 mosen looks like in docker inspect, the binding is there, but it doesnt assume the IP 0.0.0.0
06:04 mosen docker port bindtest
06:04 mosen 53/tcp -> 0.0.0.0:9053
06:04 mosen 53/udp -> :9053
06:05 schuckles joined #salt
06:05 mosen being explicit about HostIp: "0.0.0.0" resolves that
06:06 golodhrim joined #salt
06:06 mosen its a problem with the default value of HostIp
06:06 cheus joined #salt
06:07 debianguy ok I did see the syntax change of 0.0.0.0 vs. '' in the upgrade from saltstack 2014.x to 2015.x I believe
06:07 debianguy I will try adding the 0.0.0.0
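For reference, the explicit-HostIp workaround mosen describes would look roughly like this in the dockerio docker.running state's port_bindings block; the state id, image and host port mirror the DNS example here and are only illustrative:

    knot-dns:
      docker.running:
        - image: ubuntu:vivid
        - port_bindings:
            "53/tcp":
              HostIp: "0.0.0.0"
              HostPort: "53"
            "53/udp":
              HostIp: "0.0.0.0"
              HostPort: "53"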
06:08 mosen ahh right.. yeah I'm a little behind so maybe its my problem in a different way
06:08 mosen the thing is, my tcp connection got the correct binding and udp didnt, like you said
06:09 colttt joined #salt
06:09 schuckles1 joined #salt
06:09 packeteer joined #salt
06:09 debianguy wow
06:09 debianguy it worked
06:10 debianguy I would have sworn I tried this, but hot dam you are a life saver
06:10 mosen yay
06:10 debianguy I owe you beers or beverages of your choice
06:11 mosen welp, I need a beer now :) took me about 47 goes to write a valid docker state
06:14 stoogenmeyer joined #salt
06:14 debianguy seriously thanks a lot, this is a great find
06:14 mosen no problem
06:14 mens joined #salt
06:14 dendazen joined #salt
06:15 mosen whats the other problem? hehe.. I don't know anything about the rsync state if any
06:15 mosen and sounds like file.recurse is not gonna be helpful
06:15 soren_ joined #salt
06:16 soren_ joined #salt
06:16 debianguy yeah that one is a little clearer, file.recurse isn't a good solution, rsync is the answer. And someone mentioned the useful idea of writing a script to do the rsync and using cmd.script to run it which I think is very workable
06:16 debianguy I am going to play with that
06:16 mosen yeah fair enough
06:18 AndreasLutro joined #salt
06:18 debianguy My main issue was I was trying to use rsync as a state module when in fact it is a salt module not a state module, my inexperience with salt is showing on this one
06:19 ndrei joined #salt
06:19 aparsons joined #salt
06:23 mosen ahh right yes
06:23 mosen i think you could use the execution module via a scheduler thing
06:25 debianguy interesting, I will look into it
06:26 mosen I havent tried using the scheduler at all
06:30 kawa2014 joined #salt
06:31 aparsons joined #salt
06:34 Garo_ When a minion triggers a reactor in the master, can the reactor somehow access the pillar data for that particular minion?
06:40 malinoff joined #salt
06:41 stoogenmeyer_ joined #salt
06:45 pkc Is there a way to store result of salt-call to some variable?
06:45 Auroch joined #salt
06:45 aparsons joined #salt
06:46 AndreasLutro pkc: variable? what are you actually trying to achieve?
06:47 joeto joined #salt
06:47 pkc I want to create subnets in vpc but I need vpc id for that. So I'm trying to get and store vpcid in a variable
06:47 ThomasJ pkc: variable=$(sudo salt-call pkg.list_upgrades)
06:47 pkc I'm trying like this:
06:47 ThomasJ echo $variable
06:47 pkc {% set vpcdetails = {{ salt-call['boto_vpc.get_id']('testvpc') }} %}
06:48 ThomasJ In jinja, not so sure.
06:48 AndreasLutro first of all you don't need {{ }} inside {% blocks %}
06:48 pkc but in sls we need to use jinja ...right?
06:49 KermitTheFragger joined #salt
06:49 pkc ok
06:49 AndreasLutro second you use salt['module.function'] to call salt modules, salt-call makes no sense
06:49 AndreasLutro looking up the boto_vpc.get_id function, it returns an object: https://github.com/saltstack/salt/blob/develop/salt/modules/boto_vpc.py#L489-L490
06:49 ThomasJ pkc: You can use any of the renderers salt supports by specifying them. So you could write pure python if jinja will not allow you to do what you need
06:49 AndreasLutro so you can do {% set vpc = salt['boto_vpc.get_id']('testvpc') %} {{ vpc.id }}
06:50 AndreasLutro dictionary, not object
06:51 sleibo joined #salt
06:51 pkc what is {{ vpc.id }} here ?
06:51 AndreasLutro the id of testvpc
06:52 al joined #salt
06:52 pkc ok .. let me try it
06:54 pkc Rendering SLS 'base:subnet' failed: Jinja variable 'None' has no attribute 'id'
06:54 thalleralexander joined #salt
06:55 AndreasLutro ah we're looking at the wrong version
06:55 AndreasLutro https://github.com/saltstack/salt/blob/2015.5/salt/modules/boto_vpc.py#L165-L189
06:55 AndreasLutro you can see when it returns None there
06:56 s_kunk joined #salt
06:56 AndreasLutro maybe you can just wrap it in an {% if vpc %}
06:56 baweaver_ joined #salt
06:56 s_kunk joined #salt
06:57 AndreasLutro also it's just {{ vpc }}, not .id
06:57 baweaver_ joined #salt
06:57 AndreasLutro not until the next version anyway
06:59 pkc still getting some syntax error
07:00 AndreasLutro well, share it
07:01 ndrei joined #salt
07:01 pkc {% if vpc = salt['boto_vpc.get_id']('testdev') %}
07:01 pkc Rendering SLS 'base:subnet' failed: Jinja syntax error: expected token 'end of statement block', got '='; line 1  --- {% if vpc = salt['boto_vpc.get_id']('testdev') %}    <======================
07:01 aparsons joined #salt
07:02 pkc set instead of if, throws same error
07:02 AndreasLutro I highly doubt that
07:05 pkc hey .. it worked ...what did I change?
07:05 pkc this is working ... {% set vpc = salt['boto_vpc.get_id']('test-dev') %}
07:05 pkc I can use {{ vpc }} then
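Putting the pieces together, a sketch of how that lookup could feed subnet creation via module.run and the boto_vpc.create_subnet execution function; the state id, CIDR block and exact argument names are assumptions to check against the installed boto_vpc module:

    {% set vpc_id = salt['boto_vpc.get_id']('test-dev') %}
    {% if vpc_id %}
    create_test_subnet:
      module.run:
        - name: boto_vpc.create_subnet
        - vpc_id: {{ vpc_id }}
        - cidr_block: 10.0.1.0/24
    {% endif %}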
07:06 pkc Thanks a lot AndreasLutro!!
07:07 pkc You saved me today :)
07:10 anton joined #salt
07:11 ndrei joined #salt
07:14 dkrae joined #salt
07:16 Guest30133 Hi, i cant find any info on executing a job from a state file that should update another server, not the minion. something like ansible's delegation: delegate_to: foo,local.org, maybe i am not searching right. my need is: after installing munin-node or nagios-nrpe, to update a config file on another server. any help will be great!
07:29 s0lar joined #salt
07:32 supersheep joined #salt
07:36 otter768 joined #salt
07:37 __number5__ Guest30133: salt minions are on all servers, even the master has one
07:40 Aidin1 joined #salt
07:43 bluenemo joined #salt
07:45 Guest30133 for example, i have a master which configs a minion to run munin-node, and i have another minion that runs on the munin-master; on success of munin-node i need to put config for it on the munin-master
07:46 mosen salt mine is one way
07:47 Guest30133 mosen: i am trying to work this out with reactor , but maybe there is a better way
07:47 Guest30133 ?
07:48 eseyman joined #salt
07:48 mosen Guest30133: I'm not sure because I haven't done anything with reactor. But with salt mine you can collect all the hostnames/ip addresses of the nodes, and then they become available as a list, which you could use in a template or something like that
07:49 Guest30133 thanks , i am checking
07:49 mosen might need reactor to force the state to refresh on the master
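A rough sketch of the salt mine approach mosen describes: the munin-node minions publish their addresses to the mine, and a template rendered on the munin-master minion iterates over them. The grain used for targeting is an assumption:

    # on the munin-node minions (minion config or pillar):
    mine_functions:
      network.ip_addrs: []

    # in a config template rendered on the munin master:
    {% for minion, addrs in salt['mine.get']('role:munin-node', 'network.ip_addrs', expr_form='grain').items() %}
    # {{ minion }}: {{ addrs | join(', ') }}
    {% endfor %}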
08:01 jeddi joined #salt
08:04 david_an11 joined #salt
08:06 Grokzen joined #salt
08:07 slav0nic joined #salt
08:18 stoogenmeyer__ joined #salt
08:19 codehotter Can I do tasks on the same minion in parallel
08:20 baweaver_ joined #salt
08:20 codehotter like ABCD all need to happen before E starts, but A B C D can themselves be done in parallel, so it will be done much faster
08:21 baweaver_ joined #salt
08:21 N-Mi joined #salt
08:29 ninkotech__ joined #salt
08:32 gmoro joined #salt
08:34 solidsnack joined #salt
08:37 c10 joined #salt
08:38 denys joined #salt
08:43 cruatta joined #salt
08:46 stephanbuys joined #salt
08:48 markm joined #salt
08:52 ndrei joined #salt
08:56 soren joined #salt
09:02 markm_ joined #salt
09:03 ndrei joined #salt
09:06 solidsna_ joined #salt
09:09 ndrei joined #salt
09:12 bin_005 joined #salt
09:20 stephanbuys joined #salt
09:21 Berty_ joined #salt
09:21 soren joined #salt
09:21 Aidin joined #salt
09:24 ndrei joined #salt
09:27 Berty_ joined #salt
09:32 stephanbuys joined #salt
09:33 linjan joined #salt
09:36 lothiraldan joined #salt
09:36 baweaver joined #salt
09:36 otter768 joined #salt
09:39 lothiraldan joined #salt
09:40 _mel_ joined #salt
09:41 Katyucha joined #salt
09:41 soren joined #salt
09:42 solidsnack joined #salt
09:43 vilitux joined #salt
09:45 Katyucha Hi. I'm trying to connect to our vsphere but no luck because of SSL Certificate verify failed
09:45 Katyucha Any solution to bypass it?
09:47 markm__ joined #salt
09:53 froztbyte Katyucha: looks like you'll have to check the https://code.google.com/p/pysphere/ code
09:53 froztbyte since that's what salt-cloud uses (according to the docs)
09:55 TyrfingMjolnir joined #salt
09:59 Berty_ joined #salt
10:03 impi joined #salt
10:05 arount joined #salt
10:06 arount Hi there, I'm experimenting with Salt Reactor, but I don't understand how to execute a command on a minion when an event is fired
10:06 arount Someone can help me please ?
10:08 _mel_ hi. i have some states combined by a "basic" state. now i want to use another environment to use just some of these states. so i created a new environment. but i got this error: "Detected conflicting IDs, SLS IDs need to be globally unique". can i reuse sls states in several environments?
10:09 ange hi, where should I drop a line regarding a job opening in a London based company?
10:09 ndrei joined #salt
10:11 solidsnack joined #salt
10:15 soren joined #salt
10:17 impi joined #salt
10:17 quist``` joined #salt
10:20 tmclaugh[work]_ joined #salt
10:21 vexati0n_ joined #salt
10:21 s_kunk_ joined #salt
10:21 rdas_ joined #salt
10:22 MohShami joined #salt
10:22 dayid joined #salt
10:22 dayid joined #salt
10:23 armguy joined #salt
10:23 rdas_ joined #salt
10:23 linjan joined #salt
10:23 KermitTheFragger joined #salt
10:23 pviktori joined #salt
10:29 k_sze[work] joined #salt
10:32 amcorreia joined #salt
10:35 mrbigglesworth joined #salt
10:38 dimeshake joined #salt
10:38 sinenitore joined #salt
10:38 MohShami_ joined #salt
10:39 premera_c joined #salt
10:40 nobrak_ joined #salt
10:40 hunmaat joined #salt
10:41 mackstic1 joined #salt
10:42 ndrei joined #salt
10:42 nlb_ joined #salt
10:42 __alex joined #salt
10:43 khris_ joined #salt
10:44 Aikar joined #salt
10:44 rhand joined #salt
10:45 mens joined #salt
10:46 Heartsbane joined #salt
10:46 Heartsbane joined #salt
10:46 lloesche joined #salt
10:47 bhosmer joined #salt
10:48 dendazen joined #salt
10:50 giantlock joined #salt
10:50 stephanbuys joined #salt
10:54 ndrei joined #salt
10:55 markm__ joined #salt
10:56 s_kunk joined #salt
10:57 Katyucha froztbyte: OK... Sounds like there's no option to ignore the check
10:59 stephanbuys joined #salt
11:01 TheHelmsMan joined #salt
11:02 joeto joined #salt
11:03 c10 joined #salt
11:04 riftman joined #salt
11:05 MohShami hi guys, is there a way to include all pillar files in a folder? folder.* doesn't seem to work for me
11:13 ndrei joined #salt
11:16 babilen MohShami: That doesn't work, but you *might* like http://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.file_tree.html#module-salt.pillar.file_tree (I consider that particular approach to be exactly wrong as it conflates pillar storage/inclusion and targeting)
11:17 babilen Depending on what you actually want to achieve a different approach might be more appropriate .. can you provide more details? Why are you trying to include all files in a particular directory as opposed to listing them explicitly?
11:18 iwishiwerearobot joined #salt
11:18 MohShami thank you babilen, I'm using this for an automated deployment method for our websites
11:18 MohShami and I add site details using pillars
11:18 MohShami each website gets a file, and I keep forgetting to list them, so it's just a matter of convenience :)
11:23 babilen You could generate your top.sls from data in the pillar or even depending on files present on the master (cf. http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cp.html#salt.modules.cp.list_master )
11:26 babilen ~foo.bar.* would be quite useful though .. maybe you could write to the mailing list?
11:27 MohShami yeah sure
11:27 MohShami thanks a million mate
11:28 brandk joined #salt
11:30 giantlock joined #salt
11:32 donmichelangelo joined #salt
11:37 supersheep joined #salt
11:37 otter768 joined #salt
11:38 lothiraldan joined #salt
11:39 denys joined #salt
11:46 ThomasJ Hrm, anyone know why cmd.run via mine does not accept wildcards? ie cmd: cat /etc/ssh/ssh_host_*_key.pub
11:47 gmoro_ joined #salt
11:50 ThomasJ nm, found the answer
11:51 ThomasJ python_shell: True  required for 2015.5
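For reference, the mine entry ThomasJ describes might look like this on 2015.5, with python_shell enabled so the shell glob expands; the mine alias name is arbitrary:

    mine_functions:
      ssh_host_keys:
        mine_function: cmd.run
        cmd: 'cat /etc/ssh/ssh_host_*_key.pub'
        python_shell: True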
11:52 snkr joined #salt
11:55 VSpike Has 2015.5.1 actually been released officially?
11:56 VSpike http://docs.saltstack.com/en/latest/topics/releases/2015.5.1.html says "Release: TBA"
11:56 babilen It has been tagged and is on pypi so: "Yes"
11:56 babilen It has not been packaged and hasn't been announced, so: "No"
11:57 babilen (pick your poison)
11:57 VSpike :)
12:01 MohShami left #salt
12:03 VSpike ksj: sorry, missed your question. Did you get the mysql creds working?
12:04 VSpike ksj: I just have "mysql.user: 'root'" and "mysql.pass: 'sklfjoisjfoiwejr'" in my pillar at the top level
12:04 * babilen copies VSpike's MySQL data
12:04 VSpike Good luck :)
12:05 VSpike ksj: It's a bit more complicated because if you install mysql-server on a clean box with salt it has an empty root password.
12:06 babilen I use https://github.com/saltstack-formulas/mysql-formula
12:07 babilen (which allows you to set a root password by defining mysql:server:root_password
12:07 babilen )
12:08 VSpike Yeah, that might be a good idea too
12:08 VSpike Can't remember why I didn't use that TBH.
12:09 VSpike I have a job on my to do list to go back over what I've done and replace as much as possible with formulas
12:09 VSpike If I'm replicating existing servers I sometimes find it easier as a first step to do stuff manually rather than trying to figure out what a formula will do
12:10 VSpike https://bpaste.net/show/4dc71b0ef8a5 is what I did. I need to set up a jinja macro or something to remove the repetition (I have a few users) but it works
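The jinja macro VSpike mentions could look roughly like this, assuming the top-level mysql.user / mysql.pass pillar keys he described plus a hypothetical per-user password pillar:

    {% macro app_db_user(name, password, host='localhost') %}
    mysql_user_{{ name }}:
      mysql_user.present:
        - name: {{ name }}
        - host: {{ host }}
        - password: {{ password }}
        - connection_user: {{ pillar['mysql.user'] }}
        - connection_pass: {{ pillar['mysql.pass'] }}
    {% endmacro %}

    {{ app_db_user('appuser', salt['pillar.get']('mysql_users:appuser:password')) }}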
12:11 ponpanderer joined #salt
12:12 cruatta joined #salt
12:14 giantlock joined #salt
12:33 matthew-1arlette joined #salt
12:36 DammitJim joined #salt
12:38 arount Hi, I'm trying to work with Reactor, but I think there is a bug or I miss something, Here is some infos / scripts / config of my setup: https://gist.github.com/arnoutpierre/3c943eb1e5bb6abd9016
12:39 arount My master seems to have problems with executing commands on minions
12:41 DandyDev joined #salt
12:41 dendazen joined #salt
12:42 DandyDev Hi everyone
12:42 DandyDev I'm having quite a problem with the GitFS backend
12:43 DandyDev I was so stupid as to rebase & squash some commits in our salt repo, and now when I run highstate, it seems to be stuck in a merge conflict
12:43 DandyDev This is the output of `salt '*' highstate`: https://gist.github.com/DandyDev/f2c78715a92ab0a3c804
12:44 DandyDev '<<<<<<< HEAD'    this tells me there is some git stuff going on
12:44 DandyDev does anybody know how I can force GitFS to do a reset --hard or something?
12:44 AndreasLutro heh
12:45 DandyDev Also, feel free to laugh at my stupidity ;)
12:45 AndreasLutro I suppose those git files are located somewhere in /var/lib/salt
12:45 mage_ is it possible in a - require: to do an "OR" ?
12:46 DandyDev @AndreasLutro, sadly there doesn't seem to even be a /var/lib/salt dir on my salt master
12:47 mage_ something like - require: state1 OR state2 ..?
12:48 TranquilityBase joined #salt
12:49 AndreasLutro DandyDev: /var/cache/salt maybe?
12:50 scottpgallagher joined #salt
12:51 brandonk joined #salt
12:53 DandyDev AnreasLutro: I can't find a regular git working dir anywhere in there
12:54 PI-Lloyd DandyDev: /var/cache/salt - stop the salt master, clear that directory, restart the master
12:54 DandyDev @PI-Lloyd: Clear that directory on the master only?
12:54 PI-Lloyd yes
12:54 DandyDev will try that!
12:56 miksan joined #salt
12:56 DandyDev did that, and problem persists
12:56 subsignal joined #salt
12:56 emaninpa joined #salt
12:56 Kelsar shouldn't the salt service (manager) do a daemon-reload on systemd systems?
12:57 oravirt joined #salt
12:58 PI-Lloyd actually DandyDev: it looks like you've had a merge conflict, and not fully resolved it and pushed it up, hence the <<<< HEAD line
12:58 DandyDev I checked that in the remote (which is on Bitbucket), and it's fine there
13:00 stephanbuys joined #salt
13:00 PI-Lloyd did you try cloning the repo into a fresh directory on your local machine to double check?
13:00 jeremyr joined #salt
13:01 PI-Lloyd sometimes bitbucket gets a bit funny with crap like this
13:01 dyasny joined #salt
13:02 PI-Lloyd and to confirm you did "sudo rm -rf /var/cache/salt/*" while the master was in a stopped state?
13:02 DammitJim joined #salt
13:02 DandyDev yep and yep
13:03 DandyDev fresh clone shows nothing wrong, salt master was in stopped state when removing cache
13:04 CeBe joined #salt
13:05 DandyDev ok, something really really weird is going on.
13:05 DandyDev I looked into the master config (at least, I assume that /etc/salt/master is the master config), and I just found out the GitFS has been commented out by my colleague
13:06 PI-Lloyd lol
13:06 DandyDev it is now using the regular filesystem backend
13:06 DandyDev which points to /srv/salt
13:06 PI-Lloyd so it should be in /srv/salt
13:06 DandyDev which is indeed a git working dir
13:06 DandyDev with a conflict
13:07 PI-Lloyd so someone has been doing local changes on the server by the sounds of it... naughty naughty
13:08 DandyDev no, I think this conflict really is because of me squashing commits. The files that are in conflict are those I've added today
13:08 primechuck joined #salt
13:08 DandyDev the really eerie thing is: how can I have been happily updating my states all day :/
13:08 zerthimon joined #salt
13:09 PI-Lloyd does "git status" show any local changes?
13:09 mage_ mmh I don't understand very well the difference between prereq and require.. any enlightenment?
13:09 DandyDev yep it does
13:09 DandyDev exactly about the files that were involved in my squash
13:09 riftman joined #salt
13:09 PI-Lloyd the squash wouldn't show "local" changes
13:10 PI-Lloyd someone has been modifying files on the server, hence "local" changes
13:11 PI-Lloyd you have a couple of options here... 1. get shouty shouty and tell people to not make local changes, 2. nuke the directory and re-clone it, 3. "git stash && git pull"
13:11 DandyDev it actually can show local changes in that situation: I push a couple of commits to origin/master from my laptop, salt fetches happily, then I decide to squash them and --force push to origin/master. Salt tries to fetch again, and BOOM conflict because of rewritten history
13:11 PI-Lloyd 4. nuke it and re-enable gitfs
13:11 cpowell joined #salt
13:12 DandyDev What I did was git reset --hard origin/master, that fixed the working dir
13:12 DandyDev salt highstate works again
13:12 subsignal joined #salt
13:12 cpowell joined #salt
13:13 racooper joined #salt
13:15 PI-Lloyd :)
13:15 jdesilet joined #salt
13:17 timoguin joined #salt
13:17 bhosmer joined #salt
13:18 zer0def joined #salt
13:19 bhosmer_ joined #salt
13:20 DandyDev interestingly enough, when I make changes on my laptop, push to bitbucket, and run salt-run filesystem.update, the /srv/salt dir will update...
13:21 Tecnico1931 joined #salt
13:23 sunkist joined #salt
13:24 DandyDev It seems I must have a serious chat with my colleague who normally manages our Salt stuff and is now on holiday
13:24 DandyDev I found out the magic "update" functionality in a cron script that does a git pull every minute....
13:25 AndreasLutro genius!
13:25 PI-Lloyd ouch
13:25 nobrak joined #salt
13:25 PI-Lloyd so he's basically done a messy version of gitfs
13:25 PI-Lloyd bravo
13:28 DandyDev yep :/
13:29 PI-Lloyd https://s-media-cache-ak0.pinimg.com/736x/a5/72/c6/a572c6f73af47c780d034816bede5469.jpg  -- final thing i have to say on the subject
13:30 teebes joined #salt
13:31 DandyDev *agrees*
13:37 mpanetta joined #salt
13:38 otter768 joined #salt
13:39 Twiglet my head hurts
13:40 Tecnico1931 joined #salt
13:41 Arca joined #salt
13:42 Arca Hi there :)
13:42 Arca Anyone ever tried giving 2 id to a minion?
13:42 sroegner joined #salt
13:44 PI-Lloyd why would you want to?
13:44 Twiglet I heard about one guy that tried, but he got eaten by a dragon shortly after
13:44 scoates joined #salt
13:44 PI-Lloyd ^^
13:44 Twiglet (as in no, it's probably a bad idea)
13:45 hasues joined #salt
13:45 Jeanneret joined #salt
13:45 hasues left #salt
13:45 Arca Well, the id given at installation is something like server.host.fr
13:45 Jeanneret Hi I need to install a piece of software with pkg.install and I have the .msi on my computer with Windows; is it possible to run a .msi located on my computer?
13:45 Arca I'd like to leave this id, and add a shorter one
13:47 sroegner_ joined #salt
13:47 Twiglet if you want both i'd suggest setting one as a grain, but that's not very secure (as any other minion could use the same grain)
13:48 Tecnico1931 joined #salt
13:48 DammitJim joined #salt
13:52 codehotter Is there any way to run multiple runners in parallel?
13:52 PI-Lloyd both would already be available in the grains anyway, 'id' / 'host' / 'fqdn' / 'nodename'
13:52 PI-Lloyd so many options
13:53 babilen (by which you can already target minions)
13:53 Arca Yea i'm actually using jinja to define both id and host from grains to the minion id :) but the pair of ids doesn't seem usable
13:53 Arca I'll just rename the id in a shorter way :)
13:53 Arca Thanks
13:55 iwishiwerearobot joined #salt
13:56 _JZ_ joined #salt
13:58 drawsmcgraw joined #salt
14:00 bhosmer joined #salt
14:00 riftman joined #salt
14:01 peters-tx joined #salt
14:01 litwol iggy: Hello :)
14:02 SheetiS joined #salt
14:04 ksj hi, anyone having issues with the mariadb service module? I can't get it to restart the service with a requisite (watch) statement. I know it's not a syntax issue as the same statement restarts nginx fine.
14:04 andrew_v joined #salt
14:08 sunkist1 joined #salt
14:08 litwol Does salt have a "resident specialist" for some modules/states that can be spoken with to discuss an issue in more detail before posting it on https://github.com/saltstack/salt/issues
14:08 drawsmcgraw joined #salt
14:08 litwol I'd like to make sure i fully understand the "problem" that i'm going to report.
14:12 rojem joined #salt
14:15 sunkist joined #salt
14:17 debian112 joined #salt
14:18 codehotter Is the salt client api threadsafe? Like salt.cloud.CloudClient, can I call multiple client.action, client.create, etc. in threads?
14:20 sunkist1 joined #salt
14:25 sunkist joined #salt
14:30 sunkist1 joined #salt
14:35 sunkist joined #salt
14:35 babilen litwol: Just ask your question
14:36 babilen ksj: What's the exact issue?
14:37 arount babilen: can you help me ? Here is a gist explaining my problem with Reactor: https://gist.github.com/arnoutpierre/3c943eb1e5bb6abd9016
14:38 DammitJim joined #salt
14:40 litwol babilen: i am struggling with how to phrase the issue appropriately to make it sound meaningful instead of "complain-y"
14:40 litwol the issue is this:
14:40 litwol babilen: https://github.com/saltstack/salt/blob/develop/salt/modules/ssh.py#L223
14:40 litwol babilen: all fingerprints are encoded to base64
14:40 sunkist1 joined #salt
14:40 litwol babilen: while this is "not wrong", it causes a HUGE problem.
14:40 babilen namely?
14:41 litwol babilen: every search result online says use "ssh-keygen -l -f ..." to find key fingerprint.
14:41 litwol and i bet that is what ppl, salt users, are using to get fingerprint to add into ssh_known_hosts.present.
14:41 timoguin arount: anything in the logs?
14:41 litwol unfortunately the two formats are different.
14:42 litwol ssh.py _fingerprint generates format xx:xx:xx:xx...
14:42 jalbretsen joined #salt
14:42 litwol while new fingerprint by ssh-keygen is [hash]:[value], such as SHA256:dhfjoasdfjoaijdfoi....
14:42 litwol now. there's omre.
14:42 arount timoguin: no, nothing in /var/log/salt/master and /var/log/salt/minion on the master
14:43 arount same in minion
14:43 ksj babilen: oh, sorry, only just saw this. I've opened an issue here: https://github.com/saltstack/salt/issues/24298
14:43 timoguin arount: try running master/minion in the foreground with -l debug to see if that yields anything
14:43 timoguin also try running the eventlisten script to make sure the event is happening
14:43 litwol you COULD use ssh module ssh.recv_known_host to fetch key with fingerprint that /salt/ understands.. and i can use that in my state/pillar
14:44 babilen ksj: Is that on jessie?
14:44 timoguin arount: This is the event you are reacting to though: salt/minion/*/start
14:44 timoguin But 'start' is what you are firing from the CLI
14:44 ksj babilen: yeah
14:44 arount timoguin: I can see master recieving event if I do salt-run state.event
14:45 arount timoguin: so the error seems to happen when the master has to execute a command on the minion
14:45 babilen ksj: Are those official mariadb packages from Debian?
14:45 litwol BUT even that doesn't work. because when ssh_known_hosts.present, which uses ssh.* module functions, retrieves key it ends up with values that are different (ie one is xx:xx:xx.. other is SHA256:....) and no matter what format you specify in your pillar for ssh_known_hosts.present you end up with non-validating values when state is run.
14:45 arount timoguin: I check your clues
14:45 ksj babilen: yeah
14:45 jeddi joined #salt
14:45 sunkist joined #salt
14:46 babilen ksj: Okay, I guess that you are right in that this is somewhat related to systemd and the way salt checks/restarts services. Does service.restart work for it?
14:46 litwol babilen: that is the jumble in my head about the problem. i need help organizing it into something easier to grok and act on .
14:48 arount timoguin: here are the logs when I fire the event from the minion: https://gist.github.com/arnoutpierre/3c943eb1e5bb6abd9016#file-logs-master
14:48 ksj babilen: you mean running the execution module? 'salt \*3 service.restart mysql' works fine
14:48 sandah joined #salt
14:48 ksj restarts the service
14:49 babilen litwol: I am still unsure what the problem is. I am using ssh_known_hosts.present just fine with a "xx:yy:zz..." style fingerprint
14:49 litwol babilen: short version is "new ssh key fingerprint format is [hash]:[fingerprint], which is incompatible with the old xx:xx:xx:xx..., salt forces format check of xx:xx:xx..."
14:49 timoguin arount: this is the event tag you are reacting to on the master: salt/minion/*/start. but 'start' is the event tag you are actually sending
14:49 babilen ksj: Not sure then
14:49 litwol babilen: have you updated ssh ?
14:49 babilen litwol: Updated it to what?
14:49 ksj like I said in the github issue, if 'enabled' is set to false, it all works fine. it's definitely a bug somewhere. I'm just not sure where
14:50 litwol babilen: i am using openssh 6.8.x
14:50 favadi joined #salt
14:50 litwol babilen: here's key format i get from ssh-keygen -l -f : 2048 SHA256:CNW7iXcWF7RUqcYKh7DQHeXgwLDNjKZFuNvyqdmaK3A root@gentoo-base (RSA)
14:50 litwol key fingerprint format*
14:50 babilen litwol: Boxes in question are on 6.0p1
14:50 sunkist1 joined #salt
14:50 arount timoguin: Hoooo damn .. I'm .. well .. fuck me
14:50 arount timoguin: thx
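The mismatch timoguin points out, in config form: a reactor keyed on salt/minion/*/start only fires for events carrying that tag (which minions emit automatically on start), so a manual test event has to use the same tag shape. The sls path below is an assumption:

    # /etc/salt/master
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/start.sls

    # manual test from a minion, matching the tag the reactor listens for:
    salt-call event.send 'salt/minion/myminion/start'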
14:50 Brew joined #salt
14:51 litwol babilen: yes. old format is still xx:xx:xx... once you update openssh salt will start to break.
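The two formats litwol is contrasting, side by side; on OpenSSH 6.8+ the old colon-separated hex form can still be requested with -E md5 (the -E option is itself a 6.8 addition):

    # OpenSSH >= 6.8 default (base64-encoded SHA256):
    $ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub
    2048 SHA256:CNW7iXcWF7RUqcYKh7DQHeXgwLDNjKZFuNvyqdmaK3A root@gentoo-base (RSA)

    # old colon-separated MD5 hex, the format salt's ssh module still produces:
    $ ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_rsa_key.pub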
14:51 babilen litwol: So this issue is specific to systems that use a more recent openssh version (e.g. >= 6.8) ?
14:51 litwol yes. i am looking up openssh release notes.
14:52 litwol babilen: http://www.openssh.com/txt/release-6.8
14:52 babilen litwol: Is there a way to support the old and the new version at the same time?
14:52 ndrei joined #salt
14:52 litwol babilen: 2nd "*" under new features.
14:52 litwol babilen: yes. one moment i'll link you to report from gitolite which solved this before.
14:52 iwishiwerearobot joined #salt
14:52 dr4Ke joined #salt
14:53 litwol babilen: https://groups.google.com/forum/#!topic/gitolite/rkMMk8Xz3yI
14:53 litwol babilen: patch attached.
14:53 litwol babilen: it basically checks for both fingerprint "patterns" now
14:53 dr4Ke Hi everyone, what's the best way to discuss a 'state' behavior, existing or proposed? IRC, mailing list, github issue?
14:54 litwol babilen: i cannot comment on whether this is "correct" or "safe". but i am using it on my system successfuly
14:54 babilen litwol: Well, that SSH version was released quite recently. I guess it is perfectly fine to file a bug with the description of the problem (new OpenSSH versions do $NEW_FINGERPRINT which breaks ssh_known_hosts.present because $REASONS. A better way to implement $THE_THING is to follow the instructions in $LINK)
14:54 ALLmightySPIFF joined #salt
14:54 p66kumar joined #salt
14:55 babilen litwol: I'm curious: Which distribution do you use that has 6.8 in a production release already
14:55 litwol babilen: frankly.. i can't think "why" salt (and gitolite) are checking fingerprint format. why not compare fingerprint values verbatim ?
14:55 litwol babilen: keyworded gentoo :)
14:55 litwol keyworded == "give me experimental stuff"
14:55 litwol i've lived on non-stable systems for more than 6 years
14:55 babilen ah, well .. I wouldn't do anything like that in production but i'm old and boring ;)
14:56 litwol with config management there's little reason to be running old systems
14:56 litwol or as they are called "supported" system
14:56 litwol "what? your cluster crushed? no problem! decrement version number to old known stable"
14:56 babilen Either way, comparing fingerprints verbatim might be a better solution. Just recommend that if you think it is the way forward as the current implementation might just simply be a historical relic
14:57 litwol i cannot recommend that in good conscience. i do not understand the reason behind it
14:58 babilen Well, I think it is perfectly fine to write a bug report such as the one I outlined earlier
14:58 ALLmightySPIFF joined #salt
15:00 Mustafa__ joined #salt
15:00 schuckles joined #salt
15:01 gladiatr joined #salt
15:01 TooLmaN joined #salt
15:03 TooLmaN Hi Guys.  I have installed salt-minion on a raspberry pi (raspbian).  It installed fine and has no errors yet it is not seen by the salt-master (ubuntu server).  Any suggestions?  Thanks in advance.
15:04 schuckles joined #salt
15:04 arount TooLmaN: have you checked hosts and things like that on your minion ?
15:06 TooLmaN arount, I have added the 'master: xxx.xxx.xxx.xxx' line to my minion config.  I can ping back and forth just fine.  I even tried setting the user to 'pi' instead of root
15:06 giantlock joined #salt
15:07 Mustafa__ Hey Salt group, could somebody take a look at the question I have on Stackoverflow: http://stackoverflow.com/questions/30514301/problems-with-basic-usage-of-saltstack-apache-formula I think I'm just missing something fundamental but I just can't seem to find it.
15:07 TooLmaN The master is also called salt just in case and I added a DNS record to my DNS server just in case.  I can ping the salt master by name from the minion as well
15:08 Mustafa__ TooLmaN did you do a list-key
15:08 TooLmaN Is there a log I can view on the minion to see if it has found the master?
15:08 Mustafa__ from the master
15:08 TooLmaN Mustafa__, yes, 'salt-key --list-all' shows nothing
15:09 sgargan joined #salt
15:09 arount TooLmaN: Ok, maybe trying to access the master via the 'salt' host (just remove your master: xxx.. line in the config and add 'xxxx.xxxx.xxxx.xxxx master' to /etc/hosts) can help us find where the error is
15:09 TooLmaN Okay, standby
15:11 litwol babilen: please review/comment https://github.com/saltstack/salt/issues/24299
15:13 riftman joined #salt
15:14 TooLmaN arount, okay, I have 2 RPIs running salt-minion now.  One with a master: IP entry in the config and one without.  The one without has a hosts entry to the master.  I can see the one with the master: IP line but not the other one.
15:14 TooLmaN What is the polling time of the salt master to find new minions?
15:16 rm_jorge joined #salt
15:16 arount TooLmaN: Don't know, each time I tried it it was immediate
15:16 LtLefse I don't think there is one? minions connect out to the master
15:17 arount TooLmaN: So the minion configured with /etc/hosts (<IP> salt) works with the master, but the one configured in /etc/salt/minion (master: <IP>) does not ?
15:18 TooLmaN arount, No, just the opposite.
15:18 arount K :)
15:18 TooLmaN Then again, running a test.ping, the one that was accepted is not responding
15:18 khris joined #salt
15:20 arount TooLmaN: Strange .. Maybe trying to run salt in foreground will help you to view errors .. (service salt stop && salt -l debug)
15:21 TooLmaN salt -l debug dumped the help page at me
15:22 TooLmaN says I need a target
15:22 arount sorry, salt-master -l debug
15:22 arount no the master node ..
15:22 arount on*
15:23 litwol babilen: is the issue i created "meaningful"? easy to understand/act/etc ?
15:23 ksj if I'm using clean: true in a file.recurse state, but I also have another file I want in that directory to be managed separately (because it needs to be templated), salt is deleting the file, then immediately restoring it. Is there any way around this (apart from templating the entire recursed directory)?
15:23 TooLmaN arount, It's running in debug now; I'll play with it.
15:24 TooLmaN Is it common to run salt-minion on the salt-master?  The bootstrap file installed both on the server
15:24 arount play with it
15:24 arount TL:
15:24 arount TooLmaN: don't know
15:24 ek6 joined #salt
15:25 TooLmaN alrighty.  I'll play with it.  This isn't a production setup just yet so I have time to break stuff.  I run a series of RPI units as display kiosks.  Automating their deployment and updates would be awesome
15:26 TooLmaN I've played with Chef, Puppet, and Ansible.  I think I like Salt better.
15:27 LtLefse TooLmaN: sure, I run salt-minion on the master
15:27 TooLmaN LtLefse, It does make sense, but I was just curious.  I assumed it was standard practice since the bootstrap file did it.
15:28 ksj what I mean is this: http://dpaste.com/2DVJ83B.txt  . salt will delete the file "myfile" every time, then immediately restore it, and report 2 changes
15:29 PI-Lloyd TooLmaN: afaik, the bootstrap will install the minion, you pass extra flags to it to install the other packages like master etc
15:30 TooLmaN Can you rename the Key list on the Master?  Or remove and readd keys to the same servers?  I have a couple of servers that pulled in their domain name due to an old network config file.  I corrected it and I want to readd them to the master with their new shorter name.
15:30 TooLmaN PI-Lloyd, Thanks
15:31 whytewolf ksj have the recurse require the other file.
15:31 PI-Lloyd TooLmaN: the key names will need to match the minion_id, so as long as you change both it should in theory work
15:31 iggy ksj: file.recurse supports template:
15:31 ksj whytewolf: ahh ok, thanks, I'll give that a go. less messy than doing an exclude_pat
15:31 TooLmaN PI-Lloyd, Ok, I'll stay mindful of that.  Thanks again
15:32 schuckles1 joined #salt
15:32 ksj iggy: I know, but it's a BIG directory, and I don't want the overhead of passing all those files through jinja
15:34 PI-Lloyd ksj, if it's that big, could storing the files in git and using git.latest be an option?
15:34 ksj whytewolf: unfortunately adding the file as require doesn't change the behaviour. I guess I'll have to go with exclude_pat
15:35 iggy do you have a legitimate reason to be using clean: True?
15:35 whytewolf okay, wasn't sure about that, was just going on the wording in the docs
15:35 ksj PI-Lloyd: how would that help, in terms of having highstate not report changes?
15:35 ksj iggy: I'm OCD?
15:35 PI-Lloyd well it would only report a change if it actually pulls any changes
15:35 ksj is that legitmate?
15:36 iggy no
15:36 PI-Lloyd lol
15:36 iggy take your meds and remove the clean: True
15:37 ksj bah. fine. I guess that's the simplest solution, but I feel dirty and probably won't be able to sleep tonight
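For completeness, the exclude_pat variant ksj considered would look roughly like this; whether the clean pass honors exclude_pat on his version is an assumption worth testing, and the paths are placeholders:

    /etc/myapp:
      file.recurse:
        - source: salt://myapp/files
        - clean: True
        - exclude_pat: myfile

    /etc/myapp/myfile:
      file.managed:
        - source: salt://myapp/files/myfile
        - template: jinja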
15:37 TooLmaN The salt-minion service on my Raspberry Pi keeps stopping.  I can restart it and it works for a few minutes then stops again.  Where would this log be?
15:37 whytewolf TooLmaN: /var/log/salt/
15:38 whytewolf oh wait i don't know on a pi
15:38 quasiben joined #salt
15:38 PI-Lloyd try running the salt-minion manually, it should output what it's doing
15:39 TooLmaN whytewolf, There is a minion log there.  Nothing very eventful.  I'll run it manually, PI-Lloyd
15:39 otter768 joined #salt
15:39 iggy check syslogs/dmesg? maybe it's getting oom'ed
15:40 TooLmaN Got a 'KeyError: 'token'
15:42 Mustafa__ Another request for anybody with a few minutes to take a look at my problem on stackoverflow http://stackoverflow.com/questions/30514301/problems-with-basic-usage-of-saltstack-apache-formula
15:43 spiette joined #salt
15:44 DandyDev joined #salt
15:44 iggy Mustafa__: you can't put include: in a top file
15:45 DandyDev joined #salt
15:45 iggy use "salt-call cp.list_master | grep apache" to check that your formulas are getting pulled
15:45 Mustafa__ iggy ok  will do
15:45 TooLmaN http://pastebin.com/pSztPVzm  - Error when running salt-minion manually on an RPI
15:45 iggy don't use the upstream saltstack-formulas, they break and we still have problems with people committing directly to master
15:45 dalexander joined #salt
15:46 TooLmaN I ran salt-minion as root, not as pi with sudo
15:47 TooLmaN same error either way as expected
15:47 arount TooLmaN: both versions of salt are the same ? (reference: https://groups.google.com/forum/#!topic/salt-users/DgiE-PxsB3o)
15:48 arount TooLmaN: http://salt-users.narkive.com/8j86BYqO/raspberry-pi-installation
15:49 arount TooLmaN: look like a version issue or something like that
15:51 conan_the_destro joined #salt
15:52 sgargan joined #salt
15:53 TooLmaN arount, I'll dig through that and see.
15:53 TooLmaN Thanks again.
15:53 TooLmaN Slipping out for a lunch meeting.  Thanks for the help, guys.
15:53 iggy yeah, I really wish we could have the ancient version of salt that someone put together for rpi removed completely
15:54 iggy it's useless
15:54 TooLmaN iggy, is it an issue with armhf?
15:54 arount TooLmaN: good luck
15:54 TooLmaN Should I recompile my own from source?
15:54 iggy not really sure, but everybody that tries salt on a rpi ends up with 0.15 or something... which is 3+ years old
15:54 TooLmaN Is salt the right tool to automate my array of RPI kiosks?
15:55 TheHelmsMan joined #salt
15:55 iggy it can be, but not with the version that's in whatever kind of repo people seem to always use
15:55 iggy it really depends though
15:55 iggy are your kiosks going to have an always on connection?
15:56 iggy the minions keep an open connection to the master
15:56 TooLmaN Yeah, they will always be on.  They are showing an internal website
15:56 iggy if they only have an intermittent connection, something else might be better
15:57 bhosmer joined #salt
15:57 TooLmaN I keep a cross-compiler vagrant for my custom RPI stuff.  I've been slowly integrating them in my company for a couple of years now.
15:57 TooLmaN I have them all hardwired.  No wireless setups
15:58 Auroch joined #salt
15:58 TooLmaN I'll play with it more in a bit.  Gotta go play 'nice IT guy' in a lunch meeting.  :)  Later guys and thanks
16:01 writtenoff joined #salt
16:01 Mustafa__ iggy: I'm seeing the apache files for the server but when I run the highstate  I just get a "data failed to compile, No matching sls found for 'apache' in env 'stage'" Is there something I have to do to get the gitfs stuff in the different environments?
16:01 iggy oh
16:01 iggy right... yeah, each git tree has to have branches matching your environments
16:02 iggy (another reason not to use the upstream git trees)
16:02 imanc what's the canonical way to set a password for a user in salt?  Is there a way of setting the password without having to use a hash?  Or is there a way of generating the hash for the user, on that server, and then setting it via user.present?
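A hedged user.present sketch for the hash approach imanc describes — the username and hash are placeholders, and the hash would be generated beforehand with something like mkpasswd -m sha-512 on a compatible system:

    alice:
      user.present:
        - fullname: Alice Example
        - password: '$6$examplesalt$examplehash'   # placeholder SHA-512 crypt hash
        - enforce_password: True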
16:02 Mustafa__ Ok I see, so if my environment is stage I need a branch named stage
16:03 iggy Mustafa__: correct, in all of the git repos
16:03 iggy there's a bug about why things don't just use base/master
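A sketch of the master-side gitfs configuration being described, with a placeholder URL — each branch in every listed repo maps to a Salt environment of the same name (master maps to base by default), so a stage environment needs a stage branch everywhere:

    # /etc/salt/master
    fileserver_backend:
      - git
    gitfs_remotes:
      - https://github.com/example/apache-formula.git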
16:04 bhosmer joined #salt
16:04 Berty_ joined #salt
16:06 VR-Jack joined #salt
16:10 cruatta joined #salt
16:12 matt_pingman joined #salt
16:13 bhosmer joined #salt
16:17 tkharju joined #salt
16:17 babilen litwol: looks good
16:18 tiadobatima joined #salt
16:18 arount joined #salt
16:19 martineg_ joined #salt
16:20 schuckles joined #salt
16:21 slav0nic can i write http://dpaste.com/3DFCKXC more concisely?
16:22 iggy you can't reference pillars in pillars
16:24 slav0nic iggy, no way?
16:25 cedwards joined #salt
16:25 quasiben joined #salt
16:27 iggy not reliably (and definitely not like that)
16:30 smcquay joined #salt
16:32 baweaver joined #salt
16:32 desposo joined #salt
16:33 baweaver joined #salt
16:35 spookah joined #salt
16:35 riftman joined #salt
16:35 favadi joined #salt
16:36 troyready joined #salt
16:36 joeto1 joined #salt
16:37 Grokzen joined #salt
16:39 KyleG joined #salt
16:39 KyleG joined #salt
16:41 aparsons joined #salt
16:45 amcorreia joined #salt
16:46 ange are job postings accepted on the mailing list?
16:47 iggy ange: I was told yes one time, so long as they are actually related to SaltStack and not just some random admin job or something
16:47 ange iggy: thanks
16:48 ange yep, looking for a saltstack person
16:50 quasiben joined #salt
16:50 baweaver joined #salt
16:56 cruatta joined #salt
16:56 saffe joined #salt
16:57 cruatta joined #salt
16:58 cruatta joined #salt
17:01 ksj is there a way to tell the rabbitmq state module that my rabbitmq instance is listening on a different port than the default
17:01 ksj ?
17:02 iggy pillar config?
17:03 iggy doesn't look like it
17:03 iggy it just calls out to rabbitmqctl with no options
17:03 iggy does that util support a config file that you could set it in?
17:03 whytewolf humm the todo item for modules.rabbitmq lists 'minion configuration' as an item
17:05 ksj iggy: I'm looking at that now
17:05 ksj looks like the only way to do it
17:05 iggy it looks like rabbitmqctl reads an ENV variable
17:06 iggy which you won't be able to utilize because the module uses cmd.run python_shell=False so it won't load any env variables
17:07 baweaver joined #salt
17:07 yaryarrr joined #salt
17:07 chutzpah joined #salt
17:08 stanchan joined #salt
17:08 ksj yeah....and unfortunately the way my company's implemented rabbitmq is to have multiple instances running on the same machine and listening on different ports....this is going to be painful
17:08 KyleG oh my
17:08 iggy I don't even see how to tell it to use alternate ports
17:09 iggy unless it supports the port as part of the node to connect to
17:09 KyleG Why would you run multiple rabbit instances instead of just doling out unique usernames/passwords/vhosts/queues
17:09 KyleG o_O
17:10 forrest joined #salt
17:10 yaryarrr i'm trying to use the influxdb_database state, here is my configuration and the error i'm getting. any help would be very much appreciated! https://gist.github.com/yaryarrr/28dd9e0423c8429268b9
17:10 * iggy would guess remnant of running with a different MQ server before
17:10 iggy yaryarrr: what version of influxdb?
17:10 yaryarrr Version     : 0.8.8
17:11 iggy and version of salt?
17:11 yaryarrr 2015.5.0
17:11 ksj KyleG: yeah....I don't know anything about rabbitmq, but that was my first thought too. I guess because we have different sites in different countries and they want to keep them separate
17:12 ksj but we keep all dbs in the same mysql instance....so I don't think that argument really works
17:12 ksj I think the answer is "because no-one really knows how rabbitmq works"
17:12 hal58th_ joined #salt
17:12 hal58th__ joined #salt
17:12 KyleG lol
17:12 hal58th_1 joined #salt
17:12 Nazzy joined #salt
17:12 KyleG Sounds like a management nightmare
17:14 iggy yaryarrr: apparently not many people use that state... it's trying to call functions in the influxdb module... but the module is actually called influx
17:15 iggy ahh, nvm, it's using a virtual name
17:15 iggy yaryarrr: do you have the influxdb python module installed on the minions?
17:16 yaryarrr i never installed it, is that installed via pip?
17:17 iggy yaryarrr: yeah or your pkg manager
17:18 Gareth morning morning
17:18 iggy yaryarrr: https://github.com/saltstack/salt/issues/24301
17:18 quasiben joined #salt
17:18 yaryarrr installed it via pip, now get different error: https://gist.github.com/yaryarrr/dbc423a5e772c12f89ba
17:20 iggy yaryarrr: what version of influxdb did it install?
17:20 iggy the python module
17:20 spookah joined #salt
17:22 iggy the influxdb module requires python 2.7+ (you're on 2.6)
17:23 impi joined #salt
17:25 iggy and 0.8.x requires using special classes
17:25 iggy that's a mess
17:25 fusionx86 joined #salt
17:27 iggy babilen: you use influxdb right?
17:27 theologian joined #salt
17:29 iggy yaryarrr: I'd maybe try installing an older version of that python module
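A rough sketch of that combination — pinning an older influxdb Python client with pip and then using the influxdb_database state; the version pin, connection details and argument names are assumptions, not something verified against InfluxDB 0.8.x:

    influxdb-client:
      pip.installed:
        - name: 'influxdb<1.0.0'        # hypothetical pin for an older client

    graphite_db:
      influxdb_database.present:
        - name: graphite
        - user: root
        - password: root
        - host: localhost
        - port: 8086
        - require:
          - pip: influxdb-client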
17:31 schuckles joined #salt
17:32 quasiben joined #salt
17:33 pix9_ joined #salt
17:33 denys joined #salt
17:35 saffe joined #salt
17:35 saffe joined #salt
17:38 baweaver joined #salt
17:40 sixninetynine joined #salt
17:40 otter768 joined #salt
17:40 sixninetynine hey, anyone about who can talk to me about salt's setup.py ?
17:41 sixninetynine namely, the _property_entry_points property
17:41 quasiben left #salt
17:41 BigAl joined #salt
17:42 conan_the_destro joined #salt
17:43 cruatta_ joined #salt
17:43 perrigo joined #salt
17:43 ipmb joined #salt
17:44 yaryarrr Name: influxdb
17:44 yaryarrr Version: 2.3.0
17:44 yaryarrr Location: /usr/lib/python2.6/site-packages
17:44 yaryarrr Requires: requests, six
17:44 yaryarrr man, that is kinda a mess
17:45 pix9_ joined #salt
17:45 pix9_ hello friends I have one doubt
17:45 perrigo joined #salt
17:46 pix9_ when we register a minion with the master, it sends its key to the master and the master accepts it.
17:46 pix9_ in the same way, how does the minion authenticate the validity of the master?
17:48 pix9_ in simple words: if I try to fake the master and send some instructions to a minion, how will the minion know it's not getting instructions from a fake master?
17:48 hal58th joined #salt
17:48 hal58th_2 joined #salt
17:49 hal58th_3 joined #salt
17:50 asoc I think it stores a copy of the masters public key
17:50 asoc That is what I seem to remember it explaining when I was switching masters around last week
17:50 pix9_ hmm
17:51 pix9_ do you happen to know where does master/minions store keys?
17:51 asoc conf/pki/minion/minion_master.pub
17:51 pix9_ thanks
17:51 asoc from wherever your minion install directory is
17:52 tiadobatima joined #salt
17:53 rap424 joined #salt
17:53 toofer joined #salt
17:53 ajw0100 joined #salt
17:55 pix9_ thank you asoc I found it.
17:55 TooLmaN iggy, Salt on RPI update:  Following the Debian installation instructions seems to be working.  http://docs.saltstack.com/en/latest/topics/installation/debian.html
17:55 rojem joined #salt
17:57 TooLmaN So to recap, the Debian packages from the SaltStack repo work well on RPI (armhf).
17:57 baweaver joined #salt
17:58 relopezz joined #salt
18:00 jdesilet joined #salt
18:01 asoc TooLmaN:  Cool. I was going to get mine running one of these days (life permitting) so that is good to know.
18:01 schuckles joined #salt
18:02 TooLmaN asoc, well... it timed out a couple of times before it responded.  I'll troubleshoot that
18:02 impi joined #salt
18:03 schuckles joined #salt
18:04 relopezz Hi, I can't find any information about boto_rds in the documentation anymore... Isn't it available?
18:04 rap424 joined #salt
18:07 p66kumar joined #salt
18:08 TooLmaN joined #salt
18:09 CeBe joined #salt
18:11 whytewolf relopezz: the code lists boto_rds as coming in Beryllium.
18:12 evilrob so I've got a file.managed state.  If that file is updated, how do I make it run a command?
18:14 vieira_ joined #salt
18:14 evilrob add a "- watch: -cmd: file.managed" to the state def that runs the command?
18:14 vieira_ I'm a beginner and I'm wondering what the best practices are regarding grains
18:15 iggy evilrob: - watch:\n  - file: file-managed-id
18:15 vieira_ how do you organize the info? do you manually write the grains by ssh'ing to each server?
18:15 solidsnack joined #salt
18:15 iggy vieira_: sometimes (you can have salt-cloud do it)
18:15 vieira_ and creating the file and content, or do you manage the file using salt itself?
18:16 iggy vieira_: you can also write grains to pull info from other places (we have a custom grain that pulls GCE metadata and puts it into grains)
18:16 evilrob iggy:  thanks.  I think I'd have gotten there through some syntax fumbling.   This is in a cmd.wait: right?
18:16 iggy vieira_: Salt is very much a "do what works best for you" kind of tool
18:17 evilrob if so, I'm looking at the right spot in the manual.  I'll RTFM from here.
18:17 iggy evilrob: yeah, if it's a cmd.* you need to run
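A small sketch of that pattern — cmd.wait watching the file.managed state so the command only fires when the file actually changes; the paths and command are placeholders:

    /etc/myapp/myapp.conf:
      file.managed:
        - source: salt://myapp/myapp.conf

    restart-myapp:
      cmd.wait:
        - name: service myapp restart     # placeholder command
        - watch:
          - file: /etc/myapp/myapp.conf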
18:17 vieira_ iggy: for instance, roles
18:17 vieira_ mapping roles to hosts
18:17 vieira_ I would like to have that information centralized somewhere
18:17 vieira_ so I don't need to go to each server and manage it there
18:18 iggy there are things like reclass that can help with that
18:18 vieira_ but does it make sense for the grains file to be managed?
18:18 pix9_ hmm
18:18 iggy but some sort of ext_pillar and using pillars to lookup roles is probably a better idea (than using grains)
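One common shape for the pillar-based role lookup iggy is suggesting, with placeholder targets and role names — roles live in pillar files and the pillar top file assigns them to minions:

    # /srv/pillar/top.sls
    base:
      'web*':
        - roles.webserver

    # /srv/pillar/roles/webserver.sls
    roles:
      - webserver

States can then branch on salt['pillar.get']('roles', []) instead of grains.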
18:19 conan_the_destro joined #salt
18:20 pix9_ I have one doubt: "how do I differentiate grains from pillars?"
18:20 vieira_ iggy: hmmm
18:20 warthog42 joined #salt
18:21 sgargan joined #salt
18:23 monkey66 joined #salt
18:23 vieira_ I am reading the ext_pillar docs
18:23 warthog42 hello all, I have an issue where an execution module runs fine under salt-call but fails running from the master to the same minion.  This is for some freeipa work I'm putting together.  It comes down to the ipalib call bombing out under the minion with an error on the NSS db.  Here is a gist with more info.  If anybody has any ideas what might be different between salt-call and running via the minion I'd be very grateful!  https://gist.github.com
18:23 vieira_ but it is not helping me much :(
18:23 lexter joined #salt
18:24 iggy warthog42: is it the upstream module or something you've written?
18:24 warthog42 something I'm working on
18:25 iggy can you paste the code?
18:25 warthog42 the module is in the gist I linked.  can you see the url?
18:25 vieira_ warthog42: could it be that some env vars are different?
18:25 warthog42 https://gist.github.com/warthog/c540edc8449d0578bb33
18:25 iggy warthog42: I guess it got cut off
18:26 warthog42 I've been digging through every env var I can find and not seeing much.  I've also asked the freeipa folks and they are not coming up with much.
18:27 vieira_ what salt version? 2015.2?
18:27 vieira_ *2015.5
18:27 iggy warthog42: what happens if you stop the minion service, then start it in the terminal with -l debug?
18:27 warthog42 2014.7.5 to match our production level right now
18:28 vieira_ args='keyctl' 'search' '@s' 'user' 'ipa_session_cookie:admin@TESTDOM.LOCAL'
18:28 warthog42 when I do that I get nothing back; the salt-minion output throws this:  salt-minion: error: no such option: -l
18:29 iggy -l works for every salt command
18:29 bin_005 joined #salt
18:29 vieira_ salt-call is running as the same user as salt-minion?
18:29 joeto joined #salt
18:30 pix9_ thanks everyone, think I've found some useful definition on http://stackoverflow.com/questions/13115700/salt-stack-grains-vs-pillars
18:30 linjan joined #salt
18:30 warthog42 added the console output for salt-minion -l debug so you can see what I'm talking about
18:30 pix9_ ok see you guys, thank you for all your answers and guidelines. Good Night.
18:31 SheetiS joined #salt
18:32 iggy warthog42: errmmm salt-minion doesn't take any commands
18:32 iggy which salt-minion
18:32 iggy head `which salt-minion`
18:32 adelcast left #salt
18:33 sgargan joined #salt
18:33 retr0h joined #salt
18:33 warthog42 added to the gist
18:34 warthog42 it was bootstrap provisioned via the vagrant salt provisioner
18:35 monkey66 left #salt
18:35 murrdoc joined #salt
18:37 baweaver joined #salt
18:37 warthog42 vieira_: yes sorry just saw your question, both running as root
18:39 vieira_ keyctl search @s user ipa_session_cookie:admin@TESTDOM.LOCAL
18:39 vieira_ does it give the expected output if you run it in the terminal?
18:41 vieira_ and then also test env -i keyctl search @s user ipa_session_cookie:admin@TESTDOM.LOCAL
18:41 warthog42 there is output for that in the gist showing that the first call to keyctl fails (expected), then it tries to talk to the freeipa server, which is where the minion run fails with the nss db error, while the salt-call run succeeds and then creates the key session (that is for talking to the freeipa server)
18:41 schuckles joined #salt
18:41 iggy weird, I can run salt-minion just fine on the command line with just -l debug
18:42 warthog42 so, keyctl -> not there -> create session with freeipa server -> keyctl to cache the session -> do stuff -> remove keyctl key is how it works on a good run
18:43 warthog42 but the minion does keyctl -> failed -> talk to freeipa server -> choke on nssdb files
18:43 tomh- joined #salt
18:43 vieira_ I am looking to salt call output
18:43 vieira_ I only see one call to keyctl
18:43 warthog42 let me make sure I didn't goof a cut and paste :)
18:44 vieira_ sorry what I meant was
18:44 vieira_ the first I see
18:44 vieira_ succeeds
18:44 vieira_ while the one with salt-minion fails
18:45 vieira_ both after a klist
18:45 c10 joined #salt
18:47 solidsnack joined #salt
18:47 vieira_ warthog42: am I looking at it right?
18:49 bin_005_u joined #salt
18:49 daemonkeeper joined #salt
18:49 warthog42 vieira_ no I think you are on to something.  I think I missed that first call working under salt-call to keyctl
18:52 warthog42 well dang, that is the difference.  I feel dumb for missing that, but thank you for pointing that out.
18:52 baweaver joined #salt
18:53 warthog42 doesn't run via cmd.run but runs local.  ok, I guess that gives me more to go on.
18:53 vieira_ :)
18:53 warthog42 thanks for looking at it
18:53 warthog42 :)
18:55 murrdoc joined #salt
18:56 ageorgop joined #salt
18:57 kj1541 joined #salt
18:57 perrigo left #salt
18:58 hybridpollo joined #salt
19:03 c10 joined #salt
19:05 tmclaugh[work] joined #salt
19:08 arount joined #salt
19:08 ajw0100 joined #salt
19:10 j-saturne joined #salt
19:10 tmclaugh[work] joined #salt
19:10 sgargan joined #salt
19:13 c10 joined #salt
19:13 yaryarrr joined #salt
19:14 tmclaugh[work] joined #salt
19:19 ajw0100 joined #salt
19:20 viq joined #salt
19:21 murrdoc joined #salt
19:21 bhosmer joined #salt
19:22 katyucha joined #salt
19:22 katyucha Hi again
19:22 katyucha is someone using libvirt with salt? I'm looking at how to clone a VM... I found how to init one but not how to clone :/
19:25 jrdnr joined #salt
19:26 tr_h joined #salt
19:27 brandonk joined #salt
19:27 schuckles joined #salt
19:33 tmclaugh[work] joined #salt
19:35 toofer joined #salt
19:41 otter768 joined #salt
19:44 andrew_v joined #salt
19:45 bin_005_u joined #salt
19:47 andrew_v_ joined #salt
19:49 arount joined #salt
19:49 Kobe_ joined #salt
19:50 warthog42 just a follow-up, the keyctl thing was a red herring.  the example salt-call was showing that it found a key because I had run the salt-call from that login terminal already, so it had a key cached.  when I open a new ssh session I can do a keyctl show, not have a key, run the salt-call and it works; keyctl show then shows the ipa session key.  I'm pretty sure this has to do with the error on the NSS db access so it can't talk to the freeipa server
19:51 warthog42 but thanks again for taking a look earlier.  hopefully I'll get to the bottom of this soon :)
19:52 Kobe_ Has anybody seen an example on placing the salt configs under source control? Is it as simple as cd /srv ; git init ; git add . ; git commit ; git push origin
19:52 mrbigglesworth joined #salt
19:52 iggy warthog42: is there a reason you're using subprocess instead of __salt__['cmd.run']()?
19:53 warthog42 iggy:  for the kinit?  that part was written when I was first learning, I do intend to change that over to __salt__['cmd.run']() for that part.
19:54 iggy Kobe_: should work fine (assuming you setup a proper origin)
19:55 warthog42 at one point this was all working with older versions of salt and freeipa.  I'm just now getting around to updating things and it hasn't been as easy as I expected :)
19:55 overyander Kobe_, that's how i do mine and it works fine.
19:56 bastiandg joined #salt
19:59 notnotpeter joined #salt
20:00 impi joined #salt
20:08 cberndt joined #salt
20:10 adelcast joined #salt
20:11 cberndt joined #salt
20:15 bash124512 joined #salt
20:16 murrdoc joined #salt
20:19 matthew-parlette joined #salt
20:19 baweaver joined #salt
20:19 Kobe_ Ok thanks
20:22 murrdoc joined #salt
20:22 belak What's the recommended way to install a package from source using salt?
20:22 iggy build a package and install it
20:24 iggy that's a bit of a tongue-in-cheek response, but realistically, it's super easy to build packages these days with docker/vagrant/etc.
20:25 cberndt joined #salt
20:26 belak I'm trying to avoid docker...
20:27 belak Seems like overkill for what I want
20:27 arount_ joined #salt
20:28 solidsnack joined #salt
20:33 iggy We use it to build all of our packages
20:38 baweaver joined #salt
20:39 belak Oh, to build them
20:39 belak that would make sense
20:40 schuckles joined #salt
20:42 nobrak joined #salt
20:42 nobrak joined #salt
20:44 virusuy joined #salt
20:45 giantlock joined #salt
20:54 jvblasco_ joined #salt
20:57 baweaver joined #salt
20:58 smcquay joined #salt
20:58 subsignal joined #salt
20:58 smcquay joined #salt
20:59 jvblasco_ joined #salt
21:03 supersheep joined #salt
21:07 bin_005_u_j joined #salt
21:09 Brew1 joined #salt
21:09 murrdoc joined #salt
21:10 belak Do many people use salt to manage user accounts on all their boxes?
21:11 belak Or is it better to use salt to manage something like ldap
21:11 ahammond joined #salt
21:11 HappySlappy joined #salt
21:11 murrdoc ldap would be lightest coupling
21:11 murrdoc both will work
21:11 belak ldap is confusing
21:11 __number5__ joined #salt
21:12 murrdoc well in that case
21:12 murrdoc pillars + salt should be the win
21:12 markm joined #salt
21:14 belak Really it's just about me dealing with it and learning ldap
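A sketch of the pillars-plus-states approach murrdoc means, with made-up usernames — the user data sits in pillar and a Jinja loop renders one user.present per entry:

    # /srv/pillar/users.sls (placeholder data)
    users:
      alice:
        uid: 1001
      bob:
        uid: 1002

    # /srv/salt/users/init.sls
    {% for name, u in salt['pillar.get']('users', {}).items() %}
    {{ name }}:
      user.present:
        - uid: {{ u.get('uid') }}
    {% endfor %}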
21:15 HappySlappy Hi salt-folk, anybody have any experience with ext_pillar / ec2_pillar?  Docs on ec2_pillar seem pretty sparse, just curious how I can dump data for ec2_pillar to ensure that it's grabbing data for my minions
21:15 matthew-parlette joined #salt
21:16 iggy HappySlappy: pillar.items?
21:16 HappySlappy so far I've attempted looking at pillar.data and pillar.items, that didn't seem to include anything from ec2_pillar
21:16 iggy then it's not working
21:16 iggy run the master in the foreground with -l debug and see if you see anything useful when you do the pillar lookup
21:16 Berty__ joined #salt
21:16 yaryarrr joined #salt
21:17 HappySlappy k thx, will try that
21:18 HappySlappy any reason why salt-ssh wouldn't work with an ext_pillar like ec2_pillar as long as I have the master conf in the specified config directory?
21:18 belak Is there a way to make highstate a bit less verbose?
21:18 murrdoc update config to only show changes
21:20 belak ooh, state_output: mixed seems perfect
21:22 belak Er, changes rather.
21:24 murrdoc :)
21:25 belak is it possible to make my config files not have a ton of blank lines where I had jinja logic?
21:25 DammitJim joined #salt
21:28 teebes joined #salt
21:33 iggy state_verbose: False
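Putting the two suggestions together, a minimal config sketch — these go in the master config, or in the minion config when running salt-call locally:

    # /etc/salt/master (or /etc/salt/minion for salt-call)
    state_output: changes
    state_verbose: False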
21:33 iggy belak: {%- and -%}
21:33 iggy http://jinja.pocoo.org/docs/dev/templates/#whitespace-control
21:36 belak ah
21:36 murrdoc is that going to fix blank lines where the {% if %} stuff is
21:37 belak So, I need to add that to every if?
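Roughly, yes — with manual whitespace control each block tag that should not leave a blank line gets the dash form. A small template sketch (the grain check and option lines are placeholders):

    bind_port = 8080
    {% if grains.get('os_family') == 'Debian' -%}
    debian_specific_option = true
    {%- endif %}
    log_level = info

With the dashes the Debian branch renders without the extra blank lines that plain {% if %} / {% endif %} tags leave behind; the jinja_trim_blocks / jinja_lstrip_blocks options mentioned below aim to do this globally, though whether they take effect depends on the Salt version in use.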
21:38 baweaver joined #salt
21:40 dimeshake joined #salt
21:41 badon joined #salt
21:42 otter768 joined #salt
21:44 litwol joined #salt
21:44 c10 joined #salt
21:46 belak I'm finding a few references to jinja_trim_blocks and jinja_lstrip_blocks but when I put those in my /etc/salt/master and /etc/salt/minion, they don't appear to do anything
21:46 belak Also, they're listed on an example config page, but not on the page which lists valid vars for the master cfg file
21:47 baweaver joined #salt
21:52 belak When I run with -l debug, the lines about jinja2 conf don't show up
21:56 shanemhansen joined #salt
21:59 shanemhansen I can't figure out why a dc-specific sls file I'm trying to apply isn't working. It worked in vagrant. https://gist.github.com/shanemhansen/e2fb2fb8b90ff30fe7fc
22:00 shanemhansen This issue:  https://github.com/saltstack/salt/issues/1432 kind of makes me think overriding a pillar value in older versions of salt doesn't work the way I think it does, but I'm not sure.
22:00 shanemhansen If someone could look at that gist and 3 referenced files and let me know if I'm on the right track I'd give them a big internet hug.
22:04 jeffspeff joined #salt
22:06 forrest shanemhansen: comment https://gist.github.com/shanemhansen/e2fb2fb8b90ff30fe7fc#file-top-sls-L4 and see if it works.
22:07 iggy shanemhansen: what version?
22:07 utahcon joined #salt
22:07 garthk joined #salt
22:09 shanemhansen forrest, reordering top.sls file seemed to make it work.
22:09 forrest okay
22:10 shanemhansen iggy, I think it's 0.17.something.
22:10 iggy ouch, yeah, there were lots of odering issues back then
22:10 shanemhansen I haven't tackled the upgrade because the protocol seems to have changed with the new 2014.* versions.
22:10 iggy 2015...
22:10 iggy and it changed in 0.17.2 I think
22:10 shanemhansen I'm not sure if a 2015.* salt-master can talk to 0.17 minions. I know that newer minions can't talk to an old master.
22:10 iggy so if you are over that, you should be fine
22:11 iggy yes, upgrade master first
22:13 shanemhansen So in terms of doing the "right thing" for datacenter specific values would people recommend I a) do what I'm doing but with a current version of salt b) something else I'm not aware of
22:13 supersheep joined #salt
22:15 iggy A
22:17 shanemhansen Nice. Thanks.
22:18 iggy What you have looks like the way I'd do it (And the way I've seen others do it)
22:18 iggy I highly suspect the reason you are seeing weirdness is the old Salt version
22:20 brandk joined #salt
22:20 thehaven_ joined #salt
22:21 p66kumar joined #salt
22:23 iggy joined #salt
22:24 jacksontj joined #salt
22:24 bbradley joined #salt
22:24 baweaver joined #salt
22:25 mlanner joined #salt
22:25 patrek joined #salt
22:39 sunkist joined #salt
22:43 joeto1 joined #salt
22:44 snaggleb joined #salt
22:44 snaggleb joined #salt
22:44 Singularo joined #salt
22:44 sunkist1 joined #salt
22:45 mrbigglesworth joined #salt
22:47 monkey66 joined #salt
22:49 ageorgop joined #salt
22:59 mosen joined #salt
23:04 monkey66 left #salt
23:04 sgargan joined #salt
23:11 sgargan joined #salt
23:15 iggy clever ways to reset /etc/salt/pki/minion/minion_master.pub when changing masters?
23:15 iggy or should I just make all my masters have the same keys?
23:17 flipflop joined #salt
23:17 flipflop hi there
23:18 flipflop I have a small question regarding pillar data. Is it common (wise) practice to save pillar data in Git?
23:18 iggy we do... in a separate repo than our states obviously
23:19 flipflop iggy: yes that is exactly what I meant ... using the gitfs things within the config file
23:19 iggy yep
23:20 flipflop iggy: I saw it inside the documentation but was wondering if it is wise to do so ... but I guess if your repo is sufficiently secured ..
23:20 flipflop :)
23:20 iggy - git: master git+ssh://git@salt-pillars-github.com/iggy/salt_pillars.git
23:20 iggy it's all a trade off
23:20 iggy convenience vs security
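For reference, that one-liner sits under ext_pillar in the master config; a minimal sketch with a placeholder repo URL:

    # /etc/salt/master
    ext_pillar:
      - git: master git+ssh://git@github.com/example/salt_pillars.git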
23:21 flipflop yeah ... either you have to copy everything from your backups if things break or just reference the pillar from the config right?
23:21 iggy we don't have anything automated to push states, so if someone did manage to get at our pillars, we could just redo all the keys etc before redeploying
23:21 flipflop yes indeed and security
23:21 sgargan joined #salt
23:22 flipflop you mean state.highstate?
23:22 iggy yeah
23:22 flipflop that you are doing it manually?
23:22 iggy yep
23:22 flipflop and not by cron or ... right
23:22 flipflop :)
23:22 ahammond joined #salt
23:23 flipflop yes ... that would be another barrier and makes it more explicit
23:23 ajw0100 joined #salt
23:23 flipflop too
23:23 flipflop thank you for your answer iggy
23:24 DammitJim joined #salt
23:25 sgargan joined #salt
23:26 flipflop :)
23:26 flipflop i am off again ... I can go to sleep now
23:26 flipflop haha
23:27 flipflop left #salt
23:31 sgargan joined #salt
23:32 bfoxwell joined #salt
23:33 joeto joined #salt
23:36 Aidin joined #salt
23:38 baweaver joined #salt
23:39 mrbigglesworth joined #salt
23:43 otter768 joined #salt
23:48 cansis joined #salt
23:50 baweaver joined #salt
23:51 bfoxwell joined #salt
23:56 ipmb joined #salt
23:57 Singularo joined #salt
