
IRC log for #salt, 2016-12-12


All times shown according to UTC.

Time Nick Message
00:26 ninjada joined #salt
00:37 thekabal joined #salt
00:38 thekabal What is the most common/supported method for keeping Syndics in sync with the main master? Gitfs?
00:38 nethershaw joined #salt
00:42 jas02 joined #salt
00:46 cyteen joined #salt
00:50 jas02_ joined #salt
01:07 sh123124213 joined #salt
01:43 armyriad joined #salt
01:54 mpanetta joined #salt
02:01 Valfor joined #salt
02:01 Valfor joined #salt
02:04 ninjada joined #salt
02:07 g3cko joined #salt
02:07 catpigger joined #salt
02:21 jas02_ joined #salt
02:36 debian112 joined #salt
03:08 armyriad joined #salt
03:09 Bryson joined #salt
03:13 evle joined #salt
03:33 bastiandg joined #salt
03:46 JPT joined #salt
03:51 jas02_ joined #salt
05:10 mohae_ joined #salt
05:16 rdas joined #salt
05:38 ronnix joined #salt
05:40 ninjada_ joined #salt
05:45 filippos joined #salt
05:54 clevodearia joined #salt
05:54 icebal joined #salt
05:55 vodik joined #salt
05:55 mpanetta joined #salt
06:15 ninjada joined #salt
06:20 g3cko joined #salt
06:21 jas02_ joined #salt
06:24 darthzen joined #salt
06:24 samodid joined #salt
06:38 sh123124213 joined #salt
06:57 colttt joined #salt
07:09 preludedrew joined #salt
07:12 Miouge joined #salt
07:14 jhauser joined #salt
07:20 felskrone joined #salt
07:22 jhauser joined #salt
07:36 jas02_ joined #salt
07:37 SamYaple im trying to do a thing with salt to setup mariadb+galera. I need to share data between the minions while the state is running on all of them
07:38 SamYaple first it needs to check if mariadb is running on any hosts, if it is, all hosts can start mariadb normally. if its not, then a special bootstrap action must occur on _one_ host
07:38 SamYaple im not sure salt can help me with this or if I would need to setup something like zookeeper
07:39 hlub you can use grains for instance to tell the other minions that you are running the db.
07:39 keimlink joined #salt
07:40 hlub or mine
07:40 SamYaple hlub: but do grains update after the state is running?
07:41 SamYaple if all 3 mariadb hosts are running the state at the same time, how would they sync info about the running state?
07:41 hlub hmm
07:42 hlub I didn't get the full picture of what you are trying
07:42 SamYaple what else do you need?
07:44 hlub SamYaple: so, when you enter the state, you need to know if there is any mariadb available? if you tell that via grains, you are able to request it with jinja like {% if mariadb exists %} do some state {% else %} do something else {% endif %}
07:45 SamYaple hlub: no, when i enter the mariadb state, i must check if mariadb is running on any node in the environment
07:45 SamYaple since it is a cluster of the nodes
07:46 hlub yes
07:47 SamYaple from my understanding grains don't sync mid state run. im not sure how what you are proposing would work
07:48 hlub why should it sync during the state?
07:49 fracklen joined #salt
07:51 hlub SamYaple: anyway, I think mine is a better option as it is used to store more dynamic data.
07:52 SamYaple hlub: that wont work, thats what i cant use
07:52 hlub SamYaple: oh, why?
07:52 SamYaple if the mariadb state is running on minion1 minion2 and minion3, then who can they communicate that one of them is running the service
07:53 SamYaple s/who/how/
07:53 SamYaple if the whole cluster is stopped, different actions need to be taken and they must coordinate across the cluster to take those actions
07:54 SamYaple that can't be done with grains I dont think
07:54 hlub what about selecting the minion that is first in ordered list of the minions?
07:55 hlub the ordering is the same everywhere so you can use jinja's if clause to choose only one minion to execute some state.
07:55 SamYaple ok, but what if mariadb is stopped on that minion, and running on the rest. how will it know?
07:56 hlub if you really need more coordination, there is also the orchestration
07:56 SamYaple im using orchestration. it doesnt do this
07:56 SamYaple not unless I were to split these steps into 3 different state files
07:57 hlub SamYaple: if you use mine to query a list of those minions that are currently running mariadb, then you should have all of the information you want?
07:57 chowmein__ joined #salt
08:00 SamYaple i think that might work, yea
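
A minimal sketch of the mine-based check hlub describes (the mine alias, target pattern and bootstrap step are illustrative assumptions, not from the discussion). Note the limitation raised in this exchange: mine data refreshes on the mine_interval, not mid-state-run.

    # minion config or pillar: publish mariadb's service status to the mine
    mine_functions:
      mariadb_running:
        - mine_function: service.status
        - mariadb

    # in the state file (Jinja): check whether any cluster node reports it running
    {% set status = salt['mine.get']('galera*', 'mariadb_running') %}
    {% if status and True in status.values() %}
    mariadb:
      service.running:
        - enable: True
    {% else %}
    # nothing running anywhere -> run the bootstrap action on exactly one node,
    # e.g. the first minion in a sorted list, as hlub suggests above
    {% endif %}
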
08:00 SamYaple what about coordinated shutdowns? where all nodes need to wait until the last node is shutdown?
08:01 SamYaple so the minions would need to coordinate mid-state before proceeding
08:03 toanju joined #salt
08:05 hlub that sounds like a typical orchestration task
08:06 ronnix joined #salt
08:06 hlub now I have to go back to work
08:06 SamYaple and yet, I can't figure out how to do it in salt without different states
08:08 bocaneri joined #salt
08:08 hlub SamYaple: I understood you have two states. 1) shutdown, 2) some afterburner task. with these two states it should be straightforward.
08:10 whytewolf SamYaple: why is breaking it into multiple states a problem? if you need to synchronize a function... then it needs to happen at the orchestration level. which will require breaking up the states into the synchronized parts and the unsynchronized
08:10 hlub but if you have only one state, then that state is responsible for the communication. of course you can't run half of a state.
08:11 whytewolf states can't synchronize between servers. period.
08:11 whytewolf only orchestration can
08:12 SamYaple thanks for the help. I may just need to adjust my thinking. I am converting a fairly large ansible project into salt. sometimes logic doesn't convert well :)
08:12 jhauser joined #salt
08:12 SamYaple i understand your points. thanks hlub and whytewolf. ill try to adjust my thinking
08:13 whytewolf man I really should be in bed :/
08:18 Valfor joined #salt
08:18 Valfor joined #salt
08:19 fracklen joined #salt
08:20 darioleidi joined #salt
08:28 dijit hey guys, is it possible to run a linter after generating a config with jinja?
08:28 fracklen joined #salt
08:29 dijit my end goal is to ensure that what jinja outputs is always valid YAML.
08:29 dijit s/YAML/JSON/
08:29 fracklen joined #salt
08:29 buu What exactly do you plan on doing if it isnt?
08:30 dijit error.
08:30 dijit it can put the config down, but I need to know it's not valid.
08:30 dijit harder to catch on the other side.
08:31 SamYaple dijit: why not keep it in the form of a dict? you can always output yaml or json from a dict
08:31 buu Won't salt error when it tries to parse it?
08:31 dijit the simple answer is that I don't know how to do that.
08:31 dijit buu: it's a hand mangled json file that gets munged in horrible ways due to 'if/else' blocks in jinja.
08:32 buu So you just want better errors?
08:32 dijit yes
08:32 buu Well, presumably you could run some kind of gulp process to watch the files, run jinja, then lint them
08:32 dijit currently it happily spits out an invalid file (as intended)
08:32 dijit hrm
08:32 buu I mean
08:32 buu I'm not saying this is a *good* idea
08:33 dijit I think I like the dict idea.
08:33 whytewolf {{ dict |json }}
08:33 dijit it sounds significantly cleaner.
08:33 buu How are you getting it into a dict in the first place?
08:33 dijit pillar I suppose.
08:33 SamYaple yea
08:33 dijit which is where the problem arose anyway.
08:33 dijit :P
08:33 buu So..
08:33 dijit because I wanted to use pillars /and/ hardcode my json
08:33 buu You're defining the pillar files as yaml?
08:34 buu I mean
08:34 buu You're just moving the text from a state file to a pillar file right?
08:34 dijit my pillars are YAML, the files salt spits out are JSON
08:34 dijit yeah
08:34 dijit I assume that's what I should be doing
08:34 buu I'm confused
08:34 dijit instead of if/elsing everywhere.
08:34 buu Why are you using json for config in the first place?
08:34 buu Oh wait
08:34 buu You mean for like, non-salt config files?
08:34 dijit json is the files I manage, they're ingested by something else.
08:34 dijit yeah
08:34 buu ohhh
08:35 dijit sorry!
08:35 buu Yeah that makes way more sense
08:35 buu I like the dict idea also, but you can define tasks to run as part of a state
08:35 dijit orly?
08:35 buu cmd.run basically
08:35 dijit awesome, I might do that as a stop-gap.
08:35 dijit getting my entire config in pillar will take... time.
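
A sketch of the keep-it-as-a-dict idea from SamYaple and whytewolf, with an assumed pillar key and target path. file.serialize (a swap for a hand-written template) dumps the pillar dict straight to JSON, so the output can never be syntactically invalid:

    # pillar (YAML) - the structure that would otherwise be if/else'd together in Jinja
    myapp_config:
      listen: '0.0.0.0'
      port: 8080

    # state: write the dict out as JSON
    /etc/myapp/config.json:
      file.serialize:
        - dataset_pillar: myapp_config
        - formatter: json

The inline form whytewolf mentions, {{ pillar['myapp_config'] | json }}, does the same job inside a template rendered with template: jinja.
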
08:36 buu salt.states.module.wait(name, **kwargs)
08:36 buu Run a single module function only if the watch statement calls it
08:37 buu A good example of using watch is with a service.running state. When a service watches a state, then the service is reloaded/restarted when the watched state changes, in addition to Salt ensuring that the service is running.
08:38 buu https://docs.saltstack.com/en/latest/ref/states/requisites.html
08:38 buu I don't actually understand any of that
08:38 buu But there's some docs
08:38 mikecmpbll joined #salt
08:39 AndreasLutro also look at file.managed's check_cmd argument
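
A sketch of the check_cmd approach AndreasLutro points at (paths and the validator command are assumptions). check_cmd runs against a temporary file holding the newly rendered contents; if it exits non-zero the state fails and the existing file is left untouched:

    /etc/myapp/config.json:
      file.managed:
        - source: salt://myapp/files/config.json.jinja
        - template: jinja
        - check_cmd: python -m json.tool
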
08:39 whytewolf watch - do some other thing based on a mod_watch function in the current module based on if another state changed
08:39 buu whytewolf: writing python as yaml makes my head hurt
08:40 whytewolf module.wait does nothing. but the mod_watch in the module state will run the module if the other state changes
08:41 o1e9 joined #salt
08:41 whytewolf buu: then heat must really drive you nuts.... OOP python written in yaml
08:41 buu I stay away from it
08:43 JohnnyRun joined #salt
08:47 CrummyGummy joined #salt
08:48 jhauser_ joined #salt
08:52 samodid joined #salt
08:54 jhauser joined #salt
09:00 jhauser_ joined #salt
09:01 jhauser joined #salt
09:01 AndreasLutro https://bpaste.net/show/61febb897d1d how is this even possible
09:04 viccuad joined #salt
09:05 buu AndreasLutro: Because 00_client doesn't have access to that variable?
09:05 buu oh
09:05 buu I missread. Nope. Got nothing.
09:05 AndreasLutro can't reproduce locally of course...
09:06 buu AndreasLutro: I'd bet one of the internal keys are undefined
09:06 buu 'address': pillar.sensu.get('address', salt['twutil.public_ips']() | first),
09:06 buu Where does 'first' come from?
09:06 haam3r1 left #salt
09:06 AndreasLutro built-in jinja filter
09:06 buu oh, er, yes
09:07 AndreasLutro hmm, if I change "opts.id" to "opts.askdjsjkd" I get the same error locally
09:07 buu Anyways, I think one of your internal things are undefined on the minion but since I'm too sleepy to distinguish between | and ||
09:07 AndreasLutro so I guess there is something going wrong inside the dict initialization
09:07 buu I'm going t seep now
09:09 jhauser joined #salt
09:09 bluenemo joined #salt
09:09 N-Mi_ joined #salt
09:09 viccuad Hi folks. I'm trying to set up templates for VM hosts spawned on-demand, following the "nodes, roles, profiles" Puppet pattern implemented in salt, which some colleagues told me about. The pattern is awesome. Does anybody know of a better way of spawning on-demand hosts than making pillars/custom grains and calling `salt 'newhost' …` with custom grains in that command? thanks in advance
09:16 cyborg-one joined #salt
09:18 kbaikov joined #salt
09:20 PhilA joined #salt
09:20 ninjada joined #salt
09:24 mikecmpbll joined #salt
09:24 kbaikov joined #salt
09:24 invalidexception joined #salt
09:24 coredumb morning
09:25 coredumb are there resources online describing which config options require a restart and which don't?
09:26 Mattch joined #salt
09:26 invalidexception joined #salt
09:26 fracklen_ joined #salt
09:31 s_kunk joined #salt
09:31 teclator joined #salt
09:32 kbaikov joined #salt
09:33 invalidexception joined #salt
09:37 ninjada joined #salt
09:39 sebastian-w joined #salt
09:39 AndreasLutro in salt? every config option, afaik
09:41 N-Mi_ joined #salt
09:42 Rumbles joined #salt
09:53 gaghiel joined #salt
09:57 mpanetta joined #salt
10:16 irctc723 joined #salt
10:17 ronnix joined #salt
10:35 ReV013 joined #salt
10:40 Cadmus joined #salt
10:41 Cadmus Morning, does anyone use the firewalld module? That's started pruning rules recently right?
10:46 cyteen joined #salt
10:48 haam3r1 joined #salt
10:50 coredumb AndreasLutro: seems like some work without a restart but it's not really well documented
10:51 coredumb btw how does one run saltutil.refresh_pillar from a reactor ?
10:52 yuhl_ left #salt
10:55 yuhl__ joined #salt
10:55 jhauser joined #salt
10:56 irctc723 Does anybody know how to setup the listen socket at the salt master? It seems that the default backlog queue setting is too low
10:57 sh123124213 joined #salt
10:58 SpX joined #salt
11:12 inire joined #salt
11:13 amcorreia joined #salt
11:20 jhauser joined #salt
11:39 jhauser joined #salt
11:42 cyteen joined #salt
11:58 ronnix joined #salt
12:03 sh123124213 joined #salt
12:14 seanz joined #salt
12:16 jhauser_ joined #salt
12:19 PhilA joined #salt
12:22 ronnix joined #salt
12:28 zigurat joined #salt
12:28 zigurat Hi. I want to run a for loop over IPs and do something if the IP is private. However I don't know how to write that check in the if together with the IP variable
12:32 antonw joined #salt
12:32 ronnix joined #salt
12:33 armin joined #salt
12:35 sh123124213 joined #salt
12:39 Reverend joined #salt
12:39 Reverend boo
12:41 munhitsu_ joined #salt
12:43 ernescz joined #salt
12:49 mattl joined #salt
12:51 copelco joined #salt
12:53 __number5__ joined #salt
12:55 ernescz Hello! Does anyone have a working example of the mongodb module showing how to connect to a Mongo instance? Tried just about everything. Getting this result all the time: https://pastebin.ubuntu.com/23618716/
12:56 ernescz debug just gives 'Error connecting to database admin' at best, or the same trace as on the master.
12:59 Cadmus Oh this is going to be painful. So it seems that the firewalld module now prunes services by default, and ports too, but you can't tell it not to prune ports (it's almost totally undocumented anyway it seems)
13:04 irctc723 does anybody know a fix for dropped SYN requests to the master? 40891 SYNs to LISTEN sockets dropped
13:05 irctc723 the kernel says request_sock_TCP: Possible SYN flooding on port
13:06 amontalban joined #salt
13:06 jhauser joined #salt
13:07 mihait joined #salt
13:07 netcho joined #salt
13:08 XenophonF irctc723: assuming you're running Linux, there are kernel sysctls that you can tweak
13:08 XenophonF but it sounds like something else is wrong
13:08 XenophonF how many minions do you have?
13:08 XenophonF what do your firewall settings look like?
13:08 irctc723 XenophonF> 1570 minions
13:08 irctc723 no firewall
13:09 XenophonF that's about 25 connection requests per minion, which sounds wrong
13:09 ramSeraph joined #salt
13:09 XenophonF are they all on the same LAN/WAN?
13:10 irctc723 XenophonF>  yes they are. netstat shows raising times the listen queue of a socket overflowed and SYNs to LISTEN sockets dropped
13:11 XenophonF what's the loadavg like on your master?
13:11 irctc723 XenophonF> 1 - 3 with 24 cores
13:11 XenophonF RAM?
13:12 XenophonF i'll admit that i'm guessing at this point
13:12 irctc723 XenophonF> 16G while 4 are free and 2 are used for cache
13:13 felskrone how can i make the info of a flat 15,000-line file available as pillar data on a master? i tried import_yaml within pillars, but that imports the file every time a minion requests its pillar and that's too much for our very busy masters.
13:13 XenophonF yeah it's got plenty of juice
13:14 XenophonF felskrone: use a YAML multi-line string? there's also a files pillar IIRC
13:14 darthzen joined #salt
13:15 irctc723 felskrone> do all minion need those 15 000 lines?
13:15 felskrone XenophonF: the file gets updated regularly (it's a server<->type map), so i can't just define the data somewhere as a multiline string,
13:16 felskrone irctc723: yes, it makes server-specific info available to the minions, like server type, virtualization host, etc., and it gets updated without me knowing about it
13:17 irctc723 XenophonF: i already set salt_event_pub_hwm: 128000 and event_publisher_pub_hwm: 64000
13:17 irctc723 felskrone> i had exactly the same, also providing metadata to the minions like roles, settings and stuff. I split those into specific yaml files for each minion.
13:17 Cadmus Okay, you can't stop the firewalld module from deleting ports, so it's no longer usable
13:18 felskrone irctc723: i was hoping to avoid that, but i guess i have to go that way
13:18 irctc723 felskrone> i wasn't able to find another way. Having exactly the same problem. Another option might be to publish the whole file and load it as grains on the server.
13:20 felskrone irctc723: interesting idea, thx
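
A sketch of the per-minion split irctc723 describes, with assumed file names; each pillar render then reads only the one small file for that minion instead of the full 15,000-line map:

    # pillar/top.sls
    base:
      '*':
        - servers.{{ grains['id'] }}

    # pillar/servers/web01.sls  (one small file per minion, generated from the big map)
    server_type: web
    virt_host: hv03

Minion IDs containing dots need massaging (dots map to directories in SLS names), e.g. {{ grains['id'] | replace('.', '_') }}.
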
13:21 Cadmus Okay, I think this can be fixed, you just have to dfine every service yourself
13:22 coredumb when sending an event with salt-call event.send "test/user1_pillar/refresh" "{type: pillar, group: user1}" how am I supposed to get type and group from the reactor? I've tried data['type'] and data['data']['type'] and both tell me that the dict object has no attribute "type"
13:22 coredumb any idea ?
13:23 irctc723 anybody here who is able to serve more than 1k minions?
13:24 felskrone irctc723: can any minion auth at all? how many minions auth at once? are you using max_minions and/or connected_cache?
13:24 Cadmus This frankly is a bit annoying
13:25 irctc723 felskrone: auth works fine. Just doing things like manage.down or running states mostly results in missing packets and minions not returning. What are max_minions and/or connected_cache?
13:26 numkem joined #salt
13:26 netcho_ joined #salt
13:28 felskrone irctc723: max_minions is to limit the number of minions per master, interesting for multimaster setups, connected_cache is often used in conjunction with max_minions
13:29 moy joined #salt
13:29 jas02__ joined #salt
13:29 m0nky joined #salt
13:29 irctc723 felskrone> ah OK but i'm "only" at 1500 currently. I haven't planned to set up a new master for every 1500 minions. That seems quite low to me.
13:29 felskrone irctc723: let me find something i wrote a while ago...
13:30 felskrone irctc723: i had similar problems and a hard time figuring out what was really happening: https://github.com/felskrone/utils/blob/master/salt-tcpdump.py
13:31 dyasny joined #salt
13:32 felskrone irctc723: also have a look at the acceptance_wait_time, random_reauth_delay and recon_* settings, if your master/OS says it's getting flooded
13:33 dyasny joined #salt
13:34 mavhq joined #salt
13:36 Awesomecase joined #salt
13:37 bVector joined #salt
13:38 irctc723 felskrone> reauth, recon and stuff is already set. Thanks for your py script. But i can't see anything interesting from it.
13:40 irctc723 felskrone: the only thing i'm wondering about is that while the connections to port 4506 are rising as data is returned, the connections to 4505 drop from a constant 1637 down to 1500, then go back up to 1637 after the command finishes
13:41 felskrone irctc723: hm, if you can't see too many auths from the dump-script i currently don't have an idea what might be wrong
13:42 Cadmus I can see why you want to have everything designated as a service, but this does mean half my machines have just closed all their ports :(
13:42 irctc723 felskrone> but is it normal that the established connections on port 4505 are dropping while the minions return data
13:43 felskrone irctc723: they shouldnt, 4505 is usually persistent connections, 4506 are dynamic
13:44 onmeac joined #salt
13:45 irctc723 felskrone: yes but it drops, and "SYNs to LISTEN sockets dropped" rises in netstat
13:45 onmeac joined #salt
13:45 irctc723 felskrone: and "times the listen queue of a socket overflowed" rises. This looks to me like the backlog on the master socket is too low.
13:48 jas02_ joined #salt
13:49 jhauser joined #salt
13:51 jas02_ joined #salt
13:53 irctc723 felskrone: is there any reason why this script switches promiscuous mode on and off and on and off ...
13:53 _Cyclone_ joined #salt
13:53 onmeac joined #salt
13:54 felskrone irctc723: thats the pcaplib iirc
13:57 jas02_ joined #salt
13:58 ToeSnacks joined #salt
13:58 mpanetta joined #salt
14:00 jhauser joined #salt
14:03 DanyC joined #salt
14:05 Cottser joined #salt
14:07 scoates joined #salt
14:15 keimlink_ joined #salt
14:17 chamunks- joined #salt
14:18 ReV013 joined #salt
14:18 haam3r joined #salt
14:22 simmel joined #salt
14:23 alrayyes joined #salt
14:23 JPT joined #salt
14:23 quarcu joined #salt
14:24 cyteen joined #salt
14:27 coredumb Is it possible to ensure here http://pastebin.com/b714nW7E that update_pillars or update_states is never triggered before update_fileserver ? even with the require, if I listen on the bus I see the refresh_pillar going off before the fileserver.update :(
14:27 coredumb this is in a reactor FWIW
14:29 coredumb am I supposed to use an orchestrator there ?
14:33 jhauser joined #salt
14:35 scoates joined #salt
14:37 jhauser_ joined #salt
14:42 riftman joined #salt
14:50 Tanta joined #salt
14:52 AndreasLutro coredumb: yes, orchestrators are the way to go if you need to control order
14:52 Sammichmaker joined #salt
14:52 anotherzero joined #salt
14:52 coredumb AndreasLutro: thought require would be respected in the reactor ... is it the only place where it's not ?
14:53 irctc723 need help with scaling salt. The mater is dropping packets as it runs out of queue on the listen socket
14:53 AndreasLutro coredumb: the documentation clearly says it doesn't: "Reactor SLS files, by design, do not support Requisites, ordering, onlyif/unless conditionals and most other powerful constructs from Salt's State system."
14:54 coredumb AndreasLutro: forgive me then completely missed that
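
A sketch of the reactor-plus-orchestrate split AndreasLutro recommends; the tag, file names and target are illustrative. The reactor only hands off to the runner, and the ordering lives in the orchestrate SLS, where require is honored:

    # master config
    reactor:
      - 'myrepo/update':
        - /srv/reactor/update.sls

    # /srv/reactor/update.sls - just start the orchestration
    kick_off_update:
      runner.state.orchestrate:
        - mods: orch.update

    # /srv/salt/orch/update.sls - ordered steps
    update_fileserver:
      salt.runner:
        - name: fileserver.update

    refresh_pillar:
      salt.function:
        - name: saltutil.refresh_pillar
        - tgt: '*'
        - require:
          - salt: update_fileserver
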
14:57 racooper joined #salt
14:58 jas02_ joined #salt
14:59 nickabbey joined #salt
14:59 ronnix joined #salt
15:00 wwalker if I have a state letsencrypt/init.sls the state.sls letsencrypt works.  I need to just call letsencrypt/renewal.sls  what is the syntax ?  I've tried about 8 different separators and 15 google searches...
15:00 wwalker s/the/then/
15:01 Tanta salt-call state.sls letsencrypt.renewal
15:01 Tanta or state.apply
15:03 jas02_ joined #salt
15:04 wwalker Tanta: Thanks!  I had tried that and gotten the same error as the other separators (it's renew...)
15:15 mpanetta joined #salt
15:18 mpanetta joined #salt
15:21 Tanta well that's the syntax for traversing a directory (it becomes .)
15:21 AndreasLutro wwalker: what error are you getting?
15:29 djgerm joined #salt
15:33 stooj joined #salt
15:36 wwalker no such ... in env 'base'. It was two problems: the first was that I was using the wrong state name ("renewal" vs "renew"). The second was '.' (the 3rd separator I tried, after / and :). I'm all good now. Thanks to AndreasLutro and Tanta
15:36 Bico_Fino joined #salt
15:39 jas02__ joined #salt
15:40 Sarphram joined #salt
15:41 jas02__ joined #salt
15:41 jas02__ joined #salt
15:45 jas02_ joined #salt
15:45 teclator joined #salt
15:47 felskrone left #salt
15:47 felskrone joined #salt
15:48 ronnix joined #salt
15:48 sarcasticadmin joined #salt
15:49 nickabbey joined #salt
15:52 alvinstarr joined #salt
15:53 nickabbey joined #salt
15:57 austin_ joined #salt
15:57 ALLmightySPIFF joined #salt
15:57 traph joined #salt
16:02 jav joined #salt
16:05 djgerm Hello, I am running salt-cloud in ec2 and I get an error when I have my storage configs specified: "[ERROR   ] There was a query error: 'NoneType' object has no attribute '__getitem__'". No other obvious errors. The instance is created.. just not the storage, and salt-cloud bombs out…
16:06 Salander27 joined #salt
16:22 nocz joined #salt
16:23 swills joined #salt
16:23 Rumbles you have an undeclared variable you are trying to run a method on djgerm.... without seeing your config it would be impossible for me to say what
16:24 Rumbles all I want for christmas is for salt to tell me why my pillar won't render D:
16:24 djgerm haha that was my issue (pillar rendering) on friday!
16:25 djgerm whytewolf gave me some tips… let me copy paste their questions to me :)
16:25 Rumbles that would be great :)
16:29 nocz left #salt
16:34 Rumbles even if I don't share the pillar with the host it still fails to render :(
16:34 Rumbles I even tried moving the pillar so it didn't exist
16:34 Rumbles same issue
16:34 spuder joined #salt
16:34 djgerm I got distracted. SORRY! looking now. I think my issue with AWS tho stems from my misunderstanding of how EBS is defined in cloud maps maybe https://docs.saltstack.com/en/latest/topics/cloud/aws.html
16:35 djgerm i thought anything in a profile could go into a map… and overwrite the profile
16:35 Rumbles np
16:36 Rumbles I've not used salt cloud, but I've seen that error before
16:36 djgerm 11:31:28 AM whytewolf: if you use pillar.items is there an item that says it couldn't render?
16:36 djgerm 11:32:52 AM whytewolf: are you using an ext_pillar?
16:36 djgerm 11:33:53 AM whytewolf: did you turn on log_level_logfile: all in the master and try the refresh again. then check the logfile on the master?
16:36 djgerm 11:35:14 AM whytewolf: when all fails did you nuke the cache on the minion?
16:36 samodid joined #salt
16:37 djgerm then from there whytewolf made me understand the errors of my ways based on that data
16:37 Rumbles ahhhh
16:37 Rumbles I bet it's cache
16:37 Rumbles that always gets me
16:39 Rumbles ARGH /me must remember the cache is trying to ruin his day
16:39 Rumbles damnit
16:39 * Rumbles gets his coat
16:40 Rumbles thanks djgerm
16:40 djgerm thank whytewolf :)
16:40 Rumbles if you pastebin your config I will look it over for obvious mistakes :)
16:40 djgerm cache integrity and naming… hardest stuff in computer science!
16:43 Hybrid joined #salt
16:48 gimpy2939 joined #salt
16:48 dyasny joined #salt
16:49 gimpy2939 In an execution module, how do I call other modules and pass them loads of args?  In other words, what the heck is 'bar' in this example if I was using mount.mounted or such that needs much more than a single string arg?  https://docs.saltstack.com/en/latest/ref/modules/#cross-calling-execution-modules
16:51 whytewolf gimpy2939: bar is a bash command that is unimportant in that example.
16:53 whytewolf the important part is that cmd.run is module.function being called with __salt__['cmd.run']()
16:54 whytewolf gimpy2939: when you look up a module.function you want to run it should show you a list of arguments it takes
16:56 pipps joined #salt
16:56 whytewolf in that example, cmd.run takes several arguments. however there is only 1 required argument ... cmd... so by passing only one argument in, it will fill that argument. it could just as easily have been written as __salt__['cmd.run'](cmd=bar)
16:57 pipps joined #salt
16:59 whytewolf so for mount.mounted it would be something like __salt__['mount.mounted'](name=name,device=device,fstype=fstype)
17:00 whytewolf you could strip out the name=name to just name if you give them in the same order
17:00 whytewolf got it?
17:00 amontalb1n joined #salt
17:01 spuder_ joined #salt
17:02 arapaho joined #salt
17:03 djgerm is there any good reason to make salt-master run as a dedicated service user instead of root?
17:04 whytewolf because you like headaches?
17:04 djgerm or chroot it… or…
17:04 djgerm haha
17:04 djgerm ok great
17:04 whytewolf although i don't think salt-master being non-root is as big of an issue as salt-minion being non-root
17:05 djgerm yeah that makes sense
17:05 Trauma joined #salt
17:07 debian112 joined #salt
17:15 nickabbey joined #salt
17:17 PhilA joined #salt
17:20 eprice_ joined #salt
17:21 eprice joined #salt
17:23 esharpmajor joined #salt
17:25 t0m0 joined #salt
17:33 t0m0 joined #salt
17:36 cyborg-one joined #salt
17:37 onmeac joined #salt
17:39 pipps joined #salt
17:40 pipps joined #salt
17:42 t0m0 joined #salt
17:44 armguy joined #salt
17:46 Trauma joined #salt
17:48 stupidnic I am wondering if somebody could assist me in wrapping my head around how to accomplish something in Salt. A dumbed down version is basically executing a command to get a UUID on one minion, then using that UUID in other minions. I am guessing that could be accomplished via grains, but I wanted to know if there was a better way to accomplish something like that.
17:49 t0m0 joined #salt
17:50 buu stupidnic: I think that's more of a pillar thing?
17:50 buu stupidnic: or the 'salt mine'
17:51 stupidnic salt mine might work
17:51 R_jackedin mine sounds best.
17:52 MTecknology stupidnic: custom grain dropped into mine is what I'd do
17:52 stupidnic MTecknology: I think that is where I was heading, but I was a little hung up on the integration with salt mine
17:53 stanchan joined #salt
17:57 brokensyntax joined #salt
17:57 t0m0 joined #salt
18:01 lws joined #salt
18:02 t0m0 joined #salt
18:05 nicksloan joined #salt
18:05 nickabbey joined #salt
18:09 wwalker In my salt state, I want to know IF a pkg (systemd) is at a specific version, but I don't want it upgraded if it is not (systemd needs updating separately as a reboot is required, etc...). What is the best way to accomplish that test?
18:10 pipps joined #salt
18:10 wendall911 joined #salt
18:12 mk-fg joined #salt
18:12 t0m0 joined #salt
18:14 MTecknology wwalker: I'd suggest locking that version in apt, but even more, I'd suggest removing systemd :P
18:14 wwalker I want to modify the systemd config, but not if it is an old systemd
18:14 MTecknology {% if salt['pkg.version']('systemdump') == '0.0.19' %}
18:15 wwalker Sweet!  Thank you!
18:15 wwalker And well named package.
18:15 MTecknology ;)
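
If the test is "at least version X" rather than an exact match, pkg.version_cmp can be combined with the same pattern (package name, version and path are placeholders):

    {% set v = salt['pkg.version']('systemd') %}
    {% if v and salt['pkg.version_cmp'](v, '219') >= 0 %}
    /etc/systemd/system.conf:
      file.managed:
        - source: salt://systemd/files/system.conf
    {% endif %}
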
18:16 buu The best part about removing systemd is debugging 1800 line bash scripts
18:16 buu wait no, init scripts suck horribly
18:17 MTecknology debugging bash is trivial, though.
18:17 buu uh
18:17 MTecknology If you know what you're doing anyway...
18:17 stupidnic I don't like systemd, but I understand what it is trying to accomplish.
18:18 Sketch large bash scripts are not always trivial
18:18 MTecknology it's vim vs. emacs at this point.
18:18 buu I think my favorite part about the systemd saga is all these people criticizing other people for adopting systemd components without putting in a shred of effort to replace the convenience of the systemd features
18:18 netcho joined #salt
18:19 buu "waah why does gnome depend on  systemd now?!?!?!" "Because systemd was willing to write the sub-components gnome needed?" "why didn't gnome just re-write all those components themselves?!?!"
18:19 MTecknology buu: just stop now...
18:19 buu Also of course there's the marketing issue where the 'systemd project' ends up encompassing a ton of semi-related pieces of code
18:22 gimpy2939 another question about modules: I see when I call __salt__['mount.mount'](..) if it has an error, that gets logged so Salt knows about it, but it does not raise an exception and the only thing it returns is a string ... should I really be parsing string output of modules? Since Salt knows mount.mount failed, is there a better way to have it tell me that?
18:22 MTecknology gimpy2939: what's your end goal?
18:23 gimpy2939 MTecknology: To convert a state that is chained cmd.run into an execution module
18:23 mohae joined #salt
18:24 aw110f joined #salt
18:24 MTecknology I don't think one module is supposed to be parsing the return output of another. I'm pretty sure they should just be checking success/changes = true/false/none
18:24 gimpy2939 ... however that module will have several private functions to do the work; I'm trying to determine how I can use existing Salt modules instead of doing everything myself in Python
18:24 Brew1 joined #salt
18:24 fracklen joined #salt
18:25 gimpy2939 MTecknology: ok that's where I'm stuck, nothing is telling me 'success/changes = true/false/none'
18:25 MTecknology I can't help ya without seeing what you're working with.
18:27 gimpy2939 MTecknology: MTecknology
18:27 gimpy2939 MTecknology: https://gist.github.com/jwhite530/ab0dcc6f87f859d4fae7361a0b82d046
18:27 gimpy2939 ...fuckin broken copy+paste
18:28 UtahDave joined #salt
18:30 fracklen joined #salt
18:33 t0m0 joined #salt
18:33 nickabbey joined #salt
18:33 edrocks joined #salt
18:36 MTecknology gimpy2939: show me what's in ret
18:37 sh123124213 do we know when the next version of 2016.11 will be out ?
18:38 MTecknology sh123124213: probably pretty soon
18:38 UtahDave sh123124213: 2016.11.1 is in testing and QA right now
18:38 sh123124213 meaning tomorrow ? when qa passes ? :)
18:38 zer0def joined #salt
18:39 t0m0 joined #salt
18:40 gimpy2939 MTecknology: the ret from mount.mount?  From what I can tell it only is 'mount.nfs: /var/lib/mysql/database_dumps is busy or already mounted'
18:40 MTecknology sh123124213: when/IF
18:41 UtahDave When QA passes.  I wouldn't be surprised if it were this week or next
18:44 scoates joined #salt
18:46 sh123124213 thanks both
18:47 t0m0 joined #salt
18:47 irctc496 joined #salt
18:48 Gareth ahoy hoy.
18:48 iggy gimpy2939: yeah, your best bet is to parse the return... it would be awesome if you didn't have to, but...
18:49 nickabbey joined #salt
18:49 gimpy2939 iggy: bahhh, that sucks; guess I'll just not use modules and repeat the work myself in Python :(
18:49 iggy to each their own
18:50 Gareth UtahDave: o/
18:51 UtahDave Gareth: hey!  how's it going?
18:51 __number5__ joined #salt
18:51 Gareth UtahDave: doing well :) yourself?
18:51 UtahDave good!  Just getting acclimated to the cold here in Utah now.  ::brrr::
18:52 Gareth Haha yeah. I bet.  I'm heading off to NY tomorrow, going from low 60s to mid 30s.  Should be interesting :)
18:53 Gareth UtahDave: any snow near you yet?
18:53 UtahDave Oh yeah.
18:54 UtahDave It's mostly melted at the moment.
18:54 Gareth That's almost worse.  then you get ice :)
18:54 UtahDave Yeah, there's ice in the shady spots in the parking lot.
18:55 fracklen joined #salt
18:55 UtahDave I have several projects taking up room in my garage. I need to get them put away so I can get my truck back into my garage. I hate scraping snow off the windshield
18:55 UtahDave NY sounds like fun!  Work trip?
18:55 woodtablet joined #salt
18:56 Gareth Yeah.  Quarterly trip out to meet up with work folks then company holiday party.
18:56 fracklen_ joined #salt
18:57 UtahDave sweet
18:57 raspado joined #salt
18:58 buu UtahDave: Sounds like you need more salt!
18:58 t0m0 joined #salt
18:59 UtahDave buu: haha yes!
18:59 Tanta "The root of the evil lies in the fact that the government of the Congo is above all a commercial trust, that everything else is orientated towards commercial gain..."
18:59 Tanta whoops
18:59 Tanta wrong channel for that post guys
18:59 buu uh
18:59 buu go on
19:01 nicksloan joined #salt
19:08 irctc496 is there any reason why salt does not implement ZMQ_BACKLOG? for the transport sockets?
19:08 vod1k joined #salt
19:12 raspado joined #salt
19:12 nidr0x joined #salt
19:12 UtahDave irctc496: not sure.
19:13 whytewolf gimpy2939: you could always look at the mount state to see how they handle the fact that mount.mount returns a string
19:14 swa_mobil joined #salt
19:14 edrocks joined #salt
19:14 t0m0 joined #salt
19:14 tduerr joined #salt
19:15 irctc496 UtahDave: Currently the ZeroMQ sockets have only the default queue size of 100 which seems strange to me for big setups and i can't find a way to fix this without touching the transport/zeromq module.
19:18 UtahDave Hm. I'm not seeing an option for that in the master config.  I'm really not sure. I haven't been involved in that part of the code at all.  You could either send an email to the salt-user's mailing list or open an issue or pull request on github to discuss it. I'll point a couple of the core devs to it.
19:23 raspado joined #salt
19:23 raspado joined #salt
19:24 raspado joined #salt
19:25 irctc496 utahdave: https://github.com/saltstack/salt/issues/38210
19:25 saltstackbot [#38210][OPEN] salt-master: transport/zeromq.py misses a setting for the zeromq backlog / listen queue size | Hello,...
19:27 s_kunk joined #salt
19:29 MTecknology Grains are supposed to be cached each run, aren't they? And only certain things inside a run trigger a refresh? When one of my custom grains runs, a painfully long process kicks off to build the grain data, and that command seems to run 40+ times during a single highstate. If possible, I'd like to stick the return value of the command into a cache for the
19:29 MTecknology highstate/state run so that I can just recall it each time.
19:29 lws joined #salt
19:30 pipps joined #salt
19:31 MTecknology wwalker: ah, if it suits you better, I use this -- {% if grains['init'] == 'systemd' %}
19:33 pipps joined #salt
19:34 keltim joined #salt
19:35 lws joined #salt
19:37 UtahDave thanks, irctc496. I'll point some people to that.
19:39 irctc496 UtahDave: thanks. Do you know how others are able to serve > 1000 minions currently? I had no success.
19:41 UtahDave Yeah, I've worked on lots of systems 5 and 6 times that.  Have you tinkered with the other zmq options in the master config?
19:42 iggy MTecknology: do you have __context__ in grains?
19:43 iggy nope :( only states/modules
19:43 MTecknology :(
19:45 irctc496 UtahDave: just salt_event_pub_hwm, event_publisher_pub_hwm and pub_hwm. But they can't help if the kernel drops the connection because the listen queue is already full. 100 is very small.
19:45 UtahDave MTecknology: it's a chicken and egg problem.   grains are basically the first thing evaluated by the minion and are used by the other subsystems to determine what should be loaded by the minion
19:46 ProT-0-TypE joined #salt
19:46 UtahDave irctc496: OK. That makes a lot of sense.  Could you open a PR with that change? We'll do a bunch of testing on it.
19:46 MTecknology I considered trying to use sdb for it, but a module sticking something into sdb sounds like a bad idea.
19:46 MTecknology err.. a grain*
19:47 ProT-0-TypE how can I validate an ip address in a jinja template?
19:48 iggy MTecknology: tried grains caching?
19:49 _JZ_ joined #salt
19:50 ProT-0-TypE (or in a different way: how can I call salt.utils.validate.net.ipv4_addr from a jinja template?)
19:50 MTecknology I haven't, and sure wouldn't mind a link to help me get started with that. I'm more concerned that the grain seems to need refreshing that much because I know grain return values are supposed to at least be cached for a highstate run.
19:50 irctc496 UtahDave: I don't think this is a PR? Is it? I've no idea how to implement that option in the master. I just added a hard-coded value of 8192. But when you serve tens of thousands of clients with varnish, nginx and others you may need bigger values. Also your kernel must support this. So it needs to be a configurable option and i don't know how to implement that. I'm no python guy.
19:50 monokrome joined #salt
19:52 UtahDave Hm. OK. Let me check if I can find someone to shepherd this through
19:54 scoates joined #salt
19:54 irctc496 UtahDave: i just hacked that value in because it was driving me nuts for a week why i'm not able to serve enough minions. And i still don't understand how it works for others. I can clearly see that the kernel drops the returns due to queue overflow.
19:55 keltim joined #salt
19:55 UtahDave irctc496: OK. just got approval to assign that to a core engineer.
19:57 akhter joined #salt
19:57 UtahDave irctc496: Got some very grateful praise for your find, irctc496!  Thanks!
20:00 iggy MTecknology: https://docs.saltstack.com/en/latest/ref/configuration/minion.html#grains-cache
20:00 irctc496 UtahDave: i'm still not sure it is a find. I can't believe that a big product like salt misses something like this. Maybe i missed something and the default value of 100 is fine for salt.
20:02 UtahDave well, with all the moving parts of both zmq and salt, sometimes things can fall through the cracks. They'll test and evaluate this closely, but their initial quick review of what you posted looks very promising to them.
20:03 irctc496 UtahDave: then i'm proud of the find ;-) hoping to make salt better.
20:04 MTecknology iggy: heh...
20:04 Bryson joined #salt
20:05 MTecknology iggy: I'll betcha that's the problem and holy crap I feel silly now but also holy crap thanks and also ... yay!
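
For reference, the minion-side options iggy links to (the expiration value is just an example). With grains_cache enabled the rendered grains are written to disk and reused until they expire, so an expensive custom grain is not recomputed on every loader refresh:

    # /etc/salt/minion.d/grains_cache.conf
    grains_cache: True
    grains_cache_expiration: 300   # seconds
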
20:11 afics joined #salt
20:12 dyasny joined #salt
20:13 Brew1 joined #salt
20:16 irctc496 utahdave: ok will have a PR in a few minutes - python seems easy
20:18 sh123124213 I wonder if that change is going to improve file uploads and downloads
20:20 kulty joined #salt
20:22 sjorge joined #salt
20:22 sjorge joined #salt
20:22 pcn Is there a way to only have a service.dead invoked iff the service exists?  Using onlyif is causing an error that the service doesn't exist (which  is great, but...)
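
Not answered in the channel, but one common pattern for pcn's case is to gate the state in Jinja on service.available rather than using onlyif ('foo' is a placeholder service name):

    {% if salt['service.available']('foo') %}
    stop_foo:
      service.dead:
        - name: foo
    {% endif %}
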
20:28 stooj joined #salt
20:30 irctc496 utahdave: https://github.com/saltstack/salt/pull/38212
20:30 saltstackbot [#38212][OPEN] ZMQ: add an option for zmq.BACKLOG to salt master (zmq_backlog) | What does this PR do?...
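
For reference, a sketch of how the new option would be used, assuming the PR lands under the name in its title; the value is illustrative, and the kernel's own limit has to be raised to match, as irctc496 notes:

    # master config (assumes the zmq_backlog option from PR #38212)
    zmq_backlog: 1024

    # the kernel caps the effective backlog:
    #   sysctl -w net.core.somaxconn=1024
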
20:31 akhter joined #salt
20:31 promorphus joined #salt
20:34 fracklen joined #salt
20:35 promorphus Hey guys, I've got an issue with a state using dockerng.copy_from, which attempts to copy a file from a created container. The normal docker functionality for this requires the container be 'created', but not 'running'. However, invoking the dockerng.copy module on a 'created' container gives me the error 'container not running'. Anyone know if this is intentional, or a way around it?
20:35 Brew joined #salt
20:35 viccuad joined #salt
20:42 lws joined #salt
20:44 dyasny joined #salt
20:46 blu__ joined #salt
20:48 lws joined #salt
20:53 dyasny joined #salt
20:53 lumtnman joined #salt
20:56 keimlink joined #salt
21:02 xbglowx joined #salt
21:04 woodtablet joined #salt
21:05 woodtablet left #salt
21:07 lumtnman I am curious about what folks' test environments look like for testing changes to .sls files before committing them into the 'dev' git branch. Maybe using docker to spin up a quick version of the minion? Open to any and all ideas
21:09 stooj joined #salt
21:12 FroMaster joined #salt
21:12 akhter joined #salt
21:14 sarcasticadmin joined #salt
21:16 whytewolf lumtnman: I have an openstack cluster just for testing. letting me setup all sorts of silly situations
21:17 whytewolf i do sometimes play around with vagrant though as well
21:18 austin_ if there is already a minion key, and you do a fresh install of salt-syndic, will salt try to override that key ?
21:19 UtahDave austin_: I think the key will stay there.
21:19 austin_ ok cool. that makes life better
21:19 whytewolf iirc syndic doesn't even consider any minion settings
21:19 lumtnman whytewolf: huh ok, you all just have a testing platform separate from dev, qa, or prod?
21:21 whytewolf lumtnman: sorry. I guess I should preface with I am a home user with extra time. and used to have extra money.
21:22 whytewolf my openstack cluster is my home lab. I have a dev,qa and prod bit in it. but also spin up a playground in it for doing my own testing and devel work
21:22 whytewolf separate tenants
21:23 whytewolf [and soon separate domains for the environments, if i can get salt to play nice with openstack domains]
21:23 sp0097 joined #salt
21:23 lumtnman whytewolf: hahaha gotcha ok, that makes sense
21:23 lumtnman resources are not a problem, just the easiest most streamlined design is desired
21:24 stooj joined #salt
21:27 mavhq joined #salt
21:29 ninjada joined #salt
21:31 mohae joined #salt
21:34 jas02_ joined #salt
21:35 austin_ was there any reason why the token was not good enough for generating new keys?
21:35 austin_ instead of having to supply the user/pass/eauth
21:36 stooj joined #salt
21:37 debian112 joined #salt
21:49 tercenya joined #salt
21:52 onlyanegg joined #salt
21:54 drawsmcgraw joined #salt
21:55 tercenya joined #salt
21:56 fracklen joined #salt
21:56 fracklen joined #salt
22:03 lws joined #salt
22:09 xbglowx joined #salt
22:11 ninjada joined #salt
22:13 fracklen joined #salt
22:14 akhter joined #salt
22:15 sh123124213 joined #salt
22:15 sh123124213 joined #salt
22:15 nidr0x joined #salt
22:15 teclator_ joined #salt
22:16 ProT-0-TypE joined #salt
22:16 UtahDave austin_: can you not generate new keys using the token?
22:17 nickabbey joined #salt
22:17 austin_ unless i pass params like [username, password and eauth]
22:17 austin_ when passing token, it says no authentication creds passed
22:17 austin_ bug?
22:21 UtahDave austin_: hmm.  Let me check.  Can you pastebin what you've tried? and the external_auth section of your master config?  (sanitized)
22:22 austin_ yup. in the process of doing that now. this is all on vagrant so nothing important :)
22:23 UtahDave k
22:29 lws joined #salt
22:30 sjorge joined #salt
22:30 sjorge joined #salt
22:34 austin_ https://gist.github.com/austinpapp/948729bf76ed950f5507621ed89f2959
22:35 austin_ this is just for testing right now
22:35 austin_ as i work out the details of my cloud-config config
22:36 austin_ ideally, when i boot up a new instance via terraform, it would just re-build the keys. this should be good enough until i can build out a more central authority for keys.
22:36 sh123124213 joined #salt
22:36 austin_ but i was asking about the overriding of keys before because i have an RBD volume mounted on my instance. that would already have the key. so the idea there is i shouldn't have to rebuild that key. it's there
22:37 writtenoff joined #salt
22:38 austin_ hopefully its something stupid like how i'm calling it
22:38 austin_ but if you have anything further to add let me know in comment or here. i gotta run and will check the logs when i can back home
22:38 austin_ thanks for any help upfront
22:40 netcho joined #salt
22:50 lws joined #salt
22:51 scoates joined #salt
22:52 pipps joined #salt
22:58 writtenoff joined #salt
23:02 drawsmcgraw Anyone have any good tips on using pygit2 for the GitFS backend on Centos 7?
23:02 drawsmcgraw I'm in the third ring of hell, trying to get pygit2 properly installed
23:08 moooooo joined #salt
23:08 moooooo hey, what is the best way to run a command before restarting a service for a "watch" change?
23:08 moooooo for example, I want to run /etc/init.d/sensu-server validate_config prior to restarting the service after a config change
23:09 tercenya joined #salt
23:09 drawsmcgraw moooooo: Insert a cmd.run between your changed file and your service.restart?
23:10 drawsmcgraw And make the service.restart conditional on the cmd.run's success
23:18 moooooo so watch_in: cmd: /etc/init.d/sensu-server validate_server on the file then on the cmd definition, watch_in: sensu-server ?
23:21 drawsmcgraw moooooo: something like that, yes. Assuming 'watch_in' only fires on the watched state's success.
23:21 drawsmcgraw *also* assuming that `sensu-server validate` actually returns 0 on success.
23:21 onlyanegg joined #salt
23:21 moooooo yea it returns the proper return code
23:22 drawsmcgraw For posterity, my pygit2 fix is to manually build/install libgit2 via the docs, then pip install pygit2, and finally to run `ldconfig` to update the lib locations.
23:22 drawsmcgraw moooooo: Good to know. I've dealt with other systems that don't (I'm looking at *you*, Docker)
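
A sketch of the chain drawsmcgraw describes; the config path and source are assumptions, the validate command is the one moooooo quotes. The file change triggers validation via watch, and the require blocks the restart if validation fails:

    sensu_server_config:
      file.managed:
        - name: /etc/sensu/conf.d/server.json
        - source: salt://sensu/files/server.json

    validate_sensu_config:
      cmd.wait:
        - name: /etc/init.d/sensu-server validate_config
        - watch:
          - file: sensu_server_config

    sensu-server:
      service.running:
        - watch:
          - file: sensu_server_config
        - require:
          - cmd: validate_sensu_config
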
23:23 justanotheruser joined #salt
23:31 akhter joined #salt
23:36 drawsmcgraw left #salt
23:41 sh123124213 joined #salt
23:44 hemebond Wow, 2016.11 broke postgres too.
23:44 hemebond Time to figure out downgrading.
23:44 sh123124213 joined #salt
23:45 onlyanegg joined #salt
23:49 pipps joined #salt
23:50 N-Mi joined #salt
23:50 jgarr joined #salt
23:51 jgarr this really has me confused. When I run puppet on a system (via puppet agent -vt) everything works fine. When I use salt 'system' puppet.run some of the custom facts don't load and the puppet runs fail. Anyone know how/why that would be happening?
23:53 wwalker What is the correct syntax/filter/test/??? for this:  {% if str1 in str2 %}
23:58 wwalker never mind...   I had some typo somewhere, it works now....
23:59 wwalker I'm wrong again.  it is broken, it was my other state that is working.
