IRC log for #salt, 2018-04-09

All times shown according to UTC.

Time Nick Message
00:00 onslack joined #salt
00:25 rockey joined #salt
00:43 lkthomas__ left #salt
01:58 ilbot3 joined #salt
01:58 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2017.7.5, 2018.3.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
02:08 tiwula joined #salt
02:26 monokrome joined #salt
02:53 JPT joined #salt
02:57 ponyofdeath joined #salt
03:58 evle joined #salt
04:32 om2 joined #salt
04:36 lompik joined #salt
05:05 Hybrid joined #salt
05:08 mavhq joined #salt
05:22 eseyman joined #salt
05:23 Hybrid joined #salt
06:07 colttt joined #salt
06:15 briner joined #salt
06:25 pppingme joined #salt
06:25 Tucky joined #salt
07:13 aldevar joined #salt
07:25 Hybrid joined #salt
07:30 Hybrid joined #salt
07:42 darioleidi joined #salt
07:42 DanyC joined #salt
07:43 jrenner joined #salt
07:49 Pjusur joined #salt
07:51 eekrano joined #salt
08:06 mikecmpbll joined #salt
08:12 hoonetorg joined #salt
08:13 briner joined #salt
08:35 briner joined #salt
08:36 DanyC joined #salt
09:03 mikecmpb_ joined #salt
09:23 Elsmorian joined #salt
09:29 Elsmoria_ joined #salt
09:32 DanyC joined #salt
09:38 dsawww1 joined #salt
09:38 dsawww1 left #salt
10:04 edrocks joined #salt
10:19 antranigv joined #salt
10:20 antranigv hey all! I'm wondering if there's a preferred Salt webui?
10:21 antranigv I basically need to setup something for my devs, so they can automatically setup FreeBSD Jails, update DNS, etc, I have all the scripts on my box, but I want to use Salt for easy management and orchestration :)
10:38 megamaced joined #salt
10:40 Rr4sT joined #salt
10:41 hemebond antranigv: Saltstack Enterprise has a web UI component I believe.
10:42 hemebond There have been a couple of open source web UIs but I'm not sure how they're getting on to be honest.
10:42 hemebond And I don't think they're that widely used.
10:49 Younder Still no support in vi syntastic for .sls files? How does it work to allow YAML and ignore JINJA?
10:50 Younder How is EMACS support?
10:54 aviau joined #salt
10:55 onslack <msmith> there are vim syntax files i believe
10:55 onslack <msmith> also atom
11:07 briner joined #salt
11:24 aldevar joined #salt
11:29 Elsmoria_ @antranigv: https://github.com/martinhoefling/molten might be an option?
11:40 babilen Is that still maintained?
11:43 evle2 joined #salt
11:44 antranigv hemebond: I'll check all that I see. I'm gonna assume that Saltstack enterprise is not free?
11:45 antranigv I've been looking a lot on Salt for the last day, I already like it.
11:47 babilen Enterprise is not free, no
11:48 antranigv probably not the best place to ask, but. is it expensive?
11:48 onslack <msmith> from what i can tell, most people using salt are command-line users, and the salt config is entirely text (mostly yaml)
11:48 onslack <msmith> i've seen a couple of people use a ui to graph history, based on exporting salt job results to a database, but not for  configuring salt in the first place
11:49 antranigv yea I love command-line too. in that case, how to give restricted access to my devs to just push two, three buttons? ZFS clone, new node, start some script :) or is it gonna be overkill with Salt?
11:50 onslack <msmith> the way we plan it is to use git branches and use merge requests to deploy. actually executing salt  states can be done in lots of different ways. for example a slack channel
11:51 LevitusCommander joined #salt
11:51 onslack <msmith> or you could use a wrapper around the api, or post events, although neither of these natively have access control
11:52 onslack <msmith> actually i lie, i think the api does have something. i haven't used it
11:52 LevitusCommander Hello! Can salt-cloud maps / configuration be stored in a gitfs?
11:53 onslack <msmith> antranigv: take a look at <https://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_cherrypy.html>
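A minimal master-side sketch of the rest_cherrypy setup msmith points at; the port, cert paths, eauth account, and ACL below are all invented, and the PAM user would need to exist on the master:

    # /etc/salt/master.d/api.conf (hypothetical values)
    rest_cherrypy:
      port: 8000
      ssl_crt: /etc/pki/tls/certs/localhost.crt
      ssl_key: /etc/pki/tls/certs/localhost.key

    external_auth:
      pam:
        devuser:            # hypothetical PAM user for the devs
          - 'jail*':        # minions this user may target
            - state.apply   # functions this user may run

With something like that in place, a dev can trigger a run over HTTP without shell access to the master.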
12:05 exarkun joined #salt
12:08 Nahual joined #salt
12:12 xet7 joined #salt
12:16 zer0def hnh, so apparently `boto_asg` keeps reporting being throttled by AWS on Salt 2018.3.0, while working fine on 2017.7.4
12:49 crux-capacitor joined #salt
12:51 zer0def i'll report this some time later this week, got a bunch of other concerns atm
12:56 xet7 joined #salt
12:58 pf_moore joined #salt
12:59 crux-capacitor joined #salt
13:00 viq antranigv: also you may have a look at rundeck
13:06 zer0def or basically anything that interacts with the salt-api for you
13:06 zer0def might as well be a jenkins job with salt plugin
13:08 mbrgm joined #salt
13:11 edrocks joined #salt
13:13 cewood joined #salt
13:13 deuscapturus joined #salt
13:14 mbrgm hey! I want to start using salt for our windows clients, installed salt-minion on them... test.ping is working, but most of the modules (win_ip, win_system etc. etc.) are not available... what could be the problem here?
13:15 mbrgm I installed the latest client (2018.3), master is running 2018.3 as well
13:17 babilen You might have to install Python modules on the minions for those to work
13:17 Elsmorian joined #salt
13:18 onslack <msmith> the win modules are nearly all aliased, so try without the win_ prefix. the examples should show how to use them
13:18 edrocks joined #salt
13:19 babilen mbrgm: What are you trying exactly and what is the outcome?
13:20 babilen Paste commands and output on one of http://paste.debian.net, https://gist.github.com, http://sprunge.us, …
13:20 mbrgm babilen: I think you might be right about the python modules... I just installed the minion package, but nothing else
13:20 babilen It really is impossible to say without knowing more details
13:21 onslack <msmith> i run windows minions, i've never had to install anything extra
13:21 babilen Which command did you run and what happened?
13:21 msmith joined #salt
13:21 onslack <msmith> more information would indeed help
13:22 mbrgm e.g. http://paste.debian.net/hidden/8cb4c399/
13:22 mbrgm other modules are working, e.g. file.readdir or cmd.run
13:22 babilen https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.win_system.html#salt.modules.win_system.get_computer_name
13:23 babilen That would be "salt 'minion-id' system.get_computer_name"
13:23 babilen Rather than the "win_system.get_computer_name"
13:23 onslack <msmith> the example under that command shows exactly how to call it, and indeed you don't put in the win_ prefix
13:23 mbrgm babilen: yup, the 'system.' version works
13:23 mbrgm thx
13:23 babilen RTFM! ;)
13:24 babilen Enjoy your fancy new architecture :D
13:24 onslack <msmith> babilen: clearly you're the only one who helped, too. congrats.
13:24 mbrgm babilen: I've been using salt for like 3 years now for linux, so you guess I read the FM
13:24 babilen onslack: What? It was your suggestion
13:24 mbrgm ;)
13:24 mbrgm onslack: thx too
13:26 onslack <msmith> sometimes i think people ignore the bridge simply because the bot has a weird way to bridge those of us using slack
13:26 babilen Well, this is an IRC channel and not Slack
13:26 babilen (I'd use a Slack channel if I wanted Slack)
13:26 babilen err
13:26 babilen *client naturally
13:27 onslack <msmith> for you, yes. for me it's a slack channel and not irc, along with the other slack channels that aren't bridged to irc
13:27 onslack <msmith> but rather than continue a discussion that simply has no end, let's just agree to disagree on which is best
13:27 zer0def well, you could've used irc with znc, but Slack's apparently flipping off xmpp and irc gateways starting last month, effective in May
13:27 babilen No, you are participating in an IRC channel using a suboptimal IRC client that you can't even configure to use your username
13:28 babilen I understand why people who primarily hang out on Slack would want to do that, but I still do not like the outcome
13:29 onslack <msmith> as i said, simply different views. but can you understand that when i get ignored for offering assistance for free then i'm far less likely to do it in future?
13:29 babilen I mean we also don't post every reddit.com/r/saltstack post to the mailing list (or Slack), nor do we create pastebins with salt-user mails and put links here ..
13:29 zer0def as far as i can tell, Slack's just another communicative walled garden among many currently available…
13:30 babilen exactly
13:30 gh34 joined #salt
13:31 babilen onslack: Sure, the same would hold true if you had joined this channel directly. If you are being ignored it is unlikely to encourage you to participate further
13:31 babilen (which would be a shame really)
13:31 zer0def now, if they weren't dropping "legacy" protocol gateways, the walled garden argument wouldn't have a place, the same way it wouldn't have with gtalk ;)
13:31 babilen If you are being ignored because *you* choose to use a suboptimal client, then there is something you can do about that
13:31 babilen (run IRC)
13:31 zer0def actually, not gtalk, fbchat
13:31 onslack <msmith> fine. i'm done. bye
13:32 babilen I can assure you that it is not due to the content of your messages nor your manners (both are excellent)
13:32 babilen heh
13:32 zer0def well, rip
13:42 racooper joined #salt
13:58 shiranaihito joined #salt
14:02 briner joined #salt
14:07 exarkun do I have to include another state sls if I want to `require` something from it by id?
14:07 cgiroua joined #salt
14:07 hemebond Yes
14:08 lordcirth_work joined #salt
14:08 hemebond (not actual "include" but it must be part of the highstate)
14:08 exarkun so if it's referenced anywhere, directly or indirectly, from top.sls then that's good enough?
14:08 hemebond yip
14:09 hemebond Wait... indirectly?
14:09 exarkun does it hurt anything to `include` things that are already referenced?
14:09 exarkun indirectly like top.sls referenced foo and foo `include`s bar
14:09 hemebond If you `include` the same SLS more than once you will end up with duplicate states and it'll all fail.
14:09 exarkun ah sad.
14:09 hemebond Wait....
14:10 hemebond I was thinking of the Jinja2 include.
14:10 exarkun ah happy.
14:10 hemebond The Salt include is safe to use multiple times.
14:10 exarkun :)
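A small illustration of what was just established, with invented file names and state IDs: the Salt `include` is idempotent, and a `require` by ID works as long as the SLS declaring that ID is part of the highstate:

    # bar.sls
    bar_config:
      file.managed:
        - name: /etc/bar.conf
        - source: salt://bar/bar.conf

    # foo.sls
    include:
      - bar            # safe even if another SLS also includes bar

    use_bar:
      cmd.run:
        - name: echo "runs after bar_config"
        - require:
          - file: bar_config    # ID declared in bar.sls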
14:19 dendazen joined #salt
14:21 mbrgm left #salt
14:27 heaje joined #salt
14:37 DanyC joined #salt
14:40 mavhq joined #salt
14:51 cbosdonnat joined #salt
14:51 cbosdonnat Hello
14:52 cbosdonnat I'm rather new to salt's code base and I wonder what are config.get and __opts__
14:52 cbosdonnat I though those were minion config, but seems it's more subtle than that
14:54 Rumbles joined #salt
14:56 spiette joined #salt
14:56 viq I think config.get can also be fed from pillars
14:57 cbosdonnat ok
14:57 viq But it's only a moderately educated guess
14:57 cbosdonnat can __opts__ get from other places too?
14:58 sjorge joined #salt
14:58 viq I will let someone who has some idea answer that
15:02 peters-tx MTecknology, you around?
15:18 cbosdonnat viq, after some more debug statements, I figured out my problem.... I confused __opts__['virt.connect'] with 'virt:connect' in config.get
15:18 cbosdonnat I think I'll need to rationalize the virt.py connection setup
15:19 sjorge joined #salt
15:20 tiwula joined #salt
15:20 cbosdonnat viq, BTW I found a nice doc about that: doc/topics/development/dunder_dictionaries.rst
15:24 vali joined #salt
15:27 noobiedubie joined #salt
15:30 dezertol joined #salt
15:32 viq I'm too green in python to attempt ;)
15:33 quantumsummers joined #salt
15:34 crux-capacitor joined #salt
15:45 whytewolf cbosdonnat: looks like you found the doc for __opts__. here is config.get https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.config.html#salt.modules.config.get
15:45 gmoro joined #salt
15:45 cbosdonnat whytewolf, thanks a lot
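Roughly the distinction cbosdonnat ran into, per the two docs just linked: __opts__ is the parsed minion (or master) configuration dictionary, while config.get additionally falls back through grains, pillar, and the master config, and understands colon-delimited nesting. A hedged illustration with an invented option:

    # /etc/salt/minion
    virt:
      connect: qemu:///system

    # inside an execution module:
    #   __opts__['virt']['connect']             -> 'qemu:///system'
    #   __salt__['config.get']('virt:connect')  -> 'qemu:///system'
    #   __opts__['virt.connect']                -> KeyError (that would be a flat
    #                                              key literally named 'virt.connect')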
15:52 MTecknology peters-tx: nope
15:52 viq I'm looking through https://github.com/saltstack/salt/blob/e974cf385d3ac0b86959a0eca88d96c9e3527ec3/salt/modules/consul.py and I don't see anywhere to put cert and key that I need to authenticate to consul, how should I go about it?
15:56 crux-capacitor anyone know why salt master waits for non-responding minions to return even when targeting one specific minion? and can that be disabled?
15:56 viq How are you "targeting one specific minion"?
15:57 viq By explicit name, or grains, pillars or other means?
15:57 crux-capacitor various ways, but just now, i did "salt -S x.x.x.x grains.item kernel", and that minion returned, but the master waits for all the non-responding minions
15:58 viq Then you're not targeting a single minion
15:58 viq Only way you'd be targeting a single minion would be "salt minion_id grains.item kernel"
15:59 viq Everything else is "hello, everyone out there, if you'd be so kind to respond to this query if those parameters match you"
15:59 Edgan crux-capacitor: With a grain match it has to pull the grains from all the minions to match them. Grains are generated minion side.
15:59 viq Possibly with exception of pillar targeting, but I'd have to read up on that
16:00 crux-capacitor indeed you're right!
16:00 viq Edgan: I don't think it's master-side processed like you're describing
16:00 crux-capacitor i guess i was forgetting how the msg bus works. thanks
16:01 Edgan viq: How else would it work?
16:01 viq < viq> Everything else is "hello, everyone out there, if you'd be so kind to respond to this query if those parameters match you"
16:01 Pomidora What's an easy way to count how many minions match a role?
16:01 Pomidora I'm going to shove this into JINJA
16:02 Edgan viq: Then why is it waiting for minions that aren't responding?
16:02 viq Edgan: broadcast query with parameters, minions decide whether they match or not and therefore execute and return or not
16:03 viq Edgan: because it's broadcast to everyone, and it's waiting for them to return
16:03 Edgan viq: If it is more like you describe it shouldn't assume minions are there
16:03 DanyC joined #salt
16:04 viq That's why there's timeout
16:04 viq Time for me to run, cya!
16:04 MTecknology I /think/ even pillar targeting is handled that way.
16:06 Edgan But then, pillars are master side
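The difference viq is describing, in command form (minion ID invented):

    salt 'web01' grains.item kernel          # exact ID: the master knows who should answer
    salt -S 10.0.0.0/24 grains.item kernel   # match-based: published to everyone; each minion
                                             # decides whether it matches, so the master waits
                                             # out the timeout for minions that never answer
    salt -t 5 -S 10.0.0.0/24 grains.item kernel   # a shorter timeout (-t) trims the wait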
16:11 DammitJim joined #salt
16:13 DanyC joined #salt
16:13 LevitusCommander joined #salt
16:15 DanyC_ joined #salt
16:20 v0rtex joined #salt
16:32 sjorge joined #salt
16:45 gmoro joined #salt
16:52 englishm_work joined #salt
16:53 sjorge joined #salt
16:58 noobiedubie joined #salt
17:04 edrocks joined #salt
17:08 cewood joined #salt
17:18 onlyanegg joined #salt
17:22 onlyanegg joined #salt
17:36 mikecmpbll joined #salt
17:37 schemanic joined #salt
17:38 shadoxx joined #salt
17:40 schemanic Hey, I'm getting the following errors when I try to commission new servers from salt-cloud. Something about an instance id not existing, however I'm not doing anything involving an instance existing before I begin. https://ghostbin.com/paste/8z6fz
17:42 MTecknology what's with the caps?
17:42 MTecknology (likely not relevant)
17:42 schemanic Caps?
17:43 MTecknology TSAP-C OTTSAPVMXXXA
17:43 schemanic like, in my minion ids?
17:43 schemanic Oh, just convention I suppose
17:43 schemanic all my hostnames and minion ids are all caps
17:43 MTecknology weird
17:43 schemanic for legibility's sake
17:43 schemanic each is a code
17:44 schemanic or rather, blocks of it are codes
17:44 schemanic On the side, can anyone tell me how to find the version of the salt-bootstrap script salt-cloud is using?
17:45 Edgan schemanic: your error makes me think salt-cloud isn't waiting long enough to check for the instance id. AWS apis have multiple frontends and they don't sync instantly. They also vary in how long they take based on load.
17:45 Edgan schemanic: This is a common problem with other tools like packer.
17:46 Edgan schemanic: AWS calls it eventually consistent, and it requires waits and retries.
17:46 schemanic Edgan, I agree with you. What I changed was an addition to my cloud profile: allocate_new_eip: True.
17:47 schemanic Why doesn't that work?
17:47 Edgan schemanic: It may not be anything changed, but this time it happened to be a little slower.
17:47 schemanic Edgan, no I know it's because something changed because I ran a profile not containing that command and it ran fine
17:48 schemanic also Edgan do you know where the bootstrap script lives?
17:48 Edgan schemanic: The other possibility is that the allocation of the ip, or something else failed for that instance. Hence why the id couldn't be found.
17:48 Edgan schemanic: I would say that is an assumption. Even if it happened to fail the first time after you added eip.
17:49 Edgan schemanic: But you could still be right
17:49 Edgan schemanic: Are you out of eip quota?
17:49 Edgan schemanic: that might do it
17:49 schemanic No I'm not
17:49 Edgan schemanic: How many times have you tried this?
17:50 schemanic like three
17:50 Edgan schemanic: -l trace on salt-cloud might help
17:50 Edgan schemanic: See what is going on in the background with the apis
17:50 XenophonF a watch requisite on a module.wait state should work like it does with a service.running state, right?
17:50 schemanic Edgan, does that mean 'type "-l trace salt-cloud"'?
17:50 Edgan XenophonF: I can show you a working example
17:51 schemanic Or do you mean to call salt-cloud -l debug?
17:51 Edgan schemanic: salt-cloud -l trace
17:51 XenophonF the following extend doesn't seem to have the effect I want, which is to make the `module: apache-restart` state watch a bunch of other states in another SLS
17:51 XenophonF https://github.com/irtnog/satosa-formula/blob/master/apache/satosa.sls
17:51 schemanic I see
17:51 XenophonF any clues as to what I'm missing?
17:51 Edgan XenophonF: https://pastebin.com/kUYmXLCG
17:52 XenophonF thanks Edgan looking...
17:52 Edgan XenophonF: As far as I know, that works for me
17:52 XenophonF maybe i'm suing extend wrong?
17:52 XenophonF using
17:53 noobiedubie joined #salt
17:53 Edgan XenophonF: what does the original look like?
17:53 Edgan XenophonF: I think I see your problem
17:54 noobiedubie having trouble setting smtp.settings through minion pillar so far this is my pillar file https://paste.debian.net/hidden/48828419/
17:54 noobiedubie and my pillar top file https://paste.debian.net/hidden/836ce56a/
17:54 noobiedubie not seeing it propagate to minions but not getting any errors in logs either
17:54 Edgan XenophonF: Note, salt will silently ignore typoed attribute names in states. module.wai is ignored, whereas module.wait works
17:55 Edgan noobiedubie: Your first link doesn't work
17:56 Edgan noobiedubie: The space in the grain value might be breaking it.
17:57 Edgan noobiedubie: I also prefer the G@osfinger:CentOS Linux-7 syntax for grain matching
17:57 Edgan noobiedubie: You are also breaking a big rule
17:58 Edgan noobiedubie: No grain matching in pillars for sensitive data like passwords, '- passwords.common'
17:58 Edgan noobiedubie: minions generate grains, and can fake them, and then get access to any passwords matched by grains
17:59 Edgan noobiedubie: {% set hostname = grains['id'] %}   You are on the right road with this.
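Putting Edgan's points into a sketch (paths and IDs invented): compound grain matches are fine for non-sensitive pillar, but anything secret should be keyed on the minion ID, which the master verifies against the accepted key:

    # pillar/top.sls
    base:
      'G@osfinger:CentOS Linux-7':
        - match: compound
        - smtp.localhost        # non-sensitive data: grain match is acceptable
      'db01':                   # exact minion ID: use this for secrets
        - passwords.db01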
17:59 noobiedubie https://paste.debian.net/hidden/4822342a/
17:59 noobiedubie first link again
18:00 noobiedubie thanks for the password tip
18:00 noobiedubie as for the grain value all the other pillars in that list work so I don't think it's that
18:01 Edgan noobiedubie: https://pastebin.com/zfuJEHam  This is the method taken much further
18:01 whytewolf noobiedubie: how are you fetching the pillar?
18:02 Edgan noobiedubie: You should include the path and filename in your first link. Otherwise we can't make sure it would match in theory.
18:02 schemanic is allocate_new_eip: True supposed to also associate the IP with the instance as well?
18:04 noobiedubie everything is in pillar/ directory pillar file (second link) is at smtp.localhost.sls and the smtp directory is at the same level as the pillar top file pillar/top.sls
18:05 noobiedubie so pillar/top.sls and pillar/smtp/localhost.sls
18:05 schemanic_ joined #salt
18:05 om2 joined #salt
18:06 whytewolf noobiedubie: how are you fetching the pillar?
18:06 schemanic_ is allocate_new_eip: True supposed to also associate the IP with the instance as well?
18:06 schemanic_ sorry if thats doublepost - i got DCed
18:07 noobiedubie well i'm checking to see if it is in pillar.items using a salt-call on one of the minions
18:08 whytewolf schemanic_: in the docs. "allocate a new elastic ip address to this interface, will be associated with primary private ip address on the interface"
18:08 noobiedubie but the idea is to call the profile from a reactor state on certain events
18:09 schemanic_ yes I read that, but that's not happening now when I use it, and I've often gone and read the docs and heard here that the documentation is not quite representative of the situation
18:09 schemanic_ I just want to know if there's a known 'this is how it REALLY works, even though *thats* how they say it works'
18:10 XenophonF Edgan: do I have a typo somewhere?
18:11 Edgan XenophonF: module vs module.something?
18:11 Edgan XenophonF: Can you show me the original state it is extending?
18:12 XenophonF according to the docs, you specify the state module - https://docs.saltstack.com/en/latest/ref/states/extend.html
18:12 XenophonF here's the satosa SLS: https://github.com/irtnog/satosa-formula/blob/master/satosa/init.sls
18:13 XenophonF here's the apache SLS: https://github.com/saltstack-formulas/apache-formula/blob/master/apache/init.sls
18:13 XenophonF goal is to glue the two together and trigger a web server restart upon any config changes in the web app
18:14 Edgan XenophonF: I don't see a restart in that
18:15 Edgan XenophonF: I would try module.wait in the extend
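A sketch of the shape Edgan is suggesting, with invented state IDs; whether extend accepts the dotted module.wait form can vary by Salt version, so treat this as untested:

    # apache/satosa.sls
    include:
      - apache
      - satosa

    extend:
      apache-restart:
        module.wait:
          - watch:
            - file: satosa-config    # a state ID declared in satosa/init.sls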
18:17 schemanic_ Hello, I am seeing example configuration at this page here: https://docs.saltstack.com/en/latest/topics/cloud/aws.html#required-settings. I would like to read documentation on each possible property and how it is meant to be used. Can someone explain where to find this kind of documentation?
18:18 Edgan schemanic_: If you can't find it on the website, it is probably in the code only
18:19 schemanic_ Well I'm wondering if there is something I am missing? Would this be documented in the modules section or something?
18:19 Edgan schemanic_: https://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.ec2.html
18:20 schemanic_ I need to know why some examples have things like subnetid on their own, but here I'm seeing it appear under a property called 'network_interfaces'
18:21 schemanic_ Edgan, your link is insufficient. It does not have the word 'network_interfaces' anywhere on the page
18:21 schemanic_ I don't know how to use it to find out what I need
18:21 Edgan schemanic_: But it is the docs for the ec2 driver
18:21 Edgan schemanic_: I have an idea
18:22 schemanic_ yes but it does not have a comprehensive list of available keys
18:22 schemanic_ those are the commands that can be run through salt-cloud
18:22 schemanic_ not values that are valid in salt-cloud profiles
18:22 schemanic_ not key names that are valid in salt-cloud profiles rather
18:23 Edgan schemanic_: It is just poorly documented. network_interfaces is an option of request_instance, but the page doesn't document it.
18:24 Edgan schemanic_: code in ec2.py request_instance():  https://pastebin.com/8QFsxXE8
18:24 schemanic_ right, that only gives me more confusion, because you introduced a new term 'request_instance' that I don't recognize
18:24 schemanic_ This is baffling
18:24 Edgan schemanic_: It is an internal function used by salt-cloud when making you an instance, which is the main thing salt-cloud does for you
18:25 Edgan schemanic_: The code reads in the network_interfaces option from the config, and lets you specify multiple network interfaces on your instance.
18:25 schemanic_ yes I understand the high level function of salt-cloud. What is not clear is where the origin and validation of keys come from
18:25 schemanic_ right, but where does it say what you are *allowed* to use
18:25 Edgan schemanic_: I have used multiple interfaces before to have a network interface that can be moved to another instance for failover
18:25 schemanic_ I cant just put 'snarfblat': true and have it mean something
18:26 Edgan schemanic_: If it isn't in the docs, it comes down to the code
18:26 Edgan schemanic_: period
18:26 schemanic_ right but this code doesn't have it in there either
18:26 schemanic_ for example, I see in the documentation something called 'DeviceIndex'
18:26 Edgan schemanic_: salt-cloud is a lesser part of salt, and is in the process of being rewritten to do much more than just instances
18:27 schemanic_ Edgan, that makes me happy. I would like to use it to manage a lot of the things that boto modules do
18:27 Edgan schemanic_: The code does mention DeviceIndex, but that is a carry over from the AWS api
18:27 Edgan schemanic_: that is exactly what they are doing
18:28 schemanic_ That sentence begins to answer my question. You are saying that these terms are derived from another system which is feeding back to salt-cloud
18:28 Edgan schemanic_: https://pastebin.com/n082PWXJ
18:28 schemanic_ Further elaboration is required. What helps illuminate that the external system may say something like 'device_index' and that the salt-cloud system might understand something called 'DeviceIndex'?
18:29 Edgan You run salt-cloud, it calls aws apis, and salt-cloud has to conform to what the apis expect
18:30 Edgan schemanic_: https://pastebin.com/x47dztHH
18:31 schemanic_ So in these cases it is fair to assume that exact terms specified by the external system will be valid. Ergo I will in most cases see a key 'DeviceIndex' in the documentation for the external system, and be able to use it exactly in salt-cloud profiles
18:32 wad joined #salt
18:32 schemanic_ Is it unreasonable to have expected that these things would have been documented?
18:32 Edgan schemanic_: no, but that doesn't make it so
18:32 ymasson joined #salt
18:32 Edgan schemanic_: file an issue, and/or make a pull request
18:34 wad Dumb question time: I've been reading the docs, but I didn't find the answer. I've got a salt-minion running, and a salt master. Using GitHub to store the configs for stuff running on the machines that the minions are running on. Question: If I change something in GitHub, will the minions automatically notice it, and pull down the new config? Or do I need to "salt-call state.highstate" on all the boxen?
18:35 schemanic_ Edgan, now that we have established the operating paradigm, I have an actual question. Do I have to have a 'network_interfaces' block for things like subnetid subnetname or allocate_new_eip to work? I am currently using subnetid outside of network_interfaces. I do not understand what allows what I am using to work while causing an independent allocate_new_eip to fail
18:35 schemanic_ wad: run salt-run git_pillar.update if your pillar data is in a remote git repo
18:36 wad Okay, so the minions won't automatically notice the changes then.
18:36 wad Thanks!
18:36 Edgan schemanic_: I don't think you need network_interfaces. I think the reason for subnetids in network_interfaces would be to have different interfaces in different subnets
18:36 schemanic_ Edgan: Thank you, but now I don't know why it wont allocate elastic IPs
18:36 Edgan wad: you also need a cron job or schedule to run the highstate every so often
18:37 schemanic_ wad: for filesystem changes runalt-run fileserver.update && salt '*' saltutil.sync_all
18:37 schemanic_ whoops
18:37 schemanic_ wad: for filesystem changes run salt-run fileserver.update && salt '*' saltutil.sync_all
18:37 wad Thanks again!
18:38 MTecknology wad: there is an automatic sync. You can see the default master config for the default values.
18:38 Edgan schemanic_: see http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances
18:38 wad Ah, okay. We'
18:38 wad We'll look there.
18:39 Edgan wad: the default gitfs sync is 60 seconds
18:39 Edgan wad: but there is no default run the highstate for me
18:40 noobiedubie joined #salt
18:41 * MTecknology uses a git hook, salt-event, and a reactor to make the master update gitfs/git_pillar and then make the minions sync_all.
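One possible wiring of what MTecknology describes; the event tag and file paths are invented:

    # fired from the git server's post-receive hook:
    #   salt-call event.send 'repo/updated'

    # /etc/salt/master.d/reactor.conf
    reactor:
      - 'repo/updated':
        - /srv/reactor/update_fileserver.sls

    # /srv/reactor/update_fileserver.sls
    update_fileserver:
      runner.fileserver.update

    update_git_pillar:
      runner.git_pillar.update

    sync_minions:
      local.saltutil.sync_all:
        - tgt: '*'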
18:41 Edgan schemanic_: Think about salt-cloud like a more user friendly wrapper for boto functions
18:42 schemanic_ Edgan, I get that, but there's too big of a ? between what I'm seeing in docs and what you've sent. Docs say it's a boolean: true and it does what I think it does, false and it doesn't.
18:43 schemanic_ here there are multiple functions mentioning elastic IPs
18:43 schemanic_ it is not clear what has been translated
18:43 RF_ joined #salt
18:44 Edgan schemanic_: yeah, it isn't awesome. I found salt-cloud too limited. In the past I wrote my own boto wrapper scripts. At my current job we are using terraform.
18:44 Edgan schemanic_: I hope to see the day when salt-cloud can do everything terraform can for EC2 in a more user friendly way.
18:44 schemanic_ can you help me answer why salt-cloud is not doing what I am trying to get it to do though?
18:45 schemanic_ all I want it to do is allocate and attach an eip when I commission a new instance
18:45 Edgan schemanic_: I need to see your salt-cloud for the instance that is causing the problem
18:45 schemanic_ sure. a moment
18:45 sreddy joined #salt
18:46 RF_ We run Salt in masterless mode and would like to log the output of the salt-call runs. I have tried 'salt-call --local <options> 2>&1 | logger -t' but it didn't log anything to /var/log/messages. Am I doing something wrong?
18:48 Edgan RF_: Sounds more like a logger problem than a salt-call problem. If you can echo foo | logger -t  in the same way and that works, I would expect salt-call to work too if it outputs anything
18:48 RF_ Basically I would like to log any state changes, and its run status
18:48 MTecknology you should probably take a look at returners
18:49 schemanic_ https://ghostbin.com/paste/2mddm
18:49 schemanic_ Edgan
18:50 nielsk joined #salt
18:50 whytewolf RF_: https://docs.saltstack.com/en/latest/ref/returners/all/salt.returners.syslog_return.html might be a better option for you
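For a masterless run, the returner whytewolf links can be named right on the command line, which is probably the shortest route to what RF_ wants:

    salt-call --local state.apply --return syslog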
18:51 Edgan schemanic_: Looking at the code, looks like the syntax got changed at some point in the last couple years.
18:52 Edgan schemanic_: You probably want something like https://github.com/saltstack/salt/issues/16919
18:52 Edgan schemanic_: looking for the exact eip option
18:53 Edgan schemanic_: looks like you put allocate_new_eip in the interface section like AssociatePublicIpAddress: True
18:53 schemanic_ didn't we JUST say that I don't need network_interfaces?
18:53 Edgan schemanic_: You don't need it, unless you want to do more advanced things
18:54 Edgan schemanic_: which eip is
18:54 schemanic_ okay, so when I start doing advanced things like that, am I required to do OTHER things to accomplish the same things I've already done?
18:54 RF_ MTecknology: thanks! I will take a look.
18:55 Edgan schemanic_: I really really wish salt threw errors when you put options in the wrong section. Like allocate_new_eip in the main section instead of network_interfaces. It would reduce confusion A LOT.
18:55 schemanic_ like, if I start using network_interfaces, does it break things if I don't use that DeviceIndex thing that no one knows what it does without reading three codebases?
18:55 Edgan schemanic_: But it doesn't, and so we suffer
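For reference, a profile sketch matching what Edgan describes above, with allocate_new_eip inside a network_interfaces entry; every ID here is a placeholder:

    my-ec2-profile:
      provider: my-ec2-config
      image: ami-xxxxxxxx
      size: t2.medium
      network_interfaces:
        - DeviceIndex: 0              # field name carried over from the AWS API
          SubnetId: subnet-xxxxxxxx
          allocate_new_eip: True      # allocate an EIP and associate it with this interface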
18:56 Edgan XenophonF: any luck?
18:56 schemanic_ yes, I'm incredibly angry right now. Grateful to you, angry at the state of things. It's taked me an hour or so to answer something very simple, and I still really don't have an answer
18:56 schemanic_ taken*
18:57 Younder As a side-point Kybernetes saves you a lot of headache in setting up PKI and networking.
18:57 Edgan schemanic_: If you want to use the less used parts of Salt, and have some sanity, you have to be willing to read the code. :\
18:58 Younder Also docker swarms automate much of this. Doing so manually with salt is going to be fiddly.
18:58 Edgan schemanic_: Salt is awesome, but Salt is buggy. The documentation is also incomplete. There are also no great Salt books. I should write one that covers the 50% of stuff all the others don't.
18:58 Pomidora +1 for Salt + k8s
18:59 Edgan Younder: Docker swarm might as well be deprecated now, because Kubernetes is "The Way".
18:59 Pomidora -1 grey hairs
18:59 Edgan Younder: I still wouldn't put stateful things in Kubernetes, yet. Even with Stateful volumes.
19:00 Edgan Younder: Even with Kubernetes, unless you are using a Kubernetes service in AWS or GCP, you have to use something to set it up.
19:00 Edgan Younder: and manage your Kubernetes hosts
19:00 Pomidora Edgan: if there's one thing Salt can be used for
19:00 Edgan Younder: Having tried to go down that road, all the current public Kubernetes setup methods SUCK.
19:00 Pomidora it's pretty good with that
19:00 Younder Edgan, I see swarm taking over the roles of Kybernetes and Kybernetes taking over the role of Docker. I am a bit confused as to why, but to me it seems you could do either.
19:00 wad Hey guys, kind of a long shot, but we're using salt to manage LogStash agents, and we're stuck on installing the kinesis plugin to LogStash. Anyone here find a way to do that?
19:01 aldevar joined #salt
19:01 Edgan Younder: Swarm is pre-Kubernetes, and Docker partnered with Google for Kubernetes
19:02 Edgan wad: If it is something like logstash plugin install kinesis, you want a cmd.run
19:02 wad Yeah, that's exactly what it is. Okay, we'll look into that. Thanks!
19:02 Edgan wad: you also want a creates:  to make sure it doesn't run every time
19:02 wad ok
19:02 schemanic_ joined #salt
19:03 sreddy Can you give an example for it?
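A sketch answering sreddy's question, along the lines Edgan describes; the paths are assumptions that would need checking on the target box. `creates:` wants a file the command reliably drops, and if there is no obvious one, an `unless` check does the same job:

    install-kinesis-plugin:
      cmd.run:
        - name: /usr/share/logstash/bin/logstash-plugin install logstash-input-kinesis
        # either point creates: at a path the install is known to leave behind, or:
        - unless: /usr/share/logstash/bin/logstash-plugin list | grep -q kinesis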
19:04 Younder There is a took called kops which automates setting up Kybernetes of AWS. That might help.
19:12 marwel joined #salt
19:16 nafg joined #salt
19:18 briner joined #salt
19:31 marwel joined #salt
19:37 edrocks joined #salt
19:38 mianosm I think you meant tool, and Kubernetes...so there might be some other ambiguities there.
19:39 LeProvokateur joined #salt
19:39 marwel joined #salt
19:39 cewood joined #salt
19:43 MTecknology Isn't a highstate supposed to effectively call a saltutil.sync_all before taking off?
19:45 ipsecguy joined #salt
19:46 Kelsar joined #salt
19:49 dmaphy joined #salt
19:50 XenophonF OMG I'm such an idiot
19:51 XenophonF Edgan: I moved a bunch of states into a separate formula but haven't merged those changes into the production branch yet.
19:51 XenophonF So my minion is getting the old version apache.satosa SLS file, not the one currently in the formula
19:56 Edgan XenophonF: ah
19:57 Edgan MTecknology: Not sure about sync_all, but grains, yes
19:57 Edgan MTecknology: But I think you can get into a chicken and the egg
19:58 Edgan MTecknology: It renders pillars before grain sync. So if you use grains even indirectly in your pillars, you can get into a state where you need it to sync, but it won't.
20:01 DanyC joined #salt
20:02 noobiedubie joined #salt
20:02 MTecknology ah..
20:02 MTecknology I was looking at modules
20:02 DanyC joined #salt
20:04 thelocehiliosan joined #salt
20:07 ECDHE_RSA_AES256 joined #salt
20:10 mianosm It doesn't seem like there's flexibility to add services to firewalld (my thought was to include firewall modifications in my formulas...so the httpd formula would add the service for that formula, and same for mysql). Is that right?
20:10 MTecknology screw firewalld! long live ferm!
20:10 Edgan MTecknology: ferm?
20:11 MTecknology iptables wrapper, like firewalld but less sucky
20:14 mianosm I'm thinking of sticking cmd.run in my formulas with an unless and checking to see if the service is already listed in the current/default zone; but this feels like terrible form...
20:15 schemanic_ is there a way to upgrade salt minions from the master without losing communication?
20:16 MTecknology dangit! I think I just found a bug in the debian networking stuff.
20:16 * MTecknology runs off to the vet before getting angrier
20:16 babilen schemanic_: "pkg.upgrade" (ensure you have a version with KillMode=process if you use systemd)
20:17 schemanic_ babilen, how would I find out if I have KillMode=process? Also do you mean 'salt '*' pkg.upgrade salt-minion'?
20:18 schemanic_ babilen, I'm following a guide which told me to apply the following state. https://ghostbin.com/paste/ouw7a
20:24 Edgan schemanic_: That method in your paste might work, but I am doubting it
20:24 babilen schemanic_: If the salt-minion is the only package to be upgraded, pkg.upgrade would do just that. If not, you might want to use "pkg.install salt-minion"
20:24 babilen I am not familiar with the state you pasted, but that could work as well
20:25 schemanic_ Edgan, babilen, I mostly am concerned with not losing the salt-minion process or having it come back post upgrade
20:25 Edgan schemanic_: I let it lose the connection, but don't have a problem with it going down
20:25 babilen Regarding KillMode: Check the unit file of the service. You can use something like: http://paste.debian.net/1019468/ if you have a version without the proper settings
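In case the paste above has expired: a drop-in along the lines babilen describes would look roughly like this:

    # /etc/systemd/system/salt-minion.service.d/override.conf
    [Service]
    KillMode=process

    # then reload systemd so it takes effect:
    #   systemctl daemon-reload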
20:25 Edgan schemanic_: it reconnects and the cron job catches it again in half an hour
20:26 schemanic_ Edgan, you manually shell into all of your minions and restart it?
20:26 schemanic_ Edgan, what cron job?
20:26 babilen schemanic_: Well, you might want to run on a subset of minions to begin with.
20:26 Edgan schemanic_: no, salt highstate manages salt-minions
20:26 Edgan schemanic_: the one I created for running the highstate
20:26 babilen And if things go wrong, you can always use salt-ssh with the cache roster on the master
20:26 schemanic_ you say 'the', not 'my' or 'a', ergo 'something that ships with saltstack'
20:26 babilen (assuming you can SSH into your minions)
20:27 Edgan babilen: I found a way to generate a dynamic roster for salt-ssh using ec2.py and the supported ansible roster method
20:27 babilen If you have a working saltmaster already you can use the cache roster
20:27 babilen But sure, there are many ways and rosters
20:28 schemanic_ So, I'm confused. Is there or is there not a built-in way that saltstack uses to automatically reconnect minions to the master
20:28 babilen They normally do that themselves
20:28 babilen Obviously different versions behave differently
20:28 Edgan schemanic_: Minions, if running will auto connect. If the process is dead, you are left on your own.
20:28 schemanic_ Edgan, does the upgrade not kill the process?
20:28 babilen Are you experiencing problems, schemanic_ ? Which version are you upgrading from? Which version are you upgrading to? On how many boxes?
20:29 Edgan schemanic_: yes, but init/systemd will restart it, if that is what you told salt to tell init/systemd to do
20:29 peters-tx MTecknology, ahh, good to know
20:29 babilen schemanic_: NO, the upgrade does *not* kill the process if you have "KillMode=process"
20:29 schemanic_ I am trying to establish a procedure that I should always attempt regardless of those factors
20:29 babilen Which is .. well .. why you want it
20:29 Edgan schemanic_: but if you typo the hostname in /etc/salt/minion, you are SoL without something like salt-ssh
20:30 schemanic_ babilen, apparently Amazon Linux does not use systemd
20:30 babilen The maintainer scripts (Debian) will restart the service during the upgrade, so that you get the new version
20:30 Edgan schemanic_: It is based on CentOS 6
20:30 schemanic_ Edgan, what is based on CentOS 6?
20:30 babilen schemanic_: Are you running Amazon Linux?
20:31 schemanic_ babilen, yes
20:31 Edgan schemanic_: Amazon Linux
20:31 schemanic_ Edgan, I ran pidof systemd && echo "systemd" || echo "other" and found "other"
20:31 schemanic_ ergo, no systemd
20:32 Edgan schemanic_: CentOS/RHEL 7 introduced systemd. I know a few people, myself not included, that consider it a feature.
20:32 Edgan schemanic_: it being Amazon Linux's lack of systemd
20:32 schemanic_ I see. you had meant your statement to carry meaning beyond the exact statement you made
20:33 schemanic_ Implication is a difficulty of mine. I can't detect it in most cases.
20:33 schemanic_ Okay. So we have established that there is no systemd. Does that mean that babilen's statement is not true?
20:34 schemanic_ ergo, "because I have no systemd on Amazon Linux, my minions will not reconnect upon upgrade."
20:34 schemanic_ true/false?
20:35 schemanic_ babilen's statement seems to be that "Unless I can make the system understand the equivalent of KillMode=process, then the minion process will be destroyed upon upgrade."
20:36 schemanic_ I am sorry if I have been tedious.
20:39 Edgan schemanic_: Unknown. He is saying you have to set the KillMode for systemd, but it's unclear if it just works for upstart.
20:39 Edgan schemanic_: your best bet is to test, and see what works for you. Your salt version may also matter. Behavior changes over time.
20:40 Edgan schemanic_: as new versions come out
20:40 schemanic_ mmm. I have run salt 'targets' pkg.upgrade salt-minion
20:40 Edgan schemanic_: I let the highstate handle it for me, and update the map.jinja with the new version number when I wanted to upgrade
20:40 babilen schemanic_: It certainly does not mean that it isn't true, but rather that the necessity to ensure "KillMode=process" doesn't apply as it wouldn't have any effect
20:40 Edgan schemanic_: I need to make 2018.3.0 packages with my patches, and then do this process
20:41 schemanic_ It currently appears to be ... thinking? I'm going from the last 2017.x to 2018.3.0
20:42 babilen All I meant was that it *will* break if you use systemd with a different KillMode setting from "process". I certainly didn't enumerate all possibilities in which it will go right
20:43 schemanic_ hmm
20:43 schemanic_ It seems strange to me that there isn't a prescriptive set of instructions for how one should go about upgrading
20:44 schemanic_ yes. babilen your suggestion did not work. It killed my minion processes
20:46 babilen What exactly was my suggestion?
20:46 babilen I upgraded many minions with "pkg.install salt-minion" on systemd systems with "KillMode=process"
20:47 babilen There are instructions on upgrading it with other systems using "at" to delay actions
20:48 babilen https://docs.saltstack.com/en/latest/faq.html#what-is-the-best-way-to-restart-a-salt-minion-daemon-using-salt-after-upgrade is probably something you like to read
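The FAQ's approach boils down to restarting the minion out-of-band so the package upgrade doesn't kill the process mid-job; a sketch in that spirit (version number invented, and option support varies across Salt releases):

    upgrade-salt-minion:
      pkg.installed:
        - name: salt-minion
        - version: 2018.3.0
        - order: last

    restart-salt-minion:
      cmd.run:
        - name: 'salt-call --local service.restart salt-minion'
        - bg: True                      # detach so the old minion can report back first
        - onchanges:
          - pkg: upgrade-salt-minion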
20:48 doubletwist I'm apparently missing something very obvious here.
20:49 ntropy joined #salt
20:49 doubletwist In a pillar, I need to match grains['id'] against some kind of "regex".
20:49 babilen You'd typically do that in top.sls, but go on
20:50 doubletwist I'm trying to sue "regex_match" or "regex_search" but I don't understand how to test that against the contents of 'grains['id']'
20:50 doubletwist This isn't in the top [I get basically how to match there] but in an init.sls pillar file
20:50 babilen grains['id'] is a string to which you apply that jinja filter
20:50 schemanic_ I'm sorry. I didn't mean to express a lack of gratitude. I appreciate the patience you and Edgan have shown me today.
20:50 babilen so grains.id|regex_match .. or so
20:51 doubletwist So something like:
20:51 babilen schemanic_: I think you simply misunderstood me .. I simply wanted to make sure that *iff* you use a system with systemd that does *not* have KillMode=process set, that you ensure that it is set beforehand
20:51 doubletwist %{ if grains['id'] | regex_match('^appname(.*)', ignorecase=True ) %}
20:51 babilen doubletwist: Apart from the syntax errors, yeah ..
20:52 babilen That looks about right
20:52 doubletwist I think it's the syntax that's throwing me
20:52 babilen {% vs %{
20:52 doubletwist er {%
20:52 babilen And as always: If you run into an error: What's the error? ;)
20:52 doubletwist it's just not matching
20:52 doubletwist lemme try something
20:54 doubletwist gah I think that was it
20:54 doubletwist thank you
20:54 doubletwist By some miracle i was actually close to the real syntax even though I barely understand what I'm doing here :)
20:56 babilen :)
21:00 Edgan doubletwist: I prefer something like https://pastebin.com/zfuJEHam   Which works for me.
21:02 Sammichmaker joined #salt
21:02 Sammichmaker joined #salt
21:02 cewood joined #salt
21:03 sreddy joined #salt
21:05 Edgan doubletwist: Alternatively you could turn your if statement with regex_match into a macro, make regex_match('^appname(.*)' into regex_match('^' + appname + '(.*)', and then pass the appname into the macro where you wanted it. But I would say that would hurt readability in this case.
21:06 Edgan doubletwist: People wouldn't expect the macro to be an if statement
21:07 Edgan doubletwist: You should be able to get the same effect of your ignorecase with conversion to lower case in my style above.
21:07 Edgan doubletwist: jinja has a lower() filter, like regex_match
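Putting babilen's and Edgan's corrections together, the working version of doubletwist's test comes out roughly as follows (the pillar key is invented):

    {% if grains['id'] | regex_match('^appname(.*)', ignorecase=True) %}
    appname_setting: value-for-appname-hosts
    {% endif %}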
21:08 Pomidora I've dipped my feet into Ansible before for personal stuff, and I've been doing lots of work with Salt afterwards, so my experience with the former is limited. Has anyone here used both and can say why they prefer Salt?
21:09 Pomidora I don't have a good point of reference
21:09 Edgan Pomidora: yes
21:09 Pomidora Please share Edgan
21:09 Edgan Pomidora: I have used Puppet(heavily), Chef(some), Ansible(some), Salt(heavily). I prefer Salt.
21:10 Durkee joined #salt
21:10 Pomidora Nice, seems like you used all the talked about ones. Do you think you can make a concise case for why you prefer Salt?
21:10 Edgan Pomidora: Ansible is SSH based only. I don't like its lack of master mode. You can fake it with a Tower, but it is still SSH based. Tower was not open source for years.
21:11 Edgan Pomidora: Salt is awesome in that it has masterless, master mode, and ssh mode with salt-ssh.
21:12 Edgan Pomidora: Outside my opinion, more overall reality.
21:12 Edgan Pomidora: Puppet is the thing everyone is moving away from, if they are moving. Some are too entrenched.
21:13 Edgan Pomidora: Chef is moved away from, but some people move to it. It is generally preferred in Ruby based companies.
21:13 Pomidora I like that Salt has a Queue
21:13 Pomidora I feel like Ansible will take a million years for a large cluster
21:13 Pomidora Sorry I mean a message Queue
21:14 Edgan Pomidora: I have seen Ansible without optimization take 45 minutes when running from my laptop in California against us-east-1
21:14 Edgan Pomidora: Move to a bastion box in us-west-1 and it was like 25 minutes. Optimize and it was 15 minutes from the bastion.
21:14 Edgan Pomidora: Salt can do it in 10 minutes or less.
21:14 Pomidora Edgan: I want to eventually move away from Salt and bake in golang binaries and replicate the event-based infrastructure
21:14 Pomidora the baking in is for immutable nodes
21:15 Pomidora idk if that's coo coo crazy though
21:15 Edgan Pomidora: Sounds like you want more containers
21:15 Pomidora The nodes will run Kubernetes on Alpine
21:15 Edgan Pomidora: But there is a difference between what I call our and their code. First and third party code. Our code probably should be containers already. For their code, which tends to be data stores, I don't think containers are ready yet.
21:16 Edgan Pomidora: Data stores as in rabbitmq, redis, postgresql, mongodb, etc
21:17 Edgan Pomidora: Also unless you use a Kubernetes service in AWS or GCP, you need to setup your own Kubernetes, and that is probably best done with Salt. Every method I have seen to setup a Kubernetes cluster, including public Salt code, Sucks!
21:19 zer0def for the record, puppet still tries to be relevant with toys like "mgmt", which was heavily promoted at fosdem by their devs
21:19 Edgan zer0def: Puppet has heavily suffered by being the first mover, and being the first to make all the mistakes
21:19 Edgan zer0def: Everyone else then me-toos and learns from their mistakes
21:20 zer0def well, yeah, because back when puppet was the crop, barely anyone else was doing the same
21:20 Edgan Pomidora: On the flip side, Ansible is the most popular for new stuff, these days.
21:20 Pomidora Edgan: yeah I don't touch the cloud-native Kubernetes clusters
21:20 zer0def that said, my experiences with puppet are of a steep learning curve and over-oop'ing of states
21:21 Pomidora is that what you call the built-in services? Cloud-native? I don't even know
21:21 Edgan Pomidora: I think the single biggest thing holding Salt back is that salt-ssh isn't treated first class, and isn't talked about more.
21:21 Pomidora Edgan: I don't think I've ever used Salt-SSH
21:22 Edgan Pomidora: The normal model of write code, git commit, git push, git pull to test some puppet/salt code, sucks
21:22 Edgan Pomidora: salt-ssh, like Ansible, is a far better way for developing code
21:22 Pomidora Edgan: Oh I see
21:22 Edgan Pomidora: But when it comes more to production, salt master mode is better
21:22 zer0def btw, Edgan, last time i've checked (in 2016), kubernetes used salt to deploy
21:22 Pomidora zer0def: interesting
21:22 Edgan zer0def: Their official code sucks, and they have a new just-run-this-command method
21:23 Edgan zer0def: and some newer salt code, is just a wrapper around that command
21:23 noobiedubie joined #salt
21:23 noobiedubie anyone have a good way of enabling full screen on virtualbox with mac pro host only found a writeup from 2009 that doesn't seem to apply now with xenodm
21:24 zer0def well, i'd probably make myself some less-than-friendly-stares by openly stating that most go-based solutions operate like alpha-level crapware
21:24 Edgan zer0def: But the command isn't a stateful thing. So if you half setup a cluster, you can't finish the rest.
21:24 Edgan zer0def: You aren't the only one, and go isn't the only language
21:24 Edgan zer0def: with that problem
21:24 zer0def yeah, i know, but that's pushing the discussion away from original topic
21:24 Edgan zer0def: VCs care more about speed than quality
21:25 zer0def development speed*, not operational speed ;P
21:25 Edgan zer0def: yes
21:25 zer0def something something nitpicking something
21:25 Edgan zer0def: They don't care if they waste a million dollars if it makes them profit
21:26 zer0def oh that topic is a hornet's nest, i'm staying away from that one just as much
21:26 noobiedubie wouldn't that make it not a waste if you made a profit on it?
21:26 Edgan zer0def: and even the profit is a long term goal, because sometimes you just need to capture the market, or your chunk of it
21:26 Edgan noobiedubie: It still ignores efficiency, and tech debt you will pay later
21:27 zer0def noobiedubie: as for your question, have you tried using bhyve instead?
21:27 zer0def due to licensing and performance issues, virtualbox might not be the most viable of options
21:27 Edgan noobiedubie: I prefer VMware Fusion on OSX
21:27 Edgan But it is the most popular, because it is free
21:27 noobiedubie sure that's a possibility but not a waste
21:28 noobiedubie yeah might have to fall back to vmware was really hoping to ditch it though
21:28 Edgan noobiedubie: You could also say the VCs have so much money that money isn't really the goal. They want more the power that spending their money can bring.
21:29 zer0def Edgan: thing about tech debt is that it has a tendency to sneakily f you up by surprise
21:29 noobiedubie lol i think that's what everyone wants with money otherwise it's called toilet paper i think
21:29 Edgan zer0def: yes, but then another startup kills your company weighed down by tech debt with new alpha level crap, and the cycle repeats
21:30 Edgan zer0def: Why does Google keep coming out with new chat services every few years? Hmm
21:30 zer0def i don't even pay attention to that anymore
21:30 zer0def mostly because it's a waste of my time
21:31 zer0def that's also one of the reasons why i remain on irc
21:32 Edgan zer0def: There are a few newer things that are clearly better, but it is somewhat a trick of getting everyone to switch to the same one.
21:32 cdunklau so what's a reasonable way to trigger a state application from a git hook where the hook doesn't run as root?
21:32 zer0def api call?
21:32 cdunklau i was thinking that
21:32 MTecknology api call, or wrapper script to salt-event
21:33 cdunklau but i'm having trouble digesting the API docs
21:33 Edgan zer0def: We have Slack, Discord, Riot, etc
21:33 zer0def Edgan: i'd argue the presence of twitter, fbchat, slack, discord and a bunch of other walled gardens just creates annoying fragmentation, repeat from mid-way last decade's history
21:34 zer0def different names, same principle
21:34 Edgan zer0def: You can run your own Discord server.
21:34 Edgan zer0def: Same with Riot
21:34 zer0def cdunklau: take a peek at something like pepper (or some other solution leveraging salt-api)
21:34 cdunklau MTecknology: what's salt-event?
21:34 Edgan zer0def: Slack is definitely walled, and became more so recently
21:35 MTecknology "man salt-event"
21:35 cdunklau MTecknology: i tried that already :)
21:35 zer0def Edgan: same can be said for rocketchat and mattermost, i guess i over-generalized by putting all of those solutions under the umbrella of walled gardens, but they still contribute to fragmentation
21:35 briner joined #salt
21:35 Edgan zer0def: But even when they aren't walled, you have to get everyone to switch to it. It is like trying to replace smtp
21:36 zer0def thing is, if matrix is supposed to be the next generation of xmpp's golden age, kill me now.
21:36 Edgan zer0def: Slack seems to be winning, but since they blocked third parties, that might reverse.
21:36 briner joined #salt
21:36 MTecknology matrix is a bastardized pile of garbage that could never even dream of competing with xmpp
21:36 cdunklau MTecknology: is there a  link or something you can give me? i'm failing hard at finding it
21:37 Edgan What is the other one I am having trouble thinking of that is like a better matrix?
21:37 zer0def Edgan: funny how 3-something years ago slack basically started out and it already had a whiff of what they did recently ;)
21:37 MTecknology https://docs.saltstack.com/en/latest/topics/event/index.html
21:38 cdunklau MTecknology: ah state.event gotcha
21:38 MTecknology cdunklau: fwiw- the api may very well be a better option. My salt-event solution existed before the api was a thing..
21:38 cdunklau i was wondering if that's what you meant
21:38 MTecknology it's not
21:38 cdunklau oh
21:38 cdunklau MTecknology: is it actually a CLI command for you?
21:38 MTecknology yup
21:39 cdunklau huh. where is it?
21:39 cdunklau it's not in my installation
21:39 MTecknology did you install salt-minion?
21:39 cdunklau i have salt-common/master/minion installed
21:39 cdunklau 2017.7.5
21:39 MTecknology I can google a bit more once I finish debugging something
21:39 whytewolf cdunklau: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.event.html
21:40 cdunklau whytewolf: that's not what MTecknology meant though.
21:40 MTecknology they're fundamentally the same thing
21:40 cdunklau it's not a big deal, i'm mostly just curious what they mean
21:40 cdunklau ok
21:41 MTecknology Is there a way of running an sls file without passing it through state.HighState()?
21:41 zer0def Edgan: how are you drawing similarities to matrix? because i can't come up with a name
21:42 doubletwist babilen: & Edgan: - Thanks for the assistance
21:43 doubletwist Hrm. Now wondering if I should stick with a single "pillar" git repo or split out each 'app' into separate repos like I do our formulas
21:43 cdunklau but thanks folks, i'll dig into that stuff
21:44 Edgan zer0def: matrix is a slack-like open source chat tool, and there is another one that is also open source and e2e, but seemed to handle the encryption negotiation much better.
21:44 zer0def you're not talking about signal or telegram?
21:45 zer0def no, those have proprietary servers, nevermind
21:45 Edgan zer0def: No
21:46 whytewolf MTecknology: I'm not even sure where to start with that question. a state file has to pass through the highstate function in order for it to compile the low state. you can pass lowstate info into state.low but there isn't a function to run a file as a pure lowstate run.
21:46 zer0def no idea, i'd compare matrix mostly to xmpp and for e2e i'd settle for otr, but that's not protocol-specific
21:48 zer0def Edgan: oh, omemo maybe?
21:48 MTecknology whytewolf: that's an excellent point! .. quite the poorly formed question. :)
21:49 Edgan zer0def: no, can't find it with google either
21:50 zer0def omemo has the most direct competition to matrix, but that's just a xep
21:57 MTecknology whytewolf: A better question would have been about how I'm trying to avoid self.run_parellel(), but it seems running state.sls instead of state.highstate avoids that (I think...).
22:02 exarkun joined #salt
22:02 Edgan zer0def: Found it through my phone. Keybase
22:02 cgiroua joined #salt
22:03 zer0def ah, ok
22:07 Edgan zer0def: But the only way I have heard of it was through a Saltstack employee in the past
22:10 Edgan zer0def: But as this shows, too many, https://en.wikipedia.org/wiki/Comparison_of_instant_messaging_clients
22:10 Edgan zer0def: and it isn't in that list
22:11 zer0def yeah, i always associated keybase with gpg keyservers
22:11 zer0def also, you probably wanted to link this: https://en.wikipedia.org/wiki/Comparison_of_instant_messaging_protocols
22:11 tzero joined #salt
22:12 zer0def but even then, this list is lacking
22:12 Edgan zer0def: keybase has a slack like client that is easy to use, and even has an Android client
22:12 ecdhe joined #salt
22:15 DanyC joined #salt
22:33 shanth joined #salt
22:35 shanth i see in the salt docs that salt says latest version 2018.3.1 but on FreeBSD the latest version we have is 2017.7.4_1 - is the freebsd development behind?
22:37 aphor shanth, the FreeBSD salt port needs to be updated.
22:37 shanth thanks aphor
22:37 ecdhe joined #salt
22:38 shanth does anyone know what the new feature for user.present does? allow_uid_change says new in 2018.3.1 https://docs.saltstack.com/en/latest/ref/states/all/salt.states.user.html
22:38 sreddy joined #salt
22:38 shanth because i think it might fix a bug i ran into. if a user already exists, say with uid 2000, and i have a salt state to change the uid to 3000, then all the home directory permissions get screwed up. i wonder if this allow_uid_change option fixes that
22:39 MTecknology I would expect not
22:39 mikecmpbll joined #salt
22:39 shanth is that a known issue MTecknology ?
22:39 MTecknology it's not really a problem for salt to deal with
22:40 MTecknology that option, based on the docs only, would prevent a uid change unless set to True
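(A minimal sketch based only on the linked user.present docs — the username and uid are invented; per the docs, the state refuses to change an existing uid unless this flag is set:)

    john.doe:
      user.present:
        - uid: 5000
        - allow_uid_change: True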
22:40 shanth what is the actual issue MTecknology and is the best way to deal with it to wipe the user with user.absent first?
22:40 MTecknology Salt should /not/ look through a file system and modify file ownership without being told.
22:41 shanth the issue is the home directory MTecknology
22:41 aphor shanth, you need to write a state that will migrate the user's files, and make it run with a watch on user.present
22:41 MTecknology You're free to write a file.recurse state that enforces user/group on the home dir
22:41 shanth and due to this, the user cannot SSH in now because perms are borked
22:41 MTecknology but.. you probably shouldn't
22:41 shanth interesting
22:41 shanth is the best solution to just nuke the user first?
22:41 aphor Personally, I recommend against changing a UID.
22:41 MTecknology what is it you're actually trying to do?
22:41 MTecknology aphor: +1
22:42 shanth so we have 10 servers for example. we want to push john.doe. john.doe exists on 3 servers already but has different uids. our salt state makes the user with uid 5000
22:42 shanth so the uid is changed on those 3 servers
22:42 shanth now that user cant ssh on those 3 servers because his home directory perms are borked
22:42 MTecknology you should fix those permissions /before/ applying salt
22:43 shanth but the kicker is that the home folder appears to have the right uid/gid
22:43 shanth i will test that MTecknology
22:44 shanth appears to work. got the info i need in that it is not salt's job to do that
22:44 shanth now i know what to do :) love u guys
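(For reference, the kind of ownership-enforcing state MTecknology alluded to above — the usual tool for chowning a tree is file.directory with the recurse option; the path and names here are invented:)

    john_doe_home:
      file.directory:
        - name: /home/john.doe
        - user: john.doe
        - group: john.doe
        - recurse:
          - user
          - group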
22:44 MTecknology wedging salt into an existing environment is a whole different game from rolling fresh. When you wedge it in, you'll have to pre-bandaid lots of stuff or else you'll have to write a mess of extra states to remedy something that should never have been screwed up in the first place.
22:45 shanth yeaaaaaaaaaaaaaaah boy MTecknology we are running into a lot of band aids
22:45 shanth TONS of mismatched existing users: jon.doe john.doe jdoe johndoe lol
22:45 MTecknology #1 recommendation... stop
22:46 MTecknology roll back and think about what you're rolling out, and then do it from scratch
22:47 MTecknology new systems, fresh states, nice 'n clean
22:47 shanth i wish MTecknology
22:47 shanth maybe this christmas
22:47 shanth if santa is nice to me
22:47 MTecknology in the long run- it's almost guaranteed to save you time.
22:48 MTecknology (and lots of frustration)
22:49 aphor or.... *job*security*
22:49 aphor watch out for that trap though
22:50 MTecknology what trap is that?
22:50 aphor job security will steal your vacation
22:50 MTecknology if you do something for the sake of improving your job security and that thing is not taking on additional responsibilities... you should be fired
22:51 aphor +1
22:52 aphor I've seen management become complicit in that, because EMPIRE BUILDING
22:52 MTecknology that line lost me
22:53 MTecknology I didn't follow "complicit in that" or "empire building" :(
22:53 aphor larger companies judge managers by their pinball score: their headcount labor cost plus project expense budgets plus capital budgets.
22:54 aphor management becomes complicit in "do[ing] something for the sake of improving your job security"
22:54 aphor because it ups the pinball score
22:54 MTecknology you mean managers tend to promote things that keep the head count?
22:55 aphor sometimes, yes.
22:55 MTecknology complicit might be the wrong word?
22:55 MTecknology I think I understand now, though
22:55 aphor So automation projects get sidelined, or sandbagged.
22:56 aphor salt is very powerful, ergo politically dangerous, in some environments.
22:56 MTecknology Every shop I've seen, including sony pictures, runs with a tight enough staff that automation is key, to the point that you're pretty well screwed without it.
22:57 aphor Lucky you!
22:57 aphor Have you ever worked for a "too big to fail" bank?
22:57 * whytewolf has
22:57 aphor .. or maybe a DoD contractor company?
22:58 MTecknology I've done the yearly audits/pen tests for various banks, but never been employed by one
22:58 aphor Gotta sit next to some lifers for more than a few weeks to get the culture.
22:59 * aphor is SO jaded, much crass, WOW
23:00 MTecknology heheh... I betcha it's nothing compared to working for EROS
23:00 aphor Anyway: I've seen a lot of time wasted specifically on churning UIDs and file ownerships migrating file storage the WRONG way.
23:01 MTecknology miles of red tape before you're allowed to see the various tapes that lead to more red tape that you need to get to before you can consider asking for a user account to check email.
23:01 aphor I almost got staffed working for Pure Romance...
23:01 whytewolf lol. when i worked at $bank it took a long time for spinup. but our job was bringing in automation and cloud into the bank. it was an uphill battle. but we were winning it.
23:01 aphor MTecknology that's all job security pinball score bloating red tape.
23:02 aphor whytewolf go go go!
23:02 whytewolf that was before i said stuff this and went and worked for salt :P
23:02 MTecknology whytewolf: that's their goal... it's clearly a losing battle
23:04 aphor whytewolf, which "the bank?"
23:04 whytewolf would rather not say. but will say it gets in the news a lot
23:04 aphor I still know people at one of them.
23:05 aphor anyway...
23:05 aphor who wants to review some crazy stuff I proposed in a salt ticket?
23:06 Edgan aphor: define crazy stuff
23:07 aphor I want to rewrite (or write an alternate) call_chunks(), call_chunk()
23:07 aphor https://github.com/saltstack/salt/issues/32956
23:08 aphor I DON'T want to go all rogue and crackpot though, so unless I can get someone else to agree to help me test, it's a non-starter.
23:09 aphor It's kind of an important code path, so it needs moar eyes, I think.
23:10 onlyanegg oh, I was looking at that earlier. it looks cool.
23:10 aphor I want to put a DAG class together, which is an iterator, to hold low chunks, and then run them in a bunch of coroutines in an event loop.
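(A minimal, hypothetical sketch of the DAG iterator aphor describes — not Salt's actual call_chunks(); the chunk ids and requisite sets are invented:)

    # Hypothetical: yields chunk ids whose requisites have all completed.
    class ChunkDAG:
        def __init__(self, chunks):
            # chunks: {chunk_id: set of required chunk_ids}
            self.deps = {cid: set(reqs) for cid, reqs in chunks.items()}
            self.done = set()

        def __iter__(self):
            while self.deps:
                # everything whose requisites are already satisfied
                batch = [cid for cid, reqs in self.deps.items() if reqs <= self.done]
                if not batch:
                    raise RuntimeError('requisite cycle detected')
                for cid in batch:
                    yield cid
                    self.done.add(cid)
                    del self.deps[cid]

    dag = ChunkDAG({'pkg': set(), 'cfg': {'pkg'}, 'svc': {'pkg', 'cfg'}})
    print(list(dag))  # ['pkg', 'cfg', 'svc'] -- one valid execution order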
23:11 onlyanegg Does someone know where/how the dunder dictionaries are made available to modules?
23:12 aphor onlyanegg Hatch wrote a howto somewhere in the docs about the dunder dicts.
23:12 MTecknology Woo! Finally tracked down that bug and got a PR done! :D
23:12 onlyanegg ok, thx. I'll take a look.
23:13 MTecknology onlyanegg: https://docs.saltstack.com/en/latest/topics/development/dunder_dictionaries.html
23:14 MTecknology unless you're looking for internals
23:15 aphor I vaguely recall seeing something like that doc page with some explanation of the loader process.
23:15 aphor Maybe it was a saltconf talk?
23:16 onlyanegg yeah, I'm interested in the internals
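(Internals-wise, the loader conceptually injects the dunders as module globals after import. A rough, hypothetical illustration — not the real loader code:)

    import types

    def fake_load(source, salt_funcs, grains):
        # Hypothetical stand-in for salt's lazy loader: exec the module,
        # then inject the dunder dicts into its global namespace.
        mod = types.ModuleType('custom_mod')
        exec(source, mod.__dict__)
        mod.__dict__['__salt__'] = salt_funcs
        mod.__dict__['__grains__'] = grains
        return mod

    src = "def hello():\n    return 'Running on ' + __grains__['os']\n"
    mod = fake_load(src, {}, {'os': 'Debian'})
    print(mod.hello())  # Running on Debian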
23:16 aphor If I get the DAG-iterator-based call_chunks event loop working, then it can emit PROGRESS INDICATOR MESSAGES as it goes...
23:17 MTecknology aphor: that sounds very ambitious. I don't have any need for anything like that so I can't volunteer
23:18 aphor MTecknology thanks though.
23:18 aphor The REAL use case is that issue: orchestration states can have a requisite on events.
23:20 aphor also I think it will improve the concurrency of call_chunks, which I suspect might have AWESOME effects on bloated highstate runs
23:20 MTecknology how do you plan to prevent too many things from going on at the same time?
23:21 aphor MTecknology put some concurrency brakes in, like only run n low chunks for n physical/logical cores.
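(Sketching aphor's "concurrency brakes" idea with asyncio — hypothetical, with run_chunk standing in for real state execution:)

    import asyncio, os

    async def run_chunk(cid):
        await asyncio.sleep(0)  # placeholder for actually running a low chunk
        return cid

    async def run_all(chunk_ids):
        # cap in-flight chunks at the machine's core count
        sem = asyncio.Semaphore(os.cpu_count() or 1)

        async def bounded(cid):
            async with sem:
                return await run_chunk(cid)

        return await asyncio.gather(*(bounded(c) for c in chunk_ids))

    print(asyncio.get_event_loop().run_until_complete(run_all(['a', 'b', 'c'])))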
23:22 MTecknology I really have to wonder if I'm just about the only person using the debianized network.managed or if it's just me that's trying to use it to this extent.
23:23 aphor Most of my clients are into RHEL conservatism, so I don't get much play with Debian distros.
23:24 Edgan MTecknology: I have done heavy CentOS and Ubuntu salt, plus a little Amazon Linux. I have never used network.managed. Most of what I have done is in EC2, where there is generally only eth0, and it is already set to dhcp. For dns, I write resolv.conf raw with salt. I delete the /etc/resolv.conf symlink if I need to.
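(A sketch of the approach Edgan describes — the nameserver values are placeholders; the cmd.run guard only fires while /etc/resolv.conf is still a symlink, so the run stays idempotent:)

    remove_resolv_symlink:
      cmd.run:
        - name: rm -f /etc/resolv.conf
        - onlyif: test -L /etc/resolv.conf

    resolv_conf:
      file.managed:
        - name: /etc/resolv.conf
        - contents: |
            nameserver 192.0.2.53
            nameserver 192.0.2.54
        - require:
          - cmd: remove_resolv_symlink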
23:24 * MTecknology grumbles @ https://github.com/saltstack/salt/pull/46980
23:25 MTecknology that change is a breaking change... but it breaks functionality that only worked by accident.
23:26 Edgan MTecknology: You operating in a datacenter?
23:28 MTecknology a few of them, ya
23:28 MTecknology and, unfortunately, also aws
23:29 Edgan MTecknology: I see AWS as the best of bad options. Most people see OpenStack as way more work than they want to take on. Once that is off the table, what else do you have but cloud services?
23:29 MTecknology I look at AWS as one of the worst of all options
23:29 Edgan MTecknology: Using OpenStack?
23:30 MTecknology I don't have any objection to VPS providers when they make sense, but AWS in particular is on my shitlist.
23:30 MTecknology $client uses raw lxc :(
23:30 Edgan MTecknology: It is the best I have seen of them all if you want to get serious and scale.
23:31 englishm_work joined #salt
23:31 MTecknology that depends on how you define "get serious and scale" and what your requirements are. So far, I've never come across anything that sanely fit into AWS without there being lots of infinitely better options
23:31 Edgan MTecknology: Azure sucks. GCP is a close second to AWS. Everything else like Linode or DigitalOcean aren't even in the same class.
23:32 MTecknology I've never even been willing to consider azure for any application. If it ever becomes a hard requirement, I'll just find new work.
23:32 Edgan MTecknology: So what are the better alternatives?
23:33 MTecknology it's probably not worth the debate. I can summarize it with: you won't agree
23:33 Edgan MTecknology: I am interested in learning what you would say, even if I don't agree.
23:33 whytewolf I've watched Azure lose to OpenStack because Azure wanted to control too much of everything.
23:34 MTecknology I'd consider linode and DO better than AWS for many use cases.
23:34 Edgan whytewolf: Azure is just poorly done, and the only niche it seems to have is Windows-heavy shops that want to pay the even higher Windows instance costs
23:34 Edgan MTecknology: Only at the personal level. AWS runs circles around them for business.
23:35 MTecknology like I said- we're going to end in disagreement
23:35 Edgan MTecknology: If you want to limit yourself to just what they can do, you can do the same in AWS, but better
23:35 whytewolf Edgan: oh they were looking at it for windows operations. but this was at $bank. and the AD didn't already match what Azure wanted.
23:35 MTecknology s/better/worse for 3x the cost/
23:36 Edgan MTecknology: 3x the cost, yes, but more than 3x better
23:36 MTecknology again... we're going to disagree
23:36 MTecknology you have one opinion, and I emphatically disagree; there's no middle ground
23:36 Edgan MTecknology: how many AZs do Linode and Digital Ocean have per region?
23:37 * MTecknology sighs
23:37 Edgan MTecknology: I think the answer is one, they don't do AZs.
23:39 Eugene lol aZs
23:39 Edgan Eugene: ?
23:40 Eugene Given the history of major debilitating outages in AWS, I would not(and don't) trust the AZ guarantees. If you notice an outage it tends to be because us-east-1 is totally down somehow.
23:40 Eugene I get that there are some local outages it guards against, but the reality is that if you're planning for a group of servers to go down it makes more sense to just plan for the entire region disappearing.
23:41 Eugene </2c>
23:41 Eugene And no, most VPS providers have zero concept of an AZ. You're lucky if they have >1 region, and if those regions are actually BGP independent
23:41 Edgan Eugene: The first rule is don't do business in us-east-1. It is a bad choice of region. Yes, AWS has even had other regions depending on us-east-1. That has gotten fixed.
23:42 Edgan Eugene: us-east-1 is both the default, and should be considered beta
23:42 exarkun joined #salt
23:43 Eugene I'm aware. us-east-1 is also one of 3 regions that support SES fully, so I'm kinda stuck with it
23:43 Eugene But yeah
23:43 Edgan Eugene: It is all a matter of scale. I think AZs are useful, but not the end-all be-all. You should be multi-region too.
23:43 Edgan Eugene: doesn't us-west-2 too?
23:43 Eugene My point was: if you're gonna be multi-Anything, don't waste time being multi-AZ; jump straight to multi-Region
23:43 Eugene Yes, us-west-2, us-east-1, and... whatever ireland is
23:43 Eugene Or even jump to multi-Cloud, if you're feeling super cool
23:43 Edgan Eugene: You aren't going to run a cluster of, say, elasticsearch across three regions.
23:44 Edgan Eugene: yes, and multi-cloud is the next level, and you could say hybrid-cloud between say AWS+GCP and a data center is the next level after that
23:45 Eugene I'm not saying that within the application you shouldn't take advantage of multiple AZs, just don't count on them for failover ;-)
23:45 Edgan Eugene: Some things, like mass storage, are just always going to be cheaper outside of AWS
23:45 Eugene If us-east-1b has failed, I would expect -1c and and -1a to follow shortly. Why would i failover to there instead of directly to us-west-2?
23:46 Eugene Nevermind the weird problems you get with us-east-1 failures.... there's actually 7 or 8 physical AZs there
23:46 Edgan Should I run everything in us-east-1a and failover to us-west-2a?
23:46 Eugene Good question ;-)
23:46 Edgan Eugene: Actually I think you are touching on a new design that AWS themselves have
23:47 Edgan Eugene: They have previously also run things like Elasticache all in one AZ
23:47 Edgan Eugene: I think they switched to that model. All the new non-US regions are two AZ, not three.
23:48 Eugene I noticed that as well. I wonder if it's driven by cost-cutting (a third datacenter is $$$), or because basically nobody uses the C zones
23:48 shanth does salt log what states are being applied to which minions in a file somewhere or is that a salt enterprise feature?
23:48 whytewolf shanth: job history
23:49 whytewolf https://docs.saltstack.com/en/latest/topics/jobs/
23:49 Edgan Eugene: I am guessing cost cutting.
23:49 Edgan Eugene: Save the money and build one more region
23:53 shanth does that get saved to a file anywhere whytewolf?
23:53 shanth i only see jobs for today in mine whytewolf
23:54 whytewolf shanth: it is msgpack stored under /var/cache/salt/master; it isn't a human-readable file.
23:54 whytewolf but you can always change how jobs are stored, and how long they are saved
23:54 shanth what is it then? whytewolf why is it not readable :(
23:54 whytewolf it is a msgpack.
23:55 shanth what is that?
23:55 shanth is the only way to get logging to pay for enterprise?
23:55 whytewolf again read what i said. "you can change how jobs are stored, and how long they are stored"
23:56 shanth missed that part
23:56 shanth is it on that page you linked? i'd like to log them daily as log files that are readable
23:56 tzero joined #salt
23:57 whytewolf that would not be advised.
23:57 shanth why not?
23:57 whytewolf job history needs to be readable by the master
23:57 whytewolf that is why they are in msgpack format...
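(If you do need to eyeball one by hand, the cached returns unpack with the msgpack library — the jid path below is made up; on msgpack-python older than 0.5.2 use encoding='utf-8' instead of raw=False:)

    import msgpack

    # Hypothetical path; real job dirs live under /var/cache/salt/master/jobs/
    with open('/var/cache/salt/master/jobs/a1/b2c3d4/return.p', 'rb') as fh:
        print(msgpack.unpackb(fh.read(), raw=False))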
23:57 shanth we need to show proof of jobs running in the past.
23:57 MTecknology look into returners
23:58 shanth but it's not readable? what can i do with it haha
23:58 whytewolf https://docs.saltstack.com/en/latest/topics/jobs/external_cache.html
