
IRC log for #salt, 2017-07-27


All times shown according to UTC.

Time Nick Message
00:54 noraatepernos joined #salt
01:05 Sokel left #salt
01:09 ropes_ left #salt
01:09 DoomPatrol joined #salt
01:25 joe_n joined #salt
01:30 joe_n joined #salt
01:34 noraatepernos joined #salt
01:50 MTecknology "Do you have an extra goto ten line? I said ____!"  lol
01:52 ilbot3 joined #salt
01:52 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.6, 2017.7.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers <+> The call for speakers for SaltConf17 is now open: http://tinyurl.com/SaltConf17
01:57 teratoma joined #salt
02:00 rpb joined #salt
02:04 debian112 joined #salt
02:18 noobiedubie joined #salt
02:22 DoomPatrol joined #salt
02:26 debian112 joined #salt
02:26 zerocoolback joined #salt
02:33 joe_n joined #salt
02:42 beardedeagle joined #salt
02:51 beardedeagle whytewolf: to add to the earlier conversation, I think twisted could have been a feasible option as well.
02:52 ahrs joined #salt
02:52 beardedeagle (in addition to tornado I mean)
02:56 Ni3mm4nd joined #salt
03:00 preludedrew joined #salt
03:06 donmichelangelo joined #salt
03:12 Ni3mm4nd joined #salt
03:18 tiwula joined #salt
03:45 sarcasticadmin joined #salt
03:46 sarcasticadmin joined #salt
03:47 svij3 joined #salt
03:49 dendazen joined #salt
03:51 om2 joined #salt
03:54 rwaweber Is there a salt-devel channel? Or is that better suited to conversation on the PRs, once they're in motion?
03:54 rwaweber been bumping around the idea of getting file capabilities supported in the file module
03:55 rwaweber https://wiki.archlinux.org/index.php/Capabilities <== context
03:56 rwaweber I suppose a PoC PR would probably be the best way to get that conversation going :P
03:57 beardedeagle rwaweber: literally #salt-devel
03:57 beardedeagle it's even in the channel topic for this channel
03:59 mavhq joined #salt
04:09 tyler-baker joined #salt
04:12 Ni3mm4nd joined #salt
04:26 hoonetorg joined #salt
04:50 tobstone joined #salt
04:56 ahrs joined #salt
05:02 dxiri joined #salt
05:17 svij3 joined #salt
05:17 samodid joined #salt
05:23 Bock joined #salt
05:39 dxiri joined #salt
05:40 inad922 joined #salt
06:02 aldevar joined #salt
06:09 do3meli joined #salt
06:11 do3meli left #salt
06:16 ivanjaros joined #salt
06:17 debian1121 joined #salt
06:21 Ricardo1000 joined #salt
06:22 dxiri joined #salt
06:41 sh123124213 joined #salt
06:44 mbuf joined #salt
06:48 samodid joined #salt
06:57 darioleidi joined #salt
07:00 Ricardo1000 joined #salt
07:14 high_fiver joined #salt
07:18 high_fiver_ joined #salt
07:24 Hybrid joined #salt
07:31 mgjpas joined #salt
07:32 mgjpas left #salt
07:32 preludedrew joined #salt
07:33 mpas joined #salt
07:33 mpas Hi all :-)
07:35 Rumbles joined #salt
07:43 noraatepernos joined #salt
07:50 pualj_ joined #salt
07:55 mikecmpbll joined #salt
07:58 pbandark joined #salt
08:00 golodhrim|work joined #salt
08:03 mpas I am trying to target minions where grain osrelease_info has "12" as the first item (the grain seems to be an array) but I cannot figure out how to do this. Is there anybody who can help with this?
08:04 hemebond mpas: Can you paste the grain somewhere?
08:07 jhauser joined #salt
08:07 mpas Isn't it a built-in grain?
08:08 hemebond Yes, but I want to see exactly what you're seeing.
08:10 mpas Ah sure, https://pastebin.com/dTcv6C7T
08:10 chowmein__ joined #salt
08:14 hemebond And how are you trying to use it? In Jinja?
08:26 mike25de joined #salt
08:26 viq lordcirth_work: you may also want to look at osquery and sysdig
08:26 mpas No, just like this https://pastebin.com/iV5wivc3
08:27 mpas That is the top.sls, forgot to mention.
08:27 Mattch joined #salt
08:36 gmoro joined #salt
08:37 mpas I got it working with Jinja, thanks for the hint hemebond! :-)
08:39 hemebond mpas: You should be able to do G@osrelease_info:12
08:40 hemebond Though that will match if any item in the list is 12.
08:41 mpas Ah ok, then I think working with Jinja is better. I am just using: {% if grains.osrelease_info[0] == 12 %}
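For reference, a minimal top.sls sketch of both approaches discussed above; the state name sles12.common is hypothetical:

    base:
      # grain matcher: true if *any* element of the osrelease_info list is 12
      'osrelease_info:12':
        - match: grain
        - sles12.common

      # Jinja guard, checking only the first element, as mpas does above
      {% if grains['osrelease_info'][0] == 12 %}
      '*':
        - sles12.common
      {% endif %}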
08:43 _KaszpiR_ joined #salt
08:47 Micromus joined #salt
08:50 darioleidi joined #salt
08:56 dxiri joined #salt
08:58 N-Mi_ joined #salt
08:59 _KaszpiR_ joined #salt
09:00 samodid joined #salt
09:16 zerocoolback joined #salt
09:16 noraatepernos joined #salt
09:22 zerocoolback joined #salt
09:24 mbuf joined #salt
09:36 zerocoolback joined #salt
09:41 ivanjaros joined #salt
10:00 Shirkdog joined #salt
10:01 Naresh joined #salt
10:21 twooster This seems like it should be simple... how do i see the output of a command I ran (via `salt`) and accidentally CTRL-C'd out of ?
10:21 hemebond twooster: salt-run jobs.list_jobs
10:22 hemebond salt-run jobs.lookup_jid ###############
10:22 hemebond salt-run jobs.lookup_jid ############### --output=highstate
10:22 twooster and/or is there any way to continue "waiting" on that job id and its output, as though i hadn't ctrl-c'd
10:22 hemebond :shrug:
10:22 hemebond You could `watch` it :-)
10:23 twooster so right now jobs.lookup_jid shows nothing, but the job shows up in jobs.active
10:23 twooster so i guess i'll just watch it ;)
10:25 twooster is it normal for lookup_jid to return nothing while it's active? a bit ... annoying, i guess
10:26 hemebond Yeah, I think it should return nothing.
10:28 twooster huh, and print_job and list_job won't show the active state. have to use active to show that. odd. okay, well i'll just wait.
10:58 babilen twooster: If you want to see realtime information you could run the job directly on the minion with salt-call
11:04 pualj_ joined #salt
11:14 dxiri joined #salt
11:15 zerocoolback joined #salt
11:22 mbrgm joined #salt
11:23 mbrgm hey! is anyone successfully using the saltify salt-cloud driver?
11:29 KingOfFools joined #salt
11:29 mavhq joined #salt
11:35 KingOfFools joined #salt
11:45 nku hm.. can't i include states from another env? i thought that env:foo.bar.state was supposed to work?
11:45 nku or is that syntax only for top files?
11:45 xet7 joined #salt
11:52 promorphus joined #salt
11:54 nku ok, whitespace: is.significant ..
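For anyone hitting the same thing: a minimal sketch of a cross-environment include, where the space after the colon is the significant whitespace (the environment name foo and the state bar.state are hypothetical):

    include:
      # a state from the current environment
      - common.users
      # state bar.state pulled from environment foo; note 'foo: bar.state', not 'foo:bar.state'
      - foo: bar.state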
12:04 zerocoolback joined #salt
12:15 inad922 joined #salt
12:16 ecdhe joined #salt
12:16 Miouge joined #salt
12:21 cgiroua joined #salt
12:22 pualj_ joined #salt
12:33 zerocoolback joined #salt
12:35 jdipierro joined #salt
12:36 smartalek joined #salt
12:36 smartalek left #salt
12:36 smartalek joined #salt
12:41 DammitJim joined #salt
12:46 tacoboy joined #salt
12:49 noobiedubie joined #salt
12:52 johnkeates joined #salt
12:53 noobiedubie joined #salt
12:57 numkem joined #salt
12:57 antix_ joined #salt
13:02 Ni3mm4nd joined #salt
13:08 pualj_ joined #salt
13:11 ssplatt joined #salt
13:22 ProT-0-TypE joined #salt
13:27 racooper joined #salt
13:34 drawsmcgraw joined #salt
13:46 cyborg-one joined #salt
13:47 beardedeagle joined #salt
13:48 _JZ_ joined #salt
13:51 _aeris_ joined #salt
14:01 squishypebble joined #salt
14:02 devtea joined #salt
14:02 gmoro joined #salt
14:04 overyander joined #salt
14:07 lordcirth_work nku, you may want to read: https://docs.saltstack.com/en/latest/topics/yaml/index.html
14:07 nku nah, what i really want is to write all states in python
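That is already possible with the pure Python renderer; a minimal sketch of an SLS written that way (the managed path and contents are made up):

    #!py

    def run():
        '''Return the same highstate data structure the YAML renderer would produce.'''
        return {
            '/etc/motd': {
                'file.managed': [
                    {'contents': 'Managed by Salt.\n'},
                ],
            },
        }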
14:08 lordcirth_work Trying to use Hubblestack, ran "salt \* saltutil.sync_modules", which returned nothing, and getting: "'hubble.audit' is not available" when I try to use it
14:10 noobiedubie joined #salt
14:13 cachedout joined #salt
14:15 lordcirth_work Does hubblestack_nova-2016.10.2 not work with salt 2017.7.0?
14:16 XenophonF joined #salt
14:20 mikea joined #salt
14:24 tiwula joined #salt
14:27 zerocoolback joined #salt
14:31 impi joined #salt
14:32 dxiri joined #salt
14:35 mikecmpbll joined #salt
14:36 btorch anyone using  rest_tornado for this salt-api ?
14:37 zer0def quick question regarding salt-cloud in 2017.7 - should i just rename the key `provider` to `driver` in both profiles and providers?
14:39 zer0def already reverted back to 2016.11, but there, doing the aforementioned change still resulted in complaints about the driver not being specified in the provider
14:43 dxiri joined #salt
14:43 mbrgm left #salt
14:50 KennethWilke joined #salt
14:54 dxiri_ joined #salt
15:02 btorch hehe I think salt-api has a mind of its own :)
15:04 jdipierro joined #salt
15:06 dxiri joined #salt
15:07 samodid joined #salt
15:09 heaje_ joined #salt
15:10 beardedeagle joined #salt
15:13 khaije1 FYI, I got my salt-cloud w/ VMware setup working. Haven't bisected the config but it looks like the template was just missing some packages: net-tools, perl, and of course open-vm-tools is also required.
15:14 khaije1 Is there a way to list only the Salt managed VMs available on a salt-cloud connected provider?
15:14 * khaije1 mutters, probably should have used a map
15:16 khaije1 In fact I would have but I'm waiting on https://github.com/saltstack/salt/issues/40975 pillar based cloud maps
15:20 devtea joined #salt
15:21 khaije1 hey zer0def, that's my understanding, yes. If your config is from an older version you'll want to confirm the provider name too. IIRC the VMware provider name changed around the same time as the provider->driver change.
15:23 sarcasticadmin joined #salt
15:23 zer0def khaije1: the provider name's saltify, for provisioning existing system
15:25 ekristen joined #salt
15:25 khaije1 zer0def: I've read of it but not yet used it. Just wondering, how are you using it?
15:26 zer0def with a cloud map
15:26 Guest73 joined #salt
15:26 zer0def there's nothing too notable about it, given how most configuration details would either have to be stored in a profile (which is clunky) or in a map
15:31 pppingme joined #salt
15:32 khaije1 zer0def: that's why I'm using the pillar for my profile and provider configs, with jinja I'm able to make it pretty re-usable
15:33 zer0def i don't have a need for it yet - i wouldn't have expected pillars to be working with salt-cloud
15:34 khaije1 yeah, there's a small twist around that ... it doesn't work with salt-cloud, but it does work with "normal" salt's execution and state cloud modules
15:34 khaije1 it's (almost, if not perfectly) equivalent though
15:35 om2 joined #salt
15:42 * khaije1 is really glad to have this working :)
15:46 zer0def huh, cute
15:46 zer0def thanks for the heads up
15:46 btorch anyone experience salt-api getting stuck ? not returning any requests back
15:52 deuscapturus joined #salt
15:54 mbrgm joined #salt
15:54 mbrgm can I render a template into a jinja variable?
15:59 aldevar left #salt
16:00 deuscapturus joined #salt
16:01 svij3 joined #salt
16:01 censorshipwreck joined #salt
16:05 Ni3mm4nd joined #salt
16:16 davisj joined #salt
16:19 WildPikachu is there a way in jinja2 to have a string match on a regex expression in an IF statement? I'm googling but after 1hr haven't found anything, looks like there is no such support
16:19 twooster mbrgm: I think you can using {% load %}
16:19 twooster WildPikachu: I don't think jinja2 supports regex out of the box.
16:20 twooster WildPikachu: You can do it using an external module
16:20 WildPikachu let me google for an example, thanks twooster
16:27 zerocoolback joined #salt
16:28 aneeshusa joined #salt
16:29 twooster WildPikachu: It's a bit of a pain. I _think_ it'll require adding `extension_modules: /srv/extmods` to the master config, then you'll place your modules in /srv/extmods/modules/helpers.py, then reference as {% if str|salt['helpers.regex_match']('^reg.ex$') %}
16:29 twooster Something like that
16:30 WildPikachu hectic, yea, let me find another way to solve my problem :D
16:30 WildPikachu 2017.7.0 looks like it has regex support tho, very nice
16:31 zerocoolback joined #salt
16:32 babilen Long overdue :)
16:35 Heartsbane robawt: ping
16:36 Edgan WildPikachu: Couldn't you just use grains for that, and put the regex in a grain, which you write in python?
16:38 jdipierro joined #salt
16:38 debian112 joined #salt
16:40 WildPikachu Edgan, I wanted to test if an IP is v6 or v4 and output appropriate [ ]'s if needed
16:40 WildPikachu I'll add a grain/pillar   isv6 or something
16:41 Edgan WildPikachu: Sounds like you want more of a custom jinja filter to test if it is v4 or v6
16:42 WildPikachu regex would have been great, but extra grain is probably going to be faster :)
16:43 Edgan WildPikachu: I don't think a grain exactly fits your use case if the ip could be an ip different than the system ip.
16:43 Edgan WildPikachu: But a custom jinja filter would probably be better than the regex
16:44 svij3 joined #salt
16:44 Edgan WildPikachu: If it is the system ip, you could just do a compare between the v4 and v6 ips reported in grains
16:45 WildPikachu Edgan, I am setting up about 2k firewall rules based on pillar data, which is treated different in different v4/v6 firewall rule files :)
16:45 WildPikachu I"m outputting accumulator data for it
16:45 Edgan WildPikachu: iptables ebtables?
16:46 Edgan WildPikachu: I would just segregate the v4 and v6 rules from each other. Then you don't have to play games with is this v4 or v6.
16:47 WildPikachu Edgan, about half pertain to both, but ya, not to worry, I'm good, thanks man :)
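A sketch of what the 2017.7 route could look like, assuming the is_ipv6 and regex_match Jinja filters listed in the 2017.7.0 release notes (the pillar key fw_allow and the rule format are hypothetical):

    {% for addr in pillar.get('fw_allow', []) %}
    {# wrap IPv6 addresses in brackets, leave IPv4 untouched #}
    {% if addr | is_ipv6 %}
    allow from [{{ addr }}]
    {% else %}
    allow from {{ addr }}
    {% endif %}
    {% endfor %}

    {# or test a string against a regex directly #}
    {% if grains['id'] | regex_match('^fw-\d+$') %}
    ...
    {% endif %}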
16:51 tapoxi joined #salt
16:58 tapoxi_ joined #salt
16:59 mbrgm left #salt
17:00 davisj Greetings.
17:00 davisj I was under the impression that you could write pillar sls which included another sls and that you could override values from the included sls. But it seems as though values from the included sls always take precedence, regardless of the chosen pillar_source_merging_strategy, or whether the include appears before or after the new data.
17:00 davisj I know that's a lot of words, so here's an example  https://gist.github.com/davisj/165ced5d0bbb8aefd76baaf75a198333
17:02 astronouth7303 that's... hm.
17:03 teratoma joined #salt
17:04 davisj The 'group' list indeed gets merged (yay) but 'shell' is still from the included file.
17:04 astronouth7303 i haven't used pillar includes, but i would think that the local value would override the included one?
17:05 astronouth7303 it probably sees that the existing value is a string and does nothing?
17:05 davisj astronouth7303: I wrote a lot of yaml with that same assumption in mind ;)
17:06 astronouth7303 i agree with you, intuitively that's weird
17:06 davisj astronouth7303: you're probably right that it ignores strings
17:06 astronouth7303 what salt versions are you running?
17:06 davisj I guess if that's the intended behavior, it's back to the drawing board.
17:07 MTecknology davisj: you didn't test before doing a full writeup? oops
17:07 astronouth7303 idk if it is intended behavior
17:07 MTecknology It's expected, ya
17:07 davisj MTecknology: I swear I tested it a looong time ago but....
17:08 astronouth7303 was there a rationale? Or did it just happen that way and now it's too late to change it?
17:08 MTecknology You can sorta merge some things, but you won't achieve what you're trying for without lotsa hacking
17:10 davisj Yeah, the idea was that each user gets a base .sls which can be included by all hosts that they need an account on. But this infra is years of organic growth and... differences....
17:13 MTecknology I try to avoid individually managed local users.
17:13 astronouth7303 you could always take the dramatic option and use one of the ext_pillars (which might have different merge strategies) or write your own (interfacing with, say, LDAP)
17:14 davisj Yeah... Gonna look at alternate pillars, (maybe pillar_stack?) before rolling my own.
17:19 hatifnatt davisj: use extend
17:20 davisj hatifnatt: I actually looked at that but the docs are a little vague and I couldn't get it to work. How would you re-write my example using extend?
17:29 wendall911 joined #salt
17:30 ChubYann joined #salt
17:30 hatifnatt davisj: I also have a little experience with extend. Try something like https://gist.github.com/hatifnatt/43e01be4269c274cc7f4492dc5cf6efb
17:31 davisj hatifnatt: Will give it a shot, Thanks!
17:39 davisj no dice
17:40 DammitJim so, my boss cloned a server that I was managing through salt
17:40 DammitJim then he changed the server name
17:40 DammitJim I have updated the /etc/salt/minion_id on the minion
17:40 DammitJim is there something else I need to do on the master?
17:41 davisj DammitJim: delete the key 'salt-key -d' from master
17:42 davisj and from minion: rm -f /etc/salt/pki/minion*
17:42 DammitJim davisj, if I delete the key, it's going to delete the key for the server my boss cloned, though, right?
17:42 davisj Try deleting it from the new minion first. restart the minion.
17:42 davisj It should generate a new key.
17:43 astronouth7303 i don't think you need to delete the key on the master (presumably, the minion that was cloned is still using it)
17:43 davisj ^^^ that
17:43 DammitJim oh, delete the /etc/salt/pki/minion* on the minion
17:43 DammitJim thanks
17:46 pabloh007fcb Hello all, im getting an error when trying to use this state to build a docker image here is the file Im trying to run on the latest version of salt. https://gist.github.com/pabloh007/bc08373097fba8e5a062ba4ea06a6a21
17:46 DammitJim got it
17:47 pabloh007fcb this is the error "State 'docker.build' was not found in SLS
17:49 DammitJim oh weird... the salt master is rejecting it
17:49 davisj pabloh007fcb: how are you applying that state?
17:52 XenophonF joined #salt
17:56 davisj DammitJim: if all else fails, you can 1) stop both minions. 2) delete both keys from master. 3) delete both keys from minons. 4) restart minions
17:56 davisj 5) accept new keys
17:56 hatifnatt davisj: sad. Is there any error, or do you simply not get the expected yaml? Actually I've never tried to use extend in pillar.
17:56 pabloh007fcb using state.highstate davisj
17:56 DammitJim I removed them again and deleted entries in the master
17:58 davisj hatifnatt: Nothing at all logged. Just silent failure. My favorite kind :)
17:59 davisj pabloh007fcb: Auto run via a schedule, or are you typing something into the command line to kick it off?
18:00 ivanjaros3916 joined #salt
18:00 hatifnatt Is there any state for pinning a package version? Or is pkg hold the only option? It can only be applied to a single package, but I need a wildcard, i.e. 'erlang*'.
18:01 beardedeagle joined #salt
18:02 davisj hatifnatt: I think pkg.installed takes wildcards for versions as of 2017.7
18:02 pabloh007fcb davisj im actually using the docker_image.present state and when the image is not found I want it to start a build process using the sls I attached.
18:03 davisj pabloh007fcb: I ask because it seems like there might be a bad argument passed. Your state name should be "build webservice image" rather than "docker.build"
18:04 XenophonF import_text + emacs epa-file makes working with encrypted Pillar data sooooooooooo much easier
18:04 XenophonF my only concern now is whether import_text is 8-bit clean
18:07 opnsrc joined #salt
18:08 Inveracity joined #salt
18:09 hatifnatt davisj: I need a wildcard for the package name, not for the version.
18:09 skatz joined #salt
18:10 davisj hatifnatt: ahh... nvm
18:11 hatifnatt Sure, I can create a file for this in '/etc/apt/preferences.d/' but maybe there is a state for this which I have missed :)
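If there is no dedicated pinning state, a minimal file.managed sketch for the preferences.d route hatifnatt mentions (the pinned version and priority are made up):

    pin-erlang-packages:
      file.managed:
        - name: /etc/apt/preferences.d/pin-erlang
        - contents: |
            Package: erlang*
            Pin: version 1:19.3*
            Pin-Priority: 1001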
18:12 skatz joined #salt
18:12 noraatepernos joined #salt
18:14 skatz joined #salt
18:15 pabloh007fcb davisj let me try to play around with the arguments
18:16 skatz Hi! I'm testing out the new salt 2017.7 release on a CentOS 6 box that has Python 2.6.6. I see that the release notes for 2017.7 say that 2.6 is no longer supported, but it does look like 2017.7 works on python 2.6 after downgrading tornado to 4.3. Are there known issues with salt 2017.7 on python 2.6 or is the dropping of support a way of saying "we won't fix any more python 2.6 specific bugs but it might work anyway"?
18:18 DammitJim does the master run something every minute?
18:18 DammitJim I see the logs printing stuff about updating roots files....
18:19 astronouth7303 skatz: I believe Your Mileage May Vary, and you're pretty much on your own.
18:19 Ni3mm4nd joined #salt
18:19 astronouth7303 DammitJim: I know I do, but that's because of the salt mine and something I did in my shell.
18:19 skatz astronouth7303: that's what I figured, thanks! just have to convince people that moving to 2.7 or 3.6 is worth the effort...
18:20 astronouth7303 2.6 was EOL'd 2013-10-29 and 2.7 should be a transparent update
18:21 DammitJim oh
18:21 DammitJim I'm just surprised that when I run highstate on a server, it takes like 2 or 3 minutes
18:21 davisj skatz: saltstack are publishing python2.7 rpms they claim won't interfere with the system python. Also, I get the impression python3 support is not fully baked.
18:22 pabloh007fcb davisj here is the main file where I call the previous sls file https://gist.github.com/pabloh007/78f751788f8420156a2b8110a9fe009f and this is the sls file that should be invoked for the build https://gist.github.com/pabloh007/bc08373097fba8e5a062ba4ea06a6a21
18:22 Eugene Wait, what? What py2.7 packages? This won't end well :-/
18:22 lordcirth_work It seems like Hubblestack doesn't work with salt 2017.7.0?
18:22 cyborg-one joined #salt
18:22 astronouth7303 DammitJim: given the amount of stuff I'm doing for just base stuff, plus application-specific stuff, i'm not surprised
18:23 skatz davisj: thanks, I'll try installing that rpm first. we're running salt-minion inside a virtualenv which is an additional wrinkle but should be doable.
18:24 astronouth7303 skatz: also, there have been 5 security vulnerabilities in 2.7 that would not have been backported to 2.6
18:24 astronouth7303 oh, you're using a venv? moving to 2.7 shouldn't be a problem
18:24 astronouth7303 install 2.7, but leave 2.6 as the default 2, and build your venv with 2.7
18:25 skatz cool, that's what i was thinking. install the new 2.7 package, create a new virtualenv that uses that python binary, should be good. all the other people who still rely on 2.6 will still be able to use it.
18:26 skatz and re: the security point, completely agreed.
18:28 astronouth7303 if you have CI, i'd suggest seeing what happens if you move it to 2.7 (my guess: Nothing. Upgrading minor releases of python should have 0 effect)
18:29 davisj pabloh007fcb: According to https://docs.saltstack.com/en/latest/ref/states/all/salt.states.dockerng.html#salt.states.dockerng.image_present "sls" is not a valid argument to docker.image_present
18:30 druonysus joined #salt
18:30 druonysus joined #salt
18:40 pabloh007fcb Hello davisj I was looking at this documentation https://docs.saltstack.com/en/latest/ref/states/all/salt.states.docker_image.html
18:41 pabloh007fcb davisj Can't seem to understand how the sls part works. I thought i would include a state file with a build process to execute but maybe Im understanding it wrong
18:48 davisj Sorry I don't have any experience with the docker states. Should the state called in your last .sls file be docker_image.build rather than docker.build?
18:48 lordcirth_work DammitJim, are you using gitfs?  Master polls it every 60s, among other things
18:49 DammitJim lordcirth_work, no
18:50 DammitJim I am more concerned with the fact that running a test state takes like 3 minutes
18:50 DammitJim anything I do against a minion takes 3 or 4 minutes
18:52 pabloh007fcb Thanks davisj ill continue working on it
18:57 lordcirth_work DammitJim, your state return should tell you how much time each took.  there's probably 1 or 2 that stand out
18:57 lordcirth_work DammitJim, or do you mean that a very simple state takes 3 min?
18:57 DammitJim Total run time: 30.356 ms
18:58 DammitJim however, since I pressed enter, it was about 3 minutes!
18:58 DammitJim yeah, a simple task... like file.managed
18:58 lordcirth_work DammitJim, ok, so the overhead is outside the state itself
18:58 lordcirth_work DammitJim, well, start running, eg, atop on the minion and watch as it runs
18:58 DammitJim that's what I'm thinking
18:59 DammitJim lordcirth_work, it happens with any minion (I have 150)
18:59 opnsrc joined #salt
19:00 lordcirth_work DammitJim, or, just use salt-call -l debug and watch where it hangs
19:00 DammitJim I would do salt-call on the minion, right?
19:00 hatifnatt DammitJim: what about the state start time? Is there a big gap between when you hit enter and the state's "Started" time?
19:00 lordcirth_work DammitJim, yes
19:01 DammitJim hatifnatt, state start time? where do I see that?
19:02 fritz09 joined #salt
19:02 hatifnatt DammitJim: Within the state output. There are a few fields: ID, Function, Name, etc.
19:03 DammitJim oh, with salt-call, I"m getting sqltreqtimeouterror, retrying
19:03 DammitJim weird
19:04 Sammichmaker joined #salt
19:04 Sammichmaker joined #salt
19:05 dendazen joined #salt
19:05 lordcirth_work DammitJim, do you have mysql settings configured for this minion?
19:08 DammitJim mysql settings? I don't think so
19:11 lordcirth_work DammitJim, salt-call config.get mysql.host
19:12 lordcirth_work DammitJim, wait, is it really sqltreqtimeouterror, or saltreqtimeouterror?
19:19 MTecknology Is there anywhere I'm not aware of where "system" is a valid command for restarting a service?  It wasn't an accidental unfinished systemctl because it's "system $app restart".
19:23 opnsrc joined #salt
19:25 beardedeagle joined #salt
19:26 opnsrc joined #salt
19:26 quique joined #salt
19:26 quique what does salt:// mean?
19:29 sjorge joined #salt
19:29 opnsrc left #salt
19:29 llua refers to the file server that the salt master runs
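In other words, salt:// paths resolve against the master's fileserver (file_roots or gitfs); a minimal sketch, with hypothetical file names:

    deploy-app-config:
      file.managed:
        - name: /etc/myapp/app.conf
        - source: salt://myapp/files/app.conf    # looked up on the master's fileserver
        - mode: '0644'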
19:30 opnsrc joined #salt
19:31 opnsrc when using state mysql_database.present in salt version 2017.7 i'm getting the error: InternalError: (1698, u"Access denied for user 'root'@'localhost'")  This is directly after a pkg.install of mariadb-server
19:32 opnsrc i found this issue reported last year: https://github.com/saltstack/salt/issues/36896 however in my case, it's occurring even w/out python-pymysql installed
19:32 nixjdm joined #salt
19:37 bakins joined #salt
19:39 seanz joined #salt
19:42 oida joined #salt
19:44 jdipierro joined #salt
19:57 N-Mi_ joined #salt
19:58 mbrgm joined #salt
20:00 mbrgm are there any efforts for a master-initiated solution to minions or a 'master of masters' initiated partial replication to other masters in less privileged network segments? practical use case is having an internal network and a DMZ. minions in the dmz should not be able to initiate connections into the internal net but rather talk to a DMZ-master, which would get its state and pillar trees _pushed_ from the
20:00 mbrgm internal master...
20:02 astronouth7303 mbrgm: you looked at syndic?
20:07 mikea mbrgm, that's exactly the sort of usecase a syndic is meant for
20:07 mbrgm isn't the replication target meant to connect to the source?
20:07 mbrgm i.e. the DMZ syndic would have to connect to the INTERNAL master?
20:08 mikea yes, but that's a single connection
20:08 mikea vs. every minion
20:08 MTecknology If you want them 100% separate, you'd want to push to them individually using some script on a box that has the ability to push to those devices
20:09 mbrgm can syndics do partial replication? i.e. replicate only part of a state/pillar tree?
20:09 MTecknology nope
20:10 mbrgm ok, so I guess the recommended solution would be to separate internal from external state/pillar data and then sync (in some way that salt doesn't offer) the external data to the DMZ master?
20:11 mbrgm i.e. using rsync or some other tool in a push-based fashion?
20:11 MTecknology I personally prefer to just have all minions talk to the master directly and allow 4505,4506 through the firewall
20:11 mikea mbrgm, gitfs on separate branches
20:11 mikea or on separate git servers
20:11 astronouth7303 you shouldn't be storing secrets in state, and ext_pillar gives you a ton of flexibility
20:12 mbrgm astronouth7303: I'm not storing secrets in state, but still, maintaining internal and dmz secrets would create the need to separate them from each other
20:12 mbrgm right?
20:12 whytewolf mbrgm: only if you're paranoid.
20:13 astronouth7303 you can assign pillar data to different minions based on any kind of matching criteria, and use the same name in states
20:13 mikea mbrgm, If you don't want the Master in the DMZ to know about the secrets for the internal servers, yes. It needs to be a separate pillar tree
20:13 whytewolf if you don't put your master in the dmz, minions only get the pillar data that is assigned to them.
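A minimal pillar top.sls sketch of that per-zone assignment (the zone grain and pillar names are hypothetical); each minion only ever receives the pillar it matches:

    base:
      'G@zone:internal':
        - match: compound
        - secrets.internal
      'G@zone:dmz':
        - match: compound
        - secrets.dmz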
20:13 mikea The way we solve for that is separate gitlab instances for each security zone
20:13 mbrgm whytewolf: still searching for my tin foil hat.. ;)
20:14 astronouth7303 states should describe the shape of your configuration, but you generally leave out/generate the specifics
20:15 mbrgm astronouth7303: ty, I'm aware of that. I'm kinda looking for a way to serve isolated network segments..
20:16 mbrgm whytewolf: well, I'm trying to keep the firewall as closed down as possible, so if there's a way around having to open it up for minion connections, I'd try to avoid this
20:16 whytewolf is it truly isolated if internal can connect to dmz?
20:16 astronouth7303 you could always do reverse tunnels
20:17 mbrgm whytewolf: no, not isolated. unidirectional SYN only wherever possible
20:17 astronouth7303 have the master open SSH tunnels to your DMZ'd syndic masters?
20:17 whytewolf salt-ssh
20:18 mikea mbrgm, Internal Master, internal git repos for states/pillars. DMZ Master, DMZ git repos for states/pillars. Sync internal state repos from internal TO dmz, keep pillar repositories separate. Open up just the syndic ports from the DMZ master to the internal master.
20:18 whytewolf or masterless where every minion has it's own git repo
20:18 astronouth7303 salt-ssh sounds like a pretty good idea, actually
20:18 mikea salt-ssh suffers the same problem as ansible
20:18 astronouth7303 which is?
20:18 mikea that ssh is not scalable and is garbage for configuration management
20:18 whytewolf it's slow
20:18 mbrgm mikea: oh no, believe me it's way faster
20:19 mbrgm been there, seen that before ;)
20:19 astronouth7303 i don't think we're talking about hundreds of DMZ'd boxes, though.
20:19 astronouth7303 kinda defeats the point of having a DMZ, actually
20:20 mbrgm mikea: "syndic ports from dmz master to internal master" -> what purpose does this have? rest sounds pretty good though! :)
20:20 astronouth7303 but that's me assuming
20:20 MTecknology astronouth7303: depending on your actual scale..
20:21 mikea mbrgm, the syndic ports from the dmz master to the internal master lets you salt \* state.apply foo on the internal master and the DMZ master will relay that to the DMZ minions.. so you can manage everything from the internal master
20:21 mikea mbrgm, all of the DMZ hosts would report to the DMZ master. The DMZ master would then become slave to the internal master.
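A minimal sketch of that layout, with hypothetical hostnames; the internal master of masters enables order_masters, and the DMZ master (which also runs the salt-syndic daemon) points syndic_master at it:

    # internal /etc/salt/master
    order_masters: True

    # DMZ /etc/salt/master (salt-syndic runs alongside salt-master here)
    syndic_master: salt-internal.example.com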
20:21 MTecknology I didn't realize syndic had a special port that it can use
20:21 mbrgm mikea: ah, so you're tackling state/pillar separation and leaving only a small portion open for control purposes
20:22 mikea mbrgm, yep
20:22 mikea secrets stay separate
20:22 mikea and you're limiting your attack surface to just a single set of ports from a single device
20:22 mikea rather than every minion
20:22 whytewolf MTecknology: defaults to 4506
20:23 robawt Heartsbane: pong
20:23 MTecknology whytewolf: I thought syndic stuff just ran across the same pub/sub that minion chatter does
20:23 mbrgm mikea: sounds feasible. after all, the state.apply could easily be relayed by an `alias dmz-salt="ssh ..."` or something similar in that case
20:24 mbrgm or does syndic also relay events and stuff, so reactor can be used?
20:24 whytewolf MTecknology: nope, it gets its own fancy port
20:24 mikea a master relays everything from itself to its syndic
20:25 mikea it basically takes the entire event bus and relays it both ways
20:25 mikea so when the syndic publishes a command, the master publishes the command to its minions
20:25 MTecknology it does magic stuff to events forwarded from a syndic, though
20:25 mikea and then the returns come back the other direction
20:26 mbrgm ah, I see..
20:26 mikea everything would 'appear' to be connected to the internal master
20:27 mbrgm talking about separate gitlab instances: are you using git_pillar/gitfs?
20:27 mikea yes
20:27 mikea we use git for everything
20:27 mikea We have a bunch of different salt masters all over the world
20:27 mbrgm what's the benefits over just checking in the state tree?
20:27 mikea gitfs was the cleanest way to keep them in sync
20:27 mbrgm oh, that was not clear... let me elaborate
20:27 mikea you check in your code to git and it's automagically deployed to 50 salt masters :-)
20:29 viq whytewolf: isn't 4506 just the default return port for normal traffic?
20:30 mikea especially if you're running a syndic, you want to make sure that your state trees are in sync
20:30 mikea and if there's more than just you maintaining the code, gitfs is the cleanest way to do that
20:30 viq Although AFAIK with syndic you won't get a notification that some of the minions that are connected to the syndic did not return
20:30 mbrgm mikea: what I'm doing now is checking in the state tree and pillar tree, with secrets encrypted, in one single git repo. this gives the ability to sync that git repo to other masters (if necessary). then setting file_roots/pillar_roots accordingly to point to the proper directories. my thoughts behind this were having a monolithic, contained and consistent state of my configuration vs having separate repos where I'd
20:30 mbrgm have to point to the proper refs
20:31 whytewolf hehe, yes it is. but it also handles syndic. and can be separated using config options.
20:31 viq whytewolf: ah, ok
20:31 viq mbrgm: gitfs also automatically maps branches and tags to environments
20:31 mikea if you have more than one git repo
20:32 mikea they all get merged
20:32 mikea so like we have our formulas in their own repositories
20:32 mikea and even pillars for specific things in their own repos
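A minimal master-config sketch of that multi-repo layout (repo URLs are hypothetical): gitfs merges the remotes into one file tree and maps branches/tags to environments, while git_pillar keeps the pillar repos separate:

    fileserver_backend:
      - gitfs
      - roots

    gitfs_remotes:
      - git@gitlab.example.com:salt/salt-states.git
      - git@gitlab.example.com:salt/nginx-formula.git

    ext_pillar:
      - git:
        - master git@gitlab.example.com:salt/salt-pillars.git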
20:33 prg3 joined #salt
20:33 mikea We started with a monolithic 'salt' repo
20:33 mikea with everything
20:33 mbrgm mikea: I used to have submodules for that, now using subtree to have the actual files in the container repo but still being able to sync formulas
20:34 mikea don't need submodules
20:34 mikea here's an example from my lab box
20:34 mikea one sec let me gist it
20:35 mbrgm I'm absolutely not trying to criticize using gitfs here! :) I'm just trying to find out in what way my thoughts about this topic have flaws in them.
20:35 mikea https://gist.github.com/mikeadamz/9316ca2bea760b302e432b434794f90c
20:35 mbrgm mikea: great :)
20:35 mikea that logdna pillar is automagically merged into everything in that salt-pillars repo
20:35 mikea and you can do the same thing with the other roots
20:36 viq mikea: I prefer to run my master as non-root user ;)
20:36 mikea the same logic applies, just refer to a different ssh key
20:36 mbrgm ok, so I think the point is about having that _automatic_ merging/updating? instead of having to run git commands in order to update?
20:37 mikea yes
20:38 mikea it also gives you an easier development, no need to deploy to a bunch of servers.. just merge into the right branch and you're done
20:38 mikea brb
20:39 mschiff joined #salt
20:39 mbrgm ok. now my approach is to have control over what gets updated when. say you're changing the formula for your loadbalancers but need to apply matching changes in the backend servers to keep it compatible. this would mean that with automatic updating, there'd be a point where the state tree would be inconsistent, right? contrary to having to run several git commands in order to update the monolithic container
20:39 mbrgm repo from the several formula repos and then deploying that. so I think it comes down to convenience vs control.. just two philosophies I guess :)
20:41 mbrgm what I really like about the gitfs is what viq mentioned: pulling different environments from different branches. that sounds really neat. :)
20:46 mikea so what we do is we have our lab pointing to a lab branch in git
20:46 mikea and when stuff is tested/ready to promote, we merge lab to master
20:47 mikea and the production masters are setup for the master branch in git
20:48 mikea We don't really have separate salt environments on the same master. We have one set of masters for lab, another for each security zone
20:48 mbrgm mikea: for the case I mentioned... updating several formulas, but all the updates need to be deployed together to be consistent, how do you ensure that your state tree has all the updates required for consistency when deployment is triggered?
20:49 mikea and we don't run syndics, we wrote our own middleware layer that keeps track of which node is communicating with which master and we have our own enterprise API to perform tasks on individual nodes
20:49 mikea gitfs does not update instantly
20:49 mikea also for sensitive things we use tags
20:51 astronouth7303 mbrgm: i think you just hit on why many teams go for monolithic repos
20:52 mbrgm well, I really like having monolithic stuff. modules are fine, but if you _can_ control the interdependencies, having things monolithic eliminates a lot of difficulties
20:53 mikea we have so many different developers doing different things
20:53 mikea so we ran into problems with people merging things from the lab branch into the master branch before they were ready
20:53 mikea because they were pushing a different change for something else
20:53 darioleidi joined #salt
20:54 mbrgm mikea: same for us. if you are not fb/google and don't have a complete, automated test harness, automated refactorings etc., monolithic is not manageable.
20:54 mikea yeah
20:54 mbrgm but it makes me jealous hearing about how they are able to manage this ;)
20:55 ronnix joined #salt
20:56 heaje joined #salt
20:59 mikea I'm sure they also cherry pick and merge specific commits
20:59 mikea and not the entire branch
21:00 mikea and I'm sure they do a lot more work in feature branches on small chunks of the repo and have a bunch of people working on different things in the same branch
21:01 mikea and not have a bunch of people working on different things in the same branch, I mean
21:01 mbrgm mikea: yeah, no other way. they have several people doing integration/release management only. :-D
21:03 mikea it was just not really workable for us
21:04 mbrgm yeah... I mean, seeing that scale of development is impressive, but I wonder if those guys ever think 'man, I'd like to fiddle around, push to master and deploy without having to open 7 jira tickets on the way' :-P
21:06 willprice joined #salt
21:06 willprice94 joined #salt
21:07 mikea btw if you haven't landed on a git server yet
21:07 mikea check out bitbucket
21:07 mikea it actually does proper jinja highlighting for salt
21:07 mikea gitlab doesn't seem to get it
21:07 mbrgm we're using gitlab for I think nearly 4 years now :)
21:08 mikea yeah, us too
21:08 mbrgm pretty happy with it. matured and grew in a good direction feature-wise. gitlab-ci is genius compared to jenkins :)
21:08 hemebond Is bitbucket the one that doesn't have search?
21:08 mikea I have been playing with bitbucket for my home lab, I am digging it
21:08 mikea bitbucket has code aware search, whatever that means
21:09 mikea I just like the look/feel of bitbucket better, maybe I'm a superficial jackass
21:09 hemebond Looking at the screenshots I'm pretty sure that's the one that didn't have search.
21:09 hemebond And so I stopped bothering with any project that used it.
21:10 hemebond Perhaps they've added search now.
21:10 hemebond No wait... what's that other code repo thing, other than git?
21:10 whytewolf it has always had a search. even when it was stash there was a search feature in it
21:10 teratoma you should be running gogs in your home lab !
21:10 Eugene Man, now there's a great bikeshedding example: I won't use X because they develop it using Y that doesn't do Z
21:11 Eugene How does that even enter into consideration
21:11 mikea lol
21:11 hemebond Are you talking about me?
21:11 Eugene I am engaging in generalized conversation relevant to something that you mentioned, yes :-p
21:12 Eugene s/relevant/tangent/
21:12 noraatepernos joined #salt
21:12 whytewolf hemebond: other code repo thing? are you talking about hg?
21:12 hemebond Mercurial yeah, maybe it was a Mercurial hosted repo thing.
21:13 hemebond Looks like they added search last year.
21:13 hemebond Bitbucket that is.
21:14 mikea bitbucket does Mercurial and git
21:14 hemebond Yeah, it was just because it was a Mercurial repo that led me to Bitbucket that I remember it that way.
21:16 * whytewolf shrugs i know stash had a search feature but i stopped using it about the time it got merged with bitbucket. [not because of the merge]
21:16 mikea I use bitbucket for my home stuff because it's free unlimited repos
21:16 mikea I'd use github, but I'm a cheap bastard
21:17 whytewolf mikea: that is why i used to use it. but github gave me a free account. then they expanded the basic account to unlimited private repos
21:17 hemebond "free unlimited repos" you mean private?
21:17 mikea yeah
21:17 mikea sorry unlimited private repos
21:17 hemebond I didn't know it was unlimited private now.
21:19 whytewolf Plan
21:19 whytewolf Developer – Unlimited private repositories
21:19 whytewolf Coupon
21:19 whytewolf You have an active coupon for $7.00 off forever.
21:19 astronouth7303 whytewolf: nice.
21:20 whytewolf yeah with a deal like that. it is difficult to spend even the $10 a year for a home installed copy of bitbucket
21:20 mikea where is this coupon? they're trying to charge me $7 a month
21:21 mikea bitbucket hosted is free
21:21 mikea with unlimited private repos
21:21 whytewolf it was a special promotion they gave me
21:21 mikea oh
21:22 whytewolf and bitbucket hosted wasn't always free.
21:22 mikea it has been for the last 5 years at least
21:23 whytewolf humm. they wanted $10 a month for me
21:23 whytewolf that was 3 years ago
21:23 mikea lets see if I can find my sign up email
21:23 mikea ive never paid for it
21:23 mikea and I've had it as long as I can remember
21:24 mikea 3/28/12
21:24 mikea that's when I signed up
21:25 hatifnatt Is it normal that when incorrect pillar name specified in top pillar salt return almost useless traceback? https://gist.github.com/hatifnatt/42586ba3eb0001f89e0799323f0409e0
21:26 whytewolf humm, strange. like i said they wanted to charge me 10 a month. which was why i had originally bought stash cause it was cheaper at only 10 a year.
21:27 mikea i dunno, pretty cool either way
21:32 ecdhe joined #salt
21:32 dxiri_ joined #salt
21:36 mikea anyone need any part time help with salt-related things? I'm lookin' for some side hustle work if it's available.
21:38 willprice joined #salt
21:40 willprice joined #salt
21:56 cro joined #salt
21:57 dnull joined #salt
21:59 btorch anyone here uses saltpad ?
22:06 ahrs joined #salt
22:12 wych joined #salt
22:13 mikea no, but it looks nice
22:14 ProT-0-TypE joined #salt
22:15 btorch yeah looks better than molten but molten actually worked :) finally got saltpad up but can't list minions to execute anything
22:16 mikea We have an external job cache that we create from the event bus,
22:16 mikea I wonder if I can take saltpad and retool it to read that data
22:19 btorch I wonder if there are any other salt uis out there besides saltpad (looks dead) and molten
22:20 whytewolf you could resurrect halite
22:20 btorch hehe
22:22 whytewolf anyway, salt-ui, enterprise salt, saltpad, molten, custom jobs using libpepper, custom jobs using curl, custom jobs using local python libs
22:24 mikea We're looking for something similar to what foreman does for puppetdb
22:24 whytewolf saltvirt, i hear foreman does some salt ui jobs.
22:25 sh123124213 can I have multimaster-multisyndic where syndics are half with tcp transport and half with zmq ?
22:26 whytewolf sh123124213: i don't see why not as long as a minion doesn't try connecting to both a zmq and a tcp transport master
22:26 btorch https://github.com/theforeman/foreman_salt
22:27 mikea hmm
22:28 sh123124213 whytewolf: I wouldn't think that would be an issue but I'm not sure if syndics can connect to upper level masters and forward messages correctly
22:28 whytewolf sh123124213: they should be able to.
22:28 whytewolf syndic only uses the 4506 receive port
22:28 btorch haven't used theforeman.org but seems interesting
22:32 sh123124213 hmmm but : tcp (experimental).
22:33 whytewolf it is less experimental than raet
22:33 whytewolf kinda just wish they would make the transport pluggable. but that could introduce overhead
22:34 hemebond I thought transport was pluggable now.
22:35 whytewolf not really. it has one of three types but you can't introduce your own.
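For reference, the transport is picked per master/minion pair in config rather than plugged in, so a mixed fleet would look something like this sketch (each minion or syndic must use the same transport as the master it connects to):

    # /etc/salt/master on the TCP-transport masters
    transport: tcp

    # /etc/salt/minion on the minions (and the syndics' minion-side config) pointing at them
    transport: tcp

    # masters and minions left on the default simply omit the option (zeromq)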
22:38 watersoul joined #salt
22:41 davidtio_ joined #salt
22:41 baffle_ joined #salt
22:43 ilbot3 joined #salt
22:43 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.6, 2017.7.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers <+> The call for speakers for SaltConf17 is now open: http://tinyurl.com/SaltConf17
22:43 lazybear joined #salt
22:43 pppingme joined #salt
22:43 whytewolf did i just have lag that cleared. cause i just saw like 40 part/joins in a second
22:44 pabloh007fcb Hello all, I have a quick question: under the new docs for salt version 2017.7.0 I don't see a way to pass arguments to the build process for docker_image.present, is there another way of achieving this?
22:45 jhujhiti joined #salt
22:46 tehsu joined #salt
22:46 pabloh007fcb Here is the file im trying to use
22:46 pabloh007fcb https://gist.github.com/pabloh007/e630b9f76df80b88466acba5b25702dc
22:46 pabloh007fcb It appears that the version is ignored and not passed to the build process.
22:46 kiltzman joined #salt
22:47 nledez joined #salt
22:47 nledez joined #salt
22:47 stotch joined #salt
22:47 valkyr2e joined #salt
22:47 lubyou joined #salt
22:48 brent joined #salt
22:48 bantone joined #salt
22:49 Micromus joined #salt
22:49 nkuttler joined #salt
22:49 nku joined #salt
22:49 LostSoul joined #salt
22:50 dunz0r joined #salt
22:51 whytewolf pabloh007fcb: looking at the code. looks like that was missed. file a bug report. currently looks like the state only takes path, image, and dockerfile.
22:51 whytewolf https://github.com/saltstack/salt/blob/v2017.7.0/salt/states/docker_image.py#L204-L208
22:53 whytewolf it doesn't even pass kwargs to the docker.build module
22:53 pabloh007fcb thanks, will do whytewolf. a probable fix would be to pass kwargs to the build process so it can map any of the other fields.
22:56 whytewolf agreed. it is a simple enough fix that it should in practice make it to 2017.7.1 but with them already working on some major bugs and this one being minor it might not.
23:03 tom29739 joined #salt
23:25 rpb joined #salt
23:30 dxiri joined #salt
23:34 frew is it normal for a highstate that does nothing to take 5-10 minutes?
23:34 frew (I mean I know that it's actually checking the state of the system, just nothing changed)
23:36 noraatepernos joined #salt
23:37 woodtablet joined #salt
23:39 whytewolf sometimes yeah. pkg.installed will do a package manager cache update [makecache for yum or apt-get update on debian based] and will also transfer files and do comparisons on rendered versions. things like that. the more states the longer it could take.
23:43 frew hm.
23:44 whytewolf although 5-10 min does seem rather long
23:44 whytewolf most of my installs don't take that long even when there are changes
23:45 frew lol well a *fresh* install takes a full 40m
