
IRC log for #salt, 2018-04-17


All times shown according to UTC.

Time Nick Message
00:00 onslack joined #salt
00:10 mbologna joined #salt
00:12 asoc joined #salt
00:13 stooj joined #salt
00:14 Whissi joined #salt
00:14 tom29739 joined #salt
00:14 sjorge joined #salt
00:19 noobiedubie joined #salt
00:24 gmoro_ joined #salt
00:31 StolenToast joined #salt
00:56 tiwula joined #salt
01:04 pcdummy joined #salt
01:04 pcdummy joined #salt
01:12 noobiedubie joined #salt
01:21 cyborg-one joined #salt
01:23 cyborg-one left #salt
01:25 John_Kang joined #salt
01:26 sjorge joined #salt
01:26 Kelsar joined #salt
01:32 reyu joined #salt
01:38 reyu Is there a secret to getting the consul ext_pillar working? I've gone over the docs, and some other sites, with no luck at all. I have a fresh install of 2018.3.0 and Consul v1.0.6 on a brand new vm, and the only message I can get is "[CRITICAL][11189] Specified ext_pillar interface consul is unavailable"
01:56 ilbot3 joined #salt
01:56 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2017.7.5, 2018.3.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
01:59 antpa joined #salt
02:38 orichards joined #salt
02:44 whytewolf reyu: in the python that the saltmaster runs on, try import consul
02:47 whytewolf depends:
02:47 whytewolf python-consul
02:47 reyu Yeah, I got it a bit ago. Missed the little line at the top of the doc that listed that.
02:47 reyu Thanks anyway
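[For reference: the consul ext_pillar only loads if the python-consul library imports cleanly in the Python environment the salt-master runs under, which is what whytewolf's "import consul" check verifies. A minimal sketch of the master config it expects, with placeholder host/port values:

    ext_pillar:
      - consul: my_consul_config

    consul_config:
      my_consul_config:
        consul.host: 127.0.0.1
        consul.port: 8500
]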
02:48 shiranaihito joined #salt
02:48 whytewolf sorry, would have helped sooner, but was playing a table top RPG version of fallout with friends over video chat
02:48 reyu No problem. Got it eventually, that's what matters.
03:05 cgiroua joined #salt
03:28 zerocoolback joined #salt
04:02 qman joined #salt
04:02 swa_mobil joined #salt
04:04 matti_ joined #salt
04:05 zerocool_ joined #salt
04:05 cgiroua_ joined #salt
04:05 PFault joined #salt
04:08 shortdudey123 joined #salt
04:08 noobiedubie joined #salt
04:08 Edgan joined #salt
04:13 sjohnsen_ joined #salt
04:15 dlloyd- joined #salt
04:17 om2 joined #salt
04:19 justanotheruser joined #salt
04:26 stooj joined #salt
04:37 sauvin joined #salt
04:38 om2 joined #salt
04:48 golodhrim|work joined #salt
04:57 gomerus[m]1 joined #salt
05:00 zerocoolback joined #salt
05:18 rtr63gdh[m] joined #salt
05:18 alj[m] joined #salt
05:18 hoverbear joined #salt
05:28 armyriad joined #salt
05:29 toofoo[m] joined #salt
05:29 jerrykan[m] joined #salt
05:32 armyriad joined #salt
05:32 fujexo[m] joined #salt
05:33 Tenyun[m] joined #salt
05:36 aboe[m] joined #salt
05:36 viq[m] joined #salt
05:37 glock69[m] joined #salt
05:39 golodhrim|work joined #salt
05:40 sxar joined #salt
05:41 gomerus[m] joined #salt
05:42 Hybrid joined #salt
05:44 systeem[m]1 joined #salt
05:44 benasse joined #salt
05:45 benjiale[m] joined #salt
05:45 freelock joined #salt
05:46 ThomasJ|m joined #salt
05:46 atmoz joined #salt
05:47 Processus42 joined #salt
06:00 tyx joined #salt
06:06 DanyC joined #salt
06:11 Elsmorian joined #salt
06:12 hoonetorg joined #salt
06:12 DanyC joined #salt
06:19 DanyC joined #salt
06:23 Tucky joined #salt
06:36 lompik joined #salt
06:39 lompik joined #salt
06:41 tzero joined #salt
06:41 DanyC joined #salt
06:44 Pjusur joined #salt
06:47 lompik joined #salt
06:57 lompik joined #salt
07:01 aldevar joined #salt
07:05 sayyid9003 joined #salt
07:11 k1412 joined #salt
07:15 Hybrid joined #salt
07:19 aviau joined #salt
07:24 lompik joined #salt
07:28 orichards joined #salt
07:30 Ricardo1000 joined #salt
07:31 tyx joined #salt
07:39 darioleidi joined #salt
07:39 jrenner joined #salt
07:48 stelucz joined #salt
07:52 rollniak joined #salt
07:55 mikecmpbll joined #salt
07:58 orichards joined #salt
08:00 orichards joined #salt
08:03 orichards joined #salt
08:03 DanyC joined #salt
08:05 briner_ joined #salt
08:07 briner joined #salt
08:19 Ricardo1000 joined #salt
08:21 lompik joined #salt
08:25 orichards joined #salt
08:30 CrummyGummy joined #salt
08:30 orichards joined #salt
08:39 briner joined #salt
08:39 stooj joined #salt
08:45 alex-zel joined #salt
09:10 nielsk joined #salt
09:23 mikecmpb_ joined #salt
09:28 zulutango joined #salt
09:40 Hybrid joined #salt
09:48 eightyeight joined #salt
09:50 copec joined #salt
09:51 alex-zel hi, i've create a thorium file to save a register but it doesn't seem to be working
09:51 alex-zel I have a reactor watching the same event and it is working, but for some ready "reg.list" doesn't
09:51 alex-zel reason*
09:53 aviau joined #salt
10:00 fl3sh joined #salt
10:35 Hybrid joined #salt
10:38 darioleidi joined #salt
10:44 bluenemo joined #salt
10:47 zulutango joined #salt
10:48 xet7 joined #salt
10:57 rollniak joined #salt
10:59 xet7 joined #salt
11:24 mymtw joined #salt
11:27 rofl____ joined #salt
11:42 briner joined #salt
11:49 alex-zel is the thorium reactor even working?
11:52 bluenemo joined #salt
11:59 rollniak joined #salt
12:02 zerocoolback joined #salt
12:09 Nahual joined #salt
12:09 Nahual joined #salt
12:26 lompik joined #salt
12:44 pahizz joined #salt
12:45 pahizz left #salt
12:46 pahiz joined #salt
12:46 pahiz Hi
12:46 pahiz Is there an easy way to add a certain key to VM templates so that saltmaster will accept the new minion automatically?
12:49 EthPyth joined #salt
12:50 mchlumsky joined #salt
12:55 Elsmorian joined #salt
12:57 dendazen joined #salt
12:57 mchlumsky joined #salt
12:59 Cadmus pahiz: We're doing it with an auth-pending reactor, let me find something
13:00 Cadmus Okay, I didn't set this up and I have no idea what I'm looking at, but that's how we're doing it
13:00 Cadmus Or there's this method https://docs.saltstack.com/en/latest/topics/tutorials/preseed_key.html
13:01 Cadmus You'll have to put those into whatever you're using to make your VM templates though. What are you using? VMWare?
13:02 ecdhe joined #salt
13:03 xet7 joined #salt
13:03 jerematic joined #salt
13:03 Cadmus Wait, that's for a single minion, yes I'd look at the reactor method
13:12 pcn Not having this in 2018.3.0 seems to make it spin really hard: https://github.com/saltstack/salt/pull/46878/files
13:12 pcn Is there going to be a .1 at some point to fix that?
13:13 darkalia Elsmorian: https://github.com/saltstack/salt/issues/47118
13:14 darkalia For the issue we spoke yesterday about
13:14 oida joined #salt
13:14 gh34 joined #salt
13:15 Elsmorian @darkalia Ah brilliant, thanks for that!
13:16 darkalia I ended up writing a script that generates everything so people who want to test it out can do it quickly
13:16 darkalia Surprisingly, once stripped of all my personal stuff, it takes between 30 seconds and 1 minute to connect
13:17 pcn alex-zel: thorium is still experimental.  I started working with it a bit last year and decided that for now, at least, it's better to write an engine to do what I need.
13:17 darkalia which is high anyway
13:24 alex-zel pcn: I can't even make the example in the docs work
13:25 aphor joined #salt
13:33 lompik joined #salt
13:39 racooper joined #salt
13:44 noobiedubie joined #salt
13:45 AngryJohnnie joined #salt
13:49 tiwula joined #salt
14:00 jken_ joined #salt
14:02 jken_ left #salt
14:02 jken joined #salt
14:03 jken Hello, I want to use Salt rather than puppet to manage a fleet of deployed IOT devices. However, I have a hard requirement that states the devices can only communicate with the master on port 443. I see salt requires communication over 2 ports rather than one due to its transport layer using zeromq. Does anyone know if it's possible to use salt with only a single port exposed?
14:06 mianosm https://docs.saltstack.com/en/getstarted/system/communication.html
14:06 mianosm Might shed more light on that for you.
14:07 DammitJim joined #salt
14:08 jken I've been reading that, and if I understand it correctly both ports are going to be required.
14:10 msmith joined #salt
14:11 dxiri joined #salt
14:14 Elsmorian joined #salt
14:16 Miuku On the master only.
14:18 msmith left #salt
14:19 briner joined #salt
14:23 Miuku Naturally you can define the ports yourself, they do not have to be those two specific ones.
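[A sketch of the settings Miuku is referring to, with the defaults shown; both ports live on the master side, and each minion only needs outbound access to them:

    # /etc/salt/master
    publish_port: 4505    # ZeroMQ publish channel the minions subscribe to
    ret_port: 4506        # request/return channel

    # /etc/salt/minion
    master: salt.example.com   # placeholder hostname
    publish_port: 4505
    master_port: 4506
]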
14:24 cgiroua joined #salt
14:28 Blender joined #salt
14:30 jken Unfortunately, I have requirements that only port 443. So I can't even have it use 80 and 443 :(
14:32 Elsmorian joined #salt
14:35 mikecmpbll joined #salt
14:39 zerocoolback joined #salt
14:40 ecdhe joined #salt
14:42 Miuku Bind it to two different IPs on 443 ;-)
14:43 pahiz Thanks Cadmus I'm looking at reactor now :)
14:45 pahiz I do use vmware
14:45 pahiz vcenter to deploy so I'm also thinking of trying out salt cloud
14:46 ponyofdeath joined #salt
14:55 zerocoolback joined #salt
15:10 viq joined #salt
15:12 colttt joined #salt
15:12 bluenemo joined #salt
15:17 feld joined #salt
15:18 awerner joined #salt
15:34 feld joined #salt
15:35 ecdhe joined #salt
15:36 feld left #salt
15:38 zerocoolback joined #salt
15:43 dezertol joined #salt
15:48 MTecknology jken: that two-ip binding thing /might/ work... but it'd probably be easier to select a different transport entirely. Thankfully, the transport system is yet another pluggable interface within salt, so you might be able to find something like the raet or another transport, or you might be able to write your own.
15:48 pahiz My new keys still go to unaccepted keys even when I try using reactor to add them with "reactor: - 'salt/minion/*/start': - salt://reactor/accept-key.sls"
15:49 pahiz accept-key.sls has wheel.key.accept in it the way docs say
15:52 jken MTecknology, with that in mind, would using the existing TCP transport solve my problem?
15:57 whytewolf raet is dead
15:57 jken Are 2 ports still required with the TCP transport?
15:58 jken docs are unclear
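[Switching transports is a single setting on both master and minion, though the TCP transport appears to keep the same two-port publish/return model as ZeroMQ, so on its own it would not satisfy a 443-only requirement; a sketch:

    # /etc/salt/master and /etc/salt/minion
    transport: tcp
]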
16:02 AngryJohnnie joined #salt
16:03 codewaffle joined #salt
16:06 pahiz Never mind, I didn't know I could accept all new keys with the master configuration option "auto_accept: True"
16:07 MTecknology jken: I don't know crap about the alternate transport options. It might be easiest to just try on a two node setup.
16:08 MTecknology pahiz: the reactor option should work, it's there so you can do some sanity checking instead of just blindly accepting all keys.
16:09 pahiz For some reason I didn't manage to get it working at all
16:09 pahiz I don't know much about salt and I'd like to get started as fast as possible
16:09 whytewolf because the start tag is the wrong tag.
16:09 whytewolf the start tag is for AFTER a minion is authenticated
16:09 pahiz Also this is for a home lab so I'm not overly concerned about rogue minions
16:09 pahiz Ahh
16:10 pahiz What should have it been whytewolf?
16:10 whytewolf salt/auth
16:10 pahiz Nothing more than that?
16:10 whytewolf https://docs.saltstack.com/en/latest/topics/event/master_events.html#authentication-events
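[A minimal sketch of what whytewolf is describing, with placeholder file paths: the reactor maps salt/auth, which fires while a key is still pending, to an SLS that accepts it; salt/minion/*/start only fires after a minion is already authenticated, which is why the original mapping never accepts anything:

    # /etc/salt/master.d/reactor.conf
    reactor:
      - 'salt/auth':
        - /srv/reactor/accept-key.sls

    # /srv/reactor/accept-key.sls
    {% if 'act' in data and data['act'] == 'pend' %}
    accept_pending_key:
      wheel.key.accept:
        - match: {{ data['id'] }}
    {% endif %}
]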
16:10 MTecknology My home lab beats the crap out of anywhere else when it comes to security. It's where I put best practices in play and make sure all my theories work.
16:11 pahiz Thank you whytewolf, I'll make sure to go back to reactor when I manage to play a bit more with salt in this configuration
16:11 whytewolf honestly, i don't use reactors. I use saltify or salt-cloud, or salt-ssh to put a key in place using pre-generated keys.
16:12 pahiz I wonder if salt-cloud is the way I should go since I only use vmware to deploy my VMs
16:13 MTecknology salt-cloud is turning into a sexy beast
16:13 jken MTecknology, how would a multi master setup help me?
16:13 MTecknology jken: I don't remember saying it would
16:13 pahiz I wonder how easy it is to use that with my current way of deploying vm templates in vcenter
16:14 whytewolf jken: MT meant a test 2 node setup. a master and a minion
16:14 pahiz I guess I'll find out sooner rather than later
16:14 jken MTecknology, I must be misunderstanding then. What did you mean by " It might be easiest to just try on a two node setup."?
16:14 MTecknology ah.. ya, what whytewolf said
16:14 jken ahh
16:15 whytewolf it is a common quick way of finding the basics of how things work
16:15 alex-zel anyone has any experience with GitFS? It seems that setting disable_saltenv_mapping breaks even the base env, can't even see top.sls
16:17 MTecknology alex-zel: share your config? How are you doing the mapping now?
16:18 alex-zel https://paste.ubuntu.com/p/TTjkFq89CY/
16:19 JacobsLadd3r joined #salt
16:19 MTecknology You disabled mapping, and then you gave it no mapping information...
16:19 alex-zel I've tried adding "- base: master"
16:20 alex-zel this also doesn't work https://paste.ubuntu.com/p/gCyQkfx5bH/
16:20 MTecknology OMG!! I just saw gitfs_saltenv_whitelist! This... this is a goooood day. :D
16:21 alex-zel also from what I understood from the docs, the master branch is always mapped to the base env
16:22 MTecknology My discovery has only been available for approx. 4 years. :P
16:22 alex-zel from the docs about disable_saltenv_mapping: When set to True, all saltenv mapping logic is disregarded (aside from which branch/tag is mapped to the base saltenv)
16:23 alex-zel so this "- base: master" should tell salt to map master branch to base env
16:23 Mousey joined #salt
16:23 MTecknology It sounds like it should, ya. I don't have any answer, though.
16:24 whytewolf it is a new option. it might be buggy.
16:24 Edgan MTecknology: All the options I had to set to get gitfs in a happy place for my circumstance, https://pastebin.com/AypSm26h
16:26 alex-zel I'll try to whitelist
16:27 alex-zel I've been trying to get thorium to work with no luck for hours, and after playing around with GitFS somehow thorium started using the top file from salt root :/
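[For anyone following along, a sketch of the two gitfs variants being discussed (repo URL is a placeholder); per the 2018.3 docs, disable_saltenv_mapping keeps only the branch mapped to base, while gitfs_saltenv_whitelist limits which branches become saltenvs without turning the mapping off entirely:

    fileserver_backend:
      - gitfs

    gitfs_remotes:
      - https://example.com/org/salt-states.git:
        - base: master
        - disable_saltenv_mapping: True

    # alternative: keep the mapping but expose only selected envs
    gitfs_saltenv_whitelist:
      - base
]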
16:30 AngryJohnnie joined #salt
16:30 AstraLuma joined #salt
16:38 JacobsLadd3r joined #salt
16:39 DanyC joined #salt
16:39 MTecknology A syndic process is supposed to connect to its MoM as if it's just another minion, isn't it? (as in, if the connection was made, I should expect to see a minion key?)
16:40 DanyC joined #salt
16:41 zer0def as far as i can tell from my little poking around, yes; it also should cough up all of its minion keys straight into the accepted list
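[A sketch of the syndic wiring being described (hostnames are placeholders): the syndic host runs salt-master plus salt-syndic and points syndic_master at the MoM, which in turn needs order_masters enabled; once connected, the syndic's key shows up on the MoM like any other minion key:

    # master of masters: /etc/salt/master
    order_masters: True

    # syndic host: /etc/salt/master
    syndic_master: mom.example.com
]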
16:45 MTecknology I swear this feels like a firewall problem, but not even my firewalls are showing any bits. :(
16:49 ciastek joined #salt
16:51 Edgan MTecknology: Can one master connect to the other on tcp 4505? telnet hostname 4505 ?
16:53 MTecknology nope, it can't.. definitely a firewall problem then, but I can't figure out which firewall needs to change and what rule needs to be written. :(
16:53 MTecknology such a mess of an environment :(
16:53 MTecknology made harder by the destination being an aws instance with an internal address.
16:54 Edgan MTecknology: You don't have dns resolution in AWS?
16:55 Edgan MTecknology: and between AWS and not AWS?
16:55 MTecknology I don't think I follow the questions..
16:55 MTecknology gotta run for 2-4 min..
16:56 Edgan MTecknology: I wouldn't be using something like 10.10.1.199 for a salt master. I would be using something like salt-salt-master-01.staging.us-west-2.aws.acme.com
16:57 sanshaloo joined #salt
16:59 justinl joined #salt
17:01 stooj joined #salt
17:02 MTecknology holy crap... acme.com is one heck of a terrible website
17:03 Uni bet it renders great in lynx
17:04 MTecknology probably better than in a web browser
17:04 MTecknology Edgan: is your argument against using the private address or against using an ip address?
17:05 Edgan MTecknology: in AWS, it should be private, because it should be connected via a VPN. I am advocating for not using ips, because they should be dynamic in AWS. Which means using DNS in some form.
17:05 MTecknology I'm using DNS, ya
17:06 Edgan MTecknology: How you used private address made it sound like you were using an ip. It shouldn't matter if it is private if there is a VPN.
17:06 Elsmorian joined #salt
17:06 MTecknology ...
17:07 justinl left #salt
17:07 justinl joined #salt
17:08 AngryJohnnie joined #salt
17:08 justinl Hi all, I have a question about setting up a generalized solution for managing logrotate config files.
17:09 justinl I'm looking at https://github.com/salt-formulas/salt-formula-logrotate#cross-formula-relationship, but I'm confused by the statement "It's possible to use support meta to define logrotate rules from within other formula." What does "support meta" mean in this context?
17:10 Edgan justinl: I think they are saying you can extend one formula from another
17:10 MTecknology Apparently our existing setup is using the public address instead of the private one. grrr
17:11 Edgan justinl: As in the logrotate service can live in the logrotate formula, but then you extend it by adding things for it to watch
17:11 DanyC_ joined #salt
17:11 lompik joined #salt
17:12 Edgan justinl: So the service gets restarted when new configuration files get added, or existing ones changed
17:12 justinl Ok, I think I understand. Never tried anything along those lines but I'll look into the proper approach for doing so.
17:12 justinl Basically I'm trying to figure out how to create e.g. /etc/logrotate.d/apache2 depending on the role(s) of a given server.
17:12 Edgan justinl: Though in the case of logrotate, doesn't quite apply
17:12 Edgan justinl: In my experience logrotate isn't a service, but a cron job, so no need to restart it
17:13 justinl Right, I'm referring to having the relevant formulas, applied based on minion roles, dropping in their own /etc/logrotate.d config files.
17:13 Edgan justinl: I would add /etc/logrotate.d/apache2 to the apache2 formula. If you want to be fancy you can use one formula's map.jinja in another to pull in the /etc/logrotate.d path
17:14 Edgan justinl: https://cygnusx-1.org/formula.txt  see how I use systemd/map.jinja in application/map.jinja?
17:15 Edgan justinl: Then if something like /etc/systemd/system becomes /etc/systemd/newdir, I can change it in one place and everything updates
17:15 justinl I actually have that page bookmarked but haven't looked at it in a while. Cool that you're the author. :) Let me take a closer look at it right now.
17:17 quique joined #salt
17:18 justinl Ok, I think I get the gist of it from a quick skim. Never used load_yaml so I'll have to go over the docs but this should help!
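[A minimal sketch of the pattern Edgan describes, with hypothetical formula and variable names: the apache formula imports the logrotate formula's map.jinja so the drop-in directory is defined in exactly one place:

    {# apache/logrotate.sls #}
    {% from "logrotate/map.jinja" import logrotate_settings with context %}

    apache2-logrotate-config:
      file.managed:
        - name: {{ logrotate_settings.include_dir }}/apache2
        - source: salt://apache/files/logrotate-apache2
        - user: root
        - group: root
        - mode: '0644'
]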
17:18 quique when using the boto_ec2 states I get failures like Failed to create instance and nothing more.  using -l debug doesn't give anything more useful.  How do I get useful debugging?
17:19 Edgan quique: Always use trace instead of debug when debugging
17:22 Edgan quique: What version of salt are you using?
17:22 quique Edgan: salt 2018.3.0 (Oxygen)
17:22 quique trying trace
17:24 quique Edgan: unfortunately that's not any better
17:25 MTecknology Edgan: I'm solving this one by giving up and letting someone more familiar with the existing rules figure it out. :D
17:25 quique Edgan: is this an issue: None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler', were found in the configuration. Not loading the Logstash logging handlers module
17:25 quique do i need that to get better logging?
17:26 Edgan quique: looking master or minion side?
17:26 quique minion
17:27 Edgan quique: How are you using trace exactly?
17:27 quique Edgan: salt '*' state.apply aws.vpcs.tools1.* -l trace
17:28 phtes joined #salt
17:28 Edgan quique: That looks like a master command, not a minion command
17:29 quique Edgan: the command is run on the master, but i believe it's being executed on the master as a minion?
17:29 quique perhaps I'm wrong
17:30 Edgan quique: yes, but you are getting master output when you want minion output
17:30 quique Edgan: how do I get minion output?
17:31 quique I've tailed the minion logs
17:31 quique and it's not any different
17:32 Edgan quique: either set the log level to trace for the minion or run salt-minion -l trace
17:32 wongster80 joined #salt
17:36 Edgan quique: masters just figure out what a minion will run, and what the pillars are
17:36 Edgan quique: Pretty much everything else is executed minion side
17:38 MTecknology isn't top.sls also rendered minion side?
17:40 Elsmorian joined #salt
17:40 Edgan MTecknology: Not sure, but it probably comes down to "that's complicated", since the master has to do merging of state top.sls in the case of gitfs
17:40 tiwula joined #salt
17:40 Edgan MTecknology: But if the jinja in a top.sls is rendered minion side, not sure
17:43 DanyC joined #salt
17:45 gswallow joined #salt
17:46 ponyofdeath joined #salt
17:47 pahiz Was there an option to run state only once?
17:48 pahiz For example if I wanted salt to clear all logs when it first gets highstate
17:49 whytewolf pahiz: normally that kind of thing you either wrap in jinja, for logic into with unless & onlyif. or use orchestration for firsttime setup and then leave highstate for maintaince
17:50 whytewolf s/for logic/force logic/
17:50 whytewolf salt. we hand you a gun and the bullets. it is up to you which body part you want to shoot.
17:51 pahiz Thanks
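[A minimal sketch of the unless pattern whytewolf mentions (the paths and the log-clearing command are placeholders): the state creates a marker file on its first successful run, and the unless check skips it on every later highstate:

    clear-logs-once:
      cmd.run:
        - name: 'find /var/log -type f -name "*.log" -exec truncate -s 0 {} + && touch /var/local/.logs-cleared'
        - unless: test -f /var/local/.logs-cleared
]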
17:58 mikecmpbll joined #salt
18:03 Sokel joined #salt
18:06 Sokel If I run salt host grains.setvals "{'foo': 'bar', 'bar': 'foo'}" - this works correctly. If I try it using the salt api, I always get "setvals grains must be a dictionary" - am I doing something wrong here? https://paste.fedoraproject.org/paste/PjilXjyECU4CzmRz7DBwcA
18:14 rollniak joined #salt
18:20 esteban joined #salt
18:23 Edgan Sokel: Try kwarg= not arg=
18:24 Edgan Sokel:  "LocalClient also takes arg (array) and kwarg (dictionary) arguments" from https://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_cherrypy.html
18:33 ymasson joined #salt
18:47 nbari I am trying to update the kernel on a centos machine with pkgs: - kernel but it is not working
18:47 nbari I found that there is a kernelpkgs module
18:47 nbari therefore wondering how to install the kernel/kernel-devel and if needed reboot
18:47 nbari any ideas ?
18:50 Edgan nbari: It is guaranteed you have a kernel installed if you're running Salt. So for the kernel package specifically, why do you need to manage it?
18:50 Edgan nbari: You want to manage the version/upgrade process?
18:50 whytewolf also, define "not working"
18:51 nbari good point whytewolf, so I am trying to create a formula for beegfs, therefore I do these steps "yum install kernel kernel-devel gcc -y"
18:52 nbari now I notice that having 'install-kernel': pkg.installed: pkgs: - kernel is not doing a yum install kernel
18:52 Edgan nbari: kernel is assumed by kernel-devel
18:52 whytewolf pkg.installed: pkgs: -kernel means hey install the kernel if it isn't installed. that will NOT update
18:53 whytewolf like Edgan said earlier. kernel is installed
18:53 nbari if I do a yum kernel-devel the recipe does install it, indeed if I go back to the terminal of the server and run a yum kernel update I get a (Nothing to do)
18:53 nbari mmm I will then give a try to the headers
18:53 whytewolf try pkg.latest
18:53 whytewolf that will update the kernel
18:54 mpanetta joined #salt
18:54 nbari but having that in a state when a new kernel comes out will install it automatically ?
18:54 whytewolf well, once the state is run
18:54 firefly_ joined #salt
18:54 Edgan nbari: Note, use pkg.latest carefully, in general, it can lead to BAD things when it comes to things like database software
18:55 whytewolf ^
18:55 nbari right, I would like to prevent that
18:56 nbari so then what about kernelpkg ?
18:56 * whytewolf shrugs never heard of it
18:57 MTecknology I think I've used pkg.latest exactly once ever.
18:57 firefly310 joined #salt
18:57 nbari Makefile:117: *** Linux kernel build directory not found. Please check if the kernel module development packages are installed for the current kernel version. (RHEL: kernel-devel; SLES: linux-kernel-headers, kernel-source; Debian: linux-headers).  Stop.
18:57 nbari I know that if I do yum install kernel
18:57 nbari that will fix the problem
18:58 nbari but want to automate that in a recipe and find a way how to get the current development package
18:58 Edgan nbari: you might want something like, https://pastebin.com/N4Xx3N8A
18:58 whytewolf yes, yum install kernel installs the latest kernel.
18:58 Edgan nbari: I think you have a different problem
18:58 whytewolf pkg.installed. actually asks yum, hey is this package installed? yes? okay then i don't need to do anything
18:58 joseph joined #salt
18:59 Edgan nbari: You are running say 4.14.0.1, but you install kernel-devel for 4.14.0.2, and the  compile might want to have kernel-devel for the running kernel
19:00 nbari Edgan: you are right, indeed that's my problem, I think I just need to find and install kernel-devel for the existing kernel
19:01 whytewolf or, you could set the version
19:01 whytewolf pkgs: - kernel: 3.10.0-693.el7
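[whytewolf's one-liner expanded into a full state (the version string is just the example value): pinning the version with pkg.installed keeps the kernel from silently jumping forward the way pkg.latest would on every highstate:

    kernel-pinned:
      pkg.installed:
        - pkgs:
          - kernel: 3.10.0-693.el7
]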
19:02 Edgan nbari: You could also install, reboot, compile
19:03 Edgan nbari: Then you don't have to manage the kernel version, but it would reboot every time there is a kernel upgrade
19:03 Edgan nbari: Which in a production case is a bad idea, likely
19:03 nbari whytewolf: probably that could be the safest option, my concern is that if I use pkg.latest (please correct me if I am wrong) if a new release comes out the pkg will be installed right
19:03 Edgan yes
19:03 whytewolf nbari: yes
19:04 whytewolf i wasn't saying use pkg.latest with the version.
19:04 Edgan nbari: Actually, you would probably be better off with something like dkms or akmods to manage kernel module recompiles
19:05 Edgan nbari: Have salt handle the dependencies, but leave it to the service to do the compile
19:05 whytewolf under centos 7.ish dracut uses dkms IIRC
19:06 Edgan whytewolf: I have run Fedora for years and dkms vs akmods depends on who you get the rpm from
19:06 nbari so what do you guys suggest in this case
19:06 nbari what could be the "safest" approach, I am using cloud-init to bootstrap these instances
19:07 nbari and using salt to provision the instances
19:07 Edgan nbari: Is this professional work?
19:07 whytewolf I was wrong anyway, dkms runs dracut not the other way around.
19:07 nbari is a disposable instance (just need to doit once) and then destroy it
19:07 nbari nothing that needs to scale in/out (live service)
19:08 Edgan nbari: More of a jenkins slave situtation, not a web service?
19:08 nbari not even that
19:08 Edgan nbari: is this in AWS?
19:08 nbari mmm I wish
19:08 Edgan nbari: openstack?
19:08 nbari yes
19:09 Edgan nbari: I would probably use a combination of packer and salt-call(masterless) to make an OpenStack image, and just update it every so often.
19:09 nbari I can't even use images
19:09 Edgan nbari: why?
19:10 whytewolf what?
19:10 nbari I ask that 10000 times
19:10 whytewolf some openstack engineer needs to be shot out of a cannon.
19:10 nbari +1
19:10 nbari on top of that, it took me 4 weeks to explain / try to use saltstack
19:11 Edgan nbari: How long will these VMs run?
19:11 nbari from 1 day to maybe 10
19:11 whytewolf that isn't default in openstack; we used it all the time at the bank I used to work at, to save time for the groups that used our openstack instances
19:11 nbari or maybe months
19:12 Edgan nbari: 1-10 days and months are very different use cases. Do you need to apply security updates, because they are long running, or would you just spin a new one?
19:13 Edgan nbari: If you would just spin a new one, you are probably better off going as simple as possible
19:13 nbari Edgan: to know how bad this is, there is not even DHCP (I had to implement something with python/tags) so the easy/stable solution would work
19:13 Edgan nbari: openstack without DHCP, hahaha, why are you still doing this work?
19:13 whytewolf that would be rage quit
19:14 Edgan nbari: gold cage or handcuffs?
19:14 Edgan nbari: otherwise, RUN! :)
19:14 nbari I am trying to find my way out jajajaja
19:15 nbari but well
19:15 nbari sorry for asking again, but how will pkg.latest behave over time
19:15 nbari ?
19:15 nbari periodically will fetch /update the cache and install the "latest" ?
19:15 whytewolf yes.
19:16 Edgan nbari: it will do it every time you run salt. Which could be a cron job, salt schedule, or manually
19:16 whytewolf well, by periodically that means whenever the highstate is run.
19:16 nbari mmmm ok
19:16 Edgan nbari: But if you take the replace instead of change model, it is a waste for anything but the kernel
19:16 nbari I think I will go  for this: https://github.com/saltstack/salt/issues/20690
19:16 nbari specify a version of the current kernel
19:17 Edgan nbari: Are you familiar with pets not cattle?
19:17 Edgan I mean cattle not pets
19:17 * whytewolf pets the cattle
19:17 nbari the point is ?
19:18 Edgan nbari: If you just make a new VM every time you want to do an apache upgrade, you can just do pkg.installed not pkg.latest, because there will never be a second salt run
19:19 Edgan nbari: Which goes to your these are disposable comment
19:19 Edgan nbari: Either they are or they are not, treat them accordingly
19:20 nbari what about the master calling state.highstate on those minions?
19:20 whytewolf the master only puts it into the message bus. the minion actually runs it
19:20 Edgan nbari: You could just never run the highstate a second time, and there is nothing that says you have to use master mode, unless you have other uses for it
19:20 whytewolf and honestly, with something like this, you might be better off with masterless
19:21 Edgan whytewolf: unless he wants to be able to do things like salt 'foo*' grains.items
19:21 whytewolf or even salt-ssh
19:21 Edgan whytewolf: salt-call would be better for auto scripting
19:21 whytewolf yeah, but the key management on disappearing minions is 'fun'
19:22 Edgan But that has the downside of having to have the git repos on the VM, which unless you segregate the secrets to different git repos is less secure
19:22 Edgan whytewolf: Though you might be able to use an openstack ansible dynamic roster, salt-ssh also generally means managing the roster file
19:22 whytewolf true
19:23 nbari I fully agree, but need help giving a try with this: yum install "kernel-devel-uname-r == $(uname -r)"
19:23 Rubin joined #salt
19:23 Edgan nbari: uname -r could be a grain, if it isn't already
19:23 nbari how could I put that in a state so it plays nice even if the master calls the minion X times
19:23 whytewolf pretty sure it is a grain already
19:23 Edgan nbari: for kernel-devel, it would need to be a pkg.latest
19:24 whytewolf yeap it already is a grain kernelrelease
19:24 nbari right just found it
19:25 Edgan nbari: You would be better off getting kernel-devel into whatever you start running with Openstack in the first place
19:25 Edgan nbari: Then kernel-devel could already be there to match the already installed kernel
19:26 Edgan nbari: But that is basically back to the image comment
19:26 nbari will give a try with pkg.installed -version {{ grains['kernelrelease'] }}
19:26 whytewolf yeah, that is basically back to a functioning, well-managed openstack. which he does not have access to, apparently
19:26 nbari true, I fully agree, but I have to do this in one of the worst environments you could imagine
19:27 Edgan nbari: I have a one off idea
19:27 Edgan nbari: You probably really don't want salt managing the kernel version, because you don't want it auto rebooting
19:28 Edgan nbari: You could make yum install "kernel-devel-uname-r == $(uname -r)" a cmd.run, and then make the unless: rpm -q kernel-devel-uname-r == $(uname -r)
19:28 darioleidi joined #salt
19:29 nbari good catch
19:30 whytewolf that is pretty much what pkg.installed: pkgs: - kernel-devel: {{salt.grains.item('kernelrelease')}} is doing anyway
19:30 Edgan nbari: Another trick I have used before is you create a python based custom grain that looks for a file, and then after doing a whole formula worth of stuff, you do a touch file at the end, which then prevents it all from running a second time, because you wrap it in a if grain == True
19:30 Edgan nbari: It is a hack to state-ify something
19:30 emdk joined #salt
19:31 Edgan whytewolf: yeah, your way would probably work and is better
19:31 nbari thanks for the tips, have more ideas now
19:31 emdk Hi everyone
19:34 emdk Anyone have Salt and docker playing nicely?  It seems logical for Salt to automatically execute the docker.login module if the ~/.docker/config.json file is blank or is missing the pillar configured registries
19:35 emdk Right now I'm executing the docker.login module manually - kinda defeats the purpose :)  Any ideas?
19:35 nbari whytewolf: No package kernel-devel-OrderedDict([(u'kernelrelease', u'3.10.0-693.11.6.el7.x86_64')]) , shouldn't it just be grains['kernelrelease'] ?
19:36 Sokel Edgan: then what would be arg=...? Because if I leave arg out, it complains that it's missing arguments (takes at least 1 argument, 0 given)
19:38 Edgan Sokel: maybe try arg='' ?
19:39 AngryJohnnie joined #salt
19:39 Sokel Edgan: Same exact error of it expecting a dictionary, even with kwarg set.
19:41 Elsmorian joined #salt
19:41 Edgan nbari: yeah, try grains['kernelrelease']
19:42 Edgan Sokel: read the docs carefully, but I am pretty sure arg is array and kwargs is a dictionary. The api is tricky
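[A hedged sketch of the lowstate Edgan is pointing at, assuming grains.setvals' dict argument is named grains and that rest_cherrypy passes kwarg through as a mapping (the exact field names are worth checking against the netapi docs):

    client: local
    tgt: host
    fun: grains.setvals
    kwarg:
      grains:
        foo: bar
        bar: foo
]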
19:43 DammitJim joined #salt
19:43 bluenemo joined #salt
19:44 tiwula joined #salt
19:45 nbari it works installing the package but then when run the second time I get: The following packages failed to install/update: kernel-devel=3.10.0-693.11.6.el7.x86_64
19:46 nbari so I think I will fall back to the cmd.run unless
19:56 mauli joined #salt
19:58 whytewolf nbari: sorry, my example should have been salt.grains.get not salt.grains.item, but looks like you ran into another issue anyway
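[Both approaches from this thread, sketched. Note the kernelrelease grain carries the arch suffix (e.g. .x86_64), which yum does not treat as part of the package version and which is the likely cause of the "failed to install/update" error above; the first sketch assumes the osarch grain matches that suffix so it can be stripped, and the second adjusts Edgan's unless to rpm -q against the full name-version so it resolves cleanly:

    # option 1: pin kernel-devel to the running kernel via the kernelrelease grain
    kernel-devel-current:
      pkg.installed:
        - name: kernel-devel
        - version: {{ grains['kernelrelease'].replace('.' + grains['osarch'], '') }}

    # option 2: the cmd.run/unless fallback nbari settled on
    kernel-devel-current-cmd:
      cmd.run:
        - name: yum install -y "kernel-devel-uname-r == $(uname -r)"
        - unless: rpm -q "kernel-devel-$(uname -r)"
]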
20:00 briner joined #salt
20:03 pcn I'm trying to use the file.manage_file execution module, and I've got no idea how to figure out which of the 11 arguments I'm missing: https://gist.github.com/pcn/3a89341813f04dbb45cf63580dc25eb8
20:07 whytewolf pcn you are missing attrs
20:08 pcn I'm confused about what attrs is in this case?  FS extended attributes?  What is the shape of the data?
20:09 * whytewolf shrugs
20:09 pcn The documentation just says "attributes to be set on file: '' means remove all of them_'
20:13 ecdhe joined #salt
20:14 heyimawesome joined #salt
20:16 whytewolf pcn, look at the help for file.managed
20:17 whytewolf The attributes to have on this file, e.g. a, i. The attributes can be any or a combination of the following characters: acdijstuADST.
20:18 whytewolf or https://en.wikipedia.org/wiki/Chattr
20:18 pcn Yeah, the docs are alphabet soup.  The chattr docs is def. the next step.
20:19 whytewolf well, it isn't something someone normally sets
20:19 whytewolf wonder why that isn't None
20:19 whytewolf (except for the fact that manage_file isn't something people normally do)
20:20 Elsmorian joined #salt
20:21 pcn Yeah, trying to copy a file via salt-ssh
20:23 whytewolf do you need to manage it or could you just cp.get_file it
20:24 pcn cp.get_file isn't wrapped to work with salt-ssh
20:26 pcn Which made me very sad.
20:27 whytewolf anyway, unless you actually need it whichi is doubtful, attrs should be None
20:31 dendazen joined #salt
20:33 viq joined #salt
20:44 JAuz joined #salt
20:47 AngryJohnnie joined #salt
20:52 dijit Hey guys, does anybody know why salt-cloud (gce provider) defaults to trying to deploy over the public interface?
20:52 dijit is this configurable.
20:53 Neighbour not sure, but for AWS EC2 it is configurable
20:53 pcn whytewolf: I gave up on the execution module but now I'm scratching my head on this: https://gist.github.com/pcn/8a6189cd1ddf939cf90d0435ba773485
20:53 jeremati_ joined #salt
20:53 dijit Neighbour: how do you configure it?
20:53 dijit maybe I can search for the option.
20:54 dendazen joined #salt
20:54 Neighbour dijit: By providing the option 'ssh_interface': 'private_ips' to cloud.present
20:55 dijit oh, it's windows. :\
20:55 dijit hm
20:55 Neighbour Sorry, no experience with salt+windows :)
20:56 dijit winrm_interface does what you would expect.
20:56 dijit <3 principle of least surprise.
20:59 Sokel left #salt
21:09 keldwud joined #salt
21:09 keldwud joined #salt
21:15 dijit ah no
21:16 dijit it's ssh_interface... I supplied both and got happy it worked.
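[A sketch of where those settings land (provider name is a placeholder, and whether they belong in the provider or profile config may vary by driver); per the exchange above, ssh_interface: private_ips is what steers the deploy to the private address, with winrm_interface as the Windows-side counterpart:

    # /etc/salt/cloud.providers.d/gce.conf
    my-gce:
      driver: gce
      ssh_interface: private_ips
      winrm_interface: private_ips   # only relevant for Windows targets
]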
21:16 apofis joined #salt
21:33 Eelis joined #salt
21:36 Eelis i'm having problems with service.status and a wildcard and a systemd instantiated service: https://ideone.com/9SoNQg   is it a known bug that service.status wildcards can't find instantiated services?
21:37 Eelis it seems like somewhere down the line it decides to drop the "testA" instance name
21:38 mavhq joined #salt
21:44 honestly Eelis: service.status with globs seems to be a new feature so it's probably buggy, your output doesn't match the docs at all :-) https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.service.html#salt.modules.service.status
21:44 honestly "If the name contains globbing, a dict mapping service name to PID or empty string is returned."
21:45 honestly I don't see any PIDs in your output
21:45 Eelis hmm indeed
21:46 honestly I'd go ahead and raise an issue
21:46 Eelis i guess i could do that. the fact that there's 3440 open issues kinda scares me off tbh
21:47 honestly some of my issues have been fixed
21:47 honestly oh
21:47 honestly I guess that was a lie
21:47 whytewolf https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.systemd.html#salt.modules.systemd.status
21:47 honestly https://github.com/saltstack/salt/issues/created_by/duk3luk3
21:48 honestly ah wait that's open issues only - this looks much better: https://github.com/saltstack/salt/issues?utf8=%E2%9C%93&amp;q=is%3Aissue+author%3Aduk3luk3+
21:49 honestly whytewolf: there's a documentation bug then :-)
21:49 whytewolf yeah, that happens a lot
21:49 whytewolf no one wants to document things.
21:49 whytewolf :P
21:51 whytewolf actually the issue is someone fixes the documentation in the place they made the fix. but not in everything else that might have referenced that
21:52 whytewolf as for the issue with instance based systemd and globs. the units for the instances don't actually exist so it isn't really known if they are up or down. difficult to code for something that is based on a variable if you are not supplying the variable
21:58 Eelis whytewolf: all i know is that if i do "systemctl status 'bla*'" i do see my services, while if i use "salt '*' service.status 'bla*'", i /don't/ see my services. so i'll have to tell the ops people to use the former instead of the latter
21:59 Eelis which makes me sad :(
22:01 whytewolf are you on 2018.3? or did you mean bla@*
22:01 Eelis either
22:01 Eelis (and yes i'm on 2018.3)
22:01 Eelis bla* and bla@* should both find bla@yada
22:01 Eelis (and do, with systemctl)
22:02 whytewolf cause i just tested and as long as it isn't an instantiated service it can be found with wildcards.
22:02 Eelis right, which is why in my first message in this channel, i specifically mentioned instantiated services. twice.
22:02 whytewolf and I'm saying ... that doesn't work
22:02 whytewolf 3 times now
22:03 Eelis that's wonderful, but are you also saying it's a bug?
22:03 whytewolf it should be yes.
22:03 whytewolf please file one
22:03 Eelis ok, then i dunno what the story about "well they don't actually exist" was for
22:04 Eelis clearly there is no technical hindrance, because systemctl can do it no problem
22:04 whytewolf it was a comment about the odd nature of them. salt isn't doing systemctl status *; it is looking up the list of existing unit files, which instantiated services don't have
22:05 Eelis right, so it will just have to get the info from wherever systemctl also gets it from
22:06 whytewolf easier said than done. systemd is a pita when dealing with what does and doesn't exist
22:06 Eelis copy that. thanks for confirming, i'll file a ticket in a bit
22:14 DanyC joined #salt
22:19 DanyC_ joined #salt
22:21 pcn whytewolf: any ideas on why salt-ssh is getting stuck on that file.managed?  I don't recall seeing this with salt-ssh before? Permission denied: '/opt/salt'
22:22 whytewolf not a clue
22:22 rollniak joined #salt
22:22 whytewolf not sure why anything is being put into /opt
22:23 Eelis (done https://github.com/saltstack/salt/issues/47139 )
22:25 whytewolf btw, about the fact it is 3,400+ issues opened. it was around 4600 around the end of dec. so work is being done
22:25 Eelis wow, very nice!
22:29 whytewolf honestly someone should audit the ones that still exist, cause I'm willing to bet a large portion have been fixed or the modules they were about were deprecated
22:29 tom[] joined #salt
22:29 Eelis auditing sounds like hard work. gentle triaging, perhaps ;)
22:30 whytewolf lol
22:32 MTecknology The auto-cleaning seems to do a pretty decent job, from my perspective
22:38 whytewolf humm, a lot of the early issues are all feature requests. not a lot of bugs from back then
22:38 dendazen joined #salt
22:56 keldwud joined #salt
23:00 justanotheruser joined #salt
23:18 stooj joined #salt
23:21 brokensyntax joined #salt
23:43 yctn_ joined #salt
23:48 cmichel joined #salt
