IRC log for #salt, 2017-08-16


All times shown according to UTC.

Time Nick Message
00:12 sjorge joined #salt
00:17 pipps joined #salt
00:28 tapoxi joined #salt
00:33 aldevar joined #salt
00:53 cgiroua joined #salt
01:03 ouemt joined #salt
01:17 iggy if an sls file isn't ever included/targeted, does it get rendered through jinja?
01:19 XenophonF no
01:19 iggy I assumed no, just wanted a sanity check
01:19 iggy thanks XenophonF
01:20 XenophonF :-D
01:20 XenophonF joke's on you - this could be a consensus hallucination
01:22 iggy either way, I'm happy for now
01:22 * iggy takes the blue pill
01:26 onlyanegg joined #salt
01:29 ujjain joined #salt
01:29 ujjain joined #salt
01:31 Nahual joined #salt
01:39 bbradley anyone experiencing significant slowdowns from 2017.7.1?
01:51 ilbot3 joined #salt
01:51 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.7, 2017.7.1 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers
02:01 onlyanegg joined #salt
02:06 zerocoolback joined #salt
02:27 onlyanegg joined #salt
02:54 cgiroua joined #salt
03:02 evle joined #salt
03:23 preludedrew joined #salt
03:28 Ni3mm4nd joined #salt
03:49 bstevenson joined #salt
04:01 cgiroua joined #salt
04:14 tellendil joined #salt
04:17 tellendil joined #salt
04:17 jim_ joined #salt
04:21 tellendil joined #salt
04:24 tellendil joined #salt
04:32 Ni3mm4nd joined #salt
04:35 tellendil joined #salt
04:41 fritz09 joined #salt
04:42 gnomethrower joined #salt
04:58 tellendil joined #salt
04:59 golodhrim|work joined #salt
05:02 tellendil joined #salt
05:05 tellendil joined #salt
05:11 felskrone joined #salt
05:14 Bock joined #salt
05:19 ECDHE_RSA_AES256 joined #salt
05:22 Guest43163 left #salt
05:23 JPaul joined #salt
05:26 moy joined #salt
05:38 tellendil joined #salt
05:38 jkaberg joined #salt
05:43 impi joined #salt
05:48 eseyman joined #salt
05:48 pipps joined #salt
05:51 sjorge joined #salt
05:57 oida joined #salt
06:02 jkaberg wondering if someone could help me out with getting my pillar.get right. pillar looks like this https://pastebin.com/7uKfrp7h . when I use pillar.get on the path haproxy:443:jira:cert I get "Pillar haproxy:443:jira:cert does not exist"?
06:02 jkaberg in the end I want to use this with contents_pillar in my state
06:03 jkaberg file.managed -> contents_pillar that is
06:13 omie888777 joined #salt
06:16 coredumb jkaberg: remove the -
06:16 coredumb both of them
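
(jkaberg's pastebin is no longer available; a minimal sketch of what coredumb is pointing at, with hypothetical values. Leading dashes turn the nested keys into YAML list items, so the colon-separated path in pillar.get no longer resolves; quoting the numeric key also avoids int-vs-string key mismatches.)

```yaml
# broken (hypothetical reconstruction): the dash makes 'jira' a list item,
# so pillar.get('haproxy:443:jira:cert') finds nothing
haproxy:
  '443':
    - jira:
        cert: |
          -----BEGIN CERTIFICATE-----

# fixed: plain nested dicts, so the colon path resolves
haproxy:
  '443':
    jira:
      cert: |
        -----BEGIN CERTIFICATE-----

# state side, as discussed above
jira_cert:
  file.managed:
    - name: /etc/haproxy/certs/jira.pem   # hypothetical path
    - contents_pillar: haproxy:443:jira:cert
```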
06:19 coredumb btw I'm wondering what's the best way to retrieve and use the output of a command in a state? like one state retrieves a value and another one uses it in jinja
06:19 coredumb that possible?
06:20 hoonetorg joined #salt
06:22 do3meli joined #salt
06:23 tellendil joined #salt
06:29 cyborg-one joined #salt
06:37 Ricardo1000 joined #salt
06:38 zulutango joined #salt
06:43 do3meli left #salt
06:47 darioleidi joined #salt
06:47 sh123124213 joined #salt
07:05 usernkey joined #salt
07:14 masuberu joined #salt
07:26 Hybrid joined #salt
07:28 masuberu joined #salt
07:30 snake_87_ joined #salt
07:44 Hybrid joined #salt
07:46 snake_87_ joined #salt
07:47 kyur joined #salt
07:55 sergeyt joined #salt
07:56 Rumbles joined #salt
07:59 mikecmpbll joined #salt
07:59 snake_87_ left #salt
08:02 mechleg left #salt
08:02 omie888777 joined #salt
08:05 schasi joined #salt
08:06 jhauser joined #salt
08:17 dom___ joined #salt
08:17 Antiarc joined #salt
08:24 snake_87_ joined #salt
08:24 impi joined #salt
08:24 snake_87_ left #salt
08:28 Mattch joined #salt
08:34 snake_87_ joined #salt
08:35 mike25de joined #salt
08:37 schasi joined #salt
08:41 snake_87_ left #salt
08:41 snake_87_ joined #salt
08:42 jthunt joined #salt
08:42 pbandark joined #salt
08:42 mavhq joined #salt
08:47 snake_87_ joined #salt
08:50 kbaikov joined #salt
08:58 _KaszpiR_ joined #salt
08:59 nebuchadnezzar joined #salt
09:07 snake_87_ left #salt
09:10 hoonetorg joined #salt
09:10 Naresh joined #salt
09:12 tellendil joined #salt
09:14 kukacz joined #salt
09:15 tellendil joined #salt
09:16 huddy joined #salt
09:17 _KaszpiR_ joined #salt
09:19 tellendil joined #salt
09:19 filippos joined #salt
09:21 gnomethrower joined #salt
09:22 tellendil joined #salt
09:23 pipps joined #salt
09:29 * mike25de hi all
09:31 stduolc joined #salt
09:31 coredumb btw I'm wondering what's the best way to retrieve and use the output of a command in a state? like one state retrieves a value and another one uses it in jinja, is that even possible?
09:31 nku coredumb: what does the other state do?
09:31 nku coredumb: output into a file, and read the file?
09:32 coredumb actually I was thinking about matching the value in jinja, but I guess I could do it directly ...
09:32 coredumb nku: yeah basically run a command and get its output
09:33 coredumb run another command based on the output
09:33 stduolc hi, I use the salt-syndic in 2017 version. there is something wrong with the salt api of the salt old_master. The return value is not the right return from the minion.
09:40 babilen coredumb: I'd call that via an execution module and save it in a variable
09:41 coredumb babilen: would you mind showing a simple example?
09:44 kukacz_ joined #salt
09:45 coredumb babilen: oh like {% set xxx = salt['cmd.run']('cmd') %}
09:46 babilen yeah
09:46 babilen Or just put the command in the unless bit of the command you want to run (or not run) in relation to this one
09:47 babilen unless/onlyif
09:47 coredumb I see
09:47 coredumb this could work
09:47 coredumb I'll try that thanks :)
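
(A minimal sketch of both of babilen's suggestions; the paths, service name, and commands are hypothetical. Note the {% set %} runs at render time, before any state executes, and pipes need python_shell=True.)

```sls
{# capture a command's output at render time and reuse it elsewhere #}
{% set listen_ip = salt['cmd.run']("hostname -I | awk '{print $1}'", python_shell=True) %}

write_listen_conf:
  file.managed:
    - name: /etc/myapp/listen.conf          # hypothetical path
    - contents: 'listen {{ listen_ip }}'

{# or gate one command on another command's result with unless/onlyif #}
maybe_restart:
  cmd.run:
    - name: systemctl restart myapp         # hypothetical service
    - onlyif: grep -q 'listen {{ listen_ip }}' /etc/myapp/listen.conf
```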
09:53 tellendil joined #salt
09:54 mike25de stupid question... after i deploy the salt minion i have to set up a proxy for yum.conf: what is the recommended way? or can i have the proxy as a pillar var and use it in pkg.installed?
09:56 coredumb mike25de: don't you deploy the salt minion from yum? Or is it from an internal repo not needing the proxy?
09:56 mike25de coredumb: i deploy it from .. AWS with the bootstrap script and the proxy option in the bootstrap command line
09:57 mike25de like sh bootstrap_salt.sh -H http://myproxy...
09:57 coredumb I'd make the minion deploy the yum.conf
09:57 mike25de coredumb: :) yep :) my idea as well. i just wanted to know if you guys have any idea
09:57 mike25de thanks coredumb
09:59 mike25de is there a way to create a reactor/event... so when a salt-key is added - some magic also happens - like autodeploy the yum.conf coredumb  :)
09:59 mike25de i haven't worked with events... so i am not sure what magic they can do
10:04 coredumb mike25de: yes it should be possible
10:04 coredumb just not sure if salt-key accept is sending an event
10:05 coredumb but if you do the accept by hand, you can send the event as well
10:06 coredumb salt-key -a xxxxx && salt-call event.send 'new/key/<minion_id>'
10:07 coredumb you can also watch events on the bus live, so you can easily test whether a salt-key -a fires an event by itself
10:09 kukacz_ joined #salt
10:15 aldevar joined #salt
10:23 pbandark joined #salt
10:30 haam3r_ mide25de: There is an existing example for that in the docs: https://docs.saltstack.com/en/latest/topics/reactor/#passing-event-data-to-minions-or-orchestrate-as-pillar
10:32 haam3r_ mike25de: There is an existing example for that in the docs: https://docs.saltstack.com/en/latest/topics/reactor/#passing-event-data-to-minions-or-orchestrate-as-pillar
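
(A sketch of the reactor wiring from that docs link. salt/minion/<id>/start is the event a minion fires once it connects after key acceptance; the yum_proxy state name is hypothetical.)

```yaml
# /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/deploy_yum_conf.sls

# /srv/reactor/deploy_yum_conf.sls
deploy_yum_conf:
  local.state.sls:
    - tgt: {{ data['id'] }}
    - arg:
      - yum_proxy
```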
10:53 smartalek joined #salt
10:55 sergeyt joined #salt
11:07 sjorge joined #salt
11:17 mike25de haam3r_: coredumb thank you guys
11:18 mike25de is there a difference between state.sls and state.apply ?
11:19 babilen state.apply is syntactic sugar and calls state.highstate or state.sls based on the number of arguments
11:19 babilen As of now it does not support state.sls_id, though
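
(Concretely:)

```
salt '*' state.apply            # no arguments  -> state.highstate
salt '*' state.apply mystate    # with sls name -> state.sls mystate
```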
11:23 cyteen joined #salt
11:25 Rumbles joined #salt
11:26 mike25de babilen: thanks good to know
11:34 rdale joined #salt
11:37 sergeyt joined #salt
11:41 pbandark joined #salt
11:55 ssplatt joined #salt
12:04 nku i need to generate ssh keys on minions, and get all pubkeys to another one. what's the correct approach for this?
12:05 nku (the get all pubkeys part)
12:05 sergeyt joined #salt
12:07 smartalek nku: have you looked at using salt mine?
12:07 nku smartalek: yeah, but not sure how to get file contents. and it's supposedly only for short-lived data
12:13 smartalek nku: you could write a module that reads the pubkeys and make it a mine function for that minion. It'll run every mine interval.
12:16 nku hmmm... yeah.. that would work..
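
(A sketch of that idea using the stock cmd.run module rather than a custom one; the mine alias, key path, and target file are assumptions.)

```yaml
# minion config, e.g. /etc/salt/minion.d/mine.conf
mine_functions:
  ssh_host_pubkey:
    mine_function: cmd.run
    cmd: cat /etc/ssh/ssh_host_rsa_key.pub

# on the collecting minion: a state plus a small jinja template
# /srv/salt/ssh/collect.sls
collected_pubkeys:
  file.managed:
    - name: /etc/ssh/collected_pubkeys    # hypothetical path
    - source: salt://ssh/files/pubkeys.jinja
    - template: jinja

# /srv/salt/ssh/files/pubkeys.jinja
{% for host, key in salt['mine.get']('*', 'ssh_host_pubkey').items() %}
{{ key }}
{% endfor %}
```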
12:34 toanju joined #salt
12:38 Yaowu_ joined #salt
12:38 sergeyt joined #salt
12:41 mchlumsky joined #salt
12:45 coredumb how is salt-master on OpenBSD? Is that as stable as anywhere else?
12:46 justanotheruser joined #salt
12:54 tellendil joined #salt
12:56 edrocks joined #salt
12:57 jdipierro joined #salt
12:58 tellendil joined #salt
13:04 gary joined #salt
13:15 evle1 joined #salt
13:15 bluenemo joined #salt
13:17 sergeyt joined #salt
13:21 debian112 joined #salt
13:27 ouemt joined #salt
13:29 stduolc hi, I think I found a bug in the salt LocalClient class. I'm using a syndic master, and the pub function of the LocalClient class doesn't return the correct minions variable.
13:29 ProT-0-TypE joined #salt
13:34 beardedeagle joined #salt
13:38 pbandark1 joined #salt
13:40 ejsf joined #salt
13:43 tapoxi joined #salt
13:45 stduolc https://github.com/saltstack/salt/blob/develop/salt/client/__init__.py#L1722
13:45 stduolc this function doesn't return the minion ids in a syndic cluster.
13:48 schasi joined #salt
13:50 sergeyt joined #salt
13:51 Ni3mm4nd joined #salt
13:53 MajObviousman can someone help me sort out a pattern with pillar?
13:53 cofeineSunshine joined #salt
13:54 MajObviousman we have a tool called ossec that needs to be running on certain production systems. I've got a directive in my pillar top file matching on all systems with the production fragment in their name
13:54 MajObviousman each system gets its own unique key
13:54 MajObviousman what I'd like is a clever way for prod systems that don't yet have a key created to default to a "blank" sls instead of one named after their system
14:02 MajObviousman I have a working solution, but I don't know if it's the best idea to do it this way: https://gist.github.com/anonymous/6e297898bb7b80b991ae6b80908e3aa0
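
(The gist above is anonymous and will rot; a minimal sketch of the fallback pattern with hypothetical paths. Pillar templates render on the master, so file.file_exists here checks the master's filesystem, and the minion's grains are available.)

```sls
# /srv/pillar/top.sls
base:
  '*production*':
    - ossec
    {% if salt['file.file_exists']('/srv/pillar/ossec/keys/' ~ grains['id'] ~ '.sls') %}
    - ossec.keys.{{ grains['id'] }}
    {% else %}
    - ossec.keys.blank
    {% endif %}
```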
14:03 cofeineSunshine hi. What could be the problem if salt-master fails to receive a response from a minion? It works the first few times, but after some bigger states it fails to run test.ping. Restarting the minion helps. I'm using salt 2017.7 on rackspace
14:04 MajObviousman cofeineSunshine: are any of your states editing the on-system firewall?
14:04 MajObviousman that would possibly interrupt the existing connection from the minion back to the master
14:05 cofeineSunshine MajObviousman: no, none of my states changes/edits firewall settings
14:05 cofeineSunshine MajObviousman: it fails randomly or after trying to apply top.sls
14:05 cofeineSunshine looks like smaller states are ok
14:05 cofeineSunshine but bigger ones fail
14:06 MajObviousman what happens if you pull the bigger one apart into small ones?
14:07 _KaszpiR_ joined #salt
14:08 cgiroua joined #salt
14:10 sergeyt joined #salt
14:10 cofeineSunshine MajObviousman: smaller one works
14:11 cofeineSunshine as far as I tested
14:11 cofeineSunshine looks like bigger tasks break something
14:11 cofeineSunshine how can I test this situation? where should I look for useful information right now?
14:11 cofeineSunshine looks like it fails after a longer task. Timeout or something like that
14:12 cofeineSunshine when I restart the minion - everything goes back to normal. And I can see that tasks on the minion finish executing
14:12 cofeineSunshine so looks like communication problem
14:18 racooper joined #salt
14:23 schasi cofeineSunshine: What I would do in your situation is stop the minion and then start it with "salt-minion -l debug"
14:24 schasi I usually start it in a "screen" session, in case I lose the connection somewhen
14:24 MajObviousman good call
14:24 stduolc this is the issue link https://github.com/saltstack/salt/issues/42978
14:25 schasi You then see what is done on the host. It is a very good debugging tool
14:25 filippos joined #salt
14:28 fatal_exception joined #salt
14:29 onlyanegg joined #salt
14:31 pualj_ joined #salt
14:35 MajObviousman after doing a bit more thinking, it is foolish to trust the minion to report its own hostname back. I will change the grain to True instead
14:36 _JZ_ joined #salt
14:37 viq hah, on debian upgrading salt-minion 2017.7.0 -> 2017.7.1 breaks salt-minion ;) Apparently due to salt-minion restart in the middle of update
14:37 pualj joined #salt
14:38 MajObviousman viq: awesome
14:39 MajObviousman I have been using systemd-run --on-active=5 to schedule things like firewall changes or salt-minion upgrades & restarts to happen after the salt job completes
14:39 MajObviousman were you issuing the update via a salt call?
14:40 cofeineSunshine restarting salt-minion on minion brings host back up on salt-master
14:40 cofeineSunshine investigating further
14:40 cofeineSunshine -l debug
14:40 MajObviousman cofeineSunshine: please share what you discover, because you've piqued my interest
14:41 cofeineSunshine MajObviousman: ok
14:41 cofeineSunshine narrowng down on the problem
14:43 Elsmorian joined #salt
14:46 jdipierro joined #salt
14:46 viq MajObviousman: yeah, 'salt * pkg.upgrade'
14:48 xet7 joined #salt
14:55 bstevenson joined #salt
14:57 sergeyt joined #salt
15:02 bstevenson Is there a config setting on master or minion to get the "return": {"data": {...}} section of an orchestration job to show up in the job cache?  example https://gist.github.com/anonymous/321b5aafcb88e0ff262365529e41f924
15:08 tiwula joined #salt
15:12 aphor joined #salt
15:13 exegesis joined #salt
15:14 aphor I need to do some salt-api testing and I'm trying to use auto external auth, but CherryPy keeps giving me 401 :(
15:15 heaje joined #salt
15:15 ssplatt install the latest from pip if you want ssl
15:17 exegesis joined #salt
15:17 _KaszpiR_ joined #salt
15:18 aphor @ssplatt: is there a problem with SSL in Carbon? curl isn't giving me any https grief using self-signed cert and -k (insecure).
15:19 ssplatt if you’re using cherrypy and installing it from packages, more than likely it’s very old and the ssl is broken.
15:20 lordcirth_work joined #salt
15:23 jholtom joined #salt
15:32 sergeyt joined #salt
15:35 exegesis joined #salt
15:36 onlyanegg joined #salt
15:37 exegesis joined #salt
15:39 heaje Any general guidance on the stability of 2017.7.1?  I know that 2017.7.0 had some odd issues.
15:40 MajObviousman bstevenson: I could use that too
15:43 jkjk joined #salt
15:45 fritz09 joined #salt
15:47 onlyanegg joined #salt
15:49 Brew joined #salt
15:51 bstevenson MajObviousman: I'm trying to get orchestration results back to some devs without giving them access to the master.  I had this working via the API for a year and a half, but the behavior must have changed in 2016.x. It's not used regularly, so it's hard to say when it broke.
16:02 DammitJim joined #salt
16:09 dev_tea joined #salt
16:12 mavhq joined #salt
16:13 woodtablet joined #salt
16:23 XenophonF how are you all upgrading Windows minions using salt and winrepo/winrepo-ng?
16:24 sergeyt joined #salt
16:24 XenophonF is it just a matter of re-running 'pkg.install salt-minion'?
16:25 _JZ_ joined #salt
16:28 pualj joined #salt
16:29 armyriad joined #salt
16:30 impi joined #salt
16:32 DammitJim is salt supposed to "enable" a systemd service unit that is defined in /etc/init.d?
16:33 DammitJim for some reason, I have to go to the servers I'm building and run: service <new service> start
16:34 DammitJim and then I can apply the service.running with enable: True
16:37 dendazen joined #salt
16:37 etong8306 joined #salt
16:39 woodtablet systemd service units aren't in /etc/init.d
16:39 DammitJim right!
16:40 woodtablet ohh
16:40 DammitJim but if systemd doesn't find a unit, it looks in /etc/init.d
16:40 pualj_ joined #salt
16:41 woodtablet but when you enable it, does it symlink from the right location to the /etc/init.d file ?
16:41 jdipierro joined #salt
16:41 DammitJim that's the thing, the salt state: service.running with enable: True doesn't work
16:42 babilen Which service is this?
16:42 babilen ES?
16:42 DammitJim it's a service I created *hides under the desk*
16:43 XenophonF want help writing a unit file?  it's pretty easy
16:43 nixjdm joined #salt
16:43 woodtablet =D
16:44 DammitJim I'm a little nervous because the init.d file for tomcat8 has a TON of stuff in it
16:44 DammitJim I am creating multiple instances of tomcat8 on the same server (that's why I say tomcat8)
16:45 XenophonF oh
16:46 XenophonF that sounds less easy ;)
16:48 pipps joined #salt
16:48 M-ccheckk joined #salt
16:48 woodtablet DammitJim: you are one of thooooseee.. well my coworker is the same way
16:48 DammitJim is there a way to have salt do the equivalent of 'sudo service <new service> start' ?
16:48 woodtablet DammitJim: one sec, i ll pastebin you something for a unitd file
16:48 woodtablet DammitJim: https://gist.github.com/gwaters/430ca52faa8096b28ff2c021e522693b
16:49 DammitJim woodtablet, all I'm saying is that I don't know why the init script for tomcat8 has 300 lines
16:49 woodtablet DammitJim: we have one of these for each instance on the box
16:49 DammitJim of tomcat8?
16:49 woodtablet the unit file above is for tomcat7
16:50 vtolstov joined #salt
16:50 DammitJim so, what do you call your different instances of tomcat7?
16:51 woodtablet > Environment="SERVICE_NAME=tomcat-dev"
16:51 woodtablet tomcat-test, tomcat-prod, etc etc
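
(Back to the original service.running question - a sketch tying it together; the unit source and instance name are hypothetical. The daemon-reload step matters: systemd won't see a freshly dropped unit file without it, which is one way service.running with enable: True can fail on the first run.)

```sls
tomcat-dev-unit:
  file.managed:
    - name: /etc/systemd/system/tomcat-dev.service
    - source: salt://tomcat/files/tomcat-instance.service   # hypothetical
    - template: jinja

tomcat-dev-reload:
  cmd.run:
    - name: systemctl daemon-reload
    - onchanges:
      - file: tomcat-dev-unit

tomcat-dev:
  service.running:
    - enable: True
    - require:
      - cmd: tomcat-dev-reload
```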
16:52 vtolstov hi! i can't solve a simple issue - i need to enable the saltstack repo on debian; the server has no external connection to the internet, only via a proxy. Is it possible to update the salt-minion proxy settings via a states file and download the pgp key for the repo via the proxy?
16:52 aphor @DammitJim I think the new Fedora style tomcat wrapper takes instance args.
16:52 vtolstov i've read some discussions and found that environ.setenv can update the minion environment, but i can't see the pgp key being downloaded via the proxy.
16:53 pipps joined #salt
16:53 aphor @vtolstov environment vars work. http_proxy https_proxy no_proxy
16:54 vtolstov @aphor i don't want to preset these variables before salt-minion runs.
16:54 vtolstov so i'm setting them via environ.setenv
16:54 vtolstov but it does not work
16:55 aphor @vtolstov: I think your requirements are idiosyncratic and you might be on your own.
16:56 aphor conventionally the daemon environment is set via an OS facility like /etc/default/salt-minion or /etc/sysconfig/salt-minion.
16:58 vtolstov @aphor: maybe you are wrong? on debian i don't have these files. also note i'm using systemd and have this in /etc/environment:
16:58 vtolstov http_proxy
16:58 vtolstov and in service file EnvironmentFile=/etc/environment
16:58 vtolstov but salt-minion ignore this. curl/wget and other stuff works fine
16:59 aphor /etc/environment works just like /etc/default and /etc/sysconfig
17:01 vtolstov but it does not work
17:01 vtolstov Failed to configure repo 'deb http://repo.saltstack.com/apt/debian/8/amd64/2016.11 jessie main': Error: HTTP 599: Timeout reading http://repo.saltstack.com/apt/debian/8/amd64/2016.11/SALTSTACK-GPG-KEY.pub
17:02 vtolstov Loaded: loaded (/lib/systemd/system/salt-minion.service; enabled)
17:02 vtolstov Drop-In: /etc/systemd/system/salt-minion.service.d \ override.conf
17:07 aphor @vtolstov can you curl the GPG key URL from the minion?
17:08 aphor You said it works fine, just additional clarity
17:08 vtolstov yes
17:08 aphor ok.
17:08 vtolstov maybe i'm missing something - does pkgrepo.managed need the key url to be accessible from the salt master?
17:08 vtolstov and does it not support http?
17:09 aphor I think you're running states which execute system commands but do not pass the minion's environment to the system commands.
17:10 vtolstov but as i understand it, environ.setenv with update_minion: True passes the env to running commands?
17:14 vtolstov also, as i see, internally key_url is used only in cp.cache_file
17:14 vtolstov in the aptpkg module
17:16 Vaelatern joined #salt
17:18 aphor can you inspect the environment of your salt-minion python process using procfs to verify the process has the proxy setup?
17:20 aerbax joined #salt
17:21 schasi cofeineSunshine: Did you find the error?
17:22 DammitJim woodtablet, what is that on line 5 of your gist: before tomcat-sysd is called
17:24 woodtablet DammitJim: ? its a continuation of line 4
17:25 swa_work joined #salt
17:26 woodtablet yaahh salt 2017.7.1 is out and fixes the 7.0 bug with inline config files
17:28 aphor @vtolstov it eventually falls through to salt.utils.http.query()
17:28 vtolstov @aphor: yes /proc/PID/environ contains http_proxy
17:29 DammitJim but what does it mean?
17:31 wendall911 joined #salt
17:31 vtolstov @aphor: as i see http.query uses tornado
17:31 vtolstov which uses proxy_host = opts.get('proxy_host', None)
17:31 woodtablet DammitJim: oh ok, that is how i do the instances. in the Environment (line 15) you will see SERVICE_NAME=tomcat-dev
17:31 vtolstov so this is not global environ but salt-minion options
17:31 woodtablet dammitjim: when you do a systemctl you will see this service is listed and called tomcat-dev
17:31 aphor @vtolstov salt.utls.http.query can use requests, urllib2, or tornado to do the actual http request.
17:31 DammitJim oh
17:32 DammitJim you call it tomcat-sysd?
17:33 vtolstov @aphor by default it uses tornado
17:33 vtolstov and this default is not overridden in the case of pkgrepo
17:34 Elsmorian joined #salt
17:34 aphor https://github.com/saltstack/salt/issues/21985 @vtolstov have you seen this?
17:35 vtolstov_ joined #salt
17:36 vtolstov_ the http util module only uses tornado by default, and the pkgrepo states can't change that default
17:37 woodtablet dammitjim: i think that line about tomcat-sysd is just shorthand for tomcat-systemd. you just need to make sure the /etc/sysconfig/tomcat-dev file exists for the environment
17:38 DammitJim hhhmmm.. are you on Debian?
17:39 woodtablet rhel
17:39 aphor @vtolstov if I were you, I would find the python file where the lowest level http client operation is actually implemented, and hack in a debug log message to log the contents of your os.environ['http_proxy'] and friends.
17:40 aphor If they are present, but tornado isn't using them, then it's a tornado problem and I'd start looking for a solution there.
17:41 aphor If they are not present, then I'd look to see where the environment is getting mangled.
17:42 woodtablet is there a simple way to delete the salt master cache without trawling through the /var/cache/salt filesystem?
17:42 woodtablet maybe i should just delete the whole directory...
17:44 wendall911 joined #salt
17:45 vtolstov_ @aphor: this is not needed. in the case of tornado i need to set the variables in the salt-minion config file
17:46 vtolstov_ i found the needed piece of code: proxy_host = opts.get('proxy_host', None)
17:46 vtolstov_ so if the salt-minion does not have proxy settings, the function returns None and no proxy settings are used
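
(i.e. the tornado path in salt.utils.http.query reads proxy settings from the minion opts, not from the process environment. A sketch, with a hypothetical host and port:)

```yaml
# /etc/salt/minion.d/proxy.conf
proxy_host: proxy.example.com
proxy_port: 3128
```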
17:48 donmichelangelo joined #salt
17:49 nixjdm joined #salt
17:49 lordcirth_work woodtablet, may I ask why you need to clear master cache?
17:51 woodtablet lordcirth_work: i changed a _pillar script, and it is not refreshing
17:52 woodtablet lordcirth_work: salt master restarts aren't helping, and i know the hash value is very different, so it should detect the change and copy the new file, but it does not
17:53 DammitJim is there a way to get only network information like the card type of a VM with salt-cloud?
17:56 mikecmpbll joined #salt
17:58 ssplatt grains.
17:58 ssplatt mine.
18:01 woodtablet grains can do that
18:01 woodtablet but if your machines aren't in salt, i suggest pyVim/pyVmomi
18:01 woodtablet and you write a python script
18:01 woodtablet with those modules, they are super
18:01 DammitJim I want the network adapter type that the vm is using according to ESXi
18:07 ChubYann joined #salt
18:13 heaje joined #salt
18:14 wendall911 joined #salt
18:20 Shados joined #salt
18:23 pipps joined #salt
18:24 Heartsbane joined #salt
18:24 Heartsbane joined #salt
18:27 impi joined #salt
18:28 bstevenson @MajObviousman  I found the change that removes the data from the orchestration output. Now I'm digging in to truly understand why this change was needed and to find a workaround.  https://github.com/saltstack/salt/pull/31355/commits/08a60e7b751eb677b358ecb6bfdbffe3ace1c7f5
18:28 MajObviousman much obliged
18:31 edrocks joined #salt
18:32 snc joined #salt
18:36 Ni3mm4nd joined #salt
18:37 peters-tx joined #salt
18:38 mechleg joined #salt
18:41 sh123124213 joined #salt
18:42 mikea joined #salt
18:42 hasues joined #salt
18:46 xMopxShell I've got a minion where test.ping jobs sent from the master fail to get a reply about 70% of the time. The minion log looks like this, any idea what's wrong? https://pastebin.com/raw/hmU453xm
18:47 xMopxShell in the minion conf i've added `master_tries: -1` and `ping_interval: 1`, but it didn't seem to help
18:48 xMopxShell The minion runs highstate and some other stuff on a schedule and seems to connect fine then...
18:49 nixjdm joined #salt
18:49 MajObviousman every minute it's losing sync. I'm guessing a network device between the master and minion is closing that connection
18:49 stevednd I'm almost positive there's no way to do this, but is there any way at all for a state to dynamically know which state triggered it? I have a deployment orch script. if certain points in the script fail I want to send slack notifications. I wanted to avoid having 6 different notification states that just differ in the text that is sent because a different call failed
18:50 MajObviousman also verify you don't have tcp_nodelay active somewhere
18:50 MajObviousman a quiescent connection sends very few packets, and the packets it does send are only a few tens of bytes long
18:51 MajObviousman err I have it backwards, you DO want tcp_nodelay on to prevent ^
18:51 xMopxShell hmm ill look into tcp_nodelay, thanks
18:51 MajObviousman it looks like network layer to me
18:51 xMopxShell The minion and master are in separate physical locations, so it's reaching over the internet. That would explain some connection instability
18:52 snc Stevednd: you could send the calling proc in as a pillar
18:52 xMopxShell (also, this is on Azure, which has the shittiest network ever)
18:52 MajObviousman stevednd: I have a grain that I set which counts steps throughout the process
18:53 MajObviousman every time a state completes in the orch, it +1s the counter in the grain, until the very last step is to delete it
18:53 MajObviousman so you could trigger on that or some variety
18:53 xMopxShell hmm looks like tcp_nodelay is a socket option that the master/minion would need to set on its sockets?
18:54 MajObviousman likely your minion, xMopxShell
18:54 xMopxShell sure, but I can't find anything in the salt docs about enabling it
18:56 xMopxShell Looks like ZeroMQ uses tcp_nodelay by default
18:56 MajObviousman probably set that side for now
18:57 MajObviousman oh does it? Well that's good. That's not your problem then :)
18:57 MajObviousman the regularity of the timeouts screams network device interfering somehow
18:57 xMopxShell yeah, im willing to bet it's azure's shitty network
18:57 xMopxShell all my aws minions are 100% fine lol
18:58 MajObviousman you could use salt-ssh for that minion
18:58 colabeer joined #salt
18:58 xMopxShell i think ill just live with it, as long as it's still checking in per the minion's schedule
19:01 MajObviousman if you have a way to stream a small but steady stream of bits from the azure node down to the master, e.g. trickle -s -u 10 dd if=/dev/random | ssh user@saltmaster dd of=outfile
19:01 MajObviousman see if it still does the disconnect
19:01 MajObviousman I'd guess it will not
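
(One knob not tried in-channel that targets exactly this symptom: Salt exposes ZeroMQ TCP keepalive options in the minion config, commonly recommended for clouds that silently drop idle connections. The values below are illustrative, not prescriptive.)

```yaml
# /etc/salt/minion.d/keepalive.conf
tcp_keepalive: True
tcp_keepalive_idle: 300     # seconds idle before probes start
tcp_keepalive_cnt: 5        # probes before the connection is declared dead
tcp_keepalive_intvl: 15     # seconds between probes
```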
19:02 stevednd snc: do you have a quick example showing what you mean? I don't follow exactly
19:03 wendall911 joined #salt
19:03 stevednd MajObviousman: do you increment the grain on the master, or on one of the machines you're deploying to?
19:03 MajObviousman stevednd: on the machine itself
19:03 MajObviousman I wrote a custom deploy script for salt-cloud that drops "deploystep: 2" into /etc/salt/grains as one of its final actions
19:03 MajObviousman deploy step 1 represents salt-cloud doing its work
19:06 brianthelion left #salt
19:09 pualj_ joined #salt
19:15 sjorge joined #salt
19:17 cofeineSunshine MajObviousman: hi
19:17 MajObviousman ohai
19:17 MajObviousman stevednd: https://gist.github.com/anonymous/bec51fd363fcfdcc2969736f86559d92 sanitized form of what I'm doing
19:18 MajObviousman right now we only have four steps, but pretty soon that will likely be extended to six
19:18 cofeineSunshine MajObviousman: about disappearing minions on rackspace
19:18 MajObviousman step 4 becomes "provision for purpose" and step 5 becomes "coordinate insertion with load balancer if needed". Step 6 becomes "We're done, delete the deploystep grain"
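
(A sketch of one such step bump inside the orchestration sls; the pillar key and step number are hypothetical. grains.setval persists the value in /etc/salt/grains on the target.)

```sls
mark_step_4:
  salt.function:
    - name: grains.setval
    - tgt: {{ pillar['new_minion_id'] }}   # hypothetical pillar key
    - arg:
      - deploystep
      - 4
```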
19:19 cofeineSunshine MajObviousman: looks like rackspace changes the IP of the instance and salt-minion can't maintain the connection until it (salt-minion) is restarted
19:19 MajObviousman what do?
19:19 MajObviousman rackspace y u do dis?
19:19 MajObviousman of all the immutable things on a system, I would expect its ip to be near the top of the list
19:19 cofeineSunshine yes
19:20 cofeineSunshine also I added:     rackconnect: True
19:20 cofeineSunshine to provider config
19:20 MajObviousman is it a specific class of rackspace instance? A la a spot pricing node on AWS
19:20 cofeineSunshine looks like it started to handle it
19:20 MajObviousman interesting
19:20 Cottser joined #salt
19:20 cofeineSunshine but I was looking into salt-minion -l debug
19:20 cofeineSunshine no messages at all
19:21 cofeineSunshine restart it
19:21 cofeineSunshine and works again
19:21 cofeineSunshine but it doesn't fail 100% of the time
19:21 cofeineSunshine like I have several failures out of 50 runs
19:21 MajObviousman that makes me nervous
19:21 MajObviousman if I want variability in my checkins, I'll just use ansible and pushes
19:22 cofeineSunshine but     rackconnect: True in provider looks like trying address that problem
19:22 MajObviousman so I'm reading here. Thanks for the tip
19:22 cofeineSunshine at the beggining setup just fails without that
19:22 MajObviousman I don't currently have any rackspace nodes, but we were talking of tossing some on their Sydney location to get coverage down there
19:23 _KaszpiR_ joined #salt
19:23 cofeineSunshine looks like rackspace cloud instances managed by rackconnect have 2 IP addresses: one for initialization and bootstrap, the other the public IP
19:23 carlwgeorge cofeineSunshine: if a rackspace cloud server is changing its ip address, call rackspace and ask them to investigate.  that's not normal.
19:23 cofeineSunshine after initialization it changes
19:23 MajObviousman carlwgeorge: they're doing outbound NAT
19:24 cofeineSunshine yes
19:24 MajObviousman which I'm not a fan of, but it's a solution to ipv4 starvation
19:24 cofeineSunshine MajObviousman: yes, you're right
19:24 MajObviousman certainly not the best
19:24 stevednd MajObviousman: so in my case I'm running this orchestration on the master and it does various things on multiple machines. I don't overly care about per machine details, so much as generally which step failed. I suppose I could set the grain locally on the master, and then check and report based on that
19:24 carlwgeorge not on standard cloud servers
19:24 carlwgeorge that is a rackconnect v2 specific thing, has nothing to do with the ipv4 shortage
19:24 MajObviousman stevednd: you are going to run into a rendering problem
19:25 carlwgeorge cofeineSunshine: the best thing you can do is switch to rackconnect v3
19:25 MajObviousman the jinja is rendered up front, so in your orchestration, if you have multiple calls to grains.get on the master, it'll return the same value each time
19:25 MajObviousman carlwgeorge: noted, thanks for filling in
19:25 MajObviousman stevednd: to get around this you COULD do an orchestration of orchestrations
19:25 stevednd right, but if I built a custom runner and set it to run last it could pull that grain value from the master and use that to report
19:26 carlwgeorge np, i used to work in rackspace support and had to deal with rackconnect way more than i wanted to
19:26 MajObviousman were you in SA or remote?
19:26 carlwgeorge sa
19:26 MajObviousman nice town
19:26 carlwgeorge great food
19:26 MajObviousman boring, but nice, and yeah great food
19:26 cofeineSunshine carlwgeorge MajObviousman: i managed to log in to the machine via ssh, but even the ssh session fails at the very same moment. Looks like the NAT thing messes things up and even the ssh session cannot recover. It changes the outbound ip or something
19:27 snc Stevednd: salt-run state.orchestrate my.orch.sls  pillar='{"caller": "whatever" }'
19:27 MajObviousman stevednd: yes, that's an option I suppose
19:28 MajObviousman I think you'd do better with a lean manufacturing model, where your orchestration checks all relevant details after each step and errors out right there or tries to correct them.
19:28 carlwgeorge cofeineSunshine: it definitely can work, i don't know what is specifically not behaving for you but it has worked for many other customers.  the best thing you can do is give rackspace support a call and have them walk you through rackconnect v3, it's much better.
19:29 stevednd MajObviousman: yes, I just wanted to avoid littering my orch files with repetitive calls to slack
19:29 MajObviousman sure
19:29 MajObviousman well, you can use a jinja macro
19:30 stevednd true
19:31 MajObviousman you're well into "kaizen" territory now
19:31 stevednd I considered making runners that wrap up the actual orch step, and handle failure by firing notifications, but runner returns are still unreliable in 2016.11
19:31 MajObviousman it's not sufficient that your product is made, but that it is made cleanly and with minimal extra effort and waste
19:32 MajObviousman I am on a pitifully old version of salt, so I haven't encountered the runner returns being unreliable. Or maybe haven't tried to use them
19:33 MajObviousman I'm interested in hearing more about that, after resolving the current needs
19:44 edrocks joined #salt
19:44 stevednd it's inconsistent with the handling of runner returns in that the only way to reliably trigger a runner failure is to raise an exception. It's supposed to be getting sorted out in Oxygen, along with inconsistent return code handling from cmd.run
19:45 stevednd I need to get moving on this, so I'm probably just going to use a macro I bring in from a library
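
(A sketch of that macro approach; the file names, channel, and api_key are hypothetical, while slack.post_message is the stock execution module. Import it in the orch sls with {% from 'orch/notify.jinja' import slack_on_fail %} and drop {{ slack_on_fail('some_step_id') }} after each step.)

```sls
{# /srv/salt/orch/notify.jinja #}
{% macro slack_on_fail(step_id) %}
notify_{{ step_id }}_failed:
  salt.function:
    - name: slack.post_message
    - tgt: salt-master.example.com        # minion id of the master, hypothetical
    - kwarg:
        channel: '#deploys'
        message: 'orchestration step {{ step_id }} failed'
        from_name: salt
        api_key: XXXX
    - onfail:
      - salt: {{ step_id }}
{% endmacro %}
```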
19:48 systemexit stevednd: do you know what alternatives are being considered besides just raising an exception (that's how I'm currently managing my runners)
19:50 nixjdm joined #salt
19:55 cofeineSunshine MajObviousman, carlwgeorge: https://gist.github.com/gulbinas/ae53782fe68c4c27a26a2f624b053086 this is the moment when one minion went out
19:55 cofeineSunshine could it have shut itself down somehow?
19:55 cofeineSunshine I stopped it with 'service salt-minion stop' and started 'salt-minion -l debug' under screen
19:56 sh123124213 joined #salt
20:06 peters-tx Just updated to 2017.7.1
20:07 lordcirth_work I've had a bunch of minions crash, I had to use pdsh to restart them to update
20:09 lordcirth_work This is why you always put your salt-master's ssh key on the minion!
20:10 peters-tx Yeah, I do the same on all my minions
20:49 nixjdm joined #salt
20:57 jhauser joined #salt
20:59 cyborg-one joined #salt
21:03 edrocks joined #salt
21:03 noobiedubie joined #salt
21:17 pipps joined #salt
21:22 cofeineSunshine MajObviousman carlwgeorge: https://github.com/saltstack/salt/issues/38157 this is related to rackspace dropping the minion connection
21:23 cofeineSunshine and in my case the salt-master is outside the rackspace network, on a different provider
21:28 M-ccheckk left #salt
21:30 perfectsine joined #salt
21:32 pualj_ joined #salt
21:41 kukacz_ joined #salt
21:47 perfectsine joined #salt
21:47 pipps joined #salt
21:48 nixjdm joined #salt
22:10 onlyanegg joined #salt
22:19 pualj joined #salt
22:22 pipps joined #salt
22:23 onlyanegg clear
22:25 edrocks joined #salt
22:26 * MajObviousman gets the paddles to charging
22:33 * woodtablet makes a sign of the cross
22:42 varuntej joined #salt
22:44 brianthelion joined #salt
22:45 brianthelion When calling salt.runner: cloud.profile:, how do we tell cloud to look somewhere other than /etc/salt/cloud for profiles?
22:48 nixjdm joined #salt
22:56 rpb joined #salt
23:25 edrocks joined #salt
23:41 pipps joined #salt
23:42 noobiedubie joined #salt
23:52 justanotheruser joined #salt
23:52 justanotheruser joined #salt
