IRC log for #salt, 2017-05-23


All times shown according to UTC.

Time Nick Message
00:04 thinkt4nk joined #salt
00:09 masber joined #salt
00:33 VR-Jack2-H joined #salt
01:13 mosen joined #salt
01:16 brousch__ joined #salt
01:16 edrocks joined #salt
01:20 onlyanegg joined #salt
01:33 onlyanegg joined #salt
01:33 zseguin left #salt
01:37 manji joined #salt
01:41 DammitJim joined #salt
01:41 puzzlingWeirdo joined #salt
01:42 rmelero joined #salt
01:48 ilbot3 joined #salt
01:48 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.6, 2016.11.5 <+> Support: https://www.saltstack.com/support/ <+> SaltStack Webinar on Carbon, Nitrogen, and Enterprise 5.1 on May 18, 2017 https://goo.gl/PvsOvQ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> Due to spam, please register with NickServ
01:51 onlyanegg joined #salt
01:59 jas02 joined #salt
02:03 manji joined #salt
02:04 wangofett joined #salt
02:09 wangofett joined #salt
02:12 onlyanegg joined #salt
02:26 keldwud joined #salt
02:26 manji joined #salt
02:31 Praematura joined #salt
02:34 zerocoolback joined #salt
02:35 zerocoolback joined #salt
02:47 keldwud joined #salt
02:49 wangofett joined #salt
02:52 onlyanegg joined #salt
02:53 preludedrew joined #salt
02:55 wangofett joined #salt
03:08 justan0theruser joined #salt
03:09 bennabiy iggy: so where should I apply your change?
03:16 fracklen joined #salt
03:18 edrocks joined #salt
03:32 megamaced joined #salt
03:35 Praematura joined #salt
03:36 manji joined #salt
03:39 iggy bennabiy: iirc, I pulled that fork into my atom packages folder instead of the upstream and then you'd just apply it on top of that
03:40 bennabiy iggy: so the fork is better than the original?
03:40 iggy slightly
03:41 iggy someone should get nevins to submit his changes and then I can do mine
03:41 iggy or just put it in a different fork altogether and try to get salt devs to bless the other one
03:42 iggy (although I suspect if we just poke someone at saltstack they'll merge PRs if they get a couple of "doesn't break for me")
03:49 bennabiy I guess whatever works
03:49 bennabiy thanks for the help. I will try it tomorrow. Time to go to bed
04:06 jas02 joined #salt
04:12 auzty joined #salt
04:13 evle joined #salt
04:23 benner joined #salt
04:23 upb joined #salt
04:36 Sketch joined #salt
04:44 megamaced joined #salt
04:46 golodhrim|work joined #salt
04:46 Bryson joined #salt
05:19 wangofett joined #salt
05:20 edrocks joined #salt
05:21 do3meli joined #salt
05:21 do3meli left #salt
05:25 sh123124213 joined #salt
05:25 wangofett joined #salt
05:32 preludedrew joined #salt
05:36 sh123124213 joined #salt
05:41 rgrundstrom joined #salt
05:41 candyman88 joined #salt
06:13 sh123124213 joined #salt
06:17 feld_ joined #salt
06:20 rgrundstrom Good morning.
06:23 whytewolf not for another half hour
06:26 sh123124213 joined #salt
06:34 felskrone joined #salt
06:37 rgrundstrom whytewolf: Not had your first cup of coffee?
06:37 whytewolf first cup? it is 11:330 pm
06:38 hemebond Ah, yes, 11:330pm.
06:38 hemebond Is that what we're doing now? Fractions of minutes? :-D
06:38 Ricardo1000 joined #salt
06:39 whytewolf only way i can get enough hours in the day
06:39 hemebond Maybe time for bed, whytewolf :-)
06:39 whytewolf it might be
06:42 rgrundstrom we don't use am/pm here... Never took the time to learn how to use it... But I guess it's 08:42 AM here if it's night for whytewolf.
06:43 Bock joined #salt
06:43 hemebond You don't use am/pm?
06:43 hemebond Do you only use 24-hour time?
06:43 rgrundstrom hemebond: yes
06:44 hemebond So you say to people "I'll see you at sixteen hundred"?
06:45 rgrundstrom hemebond: No, I would say meet you at four. It's usually self-evident whether it's the afternoon or morning. :)
06:46 rgrundstrom But it happens occasionally that I say sixteen hundred.
06:46 hemebond "I'll see you Tuesday at eight"
06:47 rgrundstrom That would usually mean Tuesday at 20:00 if you are going out, or 08:00 if it's work related.
06:48 rgrundstrom hemebond: You seem to be elite when it comes to salt. Maybe you can help me out
06:48 hemebond rgrundstrom: Definitely not but I will certainly try to help.
06:49 rgrundstrom I have a pillar file in pillar/defaults.sls that loads default settings for each server.
06:50 rgrundstrom I want the settings in pillar/defaults.sls to be overwritten if there is a file in /pillar/fqdn/<fqdn>
06:51 rgrundstrom I got it working but there are a few things that are bothering me
06:52 rgrundstrom 1. I can't use the FQDN because <hostname>.<domain>.com would be awkward for the code, since dots become subdirectories.
06:53 rgrundstrom 2. The code requires that a pillar/fqdn/<hostname> is present. I don't want this. If no file is present it should just use the defaults.
06:55 rgrundstrom The way I got it working is through adding the following code to pillar/top.sls https://gist.github.com/anonymous/4ea4585c863d89ba03244883df4ab3ac#file-gistfile1-txt
06:56 hemebond Re #2: You could try using ignore_missing, like you can with states. I'm not sure if it works yet though.
06:57 hemebond Re #1, if you want to use the FQDN, I would probably use Jinja to reverse it and have a directory structure /com/<domain>/<host>.sls
06:58 hemebond To me that would create quite a nice structure.
06:59 rgrundstrom Can you provide an example how the solution to #1 would look?
06:59 hemebond The jinja code?
06:59 rgrundstrom yes
07:00 rgrundstrom hemebond: - ignore_missing does not work.
07:00 hemebond Ah, then I guess those tickets really should still be open.
07:01 walker joined #salt
07:01 jas02 joined #salt
07:01 cablekev2n joined #salt
07:02 rgrundstrom hemebond: General question. When I've been looking through code and such, pillars generally don't use if or for statements. Do they work with pillars?
07:02 hemebond Pillars don't use `if` or `for`? In what context?
07:04 hemebond {% set fqdn = 'host.domain.com'.split('.')|reverse|join('.') %}{{ fqdn }}
07:04 hemebond u'com.domain.host'
07:05 rgrundstrom I tried using something like {% if not salt['file.file_exists']('/srv/pillar/fqdn/<hostname>') %} earlier
07:05 hemebond You could use `{% include "sidebar.html" ignore missing %}`
07:06 hemebond Where `sidebar.html` would be replaced with a path to the host.sls
07:06 hemebond Pretty sure you can use variables in that string.
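
A rough sketch of what hemebond is pointing at here, combining the reversed-FQDN layout with a Jinja include; the file names, the example key, and the grains['fqdn'] lookup are illustrative rather than a tested recipe:

    # /srv/pillar/defaults.sls
    {%- set host_sls = (grains['fqdn'].split('.') | reverse | join('/')) ~ '.sls' %}

    # default settings first
    ntp_server: pool.ntp.org

    # then per-host overrides from e.g. com/domain/host.sls under the pillar
    # root; with "ignore missing" nothing breaks when the file is absent
    {% include host_sls ignore missing %}

Since the include is textual, both files end up in one YAML document; in stock PyYAML the later key wins, so the per-host value shadows the default above it.
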
07:07 aldevar joined #salt
07:08 JohnnyRun joined #salt
07:12 baffle joined #salt
07:13 o1e9 joined #salt
07:19 jas02 joined #salt
07:20 inad922 joined #salt
07:32 yuhl joined #salt
07:34 zulutango joined #salt
07:37 wangofett joined #salt
07:44 wangofett joined #salt
07:50 om2 joined #salt
07:50 wangofett joined #salt
07:54 mikecmpbll joined #salt
07:56 Rumbles joined #salt
07:56 wangofett joined #salt
08:02 alex-zel joined #salt
08:05 oida_ joined #salt
08:06 ozux joined #salt
08:07 ozux joined #salt
08:09 wangofett joined #salt
08:10 ozux__ joined #salt
08:11 cyborg-one joined #salt
08:13 fracklen joined #salt
08:18 ivanjaros joined #salt
08:22 edrocks joined #salt
08:28 tellendil joined #salt
08:33 Mattch joined #salt
08:38 kporembinski joined #salt
08:38 Eagleman7 joined #salt
08:41 fracklen joined #salt
08:44 N-Mi joined #salt
08:44 N-Mi joined #salt
08:50 freelock joined #salt
08:51 Morrolan joined #salt
08:51 lubyou joined #salt
08:51 J0hnSteel joined #salt
08:51 Valfor joined #salt
08:52 nledez joined #salt
08:52 squig joined #salt
08:52 ozux joined #salt
08:53 squig hey, I was wondering if anyone was aware of the policy for installs?
08:53 squig if I used the pin to major version, here. https://repo.saltstack.com/#rhel
08:53 ozux joined #salt
08:53 squig does the rpm specified here, https://repo.saltstack.com/yum/redhat/salt-repo-2016.11-2.el7.noarch.rpm
08:54 squig ever disappear?
08:54 squig my client is telling me it did
08:58 squig argh, it did, the installer said to install salt-repo-2016.11-1.el7.noarch, but now that file is gone.
09:07 big|bad|wolf joined #salt
09:10 benjiale[m] joined #salt
09:10 theblazehen joined #salt
09:10 ThomasJ|m joined #salt
09:10 saintaquinas[m] joined #salt
09:10 hackel joined #salt
09:10 gomerus[m] joined #salt
09:10 Jon-Envisioneer[ joined #salt
09:10 jerrykan[m] joined #salt
09:10 fujexo[m] joined #salt
09:12 tellendil joined #salt
09:13 big|bad|wolf joined #salt
09:15 gmoro_ joined #salt
09:15 capnhex joined #salt
09:23 tellendil joined #salt
09:33 tellendil joined #salt
09:36 OnLEon joined #salt
09:41 ozux__ joined #salt
09:47 inad922 joined #salt
09:54 bdrung_work joined #salt
09:57 ahrs joined #salt
10:12 LondonAppDev joined #salt
10:14 lorengordon squig: yep, opened an issue about it, https://github.com/saltstack/salt-pack/issues/317
10:17 v3x joined #salt
10:20 mavhq joined #salt
10:24 wangofett joined #salt
10:25 edrocks joined #salt
10:28 manji joined #salt
10:30 squig joined #salt
10:38 wangofett joined #salt
10:42 tellendil joined #salt
10:46 wangofett joined #salt
10:46 cyteen joined #salt
10:49 Reverend joined #salt
10:52 ozux joined #salt
10:53 wangofett joined #salt
10:55 tellendil joined #salt
10:57 XenophonF joined #salt
11:00 tellendil joined #salt
11:02 amcorreia joined #salt
11:02 ozux__ joined #salt
11:04 wangofett joined #salt
11:06 Taters_ joined #salt
11:06 tellendil joined #salt
11:10 wangofett joined #salt
11:13 upb joined #salt
11:16 wangofett joined #salt
11:19 Taters_ joined #salt
11:23 wangofett joined #salt
11:26 Astarandel joined #salt
11:29 wangofett joined #salt
11:30 cb joined #salt
11:35 wangofett joined #salt
11:42 wangofett joined #salt
11:45 mikecmpbll joined #salt
11:47 Praematura joined #salt
11:49 wangofett joined #salt
11:51 fxhp joined #salt
11:54 wangofett joined #salt
11:57 mbologna joined #salt
11:59 wangofett joined #salt
12:05 numkem joined #salt
12:10 sjorge joined #salt
12:13 rgrundstrom I've been looking at https://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_tornado.html
12:13 twork_ joined #salt
12:13 rgrundstrom Wanted to get this to start using the salt-api
12:14 nledez joined #salt
12:14 nledez joined #salt
12:14 c_g joined #salt
12:14 rgrundstrom But I can't seem to get the test that is stated in the documentation to work :(
12:15 freelock joined #salt
12:15 v0rtex joined #salt
12:19 zerocoolback joined #salt
12:19 sjorge joined #salt
12:21 zerocoolback joined #salt
12:22 Xenophon1 joined #salt
12:22 cmichel joined #salt
12:22 zerocool_ joined #salt
12:22 alex-zel joined #salt
12:28 numkem joined #salt
12:30 theblazehen joined #salt
12:30 benjiale[m] joined #salt
12:30 gomerus[m] joined #salt
12:30 jerrykan[m] joined #salt
12:30 saintaquinas[m] joined #salt
12:30 Jon-Envisioneer[ joined #salt
12:30 ThomasJ|m joined #salt
12:30 hackel joined #salt
12:30 fujexo[m] joined #salt
12:41 nicksloan joined #salt
12:42 cb joined #salt
12:44 golodhrim|work joined #salt
12:47 zerocoolback joined #salt
12:47 thinkt4nk joined #salt
12:54 edrocks joined #salt
12:54 JohnnyRun joined #salt
12:58 GMAzrael joined #salt
13:02 yuhl left #salt
13:09 inad922 joined #salt
13:09 hexa- my salt apply runs that deliver minion.d files and restart the salt-minion … always restart the salt-minion on every run http://paste.debian.net/plain/935745
13:09 hexa- I'm using file.recurse with clean to deploy, and service.running restart: True with watch on the file.recurse
13:13 mavhq joined #salt
13:18 MeltedLux joined #salt
13:20 demize hexa-: You don't want watch, you want onchanges
13:20 ssplatt joined #salt
13:20 demize or wait, hm
13:20 hexa- nah, it's the same from two ends :)
13:20 hexa- found /etc/salt/minion.d/_schedule.conf just now
13:20 hexa- something the minion apparently recreates
13:20 hexa- so the dir is "unclean" on every run
13:24 demize Mmm.
13:24 demize Anyway, both work from the same end, it's just that onchanges only does anything at all if a change has happened.
13:24 BHauser joined #salt
13:25 demize While watch lets the original service.running run even if the watch didn't have any change, hm.
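
Piecing hexa-'s findings together, the pattern under discussion would look roughly like this; whether clean honours exclude_pat has varied between releases, so treat it as something to verify on your version:

    minion-config:
      file.recurse:
        - name: /etc/salt/minion.d
        - source: salt://minion.d
        - clean: True
        # the minion recreates _schedule.conf on its own, which made every
        # run look "unclean"; excluding it stops the spurious change
        - exclude_pat: _schedule.conf

    salt-minion:
      service.running:
        - enable: True
        # watch restarts the service (via mod_watch) only when the file
        # state reports a change; with no spurious change, no restart
        - watch:
          - file: minion-config
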
13:28 thinkt4nk joined #salt
13:35 LondonAppDev joined #salt
13:36 SaucyElf joined #salt
13:41 jas02_ joined #salt
13:49 Praematura joined #salt
13:51 jas02 joined #salt
13:57 jas02_ joined #salt
14:01 mpanetta joined #salt
14:02 CrummyGummy joined #salt
14:02 mpanetta joined #salt
14:02 debian112 joined #salt
14:05 golodhrim|work joined #salt
14:08 puzzlingWeirdo joined #salt
14:11 racooper joined #salt
14:13 jf_sebastian Is anyone here using salt to manage a large-ish enterprise network?  I've got a mixed network of 3000+ Cisco/Juniper devices, and I'm looking into Ansible/Salt to assist in administration
14:13 jf_sebastian Salt looks really cool, but I'm wondering how well the proxy-minion system scales?
14:14 babilen jf_sebastian: There were a couple of talks on Napalm and SaltStack on SaltConf and during RIPE meetings.
14:15 babilen We aren't managing anything at that scale, but Mircea might be a good person to ask. See https://docs.saltstack.com/en/latest/ref/proxy/all/salt.proxy.napalm.html
14:17 keltim joined #salt
14:17 babilen Generally speaking SaltStack (used to?) scale(s) better than Ansible and I'd prefer it in this domain and at the scale you mentioned.
14:21 jf_sebastian babilen: Thank you for the documentation, I will check it out.  Mircea's talk at NANOG68 is what got me interested in Salt.
14:28 sarcasticadmin joined #salt
14:28 fracklen joined #salt
14:31 ozux joined #salt
14:32 mikecmpb_ joined #salt
14:35 Brew joined #salt
14:38 onlyanegg joined #salt
14:43 nixjdm joined #salt
14:43 Inveracity joined #salt
14:49 LondonAppDev joined #salt
14:49 cyborg-one joined #salt
14:50 major joined #salt
14:59 ntropy jf_sebastian: definitely ask Mircea, i know for a fact he'll be glad to help with any questions you have
15:04 jf_sebastian ntropy: thanks for the advice, I will do just that :)
15:04 evle1 joined #salt
15:04 babilen jf_sebastian: People were also pleased with the deployments they built on top of Salt, from what I heard
15:06 ntropy re: scaling proxy-minions, i think there hasn't been anything new on that front, you need to run one process per device you manage
15:07 nicksloan joined #salt
15:10 mikecmpbll joined #salt
15:13 ozux joined #salt
15:14 saltyotter joined #salt
15:16 jas02 joined #salt
15:17 LondonAppDev joined #salt
15:18 coval3nce joined #salt
15:20 censorshipwreck joined #salt
15:26 jf_sebastian my concern in that regard was driven by a podcast that I listened to where IIRC it was stated that you were looking at roughly 40MB of memory overhead/process for the proxy-minion
15:27 ozux joined #salt
15:27 jf_sebastian but based on what I've read so far, we could utilize full salt-minions to distribute the load of these proxy-minion processes, correct?
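
For reference, the per-device pillar shape from the salt.proxy.napalm page babilen linked is roughly the following (driver and connection values illustrative); each such device is driven by its own salt-proxy process, which is where the per-process memory overhead mentioned above comes from, and those processes can indeed be spread across any minions that can reach the master:

    # pillar assigned to one proxy minion ID
    proxy:
      proxytype: napalm
      driver: junos              # or ios, eos, ...
      host: edge1.example.net
      username: admin
      passwd: secret
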
15:28 aldevar left #salt
15:30 PatrolDoom joined #salt
15:30 fracklen joined #salt
15:32 sh123124213 joined #salt
15:34 ozux__ joined #salt
15:37 ozux_ joined #salt
15:41 mikecmpbll joined #salt
15:43 sjorge joined #salt
15:43 nicksloan joined #salt
15:44 mikecmpbll joined #salt
15:45 ecdhe joined #salt
15:57 KyleG joined #salt
15:57 KyleG joined #salt
15:57 Praematura joined #salt
15:58 nixjdm joined #salt
16:00 raspado joined #salt
16:01 candyman88 joined #salt
16:04 hashwagon joined #salt
16:07 raspado is it possible to pass over the instance type I'm creating from salt-cloud to a salt state? We will need to do different things depending on instance flavors
16:08 o1e9_ joined #salt
16:09 nicksloan joined #salt
16:12 Neighbour raspado: You could use orchestration for that
16:12 raspado Neighbour: orchestrate runner?
16:13 Neighbour yes
16:13 raspado ok thx
16:15 autofsckk joined #salt
16:16 major how do you get someone from Salt Enterprise to actually contact you? ;)
16:22 gtmanfred major: have you submitted a request for information about enterprise?
16:24 major yup
16:24 major .. submitted a second one after a week of no response...
16:24 woodtablet joined #salt
16:26 gtmanfred can you pm me your contact information and where you are located and I will get someone to reach out.
16:30 coval3nce joined #salt
16:35 woodtablet joined #salt
16:38 walker joined #salt
16:47 shanth same for me gtmanfred i submitted too and got nothing
16:47 gtmanfred shanth: pm me your info and company/location
16:54 raspado Neighbour: not sure how orchestrate runner would help here, does it have a method of determining the flavor type somehow?
16:56 gtmanfred raspado: configure your cloud providers in pillars, and then have the minion look up its data in the jinja by using the cloud module
16:56 gtmanfred alternatively, write a custom grains that uses the CloudClient
16:56 raspado hmmm i see okay!
16:57 gtmanfred raspado: https://docs.saltstack.com/en/latest/topics/cloud/config.html#pillar-configuration
16:57 vodik joined #salt
16:58 raspado so i would move this configuration to a pillar and not inside the cloud.providers directory?
16:58 raspado or have it undefined in cloud.providers rather
16:58 gtmanfred i would keep it in both places
16:58 nixjdm joined #salt
16:58 gtmanfred because the pillar data can only be used with the cloud modules, it can't be used with the runner or salt-cloud cli
16:59 raspado ahhh ok
16:59 gtmanfred you could write a jinja loop in pillars that loops over /etc/salt/cloud.providers.d and sets up the providers dictionary in pillars
16:59 gtmanfred that would be fun
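
The page gtmanfred links describes a pillar layout along these lines (provider name and keys illustrative); a minion can then read its own provider entry back in Jinja:

    cloud:
      providers:
        my-openstack:
          driver: openstack
          # ...the same keys you would otherwise put in cloud.providers.d...

    # in a template or sls rendered on the minion, e.g.:
    #   {% set prov = salt['pillar.get']('cloud:providers:my-openstack', {}) %}
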
16:59 wendall911 joined #salt
17:00 nicksloan joined #salt
17:00 raspado are salt minions aware of their flavor type? like a hidden grain or such that we can call?
17:01 raspado for example if i wanted to target something like instance_type: r3.8xlarge or something like that
17:01 gtmanfred are you on amazon?
17:01 gtmanfred you might check the metadata server
17:01 raspado on both aws/openstack
17:02 gtmanfred https://github.com/saltstack/salt/blob/nitrogen/salt/grains/metadata.py
17:02 raspado oh niceeee
17:02 gtmanfred they should both have a metadata server, assuming that openstack isn't using the awful option config_drive instead, which i know rackspace does
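
A sketch of wiring that grains module up; the option name below is taken from the module's own gating check, so confirm it against your release:

    # /etc/salt/minion.d/metadata.conf -- opt in to the metadata grains
    metadata_server_grains: True

    # after restarting the minion, inspect what came back and target on it:
    #   salt '*' grains.item metadata
    #   salt -G 'metadata:meta-data:instance-type:r3.8xlarge' test.ping
    # (the grain's nesting mirrors the metadata service tree, so check
    # grains.item metadata before writing targeting expressions)
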
17:04 Trauma joined #salt
17:06 mrtfsvm joined #salt
17:07 edrocks joined #salt
17:08 gtmanfred shanth: if you can send me your information right now, i am about to forward along major's
17:11 raspado gtmanfred: will that metadata script work okay on salt 2016.3.4 (Boron) ?
17:13 gtmanfred raspado: assuming 2016.3.4 has http.query, which i believe it does, then yes
17:16 raspado k
17:30 keldwud joined #salt
17:30 dograt joined #salt
17:31 J0hnSteel joined #salt
17:36 raspado gtmanfred: re config_drive, anyway I can verify that on my end?
17:36 gtmanfred curl 169.254.169.254
17:37 gtmanfred if you get something back you can use the metadata grain
17:37 gtmanfred make sure you enable it though
17:38 canci joined #salt
17:39 raspado on an openstack instance, I get this https://gist.github.com/h1h1h1/daa33b7348e0937903354600f12d211e
17:40 gtmanfred re: if you get anything back, then you can use the metadata grain
17:40 raspado sweet thx gtmanfred
17:42 capnhex joined #salt
17:45 schemanic joined #salt
17:45 schemanic hello. My ext_pillar isn't working and I'm having a hard time diagnosing it
17:45 pbandark joined #salt
17:46 ChubYann joined #salt
17:46 gtmanfred have you enabled debug logs on the master?
17:47 fracklen joined #salt
17:48 auha joined #salt
17:50 schemanic gtmanfred, I have not. Let me do that now
17:53 jas02 joined #salt
17:54 Edgan joined #salt
17:56 cscf My /var/log/salt/minion log is empty even though I keep doing things that print errors to syslog?
17:56 cscf oh nvrm
17:56 auha joined #salt
17:57 schemanic gtmanfred, I've run salt-master -l debug
17:57 gtmanfred do you see the ext_pillar module getting loaded? Which ext_pillar are you trying to use?
17:58 schemanic gtmanfred, It doesn't get there. I have a new problem. The debug messaging says
17:58 schemanic [INFO    ] Setting up the Salt Master
17:58 schemanic [WARNING ] Unable to bind socket 0.0.0.0:4505, error: [Errno 98] Address already in use; Is there another salt-master running?
17:58 schemanic [INFO    ] The Salt Master is shut down
17:58 schemanic [DEBUG   ] Stopping the multiprocessing logging queue listener
17:58 schemanic [DEBUG   ] closing multiprocessing queue
17:58 schemanic [DEBUG   ] joining multiprocessing queue thread
17:58 schemanic [DEBUG   ] Stopped the multiprocessing logging queue listener
17:58 schemanic was kicked by gtmanfred: gt...fo. ♥, gtmf
17:59 schemanic joined #salt
17:59 gtmanfred schemanic please use a pastebin
17:59 schemanic sorry
17:59 Zaunei_ left #salt
17:59 schemanic did I get autokicked?
17:59 gtmanfred schemanic: you will need to stop the salt master that is running from the init system
17:59 gtmanfred no i kicked you
17:59 woodtablet left #salt
17:59 gtmanfred so that it would stop spamming
17:59 nixjdm joined #salt
18:00 schemanic yes. service salt-master status returns about 10 pids
18:00 gtmanfred so you can only have one process listening on ports... so you will need to stop the system one to run it from the commandline
18:00 gtmanfred or set log_level: debug in /etc/salt/master and restart the system salt
18:01 schemanic What's the correct way to shut down/start salt-master in RHEL/Amazon? I've been using service salt-master restart/start/stop
18:01 gtmanfred service salt-master stop
18:01 gtmanfred or systemctl salt-master stop
18:01 gtmanfred err
18:01 gtmanfred systemctl stop salt-master
18:01 gtmanfred just like any other service
18:01 schemanic no systemctl present
18:01 gtmanfred ok, so you are on amazon linux, which still uses sysvinit, and not systemd
18:02 walker joined #salt
18:02 gtmanfred service salt-master stop
18:02 gtmanfred if it doesn't stop, kill -9 the parent pid
18:03 schemanic That seems to have done it. Stand by, I'm going to change log_level.
18:03 jas02 joined #salt
18:04 schemanic log_level is set. salt master is running
18:04 schemanic nothing appears to be happening. I gather I should tail the logfile
18:07 schemanic gtmanfred, this is strange. I'm able to view my gitfs remotes with salt-run fileserver.file_lists, however the log appears to indicate that my gitfs repos require authentication which hasn't been configured
18:08 schemanic gtmanfred, however, yes the log appears to mention getting my remote pillar
18:08 schemanic but it makes note of a special pygit2 condition
18:08 schemanic pasting now...
18:08 woodtablet joined #salt
18:10 gtmanfred the fileserver setup is separate from git pillars
18:13 schemanic https://gist.github.com/devinnasar/061dbd2396cb89f5a1a3472dd3b00092
18:14 schemanic I don't understand this really. I'm getting a pillar render fail, but the log says it has my pillar
18:14 gtmanfred seems like it should be loading the git pillars, i do not see an issue there
18:14 gtmanfred why didn't you put the render fail in the gist?
18:14 woodtablet left #salt
18:15 schemanic updated gist: https://gist.github.com/devinnasar/061dbd2396cb89f5a1a3472dd3b00092
18:15 gtmanfred This is your error  Specified SLS 'salt' in environment 'base' is not available on the salt master
18:15 gtmanfred you are referencing an sls named 'salt' in the base environment in gitfs and it doesn't exist
18:16 woodtablet joined #salt
18:16 schemanic no
18:16 schemanic That's in the local pillar
18:16 schemanic It's differ
18:16 schemanic hmm
18:16 gtmanfred where is it in the local pillar?
18:16 gtmanfred and what is your pillar_roots config?
18:16 schemanic Let me back up a bit
18:16 schemanic So I'm using the salt-formula formula to set up my salt master
18:17 schemanic I'm not using salt-ssh for it
18:17 gtmanfred sure, but is the salt sls in the git fileserver or in the pillar gitfs?
18:18 schemanic I'm going right to the instance, cloning salt-formula, running a script to copy some starter state tree and pillar tops, which should set up the machine to be a master and a minion of itself, which in turn points to ext_pillar
18:18 mikecmpbll joined #salt
18:18 schemanic the starter pillar is staying in /srv/pillar
18:18 schemanic so there's a top.sls there and a top.sls in ext_pilar
18:18 schemanic ext_pillar rather
18:20 schemanic to answer your question about where the salt.sls file is - it's in /srv/pillar/salt.sls.
18:21 woodtablet left #salt
18:21 whytewolf okay, so what top is set to look for it?
18:22 schemanic whytewolf, I don't know how to tell. I suspect it's a top collision - the leftover starter /srv/pillar/top.sls is being read first instead of ext_pillar/top.sls
18:23 whytewolf they both have a reference to salt.sls?
18:24 schemanic no
18:24 schemanic local has reference to salt.sls, ext has a reference to salt.master.sls
18:24 whytewolf ...
18:25 whytewolf so salt/master.sls
18:25 schemanic the local one is a file called 'salt.sls' and the remote repo has a directory called 'salt' with a file called 'master.sls' in it
18:25 schemanic sure enough: mv /srv/pillar/top.sls /srv/pillar/top.bak let the ext pillar take over
18:25 schemanic there was a name collision
18:26 whytewolf it isn't a name collision. it is a merging
18:27 whytewolf although I didn't know git_pillar merged with local... but git_pillar is a special snowflake in pillar
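
Reconstructed from the discussion, the two tops that got merged would have looked roughly like this (contents hypothetical); since all pillar top files are merged into a single top, every SLS referenced has to resolve in the environment that serves it:

    # /srv/pillar/top.sls -- leftover starter top in the local pillar_roots
    base:
      '*':
        - salt            # expects /srv/pillar/salt.sls

    # top.sls inside the git_pillar repo
    base:
      '*':
        - salt.master     # expects salt/master.sls in the repo
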
18:27 upb joined #salt
18:29 SaucyElf joined #salt
18:29 schemanic hmm
18:29 schemanic Thanks for helping me along
18:29 mikecmpbll joined #salt
18:29 schemanic So what should I do here? remove the local stuff after I've finished my initial highstate?
18:32 cscf when I use salt-run lxc.init, I get 1 key accepted on the master and a second one in the container
18:34 whytewolf schemanic: well, there are options. one you might investigate. instead of setting up a full master/minion on the master for your bootstrap, use a masterless minion to bootstrap your master config. that way you can just pass a couple of settings to the command to tell it where your settings are
18:36 whytewolf that way they don't even need to be in /srv/salt and /srv/pillar when you run them
18:37 cscf Why does the minion seem to regen it's key after being given one?
18:37 whytewolf just a salt-call --local --file-root=/srv/bootstrap/salt --pillar-root=/srv/bootstrap/pillar
18:38 whytewolf cscf: i have no idea i never used lxc
18:38 cscf I'd guess it's not particularly lxc-specific
18:38 whytewolf well i have never seen the behaviour you are describing either
18:38 whytewolf but then i tend to pregenerate my keys and transfer them
18:41 cscf Well, that's essentially what it's supposed to be doing
18:41 cscf But then the minion connects with a completely different key and gets rejected.  This worked fine earlier
18:41 Renich joined #salt
18:41 MTecknology is something installing the wrong keys later on?
18:42 whytewolf is it starting the minion software before the system ssh's in
18:43 MTecknology oh.. does the template have salt-minion in it already? that'd cause that behavior
18:43 schemanic Is there a way to get salt to show you what pillar data it has?
18:44 MTecknology salt-call pillar.data
18:44 whytewolf schemanic: salt '*' pillar.items
18:44 schemanic A template isn't rendering and I swear I have the data in pillar but it's not putting it in the template
18:44 MTecknology pillar.get the:key
18:44 whytewolf ^
18:45 whytewolf although pillar.(data|items) will say if there was a rendering error
18:45 whytewolf at the top
18:46 whytewolf also, is the template in pillar or the state tree
18:46 MTecknology I heard template and assumed file.managed:-template:jinja
18:47 whytewolf you never know these days ;)
18:48 cscf MTecknology, it does, yes, but the template deletes /etc/salt/pki at the end
18:49 PatrolDoom joined #salt
18:49 whytewolf cscf: if the template has salt-minion start at boot, doesn't matter if it deletes the key
18:49 cscf Maybe I'll have to leave the salt-minion package out of the template
18:49 cscf whytewolf, well it worked earlier! lol
18:49 cscf But ok, I'll try
18:49 MTecknology I recommend leaving it out
18:49 MTecknology leave that to your custom bootstrap script
18:49 schemanic Okay, so I know for a fact that my pillar repo is updated with a pillar value, and pillar.items is not returning that data. I've already tried saltutil.sync_pillar and nothing's happened yet
18:50 MTecknology ... and use a custom script
18:50 schemanic rather I've tried saltutil.sync_all and saltutil.refresh_pillar
18:51 MTecknology start at the pillar top file and start working your way through..
18:51 muxdaemon joined #salt
18:52 schemanic MTecknology, I'm afraid I dont understand your advice
18:52 schemanic I'm looking at the top file and seeing my data there
18:52 schemanic it's not showing up when it's called down
18:53 MTecknology I have no idea what that's supposed to mean
18:54 schemanic I mean that the ext_pillar has salt.master.gitfs_privkey in it and when I call salt '*' pillar.items, salt.master.gitfs_privkey isn't there
18:54 whytewolf schemanic: try to be as informative as possible. your descriptions are difficult to follow and you tend to not answer the questions that are being asked. so again. a.) where is this template you are trying to render that is not rendering. b.) what error are you getting when you try to render it. c.) what is the nature of the medical emergency
18:55 schemanic whytewolf, hold on a moment
18:55 schemanic whytewolf, MTecknology, this is the template: https://github.com/saltstack-formulas/salt-formula/blob/master/salt/files/master.d/f_defaults.conf
18:56 schemanic b) There is no error. The issue is that the template is expecting data that it isn't finding, because it appears as though that data does not exist in the pillar, although it does.
18:57 MTecknology If pillar.data doesn't show the value, then it doesn't exist in pillar
18:57 whytewolf it renders but it is blank?
18:57 schemanic whytewolf, yes it is blank
18:58 SaucyElf joined #salt
18:58 major is there a summar of helpful guidelines when writing this stuff.. like "if you are using map.jinja then you might want to treat it as a formula" (I dunno if that is even true .. just a random example)
18:58 schemanic MTecknology, I'm telling you, the file has the correct key in it, and the repo has been pushed. When I call pillar.items, the key isn't there
18:58 major s/summar/summary/
18:58 whytewolf schemanic: then no it doesn't have it
18:58 schemanic Yes I understand but that means the problem is that my repo changes are not making it to the master
18:59 schemanic and I don't know how to diagnose that
18:59 MTecknology it /might/ mean that..
18:59 MTecknology it could mean you don't have your master configured to read from places correctly
18:59 whytewolf salt-run git_pillar.update -l trace
18:59 dyasny joined #salt
18:59 nixjdm joined #salt
18:59 whytewolf could also mean your targetting is off
19:00 xet7 joined #salt
19:02 onlyanegg I'm having a hell of a time upgrading from 2016.3 to 2016.11 on CentOS 7. I've had a number of issues, including the minion not restarting, and minion authentication issues detailed here https://github.com/saltstack/salt/issues/40889 . Currently, my upgrade sls works when I use salt-call, but not when I'm calling from the master. Specifically, the state that removes pycryptodome does not get run.
19:03 woodtablet joined #salt
19:03 schemanic whytewolf, this is what I get when I run your command: https://gist.github.com/devinnasar/171e9affb3fa83dcb6d2ef8f8fbf7e7f
19:04 whytewolf um
19:04 whytewolf you typoed
19:04 whytewolf update not uplate
19:05 schemanic typo
19:05 schemanic hang on
19:05 whytewolf lots of long nights i take it
19:05 major no idea what that is like..
19:06 whytewolf i don't remember what it isn't like
19:06 major I'll drink to that
19:06 whytewolf i do drink to that
19:06 whytewolf often
19:06 major ++
19:07 schemanic updated gist
19:07 schemanic https://gist.github.com/devinnasar/171e9affb3fa83dcb6d2ef8f8fbf7e7f
19:07 schemanic whytewolf, yes. I'm sorry for my haziness. I've seen 2am about 5 nights in a row
19:08 Deliant joined #salt
19:08 whytewolf humm, that would indicate it has not changed. and is in the current state. and is able to detect and connect to the git repo.
19:08 onlyanegg Actually, it seems like, when calling from the master, everything after the minion install does not get run. https://gist.github.com/onlyanegg/0b96b945232e99085b8aefaa3f0697ff
19:09 dyasny joined #salt
19:10 shanth is there a way to make salt reliably show stdout of a command while it's running. i have a state that is doing an svn checkout, sometimes it shows the output line by line, sometimes it waits to the end and spits it all out at once
19:11 shanth forgot to pm you gtmanfred
19:11 whytewolf shanth: actually it only ever shows it all at once. it just runs slower sometimes
19:11 shanth ahhh that explains
19:12 schemanic This is a git problem
19:12 schemanic My changes literally arent in bitbucket
19:12 schemanic and my local repo says there's nothing to commit
19:12 whytewolf ... well that would be a problem
19:12 schemanic or push
19:14 whytewolf maybe you need to call atlassian
19:15 Renich_ joined #salt
19:15 sknebel joined #salt
19:28 major feels like storing formulas in the state hierarchy is a bit on the confusing side
19:34 schemanic whytewolf, MTecknology so the issue was some sort of ghostly push issue.
19:34 schemanic I made my changes to the repo yesterday and git status told me 'sure, you're up to date'
19:35 schemanic however my changes didn't display in the remote system until I made another file and pushed it maybe 15 minutes ago
19:35 whytewolf weird.
19:35 schemanic Just wanted to say thank you for your help, and I'm sorry if i sounded like a crazy person. I thought I had learned everything I needed to about it so I didn't quite know what to ask about saltstack
19:36 gtmanfred lol, that is such a bizarre issue
19:36 whytewolf ^
19:36 schemanic It was almost as if the file was being .gitignore-ed, but there is no such .gitignore in my ext_pillar repo
19:37 schemanic So, blergh
19:38 onlyanegg joined #salt
19:42 onlyanegg How does the salt minion deal with states in a state run? Is there a new process spawned for each? What will happen if the salt-minion shuts down in the middle? Would it be different in `salt` vs `salt-call`?
19:42 gtmanfred the salt minion spins off one new process to run each command it gets from the master event bus
19:42 gtmanfred so one process runs the whole state run.
19:43 gtmanfred at least for right now, there was some work done on parallel running states, so that might change in the future
19:43 gtmanfred if the salt minion shuts down in the middle, it depends on which init system you are using
19:43 gtmanfred systemd kills the parent, but reparents the children to itself, and then they exit once finished
19:44 gtmanfred which is why a service.restart works on systemd minions
19:44 gtmanfred as for salt vs salt-cloud
19:44 gtmanfred salt-call*
19:44 gtmanfred salt puts the message on the event bus, and the salt-minion daemon picks it up, spins off a process and runs the job
19:45 gtmanfred salt-call creates a fully separate salt-minion instance, and runs all the maintenance thread stuff, which is why pillar.get might work with salt-call, but not salt, because the continuously running salt-minion process needs the pillars to be updated
19:47 major is there any particular logic behind the seeming preference for having <prefix>/salt/<formula> and <prefix>/pillars/ over something like: <prefix>/salt/{state,formulas,pillars}/ ??
19:48 gtmanfred There is not, i have seen people do it both ways
19:48 gtmanfred I know that i keep my pillars in states in the same repo, and then pull them in with gitfs
19:49 gtmanfred https://github.com/gtmanfred/blog-sls/blob/master/minion.d/fileserver.conf
19:49 onlyanegg hmmm, thanks. I'm having an issue while upgrading salt from 2016.3 to 2016.11 and it seems like when using salt (vs. salt-call) the states after salt-minion installation are not run. So I guess I was trying to see if that was expected.
19:49 * whytewolf shrugs. if [big if] i use formulas i tend to put them in their own file_roots
19:49 gtmanfred onlyanegg: that is a different issue
19:49 gtmanfred that is caused by the package restarting the salt minion
19:49 gtmanfred and yeah, there is no great way to solve that
19:49 gtmanfred except to run the package upgrade separately from the rest of the highstate
19:51 major gtmanfred, "root: salt" ?
19:51 gtmanfred refers to the salt directory in the root of the git repo
19:51 gtmanfred https://github.com/gtmanfred/blog-sls
19:51 gtmanfred because the top level of the git repo is not where the salt states are
19:52 major cute
19:53 gtmanfred or it could be git clone --depth=1 git://github.com/gtmanfred/blog-sls.git /srv
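
What that fileserver.conf boils down to, roughly: the root option makes gitfs serve a subdirectory of the repo instead of its top level. The ext_pillar half below is an assumption about how the same repo's pillars could be pulled in, not a copy of gtmanfred's config:

    gitfs_remotes:
      - https://github.com/gtmanfred/blog-sls.git:
        - root: salt          # serve only the repo's salt/ subdirectory

    # hypothetical companion, if pillars live in the same repo under pillar/
    ext_pillar:
      - git:
        - master https://github.com/gtmanfred/blog-sls.git:
          - root: pillar
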
19:54 onlyanegg gtmanfred: so the process that gets spawned for the state run is killed when the salt-minion is restarted. And this doesn't happen with salt-call because a whole new instance of salt-minion is created?
19:55 gtmanfred right
19:55 major is there some way to tell gitfs_remotes which directory to check a repo out into?
19:55 gtmanfred because the salt-call runs on its own, completely separate; the parent is the bash shell
19:55 gtmanfred major:no, gitfs clones the bare repo into /var/cache/salt
19:55 major ahh
19:56 major lightbulb moment
19:56 onlyanegg ahh, ok, got it, thx! Is it a bad idea to do `salt '*' cmd.run 'salt-call state.apply <state>'`?
19:56 gtmanfred nah, one of the recommended ways
19:56 gtmanfred in fact, I think it is actually better to do an at job, otherwise your cmd.run will be killed too
19:57 onlyanegg oh, ok. I'll give that a try. Thanks!
19:57 gtmanfred onlyanegg: https://docs.saltstack.com/en/latest/faq.html#what-is-the-best-way-to-restart-a-salt-minion-daemon-using-salt-after-upgrade
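
The FAQ linked above shows a pattern along these lines for upgrading without the dying minion killing the rest of the run (version pin illustrative; the FAQ also has an at-based variant for pre-systemd boxes, so check it against your release):

    upgrade-salt-minion:
      pkg.installed:
        - name: salt-minion
        - version: 2016.11.5          # illustrative

    restart-salt-minion:
      cmd.run:
        # restart from a backgrounded salt-call so the command survives
        # the parent minion process going away mid-run
        - name: 'salt-call --local service.restart salt-minion'
        - bg: True
        - onchanges:
          - pkg: upgrade-salt-minion
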
19:58 racooper is there a way to prevent salt from printing "ERROR: Minions returned with non-zero exit code" after a cmd.run job?
19:59 gtmanfred is this in a cmd.run state?
19:59 racooper not a state, it's a command module running in a script
19:59 gtmanfred it should be sent to stderr, so it shouldn't pipe to anything
19:59 gtmanfred what are you trying to do?
19:59 nixjdm joined #salt
20:00 racooper trying to prevent cron output with that error
20:00 onlyanegg I did read that, and I'm using systemd's restart parameter to keep salt-minion up, but I need the state run to finish (to remove pycryptodome), otherwise salt breaks due to the pycryptodome issue.
20:00 gtmanfred racooper: add -l quiet
20:00 gtmanfred and it won't log anything
20:01 gtmanfred alternatively, use the salt schedule instead of cron
20:04 racooper the -l quiet doesn't help (just tested); it's not writing to logs. I can send stderr to /dev/null for my purposes though.
20:06 gtmanfred yeah, might need to do that
20:08 major curious .. the use of formulas seems to be fairly polar.  Casual observation would seem to indicate that most are reserved about using formulas. Any particular reason?
20:09 gtmanfred they are community managed and changed
20:09 major like .. is it formulas in general .. or just 3rd party formulas .. or the design model .. or .. what
20:09 gtmanfred if you are going to use them,I highly suggest forking them and managing merging changes to your own repo
20:10 gtmanfred we are working on some SPM site hosting stuff, so that you could just install the formula spm from a salt provided repository
20:10 major hmm
20:10 whytewolf agreed. had someone actually create a problem ticket on another project not too long ago after i nuked a formula i had created but had no need of anymore [was really just a test project]. i had deleted the repo and all of my backups of it
20:12 whytewolf just remember: if it is important to you, fork it
20:12 major yah .. I noticed the inclusion of formulas was a little odd as well .. with the need to update the configs every time a new formula was added .. would be kinda nice if you could just include everything from a directory..
20:14 major I guess you generally get that if you write them directly into your salt/ directory .. but the formula layout kinda makes that a bit of a pain..
20:14 whytewolf the layout is actually meant for gitfs
20:16 gtmanfred ^^ formulas are specifically designed to work with gitfs or spm instead of cloning them to the master
20:18 major makes sense
20:22 shanth so i have master, and two minions, minion1 builds an app which is ultimately tarred and stored on minion1 in /tmp/app.tgz, is there a salt-esque way to get that file to copy to the master so i can distribute app.tgz to minion2 via salt state?
20:22 shanth or do i have to scp it manually every time?
20:23 cscf shanth, you might want to use rsync.synchronized directly between minions
20:24 shanth but i want to be able to distribute app.tgz to 50 hosts without using rsync :(
20:24 major okay .. soo .. when you are writing reusable states, but you don't necessarily make them an external repo/spm .. then they aren't a formula?
20:24 shanth org won't let me install rsync on everything
20:25 shanth my practical solution is to have the master build the app i guess
20:26 whytewolf or have a server somewhere whose only task is building the app. and packaging it. then pushing to the master
20:27 shanth yea i was just wondering if salt had a way to retrieve files from a minion and store it on the master though
20:27 MTecknology I say produce a proper .deb and have that propagate to a repo host
20:27 shanth freebsd MTecknology
20:27 MTecknology then their version of a .deb
20:28 major tarball..
20:28 shanth it's more of a kernel than a package, was just using app.tgz to be generic
20:28 whytewolf shanth: there is a cp.push for salt, which is minion -> master; however, it requires enabling
20:28 shanth might be just what i need whytewolf
20:28 whytewolf generally minion -> master actions are seen as insecure
20:29 shanth oh
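
For the record, a minimal sketch of the cp.push route whytewolf mentions, assuming the minion-to-master trade-off is acceptable (paths follow the usual master cache layout; verify on your version):

    # /etc/salt/master -- cp.push is refused unless the master opts in
    file_recv: True
    # file_recv_max_size: 100    # optional size cap, in MB

Then, from the master:

    salt 'minion1' cp.push /tmp/app.tgz
    # the file lands under the pushing minion's cache on the master, roughly
    #   /var/cache/salt/master/minions/minion1/files/tmp/app.tgz
    # from where it can be copied into file_roots and served to other minions
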
20:30 major I imagine you could always write a simple script to manage the synchronization (give it the necessary keys and whatnot)
20:31 onlyanegg or nfs?
20:32 McNinja joined #salt
20:32 shanth i will end up going with one of those options if salt doesnt have an easy and secure way to to it
20:32 major is it always built on a specific minion?
20:33 major and is there any reason why it can't be hosted via rsyncd?
20:35 walker joined #salt
20:36 shanth doesnt have to be on any specific host
20:36 shanth just my test lab did it on a minion and not the master
20:37 shanth going to do it on the master for prod
20:37 shanth it's just building a kernel
20:40 amcorreia joined #salt
20:40 edrocks joined #salt
20:41 MTecknology I need some help... right now, when I deploy a DO droplet, I rely on DO having updated templates, because... https://gist.github.com/MTecknology/66ce7c7f148fc9da936bcf26cc572cd7#file-bootstrap-sh-L89
20:42 MTecknology it's not really a salt question, but it involves salt-cloud and a bootstrap script, so maybe somebody has an idea. :)
20:42 mikecmpb_ joined #salt
20:43 schemanic Do I need to do anything special to the master to enable minions to talk to it at a domain? so If I wanted minions to hit it at 'saltstack.mydomain.com'?
20:43 justincely joined #salt
20:44 fracklen joined #salt
20:44 MTecknology schemanic: that's just dns resolution
20:45 schemanic Right now I'm just ensuring the server can see the internet and can be contacted along port 4506
20:45 whytewolf don't forget 4505 also
20:48 schemanic thanks whytewolf. Do I need to do anything else to protect this communication? Like set up ssl?
20:49 MTecknology I prefer keeping salt on the internet tucked away behind openvpn
20:49 whytewolf ^ I wouldn't stick it on the internet directly
20:50 schemanic I have a problem with that I'm afraid
20:50 schemanic My network is running on EC2-Classic, meaning I can't change security groups for a server once it has been created
20:51 MTecknology I don't know how that has anything to do with using openvpn..
20:52 schemanic I don't want to have to set that up if I can avoid it
20:52 schemanic I dont know anything about openvpn presently
20:52 schemanic and I have till the end of the week to set up a working server template
20:53 MTecknology The level of security you achieve is up to you. For example, I use the same openvpn config for all servers that connect.
20:54 schemanic So rather than bother with your cloud's security stuff, you just put openvpn on everything and tell the minions to connect to the master through that?
20:55 om2 joined #salt
20:56 MTecknology I know per-host would be better, but I don't want to create and distribute that many keys. Instead, they use the same key, but they connect to a special server vpn host that uses net30 to keep them isolated and that vlan only allows them to talk to the salt master, the syslog host, and a backup server.
20:57 schemanic so in this setup, the salt master is a client as well, connecting to the vpn host along with the other machines?
20:58 nixjdm joined #salt
20:59 MTecknology In that case, net30 wouldn't work, but I would strongly advise against running salt-master on a VPS
20:59 schemanic No I didn't make a suggestion
20:59 schemanic I asked you if my understanding of your description was correct
20:59 MTecknology oooh..
21:00 MTecknology salt-master is a VM running in my infrastructure inside my prod vlan. It doesn't need to do VPN.
21:00 schemanic I see, so it's your remote hosts that need the vpn?
21:00 schemanic I wonder if I can just tell my security groups that they're allowed to talk to each other...
21:01 MTecknology i.imgur.com/V41Ak7D.png
21:01 MTecknology not the best diagram in the world, but it explains my home setup, which is what I'm describing
21:02 schemanic Whoa. That is nice
21:02 schemanic wait your home setup?
21:02 schemanic you run all that infrastructure at home?
21:03 * MTecknology shows off more.. http://imgur.com/a/fjdoE
21:05 schemanic Are you like a photographer or a freelance developer?
21:06 onlyanegg :thumbs_up: that's pretty sweet
21:06 schemanic Okay I get it. You're running your local cloud and the salt master talks to everything in THE cloud over vpn
21:06 whytewolf https://imgur.com/gallery/HgSk1
21:07 whytewolf while we are showing off :P
21:07 schemanic and you're saying if my hosts are local to my salt master I needn't worry about a vpn yet
21:07 schemanic Jesus. That's beautify whytewolf.
21:07 schemanic wow I need sleep
21:08 schemanic b-e-a-u-t-i-f-u-l
21:08 schemanic are you a sysadmin who also enjoys WoD?
21:08 onlyanegg imgur.com/a/picture_of_my_macbook
21:08 gtmanfred i am disappointed that didn't work
21:09 schemanic What does this stuff drive? Are these home automation networks  or just local clouds for personal projects?
21:10 whytewolf schemanic: I have had many roles in life. started out in tech support, took a 3 year break, came back as a php dev, moved to system admin, then to Ops, then to DevOps, and currently openstack engineer
21:10 schemanic ahh
21:10 schemanic I am similar, but earlier in my journey
21:11 James joined #salt
21:12 Guest30109 left #salt
21:13 jdshewey joined #salt
21:13 whytewolf as for what it runs, currently it is an openstack cloud. [although the 2950 on the bottom is my salt/infra server] the 3 1u asus boxes are my openstack controllers and the 2 RD450's are my compute nodes
21:14 jdshewey left #salt
21:14 whytewolf the nas on the top is my cinder storage. [have a new device that i am going to work into a new revision when i actually get unlazy to add it]
21:14 walker joined #salt
21:17 zulutango joined #salt
21:17 nethershaw joined #salt
21:22 bbbryson joined #salt
21:24 Edgan MTecknology: whytewolf: Currently working on upgrading my storage/nas system from nine 4tb drives in ZFS RAIDZ2 to those nine plus another nine 8tb drives in ZFS RAIDZ2 in the same case.
21:25 MTecknology schemanic: I do not run clouds. I have run an environment that makes use of virtualization technology.
21:26 jdshewey joined #salt
21:26 Edgan MTecknology: What state do you live in?
21:26 schemanic what is the difference between cloud and virtualization technology?
21:26 MTecknology Edgan: SD and CA mostly
21:27 MTecknology cloud is a meaningless marketing fluff word
21:27 Edgan schemanic: virtualization is more generic. Cloud is generally thought as some third party service.
21:27 jdshewey I seem to be having trouble with ordering of execution of modules in my sls file. If I show the low state, they are ordered correctly, but then when I actually do the apply, they come out in a different order.
21:27 Edgan schemanic: Though then people say on-premise cloud. It also means more turnkey, less complexity, and a higher level of automation.
21:27 Edgan jdshewey: If you are using requires, it will reorder them.
21:28 MTecknology Cloud means "we don't know what's going on, that's for the smart people to figure out."
21:28 Edgan jdshewey: If you don't use requires, you will get order defined/included
21:28 jdshewey edgan: That's the problem - the requires are being ignored
21:28 schemanic Thanks for clarifying
21:28 whytewolf MTecknology: your bias is showing :P
21:28 Edgan jdshewey: What version of salt?
21:29 Edgan jdshewey: requires are actually counter productive, because the more you define, the more it is going to reorder them. If you want full control, take all requires out.
21:30 jdshewey The RPM I am using from the centOS repos (or maybe EPEL?) is salt-master-2015.5.10-2.el7.noarch
21:30 MTecknology whytewolf: owncloud put the final nail in that coffin when "cloud" started applying to a singular non-virtualized server sitting in a basement.
21:30 Edgan jdshewey: There are official salt rpm/yum repositories that have the latest, 2016.11.5. The latest version might help, but taking requires out is more likely to get the outcome you want.
21:30 whytewolf MTecknology: not really. cloud is and always has been a service industry.
21:30 jdshewey Edgan: I'm actually working with a fork of this formula: https://github.com/jdshewey/salt-formula-freeipa
21:31 MTecknology whytewolf: I feel like you're saying VPS
21:31 whytewolf it doesn't matter how the service is maintained, just that it is a service
21:31 whytewolf VPS is in a way a cloud industry. just the very basics
21:31 whytewolf since you generally don't get as much control
21:32 MTecknology eh, ya.. we're going to have to agree to disagree on this term
21:32 onlyanegg joined #salt
21:32 jdshewey Edgan: I could give the latest repos a try...
21:35 whytewolf MTecknology: most people only see IaaS and think that is all that cloud is
21:36 whytewolf aws for example is both IaaS and several PaaS
21:36 dfinn joined #salt
21:36 whytewolf even a couple of SaaS
21:37 Edgan VPS is just give me a VM. Cloud is a more advanced form where you have secondary services (RDS, Route53, etc) all manageable via APIs. VMware has APIs, but as far as I know it doesn't directly tackle making databases as a service, dns as a service, caches as a service, etc.
21:37 MTecknology You can stick whatever you want into the cloud umbrella, literally, anything
21:37 Edgan Post cloud, some of the VPSes became more Cloud like, but still don't have 10+ secondary services like AWS, Google, and Azure. Even the Cloud services have different levels.
21:38 whytewolf a lot of people abuse the cloud term, but no, not anything can be a cloud
21:38 MTecknology again.. we'll agree to disagree
21:38 MTecknology I'm not interested in spamming this channel with the debate
21:38 Edgan Another piece of Cloud is software defined networking, which VMware did first.
21:38 whytewolf owncloud is actually abusing the term cloud.
21:38 dfinn we found recently that we need our puppet service to start with a certain environment variable.  if you run service restart via salt cmd.run this doesn't get set so I'm working on a simple module to restart puppet.  When I run the puppet code locally on the server, it sets the environment variable correctly but when I run it via salt as a module, it's setting the environment variable to blank.  I'm not totally sure what I'm missing here.  Here's the module if a
21:38 jdshewey Eh. The cloud is just somebody else's computer.
21:40 Edgan dfinn: pastebin or something the code
21:41 jdshewey Dfinn: You might also be affected by this warning I get: DeprecationWarning: Starting in 2015.5, cmd.run uses python_shell=False by default, which doesn't support shellisms (pipes, env variables, etc). cmd.run is currently aliased to cmd.shell to prevent breakage. Please switch to cmd.shell or set python_shell=True to avoid breakage in the future, when this aliasing is removed.
21:41 Edgan dfinn: Are you working toward replacing Puppet with Salt? Or are you planning on keeping both for some twisted reason?
21:42 dfinn I've tried both LC_ALL="en_US.UTF-8" and LC_ALL=en_US.UTF-8 and it doesn't work either way
21:42 dfinn Edgan, pastebin is in my first post
21:42 puzzlingWeirdo joined #salt
21:42 Edgan dfinn: too long, got chopped on my side
21:42 whytewolf dfinn: your first post was cut off by the char limit in irc
21:42 dfinn when I first started here I had intentions of replacing puppet with salt but it's too well loved here so we now just use salt for remote execution :(
21:43 dfinn doh
21:43 dfinn here's the module code
21:43 dfinn https://pastebin.com/nGB9sPyP
21:43 nebuchadnezzar joined #salt
21:43 dfinn it's quite simple and works as expected when run manually on a minion just as a python script
21:44 dfinn it starts/restarts with the correct env set
21:44 dfinn but when run as a module I get LC_ALL=""
21:44 Edgan dfinn: try using os.environ instead
21:45 dfinn I'm not sure how I'd combine that with subprocess?
21:45 dfinn the way I'm doing it seems to be acceptable with subprocess and it does work, just not when called via salt as a module
21:46 whytewolf not sure subprocess works in salt.
21:46 dfinn why would that be?
21:46 dfinn oh, i should mention that the start/stop/restart functionality works when called via salt, it's just not setting the env var that I need
21:46 dfinn but it does do everything else
21:47 whytewolf I'm checking first as i am not sure
21:47 dfinn interesting, I did just find this in the writing-modules.html doc:
21:47 dfinn "Please do not use subprocess in your custom module unless you have a very good reason to do so. "
21:47 Edgan whytewolf: it is calling python directly, unless Salt is tweaking the runtime, or mocking something, I would expect subprocess to work.
21:48 dfinn http://intothesaltmine.readthedocs.io/en/latest/chapters/development/writing-modules.html
21:48 dfinn I guess I'm not sure what the best way to proceed is
21:49 whytewolf also dfinn just a minor thing, but __virtualname__ needs to be returned by __virtual__ to be of any worth. otherwise it is just a memory hog
21:49 sh123124213 joined #salt
21:50 dfinn ok
21:50 whytewolf humm, salt does use subprocess in itself. but it all uses Popen, not call
21:52 dfinn that link I posted above makes it sound like I can somehow use cmd inside my module to run arbitrary commands but I'm not really sure how that would look?
21:55 MajObviousman what version was sdb added, does anyone recall?
21:55 whytewolf __salt__['cmd.run'](name='service puppet restart',env=[{LC_ALL:"en_US.UTF-8"}]) iirc
21:56 whytewolf SDB was added to Salt in version 2014.7.0.
21:56 dfinn based on the example in the URL above, I'm going to try this:
21:56 whytewolf https://docs.saltstack.com/en/latest/topics/sdb/
21:56 dfinn cmd = '{0} {1} {2} {3} {4}'.format('env', 'LC_ALL=en_US.UTF-8', 'service', 'puppet', 'restart')
21:56 dfinn ret = salt['cmd.run'](cmd)
21:57 dfinn well, that got me the following error : NameError: global name 'salt' is not defined
21:58 MajObviousman whytewolf: ok thank you, so I'm not crazy. I can't get salt-call to use sdb.get properly, not sure why yet
21:58 MajObviousman this is a 2-year-old version of salt (whatever's available from EPEL)
21:59 whytewolf EPEL i think had 2015.something
21:59 nixjdm joined #salt
21:59 dfinn I ran it your way whytewolf and got : NameError: global name 'LC_ALL' is not defined
22:00 whytewolf oops forgot a couple of ''
22:00 dfinn around LC_ALL?
22:00 whytewolf yeap
22:00 dfinn k
22:00 dfinn TypeError encountered executing puppet_mgmt.restart: run() takes at least 1 argument (1 given). See debug log for more info.
22:01 gtmanfred MajObviousman: is there a reason you aren't using repo.saltstack.com/
22:01 MajObviousman gtmanfred: ask my DirSec
22:01 whytewolf darn it, name should be run
22:01 dfinn k
22:01 MajObviousman but I have all the ammo I need to make the pitch
22:01 whytewolf my memory must be fading in my advanced age
22:02 MajObviousman my neteng wants to deploy napalm, and the module for that only showed up in 2016.11
22:02 dfinn same error whytewolf
22:02 MajObviousman so I think we'll be switching soonish
22:02 gtmanfred MajObviousman: sdb was added in 2014.7
22:02 whytewolf dfinn: try this __salt__['cmd.run']('service puppet restart',env=[{'LC_ALL':"en_US.UTF-8"}]) iirc
22:03 gtmanfred MajObviousman: according to https://docs.saltstack.com/en/latest/topics/sdb/
22:04 whytewolf i must be getting old. having so many problems drawing up a simple cmd.run line
22:05 dfinn that worked for restarting it but still did not fix my blank LANG env var
22:07 MajObviousman hah it's because the version of salt we have installed doesn't have an env driver for sdb
22:07 * MajObviousman eyerolls
22:07 * MajObviousman starts drafting polite but firm email to DirSec
22:07 gtmanfred yes
22:08 gtmanfred that is new
22:08 gtmanfred but you can sync it using saltutil.sync_sdb
22:08 gtmanfred though that might not be available in 2014.7, so you will have to sync the saltutil module
22:08 MajObviousman would cure the symptom. I want to cure the disease
22:08 gtmanfred ok, cool yeah do that :P
22:09 MajObviousman any gotchas you know about when rolling 2016.11 on CentOS 6/7 ?
22:09 gtmanfred i mean, a couple but nothing super major afaik
22:09 MajObviousman I see something about FIPS mode on the repo page, which we're not using
22:10 gtmanfred yeah, i don't believe that matters anymore really, since we rolled centos 6 back to pycrypto and not pycryptodomex
22:10 gtmanfred it was only an issue if you upgrade to 2016.11.4 for some reason and not the newest 2016.11.5
22:10 Edgan gtmanfred: are we getting 2017.5 in July or August?
22:10 gtmanfred no comment
22:11 gtmanfred we are working very hard on getting it out.  look for a release candidate soonish
22:11 Edgan gtmanfred: looking forward
22:11 MTecknology I haven't paid attention, does everything that hits develop wind up in the next release, excluding obvious freeze time
22:12 gtmanfred everything that is hitting develop now will be in oxygen
22:12 Edgan MTecknology: major or minor?
22:12 gtmanfred nitrogen was forked a couple months ago
22:12 Edgan MTecknology: everything in develop doesn't hit 2016.11, but 2017.5+
22:12 MTecknology excellent, thanks!
22:12 jdshewey Edgan: Thanks - upgrading did resolve the ordering issue.
22:13 Edgan jdshewey: nice
22:13 Edgan jdshewey: A lot has changed since 2015
22:13 gtmanfred https://groups.google.com/forum/#!searchin/salt-users/nitrogen$20branch%7Csort:relevance/salt-users/GhOwk7vsseM/V_C7zsehBAAJ
22:13 gtmanfred MTecknology: ^^
22:13 gtmanfred nitrogen branch was made on april 3rd
22:13 gtmanfred anything merged to develop after that date will be in oxygen
22:13 Edgan jdshewey: I find I have to be on the latest salt release plus patches to keep up with what I want to do with Salt.
22:14 jas02 joined #salt
22:16 rav joined #salt
22:17 MajObviousman so if Nitrogen was branched in April, wouldn't that mean the next major release will be 2017.4 ?
22:17 MajObviousman or am I missing something here?
22:17 gtmanfred no, it will be when we actually make the release
22:17 aneeshusa joined #salt
22:17 gtmanfred which we were hoping for 2017.5 (that is the temporary number) i don't know if we will change it again
22:17 gtmanfred 2015.5 started as 2015.2
22:18 gtmanfred and then was changed when we actually made the release in may of 2015
22:18 MajObviousman ahhh, ok
22:18 gtmanfred yeah, the version number is based on when it was released
22:18 MajObviousman not branched
22:18 gtmanfred i hate it too :P
22:18 gtmanfred right
22:18 gtmanfred https://docs.saltstack.com/en/latest/topics/releases/version_numbers.html
22:18 MajObviousman requires additional gymnastics up front, but perhaps less confusing in the long run
22:19 gtmanfred the one nice thing about it is you can immediately tell how old a release is
22:19 MajObviousman yep I found and read that page before asking
22:20 dfinn any other ideas @whytewolf, i'm tearing what little hair I have left out
22:20 whytewolf sorry dfinn $job calls
22:21 dfinn no worries, maybe i'll try back tomorrow
22:24 jdshewey If anyone is interested, they started a DevOps site over at Stack Exchange: https://devops.stackexchange.com/
22:25 gtmanfred did it get moved out of area51?
22:26 gtmanfred We have just been using regular stackoverflow and the salt-stack tag
22:26 MajObviousman jdshewey: surprised it took this long
22:27 MajObviousman but maybe I'm too close to "devops" and have developed myopia
22:27 gtmanfred It has been around for a while
22:27 gtmanfred but is still a public beta http://area51.stackexchange.com/proposals/97295/devops
22:27 raspado joined #salt
22:28 gtmanfred https://stackoverflow.com/questions/tagged/salt-stack
22:33 SaltyVagrant_ joined #salt
22:35 OliverMT joined #salt
22:37 Kelsar joined #salt
22:43 mavhq joined #salt
22:46 walker joined #salt
22:48 MajObviousman so this is super interesting: "As of 0.8.8 targeting with executions is still under heavy development and this documentation is written to reference the behavior of execution matching in the future. Execution matching allows for a primary function to be executed, and then based on the return of the primary function the main function is executed."
22:49 MajObviousman I don't see that anywhere in latest docs
22:49 MajObviousman is that a feature that was canned?
22:49 gtmanfred it was super removed
22:49 gtmanfred yeah
22:49 gtmanfred just a whole can of vulnerabilities from that
22:49 gtmanfred i think the decision was that it would be better to use a custom grain to do that
22:49 gtmanfred we have been talking about matching on the mine though
22:49 MajObviousman ok, good to know. That functionality is exactly what I was asking about a few weeks ago
22:50 MajObviousman and yes, I believe we concluded with "look into mine"
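
A bare sketch of the custom-grain alternative gtmanfred describes: compute the "primary" check once on the minion, expose it as a grain, and match on that. The file path and grain name are invented for illustration:

    # _grains/primary_check.py -- sync with: salt '*' saltutil.sync_grains
    import os.path


    def primary_ok():
        # grain modules return a dict that is merged into the minion's grains
        return {'primary_ok': os.path.exists('/etc/myapp/enabled')}

    # then target on the result:  salt -G 'primary_ok:True' state.apply something
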
22:54 walker joined #salt
22:56 nicksloan joined #salt
22:58 nixjdm joined #salt
23:15 hemebond joined #salt
23:17 rav_ joined #salt
23:25 KennethWilke joined #salt
23:28 mattl joined #salt
23:54 dendazen joined #salt
23:59 om2 joined #salt
