
IRC log for #salt, 2015-10-23


All times shown according to UTC.

Time Nick Message
00:03 sgargan joined #salt
00:04 bfoxwell joined #salt
00:06 Eugene That looks like systemd being /too/ efficient with the reboot
00:08 otter768 joined #salt
00:11 sjorge joined #salt
00:11 sjorge joined #salt
00:14 cberndt joined #salt
00:15 PredatorVI RD_: I checked out the ISSUE link but I don't think it applies.  I'm not using any of those modules.  It's a pretty basic install.
00:16 clintberry joined #salt
00:16 RD_ Exactly. Whatever your issue is, it's probably not related to the salt-api. Your return is 0 right?
00:17 kinetic joined #salt
00:18 clintberry joined #salt
00:19 zwi joined #salt
00:20 breakingmatter joined #salt
00:20 otter768 joined #salt
00:21 traph joined #salt
00:21 otter768 joined #salt
00:32 dthom91 joined #salt
00:41 Aikar joined #salt
00:44 PredatorVI RD_:  Yes, return is '0'
00:48 breakingmatter joined #salt
00:51 zmalone PredatorVI: Are you still trying to get salt-api working?
00:51 PredatorVI YES! :)
00:51 zmalone I tried a default install, and had the same issue you did, when I created a valid /etc/salt/master.d/salt-api.conf from scratch, it worked.
00:51 zmalone If there's no config, it just tried to spawn a child, failed, and exited
00:51 PredatorVI I do have that file....
00:52 PredatorVI wait
00:52 zmalone http://bencane.com/2014/07/17/integrating-saltstack-with-other-services-via-salt-api/
00:52 PredatorVI checking
00:52 zmalone worked for me, but I had to create the ssl key/cert too.
00:52 PredatorVI DANG IT!  Thank you.  I had the file in /etc/salt and not /etc/salt/master.d
00:53 zmalone Without a working config, it's HARD to troubleshoot because the strace dies on that missing child, and there are a million prior ENOENTs for libraries and python modules and stuff before the failure
00:53 zmalone Terrible error reporting, it's probably worth opening a github issue for that alone.
00:53 zmalone (or a pull request with an error message)
00:54 PredatorVI Yeah...i'll do one of those ;).
00:54 PredatorVI Thank you for nudging me back toward sanity
00:54 PredatorVI Too many dang things going on at the same time some days
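
For reference, a minimal /etc/salt/master.d/salt-api.conf along the lines of the article zmalone linked might look like this (a sketch only; the cert/key paths are illustrative and the files must exist, as noted above):

    rest_cherrypy:
      port: 8000
      ssl_crt: /etc/pki/tls/certs/localhost.crt
      ssl_key: /etc/pki/tls/certs/localhost.key
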
00:58 fsteinel_ joined #salt
00:59 lionel joined #salt
01:02 larsfronius joined #salt
01:10 zmalone joined #salt
01:14 catpiggest joined #salt
01:19 Akhter joined #salt
01:21 zmalone1 joined #salt
01:25 johnkeates joined #salt
01:26 RandyT_ question: in the deploy process of a minion, is the minion id available to me in the environment or some place prior to the minion code actually being installed and running?
01:26 johnkeates yes
01:27 johnkeates but it depends on what you mean by deploy and what you mean by ID
01:27 johnkeates a minion ID is not magic, it's something YOU set :)
01:27 johnkeates by default it takes the hostname I think, or the FQDN
01:27 RandyT_ let me tell you what I am trying to do.
01:27 johnkeates okay
01:28 RandyT_ trying to run a script provided through user-data on ec2 to set the hostname and ip address with route53 at boot up.
01:28 RandyT_ if this gets run in rc.local before I actually have a /etc/salt/minion file, I come up with a blank hostname...
01:28 johnkeates well, by default, the ID would be your FQDN (or hostname, you can look it up, it's in the comments)
01:29 RandyT_ meaning /etc/hostname?
01:29 johnkeates i personally don't mix aws and salt, but any imaging and deployment system I used had a way of predefining the fqdn
01:29 johnkeates here is what I have in two different setups:
01:30 johnkeates setup 1: I have all machines seeded with a virtual MAC address within the XenSource address space, and a DHCP server that knows those MAC addresses and assigns them the same IP, hostname and domain name
01:31 johnkeates setup 2:  I have a preseed system that uses an url like: https://preseed.domain.com/debian/jessie.cfg?hostname=mysql1prod&domain=acme.corp
01:31 johnkeates in both cases the VM gets its hostname externally
01:31 johnkeates now, when you say user-data, is that something you can read/process before you do anything salty?
01:32 RandyT_ yes, available through the api via 'curl http://169.254.169.254/latest/user-data'
01:32 RandyT_ so allows me to pass in the script I want to run without having to change the image.
01:32 johnkeates (by the way, preseeding is unattended setup for debian + ubuntu, like anaconda and kickstart are for fedora and redhat)
01:33 RandyT_ makes it possible for a new ip to be assigned if AWS reboots the image, for example.
01:33 RandyT_ as opposed to setting it via salt-cloud or boto
01:33 RandyT_ doesn't require a deploy process to set the host/ip
01:33 johnkeates well, I guess your options are limited, since you are using image-based deployments you should have a run-once script that disables itself, and boot into single-user mode by default
01:34 johnkeates that way you can be sure it's going to be configured OK
01:34 johnkeates yeah, that's the only downside to my process: it depends on inventory control being connected to the DHCP server and the DHCP server being online for deployments
01:34 RandyT_ one option, which I am avoiding right now is to dig the tag value for name out of ec2 api...
01:35 RandyT_ unfortunately, it is not as accessible as the other values through the http api
01:35 johnkeates but with preseeding instead of image-based deployments I do fresh installs with 2 commands, so that's neat
01:35 johnkeates you may want to simply invent some glue logic
01:35 johnkeates have a small instance your deployment scheme can access for pre-processed data
01:35 johnkeates then use python or php or something on the small instance to generate automatic configurations
01:36 johnkeates it's probably about 20 lines of code to get that going on a private network
01:36 RandyT_ I'll check out preseeding as I am not familiar with it. fewer options available to me on aws in this regard, so trying to avoid diverging too much
01:37 RandyT_ that is basically what I have in this script, but unfortunately, the source I am relying on for hostname is not guaranteed to be available when this script runs.
01:37 RandyT_ going to need to resort to digging deeper in the AWS api it seems.
01:37 RandyT_ thanks for the suggestions though. thought provoking.
01:38 johnkeates when going for preseeding, consider the option to supply the needed information at the preseed command interface
01:38 johnkeates i'm doing most of it on the cli
01:38 johnkeates so basically i ssh into a kerberized box and do xenlight-setupvm.sh <FQDN> and it starts a standard provisioning
01:40 johnkeates i'm gonna go get me some food, good luck
01:40 RandyT_ thanks again
01:40 johnkeates and look in to the preseeding or automated/unattended options for your OS of choice
01:41 johnkeates in the case of debian, it's basically a tiny image, a kernel and a text file that can be downloaded over a variety of protocols
01:41 johnkeates anyway, good luck
01:41 RandyT_ coo
01:41 RandyT_ cool
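
A rough sketch of the rc.local approach RandyT_ describes, assuming the user-data payload itself carries the desired hostname (the payload format and all names below are made up for illustration):

    #!/bin/sh
    # Fetch this instance's user-data from the EC2 metadata service
    UD=$(curl -s http://169.254.169.254/latest/user-data)
    # Assumption: user-data is a simple "hostname=web1.example.com" payload
    HOST=${UD#hostname=}
    echo "$HOST" > /etc/hostname
    hostname "$HOST"
    # Pin the minion id explicitly so salt-minion doesn't fall back to a blank hostname
    echo "$HOST" > /etc/salt/minion_id
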
02:02 zmalone1 https://github.com/saltstack/salt/blob/develop/pkg/rpm/salt-master#L133
02:02 zmalone1 ha
02:03 zmalone1 if (unable to perform action) then echo "Errror!"; return 0
02:07 orionx joined #salt
02:07 laax joined #salt
02:13 dthom91 joined #salt
02:18 orionx_ joined #salt
02:19 ageorgop joined #salt
02:20 TyrfingMjolnir joined #salt
02:31 sunkist joined #salt
02:38 evle joined #salt
02:39 xsteadfastx joined #salt
02:44 AndroUser2 joined #salt
02:51 TyrfingMjolnir joined #salt
02:53 ajw0100 joined #salt
02:55 zmalone joined #salt
02:56 alemeno22 joined #salt
03:02 stanchan joined #salt
03:03 larsfronius joined #salt
03:04 favadi joined #salt
03:04 msx joined #salt
03:33 kinetic joined #salt
03:34 dthom91 joined #salt
03:36 nikogonzo zmalone: lol
03:43 Akhter joined #salt
03:50 breakingmatter joined #salt
03:51 danlsgiga hey iggy, I was able to successfully use the deep merge custom module and the custom dictupdate util, I just had to update my module to use the __utils__['dictupdate.update'](dest, upd) instead of dictupdate.update(dest, upd)
03:52 danlsgiga Thanks to murrdoc
03:55 danlsgiga Now dicts and even lists are being aggregated and deep merged... this opens lots of opportunities for fancy pillar merging
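
The pattern danlsgiga describes would look roughly like this in a custom execution module (file and function names are made up; __utils__ is injected by the salt loader at runtime, so this only runs under salt):

    # _modules/deepmerge.py
    def merged(dest, upd):
        '''Deep-merge upd into dest via the custom dictupdate util.'''
        return __utils__['dictupdate.update'](dest, upd)
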
04:05 TyrfingMjolnir joined #salt
04:10 BretFisher joined #salt
04:11 orion203 joined #salt
04:12 dthom91 joined #salt
04:31 timoguin joined #salt
04:43 davidbanham joined #salt
04:43 davidbanham Hi all, I'm running a salt master based on gitfs. I have just added a new state and required it under '*' in top.sls, but it's not showing up in highstate on any of my minions. I'm lost as to how to proceed debugging this. Can anyone steer me in a direction?
04:48 __Greg joined #salt
04:48 __Greg left #salt
04:49 ramteid joined #salt
04:51 DanyC joined #salt
04:54 otter768 joined #salt
04:55 jcastle joined #salt
04:59 rdas joined #salt
05:00 DanyC joined #salt
05:01 DanyC_ joined #salt
05:02 bhosmer_ joined #salt
05:07 MK_FG joined #salt
05:12 anmol joined #salt
05:13 armguy joined #salt
05:17 incnspcus joined #salt
05:17 incnspcus joined #salt
05:17 TyrfingMjolnir joined #salt
05:28 MK_FG joined #salt
05:41 felskrone joined #salt
05:44 aquinok joined #salt
05:46 vexati0n joined #salt
05:50 timoguin joined #salt
05:50 mlanner joined #salt
05:52 breakingmatter joined #salt
05:54 anmol joined #salt
06:02 bhosmer_ joined #salt
06:11 impi joined #salt
06:17 charli joined #salt
06:21 iggy danlsgiga: I'll tell murrdoc you said he's not totally worthless tomorrow
06:22 iggy davidbanham: do you have a top.sls in multiple branches? They do weird things
06:23 lb joined #salt
06:23 davidbanham iggy: No, I don't. I've since "solved" the problem by moving the things I wanted out of the new state file and into a state file I already had. It worked. I have no idea why.
06:27 oherrala davidbanham: did you try restarting the master? maybe it doesn't read new files automatically?
06:28 davidbanham Ooh, yeah maybe. I didn't try that.
06:28 anmol joined #salt
06:30 iggy gitfs refreshes every minute
06:31 iggy or with `salt-run fileserver.update`
06:32 moogyver joined #salt
06:32 davidbanham I do fileserver.update as a habit every time.
06:37 iggy is your top file in a separate repo?
06:52 malinoff joined #salt
06:53 orionx joined #salt
06:55 otter768 joined #salt
07:00 otter768 joined #salt
07:03 timoguin joined #salt
07:05 larsfronius joined #salt
07:10 Zytox joined #salt
07:10 drwx joined #salt
07:19 eseyman joined #salt
07:22 flebel joined #salt
07:23 drwx hi, i'm having trouble on some centos 6 server: python-tornado can't be found in the repo
07:23 drwx i compared to other machines and for some reason yum doesn't see some packages in the repo: yum --disablerepo '*' --enablerepo saltstack-repo list available -v | wc -l
07:24 drwx yields 73 while it's 102 on other servers
07:24 Jimlad joined #salt
07:24 Guest43368 joined #salt
07:24 kinetic joined #salt
07:26 fredvd joined #salt
07:26 DanyC joined #salt
07:33 DanyC joined #salt
07:35 jhauser joined #salt
07:41 eliasp joined #salt
07:42 markm joined #salt
07:42 markm_ joined #salt
07:45 edulix joined #salt
07:47 Fiber^ joined #salt
07:58 Rumbles joined #salt
07:59 chiui joined #salt
08:02 fredvd_ joined #salt
08:12 bluenemo joined #salt
08:14 ollins joined #salt
08:16 keimlink joined #salt
08:16 s_kunk joined #salt
08:19 CeBe joined #salt
08:21 thefish joined #salt
08:26 larsfronius joined #salt
08:30 larsfron_ joined #salt
08:35 orionx_ joined #salt
08:37 ollins joined #salt
08:39 slav0nic joined #salt
08:40 DanyC joined #salt
08:42 N-Mi joined #salt
08:42 N-Mi joined #salt
08:45 evle1 joined #salt
08:48 titilambert joined #salt
08:53 sfxandy joined #salt
08:53 sfxandy morning all
08:54 sfxandy (morning in the UK that is!)
09:00 GreatSnoopy joined #salt
09:08 jhauser joined #salt
09:10 ubikite joined #salt
09:10 linjan joined #salt
09:16 ponpanderer joined #salt
09:18 overyander joined #salt
09:19 dustywusty_ joined #salt
09:20 voileux_ joined #salt
09:20 armyriad joined #salt
09:20 penguinpowernz joined #salt
09:21 paha joined #salt
09:21 Ch3LL_ joined #salt
09:21 SheetiS joined #salt
09:22 fsteinel joined #salt
09:22 tru_tru joined #salt
09:23 favadi joined #salt
09:26 cberndt joined #salt
09:26 peters-tx joined #salt
09:33 msx joined #salt
09:39 larsfronius joined #salt
09:43 larsfron_ joined #salt
09:52 Guest43368 joined #salt
09:53 DanyC_ joined #salt
09:54 KermitTheFragger joined #salt
09:59 leev_ joined #salt
09:59 TyrfingMjolnir_ joined #salt
10:00 dbanham joined #salt
10:00 Hazelesque_ joined #salt
10:00 Striki_ joined #salt
10:01 garphyx joined #salt
10:01 SaveTheRb0tz joined #salt
10:02 clone1018__ joined #salt
10:02 lude1 joined #salt
10:02 JPau1 joined #salt
10:02 xnaveira joined #salt
10:02 gtmanfre- joined #salt
10:02 Sacro_ joined #salt
10:02 scarcry_ joined #salt
10:02 mephx_ joined #salt
10:02 tmmt joined #salt
10:02 amatas_ joined #salt
10:02 jeblair_ joined #salt
10:03 davroman1ak joined #salt
10:03 mortis__ joined #salt
10:03 lionel_ joined #salt
10:03 dork_ joined #salt
10:03 dthorman_ joined #salt
10:03 is_null_ joined #salt
10:03 doglike joined #salt
10:03 doglike joined #salt
10:03 ze-_ joined #salt
10:03 sjorge_be joined #salt
10:04 evle joined #salt
10:04 imanc_ joined #salt
10:04 larsfronius joined #salt
10:05 womble` joined #salt
10:06 rideh joined #salt
10:07 FreeSpencer_ joined #salt
10:07 FreeSpencer joined #salt
10:07 Nebraskka joined #salt
10:08 MeltedLux joined #salt
10:08 calebj joined #salt
10:08 jcockhren joined #salt
10:08 cb joined #salt
10:08 jY- joined #salt
10:08 ponpanderer joined #salt
10:08 MK_FG joined #salt
10:08 sgargan joined #salt
10:08 zz_Cidan joined #salt
10:08 shanemhansen joined #salt
10:08 TomJepp joined #salt
10:09 Cidan joined #salt
10:09 shnguyen joined #salt
10:09 paolo joined #salt
10:09 paolo joined #salt
10:09 Shirkdog joined #salt
10:09 dean joined #salt
10:10 bodgix joined #salt
10:11 __alex joined #salt
10:11 unusedPhD joined #salt
10:12 traph joined #salt
10:13 chitown joined #salt
10:13 analogbyte joined #salt
10:14 Nazzy joined #salt
10:14 bluenemo joined #salt
10:14 alexhayes joined #salt
10:14 sarlalian joined #salt
10:14 _ikke_ joined #salt
10:14 danielcb joined #salt
10:15 markm_ joined #salt
10:15 PI-Lloyd joined #salt
10:16 jay_d joined #salt
10:17 packeteer joined #salt
10:17 MaZ- joined #salt
10:17 LinuxHorn joined #salt
10:18 malinoff joined #salt
10:19 kalessin joined #salt
10:19 m0nky joined #salt
10:19 m0nky joined #salt
10:25 malinoff joined #salt
10:26 jhauser_ joined #salt
10:26 malinoff joined #salt
10:29 sunkist joined #salt
10:31 larsfron_ joined #salt
10:34 Number6 left #salt
10:36 sgargan joined #salt
10:43 giantlock joined #salt
10:45 sgargan joined #salt
10:46 linjan joined #salt
10:53 sgargan joined #salt
10:56 breakingmatter joined #salt
10:58 MadHatter42 joined #salt
11:03 moeyebus joined #salt
11:09 favadi joined #salt
11:09 kbaikov joined #salt
11:09 sgargan joined #salt
11:10 danielcb joined #salt
11:11 amcorreia joined #salt
11:14 wordsToLiveBy joined #salt
11:17 evle joined #salt
11:22 saffe joined #salt
11:24 ponpanderer Anyone know what would cause " [WARNING ] Could not write out jid file for job <JID>. Retrying.". I'm getting this on 2015.8.1 doing 'salt-run jobs.print_job <JID>. It just starts endlessly looping the warning
11:24 ponpanderer I'm using the default local job cache
11:24 ggoZ joined #salt
11:26 ponpanderer strangely the jid the warning complains about isn't even the jid requested by print_job
11:29 ponpanderer seems the issue and resulting loop is a thrown IOError here: https://github.com/saltstack/salt/blob/v2015.8.1/salt/returners/local_cache.py#L114
11:35 zerthimon joined #salt
11:37 ubikite ponpanderer: try checking ownerships of /var/cache/salt/master/ and have a look at https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.cache.html.
11:40 davromaniak joined #salt
11:43 Guest43368 joined #salt
11:43 mattiasr joined #salt
11:48 ponpanderer ownership is all good. just started noticing this on the latest release. happens about 50% of the time I issue print_job, and always for random old job ids.
11:49 ponpanderer something to me seems wrong with prep_jid in local_cache.py. It's changed quite a bit from 2015.5 and maintains a recurse count and a param it never uses, which causes self-referencing loops on exception
11:49 ubikite ponpanderer: you mean ownership matches with the user running salt-master?
11:49 ponpanderer yep
11:50 ponpanderer I'm going to dive into the code and see what the exception is saying for IOError
11:51 ponpanderer the weird thing is i 'print_job X' and it does output the results and randomly throws the warning loop for job id Y that i didn't even request
11:51 ponpanderer so it works and then craps out into a loop of seemingly unrelated activities
11:54 ponpanderer That's weird. It's a 'no such file or directory' error in /var/cache/salt/master/jobs
11:54 ponpanderer but the error is for a job that isn't even requested
11:56 BogdanR I would like to run orchestrate after a salt-cloud map run with the -P parameter so that instances are started in parallel
11:56 BogdanR How would I be able to accomplish this?
11:57 BogdanR Basically I need to make sure all instances have minions installed when I run orchestrate
11:58 ponpanderer Bogdan: How about testing with a test.ping and comparing those returned vs what was targeted and if it doesn't match the minion(s) are not online
11:59 BogdanR ponpanderer: It has to happen automatically. After I run the map, when the map finishes salt should call orchestrate.
12:03 BogdanR So, does anyone know how I can tell salt to run orchestrate after a map finished running?
12:06 DammitJim joined #salt
12:08 stolenmoment left #salt
12:10 drwx 09:23 < drwx> hi, i'm having trouble on some centos 6 server: python-tornado can't be found in the repo
12:10 drwx 09:23 < drwx> i compared to other machines and for some reason yum doesn't see some packages in the repo: yum --disablerepo '*' --enablerepo saltstack-repo list available -v  | wc -l
12:10 drwx 09:24 < drwx> yields 73 while it's 102 on other servers
12:11 drwx and when i try to install salt-minion, i get
12:11 drwx Error: Package: salt-2015.8.1-1.el6.noarch (saltstack-repo) Requires: python-tornado >= 4.2.1
12:12 drwx i've spent like 2 or 3 hours on that specific problem and i don't really know where to ask, i can't tell if it's yum-related or salt-related
12:24 kawa2014 joined #salt
12:30 tmclaugh[work] joined #salt
12:36 mattiasr joined #salt
12:40 jettero joined #salt
12:44 MadHatter42 joined #salt
12:44 cpattonj joined #salt
12:45 cpattonj What is the correct path to downgrade a salt-master?
12:45 ggoZ1 joined #salt
12:45 toastedpenguin joined #salt
12:45 cpattonj I recently upgraded to 2015.8.0 and am having many problems with upgrading/maintaining my 2015.5.x minions
12:45 jettero I'm looking for a way to populate pillar sls files, clearly 'import yaml'; yaml.load(whatever) ... but I'd like to resolve "id:tag:thing" into the yaml and put values there.  I think I can use salt.utils.traverse_dict_and_list() or something like it to find the right place in the data, but if there's no such tag yet, it's less useful.
12:45 cpattonj (I already tried upgrading the minions which failed and I don't want to get into that right now)
12:46 jettero (I went source diving on how grains.setval does it, but didn't get very far)
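
The piece jettero is missing (creating keys that traverse_dict_and_list can't find) can be sketched in a few lines of plain Python; the helper name is made up:

    def set_by_path(data, path, value, delimiter=':'):
        '''Create or overwrite the value at a delimited path like "id:tag:thing".'''
        keys = path.split(delimiter)
        node = data
        for key in keys[:-1]:
            # Create intermediate dicts as needed while walking the path
            node = node.setdefault(key, {})
        node[keys[-1]] = value
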
12:46 jettero cpattonj you may be stuck downgrading the master
12:46 jettero I think they have to match mostly
12:47 jettero I don't realy know though.  Seems like the version mismatches mostly don't matter
12:47 jettero though when I had older minions I found  sign_pub_messages: True helped
12:50 oleksiy joined #salt
12:50 oleksiy left #salt
12:54 cpattonj jettero: what do you mean I may be stuck? I want to downgrade. I just don't know how.
12:58 mikeywaites joined #salt
12:59 mikeywaites hey guys, can someone tell me how to correctly say 'grain does not match'  i tried the following 'not @Gnode_type:dev'
12:59 JDiPierro joined #salt
12:59 mikeywaites sorry `not G@node_type:dev`
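
The corrected form is compound-matcher syntax; used on the CLI and in a top file it would look roughly like this (the state name is illustrative):

    salt -C 'not G@node_type:dev' test.ping

    # top.sls
    base:
      'not G@node_type:dev':
        - match: compound
        - some_state
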
13:02 subsignal joined #salt
13:04 JDiPierro joined #salt
13:05 subsignal joined #salt
13:06 jettero cpattonj: oh, I see.  I imagined you'd rather upgrade the minions than downgrade the master
13:07 jettero it's probably just a matter of downgrading the package (for whatever package manager ou use) and then clearing the cache and (I forget... proc dir? something like that) as necessary
13:07 cpattonj that's having it's own host of issues
13:08 cpattonj that's too vague to be helpful but I appreciate it... I'm new to this and can't find anything about downgrading the master online
13:08 cpattonj tbh I'm not sure how I upgraded to begin with
13:08 tremon joined #salt
13:08 cpattonj I think with apt-get
13:11 toastedpenguin joined #salt
13:11 tremon hi all, is it expected that none of the grains match when using salt-call --local?
13:11 BretFisher joined #salt
13:12 kawa2014 joined #salt
13:17 DanyC_ all, anyone can give me any tips on how you guys are testing custom modules? say i have this module - http://pastebin.com/bmJ28Ntf - can you use PyCharm or any IDE to test it before you stick it into the _modules dir and build some state files?
13:23 winsalt joined #salt
13:25 mephx joined #salt
13:29 furrowedbrow joined #salt
13:29 favadi joined #salt
13:30 jeffpatton1971 joined #salt
13:33 numkem joined #salt
13:37 kawa2014 joined #salt
13:37 flou joined #salt
13:38 ubikite can anybody suggest the best way to configure iptables via salt in Debian 8.0?
13:39 jeffpatton1971 have you tried looking here first? https://docs.saltstack.com/en/latest/ref/states/all/salt.states.iptables.html
13:40 jeffpatton1971 there is also this on GitHub https://github.com/saltstack-formulas/iptables-formula
13:41 ericof joined #salt
13:42 ubikite jeffpatton1971: i've looked now, but in my installation there are no builtin modules. as you can understand i'm a bit of a newbie on salt.
13:43 jeffpatton1971 i'm not an old hand either.. but the GitHub one I believe you can download into a modules folder for use, and I think iptables is one of the modules that comes with salt
13:43 kermit joined #salt
13:44 cpowell joined #salt
13:46 bhosmer joined #salt
13:46 JDiPierro joined #salt
13:52 zwi joined #salt
13:52 sfxandy joined #salt
13:55 rmnuvg joined #salt
13:58 I joined #salt
14:00 _JZ_ joined #salt
14:02 hasues joined #salt
14:02 hasues left #salt
14:04 sroegner joined #salt
14:05 BretFisher joined #salt
14:08 kitplummer joined #salt
14:08 keekz joined #salt
14:09 zmalone joined #salt
14:10 mpanetta joined #salt
14:10 andrew_v joined #salt
14:12 sroegner joined #salt
14:13 mapu joined #salt
14:14 smkelly joined #salt
14:16 erjohnso joined #salt
14:17 gthank joined #salt
14:17 gthank joined #salt
14:17 lowfive joined #salt
14:20 quix joined #salt
14:21 chiui joined #salt
14:22 favadi joined #salt
14:28 Eureka703 joined #salt
14:34 kawa2014 joined #salt
14:35 Brew joined #salt
14:38 mikeywaites left #salt
14:39 iggy cpattonj: you should be able to just tell your package manager to install 2015.5* (worst case scenario, you have to stop the master and rm -rf /var/cache/salt/master/ too)
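
On an apt-based master, iggy's suggestion might look roughly like this (a sketch only; the exact version string depends on what the repo carries, the one below is just an example):

    apt-cache madison salt-master        # list the versions the repo offers
    service salt-master stop
    apt-get install salt-master=2015.5.5+ds-1 salt-common=2015.5.5+ds-1
    rm -rf /var/cache/salt/master        # worst case, clear the master cache too
    service salt-master start
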
14:43 BretFisher joined #salt
14:43 JohnTunison joined #salt
14:46 Akhter joined #salt
14:47 dthom91 joined #salt
14:49 clintberry joined #salt
14:51 giantlock joined #salt
14:52 dyasny joined #salt
14:53 Akhter joined #salt
14:55 bhosmer joined #salt
15:01 btorch so, I don't like reading online when at cafes and all ... you guys have any updatish saltstack book that you would recommend ?
15:01 debian112 joined #salt
15:01 danlsgiga iggy: hahaha... thanks!
15:01 lasko joined #salt
15:03 danlsgiga hey guys, I was checking the ext_pillar capabilities and it seems to me that I can't use my pillar top.sls to specify an ext_pillar to my specific targeted hosts
15:04 danlsgiga am I wrong or ext_pillar works in the global scope and I need to target it using another mechanism?
15:05 iggy danlsgiga: if you are writing your own ext_pillar, you make the decision in the pillar
15:05 flou joined #salt
15:06 danlsgiga iggy: taking gitfs_pillar as an example... I'd have another top.sls in git to control the ext_pillar?
15:06 iggy not exactly
15:07 danlsgiga iggy: Then how Salt knows where the ext_pillar is targeted for?
15:09 Rockj joined #salt
15:09 mapu joined #salt
15:10 Brew joined #salt
15:10 sdm24 joined #salt
15:11 subsigna_ joined #salt
15:11 Rockj joined #salt
15:12 danlsgiga iggy: What I want to leverage with ext_pillar in my company is giving some kind of control over specific pillars to developers for specific servers
15:12 iggy ext_pillar gets passed the minion_id, it can make certain decisions based on that
15:12 danlsgiga iggy: But I didn't find any information on how to do the targetting
15:12 evilrob joined #salt
15:12 iggy otherwise it's just matching a key's path based on certain metadata (done in the top file like normal)
15:13 danlsgiga Ok, so would you have an example of a top.sls pillar for an ext_pillar on gitfs?
15:13 Akhter joined #salt
15:14 iggy i.e. for git pillar, if you have nginx/init.sls in your git tree, then in your top file, you'd have:  '*web*':\n    - nginx
15:15 dthom91 joined #salt
15:15 sdm24 ext_pillars are targeted in the main pillar top.sls file, just like any other pillar
15:16 sdm24 at least, thats how I use gitfs ext_pillars
15:16 iggy btorch: basepi's is good and I've heard good things about "Mastering SaltStack"
15:17 kaptk2 joined #salt
15:17 mattrobenolt joined #salt
15:18 danlsgiga iggy: Ah... Totally got it! But it actually downloads the file to the filesystem and caches it?
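
Putting iggy's and sdm24's points together, a git ext_pillar setup of that era might look roughly like this (the repo URL is made up; with this legacy syntax the master branch maps to the base environment):

    # /etc/salt/master
    ext_pillar:
      - git: master git@gitserver:pillar.git

    # top.sls at the root of that repo, targeted like any other pillar
    base:
      '*web*':
        - nginx
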
15:18 sk_0_ i'm having trouble getting the salt api to start on my salt-master 2014.7.0 (helium).
15:19 ollins joined #salt
15:19 btorch iggy: cool thanks
15:19 sk_0_ i followed this https://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_cherrypy.html
15:19 zmalone sk_0_: Someone else had that problem yesterday, if the config file isn't in the right place, it'll just silently fail.
15:20 sk_0_ i've got a config for cherrypy in /etc/salt/master.d/
15:21 zmalone Is it called /etc/salt/master.d/salt-api.conf ?
15:21 zmalone http://bencane.com/2014/07/17/integrating-saltstack-with-other-services-via-salt-api/ seems to work
15:21 kawa2014 joined #salt
15:22 sk_0_ i'll rename it to salt-api.conf
15:23 zmalone https://github.com/saltstack/salt/issues/28240
15:24 larsfronius joined #salt
15:25 ageorgop joined #salt
15:29 orionx joined #salt
15:30 alemeno22 joined #salt
15:30 Brew joined #salt
15:30 sk_0_ /etc/salt/master.d/salt-api.conf restarted master. nothing is listening on 8000 and not a peep about it in the master log file.
15:32 zmalone Don't you need to run salt-api seperately?
15:32 sdm24 stop salt-master, run "salt-master -l debug" and try to find if/where it says "reading from /etc/salt/master.d/salt-api.conf"
15:32 zmalone or in the salt-api / salt-master merge did that go away?
15:33 sk_0_ i'm running 2014.7.0 helium salt-api should be merged
15:33 sk_0_ i'll try debug
15:33 ingslovak joined #salt
15:35 grumm_servire joined #salt
15:35 sdm24 I'm not sure where to go from there, but at least it will tell you if its being read
15:38 sk_0_ debug log says eauth.conf and salt-api.conf are being read & included
15:38 ingslovak Hey guys, does anyone have experience with calling salt['mine.get'] in a reactor SLS? I am just getting an empty dict, despite the mine is working right.
15:41 meye1677 joined #salt
15:41 grumm_servire joined #salt
15:43 RedundancyD joined #salt
15:44 whytewolf ingslovak: reactors/pillars/orchestrate all have a common problem: they run on the master. salt['mine.get'] calls the execution module for mine.get; you want to use the runner version of mine.get [which only works if you are on 2015.8.1 or 2015.5.6]: salt['saltutil.runner']('mine.get', tgt='targeting', fun='minefunction') https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.runner
15:49 ingslovak whytewolf: you just saved my life. fortunately i run 2015.8.1 and the reactor flow is now working - BIG thanks :)
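
A sketch of whytewolf's suggestion in a reactor SLS (targets, the mine function, and state names are all made up for illustration):

    # /srv/reactor/use_mine.sls
    {% set web_ips = salt['saltutil.runner']('mine.get', tgt='web*', fun='network.ip_addrs') %}
    update_lb:
      local.state.sls:
        - tgt: 'lb*'
        - arg:
          - haproxy.config
        - kwarg:
            pillar:
              backends: {{ web_ips }}
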
15:50 evilrob joined #salt
15:52 dthom91 joined #salt
15:53 tracphil joined #salt
15:54 jeffpatton1971 joined #salt
15:57 jimklo joined #salt
15:57 edulix joined #salt
15:58 dendazen joined #salt
15:59 kinetic joined #salt
16:06 sdm24 I have a strange bug. Yesterday I added another git repo (I have gitfs set for a Salt.git repo) to have salt push out files from. I also (for a long time) had minionfs set up (I need two minions to be able to push some keys out to other minions). I mounted the new gitfs repo to a salt subdirectory (salt://apps), which does not share the id of any minion
16:06 sdm24 However, now I keep getting warnings that No files found in minionfs cache for minion ID 'apps'
16:08 sdm24 In file_roots, the order is "roots, git minion", and I even have a minionfs_whitelist for only the 2 minions I need, and added a minionfs_blacklist  for "apps", even though that is not a minion name. I was getting warnings that all my other minions also had no files in minionfs cache, but adding the whitelist fixed that
16:09 sdm24 Has anyone else seen this behavior? should I open a bug issue?
16:13 sdm24 I meant fileserver_backend is set to roots, git, minion
16:18 Akhter joined #salt
16:19 jhauser joined #salt
16:21 bhosmer joined #salt
16:21 clintberry joined #salt
16:22 anotherZero joined #salt
16:23 moogyver joined #salt
16:23 Aikar left #salt
16:25 doug____ joined #salt
16:25 larsfronius joined #salt
16:31 lasko joined #salt
16:32 ageorgop joined #salt
16:35 aparsons joined #salt
16:36 writtenoff joined #salt
16:36 Akhter joined #salt
16:39 orionx_ joined #salt
16:39 brianvdawson joined #salt
16:41 brianvdawson joined #salt
16:41 sunkist joined #salt
16:41 chiui joined #salt
16:41 Aikar joined #salt
16:44 murrdoc joined #salt
16:45 cwyse joined #salt
16:54 dendazen_ joined #salt
17:01 ageorgop joined #salt
17:02 meye1677 joined #salt
17:02 dthom91 joined #salt
17:03 chrischris joined #salt
17:05 sfxandy joined #salt
17:21 jhauser joined #salt
17:21 Fiber^ joined #salt
17:21 kinetic joined #salt
17:23 DanyC_ whytewolf: also say thanks for the explanation given about reactors ..cool
17:24 jalbretsen joined #salt
17:25 giantlock joined #salt
17:28 Aikar left #salt
17:29 wendall911 joined #salt
17:33 aron_kexp joined #salt
17:36 mpanetta joined #salt
17:39 malinoff joined #salt
17:42 moogyver joined #salt
17:43 stomith does repo.saltstack.com have all the dependencies for salt, or does it need to get them from somewhere else as well? I'm in a pretty restricted environment.
17:44 jmreicha joined #salt
17:44 jfindlay stomith: great effort has been made to include all core dependencies
17:45 jfindlay without epel on RHEL-like systems, for example
17:45 stomith jfindlay, great, thanks
17:46 jfindlay ideally it should include all core dependencies between salt and the stock minimal install of each supported distro
17:46 timoguin joined #salt
17:47 szhem joined #salt
17:56 aron_kexp joined #salt
17:57 dthom91 joined #salt
17:58 baweaver joined #salt
17:59 Vaelatern joined #salt
18:00 baweaver joined #salt
18:00 dendazen_ where do I usually declare a static variable for a state which i want to use in a jinja template?
18:00 dendazen_ also can i use in a jinja template something like this? Hostname=salt['grains.get']('nodename')
18:02 JohnTuni_ joined #salt
18:04 Akhter joined #salt
18:05 alvinstarr Can I do the following using salt? Create a new system and install SSH. Then copy the host keys to the salt-master and on next time the same system is rebuilt use the cached host keys.
18:07 iggy sure
18:08 alvinstarr I guess I should have asked "How would I do...."
18:10 baweaver joined #salt
18:16 flou joined #salt
18:17 sdm24 {% set hostname = salt['grains.get']('nodename') %}, and then to use that, {{ hostname }}
18:17 dthom91 joined #salt
18:18 quix joined #salt
18:19 sdm24 dendazen: and you can declare that anywhere. The easiest spot is just in the state .sls file directly, or you can have a map.jinja (or any name) file that you import
18:22 dendazen_ oh thanks.
18:23 sdm24 dendazen_: just remember, if you are using file.managed (or something similar) with a source template, to include " - template: jinja" to render the jinja on the template
18:24 dendazen_ yeah i have that: source: salt://packages/zabbix_agent/files/etc_zabbix_zabbix_agentd.conf.jinja
18:24 dendazen_ - template: jinja
18:24 dendazen_ Thanks.
18:24 sdm24 no problem
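
Putting sdm24's two answers together, the state and template might look roughly like this (paths follow dendazen_'s layout; the state id and the config line are illustrative):

    # packages/zabbix_agent/init.sls (fragment)
    zabbix_agentd_conf:
      file.managed:
        - name: /etc/zabbix/zabbix_agentd.conf
        - source: salt://packages/zabbix_agent/files/etc_zabbix_zabbix_agentd.conf.jinja
        - template: jinja

    # inside the .jinja template
    {% set hostname = salt['grains.get']('nodename') %}
    Hostname={{ hostname }}
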
18:29 forrest joined #salt
18:29 dendazen_ and when i do {% from "packages/zabbix_agent/map.jinja" import blah with context %}
18:29 dendazen_ can i do glob like * if I have 4 or 5 blahs
18:29 ponpanderer question on the new spm feature in 2015.8. I'm trying to install a package locally without a repo "spm local install <pkg>.spm" and getting an error listing no dependencies/None as the reason it cannot be installed. Am i missing something?
18:32 sgargan joined #salt
18:33 racooper joined #salt
18:36 Akhter joined #salt
18:37 MindDrive joined #salt
18:38 MindDrive My search skills sometimes suck... is there a Salt method that I can use to read a yum database for the available packages?  (I'm referring to what's on the actual repository server, not what's on a given client.)
18:44 baweaver joined #salt
18:46 aron_kexp joined #salt
18:51 sfxandy joined #salt
18:53 pppingme joined #salt
18:54 merlin` joined #salt
18:56 jspatton1971 joined #salt
19:00 cliluw joined #salt
19:01 racooper have you checked here? https://docs.saltstack.com/en/2015.5/ref/modules/all/salt.modules.yumpkg.html#module-salt.modules.yumpkg
19:02 racooper sounds like you may be looking for list_repo_pkgs
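
For reference, that module function runs like any other (the minion target, package glob, and repo name below are illustrative):

    salt 'yum-host' pkg.list_repo_pkgs 'python-tornado*' fromrepo=saltstack-repo
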
19:05 keimlink joined #salt
19:05 mitsuhiko joined #salt
19:09 timoguin_ joined #salt
19:10 jspatton1971 left #salt
19:10 keimlink_ joined #salt
19:14 evidence so we've been slowly ramping up our minion count, and see random SaltReqTimeoutError: Message timed out messages all over.  we've ruled out network/cpu/local tcp stack and a few other things, the issue seems to be with zmq
19:14 evidence i came across this and hot patched it while adding some of the test values - https://github.com/saltstack/salt/pull/27606
19:15 evidence but i'm still seeing errors.. people with that many minions aren't really only running 5 worker threads right?  I have ncpus + 4 to bring it to a round 20, none of them ever see too high of usage
19:17 keimlink joined #salt
19:17 dthom91 joined #salt
19:17 forrest evidence, Are all the minions the same version as the master?
19:17 evidence yep
19:17 evidence 2015.8.1
19:18 merlin` hi
19:18 forrest evidence, evidence I'm assuming you already increased the ulimit, or it's set pretty high already?
19:18 evidence 100000 yep
19:18 merlin` its normal when on fresh cloned repo ./setup.py test fails ?
19:18 evidence confirmed via /proc
19:19 forrest evidence, What's the version of zmq?
19:19 evidence 4.0.4 on the master, 4.1.3 on most of the edge
19:20 forrest hmm okay that should be fine then. I'd say restart the master and get debug logging cranked on to see if that finds anything, then confirm if you get the issue across a random sampling of instances, or if it's always the same instances that are a problem.
19:21 evidence using the new repo repo.saltstack.com/apt/ubuntu/ubuntu14/latest/ also on the master.. 4.0.4 still seems the newest put out
19:21 evidence forrest: i'll flip on debug, but it's pretty random across all minions
19:21 forrest evidence, Oh you're on ubuntu? Was there any chance that there were long apt-get commands running during those timeouts that you saw?
19:22 evidence well part of highstate, which the masters run every 30m, is an apt-get update.  is there a known bug?
19:22 s_kunk joined #salt
19:22 evidence the masters are ubuntu, the minions are mostly fbsd
19:23 evidence also, is the rec for worker_threads still ~ncpus?
19:24 forrest no, apt-get is just super slow so sometimes people have their timeout set super low and it drops out. But you're seeing actual message timed out errors, and I assume you're not seeing anything on the minions themselves that are problematic? No logging there?
19:24 forrest evidence, As far as I'm aware.
19:24 evidence no the minions see it, it's usually on pillar updates or file.managed operations which require input from the master
19:24 evidence they usually always succeed on the 2nd attempt (3 tries seems to be the default), so it's transient
19:25 forrest Hmm that's odd.
19:25 forrest evidence, Did this error occur prior to 2015.8.1?
19:25 evidence yes
19:25 evidence there was a timeout bug i had opened an issue with where jobs would never die in 2015.5.X that made it MUCH worse, we had to restart the master every few days
19:25 forrest How many minions are you running if you don't mind me asking?
19:26 evidence this seems to be a constant flow of errors, not affected by leaks/uptime of the master
19:26 forrest If you can say
19:26 evidence 3k right now.. have about 1/4 of the boxes in
19:26 forrest How beefy is the master?
19:27 forrest Also this is all on your internal network right? Thus the comment regarding network
19:27 evidence E5-2667 x 2, all SSDs, 128GB RAM
19:27 evidence the cpus and disks aren't breaking a sweat
19:28 evidence yeah, internal backbone.  there is certainly some latency as we have servers everywhere, but tcp itself is stable, which continues to implicate a bug or zmq
19:28 forrest Yeah.
19:28 evidence i'm reading http://api.zeromq.org/3-2:zmq-setsockopt to see if there is anything else that might be relevant to tweak
19:28 forrest This might be unacceptable for you, but have you considered trying raet? https://docs.saltstack.com/en/latest/topics/transports/raet/index.html
19:29 evidence we've tested it, doesn't seem ready for primetime though
19:29 evidence we are very interested in raw tcp also..
19:29 forrest evidence, Does the exact same issue occur using raet?
19:29 forrest My main point with that is to see whether the issue still happens or not, so we can confirm if it is a salt issue, or a zmq issue.
19:29 evidence we've not tested it at scale yet, the deps aren't installed
19:30 forrest okay
19:30 evidence the raw tcp would prob be an easier test
19:30 forrest Yeah. In addition I'd say crank on debug logging on a chunk of minions where you've seen the issue (I know there was't a pattern, but worth a shot) as well as debug on the master to see if anything pops up
19:31 orionx joined #salt
19:32 jeffpatton1971 joined #salt
19:32 jeffpatton1971 left #salt
19:33 merlin` joined #salt
19:35 alvinstarr left #salt
19:36 aparsons_ joined #salt
19:40 TyrfingMjolnir joined #salt
19:40 jeffpatton1971 joined #salt
19:41 racooper Is there a salt state module that corresponds to authconfig?
19:41 iggy if the cpu usage is low, you can keep cranking up the worker_threads value
19:43 evidence iggy: how far?  already running 20 w 16 cpus
19:45 evidence bumped it to 32, easy change while we dig in
19:46 evidence on the restart and thundering herd of reconnects no master proc spiked above 20%
19:46 iggy yeah, I've had mine as much as 4x #cpus
19:46 evidence ah interesting, someone around here had mentioned cpus plus a few spares
19:47 forrest evidence, They probably aren't running that many instances :)
19:48 evidence yea.  well at some point there has to be diminishing returns as each worker is eating resources of its own.  but it's sounding like it's much higher than one worker per cpu.  the high load docs don't really talk about good targets, they just say bump it up heh
19:49 iggy yeah
19:49 iggy it depends a lot on what each one is doing
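
For reference, the knob being tuned here is a master config setting; a value in the range discussed would simply be:

    # /etc/salt/master
    worker_threads: 32    # evidence's current value; iggy reports running up to 4x core count
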
19:51 Akhter joined #salt
19:52 Guest89 joined #salt
19:52 ajw0100 joined #salt
19:56 baweaver joined #salt
19:57 CeBe joined #salt
19:58 GreatSnoopy joined #salt
20:08 dabb joined #salt
20:13 kitplummer joined #salt
20:16 aparsons joined #salt
20:17 baweaver joined #salt
20:20 d1noe joined #salt
20:21 tercenya joined #salt
20:21 chiui joined #salt
20:22 Akhter joined #salt
20:23 overyander i'm looking at the ip matching methods in states here https://docs.saltstack.com/en/latest/topics/targeting/ipcidr.html   how would i use that in an if statement in my state file? for when i want to change a setting depending on the ip subnet
20:24 danlsgiga forrest: I saw somewhere that Salt is planning to replace zeromq with tornado... is that true?
20:25 zmalone given the release notes for 2015.8, I'd probably not leap into tornado or raet
20:25 iggy overyander: you should target the state based on the cidr, not use conditionals as targeting
20:25 iggy overyander: but if you must, you'd better use salt.modules.network.*
20:26 danlsgiga oh, in addition to zeromq, it would be just the tcp transport
20:26 larsfronius joined #salt
20:26 overyander iggy, so in the last example on that page, the '- internal' is a statefile to run?
20:28 fredvd joined #salt
20:28 sdm24 overyander: to me, it looks that way
20:29 overyander here's what i'm trying to accomplish. there is one office with a proxy. i need to run a command to configure that proxy in the chocolatey config file. i am running that command in init.sls, but i only want that command to run on the minions on the subnet in that office
20:33 forrest danlsgiga, I haven't heard of such a thing, but I also don't work at Salt so I don't know. Isn't Tornado still stuck on 2.7 though?
20:33 overyander if i used modules.network.in_subnet in an if statement would that be along the lines of {% if modules.network['in_subnet'] == '10.200.0.0/16' %}  ?
20:34 danlsgiga forrest: no idea! Just curiosity
20:34 DanyC joined #salt
20:35 sdm24 overyander: I do it {% if salt['modules.network']('in_subnet') == ... %}
20:35 overyander sdm24, thanks. i've only used if's that use grains.
20:35 alemeno22_ joined #salt
20:35 overyander let me give that a shot
20:35 evidence we have it working this format - {% if salt['network.in_subnet']('10.0.0.0/8')
20:36 overyander ok, thanks evidence
20:42 overyander evidence, how would you express not equals in your method?
20:42 timoguin_ "if not"
20:44 mpanetta joined #salt
20:44 DanyC_ joined #salt
20:45 evidence yeah we are just looking for a boolean return from the function.. the == '10.200.0.0/16' expression seems kind of odd to me, but it likely works just the same
20:46 DanyC__ joined #salt
20:47 aron_kexp joined #salt
20:47 evidence i guess if the netmask actually matches /16 it maybe works, as that's the return.. mine includes 100s of smaller /24s and /23s that are in 10/8
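
For overyander's proxy case, the boolean form evidence shows would slot into the state roughly like this (the subnet, state id, and proxy URL are made up; no == comparison is needed since in_subnet returns True/False):

    {% if salt['network.in_subnet']('10.200.0.0/16') %}
    set_choco_proxy:
      cmd.run:
        - name: choco config set proxy http://proxy.example.local:8080
    {% endif %}
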
20:48 dthom91 joined #salt
20:49 faliarin joined #salt
20:50 Brew joined #salt
20:50 subsignal joined #salt
20:51 Plastefuchs joined #salt
20:53 keimlink joined #salt
21:06 aron_kexp joined #salt
21:09 kinetic joined #salt
21:10 jhauser joined #salt
21:12 Akhter joined #salt
21:12 _JZ_ joined #salt
21:14 TyrfingMjolnir joined #salt
21:15 chiui joined #salt
21:25 jmreicha joined #salt
21:26 zwi joined #salt
21:27 woodtablet joined #salt
21:28 woodtablet Hello, can someone help explain to me this line: salt '*' state.sls my_sls_file pillar='{"hello": "world"}'
21:29 woodtablet i think it is replacing the value of the master-rendered pillar key 'hello' with 'world'
21:29 big_area anyone  use the hipchat or slack modules? the docs don't quite make sense in saying to target '*' ie:
21:29 big_area salt '*' slack.post_message channel="Development Room" message="Build is done" from_name="Build Server"
21:29 big_area am i missing something?
21:30 racooper FYI: https://github.com/saltstack/salt/issues/28261
21:30 dthom91 joined #salt
21:30 sdm24 big_area: the '*' just means that it will target all of your minions
21:31 sdm24 https://docs.saltstack.com/en/latest/topics/targeting/index.html
21:32 big_area indeed. but the above command fails by way of checking each minion for the slack module and returns nothing further
21:32 sdm24 so the message will be passed from the targeted minions (in *'s case, all of them) using slack
21:32 sdm24 yep. the minions will send the message. I don't know much about how slack works, but thats what the docs make it sound like
21:33 woodtablet i am trying to follow the salt tutorial and trying the command line replacement of the pillar data, like so: salt '*' state.sls edit.vim pillar='{"pkgs": "world"}' , where I am just trying to change the name of the package installed to "world" from vim-enhanced specified in my pillar/edit/vim.sls. but i get an error of  Rendering SLS 'base:edit.vim' failed: Jinja variable 'str object' has no attribute 'vim'
21:33 big_area ah so it's just like a returner rather than a similar implementation on the master
21:33 big_area got it
21:34 sdm24 woodtablet: can you post your pillar/edit/vim.sls, your pillar/top.sls, your salt/edit/vim.sls in gist?
21:35 sdm24 but unfortunately, I don't pass pillar data through the CLI so I'm not 100% sure if I can help you there
21:37 dthom91 joined #salt
21:38 sdm24 But I think you would want something like pillar='{"pkgs": "vim": "world"}' or something like that, since pkgs contains the key 'vim' and corresponding value of 'vim-enhanced'
21:38 woodtablet sdm24: https://gist.github.com/anonymous/070a2b07f07e7c62c89a
21:39 woodtablet sdm24: i am not sure if i would use it other than for testing, but I am trying to fully comprehend what i am learning in the tutorial
21:39 big_area hmmm... that is really cool
21:39 big_area #chatops
21:41 sdm24 woodtablet: can you also post your pillar/pkgs.sls?
21:41 sdm24 err pillar/pkg.sls, but yeah
21:42 woodtablet https://gist.github.com/anonymous/d32ff62f183257d5a1c0
21:44 forrest woodtablet, What page of the tutorial are you looking at?
21:45 woodtablet forrest: https://docs.saltstack.com/en/latest/topics/tutorials/pillar.html
21:45 woodtablet forrest: they give examples of setting pillar data on the command line
21:45 forrest Yeah I see it.
21:45 sdm24 yeah I'm not sure how to do it via CLI, but basically 'pkgs' contains the key:values of 'apache:httpd' and vim:vim-enhanced' (if salt detects that the os is RedHat, for example). You want to pass somehow pkgs:vim:world, instead of pkgs:vim:vim-enhanced
21:46 sdm24 your method is setting 'pkgs:world', which is why the state is failing, because its looking for the pkgs:vim key
21:48 vieira joined #salt
21:49 sdm24 pillar='{"pkg": {"vim": "world"}}' I think is what you want
21:51 forrest yep agreed with sdm24, since it's a dict, what about pillar='{"pkgs"["vim"]: "world"}'
21:51 forrest woodtablet, ^
21:51 forrest or a variation, maybe "pkgs[vim]"
21:51 dendazen joined #salt
21:51 forrest not quite sure how that gets interpreted, because I know how you'd reference it in a python shell
21:52 forrest it would just look like print(dict['pkgs']['vim'])
21:52 forrest and that would return vim-enhanced
21:52 woodtablet sdm24 & forrest: yesss, thank you this worked: salt '*' state.sls edit.vim pillar='{"pkgs": {"vim": "telnet"}}'
21:52 baweaver joined #salt
21:52 forrest ahh okay, so just had to swap around the format, cool.
21:53 woodtablet i think my problem was being silly and forgetting to make it like a nested dict
21:53 woodtablet thanks so much both of you
21:53 forrest Yeah just a disconnect between the example and the more advanced thing you were trying to do :)
21:53 forrest woodtablet, np.
21:53 sdm24 glad I could help!
21:53 woodtablet forrest: i saw the example, but it didnt do anything, i wanted to actually see it work lol
21:54 forrest woodtablet, Totally understand.
21:56 timoguin joined #salt
21:59 forrest woodtablet, I created a PR to update the docs, thanks for running into that issue, hopefully no one else will have to deal with that confusion: https://github.com/saltstack/salt/pull/28264
22:00 ViciousLove joined #salt
22:02 sdm24 nice
22:02 forrest That reminds me, for anyone that plans on making PRs this month and wants a free shirt: https://www.digitalocean.com/company/blog/hacktoberfest-is-back/
22:02 forrest All you need to do is make 4 PRs, so it's easy mode.
22:03 forrest And the Salt team (as well as the IRC usually) is pretty helpful if you want to make your first PRs, docs are pretty good about it as well.
22:03 woodtablet nice
22:03 woodtablet i like that
22:04 woodtablet oh cool i want one, i didnt know that was running
22:05 forrest woodtablet, Yeah I didn't know till earlier this week. Not a TON of time left to get it done, but it's only 4 PRs, and anything else you already did this month towards other open source stuff should count.
22:05 sdm24 if only I could remember my github login at this work computer haha
22:06 forrest sdm24, That's what you get for not contributing enough /troll
22:06 sdm24 true true
22:06 woodtablet lol
22:10 zwi1 joined #salt
22:12 bfoxwell joined #salt
22:13 woodtablet forrest: ok you got me with that awesome tshirt. ill bite and contribute. how do i append to this page ? https://docs.saltstack.com/en/latest/topics/tutorials/minionfs.html
22:13 forrest woodtablet, Did you already fork the repo and all that jazz?
22:14 woodtablet forrest: fork the tutorial repo ? no... not yet lol i need to find it first i guess
22:14 forrest woodtablet, So all the docs are located in the main salt repo
22:15 woodtablet forrest: this one right ? https://github.com/saltstack/salt
22:16 forrest The path of the URL usually matches pretty well to where it is: so you'd be modifying: https://github.com/saltstack/salt/blob/develop/doc/topics/tutorials/minionfs.rst I'd say start here: https://docs.saltstack.com/en/latest/topics/development/contributing.html
22:16 sdm24 https://docs.saltstack.com/en/latest/topics/development/hacking.html that has a good example how to download the latest version
22:16 forrest sdm24, There's a whole doc on contributing now!
22:16 furrowedbrow joined #salt
22:16 sdm24 oh didn't see that one
22:16 sdm24 yeah forrest's is a lot better
22:17 woodtablet forrest: niceee ok, i ll do the needful
22:17 woodtablet sdm24 - thanks too
22:17 rbjorklin joined #salt
22:17 forrest woodtablet, Fork the repo, then follow the steps in that contributing doc that I linked, there are also some docs regarding 'tone' and so on inside the docs, but don't let it be too overwhelming! You can just skim it, and make the PR, then the salt team can help out, or hit me up and I'll review if you have questions (I just can't merge)
22:17 forrest woodtablet, It looks more daunting than it is :)
22:18 edulix joined #salt
22:20 rbjorklin Would it make sense to manage orchestration on top of Fleet with Salt?
22:20 forrest woodtablet, When you make the PR if you want me to take a look at it tag me with @gravyboat since I probably won't be in IRC at all this weekend.
22:21 forrest Granted you don't really need me to, the salt team is pretty nice.
22:21 woodtablet forrest: ok, both are encouraging
22:21 forrest woodtablet, Aww yeah!
22:28 geomyidae_ joined #salt
22:30 toastedpenguin joined #salt
22:32 peters-tx In Jinja, is there a way to say "if this host is a member of this Nodegroup"
22:32 incnspcus joined #salt
22:32 incnspcus joined #salt
22:33 bbradley joined #salt
22:33 terinjokes joined #salt
22:34 peters-tx Ok I think I can use the match module
22:35 sdm24 peters-tx: I don't think you can use nodegroups in jinja, because state files are read by the minion while nodegroups are defined by the master
22:35 creppe joined #salt
22:36 sdm24 You can match based on the qualification to put a host in a nodegroup, like if its ID is web* and it is running Ubuntu
22:36 sdm24 or however you define each nodegroup
22:36 aron_kexp joined #salt
22:36 peters-tx Hmm
22:39 sdm24 apparently, if you turn on "pillar_opts" in /etc/salt/master, you can target nodegroups, since it will be in the pillar data
22:39 sdm24 http://grokbase.com/t/gg/salt-users/148t6kk89m/if-condition-on-nodegroups#20140906fw7s4m5kxkpnzodsr7ru7njqbu
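
A sketch of the approach from that link (the nodegroup name and expression are made up; note that pillar_opts exposes the entire master config to minions, which is worth weighing first):

    # /etc/salt/master
    pillar_opts: True
    nodegroups:
      webservers: 'G@os:Ubuntu and web*'

    # state file fragment
    {% if salt['match.compound'](pillar['master']['nodegroups']['webservers']) %}
    # states that only webservers should get
    {% endif %}
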
22:40 orionx joined #salt
22:40 forrest sdm24, Once you can log in you should document this, I don't think it's anywhere in the docs.
22:40 sdm24 haha ok
22:41 forrest sdm24, I'm serious, I looked at it yesterday and we couldn't find any documentation regarding that sort of targeting unfortunately
22:41 sdm24 yeah, now I have a reason to recover my password
22:41 forrest sdm24, But what Colton is showing is just creating a grain called nodegroups.
22:41 forrest Are you referring to a different section of this grok?
22:41 sfxandy joined #salt
22:42 forrest sdm24, Oh nevermind, I see Florian's response
22:42 forrest hmm
22:42 sdm24 yeah, the 2nd comment
22:42 quix joined #salt
22:43 forrest sdm24, I'm going to create an issue on this just so we can track it, what's your github handle?
22:43 forrest I think it's the same but I can't remember.
22:43 sdm24 sdm24
22:43 sdm24 yep haha
22:44 orionx joined #salt
22:48 forrest sdm24, Okay I created an issue and tagged you on it.
22:48 okfine joined #salt
22:49 woodtablet forrest: how do I tag you in a git pull request ?
22:49 sdm24 cool, I can't fork over salt on this machine, but I can work on that issue
22:50 forrest woodtablet, just do @gravyboat and it will tag me.
22:50 woodtablet where do i do the @gravyboat? sorry I have never tagged someone before in a PR
22:50 forrest sdm24, Sounds good, if you don't get around to it let me know and I'll do it.
22:50 peters-tx Hmm, where is "master_opts" even documented? O_o
22:50 sdm24 peters-tx: its in the master config file, /etc/salt/master
22:50 forrest woodtablet, Oh sorry I am not being very clear. Just in the body of the PR, so you type in the title, then you have the body section, just tag me there.
22:51 peters-tx sdm24, Doesn't show up in mine :(
22:51 peters-tx Is there a minimum supported version?
22:51 peters-tx sdm24, Mine is (RPM'ed) 2015.5.5
22:51 woodtablet forrest: ahh I see ok, got it. you are tagged =D
22:52 peters-tx sdm24, "Error parsing configuration file"
22:52 peters-tx hmm
22:52 forrest woodtablet, Cool!
22:52 peters-tx sdm24, Oops, actually, I mistook the key name
22:52 sdm24 https://docs.saltstack.com/en/latest/topics/pillar/index.html#master-config-in-pillar
22:53 peters-tx master_opts != pillar_opts    sorry
22:53 forrest woodtablet, okay, you need to make sure you fetch the upstream so you don't have all these extra commits.
22:53 forrest woodtablet, Looks like jfindlay is already on it :)
22:53 sdm24 ok gotcha. It used to be True by default
22:54 peters-tx sdm24, Ok, cool
22:54 forrest woodtablet, Make sure you're fetching and merging the upstream, then you won't have all these extra commits in there. jfindlay might be adding a comment to help you out, which might require a new PR, we'll see
22:54 jfindlay forrest: I was just doing that :)
22:54 forrest jfindlay, I figured, I saw you add that tag, lol
22:54 jfindlay don't know how helpful the comment was though :/
22:55 woodtablet i thought i did...
22:55 forrest jfindlay, woodtablet (person making the PR) is in here, so it should be fine. Looks to me like it might just be behind a bit or something.
22:55 Guest26235 joined #salt
22:55 forrest The joys of git, lol
22:55 TyrfingMjolnir joined #salt
22:56 jfindlay woodtablet: if you can figure out how to remove the extra commits then that will be fine, if not you can tell me what they are and I can do it
22:56 aron_kexp joined #salt
22:56 jfindlay forrest: yeah, it took me a few months before I got comfortable with git, there were many times when I knew I had done it right and then caused a disaster
22:56 woodtablet jfindlay: i have 2 commits, but are you talking about the other 16 that happened at the same time as me ?
22:57 forrest jfindlay, I stlll sometimes have issues with extra commits/commits that don't make sense.
22:57 woodtablet jfindlay: I did this, i thought it would have helped.. git fetch upstream; git rebase upstream/2015.5 fix-broken-thing; git push --set-upstream origin fix-broken-thing
22:57 jfindlay that was at a previous company, whose codebase was already well beyond wacked, so maybe it didn't matter :)
22:58 jfindlay woodtablet: the issue might be then that the pull request was submitted against develop and you rebased against 2015.5
22:58 woodtablet oh
22:58 peters-tx sdm24, I just want to verify -- http://grokbase.com/t/gg/salt-users/148t6kk89m/if-condition-on-nodegroups#20140906fw7s4m5kxkpnzodsr7ru7njqbu   This refers to master_opts, not pillar_opts
22:58 woodtablet how do i submit to 2015.5 ?
22:59 forrest woodtablet, You don't
22:59 jfindlay woodtablet: yes, see https://github.com/saltstack/salt/compare/2015.5...gwaters:update-tutorial-documentation
22:59 woodtablet forrest: dohh.. i was just following that doc lol
22:59 forrest woodtablet, Docs are built against the develop branch.
22:59 peters-tx sdm24, But the config that is here already doesn't even list master_opts; yet does list pillar_opts
22:59 jfindlay er, not trying to contradict forrest
22:59 woodtablet forrest: dohhh
22:59 sdm24 peters-tx: it should be pillar_opts. master_opts is wrong
22:59 forrest jfindlay, I don't think we are contradicting :)
22:59 peters-tx sdm24, Ok cool, thanks
23:00 jfindlay forrest: actually, we have a full time docs guy now who has made many awesome docs improvements, including docs for each release branch
23:00 forrest jfindlay, That works fine when it's a code issue from an old release, but this is a doc commit so it can be slightly confusing.
23:00 jfindlay and the default now goes to the latest stable release
23:00 woodtablet jfindlay: so how should I continue ? kill my PR and try again ?
23:00 jfindlay for the docs site, that is
23:00 forrest jfindlay, Does develop no longer simply get modified/added against versionadds?
23:00 jfindlay woodtablet: I would close out the PR against develop and resubmit against 2015.5
23:01 forrest jfindlay, So you actually have to make the PR against the relevant tag?
23:01 woodtablet jfindlay: ok
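
One way to redo woodtablet's branch so it applies cleanly against 2015.5 (branch names follow the commands quoted above; this is a sketch, with --onto transplanting only the fix commits):

    git fetch upstream
    git rebase --onto upstream/2015.5 upstream/develop fix-broken-thing
    git push --force-with-lease origin fix-broken-thing
    # then open the new PR with 2015.5 as the base branch
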
23:01 jfindlay forrest: I'm unclear about your question, but versionadded declarations are valid at any branch in which they're introduced
23:02 sdm24 forrest: I'm taking off for the weekend, but Ill work on that doc update on Monday. Maybe over the weekend if I feel inspired (probably not)
23:02 forrest jfindlay, I'm saying if I go to the docs site and the current release shows as 2015.8.1 is that building against the dev branch for any item where versionadded declarations don't exist? Or does that specifically build against the associated tag.
23:02 forrest sdm24, Sounds good, have a good one!
23:02 sdm24 thanks you too
23:02 forrest Thanks!
23:03 jfindlay forrest: the documentation is built against the head of each release branch and develop
23:03 woodtablet jfindlay: ok, I think I did it now
23:03 jfindlay I don't know if that helps or not
23:03 forrest jfindlay, Gotcha, I'll make sure to keep that in mind, I wasn't aware of that change so I just PR against dev all the time.
23:05 jfindlay forrest: no problem, the 3 version selectors on the right of the page are a little misleading in that way, since they say that you're looking at 2015.8.1, for example, when really you're looking at HEAD of 2015.8
23:05 clintberry joined #salt
23:06 forrest gotcha, I know who does the doc updates but I can't remember their name right now, https://docs.saltstack.com/en/latest/topics/development/conventions/documentation.html should definitely be updated to note how to do that.
23:06 jfindlay jacobhammons is his github name
23:06 forrest ahh right right, jh
23:07 forrest I never remember, then I look at his page and notice the github icon beard and remember, lol
23:07 jfindlay feel free to ping him whenever you need; he's a chill kind of guy :)
23:07 forrest jfindlay, Yeah I've spoken with him before
23:08 JDiPierro joined #salt
23:09 catpig joined #salt
23:14 baweaver joined #salt
23:15 forrest woodtablet, I need to go, I commented on your PR.
23:35 cpattonj joined #salt
23:36 Nazca joined #salt
23:39 brianvdawson joined #salt
23:47 sgargan joined #salt
23:55 sgargan joined #salt
23:58 meye1677 joined #salt
