
IRC log for #salt, 2017-02-28


All times shown according to UTC.

Time Nick Message
00:00 brd_ joined #salt
00:00 d3c4f joined #salt
00:01 bVector joined #salt
00:01 hemebond1 joined #salt
00:01 saltstackbot joined #salt
00:03 kuromagi joined #salt
00:03 TRManderson joined #salt
00:03 mrud joined #salt
00:04 Aikar joined #salt
00:04 johtso joined #salt
00:04 rylnd joined #salt
00:04 Hazelesque joined #salt
00:05 cyraxjoe joined #salt
00:05 tcolvin joined #salt
00:05 esharpmajor joined #salt
00:06 Ryan_Lane joined #salt
00:06 linovia joined #salt
00:07 basepi joined #salt
00:07 jor joined #salt
00:07 tbrb joined #salt
00:07 ajv joined #salt
00:08 skrobul joined #salt
00:08 godlike joined #salt
00:09 McNinja joined #salt
00:09 vaelen joined #salt
00:11 rodr1c joined #salt
00:11 samkottler joined #salt
00:12 Guest64515 joined #salt
00:12 tom29739 joined #salt
00:13 DanyC joined #salt
00:14 madboxs joined #salt
00:14 hexa- joined #salt
00:20 scsinutz can the minionswarm.py script talk to multiple masters at once?
00:22 skrobul joined #salt
00:23 zifnab joined #salt
00:26 jerrykan[m] joined #salt
00:28 rem5 joined #salt
00:28 onlyanegg joined #salt
00:30 nafg joined #salt
00:34 nickabbey joined #salt
00:34 joshbenner joined #salt
00:40 scsinutz joined #salt
00:48 jgarr left #salt
00:57 djgerm joined #salt
01:06 scsinutz joined #salt
01:09 ninjada joined #salt
01:11 ninjada joined #salt
01:14 DanyC joined #salt
01:14 raspado_ joined #salt
01:22 ninjada joined #salt
01:30 madboxs joined #salt
01:32 vaelen joined #salt
01:32 KennethWilke joined #salt
01:52 ninjada joined #salt
02:01 raspado joined #salt
02:05 bVector joined #salt
02:05 antonw joined #salt
02:10 nickabbey joined #salt
02:11 McNinja joined #salt
02:14 raspado joined #salt
02:14 DanyC joined #salt
02:17 stooj joined #salt
02:18 auzty joined #salt
02:19 Nahual joined #salt
02:24 karlthane_ joined #salt
02:25 vaelen joined #salt
02:25 sshpillar joined #salt
02:26 saltstackbot joined #salt
02:26 sshpillar Hey all, working on implementing salt-ssh and the pillar passed in via the cli is overriding the entire dictionary instead of following the pillar source merging strategy
02:26 NightMonkey joined #salt
02:27 sshpillar Much like this issue, https://github.com/saltstack/salt/issues/33647, but the fix doesn't apply to ssh
02:27 saltstackbot [#33647][MERGED] Pillars passed from command-line override pillar subtrees instead of merging | Description of Issue/Question...
02:37 ninjada joined #salt
02:37 cypher543 left #salt
02:47 catpigger joined #salt
02:48 ilbot3 joined #salt
02:48 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.5, 2016.11.2 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ (please don't multiline paste into channel) <+> See also: #salt-devel, #salt-offtopic <+> Ask with patience as we are volunteers and may not have immediate answers
02:49 sshpillar any ideas?
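The behaviour sshpillar is after — CLI pillar merged into existing pillar instead of replacing whole subtrees — can be sketched in Python. This is only an illustration of recursive merging (function and key names are invented here), not Salt's actual implementation, and per the report the fix in #33647 had not been ported to salt-ssh at this point:

```python
# Sketch: recursive "merge, don't override" for pillar dictionaries.
def merge_pillar(base, override):
    """Recursively merge ``override`` into ``base``; leaf keys from
    ``override`` win, but sibling keys in ``base`` are preserved."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_pillar(merged[key], value)
        else:
            merged[key] = value
    return merged

existing = {"app": {"port": 8080, "host": "db1"}}
cli = {"app": {"host": "db2"}}
# Overriding the whole subtree would lose "port"; merging keeps it:
print(merge_pillar(existing, cli))  # {'app': {'port': 8080, 'host': 'db2'}}
```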
02:54 overyander joined #salt
03:12 puzzlingWeirdo joined #salt
03:15 DanyC joined #salt
03:17 NightMonkey joined #salt
03:24 joshbenner left #salt
03:28 evle joined #salt
03:41 scsinutz joined #salt
03:44 NightMonkey joined #salt
04:00 ivanjaros joined #salt
04:02 mpanetta joined #salt
04:04 mpanetta joined #salt
04:08 NightMonkey joined #salt
04:09 rdas joined #salt
04:16 DanyC joined #salt
04:19 ruxu joined #salt
04:25 raspado joined #salt
04:25 tedski joined #salt
04:31 dynamicudpate joined #salt
04:36 onlyanegg joined #salt
04:38 snc joined #salt
04:39 tvinson joined #salt
04:40 preludedrew joined #salt
04:40 ninjada joined #salt
04:48 euidzero joined #salt
04:51 scsinutz joined #salt
04:54 onlyanegg joined #salt
04:57 gnomethrower joined #salt
05:05 rickflare joined #salt
05:12 leonkatz joined #salt
05:17 gableroux joined #salt
05:17 DanyC joined #salt
05:39 ninjada joined #salt
05:52 ivanjaros3916 joined #salt
05:55 nafg Can state top files use compact targeting syntax (like G@)? Docs mention it in command line and in pillar top file, not in states top file
05:59 Deliant joined #salt
06:00 onlyanegg joined #salt
06:01 orionx joined #salt
06:04 sshpillar I would assume yes, are you having trouble targeting using grains? They would definitely be available during state compilation
06:13 ninjada_ joined #salt
06:18 DanyC joined #salt
06:22 rdas joined #salt
06:25 gk-1wm-su joined #salt
06:25 gk-1wm-su left #salt
06:26 raspado joined #salt
06:28 DanyC joined #salt
06:33 jas02 joined #salt
06:37 karlthane joined #salt
06:38 jas02 joined #salt
06:52 ruxu joined #salt
06:57 dyasny joined #salt
07:00 bocaneri joined #salt
07:00 jhauser joined #salt
07:05 Fiber^ joined #salt
07:10 ReV013 joined #salt
07:12 ruxu joined #salt
07:16 nafg sshpillar: no, just wondering why the docs use the very verbose format, match: grain
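For reference, the two forms nafg is comparing would look like this in a hypothetical state top.sls (role and state names invented); in a top file the matcher type has to be declared explicitly, so the compound form needs `- match: compound` just as the grain form needs `- match: grain`:

```yaml
base:
  # verbose form from the docs
  'roles:webserver':
    - match: grain
    - nginx
  # equivalent compound form
  'G@roles:webserver':
    - match: compound
    - nginx
```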
07:17 felskrone joined #salt
07:18 bocaneri joined #salt
07:23 ninjada joined #salt
07:25 rdas joined #salt
07:34 ravenx joined #salt
07:35 githubcdr joined #salt
07:39 puzzlingWeirdo joined #salt
07:41 ninjada joined #salt
07:43 puzzlingWeirdo joined #salt
07:48 ninjada joined #salt
07:53 Inveracity joined #salt
08:01 gk-1wm-su joined #salt
08:02 aldevar joined #salt
08:08 samodid joined #salt
08:08 juntalis joined #salt
08:11 ntropy is there a way to provide arbitrary args (outside of grains & pillar) to orchestrate runner?
08:11 Rumbles joined #salt
08:15 ntropy actually, i see i can specify the pillar on the command line, i suppose whatever i provide will be merged with the rest of pillar for minion in question
08:15 ravenx ntropy: like a way to provide data?
08:15 ravenx as far as i know, pillars is the only way
08:16 ntropy ravenx: yes to provide extra variables.  i think pillar is it
08:18 ravenx yeah, you can pass K:V via pillar on the command
08:18 ravenx is that what you're trying to do?
08:20 ntropy yes exactly that
08:23 ntropy im looking to trigger the creation of a single resource and want to provide the required args for that
08:24 ntropy as opposed to enforcing a complete state, the args for which are in pillar already
08:24 masber joined #salt
08:25 orionx joined #salt
08:26 ravenx ah yeah you can do that
08:26 jas02 joined #salt
08:26 ravenx i'd just use a pillar value in the state
08:26 raspado joined #salt
08:26 rdas joined #salt
08:27 ravenx so things like:    cwd:  {{ salt['pillar.get']('branchname') }}
08:27 gmoro joined #salt
08:27 ravenx then:   salt 'server' state.sls weee  pillar='{"branchname": "stable"}'
08:27 ravenx and cwd will be:      cwd:   stable
08:28 ntropy nice, thanks for that :)
08:28 o1e9 joined #salt
08:31 orionx joined #salt
08:33 ninjada joined #salt
08:34 ninjada joined #salt
08:37 ruxu joined #salt
08:37 ravenx no problem :D
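Putting ravenx's fragments together, a complete sketch of the pattern (state and path names are made up; the `pillar.get` default is a guard for runs where no CLI pillar is passed):

```yaml
{# weee.sls -- hypothetical state from the exchange above #}
checkout_branch:
  cmd.run:
    - name: git checkout {{ salt['pillar.get']('branchname', 'master') }}
    - cwd: /srv/app
```

Applied with `salt 'server' state.sls weee pillar='{"branchname": "stable"}'`, the CLI pillar value overrides the default at render time.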
08:45 JohnnyRun joined #salt
08:47 dariusjs joined #salt
08:52 ronnix joined #salt
08:54 s_kunk joined #salt
08:57 DanyC joined #salt
09:00 DanyC joined #salt
09:01 mikecmpbll joined #salt
09:16 gmoro joined #salt
09:30 LostSoul joined #salt
09:44 netcho_ joined #salt
09:47 puzzlingWeirdo joined #salt
09:50 ronnix joined #salt
09:56 armguy joined #salt
09:58 dariusjs joined #salt
10:02 yuhl______ joined #salt
10:09 ninjada joined #salt
10:15 scristian joined #salt
10:18 Hybrid joined #salt
10:19 g3cko joined #salt
10:19 alem0lars joined #salt
10:27 raspado joined #salt
10:30 cyteen joined #salt
10:35 ravenx hey is there a way to delete accepted keys?  i'm using salt elastically with auto-accept
10:35 ravenx as my instances are ephemeral....i now have this massive backlog of "dead" keys
10:35 colegatron salt-key -d
10:36 ravenx thanks
10:36 ravenx would you recommend to do this as a cronjob
10:36 ravenx or just, from time to time, or what?
10:36 ravenx cuz i can't afford to have my _current_ sessions be deleted
10:36 colegatron depends on your setup. but by default I prefer to do delicate tasks like that one manually, from time to time
10:37 colegatron but as said, it depends on your setup
10:37 megamaced joined #salt
10:37 ravenx true, i wouldn't trust it automated.
10:37 ravenx i suppose i should avoid doing the salt '*' anyways
10:37 ravenx cuz right now i'm just annoyed that the '*' hangs thanks to dead minions.
10:37 N-Mi joined #salt
10:38 colegatron well.. I would avoid because usually "*" means too much.
10:38 ravenx yeah
10:38 ravenx you're right.  ty
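A common workflow for this (a sketch only; minion IDs are invented) is to ask the master which minions are unresponsive before deleting anything, rather than automating deletion blindly:

```shell
# List minions that did not respond to a ping
salt-run manage.down
# Delete a single dead minion's key after reviewing that list
salt-key -d dead-minion-01
# Globs also work; salt-key shows the matching keys and prompts first
salt-key -d 'ephemeral-*'
```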
10:39 freelock joined #salt
10:46 jerrykan[m] joined #salt
10:46 Salander27 joined #salt
10:46 ThomasJ|m joined #salt
10:46 saintaquinas[m] joined #salt
10:47 jas02 joined #salt
10:48 orionx joined #salt
10:53 jas02 joined #salt
11:01 remyd1 joined #salt
11:05 LostSoul Hi
11:06 LostSoul I have a problem, after changing git repo (I mean repo is the same but I switched from gitlab to stash) - I'm getting:
11:06 LostSoul [salt.utils.gitfs ][ERROR   ][26707] Exception 'len([]) != len(['Host key verification failed.', '', '', 'and the repository exists.'])' caught while fetching gitfs remote '
11:06 LostSoul I added this git ssh setting to .ssh/config
11:08 LostSoul What can be a cause?
11:08 LostSoul Any idea how to debug this?
11:09 LostSoul I'm able to get content from repo from console
11:16 amcorreia joined #salt
11:25 nfahldieck joined #salt
11:25 nfahldieck I'm having trouble with the cmd.script state. I'm trying to run a Perl script on my minion. That script is located on the master.
11:26 goal joined #salt
11:27 goal is there really no way to perform a 'yum upgrade/update' from the pkg state module?
11:29 jas02 joined #salt
11:30 Neighbour goal: other than https://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html#salt.states.pkg.latest ? :)
11:32 goal yes, looking to do a full system upgrade rather than specific package
11:32 Neighbour Hmm, interesting...I haven't seen anything (yet) in salt for a full system upgrade
11:33 Neighbour maybe https://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html#salt.states.pkg.uptodate ?
11:34 goal i read that as simply returning whether it is up to date or not
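For what it's worth, `pkg.uptodate` is documented as verifying the system is completely up to date, and in a non-test run it upgrades the outdated packages rather than only reporting them — roughly a full `yum update`. A minimal sketch:

```yaml
system_up_to_date:
  pkg.uptodate:
    - refresh: True
```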
11:34 jas02 joined #salt
11:35 LostSoul Ok I did it with debug -l
11:35 LostSoul But is there other way to add host key to salt?
11:35 LostSoul Or to set up different ssh key?
11:35 jas02 joined #salt
11:36 LostSoul Nah, it still asks for a password, why?
11:37 Xevian joined #salt
11:39 sh123124213 joined #salt
11:40 dariusjs joined #salt
11:41 hlub it seems that file.directory fails to test for domain users. I'm getting "Comment: User user@asdf.local is not available"
11:41 hlub but that user is available.
11:49 LostSoul Anyone :P?
11:50 Reverend LostSoul: have you tried executing the command manually?
11:53 LostSoul Which command?
11:54 jor joined #salt
11:58 cj_ joined #salt
11:58 jerrykan[m] joined #salt
12:00 basepi joined #salt
12:00 alem0lars joined #salt
12:01 linovia joined #salt
12:01 hlub hmm, minion reboot solved my issue.
12:02 Reverend LostSoul: the command that you're trying to get salt to run.
12:02 Reverend is it a git command or what?
12:04 LostSoul Reverend: I have all states in git
12:04 LostSoul git is my backend
12:04 LostSoul I switched repository (host to be more specific)
12:05 Reverend LostSoul: ah okay. and you're trying to call git fetch or something with a new sshkey?
12:05 LostSoul And now I'm getting (in debug mode) that salt is waiting for a password for git@host
12:05 LostSoul I don't know why as key was added to stash
12:11 Reverend yeah,
12:11 Reverend if you fail SSH key auth, then it will move onto password auth
12:11 Reverend the fact that you're getting a password prompt is not a bad thing, just means that the SSH auth failed using a key :)
12:12 tom29739 joined #salt
12:13 LostSoul Why so?
12:13 LostSoul I've tried to run git from console - it went smoothly
12:13 LostSoul When I added git repo as saltstack state/pillar repo it asks for password
12:17 Infergo joined #salt
12:22 Infergo Hey all, I was wondering if salt-cloud had the option to deploy multiple servers at once, sort of like Terraform does with the count variable. I looked through the documentation but couldn't find a concrete answer.
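One option for Infergo's question (hedged — profile and VM names below are invented) is a salt-cloud map file, which can create several named instances from one profile in a single run; the `-P` flag requests parallel provisioning:

```yaml
# /etc/salt/cloud.maps.d/web.map (hypothetical)
centos7-web:
  - web1
  - web2
  - web3
```

Deployed with something like `salt-cloud -m /etc/salt/cloud.maps.d/web.map -P`.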
12:28 raspado joined #salt
12:34 Reverend LostSoul: the error you got above said that the key verification failed right?
12:35 Reverend `'Host key verification failed.'`
12:35 tom29739 joined #salt
12:37 Deliant joined #salt
12:44 bezaban joined #salt
12:46 bezaban_ joined #salt
12:47 evle1 joined #salt
12:47 amcorreia joined #salt
12:50 daxroc Should the SSDs grain contain both block devices and mapped devices dm-0...
12:50 Eugene joined #salt
12:51 g3cko joined #salt
12:51 orionx joined #salt
12:54 LostSoul Reverend: Yes
12:55 LostSoul When I started it in debug mode I got info that it waits for password to get access to repo (git@host)
12:55 LostSoul I don't know why Reverend
12:56 entil I just moved states to gitfs, but now I get the complaint "No matching sls found for 'states.xxx' in env 'base'", how can I list the states to see what's confused here?
12:56 LostSoul As when I do git clone XXX it works like a charm
12:56 brousch__ joined #salt
12:56 LostSoul When salt tries to do the same it got errors
12:58 entil FWIW I had the exact same fs layout in my file-based states as I have in the git, I converted the directory to a git repo
13:00 entil salt '*' state.show_top returns empty for the minions
13:00 onlyanegg joined #salt
13:00 linovia joined #salt
13:00 Aikar joined #salt
13:01 entil "In a master/minion setup, files from a gitfs remote are cached once by the master, so minions do not need direct access to the git repository." -- https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
13:03 johnkeates joined #salt
13:03 Deliant joined #salt
13:04 entil so I can find nothing wrong with my setup, yet it fails :(
13:04 numkem joined #salt
13:08 ruxu joined #salt
13:12 daxroc Can you filter out list items containing a regex or matching a regex ?
13:12 entil AssertionError: len([]) != len(['Host key verification failed.', '', '', 'and the repository exists.'])
13:12 entil took a while but that popped up in the logs
13:14 daxroc Using jinja - I've a list of SSDs returned from grain. Need to filter this for a particular pattern of disk names what would the best approach be?
13:14 entil ok, got it, I had to make the host known to root
13:14 entil thanks for the rubber ducking
13:15 arapaho joined #salt
13:23 ninjada joined #salt
13:24 ninjada joined #salt
13:28 muxdaemon joined #salt
13:28 _JZ_ joined #salt
13:28 muxdaemon joined #salt
13:29 gableroux joined #salt
13:38 Brew joined #salt
13:39 ssplatt joined #salt
13:39 raspado joined #salt
13:48 _two_ joined #salt
13:49 saintaquinas[m] joined #salt
13:50 _two_ joined #salt
13:50 Straphka joined #salt
13:53 jas02 joined #salt
13:59 jav joined #salt
14:01 gableroux joined #salt
14:01 amcorreia joined #salt
14:04 hlub daxroc: I do not know any simple solutions for that.
14:05 daxroc hlub: really hate JINJA root of all my complaints with salt :D
14:05 Sketch it's ugly but it works
14:06 hlub jinja sould provide a regex-based filter
14:08 daxroc hlub: return [ x for x in i if re.search(pattern, x) ]
14:08 hlub although complex regexes are not very readable.
14:13 bd__ for different values of "complex"
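daxroc's list comprehension works as-is inside a custom execution module or a registered Jinja filter (later Salt releases also ship `regex_match`/`regex_search` Jinja filters for this). A self-contained sketch, with invented names and sample data:

```python
import re

def regex_select(items, pattern):
    """Keep only the items matching ``pattern`` (re.search, not full match)."""
    return [x for x in items if re.search(pattern, x)]

# e.g. an SSDs grain mixing plain block devices and device-mapper entries
ssds = ["sda", "sdb", "dm-0", "dm-1", "nvme0n1"]
print(regex_select(ssds, r"^sd"))   # ['sda', 'sdb']
print(regex_select(ssds, r"^dm-"))  # ['dm-0', 'dm-1']
```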
14:14 hyman joined #salt
14:26 edrocks joined #salt
14:28 mpanetta joined #salt
14:31 Seb-Solon joined #salt
14:32 g3cko joined #salt
14:34 racooper joined #salt
14:34 muxdaemon joined #salt
14:35 muxdaemon joined #salt
14:39 aldevar joined #salt
14:43 gmoro joined #salt
14:44 manji is there any way to match on top file whether a grain exists
14:44 manji ?
14:45 manji instead of 'G@roles:webserver'
14:47 nickabbey joined #salt
14:50 ntropy manji: match regardless of the value of the grain?
14:50 manji yes
14:51 manji if the grain exists in general
14:52 ntropy yeah i guess you could put all the grains in the variable by callinggrains.items
14:53 ntropy and then doing if "your_grain" in grains then we have a match
14:53 Kelsar just 'G@roles' does not work?
14:53 Kelsar manji: ^
14:53 dyasny joined #salt
14:54 manji Kelsar, yeah tried it, it said that it had not enough for a match :)
14:54 orionx joined #salt
14:55 sjohnsen left #salt
14:56 ivanjaros joined #salt
14:59 ronnix joined #salt
15:00 JohnnyRun joined #salt
15:02 nickabbey joined #salt
15:06 dariusjs joined #salt
15:06 armonge joined #salt
15:07 ntropy is it possible to specify both onlyif and unless requisites?
15:07 ntropy or my states are wrong
15:07 armonge joined #salt
15:08 armonge joined #salt
15:09 Remy_ joined #salt
15:11 yuhl______ joined #salt
15:11 nickadam joined #salt
15:13 Remy_ Hi,
15:14 Remy_ I want to wait for a minion reboot
15:14 Remy_ I wrote this state:
15:14 Remy_ step-2:
  cmd.run:
    - name: reboot
step-3:
  salt.wait_for_event:
    - name: salt/minion/*/start
    - id_list:
      - kalamon
15:14 Remy_ I check the event bus and I see:
15:14 Remy_ Tag: salt/minion/kalamon/start
15:15 Remy_ But Salt returns:
15:15 Remy_ Minion did not return. [No response]
15:16 Remy_ If I reduce the timeout, I get this error:
15:16 Remy_ Comment: Timeout value reached
15:18 dendazen joined #salt
15:23 hoonetorg having a problem with a highstate with yum commands on rhel 7 to a local satellite server
15:26 mikecmpbll joined #salt
15:27 hoonetorg getting errors like ...content/dist/rhel/server/7/7Server/x86_64/extras/os/repodata/repomd.xml: [Errno 14] curl#22 - "The requested URL returned error: 503"
15:28 cmarzullo does the command work outside of salt?
15:28 cmarzullo I've seen issues where satellite doesn't finish building the metadata.
15:28 gtmanfred ^^
15:28 gtmanfred i see it from time to time when epel is syncing
15:31 mpanetta_ joined #salt
15:32 mikecmpb_ joined #salt
15:33 mpanetta joined #salt
15:33 nickabbey joined #salt
15:36 hoonetorg gtmanfred: yes
15:36 hoonetorg problem was env var which set proxy variables
15:36 gtmanfred that would do it
15:37 hoonetorg http_proxy:
15:37 hoonetorg environ.setenv:
15:37 hoonetorg - value:
15:37 hoonetorg ...blabla
15:38 hoonetorg but version is now salt 2016.11.2 where this is obsolete
15:39 hoonetorg proxy env var was set in a completely different state but seems to be valid over the whole highstate now in 2016.11.2 (that was not the case in 2015.8.8)
15:39 hoonetorg :scream:
15:39 Sketch_ joined #salt
15:39 hoonetorg :wink:
15:41 raspado joined #salt
15:46 patrek_mobilus joined #salt
15:48 patrek_mobilus Can anyone point me to some example or documentation to create a postgresql table from a salt state?
15:49 cscf patrek_mobilus, https://docs.saltstack.com/en/latest/ref/states/all/salt.states.postgres_database.html#module-salt.states.postgres_database
15:50 cscf patrek_mobilus, bookmark this for future reference: https://docs.saltstack.com/en/latest/ref/states/all/
15:51 patrek_mobilus @cscf Thanks. But that documentation is for the creation of a database, I'm taking about the creation of a table within a database
15:52 cscf patrek_mobilus, oh.  Well, I don't see one.  Feel free to write it :)
15:52 cscf of course, a hacky cmd.run would also work.
15:53 patrek_mobilus seems easier for now.
15:53 patrek_mobilus ls
15:53 cmarzullo command not found
15:54 patrek_mobilus sorry
15:54 cmarzullo :)
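The "hacky cmd.run" cscf suggests could look like the following sketch (database, table and column names invented). The `unless` clause keeps it idempotent, since `psql -c '\d sometable'` exits non-zero when the table doesn't exist:

```yaml
create_mytable:
  cmd.run:
    - name: psql -d mydb -c "CREATE TABLE mytable (id serial PRIMARY KEY, name text)"
    - runas: postgres
    - unless: psql -d mydb -c '\d mytable'
```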
15:55 ninjada joined #salt
15:56 TRManderson joined #salt
15:56 edrocks joined #salt
16:01 Yoda joined #salt
16:03 raspado joined #salt
16:04 Yoda afternoon all - facing a weird issue. just started using salt properly so excuse me if i use wrong terms. whenever i am trying to run a state.apply, I am being greeted with the error: https://gist.github.com/madyoda/19fe03c05b9a2f5d454c62bbe3d41838 (theres some info in there about directory structure and files too) - i have tried disabling all my gitfs remotes, tried commenting out different pillars, etc. but
16:04 Yoda still no avail. could anyone point me in the right direction?;) thanks!
16:07 babilen Yoda: Could you paste your pillar top.sls ?
16:08 Yoda babilen: sure - https://gist.github.com/madyoda/1f5fdbca85b6e61815c31b1181220008
16:09 cscf Yoda, do any of those pillar files have "include:" statements?
16:10 Yoda a quick grep of /srv/pillar/* - no, none of them have include statements.
16:11 cscf Yoda, something in a pillar file is trying to access an SLS 'formulas.setup', which implies it's trying to read a file /srv/pillar/formulas/setup.sls
16:11 Yoda hmph interesting, let's double check everything.
16:13 Yoda cscf, I even tried moving top.sls file out of salt / renaming pillar directory / etc. + i just double checked and nothing is including except my init.sls files in my /srv/salt directories
16:14 cscf Yoda, grep -r for 'formulas.setup'
16:14 nickadam joined #salt
16:15 Yoda "root@salt-master:/srv/salt# grep -r 'formulas.setup' /srv" - nothing.
16:15 Yoda the only result for 'setup' is in my top.sls in salt/ directory which references my setup/ folder in /srv/salt
16:16 cmarzullo need a big r -R
16:16 Yoda eh, sorry
16:16 cmarzullo oh never mine
16:16 cmarzullo mind. I'm wrong
16:16 Yoda either way - still no result :)
16:17 babilen master log?
16:17 Yoda just for good measure, renaming the setup/ directory and its associated line inside top.sls still gives me error
16:18 cscf Yoda, oh yes, try reading /var/log/salt/master
16:19 Yoda nothing's in /var/log/salt/master except for an error i made myself a hour or so ago. i can paste if you want, though.
16:19 Tanta joined #salt
16:19 tapoxi joined #salt
16:24 tiwula joined #salt
16:30 nickabbey joined #salt
16:35 alem0lars joined #salt
16:38 cyborg-one joined #salt
16:42 mikecmpbll joined #salt
16:43 babilen Yoda: Anything that includes "setup" or matches thereof? Any git repositories you include?
16:43 Yoda babilen, I grepped my /srv folder for setup and only result was top.sls. Let me go check git repos manually
16:46 Yoda Nothing matching setup in the git repos I include.
16:48 jas02 joined #salt
16:48 dyasny joined #salt
16:49 nickabbey joined #salt
16:50 babilen What was the match in top.sls ?
16:51 Yoda https://gist.github.com/madyoda/99008a3f33852590fbdba369c808d98b this here - which runs the init.sls in /srv/salt/setup/ for my box setup scripts
16:51 jerrykan[m] joined #salt
16:51 Yoda i tried renaming it to see if it was conflicting (e.g. reserved name), still failed.
16:52 Yoda i also tried, just to see, making /srv/pillar/formulas/setup.sls and put something random in there "foo: bar" - still returned the same error.
16:52 rewbycraft joined #salt
16:56 orionx joined #salt
16:58 freelock joined #salt
16:59 nickabbey joined #salt
17:00 jas02 joined #salt
17:03 cmarzullo do you have any other pillar_roots defined in your configuration?
17:03 Yoda my whole master config - https://gist.github.com/madyoda/2259b806dcbe287e44223d7647f86de5
17:03 Yoda so, no
17:03 jcristau joined #salt
17:05 aldevar left #salt
17:05 Roelt_ joined #salt
17:05 candyman88 joined #salt
17:06 cmarzullo In your pillar top maybe start removing options until it works. to narrow down which item is causing the issue.
17:07 Yoda already tried, heck i even commented all of them out. still didnt work
17:08 jas02 joined #salt
17:10 cmarzullo very very strange.
17:11 Yoda let me try nuking pillar roots
17:11 Yoda # is correct for comments, right ?
17:11 cmarzullo yeah #.
17:11 cmarzullo Was thinking you could do a pillar.show_tops but that only works for states.
17:12 raspado joined #salt
17:12 Yoda ok what - nuked pillar roots, restarted salt-master, even ran fileserver.update for good will - same error.
17:13 cmarzullo is it DNS? it's always DNS.
17:13 * cmarzullo is joking
17:13 Yoda :p
17:15 scsinutz joined #salt
17:19 onlyanegg joined #salt
17:19 |Fiber^| joined #salt
17:21 Lionel_Debroux_ joined #salt
17:23 jas02 joined #salt
17:25 ekristen man running saltstack on macOS is a pain, anyone know how to override the openssl location or the PATH variable for the salt minion?
17:28 scsinutz joined #salt
17:31 babilen Yoda: Anything in your state .. I mean it must come from somewhere
17:31 babilen Or pillar_roots etc
17:34 aerobotos joined #salt
17:34 jas02 joined #salt
17:35 jas02 joined #salt
17:36 mpanetta joined #salt
17:42 Yoda babilen, grepped it all for setup, nope, nothing.
17:44 edrocks joined #salt
17:46 jas02 joined #salt
17:47 leonkatz joined #salt
17:47 whytewolf Yoda on the minion try your state.apply with salt-call -l all state.apply upgrade
17:48 whytewolf let us see if we can get salt to tell us where the issue is
17:51 nickabbey joined #salt
17:52 scoates joined #salt
17:53 onlyanegg joined #salt
17:53 rewbycraft Hi there. I'm a friend of Yoda's. He's asked me to take a poke at his servers to see if I could spot something he missed. As a first thing I noticed that it only gave the error when the minion was replying (when the minion was unreachable the error was a normal no response from minion). When I then relaunched the minion with debug logging everything just worked. Relaunching it back into normal operation
17:53 rewbycraft everything still worked.
17:54 rewbycraft I have a feeling it was some sort of cache somewhere that wasn't entirely 100% right.
17:54 freelock joined #salt
17:54 rewbycraft The problem seems to be resolved now though.
17:54 rewbycraft (He's having some issues with his IRC client at the moment and asked me to inform you guys)
17:54 jas02 joined #salt
17:55 scsinutz joined #salt
17:56 ninjada joined #salt
18:02 jerrykan[m] joined #salt
18:02 Salander27 joined #salt
18:02 saintaquinas[m] joined #salt
18:02 ThomasJ|m joined #salt
18:08 Trauma joined #salt
18:17 armguy joined #salt
18:23 Brew joined #salt
18:35 s_kunk joined #salt
18:37 SaucyElf joined #salt
18:41 cryptolukas joined #salt
18:42 cscf rewbycraft, that's interesting.  Thanks for letting us know
18:43 rewbycraft Yeah. I thought it was weird too. But it just started working when I tried to put it in debug-logging mode.
18:44 rewbycraft So I can only conclude it to be some cache or something that was causing it
18:44 scsinutz joined #salt
18:46 toastedpenguin joined #salt
18:47 xet7 joined #salt
18:48 cryptolukas How can I use different git branches with own top files, without configure the minions .. I want to define the git envs only on the master. I want to prevent top-file merging because I have a master, qa and dev git branch. My fileserver config https://gist.github.com/LukasDoe/4ad573c7543fe6eba289e225bc855fad
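A hedged pointer for cryptolukas: the master config has a `top_file_merging_strategy` option for exactly this case — set to `same`, each environment is resolved only against the top file from its own branch instead of merging all of them:

```yaml
# master config (sketch)
top_file_merging_strategy: same
default_top: base
```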
18:49 SaucyElf_ joined #salt
18:49 austin_ joined #salt
18:51 ChubYann joined #salt
18:58 izibi joined #salt
18:58 samodid joined #salt
19:00 SaucyElf joined #salt
19:01 edrocks joined #salt
19:05 UForgotten joined #salt
19:06 jas02 joined #salt
19:12 leonkatz can you insert a PTR record with ddns.present state?
19:14 ninjada joined #salt
19:18 scsinutz joined #salt
19:22 seanz joined #salt
19:28 Edgan joined #salt
19:29 tkharju joined #salt
19:46 Renich joined #salt
19:46 spuder joined #salt
19:48 whytewolf leonkatz: yes you should be able to. as long as the right info is put in to reflect and the reverse zone is setup to allow updates
19:49 leonkatz i'm having an issue, is there an example i could follow
19:53 aldevar joined #salt
19:53 nickabbey joined #salt
19:54 whytewolf leonkatz: I would think it would be like this. https://gist.github.com/whytewolf/ffbfa76628d46f861784273072be90c7 [sorry don't actually use it myself yet. but it is in my roadmap for my upcoming project]
19:55 leonkatz @whytewolk https://gist.github.com/leonkatz/cd51c6842a6f12b4a359bd085d87608f
19:55 leonkatz my code
19:57 whytewolf that ... doesn't have rdtype which needs to be set to PTR for a PTR record... also data and name in a PTR record would be reversed ...
19:57 whytewolf with out rdtype in your state the only record type you can support is an A record
19:58 leonkatz oh sorry i pasted the a record instead of the ptr
19:59 leonkatz https://gist.github.com/leonkatz/e24c7503e707105a1ee454833f9b4cd3
20:01 whytewolf humm, that looks good as long as the ip_list is actually getting populated correctly. and the zone is setup to accept updates. [many forget to set reverse zones for update]
20:02 leonkatz yup they are but i think i found the issue - data: {{ minion_id }}. was missing the period at the end
20:02 whytewolf oh, doh. yeah that would cause issues
20:03 leonkatz thanks for your help, i would not have figured it out for a long time
20:03 leonkatz without your help
20:03 whytewolf i didn't do anything. literally. lol. I missed the issue myself.
20:06 dyasny joined #salt
20:07 jas02 joined #salt
20:10 leonkatz I think there might be an issue
20:10 leonkatz its not using the - name: field to populate the reverse lookup, its using my method name, even though i have a name
20:11 KyleG joined #salt
20:11 whytewolf humm. that isn't right. i would say submit a bug report. with as much detail. it should be respecting - name:
20:12 whytewolf although you might need to wrap the name in quotes
20:12 leonkatz let me try that
20:12 whytewolf otherwise it is just a number
20:13 leonkatz yup that was it
20:13 leonkatz it needed the quotes
20:13 leonkatz thanks
20:13 whytewolf np
20:13 DammitJim joined #salt
20:15 ninjada joined #salt
20:20 jas02 joined #salt
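Putting the whole exchange that follows together, a PTR sketch for `ddns.present` (zone, address and hostname are invented): the record name is quoted so YAML doesn't mangle it, `rdtype` is set explicitly since the default is an A record, and the FQDN in `data` carries the trailing dot leonkatz was missing:

```yaml
"10.0.168.192.in-addr.arpa":
  ddns.present:
    - zone: 0.168.192.in-addr.arpa
    - ttl: 300
    - data: web1.example.com.
    - rdtype: PTR
```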
20:22 scsinutz joined #salt
20:23 fracklen joined #salt
20:23 cyborg-one joined #salt
20:26 mikecmpbll joined #salt
20:26 jas02 joined #salt
20:27 leonkatz whytewolf: another guy just told me that the ddns.add_host module does both at the same time. are the modules idempotent, and if so why isn't there an add_host state?
20:28 nickabbey joined #salt
20:28 leonkatz joined #salt
20:30 whytewolf leonkatz: humm most modules are not idempotent. but it does appear that ddns.add_host, at least according to the docs, does both at the same time. the ddns state should be using the ddns module.
20:32 whytewolf gtmanfred: um. someone needs to smack the webserver. saltstack.com is coming up with a 500 error.
20:32 honestly Didn't you hear, S3 is down
20:32 whytewolf i did not
20:32 honestly It's the awspocalypse
20:33 leonkatz its using update instead of add_host
20:33 gtmanfred whytewolf: it is behind s3
20:33 * whytewolf didn't know s3 was down
20:33 honestly Just as I thought
20:33 jhauser joined #salt
20:33 gtmanfred whytewolf: https://twitter.com/awscloud/status/836656664635846656
20:34 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.5, 2016.11.3 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ (please don't multiline paste into channel) <+> See also: #salt-devel, #salt-offtopic <+> Ask with patience as we are volunteers and may not have immediate answers
20:35 honestly That's hilarious
20:35 whytewolf welp. didn't know cause the only site i host on s3 is still up because it isn't hosted through s3 directly.
20:35 gtmanfred yeah, the status page is just awful
20:35 gtmanfred https://status.aws.amazon.com/
20:35 gtmanfred Apparently us-east-1 is just not having a good day
20:35 gtmanfred so like 5% of the internet is down
20:36 whytewolf that explains a lot
20:36 gtmanfred isitdownrightnow.com is also down
20:36 gtmanfred http://isup.me/isitdownrightnow.com
20:36 gtmanfred curiously, isup.me is also hosted on aws
20:39 whytewolf shit was so broken they couldn't say how broken it was
20:39 whytewolf "We have now repaired the ability to update the service health dashboard. "
20:39 seanz joined #salt
20:42 scsinutz joined #salt
20:43 armyriad joined #salt
20:48 scsinutz joined #salt
20:50 gtmanfred shits broke yo
20:54 smcquay joined #salt
20:55 miker_ joined #salt
20:56 miker_ hello
20:56 miker_ FYI, salt cheat sheet if anyone needs a reference
20:56 miker_ https://sites.google.com/site/mrxpalmeiras/saltstack/salt-cheat-sheet
20:57 gtmanfred https://github.com/harkx/saltstack-cheatsheet
20:57 rewbycraft Oh nice
20:57 rewbycraft *bookmarks
20:57 gtmanfred miker_: that is awesome too
20:57 rewbycraft Yeah. Looks great
20:57 rewbycraft Lots of stuff there too
20:58 leonkatz joined #salt
21:00 miker_ im adding more as i use it
21:00 harkx gtmanfred, thanks for mentioning me :)
21:01 nickabbey joined #salt
21:01 gtmanfred :D
21:02 gtmanfred I can't remember why i ran across yours...
21:02 harkx hehe, no problem.. not easy structuring a cheat sheet.. needs a lot of info but .. not too much :p
21:03 harkx that other one is nice also , thanks miker_
21:07 gtmanfred yar, i think that is how i found it, someone was asking if I had a small cheatsheet, because the salt docs are so daunting
21:09 bantone wow, nice guys
21:09 scsinutz joined #salt
21:11 whytewolf kewl s3 is starting to work again
21:11 fracklen_ joined #salt
21:11 gtmanfred yay
21:12 bantone yeah I heard about their issues
21:13 whytewolf i didn't and i am starting to host a site on s3 :P ['course i didn't know it went down cause i wasn't pushing anything new in the downtime and the frontend is cached in cloudfront which didn't go down]
21:13 hemebond1 I thought the idea of AWS was to run your stuff in multiple zones. Are these big websites not doing that?
21:14 gtmanfred it is, a lot aren't
21:14 whytewolf multi zone hosting is expensive. when i worked at a paper that was hosting in aws i tried getting the budget for multizone and was shot down.
21:15 gtmanfred yeah, that is i think the biggest problem
21:18 whytewolf [but we somehow had the budget to allow poorly designed software with broken caches. that kept needing more and more power to run.]
21:20 whytewolf [I wasn't the designer, i was the guy that did deploys] best vindication i ever got was after i left, they tried replacing me with a PaaS company for the software and they flat out refused to work with the software in that condition.
21:21 hemebond1 Haha, nice.
21:24 manji lol
21:24 whytewolf a company so stupid that I left 5 years ago and still have the master password to their aws account
21:26 whytewolf actually my own password still worked up until about 2 years ago
21:26 ronnix joined #salt
21:32 gtmanfred miker_: you should be able to do a salt \* service.restart salt-minion on the windows service too
21:35 whytewolf my only comment is the switching between [state.sls,state.highstate] and state.apply. should be consistent with those. one or the other without blending them
21:37 miker_ question, if im creating a grain, ie, salt node1 grains.set 'apps:myapp:port' 3100
21:37 miker_ where does that grain info get stored?
21:37 miker_ is that a permanent grain or in memory only
21:38 whytewolf iirc it puts it in /etc/salt/grains
21:38 gtmanfred ^^
21:38 miker_ cool
21:39 miker_ tried vim'ing that file and adding some more yaml data, but the master doesn't pick it up
21:39 miker_ guess thats not the recomended way
21:40 whytewolf editing the file requires a minion restart to read it
21:40 gtmanfred no it doesn't
21:40 whytewolf it doesn't?
21:40 gtmanfred it only requires a minion restart if you put the grains: option in /etc/salt/minion
21:40 gtmanfred /etc/salt/grains just requires a grains refresh
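[To make the distinction concrete, a sketch of the file under discussion. The grain names are just miker_'s example from earlier; the extra key and the refresh commands in the comments are illustrative, not from the log.]

```yaml
# /etc/salt/grains on the minion -- plain YAML, no top-level 'grains:' key.
# This is where `salt node1 grains.set 'apps:myapp:port' 3100` ends up.
apps:
  myapp:
    port: 3100
    env: staging   # hypothetical key added by hand-editing the file

# After a hand edit, a grains refresh (no minion restart) picks it up, e.g.:
#   salt node1 saltutil.refresh_grains    (newer releases)
#   salt node1 saltutil.refresh_modules
# Grains set under 'grains:' in /etc/salt/minion, by contrast, need a restart.
```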
21:40 whytewolf ahhh
21:41 miker_ nice, config reload w/o service restart
21:41 gtmanfred the problem with config reload without restarting the service is how many different processes are spinoffs from the initial process
21:41 gtmanfred it just doesn't work well
21:41 gtmanfred https://github.com/saltstack/salt/issues/570
21:41 saltstackbot [#570][OPEN] master/minion should accept a SIGHUP and reload config |
21:41 gtmanfred super old issue
21:42 miker_ just tried that with salt node1 saltutil.refresh_modules
21:42 whytewolf yeah. personally i try to keep my minion config file changes to a min. and use pillars for any setting that uses config.get
21:42 miker_ picking up the new value
21:42 miker_ love this product
21:44 relidy Can anyone recommend a general approach for this problem, or point me down a better path? I'd like to take the OS (CentOS) provided /etc/php.ini and add a few rules to it, but store the result in a separate file (e.g. /etc/php-restricted.ini). The main goal is that I'd like to be able to track upstream changes to the original configuration file and I do NOT want these rules applied by default (so no /etc/php.d/ rules; CLI can specify this config file
21:44 relidy manually when needed).
21:45 jhauser joined #salt
21:45 gtmanfred does php.ini not have a way to do includes? because what I would do would be to never touch php.ini, and instead just include it, and then append all extra options to the end of php-restricted.ini
21:46 hunmaat or include its staterun-time content with templating
21:46 hunmaat if that's what you want
21:46 gtmanfred hrm, doesn't look like you can do an include on the file
21:46 relidy PHP's configuration does not allow includes
21:46 gtmanfred damn
21:46 relidy Yeah
21:47 relidy Is it possible to read the contents of a non-managed file from the minion? Basically, hoover up the contents and append my stuff, writing it somewhere else?
21:47 gtmanfred sure, just specify /etc/php.ini as the source, instead of salt:// something
21:48 gtmanfred actually that might still try and pull it from the master
21:48 gtmanfred hrm
21:48 whytewolf don't think file.managed accepts file://
21:48 whytewolf or at least has problems with it
21:49 gtmanfred what you could do is write up a jinja template that reads in /etc/php.ini, puts it all there and then adds on your changes at the bottom
21:50 relidy I guess I never considered that jinja might have access to the file. Are you saying it can do that on its own, or would I need to read that in somewhere else and pass it in?
21:51 whytewolf well. state and file jinja is rendered on the minion. so you could build your jinja in a state file or in a file template. if you did it in a state file you could use ini_manage states to build out your /etc/php-restricted.ini
21:52 relidy Sounds like I have some more reading to do, then. Thanks for the thoughts.
21:53 miker_ something like this maybe
21:53 miker_ https://github.com/saltstack-formulas/php-formula/blob/master/php/ng/ini.jinja
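[A minimal sketch of the approach whytewolf describes above -- untested, with a hypothetical template path, and assuming `cp.get_file_str` reads a plain local path from the minion's filesystem (it accepts non-salt:// paths). The appended directives are examples only.]

```jinja
{# /srv/salt/php/php-restricted.ini.jinja -- hypothetical path.
   Referenced from a state such as:
     /etc/php-restricted.ini:
       file.managed:
         - source: salt://php/php-restricted.ini.jinja
         - template: jinja
   Templates are rendered on the minion, so this reads the minion's
   local /etc/php.ini, not a file on the master. #}
{{ salt['cp.get_file_str']('/etc/php.ini') }}

; --- restricted additions appended below (example values) ---
memory_limit = 64M
disable_functions = exec,shell_exec
```

[The CLI can then be pointed at the generated file with `php -c /etc/php-restricted.ini`, leaving the OS-provided /etc/php.ini untouched for tracking upstream changes.]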
21:54 tkojames joined #salt
21:54 relidy Thanks, miker_.
21:56 tkojames I am trying to create a VM from a template with salt cloud and vsphere. When I run salt-cloud -p profilename nameofvm, I get the following error: "exceptions must be old-style classes or derived from BaseException, not NoneType". happens when running in debug as well. Any ideas?
21:58 gtmanfred can you put the whole exception output into a gist?
21:59 ronnix joined #salt
22:00 aldevar left #salt
22:01 cachedout joined #salt
22:03 dps joined #salt
22:06 tkojames Looked into it more. It seems that the commands are running correctly on vsphere, i can spin up VMs, destroy them etc, but get the same exception each time. So it seems to be an issue with getting data back that it was successful.
22:07 whytewolf maybe a version issue with vsphere?
22:07 whytewolf vmware has a tendency to break api
22:09 tkojames Yea I am going to look into that next. Might be related to this issue, maybe? https://github.com/saltstack/salt/issues/37845
22:09 saltstackbot [#37845][OPEN] VM creation / deletion on vSphere 5.5 is aborted with "Error: None" | Description of Issue/Question...
22:11 candyman88 joined #salt
22:11 twork_ ok so... i back all my user accounts with "users:" pillars, using the "users" formula.  i'm writing a new state that will make a record of all my users, of the form "[login],[GECOS],[password]".  flat file, nothing fancy there.  but:
22:12 Claw_ joined #salt
22:15 twork_ i want my new state to be backed by a different set of user data than what gets built on the minion where it runs.
22:15 twork_ that makes no sense, i realize.
22:16 twork_ n/m, i'll go write.
22:16 Sketch indeed ;)
22:16 whytewolf welp, that was easy :P
22:16 twork_ you're welcome.
22:16 * Sketch hopes that the flat text file containing user passwords is being written somewhere secure
22:16 ninjada joined #salt
22:17 twork_ (i do a fair number of those. [line 1] ; [line 2... eeegh, sorry]
22:18 scsinutz joined #salt
22:18 gtmanfred twork_: it is called rubber ducky debugging
22:18 gtmanfred https://en.wikipedia.org/wiki/Rubber_duck_debugging
22:18 saltstackbot [WIKIPEDIA] Rubber duck debugging | "In software engineering, rubber duck debugging or rubber ducking is a method of debugging code. The name is a reference to a story in the book The Pragmatic Programmer in which a programmer would carry around a rubber duck and debug their code by forcing themselves to explain it, line-by-line, to the..."
22:19 twork_ gtmanfred: yeah.  the rubber ducky here has come through for me more than once.  not today though.
22:21 * whytewolf loves rubber duck debugging. if you can't explain something to a rubber duck then you most likely don't understand it yourself
22:22 gtmanfred I have a rubber ducky that has an eyepatch and a pirate hat
22:22 gtmanfred yarrrgh!
22:22 whytewolf .. i have one with a pirate hat and a scope
22:24 whytewolf s/scope/spy glass
22:25 gtmanfred nice
22:25 gtmanfred that makes more sense
22:27 Eugene I have two cats
22:27 whytewolf yeah but cats don't listen
22:27 whytewolf they just judge
22:28 Eugene They know the sound of the treat container
22:28 gtmanfred and beg
22:28 gtmanfred lol
22:28 gtmanfred mine was attacking my ankles this morning, so I bit him
22:37 CeBe Hi, I'd like to use gitfs as the source of salt master state files. In the docs at https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#refreshing-gitfs-upon-push it is mentioned that the master will update the git repo every 60 seconds. Can I configure it to always check for updates before a state run?
22:38 CeBe (I do not have git hooks to update)
22:39 whytewolf CeBe: I use this as a quick way of updating. and typically run it before i try anything i have changed https://github.com/whytewolf/salt-phase0-states/blob/master/orch/salt-core-update.sls
22:40 CeBe whytewolf: could I run these automatically when I run  salt 'minion' state.sls  ... ?
22:41 whytewolf not that i know of
22:41 whytewolf since some of them target the master
22:41 whytewolf you could build an orch file that calls that then your state.sls
22:41 CeBe whytewolf: so in that scenario you have, the master is under control of salt too, right?
22:42 whytewolf CeBe: yes.
22:42 CeBe and that state would be applied to the master before running another state on the minion?
22:42 whytewolf [although i call the runner]
22:42 whytewolf no i use state.orch runner
22:42 whytewolf salt-run state.orch <orch state>
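[A stripped-down orchestration file in the spirit of the one whytewolf linked might look like this -- a sketch; the file path, targets, and the final state name are made up:]

```yaml
# /srv/salt/orch/update.sls -- run with: salt-run state.orch orch.update
update_fileserver:
  salt.runner:
    - name: fileserver.update      # pull gitfs changes on the master

update_git_pillar:
  salt.runner:
    - name: git_pillar.update      # same for git-backed pillar

refresh_pillar:
  salt.function:
    - name: saltutil.refresh_pillar
    - tgt: '*'

apply_state:
  salt.state:
    - tgt: 'minion'
    - sls: mystate                 # the state CeBe would otherwise run directly
    - require:
      - salt: refresh_pillar
```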
22:43 ninjada joined #salt
22:43 ninjada joined #salt
22:45 CeBe whytewolf: checking the docs, I do not find orch, only orchestrate, and I do not get what it does...
22:46 CeBe what is different in orch from applying the state to the master?
22:46 whytewolf orch is orchestrate
22:47 whytewolf first, orchestration is using salt to orchestrate running commands on different minions. say, to orchestrate setting up a website that needs a database: you have salt set up the database, then on the web server it will bring the site online
22:48 whytewolf I'm using the runner feature of it to force the fileserver to update [and git_pillar as well]
22:48 rewbycraft Of you could just setup a git hook to trigger an event?
22:48 whytewolf then force all minions to refresh pillar caches and update mines
22:49 whytewolf rewbycraft: actually that is why i wrote this orchestration file. i plan to setup a githook to trigger this orchestration and update everything. fileserver, pillar, mines on everything
22:49 rewbycraft Ah
22:49 CeBe rewbycraft: git is hosted on bitbucket, not sure how to implement that hook, but I will also check that option.
22:50 XenophonF joined #salt
22:51 whytewolf CeBe: basically you setup salt-api to allow web hooks, setup your git post-commit to hit the web hook, which will trigger an event that you have a reactor setup listening for. which triggers, well, something like the orchestration file i already posted
22:51 CeBe okay, now I see. The file you linked is run on the master to update the master and then run the sync stuff on all minions. correct?
22:51 whytewolf yes CeBe
22:52 rewbycraft whytewolf's suggestion seems sane enough. If you can figure out the ip range bitbucket uses you may be able to do some firewalling so only bitbucket can hit your api
22:52 rewbycraft Otherwise, figure out some way to authenticate
22:52 rewbycraft I don't think it's a good idea to have such an api exposed without auth or at least ip filtering
22:52 * whytewolf doesn't use bitbucket ;)
22:52 rewbycraft I do, but I have self-hosted
22:53 rewbycraft So I have a salt-minion on the machine anyway
22:53 CeBe sure, that's why my original question was about pulling from bitbucket instead of letting it push.
22:53 rewbycraft You can expose salt-api
22:53 rewbycraft Just investigate the auth options
22:53 rewbycraft I think bitbucket supports making arbitrary POST/GET calls
22:54 CeBe yeah, just found this: https://developer.atlassian.com/bitbucket/server/docs/latest/reference/plugin-module-types/post-receive-hook-plugin-module.html
22:54 whytewolf CeBe: technically it is still a pull from bitbucket. you're just setting up a trigger to tell salt to pull
22:54 CeBe whytewolf: sure
22:55 rewbycraft Haven't messed with salt-api much. But I think this is worth reading: https://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_cherrypy.html
22:55 rewbycraft Also the docs on the event/reactor system are worth reading
22:56 CeBe great, thank you very much! :)
22:56 rewbycraft Particularly, the /hook docs are worth reading
22:56 rewbycraft You can either figure out the auth that salt-api supports, or set up something like nginx as a reverse proxy with basic http auth on it
22:56 rewbycraft Oh and this is useful too https://confluence.atlassian.com/bitbucket/manage-webhooks-735643732.html
22:56 rewbycraft That explains how to use web-hooks with the hosted bitbucket service
22:57 CeBe tom[]: you might want to read the above conversation ^
23:00 rewbycraft Basic summary of recommendations from whytewolf and me: Use salt-api to expose a hook for bitbucket to call which fires off an event. Configure bitbucket to call the hook. Setup an orchestration state (or whatever they're called -- I forget) to be triggered when the event is fired. Have the orchestration state instruct salt to pull the needed things.
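[In config terms, that chain might look roughly like this -- a sketch; the hook name, file paths, and orchestration name are all made up:]

```yaml
# Master config (e.g. /etc/salt/master.d/reactor.conf). POSTing to
# salt-api's /hook/bitbucket URL fires the event tag mapped below:
#
#   reactor:
#     - 'salt/netapi/hook/bitbucket':
#       - /srv/reactor/git_update.sls

# /srv/reactor/git_update.sls -- kick off the update orchestration
run_update_orch:
  runner.state.orchestrate:
    - mods: orch.update   # hypothetical orch file that updates fileserver/pillar
```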
23:01 CeBe rewbycraft: thank you!
23:01 rewbycraft Useful documentation being the netapi docs for how to setup hooks. The linked bitbucket docs for configuring bitbucket to call the hook. The event system + reactor docs for how events work and how to use them, and the orchestration docs.
23:01 rewbycraft Most of which is on docs.saltstack.com, presuming that s3 has stopped being a major derp
23:02 whytewolf yeah it mostly has. at least for already existing content
23:02 whytewolf wouldn't try updating anything right now though
23:02 rewbycraft Yeeahhh. I'll give them a day before I consider s3 stable again
23:03 rewbycraft Also, disclaimer: I'm not part of the saltstack team, just a user who doesn't mind helping.
23:03 whytewolf hehe i think gtmanfred is the only one who is here who is offical saltstack
23:03 whytewolf least right now
23:03 * rewbycraft doesn't really know
23:04 nickabbey joined #salt
23:04 rewbycraft I've simply spent too much time digging through the docs
23:04 rewbycraft (There's some really neat, but fairly hidden features)
23:04 whytewolf yeap. I dig through the docs a lot. and when i can't be sure of the answers there i'll dig into the code
23:05 sh123124213 joined #salt
23:05 rewbycraft Yeah
23:05 joshbenner joined #salt
23:05 joshbenner left #salt
23:05 rewbycraft I'm working on rebuilding my global network and managing my software-routers with saltstack. So I've been digging recently to figure out the best way to approach it
23:06 ninjada joined #salt
23:06 scsinutz1 joined #salt
23:07 whytewolf oh fun. my current home project is redesigning my entire saltstack setup to build out my openstack infrastructure. then build out everything internal to it. but i have been getting lazy about it.
23:07 rewbycraft Ah nice. That's next on my list
23:08 rewbycraft A lot of my infra is of the "organically grown" kind
23:08 rewbycraft Soooo it needs a good clean up
23:09 rewbycraft Current plan is to eliminate direct ssh access as much as possible
23:09 whytewolf yeah. that is what I am trying to avoid myself this time. I did way too much by hand last time i built this out.
23:09 rewbycraft Manage most of it through systems like saltstack so that I have a good record of wtf I did
23:09 rewbycraft I'm liking saltstack so far. I've used it a little bit in the past for auto-configuring vpses
23:09 rewbycraft But nothing "proper"
23:10 rewbycraft Which is what I'm doing now
23:10 rewbycraft And yeah, I deployed openstack by hand and didn't keep good docs. That setup isn't fun to upgrade
23:10 whytewolf my own goal is to have 0 users ever. in fact one item in the todo list is to setup the internal openstack stuff to autodelete and recreate if any user is logged into the system
23:10 rewbycraft Hah. Cool
23:10 rewbycraft I probably can't 100% eliminate ssh
23:10 rewbycraft Mostly for debugging stuff
23:11 rewbycraft The downside of dealing with what you could consider "ISP-level" networking is that you quite often have to go in to run traceroutes or check logs
23:11 whytewolf upgrading openstack has gotten better. i actually upgraded my home openstack from liberty to mitaka without any issue [except for a new database that wasn't in the docs]
23:11 rewbycraft Oh yeah. I fell for that too
23:12 rewbycraft I'd like my system to be roll-back-able if you know what I mean
23:12 rewbycraft The only bit I'm not having saltstack manage this time is the LVM2 PV creation.
23:12 whytewolf lol, yeah i'm sure that is on the roadmap
23:12 rewbycraft That way, even if I re-install the storage node, it's keeping the actual drives
23:13 rewbycraft So data isn't lost
23:13 rewbycraft (I particularly care about the volumes with databases on them)
23:13 whytewolf yeah dataloss is a big thing.
23:13 rewbycraft I have backups, but I'd rather not
23:13 tom[] CeBe: yes
23:14 whytewolf also as for logs i tend to work with central logging so never really tend to log into a box to read its logs
23:14 rewbycraft But I'd love to be able to say "Welp. This update screwed this system. Re-install! <5 minutes later> Done!"
23:14 rewbycraft Central logging is on my todo list
23:14 rewbycraft Got a recommendation?
23:15 whytewolf I'm biased. I use ELK [elasticsearch]
23:15 rewbycraft Ah
23:15 rewbycraft I've been looking at graylog2.
23:15 rewbycraft But I may go for something a bit simpler and less resource-intensive
23:15 whytewolf i used to use splunk but i just couldn't afford it for my home setup anymore
23:15 rewbycraft In the end, I just want to be able to effectively see /var/log for every vps
23:16 rewbycraft So even if it's a simple syslog server that just dumps into /srv/logs/<hostname>/<filename>, that's fine for me
23:17 whytewolf well syslog is always a great place to start.
23:17 mosen joined #salt
23:17 rewbycraft Which is why I was considering graylog2. You can just point rsyslog at it and go
23:17 rewbycraft And it has nice parsing features.
23:18 rewbycraft I like the ability to send an email if it encounters certain types of log entries
23:19 whytewolf the only reason i stick with ELK is not everything logs through syslog. some applications are bad and log directly to a file. or try logging to an api. and i can setup logstash to get those logs also
23:20 rewbycraft I think you can setup rsyslog for basically any file
23:20 mikecmpbll joined #salt
23:21 rewbycraft We should probably stop dragging this channel off-topic
23:21 whytewolf lol yeah
23:21 rewbycraft (sorry everyone)
23:46 RonWilliams joined #salt
23:48 RonWilliams Getting a 404 when pulling the SaltStack key install on RHEL7. Are the SaltStack repos offline due to the AWS outage?
23:52 CeBe RonWilliams: works for me, which file or command are you trying?
23:52 CeBe I am not on RHEL7 though
23:53 Eugene I would argue that orchestration-related topics(like log management!) are very much a Salt-related topic
23:56 RonWilliams @CeBe, let me create a gist...
23:56 CeBe Eugene: agree
