
IRC log for #salt, 2018-02-22


All times shown according to UTC.

Time Nick Message
00:03 mikecmpbll joined #salt
00:21 tiwula joined #salt
00:52 synical joined #salt
00:52 synical joined #salt
01:03 exarkun joined #salt
01:50 cyborg-one joined #salt
01:51 cyborg-one left #salt
01:51 tiwula joined #salt
02:04 zerocoolback joined #salt
02:57 ilbot3 joined #salt
02:57 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.3 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
03:02 Twiglet joined #salt
03:04 Trauma joined #salt
03:04 al joined #salt
04:25 exarkun joined #salt
04:31 rjc joined #salt
04:32 rjc hi all
04:33 rjc I've read about this https://docs.saltstack.com/en/latest/topics/blackout/
04:33 rjc but I'm none the wiser how to actually use it
04:34 rjc as in, apply it to manage.down hosts only
04:34 rjc does anyone have any experience with it?
04:35 rjc or perhaps there's a better way to ignore hosts which are not connected to the master?
04:35 rjc automatically, that is - the lists of up/down hosts changes so I'd prefer not to hard-code it
04:46 hemebond What would be the point in applying it to minions that aren't even available?
04:46 justan0theruser joined #salt
04:48 hemebond rjc: What is the actual issue? If a minion is not connected it won't run the job.
04:48 justanotheruser joined #salt
04:51 rjc slight delay (with some minions down, that can be considerable) - also, I'd prefer not to see those, in red, in the summary after running salt '*' state.apply
04:52 hemebond https://docs.saltstack.com/en/latest/ref/cli/salt.html#cmdoption-salt--hide-timeout
04:53 rjc that solves it :)
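For reference, a minimal invocation of the option hemebond linked (illustrative only; behaviour as described in the salt CLI docs):

    salt '*' state.apply --hide-timeout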
04:54 rjc ok, slightly related question - what about skipping manage.down hosts altogether, i.e. not bothering to contact them, so that the master doesn't try doing so unnecessarily?
04:55 hemebond The master doesn't do anything anyway.
04:55 hemebond The master puts a job on the queue for all minions to check.
04:56 hemebond Minion checks itself against the target and runs the job if it should be included.
04:56 hemebond At least, that's how I understand it.
04:57 hemebond It shouldn't be too difficult to get the list from manage.down and use it as a list in the target using bash.
04:58 rjc ok - I'll look into the finer details of master/minion communication some other time :)
04:58 rjc also, the cluestick with the blackout above would be greatly appreciated
04:58 hemebond What's not clear about it?
04:59 hemebond You put it in the pillar data for the minions you want blacked out.
04:59 hemebond minion_blackout: True
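A minimal sketch of what hemebond is describing, assuming a standard pillar layout (the file names and target pattern are placeholders):

    # /srv/pillar/blackout.sls
    minion_blackout: True

    # /srv/pillar/top.sls
    base:
      'web-under-maintenance*':    # hypothetical target
        - blackout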
05:00 rjc ok, how do I do it automatically for manage.down?
05:00 hemebond If a minion is down, it can't get the pillar :-)
05:00 hemebond blackout is to prevent you running commands against the minion.
05:00 hemebond It doesn't stop you trying to target them.
05:01 rjc right
05:01 hemebond But if the minion is dead, it can't get the pillar data.
05:02 rjc I get very confused about all of this
05:02 rjc :)
05:02 rjc thanks for the explanation
05:02 hemebond 👍
05:02 hemebond Take your time. It'll click.
05:02 hemebond Try not to be too "clever" with solutions though.
05:03 hemebond And if you have an issue or annoyance, it's usually best to ask for solutions to that, rather than ask how to shoe-horn in what you think is the solution.
05:04 rjc well, I'm simply trying to do something like - sometimes minions are down, and that's fine - don't bother stopping or even mentioning that you can't contact them kind of thing
05:05 hemebond Right. Commands are run against minions in parallel; it's never really stopped.
05:05 rjc sure, sure
05:05 hemebond With --hide-timeout and --timeout you should be able to reduce any delays.
05:05 rjc it's the displayed result I have in mind
05:05 hemebond Though for some reason --timeout=1 isn't reducing my timeout to 1 second; it still takes 10s.
05:05 hemebond What do you mean?
05:06 rjc essentially, I don't want the master to contact/schedule jobs for minions which are down
05:06 rjc and display "Minion did not return. [Not connected]" for any of them
05:07 hemebond Well it won't run jobs on any minions that are down.
05:07 rjc I don't care about it - don't litter the screen with it, I know they are down, kind of thing
05:07 hemebond That message is from the command-line tool.
05:07 rjc ok, let's go back a step :)
05:08 hemebond I think with --hide-timeout you're sorted.
05:08 rjc I simply don't want to see "Minion did not return. [Not connected]" for minions which are down
05:08 rjc can't hard-code it
05:08 rjc various minions are up or down at any given time
05:09 hemebond Sure.
05:10 rjc something like "salt '$(salt-run manage.up)' state.apply" would be nice
05:10 rjc :)
05:10 om2 joined #salt
05:10 hemebond Except it's not really required, because your only real issue is the timeout message, no?
05:11 rjc essentially, yes
05:12 zerocoolback joined #salt
05:12 rjc btw, --hide-timeout doesn't do what I want
05:13 rjc partially anyway
05:14 hemebond It doesn't?
05:14 hemebond What is it you're trying to achieve?
05:14 rjc there's still a delay between printing all the up minions' info and returning to the prompt
05:15 hemebond That's pretty normal.
05:15 zerocool_ joined #salt
05:15 hemebond Takes time for the minions to run the job and return the info.
05:16 hemebond gather_job_timeout: 10
05:16 hemebond That's why I can't get the timeout below 10s.
05:16 rjc ok, I thought I was explaining myself perfectly but it seems that I'm not
05:16 rjc :)
05:17 rjc targeting minions which are up, I get the result straight away and the command prompt back too
05:17 hemebond Oh, you mean there's a delay between when info is printed and then the command exits?
05:17 hemebond If so, yes, that's the 10 second delay I just mentioned.
05:18 rjc targeting all with --hide-timeout I get the results straight away but need to wait longer because the master tries to contact/schedule a job for/push a command to (whatever you call it) the down minions
05:19 hemebond Yes, the tool checks with targeted minions to see if they're running the job.
05:20 hemebond If that's too long then you'll need to generate a list for your targeting.
05:20 rjc what I've been trying to explain is that I simply would like master to skip all the down minions - so that I don't have to wait those 10 seconds
05:20 rjc :)
05:20 hemebond Or, you could use --async and fetch the results afterwards.
05:20 hemebond Right. Except you were talking about the output :-)
05:21 rjc I know about async but that's not the issue here
05:21 hemebond If it's the delay, best method would be to generate a list of minions.
05:21 hemebond Not the issue? What do you mean?
05:23 rjc sorry, I don't think I'm able to explain myself any clearer than I already have
05:23 rjc :)
05:23 hemebond "I know about async but that's not the issue here"
05:23 hemebond run command with --async
05:24 hemebond Wait however long you're willing to wait.
05:24 hemebond Fetch results.
05:24 hemebond Or just generate your own list of minions.
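A rough sketch of the --async workflow hemebond suggests (the job ID printed by the first command has to be substituted into the second; output format varies by version):

    # returns a job ID immediately instead of waiting on minions
    salt --async '*' state.apply
    # ...later, fetch whatever the connected minions returned
    salt-run jobs.lookup_jid <JID>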
05:24 rjc but I don't want to fetch results later - I'd like to see them straight away
05:24 rjc for up minions getting the result only takes <1s
05:25 rjc all I want is for the down minions to be skipped automatically
05:25 hemebond Then you need to provide your own list of minions.
05:25 rjc I know how to do it statically - with a list of some sort
05:26 hemebond Yeah, the command line tool has nothing to do that for you as far as I know.
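One way to generate that list from bash, roughly (the outputter and flags used here may differ between salt versions):

    # build a comma-separated list of responding minions and target only those
    up=$(salt-run manage.up --out=newline_values_only | paste -sd, -)
    salt -L "$up" state.apply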
05:27 rjc 05:10 < rjc> something like "salt '$(salt-run manage.up)' state.apply" would be nice
05:27 rjc 05:10 < rjc> :)
05:27 rjc 05:10 < hemebond> Except it's not really required, because your only real issue is the timeout message, no?
05:27 rjc well, it is required as it seems
05:27 hemebond But it isn't really required.
05:28 NV joined #salt
05:28 hemebond If a minion is dead/down it does nothing but introduce a 10s delay in the command line tool.
05:28 rjc and that's my biggest issue here
05:28 rjc I'd like to avoid it
05:29 hemebond How long does manage.down take to run?
05:29 rjc it seems that such a simple option should've been there from the start - salt-run manage.down already lists the minions which aren't connected
05:29 hemebond Except it doesn't really. It asks the minions if they're up or down.
05:30 hemebond Go run `salt-run manage.up`
05:30 rjc I shouldn't need to run a one-liner to be able to use that list for targeting
05:30 hemebond How long does it take?
05:32 hemebond If you _really_ wanted to you could reduce gather_job_timeout
05:32 hemebond But that would likely increase the amount of noise generated for every job.
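For reference, the two knobs being discussed, as they might appear in the master config (the values shown are just the defaults mentioned above):

    # /etc/salt/master
    timeout: 5              # default CLI wait for minion returns
    gather_job_timeout: 10  # how long the CLI waits when polling minions for running jobs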
05:39 rjc manage.down/manage.up did indeed take a bit longer
05:41 rjc if only --timeout worked as advertised
05:41 rjc :)
05:43 hemebond It technically does.
05:43 hemebond But the master is only checking every 10 seconds :-)
05:43 hemebond So if the first check comes back empty, the timeout will make sure it exits.
05:46 om2 joined #salt
05:55 om2 joined #salt
05:59 om2 joined #salt
06:04 aruns joined #salt
06:05 exarkun joined #salt
06:05 aruns__ joined #salt
06:17 onlyanegg joined #salt
06:21 df3nse joined #salt
06:28 MTecknology rjc: In what way are you saying it doesn't work as advertised?
06:29 * MTecknology remembers having "timeout: 70" in master.d/aggravating.conf back before rpi3.
06:33 tiwula joined #salt
06:42 hemebond MTecknology: The minimum timeout/delay is 10 seconds.
06:43 hemebond By default.
06:45 Hybrid joined #salt
06:51 rjc MTecknology: -t TIMEOUT, --timeout=TIMEOUT
06:51 rjc The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5
06:51 rjc it doesn't seem like it can be anything lower than 10 ;)
06:52 hemebond I might test lowering gather_job_timeout to see what it can do.
06:53 hemebond Definitely seems faster.
06:54 hemebond I set it to 2 and used timeout=1 and it took 5 seconds to return everything.
06:57 inad922 joined #salt
07:01 hemebond 1 and 1 is very quick.
07:03 zerocoolback joined #salt
07:22 MTecknology hemebond: I was increasing it to 70 seconds because it took the rpi2 about a minute to run through my regular states, assuming no changes
07:22 hemebond Only a minute? You need to sync more directories with lots of files :-D
07:22 MTecknology I'm not sure I see where I said it was lower than 10?..
07:23 hemebond What?
07:23 MTecknology nevermind... I thought the thing rjc said meant something else
07:26 pualj joined #salt
07:29 LocaMocha joined #salt
07:36 aruns__ joined #salt
07:43 exarkun joined #salt
08:07 aldevar joined #salt
08:15 Tucky joined #salt
08:16 kiorky joined #salt
08:18 onlyanegg joined #salt
08:30 cewood joined #salt
08:34 hoonetorg joined #salt
08:35 Ricardo1000 joined #salt
08:55 testbozo joined #salt
08:59 pbandark joined #salt
09:07 pbandark joined #salt
09:12 mikecmpbll joined #salt
09:20 __peke__ joined #salt
09:23 pfallenop joined #salt
09:24 exarkun joined #salt
09:25 zerocoolback joined #salt
09:30 yuhl joined #salt
09:30 sh123124213 joined #salt
09:34 swa_work joined #salt
09:44 pfallenop joined #salt
09:48 gmoro joined #salt
09:50 aruns joined #salt
09:51 aruns__ joined #salt
10:01 zulutango joined #salt
10:04 Hybrid joined #salt
10:04 aruns__ joined #salt
10:08 golodhrim joined #salt
10:12 rjc joined #salt
10:19 onlyanegg joined #salt
10:35 sol7 joined #salt
10:49 testbozo joined #salt
10:49 audi joined #salt
10:50 aldevar1 joined #salt
10:52 pcgod_ joined #salt
10:52 Hybrid joined #salt
10:52 swa_work joined #salt
10:53 armyriad joined #salt
10:54 indistylo joined #salt
10:56 chesty_ joined #salt
10:57 duckfez_ joined #salt
10:57 drags1 joined #salt
10:57 wireknot joined #salt
10:58 coldbrew- joined #salt
10:58 vhasi_ joined #salt
10:58 cholcombe_ joined #salt
10:59 coredumb1 joined #salt
10:59 inetpro_ joined #salt
10:59 ingy1 joined #salt
10:59 rideh joined #salt
10:59 jab416171 joined #salt
11:00 systemdave joined #salt
11:04 nledez joined #salt
11:05 exarkun joined #salt
11:14 evle joined #salt
11:19 swa_work joined #salt
11:26 CrummyGummy joined #salt
11:26 miruoy_ joined #salt
11:30 wwalker_ joined #salt
11:30 Heartsbane_ joined #salt
11:30 ipsecguy joined #salt
11:31 Nazzy joined #salt
11:31 skrobul joined #salt
11:33 mage_ joined #salt
11:34 jab416171 joined #salt
11:35 scooby2 joined #salt
11:35 AssPirate joined #salt
11:39 aviau joined #salt
11:40 nledez joined #salt
11:40 Pomidora joined #salt
11:47 jab416171 joined #salt
11:52 bannomore joined #salt
11:58 jab416171 joined #salt
12:06 Mogget joined #salt
12:17 mrBen2k2k2k joined #salt
12:18 Hybrid joined #salt
12:19 onlyanegg joined #salt
12:25 aruns joined #salt
12:48 jab416171 joined #salt
12:54 KennethWilke joined #salt
12:55 m0nky joined #salt
12:55 wwalker joined #salt
13:03 sjorge joined #salt
13:05 spiette joined #salt
13:05 mchlumsky joined #salt
13:10 dmytro joined #salt
13:11 aruns joined #salt
13:12 Nahual joined #salt
13:13 dmytro left #salt
13:13 TheRealDmytro joined #salt
13:33 saltnoob58 joined #salt
13:47 mikecmpbll joined #salt
13:51 edrocks joined #salt
13:53 CrummyGummy joined #salt
13:56 baikal joined #salt
13:59 gareth__ joined #salt
14:01 gareth__ joined #salt
14:02 aldevar1 left #salt
14:05 aldevar joined #salt
14:08 alfie joined #salt
14:12 gh34 joined #salt
14:19 df3nse joined #salt
14:20 onlyanegg joined #salt
14:22 racooper joined #salt
14:24 exarkun joined #salt
14:30 nledez joined #salt
14:35 df3nse left #salt
14:36 cewood joined #salt
14:39 TheRealDmytro left #salt
14:46 vsi138 joined #salt
14:58 mage_ left #salt
14:58 mage_ joined #salt
14:58 mage_ hello
14:59 mage_ I'd like to integrate my "SaltStack scripts" with GitLab CI/CD, has anyone already done this?
14:59 mage_ my idea was to trigger an orchestration script on the master
15:00 pualj joined #salt
15:00 mage_ is it possible for a minion to trigger an orchestration script on the master? or should I install a GitLab Runner on the salt master?
15:00 onslack <mts-salt> salt is sufficiently versatile that "integrate" could mean many many different things. what is it you want to achieve?
15:01 saltnoob58 what ci/cd do you have? on one env I have bitbucket and ansible; I just have bitbucket check out the ansible repo, then run ansible-playbook mystuff.yml
15:01 saltnoob58 you could do the same with a salt command
15:02 saltnoob58 before you can do it, you must find out what you want
15:02 saltnoob58 oh, sorry, you said you have gitlab ci/cd
15:02 mage_ I already have states etc. to create virtualenvs, dedicated users, and so on, and I'd like to re-use that and avoid re-creating everything in .gitlab-ci.yml
15:02 saltnoob58 but the idea is the same. Work out your use case, then see how you can make it happen in the simplest way
15:02 mage_ yes
15:03 saltnoob58 so whats your use case?
15:04 mage_ I don't know yet :)
15:04 saltnoob58 then invent it first
15:04 saltnoob58 "when I X I want Y to happen"
15:04 onslack <mts-salt> as i said, what is it you want to achieve?
15:04 mage_ I'd like to try some basic stuff, like running a pylint + flake8 after each commit
15:05 saltnoob58 then research gitlab ci/cd triggers. Learn how to make it do things on each commit
15:05 saltnoob58 in bitbucket this is "triggers"
15:06 mage_ I've already installed a dedicated VM (FreeBSD) + "Runner" + salt minion; I just wonder what would be the best way to trigger an orchestration script on the master from the minion ..
15:06 saltnoob58 then research how to make gitlab ci/cd run commands. Then make a command that pylints either the whole repo or your commits, then make gitlab do it on commit with your first research result
15:07 cgiroua joined #salt
15:07 onslack <mts-salt> so far none of what you've said involves salt
15:08 saltnoob58 hmm, yeah
15:08 mage_ as I said, I already have states for creating dedicated users, virtualenvs, etc.
15:08 mage_ I'd like to trigger that
15:08 saltnoob58 that's already more than running pylint and flake8
15:09 onslack <mts-salt> there's a couple of ways. simplest is probably to post an event to the salt bus and have a reactor trigger the orchestration state
15:09 mage_ yep, pylint and flake8 are already installed in those virtualenvs
15:09 saltnoob58 do you have an orchestration sls or whatever on the master that you can trigger manually? tie that to something, like an event, as mts-salt suggested
15:09 mage_ ah, the reactor sounds a good idea
15:09 mage_ thanks
15:10 saltnoob58 do you want to deploy one artefact to many target hosts and run pylint on each one?
15:10 mage_ yes
15:10 onslack <mts-salt> the tricky part is if you need any parameters. that's why i suggested an event, because the reactor /should/ be able to pull that out and pass it into the orch
15:10 keltim joined #salt
15:11 saltnoob58 isn't it less resource taxing to run pylint once, then if it passes deploy it?
15:11 saltnoob58 a test stage before deploy stage so to speak
15:11 mage_ yep
15:11 onslack <mts-salt> very true. orch could do that easily enough
15:11 mage_ I'll go with events && reactor
15:11 mage_ thanks a lot
15:12 mage_ stupid question, how could I trigger an event on a minion?
15:13 twiedenbein joined #salt
15:13 saltnoob58 depends on the event condition?
15:14 saltnoob58 if you need to make something happen on a minion, remote exection is the simplest way
15:15 saltnoob58 https://docs.saltstack.com/en/latest/topics/event/events.html#firing-events this something like you need?
15:16 saltnoob58 remember to always first think without salt what you want to happen, then think how to make that thing happen with salt. There are many options, many of which you can use but dont need to
15:16 onslack <mts-salt> that's the one
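A rough sketch of the event-plus-reactor approach being suggested (the event tag, file paths and orchestration SLS name are made up; reactor syntax varies a little between salt versions, so check the reactor docs):

    # on the minion (e.g. from a gitlab-ci job): fire an event at the master
    salt-call event.send 'myco/ci/deploy' '{"branch": "master"}'

    # /etc/salt/master.d/reactor.conf - map the tag to a reactor SLS
    reactor:
      - 'myco/ci/deploy':
        - /srv/reactor/deploy.sls

    # /srv/reactor/deploy.sls - kick off an orchestration runner on the master;
    # the event payload is available here as the `data` variable if parameters are needed
    run_deploy_orch:
      runner.state.orchestrate:
        - args:
            mods: orch.deploy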
15:16 saltnoob58 hope your thingamajig works
15:16 onslack <mts-salt> oh definitely. salt should be used to automate something you can already do manually
15:17 saltnoob58 even if that something you can manually do is # /bin/bash nonmanual_activity.sh  :D
15:17 saltnoob58 i have a government-holiday extra-day weekend of drinking ahead of me that started 2 minutes ago, wish you luck, wish me luck
15:18 onslack <mts-salt> ooh, lucky you :)
15:21 onlyanegg joined #salt
15:24 Nahual Seems that the saltutil.runner state module executes in a minion context and not in a master context, is that expected? Example is I am loading in a custom pillar from _pillar, when the state is run it ends up in /var/cache/salt/minion/extmods/ instead of /var/cache/salt/master/extmods. If I run salt-run saltutil.sync_pillar on the master itself from the CLI, all is well.
15:25 onslack <mts-salt> what command are you using to call saltutil.runner ?
15:26 Nahual It's being called as a state within the SLS file. salt.runner: - name: saltutil.sync_pillar
15:29 onslack <mts-salt> and what calls that?
15:29 Nahual It's called from a highstate.
15:29 onslack <mts-salt> using what command exactly?
15:29 onslack <mts-salt> or trigger, or whatever
15:30 onslack <mts-salt> i'm trying to establish which environment it's running under based on what starts the process off
15:31 Nahual Ooo, good point, it's a salt-call --local initially for bootstrapping.
15:32 Nahual After that initial run it'd be a normal salt state.apply call.
15:35 onslack <mts-salt> so the first command is running masterless, so it's definitely in minion context, yes
15:36 onslack <mts-salt> even a state.apply is still in minion context
15:37 onslack <mts-salt> generally speaking, pillar is minion-specific anyway. there is a view where the master can merge all pillar together but that's not what i'd call standard usage :)
15:37 Nahual This is laying down the module though which would only be executed by the master.
15:42 onslack <mts-salt> it looks like you're using the saltmod.runner state based on your snippet. where is pillar coming into it?
15:42 Nahual I am syncing a pillar module using the runner.
15:42 Nahual Or, attempting to, anyway.
15:43 onslack <mts-salt> yep, so i see. might be worth comparing debug logs for both runs
15:45 mikecmpbll joined #salt
15:45 Nahual I think the overarching issue I will be facing, now that I have had time to digest it, is that salt-call --local will be installing the module into its cache versus the master cache, which makes sense. Without that module though, the further salt state.apply that would be run after the bootstrap is going to fail as the pillar module is missing which is required. Would it be considered bad practice to just shunt the file directly into /var/cache/salt/master/extmods/pillar on that initial --local call?
15:46 Nahual I'm trying to use the tools provided as much as possible.
15:46 onslack <mts-salt> why use --local in the first place if you have a master?
15:47 Nahual I don't have a master yet, the bootstrap is bootstrapping the master.
15:48 onslack <mts-salt> on the same host?
15:48 Nahual Correct. So the salt-call --local stands up the master once completed.
15:48 Nahual Has been working pretty well, this has been the only sticking point so far.
15:49 onslack <mts-salt> i see. so really you want to change context part way through from masterless to master
15:49 Nahual That would be ideal. I could always schedule the sync with at after the states have completed.
15:50 onslack <mts-salt> if i understand it, you're installing the master and want to bring in a custom pillar module as defined in the (new) master config?
15:50 Nahual Correct.
15:51 onslack <mts-salt> unless someone can think of an easier way, have you considered something as simple as using cmd.run to call salt-run directly? :)
15:51 onslack <mts-salt> that would guarantee the context switch at least
15:52 Nahual I was just going to implement that as a last resort actually.
15:52 Nahual I have the commit ready but thought I would check in.
15:53 onslack <mts-salt> everything that occurs as a result of salt-call --local will be in the masterless context. i don't know of another way to change the context
15:53 Nahual That's fine. I can do the cmd.run.
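A minimal sketch of that cmd.run workaround (the state ID and the require are placeholders; it assumes the freshly configured master service is already up when this state runs):

    sync_custom_pillar_modules:
      cmd.run:
        - name: salt-run saltutil.sync_pillar
        - require:
          - service: salt-master-service    # hypothetical state ID for the master service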
15:54 onslack <mts-salt> well, i do, but it's probably worse :D
15:55 onslack <mts-salt> assuming the master is running, post an event into it to trigger what you want
15:58 om2 joined #salt
15:58 mikecmpbll joined #salt
16:00 pualj_ joined #salt
16:07 pualj joined #salt
16:07 mrBen2k2k2k_ joined #salt
16:14 sh123124213 joined #salt
16:15 shoogz joined #salt
16:17 yaml joined #salt
16:18 yaml Greetings. Anyone know why 'ingy' was banned here?
16:20 ingy1 left #salt
16:23 tiwula joined #salt
16:32 pualj_ joined #salt
16:33 inad922 joined #salt
16:34 whytewolf they do not look to be banned yaml
16:36 Hybrid joined #salt
16:40 Sacro Is there a trick to actually starting the slack engine?
16:40 Hybrid joined #salt
16:47 ingy joined #salt
16:47 ingy test
16:48 this_is_tom joined #salt
16:50 this_is_tom Having an issue with pam authentication. I keep getting "Authentication failure of type "eauth" occurred." Glad to share master and cherrypy configs if someone thinks they might be able to help
16:51 om2 joined #salt
16:52 onlyanegg joined #salt
16:54 coredumb joined #salt
16:55 edrocks is there an easy way to sync a dir to google cloud storage with salt?
17:22 irated joined #salt
17:22 irated joined #salt
17:40 Sacro Anyone here using the Slack engine? I can't get it to start :(
17:44 MyNickname__ joined #salt
17:47 zerocoolback joined #salt
17:52 om2 joined #salt
17:58 edrocks joined #salt
18:25 Rr4sT joined #salt
18:27 swa_work joined #salt
18:28 brokensyntax joined #salt
18:34 rkhadgar joined #salt
18:34 Trauma joined #salt
18:40 jpsharp I need to query a mysql database based on the minion hostname and then use the results of that query to build an assortment of configuration files via a salt state.  I *think* a pillar is what I want, but I'm not sure.
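A pillar is indeed the usual fit for per-minion data like this; salt ships a mysql external pillar that runs a query per minion (the %s placeholder receives the minion ID). A very rough master-config sketch - the connection keys, query name and query itself are illustrative and should be checked against the salt.pillar.mysql docs:

    # /etc/salt/master.d/mysql_pillar.conf
    mysql:
      user: salt
      pass: secret
      db: inventory
      host: db.example.com

    ext_pillar:
      - mysql:
          hostconfig:
            query: 'SELECT role, datacenter FROM hosts WHERE hostname = %s'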
18:47 hashwagon joined #salt
18:49 hashwagon I have an sls file with the following line: {% if grains['host'].startswith("a123") %} is there a way to wildcard the 'a' in ("a123") ?
18:49 hashwagon Or should I be using something different than startswith?
18:53 kiorky joined #salt
18:57 edrocks joined #salt
18:59 wongster80 joined #salt
18:59 whytewolf {% if '123' in grains['host'] %}
18:59 whytewolf or, if you need more advanced matching, look at the match module
19:02 hashwagon Thanks, whytewolf!
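A small Jinja sketch of the two approaches whytewolf mentions (the state bodies are placeholders; note that match.glob matches against the minion ID rather than the host grain):

    {% if '123' in grains['host'] %}
    # ... states for hosts whose hostname contains 123 ...
    {% endif %}

    {# or, for glob-style matching, via the match execution module #}
    {% if salt['match.glob']('*123*') %}
    # ...
    {% endif %}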
19:09 mikecmpbll joined #salt
19:09 pualj joined #salt
19:13 DammitJim joined #salt
19:28 aldevar joined #salt
19:32 cyborg-one joined #salt
19:34 curio_casual joined #salt
19:36 curio_casual Can anyone here point me to the source for the SaltStack Kapeli Dash.app docset?
19:39 whytewolf curio_casual: saltstack is one of the ones you can install from inside dash.
19:41 curio_casual I'm aware of how to install it; I'd like to know where it's coming from: it's routinely missing the "test" state/module and hasn't been updated for the 2016 branch since 2016.11.6
19:42 whytewolf you would have to ask the dash.app people as they are the ones that maintain that.
19:42 whytewolf it is odd they stopped 2016.11.6 yet have added 2017.7.3
19:42 curio_casual right?
19:44 cyborg-one left #salt
19:44 whytewolf it could be they just download and compile our docs against the most recent version; 2016.11.6 was the last release before 2017.7.0 came out
19:44 curio_casual entirely possible, I'm writing in to their support
19:54 onslack <gtmanfred> this bridge is still working right?
19:55 gtmanfred perfect
19:55 whytewolf yeap
19:55 gtmanfred i am really happy with how my plugin to irc3 is working
19:56 gtmanfred i need to finish cleaning it up, and add the ability to take file uploads from slack and push them to imgur/youtube/gist
19:57 gtmanfred probably will be my project for saturday
19:59 trogdorunique joined #salt
20:00 whytewolf that reminds me. I need more cat6, because I have to start looking at revamping my openstack. got new servers to use as compute. going to move my old controllers to a ceph setup [not a big one, more of a 'just because I can' type thing], turn the current compute cluster into the new controllers, and somehow work my new NAS into the mix
20:02 ymasson joined #salt
20:22 dRiN joined #salt
20:26 onlyanegg joined #salt
20:26 onslack <gtmanfred> nice
20:27 onslack <gtmanfred> let me know if you use the salt stuff, and how it works for you, i still have to document the new openstack stuff for oxygen
20:28 curio_casual "I only track the latest stable version of a docset. If you ever want me to add a specific version, please email me about it." ~Kapeli support
20:28 whytewolf are the execution and state modules converted to shade yet? or are they still using the buggy older python clients?
20:30 whytewolf curio_casual: yeap, just as i thought. they just pull the docs from the source and compile them. and most likely just have a script that does it.
20:30 jcristau joined #salt
20:30 curio_casual I was hoping it wasn't a one-man-shop but oh well
20:31 trogdorunique joined #salt
20:45 Hybrid joined #salt
20:51 Trauma joined #salt
21:01 onlyanegg joined #salt
21:06 trogdorunique joined #salt
21:39 jrsdav joined #salt
21:40 sh123124213 joined #salt
21:41 trogdorunique joined #salt
21:42 jrsdav left #salt
21:43 jrsdav joined #salt
21:52 jrsdav_ joined #salt
21:53 jrsdav_ Is anyone seeing `Failed to start SSH session: Unable to exchange encryption keys` with `gitfs`?
21:54 jrsdav_ Whelp - https://githubengineering.com/crypto-removal-notice/
22:04 pualj joined #salt
22:06 XenophonF jrsdav_: you shouldn't be using sha1 anyway
22:06 XenophonF https://stribika.github.io/2015/01/04/secure-secure-shell.html
22:06 XenophonF I wrote my own version of OpenSSH formula to implement the above:
22:06 XenophonF https://github.com/irtnog/openssh-formula
22:07 sh123124213 joined #salt
22:08 asoc joined #salt
22:10 jrsdav_ XenophonF: This actually seems to be a problem with SaltStack's gitfs implementation using pygit2, afaict.
22:14 trogdorunique joined #salt
22:18 edrocks joined #salt
22:24 MTecknology SHA1 shouldn't even still exist.
22:27 jrsdav_ These versions appear to resolve the problem with gitfs: libgit2-0.26.0-3.fc27.x86_64 python2-pygit2-0.26.3-1.fc27.x86_64
22:28 sh123124213 joined #salt
22:30 xMopx Any idea when 2017.7.4 will be hitting the repos?
22:35 Deliant joined #salt
22:38 lane_ joined #salt
22:42 onlyanegg joined #salt
22:49 exarkun joined #salt
22:51 trogdorunique joined #salt
22:56 alfie joined #salt
23:00 MTecknology xMopx: probably at some point after it's been released?
23:03 onlyanegg joined #salt
23:07 xMopx Of course :P
23:11 hemphill joined #salt
23:11 hoonetorg joined #salt
23:16 trogdorunique joined #salt
23:30 trogdorunique joined #salt
23:50 trogdorunique joined #salt
23:56 shortdudey123 joined #salt
