
IRC log for #salt, 2018-03-15


All times shown according to UTC.

Time Nick Message
00:04 tiwula joined #salt
00:05 ixs hemebond: https://github.com/saltstack-formulas/php-formula/blob/master/php/ng/map.jinja looks like a lot of duplication to me.
00:07 systemexit joined #salt
00:08 hemebond While there definitely appears to be some duplication of options, it's probably far easier and cleaner than trying to merge options over the top of each other depending on the kernel/os/version combinations.
00:08 hemebond Would probably be better to split it all into separate files.
00:09 ixs hemebond: I wonder if it would make sense to split it up into something like a debian.yaml, an ubuntu.yaml, a redhat.yaml
00:09 ixs yeah
00:09 ixs because 2500 lines is quite a lot...
00:09 hemebond It's massive.
00:10 hemebond Actually, seems to have some duplication in the filter_by's too.
00:12 hemebond {%- if salt['grains.get']('os') == "Ubuntu" %}
00:12 hemebond But then does a filter_by with Debian in the dict.
00:12 hemebond With _only_ Debian.
00:13 ixs hemebond: I stumbled over that too but it makes sense because filter_by filters by os_family by default.
00:13 ixs now I think that is dumb to use the default there because it is confusing.
00:13 ixs as you just demonstrated. :D
00:13 hemebond Oh it does os_family? Okay.
00:13 ixs yeah, but still dumb.
00:13 hemebond I didn't know Ubuntu had Debian as its family.
00:14 hemebond (in the grain I mean)
00:15 ixs os_family is Debian for Debian, Ubuntu, and all the other derivatives. Fedora, CentOS, SCL, RHEL etc. go as RedHat.
00:15 ixs and SLES, OpenSuse etc. as SUSE
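
For reference: grains.filter_by keys on the os_family grain unless told otherwise, which is why a lookup dict containing only Debian still matches Ubuntu minions. A minimal map.jinja sketch, with hypothetical package names:

    {# os_family is the default grain, so 'Debian' covers Ubuntu, Mint, etc. #}
    {% set php = salt['grains.filter_by']({
        'Debian': {'pkg': 'php5', 'service': 'apache2'},
        'RedHat': {'pkg': 'php', 'service': 'httpd'},
    }, grain='os_family', merge=salt['pillar.get']('php:lookup')) %}
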
00:16 dendazen joined #salt
00:16 zerocoolback joined #salt
00:20 schemanic joined #salt
00:22 schemanic Hi, what's the best way to pass a nested dict straight from pillar to a state?
00:22 schemanic I want to pass a pillar dict to an apache.configfile state
00:39 XenophonF use the |yaml filter maybe? depends
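
What XenophonF is suggesting looks roughly like this; the pillar key apache:config and the target path are hypothetical:

    {# pull the nested dict from pillar and dump it into the state as YAML #}
    {% set conf = salt['pillar.get']('apache:config', {}) %}
    /etc/httpd/conf.d/custom.conf:
      apache.configfile:
        - config: {{ conf | yaml }}
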
00:44 schemanic joined #salt
00:45 mannefu joined #salt
00:47 scooby2 joined #salt
01:38 Psi-Jack joined #salt
01:40 schemanic joined #salt
01:41 schemanic Is there a way to make salt order a dictionary? I'm specifying values for an apache configuration and I want it to properly write my state in the order in which I'm specifying keys in my pillar
01:41 hemebond |sort
01:41 hemebond Oh.
01:42 hemebond Dicts are unordered in Python (until 3.7, where insertion order became guaranteed)
01:42 hemebond If you want a specific order you need to use a list.
01:44 schemanic It's not up to me. I'm using apache.configfile state. It iterates over a dictionary to do what it needs
01:44 schemanic do you know if apache balks at out-of-order directives?
01:45 hemebond I think there are a couple of directives that must be first in the config file.
01:45 hemebond The formula should take care of that for you though.
01:46 hemebond Oh, within a VirtualHost, I don't believe order matters.
01:46 hemebond What is not being ordered properly for you?
01:47 schemanic I'm not operating within a virtualhost
01:47 schemanic I have nested directive scopes
01:48 schemanic so to set up my disk cache I have <IfModule mod_cache.c> ...cache_directives... <IfModule mod_cache_disk.c> ...cache_disk_directives.. </IfModule></IfModule>
01:49 schemanic and all of those are being sorted in dictionary order at one level
01:49 schemanic the scopes appear to be in order
01:49 schemanic um. that is to say, they appear to be properly defined
01:49 schemanic but the order of the whole thing is alphabetical
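
Since YAML mappings lose their key order once parsed, the usual workaround is a list of single-key dicts, which keeps the order explicit. A sketch of that shape for a nested cache config; directive values are illustrative, and whether apache.configfile preserves the order depends on the Salt version:

    cache_conf:
      apache.configfile:
        - name: /etc/httpd/conf.d/cache.conf
        - config:
          # list items keep their order; plain mapping keys may not
          - IfModule mod_cache.c:
            - CacheEnable: 'disk /'
            - IfModule mod_cache_disk.c:
              - CacheRoot: /var/cache/httpd
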
01:55 mannefu joined #salt
02:08 MTecknology schemanic: are you using a formula for this?
02:20 schemanic I mean, I am using formulas, but this is a straight up state module
02:20 schemanic https://docs.saltstack.com/en/latest/ref/states/all/salt.states.apache.html
02:34 MTecknology ah
02:35 shiranaihito joined #salt
02:55 MTecknology man, that looks gross
02:55 zerocoolback joined #salt
02:57 ilbot3 joined #salt
02:57 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.4 <+> RC for 2018.3.0 is out, please test it! <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
03:02 thelocehiliosan joined #salt
03:03 zerocoolback joined #salt
03:12 thelocehiliosan joined #salt
03:17 justanotheruser joined #salt
03:18 dezertol joined #salt
03:28 sh123124213 joined #salt
03:33 zerocoolback joined #salt
03:43 zerocoolback joined #salt
03:47 threwahway joined #salt
03:49 threwahway_ joined #salt
03:52 Guest73 joined #salt
03:55 AssPirate joined #salt
03:56 threwahway joined #salt
04:00 zerocoolback joined #salt
04:01 threwahway_ joined #salt
04:09 sjorge joined #salt
04:10 mavhq joined #salt
04:33 aldevar joined #salt
05:17 samodid joined #salt
05:30 sauvin_ joined #salt
05:44 zerocoolback joined #salt
05:48 Guest73 joined #salt
05:52 wongster80 joined #salt
06:05 zerocoolback joined #salt
06:25 masber joined #salt
06:29 Hybrid joined #salt
06:35 dynek Does anyone know how salt-api interacts with salt-master? TCP (Localhost)? Socket? Thanks!
06:44 zerocoolback joined #salt
06:45 hemebond "Executing a Salt command via rest_cherrypy is directly analogous                       to executing a Salt command via Salt's CLI (which also uses the Python API)"
06:46 dynek OK that sheds some light, but then how does salt's CLI contact the master? :-) ok ok I'll strace that. Thanks!
06:49 hemebond I suspect it uses TCP
06:50 hemebond Since they tend to just put events onto the event bus.
06:50 hemebond master_uri = 'tcp://' + salt.utils.zeromq.ip_bracket(self.opts['interface']) + .......
06:57 dynek Thank you!
06:58 zerocoolback joined #salt
07:12 om2 joined #salt
07:14 aviau joined #salt
07:14 pppingme joined #salt
07:18 nku is there a way to apply a state only once, immediately after a package was installed, without using some lockfile?
07:18 hemebond nku Immediately afterwards? Why?
07:19 aviau joined #salt
07:19 nku hemebond: actually, i don't think i need it, i can just overwrite it
07:19 hemebond 👍
07:19 nku some app creates a file with an initial password inside..
07:20 nku i guess i need to figure out the hashing algo .oO
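
A sketch of the overwrite approach nku settled on, with hypothetical names throughout: install the package, then unconditionally manage the password file it creates.

    myapp:
      pkg.installed: []

    /etc/myapp/initial-password:
      file.managed:
        # overwrites the app-generated file on every run
        - contents_pillar: myapp:password
        - mode: '0600'
        - require:
          - pkg: myapp
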
07:20 sploenix joined #salt
07:21 sploenix hi. is there a possibility to get the output of module.run?
07:22 hemebond sploenix: Sounds like you're trying to script something.
07:22 sploenix hemebond: yes i am :)
07:23 sploenix i want to get a list (or ideally a set) of the firewall zones from firewalld
07:24 hemebond And do what with it?
07:25 sploenix delete some default zones if they exist and add some custom zones if they don't
07:25 hemebond sploenix: That's what state modules are for :-)
07:25 hemebond "Make sure these are absent"
07:25 hemebond "Make sure these exist"
07:27 zerocoolback joined #salt
07:28 zerocoolback joined #salt
07:28 zerocoolback joined #salt
07:29 masber joined #salt
07:29 zerocoolback joined #salt
07:30 zerocoolback joined #salt
07:31 sploenix hemebond: ok. in a test case it looked like all other zones will be deleted if I add a new one using the state module. maybe i misinterpreted the output.
07:32 sploenix hemebond: https://pastebin.com/fPxqW3vt
07:34 hemebond Is this the firewalld state module?
07:34 sploenix there is a state module but there are also service modules
07:34 sploenix the output is from the state module
07:35 hemebond yes, state modules use the execution modules
07:35 sploenix ok the answer to your question is yes :)
07:37 hemebond And I guess you don't want to just set the firewall rules you want?
07:39 hemebond "There is an undocumented parameter for the firewalld.present state called prune_services, which defaults to True and cleans all services not explicitly defined. "
07:39 rideh joined #salt
07:40 sploenix very nice, thats exactly what i want :)
07:41 hemebond Check the source. That parameter might have changed names.
07:41 sploenix ok i will do that. thanks
07:42 hemebond Or fully define all the services and zones your firewall should have :-)
07:44 hemebond This issue seems relevant https://github.com/saltstack/salt/issues/41075
07:45 hemebond It looks like the next Salt release will have more flexibility in that state module.
07:49 hemebond If you can't get it to work with the version you've got, I recommend writing a bash script to do it and calling that.
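
A sketch of the firewalld.present usage under discussion; as noted above, prune_services is undocumented and its name may differ between releases, so check the state's source for your version:

    public:
      firewalld.present:
        - name: public
        - services:
          - ssh
          - https
        # keep services not listed above instead of removing them
        - prune_services: False
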
08:01 darioleidi joined #salt
08:01 babilen Another approach is to have a single state that manages your firewall configuration and to provide (merged) pillar data from various sources
08:03 sploenix yes, that issue is related to my problem. but furthermore I want to delete zones, as I only need a very simple setup regarding the zones. with the predefined zones the final ruleset is not really readable. a firewalld.absent would be nice for deleting rules.
08:04 babilen Sure, that state could be pillar driven too
08:04 sploenix babilen: that would be possible, but the ports I need are not available in plain text and are generated at runtime
08:05 babilen Which runtime?
08:05 sploenix in another state. they are generated from directory names
08:07 babilen Right, well .. I see how having the states modularised would help
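
babilen's pillar-driven idea might look like the following, assuming a hypothetical firewall:zones pillar tree filled in by whatever generates the ports; parameter support in firewalld.present varies by release:

    {# one state template, driven entirely by (merged) pillar data #}
    {% for zone, cfg in salt['pillar.get']('firewall:zones', {}).items() %}
    firewalld_zone_{{ zone }}:
      firewalld.present:
        - name: {{ zone }}
        - services: {{ cfg.get('services', []) | yaml }}
        - ports: {{ cfg.get('ports', []) | yaml }}
    {% endfor %}
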
08:13 mk-fg joined #salt
08:13 mk-fg joined #salt
08:13 monokrome joined #salt
08:22 Hybrid joined #salt
08:26 Pjusur joined #salt
08:26 Guest73 joined #salt
08:30 yuhl joined #salt
08:31 Tucky joined #salt
08:39 babilen sploenix: Could you use the same logic to generate the pillar data?
08:43 Ricardo1000 joined #salt
08:54 mikecmpbll joined #salt
09:00 sploenix maybe, but before thinking further about it I need to solve the basic problems... are multiple active zones supported? if I add a new zone the default zone public gets deactivated...
09:01 pf_moore joined #salt
09:06 cewood joined #salt
09:09 mk-fg joined #salt
09:10 mk-fg joined #salt
09:14 samodid joined #salt
09:17 ingslovak joined #salt
09:19 zulutango joined #salt
09:25 mage_ left #salt
09:25 mage_ joined #salt
09:25 mage_ hello
09:25 mage_ is it possible to launch an orchestration script from a minion with salt-call ?
09:26 zer0def you mean an sls ran with `state.orchestrate`?
09:27 mage_ yes
09:27 zer0def well, the function *used* to be called `state.sls` and it runs in a minion-like context, so it'll depend on particular states you have defined in the sls
09:29 zer0def if it doesn't have something that only a master would need to know (aka something that only a runner call would know), it's possible you'd be just fine with `salt-call --local state.sls <sls-path>`
09:30 mage_ I'll take a look at --local :) thanks
09:30 zer0def also, "only a master would need to know" → "only a master would be capable of knowing"
09:30 zer0def mage_: the focus should be `state.sls`, not `--local`
09:30 msmith joined #salt
09:31 zer0def ymmv, there's a bunch of caveats associated with it
09:32 mage_ my goal is to be able to run an orchestration script from a gitlab runner which run on a different machine
09:32 mage_ but maybe I should just install a gitlab-runner on the salt master too
09:33 zer0def well, you can try the presented option or, at least from my perspective, set up salt-api to throw salt calls at
09:33 msmith left #salt
09:33 exarkun joined #salt
09:33 msmith2 joined #salt
09:34 mage_ never used salt-api yet, but will look at it :)
09:34 zer0def the latter will probably be the cleanest and most straightforward option to achieve orchestration calls without shenanigans
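
Enabling salt-api for that is mostly master-side configuration; a minimal rest_cherrypy sketch, where the gitlab user and certificate paths are illustrative:

    rest_cherrypy:
      port: 8000
      ssl_crt: /etc/pki/tls/certs/localhost.crt
      ssl_key: /etc/pki/tls/certs/localhost.key

    external_auth:
      pam:
        gitlab:
          # '@runner' grants access to runner functions such as state.orchestrate
          - '@runner'
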
09:34 babilen Or trigger events that the master reacts to
09:35 Mattch joined #salt
09:35 zer0def or that, if your gitlab runner is already in a salt cluster, yes
09:35 mage_ yep, the problem with events is that it's async and I don't get the output
09:37 babilen I like that they ar asynchronous
09:42 msmith joined #salt
09:42 msmith test
09:43 hemebond msmith: TEST FAILED. Payment missing.
09:44 zer0def hemebond: test123
09:44 msmith thanks, at least you saw it. which means i'm finally auth'd
09:53 mage_ babilen: yeah, but in a ci/cd context I have to know if the job failed or succeeded
09:55 zer0def one doesn't deny the other, but you're definitely looking for orchestration results elsewhere
09:56 hemebond mage_: Returner?
09:59 mage_ dunno if it is possible to return to a gitlab-runner
10:51 inad922 joined #salt
11:00 Naresh joined #salt
11:02 harsh_ joined #salt
11:19 systemex1t joined #salt
11:20 wedgie_ joined #salt
11:20 hillna_ joined #salt
11:23 Deliants joined #salt
11:23 motherfs1 joined #salt
11:23 onslack_ joined #salt
11:23 Nimbus joined #salt
11:23 nku_ joined #salt
11:24 honestly_ joined #salt
11:24 ddg_ joined #salt
11:24 Psy0rz joined #salt
11:24 Nazzy joined #salt
11:24 MajObvio1sman joined #salt
11:24 Hipikat_ joined #salt
11:25 deadpoet_ joined #salt
11:25 Deadhandd joined #salt
11:25 averell- joined #salt
11:26 cro- joined #salt
11:27 KolK joined #salt
11:28 shakalaka_ joined #salt
11:28 StolenToast joined #salt
11:28 agustafson joined #salt
11:28 darkalia_ joined #salt
11:29 lkthomas_ joined #salt
11:29 ingy1 joined #salt
11:29 Freeaqingme joined #salt
11:29 sarlalian joined #salt
11:29 hillna joined #salt
11:29 ponyofde1 joined #salt
11:29 atoponce joined #salt
11:30 nledez joined #salt
11:30 sayyid9000 joined #salt
11:30 ekkelett joined #salt
11:30 nledez joined #salt
11:30 monokrome joined #salt
11:31 squig joined #salt
11:31 JPT_ joined #salt
11:32 dezertol joined #salt
11:38 Ricardo1000 joined #salt
11:39 tuxawy joined #salt
11:43 thelocehiliosan joined #salt
11:49 aleph- joined #salt
12:03 Guest86597 joined #salt
12:04 zerocoolback joined #salt
12:08 onslack <msmith> and we're back
12:08 zerocoolback joined #salt
12:08 thelocehiliosan joined #salt
12:14 atoponce joined #salt
12:18 Nahual joined #salt
12:19 sploenix left #salt
12:21 gnord joined #salt
12:23 ThomasJ|d joined #salt
12:30 brokensyntax joined #salt
12:37 pcdummy joined #salt
12:45 schemanic joined #salt
12:50 FL1SK joined #salt
12:57 motherfs1 left #salt
12:59 schemanic joined #salt
13:09 gh34 joined #salt
13:13 XenophonF joined #salt
13:18 petererer joined #salt
13:21 tuxawy joined #salt
13:21 zerocoolback joined #salt
13:27 thelocehiliosan joined #salt
13:34 schemanic joined #salt
13:41 magnus1 joined #salt
14:02 nixjdm joined #salt
14:34 theloceh1liosan joined #salt
14:55 cgiroua joined #salt
14:56 tom[] joined #salt
14:58 tom[] is there a nice tidy recipe to check for a certain string in the stdout (or stderr) of a cmd.run and append \nchanged=no to stdout ?
14:59 * tom[] not very good at shell programming
15:03 onslack <msmith> as a single command? could be challenging. you may want to consider using a script instead
15:18 tom[] mmm
15:19 zer0def tom[]: looks like you're trying to use `stateful`, wouldn't `onlyif` or `unless` fit your case?
15:19 zer0def unless you need that `cmd.run` to run *every* time
15:20 nebuchadnezzar Hello
15:21 tom[] zer0def: yes, i do. the command runs database migrations. i want to discriminate on the "No new migrations found. Your system is up-to-date." in stdout
15:21 tom[] i could run a test first, i suppose
15:21 zer0def was about to ask for that
15:22 zer0def probably the cleanest way to achieve your goal - test for missing migrations, if none needed, exit 0, else `cmd.run`
15:22 tom[] my question was like: can i just add a grep && echo to the end of this?
15:23 zer0def you could, sure, just wrap it in quotes and might want to double check on whether cmd.run runs as shell
15:25 zer0def i'd advise to have that test slapped in `unless` and specify the command in `name`, arguably the cleanest way of communicating what you intend to do, as well as execution logic
15:27 thelocehiliosan joined #salt
15:29 nebuchadnezzar I'm starting a new configuration in which I would like to get all the configuration under revision control. I read the salt-formula and GitFS walkthrough but I'm confused by the amount of information. I thought I could use something like: 1) clone a “masterless” salt-formula, 2) run salt-call --local state.apply. But I'm wondering what to put inside that bootstrap repository. It looks like I must have a dedicated git repository for
15:29 nebuchadnezzar top.sls (https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#branches-environments-and-top-files)
15:31 zerocool_ joined #salt
15:32 zer0def nebuchadnezzar: do you intend to make use of multiple environments at this point? if not, that's a non-concern at this point
15:33 zer0def aka, worry about it later, when it actually comes up as an issue
15:33 lordcirth_work tom[], if you pipe the stdout to grep, it will return 0 or 1 depending on the text being matched
15:33 onslack <msmith> rolling out salt over an existing infrastructure is an interesting task. git does help with determining what changed when something goes wrong after the fact :)
15:34 mikecmpb_ joined #salt
15:34 thelocehiliosan joined #salt
15:34 lordcirth_work - unless: 'thingy | grep -q "No new migrations found" '
15:35 zer0def that's horrible
15:35 dezertol joined #salt
15:35 lordcirth_work Well yes but we're already talking about using cmd.run
15:35 zer0def because he's already running `thingy` as the `name` argument of `cmd.run`
15:35 nebuchadnezzar zer0def: ok. I saw that something like reclass could be used to manage that top.sls in a different way. So for now I don't need to worry about top.sls merging
15:35 lordcirth_work Oh ok, didn't see that
15:36 lordcirth_work tom[], is there no way to do a dry-run / list migrations without doing them?
15:36 cgiroua joined #salt
15:36 zer0def there is, he was just curious about using shellisms in `name` of `cmd.run`
15:37 zer0def i'd personally advise against doing so, if you have a capability in your software to achieve your goal
15:37 tom[] test first is the better way
15:37 tom[] i'll do that
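
The test-first shape zer0def describes, with hypothetical migration commands; the exit code of unless decides whether name runs at all:

    run_migrations:
      cmd.run:
        - name: /usr/local/bin/app migrate
        # unless is run through the shell, so the pipe works as-is
        - unless: /usr/local/bin/app migrate --status | grep -q 'No new migrations found'
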
15:39 nebuchadnezzar So for now, if I understand correctly, I just need to 1) install salt-minion, 2) clone salt-formula, 3) provide some pillar to configure my master (mostly gitfs_remotes) and 4) run salt-call --local state.apply to transform that masterless minion into a master with itself as its first minion
15:40 zer0def sounds like a plan, although i'm not familiar with the formula you're describing (or any other formula, for that matter)
15:41 nebuchadnezzar zer0def: salt-formula is a formula to auto-salt itself https://github.com/saltstack-formulas/salt-formula
15:41 Nahual nebuchadnezzar: Same procedure I follow now although without the use of the salt-formula.
15:41 zer0def eh, i don't care for formulas, but as long as someone finds uses for them, they have a purpose
15:45 zer0def personally find them as a 1-1 complement to chef cookbooks, which stem from how rigid the file hierarchy is in chef (and puppet, for that matter)
15:45 nebuchadnezzar OK, looks like I have my script already done in fact https://github.com/saltstack-formulas/salt-formula/blob/master/dev/setup-salt.sh, I just need to copy the dev directory to something like _bootstrap and adapt my stuff in it
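
The bootstrap pillar for that first salt-call run might look like this, assuming salt-formula's salt:master pillar layout (see the formula's pillar.example); the repo URL is illustrative:

    salt:
      master:
        fileserver_backend:
          - gitfs
        gitfs_remotes:
          - https://git.example.com/salt-states.git
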
15:51 fernie joined #salt
15:52 tiwula joined #salt
15:55 zerocoolback joined #salt
15:55 onslack joined #salt
15:56 pcdummy nebuchadnezzar: https://bootstrap.saltstack.com/ is much better
15:56 pcdummy https://github.com/saltstack/salt-bootstrap
15:57 nebuchadnezzar pcdummy: I tried it but it only installs salt; I would like to have everything in Git deployed by salt, the configuration of the master too
15:58 pcdummy nebuchadnezzar: what about using gitfs for pillar?
15:58 pcdummy nebuchadnezzar: ahh i read your initial question
15:58 nebuchadnezzar pcdummy: that's what I want to do, but I need to start somewhere
15:59 nebuchadnezzar :-D
16:00 pcdummy I would start with that bootstrap script - somewhere i had one where it lets you choose older stable variants.
16:03 pcdummy nebuchadnezzar: why not salt-ssh?
16:03 msmith are you intending to automate a redeployment of your entire environment, or simply to do it once and build on it from there?
16:03 msmith because you can saltify salt once installed quite easily
16:04 onslack joined #salt
16:04 zer0def was about to mention salt-cloud and the saltify provisioner
16:04 onslack <gtmanfred> test
16:04 gtmanfred ok, working
16:04 nebuchadnezzar msmith: I want to automate the redeployment, for test environments spawned in our OpenNebula for example
16:04 gtmanfred ugh, saltify is the worst, i really just need to find time to fix the manage.bootstrap runner
16:05 pcdummy :)
16:05 msmith test environments are commonly done using git branches
16:05 msmith salt has an entire environment system for that very purpose :)
16:06 msmith (waiting for gtmanfred to mention kitchen....)
16:06 nebuchadnezzar msmith: sure that's a great feature. I want to spawn a new VM, bootstrap it with the same initial git repository, just using another branch ,-)
16:07 msmith what is it that you're testing though? i'd imagine that creating an entirely new master is perhaps overkill
16:07 mikecmpbll joined #salt
16:08 msmith an existing master is fine for testing new configs in a dev environment, and indeed that's what kitchen is good at
16:08 gtmanfred kitchen isn't really good at testing master
16:08 gtmanfred you have to futz with the minion.erb file to get it to work
16:08 nebuchadnezzar msmith: we want to make it reusable for separate environments
16:08 gtmanfred still waiting on multinode kitchen support
16:08 msmith s/testing new configs/testing new minion configs/
16:09 gtmanfred oh, yeah
16:09 gtmanfred I really want this https://github.com/chef/chef-rfc/blob/master/rfc084-test-kitchen-multi.md
16:09 msmith nebuchadnezzar: reusable how tho?
16:10 msmith unless you intend to massively scale out to thousands of minions, using multi-master and/or syndic, then a single one should still do
16:11 msmith i'm not saying you can't, just that i'm not sure what the use case is that might need it :)
16:13 nebuchadnezzar msmith: simple: ease the management of unrelated infrastructures (no common master possible)
16:14 msmith totally isolated? gotcha
16:15 nebuchadnezzar msmith: not totally, they will access shared “generic” salt-formulas ;-)
16:16 msmith i still haven't, but i was looking at a bare metal salted environment. the machine literally boots cold from pxe, unattended installs debian with qemu and salt, and then drags in config to create and initialise windows guests over kvm, salts those, and brings up an entire solution, all from salt config
16:16 nebuchadnezzar msmith: when you have clients you cannot always plug them into your own infrastructure
16:18 msmith i've got the 2nd part, i can salt a new vm and specialise for the solution, i just never got around to doing the pxe install
16:18 msmith nebuchadnezzar: well yes. it does sound like you're after a usb -> full salt environment :)
16:19 nebuchadnezzar msmith: that's what I'm doing right now: having a first machine called the “master”, salt it to configure an LTSP to build “server images” and then boot bare metal hosts to serve as OpenNebula hypervisors
16:21 msmith i never got around the need for a 15gb pre-installed windows server image. trying to take a stock iso and put all the customisations in it took significantly longer than a prebuilt image that i just copy and seed
16:21 nebuchadnezzar we are doing full GNU/Linux ;-)
16:22 msmith then your unattended install, even with packages, will far outstrip my windows install, every time
16:23 msmith i seem to be one of the few doing this, too. others seem to be using cloud these days
16:23 nebuchadnezzar msmith: “master” -> configures “master” to be “LTSP server” (DHCP/TFTP) -> “LTSP server” builds GNU/Linux images <- bare metal and VM will PXE boot and load that tiny image (~300MB) in RAM and automatically connect to “master”
16:24 msmith sweet! make that a formula and i'm sure you'll have people re-using it ;)
16:24 nebuchadnezzar :-D
16:24 nebuchadnezzar msmith: that's the plan
16:24 msmith anything we deploy using linux is likely to use kube now, rather than bare-metal
16:25 nebuchadnezzar I'm just a little bit stuck on the bootstrap and which end to start from
16:25 msmith it's just the windows stuff that's too hungry to share nicely
16:25 zerocoolback joined #salt
16:27 nebuchadnezzar msmith: sure, we could imagine PXE boot some coreos/whatever, but there is a first machine to deploy to be able to deploy all the others…
16:28 nebuchadnezzar we want to finally PXE boot everything except the first machine, which needs to be done manually
16:28 masber joined #salt
16:28 nebuchadnezzar well, manually “git clone && ./make-world” ;-)
16:28 msmith that's quite some task you have there. sounds fun :)
16:29 nebuchadnezzar absolutely fun
16:30 nebuchadnezzar will continue tomorrow, need to rest for now
16:30 inad922 joined #salt
16:38 Guest73 joined #salt
16:39 zer0def pxe boot or install? only the latter would make sense in this context
16:39 zer0def and even then, a lot of it could be handled before first persistent boot
16:49 ecdhe joined #salt
16:51 onlyanegg joined #salt
17:09 nebuchadnezzar zer0def: pxe boot, we do as much as we can during the build of the boot image, but scripting has its limits for configuring things ,-)
17:10 zer0def in that case it's even easier
17:16 nebuchadnezzar zer0def: yes, configuring the minions is easy; setting up the complete infra from a single git clone is a little bit trickier. I split the task into independent formulas etc. but I was less sure how to bootstrap a reproducible master.
17:16 onslack <scub> Curious; anyone have some test-kitchen magic to keep .git from syncing during a converge? Running into an issue where the secondary converge fails due to read-only objects nested within git's structure
17:17 zer0def yeah, i can imagine a bunch of quirks that can come up from such a setup, have fun doing it
17:17 nebuchadnezzar zer0def: thanks
17:56 Guest73 joined #salt
18:07 jmedinar joined #salt
18:08 jmedinar Hello. How can I force a module to run with Python3 ?
18:09 jmedinar the module has the shebang... and I have installed influxdb with pip for python3
18:09 jmedinar but when I execute with salt... it runs it with 2.7 and says the influxdb module doesn't exist
18:09 Hybrid joined #salt
18:11 onslack <msmith> from what you're saying it appears as if salt will run the module using whichever version of python salt itself is using. you could try to use the python3 version of salt if it's available for your host
18:11 onslack <msmith> or try to install influxdb in the 2.7 environment, again if available
18:12 jmedinar yeah I did install influxdb for 2.7 but it still sends the same error
18:13 onslack <msmith> is it possible you have multiple python installations on the one machine? salt would need it installed into whichever one it's specifically using
18:15 jmedinar any document on how to configure it for python3?
18:15 eekrano joined #salt
18:25 thelocehiliosan joined #salt
18:27 Hybrid joined #salt
18:41 fl3sh joined #salt
18:43 JPT joined #salt
18:50 sjl_ does @with_deprecated() -decorator need something else besides "from salt.utils.decorators import with_deprecated"?
18:54 sjl_ simply adding @with_deprecated makes salt-call produce a messy error (https://pastebin.com/bJAVbhwW) from a function-pair working just fine (https://pastebin.com/fcS7GTxm)
18:55 sjl_ so, it seems that i could use some gentle hand-holding to guide me to the right import/usage
19:03 thelocehiliosan joined #salt
19:07 Trauma joined #salt
19:09 cewood joined #salt
19:37 bendoin joined #salt
19:47 alvinstarr joined #salt
20:00 mikecmpbll joined #salt
20:05 eekrano joined #salt
20:23 aldevar left #salt
20:24 AngryJohnnie joined #salt
20:37 ingslovak joined #salt
20:52 eekrano joined #salt
20:56 cewood joined #salt
21:00 nbari what are the options to update the states/pillars on the server (not using gitfs)
21:02 MTecknology if it's not gitfs, then it shouldn't need updating because it's already updated?
21:03 nbari my point is that I would like to avoid needing to update states on the server
21:04 nbari normally I have states/pillars in a repository, which is later just uploaded to the salt server via rsync
21:05 MTecknology so are you asking for a list of available backends?
21:06 nbari a way to update states/pillars on a server not using gitfs
21:06 nbari I was thinking of using salt-api: create a git hook that would then just update (git pull) on the master
21:07 nbari just asking to know more options
21:19 * MTecknology blinks
21:40 Edgan nbari: gitfs is better
21:41 Edgan nbari: Killing the need to git clone the git repos while setting up a salt master makes gitfs an obvious choice
21:41 Edgan nbari: Also stops people from editing files on the salt master
21:42 MTecknology and existing tools support that magic logic in a much cleaner/nicer/more-predictable way
21:42 Edgan It also supports having multiple branches processed instead of just the one currently active on disk
21:43 Edgan Though I lock it to one branch in my case
21:46 Edgan The only issue I have ever had with gitfs was it merging top.sls files across branches, hence the branch lock.
21:46 Edgan I have thought I was having caching or no-sync bugs, but that ended up not being the case
21:46 Edgan It has been very reliable when I have used it
21:48 MTecknology When I have different environments, I just have a separate repo for top.sls
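
A sketch of the branch-locked gitfs master config Edgan describes; the repo URL is illustrative. The per-remote base option pins the base environment to one branch, and top_file_merging_strategy stops top.sls files from other environments being merged in:

    fileserver_backend:
      - gitfs

    gitfs_remotes:
      - https://git.example.com/salt-states.git:
        - base: production

    # only consult the top file from the requested environment
    top_file_merging_strategy: same
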
21:57 dave_den joined #salt
21:59 scarcry joined #salt
22:09 masber joined #salt
22:12 oida joined #salt
22:13 thelocehiliosan joined #salt
22:23 onlyanegg joined #salt
22:28 ymasson joined #salt
22:36 sh123124213 joined #salt
23:11 onlyanegg joined #salt
23:27 eseyman joined #salt
23:34 thelocehiliosan joined #salt
23:37 cswang joined #salt
