IRC log for #salt, 2017-11-10


All times shown according to UTC.

Time Nick Message
00:15 mavhq joined #salt
00:17 johnj_ joined #salt
00:25 GMAzrael joined #salt
00:27 Sarphra joined #salt
00:28 Sarphra left #salt
00:31 Bryson joined #salt
00:33 threwahway_ joined #salt
00:45 Guest73 joined #salt
00:57 sp0097 joined #salt
01:07 tiwula joined #salt
01:10 phileus0 joined #salt
01:12 phileus0 Hi.. is there a way to print debug variables in a Jinja template? I just want to be able to print out a variable like I can in a regular programming language.
01:16 phileus0 Anyone...bueller?
01:17 johnj_ joined #salt
01:36 phileus0 joined #salt
01:42 bildz joined #salt
01:44 GMAzrael joined #salt
01:48 mavhq joined #salt
02:17 sp0097 joined #salt
02:19 numkem joined #salt
02:19 johnj_ joined #salt
02:28 phileus0 joined #salt
02:29 MTecknology phileus0: patience is usually useful... if you'd stuck around, I'd have provided some tips from Ch3LL's session. :(
02:29 MTecknology lol.. I love that timing, but ya.. be patient man
02:30 MTecknology {% do salt.log.error(foo) %}
02:35 Praematura joined #salt
02:37 pipps joined #salt
02:37 MTecknology you can also just do # {{ foo }} and look at the rendered template
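
A minimal sketch combining both debugging tricks MTecknology mentions, assuming a state template with some pillar value to inspect (the pillar key and default here are made up):

    {% set foo = salt['pillar.get']('myapp:port', 'unset') %}
    {% do salt.log.error('foo = ' ~ foo) %}    {# lands in the minion log at ERROR level #}
    # foo = {{ foo }}    {# visible in the rendered template, e.g. via state.show_sls #}
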
02:54 numkem joined #salt
02:56 ilbot3 joined #salt
02:56 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.8, 2017.7.2 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
03:07 vchauhan joined #salt
03:13 major joined #salt
03:21 johnj_ joined #salt
03:25 cgiroua joined #salt
03:27 jacksontj joined #salt
03:37 Manor joined #salt
03:43 GMAzrael joined #salt
03:50 StarHeart left #salt
03:50 Edgan joined #salt
03:53 Praematura joined #salt
03:53 TheRock2 joined #salt
03:53 TheRock2 left #salt
04:03 onlyanegg joined #salt
04:09 kellyp joined #salt
04:12 pipps joined #salt
04:13 pipps joined #salt
04:22 johnj_ joined #salt
04:32 pualj joined #salt
04:32 SkyRocknRoll joined #salt
04:36 pualj joined #salt
04:43 GMAzrael joined #salt
04:47 pipps joined #salt
04:49 onlyanegg joined #salt
04:58 phileus0 joined #salt
05:00 onlyanegg joined #salt
05:07 pualj_ joined #salt
05:11 ahrs joined #salt
05:15 sp0097 joined #salt
05:20 Praematura joined #salt
05:23 johnj_ joined #salt
05:25 onlyanegg I'm writing a state which calls functions in an execution module. There's a function in the execution module which I'd like to use in the state, but I don't really want to make it available to the end user.
05:27 onlyanegg It seems like __salt__ only keeps track of "public" functions (ie. those that don't start with _), and I don't see any examples of importing execution modules from states. Is there a way to call "private" functions within states?
05:30 onlyanegg probably not the best time to ask :D
05:47 GMAzrael joined #salt
05:53 tiwula joined #salt
06:00 MTecknology onlyanegg: I don't think I really understand the question. You want a function that can't be called to be available for use?
06:01 threwahway_ joined #salt
06:02 impi joined #salt
06:05 onlyanegg I have a function _do_something() that I use as a utility in the module and that I'd like to use as a utility in the state, but I don't really see a use for exposing it to the user
06:06 onlyanegg and it seems like __salt__ contains only module functions that don't start with '_', so I can't call it through __salt__
06:06 onlyanegg I also don't see any examples of importing execution modules in states
06:07 onlyanegg so I'm wondering if there's another way, or if some of my understanding is wrong...
06:09 maestropandy joined #salt
06:09 maestropandy left #salt
06:11 MTecknology You should not import modules from states, no
06:11 MTecknology How are you defining "user"?
06:12 whytewolf onlyanegg: https://docs.saltstack.com/en/latest/topics/utils/index.html
06:14 whytewolf and private functions by definition are meant to be private
06:15 phileus0 joined #salt
06:16 zerocoolback joined #salt
06:16 onlyanegg MTecknology: someone using the module through the salt interface (eg. salt-call my_module.do_something)
06:20 MTecknology so.. someone with root on a server
06:20 MTecknology Are you trying to put plain text passwords in a module or something? If so... don't do that.
06:22 onlyanegg no, I just think it would clutter the module to have functions available which shouldn't be called by a user, but may need to be called elsewhere
06:22 vchauhan joined #salt
06:23 MTecknology If a function should be available for use, then it should be available for use. If not, then it shouldn't be.
06:23 whytewolf onlyanegg: that is what utils is for. functions that are not meant to be called directly and are useful in more than one place
06:24 MTecknology and that thing... I didn't know about _utils until now.
06:24 onlyanegg Thx, whytewolf. That's good to know, but not really what I'm looking for.
06:24 johnj_ joined #salt
06:24 whytewolf in what way is it not what you are looking for?
06:24 threwahway_ joined #salt
06:25 maestropandy joined #salt
06:25 onlyanegg MTecknology: There are helper functions that you may want to call from public functions, but shouldn't be available
06:26 MTecknology then make them private and don't call them from outside the module that needs the helpers
06:26 MTecknology or if multiple modules need the helpers, then whytewolf's link is pertinent
06:26 MTecknology which is cool, because I was really struggling to understand what the point of _utils was, but now it makes sense
06:27 onlyanegg MTecknology: But I'd like to call them from the related state - but it sounds like that's not possible
06:28 whytewolf __utils__ is accessible from states and modules
06:28 whytewolf and pretty much every _*
06:29 LocaMocha joined #salt
06:29 onlyanegg whytewolf: This function is really only useful in this specific context
06:29 onlyanegg this is hard to explain in the abstract...
06:30 MTecknology if the problem is hard to explain it's a bad idea
06:30 MTecknology If it’s hard to explain, it’s probably a bad idea. **
06:30 whytewolf ... if it is only useful in the context of that one module, then by the definition you just gave that would exclude the possibility of a related state
06:30 onlyanegg we have a file module and a file state
06:31 onlyanegg there are functions in the file module that only the file state calls
06:32 whytewolf and why can't there be a "file" util that is the glue between them?
06:33 MTecknology most state modules rely entirely on the execution module to do anything, including checking whether they're going to tell the exec module to do anything... that's the case even if only one state uses that module
06:33 onlyanegg ok, maybe there can be - is that best practice?
06:34 MTecknology I'm aware debconf is a good example of this..
06:35 whytewolf actually yes, that is best practice. if you have functions that need to be shared between the state and the module that do not belong in public space, then utils is where they should go
06:37 onlyanegg ok, I'll definitely take a look at that
06:37 phileus0 joined #salt
06:37 onlyanegg thanks, guys :)
06:38 whytewolf nowadays, there are more module/state/runner/cloud pairs that are putting the entire guts into utils and then just forking off the public bits into their respective ideals
06:39 whytewolf it actually is kind of kewl
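
A rough sketch of the utils pattern whytewolf describes, with made-up names: the shared helper lives in a _utils module, gets synced with saltutil.sync_utils (or sync_all), and both the execution module and the state module call it through __utils__:

    # salt://_utils/myfile_common.py  (hypothetical)
    def check_path(path):
        '''Shared helper; never exposed through __salt__.'''
        return path.startswith('/srv')

    # in salt://_modules/myfile.py or salt://_states/myfile.py:
    #     if not __utils__['myfile_common.check_path'](name):
    #         return {'result': False, 'comment': 'bad path'}
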
06:40 MTecknology raetevent.py, eh?
06:40 onlyanegg yeah - this module / state could actually fit into cloud as well. That's something I haven't thought through yet...
06:42 whytewolf sigh raet ...
06:45 whytewolf okay, I have work in the morning i should be off
06:47 maestropandy left #salt
06:48 GMAzrael joined #salt
06:55 felskrone joined #salt
07:07 pipps joined #salt
07:11 maestropandy joined #salt
07:12 sh123124213 joined #salt
07:22 evle1 joined #salt
07:23 jas02 joined #salt
07:23 jas02 joined #salt
07:25 johnj_ joined #salt
07:40 do3meli joined #salt
07:40 do3meli left #salt
07:44 onlyanegg joined #salt
07:47 Ricardo1000 joined #salt
07:48 GMAzrael joined #salt
07:49 zulutango joined #salt
07:49 maestropandy joined #salt
07:51 maestropandy1 joined #salt
07:51 mannefu joined #salt
07:52 k_sze[work] joined #salt
07:54 Manor joined #salt
07:56 cyborg-one joined #salt
08:01 impi joined #salt
08:02 threwahway_ joined #salt
08:02 threwahway joined #salt
08:04 DarkKnightCZ joined #salt
08:06 aldevar joined #salt
08:09 kellyp joined #salt
08:18 Hybrid joined #salt
08:26 johnj joined #salt
08:28 Manor_ joined #salt
08:32 pbandark joined #salt
08:35 Hybrid joined #salt
08:45 onlyanegg joined #salt
08:45 GMAzrael joined #salt
08:49 maestropandy joined #salt
08:52 jrenner joined #salt
08:53 zerocoolback joined #salt
09:05 kellyp joined #salt
09:07 maestropandy1 joined #salt
09:07 Gabemo joined #salt
09:07 maestropandy1 left #salt
09:09 maestropandy2 joined #salt
09:10 maestropandy2 left #salt
09:21 usernkey joined #salt
09:24 vchauhan joined #salt
09:27 johnj joined #salt
09:27 Manor joined #salt
09:37 SkyRocknRoll_ joined #salt
09:40 GMAzrael joined #salt
10:02 sp0097 joined #salt
10:10 _KaszpiR_ joined #salt
10:12 sfxandy joined #salt
10:15 zerocoolback joined #salt
10:15 sfxandy hi everyone.  ok, am trying this via a state.orchestrate call... https://gist.github.com/phtx3/ea3624ccbc8b239508adbc305415f19c
10:16 sfxandy and I am getting the error 'ERROR executing 'state.sls': Pillar data must be formatted as a dictionary, unless pillar_enc is specified.'
10:16 sfxandy i'm pretty certain it is formatted as a dictionary i.e. a key-value pair
10:16 sfxandy does anyone have any idea what I am doing wrong?
10:23 zerocoolback joined #salt
10:25 apofis joined #salt
10:28 _KaszpiR_ joined #salt
10:28 johnj joined #salt
10:29 pualj_ joined #salt
10:31 dhwt joined #salt
10:38 haam3r_ sfxandy: Take a look at the comment :)
10:40 Ajay_ joined #salt
10:40 haam3r_ sfxandy: or is the "environment_id" the key?
10:45 AAbouZaid joined #salt
10:45 GMAzrael joined #salt
10:46 onlyanegg joined #salt
10:47 AAbouZaid left #salt
10:50 sfxandy @haam3r_ yes the environment_id would essentially be the key
10:51 Praematura joined #salt
10:52 haam3r_ sfxandy: modified the comment. Does that work?
10:53 AAbouZaid joined #salt
10:53 sfxandy ah haam3r_ , ok that error is no longer present.  never occurred to me to format the data as a set of key/value pairs that way
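
The gist itself isn't preserved, but the fix haam3r_ suggested amounts to passing pillar as a YAML mapping inside the orchestration state, along these lines (target, sls name, and value are placeholders):

    run_state_with_pillar:
      salt.state:
        - tgt: 'app*'
        - sls: myapp.deploy
        - pillar:
            environment_id: env-42
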
10:54 AAbouZaid Hi, I created a SaltStack formula for `ulog`, how could I push it to `saltstack-formulas` repo? Thanks.
10:54 AAbouZaid https://github.com/AAbouZaid/ulog-formula
10:57 N-Mi joined #salt
10:57 N-Mi joined #salt
10:57 haam3r_ sfxandy: great. Yeah YAML formatting is well YAML formatting :D
10:57 kalessin joined #salt
10:59 jhauser joined #salt
11:05 Gilfoyle- joined #salt
11:05 sfxandy thanks haam3r_
11:07 Ricardo1000 joined #salt
11:08 megamaced joined #salt
11:18 Hybrid joined #salt
11:26 coredumb mmmh I wonder what has changed in reactor/orchestrate in 2017.x for my old orchestrators to fail to run with: TypeError: orchestrate() takes at least 1 argument (0 given)
11:26 coredumb any idea?
11:26 Manor joined #salt
11:27 ozux joined #salt
11:29 johnj joined #salt
11:30 gmoro_ joined #salt
11:43 ParsectiX joined #salt
11:44 ozux joined #salt
11:44 ParsectiX Hey team is there a way to test salt mine functions locally without building a salt-master?
11:45 GMAzrael joined #salt
11:46 usernkey1 joined #salt
12:30 johnj joined #salt
12:39 phileus0 joined #salt
12:44 GMAzrael joined #salt
12:46 sfxandy @coredumb did anyone get back to you?
12:47 onlyanegg joined #salt
12:48 StolenToast joined #salt
12:51 coredumb sfxandy: I fixed it myself like a grown up actually
12:51 coredumb but thx for asking :)
12:51 coredumb the new kwarg thing not documented in the release notes got me
12:52 sfxandy fair enough!
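
For anyone hitting the same TypeError: in 2017.7 the reactor passes runner arguments explicitly, so a reactor sls that calls state.orchestrate needs the mods kwarg spelled out under args, roughly like this (file and sls names hypothetical; see the 2017.7 reactor docs):

    # /srv/reactor/run_orch.sls
    invoke_orchestrate:
      runner.state.orchestrate:
        - args:
          - mods: orch.deploy
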
13:26 DammitJim joined #salt
13:28 DammitJim what keywords do I use on google to search the history of states applied to a minion and their results?
13:30 Hybrid joined #salt
13:31 johnj joined #salt
13:33 tom[] i have an orchestration sls that fails at a specific function (cmd.run). the salt-run log has no useful information "Result: False". but there's no evidence that the command ran at all -- there would be a log entry if it ran and failed
13:33 tom[] how can i try to debug this?
13:33 sfxandy tom[] can you paste the state into a gist so we can take a look...?
13:33 DammitJim how can I see the output of the last state run against a minion?
13:35 tom[] sfxandy: https://gist.github.com/spinitron/b8d926785781dd53a89065dc6b0ac03c
13:36 tom[] the {{ log }} file exists and is writable by the runas user
13:37 sfxandy is this an issue...
13:37 sfxandy there appears to be a space in the command executable path
13:38 tom[] the command executable is /v2/doc_root/yii
13:38 sfxandy ah ok
13:38 tom[] epf/data/run is a subcommand
13:38 sfxandy ok, and do all your Pillar values expand correctly?
13:39 tom[] how can i verify that?
13:39 sfxandy show_sls
13:39 sfxandy state.show_sls
13:39 tom[] i'll try
13:40 tom[] i'm not sure how to use that. salt-run state.show_sls orch.epf-update
13:40 tom[] 'state.show_sls' is not available.
13:41 sfxandy try salt-call state.show_sls orch.epf-update
13:41 sfxandy i.e. not as a runner
13:43 tom[] it looks good to me
13:43 sfxandy so the actual command as rendered works?
13:43 sfxandy if you run it in the shell?
13:44 tom[] i'll have to copy paste it to the target
13:44 sfxandy ok
13:44 tom[] it takes an hour or so to run
13:44 tom[] seems to work
13:44 sfxandy ok
13:45 mannefu joined #salt
13:45 tom[] the first thing it does is write a log line saying that it is starting. since i didn't see that in the log, i don't believe it was ever actually invoked
13:46 tom[] as run by the orch runner, i mean
13:46 sfxandy sounds reasonable.  what did the actual orchestration output show when it invoked the salt.function?
13:46 GMAzrael joined #salt
13:47 tom[] i added it to the gist https://gist.github.com/spinitron/b8d926785781dd53a89065dc6b0ac03c
13:48 tom[] it spent 15 seconds and then gave up
13:48 tom[] i wonder if this might be a timing issue
13:48 sfxandy how so?
13:48 sfxandy what about a cmd.wait?
13:48 sfxandy rather than a cmd.run?
13:48 Manor_ joined #salt
13:49 sfxandy thing is if it takes an hour to run...
13:49 tom[] the preceding orch function was to start an lxc container. this function runs in that container. if the container's minion isn't ready when the orch runner proceeds to run the next function, will all be ok?
13:50 tom[] perhaps i should add something to repeat pinging the container's minion until it is happy
13:50 sfxandy maybe add a reactor?  to listen for the minion start event?
13:50 coredumb ok anyone has seen salt-cloud nova driver complaining of missing python-novaclient when it's obviously installed? On CentOS 7
13:51 tom[] maybe. that will require learning what reactor is
13:51 sfxandy they're very useful and simple to implement.  I use them at the moment for bootstrapping new minions as they get spun up in AWS
13:52 tom[] i'll read about it
13:52 sfxandy may not help you but it takes the guess work out of identifying when a minion is actually ready to accept commands
13:56 tom[] i think it's worth a try. the evidence, such as it is, suggests that the orch runner did not execute the specified function but failed the state anyway. under the circumstances, the minion's container's os having been started only moments ago, it seems like a failure to communicate might be the problem
13:56 Manor joined #salt
13:56 sfxandy ok, seems a reasonable path to go down
13:57 coredumb damn where is this plugin coming from? novaclient.auth_plugin
13:57 sfxandy so you would create a reactor config that listens for the minion start event, then launches the orchestration
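
A sketch of the wiring sfxandy describes, with hypothetical paths: the master maps the minion-start event to a reactor sls, and that sls launches the orchestration using the same runner syntax as the earlier reactor sketch:

    # /etc/salt/master.d/reactor.conf
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/minion_up.sls

    # /srv/reactor/minion_up.sls
    start_orch:
      runner.state.orchestrate:
        - args:
          - mods: orch.epf-update
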
13:57 coredumb apparently python-novaclient from pip doesn't provide it >_<
13:59 tom[] sfxandy: sounds complicated. this is in the middle of an orch that has several steps in advance
13:59 tom[] another idea: lxc.wait_started https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.lxc.html#salt.modules.lxc.wait_started
14:00 sfxandy ok
14:02 sfxandy i assume that returns a true/false
14:02 sfxandy so how would you use that in an orchestration run
14:02 ParsectiX joined #salt
14:03 ParsectiX Hey team is there a way to test salt mine functions locally without using a salt-master?
14:03 tom[] make it depend on the lxc.start salt.function and make the command (that you saw in the gist) depend on that instead of the lxc-start
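
In orchestration terms, that dependency chain might look roughly like this (the IDs, targets, and container name are made up; only cmd.run and the yii command come from the gist):

    start-container:
      salt.function:
        - name: lxc.start
        - tgt: lxc-host
        - arg:
          - mycontainer

    wait-for-minion:
      salt.function:
        - name: lxc.wait_started
        - tgt: lxc-host
        - arg:
          - mycontainer
        - require:
          - salt: start-container

    run-update:
      salt.function:
        - name: cmd.run
        - tgt: mycontainer
        - arg:
          - /v2/doc_root/yii epf/data/run
        - require:
          - salt: wait-for-minion
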
14:04 ozux joined #salt
14:04 sfxandy ok but if the function returns false..... then what do you do?
14:04 tom[] everything fails
14:04 tom[] and i get an email from the cron that started the salt-run
14:05 sfxandy ok.  but don't give up on using a reactor... at least give it a try
14:05 tom[] if the container fails to start properly, then i don't expect to automate the recovery
14:05 tom[] sure, i'll study the reactor
14:05 sfxandy well you wont.  but this just gives it a better chance of succeeding
14:06 sfxandy if you need help with reactors then just ping me on here
14:06 tom[] it seems useful but i'll need time to understand its uses and scope
14:06 tom[] sure will, sfxandy. thanks for your help!
14:06 sfxandy np tom[]
14:06 sfxandy wasnt much help at all !!
14:07 tom[] still helpful when i've nobody else with whom to talk issues through
14:07 sfxandy fair point!  good luck, drop me a line if you need a hand
14:11 Praematura joined #salt
14:12 Naresh joined #salt
14:14 _JZ_ joined #salt
14:17 gh34 joined #salt
14:17 onlyanegg joined #salt
14:20 coredumb sfxandy: o/
14:20 coredumb (me raising my hand to you)
14:20 sfxandy whats the matter coredumb?
14:20 sfxandy lol
14:21 sfxandy ask away mate
14:21 coredumb can't make this damn salt-cloud connect to my openstack tenant
14:21 sfxandy ah salt-cloud
14:21 sfxandy feel free to PM
14:29 tom[] sfxandy: lxc.wait_started Result: True but the following cmd.run fails in the same way as before. so i think i need either a better theory as to why or i need a better error message
14:29 tom[] or both
14:30 tom[] g2g
14:32 johnj joined #salt
14:33 evle2 joined #salt
14:36 DarkKnightCZ joined #salt
14:37 ozux joined #salt
14:38 GMAzrael joined #salt
14:42 Manor_ joined #salt
14:44 ozux__ joined #salt
14:51 user-and-abuser joined #salt
14:53 J0hnSteel joined #salt
14:53 Brew joined #salt
14:59 onlyanegg joined #salt
15:00 DarkKnightCZ joined #salt
15:02 ouemt joined #salt
15:03 scbunn joined #salt
15:04 cgiroua joined #salt
15:09 tiwula joined #salt
15:14 ozux joined #salt
15:20 pualj_ joined #salt
15:26 Praematura joined #salt
15:26 pualj joined #salt
15:26 sp0097 joined #salt
15:31 rojem joined #salt
15:32 gmoro joined #salt
15:33 chowmeined joined #salt
15:33 johnj joined #salt
15:34 saltslackbridge joined #salt
15:40 edrocks joined #salt
15:43 pewpew joined #salt
15:43 noobiedubie joined #salt
15:48 vchauhan joined #salt
15:52 lkolstad joined #salt
15:57 Manor joined #salt
16:01 usernkey joined #salt
16:03 onlyanegg joined #salt
16:15 mavhq joined #salt
16:17 edrocks joined #salt
16:18 pualj_ joined #salt
16:20 heaje joined #salt
16:23 SMuZZ joined #salt
16:24 ozux joined #salt
16:24 schemanic joined #salt
16:25 edrocks joined #salt
16:27 dxiri joined #salt
16:33 numkem joined #salt
16:33 usernkey1 joined #salt
16:34 johnj_ joined #salt
16:34 DammitJim joined #salt
16:42 zerocool_ joined #salt
16:43 colegatron joined #salt
16:45 kellyp joined #salt
16:52 gtmanfred ping
16:52 gtmanfred neat, i am not sure how that bot is still up :/
16:53 whytewolf luck?
16:53 gtmanfred well, only the frontend is down right now
16:56 nomeed joined #salt
16:56 nixjdm joined #salt
16:58 Manor joined #salt
16:59 bowhunter joined #salt
17:07 devoper joined #salt
17:08 cyborg-one joined #salt
17:10 zerocoolback joined #salt
17:16 Praematura joined #salt
17:18 impi joined #salt
17:27 onlyanegg joined #salt
17:29 aldevar left #salt
17:35 el_wood joined #salt
17:35 johnj_ joined #salt
17:49 el_wood joined #salt
17:51 el_wood_le joined #salt
17:58 xet7 joined #salt
18:00 pipps joined #salt
18:05 pualj_ joined #salt
18:05 Diaoul joined #salt
18:20 Guest73 joined #salt
18:32 scarcry joined #salt
18:32 phileus0 joined #salt
18:34 joe__ joined #salt
18:36 johnj_ joined #salt
18:46 edrocks joined #salt
18:56 pualj_ joined #salt
18:59 pipps joined #salt
19:01 coredumb whytewolf: you around?
19:01 whytewolf ?
19:02 whytewolf there is a chance i might not be able to answer any questions.
19:03 user-and-abuser can anyone point to some examples about repo's sls configs
19:03 pipps joined #salt
19:03 whytewolf repos's sls config?
19:03 user-and-abuser https://gist.githubusercontent.com/klyr/cdf4bc8fcd741465d8e9b7d5a87a7a5c/raw/c3ef25aff37337885817cadfce26dbe2616bba25/add-yum-repo-salt.sh
19:03 whytewolf you mean pkgrepo?
19:03 user-and-abuser yes I think so
19:04 user-and-abuser my company uses the elements flag for their artifactory repos
19:04 user-and-abuser but I want to bypass this
19:04 user-and-abuser and use the public
19:04 whytewolf oh, um that is a question i can not answer
19:04 user-and-abuser they also dont seem to use the base config
19:05 user-and-abuser im not trying to bypass policy
19:05 user-and-abuser i just want to see what it looks like
19:05 coredumb whytewolf: I know you use salt-cloud with openstack
19:05 whytewolf I used to a long time ago coredumb
19:06 coredumb I've a mitaka infra here - which I don't own - and salt-cloud cannot seem to successfully login
19:06 coredumb oh :O
19:06 coredumb apparently salt-cloud is the only thing that can't log in
19:06 saltslackbridge <gtmanfred> You will need to make sure to use `use_keystoneauth: true` if you are on mitaka and don’t have keystonev2 enabled
19:06 coredumb :(
19:06 saltslackbridge <gtmanfred> and even then, it is going to be fun.
19:06 saltslackbridge <gtmanfred> I am working on rewriting that whole driver
19:07 coredumb so I wanted to see another config that works, to see if I missed something obvious
19:07 coredumb gtmanfred: yes user_keystoneauth: True
19:07 saltslackbridge <gtmanfred> use_keystoneauth not user
19:07 saltslackbridge <gtmanfred> also, the config in the docs is the one I used to test it
19:07 coredumb not user ?
19:08 saltslackbridge <gtmanfred> you typed user_keystoneauth
19:08 coredumb yeah typo ^^
19:08 saltslackbridge <gtmanfred> The example on the page, https://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.nova.html is the one that I know for a fact works
19:08 saltslackbridge <gtmanfred> if your identity url does not end with v3, it might not work, because it uses that to determine if it is keystonev3 or keystonev2
19:08 coredumb it does not here
19:08 coredumb :(
19:08 coredumb wasted my entire afternoon on this
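
For reference, a nova provider config along the lines of the docs example gtmanfred links, with the v3-suffixed identity URL he mentions (all values are placeholders):

    # /etc/salt/cloud.providers.d/openstack.conf
    my-openstack:
      driver: nova
      identity_url: 'https://keystone.example.com:5000/v3/'  # the trailing /v3 is how the driver picks keystone v3
      use_keystoneauth: True
      user: demo
      password: '***'
      tenant: demo
      compute_name: nova
      compute_region: RegionOne
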
19:09 whytewolf didn't for me in mitaka either. which was one of the reasons i dropped using salt-cloud
19:09 whytewolf it would auth v3 then ask for a v2 catalog which isn't allowed
19:13 noobiedubie joined #salt
19:14 coredumb whytewolf: oh
19:14 coredumb I feel bad having to integrate ansible in the loop to be honest
19:15 noobiedubie anyone having issues, or had issues, using the acl.present state to set mask permissions? Users & groups work fine, and even default:mask, but trying to set just the mask fails with 'directory does not exist', even though the user state we use is exactly the same, minus the acl_type being mask obviously
19:15 * whytewolf shrugs. I just started using heat
19:15 noobiedubie btw on centos 7 latest stable saltstack
19:15 coredumb whytewolf: not provided here -_-
19:15 gtmanfred basically all of the nova stuff was written a long time ago right before they started rapidly iterating on novaclient and breaking everything
19:15 gtmanfred coredumb: make sure you are using a version of novaclient that actually works
19:16 gtmanfred all of the problems with openstack should be fixed in Oxygen, when we have a driver that doesn't break every 2 weeks
19:16 coredumb gtmanfred: yeah doc should say clearly that anything > 0.6.1 doesn't work
19:16 coredumb you only see it using debug :(
19:16 whytewolf 3.6.1 not 0.6.1
19:16 cyteen joined #salt
19:17 astronouth7303 is there any work or prototypes or ideas floating around for a more asynchronous salt gateway/api/rpc?
19:17 GMAzrael joined #salt
19:17 noobiedubie side question: when i run salt-call, how can i have the actual commands the state is running print out? i tried -l all and --log-level-file; is that the most verbose?
19:17 whytewolf astronouth7303: there is always eAPI ;)
19:17 gtmanfred enterprise is moving to xmlrpc
19:17 coredumb whytewolf: 6.0.1 actually :D
19:18 gtmanfred noobiedubie: -l debug should show anytime cmd.run is called...
19:18 astronouth7303 like, i issue a minion command (local), and it'll send me responses as minions come back
19:18 stupidwrk joined #salt
19:18 coredumb gtmanfred: ok so basically I should just give up on using salt-cloud with mitaka right ?
19:18 gtmanfred but that is not always what is being run, sometimes it is python modules that do other stuff elsewhere, and don't always have the best logging
19:18 gtmanfred coredumb: unless you change the endpoint to end in /v3 then for now yes
19:18 astronouth7303 whytewolf: what's eAPI?
19:18 gtmanfred astronouth7303: enterprise
19:18 whytewolf astronouth7303: enterprise
19:19 astronouth7303 no wonder i've never heard of it
19:19 sp0097 joined #salt
19:19 coredumb gtmanfred: well i've tried in v3 still no luck here
19:19 coredumb or you mean _all_ the endpoints ?
19:19 gtmanfred no, just the url
19:19 whytewolf coredumb: does your openstack speak v2
19:19 gtmanfred though
19:19 gtmanfred mitaka does not by default
19:20 gtmanfred though this pr could help https://github.com/saltstack/salt/pull/43230
19:20 coredumb gtmanfred: it's supposed to
19:20 gtmanfred coredumb: this could also help https://github.com/saltstack/salt/pull/44199
19:20 coredumb got both rc config available
19:20 coredumb and it's what it says by default
19:21 coredumb I've successfully logged in with curl though
19:21 astronouth7303 so if i actually ponied up the money, would there be eAPI docs? or is it just a mysterious protocol that powers the GUI?
19:21 gtmanfred if you can try the openstack driver instead of nova, i know people have better luck with that, and it also supports v3 kind of
19:21 gtmanfred astronouth7303: eAPI will be available in the 5.3 release they are working on getting out the door now
19:22 gtmanfred and it will have docs
19:22 gtmanfred there is a whole private repo just for the docs.
19:22 gtmanfred coredumb: https://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.openstack.html
19:22 gtmanfred look for auth_version: 3 on there
19:22 coredumb gtmanfred: didn't work either
19:22 coredumb really
19:23 gtmanfred well, then wait for the new driver
19:23 coredumb spent my whole afternoon on this :'(
19:23 coredumb basically tried everything I could >_<
19:23 gtmanfred try spending 2 years trying to get time specifically to rewrite the whole thing
19:23 whytewolf I spent a week on it when mitaka dropped. it has gotten better since then
19:24 dhwt joined #salt
19:24 coredumb guess It'll be easier to spawn instances with ansible and relay to saltify then
19:24 gtmanfred coredumb: if you can get them to add this line to the identity in keystone, it will work with v2 https://github.com/gtmanfred/openstack-salt-ssh/blob/master/salt/keystone/config_domain.sls#L8
19:25 whytewolf odd. my mitaka cloud doens't have that line and keystone v2 works for me
19:25 gtmanfred otherwise, waiting for Oxygen is the best bet
19:26 gtmanfred it shouldn't have. mitaka was the first one that switched the default to v3, and required a default_domain_id for v2 to work
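
The line from gtmanfred's gist boils down to setting a default domain in keystone's identity section, i.e. roughly (a sketch inferred from the gist path and the default_domain_id remark above):

    # /etc/keystone/keystone.conf
    [identity]
    default_domain_id = default
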
19:26 coredumb yeah as you may guess I can't really wait for Oxygen :D
19:27 coredumb I'm supposed to be production ready next week :P
19:27 coredumb wooops
19:27 whytewolf I wonder if it was because my mitaka was an upgrade from liberty
19:27 gtmanfred whytewolf: yeah, that would have allowed you to keep the default
19:27 gtmanfred a clean install of mitaka wouldn't work though
19:28 gtmanfred yeah, no way I am finishing the rewrite by next week, i got 2 other things in front of finishing the rewrite, and then I gotta go review the code again
19:28 coredumb oh I don't mean _you_ have to come up with something
19:28 coredumb I mean that _I_ have to ;)
19:28 gtmanfred ahh, good luck
19:29 gtmanfred try terraform, it is quite good, but not really for dynamically building environments.
19:29 whytewolf agree on terraform. we used to use it at the bank.
19:29 whytewolf [for openstack deployments]
19:29 coredumb yeah will see what integrates better ansible or terraform
19:29 stupidwrk We have a bunch of nodes that all boot from the same PXE image. So these machines will boot up without a key and will generate a new one on each boot. I know that I can setup reactor to clear old keys of minions from the master.
19:30 coredumb to be launched from salt reactors and the likes
19:30 saltslackbridge <gtmanfred> Lunch time
19:30 stupidwrk My problem is the minions need to be restarted again to get them to register with the master after the original key is removed.
19:30 stupidwrk Is there a better way to handle that?
19:31 whytewolf stupidwrk: remove salt from the PXE image and then use saltify to salt them
19:32 stupidwrk Where can I read about that?
19:32 stupidwrk nm
19:32 stupidwrk Is this something relatively new in Salt?
19:32 stupidwrk Hadn't heard of it before
19:32 whytewolf not really. it is pretty old
19:33 stupidwrk Weird. been using Salt for a long time, but never seen it before.
19:33 coredumb it's in salt-cloud
19:33 whytewolf https://docs.saltstack.com/en/latest/topics/cloud/saltify.html
19:34 stupidwrk Ah okay. Not used salt-cloud before so I guess that explains it
19:35 pipps joined #salt
19:37 stupidwrk Just so I understand this process a bit more in depth: if we have a minion with the same name saltify itself more than once, will it remove itself via a call to the master and then add itself back?
19:37 johnj_ joined #salt
19:37 stupidwrk all via SSH?
19:40 whytewolf i don't know if it will deregister.
19:40 whytewolf but yes it is via SSH it installs the salt client. and generates keys based on minion name
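
A minimal saltify sketch (provider, profile, host, and credentials are all placeholders):

    # /etc/salt/cloud.providers.d/saltify.conf
    my-saltify:
      driver: saltify

    # /etc/salt/cloud.profiles.d/pxe.conf
    pxe-node:
      provider: my-saltify
      ssh_host: 10.0.0.42     # the freshly PXE-booted box
      ssh_username: root
      password: '***'

    # then enroll it under a chosen minion id:
    #   salt-cloud -p pxe-node node-0a1b2c
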
19:41 teratoma joined #salt
19:42 astronouth7303 salt-cloud is nice. i use it with a baked image on vmware/esxi. As long as the machine boots with ssh and a baked-in key, salt-bootstrap (used by salt-cloud) can assimilate it
19:43 aldevar joined #salt
19:43 aldevar left #salt
19:43 stupidwrk Yeah our servers are all booted from a common image and the host name is set from the server's primary MAC address on the network
19:44 stupidwrk The problem is, the master already knows the minion by a specific hostname/key.
19:44 astronouth7303 do they have any per-node storage?
19:44 stupidwrk negative
19:44 astronouth7303 hm.
19:44 stupidwrk completely baremetal, no storage
19:45 stupidwrk So when the same minion comes online again (after a reboot)
19:45 stupidwrk the minion name is the same
19:45 stupidwrk but the minion has a new key now
19:45 stupidwrk so the master rejects the auth attempt
19:46 stupidwrk I have reactor set to delete the minion from the master after that occurs
19:46 astronouth7303 salt won't help you here. My initial thought is some kind of glue that connects your instance management to salt key management
19:46 astronouth7303 you don't want any ol' client coming along and being able to enroll in salt
19:46 astronouth7303 (enroll==submit a key and have it accepted)
19:47 stupidwrk True, but this is a semi-secure environment (NAT behind a firewall)
19:47 mikecmpbll joined #salt
19:47 stupidwrk I have some code in the reactor that will do that.
19:47 stupidwrk It's not ideal.
19:47 astronouth7303 yeah, but if anyone gets behind the firewall, you're screwed (depending on your pillar code)
19:48 stupidwrk Yeah, our pillar code isn't really secure, we have other processes that handle that (rabbitmq)
19:48 astronouth7303 do you have a system that's able to push an event (to a program by command, or HTTP, or proprietary protocol) whenever a system goes down, or a request comes in to start on?
19:48 stupidwrk but I can see the rabbitmq connection details being problematic I guess
19:49 stupidwrk Yeah, so our PXE server uses iPXE, and that hits an HTTP endpoint
19:49 stupidwrk we have API calls that track the boot status of the hardware
19:49 astronouth7303 if it's json, salt-api has an endpoint specifically for webhooks
19:49 stupidwrk as it progresses through the boot process
19:50 stupidwrk Yeah I considered that as an option as well
19:50 stupidwrk I was just hoping to do it all in Salt, but that is looking unlikely
19:52 astronouth7303 and then reactor can examine the posted data, verify it, and then poke at the salt-key wheel
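
A sketch of that reactor, assuming the iPXE hook posts JSON like {"minion_id": "..."} to salt-api's webhook endpoint (the field name is made up; salt-api puts the posted body under data['post'], and the payload should be verified before acting on it):

    # /srv/reactor/enroll.sls
    {% set minion_id = data['post']['minion_id'] %}

    clear_stale_key:
      wheel.key.delete:
        - args:
          - match: {{ minion_id }}
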
19:52 stupidwrk That looks like the direction I will be heading
19:53 stupidwrk Thanks for everybody hearing me out and offering solutions.
19:54 astronouth7303 salt-api also has websocket and server-sent event (EventSource) endpoints to monitor the event bus
19:55 astronouth7303 and there's `_utils` that run in-process on the salt-master
19:55 astronouth7303 (idk what the API is to get called with bus messages, though)
19:56 astronouth7303 just remember, if you open salt-api web hooks, anyone who can establish a TCP connection with the salt-master can submit messages
19:57 stupidwrk Noted. Thanks for the heads up.
19:58 pualj joined #salt
20:02 samodid joined #salt
20:16 fatal_exception joined #salt
20:21 pualj_ joined #salt
20:39 johnj_ joined #salt
20:45 _KaszpiR_ joined #salt
20:49 kellyp joined #salt
20:58 overyander joined #salt
21:02 kellyp joined #salt
21:03 jeffspeff joined #salt
21:03 user-and-abuser joined #salt
21:05 ChubYann joined #salt
21:06 kellyp joined #salt
21:08 bowhunter joined #salt
21:16 pipps joined #salt
21:23 Diaoul joined #salt
21:23 DarkKnightCZ joined #salt
21:23 xet7 joined #salt
21:29 Hybrid joined #salt
21:29 CampusD joined #salt
21:40 johnj_ joined #salt
21:44 dnull joined #salt
21:47 Hybrid joined #salt
22:00 gswallow joined #salt
22:00 gswallow howdy
22:00 gswallow quick question; just kind of bashing my way through bootstrapping salt on an EC2 instance and I ran install.sh -h
22:00 gswallow what is the -L flag for, when I install a client?
22:01 gswallow do I need it?  If I don't use it am I going to die?
22:02 rojem joined #salt
22:04 cgiroua joined #salt
22:05 MTecknology gswallow: that's a vague question... is -L an option to the "default" bootstrap script?
22:06 gswallow sorry.  I'm following this:  https://docs.saltstack.com/en/getstarted/fundamentals/install.html
22:07 gswallow then I have scripted up this: https://gist.github.com/gswallow/8d4d8ab4b0778c966cbf36fe39c9e58c
22:07 gswallow then out of curiosity, I ran "install-salt.sh -h"
22:07 gswallow and it says -L  Also install salt-cloud and required python-libcloud package
22:08 pipps joined #salt
22:09 MTecknology I usually recommend /not/ using pre-baked all-in-one magic installers
22:10 astronouth7303 gswallow: http://repo.saltstack.com/#rhel might be easier?
22:10 pipps joined #salt
22:10 MTecknology ^ +1
22:11 astronouth7303 (I would suggest pinning to a major version, because ime major version upgrades can break things)
22:11 MTecknology gswallow: If you're asking about -L here, https://gist.github.com/gswallow/8d4d8ab4b0778c966cbf36fe39c9e58c#file-user-data-txt-L48, then "man curl" would be helpful
22:12 gswallow nah I'm asking about line 49
22:12 astronouth7303 https://github.com/saltstack/salt-bootstrap/blob/develop/bootstrap-salt.sh#L282-L400
22:12 gswallow and maybe it would be just as easy to install directly from RPM?  I'd just need to substitute a value in the salt config file but the -A flag to the installer seems to work great
22:13 kellyp joined #salt
22:13 MTecknology my bootstrap for DO deployments looks similar to this-> https://gist.github.com/MTecknology/66ce7c7f148fc9da936bcf26cc572cd7
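
For comparison, the whole bootstrap dance can be as small as this, pinning to a release per astronouth7303's advice (the master address is a placeholder; -A sets the master, as in the gist):

    curl -L -o install-salt.sh https://bootstrap.saltstack.com
    sudo sh install-salt.sh -A salt.example.com stable 2017.7
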
22:14 gswallow ok, I guess I can make my question more specific.  Do I need/want to install salt cloud and python-libcloud?  Does it hurt if I do and I don't use it?
22:15 astronouth7303 if you don't want salt to provision VMs, you don't need it
22:15 gswallow Yeah probably not
22:15 gswallow I'm leaning towards Terraform for that.
22:15 gswallow or cloudformation
22:15 sp0097 joined #salt
22:15 MTecknology both of those are things that I'd never want to personally use
22:15 MTecknology especially cloudformation
22:15 gswallow Heh
22:16 astronouth7303 yeah, salt-cloud is only the initial provisioning and salt enrollment, not larger cloud orchestration
22:16 gswallow Cool.
22:16 gswallow Ok, next step: install a master.
22:16 gswallow Thanks!
22:16 astronouth7303 you might just want to use the saltstack repo for that, too :P
22:18 gswallow Yeah, but all that preamble stuff?  It works really, really well.
22:18 gswallow The functions and traps and stuff.
22:19 MTecknology I'm not sure I follow what those last two lines meant.
22:20 kellyp joined #salt
22:20 gswallow in my gist.
22:20 gswallow the majority of that script is functions to capture instance ID and AWS region and to commit seppuku if the instance has an issue during bootstrap.
22:21 gswallow then autoscaling zaps it and a new instance takes its place.
22:21 gswallow I also have a routine to signal cloudformation if it's a success so that I can ensure that my farms of identical EC2 instances all came up successfully.
22:22 MTecknology "# All of this, just to install python-pip."
22:23 MTecknology heheh... I also try my hardest to avoid pip/cpan/npm/etc. on production servers. :P
22:23 dnull joined #salt
22:23 astronouth7303 sure makes setting up a well-featured salt server easier, though
22:24 MTecknology easier... ya
22:26 MTecknology astronouth7303: wait now.. which part does pip make easier for salt features?
22:27 astronouth7303 extra stuff
22:27 astronouth7303 some modules require non-default packages
22:27 astronouth7303 which tend to be either 1. system binaries, 2. pypi packages
22:35 CampusD hi guys, quick question: on a minion with job cache enabled, is there a reason that you know of why the naming of the directories is different under /var/cache/salt/minion/jobs vs /var/cache/salt/minion/minion_jobs?
22:36 CampusD the second one seems to use the JIDs for the naming of the directories while the first uses two random letters
22:37 CampusD the one that uses the JIDs is nice if I want to do something like this "find /var/cache/salt/minion/minion_jobs/ -type d -name 201* -printf "%f\n" -exec salt-call saltutil.find_cached_job {} \;"
22:38 CampusD is there a setting perhaps to make the other directory use the same pattern?
22:41 johnj_ joined #salt
22:44 kellyp joined #salt
22:44 cyteen joined #salt
22:53 Manor joined #salt
23:01 Manor joined #salt
23:03 Manor joined #salt
23:07 pewpew joined #salt
23:09 MTecknology eh.. what now?  http://dpaste.com/3A62WB3
23:14 MTecknology Why is salt trying to execute su? Not even -l trace is answering that question for me.
23:24 astronouth7303 MTecknology: dafuq? any idea what state is causing that?
23:26 pipps joined #salt
23:42 johnj_ joined #salt
23:42 MTecknology ah... cmd.run
23:46 MTecknology astronouth7303: The bright side now is that I finally know why it looked like all of my servers were being attacked
23:47 MTecknology down side- I have no idea how to tell pam to not require gauth for root.
23:47 astronouth7303 ooph, pam config. good luck.
