
IRC log for #salt, 2018-02-08


All times shown according to UTC.

Time Nick Message
00:25 pipps99 joined #salt
00:54 cyteen joined #salt
01:05 GnuLxUsr joined #salt
01:12 hemebond joined #salt
01:18 shiranaihito joined #salt
01:23 edrocks joined #salt
01:31 Church- Thank god for salt.
01:31 Church- Holy be thy's name.
01:50 hemebond Hallowed
01:53 schemanic joined #salt
02:04 pipps joined #salt
02:09 exarkun joined #salt
02:27 schemanic_ joined #salt
02:56 ilbot3 joined #salt
02:56 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.9, 2017.7.3 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
03:50 exarkun joined #salt
03:52 pipps joined #salt
04:17 lompik joined #salt
04:21 ahrs joined #salt
04:55 schemanic joined #salt
04:56 evle2 joined #salt
05:23 zerocoolback joined #salt
05:30 exarkun joined #salt
05:38 lkthomas if I am using the git.latest state, wouldn't it be risky to put the private key path into salt?
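[ed: for reference, the git.latest state takes an identity argument naming a private key file on the minion, so the key itself stays out of the state. A minimal sketch; the repo URL and paths are illustrative:]

    deploy_app_repo:
      git.latest:
        - name: git@example.com:example/app.git
        - target: /srv/app
        - identity: /root/.ssh/deploy_key    # key file on the minion, not stored in the state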
05:42 zerocoolback joined #salt
05:53 evle4 joined #salt
06:00 Guest73 joined #salt
06:01 taylorbyte joined #salt
06:11 LocaMocha joined #salt
06:13 NightMonkey joined #salt
06:13 Guest73 joined #salt
06:36 evle1 joined #salt
06:44 hoonetorg joined #salt
07:08 exarkun joined #salt
07:20 pualj joined #salt
07:23 hoonetorg joined #salt
07:33 darioleidi joined #salt
07:39 wongster80 When running salt-ssh test.ping on some host i get ImportError: No module named backports.ssl_match_hostname
07:40 wongster80 has anyone seen that?
07:47 lkthomas that's a python error man
07:48 lkthomas wongster80, https://github.com/saltstack/salt/issues/41020
07:48 lkthomas check response from MatthiasKuehneEllerhold
07:50 pualj joined #salt
07:55 wongster80 lkthomas: tried it no good
07:55 lkthomas check if that python module is actually installed
07:56 wongster80 yea it’s installed on the master
07:57 wongster80 actually sorry i’m getting a different error, NameError: global name 'memoryview' is not defined
07:58 lkthomas well google it then
08:09 losh joined #salt
08:09 onslack joined #salt
08:09 aldevar joined #salt
08:14 Hybrid joined #salt
08:17 m0nky joined #salt
08:18 supermike___ joined #salt
08:20 wongster80 lkthomas: got it working by installing several rpm dependencies, thanks
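[ed: the exact packages aren't named above; on an EL-family host the first traceback is typically cleared by a distro package along these lines (the name varies by release, so treat it as an assumption):]

    yum install python-backports-ssl_match_hostname

[the NameError about memoryview usually points at python 2.6, where memoryview does not exist; that one generally needs a newer python rather than an extra module.]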
08:22 Ricardo1000 joined #salt
08:35 saltnoob58 joined #salt
08:35 LostSoul joined #salt
08:38 Tucky joined #salt
08:40 IdoKaplan joined #salt
08:40 lkthomas :)
08:42 vb29 joined #salt
08:42 IdoKaplan Hi, I'm trying to setup HA in salt. Can someone please advise? https://pastebin.com/dYzn1Syk
08:49 exarkun joined #salt
08:54 jrenner joined #salt
08:59 mattfoxxx joined #salt
08:59 darioleidi joined #salt
09:02 Guest73 joined #salt
09:03 mikecmpbll joined #salt
09:09 haam3r_ joined #salt
09:10 darioleidi joined #salt
09:16 Mattch joined #salt
09:17 Guest73 joined #salt
09:40 schasi joined #salt
09:40 Udkkna joined #salt
09:56 pualj joined #salt
10:05 K0HAX joined #salt
10:11 colegatron joined #salt
10:26 JPT I don't know why, but using salt '*' pkg.upgrade refresh=True in a cronjob seems to kill the salt-minion almost every time there is an update for salt-minion. I need to get on every machine and do dpkg --configure -a and apt install --fix-broken. Is there a way to fix this?
10:28 saltnoob58 add salt '*' cmd.run dpkg --configure -a && apt install --fix-broken to same cronjob?
10:29 exarkun joined #salt
10:29 onslack <mts-salt> ... which won't work if there isn't a minion to accept the cmd.run
10:30 onslack <mts-salt> jpt: perhaps you could explicitly upgrade the minion beforehand and see if it behaves better that way
10:31 saltnoob58 use salt-ssh for cmd.run?
10:32 saltnoob58 not that thats any better really, but if it works
10:38 lxsameer joined #salt
10:38 JPT Well, the basic idea is that i want to do automatic updates via salt, so i don't have to fiddle with custom cronjobs.
10:40 schasi joined #salt
10:40 JPT mts-salt: Can you elaborate on how you would do that explicit upgrade?
10:42 onslack <mts-salt> as with most things salt, i haven't had to do this myself yet, but does it behave differently if you issue pkg.upgrade only against salt-minion?
10:47 onslack <mts-salt> based on your commands i'm guessing you're on a debian-based system, so it may be worth looking into why it isn't working as it should. are there any helpful messages if you manually downgrade and then retry the upgrade from the command-line using `apt-get install --only-upgrade salt-minion`? does it start the minion that way?
10:48 JPT I'm on debian 9 (stretch) with most of the affected machines. Debian 8 behaves fine somehow
10:50 JPT Hm. How would i do a manual downgrade?
10:50 onslack <mts-salt> well that's certainly unusual. worth trying the manual approach
10:51 onslack <mts-salt> from the apt-get manpage: `sudo apt-get install <package-name>=<package-version-number>`
10:51 JPT Okay, i'll see if i can get that to work. :)
10:51 onslack <mts-salt> you can use `apt-cache showpkg salt-minion` to list available versions
10:58 saltnoob58 mts-salt: remember you recommended piping salt-ssh output directly into python? now it works just fine, but three more pipes have grown around it. maybe you have another great recommendation, this time to move out of the commandline?
11:00 onslack <mts-salt> well it wasn't my idea, i just expanded on it. what's the problem with the solution you have?
11:02 saltnoob58 well the real problem is it's... ugly? suspicious? it needs extra steps to say no to all the "permission denied, wanna deploy key?" questions, and then a sed to cut out stdout that isn't salt's own json output
11:02 saltnoob58 makes for a long convoluted bash string, was wondering maybe there's a better way to do the same thing
11:03 babilen JPT: That's https://github.com/saltstack/salt/issues/43340#issuecomment-329760470 and annoying
11:03 babilen Fixed it before (after having had to argue quite a bit) and they keep reintroducing the problem
11:04 babilen Promised to reject "Let's break it again" PRs from now on
11:04 onslack <mts-salt> well that sucks. thanks for highlighting that
11:04 babilen Unfortunately it means that you'd have to deploy the KillMode setting yourself now
11:04 babilen (or just wait for .4)
11:04 onslack <mts-salt> saltnoob58: if you have content you don't want then you're better off redirecting it with a simple `>/dev/null`
11:05 onslack <mts-salt> won't the upgrade to .4 have the same problem, and only upgrading _from_ .4 would work? :slightly_smiling_face:
11:05 babilen Exactly
11:05 saltnoob58 well my command looks like this #yes n |salt-ssh '*'  grains.items --out json --static | sed | python
11:05 onslack <mts-salt> or as you say, just use salt to drop the KillMode config in ;)
11:06 babilen Essentially you have to ensure that KillMode is configured correctly. Thanks to systemd's drop-in configuration, that shouldn't be too tricky
11:06 JPT babilen: Thanks for providing that link. After reading that, i guess it describes my issue very well. :)
11:06 saltnoob58 and this one command's output looks like "would you like to deploy key [y/n] {start of all useful json}" so i don't know what i would send to dev null
11:06 babilen I'm just annoyed as it took me ages to get this in the right state and it just keeps coming up and up
11:07 onslack <mts-salt> a quick `salt-minion.conf/override` might work there, right?
11:08 babilen https://www.freedesktop.org/software/systemd/man/systemd.unit.html → "drop-in"
11:08 onslack <mts-salt> saltnoob58: if that's output by the salt-ssh command then i would expect an option to turn that off somewhere. i haven't rtfm on that yet
11:08 babilen I'd recommend reading your distro's systemd.unit(5) as it specifies the locations
11:08 onslack <mts-salt> babilen: exactly, i was just working out what the file would need to be called to perform the override
11:09 onslack <mts-salt> since salt-minion is a sysvinit script that's promoted to a systemd service without an explicit service file
11:10 saltnoob58 i have tried looking for it but did not find such an option for salt-ssh to be non-interactive, found the yes n| trick from saltstack github in the first place. At least there's options to ignore unknown host keys :)
11:10 babilen On Debian /etc/systemd/system/salt-minion.d/letmeliveplz.conf would work
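[ed: the fix discussed in issue #43340 is a KillMode override, so systemd doesn't kill the package manager's child processes when the minion service restarts itself mid-upgrade. A minimal drop-in sketch:]

    # /etc/systemd/system/salt-minion.d/override.conf
    [Service]
    KillMode=process

[run `systemctl daemon-reload` afterwards so systemd picks up the override.]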
11:11 onslack <mts-salt> perfect. is that in the issue by any chance? :slightly_smiling_face:
11:11 onslack <mts-salt> saltnoob58: have you tried `salt-ssh -i --no-host-keys` ?
11:12 babilen onslack: It should use a "proper" systemd unit file tbh -- https://github.com/saltstack/salt/blob/develop/pkg/deb/salt-minion.service
11:12 onslack <mts-salt> ooh, shiny :slightly_smiling_face:
11:12 babilen (for Debian and derivatives)
11:12 onslack <mts-salt> well, systemd, but yes
11:13 babilen That particular file is used in .deb packaging .. there are similar ones for rpm (and others) in https://github.com/saltstack/salt/tree/develop/pkg
11:13 onslack <mts-salt> i don't have that file in 2017.7.2 so i can only assume it's only just been added
11:13 babilen I haven't checked all of them, but it appears as if there are unit files for salt-minion for most if not all of them
11:13 saltnoob58 mts-salt: i use --ignore-host-keys, but it's not the host key, it's the public salt-ssh user key that's asked to be deployed, the one that salt-ssh uses to connect to target hosts. Some hosts will inevitably sneak in where i don't have access but i haven't found a way to ignore them
11:14 babilen onslack: /lib/systemd/system/salt-minion.service is being used with 2016.11.8 on stretch here
11:14 babilen onslack: What does "systemctl status salt-minion.service" give you?
11:15 onslack <mts-salt> bah, i forgot that folder was even there
11:15 onslack <mts-salt> i'm not used to config being in /lib
11:16 babilen Systemd looks in various locations (cf. systemd.unit(5)) .. even your home! ;)
11:18 onslack <mts-salt> saltnoob58: is there any downside to using `salt-ssh --key-deploy` ?
11:19 Pistahh left #salt
11:19 onslack <mts-salt> or are you only using (failing) password auth?
11:20 xet7 joined #salt
11:21 saltnoob58 i have keys deployed and i DONT use password auth at all. But on some target hosts i'll have neither pass nor key
11:21 onslack <mts-salt> so what does --key-deploy do against such a host? do you still have one you can test against?
11:24 saltnoob58 --key-deploy pre-answers yes and thus doesnt have any prompts so it achieves that part
11:25 saltnoob58 i didnt try it before because in my mind i thought "i want to pre-answer NO"
11:25 saltnoob58 it still outputs a json retcode:255 instead of ignoring the thing completely but that's easy to cut out in python
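[ed: a sketch of the python end of that pipeline — reading the --static json from stdin and dropping hosts salt-ssh couldn't reach. The retcode-255 check mirrors what saltnoob58 describes; the printed grain keys are illustrative:]

    from __future__ import print_function
    import json
    import sys

    # fed by: yes n | salt-ssh '*' grains.items --out json --static | sed ... | python this.py
    data = json.load(sys.stdin)
    for minion, result in data.items():
        # unreachable hosts (no key or password deployed) come back as an
        # error payload carrying retcode 255 -- skip those entirely
        if not isinstance(result, dict) or result.get("retcode") == 255:
            continue
        # reachable hosts: result is the grains dict itself
        print(minion, result.get("os"), result.get("osrelease"))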
11:28 pualj joined #salt
11:45 JPT joined #salt
11:49 Guest73 joined #salt
12:10 exarkun joined #salt
12:14 Guest73 joined #salt
12:23 saltnoob58 if i'm trying to run salt-ssh against a minion as an unprivileged user and allow only some things with sudo, is there any convenient way to find out what the full paths it's trying to use are? For example for smbios.get
12:31 The_Loeki joined #salt
12:57 swills joined #salt
12:58 swills_ joined #salt
12:59 schasi joined #salt
12:59 schasi whytewolf: Tried "salt-minion -l all". It eventually brought me to the solution
13:07 Guest73 joined #salt
13:15 Nahual joined #salt
13:19 AstraLuma joined #salt
13:21 pualj joined #salt
13:35 pualj joined #salt
13:39 nickadam joined #salt
13:58 edrocks joined #salt
13:59 bowhunter joined #salt
14:03 vhasi joined #salt
14:04 gh34 joined #salt
14:23 Sacro Is there a way to detect salt-ssh in a state? E.g. I don't want to check the minion is configured right
14:24 cgiroua joined #salt
14:25 Sacro Hmm, I guess grains['master'] won't equal 'salt'
14:26 babilen It shouldn't really make a difference .. what do you need it for?
14:26 racooper joined #salt
14:28 Sacro So, i don't want to install / configure / check the state of a salt-minion if it's over ssh
14:28 Sacro i.e. I don't have root and can't do a full install but still want to use states
14:29 onslack <mts-salt> perhaps limit the states to be applied during a highstate by filtering in top?
14:30 Guest73 joined #salt
14:30 Sacro Yeah, I've got 'G@master:my.salt.master'
14:30 Sacro It works, but if someone renames the salt master 'salt' then that's going to be unhappy
14:33 onslack <mts-salt> ooc what's the value of the 'master' grain in that case?
14:40 pualj joined #salt
14:40 pualj_ joined #salt
14:43 schasi When using bootstrap-salt.sh on FreeBSD, python 3.6 is used. Is that intended? I would rather have python 2.7, but there seems to be no mechanism to choose.
14:43 Sacro If it's salt-ssh then master gets set to 'salt'
14:44 Sacro Maybe salt-ssh is a better option, then it's more obvious to match against it
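[ed: the guard being discussed, as a jinja sketch — gating minion-management states on the master grain. As Sacro notes, relying on the grain defaulting to 'salt' under salt-ssh breaks if someone actually names their master 'salt':]

    {% if grains.get('master') != 'salt' %}
    salt-minion:
      pkg.installed
    {% endif %}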
15:00 mage_ schasi: why don't you use the packages?
15:00 Hybrid joined #salt
15:05 _JZ_ joined #salt
15:08 schasi mage_: Because I am using salt-cloud, which uses bootstrap-salt.sh.
15:15 evle1 joined #salt
15:25 mage_ schasi: ok.. never used salt-cloud
15:25 mage_ do you create FreeBSD jails with salt-cloud?
15:27 ws2k3 joined #salt
15:28 ws2k3 joined #salt
15:28 schasi mage_: No, just individual VMs
15:32 edrocks joined #salt
15:34 schasi mage_: We have oVirt (on CentOS) running and a custom module to interface to it. salt-cloud creates FreeBSD-VMs on it, from a template
15:40 oida joined #salt
15:46 vb29 Hi...I have a mysql database with master-slave configuration....and I want to do some configuration on it using saltstack... but before executing anything I want to know the ACTIVE mysql node...is there any way I can achieve this using saltstack or python?
15:47 onslack <ryan.walder> How do you currently determine which is the active node?
15:49 cro_ joined #salt
15:49 vb29 I do it manually as of now and then run the config script manually...but I want to automate this using salt and in future this whole process will be hooked up to Jenkins...that is why I am looking for a way to get the active node using salt
15:50 onslack <ryan.walder> My point is, how do you do it now, how do you automate that, translate that to salt
15:51 Hybrid joined #salt
15:53 vb29 sorry Ryan but I am not able to understand your question :(   right now there is no automation....and I am looking for a way to run the script after I find which node is active...
15:54 onslack <ryan.walder> How would you determine which node is active programtically?
15:54 onslack <mts-salt> he's asking how you perform these tasks manually. once you can describe these steps, converting them to salt for automation should be easy
15:55 onslack <mts-salt> specifically, try writing down each logical step and the command you'd run to achieve it
15:56 onslack <mts-salt> this doesn't always work for salt, as it has a way of doing things in a very specific way, but it's going to be extremely difficult to bridge the gap until you at least know what you're trying to achieve in detail
15:56 mage_ any idea what Terraform does more/better than salt-cloud?
16:02 Guest73 joined #salt
16:03 vb29 mts-salt:   thanks for the answer.... I was just looking to see if there is any module in salt which straight away gives this info...but looking at my setup I guess I have to first do something for automatic failover, which is not in place right now..... (I have a crappy setup :) )
16:04 vb29 I will dig into this more after I have a setup eligible for such thing...
16:05 onslack <mts-salt> you may find someone has done something similar, or even that salt provides a way to automate some things in a way you're not currently using, but until then it can always just run commands in a sequence to "get the job done" :slightly_smiling_face:
16:05 vb29 thanks! :)
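[ed: once the manual check is written down, it maps onto salt almost directly. A hedged sketch, assuming a conventional master/slave pair where the active node is the writable one; the mysql module needs the python mysql bindings on the targets:]

    # slaves report a populated replication status; the active master reports none
    salt 'db*' mysql.get_slave_status
    # and/or: the node answering read_only = 0 is usually the active one
    salt 'db*' cmd.run "mysql -NBe 'SELECT @@read_only'"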
16:07 oida joined #salt
16:09 MTecknology mts-salt: Worst case- you write a custom script for cmd.wait (or exec/state module)
16:10 onslack <mts-salt> i've done that, as provisioning a windows vm from a custom image with specific virtual hardware doesn't work using salt-cloud or salt-virt :slightly_smiling_face:
16:10 MTecknology it does if you deploy from a sysprepped template?
16:11 onslack <mts-salt> that would be my custom image, yes. but preparing it requires creating a volume, copying the image into it, mounting it using ntfs-3g, writing the customisations, unmounting it, and finally allowing it to boot
16:12 onslack <mts-salt> and my custom script performs the bulk of that in the background so salt starts the provisioning but doesn't wait for it to finish
16:12 onslack <mts-salt> all templated and configured through pillar of course
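[ed: a sketch of the fire-and-forget pattern described here — a pillar-templated cmd.run state that starts the provisioning script and returns without waiting. The script path and pillar keys are hypothetical; if the installed release lacks cmd.run's bg flag, a `nohup ... &` command achieves the same:]

    provision_{{ pillar['vm']['name'] }}:
      cmd.run:
        - name: /usr/local/bin/provision-windows-vm.sh {{ pillar['vm']['name'] }}
        - bg: True    # start the script in the background; salt does not wait for it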
16:12 Church- Morning folks
16:12 Church- Heya mage_
16:13 Church- MTecknology*
16:13 MTecknology mage_: I've never used terraform for my own stuff, but I wouldn't be surprised if it's generally more robust and better developed than salt-cloud. It's only somewhat recently that salt-cloud started becoming a development focus/priority.
16:13 MTecknology Church-: howdy
16:13 Church- Sup?
16:14 mage_ MTecknology: right, thanks :)
16:14 MTecknology mts-salt: That sounds exceptionally painful. What virt platform are you using?
16:15 onslack <mts-salt> as host? debian
16:15 onslack <mts-salt> so qemu-kvm
16:16 MTecknology OUCH!
16:16 onslack <mts-salt> it's the customisations that stop me from using salt-virt. providing a hostname and network config to a sysprepped image involves a templated response file which will be different for every windows guest. there just isn't any other way that i've found
16:16 MTecknology Going raw with that stuff is like going raw w/ a hooker.
16:17 onslack <mts-salt> that appears to be a limitation of sysprep rather than virt
16:17 MTecknology sounds like you might have a use case for cloud-init or whatever that clusterfuck is called.
16:17 onslack <mts-salt> and now you know why i put it all into salt
16:18 MTecknology but the virt thing... ouchies
16:18 onslack <mts-salt> i didn't find a solution
16:18 MTecknology proxmox wasn't suitable?
16:18 MTecknology If you consider proxmox- I have an excellent state to show in a second.
16:19 onslack <mts-salt> only runs over redhat. i couldn't find a debian version
16:19 MTecknology "only runs over redhat."?
16:19 MTecknology last I checked... proxmox /only/ supports debian
16:20 Church- ^
16:20 onslack <mts-salt> i don't remember the reason, but almost certainly $policy
16:20 MTecknology http://dpaste.com/0CGC0WR
16:21 MTecknology I'm really curious what the reason was because I'm considering rolling proxmox as a replacement to ... basically what you just described, but without the automation.
16:22 onslack <mts-salt> well now it's done, the automation is sweet. if you layer that with pxeboot and a response-controlled installation of debian, you could literally cold-boot from bare metal all the way through to multiple vms hosting anything
16:23 DammitJim joined #salt
16:24 schasi mts-salt: sounds pretty nice :)
16:28 onslack <mts-salt> i think the problem we saw was that proxmox is a bare-metal platform, whose update and security policies were insufficient to meet our iso27k standards
16:30 onslack <mts-salt> it's been a while, and i'd have to recheck it all to remind myself
16:33 MTecknology I remember reviewing RHEL's oVirt. That thing was a nightmare of ugly pain.
16:34 pualj joined #salt
16:35 onslack <mts-salt> yep, that was it. they want to use stretch (debian 5) and we're running 9
16:36 onslack <mts-salt> our systems team just said nope. nope nope nope nope nope.
16:36 dendazen joined #salt
16:36 onslack <ryan.walder> The annoying thing with ProxMox is that you can spin vms up with salt-cloud but can't do anything with them as there's no method of talking with proxmox to customise them (like with open-vm-tools in vmware)
16:37 onslack <mts-salt> and that's exactly the problem i had with kvm itself. even with the guest agent installed they don't have a "remote execute" function for a windows guest. they do for a linux one
16:37 onslack <mts-salt> that's why i had to mount the image and customise it directly
16:38 onslack <mts-salt> plus i needed explicit ip and name
16:38 onslack <mts-salt> as in, no dhcp
16:39 onslack <mts-salt> it still irks me that only redhat have compiled binaries for the windows guest, but i'll have to live with that
16:40 MTecknology I'm sure there's a good reason for the "only" part...
16:40 onslack <mts-salt> yeah, they paid for the code signing from m$
16:43 Guest73 joined #salt
16:47 pualj joined #salt
17:02 user-and-abuser joined #salt
17:31 GnuLxUsr joined #salt
17:36 edrocks joined #salt
17:48 c4rc4s joined #salt
17:55 pbandark joined #salt
18:00 spiette_ joined #salt
18:05 Lionel_Debroux joined #salt
18:11 bdrung_work joined #salt
18:12 spiette_ joined #salt
18:12 tobiasvdk joined #salt
18:20 pipps joined #salt
18:21 pipps99 joined #salt
18:29 lordcirth_work joined #salt
18:32 Guest73 joined #salt
18:40 lordcirth_work Is the branch 'upstream/2016.11' the correct one to use for bugfixes?
18:40 pipps joined #salt
18:49 gtmanfred no
18:49 gtmanfred that is cve only now
18:49 gtmanfred there are no more scheduled releases for it
18:51 lordcirth_work gtmanfred, so 2017.7 then?
18:51 gtmanfred yes
19:03 pualj joined #salt
19:15 ymasson joined #salt
19:16 mikecmpbll joined #salt
19:19 Hybrid joined #salt
19:26 xet7_ joined #salt
19:28 SteamWells joined #salt
19:29 poige joined #salt
19:30 nickadam joined #salt
19:33 kiorky joined #salt
19:49 user-and-abuser joined #salt
19:51 aldevar joined #salt
19:53 edrocks joined #salt
19:55 pipps joined #salt
20:00 Guest73 joined #salt
20:02 pipps joined #salt
20:12 mavhq joined #salt
20:13 pualj joined #salt
20:29 pipps joined #salt
20:32 lordcirth_work Anyone know if there's an official stance on adding new python deps to Salt?  https://github.com/saltstack/salt/issues/28165
20:33 gtmanfred don't
20:33 gtmanfred hard deps are a don't do it
20:34 gtmanfred but if you can make it a soft dependency, so it doesn't have to be installed for all salt usage, then go for it
20:36 lordcirth_work gtmanfred, hmm.  It's to add a feature to file management, is there a good way to only require the import if the new option is passed?
20:36 gtmanfred yeah, check if HAS_DEPENDENCY, and then log a warning if that option is set in the salt.utils.parser
20:43 lordcirth_work *sigh* client got back to me, so salt dev is low-priority again.  Hopefully I'll get around to it.
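[ed: the soft-dependency convention gtmanfred describes looks roughly like this inside a salt module — guard the import at the top, gate the feature at call time. All names here are hypothetical:]

    try:
        import some_optional_lib  # hypothetical optional dependency
        HAS_OPTIONAL_LIB = True
    except ImportError:
        HAS_OPTIONAL_LIB = False


    def managed(name, use_new_feature=False, **kwargs):
        '''hypothetical state function with an optional, library-backed feature'''
        ret = {'name': name, 'result': True, 'changes': {}, 'comment': ''}
        if use_new_feature and not HAS_OPTIONAL_LIB:
            # only complain when the option actually requires the library
            ret['result'] = False
            ret['comment'] = 'some_optional_lib is required for use_new_feature'
            return ret
        # ... normal file-management logic here ...
        return ret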
21:02 Guest73 joined #salt
21:07 aldevar left #salt
21:32 pipps joined #salt
21:33 cyborg-one joined #salt
21:35 cyborg-one left #salt
21:42 pipps joined #salt
21:42 jsmith0012 joined #salt
21:50 pipps joined #salt
21:52 JAuz joined #salt
21:53 pipps99 joined #salt
21:53 Trauma joined #salt
22:05 pipps joined #salt
22:10 exarkun joined #salt
22:12 pipps joined #salt
22:38 pipps joined #salt
22:39 pipps99 joined #salt
22:45 pipps joined #salt
22:47 Edgan gtmanfred: Is the documentation incomplete, and it expects an acl? https://paste.fedoraproject.org/paste/J~ddJxlrqCL9k42bSLHNHQ
22:49 kiorky joined #salt
22:49 onslack <gtmanfred> edgan, i am not sure that i tested it without supplying an acl, but that seems to be the case
22:50 onslack <gtmanfred> well no, i definitely used that zookeeper.present one without an acl
22:50 onslack <gtmanfred> so it may have changed since I originally wrote this thing
22:50 Edgan gtmanfred: This is on Ubuntu 14.04, so maybe older version of kazoo?
22:50 onslack <gtmanfred> yeah, i pip installed kazoo, and also used the newest zookeeper from a docker container at the time
22:51 Edgan gtmanfred: ok, I will try updating, and if that fixes it I will file an issue to document it
22:51 onslack <gtmanfred> i wrote tests for it, lemme see if those tests are still passing
22:52 onslack <gtmanfred> https://jenkins.saltstack.com/job/oxygen.rc1/job/oxygen-salt-cent7-py2/9/testReport/integration.states.test_zookeeper/ZookeeperTestCase/
22:52 onslack <gtmanfred> yeah, and that runs one that would do it without acls
22:53 onslack <gtmanfred> @edgan https://github.com/saltstack/salt/blob/oxygen.rc1/tests/integration/states/test_zookeeper.py#L44
22:53 Edgan gtmanfred: upgrading kazoo from 1.2.1 to 2.4.0 didn't help :\
22:54 onslack <gtmanfred> might be the version of zookeeper, because that runs against the latest zookeeper with the latest kazoo
22:54 onslack <gtmanfred> but it definitely works.
22:54 Edgan looking
22:54 onslack <gtmanfred> yeah, it actually just grabs the latest zookeeper docker container, and uses that for the tests https://github.com/saltstack/salt/blob/oxygen.rc1/tests/integration/states/test_zookeeper.py#L36
22:55 Edgan gtmanfred: I am running 3.4.5 and latest is 3.4.11, so not a huge difference
22:55 gtmanfred hrm i don't know then
22:56 gtmanfred i know i specifically wrote these tests because i don't know a ton about zookeeper, but wanted to make sure my examples in the docs worked
22:56 Edgan gtmanfred: Let me show you the patch I used. Maybe I am missing a commit.
22:58 Edgan gtmanfred: https://paste.fedoraproject.org/paste/w45Ns0Cja9Y3J016jVjc-g
22:58 gtmanfred This is how we install kazoo, https://github.com/saltstack/salt-jenkins/blob/7832aca6ab2628dc5bfaf87acae21fc026a79542/python/zookeeper.sls#L8 for the test suite
22:58 gtmanfred so it should just be the latest version
22:58 gtmanfred is there a reason you aren't just dropping the raw file from oxygen.rc1 into _modules and _states?
22:59 Edgan gtmanfred: https://github.com/saltstack/salt/commits/oxygen.rc1/salt/states/zookeeper.py   This shows it should only be the one commit.
22:59 gtmanfred hrm, that is fair
23:02 gtmanfred looks like you should not need to specify any acls, because it should just use whatever the default_acls of the connection are
23:02 Edgan gtmanfred: any idea what else the traceback would point at?
23:03 gtmanfred that is super odd, because it looks like acls can be None https://kazoo.readthedocs.io/en/latest/api/client.html#kazoo.client.KazooClient.ensure_path
23:04 gtmanfred i do not
23:04 gtmanfred it has been almost a year since I looked at this
23:16 Edgan gtmanfred: trying some ideas
23:17 onslack <gtmanfred> kk
23:21 demize joined #salt
23:21 Edgan gtmanfred: This seems like it might be a clue, https://paste.fedoraproject.org/paste/O6yXoZXKG~ePwxDKT51NlQ
23:22 Edgan gtmanfred: trying to upgrade kazoo again
23:23 onslack <gtmanfred> you don’t have a password in there for make_digest_acl
23:23 onslack <gtmanfred> it requires a username and a password per acl
23:24 onslack <gtmanfred> https://docs.saltstack.com/en/develop/ref/modules/all/salt.modules.zookeeper.html#salt.modules.zookeeper.make_digest_acl
23:24 Deliant joined #salt
23:25 onslack <gtmanfred> that is probably a bug that could be fixed, but i don’t know enough about zookeeper to fix it.
23:25 Edgan gtmanfred: I am just trying to set the default acl with a username of world. There doesn't seem to be any password unless getacl isn't showing the password.
23:26 onslack <gtmanfred> well i am telling you the way it is written, each acl needs a username and a password
23:26 onslack <gtmanfred> so if world doesn’t need a password, then the thing needs to be rewritten
23:26 onslack <gtmanfred> https://kazoo.readthedocs.io/en/2.0/api/security.html#kazoo.security.make_digest_acl
23:26 Bryson joined #salt
23:26 Edgan gtmanfred: ok, I will keep digging
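[ed: for reference, the shape gtmanfred is pointing at — each digest acl entry in zookeeper.present carries a username and password, matching the make_digest_acl signature. Values here are illustrative, following the shape of the docs examples:]

    add znode:
      zookeeper.present:
        - name: /test/name
        - value: some_value
        - makepath: True
        - acls:
          - username: daniel
            password: test
            all: True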
23:26 kiorky joined #salt
23:27 gtmanfred i am off until monday o/
23:30 Edgan gtmanfred: still there?
23:31 onslack <gtmanfred> not unless it is an emergency
23:32 Edgan gtmanfred: I think I figured it out, and will talk to you on Monday
23:32 onslack <gtmanfred> cool :+1:
23:38 Mogget joined #salt
23:43 cyteen joined #salt
23:50 exarkun joined #salt
23:54 schasi joined #salt
23:57 Guest73 joined #salt
