
IRC log for #salt, 2016-11-09


All times shown according to UTC.

Time Nick Message
00:12 netcho joined #salt
00:18 choopooly joined #salt
00:19 tkharju joined #salt
00:19 abednarik joined #salt
00:20 aalmenar joined #salt
00:20 aalmenar joined #salt
00:20 nawwmz whytewolf: so out of curiosity why is /etc/sysctl.conf still loading when it was deprecated in systemd (sorry to go back in time) :)
00:20 whytewolf because it isn't deprecated yet.
00:21 whytewolf that change is coming and is in fedora, but that change hasn't made it down to the systemd in centos 7
00:21 whytewolf [or redhat 7]
00:22 nawwmz ahhh i seeeee so thats why they symlinked it to 99-sysctl.conf so it has the same effect as /etc/sysctl.conf (in terms of precedence)
00:22 whytewolf yes
00:23 nawwmz interesting, that is a big "gotcha" since stuff still writes to /etc/sysctl.conf and if you try to overwrite as you would normally, it never would
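The lexical-ordering rule behind that symlink can be sketched like this (the filenames other than 99-sysctl.conf are invented for illustration):

```shell
# systemd-sysctl reads /usr/lib/sysctl.d, /run/sysctl.d and /etc/sysctl.d
# fragments in lexical order; on EL7, /etc/sysctl.d/99-sysctl.conf is a
# symlink to /etc/sysctl.conf, and sorting last means it is applied last
# and wins on conflicting keys.
printf '%s\n' 50-default.conf 60-custom.conf 99-sysctl.conf | sort | tail -n 1
# -> 99-sysctl.conf
```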
00:23 ahammond does compound targeting work again? I have a big list of IPs and need to target... :(
00:26 nawwmz welp, thx for the tips whytewolf
00:26 whytewolf ahammond: do you remember the issue id?
00:27 whytewolf nawwmz: np.
00:27 ahammond whytewolf I remember now, it had to do with mixing compound matchers with nodegroups. Fortunately I'm not doing that, so I'm good.
00:28 ahammond oh, and for anyone searching, perl -lne 'push @r, "S\@$_"; END{ print join " or ", @r };' < file_listing_one_ip_per_line.txt
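For anyone not fluent in perl, the same compound-matcher string can be built in Python (the function name here is made up; `S@` targets minions by IP/subnet):

```python
# Build a compound matcher like "S@10.0.0.1 or S@10.0.0.2" from a list of
# IPs, mirroring the perl one-liner above.
def compound_from_ips(ips):
    return " or ".join("S@" + ip.strip() for ip in ips)

print(compound_from_ips(["10.0.0.1", "10.0.0.2"]))
# -> S@10.0.0.1 or S@10.0.0.2
```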
00:29 jas02_ joined #salt
00:29 whytewolf ahhh nodegroups.... those do break many things
00:30 whytewolf but they are great when they work
00:33 ahammond whytewolf yeah, poor man's CMDB
00:33 lordcirth ahammond, what would you use instead?
00:33 ahammond lordcirth for a CMDB?
00:33 lordcirth Yeah
00:35 whytewolf I don't know about ahammond but me personally i would love it if targeting was pluggable
00:35 ahammond lordcirth I don't think there _are_ any good answers there. https://docs.saltstack.com/en/latest/topics/master_tops/ combined with either Cobbler or Reclass strikes me as the only real hope, but to be honest this is an area where salt has totally failed to address the needs of the enterprise.
00:37 keimlink_ joined #salt
00:39 awiss joined #salt
00:42 mosen joined #salt
00:42 ahammond whytewolf and yeah, one of my huge complaints is that master_tops helps with top.sls files, but ignores targeting on the CLI and in orchestrations, and in reactors. Our hack-around for that is nodegroups, but they're a shit-storm in and of themselves.
00:43 whytewolf another workaround would be pillar targeting
00:43 mosen joined #salt
00:44 ahammond whytewolf sure, but then... pillar/top.sls?
00:44 whytewolf iirc ext_pillars still count in pillar target
00:44 ahammond sure, but then how do you control access to your external pillars?
00:44 ahammond Of course that's assuming you're using pillars at all, and I've pretty much concluded that you shouldn't. Ever.
00:45 whytewolf ext_pillars are only handed the minion_id
00:45 ahammond whytewolf sure, so you have to re-write the concept of top.sls for every ext pillar?
00:45 gableroux joined #salt
00:46 ahammond whytewolf I'm just not clever enough to keep track of it in multiple places. Especially if your ext_pillars are in separate repos for management.
00:47 whytewolf well I wasn't talking about repos. I was meaning like the mysql ext_pillar or a database based ext_pillar. that can be updated
00:48 whytewolf there is also the new vault ext_pillar
00:49 ahammond whytewolf yeah, I saw that. I guess if you're working with legacy systems it makes sense. If you're _not_ working with legacy systems, why in the name of Cthulhu would you immediately trash Vault's distributed security model by adding a centralized point of failure! :)
00:49 ninjada joined #salt
00:50 ahammond whytewolf and that's not even thinking about the performance and scaling implications.
00:50 ahammond No, pillars are deprecated in our Brave New World. :)
00:51 ninjada joined #salt
00:54 theproxy joined #salt
00:55 whytewolf why? i don't get the anti-pillar mentality. they had it at IGT also. pillars are very good for separating data from the states. personally i would rather lose grains before i lost pillars. most of the grains people look up can be gathered quickly with salt based tools. but the pillars are inputs coming from me that may change based on need. i don't want to search through 5000 states looking for a single change.
00:57 antpa joined #salt
00:57 nawwmz hi all in saltstack im defining my block device as "- { source: blank, dest: volume, device: sdf, size: 100, shutdown: preserve }" and in openstack it shows attached but I dont see the disk on the host I spun up, any idea?
00:58 whytewolf nawwmz: check dmesg
00:58 nawwmz ah k thx
00:59 whytewolf it attaches the disk, but you still need to put a filesystem on it and mount it
00:59 thejrose1984 joined #salt
01:00 nawwmz can I pass the format like ext4?
01:00 nawwmz as part of the block_device mapping
01:00 jas02_ joined #salt
01:00 whytewolf openstack doesn't format the disk. it is something you have to do in the minion
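A hedged sketch of the minion-side follow-up whytewolf means, as a Salt state (the device name and mount point are assumptions; check dmesg/lsblk first):

```yaml
format_data_disk:
  blockdev.formatted:
    - name: /dev/vdb
    - fs_type: ext4

/data:
  mount.mounted:
    - device: /dev/vdb
    - fstype: ext4
    - mkmnt: True
    - require:
      - blockdev: format_data_disk
```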
01:00 nawwmz kk
01:01 edrocks joined #salt
01:02 nawwmz whytewolf: that is something that the ec2 driver does though right?
01:02 whytewolf no
01:02 whytewolf not as far as i know
01:03 nawwmz so in the ec2 driver, we just add a block device and it would show up using lsblk or fdisk, we would still need to partition it  but it was visible as a volume at least
01:03 nawwmz it would mount it in other words
01:03 nawwmz hmm let me poke around a little more
01:04 pipps joined #salt
01:04 nicksloan joined #salt
01:06 jalaziz joined #salt
01:08 whytewolf the disk should show up in lsblk [just with no partitions or formatting]
01:08 nawwmz interesting... k maybe im doing something wrong, volume does show attached to the instance i created too which is weird
01:10 whytewolf this is what a freshly attached disk looks like in openstack
01:10 whytewolf https://gist.github.com/whytewolf/a317651791382a385c924246f3958b9f
01:11 whytewolf my hypervisor is kvm, your mileage may vary on device name
01:12 matth in a state, how can I use a jinja variable (  {{ myvar }}   inside :         {{ salt['pillar.get']('mypillar:{{ my var }}:data }} ?
01:12 whytewolf {{ salt['pillar.get']('mypillar:'~myvar~':data')}}
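In state context that concatenation looks roughly like this (the pillar path, file name, and variable value are the hypothetical ones from the question):

```jinja
{% set myvar = 'web' %}
/tmp/{{ myvar }}.conf:
  file.managed:
    - contents: {{ salt['pillar.get']('mypillar:' ~ myvar ~ ':data') }}
```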
01:14 matth whytewolf: thank you
01:14 nawwmz whytewolf: ah nice thx for the snippet
01:16 matth is it also possible to use =~ to check the content of a string with an if statement ?  like : if myminion =~ 'type1'  ?
01:18 ninjada joined #salt
01:20 jalaziz_ joined #salt
01:20 whytewolf matth: I have never tried.
01:22 nawwmz whytewolf: in your config, are you passing device: sdb or xvdb ?
01:23 nawwmz maybe thats what im doing wrong, im simply passing sdb
01:23 whytewolf nawwmz: I didn't use salt to attach the disk :P
01:23 nawwmz hax
01:24 whytewolf i didn't assign a device name
01:24 hoonetorg joined #salt
01:24 whytewolf i let openstack decide for me
01:28 matth whytewolf: it does not work for the variable (  {{ salt['pillar.get']('mypillar:'~myvar~':data')}} )
01:28 nawwmz oh nice ill try that next
01:39 nawwmz whytewolf: question... so i got it to work it turns out our cloud provider doesnt translate sdf -> vdf, but I defined vdf and on the host its vdb, seems like no matter what i add in salt, the host will increment it from vda ->
01:39 ahammond @whytewolf separating data from states is better done using modules for complex logic, jinja's import_yaml for non-secret data that you want to pass into an execution module, or the maps.jinja idiom. In my experience anyway. Pillars are a great crutch for rapid development, but scaling them is not fun.
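A minimal sketch of the import_yaml / map.jinja idiom ahammond refers to (file and key names are illustrative, not from the log):

```jinja
{# map.jinja: merge static defaults from a YAML file with per-OS values #}
{% import_yaml 'myapp/defaults.yaml' as defaults %}
{% set myapp = salt['grains.filter_by']({
    'Debian': {'pkg': 'myapp'},
    'RedHat': {'pkg': 'myapp-server'},
}, grain='os_family', merge=defaults.get('myapp', {})) %}
```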
01:40 infrmnt joined #salt
01:44 ninjada joined #salt
01:48 nethershaw joined #salt
01:48 dps joined #salt
01:55 schemanic joined #salt
02:01 jas02_ joined #salt
02:05 sh123124213 joined #salt
02:06 DammitJim joined #salt
02:11 mikecmpbll joined #salt
02:11 fracklen joined #salt
02:13 netcho joined #salt
02:14 __number5__ Is Salt changing from date-based versioning (e.g. 2016.3.0) to codename based versioning (e.g. Carbon)?
02:20 sh123124_ joined #salt
02:25 hemebond There is a codename for each major release I think.
02:25 hemebond Previous was Boron I think.
02:28 __number5__ hemebond: I know that, just this announcement email got me a bit confused https://groups.google.com/forum/#!topic/salt-users/zIFbcxamXNs
02:29 __number5__ now I see that 2016.11 is the real version of Carbon
02:29 mikecmpbll joined #salt
02:29 whytewolf the codenames have been there from the start of the date-based versioning. and are a way of referring to the releases before we know the release version
02:33 whytewolf such as the changes that gtmanfred is working on for Nitrogen
02:40 sebastian-w joined #salt
02:41 antpa joined #salt
02:43 catpiggest joined #salt
02:47 ilbot3 joined #salt
02:47 Topic for #salt is now Welcome to #salt! | Latest Versions: 2015.8.12, 2016.3.4 | Support: https://www.saltstack.com/support/ | Logs: http://irclog.perlgeek.de/salt/ | Paste: https://gist.github.com/ (please don't multiline paste into channel) | See also: #salt-devel, #salt-offtopic | Ask with patience as we are volunteers and may not have immediate answers
02:54 abonilla joined #salt
02:56 swills joined #salt
03:00 iggy nawwmz: try using virtio-scsi instead of virtio-blk
03:00 iggy it should show up as sdX then (instead of vdX)
03:03 matth in a state, how can I use a jinja variable (  {{ myvar }} )    inside :         {{ salt['pillar.get']('mypillar:{{ my var }}:data }} ?   I tried ~myvar~ as suggested but it did not do the trick.
03:03 nawwmz hi iggy
03:04 edrocks joined #salt
03:04 hemebond {{ salt['pillar.get']('mypillar:'~myvar~':data }}
03:04 hemebond {{ salt['pillar.get']('mypillar:'~myvar~':data') }}
03:04 matth just in case, "myvar" is from a "for in salt['...    is it a problem ?
03:04 nawwmz looks like cloudwatt provider doesnt allow us to statically define the device name, I defined it as vdf but the host shows it as vdb (incrementing from vda)
03:05 hemebond matth: Shouldn't be a problem.
03:06 iggy matth: make sure you put the quotes around both parts of the string
03:07 matth here is my state : http://pastebin.com/FBCw9yJU
03:07 iggy matth: 'mypillar:' ~ myvar ~ ':data'
03:07 iggy looks okay
03:08 matth does not work :D
03:08 pppingme joined #salt
03:08 matth if I replace ~ network ~ by something present in the pillar it's working.
03:08 matth a value*
03:09 iggy I don't think the issue is with your jinja
03:09 matth and if I put "network" in the "file.managed" states, I get the value.
03:09 matth I mean inside the file.
03:09 iggy what does `salt-call state.show_sls yourstate` show?
03:10 jimklo joined #salt
03:10 iggy maybe throw an -l debug in there for good measure
03:10 bastiand1 joined #salt
03:11 matth iggy: "[ERROR   ] Module/package collision: '/usr/lib/python2.7/site-packages/salt/modules/network.pyc' and '/var/cache/salt/minion/extmods/modules/network'"
03:12 matth iggy: http://pastebin.com/S5xLL6Ze
03:12 matth forgot the -l debug.
03:14 iggy change that one line to: /tmp/test_{{network}}_{{ salt['pillar.get']('infrastructure:network:eur:public:'~network }}:
03:16 iggy I'm assuming your pillar data doesn't quite match what you're expecting
03:16 iggy change that one line to: /tmp/test_{{network}}_{{ salt['pillar.get']('infrastructure:network:eur:public:'~network) }}:
03:16 iggy forgot closing parenthesis
03:19 matth iggy: that's the thing
03:19 matth it's empty.
03:20 iggy I mean... we can't fix your pillar data for you
03:20 iggy ;)
03:20 iggy maybe some more context
03:20 matth if I put : /tmp/test_{{network}}{{instance}}_{{ network }} I get the value
03:21 iggy add `salt-call pillar.get frontend:eur:reverse_proxy:network` and `salt-call pillar.get infrastructure:network:eur:public` output
03:21 iggy !protip
03:21 matth if I put : /tmp/test_{{network}}{{instance}}_{{ salt['pillar.get']('infrastructure:network:eur:public:'~network~':myotherdata') }} it does not return anything
03:21 iggy damn, bot needs a reload
03:21 iggy protip: gist supports multiple files per paste
03:21 matth I get the value
03:22 matth it's tree
03:22 matth sorry
03:22 matth network, I mean frontend:eur:reverse_proxy:network    is an array :    network:['mynetwork1','mynetwork2']
03:23 matth that's why I do the first "for in .." to get the name of the network.
03:24 hemebond Need to see the exact pillar and state.
03:24 matth ok
03:25 jalaziz joined #salt
03:26 sp0097 joined #salt
03:26 ronnix joined #salt
03:28 matth hemebond: http://pastebin.com/yHNyqieY
03:30 hemebond Where is ipfailover?
03:30 hemebond I see failover1 but no ipfailover1
03:30 hemebond network: ['ipfailover1']
03:32 matth ho god..
03:32 matth thanks!!
03:32 matth yep it's working..
03:32 matth forgot the "ip"..
03:32 matth psss
03:35 iggy next time we're going to start out with the pillar data aren't we ????
03:35 hemebond
03:36 matth haha
03:37 sh123124213 joined #salt
03:55 onlyanegg joined #salt
04:00 nawwmz for openstack, is there a way I can define the root volume?
04:01 nawwmz for the ec2 driver, I can add DeviceName: /dev/sda, Ebs.VolumesSize: 20
04:01 nawwmz and it will create sda as 20GB but I dont know how to do that with openstack
04:03 jas02_ joined #salt
04:08 awiss joined #salt
04:12 fracklen joined #salt
04:13 netcho joined #salt
04:15 justanotheruser joined #salt
04:18 armguy joined #salt
04:35 netcho joined #salt
04:46 informant1 joined #salt
04:50 akhter joined #salt
04:54 abonilla joined #salt
04:59 Ni3mm4nd joined #salt
05:04 jas02_ joined #salt
05:21 matth is it possible to get the value returned from "cloud.query" with the grains ?
05:27 mapu left #salt
05:27 stooj joined #salt
05:28 awiss joined #salt
05:29 fracklen joined #salt
05:38 mntnman left #salt
05:38 sh123124213 joined #salt
05:42 stooj joined #salt
05:57 Yee_ joined #salt
05:57 Yee_ Hi
05:57 Yee_ Good time to ask questions?
05:57 hemebond ya
05:58 manji depends on the next state results
05:58 manji :p
06:00 sh123124213 joined #salt
06:03 ivanjaros joined #salt
06:04 Yee_ http://pastebin.com/6mLAz7ep
06:04 Yee_ hi manji:
06:05 Yee_ here you can check the salt configuration for logrotate in linux
06:05 edrocks joined #salt
06:06 Yee_ confusing me the output from "salt 'SDC1*' logrotate.show_conf"
06:08 ninjada_ joined #salt
06:15 preludedrew joined #salt
06:17 ninjada joined #salt
06:18 evle joined #salt
06:29 ronnix joined #salt
06:31 sh123124213 joined #salt
06:35 netcho joined #salt
06:39 k_sze[work] joined #salt
06:48 felskrone joined #salt
06:50 jhauser joined #salt
06:50 rdas joined #salt
06:50 haam3r joined #salt
06:56 abonilla joined #salt
06:56 awiss joined #salt
06:56 ninjada joined #salt
06:59 onlyanegg joined #salt
07:04 matth is it somehow possible in python to access the function located in 'salt/cloud/clouds/nova.py' ? I would like to reuse the function to connect to openstack ( get_conn ) if that's possible.
07:05 yuhlw____ joined #salt
07:05 jas02_ joined #salt
07:08 ivanjaros3916 joined #salt
07:12 netcho joined #salt
07:15 toanju joined #salt
07:16 awiss_ joined #salt
07:17 systo joined #salt
07:23 Elsmorian joined #salt
07:25 nidr0x joined #salt
07:29 yelin joined #salt
07:30 DEger joined #salt
07:30 iggy you can write scripts that wrap LocalClient to call states/modules (which in turn call salt.cloud.*)
07:32 jas02 joined #salt
07:35 ninjada joined #salt
07:35 yelin Hi there. Like i asked yesterday, I have a problem with my Salt 2016.3.3. Salt wants to apply states even after I deleted them from the top file. Clearing the cache didn't fix anything. Does someone have an idea ? :) Thanks in advance !
07:35 fracklen joined #salt
07:36 onlyanegg joined #salt
07:37 iggy are you using salt environments?
07:37 hemebond ????
07:37 thebinary joined #salt
07:39 yelin No, i have just one environment
07:41 iggy using gitfs?
07:43 yelin iggy, No, this is in the local file system
07:43 iggy hmm, a highstate should refresh all the files anyway... seems weird
07:45 yelin I agree. I tried to stop both the master and the minion, clear cache...
07:53 subsignal joined #salt
07:54 hemebond The postgres module is so confusing.
07:59 cbiel joined #salt
08:00 cbiel hi together, i hope someone can help me solving a problem:
08:00 cbiel i need to call http apis within a state
08:00 cbiel the problem is not only "GET" but also "PUT" requests
08:01 cbiel when i use the "http.query" state, this does not work because the data seem not to get passed correctly
08:01 cbiel the state looks like this: https://gist.github.com/chbiel/b5786c2f37787d7a708e4c468e36e52d
08:02 cbiel this is for elasticsearch 5 snapshot functionality
08:02 cbiel running this command works fine: curl -XPUT 'http://localhost:9200/_snapshot/es5_backup' -d "{'es5_backup': {'type': 'fs', 'settings': {'location': '/mount/backups/es5_backup'}}}"
08:03 cbiel i would be very thankful if someone has an idea what i am doing wrong...!
08:04 awiss joined #salt
08:04 hemebond cbiel: Should data not be a dict instead of a string?
08:07 jas02_ joined #salt
08:08 edrocks joined #salt
08:08 cbiel hemebond: thanks for answering that quick. I updated the gist. I tried using a dict with string keys (in "). now trying without " around the keys
08:11 cbiel also in that case i get 400 Bad Request responses
08:11 hemebond Can you see what Elasticsearch is receiving?
08:13 amontalb1n joined #salt
08:13 iggy I didn't even know http.query could do PUTs
08:14 cbiel iggy: The log says "Doing request using PUT method". so i assume it works^^
08:14 cbiel hemebond: no, currently not. i run elasticsearch via docker and did not find a way to increase the logging level
08:14 cbiel i currently search for a parameter to increase the logging level
08:15 iggy -l debug should show a bit more detail in the requests/urllib3 modules
08:18 ronnix joined #salt
08:19 cbiel iggy: i ran state.apply with -l debug, using the default tornado backend
08:19 hemebond state.http -> modules.http -> utils.http
08:20 cbiel there is not very much more output :) only "request does not match return code"
08:20 hemebond oh
08:20 bocaneri joined #salt
08:20 hemebond Does your manual request actually return a 200?
08:21 zer0def joined #salt
08:21 cbiel hemebond: I already debugged into utils.http and know that the data gets there correctly. but i think tornado does a wrong conversion or I put the wrong format into the "data" field
08:22 hemebond data should be a dict
08:22 hemebond Wait... it could also be an encoded string.
08:22 cbiel i dont know the real return code but http.query shows "400 Bad Request" which is definitely not expected
08:23 hemebond Can you not leave off the status parameter?
08:25 JohnnyRun joined #salt
08:27 cbiel i tried to remove the status but "Either match text (match) or a status code (status) is required." would be the problem then
08:27 cbiel i now see the requests in elasticserach but not the data
08:28 hemebond I see.
08:28 ozux joined #salt
08:28 hemebond Have you tried using the execution module to do the request?
08:29 cbiel http.module?
08:29 cbiel yes
08:29 hemebond salt '*' http.query http://somelink.com/ method=PUT data='<xml>somecontent</xml>'
08:29 hemebond Works or fails?
08:29 ozux__ joined #salt
08:29 cbiel oh good idea, i will try
08:31 cbiel but does not work :( "Requesting URL http://localhost:9200/_snapshot/es5_backup using PUT method" ==> data = {'local': {'status': 400, 'error': 'HTTP 400: Bad Request'}}
08:32 hemebond You are on 2016.3+?
08:32 cbiel y 2016.3.4
08:33 cbiel i will run elasticsearch with trace, so it shows what requests come to elasticsearch
08:34 q1x joined #salt
08:36 awiss_ joined #salt
08:41 ninjada joined #salt
08:42 toanju joined #salt
08:42 hemebond Well I just tested it and it does send the data along.
08:42 hemebond Tested with the module that is.
08:43 hemebond Will test the state now.
08:43 cbiel yes i think now its a problem with elasticsearch5 (docs...)
08:43 ninjada joined #salt
08:44 hemebond Okay.
08:44 fracklen joined #salt
08:44 cbiel i see the data now in the request that comes to elasticsearch
08:44 cbiel there is something wrong with the payload. not correctly documented
08:44 cbiel i think
08:44 cbiel i will play around now and try to find out whats wrong
08:44 cbiel thanks very much for the help!
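For reference, the PUT that http.query was being asked to make can be sketched with the Python stdlib alone (URL and body are taken from the curl command earlier in the thread; this only constructs the request object, it does not send anything):

```python
import json
import urllib.request

# Build (but don't send) the snapshot-repository PUT from the curl example.
body = {"es5_backup": {"type": "fs",
                       "settings": {"location": "/mount/backups/es5_backup"}}}
req = urllib.request.Request(
    "http://localhost:9200/_snapshot/es5_backup",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
print(req.get_method())
# -> PUT
```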
09:00 geomacy joined #salt
09:02 awiss joined #salt
09:02 bluenemo joined #salt
09:07 akhter joined #salt
09:08 geomacy joined #salt
09:08 jas02_ joined #salt
09:09 mikecmpbll joined #salt
09:10 ninjada joined #salt
09:13 samodid joined #salt
09:18 fracklen_ joined #salt
09:20 theproxy joined #salt
09:21 hlub urgh, x509.certificate_managed state started failing and gives a rather unclear error message: "SaltInvocationError: PEM does not contain a single entry of type CERTIFICATE: 'signing_private_key'"
09:22 onlyanegg joined #salt
09:25 Miouge joined #salt
09:33 ronnix joined #salt
09:35 Rumbles joined #salt
09:36 dkrae joined #salt
09:45 s_kunk joined #salt
09:51 jeddi joined #salt
09:52 losh joined #salt
09:54 keimlink joined #salt
09:55 om2_ joined #salt
10:05 DEger joined #salt
10:14 N-Mi joined #salt
10:20 abednarik joined #salt
10:25 awiss joined #salt
10:34 netcho joined #salt
10:36 Neighbour How can I get the result of salt-cloud functions (like avail_images) from jinja?
10:36 ninjada joined #salt
10:37 Neighbour specifically: `salt-cloud -f avail_images cloud-provider owner=self`
10:40 babilen Neighbour: Is that https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cloud.html#salt.modules.cloud.list_images ?
10:43 cyborg-one joined #salt
10:43 fracklen joined #salt
10:45 Neighbour babilen: unfortunately no, it doesn't accept an owner parameter
10:47 Neighbour ERROR executing 'cloud.list_images': The following keyword arguments are not valid: owner=self
10:50 Neighbour which is the same as with: https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.cloud.html#salt.runners.cloud.list_images
10:51 babilen Neighbour: So it overgenerates?
10:52 babilen Might not be too hard to implement that functionality, submit a PR and deploy it locally via _modules in the interim. Or you might be able to filter the list in jinja.
10:53 rburkholder joined #salt
10:53 awiss joined #salt
10:57 Neighbour except I don't understand how the provider-generic module calls provider-specific functions
11:00 amcorreia joined #salt
11:02 DanyC joined #salt
11:07 SaltyVagrant joined #salt
11:08 schinken joined #salt
11:09 okolesnykov joined #salt
11:09 edrocks joined #salt
11:09 jas02_ joined #salt
11:14 Trauma joined #salt
11:18 fracklen joined #salt
11:21 CruX__ joined #salt
11:22 CruX__ Hi everyone, when writing an execution module and calling other modules via the __salt__ object, is there a standard way to check for errors that occurred within that module?
11:23 CruX__ I checked the code of existing modules and some are checking 'retcode', is that universal?
11:26 Reverend is there any way to check if a state is included as part of the current run, and base something on that? eg. install something on php54 IF nginx is installed
11:28 ravenx joined #salt
11:28 ravenx for my salt pillar key values:
11:28 ravenx i have one value beginning with:     a %
11:29 ravenx so for example:      foo:  %bar
11:29 ravenx when i run salt '*' pillar.items i get:
11:29 ravenx found character '%' that cannot start any token; line 21
11:29 ravenx how can i use that special character?
11:29 Reverend escape it
11:29 ravenx with a \?
11:30 Reverend gimme a segment of your sls that grabs it.
11:31 ravenx urls_sso_app_url: %(host)s%(sso)s
11:31 Reverend is that your pillar, or the sls?
11:32 Reverend i assume somewhere you've got salt['pillar.get'](something:urls_sso_app_url) ?
11:32 ravenx that is the pillar:  app-name/init.sls
11:32 ravenx ahhh
11:32 ravenx yes.
11:32 ravenx sso_app_url = {{ pillar['app-name']['foo'] }}
11:33 Reverend you probably should escape it in your pillar. try this: `urls_sso_app_url: '\%(host)s\%(sso)s'`
11:33 ravenx sure thing.
11:34 ravenx gonna give that a shot after lunch
11:34 ravenx thanks a bunch
11:34 Reverend =after lunch? it's barely breakfast :P
11:34 Reverend also - i don't know if that will work... so just tinker or ask AndreasLutro... he'll fucking know. he always does.
11:36 dendazen joined #salt
11:38 ravenx lol
11:38 ravenx Reverend: thanks for the advice :)
11:38 Reverend np
11:38 sh123124213 joined #salt
11:42 ravenx is there a command to populate pillar data
11:42 ravenx i forgot
11:42 ravenx kinda like a state.show_sls
11:42 ravenx i have a config i would liek to see
11:44 sh123124213 joined #salt
11:46 Reverend yeah, just do a pillar.items
11:46 Reverend if you want to read it from the terminal, right?
11:48 ravenx yup
11:48 ravenx but iu woul dlike to see the tempalte populated too
11:48 ravenx so in pillar.items i see it as key:values, similar to how i have it in /srv/pillar/app/init.sls
11:49 ravenx hjowever, i would liek to see it populate my /srv/salt/app/configuration.conf
11:49 ravenx so the actual file, after rendering, that will get copied.
11:50 ravenx aah i think i am missing a file.managed lol
11:50 Miouge joined #salt
11:53 ravenx like a state.sls test=True shows me the high level
11:53 ravenx state.show_sls still doesn't print the file to me.
11:54 Reverend i normally just use test=True to get a file.managed list of changes.
11:54 Reverend usually works quite well :)
11:54 Reverend if it doesn't change it, just cat it and read it
12:06 rpb joined #salt
12:07 av_ joined #salt
12:09 Reverend -watch : file:/something/* <-- is legit?
12:10 Reverend answer: yes. yes it is
12:10 DanyC left #salt
12:22 ravenx yeah i have somethign resembling that.
12:25 ninjada joined #salt
12:26 ninjada joined #salt
12:26 ninjada joined #salt
12:34 XenophonF joined #salt
12:34 lubyou_ joined #salt
12:46 awiss joined #salt
12:47 ronnix joined #salt
12:52 assafshapira joined #salt
12:54 lubyou_ I have a grain that has nested entries. How can I target those from the CLI?
12:57 ravenx is there a way to preview the jinja substitution for a file before highstate?
12:57 Rumbles grain['something']['somethingelse'] ?
12:57 Rumbles run test=True ravenx
12:57 Rumbles at the end of the highstate
12:57 Rumbles or state.apply
12:57 ravenx right but i have been using state.show_sls
12:58 ravenx it only shows the cmd, the mode, the user and cwd
12:58 ravenx not the actual file itself
12:58 Rumbles never used that, sorry
12:58 Rumbles if the file content is updated and you have not changed the default state-ouput it should show the changes in the file
12:58 ravenx ah i see
12:59 Rumbles if the file isn't going to change then it wouldn't get printed out
12:59 ravenx cuz this is a new file.
13:00 Rumbles maybe if you use state-output=changes it might print out?
13:01 Rumbles so 'sudo salt "*" state.highstate state-output=changes test=True'
13:06 fracklen joined #salt
13:08 ravenx ah,
13:08 ravenx that looks handy
13:08 ravenx let me try.
13:08 abednarik joined #salt
13:08 fracklen_ joined #salt
13:12 edrocks joined #salt
13:15 scoates joined #salt
13:16 Pulp joined #salt
13:19 xbglowx_ joined #salt
13:20 Electron^- joined #salt
13:21 Brew joined #salt
13:23 fracklen joined #salt
13:24 xbglowx joined #salt
13:25 okolesnykov joined #salt
13:25 edrocks joined #salt
13:25 lubyou_ I have two custom grains. Matching on them individually works fine, but trying to do compound match doesnt seem to work
13:25 lubyou_ sudo salt -C 'G@domain_role:member_workstation and G@last_logged_on_user:sam_account_name:some.user' test.ping
13:25 lubyou_ this should in theory work, shouldnt it?
13:26 Fenlee joined #salt
13:32 subsignal joined #salt
13:32 awiss joined #salt
13:33 numkem joined #salt
13:34 cyteen joined #salt
13:39 XenophonF lubyou_: it sounds plausible and might even work
13:40 XenophonF are you sure both grains are set like you expect?
13:40 XenophonF oh wait
13:40 lubyou_ they are
13:40 XenophonF is last_logged_on_user a dict?
13:40 lubyou_ yes
13:41 XenophonF yeah
13:41 XenophonF don't do that
13:41 XenophonF i'm not sure how you'd match on a field of a dict
13:41 XenophonF i'd have to RTFS
13:41 XenophonF you're better off keeping it simple
13:41 lubyou_ well it works if I use them individually
13:41 XenophonF set the custom grain to a string
13:41 XenophonF i'm sure it does
13:42 XenophonF i'll repeat myself: you're better off keeping it simple
13:42 XenophonF if you need/want the dict for something else, put it into a different grain
13:42 abonilla joined #salt
13:42 lubyou_ https://github.com/saltstack/salt/issues/31228 maybe im hitting that one
13:42 saltstackbot [#31228][OPEN] Compound matcher with multiple conditions not matching properly | I'm experiencing an issue with Salt's compound matching. I've got a machine named `qa-tool-util`. It has the following grains, among others not displayed:...
13:43 XenophonF could be but since you're in control of everything, you're better off keeping it simple
13:43 XenophonF just add a grain named last_logged_on_username
13:43 XenophonF that way you're doing two string matches, which i'm almost certain should work without any issues
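A toy sketch (not Salt's real implementation, which lives in its matcher code) of how a colon-delimited grain expression like `G@last_logged_on_user:sam_account_name:some.user` walks nested grains, using the grain names from this discussion:

```python
# Simplified: split the expression on ':', treat the last piece as the
# wanted value, and walk the remaining pieces through nested dicts.
def grain_match(grains, expr):
    *path, want = expr.split(":")
    node = grains
    for key in path:
        node = node.get(key, {}) if isinstance(node, dict) else {}
    return node == want

grains = {
    "domain_role": "member_workstation",
    "last_logged_on_user": {"sam_account_name": "some.user"},
}
print(grain_match(grains, "last_logged_on_user:sam_account_name:some.user"))
# -> True
```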
13:47 dendazen joined #salt
13:48 ravenx i still can't seem to escape the % that i have for a pillar variable
13:48 ravenx anyone have any ideas
13:48 ravenx essentially in one of my pillar init.sls:  i have a value pair:    atestkey: %asdf
13:48 ravenx the percentage sign is breaking things.
13:49 XenophonF wrap it in single quotes?
13:50 abednarik joined #salt
13:53 ravenx lemme try
13:54 ravenx oh wow that worked like a charm
13:54 XenophonF you should give the YAML spec a glance
13:54 XenophonF http://yaml.org
13:54 XenophonF lots of goodies in there
13:55 ravenx sounds good, thanks XenophonF
13:55 XenophonF np!
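For the record, the fix looks like this: % is reserved in YAML as the directive indicator (as in %YAML), so a plain scalar cannot start with it, but a quoted scalar can.

```yaml
# broken: a plain scalar cannot start with '%'
# urls_sso_app_url: %(host)s%(sso)s

# fixed: single quotes make it an ordinary string
urls_sso_app_url: '%(host)s%(sso)s'
```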
13:57 jeddi joined #salt
13:59 assafshapira I'm new to salt. following the getting started guides I'm having problems with state files
13:59 subsignal joined #salt
13:59 XenophonF welcome to the club, assafshapira
13:59 subsignal joined #salt
13:59 XenophonF can you post the state files or error messages to something like gist.github.com or paste.debian.net?
14:00 assafshapira trying to apply a state file to a specific minion I'm getting the following
14:00 assafshapira No matching sls found for 'executorBeacon' in env 'base'
14:00 assafshapira executorBeacon is the name of the state
14:00 XenophonF executorBeacon should be the name of the file containing the state definitions
14:00 assafshapira it's located in /srv/salt/
14:01 XenophonF so you have /srv/salt/executorBeacon.sls?
14:01 assafshapira tried also with executorBeacon.sls
14:01 assafshapira targeting the minion works well with test.ping etc...
14:02 akhter joined #salt
14:02 assafshapira it's an installation on CentOS, from the salt repo
14:03 assafshapira tried also to set the file_roots in /etc/salt/master
14:03 assafshapira same results
14:03 XenophonF you should have a file named /srv/salt/executorBeacon.sls
14:03 XenophonF yes?
14:04 assafshapira yes
14:04 assafshapira it's there
14:05 assafshapira any special permissions for the .sls files?
14:05 XenophonF and file_roots is set to the default, right?
14:06 assafshapira yes, I tried leaving file_roots at its defaults
14:06 XenophonF here, run this command on your minion:
14:06 XenophonF salt-call cp.list_master
14:06 XenophonF post the output to gist.github.com, please
14:06 assafshapira salt-call from the minion, or from the master?
14:07 subsignal joined #salt
14:07 XenophonF run the salt-call command on the minion as root
14:07 assafshapira OK, just a sec
14:08 assafshapira this is the output I get
14:08 assafshapira local:
14:09 assafshapira I'm running this through a syndic server
14:09 assafshapira not sure if it makes any difference
14:09 XenophonF you'll need to check the master/syndic log files for errors
14:10 XenophonF perhaps there's a replication failure?
14:10 XenophonF i don't use syndic currently
14:11 assafshapira I'm running the Master and the syndic in debug mode now.
14:11 assafshapira should I expect some error on the console?
14:11 subsignal joined #salt
14:11 XenophonF i would think, but check the log files
14:13 assafshapira I can see the cp.list_master command going through to both the syndic and master
14:13 Mattch joined #salt
14:14 assafshapira with return code 0
14:15 assafshapira just to make sure I'm not missing anything. the .sls file exists only on the top master, not on the syndic server. is this OK?
14:16 amontalban joined #salt
14:16 theproxy joined #salt
14:16 subsigna_ joined #salt
14:17 ronnix_ joined #salt
14:18 John_Kang joined #salt
14:21 Mattch joined #salt
14:24 assafshapira no errors in the logs
14:25 fracklen joined #salt
14:27 ninjada joined #salt
14:28 assafshapira checked from a different minion
14:28 assafshapira i'm getting
14:28 assafshapira "/usr/lib/python2.6/site-packages/salt/grains/core.py:1493: DeprecationWarning: The "osmajorrelease" will be a type of an integer"
14:30 John_Kang I need to concat two jinja variables into another jinja variable
14:30 John_Kang {% set a = '123' %}
14:30 John_Kang {% set b = '456' %}
14:30 John_Kang {% set c = a ~ b %}
14:30 John_Kang like this
14:30 John_Kang but i don't know how to do that, googled but failed :(
14:31 John_Kang can someone give me advice ?
14:32 Rumbles that looks right to me, what does it do?
14:33 John_Kang i expected 123456
14:33 John_Kang but it failed with syntax
14:33 John_Kang of jinja
14:33 DEger joined #salt
14:34 DEger joined #salt
14:34 John_Kang sorry, there wasn't a syntax error but c was just empty
14:35 AndreasLutro the jinja you've shown here is correct
14:35 AndreasLutro the bug is likely somewhere else
14:35 Rumbles oh good I'm not going mad :)
14:35 AndreasLutro or you're not showing us your real code
14:35 fracklen joined #salt
14:35 John_Kang that's all, the state just to echo c with cmd.run
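(Editor's note: the snippet John_Kang posted is valid Jinja; the `~` operator stringifies both operands before concatenating. A stdlib-only Python sketch of that semantics — the `jinja_tilde` helper is illustrative, not a Salt or Jinja API:)

```python
# Jinja's "~" coerces both sides to strings before concatenating,
# so mixed str/int operands work where "+" would raise TypeError.
def jinja_tilde(left, right):
    """Illustrative stand-in for {% set c = a ~ b %}."""
    return str(left) + str(right)

print(jinja_tilde('123', '456'))   # 123456
print(jinja_tilde('port-', 8080))  # port-8080
```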
14:35 CruX__ Is there a standard way to detect failures occurring when one module calls another via __salt__?
14:36 AndreasLutro CruX__: no
14:37 CruX__ So whats the approach if I have a module that does multiple calls and could fail in one of them?
14:37 John_Kang AndreasLutro: Rumbles: thank you
14:37 CruX__ Continuing after an error doesn't really make sense
14:38 AndreasLutro depends on which function you're calling, check what it's supposed to return if there is a failure
14:38 AndreasLutro could be false, could be a string, could be a dict with the key 'result'
14:38 CruX__ yeah, thats what I wanted to avoid, thanks though
14:39 CruX__ Am I doing something rather special here? Having some sort of standardised error mechanics seems like a no-brainer to me…
14:39 AndreasLutro nope
14:40 AndreasLutro just hitting legacy code crap
14:41 racooper joined #salt
14:44 CruX__ alright, thanks for the information AndreasLutro
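(Editor's note: since, as AndreasLutro says, there is no standard failure convention for `__salt__` calls, a small defensive wrapper is one workaround. This is a sketch under the assumptions stated in the log — a bare `False`, or a dict whose `result` key is `False`, signals failure — and `call_failed` is not an official Salt API:)

```python
def call_failed(ret):
    """Heuristic failure check for __salt__['mod.func'](...) returns.

    Covers the two unambiguous conventions mentioned above: a bare
    False, or a state-style dict with result: False.  String error
    messages are function-specific and still need per-call handling.
    """
    if ret is False:
        return True
    if isinstance(ret, dict) and ret.get('result') is False:
        return True
    return False

print(call_failed(False))                 # True
print(call_failed({'result': False}))     # True
print(call_failed({'result': True}))      # False
print(call_failed('some normal output'))  # False
```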
14:47 okolesnykov joined #salt
14:49 ravenx is it possible to call multiple modules at once
14:49 ravenx salt 'server' test.ping test.something else
14:50 gtmanfred it is not
14:50 ravenx d'aww
14:50 ravenx oaky
14:50 ravenx thanks though
14:53 patarr joined #salt
14:53 assafshapira <XenophonF>, thanks for the help with the initial debugging. when working with master->syndic->minion, the minion can't see the .sls file on the master.
14:54 assafshapira if the file is on the syndic, the minion can see the files. no issues
14:54 assafshapira is this how it's supposed to work?
14:56 assafshapira running commands works well from the master to minions with the syndic on the way, with no issues. .sls files aren't working...
14:56 ronnix joined #salt
14:57 Tanta joined #salt
14:59 bowhunter joined #salt
15:01 abonilla joined #salt
15:03 swills joined #salt
15:21 jas02 joined #salt
15:24 keltim joined #salt
15:27 ALLmightySPIFF joined #salt
15:28 ninjada joined #salt
15:29 akitada joined #salt
15:35 XenophonF I don't know, not having used syndic.  I think you have to replicate the SLS files, but I don't know how.
15:35 mede joined #salt
15:36 assafshapira joined #salt
15:36 mede guys - stupid question: i have the sqlplus rpm from oracle and i want to install it ... using pkg.installed but the state fails... although the package gets installed.  What am i missing?
15:37 XenophonF post the error message or relevant log file entries to gist.github.com or paste.debian.net or ix.io
15:37 XenophonF my ESP super powers aren't working today
15:37 gtmanfred and your state
15:37 XenophonF had too much to drink last night
15:38 * XenophonF starts weeping uncontrollably.
15:38 gtmanfred heh
15:38 gtmanfred it will be ok
15:38 XenophonF i know
15:39 * gtmanfred is also pretty hung over
15:39 abonilla joined #salt
15:40 xmj trump drinking?
15:40 gtmanfred yup
15:40 XenophonF tbh i would have drunk the same amount if it went the other way
15:40 gtmanfred same
15:41 gtmanfred anyway, that talk goes in #salt-offtopic
15:41 abednarik joined #salt
15:43 Ni3mm4nd joined #salt
15:43 nicksloan joined #salt
15:46 tiwula joined #salt
15:47 DEger joined #salt
15:48 zer0def quick question - is there a config option to set how often are presence events being sent?
15:48 DEger_ joined #salt
15:49 gtmanfred uhhh, maybe
15:49 gtmanfred one second
15:50 lumtnman joined #salt
15:51 gtmanfred it does not look like it
15:56 XenophonF mede: ENQ?
15:56 XenophonF oh, he dropped off :(
15:56 gtmanfred aww, lame, probably figured it out
15:56 abonilla joined #salt
15:56 XenophonF good for him but i wish people would close the loop and say how they figured something out
15:56 gtmanfred heh
15:57 eriko_ joined #salt
15:57 gtmanfred XenophonF: https://xkcd.com/979/
15:57 * XenophonF indulges in a sensible chuckle.
15:58 XenophonF that's God's truth right there
15:58 raspado joined #salt
15:59 zer0def gtmanfred: a quick burst of grep-driven development tells me that `handle_presence()` on the master is being called every `loop_interval`, which defaults to 60 seconds
15:59 gtmanfred ahh cool, yeah we should document that loop_interval can be used to modify how often presence_events fire
16:00 _JZ_ joined #salt
16:01 zer0def technically, whoever was documenting it might've taken it for granted, given how the master's `run()` operates
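(Editor's note: per zer0def's reading of the source — not documented behaviour at the time — presence checks ride on the master's maintenance loop, so the knob is `loop_interval` rather than a dedicated presence setting. A sketch of the relevant master config:)

```yaml
# /etc/salt/master
presence_events: True   # emit presence events on the event bus
loop_interval: 30       # seconds between master maintenance passes,
                        # which is also how often handle_presence()
                        # runs (default: 60)
```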
16:01 raspado with openstack/nova module is it possible to name the volumes? they are named generically
16:02 gtmanfred if you use the nova or cloud module/state to build them, yes it is
16:02 gtmanfred if you are using the nova cloud driver and building it under block_device, then no
16:02 raspado ahh
16:03 DEger joined #salt
16:05 raspado gtmanfred: is it the meta nova.image_meta_set ?
16:06 gtmanfred uhh, it is just the way that the volume is created
16:06 gtmanfred the novaclient boot option creates the volume for salt-cloud, which doesn't give us the option to name it
16:06 raspado oh okay i see, in volume create
16:06 gtmanfred we don't create the volume with nova volume-create while creating a server
16:06 fracklen joined #salt
16:07 DammitJim joined #salt
16:07 watersoul joined #salt
16:10 edrocks joined #salt
16:15 yuhlw_____ joined #salt
16:21 pcn What is the newer syntax for salt['pillar.get']?
16:21 gtmanfred salt.pillar.get
16:21 gtmanfred it isn't a newer syntax
16:21 gtmanfred just another way to do it
16:22 N-Mi joined #salt
16:22 pcn Which syntax is encouraged?
16:22 saras joined #salt
16:22 gtmanfred whichever works for you
16:23 saras https://gist.github.com/sarasfox/de51ab65c9033adaded3b6c2d9b1faf1 any idea what i am missing
16:24 gtmanfred saras: it looks fine, what is the problem?
16:24 saras the file is not on the minion
16:25 gtmanfred did you do a state.apply on it?
16:25 gtmanfred and what did you get back
16:25 saras yes
16:25 gtmanfred what did you get back
16:26 saras https://gist.github.com/sarasfox/2a5e9658a20975a3baaefd7b45ff77ca
16:26 gtmanfred that doesn't show the file.managed run
16:27 saras the init.sls is in /srv/salt/updates
16:27 gtmanfred do you have updates in the top.sls?
16:29 ninjada joined #salt
16:31 saras https://gist.github.com/sarasfox/44646769e7cd67f01edbfb45c7719e19
16:32 saras i got that fixed now it is broken
16:32 gtmanfred you have it misspelled
16:32 gtmanfred file.mananged != file.managed
16:32 gtmanfred https://gist.github.com/sarasfox/de51ab65c9033adaded3b6c2d9b1faf1#file-init-sls-L2 misspelled in the gist too
16:33 abednarik joined #salt
16:33 yelin left #salt
16:34 saras https://gist.github.com/sarasfox/d2f2e44ee5eea8bffa3f44211504d7d5
16:34 saras ok fine how do I tell salt where the file is on the master
16:35 gtmanfred move the file to /srv/salt/... then do salt://...
16:35 gtmanfred the fileserver is based on file_roots, which defaults to /srv/salt
16:35 saras ok
16:35 gtmanfred https://docs.saltstack.com/en/latest/ref/file_server/file_roots.html
16:37 jas02 joined #salt
16:38 saras cp thanks @gtmanfred
16:38 gtmanfred no problems
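(Editor's note: putting the pieces of this exchange together, a minimal working layout. The script name `update.sh` and the destination path are assumptions for illustration:)

```yaml
# /srv/salt/top.sls
base:
  '*':
    - updates

# /srv/salt/updates/init.sls  (note: file.managed, not file.mananged)
push_update_script:
  file.managed:
    - name: /usr/local/bin/update.sh
    - source: salt://updates/update.sh  # resolves under file_roots,
                                        # /srv/salt by default
    - mode: '0755'
```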
16:40 Shirkdog joined #salt
16:40 Shirkdog joined #salt
16:48 yawniek joined #salt
16:49 yawniek after an update i get for file.managed: Unable to manage file: 'module' object has no attribute 'TEMPFILE_PREFIX'  . any ideas?
16:49 promorphus joined #salt
16:49 Miouge joined #salt
16:52 yawniek https://github.com/saltstack/salt/pull/37022/files i guess its because of this PR
16:52 saltstackbot [#37022][MERGED] Use a default prefix for the mkstemp utils function | This also moves this function into salt.utils.files so that alongside it can be an attribute (`TEMPFILE_PREFIX`) which can be used to get the default tempfile prefix. This allows for simpler logic for properly cleaning the tempfiles created by the `__clean_tmp()` function in `salt.modules.file`....
16:52 fracklen_ joined #salt
16:53 frackle__ joined #salt
16:54 yawniek ok restarting the salt minions o/c
16:54 gtmanfred what version are you running on?
16:54 iggy pcn: salt.pillar.get won't do nested lookups (with :)
16:54 gtmanfred iggy: it should
16:54 gtmanfred cause it should be the same thing as salt['pillar.get']
16:55 gtmanfred pillar.get() won't
16:55 iggy is there a difference between the two?
16:55 gtmanfred yes
16:55 gtmanfred salt.pillar.get is the pillar.get module
16:55 gtmanfred pillar.get is the pillar object which is just a dictionary
16:55 iggy I mean, I can certainly try it, but I assumed that would default to the normal python .get() function
16:56 gtmanfred it does not, it is actually the pillar.get module
16:56 fracklen joined #salt
16:58 fracklen_ joined #salt
16:58 iggy looks good... guess I never actually tried that (vs assuming it worked the same as pillar.get() )
16:58 gtmanfred :)
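(Editor's note: the two spellings discussed above are interchangeable — both invoke the `pillar.get` execution module, including nested `:` lookups. The pillar key `app:port` is illustrative:)

```jinja
{# identical results; pick one style and stay consistent #}
{{ salt['pillar.get']('app:port', 8080) }}
{{ salt.pillar.get('app:port', 8080) }}
{# plain dict access, by contrast, does NOT split on ":" #}
{{ pillar.get('app', {}).get('port', 8080) }}
```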
17:00 Miouge joined #salt
17:00 sh123124213 joined #salt
17:01 promorphus joined #salt
17:01 Trauma joined #salt
17:02 jimklo joined #salt
17:10 pcn Thanks for checking iggy. How much of the salt object's __dict__ is being exposed as properties?
17:10 gtmanfred none should be properties
17:11 gtmanfred everything in salt is an execution module
17:12 orionx joined #salt
17:14 Reverend yo chaps. any way to make salt master only return data that's changed?
17:14 Reverend instead of all the 'is already blah. is unchanged... is blerp'
17:15 Reverend i just want 'changed, reload, etc'
17:15 saras ok how should I push a powershell script to a minion and run it
17:15 jas02_ joined #salt
17:15 gtmanfred Reverend: yes
17:15 gtmanfred Reverend: state_output and state_verbose
17:15 Reverend i take it those work like test= ?
17:15 gtmanfred Reverend: they do not
17:16 Reverend TO THE GOOGLES
17:16 gtmanfred Reverend: https://docs.saltstack.com/en/carbon/ref/output/all/salt.output.highstate.html
17:16 Reverend thanks GT.
17:16 Reverend <3
17:16 gtmanfred np
17:16 gtmanfred if you only want changes, i would say state_verbose=False state_output=changes in /etc/salt/master
17:16 promorphus joined #salt
17:17 Reverend perfect.
17:17 Reverend thanks gorgeous.
17:17 gtmanfred or you can pass --state-verbose and --state-output to the salt \* state.highstate setting
17:17 gtmanfred <3
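(Editor's note: the two ways gtmanfred mentions, side by side — the master-config form applies to every run, the CLI flags to a single invocation:)

```yaml
# /etc/salt/master — suppress unchanged states in all state output
state_verbose: False
state_output: changes
```

One-off equivalent on the command line: `salt '*' state.highstate --state-verbose=False --state-output=changes`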
17:17 JohnnyRun joined #salt
17:18 orionx joined #salt
17:19 nflicker joined #salt
17:20 zer0def joined #salt
17:23 akhter joined #salt
17:23 saras https://gist.github.com/sarasfox/b79bedcb67b72fd5701608d84e566146 hum
17:26 iggy could just do cmd.script
17:26 gtmanfred ^^
17:26 geomacy joined #salt
17:27 patrek joined #salt
17:27 saras thanks iggy
17:29 funabashi joined #salt
17:30 ninjada joined #salt
17:41 edrocks joined #salt
17:42 s_kunk joined #salt
17:42 s_kunk joined #salt
17:43 derrickm joined #salt
17:43 akhter joined #salt
17:54 Lionel_Debroux_ joined #salt
17:58 saintromuald__ joined #salt
17:58 akhter joined #salt
18:00 ecdhe joined #salt
18:00 pipps joined #salt
18:04 sarcasticadmin joined #salt
18:04 sh123124213 joined #salt
18:05 promorphus joined #salt
18:09 om2 joined #salt
18:11 om2_ joined #salt
18:14 mavhq joined #salt
18:14 saras https://gist.github.com/sarasfox/0078955bc74e62fee9b634be8114fb13 how does this look
18:15 impi joined #salt
18:15 gtmanfred that should work
18:17 DammitJim joined #salt
18:18 stooj joined #salt
18:24 sp0097 joined #salt
18:24 akhter joined #salt
18:30 sh123124213 would it make sense to add a delay to when master workers are spawned? for example, have 20 start very fast and the rest with an incremental +1 second each
18:31 sh123124213 I'm guessing that the first 20 would start serving normally right ?
18:31 Edgan joined #salt
18:32 pipps99 joined #salt
18:34 sh123124213 gtmanfred : ? :)
18:34 gtmanfred i have no strong opinions
18:34 gtmanfred but at first glance, i don't see the point
18:35 sh123124213 load with 300 workers
18:35 sh123124213 goes crazy
18:35 gtmanfred how many cores do you have?
18:35 sh123124213 16
18:35 sh123124213 virtual
18:35 gtmanfred how many minions do you have?
18:35 sh123124213 3k
18:36 sh123124213 separated into 4 syndics
18:36 gtmanfred i don't think you really need 300 workers...
18:37 sh123124213 are workers related to cores ?
18:37 gtmanfred it sounds like you want them to be more similar to the old apache processes where each connection gets a new process.  Think of them more like nginx threads, (minus the fact that python/salt doesn't thread)
18:37 sh123124213 I'm guessing they execute async tasks so some might just be idle
18:37 gtmanfred they are not the same, but it is closer to how nginx works than apache
18:38 gtmanfred sh123124213: if you can't start 300 workers at the start of salt, then you will never benefit from having 300 workers running at max at the same time
18:39 sh123124213 lets say I sent 300 async tasks that take 3 seconds
18:39 sh123124213 the 301 is going to wait ?
18:39 sh123124213 or die ?
18:39 gtmanfred from my understanding, those workers only receive and disseminate information to the minions
18:40 gtmanfred so if you have 301 workers returning data at the exact same time, then you will lose data from one minion
18:40 gtmanfred and it will appear as if it never responded
18:42 gtmanfred but that is just how I understand it working, it could be wrong
18:43 derrick joined #salt
18:43 samodid joined #salt
18:43 iggy but that seems like an unlikely scenario
18:43 gtmanfred yes, i agree
18:44 Guest9874 have a question someone might hopefully answer. I am targeting minions with pillar data.. in my case ec2_pillar. Some of the key names have colons in them. ie. ec2_tags: aws:autoscaling:groupName. In the top file, how can I escape them?
18:44 sh123124213 iggy, how do you think it works ? :)
18:45 sh123124213 I know tornado has something to do with it together with zmq
18:45 iggy Guest9874: you can change the delimiter
18:45 sh123124213 I'm guessing workers connect to zmq to get tasks and process them
18:45 sh123124213 but that seems unlikely too
18:45 geomacy joined #salt
18:46 iggy sh123124213: I agree with gtmanfred on how it works... I just think it's kind of unlikely that 300 minions would be sending all at the same time
18:46 sh123124213 why not ?
18:46 Guest9874 iggy: so there is no other way? hmm
18:46 iggy Guest9874: not that I know of... does changing the delimiter not work for you?
18:47 Guest9874 yeah, it's all autogenerated
18:47 Guest9874 aws.. although in ec2_pillar it would be nice to replace colons with - or something
18:48 sh123124213 can't you just double or single quote them ?
18:48 iggy Guest9874: I mean in the pillar.get call, you can specify the delimiter it uses
18:48 iggy {{ salt['pillar.get']('foo|bar', delimiter='|') }}
18:49 iggy or in your case...
18:49 gtmanfred or in this case, salt.pillar.get('aws:autoscaling:groupName', delimiter='-')
18:49 Guest9874 oh I see...
18:49 iggy {{ salt['pillar.get']('foo:bar:fu:aws', delimiter='|') }}
18:49 gtmanfred noice, didnt' know that one
18:49 Guest9874 iggy: since this is the top file, is there any way to do something similar
18:50 Guest9874 on pillar matching
18:50 iggy oh geez
18:50 Guest9874 lol
18:50 iggy I'm not sure
18:50 gtmanfred Guest9874: i am not aware of one... :/ it would be worth trying to escape it with \:
18:50 Guest9874 I'll make a custom tag
18:50 gtmanfred but i don't think it will work
18:50 iggy https://docs.saltstack.com/en/latest/topics/targeting/compound.html#alternate-delimiters
18:51 iggy so yeah, it's possible
18:51 gtmanfred oh, dope
18:51 derrickm wow cool
18:51 iggy put a big f'ing comment above it because anyone that sees that is going to be like WTF!!!!!!1!!!!!!
18:52 gtmanfred +1 ^^
18:52 derrickm thanks iggy :)
18:52 derrickm LOL
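(Editor's note: a sketch of the alternate-delimiter form from the linked docs, using the AWS tag from this discussion; the PCRE value `.*email.*` is illustrative:)

```shell
# "J|" switches the pillar-PCRE matcher's delimiter from ":" to "|",
# so the literal colons in "aws:autoscaling:groupName" survive intact.
# As derrickm notes later, the value part is matched as PCRE.
salt -C 'J|@ec2_tags|aws:autoscaling:groupName|.*email.*' test.ping
```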
18:56 stooj joined #salt
18:58 akhter joined #salt
18:59 akhter joined #salt
19:01 nicksloan joined #salt
19:04 fracklen joined #salt
19:16 derrickm iggy... ugh these are evil. For Alt. Delimiters am I far off on this one? salt -C 'J|@ec2_tags|aws:autoscaling:groupName|*email*|ec2_tags:aws:autoscaling:groupName:*email*' test.ping
19:17 jas02_ joined #salt
19:19 numkem joined #salt
19:21 Chandler_ joined #salt
19:21 iggy I don't think so unless you have a very odd pillar structure
19:24 derrickm here is the pillar
19:24 derrickm ec2_tags: ---------- Name:     i-abc1234 aws:autoscaling:groupName:     ops-email-service
19:25 sh123124213 gtmanfred: thnx for the help. I actually only want to make the startup time of the master faster and be able to handle requests as soon as possible
19:25 sh123124213 I enabled grains_cache, that helped a lot
19:25 derrickm maybe I should just give up and make a custom tag without the colons, but that's not my style ugh, I don't want to lose to this hah
19:26 sh123124213 but since I want to have lots of workers available thats why I was thinking to make them start slower.
19:27 edrocks joined #salt
19:27 Tanta one tip for fast execution of salt I learned is pkg.installed rather than pkg.latest
19:27 derrickm ah it does work afterall
19:28 derrickm :)
19:28 derrickm forgot this is PCRE
19:31 ninjada joined #salt
19:31 mohae joined #salt
19:31 pipps joined #salt
19:32 pipps joined #salt
19:32 toanju joined #salt
19:32 subsignal joined #salt
19:33 hasues joined #salt
19:38 vegasq joined #salt
19:39 vegasq joined #salt
19:40 DEger joined #salt
19:40 vegasq joined #salt
19:44 mikecmpbll joined #salt
19:47 Miouge joined #salt
19:47 lubyou_ joined #salt
19:48 Xopher joined #salt
19:49 pipps joined #salt
19:53 tapoxi joined #salt
19:55 hasues left #salt
20:07 nicksloan joined #salt
20:09 pipps joined #salt
20:09 edrocks joined #salt
20:09 Trauma_ joined #salt
20:11 impi joined #salt
20:15 pipps joined #salt
20:20 XenophonF joined #salt
20:22 hemebond joined #salt
20:26 fracklen_ joined #salt
20:32 ninjada joined #salt
20:38 Miouge joined #salt
20:40 aboe joined #salt
20:45 jhauser joined #salt
20:48 fracklen joined #salt
20:51 felskrone joined #salt
20:52 Ni3mm4nd joined #salt
20:59 toastedpenguin joined #salt
21:04 bltmiller joined #salt
21:08 ninjada joined #salt
21:10 Bryson joined #salt
21:11 edrocks joined #salt
21:15 pipps joined #salt
21:15 keimlink joined #salt
21:26 onlyanegg joined #salt
21:33 dtsar joined #salt
21:35 DammitJim joined #salt
21:39 sh123124213 joined #salt
21:45 ALLmightySPIFF joined #salt
21:46 McNinja heya all, whats the best way to upgrade pip via salt? basically run the equivalent of 'pip install --upgrade pip'
21:46 ALLmightySPIFF joined #salt
21:48 sjorge joined #salt
21:48 sjorge joined #salt
21:51 hemebond salt minion pip.install pip upgrade=True
21:51 hemebond maybe
21:52 ninjada joined #salt
21:54 McNinja well was trying to do it via a sls with the pip.installed method
21:54 hemebond Same thing.
21:54 McNinja might have found the issue, pip was broken on the node :D
21:54 McNinja ahh ok
21:54 hemebond upgrade: True
21:55 hemebond That'd be a problem.
21:56 McNinja haha yeah just a little :D
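(Editor's note: McNinja's goal expressed as a state, per hemebond's suggestion — `upgrade: True` as a state argument mirrors the CLI's `upgrade=True`:)

```yaml
# roughly: pip install --upgrade pip
upgrade_pip:
  pip.installed:
    - name: pip
    - upgrade: True
```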
21:56 DammitJim so, I recently finally upgraded to salt 2015.8.12
21:56 DammitJim it's nice because at the end of a run, it tells you the execution time; however, the time it is actually taking to return is an order of magnitude
21:56 DammitJim is that normal?
21:57 hemebond Why not 2016.3.*?
21:57 cscf DammitJim, as in, an order of magnitude higher than the reported time?
21:57 DammitJim right
21:57 hemebond Not sure about execution time.
21:57 DammitJim so, if the reported time was 2 seconds, it might be 20 seconds to return
21:57 hemebond Oh, that's odd.
21:57 DammitJim I also ask because I thought ZMQ was supposed to be super duper fast
21:57 cscf I get that sometimes, I think
21:58 DammitJim oh, so that is odd?
21:58 hemebond I use 2016.3.4 and switched from 0mq to Tornado. Seems much faster than 0mq was.
21:58 cscf ZMQ is really fast, that doesn't mean Salt is
21:58 DammitJim LOL
21:58 cscf hemebond, how involved a switch is that?
21:59 DammitJim it's just that sometimes I run highstate on a server and it takes a minute to come back
21:59 jalaziz joined #salt
21:59 hemebond Change setting on master and minions. Restart them.
21:59 cscf I just ran one, maybe 30s to return and 7s reported
22:00 DammitJim so, it's normal for stuff like this to take that long?
22:00 cscf hemebond, so if you're starting with a default salt-minion package, is there a good way to set that on first state.apply?  Or does the master not handle both?
22:00 DammitJim I just want to know if I'm in the norm or there is something wrong with my setup....
22:01 hemebond cscf: You can have the master handle both by using different ports for each. With my setup I customise the salt-minion on installation so it's Tornado from the start.
22:01 hemebond DammitJim: Seems normal.
22:01 DammitJim alright, thanks
22:01 hemebond But then I've never paid much attention to the time it takes.
22:01 pipps joined #salt
22:02 hemebond Unless I'm doing test.ping or manage.up in which case I did notice a difference between 0mq and Tornado.
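(Editor's note: the switch hemebond describes is a single setting applied to both sides, followed by a restart; minions on a mismatched transport can't talk to the same master port, hence the two-port trick mentioned above:)

```yaml
# /etc/salt/master and /etc/salt/minion
transport: tcp   # default is zeromq; both ends must agree
```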
22:02 ninjada joined #salt
22:02 DammitJim ok
22:06 Edgan hemebond: what about raet?
22:10 hemebond I believe raet has been abandoned.
22:10 hemebond I think they found they were rewriting too much stuff and decided to use Tornado instead.
22:10 hemebond Something like that ☺
22:10 Edgan gtmanfred: state of raet?
22:11 gtmanfred Edgan: Highly recommend using tcp over raet
22:12 gtmanfred hemebond: is basically right
22:12 hemebond Oh yeah, it's called tcp, not Tornado.
22:12 gtmanfred once we add this, i would say use that https://github.com/saltstack/salt/pull/37576
22:12 saltstackbot [#37576][OPEN] Basic SSL/TLS support in master-minion communication. | What does this PR do?...
22:13 gtmanfred right now it still encrypts it, but it doesn't send it over an encrypted transport layer
22:15 gtmanfred there are a lot more intricate things with raet
22:15 gtmanfred but it basically comes down to the fact that even though raet/udp is faster than zeromq, the tcp event system is faster than both
22:16 gtmanfred and requires less work
22:16 Edgan gtmanfred: thanks
22:16 gtmanfred raet was requiring re implementing tcp windowing in udp at the salt level
22:17 Edgan gtmanfred: how is rc2 looking? I see the tag in github.
22:18 gtmanfred Edgan: it was yesterday, we have no idea yet
22:18 hemebond I tried to install it on my master last night via pip but the version didn't exist.
22:18 gtmanfred it is in pip
22:18 Edgan gtmanfred: funny, it says 7 days
22:18 gtmanfred we only went public with it 7 days ago
22:18 gtmanfred or public with it last night
22:18 hemebond "Could not find a version that satisfies the requirement salt==v2016.11.0rc2"
22:18 gtmanfred we made the tag 7 days ago to do internal testing
22:18 gtmanfred hemebond: remove the v
22:18 hemebond Oh. Copied and pasted from the docs linked to from the email.
22:19 Edgan gtmanfred: I meant how has it done in internal testing. Blocker bugs?
22:19 gtmanfred yeah, they shouldn't have linked the carbon docs
22:19 gtmanfred Edgan: if it had any in internal tests it wouldn't have gone public
22:19 Edgan gtmanfred: ok, cool, so worth testing
22:19 jas02_ joined #salt
22:20 Edgan gtmanfred: The release of 2016.3.4 required me to do apt repo pinning for my salt 2016.3.3+patches until I made 2016.3.4+patches.
22:20 Edgan gtmanfred: As far as I know 2016.11.0 has all my patches
22:20 gtmanfred ok
22:20 Sketch Edgan: that's why they have repos for specific releases, use those instead of latest....
22:21 gtmanfred hemebond: https://pypi.python.org/pypi/salt/2016.11.0rc2
22:21 hemebond Yeah, removed the v and it worked.
22:21 hemebond Now to somehow upgrade my minions.
22:21 Edgan Sketch: But then I have to rewrite the repo for every release, pinning works
22:22 Sketch pkgrepo.managed :)
22:23 Edgan Sketch: I am effectively doing that my own way. But the .list file on disk still has to be rewritten per version per machine
22:25 Edgan Sketch: My real problem is salt is so buggy I have to maintain my own patched version, because new releases with the right fixes(which do get merged to master fast) don't get released fast enough
22:26 Edgan Sketch: Even with 2016.11.0 I have a high chance that it fixes all my existing bugs, but then I find new bugs. Then I am back to a build with patches.
22:27 Sketch sounds like you might be best off just installing from a local repo
22:27 abednarik joined #salt
22:27 Sketch unless you're doing salt-cloud, i'm not sure how to get it to use a local repo instead of official...
22:28 Edgan Sketch: I do, but then I need dependencies and I don't want to have to manage them
22:28 Sketch clone salt repo, then add patched versions as needed...update the whole thing as needed
22:29 Edgan Sketch: I use my own home-grown python script. I find salt-cloud to be a subset of what I need. I want it to deal with adding route53 entries too.
22:30 Edgan Sketch: This is less work, https://paste.fedoraproject.org/476617/78730596/
22:31 Edgan Sketch: I also just added the ability to create ALBs in AWS with one config file and one command to the same script.
22:32 Edgan Sketch: I considered salt orchestration at the point I was adding ALB support to my script, but salt doesn't have ALB support yet.
22:40 cwandrews joined #salt
22:42 cwandrews I had multi-master setup and one of my nodes went down. I am trying to build a new master. I have copied the keys over and added a node. I can run test.ping on the node, but when I run state.highstate it fails with the classic SaltReqTimeoutError error. any ideas?
22:48 parasciidick joined #salt
22:50 swills joined #salt
22:55 onlyanegg joined #salt
22:59 pipps joined #salt
23:03 xet7 joined #salt
23:06 abednarik joined #salt
23:10 amontalban joined #salt
23:10 amontalban joined #salt
23:27 SaucyElf joined #salt
23:27 jcl[m] joined #salt
23:30 jeddi joined #salt
23:31 zer0def joined #salt
23:32 infrmnt1 joined #salt
23:35 zer0def joined #salt
23:38 stooj joined #salt
23:39 LostSoul joined #salt
23:43 onlyanegg joined #salt
23:43 aanderson joined #salt
23:43 subsignal joined #salt
23:46 aanderson Hiya! I've been using Salt a lot at work, and we've run into a bit of a weird case that's giving confusing errors. Basically, I'm rendering a State sls with the renderer directive #!jinja|jinja|yaml, and for whatever reason the renderer barfs sometimes on the second jinja render.
23:47 aanderson Here's a smallish repro of the problem: https://github.com/aetherson/salt-repro
23:48 hemebond What's the error?
23:49 aanderson was just getting that into a gist
23:49 aanderson https://gist.github.com/aetherson/561c2ece5c18d4368d4de84c13449584
23:52 hemebond And it sometimes works?
23:53 LostSoul joined #salt
23:53 zer0def to start off, i don't think you need to define a separate execution module to achieve what you want, the json serializer passes all options it doesn't make use of directly into stdlib json function args
23:54 dendazen joined #salt
23:55 aanderson The classic way to get it to break is to include a string with an '@' in the data. I'll make that change, zer0def, thanks - wasn't aware that I could do that
23:56 aanderson I'm ~24 hours out from having worked on the full codebase so I can't characterize the exact conditions under which it works correctly very well.
23:58 zer0def aanderson: so the intent is for pillar example:pointer to be a json string of pillar example:a?
