
IRC log for #salt, 2017-08-30


All times shown according to UTC.

Time Nick Message
00:00 omie888777 joined #salt
00:01 Antiarc joined #salt
00:01 rodr1c joined #salt
00:01 rodr1c joined #salt
00:01 alexlist joined #salt
00:01 godber joined #salt
00:03 jwon joined #salt
00:04 tbrb joined #salt
00:04 godlike joined #salt
00:04 godlike joined #salt
00:05 kshlm joined #salt
00:05 hax404 joined #salt
00:05 dwfreed joined #salt
00:06 APLU joined #salt
00:06 zerocoolback joined #salt
00:11 spuder joined #salt
00:20 GMAzrael joined #salt
00:24 johnj_ joined #salt
00:29 _maniac_ joined #salt
00:33 A_Person__ joined #salt
00:34 _maniac_ Hi, I have a salt-cloud question. Can I set up azure vm with multiple network interfaces with it?
00:34 _maniac_ I see some relevant functions in salt.cloud.clouds.azurearm module, but I don't want to write python for it.
00:37 shred joined #salt
00:43 whytewolf you don't write python for those. those are cloud functions, however i don't see an action to attach a network interface, just a function to create one. [actions work on an instance, functions work on the cloud as a whole]
00:44 whytewolf https://docs.saltstack.com/en/latest/topics/cloud/action.html & https://docs.saltstack.com/en/latest/topics/cloud/function.html
00:45 tommyfun__ joined #salt
00:45 _maniac_ aha, thank you
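
For reference, the general shape of the two call types described above; the function/action names and the provider/VM names here are placeholders, not actual azurearm names:

    # functions operate on the cloud provider as a whole
    salt-cloud -f <function_name> my-azurearm-provider key=value
    # actions operate on an existing instance
    salt-cloud -a <action_name> my-vm-name key=value
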
00:46 shred joined #salt
00:46 jas02 joined #salt
00:49 aleph- joined #salt
00:50 Church- joined #salt
00:54 tommyfun_ joined #salt
00:56 Electron^- joined #salt
01:18 xMopxShell How do you pass kwargs to a function with LocalClient?
01:18 xMopxShell I'm reading https://docs.saltstack.com/en/latest/ref/clients/index.html#salt.client.LocalClient.cmd which says the "kwargs" arg is "A dictionary with keyword arguments for the function.". But when I pass kwargs there, they seem to be lost.
01:19 xMopxShell E.g, via salt-api, if my function is 'cmd.run' and kwarg={'cwd': '/tmp'}, the command is executed in the default location instead of /tmp.
01:20 xMopxShell arg=['pwd', 'cwd=/tmp'] seems to work BUT that's hella ugly.
01:20 whytewolf try adding cwd='/tmp'
01:21 xMopxShell Same with that, the temp is lost
01:23 whytewolf humm
01:24 xMopxShell Just to be clear, i'm calling this via salt-api. There's another layer in the mix. Let me dig what's being posted to be clear...
01:24 Nahual joined #salt
01:25 xMopxShell This is what's posted to /run - https://hastebin.com/karimehuzu.pl
01:25 xMopxShell (with the cwd arg tried in a couple of different places than in that example)
01:25 johnj_ joined #salt
01:26 Shirkdog joined #salt
01:26 whytewolf one of these days i need to setup salt-api just so i have a test platform
01:26 xMopxShell i've only been trying this with cmd.run for shell commands, maybe that module is funky? I came from ansible, and 'shell' was funky there.
01:29 xMopxShell whytewolf: if i call cmd.run via LocalClient.cmd(), directly like the example in link above, it works.
01:30 xMopxShell So i'm probably calling the salt-api wrong. I'll look into that...
01:33 whytewolf humm, yeah i just tried it here [without salt-api cause i don't have that setup] and it does work with a dict. not sure what is going on with salt-api but the docs look like it should also take a dict
01:35 whytewolf btw, once you figure out what's going on: you can pass pwd in as the cmd kwarg, so you don't have to use arg
01:36 whytewolf but yeah this worked perfectly fine for me giving me /tmp as a response  local.cmd('salt01*', 'cmd.run',kwarg={'cwd':'/tmp','cmd':'pwd'})
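
A slightly fuller sketch of the call whytewolf pasted, for anyone reading the log later; the target glob and paths are placeholders:

    # run from a python shell on the salt master
    import salt.client

    local = salt.client.LocalClient()
    # arg carries positional args for cmd.run, kwarg carries keyword args such as cwd
    ret = local.cmd('salt01*', 'cmd.run', arg=['pwd'], kwarg={'cwd': '/tmp'})
    print(ret)  # e.g. {'salt01': '/tmp'}
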
01:49 cyborg-one joined #salt
01:51 ilbot3 joined #salt
01:51 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.7, 2017.7.1 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers
01:52 jas02 joined #salt
01:52 whytewolf humm, it worked for me
01:53 nixjdm joined #salt
01:54 whytewolf humm, i can't get it not to work
01:54 oida_ joined #salt
01:56 astronouth7303 joined #salt
01:56 GMAzrael joined #salt
01:59 whytewolf xMopxShell: just installed salt-api and tried it. this was the results. https://gist.github.com/whytewolf/552a89975f4d81bd5be2bbea310388aa
02:01 kukacz joined #salt
02:02 xMopxShell whytewolf: are you POSTing to /run or /minions?
02:03 whytewolf curl -k 'https://localhost:8000/run' -H 'Accept: application/x-yaml' -H 'Content-type: application/json' -d @test.json
02:09 zerocoolback joined #salt
02:10 xMopxShell ahha, that helped find the issue whytewolf :)
02:11 zerocoolback joined #salt
02:11 xMopxShell i was sending the payload as post fields instead of a request body...
02:11 whytewolf ahh
02:12 whytewolf welp now you know, and knowing is half the battle
02:12 zerocoolback joined #salt
02:12 xMopxShell yep haha
02:12 xMopxShell i appreciate the help!
02:12 whytewolf no problem :)
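
For the record, the fix was sending the payload as a JSON request body rather than as form fields. A rough sketch of what the test.json used in the curl command above might look like; the eauth credentials and target are placeholders:

    {
        "client": "local",
        "tgt": "salt01*",
        "fun": "cmd.run",
        "arg": ["pwd"],
        "kwarg": {"cwd": "/tmp"},
        "eauth": "pam",
        "username": "saltapi",
        "password": "secret"
    }
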
02:15 zerocool_ joined #salt
02:24 oida joined #salt
02:26 johnj_ joined #salt
02:30 zerocoolback joined #salt
02:31 dxiri joined #salt
02:32 dxiri joined #salt
02:35 dxiri joined #salt
02:53 evle joined #salt
02:53 shred joined #salt
02:54 shred joined #salt
02:54 shred joined #salt
02:55 shred joined #salt
02:56 shred joined #salt
02:57 shred joined #salt
02:58 tiwula joined #salt
03:02 dxiri joined #salt
03:03 jas02 joined #salt
03:20 spuder joined #salt
03:27 johnj_ joined #salt
03:27 ahrs joined #salt
03:31 michelangelo joined #salt
03:41 shanth__ joined #salt
03:47 Church- joined #salt
04:16 citaret joined #salt
04:19 jas02 joined #salt
04:25 tobio joined #salt
04:28 johnj_ joined #salt
04:31 omie888777 joined #salt
04:31 tobio-heap joined #salt
04:33 tobio-heap Hey, we've previously been happily using 2016.11, I upgraded to 2017.7 yesterday and our master has been pegged at 100% CPU ever since.
04:33 tobio-heap I've tried increasing worker_threads, changing timeouts and increasing the instance size without any luck
04:34 tobio-heap Just wondering if anyone has any debugging tips here.
04:34 tobio-heap There doesn't seem to be anything interesting in the master debug logs, there's a reasonable number of timeouts in minions logs though.
04:35 tobio-heap I've also cleared all caches on the master and removed all minion keys forcing re-auth.
04:36 tobio-heap We're not running what I would consider huge scale here, ~150 minions on a 4 core master
04:54 shanth_ joined #salt
04:59 sh123124213 joined #salt
05:00 hemebond 100% CPU all the time?
05:00 hemebond Yikes.
05:08 iggy I'd check scheduled jobs, reactors, etc
05:08 shanth__ joined #salt
05:14 Bock joined #salt
05:16 swills joined #salt
05:16 _aeris_ joined #salt
05:25 preludedrew joined #salt
05:25 jas02 joined #salt
05:29 johnj_ joined #salt
05:29 MTecknology watch the events
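
A couple of commands that may help with the "watch the events" and "check scheduled jobs" suggestions above, run on the master; just a sketch:

    # stream the event bus to see what keeps firing
    salt-run state.event pretty=True
    # list the jobs the master currently considers active
    salt-run jobs.active
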
05:32 oida_ joined #salt
05:35 oida joined #salt
05:41 felskrone joined #salt
05:42 justanotheruser joined #salt
05:45 maestropandy joined #salt
05:56 rgrundstrom joined #salt
05:56 rghv joined #salt
05:57 rgrundstrom Good morning
05:57 rgrundstrom Anyone have any intel on when 2017.02 might be released?
05:59 justanotheruser joined #salt
06:10 schasi joined #salt
06:10 pualj_ joined #salt
06:10 rghv Hi! I have a question regarding salt-ssh and Python salt-ssh client. salt.client.ssh.client.SSHClient.cmd() seems to accept a 'user' argument, to run commands as a different user. But that doesn't seem to work at all.
06:14 rghv A sample trace that I am referring to is at https://pastebin.com/raw/txKLTbzY       Not sure if "salt.client.ssh.client.SSHClient.cmd()" really honours the 'user'/'runas' args passed to it
06:19 oida_ joined #salt
06:23 oida joined #salt
06:29 johnj joined #salt
06:30 zulutango joined #salt
06:36 jas02 joined #salt
06:36 usernkey joined #salt
06:38 Ricardo1000 joined #salt
06:42 JohnnyRun joined #salt
06:46 jas02 joined #salt
06:50 * rgrundstrom needs more coffe.......
06:53 nicola_pav joined #salt
06:54 nicola_pav hi all. is there a way to validate salt yaml files - regular yaml validators fail when the yaml has a pillar value
06:54 nicola_pav for example, my yaml has the following:  OS_USERNAME: {{ pillar['OS_USERNAME'] }}
06:55 nicola_pav it fails with the regular online yaml validators with the error: expected ',' or '}', but got '['
06:57 Ricardo1000 joined #salt
06:58 LostSoul joined #salt
07:00 GMAzrael joined #salt
07:00 sh123124213 joined #salt
07:08 mechleg nicola_pav: i am not entirely sure, but I think salt complains if the YAML is not valid, so an easy validation would be to run a state.apply test=True and see if it gives YAML errors
07:09 nicola_pav mechleg: thank you - I will try that
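
Two ways to exercise the rendering without applying anything, building on mechleg's suggestion; the minion id and state name are placeholders:

    # render the jinja + yaml of one sls using the minion's real pillar data
    salt 'myminion' state.show_sls mystate
    # or dry-run it so yaml/jinja errors surface without changing anything
    salt 'myminion' state.apply mystate test=True
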
07:10 schasi What to do if I found a bug in the documentation, like missing examples? https://docs.saltstack.com/en/latest/ref/states/parallel.html
07:11 LostSoul joined #salt
07:15 jas02 joined #salt
07:15 Hybrid joined #salt
07:16 ivanjaros joined #salt
07:20 hoonetorg joined #salt
07:25 LostSoul joined #salt
07:28 sfxandy joined #salt
07:30 johnj joined #salt
07:43 AvengerMoJo joined #salt
08:01 maestropandy joined #salt
08:01 aldevar joined #salt
08:02 maestropandy hello
08:02 mikecmpbll joined #salt
08:02 maestropandy I would like to start using reclass with saltstack.. anyone having ideas pls share
08:06 impi joined #salt
08:10 maestropandy anyone online here
08:10 maestropandy pls ping-pong
08:18 _KaszpiR_ joined #salt
08:19 coredumb maestropandy: are you planning on using reclass with other cfgmgt tools like ansible or puppet? Or strictly with salt
08:20 Electron^- joined #salt
08:23 hammer065 joined #salt
08:27 nicola_pav joined #salt
08:27 oida_ joined #salt
08:29 frygor_ joined #salt
08:30 salt_user joined #salt
08:31 johnj joined #salt
08:32 zylviu joined #salt
08:37 saltslack joined #salt
08:41 jas02 joined #salt
08:48 saltslack_ joined #salt
08:51 schasi I have changed the contents of my top file(s) and now get the following error: "No Top file or master_tops data matches found." How can I debug further? (besides using -l debug)
08:56 jhauser joined #salt
08:58 ibro joined #salt
08:59 saltslack1_ joined #salt
09:02 GMAzrael joined #salt
09:03 saltslack1 joined #salt
09:04 cyborg-one joined #salt
09:05 schasi I rolled back my top.sls and the old version works. Huh. Seems to be the wrong error message for my error
09:05 zylviu joined #salt
09:06 Antiarc joined #salt
09:07 saltslack1 How does saltstack support High Availability? If there is a highly available environment with 2 masters running as "hot" (active-active), and one master sends a command to a minion and goes down as soon as the minion receives the instructions, will the data be returned to the other master (since it is a highly available environment)? If yes, then how can this be achieved?
09:14 _aeris_ joined #salt
09:16 zerocoolback joined #salt
09:16 nomasprime joined #salt
09:18 Neighbour schasi: Have you tried running the salt-master with -l debug?
09:18 schasi "(besides using -l debug)" yes :D
09:19 Neighbour schasi: yes, but -l debug can be applied to the salt-master, salt-minion and any salt-call you perform...not sure which one you used it with
09:19 schasi I used the salt-master, because it was an error on the salt-master when trying to compile the states
09:21 k1412 joined #salt
09:21 schasi But thanks
09:21 schasi I have found where the error is located (which line produces it), but I have no idea why so far. I guess I'll find that out eventually
09:21 Neighbour ok, that's the proper place (though regular states get compiled at the minion they're executed at, but your error happens before that)
09:21 schasi That is good to know
09:23 Neighbour Could you share the (affected part of the) topfile via pastebin (or similar service)?
09:26 remyd1 joined #salt
09:29 netcho joined #salt
09:31 k1412 joined #salt
09:32 johnj joined #salt
09:32 schasi X) X) X) m) I had a duplicate target. That was all, I guess. I noticed while assembling the paste
09:35 maestropandy coredumb: strictly with salt
09:35 remyd1_ Hi
09:35 maestropandy I have raised ticket.. can someone help on this https://github.com/saltstack/salt/issues/43257
09:35 schasi hi
09:36 remyd1_ Any idea on how to include a whole formula from another directory
09:36 remyd1_ eg /foo/bar/test want to include /foo/lib/bar
09:36 remyd1_ ?
09:38 Neighbour schasi: Ah, good that you found it :)
09:39 schasi Neighbour: Yes. But I think it should throw something like "duplicate IDs", not the error I got
09:39 maestropandy can someone look into and help https://github.com/saltstack/salt/issues/43257
09:39 maestropandy schasi: can you
09:39 schasi No, I don't have the knowledge
09:43 remyd1_ maestropandy, You should take a look at your log when you start your salt-master
09:44 saltslack1 Hi
09:44 saltslack1 any idea about satl HA?
09:44 saltslack1 How does saltstack support High Availability? If there is a highly available environment with 2 masters running as "hot" (active-active), and one master sends a command to a minion and goes down as soon as the minion receives the instructions, will the data be returned to the other master (since it is a highly available environment)? If yes, then how can this be achieved?
09:47 vb29 joined #salt
09:54 rghv saltslack1: interesting thought. Check this - https://docs.saltstack.com/en/latest/topics/tutorials/multimaster.html#prepping-a-redundant-master
09:58 saltslack1 Thanks @rghv for the reply. I have already gone through it and i have a multi master setup with Windows minions and I have configured failover. If the any master goes down then the minions can detect the failure and can switch over to next master available in the list which is all fine
09:58 saltslack1 but the problem I have is: if I execute a command and the minions have received it and the minion agent is working on the instructions, and during this time the master from which the minions received the instruction goes down, then if it is HA the minions should return the data to another master
10:00 saltslack1 but this is not happening and the information is lost
10:01 saltslack1 even if I configure returners on the minion side I am not able to achieve this
10:03 saltslack1 what I have observed is minions open a return channel on 4506 port towards the master, and if the master is not online then communication totally stops. I was hoping that in this case if I have configured mysql returner towards a different machine (I call it log server) then data should be passed to that machine instead of breaking the communication completely
10:04 saltslack1 I have 2 questions:
10:04 saltslack1 Is this possible at all?
10:04 saltslack1 and have i misconfigured anything?
10:07 Kira joined #salt
10:07 remyd1_ saltslack1 I think you should raise an issue. However, I am not sure it would be easy to fix, even if it's just a basic tcp answer...
10:08 rghv saltslack1: "Minions can automatically detect failed masters and attempt to reconnect to reconnect to them quickly. To enable this functionality, set master_alive_interval in the minion config and specify a number of seconds to poll the masters for connection status.  If this option is not set, minions will still reconnect to failed masters but the first command sent after a master comes back up may be lost while the minion authenticate
10:09 rghv saltslack1: try tuning master_alive_interval for your use-case, please
10:10 saltslack1 rghv, I have already set it to 10,     master_alive_interval: 10
10:10 saltslack1 I can try reducing it more and see what happens
10:10 Neighbour saltslack1: The data on 'current running jobs' is not shared between masters
10:11 saltslack1 remyd1_, thanks will do it if nothing works after the suggestions give here
10:12 saltslack1 Neighbour, could you please mention which data you are talking about? is it cache data? I have mounted a brick using glusterfs and it has file_root, pillar_root and cachedir
10:12 saltslack1 this has been shared on both the masters along with replication architecture
10:14 Neighbour saltslack1: I'm not sure where masters save running-job information (like jid, time started etc), but if it's only in memory, then it won't replicate
10:20 vb29 I have configured mysql returner so as soon as it receives the instruction this info is getting stored in the database
10:22 saltslack1 yes, it is storing in mysql database, not sure if I can paste the output here
10:24 zerocoolback joined #salt
10:24 Neighbour saltslack1: Would adding options like 'master_type: failover' help? (from https://docs.saltstack.com/en/latest/topics/tutorials/multimaster_pki.html)
10:24 Neighbour since your current observation is that a minion does not attempt to connect to another master if the primary fails
10:25 vb29 no it connects
10:25 vb29 i have done the setup with failover
10:25 vb29 and it connects
10:25 vb29 but the data is lost
10:26 vb29 my thinking is that if the environment is HA and it is configured with failover then the data should return to another master if the first master has failed
10:26 Neighbour indeed, but apparently it doesn't do this
10:26 vb29 if this is not supported then how is it HA :)
10:27 Neighbour HA-ready :P
10:27 vb29 :D
10:28 saltslack1 so should I raise an issue about this?
10:31 Neighbour If you want it fixed, probably :)
10:31 aleph- joined #salt
10:31 saltslack1 yes I do :)
10:31 Mogget joined #salt
10:32 saltslack1 ok..then I will raise an issue....thanks a lot everyone for the suggestions! :)
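
A minion-config sketch of the failover setup being discussed, plus the external-returner idea; hostnames, the interval and the mysql credentials are placeholders, and as noted above this still does not by itself make in-flight job returns survive a master outage:

    # /etc/salt/minion.d/failover.conf
    master:
      - master1.example.com
      - master2.example.com
    master_type: failover
    master_alive_interval: 10

    # optionally also push job returns to an external returner
    return: mysql
    mysql.host: logserver.example.com
    mysql.user: salt
    mysql.pass: secret
    mysql.db: salt
    mysql.port: 3306
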
10:33 johnj joined #salt
10:37 netcho is it possible to insert pillar data on orchestrator run if init.sls of that orch is only including other orch sls files?
10:38 netcho included sls files get data from pillar
10:40 netcho example: init.sls -> include:   - .mystate
10:40 netcho mystate.sls gets pillar data
10:41 netcho and if i run ' salt-run state.orch myorch(init.sls) pillar={'some':'value'} ' will it propagate to mystate.sls?
10:43 Neighbour if you specify a 'pillar' argument, it will override any other pillardata passed to that state
10:44 netcho yes i know that
10:44 netcho let's say state accepts only 1 pillar arg
10:44 netcho will it pass over init.sls to mystate.sls
10:44 netcho and all other states included
10:44 Neighbour yes
10:45 netcho ok, thanks
10:45 Neighbour but not to all states you start using salt.state (unless you explicitly supply a pillar argument there too)
10:45 Neighbour (from the orchestration state)
10:45 k1412 joined #salt
10:46 netcho not following
10:47 netcho let me quickly write an example of what i am trying to accomplish
10:47 Neighbour from an orchestration state (which runs on the master), you can call other states on specific targeted minions using 'salt.state'
10:48 Neighbour but those states won't automatically get the pillar from the orchestration state which calls them, unless you pass it explicitly with the 'pillar'-argument
10:49 oida joined #salt
10:51 netcho in the orch file or when running?
10:51 netcho this is what i meant
10:51 netcho https://hastebin.com/udajemujaf.coffeescript
10:51 netcho short example
10:52 netcho i coulds just run it
10:52 netcho and see if it works :D
10:53 netcho don't say you could use cloud map :) i am aware this is just testing :)
10:53 netcho i see what you are trying to say ... maybe
10:54 rgrundstrom Anyone have any intel on when 2017.02 might be released?
10:54 netcho if i use salt.state in orch i need to pass pillar args from that orch file... and if i am using salt.function it will get pillar data from the cli command?
10:57 netcho makes perfect sense
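
A sketch of the pattern Neighbour describes: pillar given on the salt-run command line reaches the orchestration sls (and anything it includes), but a nested salt.state only sees it if it is passed along explicitly. File, target and key names are invented:

    # myorch/init.sls
    include:
      - myorch.mystate

    # myorch/mystate.sls
    apply_webserver:
      salt.state:
        - tgt: 'web*'
        - sls: webserver
        - pillar:
            some: {{ pillar.get('some', 'default-value') }}

    # run it as:
    #   salt-run state.orch myorch pillar='{"some": "value"}'
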
10:57 k1412 joined #salt
10:57 oida_ joined #salt
10:59 Church- joined #salt
11:01 colegatron joined #salt
11:02 sh123124213 joined #salt
11:04 rghv If a shell command needs to be run as a different user than the one configured in the roster (via cmd.run of salt.client.ssh.client.SSHClient), what's the best method? Using 'su <other_user> -c "cmd"' or is there any way by using 'saltsshclient.cmd(server,"cmd.run",["whoami && id"], user="other_user")' etc?
11:05 GMAzrael joined #salt
11:06 Neighbour netcho: Indeed..It looks like your example should work just fine
11:06 rghv I tried 'saltsshclient.cmd(server,"cmd.run",["whoami && id"], user="other_user")' . my roster has user 'vagrant' configured. I want to run some commands as a different user. One way I see is by using 'su other_user -c "<cmd>"'. But this doesn't seem to be the most elegant one. Any pointers/suggestions will be very helpful, please!
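
This never got an answer in the log. One possible workaround, untested here and assuming SSHClient.cmd accepts a kwarg dict like LocalClient.cmd does: keep the roster user and let cmd.run switch user on the target via its runas argument (which requires the roster user to be able to su/sudo to other_user):

    # rough sketch only
    import salt.client.ssh.client

    client = salt.client.ssh.client.SSHClient()
    # cmd.run's runas argument runs the shell command as another user on the target
    ret = client.cmd('server1', 'cmd.run', ['whoami && id'], kwarg={'runas': 'other_user'})
    print(ret)
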
11:07 netcho Neighbour:  is there a way of parallel execution of cloud.profile function in this case?
11:07 netcho setting parallel: True works with map files
11:08 netcho RuntimeError: maximum recursion depth exceeded
11:08 netcho is what i get
11:08 netcho but it passes on
11:09 Neighbour netcho: not that I'm aware of
11:09 saltslack1 left #salt
11:11 netcho ok
11:11 netcho thanks for your help
11:16 netcho Neighbour:  found it. Please note that the execution module does not run in parallel mode. Using multiple minions to create instances can effectively perform parallel instance creation.
11:16 Neighbour But there's no mechanism to easily distribute the instances-to-create to different minions?
11:17 netcho yeah i am just trying to figure that one out
11:17 netcho only cloud.map
11:18 netcho salt.runners.cloud.map_run(path=None, **kwargs)
11:19 netcho will map file accept inserting instances-to-create in map file as kwargs?
11:21 netcho this might not work as i wanted to :)
11:21 netcho i always forget that jinja renders first :)
11:22 vb29 left #salt
11:24 jas02 joined #salt
11:26 Neighbour from the code, it looks like it expects a key 'map', which should either be an existing file, or a salt://-link
11:26 Neighbour (if it is neither, then a SaltCloudNotFound-exception is raised)
11:27 netcho yea
11:28 netcho now i have different issue when registering newly created instances to load balancer
11:29 netcho {% set instance_id = salt['boto_ec2.get_id'](region='us-east-1', name=name) %} is rendered first and it fails because machines are not created yet
11:29 netcho so i will need to separate it into 2 files and do 2 orch runs
11:29 Neighbour ah yes, encountered that too...create two orchestration states to run in succession
11:30 Neighbour (but only run the 2nd if the first succeeds without errors)
11:30 netcho how can i do that?
11:30 zerocoolback joined #salt
11:30 Neighbour a simple shell-script or batchfile should do
11:31 netcho got some examples? i figured i will need some wrapper around this eventually
11:31 netcho where i will insert pillar as arguments/vars
11:31 Neighbour what OS are you running on?
11:31 netcho ubuntu
11:32 netcho i have profiles done, i have states done ... i just need to make a good orch to combine them all
11:33 netcho negotiating with corp to opensource our states
11:33 netcho formulas
11:33 netcho we have pretty cool ones for rabbitmq and elasticsearch clusters
11:34 johnj joined #salt
11:37 rghv netcho: > "negotiating with corp to opensource our states; we have pretty cool ones for rabbitmq and elasticsearch clusters" - would be very useful to many!
11:37 netcho yeah i know
11:38 netcho 3 node elasticsearch cluster with the 'split brain problem' avoided, and a 2 node rabbitmq cluster with adding users and vhosts through pillar
11:39 netcho every day more and more  i fall in love with saltstack
11:42 Neighbour netcho: Waiting for my boss' approval to share our shellscript..
11:42 netcho thanks :)
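
Since the shared script is pending approval, here is a rough standalone sketch of the "run the second orchestration only if the first succeeds" idea; orchestration names and the pillar are placeholders, and it assumes salt-run's exit code reflects orchestration failure on your version, which is worth verifying:

    #!/bin/bash
    set -euo pipefail

    PILLAR='{"instance_name": "test-instance"}'

    # create the instances first; set -e aborts the script if this exits non-zero
    salt-run state.orch provision.create_instances pillar="$PILLAR"
    # only reached when the first orchestration exited with status 0
    salt-run state.orch provision.register_elb pillar="$PILLAR"
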
11:43 schasi Have you had a look at event-based orchestration, netcho?
11:46 netcho yes i did schasi but still don't have time to try it all
11:46 netcho work needs to get done and we are a rather small team
11:46 netcho would need bunch of beacons and reactors
11:46 netcho and custom events
11:47 schasi Because that seems to work in parallel. I am using the "normal" model right now, but wanna look into event-based orchestration
11:47 schasi I am a team of one :D
11:47 netcho but .. step by step
11:47 schasi Yes, step by step
11:47 netcho 2 of us here :D
11:48 schasi Nice. 200% capacity
11:49 netcho it would be if we wouldn't need to maintain legacy chef and ruby crap
11:49 netcho but as i said .. step by step .. recipes become formulas :D
11:49 nomasprime joined #salt
11:50 netcho celebrate the day when i shutdown chef repo :D
11:50 schasi Hehe
11:51 babilen netcho: One day you might say the same about salt :)
11:52 jas02 joined #salt
11:52 mavhq joined #salt
11:53 netcho so far i am pretty sure salt will do for us for the next couple of years babilen, if it continues to develop this way
11:53 netcho sorry for bad typing
11:53 netcho it replaces all our needs atm
11:54 netcho orch AWS, bootstrapping machines and service discovery
11:54 netcho so ( manual work in AWS console + chef + zookeeper ) = salt
11:55 netcho SOLD
11:56 schasi babilen: I was thinking the same thing :D
11:57 GMAzrael joined #salt
11:59 maestropandy joined #salt
12:05 ekkelett joined #salt
12:10 jeblair_ joined #salt
12:10 citaret joined #salt
12:10 oida joined #salt
12:10 Akkarin joined #salt
12:10 ChubYann joined #salt
12:10 pcgod joined #salt
12:10 twiedenbein joined #salt
12:10 cyborg-one joined #salt
12:10 Neighbour joined #salt
12:10 ahrs joined #salt
12:10 Vaelatern joined #salt
12:10 Ricardo1000 joined #salt
12:10 netcho joined #salt
12:10 JPT_ joined #salt
12:10 nich0s joined #salt
12:11 jab416171 joined #salt
12:12 hoolio joined #salt
12:12 nledez joined #salt
12:12 lubyou joined #salt
12:13 ekkelett joined #salt
12:14 Nahual joined #salt
12:16 flebel joined #salt
12:18 dalom joined #salt
12:21 Diaoul joined #salt
12:22 preludedrew joined #salt
12:22 astronouth7303 joined #salt
12:22 wonko21 joined #salt
12:22 nielsk joined #salt
12:22 SneakyPhil joined #salt
12:22 gareth__ joined #salt
12:22 peters-tx joined #salt
12:22 nonsenso joined #salt
12:22 riftman joined #salt
12:22 nickadam joined #salt
12:22 munhitsu_ joined #salt
12:22 phobosd__ joined #salt
12:22 s0undt3ch joined #salt
12:23 nickadam joined #salt
12:25 KennethWilke joined #salt
12:26 ekkelett joined #salt
12:27 nledez joined #salt
12:30 Lionel_Debroux joined #salt
12:32 numkem joined #salt
12:35 johnj joined #salt
12:38 Brew joined #salt
12:39 vb29 joined #salt
12:40 vb29 left #salt
12:40 unixdude joined #salt
12:48 sybix joined #salt
12:50 gh34 joined #salt
12:57 nomasprime joined #salt
12:57 aldevar joined #salt
13:09 mchlumsky joined #salt
13:11 dxiri joined #salt
13:18 Naresh joined #salt
13:19 Mogget joined #salt
13:21 Mogget joined #salt
13:22 Mogget joined #salt
13:23 ssplatt joined #salt
13:24 Mogget joined #salt
13:25 rpb joined #salt
13:28 wendall911 joined #salt
13:31 tapoxi joined #salt
13:35 johnj joined #salt
13:39 racooper joined #salt
13:42 noobiedubie joined #salt
13:45 ritz joined #salt
13:49 _KaszpiR_ joined #salt
14:01 oida_ joined #salt
14:01 cgiroua joined #salt
14:10 zmalone joined #salt
14:12 spuder joined #salt
14:14 zmalone left #salt
14:15 brianthelion joined #salt
14:18 sh123124213 joined #salt
14:23 edrocks joined #salt
14:23 gyro joined #salt
14:32 oida joined #salt
14:34 sarcasticadmin joined #salt
14:34 aleph- joined #salt
14:37 johnj joined #salt
14:38 Church- joined #salt
14:40 sh123124213 joined #salt
14:40 ivanjaros joined #salt
14:44 _KaszpiR_ joined #salt
14:44 colegatron joined #salt
14:48 evle joined #salt
14:54 remyd1_ hey. Is it possible to have a list of commands to run in an "unless:" statement. eg, for a dns config, using "named-checkzone" on both the standard zone definition and the reverse one
14:54 remyd1_ ?
14:55 remyd1_ (before reloading the bind daemon)
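
Nobody picked this up in the log. Both unless and onlyif accept a list of commands; for "check both zone files before reloading bind", onlyif (the state runs only if all listed commands succeed) is probably the better fit. Zone names and file paths below are invented:

    reload-named:
      cmd.run:
        - name: systemctl reload named
        - onlyif:
          - named-checkzone example.com /var/named/db.example.com
          - named-checkzone 1.168.192.in-addr.arpa /var/named/db.192.168.1
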
15:09 ixs who to talk to in order to get a formula added to github.com/saltstack-formulas?
15:10 babilen ixs: Sent a mail to the salt-user mailing list
15:11 babilen Sorry
15:11 babilen Google Group™ naturally
15:12 ixs babilen: brrrr. I had hoped the right people were on IRC... ;-)
15:14 ivanjaros joined #salt
15:14 babilen They sometimes are, but writing a mail typically succeeds
15:18 omie888777 joined #salt
15:30 spuder joined #salt
15:36 skatz joined #salt
15:40 skatz What setting (if any) governs the time that the gitfs cache (/var/cache/salt/master/gitfs) is valid? If we lose connectivity to our git server, would the cache get invalidated after some period of time? I see keep_jobs but my hunch is that only applies to the jobs cache and not /var/cache/salt/master
15:40 skatz not *all of /var/cache/salt/master
15:43 Heartsbane joined #salt
15:43 Heartsbane joined #salt
15:47 nixjdm joined #salt
15:56 tiwula joined #salt
15:59 dxiri joined #salt
16:06 lordcirth_work So I'm writing a state to configure /etc/exports. Almost ready to PR. 'exportfs -r' will throw an error if one of the share dirs doesn't exist. Should I add a sanity check to the state, or is that out-of-scope?
16:06 k1412 joined #salt
16:07 JPT joined #salt
16:09 shanth_ joined #salt
16:11 ivanjaros3916 joined #salt
16:17 Bock joined #salt
16:19 Splix76 joined #salt
16:20 Splix76 Hello. I am trying to check a grain for two strings in a jinja if statement and not having much luck. {% if '(string1|string2)' in grains['grain'] %}
16:20 Splix76 I can get it to work if I use or, but would prefer the shorter syntax if possible.
16:20 whytewolf if is not a regex!
16:21 Splix76 right, but is there no way to do something similar to that inside the jinja / python escape for if?
16:22 Splix76 or should I just use or and accept the longer syntax?
16:22 AvengerMoJo joined #salt
16:22 Splix76 if 'string' in grain['grain'] or 'string' in grain['grain']
16:23 Splix76 I was hoping to find a solution to provide a similar if check over two strings vs. breaking it out into a second or for each string.
16:23 whytewolf {% if salt.match.grain_pcre('grain:(string1|string2)') %} i think. been a while since i have done a real proper pcre
16:24 Splix76 Sweet!
16:24 Splix76 If that is not exactly it, it'll get me close enough to the answer.
16:24 Splix76 thank you @whytewolf
16:24 shanth__ joined #salt
16:24 whytewolf and yes you should get used to the longer syntax overall. because if is not a regex and the match module will not always be useful
16:25 darioleidi joined #salt
16:25 Splix76 I have a few others similar but got tired of the longer syntax. I could put each or on a new line to help the readability flow better.
16:26 Splix76 might be better to do that just to make it more readable for anyone who has to read through a state I wrote a year from now.
16:26 Splix76 :)
16:26 whytewolf if you are on 2017.7 or later you should look through the filter list and see if there is one of the newer filters that might help
16:27 impi joined #salt
16:34 shanth_ joined #salt
16:38 Splix76 salt 2017.7.1 (Nitrogen), my environment is still small enough to update. Seeing what is in the repo I have enabled now.
16:39 johnj_ joined #salt
16:39 whytewolf Splix76: https://docs.saltstack.com/en/latest/topics/jinja/index.html#filters
16:39 Splix76 looks like I would have to enable latest / dev and I am currently running stable.
16:39 whytewolf you are running 2017.7 [which is the version i said]
16:40 Splix76 Thanks for the doc @whytewolf , I have read through that but didn't find the two value answer for a grain check. I had forgotten about the ability to call modules in Jinja, which is what your original reply provided info on.
16:41 whytewolf Splix76: you wanted a regex.
16:41 whytewolf https://docs.saltstack.com/en/latest/topics/jinja/index.html#regex-match
16:44 whytewolf {% if salt.grains.get('grains')|regex_match('(string1|string2)') %}
16:44 Splix76 I clearly need to re-read the filters page and ensure I understand how the regex works. Thanks again.
16:48 shanth__ joined #salt
16:48 Splix76 Thank you for the help @whytewolf , you gave me exactly what I needed and pointed me to what I have over-looked in my research to fill in the gaps.
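
Summarising the options discussed, as a sketch; the grain name and strings are placeholders:

    {# plain jinja, works on any version #}
    {% if 'string1' in grains['mygrain'] or 'string2' in grains['mygrain'] %}
    ...
    {% endif %}

    {# match module with a PCRE grain target #}
    {% if salt.match.grain_pcre('mygrain:.*(string1|string2).*') %}
    ...
    {% endif %}

    {# regex filter on the grain value (2017.7+); regex_search matches anywhere, regex_match anchors at the start #}
    {% if grains['mygrain'] | regex_search('(string1|string2)') %}
    ...
    {% endif %}
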
16:55 xet7 joined #salt
16:56 mechleg remyd1_:  that should be able to work if your salt root is at /foo.  when i need to include entire directories, i have found that creating an init.sls to then include all the states i need from that folder helps:  /foo/lib/bar/init.sls
16:56 mechleg oops, that was some time ago
16:59 simondodsley I'm running pylint against a module that contains imports of salt.exceptions and salt.ext.six.moves (this one because pylint previously complained it wasn't there) but I'm getting a pylint error "3rd-party module import is not gated in a try/except". If you are requiring these imports, why are they not listed as 3rd party exceptions to this try/except rule?
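
This one was not answered in the log. For anyone hitting the same lint error, the gating pattern salt's lint rule asks for generally looks like the sketch below; 'thirdpartylib' is a placeholder, and whether salt.ext.six.moves should be exempt from the rule is a separate question for the salt issue tracker:

    # third-party imports are wrapped so the module still loads without them
    try:
        import thirdpartylib
        HAS_THIRDPARTYLIB = True
    except ImportError:
        HAS_THIRDPARTYLIB = False

    # imports from salt itself normally stay ungated
    import salt.exceptions


    def __virtual__():
        if not HAS_THIRDPARTYLIB:
            return False, 'thirdpartylib is required for this module'
        return True
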
17:00 jesk joined #salt
17:01 spuder joined #salt
17:01 jesk hiho
17:01 jesk new to Salt
17:01 jesk I wonder how I can see help to module parameters, like exception modules
17:01 jesk eg. "salt '*' -d useradd" doesn't give out anything
17:04 whytewolf jesk: salt 'minion' sys.doc useradd
17:04 jesk whytewolf: thanks
17:04 whytewolf [you probably don't want '*' as every minion will return its version]
17:05 jesk hm ok
17:06 jesk still empty output somehow
17:06 jesk using 'user.add' just display very short decription which can't be the full one
17:06 simondodsley probably as there are no docs in the module useradd. Try a different one. -d works on others where docs have been specifically written into the module
17:07 jesk ok
17:07 jesk how can I see the function signature at least?
17:07 Edgan joined #salt
17:08 jesk of user.add
17:08 whytewolf if you want the function signature look it up online
17:08 whytewolf https://docs.saltstack.com/en/latest/salt-modindex.html
17:08 numkem joined #salt
17:08 jesk so I need a webbrowser to use salt basically :D
17:08 whytewolf the built in docs are just the doc strings of the functions
17:09 whytewolf well, the other option is to grok the code
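
The commands discussed, collected in one place; the minion id is a placeholder:

    # docstring for a single function
    salt 'myminion' sys.doc user.add
    # all docstrings in the user module
    salt 'myminion' sys.doc user
    # same thing locally, without going through the master
    salt-call --local sys.doc user.add
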
17:12 lkolstad joined #salt
17:17 mikecmpbll joined #salt
17:19 AvengerMoJo joined #salt
17:20 impi joined #salt
17:33 dxiri joined #salt
17:37 cholcombe joined #salt
17:40 johnj_ joined #salt
17:44 btorch there is no issue having salt-api running on all salt-masters on a multi-master setup is there ?
17:45 dol-sen left #salt
17:45 whytewolf shouldn't be.
17:46 btorch cool thanks
17:48 spuder joined #salt
17:48 Ryan_Lane promoting saltstack for aws orchestration, if anyone wants to give some medium or twitter love: https://eng.lyft.com/saltstack-as-an-alternative-to-terraform-for-aws-orchestration-cd2ceb06bf8c https://twitter.com/SquidDLane/status/902939487256195072
17:49 numkem joined #salt
17:49 DanyC joined #salt
17:53 spuder joined #salt
17:53 netcho gj Ryan_Lane
17:53 oida_ joined #salt
17:53 Ryan_Lane thanks :)
17:53 netcho improved states from masterless post from while back?
17:53 netcho i used those :P
17:55 netcho post from 2014. :)
17:56 rgrundstrom_home joined #salt
17:56 rgrundstrom_home Good evening
17:57 Ryan_Lane netcho: we've been working on them since then :)
17:57 whytewolf good morning
17:57 whytewolf retwittered Ryan_Lane
17:57 Ryan_Lane lots of AWS support. mostly this was a post about "here's all the stuff we don't have in the docs"
17:57 netcho i love you
17:57 Ryan_Lane we'll be working on the docs soon :)
17:57 netcho thats all
17:58 rgrundstrom_home How much do you need to know to be able to take the SSCE exam?
17:58 * whytewolf shrugs. never took it :P
17:58 netcho :)
17:59 oida joined #salt
18:00 rgrundstrom_home I have been writing some pretty advanced stuff lately... Would be nice to be able to have a certification for the hours spent in it
18:01 * rgrundstrom_home has spent about 10-12 hours a day on average the past 6 months
18:01 whytewolf rgrundstrom_home: there is info about the exam here https://saltstack.com/certification/ it does list the topics
18:02 netcho saltstack does on-site trainings right?
18:03 * rgrundstrom_home lives in Europe so going to Salt Lake is..... a long trip... Would be awesome tho.
18:03 whytewolf i believe they do onsite trainings. least they did when they first started
18:04 nixjdm joined #salt
18:05 aleph- joined #salt
18:06 omie888777 joined #salt
18:06 rgrundstrom_home Well reading the topics I "think" that i might have what it takes to do this....
18:06 babilen Anybody using Salt to setup Kubernetes clusters? Seems as if it is not really used anymore
18:06 * rgrundstrom_home needs to study some stuff tho. Off to the library
18:09 nixjdm_ joined #salt
18:12 Church- joined #salt
18:12 Church- joined #salt
18:13 Angleton joined #salt
18:14 nixjdm joined #salt
18:14 Church- joined #salt
18:20 _KaszpiR_ joined #salt
18:20 DammitJim joined #salt
18:26 edrocks joined #salt
18:27 Johnm61 joined #salt
18:30 aleph- joined #salt
18:36 rgrundstrom_home Good night everyone.
18:39 lordcirth_work Just submitted my first state PR! https://github.com/saltstack/salt/pull/43273
18:40 johnj_ joined #salt
18:41 sh123124213 joined #salt
18:47 MTecknology lordcirth_work: but where are the unit tests?
18:48 dxiri joined #salt
18:48 MTecknology Am I reading that right, is only nfs3 supported?
18:49 MTecknology (first one was only half serious, second one a bit more so)
18:51 MTecknology lordcirth_work: You probably don't want 27 commits of yours showing up in the changelog (which will probably happen unless you squash them);   You may want to consider doing a squash merge from your add-nfs-branch into your develop and then request the pull from there.  (just my 2-cents)
18:54 * MTecknology has had a commit about dragons show up in the release notes...
18:55 chutzpah joined #salt
19:00 justanotheruser joined #salt
19:13 Hybrid joined #salt
19:14 shanth_ joined #salt
19:20 sh123124213 joined #salt
19:22 lordcirth_work MTecknology, the module is called nfs3, I haven't really tested with others, basic features should work
19:30 Church- joined #salt
19:31 nixjdm joined #salt
19:32 Hybrid joined #salt
19:35 shanth_ joined #salt
19:38 Church- joined #salt
19:40 shanth__ joined #salt
19:41 johnj_ joined #salt
19:43 A_Person__ joined #salt
19:45 A_Person___ joined #salt
19:48 shanth_ joined #salt
19:49 aleph- joined #salt
19:50 Church- joined #salt
19:52 Church- joined #salt
20:18 wccropper joined #salt
20:19 shandonjuan joined #salt
20:20 oida_ joined #salt
20:21 schemanic joined #salt
20:23 wccropper hello, brand new install of salt-minion 2017.7.1, 2 servers same exact builds, 1 works 1 fails to even start the minion.
20:23 wccropper returns "[ERROR   ] __call__() takes exactly 2 arguments (1 given)" followed by traceback
20:28 SneakyPhil is it possible to run a command against the saltmaster while applying a state to a minion?
20:29 SneakyPhil like a local command that the minion doesn't know about
20:30 wavded joined #salt
20:31 nixjdm joined #salt
20:33 wccroppe_ joined #salt
20:39 wccroppe_ left #salt
20:40 wccroppe_ joined #salt
20:42 johnj_ joined #salt
20:54 zulutango joined #salt
20:55 shandonjuan let me know if this is too general or if more detail is required. I have been using salt-minion on windows where the service is already started upon start-up / reboot, however, on Red Hat Linux the salt-minion has to be manually started. what is the best way to ensure this happens?
20:55 whytewolf what version of redhat and how was salt installed? [repo or pip]
20:59 _JZ_ joined #salt
21:01 shandonjuan redhat 7.3 , I installed from the repo
21:01 shandonjuan then just yum
21:01 whytewolf shandonjuan: systemctl enable salt-minion
21:02 gtmanfred SneakyPhil:not while applying the state, but before or after, using the orchestration runner
21:02 gtmanfred wccropper: can you paste the traceback on gist.github.com?
21:02 whytewolf or, be fancy and make a state with service.running
21:02 whytewolf - enable: true
21:02 sh123124213 joined #salt
21:03 gtmanfred salt-call --local service.start salt-minion
21:03 gtmanfred salt-call --local service.enable salt-minion
21:03 whytewolf service.enable you mean?
21:03 whytewolf :P
21:03 gtmanfred ¯\(°_o)/¯
21:04 Guest73 joined #salt
21:04 whytewolf service.do_a_tap_dance
21:04 shandonjuan you can run a salt-call without the minion running already?
21:05 whytewolf yes
21:05 gtmanfred it starts up a whole new salt-minion process so it is a little slower than using salt from the master to run in the already running salt-minion daemon, but yes
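
Pulling the suggestions together for RHEL 7 (systemd); the state id below is just an example:

    # one-off, on the minion itself
    systemctl enable salt-minion
    systemctl start salt-minion

    # or via salt-call, as gtmanfred shows
    salt-call --local service.enable salt-minion
    salt-call --local service.start salt-minion

    # or managed by a state, as whytewolf suggests
    salt-minion-service:
      service.running:
        - name: salt-minion
        - enable: True
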
21:12 aneeshusa joined #salt
21:18 dnull joined #salt
21:18 DanyC_ joined #salt
21:21 shanth__ joined #salt
21:22 DanyC_ left #salt
21:22 DanyC_ joined #salt
21:25 omie888777 joined #salt
21:25 aneeshusa joined #salt
21:26 omie88877777 joined #salt
21:30 aneeshusa joined #salt
21:31 blowfish joined #salt
21:31 nixjdm joined #salt
21:52 tiwula joined #salt
21:52 cgiroua joined #salt
21:53 tiwula joined #salt
21:58 jessexoc joined #salt
22:03 dnull joined #salt
22:05 vexati0n i have 3 salt masters -- 1 primary, and 2 syndics that report to the primary. minions are connected to all 3. The primary maintains a cache of all the minion info, and it works fine for everything *except* the 2 syndic masters... there's no cached grains for those. anyone have any idea why?
22:10 ssplatt joined #salt
22:24 tihom joined #salt
22:31 nixjdm joined #salt
22:32 jas02 joined #salt
22:34 shanth_ joined #salt
22:36 ssplatt joined #salt
22:38 oida joined #salt
22:38 dnull joined #salt
22:43 ecdhe joined #salt
22:44 johnj_ joined #salt
22:53 jessexoc joined #salt
23:04 Bock joined #salt
23:05 aleph- joined #salt
23:10 wccropper @gtmanfred https://gist.github.com/wccropper/4f060c63455331e70b5d748bab7ee2ee
23:34 aneeshusa joined #salt
23:45 sh123124213 joined #salt
23:45 johnj_ joined #salt
23:49 GMAzrael joined #salt
23:56 icebal joined #salt
