
IRC log for #salt, 2016-10-25


All times shown according to UTC.

Time Nick Message
00:00 Awesomecase joined #salt
00:01 pipps joined #salt
00:04 _JZ_ joined #salt
00:16 sagerdearia joined #salt
00:16 pipps joined #salt
00:19 blue joined #salt
00:20 amontalban joined #salt
00:20 amontalban joined #salt
00:20 babilen joined #salt
00:21 edrocks joined #salt
00:21 ujjain joined #salt
00:21 ujjain joined #salt
00:22 amcorreia joined #salt
00:25 alxchk_ joined #salt
00:26 hax404_ joined #salt
00:26 pipps joined #salt
00:29 MeltedLux joined #salt
00:33 mirko joined #salt
00:36 antpa joined #salt
00:37 jeddi joined #salt
00:39 Antiarc joined #salt
00:40 edrocks_ joined #salt
00:40 edrocks__ joined #salt
00:42 edrocks joined #salt
00:47 infrmnt1 joined #salt
00:53 antpa joined #salt
00:57 hasues joined #salt
00:58 hasues left #salt
01:00 stanchan joined #salt
01:01 systo joined #salt
01:02 sh123124213 joined #salt
01:03 onlyanegg joined #salt
01:07 edrocks_ joined #salt
01:09 nawwmz joined #salt
01:21 abednarik joined #salt
01:32 edrocks_ joined #salt
01:35 jenastar joined #salt
01:46 akhter joined #salt
01:46 neilf__ joined #salt
01:46 edrocks joined #salt
01:47 ilbot3 joined #salt
01:47 Topic for #salt is now Welcome to #salt! | Latest Versions: 2015.8.12, 2016.3.3 | Support: https://www.saltstack.com/support/ | Logs: http://irclog.perlgeek.de/salt/ | Paste: https://gist.github.com/ (please don't multiline paste into channel) | See also: #salt-devel, #salt-offtopic | Ask with patience as we are volunteers and may not have immediate answers
01:48 aagbds joined #salt
01:52 catpiggest joined #salt
01:55 sp0097 joined #salt
01:56 aagbds joined #salt
02:04 edrocks_ joined #salt
02:19 ecdhe joined #salt
02:26 sagerdearia joined #salt
02:30 DEger joined #salt
02:32 DEger joined #salt
02:33 Bryson joined #salt
02:34 systo joined #salt
02:36 nawwmz joined #salt
02:37 stanchan joined #salt
02:38 vodik joined #salt
02:45 evle joined #salt
02:47 sh123124_ joined #salt
02:49 sh123124213 joined #salt
02:50 mikea Is there a quick/easy way to check if a linux box needs a reboot after patching?
03:03 stanchan joined #salt
03:04 jas02_ joined #salt
03:04 jeddi joined #salt
03:05 systo joined #salt
03:08 cyborg-one joined #salt
03:09 edrocks joined #salt
03:14 jeddi joined #salt
03:16 iggy some distros touch /var/run/reboot-required
03:16 iggy that's not universal though
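[Editor's note: the check iggy describes can be sketched as a small shell helper. The marker file is a Debian/Ubuntu convention (written after kernel/libc updates); RHEL-family hosts would use `needs-restarting -r` from yum-utils instead. The function name is illustrative.]

```shell
needs_reboot() {
    # $1: marker file path (normally /var/run/reboot-required)
    [ -f "$1" ] && echo yes || echo no
}

needs_reboot /var/run/reboot-required
```

From salt, the same check can be run fleet-wide with `salt '*' file.file_exists /var/run/reboot-required`.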
03:23 antpa joined #salt
03:24 jeddi joined #salt
03:30 onlyanegg I just put together a multimaster with failover setup with random master choice. Now I can't connect to all my minions from a single box, which is troublesome for orchestration.
03:32 onlyanegg First, is this normal behavior? and second, how do people deal with this?
03:35 jeddi joined #salt
03:40 jeddi joined #salt
03:44 mapu joined #salt
03:50 iggy that's why I don't/wouldn't use multi-master
03:50 iggy onlyanegg: what version of salt?
03:50 jeddi joined #salt
03:51 stanchan joined #salt
03:51 iggy there are supposed to be some fixes in recent versions... don't know if it's still complete though
03:56 jeddi joined #salt
03:57 Ni3mm4nd joined #salt
03:57 mikea we have multi-master currently
03:58 mikea but we're getting rid of one
03:58 mikea and just letting VMWare do our HA
03:58 thedukeness joined #salt
04:01 jeddi joined #salt
04:05 onlyanegg iggy: Boron 2016.3.3
04:06 jeddi joined #salt
04:18 Bryson joined #salt
04:19 nawwmz joined #salt
04:19 om joined #salt
04:20 nawwmz anyone ever seen salt not add an ephemeral device on an ec2 instance?
04:28 stanchan joined #salt
04:36 sagerdearia joined #salt
04:45 __number5__ nawwmz: which instance type you are using, all m4.* don't have ephemeral device
04:45 nawwmz __number5__: t2.large
04:46 __number5__ nawwmz: t2.* don't have ephemeral either. only old gen instance types have them
04:46 nawwmz ahh shit
04:47 __number5__ http://www.ec2instances.info/?selected=t2.large
04:47 nawwmz is it basically whatever is listed here http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStore_UsageScenarios
04:49 nawwmz __number5__: i like your reference, pretty cool actually, so aything that is 0GB (ebs only) = no ephemeral?
04:49 __number5__ yep, that's correct
04:49 nawwmz fml
04:49 nawwmz sucks but good to know it wasnt a potential bug :) thx for the insight
04:49 __number5__ no worries
04:55 iggy onlyanegg: that's unfortunate... that means the situation hasn't improved... how bad do you need multi-master?
04:59 GreatSnoopy joined #salt
05:00 nawwmz __number5__: Q though... if some instances dont support ephemeral, what are people doing out there for swap drives?
05:02 DarkKnightCZ joined #salt
05:02 ivanjaros joined #salt
05:02 iggy the same the rest of us do... get enough memory that you don't need swap
05:04 __number5__ nawwmz: set to zero or set it on ebs
05:05 nawwmz dope thx
05:05 __number5__ get bigger instances is more important ;)
05:05 nawwmz yeah seems like it
05:10 nawwmz joined #salt
05:10 edrocks joined #salt
05:12 samodid joined #salt
05:21 Straphka hoi
05:25 swa_work joined #salt
05:32 _aeris_ joined #salt
05:35 binocvla1 Anyone have an idea why 'cmd.run' wouldn't be found? I'm using it in a state file, applied using salt-ssh
05:36 iggy binocvla1: need more context
05:37 binocvla1 http://pastebin.com/TXSDDYQe
05:37 binocvla1 iggy: The above pastebin shows my input and the output
05:43 akhter joined #salt
05:44 withaSMILE joined #salt
05:49 iggy oh, totally missed salt-ssh
05:49 * iggy backs away slowly
05:49 binocvlar Hahaha - thanks for your honesty iggy :P
05:50 swa_work joined #salt
05:50 binocvlar The frustrating part is that I'm only using salt-ssh to stand-up my salt-master (running salt-minion), and to complete the picture, I wanted to git clone my salt repo onto the master host
05:50 binocvlar The git.present function doesn't appear to work, so I thought I'd go with the rough-and-ready cmd.run option instead
05:51 iggy I tried salt-ssh once... I had about as much luck as you
05:51 binocvlar Ahhh. Weirdly, I'd had a pretty smooth run so far
05:51 iggy you checked the issue tracker?
05:52 binocvlar Good point - I'd only really checked Google (which did find a few issue-tracker items that were unrelated). I'll take a look now.
05:52 iggy if I were back in the habit of automating building salt-masters, I'd probably go with masterless to bootstrap it
05:54 binocvlar i.e. salt-call ?
05:55 iggy --local, yeah
05:57 binocvlar aWeirdly enough, that's a
05:58 binocvlar Weirdly enough, that's all that salt-ssh is doing:
05:58 binocvlar /usr/bin/python2.7 /var/tmp/.root_e3ae40_salt/salt-call --retcode-passthrough --local --metadata --out json -l quiet -c /var/tmp/.root_e3ae40_salt -- state.pkg /var/tmp/.root_e3ae40_salt/salt_state.tgz test=None pkg_sum=hashval hash_type=md5
05:59 Elsmorian joined #salt
05:59 impi joined #salt
06:00 iggy sort of
06:01 ivanjaros joined #salt
06:03 ivanjaros3916 joined #salt
06:09 ninjada_ joined #salt
06:09 nawwmz joined #salt
06:11 rdas joined #salt
06:16 zer0def joined #salt
06:19 jas02 joined #salt
06:20 felskrone joined #salt
06:24 jas02 joined #salt
06:25 zulutango joined #salt
06:25 nawwmz joined #salt
06:26 nidr0x joined #salt
06:27 bocaneri joined #salt
06:30 ninjada joined #salt
06:34 jas02 joined #salt
06:35 wangofett joined #salt
06:37 dynamicudpate joined #salt
06:39 Rubin_ joined #salt
06:39 mskjeret joined #salt
06:39 oyvindmo joined #salt
06:40 Garo__ joined #salt
06:40 mpanetta joined #salt
06:41 blue_ joined #salt
06:41 hopthrisC joined #salt
06:42 sebastian-w_ joined #salt
06:42 rickflare2 joined #salt
06:42 catpigger joined #salt
06:43 babilen joined #salt
06:44 dcpc007 joined #salt
06:44 Morrolan joined #salt
06:44 stooj joined #salt
06:45 m4rx joined #salt
06:48 ozux__ joined #salt
06:51 sagerdearia joined #salt
06:51 binocvlar iggy: Just in case you're interested: I've got a personal salt-ssh setup (i.e. not for $employer), running a slightly older version of salt-ssh, and the cmd.run call works a treat
06:52 binocvlar if I upgrade salt-ssh to match the one that I've setup for work, it still works
06:52 binocvlar there's a gremlin in there somewhere... No idea where though
06:52 rdas joined #salt
06:55 mskjeret Anyone that has any experience with long lived tokens through salt-api? I have a problem where I after some time gets 401 "No permission -- see authorization schemes"
06:56 mskjeret I can login, I get a token that the system says should work a year. It works for a day, then starts to give the error mentioned above.
07:01 xet7 joined #salt
07:02 edrocks joined #salt
07:04 binocvlar iggy (and anyone else who may care) - it looks like the problem was caused by some cached data. I manually removed /var/tmp/.root_e3ae40_salt/, and now I can run cmd.run
07:07 jas02_ joined #salt
07:07 west575 joined #salt
07:07 binocvlar if you run your salt-ssh command with "-l trace" appended to it, then output stdout and stderr from that command, you can read through the output and look for lines that begin with SALT_ARGV. The first line matching this string appears to just be a test (it grabs all of the grains I think), but the later ones actually perform the work. Anyway, the lines that begin with SALT_ARGV will give you the location
07:08 binocvlar to the tarball that salt ships to the remote machine
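[Editor's note: binocvlar's fix can be scripted; a hedged sketch, run on the remote host. The hash in the directory name (e3ae40 above) varies per deployment, hence the glob; the function name is illustrative.]

```shell
clear_saltssh_cache() {
    # $1: base temp dir salt-ssh ships its payload to (normally /var/tmp)
    # The cached payload lives in a dir like /var/tmp/.root_<hash>_salt;
    # removing it forces the next salt-ssh run to re-ship a fresh tarball.
    rm -rf "$1"/.root_*_salt
}

clear_saltssh_cache /var/tmp
```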
07:08 jas02 joined #salt
07:09 okolesnykov joined #salt
07:14 jas02 joined #salt
07:17 mpanetta_ joined #salt
07:22 toanju joined #salt
07:23 samodid joined #salt
07:24 rbjorklin joined #salt
07:25 impi joined #salt
07:25 DarkKnightCZ Hi, I'm currently facing issue with automated reconfiguration of salt-minion (change minion id + regen keys) on Windows. The problem is Salt-minion keeps lock on certificates, so it's quite impossible to change it with salt itself (even saltutil.regen_keys throws IOException)... does anybody have any thought how to solve this issue? Executed script gets killed if he stops the service, so maybe some fork + detach from the process tree? Thank
07:27 ribdro joined #salt
07:28 okolesnykov joined #salt
07:28 mikecmpbll joined #salt
07:29 samodid joined #salt
07:30 hamlesh left #salt
07:43 sgo_ joined #salt
07:45 haam3r joined #salt
07:53 m4rx_ joined #salt
07:55 JohnnyRun joined #salt
07:59 cyborg-one joined #salt
08:00 krymzon joined #salt
08:04 ronnix joined #salt
08:06 antpa joined #salt
08:07 jas02_ joined #salt
08:12 Rebus joined #salt
08:15 sgo_ joined #salt
08:17 Rebus left #salt
08:22 s_kunk joined #salt
08:23 m4rx joined #salt
08:23 kleszcz joined #salt
08:25 ninjada joined #salt
08:28 flughafen joined #salt
08:32 jas02_ joined #salt
08:33 jas02_ joined #salt
08:34 m4rx joined #salt
08:34 Mattch joined #salt
08:37 ronnix_ joined #salt
08:40 jas02 joined #salt
08:45 jas02 joined #salt
08:49 jas02 joined #salt
08:51 aagbds joined #salt
08:58 keimlink joined #salt
08:58 Ni3mm4nd joined #salt
08:59 jas02 joined #salt
09:01 jas02 joined #salt
09:01 sagerdearia joined #salt
09:04 edrocks joined #salt
09:11 mikecmpbll joined #salt
09:14 cyteen joined #salt
09:23 mher718 joined #salt
09:25 Brijesh1 joined #salt
09:32 N-Mi joined #salt
09:32 N-Mi joined #salt
09:33 m4rx joined #salt
09:40 antpa joined #salt
09:41 jas02 joined #salt
09:53 impi joined #salt
09:59 amcorreia joined #salt
10:09 ronnix joined #salt
10:18 m4rx joined #salt
10:19 impi joined #salt
10:28 sgo_ joined #salt
10:35 jas02 joined #salt
10:36 m4rx joined #salt
10:39 dariusjs joined #salt
10:51 abednarik joined #salt
10:54 rdas joined #salt
11:01 om joined #salt
11:06 edrocks joined #salt
11:09 leev joined #salt
11:10 barmaley joined #salt
11:11 sagerdearia joined #salt
11:27 okolesnykov joined #salt
11:33 sgo_ joined #salt
11:33 Ni3mm4nd joined #salt
11:43 lionel joined #salt
11:47 jjuris joined #salt
11:47 jjuris hello
11:48 jjuris I got question
11:48 jjuris anybody here?
12:02 ninjada joined #salt
12:03 SaltyVagrant_ joined #salt
12:08 okolesnykov joined #salt
12:21 wangofett jjuris I'm here now
12:24 jjuris wooh ty
12:25 jjuris but i am busy now
12:25 jjuris I try to explain my problem later
12:28 ronnix joined #salt
12:30 jas02 joined #salt
12:36 edrocks joined #salt
12:40 edrocks joined #salt
12:40 babilen jjuris: It is typically a good idea to just ask your question and then wait for answers .. it might take a while for someone to arrive who can answer your question
12:40 wangofett Probably be a few more active later, too - it's still pretty early here in the states ;)
12:40 XenophonF joined #salt
12:41 babilen wangofett: You can't really answer a question that hasn't been asked
12:41 wangofett Indeed
12:41 XenophonF joined #salt
12:41 babilen And I guess we have a bunch of people from Asia, Oceania or Europe in here also
12:41 jjuris Well its is not easy situation I need help with
12:42 jjuris the best would be to have somebody available for live discussion :)
12:43 AndreasLutro the easiest way to start a "live discussion" is to ask a question
12:43 jjuris But have u ever experienced that SaltStack made service on target minion get into coma state ?
12:43 jjuris Like everything seems to be ok but it is not  ?
12:43 babilen I haven't
12:43 AndreasLutro I'm never going to say "yes I'm available" but I'll answer the question if I can
12:44 babilen jjuris: Have you?
12:44 jjuris and only restart of virtual machine helps ?
12:44 jjuris Yes twice in past week
12:44 AndreasLutro there's no such thing as "coma state" in linux/salt, figure out what's actually happening. is the salt-minion process hanging? is the entire system going down?
12:44 jjuris see
12:44 jjuris Let me describe what happend
12:45 babilen You might want to paste relevant information, commands and their output to pastebins such as http://refheap.com, http://paste.debian.net, https://gist.github.com, http://sprunge.us, …
12:46 numkem joined #salt
12:46 dendazen joined #salt
12:48 amontalban joined #salt
12:50 jjuris I made very simple formula for distribution of SSL certificates with keys, check it out :
12:50 jjuris 1. install distribution pkg called Secrets,
12:50 jjuris 2. download config, token, encryption key
12:50 jjuris 3. run Secrets application (it downloads certificate and key from storage,  lets say into /secrets/key, cert)
12:50 jjuris 4. back up actual certificate  and key /app/certs/cert to cert.bkp  key to key.bkp
12:50 jjuris 5. create symlink from /secrets/key -> /app/cert/key , etc
12:50 jjuris pretty easy isn't it ?
12:50 wangofett Pretty straight forward, yes
12:51 akhter joined #salt
12:51 impi joined #salt
12:52 jjuris then next step is connect to server, double check that everything is Ok and restart NGINX server, double check service and final step add server back to load balancer
12:52 jjuris I did this on lets say 250 - 300 machines
12:52 jjuris no problem
12:53 jas02 joined #salt
12:54 jjuris But few services had problem, after some time service got into "coma state", it stopped responding, i could restart it , stop it , start it , no zombies, no hangs nothing , but it was in coma state
12:54 jjuris Only help was to restart whole virtual machine
12:55 babilen What happens if you strace the process?
12:55 jjuris when i did the same thing by hand, no problem
12:55 jjuris but then I involved salt, BAM!
12:55 jjuris All illusions about linux stability gone
12:55 AndreasLutro are you saying the whole VM stalled? or just the salt minion?
12:55 babilen And even restarting the service doesn't cause it to respond? Is the process in question stopped successfully ?
12:55 jjuris Strace, said that server was waithing for something
12:56 J0hnSteel joined #salt
12:56 babilen What does it say exactly? And which process/server are you referring to, salt-minion/salt-master or something else?
12:56 jjuris something else
12:56 babilen Such as?
12:56 jjuris nginx webserver
12:56 babilen So, what does an strace say?
12:59 jjuris futex(0x2b1012a7e9d0, FUTEX_WAIT, 26202, NULL
12:59 jjuris nothing useful
12:59 babilen That's a very short strace
12:59 jjuris but do you understand ? You do same thing by hand, exctly
12:59 jjuris no problem
12:59 jjuris you do it with salt, and the service goes crazy
12:59 wangofett does salt say that machine returned fine?
13:00 babilen And when you stop the service and start it again it behaves identical? Could you show some output that confirms that?
13:00 jjuris ofc ofc
13:00 babilen Could you paste the minion debug output during the state run in question?
13:00 wangofett or does it stop responding - e.g. `salt 'that-machine' test.ping` doesn't return?
13:01 jjuris there is no problem with communication between master and minion :)
13:01 jjuris problem is that salt somehow affected service, and I do not know how why and how to prevent it from doing it again
13:02 jjuris sreality-admin4 [a2-master77.ng|lxc]:~ $ service szn-sreality-adminweb restart
13:02 jjuris [ ok ] Stopping adminweb: . . . . . . . . . . .[....] done.
13:02 jjuris [ ok ] Starting adminweb:[....] done
13:02 jjuris sreality-admin4 [a2-master77.ng|lxc]:~ $ strace -p 9003
13:02 jjuris Process 9003 attached
13:02 jjuris futex(0x2b461b1bf9d0, FUTEX_WAIT, 9008, NULL
13:03 AndreasLutro you have to find out what the actual problem is first
13:04 flowstate joined #salt
13:05 jjuris thats why I joind this forum :)
13:05 jjuris any ideas ?
13:06 AndreasLutro nope. find someone who knows how to use strace, lsof, gdb etc. to investigate further
13:07 jjuris ok\
13:08 jjuris but as you saw, would you say that saltStack is responsible for situation ?
13:09 AndreasLutro probably not, more likely some sort of system race condition that just happens to be triggered by salt's state execution
13:09 AndreasLutro if you manage to narrow the problem down I wouldn't be surprised if you were able to reproduce it with a bash command
13:10 ashmckenzie joined #salt
13:10 wangofett jjuris: https://jvns.ca/blog/2014/04/20/debug-your-programs-like-theyre-closed-source/ for magic wizardry debugging tools
13:11 dyasny joined #salt
13:11 dariusjs joined #salt
13:11 jjuris YY, agree, I am not that skilled
13:11 du5tball joined #salt
13:11 wangofett It's probably just that salt is faster than you ;)
13:11 jjuris Ty I try to follow your material
13:11 jas02_ joined #salt
13:12 wangofett (I'm not Julia Evans - she just has some amazing stuff)
13:12 du5tball hi there. does the minion regularly try to contact the master? or is there a way to set it so the master sends the requests?
13:12 wangofett du5tball: what do you mean?
13:12 du5tball wangofett: i have a server, which the minion should run on, and a laptop, on which i want to run the master
13:12 wangofett Does anyone else use salt in this way? https://youtu.be/sjCilMXiyLc?t=7m28s
13:12 amontalban joined #salt
13:13 wangofett du5tball: so far that sounds normal, yes :)
13:13 wangofett (specifically where he has a deploy.sls)
13:14 du5tball wangofett: so the idea was that the minion doesn't contact the master on it's own because it could come to failed runs frequently, but for the master to contact the minion periodically
13:14 du5tball otoh... it'd probably be a better idea to have the master run on the server, and do the configs via a git
13:14 wangofett what does it need to do periodically?
13:15 du5tball wangofett: uh. check for new configs. (though maybe i'm imagining salt wrong. i know puppet agents check every 30 minutes if there's a new catalog for them)
13:16 wangofett *typically* what you're going to be doing is pushing out changes from the master. Presumably you know when there are new configs, right?
13:16 du5tball wangofett: i hope so :D
13:16 du5tball okay, then there's no worry about that. neat
13:17 wangofett So what you're probably going to want to do is stick your config in your salt directory that you're keeping under version control
13:18 wangofett it seems like the pattern of <thing>/files/<file>.<ext> is the convention that's been settled on when it comes to salt formulas: https://github.com/saltstack-formulas
13:18 wangofett e.g. https://github.com/saltstack-formulas/docker-formula/tree/master/docker
13:19 wangofett you make a change to the config, you commit it, and then you run `salt 'your minion here' state.highstate`
13:19 aagbds joined #salt
13:19 wangofett (from the master, naturally)
13:19 wangofett then it will apply whatever changes you've made in the config file there on the master to the minion(s)
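[Editor's note: the workflow wangofett outlines — config kept under version control next to the state, then pushed with state.highstate — can be sketched as a state file. All names here (myapp, paths) are hypothetical.]

```yaml
# /srv/salt/myapp/init.sls
myapp-config:
  file.managed:
    - name: /etc/myapp/myapp.conf
    - source: salt://myapp/files/myapp.conf   # <thing>/files/<file> convention
    - watch_in:
      - service: myapp-service   # restart the service when the config changes

myapp-service:
  service.running:
    - name: myapp
```

After committing a change to files/myapp.conf, running `salt 'your-minion' state.highstate` from the master applies it.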
13:24 akhter joined #salt
13:26 Tanta joined #salt
13:28 dps is it possible to select minions based on the absence of a grain?
13:28 rylnd joined #salt
13:30 rylnd hey guys. good afternoon. question: is there a way to tag VMs on a VMware backend with salt or salt-cloud? I looked at the docs and it doesnt look like it (maybe because pyvmomi cannot do it, you need the vcloud SDK AFAIK). just want to make sure i didnt overlook some documentation. thanks.
13:31 numkem joined #salt
13:37 scoates joined #salt
13:38 Rasathus joined #salt
13:40 dyasny joined #salt
13:41 mpanetta joined #salt
13:47 mpanetta joined #salt
13:50 mher718 joined #salt
13:50 ronnix joined #salt
13:57 Ni3mm4nd joined #salt
13:58 dps is it a best practice to crontab high state application on the master?
14:01 babilen dps: You could use the salt scheduler: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.schedule.html
14:02 ALLmightySPIFF joined #salt
14:03 __monty__ joined #salt
14:03 netcho joined #salt
14:04 khaije|mentat joined #salt
14:04 dps babilen, thank you for responding, this is cool, i didn't know about the feature.  that doesn't really answer my question though :(
14:05 babilen dps: I would consider it best practice to use the scheduler
14:05 dps ok i understand that
14:05 dps but like if you have a really large environment
14:05 dps i..e hundreds of servers
14:05 babilen That should answer your question "is it a best practice to crontab high state application on the master?" shouldn't it?
14:05 dps is it typical to schedule the high state
14:05 LotR sounds like you got your answer but you just don't like it :P
14:05 dps ah i see what you are saying babilen apologize for the confusion
14:06 dps the spirit of my question is is it best practice to schedule high state
14:06 babilen As in: I don't consider it best practice to schedule it using cron
14:06 babilen Right .. I got that now :)
14:06 dps :-)
14:06 dps sorry
14:06 babilen It's definitely done .. the problem is that you don't want all your minions to run a highstate at the same time in large deployments
14:07 dps is there documentation that discusses that?
14:07 dps i haven't seen any
14:07 babilen https://docs.saltstack.com/en/carbon/topics/tutorials/intro_scale.html that's about scale and associated issues, but that doesn't discuss the problem of scheduling a highstate
14:07 babilen How many minions are we talking about here?
14:08 dps when all is said and done no more than 500
14:08 dps over 200
14:08 babilen That's nothing
14:08 dps heh ok
14:09 babilen Feel free to implement some of the things discussed in that scale issue and make sure that you have a reasonably sized master (salt loves multiple cores)
14:10 babilen I wouldn't say "nothing", but I wouldn't worry about it too much .. Look into "splay" with the scheduler
14:10 LotR multiple cores to grind the salt between? ;)
14:10 dps i wasn't really concerned with the peformance of it, rather the philosophy of whether or not it is appropriate
14:10 babilen Ah, okay
14:10 dps or suggestable
14:10 AndreasLutro depends on what your states and servers are doing
14:11 AndreasLutro we have some servers where salt constantly has to correct permissions of a dumb web application, so we run those states regularily
14:11 dps i.e. is the holy precipice of salt a scheduled high state
14:11 babilen dps: It is nice in that you know that simply by pushing new states that they are being applied automatically. I would only do that if you have ensured proper testing before they are being applied in prod.
14:12 dps ok i think this answers my question, thank you AndreasLutro, babilen, and LotR for responding
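[Editor's note: the scheduler-plus-splay approach babilen recommends looks roughly like this in the minion config or pillar; the interval values are illustrative.]

```yaml
schedule:
  highstate:
    function: state.highstate
    minutes: 60    # run once an hour...
    splay: 600     # ...offset by a random 0-600s so minions don't pile up
```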
14:14 hasues joined #salt
14:14 hasues left #salt
14:14 tapoxi joined #salt
14:15 arif-ali joined #salt
14:15 dyasny joined #salt
14:17 babilen dps: yw
14:20 mist12332 joined #salt
14:21 mist12332 Hiya fellas, how do i use network.get_route with the python salt wrapper?
14:21 mist12332 salt_c.cmd('mymachine', 'network.get_route 1.1.1.1') does not seem to work
14:22 mist12332 ok i solved it, you have to pass it as an argument to cmd in a list
14:22 mist12332 this totally does not explain what needs to be done: "Passed invalid arguments to network.get_route: get_route() takes exactly 1 argument (11 given)\n\n    Return routing information for given destination ip\n\n    .. versionadded:: 2015.5.3\n\n    .. versionchanged:: 2015.8.0\n        Added support for SunOS (Solaris 10, Illumos, SmartOS)\n
14:22 mist12332 Added support for OpenBSD\n\n    CLI Example::\n\n        salt '*' network.get_route 10.10.10.10\n    "
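[Editor's note: the fix mist12332 found — pass positional arguments as a list, not inside the function-name string — in sketch form. The target name is hypothetical, and the client call is commented out since it needs a running salt master.]

```python
def build_call(fun_with_args):
    """Split a CLI-style 'module.fun arg1 arg2' string into the
    (fun, arg_list) pair that LocalClient.cmd expects."""
    parts = fun_with_args.split()
    return parts[0], parts[1:]

fun, args = build_call('network.get_route 1.1.1.1')
# With the salt package installed and a running master:
# import salt.client
# client = salt.client.LocalClient()
# result = client.cmd('mymachine', fun, arg=args)
```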
14:22 blue joined #salt
14:23 dendazen joined #salt
14:23 wangofett can you use pillar data in an orchestration file? i.e. I'm trying to do `- echo "{{ pillar['buildbot']['worker_name'] }}"` with a cmd.run but it's telling me variable: 'salt.utils.context.NamespacedDictWrapper object' has no attribute 'buildbot'"],
14:24 dendazen Hey guy i am generating the keys https://gist.github.com/anonymous/4c31e624b29c65ee50a6c946757908e8
14:24 AndreasLutro no wangofett
14:24 AndreasLutro at least not as you traditionally use pillars
14:25 dendazen Is there a way after generating it to add pub key on that host  to .ssh/authorized_keys for that user on that host where keys were generated?
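[Editor's note: dendazen's question goes unanswered in the log; one hedged sketch is the ssh_auth.present state, assuming the public key is available to the master's fileserver. All names are hypothetical.]

```yaml
deploy-pubkey:
  ssh_auth.present:
    - user: myuser
    - source: salt://keys/myuser_id_rsa.pub
```

If the key was generated on the minion itself, it would first need to be pushed back to the master (e.g. with cp.push, which requires file_recv enabled) or inlined as key text under `- name:` instead of `- source:`.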
14:27 wangofett AndreasLutro: so... I'd have to set it up in my states and then use the pillars from there, right?
14:29 AndreasLutro yeah
14:30 Cadmus joined #salt
14:32 mist12332 joined #salt
14:33 antpa joined #salt
14:34 mist12332 joined #salt
14:36 flowstate joined #salt
14:37 flowstate joined #salt
14:38 lompik joined #salt
14:41 khaije|mentat joined #salt
14:41 khaije|mentat alt
14:49 PerilousApricot joined #salt
14:50 flowstate joined #salt
14:52 bluenemo joined #salt
14:53 flowstate joined #salt
14:53 racooper joined #salt
14:54 flowstate joined #salt
14:57 keltim joined #salt
15:00 mgresser joined #salt
15:04 antpa joined #salt
15:04 keltim_ joined #salt
15:06 antpa joined #salt
15:07 nawwmz joined #salt
15:08 darthzen joined #salt
15:09 ronnix joined #salt
15:12 pipps joined #salt
15:14 jas02_ joined #salt
15:18 jjuris left #salt
15:21 subsignal joined #salt
15:21 fxhp joined #salt
15:28 ZachLanich joined #salt
15:28 Ni3mm4nd joined #salt
15:30 rherna joined #salt
15:31 wangofett Does anyone know of any tutorials or blog posts about actually deploying software with saltstack? I've read https://www.conetix.com.au/blog/software-deployment-saltstack but it's light on the details that I don't already know
15:32 wangofett specifically, when we're releasing new versions of our software - database schema migrations, new software versions, etc.
15:33 ALLmightySPIFF joined #salt
15:36 cro joined #salt
15:37 jas02 joined #salt
15:38 armguy joined #salt
15:39 sgo_ joined #salt
15:41 DarkKnightCZ joined #salt
15:46 mgresser joined #salt
15:46 Trauma joined #salt
15:47 akhter joined #salt
15:47 Cadmus left #salt
15:49 gimpy2939 left #salt
15:49 debian112 joined #salt
15:56 fxhp joined #salt
16:00 heaje joined #salt
16:00 akhter joined #salt
16:01 sh123124213 joined #salt
16:04 netcho joined #salt
16:05 lazzurs joined #salt
16:06 mgresser joined #salt
16:08 rihannon joined #salt
16:08 rihannon Does anyone know how to dump all mine data?
16:09 rihannon I tried: salt '*' mine.valid
16:09 rihannon from the docs, but I get back: 'mine.valid' is not available.
16:11 dps Is there some special syntax to use compound selection to select a grain value with space in it?  It seems to work with -G but not -C, i.e. salt -C 'G@lsb_distrib_id:Red Hat Enterprise Linux Server' test.ping
16:12 dps i've tried escaping with \ , encapsulating in extra quotes, etc
16:12 sh123124213 joined #salt
16:13 rihannon dps:  (guessing) have you tried: salt -C 'G@lsb_distrib_id:"Red Hat Enterprise Linux Server"' test.ping
16:13 Brijesh1 joined #salt
16:13 dps rihannon: yes i have i'll try it again though
16:13 dps rihannon:  nope, doesn't work.  i appreciate you getting back to me though.
16:14 XenophonF joined #salt
16:15 fxdgear joined #salt
16:16 heaje @dps: Try putting a "?" in place of spaces
16:16 heaje that's what I do
16:16 heaje so something like this => salt -C 'G@lsb_distrib_id:Red?Hat?Enterprise?Linux?Server' test.ping
16:17 dps @heaje: thank you for responding.  that works, but I think the ? will technically match any character, not a space
16:17 heaje yes
16:17 heaje but in most cases, the ? will work just fine
16:18 dps yeah i agree
16:19 Salander27 joined #salt
16:19 kingscott joined #salt
16:20 nawwmz joined #salt
16:21 nawwmz joined #salt
16:21 kingscott looking to get some help with file.line. I am trying to set a security banner and when i try to use file.line and put - match: "#Banner*" it doesn't match. Is my Regex incorrect, or why isn't it replacing with the - content: "Banner /etc/issue.net"
16:22 wisesp00l joined #salt
16:23 kingscott here is the code: https://gist.github.com/scottking2/6f408d50871ea10222f0ab9a99bb6597
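[Editor's note: one likely issue is that file.line's `match` is a Python regex, not a shell glob — "#Banner*" means "#Banne" plus zero or more "r"s, not "#Banner followed by anything". An anchored pattern is more predictable; a hedged sketch:]

```yaml
sshd-banner:
  file.line:
    - name: /etc/ssh/sshd_config
    - match: '^#?Banner\s.*'      # matches "Banner ..." or "#Banner ..."
    - content: 'Banner /etc/issue.net'
    - mode: replace
```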
16:26 onlyanegg joined #salt
16:27 rihannon dps:  Curiosity got the best of me, the compound routine splits on spaces and does not observe any kind of escaping.  You could try filing an issue.  Otherwise, question mark is going to be your only solution.
16:30 rihannon (at least for version 2015.8.8.2 which is what I'm running ATM)
16:33 abednarik joined #salt
16:34 dps rihannon: yeah, it seemed like it might be a bug to me.  i'll file an issue.
16:35 rihannon Still that way now: https://github.com/saltstack/salt/blob/develop/salt/utils/minions.py#L458
16:36 sh123124213 joined #salt
16:37 N-Mi joined #salt
16:37 N-Mi joined #salt
16:42 garga joined #salt
16:43 garga Hi! do you know if there are plans to provide a repo for FreeBSD 11? https://repo.saltstack.com/freebsd/FreeBSD:11:amd64/All/ is missing py27-salt package
16:44 onlyanegg joined #salt
16:47 Bryson joined #salt
16:52 akhter joined #salt
16:56 akhter joined #salt
16:57 gimpy9238 joined #salt
16:57 abednarik joined #salt
17:00 gimpy9238 I had renamed my top.sls out of the way for a few days; now when I renamed it back to top.sls minions still claim there is no top file, what do?  https://gist.github.com/jwhite530/3d6f476da2a84f72396be6ce82a3af94
17:00 UtahDave joined #salt
17:01 gimpy9238 I ran a saltutil.sync_all on the minions too, no change, still don't see top.sls
17:01 cscf gimpy9238, and your /etc/salt/master points to /srv/salt correctly?
17:02 gimpy9238 cscf: nothing set to force that but it is the default and has beeb working for a year until now
17:04 cyborg-one joined #salt
17:06 cscf gimpy9238, so you run salt '*' state.apply and it says there are no matches?
17:06 cscf It looks fine to me
17:07 cscf you could try restarting salt-master, for fun
17:07 gimpy9238 cscf: same error
17:08 gimpy9238 ... but I can still call states out of that dir just fine (e.g. `salt state.sls states/poop` which is /srv/salt/states/poop.sls)
17:08 mikecmpbll joined #salt
17:08 gimpy9238 same result after restarting slat-master
17:08 gimpy9238 salt-master*
17:09 abednarik joined #salt
17:09 adelcast joined #salt
17:11 akhter joined #salt
17:11 SaltyVagrant joined #salt
17:13 cscf gimpy9238, oh, wait a minute, are you even allowed to have /'s in your states?
17:13 UtahDave gimpy9238: is your top file correct yaml?
17:14 cscf gimpy9238, make a backup copy, then sed -i  s#/#.# /srv/salt/top.sls
17:14 gimpy9238 cscf: yes, I could say states.poop if I wanted and it is the same; it just mean the file poop.sls in the dir states
17:16 gimpy9238 @UtahDave: that sed command didn't do anything but I wiped out the file entirely and still get the same error
17:19 cscf gimpy9238, that sed command should substitute all slashes with dots.  Or do you mean it made no difference to the error?
17:20 gimpy9238 cscf: same error with dots
17:21 gimpy9238 I don't see how this would be a formatting problem anyway; the error says "No Top file or external nodes data matches found"; as in it doesn't even see the file at all
17:21 xbglowx joined #salt
17:21 cscf It is strange
17:21 gimpy9238 otherwise I would expect a rendering/parsing error and that's it, just like salt gives all the other times I screw up a .sls file
17:21 nidr0x joined #salt
17:22 cscf salt-master is running as root and you're sure the file is /srv/salt/top.sls ?
17:22 gimpy9238 yup
17:22 cscf You could try setting the directory in /etc/salt/master to /srv/salt just to make sure.
17:23 gimpy9238 again, this worked for over a year ... I renamed it so it wouldn't apply for a bit then renamed it back
17:23 ivanjaros joined #salt
17:24 gimpy9238 cscf: done, same behaviour
17:25 cscf Very strange.  I'm out of ideas
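For reference, the file_roots setting cscf suggested making explicit looks like this (a sketch of the stock defaults, not taken from the user's actual config):

```yaml
# /etc/salt/master -- make the default explicit so there is no ambiguity
# about where the master looks for top.sls and state files
file_roots:
  base:
    - /srv/salt
```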
17:26 impi joined #salt
17:26 cscf gimpy9238, actually, you could try starting salt-master manually with strace -e open to see if it's reading the top file
17:26 gimpy9238 cscf: I expect it opens a bajillion things
17:26 gimpy9238 ... after about 700 forks too
17:26 cscf quite likely.  Good thing you can pipe to file and less/grep/etc
17:27 cscf true, forks might keep you from seeing
17:27 gimpy9238 what are you trying to see?  with `strace -gkfnirfgnfgbfgb | grep top.sls` all we'll see is *if* it opens, that's it, right?
17:28 xet7 joined #salt
17:29 cscf Yup. Thus we'd know if it's even looking in the right place.  I don't know, just an idea.
17:30 akhter joined #salt
17:30 gimpy9238 cscf: This returned nothing:   strace -f -e open /usr/bin/salt-master 2>&1 | grep top.sls
17:31 cscf oh, perhaps inotify would be better.
17:31 cscf That way it doesn't matter if it forks
17:31 darthzen joined #salt
17:31 gimpy9238 oh wait, here we go - got one useful line: [pid 10632] open("/srv/salt/top.sls", O_RDONLY) = 47
17:32 gimpy9238 so there we go, it sees the file, but because SaltStack is now a gigantic fucking pile of shit, it still doesn't work
17:34 jas02_ joined #salt
17:40 NV joined #salt
17:40 akhter joined #salt
17:40 edrocks joined #salt
17:42 flowstate joined #salt
17:44 abednarik joined #salt
17:46 flowstat_ joined #salt
17:47 akhter joined #salt
17:47 jas02_ joined #salt
17:48 xet7 joined #salt
17:50 Rasathus_ joined #salt
17:50 Edgan joined #salt
17:59 GreatSnoopy joined #salt
17:59 flowstate joined #salt
18:00 s_kunk joined #salt
18:00 s_kunk joined #salt
18:02 mike25de joined #salt
18:05 PerilousApricot joined #salt
18:06 sh123124213 joined #salt
18:06 flowstate joined #salt
18:07 whytewolf gimpy9238: from a minion run salt-call cp.list_master | grep top.sls
18:08 whytewolf also the error you are describing doesn't only happen when the top file can't be found. it also happens when the minion can't find anything in the top file that it thinks belongs to it
18:08 xet7 joined #salt
18:09 whytewolf most often environment errors
18:09 gimpy9238 whytewolf: looking through the code now, I saw that
18:09 gimpy9238 whytewolf: env error as in base vs. prod vs. dev or shell env or something else?
18:10 DammitJim joined #salt
18:10 cliluw joined #salt
18:10 whytewolf gimpy9238: yes. like if you hard set a minion with an environment but only have base in your top.sls
18:11 gimpy9238 whytewolf: I'm only using base on everything - I think I found a line or two in my top.sls that work if I remove them .... but there's nothing special about those lines so I haven't a clue why they are a problem
18:11 whytewolf what are the lines?
18:12 cliluw joined #salt
18:13 gimpy9238 whytewolf: the slurm-p1n01 part of  https://gist.github.com/jwhite530/bf2cefe458c0e447afaec1f060160320
18:14 jas02_ joined #salt
18:14 flowstate joined #salt
18:14 gimpy9238 whytewolf: ... at least I think; I'll test it again in ~an hour when my test minions are done doing their current work so I can run another highstate
18:15 whytewolf humm, sometimes hyphens can be tricky. why is everything else quoted while those are not?
18:16 gimpy9238 whytewolf: don't remember ... I must have had an issue with * not being quoted so I did it for that, but ones like xdmod-p1n01 have been in there unquoted for months and work fine
18:17 gimpy9238 `cat -A top.sls` doesn't show funky chars either
18:17 gimpy9238 (^ meaning I don't see line endings from Winderps or anything odd like that)
18:17 antpa joined #salt
18:18 gimpy9238 whatever, already submitted a bug and I'll continue this over there
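For context, the quoting question above comes down to YAML scalar rules; a minimal top.sls sketch (the state names are assumptions, the actual gist isn't reproduced here):

```yaml
base:
  '*':                 # '*' must be quoted: a bare * is a YAML alias indicator
    - common
  slurm-p1n01:         # hyphens inside a word are safe unquoted;
    - states.slurm     # 'slurm-p1n01' (quoted) parses identically
```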
18:21 du5tball is there a way to have salt check the configuration and tell me where the errors are?
18:21 gimpy9238 du5tball: check what config?
18:22 du5tball gimpy9238: uh. all it should apply
18:22 du5tball gimpy9238: kinda like apachectl chkconfig
18:22 cscf du5tball, you can do a dry-run
18:23 du5tball i have.. sec, lemme paste this
18:23 gimpy9238 ^what they said ... also if you want to check that your state renders I use state.show_sls for that
18:23 du5tball https://ptpb.pw/V1pD (top.sls) https://ptpb.pw/LBhE (rhel.sls)
18:24 du5tball and salt claims "no states found for this minion" (the minion name is trioptimum)
18:25 cscf du5tball, I'm not sure you're allowed to have a state called 'highstate', isn't that taken?
18:26 du5tball cscf: possible. lemme rename that to foobar
18:26 du5tball same result
18:26 jas02_ joined #salt
18:27 jas02__ joined #salt
18:27 du5tball so... what's wrong with those configs?
18:28 cscf du5tball, you renamed it in both top and the filename?
18:28 du5tball yes
18:29 gimpy9238 the file is named foobar.sls and not foobar, correct?
18:29 whytewolf you can have an sls named highstate. no rule against it.
18:29 du5tball gimpy9238: correct
18:29 jas02_ joined #salt
18:30 whytewolf change those to pkg.installed: []
18:30 jas02_ joined #salt
18:30 du5tball ookay now the problem changed. before the foobar.sls was empty. adding htop to it applied that.
18:30 whytewolf also a minion named trioptimum would not match 'centos*'
18:31 cscf whytewolf, it would match '*'
18:31 du5tball whytewolf: true. but shouldn't a group named "centos*", which contains trioptimum, match?
18:31 whytewolf no
18:31 cscf du5tball, "centos*" means any fqdn starting with "centos"
18:31 sgo_ joined #salt
18:31 cscf not a group
18:31 whytewolf since you didn't tell it nodenames
18:32 du5tball that explains a bunch
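cscf's point can be checked with Python's fnmatch, which mirrors the shell-style glob matching Salt's default matcher applies to minion IDs (the centos7-web01 ID below is made up):

```python
from fnmatch import fnmatch

# Salt's default top-file targeting treats the key as a shell-style glob
# matched against the minion ID -- 'centos*' is a pattern, not a group name.
print(fnmatch("trioptimum", "centos*"))     # False: the ID doesn't start with "centos"
print(fnmatch("trioptimum", "*"))           # True: '*' matches every minion
print(fnmatch("centos7-web01", "centos*"))  # True (hypothetical minion ID)
```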
18:33 swa_work joined #salt
18:33 abednarik joined #salt
18:34 murrdoc joined #salt
18:35 jas02_ joined #salt
18:36 du5tball ohhh okay i got the groups to work now
18:37 du5tball thanks :)
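whytewolf's and cscf's fixes above, sketched out (the nodegroup membership and state names are assumptions based on the conversation):

```yaml
# rhel.sls -- a state function with no arguments still needs a value;
# the usual spelling is an empty list:
htop:
  pkg.installed: []

# /etc/salt/master -- an actual group is a nodegroup, defined with a
# list/compound matcher rather than a glob on minion IDs:
nodegroups:
  centos: 'L@trioptimum'

# top.sls -- targeting the nodegroup requires an explicit match type:
base:
  centos:
    - match: nodegroup
    - rhel
```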
18:39 murrdoc whats the recommendation for
18:39 murrdoc [WARNING ] /usr/lib/python2.7/dist-packages/salt/grains/core.py:1493: DeprecationWarning: The "osmajorrelease" will be a type of an integer.
18:47 akhter joined #salt
18:51 akhter joined #salt
18:52 pipps joined #salt
18:56 Trauma joined #salt
19:00 akhter joined #salt
19:02 honestly huh. the different file states inconsistently use "show_diff" and "show_changes" for showing a diff of the file change.
19:05 theblazehen_ joined #salt
19:08 sh123124213 joined #salt
19:08 toanju joined #salt
19:09 pipps joined #salt
19:09 pipps joined #salt
19:10 haam3r joined #salt
19:11 __monty__ left #salt
19:11 jhauser joined #salt
19:21 akhter joined #salt
19:22 Ribdro joined #salt
19:22 keimlink joined #salt
19:26 akhter joined #salt
19:29 UtahDave murrdoc: that's just a notification of a change in the type of that grain
19:30 om joined #salt
19:30 murrdoc yeah i checked the github issues
19:30 murrdoc thanks UtahDave
19:30 UtahDave np.
19:30 UtahDave I find that notification really annoying.
19:31 Gareth ahoy hoy
19:31 murrdoc it shows up in all things, we have a bunch of `| jq .massagestuff` and they are all getting that error too
19:33 fullstop left #salt
19:33 xet7 joined #salt
19:37 UtahDave yo, Gareth!
19:37 netcho joined #salt
19:37 Gareth UtahDave: hey hey :) hows it going?
19:37 murrdoc o/
19:37 Gareth murrdoc: hey :)
19:37 murrdoc hows things
19:37 UtahDave murrdoc: yeah, I wish there was a flag you could set to stop seeing that message. "OK, I saw the warning of the upcoming change. please be quiet now."
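Python's warnings machinery can in principle silence this by message, either in code or via the standard PYTHONWARNINGS environment variable (e.g. PYTHONWARNINGS='ignore::DeprecationWarning'); whether that actually reaches the salt CLI output depends on how salt surfaces the warning, so this is a sketch of the mechanism only:

```python
import warnings

def emit():
    # Stand-in for the warning salt's grains/core.py raises
    warnings.warn('The "osmajorrelease" will be a type of an integer.',
                  DeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    emit()
    print(len(caught))  # 1 -- the warning is recorded

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Filters inserted later take precedence; match on the message prefix
    warnings.filterwarnings("ignore", message=r'The "osmajorrelease"')
    emit()
    print(len(caught))  # 0 -- filtered out
```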
19:38 Gareth going good :) yourself?
19:38 UtahDave Gareth: pretty good!
19:38 murrdoc it's all good UtahDave, we'll have to upgrade 7k servers
19:38 murrdoc do beacons work now
19:38 murrdoc like you know for real
19:38 UtahDave Yes, beacons are awesome.
19:39 UtahDave but like all things you do as a sysadmin, test before you push to production.  :)
19:39 murrdoc :)
19:39 murrdoc sweet
19:40 murrdoc eventual convergence for the win
19:42 whitenoise joined #salt
19:44 renoirb joined #salt
19:47 akhter joined #salt
19:47 RandyT joined #salt
19:49 cyteen joined #salt
19:56 akhter joined #salt
20:00 ekristen joined #salt
20:05 gimpy9238 the top.sls issue I was fighting earlier in here was caused by duplicate targets in top.sls ... see issue 37235
20:07 zmalone joined #salt
20:08 UtahDave gimpy9238: Hm. Salt should handle that situation better.  Remember that these yaml structures become python dictionaries internally, so you can't have two identical keys in a dictionary like that.
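UtahDave's point can be seen with a plain Python dict; the target name is taken from the earlier gist, the state lists are made up:

```python
# A dict with a duplicated key silently keeps only the last value -- the
# same thing happens when YAML with duplicate targets is loaded into a
# dict, so the first 'slurm-p1n01' block in top.sls simply vanishes.
top = {
    "slurm-p1n01": ["states.slurm"],
    "slurm-p1n01": ["states.other"],  # duplicate target overwrites the first
}
print(len(top))             # 1 -- only one entry survives
print(top["slurm-p1n01"])   # ['states.other']
```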
20:10 netcho joined #salt
20:11 jwang joined #salt
20:12 jwang_ joined #salt
20:12 sgo_ joined #salt
20:13 djgerm joined #salt
20:18 pipps joined #salt
20:24 Rasathus_ joined #salt
20:24 renoirb joined #salt
20:30 amontalb1n joined #salt
20:38 akhter joined #salt
20:39 renoirb joined #salt
20:44 XenophonF joined #salt
20:52 theblazehen_ joined #salt
20:53 jas02_ joined #salt
20:54 netcho joined #salt
20:58 pipps joined #salt
21:03 jas02_ joined #salt
21:03 xet7 joined #salt
21:03 pipps_ joined #salt
21:03 gimpy9238 @UtahDave: I'm leaving the case open in hopes that top.sls parsing can be improved
21:04 jas02__ joined #salt
21:04 UtahDave gimpy9238: great.  Thanks for creating the issue.
21:13 cyborg-one joined #salt
21:15 GreatSnoopy joined #salt
21:16 jas02_ joined #salt
21:22 ninjada joined #salt
21:30 jenastar joined #salt
21:33 Aleks3Y joined #salt
21:33 abednarik joined #salt
21:35 nixjdm joined #salt
21:37 pipps joined #salt
21:47 subsignal joined #salt
21:49 pipps joined #salt
21:51 anotherZero joined #salt
21:55 rem5_ joined #salt
21:55 Rasathus joined #salt
21:57 ninjada joined #salt
21:58 scoates I recently upgraded one of my salt masters to 2016.3.3. It's been eating 100% CPU on this box since then. If I restart my salt-master process, it calms down for a few mins, and then goes back to 100%. Is there an easy way for me to see what my salt-master is *doing*? It's logging a lot… lot.
21:59 pipps joined #salt
22:00 shadoxx scoates: logs should be the first place to look, which it sounds like you did. can you create a pastebin?
22:00 scoates not really… it's outputting a lot of my pillar. )-:
22:01 scoates makes me think I might have some rogue prints or debugs somewhere. will check for those. just was hoping for something like "eventlisten.py" that shows more specifically what the master is doing. Maybe eventlisten is a good enough place to start…
22:02 UtahDave scoates: try stopping the daemon and then starting the master in debug mode in the cli.      salt-master -l debug
22:02 scoates yeah… so very much output when I do that. /-:
22:03 scoates can you tell when I restarted it? (-: https://www.dropbox.com/s/iytrvy1qqckspv6/Screenshot%202016-10-25%2018.03.32.png?raw=1
22:04 scoates running from cli now. hoping the logs show something useful that I actually see.
22:04 pipps joined #salt
22:08 ninjada joined #salt
22:09 ninjada joined #salt
22:21 UtahDave scoates: I haven't run into that problem.  Are you using gitfs? What options have you changed in your /etc/salt/master  config file?
22:26 scoates no gitfs
22:27 scoates we have a bunch of reactor scripts. not sure those are it, though.
22:28 scoates https://paste.website/p/f758db94-5711-444a-bc84-71ac3d8f76b4.txt
22:28 scoates (ports changed) this has been our master config for quite a while
22:30 flowstate joined #salt
22:31 scoates the journal is just a constant shower of pillar and/or state yaml
22:34 UtahDave how many minions are attached to the master?
22:36 scoates 293 lines in `salt-key`; not all are connected, though (many of them are VMs that are offline). can check for connected if that's not specific enough.
22:39 UtahDave wait, it could be your reactor...
22:39 UtahDave On one of those you're matching on   'salt/auth'     Aren't there a ton of those events coming through all the time?
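A sketch of the kind of reactor config UtahDave is pointing at (paths and tags are assumptions, not scoates' actual config):

```yaml
# /etc/salt/master -- every event matching a reactor tag spawns a render
# and run, so a broad match like salt/auth fires on every minion auth
# attempt, which adds up quickly with ~300 minions:
reactor:
  - 'salt/auth':
      - /srv/reactor/auth.sls
  # narrower tags fire far less often, e.g. only when a minion starts:
  - 'salt/minion/*/start':
      - /srv/reactor/start.sls
```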
22:43 jhauser joined #salt
22:48 jhauser joined #salt
22:56 ninjada joined #salt
22:58 pipps joined #salt
23:01 ninjada joined #salt
23:08 keltim_ joined #salt
23:08 keltim joined #salt
23:08 renoirb joined #salt
23:12 pipps joined #salt
23:17 keimlink_ joined #salt
23:21 flowstate joined #salt
23:23 jas02_ joined #salt
23:24 johnkeates joined #salt
23:26 Rasathus_ joined #salt
23:30 Rasathus joined #salt
23:30 scoates UtahDave: could be that, I suppose. It was fine before last week. I'll check it out. Thanks.
23:31 UtahDave Ok.  Let me know how it shakes out.
23:32 ninjada_ joined #salt
23:32 Rasathus_ joined #salt
23:34 edrocks joined #salt
23:37 Rasathus joined #salt
23:40 Rasathus_ joined #salt
23:41 Rasathus_ joined #salt
23:43 ninjada joined #salt
23:44 ninjada joined #salt
23:44 flowstate joined #salt
23:47 UtahDave left #salt
23:50 om2 joined #salt
23:52 om joined #salt
23:55 akhter joined #salt
23:57 om joined #salt
23:57 om joined #salt
23:58 om joined #salt
