
IRC log for #salt, 2016-12-08


All times shown according to UTC.

Time Nick Message
00:00 ecdhe thanks for your suggestion whytewolf... I'm currently waiting for the machine to boot with log_level_logfile: all
00:00 ecdhe thanks for your input, I was out of things to try
00:00 dxiri joined #salt
00:01 whytewolf I'm not sure the advanced logging will help, but it might get you on the right path toward what is causing it
00:01 ProT-0-TypE joined #salt
00:03 lionel joined #salt
00:03 ecdhe Well this could very well be a vagrant issue... not sure why it would suddenly start happening on THIS vm and not any of the others I've been using this year, but whatever.
00:03 nmccollum Thanks for the help.  I'll probably be back tomorrow.
00:04 ecdhe ttfn
00:04 whytewolf have a good one :)
00:09 justanotheruser joined #salt
00:11 tobiasBora In this page : https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
00:11 tobiasBora I can read that there is a user that runs the minion (which is apparently not root)
00:11 tobiasBora but in my /etc/passwd, I cannot see such minion...
00:12 tobiasBora *such user
00:12 tobiasBora So what is the default user that runs the minion ?
00:15 Bryson joined #salt
00:15 whytewolf tobiasBora: um the minion typically runs as root
00:16 tobiasBora ok thank you
00:25 CrummyGummy joined #salt
00:26 tobiasBora I have a question about security
00:26 tobiasBora I'm doing a git deployment through SaltStack
00:27 tobiasBora using this page
00:27 tobiasBora https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
00:27 tobiasBora However, I'm seeing that if a user can access my computer while my ssh key is still loaded... well, they can completely break my system and gain root power, even if the user that pushes to the git repository doesn't have strong privileges.
00:29 tobiasBora are there more secure ways to proceed ? For example in huge companies, if I can get the computer of the right person while their computer has the ssh key loaded, can I break everything ?
00:32 johnkeates joined #salt
00:34 whytewolf huh. I'm sorry I'm not sure what you mean
00:34 tobiasBora Well
00:36 tobiasBora In my configuration (and for me any git configuration) everyone who can push to the salt server can push malicious code to all the servers on the network.
00:36 pipps joined #salt
00:36 whytewolf that is a given.... thats why you lock down your repos
00:36 tobiasBora So if someone can get my computer while the secure key is still loaded... they can break everything
00:36 drew__ joined #salt
00:36 bltmiller tobiasBora: of course there are implications for whatever configuration you settle on. I use GitFS backend for my Salt config with the assumption that my git repos have the appropriate permissions
00:37 bltmiller same applies for SSH keys and access
00:38 drew__ does anyone know how syndic_wait works? It is really slow.  I have 1 syndic master node connecting to a master node and it takes 5+ seconds for a simple salt command to finish (ie. test.ping).
00:39 whytewolf your security question doesn't make sense because it is a security issue beyond what can be locked down. yes, if someone gets on your computer while you have a tunnel open they can push something.... but they have to be able to get onto your computer into the same session. [if you use linux, into the same user-defined session]
00:39 drew__ I set syndic_wait to 1 second on my syndic node and it's still 5 seconds
00:41 whytewolf sorry drew__ as of yet i have no experience with syndic. I do hope to be changing that soon though
00:42 tobiasBora whytewolf: bltmiller : Ok thank you, it's quite normal there is no magic...
00:43 bltmiller Salt is pretty flexible. good luck :)
00:45 tobiasBora By the way, is pygit2 stable enough for production ?
00:47 bltmiller tobiasBora: I'm using it in production, no complaints so far. on 0.21.4
00:48 bltmiller (on RHEL 7)
00:48 tobiasBora bltmiller: Ok thank you. And is there any reason it's not packaged for Debian ?
00:48 jas02 joined #salt
00:49 bltmiller no idea. you'd have to ask the Salt maintainers/developers about that. I've used both gitpython and pygit2. can't remember why I switched to pygit2 exactly, but the switch was pretty painless
00:50 sh123124213 joined #salt
00:50 whytewolf iirc there isn't a package for centos either... it is a pip install
00:50 bltmiller that sounds familiar
00:51 whytewolf https://pypi.python.org/pypi/pygit2 and it does pass all of its tests
00:53 tobiasBora ok thank you
00:56 nickabbey joined #salt
01:04 SaucyElf_ joined #salt
01:10 jas02 joined #salt
01:15 Derailed joined #salt
01:28 skeezix-hf joined #salt
01:30 tobiasBora I've a question : I'm using this config for gitfs
01:30 tobiasBora http://paste.debian.net/901162
01:31 tobiasBora and I just saw this : "When using the gitfs backend, branches, and tags will be mapped to environments using the branch/tag name as an identifier."
01:31 tobiasBora However, I don't understand where the mountpoint is chosen...
01:33 Bryson joined #salt
01:38 tobiasBora And what is the right way to handle the top file ? Should I put it there by hand ? Can I put it in my git repo ?
01:42 tobiasBora Should I put something like this in my top file :
01:42 tobiasBora prod:
01:42 tobiasBora 'prod-*':
01:42 tobiasBora - git-prod
01:42 tobiasBora ?
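
For reference, a fuller top file along those lines might look roughly like this (with gitfs, each branch name becomes an environment; the branch/environment names and the state names here are illustrative):

    base:
      '*':
        - common
    prod:
      'prod-*':
        - git-prod
    dev:
      'dev-*':
        - git-dev
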
02:03 sh123124213 joined #salt
02:09 deadbeefcafe joined #salt
02:12 catpigger joined #salt
02:13 Nahual joined #salt
02:16 juanito joined #salt
02:20 tobiasBora I have a strange bug : if I run the bootstrap code twice, I get this error
02:20 tobiasBora $ sudo systemctl restart salt-client
02:20 tobiasBora Failed to restart salt-client.service: Unit salt-client.service failed to load: No such file or directory.
02:20 tobiasBora (no problem with the server)
02:21 tobiasBora Hum...
02:22 tobiasBora After a reboot the master doesn't want to start
02:24 viq joined #salt
02:25 krymzon joined #salt
02:28 evle joined #salt
02:28 pipps joined #salt
02:38 hemebond What is salt-client?
02:48 ilbot3 joined #salt
02:48 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.4, 2016.11.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ (please don't multiline paste into channel) <+> See also: #salt-devel, #salt-offtopic <+> Ask with patience as we are volunteers and may not have immediate answers
02:49 jas02 joined #salt
03:12 MeltedLux joined #salt
03:12 devster31 joined #salt
03:12 KingJ joined #salt
03:13 saltsa joined #salt
03:13 systeem joined #salt
03:14 al joined #salt
03:14 godlike joined #salt
03:14 godlike joined #salt
03:19 scsinutz joined #salt
03:27 jas02 joined #salt
03:27 krymzon joined #salt
03:38 bastiandg joined #salt
03:40 scsinutz joined #salt
03:46 debian112 joined #salt
03:56 rawzone joined #salt
04:02 XenophonF tobiasBora: you can ignore the mountpoint option with gitfs
04:03 XenophonF and salt-master will automatically map the root of the git repo to the root of the salt:// virtual file system
04:03 XenophonF it will also automatically map the master branch to Salt's "base" environment
04:03 voxpop joined #salt
04:04 tercenya joined #salt
04:05 XenophonF if you put top.sls into the root of your git repo, salt will find that and use it for targeting the same as if you created top.sls and put it in /srv/salt (or /usr/local/etc/salt/states if you're running FreeBSD like me)
04:05 sh123124213 joined #salt
04:06 XenophonF here's how i have gitfs configured: https://github.com/irtnog/salt-pillar-example/blob/master/salt/example/com/init.sls#L254
04:07 XenophonF and here's what my salt-states repository looks like: https://github.com/irtnog/salt-states
04:07 XenophonF i only have a top.sls file in the master branch
04:08 XenophonF i think having top.sls files in multiple branches gets too confusing
04:08 XenophonF i also set up the other branches detached from the master branch
04:08 XenophonF b/c conceptually, my dev/test/stage/prod branches are wholly separate from the master branch
04:09 LeProvokateur joined #salt
04:10 XenophonF here's how I initialized my git repo: https://gist.github.com/xenophonf/95357d87b6e0b5e2b0e6
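
A minimal sketch of the master-side gitfs configuration that goes with a repo like that (the remote URL is the salt-states repo linked above; gitfs_provider only matters if you want to force pygit2 over gitpython):

    fileserver_backend:
      - git

    gitfs_provider: pygit2

    gitfs_remotes:
      - https://github.com/irtnog/salt-states.git
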
04:21 rdas joined #salt
04:29 pipps joined #salt
04:32 flight884 joined #salt
04:36 MTecknology Corey: Turns out, high west makes a bad manhattan. It doesn't belong in a mixed drink. :(
04:48 scsinutz joined #salt
04:55 jas02 joined #salt
04:57 faizy joined #salt
04:59 skullone is there a good way to omit a host from getting a state applied to it? i have a '*' to apply some base states, but specifically don't want one node to get them
05:00 hemebond skullone: https://docs.saltstack.com/en/latest/topics/targeting/compound.html
05:08 skullone perfect, ty
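
For instance, the compound matcher lets the top file keep a near-global target while excluding one minion (the minion name and state name here are illustrative):

    base:
      '* and not web99.example.com':
        - match: compound
        - base

    # the same matcher works on the CLI:
    # salt -C '* and not web99.example.com' state.apply
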
05:25 onlyanegg joined #salt
05:27 jas02 joined #salt
05:28 faizy joined #salt
05:28 * robawt blames Heartsbane
05:29 krymzon joined #salt
05:32 pipps joined #salt
05:39 SWA joined #salt
05:40 pipps joined #salt
05:49 XenophonF skullone: alternatively, this kludge works, too - https://github.com/irtnog/salt-states/blob/master/top.sls#L191
05:49 onlyanegg joined #salt
05:55 skullone interesting
06:03 pipps joined #salt
06:04 aarontc joined #salt
06:05 ivanjaros joined #salt
06:06 sh123124213 joined #salt
06:09 alex-zel joined #salt
06:10 alex-zel Hello, I've read in the release notes about Snapper, but I cannot find any documentation on this on the saltstack site; it's also missing from the list of execution modules
06:10 cyborg-one joined #salt
06:10 alex-zel but the module is present in the release
06:11 justan0theruser joined #salt
06:20 preludedrew joined #salt
06:24 hemebond alex-zel: There is this: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.snapper.html
06:25 alex-zel thank you
06:25 hemebond I can't find any execution module. Perhaps there isn't one.
06:25 alex-zel I was worried i couldn't find anything about it
06:25 alex-zel there is one in the source code, and it is available to run on minion
06:25 bocaneri joined #salt
06:28 hemebond No documentation so probably still under development.
06:34 buu Question
06:34 buu Where is the documentation page that contains cmd.run ?
06:35 buu wait
06:35 buu is cmdmod aliased to cmd?
06:35 hemebond ya
06:36 buu ...
06:36 buu It couldn't bother to mention that anywhere?
06:36 hemebond hmm. I just Google for cmd.run or something.
06:36 buu =/
06:37 buu =\
06:37 buu https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cmdmod.html
06:37 hemebond !salt.modules.cmd
06:37 hemebond !salt.modules.cmdmod
06:37 hemebond I can never remember how that works.
06:38 buu Neat
06:39 buu My horrible hack is well on its way to completion
06:43 hemebond Horrible hack?
06:45 buu I'm now running a minion on the salt master so it can invoke rsync to transfer files to the minions
06:46 buu hm
06:53 sh123124213 joined #salt
06:57 jas02 joined #salt
07:12 faizy joined #salt
07:17 raspado joined #salt
07:18 pipps joined #salt
07:19 pipps99 joined #salt
07:20 pipps99 joined #salt
07:23 gladia2r joined #salt
07:34 jas02 joined #salt
07:35 ProT-0-TypE joined #salt
07:36 ivanjaros3916 joined #salt
07:43 jas02 joined #salt
07:43 yuhl_ joined #salt
07:48 mintmint joined #salt
07:53 iggy !salt modules.cmd.run
07:53 saltstackbot https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cmdmod.html#salt.modules.cmdmod.run
07:54 pipps joined #salt
07:54 hemebond iggy: Thanks :-D
07:54 pipps99 joined #salt
07:58 jas02_ joined #salt
08:00 madboxs_ joined #salt
08:08 _weiwae joined #salt
08:25 felskrone joined #salt
08:26 toanju joined #salt
08:31 MTecknology !modules.cmd
08:32 MTecknology awe.. :(
08:32 MTecknology !salt modules.cmd
08:32 saltstackbot https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cmdmod.html
08:32 MTecknology that works too :)
08:33 huleboer I get "Specified ext_pillar interface key is unavailable"
08:39 sh123124213 joined #salt
08:44 huleboer when I'm using salt.pillar.cobbler.. anyone know what the key should be?
08:47 samodid joined #salt
08:55 madboxs joined #salt
08:57 pipps joined #salt
08:58 fracklen joined #salt
08:59 mikecmpbll joined #salt
08:59 jas02_ joined #salt
09:01 Rumbles joined #salt
09:12 samodid joined #salt
09:15 RandyT_ joined #salt
09:19 mavhq joined #salt
09:30 keimlink joined #salt
09:33 mpanetta_ joined #salt
09:34 N-Mi_ joined #salt
09:35 ProT-0-TypE joined #salt
09:42 amcorreia joined #salt
09:42 fracklen joined #salt
09:44 pipps joined #salt
09:46 yuhl_ joined #salt
09:50 tobiasBora XenophonF: Great, thank you, it's very instructive ! But isn't it too annoying to always switch between the prod/... branch and the master branch each time you want to add a service ?
09:50 dxiri joined #salt
09:52 fracklen_ joined #salt
09:54 yuhl_ joined #salt
09:54 Hybrid1 joined #salt
09:56 s_kunk joined #salt
09:57 tobiasBora XenophonF: And by the way in your bootstrap script, I don't see anything related to adding gitfs_remotes to the master config file, so I don't understand how you can use it...
10:00 jas02_ joined #salt
10:05 madboxs joined #salt
10:13 pysen joined #salt
10:19 ronnix joined #salt
10:19 fracklen joined #salt
10:22 AndreasLutro is there a way to make pkg.installed upgrade a package if some conditions apply (like version number currently installed)? other than using {% if salt['pkg.version'](...) < '1.2.3' %}
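
There doesn't appear to be a built-in conditional on pkg.installed for this, so the jinja guard mentioned in the question is the usual workaround; a sketch (package name and version are illustrative, and pkg.version_cmp is generally safer than a plain string comparison where it's available):

    {% if salt['pkg.version_cmp'](salt['pkg.version']('nginx'), '1.10.0') < 0 %}
    upgrade-nginx:
      pkg.latest:
        - name: nginx
    {% endif %}
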
10:27 pipps joined #salt
10:32 pysen Hi, does anyone use salt with os x? Trying to use the macpackage.installed state with a dmg file and keep getting this error: installer: Error the package path specified was invalid: '/tmp/dmg-ItcWG8/*.pkg'.
10:33 krymzon joined #salt
10:33 pysen Salt tries to mount the dmg at that location but i am unable to read the contents of the dmg file. If mounted manually the dmg works
10:39 oida joined #salt
10:40 fracklen joined #salt
10:41 sh123124213 joined #salt
10:43 pipps joined #salt
10:48 Rumbles joined #salt
10:58 Queen16 joined #salt
11:01 jas02_ joined #salt
11:03 pipps99 joined #salt
11:03 patrek joined #salt
11:05 Bico_Fino joined #salt
11:08 o1e9 joined #salt
11:33 wangofett joined #salt
11:35 jas02 joined #salt
11:37 stooj joined #salt
11:46 tercenya_ joined #salt
11:48 jas02 joined #salt
11:51 XenophonF tobiasBora: that script is just how i created the github.com/irtnog/salt-states repo
11:52 tobiasBora XenophonF: Hum ok, I thought it was the script to create the repo and bootstrap it on a new node.
11:52 XenophonF my recommendation is that you set up separate dev/test/stage/prod (or whatever---I happen to be using DTAP phases as environments) from your master branch
11:52 XenophonF and just use the master branch for targeting
11:53 tobiasBora XenophonF: And you do "git checkout master" each time you want to enable a new sls right ?
11:53 XenophonF as for branching and merging operations with git, i dunno, it feels pretty natural for me to do a bunch of state authoring in the dev branch, and then merge those changes into test, and then stage, and then prod
11:53 XenophonF no - i actually have two clones of the same repo for convenience's sake
11:53 tobiasBora Hum ok I see.
11:53 XenophonF so /code/salt-states usually contains the dev branch (except when i'm merging)
11:54 XenophonF and /code/salt-states-base contains the master branch
11:54 tobiasBora Ok good to know !
11:54 XenophonF you can clone the same repo as many times as you want, right?
11:54 XenophonF as for bootstrapping a Salt master, I wrote a script to do that, too
11:55 Reverend XenophonF - with git? yes.
11:55 XenophonF bear in mind that i wrote it for AWS
11:55 Reverend the only time it matters is when you're pushing to a repo, and have to rebase conflicts.
11:55 tobiasBora It's getting a bit clearer in my mind now.
11:56 XenophonF https://gist.github.com/xenophonf/d8da7f47ea29d9ad46e7
11:56 XenophonF i don't rebase
11:56 XenophonF only merge
11:56 Reverend XenophonF - was it you I was talking to Re. letsencrypt?
11:56 XenophonF yes
11:56 XenophonF brb gotta make a milk run (literally)
11:56 Reverend okay - i finished that today. so I'll get something on git ASP for you
11:56 XenophonF awesome thanks!
11:56 tobiasBora Great, your script looks similar to mine, I'm happy to see I didn't do complete bullshit ^^
11:56 Reverend np
11:57 tobiasBora By the way, is there a way to debug nicely ? For example I would love to be able:
11:57 tobiasBora 1) To see on the master the "unfolded" git checkout (I think it clones it into a folder, but which one?)
11:57 Bico_Fino joined #salt
11:58 tobiasBora 2) The list of sls states available, and for which env
11:58 tobiasBora is it possible ?
11:58 sebastian-w joined #salt
11:59 Reverend tobiasBora: how do you mean 'unfold' ?
11:59 Reverend the git logs?
11:59 _Cyclone_ joined #salt
12:00 tobiasBora Reverend: No, the local git repo to minion
12:00 Reverend are you using gitfs?
12:00 tobiasBora I think it should run something like "git pull <repo>". Where does it do that ?
12:00 tobiasBora Reverend: yes
12:01 Reverend im entirely unsure how gitfs works... but if it didn't give you logs, I'd be very surprised.
12:01 Reverend you may need to write a postreceive to check them out
12:01 Reverend do a git reset --hard and checkout the latest commit
12:01 jas02_ joined #salt
12:03 tobiasBora Reverend: what do you mean ? I already have a post-receive which is supposed to call the gitfs reactor update process
12:04 Reverend does reactor get the event?
12:05 tobiasBora Reverend: Don't know ^^
12:05 Reverend `salt-run state.event pretty=True`
12:05 Reverend watch that on master, see if you get any events come from your minion to trigger the reactor
12:06 Reverend what does your reactor actually do? git push or something?
12:08 tobiasBora Reverend: I push from my computer to the local git repo, then it should call post-receive which is supposed to call:
12:08 tobiasBora /bin/salt-call event.fire_master update salt/fileserver/gitfs/update
12:08 keimlink joined #salt
12:08 Reverend oh okay.
12:09 Reverend that seems like a very convoluted way to get a git repo to automatically push out changes :P
12:09 Reverend hahah
12:09 tobiasBora what do you mean by convoluted ?
12:09 Reverend oh i see, so your postreceive on the update pushes the changes out to the minion via reactor?
12:09 tobiasBora Reverend: It should yes
12:10 Reverend so, it's pc -> local -> postreceive -> reactor -> state.apply to minions
12:10 tobiasBora exactly
12:10 tobiasBora I do nothing more than the thing explained here : https://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
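
The relevant pieces from that tutorial are roughly: a post-receive hook on the repo that fires an event, a reactor mapping in the master config, and a reactor SLS that runs the fileserver update runner:

    # /etc/salt/master
    reactor:
      - 'salt/fileserver/gitfs/update':
        - /srv/reactor/update_fileserver.sls

    # /srv/reactor/update_fileserver.sls
    update_fileserver:
      runner.fileserver.update
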
12:10 Reverend and at which step is it failing?
12:10 Reverend reactor, postreceive, state.apply?
12:11 tobiasBora it's hard to know, that's why I would like to know where I could have the local repo that is maintained on the minion
12:11 jas02 joined #salt
12:11 Reverend okay - well my first step would be to add something into your postreceive to echo something into a file
12:11 tobiasBora Ahah it's kind of tricky but why not ^^
12:12 Reverend if that appears, great. your post receive is running. Next step - push a commit - and watch the event logs for a reactor event.
12:12 tobiasBora hum
12:12 Reverend not really, it's just bash... just do "echo $(date) > /var/log/something.log"
12:14 tobiasBora The thing is that I don't see anything in the reactor event
12:15 Reverend did your postreceive run?
12:15 tobiasBora it doesn't seem so
12:15 tobiasBora looks strange
12:16 Reverend fix your post receive first then :P
12:16 jas02 joined #salt
12:17 tobiasBora I think I get it
12:17 tobiasBora I forgot the chmod +x
12:18 Reverend hahaha
12:18 Reverend nice one :)
12:18 Reverend okay - so test your post-receive again with another echo > something
12:19 tobiasBora hum we progress
12:20 Reverend good! did your echo work?
12:20 tobiasBora now I have the log file created
12:20 Reverend sick
12:20 tobiasBora but sudo salt-run state.event pretty=True
12:20 tobiasBora doesn't give me any output
12:20 tobiasBora *but*
12:20 tobiasBora in the git-push window, I got
12:20 tobiasBora remote: [WARNING ] Failed to open log file, do you have permission to write to /var/log/salt/minion?
12:20 tobiasBora remote: [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
12:20 jas02 joined #salt
12:20 tobiasBora remote: Could not access /etc/salt/pki/minion. Try running as user root.
12:21 tobiasBora The thing is that I added in visudo the line
12:21 Reverend oh. i wonder if you're gonna need to sudo your event?
12:21 Reverend looks like it's tryiung to write to the log without using root
12:21 Reverend :S
12:21 Reverend BBI5 - going for a smoke.
12:22 tobiasBora isn't :
12:22 tobiasBora Cmnd_Alias SALT_GIT_HOOK = /bin/salt-call event.fire_master update salt/fileserver/gitfs/update
12:22 tobiasBora Defaults!SALT_GIT_HOOK !requiretty
12:22 tobiasBora ALL ALL=(root) NOPASSWD: SALT_GIT_HOOK
12:22 tobiasBora supposed to avoid this problem ?
12:24 stooj joined #salt
12:34 tobiasBora I got it
12:34 Rumbles joined #salt
12:34 krymzon joined #salt
12:34 tobiasBora it's not /bin/salt-call but /usr/bin/salt-call on debian.
12:35 tobiasBora Maybe an update of the wiki would be interesting ?
12:36 Reverend nice one did :)
12:36 Reverend dude*
12:36 Reverend is it working now?
12:38 tobiasBora Grr
12:38 jas02 joined #salt
12:38 tobiasBora I changed it, but same problem
12:38 aidin joined #salt
12:41 tobiasBora Here is my end of visudo file:
12:41 tobiasBora Cmnd_Alias SALT_GIT_HOOK = /usr/bin/salt-call event.fire_master update salt/fileserver/gitfs/update
12:41 tobiasBora Defaults!SALT_GIT_HOOK !requiretty
12:41 tobiasBora ALL ALL=(root) NOPASSWD: SALT_GIT_HOOK
12:42 tobiasBora When I run it from ssh :
12:42 tobiasBora $ /usr/bin/salt-call event.fire_master update salt/fileserver/gitfs/update
12:42 tobiasBora [WARNING ] Failed to open log file, do you have permission to write to /var/log/salt/minion?
12:42 tobiasBora [WARNING ] Although 'dmidecode' was found in path, the current user cannot execute it. Grains output might not be accurate.
12:42 tobiasBora Could not access /etc/salt/pki/minion. Try running as user root.
12:42 Shirkdog joined #salt
12:42 Shirkdog joined #salt
12:42 tobiasBora hum
12:42 sh123124213 joined #salt
12:42 tobiasBora maybe I should run sudo...
12:43 tobiasBora exactly !
12:47 tobiasBora Great, it seems to work !
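
So the working hook ends up being essentially the tutorial's one-liner, with the full Debian path and run through sudo as found above:

    #!/bin/sh
    # .git/hooks/post-receive on the repo the master pulls from
    sudo /usr/bin/salt-call event.fire_master update salt/fileserver/gitfs/update
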
12:47 tobiasBora Only one problem : the command state.event doesn't give me the path of the git repo.
12:53 tobiasBora Now, to continue my debugging, is it possible to get a list of all available sls files ?
12:53 tobiasBora And how could I debug an error like
12:53 tobiasBora Template was specified incorrectly: False
12:55 Reverend potensh. might be able to do it with something.list
12:56 Reverend maybe
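
A few commands that can help with that kind of debugging (these should exist in Salt of this era, but treat them as suggestions to verify):

    salt-run fileserver.envs                     # environments the master's fileserver exposes
    salt-run fileserver.file_list saltenv=prod   # every file visible in a given environment
    salt '*' state.show_top                      # what the top file resolves to per minion
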
13:00 pipps joined #salt
13:03 jas02_ joined #salt
13:03 amontalban joined #salt
13:04 numkem joined #salt
13:08 DEger joined #salt
13:10 netcho joined #salt
13:17 dyasny joined #salt
13:20 raspado joined #salt
13:21 hlub is it so that custom states and execution modules must always be placed relative to a fileroots directory? so they cannot be used via gitfs?
13:23 Neighbour you can always make a symlink to somewhere in your gitfs
13:27 hlub I can't make a symlink pointing to multiple locations
13:31 hlub and I need to configure fileroot just to serve a couple of symlinks, urgh.
13:37 tobiasBora Reverend: Everything is in the *something* ^^
13:44 sh123124213 joined #salt
13:51 irctc386 joined #salt
13:51 irctc386 How do I make sure a returner is running continuously to send latest data to salt master ?
13:52 AndreasLutro a returner doesn't run continuously, it just runs after a salt job finishes
13:55 netcho is it possible to use instance-role-credentials for boto stuff?
13:57 netcho works for salt-cloud but can it be used for creating other AWS resources?
14:03 jas02_ joined #salt
14:03 tobiasBora Grrr
14:04 tobiasBora there is something really annoying I do not understand : sometimes I cannot restart the salt server/salt minion,
14:04 tobiasBora sudo systemctl start salt-server
14:04 tobiasBora Failed to start salt-server.service: Unit salt-server.service failed to load: No such file or directory.
14:04 Rumbles joined #salt
14:04 netcho systemd :D
14:04 tobiasBora and the only way I can set it back is to uninstall, purge, and install back...
14:05 pipps joined #salt
14:06 legreffier joined #salt
14:08 Brew joined #salt
14:08 tobiasBora Does anyone know how that can happen ?
14:10 tkharju joined #salt
14:11 _JZ_ joined #salt
14:11 AndreasLutro check the logs
14:11 AndreasLutro also what on earth is salt-server
14:11 AndreasLutro how did you install salt on the system?
14:12 tobiasBora I'm soo stupid
14:12 tobiasBora I always type salt-server instead of salt-master, my bad
14:12 tobiasBora Thank you
14:14 racooper joined #salt
14:24 atoy3731 joined #salt
14:28 Reverend tobiasBora - fyi , systemd will respond with 'no such file/dir' if that service doesn't exist.
14:28 Reverend that's actually saved me a few times from sledgehammering my servers
14:28 tobiasBora yes indeed. But I was sure it was the right command, so I thought I had done something wrong and deleted the files...
14:28 Reverend :D:D
14:28 Reverend is your gitfs working btw?
14:29 Reverend i didn't check back in after lunch
14:31 tobiasBora Well the sync seems to be done, but I don't know why I can't manage to apply the states from it...
14:31 Reverend :(
14:31 Reverend GG so far then
14:36 krymzon joined #salt
14:37 atoy3731 Anyone know how to define minion-match wildcards for an environment and apply states ONLY to minions who match those wildcards? Here's my master, dev/top.sls, prod/top.sls, and error..
14:37 atoy3731 https://gist.github.com/anonymous/1534dcdc49b7d29166d50ee2807a2dc1
14:38 atoy3731 I'd think doing salt '*' state.apply saltenv=dev would ONLY apply to wildcard matches for 'dev*' and 'service-dev*', but it is still trying to apply state to the 'prod*' minions.
14:39 Reverend your selector is still '*' meaning 'everything'
14:40 atoy3731 but does that mean the selector still needs to define which minions no matter what?
14:40 Reverend AFAIK, yes. :S
14:40 Reverend but check withs oemeone else here
14:40 atoy3731 Aka, no way for me to say "based on what I defined in my environment's configs, just update those.."
14:40 atoy3731 Drat, alright.. Thanks.
14:40 Reverend as far as I recall, environments are for changing how sls's react in certain situations
14:41 Reverend I use nodegroups here, and define them all by subnets, so I have no experience with it. but from what I gather, using '*' will apply to everything
14:41 nickabbey joined #salt
14:41 Reverend as i said, check with someone else here who knows a bit more than me, incase I'm chatting shit
14:41 atoy3731 lol, no worries. Any insight is good.
14:41 Reverend I'd look into nodegroups.
14:41 Reverend that might help
14:42 edrocks joined #salt
14:42 atoy3731 Do nodegroups also accept wildcards?
14:42 atoy3731 So i could define a prod nodegroup for 'prod*, service-prod*'?
14:43 atoy3731 That might be the ticket if so.
14:43 Reverend i dunno, let me try for ya
14:44 Reverend [root@ip-10-200-0-249 ~]# salt -N s\* test.ping
14:44 Reverend Node group s* unavailable in /etc/salt/master
14:44 Reverend nope
14:44 atoy3731 Drat.
14:44 Reverend why do you need to?
14:44 pipps joined #salt
14:44 Reverend you got quite a few envs?
14:45 Reverend there -must- be a way to do what you want. There's only been like 2 instances where there hasn't been a way to work something from my side. so even if it's not with saltenv, im sure it's possible.
14:46 atoy3731 Yea, we have 4 environments.. All with potentially different versions of the code at various stages of development.
14:46 Reverend can you do it with your top.sls ?
14:46 Reverend and just deploy to all servers every time?
14:47 atoy3731 Well, the customer probably doesn't want us overwriting configs to production when we're making a change on development.
14:47 atoy3731 My only concern with that really.
14:47 Reverend https://docs.saltstack.com/en/latest/ref/states/top.html#choosing-an-environment-to-target <-- see the bottom of that paragraph. that might help.
14:48 Reverend and the 'multiple environments' section.
14:48 atoy3731 Yea, that's what I tried to follow while setting this up.. But the "salt '*' state.highstate saltenv=prod" is still applying to all minions.
14:48 misconfig joined #salt
14:48 Reverend yeah
14:49 Reverend the docs _do_ say 'apply to all minions. ' :P haha
14:49 atoy3731 Haha, yea. Might be easier to write a wrapper script for it then.
14:49 Reverend are your servers in different subnets?
14:49 atoy3731 They are.
14:50 Reverend if you've only got 4 envs, I'd use nodegroups
14:50 atoy3731 Yea, I think that's still a viable option.
14:50 Reverend man, even I have more than 4 clusters in my stack.
14:50 Reverend but... each cluster does a specific task
14:51 Reverend i dunno - but it's a thought... enjoy!
14:51 Reverend :)
14:52 atoy3731 Thanks again.
14:52 nebuchadnezzar joined #salt
14:52 mpanetta joined #salt
14:56 Tanta joined #salt
15:00 netcho joined #salt
15:01 netcho trying to delete minion with salt-cloud and i get error [ERROR   ] There was an error destroying machines: 'instanceId'
15:02 netcho machine gets terminated but the key is not deleted, just changed (change on deletion set)
15:02 nicksloan joined #salt
15:02 netcho nothing but that line in master log
15:03 anotherzero joined #salt
15:04 jas02_ joined #salt
15:07 madboxs joined #salt
15:08 pipps joined #salt
15:09 mpanetta joined #salt
15:09 netcho eventlog -> http://hastebin.com/zoperezehe.bash
15:10 netcho salt-key -L -> test-vlebo-DEL28133628372b4fca8dd0be9b69a691da
15:10 ronnix joined #salt
15:11 jas02 joined #salt
15:12 atoy3731 Reverend: Thanks for the nodegroups suggestion. It does work:
15:13 atoy3731 dev: 'E@dev.*, E@service-dev.*'
15:14 atoy3731 haven't cleaned up the other environment stuff I did, but I can do "salt -N dev cmd.run ls" just fine with that now.
15:14 dyasny joined #salt
15:16 Reverend atoy3731: if that works, you can do that with glob selecting on the command line. I find nodegroups easier tho... just do -N <nodegroup> *shrugs*
15:16 Reverend GG on the fix though buddy
15:17 atoy3731 Yea, also nice that you can limit it to a subnet range.. Gives a lot of options for grouping.
15:17 Reverend oh yeah. we have a 5 layer stack here on AWS, and with the multi-AZ stuff, the subnet selectors are SO fucking handy
15:18 Reverend our nodegroups are basically S@10.200.xx1.0/24 - xx2 - xx3 <-- that 5 times, one for each group XD haha
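
A nodegroups block in the master config along those lines might look like this (names, regexes, and subnets are illustrative; E@ is a PCRE match on the minion id, S@ a subnet match):

    nodegroups:
      dev: 'E@dev.* or E@service-dev.*'
      prod: 'E@prod.* or E@service-prod.*'
      web: 'S@10.200.1.0/24'

    # then target with: salt -N dev test.ping
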
15:18 tobiasBora Really I'm out of ideas... I tried everything I could think of
15:19 tobiasBora Here are the conf files and the command I run : http://paste.debian.net/901230
15:19 tobiasBora If someone could look in it it would be sooo nice.
15:21 Reverend okay - which bit is failing tobiasBora ?
15:23 MeltedLux joined #salt
15:26 tobiasBora Reverend: Well... The salt files aren't recognised... As you can see at the bottom of the file, when I run :
15:26 tobiasBora sudo salt '*' state.sls test saltenv=prod
15:26 tobiasBora the test/init.sls file isn't recognized
15:27 remi joined #salt
15:28 DEger joined #salt
15:28 tobiasBora the error I have is :
15:28 tobiasBora No matching sls found for 'test' in env 'prod'
15:30 tobiasBora and in the /var/log/salt/minion file, I can read:
15:30 tobiasBora 2016-12-08 16:17:48,529 [salt.fileclient  ][DEBUG   ][8119] Could not find file 'salt://test/init.sls' in saltenv 'prod'
15:31 Reverend in your env then, that file doesn't exist
15:32 Reverend might be worth reading the link i sent to atoy3731 - that had some deets in about envs
15:32 Reverend https://docs.saltstack.com/en/latest/ref/states/top.html#multiple-environments
15:34 teclator joined #salt
15:37 Jimlad_ joined #salt
15:43 mavhq joined #salt
15:45 sh123124213 joined #salt
15:46 jas02 joined #salt
15:48 tobiasBora Reverend: From your link, I applied everything to my case, and it seems to be pretty consistent with what XenophonF did. You can find here what I did:
15:48 tobiasBora http://paste.debian.net/901230
15:48 dxiri joined #salt
15:49 lompik joined #salt
15:50 madboxs joined #salt
15:52 adelcast joined #salt
15:53 Reverend okay so you -do- have a /srv/salt/prod/test/init.sls ?
15:53 Reverend wait -
15:53 Reverend your file is called test.sls ?
15:53 Reverend y no init.sls ?
15:56 Reverend tobiasBora ^
15:58 saintromuald joined #salt
16:01 Jimlad joined #salt
16:06 jas02_ joined #salt
16:06 winsalt the command should be "state.apply test.test"
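
In other words, the sls name is derived from the file path within the environment, which is why the paste's test/test.sls was not found as just "test" (layout assumed from the discussion above):

    test/init.sls  ->  state.apply test        (or state.sls test)
    test/test.sls  ->  state.apply test.test
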
16:10 tobiasBora Ohhhh
16:10 tobiasBora I will try, thank you !
16:10 seanz joined #salt
16:11 onlyanegg joined #salt
16:11 wavded joined #salt
16:11 seanz Am I mistaken, or is salt-cloud broken in random places? I was not able to obtain a list of AWS locations, though the docs showed the command that should have given me that.
16:11 seanz Then I read online that listing locations doesn't work for all providers.
16:11 wavded I'm using salt-call in a container for testing, is there a way to apply a state regardless if the node name matches?  or make salt-call assume a particular "node" name?
16:12 seanz Yet, I was able to list different VM sizes. So not everything is broken.
16:12 jas02 joined #salt
16:12 seanz wavded: I believe there is state.apply for running a state on a server.
16:12 seanz You can even pass in pillar data to the command if needed.
16:13 seanz https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#salt.modules.state.apply
16:13 wavded seanz: I am using state.apply, but would like to use my top.sls file but have it "ignore" the node name (just for testing)
16:13 aidin joined #salt
16:13 seanz Why not modify the top file to include an additional match condition until your testing is done?
16:14 seanz There's probably a better way someone else can suggest.
16:14 wavded i could but want it to be mostly hands off so people can test their states in a container and inspect them
16:15 tobiasBora Reverend: winsalt : I'm so stupid, I was looking for a bug in the gitfs config, and it was a stupid error in the salt files… Thank you so much !
16:15 Reverend what was it in the end tobiasBora ? :P
16:17 winsalt wavded, you can match in the top.sls with something other than the node name, like grains or pillar
16:18 dxiri joined #salt
16:19 jas02 joined #salt
16:19 wavded do people usually go by the node name with salt, or is it better to tag or label servers another way?
16:20 Reverend minion_id ?
16:20 Reverend i use nodegroups... but for individual nodes: minion_od
16:20 Reverend id*
16:20 tiwula joined #salt
16:20 wavded ok, sg, and you set that up in the minion config on the server right?
16:21 Reverend oh the minion. there's a file in /etc/salt called minion_id
16:21 Reverend you can specify the minion id in there and use that as a selector
16:21 Reverend IIRC
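
So for wavded's throwaway-container case, pinning the id is just a config or CLI detail; a sketch, assuming the minion id "web01" is what the top file targets and that the installed salt-call supports --id:

    # /etc/salt/minion (or echo the name into /etc/salt/minion_id)
    id: web01

    # masterless test run pretending to be that minion:
    # salt-call --local --id=web01 state.apply
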
16:22 arount joined #salt
16:22 Rumbles joined #salt
16:23 arount Hi, is there a way to read a file on the *master* from a custom execution module (I know modules are executed on minions but I don't know if something exists to do that) ?
16:23 dyasny joined #salt
16:25 pysen Hey, anyone successfully installed a dmg on mac? I get a python error when I'm installing an application with a space in the name. I.E docker.dmg contains docker.app and it works fine. Googlechrome.dmg contains 'Google Chrome.app' and this gives me an error.
16:29 jas02 joined #salt
16:32 Aleks3Y joined #salt
16:32 Reverend can you escape it pysen?
16:32 Reverend submit a PR on git when you fix it ;)
16:33 seanz joined #salt
16:33 pysen am i escaping it correctly like this: macpackage.install_app "/tmp/dmg-HPkema/Google\ Chrome.app" "/Applications/" ?
16:37 jas02 joined #salt
16:38 krymzon joined #salt
16:44 tercenya joined #salt
16:44 amontalb1n joined #salt
16:45 mohae joined #salt
16:45 Jimlad joined #salt
16:46 roberto_ joined #salt
16:46 roberto_ howdy
16:47 roberto_ so working with cherrypy salt-api
16:47 roberto_ I was able to authenticate and generate a token - however when trying to run a function I receive a 401
16:48 samodid joined #salt
16:48 roberto_ this is my config in master conf file
16:48 roberto_ http://pastebin.com/vJ4ehBPM
16:49 roberto_ saltdev being the user that I am exercising
16:49 roberto_ I am able to test.ping as the user via
16:49 roberto_ cli
16:50 roberto_ however when trying to do the post via the api I get permission denied
16:50 roberto_ any thoughts?
16:50 madboxs joined #salt
16:51 roberto_ this is my super basic script to test
16:51 roberto_ http://pastebin.com/mxGmSpsm
16:51 roberto_ any help is highly welcomed
16:51 roberto_ thanks in advance
16:52 roberto_ I get     'Could not authenticate using provided credentials') HTTPError: (401, 'Could not authenticate using provided credentials')
16:53 debian112 joined #salt
16:55 roberto_ I apologize for the noise - I was able to figure it out
16:55 roberto_ nothing to do with salt/auth/api
16:55 roberto_ rather, if you're interested, you can see that in my second post I am once again posting against the /login endpoint
16:55 roberto_ which is a valid 401 response
16:55 roberto_ sorry about the noise
16:56 DammitJim joined #salt
16:56 roberto_ thanks for being my rubber duck
16:56 fracklen joined #salt
16:58 edrocks joined #salt
17:00 MTecknology ehm...
17:04 samodid joined #salt
17:04 MTecknology I just learned of another meaning for rubber duck dev. I've seen it refer to a useless feature that is going to be highly debated so that the dev can worry about actual functionality.
17:05 MTecknology apparently the first time it was done, the guy decided to just toss a rubber duck in the thing, and the people who were holding up the project got distracted because they were arguing over the size/color/function of the duck.
17:05 Trauma joined #salt
17:07 cpowell joined #salt
17:07 lws joined #salt
17:10 roberto_ lol
17:11 madboxs joined #salt
17:11 buu MTecknology: Originally the term was 'bikeshedding'
17:11 buu It's in the jargonfile somewhere
17:12 buu It's actually kind of an interesting phenomenon
17:13 ronnix joined #salt
17:16 sp0097 joined #salt
17:17 troy_16bit joined #salt
17:17 bltmiller joined #salt
17:19 foundatron joined #salt
17:20 Salander27 joined #salt
17:21 fracklen joined #salt
17:31 MTecknology buu: heh, interesting
17:32 madboxs joined #salt
17:32 buu MTecknology: http://shop.nordstrom.com/s/friendswithyou-little-cloud-bronze-fine-art-sculpture-limited-edition-nordstrom-exclusive/4537020?
17:33 MTecknology I'm apparently a good personal ducky
17:33 MTecknology I start reading the code to myself and explaining it but if I cant grasp something that I thought was easy when I wrote it, I tend to yell at myself for not getting it and then we either refactor or add comments.
17:34 edrocks joined #salt
17:34 MTecknology ... life in my head is not fun :S
17:35 Compeso joined #salt
17:44 jas02 joined #salt
17:46 wavded joined #salt
17:46 honestly having more than one person in your head gets crowded
17:46 sh123124213 joined #salt
17:47 roberto_ ya, especially when disagreements emerge
17:48 pipps joined #salt
17:48 raspado joined #salt
17:49 pipps99 joined #salt
17:51 ronnix joined #salt
17:51 pipps_ joined #salt
17:52 pipps__ joined #salt
17:53 sarcasticadmin joined #salt
17:53 madboxs joined #salt
17:53 ronnix joined #salt
17:54 pipps joined #salt
17:56 ronnix_ joined #salt
17:57 pipps99 joined #salt
17:59 pipps_ joined #salt
18:01 scsinutz joined #salt
18:01 nickabbey joined #salt
18:02 pipps joined #salt
18:02 pipps99 joined #salt
18:03 Edgan joined #salt
18:05 UForgotten joined #salt
18:05 pipps_ joined #salt
18:06 pipps__ joined #salt
18:07 jas02_ joined #salt
18:08 flight884 joined #salt
18:09 flight884 hello all. Have a question re git_pillar. Current setup isos: centos7
18:10 flight884 oops. hit enter too quickly. standby for more details
18:11 flight884 setup: os: centos7, salt: 2016.11.0 (Carbon), pygit2: 0.21.4
18:11 anotherzero joined #salt
18:11 sh123124213 joined #salt
18:12 tercenya joined #salt
18:13 pipps joined #salt
18:13 flight884 after a repository change (i.e., someone commits a file change to the pillar data), when I call salt-run git_pillar.update, the correct changes occur. However, when master is running for a while (say, a day), and I repeat the steps, git_pillar says that the repository is "up-to-date"
18:13 XenophonF joined #salt
18:13 pipps99 joined #salt
18:14 flight884 anyone experience this before?
18:14 madboxs joined #salt
18:15 pipps_ joined #salt
18:16 AndreasLutro the git pillar updates itself regularly, there's always a chance that it's updated itself before you had the chance
18:20 flight884 We want a commit to the pillar repository to trigger a continuous integration build. So the way I have it set up is that a commit to the repository fires a salt event, which is picked up by reactor, which calls an orchestrate command that makes git_pillar update.
18:21 flight884 This all works...for a while
18:21 pipps joined #salt
18:21 flight884 And it works when calling salt-run git_pillar.update -l debug
18:22 flight884 however, after a while, the reactor-based call and the salt-run call do not detect repository changes.
18:23 flight884 its as if the repository in /var/salt/cache/git_pillar somehow gets disconnected from the remote
18:23 AndreasLutro check your master logs, see if there are lock files hanging around
18:23 AndreasLutro and make sure you're running a recent version
18:23 flight884 AndreasLutro: Versions are os: centos7, salt: 2016.11.0 (Carbon), pygit2: 0.21.4
18:24 flight884 note that pygit2 is not latest, as we're pulling from yum and 21.4 is what we get
18:25 pipps99 joined #salt
18:25 nmccollum joined #salt
18:26 pipps joined #salt
18:27 nmccollum In the module cmd.run, there is the creates... is there an opposite to it?  If the file exists, run this command?
18:28 AndreasLutro dunno then flight884
18:28 AndreasLutro nmccollum, onlyif: test -f /path/to/file
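
So the inverse of creates is handled with the generic onlyif/unless requisites; a sketch (the paths and command are illustrative):

    run-if-marker-exists:
      cmd.run:
        - name: /usr/local/bin/do-something.sh
        - onlyif: test -f /path/to/file
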
18:30 flight884 AndreasLutro: I'll check re locks. thanks. But it feels like locks are not the issue. git_pillar just simply receives nothing when fetching. I'll put links to dpaste here with the logs just in case there's something obvious. Appreciate the help
18:30 sh123124213 why is cp-push so slow in comparison to scp
18:34 scsinutz joined #salt
18:35 madboxs joined #salt
18:37 atoy3731 Is there a way to make the prod and dev init.sls files identical? https://gist.github.com/anonymous/70f5935ba3472354acff111630e3b2ca
18:37 buu include?
18:37 atoy3731 (that's a very simplified example of what i'm working through)
18:38 buu atoy3731: isn't this an example in the tutorials
18:38 atoy3731 The multi-environment tutorial?
18:38 buu atoy3731: you use file roots defined in the fileserver config
18:38 buu yeah
18:38 atoy3731 Didn't really answer my question.
18:39 buu If you specify different roots then salt://test.jinja will pick up the right file
18:39 atoy3731 Well, when I used multiple file_roots, I couldn't get changes to flow down from base.
18:39 atoy3731 Ie, if I had a file in 'dev' but not in 'prod', I'd want prod to use the 'dev' file.
18:39 buu atoy3731: It's a 'first file found' wins
18:39 buu ok that's just weird
18:40 buu but if you specify:
18:40 buu root:
18:40 buu - prod
18:40 buu - dev
18:40 atoy3731 lol that's not a great example, but I was simplifying it.. I have a common environment.
18:40 atoy3731 And I want to overwrite things for dev and prod.. Only the things that differ.
18:40 ronnix joined #salt
18:41 atoy3731 I'll check it out again.. Maybe I just misinterpreted.
18:41 flight884 AndreasLutro: here is a paste of logs when remote pillar repo has changed, but git_pillar update is not pulling in the changes. At bottom of paste is me looking into the cache and you can see that the remote is different from what is in cache: https://dpaste.de/WRiR/raw
18:42 buu atoy3731: https://docs.saltstack.com/en/latest/ref/file_server/backends.html#defining-environments ?
18:42 buu atoy3731: I might be misunderstanding you, can you restate what you're trying to accomplish?
18:43 AndreasLutro flight884: mm, maybe origin/develop isn't set as the upstream for the local develop branch
18:44 AndreasLutro best guess
18:44 misconfig I'm using a jinja for loop in a template to iterate over the ip_interfaces grain, does anyone have a clue why there are so many newlines in my output? => https://gist.github.com/ndobbs/71461bdd4ed0423f61038351998fa573
18:45 atoy3731 I have 4 environments.. 90% of those environments are identical, but some files/configs do differ.. So I'd want there to be a 'common' environment that contains the base version of everything.. Then the other environments will inherit the common environment and only change what they need based on their configurations.
18:45 buu atoy3731: okilydokily
18:45 atoy3731 I tried to use that link you sent before, but the inheritance wasn't working.
18:45 buu atoy3731: so can you define a root for each environment: env1: - env1files; - common; ?
18:46 buu misconfig: Because you have lots of newlines around each loop statement
18:46 atoy3731 yep, that's what I had originally: file_roots: common: - /srv/salt/common; env1: - /srv/salt/env1;
18:46 atoy3731 for example.
18:47 buu misconfig: You can probably write {%- for i ... -%} to hide some
18:47 buu atoy3731: you need to list common in env1 also
18:47 misconfig Thanks buu - I had used that trick in other states but wasn't sure if it would apply to variables. Let me test.
18:48 buu misconfig: I mean, I'm just completely guessing, I have no idea what syntax jinja uses for it
18:48 buu =]
18:48 misconfig any help is appreciated
18:48 misconfig so thank you
18:49 atoy3731 buu: then is there any reason to have the 'common' root element? could i just have:  file_roots: dev: - /srv/salt/common; - /srv/salt/dev; prod: - /srv/salt/common; - /srv/salt/prod;
18:49 buu "You can also strip whitespace in templates by hand. If you add a minus sign (-) to the start or end of a block (e.g. a For tag), a comment, or a variable expression, the whitespaces before or after that block will be removed:
18:49 atoy3731 since common isn't really a standing environment, more just a storage for the base environment.
18:50 flight884 AndreasLutro: I think that is what it looks like. I assume these directories in cache are created by git_pillar thru pygit2? Is it perhaps not set up to track by default?
18:50 buu atoy3731: Not that I know of
18:50 buu atoy3731: Do you want common to override dev or dev to override common?
18:50 atoy3731 dev to override common.
18:50 buu then put it first!
18:50 AndreasLutro no clue. git_pillar has worked fine for me
18:50 atoy3731 Haha, the opposite of a top.sls.. Of course.
18:50 buu =]
18:51 atoy3731 Alright, thanks for the help. Let me run with that for a little.
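
The resulting file_roots, with the override root listed before the shared one so "first file found wins" works in dev's favour (paths are illustrative):

    file_roots:
      dev:
        - /srv/salt/dev       # dev-specific overrides, searched first
        - /srv/salt/common    # shared baseline
      prod:
        - /srv/salt/prod
        - /srv/salt/common
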
18:51 atoy3731 I'm sure I'll be back.. :-|
18:51 buu Good luck
18:51 buu we're all counting on you
18:52 atoy3731 Terrifying for you then.
18:52 buu =O
18:52 flight884 AndreasLutro: it works fine for me too at first. But then...it is as if it forgets how to work fine :)
18:52 buu It's an airplane reference
18:52 buu No one ever gets it =[
18:52 misconfig buu, that worked
18:52 misconfig thank you
18:53 misconfig I should have tried this earlier as I was aware of this but wasn't sure it would work in this case.
18:53 misconfig had to add a bunch of minuses, lol
18:53 buu misconfig: do you really need to care about whitespace?
18:54 buu or does it break yaml
18:54 misconfig Not really. It just looked really ugly in my output
18:54 misconfig I'm going to be using these as variables to configure interface bonding
18:54 misconfig so this loop will work with the interface state, I just wanted to ensure it was tidy.
18:54 atoy3731 buu: Haha, my bad. Classic. Should have had it.
18:54 flight884 AndreasLutro: just trying to gather forensics here (thank you for your help). When you say it works fine for you, does that mean that the pillars _eventually_ represent what is on external repo? Or that you are able to always successfully force git_pillar to update itself immediately?
18:55 buu Ok, what exactly is grains.get ps; returning?
18:55 coredumb evening folks
18:55 coredumb I was wondering
18:55 coredumb how much different git repositories is too much ?
18:56 coredumb how many *
18:56 misconfig @coredumb when it becomes a pain to manage
18:56 coredumb performance wise ?
18:56 misconfig I separate a repo per utility // function personally.
18:56 misconfig Ah, I can't speak to that. I've not had enough repos to see any performance hit during clones etc.
18:56 coredumb couple hundreads of gitfs repos ?
18:56 buu coredumb: 83
18:57 madboxs joined #salt
18:57 misconfig ^^
18:57 misconfig o.O
18:57 buu 84 is clearly way too many
18:57 buu I mean come on, what kind of a question is that
18:58 coredumb what kind of answer is that ?
18:58 whytewolf coredumb: that is a matter between you and your operating system really. depends on network, operating system latency, disk space, disk latency, if goerge is going to be out to lunch
18:59 coredumb whytewolf: yeah, apart from that there could also be some known fact where more than 150 gitfs repos makes the master lag like a big fat cow during highstates
19:00 coredumb especially when states are looked up top to down
19:01 whytewolf no known fact like that exists. I have seen slow masters with 10 git repos buckle. and i have seen fast servers with thousands able to handle the load
19:05 coredumb whytewolf: now if you tell me you've seen thousands of gitfs repos looked up still fast that makes it interesting
19:05 fracklen joined #salt
19:06 whytewolf they were using a raided fusion drives for their var directory. and ram as their /tmp
19:06 whytewolf extremely high io setup
19:06 heaje joined #salt
19:07 whytewolf on normal enterprise hardware [not virtual machine] you should be able to get a couple hundred
19:08 whytewolf but honestly.... i think you need to reevaluate your life choices after about 83
19:08 jas02 joined #salt
19:08 sh123124213 joined #salt
19:08 jas02_ joined #salt
19:09 netcho joined #salt
19:13 st8less joined #salt
19:13 seanz joined #salt
19:14 sjorge joined #salt
19:14 sjorge joined #salt
19:16 numkem joined #salt
19:17 madboxs joined #salt
19:20 tercenya joined #salt
19:21 atoy3731 buu: Got it working, thanks again. Does the same logic apply to pillar_roots and pillar configs?
19:23 coredumb whytewolf: what would need to be reevaluated around 83 ?
19:24 bltmiller joined #salt
19:24 cyborg-one joined #salt
19:25 s_kunk joined #salt
19:25 scsinutz joined #salt
19:29 ivanjaros joined #salt
19:30 ivanjaros3916 joined #salt
19:33 buu atoy3731: I have no idea what pillars do!
19:33 buu =]
19:33 buu You should investigate and let us all know
19:33 atoy3731 Haha, no worries. I just might.
19:34 buu I keep getting pillars and grains confused
19:38 madboxs joined #salt
19:39 krymzon joined #salt
19:44 o1e9 joined #salt
19:44 keimlink joined #salt
19:46 netcho whats the best way to apply nginx state with multiple configs? for example 'salt state.apply nginx' installs and sets default config... i need for example 'salt state.apply nginx.app1..appN'
19:47 netcho i have all configs templated
19:47 heaje netcho: Sounds like a good use of pillar data
19:47 netcho just need a way to deploy them all without having a state for each of them
19:47 netcho passing pillar into state?
19:47 heaje have a pillar that specifies which config to deploy.  If the pillar isn't defined, use the default.  If the pillar is defined, use the value from the pillar.
19:48 heaje netcho: Yes, make your state reference the pillar data
19:51 netcho so for example... state.apply nginx pillar='{"config":"app1"}'
19:54 netcho pillars cannot be passed in top.sls file right?
19:54 netcho heaje: ?
19:55 heaje netcho: Pillars have their own top.sls file
19:55 netcho i know that
19:56 cscf netcho, top files aren't the same syntax, you can't include pillar vars, no
19:57 heaje I guess I'm confused then by what you mean when you say "pillars cannot be passed in top.sls file right?"
19:57 cscf I think he wants to substitute pillar vars in a top file, which doesn't make much sense anyway, since what minion's pillar would be used?
19:57 netcho writing an example i pastebin
19:58 netcho http://hastebin.com/nevosawifa.cs
19:59 netcho example
19:59 madboxs joined #salt
20:00 whytewolf netcho: top.sls only links to files....  so if you want that in top.sls you need a file per item.... there is no getting around that.
20:00 netcho gotcha
20:00 whytewolf if you don't want to build a file per item and just pass in nginx then setup nginx to take pillar data and turn it into what the app will do
20:03 netcho sec i will paste what i have so far...
20:03 netcho http://hastebin.com/oriyiqimid.cs
20:03 netcho it works ok with 1 app and default
20:04 netcho but if i have 10 apps
20:04 netcho what would be the best way to solve it
20:04 whytewolf one second
20:04 netcho line 14 is wrong but u get the point
20:06 zer0def joined #salt
20:06 raspado joined #salt
20:08 fracklen joined #salt
20:09 whytewolf netcho: something more like this [not tested just thrown together] https://gist.github.com/whytewolf/d519a4b473dea21e6f122166394a00de
20:11 netcho whytewolf: wouldn't this create them all?
20:11 whytewolf netcho: if they all are being created on the same host yes
20:12 netcho yeah i need 1 per host based minion id
20:12 netcho on
20:13 netcho so if grains['id'] contains app1 apply config app1
20:13 netcho i might try somehow to parse grains id and set app with that
20:14 whytewolf ohhhh ok. thats not to bad either
20:14 scsinutz joined #salt
20:14 whytewolf one second
20:14 whytewolf it actually is closer to what you had originally
20:14 netcho yes
20:15 netcho just lack of dev skilz here :D
20:15 netcho salt does not awk or sed :D
20:17 whytewolf okay, updated your gist
20:17 whytewolf that gist
20:17 whytewolf simple changes to variables instead of limits
20:17 netcho damn that looks easy :D
20:18 whytewolf it is
20:18 whytewolf :P
20:19 netcho will it work if only part of grains id is in pillar
20:19 netcho for example grains_id=staging-app1-xxx and pillar=app1
20:20 whytewolf no, it has to match the whole id
20:20 netcho gotcha
20:20 netcho makes sense
20:20 netcho no globbing here? :D
20:20 madboxs joined #salt
20:20 netcho sorry, just trying to minimize it as much as i can :)
20:21 netcho if salt.grains.get('id')+'*' haha
20:21 whytewolf eh, pillar in query doesn't do wild cards :P
20:21 netcho :)
20:21 whytewolf err s/pillar/python
20:22 zer0def joined #salt
20:22 netcho this will work for static apps; for autoscaling i will have to parse it because i have the instance_id as a suffix
20:23 netcho thanks
20:24 whytewolf if you know for a fact that you will only put nginx on hosts and that you will never have a default case you could drop everything but the {% set app = salt.grains.get('id') %}
20:25 netcho i will never have default nginx config if thats what u mean
20:25 netcho or in the worst case scenario i could have 2 files.. 1 for default and one for all other apps
20:26 whytewolf one second let me see if i have time i might be able to whip up something that takes care of it
20:26 netcho but now when i look deeper into this, it will not work if i have an app on multiple servers
20:26 netcho i don't wanna waste ur time
20:26 netcho u helped enough
20:28 whytewolf hehe okay. I really should be doing my day job anyway... or at the very least figuring out my own issue at the moment: how to saltify a solo master and preload content on it, with a fresh install so i won't even have the libs for gitfs
20:28 netcho server names are for example.... staging-app1-0, staging-app1-1, staging-app1-2 etc
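
Putting the pieces together, a single nginx state that derives the app from the minion id (naming scheme staging-<app>-<n> as described above; the paths, file names, and split logic are illustrative):

    {# nginx/init.sls -- pick the app from an id like staging-app1-0 #}
    {% set app = salt['grains.get']('id').split('-')[1] %}

    nginx:
      pkg.installed: []
      service.running:
        - enable: True
        - watch:
          - file: /etc/nginx/conf.d/{{ app }}.conf

    /etc/nginx/conf.d/{{ app }}.conf:
      file.managed:
        - source: salt://nginx/files/{{ app }}.conf.j2
        - template: jinja
        - require:
          - pkg: nginx
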
20:32 sh123124213 is there any way to speed up file transfers with cp.get_file ?
20:33 lem2 joined #salt
20:33 lem2 left #salt
20:34 honestly yeah, you just pass --cp-get-file-dont-be-slow
20:35 netcho xD
20:35 Tanta cloud-init
20:36 whytewolf cloud-init?
20:36 Tanta it's the underlying system for
20:36 Tanta "user data" in EC2
20:36 whytewolf yes ...
20:36 Tanta you can pass in a bash script and use that to bootstrap
20:36 whytewolf that would be nice if this was going to be a cloud system
20:37 whytewolf it is hardware
20:37 netcho pure iron
20:37 Tanta PXE boot and use kickstart
20:37 Tanta or whatever the automation tool is for your distro
20:37 whytewolf thats another problem. this is where the pxe boot system will end up living
20:38 Tanta install to a live USB key?
20:38 Tanta you could boot a server off of that and prepare it beforehand
20:38 whytewolf I'm trying to avoid that. I just want a pretty default install and just push a button on my mac and go
20:38 netcho whytewolf: is there a way to strip grain value in a state? :D like with sed? :)
20:39 Tanta then do kickstart + CD-ROM boot, that is preparable
20:39 jas02 joined #salt
20:39 whytewolf I might have to... just wanting to get away from that
20:40 Cottser joined #salt
20:41 madboxs joined #salt
20:42 hemebond whytewolf: What are you trying to boot on?
20:44 whytewolf hemebond: dell 2950
20:47 debian112 what are the ways to protect saltstack-api?  network ACLs, and Username and Password, SSL.   Is there anything else?
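debian112's question goes unanswered in the log; for reference, the usual layers on top of network ACLs when using the rest_cherrypy netapi module are TLS on the API port plus an external_auth backend with per-user function whitelists. A rough master-config sketch (the paths, user name, and allowed functions are placeholders):

    rest_cherrypy:
      port: 8000
      ssl_crt: /etc/pki/tls/certs/salt-api.crt
      ssl_key: /etc/pki/tls/private/salt-api.key

    external_auth:
      pam:
        api_user:
          - test.ping
          - state.apply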
20:47 hemebond whytewolf: So you're creating VMs on bare metal?
20:49 whytewolf hemebond: saltify is the cloud driver for salting a system. doesn't have to be related to vms at all
20:49 hemebond whytewolf: Sorry, I read back in the conversation but couldn't follow so I'm not really sure what you're struggling with.
20:50 hemebond I thought it was the boot process.
20:50 whytewolf hemebond: i want my cake and to eat it too
20:51 whytewolf basically saltify has no mechanism for directly transferring files. since i am creating a standalone master [local:true, master:127.0.0.1] i cannot just run a salt command cause there is nothing for it to do.
20:52 whytewolf i was trying to avoid the basic kickstart of a bootstrap machine but i might just have to cause there just doesn't seem to be a way around it
20:53 hemebond So you have a kickstart process already, and you want to use Saltify to install salt-master?
20:53 hemebond And you need to transfer files?
20:54 whytewolf currently there is not a kickstart. this is all prework stuff. i was just going to get a basic min install with network and then use saltify. but with saltify unable to transfer files i can't just do a one button press on this one system [everything that comes after is golden]
20:55 whytewolf basically this is the bootstrap of a new infrastructure
20:57 whytewolf trying to do the absolute min outside of salt
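For context, a saltify provider/profile of the kind being described looks roughly like this (host, user, and key are placeholders; make_master is the generic salt-cloud option for installing a master on the target rather than just a minion). As whytewolf notes, the driver bootstraps Salt over SSH but has no mechanism for pushing arbitrary content:

    # /etc/salt/cloud.providers.d/saltify.conf
    my-saltify:
      driver: saltify

    # /etc/salt/cloud.profiles.d/saltify.conf
    bare-metal-master:
      provider: my-saltify
      ssh_host: 192.0.2.10
      ssh_username: root
      key_filename: /root/.ssh/id_rsa
      make_master: True
      minion:
        master: 127.0.0.1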
21:03 oida joined #salt
21:04 hemebond Well, Saltify, I thought, was mostly about installing salt-minion on machines.
21:04 hemebond Usually in cloud environments.
21:04 whytewolf saltify was actually made for bare metal machines. since the other drivers pretty much covered the clouds
21:04 cscf Why not install salt-minion in the pxe preseed?
21:05 hemebond If this is an entirely new environment, that won't be connected to an existing master, then it's really outside of salt.
21:05 whytewolf cscf: because there is no pxe preseed :P
21:05 cscf whytewolf, oh, I thought you meant a network install
21:05 hemebond Sorry, yes, non-cloud environments.
21:06 hemebond I guess I don't really understand the scope.
21:07 whytewolf hemebond: i'm trying to stretch a system to its limits. also saltify can create the standalone master. with some very basic configs. i just can't get it to actually have data. and am whining about it :P really more of an exercise in frustration
21:08 hemebond Are you going to be managing this new master using salt-cloud or something?
21:08 whytewolf no, it is the MoM of the whole thing.
21:08 hemebond Then it sounds less like "stretching" a system, and more like displacing it.
21:09 whytewolf once deployed it was going to pxe boot the rest of the hardware, install openstack, then deploy dev, qa, and prod environments within.
21:09 hemebond It sounds like you could just SSH onto the machine and run a custom script that uses bootstrap-salt.sh to install salt-master and then pulls down a repo.
21:09 whytewolf hemebond: thats what i do now :P
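That manual path can stay very small; a rough sketch, assuming the states live in a git repo (the repo URL is a placeholder; -M and -N are the bootstrap script's install-master / skip-minion flags):

    curl -L https://bootstrap.saltstack.com -o bootstrap-salt.sh
    sh bootstrap-salt.sh -M -N
    git clone https://git.example.com/salt-states.git /srv/salt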
21:10 hemebond That sounds like the best and most appropriate method.
21:10 hemebond You aren't getting any benefit from using saltify.
21:10 whytewolf education about saltify and what is not possible
21:10 whytewolf i wouldn't say that isn't a benefit
21:10 hemebond While Saltify can install a master, it seems, to me, to be more about adding Salt to existing machines.
21:10 toanju joined #salt
21:10 kramcinerok joined #salt
21:11 hemebond Well, what you've learned is a separate thing.
21:11 hemebond I'm talking technical benefit.
21:11 Bico_Fino joined #salt
21:12 whytewolf and if what i am trying to do was possible ... which technically if i wasn't lazy i guess i could do by using a custom bootstrap script. ...
21:14 whytewolf hemebond: I guess you have to understand my mindset. I'm the kind of guy that built linux from scratch, broke it on purpose, and rebuilt it because i found it fun
21:15 hemebond whytewolf: That's cool.
21:15 hemebond There's nothing stopping you from trying to use Saltify this way.
21:15 hemebond You just need to write your own bootstrap script.
21:15 whytewolf i know ... just being lazy.
21:15 hemebond And you get no technical benefit.
21:16 whytewolf it is a home lab i don't get a technical benefit from it anyway
21:16 hemebond Sounds like the bad lazy. You need to be the good lazy ☺
21:16 hemebond Oh I see.
21:16 DammitJim joined #salt
21:17 XenophonF joined #salt
21:18 mikecmpbll joined #salt
21:18 whytewolf yeah i would never try something this stupid in production ... or at work
21:19 whytewolf esp since i would have pxe boot there
21:19 whytewolf and most likely a vast infrastructure to already use
21:19 hemebond You don't have PXE boot in your home lab?
21:20 whytewolf this box would be the main system to pxe boot off of
21:20 hemebond Oh, and you haven't scripted it yet?
21:20 whytewolf exactly
21:20 hemebond You could just use salt-ssh to set it up.
21:20 whytewolf no master to salt-ssh off of
21:20 whytewolf i guess i could spin one up in vagrant
21:20 hemebond Oh.
21:21 whytewolf since it wouldn't need to be a perm fixture
21:21 bltmiller to something something, you must first invent the universe
21:21 bltmiller or something like that
21:21 bltmiller - famous person
21:21 cscf to bake a cake from scratch, I think it was
21:21 whytewolf to bake a pie you must first create the universe. carl sagan
21:21 bltmiller ayyy there we go, cscf and whytewolf FTW
21:26 whytewolf I do wonder about scientists sometimes... very violent bunch who seem to always be hungry.... killing cats and baking cakes
21:26 pipps joined #salt
21:40 mohae_ joined #salt
21:42 hemebond Yikes. My minions have updated themselves.
21:42 bltmiller using pkg.latest?
21:43 hemebond Nope, I don't manage the salt-minion package install.
21:43 hemebond Oh wait.
21:43 hemebond If I have used a state that did that once (not applied every time) will it update it?
21:44 cscf no
21:44 madboxs joined #salt
21:44 bltmiller depends on what you're doing sorta. a basic `state.apply` will only execute the states that are targeted in your top.sls file
21:44 hemebond Well I did pkg.upgrade and some minions didn't respond.
21:44 hemebond So I checked and now they're newer.
21:44 hemebond So I guess it was just me.
21:44 cscf hemebond, seeing as that's what pkg.upgrade does, this should not surprise you
21:44 hemebond But none of the minions said they'd upgraded.
21:45 hemebond Yeah. The only surprise was that the list of packages upgraded didn't mention salt at all.
21:45 cscf that is odd.
21:45 hemebond plus I thought I'd pinned  the version.
21:46 Sketch the minion gets restarted when it gets upgraded, so it will time out
21:46 hemebond Sketch: Yeah, but some didn't.
21:46 Sketch hemebond: yeah, i have noticed that as well.  sometimes they do, sometimes they don't.
21:47 hemebond The ones that didn't time out didn't list salt-minion as being updated.
21:47 Sketch it probably wasn't, they might already be up to date?
21:48 hemebond Hmm. Maybe so.
21:49 hemebond I think I need to update my cloud-init process to specify the version.
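Pinning can also be expressed as a state rather than in cloud-init; a sketch, with the version string as a placeholder for whatever build the distro repo actually ships (hold relies on the apt/yum hold machinery being available on the minion):

    salt-minion:
      pkg.installed:
        - version: 2016.3.4+ds-1    # placeholder, use the exact packaged version
        - hold: True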
21:52 AnalogLifestyle joined #salt
21:56 Axenow joined #salt
22:05 bltmiller anyone using salt on VMs behind an HTTP proxy? seems like minions are only honoring `http_proxy` and `https_proxy` environment variables (not their ALL-CAPS counterparts)
22:05 Bico_Fino joined #salt
22:05 madboxs joined #salt
22:09 Praematura joined #salt
22:09 jas02 joined #salt
22:11 jas02_ joined #salt
22:16 hemebond bltmiller: salt-minion doesn't use environment variables.
22:16 hemebond You have to configure the proxy in the minion configuration.
22:18 bltmiller it should inherit from systemd's environment
22:18 hemebond Really?
22:18 hemebond When did they add that?
22:19 saintromuald joined #salt
22:19 bltmiller don't know. but without setting the env var (lowercase only) in my systemd unit file, shit don't work
22:20 hemebond bltmiller: https://github.com/saltstack/salt/issues/8177
22:20 saltstackbot [#8177][OPEN] Support http_proxy/https_proxy for sources | For example, a file.managed with source pointing to a file on a HTTP location will not work if you are behind a proxy....
22:20 hemebond Oh wait.
22:20 hemebond That's the wrong one.
22:20 krymzon joined #salt
22:20 hemebond Oh wait, that should be relevant.
22:21 hemebond https://github.com/saltstack/salt/pull/5937
22:21 saltstackbot [#5937][MERGED] fixing issues with passing env var to salt-minion,master,syndic | Hi guys I have a problem with env variables (set in /etc/environment) as they are not passed when running salt-minion as a service (ubuntu)...
22:21 hemebond Sounds like people are using the workaround you are.
22:22 hemebond Basically, the minion doesn't read environment variables like HTTP_PROXY.
22:22 hemebond Environment variables are usually used to configure global stuff for a system.
22:22 hemebond To the best of my knowledge.
22:23 bltmiller it's a pain for sure and was a hard one to track down
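For reference, the minion config has its own proxy settings that Salt's internal HTTP helpers consult, which is the config-file alternative to the systemd Environment= workaround (the values below are placeholders, and not every code path in releases of this era honors them):

    # /etc/salt/minion.d/proxy.conf
    proxy_host: proxy.example.com
    proxy_port: 3128
    proxy_username: someuser
    proxy_password: changeme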
22:25 tehsu can you pass two grains through a mine.get? the example shows I can but it doesn't seem to work
22:25 onlyanegg joined #salt
22:25 nidr0x joined #salt
22:26 tehsu nevermind,
22:26 wavded joined #salt
22:26 madboxs joined #salt
22:26 bltmiller tehsu: bad syntax?
22:26 wavded how do you target a minion_id in a sls file?
22:26 tehsu i was using grain instead of compound
22:26 hemebond tehsu: Seems to work for me.
22:27 hemebond Ah
22:27 tehsu my mistake
22:27 tehsu thx
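The distinction tehsu hit: matching on two grains in mine.get needs a compound target rather than a plain grain one. A sketch of the template usage, with the grain values and mine function chosen only as examples (on releases of this era the keyword is expr_form):

    {% for host, addrs in salt['mine.get'](
        'G@roles:web and G@environment:staging',
        'network.ip_addrs',
        expr_form='compound').items() %}
    {{ host }}: {{ addrs | join(', ') }}
    {% endfor %}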
22:28 wavded ahh looks like grains.id
22:29 hemebond wavded: The minion ID is the default target.
22:31 misconfi_ joined #salt
22:31 whytewolf wavded: what do you mean by target a minion_id in an sls file? what are you doing that you can't target in the top file?
22:32 wavded whytewolf: I run my tests in a docker container, and I had one case where I couldn't apply a part of sls file to the container, so I wanted to ignore it there
22:32 wavded a kernel issue
22:33 wavded ironically it was an sls file used to install docker :)
22:34 whytewolf so you are installing docker within a docker container?
22:34 whytewolf ... that seems illogical
22:35 whytewolf there is only so far you can go with containers. before you have to break into virtualization
22:36 Edgan whytewolf: I have a friend who does docker in docker, but only that deep. They ran into bugs, and I think have had to stick to certain versions at times.
22:37 whytewolf Edgan: honestly I try to stay away from containers. just not my bag... i can see the appeal. but i just am more of a full virtualization person
22:38 wavded whytewolf: we don't actually use docker in docker for anything, i just wanted to test that state
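What wavded describes is the usual pattern: wrap the state in a Jinja check on grains['id'] so it is skipped inside the test container. A sketch, with the container id and package name made up for illustration:

    {% if grains['id'] != 'docker-test-container' %}
    docker-engine:
      pkg.installed: []
    {% endif %}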
22:38 Edgan whytewolf: same here, but devs are going to drag me into containers in EC2, because of instance costs, and especially for dev environments. When their VMs are at 1-2% cpu load, we can get way more containers on a big VM. It really comes down to memory. VMs require it and containers don't as much.
22:39 hemebond ugh, I have to use Docker on EC2 instances for some stuff. I can't stand it.
22:40 Edgan hemebond: I will probably end up using Kubernetes.
22:41 hemebond Yeah. It's a completely different process/system to using Saltstack and VMs.
22:42 Edgan whytewolf: I am using salt for deploys, and devs are hating the complexity of jinja templates and pillar overrides. But it is mostly because it enforces discipline on them. They would turn it back into a disaster with dockerfiles.
22:43 Edgan whytewolf: They would just check in configuration files to git with passwords if I let them.
22:45 bltmiller I'm transitioning my app team to containers. that's just where the puck is going to be.
22:45 wavded Yeah we are moving to Kubernetes, still dockerfiles there, just better management
22:46 bltmiller wavded: have you looked at Docker Swarm mode yet?
22:46 bltmiller in case you want a quick demo, this article is pretty good ;) http://btmiller.com/2016/11/27/docker-swarm-1.12-cluster-orchestration-with-saltstack.html
22:46 Edgan wavded: I plan on sticking to salt and not using dockerfiles.
22:46 wavded bltmiller: no i haven't yet
22:47 onlyanegg joined #salt
22:47 wavded bltmiller: is it something like kubernetes?  pros/cons?
22:48 bltmiller k8s seems more like a build-everything-yourself which really tickles my fancy, but I've got super tight deadlines and need something like *now*, so Swarm it is, for now, because everything is baked-in. includes its own service discovery, its own networking, etc.
22:48 wavded we played around with managed kubernetes in gcloud and now are going to try rolling our own, was really nice for packaging containers together, gathering logs, autoscaling, persistent disks.. etc
22:49 Edgan docker X is the bleeding edge rush it out of door version of things
22:49 bltmiller X?
22:49 Edgan bltmiller: Fill in the blank = X
22:49 bltmiller ¯\_(ツ)_/¯
22:49 Edgan bltmiller: swarm, registry, etc
22:49 bltmiller yeah I'm finding myself combatting the bleeding edge almost daily now
22:50 wavded bltmiller: the bleeding edge is ... bloody :)
22:50 wavded in my experience, trying to manage my sanity
22:50 Edgan bltmiller: Not that I don't find myself wishing for new Salt versions on a weekly basis, and having to build my own packages to include unreleased patches.
22:50 bltmiller I do like k8s concept of Deployments. I think Swarm will eventually have an answer for it with their Stack deploys but, I think it's still a work-in-progress
22:51 bltmiller Edgan: heh, I'm over here managing a Frankenkernel
22:51 Edgan bltmiller: I am already one patch in on 2016.11.0
22:51 Edgan bltmiller: It was five patches on 2016.3.3, and three on 2016.3.4
22:51 __number5__ just did a project with Docker 1.12 swarm mode and found most basic features like docker-compose support / service logs are not there
22:52 bltmiller Edgan: I'm okay with staying behind on 2016.3 for now ;)
22:52 bltmiller __number5__: did you look at the experimental stack deploy feature?
22:52 __number5__ hopefully 1.13 will catch up a bit
22:52 scott joined #salt
22:52 bltmiller the release notes for 1.13 look promising
22:52 Edgan bltmiller: I just use it so heavily and with so many different things that I am constantly running into bugs, and I need fixes NOW.
22:52 wavded deis on top of k8s is something I've played around with as well (similar to app-engine, heroku, etc). that really removes dockerfiles :)
22:53 __number5__ bltmiller: bundle/apply thing? nope
22:53 bltmiller https://docs.docker.com/engine/reference/commandline/stack_deploy/
22:53 __number5__ next project will definitely be k8s
22:53 bltmiller wavded: oooooh this looks neat
22:54 bltmiller wavded: is there any equivalent in Docker universe?
22:54 kings_ joined #salt
22:54 wavded bltmiller: afaik, nothing from docker themselves, there is another similar project called flynn that doesn't use k8s
22:54 __number5__ bltmiller: thanks.
22:55 wavded we probably will end up with a combination of deis, k8s, and raw vms, they all have their uses
22:56 kings_ does anyone know how to update a repo to the newest version and replace the existing one?
22:57 __number5__ kings_: what repo? need more context
22:58 kings_ i'm looking to upgrade zabbix-agent on all my machines from version 2.2 to 3.2. I'll need to install the newest repo on all machines and stop the agent, upgrade, and start the agent again.
22:59 __number5__ kings_: so I supposed you mean apt repo or yum repo?
23:00 kings_ yes. sorry.
23:00 __number5__ kings_: use this to manage your repos https://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkgrepo.html
23:01 __number5__ then in your zabbix-agent pkg state, require that repo and specify the version you want
23:02 kings_ great thank you! i'll give that a try.
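Putting __number5__'s two pieces together, a rough sketch for the zabbix case on an apt-based system (the repo line, key URL, and version string are placeholders for whatever the 3.2 packaging actually provides):

    zabbix-repo:
      pkgrepo.managed:
        - humanname: Zabbix 3.2
        - name: deb http://repo.zabbix.com/zabbix/3.2/ubuntu xenial main
        - file: /etc/apt/sources.list.d/zabbix.list
        - key_url: http://repo.zabbix.com/zabbix-official-repo.key

    zabbix-agent:
      pkg.installed:
        - version: '1:3.2.1-1+xenial'    # placeholder version string
        - require:
          - pkgrepo: zabbix-repo
        - watch_in:
          - service: zabbix-agent-service

    zabbix-agent-service:
      service.running:
        - name: zabbix-agent
        - enable: True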
23:08 madboxs joined #salt
23:11 Xenophon1 joined #salt
23:12 jas02 joined #salt
23:12 nicksloan joined #salt
23:24 pppingme joined #salt
23:25 bfrog_ joined #salt
23:29 madboxs joined #salt
23:29 ivanjaros joined #salt
23:30 djgerm joined #salt
23:30 djgerm hello! is saltstack enterprise supported on Ubuntu 16?
23:34 Praematura_ joined #salt
23:38 voxpop joined #salt
23:39 ProT-0-TypE joined #salt
23:39 jas02 joined #salt
23:44 tmkerr joined #salt
23:50 madboxs joined #salt
23:51 chowmeined joined #salt
23:53 ProT-0-TypE joined #salt
23:54 JPT joined #salt
23:54 scsinutz joined #salt
