
IRC log for #salt, 2015-03-16


All times shown according to UTC.

Time Nick Message
00:05 mlesyk joined #salt
00:08 mlesyk joined #salt
00:09 DaveQB So I found out the hard way that lv_present uses -L for the size argument, not -l. So 100%FREE, for example, doesn't work.
00:20 mlesyk joined #salt
00:28 bones050 joined #salt
00:36 mlesyk joined #salt
00:39 jerematic joined #salt
00:41 mlesyk joined #salt
00:42 bluenemo_ joined #salt
00:42 mlesyk joined #salt
00:44 thayne joined #salt
00:49 mlesyk joined #salt
00:50 ajw0100 joined #salt
00:50 funzo joined #salt
00:50 bwebb joined #salt
00:54 subsignal joined #salt
01:02 mlesyk joined #salt
01:05 vectra joined #salt
01:13 UForgotten joined #salt
01:49 nilptr joined #salt
01:49 otter768 joined #salt
01:53 iggy rubenb: you could look into ext_pillar and pillar wheels
01:59 Edgan joined #salt
02:00 cberndt joined #salt
02:03 subsignal joined #salt
02:07 TyrfingMjolnir joined #salt
02:15 malinoff joined #salt
02:15 lietu joined #salt
02:21 michelangelo joined #salt
02:23 catpigger joined #salt
02:28 hasues joined #salt
02:28 hasues left #salt
02:31 Singularo_ joined #salt
02:32 aparsons joined #salt
02:34 teskew1 joined #salt
02:38 evle joined #salt
02:53 schristensen joined #salt
02:54 zwi joined #salt
03:05 favadi joined #salt
03:15 eyeball_01 joined #salt
03:16 eyeball_01 hey newb question here. i keep getting Error parsing configuration file: /etc/salt/minion - while parsing a block mapping   in "<string>", line 16, column 1:     master: 10.0.2.15 on all my minions
03:23 raygunsix joined #salt
03:35 bfoxwell joined #salt
03:36 eyeball01 joined #salt
03:36 eyeball01 hey newb question here. i keep getting Error parsing configuration file: /etc/salt/minion - while parsing a block mapping   in "<string>", line 16, column 1:     master: 10.0.2.15 on all my minions
03:42 iggy eyeball01: try pasting your config file somewhere
03:44 eyeball01 https://gist.github.com/wolfman2g1/e19eefb14d8ad11ecaed
03:44 eyeball01 i was initially able to accept the keys but that's it
03:50 raygunsix joined #salt
03:52 nene left #salt
03:54 iggy that's the whole config file?
03:54 iggy and you should probably paste everything in /etc/salt/minion.d/
03:55 iggy feel free to strip out comments
03:58 eyeball01 well i managed to fix one minion. there was a typo in another part of the minion file
03:59 eyeball01 also there isn't anything in /etc/salt/minion.d/
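
For reference, /etc/salt/minion is a plain YAML block mapping, and the "while parsing a block mapping" error almost always means a key above the reported line is mis-indented or missing its value. A minimal sketch, reusing the master address from the paste above (the id line is hypothetical and defaults to the hostname if omitted):

    # /etc/salt/minion -- every top-level key starts in column 1
    master: 10.0.2.15        # address of the salt-master
    id: web01.example.com    # hypothetical; omit to fall back to the hostname
    log_level: warning
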
04:16 catpig joined #salt
04:24 viq joined #salt
04:29 Furao joined #salt
04:32 Guest70 joined #salt
04:38 mbrandeis hi... i'm new to salt. when i run "salt-master --log-level=all" it looks like the master keeps restarting, or at least re-reading its master config file. salt commands (like test.ping) take forever to run while the server restarts itself. is this normal behavior, or is there somewhere i can look for an indication as to why it is restarting constantly?
04:42 iggy it will do certain things (reading grains, config, fileserver.update, etc.) periodically
04:43 iggy some examples would help
04:52 mbrandeis what kind of examples would help? when i run "time" on the test.ping it takes 1m 30s, and in that time the salt-master reloads 2 or 3 times based on seeing messages about adding modules, unloading modules, authenticating the client, the doing it all again, and then executing the test.ping. i'm testing it out on a mac using brew to install, and vagrant as a minion.
04:52 Furao on startup master spawn multiple workers
04:53 mbrandeis is 1m 30s typical for a test.ping?
04:53 Furao and those workers might look like master restart
04:53 Furao no it’s not, unless you test.ping immediately after master is started
04:54 aparsons joined #salt
04:54 Furao which takes time: the master itself has to start, spawn workers, init and listen for connections from minions; if you have a lot of them it needs some time
04:55 iggy 90s could be normal if you have dead minions
04:55 mbrandeis salt-master has been running for 1.5 hours.
04:55 mbrandeis just one minion
04:58 mbrandeis here is a small snippet of the debug output right after a test.ping where it reloads. the master thinks the server took 0.009 seconds, which would be nice.  http://pastebin.com/ZjX7wQcS
05:00 iggy I have no experience with osx minions
05:00 mbrandeis its an osx master
05:01 iggy unsupported (afaik)
05:02 mbrandeis http://docs.saltstack.com/en/latest/topics/tutorials/walkthrough_macosx.html
05:02 mbrandeis may not be "supported" as in, you are on your own, but they have a walkthrough
05:02 iggy on a VM
05:03 forrest joined #salt
05:03 iggy yeah, nfc... I didn't even know that was possible
05:03 iggy might try the mailing list
05:04 aparsons_ joined #salt
05:04 mbrandeis i'm going to just dump this route as not being supported and build a virtual box linux instance to be my test salt-master
05:04 mbrandeis thx for your time guys
05:12 teskew1 joined #salt
05:17 justyns joined #salt
05:18 mgw joined #salt
05:19 felskrone joined #salt
05:19 catpigger joined #salt
05:21 justyns joined #salt
05:36 funzo joined #salt
05:46 ramteid joined #salt
05:47 cberndt joined #salt
05:59 Ryan_Lane joined #salt
06:02 aparsons joined #salt
06:08 yomilk joined #salt
06:11 nexsja joined #salt
06:16 I3olle joined #salt
06:22 jacky joined #salt
06:23 Guest92697 hi, i wrote a salt execution module , but the minion log said : NameError: global name '__salt__' is not defined
06:24 mosen thats pretty strange
06:24 mosen did one of your imports fail somehow?
06:25 Guest92697 mosen: I just import several common modules, like ,import os import json import traceback
06:25 mosen ahh ok
06:25 mosen Guest31320: nothing interesting when running with -l debug?
06:26 mosen crap how many guests are there
06:26 AviMarcus joined #salt
06:27 Guest92697 mosen: yes, nothing interesting,
06:28 Guest92697 mosen: [WARNING ] Failed to import module deployer, this is due most likely to a syntax error. Traceback raised:
06:28 Guest92697 mosen: and the following is NameError: global name '__salt__' is not defined
06:35 tmh1999 joined #salt
06:35 jackywu joined #salt
06:36 tmh1999 joined #salt
06:37 tmh1999 joined #salt
06:38 tmh1999 joined #salt
06:46 krelo joined #salt
06:55 enarciso joined #salt
07:00 jeddi joined #salt
07:02 Plotek joined #salt
07:04 lietu joined #salt
07:06 AndreasLutro joined #salt
07:15 mikkn joined #salt
07:20 lumu_ left #salt
07:23 Lightsword joined #salt
07:23 cberndt joined #salt
07:23 mattiasr joined #salt
07:26 Lightsword I’m using vagrant and am looking to move from a shell provisioner to a saltstack one, my application uses a c backend with php frontend in addition to nginx/php5-fpm for the webserver, the database is postgres, are there any good example projects or good templates that I should start from?
07:29 TinuvaMac joined #salt
07:30 flyboy joined #salt
07:36 mike25de left #salt
07:36 mike25de joined #salt
07:38 KermitTheFragger joined #salt
07:38 krelo joined #salt
07:44 AndreasLutro Lightsword: https://github.com/saltstack-formulas search for nginx, php, postgres
07:44 AndreasLutro Lightsword: the formula examples are somewhat overcomplicated so you might not want to recreate them exactly, but you can learn a lot from them
07:45 nilptr joined #salt
07:47 nilptr joined #salt
07:51 Auroch joined #salt
07:52 spookah joined #salt
07:53 Lightsword AndreasLutro, whats the standard method for using the formulas? I have my vagrant git repo which currently has shell scripts that in turn clone my application repo when I run vagrant up. should I just copy the formulas into my vagrant repo or should I be using them as submodules or something else?
07:53 linjan joined #salt
07:57 lietu joined #salt
07:57 AndreasLutro Lightsword: I would just read them for inspiration and then create my own states after understanding what's going on
07:57 Sacro joined #salt
07:57 AndreasLutro definitely don't just copy them and use them as is, that'll only hinder your learning process - but maybe that's not a priority of yours
08:00 Furao those formulas are good if you only care about installing something; they don't cover all the other aspects of managing infra, which for some are more important than installing something
08:00 tmh1999 joined #salt
08:01 hebz0rl joined #salt
08:01 Lightsword Yeah, I'm kind of looking for something that can manage this c app (which is in continuous development and is usually compiled on the production servers). most everything else is fairly static, although being able to reconfigure easily is something I'm looking for as well.
08:02 trikke joined #salt
08:04 Furao Lightsword: https://github.com/bclermont/states/tree/master/states/postgresql/server is a formula i wrote > 3 years ago; it might not use newer features that make formulas better, but it used to work.
08:07 krelo joined #salt
08:13 kawa2014 joined #salt
08:16 lb1a joined #salt
08:20 wincyj joined #salt
08:20 eseyman joined #salt
08:20 intellix joined #salt
08:21 wnkz joined #salt
08:24 nilptr joined #salt
08:27 ekle joined #salt
08:33 hger joined #salt
08:37 AxelFooley joined #salt
08:37 nilptr joined #salt
08:38 hger Question: I want to programmatically ask the salt-master for know minions from a kickstart install of centos. We use salt-stack for orchestration and need to remove the old salt-minion certificate if we do a reinstall of an existing server. So I will prompt the user reinstalling the server at install time to remind him that he needs to remove the minion from the master. Is there already a service that I can call and ask about know minion host
08:39 Furao mine
08:39 hger replace know with known in above question....
08:39 Furao i use salt mine to do other “autodiscovery” tasks such as for monitoring
08:39 hger salt mine?
08:40 hger Yes I see. good suggestion
08:40 Furao we need a bot that link to doc
08:40 Furao http://salt.readthedocs.org/en/latest/topics/mine/index.html
08:42 wnkz joined #salt
08:42 hger however this would require the salt-minion to be installed?
08:43 Furao ah yes hehe sorry that won’t help you
08:43 hger so the salt-minion needs to be part of the kickstart
08:43 hger in the post chrooted env it could be available
08:43 hger but not registered with the master
08:43 Furao mine won’t work at this point
08:44 Furao are those physical servers or vms? salt-cloud --destroy removes the old minion key
08:44 hger yes, that's the problem that keeps coming around: a new host with the same hostname needs to register and cannot
08:44 hger they are both and solution should fit both
08:44 kawa2014 joined #salt
08:46 Furao well… there is a few hacky way to do that
08:46 hger Yes I tried a hack to publish a list using cron job and httpd
08:47 hger would not be current; always a time delay if the cron job does not run continuously
08:47 Furao "salt-cloud --destroy" for VMs and a human process for physical boxes?
08:47 manishr joined #salt
08:48 hger as part of decommission? good suggestion
08:48 Furao yes, I know humans are usually bad at this but it's hard to automate something that involves human intervention and physical removal of a server :)
08:48 hger but we are severely lacking in our decommissioning, as most are I guess
08:49 Furao or have dynamic minion name? such as MAC of eth0 ?
08:49 enarciso joined #salt
08:49 hger we have a very static setup and not complete control of the network setup
08:50 hger MAC registerd = Static IP
08:50 hger No MAC no network
08:50 hger :(
08:50 Furao the eth0 mac can be used as the minion id
08:50 hger aha
08:50 hger instead of hostname
08:50 hger ?
08:51 Furao I used that trick for a client with small linux boxes installed in their shops and hundreds of minions
08:51 Furao as it was also printed on the bottom of the box
08:51 Furao yeah hostname == minion id == eth0.mac.replace(“:”, “”)
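
A minimal sketch of the trick Furao describes, assuming Linux sysfs paths and meant to run from the kickstart %post before the minion first starts (the script itself is an assumption, not his code):

    #!/usr/bin/env python
    # Derive the minion id from the eth0 MAC (colons stripped) and persist it,
    # so a reinstalled box always comes back under the same hardware-derived id.
    with open('/sys/class/net/eth0/address') as f:
        mac = f.read().strip()
    with open('/etc/salt/minion_id', 'w') as f:
        f.write(mac.replace(':', '') + '\n')
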
08:53 hger good suggestion. How would one handle all the minions registered on the master that get left behind? The list would potentially keep growing
08:54 Furao use salt mine to have a function that every 6 hours (or 12, 24) to send the actual timestamp
08:54 Furao you can create a _modules/imalive.py with a import datetime\ndef imalive(): return datetime.datetime.now()
08:55 hger yes I was thinking about something like that: if a client fails to report back in a week, delete it. Great suggestion! I will attempt to incorporate it. Thank you for your time! have a great day
08:55 Furao and on the minion that run on the master, get all mine data for all minions and if it’s (datetime.datetime.now() - mine_data[minion_id]).seconds > 86400: consider dead
08:56 Furao it's hacky but tolerable compared to other possible solutions
08:56 hger yes seems not so bad. And would solve my issue! thanks!
08:56 Furao np!
08:57 Furao i have actually a monitoring check that does something like that
08:57 hger wantto share?
08:57 Furao looking if that is true
08:58 hger hehe
08:58 Furao https://doc.robotinfra.com/salt/master/doc/monitor.html#salt-master-mine
08:59 Furao not exactly, this is to handle failure when salt-cloud can’t create a minion
08:59 Furao and we end with key in master but no VMs deployed
09:00 hger ok
09:00 hger have a good day
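
A minimal sketch of the heartbeat approach Furao outlines, assuming a custom execution module at _modules/imalive.py (the file name and the stale() helper are hypothetical, and it stores epoch seconds rather than the datetime he typed, to keep the comparison trivial). imalive() is published through the mine on a schedule; stale() runs on the minion that lives on the master:

    # _modules/imalive.py
    import time

    def imalive():
        '''Heartbeat for the salt mine; schedule it via mine_functions.'''
        return time.time()

    def stale(max_age=86400):
        '''List minion ids whose last imalive report is older than max_age seconds.'''
        now = time.time()
        return [minion_id
                for minion_id, stamp in __salt__['mine.get']('*', 'imalive.imalive').items()
                if now - stamp > max_age]

Each minion then needs something like "mine_functions: {imalive.imalive: []}" plus a mine_interval in its config or pillar, and anything stale() returns is a candidate for salt-key -d.
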
09:01 badon joined #salt
09:04 asaladin_ joined #salt
09:07 oliver_l2c joined #salt
09:09 losh joined #salt
09:09 nilptr joined #salt
09:12 losh_ joined #salt
09:14 32NABJXY3 joined #salt
09:17 pf_moore joined #salt
09:21 badon_ joined #salt
09:25 iromli joined #salt
09:27 manishr Hello Guys
09:27 manishr I am having an issue while defining an environment. I am getting this error "Specified SLS hosts in saltenv base is not available on the salt master"
09:28 manishr Can anyone please help me out?
09:32 Xevian joined #salt
09:33 nexsja joined #salt
09:34 jrluis joined #salt
09:35 jtang joined #salt
09:37 linjan joined #salt
09:38 babilen manishr: Why do you think that it should be available? Could you paste relevant parts of your directory structure, master configuration and top.sls (along with hosts.sls) to http://refheap.com ?
09:41 manishr Yes Thanks. I will paste it
09:41 sieve joined #salt
09:42 mgw joined #salt
09:43 paulm- joined #salt
09:44 lietu joined #salt
09:44 bhosmer joined #salt
09:49 N-Mi_ joined #salt
09:50 everbird joined #salt
09:53 jhauser joined #salt
09:56 johtso joined #salt
09:57 manishr @babilen, This is the link
09:57 manishr https://www.refheap.com/98496
10:01 JlRd joined #salt
10:01 manishr I have a few states for base and a few for other environments. As far as I know, base states should be executed for all the machines as well as the ones from their respective environments. But what is happening here is that when I remove the base section in the top.sls file it works, but if I include it it does not work.
10:04 peters-tx joined #salt
10:04 manishr In the end what I want is my common/base states should be available to all the servers irrespective of the environment
10:06 badon__ joined #salt
10:13 aquassaut joined #salt
10:14 Samuel_ joined #salt
10:14 Samuel_ Hello
10:16 babilen manishr: You have no "hosts" SLS file, but only hosts.old/hosts (which is missing the ".sls" and the directory shouldn't really contain dots in its name)
10:16 bluenemo joined #salt
10:18 babilen manishr: Assuming you have the actual states in hosts.old/init.sls you probably want to rename hosts.old to hosts
10:19 manishr let me try it
10:21 rjc joined #salt
10:21 manishr @babilen, I completely removed hosts entry from top.sls and reran state.highstate on minion but same error persists
10:22 babilen manishr: Show me (and this is IRC not twatter so you don't have to use @ to address people)
10:22 manishr sorry
10:22 babilen No problem :)
10:26 Firewalll joined #salt
10:28 manishr babilen, https://www.refheap.com/98498
10:29 jppp joined #salt
10:31 Lightsword anyone know what this error message means when running salt in vagrant? https://gist.github.com/jameshilliard/b8dcac4deb3ce2a511b5
10:31 denys joined #salt
10:33 babilen manishr: And the exact error you get?
10:33 AndreasLutro Lightsword: it couldn't find a top.sls in your salt states directory (/srv/salt by default)
10:34 Schmidt joined #salt
10:34 AndreasLutro Lightsword: a tip - you can `vagrant ssh` into the box then run `sudo salt state.highstate` to re-provision faster
10:34 Lightsword does this vagrant file look sensible? https://gist.github.com/jameshilliard/4c12f0ad8bc31909cb50
10:34 Auroch joined #salt
10:34 manishr babilen, the exact error is https://www.refheap.com/98499
10:35 Lightsword I’m not sure which vagrant folder it should be in though
10:35 babilen manishr: How are you running this?
10:35 Lightsword odd, maybe vagrant isn’t installing salt right, I’m getting “sudo: salt: command not found"
10:35 AndreasLutro Lightsword: ./salt/roots/top.sls relative to your Vagrantfile
10:35 manishr # salt-call state.highstate -l debug
10:35 AndreasLutro Lightsword: sorry, salt-call
10:36 AndreasLutro not salt
10:36 babilen manishr: Could you try running it on your master? (i.e. "salt 'theminion' state.highstate")
10:36 intellix joined #salt
10:36 manishr Like I said if I completely remove the base env it will work like a charm.
10:36 funzo joined #salt
10:37 manishr LET me try
10:37 Lightsword AndreasLutro, what should be in top.sls?
10:38 manishr babilen, from the master as well I am getting the same error
10:39 AndreasLutro Lightsword: http://docs.saltstack.com/en/latest/ref/states/top.html
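
A minimal top.sls sketch for the layout being discussed here; it lives at the root of the configured file_roots (salt/roots/top.sls in this Vagrant setup), and the state names are hypothetical, each needing a matching nginx/init.sls, php/init.sls and so on:

    # salt/roots/top.sls -- which minions get which states
    base:
      '*':
        - nginx
        - php
        - postgres
        - myapp
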
10:40 manishr Error from the master https://www.refheap.com/98500
10:41 babilen manishr: Could you run "salt 't-saltminion' state.show_top" and show me the output?
10:43 babilen manishr: Also: Please grep for "hosts" in your base environment files, something might include it still
10:43 manishr This is the output of state.show_top https://www.refheap.com/98501
10:45 babilen A "salt-run fileserver.update" might also be a good idea, but my guess is that you have references to "hosts" somewhere
10:48 dopesong joined #salt
10:49 manishr babilen, I have found the problem and rectified the error. Now everything is running fine
10:49 babilen manishr: What was it?
10:50 Lightsword AndreasLutro, I’m a little confused about how paths get specified, can the top.sls YAML file specify another top file in a lower folder? the repo I was basing my structure off of doesn’t have a top.sls file in the roots directory https://github.com/wunki/django-salted/tree/master/salt/roots
10:50 manishr I had included salt state in my salt-minion and sendmail states which was causing the issue. I should have checked properly. Thank you very much for your help.
10:51 giantlock joined #salt
10:51 manishr I had included hosts states
11:03 AndreasLutro Lightsword: https://github.com/wunki/django-salted/blob/master/Vagrantfile#L15 is different from yours
11:05 sieve joined #salt
11:09 evle1 joined #salt
11:12 Lightsword yep, that was it, got it to install postgres and create a database :)
11:16 bhosmer joined #salt
11:24 matthew-parlette joined #salt
11:32 kiorky joined #salt
11:39 hobakill joined #salt
11:40 \ask joined #salt
11:42 Furao joined #salt
11:42 fredvd joined #salt
11:44 irctc511 joined #salt
11:46 matthew-parlette joined #salt
11:46 jespada joined #salt
11:48 aparsons joined #salt
11:50 wincyj joined #salt
11:52 mgw joined #salt
11:55 bluenemo joined #salt
11:59 badon_ joined #salt
12:01 rvankleeck joined #salt
12:05 linjan joined #salt
12:07 badon joined #salt
12:11 rvankleeck I'm attempting to put several salt-master instances behind a proxy that will port forward based upon request name (e.g. salt-master1.example.com, salt-master2.example.com, etc.) to a proper high port on the back end. Does anyone have any experience with something similar?
12:15 CeBe joined #salt
12:18 nilptr joined #salt
12:21 funzo joined #salt
12:21 sieve joined #salt
12:24 paulm- joined #salt
12:27 ramteid joined #salt
12:29 bluenemo joined #salt
12:33 stephanbuys joined #salt
12:33 babilen joehh: Any chance for 2014.7.2 today? I need https://github.com/saltstack/salt/pull/18915 urgently and would really prefer it if I don't have to roll it out manually
12:34 stephanbuys hi all, anyone look at stackstorm yet? Thoughts?
12:35 cmcmacken joined #salt
12:38 seshan joined #salt
12:41 bluenemo joined #salt
12:43 nilptr joined #salt
12:44 enarciso joined #salt
12:50 subsignal joined #salt
12:51 che-arne joined #salt
13:00 jeremyr joined #salt
13:01 monkey66 joined #salt
13:02 tkharju joined #salt
13:03 bhosmer joined #salt
13:05 monkey661 joined #salt
13:05 monkey66 joined #salt
13:09 monkey66 joined #salt
13:10 \ask joined #salt
13:10 Sam_____ joined #salt
13:11 Sam_____ Hy. I have a Salt question.
13:11 Sam_____ It's possible to mount an ISO on a windows minion ?
13:17 I3olle_ joined #salt
13:17 jerematic joined #salt
13:27 teskew1 joined #salt
13:28 TyrfingMjolnir joined #salt
13:32 jdesilet joined #salt
13:33 MWheelz joined #salt
13:33 racooper joined #salt
13:33 racooper joined #salt
13:36 meylor joined #salt
13:36 cheus joined #salt
13:36 mpanetta joined #salt
13:36 paulm- joined #salt
13:37 erjohnso joined #salt
13:37 meylor with jinja templating. if you have a parent template and want to import a child template, is {% block child_name %}{% endblock %} what you'd want.
13:38 racooper joined #salt
13:38 Deevolution joined #salt
13:38 meylor i guess is {% extends "parent_name" %} required?
13:39 ndrei joined #salt
13:40 elfixit joined #salt
13:40 dyasny joined #salt
13:42 \ask joined #salt
13:44 dyasny joined #salt
13:44 TinuvaMac joined #salt
13:48 andrew_v joined #salt
13:49 t0rrant joined #salt
13:49 perfectsine joined #salt
13:50 mephx joined #salt
13:58 kaptk2 joined #salt
13:59 timoguin joined #salt
14:00 wincyj hello
14:00 wincyj i try to run cmd on the remote machine
14:00 wincyj extract_isp:
14:00 wincyj cmd:
14:00 wincyj - run
14:00 wincyj - name: /tmp/extract.sh
14:00 wincyj the file is present
14:00 wincyj when i run it manually it works
14:00 wincyj but salt seems to not to run it
14:00 wincyj what am i doing wrong
14:00 wincyj ?
14:01 vincent_vdk joined #salt
14:01 Aikar your working directory is probally the issue
14:02 debian112 joined #salt
14:02 wincyj does this work like that: i cp script on the remote host
14:02 wincyj and then provide absolute path to it as i pasted above?
14:02 malinoff joined #salt
14:04 bastion1704 joined #salt
14:06 thayne joined #salt
14:06 zwi joined #salt
14:06 wincyj ok missed cdw
14:06 wincyj ...
14:06 wincyj cwd*
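
The fixed state, as a minimal sketch: cmd.run with an explicit cwd, or cmd.script to push and run the script from the fileserver in one step (the salt:// path is hypothetical):

    extract_isp:
      cmd.run:
        - name: /tmp/extract.sh
        - cwd: /tmp          # run from the directory the script expects

    # or skip the separate copy step entirely:
    # extract_isp:
    #   cmd.script:
    #     - source: salt://isp/files/extract.sh
    #     - cwd: /tmp
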
14:07 jerematic joined #salt
14:10 meylor I'm trying to use Jinja templating with Saltstack. I have a parent template and I'm trying to include a child template with {% block child_template_name %}{% endblock %} but it never gets included. Am i missing something?
14:14 bluenemo joined #salt
14:16 wnkz joined #salt
14:18 Andre-B joined #salt
14:20 subsigna_ joined #salt
14:20 wnkz Hey, I have a Salt + Docker(io) formula in which I have pulled / installed / running states and each watches the previous step. So when a new image is pulled, new container is created and run ; the problem is that my existing previous container gets killed instead of being stopped properly ; I can't find anything relevant in the code (https://github.com/saltstack/salt/blob/v2014.7.1/salt/modules/dockerio.py) does anyone have
14:20 wnkz a clue ?
14:22 Deevolution meylor:  For that kind of thing I usually use an {% include FILE %}
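
To make the distinction concrete, a minimal sketch with hypothetical file names: {% block %} is only filled in when a child template {% extends %} the parent and the child is the file actually being rendered, whereas {% include %} simply splices a fragment into whatever is being rendered, which is usually what an SLS file wants:

    {# parent.jinja -- a block is a named hole; rendered on its own it stays empty #}
    {% block extra %}{% endblock %}

    {# child.sls -- must extend the parent and be the file salt actually renders #}
    {% extends "parent.jinja" %}
    {% block extra %}...child content...{% endblock %}

    {# the simpler route for most states #}
    {% include "snippets/common.sls" %}
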
14:24 oldmantaiter joined #salt
14:24 ek6 joined #salt
14:26 intellix joined #salt
14:27 DaveQB joined #salt
14:28 raygunsix joined #salt
14:29 stej joined #salt
14:31 oliver_l2c joined #salt
14:32 otter768 joined #salt
14:34 Brew joined #salt
14:34 \ask joined #salt
14:38 timoguin_ joined #salt
14:46 \ask joined #salt
14:48 numkem joined #salt
14:49 numkem joined #salt
14:51 jerematic joined #salt
14:53 Nazzy joined #salt
14:53 Brew joined #salt
14:54 jalbretsen joined #salt
14:56 mlesyk joined #salt
14:56 spookah joined #salt
14:56 stephanbuys joined #salt
14:56 murrdoc joined #salt
14:57 murrdoc has anyone been able to get pkgrepo.managed and file.directory to work together with a clean: True in the directory
14:57 Nazzy am I imagining things... didn't we used to have a state module for setting minion config options?
14:58 murrdoc does file.directory recognize pkgrepo.managed files as files that should stay ?
14:59 mlesyk joined #salt
14:59 XenophonF joined #salt
15:00 XenophonF can anyone here provide an example of a mysql_* state that uses connection_args?
15:00 kermit joined #salt
15:00 mlesyk save
15:01 sieve joined #salt
15:01 scbunn joined #salt
15:01 XenophonF or is it as simple as adding user/pass/host/etc. arguments to the state?
15:01 murrdoc yup
15:01 paulm- How should I deploy environment data for an application? I'd prefer to avoid setting environment variables in the system and just store configuration in a suitable location instead
15:01 perfectsine_ joined #salt
15:02 schristensen joined #salt
15:02 Nazzy guess mlesyk failed the DC on his save roll then
15:02 XenophonF I ask because there's at least one case where the arguments for the state conflict with what could go into connection_args (http://docs.saltstack.com/en/latest/ref/states/all/salt.states.mysql_grants.html)
15:02 Furao joined #salt
15:02 raygunsix joined #salt
15:06 XenophonF like, is https://bpaste.net/show/ea55f16f80c5 how one would use connection_args?
15:06 dkrae joined #salt
15:07 XenophonF according to salt/modules/mysql.py's _connarg function, connection_args is a dict
15:07 XenophonF i'm just not sure i'm RTFS-ing it right
15:07 XenophonF brb
15:09 \ask joined #salt
15:12 ajolo_ joined #salt
15:14 jerematic joined #salt
15:15 lb1a is there some kind of "report" generating tool that can render some kind of quick overview over all managed machines out of the info accumulated by salt? like a quick inventory of all managed machines?
15:17 igorwidl lb1a: salt-run manage.down and salt-run manage.up will show you which minions are up and down
15:17 lb1a igorwidl, i meant more like a "sysinfo" report like, what OS is present, what ressources available etc
15:17 seanz joined #salt
15:18 lb1a ip, hdd, ram, os, maybe services running etc....
15:19 lb1a igorwidl,  but thanks ;)
15:20 igorwidl not, that i know of, you might have to make the module yourself
15:20 lb1a ok, i just thought, maybe i dont have to invent the wheel myself :D
15:21 \ask joined #salt
15:21 thedodd joined #salt
15:24 igorwidl If the info you are looking for is in a grain, then you can do salt '*' grains.item os serialnumber <grain> ...
15:24 mlesyk joined #salt
15:25 basepi lb1a: there are different modules for different pieces of information. Disk
15:25 basepi Whoops, stupid phone.
15:26 basepi disk can give you usage, status can give you load and cpu info and other stuff.
15:26 basepi If you want it in a single report you can easily write a custom module which aggregates the data.
15:26 lb1a basepi, yeah that's pretty much what i wanted. so it's coding time :D
15:27 lb1a thanks
15:27 basepi =)
15:27 ajw0100 joined #salt
15:27 Brew joined #salt
15:27 basepi We've considered writing one ourselves, but everyone wants different data in their aggregate report. =P
15:27 basepi And custom modules are so easy it's ridiculous.
15:28 lb1a basepi, yeah i'm new to salt, but it's more or less the first requirement from my boss. to get an overview over all managed machines. in a form that he can comprehend
15:28 lb1a html or pdf or something
15:28 lb1a :D
15:29 basepi Makes sense. Ping me if you run into issues. =)
15:30 basepi Hopefully my talk will go up soon from SaltConf on writing custom modules. It's so much easier than most people assume.
15:30 lb1a basepi, yup i'll dig into it. it might take some time :D
15:30 lb1a basepi, you mean a recording from your talk?
15:30 basepi Yep.
15:30 lb1a basepi, maybe slides available that might help?
15:30 basepi Unfortunately the slides are pretty sparse, I did a lot of live demo.
15:31 basepi But I'll link them anyway, one sec.
15:31 lb1a basepi, where can i look for it?
15:31 teskew1 joined #salt
15:32 hasues joined #salt
15:32 lb1a "Extending SaltStack with Custom Execution and Runner Modules" ?
15:32 Gareth ahoy hoy.
15:32 basepi That's the one. Is it already up? It will eventually be on our YouTube.
15:32 hasues left #salt
15:33 lb1a no, just studied the saltconf agenda
15:33 williamthekid joined #salt
15:33 basepi Ah.
15:33 basepi I'm in a meeting now, I'll link a odf
15:33 basepi Pdf of the slides when I get back to my desk
15:33 lb1a basepi, no hurry
15:33 basepi Ping me if you don't see it in the next hour or so.
15:34 basepi =)
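
A minimal sketch of the sort of custom module basepi describes, assuming it is dropped into _modules/report.py (module, function and key names are all hypothetical); it just aggregates existing grains and execution modules into one dict per minion:

    # _modules/report.py
    def summary():
        '''One-call inventory summary for this minion.'''
        return {
            'os': __grains__.get('osfinger', __grains__.get('os')),
            'ips': __grains__.get('ipv4', []),
            'mem_total_mb': __grains__.get('mem_total'),
            'disk_usage': __salt__['disk.usage'](),
            'load': __salt__['status.loadavg'](),
            'uptime': __salt__['status.uptime'](),
        }

After a saltutil.sync_modules, "salt '*' report.summary --out=json" gives raw data that can be fed into whatever HTML or PDF template the boss prefers.
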
15:34 neogenix joined #salt
15:34 igorwidl left #salt
15:35 jtanner joined #salt
15:36 timoguin joined #salt
15:36 [7hunderbird] joined #salt
15:39 aparsons joined #salt
15:40 __gotcha joined #salt
15:40 __gotcha I am trying to test salt setup.
15:41 __gotcha In CI, my salt master has autosign_file: /etc/salt/autosign.conf
15:41 __gotcha and * in autosign.conf
15:42 iggy sounds secure
15:42 __gotcha any idea why I still get messages like Authentication failed from host 7864213a8ee9, the key is in pending and needs to be accepted with salt-key -a 7864213a8ee9
15:42 jab416171 joined #salt
15:44 jerematic joined #salt
15:45 iggy I've never used autosign, but why not use open_mode?
15:45 __gotcha how can I debug autosign ?
15:45 __gotcha what is open_mode ?
15:46 iggy or auto_accept
15:46 iggy they are described in the docs
15:46 igorwidl joined #salt
15:46 neogenix_ joined #salt
15:46 __gotcha I'll try auto_accept
15:49 overyander I've been having an issue lately where salt-master stops responding and you have to restart the service. This happens every day. The resources on the system are minimal, not showing hardly any usage for ram or cpu. /var/log/salt/master looks good except for a recurring "SaltReqTimeoutError: after 600 seconds (try X of 3)"
15:49 overyander I increased the ulimit a few days ago to see if maybe that would help and set the corresponding value in /etc/salt/master but that doesn't seem to have resolved the issue. i have about 200 minions. i have the master set to use 30 worker threads. any suggestions on where to look or what to check to resolve this?
15:50 iggy define stops responding
15:51 __gotcha iggy: thanks
15:51 smcquay joined #salt
15:52 overyander minions are configured to run highstate upon boot, they don't. if i issue "salt MINION_NAME test.ping" (or any other command) it just sits there. When I press ctrl+c to stop the command, the command just ends and doesn't give the usual salt information about the job still running, here's the JID, etc. "salt-key -L" will still list the keys, but any new keys that are pending acceptance are not listed. as soon as i restart the salt-master service, everything
15:52 overyander starts working again for a while.
15:55 overyander iggy, ^^
15:56 thayne joined #salt
15:57 Furao joined #salt
15:57 aparsons joined #salt
15:58 jdesilet joined #salt
16:02 iggy Does it continue logging anything?
16:02 mikaelhm joined #salt
16:03 overyander iggy, it keeps spamming this in the logs "03-16-2015 03:06:35,236 [salt.config      ][INFO    ] Found minion id from generate_minion_id(): MASTER_SERVER_FQDN"
16:04 smcquay joined #salt
16:04 fredvd joined #salt
16:05 overyander it doesn't show anything different, even if i issue a "salt MINION test.ping"
16:06 iggy I wouldn't expect to see that message in the master log
16:06 hobakill overyander, which version of master/minion ?
16:08 overyander hobakill, currently master is on 2014.7.1. I had the same issue with 2014.7.0 and thought an upgrade would fix it, but it didn't. minions are all 2014.7.0
16:08 overyander I also tried updating the zeromq version on the master to the latest version 4 using http://copr.fedoraproject.org/coprs/saltstack/zeromq4/
16:08 iggy this obviously isn't a general problem (I've never heard of it, and I'm the channel's resident parrot)
16:09 paulm- Can you not access pillar data from pillar files?
16:09 hobakill overyander, FWIW i had similar problems, mostly with windows minions on earlier versions. 2014.7.1 and .2 have really cleaned a lot of that up - if not entirely.
16:09 iggy so I'd start by isolating bits of your config files and your state/pillar trees and see when it stops breaking
16:09 iggy paulm-: generally no
16:09 hobakill but yeah - iggy is right. haven't heard that specific issue. my issues were more general in nature than yours and more well known.
16:10 paulm- iggy: and beyond generally?
16:11 desposo joined #salt
16:11 TyrfingMjolnir joined #salt
16:11 iggy paulm-: it sometimes works, sometimes doesn't... don't... just don't
16:11 overyander all of my minions are windows minions. the minions are on ver 2014.7.0... manually running a highstate on them works fine and doesn't show any errors. i only use state files, haven't started using pillars yet. any suggestions on what to look for in the state files?
16:11 CheKoLyN joined #salt
16:12 hobakill overyander, !!!!!
16:13 hobakill Note
16:13 hobakill The 2014.7.0 installers have been removed because of a regression. Please use the 2014.7.1 release instead.
16:13 iggy A/B testing
16:13 murrdoc someone ping base of pi
16:13 murrdoc i want his slides
16:13 hobakill overyander, put something higher than 7.0 on your windows boxes and you'll be happier. http://docs.saltstack.com/en/latest/topics/installation/windows.html
16:14 overyander i see that. i wonder what the regression was
16:14 dyasny joined #salt
16:14 murrdoc https://github.com/saltstack/salt/issues/17194
16:15 murrdoc http://docs.saltstack.com/en/latest/topics/releases/2014.7.1.html
16:15 murrdoc <3 OSS
16:15 hobakill overyander, RAM leak IIRC but whatever it was it changed everything for my DOZE minions.. i put a fresh copy on each, manually, and it worked like a charm.
16:15 jri joined #salt
16:17 tligda joined #salt
16:20 iggy babilen: you got rid of some necessary functionality with that collectd change (in the df plugin it's customary to only specify one of Device, MountPoint, _or_ FSType... not all of them)
16:23 MatthewsFace joined #salt
16:28 meylor joined #salt
16:30 giantlock joined #salt
16:30 overyander salt-master version 2014.7.2 hasn't been published on epel repositories yet, so the latest version is 2014.7.1-1.el7  is it ok to run 2014.7.1 on master and 2014.7.2 on minions?
16:31 KyleG joined #salt
16:31 KyleG joined #salt
16:31 paulm- I can't use salt 2015 because I get "global name 'msgpack' is not defined"... is this expected behaviour?
16:32 otter768 joined #salt
16:32 hax404 joined #salt
16:34 XenophonF back
16:35 XenophonF so regarding mysql_* states, am i correct in assuming that connection_args is a dict of the keys mysql.user, mysql.pass, etc.?
16:35 _JZ_ joined #salt
16:35 hobakill overyander, i wouldn't. try to keep master ahead of or equal to minion. you can easily get 2014.7.2 from epel-testing
16:35 iggy overyander: should be fine
16:36 NV joined #salt
16:36 iggy paulm-: make sure everything matches version wise... make sure you restart all services... etc
16:37 paulm- iggy: it's this issue: https://github.com/saltstack/salt/issues/20276
16:37 paulm- But apparently it's closed now
16:37 paulm- I guess I have to wait for the next RC
16:37 hax404 joined #salt
16:37 iggy oh, salt-ssh
16:37 iggy you should probably mention that in the future
16:38 paulm- Sorry, I only use salt-ssh so I forget it's a different world for some people
16:38 paulm- ;)
16:38 iggy a lot of us don't have any experience with salt-ssh and it will save time people giving you advice that doesn't apply to salt-ssh use
16:38 paulm- For the most part they are one and the same
16:38 paulm- As they shoudl be
16:39 XenophonF oh never mind - examples given in http://docs.saltstack.com/en/latest/ref/states/all/salt.states.mysql_user.html
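
For the record, the pattern those docs show is to pass the connection settings as extra keyword arguments on the state itself (connection_host and friends), which also sidesteps the clash with mysql_grants' own host argument; a minimal sketch with hypothetical credentials:

    appdb_grants:
      mysql_grants.present:
        - grant: all privileges
        - database: appdb.*
        - user: appuser
        - host: localhost            # the grantee's host
        - connection_host: localhost # how salt connects to MySQL
        - connection_user: root
        - connection_pass: changeme
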
16:39 iggy except for that whole transport layer being totally different and the various other things that aren't the same between them
16:40 beneggett joined #salt
16:41 hal58th joined #salt
16:42 funzo joined #salt
16:42 sieve joined #salt
16:42 MaliutaLap joined #salt
16:47 perfectsine joined #salt
16:49 murrdoc babilen:  pew pew!
16:52 shanemhansen joined #salt
16:52 shanemhansen left #salt
16:53 wendall911 joined #salt
16:56 blue0ctober left #salt
16:58 dave_den joined #salt
17:00 aparsons joined #salt
17:01 Ryan_Lane joined #salt
17:02 iggy babilen: I think I hit everything I saw, but I reserve the right to check any future versions of that PR
17:02 meylor is there a trick to storing data in a pillar (in my case map.jinja) that has line breaks, so that I can use it with file.managed: contents?
17:02 aparsons joined #salt
17:03 meylor I keep getting yaml rendering errors
17:03 iggy it's hard
17:03 jerematic joined #salt
17:04 meylor iggy: is that for me?
17:04 iggy meylor: https://github.com/saltstack-formulas/postgres-formula/pull/44/files
17:04 iggy see if that helps clear things up a little
17:04 stephanbuys joined #salt
17:05 murrdoc https://github.com/saltstack-formulas/postgres-formula/blob/master/pillar.example#L40
17:06 Aikar can someone help us understand the top.sls a little better - it is our understanding that if you put a state in base, in top.sls, files it references will first check the base file path before prod. seems a general recommendation is to leave base blank, and then prod states look in prod file root. ideally, base would be the state every environment has, and in prod it defaults to prod file root first then falls back to base. is this right? does it go base > prod
17:06 Aikar if the state is listed under base or prod > base?
17:11 meylor iggy: thanks… seems like gets me passed the issue, but getting "ParserError: expected '<document start>', but found '<block mapping start>'" as a result
17:12 benegget_ joined #salt
17:12 murrdoc paste your yaml here
17:12 murrdoc http://yaml-online-parser.appspot.com/
17:12 iggy past the issue... paste your states/pillars/etc and the full error somewhere
17:12 fishdust joined #salt
17:12 XenophonF Aikar: my understanding is a little different
17:12 linjan joined #salt
17:13 XenophonF Aikar: http://docs.saltstack.com/en/latest/ref/states/top.html
17:13 jrluis joined #salt
17:13 XenophonF let me show you my top.sls, since it's moderately complicated
17:14 XenophonF https://bpaste.net/show/92b620356c32
17:15 XenophonF i have three environments
17:15 XenophonF base, development, and production
17:15 XenophonF base is a generic catch-all that currently contains only salt formulas that i've forked from https://github.com/saltstack-formulas
17:16 khris joined #salt
17:16 meylor iggy: http://paste.ofcode.org/hJkteGxJXQpgqFwHQvdkfU
17:16 meylor murrdoc
17:16 XenophonF i force minions into the development environment based on the value of a pillar key
17:17 murrdoc why you do it like this meylor
17:17 murrdoc does the cert change by os ?
17:17 meylor it's a generic format that I'm following
17:17 meylor for this specific case no it doesn't
17:18 murrdoc the 'easier' way imho would be to make a certs.yml
17:18 murrdoc then {% import_yaml it %}
17:18 aparsons joined #salt
17:18 murrdoc and then conf.update() the value in
17:18 meylor mmm that would work. i'll just do that
17:18 meylor thanks
17:18 murrdoc yeah
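
Putting the two suggestions together, a minimal sketch (file and key names hypothetical): the multi-line value lives in its own YAML file as a literal block scalar, the SLS pulls it in with import_yaml, and the indent filter keeps the rendered YAML valid when it goes back into file.managed:

    # myformula/certs.yaml -- the | block scalar preserves the line breaks
    ssl_cert: |
      -----BEGIN CERTIFICATE-----
      ...certificate body...
      -----END CERTIFICATE-----

    {# myformula/cert.sls #}
    {% import_yaml "myformula/certs.yaml" as certs %}
    /etc/ssl/certs/app.pem:
      file.managed:
        - contents: |
            {{ certs.ssl_cert | indent(8) }}

If the blob already lives in pillar, file.managed's contents_pillar argument avoids the re-indenting dance entirely.
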
17:19 XenophonF Aikar: i purposefully keep base empty except for third-party salt formulas
17:19 XenophonF or truly generic stuff that i absolutely want to be the same everywhere, like the minion config
17:20 hebz0rl joined #salt
17:20 aparsons joined #salt
17:20 XenophonF Aikar: everything I write goes into either the development or production environments (== Git branches in my setup)
17:21 XenophonF Aikar: does that help?
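
Since the paste link will not live forever, a minimal sketch of the shape XenophonF describes (three environments keyed off git branches, with a pillar key pulling minions into development); the matchers and state names are assumptions, not his actual file:

    # top.sls
    base:                                   # third-party formulas only, applied everywhere
      '*':
        - salt.minion
    development:
      'I@environment:development':
        - match: compound
        - webapp
    production:
      'not I@environment:development':
        - match: compound
        - webapp
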
17:22 ajw0100 joined #salt
17:22 dopesong_ joined #salt
17:23 aparsons joined #salt
17:23 murrdoc left #salt
17:23 murrdoc joined #salt
17:26 aparsons joined #salt
17:27 khris joined #salt
17:27 hobakill does UtahDave ever show his mug around these parts any more? feels like i haven't seen him in forever.
17:27 murrdoc hes moved on
17:28 murrdoc to a go based config management system
17:28 murrdoc python based ones are so 2014
17:28 murrdoc (totes kidding )
17:28 beneggett joined #salt
17:28 hobakill ha!
17:28 brick joined #salt
17:29 benegget_ joined #salt
17:30 meylor joined #salt
17:36 paolo left #salt
17:36 paolo joined #salt
17:36 kmwhite joined #salt
17:37 kmwhite Hey. I found a few syntactic errors on /quit
17:37 davet joined #salt
17:38 kmwhite joined #salt
17:38 basepi lb1a: http://cloud.basepi.net/1q2O2k3G3l0n
17:38 sieve joined #salt
17:41 forrest joined #salt
17:41 UtahDave joined #salt
17:42 wincyj joined #salt
17:42 Hybrid2 joined #salt
17:42 TyrfingMjolnir joined #salt
17:47 loz-- joined #salt
17:48 curiousbiped joined #salt
17:48 monkey66 joined #salt
17:58 loz-- joined #salt
17:59 forrest joined #salt
18:02 teskew2 joined #salt
18:03 baweaver joined #salt
18:03 timoguin joined #salt
18:07 mattiasr joined #salt
18:10 hal58th joined #salt
18:13 lietu joined #salt
18:13 sieve left #salt
18:14 jerematic joined #salt
18:16 MindDrive iggy: [from Friday] tons of inodes left (over 100,000 on the relevant partition), and no duplicate processes running; stopped, cleared and restarted the salt masters (and several minions) many times.  Will be looking into it further today with the help of a coworker.
18:19 iggy Friday was along time ago
18:20 neogenix_ is archive.extracted for tar.gz broken in 2014.7.x?
18:20 tomh- joined #salt
18:20 jerematic joined #salt
18:20 murrdoc basepi:  i do want to see the video with those slides, i am regretting not attending that presentation
18:21 murrdoc thanks for the slides
18:22 ek6 joined #salt
18:23 hal58th neogenix_, someone is always complaining about archive.extracted. I wouldn't be surprised if it had problems.
18:29 MindDrive ARGH... like my coworker said "Leave it until Monday, and it will probably just start working".  Now the manage.down is succeeding and I've got just three hosts non-responsive (out of over 750)... and they're actually physically offline.  *bangs head against desk*
18:30 jespada joined #salt
18:30 TyrfingMjolnir joined #salt
18:30 neogenix_ hal58th: It plainly refuses to work. I'm pretty sure that it's busted.
18:32 monkey661 joined #salt
18:32 jngd joined #salt
18:33 baweaver joined #salt
18:33 otter768 joined #salt
18:36 neogenix joined #salt
18:36 gladiatr joined #salt
18:40 Corey joined #salt
18:42 eyebal01 joined #salt
18:44 basepi murrdoc: they'll go up in the coming weeks. Not sure where mine will be in the release schedule.
18:44 eyebal01 hey there folks. time for your daily newb question.. I'm trying to copy over some config files using a for loop but i'm not sure how to write the source and destination portion.. here is what i have so far (forgive the crappy code) https://gist.github.com/wolfman2g1/375ffa99c09243d8b2c7
18:44 murrdoc basepi:  thanks
18:47 hal58th eyebal01, I don't know where to start… You may want to look at some example states and run through the tutorial.
18:48 hal58th http://docs.saltstack.com/en/latest/topics/tutorials/starting_states.html#adding-configs-and-users and http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.recurse
18:48 MrsButterWorps joined #salt
18:50 hal58th You also misspelled "managed" at the end. You need to put stuff in the "-source: salt://nginx/files".
18:50 hal58th I would also separate the file.managed or file.recurse (which I recommend) into its own state id.
18:50 timoguin joined #salt
18:51 eyebal01 ok
18:51 MrsButterWorps I'm managing a bunch of packages on a minion running redhat using pkg.installed and it appears yum install is being used for the package every time it highstates.  Should this be the case?
18:52 MrsButterWorps i mean it appears to be running yum install every time, ignoring whether or not it has already been installed on the system
18:52 robawt MrsButterWorps: what salt version?
18:52 hal58th Hmmm, maybe it says it's running yum install, but it's really checking to see if it is installed.
18:53 MrsButterWorps 2014.7.0
18:53 MrsButterWorps the reason I'm thinking that is the case is because I'm managing around 50 packages, running highstate in a cron every 15 minutes, and after a day the server was kicked from rhn for "abuse of service"
18:54 MrsButterWorps and looking through the minion log it appeared that yum was being called and receiving the "abuse of service" error during each highstate
18:55 monkey661 left #salt
18:57 I3olle joined #salt
19:07 brick__ joined #salt
19:08 davet joined #salt
19:08 nesv joined #salt
19:10 murrdoc joined #salt
19:13 baweaver joined #salt
19:16 aparsons joined #salt
19:16 denys joined #salt
19:17 aparsons joined #salt
19:17 gscott joined #salt
19:18 baweaver Is there a signal of some type that tells when an async job is done?
19:18 baweaver or what would be a reliable way to tell?
19:20 murrdoc what do u want to do with the information
19:20 baweaver If, for instance, I can't get the hostname of an instance and have to use ipv4 grain search
19:20 ek6 joined #salt
19:20 baweaver it forces me to use an async request
19:21 baweaver (made a rubygem that handles it)
19:21 baweaver (rails implemented, not much choice there)
19:22 baweaver It's admittedly hacky until I can port it off to JS/Node
19:22 ndrei joined #salt
19:24 Lightsword joined #salt
19:26 a1j joined #salt
19:27 a1j Question: I want to have highstate run automatically, but some states (expensive or disruptive) i want to run manually or on a different schedule. Is there any way to specify an alternative top.sls file, or replace that alternative file with some other functionality?
19:29 kmwhite @a1j, you can use state.run (I think) to only launch the specified states at a secondary schedule
19:29 Andre-B joined #salt
19:30 a1j kwork: yes but every server needs different states, and that map is recorded in top.sls. there are many states and many servers.
19:30 a1j kmwhite: i t
19:30 a1j e
19:31 kmwhite http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#salt.modules.state.apply_ actually.
19:31 kmwhite AS for the problem with the differing states, that is a problem.
19:32 kmwhite We just run the highstate via the crontab and ignore their timeliness
19:32 \ask joined #salt
19:33 a1j another question - is there any way to force highstate implicitly on minion join (key acceptance)?
19:33 monkey66 joined #salt
19:34 murrdoc there are few
19:34 murrdoc you can configure a startup state
19:34 a1j kmwhite: you calculate different cron time ahead? Why not use salt scheduler?
19:34 murrdoc or u can use reactor/orchestrate
19:34 edrocks joined #salt
19:34 murrdoc or u can use schedula
19:35 a1j murrdoc: startup state is when the minion starts? that does not work: when the minion starts the key is not accepted yet, and i did not find a "key accepted" event in the reactor system.
19:35 murrdoc http://docs.saltstack.com/en/latest/topics/reactor/#a-complete-example
19:36 ckao joined #salt
19:36 iggy there is a key accepted event (pretty sure)
19:36 a1j murrdoc: oh thanks didnt see that
19:36 murrdoc auth-* events
19:37 a1j murrdoc: but that auth event will be triggered on every salt-minion start? or on first key acceptance only?
19:37 murrdoc first key acceptance
19:38 murrdoc like auth start and stop
19:39 a1j murrdoc: what if i have 2 masters, will it run highstate 2 times simultaneously? Does it have proper locking?
19:39 a1j murrdoc: and BTW does reactor system work now with 2 masters (it was broken a while ago AFAIK)
19:39 murrdoc cant answer second question
19:40 murrdoc that a pi base question
19:40 a1j heheh
19:40 murrdoc and for the 'locking' question
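
A minimal sketch of the reactor wiring from the linked docs: react to the minion start event, which fires once the key is accepted and the minion connects (and again on every restart, so the targeted highstate should be idempotent); paths are hypothetical:

    # /etc/salt/master
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/highstate-new-minion.sls

    # /srv/reactor/highstate-new-minion.sls
    highstate_new_minion:
      cmd.state.highstate:
        - tgt: {{ data['id'] }}
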
19:45 bhosmer__ joined #salt
19:56 kaptk2 joined #salt
19:59 mikaelhm joined #salt
19:59 enarciso joined #salt
20:01 edrocks joined #salt
20:02 nexsja^ joined #salt
20:03 Fiber^ joined #salt
20:03 MatthewsFace joined #salt
20:04 Andre-B_ joined #salt
20:06 murrdoc joined #salt
20:07 aparsons joined #salt
20:07 Habsgoa1ie joined #salt
20:11 bhosmer joined #salt
20:13 theologian joined #salt
20:18 nilptr joined #salt
20:19 jhauser joined #salt
20:24 jrluis joined #salt
20:27 enarciso joined #salt
20:33 ipmb joined #salt
20:34 otter768 joined #salt
20:35 I3olle joined #salt
20:37 Sagane joined #salt
20:38 linjan joined #salt
20:41 kotzen joined #salt
20:42 catpiggest joined #salt
20:43 giantlock joined #salt
20:44 catpig joined #salt
20:49 badon joined #salt
20:50 aparsons joined #salt
20:54 nich0s joined #salt
20:55 thedodd joined #salt
20:59 wincyj joined #salt
21:01 baweaver joined #salt
21:02 aparsons joined #salt
21:02 I3olle joined #salt
21:12 prwilson joined #salt
21:14 aparsons_ joined #salt
21:15 linjan joined #salt
21:23 I3olle joined #salt
21:27 iggy murrdoc: can you look at https://github.com/saltstack-formulas/salt-formula/issues/89 and tell me how I might go about being more clear as to why it's a bad idea to call a file local variable "salt"
21:27 iggy because I'm about to start being an asshole again
21:27 murrdoc haha
21:27 murrdoc ok
21:30 murrdoc i tried
21:32 iggy I think my problem at first might have been that I didn't have ``` around my "code samples"
21:32 iggy so it did that weird thing where it was only showing half of the function call
21:36 Andre-B joined #salt
21:38 murrdoc yeah
21:38 murrdoc i dont know
21:38 murrdoc but for gods sake {{salt}} as a variable name in a template is just lazy
21:38 robawt it's bad
21:38 Ryan_Lane that's why I use pillar
21:38 robawt there's the magic salt object
21:38 robawt people could get confused
21:40 perfectsine joined #salt
21:42 iggy I mean my problem explaining what the issue was
21:43 iggy there is absolutely no reason you should overwrite salt/jinja default variables
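
The anti-pattern in question, sketched with hypothetical pillar keys: once a template rebinds the name, the global dictionary of execution modules is gone for the rest of the file.

    {# bad: shadows the salt global #}
    {% set salt = pillar.get('salt', {}) %}
    {{ salt['grains.get']('os') }}      {# fails: salt is now just a dict from pillar #}

    {# fine: any other name keeps the execution-module wrapper usable #}
    {% set salt_settings = pillar.get('salt', {}) %}
    {{ salt['grains.get']('os') }}
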
21:43 Singularo joined #salt
21:43 murrdoc Ryan_Lane:  i use pillars too
21:43 murrdoc hmm i should write this gist up
21:44 murrdoc one second
21:44 Ryan_Lane {% from 'map.jinja' import pillar with context %}
21:44 iggy that you're going to hell
21:44 iggy -that
21:44 Ryan_Lane :)
21:45 elfixit joined #salt
21:45 iggy I really just opened that bug back up so when I send a PR to fix it, I can get a free bug closure at the same time... I didn't really expect to have to justify not using reserved global names
21:46 kermit joined #salt
21:48 murrdoc https://gist.github.com/puneetk/6336b2905511d6a56a56
21:48 murrdoc thats what i am doing
21:48 murrdoc cookie cutter style
21:49 iggy boo for import
21:50 iggy boo for import + set immediately after
21:50 murrdoc :P
21:50 murrdoc yay for repeatability!
21:51 murrdoc Ryan_Lane:  thoughts ? https://gist.github.com/puneetk/6336b2905511d6a56a56
21:51 murrdoc
21:51 murrdoc iggy:  again this works for me cos i am currently at 18 formulas
21:51 Ryan_Lane heh. I don't know :)
21:51 Ryan_Lane I don't use formulas
21:51 murrdoc so i am not including 18 formula default files in my top.sls
21:51 murrdoc word word ryan
21:52 Ryan_Lane looks reasonable, though
21:52 * iggy makes note not to ever call lyft again... just because of Ryan_Lane
21:52 Ryan_Lane hahaha
21:52 murrdoc #uber all day
21:52 Ryan_Lane http://ryandlane.com/blog/2014/10/08/config-management-code-reuse-isnt-always-the-right-approach/
21:52 murrdoc its not reuse per se
21:52 murrdoc its repeatable
21:52 Ryan_Lane same same :)
21:53 murrdoc i dunno, i think macros in jina when i think reuse
21:53 iggy I'm sure there are people that think the dark side of the force is good too... doesn't mean they are right? (too nerdy?)
21:53 murrdoc giving the other two devs a template to build formulas off of isnt reuse per se
21:53 * murrdoc nitpicks all by himself
21:53 neogenix Ryan_Lane: we're in the same boat, and are avoiding some of the super complex stuff, however reuse of simple stuff is all over the place in our repo.
21:54 Ryan_Lane yeah, I think formulas are a reasonable place to get an idea of how to make a module
21:54 JoeJulian Ryan_Lane does make some odd decisions sometimes. ;)
21:54 murrdoc we all have different problems tho
21:54 Ryan_Lane I make very odd decisions ;)
21:54 murrdoc ryan and iggy have greenfield, masterless-able vm setups
21:54 murrdoc i dont :|
21:55 neogenix heh.
21:55 Ryan_Lane we embed our code into our service repos. most of our repos have <100 lines of salt code
21:55 iggy except I don't use masterless... mostly just so I can say I don't know anything about it when people ask questions
21:55 neogenix One of the things that's been solidly a win for our team was a vagrant in the repo (/vagrant, vagrant up == master + minion)
21:55 murrdoc same reason u avoided 2014.7
21:55 Ryan_Lane (most have ~20 lines)
21:56 Ryan_Lane neogenix: even more fun? now integrate docker with that. do a run + commit to create your docker images
21:56 Ryan_Lane docker with no dockerfiles \o/
21:56 Ryan_Lane run all your stuff in a single vagrant box
21:56 neogenix Ryan_Lane: baby steps yoh! :)
21:56 * Ryan_Lane nods
21:56 Ryan_Lane getting stuff into vagrant is the best first step :)
21:56 nich0s joined #salt
21:57 murrdoc yeah man
21:57 murrdoc i just moved my previous gig to packer + vagrant
21:57 murrdoc new gig, new challenges
21:58 murrdoc it was sweet tho, packer made the vagrants for devs to work on and the vms in openstack for production
21:58 iggy quick question, anybody ever seen a file /etc/salt/minion.d/_schedule.conf ?
21:58 iggy I didn't make it (it looks autogenerated)
21:58 neogenix iggy: no, but it'd be imported, so someone could've put it there by hand I guess.
21:58 murrdoc aws too
21:59 adelcast joined #salt
21:59 murrdoc it could be new
21:59 murrdoc to expose the default schedules
21:59 iggy contents are.... schedule:\n  __mine_interval: {function: mine.update, jid_include: true, maxrunning: 2, minutes: 5}
21:59 neogenix 2015.x ?
21:59 iggy I highly doubt anyone actually wrote that out
21:59 iggy (plus I'm the only one that would have, and I didn't)
21:59 iggy yes
22:00 cmcmacken joined #salt
22:00 neogenix *ponder*
22:00 Gareth iggy: it's auto-generated.
22:00 murrdoc what bout driggy (drunk iggy)
22:01 iggy it wasn't driggy either (it's in qa... driggy doesn't go near qa/prod)
22:01 Gareth iggy: persist function in the schedule.py under utils is what is writing it.
22:01 Gareth Some code Tom added, not really sure what his idea behind it was.
22:01 iggy so the salt-formula needs to stop file.recurse clean: True'ing
22:02 enarciso joined #salt
22:02 neogenix iggy: https://github.com/saltstack/salt/blob/26acc5c1d476284174ba988439a2cc08ce66dc61/salt/utils/schedule.py#L302
22:03 neogenix iggy: part of this commit : https://github.com/saltstack/salt/commit/25b767fb81f8a47db377fbb58882bc3b81b83dc3
22:03 JoeJulian One of my extends isn't getting applied on the first highstate. Should it matter if the state that's getting extended is included via two different paths?
22:03 murrdoc whats wrong with file.recurse clean trueing
22:03 neogenix seems like Tom sneaked something in there.
22:04 iggy murrdoc: it's rm'ing stuff that salt is writing out
22:07 murrdoc yeah thats bad
22:07 murrdoc all files in the dir need to require_in the directory
22:07 murrdoc if they are adding a clean: True to it.
22:08 nich0s joined #salt
22:09 iggy the stupid part is... there's one file in salt://salt/files/minion.d/ ... there's no reason they should even be using a recurse
22:09 Kelsar joined #salt
22:10 iggy maybe they expect people to have their own salt/files/{master,minion}.d directories
22:10 iggy still, clean: True seems a bad idea
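
A minimal sketch of the pattern murrdoc describes for files that salt states do write (it cannot protect files the minion daemon writes on its own, like the auto-generated _schedule.conf above); ids and contents are hypothetical:

    minion_include_dir:
      file.directory:
        - name: /etc/salt/minion.d
        - clean: True                  # anything not accounted for gets removed

    minion_extra_conf:
      file.managed:
        - name: /etc/salt/minion.d/logging.conf
        - contents: 'log_level: warning'
        - require_in:
          - file: minion_include_dir   # marks the file as expected, so clean keeps it
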
22:11 murrdoc my minion conf is basically specifying where the include dir is
22:11 murrdoc allows me to bucket configs by type into the conf.d directory
22:11 baweaver joined #salt
22:12 aquassaut joined #salt
22:12 iggy I suspect that's what most people do when .d dirs are available
22:20 JoeJulian damn. When I bump my log level to debug, the bug goes away. It's a quantum state bug. My favorite.
22:21 murrdoc that means it needs a sleep somewhere
22:22 JoeJulian You would think so, wouldn't you.
22:22 murrdoc i did
22:22 JoeJulian But this isn't that the system isn't ready for the state, it's that the state is getting parsed differently.
22:23 JoeJulian One way, the extend isn't happening, the other it is.
22:25 nich0s joined #salt
22:26 Brew joined #salt
22:34 jeddi joined #salt
22:35 otter768 joined #salt
22:39 N-Mi joined #salt
22:39 N-Mi joined #salt
22:44 jespada joined #salt
22:50 jerematic joined #salt
22:50 adrianhannah joined #salt
22:50 rudi_s joined #salt
22:51 aparsons joined #salt
22:55 jbirdman joined #salt
22:58 jbirdman joined #salt
23:03 BlackMustard joined #salt
23:06 lnr joined #salt
23:06 davet joined #salt
23:09 CeBe joined #salt
23:09 bhosmer_ joined #salt
23:10 BlackMustard joined #salt
23:10 BlackMustard joined #salt
23:11 BlackMustard joined #salt
23:12 BlackMustard joined #salt
23:19 NV joined #salt
23:22 Hazelesque_ joined #salt
23:22 Jahkeup joined #salt
23:26 bhosmer_ joined #salt
23:27 APLU joined #salt
23:29 I3olle joined #salt
23:35 prwilson_ joined #salt
23:36 murrdoc joined #salt
23:37 nilptr joined #salt
23:42 baweaver joined #salt
23:50 nich0s joined #salt
23:56 catpigger joined #salt
23:56 catpig2 joined #salt
