
IRC log for #salt, 2014-09-02


All times shown according to UTC.

Time Nick Message
00:00 dude051 joined #salt
00:06 dude051 joined #salt
00:07 rome joined #salt
00:08 aquinas joined #salt
00:09 aquinas_ joined #salt
00:09 TTimo joined #salt
00:11 fllr joined #salt
00:16 anotherZero joined #salt
00:19 otter768 joined #salt
00:27 joehoyle joined #salt
00:27 bhosmer joined #salt
00:27 dude051 joined #salt
00:28 otter768 joined #salt
00:30 dvestal joined #salt
00:31 rome joined #salt
00:35 iggy so... I'm still having a problem getting a fairly minimal git backend to work. Keeps failing with this error when running state.highstate "TypeError: string indices must be integers, not str" with not much to go on as to why it's failing
00:36 tedski iggy: are you able to fetch from the remote as the user salt runs as?
00:36 iggy yes
00:36 tedski gitpython, pygit or ... the other one?
00:37 iggy "default"
00:37 tedski i've seen that with gitpython before and i'm trying to remember what it was
00:37 iggy I believe gitpython
00:37 __number5__ iggy, that error looks like you have syntax errors in your sls, can you do a state.show_sls just on your states?
00:37 iggy https://github.com/iggy/salt-test/tree/master/srv/salt
00:37 iggy that's the state
00:38 tedski is that a submodule or subtree?
00:38 tedski nevermind
00:39 tedski that is, indeed, a very simple state :)
00:39 thayne joined #salt
00:39 iggy yep ;)
00:40 iggy the real one I'm trying to get to work is more complex
00:40 tedski your url is wrong
00:40 tedski in the readme
00:40 iggy but I broke it down to the bare minimum for testing
00:40 __number5__ iggy: try to rename your 'files' module to something else
00:40 tedski should be: -  git+ssh://git@github.com:iggy/salt-test.git
00:41 iggy docs say that should work
00:41 tedski where?
00:41 iggy but I've tried https and a couple other variations all with the same results
00:41 tedski also, your root is wrong
00:42 iggy enlighten
00:42 iggy the docs also seemed to say that was true
00:43 tedski try this:
00:43 tedski http://pastie.org/9520389
00:43 rome joined #salt
00:43 tedski per http://salt.readthedocs.org/en/latest/topics/tutorials/gitfs.html and http://salt.readthedocs.org/en/v2014.1.4/topics/tutorials/gitfs.html
00:44 tedski see: http://salt.readthedocs.org/en/latest/topics/tutorials/gitfs.html#serving-from-a-subdirectory
00:44 iggy okay, but root should be a per repo thing
00:45 iggy and all my git repos are not going to have the same layout
00:45 tedski let's start there
00:45 iggy so I can test that, but if that works, then it doesn't really solve my problem
00:45 tedski i know the root=path/to/root works on ext_pillar
00:45 tedski but, let's get to a working condition then work towards a solution
00:46 tedski i want to isolate the config as the issue before trying undocumented configs :)
00:46 tedski if you don't mind :)
00:47 iggy that is documented
00:47 tedski i can't find anything in the docs that show your config
00:47 tedski let me know where you saw it
00:47 fllr joined #salt
00:47 iggy I'm not smart enough to come up with this on my own
00:47 iggy oh
00:47 tedski you wouldn't be the first one to misread docs is all i'm saying
00:48 iggy I'm not looking on readthedocs
00:48 tedski i beat you to it numerous times
00:48 iggy http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#per-remote-configuration-parameters
00:48 kingel joined #salt
00:48 tedski thanks
00:49 iggy ooohhhh
00:49 iggy 2014.1.10 != 2014.10
00:49 tedski still should be a slash between host and user in your remotes, though
00:50 iggy I was looking at that "new in 2014.7.0" and thinking "I'm good, I'm running 2014.10.1"
00:50 tedski i.e git@github.com:iggy/salt-test.git: should become git@github.com/iggy/salt-test.git: and ultimately should become git+ssh://git@github.com/iggy/salt-test.git:
00:50 kingel_ joined #salt
00:50 iggy http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#simple-configuration
00:51 iggy git@github.com:user/repo.git
00:51 iggy but as I've said, I've tried a few variations on that url and it doesn't work
00:52 iggy but I think you found the problem... the per repo settings
00:52 iggy unfortunately, that kind of bones us
00:53 tedski thanks for giving me reason to catch up on docs :)
00:53 iggy as we're using a repo with that layout and then we're also using formula repos that are laid out in a more "standard" way
00:53 tedski could you do something with subtrees to solve it until the release?
00:54 tedski so, you would split out /srv/salt in that repo into a repo for salt to pull from, then subtree it into the salt-test repo
00:54 tedski it's kludgy
00:55 iggy the bad part is we've got our pillars and states in the same repo
00:55 iggy (not my doing)
00:55 tedski even more reason to split it out and subtree it for legacy support
00:56 iggy so... I think I'm going to have to go to the person that started this and let them know they are wrong
00:56 iggy now... how to tell the person that's indirectly responsible for me getting paid that he's doing it all wrong
00:57 tedski that's actually kind of easy
00:57 tedski pillar data is intended to be secure and separate, external pillar also supports code
00:57 tedski state trees are for custom modules, grains and state files
00:58 tedski in my first buildout, we did 3 git repos: state tree, fat files, and pillar
00:58 ndrei joined #salt
00:59 iggy well... he also has the master's /etc/salt in there (which I've already told him wasn't the best way of doing it)
00:59 iggy so, he should be used to me telling him he's wrong by now ;)
01:00 tedski he had the right intentions from the start... it seems he did that before he learned salt
01:00 tedski that's not really a terrible thing... it's better than having a cowboy who doesn't believe in scm :)
01:00 iggy well... while he was learning salt, yeah
01:00 iggy yeah, for every thing I'm finding wrong here, there's so many other things that are right
01:01 iggy I mean the guy has actually heard of docker, etc. (not that they are using it, but... baby steps ;)
01:01 iggy but this kind of brings up another question
01:02 iggy is there some magic for salt to handle formulas? or do we need to set gitfs_root to suit them?
01:03 iggy (which would then change the layout of our base repo)
01:04 tedski no, they are intended to be used as is
01:05 tedski that's why they each have pillar.example named exactly alike, so they "squash" each other
01:05 iggy yeah, nvm
01:05 iggy I was thinking they were another level deep for some reason
01:06 iggy sweet, all working now
01:06 iggy danke
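
For reference, a minimal master-config sketch of the setup that ended up working, following the gitfs tutorial pages linked above (the repository URL is the one from the conversation; the per-remote form shown in the comment only exists in 2014.7 and later):

    fileserver_backend:
      - git

    # everything salt should serve lives under srv/salt inside the repo
    gitfs_root: srv/salt

    gitfs_remotes:
      - git+ssh://git@github.com/iggy/salt-test.git

    # per-remote parameters (2014.7+ only) would instead look like:
    # gitfs_remotes:
    #   - git+ssh://git@github.com/iggy/salt-test.git:
    #     - root: srv/salt
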
01:07 bhosmer_ joined #salt
01:08 vbabiy joined #salt
01:09 mordonez joined #salt
01:13 n8n joined #salt
01:19 bhosmer joined #salt
01:30 pled76 joined #salt
01:32 jalaziz joined #salt
01:45 pled76 joined #salt
01:55 Hipikat joined #salt
01:56 geekmush joined #salt
02:01 jalaziz joined #salt
02:02 pled76 joined #salt
02:03 davet1 joined #salt
02:05 pled76_ joined #salt
02:05 skullone_ anyone build a database backed pillar system?
02:06 vmdsch joined #salt
02:06 Corey skullone_: Yup. External Pillars.
02:08 whyzgeek joined #salt
02:08 skullone_ ive been playing with reclass, but im thinking of something else
02:10 jalaziz joined #salt
02:16 jalaziz_ joined #salt
02:16 vbabiy joined #salt
02:17 whyzgeek joined #salt
02:19 miqui joined #salt
02:28 anotherZero joined #salt
02:28 jalaziz joined #salt
02:32 TTimo joined #salt
02:34 floWenoL_ joined #salt
02:35 vbabiy joined #salt
02:39 pled76 joined #salt
02:44 thayne joined #salt
02:45 TTimo joined #salt
02:46 pled76 joined #salt
02:50 vmdsch left #salt
02:53 Nexpro1 joined #salt
02:56 bhosmer joined #salt
03:03 vbabiy joined #salt
03:04 jensnockert joined #salt
03:05 vbabiy_ joined #salt
03:06 vbabiy joined #salt
03:07 yomilk joined #salt
03:14 geekmush1 joined #salt
03:14 jkaye joined #salt
03:18 miles32 joined #salt
03:19 miles32 hey one thing I've noticed with salt-cloud is running salt-cloud -d for a machine in AWS will bring up two machines to destroy instead of only the one that exists
03:20 pled76 joined #salt
03:20 miles32 for instance I have a machine in a VPC that spans ec2-us-east-1, running salt-cloud -d machine_name will return one result for ec2-us-east-1a and ec2-us-east-1d.
03:20 miles32 this destroys an instance and then errors out on the second instance which clearly does not exist in the first place
03:21 miles32 is that known behavior?
03:22 vbabiy joined #salt
03:40 vbabiy joined #salt
03:47 pled76 joined #salt
03:47 oz_akan joined #salt
03:52 vbabiy joined #salt
03:58 schimmy joined #salt
03:59 kingel joined #salt
04:03 malinoff joined #salt
04:03 mordonez joined #salt
04:14 vbabiy joined #salt
04:18 oz_akan joined #salt
04:23 tmh1999 joined #salt
04:27 pled76 joined #salt
04:38 ajw0100_ joined #salt
04:40 felskrone joined #salt
04:41 plainlystated joined #salt
04:44 bhosmer joined #salt
04:45 plainlystated I'm a saltstack newb, and trying to get a standalone minion to pull a public nagios formula…  I installed python-pygit2 (v 0.21.1-2) and added fileserver_backend and gitfs_remotes to my /etc/salt/minion file. Still, it's not working. The only error I see is that the nagios.server formula can't be found. No mention of git in the debug log, and nothing that looks gitish under /var/cache/salt. Any idea what I'm doing wrong here?
04:47 ajolo joined #salt
04:49 ramteid joined #salt
04:53 plainlystated nevermind..  seems like a bug in the docs..  this isn't supported in masterless
04:53 plainlystated https://github.com/saltstack/salt/issues/6660
04:53 endragor joined #salt
04:54 endragor Is there a way to achieve something similar with Mako? http://stackoverflow.com/questions/3352724/in-jinja2-whats-the-easiest-way-to-set-all-the-keys-to-be-the-values-of-a-dictio
04:54 endragor Having to put "salt['pillar.get'](...)" all the time hurts readability, so I'd like to expand a pillar value (which is a dictionary) into the Jinja context and use it directly
04:55 endragor Jinja or Mako or any other template engine
05:01 kingel joined #salt
05:01 geekmush joined #salt
05:02 __number5__ you can use {{ pillar.blah.blah }} in jinja
05:03 pled76 joined #salt
05:04 __number5__ salt['pillar.get']('a', 'default') is safer because it won't throw an exception when the pillar key you want doesn't exist
05:05 jensnockert joined #salt
05:08 endragor oh, cool, I thought the only way is {{ pillar['blah']['blah'] }}, which is also less readable/writable. I still see a point in extending the context with a pillar value: my pillars are nested ~3 levels deep, and I could get rid of one of them in certain cases
05:08 endragor for example, if I have this:
05:08 endragor openstack:
05:08 endragor   horizon:
05:08 endragor     enabled: y
05:09 endragor and I want to render some openstack config file, I want to use just {{ horizon.enabled }} from there
05:09 malinoff endragor, actually, salt['pillar.get']('a:b:c', 'default') equals pillar.get('a', {}).get('b', {}).get('c', 'default'), not only pillar['a']['b']['c']
05:09 endragor yep, I know about that colon feature, thanks
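
In other words, a small Jinja sketch using the pillar keys from the example above (both lines read openstack:horizon:enabled, but only the second tolerates a missing key):

    {# fails to render if any level of the key is missing #}
    {{ pillar['openstack']['horizon']['enabled'] }}

    {# returns the supplied default (here False) instead of failing #}
    {{ salt['pillar.get']('openstack:horizon:enabled', False) }}
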
05:15 pled76 joined #salt
05:18 schimmy joined #salt
05:19 oz_akan joined #salt
05:24 ianmcshane joined #salt
05:29 ericof joined #salt
05:32 scalability-junk joined #salt
05:33 micko joined #salt
05:36 TTimo joined #salt
05:37 juice joined #salt
05:42 melinath joined #salt
05:47 pled76 joined #salt
05:55 TyrfingMjolnir joined #salt
05:57 bhosmer joined #salt
06:02 kingel joined #salt
06:04 kingel joined #salt
06:19 oz_akan joined #salt
06:20 agend_ joined #salt
06:22 yomilk joined #salt
06:24 jensnockert joined #salt
06:32 lcavassa joined #salt
06:33 bhosmer joined #salt
06:37 dh joined #salt
06:38 kingel joined #salt
06:39 slav0nic joined #salt
06:42 jdmf joined #salt
06:43 Sweetshark joined #salt
06:46 n8n joined #salt
06:47 masm joined #salt
06:48 Outlander hey team, I’m looking to set my server timezone to UTC, is it this? http://pastebin.com/vy3fqHp4
07:00 duncanmv joined #salt
07:05 tomspur joined #salt
07:05 tomspur joined #salt
07:15 verwilst joined #salt
07:16 jkaye joined #salt
07:20 oz_akan joined #salt
07:22 verwilst hi!
07:22 verwilst i've noticed there is a hiera external thingy!
07:23 verwilst http://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.hiera.html
07:23 verwilst anyone using this here and any feedback on it?
07:23 verwilst sounds like the thing i need :)
07:24 verwilst the docs are kinda.. sketchy though :D
07:24 pled76 joined #salt
07:32 marnom joined #salt
07:32 intellix joined #salt
07:41 schimmy joined #salt
07:42 jalaziz joined #salt
07:42 malinoff verwilst, https://github.com/gtmtechltd/salthiera
07:45 pled76 joined #salt
07:45 ianmcshane joined #salt
07:46 verwilst yeah but that's something else malinoff?
07:46 chiui joined #salt
07:47 alanpearce joined #salt
07:50 verwilst malinoff: but it looks pretty nice..
07:50 verwilst even though it's ruby
07:51 sectionme joined #salt
07:51 tru_tru joined #salt
07:52 malinoff verwilst, i don't use hiera (and salt itself), but i guess you can believe the readme says that you can't just use hiera
07:53 verwilst the /etc/salt/salthiera.yaml file in the readme puts all the stuff in /srv/salt.. shouldn't that be /srv/pillar?
07:53 mosen hiya
07:53 malinoff verwilst, i don't know :)
07:53 jensnockert joined #salt
07:54 mosen guess not
07:54 mosen because its hiera, not pillar :)
07:54 marnom joined #salt
07:55 marnom_ joined #salt
08:02 agend_ hi all
08:03 middleman_ joined #salt
08:03 agend_ so far i have been using salt + virtualbox + vagrant. And now everybody is talking about docker. I wonder if i could move my workflow to salt + docker, and how painful or worth it would it be?
08:04 ianmcshane joined #salt
08:16 mosen Dont know much about docker
08:16 mosen what platform you running virtualbox on
08:20 schimmy1 joined #salt
08:21 oz_akan joined #salt
08:22 bhosmer joined #salt
08:23 schimmy2 joined #salt
08:27 darkelda joined #salt
08:30 xsteadfastx joined #salt
08:33 pled76 joined #salt
08:36 verwilst ok, so maybe i need /srv/hiera :)
08:36 verwilst but the data is visible in pillar.items
08:37 verwilst and exposed through ext_pillar :P
08:37 verwilst so /srv/pillar seems quite right :P
08:38 TTimo joined #salt
08:40 martoss joined #salt
08:42 MrTango joined #salt
08:44 ckao joined #salt
08:45 pled76 joined #salt
08:47 pled76_ joined #salt
08:50 CeBe joined #salt
08:56 CeBe1 joined #salt
08:57 deepz88 joined #salt
08:57 intellix joined #salt
09:12 deepz88 joined #salt
09:12 canci joined #salt
09:19 MohShami joined #salt
09:22 oz_akan joined #salt
09:23 MohShami Hey guys, I'm running salt on FreeBSD and want to restart obspamd (openBSD's spamd daemon). The daemon stops without issues, but when I try to start it the daemon starts and I never get the cursor back. I keep getting this function when running the minion in debug mode "'fun': 'saltutil.find_job'" with the job ID of the initial command, any idea what's going on?
09:26 che-arne joined #salt
09:27 yomilk joined #salt
09:33 fredvd joined #salt
09:34 ianmcshane joined #salt
09:35 agend_ mosen: mint 17 as host and ubuntu 12.04 as guest
09:37 melinath joined #salt
09:39 pled76 joined #salt
09:40 PI_Lloyd joined #salt
09:46 pled76_ joined #salt
09:47 PI_Lloyd I'm trying to do something like {{ salt['pillar.get']('group:{{ nodename }}:variable') }} - basically I have a pillar file that contains config options for a group of devices, rather than have a separate file for each device (there's over a thousand devices to manage, that many files is going to be painful), which is split into groupname:devicename:options.... Jinja doesn't seem to like this though. Is this even possible or am I just doing it wrong?
09:48 malinoff PI_Lloyd, try this {{ salt['pillar.get']('group:' + nodename + ':variable') }}
09:49 lord_nikon joined #salt
09:50 viq_ joined #salt
09:50 lord_nikon Hi Guys, having an issue with appending to a file from two different states. Getting a "Detected conflicting IDs, SLS IDs need to be globally unique", any way around this?
09:50 malinoff lord_nikon, well, use globally unique IDs
09:51 malinoff http://pastie.org/9521084
09:51 lord_nikon Aha!
09:52 malinoff lord_nikon, each state function accepts 'name' argument. If you omit it, state id will be used as 'name' argument
09:53 lord_nikon Oh I see, I was using the actual filename as the state argument, thus the duplication.
09:53 malinoff yep
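
A minimal sketch of what malinoff is describing (the file and text here are made up; the point is that the two state IDs are globally unique while both states append to the same file via the name argument):

    motd_header:
      file.append:
        - name: /etc/motd
        - text: 'This box is managed by salt'

    motd_footer:
      file.append:
        - name: /etc/motd
        - text: 'Talk to ops before editing by hand'
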
09:54 PI_Lloyd malinoff: that doesn't seem to pull the nodename from grain data which is what I was trying to do... I suppose I could use a jinja set statement to pull grain info and set that variable... but that seems a bit messy
09:54 PI_Lloyd i also tried using a grains.get inside the pillar.get but salt just threw a wobbly about it
09:55 mariusv joined #salt
09:55 lord_nikon Works like a charm!
09:55 ndrei joined #salt
09:55 malinoff PI_Lloyd, what about {% set namespace = 'group:' + grains['nodename'] + ':variable' %} {{ salt['pillar.get'](namespace) }}
09:56 verwilst babilen: hello
09:56 alanpear_ joined #salt
09:57 scott_w joined #salt
10:00 PI_Lloyd malinoff: that works! thanks muchly :)
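
Putting the working answer together, roughly (the pillar layout and the key names group/variable/nodename are taken from PI_Lloyd's description; device0001 and the fallback value are made up):

    # pillar
    group:
      device0001:
        variable: some_value

    {# in the sls / template #}
    {% set key = 'group:' + grains['nodename'] + ':variable' %}
    {{ salt['pillar.get'](key, 'fallback_value') }}
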
10:05 aarontc joined #salt
10:10 jensnockert joined #salt
10:11 bhosmer joined #salt
10:19 ericof joined #salt
10:22 oz_akan joined #salt
10:24 fredvd joined #salt
10:28 martoss joined #salt
10:31 ianmcshane joined #salt
10:32 bhosmer joined #salt
10:34 pled76 joined #salt
10:37 ghartz joined #salt
10:39 xintron Is there a simple way of starting a process and have it running with salt (there is no service wrapper etc)?
10:39 malinoff xintron, write service script
10:39 TTimo joined #salt
10:40 viq xintron: supervisord ?
10:41 xintron viq, hrmm... yeah. Maybe
10:41 viq AKA "it's not salt's job to do it, but it has support for things that do"
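
A rough sketch of the supervisord route viq suggests (package, service and config paths assume a Debian/Ubuntu-style supervisor install, and 'myapp' is a hypothetical program):

    supervisor:
      pkg.installed: []
      service.running:
        - enable: True
        - watch:
          - file: /etc/supervisor/conf.d/myapp.conf

    /etc/supervisor/conf.d/myapp.conf:
      file.managed:
        - source: salt://myapp/supervisor.conf
        - require:
          - pkg: supervisor
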
10:44 lord_nikon left #salt
10:46 linjan joined #salt
10:50 TheThing joined #salt
10:51 alanpearce joined #salt
10:53 dean joined #salt
10:54 istram joined #salt
11:02 TTimo joined #salt
11:02 colttt joined #salt
11:06 martoss joined #salt
11:12 TheThing joined #salt
11:15 ninkotech__ joined #salt
11:18 jkaye joined #salt
11:20 bhosmer joined #salt
11:38 intellix joined #salt
11:40 pled76 joined #salt
11:49 hobakill joined #salt
11:53 sectionme joined #salt
11:55 bhosmer joined #salt
11:55 tmh1999 joined #salt
11:57 mordonez joined #salt
12:04 seblu ehlo
12:05 seblu I'm looking for a way to use file.manage only if a file is available in the fileserver root
12:06 seblu that works perfectly with multiple source, when the default file is empty, it's ok.
12:07 bluenemo joined #salt
12:09 mordonez_ joined #salt
12:09 Setsuna666 joined #salt
12:18 sectionme joined #salt
12:18 pviktori joined #salt
12:18 pviktori joined #salt
12:19 pviktori joined #salt
12:20 TTimo joined #salt
12:22 diegows joined #salt
12:26 nyx joined #salt
12:29 rome joined #salt
12:32 erjohnso joined #salt
12:33 rome joined #salt
12:35 erjohnso joined #salt
12:39 rome joined #salt
12:42 cpowell joined #salt
12:44 ianmcshane joined #salt
12:44 bhosmer_ joined #salt
12:46 rome joined #salt
12:47 Setsuna666 joined #salt
12:48 Setsuna666 exit
12:49 ninkotech joined #salt
12:49 rome joined #salt
12:52 jaimed joined #salt
12:57 acabrera joined #salt
12:59 pled76 joined #salt
13:02 vejdmn joined #salt
13:03 _mel_ joined #salt
13:03 alanpearce joined #salt
13:05 mhubbard_ joined #salt
13:07 Twiglet joined #salt
13:11 MTecknology It feels weird having to specify -G 'kernel:Linux' when running salt commands...
13:11 bhosmer_ joined #salt
13:12 jkaye joined #salt
13:13 rangertaha joined #salt
13:14 rangertaha can execution modules conduct distributed tasks?
13:16 tmh1999 joined #salt
13:18 oz_akan joined #salt
13:20 nyx_ joined #salt
13:21 wendall911 joined #salt
13:21 rangertaha I want to make a module that will communicate with other target hosts to distribute the load. Any ideas on the best way to do that?
13:22 TTimo joined #salt
13:26 abe_music joined #salt
13:27 nitti joined #salt
13:29 rome joined #salt
13:30 to_json joined #salt
13:33 toastedpenguin joined #salt
13:36 Deevolution joined #salt
13:36 noeol joined #salt
13:42 ajprog_laptop joined #salt
13:45 runiq joined #salt
13:46 tmh1999 joined #salt
13:48 pled76 joined #salt
13:48 Nazzy MTecknology, what you add to your mix? bsd, windows, mac...?
13:49 BrendanGilmore joined #salt
13:51 lnxnut joined #salt
13:53 dude051 joined #salt
13:53 dccc_ joined #salt
13:55 MTecknology Nazzy: windows
13:57 littleidea joined #salt
13:57 Nazzy MTecknology, ah, yes, that would certainly need some care ... I can't imagine windows would be impressed at being asked to apt-get something *grin*
13:58 runiq Hey all, back with a problem… I'm trying to bootstrap the Salt Master (2014.1.10) on a CentOS7 host. First of all, is that even supported with this version?
13:59 abe_music joined #salt
13:59 quickdry21 joined #salt
14:01 viq MTecknology: I do that regularly, BSDs don't really support pkg.list_upgrades with salt
14:02 MTecknology It's neat that I'm to the point where I have to worry about that
14:02 viq runiq: https://dl.fedoraproject.org/pub/epel/7/x86_64/s/ does have packages for salt, so should be
14:03 MTecknology viq: I wish you had a gitlab-ci formula :(
14:03 runiq viq: Yeah, that's what I'm using. Alright, so it's supposed to be supported, I guess… hm
14:03 viq MTecknology: salt -t 30 -v -C 'G@kernel:linux and G@updates:i*' pkg.list_upgrades refresh=True    where 'i*' matches immediate and info ;)
14:03 MTecknology that's the reason I don't have a gitlab-ci server yet... I don't want to build it :P
14:04 MTecknology oooh- shiny command
14:05 runiq I'm running into a weird error with a fairly basic state: When I invoke 'state.highstate', salt says it wants to install the salt-master pkg, but doesn't. I have to explicitly invoke 'state.sls salt.master', then it works. This is my config: https://gist.github.com/robodendron/f661b922ff0ac091aa64
14:05 viq And after that salt -v -C 'G@kernel:linux and G@updates:immediate and not G@redundancy:*' pkg.upgrade and after that '...te and G@redundancy:phase1' pkg.upgrade and then phase2
14:06 viq runiq: have you tried https://github.com/saltstack-formulas/salt-formula ?
14:06 Nazzy MTecknology, I wish gitlab didn't give me so many headaches when I have to fix it :p
14:07 Nazzy I wish gitlab-ci /was/ a headache I had to fix heh
14:07 runiq Both state.sls and state.highstate run two repoquery commands, but state.highstate doesn't run the yum install command.
14:07 abe_music joined #salt
14:08 viq Hm, I haven't played with cent7 yet, so can't really say
14:08 MTecknology Nazzy: do you have a formula for it then? :D
14:08 MTecknology for gitlab, I had to take viq's and rip it apart so it would work in this environment
14:09 Nazzy for context, I upgraded our internal gitlab install a few days back and omniauth-ldap stopped working, locking out all the users... I had to get the omniauth-cas stuff working then go in to the database and manually set everyone over to that auth provider
14:09 Nazzy and no, sadly out gitlab install predates our salt trees
14:09 William left #salt
14:10 Nazzy I've not converted it to a salt recipe cause it's supposed to have been moved out of my responsibilities like 6 months ago heh
14:11 Nazzy it's ruby and I don't do ruby :p
14:11 MTecknology I'm with you there... ruby is a **** pile of **** dog ****.
14:11 MTecknology deployment is worse
14:11 verwilst "State mysql_user.present found in sls mysql.server is unavailable" any idea why this could be triggered?
14:12 verwilst i don't see anything related marked as failed
14:12 verwilst why would it be "unavailable" ?
14:14 verwilst ugh
14:14 verwilst :P
14:14 verwilst yum install MySQL-python
14:18 MTecknology wow... rhel package naming is gross
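
What verwilst ran into, written as a state (a sketch; the user name and password are placeholders, and on RHEL/CentOS the package really is named MySQL-python even though it provides the MySQLdb module):

    mysql-bindings:
      pkg.installed:
        - name: MySQL-python

    someuser:
      mysql_user.present:
        - host: localhost
        - password: 'changeme'
        - require:
          - pkg: mysql-bindings

If the bindings are installed in the same run, the mysql_* states may still need a module refresh (or a second run) before they become available.
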
14:18 bhosmer_ joined #salt
14:19 Nazzy hehe
14:19 Nazzy yeah, ruby is a pile of eww, thankfully it's not my problem hehe
14:20 kaptk2 joined #salt
14:20 MTecknology If it's a linux server or it runs on a linux server or it in some way touches a linux server, it's my responsibility
14:21 Nazzy oh, I have an out on that ... the ruby stuff runs bsd :p
14:21 Nazzy in any case, I only have to maintain the linux servers that run my own code, so if my code doesn't touch it then it isn't my scope
14:21 Nazzy thankfully, since we have a lot of linux boxes here
14:24 kingel joined #salt
14:24 peters-tx joined #salt
14:24 MTecknology I also write a lot of applications and custom solutions for windows and linux and sometimes aix. Of the 450 servers I handle, less than 5 of them are redundant. Everything else is a unique flower.
14:25 Nazzy aix? ouch
14:25 mordonez_ joined #salt
14:26 geekmush joined #salt
14:28 KennethWilke joined #salt
14:28 kermit joined #salt
14:31 viq ugh, how fun. Though I think the technical term is 'snowflake' ;)
14:32 DaveQB joined #salt
14:32 xmj viq: i'm here? :D
14:33 Ozack joined #salt
14:33 tempspace It's a beautiful day in the neighborhood
14:33 viq heya xmj. Did the mention of snowflakes trigger you? ;)
14:34 Supermathie joined #salt
14:34 xmj viq: next time include a trigger warning! :D
14:34 viq how's life?
14:34 felskrone joined #salt
14:34 xmj good, been travelling through eastern europe all august
14:34 xmj viq: how's yours?
14:34 viq proximity fuse everything! ;P
14:34 bhosmer joined #salt
14:35 xmj viq: i was in krakow for a few days, didn't you hear your alarm beeping?
14:35 viq Eh, not that much worth mentioning, starting to gamemaster some shadowrun for some friends, bitching at oracle at work
14:35 viq hehe
14:35 viq Have you seen Wieliczka?
14:35 viq the salt mine
14:35 viq appropriately ;)
14:36 xmj hehe, saw that
14:36 viq Cool, that's certainly a place worth seeing
14:36 quickdry21 joined #salt
14:36 bhosmer_ joined #salt
14:37 tempspace I finally got around to creating a blog post that goes over the pillar driven design pattern I use in my salt code, hopefully someone will find it useful: http://www.willdurness.com/salt-pillar-driven-design-pattern/
14:37 viq It saddens me that people go to that area to visit Auschwitz and haven't even heard about Wieliczka
14:39 Nazzy viq, in cases like this I have to resort to the tried phrase: "You're a precious and unique snowflake, just the same as everyone else."
14:39 ericof joined #salt
14:39 Nazzy alternatively.... "You're all unique and different!" "Yes, we are all different!"
14:42 jnials joined #salt
14:42 vbabiy joined #salt
14:42 viq tempspace: I'm pondering whether compound matchin with pillar would be more readable
14:43 tempspace viq: I think it would be more readable for people who understand what they're reading
14:43 tempspace if that makes sense
14:44 tempspace I was worried about just using the jinja variables in that blog post to be honest
14:44 viq it does
14:44 ianmcshane joined #salt
14:45 \ask joined #salt
14:47 viq tempspace: it would be useful if you'd show how you assign a 9.3 version of postgres to some machine
14:47 tempspace yeah I think you're right
14:48 xmj wait, now
14:49 xmj i didn't see the salt mine. that 'hehe, saw that' was on something else.
14:49 xmj derp
14:49 viq oh
14:49 viq GO BACK THERE THIS INSTANT
14:49 SheetiS joined #salt
14:49 xmj ok /goes back and sees auschwitz
14:49 viq Sure, as long as you go to Wieliczka as well :P
14:50 rallytime joined #salt
14:51 kingel_ joined #salt
14:51 schristensen joined #salt
14:54 jalbretsen joined #salt
14:54 pled76 joined #salt
14:55 Supermathie Morning everyone - can someone point me at the salt-recommended way to have debug logging within minion jobs?
14:55 vbabiy joined #salt
14:56 econnell joined #salt
14:57 tempspace viq: thanks for suggestions
14:58 tempspace viq++
14:58 pled76 joined #salt
14:59 TyrfingMjolnir joined #salt
15:00 philipsd6 joined #salt
15:00 pled76_ joined #salt
15:02 manfred Supermathie: change your initscript to be -l debug, or change the log level in the /etc/salt/minion to be debug
15:02 manfred Supermathie: then you check the log on the minion, or you check salt-call state.sls <state> -l debug and check there
15:03 Supermathie manfred, So it's an all-or-nothing choice? Ok
15:03 manfred yeah
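
For the record, the two ways manfred mentions (the config option goes in /etc/salt/minion and takes effect after a restart; the same effect comes from starting the minion with -l debug):

    # /etc/salt/minion
    log_level: debug
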
15:07 icebourg joined #salt
15:08 icebourg joined #salt
15:11 conan_the_destro joined #salt
15:13 tempspace I love the feeling of ripping out code from salt states for products that caused me nothing but aggravation...
15:13 tempspace it's as close to office spacing the server as I can get
15:14 Nazzy oh yeah ... MTecknology, rhel package naming isn't actually the grossness in this case... the actual python package is named MySQL-python and it installs a library named "MySQLdb" for you to import from
15:15 dvestal joined #salt
15:16 n8n joined #salt
15:17 MTecknology yuck
15:21 izibi joined #salt
15:21 ckao joined #salt
15:21 Nazzy that's exactly what I think every time I have to work with it heh
15:22 UtahDave joined #salt
15:23 UtahDave Good morning!
15:25 Nazzy it's afternoon here *looks out the window* and doesn't look especially good :p
15:26 Gareth morning morning
15:26 viq morfternoon
15:26 Gareth Nazzy: morning is a relative term :)
15:27 kingel joined #salt
15:27 UtahDave sorry, Nazzy. If I could I'd share some beautiful blue skies with you.  :)
15:27 Nazzy Gareth, I tried telling my boss that... apparently I still have to turn up before noon :P
15:28 Gareth Not me.  I'm keeping the ones here all for myself! Muhahaha.
15:28 Nazzy actually the sky was fairly blue earlier, it's cooling a little now but still comfortable
15:29 Nazzy sadly I'm not in my bed, and I've had to deal with ISO related stress today, so it balances out lol
15:30 Nazzy *sigh* if python-ldap would stop giving me double free deaths, my day would be much better
15:31 * Gareth waits for jenkins.saltstack.com to do it's thing
15:32 rangertaha I want to make a module that will communicate with other target hosts to distribute the load. Any ideas on the best way to do that?
15:33 Heartsbane joined #salt
15:33 Heartsbane joined #salt
15:33 Nazzy misery of the day for me: *** Error in `python': double free or corruption (top): 0x0000000000ea9be0 ***
15:33 viq rangertaha: what kind of load?
15:33 Gareth rangertaha: Looking at the publish module.
15:33 N-Mi joined #salt
15:34 Gareth s/Looking/Look/
15:34 rangertaha Gareth: ok
15:35 rangertaha viq: like an nmap scan
15:35 darkelda_work joined #salt
15:35 ajolo joined #salt
15:36 viq rangertaha: oh, that would require quite a bit of logic I think... I seem to recall some talk that the upcoming version (Helium, 2014.7) is to have some job queue system that could make this easier, but I haven't looked at it at all
15:36 viq rangertaha: to my knowledge until that's out you could be better off using something like celery
15:37 runiq viq: Found the bugreport for my problem: https://github.com/saltstack/salt/issues/10877 :)
15:39 rangertaha I have do it with celery. But the goal is to start creating modules for offsec/CYBINT. Use salt conduct distributed tasks, responses, and so on.
15:40 rangertaha sorry, I have done it with celery...
15:40 pled76 joined #salt
15:41 rangertaha So the reactor system would response to nmap results on the event bus.
15:42 to_json joined #salt
15:42 rangertaha Tools would execute and allow other tools to react to the results.
15:43 viq yeah
15:45 viq Currently with salt I believe there's no confirmation that an event/command was received, unlike with celery. I think they are planning a feature like you (I think) have there, ie "I want one of the boxes to execute this command, only one but I don't care which", I don't believe such functionality is present yet
15:45 rangertaha Its not for only systems. But for other things like RSS news feeds. Have a module to monitor news feeds and allow other tools to react to the feeds.
15:46 viq At least in the released version, I have not looked at what's coming in 2014.7
15:46 rangertaha ok, thanks
15:46 rangertaha How much data can the event bus handle?
15:47 pled76 joined #salt
15:47 rangertaha can I use it as a primary means of data transfers.
15:48 rangertaha like 1000's of system collecting and sending feeds on it. Can it handle that?
15:48 viq I have no idea. Also I think there are no confirmations that the event was received, so that currently could be an issue
15:49 Supermathie rangertaha, I'd probably look at using something like redis for that, but I'm still just learning saltstack so take my advice... well... you know.
15:49 arochette joined #salt
15:49 viq But I am intrigued by what you're doing with it
15:50 Supermathie rangertaha, I'm doing something similar - using saltstack for functional and performance testing which involves coordinating and distributing jobs out to a number of hosts then collating the results.
15:50 * viq is programming illiterate and using salt only for managing systems, so not most knowledgeable person around here either
15:52 conan_the_destro joined #salt
15:52 Supermathie My first naïve crack at doing so is essentially this: https://github.com/saltstack/salt/issues/14854#issuecomment-53914226
15:52 joehillen joined #salt
15:52 bernieke joined #salt
15:57 aparsons joined #salt
15:58 snuffeluffegus joined #salt
15:58 CatPlusPlus joined #salt
15:59 viq oh, yeah, Supermathie and rangertaha might find it useful if you don't know of it already - https://github.com/felskrone/salt-eventsd
16:05 ajprog_laptop joined #salt
16:06 ajprog_laptop joined #salt
16:07 tligda joined #salt
16:08 littleidea joined #salt
16:10 troyready joined #salt
16:10 pdayton joined #salt
16:11 patarr what's that about Wieliczka? Sa tu polacy? :D
16:12 patarr heh. The Saltstack team should do a group trip to Wieliczka.
16:12 viq patarr: are you polish?
16:12 patarr Yup
16:12 viq a skąd? ;)
16:13 xmj patarr: I was in krakow recently and viq asked me whether i'd seen that
16:13 xmj viq: dear god ukrainian border controls on the lviv-krakow night train is ... lovely!
16:14 schimmy joined #salt
16:14 JPaul where can I find the jinja doc that talks about the tests on strings such as "startswith"? searches aren't turning up anything other than bits of example code that contain that
16:14 KyleG joined #salt
16:14 KyleG joined #salt
16:17 bhosmer joined #salt
16:17 viq xmj: oh?
16:17 schimmy1 joined #salt
16:17 KyleG joined #salt
16:17 KyleG joined #salt
16:19 ajw0100_ joined #salt
16:23 forrest joined #salt
16:24 SheetiS JPaul: I am fairly certain that things like .startswith are all just python string methods.  I think all of these I have tried have worked: https://docs.python.org/release/2.5.2/lib/string-methods.html
16:24 tempspace JPaul: It's been a while, but if I remember correctly, startswith was in jinja 1 but not in jinja 2...or something like that, I'd have to google...I believe I worked around it by using the replace filter and checking to see if the new variable contained my replacement string...may not work for all purposes
16:26 SheetiS I've used endwith, rjust, replace and several others.
16:28 TTimo do I need to create the keys on the minion when installing?
16:28 aparsons joined #salt
16:28 TTimo I tried to use the bootstrap script, but it's complaining about no keys
16:28 JPaul ok, thanks guys
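
These are just the ordinary Python string methods, available on any string inside a Jinja template; a tiny sketch (the 'web' prefix and the nginx state are made up):

    {% if grains['host'].startswith('web') %}
    nginx:
      pkg.installed: []
    {% endif %}
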
16:29 viq TTimo: they keys should be created when service starts and they're not present
16:29 spookah joined #salt
16:29 TTimo right, I have that disabled because it's broken on 14.04 .. how can I get that done manually ?
16:30 pled76 joined #salt
16:31 brunolambert joined #salt
16:32 scoates joined #salt
16:37 sectionme joined #salt
16:38 pled76 joined #salt
16:40 ajolo whiteinge: o/
16:40 ajolo UtahDave: o/
16:40 UtahDave hey, ajolo!
16:40 darkelda_work joined #salt
16:42 ajolo UtahDave: how are you doing ?
16:42 aparsons_ joined #salt
16:42 UtahDave ajolo: pretty good!  You?
16:42 hobakill if i'm using gitfs - should i keep my winrepo out of the git structure?
16:43 ajolo UtahDave: good good, coming back to the salt way of life :)
16:43 ajprog_laptop2 joined #salt
16:43 hobakill ack. i found the man info on it. nvm.
16:44 ajolo I've been in a few projects that took time from me :/
16:44 UtahDave ajolo: Ah, very cool!
16:45 hobakill maybe not. i hate windows.
16:50 pled76 joined #salt
16:51 melinath joined #salt
16:51 pled76_ joined #salt
16:52 justyns joined #salt
16:52 justyns left #salt
16:52 justyns joined #salt
16:53 dvestal joined #salt
16:55 scoates joined #salt
16:58 jaimed joined #salt
16:58 eofs joined #salt
17:00 klotho joined #salt
17:00 scoates joined #salt
17:02 TTimo is there a command to run if I edit state files while the master is running? minion doesn't seem to pickup on the changes
17:02 shaggy_surfer joined #salt
17:03 MatthewsFace joined #salt
17:06 scoates joined #salt
17:07 Ryan_Lane joined #salt
17:07 TTimo ** skipped ** latest already in cache 'salt://general.sls'
17:07 TTimo no it's not
17:09 rangertaha Supermathie: how's that working out? what kind of issues are you encountering?
17:10 smcquay joined #salt
17:11 seblu I'm looking for a way to use file.manage only if a file is available in the fileserver root
17:11 seblu that works perfectly with multiple source, when the default file is empty, it's ok.
17:11 viq TTimo: salt-run fileserver.update
17:11 scoates joined #salt
17:11 TTimo kk ty :)
17:12 seblu but if I expect to not copy and empty file, that's throw an error
17:12 TTimo omg why is salt so memory hungry
17:12 Supermathie rangertaha, pretty well so far; the difficulty I had was figuring out when the jobs just "weren't picked up" and reacting appropriately. Also, I'm not really synchronizing the jobs so I'm just kind of figuring "well they all started within 10s of each other" should be good enough :) If it's a problem I was pondering using, say, a redis semaphore for synchronization.
17:13 TTimo I have two minions and the master is gobling up 450MB
17:15 diegows joined #salt
17:16 pled76 joined #salt
17:16 shaggy_surfer joined #salt
17:17 hobakill is win_repo a requirement UtahDave ? or can i just use win_gitrepos?
17:18 UtahDave hobakill: what doc are you following?
17:18 hobakill UtahDave: http://docs.saltstack.com/en/latest/topics/windows/windows-package-manager.html
17:19 hobakill UtahDave: i'm curious about win_repo and win_repo_mastercachefile as i only use gitfs
17:19 Supermathie TTimo, mine'
17:20 Supermathie TTimo, mine's only using 172MB, are you counting shared libs?
17:20 TTimo I'm looking at htop, it went up to 500M VIRT, and about 100M of paged in pages
17:21 TTimo not the first time I observe this either .. it's really concerning
17:21 TTimo puppet had terrible memory usage too
17:22 beardo joined #salt
17:23 chrisjones joined #salt
17:23 Supermathie VSS != memory usage, it's complicated.
17:24 ianmcshane joined #salt
17:24 TTimo yes
17:24 TTimo but RES
17:24 TTimo was at 100M
17:24 TTimo that's memory physically paged in afaik
17:24 TTimo that seems a LOT for just two minions and a handful of state files
17:25 felskrone joined #salt
17:27 scoates joined #salt
17:28 schimmy joined #salt
17:30 schimmy1 joined #salt
17:30 yomilk joined #salt
17:32 UtahDave hobakill: I think win_repo is the location on the master where all the win_gitrepos get compiled to
17:32 pled76 joined #salt
17:32 ianmcsha_ joined #salt
17:34 bluenemo joined #salt
17:34 archen_ joined #salt
17:34 rap424 joined #salt
17:34 jalaziz joined #salt
17:34 archen_ what is salt for?
17:34 archen_ NaCl?
17:35 hobakill UtahDave: ok that's cool but will if work if gitfs is the only fileserver_backend i use?
17:35 xcbt joined #salt
17:35 UtahDave hobakill: Yeah, it should still work. It's not really tied into the fileserver backend subsystem.
17:35 aparsons joined #salt
17:36 hobakill UtahDave: ok thanks.
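
The master-side options under discussion, roughly as the windows package manager doc describes them (the paths shown are the documented defaults; the git URL is the public salt-winrepo repository):

    win_repo: /srv/salt/win/repo
    win_repo_mastercachefile: /srv/salt/win/repo/winrepo.p
    win_gitrepos:
      - 'https://github.com/saltstack/salt-winrepo.git'
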
17:38 Corey archen_: Systems automation / configuration management / remote execution, primarily.
17:39 NotreDev joined #salt
17:39 vbabiy joined #salt
17:40 NotreDev why is it that if I change the name of my keyfile (remove the .pem extension) i can’t ssh?
17:40 jkaye joined #salt
17:40 a1 joined #salt
17:40 aparsons joined #salt
17:41 jkaye joined #salt
17:41 hobakill UtahDave: interesting - can my salt-winrepo be a branch from my salt git or does it have to be an independent git? looks like the latter
17:44 mordonez_ joined #salt
17:44 rangertaha Supermathie: I was just looking at https://github.com/felskrone/salt-eventsd, It looks interesting. An event daemon, maybe add some celery for flavor. That would be awesome. However, the point in doing it in salt was to allow non devs to configure things. Maybe I can use the daemon with celery, to store all events and then process and send events to the minions. This would give control over the jobs/events. hmm
17:44 BrendanGilmore joined #salt
17:44 scoates joined #salt
17:44 TheThing joined #salt
17:44 UtahDave hobakill: I've only tested with an independent git repo. gitfs wasn't really a thing yet when I originally wrote the windows repo stuff
17:44 hobakill UtahDave: ok thanks again.
17:44 UtahDave anytime!
17:44 viq huh, http://xforce.iss.net/xforce/xfdb/95392
17:44 kermit joined #salt
17:44 pled76 joined #salt
17:44 troyready joined #salt
17:46 mackstick joined #salt
17:47 smcquay joined #salt
17:49 murrdoc joined #salt
17:49 pled76 joined #salt
17:50 scoates joined #salt
17:50 Supermathie rangertaha, allowing non-devs??? :D (not my use case yet)
17:53 xmj oh dear.
17:53 xmj rangertaha: salt usually runs as root.. right?
17:54 melinath joined #salt
17:54 xmj rangertaha: if you allowed non-devs to use salt, instead of giving them root access...
17:54 xmj ...they could just deploy their own keys into /root/.ssh/authorized_keys, right?
17:59 xcbt joined #salt
17:59 scoates joined #salt
18:01 Supermathie if you didn't restrict what jobs they could run, yeah.
18:01 aparsons joined #salt
18:03 rockey joined #salt
18:03 kaictl joined #salt
18:04 whitepaws joined #salt
18:04 Fa1lure joined #salt
18:05 aparsons joined #salt
18:09 FL1SK joined #salt
18:09 dstokes_ does salt automatically install rvm on a system when using the rvm state?
18:10 bluenemo can I clone git repos as a specific linux user? I want to clone an application into a linux users $HOME, but the files should be owned by that user
18:11 UtahDave dstokes_: states aren't supposed to do that, but I believe the rvm state does install rvm automatically if you try to use the rvm state
18:12 TheThing joined #salt
18:12 UtahDave bluenemo: yes, you can. You can use the "user" option to the git state
18:12 bluenemo ah ok. I thought that would set git --config username or sth. thank you UtahDave :)
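
A sketch of what UtahDave means (repository, target directory and user name are placeholders):

    myapp_checkout:
      git.latest:
        - name: git@github.com:example/myapp.git
        - target: /home/appuser/myapp
        - user: appuser
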
18:13 dstokes_ @UtahDave: got it, thx
18:14 murrdoc joined #salt
18:16 pled76_ joined #salt
18:16 KevinMGranger So there's precedent for a state bootstrapping its requisites? I was told that should be done explicitly before.
18:17 dstokes it definitely should be done explicitly. assuming rvm is an old implementation
18:17 ianmcshane joined #salt
18:17 UtahDave KevinMGranger: Yeah, rvm state is misbehaving
18:22 druonysus joined #salt
18:23 rangertaha Supermathie: Remember the lego days. I miss those days. Making things all day; all weekend with my brothers. Salt has an amazing set of building blocks. I want to keep it like that.
18:23 occup4nt joined #salt
18:23 Supermathie +1 Love the framework, building something awesome.
18:24 rangertaha Supermathie: How often do jobs not get picked up?
18:25 Supermathie most often when the minion is offline :)
18:25 Supermathie (usually temporarily, but I think I have some weird networking thing happening where something just stops communicating for a bit so I send a few test.ping to wake it up)
18:26 druonysuse joined #salt
18:27 rangertaha Darn, I need the events to be more persistent.
18:28 davet joined #salt
18:30 brandon___ joined #salt
18:32 ajolo joined #salt
18:34 ajolo UtahDave: back :)
18:34 ianmcshane joined #salt
18:37 amontalban Hey guys, I'm trying to use: salt '*' state.sls core,edit.vim dev
18:37 amontalban But doesn't work for me, is this syntax correct: salt 'host' state.sls saltenv=environment stateiwanttorun
18:37 NotreDev i’m using salt & docker - i’m presently unconvinced that my Dockerfile should live with my application. it makes more sense for it to live with my docker state. is this the typical pattern?
18:37 Ryan_Lane NotreDev: I may be out of the ordinary, but i keep both the dockerfile and the salt config with the application
18:38 NotreDev Ryan_Lane: thanks, was actually just reading your blog
18:38 dvestal joined #salt
18:39 sashka_ua joined #salt
18:39 NotreDev i think that’s a peculiar setup though with matching on grains.
18:40 murrdoc its necessary in the cloud
18:40 NotreDev murrdoc: why’s that?
18:41 murrdoc its not necesary
18:41 NotreDev docker makes it difficult by requiring all ADDs to be from the same folder as the Dockerfile
18:41 Ryan_Lane you don't have to use grains
18:41 stbenjam joined #salt
18:41 Ryan_Lane you could also match based on hostname or whatever else you want
18:42 murrdoc but it matches the philosophy of the cloud
18:42 Ryan_Lane I happen to use grains because I can easily convert my hostnames into grains
18:42 Ryan_Lane (I also don't need to control access to my pillars, since that's handled via IAM policy anyway)
18:42 murrdoc yup
18:42 murrdoc you could always use etcd
18:43 murrdoc if u want a central place for your configs
18:43 murrdoc or you could use what puppet uses
18:43 Ryan_Lane hiera?
18:43 murrdoc its with a f i think
18:43 NotreDev i’m not deploying on coreos, or using etcd. eh.
18:43 murrdoc the name escapes me
18:43 murrdoc but yeah u could use hiera too
18:43 Ryan_Lane meh. use pillar or external pillar
18:44 Ryan_Lane same concept
18:44 SheetiS joined #salt
18:45 murrdoc yup
18:45 pled76 joined #salt
18:46 kballou joined #salt
18:47 n8n_ joined #salt
18:49 schmutz joined #salt
18:49 kingel joined #salt
18:51 chrisjones joined #salt
18:52 TheThing joined #salt
18:52 dvestal joined #salt
18:53 oeuftete_ joined #salt
18:54 bluenemo in order to use an init script with service.foo, what requirements does it have to fulfill? is it enough for /etc/init.d/foobar to be present and give exit statuses for start stop status? I placed one in /etc/init.d/foobar, service.running tells me 'the named service foobar is not available'
18:55 _ikke_ What os?
18:56 _ikke_ Debian based?
18:56 aparsons joined #salt
18:58 bluenemo yes, ubuntu 14.04, sysvinit, sorry.
18:58 bluenemo _ikke_,
19:00 _ikke_ bluenemo: What I can see it should also be present in one of the /etc/rc*.d/ dirs
19:01 Supermathie _ikke_, bluenemo: that shouldn't be necessary (from a Debian POV) - it should only need to be in init.d.
19:02 Supermathie bluenemo, does 'service foobar status' work?
19:03 smcquay joined #salt
19:04 bluenemo ah. making it executable helps M) Supermathie _ikke_
19:04 Supermathie reading the code looks like the module looks in /etc/init.d so yeah it should work: https://github.com/saltstack/salt/blob/v2014.1.10/salt/modules/debian_service.py#L117
19:05 Supermathie Yes it will! :D
19:05 bluenemo sorry ;)
19:05 bluenemo works like a charm ;)
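
The working combination, sketched out (foobar comes from the conversation, the source path is made up; the key detail is mode: 755 so the script is executable):

    /etc/init.d/foobar:
      file.managed:
        - source: salt://foobar/foobar.init
        - mode: 755

    foobar:
      service.running:
        - enable: True
        - require:
          - file: /etc/init.d/foobar
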
19:05 linjan joined #salt
19:06 xcbt joined #salt
19:07 nitti_ joined #salt
19:07 hobakill i give up trying to make salt + windows happy.
19:07 Supermathie hobakill, really? working well for me.
19:07 martoss joined #salt
19:08 hobakill Supermathie: i applaud your efforts sir. but i can't get the minions to see my git repo as the sole source of packages for the life of me.
19:09 Supermathie oh, I haven't tried that. Just doing simpler things. I will also admit that other than adding a job to update the minion on demand I'm not installing anything :)
19:10 Supermathie and using MSI, not that chocolately thing.
19:10 hobakill don't get me started on salt+chocolatey
19:10 maboum joined #salt
19:11 Supermathie mmmmmmmmmmmmmm http://www.lindt.ca/swf/eng/products/bars/dark-chocolate/excellence-sea-salt/
19:12 Ahlee oh wow.  No wonder this minion wasn't responding.  0.13.2, heh
19:14 aparsons joined #salt
19:15 pled76 joined #salt
19:15 scoates joined #salt
19:17 dstokes UtahDave: so, i'm seeing syntax errors in the salt installed rvm bash scripts. anyway to peg the rvm version in salt?
19:19 murrdoc joined #salt
19:20 thayne joined #salt
19:20 UtahDave dstokes: I imagine you can, but I haven't used it at all. I'd look at the docs and see what the options are.
19:21 dstokes was afraid you'd say that ;) no options in the docs. just looked at the src and it looks like it's hard coded https://github.com/saltstack/salt/blob/2c0c11a6abb46bba49d0be328325c222114365e2/salt/modules/rvm.py#L78
19:22 pled76_ joined #salt
19:22 dstokes rather https://github.com/saltstack/salt/blob/2c0c11a6abb46bba49d0be328325c222114365e2/salt/modules/rvm.py#L82
19:22 UtahDave ah, so the "stable" version?
19:22 dstokes yeah, which seems to be broken for me. gonna keep investigating. thank,s
19:23 Setsuna666 joined #salt
19:24 dvestal joined #salt
19:24 che-arne joined #salt
19:25 UtahDave you're welcome
19:31 aparsons joined #salt
19:32 aparsons joined #salt
19:32 ekristen joined #salt
19:33 basepi Ryan_Lane: you should probably note the giant warning about concurrent being quite dangerous in your blog post:  http://ryandlane.com/blog/2014/09/02/concurrent-and-queued-saltstack-state-runs/ Here's the warning:  https://github.com/saltstack/salt/blob/develop/salt/modules/state.py#L356-L359
19:33 pled76 joined #salt
19:34 basepi `queue` should be the go-to in almost every case.  If someone submits an issue on Github and they're using `concurrent=True`, I'm going to tell them we can't help them.  The state system is really not designed to run concurrently.
19:36 brandon joined #salt
19:36 KevinMGranger basepi: is it a future goal to have the state system run concurrently?
19:37 basepi Not currently, no.  It would take an insane amount of work, and in many cases just doesn't work.  I think of it like package management systems -- apt has a lock as well, because there's really no way to architect it in a multiple-process safe manner
19:37 UtahDave KevinMGranger: what do you mean by run concurrently?  Run multiple highstates at the same time? Or run different parts of the same highstate concurrently?
19:38 UtahDave oops, didn't read the backlog
19:38 dvestal joined #salt
19:40 basepi That said, there are a few states that are probably concurrent-safe.  File states, as long as they're not accessing the same file, I expect would be safe.  That kind of thing.  But package management isn't, because the package managers have their own locks.
19:40 miqui joined #salt
19:41 basepi KevinMGranger ^
19:41 ianmcshane joined #salt
19:45 bhosmer joined #salt
19:45 fredvd joined #salt
19:45 Supermathie …a few states that, through no fault of their own, are concurrent-safe… :)
19:45 pled76 joined #salt
19:46 Ryan_Lane basepi: well, I did try to give a warning in the text :)
19:46 NotreDev is there a way to create / use a temporary folder, perhaps one in memory?
19:46 Ryan_Lane basically that running two things at the same time may cause them to conflict with each other
19:47 Ryan_Lane I have some things written as states because it's easy to chain a set of actions together as a state, and provide grains and pillars
19:47 basepi Ryan_Lane: I just thought the warning could have been a bit stronger.  I just want people to know that I will not help them debug their states if they're using concurrent=True.  =P
19:47 Ryan_Lane but they may just call a set of execution modules, or they set grains
19:47 Ryan_Lane heh
19:47 Ryan_Lane ok, ok, I'll add an update ;)
19:49 Ryan_Lane basepi: ah, you commented
19:49 Ryan_Lane I just approved it ;)
19:50 basepi Hehe, cool.  =)
19:50 murrdoc hey ryan u blog everything :)
19:51 murrdoc good man
19:51 murrdoc http://ryandlane.com/blog/2014/08/29/a-saltstack-highstate-killswitch/ we were just talking about this a few days ago
19:51 Ryan_Lane heh
19:51 Ryan_Lane yeah, I think it's good to blog about stuff you're using :)
19:51 jkaye joined #salt
19:51 murrdoc dat rya, so devops right now
19:51 Ryan_Lane makes the ecosystem stronger, since people see other people using it
19:52 Ryan_Lane people like to use what other people are using, so the community grows. the community grows and writes more stuff, which benefits me :)
19:52 murrdoc much respect
19:53 SheetiS joined #salt
19:53 dvestal_ joined #salt
19:54 UtahDave Ryan_Lane++
19:54 murrdoc so lyft is all in the cloud
19:54 Ryan_Lane murrdoc: yep
19:58 Ryan_Lane basepi: hm. if you run state with concurrent, will the context data be the same?
19:58 Ryan_Lane across the two state runs?
19:58 murrdoc there should be a global way to turn off all states
19:58 Ryan_Lane murrdoc: yeah, I opened a bug for that
19:59 murrdoc got link ?
19:59 murrdoc i ll co sign
19:59 Ryan_Lane I linked to it from the killswitch blog post
19:59 basepi Ryan_Lane: I have no idea.  Your blog post was the first I heard of us "supporting" concurrent runs.  I have done no testing as to the side effects.
19:59 Ryan_Lane basepi: well, it's an option listed in the docs ;)
19:59 basepi They should be in different forks, however, and so their data sets should probably remain independent
20:00 * Ryan_Lane nods
20:00 dvestal joined #salt
20:01 murrdoc was there a way to +1 in github ?
20:02 murrdoc no thats jira
20:02 murrdoc nvm
20:02 Ryan_Lane I think people usually just do +1 in the comments :)
20:02 chrisjones joined #salt
20:02 murrdoc i went with :8ball:, then redacted
20:03 thayne joined #salt
20:04 aparsons joined #salt
20:05 Ryan_Lane basepi: I updated it with a long warning section ;)
20:06 basepi hahaha, thanks
20:06 Ryan_Lane my updates are longer than my post
20:06 basepi Hahaha
20:07 basepi It's well written, exactly what I think needed to be said.  Thanks for adding that.
20:08 Supermathie man does it really get on anybody else's nerves that the various job functions are 'lookup_jid, list_jobs, etc' rather than 'active, lookup, list, …'
20:08 murrdoc not yet
20:08 kermit joined #salt
20:08 murrdoc so wait this global disable, is it supposed to block all salt runs
20:08 basepi They could definitely have been named more consistently.  You get used to them pretty quick though.  =P
20:08 murrdoc or just block a state ?
20:09 Ryan_Lane murrdoc: I'd imagine just stuff in state
20:10 tempspace You've been doing a great job with the Salt blog entries Ryan_Lane
20:10 tempspace good stuff
20:10 Ryan_Lane thanks :)
20:10 basepi ++
20:10 murrdoc both would be cool
20:10 murrdoc so it mimics top.sls
20:11 murrdoc either block '*'
20:11 murrdoc or block on a match on a grain
20:11 tempspace Funny we both ended up blogging about different design patterns today
20:11 Ryan_Lane tempspace: indeed :)
20:11 murrdoc or disable a state
20:11 murrdoc tempspace:  link
20:11 smcquay joined #salt
20:12 tempspace murrdoc: http://www.willdurness.com/salt-pillar-driven-design-pattern/
20:12 murrdoc tempspace:  u ever meet a guy called sinh
20:13 tempspace Not that I recall, but I'm pretty crappy with names
20:13 murrdoc "became one of the first ten SaltStack Certified Engineers ever", he mentioned hanging out with someone along those lines at salt conf this year
20:14 Ryan_Lane I think 20 or so people were certified at saltconf
20:14 tempspace Does sinh work at Rackspace? There were a crap ton of them there :)
20:14 forrest lol
20:15 wangofett joined #salt
20:15 murrdoc {% if salt['pillar.get']('POSTGRESQL:INSTALLED', False) %}
20:15 murrdoc is that different from using 'I:'POSTGRESQL:INSTALLED'" in the compound match way
20:15 murrdoc sorry I@pdata:foobar
20:15 murrdoc http://docs.saltstack.com/en/latest/topics/targeting/compound.html
20:15 tempspace negative, compound match works fine
20:15 murrdoc is there a pro/con
20:15 Ryan_Lane it's slightly clearer to be explicit
20:15 murrdoc oh sinh's at edgecast
20:15 basepi compound matcher is slightly slower
20:15 pled76 joined #salt
20:16 fredvd joined #salt
20:16 basepi but I don't think it would be noticeable
20:16 murrdoc feels slower ? or you seen it in timing
20:16 murrdoc k
20:16 murrdoc cool
20:16 basepi Just from a complexity standpoint, it must be slower.  The compound matcher calls out to the pillar matcher if it finds a pillar matching string.
20:17 basepi So bypassing the compound compilation step will save time.  But it will be milliseconds.
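
For comparison, the two spellings as top.sls fragments (the POSTGRESQL:INSTALLED pillar key is the one from murrdoc's example; 'core' and 'postgresql' are placeholder state names):

    # compound matcher form
    base:
      'I@POSTGRESQL:INSTALLED':
        - match: compound
        - postgresql

    # jinja form, evaluated per-minion when the top file renders
    base:
      '*':
        - core
    {% if salt['pillar.get']('POSTGRESQL:INSTALLED', False) %}
        - postgresql
    {% endif %}
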
20:17 murrdoc does anyone here use redis as the enc ?
20:17 murrdoc uh ext_job_cache
20:21 murrdoc or not, is cool
20:21 Supermathie Should we?
20:22 ianmcshane joined #salt
20:23 tempspace murrdoc: I don't...speaking of which, I need to look and see how we're doing on the bug reports I put in about default returners
20:23 murrdoc kk
20:27 xcbt joined #salt
20:29 iggy http://salt.readthedocs.org/en/v2014.1.4/ref/configuration/master.html?highlight=fileserver_backend#fileserver-backend  the example says "gitfs" instead of "git"
20:32 manfred iggy: don't use readthedocs.org
20:32 manfred http://docs.saltstack.com/en/latest/ref/file_server/backends.html
20:33 manfred readthedocs i believe is no longer updated because we were hammering them too much
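(For reference, a minimal master-config sketch of the option under discussion; in the 2014.1 line the backend is listed as git rather than gitfs, and the remote URL below is a placeholder.)

    # /etc/salt/master
    fileserver_backend:
      - roots
      - git            # the example iggy found said "gitfs"; the backend name here is "git"

    gitfs_remotes:
      - https://github.com/example/salt-states.git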
20:34 murrdoc joined #salt
20:34 iggy well... the latest docs don't work if you're using 2014.1.10 (which I am)
20:34 iggy so... I guess I'm boned either way
20:35 murrdoc joined #salt
20:36 tomspur joined #salt
20:37 aparsons joined #salt
20:39 NotreDev is it possible to build a docker image from a Dockerfile that is piped in? I have a jinja2 template that produces my dockerfile.
20:40 Supermathie NotreDev, Yep: `docker build -t nameofimage -` works (I think as long as you don't reference a directory)
20:40 Supermathie Or you could specify a URI
20:41 NotreDev Supermathie: i’m generating a docker image from a template, and i’m trying to pipe it in
20:42 NotreDev your approach works from the commandline, but i can’t find any supporting documentation on how to use it in Salt https://salt.readthedocs.org/en/latest/ref/states/all/salt.states.dockerio.html#module-salt.states.dockerio
20:42 Supermathie NotreDev, yeah, pipe it in and specify `-` as the source
20:42 NotreDev i did find dockerio.installed
20:43 a1 joined #salt
20:45 jalaziz joined #salt
20:45 NotreDev Supermathie: http://pastebin.com/3rJUF8yP
20:46 al joined #salt
20:48 NotreDev Supermathie: how do i pipe it in? what is the argument that it should be piped to?
20:48 NotreDev or just use cmd.run
20:48 Supermathie sorry, I gave the generic docker answer rather than the salt answer
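(A rough sketch of the cmd.run route mentioned above: render the Dockerfile from a hypothetical Jinja template first, then pipe it to docker build. All paths and the image tag are placeholders, and this bypasses the dockerio state entirely.)

    render_dockerfile:
      file.managed:
        - name: /tmp/Dockerfile.rendered
        - source: salt://docker/Dockerfile.jinja
        - template: jinja

    build_image:
      cmd.run:
        # "docker build -" reads the Dockerfile from stdin
        - name: docker build -t myimage - < /tmp/Dockerfile.rendered
        - require:
          - file: render_dockerfile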
20:49 schmutz joined #salt
20:49 abe_music joined #salt
20:50 NotreDev ok
20:51 NotreDev these docs are no longer maintained? https://salt.readthedocs.org/en/latest/contents.html
20:55 UtahDave use docs.saltstack.com
20:56 NotreDev UtahDave: thanks
20:56 ajolo Hey, I've just installed Salt using the AWS default AMI (amzn-ami-hvm-2014.03.2.x86_64-ebs) and I'm getting this error:
20:56 ajolo Well, warning:
20:56 ajolo /etc/init.d/salt-master start
20:56 ajolo Starting salt-master daemon: /usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec.  You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
20:56 ajolo _warn("Not using mpz_powm_sec.  You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
20:56 ajolo /usr/lib64/python2.6/site-packages/Crypto/Util/randpool.py:40: RandomPool_DeprecationWarning: This application uses RandomPool, which is BROKEN in older releases.  See http://www.pycrypto.org/randpool-broken
20:56 ajolo RandomPool_DeprecationWarning)
20:57 ajolo It seems AWS ships libgmp version 4.3, I found a thread related to this but I couldn't find a solution
20:57 meteorfox joined #salt
20:57 UtahDave ajolo: Yeah, that's really annoying.  Let me get you a state that will fix it in a brutish way.
20:58 ajolo lol
20:58 UtahDave It's not actually a Salt issue. It's a library that's kicking out that warning.
20:58 LBJ_6 joined #salt
20:59 LBJ_6 left #salt
20:59 littleidea joined #salt
21:00 ajolo UtahDave: Yes, I found the same report for other projects
21:00 ajolo I could compile libgmp >= 5
21:01 pled76 joined #salt
21:01 aparsons_ joined #salt
21:02 fredvd joined #salt
21:02 UtahDave ajolo: try this  https://gist.github.com/UtahDave/eb64e806328e5ebab5b7
21:02 SheetiS ajolo: I was going to fix it by compiling libgmp into its own prefix and then building python Crypto against it.  I am also using AWS and was annoyed by it.
21:03 murrdoc joined #salt
21:03 UtahDave ajolo: BE WARNED!!   That does NOT fix the actual problem.  It just suppresses the warning, so use at your own risk.
21:03 ajolo UtahDave: I will, thanks !
21:04 UtahDave you're welcome!
21:05 xcbt joined #salt
21:05 SheetiS UtahDave: What is the scope of the risk?  I've only got 2 distro/arch combos that I'd have to build libgmp against, and was going to populate it with a file.recurse once I built it, but if the risk is not significant, I am not certain that I want to bother.
21:06 SheetiS (e.g. compile on 1 machine and then distribute via salt to all the minions)
21:06 UtahDave SheetiS: I really don't know what the scope of the risk is, unfortunately.
21:06 SheetiS ok.  I'll cover my behind then and do a little extra work :)
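(A loose sketch of the compile-once-and-distribute idea SheetiS describes, assuming the rebuilt libgmp prefix has been copied into a hypothetical salt://libgmp/opt tree on the master; the paths are illustrative only.)

    # push the prebuilt prefix out to the minions; PyCrypto is then built/linked against it
    libgmp_prefix:
      file.recurse:
        - name: /opt/libgmp
        - source: salt://libgmp/opt
        - include_empty: True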
21:08 kingel joined #salt
21:08 smcquay joined #salt
21:10 johngrasty joined #salt
21:11 TTimo joined #salt
21:13 synestine joined #salt
21:13 murrdoc joined #salt
21:15 synestine Is there a document on JINJA syntax? I'm trying to figure out a control structure and it's silently failing now.
21:16 pled76 joined #salt
21:16 SheetiS I use http://jinja.pocoo.org/docs/dev/templates/ when I can't find things in the salt docs
21:16 synestine Thanks. I'll look there.
21:17 aparsons joined #salt
21:18 verwilst joined #salt
21:18 NotreDev State dockerio.build found in sls circle.vitrine is unavailable - docker-py is installed / i can import it if i launch python. i’ve restarted salt-master (running masterless). anything else?
21:18 aparsons joined #salt
21:19 verwilst if i run salt-call state.highstate, all is fine, but if i run salt '*' state.highstate from the master, that same host gives errors on items that i fixed before
21:19 verwilst is something cached when run from the master?
21:23 KennethWilke joined #salt
21:23 bluenemo I want to add a linux user and salt gives me back: These values could not be changed: {'homeDoesNotExist': '/srv/wechange'}  /srv is an LVM LV with an ext4 filesystem, also created with salt, which already exists in this run. my state + output http://paste.debian.net/119010/, minion running 2014.7.0rc1-1549-gb646f9f (Helium).
21:24 bluenemo what puzzles me is that I gave the state createhome: true (and it actually worked before I re-setup the minion from scratch (new install of ubuntu) to see if all my fancy states work out when applied freshly)
21:25 TTimo joined #salt
21:25 bluenemo the user does not exist yet
21:25 murrdoc True ?
21:25 anditosan joined #salt
21:26 bluenemo murrdoc, you mean me?
21:27 tenshi joined #salt
21:27 murrdoc hmm, i would try switching to True (weak assist) and also try a require on the /srv directory bluenemo
21:27 murrdoc that might work
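(A small sketch of murrdoc's suggestion, assuming /srv is managed by a mount state elsewhere in the run; the IDs, device path and username are placeholders.)

    srv_mount:
      mount.mounted:
        - name: /srv
        - device: /dev/mapper/vg0-srv    # placeholder LV
        - fstype: ext4

    wechange_user:
      user.present:
        - name: wechange
        - home: /srv/wechange
        - createhome: True               # uppercase True
        - require:
          - mount: srv_mount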
21:27 KennethWilke bluenemo, does it work on a second run?
21:28 bluenemo this sounds familiar: https://github.com/saltstack/salt/issues/4943
21:28 racooper joined #salt
21:28 bluenemo murrdoc, you mean True as in uppercase T? KennethWilke no
21:29 KennethWilke seems like the minion must be having an issue doing a mkdir in /srv if it can't make the new home dir there
21:29 synestine Okay, I must be missing something basic here. Can anyone tell me why the following code would not work?
21:30 synestine {% if grains['os'] == 'CentOS' %}
21:30 synestine {% if grains['osmajorrelease'] == 7 %}
21:30 synestine include:
21:30 synestine - os.basepkgs-centos7
21:30 synestine {% elif grains['osmajorrelease'] == 6 %}
21:30 synestine include:
21:30 synestine - os.basepkgs-centos6
21:30 synestine {% endif %}
21:30 synestine {% endif %}
21:30 Corey synestine: Pastebin.
21:30 KennethWilke it didn't work because you pasted it into irc without using pastebin!
21:30 bluenemo :D
21:31 bhosmer joined #salt
21:31 bluenemo Hm. I can create folders just fine there (in /srv) as root myself (by hand).
21:31 KennethWilke synestine, what's the error you get say
21:31 KennethWilke ah! bluenemo the createhome doesn't do a mkdir /srv, maybe that's the issue?
21:31 eliasp synestine: https://github.com/saltstack/salt/pull/15337
21:31 Supermathie synestine, y u not use grain selectors in the states file? (is that a states file?)
21:32 erjohnso joined #salt
21:32 synestine http://pastebin.com/jeRhmxn9
21:32 KennethWilke it'll only make the full home dir path, not parent paths
21:32 bluenemo KennethWilke, no /srv/ is already present, it's a mountpoint for an LVM LV (ext4)
21:32 bluenemo it's supposed to do a mkdir /srv/wechange, which it doesn't ;)
21:32 KennethWilke oh sorry i misread what you said :x
21:32 bluenemo np
21:33 bluenemo hm. killing cache and restarting on both ends doesnt help
21:33 aparsons joined #salt
21:33 synestine Supermathie: Yes, that's a snip from one of my state files. I'm working on integrating a CentOS 7 test box in with my existing CentOS 6 boxen. There are some package name differences, so I want to include the appropriate files.
21:34 eliasp synestine: make sure you have the fix from #15337, otherwise your CentOS 7 boxes won't work
21:34 bluenemo it works when I create the home before the run by hand :(
21:34 synestine KennethWilke: I get no error. I apply the highstate to any of my boxes and neither the osmajorrelease == 6 or == 7 gets run. No error whatsoever.
21:34 Supermathie synestine, seems simpler & more correct to use grain matchers as in http://docs.saltstack.com/en/latest/topics/targeting/compound.html
21:35 bluenemo if I then remove it again, it works too. hm. seems like some cache bug or sth.
21:35 bluenemo i'm too nooby to debug that :(
21:35 KennethWilke lol awww bluenemo
21:35 bluenemo :D
21:36 KennethWilke lie!
21:36 murrdoc stop master
21:36 murrdoc stop slave
21:36 bluenemo it's half past 11 in the evening here, maybe I'm too tired too :P
21:36 murrdoc nuke /var/cache/salt/*
21:36 murrdoc hate on random dude on irc
21:36 murrdoc start master
21:36 KennethWilke right, you'll sleep on it and fix it in 10 minutes the following day
21:36 murrdoc start slave
21:36 murrdoc see wassup
21:36 bluenemo hm I hardly think so :D
21:36 KennethWilke synestine, maybe none of those conditions are matching
21:37 KennethWilke synestine, try a state.show_sls call on that minion
21:37 morganfainberg joined #salt
21:38 QuinnyPig morganfainberg returns!
21:38 morganfainberg shhhhh
21:38 eliasp synestine: did you look at #15337? as long as you don't apply this fix, "osmajorrelease == " will never be matched
21:39 * morganfainberg remembers to change nick nextime :P
21:39 Supermathie synestine, it's possibly because osmajorrelease is a tuple
21:40 Supermathie {'centos-ad2012r2.storagelab.netdirect.ca': {'osmajorrelease': ['6', '5'],  'osrelease': '6.5'}}
21:40 eliasp Supermathie: synestine: osmajorrelease is supposed to be a number, but is returned as list… so please just have a look at https://github.com/saltstack/salt/pull/15337 - the last time I repeat this
21:41 Supermathie eliasp, you made it sound like it was specific to CentOS 7
21:42 eliasp it is, because it applies only to os_family == RedHat
21:43 eliasp well, ok… it might have sounded like CentOS 7 but not CentOS 6 specific… sorry about that
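(Until the fix from #15337 is deployed, one hedged workaround is to derive the major version from the osrelease grain, which is a plain dotted string on RedHat-family systems, instead of relying on osmajorrelease.)

    {% set os_major = grains['osrelease'].split('.')[0] %}
    {% if grains['os'] == 'CentOS' and os_major == '7' %}
    include:
      - os.basepkgs-centos7
    {% elif grains['os'] == 'CentOS' and os_major == '6' %}
    include:
      - os.basepkgs-centos6
    {% endif %}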
21:43 bluenemo ha. installed etherpad like a boss >:) i'm so in love with saltstack! the only thing i'm really too stupid for is writing macros and templating with them..
21:44 bluenemo eliasp, still hanging on the nginx macro for templating ;)
21:44 NotreDev i can’t figure out why the dockerio module is unavailable. i can prove that salt knows about it: http://pastebin.com/nqnkGHcw
21:46 pled76 joined #salt
21:46 NotreDev it looks like i have access to the dockerio module, but not the dockerio state.
21:47 kermit joined #salt
21:48 quantumriff joined #salt
21:49 talwai joined #salt
21:50 quantumriff I want to add ssh_auth public keys, but only after a user has logged in the first time.. is there a way I can only run those, if a directory exists?
21:52 econnell joined #salt
21:54 bhosmer_ joined #salt
21:55 jalaziz joined #salt
21:55 UtahDave NotreDev: use docker.build
21:56 UtahDave NotreDev: the file is dockerio.py   but if you look at the examples  http://docs.saltstack.com/en/latest/ref/states/all/salt.states.dockerio.html#module-salt.states.dockerio
21:56 NotreDev UtahDave: it looks like i should have been using docker.built for my state.
21:56 UtahDave docker.buit
21:56 UtahDave built
21:56 NotreDev yep
21:56 UtahDave yeah
21:57 NotreDev one other thing - anyway to create a temporary file? or just chain states? i’m writing a private key from my pillar to a Dockerfile, and i’d like for it to be removed. Unfortunately, docker.built doesn’t support building an image from stdin
21:57 kelseelynn joined #salt
21:57 UtahDave NotreDev: you could write the dockerfile to disk, then later have  a file.absent state to delete the file when you're done
21:58 NotreDev UtahDave: ok, yeah i’ll chain them then
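(A sketch of that chain; the image name, template path and Dockerfile location are placeholders, and the build-context parameter passed to docker.built is assumed -- check the dockerio state docs for the exact argument name.)

    vitrine_dockerfile:
      file.managed:
        - name: /srv/docker/vitrine/Dockerfile
        - source: salt://circle/Dockerfile.jinja   # contains the pillar-supplied private key
        - template: jinja

    vitrine_image:
      docker.built:
        - name: vitrine
        - path: /srv/docker/vitrine                # assumed parameter name
        - require:
          - file: vitrine_dockerfile

    remove_rendered_dockerfile:
      file.absent:
        - name: /srv/docker/vitrine/Dockerfile
        - require:
          - docker: vitrine_image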
21:58 xcbt joined #salt
21:59 synestine Thanks all, it looks like it is that osmajorrelease bug that's biting me. I applied the fix to the two test machines I'm allowed to monkey with and the state applies correctly.
22:01 UtahDave quantumriff: one way to do that is to put a jinja block around your ssh section where jinja checks for the directory first
22:02 aquinas joined #salt
22:04 quantumriff utahDave: thanks.. I use lots of jinja blocks and checks, but didn't know it could be used to check if a file exists..
22:05 UtahDave yeah, something like  {% if salt['file.exists']('/home/joeuser/') %}
22:05 shaggy_surfer joined #salt
22:06 UtahDave test that before pushing to production, though!
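(A minimal sketch of that pattern with a placeholder user and key; this version checks with file.directory_exists since the target is a home directory -- verify which existence function your salt release ships.)

    {% if salt['file.directory_exists']('/home/joeuser') %}
    joeuser_authorized_key:
      ssh_auth.present:
        - user: joeuser
        - name: ssh-rsa AAAAB3Nza...placeholder... joeuser@example.com
    {% endif %}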
22:06 murrdoc tests ? we dont need no stinking tests
22:06 murrdoc ./me is kidding
22:07 wangofett UtahDave: Shouldn't salt '*' cmd.run 'dir' cwd='/salt/salt-2014.1.10.win-amd64/salt-2014.1.10-py2.7.egg/salt/states/' | grep env
22:07 wangofett UtahDave: give me the environ module?
22:07 jalaziz joined #salt
22:08 wangofett https://github.com/saltstack/salt/issues/13048 <-- I'm using 2014.1.10, so it's a bit newer than this issue
22:09 TheThing joined #salt
22:09 wangofett I mean... it exists in 2014.7 (https://github.com/saltstack/salt/tree/2014.7/salt/states)... So I'm a bit confused as to why it's missing
22:10 jkaye joined #salt
22:10 UtahDave I think it should be in 2014.7 branch, which is in RC still
22:11 bhosmer joined #salt
22:12 chitown UtahDave: hey.. its craig from "linkedin"
22:12 UtahDave hey, chitown! how's it going?
22:12 chitown but, i quit linkedin :/
22:12 chitown did you get my email re:book?
22:12 chitown weeks ago
22:13 n8n joined #salt
22:13 UtahDave Yeah, I did, actually.  Is it still open for review?
22:13 UtahDave Congrats on the new job, by the way!
22:15 aparsons joined #salt
22:16 forrest chitown, are you writing a salt book?
22:16 forrest or have written one
22:18 ninkotech joined #salt
22:18 TTimo joined #salt
22:19 SheetiS joined #salt
22:19 synestine left #salt
22:20 chitown UtahDave: yes, it is! :)
22:20 chitown forrest: HEY! i was going to ping you :)
22:20 chitown http://shop.oreilly.com/product/0636920033240.do
22:20 chitown i'd love to get some input
22:20 chitown it is *SUPER* rough
22:20 UtahDave chitown: just created my account and responded to your email
22:21 chitown awesome
22:21 chitown forrest: if you have some time...
22:21 chitown i can send you an email with details (it's just a template)
22:22 chitown essentially, you just need to create an account on an oreilly site and send me ur deets and i can grant access
22:22 chitown the chapters are still in flux
22:22 chitown so, if oyu think something should be added....
22:22 chitown UtahDave: so, the question i was going to ask... :)
22:23 chitown for environments, anything other than base are used in alpha order
22:23 aparsons joined #salt
22:23 chitown is there any open request to specify the order?
22:24 UtahDave chitown: Hm. So in your scenario a minion would match in multiple environments and what order will they be executed in?
22:24 TTimo joined #salt
22:25 manfred chitown: kind of, Ryan_Lane had one where he wanted to make sure that it was all loaded in order.
22:25 manfred pretty sure that it is in 2014.7, except for pillars
22:26 chitown ya, thats kinda what i am after
22:26 Ryan_Lane chitown: you can specify order in the top file
22:26 manfred Ryan_Lane: is the pillar top file ordered yet?
22:26 Ryan_Lane no clue. haven't checked against dev yet
22:26 manfred kk
22:26 Ryan_Lane my issue wasn't closed
22:26 manfred yar
22:26 chitown Ryan_Lane: is there an example somewhere?
22:27 chitown or is it just the order given in the top file
22:27 chitown i.e. top down
22:27 Ryan_Lane chitown: http://ryandlane.com/blog/2014/08/26/saltstack-masterless-bootstrapping/
22:27 Ryan_Lane see where I describe the top files
22:27 Ryan_Lane it's a keyword argument like match is
22:28 Ryan_Lane order: <number>
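(A hedged sketch of the top-file ordering Ryan_Lane describes, with made-up state names; order sits alongside the match keyword, and lower numbers are evaluated first.)

    # top.sls
    base:
      '*':
        - match: glob
        - order: 1
        - core
      'roles:webserver':
        - match: grain
        - order: 2
        - webserver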
22:28 chitown interesting...
22:28 chitown lol... thats what i was going to suggest! :)
22:28 mosen joined #salt
22:28 Ryan_Lane that's available on every state and in the top files
22:28 Ryan_Lane it would be way better if it was just ordered as written
22:28 chitown +1
22:29 chitown that said... i dont see what it would change THAT often
22:29 chitown i.e. you do some work initially and then it kinda sites there
22:29 Ryan_Lane yeah, mine will basically never change
22:29 chitown sits*
22:29 Ryan_Lane but it's just annoying
22:29 chitown :D
22:29 chitown totally
22:29 Ryan_Lane and unintuitive
22:29 chitown "why isnt this working!?!?!"
22:30 chitown "oh crap... i gave 2 of them #3... fml"
22:31 mordonez_ joined #salt
22:32 dstokes i don't see a 2014.7.0 milestone on github. how can we determine which fixes / features are scheduled for that release?
22:32 dstokes *milestone or label*
22:33 forrest chitown, book us an utter failure, simply due to animal choice :P
22:33 forrest *is
22:33 mordonez_ joined #salt
22:34 chitown sigh... that wasnt really my choice
22:34 forrest lol
22:34 forrest is it an eel?
22:34 forrest I can't tell
22:34 chitown they said that if i felt strongly about an animal, they would take it into account
22:34 forrest did you say 'not that thing'
22:34 chitown http://en.wikipedia.org/wiki/Conger
22:34 chitown lol
22:34 chitown it didnt work that way :)
22:34 chitown "oh btw, your animal is..."
22:35 chitown to be fair, if i had raised a big stink, they may have changed it
22:35 forrest chitown, lol, so they gave you a huge violent eel
22:35 chitown honestly, i didnt care :)
22:35 forrest yeah I'm just joking with you
22:35 forrest just funny
22:36 chitown ya... when i saw it, i was like... really... not what i would have guessed
22:36 nitti joined #salt
22:36 forrest haha
22:36 chitown maybe it was some salt water connection thing
22:36 chitown idk
22:36 chitown anyway, if you want to read it, just pm your email
22:36 chitown or public... doesnt matter to me :)
22:36 forrest I think they are just running out of animals
22:37 chitown ya, they have quite a few books at this point
22:37 chitown and they try to keep some kind of consistency within a "topic"
22:37 chitown there is a puppet book out... or coming
22:37 chitown and 2 chef books
22:37 forrest when I finish my book in 2025, I'm just going to put a square on it
22:37 forrest and call it 'salt book'
22:37 forrest BOOM
22:38 shaggy_surfer joined #salt
22:38 dccc_ joined #salt
22:39 xcbt joined #salt
22:40 NotreDev is there a requisite that is like “require” but executes regardless of another execution state? i want to delete a file only after a state has been attempted
22:40 dstokes basepi: there any visibility on which features / fixes are scheduled for the next stable (2014.7.0)?
22:40 yomilk joined #salt
22:41 NotreDev maybe a require / onfail combo?
22:42 basepi dstokes: Define "visibility"  ;)
22:42 forrest lol
22:42 basepi dstokes: Any commit in 2014.7 is included in that release!
22:42 forrest basepi, troll of trolls :P
22:42 basepi Nah, I'm kidding.  But we're trying to hit on major features in the release notes
22:42 basepi And versionadded directives are useful for seeing whether something is in there or not
22:42 dstokes ;_; github visibility, i.e. "my bug is def important enough to be considered for next stable"
22:43 dstokes milestone, label etc
22:43 basepi And anything before July 15 is in
22:43 basepi dstokes: ohhhhh, for still-open issues?
22:43 dstokes basepi: more curious about which things _will_ make it in, not which _have_ made it in
22:43 dstokes yea
22:43 dstokes i assume there's an internal method for deciding when to freeze the release
22:44 Ryan_Lane basepi: thinking of things that needed to go in. have you had a chance to look at the backport requests I made last week?
22:44 Ryan_Lane I had realized I missed about 6 PRs
22:44 basepi dstokes: That's much harder.  We're currently stretched *very* thin.  We're working on hiring like 50 people in the next 6 months, but we need those people *now*.  So we're just trying to get "as much as we can".
22:44 dstokes would be nice to be able to have discussions about what the things that _will_ make it in are. i.e. priority scheduling based on future releases
22:45 dstokes basepi: didn't know you guys were hiring ;)
22:45 basepi dstokes: we will block on most high severity bugs
22:45 basepi dstokes: otherwise, it's no promises at the moment.  We're technically feature-frozen already for 2014.7
22:45 dstokes basepi: that's exactly the reason why quantifying what makes it into the release is a good idea
22:45 basepi http://www.saltstack.com/careers
22:45 * QuinnyPig perks up
22:46 dstokes i.e. "these are the things we think need to be fixed for next release, everything else waits"
22:46 basepi QuinnyPig: loving the new username.  =P
22:46 QuinnyPig basepi: Thanks!
22:46 forrest QuinnyPig, I'm not sure if your pun is worse than all the Salt ones or not...
22:46 basepi dstokes: right now, that's high severity bugs.  but community members are making a bunch of others, so that's not to say yours won't get in
22:47 basepi And we will often grab an easy one as it flies by as well
22:47 basepi in the medium->low spectrum
22:47 dstokes got it. the one i'm referring to is high severity. good to know that means "next release"
22:47 basepi dstokes: link?  I'm just curious
22:47 dstokes s/means/hopefully means/
22:47 dstokes https://github.com/saltstack/salt/issues/14499
22:48 dstokes any time new servers are coming up and another goes down, my whole stack freezes..
22:48 forrest dstokes, well, stop provisioning then, duh.
22:48 dstokes OMGURRITE!
22:48 forrest This is why they pay me the big bucks, clear and obvious solutions!
22:48 dstokes `salt \* state.sls take.a.break.guise`
22:48 basepi dstokes: cachedout was looking at that one.  And I think it ended up being more complex than we anticipated.  There's a chance that it won't be in the .0, but it's high on our list for sure.
22:49 forrest dstokes, haha
22:49 dstokes basepi: hopefully a patch shortly after. not trying to beat a dead horse here, but it totally breaks dynamic infra
22:49 druonysus joined #salt
22:50 dstokes basepi: also not even sure i'm directing this at the right person. sry if my requests are mis-targeted ;)
22:50 Topic for #salt is now Welcome to #salt | 2014.1.10 is the latest | Help us test the 2014.7 RC! http://bit.ly/salt-rc | SaltStack is hiring! http://www.saltstack.com/careers | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
22:50 scoates joined #salt
22:50 rap424 joined #salt
22:51 basepi dstokes: actually, I am the right person.  Issue tracker is currently my thing, though eventually it will go to the QA team we're building.
22:51 dstokes got it
22:51 CeBe1 joined #salt
22:51 * forrest watches basepi rub his hands together excitedly
22:51 basepi Hehe, I am excited to write more code....
22:53 CeBe joined #salt
22:53 CeBe1 joined #salt
22:58 Gareth basepi: as soon as you finish your TPS reports...
22:59 basepi Gareth: I'm missing the joke.  Not sure what TPS is...
22:59 Outlander joined #salt
22:59 Gareth basepi: Office Space?
23:00 jkaye joined #salt
23:00 miles32 what you need is more bosses
23:00 miles32 then you will know what a TPS report is
23:01 forrest Gareth, WOOSH
23:02 aparsons joined #salt
23:03 basepi Gareth: I'm sorry to admit I haven't seen it.  It's been on my list for so long but I keep forgetting about it.
23:03 tempspace hiring a QA engineer, awesome!
23:03 chitown can we down vote people?
23:03 chitown havent seen office space... working in tech!?!? sigh... so disappointing
23:03 basepi I know....
23:03 basepi I am disappoint.
23:04 miles32 IRC existed before the concept of the downvote, it will exist long after
23:04 miles32 so no
23:04 chitown ask your boss to screen it at the office
23:04 basepi Hehe, that sounds like a plan
23:04 chitown :)
23:04 tempspace basepi: How many QA engineers you guys hiring for the team?
23:04 basepi I do not know an exact number.  A few.
23:05 dude051 joined #salt
23:05 basepi tempspace ^
23:05 tempspace :)
23:06 dude051 joined #salt
23:07 aparsons joined #salt
23:07 tempspace I was just playing an Office Space scene today - https://www.youtube.com/watch?v=fjsSr3z5nVk - That was me taking our neo4j infrastructure out today
23:08 n8n joined #salt
23:08 CeBe joined #salt
23:12 Gareth basepi: You need to leave the office right now and go watch it.  Just yell out as you're leaving, "Haven't seen Office Space.  Must watch now!"
23:12 basepi Hahaha
23:13 DaveQB joined #salt
23:14 Gareth :)
23:16 aparsons joined #salt
23:19 smcquay_ joined #salt
23:19 smcquay_ joined #salt
23:22 aparsons joined #salt
23:23 aquinas joined #salt
23:26 aparsons joined #salt
23:26 gzcwnk joined #salt
23:26 gzcwnk anyone in?
23:28 miles32 nope
23:28 gzcwnk ah excellent
23:29 bhosmer joined #salt
23:29 QuinnyPig Whee.
23:29 aparsons joined #salt
23:30 miles32 rebooting 120 servers at the same time? sure we can do that
23:30 miles32 should you? no
23:30 gzcwnk yeah just flick the mains switch, or drop a spanner on the UPS, or forget to put diesel in the genny, no problem
23:30 miles32 that's shutting down
23:31 manfred miles32: need a proxy salt module that controls dracs
23:31 gzcwnk well then u fix the hardware issue and they all boot
23:32 gzcwnk assuming the hardware can take the power surge.
23:32 miles32 you can write a "minion" wrapper for a DRAC
23:33 miles32 if you wanted
23:33 miles32 it's like on page 6 of the documentation for what minions can control or something
23:33 gzcwnk is there a limit on node groups?  i seem to get 8 out of 15 replying
23:33 miles32 there example was networking gear
23:34 gzcwnk i mean minions in a group
23:34 gzcwnk either taht or the minions dont respond
23:34 gzcwnk like they are sleeping
23:35 miles32 yeah that "sleeping behavior" is known and supposedly being worked on
23:35 miles32 Ryan_Lane: had something to say on the subject a few days ago
23:35 miles32 if you repeat the test.ping a few times they'll wake up
23:35 gzcwnk i wondered if it was the same bug biting me again  :(
23:36 gzcwnk we have tried test.ping, no joy
23:36 gzcwnk every time it's 8, just sometimes a different 8, plain weird
23:36 miles32 do they respond individually if you bypass the node groups?
23:36 mosen drac proxy minion?
23:36 rome joined #salt
23:36 gzcwnk not really....sometimes they do, sometimes not
23:36 yomilk joined #salt
23:37 gzcwnk hence i thought it might be this bug
23:37 Ryan_Lane miles32: I just gave a link to the thread
23:37 gzcwnk just wanted to make sure it wasnt me making a config error
23:37 Ryan_Lane I have no clue about it :)
23:37 miles32 mm, but I bet you remember where you got the thread from
23:38 gzcwnk just weird its always 8
23:39 kermit joined #salt
23:40 gzcwnk If I use a '*' and not -N i get more than 8
23:40 gzcwnk responding
23:40 xcbt joined #salt
23:42 miles32 weird
23:43 aparsons joined #salt
23:43 miles32 (I mean it's weird as in not-least-surprising behavior; it's not that I understand saltstack's idiosyncrasies well enough to say this is weird for salt specifically)
23:49 aparsons joined #salt
23:50 aquinas_ joined #salt
23:50 nitti joined #salt
23:55 aparsons joined #salt
23:56 mordonez_ joined #salt
