
IRC log for #salt, 2014-09-26


All times shown according to UTC.

Time Nick Message
00:00 kingel joined #salt
00:01 catpig joined #salt
00:03 UtahDave fannet: sorry, I got pulled away to a meeting
00:03 aparsons joined #salt
00:03 fannet ok I'll be around for a while
00:04 aparsons joined #salt
00:08 TyrfingMjolnir joined #salt
00:13 UtahDave fannet: attempting to spin up the first vm right now
00:13 fannet cool
00:16 jalaziz_ joined #salt
00:24 mapu joined #salt
00:24 UtahDave fannet: I keep getting this error:
00:24 UtahDave image: debian-7-wheezy-v20140924  size: g1-small  location: europe-west1-b  network: default  tags: '["one", "two", "three"]'  metadata: '{"one": "1", "2": "two"}'  use_persistent_disk: True  delete_boot_pd: False  deploy: True  make_master: False  provider: google
00:24 UtahDave oops, not that. just a sec
00:24 UtahDave salt-cloud: error: There was a profile error: The defined key_filename '/root/.ssh/google_compute_engine' does not exist
00:24 UtahDave Have you gotten that before?
00:25 mapu joined #salt
00:29 mapu joined #salt
00:30 mapu joined #salt
00:31 fannet no - did you create a key through the OAUTH section on GCE?
00:31 UtahDave I had to copy my key to that location.
00:31 mapu joined #salt
00:32 UtahDave That seemed to help, but then saltcloud is saying it can't start salt.     Hm. getting closer.
00:32 UtahDave sorry, this is my first time using gce with salt cloud
00:33 halfss joined #salt
00:35 mapu joined #salt
00:36 jms joined #salt
00:36 yomilk joined #salt
00:37 jms we have a few weeks of salt consulting work, including salt-cloud.  If interested, please drop me an email jscharber@vidscale.com
00:38 jms BTW, we were happy to use saltstack.org, they were just booked out too far, good sign
00:39 acabrera joined #salt
00:40 delinquentme joined #salt
00:46 tk75 joined #salt
00:49 fannet @UtahDave - no worries
00:50 UtahDave I can spin up the vm just fine, but it's not getting salt-cloud installed for some reason.
00:51 loz--_ joined #salt
00:54 snuffeluffegus joined #salt
00:54 fannet hmm what does your provider and profile config look like
00:57 UtahDave I'm using your profile exactly.  I think my ssh key isn't being accepted when it's trying to ssh in
00:59 fannet very strange
01:01 fannet Same version of salt?
01:02 fannet [WARNING ] /usr/lib/python2.7/dist-packages/salt/cloud/clouds/gce.py:527: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6   vm_['name'], exc.message  - could this have something to do with it
01:03 halfss joined #salt
01:04 halfss joined #salt
01:04 UtahDave fannet: no, I don't think so
01:04 fannet it's barfing on the json response I believe
01:11 halfss joined #salt
01:12 halfss joined #salt
01:13 delinquentme joined #salt
01:15 UtahDave fannet: have you set your user and ssh key?
01:16 fannet in the profile?
01:16 UtahDave anywhere
01:17 fannet does it need to be separate from the GCE service account key?
01:17 thayne joined #salt
01:19 UtahDave I'm not sure. the docs aren't super clear
01:20 UtahDave they seem to indicate that you can upload a key through the web console or through a metadata tag, but I can't find that on the website nor can I get the metadata to work
01:20 UtahDave I may have to grab the original dev tomorrow
01:20 whitepaws hello everyone, how would you refer to a grain with a hyphen in its name, in jinja?
01:21 whitepaws eg salt -v host cmd.run template=jinja 'echo {{grains.ec2_instance-id}}'
01:21 whitepaws SaltRenderError: Jinja error: unsupported operand type(s) for -: 'StrictUndefined' and 'StrictUndefined'
01:21 UtahDave try this:   {{ grains['ec2_instance-id'] }}
01:23 whitepaws that did it thanks!
01:23 whitepaws do you know if there's a more graceful way to do that on the shell, than this?
01:23 whitepaws cmd.run template=jinja 'echo {{ grains['\''ec2_instance-id'\''] }}'
01:23 whitepaws the many ' are kinda hard to read even though i am used to escaping them.
01:24 UtahDave try:      cmd.run template=jinja 'echo {{ grains["ec2_instance-id"] }}'
01:25 fannet @UtahDave: ok I'll ping you tomorrow
01:25 seanz Greetings. Can I run multiple sls files manually with state.sls, if I give it a comma-separated list?
01:25 UtahDave cool
01:25 UtahDave seanz: yep
01:25 seanz UtahDave: Has that always been a feature? We're on 0.16.4.
01:26 UtahDave seanz: Yeah, from as long as I can remember.
01:26 seanz Also, does it run in the order specified?
01:26 whitepaws UtahDave: not quite as successful, SaltRenderError: Jinja variable 'dict' object has no attribute '"ec2_instance-id"'; line 1
01:26 whitepaws but thank you
01:26 fannet @UtahDave - fyi: https://github.com/naegelin/saltapi  I slapped this primitive saltapi php client together; not sure if there is one out there, as this one is super basic.
01:27 UtahDave seanz: My first guess is that it would run them in order, but I'm not sure. Especially with as old of a version as you have.  Definitely test that.
01:27 seanz UtahDave: Thanks!
01:27 UtahDave fannet: That's awesome!
01:28 __number5__ whitepaws: you'll need a ec2_info custom grains https://github.com/saltstack/salt-contrib/blob/master/grains/ec2_info.py
01:28 UtahDave seanz: you're welcome.
01:28 UtahDave carmony: hey did you see this? https://github.com/naegelin/saltapi    fannet wrote it
01:29 fannet it's worth what you paid for it ;)
01:29 UtahDave :)
01:29 whitepaws haha
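[Editor's note: the Jinja error whitepaws hit comes from how the template is parsed. A hyphen is not a valid identifier character in Jinja (or Python), so attribute-style access like `grains.ec2_instance-id` parses as a subtraction of two undefined names, while subscript access passes the key as a plain string. A minimal plain-Python illustration (the grain value is made up):]

```python
# Attribute-style access cannot reach a key containing a hyphen, because the
# hyphen is parsed as a minus sign. Subscript access takes the key as a string,
# so any characters in the key are fine.
grains = {"ec2_instance-id": "i-0123456789abcdef0"}  # example value, not from the log

# A dict has no such attribute, so attribute-style lookup finds nothing:
assert getattr(grains, "ec2_instance-id", None) is None

# Subscript access works regardless of the hyphen:
assert grains["ec2_instance-id"] == "i-0123456789abcdef0"
```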
01:32 whiteinge seanz: comma-separated are all combined into a single dictionary and run in default order, which on your version is lexicographic order, only modified by requisites.
01:32 seanz whiteinge to the rescue!
01:32 seanz That's exactly what I needed to know.
01:33 seanz So basically I'm defining an ad hoc top file by providing a comma-separated list.
01:33 whiteinge Yeah
01:33 whiteinge Requisites and include are your only friends on that version.
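[Editor's note: whiteinge's point is that on old Salt versions, states with no explicit ordering or requisites run in lexicographic order of their IDs. Python's `sorted()` shows what that default order would be for some example state names (the names are illustrative, not from the log):]

```python
# Without "order" or requisites, 0.16.x-era Salt falls back to lexicographic
# ordering of state IDs; sorted() reproduces that ordering.
state_ids = ["webserver", "database", "apache", "users"]
print(sorted(state_ids))  # -> ['apache', 'database', 'users', 'webserver']
```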
01:34 scuwolf joined #salt
01:36 UtahDave whiteinge: Hey, I'm about to leave the office. I want to pick your brain on your new watch
01:36 whiteinge :-)
01:36 whiteinge It's awesome
01:37 dalexander joined #salt
01:37 whiteinge UtahDave: call me. I'll talk your ear off.
01:38 UtahDave will do in about 10 minutes.
01:44 UtahDave whiteinge: FYI, transferring a german tld is a pain in the behind
01:46 Ryan_Lane joined #salt
01:46 oz_akan joined #salt
01:47 otter768 joined #salt
01:49 kingel joined #salt
01:49 fannet lol
01:50 fannet OR as they would say "Das macht Schmerz im Arsch" ("that makes a pain in the arse")
01:54 UtahDave :)
02:08 rallytime joined #salt
02:09 bhosmer joined #salt
02:10 MindDrive joined #salt
02:12 baniir joined #salt
02:14 druonysus joined #salt
02:14 druonysus joined #salt
02:15 baniir left #salt
02:15 baniir joined #salt
02:17 baniir using map.jinja where pillar data is merged, would the pillar data be available in jinja templates, e.g. file.managed?
02:23 jalaziz joined #salt
02:27 rostam joined #salt
02:35 ndrei joined #salt
02:35 zhou_ joined #salt
02:36 otter768 joined #salt
02:38 druonysus joined #salt
02:38 druonysus joined #salt
02:39 ramishra joined #salt
02:43 mapu joined #salt
02:45 rostam joined #salt
02:54 baniir joined #salt
02:54 Ryan_Lane joined #salt
02:56 ajolo joined #salt
03:10 murrdoc joined #salt
03:11 acabrera joined #salt
03:18 malinoff joined #salt
03:20 baniir joined #salt
03:20 TyrfingMjolnir joined #salt
03:23 ramishra joined #salt
03:23 thayne joined #salt
03:32 zhou_ joined #salt
03:36 otter768 joined #salt
03:37 kermit joined #salt
03:38 kingel joined #salt
03:38 bezeee joined #salt
03:38 XenophonF joined #salt
03:57 bhosmer joined #salt
04:02 jalbretsen joined #salt
04:05 yomilk joined #salt
04:07 ramishra_ joined #salt
04:07 jonatas_oliveira joined #salt
04:12 yomilk joined #salt
04:12 saggy i am getting an error on my formula- can someone help?
04:12 saggy Rendering SLS "base:states.ea.users.corp" failed: Conflicting ID 9
04:12 murrdoc joined #salt
04:13 iggy that means you have 2 things with the same id
04:13 saggy somehow i am not able to find it in any of the sls formulas. (thanks for the reply)
04:14 iggy i.e. /etc/hosts:ip\tfoo and then somewhere else /etc/hosts:ip2\tbar
04:14 iggy pastebin as much as you can
04:14 saggy ok let me try
04:14 iggy also... newer salt versions have more than just the numeric ID there... what version of salt are you running?
04:15 saggy checking...
04:17 saggy salt 2014.1.7
04:18 saggy and hydrogen on client
04:18 saggy minion i mean
04:18 saggy 2014.1.10
04:19 saggy so i should upgrade my server. i want to get shellshock over with first
04:20 saggy last time salt server upgrade broke a few things- but i upgraded minions using salt!
04:26 TheThing joined #salt
04:34 iggy hmmm, I'm running the same version
04:34 TyrfingMjolnir joined #salt
04:34 iggy I could swear it reported more than that
04:35 saggy interestingly i still see an update on yum
04:35 saggy i am doing that now
04:35 iggy 2014.1.7 -> 2014.1.10 should be safe
04:35 SheetiS baniir: worst-case you can always pass the map.jinja merged data as context to the file.managed
04:35 iggy famous last words
04:36 SheetiS hmm I just answered a message from 3 hours ago because my scroll wasn't at the bottom
04:36 SheetiS heh
04:37 iggy do that all the time
04:39 nitti joined #salt
04:39 jonatas_oliveira joined #salt
04:42 oz_akan joined #salt
04:48 ramteid joined #salt
05:08 kingel joined #salt
05:17 tligda joined #salt
05:20 Tahm joined #salt
05:20 cztanu__ left #salt
05:34 jonatas_oliveira joined #salt
05:43 oz_akan joined #salt
05:43 jhauser joined #salt
05:48 bezeee joined #salt
05:58 catpigger joined #salt
05:59 nnion joined #salt
06:07 SheetiS joined #salt
06:09 kingel joined #salt
06:13 Eliz joined #salt
06:14 dalexand_ joined #salt
06:14 Katafalkas joined #salt
06:24 ravenac95 joined #salt
06:25 thayne joined #salt
06:27 davidone_ joined #salt
06:28 flyboy joined #salt
06:28 jonatas_oliveira joined #salt
06:29 tcotav joined #salt
06:29 mackstick joined #salt
06:30 csa joined #salt
06:30 vukcrni joined #salt
06:31 erjohnso joined #salt
06:31 drogoh joined #salt
06:31 ifmw joined #salt
06:35 keekz joined #salt
06:35 juice joined #salt
06:36 uber joined #salt
06:36 uber joined #salt
06:37 MTecknology joined #salt
06:38 Zuru joined #salt
06:41 lcavassa joined #salt
06:43 scooby2 joined #salt
06:43 oz_akan joined #salt
06:44 kingel joined #salt
06:51 thehaven joined #salt
06:51 masm joined #salt
06:58 duncanmv joined #salt
07:00 Sweetshark joined #salt
07:00 otter768 joined #salt
07:01 martoss joined #salt
07:03 martoss1 joined #salt
07:17 vukcrni joined #salt
07:21 intellix joined #salt
07:22 kiorky joined #salt
07:24 packeteer joined #salt
07:25 zhou_ joined #salt
07:27 akafred joined #salt
07:27 oilbeater joined #salt
07:29 oilbeater I wonder if there is a way to refresh the salt minion id. I changed the hostname, but the minion id remains unchanged.
07:31 holler joined #salt
07:33 felskrone oilbeater: remove /etc/salt/minion_id
07:33 felskrone the minion will recreate it on next start
07:34 bhosmer joined #salt
07:35 bhosmer_ joined #salt
07:36 oilbeater is there a way to do it with a command? so many hosts need changing
07:37 felskrone hm, not sure, salt-minion -h might have one, but i dont think so
07:38 babilen "rm /etc/salt/minion_id" comes to mind, no?
07:40 felskrone babilen: i guess he meant a salt-command to force the minion to do it on itself :-)
07:42 ramishra joined #salt
07:43 oilbeater felskrone: yes, having a command to do it would be more convenient
07:44 oz_akan joined #salt
07:46 darkelda joined #salt
07:46 darkelda joined #salt
07:46 oz_akan_ joined #salt
07:46 babilen 'salt 'SOMEMINIONS' file.remove /etc/salt/minion_id' or 'salt 'SOMEMINIONS' cmd.run "rm /etc/salt/minion_id"'
07:48 babilen (you probably want to run '... service.restart salt-minion' too and then deal with the fallout (i.e. remove/accept keys))
07:50 diudaros joined #salt
07:50 diudaros hi guys
07:52 diudaros i have a config template and i wish with the use of jinja to obtain the ip address of the minion from the grain     [ ip_interfaces ]
07:52 diudaros thing is that when i list the grains i get the following    ip_interfaces: {'lo': ['127.0.0.1'], 'eth0': ['192.168.40.70']}
07:53 diudaros i tried the following
07:53 diudaros {% set main_ip = salt['grains.get']('ip_interfaces[eth0]','') %}
07:53 diudaros {{main_ip}}
07:53 diudaros but i had no luck... :-(
07:54 diudaros how can i point to the second element/result of the "ip_interfaces" grain?
07:54 malinoff diudaros, try this: {% set main_ip = salt['grains.get']('ip_interfaces:eth0', '') %}
07:57 diudaros malinof you are my HERO!!!!
07:57 diudaros @malinoff thank you very much :-)
07:57 malinoff diudaros, you're welcome :)
07:59 babilen (assuming they are always on eth0 across all your minions - you can use http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.network.html#salt.modules.network.ip_addrs otherwise)
07:59 zhou_ hi, i got a question: why does grains.item give me an old value? i wrote a custom grain that returns a dict {"a": 123, "b": 456}. then i changed my grain to return the dict {"a": 123} and executed saltutil.sync_all, but on the master host i can still see "b: 456" using the grains.item b command.
08:03 diudaros @malinoff, this also returns the brackets which are inside the grains value, is it possible to get only the ip address?
08:03 babilen zhou_: http://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.cache.html
08:04 malinoff diudaros, salt['grains.get']('ip_interfaces:eth0', [''])[0] i guess
08:04 babilen diudaros: You can have multiple IP addresses assigned to the same interface. You'd have to come up with a heuristic (e.g. always use the first one or use the function I mentioned above)
08:04 babilen zhou_: salt.runners.cache.clear_grains in particular
08:09 diudaros @malinoff  worked like a charm :-D
08:09 diudaros @babilen  thanks a million for the advice, will have a look at it now
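[Editor's note: malinoff's `'ip_interfaces:eth0'` syntax works because `grains.get` treats the colon as a delimiter for nested keys, with the second argument as the default. A rough re-implementation of that lookup in plain Python (a sketch, not Salt's actual code):]

```python
def grains_get(grains, path, default=None):
    """Colon-delimited nested lookup, mimicking salt['grains.get']."""
    value = grains
    for key in path.split(":"):
        if isinstance(value, dict) and key in value:
            value = value[key]
        else:
            return default
    return value

# The grain data diudaros listed:
grains = {"ip_interfaces": {"lo": ["127.0.0.1"], "eth0": ["192.168.40.70"]}}

assert grains_get(grains, "ip_interfaces:eth0") == ["192.168.40.70"]
# malinoff's "[0]" trick to get just the address string:
assert grains_get(grains, "ip_interfaces:eth0", [""])[0] == "192.168.40.70"
assert grains_get(grains, "no:such:grain", "fallback") == "fallback"
```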
08:10 PI-Lloyd joined #salt
08:13 martoss joined #salt
08:14 martoss1 joined #salt
08:17 jonatas_oliveira joined #salt
08:23 fredvd joined #salt
08:25 holler how can I have a requirements file installed automatically?
08:30 babilen diudaros: I'd use the "|first()" built-in filter in 'salt['network.ip_addrs'](cidr='192.168.0.0/16')|first()' (adjust CIDR range as necessary)
08:30 babilen Just in case my suggestion was a bit too RTFM
08:30 babilen (cf. http://jinja.pocoo.org/docs/dev/templates/#first)
08:31 tinuva joined #salt
08:31 babilen holler: pip?
08:31 * babilen stabs in the dark
08:31 holler yes
08:32 babilen yay! What do you mean by having "a requirements file installed automatically" -- Do you want to place the file itself somewhere on disk or do you want to install the dependencies listed in there with pip?
08:32 holler babilen: will this work to install it automatically on vagrant provision? http://dpaste.com/01NX4GJ
08:32 holler that is in a django/init.sls
08:33 holler the file is located in the vagrant shared directory with host at vagrant/coach-app/
08:33 babilen I'd serve the requirement file via salt, but yeah ... more or less.
08:33 babilen Did you try it?
08:33 holler its installing right now
08:33 holler takes a few minutes
08:34 holler Im also wondering if there is a better way to test vagrant provision with salt... I keep blowing away the vm and then vagrant up all over again
08:34 holler takes foreverr
08:34 babilen vagrant is so horribly slow with vbox on top of what appears to be a large salt setup
08:34 babilen Well, vbox is slow.
08:34 holler whats a better alternative?
08:34 holler for osx
08:35 babilen I am testing most of my things in docker containers now (cf. https://gist.github.com/babilen/e9479fdfbcca431db208), but am working on getting vagrant to play nicely with packer and vagrant-libvirt (I'll have that ready soon)
08:35 babilen KVM is .... ah, OSX
08:35 babilen Install Linux, use goodness?
08:36 holler I heard lxc's are really fast
08:36 ndrei joined #salt
08:36 babilen containers are fast, but so is KVM. My vbox setup did take minutes to come up while docker/libvirt(kvm) takes mere seconds.
08:37 babilen I am abusing Docker there a little, but it is working quite well for "checking if the states do what I want them to do", but I'm looking forward to the day that I complete my KVM setup. That is, naturally, not an option on OSX. What other virtualisation platforms are available on that platform?
08:38 holler I have no idea
08:38 babilen https://docs.vagrantup.com/v2/providers/ mentions VMWare, which might be worth a try.
08:38 holler seems like most are linux based
08:38 babilen sure
08:39 holler for local development do you use masterless minion?
08:39 holler btw today was my first day using salt hehe
08:39 holler so def newbie
08:39 babilen Well, LXC and libvirt are Linux specific. The other supported providers are not Linux specific, but are either also not suitable for you (Hyper-V) or will cost you money (VMWare)
08:40 holler although I have a vagrant box setting up so far
08:41 CeBe joined #salt
08:41 babilen holler: It depends on what I want to test. I have various setups in vagrant, but most use a master/minion infrastructure (some just on the same box though) as I want to test some aspects of that too. But yeah, a masterless minion might make sense in places for testing.
08:43 holler in the context of a developer's workstation, could the master be the host machine and the minion be the vm? or would masterless just make sense for spinning up a provisioned vm with all necessary project libs
08:45 babilen I run no master on my actual machine. If I need to test a master I'll fire up a VM with a master installed.
08:53 N-Mi joined #salt
08:53 N-Mi joined #salt
08:57 datenarbeit joined #salt
09:02 ndrei joined #salt
09:03 datenarbeit joined #salt
09:05 mndo joined #salt
09:05 delinquentme joined #salt
09:07 delinquentme left #salt
09:15 ramishra joined #salt
09:17 istram joined #salt
09:24 bhosmer joined #salt
09:24 marnom joined #salt
09:28 CeBe joined #salt
09:34 micko joined #salt
09:40 Sweetshark joined #salt
09:42 zhou_ hi guys, can i put multiple functions into a single custom grains file? i think i've got some problem with this
09:45 yomilk joined #salt
09:47 flyboy82 hey guys! anyone here played around with the logrotate module?
09:48 oz_akan joined #salt
09:48 ndrei joined #salt
09:50 babilen zhou_: What do you mean by "multi function" ?
09:51 babilen flyboy82: Do you have a real question or are you simply searching for a single person in this channel that used that module? (if the latter I guess that we can be sure that there is at least one person in here who 'played' with that module before)
09:52 flyboy82 need to create a new file in the /etc/logrotate.d/ folder and populate it, but haven't found an example anywhere
09:53 zhou_ def a(): test1 = {}; test1['first'] = first; return test1    def b(): test2 = {}; test2['second'] = second; return test2 . like this
09:55 giantlock joined #salt
09:56 babilen zhou_: sure, take a look at core.py in grains to see an example for a module that provides multiple grains
10:02 zhou_ yeah, i checked the core.py file, it puts multiple grains in there. but in my case, i edited the grains file above, deleted function b and executed saltutil.sync_grains. somehow i can still use grains.get second and get the "second" output. i'm confused
10:02 zhou_ sorry , i'm not good at english
10:03 thayne joined #salt
10:05 jonatas_oliveira joined #salt
10:10 babilen zhou_: Did you clear your caches as I asked you to do earlier?
10:10 babilen (and don't worry, I do understand you just fine)
10:11 zhou_ yes,  i  did
10:13 babilen Could you paste your current custom grain, the old version and the output of "salt-run cache.clear_all", "salt 'yourminion' saltutil.sync_grains" and "salt 'yourminion' grains.items" to http://refheap.com ?
10:13 babilen (remove personal information or all grains apart from the custom ones from the output)
10:14 zhou_ on my way
10:17 goodwill joined #salt
10:21 CryptoMer joined #salt
10:21 CryptoMer joined #salt
10:32 zhou_ @babilen https://www.refheap.com/90695
10:35 bhosmer joined #salt
10:40 flyboy82 guys, how do i append many lines that happen to contain curly braces in a state file?
10:40 flyboy82 wrong phrasing: within a state file, i want to have a file.append that appends a multiline which contains {}
10:42 CryptoMer joined #salt
10:44 jonatas_oliveira joined #salt
10:44 TheHorse joined #salt
10:45 babilen zhou_: Interesting. Could you take a look at the grain in /var/cache/salt/minion/extmods/grains/ on both minions and verify that they are the same as your second version?
10:46 babilen flyboy82: Use explicit ' to ensure it's a string?
10:46 TheHorse hi all, I hope someone can help me, bit confused... I am new to Salt, and DevOps - I have a running Master which works fine. I have defined a base config that applies to all servers. When I add a Minion, and set it to highstate, the state deploys without problem, and everything works as expected. Now I am terribly confused with how to get on from here...
10:46 babilen TheHorse: That sounds as if everything is fine ...
10:47 TheHorse For example, I have a config to set up a webserver: install some packages, copy some config directories etc. I created a new file root for it, set a grain on the target minion (node_type: webserver) but setting this minion to highstate doesn't do anything - no errors, no actions.
10:47 peters-tx joined #salt
10:48 TheHorse salt will sit there for a few minutes, and silently exits. checked the logs, nothing there either.
10:48 oz_akan joined #salt
10:49 TheHorse I am not sure in my head how grains and pillars etc fit into the picture, and from the docs and trying it out there wasn't any "aha" moment yet... :(
10:49 babilen TheHorse: Okay, so "the state deploys without problem, and everything works as expected" is not actually the case as you try to run highstate on a webserver which is, however, apparently not doing anything?
10:49 TheHorse babilen: sorry - I meant to say "the initial config worked fine, adding a new config in the mix doesn't"
10:50 TheHorse to clarify:
10:50 TheHorse I have defined a "base" file_root in master, that points to /srv/salt/base
10:50 zhou_ yes , i checkd  . @babilen https://www.refheap.com/90696
10:51 babilen TheHorse: Okay, do you use different environments already and did you also change your master's configuration to reflect that /srv/salt/base change?
10:51 TheHorse adding another root "webserver" pointing to /srv/salt/webserver is when nothing starts happening, if you catch my drift.
10:52 babilen zhou_: Okay, so the grain got synced correctly, but the master still caches the old value even though you cleared it. Does restarting the master solve this?
10:52 TheHorse babilen: i added a new file_roots entry into the master config, and added the relevant sls files etc in the proper location
10:53 TheHorse I am assuming I am missing something fundamental, and probably very simple, but cannot for the life of me figure it out...
10:54 babilen TheHorse: Ah, I see where your problem is. You do *not* want "base" and "webserver" and "database", ... environments in salt. Environments are used for modelling specific workflows (e.g. dev, qa, production) in which minions belong to different environments.
10:54 martoss joined #salt
10:54 scarcry joined #salt
10:54 laxity joined #salt
10:54 baniir joined #salt
10:55 TheHorse my top.sls in the webserver root is very simple: webserver: <nl> 'node_type:webserver':  <nl> -match: grain <nl> -live
10:55 babilen TheHorse: So, what you want is, I guess, to have *all* minions in the "base" environment and then to target different states to different minions in /srv/salt/top.sls -- You would, for example, write a /srv/salt/webserver.sls and then target that to whatever minion you want to target with that.
10:55 martoss1 joined #salt
10:55 TheHorse ah
10:56 TheHorse I think I see... however, I dont have a /srv/salt/top.sls
10:56 TheHorse I have a /srv/salt/base/top.sls
10:56 babilen Okay, there then if you want to keep that non-standard path.
10:56 TheHorse also, I will be wanting dev/test/stage/live etc environments at some point, but am doing this step by step :)
10:57 babilen sure
10:57 TheHorse babilen: I prefer to keep things standard, I thought that this was the right way to do this
10:57 TheHorse so I should make a /srv/salt/top.sls?
10:58 TheHorse ah, hang on
10:58 zhou_ @babilen no, details are at the bottom of https://www.refheap.com/90696
10:58 babilen It is *one* way of using different environments with salt, yes. As you do not, yet, use environments it doesn't matter. You can, naturally, structure your file_roots as if you were using different environments already.
10:59 babilen zhou_: Okay, that is weird. Could you open an issue on github with that paste? I'll look into it a bit later.
10:59 babilen Please also include the output os "salt --versions-report"
10:59 TheHorse so /srv/salt/top.sls should have sections for base, and for webserver? how would I define what systems are part of what? ( I really appreciate your time with the "salt for dummies" by the way) :)
11:00 babilen TheHorse: But you might want to start thinking about using GitFS soon, so I wouldn't necessarily complicate things *now* when you haven't even started to use environments.
11:00 babilen TheHorse: Okay, lets assume for now that you don't have environments and that all your minions are part of the same (i.e. base) environment.
11:01 TheHorse yeah I looked at GitFs, and would like to throw it in the mix, but want to make sure I am happy I understand the basics first
11:01 babilen sure
11:01 harkx hey, I noticed some typos (that bugged us for a while) in the saltstack docs, what is the recommended way to point this out ?
11:01 TheHorse babilen, yes, that is the case for now
11:03 babilen TheHorse: So, that means that salt would look for /srv/salt/top.sls in which you define which states will be applied to which minions. States will *also* be placed in /srv/salt/ and, for example, both /srv/salt/foo.sls and /srv/salt/foo/init.sls will be referenced in top.sls as "- foo" while /srv/salt/web/www.example.com.sls would be referenced as "- web.www.example.com"
11:03 TheHorse ok, got that
11:04 babilen TheHorse: There are various ways to target minions and they are detailed in http://docs.saltstack.com/en/latest/topics/targeting/. The targeting system in salt allows your to define which states/SLS files apply to which minions based on a plethora of attributes. You can target in salt commands and in top.sls and the latter is detailed in http://docs.saltstack.com/en/latest/ref/states/top.html
11:05 yomilk joined #salt
11:05 TheHorse cool - the many different options confused, but I was thinking of going with setting a grain during deploy time
11:05 TheHorse looks easy and flexible...
11:06 TheHorse ugh, my english is also confused... I meant to say that the many different options confused me a bit at first...
11:07 babilen By default minions are targeted simply by their minion ID. So lets say you have a "foo-web.example.com" and "foo-database.example.com" minions that you want to target with the "webserver" and "database" SLS file respectively you would use the following in top.sls https://www.refheap.com/90697
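[Editor's note: the refheap paste babilen links is not reproduced in the log. Going only by his description of targeting the two minions by ID, the top.sls presumably looks like the sketch below, held in a Python string here so it stays self-contained; treat it as a reconstruction, not the actual paste:]

```python
# A guess at the top.sls from babilen's description: target each minion by its
# ID and assign the matching SLS file.
TOP_SLS = """\
base:
  'foo-web.example.com':
    - webserver
  'foo-database.example.com':
    - database
"""
print(TOP_SLS)
```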
11:07 TheHorse ah, cool - I see
11:08 babilen You can, naturally, target states to minions in whatever way you seem fit. I personally dislike the "custom grains" approach to roles, but it is frequently used.
11:08 babilen My main gripe with it is that it necessitates local modifications to minions that are *not* managed with salt.
11:09 baniir joined #salt
11:09 TheHorse ah I see - is this a common scenario? to have minions not managed by salt?
11:10 babilen You can, and I would recommend to do that in the long run, naturally still target based on custom_grains and assign those grains by some other means with salt (e.g. http://docs.saltstack.com/en/latest/ref/states/all/salt.states.grains.html), but you will have to target *that* to minions too somehow.
11:10 babilen Sorry, I did not mean that the minions aren't managed by salt, but that those modifications aren't managed with salt.
11:11 babilen My personal approach is one in which I *never* have anything on the minions that isn't managed by salt (i.e. no manual changes are necessary).
11:11 younqcass joined #salt
11:11 babilen It is a common approach though and you will find people in the community for whom it is the most natural thing to do.
11:12 yomilk joined #salt
11:12 TheHorse ok, got it, I guess my next question is then - how do you manage groups of servers? for example, by minion id I would have to look after each server individually?
11:12 bhosmer_ joined #salt
11:13 babilen It's just that you don't necessarily want to login manually to ten thousand minions just to edit /etc/salt/grains.
11:14 babilen Well, you obviously have to exploit *some* invariant of the minions somewhere or enumerate all minions that belong to a certain "group" or "role".
11:14 TheHorse thats right - I deploy servers with salt-cloud, and at some point in my deploy script I set a grain on the minion with salt
11:15 babilen The best way to do that really depends on your setup and what you are trying to achieve. If you could tell me a bit more about that I might be able to give better advice.
11:15 TheHorse sure, and once again I really appreciate your time
11:16 babilen But then I mostly get away with targeting by minion id (foo-web1, foo-web2, ...) and some grains (e.g. product_id to differentiate between servers or virtual* types and so on)
11:18 TheHorse I want to have a uniformity in our deploys - we don't deploy thousands of servers, but have enough movement in systems to warrant an automated and uniform approach. I want all our servers to be the same (we are on Ubuntu) with the same config. Our base config is pretty simple - install packages xyz, remove packages xyz, set up some ssh keys and some networking specific stuff. Also some changes in bashrc and inputrc,
11:18 babilen okay
11:18 TheHorse we deploy servers to rackspace for live work, and to our internal VMWare (soon moving it to Openstack)
11:18 TheHorse internal is dev/test/stage
11:19 TheHorse also, our devs use local VM's for development
11:19 babilen ack
11:19 TheHorse I want anyone to be able to deploy servers as required, using jenkins, which is where a lot of our workflow is already driven from
11:20 TheHorse so a dev should be able to say "gimme a web server for testing, and put project xyz on it"
11:21 TheHorse we have a lot of these puzzle pieces already - wanting to use salt to tie it nicely together. also, when we have changes etc. I want to fix and test somewhere, commit to git, push a button, and have that pushed out wherever required
11:21 TheHorse so far so standard, I think :)
11:21 babilen Indeed
11:22 babilen But also a nice technical challenge to tie it together. "local VMs" are managed how?
11:22 TheHorse so, that our use case for the time being - I am 100% sure that as soon as half of this is finished, everyone will come up with more ideas and requests, so am planning to keep it flexible as well
11:22 jensnockert joined #salt
11:23 TheHorse local VM's at the moment are not so much managed as copied around - it's a bit messy
11:23 babilen It would probably be nice if you could manage those with, say, vagrant and generate images with packer. But lets focus on the salt side of this.
11:24 TheHorse yeah I looked at vagrant, packer and docker but I want to get really comfortable with salt first
11:24 babilen So, it sounds as if, yes, you really want environments and plan with them early.
11:25 TheHorse also, salt-cloud is our tool to deploy to rackspace, and would prefer to make that work for internal, so as to avoid a sprawl of tools
11:25 babilen To be honest: I would feel much more comfortable to continue this on the mailing list as it is bound to be a big discussion that multiple people will be interested in.
11:25 babilen sure
11:25 gmcwhistler joined #salt
11:25 TheHorse at the end of the day, for the dev to click a "deploy" button is the goal, using salt-cloud or vagrant or whatever isn't something they are too terribly excited about :)
11:27 TheHorse re. mailing list, for sure, not a problem, I want to write this all up into a solid howto as well - it is something I was looking for. The salt docs are good but there is a big gap between "here is a basic walkthrough" and "here is a list of functions" :)
11:27 babilen How do you identify, say, webservers though? What are the invariants that you could exploit?
11:27 rattmuff In the example for "contents_pillar" in the "salt.states.file.managed" docs (http://salt.readthedocs.org/en/latest/ref/states/all/salt.states.file.html), the private key is located in the "pillar file". Would it be possible to just keep a reference to the local file system, like "/home/[user name]/.ssh/id_rsa" instead?
11:28 ggoZ joined #salt
11:29 jonatas_oliveira joined #salt
11:29 TheHorse babilen: the most common thing is that we use an internal naming scheme for servers, something along the lines of <location code>-<client code>-<main function>-<number> which works fine when you are looking at a list of servers, but it isn't something I would be relying on for automating things
11:29 babilen rattmuff: yes, but you would have to write the pillar in Python and read it from the filesystem yourself. (a pillar is just a dictionary and you can write them with "#!py" in the first line and then implement the "run()" function therein that reads the file and returns a suitable dictionary)
11:29 babilen There is no built-in support for that.
11:30 babilen TheHorse: We do the same :)
11:30 rattmuff @babilen ok, do you think it is a good idea or is it better to keep the keys in the pillar file?
11:30 TheHorse :)
11:30 babilen rattmuff: Just implement some code to deal with the case that a file is *not* there and I don't see why it should be problematic.
11:31 rattmuff @babilen very nice, thanks :D
11:31 babilen TheHorse: I'm not sure what I can suggest now :( -- Your setup is complicated enough so that your solution actually needs design.
11:31 TheHorse as we set the names on deploy time, i figured that it would be just as easy to set a grain at deploy time from the salt master - the whole idea is to never have to log in to a server and touch it
11:32 CycloHex joined #salt
11:32 babilen TheHorse: I would read http://docs.saltstack.com/en/latest/topics/best_practices.html and http://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html to get a feeling for how to write idiomatic salt.
11:33 babilen TheHorse: Sure, if you do it like that grains are perfectly adequate (IMHO). It's just that I don't like the whole "manual change *on the minion* so that it requests a specific state from the master" approach.
11:33 TheHorse babilen, yeah understood :) I have a lot of experience managing large environments, so understand the general challenges. I chose Salt for the flexibility (as well as all the other good stuff) but got stuck between the "I have a basic config that I can push out everywhere" and "now I want to target specific servers for specific environments" -
11:33 lionel joined #salt
11:34 babilen TheHorse: You are, also a bit too early, as you will have to make huge changes to your setup once the next salt release (Helium, 2014.7.0) becomes available.
11:34 TheHorse oh, really?
11:34 babilen Well, 2014.7 will bring a number of changes and I have a bunch of things on my todo list already.
11:35 TheHorse ah
11:35 TheHorse it's in RC already
11:35 babilen it is
11:35 TheHorse is it relatively stable? happy to go with that.
11:36 babilen You will, probably, start doing things that could be done "nicer" soon. That doesn't invalidate what you are doing now, but it is a bit of a bad timing to start implementing against 2014.1 now. I mean things will keep on working (or should™)
11:36 babilen I don't use it, but it will probably not take *too* long.
11:37 TheHorse am with you - don't really want to invest time in building something that would not make best use of what is just around the corner. and would like to do things "nice" :)
11:37 babilen And please do me the favour of writing this "question" to the mailing list. i'd love to see a discussion about different approaches to "the big picture" within the community.
11:37 TheHorse ok, will do right away
11:38 intellix joined #salt
11:38 TheHorse thank you so much for your time and help - extremely generous of you!!
11:38 babilen I'm in a position in which I start tying things together (packer, vagrant, one "click" deployment, ...) but lack guidance there too.
11:40 babilen One thing that is, however, really nice: if you write your states in a non-specific way (best practices + formula conventions), make ample use of formulas to configure services, and don't hardcode things but keep the "variant" bits users want to change in pillars, then you can use your states in a lot of different situations.
11:40 TheThing joined #salt
11:42 babilen I simply provision all the "foo-setup" test boxes with *exactly* the same states that I use in production and simply target dev/prod based on combinations of one custom grain (dev_env), the virtual* grains and minion IDs.
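A top file along those lines might look like this sketch; only the `dev_env` custom grain and targeting on the `virtual` grain come from the log, the state names are invented:

```yaml
# /srv/salt/top.sls -- the same states everywhere, variants picked by grain
base:
  '*':
    - base            # common setup for every minion
  'dev_env:dev':
    - match: grain
    - webserver       # dev boxes get the same webserver states as prod
  'virtual:kvm':
    - match: grain
    - guest-tools     # targeting on the virtual grain, as described above
```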
11:43 babilen Still have the feeling as if it could be more dynamic though which is why I would be happy to contribute to a discussion, but also learn from it :D
11:44 TheHorse yeah, i would like to see that as well, I started looking into formulas etc. but want to get the basics right first - still not clear exactly where pillars fit into the whole picture for example :)
11:44 babilen Okay, that's easy
11:45 babilen Grains are "static" parts of your minions that define certain aspects of their platform. This includes information ranging from their networking setup, over their operating system, to the amount of RAM or their product name.
11:46 TheHorse yep
11:46 babilen Pillars on the other hand are, essentially, Python dictionaries that are targeted to specific minions and information in them are *only* available to the targeted minions.
11:47 diegows joined #salt
11:47 babilen This makes them suitable for distributing data such as private keys or other sensitive data, but they are also used to tailor certain aspects of a state to a specific minion.
11:48 TheHorse :) I understood the last part (whatever is in a pillar is only for the minions it targets) but I don't python, so not sure what a python dictionary is
11:48 babilen Say you want to create specific users on your minions. One way of doing this would be to write actual states that hardcode those users in, say, /srv/salt/users/{alex,bob,cthulhu}.sls and target those to your minions.
11:49 TheHorse yeah, but would not be very nice....
11:49 babilen A dictionary is a "map" or an "associative array" a "key value store" or .... foo: 'bar' ('bar' is the value of the key 'foo')
11:50 babilen TheHorse: So another approach is to write your SLS files in such a way that states are dynamically created based on *some* data that describes the setup.
11:50 TheHorse yeah - i suspected a key/value kind of affair
11:50 masterkorp Hello
11:50 babilen In this particular case you would use the users-formula (or the reverse-users-formula) -- https://github.com/saltstack-formulas/users-formula/blob/master/users/init.sls
11:51 masterkorp I have a bash state that runs script that gives me custom error messages
11:51 babilen TheHorse: This formula uses data from the pillar to generate states that will, eventually, create the users themselves. The pillar structure a formula expects is typically documented in the pillar.example file that comes with the formula. So in this case that would be: https://github.com/saltstack-formulas/users-formula/blob/master/pillar.example
11:51 masterkorp can I check the error message
11:52 masterkorp because the cmd.run state fails
11:52 baniir joined #salt
11:52 masterkorp and it really isn't failing
11:52 TheHorse ah!
11:52 TheHorse ok :)
11:53 babilen TheHorse: Pillars are SLS files that are kept in /srv/pillar and can be targeted to minions just like you target states. So instead of targeting the - users.cthulhu state to your minion you'd target the generic '- users' state to *all* minions and then define suitable pillars for those on which you actually want to create users.
11:53 masterkorp any ideas on going around that
11:55 TheHorse ok, let me see if I get this. all minions get "users" state, and then for, lets say, "servergroup1" I have a pillar that says "make users a, b and c" and for "servergroup2" make users "d, e, and f" - the formula is the logic that makes that work
11:56 babilen TheHorse: exactly
11:56 TheHorse ah, awesome
11:56 TheHorse well, that makes it very clear :)
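Sketched out as files (the minion targets and user names are made up; the pillar structure follows the users-formula's pillar.example linked above):

```yaml
# /srv/pillar/top.sls -- target pillars just like you target states
base:
  'servergroup1-*':
    - users.group1
  'servergroup2-*':
    - users.group2

# /srv/pillar/users/group1.sls -- data the users-formula turns into states
users:
  alice:
    fullname: Alice Example
  bob:
    fullname: Bob Example
```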
11:56 babilen That's just one example, there are formulas for all sorts of situations. Just wanted to walk you through one actual use case.
11:58 TheHorse yeah, cool - i already looked at a few formula's - nginx for example
11:59 TheHorse apache formula gave me a headache :D
11:59 babilen Yeah, that one isn't too nice.
11:59 babilen (it also uses lots of outdated idioms)
12:07 flyboy82 hey guys, any of you has a working example of creating and populating a logrotate config file in /etc/logrotate.d by means of the logrotate module?
12:08 duncanmv joined #salt
12:09 pviktori joined #salt
12:09 glyf joined #salt
12:14 to_json joined #salt
12:14 jaimed joined #salt
12:15 to_json joined #salt
12:19 hobakill joined #salt
12:19 flyboy82 I'd like to do that from within a state and provide it with variables taken from my pillar, rather than CLI
12:24 datenarbeit joined #salt
12:24 cpowell joined #salt
12:25 glyf joined #salt
12:25 datenarbeit joined #salt
12:25 cpowell joined #salt
12:25 jemejones joined #salt
12:28 glyf joined #salt
12:28 rallytime joined #salt
12:32 babilen flyboy82: The logrotate states were just recently merged and have probably not made it into the salt release you are using yet.
12:33 flyboy82 probably gonna have to wait till 2014.7 eh?
12:33 babilen https://github.com/saltstack/salt/pull/15576
12:34 Ironhand I've just started familiarizing myself, and one (hopefully obvious) thing isn't quite becoming clear to me: how do I get the state to be automatically applied to each minion (periodically and/or when the minion is first installed)?
12:34 babilen Nobody stops you from shipping those in _modules and _states respectively (haven't checked if it requires any other features that are .1 specific)
12:34 babilen Ironhand: Well, which one?
12:35 Ironhand let's start with periodically
12:35 hobakill http://docs.saltstack.com/en/latest/topics/jobs/#highstates
12:35 babilen http://docs.saltstack.com/en/latest/topics/jobs/schedule.html explains how to schedule jobs
12:36 babilen http://docs.saltstack.com/en/latest/ref/states/startup.html
12:36 babilen (or use http://docs.saltstack.com/en/latest/topics/reactor/index.html)
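For the "periodically" case, the minion scheduler boils down to something like this sketch (the interval and job name are arbitrary; see the schedule docs linked above):

```yaml
# Minion config (or pillar) -- run a highstate once an hour
schedule:
  periodic_highstate:
    function: state.highstate
    minutes: 60
```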
12:37 mechanicalduck joined #salt
12:37 Ironhand so is scheduling such jobs considered a best practice? I'm trying to figure out the basics of getting a network of nodes set up to be configured through Salt the way I currently do with CFengine
12:37 babilen I wouldn't do it.
12:38 babilen I like to control when changes happen, but then you might argue that pushing new configuration allows you to do just that by setting the time you push things.
12:39 Ironhand so what would you consider to be the best way to ensure state eventually becomes consistent everywhere?
12:39 bhosmer joined #salt
12:39 Ironhand assuming that not every node may actually be online all the time (workstations etc)
12:39 babilen State is consistent if you run a highstate *once*
12:40 babilen Ah, well. Then run a highstate when they come online.
12:40 mapu joined #salt
12:41 Ironhand as in: put a highstate call inside things like startup scripts and network "interface up" scripts?
12:41 babilen No
12:41 babilen Use either the reactor or startup state functionality for that.
12:41 PI-Lloyd the salt-minion will run a highstate on start anyway I believe
12:41 babilen I'd use reactors if you have other uses for them (e.g. syncing custom grains on minion_start) and startup states if you don't.
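A reactor wiring along the lines babilen suggests could look like this sketch (the reactor SLS path is hypothetical; both snippets would live in separate files):

```yaml
# Master config: map the minion start event to a reactor SLS
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/start.sls

# /srv/reactor/start.sls -- highstate the minion that just (re)started
highstate_on_start:
  local.state.highstate:
    - tgt: {{ data['id'] }}
```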
12:42 babilen It won't.
12:42 mrlesmithjr joined #salt
12:42 PI-Lloyd It does on our estate :/
12:42 PI-Lloyd we've not configured it to do that, it just does it
12:42 babilen You might have configured it to do that
12:43 babilen You definitely enabled it somewhere.
12:43 PI-Lloyd the only change in the minion config is the master and ensuring ipv6 is disabled
12:44 PI-Lloyd master config changes are only the file roots
12:44 Ironhand things like startup states and minion_start sound as though they'd only actually apply when the minion starts up, which doesn't really cover all necessary scenarios
12:45 Ironhand suppose I want a set of laptops to synchronize their state when they happen to come online (which happens at unpredictable times), what would be the "good" way to do that?
12:45 babilen Ironhand: Feel free to run a highstate every k minutes if that is what you want. It is a perfectly fine thing to do, but *I* wouldn't like that as I want to know exactly when changes are being enabled on our servers or pushed into our customers configuration.
12:46 babilen Ironhand: "come online" -- why is a reactor that triggers on 'salt/minion/*/start' not suitable?
12:46 PI-Lloyd ^^
12:46 Ironhand babilen: I don't specifically *want* to run a highstate every k minutes, I'm just trying to figure out (coming from a CFEngine background) what best practices most apply to my situation
12:47 Ironhand babilen: I got the impression that those would only trigger when the minion starts up, i.e. at system boot
12:47 Ironhand which would exclude things like laptops coming out of hibernate
12:47 younqcass_ joined #salt
12:47 Ironhand or connecting to a LAN after booting up in offline state
12:47 babilen Ironhand: Let me verify that before I give bad advice. Either way you could enable presence events and trigger on that.
12:48 babilen One second please.
12:48 Ironhand ok
12:49 felskrone Ironhand: have a look at master_disconnect event and maybe the minion scheduler
12:49 Ironhand the main scenario I want to avoid is that I end up with minions which, through relatively unlikely combination of circumstances ("employee always turns his laptop on at home which has no WIFI"), never actually sync their state
12:51 jchen joined #salt
12:52 toastedpenguin joined #salt
12:52 Ironhand felskrone: are you sure "master_disconnect" is what it's called? googling that combined with saltstack yields 0 results
12:53 dude051 joined #salt
12:54 babilen Ironhand: https://www.refheap.com/90703 are events that you'll see if a minion is stopped and restarted. I don't have a minion that I could hibernate, but salt/presense/change is perfect for what you want to do
12:54 vejdmn joined #salt
12:54 bhosmer joined #salt
12:54 Ironhand babilen: I'll have a look at that, thanks
12:55 felskrone hm interesting, its not in the minion reference, seems like i forgot to document it :-)
12:55 babilen Ironhand: The spelling mistake "presense" → "presence" will be fixed in 2014.7 so pay attention if you use that
12:55 Ironhand babilen: noted
12:56 ramishra joined #salt
12:56 babilen felskrone: A lot of things aren't in the minion configuration documentation unfortunately
12:57 felskrone sadly true, but this particular one surprises me. i added that a while ago and would be hard money that i added documentation for it
12:57 felskrone be = bet of course :-)
12:58 istram joined #salt
12:59 williamthekid_ joined #salt
13:00 lloesche joined #salt
13:00 felskrone babilen: its not in the documentation. damnit. basically an event is fired whenever the minion detects that it has lost connection to the master and vice versa
13:01 babilen felskrone: Oh, that sounds super useful. I've been using presence events and, in particular, salt/presense/change for that.
13:01 felskrone its: master_alive_interval: <seconds>
13:01 babilen How exactly would the event reach the master though if the minion lost connection?
13:01 bhosmer_ joined #salt
13:02 dude051 joined #salt
13:02 felskrone not at all, the minion would have to act when it detects that it has a new connection to the master
13:04 lloesche If I have something like master.sls that includes .repo and slave.sls that also includes .repo and then a init.sls that includes both master and slave, will Salt include .repo only once?
13:04 bhosmer_ joined #salt
13:04 felskrone babilen: now that i think about it, this will not help. you can not run something with that event yet. only stuff that coded right into the minion
13:04 babilen felskrone: So essentially a minion initiated salt/presense/change fired from the minion in question that isn't raised on a set schedule but when it actually applies?
13:05 babilen Why is that?
13:05 babilen lloesche: TIAS, guess it'll be included multiple times, but that should only increase runtime.
13:06 * babilen is very hygienic with his states and hasn't run into that problem ..
13:06 babilen curious now that you mention it
13:07 racooper joined #salt
13:07 lloesche babilen: what I meant was will whatever is in there only be interpreted once.. like with ansible if you include something twice it's executed twice... but thinking about it I'm pretty sure Salt is smarter than that... otherwise it would probably complain about duplicate IDs, which it doesn't
13:08 babilen yeah
13:09 babilen Even if it were evaluated multiple times it wouldn't be executed multiple times as the first state run would have already achieved the goal of the state.
13:09 felskrone babilen: https://github.com/felskrone/salt/blob/develop/salt/minion.py#L1541 that's the schedule for checking the master connection
13:09 babilen But you would have clashes, yeah
13:10 lloesche well in the case of the .repo he might refresh the package repo multiple times which takes time... that was my concern
13:10 felskrone babilen: https://github.com/felskrone/salt/blob/develop/salt/minion.py#L1554 and that is where a disconnect is handled. it's hardcoded and does not allow configuring a module
13:11 babilen felskrone: And that triggers event.fire_master on completed authentication?
13:11 felskrone it's currently only used in multimaster-pki to jump back and forth between several masters
13:12 istram joined #salt
13:12 SheetiS joined #salt
13:12 madduck joined #salt
13:13 nitti joined #salt
13:13 felskrone babilen: no, that only enables the minion to detect connection loss and reconnection. it does not do anything. sorry i got your hopes up
13:14 babilen lloesche: To be honest: I don't know ... surprisingly I never ran into this. I'm curious though and would *hope* that salt just "does the right thing"™
13:14 babilen felskrone: Well, that's cool. It *would* be a nice event to have though.
13:14 bhosmer joined #salt
13:15 babilen I would love to see a much more fine-grained and detailed event structure anyway that you can enable/disable based on globs.
13:15 felskrone babilen: but it would of course trigger a minion_start on the master once the connection is re-established
13:15 babilen sure
13:17 babilen felskrone: I guess that a minion/start event would also be triggered when you hibernated a minion and wake it up again, wouldn't it? I guess so, but couldn't test it. ( Ironhand asked about that)
13:17 felskrone how would you hibernate a minion?
13:17 felskrone like closing a laptop?
13:17 babilen felskrone: Hibernate a laptop on which .... yeah
13:18 mapu joined #salt
13:18 felskrone i guess that would work, yes
13:18 babilen My intuition is, minion process wakes up, networking becomes available, it contacts the master and minion/start is triggered eventually. I simply couldn't test it though and therefore didn't want to say something that would turn out to be wrong later.
13:19 istram joined #salt
13:19 Ironhand once I get it set up the way I think it should be I'll do some tests
13:19 Ironhand worst case, I'll simply add some salt calls into things like post-up hook scripts
13:20 babilen Ironhand: either way: If that doesn't work, presence events will
13:20 Ironhand hmmm yes, and they'll probably be less error prone too
13:22 babilen And are you sure that your users will appreciate it if you trigger upgrades/installations when they wake up their laptops?
13:22 jslatts joined #salt
13:23 Ironhand some might, some might not, but those who want to manage it themselves can do so
13:23 Ironhand it's a pretty reliable way to ensure non-tech users have things set up so that they don't shoot themselves in the foot all too often
13:23 babilen ack
13:24 Ironhand (and obviously I wouldn't let it automatically upgrade a kernel and reboot or other radical things like that)
13:25 nitti joined #salt
13:27 mpanetta joined #salt
13:33 dccc joined #salt
13:35 bbradley hello
13:35 bbradley can you print to stdout from states when using the salt provisioner in vagrant?
13:37 litwol left #salt
13:41 babilen I don't think so.
13:42 babilen But I've never tried, so take that with a grain of, ba-dum-tsh, salt
13:42 glyf joined #salt
13:43 bezeee joined #salt
13:44 jslatts joined #salt
13:45 micah_chatt joined #salt
13:47 ninkotech joined #salt
13:48 oz_akan joined #salt
13:52 ndrei joined #salt
13:53 KennethWilke joined #salt
13:55 johtso joined #salt
13:57 jemejones joined #salt
14:00 scottpgallagher joined #salt
14:00 _mel_ joined #salt
14:00 faust joined #salt
14:05 blackhelmet babilen: You must be pun-ished
14:06 rattmuff I'm looking at the file server settings (http://docs.saltstack.com/en/latest/ref/file_server/file_roots.html) and can't really get my head around the custom settings. If I define a file_root like 'backup: -/srv/backup', do I access it with salt://... or backup:// ?
14:07 babilen rattmuff: You *always* reference it with salt://, it is just that minions might be in different environments or that you pass different environments with saltenv. I *guess* that what you are doing is not what you actually want to do. What do you plan to use the "backup" environment for?
14:08 faust left #salt
14:08 ajprog_laptop joined #salt
14:10 rattmuff babilen: thanks, I guess you are right :P. I have a 'restore' state that should copy a file from /srv/backup/myfile.tar to the minions /tmp/myfile.tar. If the file had been under '/srv/salt/backup/myfile.tar' I would have used salt:// to reference it.
14:11 babilen rattmuff: So, why don't you do exactly that?
14:11 babilen Well, copy it from /srv/salt/backup/files/yourfile.tar that is.
14:12 babilen (s/files//)
14:12 pdayton joined #salt
14:13 micah_chatt_ joined #salt
14:13 blackhelmet rattmuff: Just remember that the order of file_roots is important. Salt starts with the first and searches for the requested file continuing down the list until it finds it.
14:13 rallytime joined #salt
14:14 rostam joined #salt
14:14 younqcass_ joined #salt
14:15 rattmuff babilen: I guess that's the better way but it would require me to change a lot of inherited states, scripts and other 'stuff' so it would be nice if I could keep the files at /srv/backup. But if I understand you correctly it is possible if I don't specify a custom environment for it
14:15 rattmuff just add the path to the 'base' environment
14:16 perfectsine joined #salt
14:18 ndrei joined #salt
14:18 rostam joined #salt
14:18 ericof joined #salt
14:20 hobakilllll joined #salt
14:21 babilen rattmuff: sure, but everybody who will see your setup will wonder what the hell you were doing there.
14:21 babilen ;)
14:22 rostam joined #salt
14:22 rattmuff lol
14:23 rattmuff I will speak to the person I've inherited this from and see if we can work something sane out
14:23 rattmuff thanks alot babilen
14:24 ckao joined #salt
14:24 babilen rattmuff: No, but hang on. Your *salt* configuration shouldn't depend too much on that path. Scripts could (and can do so), but that would simply mean that you adjust the "source: salt://..." lines in your salt states, but nothing on the actual boxes
14:26 rattmuff hmm, not really following what you mean
14:26 zwevans joined #salt
14:26 younqcass joined #salt
14:28 babilen rattmuff: If you say: "inherited states, scripts and other 'stuff'" this obviously only refers to the salt setup. And moving /srv/backup to /srv/salt/backup while *at the same time* changing the file_roots configuration in the same manner means that you don't have to change anything (as relative references will still be correct)
14:29 babilen So, your salt configuration shouldn't depend on /srv/backup .. you can, naturally, use salt to copy files to /srv/back/foo on the minions and you can reference that in scripts that run on the minions, but the salt configuration shouldn't have to change too much.
14:29 renoirb Hi all, I love your software guys!! :)  #randomlove
14:29 renoirb ... not so random :)
14:30 babilen #predictablelove ?
14:31 SheetiS joined #salt
14:32 babilen rattmuff: Do you get what I mean?
14:32 rattmuff babilen: ah, I see. However, what I mean is that there are scripts outside of the salt configuration that put files in /srv/backup on the master and I would prefer not to go through those scripts to have them work with /srv/salt/backup instead.
14:32 babilen Ah!
14:33 rojem joined #salt
14:33 babilen No, that would probably not be appropriate for *those* scripts. So, if I understand you correctly: You have $SOMESOFTWARE that creates files in /srv/backup on the master and you want salt to send/use those files in $SOMESTATE on the minions?
14:34 rattmuff babilen: exactly :P
14:35 babilen You could also symlink /srv/salt/backup to /srv/backup to keep everyone happy.
14:35 renoirb babilen: Predictable indeed. Just that they work hard to do this stuff, gotta tell them from time to time.
14:36 rattmuff babilen: <3
14:39 TyrfingMjolnir joined #salt
14:41 ramishra joined #salt
14:45 ajprog_laptop joined #salt
14:47 anotherZero joined #salt
14:47 Supermathie joined #salt
14:47 fredvd_ joined #salt
14:48 seanz joined #salt
14:49 hasues joined #salt
14:49 ndrei joined #salt
14:56 ndrei joined #salt
14:58 Katafalkas joined #salt
14:59 thayne joined #salt
15:02 logix812 joined #salt
15:03 Katafalkas joined #salt
15:04 aquinas joined #salt
15:05 tristianc joined #salt
15:08 bezeee joined #salt
15:08 berserk joined #salt
15:09 conan_the_destro joined #salt
15:10 bhosmer joined #salt
15:11 bhosmer_ joined #salt
15:12 renoirb Hi guys a question about syntax within a jinja template
15:12 renoirb I have this block from which I want all ipv4 private addresses and host names `{%- set test = salt['publish.publish']('*', 'grains.item', 'host,ipaddr') -%}
15:13 renoirb When I do  this   {% for a in test %}{{ a }} {% endfor %}  I only get the host,   how can I get both grains items i called in publish.publish?
15:14 kusams joined #salt
15:15 patarr joined #salt
15:15 patarr joined #salt
15:15 anotherZero joined #salt
15:18 rawtaz renoirb: interesting question. stay tuned.
15:18 renoirb So far, rawtaz, I changed to {%- set test = salt['publish.publish']('*', 'grains.item', 'host,ipaddr') -%}
15:19 renoirb and {% for a, b in test.items() %}{{ b.get('ipaddr') }} {{ b }}   {% endfor %}
15:19 renoirb gives
15:19 thedodd joined #salt
15:19 renoirb 10.10.10.31 {'host': 'percona1', 'ipaddr': '10.10.10.31'}
15:19 renoirb for each node
15:19 renoirb which is very close to what I want :)
15:19 renoirb I guess that the {{ b }} to be instead {{ b.get('host') }} and it would work
15:20 renoirb it works, thanks for your help rawtaz  :)
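Putting renoirb's pieces together, the working template fragment ends up as (grain names as in the log):

```jinja
{%- set peers = salt['publish.publish']('*', 'grains.item', 'host,ipaddr') -%}
{%- for minion_id, info in peers.items() %}
{{ info.get('ipaddr') }} {{ info.get('host') }}
{%- endfor %}
```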
15:21 ajolo joined #salt
15:22 MTecknology I have a password in a pillar. In one case, I want to change that to MD5 in a template. How hard would this be?  (store it in text, write to minion via template as an MD5 version)
15:22 duncanmv joined #salt
15:24 timoguin MTecknology: I feel like there's a module in salt somewhere that will do MD5 hashing.
15:24 MTecknology that would be quite swell
15:24 timoguin You could shell out with cmd.run in jinja if you have to. Pass the plaintext and set the encrypted version in a variable
15:25 MTecknology my brain instantly thought shellshock when I read that :P
15:25 SheetiS cmd.run all the things without updating bash :D
15:26 MTecknology I'd like to keep it all within salt if possible, but I'm okay with that if it'll work.
15:26 bhosmer joined #salt
15:27 SheetiS the cmd.run thing would definitely work.  Something like this: {% set var = salt['cmd.run']('md5 <<<' ~ pillar.get('password')) %}
15:27 SheetiS assuming that ~ is the right way to concatenate strings in jinja at that spot
15:28 MTecknology If I have list of say 300 passwords (unique to each server) and want to store them in a pillar, is {% if grains['id'] == 'some_fqdn' %}system_pass: 'home_hash'{% endif %} for each of them going to be my best option?
15:29 SheetiS hmm I'd probably structure my pillar so that I could use a grain to target it without a big group of ifs like that
15:30 StDiluted joined #salt
15:30 SheetiS say you had a pillar called passwords that had key/value pairs under it of grains['id']: password, or something like that.
15:31 MTecknology ah, that makes MUCH more sense
15:31 kermit joined #salt
15:31 SheetiS then you could get that pillar into a variable called passwords and do this  {% if grains['id'] in passwords.keys() %}{{ passwords[grains['id']] }}{% endif %}
15:31 SheetiS something like that
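As a sketch, the pillar structure SheetiS describes (minion ids and values invented):

```yaml
# /srv/pillar/passwords.sls -- per-minion secrets keyed by minion id
passwords:
  web1.example.com: 'hash-or-password-for-web1'
  db1.example.com: 'hash-or-password-for-db1'
```

A template can then look up `{{ pillar['passwords'][grains['id']] }}` without a chain of per-host ifs.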
15:32 mgw joined #salt
15:32 timoguin MTecknology: BAM! https://github.com/saltstack/salt/blob/develop/salt/modules/hashutil.py
15:32 timoguin hashutil.md5_digest
15:33 SheetiS I try and make sure that my state/formula doesn't have to know what data it is getting specifically.
15:33 wendall911 joined #salt
15:33 timoguin and no shelling out
15:33 SheetiS and timoguin has the rest of the equation right there :D
15:34 timoguin it's not in 2014.1, just 2014.7+.
15:34 timoguin but it looks portable
15:34 tligda joined #salt
15:36 SheetiS Since it's just a 1-liner to hashlib, that'd be a 3-line module, even if you made your own module.
15:37 timoguin yuhp, but just drop hashutil.py into _modules in the file roots, sync_all, and should be good to go
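For a salt release without hashutil, the "3-line module" SheetiS mentions could be a custom execution module along these lines (the module and function names are hypothetical):

```python
# Hypothetical _modules/myhash.py -- a minimal stand-in for
# hashutil.md5_digest, which is itself a thin wrapper around hashlib.
import hashlib


def md5_digest(instr):
    """Return the hex MD5 digest of a string."""
    return hashlib.md5(instr.encode('utf-8')).hexdigest()
```

After a sync_all, a template could call `{{ salt['myhash.md5_digest'](pillar['password']) }}` with no shelling out.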
15:38 Ironhand after a little experimenting with Salt I ran into the following exception when running 'salt \* state.highstate': http://pastebin.com/Bxwm9zRg
15:38 Ironhand last line: RuntimeError: When using gi.repository you must not import static modules like "gobject". Please change all occurrences of "import gobject" to "from gi.repository import GObject". See: https://bugzilla.gnome.org/show_bug.cgi?id=709183
15:38 Ironhand is this a known issue, and is there any way to avoid it?
15:38 Ironhand (not determined exactly what triggers the problem, yet)
15:40 thayne joined #salt
15:43 bhosmer__ joined #salt
15:45 bezeee joined #salt
15:46 ipmb joined #salt
15:47 * MTecknology hugs timoguin and SheetiS
15:47 glyf joined #salt
15:48 SheetiS :D
15:52 kermit joined #salt
15:54 KennethWilke joined #salt
15:54 troyready joined #salt
15:58 ndrei joined #salt
15:59 tligda Hmm . . . looking for the most trivial github issue to work on. Any candidates?
16:02 to_json joined #salt
16:05 druonysus joined #salt
16:06 peters-tx I'm getting some weird results when I run  salt \* cmd.run 'echo $HOME'
16:06 peters-tx What should I expect $HOME to be set to for a salt-minion?
16:06 iggy / maybe
16:07 MTecknology SheetiS: you just made me start re-engineering a giant pillar and a bunch of states that use it.
16:07 younqcass joined #salt
16:07 SheetiS MTecknology: little pain now, save lots of pain later? :D
16:08 MTecknology SheetiS: not really... it was working perfect and probably was never going to be altered. There's a LOT of logic to recreate. You're a bully.
16:09 MTecknology It'll be better, though... more proper and sane
16:09 XenophonF joined #salt
16:11 SheetiS MTecknology: <3
16:13 dalexander joined #salt
16:14 desposo joined #salt
16:14 UtahDave joined #salt
16:16 scbunn joined #salt
16:16 __TheDodd__ joined #salt
16:16 KyleG joined #salt
16:16 KyleG1 joined #salt
16:17 KyleG joined #salt
16:17 KyleG joined #salt
16:19 StDiluted what’s the best practice way to make sure an executable exists before running a cmd.run?
16:20 StDiluted unless: [ ! -x /file/to/execute ] ?
16:20 aparsons joined #salt
16:20 carmony UtahDave: That is awesome! :D (in reference to https://github.com/naegelin/saltapi)
16:20 SheetiS I use a file.managed on the executable (assuming it is a script i wrote), then I have the cmd.run - require the managed file.
16:21 UtahDave carmony: yeah, huh?
16:22 fannet mornin @UtahDave - any luck getting a hold of your GCE dev?
16:23 SheetiS StDiluted: your unless would work if the executable were not managed by salt for some reason.
16:23 StDiluted well, I am installing ruby onto the AMI before i install salt, because executing a build from source takes about 15 minutes. I want to make sure that the executables are there, though i suppose i know they are for a fact
16:23 Gareth morning morning
16:24 StDiluted in other words, i dont want to manage ruby with salt because it’s too cumbersome
16:24 scbunn joined #salt
16:24 StDiluted and i dont want to use rbenv or rvm
16:24 SheetiS StDiluted: I like to package my own rpm/deb in those cases and use a private repo :D
16:25 StDiluted yeah, i dont really feel like the effort of that either. I baked an AMI with the version of ruby i want.
16:25 StDiluted so I think that unless will have to do
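A sketch of StDiluted's guard as a state; the paths are hypothetical. `onlyif: test -x ...` expresses the same thing as the double-negative `unless: [ ! -x ... ]`: the command only runs when the executable is present.

```yaml
# Hypothetical: run a script only if the baked-in ruby exists on the AMI.
run-my-script:
  cmd.run:
    - name: /usr/local/bin/ruby /opt/app/setup.rb
    - onlyif: test -x /usr/local/bin/ruby
```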
16:25 StDiluted ;)
16:25 UtahDave fannet: I think I figured out what I was missing right before I left last night. I just started testing it now
16:26 SheetiS It can definitely work.  :)
16:27 thayne joined #salt
16:27 uber_ joined #salt
16:28 bhosmer joined #salt
16:28 uber joined #salt
16:29 uber joined #salt
16:29 masterkorp hello
16:29 uber joined #salt
16:30 jemejones if i'm mostly deploying python apps and i'm doing "git checkout master; git pull; python setup.py install" on the shell, what's the recommended way of doing this using a salt state?
16:30 jemejones i have the git repo, so that's working well
16:30 jcockhren jemejones: I use system packaging
16:30 uber joined #salt
16:30 jemejones but i'm using cmd.run to do python setup.py install
16:30 ndrei joined #salt
16:30 jcockhren that way salt can just pkg.installed and restart the services
16:31 fannet @UtahDave: OK thanks.
16:32 jcockhren jemejones: that should work with the cmd.run
16:32 aparsons joined #salt
16:32 jemejones jcockhren, is there a better way?
16:32 bezeee joined #salt
16:32 jonatas_oliveira joined #salt
16:33 jemejones i'm using the virtualenv with a requirements.txt to initially build the virtualenv
16:33 jemejones that's working awesome
16:33 jemejones and i'm using git to checkout and pull the branch that i need
16:33 jemejones the cmd.run just feels like it's the odd man out.
16:34 anteaya joined #salt
16:34 datenarbeit joined #salt
16:36 jcockhren hmmm. there's http://docs.saltstack.com/en/latest/ref/states/all/salt.states.virtualenv_mod.html#module-salt.states.virtualenv_mod
16:37 jemejones jcockhren, yeah - that's working awesome.
16:38 jcockhren in terms of python setup install, seems like a perfect job for cmd.run
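A sketch tying jemejones's three pieces together (repo URL and paths are hypothetical). `cmd.wait` plus `watch` keeps the `setup.py install` from re-running on every highstate; it only fires when the git checkout actually changes.

```yaml
myapp-checkout:
  git.latest:
    - name: https://example.com/myapp.git
    - rev: master
    - target: /opt/myapp

myapp-venv:
  virtualenv.managed:
    - name: /opt/myapp/venv
    - requirements: /opt/myapp/requirements.txt
    - require:
      - git: myapp-checkout

myapp-install:
  cmd.wait:
    - name: /opt/myapp/venv/bin/python setup.py install
    - cwd: /opt/myapp
    - watch:
      - git: myapp-checkout
```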
16:38 jemejones ok - i'll just stick with that for now
16:39 jemejones on another topic - is there a good doc on managing multiple environments?  right now, everything is in base.  the main thing that i'll need to do for the different environments is to just have some configurations for things
16:40 jemejones so, some specific questions are do you just configure each minion with /etc/salt/minion and the environment=prod|staging|qa?
16:40 jemejones if that's how you do that, then cool.
16:40 thedodd joined #salt
16:40 jemejones and how do i just deploy to environment=prod?
16:43 ndrei joined #salt
16:43 StDiluted i had such trouble figuring out environments in salt, that I gave up
16:44 UtahDave jemejones: I'd avoid setting the environment from the /etc/salt/minion config
16:44 UtahDave jemejones: your top.sls is where you do your matching to determine which minions pull configs from which environments
16:45 perfectsine joined #salt
16:47 iggy jemejones: if the packages are in pip, there's a pip state module
16:47 wt joined #salt
16:48 jemejones UtahDave, how do you recommend differentiating with the various environments to put a match in the top.sls?
16:48 jemejones hostname?
16:48 Ryan_Lane joined #salt
16:49 jemejones i've seen various pieces of documentation on this (on the main site and various blog posts) but nothing has really clicked for me.
16:49 UtahDave jemejones: Well, there are many ways to match.  You can match on the minionid, you can set custom grains, you can specify it in a pillar variable
16:49 timoguin A well-defined FQDN should work great.
16:49 UtahDave jemejones: How do you determine which servers are in which envrironments right now?
16:50 jemejones UtahDave, this is all completely new
16:50 jemejones i don't have this working yet
16:50 timoguin *.dev.site.com, for example in the dev env
16:50 jemejones i'm migrating from a poorly done chef implementation to either ansible or salt
16:50 jemejones i'm liking salt better at the moment.
16:51 ndrei joined #salt
16:51 UtahDave cool
16:51 blast_hardcheese joined #salt
16:52 conan_the_destro joined #salt
16:52 jemejones we're going to be migrating this app from hosted in-house to a cloud vendor, maybe amazon.
16:52 jemejones i noticed a salt-cloud module that looks pretty cool
16:53 timoguin The AWS support is great.
16:53 jemejones i'm not sure if i spin up servers on amazon that i have the ability to change the hostname, do i?
16:54 smcquay joined #salt
16:54 jemejones i mean, i guess updating /etc/hostname works....but is that what a minion uses for the master to match them?
16:54 fragamus joined #salt
16:55 timoguin The master can match them on a lot of different things. It's the minion_id by default, which should default to the FQDN.
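A sketch of what UtahDave and timoguin describe: environments defined in the master's `top.sls`, with minions matched by FQDN-style minion id (the `*.dev.site.com` domains are hypothetical, following timoguin's example).

```yaml
# top.sls on the master
base:
  '*':
    - common
dev:
  '*.dev.site.com':
    - webserver
prod:
  '*.prod.site.com':
    - webserver
```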
16:55 timoguin I define my minionids when I spin them up with salt-cloud
16:55 iggy most cloud providers also have some form of metadata that you can attach to an instance
16:56 iggy we use gce tags heavily for targeting
16:56 druonysuse joined #salt
16:56 druonysuse joined #salt
16:56 StDiluted jemejones: I wrote a custom grain to gram EC2 tags
16:56 StDiluted grab*
16:56 iggy with a custom gce grain module that fills in the tags/roles grain dynamically from that info
16:56 iggy ^5
16:56 iggy great minds...
16:57 jemejones ooooohhhhhhh....tags......
16:57 jemejones mmmmmmmmmmm..............
16:57 timoguin AWS can also be configured to use your own DHCP and DNS servers if you're using VPCs.
16:57 timoguin And tags can be defined by salt-cloud as well
16:57 StDiluted I think my code is in the contrib in salt let me see
16:57 jemejones so, i guess if i do use ec2, i could use tags to define the role and the environment
16:57 jemejones like app server for role and production for the environment
16:58 timoguin you could
16:58 Katafalkas joined #salt
16:58 jemejones if it were up to me, i think i'd just choose ec2.  i've had good success with them before
16:58 StDiluted yep: https://github.com/saltstack/salt-contrib/blob/master/grains/ec2_tag_roles.py
16:59 StDiluted looks like someone else did some modifications to it recently but there it is
16:59 jemejones StDiluted, that looks sweet
16:59 thedodd joined #salt
16:59 timoguin Tags are easy-peasy to define with salt-cloud profiles: tag: {'Environment': 'dev', 'Role': 'web'}
16:59 timoguin so my dev webservers will get those when i spin them up
17:00 jemejones this is all so helpful, guys
17:00 Katafalkas joined #salt
17:00 jemejones i think for developing it, i may just hard code some grains to see what happens and make sure i have my top.sls pull things correctly
17:01 StDiluted yeah that’s how i tested to begin with
17:01 StDiluted it was just nice to have the roles be grains
17:01 StDiluted and then you can easily match them in your top.sls
17:01 to_json joined #salt
17:02 apergos left #salt
17:03 zooz joined #salt
17:03 StDiluted like: 'ec2_roles:web':
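A sketch of that grain match in a `top.sls`; the `ec2_roles` grain comes from the custom grain StDiluted linked, and the state names here are hypothetical.

```yaml
base:
  'ec2_roles:web':
    - match: grain
    - nginx
  'ec2_roles:db':
    - match: grain
    - postgres
```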
17:04 KyleG UtahDave: So there I was, talking about SaltStack and how easy it was for me to use it to patch my env. This guy agreed with me: http://ss.digitalflydesigns.com/Screen%20Shot%202014-09-26%20at%2010.02.32%20AM.png so then I looked @ where he works and what he does..: http://ss.digitalflydesigns.com/Screen%20Shot%202014-09-26%20at%2010.02.52%20AM.png
17:04 KyleG you guys better not introduce any major bugs ;p no pressure. haha
17:05 Ryan_Lane KyleG: I've been using develop for quite a while now and haven't hit any very serious bugs
17:05 fragamus joined #salt
17:05 UtahDave KyleG: he he. nice
17:05 Ryan_Lane I'm not using a lot of features, though
17:05 aparsons_ joined #salt
17:07 KyleG Ryan_Lane: I used to use Develop, and then I think somewhere around 16.1 or 17.x there was a math bug that set any file that did not have permissions explicitly defined to mode 420.
17:08 * Ryan_Lane nods
17:08 Ryan_Lane I fork and live on a specific hash
17:08 KyleG that one scared me, not gonna lie
17:08 KyleG I still use salt, love it
17:08 StDiluted lol, the ‘420’ bug
17:08 Ryan_Lane and I test in vagrant/docker before I use it in production
17:08 StDiluted ;)
17:08 KyleG but I approach it like it's a Nuke instead of a machine gun now
17:08 Ryan_Lane :D
17:08 Ryan_Lane yeah
17:09 StDiluted Error 420: Developer too stoned to do math.
17:09 KyleG haha
17:09 logix812 joined #salt
17:09 KyleG people at work know I smoke so I think they thought at first that I was trying to be funny
17:09 KyleG and I was like no this isn't meh
17:09 KyleG lol
17:10 reddye joined #salt
17:12 saurabhs joined #salt
17:14 ramishra joined #salt
17:16 KennethWilke i like that 420 is twitter's response code for rate limits
17:17 aparsons joined #salt
17:18 ndrei joined #salt
17:19 murrdoc joined #salt
17:19 StDiluted so is the way to include custom grains still by adding a _grains directory in your salt root?
17:21 SheetiS StDiluted: that still works.  It has the lowest precedence, but can definitely do it.
17:21 SheetiS http://docs.saltstack.com/en/latest/topics/targeting/grains.html#precedence
17:21 StDiluted ah, ok cool
17:22 SheetiS or should I say that it has the highest precedence
17:22 SheetiS it overwrites everything else
17:22 SheetiS it's processed last
17:22 SheetiS so overwrites previous grains.
17:22 StDiluted so if i put the custom grain in /etc/salt/grains it will get synced to the minion at the end of the salt-cloud run assuming i have sync_after_install: all
17:23 jeffspeff I'm quite new to salt, currently deploying and testing within my network and I'm noticing that my minions aren't always available. I would expect this for remote systems outside the LAN, but even servers with static addresses sometimes show as down. How should I investigate this, are there any common solutions?
17:23 jeffspeff FYI, my minions are all Windows systems Windows XP, 7, 2008 server and 2012 server. I'm not noticing a particular OS that is responding less than others.
17:24 SheetiS StDiluted: That sounds reasonable to me.
17:25 SheetiS jeffspeff: if you are running a 2014.1.XX version, that could be after an AES key rotation (happens daily).  Do they all come back reliably after you run a 'salt \* test.ping'?
17:25 kingel joined #salt
17:25 jean-michel joined #salt
17:26 jeffspeff I'm using the latest version, no, they do not come back. I've even tried remoting into a few of them and restarting the salt-minion service, the system still didn't come back up
17:27 KyleG UtahDave: I see apple on the salstack website, are they really using salt?
17:28 SheetiS Do their keys show as accepted in salt-key?  I don't run windows very much with salt, so I was just trying to cover some of the more common things.
17:28 jeffspeff yes, the keys are all accepted, I see the hosts when i do 'salt-run -L'
17:29 UtahDave KyleG: yep
17:29 KyleG noice
17:29 KyleG they're trying to recruit me for their apple pay team right now, I'll throw it out there that I use salt extensively lol
17:29 UtahDave jeffspeff: what version are you?
17:29 EarthBorn joined #salt
17:29 alexthegraham joined #salt
17:29 jeffspeff SheetiS, when I issue the test.ping job, the log shows several of the hosts and that "did not return in time"
17:29 UtahDave KyleG: apple is pretty big, I don't know if they're using it everywhere or not
17:30 alexthegraham Hey gang, I'm looking for a hand with jinja templating. I want to call pkg.installed and provide a list of packages from an external file.
17:30 UtahDave jeffspeff: They're probably all returning, if you check the job cache.  The master cli is just not waiting for all the results
17:30 jeffspeff UtahDave, server is CentOS 7 with salt-master.noarch 2014.1.10-4.el7. The minions are version 2014.1.10
17:30 alexthegraham I don't want to call pkg.installed once for each line in the file, though. I just want to call pkg.installed and provide the lines of the external file as list items to pkgs.
17:32 __TheDodd__ joined #salt
17:32 alexthegraham Actually, I'd like to provide the lines of the file as list items in a pillar.
17:33 jeffspeff UtahDave, the log shows the job try several times and each time it has a list of hosts that didn't respond in time. I have set the following: "keep_jobs: 200" "timeout: 90" "loop_interval: 60"
17:33 jeffspeff How can I check the cache and confirm your idea?
17:33 UtahDave jeffspeff: salt-run jobs.list_jobs
17:33 UtahDave ignore all the find_job   jobs
17:34 EarthBorn #freenode
17:34 jonatas_oliveira joined #salt
17:34 SheetiS alexthegraham: the pillar is definitely ideal for that.  I can probably give a quick example in a moment of something you could do
17:34 alexthegraham SheetiS, that would be great. Thanks.
17:35 jeffspeff UtahDave, so, if the job is listed there then that means it hasn't returned from the minion yet?
17:35 thedodd joined #salt
17:35 ndrei joined #salt
17:35 EarthBorn joined #salt
17:36 kermit joined #salt
17:36 kermit1 joined #salt
17:37 SheetiS alexthegraham: This is a short example of what I was thinking:  https://bpaste.net/show/d65f04e03ef5
17:37 kermit1 joined #salt
17:37 mugsie left #salt
17:38 UtahDave jeffspeff: no, that should tell you which minions have returned.
17:38 UtahDave jeffspeff: also, if you add -v to your command it will give you the job id (jid)
17:38 SheetiS http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html and http://docs.saltstack.com/en/latest/topics/tutorials/pillar.html are good background references for what is going on.
17:39 UtahDave then you can run     salt-run jobs.lookup_jid <jobid>     and it will give you the output of all the minion returns.
17:39 aparsons joined #salt
17:39 arnoldB left #salt
17:39 arnoldB joined #salt
17:39 reddye left #salt
17:40 aparsons joined #salt
17:40 ndrei_ joined #salt
17:41 jeffspeff UtahDave, what if "salt-run jobs.lookup_jid '20140926121833035185'"  doesn't return anything?
17:41 __TheDodd__ joined #salt
17:43 UtahDave Hm. then that should mean that no minions have returned.
17:43 UtahDave salt-run jobs.active      should show you what jobs are still running on which minions
17:45 jeffspeff UtahDave, this is from the output of salt-run jobs.active http://pastebin.com/1PjtF6a9
17:45 jeffspeff UtahDave, sorry, that's from jobs.list_jobs
17:46 alexthegraham SheetiS, Thanks. What you have there can be done w/out the for loop by just using "- pkgs: {{ pillar['packages'] }}. Here's pseudocode for what I'm shooting for: https://gist.github.com/alexthegraham/0f70264332a932a37e95
17:46 UtahDave cool, so what's the output of this:   salt-run jobs.lookup_jid 20140926121833035185
17:47 jeffspeff UtahDave, that doesn't return anything
17:48 jaimed joined #salt
17:48 smcquaid joined #salt
17:48 SheetiS alexthegraham: I suppose so, since you can give - pkgs: a list formatted like ['this', 'list', 'here'].  Just used to iterating over a for loop to perform extra tasks with my pillars I guess :D
17:50 UtahDave jeffspeff: Can you log into one of the windows machines and verify that the salt-minion service is running?
17:51 SheetiS if you want to go outside of the pillar to an external file from a pillar, you can use the import_* stuff here http://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.jinja.html.  search for import_yaml or import_text on that page.
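A sketch of the `import_yaml` approach SheetiS points at, which matches what alexthegraham wants (one `pkg.installed` call fed a list from an external file). The filename `packages.yaml` and its contents are hypothetical.

```yaml
{# packages.yaml, kept next to this state, would contain e.g.:
   packages: [git, vim, tmux]                                  #}
{% import_yaml 'packages.yaml' as pkgdata %}

wanted-packages:
  pkg.installed:
    - pkgs: {{ pkgdata['packages'] }}
```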
17:52 jeffspeff UtahDave, yes, just did, i even restarted the service just for fun
17:52 UtahDave Hm and the keys showed up correctly in salt-key -L on the master, right?
17:53 jeffspeff yes
17:54 jeffspeff I'm not seeing any errors in the master log about keys
17:54 MTecknology Hm.. I have a pillar that looks like this (http://dpaste.com/2NDEEBQ). All of these hosts have an id of 'server.<center_number>.domain.tld'. I had been doing {% if grains['id'].split('.')[1] == '<center_number>' %} in a pillar, but thanks to SheetiS, I'm not doing that anymore. In my state, I now have this (http://dpaste.com/1394ZW4) which seems to always pick the else, even on say center 1290,
17:54 MTecknology which should match that if statement.
17:54 Katafalk_ joined #salt
17:54 UtahDave jeffspeff: Hey, I have to run to a lunch meeting.    I'll jump back in here as soon as I get back
17:54 MTecknology I'm doing something silly... any idea what it is?
17:54 jeffspeff ok
17:54 jeffspeff thanks
17:54 nitti joined #salt
17:55 p2 joined #salt
17:56 SheetiS MTecknology: for pillar['centers_served'], are you wanting to match the key, or the values in the list?
17:56 nitti joined #salt
17:56 SheetiS like 6 on the if in the paste
17:56 SheetiS *line
17:56 MTecknology they key
17:56 SheetiS pillar['centers_served'].keys()
17:56 SheetiS try that
16:56 dstokes is `echo 'service salt-minion restart' | at now + 1 minute` still the best way to restart minions with salt?
17:57 iggy I've frequently wondered the same thing
17:57 thedodd joined #salt
17:58 iggy I generally just tell them to restart, but I think it's messed me up a few times
17:58 timoguin dstokes: I've had no problem just doing service.restart salt-minion
17:58 StDiluted hm. Trying to sync a custom grain and it doesn’t appear to work. log on minion: https://gist.github.com/dginther/b9b27a0576e7fedcbfbb BUT /var/cache/salt/minion/extmods/grains is NOT created.
17:58 MTecknology supposedly, the minion forks and you don't have to actually worry about killing the task that's running
17:58 MTecknology haven't tested it yet
17:58 dstokes timoguin: auth btwn the minion and master gets jacked on rc2 restarting a minion w/ salt
17:59 dstokes restarting the minion manually after a salt triggered restart gives me "Failed to login" in the minion log
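The `at`-based workaround dstokes asks about, written as a state (assumes the `at` daemon is installed on the minions): deferring the restart by a minute lets the minion finish returning its result to the master before the service bounces.

```yaml
restart-salt-minion:
  cmd.run:
    - name: echo 'service salt-minion restart' | at now + 1 minute
```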
17:59 MTecknology SheetiS: same result
17:59 drawks hey hey hey
17:59 drawks another day another couple new issues filed
17:59 * drawks settles in to write a few hundred more lines of tests
18:00 iggy StDiluted: what if you run a highstate (which should theoretically be running that same code path... but theoretically)
18:00 drawks I really didn't expect the test coverage on this project to be so low
18:00 dstokes basepi: any input? (on proper way to restart minion w/ salt)
18:00 StDiluted iggy: highstate works fine
18:00 dstokes if service.running w/ watch statement is supposed to work, i'll file an issue
18:00 iggy StDiluted: but does it sync the grains?
18:01 MTecknology drawks: too many people contributing that don't know how to write tests (I'm one of them)
18:01 drawks writing tests is easy, just time consuming
18:01 StDiluted i stuck the grain in $file_roots/_grains
18:01 MTecknology The test coverage has been getting much better over a short time, though
18:01 StDiluted iggy mellem try
18:01 StDiluted er, lemme! :)
18:01 timoguin dstokes: hmm I haven't had an issue with that. Just sometimes the minion doesn't respond. But it comes back up.
18:02 drawks just pick a module and go after it. most of the code is meant to be pretty functional, so it's easy enough to test input vs output
18:02 timoguin But I don't restart my minions often.
18:02 drawks way simpler than some other side-effect filled stuff i've had to do test for before
18:02 bhosmer joined #salt
18:03 viq joined #salt
18:03 Gareth drawks: There are existing docs on how to writes tests but they're a little sparse, if you submit a PR to those with some more information then I bet more people will write more tests.
18:03 perfectsine joined #salt
18:04 jslatts joined #salt
18:04 jalaziz joined #salt
18:05 SheetiS MTecknology: Does pillar['centers_served']["{{ grains['id'].split('.')[1] }}"] actually give you the information you want outside of an if statement and applied only to a minion that would work with that setting?  Also is there a reason that you refer to it like that rather than pillar['centers_served'][grains['id'].split('.')[1]]?
18:05 StDiluted iggy: no it does not sync the grains
18:05 jalaziz joined #salt
18:05 timoguin A lot of people just don't know how to write tests. Part of the problem is the fact that Salt has a ton of modules, with a wide range of use and stability.
18:05 timoguin So it's hard to know without looking at the code whether or not it's stable or robust.
18:05 drawks Gareth: I'm just gonna scratch my own itch for now. Ensure coverage on the modules *I* use... if others want to use my test code as an example (for better or worse) more power to them ;)
18:06 to_json joined #salt
18:06 iggy StDiluted: crank the debug up to 11!
18:06 SheetiS afk a minute brb
18:07 StDiluted iggy: weird, eh?
18:07 dstokes nvm guys. i'm being lame, rewriting config w/ invalid master entry :/
18:07 elfixit joined #salt
18:07 iggy StDiluted: seemingly... but I've found that there's usually a logical explanation for why things don't work the way I expect
18:08 iggy usually it's because I did something daft
18:08 druonysuse joined #salt
18:08 druonysuse joined #salt
18:09 StDiluted hahah yes
18:09 StDiluted it was me
18:09 to_json joined #salt
18:09 StDiluted i failed to put the grain in the right place
18:10 MTecknology SheetiS: pillar['centers_served'][grains['id'].split('.')[1]]  <-- some of these have leading zeros that jinja likes to trim
18:10 scbunn joined #salt
18:11 rap424 joined #salt
18:11 MTecknology I guess I haven't checked to make sure it gives me what I want... lemme check
18:12 glyf joined #salt
18:12 che-arne joined #salt
18:12 aparsons joined #salt
18:13 thayne joined #salt
18:13 kingel joined #salt
18:15 hintss joined #salt
18:15 SheetiS MTecknology: ahh doesn't keep it as a string even though it's quoted in the pillar?
18:16 MTecknology SheetiS: when it's pulled from the pillar, it will be, but that part isn't taken from the pillar, it's taken from grains, and when I grab it it's still quoted, but if I don't quote it in the file, it'll be lost during render.
18:17 SheetiS ahh I see
18:18 druonysus joined #salt
18:18 druonysus joined #salt
18:22 MTecknology SheetiS: My issue at this moment seems to be that line 6 (http://dpaste.com/1394ZW4) isn't matching when it should be. I tried with .keys() but that didn't help. I tested what I expect to happen in python alone and it worked.
18:22 harkx who/where can I inform about an error in the docs  (on salt.readthedocs.org) ?
18:23 timoguin harkx: docs are built from the source, so they get issues on github too
18:23 MTecknology harkx: the bestest best thing you can do is create a PR with a fix; alternatively, you can create an issue
18:23 babilen MTecknology: remove {{ and }}
18:23 MTecknology babilen: but it would then be treated as just text because it's inside of quotes
18:24 babilen harkx: That doesn't host current documentation, you find that on docs.saltstack.com
18:24 harkx timoguin, MTecknology alright! thanks! I'll go search on github and create PR .. thanks!
18:24 SheetiS harkx: also docs.saltstack.com is the latest place for most up-to-date docs.
18:24 babilen MTecknology: It already is a jinja block
18:24 SheetiS err babilen already said that
18:25 SheetiS looking at too many things at once
18:25 harkx babilen, SheetiS , oke, thanks. you're correct.. I'll double check and check github to make sure..
18:25 XenophonF hey UtahDave, by default does a Salt minion on Windows use the computer's FQDN for the minion ID?
18:25 MTecknology babilen: huh?
18:26 MTecknology babilen: Are you saying that this should work?  {% if "grains['id'].split('.')[1]" in pillar['centers_served'].keys() %}
18:27 SheetiS It definitely won't work with the quotes.
18:27 MTecknology that's what I was saying :)
18:27 babilen harkx: That doesn't mean that the error doesn't still exist, but you'd better double-check against current documentation
18:27 MTecknology and I know I need the quotes
18:27 babilen MTecknology: You won't need the {{ nor }} nor the two " "
18:28 MTecknology babilen: yes, I definitely do.
18:28 babilen You do?
18:28 duncanmv joined #salt
18:28 MTecknology yes
18:29 MTecknology Like I mentioned.. without quotes, the leading zeros are trimmed as python likes to do with numbers.
18:29 tharkun joined #salt
18:30 MTecknology and it gets treated like a number which breaks it even without trimming zeros
18:30 babilen Ah, I had no idea that your data is of a specific format. I mean that you don't have to signal "python block" inside {% ... %}
18:30 glyf joined #salt
18:30 MTecknology hm..
18:31 babilen Ah, jinja .. the worst decision of the saltstack project. Lets see what we can do for you.
18:31 mapu joined #salt
18:31 MTecknology funny, I actually like jinja
18:31 babilen I would have *much* preferred mako, but that discussion is moot.
18:32 datenarbeit joined #salt
18:33 MTecknology {% {{ }} %} .. should work... I'd assume... Maybe it's treating the whole thing as one string and not running {{ }} through jinja since it's inside of {% %}, although I'm sure I did it before.
18:35 MTecknology SheetiS: all your fault!
18:35 timoguin I like Jinja more the more I work with it.
18:36 timoguin more the more
18:36 MTecknology hrm......
18:36 nitti joined #salt
18:37 druonysuse joined #salt
18:37 Gorilla joined #salt
18:38 MTecknology I just realized something...
18:39 MTecknology LOL!
18:40 MTecknology Because I'm doing .split() on a string, it outputs a string, so, in this case, it's already quoted. Duh... :P
18:42 iggy I'd use a {% set svd = grains['id'].split('.')[1] %} to save the triple lookup
18:43 babilen MTecknology: yes, sure
18:44 babilen I still don't quite understand why what I suggested didn't work for you, but I'll have your setup duplicated in vagrant in a second and can run it myself.
18:45 babilen It works perfectly here
18:45 babilen I have no idea why it didn't for you. Could you paste your exact state and its output to http://refheap.com please.
18:46 babilen fwiw, I've been using "{% if grains['id'].split('.')[0] in pillar['centers_served'].keys() %}" as suggested earlier.
18:46 babilen (well, [1] not [0] in your case)
18:47 MTecknology babilen: it looks like your suggestion worked
18:47 babilen Ah, so you never actually tried it?
18:48 MTecknology I made an assumption. You can punch me. I'm sorry. :(
18:48 MTecknology You were correct.
18:50 babilen I tend not to punch people :)
18:50 iggy I'll do it!!!!
18:51 babilen And I'd like to point out iggy's earlier suggestion
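The pattern that ended up working for MTecknology, combined with iggy's `set` suggestion to avoid the triple lookup (the state body is a hypothetical placeholder; the pillar keys are quoted strings like '1290', which is why no extra quoting or `{{ }}` is needed inside the `{% %}` block).

```yaml
{% set center = grains['id'].split('.')[1] %}
{% if center in pillar['centers_served'] %}
center-config:
  file.managed:
    - name: /etc/app/center.conf
    - contents: 'center={{ center }}'
{% endif %}
```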
18:51 SaltyWater joined #salt
18:51 SaltyWater hello all
18:51 MTecknology Woohoo! It's working perfect!
18:51 MTecknology On to the next state... :(
18:51 babilen SaltyWater: Good evening SaltyWater :)
18:51 SaltyWater i wonder if there are some salt users here who can help with a small thing
18:52 SaltyWater hey babilen
18:52 fannet @UtahDave: How did your GCE test go :)
18:52 babilen SaltyWater: What small thing might that be if I may be so impudent to ask you that?
18:52 SaltyWater i am trying to run a small script and have it just echo a output file id , and then continue to do the work , and have the salt master return ( getting the output file name )
18:52 grove_ joined #salt
18:53 SaltyWater and then later can i can copy that output file over
18:53 SaltyWater ok so i have tried & and --async
18:53 SaltyWater nothing seeems to be working
18:53 SaltyWater i get the echo "file.txt" back from minion
18:53 SaltyWater but the job does not keep executing
18:54 SaltyWater tried & , nohup etc
18:54 SaltyWater this is how i am calling the job
18:54 SaltyWater salt "client1" cmd.run "script.sh arg1 arg2"
18:54 babilen SaltyWater: It would probably help tremendously if you could paste the relevant parts of your setup to, say, http://refheap.com along with your commands and their output. I have to admit that I don't quite understand yet what you are trying to do.
18:54 * MTecknology hugs babilen
18:55 babilen d'aww :)
18:55 nitti joined #salt
18:56 SaltyWater k guys doing that give me a second
18:56 SaltyWater sorry for not being clear
18:58 SaltyWater https://www.refheap.com/90730
18:58 SaltyWater here you go guys
18:59 SaltyWater the reason for the apparent redundancy with result.txt  is because i will be calling a few minions so this ways i can correlate the outputs to minions
18:59 SaltyWater btw do let me know if there is a better way of doing this
18:59 fannet @UtahDave: How did your GCE test go :)
19:00 SaltyWater i was going to do this over ssh but ran across salt a few days ago
19:00 SaltyWater and am learning it
19:01 babilen SaltyWater: What are you actually trying to achieve?
19:01 SaltyWater search for a term in a few hundred files that get written constantly
19:02 babilen SaltyWater: Why don't you use grep for that?
19:02 SaltyWater btw ..slightly offtopic .. SALT came in super handy today with the bash stuff going on :)
19:02 TheThing joined #salt
19:02 babilen It did indeed.
19:02 smcquay joined #salt
19:02 SaltyWater i am using grep babilen .. just it takes forever to return
19:02 SaltyWater so i want it to continue working
19:03 SaltyWater and the master to collect the file later ..of the result .. btw like i said .. if there is a better way let me know
19:03 babilen I loved it when my colleague came in yesterday morning five minutes past nine and jokingly asked me "And, did you upgrade bash on all servers/boxes already?" and to see his face when I said: "In fact, I've just done that. Yes." :D
19:03 SaltyWater basically master needs to call the minion to do the job , which will take some time
19:03 SaltyWater lmao :) good one
19:03 SaltyWater have been following it pretty close
19:04 babilen SaltyWater: The "master collect the file later" part is probably where you are thinking salt would work in a way it probably doesn't.
19:04 ek6 joined #salt
19:04 aparsons joined #salt
19:04 SaltyWater ok .. how would you go about doing something like this ..
19:04 SaltyWater run script on minion which takes time
19:04 jalaziz joined #salt
19:04 SaltyWater and then the data needs to pushed to master
19:05 babilen Do you need that data on the salt master?
19:05 babilen (for anything salty that is)
19:05 SaltyWater hmm i am using it as a central hub sort of
19:06 SaltyWater since 5-6 minions need to do the work
19:06 SaltyWater and then the data needs to be combined
19:06 SaltyWater i was planning on using cp.get_file  after that
19:06 babilen Are you trying to implement some map/reduce style functionality on top of salt here?
19:07 SaltyWater not sure i understand .. but this is how i do it right now .. do the same thing over ssh , ssh to each client/minion .. run the script and later ssh again and collect the fles , combine , sort them etc
19:07 SaltyWater just trying to see if SALT would be a good option
19:07 iggy orchestrate
19:07 kingel joined #salt
19:08 kingel joined #salt
19:08 SaltyWater i guess you can call it that .. i was thinking of minion in the true sense of the word minion :)
19:08 SaltyWater the master tells theminion to do a job and it does it
19:09 chrisjones joined #salt
19:11 iggy that's a feature of salt... orchestrate
19:11 SaltyWater so how would you guys do it ?
19:11 ndrei joined #salt
19:11 ndrei_ joined #salt
19:11 ek6 how relative long in the tooth is rc2 at this point?...i want to do some helium upgrade testing and not sure if thats the best start at this time
19:11 iggy it lets you run multiple jobs across hosts with dependencies between them
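The orchestrate approach iggy describes could be sketched as a runner sls along these lines (target, script path, and output path are invented for illustration; `cp.push` additionally needs `file_recv: True` in the master config):

```yaml
# /srv/salt/orch/collect.sls -- run with: salt-run state.orchestrate orch.collect
run_the_script:
  salt.function:
    - name: cmd.run
    - tgt: 'worker*'
    - arg:
      - /usr/local/bin/long_job.sh

push_results:
  salt.function:
    - name: cp.push
    - tgt: 'worker*'
    - arg:
      - /tmp/job_output.txt
    - require:
      - salt: run_the_script
```

The `require` makes the push wait for the first round of jobs to finish, which is the dependency ordering discussed above; pushed files land under the master's cachedir, where they can be combined and sorted.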
19:12 to_json joined #salt
19:13 babilen iggy: You'd still have problems modeling the sending of data, but then you could probably hook on either cp or the reactor system. Peer runners might be possible, but all in all, I don't necessarily have the feeling as if salt is a good tool here.
19:14 tharkun joined #salt
19:14 SaltyWater that sucks  .. i didnt think it would be that hard .. and thought this is what probably the devs had in mind for the minions job
19:15 iggy if it's being done purely with ssh now... salt should at least be able to streamline things a little
19:15 Ahrotahntee joined #salt
19:15 SaltyWater exactly iggy ..
19:15 babilen Well, I'm not creative enough for that right now :)
19:15 Ahrotahntee afternoon folks
19:16 MTecknology another state updated for this new structure! :D
19:16 * babilen passes on to people for whom it is still "afternoon"
19:16 Ahrotahntee Fri Sep 26 15:16:25 EDT 2014
19:16 MTecknology two more left, no more "low hanging fruit"
19:16 MTecknology Ahrotahntee: east coast?
19:17 Ahrotahntee aye
19:17 * MTecknology is in SD
19:17 SaltyWater forget the exact script for a minute .. is there an example of orchestrate somewhere i can look at .. basic stuff .. master sends a command to minions and expects results back
19:17 catpig joined #salt
19:17 Ahrotahntee I'm trying to use publish in a custom grain, but I can't seem to get it; salt.modules.publish.publish complains that salt doesn't have a member/package/etc 'modules'
19:18 iggy SaltyWater: I wasn't necessarily thinking in terms of getting output back... more just waiting for the first round of jobs to be complete
19:18 Ahrotahntee am I going about this ass-backwards?
19:18 nitti joined #salt
19:18 catpigger joined #salt
19:19 Ahrotahntee ffs got it
19:19 Ahrotahntee nevermind
19:19 babilen heh
19:19 babilen publish might come in handy for SaltyWater
19:19 SaltyWater yes thats where i think the issue is ..sorry i misunderstood .. the getting the output back i think the cp.getfile will be fine
19:20 iggy what babilen just said...
19:20 iggy started reading that module and was really confused for a sec
19:20 Ahrotahntee you ever get one of those things where you've been looking at the same 8 lines of code for so long you're effectively blind?
19:20 Ahrotahntee it was that
19:21 babilen Yes, those are the times were I just start doing something else for a while. Glad you finally "saw" it.
19:21 h1gglns joined #salt
19:22 babilen salt mine might be worth a look too
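For Ahrotahntee's use case, a minimal mine setup might look like this (interface name from the discussion; placement in minion config or pillar both work):

```yaml
# minion config (or pillar): publish tun0 addresses to the mine
mine_functions:
  network.ip_addrs:
    - tun0
```

Other minions can then read it in a template with `salt['mine.get']('*', 'network.ip_addrs')`, which returns the hostname-to-addresses dict mentioned below.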
19:22 h1gglns hi. how can i change the minion id remotely? can i change it from the salt-master server, or do i have to log in and change it manually on the minion host machine?
19:22 kaptk2 joined #salt
19:23 MTecknology babilen: {% set centers = pillar.get('centers_served'[grains['id'].split('.')[1]], [grains['id'].split('.')[1]|string()]) %}   <-- not quite. I'm sure it's because of pillar.get('centers_served'[grains['id'].split('.')[1]] not working the way I was hoping.
19:24 Ahrotahntee babilen: this looks easier, I might go with salt mine
19:24 oz_akan joined #salt
19:24 iggy MTecknology: nope
19:24 iggy the python gods just killed a kitten
19:24 MTecknology Any pointers here?  grains['centers_served'][grains['id'].split('.')[1]]  would have what I need, I just need the alternative if the key isn't in there
19:26 iggy that looks right, your first one is wrong
19:26 iggy you can't do what you're trying to do in one step like that
19:27 thayne joined #salt
19:27 mapu joined #salt
19:27 MTecknology iggy: what I /was/ doing was this    {% set centers = pillar.get('centers_served', [grains['id'].split('.')[1]|string()]) %}   but I changed the pillar and now I need to change what this does as well
19:28 MTecknology {% if grains['id'].split('.')[1] in grains['centers_served']) %} {% set centers = grains['centers_served'][grains['id'].split('.')[1]] %} {% else %} {% set centers = [grains['id'].split('.')[1]] %}   <-- I could probably do this, but it feels silly
19:28 tristianc joined #salt
19:28 iggy that's your best option with jinja syntax I think
19:28 iggy you could probably be slightly more concise with straight python, but...
19:29 * babilen would consider writing it in python/pyobject at this point
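The fallback logic above collapses to one line with dict `.get`, which also works inside Jinja; a plain-Python sketch with invented grain values:

```python
# Hypothetical grains, mirroring the structure discussed above
grains = {
    "id": "web1.nyc.example.com",
    "centers_served": {"nyc": ["nyc", "bos"]},
}

site = grains["id"].split(".")[1]          # "nyc"
# Fall back to [site] when the key is missing -- the else branch above
centers = grains["centers_served"].get(site, [site])
print(centers)  # → ['nyc', 'bos']
```

In Jinja that reads `{% set centers = grains['centers_served'].get(grains['id'].split('.')[1], [grains['id'].split('.')[1]]) %}`.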
19:29 Ahrotahntee hmm, while saltmine is interesting this brings me back to my initial issue
19:30 smkelly joined #salt
19:30 Ahrotahntee I can get the addresses of tun0 for all peers in a dict <hostname,[ip address]>
19:30 Ahrotahntee but I can't do a comma separated list in jinja of the IP addresses, only the fqdns
19:30 Ahrotahntee I was intending on getting a comma separated list of peers into a grain
19:30 SaltyWater ok how stupid does this sound .. have salt update the crontab
19:30 oz_akan joined #salt
19:30 SaltyWater and then let the cron do the job
19:30 SaltyWater :(
19:30 Ahrotahntee and using that grain in the galera config
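Flattening the <hostname,[ip address]> dict and joining gives the comma-separated list; the peer data here is invented, and the same shape in Jinja is a loop plus the `join(',')` filter:

```python
# Hypothetical mine-style data: hostname -> list of tun0 addresses
peers = {
    "galera1": ["10.8.0.1"],
    "galera2": ["10.8.0.2"],
    "galera3": ["10.8.0.3"],
}

# Flatten the per-host lists and join -- e.g. for a galera cluster-address line
cluster = ",".join(ip for ips in peers.values() for ip in ips)
print(cluster)  # → 10.8.0.1,10.8.0.2,10.8.0.3
```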
19:30 SaltyWater cant think of anything else
19:30 holler joined #salt
19:31 nitti_ joined #salt
19:31 holler hello, I have a vagrant box spinning up with salt as provisioner in masterless minion setup... I am able to now get it working to the point where nginx/mysql/django are all installed correct
19:32 holler however now I need to do some configuration stuff automated.. first is I need to create a ~/.ssh/config file that has a HostName set, second is I need to run a fabric command in my /home/vagrant/project folder
19:32 holler how might I accomplish that?
19:32 holler *those
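Both of holler's steps fit in one state file for a masterless run; the HostName value, project path, and fab targets below are placeholders:

```yaml
vagrant-ssh-config:
  file.managed:
    - name: /home/vagrant/.ssh/config
    - user: vagrant
    - group: vagrant
    - mode: 600
    - contents: |
        Host dbserver
            HostName db.example.com
            User deploy

restore-db:
  cmd.run:
    - name: fab db:download db:restore
    - cwd: /home/vagrant/project
    - user: vagrant
    - require:
      - file: vagrant-ssh-config
```

The `require` ensures the ssh config exists before fabric tries to connect.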
19:35 grep_away joined #salt
19:37 catpig joined #salt
19:38 catpigger joined #salt
19:38 MTecknology iggy: I guess it works. It feels long winded, but I'll take it since it works.
19:38 catpig joined #salt
19:39 MTecknology and makes sense, is readable, and does exactly what I want it to
19:41 tcotav holler: you can just run shell scripts from your Vagrantfile
19:41 catpig joined #salt
19:41 tcotav like this:   box.vm.provision :shell, :path => "files/reinit-salt-minion.sh"
19:41 holler tcotav: can you do that from salt too?
19:42 holler just wondering why itd be in vagrantfile vs salt
19:42 Ahrotahntee all in all I absolutely love SaltStack
19:43 nitti joined #salt
19:43 tcotav holler: because I assumed you wanted this to kick off at vm creation...  which may not be true
19:43 h1gglns hello, my minions are running on ec2, so their minion ids are the usual amazon assigned hostnames. is there a command to change a minion id remotely from the salt-master, or do i have to log in to each minion machine and change it manually?
19:44 tcotav holler: you could use shell to make specific 'salt-call --local state.sls <whatever>' too
19:48 Ahrotahntee a-ha solved
19:48 Ahrotahntee TIL .iteritems()
19:49 scbunn joined #salt
19:52 SaltyWater joined #salt
19:52 SaltyWater got it guys
19:52 SaltyWater :)
19:52 jalbretsen joined #salt
19:52 aparsons joined #salt
19:53 babilen hmm?
19:53 cwyse joined #salt
19:53 big_area joined #salt
19:55 SaltyWater called script 2 from script 1
19:56 SaltyWater so simple .. just didn't think of it before ( actually i thought i had ) but i guess i didn't
19:56 SaltyWater now script one returns after firing off script 2
19:58 aquinas_ joined #salt
19:59 bhosmer_ joined #salt
20:01 basepi dstokes: sorry I never replied earlier.  Ideally service.restart should work.  We were having issues on certain distros....maybe RedHat?  Where that wouldn't work
20:01 basepi So some people have been using the `at` command to schedule a `service salt-minion restart` for like 5 minutes later or whatever
20:01 holler tcotav: after my django project is installed via requirements.txt (working), I want to run a fabric command "fab db:download db:restore" that downloads a db dump and restores it
20:01 basepi It's on the list of things we want to get super consistent but haven't had a chance yet.
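The `at` workaround basepi mentions can be expressed as a state; the five-minute delay is arbitrary:

```yaml
schedule-minion-restart:
  cmd.run:
    - name: echo 'service salt-minion restart' | at now + 5 minutes
```

Because `at` detaches the restart from the running job, the minion can finish reporting the state result before its own service goes down.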
20:02 Imtnt joined #salt
20:02 dstokes basepi: totally my fault. i was triggering a restart after blasting the master field in minion config. when the minion came back up it couldn't  connect.
20:02 dstokes so dumb ;)
20:02 basepi Hehe, nice
20:02 basepi Glad you figured it out.  =)
20:02 dstokes basepi: thx for the reply tho
20:02 basepi Yep, np
20:02 Imtnt Hey does anyone here have experience with halite?
20:02 dstokes i think the `at` method is still in the docs. might be worth removing later
20:03 anotherZero joined #salt
20:04 babilen Imtnt: be the first!
20:05 catpig joined #salt
20:05 mechanicalduck_ joined #salt
20:06 KennethWilke joined #salt
20:07 jergerber joined #salt
20:07 quantumriff joined #salt
20:08 Imtnt joined #salt
20:08 quantumriff quick question related to the bash vulnerability.  I have lots of RHEL servers.. I need to mass update them to the newest version.  I know that I can do a "cmd.run 'yum -y update bash'" but that won't take care of machines that get installed tomorrow.
20:09 Imtnt just wondering why it stops working whenever I change the port
20:09 quantumriff is there a way now, to update a single pkg on RH systems? Last time I tried it (over a year ago) it did a full "yum update" and ignored the package names
20:10 dstokes quantumriff: just add pkg.latest (for bash) to the highstate for new machines
20:11 tcotav holler: can't you just cmd.run that?
20:12 quantumriff dstokes: thanks, I had not seen .latest before
20:12 Eugene I'd also suggest `salt '*' cmd.run yum update bash` as a quick fix
20:12 holler tcotav: yes I could only thing is how can I make it happen after my django requirements.txt is installed? http://dpaste.com/2RN2QWF
20:12 holler the watch: part Im not sure of
20:12 dstokes quantumriff: i've got an install & latest stanza in a core state where i define two groups of packages
20:13 dstokes bash is in the latest stanza
20:13 quantumriff yep, its getting added to my default right now
20:13 dstokes don't forget to include refresh: True
20:13 dstokes to the latest state
20:14 dstokes i've been running `salt \* state.sls core.packages` a lot lately ;)
20:15 quantumriff update-pkgs:
20:15 quantumriff pkg.installed:
20:15 quantumriff - refresh: True
20:15 quantumriff - pkgs:
20:15 quantumriff - bash
20:15 quantumriff something like that?
20:15 dstokes also `salt \* cmd.run 'grep ":;" /var/log/{nginx,apache2}/*.log'`
20:15 dstokes quantumriff: perfect
20:15 quantumriff err, except pkg.latest.. darn copy/paste
20:15 dstokes erm..
20:15 quantumriff :)
20:15 dstokes yeah, that
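Putting quantumriff's paste and dstokes' corrections together, the working stanza would be:

```yaml
update-pkgs:
  pkg.latest:
    - refresh: True
    - pkgs:
      - bash
```

Unlike `pkg.installed`, `pkg.latest` upgrades the package on machines that already have it, so this covers existing boxes as well as ones installed tomorrow.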
20:17 perfectsine joined #salt
20:18 to_json joined #salt
20:22 aparsons_ joined #salt
20:23 retrospek joined #salt
20:26 mapu joined #salt
20:28 tcotav holler: you might be able to use require instead of watch
20:30 holler tcotav: I see.. would that be: -watch: -pip: coach-app?
20:34 Supermathie salt -G 'kernel:Linux' pkg.install bash refresh=True
20:34 Supermathie much sexy
20:35 StDiluted jemejones, I just submitted a PR for an updated version of my EC2 tag custom grain. I rewrote it today because I realized it was a bit clunky. It’s better now. Works even if the hostname and the ‘Name’ tag on the instance don’t match now.
20:35 StDiluted woo! that was a quick PR merge :)
20:35 jemejones StDiluted, cool
20:36 StDiluted let me know if you have issues with it if you use it
20:36 smcquay joined #salt
20:36 StDiluted grains.items:
20:36 StDiluted ec2_environment: production
20:36 StDiluted ec2_roles:
20:36 StDiluted app_server
20:36 jemejones will do
20:37 jemejones i'm going through right now and setting certain pillar data depending on the environment
20:37 jemejones and by environment, i don't know if i'm doing it right, but i'm looking at the grain right now of 'environment'
20:37 jemejones which ideall, i'd pull from an ec2 tag....if we go with ec2
20:38 mrlesmithjr joined #salt
20:39 mrlesmithjr joined #salt
20:39 glyf joined #salt
20:39 tcotav holler: try that and see if it does what you want.  I sometimes mix 'em up and end up referencing the docs
20:40 tcotav holler: sorry, probably not the most reassuring statement thar ^ :D
20:41 holler haha no worries... trying it out
20:42 StDiluted jemejones: salt has it’s own idea of environments, like in the top.sls file ‘base’: is an environment
20:42 StDiluted but I found the idea of it confusing and not very intuitive
20:43 jemejones right - so, i'm not exactly using *that*
20:43 * iggy same
20:43 jemejones i'm using grains of 'environment' and 'roles'
20:43 iggy not so much confusing, but it just didn't match the way we work here
20:43 StDiluted so I use ec2 tags and a grain that grabs the tag called ‘Environment’, and then in my top.sls I say ‘ec2_environment:production’: match: grain
20:44 jemejones and i'm doing some matching in the top.sls and keeping things that are specific to those "environments" in the environment directory
20:44 iggy and especially when it came to pillars/git branches/environments... things didn't work the way we really wanted
20:44 StDiluted that looks for a grain called ‘ec2_environment’ with the value ‘production’ and applies the states listed underneath to any machine with that tag matching.
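The grain match StDiluted describes would look like this in top.sls (the state name is illustrative):

```yaml
# top.sls
base:
  'ec2_environment:production':
    - match: grain
    - app_server
```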
20:49 thedodd joined #salt
20:51 smcquay joined #salt
20:56 viq joined #salt
20:56 hasues joined #salt
20:56 viq joined #salt
21:00 perfectsine joined #salt
21:01 Eugene salt-ssh roster; will "host" be matched aginst the executing user's ~/.ssh/config, for Hostname blocks ?
21:01 Eugene Eg, if I need to do a ProxyCommand or similar ridiculousness to reach a given minion
21:04 alainv joined #salt
21:05 ndrei joined #salt
21:06 ndrei_ joined #salt
21:13 thayne joined #salt
21:15 KennethWilke joined #salt
21:15 holler how do I find out what failed when provisioning vagrant with salt? I see 34 success 1 fail
21:15 holler but I cant scroll up far enough to see what failed
21:16 holler (I think I know what failed but I cant see why)
21:20 kballou joined #salt
21:23 tcotav holler: increase your scroll buffer :-D
21:25 big_area_ joined #salt
21:26 TheThing joined #salt
21:27 aparsons joined #salt
21:30 glyf joined #salt
21:31 racooper joined #salt
21:31 Katafalkas joined #salt
21:36 catpig joined #salt
21:37 perfectsine joined #salt
21:37 murrdoc joined #salt
21:37 aquinas_ joined #salt
21:39 holler tcotav: how?
21:39 KyleG joined #salt
21:39 KyleG joined #salt
21:40 holler tcotav: L55 is either running and not working or not running http://dpaste.com/2M0BAEJ#line-55
21:40 holler it no longer has any fails upon provision but the database hasnt been dl'd/restored
21:41 holler Im also wondering if maybe its bc the fab:download command is going to prompt for rsa fingerprint when it tries to ssh to the server for the db dump?
21:41 jms_ joined #salt
21:42 jms_ Question, is there a way to test the output of a jinja sls template without publishing to a minion?
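To jms_'s question: rendering can be checked without applying anything to a minion, e.g. via `state.show_sls`, which renders the jinja/yaml and prints the resulting data structure (state name is a placeholder):

```sh
# Render webserver.sls locally and show the result without running it
salt-call --local state.show_sls webserver
```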
21:46 wendall911 joined #salt
21:47 mgw joined #salt
21:48 fannet jms: are you trying to do just syntax validation?
21:49 * Ahrotahntee sprays help with a fire extinguisher
21:49 [M7] joined #salt
21:50 mgw1 joined #salt
21:50 Ahrotahntee someone say something so I know these ignore settings are working right
21:50 ze- Ahrotahntee: why would we? :P
21:51 Ahrotahntee I just set up my client to filter joins, parts and quits
21:51 Ahrotahntee since it's just noise anyway
21:53 v0rt3xtraz joined #salt
21:56 Ahrotahntee would it be wrong of me to use pillars to designate roles to my nodes?
21:57 iggy the functionality exists
21:57 iggy someone obviously thought it was a worthwhile feature
21:58 StDiluted Ahrotahntee: not wrong, no, but grains seem like a better way to me
21:58 StDiluted grains are better suited to features of individual servers, where pillars are best for secret or static data
21:58 StDiluted that’s how I see it anyway
21:59 Ahrotahntee so I'd be writing my top.sls for my grains to have a block for each server, rather than each role?
22:00 Ahrotahntee and then including other (service) grains on a per-server basis
22:02 utahcon I am struggling with the docs on readthedocs... says they are for the latest, and can't seem to get the docs for 2014.1.7
22:03 utahcon simply need to know if glusterfs support is supposed to be in 2014.1.7 or not
22:03 utahcon it seems it is, but my install doesn't have it :(
22:05 UtahDave utahcon: glusterfs module isn't in 2014.1.7
22:05 UtahDave it will be in 2014.7
22:06 aurynn joined #salt
22:07 joehh joined #salt
22:07 utahcon dang
22:07 utahcon thanks UtahDave
22:07 utahcon more work for me :D
22:08 StDiluted i heart gluster
22:08 utahcon UtahDave: is that being dev by SaltStack or someone else? I noticed the docs are ... slightly inaccurate
22:09 UtahDave probably someone else.  If you have any improvements to the docs that would be really appreciated
22:09 utahcon well, if I get time
22:09 utahcon still in the middle of the move
22:10 UtahDave sure
22:11 v0rt3xtraz Would anybody know if there's any documentation about configuring both linux and mac desktops with a single salt server? I can't seem to find anything anywhere
22:12 utahcon v0rt3xtraz: absolutely doable
22:12 utahcon you just have to remember when building your states to target differences
22:13 utahcon especially in package management
22:16 iggy which you generally have to do just between distros anyway
22:20 murrdoc joined #salt
22:28 v0rt3xtraz utahcon: Would I basically set it up like I would normally with one platform, and then combine the two, or would it be something completely different?
22:29 drawks it's a bit puzzling what return codes the salt command gives back
22:29 drawks and where they come from
22:29 catpig joined #salt
22:30 laxity joined #salt
22:30 kermit joined #salt
22:30 drawks in a module with a call to cmd.run_all it seems that in the case that the commands from cmd.run_all return non-zero salt always exits with 11
22:30 drawks but I don't understand how that happens or what 11 is meant to indicate
22:33 scbunn joined #salt
22:36 drawks yeah... there's some sort of non-obvious magic happening here
22:39 catpig joined #salt
22:39 drawks https://github.com/saltstack/salt/blob/b456fce060b1479abe36a70b65307dccdf4dc0c6/salt/cli/__init__.py#L203-L208
22:42 murrdoc joined #salt
22:45 ndrei joined #salt
22:46 drawks so are functions in modules which call cmd.run meant to pass back something that indicates their return codes or is some sort of reflection the only way to do that?
22:49 younqcass joined #salt
22:51 drawks for instance... salt.modules.foo.bar call salt.modules.cmd.run and that forks a shell that returns non-zero...
22:52 drawks it seems that the salt command returns an 11 when i run my foo.bar module function, does that mean if another module called my function it would /just know/ that it failed?
22:52 vejdmn joined #salt
22:52 drawks or am I meant to be passing explicit pass/fail in my results somehow
22:53 pdayton joined #salt
22:53 drawks cmd.run doesn't return anything except stdout...
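What drawks wants here is `cmd.run_all` rather than `cmd.run`: it returns a dict that includes the retcode. A rough stdlib analogue of that return shape (a sketch, not salt's actual implementation):

```python
import subprocess

def run_all(cmd):
    # Sketch of the dict cmd.run_all returns: stdout, stderr, and the
    # shell's return code, instead of stdout alone like cmd.run
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return {
        "stdout": proc.stdout.rstrip("\n"),
        "stderr": proc.stderr.rstrip("\n"),
        "retcode": proc.returncode,
    }

result = run_all("echo ok; exit 3")
print(result["retcode"])  # → 3
```

A module function that cares about failure can inspect `retcode` itself and decide what to pass back, rather than relying on the CLI's generic non-zero exit.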
22:53 Ahrotahntee I know the claim for pillars is it is an excellent place to store private information specific to that minion. Would it be recommended to keep some like ssl certificate information?
22:55 Ahrotahntee s/some/something/
22:55 iggy Ahrotahntee: that's what we use it for
22:56 Ahrotahntee iggy: but that does mean 1 pillar file per server, right?
22:56 iggy it's probably not perfect, but it's as good as most any other system for deploying that kind of thing
22:56 Ahrotahntee I'll look into including a file based on the minion id
22:56 iggy well... we have * certs that live on multiple servers
22:58 bezeee joined #salt
22:58 Ahrotahntee I use cert authentication for a lot of stuff, each server has its own cert
22:58 iggy the only servers we have that there's only one of are our salt servers
22:59 iggy everything else is in pools
22:59 v0rt3xtraz left #salt
22:59 iggy so * certs are the only thing that really makes sense for us
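The per-server pillar layout Ahrotahntee describes, targeted by minion id in the pillar top.sls (names invented):

```yaml
# /srv/pillar/top.sls
base:
  '*':
    - common
  'web1.example.com':
    - certs.web1
  'web*.example.com':
    - certs.wildcard
```

Wildcard targets cover the pooled servers sharing a * cert, while exact minion ids carry the per-server certs.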
23:08 Ahrotahntee does file.recurse also traverse subdirectories?
23:09 ndrei joined #salt
23:10 Ahrotahntee found it, nevermind
23:10 thayne joined #salt
23:12 aparsons_ joined #salt
23:17 perfectsine joined #salt
23:21 holler how can I install node and then run npm update and npm install?
23:25 iggy there's a npm module
23:25 iggy http://docs.saltstack.com/en/latest/ref/states/all/salt.states.npm.html#module-salt.states.npm
23:25 iggy pretty straightforward
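For holler's node question, a sketch combining the npm state with a package install (the project path is a guess from the earlier discussion; `npm.bootstrap` runs `npm install` in the given directory):

```yaml
nodejs:
  pkg.installed: []

/home/vagrant/project:
  npm.bootstrap:
    - require:
      - pkg: nodejs
```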
23:27 elfixit joined #salt
23:37 glyf joined #salt
23:38 dccc joined #salt
23:38 nebuchadnezzar joined #salt
23:43 n8n joined #salt
23:45 unstable joined #salt
23:45 unstable Salt is freezing on a batch 20, where it just hangs on some boxes. I'm not sure why.
23:45 unstable Any ideas?
23:46 jcockhren unstable: mismatch master and minion versions
23:47 jcockhren my only guess since that's the only time I experience that
23:47 freelock joined #salt
23:53 DaveQB joined #salt
23:55 thayne joined #salt
23:59 Ahrotahntee hmm, file.recurse is rather slow
