
IRC log for #salt, 2014-05-29


All times shown according to UTC.

Time Nick Message
00:00 manfred in develop, if you have an ssh key, you can use the cloud.py runner to bootstrap servers
00:00 manfred __number5__: https://github.com/saltstack-formulas/ec2-autoscale-reactor
00:00 manfred https://github.com/saltstack-formulas/salt-cloud-reactor
00:01 __number5__ I've got a bunch of instances/minions that need to be spun up and destroyed daily, I would like the whole process fully automated
00:01 wt manfred, does the cloud running work in AWS VPC yet?
00:01 manfred use the reactor and the salt-run
00:01 manfred wt: no idea what that is
00:02 manfred but if it works with the ec2 driver, yes
00:02 wt __number5__, user-data is unchangeable and accessible by every process on the host
00:02 manfred because we split out the actual conn.boot() function into a request_instance() function, and if you pass an instance_id through salt-run cloud.create... it won't actually request it be created but will instead ssh to it and just bootstrap it
00:03 manfred wt: the requirements to get it to work... split out the request_instance function like they are in salt/cloud/clouds/{ec2,nova,openstack}.py
00:03 manfred i did nova and openstack cause i use those, techhat did ec2
00:04 manfred and those are the two that are all ready for 2014.2
00:04 __number5__ wt, currently I managed VPC directly via boto
00:04 manfred if you get it so that it can run with that
00:04 wt __number5__, I do too...in fact we have a custom instance launcher for launching scripts with certain conventions in our environment.
00:04 manfred then you can use salt-cloud -F (list_nodes_full), and use the events fired by the salt-cloud cache system to automatically bootstrap servers you created using anything
00:07 sroegner______ joined #salt
00:07 __number5__ yep, we normally have 7 different instances in one VPC, salt-cloud seems to work better if you want to launch hundreds of instances of the same AMI
00:08 wt Is there any reason why scheduled highstate runs would stop running in the minion?
00:15 wt I have the following in my minion config file:
00:15 wt schedule:
00:15 wt highstate:
00:15 wt function: state.highstate
00:15 wt minutes: 15
00:15 wt does that look correct?
00:21 ilbot3 joined #salt
00:21 Topic for #salt is now Welcome to #salt | 2014.1.4 is the latest | SaltStack trainings coming up in SLC/London: http://www.saltstack.com/training | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
00:21 JordanRinke that works perfect, except, it works every time ;)
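wt's config reassembled from the paste above — as a YAML block in the minion config it reads (a sketch of the 2014-era scheduler syntax JordanRinke confirms works):

```yaml
# /etc/salt/minion -- minion-side scheduler block as wt pasted it
schedule:
  highstate:
    function: state.highstate
    minutes: 15        # run state.highstate every 15 minutes
```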
00:21 mosen joined #salt
00:22 JordanRinke gotcha
00:22 wt manfred, FWIW, that method doesn't work well with docker since it will flush the firewall rules.
00:22 JordanRinke so I need to match the file not the long hand
00:22 manfred wt: you keep saying that
00:22 manfred but it works better than matching
00:22 manfred and you have a micro second of the flush...
00:22 mosen hiya salt. Just started using and writing some modules. I'm a convert :)
00:23 manfred JordanRinke: http://paste.gtmanfred.com/953/
00:23 wt manfred, docker inserts its own rules that are not saved to that file
00:23 manfred i blame docker then
00:23 manfred why not include the docker rules then?
00:23 wt manfred, you can blame whatever you want, but there are use cases where that is not convenient.
00:23 JordanRinke manfred: trying it right meow
00:23 wt manfred, however I do think it's a good idea
00:23 manfred sure, but you can't just say... for what it's worth... all of these other things should be used, and then expect us to support them, when it just isn't reasonable
00:24 wt I don't mean to imply it's not
00:24 manfred ok
00:24 manfred i want to smack someone using rhel 6 on docker... just use ubuntu 12.04 until rhel 7 comes out
00:25 wt manfred, I worked around the problem fine. I just figured that if salt was going to have an iptables module, it should at least work.
00:25 wt manfred, I am not using rhel 6 on docker
00:25 manfred it does work, for stuff that isn't old
00:25 wt manfred, truth be told, I don't really care much for docker
00:25 manfred i am a fan of just requiring the check to even allow the iptables module to be used
00:25 JordanRinke wt & manfred: switching jump to j, works perfectly
00:25 JordanRinke thx
00:25 manfred like, if it doesn't have --check, then return False from __virtual__
00:25 manfred JordanRinke: np
00:26 wt JordanRinke, cool
00:26 wt manfred, I think I could support that
00:26 wt at least then it's clear that it's not supported
00:26 manfred i am ok with the limited support right now, as long as it is understood that it is only for simple iptables rules
00:28 wt manfred, word
00:29 aw110f joined #salt
00:30 ashw7n joined #salt
00:35 Joseph joined #salt
00:38 bhosmer_ joined #salt
00:39 oz_akan_ joined #salt
00:39 ipalreadytaken joined #salt
00:40 bhosmer joined #salt
00:46 schmutz joined #salt
00:46 seanz joined #salt
00:51 Joseph_ joined #salt
00:56 fragamus joined #salt
00:56 elfixit joined #salt
00:58 Networkn3rd joined #salt
01:10 redondos joined #salt
01:17 mgw joined #salt
01:17 yano joined #salt
01:19 otter768 joined #salt
01:20 otter768 joined #salt
01:21 jimklo joined #salt
01:25 kermit joined #salt
01:25 xzarth joined #salt
01:25 oz_akan_ joined #salt
01:27 yano joined #salt
01:29 otter768 joined #salt
01:31 n8n joined #salt
01:47 taion809 joined #salt
01:56 logix812 joined #salt
02:03 ckao joined #salt
02:03 tristianc joined #salt
02:04 anuvrat joined #salt
02:04 jimklo joined #salt
02:07 sroegner______ joined #salt
02:11 mgw joined #salt
02:15 brucelee_ joined #salt
02:15 dude051 joined #salt
02:16 thayne joined #salt
02:23 dude051 joined #salt
02:24 dude051 joined #salt
02:26 bhosmer joined #salt
02:27 kermit joined #salt
02:35 thayne joined #salt
02:36 Ryan_Lane joined #salt
02:38 catpigger joined #salt
02:41 jalaziz joined #salt
02:49 dude051 joined #salt
02:50 techdragon joined #salt
02:54 travisfischer joined #salt
02:57 tristianc joined #salt
03:02 neilf_ joined #salt
03:05 manfred this is pretty cool http://blog.trueability.com/2014/03/give-it-a-shot-saltstack/
03:08 aw110f joined #salt
03:12 ipalreadytaken joined #salt
03:19 jalbretsen joined #salt
03:23 n0arch left #salt
03:23 n0arch joined #salt
03:26 dude051 joined #salt
03:35 catpiggest joined #salt
03:38 scooby2 is there a way to get {{ grains['fqdn_ip4'] }} without the [' '] around it?
03:39 Furao grains['fqdn_ip4'].lstrip().rstrip()
03:39 scooby2 awesome thank you
03:39 Furao there should not be any spaces around
03:39 Furao that is a ug
03:39 Furao bug
03:40 fragamus joined #salt
03:40 manfred should be able to do ... grains['fqdn_ip4'][0] ?
03:40 manfred it should just be a list
03:40 ajw0100 joined #salt
03:40 scooby2 its returning ['192.168.100.1']
03:41 scooby2 i'd just like 192.168.100.1
03:41 manfred scooby2: so put a [0]
03:41 manfred to grab the first instance in the list
03:41 manfred since it is just a list
03:41 manfred grains['fqdn_ip4'
03:41 manfred bah
03:41 manfred grains['fqdn_ip4'][0]
03:41 scooby2 trying that
03:42 manfred when converted from a list to a string is when you get the [' ']
03:42 manfred if you had more than one, it would be ['192.168.100.1', '10.10.10.10']
03:42 scooby2 thats it
03:42 manfred yup :)
03:42 scooby2 thanks everyone:)
03:42 scooby2 was banging my head on the wall
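What scooby2 hit, sketched in plain Python: the `fqdn_ip4` grain is a list, so rendering the whole thing in a template stringifies the list (brackets and quotes included), while indexing element 0 yields the bare address:

```python
# fqdn_ip4 is a list of addresses, as manfred describes
grains = {"fqdn_ip4": ["192.168.100.1"]}

# Rendering the whole list (what {{ grains['fqdn_ip4'] }} does)
# stringifies the list, producing the [' '] scooby2 saw
as_template_sees_it = str(grains["fqdn_ip4"])

# Indexing first (what {{ grains['fqdn_ip4'][0] }} does) gives the bare string
first_addr = grains["fqdn_ip4"][0]

# With more than one address, the repr shows why the brackets appear
grains["fqdn_ip4"].append("10.10.10.10")
multi = str(grains["fqdn_ip4"])
```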
03:43 sgviking joined #salt
03:45 fragamus joined #salt
03:49 malinoff joined #salt
03:58 travisfischer joined #salt
04:03 MTecknology joined #salt
04:08 xinkeT joined #salt
04:08 sroegner______ joined #salt
04:14 bhosmer joined #salt
04:15 sunkist joined #salt
04:21 xinkeT joined #salt
04:25 aw110f joined #salt
04:28 ipalreadytaken joined #salt
04:29 wwalker left #salt
04:36 Ryan_Lane joined #salt
04:37 thomaso joined #salt
04:39 ashw7n joined #salt
04:47 brucelee_ joined #salt
04:50 ajw0100 joined #salt
05:03 ipalreadytaken joined #salt
05:05 greyhatpython joined #salt
05:08 n8n joined #salt
05:11 malinoff Hi everybody, could you participate in my poll please? https://docs.google.com/forms/d/1-O3Md8o86jKIW9n5gNZeMHwgC0CWkIQvwjyXe9I2Z3g/viewform
05:13 greyhatpython Done Voted!
05:13 malinoff Thanks!
05:14 Rhomber joined #salt
05:14 Rhomber hey guys
05:14 greyhatpython U welcome!
05:14 Rhomber is there any way to find out why a master can't connect to a minion?.. I have two masters.. one can connect.. the other can't..  :/
05:14 Rhomber identical setup.. synced keys
05:15 Furao left #salt
05:18 Rhomber nvm, works now
05:24 schimmy joined #salt
05:25 mosen joined #salt
05:25 jalaziz joined #salt
05:28 schimmy joined #salt
05:32 mafrosis joined #salt
05:32 mafrosis left #salt
05:32 mafrosis joined #salt
05:32 mafrosis lo
05:33 mafrosis is it possible to add Jinja extensions to the SLS parser?
05:39 whiteinge short answer: not without editing salt code. longer answer: since you can call custom salt modules directly adding Jinja extensions is usually not needed
05:39 mafrosis whiteinge: thanks
05:40 whiteinge there's an open ticket exploring how best to expose extending jinja
05:40 mafrosis all I really want to do is some minor dict building or for loop control
05:40 combusean joined #salt
05:40 mafrosis neither seem possible on vanilla jinja
05:41 whiteinge yeah. definitely whip up a custom module for that. have you gone through that process before?
05:41 mafrosis I’ve built custom salt modules - although I don't really see how they apply here..
05:42 whiteinge the idea is to make a custom module that will return data in a format that is easier to reason about with vanilla jinja
05:42 joehh mafrosis: you can call your module like {{ salt["module.function"](arg) }} and the text output will appear in your file
05:43 joehh if used in other context ie for/if then other data structures may be useful
05:43 mafrosis I see gents, thanks
05:44 mafrosis I’ll have a tinker and see what happens
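The pattern whiteinge and joehh describe — a custom execution module whose return value is easy to loop over with vanilla Jinja — might look like this. Everything here is hypothetical (module name, function, input format); it only illustrates moving the dict-building out of the template.

```python
# _modules/myutil.py -- hypothetical custom execution module.
# In a template you could then write, per joehh's note:
#   {% for name, url in salt['myutil.repo_map'](repos) %}
#   ...
#   {% endfor %}
# and iterate with plain Jinja, no extensions needed.

def repo_map(repositories):
    """Turn a list of 'name=url' strings into (name, url) pairs.

    Doing the parsing here keeps the .sls/template side down to a
    simple for-loop over already-structured data.
    """
    pairs = []
    for entry in repositories:
        name, _, url = entry.partition("=")
        pairs.append((name.strip(), url.strip()))
    return pairs
```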
05:44 mosen is there a best practice way for doing salt module dependency management?
05:45 whiteinge mosen: not really. that's still a greenfield area of exploration
05:46 mosen whiteinge: no problem
05:46 whiteinge i tend to use git submodules when i've got a deadline
05:47 whiteinge there's two community-run projects (that i know of) to make a package manager style wrapper
05:47 mosen I wasnt sure how to structure my github repo
05:47 mosen just going with subdirs of _modules _grains etc
05:49 thayne joined #salt
05:49 whiteinge that's what we do for salt formulas
05:50 brucelee_ joined #salt
05:50 mosen I'm a puppet convert :) salt is pretty nice in that it has orchestration and salt states too
05:51 whiteinge another pattern we use internally if we need to include any master-side config/reactor/etc is: http://paste.fedoraproject.org/105571/34264214
05:51 bhosmer joined #salt
05:51 oz_akan_ joined #salt
05:53 mosen I'm new to both python and salt, so the docs are in RST? i like keeping dev docs around with the modules
05:53 * whiteinge nods
05:54 aw110f joined #salt
05:54 whiteinge about half the docs are in .rst file and half are extracted from the python code
05:54 anuvrat joined #salt
06:00 whiteinge calling it a day
06:00 whiteinge mosen: welcome, btw. glad to have you
06:00 millz0r joined #salt
06:02 bhosmer joined #salt
06:05 picker joined #salt
06:08 mosen whiteinge: thanks very much! ive written a bunch of native osx modules that use pyobjc bindings. I realise thats making the modules horribly unportable
06:09 mosen whiteinge: but salt seemed to be the only non-agentless system, written in python
06:09 sroegner______ joined #salt
06:10 segen joined #salt
06:11 mosen as a PoC i did a module that uses native API to set wallpapers company wide. That's probably a really stupid application of SaltStack, but I wanted to see how everything worked first.
06:12 malinoff mosen, salt allows you to change wallpapers periodically, like cron :)
06:13 mosen malinoff: hehe, I'm not worried about wallpaper automation that much!
06:13 malinoff mosen, SaltStack - the first tool for wallpaper automation - sounds great!
06:14 mosen haha NO!
06:14 malinoff :D
06:16 mosen forget the wallpaper thing
06:16 segen left #salt
06:16 malinoff just joking, don't worry :)
06:17 segen joined #salt
06:18 mosen it is pretty funny to execute changes in scaling on a whole group of machines though
06:19 mosen just scaling up wallpapers for the whole place hehe :)
06:27 mafrosis joined #salt
06:31 segen joined #salt
06:36 segen left #salt
06:36 taterbase joined #salt
06:37 TyrfingMjolnir joined #salt
06:41 segen joined #salt
06:42 segen left #salt
06:43 segen joined #salt
06:46 segen left #salt
06:46 ipalreadytaken joined #salt
06:47 ggoZ joined #salt
06:49 gildegoma joined #salt
06:52 oz_akan_ joined #salt
06:59 thomaso joined #salt
07:08 thomaso joined #salt
07:18 ajw0100 joined #salt
07:20 slav0nic joined #salt
07:20 slav0nic joined #salt
07:21 oz_akan_ joined #salt
07:23 ashw7n joined #salt
07:24 oz_akan_ joined #salt
07:26 thomaso joined #salt
07:35 segen joined #salt
07:35 segen left #salt
07:39 ashw7n joined #salt
07:48 ramteid joined #salt
07:50 bhosmer joined #salt
07:52 seanz joined #salt
07:57 ramteid joined #salt
08:09 thomaso joined #salt
08:10 sroegner______ joined #salt
08:19 linjan joined #salt
08:20 Kenzor joined #salt
08:24 oz_akan_ joined #salt
08:27 krow joined #salt
08:39 fsniper1 joined #salt
08:40 fsniper1 Hello
08:41 orbit_darren joined #salt
08:41 fsniper1 is it possible to return an exception in a custom module? Salt only informs me about "module threw an exception %(name)"
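One way to get a readable error out of a custom module, sketched below. In salt itself you would raise `salt.exceptions.CommandExecutionError`; a stand-in class is defined here so the example is self-contained, and the function name is hypothetical.

```python
class CommandExecutionError(Exception):
    """Stand-in for salt.exceptions.CommandExecutionError."""

def frobnicate(target):
    """Hypothetical module function that fails loudly with a message.

    Raising an exception carrying a descriptive message (rather than
    letting a bare traceback escape) lets salt report more than
    'module threw an exception'.
    """
    if not target:
        raise CommandExecutionError("frobnicate: no target given")
    return {"target": target, "result": True}
```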
08:42 chiui joined #salt
08:45 ashw7n joined #salt
09:00 hachaboob joined #salt
09:00 ghartz_ joined #salt
09:02 hachaboob I assume the vsphere driver for salt-cloud is not in release 2014.1.4?
09:05 hachaboob Anyone usings salt to provision vms on esxi?
09:15 thayne joined #salt
09:17 ml_1 joined #salt
09:18 CeBe joined #salt
09:21 martoss joined #salt
09:22 martoss1 joined #salt
09:22 martoss2 joined #salt
09:23 martoss2 left #salt
09:25 TyrfingMjolnir joined #salt
09:31 rdorgueil joined #salt
09:33 giantlock joined #salt
09:37 lynxman joined #salt
09:39 bhosmer joined #salt
09:41 fneves joined #salt
09:43 fneves Hi all.
09:43 fneves I am trying to verify that my deployment done with salt is successful
09:44 fneves I wanted something that would check the availability of a Url
09:44 fneves if it was successful my state would be successful
09:45 fneves Not sure what would be the most appropriate way of doing this...custom generic state? custom module?
09:45 fneves is this a wrong concept for the usage of salt?
09:46 ashw7n joined #salt
09:48 mrchrisadams joined #salt
09:55 bhosmer joined #salt
09:57 CeBe fneves: salt already tells you if state has been applied successfully...
09:58 CeBe to check something explicitly you can run a command using the cmd state and let it fail with exit code != 0 if the assertion fails
10:00 otter768_ joined #salt
10:07 pwistrand joined #salt
10:09 pwistran_ joined #salt
10:10 jacksontj joined #salt
10:11 jesusaurus joined #salt
10:11 sroegner______ joined #salt
10:14 fneves my idea was to develop a state/module that you could call from anywhere that would verify if a url is alive (response code 200) or if a log file contains a certain string, etc...
10:14 fneves I know this can be done by creating a script that does those operations and call that from salt
10:15 fneves but, for readability purposes, shouldn't I develop something that could be packaged with salt?
10:16 ahale joined #salt
10:19 babilen fneves: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.nagios.html + https://hveem.no/salt-icinga-nrpe-replacement might provide something to mull over too.
10:20 ilako joined #salt
10:26 oz_akan_ joined #salt
10:27 fneves Definitely nagios module is something to look for. Thanks a lot! I might have to do some simpler module for simpler cases
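CeBe's suggestion as a state sketch — `curl -f` exits non-zero on HTTP error codes, so the cmd state fails unless the URL answers successfully. The state id and URL here are placeholders:

```yaml
verify-deployment-url:
  cmd.run:
    - name: curl -sf http://localhost:8080/health
```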
10:31 londo_ joined #salt
10:38 oz_akan_ joined #salt
10:39 anuvrat joined #salt
10:43 mrchrisadams hi all, I'm looking for some information about what happens when minions lose their connection to a salt master, and I couldn't find anything in the docs search
10:44 mrchrisadams does anyone here have any links to a blogpost or page explaining what happens when minions try to reconnect?
10:47 ashw7n joined #salt
10:48 CeBe joined #salt
10:50 ashb Okay file.directory clean=true has the deps backwards :/
10:54 taion809 joined #salt
11:05 martoss joined #salt
11:09 tristianc joined #salt
11:09 bhosmer joined #salt
11:12 scalability-junk joined #salt
11:25 pwistrand joined #salt
11:27 ipmb joined #salt
11:27 bhosmer joined #salt
11:28 jrdx joined #salt
11:35 _withnale joined #salt
11:36 _withnale Anyone know the correct way to use ‘clean: True’ on a directory? the logic seems all screwed and doesn’t allow for an idempotent salt run
11:37 _withnale The documentation seems to suggest that the directory must require all the files within it, which is pretty warped since the files depend on the directory being present.
11:39 oz_akan_ joined #salt
11:41 martoss joined #salt
11:47 ashw7n joined #salt
11:50 pdayton joined #salt
11:56 segen joined #salt
11:59 thart_ joined #salt
12:01 thart_ is there a way to use salt-ssh to copy a file from local (has salt) to remote (no salt)?
12:03 fsniper thart_: use salt-cp
12:04 fsniper thart_: oh sorry you do not have minions
12:04 thart_ Yea sucks...
12:05 jslatts joined #salt
12:05 fsniper thart_: did you try salt-ssh '*' cp.get_file 'salt://xxx' '/tmp/xxx'
12:06 scalability-junk joined #salt
12:07 thart_ I thought that would copy from remote to local. Like minion to master.
12:07 cetex joined #salt
12:07 fsniper thart_: no it's master to minion
12:07 thart_ Thanks... got that backwards...
12:11 sroegner______ joined #salt
12:13 martoss joined #salt
12:14 elfixit joined #salt
12:16 madduck joined #salt
12:22 pdayton joined #salt
12:25 faeroe joined #salt
12:30 sgviking joined #salt
12:34 faeroe joined #salt
12:34 to_json joined #salt
12:35 mgw joined #salt
12:41 londo__ joined #salt
12:41 Koala_ joined #salt
12:43 faeroe joined #salt
12:43 jaimed joined #salt
12:48 ashw7n joined #salt
12:57 cetex left #salt
12:58 miqui joined #salt
12:59 faeroe joined #salt
12:59 oz_akan_ joined #salt
13:04 faeroe joined #salt
13:06 faeroe joined #salt
13:07 resmike joined #salt
13:08 _withnale Anyone know the correct way to use ‘clean: True’ on a directory? the logic seems all screwed and doesn’t allow for an idempotent salt run
13:11 racooper joined #salt
13:11 cpowell joined #salt
13:12 pwistrand joined #salt
13:12 ajolo joined #salt
13:15 patrek joined #salt
13:15 XenophonF joined #salt
13:15 bhosmer joined #salt
13:21 jaycedars joined #salt
13:21 sroegner______ joined #salt
13:22 bhosmer joined #salt
13:22 MTecknology _withnale: what do you mean? clean will cause all files that aren't managed by that state to be removed (you can remove a file from the source and it'll be removed in the destination)
13:22 zooz joined #salt
13:22 bhosmer_ joined #salt
13:25 _withnale yes. but how do you make sure that when you do an additional salt-call that the server treats this as idempotent and doesn’t flag these files as changes…
13:26 dude051 joined #salt
13:26 _withnale in the documentation, it says that any files that exist in the directory should be listed in a require block for the directory.
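The docs pattern _withnale is quoting, in sketch form: with `clean: True`, files managed inside the directory are listed in the directory's `require` so the clean pass treats them as managed and leaves them alone. Paths here are hypothetical:

```yaml
/etc/myapp:
  file.directory:
    - clean: True
    - require:
      - file: /etc/myapp/app.conf   # required files survive the clean

/etc/myapp/app.conf:
  file.managed:
    - source: salt://myapp/app.conf
```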
13:28 spiette joined #salt
13:28 mapu joined #salt
13:29 smcquay joined #salt
13:29 sroegner_______ joined #salt
13:31 resmike joined #salt
13:31 brucelee_ joined #salt
13:31 Kotov joined #salt
13:31 wendall911 joined #salt
13:32 Kotov joined #salt
13:32 wendall911 joined #salt
13:34 GradysGhost joined #salt
13:39 mpanetta joined #salt
13:44 pwistrand joined #salt
13:45 rmnuvg joined #salt
13:45 _withnale left #salt
13:47 kermit joined #salt
13:50 eberg joined #salt
13:50 eberg1 joined #salt
13:52 jaycedars joined #salt
13:53 seanz left #salt
13:55 sweetbacon joined #salt
13:58 stritz joined #salt
13:59 fsniper left #salt
14:01 ekristen joined #salt
14:04 dude051 joined #salt
14:07 danielbachhuber joined #salt
14:09 eberg1 joined #salt
14:09 eberg joined #salt
14:09 ashw7n joined #salt
14:11 rebelshrug joined #salt
14:13 eberg1 joined #salt
14:13 eberg joined #salt
14:20 mpanetta joined #salt
14:23 diegows joined #salt
14:24 techdragon joined #salt
14:29 eberg1 joined #salt
14:29 eberg joined #salt
14:32 oz_akan_ joined #salt
14:35 Sypher joined #salt
14:38 tyler-baker joined #salt
14:38 techdragon joined #salt
14:38 otter768 joined #salt
14:44 jeremyBass joined #salt
14:44 vejdmn joined #salt
14:45 jalbretsen joined #salt
14:46 alunduil joined #salt
14:47 mateoconfeugo joined #salt
14:50 stevednd should I expect any problems if I create a custom state module, and import and call a method from an existing salt state module?
14:51 tyler-baker left #salt
14:51 thedodd joined #salt
14:54 tyler-baker_ joined #salt
14:56 eriko joined #salt
14:59 canta joined #salt
15:00 pentabular joined #salt
15:05 budrose joined #salt
15:08 tyler-baker joined #salt
15:08 krow joined #salt
15:11 kermit joined #salt
15:12 krow joined #salt
15:13 krow joined #salt
15:16 ashb joined #salt
15:21 eberg1 joined #salt
15:21 eberg joined #salt
15:21 eberg1 left #salt
15:21 eberg left #salt
15:23 jergerber joined #salt
15:27 bigl0af joined #salt
15:29 ipalreadytaken joined #salt
15:38 tyler-baker_ joined #salt
15:38 nineteeneightd joined #salt
15:39 jgarr my minions are saying the salt master has the pub keys already. I deleted them from /etc/salt/pki/master/minions/{minion} and also set the master to open mode but it still gives the same error
15:39 jgarr something else I missed?
15:40 taterbase joined #salt
15:41 dude051 joined #salt
15:44 travisfischer joined #salt
15:44 combusean joined #salt
15:46 saru18 joined #salt
15:47 eliasp who's in charge of saltstack.com/saltstack-events/ ? you might want to include timezones as well… 10:00am - 11:00am … that's 17:00 - 18:00 here ;)
15:48 saru18 hi all, I would like to ask you how to properly pass variables from sls state file to a jinja template, let's have the following: http://pastebin.com/igcx2Lwx
15:49 tyler-baker_ joined #salt
15:49 saru18 the template file is http://pastebin.com/1EamJzQz
15:50 mateoconfeugo joined #salt
15:50 saru18 the point is that I can reference simple variables like gitfs_root or gitfs_base
15:51 saru18 but I can't do it with pillar['sr_nested']['configuration-management']['git']['repositories'] which is basically a list of some string values
15:52 saru18 so I can't loop through the variable git_repositories as I wish
15:52 jimklo joined #salt
15:52 saru18 if I try to put {{ git_repositories }} in the for loop it ends with a syntax error
15:54 timoguin saru18: i think the pillar variables need to be wrapped in {{ }} when passing the context
15:54 timoguin but the pillar values should be accessible to the template as well
15:55 saru18 yea, I would like to keep vars together
15:56 frasergraham joined #salt
15:56 saru18 you are great, the {{ }} in the sls state file helped; I would say it's more about jinja syntax :-/, I need to read about it again
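timoguin's fix in sketch form: when building a template's `context` in the .sls, pillar lookups have to be wrapped in `{{ }}` so Jinja renders them first. The pillar keys are the ones saru18 mentions; the file paths are hypothetical:

```yaml
{% set repos = salt['pillar.get']('sr_nested:configuration-management:git:repositories', []) %}

/etc/myapp/git.conf:
  file.managed:
    - source: salt://templates/git.conf.jinja
    - template: jinja
    - context:
        gitfs_root: {{ pillar['gitfs_root'] }}
        git_repositories: {{ repos }}
```

Inside the template, `{% for repo in git_repositories %}` then works with vanilla Jinja.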
15:56 jheintz joined #salt
15:57 Gareth morning
15:57 jheintz morning
15:58 icarus joined #salt
15:58 jgarr morning
16:00 ashw7n joined #salt
16:00 hunter_ joined #salt
16:02 jimklo joined #salt
16:02 ashw7n joined #salt
16:05 tligda joined #salt
16:07 tyler-baker joined #salt
16:14 XenophonF hey all, if I have a file.copy state that watches another state, will the file.copy state re-copy the file (without having to set force: True) if the watched state changes?
16:14 XenophonF you know what - never mind
16:15 cwyse joined #salt
16:15 XenophonF i'm just going to manage the file instead
16:15 XenophonF simpler
16:15 hunter_ joined #salt
16:15 sgviking joined #salt
16:18 sroegner________ joined #salt
16:18 CeBe joined #salt
16:23 KyleG joined #salt
16:23 KyleG joined #salt
16:23 forrest joined #salt
16:26 ggoZ joined #salt
16:31 chrisjones joined #salt
16:33 anuvrat joined #salt
16:36 joehillen joined #salt
16:38 sroegner________ joined #salt
16:38 n8n joined #salt
16:40 ipmb joined #salt
16:43 resmike joined #salt
16:43 notbmatt joined #salt
16:43 notbmatt greetings earthlings!
16:43 notbmatt is there an elegant way to make file.managed.contents emit a YAML object (say, a pillar dict) without any sort of intermediate interpretation?
16:43 travisfischer joined #salt
16:44 notbmatt say I want to write the master pillar dict into a file as YAML (because contrived example)
16:44 troyready joined #salt
16:45 ipalreadytaken joined #salt
16:45 notbmatt oh hey look at this, there's contents_pillar
16:46 notbmatt oh, wait, that won't work
16:46 XenophonF won't pillar.get work for you?
16:47 XenophonF notbmatt: I have a bunch of entries like "source: {{ salt['pillar.get']('apache:modssl_conf', 'salt://apache/modssl.conf.jinja') }}" scattered through my configs
16:47 ar left #salt
16:48 XenophonF the idea being that i can selectively override that config file if the standard one doesn't work for me
16:48 tcotav joined #salt
16:52 AdamSewell joined #salt
16:52 AdamSewell joined #salt
16:52 redondos joined #salt
16:52 redondos joined #salt
16:55 notbmatt right on XenophonF, you win an Internet
16:55 dude051 joined #salt
16:55 XenophonF an Internet?! ooh i always wanted one!
16:55 notbmatt pillar.get solves a related issue
16:56 notbmatt still not quite sure how to say "write this pillar dict to disk", but you've gotten me closer :)
16:56 XenophonF I'm going to love him and hug him and squeeze him and hold him and call him George.
16:56 dude051 joined #salt
16:56 hunter_ joined #salt
16:56 XenophonF so here's a related pillar question
16:57 mgw joined #salt
16:57 XenophonF I have a jinja template that defines a dict - the dict describes which mod_security rulesets I'm enabling
16:57 XenophonF so it looks like {'base_rules': ['modsecurity_crs_20_protocol_violations.conf', ...], ...}
16:58 XenophonF would it be possible to pull that data from a pillar?
16:58 XenophonF i'd like to be able to override the list of activated rules on a per-host basis
17:00 schimmy joined #salt
17:01 anuvrat joined #salt
17:01 kermit joined #salt
17:01 notbmatt ooh, looks like I can maybe abuse: contents: {{ pprint(salt['pillar.get']('foodict', merge=bardict)) }}
17:02 smcquay joined #salt
17:02 schimmy1 joined #salt
17:03 Networkn3rd joined #salt
17:04 XenophonF hm, i might be able to use salt.utils.dictupdate.update
17:07 hunter_ joined #salt
17:11 allanparsons joined #salt
17:12 rgarcia_ joined #salt
17:12 allanparsons with service.running....
17:12 aw110f joined #salt
17:12 allanparsons has restart been renamed to full_restart?
17:12 allanparsons see docs: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.service.html
17:15 smcquay joined #salt
17:20 anuvrat joined #salt
17:23 travisfischer joined #salt
17:26 Ryan_Lane joined #salt
17:29 ipalreadytaken joined #salt
17:32 schmutz joined #salt
17:34 joehillen joined #salt
17:35 cruatta joined #salt
17:36 druonysuse joined #salt
17:36 druonysuse joined #salt
17:36 combusean joined #salt
17:37 stanchan joined #salt
17:38 rglen joined #salt
17:39 jeremyBass joined #salt
17:40 Theo-SLC joined #salt
17:45 eculver joined #salt
17:46 eculver joined #salt
17:46 bhosmer joined #salt
17:48 anuvrat joined #salt
17:49 g4rlic joined #salt
17:50 g4rlic question for the Salt community: I wrote a relatively simple grain to help me determine if a system is a laptop or not.  All it does is say "yes or no" to the question, "does this system have a battery attached?"
17:50 icarus joined #salt
17:50 g4rlic I would like to contribute it to the Salt team, but I don't know where I should fork/pull-request from.
17:50 g4rlic salt-contrib looks like the best idea so far.  Ideas?
17:50 Ryan_Lane g4rlic: fork the salt repo
17:50 Ryan_Lane ah
17:50 Ryan_Lane g4rlic: it depends
17:51 Ryan_Lane are you going to contribute salt code, or formulas/states/etc?
17:51 ipmb joined #salt
17:51 g4rlic Grain code. ;)  Right now, it's an independent grain in /srv/salt/_grains/
17:52 g4rlic I'm not sure if it's worth integrating into the core grains just yet.
17:52 allanparsons joined #salt
17:52 g4rlic That's why salt-contrib looked appealing, there's already a couple of stand-alone grains, just like mine.
17:54 bhosmer joined #salt
17:55 Joseph joined #salt
17:55 g4rlic At the same time, I wanted to make sure this contribution gets to where its needed.  If I need to wedge it into grains/core, I can do that, but it'll take a while to test.  (Current version tests successfully)
17:57 ghanima joined #salt
17:57 g4rlic Ah, according to the README.md, it looks like salt-contrib is where to start.  derp, RTFM.
17:58 ghanima hey guys
17:58 ghanima not sure how to ask this so here goes
17:59 ghanima so I am doing a cmd.run against multiple nodes on a init script.... for some reason the salt command is still waiting for the command to finish executing however when I go to the box the command has fully started
17:59 savvy-lizard joined #salt
17:59 Joseph is this in a state file or just a reg "salt call"
17:59 ghanima I am looking at the init script and it looks like a standard RH init script using the daemon function to start the process
17:59 ghanima Joseph: salt call
18:00 Joseph can you paste the exact c ommand line?
18:00 peters-tx ghanima: I've seen that happen before; any reason you aren't using service.start
18:00 ghanima sudo salt -C 'G@thor_role:path_processor and *.x.x.x.com' cmd.run "/etc/init.d/halo start"
18:01 jimklo Q: I've got 2 rpms that I'm trying to install using pkg.installed with a sources list... they seem to fail to install because either the rpm is missing the signature or missing the signing key... http://pastebin.com/B3FdnyYu  How would I import the signing key via salt or tell it to ignore signing?
18:01 ghanima peters-tx: no particular reason I always thought there was really no difference between service and executing the init script directly
18:01 aw110f is file.prepend available in 2014.1.3 ? https://github.com/saltstack/salt/issues/11661
18:02 linjan_ joined #salt
18:03 g4rlic Ryan_Lane: https://github.com/saltstack/salt-contrib/pull/77
18:03 ghanima any thoughts
18:03 g4rlic Any feedback appreciated.
18:03 ghanima please note this is a java application being started with the init script
18:03 peters-tx ghanima: I'm pretty sure that if you use service.start, you won't run into any pipe hang-up issues with init scripts which is what I think you're running into
18:03 to_json joined #salt
18:03 peters-tx Well, at least I have run into a few times myself
18:03 peters-tx ghanima: It is worth experimenting with
18:03 ghanima peters-tx: let me give it a try :)
18:04 peters-tx ghanima: They say to try and "do everything" through Salt, so using the service. module is probably one more step in that direction
18:04 Joseph +1 for peters-tx
18:05 Joseph cmd.run should usually be your last option since it is for all intents and purposes shelling out
18:05 ashw7n joined #salt
18:05 Joseph sometimes you need to, but it poses problems which the salt interfaces address out of the box
18:06 catpigger joined #salt
18:06 peters-tx ghanima: "no particular reason I always thought there was really no difference between service and executing the init script directly" -- this is true on the host itself
18:06 allanparsons argh
18:06 allanparsons i have a unicorn process
18:07 allanparsons and it doesnt get restarted
18:07 allanparsons when using restart: True
18:07 ghanima peters-tx: ok so I just tried using service and no dice still the same behavior the process starts but it doesn't return from the salt call
18:08 Ryan_Lane g4rlic: default ubuntu cloud images don't have acpi, for instance
18:08 saltybacon joined #salt
18:08 u809xj0 joined #salt
18:08 Ryan_Lane g4rlic: so, using log.warn there (if this was a core grain) would result in every cloud user's logs quickly filling up :)
18:08 ghanima Joseph: See my response to peters-tx
18:08 bhosmer joined #salt
18:09 bhosmer_ joined #salt
18:09 g4rlic Ryan_Lane: CentOS 5 and 6 don't have it by default, and don't ship it either.
18:09 pentabular joined #salt
18:09 g4rlic I didn't want to rely on it, but I didn't want to reimplement it either.
18:10 g4rlic the ACPI subsystem had fairly major changes at one point, and having to re-do both in Python seemed unwise.
18:10 * Ryan_Lane nods
18:10 Ryan_Lane I'd say to document the requirement for acpi, rather than sending a log warning
18:10 peters-tx ghanima: Are you targetting only 1 system?
18:10 anuvrat joined #salt
18:10 Ryan_Lane I don't need to know about batteries on cloud services, for instance ;)
18:11 g4rlic Ryan_Lane: Works for me.  I took my cue from that from grains/core.py, which relies on dmidecode, etc.
18:11 g4rlic s/from that/for that/g
18:12 ghanima peters-tx: yes my test was only against one system
18:12 Joseph ghanima: what does the salt-run jobs command return on this job
18:12 Joseph can you lookup the jid?
18:13 ghanima Joseph: checking
18:13 ser_rhaegar joined #salt
18:14 krow joined #salt
18:14 ashw7n joined #salt
18:15 Sacro joined #salt
18:15 ashw7n joined #salt
18:16 ghanima Joseph: here you go http://www.pastebin.ca/2783393
18:16 g4rlic Ryan_Lane: CentOS 5.10 and CentOS 6.5 don't even provide it.. :(  (I'm testing this in Fedora, which does)  Yikes.
18:16 Ryan_Lane heh
18:17 Joseph salt.run jobs.active
18:17 Joseph what does that show?
18:17 g4rlic It's a simple client, easy enough for me to repackage for CentOS, but how many people run CentOS on a laptop these days?
18:17 ashw7n joined #salt
18:17 ghanima Joseph: that was the output from jobs.active
18:18 Joseph ahhh
18:18 jimklo how do you tell Salt to use a specific version of Python?
18:18 Joseph so you didnt "salt-run jobs.lookup_jid"
18:18 peters-tx ghanima: What is the longest you have waited for the command to return?
18:18 manfred jimklo: you install it with that version of python
18:18 ghanima peters-tx: no more than 30 seconds at most
18:19 g4rlic anyway, thanks for the feedback, I'll get that change pushed up.
18:19 peters-tx ghanima: Does it *never* return?  Or, does it just take a while
18:19 jimklo manfred: I installed via yum package
18:19 ghanima Joseph: no I didn't but I can
18:19 ghanima peters-tx: so far it never returns
18:19 manfred jimklo: then you can't, you have to use the version that the yum package was packaged with
18:19 ghanima its been over 8 min
18:19 XenophonF left #salt
18:20 g4rlic done.
18:20 peters-tx ghanima: If you check ps on the remote system is it running at all?  from salt-minion?
18:20 peters-tx ghanima: Or is this only a salt-master thing
18:20 ashw7n joined #salt
18:21 ghanima peters-tx: yes I can confirm from the minion that the process has started and is actively running its not in any wait state
18:21 ghanima peters-tx: when I say I am waiting I am waiting for the salt call to finish its execution
18:21 stevednd should I expect any problems if I create a custom state module, and import and call a method from an existing salt state module?
18:21 ghanima peters-tx: but  from what I can tell the minion executed this successfully
18:22 peters-tx ghanima: tail -f /var/log/salt/minion
18:22 jimklo manfred: ok... thanks... actually that might not be my problem now that I see... it's using at 2.6..
18:24 ghanima Joseph here is the jobs.lookup_jid output: http://www.pastebin.ca/2783402
18:24 ashw7n joined #salt
18:27 ashw7n_ joined #salt
18:28 ghanima peters-tx: that's really odd there is nothing in the /var/log/salt/minion showing my attempt to execute.... BUT I AM 1000% positive that the command did execute from my salt call.. I am the only one using this system
18:29 Joseph ghanima: youi see the same behavior in a "salt-call" on the minion as you do when you execute salt on the master?
18:29 ashw7n joined #salt
18:31 joehillen joined #salt
18:34 ghanima Joseph: so this time I see the statement Executing command '/sbin/service halo start' in directory '/root'
18:34 ghanima but that's it nothing else
18:34 ghanima however the salt-call is still running
18:34 wt joined #salt
18:36 ghanima Joseph: nothing else is in the log and its been 2 min that the salt-call(from the minion) is still executing and not exiting I can confirm the process is fully up
18:36 wt Is there any way to track which pillar file is causing an exception?
18:37 wt e.g.     2014-05-23 23:26:55,982 [salt.minion                                 ][WARNING ] TypeError encountered executing pillar.get: get() takes at least 1 argument (0 given). See debug log for more info.  Possibly a missing arguments issue:  ArgSpec(args=['key', 'default'], varargs=None, keywords=None, defaults=('',))
18:37 wt Traceback (most recent call last):
18:37 wt File "/usr/lib/python2.6/site-packages/salt/minion.py", line 793, in _thread_return
18:37 wt return_data = func(*args, **kwargs)
18:37 wt TypeError: get() takes at least 1 argument (0 given)
18:37 wt err... which state file, I mean
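The TypeError in the traceback above is what an empty pillar.get call produces when rendered. A minimal illustration of the failing versus working Jinja (the key name and default below are hypothetical):

```jinja
{# Fails with "get() takes at least 1 argument (0 given)": #}
{# {{ salt['pillar.get']() }} #}

{# Works: pass the key, plus an optional default for when the pillar
   value is absent: #}
{{ salt['pillar.get']('app:listen_port', 8080) }}
```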
18:38 Joseph ghanima: if you run salt-minion in debug mode in foreground
18:38 Joseph what do you see if you run the call
18:39 wt Joseph, that's a good idea. I'll try that really quickly.
18:41 wt No WARNING logs.
18:43 jalaziz joined #salt
18:45 tligda joined #salt
18:46 wt 2014-05-29 18:03:04,503 [salt.fileserver                             ][WARNING ] Found invalid hash file [blah.lk] when attempting to reap cache directory.
18:46 wt I see a lot of this in my log. Is that normal, or does it actually indicate something bad?
18:47 jheintz left #salt
18:51 rgarcia_ joined #salt
18:51 to_json joined #salt
18:55 aw110f Hi, when calling a custom grain in my init.sls using file.replace env={{ salt['grains.get']('CLUSTER') }}, the string replacement ends up as env=['qasqi']
18:55 ghanima Joseph: sorry had to step away I am testing right now
18:56 anuvrat joined #salt
18:56 Joseph wt: i'd stop minion, master, clean out the cache in /var/cache/salt
18:56 Joseph and then start up the servers again
18:57 aw110f How do i make it so that the replace string is a value and not a list, without the brackets and quotes
18:57 hunter_ joined #salt
18:57 wt Joseph, just rm -rf /var/cache/salt/*?
18:57 wt on minions and masters?
18:57 wt aw110f, is your grain a list?
18:58 UtahDave joined #salt
18:58 aw110f yes
18:58 wt aw110f, so, do you want the 0th element?
18:58 aw110f yes
18:58 tligda aw110f: env={{ salt['grains.get']('CLUSTER')[0] }}
18:59 wt yep
18:59 aw110f my list happens to be a single element
18:59 travisfischer joined #salt
18:59 wt aw110f, what do you want to do in the case where there is more than one element
18:59 wt ?
18:59 hunter_ joined #salt
19:01 aw110f {{ join(salt['grains.get']('CLUSTER') }}
19:01 aw110f ?
19:01 wt What are you joining with?
19:02 wt For example, comma joined is like this: {{ ','.join(salt['grains.get']('CLUSTER')) }}
19:02 wt join is a method of string
19:02 notbmatt er why not {{ ('foo', 'bar')|join(',') }} ?
19:03 wt aw110f, what notbmatt said
19:03 wt that's the jinja way
19:03 aw110f cool ok thanks guys, I'll try the jinja way
19:03 wt so, {{ salt['grains.get']('CLUSTER') | join(',') }}
19:04 wt aw110f, http://jinja.pocoo.org/docs/templates/#filters
19:05 micah_chatt joined #salt
19:06 aw110f thanks wt:
19:09 wt aw110f, I would make sure your sls files render properly when that grain is absent also.
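Putting the thread's suggestions together, a hypothetical state fragment (the file path, pattern, and grain name are illustrative) that joins a list-valued grain with the Jinja `join` filter and still renders when the grain is absent:

```yaml
set_cluster_env:
  file.replace:
    - name: /etc/app/environment
    - pattern: '^env=.*'
    # Default to an empty list so rendering succeeds without the grain.
    - repl: "env={{ salt['grains.get']('CLUSTER', []) | join(',') }}"
```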
19:12 Joseph wt: i'd do both
19:13 CeBe joined #salt
19:14 anuvrat joined #salt
19:14 wt Joseph, word
19:16 dude051 joined #salt
19:19 hunter_ joined #salt
19:21 ashw7n joined #salt
19:22 quickdry21 joined #salt
19:24 krow joined #salt
19:28 hunter_ joined #salt
19:34 ghanima Joseph: output from salt-call: http://paste.ubuntu.com/7546206/
19:35 Joseph so in the case of a salt-call on minion the logs seem to indicate that its working, correct?
19:35 chrisjones joined #salt
19:38 ghanima Joseph: just confirm though the salt-call on the minion still does not return the prompt
19:38 ghanima the salt-call command is still running
19:38 Joseph ohh
19:38 Joseph you are saying the  Executing command '/sbin/service halo start'
19:38 Joseph pops up in the logs repeatedly?
19:39 ghanima Joseph no it only appears once in the log
19:39 Joseph so what makes you think it still running
19:39 ghanima so I have two shells open
19:39 Joseph the log snippet you showed me seems to indicate that it had completed
19:40 ghanima I am tailing the log in one
19:40 Joseph oh ps?
19:40 ghanima and running salt-call in the other
19:40 Joseph does ps -ef show that its running the start command?
19:40 ghanima the other shell the salt-call is still running because it hasn't exit back to the prompt
19:40 m1crofarmer joined #salt
19:40 ghanima when doing a ps you do still see it running in the process tree
19:40 Joseph ps doesn't lie
19:40 Joseph hmmm
19:40 Joseph well at least it's never lied to me yet :)
19:41 Joseph and this happens with any service you try
19:41 Joseph such as ntpd or apache web server?
19:41 m1crofarmer_ joined #salt
19:41 saltybacon howdy, am using salt-cloud, when I bootstrap my instance it does not mount my ephemeral
19:42 ghanima Joseph: no I have tried other services and it returns within seconds
19:42 ghanima Joseph: here is the init script if your curious: http://paste.ubuntu.com/7546252/
19:42 Joseph reading
19:43 Joseph is this on red hat?
19:44 saltybacon any ideas?
19:45 Joseph saltybacon....are you on openstack?
19:45 forrest lol openstack
19:45 forrest I wish I had a dollar for every new problem people encountered on openstack
19:45 Joseph forrest: its the gift that keeps on giving. hush now
19:45 saltybacon yea
19:46 Joseph forrest....and that's before people get exposed to the neutron networking. God help us all.
19:46 forrest there's a reason 99% of the people who work on openstack are paid to do so
19:46 ghanima Joseph: if that redhat question was for me the answer is yes centos
19:46 Joseph forrest: also  if you got a dollar for that you'd be fabulously wealthy
19:46 forrest thus why I wish I had a dollar
19:46 Joseph ghanima: yes that was for you
19:46 Joseph hehe
19:46 Joseph the init script looks non standard
19:47 Joseph i am going to dig into the service start piece of saltstack to see what it expects that stuff to return in centos
19:47 Joseph saltybacon....mounting ephemeral drives is very finicky in openstack
19:47 Joseph even outside saltstack i have had problems
19:48 Joseph does mounting work with a generic VM launch using plain vanilla openstack.
19:48 Joseph ?
19:48 acu joined #salt
19:51 Joseph ghanima: if you just execute the init script yourself, what does echo $? return after the start and stop?
19:52 darrend joined #salt
19:52 fusionx86 joined #salt
19:53 jrdx joined #salt
19:54 ghanima Joseph: it does return 0
19:55 peters-tx ghanima: Try inserting "nohup" before $HALO_JVM in "$HALO_JVM -classpath '$HALO_CLASSPATH' $JVM_OPTS $PGREP_OPTS $JMX_OPTS $ZOOKEEPER_OPTS $AKKA_OPTS $HALO_OPTS $HALO_APP 2>&1 &"
19:55 peters-tx Using the "daemon" function with a "<quoted string of commands>" I think is probably the badness
19:55 peters-tx "daemon" expects one single command/binary
19:56 peters-tx "nohup" should allow it to disconnect cleanly
19:56 Joseph ahhhhh
19:56 Joseph i couldn't put my finger on what could be wrong but that init script just smelled funny
19:57 Joseph peters-tx: that said shouldn't saltstack handle faults better than this instead of just hanging?
19:57 Joseph that seems like undesirable behavior
19:58 peters-tx Joseph: Well the init script fires off a backgrounded command, which relies on shell job management... and that works fine in a terminal
19:58 peters-tx I think
19:58 MatthewsFace joined #salt
19:58 peters-tx Joseph: So yes, I think probably salt could figure out how to handle jobs that are initiated by whatever you are asking it to run, but that's pretty over-the-top I think possibly
19:59 Joseph well i wasn't suggesting that salt do that
19:59 Joseph more so that it detect when the init script isn't behaving like what it expects and then exiting with a failure
20:00 peters-tx I think it is similar to what BASH does when you are terminalled into a box, run a command, background that command, and then  run "logout" ... it will say "no way, you have jobs running"
20:00 peters-tx the session that salt is running on your behalf has started a job in the background
20:01 thayne joined #salt
20:01 peters-tx Actually maybe that is the exact problem
20:01 Joseph that would explain the hang
20:01 peters-tx that "daemon" function call that is provided a quoted list, I think probably starts up a new shell, and within that it backgrounds the command, which keeps the shell from exiting in a non-interactive shell... maybe
20:02 peters-tx Which is why possibly it works in an interactive terminal/shell
20:02 peters-tx I'm reaching; waiting for ghanima
20:02 Joseph its a fine theory
20:03 peters-tx For instance if I do "salt-call cmd.run top" on a minion
20:04 peters-tx It never comes back, of course
20:04 Joseph lol yea....my first day with saltstack i tried that and got confused when i got no response
20:04 Joseph hehe
20:04 peters-tx I'll fiddle some more
20:05 combusean joined #salt
20:09 peters-tx Actually, on top of asking the "daemon" function to background that JDK startup, all of the "daemon" output is piped to the logger binary, which is itself backgrounded....
20:09 Joseph sounds promising
20:09 Joseph i'd be fascinated to know if that service can actually be started up by chkconfig
20:09 Joseph i'd suspect not
20:10 peters-tx I see no examples of "logger" getting backgrounded in my /etc/init.d ...
20:10 rogst joined #salt
20:10 peters-tx I'm thinking that's not the right way to go about it
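The hang theory above can be reproduced without salt at all: a reader of the script's output (command substitution here, salt's pipe in the real case) waits for stdout to close, and a backgrounded child that inherits stdout keeps it open. A minimal sketch:

```shell
#!/bin/sh
# A backgrounded child that inherits stdout holds the pipe open, so a
# reader blocks until the child exits; redirecting the child's fds
# away (as nohup with redirection would) lets the caller return.
attached_bg() { sleep 2 & }                   # child inherits our stdout
detached_bg() { sleep 2 >/dev/null 2>&1 & }   # child's fds redirected

t0=$(date +%s); out=$(detached_bg); t1=$(date +%s)
echo "detached returned after $((t1 - t0))s"  # returns immediately

t0=$(date +%s); out=$(attached_bg); t1=$(date +%s)
echo "attached returned after $((t1 - t0))s"  # waits for the child
```

This is why the script behaves fine in an interactive terminal (the terminal does not wait for EOF) but wedges the salt job.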
20:13 aw110f hi, In a template file I have {% from 'libs.sls' import pds.email_lists with context %}
20:13 aw110f but salt complains with Unable to manage file: Jinja syntax error: expected token 'block_end',
20:13 ghanima peters-tx sorry I had to step away testing nohup now
20:13 jimklo joined #salt
20:14 n8n joined #salt
20:14 anuvrat joined #salt
20:14 dccc joined #salt
20:16 vexati0n weird issue... i have a minion on a remote network, and if I do "salt-call test.ping" locally on that minion, it works fine. But if I do "salt ID test.ping" from the master, it doesn't reply.
20:16 vexati0n Watching the logs, I see that it calls in for authentication and succeeds, but besides that I see no traffic from it at all
20:16 peters-tx ghanima: Remove the "| logger -t halo &" bit also
20:17 Gareth vexati0n: firewall issues between the two?
20:17 Joseph vexation: no firewalls right?
20:17 Joseph what gareth said :)
20:17 Gareth Joseph: jinks :)
20:17 vexati0n Gareth: possibly a firewall on the minion end, but if that were true why does "test.ping" work from the minion?
20:18 Joseph because that goes through loopback ithink
20:18 vexati0n shouldn't there be a command I can run that can tell me whether Salt is, in fact, actually working?
20:18 vexati0n from the minion
20:19 to_json joined #salt
20:20 resmike joined #salt
20:20 ghanima peters-tx: still no dice
20:20 ghanima this is with the nohup
20:21 resmike joined #salt
20:22 jergerber joined #salt
20:23 fusionx86 Hey guys, can anyone tell me when version 2014.1.5 will come out? I have this https://github.com/saltstack/salt/issues/12699 issue which is really causing headaches for me. Or can anyone give me a pointer on patching salt that was installed via the Ubuntu PPA?
20:23 fusionx86 Supposed to be fixed in 1.5
20:26 mateoconfeugo joined #salt
20:30 krow joined #salt
20:30 peters-tx ghanima: Did you remove the "| logger -t halo &" bit?
20:30 thayne joined #salt
20:30 ghanima peters-tx: no I did not testing that now
20:30 cruatta joined #salt
20:32 krow joined #salt
20:36 resmike joined #salt
20:37 peters-tx vexati0n: Well one thing is if you look at /var/log/salt/minion, or the minion log, if it can't talk to the master via network it will say things like "2014-05-27 15:27:03,252 [salt.crypt                                  ][WARNING ] SaltReqTimeoutError: Waited 60 seconds"
20:37 peters-tx At least you will see lots of SaltReqTimeoutError
20:40 intr1nsic Hey guys, i've run into an issue where a module in /srv/salt/_modules works when manually run on the salt-minion but from the salt-master I get <module>.<action> is not available.
20:41 intr1nsic Is there any pointers I could use to go lookup where the issue may sit?
20:43 to_json joined #salt
20:43 jimklo joined #salt
20:48 rgarcia_ joined #salt
20:50 aw110f hi, if you want to import a jinja template (say: lib.sls) where you have some variable defined in salt, would you put it in the same directory as the init.sls file?
20:50 schimmy joined #salt
20:51 resmike joined #salt
20:54 schimmy1 joined #salt
20:54 garthk joined #salt
20:58 to_json joined #salt
20:59 hunter_ joined #salt
21:02 cruatta_ joined #salt
21:06 resmike joined #salt
21:08 notbmatt I think the time has come for the "grains formula" paradigm
21:09 rushm0r31 joined #salt
21:11 ggoZ joined #salt
21:13 luminous hello! the isinstance() jinja macro (?) here: https://github.com/saltstack-formulas/salt-formula/blob/master/salt/files/minion#L268 - where is this defined?
21:14 luminous aw110f: a jinja template like another formula? or a file/config?
21:14 aw110f file/config
21:14 Joseph <intr1nsic> : have you run the salt.util sync command?
21:16 luminous aw110f: the convention seems to be to put a directory in your formula directory (next to the .sls) 'files', and in there, put your configs
21:17 luminous aw110f: most people put all files in there flat, some people recreate the path to those configs (such as files/etc/hosts  rather than /files/hosts)
21:17 luminous intr1nsic: yea, ensure you've sync'd the modules
21:17 intr1nsic Yea, sync and refresh
21:17 luminous saltutil.sync_all I believe
21:20 intr1nsic Is there a way to do a list from the master to the minion of loaded modules
21:20 nhubbard UtahDave: have you tried using the watch functions for docker installed?
21:20 intr1nsic If I run the module from the master the minion actually runs the module but the master immediately  returns that its not available.
21:21 Eugene Oy. Interviewing for a dayjob at a Chef company.
21:21 intr1nsic So i'm guessing that either the __virtual__ is causing issues or i've janked it up beyond belief.
21:21 luminous Eugene: why bother?
21:21 UtahDave nhubbard: I haven't yet.  Are you running into issues with it?
21:21 nhubbard yeah
21:21 Eugene $, as usual.
21:21 UtahDave nhubbard: thanks for that pull req, by the way
21:21 nhubbard UtahDave: no problem
21:21 aw110f thanks luminous: I'll give it a shot
21:21 luminous Eugene: do what you love
21:21 luminous :)
21:22 luminous follow that heart and it will always pay
21:22 Eugene Also the possibility of being able to rip it out and replace(guy interviewing me did it as a side project)
21:22 Eugene So, there's that.
21:22 nhubbard UtahDave: I found part of the problem with the watch, but I don't fully understand enough about what **kw and **args are to fully get it working, is there a good way to print out variables to the console for troubleshooting it
21:22 luminous heh
21:22 luminous now that more folks are around: the isinstance() jinja macro (?) here: https://github.com/saltstack-formulas/salt-formula/blob/master/salt/files/minion#L268 - where is this defined?
21:22 forrest Eugene, :(
21:22 nhubbard if I can figure it out I'll get you guys a pull req for this also, as it would be extremely helpful for my current work project
21:23 UtahDave nhubbard: Yeah, you have to run the salt-minion in the terminal in debug mode, and then print your variables.
21:23 nhubbard just doing a {% print variable_here %} or how do I print them?
21:23 UtahDave nhubbard: basically, you'd have to create a mod_watch()  function
21:23 UtahDave nhubbard: no, that's jinja syntax.
21:24 UtahDave print(yourvar)
21:24 nhubbard okay so just print(var) simple enough.... the mod_watch function is already there actually
21:24 UtahDave or print yourvar   if the print_statement isn't being imported from future
21:24 nhubbard and based on the code should do what I'm looking for, but it doesn't actually work
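For context, the shape of a `mod_watch()` function in a state module: salt invokes it in place of the normal state function when a watch requisite reports changes. This is an illustrative skeleton, not the actual dockerio implementation:

```python
def mod_watch(name, **kwargs):
    # Called when a watched state reports changes (e.g. to restart a
    # service). Must return the standard state result dictionary.
    return {
        'name': name,
        'changes': {'restarted': True},
        'result': True,
        'comment': 'Restarted {0} in response to a watch trigger'.format(name),
    }
```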
21:24 UtahDave nhubbard: hm. let me take a peak.
21:25 UtahDave er, peek
21:25 nhubbard UtahDave: maybe ignore me....
21:26 nhubbard the dockerio state has been updated for mod_watch since the last release, which is what we are still on, I'll stand up a dev instance completely and see what we get
21:27 gzcwnk joined #salt
21:27 gzcwnk anyone in?
21:27 UtahDave cool.  You can drop the newer execution module in   /srv/salt/_modules/    and the state module in /srv/salt/_states/      and then sync them down to your minions
21:28 gzcwnk im trying to run salt '*' test.ping and I get no replay, it just hangs....where to start fault finding pls?
21:28 anuvrat joined #salt
21:29 UtahDave some info about your setup would help, gzcwnk.  what os, pastebin the output of    salt --versions-report    and   salt '<a minion>' test.versions_report
21:29 gzcwnk well one fault i just found is,  -bash-4.1# salt --version salt 2014.1.0 -bash-4.1# service salt-master restart Stopping salt-master daemon:                               [FAILED] Starting salt-master daemon: WARNING: Unable to bind socket, error: [Errno 98] Address already in use The ports are not available to bind                                                            [FAILED] -bash-4.1#
21:30 gzcwnk looks like teh daemon is dead and cant start
21:31 luminous I've looked through docs for confirmation, but do not see any note about where the isinstance() jinja macro is defined (but used in salt-formula, such as here: https://github.com/saltstack-formulas/salt-formula/blob/master/salt/files/minion#L268)
21:31 luminous where is this macro defined? sorry if I'm overlooking the obvious
21:31 gzcwnk os in rehl6.5 it used to work ok
21:32 UtahDave gzcwnk: what happens when you run    salt-master -l debug?
21:32 forrest gzcwnk, kill the process if it is running, then start here
21:32 forrest http://docs.saltstack.com/en/latest/topics/troubleshooting/master.html
21:33 gzcwnk http://pastebin.com/qHLV52T1
21:33 forrest right, so kill the process, start the master in debug and see what happens from there.
21:33 gzcwnk http://pastebin.com/4hdjX3KK
21:34 forrest you need to kill the existing process first
21:34 gzcwnk that is just wierd
21:34 forrest no it isn't
21:34 forrest if you do netstat -aln | grep 450 you'll see that the ports are probably still in use
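The check forrest describes, as a command sequence (4505/4506 are the default master publish/return ports; adjust if yours differ):

```shell
# Report whether the salt master's default ports are still held by a
# stale process; if they are, kill it before restarting the master.
check_ports() {
    netstat -aln 2>/dev/null | grep -E ':450[56]' \
        || echo "ports 4505/4506 look free"
}
check_ports
# If a port is occupied:  pkill -f salt-master
# Then run in foreground: salt-master -l debug
```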
21:34 luminous gzcwnk: stuff hangs, it's linux
21:34 zooz joined #salt
21:36 gzcwnk luminus dont be silly
21:39 gzcwnk no process is running, im rebooting anyway as I just patched teh kernel
21:39 luminous there's the more complete version of what happened :)
21:40 gzcwnk luminious are you talking to me?
21:40 MatthewsFace Hey guys, anyone had any success upgrading zeromq on CentOS 5.10?
21:40 MatthewsFace I'm getting a dependency error: Error: Missing Dependency: libzmq.so.1()(64bit) is needed by package python26-zmq-2.1.9-3.el5.x86_64 (installed)
21:40 luminous gzcwnk: not directly, no
21:40 Joseph how are you upgrading? PIP or epel?
21:41 gzcwnk well patching the kernel makes no difference to a running machine....and it didnt work before I patched it anyway.
21:41 MatthewsFace Joseph, I've synced a mirror of the RHEL/CentOS 5 repo listed here here: http://zeromq.org/distro:centos
21:41 Joseph ls -ltr
21:42 Joseph sorry wrong window
21:42 MatthewsFace I also have a private epel repo
21:42 forrest gzcwnk, can you update past 2014.1.0?
21:42 nhubbard UtahDave: yeah the newest dockerio state file fixes the issue already it appears
21:42 luminous gzcwnk: reminder: it's linux ;) so what you think about the kernel's reliability might not actually be true or happen as expevted
21:42 forrest MatthewsFace, someone built a package for that
21:42 luminous *expected
21:43 luminous nhubbard: yay
21:43 gzcwnk luminious your ar a bigoted idiot
21:43 MatthewsFace is that so?! do you have links forrest?
21:43 forrest MatthewsFace, http://download.opensuse.org/repositories/home:/fengshuo:/zeromq/CentOS_CentOS-5/home:fengshuo:zeromq.repo
21:43 bemehow joined #salt
21:43 forrest MatthewsFace, it's worked for everyone else who has had an issue, that's also the official link  from http://zeromq.org/distro:centos
21:44 MatthewsFace roger
21:44 forrest even though it looks kinda shady
21:44 hunter_ joined #salt
21:44 MatthewsFace well
21:44 MatthewsFace If I reboot my salt master
21:44 UtahDave nhubbard: ah, good.  Glad you figured it out!
21:44 MatthewsFace none of my CentOS 5.10 clients reconnect
21:44 forrest lol
21:44 MatthewsFace I have to manually restart the service on each one of them
21:44 forrest that sucks
21:44 MatthewsFace I've been told its because of the dated zeromq packages
21:44 forrest most likely
21:45 forrest all sorts of crazy issues happen with outdated zeromq
21:45 Eugene CentOS 5 is up to .10? o.O
21:45 MatthewsFace haha yeah
21:45 Theo-SLC O
21:45 gzcwnk matches rhel5.10 I assume, its pretty old
21:45 MatthewsFace but pretty stable!
21:45 MatthewsFace ;)
21:45 forrest lol
21:45 MatthewsFace kinda..
21:45 MatthewsFace Lovely
21:46 MatthewsFace even that repo gives me the same dependency error
21:46 Joseph in the 100 year old grandpa kinda way
21:46 MatthewsFace hahaha
21:46 gzcwnk support til 2017 so 5.13 is possible.....do you feel lucky?
21:46 luminous gzcwnk: I don't mean to offend, I am totally serious - you are facing this unknown reliability issue, you patched the kernel, and you cannot explain what has happened, why would you call me such names? this makes no sense :)
21:46 Joseph gzcwnk....coudl be worse. you could be dealing with XP
21:46 forrest MatthewsFace, weird, that should be the core
21:47 bemehow Quick yum question. We're using s3 to store repositories and the s3 plugin to pull them. They change a lot, but not every second. Salt is erroring on installation (not bubbling up the retcode even with --retcode-passthrough) and we're seeing this very often. The metadata is not corrupted, as seconds before the newest salt tree is pulled from the same repo. I've forced state/pkg.py to run with refresh=False in a
21:47 bemehow code via _states but still salt breaks my repomd.xml with the following https://gist.github.com/bemehow/201e29d2d5af159f24ee
21:47 gzcwnk luminious the problem was before the kernel patch and before a reboot so where is your logic?
21:48 bemehow Also I think states/yumpkg.py:refresh_db logic is flawed for rapidly changing repomd.xml
21:48 luminous how would we know any of these details?
21:49 gzcwnk what details? it didnt matter, hence I didnt comment on it.
21:49 forrest gzcwnk, let's start from the beginning
21:49 forrest did you upgrade to 2014.1?
21:49 jacksontj joined #salt
21:49 forrest or was this initial install?
21:49 gzcwnk a un-used, kernel has no impact on a running kernel
21:51 gzcwnk I installed it some months ago and it was running fine as 2014.1.0.  I found it dead just now so I patched via epel to see if there was a newer salt-master, no there isn't (on epel)
21:51 allanparsons any idea why a restart: True wouldnt work?
21:51 allanparsons on service.running.
21:52 forrest bemehow, is there a chance that somehow the metadata is changing between those two time periods?
21:52 allanparsons http://pastebin.com/BqR6picJ
21:52 allanparsons that should have restarted
21:52 bemehow forrest: there is a slight chance but still. With refresh forced to False it shouldn't matter unless one of the 'repoquery' calls beforehand refreshes the db.
21:53 allanparsons (and, does restart: true send the restart command to init.d?)
21:53 forrest allanparsons, stupid question, but does the service support the restart command?
21:53 allanparsons yea
21:53 bemehow forrest: so I call yum clean all, yum install salt-config from init script a few seconds before. THen salt-call syncall, then highstate. This is all during bootup.
21:54 allanparsons http://pastebin.com/7zpY4AE5
21:54 bemehow forrest: there is a 10s window between those for salt-call syncall
21:54 forrest bemehow, weird
21:54 bemehow yeah we've frozen the s3 endpoint to only point to us-east-1 already trying to minimize the 's3 backend non-transactionality'
21:57 bemehow forrest: so with the default repo cache lifetime being 6 hours. This 10s shouldn't matter. I'm trying to find the logic that pulls new metadata and/or is corrupting it.
21:57 forrest :\
21:57 worgen joined #salt
21:57 bemehow forrest: and it only happens 1 time. I know about the rtag 'hack' and we've just disabled it. We wanted salt never to update unless explicitly told too
21:57 forrest ahh I wasn't even aware of an rtag hack
21:58 forrest so you've got me there
21:58 schmutz joined #salt
21:59 hunter_ joined #salt
22:01 nrgaway joined #salt
22:02 luminous allanparsons: which version?
22:03 luminous bemehow: can't you just lock the salt packages?
22:05 Theo-SLC I'm trying to use the scheduler on a minion.  I can't see that the job is actually running.  http://pastebin.com/FHJKJ8Br
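For the scheduler question, a minimal minion-config sketch (the job name and interval are illustrative): after editing /etc/salt/minion the minion must be restarted, and completed runs then show up in the minion log and the master's job cache (salt-run jobs.list_jobs):

```yaml
# /etc/salt/minion
schedule:
  hourly_highstate:
    function: state.highstate
    minutes: 60
```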
22:05 allanparsons luminous:            Salt: 2014.1.4
22:05 tmwsiy joined #salt
22:05 allanparsons (minion)
22:05 allanparsons Salt: 2014.1.0-5695-gd93d7a8 (Hydrogen)
22:05 allanparsons (master)
22:05 luminous allanparsons: gzcwnk has something similar, it sounds like, same master release I think too
22:05 luminous maybe you both are staring at a known bug
22:06 luminous 2014.1.0 is not the latest
22:06 micah_chatt joined #salt
22:06 luminous it usually takes till the 3rd or 4th minor revision on a new release.. before those types of issues are all worked out
22:06 luminous welcome to salt!
22:06 luminous :)
22:07 tmwsiy Hi I am trying out salt in a macintosh computer lab and had a couple of questions about package management. Does salt support something like somebrew for states?
22:07 tmwsiy erer homebrew
22:07 Joseph tmswiy: yes it does i think just a sec
22:07 cruatta joined #salt
22:07 Joseph http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.brew.html#module-salt.modules.brew
22:08 tmwsiy swet!
22:08 tmwsiy thanks
22:08 tmwsiy uggh keyboard is flaking out sorry
22:09 luminous tmwsiy: it's generally worth looking at the raw module and state list in docs to confirm that sort of thing
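On OS X minions the pkg virtual module maps to brew (per the brew module docs linked above), so an ordinary pkg state is enough; a minimal sketch with a hypothetical package name:

```yaml
install_wget:
  pkg.installed:
    - name: wget
```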
22:09 bemehow luminous: you mean the versions so it doesn't do the lookup?
22:09 smcquay joined #salt
22:09 luminous bemehow: however that works in the OS you have, yes
22:10 tmwsiy yes I know enough salt to be slightly dangerous and this is the first non toy application I really have come across  :) I will attempt RTFM type operations henceforth! :)
22:10 luminous you should have total control over what, when, from where, etc, for salt packages
22:10 worgen left #salt
22:10 luminous tmwsiy: well, it's always also just so exciting when you go there and you find the module you want, and it's doc'd, etc
22:10 cruatta joined #salt
22:11 bemehow luminous: well yeah this is not a salt packages problem. I've got it frozen to 2014.1.4 and we're freezeing amazon-linux at the particular release too not using the 'latest' repositories.
22:11 luminous then I'm not understanding, sorry
22:11 bemehow luminous: unless you want me to specify the : version part of the pkgs dict to stop doing lookups against rpmdb.
22:11 taion809 joined #salt
22:12 bemehow luminous: so let's say there is 100 engineers in the company and they build things all the time. Every commmit is a change to the => new snapshot rpm is build. repodata.xml and sqlite files are a moving target
22:12 to_json joined #salt
22:12 hunter_ joined #salt
22:13 bemehow luminous: I have a logic in place to add new packages in seconds and upload them everytime there is a new build done. Then clients (servers) are being launched all the time. I run a CI job that does it constantly to verify salt-config and if the AMIs are bootable.
22:13 hunter_ joined #salt
22:13 JordanRinke joined #salt
22:14 luminous bemehow: that sounds awesome, where does this break down?
22:14 bemehow the problem always manifests during the 1st pkg.install state execution. I run salt-call highstate against the dir, and this dir => /usr/local/salt-config/
22:14 bemehow and this dir comes from the rpm,
22:15 bemehow luminous: So Im able to install salt configuration => /usr/local/salt-config/top.sls (for simplicity's sake) then I call sync-all cause I override _grains, _modules with some stuff that's not in the mainstream packages yet. And call highstate. The highstate fails on the first pkg.installed(pkgs=[])
22:16 luminous why?
22:16 luminous what fails?
22:16 bemehow so pkg.installed with a list fails
22:16 allanparsons argh
22:16 allanparsons bootstrap still pulls that hydrogen ver
22:16 bemehow complaining that my metadata is broken but it wasn't broken seconds before when I was installing salt tree
22:16 allanparsons super annoying
22:17 luminous bemehow: what is your 'metadata' referring to?
22:17 bemehow luminous: s3 bucket
22:17 bemehow we store snapshots in the s3 and we're using s3.py plugin to pull them out
22:17 allanparsons Salt: 2014.1.0-5693-g9ce9456 (Hydrogen)
22:17 luminous bemehow: oh, hrm
22:17 bemehow com.t.snapshots/cent6/repodata/*
22:18 bemehow i understand the limitations of s3 - that it's not transactional (read-after-write) for us-east-1
22:18 luminous bemehow: sounds like you have either some debugging to do with that module, or an opportunity to find a better method (better in that it'll actually work for you)
22:18 bemehow hah I've been there... BASH
22:19 luminous ;)
22:20 bemehow ill dig some more in the code. Maybe I will patch refresh_db to run check_metadata beforehand and clear if needed. I have that logic. But it only manifests under certain conditions => freshly booted boxes...
22:20 tmwsiy ok another newbie question. ok so I also want to do this with windows and notice that the chocolatey module does not have the pkg.install, etc directive. Does this mean I can't do states with them?
22:20 bemehow when aws settles them... it never happens again.
22:21 nrgaway joined #salt
22:22 tmwsiy or I would just use chocolatey.install instead of pkg.install in the sls files is how I am thinking it would work?
22:22 luminous bemehow: yay for working in the cloud!
22:22 luminous the living breathing machine
22:22 luminous isn't it so much fun?
22:24 luminous wtf, npm.bootstrap does not respect test=True
22:24 luminous oh, well.. this is v0.17.5
22:24 cruatta_ joined #salt
22:24 DaveQB joined #salt
22:27 jamesl joined #salt
22:27 Heartsbane UtahDave: ping
22:27 Joseph <tmwsiy>: you can run execution modules from a sls
22:30 smcquay joined #salt
22:31 UtahDave my brain is sore from being schooled on git by whiteinge
22:31 goodwill hehehe
22:31 UtahDave Heartsbane: pong!
22:31 goodwill UtahDave: been there
22:31 goodwill UtahDave: BELLY BUMP!!!!
22:31 UtahDave goodwill: BELLY BUMP!!
22:31 xt awkward
22:31 Heartsbane Creepy
22:31 forrest lol
22:32 goodwill nah
22:32 goodwill its cool
22:32 allanparsons how the heck do you UNINSTALL salt
22:32 allanparsons man
22:32 forrest ?
22:32 forrest with what os
22:32 goodwill who else wants a belly bump?
22:32 allanparsons curl -L http://bootstrap.saltstack.org | sudo sh -s -- -M git develop
22:32 nrgaway joined #salt
22:32 allanparsons that should bump my version up, no?
22:32 forrest you're going to try and run develop
22:32 forrest *?
22:32 allanparsons yea
22:32 forrest why
22:32 allanparsons my restarts dont work
22:32 UtahDave allanparsons: well, do you already have Salt installed from packages?
22:32 forrest WHY YOU DO DIS
22:33 allanparsons and it's annoying as hell.
22:33 forrest and you're running the latest release?
22:33 allanparsons when in doubt, use the develop branch!
22:33 allanparsons Salt: 2014.1.0-5693-g9ce9456 (Hydrogen)
22:33 aw110f joined #salt
22:33 forrest ok, again
22:33 forrest you are using 2014.1
22:33 forrest go to epel testing, and get one that has been patched
22:33 ZombieFeynman joined #salt
22:33 forrest don't go from 2014.1 to develop :P
22:34 allanparsons i dont have a large infra
22:34 dpnvektor joined #salt
22:34 tmwsiy Joseph: thanks. What I would really like to do (and I love to code and python is so much fun and I can always get at least some resources to pay some sharp kids at work) would be to see two things built around salt. 1) a kind of meta system that can have multiple states that you can run other than highstate 2) bolt on a batch system like condor with salt. open source of course
22:34 allanparsons my minions are on:  Salt: 2014.1.4
22:34 ZombieFeynman joined #salt
22:34 kyr0 joined #salt
22:34 Eugene You mean, like state.sls ?
22:35 anuvrat joined #salt
22:35 forrest so why is your master not on 2014.1.4?
22:35 allanparsons uhhhh
22:35 Heartsbane UtahDave: let me figure out how to phrase this
22:35 allanparsons dont judge me!
22:35 Joseph tmsiy: not sure i understand 1. What do you mean about running multiple states. ?
22:36 forrest allanparsons, you should start by making sure both the master and minions are on the same release
22:36 luminous UtahDave: he teach you about rebasing?
22:36 allanparsons i'd rather bump my master to 2014.1.4 then
22:36 kyr0 joined #salt
22:36 kyr0 joined #salt
22:36 forrest yea that's what I am suggesting
22:36 UtahDave luminous: yeah, but crazy rebasing  --onto and a bunch of other stuff.
22:37 luminous that's awesome!
22:37 forrest UtahDave, he should write a guide that is worthwhile
22:37 luminous you will love it!
22:37 luminous I just found a good one a little while ago
22:37 tmwsiy well for instance with these windows machines they run this product called deep freeze where if they are not in what is called a "thawed" state then when they reboot they go back to the state they were in when started. so to make any lasting changes, be able to 1) wake them up if needed 2) make sure they are in a thawed state 3) run a highstate and 4) go back to frozen state when done
22:37 luminous rebasing is something everyone should learn
22:37 tmwsiy trying to get away from it but no luck as of yet
22:37 luminous you will never again be afraid of git
22:37 forrest UtahDave, as much as I love this: https://pbs.twimg.com/media/BjF0pvrCcAAiVRG.png:large
22:38 Eugene We try to be pretty helpful in #git too
22:38 Joseph tmssiy: do you mean you want to orchestrate a set of states, controlling what states are executed for a specific set of machines?
22:38 forrest I still don't like the fact that after you commit if you need to change history, you're screwed. :(
22:38 stevednd UtahDave: Is there any way to call a state module from inside of another state module?
22:38 kyr0 joined #salt
22:38 kyr0 joined #salt
22:39 tmwsiy Joseph, yes and the order in which they happen
22:39 Joseph have you looked at overstate?
22:39 Joseph http://docs.saltstack.com/en/latest/topics/tutorials/states_pt5.html
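[editor's note: a minimal overstate.sls sketch in the spirit of the linked tutorial — stage and match values are hypothetical. Each top-level key is a stage; `require` orders stages, so `webservers` only runs after the `mysql` stage succeeds:]

```yaml
mysql:
  match: 'db*'
  sls:
    - mysql.server
webservers:
  match: 'web*'
  require:
    - mysql
```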
22:39 UtahDave stevednd: yep!  __salt__['cmd.run']    or __salt__['disk.usage']
22:40 Ryan_Lane scripts: {{ pillar['cloud_init_scripts'].format(service_name='test', service_instance='testme', region='us-east-1') }}
22:40 UtahDave stevednd: __salt__  is a dictionary that holds all of Salt's modules.  You can use all of them
22:40 Ryan_Lane ^^ will that work in states?
22:40 stevednd UtahDave: Those are actual modules though salt/modules, I mean salt/states
22:40 allanparsons lol, so, bootstrap is no help when upgrading
22:41 allanparsons i could only upgrade to 2014.1.4 by:  pip install git+https://github.com/saltstack/salt.git@4ab9e67dff5e3a47279a19a58ce838bea77ef1d8
22:41 Joseph tmsiy: you probably want to combine overstate with some of the more sophisticated requisites, where a particular SLS will only execute a state function if a condition is met etc
22:41 UtahDave stevednd: __salt__['state.single']('name=pkg', 'fun=installed')
22:41 UtahDave stevednd: something like that.  Actually, I think that syntax is off, but hopefully that's close
22:42 tmwsiy ok so in the example mysql and webservers might be thawmachine, runhighstate, and freezemachine, and then for the match you just list those states in order?
22:42 hunter_ joined #salt
22:42 stevednd UtahDave: I'll try and look a little more at that, but how would I pass the state's options to that?
22:43 UtahDave stevednd: they're just the arguments.
22:43 stevednd ahh, I see
22:43 Joseph you mean overstate?
22:43 stevednd fun='pkg.installed', **kwargs
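[editor's note: the `__salt__` dunder UtahDave describes is a dict mapping "module.function" names to callables, injected by the loader. The following is a stand-in sketch of that lookup pattern, not Salt's actual internals — the functions here are mocks:]

```python
# Mock of the __salt__ pattern: a dict of "module.function" -> callable.
def _pkg_installed(name, pkgs=None):
    # stand-in for what a pkg.installed call might return
    return {"name": name, "result": True, "changes": {"installed": pkgs or [name]}}

__salt__ = {"state.single": lambda fun, name, **kw: _pkg_installed(name, **kw)}

# Inside a custom state module you would call other modules the same way:
ret = __salt__["state.single"]("pkg.installed", "vim")
print(ret["result"])
```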
22:44 UtahDave Ryan_Lane: Hm. not sure .format exists in jinja
22:44 Joseph oh i see
22:44 Joseph in the overstate.sls, you want to run a highstate
22:44 Ryan_Lane well, I guess only one way to find out for sure :)
22:44 Joseph yes that should work
22:44 UtahDave :)
22:44 Joseph Utahdave: i feel like jinja should be treated like a four letter word.
22:45 Joseph :)
22:45 UtahDave lol, Joseph
22:46 UtahDave I advocate using jinja as sparingly as possible.
22:46 Ryan_Lane well, in this case I have one large pillar that has content that needs to be replaced
22:46 stevednd UtahDave: thanks, looks like that did the trick
22:46 Joseph UtahDave: the documentation should definitely have a big red warning in all caps and bolded: USE AT YOUR OWN RISK. Also, you should only use it in these specific cases else your past self will hate your future self when you need to maintain jinja macros
22:46 Ryan_Lane otherwise I need to duplicate that data a billion times
22:47 Joseph Ryan_Lane: iterating through/modifying pillar/grain data is the only situation where i think jinja works out okay.
22:48 Joseph If I see anything beyond a for loop i get real nervous
22:48 * Ryan_Lane nods
22:48 Joseph I think part of the issue is that people are trying to use jinja to act like a state or execution module because they can't find one from the built in set and either don't know how or don't want to create a custom one
22:49 Ryan_Lane .format works
22:50 UtahDave Ryan_Lane: ah, good to know!
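[editor's note: the reason Ryan_Lane's pillar line works is that Jinja resolves attribute access on the underlying Python string, so `.format` is just `str.format`. A plain-Python equivalent of what the template evaluates (template text is illustrative):]

```python
# Equivalent of {{ pillar['cloud_init_scripts'].format(...) }} in plain Python:
template = "#!/bin/bash\necho {service_name}-{service_instance} in {region}"
rendered = template.format(service_name="test",
                           service_instance="testme",
                           region="us-east-1")
print(rendered)
```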
22:51 UtahDave Joseph: yeah, people often start using jinja to try to imperatively "code" their infrastructure, instead of statefully defining their infrastructure
22:51 Joseph UtahDave: at which point, i would say they should just use shell scripts.
22:51 Ryan_Lane if I want to do that I'll make a custom module :)
22:51 kermit joined #salt
22:52 UtahDave yep!   Anytime you get too crazy with jinja, that probably means it should be in an execution module
22:52 Ryan_Lane https://gist.github.com/ryan-lane/c2daf5dd1aa9fbbe0c36
22:52 Ryan_Lane ^^ example
22:52 Joseph UtahDave: speaking of stupidity, i almost tried to create a jinja template that would somehow dynamically iterate through unix users on a minion and then either remove or add users based on the given list.
22:53 Joseph After torturing myself for an hour or so, it dawned on me how trivially simple this would be to do in a module
22:53 Joseph It was then that i decided to have a jinja intervention with myself
22:53 Joseph I am still recovering but i am making good progress
22:54 UtahDave that's a great example, Ryan_Lane. Thanks!
22:54 Ryan_Lane yw
22:54 allanparsons ok
22:54 rgarcia_ joined #salt
22:54 taandrews joined #salt
22:54 allanparsons so, my salt master and salt minion are on the same version
22:54 allanparsons and service.running - restart:True still doesnt work
22:54 forrest Ryan_Lane, I've starred that, so don't delete it :P
22:54 taandrews Are pillars supported in overstate
22:54 taandrews ?
22:54 UtahDave forrest: fork it!
22:54 Ryan_Lane forrest: heh
22:54 forrest lol
22:55 forrest I forgot you can fork a gist
22:55 UtahDave allanparsons: can you pastebin what you have?
22:55 Joseph that's really funny
22:55 forrest allanparsons, did you run the command with debug?
22:55 allanparsons yep!
22:55 allanparsons 1 sec
22:56 allanparsons @utahdave... what do you want me to pastebin
22:56 allanparsons there's some sensitive info in the log
22:56 UtahDave allanparsons: your state file. Or at least the relevant part
22:57 allanparsons @UtahDave: http://pastebin.com/NdbdNBbn
22:58 allanparsons there's only 1 service.running in there
22:58 allanparsons (my init.d is... restart | reload )
22:59 UtahDave allanparsons: so your example has   - reload: True
22:59 UtahDave is that what you want?
22:59 allanparsons i tried reload: True
23:00 allanparsons and restart: True
23:00 allanparsons both didnt cycle the service
23:00 rgarcia_ joined #salt
23:01 UtahDave allanparsons: You realize it only will reload or restart when something you're "watch"ing changes, right?
23:03 dimeshake and changes via salt*
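[editor's note: a sketch of the behavior UtahDave describes — the service is only restarted when a watched state reports changes in the same run. File paths here follow allanparsons' naming and are hypothetical:]

```yaml
/etc/unicorn/hypeman.conf:
  file.managed:
    - source: salt://unicorn/hypeman.conf

unicorn_hypeman:
  service.running:
    - watch:
      # restart fires only if this file state reports changes
      - file: /etc/unicorn/hypeman.conf
```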
23:04 forrest UtahDave, hmm, at one point wasn't there an option to just always restart when the state ran?
23:04 allanparsons @Utah, yeah
23:04 allanparsons i realize.
23:04 allanparsons @Utahdave
23:05 allanparsons those three watch statements changed
23:05 allanparsons so it should have triggered a restart
23:06 UtahDave allanparsons: let me double check something
23:07 UtahDave allanparsons: what's the name of your init.d script?
23:07 allanparsons unicorn_{{ app }}
23:07 allanparsons so:  unicorn_hypeman
23:07 allanparsons # ls -ltr /etc/init.d/unicorn_hypeman  -rwxr-xr-x 1 rvm root 2262 May 29 21:48 /etc/init.d/unicorn_hypeman
23:09 UtahDave Hm. you might have to pass in its sig so Salt can check if it's running or not
23:09 UtahDave http://docs.saltstack.com/en/latest/ref/states/all/salt.states.service.html#salt.states.service.running
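[editor's note: per the service.running docs linked above, `sig` is a pattern handed to `service.status` and matched against `ps` output when the init script can't report status itself — which is why a shared process name like `unicorn` can match more than one app, as allanparsons worries below. Sketch:]

```yaml
unicorn_hypeman:
  service.running:
    # matched against `ps` output to decide whether the service is up
    - sig: unicorn
```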
23:10 dimeshake if i want to iterate over nested dicts... can anyone give me a hand? jinja is foreign to me, but I've got this so far and am running into a "cannot iterate over 'NoneType'" error on this state + pillar data
23:10 hembree1 joined #salt
23:10 dimeshake just simple syntax i'm sure
23:10 dimeshake https://gist.github.com/mshade/6e305f6e61d03b60742c
23:11 dimeshake i basically am iterating over elements in a dict, and then want to refer to those again. see line 9 in the users.sls of that gist
23:12 Heartsbane whiteinge: UtahDave: I owe you guys some more sushi, maybe before a LUG Meeting
23:12 forrest dimeshake, take a look at https://gist.github.com/gravyboat/0ae1c96bd05bbe315f1a
23:12 whiteinge yay sushi!
23:12 forrest or UtahDave's original https://gist.github.com/UtahDave/3785738
23:12 UtahDave sushi!
23:12 allanparsons @utahdave...
23:12 allanparsons http://pastebin.com/E7PxLp9X
23:12 allanparsons pass in the name of the process?
23:13 Heartsbane thanks for showing me that trick whiteinge
23:13 allanparsons or sig 0? (which is what start passes in in that init script)
23:13 UtahDave allanparsons: yeah, what shows up in ps
23:14 allanparsons oh man, thats not gonna be good.
23:14 allanparsons if i have multiple unicorn workers, it may cycle all of them, no?
23:14 oz_akan_ joined #salt
23:14 dimeshake thanks forrest - will play with this
23:15 Theo-SLC I don't think the saltstack scheduler works at all on minions.  Does anybody here use it?
23:15 forrest dimeshake, np
23:15 joehh fusionx86: if you see this, pm me for assistance patching packages
23:15 forrest joehh, thanks for the quick fix on that issue last night
23:15 forrest I bow down to your ubuntu packaging after having to do some at my current job
23:16 forrest I can't believe I feel that rhel does a 'good' job compared to ubuntu
23:16 UtahDave allanparsons: Hm.  I'm not super experienced with that.  whiteinge do you know what allanparsons should do with his service.running?
23:17 UtahDave Theo-SLC: It should work.  If not we need to fix it!  :)   can you pastebin the relevant section of your master config?
23:18 joehh forrest: no worries - can't believe I only half fixed it... glad you prompted
23:19 forrest Yea I wouldn't even have noticed had a coworker not brought the default change to my attention
23:20 joehh I've "refreshed" the precise packaging (brought it up to using dh_python2), but it hasn't been widely tested
23:20 allanparsons @utahdave... ok, so it says "service restarted"
23:20 allanparsons after the sig
23:21 allanparsons but that deff didnt happen
23:21 joehh It is in the salt-depends ppa - 2014.1.4+ds-2ubuntu3
23:21 allanparsons 1 sec...
23:21 joehh If you use precise, could you please see if there are any issues in your environment?
23:22 joehh I don't expect any issues, but would like to ensure there are none before pushing it out to everyone (probably at next major release)
23:23 joehh forrest: ^
23:24 joehh re packaging - steep learning curve, but now it is (mostly) in git-buildpackage it is much easier to keep track of the different distros
23:25 joehh (4 debian (sid, jessie, wheezy, squeeze), 4 ubuntu (trusty, saucy, precise, lucid) at the moment)
23:25 joehh was a nightmare before
23:27 forrest joehh, I actually can't, we aren't provisioning any additional precise boxes and I don't have any sandbox machines that are precise
23:27 forrest I don't use precise at all, it was just some VM a coworker had
23:27 allanparsons @UtahDave @Theo-SLC... a restart should actually issue:  service <name> restart
23:27 joehh no worries- just keeping an eye out for people who use the packages "actively"
23:27 allanparsons that sig is gonna be an issue
23:27 forrest joehh, yea for sure
23:28 forrest I can check my digitalocean VM when I get home, I *might* have an image I can spin back up that has salt cloud on it that is precise
23:28 joehh I'm pretty happy with making the change at the next major release, but if someone has an environment which might break, that is worth testing
23:28 whiteinge allanparsons: sorry, slow reply. does that init script restart things correctly without salt?
23:28 allanparsons yes
23:29 forrest joehh, yea
23:29 joehh I wouldn't bother - I have done a fair bit of that, just looking for edge cases
23:29 allanparsons and sending in sig: unicorn
23:29 forrest ok
23:29 joehh :)
23:29 allanparsons it issues a restart
23:29 allanparsons that works
23:29 allanparsons but if i have 10 apps all using unicorn on a box
23:29 allanparsons and i deploy app 1 of 10
23:29 allanparsons i **think** salt will cycle all 10 apps
23:29 forrest why would each app not have it's own instance?
23:30 allanparsons we use Ec2
23:30 jimklo how do I reference the salt.modules.x.func() from within the Jinja inside the SLS?
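[editor's note: jimklo's question goes unanswered in the log. For reference, execution modules are exposed to SLS Jinja through the `salt` dict, keyed by 'module.function'; a small sketch (state and path are hypothetical):]

```yaml
{% set os_family = salt['grains.get']('os_family') %}
motd:
  file.managed:
    - name: /etc/motd
    - contents: 'os_family is {{ os_family }}'
```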
23:30 allanparsons so if we have an underutilized app
23:30 allanparsons we'll double up an app
23:30 unknown007 joined #salt
23:31 Theo-SLC UtahDave:
23:31 Theo-SLC http://pastebin.com/FHJKJ8Br
23:31 whiteinge allanparsons: salt is just shelling out to the init script. did you paste the state you're calling it with? (i quickly skimmed the backlog)
23:32 dimeshake forrest: works perfectly. thanks mang!
23:33 forrest dimeshake, yea np, I'm glad UtahDave wrote that up so long ago, it's come in handy
23:33 redondos joined #salt
23:33 dimeshake i basically changed two lines. for usr, args in... and then for dev in args['devs']
23:34 dimeshake perfecto.
23:34 dimeshake ... as long as i have devs defined anyway :)
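[editor's note: a sketch of the nested-dict loop dimeshake describes, with hypothetical pillar keys. Using `.get` with defaults is what avoids the "cannot iterate over 'NoneType'" error when a user has no `devs` key:]

```yaml
{% for usr, args in pillar.get('users', {}).items() %}
{% for dev in args.get('devs', []) %}
{{ usr }}-{{ dev }}:
  cmd.run:
    - name: echo "{{ usr }} has {{ dev }}"
{% endfor %}
{% endfor %}
```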
23:34 forrest nice
23:35 alunduil joined #salt
23:36 hembree1 left #salt
23:37 allanparsons whiteinge... i did paste the state
23:37 Ryan_Lane whiteinge: export ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future"
23:37 dimeshake is there any good way to add an arbitrary number of lines to a jinja template? rather than anticipating key: value, have a block of "insert crap here" basically
23:37 whiteinge allanparsons: sorry i missed it. mind pasting it again?
23:38 Ryan_Lane whiteinge: that's needed to compile M2Crypto on OS X
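[editor's note: per Ryan_Lane's report, Xcode's clang treats this unused-argument warning as a hard error, which breaks M2Crypto's C extension build; exporting the flag before running pip downgrades it. The pip step is shown as a comment since it requires a full OS X build environment:]

```shell
# Downgrade the clang warning that aborts M2Crypto's build on OS X:
export ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future"
# then: pip install M2Crypto
```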
23:38 allanparsons whiteinge: http://pastebin.com/NdbdNBbn
23:38 Ryan_Lane I don't really know where to fit that into the docs
23:38 forrest joehh, are you running 14.04 as a desktop?
23:38 whiteinge Ryan_Lane: that looks like a made up flag
23:38 Ryan_Lane doesn't it? :)
23:39 whiteinge i think that should be in the installing from source doc
23:39 Ryan_Lane it's a flag for the version of cc that comes with xcode
23:39 whiteinge building M2Crypto is a consistent problem-point on multiple OSes
23:39 Ryan_Lane yep
23:39 Ryan_Lane let's just get rid of M2Crypto ;)
23:39 whiteinge we use to have one in there for building M2Crypto on Fedora (I added it)
23:39 whiteinge +1 to that :-P
23:39 UtahDave lol, looks totally made up
23:42 joehh forrest: normal desktop is debian wheezy, but now have a fairly permanent 14.04 vm for ubuntu packaging
23:42 forrest ahh ok
23:43 whiteinge allanparsons: i see the ``{{ company }}_{{ repo_name }}_unicorn_{{ repo_name }}`` service state but i don't see what's kicking off the restart
23:43 forrest joehh, well, if you run the desktop version within the VM, as a heads up the latest kernel is doing something weird where it's trying to pxe boot on shutdown
23:44 joehh ok - I'll keep an eye out for that - thanks
23:44 forrest np
23:46 unknown007 joined #salt
23:54 timc3_ joined #salt
23:54 stevednd UtahDave: on that state.single, is there a way to get its ret hash? As it is, it just returns a hash with the value of ret['changes']
23:56 UtahDave let me check
23:57 bhosmer joined #salt
23:58 UtahDave stevednd: doesn't look like it
23:59 stevednd thanks. I'll just use my ugly hack until I can get a patch submitted, and hopefully accepted for the functionality
