IRC log for #salt, 2013-07-19


All times shown according to UTC.

Time Nick Message
00:02 jbunting joined #salt
00:05 oz_akan_ joined #salt
00:11 Jahkeup_ joined #salt
00:13 jslatts joined #salt
00:17 bejer joined #salt
00:19 Ryan_Lane how do I target using grains in the pillar top file?
00:19 Ryan_Lane I'm trying this: 'deployment_target:*':
00:19 Ryan_Lane but that doesn't seem to work
00:20 Ryan_Lane ah
00:20 Ryan_Lane - match: grain
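
The targeting Ryan_Lane arrives at, sketched as a pillar top file (the grain name comes from his messages above; the pillar sls name is illustrative):

```yaml
# /srv/pillar/top.sls -- match minions on a grain instead of the minion id.
base:
  'deployment_target:*':
    - match: grain
    - deployment        # hypothetical pillar sls assigned to matching minions
```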
00:24 dthom91 joined #salt
00:27 auser :)
00:27 auser http://www.saltstat.es/posts/role-infrastructure.html
00:28 Jahkeup_ joined #salt
00:29 DredTiger joined #salt
00:30 [diecast] joined #salt
00:30 [diecast] joined #salt
00:30 Jahkeup__ joined #salt
00:33 Mrono joined #salt
00:33 Mrono joined #salt
00:41 SEJeff_work Ryan_Lane, http://docs.saltstack.com/ref/states/top.html#other-ways-of-targeting-minions
00:42 Ryan_Lane heh
00:42 Ryan_Lane yep. found that right after I asked :)
00:43 SEJeff_work Ryan_Lane, I wrote the first iteration of that example
00:43 SEJeff_work after I managed to figure it out myself :)
00:49 Ryan_Lane hahaha
01:00 emocakes joined #salt
01:06 Jahkeup_ joined #salt
01:16 kenbolton joined #salt
01:20 liuyq joined #salt
01:24 dthom91 joined #salt
01:25 Furao joined #salt
01:25 auser joined #salt
01:34 jeddi joined #salt
01:36 LyndsySimon joined #salt
01:45 Nexpro joined #salt
01:52 mikedawson joined #salt
01:55 dthom91 joined #salt
01:55 andrew joined #salt
02:02 UtahDave joined #salt
02:04 cocoy1 joined #salt
02:08 avienu joined #salt
02:12 Furao joined #salt
02:15 sifusam joined #salt
02:18 jschadlick joined #salt
02:27 Gifflen joined #salt
02:28 stevedb joined #salt
02:30 avienu joined #salt
02:37 oz_akan_ joined #salt
02:38 Gifflen_ joined #salt
02:47 Jahkeup__ joined #salt
02:54 UtahDave joined #salt
03:00 possibilities joined #salt
03:01 possibilities hi all. is it possible to have a cmd only run if a pillar value has changed?
03:01 UtahDave possibilities: Well, how would you determine if it had changed?
03:02 possibilities i'm unsure
03:02 redbeard2 left #salt
03:03 possibilities ah, ok, i think i see a way
03:05 napperjabber joined #salt
03:06 possibilities thanks for answering my question with the perfect question (:
03:07 UtahDave :)  Why do I suddenly feel like a zen master?
03:09 EugeneKay Cocaine?
03:10 UtahDave he he
03:14 jbunting joined #salt
03:16 stevedb joined #salt
03:18 bluemoon joined #salt
03:18 possibilities oh i thought i could watch the results of another cmd, guess not
03:19 possibilities i need more zen mastery and/or some of that cocaine
03:21 jschadlick joined #salt
03:23 kho joined #salt
03:25 EugeneKay I don't know how much it will help with Salt per se....
03:26 possibilities surprised this doesn't work: https://gist.github.com/possibilities/0f7c30158e43a21cf6bd
03:38 avienu Another silly question… Is files_root relative to the rest of the salt files or absolute on the salt master
03:40 avienu Okay, did a quick test. It's absolute.
03:53 nmistry joined #salt
04:07 jalbretsen joined #salt
04:08 nmistry Hi, i am a racker and just getting my feet wet with salt.  I wanted to find out if there were any examples of using salt to setup nginx / wordpress / mysql?
04:08 nmistry I want to do some testing, and need to tear down and rebuild an environment several times.   Figured it was a great time to play with salt some more
04:13 UtahDave nmistry: cool!
04:13 UtahDave Let me track down some examples
04:14 nmistry Thanks UtahDave
04:14 nmistry i can do it in chef, but whats the fun in that… right?
04:16 UtahDave :)
04:20 UtahDave nmistry: OK, I have a mostly complete Wordpress install state here: https://gist.github.com/UtahDave/f4aba7a49a1b715f1a08
04:20 UtahDave It has a couple shortcomings.
04:20 UtahDave 1. The password is sitting there in the sls file.  The password should be put in pillar and templated here
04:21 UtahDave 2. It doesn't actually load the database schema and data, so when you go to the webserver's url, you have to follow the web gui to finish the install
04:21 UtahDave 3. Also, you have to supply the wordpress directory with all the files
04:21 UtahDave nmistry: that should give you an idea on  how to do it.
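
A sketch of how shortcoming 1 could be addressed, moving the password into pillar and templating it into the state (the pillar key names here are made up, not from UtahDave's gist):

```yaml
# Pillar, e.g. /srv/pillar/wordpress.sls:
#   wordpress:
#     db_pass: s3cret
# State file (Jinja renders before the YAML is parsed):
wordpress-db-user:
  mysql_user.present:
    - name: wordpress
    - password: {{ salt['pillar.get']('wordpress:db_pass') }}
```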
04:21 nmistry This is great.    is it difficult to add functionality where salt downloads a file from wordpress and untars it
04:22 UtahDave nope. That wouldn't be hard to do at all.
04:22 UtahDave I use that for a demo, and I didn't want to depend on the wordpress tarball changing names or location right when i'm trying to show off Salt.  :)
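
One way to add the download-and-untar step nmistry asks about, guarded with `unless` so it stays idempotent across repeated highstates (URL and paths are illustrative):

```yaml
get-wordpress:
  cmd.run:
    - name: curl -L https://wordpress.org/latest.tar.gz | tar xz -C /var/www
    - unless: test -d /var/www/wordpress
```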
04:22 nmistry ill start w/ this and work on replacing apache w/ nginx, and go from there
04:22 UtahDave cool.  Let me know how it goes.
04:22 nmistry I will keep you in the loop
04:23 UtahDave nmistry: are you going to the Salt Sprint at the castle on the 27th?  (A week from this Saturday)
04:23 nmistry I would like to
04:24 nmistry Gotta clear it w/ the boss
04:24 UtahDave cool.  I'll be flying in for it.
04:24 nmistry and by boss i mean my wife
04:25 qba73 joined #salt
04:39 nmistry Thanks for the help UtahDave
04:39 kallek joined #salt
04:57 whit joined #salt
04:58 raydeo joined #salt
05:01 Ryan_Lane joined #salt
05:33 jhauser joined #salt
05:46 [ilin] joined #salt
05:48 Lue_4911 joined #salt
06:08 Ryan_Lane joined #salt
06:20 dthom91 joined #salt
06:22 Newt[cz] joined #salt
06:38 Ivo joined #salt
06:49 UtahDave joined #salt
06:54 liuyq joined #salt
06:55 az87c joined #salt
06:59 LucasCozy joined #salt
06:59 LucasCozy joined #salt
07:01 __gotcha joined #salt
07:01 __gotcha joined #salt
07:03 qba73 joined #salt
07:05 linjan joined #salt
07:13 __gotcha joined #salt
07:13 __gotcha joined #salt
07:18 __gotcha joined #salt
07:19 dthom91 joined #salt
07:25 scott_w joined #salt
07:41 middleman_ joined #salt
07:49 dthom91 joined #salt
07:52 scalability-junk UtahDave: I use wordpress as a submodule in git for showing projects
07:52 bemehow joined #salt
07:53 scalability-junk I mirror the svn from wordpress into my git repo and use that as submodule for my wordpress projects
07:53 scalability-junk same for plugins
07:53 scalability-junk theme and data is plugged in as submodules too ;)
07:54 scalability-junk git clone super_repo.git and it runs sorta :d
07:54 Ryan_Lane1 joined #salt
07:54 scalability-junk database needs to be setup with saltstack.
08:12 scalability-junk Something else: how do you save your states and pillar? do you have one state repo and one pillar repo for each project or do you just keep all of them in a few repos?
08:12 scalability-junk so one central states repo?
08:16 zooz joined #salt
08:16 kstaken joined #salt
08:21 irctc249 joined #salt
08:21 UtahDave scalability-junk: I keep them in separate repos
08:22 UtahDave we've got initial support for git pillar now, though I haven't used it yet
08:22 scalability-junk UtahDave: may I ask how you use them on the master then and which repos you setup per project?
08:22 scalability-junk git pillar?
08:23 knightsamar joined #salt
08:41 hazzadous joined #salt
08:43 bluemoon joined #salt
08:49 dthom91 joined #salt
08:51 whiskybar joined #salt
08:52 kstaken joined #salt
09:00 kstaken joined #salt
09:04 Koma joined #salt
09:06 felixhummel joined #salt
09:12 dzen mh
09:12 dzen is there a way to debug an error like
09:12 dzen [ERROR   ] No changes made for /etc/fabulousproject/fabulousproject-client.conf
09:12 dzen i'm using salt-call -l debug on the minion allready
09:15 knightsamar what do the logs say dzen ?
09:16 dzen nothing more than that :'(
09:16 dzen (for this context)
09:16 hazzadous joined #salt
09:27 knightsamar dzen: did u check the logs on the master and minion both ?
09:31 dzen ok I upgraded my server to 0.16 and it wurked
09:34 bluemoon joined #salt
09:41 giantlock joined #salt
09:42 Koma joined #salt
09:45 carlos joined #salt
09:50 efixit joined #salt
09:50 dthom91 joined #salt
09:54 lemao joined #salt
10:08 oz_akan_ joined #salt
10:16 __gotcha joined #salt
10:16 __gotcha joined #salt
10:16 scalability-junk does installed for pkg imply latest?
10:16 scalability-junk or should I use latest?
10:18 liuyq joined #salt
10:20 dthom91 joined #salt
10:34 jeddi joined #salt
10:41 scalability-junk when I have a list of names for example and I want to iterate over it can I just use a for loop from salt['pillar.get']('data:user:names') ? or do I need .items or so too?
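
If the pillar value is a plain YAML list, a bare `for` loop is enough; `.items()` is only needed when iterating a dict. A minimal sketch (the pillar path is taken from the question, the state body is illustrative):

```yaml
{% for name in salt['pillar.get']('data:user:names', []) %}
user_{{ name }}:
  user.present:
    - name: {{ name }}
{% endfor %}
```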
10:51 bemehow joined #salt
10:52 mikedawson joined #salt
10:52 lemmings scalability-junk: I assume install implies the same as apt-get install, which is always latest (OR pinned with higher priority, in actuality)
10:52 scalability-junk ok so I wouldn't actually need latest?
10:56 scalability-junk something else: a minion would run as root usually or?
10:56 scalability-junk then I would save ssh keys private ones in /root/.ssh/ or is there another best practice?
10:57 nkuttler scalability-junk: yeah, it runs as root, but why would you need ssh keys?
10:57 scalability-junk nkuttler: git deploy keys or rsync ssh keys...
10:58 nkuttler hm, not really sure what you mean, but that sounds like something i wouldn't do with root privs
10:58 scalability-junk or would I do that from the cmd.run: - identity: salt://ssh_git_key
10:58 scalability-junk nkuttler: how else would you deploy code via saltstack?
10:58 nkuttler scalability-junk: i use the git module
10:58 viq lemmings / scalability-junk : in a state, "installed" means "is there such a package"; you need "latest" to automatically upgrade it to the latest version
10:58 scalability-junk viq: alright great
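
The difference in state form, for reference (package name illustrative):

```yaml
# pkg.installed only ensures the package is present; it never upgrades.
# Swap in pkg.latest if the package should track the newest version in the repo.
nginx:
  pkg.installed: []    # or: pkg.latest: []
```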
10:59 scalability-junk nkuttler: yeah me too, but how do you specify the ssh key?
10:59 scalability-junk would you use - identity: salt://some_ssh_key?
10:59 nkuttler scalability-junk: well i create app users that get pub/priv key and put them into the authorized keys of the git host
11:00 scalability-junk alright so you create an extra user running the app say "project1"
11:00 nkuttler yeah
11:00 scalability-junk this user then gets a ssh keypair within his .ssh/
11:01 scalability-junk and all git stuff is run via this project...
11:01 scalability-junk ok
11:01 nkuttler i just like to avoid root whenever possible :)
11:01 scalability-junk yeah great
11:02 scalability-junk so instead of running my cmd with root I would specify the created user.
11:03 nkuttler yeah, with the runas parameter
11:03 nkuttler actually, i'm just thinking.. having the app user own the files might not be so great either..
11:05 scalability-junk nkuttler: ok what do you think would be best?
11:06 nkuttler i'm just thinking about creating a source user or something..
11:06 nkuttler but somehow i think there must have been a reason for why i did it like this..
11:07 p3rror joined #salt
11:07 nkuttler probably making the files -w would be good enough
11:08 nkuttler nah..
11:08 nkuttler anyway, got work to do :)
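
The pattern nkuttler describes, sketched end to end: a dedicated app user owns the checkout, a deploy key lives in its `~/.ssh`, and git runs as that user. All names and paths are illustrative, and note that git.latest of this era took `runas` (later renamed to `user`):

```yaml
project1:
  user.present:
    - home: /home/project1

/home/project1/.ssh/id_rsa:
  file.managed:
    - source: salt://keys/project1_deploy_key
    - user: project1
    - mode: 600
    - require:
      - user: project1

project1-code:
  git.latest:
    - name: git@githost:org/project1.git
    - target: /home/project1/app
    - runas: project1
    - require:
      - file: /home/project1/.ssh/id_rsa
```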
11:08 scalability-junk thanks so fat
11:08 scalability-junk *far
11:13 scalability-junk if anyone could take a look at my first state I would love to hear some feedback: http://pastebin.com/hVR7THPb
11:17 scalability-junk Now the syntax should be right: http://pastebin.com/Z5UrajWU would really appreciate some feedback
11:17 scalability-junk btw it hopefully is in the end a quite abstract deploy state for git with submodules with git annex submodules for binary files, which get synced via ssh or rsync
11:18 swa How does one set a multiline value for an attribute in a pillar?
11:18 emilisto swa: start the content on a new line, and prefix with double indent
11:18 emilisto do `salt-call pillar.data` and you'll see it all outputted in this way
11:19 swa emilisto: ok thx
11:20 emilisto scalability-junk: looks good, maybe add a require: user.present: {{ salt['pillar.get']('user:name') to a few of the states
11:21 dthom91 joined #salt
11:21 emilisto I just ran into some trouble due to not having enough require, which worked in one environment but not on another, where the order is different
11:21 scalability-junk emilisto: yeah seems like I should require the user for git and git-annex
11:21 emilisto scalability-junk: right, and the git-key, also add chmod 600 to the private key or else ssh will complain
11:21 scalability-junk btw: how is the syntax of require?
11:21 scalability-junk kk
11:22 emilisto you can add an arbitrary number of requires, so '- user.present: username' right after the previous require line
11:23 scalability-junk yeah, but could I say '- require: git' too?
11:23 scalability-junk or do I really do '- require: pkg: git'
11:25 swa emilisto: looks like salt-call returns lines with space instead of line return
11:25 timl0101 joined #salt
11:27 rroa joined #salt
11:27 scalability-junk and something else when I do a '- watch: - git: git' does it update the statement in the usual order the require statement says it should or does watch with require overwrite that somehow?
11:32 swa emilisto: figured it out.. had to do "options: |" then new line with my content
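
The block-scalar form swa lands on, for reference: `|` preserves newlines, while `>` folds them into spaces (which is why the output looked like one long line earlier). Key names are illustrative:

```yaml
ssh:
  options: |
    PermitRootLogin no
    PasswordAuthentication no
```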
11:32 [diecast] joined #salt
11:33 emilisto scalability-junk: I think you need the latter, I'd love to see support for the former though
11:33 emilisto swa: ah, right
11:34 scalability-junk emilisto: but how is it formatted than?
11:34 scalability-junk is it vice versa to the declaration in the state?
11:34 emilisto exactly
11:34 swa emilisto: now jinja will output without return line :-) geez
11:35 scalability-junk ok thanks so far
11:36 emilisto https://gist.github.com/emilisto/49e918795d786559bb06
11:36 emilisto hehe np, like that we've helped each other - like an open source utopia :)
11:36 emilisto those hints you gave me on environments yesterday untied a knot I've been working on for two days
11:38 scalability-junk hehe :)
11:38 scalability-junk emilisto: the comment is right on your gist?
11:39 scalability-junk damn markdown syntax :D
11:40 jbunting joined #salt
11:40 emilisto hmm not sure what you mean? I didn't test run it but I think it should be right
11:42 scalability-junk yeah just wanted to make sure that you can require: -user: ... or require: - user.present: ...
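
For the record, requisites reference a state by `<state module>: <id or name>`, not by the dotted function, so it is `- pkg: git` rather than `- pkg.installed: git`. A sketch (ids illustrative):

```yaml
deploy-repo:
  git.latest:
    - name: git@githost:org/app.git
    - target: /srv/app
    - require:
      - pkg: git
      - user: deployuser
```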
11:48 ggoZ joined #salt
11:51 diegows joined #salt
11:53 kenbolton joined #salt
12:05 clone1018 joined #salt
12:05 blee_ joined #salt
12:21 dthom91 joined #salt
12:24 Newt[cz] joined #salt
12:33 p3rror joined #salt
12:35 jslatts joined #salt
12:39 toastedpenguin joined #salt
12:40 sciyoshi joined #salt
12:40 clone1018_ joined #salt
12:41 txmoose joined #salt
12:42 bemehow joined #salt
12:43 N-Mi joined #salt
12:44 mikedawson joined #salt
12:45 jlaffaye joined #salt
12:51 dthom91 joined #salt
12:53 KennethWilke joined #salt
12:55 efixit joined #salt
12:57 Jahkeup_ joined #salt
12:57 smoof joined #salt
12:59 anteaya joined #salt
13:00 oz_akan_ joined #salt
13:00 hankinnyc joined #salt
13:01 __gotcha joined #salt
13:01 __gotcha joined #salt
13:01 juicer2 joined #salt
13:05 tempspace Does anybody see anything wrong with the statement {% if ('contractor' in args and salt['pillar.get']('CONTRACTORS', 'False')) or 'contractor' not in args %}
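
One pitfall worth flagging in that expression: the default `'False'` is a non-empty string, which Jinja treats as truthy, so that pillar lookup can never come out false. Passing a real boolean avoids it (`args` is assumed to be a variable from tempspace's own template context, and the state body is illustrative):

```yaml
{% if ('contractor' in args and salt['pillar.get']('CONTRACTORS', False))
      or 'contractor' not in args %}
contractor-pkgs:
  pkg.installed:
    - name: sudo
{% endif %}
```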
13:08 Kzim joined #salt
13:08 Kzim Hi
13:09 Kzim where can i see the new 0.16 feature please ? :)
13:10 knightsamar is there any way i can tell salt to return the exit status of the command it remotely executed on a minion as its own exit status?
13:10 typhus45 joined #salt
13:10 chubrub joined #salt
13:11 LucasCozy joined #salt
13:11 LucasCozy joined #salt
13:12 Gifflen joined #salt
13:12 [diecast] joined #salt
13:12 [diecast] joined #salt
13:14 chubrub hi all, i've read somewhere that it's possible to sync multiple files to a minion (like - sources salt://state/files/* ), but I don't remember how to do it.  Please help, how do I do it?
13:16 Kholloway joined #salt
13:17 Jahkeup_ joined #salt
13:19 brianhicks joined #salt
13:22 smoof @chubrub - I think you are looking for file.recurse
13:22 chubrub how could I have missed that <facepalm>
13:22 smoof hehe
13:22 chubrub thx, smoof
13:23 smoof np
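
The state smoof points to, sketched (paths illustrative) — file.recurse copies a whole directory tree from the fileserver to the minion:

```yaml
sync-app-files:
  file.recurse:
    - name: /etc/myapp
    - source: salt://state/files
```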
13:25 napperjabber joined #salt
13:25 emocakes_ joined #salt
13:25 scalability-junk emilisto: yeah first reworked version is online, will take a few hours to write my example pillars and test it till tomorrow, but doesn't look too bad. https://gist.github.com/stp-ip/6039080
13:29 N-Mi joined #salt
13:31 oz_akan_ joined #salt
13:32 racooper joined #salt
13:33 dthom91 joined #salt
13:34 timl0101 joined #salt
13:44 LucasCozy joined #salt
13:45 wilkystyle joined #salt
13:48 wilkystyle left #salt
13:48 sifusam joined #salt
13:48 __gotcha joined #salt
13:48 __gotcha joined #salt
13:49 bieberhole69 joined #salt
13:51 bieberhole69 Anyone know if there is state support for pkg group_install yet?
13:54 aat joined #salt
13:55 smoof I don't think so. I've just done it with cmd
13:56 smoof seems like a good feature to add though...
13:57 tempspace bieberhole69: No, you can monitor https://github.com/saltstack/salt/issues/5504 though
13:58 bieberhole69 tempspace: yeah, I went digging and couldnt find it.
13:58 bieberhole69 :smoof I'll have a go at using cmd for now then, thanks for the tip
13:59 MrTango joined #salt
14:00 deanvanevery joined #salt
14:02 opapo joined #salt
14:02 kaptk2 joined #salt
14:04 smoof tempspace: thanks for the link to that. I've been thinking that it might be a feature I'd like to have...
14:04 smoof :)
14:06 naemon joined #salt
14:06 jbunting joined #salt
14:11 teskew joined #salt
14:12 mgw joined #salt
14:13 bemehow joined #salt
14:14 sifusam joined #salt
14:14 felixhummel hi! can I print pillar information for a minion that does not exist yet? i want to check pillar before even creating the minion...
14:18 Linz joined #salt
14:21 StDiluted joined #salt
14:22 StDiluted morning
14:24 smoof StDiluted: Morning to you
14:24 m_george left #salt
14:25 scalability-junk StDiluted: morning finished my first state and example pillar \o/
14:25 StDiluted scalability-junk, awesome!
14:25 scalability-junk StDiluted: if you ever want to deploy a project with git, submodules and git annex subodules for binary data: https://gist.github.com/stp-ip/6039080
14:26 scalability-junk with even syncing uploaded content ;)
14:26 StDiluted nice, that's actually something we might need
14:27 jbunting joined #salt
14:28 scalability-junk StDiluted: feedback is always welcome. I'll test it tonight or tomorrow with my projects and if everything goes smooth I'll open up a clean git repo.
14:28 StDiluted nice
14:29 scalability-junk not sure if I should use the www-data user instead of a special one...
14:29 scalability-junk or if I should just add the new user to www-data group...
14:30 bieberhole69 I have a virtualenv that requires a whole bunch of system packages. Is there a way to group packages together so I can tell my virtualenv declaration to require a group of packages instead of listing all of them?
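
One answer to bieberhole69's question: a single pkg state can install a whole list via `pkgs`, and other states can then require that one state id instead of every package (package names and paths are illustrative):

```yaml
virtualenv-deps:
  pkg.installed:
    - pkgs:
      - gcc
      - python-devel
      - libxml2-devel

/srv/venvs/app:
  virtualenv.managed:
    - require:
      - pkg: virtualenv-deps
```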
14:31 napperjabber joined #salt
14:33 bemehow joined #salt
14:34 cnelsonsic joined #salt
14:40 wilkystyle joined #salt
14:41 small_spigot joined #salt
14:41 philipforget joined #salt
14:43 avienu joined #salt
14:46 whit joined #salt
14:48 whiskybar joined #salt
14:49 Furao joined #salt
14:50 smoof left #salt
14:52 mikedawson joined #salt
14:57 kho joined #salt
14:58 jalbretsen joined #salt
15:01 squid joined #salt
15:02 StDiluted how do I use a salt-contrib state? Do I put it in a _states dir?
15:02 StDiluted as with grains?
15:02 emilisto scalability-junk: nice!
15:03 wilkystyle left #salt
15:07 LucasCozy joined #salt
15:12 lineman60 joined #salt
15:15 chrisgilmerproj joined #salt
15:16 diegows joined #salt
15:18 jimallman joined #salt
15:20 hazzadous What are people's methods of maintaining iptables?  I was planning on using iptables-persistent with a config that gets rendered from a list of rules within the file.managed context that would get salt extended when needed.  Bit hacky, is there a cleaner way to do this?
15:22 StDiluted I believe that the iptables states are in progress right n ow
15:22 conan_the_destro joined #salt
15:23 brutasse hazzadous: I have this, it's a bit limited but OK for my simple needs https://github.com/brutasse/states/tree/master/iptables
15:24 dthom91 joined #salt
15:27 Gifflen joined #salt
15:28 hazzadous StDiluted: yeah looking through the github issues thats the view I came to.  thanks @brutasse will have a look
15:28 cron0 joined #salt
15:33 bensix2 joined #salt
15:36 terminalmage joined #salt
15:38 tqrst left #salt
15:39 jschadlick joined #salt
15:44 jschadlick left #salt
15:44 jschadlick joined #salt
15:47 felixhummel @scalability-junk why do you use {{ salt['pillar.get']('git:url') }} instead of {{ pillar['git']['url'] }} ?
15:51 andrew_seattle joined #salt
15:51 timl0101 joined #salt
16:00 jkleckner joined #salt
16:04 scalability-junk felixhummel: it's shorter on deeply nested configs
16:04 scalability-junk and enables a default value
16:04 felixhummel i see
16:04 scalability-junk and I didn't want to switch between different methods within one state
16:04 felixhummel good point
16:05 scalability-junk it's a bit long sometime, but it's easier to see if everything behaves similar i thought
16:05 scalability-junk anything else I should consider?
16:12 juicer2 Hi, is it possible to have a minion send a notification to master when an event occurs (i.e., a file is created or changed on the minion)? And then have the master take some action based on that. Or is this outside the realm of what salt can do?
16:12 Newt[cz] joined #salt
16:12 pgsnake joined #salt
16:13 scalability-junk juicer2: the thing is why should the master do something?
16:14 nmistry joined #salt
16:14 juicer2 @scalability-junk: need to kick off a seperate notification system from the master, which is inhouse
16:14 scalability-junk juicer2: but why go through the master?
16:14 scalability-junk notify form the minion.
16:14 scalability-junk *from
16:15 juicer2 @scalability-junk: I hear you on that. So if we substract the master from the question, is that possible with salt?
16:15 scalability-junk sure
16:16 juicer2 @scalability-junk: I think the catch is it will only happen on a highstate or when a state is executed from the master though
16:16 scalability-junk you could have a state, which watches on file changes for example and then trigger a cmd for example
16:16 scalability-junk you can watch on filechanges.
16:16 juicer2 @scalability-junk: yeah but I'd have to keep executing that state from the master. So maybe salt is not the right vehicle.
16:16 jdenning joined #salt
16:16 pgsnake Hi. Can anyone give me a little guidance on using git as an ext_pillar? I want to move my entire pillar into git, and have created an appropriate repo containing top.sls and the rest of the files in the same structure as I had on the filesystem. I've added "  - git: master file:///srv/salt-pillar.git" under ext_pillar in the master config file, but it doesn't seem to work, having removed the old file based pillar data. I'm a littl
16:17 scalability-junk juicer2: why wouldn't you keep executing the state?
16:17 scalability-junk you define your state and then it can run once, 1000 times or more it doesn't matter, it's always the same outcome
16:18 juicer2 @scalability-junk: need it to be more real time. Not 'whenever salt runs'. thanks though.
16:18 scalability-junk if you really need notifications from the system itself, which doesn't really trigger anything about configs, use a daemon or a cronjob...
16:19 scalability-junk juicer2: 2 different things I would say
16:20 bmorriso1 joined #salt
16:21 dthom91 joined #salt
16:21 bmorriso1 In Salt-Cloud, with instance-sizes like this https://gist.github.com/esacteksab/6040448 How exactly do I tell a map/profile to use an m3.xlarge?
16:21 mikedawson joined #salt
16:22 StDiluted juicer2, salt has a reactor system for what you are describing
16:22 jacksontj joined #salt
16:25 juicer2 @StDiluted: hmm, maybe. I'll read up on. thanks.
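
A rough sketch of the reactor approach StDiluted mentions, assuming the minion fires a custom event and the master maps the tag to a reaction (the tag, data, and paths are all made up):

```yaml
# On the minion (e.g. from a cron job or inotify wrapper):
#   salt-call event.fire_master '{"path": "/etc/foo"}' 'myco/file/changed'
# In /etc/salt/master:
reactor:
  - 'myco/file/changed':
    - /srv/reactor/notify.sls
```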
16:30 KyleG joined #salt
16:31 dthom91 joined #salt
16:33 Gordonz joined #salt
16:33 sifusam joined #salt
16:34 kstaken joined #salt
16:37 teepark joined #salt
16:37 UtahDave joined #salt
16:38 teepark left #salt
16:42 Kzim hello, do we have a good doc or example for the Prereq. new feature of .16 ?
16:42 teepark joined #salt
16:42 teepark left #salt
16:43 aat joined #salt
16:43 teepark joined #salt
16:43 teepark left #salt
16:43 teepark joined #salt
16:44 linjan joined #salt
16:49 nmistry joined #salt
16:52 pentabular joined #salt
16:54 talso joined #salt
16:54 mgw joined #salt
16:55 eightyeight trying to get my head around pillars
16:55 fee joined #salt
16:55 bmorriso1 any salt-cloud experts around???
16:56 eightyeight if i do some logic in my pillars/packages.sls, based on OS, and i put a package name in say the "Ubuntu" version, and reference it later, will it throw an error for CentOS systemS?
16:56 fee left #salt
16:57 mgw joined #salt
16:58 Lue_4911 joined #salt
16:59 UtahDave eightyeight: it won't if you use the salt['pillar.get']('thepackage', 'default_value')    way of accessing that pillar in your sls file
16:59 UtahDave bmorriso1: what's going on?
17:00 eightyeight UtahDave: ok. i'm just looking for a way to do platform agnostic package names for Ubuntu and CentOS
17:00 logix812 joined #salt
17:00 eightyeight UtahDave: and things like 'python-software-properties' don't exist with CentOS, so i need to make sure they only get installed onto Ubuntu
17:01 UtahDave eightyeight: correct.
17:01 UtahDave putting those packages names in pillar is a good way to do it.
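
The pillar approach UtahDave endorses, sketched: map distro-specific names once, keyed off `os_family`, and render the install only when the key exists (key names illustrative):

```yaml
# Pillar:
{% if grains['os_family'] == 'Debian' %}
pkgs:
  python_props: python-software-properties
{% endif %}

# State:
{% if salt['pillar.get']('pkgs:python_props') %}
{{ salt['pillar.get']('pkgs:python_props') }}:
  pkg.installed: []
{% endif %}
```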
17:01 bmorriso1 UtahDave: https://github.com/saltstack/salt-cloud/issues/658#issuecomment-21262267 and https://github.com/saltstack/salt-cloud/issues/658#issuecomment-21262586 -- probably a formatting issue, maybe a yaml parsing issue? but I can't create more than one server at a time, and I'd love some guidance/direction on troubleshooting this
17:01 teepark I have a returners question -- http://docs.saltstack.com/ref/returners/ says that returners are called "in place of returning the data to the salt master"
17:03 anteaya joined #salt
17:03 teepark does that mean I'll lose my cli feedback on the master from the salt invocation?
17:03 teepark we can also specify multiple returners so I'd love to use some of these returners, but only if there is one I can include in the list that will give me my regular results output
17:03 teepark but it doesn't look like there's one like that .. what's the story overall?
17:04 UtahDave bmorriso1: have you tried indenting everything below the top level declaration? the second line is on the same level as the top in your examples
17:04 UtahDave teepark: No, it will still return all the data to your cli on the master
17:05 teepark UtahDave: oh that's good news, thanks :)
17:05 teepark second paragraph on the returners docs is a little misleading then (?)
17:07 kermit joined #salt
17:07 UtahDave yeah, I think you're right, teepark.  Would you mind rephrasing that to something that you think is clearer?
17:09 teepark UtahDave: sure
17:09 UtahDave thanks!
17:13 scalability-junk UtahDave: may I ask how you use them on the master then and which repos you setup per project?
17:14 scalability-junk and what did you mean by git pillar?
17:14 scalability-junk gitfs?
17:15 sifusam joined #salt
17:16 bmorriso1 UtahDave: https://github.com/saltstack/salt-cloud/issues/658#issuecomment-21263473 << works...
17:18 UtahDave bmorriso1: ah, good
17:18 bmorriso1 Thank you!
17:20 bluemoon joined #salt
17:20 jbunting joined #salt
17:21 bmorriso1 UtahDave: any idea how to do something like this? https://gist.github.com/esacteksab/6040852 short of generating/rendering the template?
17:22 bmorriso1 It errors out like this https://gist.github.com/esacteksab/6040852#file-gistfile2-txt
17:23 StDiluted Whats the best practice way to set up a package that comes in a tarball?
17:24 whit joined #salt
17:24 StDiluted Should I use the archive module, or the salt-contrib archive state?
17:24 StDiluted and if the module, how does that get used, generally?
17:29 eightyeight why would i be missing grain information on a centos machine?
17:30 eightyeight "grains.os_family" is not available.
17:30 eightyeight as an example
17:32 pentabular joined #salt
17:33 StDiluted what does grains.items get you
17:33 StDiluted have you run a saltutil.sync_grains on that?
17:34 napperjabber joined #salt
17:36 Ryan_Lane joined #salt
17:36 Ryan_Lane joined #salt
17:36 rbstewart joined #salt
17:37 eightyeight interesting
17:38 eightyeight grains.items returns everything
17:38 eightyeight but, tryingto get to a specifc grain says it's not available
17:38 eightyeight and, running saltutil.sync_grains give a traceback error
17:42 blee_ anyone in here using salt with archlinux?
17:42 blee_ or the maintaner for the aur package/
17:45 ware joined #salt
17:46 ware test
17:47 LucasCozy joined #salt
17:47 druonysus joined #salt
17:48 druonysus joined #salt
17:49 dzen test
17:49 kevino joined #salt
17:50 UtahDave bmorriso1: Yeah, I don't think templating of map files has been implemented.
17:51 bmorriso1 So I'll have to generate them.  Fair enough.
17:52 ware left #salt
17:52 UtahDave blee_: cedwards is the arch package maintainer
17:53 UtahDave bmorriso1: You might check the salt-cloud issue tracker and see if there's an open issue/request for that. If not, please create one. That would be a nice feature
17:53 bmorriso1 will do!
17:53 blee_ im opening an issue on github, that makes more sense than bothering him/her in IRC
17:53 UtahDave thanks!
17:53 blee_ although i love bothering people
17:53 UtahDave :)
17:53 cedwards blee_: what's up
17:54 UtahDave blee_: are you using salt-bootstrap?  If so, it's currently broken for arch unless you tell it to install Salt from the git repo
17:54 Kzim UtahDave, hey how are you ? do you know if there is a nice doc for the new prereq. stuff ?
17:54 sifusam joined #salt
17:55 UtahDave Kzim: Hm. Let me check.
17:55 UtahDave Kzim: ok, first here: http://docs.saltstack.com/ref/states/requisites.html?highlight=prereq#prereq
17:57 UtahDave Kzim: also, you might be able to glean some ideas from the original issue requesting the prereq.  https://github.com/saltstack/salt/issues/5636
17:58 UtahDave We do need to improve the docs on the prereq, so as you go through it, if you find anything that needs clarification, we'd love any improvements to the docs
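
The canonical prereq shape from the docs linked above, roughly: stop a service only if the state it prereqs is about to make changes (prereq dry-runs the named state first). Paths and names are illustrative:

```yaml
graceful-down:
  cmd.run:
    - name: service apache2 graceful-stop
    - prereq:
      - file: site-code

site-code:
  file.recurse:
    - name: /var/www/site
    - source: salt://site/code
```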
17:58 Kzim UtahDave, oh great thanks. no news about https://github.com/saltstack/salt/issues/5424 ? i think prereq will not work across multiple minions :(
17:59 blee_ cedwards, i guess this might not pertain specifically to the packaging, but arch instructs users to store the hostname in /etc/hostname instead of /etc/hosts.  This causes issues where salt's default fqdn grain is pulling the default localhost stuff outa /etc/hosts (i think), even though hostname was set in /etc/hostname
17:59 wilkystyle joined #salt
17:59 Kzim UtahDave, i will but you know i still read a lot on Salt but i still can't really use it :( so i would not be very helpful but one day i'll find how to use it nicely and i will help as much as i can :)
18:00 cbloss Does anyone happen to have a good example of a state/pillar for iptables? I keep getting an error with the one i'm using
18:00 UtahDave Kzim: It's still on the TODO list, but hasn't been worked on yet, unfortunately.
18:00 blee_ I assume the default grain parses that file, because using the command 'hostname' returns the appropriate result (stored in /etc/hostname), but the fqdn grain has whatever is in /etc/hosts
18:02 kenbolton joined #salt
18:03 blee_ cedwards, It is also an issue when generating salt keys, which seem to get the hostname the same way
18:03 auser joined #salt
18:04 balltongu_ joined #salt
18:06 brianhicks joined #salt
18:07 carmony joined #salt
18:07 auser heyall
18:11 jMyles joined #salt
18:11 jMyles the "is unavailable" messages are bewildering
18:11 jMyles salt -v * postgres.db_create dbname works
18:12 jMyles but the same thing in a sls file doesn't work
18:12 kenbolton joined #salt
18:15 kenbolton joined #salt
18:15 cedwards blee_: doesn't sound specifically related to packaging to me.
18:16 cedwards blee_: submit something on the bug tracker and i'm sure it'll get straightened out
18:16 Linz_ joined #salt
18:16 giantlock joined #salt
18:19 Yulli joined #salt
18:19 Yulli Is there any comprehensive list of available properties to be specified in a profile?
18:20 Yulli For example, there's no documentation on how to enable backups for DigitalOcean minions.
18:21 UtahDave cbloss: here's a simple iptables example: https://gist.github.com/UtahDave/5217462
18:21 Linz joined #salt
18:22 UtahDave blee_: Yeah, could you open an issue on that /etc/hosts issue? We can get that fixed before the next bugfix release
18:22 carmony joined #salt
18:22 kenbolton joined #salt
18:23 UtahDave jMyles: It's because  postgres.db_create is part of the execution module.  In your sls file you'll need to use the postgres state.
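In other words, `postgres.db_create` is an execution-module function for the command line; in an sls file the equivalent is the postgres_database state. A minimal sketch (the database name is illustrative):

```yaml
dbname:
  postgres_database.present:
    - name: dbname
```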
18:24 druonysus joined #salt
18:24 druonysus joined #salt
18:25 jbunting joined #salt
18:26 UtahDave Yulli: The docs are admittedly sparse in that area. Try here: http://salt-cloud.readthedocs.org/en/latest/topics/config.html#digital-ocean
18:26 UtahDave and here: http://salt-cloud.readthedocs.org/en/latest/topics/action.html
18:26 blee_ UtahDave, will do, i realized shortly after that this was not really a packaging issue
18:26 UtahDave Yulli: but I don't think that salt-cloud has support yet for enabling backups.
18:26 blee_ thanks cedwards
18:26 UtahDave blee_: cool, thanks
18:26 twiedenbein The best examples of runners can be found in the Salt source:
18:26 twiedenbein https://github.com/thatch45/salt/blob/master/salt/runners
18:26 twiedenbein I like this line in the docs ;)
18:26 twiedenbein there's exactly one runner in that directory ;)
18:27 Yulli Mm... alright. Thanks, UtahDave.
18:27 UtahDave twiedenbein: where did you find that link?  That's really really old.
18:28 UtahDave that's a 2-year-old fork of salt.
18:29 Yulli left #salt
18:31 carmony joined #salt
18:32 shane UtahDave: btw, I've been ruminating on the Windows pkg manager thing.  I think it's a good idea, particularly being able to manipulate the start menu/quickstart in general (even for applications that do have installers).
18:33 shane I do have a little concern about making it a bit hairy...
18:33 saurabhs joined #salt
18:34 UtahDave cool.  Yeah, we'll have to think it through well.
18:35 carmony joined #salt
18:35 shane UtahDave: I have a friend that works on test builds, etc.  He's been doing a lot of stuff with Sikuli (http://www.sikuli.org) when trying to automate things.  I found it terribly unreliable.
18:36 UtahDave that's interesting. I'm quite surprised they'd use image recognition.
18:36 shane I had looked at something like https://code.google.com/p/pywinauto/ as well for something I was trying to do, but ended up deciding against it.
18:37 Yulli joined #salt
18:37 Yulli left #salt
18:38 UtahDave interesting.  There was a similar framework that I used lightly a couple years ago that I really liked, but the name escapes me at the moment.
18:38 scalability-junk can someone explain git pillar to me? is it using pillars from git on the minion instead of salt:// or is it like gitfs on the master?
18:38 kenbolton joined #salt
18:38 shane UtahDave: well, when I say terribly unreliable, I mean that it would work for me some 98%+ of the time.  But that was actually worse than it failing more often.  It worked just enough that it tricked you into thinking it works consistently.
18:39 jbunting joined #salt
18:40 UtahDave shane: yeah, that would be frustrating.
18:41 nineteeneightd joined #salt
18:44 shane BTW, is it possible to use pillar data in the winrepo stuff?
18:45 UtahDave shane: Well, you can use pillar data in your regular sls files that install Windows software, but I haven't added the ability to templatize the winrepo specific stuff.
18:45 UtahDave shane: that's a really good idea that I hadn't considered.
18:46 StDiluted What's best practice for installing a package that comes in a tarball? Should I use the archive module, or the contrib archive state, or...
18:46 UtahDave would you mind opening an issue requesting that?  Just on the main Salt issue tracker
18:46 Yulli joined #salt
18:46 scalability-junk ok, so if i understand that right, gitfs is the "git pillar" thingy
18:46 sifusam joined #salt
18:46 UtahDave StDiluted: I've usually used a file.managed to get the tarball downloaded,
18:46 StDiluted yeah that part is done
18:46 UtahDave StDiluted: then I just use  cmd.script()  and use a script that does all the steps you need.
18:47 StDiluted ah ok
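Put together, the approach UtahDave describes might look like this sketch (paths, file names, and the unless check are all illustrative assumptions):

```yaml
# fetch the tarball from the fileserver
/tmp/myapp.tar.gz:
  file.managed:
    - source: salt://myapp/myapp.tar.gz

# run an install script that unpacks and installs it
install-myapp:
  cmd.script:
    - source: salt://myapp/install.sh
    - unless: test -d /opt/myapp   # illustrative guard so the script is idempotent
    - require:
      - file: /tmp/myapp.tar.gz
```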
18:47 scalability-junk with gitfs I could for each project symlink the config to project/branch/ for example
18:47 Yulli Is there any reason that a minion won't contact the master after being deployed with salt-cloud? I'm seeing the correct master in /etc/salt/minion.
18:47 scalability-junk sounds not too bad
18:47 shane UtahDave: will do
18:47 StDiluted Yulli, firewall/security groups?
18:47 UtahDave StDiluted: we've discussed creating a state that does that because it's a pretty common thing to do, but we haven't had a chance to implement it.
18:47 Yulli StDiluted: Got those open.
18:47 StDiluted UtahDave, there's a state in salt-contrib from 8 or so months ago that does it
18:48 StDiluted Yulli, any indication in /var/log/minion on the minion of the problem?
18:49 Yulli StDiluted: No log exists yet, apparently.
18:49 UtahDave Yulli: are the correct ports open on the master?
18:49 StDiluted salt-minion running?
18:49 Yulli Ah, you know what, it actually was a firewall issue. I thought my script added Salt exceptions, but it hadn't.
18:49 Yulli UtahDave: StDiluted: Thanks for your help.
18:50 UtahDave you're welcome.
18:50 mannyt joined #salt
18:50 StDiluted np!
18:51 scalability-junk mhhh gitfs does seem to be interesting, but not really what I imagined :(
18:53 Yulli I'm seeing a new issue. I've rebooted and this error still shows up: This master address: 'salt.zxzx.com' was previously resolvable but now fails to resolve! The previously resolved ip addr will continue to be used
18:54 StDiluted dns failure?
18:54 Yulli Still not seeing the minion's key in the master.
18:54 Yulli StDiluted: Worked with a non-salt-cloud minion.
18:54 dzen Yulli: you're renaming a minion ?
18:54 dzen or changing the master ?
18:54 Yulli dzen: Er, no. This is a minion that was just set up with salt-cloud.
18:55 dzen I have no experience with salt-cloud, sorry
18:55 saurabhs joined #salt
18:55 Yulli I'd assume it's just the same as setting up a new minion. All it does is provision the server as well.
18:56 Thiggy joined #salt
18:56 dzen try removing the minion pki ?
18:56 scalability-junk Yulli: firewall rules?
18:57 Yulli scalability-junk: Noooooooooo. I made double-sure I added the exceptions.
18:57 dthom91 joined #salt
18:57 scalability-junk Yulli: ok local dns server, which isn't available on the cloud?
18:57 Yulli scalability-junk: Sorry, what do you mean?
18:57 scalability-junk you said it worked with another minion (I assumed a local one)
18:58 scalability-junk the dns name can't be resolved so perhaps you only have the entry locally and not fully accessible by the cloud minion
18:58 Yulli scalability-junk: No. I had two nodes, a master and minion, that I provisioned manually, and that setup worked fine. Now I have a minion provisioned through salt-cloud and the minion fails to contact the master.
18:58 shane UtahDave: So I'm thinking of something like this - https://gist.github.com/zaad/83594bef8fdea1efac8c.  Which may not be a great example, but I can imagine using something like to push in values like hostnames, passwords, etc.
18:59 Thiggy If I call `sudo salt-call state.highstate` twice on a minion, it runs two simultaneous highstates, causing all manner of bad things. Is this expected behavior? Do I need to be more careful in my scheduling?
19:00 dzen wait for one to have finished ?
19:01 Thiggy It was happening because of some scheduled jobs. I was under the impression that salt would prevent simultaneous highstate runs, but I just managed to see it doing The Bad Thing in action
19:02 scalability-junk Yulli: then no idea sorry
19:02 scalability-junk about scheduling, what is the default time between runs? where could I set it?
19:03 Yulli scalability-junk: Ah well. Thanks for your help. I'll do some more troubleshooting myself and report back here if I find a solution.
19:03 scalability-junk alright sorry I couldn't be more of a help
19:05 Thiggy So is salt supposed to prevent multiple simultaneous highstate runs on a minion, and this is a bug, or is that my responsibility and I need to fix this?
19:07 UtahDave Yulli: are you running salt-cloud on the salt master?
19:07 wilkystyle left #salt
19:07 teepark left #salt
19:08 scalability-junk gitfs related is there any example on how to configure different project pillars with environement branches for gitfs perhaps even separate sshkey data from "public" config data?
19:08 Yulli UtahDave: No.
19:08 StDiluted Thiggy, no, it is meant to run repeatedly to maintain a state
19:08 Yulli salt-cloud is running on my personal computer. The master is on its own droplet already.
19:09 UtahDave Thiggy: Salt should only allow one highstate to be run at a time.  If it is, then please open a bug on that.
19:09 Thiggy @StDiluted - I understand that, but it's running two at a time which is wrecking the state.
19:10 StDiluted ahhh
19:10 StDiluted that is a bug
19:10 Thiggy @UtahDave Thanks! Will do. That's just via github issues, right?
19:10 jMyles UtahDave: I don't understand - postgres state?
19:10 UtahDave Thiggy: yep!  Thanks
19:10 UtahDave Yulli: If you don't run salt-cloud on the salt-master, then salt-cloud can't pre-authenticate the minion into your master.
19:10 Thiggy Is there anything I can do to make the bug report better? I have a screenshot of the dupe processes running, and can post the state files.
19:10 UtahDave Yulli: Also, did you set master's ip or hostname in your saltcloud config
19:11 Yulli UtahDave: The hostname. A fresh, hand-installed minion is also "failing to resolve"
19:11 Yulli The hostname. Failing to resolve the hostname.
19:11 juicer2 StDiluted: I read up on the salt reactor; looks like it could do what I need, but I can't find any examples via google. I added reactor: in the master file on my salt master, and now I want to fire an event on a minion manually to test.
19:11 UtahDave Thiggy: those would be helpful.  It's especially helpful if you can show how to reproduce it.  What versions of Salt and your os, as well
19:11 Thiggy ok, I'll put all that in there too. I'll see if I can rebuild something that reliably reproduces it. It's been kind of hit or miss
19:12 StDiluted juicer2: I haven't done any reactor setup yet, but plan to in the future. UtahDave can answer reactor questions, I think
19:12 UtahDave shane: I like that example.
19:12 Yulli UtahDave: You think I should specify the IP address instead of hostname?
19:12 juicer2 StDiluted: k thx
19:12 UtahDave Yulli: That would help if your dns isn't working correctly.
19:13 UtahDave juicer2: have you read through the reactor docs yet?
19:13 juicer2 I have, found http://docs.saltstack.com/ref/modules/all/salt.modules.event.html    and  http://docs.saltstack.com/topics/reactor/index.html
19:14 juicer2 I've set up the following in my salt master file,
19:14 juicer2 reactor:
19:14 juicer2 - virginia:
19:14 juicer2 - reactor/notify.sls
19:14 Yulli Ah... you know what, I'm an idiot. I added the wrong record for the domain, so salt. never existed.
19:14 Yulli I'll excuse myself now.. thank you for your help, UtahDave
19:14 Yulli left #salt
19:15 jslatts joined #salt
19:15 StDiluted lol, he didn't have to leave! ;)
19:15 juicer2 UtahDave: and now I want to kick off an event manually on the minion named virginia ... all the examples show salt '*' event.fire_master 'stuff to be in the event' 'tag'. first of all, my minions don't have salt, only salt-call and salt-minion; maybe those are the newer cmds available on minions? Maybe I'm behind the times on that.
19:17 juicer2 UtahDave: I need to kick off an event manually on a minion (automated would be nice, but have no idea how to monitor things on a minion besides calling the state all the time from master) from a shell script, and then have the master run a cmd locally (on the master)
19:18 UtahDave juicer2: so on the minion do this:   salt-call event.fire_master 'tagname' 'This is the data I want to send'
19:19 whit joined #salt
19:19 juicer2 UtahDave: that runs,
19:19 juicer2 [INFO    ] Configuration file path: /etc/salt/minion
19:19 juicer2 local:
19:19 juicer2 True
19:20 jschadlick joined #salt
19:20 juicer2 UtahDave: salt-call event.fire_master 'virginia' 'This is the data I want to send'
19:20 juicer2 UtahDave: is that syntax right for the config I have in my master? I thought it was: matching on the tag virginia, use the sls in reactor/notify.sls
19:21 UtahDave I think you might need virginia in single quotes:      'virginia':
19:21 juicer2 UtahDave: reactor/notify.sls is in /etc/salt/reactor/notify.sls
19:22 juicer2 UtahDave: changed that, still nothing.  /etc/salt/reactor/notify.sls contains :
19:22 juicer2 echo notification > /var/log/message:
19:22 juicer2 cmd.run
19:22 juicer2 ah, I see a typo, I'll fix that
19:22 UtahDave cmd.cmd.run
19:23 juicer2 UtahDave: it should be cmd.cmd.run ?
19:24 UtahDave yep, because you have to tell it what type of command it's going to be
19:24 UtahDave http://docs.saltstack.com/topics/reactor/index.html   search for clean_tmp for an example
19:24 juicer2 UtahDave: changed that, notify.sls now has :
19:24 twiedenbein UtahDave: ah, wow
19:24 juicer2 echo notification > /var/log/messages:
19:24 juicer2 cmd.cmd.run
19:25 twiedenbein I googled salt runner and the 0.9.1 docs were the first thing that came up ;)
19:25 juicer2 UtahDave: that notify.sls should echo the text to /var/log/messages on the master right?
19:25 UtahDave :)  Yeah, when I saw that link, twiedenbein, I was really surprised.
19:26 UtahDave juicer2: it will run on whatever minion you target.  If you're running a minion on the master, then you can target that.
19:27 juicer2 UtahDave: That's one issue, I don't have a minion on the master. I can fix that.
19:27 dthom91 joined #salt
19:30 shane UtahDave: https://github.com/saltstack/salt/issues/6240
19:31 aranhoide joined #salt
19:32 juicer2 UtahDave: installed minion code on my master, connected and tested
19:32 UtahDave shane: thanks!  I love it.
19:33 juicer2 UtahDave: adjusted that notify.sls,  http://pastebin.com/TzL4WxYS
19:33 juicer2 UtahDave: rt* is the salt master
19:33 juicer2 UtahDave: and is now a minion also
19:34 UtahDave juicer2: did it work?
19:34 juicer2 UtahDave: no
19:34 kuffs cmd.cmd.run doesn't seem right
19:34 kuffs think you've got an extra 'cmd.' in there
19:35 UtahDave ok, juicer2, could you pastebin your reactor config stanza from your master config, your reactor file and any output you see?
19:35 juicer2 UtahDave: master reactor stanza: http://pastebin.com/ejWVy8jG
19:36 juicer2 UtahDave: reactor file (/etc/salt/reactor/notify.sls  http://pastebin.com/TzL4WxYS
19:38 juicer2 UtahDave: output from minion I'm trying to trigger the notification from,  http://pastebin.com/7pqKgVwG
19:39 jdenning juicer2: I think you need a complete path to the sls file in the reactor stanza in your master config
19:40 jdenning (/etc/salt/reactor/notify.sls instead of reactor/notify.sls  on line 3)
19:40 UtahDave juicer2: yep, I was about to suggest what jdenning said.  Or you could move the reactor sls to the default location of /srv/reactor/  at least for testing purposes.
19:40 juicer2 UtahDave: thx, that seems to have done the trick. working now.
19:40 juicer2 jdenning: thx
19:41 jdenning juicer2: no prob :)
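Assembled, the working setup from this exchange looks roughly like this (the tag, paths, and target come from the conversation; the data rendering is Jinja, so treat this as a sketch):

```yaml
# /etc/salt/master (reactor stanza)
reactor:
  - 'virginia':
    - /srv/reactor/notify.sls

# --- /srv/reactor/notify.sls ---
# cmd.cmd.run tells the reactor this is an execution-module call;
# it runs on whatever minion is targeted, so the master needs its
# own minion running for the echo to land on the master's disk
notify:
  cmd.cmd.run:
    - tgt: 'rt*'
    - arg:
      - echo "{{ data['data'] }}" >> /var/log/messages
```

The minion side then fires the event with: salt-call event.fire_master 'virginia' 'This is the data I want to send'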
19:41 juicer2 UtahDave: Now is it possible for salt minion to monitor things (load, files, disk space) automatically, in real time? Or am I understanding that wrong?
19:41 zooz joined #salt
19:43 jdenning juicer2: "real time" is probably stretching it a bit..there is *some* overhead/latency..you will have to do a fair bit of extra work if you want low-latency monitoring/alerting (especially if you want it to scale)
19:44 UtahDave juicer2: the bones for monitoring have been put in Salt, but it's not quite finished for actual monitoring.
19:44 juicer2 jdenning: Mainly wondering if it's state driven (from the master)? Or can a minion monitor things on its own?
19:44 juicer2 UtahDave: I see. Thanks for your help on the reactor.
19:44 UtahDave You can set a command to run in the scheduler or a cron job and have that data logged, for example
19:45 juicer2 UtahDave: I see
19:45 jdenning juicer2: Yeah, that's what I do (what UtahDave just suggested)
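The scheduler approach UtahDave mentions can be sketched in the minion config like this (job name, interval, and command are illustrative assumptions):

```yaml
# /etc/salt/minion
schedule:
  log_disk_usage:
    function: cmd.run
    minutes: 5
    args:
      - 'df -h >> /var/log/disk_usage.log'
```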
19:45 StDiluted someone here or in lops, i forget where, was writing some check_mk stuff to use salt's message bus to pass alerts/etc
19:45 StDiluted lopsa*
19:45 napperjabber joined #salt
19:46 bmorriso1 left #salt
19:46 UtahDave StDiluted: Yeah, I do seem to remember something about that, too
19:47 tseNkiN joined #salt
19:47 pentabular left #salt
19:51 jacksontj joined #salt
19:53 _vimalloc Is there a way to recursively exclude every sls in an include statement? ex: If I exclude: - sls: test, and test.sls includes test2.sls
19:54 jMyles UtahDave: How can I use the postgres module in a sls file?  The syntax is a bit different from the command line?  Is there a doc on this?
19:55 tqrst joined #salt
19:57 UtahDave jMyles: you need to use the postgres state in an sls file.
19:57 juicer2 UtahDave: what variable is the actual data from the event exposed in ... data[type]?
19:58 UtahDave jMyles: http://docs.saltstack.com/ref/states/all/salt.states.postgres_database.html#module-salt.states.postgres_database
19:58 dthom91 joined #salt
19:58 UtahDave juicer2: yes, the "data" variable.  the tag is in the "tag" variable
19:58 juicer2 UtahDave: data['data']  then ?
19:59 UtahDave _vimalloc: what have you tried so far?
19:59 UtahDave juicer2: data contains it directly, I believe
19:59 jMyles UtahDave: I see.
20:00 small_spigot1 joined #salt
20:02 tqrst does salt-minion expect cachedir to persist across reboots? I'd just stuff it into tmpfs otherwise.
20:04 ipmb joined #salt
20:06 deanvanevery joined #salt
20:07 juicer2 UtahDave: In my cmd.cmd.run context, would that be {{ data }} or {{%data[ ...   etc. Sorry for my ignorance of how that variable shows up in the cmd.run context
20:07 UtahDave tqrst: Yeah, I think it does.
20:08 UtahDave juicer2: yeah, I think it's  {{ data }}  if you want to stick it in another sls file.
20:09 juicer2 UtahDave: well, was trying to do something with the data within the cmd.cmd.run arg:    http://pastebin.com/Paqfwz8t
20:10 aranhoide joined #salt
20:12 aranhoide joined #salt
20:12 _vimalloc UtahDave: I've tried manually adding all the extra includes in my -exclude statement, and that works great, but it would be handy to be able to exclude test1 and have an option or something to also recursively exclude all the include statements in test1
20:13 _vimalloc But that is about as far as I have gotten
20:16 scalability-junk let's try again :D
20:16 UtahDave _vimalloc: I'm not sure about the details of that.  You might try asking on the mailing list.
20:16 scalability-junk are there any more detailed docs about gitfs and how environments and projects (different repos) could map to file_roots?
20:17 scalability-junk and perhaps even how to use 2 different repos for pillars per project
20:18 scalability-junk something else: can I use variables from the same pillar in the same pillar?
20:18 _vimalloc Will do, thanks :)
20:18 scalability-junk for example compile a list of 2 lists with jinja?
20:19 jschadlick joined #salt
20:21 diegows joined #salt
20:21 UtahDave juicer2: Ah, I was wrong.  "data" is indeed a python dict.
20:22 UtahDave data['data']  and data['id'] are available. I'm not sure if there is anything else in that dict
20:23 UtahDave scalability-junk: There is a git external pillar you can use.
20:23 napperjabber joined #salt
20:23 UtahDave scalability-junk: https://github.com/saltstack/salt/blob/develop/salt/pillar/git_pillar.py
20:24 UtahDave it's still pretty new.
20:24 juicer2 UtahDave: yeah, tried data['data'], nothing.
20:25 UtahDave juicer2: does it work if you stick a string in there instead of data['data']  ?
20:25 juicer2 UtahDave: a string works. Or did work. I've borked something, not working at all now. aargh.
20:25 scalability-junk UtahDave: mhh, seems like it's hard to map more than a few repos into it then.
20:26 scalability-junk probably will go with super repo with submodules, which get cloned via cronjob for now... we'll see how it goes
20:26 juicer2 UtahDave: I need the data out of the event though
20:26 ipmb joined #salt
20:27 jMyles Now I'm getting AttributeError: 'NoneType' object has no attribute 'split' from user_list in the postgres state
20:27 saurabhs joined #salt
20:27 juicer2 UtahDave: had to restart the master, manual string is back to working now
20:28 dthom91 joined #salt
20:32 juicer2 UtahDave: got it, it is in fact      echo {{ data['data'] }} >> /var/log/messages
20:34 scalability-junk oh wait UtahDave am I right that ext_pillar could be called from within a pillar or state and then used with the minion id and the name of the data file from a special repo, which in turn gets accessed? any example how this would look?
20:34 UtahDave juicer2: ah, good!
20:35 UtahDave scalability-junk: when a highstate is requested by a minion, the master compiles the pillar data for that specific minion. If you have an ext_pillar configured, then that ext_pillar can look in a repo or wherever it wants and add data to the pillar dict that is sent back to the minion
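In master-config terms, wiring up the git external pillar mentioned earlier would look something like this sketch (the branch and repo URL are hypothetical):

```yaml
# /etc/salt/master
ext_pillar:
  - git: master git://example.com/pillar-repo.git
```

The master then calls this ext_pillar during pillar compilation for each minion, with that minion's id and grains available to it.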
20:37 scalability-junk so in which location would I then define which ext_pillar to use for a specific minion? and in that repo should there only be one sls file, or what should the structure be? or does the master look up the data it needs?
20:38 jdenning joined #salt
20:38 bmorriso1 joined #salt
20:38 bmorriso1 UtahDave: ever seen this before? https://gist.github.com/esacteksab/6042170
20:40 qba73 joined #salt
20:42 UtahDave scalability-junk: the master always compiles the pillar. The master collects all the data to send back to the minion.
20:43 UtahDave scalability-junk: within the ext_pillar, the master has access to the minion's id and grains.
20:44 SEJeff_work juicer2, I'd suggest something more like "logger {{ data['data'] }}", which sends it to syslog (/var/log/messages)
20:44 UtahDave bmorriso1: Yeah, I have seen that. I'm not sure what causes that.  Does all the pillar data get displayed anyway?
20:44 shane UtahDave: just occurred to me that it may also be desirable to be able to modify environment path & settings from the package.  Or do you think that's overkill?
20:44 bmorriso1 No
20:44 bmorriso1 It displays nothing actually, just that error from the minion
20:44 bmorriso1 From the master, it displays nothing (but no error either)
20:44 juicer2 SEJeff_work: well, I'm doing other things than just echoing to a file. I have to manipulate it into another text file.
20:44 UtahDave bmorriso1: have you run  salt '*' saltutil.refresh_pillar    ?
20:45 UtahDave shane: no, I don't think that's overkill. That would be nice
20:45 bmorriso1 UtahDave: rather than nothing, now it returns 'None'
20:46 bmorriso1 https://gist.github.com/esacteksab/6042217
20:46 UtahDave bmorriso1: you might run the salt-master in the foreground in debug mode and see if you're getting some errors.
20:49 StDiluted what does the error: Name icinga in sls icinga is not a dictionary generally mean
20:51 StDiluted or in general 'Name <whatever> in els <thingie> is not a dictionary'
20:51 StDiluted sls*
20:52 timl0101 joined #salt
20:55 UtahDave StDiluted: it means your sls file is malformed
20:56 StDiluted ok
20:56 StDiluted that's what I thought. indentation is confusing
20:57 StDiluted can you take a look, I'll paste my sls
20:57 StDiluted https://gist.github.com/dginther/6042280
20:57 StDiluted never mind
20:57 StDiluted i figured it out
20:58 StDiluted no - before service
20:58 dthom91 joined #salt
20:58 StDiluted man, my brain is so fried
20:58 StDiluted making dumb mistakes
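The "Name icinga in sls icinga is not a dictionary" error generally means the YAML under a state id doesn't render as the expected structure, e.g. a missing leading "-" like the one StDiluted found. A well-formed old-style sls for this case looks roughly like this sketch (the require is illustrative):

```yaml
icinga:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: icinga
```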
21:01 bmorriso1 UtahDave: error running in debug mode: https://gist.github.com/esacteksab/6042310
21:05 scalability-junk anyone have some example pillar structure lying around?
21:06 jschadlick joined #salt
21:08 mr_chris joined #salt
21:09 mr_chris I'm reading up on http://docs.saltstack.com/ref/states/all/salt.states.file.html and am confused about the "context" option. It reads, "Overrides default context variables passed to the template." What is an example of a context variable?
21:09 * scalability-junk is totally confused on the mapping of pillar.get and pillar files on the master
21:10 mr_chris Or a default context variable for that matter.
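To mr_chris's question: context variables are the names a template can reference directly; "defaults" supplies fallback values and "context" overrides them. A sketch (the file names and the greeting variable are illustrative):

```yaml
/etc/motd:
  file.managed:
    - source: salt://motd.tmpl
    - template: jinja
    - defaults:
        greeting: Hello
    - context:
        greeting: Welcome back   # overrides the default above
```

Inside motd.tmpl, a plain {{ greeting }} then renders the context value.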
21:11 scalability-junk are all pillar data merged into one big data file so no new nesting is created? how are conflicts handled?
21:11 SEJeff_work scalability-junk, It all works out to a python dictionary datastructure
21:11 SEJeff_work There are no conflicts, it simply overwrites the keys however a python dictionary would
21:11 Jahkeup_ joined #salt
21:12 SEJeff_work s/keys/values/
21:12 scalability-junk ok so when I say in the top.sls pillar  I want to use dev_data.sls and prod_data.sls for some minions it would overwrite the dev data with prod values?
21:13 scalability-junk so instead of values being prefixed or the keys being prefixed it's just flat merged and overwritten on conflict?
21:13 SEJeff_work scalability-junk, Again, it is yaml, which renders to a python dictionary
21:13 SEJeff_work this is how it works by design
21:13 scalability-junk that's why pillar.get']('flat:key:stuff:works') right?
21:13 SEJeff_work yes
21:13 scalability-junk ok great that just wasn't clear in my head yet. cool
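Concretely, the flat merge plus colon-delimited lookup works like this (keys are illustrative):

```yaml
# pillar data -- everything is merged into one dict per minion
apache:
  config:
    port: 8080
```

In a template, {{ salt['pillar.get']('apache:config:port', 80) }} then returns 8080; the second argument is the default used if the key is missing.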
21:14 scalability-junk SEJeff_work: mind if I ask something else about pillars.
21:14 SEJeff_work scalability-junk, I tend to have a default set of pillar values for everything, and then include additional pillar files per-thing
21:14 SEJeff_work like foo.sls
21:15 SEJeff_work {% if grains["id"] == "foobar" %} one_fish: red {% elif grains["id"] == "blah" %} one_fish: blue_fish{% else %} one_fish: NOOOOOOO{% endif %}
21:15 SEJeff_work but with newlines obviously
21:16 SEJeff_work since it is yaml templatized with jinja2 (by default)
21:16 ingwaem joined #salt
21:16 scalability-junk ok cool
21:16 SEJeff_work scalability-junk, It is somewhat disturbing how flexible it is at times. Very overwhelming at first. You need to find a standard of consistency that works for you and your team/environment.
21:16 SEJeff_work Salt is like a chainsaw though, you can cut through a lot of hard trees^Wproblems with it :)
21:17 UtahDave bmorriso1: can you pastebin your top.sls?
21:17 scalability-junk yeah I will probably go for something like '$project': - $env: ... - prod: ... '$projectx': .... and some core stuff on '*'
21:18 scalability-junk and within each prod I would include private.pillar_file and public.pillar_file so I get the pillars from 2 different repos per project
21:18 scalability-junk and the states are all in one repo without any per project usage
21:18 scalability-junk great
21:18 SEJeff_work :)
21:18 SEJeff_work If that works for you, awesome
21:18 scalability-junk sounds like a sort of plan :D
21:19 bmorriso1 UtahDave: https://gist.github.com/esacteksab/6042423
21:19 jer_ joined #salt
21:19 scalability-junk SEJeff_work: haha what works for you? if I may ask?
21:19 mr_chris Nevermind. I see now. Under - defaults:
21:19 deanvanevery joined #salt
21:19 jer_ question
21:19 jer_ if I clone a machine running salt-minion
21:19 kkartch joined #salt
21:19 jer_ how do I uh..."reset" the minion
21:19 SEJeff_work scalability-junk, So for states, I have (at the top level): default/, datacenters/, roles/, and services/
21:19 SEJeff_work default is stuff applied to every machine in the firm
21:20 SEJeff_work scalability-junk, I've got a config mgmt database internally which maps every datacenter to a cidr address (network) with a datacenter code that matches hostnames
21:20 cbloss I am having an issue with an iptables state. when changing the rules, i get the following error: stderr: iptables-restore v1.4.12: no command specified. If I manually run the command /sbin/iptables-restore < /etc/iptables.rules, I get the same error. I am modifying the file in sublime text 2. If i open the /etc/iptables.rules in vi and save it, the command works without a problem. Anyone have a clue what is going on?
21:20 linjan joined #salt
21:20 SEJeff_work scalability-junk, So I autogenerate datacenters/name/init.sls and in datacenters/init.sls include the proper datacenters.name sls depending on what ip address (and hence network/datacenter) the host is in
21:21 SEJeff_work so if I want to do something to all hosts in a given datacenter ie: move to a different ldap/dns/ntp server, I can change it at the datacenter level with 1 file
21:21 scalability-junk that's interesting yeah
21:21 UtahDave cbloss: vim FTW!!
21:21 UtahDave :)
21:22 SEJeff_work scalability-junk, roles is for more like a project ie: this is a monitoring server, all monitoring servers have these services
21:22 SEJeff_work services is _solely_ for managing daemons
21:22 scalability-junk how do you map the pillars?
21:22 cbloss UtahDave: i know! haha. I just <3 Sublimetext
21:22 SEJeff_work so in a 1 off fashion (should we so choose) we could do something like: salt $minion state.sls services.httpd etc, etc
21:23 SEJeff_work scalability-junk, So in pillars, I have a default.sls which includes some others like pkgs.sls
21:23 SEJeff_work where I map a lot of common package names between debian, fedora, rhel, and ubuntu
21:23 SEJeff_work so I can say {{ salt['pillar.get']("httpd") }}:\n    pkg.installed
21:23 SEJeff_work with the newline and indentation, and it will do the right thing based on the os it is running on
21:24 SEJeff_work scalability-junk, Then I tend to create pillar sls files that map to the state tree roles/*
21:24 hazzadous joined #salt
21:25 scalability-junk just to be sure: in the pkgs.sls you have if grain ... httpd: apache for example
21:25 BRYANT__ joined #salt
21:26 scalability-junk alright sounds reasonable to do that in one package file.
21:26 SEJeff_work scalability-junk, bingo!
21:26 SEJeff_work {% if grains['os_family'] == "Debian" %}httpd: apache2
21:26 scalability-junk do you work with different environments in pillars?
21:26 SEJeff_work No I don't use environments
21:27 jMyles joined #salt
21:27 SEJeff_work I have prod, dev, qa roles, and promote that way
21:27 SEJeff_work I might investigate environments at some point
21:27 scalability-junk ah kk
21:27 SEJeff_work but I've seen a lot of people struggle with them and just haven't had the time/patience to sit down and see how well they are implemented
21:27 SEJeff_work and if they work for me
21:27 scalability-junk I'll probably need a few weeks to work that all into my setup
21:27 scalability-junk kk
21:27 SEJeff_work scalability-junk, I try to keep all of the service states 100% self sustaining
21:28 SEJeff_work And templatize all of the configs
21:28 scalability-junk yeah I'm trying that too
21:28 SEJeff_work use {{ salt['pillar.get']('pkgs:httpd', 'httpd') }} stuff heavily in the templates
21:28 SEJeff_work notice that I have a default, so if pillar isn't available or blows up, all is well
21:28 dthom91 joined #salt
21:29 SEJeff_work scalability-junk, Then you can use pillar to extend things OR extend via salt states
21:29 SEJeff_work scalability-junk, sound sane?
21:29 scalability-junk just to be sure cause you included the pkgs.sls file into the default one you have to use 'pkgs:httpd' instead of just 'httpd'?
21:29 SEJeff_work scalability
21:30 SEJeff_work scalability-junk, Well pkgs.sls is included into default.sls and it is a structure
21:30 SEJeff_work ie:
21:30 SEJeff_work pkgs:
21:30 SEJeff_work {% if grains["os_family"] == "Debian" %}
21:30 SEJeff_work httpd: apache2
21:30 scalability-junk ah within the pkgs file got it, just wanted to make sure
21:30 SEJeff_work {% elif grains["os_family"] == "RedHat" %}
21:31 SEJeff_work httpd: httpd
21:31 SEJeff_work {% endif %}
21:31 SEJeff_work scalability-junk, Does that make sense in concept?
21:31 SEJeff_work you have a pkgs pillar key, which has subkeys
21:31 SEJeff_work Thats how I namespace it
21:31 scalability-junk yeah sounds reasnable
21:31 scalability-junk *reasonable
21:31 SEJeff_work pkgs:httpd installs the apache package for the distro
21:32 SEJeff_work but always set sane defaults
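(Pulling the fragments quoted above together, the pkgs.sls pillar file being described would look roughly like this; note the subkeys need to be indented under the pkgs: key for the YAML to parse. Treat this as a sketch, file path assumed.)

```yaml
# pillar/pkgs.sls -- included from the pillar top/default file
pkgs:
{% if grains['os_family'] == 'Debian' %}
  httpd: apache2
{% elif grains['os_family'] == 'RedHat' %}
  httpd: httpd
{% endif %}
```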
21:32 scalability-junk yeah it makes a lot of sense and clears up a lot of thoughts i had, but mostly improves what I had planned \o/
21:32 SEJeff_work yay :D
21:32 SEJeff_work scalability-junk, I see a lot of benefit to splitting out services and roles
21:32 SEJeff_work thats how I prefer to manage systems in that some times you want to mix and match
21:32 SEJeff_work some times you have set roles
21:33 scalability-junk I want to use states for functions/services aka git deploy, mysql, apache and then use pillars and the top file to match these up to my final roles/servers
21:33 SEJeff_work scalability-junk, any improvements or suggestions?
21:33 SEJeff_work scalability-junk, absolutely
21:34 scalability-junk not really not yet anyway
21:34 scalability-junk perhaps when I got it running with environments :)
21:34 lemao_ joined #salt
21:35 scalability-junk one thing how do you save the pillars? or do you save them per project/role or just all in one place?
21:36 scalability-junk I would do all states in one repo and 2 pillar repos per project and then have one super repo with all pillars used as submodules... to be able to git clone all pillars into a sane dir structure
21:36 Xeago joined #salt
21:36 SEJeff_work scalability-junk, Sounds overly complex to me, but to each their own :)
21:36 SEJeff_work I have 1 states repo and 1 pillar repo
21:37 SEJeff_work access to the pillar repo is extremely locked down
21:37 SEJeff_work it doesn't email out diffs on push (due to not wanting sensitive data to go over email like passwords)
21:38 scalability-junk yeah one thing I dislike with 1 pillar repo is that all projects even client projects would spam the history of "my" pillar repo
21:38 SEJeff_work scalability-junk, Yes I have 0 clients
21:38 SEJeff_work so that makes perfect sense in that regard
21:38 SEJeff_work scalability-junk, Or...
21:38 SEJeff_work You could have client pillar data pulled down via a ext_pillar module that hits a database
21:38 SEJeff_work or something like that
21:39 scalability-junk still not quite sure where I would define the ext_pillar :) in the top.sls file?
21:39 SEJeff_work scalability-junk, master config
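(For reference, ext_pillar is configured in the master config file. The cmd_yaml module shown here is one of the stock ext_pillar modules; a custom module that queried a client database would be listed the same way, and the command shown is just an illustration.)

```yaml
# /etc/salt/master
ext_pillar:
  - cmd_yaml: cat /etc/salt/extra_pillar.yaml
```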
21:39 ingwaem Is it possible for salt to define global variables on a target minion? Thinking of managing those instead of hacking profile files
21:39 scalability-junk kk
21:40 SEJeff_work ingwaem, templatize /etc/bashrc :)
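(A sketch of what "templatize /etc/bashrc" might look like as a state; the salt:// source path is assumed. The referenced Jinja template could then loop over a pillar key, e.g. global_env, emitting export lines for each variable.)

```yaml
# manage global shell variables via a templated bashrc
/etc/bashrc:
  file.managed:
    - source: salt://bashrc/bashrc.jinja
    - template: jinja
    - user: root
    - group: root
    - mode: 644
```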
21:40 scalability-junk SEJeff_work: yeah we'll see, but actually having at least one pillar per project is not too bad, resulting in 1 super repo with modules... could be manageable.
21:40 SEJeff_work scalability-junk, It is all about what you're comfortable with
21:40 SEJeff_work salt will flex to fit your needs
21:40 scalability-junk good thing would be that I could trust subdirectories to different people and review the code before it's pushed via submodule checkouts
21:40 ingwaem SEJeff_work, I'll look into that thanks
21:40 SEJeff_work when it doesn't, file issues for us to add features :D
21:40 SEJeff_work scalability-junk, Use phabricator... you're welcome :D
21:41 SEJeff_work seriously, nothing compares to it
21:42 scalability-junk SEJeff_work: damn now I use gitlab primarily for code and workflow
21:42 scalability-junk gitlab ci, mirroring on github
21:43 scalability-junk and otrs for customer support
21:43 scalability-junk piwik for analytics ;)
21:43 scalability-junk yourl for url shortening
21:43 scalability-junk owncloud for more public stuff like calendar, contacts, presentations and data
21:43 scalability-junk git annex for binary files in code repos
21:43 scalability-junk and so on :D
21:44 SEJeff_work gitlab is quite awesome
21:44 SEJeff_work but for just code review, you won't be phab
21:44 SEJeff_work with it's great commandline app (arc) and api
21:44 SEJeff_work s/be/&at/
21:45 scalability-junk hehe first I'll get the saltstack stuff running :D
21:47 scalability-junk oh and I'm looking forward to finally get to work on ceph and openstack stuff after finishing with salt in probably 10 years :D
21:48 SEJeff_work scalability-junk, ceph requires some pretty bleeding edge kernels (for the client stuff)
21:48 SEJeff_work I saw sage weil (ceph creator) at the socal linux expo a year or so ago
21:49 SEJeff_work Even inktank, the company he founded, basically says don't use ceph for serious prod data (unless that has changed in 1 year)
21:49 SEJeff_work I'm more partial to glusterfs
21:49 scalability-junk depends on which layer you use
21:49 SEJeff_work Because it is very stable and if it breaks, you can always get to the underlying files without any trouble
21:49 scalability-junk the storage backend is stable
21:49 napperjabber joined #salt
21:49 scalability-junk so are object and blocks storage on top of it
21:49 SEJeff_work scalability-junk, I know the guys at MetaCloud use ceph and are very happy with it
21:50 SEJeff_work in prod
21:50 SEJeff_work but I would go with gluster over ceph given the option
21:50 scalability-junk the only thing not yet stable and as you said bleeding edge is the cephfs implementation
21:50 scalability-junk SEJeff_work: yeah but gluster is mostly distributed filesystem
21:50 scalability-junk not block and object storage like I would mostly use it ;)
21:51 scalability-junk dreamhost for example is quite happy with object and blockstorage with ceph, one storage backend 2 products
21:51 scalability-junk and running with more than 1pb of data I read on the ml
21:51 SEJeff_work 1pb isn't that much
21:51 KyleG they would be happy with ceph
21:51 KyleG they made it....
21:51 SEJeff_work Sort of
21:51 scalability-junk KyleG: yeah true
21:52 SEJeff_work Sage made it while he worked at dreamhost, then founded inktank with their investment and split off
21:52 SEJeff_work I used to live in LA, they are based in downtown LA literally 20 minutes from where I used to live. I met the guys
21:52 KyleG SEJeff_work: I used to work at DH and they're pouring a ton of money and people into it, he didn't then "found it" it's supported by dreamhost funds.
21:52 KyleG my girlfriend still works for em
21:52 SEJeff_work but you know what I mean, inktank was founded with DH money
21:52 SEJeff_work but sage runs it
21:52 KyleG yeah
21:52 SEJeff_work and ceph is and always has been his project
21:53 SEJeff_work not dreamhost's
21:53 jschadlick joined #salt
21:53 KyleG it was his PhD thesis or whatever you call it
21:53 KyleG wasn't it
21:53 scalability-junk yeah but dreamhost is really using it :D
21:53 SEJeff_work sure
21:53 KyleG Ceph is cool for VM's and whatnot, but I work for a SaaS company and I need NFS support
21:53 scalability-junk the thesis is not bad, quite interesting to read through the papers
21:54 KyleG Which Ceph does not have yet, not stable anyways. Some hacked implementation through fuse
21:54 scalability-junk KyleG: oh yeah cephfs so
21:54 SEJeff_work KyleG, gluster has pretty solid nfs.
21:54 scalability-junk I would mostly need object and block storage for the vms
21:54 SEJeff_work It isn't the fastest thing on the block
21:54 SEJeff_work but it is rock solid stable
21:54 SEJeff_work and 0 single point of failure, which I personally love
21:54 KyleG SEJeff_work: I've got some gluster boxes I'm playing with, going to probably use it for transcode storage instead of our expensive Isilons.
21:54 SEJeff_work Unlike moosefs, orangefs, gpfs, etc
21:55 SEJeff_work KyleG, that's a sweet spot for it, commodity storage with local disks. That is where RedHat seems to be targeting it
21:55 scalability-junk hehe
21:56 KyleG I can't wait until someone (hopefully Ceph) is able to compete with the likes of Isilon on a purely support basis, bringing down the cost of total ownership for enterprise storage clusters. I hate that right now we're paying $1/GB for Isilon storage :\
21:56 scalability-junk anyway heading off see ya
21:56 KyleG seeya scalability-junk
21:57 koolhead17 joined #salt
21:58 dthom91 joined #salt
22:01 jMyles joined #salt
22:01 jMyles Is there a way to use a module from within a sls file?
22:03 Yulli joined #salt
22:03 Yulli I'm thinking about whether to have salt-cloud on the master or on my personal computer.
22:04 Yulli Is it possible to have salt-master and salt-minion on the same server?
22:04 Yulli I guess it is, but what's the point?
22:08 chrisgilmerproj left #salt
22:08 scalability-junk Yulli: it is but it's discouraged
22:09 scalability-junk because if you push a bad state and all minions become inaccessible, the master is inaccessible too since it's a minion itself, which makes it much harder to recover
22:10 scalability-junk but you could just use state files and call them via cronjob so they are not tied to actions you do on master, but you still can configure your master with salt
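(The cron-driven approach mentioned here could look like the following on the box running the master's own minion; paths and interval are assumptions. salt-call talks to the master like any other minion, so the run is decoupled from interactive commands on the master.)

```
# crontab entry: apply the highstate to this host every 30 minutes
*/30 * * * * root salt-call state.highstate >> /var/log/salt/cron-highstate.log 2>&1
```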
22:10 kermit joined #salt
22:15 Yulli Ouch, alright.
22:15 Yulli So is it reasonable to have salt-cloud on the master?
22:17 bmorriso1 Yulli: Run both salt-master and salt-cloud on the master
22:17 bmorriso1 no issues
22:17 Yulli Great.
22:19 Ryan_Lane joined #salt
22:19 bmorriso1 jMyles: I'd also like to know this /cc UtahDave
22:22 whiteinge jMyles: you can call out to execution modules from an sls file with the syntax: {{ salt['mod.fun']('arg') }}. is that what you're after?
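(A short illustration of the syntax whiteinge describes: any execution module is reachable from an sls file through the salt dictionary at render time. The state ID and contents here are made up.)

```yaml
# any .sls file -- execution modules are available via the salt dict
motd:
  file.managed:
    - name: /etc/motd
    - contents: "Welcome to {{ salt['grains.get']('fqdn', 'this host') }}"
```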
22:26 UtahDave Yulli: yes, people often have a salt-master and salt-minion running on the same machine.  that way you can manage your master as well
22:27 UtahDave I would recommend running salt-cloud on your master, otherwise salt-cloud can't pre-authenticate the minion into your master
22:27 Yulli UtahDave: That wasn't the case
22:27 Yulli Unless pre-authentication means something different
22:27 Yulli UtahDave: What do you mean by pre-authenticate the minion into the master?
22:29 napperjabber joined #salt
22:29 UtahDave jMyles: Yeah, you can use a module from within a sls file if you use the   module.run state
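(A sketch of the module.run state UtahDave mentions, here wired to the djangomod function jMyles asks about later; the state ID, settings module, and virtualenv path are placeholders.)

```yaml
# run an execution module function as a state
sync_database:
  module.run:
    - name: django.syncdb
    - settings_module: myproject.settings
    - bin_env: /srv/venvs/myproject
```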
22:30 UtahDave Yulli: I do this 100 times a day.  If you're running salt-cloud on your master, then salt-cloud preseeds the new minions keys on the master so that the minion can connect directly into the master
22:30 UtahDave Yulli: then you don't have to manually accept the minion's keys on the master
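(For context, the key preseeding UtahDave describes happens when salt-cloud on the master spins up instances, e.g. from a map file like this; the profile and instance names are invented.)

```yaml
# /etc/salt/cloud.map -- profile name maps to a list of instances
ubuntu_ec2:
  - web1
  - web2
```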
22:31 Yulli UtahDave: Great, thanks! That's a real time-saver.
22:31 UtahDave yep.
22:31 UtahDave bmorriso1: I think the problem is that your top file isn't compiling correctly.  That's a seriously complicated mako jumble.
22:32 UtahDave I would clear that out, and start from a simple section and build out the parts as you know it's working.  I'm pretty dang sure your pillar is crashing.
22:32 bmorriso1 UtahDave: yeah...I didn't create it. I inherited it.
22:33 bmorriso1 Can't say for certain it ever worked, :-/
22:33 bmorriso1 UtahDave: EVERYTHING is like this. There isn't a simple sls here w/o all that crazy logic/templating
22:35 bmorriso1 UtahDave: any tooling exist that could output *how* this is rendering?
22:36 UtahDave bmorriso1: Yeah, there isn't right now.
22:36 UtahDave Sometimes running the master in the foreground in debug mode will give you some more information
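(Concretely, that means something like the commands below. The salt-call line is a hedged suggestion for bmorriso1's rendering question: state.show_sls prints the data structure an sls renders to on a given minion.)

```
salt-master -l debug              # run the master in the foreground with debug logging
salt-call -l debug state.show_sls mystate   # inspect how one sls renders for this minion
```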
22:37 oz_akan_ joined #salt
22:37 UtahDave Really, I recommend using templating lightly, if possible.
22:38 bmorriso1 So do I! :D
22:38 bmorriso1 hahaha
22:38 UtahDave :)
22:38 UtahDave I'll use you as an example, now.  :)
22:38 bmorriso1 !!
22:38 bmorriso1 Glad I could help
22:38 bmorriso1 "what not to do"
22:38 UtahDave lol
22:38 UtahDave can I use your frowning gravatar?
22:39 UtahDave anyway, I have to head home. I have a trip to get ready for.  I'll get someone else here to take a look at that file.
22:39 jMyles Has anybody used djangomod?  I'm not entirely clear on what each kwarg is supposed to be
22:41 bmorriso1 jMyles: this? http://docs.saltstack.com/ref/modules/all/salt.modules.djangomod.html
22:42 Ryan_Lane joined #salt
22:44 jMyles bmorriso1: That's the one.  What's the difference between bin_env and env?  And also, pythonpath doesn't seem to work for me
22:45 bmorriso1 I'm going to guess env == virtualenv bin_env == virtualenv/bin
22:47 ipmb joined #salt
22:48 pentabular joined #salt
22:49 jMyles UtahDave: Where can I learn more about the module.run state?
22:50 Sean joined #salt
22:51 pentabular left #salt
22:54 Yulli left #salt
22:57 TOoSmOotH in salt what would be the equivalent to a node file in say puppet?
22:59 pentabular joined #salt
22:59 bmorriso1 jMyles: http://docs.saltstack.com/ref/states/all/salt.states.module.html#salt.states.module.run ?
23:01 Guest5619 /nick pentabular
23:01 pentabular fooey
23:06 Guest5619 left #salt
23:12 emocakes joined #salt
23:13 aranhoide salt-cloud: I launched an instance from a map file (salt v.0.16.0, salt-cloud 0.8.9) and everything seems to work alright except the instance won't reply to commands from the master (e.g. test.ping)
23:13 aranhoide what changed since salt-cloud 0.8.8 that might have broken this?  because this was working for me before
23:15 schannel joined #salt
23:15 jschadlick joined #salt
23:18 aat joined #salt
23:18 platoscave joined #salt
23:20 schannel left #salt
23:27 Nexpro joined #salt
23:27 aat joined #salt
23:30 aranhoide connectivity to the master on TCP ports 4505/4506 seems OK
23:36 aat joined #salt
23:40 platoscave joined #salt
23:41 platoscave joined #salt
23:42 platoscave joined #salt
23:50 bmorriso1 left #salt
23:52 emocakes joined #salt
23:53 avienu joined #salt
23:55 jschadlick left #salt
