
IRC log for #salt, 2013-07-30


All times shown according to UTC.

Time Nick Message
00:09 salty456 joined #salt
00:15 salty456 joined #salt
00:15 FreeSpencer joined #salt
00:15 UtahDave D-BO: add .pop() to your call
00:17 ponderability1 joined #salt
00:18 hblanks joined #salt
00:19 hblanks Might anyone have an example of how to configure salt scheduling for a minion from within /srv/pillar? I'm able to get it working from /etc/salt/minion, but not from pillar.
00:20 UtahDave hblanks: just put the exact same config in an sls file in your pillar roots.  Then in your pillar top.sls match on your minion and include the sls file you wrote for the scheduler
00:21 D-BO Thanks @UtahDave
00:22 UtahDave did that work for you, D-BO?
00:22 hblanks Thanks, @UtahDave. Indeed, I had:
00:23 hblanks base:
00:23 hblanks '*':
00:23 hblanks - schedule
00:23 hblanks for /srv/pillar/top.sls
00:23 hblanks and:
00:23 hblanks schedule:
00:23 hblanks snapshot:
00:23 hblanks function: cmd.run
00:23 hblanks args:
00:23 hblanks - "touch /tmp/foo"
00:23 hblanks seconds: 5
00:23 hblanks for /srv/pillar/schedule.sls
00:24 hblanks ....oh
00:24 hblanks sorry, one moment.
00:25 hblanks Yes. The curious thing is that file, schedule.sls, works fine when copied into /etc/salt/minion.d/, (for instance as schedule.conf)
00:26 hblanks1 joined #salt
00:28 UtahDave Hm. That should work.  Did you execute a  pillar refresh?    salt \* saltutil.refresh_pillar
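With indentation restored (IRC strips leading whitespace), hblanks' two pillar files would look roughly like this, assuming the default pillar root of /srv/pillar:

```yaml
# /srv/pillar/top.sls
base:
  '*':
    - schedule

# /srv/pillar/schedule.sls
schedule:
  snapshot:
    function: cmd.run
    args:
      - "touch /tmp/foo"
    seconds: 5
```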
00:29 andrew joined #salt
00:29 D-BO @UtahDave: yes it did
00:29 hblanks1 joined #salt
00:30 UtahDave cool
00:30 dthom91 joined #salt
00:33 oz_akan_ joined #salt
00:34 [diecast] joined #salt
00:37 cron0 joined #salt
00:38 hblanks1 @UtahDave, I was doing salt-call saltutil.sync_all — this minion is colocated with the master. But, I've also now done  salt \* saltutil.refresh_pillar . Still not much luck, though — since config by /etc/ works, I can just use that. Just a surprise that it's not happy. I'll also try from a fresh instantiation and let you know if that works; they're not hard to do.
00:39 UtahDave OK, if you're still seeing that, this may be a bug.  Please open an issue on that and we'll do some testing here, too
00:44 JasonSwindle joined #salt
00:44 JasonSwindle Anyone here use iptables-persistent on Ubuntu?
00:45 JasonSwindle I am having a hell of a time with it, minions, and them randomly dropping off
00:48 UtahDave I haven't used iptables on ubuntu at all with salt yet.
00:49 JasonSwindle UtahDave:  I am using iptables-persistent to load the rules up on reboot
00:49 JasonSwindle but when the rules get loaded; the minions drop off the map for like 5 minutes
00:49 UtahDave is that an ubuntu app?
00:50 UtahDave are you allowing the minions to reach out to the master?
00:51 JasonSwindle yep, very basic rules https://gist.github.com/JasonSwindle/299586faace0657018e6
00:52 UtahDave do the minions work correctly once they reconnect?
00:53 JasonSwindle Yes
00:53 JasonSwindle you can SSH into them, and after that they work
00:53 JasonSwindle but you wait or SSH in
00:53 JasonSwindle https://launchpad.net/ubuntu/precise/amd64/iptables-persistent <-
00:54 Gifflen joined #salt
00:54 JasonSwindle It is an INIT.D script to restore IPTables rules on boot
00:55 JasonSwindle in the time the minions are not talking; test.ping is no go
00:55 JasonSwindle or state.highstate
00:56 UtahDave Can you time how long they are unresponsive?
00:56 krissaxton joined #salt
00:56 JasonSwindle We find it to be about 5 minutes
00:56 JasonSwindle I don't have a 100%, down-to-the-second test
00:56 UtahDave OK, there's probably a timeout there where the minion is waiting to reauth or something.
00:56 UtahDave can you open an issue on this?
00:56 whit joined #salt
00:57 JasonSwindle Did it on 0.16.0 and .1
00:57 JasonSwindle I can try….. not sure how to even sum it up other than….. eh?
00:58 UtahDave Well, just describe what's happening
00:58 JasonSwindle sure, and I will give my states as well
00:58 hazzadous @JasonSwindle have a v. similar setup with little issue.  Perhaps we're not including the master in the iptables setup.
00:58 JasonSwindle this also happened on Ubuntu 12.04 and 13
00:59 JasonSwindle hazzadous:  go on
01:00 hazzadous So we use iptables-persistent to manage access.  It doesn't seem to have caused the same issues, but possibly minions are dropping off on update and retrying only after a set time
01:00 JasonSwindle hazzadous:  My sls, if you want to see if I foobarred something https://gist.github.com/JasonSwindle/e930852795d1fcac0a3b
01:01 hblanks joined #salt
01:04 hblanks joined #salt
01:05 axisys joined #salt
01:07 hazzadous we have https://gist.github.com/hazzadous/eb7c35037e5fc218330d going on but I don't think we have salt master picking this up.  Does it happen when you high state just on master?
01:09 Nexpro joined #salt
01:10 JasonSwindle Yep
01:10 JasonSwindle As soon as the highstate hits the iptables part; no more minion / master talking
01:11 JasonSwindle UtahDave:  Done; https://github.com/saltstack/salt/issues/6424
01:11 hblanks joined #salt
01:11 UtahDave JasonSwindle: thanks!
01:12 JasonSwindle Thank you.  Right now I am still in dev | staging…. so I am running iptables-less
01:14 UtahDave JasonSwindle: So it makes sense to me that the minion would lose connection while the iptables and networking is in flux. Not sure why it would take 5 minutes to reconnect
01:15 JasonSwindle I agree
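For context, a salt master behind iptables-persistent needs the two ZeroMQ ports open before minions can reconnect. A minimal sketch of a rules fragment — the path follows iptables-persistent's defaults, and these rules are an assumption, not the contents of JasonSwindle's actual gist:

```
# /etc/iptables/rules.v4 (fragment)
# 4505 = publish port, 4506 = return port on the salt master
-A INPUT -p tcp --dport 4505 -j ACCEPT
-A INPUT -p tcp --dport 4506 -j ACCEPT
```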
01:15 aat joined #salt
01:16 jeddi So on a debian (or derivative) machine, any cron states will operate by default on the file /var/spool/cron/crontabs/root (for example).  If i want to stick with /etc/cron.*/ directories, should I just push files out to those using the file state/module?
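A minimal sketch of the file-state approach jeddi describes (the state id, paths, and source are hypothetical; scripts in /etc/cron.daily need the execute bit):

```yaml
cron-logrotate:
  file.managed:
    - name: /etc/cron.daily/logrotate-custom
    - source: salt://cron/logrotate-custom
    - user: root
    - group: root
    - mode: 755
```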
01:17 cron0 joined #salt
01:21 hblanks Thanks, @UtahDave! Much obliged for the help.
01:21 hblanks left #salt
01:21 hblanks joined #salt
01:21 hblanks left #salt
01:22 mgw joined #salt
01:25 oz_akan_ joined #salt
01:28 liuyq joined #salt
01:29 jshare joined #salt
01:29 liuyq joined #salt
01:36 FreeSpencer joined #salt
01:53 mannyt joined #salt
01:58 fxhp http://missingm.co/2013/06/ansible-and-salt-a-detailed-comparison/
02:00 Furao joined #salt
02:06 mirrorbox joined #salt
02:15 danishabdullah joined #salt
02:17 Lue_4911 joined #salt
02:17 logix812 joined #salt
02:18 robbyt joined #salt
02:18 danishabdullah I can't seem to be able to spin up a proper salty vagrant instance. The box is created, then it announces that it is calling state.highstate but it doesn't actually install any of the packages I outline or even create shared folders. I am on Windows 8 using VirtualBox 4.2
02:19 akoumjian danishabdullah: What does it say when you ssh in and call "sudo salt-call state.highstate -l debug" manually? (that's all salty-vagrant is doing)
02:24 danishabdullah the end of the message is:
02:24 danishabdullah local:
02:24 danishabdullah ----------
02:24 danishabdullah State: - no
02:24 danishabdullah Name:      states
02:24 danishabdullah Function:  None
02:24 danishabdullah Result:    False
02:24 danishabdullah Comment:   No Top file or external nodes data matches found
02:24 danishabdullah Changes:
02:26 qybl joined #salt
02:28 akoumjian danishabdullah: Well, there you go. Salt isn't finding your top file.
02:28 akoumjian danishabdullah: or not matching your minion.
02:29 danishabdullah here's the full version http://pastebin.com/NK6dTeai
02:29 danishabdullah I see..
02:30 danishabdullah I wonder why.
02:32 akoumjian danishabdullah: Where is your top file located and what do you have your file_root set to? (it is /srv/salt by default)
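That "No Top file or external nodes data matches found" comment usually means exactly what akoumjian suggests: there is no top.sls under the file roots, or its matchers miss the minion id. A minimal top file under the default root would be:

```yaml
# /srv/salt/top.sls
base:
  '*':
    - common   # expects /srv/salt/common.sls to exist
```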
02:33 raydeo joined #salt
02:38 danishabdullah joined #salt
02:38 samsalt joined #salt
02:43 jpeach joined #salt
02:43 kenbolton joined #salt
02:49 liuyq joined #salt
02:49 oz_akan_ joined #salt
02:56 z0rkito does anyone have an example of using pillars in a pydsl?
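z0rkito's question goes unanswered in-channel; one hedged sketch of reading pillar data from a #!pydsl state file follows. The state id and pillar key are invented — pydsl renders with the usual renderer dunders (__pillar__, __grains__, __salt__) available:

```python
#!pydsl
# Hypothetical: install a package whose name comes from pillar,
# falling back to 'nginx' if the key is absent.
pkg_name = __pillar__.get('web_pkg', 'nginx')
state('install_web_pkg').pkg.installed(name=pkg_name)
```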
02:56 Nexpro joined #salt
03:03 ponderability joined #salt
03:10 m_george|away joined #salt
03:12 kcb joined #salt
03:16 salty456 joined #salt
03:27 FreeSpencer joined #salt
03:33 intchanter joined #salt
03:38 Nexpro joined #salt
03:38 jaequery joined #salt
03:42 Nitron joined #salt
03:45 oz_akan_ joined #salt
03:48 Nitron left #salt
03:52 salty456 basepi: thanks for commenting on my issue -- fyi, the latest develop commit fixed it
03:54 teepark joined #salt
03:55 jaequery joined #salt
03:55 FreeSpencer joined #salt
03:56 teepark I have a file in my repo that I want to send to a command on stdin, but I don't really need on the minion otherwise
03:56 teepark is there a good pattern for doing this?
03:57 teepark I could drop it somewhere permanent with a file.managed and watch that with a cmd.wait that runs the command,
03:57 teepark but if there is a way to combine them into a single state and somehow use a salt:// url as stdin source, that's what I'm after
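Assuming there is no built-in way to use a salt:// URL as a stdin source, the two-state pattern teepark already describes is the usual answer; a sketch with hypothetical paths and command:

```yaml
/tmp/loaddata.sql:
  file.managed:
    - source: salt://files/loaddata.sql

load-data:
  cmd.wait:
    - name: psql mydb < /tmp/loaddata.sql
    - watch:
      - file: /tmp/loaddata.sql
```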
04:01 nmistry joined #salt
04:16 Ivo joined #salt
04:22 salty456 left #salt
04:23 tsheibar joined #salt
04:46 benturner joined #salt
04:54 dthom91 joined #salt
05:01 Ryan_Lane1 joined #salt
05:02 Ryan_Lane joined #salt
05:09 oz_akan_ joined #salt
05:27 teepark left #salt
05:38 liuyq joined #salt
05:51 xl1 joined #salt
05:55 cxz joined #salt
05:58 Ivo joined #salt
06:07 druonysus joined #salt
06:07 druonysus joined #salt
06:17 middleman_ joined #salt
06:18 Nexpro joined #salt
06:18 fragamus joined #salt
06:24 Newt[cz] joined #salt
06:30 balboah joined #salt
06:30 krissaxton joined #salt
06:36 mortis all salt youtube vids downloaded and ready for playing on the flight back to norway :x
06:36 mortis \o,
06:36 Singularo There are salt youtube vids?
06:36 * Singularo googles.
06:36 mortis theres a channel yeah :)(
06:36 mortis -(
06:37 * Singularo found Angelina Jolie... ;-p
06:37 mortis lol
06:37 mortis download that too
06:37 unicoletti_ joined #salt
06:37 mortis Singularo: if you dont know it, you can replace "www." on youtube with "ss" and you get to download the video to your computer for offlineplaying
06:38 Singularo orly? I did not know that, thanks.
06:38 mortis pretty nice if you are gonna go on a long planetrip
06:38 mortis :)
06:38 mortis a long flight*
06:38 Singularo Yeah.. being based in Adelaide, South Australia means that going pretty much anywhere is a long plane trip.
06:41 xl1 joined #salt
06:46 mortis sweet
06:49 mortis 3 of us went to OSCON in portland and went on holiday for a week in seattle now :)
06:49 mortis been here 24h now
06:49 mortis pretty cool city
06:50 Singularo Nice.
06:50 mortis chilling in the hotelroom with a beer and a snus and playing with salt after a day of walking around the city
06:50 mortis thinking about a segwaytour tomorrow
06:50 scalability-junk ss.youtube.com doesn't work for me
06:50 mortis scalability-junk: remove the .
06:50 cxz lol, sucks Singularo
06:50 mortis its ssyoutube
06:51 cxz i'm from melbourne
06:51 scalability-junk ah I thought it's a youtube featured :D
06:51 mortis nah :)
06:52 Ryan_Lane joined #salt
06:52 syngin Singularo, cxz: Melbs here too.
06:53 Singularo Good to see a few Aussies getting salty.
06:53 scalability-junk anyone from saltstack.com here?
06:53 mortis salty aussies
06:53 scalability-junk aussie aussie aussie!
06:53 mortis saussie
06:53 syngin Anyone here using #!py in pillar?
06:54 Nexpro joined #salt
* scalability-junk seems no Australian wants to cheer :(
06:55 syngin scalability-junk: I'm *in* AU, but I'm not an Aussie, so I wasn't going to follow that up ;)
06:55 Singularo Oh. Oy, Oy, Oy ;-p
06:55 scalability-junk \o/
06:55 * Singularo is slow..
06:55 syngin A welsh-person would have picked that up straight away.
06:55 scalability-junk syngin: hehe
06:56 scalability-junk so back to the saltstack.com issue anyone around?
07:06 capricorn_1 joined #salt
07:08 Koma joined #salt
07:09 bud joined #salt
07:22 matanya joined #salt
07:27 abele joined #salt
07:30 krissaxton joined #salt
07:34 krissaxton joined #salt
07:34 cowyn joined #salt
07:34 cowyn left #salt
07:34 carlos joined #salt
07:36 cowyn joined #salt
07:41 cowyn hello, i have a grain item like this:   gpus:
07:41 cowyn {'model': 'ASPEED Graphics Family', 'vendor': 'unknown'}
07:41 cowyn but salt '*' grains.item gpus:model return an empty value
07:42 cowyn gpus value in grain is a list or a dict?
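cowyn's question also goes unanswered here. The core `gpus` grain is a list of dicts, which would explain the empty result: the colon syntax in grains.item traverses nested dicts, not lists. Fetching the whole grain and indexing it client-side is one workaround:

```
salt '*' grains.item gpus
```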
07:54 s0undt3ch joined #salt
07:55 carlos joined #salt
08:00 felixhummel joined #salt
08:05 xproteus joined #salt
08:06 krak3n` joined #salt
08:09 Xeago joined #salt
08:13 auser joined #salt
08:19 az87c joined #salt
08:21 Jason-AVST joined #salt
08:26 APLU joined #salt
08:31 xl2 joined #salt
08:31 abele joined #salt
08:35 zooz joined #salt
08:35 ml_1 joined #salt
08:47 qba73 joined #salt
08:48 unicoletti_ joined #salt
08:56 danishabdullah joined #salt
09:20 yota joined #salt
09:29 whiskybar joined #salt
09:35 jeddi joined #salt
09:59 david_a joined #salt
09:59 auser joined #salt
10:02 Newt[cz] joined #salt
10:06 fredvd joined #salt
10:06 xl1 joined #salt
10:08 helderco joined #salt
10:11 matanya_ joined #salt
10:16 matanya joined #salt
10:16 ggoZ joined #salt
10:40 krissaxton joined #salt
10:43 sw__ left #salt
10:43 sw__ joined #salt
10:43 matanya joined #salt
10:44 giantlock joined #salt
10:51 krak3n` joined #salt
10:52 s0undt3ch joined #salt
10:58 Newt[cz] joined #salt
11:01 arthurlutz joined #salt
11:02 arthurlutz Report about the Paris Salt Sprint : http://www.logilab.org/blogentry/157960
11:04 Teknix joined #salt
11:05 mikedawson joined #salt
11:07 Furao arthurlutz: I recently did my own "configure existing monitoring solution through salt (add machines, add checks, etc) on various backends with a common syntax"
11:09 Furao I ended up with something simpler: each state needs a "minion.jinja2" file, which is YAML, and I have a module running on the shinken minion that cp.gets all those files from each state and performs those checks (with the shinken options)
11:18 viq Can salt find master by SRV record, or does it have to be an A record?
11:18 balboah joined #salt
11:20 Nexpro joined #salt
11:22 danishabdullah_ joined #salt
11:25 jeddi viq: if ping works from the minion, that's enough.
11:25 diegows joined #salt
11:25 viq jeddi: ping to what?
11:25 jeddi ie - you just need to be able to route to the box --  dns, hosts, ip address, whatever.
11:26 viq Yes. But do I need to have the A record for salt.example.com, or can I have an SRV record instead?
11:28 jeddi aren't srv records a subset of A?  anyway.  as I say, so long as the minion can resolve the address, that's enough.  port numbers (as per srv feature) aren't necessary.   heck, you don't even need 'salt' to be the master name - i assign the salt master during provisioning (salt-cloud) to the actual name of the master on the network.
11:31 viq Subset? Not sure I'd call it that. And yes, I know port number isn't necessary. I'm asking whether salt will look for SRV records, which would let me say "that's the salt master host" without having an A record or CNAME, and without having to modify the client config even if I was able to during provisioning
11:32 sw__ viq & jeddi : as far as I know, I've never seen the ability to point to a master thru SRV records. What would be the point anyway ? You can just specify in your minion config where your master stands
11:32 viq sw__: sure, have fun doing that in 200+ machines by hand :P
11:32 sw__ then just create a record "salt" in your DNS and be fine with it
11:33 viq And I am asking whether I have to, or can I create an SRV record instead.
11:34 sw__ yes you have to :) My about 200 machines were already integrated into Rundeck, so I didn't have to bother doing that..
11:35 viq OK, thank you.
11:50 sphinx joined #salt
11:50 blee_ joined #salt
11:57 swa I'm looking for a way to have a comment at the top of my template files saying something like this "File managed by Salt, through state ${state}. Last deployed ${date}".. What's the best approach at doing this ?
11:58 swa I already have the basic comment w/o the state name or date thru a pillar
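One hedged approach to swa's question: pass the state name in via template context and render the date at deploy time (all names and paths here are invented). Caveat: an embedded timestamp makes the rendered file differ on every run, so file.managed will report a change each time:

```yaml
# state file
/etc/myapp.conf:
  file.managed:
    - source: salt://myapp/myapp.conf.jinja
    - template: jinja
    - context:
        state_name: myapp

# top of myapp.conf.jinja
# File managed by Salt, through state {{ state_name }}.
# Last deployed {{ salt['cmd.run']('date -u') }}.
```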
12:09 arthurlutz Furao: nice, is the code published ?
12:11 lemao joined #salt
12:11 Furao arthurlutz: not yet
12:12 arthurlutz Furao: would it be in the core of salt or in formulaes ?
12:13 Furao I have 35 000 lines of states + template I can't really pick and choose into them and push it into that repo :)
12:14 Furao oh no it's more than that
12:16 hazzadous joined #salt
12:18 krak3n` joined #salt
12:21 timl0101 joined #salt
12:23 krissaxton joined #salt
12:26 nickc joined #salt
12:26 KennethWilke joined #salt
12:27 ponderability joined #salt
12:28 lwarx joined #salt
12:30 lwarx folks, how to escape this command: echo -e "postgis hold\npostgresql-9.2-postgis-2.0 hold" | dpkg --set-selections
12:30 Furao lwarx: use debconf state
12:30 lwarx in cmd.wait so it passes \n correctly to shell
12:30 Furao oh no
12:31 Furao sorry It's not debconf-set-selections :)
12:31 lwarx it works in console, but when used in state it produces "dpkg: error: unexpected data after package and selection at line 1"
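A likely culprit: cmd states run via /bin/sh, where `echo -e` is not portable (dash's echo treats `-e` as a literal argument), and YAML double-quoted strings interpret `\n` themselves. A sketch that sidesteps both by using printf and a literal block scalar (the watch target is hypothetical):

```yaml
hold-postgis:
  cmd.wait:
    - name: |
        printf '%s\n' 'postgis hold' 'postgresql-9.2-postgis-2.0 hold' | dpkg --set-selections
    - watch:
      - pkg: postgis-packages
```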
12:32 jslatts joined #salt
12:37 bhosmer joined #salt
12:40 oz_akan_ joined #salt
12:42 unicoletti left #salt
12:48 Xeago joined #salt
12:50 JasonSwindle joined #salt
12:51 oz_akan_ joined #salt
12:56 moko joined #salt
12:58 jeffasinger joined #salt
12:58 moko__ joined #salt
13:01 moko__ I'm trying to use Salt to provision a Vagrant precise32 box with PHP 5.2.4. I specify "pkg.installed: - names: - php5 5.2.4" in the state, but get an error upon provisioning during `vagrant up`: "The following packages failed to install/update: php5 5.2.4." What am I doing wrong?
13:02 KennethWilke moko__: what distro are you working with? it looks like the package name may be incorrect
13:02 ktenney joined #salt
13:02 anteaya joined #salt
13:03 swa shouldn't it be php5:5.2.4 ?
13:03 moko__ KennethWilke: Ubuntu 12.04 LTS
13:04 KennethWilke moko__: it looks like pkg.install supports a version keyword: http://docs.saltstack.com/ref/states/all/salt.states.pkg.html#salt.states.pkg.installed
13:05 KennethWilke try using php5 for the package name, and the version in a - version: 5.2.4 line below that
13:05 JasonSwindle moko__:  Do you have a repo / ppa for PHP 5.2 as well?  It looks like 5.2.x is EOL
13:07 moko__ swa: You're right, it should have a colon. Just tried that with the same error, unfortunately.
13:08 moko__ KennethWilke: I'm using the "-names" syntax and listing packages underneath. I think that requires a colon and then the version number after each package.
13:09 KennethWilke moko__: ah alrighty
13:09 juicer2 joined #salt
13:11 krak3n` joined #salt
13:11 moko__ JasonSwindle: I don't have a repo for 5.2.4 unfortunately. Does it being EOL mean I'd have to use pkgrepo instead? I've not been able to find a repo for 5.2.4 online so far.
13:11 Kholloway joined #salt
13:11 whiskybar joined #salt
13:12 lwarx left #salt
13:12 JasonSwindle moko__:  THis may help; https://launchpad.net/~skettler/+archive/php
13:13 JasonSwindle Never used this PPA, but may help
13:13 Gifflen joined #salt
13:13 drawsmcgraw joined #salt
13:14 JasonSwindle moko__:  https://dpaste.de/uSAbR/ <-
13:15 JasonSwindle That should add the PPA
13:15 JasonSwindle I use PPAs in my Salt
13:16 tseNkiN joined #salt
13:16 moko__ JasonSwindle: Thanks! That's been very helpful.
13:17 JasonSwindle Also to make sure you then add on require_in … on the PPA SLS
13:17 JasonSwindle but the above should get you in the right direction
13:22 Kholloway joined #salt
13:24 brianhicks joined #salt
13:24 racooper joined #salt
13:31 [diecast] joined #salt
13:31 [diecast] joined #salt
13:34 JasonSwindle moko__:  Did that help?
13:34 mikedawson joined #salt
13:34 waverider joined #salt
13:38 Drekonus joined #salt
13:39 Drekonus joined #salt
13:39 EWDurbin left #salt
13:42 aat joined #salt
13:43 MasterNayru joined #salt
13:46 Nitron joined #salt
13:46 jsummerfield joined #salt
13:46 Nitron left #salt
13:46 Drekonus joined #salt
13:47 toastedpenguin joined #salt
13:53 FreeSpencer joined #salt
13:55 Drekonus joined #salt
13:55 Heartsbane joined #salt
13:55 Heartsbane joined #salt
13:55 Drekonus joined #salt
13:56 Drekonus joined #salt
13:58 LyndsySimon joined #salt
14:05 jsummerfield joined #salt
14:05 blink__ joined #salt
14:07 moko__ JasonSwindle: Yes – thanks very much for your help. I had to use the `-name` syntax instead of the `-names` one, though, for reasons that escape me. Here's what I ended up with, for those trawling the logs in future: https://gist.github.com/anonymous/6113175
14:07 mannyt joined #salt
14:08 JasonSwindle I get an error on that link
14:08 JasonSwindle there we go
14:08 JasonSwindle names would be when you have a list of items; but I like pkgs better
14:09 JasonSwindle https://gist.github.com/JasonSwindle/43ec1fea3acfaafbc504
14:10 helderco joined #salt
14:11 moko__ I see. I was expecting to be able to do something like this: https://gist.github.com/anonymous/6113226
14:12 JasonSwindle try pkgs
14:12 moko__ Thanks, I will do.
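The pkgs form JasonSwindle suggests accepts per-package versions as mappings, at least on apt-based systems in recent Salt releases; a sketch (the second package name is illustrative):

```yaml
php:
  pkg.installed:
    - pkgs:
      - php5: 5.2.4
      - php5-cli: 5.2.4
```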
14:13 napperjabber joined #salt
14:14 swa how do I cmd.run something only if a file has been created or updated ?
14:15 Furao joined #salt
14:15 jeddi swa:  updated - maybe cmd.wait?   created - require: ls filename
14:15 swa yeah, I was looking at cmd.wait indeed
14:15 jeddi swa: oh - is the file being created by salt, or some external mechanism?
14:16 swa file managed by salt yeah
14:19 swa blink__: I do, RHEL 5 and 6
14:20 cnelsonsic joined #salt
14:20 swa blink__: ah, indeed, PyYAML avail. on base.. didn't know that, and never had it installed prior to using Salt
14:22 swa blink__: wait a minute.. I see my minion has PyYAML installed.. no trace of python-yaml
14:22 swa RHEL 6.4
14:22 FreeSpencer joined #salt
14:23 JaredRookie joined #salt
14:23 swa blink__: yum search python-yaml doesn't return anything against the base repo here
14:24 swa I checked again in other repos (supplementary, etc.).. let me look it up on RHN
14:25 swa blink__: no trace of python-yaml on RHN.. can you run rpm -qi python-yaml and tell the vendor
14:25 samsalt1 joined #salt
14:26 bhosmer joined #salt
14:26 swa blink__: that was my next question.. do you have third party repos enabled ? Like rpmforge: p
14:28 swa blink__: what if you try to remove python-yaml, does it remove other deps ?
14:29 krak3n` joined #salt
14:31 Fernandos joined #salt
14:31 Fernandos hi
14:32 Fernandos would using salt as layman replacement make sense (that's a gentoo portage [=package management system] overlay manager tool)?
14:33 JasonSwindle KennethWilke:  Ping
14:33 JasonSwindle Fernandos:  KennethWilke may know
14:34 JasonSwindle If he is around
14:34 KennethWilke que?
14:34 KennethWilke Fernandos: sorry i'm not familiar with layman
14:34 KennethWilke as far as programmatically interacting with portage for installing packages salt is a good tool
14:35 Fernandos KennethWilke: layman is like apt-add-repository
14:36 Fernandos KennethWilke: it just manages adding/removing xml files and runs git fetch and git pull on the git/hg/svn/csv/.. cloned repos
14:36 KennethWilke oh alrighty, i'm not to osure if it can act as a replacement, but i do see a module for it in salt: http://docs.saltstack.com/ref/modules/all/salt.modules.layman.html#module-salt.modules.layman
14:37 KennethWilke and a state module as well: http://docs.saltstack.com/ref/states/all/salt.states.layman.html#module-salt.states.layman
14:37 Fernandos oh cool, didn't know that there are modules for layman
14:37 KennethWilke yeah i hope that'll help you out, if not i'd very much appreciate you improving it :)
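The layman state module KennethWilke links is small; using it is essentially a one-liner (the overlay name here is hypothetical):

```yaml
sunrise:
  layman.present
```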
14:39 dthom91 joined #salt
14:43 mikedawson joined #salt
14:48 drawsmcgraw On the Reactor system: Any reason tags are limited to 20?
14:48 drawsmcgraw I appreciate that you'd want *some* limit. Just wondering why such a short (to me) limit.
14:49 fridder joined #salt
14:50 kermit joined #salt
14:51 Linz joined #salt
14:51 teskew joined #salt
15:10 jpeach joined #salt
15:11 forrest joined #salt
15:14 StDiluted joined #salt
15:15 dizzyd joined #salt
15:15 StDiluted joined #salt
15:15 dizzyd hallo..I'm a newbie salt user -- nice so far!
15:16 dizzyd I was wondering if anyone could point me to an example of how to write a new state module that supports "require"
15:16 forrest http://docs.saltstack.com/topics/tutorials/walkthrough.html#adding-some-depth
15:16 forrest that section of the walkthrough has a service require
15:16 forrest for nginx
15:17 avienu joined #salt
15:17 JasonSwindle dizzyd:  My postgres exmaple
15:17 JasonSwindle https://gist.github.com/JasonSwindle/43ec1fea3acfaafbc504
15:17 JasonSwindle a little advance
15:18 bhosmer joined #salt
15:18 dizzyd sorry, to clarify
15:18 scalability-junk JasonSwindle: -> saltstack-formulas :)
15:18 dizzyd I have written a new state (i.e. _states/foo.py)
15:18 scalability-junk but probably already there :D
15:18 dizzyd and want to enable a user to specify a require
15:18 dizzyd it seems like it's up to the individual state implementations to do that?
15:18 JasonSwindle oh
15:19 JasonSwindle nvm
15:19 dizzyd thanks tho :)
15:19 forrest Can you explain more dizzyd? You want a user to specify a require when they are applying a state? As in a prompt or something?
15:20 dizzyd https://gist.github.com/dizzyd/c549f7d7342f63c062c0
15:20 forrest Would writing the logic to have it use the proper requires (or a state per machine type) not work better?
15:20 dizzyd I wrote a new state implementation for ephemeraldisk
15:21 RookieJared joined #salt
15:21 dizzyd but if I want it to format an XFS disk, I need to make sure it's run after xfsprogs is installed
15:21 dizzyd but I'm not sure how one goes about implementing support for the "require" directive
15:21 dizzyd right now it just blows up with:
15:22 dizzyd https://gist.github.com/dizzyd/8e2ddd92915094fb687b
15:22 StDiluted can you paste your whole state?
15:22 dizzyd sure, one moment
15:22 StDiluted looks like you have an error in your SLS
15:23 dizzyd plz to not be laughing: https://gist.github.com/dizzyd/7ea4ae504294f397ed1f
15:23 dizzyd :)
15:23 forrest Yea everyone laugh at the guy trying.
15:23 StDiluted no laughing, I'm a noob as well
15:23 StDiluted ohh, you've written a salt module
15:24 denstark I have a simple state sls file but it seems that my pkgrepo isn't being applied before the package even though i have a require (i have confirmed via salt-call -l debug on the minion that the repo is installed after). Here is the state sls file: https://gist.github.com/anonymous/61e24a35f989611222e8
15:24 jsummerfield joined #salt
15:25 denstark not sure if i'm using require incorrectly
15:25 StDiluted dizzyd: that's beyond my capability to troubleshoot, but the error you get looks like a problem in your SLS, can you paste the SLS that is calling this module?
15:26 devinus joined #salt
15:26 dizzyd StDiluted: see my first gist
15:27 lazyguru joined #salt
15:27 JaredR joined #salt
15:28 StDiluted denstark, try https://gist.github.com/dginther/6113998
15:28 carmony joined #salt
15:28 * denstark looks
15:28 SEJeff_work $ git log --no-merges --pretty=oneline v0.$(git tag -l | sed 's/^v0\.//g' | sort -n | tail -n1)..develop | wc -l
15:28 SEJeff_work 1115
15:28 SEJeff_work Thats a LOT of commits since the last release
15:29 denstark StDiluted: I think you're right ;) Let me give that a whirl
15:30 samsalt joined #salt
15:30 StDiluted dizzyd, have you tried the long declaration for xfsprogs? i.e.: xfsprogs:\n  - pkg:\n  - installed
15:30 StDiluted rather than pkg.installed
15:30 SEJeff_work even better: git log --pretty=oneline --no-merges $(git tag -l | sort -n -k2 -t. | tail -n1)..develop | wc -l
15:31 dizzyd well, xfprogs does actually get installed
15:31 dizzyd just not in the appropriate order
15:32 drawsmcgraw Any docs/examples on writing Reactor files? I'm having trouble accessing the 'tag' and 'data' variables. The only doc I've found is this one: http://docs.saltstack.com/topics/reactor/index.html
15:36 JaredR joined #salt
15:37 kenbolton joined #salt
15:39 denstark StDiluted: Totally worked
15:39 denstark I see where my error was
15:41 Linz joined #salt
15:42 forrest was it the require denstark?
15:42 dizzyd StDiluted: so…you were right
15:42 SEJeff_work Is there a sane way to say: If a symlink doesn't exist create it, but if it does, don't touch it
15:42 dizzyd changing it to the longer form fixed the require ordering
15:42 dizzyd which is…counter-intuitive to me
15:43 dizzyd you can use the short form IIF nothing requires the entry
15:43 SEJeff_work The symlink is for an app deployment directory, which defaults (on new builds) to the newest tag in git, but jenkins will do deploys and I don't want to overwrite that symlink if it exists already
15:43 StDiluted dizzyd: http://docs.saltstack.com/topics/troubleshooting/yaml_idiosyncrasies.html#yaml-does-not-like-double-short-decs
15:43 LucasCozy joined #salt
15:43 StDiluted I'm not sure why that particular thing doesn't work because that page seems to say it should
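For the record, the two forms dizzyd swapped between — the one-line declaration failed to honor the require, the expanded form worked:

```yaml
# short form: require ordering misbehaved here
xfsprogs:
  pkg.installed

# long form: fixed the ordering
xfsprogs:
  pkg:
    - installed
```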
15:44 dizzyd YAML is a parsing nightmare
15:44 SEJeff_work It just takes some getting used to. It isn't that awful
15:44 SEJeff_work At least (unlike json) you can add comments
15:44 dizzyd true
15:44 dizzyd comments are nice
15:44 StDiluted SEJeff_work, unless: test -e /path/to/symlink ?
15:45 SEJeff_work StDiluted, Thats for cmd.run. I was hoping to use file.symlink, but yeah, that will likely work
15:45 SEJeff_work just lame :)
15:45 SEJeff_work StDiluted, Good point, I'll do it that way
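StDiluted's `unless` idea, sketched as a state (the paths are placeholders — the real link was /srv/http/app/src/src-$app). `test -e` follows the link, so a dangling symlink would be recreated; `test -L` would catch that case too:

```yaml
# Create the deploy symlink only if nothing exists at that path yet,
# so a link repointed by jenkins is left alone.
deploy-symlink:
  cmd.run:
    - name: ln -s /srv/http/app/releases/deploy-master /srv/http/app/src/src-app
    - unless: test -e /srv/http/app/src/src-app
```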
15:46 StDiluted well, I believe file.symlink checks to see if the link is there before it fires?
15:46 StDiluted I haven't noticed that it recreates it every time
15:46 SEJeff_work You can force overwrite the symlink, but that is the opposite of what I want
15:46 SEJeff_work default to deploy-master
15:46 forrest can't you just set force to false?
15:46 dizzyd anyways, thanks StDiluted
15:47 StDiluted dizzyd: no worries.
15:47 SEJeff_work but if deploy-YYYY-MM-DD-HH-mm-SS-username is the path leave it alone or something vaguely like that
15:47 forrest by default it's set that way
15:47 SEJeff_work forrest, Yeah, but then it fails
15:47 SEJeff_work I don't want the state to fail
15:47 forrest salt.states.file.symlink(name, target, force=False, makedirs=False, user=None, group=None, mode=None, **kwargs)¶
15:47 SEJeff_work I'm super ocd in that I do all of the ordering and requirements so running states the first time always works
15:47 StDiluted SEJeff, I think that the default behavior of file.symlink is what you are wanting
15:48 SEJeff_work StDiluted, forrest I'll give it a shot, thanks gentleman
15:48 forrest yea I agree, default should work fine, it won't overwrite.
15:48 andrew_seattle joined #salt
15:49 forrest If it is overwriting even with the defaults set I think you should file a bug
15:49 StDiluted Oh, I see, you want it to ignore the fact that the path in your salt state doesn't match the path that jerkins made
15:49 StDiluted jenkins*
15:49 SEJeff_work YES
15:49 StDiluted I get it
15:49 SEJeff_work but if it is a new build, default to the latest master
15:49 SEJeff_work or a hardcoded name
15:50 SEJeff_work We are moving our internal webapps to continuous integration / continuous deployment with salt + jenkins
15:50 StDiluted hrm. I think the best way to do that would be to keep the path in a pillar that gets written to by Jenkins or is somehow external
15:50 StDiluted and sub in the variable in your salt state
15:50 SEJeff_work I can do some cmd.run hax0ry
15:51 SEJeff_work Well jenkins updates the symlinks and hot reloads the app for 0 downtime
15:51 StDiluted right
15:51 SEJeff_work We use mozilla circus to run the wsgi apps and apache + mod_proxy to proxy to circus
15:51 SEJeff_work all in a cluster using the redhat cluster software. Super slick setup
15:51 SEJeff_work Trying to automate the snot out of it
15:51 StDiluted but you want /var/www/app/current symlink not to get overwritten by salt
15:52 SEJeff_work but yes
15:52 StDiluted when Jenkins changes it to the current build
15:52 SEJeff_work /srv/http/app/src/src-$app
15:52 SEJeff_work thats the symlink ^^
15:52 StDiluted right
15:53 StDiluted how about grabbing a dir listing in /srv/http/app/src, checking dates and returning the latest one as the target for the symlink, and making that an external pillar?
15:54 scalability-junk SEJeff_work: I haven't read it all, but isn't StDiluted right? jenkins just changes a pillar, then salt updates the symlink and reloads the app.
15:55 scalability-junk for 0 downtime too.
15:55 StDiluted that way salt just knows what the path should be and won't overwrite the symlink
15:55 UtahDave joined #salt
15:55 scalability-junk a salt run could actually be triggered by jenkins so no delay would come up either.
15:56 StDiluted sure
15:57 JasonSwindle TheRealBill:  Thanks for the English help on my Gist :P
15:57 scalability-junk and you can always see the data within the pillar and easily overwrite it if you want
15:57 scalability-junk if jenkins fails you don't have to resymlink; just change the pillar and wait for jenkins to run again, then change it back or to a later build.
15:58 JaredR joined #salt
16:01 chrisgilmerproj joined #salt
16:01 tsheibar joined #salt
16:03 whiskybar joined #salt
16:06 jpadilla joined #salt
16:06 jalbretsen joined #salt
16:07 nliadm anyone know how salt-thin is used/built? I'd like to play with it
16:09 jimallman joined #salt
16:11 napperjabber_ joined #salt
16:12 Linz joined #salt
16:13 SpX joined #salt
16:13 dthom91 joined #salt
16:14 bitz joined #salt
16:14 jdenning joined #salt
16:15 tsheibar terminalmage: you rock, sir….your workaround worked perfectly regarding the 'providers' for yum
16:15 tsheibar thank you for that
16:16 KyleG joined #salt
16:16 KyleG joined #salt
16:16 StDiluted how can i require a group to be created before a user is created that needs that group?
16:16 UtahDave yes, terminalmage does rock!!
16:17 UtahDave StDiluted: make sure you have a group.present for the group, and then when you're creating the user  add:     - require:\n  - group: <group name>
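Spelled out as a state file, UtahDave's suggestion looks like this (the group and user names are placeholders):

```yaml
deploy:
  group.present:
    - name: deploy

deployuser:
  user.present:
    - name: deploy
    - groups:
      - deploy
    - require:
      - group: deploy   # references the group.present state ID above
```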
16:18 StDiluted ok, that's what I thought
16:19 terminalmage tsheibar: no prob!
16:20 terminalmage tsheibar: I'd still like to find out why the pkg module didn't work in the first place
16:21 terminalmage but actually, I think I know
16:21 tsheibar oh?
16:21 FreeSpencer joined #salt
16:21 terminalmage so, grains are used to determine which provider should be used for pkg
16:22 terminalmage if you have a problem accessing grains, that's going to keep salt from assigning yumpkg as pkg
16:22 terminalmage I'll test with a standalone minion
16:23 terminalmage with no master cachedir
16:23 terminalmage looks like we might actually have a bug here
16:23 terminalmage and anyway, a standalone minion should still properly detect the OS and assign the proper providers
16:24 tsheibar ok….I mean, I had it working locally with vagrant on my laptop (mac), but when I cloned the repo on a shiny new centos6 VM, I encountered the issue
16:24 chrisgilmerproj joined #salt
16:24 SEJeff_work StDiluted, forrest I ended up doing this, which circumvents the problem entirely: https://gist.github.com/SEJeff/c81792f130a4ad72d176
16:25 terminalmage yeah, it's kinda weird, but if it's a bug I want to fix it
16:25 terminalmage fix ALL THE THINGS
16:25 tsheibar lol
16:25 SEJeff_work \o/ #winning
16:27 cedwards UtahDave: did a 0.16.2 get pushed yet after the fix yesterday?
16:28 UtahDave cedwards: We're planning on pushing it this afternoon.  Do you have something else that should be included?
16:28 chadhs joined #salt
16:29 ponderability1 joined #salt
16:30 cedwards UtahDave: nothing so far, just hoping I hadn't missed a packagers announcement
16:30 SEJeff_work UtahDave, Did you see that git magic command I did earlier? It should help quite a bit in writing the changelog
16:30 UtahDave Ah, ok.  Yeah, we'll make sure to email everyone
16:30 napperjabber joined #salt
16:30 UtahDave SEJeff_work: no, I didn't. Lemme go look
16:31 SEJeff_work UtahDave, Do this in a checkout with everything (develop and master) up to date
16:31 SEJeff_work UtahDave, git log --pretty=oneline --no-merges $(git tag -l | sort -n -k2 -t. | tail -n1)..develop
16:31 UtahDave ah, cool. nice
16:33 UtahDave wow, that's a lot of commits
16:33 ponderability1 joined #salt
16:34 Lue_4911 joined #salt
16:34 SEJeff_work UtahDave, Thats all commits between the newest git tag and develop
16:34 SEJeff_work which I'll assume is what you'll roll into 0.16.2
16:34 SEJeff_work So basically 0.16.1..develop
16:35 terminalmage SEJeff_work: we just port bugfixes to the minor releases
16:35 UtahDave actually we cherrypick bug fixes into 0.16.2.  New features stay in develop
16:35 terminalmage ha, I won
16:35 terminalmage :D
16:35 UtahDave terminalmage is a fast typer
16:36 SEJeff_work UtahDave, Gotcha, did you by chance get terminalmage's awesome patch to prevent the loader from griping on every minion start and invocation of salt-call?
16:36 terminalmage And I'm not even using my das keyboard
16:36 SEJeff_work Super super annoying
16:36 UtahDave SEJeff_work: I'm not sure. basepi, did we cherry pick that?
16:36 SEJeff_work UtahDave, If it isn't, please make sure this gets in: https://github.com/saltstack/salt/commit/75eac606c08a316ab7ca064c44a82e4c61419322
16:37 SEJeff_work I rolled a special 0.16 package with that added.
16:37 jimallman joined #salt
16:37 SEJeff_work Also, 0.16.1 doesn't seem to be in EPEL/Fedora
16:37 basepi SEJeff_work: looks like you're linking that directly off of the 0.16 branch.  so it's in
16:37 SEJeff_work win
16:37 terminalmage yeah 0.16 is the bugfix branch
16:38 StDiluted yaml question: how do i set grains in a minion file that are an array?
16:38 StDiluted enclose them in []?
16:38 basepi SEJeff_work: was that the only one you wanted me to look at for cherry-picking?
16:38 StDiluted ah wait
16:38 StDiluted never mind
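For the record, list-valued grains in the minion config are just plain YAML sequences (example values invented):

```yaml
# /etc/salt/minion (fragment)
grains:
  roles:
    - webserver
    - memcache
```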
16:38 SEJeff_work basepi, I've not looked at the others. I just know that bug really irked me in the 0.16.0 package, so I rolled a custom one with that patch
16:39 basepi kk
16:39 basepi yep, it's in
16:40 SEJeff_work awesome
16:40 ipmb joined #salt
16:40 SEJeff_work Looks like it was in 0.16.1, but there is no bloody changelog, please fix that guys
16:40 SEJeff_work You need to do a better job with release notes
16:42 terminalmage basepi: we can probably do most of the work of creating a changelog by using git log... I can work on this if you want
16:42 basepi SEJeff_work: we're often bad about it on bugfix releases.  but in this case, 0.16.1 never officially got released
16:42 basepi we only notified the packagers, then hit bugs which are leading to 0.16.2.
16:43 SEJeff_work basepi, It is a tag in git, it was officially released :)
16:43 basepi terminalmage: that would be great
16:43 basepi SEJeff_work: and it's on PyPI, so you're right
16:43 basepi but no announcement on the list, for example
16:43 terminalmage basepi: ok, I'll wrap up what I'm working on right now and get to it
16:43 terminalmage testing tsheibar's issue with standalone minion at the moment
16:44 tsheibar cool…re: release notes, you could hack the github api a little to grab all the issues from merged pull requests between two dates
16:44 tsheibar or revisions
16:44 tsheibar git log is certainly much easier :)
16:45 tsheibar leave it to developers to add levels of complexity
16:45 SEJeff_work :D
16:45 basepi plus we don't cherry-pick everything, so we'd still have to weed out the bugfixes.  =)
16:46 terminalmage tsheibar: sorry, I just noticed that, since you are running a standalone minion, you needed to use salt-call rather than salt
16:46 tsheibar oh I am
16:46 terminalmage can you run grains.items with salt-call?
16:46 terminalmage and pastebin it?
16:47 tsheibar sure
16:47 terminalmage because I tried on CentOS6 and the pkg provider was detected
16:47 tsheibar "Function minion-id is not available" pwned
16:47 terminalmage no
16:47 terminalmage you don't need minion-id with salt call
16:47 tsheibar oh ok
16:47 terminalmage minion-id is the host of the minion
16:47 terminalmage sorry
16:47 terminalmage salt-call grains.items
16:47 tsheibar processing...
16:48 terminalmage make sure you sanitize anything you don't want the world to see
16:48 terminalmage of course
16:48 jpadilla joined #salt
16:50 StDiluted http://missingm.co/2013/07/identical-droplets-in-the-digitalocean-regenerate-your-ubuntu-ssh-host-keys-now/
16:50 denstark Can you require a cmd.run properly? for example: https://gist.github.com/anonymous/ac498f7abbea36ac0f06
16:50 StDiluted for anyone using DO
16:50 tsheibar terminalmage: http://pastebin.com/RwWmLAFh
16:51 djn joined #salt
16:52 terminalmage tsheibar: heh, so the parts that you sanitized are what was needed
16:52 terminalmage hahaha
16:52 terminalmage that explains it
16:52 tsheibar which ones?
16:52 tsheibar well
16:52 terminalmage the os and os_family grains in particular
16:52 tsheibar ok
16:53 terminalmage if you're using a custom spin, that would cause the OS detection in salt not to match
16:53 tsheibar those are the same, but it seems those have been changed to my company's name
16:53 tsheibar that's probably why
16:53 tsheibar ok
16:53 tsheibar whew
16:53 tsheibar well, having to add providers: pkg: yumpkg is totally fine if I can't get that changed
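The workaround tsheibar is referring to is a provider override in the minion config, forcing yum as the pkg provider when OS detection fails:

```yaml
# /etc/salt/minion (fragment)
providers:
  pkg: yumpkg
```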
16:53 terminalmage ok, did they by chance muck with the /etc/redhat-release?
16:54 scalability-junk btw anyone on the salt-user mailing list?
16:54 scalability-junk did my mail go through?
16:54 tsheibar yeah, most likely…I'm using their spin and don't know all of what they changed
16:54 tsheibar yes
16:54 tsheibar they definitely did
16:54 terminalmage ok. well that's solved at least
16:54 tsheibar awesome
16:54 tsheibar thanks (again) for working through that with me
16:55 qybl joined #salt
16:55 tsheibar I'll update that issue…which is now closed, but would be good to document
16:55 terminalmage hey, no prob. btw, in case you're ever curious and want to dig into the code, the logic that detects whether or not to use a given module is in the __virtual__ function
16:56 tsheibar nice, ok
16:56 tsheibar yeah, I've been meaning to do a deep dive
16:56 terminalmage tsheibar: https://github.com/saltstack/salt/blob/develop/salt/modules/yumpkg.py#L104
16:57 terminalmage the logic in that function almost certainly didn't match the grains you posted, hence it failed to be recognized as the pkg provider
16:58 bhosmer_ joined #salt
16:58 tsheibar right, yeah…..perhaps a regex against /etc/redhat-release for 'redhat/centos' if it's not found in the grains
16:58 tsheibar looks like we can assume a yum-based system at that point anyway
16:58 tsheibar I'll mess with it and see if I can make something salt worthy
16:59 tsheibar ha - looks like we found an answer for that TODO
16:59 gadams joined #salt
16:59 gadams joined #salt
17:00 terminalmage ok. check salt/grains/core.py for the OS detection code
17:00 terminalmage it's what builds those grains when the minion starts
17:00 tsheibar ah, nice…I will check that out
17:01 napperjabber_ joined #salt
17:06 Ryan_Lane joined #salt
17:11 forrest When you have a group of machines defined, lets say [server1, server2, server3], is there anything like batch-size that allows you to work through them in an ordered fashion? So I need to apply a configuration to server1, then a configuration to server2, etc.
17:12 JasonSwindle joined #salt
17:14 StDiluted forrest: a for loop in bash?
17:15 forrest StDiluted, I'd rather not do a for loop, I've got several hundred servers that this would have to apply to.
17:15 StDiluted ah
17:16 SEJeff_work forrest, Thats basically what overstate does
17:16 SEJeff_work overstate is to a group of minions what state.highstate is to a single minion
17:16 SEJeff_work with dependencies and the whole fancy
17:19 gadams joined #salt
17:19 gadams joined #salt
17:20 forrest Is there any way with overstate to restart a machine after the configuration is applied?
17:20 waverider joined #salt
17:20 devinus joined #salt
17:21 LyndsySimon joined #salt
17:22 brianhicks joined #salt
17:22 gadams joined #salt
17:22 berto- joined #salt
17:23 SEJeff_work forrest, So something a lot of people don't realize or forget is that a lot of state files don't have to be in top.sls. It is perfectly fine to have a state file that is only manually applied via state.sls or from an overstate
17:23 denstark I think I'm going to do all of my stuff via state.sls
17:23 JasonSwindle terminalmage:  You work fast! #6484 Summary on finish of Highstate
17:24 terminalmage JasonSwindle: yeah I liked that idea
17:24 juanlittledevil joined #salt
17:24 dizzyd I feel silly asking this, but is there a good one/two pager on what "highstate" is?
17:24 cron0 joined #salt
17:24 JasonSwindle Awesome.
17:24 terminalmage that's something that we kinda wanted at my old job but never got implemented
17:24 forrest yea dizzyd, http://docs.saltstack.com/ref/states/highstate.html
17:24 JasonSwindle 0.17 is going to be big
17:24 UtahDave forrest: Yeah, you can have an overstate restart a server.  Also, you can use the batch option to a regular salt highstate
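A sketch of the overstate approach UtahDave mentions, working through machine groups in a fixed order (matches and state names are invented); the batch alternative would be something like `salt '*' state.highstate -b 1`:

```yaml
# /srv/salt/overstate.sls (hypothetical)
first_tier:
  match: 'server1*'
  sls:
    - appconfig

second_tier:
  match: 'server2*'
  sls:
    - appconfig
  require:
    - first_tier      # second_tier only runs after first_tier completes
```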
17:25 terminalmage JasonSwindle: yeah, something else I added during the sprint that I'm super excited about, is that you'll be able to list, restore, and delete past file backups from the master
17:25 dizzyd forrest: that tells me what a "highstate" is composed of…but not what it _is_? Or am I missing that?
17:25 terminalmage that'll be in 0.17
17:25 JasonSwindle terminalmage:  That will be handy
17:26 terminalmage so, say you nuke a config file on 200 nodes
17:26 terminalmage you can revert with a single command
17:26 JasonSwindle terminalmage:  What may be handy is a postgres dump + backup State
17:26 terminalmage provided you're using file backups with your file.managed states
17:26 JasonSwindle Something like WAL-E built into Salt
17:26 JasonSwindle but could be used for POSTGRES or whatever the backup could touch
17:27 forrest ok cool, thanks SEJeff_work and UtahDave.
17:27 juanlittledevil hi guys, sorry to bug you, I'm having an odd problem and I'd like some input as to how to troubleshoot this. One of my minions does not seem to be responding to the master. I have checked the keys, I can initiate a salt-call from the minion and it updates just fine. However doing a highstate from master just times out and returns nothing. Any ideas?
17:27 terminalmage don't think that would be supported, this is specific to the file state backup system
17:27 forrest Yea dizzyd, I couldn't find a great example past what is referenced here: http://docs.saltstack.com/ref/modules/all/salt.modules.state.html#salt.modules.state.highstate
17:27 stpehendotexe joined #salt
17:27 terminalmage JasonSwindle: http://docs.saltstack.com/ref/states/backup_mode.html <--- this
17:27 dizzyd HIGHSTATE: WAT IS IT?! :)
17:28 stephendotexe joined #salt
17:28 auser joined #salt
17:28 stephendotexe man. Is a busy IRC channel the sign of a healthy project or what?? :)
17:28 auser hey all
17:28 terminalmage juanlittledevil: can you check how many salt-minion processes are running on the minion?
17:29 juanlittledevil just one
17:29 gadams joined #salt
17:29 gadams joined #salt
17:30 juanlittledevil if I do an lsof on the minion I can see that it's got an established connection to master:4505
17:30 stephendotexe Can someone explain why grains['branch'] returns "cannot concatenate 'str' and 'list' objects"? The context in the SLS file is this:
17:31 stephendotexe - rev: {{ grains['branch'] }}
17:31 stephendotexe in git.latest
17:31 terminalmage ok, I've seen two salt-minion instances running on a box before, causing an issue. another problem I've seen seems to be cache-related. try renaming the cachedir for the troublesome minion from the master. It should be at /var/cache/salt/master/minions/<minion-id>
17:32 stephendotexe grains.items on the host returns:
17:32 stephendotexe branch: some_git_branch
17:33 terminalmage stephendotexe: that isn't one of the core grains, how are you defining this grain?
17:34 juanlittledevil ok this is odd.
17:34 juanlittledevil I can do a grains.items
17:34 stephendotexe In the minion file:
17:34 juanlittledevil but not a state.highstate
17:34 juanlittledevil or cmd.run
17:34 stephendotexe grains:
17:34 stephendotexe git:
17:34 stephendotexe - sprint_branch
17:34 TheRealBill JasonSwindle: re: your gist yesterday: np, my one pet peeve is that one. ;)
17:34 gadams joined #salt
17:34 gadams joined #salt
17:34 stephendotexe I think I just realized I need - name: sprint_branch
17:34 terminalmage stephendotexe: try to pastebin
17:35 terminalmage for long pastes
17:35 waverider joined #salt
17:35 terminalmage keeps it easier to read in IRC
17:35 juanlittledevil I was wrong cmd.run works. but not highstate.
17:36 terminalmage stephendotexe: so, your custom grain would be called 'git'
17:36 stephendotexe How would I refer to it in an init.sls file? {{ grains['git'] }} ?
17:36 terminalmage and also, the indentation looks funny. try copy and pasting directly from the minion config into pastebin
17:36 z0rkito can any explain why using the same pillar value in two different loops causes this error: Detected conflicting IDs, SLS IDs need to be globally unique. The conflicting ID is "base" and is found in SLS "base:svn.local" and SLS "base:svn.global" ?
17:37 scalability-junk z0rkito: cause pillars and states are merged
17:37 z0rkito so.
17:37 scalability-junk the whole pillar data is flat
17:38 z0rkito that doesn't make any sense.  for x in y: do something, for x in y: do something different shouldn't cause a conflict...
17:39 scalability-junk so the first one runs the second one gives the error?
17:39 JaredR joined #salt
17:39 z0rkito no it doesn't run at all, it gives the error i pasted.
17:40 austin987 joined #salt
17:40 terminalmage stephendotexe: it looks like your git grain is a list, which is why you can't refer to it like you were trying to do
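Given the minion config pasted above (`grains: git: [sprint_branch]`), `grains['git']` renders as a list, so an element has to be indexed. A sketch, with a made-up repo and target path:

```yaml
checkout_app:
  git.latest:
    - name: git://example.com/app.git     # hypothetical repository
    - rev: {{ grains['git'][0] }}         # first element of the list grain
    - target: /srv/app
```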
17:42 gadams joined #salt
17:45 JasonSwindle TheRealBill:  Yeah, I see. :)
17:45 terminalmage stephendotexe: might want to read up on YAML if you're confused on usage.
17:45 terminalmage took me a little reading to figure it out initially, myself
17:46 druonysus joined #salt
17:46 gadams joined #salt
17:48 dthom91 joined #salt
17:51 SpX joined #salt
17:53 masm joined #salt
17:55 stephendotexe Yikes.
17:55 stephendotexe It looks like comments don't stop the yaml jinja engine from failing
17:55 stephendotexe I had to remove the actual code before it stopped failing on the bad jinja code.
17:55 Fernandos what is the difference between puppet and salt?
17:56 scalability-junk z0rkito: sorry had an emergency issue
17:56 diegows joined #salt
17:56 stephendotexe Where do we report salt bugs?
17:56 scalability-junk z0rkito: that's cause the namespace is flat, but I thought it overwrites conflicts without any issue...
17:56 scalability-junk UtahDave: any idea?
17:56 UtahDave stephendotexe: https://github.com/saltstack/salt/issues
17:57 stephendotexe gracias
17:57 UtahDave z0rkito: can you pastebin what you have so far?
17:57 UtahDave stephendotexe: de nalgas
17:57 LyndsySimon joined #salt
17:59 z0rkito UtahDave: http://pastebin.com/4wBaa5xh
17:59 juanlittledevil is there some verbose level I can set or a log that will tell me what's being executed from the master that might help me figure out why I can't do a highstate to a minion but cmd.run works just fine?
18:00 z0rkito the svn.sls is the pillar.
18:00 z0rkito minus the sensitive data
18:00 UtahDave juanlittledevil: try running your minion in the foreground in the cli in debug mode    salt-minion -l debug
18:01 SpX joined #salt
18:01 juanlittledevil k
18:01 UtahDave z0rkito: Oh, it's because you're setting the ID declaration to "base"  because your svn.sls has "base" as an item.   "base" is a reserved word. That's the main environment.
18:02 z0rkito UtahDave: hmm that's interesting.   if i combine global.sls and local.sls into a single sls it works.... it only fails if they are in two separate sls files.
18:03 koolhead17 joined #salt
18:03 z0rkito UtahDave: that's whats frustrating about it and why it doesn't make any sense.
18:03 UtahDave z0rkito: change "base" to some other word.
18:03 juanlittledevil so if I run it in foreground it works just fine.
18:04 z0rkito UtahDave: that's not an option, it's a file name that needs to be downloaded.  also like i said, if i make it one sls it works.  if it works running it from a single sls why doesn't it work running it from two separate sls files?
18:06 KyleG joined #salt
18:06 KyleG joined #salt
18:07 UtahDave z0rkito: So all your sls files get compiled into a big python dict.  If you have conflicting keys one will win
18:08 StDiluted Fernandos, do you have specific questions regarding the difference between Puppet and Salt?
18:08 z0rkito UtahDave: but only if those conflicting keys happen to be in two different sections of salt.  if they're in the same exact section it doesn't care...
18:09 Fernandos StDiluted: no just in general
18:09 UtahDave z0rkito: I'm still not 100% what's going on here, but you can't have conflicting ID declarations at all
18:10 z0rkito UtahDave: my issue here is that it shouldn't be conflicting period.  The value of a dict can be the same as the value of a different dict without any issues in python.
18:11 StDiluted you can't have duplicate keys
18:11 UtahDave z0rkito: StDiluted is correct.  You're setting "base" to be the key.  So you can't have two "base" keys.
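One common way out of the collision, assuming a loop shape like the one in the pastebin (the pillar key and paths here are invented): keep the state ID unique per sls file and move the shared filename into a `name` declaration, so two files called "base" no longer produce two states with the ID "base":

```yaml
{% for f in pillar.get('svn_files', []) %}
local_{{ f }}:                          {# ID prefixed, unique to this sls #}
  file.managed:
    - name: /srv/local/{{ f }}          {# the file on disk can still be "base" #}
    - source: salt://svn/local/{{ f }}
{% endfor %}
```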
18:14 StDiluted z0rkito: you can't change your sls name from base to something else?
18:14 z0rkito StDiluted: it's not an sls name. it's a value of a pillar.
18:14 z0rkito StDiluted: http://pastebin.com/4wBaa5xh
18:15 z0rkito StDiluted: and no i can't change the name of it, it's downloading two different files named base from two different locations.
18:15 StDiluted hm
18:15 stephendotexe So i'm very new to YAML and Jinja, but how do you refer to grains within an init.sls file? {{ grain['os'] }} or {% grains['os'] %} ?
18:16 UtahDave change the names to - baseglobal      and - basesvn     or something like that
18:16 UtahDave stephendotexe: the first
18:16 stephendotexe thanks
18:16 z0rkito also what is troublesome about it is that it works if i put both of those for loops into init.sls.  it only stops working if the two loops are in separate files.
18:16 saurabhs joined #salt
18:17 StDiluted if you have two separate sls's that have the same id, it fails.
18:17 StDiluted if they are in the same sls, you get a warning instead
18:17 StDiluted if you look in your minion config, I'm betting you get a warning
18:17 StDiluted err minion log
18:17 m_george left #salt
18:19 KyleG joined #salt
18:19 KyleG joined #salt
18:20 stephendotexe Can someone take a look at this pastebin and explain how I'm using grains incorrectly? http://pastebin.com/HX0DfBPe
18:22 UtahDave stephendotexe: {{ grains['sprint'] }}
18:22 UtahDave stephendotexe: I'd also recommend  {{ grains.get('sprint', 'defaultvalue') }}
18:23 stephendotexe ah
18:23 stephendotexe Let me try it quick
18:23 stephendotexe UtahDave you the man :D
18:25 stephendotexe UtahDave: So why use defaultvalue over calling it directly?
18:26 UtahDave if for some reason 'sprint' doesn't exist, then it won't blow up on you and give you stacktrace
18:26 UtahDave if you leave off the default value it will return None.
18:26 UtahDave But it won't stacktrace
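The difference in a nutshell (grain name and default value taken from the conversation):

```yaml
{{ grains['sprint'] }}                  {# stacktrace if 'sprint' is missing #}
{{ grains.get('sprint') }}              {# renders None if missing #}
{{ grains.get('sprint', 'default') }}   {# renders 'default' if missing #}
```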
18:26 KyleG joined #salt
18:26 KyleG joined #salt
18:27 stephendotexe Nice
18:29 UtahDave Time for some lonche
18:29 Fernandos left #salt
18:31 jschadlick joined #salt
18:31 druonysuse joined #salt
18:32 stephen__ joined #salt
18:34 KyleG joined #salt
18:34 KyleG joined #salt
18:34 LyndsySimon joined #salt
18:36 KyleG joined #salt
18:36 KyleG joined #salt
18:37 jdenning joined #salt
18:38 diegows joined #salt
18:40 nate888 joined #salt
18:42 nate888 hi, I am hoping for a quick nudge in the right direction... how do I replace an existing file with the file.managed? I seem to be able to create a file with file.managed if it does not exist, but if it is already there, my source file never seems to take.
18:43 StDiluted nate888: the file.managed should replace the target file
18:44 StDiluted you should get a diff of the new file vs. the old one
18:44 FreeSpencer joined #salt
18:44 TheRealBill unless the old file matches the 'new' file. seems simple but can easily be accidentally the same.
18:44 nate888 Am I kicking things off incorrectly by using "salt '*' state.highstate"?
18:45 StDiluted no that's right
18:45 TheRealBill (yes, this is the voice of experience.)
18:45 StDiluted lol, Bill, I've had that happen as well
18:45 StDiluted nate888: what's the exact behavior you see?
18:45 nate888 I wish there was no diff between old and new....
18:48 nate888 A few different things... I'm pushing out an hbase config. So, the debian package I've built includes an hbase-site.xml file, then via a requires hierarchy (the file.managed requires the package), I want to push out an updated hbase-site.xml via a source file from my salt server. Additionally, when I change the source file in the salt server, that appears not to replace the file it already pushed out...
18:49 nate888 could this be permissions on the salt server?
18:50 mannyt joined #salt
18:51 krissaxton joined #salt
18:51 StDiluted can you paste your state sls file somewhere?
18:51 nate888 one sec
18:51 stephendotexe What's the best way to execute a script stored in /srv on the master? Do I download it first, and then execute it on the host?
18:52 StDiluted stephendotexe: cmd.script:\n - source: salt://script.sh
18:52 StDiluted as long as it is in your file root
18:52 StDiluted like /srv/salt
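StDiluted's suggestion as a full state, assuming the file root is /srv/salt and the script lives at /srv/salt/script.sh:

```yaml
run_my_script:
  cmd.script:
    - source: salt://script.sh
```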
18:52 stephendotexe StDiluted: thanks I'll try that
18:54 bhosmer_ joined #salt
18:54 dthom91 joined #salt
18:54 nate888 http://pastebin.com/WHZW2hSK
18:54 StDiluted looking
18:55 berto- joined #salt
18:55 nate888 thx
18:56 koolhead17 joined #salt
18:56 koolhead17 joined #salt
18:56 StDiluted might be because of the 'create: True'
18:56 StDiluted I've never used that declaration, and not seen the issue you are
18:56 nate888 I'll get rid of it and see. one sec
18:58 StDiluted create: True should be the default anyway
18:58 nate888 no change...
18:59 StDiluted also you can probably manage all those nested directories a bit easier using the makedirs: True declaration in your file.managed states.
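A sketch of that makedirs shortcut; the paths and the package state are guesses at nate888's hbase layout:

```yaml
/etc/hbase/conf/hbase-site.xml:
  file.managed:
    - source: salt://hbase/hbase-site.xml
    - makedirs: True        # creates the parent directories if missing
    - require:
      - pkg: hbase          # hypothetical pkg.installed state for the package
```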
19:00 nate888 :) I expect there are a lot of things I can do better, still quite a newb with salt
19:00 StDiluted nate888: have you tried running a salt-call -l debug state.highstate on the minion?
19:00 StDiluted might give you a little more information about what it's actually doing
19:00 nate888 will do. one sec
19:02 nate888 well that's a lot of messages.... of note there seems to be an exception at the end
19:03 nate888 http://pastebin.com/6SqytQH3
19:03 StDiluted yeah gives a lot of output
19:03 nate888 did I break my config (the link is just the traceback...)
19:04 nate888 ?
19:04 StDiluted not sure why it gave that error, actually
19:05 gadams joined #salt
19:05 gadams joined #salt
19:05 KyleG1 joined #salt
19:06 aranhoide joined #salt
19:06 nate888 http://pastebin.com/AKma1Emb
19:06 nate888 that is the full debug output
19:07 StDiluted looking
19:08 nate888 It doesn't seem to mention pulling/rendering any of the source files
19:09 StDiluted seems like it's puking on grains in opts.opts
19:09 StDiluted are you running a custom grain?
19:10 nate888 not that I am aware of
19:10 tsheibar terminalmage: I just created a pull request to get rid of that workaround with custom /etc/centos-release info
19:10 aranhoide left #salt
19:10 tsheibar it's that thing that didn't allow me to install packages through Salt, but I could by running yum directly
19:10 tsheibar https://github.com/saltstack/salt/issues/6452
19:11 StDiluted nate888: line 149 indicates that there's a problem in the opts.py grain
19:12 tsheibar err, https://github.com/saltstack/salt/pull/6452
19:13 StDiluted terminalmage: any comment on this issue of nate888's?
19:14 StDiluted his last pastebin has some weird errors related to grains that look like they are the stock grains acting up
19:14 StDiluted nate888, what version of salt are you running
19:14 nate888 Could a messed up set of pillar code induce something like this?
19:15 StDiluted if __pillar__ somehow got defined as something else, yes
19:15 jeddi nate888: seeing the same problem on other minions / with a stripped down configuration?
19:15 JaredR joined #salt
19:15 dthom91 joined #salt
19:15 nate888 no, that I didn't do (redefine __pillar__)
19:16 StDiluted seems like it thinks either __opts__ or __pillar__ is a str, and has no method of 'get'
19:16 StDiluted might be __opts__
19:16 napperjabber_ joined #salt
19:16 StDiluted and not __pillar__
19:18 nate888 jeddi: when I explicitly call an sls via state.sls <statename> the issue manifests as well. I'll see if I can whip up a simplified version... I am still hoping for someone to call me a dummy for missing something simple....
19:18 jesusaurus StDiluted: have a second? i tried implementing something similar to your mysql formula, but im hitting an error
19:19 jesusaurus I'm not sure why I'm getting: State debconf.set found in sls galera is unavailable
19:19 StDiluted jesusaurus: can you paste galera.sls?
19:20 StDiluted do you have the package debconf-utils installed?
19:20 napperjabber_ joined #salt
19:23 jesusaurus StDiluted: http://pastie.org/private/o2zho8a34mkbb2yohk9mlq
19:23 jesusaurus i have debconf installed, is there a seperate package debconf-utils?
19:24 jesusaurus oh weird, i have debconf-utils installed on the master, but the minions dont think thats a valid package name
19:24 StDiluted hrm
19:26 jesusaurus ohh... its not in my partial mirror, lets see if thats the root of the problem
19:26 LyndsySimon joined #salt
19:27 isomorphic joined #salt
19:28 StDiluted I believe the minions need that package for it to work
19:28 tsheibar joined #salt
19:29 nate888 jeddi: http://pastebin.com/aawyEdvK I am able to replicate this issue as indicated in this paste
19:30 jesusaurus We need to note the dependency on debconf-utils (as opposed to just debconf) in the debconfmod module and state documentation
19:30 FreeSpencer joined #salt
19:31 StDiluted jesusaurus: probably a good plan. Did that fix it?
19:33 stephendotexe I like yaml for readability but man.. the sls files get really long.
19:33 nate888 StDiluted: jeddi: http://pastebin.com/50LfDHZB is the salt-call -l debug output from http://pastebin.com/aawyEdvK that seems to be a replica of my earlier issue. All this leads me to think that something is pathologically wrong with my salt install ...
19:33 troyready joined #salt
19:33 StDiluted what version are you using, nate888?
19:34 nate888 0.15.3 (from the ubuntu ppa I believe)
19:34 StDiluted that may be a bug in .15.3
19:35 StDiluted I don't know if it is or not
19:35 dthom91 joined #salt
19:35 druonysus joined #salt
19:35 druonysus joined #salt
19:36 stephendotexe Can anyone recommend a good YAML parsing/syntax checker for VIM?
19:37 nate888 I'm running an apt-get update / reinstall it looks like the ppa now has 0.16.0
19:38 StDiluted yes
19:38 StDiluted it's been updated
19:38 SEJeff_work Does the salt fileserver support file:// paths?
19:39 SEJeff_work I have a state that does a git checkout
19:39 SEJeff_work then I want to run a virtualenv.managed and feed it the requirements from the git checkout
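One way to wire that up without the fileserver at all: point virtualenv.managed at the requirements file on the minion's own disk after the checkout. Paths and the repo are invented, and whether a given Salt version accepts a local (non-salt://) requirements path is worth verifying:

```yaml
app_src:
  git.latest:
    - name: git://example.com/app.git   # hypothetical repository
    - target: /srv/app/src

app_env:
  virtualenv.managed:
    - name: /srv/app/env
    - requirements: /srv/app/src/requirements.txt
    - require:
      - git: app_src                    # checkout must happen first
```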
19:39 halla joined #salt
19:45 juanlittledevil joined #salt
19:46 nate888 StDiluted: 0.16.0 behaves the same. I ran "apt-get install salt-minion" and "apt-get install salt-master"; followed by a service stop and start on all nodes, then re-ran my example tests, and the opts.py issue remains in the debug output as well as the file not being replaced issue
19:46 JasonSwindle Is there a way to make Salt-Cloud use bootstrap git deploy?
19:46 StDiluted bizarre
19:48 tsheibar joined #salt
19:49 carmony ok, I have a question about managing packages with salt. Using vagrant & vagrant-salt with the debug option, and it's showing an output of "[INFO    ] Executing command 'apt-get -q update' in directory '/root'" every time it checks to see if the package is the latest
19:49 carmony is there a way to have salt just call that once?
19:50 nate888 the opts.py file clearly has a get method....
19:50 nate888 oops, no, it doesn't... it is checking for one
19:50 whit joined #salt
19:51 krissaxton joined #salt
19:52 StDiluted right
19:52 StDiluted and not finding it
19:52 StDiluted carmony: yes?
19:53 carmony lol, I'm just drawing a blank on how to prevent it from executing an apt-get -q update for each package every highstate
19:53 nate888 so it seems, where should that method be defined?
19:53 jesusaurus StDiluted: I think that installing debconf-utils solved *that* problem. theres a new problem popping up that probably isnt related
19:54 carmony StDiluted: any guidance in what direction or where in the docs to look?
19:54 StDiluted carmony: I don't think that's possible. I think that's a necessary part of the process
19:54 auser joined #salt
19:54 StDiluted it needs to update the apt database to know that it's the most current package
19:54 auser hey all
19:54 StDiluted and there's no way of determining whether it was previously run
19:55 JasonSwindle auser:  Hey!
19:55 StDiluted hey auger!
19:55 StDiluted hey Jason
19:55 auser how goes JasonSwindle
19:55 carmony ok, its just a pain when I have 50 packages or so I'm managing, and it executes an update 50 times within a 3-4 minute period
19:55 StDiluted jesusaurus: ah, well, glad the other is solved
19:56 StDiluted carmony: agreed, and it does slow things down a tad.
19:56 JasonSwindle Made progress on my deploy……. dev is currently being used
19:57 ipmb Can somebody help me get up and running with salt-cloud? I'm missing something: https://dpaste.de/JwkPs/
19:58 qba73 joined #salt
19:58 dthom91 joined #salt
19:59 Nitron joined #salt
20:02 ponderability1 joined #salt
20:02 UtahDave carmony: that's fixed in git.  Also are you listing your packages with   - pkgs?
20:02 carmony yup
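(For reference, the `- pkgs` form UtahDave asks about installs a whole list of packages in one transaction instead of one state per package, which cuts down the repeated apt-get runs; the package names here are illustrative:)

```yaml
common-packages:
  pkg.installed:
    - pkgs:
      - git
      - curl
      - htop
```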
20:04 ponderability2 joined #salt
20:05 chrisgilmerproj joined #salt
20:05 stephendotexe joined #salt
20:06 napperjabber joined #salt
20:09 qba73_ joined #salt
20:10 david_a joined #salt
20:10 diegows joined #salt
20:16 ipmb ok, I've got salt-cloud building a VM at digital ocean, but it's repeatedly asking me for the root password during the build process
20:18 LyndsySimon joined #salt
20:22 UtahDave carmony: can you test with the latest git? I'm pretty sure that's been fixed.
20:23 UtahDave ipmb: what version of Salt-cloud are you using?
20:23 ipmb from the ppa
20:23 * ipmb checks
20:23 ipmb 0.8.9
20:23 carmony UtahDave: yeah, just need to figure out how to get salty-vagrant to install the git version
20:23 ipmb carmony: iirc, there's a setting in the vagrantfile for that
20:24 stephendotexe How do I require a cmd.run to return "changed" before a new command is run?
20:24 carmony ipmb: let me check
20:24 ipmb carmony: https://github.com/saltstack/salty-vagrant#install-options
20:24 ipmb install_type
20:25 carmony ipmb: awesome, thanks
20:25 carmony can I just say
20:25 carmony I love this channel
20:25 forrest Hey s0undt3ch, do you know what the timeline is looking like for building out the rhel specific functions in https://github.com/saltstack/salt-bootstrap/blob/develop/bootstrap-salt.sh ?
20:26 aranhoide joined #salt
20:26 aranhoide left #salt
20:27 juanlittledevil joined #salt
20:28 JaredR_ joined #salt
20:29 forrest blink__ it doesn't work for RHEL if you don't have the optional repo already enabled.
20:29 aboe joined #salt
20:30 forrest it enables EPEL then tries to download jinja2, which someone was kind enough to incorrectly name jinja2-26 in the EPEL repo, so it won't install that, and then will fail to install salt because jinja2 is a dependency. Works fine if you enable the optional repo though
20:30 brianhicks joined #salt
20:30 eculver_ joined #salt
20:30 tsheibar joined #salt
20:30 forrest yes
20:31 KyleG joined #salt
20:31 KyleG joined #salt
20:31 forrest jinja2 is in the optional repo
20:31 forrest jinja2-26 is in epel: http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/python-jinja2-26.html
20:31 forrest which is kind of dumb
20:31 s0undt3ch forrest: hmm, should be working
20:31 juanlittledevil How do you guys go about managing mount points with salt? I've got a situation where I've got some mysql db's running mysql multi, thus I'm having to manage different or multiple mount points. When I was working with puppet I would have manifests for each host, i.e. mounts.fqdn. Is there a way to do something like this with state files? so I can include something like db.dirs.{{ grains['fqdn'] }} as the state but still have a default
20:31 FreeSpencer joined #salt
20:31 s0undt3ch forrest: if it's not for you, please create a ticket with all information you can gather...
20:32 SEJeff_work terminalmage, Any idea when 0.16.2 is going to be released "officially"? Just curious about a ballpark
20:32 s0undt3ch SEJeff_work: hello there!
20:32 UtahDave SEJeff_work: It should be released to the packagers this afternoon/evening
20:32 s0undt3ch SEJeff_work: you once we after sentry integration, weren't you?
20:32 * SEJeff_work waves at s0undt3ch !
20:32 kenbolton joined #salt
20:32 UtahDave juanlittledevil: I think there's a mount state, is there not?
20:32 SEJeff_work s0undt3ch, Yeah I adore sentry for python stuff. Nothing comes remotely close to it
20:33 s0undt3ch s/we/were
20:33 SEJeff_work I use it for pretty much everything
20:33 terminalmage SEJeff_work: I was talking about this with basepi earlier today and he said that 0.16.2 would likely be pushed to pypi today
20:33 terminalmage but he wasn't sure
20:33 SEJeff_work terminalmage, WIN
20:33 forrest s0undt3ch, this was already brought up in the user group: https://groups.google.com/forum/#!msg/salt-users/ZyatYnb95QY/ADngjIjVF8cJ , as well as a bugzilla post: https://bugzilla.redhat.com/show_bug.cgi?id=844710
20:33 s0undt3ch SEJeff_work: k, you'll have a nice suprise in 0.17 ;)
20:33 s0undt3ch *surprise
20:33 SEJeff_work s0undt3ch, yay
20:33 s0undt3ch SEJeff_work: and logstash
20:33 SEJeff_work not a returner, but actual default integration from the config?
20:33 SEJeff_work We pay a gazillion $$$ for splunk
20:33 s0undt3ch SEJeff_work: logging
20:33 SEJeff_work so I don't much care about logstash :D
20:33 SEJeff_work s0undt3ch, Well what about exceptions
20:34 SEJeff_work I know I know that in a perfect world salt will never have exceptions, however, it does
20:34 terminalmage we've been trying to announce to the packagers first before the release is announced officially, to limit the amount of "When is the package for X going to be available"
20:34 SpX joined #salt
20:34 juanlittledevil UtahDave: yes there is, but not all of these are mounts some of them are just directories and they have different paths on different hosts. Is there a right vs. wrong way of managing something like this?
20:34 SEJeff_work it would be nice in the minion config to configure sentry and have any traceback sent to sentry
20:34 s0undt3ch SEJeff_work: you'll get *all* the logging salt does in sentry, just set your desired logging level
20:34 SEJeff_work then you'd have a "dashboard" of salt exceptions
20:34 SEJeff_work s0undt3ch, thats cool
20:34 s0undt3ch SEJeff_work: that will be possible ;)
20:35 SEJeff_work s0undt3ch, I just wanted exceptions, but that is still very cool
20:35 SEJeff_work And I would likely use it
20:35 SEJeff_work s0undt3ch, Only 152 issues to fix before 0.17! Lets get crankin'
20:35 s0undt3ch SEJeff_work: just need to finish it -> https://github.com/s0undt3ch/salt/tree/features/even-more-logging-power
20:35 SEJeff_work s0undt3ch, yes please!
20:36 s0undt3ch heheheh, that's some nice pumping to get me going ;)
20:36 forrest blink__ , did you find documentation regarding enabling the optional repo via the command line as opposed to via RHN? I usually work centos so some of the way this is configured is odd to me with the subscriptions and everything
20:36 SEJeff_work s0undt3ch, If you ever come to the US and A, I owe you $beverage_of_choice for this: https://github.com/s0undt3ch/salt/commit/d5ebba5b24ac3699e29b90094de7fab11a433dc3
20:36 s0undt3ch SEJeff_work: hehehe
20:36 SEJeff_work s0undt3ch, Also, raven has a udp mode
20:37 SEJeff_work which I would highly suggest you consider using and maybe even defaulting to as it is 0 performance impact
20:37 s0undt3ch SEJeff_work: you defined what ever protocol you wish to use and raven supports :)
20:37 s0undt3ch *define
20:37 Nitron left #salt
20:37 forrest blink__ , I don't disagree with you but I didn't pay for it, I'll take a look at the rhn-channel command, thanks.
20:37 SEJeff_work s0undt3ch, Perhaps just put a note in the docs / code about using udp for MOAR performance
20:38 SEJeff_work s0undt3ch, rocking though!
20:38 s0undt3ch SEJeff_work: I'm also creating a thread mixin to allow slow handler to pass the heavy work load to a separate thread
20:38 juanlittledevil is there a way I could include a certain state if it exists, else include a different default state?
20:38 SEJeff_work That works certainly
20:39 forrest blink__ , I know, I'm constantly doing that and it drives me nuts
20:39 z0rkito does anyone have an example of using py or pydsl states with pillars.... i can't seem to find any examples and trial and error is producing nothing but errors.
20:39 juanlittledevil I don't think jinja can deal with that.
20:39 s0undt3ch SEJeff_work: yeah, just struggling with MRO resolution at the moment
20:39 SEJeff_work ?
20:40 terminalmage SEJeff_work: I'm working on that changelog, about 40% of the way through the commits in the bugfix branch
20:40 terminalmage (that came after 0.16.0)
20:40 s0undt3ch SEJeff_work: once done I'll point you to the MRO issue
20:40 UtahDave juanlittledevil: can you pastebin an example of what you're trying to do? There might be a simpler way.
20:40 SEJeff_work terminalmage, You're a gentleman and a scholar!
20:41 terminalmage I am neither ;)
20:41 terminalmage hahahaha
20:41 jdenning joined #salt
20:41 terminalmage j/k
20:41 cluther joined #salt
20:41 TOoSmOotH quick question about salt.states.pkg.. Can I use a wildcard in the - require_in: part? like:
20:41 TOoSmOotH - require_in:
20:41 TOoSmOotH - pkg: Custom-*
20:42 SEJeff_work no
20:42 terminalmage TOoSmOotH: I don't think so, but you're welcome to test it and file an issue
20:42 SEJeff_work At least I don't believe so
20:42 SEJeff_work It would be a cool feature
20:42 terminalmage fwiw, in your top.sls you can refer to states using wildcards
20:42 UtahDave TOoSmOotH: You can require an entire SLS file that you're including.
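(A sketch of what UtahDave describes: requiring an entire included SLS file rather than wildcarding individual pkg IDs. File and package names are made up, and the `- sls:` requisite form assumes a recent enough Salt:)

```yaml
include:
  - custom-repo

Custom-app:
  pkg.installed:
    - require:
      - sls: custom-repo
```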
20:43 TOoSmOotH well I am trying to add a custom repo
20:43 terminalmage at least something like "- webserver.*" works
20:43 UtahDave terminalmage: you can? huh, I didn't know that.
20:43 TOoSmOotH where I have custom packages
20:43 terminalmage yeah, I don't know if "webserver.foo*" would work though
20:43 juanlittledevil this doesn't work btw… I obviously am misinterpreting how the jinja include works. http://pastebin.com/y9r4YkHu
20:43 pcarrier joined #salt
20:43 TOoSmOotH I mean I guess I could just keep the files in sync vs using the pkgrepo.managed stuff
20:44 cluther left #salt
20:44 TOoSmOotH not as sexy though
20:44 TOoSmOotH :)
20:46 p3rror joined #salt
20:46 TOoSmOotH Does anyone have the ssh_auth working properly in 0.16
20:46 TOoSmOotH ?
20:47 TOoSmOotH I cannot get it to copy the ssh keys over
20:47 robbyt joined #salt
20:47 TOoSmOotH says the key is already there
20:48 jeffasinger Is there a way to force a package to be upgraded if theres an upgrade available from a state?
20:48 SEJeff_work jeffasinger, pkg.latest :)
20:48 jeffasinger SEJeff_work: Thanks, I seem to be having an awful time navigating the docs today :)
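(The difference SEJeff_work is pointing at, sketched with an illustrative package name:)

```yaml
nginx:
  pkg.latest: []      # installs *and* upgrades when the repo has a newer version

# contrast with pkg.installed, which only ensures some version is present:
# nginx:
#   pkg.installed: []
```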
20:48 SEJeff_work no worries
20:49 SEJeff_work I always google as such: docs salt $thing
20:49 SEJeff_work jeffasinger, Or: inurl:docs.saltstack.com pkg state
20:50 jeffasinger Thanks
20:50 juanlittledevil UtahDave: I'm really trying to find a way to keep the state file small and clean. I could probably get around doing this with a single state file but it will likely get ugly really quickly. So this is why I thought something like this might make the state more readable.
20:50 SEJeff_work anytime
20:53 devinus how are you guys deploying salt states on production boxes in a repeatable manner?
20:54 JaredR_ joined #salt
20:55 UtahDave devinus: what do you mean by in a repeatable manner?
20:56 devinus UtahDave: like, let's say you spool up a new server. what are the next steps you take to provision it?
20:56 devinus install salt with your pkg mgr and mount your salt states with NFS?
20:56 devinus or what?
20:57 UtahDave devinus: I generally use salt-cloud to spin up a new server, so salt-minion is already installed for me.
20:57 UtahDave Otherwise, yes I'll install salt from the OS's repo.  No need to mount the states with NFS. I keep them on the Salt master
20:57 racooper devinus,  my method is to add EPEL then yum install salt-minion, before doing anything else :)
20:58 forrest I think his question relates more to the machine checking in with the master and figuring out 'how do I apply this configuration to these specific machines'
20:58 UtahDave devinus: And then I have my top.sls set up in such a way that when I run  a highstate on the new minion, the minion gets all the correct state files.
20:59 devinus UtahDave: okay, so you basically have one honking set of salt states that configure your entire infrastructure?
20:59 TheRealBill devinus: I create a firstboot script which installs salt and fires it up. I use the hostname to identify which set of states it will get, and the only step in between is to have the server accept the minion cert. I'm working on a way around that as well
21:00 UtahDave devinus: well, all my states are on the master, and the appropriate ones get applied to the new server based off the top.sls
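(A minimal top.sls of the kind UtahDave describes: a new minion gets exactly the states whose targets it matches on its first highstate. The match patterns and state names are illustrative:)

```yaml
base:
  '*':
    - common
  'web*':
    - services.httpd
  'db*':
    - services.mysql
```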
21:01 dizzyd doesn't salt transfer states to minions?
21:01 dizzyd i.e. you don't need to download them on each minion ala chef
21:01 dizzyd devinus: is that what you're getting at?
21:02 UtahDave dizzyd: correct. just stick your states in the file_roots (/srv/salt/  by default)  and Salt takes care of distributing them for you
21:02 krissaxton joined #salt
21:03 devinus dizzyd: i'm not familiar with chef, but what i'm working out is how i can develop salt states per-app instead of per-infrastructure
21:03 dizzyd devinus: I'm a n00b, but I believe pillars are what you use to apply specific states to specific machines
21:03 dizzyd (or apps)
21:04 EugeneKay Pillar is a place you can store data to be distributed to minions
21:04 UtahDave devinus: You can keep all your states bundled however you'd like.
21:04 SEJeff_work Here is how I organize my states...
21:04 SEJeff_work I don't use environments
21:04 devinus i've got salt + salty-vagrant and i'm developing salt-states alongside my app. i'd like to have one salt-master orchestrate several top.sls files if that makes any sense
21:04 SEJeff_work I have /srv/salt/{roles,default,datacenters,roles,services}
21:04 SEJeff_work services.httpd, services.haproxy, services.mongod for instance
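(Roughly, the layout SEJeff_work describes looks like this on the master; the top-level directory names are his, the file names underneath are guesses:)

```text
/srv/salt/
    top.sls
    default/
    datacenters/
    roles/
    services/
        httpd/init.sls
        haproxy/init.sls
        mongod/init.sls
```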
21:05 devinus /srv/salt/* on the master?
21:05 SEJeff_work devinus, yes
21:05 devinus yeah, i'm trying to figure out the most elegant way to do that right now.
21:05 SEJeff_work devinus, Then I have a "inventory database app role" which includes all of the services that are required and then extends those states and includes pillar bits to configure that specific incarnation
21:06 devinus i'm more concerned with the most physically elegant way to manage the states
21:06 SEJeff_work devinus, I have a config management database which maps a bunch of /16 networks to a datacenter name authoritatively, so I autogenerate all of the default datacenters/$datacentername/init.sls if they don't exist already
21:06 SEJeff_work devinus, git :D
21:06 devinus NFS mount points, git, I'm not sure
21:06 SEJeff_work devinus, git
21:07 devinus my "roles" are my apps i guess
21:07 SEJeff_work I've heard a lot of complaints from the gitfs backend so tend to shy away from it
21:07 SEJeff_work exactly
21:07 devinus and i'm distributing their salt states in their repos
21:07 SEJeff_work an app is a collection of services with customized configs
21:07 devinus so maybe i need to repartee their states from their repos.
21:07 devinus separate*
21:07 SEJeff_work Thats one way to do it yeah
21:07 SEJeff_work I prefer a single place for all states
21:08 SEJeff_work Makes it much simpler to manage
21:08 devinus SEJeff_work: yeah, i'd do that too but that seems to go against the salty-vagrant vibe
21:09 devinus where your states will be under project/salt
21:09 akoumjian devinus: waah? I was the first dev on salty-vagrant and that's how I use it.
21:09 SEJeff_work devinus, I don't use vagrant as I have 0 virtual machines
21:09 SEJeff_work but several thousand real machines
21:09 mannyt joined #salt
21:10 fredvd joined #salt
21:10 akoumjian Unless I misunderstand
21:10 akoumjian Oh, now i see.
21:10 kstaken joined #salt
21:10 akoumjian devinus: No, you do not want your salt states inside of your app repo
21:10 giantlock joined #salt
21:10 devinus akoumjian: how do you do it then?
21:11 akoumjian devinus: Two separate repositories. Your (django/rails/node/java) app is only one piece of your infrastructure
21:11 devinus akoumjian: but you keep your vagrant file in your app repo
21:11 devinus akoumjian: and your salt states in a separate repo?
21:12 akoumjian devinus: Yeah, but that's just as a convenience for devs
21:12 devinus and then do a synced folder to that separate repo?
21:12 juicer2 joined #salt
21:12 akoumjian devinus: exactly
21:12 devinus akoumjian: yeah, i'm bridging dev/ops right now and trying to accommodate both
21:12 akoumjian devinus: Our stack is set up so that if the minion grain env != 'dev' then clone the app git repo, else do nothing (because it's mounted locally)
21:13 devinus akoumjian: hem…so each app/role/whatever has a separate folder in your states repo and can each of those have a top.sls file and diff envs?
21:13 akoumjian devinus: There is only one top.sls. It says if you match grain X, then do Y
21:14 akoumjian devinus: Our minions have grains set in their minion config, so on develop it matches pretty much everything and installs everything. On production different VMs are running different roles
21:14 akoumjian devinus: But yes to the first part. Each role has its own folder, and shared states have their own folders as well so they can be included in the role formulas
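(The grain matching akoumjian describes, sketched; the grain names and values are made up:)

```yaml
# /etc/salt/minion on each box:
# grains:
#   env: dev

# top.sls on the master:
base:
  'env:dev':
    - match: grain
    - webapp
    - database
  'role:web':
    - match: grain
    - webapp
```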
21:15 JaredR joined #salt
21:15 bdf does anyone have a link to any better documentation on gitfs other than the 'tutorial'
21:15 devinus i wish there was a higher lvl overview of this kinda stuff
21:16 bdf I'm trying to work out how the branching/enviornment selection is supposed to work
21:16 devinus i'm like gleaning what i can from IRC and scattered github salt states
21:17 bdf yep
21:17 JaredR joined #salt
21:17 akoumjian devinus: Deployment is hard and salt is not opinionated because there are many different use cases. However, a book on 'patterns in salt' would be awesome
21:21 carmony UtahDave: running from git fixed that problem
21:23 stephendotexe Can someone point me to documentation on how I can create custom states? Something like "salt "*" state.non_production" to run after state.highstate is run.
21:24 devinus SEJeff_work: can i do cascading environments with gifts?
21:24 devinus gitfs*
21:25 SEJeff_work devinus, I've only seen issues with gitfs and have not touched it. The code will eventually be rewritten wholesale to use libgit2 I believe
21:25 akoumjian stephendotexe: You can run a particular salt formula by running salt '*' state.sls formula_name
21:25 quantumsummers|c joined #salt
21:25 akoumjian stephendotexe: where formula_name points to formula_name.sls or formula_name/init.sls
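(So, concretely — assuming a state file exists at /srv/salt/non_production.sls or /srv/salt/non_production/init.sls:)

```shell
salt '*' state.highstate            # the usual full run
salt '*' state.sls non_production   # run just that one formula afterwards
```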
21:25 stephendotexe akoumjian: oh I didn't realize it was that easy.
21:25 akoumjian it is! :-)
21:30 napperjabber_ joined #salt
21:33 TOoSmOotH How do I make a minion check in every certain amount of time?
21:33 TOoSmOotH just a cron job that calls highstate, say, every 15 minutes?
21:35 BRYANT__ joined #salt
21:42 aranhoide joined #salt
21:44 Corey TOoSmOotH: You can do that, *BUT* I would strongly advise doing it on the master, not the minion.
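(A sketch of what Corey suggests, driven from the master rather than from each minion; the schedule, path, and targeting are illustrative:)

```text
# /etc/cron.d/salt-highstate on the *master*
*/15 * * * * root salt '*' state.highstate
```

For a few thousand minions you'd likely want to stagger the run rather than hit everything at once — e.g. salt's batch mode, `salt -b 10% '*' state.highstate`.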
21:45 TOoSmOotH lets say I have a few thousand minions and I want to spread them out
21:45 aranhoide left #salt
21:47 stephendotexe What do you do for state.highstate's that take a long time? Just send the job to the background?
21:49 dizzyd tmux? :)
21:49 SEJeff_work stephendotexe, if you do --timeout 1 the client times out
21:50 SEJeff_work that doesn't mean the highstate is stopped or cancelled
21:50 SEJeff_work it keeps running on the minions
21:50 SEJeff_work you can see it's status using the jobs running
21:50 SEJeff_work *runner
21:50 stephendotexe oh sweet
21:50 SEJeff_work salt-run jobs -d I think will give you the docs
21:50 SEJeff_work left #salt
21:50 SEJeff_work joined #salt
21:51 juanlittledevil joined #salt
21:51 SEJeff_work stephendotexe, salt-run jobs.active to get the jid. Then try: salt-run jobs.print_job # I think that is what you want
21:51 SEJeff_work And that data is stored by default for a few days I believe
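(Putting SEJeff_work's sequence together; the jid placeholder is whatever jobs.active reports:)

```shell
salt '*' state.highstate --timeout 1   # client returns quickly; minions keep running
salt-run jobs.active                   # list running jobs and their jids
salt-run jobs.print_job <jid>          # inspect a specific job's results
```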
21:51 stephendotexe SEJeff_work: thanks a million. This IRC channel rules
21:52 SEJeff_work stephendotexe, Pay it forward! The next time you know the answer to someone's question and have the time to answer it, do so
21:52 SEJeff_work Thats what makes our community so great
21:52 SEJeff_work we help eachother
21:54 JaredR joined #salt
21:59 geopet joined #salt
22:01 stephendotexe Here's what you add to your vimrc to tell it that sls files are actually yaml: au BufNewFile,BufRead *.sls setlocal ft=yaml
22:01 stephendotexe dat beautiful pastel color scheme
22:02 dthom91 joined #salt
22:03 krissaxton joined #salt
22:05 akoumjian stephendotexe: I don't use vim but I know this exists: https://github.com/saltstack/salt-vim
22:07 opapo joined #salt
22:09 stephendotexe Nice!
22:11 jslatts joined #salt
22:13 tsheibar joined #salt
22:13 zooz joined #salt
22:14 jschadlick left #salt
22:19 UtahDave devinus: yes, you can do cascading environments with gitfs
22:20 dthom91 joined #salt
22:24 LyndsySimon joined #salt
22:26 qba73 joined #salt
22:29 VertigoRay joined #salt
22:29 jessep joined #salt
22:30 forrest Has anyone ever set up a job in salt where it actually monitors logs? So you're monitoring a file on the server; when you move/rename it, the LB stops sending connections; then you monitor the logs until certain conditions are met, and then it restarts the box?
22:33 juanlittledevil joined #salt
22:35 oz_akan_ joined #salt
22:38 VertigoRay Hello!  Trying to get minion setup on OSX.  The docs say to brew install swig zmq.  Then pip install salt.  Then says to edit the /etc/salt/minion file to set the 'master' variable to something other than 'salt'.  The /etc/salt dir doesn't exist though.
22:39 z0rkito forrest: why not use monit (http://mmonit.com/monit/download/)
22:40 forrest Never heard of it before z0rkito, I'll take a look, thanks.
22:41 VertigoRay Also, the google group salt-users suggests running brew install salt. Neither one generates an /etc/salt dir.
22:41 mmilano joined #salt
22:43 aranhoide joined #salt
22:49 forrest VertigoRay, are there salt files anywhere on your system?
22:49 napperjabber_ joined #salt
22:49 forrest I assume pip completed successfully?
22:51 forrest and are you using the system python, or a homebrew?
22:51 avienu cedwards: How do you handle dealing with /etc/make.conf ?
22:55 JaredR joined #salt
22:55 bastion2202 joined #salt
22:56 FL1SK joined #salt
22:56 bastion2202 any hint on the best way to install the epel-release rpm using salt?
22:57 forrest bastion2202, you could store it in a repo on the salt master?
22:57 forrest then just add that repo when you do the build, and install epel that way
22:57 kstaken joined #salt
22:58 bastion2202 do you mean create a rpm repo on salt-master ?
22:58 forrest yes
22:58 forrest or another server if you have one already configured on your network
22:58 bastion2202 but then I will have to install my own repo rpm before epel
22:59 forrest http://docs.saltstack.com/ref/states/all/salt.states.pkgrepo.html
22:59 forrest Do you have to install the epel rpm?
22:59 forrest Or can you just drop in the path
23:00 forrest So then you don't have to install the epel repo at all
23:01 bastion2202 I am curious to know if it is possible with pkg.installed to do something like yum install URL/package.rpm
23:02 aat joined #salt
23:03 forrest The docs say you can do that with sources
23:03 forrest sources
23:03 forrest A list of packages to install, along with the source URI or local path from which to install each package. In the example below, foo, bar, baz, etc. refer to the name of the package, as it would appear in the output of the pkg.version or pkg.list_pkgs salt CLI commands.
23:04 krissaxton joined #salt
23:04 forrest http://docs.saltstack.com/ref/states/all/salt.states.pkg.html
23:04 forrest about 2/3 of the way down that page
23:04 timops joined #salt
23:05 forrest Does that do what you wanted?
23:05 aranhoide left #salt
23:06 UtahDave bastion2202: yeah, you can install an rpm from any arbitrary location with pkg.installed
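(A sketch of installing the EPEL release RPM straight from a URL with `sources`, per the pkg.installed docs quoted above; the URL here is a placeholder:)

```yaml
epel-release:
  pkg.installed:
    - sources:
      - epel-release: http://example.com/pub/epel-release-6-8.noarch.rpm
```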
23:07 chrisgilmerproj left #salt
23:10 jessep joined #salt
23:10 VertigoRay forrest: there are salt files in /usr/local/bin, but not what I'm expecting. pip did complete successfully.  using system python.  From new install of osx: installed xcode cli tools, then homebrew (with ruby -e as shown on homebrew site), then ran through the home brew steps on salt docs osx install steps.
23:11 VertigoRay forrest: running 10.8.4
23:11 dthom91 joined #salt
23:12 forrest VertigoRay, Hmm ok then I'm not sure, I'm not familiar enough with the OSX install to provide much more past that. What did it bomb out on when you tried using the system python? I assume you ran it as sudo/root?
23:14 VertigoRay forrest: which salt returns nothing.  so does which salt-minion.  the installs from brew run as an admin (brew won't run as root). ran pip install as root tho. just reverted to clean snapshot ... surely I screwed something up.  gonna start fresh tomorrow.
23:15 forrest VertigoRay, ok cool, hopefully someone who has a little more osx experience will be around to provide assistance.
23:16 VertigoRay forrest:  thanks for the quick attempt.  maybe a fresh start after some sleep with coffee buzzing through my veins will make a diff ;)
23:16 forrest VertigoRay, hah you'll figure it out halfway home I'm sure.
23:17 VertigoRay forrest: yeah .. won't be the first time.
23:17 kermit joined #salt
23:18 lucasvickers joined #salt
23:19 kenbolton joined #salt
23:19 lucasvickers joined #salt
23:23 JaredR joined #salt
23:23 cron0 joined #salt
23:30 mikedawson joined #salt
23:33 hazzadous joined #salt
23:36 mikedawson joined #salt
23:38 nate888 I was working with StDiluted earlier to no avail on a problem where I cannot replace a file with one I just created. the following links have the error and code to replicate it...   http://pastebin.com/50LfDHZB is the salt-call -l debug output from http://pastebin.com/aawyEdvK  . Since the earlier chatting I have uninstalled and reinstalled salt 0.16.0 via the ubuntu ppa, removed /var/cache/salt and reconfigured all minions and master
23:39 nate888 And still cannot replace an existing file.... there seems to be a persistent error in the output about opts.py not having a get method...
23:40 nate888 I am not pythonic enough to figure out where __opts__.get should be defined...
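(For what it's worth: `__opts__` inside Salt modules is the minion/master config as a plain Python dict, so `__opts__.get` is just ordinary `dict.get` — it isn't defined in opts.py. The keys below are illustrative:)

```python
# __opts__ is a plain dict of config, so .get is dict.get:
opts = {"id": "minion1", "file_client": "remote"}

print(opts.get("file_client"))     # present key -> its value
print(opts.get("master", "salt"))  # missing key -> the supplied default
```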
23:46 pdayton joined #salt
23:47 cxz joined #salt
