
IRC log for #salt, 2013-12-29


All times shown according to UTC.

Time Nick Message
00:03 MedicalJaneParis aha! prereqs are the answer
00:20 munderwo joined #salt
00:27 nebuchadnezzar joined #salt
00:27 Heartsbane joined #salt
00:30 forresta MedicalJaneParis, Weird, I'll have to look at force and how the docs are worded; I thought it wouldn't do an overwrite
00:31 MedicalJaneParis nope, there is an ln option, --no-dereference, which is different from force; that would fix the problem, but I got around it by just having a prereq on whether other folders were created
00:32 forresta gotcha
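A minimal sketch of the prereq approach MedicalJaneParis describes, with hypothetical paths. Note that prereq fires a state before the state it names, and only when that named state reports pending changes, so plain require is the simpler fit if all you need is ordering:

    /srv/app/releases/new:
      file.directory:
        - makedirs: True

    /srv/app/current:
      file.symlink:
        - target: /srv/app/releases/new
        - force: True                      # overwrite an existing link/file
        - prereq:
          - file: /srv/app/releases/new    # run only when the dir state has changes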
01:07 micko joined #salt
01:07 oz_akan_ joined #salt
01:26 wonhunawks joined #salt
01:34 bhosmer joined #salt
01:35 rojem joined #salt
01:36 rojem joined #salt
01:38 quanta_ joined #salt
01:53 mgw joined #salt
02:08 oz_akan_ joined #salt
02:10 joshe joined #salt
02:16 rojem joined #salt
02:22 ajw0100 joined #salt
02:28 druonysus joined #salt
02:29 flebel joined #salt
02:32 taion809 joined #salt
02:37 eskp joined #salt
02:37 ravibhure joined #salt
02:37 eskp left #salt
02:39 Sheco joined #salt
02:44 Uthark joined #salt
02:45 eskp joined #salt
02:46 eskp left #salt
02:56 canci joined #salt
02:58 pdayton joined #salt
03:02 cachedout joined #salt
03:05 btaitelb joined #salt
03:08 oz_akan_ joined #salt
03:12 quanta_ joined #salt
03:16 cachedout joined #salt
03:21 cachedout joined #salt
03:23 bhosmer joined #salt
03:25 mgw joined #salt
03:27 blarghmatey_ joined #salt
03:28 flebel joined #salt
03:28 cachedout joined #salt
03:36 cachedout joined #salt
03:36 ckao joined #salt
03:38 blarghmatey joined #salt
03:44 cachedout joined #salt
03:46 quanta_ joined #salt
03:52 quanta_ joined #salt
04:07 quanta_ joined #salt
04:11 oz_akan_ joined #salt
04:33 quanta_1 joined #salt
05:00 bhosmer joined #salt
05:00 quanta_ joined #salt
05:10 nebuchadnezzar joined #salt
05:11 bhosmer joined #salt
05:12 oz_akan_ joined #salt
05:22 orion joined #salt
05:22 orion left #salt
05:26 ravibhure1 joined #salt
05:38 anuvrat joined #salt
05:50 sfvivek1 joined #salt
06:12 oz_akan_ joined #salt
06:36 matanya joined #salt
06:39 cachedout joined #salt
06:59 bhosmer joined #salt
07:01 MedicalJaneParis anyone have luck with mysql states on ubuntu? i installed mysql client and mysqldb python module but i get "State mysql_user.present found in sls is unavailable"
07:01 MedicalJaneParis nvm, i didn't do the mysql minion config. funny how i always find the answer after i type :x
07:19 cachedout joined #salt
07:32 MedicalJaneParis are there any examples using random numbers or one-off values in pillar for a server? i'm trying to set up mysql replication, and want to generate a unique int id for the nodes i set up
07:35 MedicalJaneParis seems like the approach would be to put values in each node's minion config, and reference them in a jinja template?
07:43 MedicalJaneParis aha, opts variable :)
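For the record, the approach being landed on here: set a per-node value in each minion's config file and read it through the opts dictionary, which is available in Jinja-rendered templates. The mysql_server_id key and the template path are hypothetical:

    {# files/my.cnf.jinja -- rendered with template: jinja #}
    [mysqld]
    {# mysql_server_id is a hypothetical custom key set per node in /etc/salt/minion #}
    server-id = {{ opts.get('mysql_server_id', 1) }}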
08:36 bhosmer joined #salt
08:48 bhosmer joined #salt
08:48 rmt joined #salt
08:51 rmt basepi, could you maybe ping Thatch regarding https://github.com/saltstack/salt/issues/7669 ?  Salt won't make it into my org without a feature like this..
08:54 bdf doesn't salt-ssh kind of give you that?
09:07 _ilbot joined #salt
09:07 Topic for #salt is now Welcome to #salt - SaltConf Jan 28-30, 2014! http://saltconf.com (reg deadline January 3) | 0.17.4 is the latest | Please be patient when asking questions as we are volunteers and may not have immediate answers - Channel logs are available at http://irclog.perlgeek.de/salt/
09:15 oz_akan_ joined #salt
09:27 quanta_ joined #salt
09:39 toastedpenguin joined #salt
09:42 rmt bdf, if I wanted to use ssh, I'd use ansible. ;-)
09:47 rmt bdf, I work in an organisation with a couple of hundred sysadmins and hundreds of developers working on many unique applications.. and many thousands of servers across many datacenters and clouds.  There's usually one or two sysadmins working with a group of developers to manage their servers, which generally shouldn't be touched by other sysadmins or developers (but can be touched by our support organisation).. we have standard automated deployment using puppet.. but 101 different ways to manage systems after they're deployed.
09:49 rmt It would be nice to have a generic communication & RPC framework that we can provide as part of the automated deployment that lets teams manage their servers in a single, well-defined way, as well as giving us a global control point for automating common functionality (e.g. staged deployment of new releases on existing servers)
09:50 rmt 99% of the time, all we need is secure point-to-point RPC.
09:52 rmt And the ability to restrict who can run what, when and where - Which we can implement ourselves since we already have our own access and permissions model.
09:55 rmt Given the variance of our environments, and it crossing several trust boundaries, I'd rather not allow people to design their own RPCs if they'll just end up broadcasting secrets to every bloody machine we have.  At some point, one of them will be compromised, and we'll be even more screwed.
10:01 sgviking joined #salt
10:15 oz_akan_ joined #salt
10:29 quanta_ joined #salt
10:29 druonysus joined #salt
10:29 druonysus joined #salt
10:38 ravibhure joined #salt
11:03 druonysus joined #salt
11:03 druonysus joined #salt
11:16 druonysuse joined #salt
11:16 oz_akan_ joined #salt
11:33 gasbakid joined #salt
11:34 taion809 joined #salt
11:42 quanta_ joined #salt
11:46 flebel joined #salt
11:52 psyl0n joined #salt
11:56 aleszoulek joined #salt
12:04 harobed joined #salt
12:17 oz_akan_ joined #salt
12:18 dangra1 joined #salt
12:20 pengunix joined #salt
12:24 bhosmer joined #salt
12:39 harobed_ joined #salt
12:42 william_20111 joined #salt
12:46 anuvrat joined #salt
12:54 quanta_ joined #salt
13:09 quanta_ joined #salt
13:10 cetex joined #salt
13:15 Sheco joined #salt
13:18 oz_akan_ joined #salt
13:26 william_20111 joined #salt
13:47 canci joined #salt
13:49 william_20111 joined #salt
13:59 blarghmatey joined #salt
14:01 bhosmer joined #salt
14:17 bhosmer_ joined #salt
14:18 ravibhure1 joined #salt
14:18 quanta_ joined #salt
14:19 oz_akan_ joined #salt
14:23 wolfpackmars2 joined #salt
14:28 ggoZ joined #salt
14:32 wolfpackmars2 joined #salt
14:32 wolfpackmars2_ joined #salt
14:37 wolfpackmars2_ I'm using a virtualbox VM to create a salt master.  It seems that when the public NIC (bridged nic) is configured as adapter 2 and a NAT nic is configured as adapter 1 (in the virtual machine settings), packets from the WAN are not reaching my salt-master.  However, a local salt-minion (another virtual machine on the same computer) is able to connect to the salt master just fine.
14:38 wolfpackmars2_ if I configure the first network adapter as the bridged adapter, WAN packets reach the salt-master just fine
14:41 wolfpackmars2_ so this is what I'm seeing:
14:41 wolfpackmars2_ Salt-Master virtual machine:  Network adapter 1: NAT; Network Adapter 2: Bridged
14:41 wolfpackmars2_ Salt-Minion VM on same host:  Connects to salt-master fine
14:41 wolfpackmars2_ Salt-Minion VPS on WAN: Cannot connect to salt-master
14:41 wolfpackmars2_ Salt-Master virtual machine: Network adapter 1: Bridged; Network adapter 2: disabled
14:41 wolfpackmars2_ Salt-Minion VM on same host: connects to salt-master fine
14:41 wolfpackmars2_ Salt-Minion VPS on WAN: Connects to salt-master fine
14:41 pdayton joined #salt
14:41 [diecast] joined #salt
14:41 wolfpackmars2_ is there a basic linux networking reason this might occur?  Such as the salt-master server binding to a specific network interface in linux (Debian 7)?
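The salt-master daemon binds every interface by default, so the likelier culprit here is routing rather than binding: with NAT as adapter 1, the guest's default route points at the NAT interface, and replies to WAN traffic never leave via the bridge. For reference, the master's bind option looks like this:

    # /etc/salt/master -- 0.0.0.0 is already the default, i.e. all interfaces
    interface: 0.0.0.0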
14:48 pengunix joined #salt
15:02 ravibhure joined #salt
15:10 justBob joined #salt
15:16 rojem joined #salt
15:19 neilf_ joined #salt
15:19 mtaylor joined #salt
15:19 oz_akan_ joined #salt
15:20 chutz joined #salt
15:20 goki joined #salt
15:21 bwq joined #salt
15:23 Kraln joined #salt
15:26 goki_ joined #salt
15:27 robins joined #salt
15:28 hotbox_ joined #salt
15:28 puppet_ joined #salt
15:28 darrend joined #salt
15:29 LordOfLA|Broken joined #salt
15:29 aleszoulek joined #salt
15:30 nkuttler_ joined #salt
15:32 Guest18932 joined #salt
15:32 djbclark` joined #salt
15:33 litheum joined #salt
15:34 pt|Zool joined #salt
15:36 doki_pen joined #salt
15:36 diegows joined #salt
15:39 jkleckner joined #salt
15:39 zpotoloom joined #salt
15:39 ggoZ joined #salt
15:40 chutz joined #salt
15:40 ClausA joined #salt
15:41 scoates joined #salt
15:42 Sheco joined #salt
15:43 mwmnj joined #salt
15:44 bwq joined #salt
15:46 justBob joined #salt
15:47 nkuttler joined #salt
15:47 nkuttler joined #salt
15:47 MTecknology joined #salt
15:47 MTecknology joined #salt
15:48 nkuttler joined #salt
15:48 nkuttler joined #salt
16:01 bhosmer joined #salt
16:02 jkleckner joined #salt
16:05 tollmanz joined #salt
16:10 ajw0100 joined #salt
16:14 munderwo joined #salt
16:15 martoss joined #salt
16:18 zach joined #salt
16:19 EnTeQuAk joined #salt
16:24 cachedout joined #salt
16:26 oz_akan_ joined #salt
16:27 1JTABM3W0 joined #salt
16:32 Linz joined #salt
16:32 mapu joined #salt
16:43 crazysim joined #salt
16:54 MTecknology joined #salt
17:02 ajw0100 joined #salt
17:12 sfvivek1 joined #salt
17:13 mephx joined #salt
17:14 djinni` joined #salt
17:14 baffle joined #salt
17:15 MK_FG joined #salt
17:16 eliasp joined #salt
17:16 beardo_ joined #salt
17:16 dccc joined #salt
17:16 carmony joined #salt
17:18 fishpen0 joined #salt
17:21 psyl0n joined #salt
17:24 djbclark joined #salt
17:24 djbclark joined #salt
17:25 matanya joined #salt
17:35 pcurry_nomi joined #salt
17:42 stoffell what could be a good way to check "freshness" of minion polling? (they have a scheduled option to poll every 60 minutes for state.highstate) I would like to use nagios to check if they poll regularly.
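For context, the minion-side schedule stoffell refers to is configured roughly like this (interval as stated above; the entry name is arbitrary):

    # /etc/salt/minion
    schedule:
      highstate:
        function: state.highstate
        minutes: 60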
17:49 bhosmer joined #salt
17:55 william_20111 joined #salt
18:02 Jarus joined #salt
18:02 Linz joined #salt
18:10 rojem joined #salt
18:12 Jarus joined #salt
18:13 bhosmer joined #salt
18:14 ajw0100 joined #salt
18:15 kamyl joined #salt
18:21 [diecast] joined #salt
18:22 ajw0100 joined #salt
18:30 jchen joined #salt
18:32 yarik joined #salt
18:41 blarghmatey_ joined #salt
18:52 Psi-Jack Hmmm..
18:55 berto- joined #salt
18:57 bezaban joined #salt
18:57 aleszoulek joined #salt
18:58 Psi-Jack I'm trying to figure out how exactly to get salt to install a custom RPM package. So far, I have it as this: http://paste.linux-help.org/view/434ef5a4  -- which shows it failing to install the expected rpm package.
18:59 Psi-Jack I've checked, and manually confirmed installing the package works just fine.
19:06 martoss joined #salt
19:10 Ryan_Lane joined #salt
19:14 toastedpenguin joined #salt
19:17 Psi-Jack What does that actually mean: SUFFIX_NOT_NEEDED?
19:20 Psi-Jack i think.... It's a bug that involves noarch...
19:25 oz_akan_ joined #salt
19:25 tollmanz joined #salt
19:27 tollmanz joined #salt
19:33 pengunix joined #salt
19:37 bhosmer joined #salt
19:38 tollmanz joined #salt
19:45 martoss joined #salt
19:50 toastedpenguin joined #salt
19:51 Psi-Jack Yep.. definitely a bug. Just trying to determine the actual cause, but it's definitely related to __SUFFIX_NOT_NEEDED being claimed as not defined, yet it is defined; I see it.
19:54 Psi-Jack Ahhhh, found it, and it's been fixed. LOL, just not in a release yet. :/
19:55 Psi-Jack Well, not in an epel-released version. :(
20:10 stoffell how can I see, on the master, the last time a minion ran its state.highstate?
20:10 Psi-Jack You make the highstate touch a file and check its timestamp.
20:11 stoffell hm, so every minion doing a highstate should touch a file, on the master, can you point me in the right direction on how to do that?
20:12 Psi-Jack http://paste.linux-help.org/view/5a19e91a
20:12 Psi-Jack stoffell: There's an example state that does what I said.
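The paste has since expired; a minimal sketch of the idea Psi-Jack describes, with a hypothetical marker path. The state runs with every highstate, and the monitoring system alerts on the file's age:

    highstate-marker:
      file.touch:
        - name: /var/cache/salt/last_highstate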
20:13 stoffell Psi-Jack, thanks. but that will touch a file on the minion, right? (that would work also, I'm monitoring the minions also)
20:14 Psi-Jack Yeah, you're also running a minion on the salt master, right?
20:14 stoffell yes I am. I'm just trying to find a good way to see if all minions are checking in regularly (I've got them scheduled with the minion scheduler).. just want to make sure they check in :)
20:15 Psi-Jack Yep.
20:15 stoffell i'll try that out (and combine it with check_mk/nagios), thanks!
20:15 Psi-Jack So, you write a state that does similar, and you use that on each minion with whatever monitoring solution you're using. I use Zabbix to read those files and report to the zabbix server, for example.
20:15 Psi-Jack Ewww, naggy-os, sorry to hear. :)
20:16 stoffell haha :)
20:16 Psi-Jack Look into Zabbix, it'll save your life. :)
20:16 stoffell yeah, it's also nice, but I'm stuck with nagios/icinga, but I'll be adding a check like that :)
20:17 Psi-Jack heh, I would never work for a company using Nagios. Unless they were willing to switch. It's even in my own personal interview questions.
20:17 stoffell hehe :)
20:17 Psi-Jack "Do you run Nagios? Can I switch it to Zabbix as soon as I'm hired?"
20:19 Psi-Jack I'm actually sadly not kidding. :)
20:21 stoffell I'll look into zabbix again in the future (when time permits) but our current monitoring setup is pretty good so I'm not in a rush :)
20:23 Psi-Jack Zabbix 2.0.9 is flat out, awesome. :)
20:23 rojem joined #salt
20:24 stoffell out of curiosity, any specific feature/reason? or just in general ?
20:24 Psi-Jack It does so much more with so much less. It can, natively, auto-escalate to pagers, auto-action to correct problems, graph the entire situation, and then some. It basically does the full job of monitoring and analytics.
20:25 tollmanz joined #salt
20:25 stoffell okay, will have a look into it again :)
20:26 Psi-Jack That, combined with rsyslog, logstash, you have monitoring and log analysis/search.
20:26 stoffell can you also do distributed monitoring? (probes spread out over several locations)
20:26 Psi-Jack Yep.
20:26 oz_akan_ joined #salt
20:26 stoffell nice, will look into it, I can do that now with check_mk and nagios/icinga, but always looking for better ways :)
20:28 Psi-Jack yeah, Zabbix is definitely the way. I've set up huge Zabbix installations and it works very well. :)
20:30 Psi-Jack heh
20:30 Psi-Jack The one thing I hate the most is... upgrading salt, on hundreds of servers.
20:31 Psi-Jack Working on a scripted approach to handle that because it can't upgrade itself without issues. :)
20:31 stoffell upgrading the minions? it can go bad?
20:31 stoffell ouch, okay...
20:31 Psi-Jack Oh yeaaah. it goes bad. :)
20:31 Psi-Jack salt '*' cmd.run 'yum -y upgrade salt-minion' will fail. :)
20:32 stoffell hm, the first thing I did was create a state to schedule the minions..
20:32 stoffell no scheduling upgrades then :)
20:32 Psi-Jack Nope. :)
20:32 Psi-Jack use alternative ssh-based approaches, or something, to handle that. :)
20:33 Psi-Jack I'm thinking... a cron job to watch for a file; if the file exists, upgrade.
20:33 stoffell okay, thanks for the heads up..
20:33 Psi-Jack That can be set up all through salt to automate. :)
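A sketch of that watcher, deployed as a salt-managed cron entry; the marker path and five-minute interval are assumptions. The point is that cron, not the minion process, performs the upgrade, so the minion can be killed and restarted without aborting its own job:

    salt-upgrade-watcher:
      cron.present:
        - name: '[ -f /var/run/upgrade-salt ] && rm -f /var/run/upgrade-salt && yum -y upgrade salt-minion'
        - user: root
        - minute: '*/5'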
20:33 EugeneKay Obviously you should just recreate a Base image and then redeploy all your machines via salt-cloud
20:33 * EugeneKay ducks
20:34 stoffell :)
20:34 * Psi-Jack uppercuts right into EugeneKay as he ducks.
20:34 Psi-Jack :)
20:34 stoffell I like the friendly salt community...
20:34 stoffell rofl
20:34 Psi-Jack heh
20:35 Psi-Jack I just upgraded my salt to 0.17.4 via epel-testing repo, remembering that issue I just told you about, stoffell. :)
20:35 Psi-Jack I had to log in to 20 servers to manually restart salt-minions. It ran on most of them, but they died.
20:35 Psi-Jack And yaaaay! Now I can install noarch rpm packages!
20:36 stoffell ouch indeed
20:42 Psi-Jack Thankfully, it's just my home cluster. :)
20:42 Psi-Jack Else, it would've been hundreds of servers, not 20. ;)
20:44 jfzhu_us joined #salt
20:44 stoffell yeah, pick your targets wisely and be selective :)
20:45 Psi-Jack That's why I have a test-centos and a test-debian VM dedicated to just salt testing, since it's currently the only reliable way to do so.
20:48 wonhunawks joined #salt
20:49 rojem joined #salt
20:53 cewood joined #salt
20:54 tollmanz joined #salt
21:09 pcurry_nomi joined #salt
21:10 munderwo joined #salt
21:11 dangra joined #salt
21:18 oz_akan_ joined #salt
21:23 elfixit joined #salt
21:25 tollmanz joined #salt
21:26 bhosmer joined #salt
21:34 bhosmer joined #salt
21:35 yano joined #salt
21:53 foxx joined #salt
21:59 jfzhu_us1 joined #salt
22:03 jfzhu_us joined #salt
22:10 martoss joined #salt
22:19 kamyl joined #salt
22:20 jfzhu_us I have several pillars located in a single directory called pkgs. In my top.sls file I only state that the pkgs pillar should be shown to all minions. In my mind that should be only the init.sls pillar inside the pkgs directory but when I do a salt \* pillar.items I see that the other pillars defined within the pkgs directory are also being displayed. Is this expected behavior?
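For context: in a pillar top file, the entry pkgs resolves to pkgs/init.sls, and sibling files in that directory appear in pillar.items only if init.sls includes them (or if they are listed in top.sls themselves). A sketch, with hypothetical pkgs.web / pkgs.db names:

    # /srv/pillar/top.sls
    base:
      '*':
        - pkgs              # resolves to pkgs/init.sls

    # /srv/pillar/pkgs/init.sls -- including siblings pulls them in too
    include:
      - pkgs.web
      - pkgs.db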
22:22 Gareth just realized a useful feature for salt...if it doesn't exist already.  If you run salt against a host and that host's key hasn't been accepted yet, say "key not accepted yet" :) or something along those lines.  Just spent 10 minutes trying to figure out why something wasn't working...turned out the minion_id was wrong.
22:24 rojem joined #salt
22:25 rojem joined #salt
22:26 oz_akan_ joined #salt
22:26 tollmanz joined #salt
22:26 rojem joined #salt
22:31 ajw0100 joined #salt
22:32 mgw joined #salt
22:33 Guest18932 joined #salt
22:35 nebuchadnezzar joined #salt
22:48 martoss1 joined #salt
22:53 elfixit joined #salt
22:57 oz_akan_ joined #salt
23:07 mgw joined #salt
23:08 Jarus joined #salt
23:10 jkleckner joined #salt
23:10 nineteen1ightd joined #salt
23:14 sirtaj joined #salt
23:15 oz_akan_ joined #salt
23:16 z3uS joined #salt
23:17 forresta joined #salt
23:18 Jarus joined #salt
23:21 robertkeizer joined #salt
23:26 mgw joined #salt
23:26 tollmanz joined #salt
23:28 tollmanz joined #salt
23:28 ajw0100 joined #salt
23:32 ds12 joined #salt
23:36 wonhunawks joined #salt
23:46 jkleckner joined #salt
23:46 Gifflen joined #salt
23:48 Psi-Jack Is it possible to make a state whose only purpose is to require_in to other things, so that I can use that state in other things to require on? Or something to that effect?
23:48 Psi-Jack Or, can I make one state file.manage multiple files at the same time?
23:49 EugeneKay There's file.recurse
23:49 forresta Psi-Jack, if you do an include statement at the top of the state, then you can require_in anything that exists in the states you include
23:49 Psi-Jack yeah, that I know about, but I want the files templated. I'm specifically managing CentOS-*.repo files.
23:50 Psi-Jack But, I also have other repos I include otherwise, that I am forming a standard on, repo-<name>, so I have repo-base for the base repos.
23:50 forresta http://docs.saltstack.com/ref/states/all/salt.states.pkgrepo.html
23:50 EugeneKay Or that ^
23:50 Psi-Jack Yeah, no.
23:51 forresta if you wanna use file.managed, and require that in other stuff, you need to include the stuff you want it to be required in
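A minimal sketch of the include-then-require pattern forresta describes, with hypothetical state and file names; repo-base here would be a file.managed id defined in repos/base.sls:

    include:
      - repos.base

    my-package:
      pkg.installed:
        - name: mypkg
        - require:
          - file: repo-base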
23:51 Psi-Jack The pkgrepo is horribly broken at this time, and I'm not about to use broken stuff.
23:51 EugeneKay I would jinja out all the file.manageds from Pillar
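And a sketch of EugeneKay's suggestion: drive the file.managed states from pillar with a Jinja loop, one templated .repo file per entry. The repo_files pillar key and the paths are assumptions:

    {% for repo in pillar.get('repo_files', []) %}
    repo-{{ repo }}:
      file.managed:
        - name: /etc/yum.repos.d/{{ repo }}.repo
        - source: salt://repos/{{ repo }}.repo.jinja
        - template: jinja
    {% endfor %}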
23:57 elithrar joined #salt
