
IRC log for #salt, 2013-07-21


All times shown according to UTC.

Time Nick Message
00:01 qba73_ joined #salt
00:03 qba73 joined #salt
00:05 emocakes joined #salt
00:21 diegows joined #salt
00:37 mgw joined #salt
00:49 djn joined #salt
01:01 Lue_4911 joined #salt
01:04 LucasCozy joined #salt
01:04 LucasCozy joined #salt
01:08 napperjabber joined #salt
01:26 mgw joined #salt
01:59 dave_den joined #salt
02:00 oz_akan_ joined #salt
02:02 TOoSmOotH [INFO    ] Configuration file path: /etc/salt/minion
02:02 TOoSmOotH [WARNING ] The function 'grains()' defined in '/usr/lib/pymodules/python2.7/salt/loader.py' is not yet using the new 'default_path' argument to `salt.config.load_config()`. As such, the 'SALT_MINION_CONFIG' environment variable will be ignored
02:02 TOoSmOotH fresh install and I get that error
02:09 mgw joined #salt
02:10 kleinishere joined #salt
02:12 lazyguru joined #salt
02:12 z3uS joined #salt
02:15 lemao joined #salt
02:17 Jahkeup_ joined #salt
02:27 terminal1 TOoSmOotH: that is fixed in git
02:27 TOoSmOotH yea
02:28 TOoSmOotH Specified SLS users in environment sensor is not available on the salt master
02:28 TOoSmOotH following the guide on the website
02:28 TOoSmOotH for user management for the pillars
02:31 oz_akan_ joined #salt
02:33 Gordonz joined #salt
02:36 oz_akan_ joined #salt
02:38 terminal1 ah
02:38 TOoSmOotH but it isn't working
02:38 TOoSmOotH I get that error
02:39 terminal1 ok
02:39 terminal1 URL?
02:39 TOoSmOotH http://docs.saltstack.com/topics/tutorials/pillar.html
02:39 terminal1 thanks
02:40 terminal1 can you pastebin your pillar top.sls?
02:40 TOoSmOotH its 3 lines
02:40 TOoSmOotH base:
02:41 TOoSmOotH '*'"
02:41 TOoSmOotH - users
02:41 TOoSmOotH with the correct formatting
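[For reference: the paste above has a stray quote, and the three-line pillar top file being described (conventionally /srv/pillar/top.sls) would read:]

```yaml
# /srv/pillar/top.sls -- assign the 'users' pillar to every minion
base:
  '*':
    - users
```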
02:41 oxalatefree joined #salt
02:43 mgw joined #salt
02:43 TOoSmOotH my /srv/salt/top.sls looks exactly the same
02:44 terminal1 "environment sensor" suggests you have an environment called "sensor" in your setup
02:45 TOoSmOotH that fixed it
02:45 TOoSmOotH hehe
02:45 terminal1 ahh, changed it to "base"?
02:45 TOoSmOotH yea
02:45 terminal1 cool
02:46 terminal1 wtf, looks like freenode went wonky, I'm not under my correct nick
02:46 terminal1 brb
02:46 terminalmage joined #salt
02:46 terminalmage there, much better
02:47 TOoSmOotH I have this {% for user, uid in pillar.get('users', {}).items() %}
02:47 terminalmage should be iteritems
02:47 terminalmage that is wrong in the docs it seems
02:48 terminalmage unless items returns a tuple of tuples or whatever
02:48 napperjabber joined #salt
02:48 terminalmage ahh it does
02:48 TOoSmOotH how do I call users in the init.sls
02:48 terminalmage I just always use iteritems
02:48 TOoSmOotH the {{uid}} works
02:49 TOoSmOotH but I am trying to create a home directory based on that users name
02:49 terminalmage what do you mean by "calling "users
02:49 TOoSmOotH like user.present: - home: /home/$users
02:50 terminalmage oh
02:50 terminalmage well you wouldn't use "users"
02:51 terminalmage it would be {{ user }}
02:51 TOoSmOotH I use {{uid}} for the user id
02:51 TOoSmOotH but I am trying to figure out what I use to pass users to - home
02:51 terminalmage {% for user, uid in pillar.get('users', {}).items() %}
02:52 terminalmage - home: /home/{{ user }}
02:52 TOoSmOotH ok let me try that.
02:52 TOoSmOotH I did that at first but I used users
02:52 TOoSmOotH heh
02:52 terminalmage yeah it's user because that's what you're naming it in that "for" jinja statement
02:53 TOoSmOotH that was it
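[Putting the exchange together, the working users state looked roughly like this; it is a reconstruction from the conversation, not TOoSmOotH's exact file:]

```yaml
{# /srv/salt/users/init.sls -- loop over the 'users' pillar dict;
   'user' and 'uid' are the names bound in the for statement #}
{% for user, uid in pillar.get('users', {}).items() %}
{{ user }}:
  user.present:
    - uid: {{ uid }}
    - home: /home/{{ user }}
{% endfor %}
```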
02:53 aat joined #salt
02:53 terminalmage cool
02:53 terminalmage though you don't technically need to specify the homedir
02:54 terminalmage the system default should be used
02:54 terminalmage I believe
02:54 TOoSmOotH yea.. I do have some use cases where I don't want the default
02:54 noob2 joined #salt
02:54 terminalmage ahh
02:54 TOoSmOotH but mostly just trying to verify syntaxes
02:54 terminalmage cool
02:54 TOoSmOotH get my learn on so to speak
02:54 noob2 salt: crazy question but can i use salt to tail logs across a cluster?
02:55 terminalmage I'm Erik by the way, I'm one of the core devs. I don't spend much time here but if you run into parts of the documentation, please do file an issue on github
02:55 terminalmage parts of the documentation that are not clear, I mean
02:56 terminalmage we're always looking to improve the docs and input from new users is very helpful
02:56 terminalmage noob2: are you talking about a tail -f?
02:56 noob2 terminalimage: correct but across a bunch of machines
02:56 noob2 same file :)
02:56 noob2 like salt '*' cmd.run 'tail -f somefile.log'
02:56 TOoSmOotH sure thing!
02:56 terminalmage noob2: no. shell commands are timed
02:56 noob2 aww :(
02:56 terminalmage they eventually timeout
02:57 noob2 i see
02:57 terminalmage and have to return so that the pid, retcode, etc. can be reported on
02:57 noob2 that would be neat to have :)
02:57 noob2 yeah makes sense
02:57 kleinishere joined #salt
02:57 terminalmage I agree, but the issue with that is that the minion only returns data to the master once the command completes
02:58 terminalmage so tail -f wouldn't really work
02:58 noob2 right
02:58 terminalmage yeah it doesn't send the data as it is printed to stdout/stderr
02:58 terminalmage waits for the command to complete
02:59 noob2 any idea if a ceph module is being worked on?
02:59 noob2 i don't want to duplicate work
02:59 terminalmage I think one exists already, or if not there is an open issue for it
02:59 dthom91 joined #salt
02:59 terminalmage lemme check
02:59 EugeneKay What would be a sane way to run a prep command after git.latest does its thing? cmd.run with a depends: ? Or.... a post-checkout hook installed in the git repo (possibly via file.managed)
03:00 oz_akan_ joined #salt
03:00 terminalmage noob2: no, there is none yet
03:00 noob2 ok
03:00 noob2 are you using the issue tracker to gauge interest in having various state modules?
03:00 terminalmage noob2: https://github.com/saltstack/salt/issues/4043
03:01 noob2 awesome
03:01 noob2 maybe i'll help out with that
03:01 TOoSmOotH ok one last ?
03:01 terminalmage TOoSmOotH: sure
03:01 noob2 ceph already has a deploy tool written in python.  i'm wondering how hard it would be to integrate that with salt
03:01 TOoSmOotH {% for user, uid, fullname in pillar.get('users', {}, {}).items() %}
03:01 noob2 or just have salt call it
03:02 TOoSmOotH users:
03:02 TOoSmOotH joeblow: 4000 "Joe Blow"
03:02 terminalmage noob2: so it has a python interface?
03:02 noob2 yep
03:02 terminalmage import ceph ?
03:02 noob2 the rbd portion of ceph has a python library
03:02 noob2 but the deploy stuff is different
03:02 terminalmage noob2: cool, you might want to look at the mysql module to see how we manage stuff like that
03:02 noob2 lemme find it
03:02 TOoSmOotH TypeError: get expected at most 2 arguments, got 3
03:02 terminalmage typically a try/except to try to import the module
03:03 TOoSmOotH that is the error
03:03 TOoSmOotH do I need a : after the 4000?
03:03 terminalmage and then if the module could not be imported, the __virtual__ function returns False
03:03 noob2 terminalimage: https://github.com/ceph/ceph-deploy
03:04 terminalmage TOoSmOotH: no, this is yaml
03:04 terminalmage it's key: value
03:04 terminalmage if you're specifying more than just the uid, then you should do nested dicts
03:04 terminalmage TOoSmOotH: I'll pastebin an example
03:04 TOoSmOotH awesome
03:04 TOoSmOotH thanks
03:05 terminalmage np
03:10 stevedb joined #salt
03:13 terminalmage TOoSmOotH: http://dpaste.com/1312092/
03:14 terminalmage TOoSmOotH: so, this can help you from having to declare default stuff like "- shell: /bin/bash"
03:14 oz_akan_ joined #salt
03:14 terminalmage because the 2nd-level dict contains name/value pairs for the user.present state
03:15 terminalmage so, only what you declare at that 2nd level gets into your SLS
03:15 terminalmage helps make your setup a bit more flexible
03:16 TOoSmOotH awesome
03:16 TOoSmOotH that is exactly what I was trying to figure out
03:16 terminalmage note that I only specified the home dir in the pillar data once, for the one user that had a non-standard home dir
03:16 terminalmage yeah, I think I'll modify the docs to add this example
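[The dpaste link above has since expired; from the surrounding discussion, the example presumably resembled the following. User names and values are illustrative:]

```yaml
{# Pillar: the second-level dict holds name/value pairs for user.present #}
users:
  joeblow:
    uid: 4000
    fullname: Joe Blow
  janedoe:
    uid: 4001
    fullname: Jane Doe
    home: /opt/janedoe    {# declared only for the non-standard home dir #}

{# State: pass through only what each user's dict declares, so defaults
   like the shell or standard home dir never need to be repeated #}
{% for user, details in pillar.get('users', {}).items() %}
{{ user }}:
  user.present:
{% for key, value in details.items() %}
    - {{ key }}: {{ value }}
{% endfor %}
{% endfor %}
```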
03:17 TOoSmOotH yea I think it helps make more sense
03:17 terminalmage yeah. while I'm at it, here's a link to some slides I did for a presentation a few months ago
03:17 terminalmage http://goo.gl/T8SVz
03:18 terminalmage a similar example using pillar to configure users is there
03:18 terminalmage as well as some other stuff
03:18 TOoSmOotH we use puppet mostly
03:18 terminalmage TOoSmOotH: ouch
03:18 terminalmage I'm sorry
03:18 TOoSmOotH but salt is much easier to set up and get running
03:18 TOoSmOotH its been great for us honestly
03:18 terminalmage that's good to hear!
03:19 terminalmage yeah, I evaluated both and salt was much easier
03:19 TOoSmOotH but I want to use this for some open source stuff I am doing
03:19 terminalmage I got involved about 18 mos ago, submitting patches to add features and fix bugs, it's a very easy codebase to work with
03:19 TOoSmOotH yea I have some familiarity with python
03:20 TOoSmOotH which was the other thing I liked about salt
03:20 TOoSmOotH :)
03:20 terminalmage yeah, I used to be a sysadmin before I started working for saltstack
03:20 terminalmage and that experience taught me to hate ruby with a vengeance
03:21 TOoSmOotH I hate messing with passenger and gems
03:21 terminalmage and unicorn
03:21 TOoSmOotH lets say I have 1k nodes can I schedule state checks?
03:22 terminalmage yeah there is a scheduler
03:22 terminalmage I haven't used it
03:22 terminalmage but there should be docs
03:22 terminalmage one sec
03:22 TOoSmOotH have em check in so I don't crush the master
03:22 terminalmage http://docs.saltstack.com/topics/jobs/schedule.html?highlight=scheduler
03:22 terminalmage well, in salt the minions don't check in
03:22 Teknix joined #salt
03:22 terminalmage you initiate using the master
03:23 terminalmage the minions maintain a persistent connection to the master, over which the master issues commands
03:23 TOoSmOotH what if I did a cron job for them to check in?
03:23 TOoSmOotH like salt-call
03:23 EugeneKay Salt has a mechanism built in to do that
03:23 TOoSmOotH in cron
03:23 EugeneKay http://docs.saltstack.com/topics/jobs/schedule.html
03:24 terminalmage EugeneKay: yeah I posted that above
03:24 terminalmage TOoSmOotH: what did you want to check in for?
03:24 terminalmage you should be able to use the scheduler to do what you need
03:24 TOoSmOotH looking for changes to files
03:24 terminalmage and you can have it return data using one of the other returners like mysql
03:24 terminalmage storing the results there
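[Per the schedule docs linked above, a schedule entry is a block in the minion config; a minimal sketch, with key names as in the docs and values purely illustrative:]

```yaml
# /etc/salt/minion
schedule:
  highstate:
    function: state.highstate
    minutes: 60          # run every hour
    returner: mysql      # optional: store results via a returner
```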
03:25 terminalmage TOoSmOotH: would you be looking to take action if a file changed?
03:25 EugeneKay That's what highstate does; looks for(and corrects) deviations from the specified state
03:25 TOoSmOotH yea.. if they change get the new rules and restart the service
03:25 terminalmage such as, applying your file states?
03:25 terminalmage yeah that can be done with a highstate or a state.sls
03:25 terminalmage highstate runs all SLS matched in your top file
03:25 terminalmage state.sls runs specific SLS files
03:26 EugeneKay I'm in the habit of putting a warning at the top of salt-managed files to the effect of "Your changes will be eaten by Salt, so don't edit this file"
03:26 terminalmage EugeneKay: yeah, we did the same at my old job
03:26 EugeneKay And then I edit it
03:26 * EugeneKay argues with Postfix a bit
03:27 terminalmage TOoSmOotH: if you're looking to restart a service, when its config file changes, you can set a watch on the config file in your service state
03:27 terminalmage then when the file changes, the service restarts
03:27 terminalmage TOoSmOotH: http://docs.saltstack.com/ref/states/requisites.html
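[The watch pattern terminalmage describes, with hypothetical file and service names: when the managed file changes during a state run, the watching service state restarts the service.]

```yaml
/etc/myservice/rules.conf:
  file.managed:
    - source: salt://myservice/rules.conf

myservice:
  service.running:
    - watch:
      - file: /etc/myservice/rules.conf
```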
03:30 TOoSmOotH terminalmage: EugeneKay  Thanks for the help.. I really appreciate it
03:30 EugeneKay Anything I can do to slow down the learning process
03:30 TOoSmOotH haha
03:30 TOoSmOotH time to crash thanks all
03:33 terminalmage TOoSmOotH: no prob!
03:36 LarsN_ what's the flag to "test-run" a state.highstate
03:37 EugeneKay IIRC test=True
03:38 LarsN_ EugeneKay: that will output as if it would have run, without actually changing things then?
03:38 EugeneKay Preeeeetty sure. I'd consult the docs if it might eat your head :-p
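[EugeneKay recalls correctly: a dry run reports what would change without applying anything:]

```
salt '*' state.highstate test=True
```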
03:39 raadad joined #salt
03:39 raadad hey guys
03:39 raadad long time
03:39 EugeneKay Stupid question. What's the jinja/Python magic I want to use to combine the output of .items() from two different lists
03:39 raadad im trying to configure my states to install redis
03:39 EugeneKay Eg, in a for loop
03:40 raadad I found, https://github.com/saltstack-formulas/redis
03:41 raadad https://github.com/raadad/node-dev-vagrant-example/tree/master/srv
03:41 raadad this is my salt config
03:46 SpX joined #salt
03:47 Lue_4911 joined #salt
03:49 stevedb joined #salt
03:54 terminalmage EugeneKay: combine how?
03:54 EugeneKay Append, replacing any duplicates
03:55 terminalmage lists don't have .items()
03:55 terminalmage do you mean a dict?
03:55 EugeneKay Probably. I'm not much of a Pythonista
03:56 terminalmage most of the member functions are available in jinja
03:56 terminalmage so perhaps dictname.update(second_dict)
03:56 EugeneKay Sounds right.
03:57 * EugeneKay consults man pages
03:58 g3cko joined #salt
03:58 EugeneKay Yup, that's the one
03:58 EugeneKay Danke
03:58 terminalmage sweet
03:59 terminalmage didn't know if that would work because not all dict functions are available
03:59 EugeneKay Oh, I haven't tried it yet. :-p
03:59 terminalmage oh
04:00 terminalmage well, hope that works then
04:00 EugeneKay It just /looks/ right.
04:00 * EugeneKay finishes writing his loop first
04:00 terminalmage though, I wouldn't try that on pillar or grains
04:00 terminalmage because it literally updates the value
04:00 terminalmage don't know if that would mess with that data
04:00 terminalmage haven't tested it
04:00 timl0101 joined #salt
04:00 EugeneKay My understanding is that it wouldn't save it persistently
04:00 EugeneKay But I'll find out!
04:02 stevedb joined #salt
04:05 EugeneKay Hm. Fails if the dict is undefined.
04:05 EugeneKay Which it is, for most of my minions
04:06 EugeneKay Background: I'm giving a list of users/groups that are only needed on a few systems. Rather than repeat a for loop twice.....
04:06 EugeneKay Though maybe that's easier.
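[The dict.update() semantics EugeneKay is relying on (new keys appended, duplicates replaced, None returned, failure on an undefined dict) can be seen in plain Python; the .get(..., {}) default is the usual guard:]

```python
# dict.update() merges in place: new keys are appended, duplicates replaced.
base_users = {'alice': 1000, 'bob': 1001}
extra_users = {'bob': 2001, 'carol': 1002}
base_users.update(extra_users)
print(base_users)  # {'alice': 1000, 'bob': 2001, 'carol': 1002}

# update() mutates its receiver and returns None -- which is why calling it
# directly on shared data like pillar or grains can clobber that data.
result = {'a': 1}.update({'b': 2})
print(result)  # None

# Guarding against an undefined dict, as in pillar.get('users', {}):
# take a copy of the default so the original mapping is never mutated.
pillar = {}
combined = dict(pillar.get('extra_users', {}))
combined.update({'dave': 1003})
print(combined)  # {'dave': 1003}
```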
04:06 dthom91 joined #salt
04:11 carmony joined #salt
04:31 Yulli joined #salt
04:32 Yulli Why does salt-cloud ask me to enter a passphrase for my key when provisioning a new minion?
04:32 Yulli This happens on the first ssh into the minion
04:33 joonas o_O
04:33 Yulli joonas: Hi.
04:33 joonas hi
04:33 Yulli joonas: This goes well and I'll be writing up docs for salt-cloud for DO.
04:33 Yulli Hopefully, a nice article. :)
04:33 joonas that'd be good :)
04:38 Yulli Oh yes. This is a key without a passphrase.
04:50 kermit joined #salt
04:51 waverider joined #salt
04:54 Ivo joined #salt
05:22 oz_akan_ joined #salt
05:45 dthom91 joined #salt
06:20 dthom91 joined #salt
07:06 Ryan_Lane joined #salt
07:13 talso joined #salt
07:14 dthom91 joined #salt
07:17 Xeago joined #salt
07:23 koolhead17 joined #salt
07:25 Jahkeup_ joined #salt
07:32 sariss joined #salt
07:44 dthom91 joined #salt
07:51 emilisto can multiple minions use the same key to communicate with the master?
08:16 carlos joined #salt
08:23 Ryan_Lane joined #salt
08:38 dthom91 joined #salt
09:08 dthom91 joined #salt
09:25 Jahkeup_ joined #salt
09:40 jhauser joined #salt
09:45 zach joined #salt
10:02 dthom91 joined #salt
10:03 Furao joined #salt
10:23 qba73 joined #salt
10:24 lemao joined #salt
10:30 Ivo joined #salt
10:37 dthom91 joined #salt
11:16 qba73 joined #salt
11:18 napperjabber joined #salt
11:29 Xeago joined #salt
11:32 dthom91 joined #salt
11:39 faust joined #salt
11:51 jslatts joined #salt
12:06 stevedb joined #salt
12:08 dthom91 joined #salt
12:18 koolhead17 joined #salt
12:18 koolhead17 joined #salt
12:21 zach joined #salt
12:23 faust left #salt
12:25 faust joined #salt
12:27 napperjabber joined #salt
12:37 kenbolton joined #salt
12:47 stevedb joined #salt
12:54 faust joined #salt
13:05 napperjabber joined #salt
13:09 faust joined #salt
13:12 stevedb joined #salt
13:15 LucasCozy joined #salt
13:15 LucasCozy joined #salt
13:30 stevedb joined #salt
13:31 diegows joined #salt
13:31 jslatts joined #salt
13:48 Xeago joined #salt
13:48 aat joined #salt
13:54 Nexpro1 joined #salt
14:04 aat joined #salt
14:05 stevedb joined #salt
14:05 napperjabber joined #salt
14:18 avienu joined #salt
14:38 fragamus joined #salt
14:46 stevedb joined #salt
14:53 xinkeT joined #salt
15:02 mikedawson joined #salt
15:10 jslatts joined #salt
15:21 jhauser joined #salt
15:23 Corey_ joined #salt
15:46 Teknix joined #salt
15:52 koolhead17 joined #salt
16:00 freelock joined #salt
16:11 _david_a joined #salt
16:20 philipforget joined #salt
16:22 nineteeneightd joined #salt
16:22 philipforget morning everyone. I'm having an issue where some of my managed files are being re-rendered on every highstate, despite the templates not changing. Any ideas why that would be?
16:23 philipforget it's only two of my templates, supervisord configs, that are causing the rerender on every highstate
16:24 philipforget joined #salt
16:35 philipforget joined #salt
16:47 jslatts joined #salt
16:51 timl0101 joined #salt
16:52 freelock joined #salt
16:54 freelock hmm. I just updated my salt-master and many minions to 0.16. Now about 2/3rds of the minions don't connect to the master
16:54 freelock getting "Minion failed to authenticate with the master, has the minion key been accepted?"
16:54 freelock on the minions
16:54 freelock no key changes
16:55 freelock restarting the master or the minions don't make any difference -- same set of minions connects
16:56 freelock ah, I do see one thing in common with connected vs disconnected -- it looks like the connected machines are all on the same lan as the master
16:56 philipforget joined #salt
16:58 _ilbot joined #salt
16:58 Topic for #salt is now Welcome to #salt - http://saltstack.org | 0.16.0 is the latest | Please be patient when asking questions as we are volunteers and may not have immediate answers - Channel logs are available at http://irclog.perlgeek.de/salt/
16:59 freelock is there any kind of port change that might need to be updated on the router?
17:00 freelock or the other possibility is the local working ones use a master: salt, where as the remotes use a FQDN -- is there a certificate change I need to specify?
17:03 avienu joined #salt
17:04 napperjabber joined #salt
17:11 Sacro joined #salt
17:20 freelock strange, looks like I had 4505 open on the firewall but not 4506, opening that made things start working again
17:24 zooz joined #salt
17:26 platoscave joined #salt
17:27 platoscave joined #salt
17:27 auser joined #salt
17:27 auser hey all
17:28 platoscave joined #salt
17:29 platoscave joined #salt
17:29 platoscave joined #salt
17:31 platoscave joined #salt
17:31 platoscave joined #salt
17:32 platoscave joined #salt
17:33 platoscave joined #salt
17:39 emilisto if I write a custom grain in _grains/, what's the best way to test-run it?
17:41 isomorphic joined #salt
17:46 freelock got another issue: For one minion, I get an error on any salt * state.* command from the master
17:46 freelock but from the minion, salt-call state.* commands work just fine
17:46 freelock the error on the master is Data failed to compile:
17:47 freelock Pillar failed to render with the following messages:
17:47 freelock Specified SLS overrides.myoverride in environment base is not available on the salt master
17:47 freelock in my Pillar top file, I have an overrides.myoverride specified for this id
17:47 freelock and an overrides/myoverride.sls
17:48 freelock other hosts, this is working fine -- it's one host that fails, and only on state.* commands, pillar.data compiles and loads fine
17:48 freelock as does state.highstate from the minion
17:50 auser emilisto: sync it to your minions
17:50 auser and then type salt-call grains.items
17:50 emilisto freelock: and if you do `salt 'your-troublesome-minion' pillar.get overrides.myoverride`, you see it?
17:51 emilisto auser: ah, thanks
17:51 emilisto auser: do you know if there's some way to access configuration options from the grain? I need AWS credentials, and I guess they should be in the master config
17:54 auser in your grains?
17:54 auser no, unfortunately your grains aren't compiled
17:54 auser (I don't think)
17:54 auser although
17:55 emilisto right
17:55 auser StDiluted did this a few weeks ago: https://github.com/dginther/ec2-tags-salt-grain/blob/master/ec2_tag_roles.py
17:56 emilisto that's the example I started from actually :)
17:56 auser nice
17:56 emilisto so grains not being compiled, that means they don't have access to the salt tree like moduels do?
17:56 emilisto *modules
17:57 auser yep, they don't get access (AFAIK) to salt's pillar data
17:58 emilisto ah, okay, but I don't need the pillar data, I figured I'll put this in the /etc/salt/master configuration
17:58 emilisto since I don't think the AWS tokens required for fetching info is minion specific, but rather on the "whole deployment" level, i.e. master config
18:01 emilisto got it, it's in the __opts__ dict
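[A minimal sketch of such a grain module. The file path and config key names are hypothetical; __opts__ is injected by Salt's loader, so the sketch falls back to an empty dict when run outside of Salt:]

```python
# _grains/ec2_info.py -- hypothetical sketch of a custom grain module.
# Salt's loader injects __opts__ (the merged minion config) into grain
# modules; outside of Salt we fall back to an empty dict so the module
# stays importable on its own.

def ec2_tags():
    """Return this instance's EC2 tags as a grain, or no grains at all
    if AWS credentials are not configured."""
    opts = globals().get('__opts__', {})
    access_key = opts.get('aws.access_key')   # hypothetical config keys
    secret_key = opts.get('aws.secret_key')
    if not access_key or not secret_key:
        # A grain function should return an empty dict rather than raise
        # when it cannot produce data.
        return {}
    tags = {}  # a real grain would query the EC2 API (e.g. via boto) here
    return {'ec2_tags': tags}
```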
18:04 Lue_4911 joined #salt
18:07 auser I gotcha
18:14 _jps joined #salt
18:15 _jps_ joined #salt
18:25 emilisto https://gist.github.com/emilisto/6049407
18:26 emilisto if someone's needs to access EC2 tags as grains, very similar to StDiluted's version of the same thing
18:26 emilisto (but it includes all tags, and takes AWS credentials from config)
18:30 m_george left #salt
18:35 diegows joined #salt
18:39 qba73_ joined #salt
18:45 platoscave joined #salt
18:47 Gareth joined #salt
18:47 Artanicus joined #salt
18:47 jafo joined #salt
18:47 z3uS joined #salt
18:47 tqrst joined #salt
18:47 crashmag joined #salt
18:47 akio joined #salt
18:47 yidhra joined #salt
18:47 LucasCozy joined #salt
18:47 tethra joined #salt
18:47 Lue_4911 joined #salt
18:47 tqrst joined #salt
18:48 tethra joined #salt
18:48 LucasCozy joined #salt
18:48 flurick joined #salt
18:48 Kyle joined #salt
18:48 EntropyWorks joined #salt
18:48 Xeago joined #salt
18:48 ahammond joined #salt
18:48 pjs joined #salt
18:48 majoh joined #salt
18:48 gmoro_ joined #salt
18:48 dave_den joined #salt
18:49 capricorn_1 joined #salt
18:50 UForgotten joined #salt
18:50 waverider joined #salt
18:51 waverider left #salt
18:52 kstaken joined #salt
18:52 [ilin] joined #salt
18:52 [ilin] joined #salt
18:52 Sacro joined #salt
18:53 kermit joined #salt
18:53 pnl joined #salt
18:54 freelock joined #salt
18:55 faulkner joined #salt
18:59 oz_akan_ joined #salt
19:02 platoscave joined #salt
19:04 tomeff joined #salt
19:08 giantlock joined #salt
19:09 sternum joined #salt
19:10 lemao_ joined #salt
19:11 sternum For some reason one of my packages in a requirements file gets installed on every highstate, despite it being already installed and at the stated version in the managed virtualenv. Has anyone seen this behavior
19:12 yml_ joined #salt
19:13 Jahkeup__ joined #salt
19:13 zz_farra joined #salt
19:13 sw_ joined #salt
19:16 herlo__ joined #salt
19:16 jhujhiti_ joined #salt
19:20 defunctzombie_zz joined #salt
19:20 trinque joined #salt
19:20 ahammond joined #salt
19:21 svx_ joined #salt
19:22 platoscave joined #salt
19:22 jete_ joined #salt
19:22 robawt joined #salt
19:23 napperjabber joined #salt
19:23 platoscave joined #salt
19:24 jete_ hi, im struggling to install a package from source
19:24 jete_ libmcrypt:
19:24 jete_ pkg.installed:
19:24 jete_ - source: salt://php/libmcrypt-2.5.8-9.el6.x86_64.rpm
19:28 jete_ on the minion, im using "salt-call state.highstate -l debug" but i can't see the rpm command it runs (if it running that)
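[One likely culprit, per the pkg.installed documentation: installing from a package file uses the `sources` argument, a list of package-name to path mappings, rather than `source`:]

```yaml
libmcrypt:
  pkg.installed:
    - sources:
      - libmcrypt: salt://php/libmcrypt-2.5.8-9.el6.x86_64.rpm
```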
19:32 platoscave joined #salt
19:33 oz_akan_ joined #salt
19:33 platoscave joined #salt
19:37 oc joined #salt
19:39 stephen__ joined #salt
19:40 _trinque joined #salt
19:43 eskp_ joined #salt
19:44 eqe joined #salt
19:44 thick_mcrunfast joined #salt
19:45 a1j_ joined #salt
19:45 Daviey_ joined #salt
19:45 Viaken2 joined #salt
19:47 spoktor_ joined #salt
19:48 yidhra_ joined #salt
19:49 antsygeek joined #salt
19:49 drogoh_ joined #salt
19:50 _monokrome joined #salt
19:51 lahwran_ joined #salt
19:52 Yulliz joined #salt
19:52 Yulli joined #salt
19:53 Linz_ joined #salt
19:54 dthom91 joined #salt
19:54 kvbik joined #salt
19:56 gmoro_ joined #salt
19:56 zooz joined #salt
19:57 flurick_ joined #salt
19:57 cwright joined #salt
19:58 jchen joined #salt
19:58 svx__ joined #salt
19:58 Kamal joined #salt
19:59 kleinishere joined #salt
19:59 SpX joined #salt
20:00 trinque joined #salt
20:00 platoscave joined #salt
20:01 Sacro joined #salt
20:02 dthom91 joined #salt
20:04 zach joined #salt
20:04 Nitron joined #salt
20:04 Politoed joined #salt
20:04 avienu joined #salt
20:04 coolj joined #salt
20:05 z3uS joined #salt
20:06 pt|Zool joined #salt
20:09 freelock joined #salt
20:10 twinshadow joined #salt
20:11 kstaken joined #salt
20:11 platoscave joined #salt
20:12 _monokrome joined #salt
20:12 luminous joined #salt
20:15 craig_ joined #salt
20:17 luminous joined #salt
20:18 Ryan_Lane joined #salt
20:20 indymike joined #salt
20:21 yidhra joined #salt
20:26 iiii joined #salt
20:38 oz_akan_ joined #salt
20:42 Gordonz joined #salt
20:43 dthom91 joined #salt
20:50 _jps joined #salt
20:50 xinkeT joined #salt
20:57 qba73 joined #salt
20:57 robawt joined #salt
21:02 jalbretsen joined #salt
21:10 andrew_ joined #salt
21:13 [diecast] joined #salt
21:13 lazyguru joined #salt
21:16 mattt joined #salt
21:22 Teknix joined #salt
21:23 z3uS joined #salt
21:36 bemehow joined #salt
21:37 _jps joined #salt
21:40 lemao joined #salt
21:43 cedwards joined #salt
21:45 Thiggy joined #salt
21:45 jesusaurus joined #salt
21:45 scalability-junk joined #salt
21:47 Xeago joined #salt
21:48 avienu joined #salt
21:53 platoscave joined #salt
21:55 dthom91 joined #salt
22:04 mgw joined #salt
22:04 Thiggy joined #salt
22:07 kleinishere joined #salt
22:10 EugeneKay Oh wow, it's simpler than I thought it would be to do a match against Pillar.
22:11 EugeneKay This makes me a happy man.
22:11 LucasCozy joined #salt
22:18 jslatts joined #salt
22:21 flagellum joined #salt
22:22 mgw joined #salt
22:29 avienu joined #salt
22:34 Thiggy joined #salt
22:42 iamaporkchop joined #salt
22:44 Lue_4911 joined #salt
22:44 Psi-Jack_ joined #salt
22:44 raydeo joined #salt
22:55 dthom91 joined #salt
22:57 crashmag joined #salt
23:06 logix812 joined #salt
23:11 aat joined #salt
23:11 Gordonz joined #salt
23:19 mgw joined #salt
23:24 Gordonz joined #salt
23:25 dthom91 joined #salt
23:33 freelock emilisto: thanks for your response...
23:33 freelock with salt "mytroublesomeminion" pillar.get overriddendict
23:34 freelock I get the default, not the overridden dictionary from the sls it's giving me an error on
23:34 freelock but if I go to the minion and do salt-call pillar.get overriddendict
23:34 freelock I get the expected contents in the override sls
23:34 freelock so pillar.get is definitely giving different data on the master and on the minion
23:34 freelock ???
23:40 kcb joined #salt
23:42 carmony joined #salt
23:43 aat joined #salt
23:54 EugeneKay freelock - saltutil.refresh_pillar
23:56 EugeneKay The data is cached; .data gives a freshly-generated pillar as the master believes it should be; .get and .raw are from the minion, which may be cached.
23:57 freelock EugeneKay: thanks... I tried that earlier... the thing is, the minion has the correct data, but the master was showing incorrect data
23:57 sturdy joined #salt
23:57 EugeneKay o.O
23:57 freelock e.g. pillar.raw showed the correct data, pillar.data showed incorrect and an error
23:57 EugeneKay Syntax error? ;-) Fix your pillar files.
23:57 freelock but I just tried doing saltutil.refresh_pillar again, and now it seems to have fixed it
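[The distinction EugeneKay draws maps onto these commands; the minion ID is illustrative:]

```
salt 'myminion' saltutil.refresh_pillar   # tell the minion to re-fetch its pillar
salt 'myminion' pillar.data               # rendered fresh, as the master believes it should be
salt 'myminion' pillar.raw                # the minion's in-memory (possibly cached) copy
```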
