
IRC log for #salt, 2013-11-04


All times shown according to UTC.

Time Nick Message
00:12 nooblee joined #salt
00:13 nooblee Hi, I've just started using salt and have a pretty basic question about how to organize things.
00:14 nooblee I want to generate ssh host keys on the master for the minions.  Does it make sense to run a minion on the master to do this?  I'm thinking I can simply delete all the keys, run highstate on the master's minion which would then generate the files.
00:16 NV nooblee: err, that sounds like a terrible idea (generating ssh host keys on anything other than the host that they're used on)
00:23 nooblee If I update the key on minion A, how do I then get that minion to trigger the master to update the public keys on all the dependant minions?
00:27 nooblee Or maybe I'm thinking about it the wrong way.  The highstate for all dependant minions would grab the public key from the one I changed?
00:29 macker joined #salt
00:31 NV nooblee: http://docs.saltstack.com/ref/states/all/salt.states.module.html#module-salt.states.module allows you to execute a module from a state
00:31 NV http://docs.saltstack.com/ref/modules/all/salt.modules.cp.html allows you to push files from a minion to the master (cp.push)
00:31 redondos joined #salt
00:31 redondos joined #salt
00:32 NV create a state called regensshkeys that regenerates your ssh keys, then have a state that watches your regenssh state that uses the cp module to push the public ssh key to the master
00:33 NV the master can then have a state that copies the ssh keys over to your state directory somewhere (can be an additional location if you're say using git and cant just add it easily - salt can have multiple fileserver roots :D)
00:33 NV and then have minions fetch from there
00:33 NV kdun
00:34 NV (alternatively, make the minion push location a fileserver root, but that assumes you're not going to ever cp.push anything sensitive in there, hence why I'd opt to do it the other way)
00:34 NV private keys now no longer leave the machine they were generated on (good for security!) and you can still get your ssh pubkeys shared
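A rough sketch of the workflow NV describes, combining the module.run state with cp.push (state IDs, paths, and the key type are invented; cp.push also needs `file_recv: True` in the master config):

```yaml
# On the minion: regenerate the host key only if it is missing
regensshkeys:
  cmd.run:
    - name: ssh-keygen -t rsa -N '' -f /etc/ssh/ssh_host_rsa_key
    - unless: test -f /etc/ssh/ssh_host_rsa_key

# Push only the public half to the master when the key changes;
# the private key never leaves the minion
push_ssh_pubkey:
  module.run:
    - name: cp.push
    - path: /etc/ssh/ssh_host_rsa_key.pub
    - watch:
      - cmd: regensshkeys
```

Pushed files land in the master's per-minion cache directory, from where a master-side state can copy them into a fileserver root for the other minions to fetch.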
00:36 c0bra joined #salt
00:36 nooblee OK, that makes sense, I'll have a go at it and may come back with more questions :).  I'm still wading through the rather large manual (that's a positive comment) so I've lots of knowledge gaps still, plus I'm trying to shake off the CFEngine way of thinking.  Thanks a lot for your help.
00:37 scott_w joined #salt
00:37 NV haha np
00:37 NV I've come from a puppet background, go figure :)
00:40 steveoliver luminous: - are you 'illumin-us-r3v0lution' on github?
00:41 steveoliver a la https://github.com/saltstack/salt-cloud/issues/615
00:50 sgviking joined #salt
01:07 mapu joined #salt
01:23 m_george|away joined #salt
01:27 Tekni joined #salt
01:27 Nexpro1 joined #salt
01:30 xmltok joined #salt
01:37 scott_w joined #salt
01:41 diegows joined #salt
01:48 cachedout joined #salt
01:50 Teknix joined #salt
01:50 liwen joined #salt
01:59 reinsle joined #salt
02:00 nooblee OK, I ended up implementing this in a slightly different way.  The minion (A) generates its own hostkey if it's not there.  A minion (B) that needs (A)'s pubkey uses the ssh.set_known_host module to grab it.  Not the question I asked, but this will work.  I just need to mass delete the keys when I need them regenerated and the rest should take care of itself.
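nooblee's alternative maps onto the ssh_known_hosts state, which calls ssh.set_known_host under the hood (the hostname and user below are invented for illustration):

```yaml
# On minion B: record minion A's host key for the deploy user;
# with no fingerprint given, the key is fetched from the host itself
minion-a.example.com:
  ssh_known_hosts.present:
    - user: deploy
    - enc: ssh-rsa
```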
02:02 redondos joined #salt
02:02 redondos joined #salt
02:03 jbub joined #salt
02:05 oz_akan_ joined #salt
02:16 liwen joined #salt
02:20 mwillhite joined #salt
02:23 m_george left #salt
02:26 oz_akan_ joined #salt
02:27 zz__ does anyone have a good recommendation for a 'best practices' approach for salt? i've mostly worked with masterless minions so far, but i'm trying to make a case for our sysops guys to use it
02:27 forrest in what sense zz__?
02:27 forrest setup or what
02:28 zz__ mostly setup and security. the biggest concern for them right now is that zmq is pretty much required if you're looking for speed.
02:28 forrest and what case needs to be made past it's written in Python, and doesn't require an outrageously beefy master server? :P
02:28 forrest Well, what are they currently using?
02:29 zz__ ...rsync :[
02:29 forrest lol
02:29 forrest that's no slower than using salt-ssh
02:29 zz__ they're arguing that ssh is too slow. which is kinda backwards because--- yeah it's just ssh
02:30 zz__ we have < 100 servers to manage too
02:30 zz__ i'd say maybe ~35 vms that all have the exact same set up
02:30 forrest Ok, so let's start here: http://www.youtube.com/watch?v=uWGDC1PdySQ
02:31 forrest that's a video of using salt-ssh to set up a 100 node riak cluster, yea it's not as fast as zmq, and it did tax UtahDave's machine a bit (it had 16 gigs of RAM with I think it was a quad core?)
02:32 forrest I mean 35 machines is laughable in terms of being concerned about salt-ssh speed.
02:32 zz__ you could turn down the amount of concurrent connections and decrease the cpu load, though?
02:32 forrest yes
02:32 zz__ yeah i know
02:32 forrest by default it's 25
02:32 forrest you can change that to whatever you want
02:33 forrest What other concerns do they have except zeromq?
02:34 zz__ really that's it. i think it might just be laziness on their part because they don't want to learn how salt states work.
02:34 zz__ but i feel like it'd really decrease the mental load for them because they'll always be guaranteed things are running/installed
02:34 zz__ after the initial hurdle that is
02:35 forrest Ok, then I would suggest creating both a demo, as well as a business case. The business case should be aimed at their manager/manager's manager. Simply show how Salt will ENSURE a uniform system (which means less errors, and less downtime due to confusion/config drift, or more money in your pocket), as well as ensuring that in the future as machines are added the team will have more free time to continue pushing projects other than this.
02:36 forrest The initial hurdle isn't too bad either, it's REALLY easy with salt-ssh if you wanna go that route (though I'd suggest straight up salt since it's so easy anyways)
02:36 Tekni joined #salt
02:36 forrest So maybe put a powerpoint or something together showing how YOU set up a minion, and a master system that gets configured like one of your existing machines. Then show some examples of what that state looks like
02:36 forrest take some of the voodoo out of it
02:36 mwillhite joined #salt
02:37 forrest If you had the VMs ready (maybe use docker on your local workstation or something), you could have salt  and the minions ready to go in less than an hour.
02:38 forrest install rpms/debs, open ports on master, start services, join keys, restart services, boom.
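forrest's checklist, spelled out roughly for a CentOS master with the EPEL packages (a sketch; the master hostname is hypothetical, and 4505/4506 are the default ZeroMQ publish/return ports):

```shell
# on the master
yum install salt-master
# open ports 4505 and 4506 in the firewall, then:
service salt-master start

# on each minion
yum install salt-minion
echo 'master: salt.example.com' >> /etc/salt/minion   # hypothetical hostname
service salt-minion start

# back on the master: accept ("join") the minion keys, then test
salt-key -A
salt '*' test.ping
```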
02:38 scott_w joined #salt
02:39 xl1 joined #salt
02:40 forrest zz__, There has to be someone on that team who wants to do config management, you should find that person
02:40 forrest make them your ally
02:40 zz__ :] well i think i'll start with making a state that mimics my (well... and others) setup
02:40 forrest Cool
02:40 forrest If you get stuck going through the tutorials and stuff (not sure how familiar you are) feel free to drop in
02:41 zz__ i have an ally in our dba, haha. he's the one that has been having the most problems with his set up breaking.
02:41 forrest gotcha
02:42 forrest yea that sucks
02:42 zz__ thanks for the advice! i'm pretty sure i can make a good push for it
02:44 forrest yea np, a few links for you
02:44 forrest https://github.com/terminalmage/djangocon2013-sls
02:44 forrest good project terminalmage made for a django site
02:44 zz__ the only problem is our ops team is 2 people and the senior guy is pretty set in his qays
02:44 zz__ ways*
02:44 forrest and https://github.com/saltstack-formulas for pre-made formulas
02:44 forrest Yes I understand that issue completely :)
03:02 sciyoshi joined #salt
03:05 sciyoshi question on best practice: i've set up a service (files, upstart, config, etc) on a server with salt but now want to ensure it's completely gone
03:05 sciyoshi any good way to do that?
03:08 forrest you would need to write another state that removes it sciyoshi
03:08 forrest or if you made it a formula, a remove formula could be cool
03:12 redondos joined #salt
03:12 redondos joined #salt
03:16 sciyoshi forrest: formula?
03:16 forrest formulas are a collection of states that are used for specific functions, like https://github.com/saltstack-formulas/mysql-formula
03:20 sciyoshi forrest: oh i see, was hoping for something that would figure out sane defaults for reversal (file.managed -> file.absent, service.running -> service.stopped, etc) and then runs them in reverse dependency order
03:20 forrest yea there is no functionality like that built into salt currently as far as I am aware.
03:21 sciyoshi i guess it would be dangerous too, guess i'll have to do it manually
03:21 sciyoshi it's useful in a lot of cases, like moving services to different servers
03:21 forrest yea
03:24 Teknix joined #salt
03:38 cachedout joined #salt
03:39 scott_w joined #salt
03:46 Tekni joined #salt
03:48 krichardson left #salt
03:48 oz_akan__ joined #salt
03:48 cachedout joined #salt
03:49 smkelly Why is {{ grains["xyzzy"] }} the preferred format and not {{ grains.xyzzy }} in jinja? The latter works and is less typing
03:50 forrest smkelly, I'm pretty sure that's just to keep things standard through, in case you have to do {{ grains["xyzzy"]("otherVar") }}
03:50 forrest *throughout
03:51 smkelly ah
03:51 forrest if the docs constantly switched back and forth I think it would be more confusing for people who are new
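forrest's point can be checked directly with the jinja2 library (a minimal sketch, assuming jinja2 is installed; the grains dict below is made up):

```python
from jinja2 import Template

# A stand-in for Salt's grains dictionary (values invented)
grains = {"os": "Ubuntu", "virtual-type": "kvm"}

# Dot access and subscript access render the same for identifier-like keys
assert Template("{{ grains.os }}").render(grains=grains) == "Ubuntu"
assert Template("{{ grains['os'] }}").render(grains=grains) == "Ubuntu"

# But only subscript syntax works for keys that aren't valid identifiers:
# {{ grains.virtual-type }} would parse as a subtraction and fail to render.
assert Template("{{ grains['virtual-type'] }}").render(grains=grains) == "kvm"
```

So the bracket form is the one that works everywhere, which is why the docs stick to it.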
03:53 prooty joined #salt
03:53 smkelly I think I like salt better than puppet, but some bits are taking time to remold my brain
03:54 jalbretsen joined #salt
04:02 sciyoshi left #salt
04:07 cachedout joined #salt
04:08 Teknix joined #salt
04:14 Tekni joined #salt
04:15 packeteer try going from salt to chef. brain splode much wow
04:16 packeteer in a bad way
04:16 packeteer (not by choice)
04:18 smkelly ah
04:18 smkelly Fortunately I run the IT so I can do what I want :)
04:22 __number5__ Anyone know why this pkg.installed failed: http://hastebin.com/pudonogapo
04:23 __number5__ error: http://hastebin.com/diyihemaye
04:29 NV packeteer: eww, why?
04:38 ctdawe joined #salt
04:40 scott_w joined #salt
04:42 UtahDave joined #salt
04:46 smkelly Does anybody have any complex salt examples? like, creating new grains, using mine.send/get, etc? all the stuff I find online doesn't do any of this
04:50 smkelly or reactors
04:57 ctdawe joined #salt
04:58 liwen joined #salt
05:08 redondos joined #salt
05:08 redondos joined #salt
05:14 smkelly that is what I wish salt had more of, full examples that exercise all features
05:15 LLKCKfan Does any1 know anything about App Shop
05:17 anuvrat joined #salt
05:20 jalbretsen joined #salt
05:22 jdenning joined #salt
05:31 Teknix joined #salt
05:40 scott_w joined #salt
05:42 mianos joined #salt
05:44 mianos just venting but in the last month I now have 4 files in _modules
05:44 mianos all of them regressions
05:44 mianos pip busting, dpkg busting
05:44 mianos doesn't anyone use saltstack to install python under ubuntu anymore?
05:45 forrest mianos, what do you mean?
05:45 forrest I build out virtualenvs with salt
05:46 mianos did pip break for you last month?
05:46 forrest my ubuntu machine is still on 0.16.4
05:46 forrest I'll wait till 0.17.2 is out then upgrade
05:46 mianos wise
05:46 forrest 0.17 and 0.17.1 weren't stable enough for me to risk on my SUPER HIGH TRAFFIC BLOG :P
05:47 forrest and by that I mean, it would have been fine if it was down due to Salt because it gets no traffic
05:47 mianos but I'm using saltbootstrap as part of a cloud init in amazon
05:47 forrest ahh
05:47 forrest what isn't working on the bootstrap for you?
05:47 mianos works fine
05:47 mianos but today dpkg does not work
05:47 forrest you know you can modify the script to pull down a specific release right?
05:47 forrest really? That's odd.
05:47 mianos worked on the old machines
05:47 forrest what versions?
05:48 forrest of salt
05:48 mianos documented bug a few days ago
05:48 forrest ahh ok
05:48 mianos https://github.com/saltstack/salt/issues/8015
05:48 forrest yea there are a ton of fixes with 0.17.2
05:48 forrest hopefully that makes it in
05:48 mianos fixed as in get a very specific version of apt.py and push it out into the modules
05:48 forrest oh it's ready to go, sweet
05:48 forrest hah
05:48 mianos which is really handy to do
05:49 forrest yea but you shouldn't need to do that
05:49 mianos but one after another is getting on my nerves
05:49 UtahDave joined #salt
05:49 forrest yep I don't blame you
05:55 mianos it's love/hate
05:55 mianos pushed and everything installed to 8 machines
05:56 mianos that's the love
06:01 mianos 300 machines installed and now a dumb question, require: vs requires:, I am confused about the difference
06:01 honestly me too
06:02 honestly requires are confusing
06:02 mianos module does not work, I change require to requires and it all works
06:02 mianos but heaps of my modules have require:
06:02 __number5__ looks like salt pkg.installed with version parameter is broken :(
06:02 __number5__ at least for apt/dpkg
06:02 mianos yep
06:06 UtahDave joined #salt
06:22 UtahDave Corey: you around?
06:25 packeteer NV | packeteer: eww, why?  <- we use a lot of aws, and their sdk is ruby/chef. so my boss mandated chef
06:26 forrest :(
06:27 mianos AWS is ruby/chef?
06:27 mianos never heard that
06:28 __number5__ I guess he means AWS OpsWorks is Chef
06:39 mianos joined #salt
06:41 scott_w joined #salt
06:45 cachedout joined #salt
06:45 sebgoa joined #salt
07:07 redondos joined #salt
07:08 packeteer ^^
07:11 Katafalkas joined #salt
07:16 matanya joined #salt
07:27 carlos joined #salt
07:34 tomspur joined #salt
07:34 tomspur joined #salt
07:37 ashtonian joined #salt
07:39 ctdawe joined #salt
07:45 elfixit joined #salt
07:57 bhosmer joined #salt
08:05 ramteid joined #salt
08:06 balboah joined #salt
08:15 N-Mi joined #salt
08:15 N-Mi joined #salt
08:16 krissaxton joined #salt
08:17 Iwirada joined #salt
08:18 redondos joined #salt
08:18 redondos joined #salt
08:23 slav0nic joined #salt
08:23 slav0nic joined #salt
08:24 scott_w joined #salt
08:45 scott_w joined #salt
08:46 scott_w_ joined #salt
08:49 scott_w_ joined #salt
08:52 krissaxton left #salt
08:56 scott_w joined #salt
09:08 fysaen joined #salt
09:11 krissaxton joined #salt
09:14 matanya joined #salt
09:15 scott_w joined #salt
09:21 intchanter joined #salt
09:31 Damoun joined #salt
09:34 Nadz joined #salt
09:38 jpcw joined #salt
09:48 gmjones joined #salt
09:54 gasbakid joined #salt
10:01 scott_w joined #salt
10:17 scott_w joined #salt
10:40 scott_w joined #salt
10:45 liwen joined #salt
10:51 aleszoulek joined #salt
10:53 gasbakid joined #salt
10:54 qba73 joined #salt
10:57 hazzadous joined #salt
11:04 whiskybar joined #salt
11:07 Katafalkas joined #salt
11:08 rjc joined #salt
11:09 hazzadous Having trouble with pkg.installed and version arg with 0.17.1 (https://github.com/saltstack/salt/issues/7933).  Looks like there's a fix in master, any clues as to when this will hit the repos?
11:11 LLKCKfan Does any1 know anything about App Shop
11:13 Furao joined #salt
11:21 scott_w joined #salt
11:23 linuxnewbie joined #salt
11:23 linuxnewbie joined #salt
11:26 linuxnewbie hello, how can i use salt-call and execute a state from /srv/salt/dev env ? ...  i get No matching sls found for 'install_csf' in env 'base'
11:27 N-Mi joined #salt
11:27 N-Mi joined #salt
11:28 aco joined #salt
11:28 aco hello, can i require a specific ID from another state into my current state?
11:29 NV aco: use include
11:30 NV then you can use require as per usual
11:30 aco i do not want to run the whole other state
11:30 NV ...
11:30 NV if you don't run the state
11:30 aco if i used include then the whole other state will run
11:30 NV how can you require something from it?
11:30 aco this is my question
11:30 NV split the state in two
11:30 viq joined #salt
11:30 NV into parts you do want to run (in that specific instance) and parts that you don't
11:31 viq joined #salt
11:31 NV the fact that it makes sense at all to run part of it, while not running another part means it logically should be split up anyway
11:31 NV then you can include the specific part you need and use require
11:31 druonysus joined #salt
11:31 druonysus joined #salt
11:32 aco ok i will make it clearer: i have one sls that has a lot of pkgs to be installed, and another sls that depends only on one of those pkgs, i do not want to include the pkgs state cuz it will just rerun the checks and this is just a time waste
11:34 aco i was thinking in require: - sls: pkgs, but i dunno if this is right especially if i do not want to rerun the code from another state while i am not calling a state.highstate and just only calling a specific sls via state.sls
11:42 diegows joined #salt
11:43 linuxnewbie Can someone help me on No matching sls found for 'csf' in env 'base' ...i have another env callled dev ..but i dont know how to use: salt-call and execute from dev and not from base
12:00 gildegoma joined #salt
12:03 TonnyNerd joined #salt
12:04 bhosmer joined #salt
12:07 slav0nic why does salt '*' selinux.getenforce return "selinux.getenforce" is not available. ? (semanage and setsebool available on minion)
12:11 jumperswitch joined #salt
12:12 krissaxton joined #salt
12:13 Teknix joined #salt
12:13 joehh slav0nic: which release are you running?
12:13 slav0nic 17.1
12:14 matanya joined #salt
12:14 mapu joined #salt
12:14 joehh are the commands semanage and setsebool available on the minion?
12:15 joehh also, is there an "enforce" file in either of '/sys/fs/selinux' or '/selinux'
12:16 joehh just seen end of question :)
12:16 slav0nic oops, sorry, selinux was disabled on minion host
12:17 joehh good you've solved it :)
12:20 scott_w joined #salt
12:21 slav0nic as i understand i can't disable selinux via states.selinux.mode?
12:35 joehh my understanding of the source is it can only be set to permissive or enforcing
12:35 joehh I'm not familiar enough with selinux itself to know the consequences of that...
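joehh's reading matches the selinux.mode state, which takes the target mode as its name; per the above, 'disabled' is not an accepted value, only the two below (a sketch of the state form):

```yaml
# Set SELinux to permissive (or 'enforcing'); disabling entirely
# has to be done outside Salt, e.g. via /etc/selinux/config
selinux-mode:
  selinux.mode:
    - name: permissive
```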
13:00 amahon joined #salt
13:01 Whissi joined #salt
13:04 Whissi Hi. 'salt-run jobs.print_job' (no jobid) will throw an unhandled exception (TypeError: print_job() takes exactly 1 argument (0 given)).
13:04 Whissi Should I file a bug for unhandled exceptions, or are these exceptions (missing/invalid argument) intended?
13:04 cron0 joined #salt
13:07 snikkers joined #salt
13:08 jefferai joined #salt
13:12 scott_w joined #salt
13:13 scott_w joined #salt
13:13 qba73 joined #salt
13:13 gmoro joined #salt
13:19 blee joined #salt
13:24 Brew joined #salt
13:25 vkurup joined #salt
13:28 aherzog joined #salt
13:31 Brew joined #salt
13:32 mwillhite joined #salt
13:33 bhosmer joined #salt
13:36 VSpike joined #salt
13:38 scott_w joined #salt
13:38 matanya joined #salt
13:39 krissaxton joined #salt
13:43 viq joined #salt
13:43 viq joined #salt
13:47 matanya joined #salt
13:52 viq joined #salt
13:52 scott_w joined #salt
13:53 matanya joined #salt
13:56 scott_w joined #salt
13:57 jankowiak joined #salt
14:04 pass_by_value joined #salt
14:06 brianhicks joined #salt
14:06 juicer2 joined #salt
14:07 gasbakid_ joined #salt
14:09 brimpa joined #salt
14:09 racooper joined #salt
14:09 mattmtl joined #salt
14:10 brimpa joined #salt
14:11 Gifflen joined #salt
14:13 brimpa joined #salt
14:13 cachedout joined #salt
14:14 jankowiak joined #salt
14:15 brimpa joined #salt
14:19 racooper Noticed that 0.17.1 hit EPEL repos. is there anything between 0.16.4 and 0.17.1 that would break existing states or commands? I am not finding any deprecations listed anywhere
14:20 brimpa joined #salt
14:20 mike25_ joined #salt
14:21 * mike25_ hi all
14:21 jcsp joined #salt
14:22 oz_akan_ joined #salt
14:24 KBme joined #salt
14:24 xt racooper: yes, there's a bug with some pillarthingies
14:25 xt nested dicts in pillars broke for me
14:25 xt it's fixed in git but no release in sight
14:25 cachedout joined #salt
14:26 Kholloway joined #salt
14:28 brimpa joined #salt
14:29 sebgoa joined #salt
14:31 kolaman joined #salt
14:31 mike joined #salt
14:31 * mike hi
14:32 slav0nic {% set hostname = hostname_prefix + None|strftime("%Y%m%d") %}
14:32 slav0nic is there a more elegant method to get the current date in an sls?
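Salt's `strftime` Jinja filter in the line above (with `None` meaning "now") can be mimicked in plain Python for a quick check (a sketch; `hostname_prefix` is just an example value):

```python
from datetime import datetime

hostname_prefix = "web-"  # example value, standing in for the sls variable

# None|strftime("%Y%m%d") in Salt's Jinja renders the current date
datestamp = datetime.now().strftime("%Y%m%d")
hostname = hostname_prefix + datestamp

assert len(datestamp) == 8 and datestamp.isdigit()
print(hostname)  # e.g. web-20131104
```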
14:32 kolaman hi all, I have a fresh installation of salt-master on a centos 6.4 machine. I accepted the key for an ubuntu 12.10 machine and that worked like a charm, but now I am getting this error while doing anything on salt-master (like installing pkg, ping test etc): Minion did not return
14:33 mike25ro kolaman: did you change anything on the master?
14:34 kolaman mike25ro: nothing this is a fresh install
14:34 foxx joined #salt
14:34 kolaman just accepted key for a machine
14:34 karlgrz joined #salt
14:34 mike25ro and you can not ping it?
14:35 karlgrz Anyone help with problems with supervisord?
14:35 karlgrz Err, supervisord salt state?
14:35 kolaman I am currently pinging a minion/installing pkg on that but get minion did not return
14:41 kolaman mike25ro: it is a strange issue for me :(
14:41 DredTiger joined #salt
14:41 mike25ro kolaman: i agree with you
14:42 scott_w joined #salt
14:43 scott_walton joined #salt
14:45 kolaman mike25ro: got it, the minion is not running and whenever i start it it stops automatically, checking that
14:45 mike25ro ps -ef |grep salt ... on minion
14:45 mike25ro do u have the salt-minion deamon running?
14:47 kolaman mike25ro: no it is not running it stops working checking the logs
14:48 mike25ro weird as well
14:48 mike25ro kolaman: reboot the ubuntu :)
14:49 jslatts joined #salt
14:49 imaginarysteve joined #salt
14:50 jankowiak joined #salt
14:51 toastedpenguin joined #salt
14:52 mannyt joined #salt
14:52 DredTiger We've got Joseph here at Clemson Univ. teaching about 30 people Salt :-D
14:53 UtahDave joined #salt
14:53 nahamu good morning UtahDave
14:53 nahamu I just opened a pull request of a cherry-picked commit from develop to get it into the 0.17 on the assumption that 0.17.2 will be cut from the 0.17 branch.
14:54 jcsp joined #salt
14:54 nahamu If that was the wrong way to go about it, please let me know. :)
14:55 kolaman mike25ro: actually that version of minion was too old that was the prob :)
14:55 DredTiger s/Joseph/redbeard2/
14:57 kaptk2 joined #salt
14:58 scott_w joined #salt
14:59 timoguin joined #salt
15:06 KBme left #salt
15:11 cbloss joined #salt
15:12 mike25ro kolaman: install it via wget -O - http://bootstrap.saltstack.org | sudo sh
15:13 cbloss joined #salt
15:13 mwillhite joined #salt
15:13 scott_w joined #salt
15:14 cbloss joined #salt
15:14 mike25ro did anyone install salt-minion using a kickstart script?
15:19 chadhs joined #salt
15:23 quickdry21 joined #salt
15:25 mike25ro can anyone suggest how to deploy salt-minions to offline servers?
15:26 kolaman joined #salt
15:26 forrest joined #salt
15:28 bhosmer joined #salt
15:29 noob2 joined #salt
15:30 noob21 joined #salt
15:30 aberant joined #salt
15:36 pdayton joined #salt
15:36 cnelsonsic joined #salt
15:38 smccarthy joined #salt
15:40 foxx[cleeming] joined #salt
15:40 foxx[cleeming] joined #salt
15:41 utahcon mike25ro: offline, as in not actually connected to the internet?
15:41 utahcon mike25ro: I installed the salt ppa, and then salt minion from a kickstart post-install script
15:44 teskew joined #salt
15:46 mike25ro utahcon: can you tell me how?
15:47 Furao utahcon: you arrived yesterday?
15:47 mike25ro utahcon: i want to deploy some servers into a non-public environment... and i want these to be managed by salt, so i should isntall salt-minion
15:48 EWDurbin left #salt
15:50 scott_w joined #salt
15:51 imaginarysteve joined #salt
15:52 scott_w joined #salt
15:53 mike25ro utahcon: i am on the web interface and it seems i can not chat with you on private
15:54 krichardson joined #salt
15:55 cachedout joined #salt
15:57 apergos uh very n00b question...  how do I know if saltutil.refresh_pillar is successful or has failed?  (no I don't have a context, I'm just looking at the docs about usage and wondering how one can tell)
15:58 mike25ro apergos: good Q
15:59 mike25ro might want to display the pillars :) you just created
15:59 apergos I was afraid that might be the answer
16:00 IJNX joined #salt
16:05 slav0nic pkg.groupinstall unusable in states?
16:05 foxx[cleeming] joined #salt
16:05 foxx[cleeming] joined #salt
16:08 viq apergos: some stuff also shows up in master's log
16:09 apergos well I was thinking more along the lines of inline in one's code
16:09 pdayton left #salt
16:13 scott_w joined #salt
16:15 carlos joined #salt
16:15 zwevans joined #salt
16:19 linuxnewbie joined #salt
16:19 linuxnewbie joined #salt
16:19 linuxnewbie left #salt
16:19 sciyoshi joined #salt
16:19 jalbretsen joined #salt
16:22 zwevans joined #salt
16:23 amahon joined #salt
16:23 rgbkrk joined #salt
16:25 lineman60 joined #salt
16:26 ctdawe joined #salt
16:27 foxx joined #salt
16:27 foxx joined #salt
16:29 keen left #salt
16:31 bemehow joined #salt
16:31 dave_den apergos: if the minion returns true, then it is successful. I don't think it will return if it's not successful.
16:32 mr_chris How were salt jobs meant to be cronned? A single cron running on the master that does state.highstate on * or salt-call state.highstate on each of the servers that are running the minions?
16:32 apergos dave_den: ok, that's a help, thanks a lot
16:33 forrest mr_chris, there isn't really consensus there
16:33 dave_den mr_chris: both ways are valid. one thing about doing it from the master is that you have control over things like batching
16:33 forrest depends on the size of your environment and such I imagine.
16:33 mr_chris forrest, dave_den Thanks.
16:33 mr_chris batching?
16:34 forrest http://docs.saltstack.com/topics/targeting/batch.html
16:34 dave_den http://docs.saltstack.com/topics/targeting/batch.html
16:34 dave_den forrest ;P
16:34 forrest THAT'S RIGHT dave_den, I CAN COPY AND PASTE FASTER
16:34 forrest woooooo
16:34 dave_den heh
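The batch option from the link above caps how many minions execute at once, which matters for the cron-from-master case; on the CLI it looks like this (target and size are examples):

```shell
# run highstate on at most 10 minions at a time
salt --batch-size 10 '*' state.highstate
# -b is the short form
salt -b 10 '*' state.highstate
```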
16:35 lineman60__ joined #salt
16:37 juicer2 joined #salt
16:38 noob21 hahah
16:38 forrest ?
16:38 noob21 rapid fire paste
16:38 forrest yea the benefits of having batch bookmarked from constantly forgetting what it is called.
16:38 linuxnewbie joined #salt
16:38 linuxnewbie joined #salt
16:38 noob21 nice
16:38 mr_chris Thanks.
16:40 bitz joined #salt
16:42 lineman60 joined #salt
16:42 lineman60 as what the food!?
16:43 lineman60 NT
16:43 lineman60 MT
16:44 gasbakid__ joined #salt
16:44 forrest heh
16:45 ksalman is salt minion not being updated for solaris?
16:45 ksalman i just installed it from opencsw and got version 0.14
16:45 forrest ksalman, I have no idea who builds the Solaris package, you might have to ask on the mailing list.
16:45 ksalman okay
16:45 ksalman thanks
16:45 forrest I'm pretty sure like all the others it is built by some volunteer
16:46 forrest yea np, do solaris packages have changelogs??
16:46 forrest maybe it's in there.
16:46 ksalman ohh.. I'll check
16:46 forrest ok, if you find out let me know
16:46 forrest I don't touch Solaris unless it's broken and I'm the on-call so I don't have a lot of hands on with how it handles packages :\
16:47 ksalman thanks for the information, I'll dig around =)
16:47 forrest cool!
16:54 linjan_ joined #salt
16:56 cmoberg joined #salt
16:56 gasbakid_ joined #salt
16:58 worstadmin joined #salt
16:59 worstadmin I'm deploying some shell scripts via salt to my servers - but they are always non-executable (linux) - how can I deploy them g+x ?
17:00 carmony worstadmin: you can set the file mode I believe to do this
17:02 modafinil any insight into debugging gitfs problems? -- things aren't working, but no errors (warnings about accepting keys for the host, but those go away)
17:02 worstadmin carmony: You mean on the actual file or is this syntax in the sls file?
17:03 carmony in the sls
17:03 cachedout worstadmin: http://docs.saltstack.com/ref/states/all/salt.states.file.html#salt.states.file.managed
17:03 carmony worstadmin: http://docs.saltstack.com/ref/states/all/salt.states.file.html
17:04 liwen joined #salt
17:06 worstadmin thanks guys
17:06 worstadmin Working like a charm
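For the record, the mode argument from the file.managed docs linked above covers the g+x case (paths here are invented):

```yaml
/usr/local/bin/deploy.sh:
  file.managed:
    - source: salt://scripts/deploy.sh
    - mode: 755
    - user: root
    - group: root
```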
17:07 ipmb joined #salt
17:07 cachedout worstadmin: Glad to hear it!
17:09 KyleG joined #salt
17:09 KyleG joined #salt
17:12 redondos joined #salt
17:12 redondos joined #salt
17:12 nahamu terminalmage: thanks!
17:13 terminalmage nahamu: ?
17:13 terminalmage oh, the merge?
17:13 nahamu yeah.
17:13 terminalmage no prob!
17:13 nahamu (irc nick and github username disconnect can cause confusion. :) )
17:14 terminalmage I'm hopefully going to be looking into taking over packaging for salt in OpenCSW
17:14 nahamu cool
17:14 terminalmage we had a packager a while back, but he no longer uses salt for work
17:14 terminalmage so the packages are really old
17:14 nahamu I'm running it on SmartOS and focusing on deploying esky builds.
17:15 terminalmage yeah, I've seen people using that as an alternative. Pretty nice.
17:15 terminalmage hopefully in the next month or two we can get OpenCSW up to speed and we won't have to worry about that
17:17 sciyoshi why is the version numbering changing?
17:18 liwen joined #salt
17:18 james79 joined #salt
17:18 ctdawe joined #salt
17:20 james79 hi, I have a question about using returners. Am using a Vagrant instance with masterless config. I can get smtp returner working fine, but using my own returner doesn't seem to trigger. any insight into where I should be looking please?
17:21 cachedout sciyoshi: Did you read the blog post on the version numbers?
17:21 james79 um, no, am pretty new to this. do you have a link at all please?
17:21 mwillhite joined #salt
17:22 cachedout james79: Most certainly! http://www.saltstack.com/saltblog/2013/10/27/salt-version-numbers
17:22 cachedout Err, wrong person. My bad.
17:22 cachedout That was supposed to go to sciyoshi.
17:23 ipmb joined #salt
17:24 xmltok_ joined #salt
17:27 blee joined #salt
17:27 liwen joined #salt
17:30 rlarkin joined #salt
17:30 scott_w joined #salt
17:31 scott_w joined #salt
17:34 gasbakid__ joined #salt
17:35 liwen joined #salt
17:36 teebes joined #salt
17:40 m_george|away joined #salt
17:42 bhosmer joined #salt
17:43 utahcon have I asked about pcre in jinja ifs?
17:43 ajw0100 joined #salt
17:44 bhosmer_ joined #salt
17:44 mr_chris So far the upgrade to 0.17 is very painful. Just upgraded the master on centos from epel and am now encountering python errors when trying to start the service. http://paste.linux-help.org/view/db48520f
17:44 nahamu mr_chris: 0.17.1 on the master?
17:45 mr_chris nahamu, Correct.
17:45 nahamu it's painful because of the security issue.
17:45 sciyoshi cachedout: thanks, i had looked for an explanation but couldn't find one
17:45 nahamu link in one moment
17:45 cachedout sciyoshi: You're welcome!
17:45 brimpa joined #salt
17:45 mr_chris Looks like this is relevant.
17:45 nahamu http://docs.saltstack.com/topics/releases/0.17.1.html
17:46 nahamu "THIS RELEASE IS NOT COMPATIBLE WITH PREVIOUS VERSIONS. If you update your master to 0.17.1, you must update your minions as well. Sorry for the inconvenience -- this is a result of one of the security fixes listed below."
17:46 mr_chris Running master as root. That was/is the problem.
17:47 dlloyd anyone doing anything like generating nagios configs from salt definitions?
17:47 mr_chris Was, in that that is what was affecting me. Is, in that it should not break when run as a non-root user.
17:47 lineman60 joined #salt
17:50 scott_w joined #salt
17:51 racooper by any chance...was support for groups in external_auth added in 0.17.1?
17:51 scott_w_ joined #salt
17:53 kopernikus joined #salt
17:53 racooper ah, nvm I see it's slated for H.
17:55 kopernikus Hi all, I'd like to print the current context (defined vars) from inside a template to the console during state.highstate for debugging purposes. Is that possible?
17:59 bemehow_ joined #salt
18:00 jumperswitch joined #salt
18:00 oz_akan_ joined #salt
18:01 forrest kopernikus, I haven't seen someone do that before. You could try printing from within the jinja2 template maybe?
18:05 drags for replacing a master, would it be any more than: syncing the pki dir and configs, and using the old master to change the master: line in the minion config?
18:06 forrest I don't think so drags.
18:06 viq How, in something like https://github.com/saltstack-formulas/mysql-formula/blob/master/mysql/map.jinja, can I do "for this OS do this, for any other do something else"?
18:06 forrest might be worth testing with a single minion first
18:07 forrest viq
18:07 forrest https://github.com/saltstack-formulas/mysql-formula/blob/master/pillar.example
18:07 forrest the lookup data in the pillar would be used in the event that the map doesn't overwrite the lookup values
18:08 farra joined #salt
18:08 nineteeneightd I think I might be a little bit confused as to how salt-ssh works...
18:08 forrest nineteeneightd, What about?
18:08 Nazca joined #salt
18:08 nineteeneightd Do I still need to deploy all my states and pillar data to the remote machine, or can they live on teh machine I'm running salt-ssh from?
18:08 viq forrest: hmm, let me see then
18:09 forrest nineteeneightd, they live only on the system you're running salt-ssh on
18:09 viq forrest: what's somewhat confusing is that I'm using it in a pillar, not a state
18:10 forrest the OS grain data you mean?
18:10 forrest so you have kinda what looks like the map.jinja data inside your pillar?
18:10 troyready joined #salt
18:10 forrest how does that work when you're trying to keep everything clean?
18:11 viq forrest: https://gist.github.com/viq/7306831
18:12 forrest now I'm confused, this looks fine to me.
18:12 viq forrest: the '*' part does not work
18:12 forrest oh
18:12 forrest yea, I think it interprets that as a literal star
18:13 forrest so you want to say 'if the OS is debian, set them to sudo group, otherwise for any other OS, set it to wheel'
18:13 viq precisely
18:14 viq as debuntu like to be special like that
18:15 viq Any idea how to approach that?
18:17 m_george left #salt
18:17 Katafalkas joined #salt
18:19 forrest viq, https://gist.github.com/gravyboat/7306947
18:19 forrest check that out
18:20 forrest I think that should work
18:20 forrest I don't have a box here to test it on right now :\
18:21 viq forrest: yeah, I was thinking something along those lines, let me see
18:21 forrest cool, I'm pretty sure that should work
18:21 viq oh, no, I already see it won't
18:21 forrest at least it works for my formula...
18:21 forrest oh why?
18:21 viq https://gist.github.com/gravyboat/7306947#file-gistfile1-scm-L8 - you can't call a pillar from a pillar...
18:22 forrest ohhhhhhhhhh your group1.sls is a pillar file?
18:22 _ikke_ joined #salt
18:22 viq yes
18:22 forrest I should have realized that from what you were talking about earlier
18:22 forrest well, what if you declared it as a secondary pillar?
18:22 forrest the lookup that is
18:23 forrest http://docs.saltstack.com/topics/pillar/#including-other-pillars
18:23 forrest like that
18:23 forrest It seems like that would work
18:23 forrest maybe?
18:23 viq From what was explained to me, you cannot get pillar data from within a pillar, because at the time you're compiling pillar data, said pillar data is empty. All pillars at the same time.
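[editor's note: the grain-based conditional viq and forrest are circling can live directly inside the pillar file itself, which sidesteps the pillar-from-pillar limitation described above. A sketch; the `group1` key layout is invented for illustration, only the Debian sudo/wheel choice comes from the conversation:]

```jinja
{# pillar/group1.sls: pick the admin group from grains, no pillar lookup needed #}
{% if grains['os_family'] == 'Debian' %}
{% set admin_group = 'sudo' %}
{% else %}
{% set admin_group = 'wheel' %}
{% endif %}

group1:
  group: {{ admin_group }}
```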
18:23 forrest Hmm
18:24 _ikke_ Is there a way to change the password of the user once, with a random salt? Because now, every time salt runs, the salt changes, resulting in a changed hash, which causes salt to change the password every time
18:24 viq _ikke_: you usually just give the salted hash to saltstack, AFAIK
18:24 viq Or at least I believe that's the recommended approach
18:25 _ikke_ viq: The thing is that I am generating the passwords with salt
18:25 forrest _ikke_, you could try using watch_in: http://docs.saltstack.com/ref/states/requisites.html#watch-in depending on how things are structured maybe?
18:25 forrest or maybe the reactor system to only update the hash if something else occurs
18:25 rlarkin https://salt-cloud.readthedocs.org/en/latest/topics/aws.html  <-- I think that I can use salt-cloud with AMI's that don't have cloud-init , but haven't found any explicit statement...can anyone confirm for me.  sorry for the interrupt
18:25 josephholsten joined #salt
18:25 scott_w joined #salt
18:25 viq forrest: includes... hmm, I was trying that before, and it didn't really work the way I was trying it then, maybe could approach it differently now.
18:26 viq _ikke_: how are you generating those passwords?
18:26 ipmb joined #salt
18:26 _ikke_ viq: pydsl script
18:26 cron0 joined #salt
18:27 _ikke_ I first used a static salt to test, but now I'm randomly generating it
18:27 viq uh, I saw somewhere an example to generate salted password, let me see
18:27 viq ah, https://gist.github.com/UtahDave/3785738
18:28 Chocobo joined #salt
18:28 Chocobo joined #salt
18:28 viq _ikke_: uh, yeah, if it's every time different, then you're going to change it every time... as suggested, I guess you need to tie it to something else
18:29 viq Say, 'touch /var/run/${user}changed_password.${date}' or something
18:29 _ikke_ yeah, was thinking about something like that
18:30 _ikke_ Perhaps a serial
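[editor's note: the "tie it to a serial" idea can be sketched in plain Python: derive the salt deterministically from the user name plus a serial number, so the hash is identical on every salt run and only changes when the serial is bumped. This only illustrates the determinism trick; a real /etc/shadow entry would need the `$1$`/`$6$` crypt format shown later in the channel, e.g. via the `crypt` module or passlib:]

```python
import hashlib

def stable_password_hash(password, user, serial):
    """Hash whose salt is derived from (user, serial): re-running
    salt with the same serial reproduces the same hash, so the
    user state sees no change; bumping the serial rotates it."""
    salt = hashlib.sha256(('%s:%s' % (user, serial)).encode()).hexdigest()[:16]
    return hashlib.sha512((salt + password).encode()).hexdigest()
```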
18:30 jhulten joined #salt
18:30 HeadAIX joined #salt
18:31 mannyt joined #salt
18:32 liwen joined #salt
18:34 lineman60 joined #salt
18:34 elfixit joined #salt
18:36 Ryan_Lane joined #salt
18:36 dave_den _ikke_: you could always make the serial part of the salt
18:37 dave_den and use an 'unless' option to the password change state that checks the serial number in the salt
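[editor's note: dave_den's `unless` suggestion could look roughly like this state sketch; the serial file path and the `user`/`hashed_pw`/`serial` variables are invented for illustration:]

```yaml
{{ user }}_password:
  cmd.run:
    - name: |
        usermod -p '{{ hashed_pw }}' {{ user }}
        echo '{{ serial }}' > /var/lib/salt/{{ user }}.pwserial
    # skip entirely unless the recorded serial differs from the current one
    - unless: grep -qx '{{ serial }}' /var/lib/salt/{{ user }}.pwserial
```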
18:37 viq uh, tomorrow, enough for today
18:38 utahcon Trying to include more sls down the line, having problems, can anyone see what is wrong: http://pastebin.com/gzgSmcge
18:39 viq utahcon: what's the error?
18:39 utahcon getting comment: State include.repo.creditrepair.internal found in sls repo is unavailable
18:39 mwillhite joined #salt
18:40 utahcon "creditrepair" is shortened to "cr" in my paste
18:40 viq utahcon: try removing the two spaces in front of both lines and see if it makes a difference?
18:40 jslatts anyone here using syndic? I am having trouble getting the syndic to relay output back to the masterofmasters
18:41 Katafalkas joined #salt
18:42 viq utahcon: also, what is in the init.sls of cr/internal ?
18:43 utahcon http://pastebin.com/nN0psDMw
18:44 utahcon viq ^
18:44 utahcon also I was hoping to be able to set a variable in that same if, and pass it down to the included file, but that seems to not work
18:45 utahcon ideally I would have rev: {{ some_var_here }} in that last sls
18:45 utahcon moving the include to the first column fixes the include, but breaks the inheritance (?)
18:46 viq Did you remove two spaces from both lines?
18:46 utahcon viq: yes
18:46 utahcon and then the other file is included
18:46 viq Then what breaks?
18:47 ctdawe joined #salt
18:48 danielbachhuber joined #salt
18:49 utahcon viq: new file http://pastebin.com/iLSHL9kD complains on the {{ new_var }} in /salt/repo/cr/init.sls
18:49 utahcon undefined jinja variable
18:49 utahcon my guess is the include happens before (and independent) of the if
18:50 utahcon hmm, or not
18:51 amckinley joined #salt
18:51 utahcon ok so the include is dependent on the if still so I am half way there now
18:51 utahcon thanks
18:51 utahcon I will tinker for a while
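[editor's note: what utahcon is running into is that Jinja variables set in one sls file are not visible in files pulled in via salt's `include:`; each sls is rendered separately. The usual workaround is to keep shared variables in a Jinja file and import them explicitly in every sls that needs them. A sketch, with an assumed `repo/map.jinja` holding the variable:]

```jinja
{# repo/cr/init.sls: pull the variable in explicitly rather than
   expecting it to leak through the salt include #}
{% from 'repo/map.jinja' import new_var with context %}

include:
  - repo.cr.internal

some_repo:
  git.latest:
    - rev: {{ new_var }}
```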
18:51 foxx[cleeming] joined #salt
18:51 oz_akan_ joined #salt
18:51 foxx[cleeming] joined #salt
18:52 josephholsten joined #salt
18:53 xmltok_ joined #salt
18:54 oz_akan__ joined #salt
18:55 _ikke_ Is there also a way to specify only the part of a state in a pydsl file (and include that), while the main part is in a normal sls file?
18:56 oz_akan__ joined #salt
19:01 bhosmer joined #salt
19:01 Brew exit
19:02 madduck joined #salt
19:02 madduck joined #salt
19:03 druonysus joined #salt
19:03 druonysus joined #salt
19:04 madduck joined #salt
19:05 madduck_ joined #salt
19:05 madduck_ joined #salt
19:05 utahcon is there a good explanation of how include works in salt?
19:05 jslatts joined #salt
19:07 dave_den utahcon: beyond this explanation? http://docs.saltstack.com/ref/states/include.html#include
19:08 aberant joined #salt
19:08 utahcon dave_den: sort of. Does the included file get appended or prepended? Does that make sense?
19:09 dmwuw joined #salt
19:10 utahcon hmm I think I have it backwards
19:10 veetow joined #salt
19:12 dave_den utahcon: appended is what i'd guess, according to the code.
19:13 utahcon dave_den: that is what I am guessing too
19:13 utahcon thanks
19:14 patrek joined #salt
19:15 noob2 joined #salt
19:15 QauntumRiff joined #salt
19:16 bemehow joined #salt
19:16 mr_chris We have a couple of centos 5 boxes still around. While epel has the latest version of salt for them, the version of zeromq is still at 2. Should I expect problems due to the older zeromq version?
19:16 QauntumRiff it appears over the weekend, my master updated itself to salt 17.
19:16 QauntumRiff now I am getting this error: [WARNING ] TypeError encountered executing saltutil.find_job: __init__() got an unexpected keyword argument 'with_communicate'. See debug log for more info.  Possibly a missing arguments issue:  ArgSpec(args=['jid'], varargs=None, keywords=None, defaults=None)
19:17 QauntumRiff also, startup takes forever
19:18 QauntumRiff I'm also seeing quite a few of these: Failed to import module selinux, this is due most likely to a syntax error.
19:18 QauntumRiff Traceback raised: Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/salt/loader.py", line 605, in gen_functions
19:18 Gareth QauntumRiff: is that on the minions or the master?
19:18 QauntumRiff master
19:18 QauntumRiff selinux, grub, zpool, freebsdpkg, etc, are giving that last warning, but I don't use them
19:19 oz_akan_ joined #salt
19:19 Gareth I see the selinux import error but only if I'm running it in debug mode.
19:19 Katafalkas joined #salt
19:19 mastrolinux joined #salt
19:19 QauntumRiff okay, I switched to debug, to try to figure out whats happened
19:19 QauntumRiff so that could explain that
19:20 bemehow joined #salt
19:20 utahcon are jinja variables available between states?
19:20 QauntumRiff also getting lots of errors in function _mine
19:21 mastrolinux Minion did not return, just after updating to 0.17.1-2precise
19:21 mastrolinux is Corey Quinn here?
19:21 Corey mastrolinux: Usually.
19:21 Gareth QauntumRiff: minions are running 0.17.1 too?
19:21 QauntumRiff I doubt it.
19:21 QauntumRiff I disable auto updating on my systems
19:22 QauntumRiff not sure how the master got updated
19:22 Corey mastrolinux: Master and minion at the same version, and both services have been restarted successfully?
19:22 Gareth that could be part of it.  I had seen something this morning that minions < 0.17.1 won't work with a master = 0.17.1
19:22 Katafalk_ joined #salt
19:22 Corey And you're still not seeing the minion return?
19:22 mastrolinux Corey: do you know what is happening with the last minor update in the package? no more communication from salt master to minion, yes
19:22 mastrolinux I have both at the same version
19:22 noob2 left #salt
19:22 mastrolinux and restarted properly
19:22 mastrolinux (new PID)
19:22 Corey mastrolinux: Fixing a bug; a patch was applied to the wrong line because I'm an idiot. Nothing more.
19:23 Corey mastrolinux: salt --versions to a pastebin from the master and minion?
19:23 mastrolinux Corey: sure in a bit
19:23 Corey mastrolinux: No worries. That said, this doesn't *sound* like an explicit packaging bug. Not to mention that if it were I'd have heard about it a lot. :-p
19:24 Corey We saw a lot of this back when zmq was in the 2.x tree.
19:25 mastrolinux 3.2.2 here but pasting now
19:25 Corey 3.2.4 on this machine. Neat. Pypi is so fancy.
19:25 mastrolinux I usually use pip freeze all the time :)
19:26 mastrolinux Corey: http://pastebin.com/rD5yCqCP
19:26 QauntumRiff gareth: thanks, i'm working on updating all the minions
19:26 bhosmer joined #salt
19:27 Gareth QauntumRiff: no worries.
19:27 Corey mastrolinux: You've got mismatched ZMQ versions. 3.x and 4.
19:27 ipmb joined #salt
19:27 Corey I'm not sure what that would do, but if they're following SemVer I'd question it.
19:27 mastrolinux I installed everything from dev packages, so, what do you suggest?
19:27 Corey mastrolinux: These both on the same OS?
19:28 mastrolinux yep!
19:28 bemehow_ joined #salt
19:28 mastrolinux Ubuntu 12.04.3
19:29 Corey mastrolinux: Curious. How did you do the install in both, just apt-get install from the PPA?
19:29 mastrolinux both from ppa http://ppa.launchpad.net/saltstack/salt/ubuntu
19:29 Corey Any third party repositories?
19:29 Corey (Namely ones that would bump zmq on one but not the other)
19:29 mastrolinux let me check, do not think so
19:29 Corey Whenever you wind up with a zmq version mismatch that's a major version, there will be tears and maybe blood.
19:30 Corey So that's my number one question mark right there.
19:30 mastrolinux why not pin the versions in requirements and using pip? But thanks for the support, I am looking into it
19:30 gasbakid joined #salt
19:31 Corey mastrolinux: Ubuntu packaging guidelines, mainly.
19:31 xmltok_ joined #salt
19:31 Corey We're also looking at self-hosting the Ubuntu packages going forward, but that's a fair bit of work. :-)
19:31 Corey And ultimately I do have a day job that isn't Salt. :-)
19:32 mastrolinux ghgh, ok looks like the datadog script is using pyzmq 13 while you use 14 version
19:32 rlarkin joined #salt
19:32 mastrolinux I am in trouble
19:32 Corey mastrolinux: Well at least we found it. That said, both SHOULD work.
19:32 Corey I'd yell at utahdave, but he's not around.
19:32 Corey Try emailing the mailing list; that's probably the best venue for this kind of thing.
19:33 Corey Oh, and file an issue. :-)
19:33 Corey Probably in that order. Best to get some direction from the Salt engineers.
19:33 mastrolinux Corey: I am going to update via pip and let you know
19:35 imaginarysteve joined #salt
19:38 mastrolinux Corey, looks like having everything at the same exact version does not help
19:38 rlawrie joined #salt
19:38 Corey mastrolinux: I'd mailing list this. It's an odd situation.
19:38 mannyt joined #salt
19:39 mastrolinux thanks, FYI calling salt-call in the minion works like a charm
19:40 TheRealBill joined #salt
19:41 dave_den mastrolinux: when you run the minion with debug logging and then run a command from the salt master, do you see the job info logged in the minion log?
19:41 _ikke_ If I call a module function from jinja {{ salt['module.function']() }}, can that function then access info from the current state? (like it's name for example)?
19:41 _ikke_ ?
19:42 bemehow joined #salt
19:44 mastrolinux looks like the minion crashes: http://pastebin.com/AHtffH41
19:44 dave_den _ikke_ - execution modules can access the context dunder
19:45 dave_den _ikke_: http://docs.saltstack.com/topics/development/dunder_dictionaries.html#context
19:45 ckao joined #salt
19:46 mgw joined #salt
19:47 _ikke_ __context__ seems to be used to persist some state. Does it already contain information, or do you have to fill it yourself?
19:48 foxx joined #salt
19:48 foxx joined #salt
19:49 madduck joined #salt
19:50 mastrolinux Corey: I bet is this: https://github.com/saltstack/salt/pull/7953/files
19:51 nineteeneightd I'm trying to get acclimated with salt-ssh and having a little difficulty getting it to apply some states using 'state.highstate'
19:51 Corey mastrolinux: Hmm.
19:51 hazzadous joined #salt
19:51 Corey If they want to cut a release that includes it I'll package it, but cherry-picking from develop is bad from a packager's point of view. :-)
19:51 nineteeneightd I have salt installed in a virtualenv, have some modules and a top.sls in /srv/salt/ and my roster at /etc/salt/roster
19:52 zetsuboudev joined #salt
19:52 blafountain joined #salt
19:52 blafountain hello
19:53 mastrolinux Corey: indeed, I agree with you, will try to setup an installation from Git, problem is: I am using it in prod ;)
19:54 blafountain i had a quick question about salt-cloud and bootstraping... is there a way to use salt-cloud to bootstrap a machine from your local development machine?
19:54 nineteeneightd `salt-ssh '*' test.ping` works fine, so I know I have the connection set up right in the roster, but when I call state.highstate, nothing
19:54 blafountain all of the documentation refers to running it on the master
19:55 cbloss has anyone had luck deploying salt inside a virtualenv container?
19:55 dave_den cbloss: i use virtualenvs, yes
19:55 bhosmer joined #salt
19:55 nineteeneightd I'm wondering if there's more info I could find out about this video: http://www.youtube.com/watch?v=uWGDC1PdySQ
19:56 mastrolinux blafountain: I use a fabric script to install salt-minion
19:56 cbloss dave_den: do you have your process documented at all? I'd love to see how you got everything setup
19:57 blafountain mastrolinux: so you don't use salt-cloud at all?
19:57 [1]VertigoRay joined #salt
19:57 bemehow joined #salt
19:59 jhulten joined #salt
19:59 dave_den cbloss: i only use virtualenvs for development, so right now i just have packer kick off a script after my dev image is created which just makes a virtualenv, copies salt from a vmware shared dir to /opt/, then pip installs salt in that virtualenv.
19:59 dave_den https://gist.github.com/dlanderson/956c109185837f93c577
20:00 dave_den then calls highstate
20:01 cbloss dave_den: thanks I'll give this a try. I want to try and use virtualenv to have control over salt versions
20:01 dave_den cbloss: I added virtualenv activation in the upstart scripts, so you can specify SALT_USE_VIRTUALENV in /etc/default/salt-minion (or master), and it will activate the virtualenv when you start salt from upstart. https://github.com/saltstack/salt/blob/develop/pkg/salt-minion.upstart
20:01 cbloss I've been stuck on 17.1
20:02 jhulten_ joined #salt
20:02 dave_den cbloss: i'd like to deploy from bbfreeze/esky builds, but last time i tried to use that it wasn't working on ubuntu 12.04
20:02 dave_den when i have time i'm going to revisit that
20:03 cbloss thanks for the info. I'll start working on this soon.
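[editor's note: the upstart hook dave_den describes amounts to something like this in the defaults file; the virtualenv path is illustrative:]

```shell
# /etc/default/salt-minion
# Point at a virtualenv and the upstart script will activate it
# before starting the minion.
SALT_USE_VIRTUALENV=/opt/salt-venv
```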
20:04 heewa joined #salt
20:08 nineteeneightd Hmmm, I wonder if state.highstate makes sense in a salt-ssh world...
20:08 Ymage joined #salt
20:08 oz_akan_ joined #salt
20:10 oz_akan_ joined #salt
20:11 dave_den nineteeneightd: possibly related? https://github.com/saltstack/salt/issues/8005
20:15 fwiles joined #salt
20:15 IJNX joined #salt
20:16 _ikke_ What is an easy way to debug a module? Trying to get output, but I don't seem to get any
20:17 forrest are you doing -l debug?
20:17 dave_den _ikke_: i usually have to add logging in the module where i suspect the problem lies
20:18 mgw joined #salt
20:18 fwiles Any cool tricks for getting virtualenv.managed to use a pip requirements file, on the minion, that references other requirements files relatively via for example '-r base.pip'
20:19 mr_chris As an experiment to see why my salt minions aren't consistently running as they are cronned to, I have it doing one thing and one thing only: writing the date to a tag file. I then waited for the cron job to run, that job being "*/30 * * * * /usr/bin/salt-call state.highstate > /var/log/salt/salt-call.log". Some of them ran. Others resulted in the contents of /var/log/salt/salt-call.log being "Minion failed to authenticate with the master, has the minion
20:19 mr_chris key been accepted?". All of my Minions are authenticated properly. When I log into those servers and run the job manually, it works fine. This is on salt 0.17.1 on both the minion and the master all on centos 6.
20:19 fwiles ?
20:19 heewa If it gets really hairy, I've had success with using the python debugger. Where something's wonky, add: "import pdb; pdb.set_trace()", then use salt-call to run highstate on the minion, and that process (salt-call) will throw you into the debugger.
20:19 _ikke_ dave_den: And what does logging mean in this case?
20:20 bemehow joined #salt
20:20 ajw0100 joined #salt
20:21 nineteeneightd Looks like there's a lot of bug being squashed around salt-ssh, so I won't keep prodding too much, but does pillar data work with salt-ssh?
20:21 dave_den _ikke_: create a log object in the module if it doesn't have one already (e.g. https://github.com/saltstack/salt/blob/develop/salt/modules/brew.py#L13) and use it to print out whatever you need log.debug()...
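[editor's note: dave_den's suggestion in miniature; a custom execution module (say `_modules/mymod.py`, the name and function are assumed) that logs at debug level:]

```python
import logging

log = logging.getLogger(__name__)

def ping():
    # Appears in the minion log when salt runs with `-l debug`
    # (or with `log_level: debug` in the minion config).
    log.debug('GIVE ME DEBUG: ping() called')
    return True
```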
20:21 forrest nineteeneightd as far as I'm aware yea
20:21 _ikke_ dave_den: thanks
20:21 dave_den nineteeneightd: yes, it should
20:21 heewa I'm trying to follow the directions in http://docs.saltstack.com/topics/community.html for submitting a pull request, but having trouble with Travis-CI. I signed in, enabled it on my fork of salt, set up the hook, and even made a new commit to try to trigger a build, but nothing's happening.
20:22 forrest heewa, can you show me the commit in your repo?
20:23 forrest seems like you did everything
20:23 heewa @forrest commit to trigger a build is the latest one on master: https://github.com/heewa/salt/commits/master but I have the commit for the pull request on another branch: https://github.com/heewa/salt/commits/pip_vcs_error_msg
20:25 dave_den fwiles: you can specify a requirements file with the 'requirements' arg in the virtualenv.managed state
20:26 fwiles dave_den: yeah that's what I'm doing, from a cloned git repo on the minion. Problem is the requirements file references another
20:26 _ikke_ yay, unknown error in jinja variable :S
20:26 forrest heewa, https://travis-ci.org/heewa/salt/builds
20:26 fwiles dave_den: it's a known issue, just wondering if there was a relatively simple work around
20:26 forrest if you go to build history it looks fine, they are queued up right now
20:26 forrest I can see your 'trigger a travis build' message right there.
20:26 diegows joined #salt
20:27 heewa forrest: Oh wait, the instructions kinda imply it'd show up in my travis dashboard. So, where should I see the build?
20:27 forrest It's in your travis dashboard
20:27 forrest unless you're looking at a different place than I am
20:27 dave_den fwiles: oh, i misunderstood. i'm not sure about that. i guess you could always write a tiny script to pull in the other requirements file and replace the require line in the original requirements file
20:28 heewa forrest: Ok, I see it now. Sorry, I didn't think there'd be that much lag! Thanks :)
20:28 JasonSwindle joined #salt
20:28 forrest heewa, yea since Travis is free sometimes things get backed up during the day in the US, if you do commits at night it's pretty much instantly going. Salt still takes a while though.
20:29 mike251 joined #salt
20:29 * mike251 says hi to all
20:29 mike251 forrest: are you around?
20:29 jankowiak joined #salt
20:29 forrest mike251 ye
20:30 forrest *p
20:30 mike251 nice to see you again forrest.... i came up today with a challenge... i need to install 20 vms ... that are in a private env, but i want to manage them with salt... how do i install salt-minions on them ... ? offline?
20:31 mike251 anyone .. any ideas?
20:31 mwmnj joined #salt
20:31 forrest Can you expand on what you mean by private env a bit more? Is a salt master, or connections into that environment, not feasible?
20:31 fwiles dave_den: ah FYI you can get around it by setting no_chown: True on the virtualenv
20:31 fwiles dave_den: then it doesn't muck with the paths
20:31 dave_den fwiles: ah, glad you got it
20:32 mike251 forrest: the machines can not access the internet directly .. to pull salt-minion... and they are installed via kickstart
20:32 forrest mike251, Are they installed via kickstart manually from the DC?
20:32 mike251 manually
20:34 forrest mike25ro, so when you have to do work on these boxes, you walk to the DC, then jump on the console?
20:34 mike25ro yeah
20:34 forrest man that sucks
20:34 alunduil joined #salt
20:34 mike25ro no... sorry forrest... i use a jump host...
20:34 forrest oh ok
20:34 mike25ro i can get to the machines...
20:34 hjubal joined #salt
20:34 hjubal joined #salt
20:34 bhosmer joined #salt
20:34 mike25ro via another subnet etc.
20:34 QauntumRiff so I updated all my minions to 17, and things are mostly better, but keep getting this message, clogging up my syslog: TypeError encountered executing saltutil.find_job: __init__() got an unexpected keyword argument 'with_communicate'
20:34 forrest can you not create a salt master in this environment?
20:34 QauntumRiff it also sends to the console on my master
20:34 mike25ro yeah i can create forrest
20:35 dave_den mike25ro: can you not put the salt packages and package dependencies on the kickstart server or another server that the vms can access?
20:35 forrest dave_den, was just going to suggest that
20:35 heewa forrest: Build failed on an apt install: "The command "sudo apt-get install git swig supervisor rabbitmq-server ruby" failed and exited with 100 during before_install." Is that a retry kinda thing, or is something actually wrong?
20:35 mike25ro dave_den: .... i think ... i can access a .... yum repo... a clone of centos .. locally
20:36 mike25ro i think we have an up to date clone of centos repo
20:36 dave_den mike25ro: i'd just put the salt packages in the yum repo and have kickstart install them.
20:36 forrest just add the salt packages to the repo (local or otherwise) that the machines pull from, then push that to the master; once the master is done, you can go about writing salt to manage what needs to be.
20:36 forrest I agree with dave_den
20:36 forrest heewa, I honestly don't know, sometimes travis craps out for no reason on salt builds, it's been an issue for a while.
20:36 forrest I've had it randomly stop working on package installs, and all sorts of other stuff
20:36 _ikke_ If I use logging, how can I see the output of the log statements?
20:37 heewa forrest: Ok, well I put out the pull request. I guess if someone wants me to fiddle with the builds more they'll say so in the review.
20:37 dave_den _ikke_: it depends on how you have logging configured in salt. if you are doing log.debug(), you need to run salt with either -l debug or set log_level: debug in the salt config
20:37 mike25ro dave_den: how to add the packages to the repo? ... i am not a specialist :)
20:37 dave_den mike25ro: there's lot of documentation for that online.
20:38 _ikke_ dave_den: I run with -l debug, but I don't see output
20:38 forrest heewa, yep that's pretty much how it goes depending on what you're merging in. Since develop is broken all the time, there's a chance you were already working with part of the develop branch that was busted :P
20:38 dave_den _ikke_: what does your debug log code look like?
20:38 forrest mike25ro, it might be also worth talking to whoever set up the kickstart and repo server in case there is unique configuration.
20:38 nineteeneightd dave_den forrest: do you guys know of any resources related to salt-ssh besides the docs?
20:38 QauntumRiff it would also appear that besides the errors above, all my minions are getting zmq errors, and crashing
20:38 dave_den _ikke_: are you sure your debug log code path is being hit?
20:38 mike25ro forrest: will look into that... thanks forrest and dave_den
20:38 forrest np
20:39 dave_den nineteeneightd: other than the salt source code, i just read the docs
20:39 mike25ro forrest: always great to... get help from you
20:39 _ikke_ dave_den: I have put the log statement in the module itself
20:39 forrest nineteeneightd, all I know of are the http://docs.saltstack.com/ref/cli/salt-ssh.html and http://docs.saltstack.com/topics/ssh/index.html
20:39 _ikke_ log = logging.getLogger(__name__); log.debug('test')
20:39 forrest What do you need that isn't documented there nineteeneightd?
20:40 dave_den _ikke_: is the module being loaded?
20:40 _ikke_ dave_den: yes, I see it from the result output
20:40 Ryan_Lane joined #salt
20:40 _ikke_ Changes:   passwd: $1$/MEJipOh$cSpZfOmMAhIr6wyh711q5.
20:41 dave_den _ikke_: change your log string from 'test' to 'GIVE ME DEBUG', restart your salt process, then grep your logs for 'GIVE ME DEBUG'
20:41 _ikke_ dave_den: master or minion process?
20:41 mapu joined #salt
20:42 dave_den that depends on where you put the debug code...
20:42 _ikke_ I've put it in the file_root/_module/module.py file
20:42 ajw0100 joined #salt
20:42 mastrolinux blafountain: no, too much not needed overhead to use salt-cloud IMHO
20:43 dave_den _ikke_: are you syncing your modules after changing the code?
20:43 _ikke_ dave_den: yes
20:44 dave_den http://docs.saltstack.com/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.sync_all
20:44 _ikke_ dave_den: Should I see the output on the master, or in the log files of the minion?
20:44 _ikke_ dave_den: Yes, that command
20:44 dave_den _ikke_: http://docs.saltstack.com/ref/modules/all/salt.modules.saltutil.html#salt.modules.saltutil.sync_all  you are doing this to push the module to the minions
20:44 _ikke_ well, sync_modules
20:44 dave_den in the log files of the minion
20:44 _ikke_ ah ok
20:45 bhosmer_ joined #salt
20:46 _ikke_ Ok, in the log file of the minion I do get output
20:46 QauntumRiff it looks like I had some extra "salt-master" processes running in the background.. once I killed them off, things have improved
20:46 mastrolinux Corey: I was right, fixing self.opts.get('ext_job_cache', '') in minion.py fixes the issue
20:47 druonysus joined #salt
20:47 mastrolinux and with the new zeromq is faster than ever
20:49 Psi-Jack Would there be a problem with salt-minions being on zeromq-2 and others being on zeromq-3?, in the same salt infrastructure?
20:49 mastrolinux Corey: I suggest you to repackage with the fix, otherwise everyone with the deb package update will have the same issue
20:51 bemehow joined #salt
20:54 mastrolinux Psi-Jack: the important thing is to have the same ZMQ version on the master and on the minion
20:56 Psi-Jack mastrolinux: Hmm. What side-effects would that potentially entail? We have that situation between CentOS 6.x and CentOS 5.x servers, unfortunately.
20:56 scott_w joined #salt
20:57 dave_den Psi-Jack: you will have connectivity issues with ZMQ 2.x
20:59 jslatts are most people using gitfs with syndic now to sync state and pillars?
21:00 bemehow joined #salt
21:00 Psi-Jack dave_den: Just those running zeromq-2 ?
21:00 Psi-Jack to/from?
21:00 cewood joined #salt
21:01 mr_chris dave_den, We're having connectivity issues, and performance issues with ZMQ 3.x.
21:01 bhosmer joined #salt
21:01 dave_den Psi-Jack: zmq 2.x will stop communicating with the master eventually.
21:02 dave_den mr_chris: zmq 3.x is stable, so you should not be having connectivity issues, afaik.
21:02 mastrolinux Psi-Jack: yes, it often happens that you lose communication between master and minion
21:03 mr_chris dave_den, I'm really not sure what the issue is at this point. I can't even test.ping reliably to our servers right now. Sometimes it's fast. Other times it's slow. Other times it just times out.
21:03 mr_chris Our salt environment is falling apart around me as we speak and all it's tasked to do right now is write the date to a file.
21:03 JasonSwindle left #salt
21:04 * mr_chris is frustrated
21:04 mpanetta joined #salt
21:04 forrest mr_chris, deep breaths
21:05 dave_den mr_chris: how many minions do you have?
21:05 mpanetta joined #salt
21:06 mr_chris dave_den, About 150. I would guess this is probably more related to our environment and less with salt but I have no way of knowing at this point.
21:07 dave_den what are the ram/cpu specs of the master?
21:07 dave_den you should have no problem supporting 150 minions
21:07 forrest mr_chris, is the issue happening on the same minion when it does happen?
21:08 mr_chris forrest, 6 core AMD. ~28 GB of RAM.
21:08 alunduil_ joined #salt
21:08 forrest Oh man, that hsould be way more than enough
21:08 forrest *should
21:08 dave_den definitely
21:08 mr_chris I agree.
21:08 bhosmer joined #salt
21:08 mr_chris The primary issue I'm trying to deal with is salt-minion being unable to authenticate when cronned.
21:09 mr_chris I can ignore the other issues until then.
21:10 forrest it can't auth against the master when configured as a cron?
21:12 cachedout In what way are you invoking salt-minion from cron? Directly or via a system startup script?
21:12 KyleG left #salt
21:13 Chocobo joined #salt
21:13 Chocobo joined #salt
21:13 mr_chris forrest, Correct. cachedout "*/30 * * * * /usr/bin/salt-call state.highstate > /var/log/salt/salt-call.log"
21:14 forrest are you running that as the root user?
21:14 mr_chris Yes.
21:14 mr_chris id: and master: are hard coded in the /etc/salt/minion
21:14 cachedout Are you using virtualenv?
21:15 fatbox mr_chris: http://docs.saltstack.com/topics/jobs/schedule.html
21:15 mr_chris Not sure what that is.
21:15 mr_chris We are not.
21:15 cachedout K
21:15 josephholsten joined #salt
21:16 mr_chris fatbox, Good to know. I may have to go this route.
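21:16 * (For reference, the scheduler fatbox linked moves the job out of cron and into the minion config. A minimal sketch mirroring the 30-minute crontab above; the job name "highstate" is arbitrary:)

```yaml
# /etc/salt/minion -- sketch: run highstate every 30 minutes via the
# built-in scheduler instead of cron (job name is arbitrary)
schedule:
  highstate:
    function: state.highstate
    minutes: 30
```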
21:16 forrest so when it fails to run, what gets logged?
21:16 cachedout Does that log have any output and/or is crond logging errors?
21:16 forrest does anything get dumped into the system messages?
21:17 fatbox mr_chris: if you're looking for more flexibility you may want to checkout what we use for applying state: https://github.com/fatbox/saltcommander/blob/master/saltcommander.py
21:18 mr_chris cachedout, The log only gives 'Minion failed to authenticate with the master, has the minion key been accepted?'
21:19 mr_chris forrest, I didn't think to check the system messages. Doing so now.
21:19 forrest mr_chris, ok, it might also be worth checking the logs historically when you see the timestamp failure for that minion key error.
21:19 forrest see if there is any correlation around that time
21:20 dave_den mr_chris: in what cron file is that cron job? if it's not in root's (or the user salt-call should run as) crontab you need to specify a user after the time
21:20 forrest basepi, if whiteinge can just confirm how I think it works is how it works, I'll modify the docs and add an example explaining how it gets overwritten for issue 8239, I just don't want to put anything inaccurate in the docs.
21:21 forrest even if you just wanna throw something at him and ask :P
21:22 mr_chris forrest, Nothing in /var/log/messages. In /var/log/salt/minion all I get when it fails is '[salt.loaded.int.grain.core][INFO    ] The `lspci` binary is not available on the system. GPU grains will not be available.' and '2013-11-04 15:30:53,950 [salt.loaded.int.module.cmdmod][INFO    ] Executing command 'ps -efH' in directory '/root''
21:22 basepi forrest: luckily, whiteinge has his bouncer up again.  He's in Portland right now, but hopefully he'll find a few minutes to jump on and powow with you
21:22 basepi forrest: otherwise, i would happily throw something at him
21:23 basepi ;)
21:23 forrest basepi, hah
21:23 forrest basepi, I'm sure he'll respond on the issue, I just didn't feel like that needed to be added to the existing discussion.
21:23 veetow left #salt
21:24 basepi cool
21:24 Corey mastrolinux: Is this bug deb specific?
21:27 mastrolinux Corey I am not able to verify on other packages but it was not in 0.71.1-1 precise
21:27 mastrolinux so could be just for deb
21:28 mr_chris dave_den, It is root's.
21:30 dave_den mr_chris:  are all of your 150 minions doing the salt-call cron at the same time?
21:30 dave_den and do they work fine when you issue commands from the master?
21:32 ekaqu joined #salt
21:33 linuxnewbie joined #salt
21:33 linuxnewbie joined #salt
21:33 mr_chris dave_den, Yes to the first question.
21:34 mr_chris They do work fine when I issue the commands from the master, but they take forever to run state.highstate, even though all I'm doing at this debugging stage is running a single state that runs a single command to output the date to a file.
21:34 ekaqu I am new to salt, so sorry if im missing something, but I am trying to use pkgrepo to setup a third party repo.  Based off the docs required_in is valid.  When I try to use it, I always get "Data failed to compile:", when I remove required_in, then salt works.  I tried looking over the code and yet to see required_in in https://github.com/saltstack/salt/blob/develop/salt/states/pkgrepo.py.  Any idea whats up?
21:34 mr_chris I have to set -t 120 just to get it to wait long enough to see the output.
21:34 mr_chris But if I run the same command from cmd.run it's nearly instant.
21:37 forrest ekaqu, can you paste your state?
21:37 forrest or a link to it
21:37 ekaqu docker:   pkgrepo.managed:     - humanname: Docker Inc. PPA     - name: deb http://get.docker.io/ubuntu docker main     - comments:       - "# Docker Inc. Repo"     - key_url: https://get.docker.io/gpg #    - require_in: #      - pkg: lxc-docker   pkg.installed:     - pkgs:       - linux-image-extra-{{ grains.get('kernelrelease') }}       - lxc-docker
21:37 dave_den mr_chris: i suspect your master is the bottleneck. have you made sure your limits are set properly and have you profiled the master during the highstates? if you kick off a highstate on the master with salt -b 10 '*' state.highstate  does it work properly?
21:37 forrest mr_chris, when you run highstate, can you see how many process spin up versus how many do on a cmd.run?
21:37 ekaqu formating is bad.  let me gist
21:37 forrest was just gonna ask ekaqu
21:38 ekaqu https://gist.github.com/dcapwell/7309603
21:38 jhulten joined #salt
21:38 mr_chris forrest, On the master or the minions?
21:38 dave_den ekaqu: require_in:\n    - pkg: docker
21:38 forrest master
21:38 dave_den not lxc-docker
21:39 ekaqu even if the package name is lxc-docker?
21:39 dave_den ekaqu: you reference the require/watch by ID declaration
21:39 mr_chris dave_den, Checking now. What limits do you speak of?
21:39 dave_den http://docs.saltstack.com/ref/states/highstate.html#id-declaration
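21:39 * (Applied to the gist above, the fix is just pointing require_in at the ID declaration; a sketch of the corrected state, otherwise unchanged from ekaqu's paste:)

```yaml
docker:
  pkgrepo.managed:
    - humanname: Docker Inc. PPA
    - name: deb http://get.docker.io/ubuntu docker main
    - comments:
      - "# Docker Inc. Repo"
    - key_url: https://get.docker.io/gpg
    - require_in:
      - pkg: docker    # the ID declaration, not the package name lxc-docker
  pkg.installed:
    - pkgs:
      - linux-image-extra-{{ grains.get('kernelrelease') }}
      - lxc-docker
```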
21:39 mr_chris dave_den, I have not profiled the master during the high states. That is something I should have done. Thanks for mentioning it.
21:40 forrest dave_den, you beat me to it :D
21:40 dave_den mr_chris: nofile limits, first and foremost.
21:40 mr_chris dave_den, It's running 10 at a time pretty well.
21:40 dave_den mr_chris: nofile defaults to 1024, which is way too low.
21:40 dave_den you should up that to at least 100000
21:40 mafalb joined #salt
21:41 mr_chris What is nofile?
21:41 mr_chris Or where can I look it up?
21:41 dave_den http://docs.saltstack.com/topics/troubleshooting/index.html?highlight=nofile#too-many-open-files
21:41 ekaqu thanks dave_den, switched to the ID and it worked
21:41 jhulten_ joined #salt
21:41 dave_den mr_chris: depending on how you have salt-master installed/started and your distro, you will have to configure that as appropriate.
21:42 dave_den ekaqu: no prob :)
21:42 mr_chris Thanks for the help, btw. I was flailing for a bit. I'm beginning to see how this process is supposed to work.
21:43 mr_chris dave_den, And so you think that when all the minions run at once, I'm reaching some kind of bottleneck that could be causing them not to authenticate?
21:43 dave_den mr_chris: of course! we want to see people succeed with salt.
21:43 jhulten joined #salt
21:43 mr_chris And I want to see salt succeed.
21:43 dave_den mr_chris: yes, i suspect it's your nofile limit. is your master also a minion?
21:43 mr_chris Not currently.
21:43 mr_chris It got through the batched operations just fine.
21:44 dave_den 99% sure you just need to up your nofile limit
21:44 dave_den mr_chris: what distro is your master and how do you start salt-master?
21:45 mr_chris dave_den, So you said at least 100000. The doc does say, "So, an environment with 1800 minions, would need 1800 x 2 = 3600 as a minimum."
21:45 mr_chris I have only 150 minions.
21:45 mr_chris centos. service salt-master start
21:46 gkze joined #salt
21:47 younqcass joined #salt
21:47 mr_chris When profiling the master when the highstates run, what specifically should I be looking for outside of RAM usage, CPU usage, and usual stuff like that?
21:47 forrest number of processes.
21:48 mafalb joined #salt
21:48 forrest see how many processes get kicked off associated with salt
21:48 dave_den mr_chris: your master can have millions of filehandles open. 100k will ensure your salt-master has plenty of room to operate.
21:48 johncc joined #salt
21:49 forrest to see if that is limited or held up in some fashion, also see if those processes are sleeping etc. It might be worth stacktracing with -c to see the calls as wel.
21:49 forrest *well
21:50 madduck_ joined #salt
21:51 dave_den mr_chris:  echo 'ulimit -n 100000' >> /etc/default/salt && service salt-master restart
21:52 dave_den you can confirm the new limit by:   grep 'open files' /proc/$(cat /var/run/salt-master.pid)/limits
21:54 Ryan_Lane joined #salt
21:56 mr_chris dave_den, Limits put in place. Confirmed.
21:56 mr_chris What you're saying makes sense. The master being the bottleneck would explain the symptoms I've been seeing.
21:56 pholbrook joined #salt
21:57 _ikke_ Ha! My system works (with a serial)
21:59 jumperswitch joined #salt
22:00 travisfischer joined #salt
22:00 mr_chris I think I have to go on by myself now. Thanks again dave_den, forrest, and fatbox
22:00 mr_chris *Have enough to go on
22:00 dave_den mr_chris: welcome
22:01 dave_den _ikke_: nice
22:01 pholbrook joined #salt
22:02 forrest later mr_chris
22:04 VSpike joined #salt
22:07 mgw joined #salt
22:08 goki joined #salt
22:09 pholbrook left #salt
22:09 xerxas joined #salt
22:09 m_george|away joined #salt
22:11 _ikke_ The only thing I'm missing is getting the username from the state, instead of passing it as a parameter
22:14 gamingrobot joined #salt
22:15 pcarrier joined #salt
22:17 akitada joined #salt
22:19 linjan joined #salt
22:19 scooby2 is 0.17.1 compatible with an older master (0.16.4)?
22:19 jesusaurus whee, found some strange behaviour
22:20 m_george left #salt
22:20 seanz Quick question: When I use the bootstrap.saltstack.org script, does that defer to a distro's packaging systems wherever possible?
22:20 seanz Sorry, but I haven't inspected it deeply (obviously).
22:20 linjan joined #salt
22:21 jesusaurus has anyone ever seen state modules return different values depending on if you are running `salt` from the master or running `salt-call` from the minion?
22:22 jesusaurus seanz: by default, yes
22:22 seanz jesusaurus: Thanks. I was hoping but wanted to confirm.
22:25 cachedout jesusaurus: I was dealing with a tangentially related issue today. Tell me more about what you're seeing.
22:26 heewa joined #salt
22:26 shinylasers joined #salt
22:27 jesusaurus cachedout: im hacking on some of the rabbitmq stuff, so on the master i have some modified versions of the modules in _states/ and _modules/ and one state function in particular returns True in salt-call on the minion but returns None from salt on the master
22:27 madduck joined #salt
22:27 madduck joined #salt
22:27 jesusaurus (both while running with test=True)
22:27 VSpike salt-cloud is attempting to ssh into the windows machines it creates for me on rackspace to run the bootstrap-salt.sh script... is this a known issue?
22:28 kermit joined #salt
22:28 LLKCKfan Does any1 know anything about or use App Shop?
22:28 Psi-Jack LLKCKfan: Uhh, pardon?
22:29 packeteer cachedout: im hacking on some of the rabbitmq stuff  <- i thought salt used zeromq ??
22:29 cachedout jesusaurus: Any chance you could whip up a basic way to reproduce this easily and put it in GH issue for me? I have time to spend on it this afternoon.
22:31 jesusaurus cachedout: not really :/ I have no idea where the issue is...
22:31 jesusaurus could be caching, could be code paths, could be version differences
22:31 * cachedout nods
22:31 jesusaurus was just wondering if other people had something similar i could use as a lead
22:31 jesusaurus cachedout: what was the tangential issue you saw?
22:32 cachedout To summarize the issue I was dealing with this morning (just to see if it rings any bells)...the __salt__ dict for execution modules aren't guaranteed to be the same for modules when they are executed from the master versus locally on the minion with salt-call
22:32 jesusaurus oh, interesting
22:33 VSpike http://sprunge.us/jIfX in case anyone can help
22:34 cachedout I have a PR in right now which basically forces that dict to be rebuilt from scratch upon sys.module_refresh. I have no idea of course if that's what you're dealing with but it is an area where the contents of an execution module are different depending on from where you examine it.
22:34 Psi-Jack dave_den: Yeah, CentOS doesn't use /etc/default/* BTW. :)
22:34 ipmb joined #salt
22:34 cachedout (And additionally, that PR need more review before being merged so please don't blindly slam it into production servers. ) ;]
22:35 Psi-Jack But, for some odd reason, some select few init scripts do.
22:35 mwillhite joined #salt
22:35 dave_den Psi-Jack: salt  on centos does.
22:36 Psi-Jack yeah... Which.. Is annoying. :)
22:36 dave_den c'est la vie  :)
22:36 gamingrobot joined #salt
22:36 Psi-Jack Took me actually looking to see it. :)
22:36 jesusaurus what init system does centos use?
22:36 Psi-Jack jesusaurus: upstart currently.
22:36 Psi-Jack Mostly lsb-init style stuff, though, wrapped into upstart.
22:37 jesusaurus hmm
22:37 jesusaurus interesting
22:37 Psi-Jack CentOS 7, as will RHEL 7, will be using systemd, though. :)
22:39 bemehow joined #salt
22:39 Parabola joined #salt
22:39 jesusaurus any idea why theyre moving away from upstart?
22:39 jesusaurus i like upstart, but i dont have strong feelings for any of the init systems
22:40 Psi-Jack jesusaurus: Uhh, because lennart is a Red Hat employee, and Fedora's already running systemd?
22:40 Psi-Jack :)
22:41 Psi-Jack And the only reason they used upstart was because it was the only alternative choice at the time for capabilities like it had, but upstart is.. no good ;)
22:41 jrgifford joined #salt
22:42 shennyg joined #salt
22:43 jesusaurus theres always openrc, too
22:44 neilf joined #salt
22:44 Psi-Jack Which is nowhere near what upstart can do.
22:44 Psi-Jack upstart actually does craploads more than openrc could ever do.
22:45 scalability-junk joined #salt
22:45 bemehow joined #salt
22:48 Brew joined #salt
22:51 teebes joined #salt
22:52 heewa joined #salt
22:59 aparashar joined #salt
23:01 josephho_ joined #salt
23:05 goki joined #salt
23:06 VSpike Can anyone tell me what the win_username and win_password are for at http://salt-cloud.readthedocs.org/en/latest/topics/windows.html#configuration ?
23:08 pears am I allowed to use the py renderer for pillar data?
23:09 NV pears: yes
23:09 pears same API?
23:09 mpanetta joined #salt
23:10 pears a run() function that poops out a dict?
23:10 NV yup
23:10 pears cool
23:10 fwiles nice, didn't think about doing that
23:11 pears and yes I am using fancy computer science jargon (poops = returns)
23:11 fwiles pears: Honestly I think that is the technical term ;)
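23:11 * (A py-rendered pillar file, per the API pears describes, looks roughly like this; a sketch, with the file path and keys made up:)

```python
#!py
# /srv/pillar/limits.sls -- sketch of a py-rendered pillar file:
# the #!py shebang selects the renderer, and run() returns ("poops out")
# the pillar data as a plain dict
def run():
    # hypothetical values, just to show the shape of the return value
    return {'max_open_files': 100000, 'admins': ['alice', 'bob']}
```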
23:12 LLKCKfan Does any1 know anything about or use AppShop?
23:13 Psi-Jack LLKCKfan: Seriously. What the heck are you talking about? This doesn't sound like a Salt question at all.
23:13 GradysGhost joined #salt
23:13 GradysGhost Hey everyone.
23:13 GradysGhost Got another issue today...
23:13 GradysGhost Probably my fault, but salt-master won't run.
23:13 GradysGhost Output is a Python stack trace
23:14 GradysGhost http://pastebin.com/aPAnLzXm
23:14 GradysGhost Complaining about a library affecting a gid
23:15 GradysGhost If I get rid of the 'user: salt' bit in the config and leave it off the command line, it runs fine.
23:15 LLKCKfan left #salt
23:15 GradysGhost I've verified the user exists, he's in the appropriate groups, they all have valid user and group IDs. I can su to the user.
23:15 GradysGhost So why does salt freak out?
23:16 wibberwock joined #salt
23:17 wibberwock how can I have a minion wait until another minion has a service running?
23:17 fwiles wibberwock: maybe uses events and reactor?
23:17 fwiles wibberwock: er maybe use that is
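23:17 * (The event/reactor approach fatbox suggests looks roughly like this; a sketch, with the event tag and sls path made up:)

```yaml
# /etc/salt/master -- sketch: react to a custom event fired by a minion
reactor:
  - 'myorg/db/ready':              # hypothetical event tag
    - /srv/reactor/start_webs.sls  # hypothetical reactor state to render
```

(The first minion would announce itself with something like `salt-call event.fire_master '{}' 'myorg/db/ready'`.)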
23:19 NV GradysGhost: that's actually _NOT_ your fault :P
23:19 NV edit "/usr/lib/python2.6/site-packages/salt/utils/verify.py", line 296
23:20 NV replace pwd.gid with pwd.pw_gid
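23:20 * (The underlying gotcha: Python's pwd entries only expose pw_-prefixed attributes, which is what the traceback in verify.py tripped over:)

```python
import pwd

# struct_passwd fields are all pw_-prefixed; a bare .gid attribute
# does not exist, hence the AttributeError in salt/utils/verify.py
entry = pwd.getpwuid(0)           # root's passwd entry
assert hasattr(entry, 'pw_gid')   # correct field name
assert not hasattr(entry, 'gid')  # the buggy lookup in 0.17.1
```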
23:20 GradysGhost ok
23:20 GradysGhost Is this a known bug?
23:20 NV yeah, its fixed in git
23:20 NV i had to do the same thing
23:20 GradysGhost Cool. So should I get on the package maintainers for the epel repo that got installed from?
23:20 NV next release it'll be fixed i assume
23:20 NV although might be worth doing
23:20 NV only affects the master though
23:21 GradysGhost Okay, I did that and now I have a massive stack trace related to the inability for that salt user to read the master file.
23:22 GradysGhost I suppose I should set those perms, eh?
23:23 GradysGhost The perms there are set identically to other systems, though. Systems that work fine.
23:24 NV err, salt runs as root by default
23:24 NV if you've changed that then yes, you'll probably need to chown some things
23:24 krichardson left #salt
23:25 GradysGhost I did, and things work. As a point of clarification, I have other salt masters with a 740 mode on /etc/salt/master and root:root ownership. These all have user:salt in the master configs and no problems running the service. Why does this one box require the permissions change?
23:25 GradysGhost I think there may be an issue with versions.
23:26 GradysGhost I think the ones that work are older versions...
23:26 * GradysGhost checks
23:26 GradysGhost heh, yeah. Working version is 0.16.3. Broken version is 0.17.1
23:26 GradysGhost ok, so code changes. I'll update my scripts.
23:26 GradysGhost Thanks, NV, you're the best.
23:27 NV np :)
23:29 mafalb joined #salt
23:32 mike251 joined #salt
23:32 wibberwock is it possible to fire and listen for events from within sls files?
23:37 mike251 left #salt
23:38 modafinil joined #salt
23:39 andrewclegg joined #salt
23:41 mpanetta joined #salt
23:41 pviktori joined #salt
23:41 Gifflen joined #salt
23:43 pears wibberwock: don't think so, no
23:43 pears all that the minion is paying attention to is the zeromq socket
23:43 pears so it only does stuff when instructed to
23:43 wibberwock reactor system also doesn't seem to support cross-event state
23:44 wibberwock like a cmd ran only when both a and b events are received, but not until then
23:45 wibberwock ah nvm, i should be using overstate
23:45 MTecknology I just managed to use salt to break a server. I forgot that one server needs postfix, the rest just use nullmailer
23:46 teebes joined #salt
23:47 Psi-Jack You know.. I'm beginning to wonder if directory.recurse may be somewhat buggy, or at least related to the nofiles limitation issue.
23:47 wibberwock nope, nvm, overstate seem to take targeting arguments
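23:47 * (An overstate file with targeting, as wibberwock lands on, is a sketch like this; run via the state.over runner, and the match patterns are made up:)

```yaml
# /srv/salt/overstate.sls -- sketch: web minions run highstate only
# after the database stage succeeds
database:
  match: 'db*'
webservers:
  match: 'web*'
  require:
    - database
```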
23:50 amahon joined #salt
23:51 alunduil joined #salt
23:54 VSpike Ah, if I define the windows_installer location then it does try to connect on port 445 instead. Except that port is closed by default
23:54 octarine joined #salt
23:55 VSpike I dont suppose salt-cloud allows me to create VM's from my own snapshots on Rackspace, does it?
23:56 carmony VSpike: Hrm, you might be able to specify which image, though I don't know if it is documented.
23:58 rachbelaid joined #salt
23:59 VSpike Another option would be to run salt-cloud from a VM on rackspace, so that I could use the private network, since I think all ports are open there
23:59 Psi-Jack http://paste.hostdruids.com/view/c8c9ba54  -- here's a likely bug I found that randomly keeps popping up with various sourced files and directory.recurse's. And the stack dump related to it, same thing for both, every time this error randomly pops up.
23:59 Psi-Jack Bug in 0.17.1
