IRC log for #salt, 2014-09-24

All times shown according to UTC.

Time Nick Message
00:01 n8n joined #salt
00:01 yomilk joined #salt
00:02 mechanicalduck joined #salt
00:03 Outlander left #salt
00:10 halfss joined #salt
00:19 vbabiy joined #salt
00:22 to_json joined #salt
00:23 oz_akan joined #salt
00:24 oz_akan joined #salt
00:24 aparsons joined #salt
00:29 ajprog_laptop joined #salt
00:31 halfss joined #salt
00:32 halfss joined #salt
00:33 ramishra joined #salt
00:38 mrlesmithjr joined #salt
00:41 smcquay joined #salt
00:46 saurabhs left #salt
00:47 oz_akan_ joined #salt
00:48 Angie_Rath21 joined #salt
00:51 TyrfingMjolnir joined #salt
00:52 hasues joined #salt
00:57 ajolo joined #salt
00:57 mrlesmithjr joined #salt
00:59 halfss joined #salt
01:00 malinoff joined #salt
01:00 yomilk joined #salt
01:05 halfss joined #salt
01:07 TheThing joined #salt
01:13 kusams joined #salt
01:15 mhedgpeth joined #salt
01:20 dstufft left #salt
01:21 UtahDave joined #salt
01:25 catpiggest joined #salt
01:37 dude051 joined #salt
01:40 n8n joined #salt
01:40 halfss_ joined #salt
01:45 Alex_Olson joined #salt
01:49 halfss joined #salt
02:09 ramishra joined #salt
02:11 n8n joined #salt
02:14 catpiggest joined #salt
02:15 to_json joined #salt
02:23 wiqd joined #salt
02:23 rjh joined #salt
02:24 rjh I'd like to configure WebSphere application server and DB2 with salt
02:25 rjh Thought this might be typical and wondering what resources might be available
02:26 rjh that would be to install WAS on linux and Windows platforms, configure, run and monitor
02:26 rjh and also DB2 and JBOSS
02:26 rjh are the initial components
02:31 halfss_ joined #salt
02:32 anotherZero joined #salt
02:37 VictorLin joined #salt
02:44 Amie74 joined #salt
02:50 Ryan_Lane joined #salt
02:52 oz_akan joined #salt
02:53 midacts joined #salt
02:54 rypeck joined #salt
02:54 iggy rjh: check github.com/saltstack-formulas
02:55 rjh starting smaller with test platform infrastructure and execution, although those big components are a part
03:00 rjh thanks iggy, saltstack-formulas is a good site, see some db formulas for couch and postgres, but not the app servers or the big dbs like db2 and oracle
03:00 VictorLin joined #salt
03:01 iggy do a full github search
03:01 iggy I've found a few things that weren't in there
03:02 iggy there's also salt-contrib
03:02 tkharju joined #salt
03:05 anotherZero joined #salt
03:05 Gnouc joined #salt
03:13 bhosmer_ joined #salt
03:15 rjh thanks, see a lot of entries for websphere on github, will keep looking for one for salt.
03:16 iggy yeah, it's probably going to be called something like websphere formula
03:16 iggy that should narrow things down
03:16 iggy that is if it exists
03:16 iggy you may have to write your own
03:18 bezeee joined #salt
03:19 iggy in which case, the formulas in salt-formulas might be a good place to look
03:23 sherbs_tee joined #salt
03:24 jalaziz joined #salt
03:25 Gnouc joined #salt
03:26 Gnouc joined #salt
03:27 rjh thanks, was trying to get the salt-contrib repository, permission denied
03:27 rjh looking thru github now
03:29 rjh thanks for advice on github formulas if I need to write my own.
03:30 dccc joined #salt
03:32 bhosmer joined #salt
03:35 jeffspeff how can i push a new config file to all of my minions?
03:35 miqui joined #salt
03:36 Ryan_Lane jeffspeff: you just need to push a random file from the master to the minions?
03:36 jeffspeff yes, but it needs to go to the c:\salt dir on the minions
03:38 jeffspeff i need to enable keepalive settings for the minions. i found that in the config file, so my thought was to push a new minion config file to all minions then issue a restart service command
03:38 Ryan_Lane jeffspeff: I think you want this module: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cp.html
03:40 rjh I do see some jython and python libs to make websphere admin easy, would be a good start for a custom websphere formula
03:41 rjh will look into that later, thx
03:41 jeffspeff Ryan_Lane, thanks it looks like "salt '*' cp.get_file salt://path/to/file /minion/dest" will do what i need.
03:41 Ryan_Lane cool :)
03:43 TheThing joined #salt
03:44 bhosmer joined #salt
03:44 Arne48 joined #salt
03:44 troyready joined #salt
03:51 jeffspeff so, pushing out a static config file won't work for updating the config file for all minions since the config file includes the minion name.
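
A side note on jeffspeff's problem: cp.get_file can render the file as a template on each minion, which handles exactly this "the config contains the minion's own name" case. A minimal sketch using Salt's Python API from the master; the source and destination paths are made-up examples:

    import salt.client

    local = salt.client.LocalClient()
    # Each minion fetches the template and renders it locally, so a
    # {{ grains['id'] }} placeholder in the file becomes that minion's
    # own name. The paths below are hypothetical.
    result = local.cmd(
        '*',
        'cp.get_file',
        ['salt://configs/minion.jinja', r'c:\salt\conf\minion', 'template=jinja'],
    )
    print(result)

The CLI equivalent is: salt '*' cp.get_file salt://configs/minion.jinja 'c:\salt\conf\minion' template=jinja
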
03:53 oz_akan joined #salt
03:56 Outlander joined #salt
03:58 tkharju joined #salt
04:01 aurynn joined #salt
04:02 aurynn is it just me, or is using "make_master: True" in my cloud map file buggy?
04:04 aurynn specifically, I'm getting a stacktrace from 'setdefault' in the yaml parsing
04:06 SheetiS joined #salt
04:07 oeuftete joined #salt
04:08 Heggan joined #salt
04:11 ramishra joined #salt
04:26 anotherZero joined #salt
04:26 TyrfingMjolnir joined #salt
04:37 felskrone joined #salt
04:39 ramishra joined #salt
04:40 johngrasty joined #salt
04:41 oz_akan joined #salt
04:43 kermit joined #salt
04:50 ramteid joined #salt
04:51 TyrfingMjolnir joined #salt
04:55 oz_akan joined #salt
04:56 johngrasty joined #salt
05:03 NotreDev joined #salt
05:07 cuonglm joined #salt
05:08 cuonglm_ joined #salt
05:14 ramishra joined #salt
05:15 sectionme joined #salt
05:17 a6patch joined #salt
05:21 Outlander_ joined #salt
05:22 thayne joined #salt
05:28 NotreDev joined #salt
05:31 a6patch What's the best way to fire custom events to the master programmatically from Python?  I've been playing with client.api and client.caller and utils.event and can't seem to get anything to go through. My listener on the master does see events come through when they're sent with salt-call event.fire_master and my custom tag, though.
05:34 NV a6patch: using salt.utils.event seems to be the way to go?
05:34 NV http://docs.saltstack.com/en/latest/topics/event/index.html has example code doing that
05:34 ajprog_laptop joined #salt
05:35 ramishra joined #salt
05:41 a6patch I've been trying all those examples...with various variations, and simply cannot get an event to fire up to the master.
05:42 a6patch salt-call event.fire_master works fine from the command line, but I just can't get a snippet of python to do something similar.
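
A minimal minion-side sketch of the pattern a6patch is after, assuming a 2014-era Salt API; the tag and payload are placeholders:

    import salt.client

    # Build a caller that behaves like salt-call, then invoke the same
    # function the CLI uses: event.fire_master(data, tag).
    caller = salt.client.Caller('/etc/salt/minion')
    caller.sminion.functions['event.fire_master'](
        {'msg': 'hello from python'},  # event data (placeholder)
        'myco/custom/tag',             # event tag (placeholder)
    )

Run it as the same user as the minion (usually root) so it can read the minion config and keys.
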
05:44 scryptic1 joined #salt
05:46 scryptic1 Does anyone know if there is any salt-cloud functionality to set dns a records/reverse dns/mx record?
05:49 oyvjel joined #salt
05:52 oz_akan joined #salt
05:55 catpigger joined #salt
06:07 ramishra joined #salt
06:12 a6patch joined #salt
06:19 girb joined #salt
06:19 girb Hi all… it may be a stupid question
06:20 girb I'm new to salt..  salt '*' state.sls intern/nova-compute/compute  … I have an init.sls file in intern/nova-compute/compute/
06:20 mosen init.sls yep
06:20 girb but it is not executing the init.sls file .. please may I know the reason for it
06:21 mosen you would use a period/dot instead of the directory separator
06:21 mosen salt '*' state.sls intern.nova-compute.compute
06:21 girb I tried intern.nova-compute.init and it did not work
06:21 mosen you don't have to add the init on the end
06:22 mosen it automatically adds init if theres one in that path
06:23 girb mosen: yes, that's what I thought should happen .. but I don't know why it's not picking up init .. let me give it a try again
06:23 mosen girb: ok no problem :)
06:26 girb mosen:   so when I execute the salt '*' state.sls intern.nova-compute.compute    intern.nova-compute.init should also be executed by default right ?
06:26 mosen girb: it would also execute intern/nova-compute/compute/init.sls
06:31 mechanicalduck_ joined #salt
06:35 lcavassa joined #salt
06:35 kingel joined #salt
06:39 NotreDev joined #salt
06:41 n8n joined #salt
06:41 ramishra joined #salt
06:47 girb mosen: do I need to use "include:" in compute.sls to include init.sls ?
06:47 mosen girb: oh sorry i just re-read your previous message
06:48 mosen girb: if you execute intern.nova-compute, you get init
06:48 mosen girb: if you run intern.nova-compute.compute, you get the compute.sls
06:48 mosen if anything it would usually be the other way.. including compute from init
06:48 girb mosen: its ok . yes .. but how to include init.sls
06:48 bhosmer joined #salt
06:50 girb mosen: it worked now .. I needed to explicitly add include:  - init .. thanks
06:50 n8n joined #salt
06:53 oz_akan joined #salt
06:54 n8n joined #salt
06:59 Sweetshark joined #salt
07:00 ramishra_ joined #salt
07:02 alanpearce_ joined #salt
07:04 n8n joined #salt
07:06 a6patch joined #salt
07:09 jensnockert joined #salt
07:09 bhosmer joined #salt
07:22 fredvd joined #salt
07:32 ramishra joined #salt
07:40 TheThing joined #salt
07:40 maboum joined #salt
07:41 ramishra_ joined #salt
07:43 TyrfingMjolnir joined #salt
07:43 martoss joined #salt
07:54 oz_akan joined #salt
07:55 spo0nman joined #salt
07:56 darkelda joined #salt
07:56 Ryan_Lane joined #salt
08:06 slav0nic joined #salt
08:10 kiorky joined #salt
08:19 facefullofbacon joined #salt
08:23 ramishra joined #salt
08:27 snuffeluffegus joined #salt
08:39 krak3n` joined #salt
08:42 ajprog_laptop joined #salt
08:44 ramishra joined #salt
08:45 kbyrne joined #salt
08:52 jhauser joined #salt
08:54 oz_akan joined #salt
08:56 the_drow_ joined #salt
08:56 VSpike Is there some advice on the typical limit of size or number of files that can be transferred over salt to a minion?
08:56 the_drow_ Hi guys, I now have a live master but the gitfs config isn't working
08:56 babilen the_drow_: Please elaborate
08:56 the_drow_ babilen: pasting logs and config as we speak
08:57 babilen +1 (please don't use pastebin.com)
08:57 PI-Lloyd joined #salt
09:00 the_drow_ babilen: config file https://bpaste.net/show/7850844f0a42
09:00 VSpike I'm wondering how best to install this https://github.com/bliker/cmder/releases on a Windows box. No installer, circa 25MB 7zip. Should I put the zip on the master and use a script on the box to unpack it? Or can I unpack the zip on the master and send the contents recursively?
09:00 VSpike Unpacked, it's 285M total .. 285 folders, 3614 files
09:01 the_drow_ Well the logs are gone. That's strange
09:01 babilen the_drow_: One small tip: I found it to be much nicer to simply remove all commented lines from configuration files.
09:02 the_drow_ babilen: It's generated from salt-formula...
09:02 babilen Ah, okay.
09:02 the_drow_ The only thing in the log is [ERROR   ] Salt request timed out. If this error persists, worker_threads may need to be increased.
09:03 the_drow_ But there used to be more logs
09:05 babilen Could you start the master with "-ldebug" and check for errors there? Which version of salt are you using? (I assume 2014.7 as you use the "root=pillar" argument for your ext pillar)
09:05 ramishra joined #salt
09:06 babilen And what exactly constitutes "not working" -- What did you try, what did you expect to happen and what did happen?
09:07 the_drow_ babilen: sure
09:07 the_drow_ develop
09:07 babilen Ah, what could go wrong?!
09:08 TheThing joined #salt
09:09 the_drow_ It seems that the error is resolved. Strange
09:09 the_drow_ where would I see the files from gitfs?
09:09 * babilen stops waving his hands
09:09 babilen You are welcome
09:09 the_drow_ Thanks :)
09:10 babilen the_drow_: /var/cache/salt/master/gitfs
09:12 the_drow_ The cache folder named after what seems to be the commit number is empty
09:12 VSpike I have a question about gitfs for salt states, having seen it discussed here. I understand it's better to have state and pillar in separate repos, and that branches will be seen as environments. But that leaves the question - where do you put the top.sls which is expected to be in the base?
09:15 the_drow_ babilen: does that mean something is wrong?
09:15 giantlock joined #salt
09:15 babilen the_drow_: salt doesn't completely clone a directory there, but simply caches files it need(ed|s)
09:16 the_drow_ so only when I invoke the highstate the cache will be populated?
09:17 VSpike Just found http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html?highlight=gitfs#branches-environments-and-top-files which seems to answer it pretty succinctly
09:17 babilen VSpike: Here's the deal: The top.sls will be merged across branches  which, unfortunately, means that equating branches to environments is a braindead way of dealing with this as you cannot use any of the standard git merge/branch workflows. You can either have your top.sls in a separate repository or refrain from using standard git workflows.
09:18 babilen (neither is sensible)
09:18 mikber joined #salt
09:19 VSpike babilen: yeah.. I was just trying to wrap my head around that
09:19 VSpike You'd likely end up with one branch (master) which bears no relation to the others (e.g. dev/staging/production )
09:20 babilen It's IMHO a very idiotic way of dealing with this. Fortunately you'll have more control over your repository layout with the per-remote configuration discussed in http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html?highlight=gitfs#gitfs-per-remote-config
09:20 VSpike How will that resolve the issue, though?
09:20 babilen But even then it simply doesn't make sense to equate branches and environments. Let me configure that explicitly and don't create an environment for every branch that I push
09:21 the_drow_ I have a minion authenticated with my master but it won't ping. salt '*' test.ping displays no output
09:21 _mel_ joined #salt
09:21 babilen the_drow_: Restart master and minion, how long did you wait after you accepted the key?
09:22 the_drow_ probably 8 hours
09:22 the_drow_ I had to sleep :P
09:22 VSpike :)
09:23 babilen VSpike: Well, the idea would be to use one directory per branch and to configure gitfs in such a way that gitfs_root for that environment is being served from that directory. That way you can have one top.sls per environment without fearing that any other top.sls files will be merged into it.
09:23 the_drow_ Now test.ping simply hangs
09:23 babilen Still stupid to equate branches and environments
09:23 the_drow_ babilen: Yup you're right. That's much better
09:24 the_drow_ babilen: Why would test.ping hang? The master and the minion are on the same machine. It's an AWS micro instance. Is that a problem?
09:26 the_drow_ [ERROR   ] Salt request timed out. If this error persists, worker_threads may need to be increased. Failed to authenticate!  This is most likely because this user is not permitted to execute commands, but there is a small possibility that a disk error ocurred (check disk/inode usage).
09:26 babilen the_drow_: It didn't hang. Let me explain: Once you restart master/minion there will be a delay before you can communicate with your minion (10 seconds I believe). This is due to the fact that minions are being authenticated on a set schedule and there will simply be a little delay.
09:26 the_drow_ both minion and master are running as root
09:26 babilen It still doesn't ping?
09:26 Nemo joined #salt
09:27 the_drow_ yup
09:27 the_drow_ Now the output is empty again
09:28 the_drow_ I just checked and the key is accepted
09:29 babilen the_drow_: Mind pasting the output of "salt-master -ldebug", "salt-minion -ldebug", "salt-key -L", "salt -ldebug -v '*' test.ping" and your configuration files for master/minion -- Don't forget to stop master/minion processes before you start them manually.
09:29 the_drow_ ports 4505 and 4506 are open (TCP) on my security group
09:30 the_drow_ that's way too many ssh sessions :P
09:30 the_drow_ I should probably learn how to use tmux someday
09:30 Sp00n left #salt
09:30 babilen Would you be interested in a better hosting provider? ;)
09:30 jhauser joined #salt
09:31 the_drow_ It's for work so no. I don't decide on where to deploy
09:32 babilen (that was mostly a joke)
09:32 the_drow_ :P
09:33 the_drow_ babilen: That's the output of salt * test.ping https://bpaste.net/show/7fd74e40747b
09:33 the_drow_ It's still trying
09:34 babilen It's perfectly fine to put all information in a single pastebin.
09:36 ramishra_ joined #salt
09:38 the_drow_ well now it pings
09:38 the_drow_ That's very strange. All the processes are in the forground
09:41 the_drow_ babilen: I'll try to raise the log level on the log files and see what happens
09:43 CeBe joined #salt
09:44 yomilk joined #salt
09:44 wr3nch joined #salt
09:49 babilen I have magical powers today! :)
09:50 Twiglet_ joined #salt
09:55 oz_akan joined #salt
09:56 the_drow_ babilen: It seems that the problem was that I restarted the master first and then the minion
09:56 the_drow_ Btw, do both of them load their configuration without restarting?
10:06 wnkz joined #salt
10:06 iclebyte joined #salt
10:07 giantlock joined #salt
10:08 the_drow_ babilen: Thanks! Everything seems to be in order
10:09 wnkz_ joined #salt
10:10 askhan joined #salt
10:15 diegows joined #salt
10:22 bhosmer_ joined #salt
10:22 jensnockert joined #salt
10:23 jensnockert joined #salt
10:31 N-Mi joined #salt
10:31 N-Mi joined #salt
10:36 mechanicalduck joined #salt
10:47 peters-tx joined #salt
10:56 oz_akan joined #salt
10:58 oz_akan_ joined #salt
11:00 the_drow_ joined #salt
11:05 linjan joined #salt
11:10 TyrfingMjolnir joined #salt
11:12 Sonny__ joined #salt
11:18 akafred joined #salt
11:20 yomilk joined #salt
11:24 johngrasty joined #salt
11:24 intellix joined #salt
11:33 jslatts joined #salt
11:34 jensnockert joined #salt
11:34 Nico__ joined #salt
11:39 askhan SheetiS, you there?
11:39 _ale_ joined #salt
11:40 AdamSewell joined #salt
11:40 halfss joined #salt
11:41 wnkz_ joined #salt
11:41 Nico__ Version conflict maybe?
11:42 Nico__ hi, I'm trying to get salt running on Debian Squeeze using the install instruction on saltstack, but getting Invalid Argument.  Wil post my versions.
11:43 VSpike An entire morning spent trying to find a way to install a decent console emulator on Windows with salt. The amount of time spent working around the lack of a package manager in Windows is incredible.
11:43 Nico__ Salt: 2014.7.0rc, Python: 2.6.6 (r266:84292, Dec 27 2010, 00:02:40), Jinja2: 2.7.3, M2Crypto: 0.22, msgpack-python: 0.4.2, msgpack-pure: Not Installed, pycrypto: 2.6.1, libnacl: Not Installed, PyYAML: 3.11, ioflo: Not Installed, PyZMQ: 14.3.1, RAET: Not Installed, ZMQ: 4.0.4, Mako: 0.8.1
11:49 oz_akan joined #salt
12:01 glyf joined #salt
12:03 micko joined #salt
12:05 glyf Hi #salt, I just wanted to say thanks for this great tool. I've been using ansible but found it difficult to debug playbooks and diagnose errors. I've just started using salt today. The output is so much clearer and errors far more helpful. The speed is impressive so far too. Just thought I'd say thanks :)
12:05 hobakill joined #salt
12:10 flyboy82 joined #salt
12:11 ndrei joined #salt
12:14 iamtew I feel the same.. salt just feels a bit more robust, when I'm working with it
12:14 iamtew ansible feels like a hack around "for host in $hosts; do ssh $host "..." ; done" :D
12:15 iamtew ahem.. but yeah, let's not get in to that
12:17 glyf iamtew: robust is the word I'd use too. I'm excited to see what salt can do.
12:18 ndrei joined #salt
12:20 viq Although at times salt is a bit too verbose and it would be nice to dial it down a bit
12:21 iamtew yeah I wrote a bash function for that: salt() { command salt "$@" &> /dev/null; }
12:21 iamtew :D
12:21 iamtew nice and quiet
12:22 viq that's not quite what I want
12:23 viq salt -G first_batch state.highstate test=True | wc -l
12:23 viq 7171
12:23 iamtew yeah, I know what you mean
12:26 jensnockert joined #salt
12:27 kingel joined #salt
12:29 kingel joined #salt
12:30 ndrei joined #salt
12:33 jslatts joined #salt
12:36 diegows any formula or something to launch an openstack cluster with salt?
12:37 diegows I'm checking salt-formulas now, but may be there is something in other place
12:37 diegows this looks good https://github.com/saltstack-formulas/openstack-standalone-formula
12:37 diegows the title at least
12:37 viq https://github.com/EntropyWorks/salt-openstack
12:38 diegows viq, that looks good too! thanks!
12:38 viq hm, not if that's the one I mean
12:38 viq meant
12:38 viq there's also https://github.com/CSSCorp/openstack-automation
12:41 halfss joined #salt
12:41 ramishra joined #salt
12:41 diegows good
12:41 diegows thanks
12:42 diegows is there something to download dependencies between formulas?
12:42 diegows If not, we should code something :)
12:42 viq AFAIK there isn't currently
12:45 pduersteler joined #salt
12:45 giantlock joined #salt
12:48 vejdmn joined #salt
12:54 yomilk joined #salt
12:57 ndrei joined #salt
13:01 cpowell joined #salt
13:05 racooper joined #salt
13:05 snuffeluffegus joined #salt
13:09 pduersteler I'm struggling with "include" and pkgrepo.managed. how do I include a repo in an sls? I tried various thing but didn't get the concept.. can anyone give me a hint? https://gist.github.com/pduersteler/1fe65e98fdfa60dce581
13:09 the_drow_ joined #salt
13:09 the_drow_ Hi guys, I can't find the documentation on the new boto vpc module
13:09 oz_akan joined #salt
13:10 mpanetta joined #salt
13:11 viq pduersteler: can you explain what you're having problem with?
13:11 miqui joined #salt
13:12 pduersteler viq: I want to include dotdeb in nginx.sls so nginx is pulled from dotdeb, but I can't seem to include repos/dotdeb.sls inside nginx without getting errors that the state is not found / unavailable
13:12 viq the_drow_: https://github.com/saltstack/salt/blob/develop/salt/modules/boto_vpc.py ;)
13:13 the_drow_ But it's not on the site yet for some reason
13:13 viq pduersteler: indeed, include: repos.dotdeb
13:13 nitti joined #salt
13:13 the_drow_ does salt have an aws account I can use in order to develop a state for it?
13:14 pduersteler viq that's what I tried, which gives me "State include.repos.dotdeb found in sls nginx is unavailable"
13:14 pduersteler where do I have to put it exactly? I just nested it on the second level
13:15 the_drow_ Meh it doesn't even create a VPC :/
13:16 viq pduersteler: https://pbot.rmdir.de/qJC0Sow5Ztgb95S6slbJIA
13:16 viq erm, no
13:16 _ale_ pduersteler, have a look https://github.com/saltstack-formulas/epel-formula
13:16 _ale_ should have a good example of how to do it (i hope/guess)
13:16 viq there - https://pbot.rmdir.de/N5NbWShulr-E1OrySzMhIw
13:17 viq pduersteler: ^
13:17 aquinas joined #salt
13:17 aquinas_ joined #salt
13:18 pduersteler viq: aaah, okay, that makes sense, putting it on top-level. thanks!
13:19 mpanetta_ joined #salt
13:20 mpanetta joined #salt
13:22 madduck joined #salt
13:22 peloponnesian joined #salt
13:24 peloponnesian good morning... can anyone tell me how i can append a directory to the python path that salt uses?
13:24 peloponnesian i am developing an execution module and using salt-call with the -m flag to try out my code
13:25 babilen peloponnesian: Custom execution modules can (and should) be placed in _modules in file_roots and will be loaded from there.
13:26 peloponnesian i can place my modules there. i thought the -m flag was convenient for development though
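
For anyone following along: a custom execution module is just a Python file. A minimal hypothetical example, saved as _modules/mymod.py under file_roots and synced with saltutil.sync_modules (path and names are invented):

    # /srv/salt/_modules/mymod.py (hypothetical path and module name)

    def hello(name='world'):
        '''
        Return a greeting.

        CLI Example:

            salt '*' mymod.hello name=salt
        '''
        return 'Hello, {0}!'.format(name)

During development the -m flag peloponnesian mentions lets salt-call load the module from an arbitrary directory before it lands in _modules.
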
13:35 hasues joined #salt
13:38 bhosmer_ joined #salt
13:38 to_json joined #salt
13:42 mgw joined #salt
13:42 ramishra joined #salt
13:44 ndrei joined #salt
13:44 rypeck joined #salt
13:45 cheus joined #salt
13:46 mapu joined #salt
13:46 peloponnesian what does the --local flag do if you are using salt-call? it seems redundant to me?
13:47 to_json1 joined #salt
13:47 eunuchsocket joined #salt
13:48 drlkf joined #salt
13:48 smcquay joined #salt
13:52 hybridpollo joined #salt
13:54 mndo joined #salt
13:55 micah_chatt joined #salt
13:56 kusams joined #salt
13:58 darrend joined #salt
13:59 dude051 joined #salt
13:59 micah_chatt_ joined #salt
13:59 bhosmer_ joined #salt
14:02 flyboy82 hey everyone. Is there by any chance a full list of available mine_functions and what they return? documentation is a bit scarce for saltmine. Thanks
14:04 viq peloponnesian: "don't try to contact master at all" - normally it would
14:05 peloponnesian i see, cool
14:06 kingel joined #salt
14:06 viq flyboy82: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.mine.html doesn't answer your question?
14:07 kaptk2 joined #salt
14:07 flyboy82 viq, thanks for the prompt answer, but I was looking for a list of all the mine_functions I can order my minions to run
14:08 viq what do you mean by "mine functions"?
14:08 flyboy82 something like network.ip_addrs[], or disk.usage
14:09 djstorm joined #salt
14:09 viq flyboy82: I believe here's the list ;) http://docs.saltstack.com/en/latest/ref/modules/all/index.html
14:10 viq flyboy82: as in, I think any output can be cached in mine
14:11 flyboy82 aaah, perfect! Thanks viq!
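
Concretely: any function listed under mine_functions in the minion config (or pillar) is run on a schedule and cached on the master, and mine.get reads it back. A small sketch of the read side through the Python API on the master; the targets are made-up:

    import salt.client

    local = salt.client.LocalClient()
    # Ask minions matching 'web*' to look up the mine data that all
    # minions ('*') cached for network.ip_addrs.
    ips = local.cmd('web*', 'mine.get', ['*', 'network.ip_addrs'])
    print(ips)
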
14:14 perfectsine joined #salt
14:15 kiwnix left #salt
14:15 zekoZeko joined #salt
14:16 ndrei joined #salt
14:21 ckao joined #salt
14:21 rallytime joined #salt
14:25 elfixit joined #salt
14:27 zekoZeko i'm building deb packages to test 2014.7.0, should i build from debelop or 2014.7 branch?
14:27 zekoZeko debelop=develop
14:29 timoguin zekoZeko: use the 2014.7 branch
14:30 _ale_ joined #salt
14:30 rawtaz is the PyObjects renderer something that will be quite common, you think? or is it going to be something just a few people use? obviously speculation, just kinda curious if going that route would mean one is totally in uncommon land (in which case it might be better to stay with the regular renderer instead)
14:31 timoguin rawtaz: hard to say, really. the syntax is nice, but all the documentation floating around is still primarily YAML
14:32 rawtaz how "big" of a problem do ppl think the templating in yaml is? i mean, do most ppl who use salt extensively move away from the default?
14:33 rawtaz (i think the jinja and yaml can be a bit odd to read)
14:33 rawtaz not that i have a problem reading it, it's just that it gets messy
14:33 ndrei joined #salt
14:33 timoguin yea i try really hard to keep jinja out of things as much as possible, so i'm only setting a few variables and a few for loops every now and then
14:34 VSpike joined #salt
14:34 bhosmer_ joined #salt
14:34 rawtaz right. but you use yaml and not some other renderer instead of jinja+yaml? and you get by with that?
14:34 timoguin and i also use descriptive names for my SLS IDs, like "checkout latest git repo" or "ensure service user"
14:34 rawtaz i imagine most of the logic is loops and if statements
14:34 timoguin I just use YAML.
14:34 rawtaz ok
14:34 timoguin I've gotten really used to it. And actually I tend to take handwritten notes in YAML format now.
14:35 rawtaz right, you mean you use descriptive names in combination with the 'name' and/or 'names' declaration
14:35 timoguin It was confusing to me at first
14:35 rawtaz aha
14:35 timoguin rawtaz: yea
14:35 rawtaz some markdown renderer would be cool. then one could have it as documentation too :-)
14:35 rawtaz or well, commonmark.org nowadays
14:37 hasues joined #salt
14:45 scryptic1 joined #salt
14:45 VictorLin joined #salt
14:46 felskrone joined #salt
14:47 mapu joined #salt
14:50 ndrei joined #salt
14:50 maboum_ joined #salt
14:51 bhosmer_ joined #salt
14:51 Sonny__ hello.. I have just deployed a new minion, but the status of my new minion is down.. How can I tell my minion to be UP
14:53 viq Sonny__: did you accept the key?
14:54 Sonny__ yes
14:54 Sonny__ it is under my accepted keys
14:55 SheetiS joined #salt
14:56 dccc joined #salt
14:59 smcquay joined #salt
15:01 Gareth w 1
15:01 Gareth er
15:02 ndrei joined #salt
15:04 Sonny__ anyone?
15:04 rawtaz patience :D
15:04 gladiatr joined #salt
15:05 eunuchsocket Sonny__: can you verify that the minion service is running on the minion?
15:05 viq also that both ports 4505 and 4506 are reachable from the minion
15:08 conan_the_destro joined #salt
15:08 dccc joined #salt
15:08 UtahDave joined #salt
15:10 anotherZero joined #salt
15:10 mpanetta Has anyone here tried wrapping salt.utils.event in an event loop like twisted?
15:11 gladiatr mpanetta, well, you don't really have to.  It's already running an event loop via zmq
15:11 mpanetta gladiatr: I want to interface it with some other twisted based code, but I am not sure how to...
15:12 gladiatr mpanetta, are you looking to have a bi-directional conversation with salt via your twisted code?
15:12 mpanetta yep
15:13 Sonny__ eunuchsocket: It was not running.. Weird, since I installed it using salt-cloud with digital ocean. salt-cloud uses the bootstrap script to install the minion.. And the key was in my accepted keys..
15:13 StDiluted joined #salt
15:13 mpanetta gladiatr: I don't think I will have an issue calling the salt client calls, I am more worried about the event calls...
15:14 gladiatr mpanetta, as in getting a response from salt?
15:14 anotherZero joined #salt
15:14 timoguin mpanetta: have you looked at the eventlisten script? tests/eventlisten.py
15:14 mpanetta gladiatr: I want to translate salt events to another event bus basically...
15:15 che-arne joined #salt
15:15 mpanetta timoguin: Briefly.  I wonder if I could wrap that easier...
15:15 micah_chatt_ joined #salt
15:15 timoguin also, checkout salt-eventsd. it listens to the event bus on the master and dumps everything into an external database
15:16 gladiatr mpanetta, gotcha.  You could hook into whatever returner your salt-master is running to get information on the results of salt calls.
15:16 mpanetta Hmm
15:16 scryptic1 joined #salt
15:18 lempa joined #salt
15:20 briner joined #salt
15:20 gladiatr mpanetta, I guess it depends on how much effort you want to put into it.  The mentioned scripts are good examples of how to hook into the salt event loop, so translating from one loop to another certainly sounds doable.  I just don't think there is an easy-mode way of doing that with salt's current offerings.
15:21 mpanetta gladiatr: Yeah I am not sure yet what the best course of action would be.
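
One way to bridge the two, sketched under the assumption of a 2014-era salt.utils.event API and the default master sock_dir: poll the Salt event bus from a Twisted LoopingCall and hand each event to whatever forwards it onto the other bus (the forward function below is a placeholder):

    import salt.utils.event
    from twisted.internet import reactor, task

    event_bus = salt.utils.event.MasterEvent('/var/run/salt/master')

    def forward(event):
        # Placeholder: translate the Salt event onto the other bus here.
        print('salt event:', event)

    def poll():
        # Short wait so the reactor isn't blocked for long; get_event
        # returns None when no event arrived within the wait window.
        data = event_bus.get_event(wait=0.01, full=True)
        if data is not None:
            forward(data)

    task.LoopingCall(poll).start(0.1)  # poll ten times a second
    reactor.run()

A production version would move the blocking get_event call into a worker thread (e.g. deferToThread) rather than briefly blocking the reactor.
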
15:23 conan_the_destro joined #salt
15:25 N-Mi joined #salt
15:26 jergerber joined #salt
15:34 rihannon joined #salt
15:34 rihannon joined #salt
15:35 wendall911 joined #salt
15:39 XenophonF joined #salt
15:39 jchen joined #salt
15:40 jensnockert joined #salt
15:42 tristianc joined #salt
15:44 JPaul joined #salt
15:44 elfixit joined #salt
15:44 tristianc When would I use the orchestrate runner to deploy configuration instead of using just salt.highstate?
15:45 TheThing joined #salt
15:45 tristianc I'm having trouble distinguishing their use cases
15:45 jaimed joined #salt
15:46 ajolo joined #salt
15:46 StDiluted I am having a hell of a time figuring out the missing piece on AWS with salt, autoscale, and deploying apps
15:47 UtahDave tristianc: When you want to have dependencies between minions.   A highstate only concerns itself with one minion
15:47 iggy tristianc: when doing cross-host stuff mostly
15:47 tligda joined #salt
15:47 ramishra joined #salt
15:47 linjan joined #salt
15:48 micah_chatt joined #salt
15:51 bezeee joined #salt
15:51 JPaul does anyone know why salt would return that a package install failed, even though it did not?
15:51 ndrei joined #salt
15:52 JPaul I'm having to install another version of libssl
15:52 JPaul it will install it, but it seems that when it checks to make sure it's installed it can't find it for some reason
15:53 JPaul ran the minion in debug and saw it was getting the two libssl packages back when running the dpkg query (it's a debian wheezy system):
15:53 rihannon left #salt
15:53 JPaul install ok installed libssl0.9.8 0.9.8o-4squeeze13 amd64
15:53 JPaul install ok installed libssl1.0.0 1.0.1e-2+deb7u12 amd64
15:54 JPaul libssl0.9.8 is the one I am having it install from a .deb package stored on the salt master
15:55 saurabhs joined #salt
15:55 elfixit joined #salt
16:02 gokr joined #salt
16:05 warmwaffles joined #salt
16:05 warmwaffles left #salt
16:07 JPaul I figured it out, I didn't realize you need to have the package name listed properly in the state file when specifying the "sources:" entry
16:07 KyleG joined #salt
16:07 KyleG joined #salt
16:08 girb joined #salt
16:08 TyrfingMjolnir joined #salt
16:10 VictorLin joined #salt
16:10 cpowell joined #salt
16:10 sherbs_tee joined #salt
16:11 UtahDave arnoldB: you there?
16:12 possibilities joined #salt
16:13 wnkz__ joined #salt
16:13 aparsons joined #salt
16:15 troyready joined #salt
16:15 UtahDave arnoldB: ping!
16:18 jslatts joined #salt
16:21 spookah joined #salt
16:22 quickdry21 joined #salt
16:23 TheThing joined #salt
16:24 v0rtex joined #salt
16:27 JPaul joined #salt
16:27 gmcwhistler joined #salt
16:28 Zachary_DuBois joined #salt
16:29 dalexander joined #salt
16:30 ghanima joined #salt
16:30 ghanima hello all
16:30 ghanima question I remember a while back
16:30 ghanima that is the ability to add grains from puppet facts
16:30 ghanima but for some reason this wasn't working properly
16:30 ghanima is anyone aware of this issue
16:32 aparsons_ joined #salt
16:33 wendall911 joined #salt
16:39 eunuchsocket is there a way to ignore files/folders in file.recurse?  I'd like to manage a directory but ignore .svn, .git, etc
16:41 timoguin eunuchsocket: you can pass exclude_pat to ignore a pattern, glob or regex
16:41 elfixit joined #salt
16:42 timoguin http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.recurse
16:43 eunuchsocket timoguin: thanks!  not sure why I didn't see that
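
For the archive: exclude_pat takes a glob by default, or a regex with an E@ prefix. A rough sketch of applying it ad hoc through state.single from the master's Python API (target, paths, and pattern are made-up examples):

    import salt.client

    local = salt.client.LocalClient()
    # One-off file.recurse run; the 'E@' prefix marks the pattern as a
    # regex, so .git and .svn entries are skipped during the recurse.
    result = local.cmd(
        'web*',
        'state.single',
        [
            'file.recurse',
            'name=/srv/app',
            'source=salt://app',
            r'exclude_pat=E@\.(git|svn)',
        ],
    )
    print(result)

In an SLS file the same thing is just an exclude_pat argument on the file.recurse state.
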
16:43 girb joined #salt
16:46 thayne joined #salt
16:50 capricorn_1 joined #salt
16:52 chrisjones joined #salt
16:54 ndrei joined #salt
17:00 Gareth morning morning
17:02 aparsons joined #salt
17:03 mindKMST joined #salt
17:04 alanpearce_ joined #salt
17:05 alanpear_ joined #salt
17:07 ianau74 joined #salt
17:09 ianau74 is there an issue with windows salt-minions that keep disconnecting?  it seems if i go away for a while and do a test.ping, the windows minions don't respond
17:11 ndrei joined #salt
17:11 scryptic1 ianau74 I've noticed this as well. If I call salt-run manage.status multiple times, more minions show 'up' over a few runs
17:12 ianau74 yeah, if i keep doing test.ping multiple times, then the windows minion responds after a while
17:13 jaimed joined #salt
17:13 Ryan_Lane joined #salt
17:14 thehaven joined #salt
17:15 patrek joined #salt
17:15 mapu joined #salt
17:16 scalability-junk so how does docker come in. should I just write my salt scripts as usual and if I think it's better to run the machines in a container that's good, or should I actually build my containers with salt (aka bootstrap) and then distribute new containers with the help of salt and orchestrate bindings between them with salt? Or should I just die in a fire cause of
17:16 scalability-junk too many choices? :D
17:18 Ryan_Lane scalability-junk: we build the containers with salt
17:18 schristensen joined #salt
17:18 Ryan_Lane and deploy them to a private registry
17:18 ericof joined #salt
17:19 rlarkin joined #salt
17:19 Ryan_Lane I'd probably have more salt integration if we were using ubuntu for the docker hosts, but we're using coreos
17:21 scalability-junk Ryan_Lane: alright so basically you have salt scripts to bootstrap a specific image and then distribute that image, when needed.
17:21 scalability-junk coreos doesn't support running salt?
17:21 Ryan_Lane I build the image with a highstate
17:21 Ryan_Lane coreos doesn't support anything, basically
17:21 Ryan_Lane it can only run go and bash
17:21 scalability-junk Ah kk
17:22 scalability-junk Mhh I am still not convinced that containers solve more than versioning and less overhead.
17:22 scalability-junk The only interesting things I found related to coreos were etcd as distributed config storage and fleet as finally some distributed orchestration tool.
17:22 Ryan_Lane it allows us to run all of our services in vagrant
17:23 scalability-junk Ryan_Lane: for development or production too?
17:23 desertigloo joined #salt
17:25 ajolo joined #salt
17:27 scalability-junk Is there anything more automated for multi-server infrastructures than using reactor and custom states? With salt everything seems to involve reinventing the wheel at least a bit.
17:27 chrisjones joined #salt
17:27 Ryan_Lane well at this point we can't run every service at the same time, but we can run enough to run services with dependencies together
17:27 Ryan_Lane development
17:27 Ryan_Lane scalability-junk: multi-server in what way?
17:28 Ryan_Lane if you're used to exported resources in puppet, there's salt-mine
17:28 Ryan_Lane I don't like having dependencies between my servers, though
17:28 Ryan_Lane there's also external pillars
17:29 jensnockert joined #salt
17:29 scalability-junk Ryan_Lane: I mean something like: my wordpress state needs mysql and memcache and they should "automatically" bind together
17:29 Ryan_Lane you could use a returner + external pillars to work like salt-mine, if you wanted more control
17:29 Ryan_Lane there's a number of options there
17:29 Ryan_Lane you could use salt-mine
17:29 Ryan_Lane you could do reactors + states
17:29 scalability-junk Which sort of introduces a SPOF
17:30 Ryan_Lane you could make peer calls between the servers
17:30 Ryan_Lane all of those have SPOF
17:30 Ryan_Lane you could use an external pillar + a returner to some service that's distributed (like etcd)
17:30 cpowell joined #salt
17:30 n1ck3 joined #salt
17:30 Ryan_Lane I think using something like etcd or zookeeper is the best option, personally
17:31 scalability-junk but wouldn't the whole config go through pillars and ultimately be SPOF again?
17:31 Ryan_Lane since that's becoming somewhat standard for distributed configuration
17:31 scalability-junk Or would you use etcd and zookeeper as config management aka IPs and general service discovery and then use salt to manage it?
17:31 Ryan_Lane it doesn't for me, but I run masterless
17:32 Ryan_Lane scalability-junk: yeah, that
17:32 vejdmn joined #salt
17:32 Ryan_Lane I use file_client: local and deploy my salt code to all systems
17:32 scalability-junk Mhh Ok so basically I configure all states/services in salt, without them knowing about their required services from salt, but tell them to just ask etcd for example.
17:32 Ryan_Lane yes
17:33 Ryan_Lane in the current RC etcd is natively supported
17:33 scalability-junk In what sense? That you can call the etcd module from local states and therefore retrieve configs during a salt run?
17:34 scalability-junk so changes would only be propagated when pulled instead of pushed... so longer reaction time than with reactor for example
17:34 scalability-junk and when using etcd for config retrieval on state runs it only removes the SPOF when salt is run locally.
17:35 scalability-junk Which on the other hand would then need to be rescheduled via some other way.
17:35 scalability-junk Man I hate choice and pro/cons of distributed vs. centralized
17:36 ndrei joined #salt
17:37 Ryan_Lane scalability-junk: heh
17:38 Ryan_Lane well, you could fire an event when you modify something in etcd
17:38 Ryan_Lane and have reactor trigger runs on other nodes
17:38 Ryan_Lane it would speed up the process
17:38 pduersteler joined #salt
17:39 scalability-junk ok so basically use reactor to speed it up, but use etcd and a pull/cron/wtf based fallback so it gets done anyway.
17:39 Ryan_Lane yep
17:39 scalability-junk Ryan_Lane: sounds good. Will probably start with etcd usage and think about the reactor integration afterwards.
17:39 Ryan_Lane it's best to always treat remote execution like it'll always fail
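
The push half of that pattern might look like the following: write the new value to etcd, then fire a Salt event so a reactor can kick off runs immediately, leaving the scheduled highstate as the catch-all. This assumes the third-party python-etcd client; the key, value, and tag are invented:

    import etcd  # third-party python-etcd client (assumption)
    import salt.client

    # 1. Update the distributed config store (4001 was etcd's old
    #    default client port).
    etcd.Client(host='127.0.0.1', port=4001).write('/myapp/db/host', '10.0.0.5')

    # 2. Nudge the master so a reactor can re-run affected states right
    #    away; the scheduled highstate still catches it if this is lost.
    caller = salt.client.Caller('/etc/salt/minion')
    caller.sminion.functions['event.fire_master'](
        {'key': '/myapp/db/host'},
        'myco/etcd/updated',
    )
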
17:39 scalability-junk So is there any possibilty to inject something into minions?
17:40 Ryan_Lane inject in which way?
17:40 scalability-junk It would be awesome to tell the monitoring state to inject its agent into web* for example instead of adding it to the base state of web...
17:40 scalability-junk yeah probably not relevant, better to add it to the base and be done with it.
17:41 scalability-junk Better not force something, when not needed.
17:41 oz_akan joined #salt
17:41 aparsons joined #salt
17:42 Ryan_Lane scalability-junk: if it helps, I've written some blog posts about my process: http://ryandlane.com/blog/2014/08/26/saltstack-masterless-bootstrapping/
17:42 Ryan_Lane as well as others
17:42 scalability-junk Still would be cool to just tell some backup or logging tool to just inject itself into every minion available :P
17:42 kermit joined #salt
17:42 Ryan_Lane I have two repos
17:42 Ryan_Lane well, I have a base repo
17:42 Ryan_Lane and a ton of service repos
17:42 tnachen joined #salt
17:42 scalability-junk great thanks. Yeah I was reading up on a few different methods as finally I will have the priority to revamp the infrastructure
17:43 aparsons_ joined #salt
17:43 Ryan_Lane I put that kind of stuff into base, since it needs to exist on every service
17:43 tnachen hi all, I'm trying to launch instances with salt-cloud and run commands on all of them
17:43 tnachen I can provision based on cloud.profiles
17:43 pduersteler joined #salt
17:43 tnachen but the deploy scripts always fail
17:43 scalability-junk Ryan_Lane: yeah will probably use a similar idea. was writing a git-annex state to use for files/uploads etc. but couldn't finish due to shifted priorities
17:43 tnachen and I tried manually running the minion on the provisioned node, I keep getting public key rejected by master
17:43 scalability-junk thanks for the discussion.
17:44 Ryan_Lane scalability-junk: yw
17:46 tnachen anyone know how to get this to work?
17:48 Ryan_Lane manfred: ^^
17:48 Ryan_Lane he may
17:48 StDiluted ah that conversation was relevant to my interestes
17:48 StDiluted -e
17:48 StDiluted scalability-junk, I am doing something similar
17:48 StDiluted havent figured it out yet
17:48 totte joined #salt
17:49 StDiluted etcd is native in the rc?
17:49 StDiluted that’s good to know
17:49 StDiluted as a pillar?
17:50 Ryan_Lane StDiluted: execution module and pillar
17:51 StDiluted hm. man, I just dont know what the best way to proceed is
17:51 StDiluted i feel like there’s a missing piece here somewhere but I can’t put my finger on it
17:51 murrdoc joined #salt
17:52 scalability-junk StDiluted: automatic Automation is missing ;)
17:53 scalability-junk StDiluted: so you are overengineering your infrastructure too?
17:55 pduersteler Hi all. I'm playing around with 'extend' to add additional files only when needed. This works with one file, but I'm wondering how I can add multiple files to 'extend'? https://gist.github.com/pduersteler/ed433e5c7c7bf3bafe81
17:55 StDiluted scalability-junk: I’m not sure if I am or not. We have a web app. We deploy regular updates and features to it. I want it to autoscale.
17:55 StDiluted so my options are
17:55 scalability-junk StDiluted: so basically the idea is to have config management distributed with etcd, run Salt masterless from time to time, and have a salt master for the usual stuff.
17:56 StDiluted bake a new AMI every time we deploy to production
17:56 StDiluted or figure out how to get autoscale to trigger a deploy
17:56 scalability-junk If the master fails I still have the possibility to bring up nodes via salt locally and with etcd it should get added to the system.
17:57 scalability-junk StDiluted: AMIs can be nice too. You have actual versioning ;)
17:57 scryptic1 To anyone that has written custom salt execution modules: how do you go about handling python dependencies used in the module? I'd like to not pip install globally on every node.
17:57 StDiluted yeah, I know, AMIs have their pros for sure
17:57 ndrei joined #salt
17:58 StDiluted or, do i have the node come up, get provisioned with salt, and then somehow that can trigger a capistrano deploy?
17:58 StDiluted the other issue is that once there are multiple backend systems in play, maybe brought up by autoscale
17:58 StDiluted if we update our app
17:58 StDiluted i want to deploy to all of them
17:58 StDiluted so I am thinking capistrano is the solution there
17:59 StDiluted so many moving parts
18:00 halfss joined #salt
18:00 murrdoc chitown:  did u use an enc
18:00 scalability-junk StDiluted: probably salt reactor is best. To trigger new state runs on the back end systems when your app updated.
18:00 scalability-junk StDiluted: but yeah usually you have 10 tools and each has their own way. With salt you additionally have 20ways.
18:03 viq joined #salt
18:03 SheetiS1 joined #salt
18:06 kusams joined #salt
18:06 vejdmn joined #salt
18:08 ndrei joined #salt
18:09 skyler_ Unless isn't working in modules in hydrogen. Is this normal?
18:10 timoguin skyler_: in modules? you mean in SLS files?
18:11 murrdoc joined #salt
18:12 jalbretsen joined #salt
18:12 girb joined #salt
18:13 murrdoc chitown:  good presentation
18:13 skyler_ timoguin: Ah, I should have been more clear. In sls files using module.run. Unless works with cmd.run, but not module.run.
18:13 P0bailey joined #salt
18:13 P0bailey joined #salt
18:14 timoguin skyler_: I'm pretty sure that's not in Hydrogen at all. https://github.com/saltstack/salt/issues/5050
18:15 timoguin yea just check my setup. not in hydrogen.
18:15 timoguin *checked
18:16 gngsk joined #salt
18:16 murrdoc so i have heard this in a decent amount of presentations now
18:17 murrdoc do a lot of people blacklist cmd.* in their salt setups
18:18 chitown murrdoc: thanks :)
18:18 NotreDev joined #salt
18:18 chitown imho, cmd.* is kinda dangerous in large envs
18:19 chitown in my new company, there are only 2 of us. so, i have not disabled it
18:19 chrisjones joined #salt
18:20 skyler_ timoguin: Thanks! Time to upgrade.
18:20 murrdoc do you try to do the same with the modules/states too  ?
18:20 timoguin murrdoc: I'm planning on disabling it once I write enough custom modules to wrap the functionality I need.
18:20 murrdoc avoid using cmd.run as much as possible that is
18:21 arknix joined #salt
18:21 timoguin I do try to avoid it if I can, but I've ran into a lot of instances where the module I wanted to use wasn't quite there and I didn't have time to improve it
18:23 pduersteler joined #salt
18:26 ajolo joined #salt
18:28 pduersteler I try to conditionally add configurations to a managed file, as the software (fail2ban) does not provide directory includes. Is there a way to check which packages are assigned to a minion in order to add the corresponding sections? I've thought about adding custom grains to each minion, but that seems a bit non-automatic.. ;)
18:29 fllr joined #salt
18:29 oz_akan joined #salt
18:30 TheThing joined #salt
18:31 jalaziz joined #salt
18:31 egalano joined #salt
18:31 kusams joined #salt
18:32 smcquay joined #salt
18:33 chitown timoguin: ftr: it was disabled from the command line, but you could still call cmd.run within modules
18:33 blarghmatey joined #salt
18:33 chitown so, yes, you would have to write a small module that did what you wanted
18:34 blarghmatey Is it possible to specify the api version that is used when calling a salt-cloud function?
18:34 jalaziz joined #salt
18:34 chitown but, then you had, at least, some chance to protect against "doing something stupid" (TM)
18:34 blarghmatey I am trying to use salt on a eucalyptus setup which uses a slightly older version of the EC2 api than libcloud is defaulting to.
18:34 vejdmn joined #salt
18:34 murrdoc 'prevent stupid' … sre job description
18:37 fllr Hey guys. Sometimes some of my machines drop out of my salt cluster. Like, we send commands, but they don't respond. Any reasons why?
18:38 kusams joined #salt
18:39 jcockhren fllr: older version of zeromq maybe?
18:39 fllr jcockhren: how do I check that?
18:40 blarghmatey Nevermind, figured it out. ec2_api_version parameter in the provider config.
18:41 mgw1 joined #salt
18:41 intellix joined #salt
18:41 jcockhren fllr: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.test.html#salt.modules.test.versions_report
18:42 sadbox left #salt
18:45 mgw joined #salt
18:45 chitown fllr: something else to keep in mind: they may not have "dropped out", rather they may not have responded within the timeout
18:45 fllr chitown: what do you mean dropped out?
18:45 kusams joined #salt
18:46 chitown 11:37 < fllr> Hey guys. Sometimes some of my machines drop out of my salt cluster. Like, we send
18:46 chitown they dont really "drop out"
18:46 chitown the timeout, iirc, is 10 secs
18:46 chitown so, if the minion doesnt repsond, the salt command wont show anything
18:47 chitown it may be a good idea to look at the job a few mins later
18:47 chitown you can view job info via the jobs runner
18:47 chitown http://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.jobs.html
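
The same lookup works from Python through the runner client; a short sketch (the jid is a placeholder):

    import salt.config
    import salt.runner

    opts = salt.config.master_config('/etc/salt/master')
    runner = salt.runner.RunnerClient(opts)

    # List recent jobs, then pull the full return of one that the CLI
    # timed out on (the jid below is a placeholder).
    print(runner.cmd('jobs.list_jobs', []))
    print(runner.cmd('jobs.lookup_jid', ['20140924120000000000']))
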
18:49 KennethWilke joined #salt
18:50 ndrei joined #salt
18:54 rlarkin oh man, I finally read the release notes for 2014.7.0
18:54 rlarkin blockbuster release
18:54 mndo joined #salt
18:56 babilen fllr: There is also: https://github.com/saltstack/salt/issues/15415
18:58 fllr chitown: Ok... so I just checked... We run a module (any module, it might even be a highstate, or sync_modules for example), module times out (or maybe takes too long to finish, we don't know yet), but importantly, we notice that those machines drop out of the list (saltutil.running). And then we go to that machine, the minion is dead, and in the logs it says the machine failed to sign in...
18:59 fllr babilen: That is one of the issue that we have. Not this one right now, but I was gonna ask about that later. lol
18:59 fllr babilen: what we usually do about that one, is run mine.update, and it wakes them up. lol
18:59 tnachen hi, anyone can hlep me with the "The Salt Master has rejected this minion's public key!" error?
19:00 babilen fllr: Read the bug report, it is fixed and also contains ways for handling it right now (in case you don't have the fix yet)
19:00 tnachen salt-call state.highstate -l debug
19:01 rlarkin tnachen: there should be an instruction to remove the offending key in the log
19:01 tnachen rlarkin: there is, but I don't want to do this for every instance in my cluster though
19:01 rlarkin perhaps you re-used an old name?
19:01 tnachen rlarkin: it happens everytime on every instance
19:02 tnachen rlarkin: hmm, possibly, I'm reusing the same cloud.map
19:02 tnachen but I delete the cluster before I start it again
19:02 baconbeckons joined #salt
19:02 rlarkin you mean salt-key -D ?
19:02 tnachen do I have to update the names every time ?
19:02 tnachen no I mean the names of the minions
19:03 rlarkin if you spin up a new instance but use an old name you have to make sure the old key is removed
19:03 tnachen i see, hmm
19:03 tnachen let me remove all the keys and try again
19:03 timoguin salt-cloud should take care of that if you use it to destroy the machines
19:03 timoguin removing the keys that is
19:03 vejdmn joined #salt
19:04 rlarkin isn't that ( removal ) a configuration option?
19:04 tnachen I did use salt-cloud -m cloud.map -d
19:04 tnachen to delete the nodes
19:04 tnachen not sure what's going on, the pub keys matches on master and minion
19:04 timoguin rlarkin: yea i think it's an option
19:04 timoguin is it removing the keys after you do that?
19:04 ravenac95 joined #salt
19:05 fllr babilen: it's telling me to run --asycn after every command. What does async do?
19:05 babilen fllr: Don't listen to the voices.
19:05 babilen (who/what is telling you that?)
19:06 fllr babilen: lol. that salt issue link you send me...
19:06 thayne joined #salt
19:07 babilen fllr: "Instead of waiting for the job to run on minions only print the jod id of the started execution and complete." (cf. salt(1))
19:08 kusams_ joined #salt
19:08 fllr Ah, I see... thanks!
19:09 babilen The fix seems to have been backported only to 2014.7
19:09 fllr I don't know... we're running that version now...
19:10 gq45uaethdj26jw7 joined #salt
19:10 fllr Any ideas on how to fix my original issue, though?
19:11 babilen What's the exact error message?
19:11 babilen (and I typically don't support .7)
19:11 girb1 joined #salt
19:13 fllr babilen: Lemme post the issue, real quick...
19:13 fllr babilen: http://pastie.org/9591635
19:13 tnachen hmm, weird, exact same error
19:13 tnachen after deleting all the keys
19:13 girb joined #salt
19:13 tnachen what else is the minion checking besides the pub key matches?
19:15 babilen fllr: That happens when you do what?
19:16 fllr babilen: We think it happens when we run any states, or any modules...
19:17 jensnockert joined #salt
19:18 peloponnesian joined #salt
19:19 jalaziz joined #salt
19:19 babilen fllr: You might want to file an issue
19:19 * babilen hasn't seen that in 2014.1
19:21 NotreDev joined #salt
19:25 kermit joined #salt
19:26 UtahDave fllr: can you provide the output of  salt-master --versions-report    ?     and also  salt 'minion' test.versions_report
19:26 KennethWilke joined #salt
19:26 kusams joined #salt
19:28 baconbeckons joined #salt
19:33 ndrei_ joined #salt
19:36 ipmb joined #salt
19:38 NotreDev joined #salt
19:44 fllr UtahDave: Yeah, hold on...
19:45 fllr UtahDave: https://gist.github.com/felipellrocha/b3e3911a3ee76889f3c0
19:45 KennethWilke joined #salt
19:51 renoirb Hi all!
19:53 pduersteler joined #salt
19:54 renoirb Hey guys, does anybody have a reference on the various equivalent YAML formats?  e.g.   hi: [{dude: value}]
19:55 renoirb I'm looking at http://www.yaml.org/start.html
19:55 Ryan_Lane renoirb: what do you mean?
19:55 renoirb but I don't see examples of the same data structure in different syntaxes
19:55 renoirb I want to rewrite the rsync_shares you wrote Ryan_Lane  :)
19:55 KennethWilke renoirb, i like wikipedia's thinger for that: http://en.wikipedia.org/wiki/YAML#Basic_components_of_YAML
19:55 Ryan_Lane ah
19:55 renoirb And I cannot remember the right syntax
19:56 Ryan_Lane we should really switch you guys to trebuchet
19:56 KennethWilke shows block and inline formats of stuff
19:56 renoirb KennethWilke, that’s a good one!
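
The wikipedia page's point, condensed: block style and flow style are just two spellings of the same structure, which is easy to confirm with PyYAML:

    import yaml  # PyYAML

    block_style = "hi:\n  - dude: value\n"
    flow_style = 'hi: [{dude: value}]'

    # Both spellings load to the identical data structure.
    assert yaml.safe_load(block_style) == yaml.safe_load(flow_style)
    print(yaml.safe_load(flow_style))  # {'hi': [{'dude': 'value'}]}
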
19:56 Ryan_Lane hm. you're not doing straight git deployments, though, are you?
19:56 renoirb Ryan_Lane, i’m working on such improvements at this time :)
19:56 Ryan_Lane you kind of need artifacts, right?
19:57 renoirb I just built a new gdnsd local dns and building nodes at this very moment
19:57 Ryan_Lane ah, cool
19:57 renoirb I’m about, also, to minimize the number of publicly available IP addresses
19:57 jslatts joined #salt
19:58 renoirb Ryan_Lane, what do I need to start using trebuchet?
19:58 Ryan_Lane heh
19:58 Ryan_Lane well...
19:58 Ryan_Lane you have a lot of the required components
19:58 renoirb I recall that trebuchet works with more than one salt master or something similar
19:58 Ryan_Lane you also need a redis server, which you should likely install on the salt master
19:58 Ryan_Lane and a deployment server
19:58 renoirb I'm replacing memcached with redis
19:58 Ryan_Lane which can also probably be the salt master
19:59 renoirb In fact when I upgraded MW to wmf/1.24wmf16, I had to.
19:59 Ryan_Lane really? that doesn't make sense
20:00 pduersteler When the docs state "in the master config file", I guess that means /etc/salt/master, or master.d/* respectively? Struggling with the git backend..
20:01 bhosmer_ joined #salt
20:01 halfss joined #salt
20:01 renoirb Ryan_Lane  I could not figure out the right settings changes
20:01 renoirb ... during a crisis.
20:01 Ryan_Lane ah
20:01 Ryan_Lane got ya
20:01 renoirb so I went to follow wmf puppet configs
20:01 Ryan_Lane well, they use redis for jobs
20:01 Ryan_Lane they still use memcache for parser cache
20:02 renoirb also. So I have a redis only for jobs, and redis for the page cache. That’ll do until I migrate everything to dreamcompute
20:02 Ryan_Lane there's no issue using redis for page cache too
20:02 eightyeight running 'salt 2014.1.4' on the master and 'salt 2014.1.10' on the minion, which is a centos 5.10 server, with python26-zmq from EPEL
20:02 Ryan_Lane it's simpler to just have either memcache or redis
20:02 saltymoli joined #salt
20:02 eightyeight however, i cannot 'state.highstate' it from the master, but i can 'salt-call state.highstate' from the minion
20:02 arnoldB joined #salt
20:02 Ryan_Lane eightyeight: in general your master should always be a higher version than your minion
20:03 eightyeight agreed
20:03 eightyeight but, i'm finding that all old python zmq libraries cannot highstate from the master, unless updated
20:03 eightyeight is this known?
20:04 renoirb If the master is at a lower version than a minion, eightyeight, it happens that the minion doesn't return. I guess because the way messages are sent over the wire changed and the master doesn't understand.
20:05 Ryan_Lane eightyeight: are they running the same version of 0mq?
20:05 renoirb This happened to me too. I had to make sure my master is upgraded, then it works again on the faulty minion
20:05 arnoldB joined #salt
20:05 Ryan_Lane ideally that shouldn't be a problem with point releases
20:05 Ryan_Lane and I'd imagine it's an incompatibility between some libraries.
20:05 Ryan_Lane but if not, it's the version mismatch
20:05 eightyeight renoirb: i would agree with that, but i have identical salt versions on master and minion, where this does not succeed
20:05 eightyeight and no, the zmq versions are very different:
20:06 Ryan_Lane that could be a problem if the version on the master is lower than the version on the minion
20:06 eightyeight '14.0.1-1build2' on the master and 'python26-zmq-2.1.9-3.el5' on the minion
20:06 Ryan_Lane eightyeight: can you test.ping it from the master?
20:06 eightyeight i'm seeing that 13.x.x or later needs to be installed, or the master can't highstate the minion
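One quick way to compare the library versions involved: salt-call's --versions-report prints the salt, python, and (py)zmq versions, and it runs locally, which matters here since the master can't reach the minion. For minions the master can reach, a cmd.run one-liner works too (the interpreter path is an assumption; on CentOS 5 the minion runs under python26):

    salt-call --versions-report
    salt '*' cmd.run "python -c 'import zmq; print zmq.zmq_version(), zmq.pyzmq_version()'"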
20:07 eightyeight no
20:07 Ryan_Lane install 13.x.x or above everywhere, then :)
20:07 arnoldB joined #salt
20:07 eightyeight i'd like to
20:07 eightyeight but centos 5 doesn't ship it
20:07 Ryan_Lane EPEL?
20:07 eightyeight yet, it ships the latest salt-minion
20:07 eightyeight yes
20:08 Ryan_Lane ah. that's annoying
20:08 Ryan_Lane UtahDave, basepi: ^^
20:08 Ryan_Lane eightyeight: you should add an issue in github for this
20:08 eightyeight i can get python-zmq to 13.x.x everywhere except centos 5 epel
20:08 eightyeight ok. so, it's not known, or expected then?
20:08 bmcorser joined #salt
20:08 Ryan_Lane I don't know, but it's worth reporting
20:08 eightyeight kk
20:08 Ryan_Lane (I'm just another normal user ;) )
20:09 eightyeight as mentioned, i can 'salt-call state.highstate' from the minion, as a work around
20:09 eightyeight although, that's not the way we've configured things here
20:12 dude051 joined #salt
20:13 eightyeight i'll wait to hear back from basepi / UtahDave before submitting an issue
20:14 kickerdog joined #salt
20:14 kickerdog Can anyone recommend a Windows Server AMI that works with salt-cloud out of the box?
20:15 timoguin kickerdog: there isn't one
20:15 KennethWilke kickerdog, i believe the salt engineers are struggling to solve for that as well
20:16 KennethWilke though (full transparency i'm a racker) it does work on openstack or rackspace cloud windows boxen
20:16 kickerdog good to know
20:18 masm joined #salt
20:20 kickerdog i'll just roll my own ami I guess and give it a shot
20:21 KennethWilke my windows skills are incomprehensibly weak, but i wish you luck!
20:21 kickerdog i'll report back
20:22 KennethWilke i set up a game server on windows about 2 months back, it took me like 20 minutes to figure out how to change my password :(
20:23 kickerdog ouch
20:23 renoirb Ryan_Lane, so, getting back to Trebuchet, let me see if i can get it installed as part of this maintenance
20:23 renoirb Ryan_Lane, just saw https://github.com/airbnb/trebuchet and it's not what you did :/
20:25 pduersteler Every time I add a gitfs_remotes entry, I get "Exception must be string or buffer, not dict occurred in file server update". Anyone have an idea about that? Trying authentication via ssh pubkey
20:27 UtahDave sorry, basepi and I were at lunch
20:36 KennethWilke how dare you!
20:36 KennethWilke no lunch!
20:42 eightyeight UtahDave: no worries
20:42 eightyeight UtahDave: let me know if i need to create an issue for this, or where to go from here
20:43 UtahDave what are the Salt versions you're using?
20:47 rawtaz guys, the link to GitHub is still wrong at http://docs.saltstack.com/en/latest/topics/releases/releasecandidate.html
20:49 UtahDave specifically which link are you referring to there?
20:49 rawtaz the first one named GitHub
20:49 rawtaz it goes to a relative url instead of an absolute github url
20:50 rawtaz see it? :)
20:50 eightyeight UtahDave: master is 2014.1.4
20:51 eightyeight UtahDave: in prod, all our minions are the same salt version
20:51 rawtaz obviously this is a major critical issue that will wreak havoc for everyone :)
20:51 gq45uaethdj26jw6 joined #salt
20:51 gq45uaethdj26jw6 left #salt
20:51 basepi eightyeight: We host RPMs for a reasonable version of ZMQ, if that's all you're looking for (didn't spend much time in the scrollback):  http://docs.saltstack.com/downloads/cent5/
20:51 kusams joined #salt
20:52 UtahDave rawtaz: would you open an issue on that? It has something to do with the restructured text thingy we're using for github urls
20:52 UtahDave "thingy" is an official python term, by the way
20:52 eightyeight basepi: hmm. ok.
20:52 eightyeight would be nice if the dependencies were in epel also
20:52 KennethWilke my goodness at: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6271
20:52 eightyeight whoever is in charge of that
20:52 KennethWilke fire up your salt masters!
20:53 basepi Right, we haven't been able to get the maintainer of the epel5 ZMQ version to update
20:53 basepi That's the issue.
20:53 eightyeight ok
20:53 rawtaz UtahDave: sure, if you want. the saltstack repo i take it?
20:53 eightyeight good to know
20:53 basepi Need to look into ways to take control of that.
20:53 UtahDave rawtaz: yeah, the docs are all in the main salt repo
20:53 rawtaz err, i mean the salt repo. ok cool
20:53 eightyeight that is a better workaround than 'salt-call state.highstate' from the minion, provided they'll install with minimal trouble
20:53 eightyeight :)
20:53 rawtaz ouch, 1600 issues :D
20:53 basepi hehe
20:54 basepi rawtaz: ya.....you guys just keep us too busy!  Luckily, we're hiring to try to alleviate that.  ;)_
20:54 basepi ;)*
20:55 kiorky joined #salt
20:55 rawtaz :)
20:55 rawtaz im just about to try installing the latest RC with homebrew to see if i can get going with this
20:55 ipmb_ joined #salt
20:56 delinquentme joined #salt
20:57 delinquentme when spinning up minions I'm seeing this:  'state': 'TERMINATED' ... in some of the JSON ... should I be seeing that on an instance-up event??
20:57 SheetiS1 KennethWilke: I just ran the following to make sure they were all patched successfully:   salt \* cmd.run 'env x='\''() { :;}; echo vulnerable'\'' bash -c "echo this is a test"'
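A sketch of pushing the patched bash out the same way, assuming the distro repos already carry the fix; refresh=True makes pkg.install update the package cache first:

    salt '*' pkg.install bash refresh=True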
20:57 eightyeight basepi: that worked. thx
20:57 KennethWilke ooh, very nice SheetiS1
20:57 arknix joined #salt
20:57 perfectsine joined #salt
20:57 basepi eightyeight: awesome, I'm glad
20:59 rawtaz hm, should probably just do `brew install --HEAD saltstack`, doesn't seem to be possible to install a specific tag
21:04 kballou joined #salt
21:05 glyf joined #salt
21:06 jensnockert joined #salt
21:06 kusams_ joined #salt
21:12 lionel joined #salt
21:21 kickerdog So for this salt-cloud + windows ami setup i'm trying to do, it appears that salt-cloud expects port 449 to be open; however, nothing on my windows server is using port 449. What service needs to be running on windows for 449 to be active?
21:23 rawtaz did anyone else have this issue when trying `brew install --HEAD saltstack`?  https://pastebin.mozilla.org/6607794
21:24 timoguin kickerdog: windows file/printer sharing, samba from the linux side
21:24 timoguin WinRS would be much better. Pretty sure that's being worked on with help from some of the MS guys.
21:27 glyf joined #salt
21:30 yomilk joined #salt
21:36 rawtaz does anyone have any clue when the rc2 might turn into a release?
21:36 yomilk joined #salt
21:36 rawtaz im having trouble installing HEAD via homebrew and honestly cant be arsed to spend a ton of time debugging that. figured maybe the release will work
21:39 iggy somebody said yesterday "when it's done"
21:39 iggy I doubt that answer will have changed today
21:40 rawtaz yeah that is always a given response. but there's always the gut feeling
21:40 iggy I doubt it's in the next couple of days based on test progression, etc.
21:40 rawtaz after all it's a second RC so it's not about estimating years :)
21:41 rawtaz mm
21:41 iggy you've obviously never heard of enlightenment...
21:41 rawtaz lol
21:41 iggy decades...
21:41 timoguin e19!
21:45 * kickerdog dances
21:45 kickerdog finally got salt-cloud to provision a windows ami
21:47 murrdoc noice!
21:49 ndrei joined #salt
21:59 n8n joined #salt
22:00 KennethWilke kickerdog, can you save the steps you took to build a working windows ami? if i recall correctly the salt team was having some issues with that
22:00 KennethWilke at least if we have the steps somewhere that'll help other people in the same situation
22:00 kickerdog Sure
22:01 N-Mi joined #salt
22:01 N-Mi joined #salt
22:02 halfss joined #salt
22:05 kickerdog In fact I'll make a little video right now
22:07 ndrei joined #salt
22:08 glyf joined #salt
22:12 TheThing joined #salt
22:14 oz_akan joined #salt
22:14 gzcwnk joined #salt
22:14 gzcwnk hi, where is the best place to ask for a minor feature?
22:14 jslatts joined #salt
22:14 gzcwnk improvement
22:14 kusams joined #salt
22:16 rawtaz maybe the google group if there is one?
22:17 egalano joined #salt
22:18 carlf joined #salt
22:19 carlf I would like to call a function which pulls some info from AWS in a jinja template. Is there a way to call a module from inside a jinja file?
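Inside any Salt-rendered Jinja template, execution modules are exposed through the salt dictionary. A minimal sketch; shelling out to the AWS CLI via cmd.run is just a stand-in, since whether a purpose-built AWS module covers the needed call depends on the Salt version:

    {# e.g. in a file.managed template with template: jinja #}
    os: {{ salt['grains.get']('os') }}
    instances: {{ salt['cmd.run']('aws ec2 describe-instances --region us-east-1') }}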
22:22 gzcwnk I got no reply, so yeah i posted to salt github, see if I get told off...doh...
22:23 steveoliver joined #salt
22:24 rawtaz :)
22:24 rawtaz how exciting!
22:24 kusams joined #salt
22:26 perfectsine joined #salt
22:26 oz_akan joined #salt
22:27 rawtaz in case anyone stumbles upon https://pastebin.mozilla.org/6607794 when doing `brew install --HEAD saltstack`, run `xcode-select --install` to fix it
22:28 gzcwnk exciting is this bash vunerability
22:28 gzcwnk critical
22:29 steve1 I saw someone hit my webserver a little while ago with this as a UserAgent, testing for BashBleed: http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html
22:29 rawtaz "salt 2014.7.0-673-g2edfe62 (Helium)" <-- is this rc2 or newer? it should be, as it's from HEAD
22:30 rawtaz steve1 :D
22:30 rawtaz fancy seeing you here
22:30 steve1 rawtaz: clearly we keep good company. :)
22:31 rawtaz mm :3
22:31 rawtaz steve1: cool. that scan makes you part of something big :-)
22:31 rawtaz too bad attackers can do that type of test to probe for vulnerables
22:31 rawtaz :(
22:32 aurynn left #salt
22:33 steve1 it's interesting to me that a year or so ago some researchers portscanned the entire IPv4 Internet in under an hour, and now tools are available to do the same for the hot-new-vulnerability, and with very little effort.
22:34 rawtaz you mean online services to do it?
22:34 murrdoc das crazy
22:34 rawtaz i would assume they used quite a few machines to do that scan, and/or the scan was pretty limited/specific?
22:35 egalano joined #salt
22:36 steve1 no, Masscan.  https://github.com/robertdavidgraham/masscan  they now say they can port-scan the entire IPv4 Internet in under 6 minutes.  AFAIK that's a single machine, just doing it asynchronously.
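For a sense of what that looks like in practice, a hedged sketch of a masscan invocation against a network you own (flags per the project README; internet-scale rates need the driver setup described there):

    masscan 10.0.0.0/8 -p80 --rate 100000 --banners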
22:36 smkelly joined #salt
22:37 rawtaz i wonder if services like cloudflare WAF are patched already so that users using them are protected
22:37 rawtaz steve1: hm ok. interesting
22:39 debian112 joined #salt
22:40 debian112 got a question for everyone
22:40 debian112 I am trying to run the following command: http://paste.debian.net/122930/
22:41 debian112 anyone testing the bash vulnerability
22:41 steve1 debian112: I used this to test: https://twitter.com/DanGarthwaite/status/514839706207813633
22:44 Outlander joined #salt
22:45 debian112 thanks
22:46 rawtaz gah
22:46 rawtaz bedtime and here i am patching servers
22:46 rawtaz but shouldnt complain. it's just a command or two. what if there were no patches at all :-)
22:47 murrdoc salt * cmd.run 'apt get upgrade —@#$kt'
22:47 murrdoc :D
22:48 rawtaz >_>
22:48 ndrei joined #salt
22:55 jensnockert joined #salt
22:57 bmcorser joined #salt
23:00 kermit joined #salt
23:00 polliard joined #salt
23:01 yomilk joined #salt
23:01 halfss joined #salt
23:02 polliard Anyone had trouble getting gitfs working on centos6? I installed GitPython and set the config to enable one backend, but when I run highstate it complains "No Top file or external nodes data matches found". I then remove the gitfs config and highstate runs just fine
23:05 Vye polliard: I've used it on 2014.1 with no problems. What branch is your topfile in?
23:05 polliard top.sls is in base but the state that is in git is in common
23:06 pdayton joined #salt
23:06 polliard ie /srv/salt/top.sls
23:06 polliard I should say I am on 2014.1.0
23:07 polliard I have tried with and without the option gitfs_provider: GitPython
23:07 ndrei joined #salt
23:08 dalexand_ joined #salt
23:08 Vye ok, so your topfile is not in git. Do you have /srv/salt configured in file_roots?
23:08 Vye and what is your fileserver_backend setting?
23:09 polliard I have both file_roots: (local files) and gitfs_remotes: (forked formulas from github)
23:09 smcquaid joined #salt
23:09 polliard top.sls is local
23:09 polliard file_roots:
23:09 polliard   base:
23:09 polliard     - /srv/salt
23:09 polliard   common:
23:09 polliard     - /srv/salt/common
23:09 polliard   trukoda:
23:09 polliard     - /srv/salt/trukoda
23:09 polliard     - /srv/formulas/polliard-bind
23:09 polliard     - /srv/formulas/postfix-formula
23:09 polliard gitfs_provider: GitPython
23:09 polliard fileserver_backend:
23:09 rawtaz ever heard of pastebin? :-)
23:09 polliard   - git
23:09 polliard gitfs_remotes:
23:10 polliard   - https://bitbucket.org/trukoda/saltstack-openssh-formula.git
23:10 polliard no, I haven't done IRC in 20 years
23:10 polliard lol sorry
23:10 rawtaz hehe. https://pastebin.mozilla.org/ is good
23:11 Vye polliard: add "roots" to your fileserver_backend. It never gets to file_roots because that is missing.
23:12 polliard config file wise that makes sense.
23:12 Vye The order matters too.
23:12 polliard I read that
23:12 polliard It will search them in order specified
23:13 Vye Did it work?
23:13 polliard Does the order in the config matter (ie. do I need to declare the gitfs remote before I add it to fileserver_backend)?
23:13 Vye I don't think so.
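To untangle the two kinds of ordering: the fileserver_backend list controls which backend is searched first when a file is requested (the first backend that has the file wins), while the position of the gitfs_remotes block within the config file doesn't matter, since top-level config keys are unordered. A sketch:

    fileserver_backend:
      - roots   # consulted first (file_roots, including the local top.sls)
      - git     # only consulted if roots doesn't have the requested file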
23:14 TyrfingMjolnir joined #salt
23:14 polliard That did not work.  Let me try the pastebin tool for just that section to make sure I am doing it right :)
23:15 polliard fileserver_backend:
23:15 polliard   - roots
23:15 polliard   - git
23:15 polliard humm
23:15 Vye Looks right. Did you restart the master?
23:15 polliard yeah
23:15 polliard let me double check... 14 hours of salt/servicenow sucks
23:17 polliard ok, so that worked sort of.  It can't find the source.  That is something I can debug.  Thanks for the help, it seems that the issue was exactly that I didn't have roots
23:18 polliard totally makes sense.  Thanks for the help
23:18 Vye polliard: Glad to hear it. yw.
23:21 gfa joined #salt
23:21 p2 joined #salt
23:21 dstokes is there anywhere i can see a digest of the changes btwn rc1 & rc2? 610 commits is a lot to go through..
23:23 perfectsine joined #salt
23:23 mosen joined #salt
23:26 rawtaz is there a way to ask salt where it thinks the config file should be (by default)?
23:27 gfa joined #salt
23:28 Vye rawtaz: Like salt-call -h?
23:29 rawtaz that sayd "Default: /etc/salt"
23:29 rawtaz says*
23:30 rawtaz i was reading the manual and it said that's usually the case
23:31 rawtaz thanks
23:34 sudarkoff joined #salt
23:35 Vye yw
23:35 rawtaz mm Saltfile <3
23:36 polliard Anyone seen this error on the master in 2014.1.10 with a gitfs backend?
23:36 polliard Exception md5() argument 1 must be string or read-only buffer, not dict occurred in file server update
23:36 polliard Is it because I'm using the GitPython backend?
23:37 polliard I found a python article but they are saying it has to do with unicode handling in the python backend
23:37 Vye polliard: What version of GitPython?
23:37 GnuLxUsr joined #salt
23:37 polliard GitPython-0.3.2-0.6.RC1.el6.noarch
23:37 Daemonik joined #salt
23:38 Daemonik Is there any reason the bootstrap script at bootstrap.saltstack.org can't be used to upgrade a salt-master from 2014.1.3 to 2014.1.11 ?
23:38 polliard was the only backend in CENTOS or EPEL repos
23:39 Vye polliard: I am also using the same version so it shouldn't be that. Can you paste your -l debug output somewhere (assuming it doesn't contain anything sensitive)?
23:39 polliard Vye: sure thing
23:40 polliard All, forgive me if I don't use the pastebin correctly, I am trying :)
23:41 dude051 joined #salt
23:41 bbradley joined #salt
23:44 polliard Vye: hopefully this link works :) https://pastebin.mozilla.org/6608973
23:45 debian112 anyone know how I can view pillars in different environments?
23:45 debian112 salt-call pillar.get server_gt saltenv='greentpa'
23:46 debian112 does not work
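A sketch of one approach, assuming a Salt release that supports the pillarenv minion option; whether pillar.get took an environment argument in the 2014-era releases is doubtful, so this pins the environment in config instead:

    # /etc/salt/minion (sketch; requires pillarenv support)
    pillarenv: greentpa

    # then:
    salt-call pillar.get server_gt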
23:47 jslatts joined #salt
23:47 Vye polliard: can you pastebin your gitfs_remotes?
23:47 polliard Vye: Yes and thank you for pointing me to bastebin
23:48 polliard s/bastebin/pastebin/
23:49 polliard Vye: https://pastebin.mozilla.org/6608994
23:50 Vye polliard: Your mountpoint option is not supported in 2014.1, that's something new in 2014.7.
23:50 polliard Vye: odd, because it couldn't find my gitfs state; I added that line and now highstate can find it, but the md5 error started to show up
23:51 rawtaz hmm
23:51 rawtaz cachedir and pki_dir. inconsistent ;)
23:51 polliard Vye/rawtaz: Scratch that, the master showed it "processing" the state from git but the minion didn't actually get the state pushed
23:52 Vye polliard: When you're reading the docs, it shows the latest by default (in this case 2014.7) so when you see "New in version X" you'll want to make sure you have at least that version.
23:52 Vye polliard: You can see what I'm talking about here: http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#per-remote-configuration-parameters
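The per-remote form that page documents (2014.7 and later) looks roughly like this; the repo URL and mountpoint value are placeholders:

    gitfs_remotes:
      - https://example.com/myrepo.git:
        - mountpoint: salt://myrepo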
23:52 Vye polliard: probably a different issue.
23:53 kusams joined #salt
23:53 polliard thanks.  sorry to waste your time with my newbie questions.  I didn't catch that.  I am having some conversations with C.R. about documentation sites that are great to emulate (ie. docs.splunk.com)
23:54 Vye polliard: no problem.
23:58 rawtaz is it not possible to write a relative path in the Saltfile's config_dir option?
23:58 rawtaz such as ~/salt/etc or even just  etc  or  ./etc  (that being relative to where the Saltfile is)
23:59 claytron joined #salt
23:59 mapu joined #salt
