
IRC log for #salt, 2014-07-17


All times shown according to UTC.

Time Nick Message
00:00 sectionme joined #salt
00:01 matthiaswahl joined #salt
00:02 VictorLin joined #salt
00:03 schimmy1 joined #salt
00:04 XenophonF wow that's pretty cool
00:04 azylman_ joined #salt
00:06 gzcwnk any way to determine whether a job is still running? I did a highstate an hour ago and it hasn't finished
00:08 XenophonF gzcwnk: http://docs.saltstack.com/en/latest/topics/jobs/index.html
00:08 XenophonF check there
00:08 XenophonF try running "salt-run jobs.active" on the minion
00:08 XenophonF or running "salt minion-id jobs.active" from the master
00:09 RandalSchwartz anything that says "new in 0.X.Y" is all before "2014.1.4", right?
00:09 XenophonF yes
00:09 gzcwnk I get {}
00:10 schimmy joined #salt
00:10 XenophonF gzcwnk: are you sure the job is still running?
00:10 XenophonF look in the proc directory or check the minion log file
00:10 XenophonF the salt proc directory, not the system /proc mount
00:10 gzcwnk well i can tell it hasn't finished and if i print the job I get {}
00:10 RandalSchwartz what about "Helium", is that newer than 2014?
00:11 TheThing lol
00:11 TheThing we should put into topic something about that
00:11 RandalSchwartz yeah - that's three naming conventions now
00:12 gzcwnk the log files are not currently dated, ie they have not been written to today
00:12 XenophonF Helium is the current dev branch
00:12 gzcwnk can you tell a state.highstate to print as it goes along?
00:13 XenophonF gzcwnk: salt -l debug minion-id state.highstate
00:13 XenophonF man salt
00:13 XenophonF there are other logging options
00:13 RandalSchwartz or salt-call on the minion itself
00:13 RandalSchwartz gets pretty verbose that way
00:13 RandalSchwartz salt-call state.highstate
00:13 oz_akan joined #salt
00:13 RandalSchwartz or salt.call state.sls some-state-file
00:14 RandalSchwartz oops... salt-call
00:14 gzcwnk i think its crashed
00:16 gzcwnk any idea what this means pls?  http://pastebin.com/hm0kFqve
00:22 kumarat9pm joined #salt
00:24 XenophonF yeah - you have an invalid state somewhere
00:24 XenophonF made any changes recently?
00:24 gzcwnk yeah i'm writing a state to meet SANS CIS security.
00:25 gzcwnk I found the error in the sysctl settings
00:25 gzcwnk I thought I tested it yesterday OK, but now it fails
00:26 Eureka_ gzcwnk: Run each of your states for that host manually in order until you hit the one that's blowing up.
00:26 Eureka_ gzcwnk: ex. "salt 'vuwunicobepwlt1.ods.vuw.ac.nz' state.sls STATENAME"
00:27 gzcwnk yeah that is what I am doing
00:27 jcsp joined #salt
00:28 john5223 joined #salt
00:28 gzcwnk I had a typo: a . where there should have been a _
00:28 Eureka_ =)
00:28 gzcwnk doh
00:28 aquinas joined #salt
00:28 gzcwnk picky little bugger  :P
00:30 gzcwnk i would have thought it would fail gracefully or at least be more helpful in telling me i'm an idiot...
00:33 sectionme joined #salt
00:33 andrej joined #salt
00:34 garthk joined #salt
00:35 RandalSchwartz almost all my jinja files now start with
00:35 RandalSchwartz ## this file is managed by salt at {{ source }} - do not edit!
00:35 XenophonF oh {{ source }} is how you do that?
00:36 pjs Oh nice..
00:36 pjs I wondered that myself ;)
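A minimal sketch of the header RandalSchwartz describes. The template path and its contents are hypothetical; `source` is the variable salt passes to Jinja templates rendered via file.managed, holding the template's salt:// path:

```jinja
{#- hypothetical template, e.g. salt://apache/files/httpd.conf.jinja,
    deployed via file.managed with `template: jinja`; salt makes the
    template's salt:// path available as `source` -#}
## this file is managed by salt at {{ source }} - do not edit!
```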
00:36 rojem joined #salt
00:39 pclermont joined #salt
00:42 RandalSchwartz yeah - I got help
00:43 acabrera joined #salt
00:50 luminous anyone interested in debugging ext_pillar and build scripts based on salt.client.Caller()?
00:50 luminous I have a build process which uses reclass as an ext_pillar. While running the initial steps of the build process, reclass is noted as unavailable. After a highstate it is seen as available
00:51 luminous I have been applying some of the base state.sls that would run in highstate, but this seems to do nothing
00:51 luminous it seems that iterating over a list of modules, retrieving a new salt-client each time, then running the module.function, is problematic
00:51 allanparsons has anyone installed + configured couchbase via salt?
00:51 luminous but I haven't been able to determine why
00:52 allanparsons wondering how i can dynamically resize my cachepool size
00:52 luminous saltutil.sync_all also does _nothing_ - while it works fine when used with salt-call --local
00:59 TyrfingMjolnir joined #salt
01:02 bluehawk joined #salt
01:02 matthiaswahl joined #salt
01:10 TyrfingMjolnir joined #salt
01:14 agliodbs joined #salt
01:19 azylman joined #salt
01:29 bhosmer joined #salt
01:31 hardwire joined #salt
01:34 sectionme joined #salt
01:37 yomilk joined #salt
01:39 retrospek joined #salt
01:40 talwai I created a new branch on my ext_pillar git repo and expected it to map to a new environment, like gitfs_remotes
01:41 talwai however it doesn't seem like the cache is updating with the new ref
01:41 talwai any ideas how to debug this?
01:41 talwai am i understanding the ext_pillar behavior correctly in that branches map to environments?
01:45 mgw joined #salt
01:47 ajolo joined #salt
01:48 Luke joined #salt
01:51 Luke joined #salt
01:51 garthk joined #salt
01:54 ajolo joined #salt
02:03 matthiaswahl joined #salt
02:08 sectionme joined #salt
02:13 mgw joined #salt
02:16 schimmy1 joined #salt
02:17 azylman joined #salt
02:26 dude051 joined #salt
02:28 otter768 joined #salt
02:29 dude051 joined #salt
02:36 rushm0r3 joined #salt
02:36 DaveQB joined #salt
02:37 vbabiy joined #salt
02:43 taterbase joined #salt
02:44 ajolo joined #salt
02:53 ramishra joined #salt
02:58 yomilk joined #salt
03:00 VictorLin joined #salt
03:04 sectionme joined #salt
03:05 luminous Specified ext_pillar interface reclass is unavailable
03:06 luminous why do you get this error, if you have the ext_pillar installed and configured?
03:07 Ryan_Lane joined #salt
03:17 mgw luminous, what version are you on?
03:18 mgw there's a regression (of sorts) in develop and 2014.7 that does that in some cases
03:21 mgw looking at the reclass adapter though, that regression would not affect it
03:23 dccc joined #salt
03:24 XenophonF does anyone have an example of a cmd.script they can share?
03:24 aw110f joined #salt
03:25 XenophonF i want to push a script and run it, but not directly
03:25 XenophonF i want to run a command like "env EDITOR=script vipw"
03:34 catpigger joined #salt
03:46 TheThing joined #salt
03:49 VictorLin joined #salt
03:51 ramteid joined #salt
03:52 pclermont joined #salt
04:00 octagonal joined #salt
04:01 octagonal Is it not possible to have multiple cmd.run's per section?
04:04 matthiaswahl joined #salt
04:07 Hell_Lap joined #salt
04:07 agronholm no
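agronholm's "no" refers to repeating the same function under one state ID; the usual workaround is one cmd.run per state ID, ordered with requisites if needed. A sketch, with hypothetical IDs and commands:

```yaml
# Each state ID may carry only one cmd.run; use several IDs instead.
first-command:
  cmd.run:
    - name: echo first

second-command:
  cmd.run:
    - name: echo second
    - require:
      - cmd: first-command
```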
04:11 kermit joined #salt
04:12 schimmy joined #salt
04:13 octagonal agronholm: next question.. how is my sed expression magically invalid when I run it through salt. ugh I'm tired
04:14 agronholm escaping issues probably
04:14 octagonal sed -ir "0,/^NSTTL/{s|^(NSTTL) .*|\\1 300|p};/^NSTTL/d"
04:14 retrospek escaping spaces/quotes 99% of the time
04:14 octagonal it's adding in the extra \
04:14 octagonal retrospek: We have to do this in state files now?
04:15 agronholm what does your cmd.run line look like
04:15 schimmy1 joined #salt
04:16 octagonal https://gist.github.com/anonymous/ecdea1f03154338120ad
04:16 retrospek try using ' instead of " to enclose it
04:16 octagonal retrospek: I already am, thanks
04:16 octagonal ;P
04:16 retrospek interesting
04:16 retrospek on the entire cmd.run or just the argument to sed?
04:16 octagonal retrospek: I can't nest ' inside sls as outer and inner quote easily
04:16 octagonal octagonal: check the gist?
04:16 octagonal erm
04:16 octagonal wow
04:16 retrospek ah
04:16 octagonal retrospek: check the gist :)
04:17 octagonal If I run that command, it doesn't have a problem. I run it through salt like that, and it has a problem. I'm not sure why.
04:17 bfwg joined #salt
04:17 agronholm what was the complaint about in the first place? didn't you even try it before this?
04:17 octagonal minion debug appears to have correct command, but somehow something weird is happening.
04:17 retrospek name: sed -ir '0,/^TTL/{s|^(TTL) .*|\1 300|p};/^TTL/d' /etc/wwwacct.conf
04:17 octagonal agronholm: Of course I have.
04:18 octagonal retrospek: mmmk testing.
04:18 agronholm octagonal: I'm confused -- so it's working now?
04:18 octagonal agronholm: No, it's not.
04:18 retrospek it works when he runs the command directly. through salt it quote escapes the backslash
04:18 octagonal ^^
04:19 retrospek it shouldn't touch a single quote expr
04:19 retrospek double has interpolation semantics
04:20 octagonal retrospek: that's why I put single on the outside
04:20 octagonal I assumed it would behave like bash and leave it alone
04:20 retrospek well are you passing it to shell or as is?
04:20 retrospek different semantics
04:20 octagonal it's a cmd.run, I assume it's passing it to a shell
04:20 octagonal A direct exec shouldn't affect it afaik, since the sed expression is valid
04:20 retrospek by default it'll be your shell grain yea
04:21 octagonal waiting for git cherrypicks to sync
04:21 retrospek been a while since i poked at that code but it didn't used to escape anything so you had to perform some escape-fu to get certain things working right. looks like somebody tried to implement escape logic but it doesn't implement nesting correctly
04:22 octagonal retrospek: Ironically, they didn't break any of my other state files.
04:23 octagonal ..fucking git.
04:23 octagonal *headdesk*
04:23 octagonal this is why I hate trying to work cross branch when I'm tired.
04:24 retrospek like trying to tapdance through barbed wire at times :)
04:24 octagonal yeah
04:24 octagonal and I just got snagged
04:24 octagonal It puts the lotion on its skin
04:25 retrospek if cmd.run still borks you may want to try file.replace or file.sed directly and see if it behaves more to your intent
04:26 retrospek since everything more or less piggybacks cmd.run there may be weird quotemeta assumptions going on
04:26 retrospek surprised there isn't a flag to disable/enable that behaviour though
04:26 octagonal damnit, I missed a quote.
04:26 ramishra joined #salt
04:26 octagonal the other almost identical command for NSTTL worked though
04:26 octagonal so this makes it behave.
04:27 octagonal We have a file.sed now?
04:27 octagonal I've been focused on other projects aside from our salt, it actually has been really stable the past few months
04:27 manfred octagonal: file.replace is recommended over file.sed
04:28 octagonal manfred: does it support sed syntax tho
04:28 manfred it supports regex
04:29 retrospek ah yes but which flavor heh
04:29 octagonal manfred: that expression is a lot more than just regex, it actually deletes every entry after the first
04:29 retrospek and sed expressions aren't just regex yea
04:29 octagonal manfred: i.e. it ensures both the value and the number of entries
04:29 octagonal I like that we have file.replace()
04:29 octagonal that's going to be very nifty
04:29 manfred oh, well, then use file.sed
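For the plain-substitution part, a file.replace sketch (path and pattern hypothetical; file.replace takes a Python regex, not sed syntax):

```yaml
# Rewrite TTL lines in place; by default file.replace changes
# every matching line (count: 0), unlike the sed expression above,
# which keeps exactly one TTL line and deletes the rest.
set-ttl:
  file.replace:
    - name: /etc/wwwacct.conf
    - pattern: '^TTL .*'
    - repl: 'TTL 300'
```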
04:30 ramishra_ joined #salt
04:30 octagonal manfred: Probably will in the future.
04:30 octagonal manfred: But if this works, THAT change just gets TODO'd in the dev tree.
04:31 vbabiy joined #salt
04:33 octagonal retrospek: Can you give me the location of that code?
04:37 octagonal retrospek: found it
04:39 octagonal proc = salt.utils.timed_subprocess.TimedProc(cmd, **kwargs)
04:40 yomilk joined #salt
04:40 octagonal subprocess.Popen then
04:42 octagonal SO lets see
04:44 matthiaswahl joined #salt
04:45 Lomithrani joined #salt
04:47 allanparsons joined #salt
04:50 octagonal sed -ir '0,/^TTL/{s|^(TTL) .*|\1 300|p};/^TTL/d' /etc/wwwacct.conf
04:50 octagonal hmmm
04:50 octagonal the plot thickens.
04:52 mosen joined #salt
04:52 octagonal retrospek: Hrm. I wonder.
05:01 octagonal manfred: is file.sed slated to be removed?
05:01 retrospek docs for .17 say use replace with sed deprecated yea
05:02 octagonal damnit
05:04 * octagonal tries harder.
05:05 octagonal retrospek: the parens have to be escaped.
05:05 vandemar the docs recommend a tool that comes with mysql rather than a tool that's installed just about everywhere?
05:05 octagonal retrospek: the parens and the backslashes have to be manually escaped, apparently, regardless of quoting. >.<
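What octagonal's expression is meant to do can be reproduced outside salt, which separates sed problems from salt's quoting layer. A sketch using GNU sed (the `0,/re/` address is a GNU extension); the demo file and its contents are hypothetical:

```shell
# Rewrite the first TTL line to "TTL 300" and drop any later TTL lines:
# the s///p prints the substituted line, then /^TTL/d suppresses every
# TTL line's auto-print (including the already-printed substituted one).
printf 'A 1\nTTL 900\nB 2\nTTL 100\n' > /tmp/wwwacct.demo
sed -r -i '0,/^TTL/{s|^(TTL) .*|\1 300|p};/^TTL/d' /tmp/wwwacct.demo
cat /tmp/wwwacct.demo
```

If the command behaves like this when run directly but misbehaves under cmd.run, the remaining difference is whatever quoting/escaping salt applies.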
05:08 ramishra joined #salt
05:10 octagonal retrospek: It's because these hosts are CentOS on python 2.6
05:10 octagonal retrospek: or
05:12 octagonal retrospek: No, it's not, it's.. *rummages more*
05:15 octagonal it's Popen itself.
05:15 octagonal okay, I'm done.
05:15 * octagonal moves on to next problem
05:15 octagonal thanks, you guys so rock.
05:16 anuvrat joined #salt
05:18 octagonal There's nothing quite as satisfying as fixing 30 boxes with one highstate, and knowing that any new ones won't ever have this problem :D
05:23 wt joined #salt
05:24 agliodbs joined #salt
05:37 kumarat9pm joined #salt
05:44 rushm0r3 joined #salt
05:46 arthabaska joined #salt
05:52 picker joined #salt
06:14 TyrfingMjolnir joined #salt
06:16 ramishra joined #salt
06:16 anuvrat joined #salt
06:17 n8n joined #salt
06:19 zain_ joined #salt
06:29 Hell_Fire_ joined #salt
06:29 victorpoluceno__ joined #salt
06:29 jeremyBass2 joined #salt
06:29 jnials_laptop joined #salt
06:30 badon_ joined #salt
06:30 perfectsine_ joined #salt
06:30 che-arne|2 joined #salt
06:31 jpaetzel_ joined #salt
06:31 clone1018_ joined #salt
06:31 patrek joined #salt
06:32 notbmatt joined #salt
06:32 _Flusher joined #salt
06:33 rnts_ joined #salt
06:33 hvn_ joined #salt
06:33 utahcon_ joined #salt
06:33 mortis_ joined #salt
06:33 xzarth_ joined #salt
06:33 Phibs_ joined #salt
06:33 zemm_ joined #salt
06:33 shano_ joined #salt
06:33 davroman1ak joined #salt
06:33 rjc_ joined #salt
06:33 cwright joined #salt
06:33 kuffs_ joined #salt
06:33 cruatta_ joined #salt
06:33 ahale_ joined #salt
06:33 lynxman_ joined #salt
06:33 Vye_ joined #salt
06:33 erjohnso_ joined #salt
06:33 cyrusdav- joined #salt
06:34 Ymage_ joined #salt
06:34 thehaven_ joined #salt
06:34 jalaziz_ joined #salt
06:34 ksalman joined #salt
06:34 user__ joined #salt
06:34 gmoro joined #salt
06:34 harkx joined #salt
06:34 nkuttler joined #salt
06:34 dober joined #salt
06:34 Ixan joined #salt
06:34 bdf_ joined #salt
06:34 hop joined #salt
06:34 JPaul joined #salt
06:34 alainv joined #salt
06:34 lynxman joined #salt
06:34 JoeHazzers joined #salt
06:35 slav0nic joined #salt
06:35 picker joined #salt
06:35 jrdx joined #salt
06:35 Hydrosine joined #salt
06:35 repl1can1 joined #salt
06:35 cwyse joined #salt
06:36 msciciel_ joined #salt
06:36 basepi joined #salt
06:36 ramishra joined #salt
06:36 jpaetzel joined #salt
06:36 djaykay joined #salt
06:37 matthew-parlette joined #salt
06:38 blast_hardcheese joined #salt
06:38 alff joined #salt
06:39 hardwire joined #salt
06:39 davromaniak joined #salt
06:44 hardwire joined #salt
06:45 ramishra joined #salt
06:46 Lomithrani joined #salt
06:46 ndrei joined #salt
06:52 luette joined #salt
06:53 oz_akan joined #salt
06:53 chiui joined #salt
06:53 stephanbuys joined #salt
06:54 bhosmer joined #salt
07:00 ramishra joined #salt
07:06 Damoun joined #salt
07:09 matthiaswahl joined #salt
07:12 oz_akan joined #salt
07:16 alanpearce joined #salt
07:21 vu joined #salt
07:33 linjan joined #salt
07:35 Hell_Fire joined #salt
07:36 nocturn joined #salt
07:41 Kenzor joined #salt
07:53 schimmy joined #salt
07:54 ramishra joined #salt
07:54 Damoun joined #salt
07:56 Lomithrani joined #salt
07:56 oz_akan joined #salt
07:57 schimmy joined #salt
07:58 ndrei joined #salt
07:59 schimmy1 joined #salt
08:00 intellix joined #salt
08:01 vu joined #salt
08:02 yomilk joined #salt
08:03 scott_w joined #salt
08:05 darkelda joined #salt
08:09 jhauser joined #salt
08:15 Flusher joined #salt
08:17 TheThing joined #salt
08:19 ml_1 joined #salt
08:44 TheThing joined #salt
08:47 intellix joined #salt
08:47 ramishra joined #salt
08:53 cDR_ joined #salt
08:54 vu joined #salt
08:57 oz_akan joined #salt
08:57 giantlock joined #salt
09:10 Lloyd_ Morning all. Anyone know if salt-cloud supports provisioning and attaching disks to GCE instances during initial instance provision?
09:12 Lloyd_ for example, is it possible to add options into the cloud.profile or cloud.provider files to specify that an additional disk should be provisioned (if not already exists) and attached
09:21 TheThing joined #salt
09:23 MrTango joined #salt
09:24 babilen Good Morning - I am using the reactor system to perform some actions when a new key is being accepted. In particular I am syncing a custom grain and a custom module to the new minion. I *also* use my custom module to send a newly generated certificate to the minion.
09:25 babilen This all works well apart from the fact that the new minion naturally does not have my custom module *before* sync_modules returns which causes a problem if the certificate is ready earlier which triggers a call to a not-yet-present module on the minion.
09:26 babilen Is there a way to essentially tell reactors to wait for another event or reaction to finish?
09:29 Lloyd_ Would the 'order' function not work in a reactor state? (sry im not familiar with reactor as of yet, not needed it)
09:29 ramishra joined #salt
09:31 luette joined #salt
09:31 vu joined #salt
09:34 babilen Lloyd_: Well, the reactions are triggered in the right order. I am doing something like: http://paste.debian.net/110209/ but the sync_module call did not return in time
09:35 babilen Sorry, http://paste.debian.net/110210/
09:35 luette1 joined #salt
09:37 Lloyd_ babilen:  have you tried using cmd.wait ?
09:37 babilen The cmd.pki.request_certificate call returns before the sync_modules has finished so it cannot call the pki.manage_crt function on the minion *yet*. I essentially want to say: "Wait for the sync_modules" call to finish before calling "cmd.pki.request_certificate"
09:38 babilen Reactors don't work that way ... I believe that I cannot do it like that and that I have to use the orchestrate system to "order" these things, which is a shame.
09:38 shorty_mu joined #salt
09:39 babilen Lloyd_: The reactors simply receive events and send off "reactions", which is done immediately. My problem is that I don't want to trigger manage_{key,crt} *every* time any sync_modules call returns, but only that particular one ...:-/
09:41 shorty_mu Hi all, after upgrading a Minion from some very old 0.16 Version to 2014.1.5 I have one working Salt highstate run and then it fails and I have to stop minion, remove all files in the cache dir and start the minion again. Any idea what might be wrong here?
09:45 babilen Not sure if upgrades from 0.16 to 2014.1.5 (why not .7?) are supported like that.
09:45 Lloyd_ babilen: module.wait with a watch might do it
09:45 shorty_mu I did a reboot after upgrade. And .7 because it's still in epel-testing.
09:48 Lloyd_ babilen: sorry I can't be much more help than that, i'm just poking around on Google looking at the reactor system, trying to get a better understanding of it.
09:48 babilen Sure, I am grateful for that, but I am simply under the impression that what I want to express cannot be expressed right now.
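The orchestrate system babilen mentions does run its steps in order, unlike independent reactor reactions. A sketch only, with hypothetical file names, state IDs, and target (babilen's actual paste is no longer available):

```yaml
# /srv/reactor/key_accepted.sls -- hand the event to the orchestrate
# runner instead of firing parallel reactions:
setup_new_minion:
  runner.state.orchestrate:
    - mods: orch.new_minion

# /srv/salt/orch/new_minion.sls -- orchestration steps, ordered
# via requisites ('new-minion-id' and pki.manage_crt are from the
# discussion above; pki is babilen's custom module):
sync_new_minion:
  salt.function:
    - name: saltutil.sync_modules
    - tgt: 'new-minion-id'

request_certificate:
  salt.function:
    - name: pki.manage_crt
    - tgt: 'new-minion-id'
    - require:
      - salt: sync_new_minion
```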
09:55 chiui joined #salt
09:58 oz_akan joined #salt
09:59 oz_akan_ joined #salt
10:03 Steamwells joined #salt
10:03 rawzone joined #salt
10:05 Steamwells I have a quick question regarding how others are managing state/pillar files across multiple syndic masters. Can anyone provide any tips or pointers for that?
10:17 Micromus_ thanks eliasp
10:23 TheThing joined #salt
10:24 vu joined #salt
10:31 Damoun joined #salt
10:33 laite^ joined #salt
10:34 chiui joined #salt
10:37 alff joined #salt
10:38 alanpearce joined #salt
10:43 babilen Steamwells: It is really hard to answer your question without further input
10:46 N-Mi joined #salt
10:46 N-Mi joined #salt
10:48 babilen (such as your actual question)
10:48 TyrfingMjolnir joined #salt
10:53 Steamwells hmmm, sorry. I am trying to get a generalized opinion on how to manage state files across different regional salt syndic masters. So lets say the top syndic master is in the cloud in EU, I have a syndic master minion in the US, Asia etc
10:54 Steamwells The states and pillars may be specific to each region. I am wondering how to manage the overall state/pillar files from this scenario
11:00 oz_akan joined #salt
11:08 poogles joined #salt
11:12 Lloyd_ Steamwells: from my understanding of syndics, the master doesn't replicate the file_roots down to the syndic machines by default... is your question relating to managing/copying said file_roots to said syndics?
11:13 Steamwells exactly Lloyd_
11:14 Steamwells My way would be to create states that manage files but this seems like the only use for the syndic master the way i see it
11:15 Lloyd_ Steamwells: In that case i would probably assign additional grain data to the syndics, and create a state using jinja to copy to the relevant file_roots to the correct region
11:15 viq Steamwells: we use gitfs, and since our external salt master can't connect to our internal git repo we push relevant repos out to it on changes with a post-update hook
11:16 Steamwells that makes sense
11:16 Steamwells i need to get my .sls files onto git ASAP
11:16 Lloyd_ they should already be there :p
11:17 viq That's why I'm using gitfs, so you simply _can't_ edit files without putting them in git if you want salt to see them :P
11:17 Steamwells ohh yes i know, im slapping myself right now, i setup gitlab internally and documented everything for OPS to use and I have already committed a crime :P
11:17 Steamwells ahh ok viq, ive never used gitfs
11:18 Steamwells ill take a look at the docs
11:18 Steamwells cheers for the suggestions guys
11:18 viq Steamwells: yeah, gitlab here as well
11:19 viq Steamwells: one thing that often bites people, when configuring gitfs remember that you need to use / as path separator, not :
11:19 viq so it's git+ssh://git@server/repo.git and not git+ssh://git@server:repo.git
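viq's gotcha, as it might appear in the master config (server and repo names hypothetical):

```yaml
# /etc/salt/master -- serve states straight from git
fileserver_backend:
  - git

gitfs_remotes:
  - git+ssh://git@server/repo.git   # '/' before the repo path
  # not git+ssh://git@server:repo.git -- scp-style ':' fails here
```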
11:20 ramishra joined #salt
11:21 Lloyd_ viq: random question, with those post commit/update hooks do you have a verification/testing step before the changes hit the master?
11:22 viq Lloyd_: no, I'm way too much of a n00b for that
11:22 Lloyd_ no reason for the question, just curious as to how many people actually test the states before pushing them out
11:22 bhosmer joined #salt
11:22 Lloyd_ haha fair enough
11:22 danielbachhuber joined #salt
11:23 viq Also, somewhat of an attitude from the team along the lines of "we don't have time to set up a complete test environment for everything; also, what do you mean I'd have to wait 15 minutes for the tests to run? I want to apply this now, and if it breaks I'll correct it"
11:23 Lloyd_ oh dear oh dear
11:23 viq Also, I wouldn't even really know how to start testing all the stuff
11:24 Lloyd_ Jenkins + kitchen-salt + serverspec is the way we are testing at present
11:24 viq Yeah, well, it's still progress - now we can easily manage users on our machines, and with one command check what updates are waiting across our fleet
11:24 diegows joined #salt
11:25 viq Well, I'd like to get there one day, but busy, pushback, etc
11:25 Lloyd_ we push to a 'staging' branch in git, jenkins auto-pulls those changes, fires up a test env via kitchen, runs a highstate checks for deployment errors, if deploy is good run serverspec tests to verify certain things.
11:26 ndrei joined #salt
11:26 Lloyd_ if all good push to master and roll it out
11:26 ramishra joined #salt
11:26 logix812 joined #salt
11:27 vu joined #salt
11:28 viq mhmm
11:28 viq Problem is, we have quite a bit of variety
11:29 bhosmer_ joined #salt
11:29 viq So preparing it all would be non-trivial to say the least
11:29 viq But yeah, anything would be a start
11:29 viq You automatically apply highstate across your machines?
11:29 Lloyd_ highstates are scheduled
11:29 viq mhmm
11:30 viq We don't do that (yet?) - we're in the process of slowly adding things to salt, and kinda don't have certainty yet that what we put in salt wouldn't break things on machines that have been running for years now
11:31 joshpaul joined #salt
11:31 Lloyd_ yeah I have the same problem, the old stuff runs on Puppet, we are migrating our stuff away from puppet and from our old host and moving it all to salt and GCE
11:32 Lloyd_ but we have salt running on some of the old stuff for some core things like user management etc, but yeah, trying to figure out what's going to break etc etc... ballache to say the least
11:33 viq aye
11:33 Lloyd_ and you can schedule highstate runs using pillar data :)
11:33 viq Out of curiousity, what made you move from puppet to salt?
11:33 viq yeah, I know :)
11:34 ekristen joined #salt
11:35 Lloyd_ We moved because the version of puppet that is on the old infrastructure is pre-historic, unsure of what updating it will break... plus it can't update SSL certs on 100+ systems in under 5 mins with just one command :p
11:36 viq hehe
11:36 Sypher joined #salt
11:36 ndrei joined #salt
11:38 Lloyd_ plus Salt has a lot of features that right now I am finding to be really damn useful, like salt-cloud.... one command fire up 10 new webservers and have them deployed in parallel.... online in less than 10 mins, yup yup useful
11:38 jrdx joined #salt
11:42 TheThing joined #salt
11:42 vu joined #salt
11:43 Damoun joined #salt
11:44 viq we have physical infrastructure, with some openvz containers and vmware (no vcenter), so I don't get that
11:44 stephanbuys joined #salt
11:45 gywang joined #salt
11:47 mechanicalduck joined #salt
11:48 babilen Hello all. I am using a reactor to generate certificates with a custom module and reactor definitions as in http://paste.debian.net/110210/ -- The problem is that the manage_{crt,key} reactions are being run before the sync_all returns. As the custom module is not yet available on the new minions I am not sure what to do now ... What are strategies to model dependencies between reactions?
11:51 MrTango joined #salt
11:53 vbabiy joined #salt
11:56 sectionme I'm trying to use the tomcat state to deploy a WAR from a Nexus server, but getting the following back "MinionError: HTTP error 401 reading http://username:password@example.com/nexus/service/local/artifact/maven/redirect?a=artifact_id&g=com.group_id&r=internal-releases&v=1.54.2&e=war: No permission -- see authorization schemes" in a stacktrace, do I need to specify the username and password in another way? I can't find anything
11:58 vu_ joined #salt
12:06 ramishra joined #salt
12:07 blarghmatey joined #salt
12:10 ndrei joined #salt
12:12 Lomithrani joined #salt
12:12 ramishra joined #salt
12:13 babilen Is there a maximum number of cores that salt supports? We just raised it to 12 cores and I run into http://paste.debian.net/110237/ when starting the master
12:13 DanGarthwaite joined #salt
12:14 18VAAD8ZC joined #salt
12:16 vu__ joined #salt
12:16 TheThing joined #salt
12:17 stephanbuys joined #salt
12:18 zain_ joined #salt
12:20 ramishra joined #salt
12:20 ndrei joined #salt
12:24 jas- joined #salt
12:26 babilen This seems to be a bug as the master works fine when running as "root" user, but fails if we configure it to run as "salt" user (user: salt)
12:27 stephanbuys joined #salt
12:30 giannello joined #salt
12:30 hobakill joined #salt
12:31 sectionme babilen: How are you trying to start it up? Looks more like a permission issue than anything to do with number of cores. Is the CWD and salt directories writable by the salt user?
12:37 pclermont joined #salt
12:37 zain_ joined #salt
12:41 miqui joined #salt
12:43 babilen sectionme: yeah, it is definitely a permission problem, but it started after raising the number of cores. I tried running http://paste.debian.net/110243/ as user salt and it fails with the same Error, even though "ls -ld /var/run/salt/master/" → "drwxrwx--- 2 salt salt 40 Jul 17 14:20 /var/run/salt/master/"
12:44 babilen Well, raising the cores triggered a master restart, but still
12:44 babilen This is on .7 btw
12:45 sectionme babilen: Make sure the CWD is writable by the user too; I googled the ZMQ error and it seems to be a common issue with upstart/systemd scripts.
12:47 babilen Yeah, those are the only errors I could find too. This is on Debian wheezy so neither upstart nor systemd, but let's see
12:48 TheThing joined #salt
12:49 stephanbuys_ joined #salt
12:51 stephanbuys__ joined #salt
12:52 babilen sectionme: Well, I run "salt-master -ldebug" from a directory that is owned by the salt user and run into the same problem.
12:53 babilen Maybe upgrading to .7 wasn't such a good idea after all :(
12:54 babilen (only did it because we installed a bunch of new minions that ended up with .7)
12:54 ghartz joined #salt
12:55 babilen Master was running before raising the cores though
12:56 bhosmer_ joined #salt
12:57 vu joined #salt
12:57 sectionme I'm running .7 on our masters and minions but not on 12 cores. No issues.
12:58 ccase joined #salt
12:59 mpanetta joined #salt
13:01 babilen yeah, I had to change the owner of /var/run/salt to salt:salt (even though that's tmpfs!!!) -- Looks as if salt is buggy and has to adjust the permissions of that directory when it starts.
13:01 topochan joined #salt
13:01 babilen I mean I would really prefer if the salt-master would run on boot and if I wouldn't have to change the permissions there manually every time
13:01 babilen ;)
13:05 vu joined #salt
13:06 vu_ joined #salt
13:07 lgsilva joined #salt
13:08 lgsilva Hello, how can I run a file.rename only if the destination target does not exists?
13:08 anuvrat joined #salt
13:10 lgsilva right now I'm using this but if I run highstate on a second time I will get an error saying the target file already exists:
13:10 Lloyd_ lgsilva: use a cmd.run with an '- unless:' statement
13:10 lgsilva sendmail:
13:10 lgsilva file:
13:10 lgsilva - rename
13:10 lgsilva - name: /usr/bin/sendmail.original
13:10 lgsilva - source: /usr/bin/sendmail
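Lloyd_'s cmd.run + unless suggestion, applied to the pasted state, might look like this sketch (same paths as lgsilva's paste; the mv command stands in for file.rename):

```yaml
# Only rename when the backup does not exist yet, so repeated
# highstate runs are a no-op instead of an error.
sendmail:
  cmd.run:
    - name: mv /usr/bin/sendmail /usr/bin/sendmail.original
    - unless: test -e /usr/bin/sendmail.original
```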
13:10 shorty_mu Igsilva: Maybe this one: https://stackoverflow.com/questions/22673022/check-file-exists-and-create-a-symlink. But I haven't tested it.
13:10 vu joined #salt
13:11 intellix joined #salt
13:12 vu joined #salt
13:13 Lloyd_ igsilva: you could also try using a file.exists state with the source set to the original file.... providing the original file doesn't change
13:13 Steamwells Sorry back from long lunch, cheers viq for the gotchas
13:16 elfixit joined #salt
13:16 Lloyd_ actually igsilva, it shouldn't matter if the source file changes as long as the state isn't set to managed, first run should copy from source to destination, then every subsequent run will say the file exists without change
13:16 lgsilva shorty_mu: that seems to be working. I will have to test more
13:16 lgsilva thanks Lloyd_, I will try that too
13:16 Lloyd_ np
13:18 \ask joined #salt
13:19 rojem joined #salt
13:19 shorty_mu @Igsilva: np
13:21 dude051 joined #salt
13:23 babilen Ah, christ ...
13:23 Lloyd_ yes?
13:23 Lloyd_ :p
13:23 mpanetta joined #salt
13:23 babilen None of my minions return anymore -- I see lots of "Authentication accepted from foo.minion.tld" but none of them responds to test.ping or *anything*
13:24 Lloyd_ update them ;)
13:24 Lloyd_ they are probably still running the old version since you updated the master
13:25 babilen Well, I have some minions with .7 and they are not returning either, and updating a few hundred minions without salt will be a nightmare
13:26 kumarat9pm left #salt
13:27 Lloyd_ we've had the same problem a few times when the master has been auto updated.... minions stop working. We just run - sudo salt '*' cmd.run ' apt-get update ; apt-get install salt-minion' and it fixes them all
13:28 alff joined #salt
13:28 Lloyd_ obviously change apt-get for whatever package manager is on your distro
13:30 babilen One could do that with "salt '*' pkg.install salt-minion" btw, but that assumes that you can still run something on your minions. But then, I can login to some minions manually, restart the salt-minion process and they still don't respond.
13:30 Lloyd_ pkg.install would work, providing the apt cache is up to date
13:30 babilen Even the minion running on master is not returning
13:30 kedo39 joined #salt
13:31 Lloyd_ how did you start the master since the permissions problem earlier?
13:31 babilen I started it manually with "salt-master -ldebug" and with the init script.
13:31 babilen Oh, nightmare!
13:32 acabrera joined #salt
13:32 Lloyd_ ok so how is it running at the moment, as salt-master -ldebug or via the init?
13:33 babilen Okay, the locally running minion is answering test.ping now. I now run it via init
13:34 Lloyd_ :)
13:35 babilen None of the others are though
13:36 Lloyd_ even the new ones?
13:37 babilen You mean the .7 (i.e. latest version, identical to master) ones? Yes, those don't return either
13:38 pdayton joined #salt
13:39 Lloyd_ try running a 'state.highstate test=True' on a minion see if the minion does anything. Also have a look in the logs on the minion machine, see if anything in there to indicate a problem
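Lloyd_'s suggestion, sketched as commands run locally on the minion (dry-run unless RUN=1; /var/log/salt/minion is the usual Debian log path, adjust for your distro):

```shell
# Dry-run wrapper: echo each command unless RUN=1 is set.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# Simulate the highstate on the minion with debug logging; test=True changes nothing.
run salt-call -l debug state.highstate test=True

# Then look for errors in the minion's own log.
run tail -n 100 /var/log/salt/minion
```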
13:41 ipmb joined #salt
13:41 mgw joined #salt
13:46 tkharju4 joined #salt
13:46 babilen I see "Authentication accepted from foo.minion" (salt-master -ldebug) and "SaltReqTimeoutError: Waited 60 seconds" + "Waiting for minion key to be accepted by the master." on the minion.
13:46 TheThing joined #salt
13:46 Lloyd_ has the system been rebooted ?
13:47 rushm0r3 joined #salt
13:47 alff joined #salt
13:49 Lloyd_ I recall you mentioning about salt not starting on boot earlier, so I'm presuming you rebooted, in which case check to see if a firewall has been started up with the system with default rules or some nonsense
13:50 babilen Lloyd_: Which system? We rebooted the master, yes
13:50 Lloyd_ yeah on the master
13:50 Lloyd_ check firewall
13:52 rbohn joined #salt
13:52 q1x joined #salt
13:52 oncallsucks joined #salt
13:52 babilen No firewall at all
13:53 twoflowers joined #salt
13:54 babilen https://groups.google.com/forum/#!topic/salt-users/Zq38NKjd1z4 recommends deleting /var/cache/salt and  /etc/salt/pki/minion/minion_master.pub on the minion, but that doesn't help either
13:54 stephanbuys joined #salt
13:54 XenophonF what does "onlyif execution failed" mean?
13:54 babilen It's simply stuck in "Waiting for minion key to be accepted by the master."
13:55 tkharju joined #salt
13:55 XenophonF it's just a simple cmd.run state
13:55 XenophonF http://paste.debian.net/110257/
13:55 babilen XenophonF: It means that you used an "onlyif" stanza in one of your cmd.run states and that the call to that command failed
13:56 q1x babilen: using ipv6?
13:57 babilen No, they are talking on a private ipv4 network (and yes, I can ping)
13:57 XenophonF babilen: That's what I thought, but I can run the onlyif command using "salt minion-id cmd.run 'test $(domainname)'" and it works
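The onlyif semantics babilen describes come down to an exit code: salt runs the onlyif command through the shell, and the state's own command fires only when it exits 0; a non-zero exit is reported as "onlyif execution failed". A local illustration, where the echo merely stands in for XenophonF's `test $(domainname)` check:

```shell
# onlyif gates a state on another command's exit status.
# `test -n "$(echo example.com)"` stands in for `test $(domainname)` here.
onlyif_cmd='test -n "$(echo example.com)"'

if sh -c "$onlyif_cmd"; then
  echo "onlyif passed: the state's command would run"
else
  echo "onlyif failed: the state is skipped"
fi
```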
13:57 victorpoluceno__ joined #salt
13:57 kivihtin joined #salt
13:57 babilen XenophonF: What does "works" mean here?
13:58 q1x babilen, ah ok. I had the same problem when I enabled ipv6 support but didn't adjust the listen address on the master
13:58 q1x took me a little while to figure that one out :)
13:58 babilen q1x: The weird thing is that I do see the incoming "Authentication request ..." on the master. It is just that the minions never seem to get a reply
13:58 aquinas joined #salt
13:59 Lloyd_ does 'salt-key -L' show all the keys as accepted on the master?
13:59 manfred it shows all the keys: accepted, waiting or declined
13:59 q1x babilen: I'm guessing they don't show up with salt-key -L?
13:59 Lloyd_ manfred, i was talking to babilen :p
13:59 zain_ joined #salt
14:00 manfred ¯\(°_o)/¯
14:00 Lloyd_ lol
14:00 rbohn Q: I have salt deployed at customer sites where I don't have control over their firewall.  To minimize firewall issues we run salt master on 80 and 443.  This mostly works except for a few sites where it appears that their firewall inspects packets on 80 and kills the connection.  salt-ssh is not an option.  Any suggestions besides using different ports and getting in touch with their sysadmins?
14:01 babilen q1x: I have them all in salt-key -L (they are old minions)
14:01 babilen Lloyd_: ^^
14:02 oz_akan joined #salt
14:02 babilen They just don't seem to be able to authenticate with the salt-master anymore and just run into that "SaltReqTimeoutError"
14:03 viq rbohn: tor hidden service ? ;)
14:03 babilen Seriously, I don't know how to debug this further. I restarted the master, stopped selected minions and removed /var/cache/salt on the minion and then started them again.
14:04 Lloyd_ babilen, if you remove one of the keys from the master, then restart the associated minion, does the key appear back in the list ?
14:05 vejdmn joined #salt
14:05 babilen They do, but I still can't communicate after accepting the key again
14:06 Lloyd_ ok so network connectivity would appear to be ok then
14:06 Lloyd_ so that would point me back to the version mismatch between master/minion
14:06 Lloyd_ but you said you have some minions running .7 and some running .6 ?
14:07 babilen I debug on one specific minion right now and that one uses the same, identical, versions of *everything*.
14:08 babilen It might be the case that some other minions run .5 still, but unless that has an effect on the master itself and the connection to the .7 minion I can't figure out what the problem is
14:10 Lloyd_ google'ing for this problem hasn't turned up trumps either, the only similar issue found had the workaround of 'restart the master' :-(
14:10 ajprog_laptop1 joined #salt
14:10 housl joined #salt
14:10 babilen This is quite a serious problem for us right now as it means that we cannot perform a number of tasks that were necessary.
14:11 jslatts joined #salt
14:11 TheThing_ joined #salt
14:12 zain_ joined #salt
14:13 rbohn viq:  lol!
14:17 CeBe joined #salt
14:17 babilen I can't quite think of anything else that I could do ... If anybody has seen SaltReqTimeoutError on .7 and knows a solution to this I would be grateful for *anything*
14:21 Lloyd_ babilen; random one, ip address of the system hasn't changed during the reboot has it?
14:21 Lloyd_ of the master that is
14:23 ekristen_ joined #salt
14:23 Lloyd_ babilen: also try stopping a minion, removing /etc/salt/pki/minion/minion_master.pub from the minion, and restart the minion. This shouldn't make any difference, but just to rule it out
14:24 babilen No, it has not
14:24 babilen I've done that
14:24 masterkorp simonmcc: hello, sorry to bother you again
14:24 masterkorp simonmcc: http://pastie.org/private/xf5vyiigf4zcmowutw7mg
14:24 ndrei joined #salt
14:24 babilen And I can remove keys from the master, restart the minion and accept the key again and the master can *still* not communicate with the minion
14:24 simonmcc masterkorp: looking now
14:25 masterkorp would this be a good usage of the dependancies feature
14:25 masterkorp ?
14:25 babilen We are now stopping all 300+ minions via a for loop over SSH and I plan to drop *all* accepted keys from the master and then add single minions (and delete /var/cache/salt ...)
14:26 babilen I am not convinced that that would help anything, but it is the only thing I can think of now.
14:26 oncallsu1ks joined #salt
14:26 babilen *sigh*
14:26 rojem joined #salt
14:28 simonmcc masterkorp: yes, but aren’t lines 13 & 14 in the wrong place?  don’t lines 15-18 belong under line 12?
14:28 masterkorp yeah sorry
14:28 masterkorp simonmcc: http://pastie.org/private/hcnteed7wwfy5ktvaath9q
14:29 babilen So, what seems to fail is the "ret_val = sreq.send('aes', self.crypticle.dumps(load)) except SaltReqTimeoutError:
14:29 simonmcc masterkorp: yeah, that looks right now
14:29 Lloyd_ sounds like a plan babilen, i would start the re-adding with a .7 minion, if that works then get that for loop going to update all the out of date minions ;)
14:29 babilen call on the minion, but I don't know why
14:29 kaptk2 joined #salt
14:29 babilen Lloyd_: Sure, that was the plan
14:29 masterkorp so if i include the rabbitmq state it would get the state from that directory right ?
14:29 aubsticle_ joined #salt
14:30 Lloyd_ babilen that a .7 minion?
14:31 simonmcc masterkorp: that’s my understanding, are you having problems?
14:31 simonmcc masterkorp: I’m just about to test it here…I trust Kiall, so merged it in good faith
14:32 Lloyd_ babilen: try a .6 minion and see if you get the same result
14:32 masterkorp yeah it seems to be unable to find the state
14:32 babilen Lloyd_: I don't have .6 (.5 was the last version) ...
14:32 masterkorp i am probably doing something wrong though
14:32 Lloyd_ oh yeah, .5 minion and .7 master wont work together at all
14:32 babilen Lloyd_: We just ran "service salt-minion stop" on all minions, but some processes seem to keep on running.
14:33 babilen Lloyd_: "at all" ?
14:33 Lloyd_ babilen: we couldnt get the .5 minion to work with our .7 master when the master updated, we HAD to update all the minions
14:33 simonmcc masterkorp: gimme a few minutes, "rabbitmq: "../rabbitmq-formula” might need to be a list “- rabbitmq: "../rabbitmq-formula”
14:34 Lloyd_ even the bootstrapping through salt-cloud (which still had .5 in the bootstrap file) on the .7 master wouldn't work.... so we'd fire up new instances that would never deploy because of the version mismatch
14:34 babilen Lloyd_: Well, I planned to do that with a nice "salt '*' pkg.upgrade", but ... well ;)
14:34 babilen This is huger SNAFU, really.
14:34 babilen *huge
14:35 Lloyd_ mhmm
14:35 Lloyd_ you got your master configs backed up right?
14:36 Lloyd_ it just sounds like the master has got itself into a bad state, with all the permission problems and now this..... i would redeploy the master personally... but that's just me
14:36 gq45uaethdj26jw6 joined #salt
14:36 dude051 joined #salt
14:37 ghartz joined #salt
14:38 gq45uaethdj26jw6 deploying a rather large map file with salt-cloud... went to delete everything this morning with the -d flag, and it seems to get tossed into an infinite loop checking all the providers. won't state which machines in the map need to be torn down,. hangs at "applying map", anyone experience this?
14:40 gq45uaethdj26jw6 didn't have the issue before today. using about 8 different providers. concerned that the api query to list nodes on one of the providers is like, paginated, and not getting parsed properly, so it keeps looping. just speculation....
14:41 babilen Lloyd_: I get "The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate" when adding a single minion now
14:41 Theo-SLC joined #salt
14:42 babilen Well, until I accepted it.
14:42 cofeineSunshine joined #salt
14:42 \ask joined #salt
14:42 rawzone joined #salt
14:42 babilen But I still can't communicate with the newly accepted minion.
14:42 babilen *sigh*
14:44 dude051 joined #salt
14:44 babilen If only I could think of a way to "redeploy" the master (short of nuking the entire box)
14:45 babilen yeah, still SaltReqTimeoutError
14:45 babilen ffs
14:46 john5223 joined #salt
14:48 masterkorp simonmcc: another question, how do I change the salt output level to trace ?
14:48 Lloyd_ babilen: when you removed the keys from the master, did you clear the key cache and also restart the master?
14:48 ajprog_laptop1 joined #salt
14:49 ytjohn joined #salt
14:49 simonmcc masterkorp: whatever log level you set with test-kitchen gets passed on to salt-call
14:49 simonmcc masterkorp: kitchen converge --log-level=debug default-with-deps-ubuntu-1204
14:51 babilen Lloyd_: I removed *everything*. What I did is: 1. Stopped salt-master and salt-minion 2. Removed /var/cache/salt/minion/* and /etc/salt/pki/minion/minion_master.pub on the minion 3. Removed /var/cache/salt/master/* and /etc/salt/pki/master/minion*/* on the master. 4. Started the master, waited 30 seconds 5. Started the minion and waited. 6. Accepted the key of the minion I just started with "salt-key -A" 7. Checked salt-key (it is accepted)
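babilen's reset procedure as a dry-run shell sketch (RUN=1 makes it real; the pki subdirectory names under /etc/salt/pki/master are assumed to be what the minion*/* glob expands to):

```shell
# Dry-run wrapper: echo each command unless RUN=1 is set.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# 1. stop both daemons
run service salt-master stop
run service salt-minion stop
# 2. minion side: clear the cache and the cached master key
run rm -rf /var/cache/salt/minion
run rm -f /etc/salt/pki/minion/minion_master.pub
# 3. master side: clear the cache and every cached minion key
run rm -rf /var/cache/salt/master
run rm -rf /etc/salt/pki/master/minions /etc/salt/pki/master/minions_pre
# 4./5. restart the master, wait, then start the minion
run service salt-master start
run sleep 30
run service salt-minion start
# 6./7. accept the new key and verify it is listed as accepted
run salt-key -A -y
run salt-key -L
```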
14:51 rushm0r3 joined #salt
14:52 babilen Still "SaltReqTimeoutError" on the minion and "Minion timed out" on test.ping (well, everything really)
14:52 Lloyd_ that's still only 1 minion on .7 ?
14:52 babilen That's one minion on .7, yes
14:52 shorty_mu left #salt
14:53 simonmcc masterkorp: yeah, the dependencies need to be laid out slightly different from what you had, this is working for me: http://pastie.org/9399851
14:53 simonmcc I’ll update the docs now so that the example is better
14:53 TheThing joined #salt
14:54 mpanetta_ joined #salt
14:54 oz_akan_ joined #salt
14:55 Lloyd_ babilen: anything further/new in the logs?
14:55 babilen No
14:55 gq45uaethdj26jw7 joined #salt
14:55 Lloyd_ have you got a spare system you can fire up another master install just to test with?
14:55 thedodd joined #salt
14:55 babilen Just the "[salt.crypt                               ][WARNING ] SaltReqTimeoutError: Waited 60 seconds"
14:55 jas-__ joined #salt
14:55 babilen Lloyd_: I have a lot of other systemd with .7 running, that's not the point :)
14:55 babilen (and they are working fine)
14:56 Lloyd_ masters?
14:56 babilen *systems
14:56 babilen yeah
14:56 Lloyd_ well what I was going to say was, point that minion at a test master, and see if test.ping returns
14:56 babilen I just don't know what to try anymore. The only difference is that this system runs as "salt" user and the other as "root"
14:57 user___ joined #salt
14:57 jalbretsen joined #salt
14:57 babilen I will try running the system that behaves weirdly as root (master that is)
14:57 masterkorp simonmcc: thank you
14:57 nkuttler_ joined #salt
14:57 masterkorp on the dependencies its not very explicit
14:58 Ymage joined #salt
14:58 ksalman_ joined #salt
14:58 rallytime joined #salt
14:58 rushm0r3 babilen: coming in late on this convo, but have you checked iptables on the machine timing out ?
14:58 Lloyd_ rushm0r3 that was the first thing i suggested :p
14:58 dude051 joined #salt
14:59 masterkorp simonmcc: wouldn't a rabbitmq:"path/to/the/formula" make more sense ?
14:59 rushm0r3 also may need to wipe the master_minion key on the minion.
14:59 Micromus_ So, I'm unable to use git+ssh for ext_pillar (but gitfs is working), I'm getting this error: 2014-07-17 16:58:31,256 [salt.pillar.git_pillar][ERROR] Unable to fetch the latest changes from remote git+ssh://git@git.xxx.no/salt-pillars.git: 'git fetch' returned exit status 128: Access denied.
14:59 Micromus_ fatal: The remote end hung up unexpectedly
14:59 Lloyd_ already done that too
15:00 rushm0r3 hmmmmm. selinux ?
15:00 Lloyd_ rushm0r3: the problem seems to have occurred with a master upgrade to .7
15:00 Lloyd_ all of his minions stopped responding, even those on .7
15:00 simonmcc masterkorp: it would definitely be neater..
15:00 rushm0r3 hmmm brb.
15:02 babilen rushmore: I have, but they can exchange keys with the master just fine
15:02 rojem joined #salt
15:03 Lloyd_ he left :p
15:03 babilen I saw that a bit too late
15:03 Lloyd_ haha, i do it all the time
15:03 jslatts joined #salt
15:04 blarghmatey joined #salt
15:05 dude051 joined #salt
15:05 babilen Lloyd_: What is surprising is that the minion does not get the minion_master.pub file from the master when I accept its key
15:06 JPaul joined #salt
15:06 Micromus_ Is anyone using ext_pillar with git+ssh and gitlab?
15:07 ajprog_laptop1 joined #salt
15:07 Lloyd_ babilen: as mentioned before, I would point that minion at a 'test' master and see if it works. If so then we know for 100% that it's the master (i'm pretty sure the master is ballsed anyway, but never hurts to be sure).
15:07 djaykay joined #salt
15:07 harkx joined #salt
15:07 linjan joined #salt
15:07 dober joined #salt
15:07 Lloyd_ is the master running as root now btw?
15:07 darkelda joined #salt
15:07 linjan joined #salt
15:08 pdayton joined #salt
15:10 oncallsucks joined #salt
15:10 Lomithrani joined #salt
15:12 helderco joined #salt
15:12 jslatts joined #salt
15:12 masterkorp simonmcc: hmm, it does not work this way too
15:12 conan_the_destro joined #salt
15:13 masterkorp http://pastie.org/private/8yqqi7mzafw5bi7v2in6na
15:13 masterkorp my .kitchen.yml
15:13 masterkorp then i do include:  \n -rabbitmq
15:13 masterkorp "    No matching sls found for 'sensu.server' in env 'base'"
15:13 masterkorp then this blows
15:14 simonmcc masterkorp: you need to spell dependancies correctly :)
15:14 masterkorp lLOLOL
15:14 simonmcc masterkorp: it’s not spelt dependendancies
15:14 masterkorp i am officially an arsehole
15:14 masterkorp thanks
15:15 rushm0r3 joined #salt
15:16 * masterkorp does the it work dance
15:16 simonmcc masterkorp: easily fixed problems are the best
15:16 kiorky joined #salt
15:17 rushm0r3 ok, im back. did we solve Babilen's problem yet ?
15:17 helderco Hey everyone. I’m having trouble with a custom module. One of the functions isn’t returning what I expect. The others seem ok. It was working fine before putting it in a different environment (not in base). http://pastebin.com/K16vwUbJ project_get() returns the right data, but pillar_get() always returns empty :\
15:18 babilen rushmore: No, we didn't
15:18 babilen Lloyd_: I tried running it as root (removed what I detailed above) and they still can't talk to each other.
15:18 masterkorp simonmcc: yes
15:18 masterkorp simonmcc: thank you for this tool man
15:18 Theo-SLC joined #salt
15:19 rushm0r3 Babilen, can u post ur conf file to a gist/bin somewhere
15:20 simonmcc masterkorp: my pleasure, just glad somebody else finds it useful!
15:22 babilen rushm0r3: It's nothing special, It's just that the minion is *consistently* running into SaltReqTimeoutError when it communicates with the master (i.e. the "ret_val = sreq.send('aes', self.crypticle.dumps(load))
15:22 babilen call fails
15:22 babilen And I simply have no idea what I could do anymore.
15:22 masterkorp simonmcc: how the heck was i supposed to test the formulas isolated without a buttload of work ? :)
15:23 rushm0r3 i was dealing with this same thing and it was a permissions issue the second time (first time was firewall)
15:23 babilen I would really need somebody who implemented minion.py and who knows under what circumstances sreq.send('aes', self.crypticle.dumps(load)) would fail.
15:23 simonmcc masterkorp: exactly...
15:23 babilen rushm0r3: Permissions of what?
15:23 zain__ joined #salt
15:23 war2 joined #salt
15:23 Lloyd_ babilen: permissions sounds about right considering the faff you had earlier with the /var/run/salt issues
15:24 blarghmatey joined #salt
15:24 Lloyd_ but then, you said you tried running the master as root?
15:24 masterkorp simonmcc: i would like to make a tool for the next step
15:24 zain_ joined #salt
15:24 rushm0r3 cache dir. but really the structure in general, if you were trying to run as non root user, you should have all ur salt dirs under that user ie: /home/user/.salt/etc/salt
15:25 simonmcc masterkorp: which is?
15:25 ramishra joined #salt
15:25 rushm0r3 and so on with var/cache etc
15:25 masterkorp something that would launch an openstack based load of vms that would allow me to test uncommited work
15:25 babilen Lloyd_: Yes, I am currently running the master as root and removed everything I detailed above.
15:25 masterkorp simonmcc: right now testing multimachine integration, like the reactor stuff is done on the staging environment
15:25 babilen rushm0r3: Assume that I am running as root.
15:25 masterkorp which leads to a heck of ugly commits
15:26 masterkorp simonmcc: kinda chef metal
15:27 simonmcc masterkorp: so the Chef community has leibniz from Steven Nelson-Smith, which leans on test-kitchen for VM setup & leibniz is then used to coordinate the testing of the “system"
15:27 war2 joined #salt
15:27 rushm0r3 k. and the user in conf file was changed back ?
15:27 babilen rushm0r3: I removed the "user: salt" stanza from the master config for now.
15:28 masterkorp simonmcc: i used leibniz a while ago
15:28 simonmcc I haven’t looked too closely at chef-metal, but the ship show episode I listened to about it made it sound like a tool I wrote 18mths ago, called stack-kicker
15:28 masterkorp never got it to actually work
15:28 babilen I would be happy to get it running in "vanilla" mode, but it is simply not behaving at all.
15:28 masterkorp simonmcc: how is the state right now
15:28 masterkorp seemed dead at the time
15:28 Eureka joined #salt
15:28 simonmcc masterkorp: I’ve only read about leibniz, it’s been on my list of “is that how we do multi-vm testing” investigations
15:29 masterkorp i did some testing
15:29 masterkorp got serious problems with it
15:29 masterkorp just stopped and went chef-metal
15:30 simonmcc masterkorp: for what I’m working on right now, we need a fairly complex network setup between the VMs, I think we’re going to use openstack/nova & neutron api’s to build the VMs & networks & then use our salt deploy tools to highstate & test
15:30 rushm0r3 Babilen: did you wipe minion and master_minion keys?
15:30 masterkorp on salt a tool like this would make even more sense since the orchestration abilities of salt are so much superior
15:30 simonmcc but that’s not very generic
15:30 masterkorp simonmcc: yeah, its hard to make clean tests
15:30 masterkorp you know since vm creation
15:31 babilen rushm0r3: No, I did not wipe the minion keys, I can try that
15:31 rushm0r3 make sure to kill master_minion.pub too.
15:31 simonmcc masterkorp: so how good is salt-cloud at building multiple networks for a set of instances?
15:31 masterkorp simonmcc: its awesome
15:31 masterkorp the map files are great once they are set
15:33 anuvrat joined #salt
15:33 simonmcc masterkorp: I should read up on that more then…and decent docs anywhere?
15:34 Damoun joined #salt
15:34 masterkorp only the default docs
15:34 masterkorp simonmcc: http://docs.saltstack.com/en/latest/topics/cloud/
15:34 masterkorp this is your best bet
15:34 simonmcc ok. right, I better hit the office & get some real work done :)  Last day in Seattle today, back to europe in the AM
15:35 babilen rushm0r3: I just removed the /etc/salt/pki on the minion and it did not change anything) (also stopped the master, removed /var/run/salt and /var/cache/salt)
15:36 babilen The master can talk to a locally running minion though.
15:36 q1x joined #salt
15:36 babilen (it's the only one)
15:36 masterkorp simonmcc: have a good one, thanks for your help
15:37 babilen I still get SaltReqTimeoutError on the minions. I guess I will have to dig deeper into the code to debug this.
15:37 taterbase joined #salt
15:38 babilen yay, what an evening. Hope that I get home before 11pm today :(
15:38 rushm0r3 babilen: is it all minions or just the 1 ?
15:39 dzen :win 6
15:40 babilen rushm0r3: I am currently just trying with two minions. 1. Minion running on the master itself (keys can be accepted and it reacts to test.ping) 2. One other minion running on a different host (can ping master, no firewall), I can accept the keys, but the minion consistently gets SaltReqTimeoutError
15:41 babilen But all minions that I tried behave that way .. we have a few too many to check every single one and I am therefore debugging with a single one.
15:42 quantumriff joined #salt
15:42 rushm0r3 joined #salt
15:43 _jslatts joined #salt
15:43 vu joined #salt
15:45 Lloyd_ that definitely sounds like a firewall or something blocking it babilen, to be sure can you flush all of the iptables rules on the master and set the default policy on the chains to ACCEPT
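Lloyd_'s flush, as a dry-run sketch. Note it is destructive on a real box: flushing the nat table also removes port-forwards like the ones babilen's customers rely on, so RUN=1 should be set only deliberately:

```shell
# Dry-run wrapper: echo each command unless RUN=1 is set.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# Default-ACCEPT every built-in chain, then flush all rules.
for chain in INPUT FORWARD OUTPUT; do
  run iptables -P "$chain" ACCEPT
done
run iptables -F
run iptables -t nat -F   # NAT rules (port forwards) live here
```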
15:45 babilen There is no firewall .. why should they be able to exchange keys?
15:46 wendall911 joined #salt
15:47 babilen So, what fails is the, I guess, Auth._sign_in() in salt/crypt.py and, in particular, the  payload = sreq.send_auto(
15:47 babilen self.minion_sign_in_payload(),
15:47 babilen timeout=timeout
15:47 babilen )
15:47 babilen Oh, sorry ...
15:47 babilen But that call in there ..
15:47 babilen The question is why that can't communicate with the master.
15:47 blarghmatey joined #salt
15:47 rushm0r3 its timing out because it isn't validating the payload against minion_master pub
15:47 ekristen I just upgraded my salt master to 2014.1.7 and my minions to 2014.1.7 and I’m using ext_pillars and now none of my minions are getting their pillar data!
15:48 rushm0r3 something is blocking communication which is usually some salt dir permission or firewall
15:48 Lloyd_ ^^
15:48 rushm0r3 hence why the local CAN talk but outside cannot
15:48 Lloyd_ exactly
15:49 war2 joined #salt
15:49 rushm0r3 i would post your conf file for master and minion, then tree -u all your salt dirs
15:50 Micromus_ How do I convert this to gitfs: source: salt://common/sshd/sshd_config
15:51 rushm0r3 babilen: and post the results somewhere. its easier to resolve when someone can look at it rather than just taking someones word for it. sometimes a fresh pair of eyes catches something u miss
15:52 babilen rushm0r3: What I do see is that (after removing minion_master.pub) it never reappears on the minion I am testing with.
15:52 ekristen anyone using pillar git data and 2014.1.7? all my minions have lost their pillar data!
15:52 ekristen it isn’t syncng
15:52 rushm0r3 need to also clear the minion key from the master with salt-key
15:52 babilen rushm0r3: Sure, I completely understand that. I will prepare a suitable pastebin (have to redact some bits of information in there, but that should hopefully not be a problem, I'll be consistent in what I do)
15:53 babilen (and I did remove the minion's key with "salt-key -D")
15:54 diegows joined #salt
15:54 rushm0r3 Babilen: and just for giggles.. service iptables stop
15:55 rushm0r3 and see what happens after that (restart master/minion etc)
15:57 gds_ joined #salt
15:59 UtahDave joined #salt
15:59 rushm0r3 joined #salt
16:00 vejdmn joined #salt
16:00 agliodbs joined #salt
16:02 rushm0r3 joined #salt
16:03 tligda joined #salt
16:04 rushm0r3 joined #salt
16:05 yomilk joined #salt
16:05 ekristen anyone using git for pillar data?
16:05 notbmatt yup
16:05 rushm0r3 joined #salt
16:06 notbmatt we have a private repo with pillar data in it
16:06 ekristen notbmatt: what version of salt are you running?
16:06 notbmatt 2014.1.4
16:06 ekristen I just upgraded to 2014.1.7 and my minions seemed to have lost all their pillar data
16:07 notbmatt on a minion, run salt-call saltutil.pillar_refresh
16:07 notbmatt you should see attempted refreshes of the pillar cache
16:07 quantumriff is there a way in a state, I can make it not apply to a certain computer?  I have lots of linux servers.. I want to setup the sendmail.cf file to relay to our main mail server.. I don't want "mail.example.com" to get its config overwritten to relay to itself
16:07 ekristen notbmatt: yup, done that, still no pillar data
16:08 notbmatt do you at least see the minion attempting?
16:08 notbmatt quantumriff: see http://docs.saltstack.com/en/latest/topics/targeting/compound.html, "Matchers can be joined using boolean and, or, and not operators."
16:08 ekristen notbmatt: local:
16:08 ekristen None
16:08 ekristen is all I see
16:09 ekristen I think I’m going to stop using apt packages for salt updates, I continue to have problem after problem with them
16:09 notbmatt hm
16:10 babilen rushmore: https://www.refheap.com/a01e16bafcdbbe475c4a9c6f5
16:10 notbmatt did you upgrade your minions and master at the same time?
16:10 ekristen yup
16:10 ekristen master first
16:10 notbmatt fwiw, refresh_pillar should return None; that's expected
16:10 babilen rushmore: (and I cannot stop iptables as that would kill a bunch of forwards that customers rely on)
16:10 quantumriff ekristen: I switched to using the bootstrap loader and git for the same problem on Centos.. the minions might get updated before my master.. and I had surprises
16:10 babilen rushmore: Oh, I leaked information there .. one second
16:11 quantumriff notbmatt: I guess I could also do something like {% if 'mail.example.com' in grains['id'] %} and then skip it too
16:12 ccase joined #salt
16:13 quantumriff or NOT IN, etc
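Both exclusion routes, sketched below. mail.example.com and the sendmail state name are quantumriff's examples; everything is a dry run unless RUN=1:

```shell
# Dry-run wrapper: echo each command unless RUN=1 is set.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# Compound matcher from the CLI: everything except the mail host.
run salt -C '* and not mail.example.com' state.sls sendmail

# Or guard inside the state file itself with jinja, e.g.:
#   {% if grains['id'] != 'mail.example.com' %}
#   sendmail_config:
#     file.managed:
#       - name: /etc/mail/sendmail.cf
#   {% endif %}
```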
16:15 babilen So, I am seeing https://www.refheap.com/96c3ff06eb346f0dcdfe25b6a and have no idea how to get my minions to speak to the master
16:15 babilen rushmore: ^^^
16:15 babilen (talking to it again that is)
16:15 goodwill babilen: encourage them to be more confident?
16:15 * goodwill runs
16:16 KyleG joined #salt
16:16 KyleG joined #salt
16:16 babilen goodwill: Seriously, I've been working on this for hours, crucial infrastructure cannot be managed anymore. I would appreciate it if I get serious replies at this point. (sorry, I know that humour is good, but I have no motivation to keep on doing this for longer than I have to)
16:18 goodwill sorry
16:18 babilen rushm0r3: Ah, pasted it above (https://www.refheap.com/96c3ff06eb346f0dcdfe25b6a)
16:18 goodwill whiteinge: ping
16:18 babilen goodwill: it's okay
16:18 whiteinge goodwill: hi!
16:18 rushm0r3 joined #salt
16:18 goodwill whiteinge: can you help babilen there?
16:18 goodwill whiteinge: :)
16:18 goodwill babilen: whiteinge is the best
16:19 goodwill babilen: if you give him hugs and beer he will help
16:20 goodwill whiteinge: right?
16:20 notbmatt babilen: can you please start your minion and master in interactive mode with debug logging turned on?
16:20 babilen Little bit of background to this: Co-Worker installed new minions that got .7 while our master was still on .5. I then upgraded the master to .7 and we gave it more RAM and cores (4G and 12 cores now). Rebooted the box.
16:20 bmatt babilen: also, can you share the versions in question?
16:20 bmatt ah, okay, thanks
16:20 babilen Sure, one second.
16:21 ekristen bmatt: I’m dropping everything back down to 2014.1.5
16:21 ekristen because I can’t get any pillar data working which is a huge issue
16:22 whiteinge babilen: i'm just about to step into a meeting and don't have a quick answer. i'll check back in with you once i'm back.
16:22 bmatt also, can you confirm that you're able to transit TCP (instead of ICMP) to your master?
16:22 babilen Rest of the story: Master didn't work after restart because it was running as a different user (salt) and it does not seem to make sure that /var/run/ is owned by that user (tmpfs on Debian) .. changed that, but am now running it as root. Can't get *any* minion but the locally running one to talk to the master ever since.
16:22 babilen whiteinge: Okay.
16:23 bmatt oh hm, so you can accept the minion key
16:23 bmatt so you're obviously able to connect using TCP
16:24 bmatt okay, yeah, start 'em up interactively with -l debug
16:25 Lloyd_ ekristen: are there any errors in the salt log with regards to your pillar data?
16:25 babilen Updated https://www.refheap.com/96c3ff06eb346f0dcdfe25b6a with --versions, -ldebug in a second
16:25 ekristen Lloyd_: no
16:25 rushm0r3 joined #salt
16:26 ekristen Lloyd_: that’s why I’m a little worried, I can’t seem to pinpoint any problem
16:26 patrek joined #salt
16:26 Lloyd_ ekristen: I'm presuming that if you run salt 'minion' pillar.items it returns nothing?
16:27 ekristen it returns the master pillar data only, none of the other pillar data it did before the upgrade
16:27 schmutz joined #salt
16:28 ekristen ugh
16:28 babilen https://www.refheap.com/6fb6dc9a19802a0e19d619296 are the debug logs (minion at the bottom)
16:28 ekristen even downgrading back to 2014.1.5 the problem still exists
16:28 rushm0r3 babilen: can u verify both ports ? 4505 & 4506
16:28 babilen I seriously don't know what to try anymore?
16:28 babilen rushm0r3: Verify?
16:28 Lloyd_ do you store your pillar data in /srv/salt ekristen ?
16:28 Lloyd_ or in /srv/pillar ?
16:28 ekristen Lloyd_: git
16:28 babilen I would like to add that 150+ minions could talk to that master just fine before
16:29 Lloyd_ i mean, what is the file_root for the pillar data on the master ?
16:29 babilen bmatt: https://www.refheap.com/6fb6dc9a19802a0e19d619296 + https://www.refheap.com/96c3ff06eb346f0dcdfe25b6a
16:29 ekristen Lloyd_: ext_pillar
16:30 Lloyd_ aahh, not using ext_pillar myself, we pull our pillar repo into /srv/pillar file_root on the master
16:30 joehillen joined #salt
16:32 babilen ekristen: Cursing yourself for deploying .7 too?
16:32 babilen heh
16:32 joehillen joined #salt
16:32 ekristen I’m cursing a lot of stuff right now and trying not to panic too much — my scripts are heavily reliant on pillar data
16:32 ekristen I just need to figure out how to fix this
16:32 ekristen and I think I’m never going to upgrade again
16:33 rallytime joined #salt
16:34 ekristen gonna try and drop back down to 2014.1.4 now too
16:34 ekristen I was like on 2014.1.0rc3 on my master
16:34 ekristen upgraded it to 2014.1.7 and all my stuff broke
16:34 ekristen UtahDave: you around?
16:35 patarr any good guides out there on how to do software deployment with salt?
16:36 rushm0r3 Babilen: yeah, can you verify that both pub & sub ports are open and communicating ?
16:36 Eugene patarr - Build a rpm/deb, use file.managed to set up a repo, pkg.managed it in.
16:37 war2 joined #salt
16:39 war2 joined #salt
16:39 babilen rushm0r3: I'd love to, but how can I do that?
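A minimal way to answer rushm0r3's question — checking that the master's publish (4505) and request (4506) ports accept TCP connections from the minion — is a plain TCP connect. This sketch uses bash's built-in /dev/tcp; salt.example.com is a placeholder for the real master address:

```shell
# check_port HOST PORT — succeeds only if a TCP connection can be opened.
# salt.example.com is a placeholder; substitute your master's address.
check_port() {
    if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
        echo "$1:$2 open"
    else
        echo "$1:$2 closed"
    fi
}
check_port salt.example.com 4505   # master publish port
check_port salt.example.com 4506   # master request/return port
```

If 4506 reports closed while SSH to the same host works, suspect a firewall rule or a master process that is not listening (compare against `netstat -luntp` on the master).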
16:39 cheus patarr, For less complex examples (eg, php/python apps) you could build a formula to use git/bzr/svn/tar to get the source on the target then manage the cfg files through the formula
16:39 patarr cheus: thats what i was thinking of doing. okay. cool
16:40 ekristen ok
16:40 ekristen semi-victory!
16:40 ekristen downgrading the master back down to 2014.1.4 fixed the issue
16:40 ekristen so something in 1.5 and later makes git pillar data not work properly
16:41 * ekristen sighs relief
16:41 scarcry joined #salt
16:41 * babilen is jealous
16:42 ekristen babilen: what are your problems?
16:43 bmatt patarr: what cheus recommends will work, but it scales very badly, and is brittle; I recommend proper packaging
16:44 babilen ekristen: My minions cannot talk to my master anymore and always run into SaltReqTimeout
16:44 alanpearce joined #salt
16:44 babilen cf. https://www.refheap.com/6fb6dc9a19802a0e19d619296 + https://www.refheap.com/96c3ff06eb346f0dcdfe25b6a
16:45 ekristen I saw that too on mine, but my master was a little overused at the time during its start up
16:45 babilen I just downgraded to .5 on my master and minion and that didn't change anything either.
16:45 cheus bmatt, To each his own. Packaging imparts its own issues with complex config file management. Debconf is often too blunt an instrument.
16:45 ekristen did you install via package originally?
16:45 ekristen how did you downgrade?
16:45 cheus bmatt, For me, it's easier to have release branches in git repos and to call a git.latest
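A rough sketch of the pattern cheus describes, as a state file — the repository URL, branch, and target path are hypothetical, so adjust to taste:

```yaml
# deploy/init.sls — check out the app's release branch on the minion
git:
  pkg.installed

myapp-source:
  git.latest:
    - name: https://git.example.com/myapp.git   # hypothetical repo
    - rev: release-1.2                          # release branch to deploy
    - target: /srv/myapp
    - require:
      - pkg: git
```

Cutting a new release then amounts to pointing `rev` at the next branch (or templating it from pillar) and re-running the state.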
16:46 ekristen salt is a little notorious for having packages leave .pyc files laying around
16:46 babilen yeah, I am using the Debian packages and downgraded by directly installing the packages with dpkg
16:46 ekristen joined #salt
16:47 ekristen oh and I had to go to 2014.1.4
16:47 ekristen 2014.1.5 I still had issues
16:48 bmatt cheus: yeah, I guess it depends quite a bit on what, exactly, you're releasing
16:48 bmatt we release binary blobs
16:49 ekristen patarr: I recommend docker
16:49 cheus bmatt, Which I'd totally agree as being a prime candidate for packaging. A django app, though? Eh.
16:49 babilen I honestly don't know what to try anymore.
16:49 ekristen babilen: did you install .7 from a package?
16:49 schimmy joined #salt
16:49 babilen I did, yes.
16:49 ekristen try and remove it and purge it
16:49 bmatt babilen: I'd probably start sniffing ethernet traffic now
16:50 ekristen so that all files are removed
16:50 babilen bmatt: heh
16:50 babilen I mean what baffles me is that they can obviously talk to each other, but that the crypto seems to time out (almost as if the minion is using the wrong key or so)
16:50 babilen How else would they be able to exchange the keys initially?
16:51 babilen And what else could I remove in order to get the master or minion into the "pristine" state?
16:51 ekristen babilen: if you remove the pki folder it will gen new keys
16:51 ekristen you’ll have to re-accept them on the master
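Spelled out, ekristen's suggestion looks roughly like this on the minion (paths are the stock Debian/Ubuntu defaults; adjust for your layout, and note the `rm` is destructive):

```shell
# On the minion: stop the daemon, clear its keypair, restart.
service salt-minion stop
rm -rf /etc/salt/pki/minion        # keys are regenerated on next start
service salt-minion start

# On the master: the minion now shows up as unaccepted again.
salt-key -L                        # list pending keys
salt-key -a minion-id              # re-accept (minion-id is a placeholder)
```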
16:51 war2 joined #salt
16:51 cheus ekristen, I really need to get up to speed on that. I still can't get over how docker is supposed to 'work' if we can't trust our devs to be good sysadmins and production is so far from development (eg, ssl on lb's, db's  in different tiers, etc). I just don't see how docker helps me when there are so many differences between dev and production.
16:51 babilen ekristen: I've done that on the minions, but I am still holding on to the master keys
16:52 schimmy1 joined #salt
16:52 ekristen I’ll PM you cheus
16:53 jnials joined #salt
16:55 babilen Could it be that salt can't handle 12 cores on the box?
16:55 Ryan_Lane joined #salt
16:55 dzen you know, it's written in python
16:56 babilen I mean I know that it is using multiprocessing (which is why we gave it more cores), but zmq.Context(1) is being called mostly ...
16:56 TheThing joined #salt
16:56 babilen So that this shouldn't have an effect on ZMQ, but *shrug*
16:59 babilen I mean that is literally the only difference I have between two of our masters (one with 4 cores the problematic one with 12)
16:59 Heartsbane Is there a compatibility issue with salt 1.5 to 1.4
16:59 ml_1 joined #salt
16:59 alanpearce joined #salt
16:59 babilen Heartsbane: master .5 and minions .4 should™ work, but I'd have them at the same version
16:59 Heartsbane I just upgraded my master and all my minions are saying Minion did not return
17:00 babilen hehe
17:00 babilen Oh, I am battling that problem the entire day and we have *massive* problems that have to be worked around by hand now (yay for ssh and for loops)
17:00 repl1can1 .q/quit
17:00 babilen I upgraded to .7 on my master though, but it seems as if something is being wonky there.
17:01 Heartsbane babilen: I just made my master 1.5 and minions are 1.4 ... great
17:01 babilen Heartsbane: Are you getting SaltReqTimeout errors on your minions? (check the logs, or "salt-minion -l debug"
17:01 babilen )
17:01 Heartsbane this is going to make this maintenance event fun
17:02 KyleG Minions can be older than the master
17:02 Heartsbane Could I keep the master at 1.4 and upgreade to 1.5 on the minions
17:02 Heartsbane ?
17:02 babilen no
17:02 Heartsbane Terrific
17:03 XenophonF The master  has to upgrade first, right?
17:03 babilen Heartsbane: so, are you running into the SaltReqTimeout problem?
17:03 KyleG "When upgrading Salt, the master(s) should always be upgraded first. Backwards compatibility for minions running newer versions of salt than their masters is not guaranteed."
17:03 KyleG http://docs.saltstack.com/en/latest/faq.html#can-i-run-different-versions-of-salt-on-my-master-and-minion
17:03 Heartsbane I can check
17:03 Heartsbane I already rolled back the master
17:03 * Heartsbane sighs.
17:04 * Heartsbane blames whiteinge.
17:04 vu joined #salt
17:04 Heartsbane Is there a fix to SaltReqTimeout?
17:04 bmatt Heartsbane: babilen is reporting the same issue
17:04 babilen Heartsbane: I've been working on this for roughly seven hours straight and haven't found a solution. Is that your problem too?
17:05 babilen https://www.refheap.com/96c3ff06eb346f0dcdfe25b6a + https://www.refheap.com/6fb6dc9a19802a0e19d619296 is what I am seeing
17:05 Heartsbane Well right now I am upgrading back to a 1.5 master to find out
17:05 Heartsbane This is going to make this maintenance event all kinds of fun
17:06 vu joined #salt
17:06 war2 joined #salt
17:07 Heartsbane babilen: yep ...
17:07 * Heartsbane blames whiteinge and UtahDave.
17:07 jaimed joined #salt
17:08 mayak joined #salt
17:08 mayak left #salt
17:09 pdayton joined #salt
17:11 babilen Heartsbane: So you are seeing that too? That is interesting and seems to point at a more general problem as I upgraded from .5 to .7 while you upgraded from .4 to .5 .. So it can't be specific to those versions really.
17:12 Heartsbane Well I am scheduled to upgrade Dev/QA/Staging in 48 minutes
17:12 Heartsbane so I am going to have to dig into this after that
17:13 * Heartsbane sighs.
17:13 babilen But you cannot talk to your minions anymore?
17:13 Heartsbane nope
17:13 babilen (due to SaltReqtimeout?)
17:13 Heartsbane Correct
17:14 Heartsbane I think what I am going to do is a salt cmd.run 'yum --exclude=salt*\ -y update' on all those environments and dig into the bug after the event
17:15 babilen Okay, thanks .. I will have to take a break soon (yay, that'll be a fun day tomorrow), so please let me know if you figure something out.
17:15 kaptk2 joined #salt
17:15 Heartsbane babilen: supply the link to your bug and I will subscribe with my notes
17:15 Heartsbane I am going to have to rescript this
17:16 babilen I'll file an issue now, one second
17:16 agliodbs joined #salt
17:16 Ryan_Lane is there a way for me to make my custom execution module return both output and a return code?
17:17 Ryan_Lane that salt-call will properly handle with --retcode-passthrough ?
17:17 Ryan_Lane I don't want to just return a dict
17:18 repl1cant joined #salt
17:18 kballou joined #salt
17:20 babilen Heartsbane: Just out of interest: What are the specs of your master (we also raised cores from 1 to 12 and RAM from 1G to 4G on ours)
17:24 babilen Heartsbane: https://github.com/saltstack/salt/issues/14307 (whiteinge, UtahDave, bmatt)
17:25 davet joined #salt
17:25 Heartsbane 2 cores 4G RAM
17:25 babilen okay, thanks
17:25 babilen So, time for a break
17:25 bmatt babilen: is it possible for you to back out your hardware change?
17:25 higgs001 joined #salt
17:26 druonysus joined #salt
17:28 mpanetta_ joined #salt
17:28 TheThing joined #salt
17:30 mpanetta joined #salt
17:32 ramishra joined #salt
17:34 poogles joined #salt
17:35 aw110f joined #salt
17:37 kermit joined #salt
17:37 repl1cant joined #salt
17:38 bhosmer joined #salt
17:41 kballou joined #salt
17:41 Damoun joined #salt
17:41 vexati0n hello #salt.. is it possible at all to define the pub_port in the minion cfg instead of relying on the port the master tells the minion about?
17:44 ramishra joined #salt
17:46 mpanetta joined #salt
17:47 marickstarr joined #salt
17:50 bhosmer_ joined #salt
17:54 marickstarr left #salt
17:54 rushm0r3 joined #salt
17:55 whovfly joined #salt
17:58 arthabaska joined #salt
17:59 rushm0r3 joined #salt
18:01 druonysuse joined #salt
18:02 VictorLin joined #salt
18:04 troyready joined #salt
18:05 rushm0r3 joined #salt
18:07 Phibs joined #salt
18:09 dude051 joined #salt
18:10 jnials joined #salt
18:11 Ryan_Lane is there an equivalent to rm -Rf in the file module?
18:12 rushm0r3 joined #salt
18:15 rushm0r3 joined #salt
17:15 Eureka Ryan_Lane: file.rmdir should do it if it's a dir
18:16 UtahDave Ryan_Lane: Hm.  does file.absent take a directory as the name?
18:16 Eureka otherwise file.absent
18:16 Ryan_Lane UtahDave: I'm running this from an execution module
18:16 Eureka Ryan_Lane: UtahDave -- I am using file.absent to take out a dir. It works currently for me.
18:17 mgw joined #salt
18:17 Ryan_Lane I guess I could call the state, but it's weird calling a state from an execution module
18:17 Eureka Ryan_Lane: Example here: http://pastebin.com/NitysxBx
18:17 Ryan_Lane again, writing a custom execution module ;)
18:17 n8n joined #salt
18:17 RandalSchwartz left #salt
18:18 Eureka Ryan_Lane: Ah. Thought you meant something like this. http://pastebin.com/TKQPvmJp
18:18 Ryan_Lane nope. doing this in states is straightforward
18:19 Ryan_Lane it's weirdly not via execution modules
18:19 UtahDave Ryan_Lane: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.file.html#salt.modules.file.rmdir
18:19 Ryan_Lane "Fails if a directory is not empty."
18:19 UtahDave ah
18:20 UtahDave I wonder if file.remove will work on a directory.  If not, we should probably add an option to file.rmdir
18:21 alanpearce joined #salt
18:21 Ryan_Lane let me try
18:21 alanpearce joined #salt
18:22 Ryan_Lane sigh
18:24 Ryan_Lane oh, that does actually work
18:24 UtahDave file.remove?
18:25 Ryan_Lane yep
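For reference, the `rm -Rf` behavior being hunted for here is what Python's `shutil.rmtree` provides, which is presumably roughly what a recursive remove boils down to under the hood:

```python
import os
import shutil
import tempfile

# Build a small non-empty directory tree...
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
open(os.path.join(root, "a", "b", "f.txt"), "w").close()

# ...then remove it recursively; unlike os.rmdir, this does not
# require the directories to be empty (the rm -Rf semantics above).
shutil.rmtree(root)
print(os.path.exists(root))  # False
```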
18:26 Ryan_Lane docs for tar are a little bad: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.archive.html#salt.modules.archive.tar
18:26 UtahDave cool
18:26 Ryan_Lane the examples show the args in different orders
18:26 Ryan_Lane some examples show options first, and others show the options after the tar
18:27 UtahDave Ryan_Lane: I'll open an issue on that and get someone to audit that for consistency
18:27 ckao joined #salt
18:27 Ryan_Lane cool. thanks
18:30 UtahDave issue added.  thanks for pointing that out
18:34 Ryan_Lane yw
18:38 talwai joined #salt
18:38 talwai Is there a way to copy a file from minion to master? salt-cp seems to be just one directional
18:38 toddnni joined #salt
18:39 thedodd joined #salt
18:39 talwai Nvm, just found cp.push
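One caveat worth adding to talwai's find (hedged — verify against the docs for your version): `cp.push` is disabled by default and needs `file_recv` switched on in the master config, after which pushed files land under the master's cache directory:

```yaml
# /etc/salt/master
file_recv: True
# Optional cap on pushed file size, in megabytes (default 100):
file_recv_max_size: 100
```

With that in place, `salt 'minion-id' cp.push /etc/hosts` stores the file under /var/cache/salt/master/minions/minion-id/files/ on the master.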
18:39 vu joined #salt
18:40 tkharju1 joined #salt
18:41 kballou joined #salt
18:49 dlam joined #salt
18:51 alanpearce joined #salt
18:56 babilen bmatt: Yes, that is possible and definitely something I'll try.
18:58 babilen It might be that 12 cores were a bit over the top, but we figured "why not? load will be up only temporarily when we run commands and we have those cores to spend"
19:01 bluehawk joined #salt
19:01 schimmy joined #salt
19:02 schimmy1 joined #salt
19:04 bluehawk joined #salt
19:05 vejdmn joined #salt
19:06 thayne joined #salt
19:06 bmatt I cannot imagine why "too many cores" could ever be a bad thing :)
19:07 bmatt I'm not aware of python or multiprocessing having issues with big servers, but who knows
19:07 nahamu bmatt: if the machine doesn't have enough RAM to go with all the cores, too much forking could eat all your RAM...
19:07 nahamu so, there are never too many cores as long as you have sufficient RAM for each one. :)
19:07 KyleG all my 12 core machines have a minimum of 96 GB of RAMz
19:08 nahamu KyleG: nice! :)
19:08 KyleG jah dat overkill
19:08 bmatt yeah, but salt-master isn't a fork bomb suspect
19:08 vejdmn joined #salt
19:11 q1x joined #salt
19:12 q1x joined #salt
19:12 kaiserpathos joined #salt
19:14 dude051 joined #salt
19:14 endersavage joined #salt
19:14 babilen I tried 5 (default) and 10 worker threads, not sure how many multiprocessing jobs that might kick off though.
19:14 endersavage is there a command to show all packages/services installed on an instance running salt-minion?
19:15 babilen KyleG: So you do run salt-master on 12 cores, but with 96G of RAM and not 4G like I do?
19:15 bmatt babilen: multiprocessing forks actual PIDs, so ps shows you
19:15 KyleG I don't run salt on those boxes at all.
19:15 babilen I know, I have nproc
19:15 bmatt IME salt-master processes aren't particularly memory-heavy
19:15 KyleG Some things I want as few things on as possible. even my config management.
19:15 babilen KyleG: Okay, that data point doesn't help in debugging my issue then.
19:17 babilen endersavage: salt '*' pkg.list_pkgs (aptpkg module, http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.aptpkg.html)
19:17 babilen bmatt: But you are not aware of n
19:17 bluehawk joined #salt
19:17 babilen *anybody having used that many cores in the past?
19:17 endersavage thank you! :)
19:17 babilen enjoy!
19:18 bmatt babilen: no; I'm thinking now about whether I have some hardware I could test that on, but I don't think I have any
19:18 babilen endersavage: Well, I foolishly assumed that you would use Debian (or at least a Debian based distribution, haven't checked if yum  supports that too)
19:18 endersavage similiar command for a list of services running?
19:18 babilen endersavage: Which init system?
19:18 bmatt endersavage: do you want services or do you want packages?
19:19 vejdmn joined #salt
19:19 endersavage that works on both apt and yum
19:19 bmatt pkg.list_pkgs gives you packages, service.get_all gives you services
19:19 babilen bmatt: Do you know if I could delete anything apart from: /var/cache/salt, /var/run/salt, /etc/salt that salt uses to keep "state" ?
19:20 bmatt babilen: emphatically not =[ (I'm looking at the output of dpkg -L too)
19:21 bmatt I do wonder if there's a stale .pyc somewhere
19:21 bmatt in /usr/share/pyshared/salt
19:21 bmatt babilen: you could also try apt-get purge [] to "start over"
19:21 babilen We will try downsizing the master to 4 cores, but if that doesn't work I am really running out of ideas.
19:21 endersavage thanks!  is service.get_all showing me running services?
19:21 babilen I purged salt-minion, salt-master and salt-common already.
19:22 bmatt endersavage: http://salt.readthedocs.org/en/v2014.1.4/ref/modules/all/salt.modules.service.html#module-salt.modules.service
19:22 XenophonF left #salt
19:22 bmatt babilen: goddamnit.
19:22 babilen endersavage: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.service.html#salt.modules.service.get_all
19:22 babilen (no)
19:22 babilen bmatt: I know :(
19:22 bmatt babilen: I can't test this, but purge, then see if leftover pyc files are in /usr/share/pyshared?
19:22 babilen Can do
19:23 q1x hi guys...I'm trying to change a salt-formula to work with Ubuntu 14.04, but I'm not getting it right. In the map file, it seems to filter on the os_family grain. As that is debian, that is where I need to look. Debian has a service 'libvirtd' but Ubuntu 14.04 has 'libvirt-bin'. Any pointers on what I'm doing wrong? map file -> https://github.com/q1x/libvirt-formula/blob/master/libvirt/map.jinja
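Since Debian and Ubuntu share the os_family 'Debian' grain, one hedged fix for q1x's map is to key the lookup on the more specific `os` grain instead (check `grains.filter_by`'s exact signature against your Salt version; service names follow the discussion above):

```jinja
{% set libvirt = salt['grains.filter_by']({
    'Debian': {'service': 'libvirtd'},
    'Ubuntu': {'service': 'libvirt-bin'},
}, grain='os') %}

libvirt-service:
  service.running:
    - name: {{ libvirt.service }}
```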
19:23 stevednd does anyone know of any efforts to add some kind of output to the orchestration runner?
19:23 thayne joined #salt
19:23 babilen bmatt: But I am still under the impression that the problem is specific to the minion in that *they* somehow can't get encryption to the server right (i.e. Auth._sign_in() sreq sending is failing)
19:24 bmatt hm
19:24 bmatt possible
19:24 babilen I also notice that new minions don't get the minion_master.pub anymore ...
19:24 babilen But I don't know *why* (let alone what I could do against it)
19:25 vejdmn joined #salt
19:27 che-arne joined #salt
19:28 vu joined #salt
19:29 bhosmer_ joined #salt
19:29 bmatt D=
19:29 bmatt oh hm.
19:29 bmatt you can transit TCP in both directions, yes?
19:30 q1x apparently Jinja doesn't like my if test in there :-P
19:30 dude051 joined #salt
19:30 babilen bmatt: How would I test?
19:31 Ahlee_ Back again with more grains fun.  Can't find a reason this is happening in grains.py, any theories? https://gist.github.com/jalons/c00400a057588ba5397a
19:31 bmatt babilen: netcat and telnet :)
19:31 vejdmn joined #salt
19:40 q1x ok, f it...I'm not running debian anyway and I need this to work...
19:40 quantumriff I need some help understanding some permissions.. with salt.. I need to create a directory, but I need to use "setfacl" to set several groups when its created..
19:41 quantumriff from what I have seen, the folder.present stuff doesn't seem to handle the setfacl command, would i need something like a 'cmd.wait' along with it?
19:42 babilen bmatt: I can nc to the SSHD running on it and a few other services .. should the salt-minion be listening?
19:42 bmatt yes
19:44 bmatt oh, you know what
19:44 bmatt no, it shouldn't
19:44 bmatt the minion reaches out to the master
19:44 bmatt it doesn't listen for incoming connections
19:47 Theo-SLC joined #salt
19:47 vejdmn joined #salt
19:47 babilen Okay, I can telnet and netcat from the master to the minion, I downsized it from 12 to 4 cores now (I know that 4 cores work) and purged both the master and the minion and removed /var/cache/salt, /var/run/salt, /etc/salt and checked that there are no files named *salt* in /usr/share ...
19:48 babilen *sigh*
19:50 babilen But I get "connection refused" if I try to telnet from the minion to master:4506
19:51 bmatt interesting
19:54 babilen It works fine on two other setups (telnet from minion to master:4506)
19:55 babilen So either: 1. Something is blocking the connection (i.e. a firewall is rejecting the SYN packet) OR 2. The service isn't running.
19:56 bmatt do you have more than one interface on the master?
19:56 bmatt could it be binding to the wrong one?
19:57 babilen I have many interfaces, but configured the correct IP in /etc/salt/master interfaces
19:58 babilen I even have "tcp        0   1887 10.10.102.103:4506      10.10.102.19:39180      ESTABLISHED" (10.10.102.103 is the master and 10.10.102.19 the minion)
19:59 babilen Hmm, but why is there nothing in LISTEN ?
20:01 babilen Ah, shit .. restarted the master and I now have "tcp        0      0 10.10.102.103:4506      0.0.0.0:*               LISTEN      0          17678       8349/python" and can telnet into the master.
20:02 GradysGhost joined #salt
20:02 tkharju2 joined #salt
20:03 dlam hmm how do i manually connect a minion to a master?  i've only done it before with bootstrap.sh
20:03 stevednd paging whiteinge, manfred, or anyone else that may have some experience with the orchestration runner
20:04 whiteinge babilen: back from the meeting. reading the scroll-back...
20:04 whiteinge stevednd: pong
20:04 babilen whiteinge: I also filed https://github.com/saltstack/salt/issues/14307 with some information I gathered so far. I am really not sure what else I can try
20:05 stevednd whiteinge: couple things. A) I'm experiencing some wonkiness with states run via the orchestration runner failing, but then if I run them with a regular state.sls on their own they succeed B) are there any plans to allow the orchestration runner to have more complicated dependency/require chains?
20:06 schmutz joined #salt
20:08 whiteinge Ahlee_: no idea why restarting the minion would affect that even a little. thing to try: specify the full function arg (and quote the vals for pete's sake!):  ``grains.setval application val='[foo,bar]'``
20:09 whiteinge babilen: ty for the link. looking...
20:10 tkharju3 joined #salt
20:11 stevednd whiteinge: https://gist.github.com/dnd/39e2cb6ef9161b335e64 that shows the error that's happening
20:12 babilen whiteinge: I mean I can't completely rule out that *something* might have happened to the master VM, but I can't think of anything else (short of completely nuking and reinstalling the entire box) that I can try.
20:12 salt_new_guy joined #salt
20:12 whiteinge stevednd: the orchestrate runner has access to the full collection of salt requisites, is there an addition there you think might be helpful? or what do you think is causing the failure?
20:12 whiteinge stevednd: ah
20:12 babilen whiteinge: It would also be good to know what caused this so that I can explain it to my people and we can refrain from doing it again :)
20:12 whiteinge babilen: noted :-P
20:12 bhosmer_ joined #salt
20:12 babilen But right now all I care about is that I get my minions to talk to master again :)
20:13 bmatt babilen: did anything change with the verification of LISTEN on the master?
20:13 lz-dylan Howdy, folks! Does anyone know if I can specify grain info in my cloud.profiles (instead of just cloud.providers)? My preferred method was to set EC2 tags but I can't get them to read into my grains...
20:13 whiteinge babilen: (sorry for possible dupe Q. haven't finished scroll-back yet) how did you install originally and how did you upgrade? OS-level packages?
20:13 babilen bmatt: Yes, I could telnet to the master from the minion, but I still can't test.ping
20:13 babilen whiteinge: We use the Debian packages
20:14 babilen whiteinge: I can't blame you for not reading the entire scrollback, I've been at this for a few hours now ... the issue just sort of summarises all information that I gathered so far.
20:14 whiteinge babilen: (last possible dupe Q.) any chance the older minion daemon is still running in-memory?
20:15 tkharju4 joined #salt
20:15 babilen whiteinge: I ran "pgrep salt-minion" and "pkill -f salt-minion" on the minion after "service salt-minion stop", so I would hope not.
20:15 * whiteinge nods
20:16 tkharju joined #salt
20:16 stevednd whiteinge: yes, I use the requires, but I'm looking for something deeper. For some parts of the orchestration I need some actions to occur before any other machines can continue their part of the deployment. That's easy, the 'require' in orchestration has that covered. But then after that point, I need to be able to run a bunch of states on a group of  machines where they are no longer dependent on the other machines finishing a
20:16 stevednd step before each moving on together. If I provide the salt.state sls argument an array of states they appear to run concurrently. I need them to be called sequentially
20:17 stevednd I hope I explained that well enough
20:17 whiteinge i've seen similar-sounding super-unexplainable connection issues when doing python-installed (pip/setuptools) upgrades before. wonder if parts of the old install weren't blown away correctly. i have not seen this happen with proper OS-packages though.
20:17 whiteinge babilen: reading...
20:19 babilen whiteinge: There was another user ( Heartsbane ) earlier in here who ran into the SaltReqTimeout issue too after upgrading from .4 to .5 (not sure which packages/install method), but I am not sure if it is related.
20:19 lz-dylan also, didn't see it in /who -- does anyone have chanops so's to update the 'latest' to 2014.1.6?
20:19 Theo-SLC can I not use init.sls 2 directories down? Example: "/base/openshift/broker/init.sls".  When I call salt-call openshift.broker I get error "No matching sls found for 'openshift.broker' in env 'base'"
20:20 lz-dylan Theo-SLC: I think openshift.broker maps to /base/openshift/broker.sls
20:20 whiteinge babilen: good to know -- mostly because Heartsbane owes me lunch
20:20 whiteinge or maybe I owe him...
20:20 babilen heh
20:20 Ahlee_ whiteinge: well, bouncing the master seemed to have resolved it
20:20 whiteinge O_o
20:20 Ahlee_ bad me for not trying that before
20:21 Ahlee_ wait until i type up my (continued) fun with pillar based matching
20:21 Ahlee_ anyway, back to trying to get crash to play nicely with this kernel dump
20:22 alanpearce joined #salt
20:22 Theo-SLC lz-dylan: that's confusing. thanks. I think you're right.
20:22 babilen whiteinge: But just to clarify: Purging the packages, and "rm -rf /etc/salt /var/cache/salt /var/run/salt" should get rid of everything where salt stores "state" ? That is if I perform that on both master and minions I should™ get the "pristine" state ?
20:23 whiteinge stevednd: problem with sequentially is salt sends commands to the minion asynchronously. orchestrate doesn't have a mechanism to wait for multiple returns (well it does, but you'd need a require statement for each minion you're waiting for)
20:23 whiteinge stevednd: see here for an alternate proposal: https://github.com/saltstack/salt/issues/6792#issuecomment-43361621
20:23 whiteinge babilen: yes, you're right about that
20:24 babilen okay
20:24 lz-dylan Theo-SLC: http://docs.saltstack.com/en/latest/topics/best_practices.html helps with structuring statefiles
20:24 whiteinge babilen: er, i suppose there's a little state kept in /etc/salt (grains, cached minion_id)
20:25 whiteinge ^^ pedantry
20:25 babilen whiteinge: I removed that too
20:25 babilen (see above)
20:25 whiteinge k
20:25 babilen On both the master and minion
20:25 babilen This *must* be something super-obscure, but *what* ?
20:25 kermit joined #salt
20:26 stevednd whiteinge: yeah, and making a require for each minion is obviously very unwieldy. I did that for a couple states where I only had a limited number of minions, but some of these require running on 20-30 machines at once
20:27 whiteinge babilen: that's a pretty thorough nuking :(
20:28 stevednd only other complaints currently are that there's no way to pass in pillar information to be used in the orchestration file itself(for templating purposes), and that omg the orchestration runner needs some kind of output
20:28 stevednd at least tell me each time a salt.state block finishes
20:28 vejdmn joined #salt
20:29 babilen whiteinge: I worked up to it in steps and had hoped that I could keep some state, but meh .. all that I left was /var/log/salt. Guess we'll completely nuke the machine tomorrow and reinstall from scratch (not much worth preserving on it anyway)
20:30 campee joined #salt
20:31 campee left #salt
20:31 whiteinge babilen: mind purging that salt-master package and running a few searches?
20:31 schimmy joined #salt
20:31 whiteinge (again)
20:31 babilen sure
20:32 babilen done
20:33 quantumriff I have a config file template, I only want to copy once, when the parent directory is first created.. but then, I don't want to update it again.. is that possible?
20:33 whiteinge babilen: i'm interested in the output of two commands (sorry, overly broad. can't remember where debian puts these by default):
20:33 babilen sure
20:33 quantumriff I'm only used to forcing the file to stay the same every check in..(hourly in our case)
20:33 whiteinge babilen: find /usr -iname '*salt*'
20:33 whiteinge babilen: grep -rni salt /usr
20:34 quantumriff actually, for initial provisioning of a machine.. there are a few things I am interested in only doing once.. copying database client programs, etc
20:34 schimmy1 joined #salt
20:35 whiteinge babilen: these symptoms are eerily similar to bunk pip/setuptools installs, i'm wondering if maybe there's rogue file(s) the .deb are missing
20:35 babilen I will do this on both the master and my test minion, give me a second please.
20:35 * whiteinge nods
20:36 vbabiy joined #salt
20:37 whiteinge stevednd: see https://github.com/saltstack/salt/issues/13324 for passing in pillar
20:38 quantumriff would it be best to do something like a cmd.run "salt-call single one time state" with the - unless option checking if the file exists?
20:39 whiteinge stevednd: er, https://github.com/saltstack/salt/issues/11904
20:40 lz-dylan By the way, I answered my own question, so for those that care: grains for salt-cloud can be placed in either cloud.providers or cloud.profiles. Huzzah!
20:40 babilen whiteinge: https://www.refheap.com/a7e4986ce326bcff6f64ee2c3 (nothing, but pasted it anyway)
20:40 stevednd whiteinge: that's to modify some kind of pillar data for the salt states being called, isn't it?
20:42 babilen whiteinge: So, that's not it (unless you spot something in there that I can't see)
20:42 whiteinge reading...
20:42 quantumriff I guess, if its a template file.. I could use file.blockreplace.. but I have a pretty decent size document, and the formatting is critical..
20:47 quantumriff so how would you guys go about adding a template, or base configuration file to a system, that others can add on to later, that saltstack would not revert?
20:49 babilen quantumriff: I don't know, but I would generally approach it with two files. One "system" one that sources a "local" one. Most, if not all, software supports something along those lines. I mean you could also simply file.append to that file and salt won't append again (but would create it on first run)
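Another hedged option for quantumriff's "lay the file down once, never revert local edits" case: `file.managed` with `replace: False`, which writes the file only when it does not already exist (the Oracle-ish paths below are hypothetical):

```yaml
/u01/app/oracle/network/admin/listener.ora:
  file.managed:
    - source: salt://oracle/files/listener.ora
    - template: jinja
    - replace: False    # create once; leave subsequent local edits alone
    - makedirs: True
```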
20:49 whiteinge babilen: thanks for the full paste. i don't see anything there either
20:49 whiteinge babilen: did you paste master/minion logs earlier? if so, do you have those URLs handy still?
20:50 babilen They are in the issue, but I can reinstall both packages now and re-generate them (so we start from a known state)
20:50 whiteinge wait. found 'em
20:50 beneggett joined #salt
20:51 whiteinge babilen: oh, since you have the master purged. do a quick netstat to see if the salt ports are open: netstat -luntp
20:51 quantumriff babilen: I like that idea.. but these are oracle config files.. I'm looking now.. but they are "special". :)  I will look at the append if I can't include another listener file
20:55 babilen whiteinge: They were not
20:55 * whiteinge nods
20:55 whiteinge figured it'd be a good time to check at least
20:56 poogles joined #salt
20:57 babilen whiteinge: Sure, I now reinstalled the master and minion and they behave the same. I guess that it is something with our network infrastructure and not salt's fault or something outside my control.
20:58 babilen Thanks, it's 11pm now and I'll call it a day. Let me know if you can think of anything because I can't at this moment :)
20:58 babilen *sob*
20:58 babilen Have a good afternoon/evening.
21:00 whiteinge babilen: ping me tomorrow. g'night
21:01 ndrei joined #salt
21:07 yomilk joined #salt
21:07 bmatt babilen: sorry you didn't get satisfaction; keep us updated
21:11 agliodbs joined #salt
21:11 stevednd whiteinge: what might be required to get the orchestrate runner to spit out some kind of output?
21:12 notpeter_ What's the easiest way to consistently set a hostname when creating instances with salt-cloud? Rackspace does this based on the provided instance name, but the Amazon and Linode systems seem to not seed from that value.
21:13 ajolo joined #salt
21:14 Theo-SLC joined #salt
21:14 linjan joined #salt
21:17 vbabiy joined #salt
21:18 gq45uaethdj26jw7 left #salt
21:18 badon_ joined #salt
21:20 whiteinge stevednd: incremental output may be a tad tricky. improvements could easily be made on the final output though
21:22 donnyk joined #salt
21:25 talwai joined #salt
21:25 donnyk Hi, I installed salt-master on ubuntu from a ppa, but it's missing some of the execution modules. What's the best way to install/distribute these? I could git clone https://github.com/saltstack/salt/tree/develop/salt/modules into _modules under base, but what's the recommended way to do it?
21:26 Sauvin joined #salt
21:26 donnyk in particular mysql-formula uses 'test.rand_str' which doesn't exist.
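[The usual answer to donnyk's question: drop the newer module files into a `_modules` directory under the file roots and push them out with `saltutil.sync_modules`. A sketch, assuming the default `/srv/salt` base file roots and a git checkout of salt to copy from.]

```shell
# Copy the newer execution module into _modules under the base file
# roots (default /srv/salt; adjust if yours differ), then sync it out.
mkdir -p /srv/salt/_modules
cp salt/modules/test.py /srv/salt/_modules/   # e.g. from a develop checkout
salt '*' saltutil.sync_modules                # distribute to all minions
salt '*' test.rand_str 32                     # check the new function loads
```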
21:28 rogst joined #salt
21:28 mattikus joined #salt
21:30 beneggett is there a way to see which formulas are running in real time when running state.highstate?
21:31 beneggett like a tail?  --verbose doesn't say a whole lot
21:31 lz-dylan beneggett: you can try it from the minion using salt-call
21:31 Eureka beneggett: I don't know a really easy way.. But you could run the master in -l debug mode and get a more-or-less realtime readout
21:32 stevednd whiteinge: the runner doesn't know each time a salt.state completes before it moves on to the next one?
21:33 schimmy joined #salt
21:33 bhosmer_ joined #salt
21:33 Theo-SLC joined #salt
21:35 schimmy2 joined #salt
21:35 beneggett lz-dylan: yeah, that will help me debug in one off cases, thanks!
21:35 pdayton joined #salt
21:36 beneggett Eureka: yeah, debug mode just doesn't say a whole lot (other than its executing)
21:36 Eureka beneggett: yeah =/
21:37 beneggett lz-dylan's method works quite well from the minion, strange we can't do that from master though
21:37 beneggett just using salt-call state.highstate
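[lz-dylan's workaround, spelled out: run the highstate from the minion itself, where salt-call streams per-state progress that the master-side `salt` command does not.]

```shell
# Run on the minion. -l info gives readable per-state progress;
# -l debug is much noisier but shows each rendering/execution step.
salt-call -l info state.highstate
```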
21:40 Eureka It would also be really nice if the master would log out ONLY errors to a file for us to more easily read.
21:40 Eureka it can be hard to find your 1 failed state in over 100 states
21:49 kivihtin joined #salt
21:54 thejeff joined #salt
21:55 bmatt Eureka: at that point, it's time to move to something like a redis returner
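[Short of a returner, part of Eureka's errors-only wish is available from existing logging config: salt lets the logfile threshold differ from the console threshold. A sketch of the relevant master (or minion) config keys; the log path is a made-up example.]

```yaml
# Keep the console at its usual level, but have the logfile collect
# only errors, so a single failed state is easy to find.
log_level: warning
log_level_logfile: error
log_file: /var/log/salt/errors.log
```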
21:57 thejeff Is there an elegant way to handle restarting a minion partway through provisioning and have it complete automatically when it comes back up?
21:57 thejeff I'm running into issues where I need to set global environment variables.
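[One common approach to thejeff's reboot-mid-provisioning problem (a sketch, not the only option): set `startup_states` in the minion config, so the minion re-runs the highstate whenever its service starts. A state that reboots the box then gets picked up by a fresh highstate on boot; states that already applied are no-ops.]

```yaml
# /etc/salt/minion (or a minion.d/ drop-in):
# run state.highstate automatically when the minion starts
startup_states: highstate
```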
22:05 beneggett joined #salt
22:08 agliodbs joined #salt
22:23 jalbretsen hmmmmm
22:23 jalbretsen it appears that salt.states.file.exists does not apply to symlinks...
22:25 arthabaska joined #salt
22:25 DaveQB joined #salt
22:27 arthabaska joined #salt
22:29 lz-dylan Is there a clean way to get salt-cloud ec2 to attach to a public IP? All the instances I'm spinning up are private IP only, and I'd like for them to be publicly accessible. No VPC involved.
22:30 lz-dylan Hm. Revise that. Is there a way to get the public IP that my instances apparently don't know to show up in my grains? :P
22:30 lz-dylan Or at least attach to a known EIP?
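[The salt-cloud ec2 driver's `ssh_interface` option is the closest knob here; a hedged sketch, with the provider name as a placeholder. Note it only controls which address salt-cloud connects to — it does not by itself allocate a public IP or EIP.]

```yaml
# /etc/salt/cloud.providers.d/ec2.conf (provider name is an example)
my-ec2-provider:
  provider: ec2              # "driver:" in later salt-cloud releases
  ssh_interface: public_ips  # or private_ips
  # An instance can discover its own public IP from the EC2 metadata
  # service, e.g. for a custom grain:
  #   curl http://169.254.169.254/latest/meta-data/public-ipv4
```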
22:32 ndrei joined #salt
22:32 higgs001 joined #salt
22:36 robawt hey cool kids!
22:36 robawt anyone generate pillar from a text file?
22:38 rojem joined #salt
22:43 btorch is there something special that we need to do to pass awk to cmd.run :) ?
22:43 jalbretsen oh wait, I'm just dumb, file.exists does work with symlinks
22:43 lz-dylan btorch: you may need to quote it (try single-quotes)
22:44 btorch tried it
22:44 lz-dylan btorch: what's your line look like?
22:45 btorch cmd.run  "df -kh | egrep 'srv/node/c[1,2]' | awk '{print $2}' | grep -v '2.8T'"
22:46 smcquay joined #salt
22:48 Brian joined #salt
22:48 btorch ahh needed \$
22:48 lz-dylan btorch: I have this weird and funny feeling that you can't use pipes in cmd.run. Can anyone verify? Also, I suspect your $2 is being evaluated by your shell before being passed.
22:48 lz-dylan ....haha, okay so that second part :)
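[The fix btorch landed on, written out. When the command sits inside double quotes on the master's shell, `$2` must be escaped so the local shell does not expand it before salt sees it; pipes themselves are fine, since cmd.run runs strings containing shell syntax through a shell on the minion. The `minion-id` target is a placeholder.]

```shell
# \$2 survives the local shell and reaches awk on the minion intact
salt 'minion-id' cmd.run "df -kh | egrep 'srv/node/c[1,2]' | awk '{print \$2}' | grep -v '2.8T'"
```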
22:50 Sypher joined #salt
22:50 simmel joined #salt
22:50 techdragon joined #salt
22:50 talwai joined #salt
22:51 salt_new_guy joined #salt
22:51 wpot joined #salt
22:51 brandon__ joined #salt
22:51 gzcwnk joined #salt
22:51 d_d_d joined #salt
22:53 kivihtin joined #salt
22:53 badon joined #salt
22:53 bluehawk joined #salt
22:53 Ryan_Lane joined #salt
22:53 taterbase joined #salt
22:53 archrs joined #salt
22:53 ghartz joined #salt
22:53 housl joined #salt
22:53 acabrera joined #salt
22:53 Hell_Fire joined #salt
22:53 alainv joined #salt
22:53 xzarth_ joined #salt
22:53 utahcon_ joined #salt
22:53 dimeshake joined #salt
22:53 scoates joined #salt
22:53 baoboa joined #salt
22:53 torrancew joined #salt
22:53 workingcats joined #salt
22:53 LordOfLA|Broken joined #salt
22:53 xsteadfastx joined #salt
22:53 rcsheets joined #salt
22:53 tempspace joined #salt
22:53 Linuturk joined #salt
22:53 zartoosh joined #salt
22:53 borgstrom joined #salt
22:53 quickdry21_ joined #salt
22:53 d3vz3r0 joined #salt
22:53 canci joined #salt
22:53 BbT0n joined #salt
22:53 mikkn joined #salt
22:53 koyd joined #salt
22:53 scooby2 joined #salt
22:53 gfa joined #salt
22:53 quickdry21 joined #salt
22:53 tmmt_ joined #salt
22:53 geekmush joined #salt
22:53 monokrome joined #salt
22:53 sashka_u1 joined #salt
22:53 pressureman joined #salt
22:53 codysoyland joined #salt
22:53 AlcariTheMad joined #salt
22:53 luminous joined #salt
22:53 NV joined #salt
22:53 _ale_ joined #salt
22:53 whitepaws joined #salt
22:53 UForgotten joined #salt
22:53 faulkner joined #salt
22:53 rmnuvg joined #salt
22:53 dcolish joined #salt
22:53 nlb joined #salt
22:53 goodwill joined #salt
22:53 austin987 joined #salt
22:53 schimmy2 joined #salt
22:53 Theo-SLC joined #salt
22:53 ajolo joined #salt
22:53 thayne joined #salt
22:53 troyready joined #salt
22:53 repl1cant joined #salt
22:53 KyleG joined #salt
22:53 ajprog_laptop1 joined #salt
22:53 JPaul joined #salt
22:53 ksalman_ joined #salt
22:53 rushmore joined #salt
22:53 mugsie joined #salt
22:53 etw joined #salt
22:53 davidone joined #salt
22:53 jerrcs joined #salt
22:53 uzomg joined #salt
22:53 chitown__ joined #salt
22:53 ede joined #salt
22:53 MaZ- joined #salt
22:53 Schmidt joined #salt
22:53 mephx joined #salt
22:53 retr0h joined #salt
22:53 Sacro joined #salt
22:53 hoodow joined #salt
22:53 mr_keke joined #salt
22:53 espen_ joined #salt
22:53 icebourg joined #salt
22:53 Ssquidly joined #salt
22:53 jpcw joined #salt
22:53 cods joined #salt
22:53 drogoh joined #salt
22:53 rockey joined #salt
22:53 Ahlee_ joined #salt
22:53 wigit joined #salt
22:53 badon joined #salt
22:54 monokrome joined #salt
22:54 majoh joined #salt
22:54 ThomasJ|d joined #salt
22:54 ldlework joined #salt
22:54 xintron joined #salt
22:54 totte joined #salt
22:54 svs joined #salt
22:54 mpoole joined #salt
22:54 kamal_ joined #salt
22:54 mschiff joined #salt
22:54 oc joined #salt
22:57 lahwran joined #salt
22:57 mortis__ joined #salt
22:57 smkelly joined #salt
22:57 jchen joined #salt
22:57 bonezed joined #salt
22:57 simonmcc joined #salt
22:57 nahamu joined #salt
22:57 akoumjian joined #salt
22:57 thunderbolt joined #salt
22:57 dotplus joined #salt
22:57 tcotav joined #salt
22:57 twoflowers joined #salt
22:57 herlo joined #salt
22:57 huleboer joined #salt
22:57 gwmngilfen joined #salt
22:57 nickg joined #salt
22:57 Emantor joined #salt
22:57 andrej joined #salt
22:57 supplicant joined #salt
22:57 manfred joined #salt
22:57 honestly joined #salt
22:57 txmoose joined #salt
22:57 Striki joined #salt
22:57 peno joined #salt
22:57 agronholm joined #salt
22:57 flebel joined #salt
22:57 scristian joined #salt
22:57 anotherZero joined #salt
22:57 whytewolf joined #salt
22:57 Whissi joined #salt
22:57 crashmag joined #salt
22:57 TheRealBill joined #salt
22:57 Doqnach joined #salt
22:57 kevinbrolly joined #salt
22:57 jab416171 joined #salt
22:57 toddejohnson joined #salt
22:57 maboum_ joined #salt
22:57 m1crofarmer joined #salt
22:57 perfectsine joined #salt
22:57 bmatt joined #salt
22:57 kuffs joined #salt
22:57 lynxman joined #salt
22:57 Vye_ joined #salt
22:57 jalaziz_ joined #salt
22:57 jpaetzel joined #salt
22:57 matthew-parlette joined #salt
22:57 oc joined #salt
22:57 mschiff joined #salt
22:57 kamal_ joined #salt
22:57 mpoole joined #salt
22:57 svs joined #salt
22:57 totte joined #salt
22:57 xintron joined #salt
22:57 ldlework joined #salt
22:57 ThomasJ|d joined #salt
22:57 majoh joined #salt
22:57 monokrome joined #salt
22:57 badon joined #salt
22:57 wigit joined #salt
22:57 Ahlee_ joined #salt
22:57 rockey joined #salt
22:57 drogoh joined #salt
22:57 cods joined #salt
22:57 jpcw joined #salt
22:57 Ssquidly joined #salt
22:57 icebourg joined #salt
22:57 espen_ joined #salt
22:57 mr_keke joined #salt
22:57 hoodow joined #salt
22:57 Sacro joined #salt
22:57 retr0h joined #salt
22:57 mephx joined #salt
22:57 Schmidt joined #salt
22:57 MaZ- joined #salt
22:57 ede joined #salt
22:57 chitown__ joined #salt
22:57 uzomg joined #salt
22:57 davidone joined #salt
22:57 etw joined #salt
22:57 mugsie joined #salt
22:57 rushmore joined #salt
22:57 ksalman_ joined #salt
22:57 JPaul joined #salt
22:57 ajprog_laptop1 joined #salt
22:57 KyleG joined #salt
22:57 repl1cant joined #salt
22:57 troyready joined #salt
22:57 thayne joined #salt
22:57 ajolo joined #salt
22:57 Theo-SLC joined #salt
22:57 schimmy2 joined #salt
22:57 austin987 joined #salt
22:57 goodwill joined #salt
22:57 nlb joined #salt
22:57 dcolish joined #salt
22:57 rmnuvg joined #salt
22:57 faulkner joined #salt
22:57 UForgotten joined #salt
22:57 whitepaws joined #salt
22:57 _ale_ joined #salt
22:57 NV joined #salt
22:57 luminous joined #salt
22:57 AlcariTheMad joined #salt
22:57 codysoyland joined #salt
22:57 pressureman joined #salt
22:57 sashka_u1 joined #salt
22:57 geekmush joined #salt
22:57 tmmt_ joined #salt
22:57 quickdry21 joined #salt
22:57 gfa joined #salt
22:57 scooby2 joined #salt
22:57 koyd joined #salt
22:57 mikkn joined #salt
22:57 BbT0n joined #salt
22:57 canci joined #salt
22:57 d3vz3r0 joined #salt
22:57 quickdry21_ joined #salt
22:57 borgstrom joined #salt
22:57 zartoosh joined #salt
22:57 Linuturk joined #salt
22:57 tempspace joined #salt
22:57 rcsheets joined #salt
22:57 xsteadfastx joined #salt
22:57 LordOfLA|Broken joined #salt
22:57 workingcats joined #salt
22:57 torrancew joined #salt
22:57 baoboa joined #salt
22:57 scoates joined #salt
22:57 dimeshake joined #salt
22:57 utahcon_ joined #salt
22:57 xzarth_ joined #salt
22:57 alainv joined #salt
22:57 Hell_Fire joined #salt
22:57 acabrera joined #salt
22:57 housl joined #salt
22:57 ghartz joined #salt
22:57 archrs joined #salt
22:57 taterbase joined #salt
22:57 Ryan_Lane joined #salt
22:57 bluehawk joined #salt
22:57 kivihtin joined #salt
22:57 d_d_d joined #salt
22:57 gzcwnk joined #salt
22:57 brandon__ joined #salt
22:57 wpot joined #salt
22:57 salt_new_guy joined #salt
22:57 talwai joined #salt
22:57 techdragon joined #salt
22:57 simmel joined #salt
22:57 Sypher joined #salt
22:57 smcquay joined #salt
22:57 rojem joined #salt
22:57 higgs001 joined #salt
22:57 ndrei joined #salt
22:57 arthabaska joined #salt
22:57 DaveQB joined #salt
22:57 agliodbs joined #salt
22:57 beneggett joined #salt
22:57 pdayton joined #salt
22:57 mattikus joined #salt
22:57 rogst joined #salt
22:57 Sauvin joined #salt
22:57 kermit joined #salt
22:57 schmutz joined #salt
22:57 che-arne joined #salt
22:57 dlam joined #salt
22:57 kballou joined #salt
22:57 toddnni joined #salt
22:57 ckao joined #salt
22:57 n8n joined #salt
22:57 mgw joined #salt
22:57 VictorLin joined #salt
22:57 Damoun joined #salt
22:57 aw110f joined #salt
22:57 davet joined #salt
22:57 scarcry joined #salt
22:57 rallytime joined #salt
22:57 joehillen joined #salt
22:57 patrek joined #salt
22:57 ccase joined #salt
22:57 tligda joined #salt
22:57 diegows joined #salt
22:57 wendall911 joined #salt
22:57 kiorky joined #salt
22:57 conan_the_destro joined #salt
22:57 oncallsucks joined #salt
22:57 dober joined #salt
22:57 harkx joined #salt
22:57 djaykay joined #salt
22:57 Ymage joined #salt
22:57 nkuttler joined #salt
22:57 jalbretsen joined #salt
22:57 rawzone joined #salt
22:57 \ask joined #salt
22:57 cofeineSunshine joined #salt
22:57 miqui joined #salt
22:57 mechanicalduck joined #salt
22:57 jrdx joined #salt
22:57 logix812 joined #salt
22:57 TyrfingMjolnir joined #salt
22:57 N-Mi joined #salt
22:57 Kenzor joined #salt
22:57 nocturn joined #salt
22:57 hardwire joined #salt
22:57 blast_hardcheese joined #salt
22:57 basepi joined #salt
22:57 msciciel_ joined #salt
22:57 cwyse joined #salt
22:57 Hydrosine joined #salt
22:57 JoeHazzers joined #salt
22:57 Ixan joined #salt
22:57 gmoro joined #salt
22:57 thehaven_ joined #salt
22:57 cyrusdav- joined #salt
22:57 erjohnso_ joined #salt
22:57 ahale_ joined #salt
22:57 cruatta_ joined #salt
22:57 cwright joined #salt
22:57 rjc joined #salt
22:57 davromaniak joined #salt
22:57 shano_ joined #salt
22:57 zemm_ joined #salt
22:57 Phibs joined #salt
22:57 mortis_ joined #salt
22:57 hvn_ joined #salt
22:57 rnts_ joined #salt
22:57 Flusher joined #salt
22:57 clone1018_ joined #salt
22:57 jeremyBass2 joined #salt
22:57 octagonal joined #salt
22:57 catpigger joined #salt
22:57 dccc joined #salt
22:57 sectionme joined #salt
22:57 retrospek joined #salt
22:57 fxdgear joined #salt
22:57 Hollinski joined #salt
22:57 gothix_ joined #salt
22:57 Eliz joined #salt
22:57 Bryanstein joined #salt
22:57 skullone joined #salt
22:57 penguin_dan joined #salt
22:57 berto- joined #salt
22:57 robertkeizer joined #salt
22:57 claytron joined #salt
22:57 t0rrant joined #salt
22:57 babilen joined #salt
22:57 Micromus_ joined #salt
22:57 synical joined #salt
22:57 darrend joined #salt
22:57 gecos joined #salt
22:57 notpeter_ joined #salt
22:57 amontalban joined #salt
22:57 ashb joined #salt
22:57 oeuftete joined #salt
22:57 octarine joined #salt
22:57 patarr joined #salt
22:57 Jahkeup joined #salt
22:57 cbaesema joined #salt
22:57 analogbyte joined #salt
22:57 InAnimaTe joined #salt
22:57 micko joined #salt
22:57 jeblair joined #salt
22:57 ifmw joined #salt
22:57 nicksloan joined #salt
22:57 johngrasty joined #salt
22:57 jY joined #salt
22:57 bretep joined #salt
22:57 SaveTheRbtz joined #salt
22:57 ahammond joined #salt
22:57 seb` joined #salt
22:57 jacksontj joined #salt
22:57 Nazca__ joined #salt
22:57 MTecknology joined #salt
22:57 jforest joined #salt
22:57 Corey joined #salt
22:57 kossy joined #salt
22:57 EWDurbin joined #salt
22:57 jeffrubic joined #salt
22:57 sverrest joined #salt
22:57 twinshadow joined #salt
22:57 sifusam joined #salt
22:57 carmony joined #salt
22:57 bitmand joined #salt
22:57 Heggan joined #salt
22:57 emostar joined #salt
22:57 trevorjay joined #salt
22:57 lude joined #salt
22:57 delkins_ joined #salt
22:57 jamesog joined #salt
22:57 bensons joined #salt
22:57 __number5__ joined #salt
22:57 marcinkuzminski joined #salt
22:57 seventy3_away joined #salt
22:57 pjs joined #salt
22:57 dcmorton joined #salt
22:57 SachaLigthert joined #salt
22:57 veb joined #salt
22:57 Shish joined #salt
22:57 pviktori joined #salt
22:57 iMil joined #salt
22:57 AnswerGu1 joined #salt
22:57 rigor789 joined #salt
22:57 philipsd6 joined #salt
22:57 andredieb joined #salt
22:57 eculver joined #salt
22:57 Zuru joined #salt
22:57 andabata joined #salt
22:57 lionel joined #salt
22:57 tedski joined #salt
22:57 ntropy joined #salt
22:57 dzen joined #salt
22:57 Heartsbane joined #salt
22:57 aarontc joined #salt
22:57 roo9 joined #salt
22:57 viq joined #salt
22:57 seblu joined #salt
22:57 ronc_ joined #salt
22:57 Voziv joined #salt
22:57 renoirb joined #salt
22:57 ixokai joined #salt
22:57 Deevolution joined #salt
22:57 jayne joined #salt
22:57 hillna_ joined #salt
22:57 jesusaurus joined #salt
22:57 balltongu joined #salt
22:57 Gareth joined #salt
22:57 codekoala joined #salt
22:57 Eugene joined #salt
22:57 georgemarshall joined #salt
22:57 [vaelen] joined #salt
22:57 madduck joined #salt
22:57 maber_ joined #salt
22:57 CaptTofu_ joined #salt
22:57 Karunamon joined #salt
22:57 Blacklite joined #salt
22:57 xt joined #salt
22:57 arapaho joined #salt
22:57 cb joined #salt
22:57 chutzpah joined #salt
22:57 Nazzy joined #salt
22:57 Sway joined #salt
22:57 neilf__ joined #salt
22:57 goki joined #salt
22:57 ifur joined #salt
22:57 londo joined #salt
22:57 FL1SK joined #salt
22:57 pmcg joined #salt
22:57 juice joined #salt
22:57 nliadm joined #salt
22:57 zsoftich1 joined #salt
22:57 munhitsu_ joined #salt
22:57 xenoxaos joined #salt
22:57 masterkorp joined #salt
22:57 bernieke joined #salt
22:57 ikanobori joined #salt
22:57 jmccree joined #salt
22:57 jasonrm joined #salt
22:57 codekobe_ joined #salt
22:57 bezaban joined #salt
22:57 mfournier joined #salt
22:57 jcristau joined #salt
22:57 eightyeight joined #salt
22:57 logandg joined #salt
22:57 djinni` joined #salt
22:57 meganerd joined #salt
22:57 CyanB joined #salt
22:57 jperras joined #salt
22:57 blackjid joined #salt
22:57 esogas joined #salt
22:57 nyov joined #salt
22:57 nhubbard joined #salt
22:57 dwfreed joined #salt
22:57 lazybear joined #salt
22:57 pwiebe_ joined #salt
22:57 fivethreeo joined #salt
22:57 kaictl joined #salt
22:57 Kalinakov joined #salt
22:57 arnoldB joined #salt
22:57 iggy joined #salt
22:57 v0rtex joined #salt
22:57 bigl0af joined #salt
22:57 zirpu joined #salt
22:57 aberdine joined #salt
22:57 Dattas joined #salt
22:57 sirtaj joined #salt
22:57 kalessin joined #salt
22:57 btorch joined #salt
22:57 brewmaster joined #salt
22:57 timoguin joined #salt
22:57 steveoliver joined #salt
22:57 mirko joined #salt
22:57 stotch joined #salt
22:57 cedwards joined #salt
22:57 mihait joined #salt
22:57 terinjokes joined #salt
22:57 lyddonb_ joined #salt
22:57 whiteinge joined #salt
22:57 grep_away joined #salt
22:57 beebeeep joined #salt
22:57 abele joined #salt
22:57 GnuLxUsr joined #salt
22:57 EntropyWorks joined #salt
22:57 jcockhren joined #salt
22:57 Xiao joined #salt
22:57 Daviey joined #salt
22:57 rhand joined #salt
22:57 kwmiebach_ joined #salt
22:57 modafinil_ joined #salt
22:57 wiqd joined #salt
22:57 gadams joined #salt
22:57 individuwill joined #salt
22:57 robawt joined #salt
22:57 dancat joined #salt
22:57 pfallenop joined #salt
22:57 twiedenbein joined #salt
22:57 akitada joined #salt
22:57 drags joined #salt
22:57 tru_tru joined #salt
22:57 joehh joined #salt
22:57 gamingrobot joined #salt
22:57 vlcn joined #salt
22:57 freelock joined #salt
22:57 fejjerai joined #salt
22:57 lz-dylan joined #salt
22:57 svx joined #salt
22:57 dean joined #salt
22:57 Dinde joined #salt
22:57 TaiSHi joined #salt
22:57 lipiec joined #salt
22:57 Fa1lure joined #salt
22:57 zach joined #salt
22:57 eliasp joined #salt
22:57 dstokes joined #salt
22:57 funzo joined #salt
22:57 Twiglet joined #salt
22:57 nebuchadnezzar joined #salt
22:57 FarrisG joined #salt
22:57 Hipikat_ joined #salt
22:57 Lloyd_ joined #salt
22:57 DamonNL joined #salt
22:57 fxhp joined #salt
22:57 yano joined #salt
22:57 moos3 joined #salt
22:57 rofl____ joined #salt
22:57 z3uS joined #salt
22:57 eofs joined #salt
22:57 devx joined #salt
22:57 anteaya joined #salt
22:57 __alex joined #salt
22:57 mackstick joined #salt
22:57 redondos joined #salt
22:57 Spark joined #salt
22:57 n0arch joined #salt
22:57 phx joined #salt
22:57 [M7] joined #salt
22:57 chamunks joined #salt
22:57 mariusv joined #salt
22:57 robinsmidsrod joined #salt
22:57 eclectic joined #salt
22:57 wm-bot4 joined #salt
22:57 alekibango joined #salt
22:57 Kraln joined #salt
22:57 vandemar joined #salt
22:57 sindreij joined #salt
22:57 ze- joined #salt
22:57 hotbox joined #salt
22:57 yetAnotherZero joined #salt
22:58 rblackwe joined #salt
22:58 rogst_ joined #salt
22:58 davroman1ak joined #salt
22:58 erjohnso joined #salt
22:58 nkuttler_ joined #salt
22:59 hvn joined #salt
23:00 jforest joined #salt
23:00 jerrcs joined #salt
23:00 MTecknology joined #salt
23:03 oz_akan joined #salt
23:03 erjohnso joined #salt
23:10 higgs001 joined #salt
23:13 yomilk joined #salt
23:13 oz_akan_ joined #salt
23:15 blast_hardcheese joined #salt
23:20 schimmy joined #salt
23:21 higgs001_ joined #salt
23:21 meganerd joined #salt
23:24 mosen joined #salt
23:28 bhosmer joined #salt
23:30 meganerd joined #salt
23:36 innkeeper joined #salt
23:36 innkeeper left #salt
23:38 Outlander joined #salt
23:43 smcquay joined #salt
23:44 Eureka joined #salt
23:45 smcquay joined #salt
23:45 yomilk joined #salt
23:47 ajolo joined #salt
23:48 mgw joined #salt
