
IRC log for #salt, 2013-11-22


All times shown according to UTC.

Time Nick Message
00:00 zz_Cidan but worth every penny imo
00:00 * EugeneKay is a NetApp instructor as a dayjob
00:01 EugeneKay And don't forget how awesome the NAS side is ;-)
00:01 zz_Cidan the NAS side is so rock solid
00:01 zz_Cidan like, soooo rock solid
00:01 torrancew I always found the NAS side way better than the SAN side
00:01 zz_Cidan even when failing over to another filer
00:01 EugeneKay Upgraded to Cluster yet? ;-)
00:01 torrancew (fair disclaimer, I hate most SAN solutions)
00:02 EugeneKay It's like fukken magic
00:02 zz_Cidan EugeneKay, I don't use NetApp anymore, I moved 3000 miles away and now work at a startup, lol
00:02 zz_Cidan but we're going to build out hardware soon, and you bet your ass we're going to use netapp
00:02 EugeneKay Poor thing. Good time to get the rep in
00:02 berto- joined #salt
00:02 forrest zz_Cidan, I realized that amazon wasn't for me when I got a 100 dollar gift card last year at PyCon to test it out, and two of their small instances burned that in a month
00:03 zz_Cidan ugh, :(
00:03 zz_Cidan I'm trying to convince everyone that DO is the way to go
00:03 forrest Yea, I was not happy that I actually owed money on top of the credit they provided
00:03 EugeneKay Amazon is for butt scale projects, where cost isn't an issue - variability is.
00:03 forrest Did you show them the API?
00:03 zz_Cidan I haven't yet
00:03 EugeneKay You need to go from 0 to 100 to 0 in a day
00:03 forrest EugeneKay, yea I know, but I didn't expect two small instances running almost nothing, and using very little bandwidth to destroy like that
00:03 zz_Cidan our big problem is S3
00:04 zz_Cidan We have 350 million s3 objects
00:04 EugeneKay For long-term stuff(Reserved Instances) it isn't /too/ bad, but still.
00:04 zz_Cidan 17 TB or so
00:04 EugeneKay Transfer is the real killer - 10c/GB = gtfo
00:04 forrest yea don't go to DO then zz_Cidan, I want it to stay nice for my pet projects :P
00:04 zz_Cidan lol
00:04 zz_Cidan :D
00:04 forrest yea what is with the bandwidth costs?
00:04 forrest bandwidth costs them almost nothing
00:05 EugeneKay Who knows
00:05 EugeneKay If they took a zero off the price I would use them more
00:05 forrest lol
00:05 EugeneKay Their big database stuff is cool and works GOOD(a helluvalot better than building your own cluster)
00:05 EugeneKay But it's $$$ to get any data out of there
00:05 EugeneKay Glacier, same problem.
00:16 s0undt3ch kiorky: same TZ? FR right?
00:16 forrest s0undt3ch, man of all hours
00:17 s0undt3ch forrest: I work in shifts ;)
00:17 forrest are you an insomniac? lol
00:21 s0undt3ch forrest: not yet
00:22 forrest ok that's good.
00:22 s0undt3ch ;)
00:23 dangerousbeans joined #salt
00:23 dangerousbeans left #salt
00:24 jslatts joined #salt
00:24 berto- joined #salt
00:25 joehh pniederw: you are correct, package updates have the fix
00:26 jslatts joined #salt
00:39 pniederw joehh: can I get the fix via salt bootstrap too? is there a branch/tag for the fix?
00:40 pniederw I'm a bit surprised that the packages are based on something other than a new release (say 0.17.3 or 0.17.2.1)
00:44 forrest pniederw, because it takes time to package them up and get them in the repos, most of the packaging is done by volunteers.
00:45 pniederw ok, so how would I get this with salt-bootstrap? which tag or commit are the updates based on?
00:49 forrest the bootstrap right now as far as I'm aware only supports tags and the develop branch from git, not specific commits.
00:49 forrest so usually people will install the latest, then manually make changes to the files they need :\
00:49 forrest not the optimal solution
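A hedged sketch of the tag/branch installs forrest mentions, using salt-bootstrap's git install mode (the URL and tag shown are illustrative):

```bash
# install a specific tagged release
curl -L https://bootstrap.saltstack.com -o bootstrap-salt.sh
sudo sh bootstrap-salt.sh git v0.17.2

# or track the develop branch
sudo sh bootstrap-salt.sh git develop
```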
00:49 pniederw you have brave users
00:50 NotreDev joined #salt
00:53 zandy joined #salt
00:54 luketheduke joined #salt
01:00 pipps joined #salt
01:00 zandy joined #salt
01:05 redondos joined #salt
01:08 Gareth Anyone think of a module off hand where you pass multiple values using the same flag?
01:09 ajw0100 joined #salt
01:09 joehh pniederw: It is on the 0.17 branch - it has not been tagged yet as there has not been an official complete release
01:10 joehh I believe there are other bugs that they are wishing/trying to fix for 0.17.3
01:11 blee joined #salt
01:12 pipps_ joined #salt
01:14 cachedout joined #salt
01:14 pniederw ok, thanks for the info
01:15 ajw0100 joined #salt
01:18 thelorax123 joined #salt
01:19 jkleckner joined #salt
01:23 JulianGindi joined #salt
01:24 NotreDev joined #salt
01:24 bhosmer joined #salt
01:25 ajw0100 joined #salt
01:27 xl1 joined #salt
01:30 ddv joined #salt
01:30 zandy joined #salt
01:35 pniederw am I right in that salt-ssh can't handle pillar data that gets applied to a specific minion id rather than to '*' in pillar top.sls? I was hoping it would just match the minion id in top.sls to the salt-ssh `host`, but it doesn't seem to work.
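For context, a sketch of the kind of pillar top.sls matching pniederw describes; whether salt-ssh matches its roster host id against minion-id targets like this was exactly the open question (ids and file names illustrative):

```yaml
# /srv/pillar/top.sls
base:
  '*':
    - common      # matched everywhere, works with salt-ssh
  'web01':
    - webserver   # minion-id match; the case that did not seem to work over salt-ssh
```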
01:41 deepakmd_oc joined #salt
01:43 stefanmonkey joined #salt
01:44 pygmael joined #salt
01:50 jalbretsen joined #salt
01:55 NotreDev joined #salt
02:09 mpanetta joined #salt
02:09 redondos joined #salt
02:10 oz_akan_ joined #salt
02:11 Rager joined #salt
02:11 Rager howdy
02:12 jalbretsen joined #salt
02:12 forrest hi
02:12 forrest pniederw, I don't have an answer to your question, I thought pillar data was supposed to work the same
02:12 Rager how does salt handle the management of directories?
02:12 forrest Rager, in what sense?
02:12 Rager like... do I update the directories that are managed on the master, then tell the master to notify the slaves?
02:12 Rager or do the clients poll the master for marching orders?
02:13 forrest Rager, http://docs.saltstack.com/ref/states/all/salt.states.file.html#salt.states.file.directory
02:13 forrest they can poll if you set it up to do so, you can also run an update for all your machines from the master whenever you want.
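The linked file.directory state looks roughly like this (path, ownership, and mode are illustrative):

```yaml
/srv/app/releases:
  file.directory:
    - user: deploy
    - group: deploy
    - mode: 755
    - makedirs: True
```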
02:14 sgviking joined #salt
02:18 NV forrest: I thought you couldn't match on grains, but otherwise was the same?
02:18 NV or was that for state targeting only?
02:20 berto- joined #salt
02:23 Rager thanks
02:23 Rager can I have a command depend on the directory structure being there?
02:23 Rager I'd like to execute a "bundle update" from the root of my rails app
02:24 jY anyone know any repos with examples of integration tests?
02:25 Rager eventually... these questions will seem pretty silly to me
02:26 forrest Rager, you could use a require
02:26 forrest or just order things properly, by default salt now runs in the order of the state file
02:26 Rager ah, k
02:26 Rager wasn't sure if it tried to parallelize tasks or something
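A sketch of the require approach forrest suggests for Rager's case: make the bundle command depend on the directory state, so it only runs once the tree exists (paths illustrative):

```yaml
/srv/railsapp:
  file.directory:
    - makedirs: True

bundle-update:
  cmd.run:
    - name: bundle update
    - cwd: /srv/railsapp
    - require:
      - file: /srv/railsapp
```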
02:26 forrest NV, uhh I don't remember
02:27 forrest NV, there's no docs on that anywhere that say it can't?
02:29 redondos joined #salt
02:29 redondos joined #salt
02:29 NotreDev joined #salt
02:29 NV http://docs.saltstack.com/topics/ssh/#targeting-with-salt-ssh found what I was thinking of
02:29 forrest NV ahh ok
02:38 VertigoRay Hey, where does a salt minion keep all its pillar files?  I assumed it was all pulled and placed in /var/cache/salt/minion, but I've cleared that out and the salt minion is still trying to pull old files.
02:38 forrest VertigoRay, with a salt-call?
02:39 VertigoRay forrest, yes
02:39 forrest do salt-call --local saltutil.refresh_pillar
02:39 VertigoRay forrest, also with `salt 'testbox' state.highstate
02:40 bemehow_ joined #salt
02:40 redondos joined #salt
02:40 forrest well is the pillar data getting pulled from the salt master?
02:41 vkurup joined #salt
02:41 VertigoRay forrest, no, it should be though, right? Using `salt-call --log-level=debug state.highstate`
02:45 forrest salt-call only runs with data ON the minion I believe
02:45 VertigoRay forrest, loads the minion key and debugs logs "Decrypting the current master AES key"
02:45 VertigoRay forrest, using salt-call
02:45 forrest hmm, what happened when you refreshed the pillar data?
02:45 VertigoRay forrest, either way, same result when using `salt 'asdf' state.highstate` or `saltutil.sync_all` then highstate
02:45 Thiggy joined #salt
02:45 forrest from the master?
02:45 forrest did you restart the minion after you trashed the /var/cache/salt/minion dir?
02:45 VertigoRay forrest, from the master, saltutil.refresh_pillar returns "computer: None
02:45 VertigoRay forrest, yes ... restarting again for good measure
02:47 forrest is it only this one minion VertigoRay, or are all minions affected this way?
02:47 VertigoRay forrest, all of them on this master.
02:48 vkurup joined #salt
02:48 VertigoRay forrest, I should note that I'm using git_pillar with one branch.
02:48 forrest ok, so as lame as it is, try clearing the cache on the master maybe?
02:48 forrest then restart the master service?
02:48 forrest it should automatically be updating..
02:51 mannyt joined #salt
02:51 xl1 Has anyone else noticed that state.highstate test=True occasionally reports a spurious binary file change for file.managed?
02:51 vkurup joined #salt
02:51 Rager this is odd - I've got this state here that runs (except the last command) successfully, according to salt
02:51 Rager http://hastebin.com/natonoquya.sm
02:51 Rager but I don't actually have ruby-dev installed when I check on the client
02:51 VertigoRay forrest, just did `rm -rf /var/cache/salt/master && /etc/init.d/salt-master restart` then saltutil.refresh_pillar on my target (returned "hostname: None") then state.highstate returns same error
02:51 Rager master and client are both x64 debian servers on 7.x
02:52 forrest VertigoRay, I guess I'm confused now, are you getting a hostname none error? Or bad pillar data.
02:52 lineman60 joined #salt
02:52 forrest what's the error Rager?
02:52 Rager well, the error shows up during the bundle update, actually
02:53 Rager it's caused by not having ruby-dev installed
02:53 forrest does ruby-dev say it installed?
02:53 Rager when I check on the client, ruby-dev is not installed, but ruby gets installed successfully
02:53 VertigoRay forrest, running refresh_pillar from the master triggers a job on the minion.  the minion returns "None"
02:54 Rager Comment:   Package ruby-dev is already installed
02:54 forrest but it's not installed?
02:54 Rager but on the client, (dpkg -l | grep ruby-dev) returns nothign
02:54 Rager nothing*
02:54 Rager and (apt-get install ruby-dev) asks if I'd like to go through with installation
02:55 forrest that's.... weird
02:55 forrest Rager, can you run the call but add -l debug to it?
02:55 forrest maybe some additional output will help
02:56 forrest VertigoRay, what happens when you run a test.ping against that box?
02:56 fragamus joined #salt
02:56 VertigoRay forrest, returns "True"
02:56 forrest VertigoRay, ok can you try and use a very simple state on that minion?
02:56 blee joined #salt
02:57 Rager here's the log from the attempt at running the state: http://hastebin.com/rekaneveju.vhdl
02:58 VertigoRay forrest, it actually is a very simple state -- I'll get you a paste in a sec.  Here's the response from state.highstate:  http://hastebin.com/piroguxaqe.vhdl
02:58 forrest VertigoRay, so what's going on with environment, and conf.dsconfigad?
02:58 VertigoRay forrest, those settings are old, actually, they aren't in the top.sls anymore.
02:58 forrest whaaaaaaaa
02:58 forrest that is dumb
02:59 forrest why is the highstate not clearing this crap
02:59 VertigoRay I agree ... that's what I've been saying for the last 6 hours ... I've stripped this master down to bare bones, so it's very simple
02:59 forrest yea
03:00 forrest the data should be automatically getting cleared, especially if you trashed the cache data
03:01 VertigoRay yeah, I have the same set-up on another box for servers instead of desktop infra.  I'm about to just blow away the box and start over with it.
03:01 VertigoRay here's the master config http://hastebin.com/suyopubama.avrasm
03:01 Rager well, that seems to have done it - I added explicit versions and requires
03:02 forrest what does the state look like now Rager?
03:02 thelorax123 joined #salt
03:02 Rager http://hastebin.com/vawupuxule.sm
03:02 forrest that's weird
03:02 Rager not sure which one fixed it
03:02 Rager I'll uninstall ruby-dev and find out
03:02 forrest I wonder if the - is somehow breaking ruby-dev?
03:03 forrest so maybe only ruby is getting passed to apt, and it thinks it is installed
03:03 forrest hmmmmmm
03:03 forrest VertigoRay, can you see what pillar.items returns?
03:03 VertigoRay this is the top.sls
03:03 VertigoRay http://hastebin.com/rexurafupa.sm
03:04 Rager ok, it was the version numbers
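The workaround Rager landed on, sketched as a state: pin an explicit package version in pkg.installed (the version string is illustrative):

```yaml
ruby-dev:
  pkg.installed:
    - version: '1:1.9.3'
```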
03:04 redondos joined #salt
03:04 oz_akan_ joined #salt
03:05 VertigoRay and this is global.sls:  http://hastebin.com/nodemuyehi.sm
03:05 zach hastebin - is it full hurried along code?
03:05 forrest ?
03:06 zach I was making a poor joke
03:06 VertigoRay forrest, yes, pillar.items returns just _errors, master, saltmasterversion, schedule
03:06 forrest VertigoRay, can you pull down that git repo to confirm using that exact URL, you get the latest copy?
03:06 VertigoRay zach, I'm laughing now ... ;)
03:06 zach I should setup schedule instead of using crontab on the minion
03:06 zach I just have */45 * * * * salt-call state.highstate in cron
03:06 forrest boooooo
03:07 VertigoRay booooooooo
03:07 VertigoRay schedule is so simple  ;)
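A sketch of the minion schedule option being suggested in place of zach's crontab entry (interval illustrative):

```yaml
# in the minion config (or pillar):
schedule:
  highstate:
    function: state.highstate
    minutes: 45
```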
03:07 zach heh
03:07 mapu joined #salt
03:07 forrest yea even VertigoRay got it going and he can't even fix his pillar! :P
03:07 zach don't feel bad VertigoRay, I don't like pillars
03:08 VertigoRay yeah, for real
03:08 zach I would likely run into some bug if I used schedule
03:08 ravibhure joined #salt
03:08 VertigoRay oh, I'm suppossed to be cloning that url... :D
03:09 forrest yes
03:09 forrest lol
03:09 forrest I've gotta head to the gym in 20 minutes man, THE TIME IS TICKING, WHAT WILL YOU DO WITHOUT MY USELESS ADVICE??
03:09 Rager an hero
03:10 VertigoRay forrest, !!!!!!!!!!!!!!!!!!
03:10 forrest lol
03:10 forrest you're checking out the wrong version huh?
03:11 VertigoRay it's working, but this just clicked
03:11 forrest what was it
03:11 VertigoRay there's a bug cause it's pulling my prd branch (which is the default branch)
03:11 forrest ahhh
03:11 VertigoRay even though I'm specifying master in the ext_pillar
03:12 VertigoRay on my other box, master is the default branch
03:12 VertigoRay and that's the only one I need.  this box is different cause it's got different environments.
03:12 redondos joined #salt
03:12 redondos joined #salt
03:12 forrest ahh ok
03:13 zach sudo vpnc
03:13 zach d'oh wrong terminal
03:13 mpanetta joined #salt
03:13 forrest well, at least you got it figured out VertigoRay
03:14 VertigoRay yeah, for now I'll swap the default branch and see if it's fixed.  if so, I'll flatten things out a bit until I submit a merge req  ;)
03:14 forrest nice
03:16 Rager thanks, forrest
03:16 bretep @all does anyone know the status of https://github.com/saltstack/salt/issues/7413
03:16 forrest Rager, yea no problem, you're on Ubuntu?
03:16 Rager debian
03:16 bretep I guess we are waiting for zmq4?
03:16 forrest Rager, which version?
03:17 Rager 7
03:17 Rager 7.1a on the master
03:17 Rager and 7.2 on the clients
03:17 Rager so... 7
03:17 forrest ok, I need to do some testing as to why that bombs when you use ruby-dev
03:17 forrest makes no sense.
03:17 Rager it works fine if I specify a version number for ruby-dev
03:17 Rager 1:1.9.3 is the default version, though - how do I find out what the command was that it ran?
03:17 forrest Rager, yea but you shouldn't have to do that.
03:18 forrest you'd have to trace the calls down and such.
03:18 forrest 64 or 32 bit Rager?
03:18 Rager would I be able to do that with the client in debug mode
03:18 Rager 64
03:19 forrest uhh I don't think the system calls like that will be traced down like that.
03:19 forrest but maybe? I can't remember, someone was troubleshooting a similar issue.
03:19 Rager so it doesn't just call a shell?
03:19 Rager dang.
03:19 VertigoRay can't you turn debug logging on for the minion and watch the calls when it runs?
03:19 VertigoRay can't remember if I see the cmd.run output
03:19 VertigoRay actually, I know you do
03:20 VertigoRay you'll see the shlex output which is better cause it'll break out the arguments for you
03:20 VertigoRay right?  or am I mixing products ... ;)
03:20 Rager I'm currently running the minion in foregrounded debug mode
03:20 Rager and I get commands
03:20 forrest oh sweet
03:20 Rager to some degree, at least
03:21 Rager odd
03:21 Rager ruby1.9.1 is the name of the package installed
03:21 forrest as in that's the real name?
03:21 forrest not ruby-1.9.1?
03:22 forrest bretep, I have no idea, I'd assume they're just waiting
03:22 Rager ooh
03:22 Rager yeah
03:22 forrest you could always ask for an update bretep
03:22 Rager let me paste you some excerpts
03:24 ajw0100 joined #salt
03:24 mannyt joined #salt
03:25 Rager odd
03:25 Rager ruby -v gives 1.9.3
03:25 Rager and everything went fine this time
03:25 forrest maybe because ruby was already installed?
03:26 Rager hm
03:26 VertigoRay forrest, swapping the default branch did it.  now it's giving me a diff err -- error generated from all my tearing of things down, so I know how to fix.
03:26 Rager no, I removed ruby
03:26 forrest VertigoRay, awesome
03:26 forrest Rager, oh weird
03:26 Rager wait, nevermind - things didn't go so smoothly
03:30 Rager http://hastebin.com/hehayiyece.cmake
03:30 Rager this is the output from the client when I run the state
03:31 Rager this is the state: http://hastebin.com/mamexacije.sm
03:31 VertigoRay forrest, https://github.com/saltstack/salt/blob/b98f5f4acdfd87c10f2277614daa3d1f7f621b94/salt/pillar/git_pillar.py#L96-L121   Looks like `branch` is never being checked out.  Should be an easy fix.  Just will have to test.
03:31 forrest Rager, I've gotta head to the gym, I'll try to take a look when I get back if you haven't figured it out
03:32 forrest nice VertigoRay
03:32 Rager what's really odd is that nothing works, but it says that it went through fine
03:32 Rager vOv
03:32 Rager I found a work-around
03:37 redondos joined #salt
03:37 redondos joined #salt
03:38 ravibhure1 joined #salt
03:39 redondos joined #salt
03:41 redondos joined #salt
03:45 redondos joined #salt
03:47 redondos joined #salt
03:50 davidfischer joined #salt
03:53 mannyt joined #salt
03:57 ajw0100 joined #salt
04:00 NotreDev joined #salt
04:06 redondos joined #salt
04:08 mgw joined #salt
04:08 mafrosis joined #salt
04:12 ckao joined #salt
04:14 favadi joined #salt
04:19 junedm joined #salt
04:22 mafrosis left #salt
04:26 redondos joined #salt
04:27 mannyt joined #salt
04:28 redondos joined #salt
04:28 redondos joined #salt
04:33 redondos joined #salt
04:33 redondos joined #salt
04:36 anuvrat joined #salt
04:39 mannyt joined #salt
04:43 dvogt joined #salt
04:43 wilywonka joined #salt
04:44 redondos joined #salt
04:44 redondos joined #salt
04:49 noob2 joined #salt
04:49 noob2 can salt watch a directory for a service?
04:49 noob2 i saw pkg and file
04:49 NV file.directory?
04:49 noob2 yeah lemme try it and see what happens
04:50 noob2 there's a bug report where thatch says - file: /some/dir/* works also
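A sketch of the pattern noob2 is after: a service watching files under a managed directory, so it restarts when they change (the service name and paths are illustrative):

```yaml
/etc/myservice/conf.d:
  file.recurse:
    - source: salt://myservice/conf.d

myservice:
  service.running:
    - watch:
      - file: /etc/myservice/conf.d
```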
04:52 bemehow joined #salt
04:54 redondos joined #salt
04:54 redondos joined #salt
05:00 Furao joined #salt
05:01 wilywonka joined #salt
05:01 Furao left #salt
05:02 yano joined #salt
05:05 malinoff joined #salt
05:06 redondos joined #salt
05:10 jalbretsen joined #salt
05:13 yano joined #salt
05:16 zandy joined #salt
05:37 redondos joined #salt
05:37 redondos joined #salt
05:40 bemehow joined #salt
05:45 jY what's the best way to convert something like this to salt where i want to pass in custom vars from many sls files?
05:45 jY http://pastebin.com/pjDb5WkK
05:47 zandy joined #salt
05:47 NV jY: have the instance information in pillar, use jinja templating of the state sls combined with a for loop to iterate over the pillar list data
05:48 NV can also template the contents of the file in a similar manner too
05:48 NV using defaults/context
05:48 jY ok i think i need to really sit down and learn about pillars
05:48 jY thanks
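A sketch of what NV describes: instance data kept in pillar, with a jinja for loop in the state sls iterating over it (the `instances` pillar key and the file paths are assumptions for illustration):

```yaml
{% for name, inst in pillar.get('instances', {}).items() %}
{{ name }}-conf:
  file.managed:
    - name: /etc/app/{{ name }}.conf
    - source: salt://app/files/instance.conf.jinja
    - template: jinja
    - context:
        port: {{ inst.get('port', 8080) }}
{% endfor %}
```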
05:57 rmt joined #salt
06:13 bemehow joined #salt
06:14 middleman_ joined #salt
06:17 zandy joined #salt
06:18 redondos joined #salt
06:21 bemehow joined #salt
06:28 pygmael left #salt
06:31 bemehow joined #salt
06:35 redondos_ joined #salt
06:37 berto- joined #salt
06:37 redondo__ joined #salt
06:37 jalbretsen joined #salt
06:38 davidfischer joined #salt
06:38 Katafalkas joined #salt
06:39 redondos joined #salt
06:41 redondos_ joined #salt
06:41 Destro joined #salt
06:42 redondo__ joined #salt
06:46 redondos joined #salt
06:48 zandy joined #salt
06:48 redondos joined #salt
06:49 bhosmer joined #salt
06:50 redondos_ joined #salt
06:51 redondos_ joined #salt
06:52 zandy joined #salt
06:52 lemao joined #salt
06:53 redondos joined #salt
06:55 redondo__ joined #salt
06:57 redondos_ joined #salt
06:59 redondos joined #salt
07:00 redondos joined #salt
07:02 Katafalkas joined #salt
07:04 Katafalkas joined #salt
07:07 redondos_ joined #salt
07:08 redondos_ joined #salt
07:09 CheKoLyN joined #salt
07:10 redondos joined #salt
07:12 redondo__ joined #salt
07:13 pdayton joined #salt
07:14 redondos_ joined #salt
07:16 cym3try joined #salt
07:17 jY anyone know why gitfs isn't working for me http://pastebin.com/emMtWhgu
07:17 jY i can git clone the same thing as root on the server just fine
07:17 jY but salt is giving me a [WARNING ] GitPython exception caught while fetching: 'Error when fetching: fatal: remote error:' returned exit status 2: None
07:17 redondo__ joined #salt
07:19 jimallman joined #salt
07:19 redondos_ joined #salt
07:20 jY deleting /var/cache/salt/master/gitfs seemed to fix it
07:20 renothing joined #salt
07:21 redondos joined #salt
07:23 redondos joined #salt
07:25 redondos_ joined #salt
07:26 redondo__ joined #salt
07:27 TomasNunez joined #salt
07:28 redondos joined #salt
07:30 redondos joined #salt
07:31 jY now i guess my question is how do i use salt-call to hit a certain branch on my master's gitfs
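One hedged answer to jY's question: gitfs exposes each branch/tag as a salt environment named after it, so the branch can be targeted via the environment (on 0.17 the keyword may still be `env` rather than `saltenv`; the branch name is illustrative):

```bash
salt-call state.highstate saltenv=mybranch
```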
07:32 redondos joined #salt
07:33 redondos_ joined #salt
07:40 redondos joined #salt
07:41 redondo__ joined #salt
07:44 redondos_ joined #salt
07:45 redondos joined #salt
07:47 redondo__ joined #salt
07:49 redondos_ joined #salt
07:49 vkurup_ joined #salt
07:51 redondos joined #salt
07:53 redondo__ joined #salt
07:54 redondos_ joined #salt
07:56 juasiepo joined #salt
07:56 redondos joined #salt
07:56 balboah joined #salt
07:58 redondo__ joined #salt
08:00 redondos_ joined #salt
08:00 dranger joined #salt
08:01 redondos joined #salt
08:02 harobed joined #salt
08:03 redondos joined #salt
08:04 giantlock_ joined #salt
08:05 redondos joined #salt
08:07 favadi joined #salt
08:11 redondos joined #salt
08:11 slav0nic joined #salt
08:13 redondos joined #salt
08:14 druonysus joined #salt
08:14 druonysus joined #salt
08:15 kiorky s0undt3ch: i assume you went to sleep, so :p
08:16 redondos_ joined #salt
08:18 redondo__ joined #salt
08:18 favadi joined #salt
08:19 redondos joined #salt
08:21 redondos_ joined #salt
08:23 redondo__ joined #salt
08:25 redondos joined #salt
08:25 ajw0100 joined #salt
08:26 redondos_ joined #salt
08:28 redondo__ joined #salt
08:30 redondos joined #salt
08:31 elsmorian joined #salt
08:32 redondos_ joined #salt
08:34 redondo__ joined #salt
08:35 backjlack joined #salt
08:35 redondos joined #salt
08:37 redondos_ joined #salt
08:39 redondos_ joined #salt
08:41 redondos joined #salt
08:41 ravibhure joined #salt
08:42 ravibhure2 joined #salt
08:43 redondo__ joined #salt
08:45 redondos_ joined #salt
08:47 redondos joined #salt
08:51 redondos_ joined #salt
08:53 redondos joined #salt
08:53 lemao joined #salt
08:55 redondo__ joined #salt
08:56 redondos_ joined #salt
08:58 redondos joined #salt
09:00 redondo__ joined #salt
09:02 redondos_ joined #salt
09:03 redondos joined #salt
09:04 dranger joined #salt
09:05 redondos joined #salt
09:06 ravibhure joined #salt
09:07 redondos_ joined #salt
09:08 ravibhure1 joined #salt
09:09 redondo__ joined #salt
09:11 redondos joined #salt
09:12 redondos joined #salt
09:13 carlos joined #salt
09:14 redondos joined #salt
09:15 zandy joined #salt
09:16 mpanetta joined #salt
09:16 redondos_ joined #salt
09:19 redondos joined #salt
09:22 pengunix joined #salt
09:23 redondos_ joined #salt
09:23 bemehow joined #salt
09:24 redondos joined #salt
09:27 aco_ joined #salt
09:28 redondos_ joined #salt
09:30 aco_ I have already written a lot of states without managing ordering. I now have 2 states that I want to run last, but if I give them both order: last they still don't execute in the order I want because of alphabetical ordering. From searching, it seems my only option is to add an order argument to every state function, which would be very tedious. Can anyone help me solve this? Maybe there's a way I don't know about.
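One way out of aco_'s alphabetical tie without ordering every state: keep `order: last` on both states, and add a require so the two run in a defined sequence relative to each other, since requisites take precedence over ordering (state names and commands illustrative):

```yaml
almost-last:
  cmd.run:
    - name: /usr/local/bin/prepare.sh
    - order: last

really-last:
  cmd.run:
    - name: /usr/local/bin/finish.sh
    - order: last
    - require:
      - cmd: almost-last
```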
09:30 redondo__ joined #salt
09:34 redondos joined #salt
09:34 Destro joined #salt
09:36 redondos_ joined #salt
09:38 redondo__ joined #salt
09:40 redondos joined #salt
09:41 Iwirada joined #salt
09:41 redondos_ joined #salt
09:43 redondo__ joined #salt
09:45 redondos joined #salt
09:46 Guest4120 joined #salt
09:46 Guest4120 left #salt
09:47 redondos joined #salt
09:48 totte joined #salt
09:48 redondos_ joined #salt
09:51 lemao joined #salt
09:51 redondo__ joined #salt
09:52 redondos joined #salt
09:54 JasonG_TA joined #salt
09:54 redondos_ joined #salt
09:56 redondo__ joined #salt
10:00 redondos_ joined #salt
10:02 redondo__ joined #salt
10:03 redondos joined #salt
10:05 redondos_ joined #salt
10:06 bhosmer joined #salt
10:10 Destro joined #salt
10:11 MrTango joined #salt
10:16 mpanetta joined #salt
10:36 cym3try joined #salt
10:41 simonmcc joined #salt
10:41 juasiepo joined #salt
10:42 simonmcc joined #salt
10:44 linjan_ joined #salt
10:44 Dinde joined #salt
10:51 mag joined #salt
10:52 mag joined #salt
11:03 ajw0100 joined #salt
11:07 redondos joined #salt
11:07 s0undt3ch kiorky: yep, sometimes I do ;)
11:07 s0undt3ch kiorky: and about yesterday, you were not harsh
11:08 redondos joined #salt
11:09 zarath_ joined #salt
11:09 zarath_ hi there
11:10 zarath_ does anyone know whether pillar works with salt-ssh?
11:11 zarath_ I've tried the top.sls / data.sls example but I don't see the info item with salt-ssh 'host' pillar.items
11:12 aleszoulek joined #salt
11:12 redondos joined #salt
11:13 tomspur joined #salt
11:14 redondos_ joined #salt
11:14 mag_ left #salt
11:15 redondos_ joined #salt
11:19 redondos joined #salt
11:21 redondos_ joined #salt
11:21 macduke joined #salt
11:23 redondos joined #salt
11:24 redondos_ joined #salt
11:26 redondos_ joined #salt
11:27 hhenkel Hi all, I'm trying to have multiple repositories (with rhel/centos) in a single file. Is there a way to achieve that?
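Since pkgrepo.managed handles one repository per state, a common approach for a multi-repo .repo file is to manage the file itself; a sketch (paths illustrative):

```yaml
/etc/yum.repos.d/internal.repo:
  file.managed:
    - source: salt://repos/files/internal.repo
    - user: root
    - group: root
    - mode: 644
```

where the source file simply contains several `[reponame]` sections.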
11:28 redondos joined #salt
11:30 redondos_ joined #salt
11:31 s0undt3ch left #salt
11:32 redondo__ joined #salt
11:33 redondos joined #salt
11:35 redondos joined #salt
11:39 viq Anyone familiar with jinja? My states are unhappy when the pillars they pull data from are empty...
11:42 redondos_ joined #salt
11:44 redondos joined #salt
11:46 redondos joined #salt
11:48 redondos_ joined #salt
11:49 redondo__ joined #salt
11:55 cron0 joined #salt
11:55 redondos joined #salt
11:57 redondo__ joined #salt
11:58 redondos_ joined #salt
12:00 redondos joined #salt
12:02 redondos joined #salt
12:04 redondos_ joined #salt
12:05 redondo__ joined #salt
12:07 redondos joined #salt
12:09 honestly viq: pillar.get('foo:key:subkey', default)
12:09 honestly or maybe salt['pillar.get']()
12:09 honestly or __salt__['pillar.get']
12:09 honestly try which one works
12:10 honestly and then tell me so I might remember
12:10 ravibhure joined #salt
12:12 redondos_ joined #salt
12:13 viq honestly: I have something along the lines of https://github.com/viq/cm-lab-salt/blob/master/salt/roots/salt/users/group1.sls
12:13 honestly {% for user, args in pillar['group1'].iteritems() %}
12:13 honestly ->
12:14 honestly {% for user, args in pillar.get('group1',{}).iteritems() %}
12:14 viq hmm, let's see
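The defensive pattern honestly is pointing at, in full: default the pillar lookup to an empty dict so the loop is a no-op when the key is absent (the `group1` layout follows viq's sls; the user attributes are otherwise illustrative):

```yaml
{% for user, args in pillar.get('group1', {}).items() %}
{{ user }}:
  user.present:
    - fullname: {{ args.get('fullname', '') }}
{% endfor %}
```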
12:14 redondo__ joined #salt
12:16 dangerousbeans joined #salt
12:16 redondos joined #salt
12:19 dangerousbeans left #salt
12:20 redondos joined #salt
12:21 viq honestly: no, neither of those work, especially __salt__ one
12:21 honestly ok
12:21 honestly lemme find my own code
12:21 redondos_ joined #salt
12:21 honestly https://github.com/saltstack-formulas/users-formula/blob/master/users/init.sls
12:22 honestly there
12:22 honestly that works (:
12:22 elfixit joined #salt
12:22 harobed_ joined #salt
12:23 redondo__ joined #salt
12:25 viq Thanks, looking
12:25 honestly maybe the iteritems() fails when the hashmap is empty
12:25 honestly so use items()
12:25 viq It's still complaining with items
12:27 redondos joined #salt
12:27 honestly what is it complaining about?
12:28 viq http://pbot.rmdir.de/86FmfKw1zl4P6Hy2uS6NcA
12:29 redondos_ joined #salt
12:30 redondos joined #salt
12:32 redondo__ joined #salt
12:34 redondos_ joined #salt
12:35 Destro left #salt
12:35 Destro joined #salt
12:35 Destro left #salt
12:36 redondos joined #salt
12:37 Destro joined #salt
12:37 Destro left #salt
12:38 redondo__ joined #salt
12:39 redondos_ joined #salt
12:44 redondos joined #salt
12:44 foxx joined #salt
12:46 redondos_ joined #salt
12:47 cron0 joined #salt
12:48 redondos_ joined #salt
12:48 cron0 joined #salt
12:48 cron0 joined #salt
12:50 redondos joined #salt
12:51 dranger joined #salt
12:51 blee joined #salt
12:52 redondo__ joined #salt
12:52 hhenkel Hi all, currently trying to get halite up and running. I created a virtualenv and installed all the dependencies whose absence made startup fail.
12:53 hhenkel The server is now running but as soon as I try to connect I get the following error: IOError: [Errno 2] No such file or directory: '/opt/halite-venv/lib/python2.6/site-packages/halite/mold/main_bottle.html'
12:53 redondos_ joined #salt
12:53 hhenkel I installed halite via pip.
12:55 junedm left #salt
12:55 redondos joined #salt
12:56 harobed_ joined #salt
12:57 redondos_ joined #salt
13:02 redondos joined #salt
13:04 redondos joined #salt
13:06 redondos_ joined #salt
13:08 redondo__ joined #salt
13:09 redondos joined #salt
13:11 harobed joined #salt
13:11 redondos_ joined #salt
13:12 honestly viq: hm, I'll have to take a closer look at this
13:12 honestly viq: but no time now
13:12 Chrisje joined #salt
13:13 redondo__ joined #salt
13:13 harobed joined #salt
13:14 harobed joined #salt
13:15 xet7 why salt-ssh says "ERROR: sudo expected a password, NOPASSWD required" ?
13:16 whiskybar joined #salt
13:18 zarath_ xet7: how do your sudoers file looks like?
13:19 redondos joined #salt
13:21 redondos joined #salt
13:22 zarath__ joined #salt
13:23 xet7 zarath_ : http://paste.openstack.org/show/53807/   logging in as user
13:23 abele joined #salt
13:23 redondos_ joined #salt
13:24 bemehow joined #salt
13:24 xet7 it's the only one of the ubuntu hosts that has an error with user credentials
13:24 xet7 I mean minions
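salt-ssh's sudo mode expects passwordless sudo for the connecting user; a typical sudoers entry looks like this (the username is illustrative; edit via visudo):

```
# /etc/sudoers.d/salt-ssh
deployuser ALL=(ALL) NOPASSWD: ALL
```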
13:25 redondo__ joined #salt
13:27 redondos joined #salt
13:29 redondos_ joined #salt
13:31 redondos joined #salt
13:32 redondos joined #salt
13:33 abele_ joined #salt
13:33 mannyt joined #salt
13:33 abele joined #salt
13:34 redondos joined #salt
13:35 abele joined #salt
13:36 kvbik joined #salt
13:36 redondos joined #salt
13:37 bhosmer_ joined #salt
13:38 redondos_ joined #salt
13:38 cron0 joined #salt
13:39 abele joined #salt
13:40 redondo__ joined #salt
13:41 harobed joined #salt
13:41 brianhicks joined #salt
13:42 harobed joined #salt
13:43 redondos joined #salt
13:44 harobed joined #salt
13:44 abele joined #salt
13:44 redondos_ joined #salt
13:45 harobed joined #salt
13:46 redondo__ joined #salt
13:46 jslatts joined #salt
13:50 redondos joined #salt
13:52 redondos_ joined #salt
13:53 redondo__ joined #salt
13:55 redondos joined #salt
13:56 abele joined #salt
13:57 redondos_ joined #salt
13:59 redondo__ joined #salt
13:59 georgj05 joined #salt
13:59 mpanetta joined #salt
14:00 redondos joined #salt
14:01 viq honestly: sure, thanks
14:02 redondos_ joined #salt
14:04 redondo__ joined #salt
14:04 vejdmn joined #salt
14:06 redondos joined #salt
14:06 juicer2 joined #salt
14:08 redondos_ joined #salt
14:08 Gifflen joined #salt
14:09 redondo__ joined #salt
14:11 JasonG_TA joined #salt
14:11 redondos joined #salt
14:13 redondos_ joined #salt
14:14 JasonSwindle joined #salt
14:14 ipmb joined #salt
14:15 redondo__ joined #salt
14:16 linjan joined #salt
14:17 redondos joined #salt
14:18 abe_music joined #salt
14:18 redondos_ joined #salt
14:19 JasonG_TA joined #salt
14:20 JasonG_TA joined #salt
14:21 Chocobo joined #salt
14:22 redondos joined #salt
14:23 redondos joined #salt
14:24 pexio Hey guys I'm trying to set a config option in a ntp.conf jinja template depending on where on a network a machine exists, but I can't to get it to work, I would very much like to be able to do something like this in the template: {% if salt['network.in_subnet'](172.16.1.0/24) %} server 172.16.1.1 iburst {% endif %}
14:25 redondos_ joined #salt
14:26 zandy joined #salt
14:27 pexio It is also strange for me that something like this would work: {% if 'staging' in grains['host'] %} while {% if "172.16.4" in grains['fqdn_ip4'] %} does not
14:28 pexio If someone could point me in the right direction I would really appreciate it.
14:28 tempspace pexio: Have you tried wrapping your CIDR in ''s
14:29 redondos joined #salt
14:29 pexio tempspace: yeah that seems like a good idea, I will try it
14:30 redondos joined #salt
14:31 mapu joined #salt
14:32 pexio tempspace: so obvious, it works, thanks a lot.
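For anyone hitting the same wall: the fix tempspace suggests is quoting the CIDR, since an unquoted `172.16.1.0/24` is an invalid Jinja expression rather than a string. A minimal sketch of the working template (subnet and server IP are pexio's examples):

```jinja
{# ntp.conf fragment: pick the NTP server based on subnet membership #}
{% if salt['network.in_subnet']('172.16.1.0/24') %}
server 172.16.1.1 iburst
{% endif %}
```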
14:32 redondos joined #salt
14:32 tempspace pexio: No problem!
14:32 Brew joined #salt
14:34 redondos_ joined #salt
14:34 felskrone joined #salt
14:35 felskrone left #salt
14:35 harobed joined #salt
14:35 felskrone joined #salt
14:35 harobed joined #salt
14:36 redondo__ joined #salt
14:38 redondos joined #salt
14:38 Chrisje joined #salt
14:39 zandy joined #salt
14:39 th3reverend joined #salt
14:39 gasbakid joined #salt
14:39 redondos joined #salt
14:41 redondos_ joined #salt
14:43 pengunix joined #salt
14:43 zarath__ ah, pillar with salt-ssh works with the newest git checkout ;-)
14:43 redondo__ joined #salt
14:45 redondos joined #salt
14:47 redondos_ joined #salt
14:48 quickdry21 joined #salt
14:48 redondos_ joined #salt
14:50 gkze joined #salt
14:50 redondos joined #salt
14:50 micah_chatt joined #salt
14:52 amahon joined #salt
14:54 redondos_ joined #salt
14:55 redondos joined #salt
14:56 zandy joined #salt
14:57 redondos_ joined #salt
14:58 zandy joined #salt
14:59 ipmb joined #salt
15:01 redondos joined #salt
15:01 carlos joined #salt
15:02 moos3 joined #salt
15:02 redondos joined #salt
15:02 bemehow joined #salt
15:03 harobed joined #salt
15:03 jimallman joined #salt
15:04 redondos_ joined #salt
15:05 danielbachhuber joined #salt
15:06 redondos_ joined #salt
15:06 thelorax123 joined #salt
15:07 mannyt joined #salt
15:08 bhosmer_ joined #salt
15:08 redondos joined #salt
15:10 redondo__ joined #salt
15:10 jsm joined #salt
15:11 redondos_ joined #salt
15:13 jslatts joined #salt
15:13 redondos joined #salt
15:14 viq redondos: fix your connection :P
15:14 opapo joined #salt
15:14 * viq ponders whether jesusaurus would be up to helping again with jinja
15:15 redondos joined #salt
15:15 davidfischer joined #salt
15:16 harobed joined #salt
15:16 zandy joined #salt
15:17 redondos_ joined #salt
15:19 redondo__ joined #salt
15:19 teskew joined #salt
15:19 viq Aha! jesusaurus, honestly - adding this at top of state solves this for me: {% if pillar.get('TEMPLATE') != None %}
15:20 redondos joined #salt
15:22 redondos_ joined #salt
15:26 redondos joined #salt
15:27 redondos_ joined #salt
15:27 lineman60 joined #salt
15:29 bt joined #salt
15:32 redondos joined #salt
15:34 redondos joined #salt
15:36 redondos_ joined #salt
15:36 harobed joined #salt
15:38 redondo__ joined #salt
15:38 snewell joined #salt
15:39 bhosmer_ joined #salt
15:40 redondos joined #salt
15:41 redondos joined #salt
15:42 cnelsonsic joined #salt
15:43 dangerousbeans joined #salt
15:43 redondos_ joined #salt
15:44 harobed joined #salt
15:45 kermit joined #salt
15:45 redondo__ joined #salt
15:45 StDiluted joined #salt
15:46 harobed joined #salt
15:46 harobed joined #salt
15:46 kermit joined #salt
15:47 redondos joined #salt
15:47 harobed joined #salt
15:48 dscott joined #salt
15:48 harobed joined #salt
15:49 redondos_ joined #salt
15:49 NotreDev joined #salt
15:49 harobed joined #salt
15:50 redondos_ joined #salt
15:52 redondos joined #salt
15:54 redondos_ joined #salt
15:55 harobed joined #salt
15:56 harobed joined #salt
15:56 juasiepo joined #salt
15:56 redondos joined #salt
15:57 harobed joined #salt
15:57 smccarthy joined #salt
15:58 redondo__ joined #salt
15:58 Iwirada left #salt
16:00 redondos_ joined #salt
16:01 redondos joined #salt
16:01 pentabular joined #salt
16:02 cachedout joined #salt
16:02 sgviking joined #salt
16:03 redondos joined #salt
16:04 harobed joined #salt
16:04 dangerousbeans joined #salt
16:05 harobed joined #salt
16:06 UtahDave joined #salt
16:06 harobed joined #salt
16:06 redondos_ joined #salt
16:07 Authority joined #salt
16:07 jalbretsen joined #salt
16:08 redondos joined #salt
16:08 harobed joined #salt
16:09 UtahDave Good morning, everyone!
16:09 harobed joined #salt
16:09 viq UtahDave: morfternoon ;)
16:09 redondos joined #salt
16:10 UtahDave :)
16:10 harobed joined #salt
16:11 harobed joined #salt
16:11 redondos_ joined #salt
16:12 forrest joined #salt
16:12 harobed joined #salt
16:13 jslatts joined #salt
16:13 th3reverend left #salt
16:13 redondo__ joined #salt
16:14 danielm joined #salt
16:14 harobed joined #salt
16:14 Guest48258 is there a way to send a job to minions that are currently offline? like waiting until the client becomes available, and issue that job then?
16:15 amahon joined #salt
16:15 forrest Guest48258, not as far as I am aware.
16:15 redondos joined #salt
16:15 pentabular joined #salt
16:15 harobed joined #salt
16:15 danielmch hmm, okay
16:16 jalbretsen1 joined #salt
16:16 harobed joined #salt
16:17 redondos joined #salt
16:17 forrest it's something that has been discussed, but there hasn't been action on it because it's kind of a lot of work.
16:17 harobed joined #salt
16:18 dave_den danielmch: you could make a reactor that sends out jobs when it sees a minion_start event from your minions
16:18 jalbretsen joined #salt
16:18 redondos_ joined #salt
16:19 harobed joined #salt
16:19 forrest nice idea dave_den
16:20 danielmch what do you mean by reactor? is this actually implemented in saltstack?
16:20 dave_den or, if you wanted you could just create a scheduled job on the minions that checks for any jobs waiting it has not executed yet
16:20 redondo__ joined #salt
16:21 dave_den danielmch: http://docs.saltstack.com/topics/reactor/index.html
16:22 danielmch dave_den: cool, thank you!
16:22 StDiluted joined #salt
16:22 redondos joined #salt
16:22 dave_den danielmch: this may be useful, too: http://docs.saltstack.com/ref/runners/all/salt.runners.jobs.html
16:22 mapu joined #salt
16:23 dave_den and the ext_job_cache
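dave_den's reactor suggestion can be sketched roughly like this; the event tag format and file paths vary by Salt version, so treat this as illustrative and check the reactor docs linked above:

```yaml
# /etc/salt/master -- map the minion-start event to a reactor file
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/on_minion_start.sls
```

```jinja
{# /srv/reactor/on_minion_start.sls -- rendered per event;
   data['id'] is the minion that just came online #}
run_pending_state:
  local.state.highstate:
    - tgt: {{ data['id'] }}
```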
16:24 redondos_ joined #salt
16:25 davet joined #salt
16:26 redondo__ joined #salt
16:26 austin987 joined #salt
16:27 dwyerj joined #salt
16:27 redondos joined #salt
16:28 harobed joined #salt
16:29 zarath__ left #salt
16:29 Katafalkas joined #salt
16:29 redondos_ joined #salt
16:30 Authority left #salt
16:31 redondos joined #salt
16:32 mgw joined #salt
16:35 kiorky_ joined #salt
16:35 sinenitore joined #salt
16:36 snave joined #salt
16:36 oraqol So when I run state.highstate using one type of grain that is rolled out on a small number of minions everything works fine
16:36 oraqol but when I run it using another grain that is on a whole bunch more minions, nothing happens
16:37 oraqol and the minions that have the grain that works are a subset of the minions with the grain that doesn't
16:37 oraqol does that make sense?
16:37 dave_den oraqol: try running it with a smaller batch size
16:37 bemehow joined #salt
16:37 redondos_ joined #salt
16:37 dave_den i think your master is still struggling with those >600 minions
16:38 oraqol how do I specify batch size?
16:38 forrest agreed with dave_den
16:38 oraqol probably a very derpy question
16:38 oraqol but hey why not
16:38 forrest oraqol, use the -b option
16:38 EvaSDK joined #salt
16:38 forrest http://docs.saltstack.com/topics/targeting/batch.html
16:38 dave_den salt -b 10 -G 'your:grain' your.state
16:38 felskrone -b :-)
16:39 oraqol so that will run through the whole list of them, but only 10 at a time?
16:39 dave_den right
16:39 oraqol awesome, thank you
16:39 felskrone btw, running with batches will work with several thousand minions on a master
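The batch form dave_den and forrest describe looks like this on the CLI (grain and state names here are placeholders; `-b` also accepts a percentage of the matched minions):

```shell
# Run highstate across all matching minions, 10 at a time
salt -b 10 -G 'role:appserver' state.highstate

# Or cap concurrency at 25% of whatever the target matches
salt --batch-size 25% -G 'role:appserver' state.highstate
```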
16:39 oraqol still trying to get a handle on this thing, which is awesome btw
16:39 redondo__ joined #salt
16:40 dave_den oraqol: with a few hundred minions and lengthy highstates your master gets pretty busy
16:41 forrest dave_den, I'd be really interested to see metrics from a lot of people regarding how many minions they are running, and the settings on their master.
16:41 oraqol Understandable, I wasn't even aware of the batch option
16:41 redondos joined #salt
16:41 forrest yea the batch option is awesome
16:41 oraqol I've got the master on an 8 core 12 gig VM
16:41 forrest There's an issue open for it right now
16:41 forrest because sometimes it spawns more processes than it should
16:42 oraqol and ist still chugging with under 1000
16:42 forrest oraqol, most people seem to have issues because they don't have enough file handles allocated to salt.
16:42 dave_den oraqol: you're tuned your filehandle limit, ya?
16:42 forrest hmm, that seems odd, it shouldn't be
16:42 dave_den youre/you've
16:42 oraqol ulimit -n 9000?
16:43 redondos_ joined #salt
16:43 oraqol its over 9000!
16:43 dave_den cat /proc/$(cat /var/run/salt-master.pid)/stat
16:43 oraqol ...sorry
16:43 oraqol 14671 (salt-master) S 4043 14671 2835 34818 14671 1077960704 42993 0 0 0 731 4002 0 0 20 0 7 0 634345 520495104 8487 18446744073709551615 4194304 7051780 140733671958240 140733671954336 140433332359037 0 0 16781312 16898 18446744073709551615 0 0 17 11 0 0 0 0 0 9149888 9625684 28905472 140733671966912 140733671966958 140733671966958 140733671968739 0
16:44 dave_den sorry, status not stat
16:44 dave_den i am human, need human text
16:44 dave_den :)
16:44 oraqol Name:salt-master State:S (sleeping) Tgid:14671 Pid:14671 PPid:4043 TracerPid:0 Uid:0000 Gid:0000 FDSize:2048 Groups:0  VmPeak:  508296 kB VmSize:  508296 kB VmLck:       0 kB VmPin:       0 kB VmHWM:   60472 kB VmRSS:   33948 kB VmData:  394464 kB VmStk:     136 kB VmExe:    2792 kB VmLib:    8684 kB VmPTE:     400 kB VmSwap:       0 kB Threads:7 SigQ:1/128225 SigPnd:0000000000000000 ShdPnd:00000000000000
16:44 oraqol one can never tell on the internets
16:44 forrest yea if you could just graph that data...
16:44 forrest :P\
16:44 VertigoRay or draw a picture with crayon for me, please
16:45 redondos_ joined #salt
16:45 dave_den oraqol: and check your /proc/<pid>/limits setting
16:45 dave_den you're not using more than 2048 file handles so far, so you are probably fine there
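The checks dave_den walks through can be wrapped in a small script. The pidfile path `/var/run/salt-master.pid` is distro-dependent; the script defaults to the current process so it can be tried anywhere:

```shell
#!/bin/sh
# Show the open-file limit and current fd usage for a process.
# Usage: fdcheck.sh [pid]   e.g. fdcheck.sh "$(cat /var/run/salt-master.pid)"
# With no argument it inspects this shell itself.
pid="${1:-$$}"
grep 'Max open files' "/proc/$pid/limits"
printf 'fds in use: %s\n' "$(ls "/proc/$pid/fd" | wc -l)"
```

If "fds in use" is creeping toward the soft limit, raise it in the master's init/limits configuration rather than with an interactive `ulimit`.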
16:45 oraqol will do
16:45 oraqol I'll try that and the batch option
16:46 dave_den you could probably bump your worker_threads up (assuming it's the default 5)
16:46 austin987 joined #salt
16:46 redondos joined #salt
16:47 micah_chatt joined #salt
16:48 forrest yea that's a good point, when you say chugging do you mean it's slow? Or it's using a lot of resources?
16:48 redondos joined #salt
16:48 pipps_ joined #salt
16:49 oraqol when I run highstate it doesnt return anything to screen, debug in saltmaster doesnt seem to show anything funky, and the processors spike while the highstate is going
16:49 redondos_ joined #salt
16:49 alunduil joined #salt
16:50 forrest can you run a ps in another screen to see how many salt processes are running? It seems odd that it's spiking your cpu that badly to me.
16:50 forrest maybe it's just because I don't use Salt in a big env though
16:51 pniederw joined #salt
16:51 cdsrv joined #salt
16:52 Brew joined #salt
16:53 jimallman joined #salt
16:53 UtahDave oraqol: what version of Salt are you on?
16:53 pniederw looking at salt virt, I see commands for creating vms and such, but I don't see any states that describe which vms are deployed where. am I missing something?
16:55 pniederw i.e. does salt virt only provide an execution layer, but not a cm layer?
16:58 davet joined #salt
16:58 oraqol 0.17.2
16:58 UtahDave pniederw: salt virt builds your vms and passes control to Salt itself for cm
16:58 UtahDave thanks, oraqol
16:59 oraqol there are 9 of those guys running
16:59 pniederw what does that mean? that I have to write my own states (haven't done that yet)?
16:59 oraqol "/usr/bin/salt-master -l debug"
16:59 dave_den i haven't spent much time reading the fileserver code, but it's possible you're spending a lot of time computing file hashes, depending on what your highstates look like
17:00 pniederw I'm looking for a way to describe which vms, with what settings, run on which machines
17:00 dave_den glancing at it, it looks like the hash is cached and tied to the file mtime
17:01 cbloss joined #salt
17:01 pniederw since I'm already using kvm and libvirt, I though that salt virt might be a good fit, but now I can't see how to use it
17:02 jeffrubic joined #salt
17:02 wilywonka joined #salt
17:03 oraqol ok just tried to run state.highstate on only one of the minions and nothing was printed back to screen
17:03 pniederw can I just declare a state that invokes one of these commands similar to how I would use cmd.run?
17:04 dave_den oraqol: how long do your highstates take to run when doing "salt-call state.highstate' form the minion?
17:04 pniederw or does it mean implementing custom states?
17:04 UtahDave oraqol: have you checked the job cache to see if the cli just didn't catch all the returns?  Salt sends commands asynchronously
17:06 troyready joined #salt
17:07 oraqol it seems to run immediately if i salt-call from the minion
17:07 oleksiy joined #salt
17:08 Katafalkas joined #salt
17:08 dave_den oraqol: i mean how long from start to finish
17:08 UtahDave oraqol: right
17:09 dave_den like UtahDave said, when you run 'salt' from your salt master, it waits only for a certain time before the 'salt' process exits and returns you to the shell. that doesn't mean the highstate jobs are not still running on the minions
17:09 bhosmer joined #salt
17:09 dave_den and when the minions finish their highstate runs, they report the job info back to the salt master deamon
17:09 KyleG joined #salt
17:10 KyleG joined #salt
17:10 gmoro joined #salt
17:10 VertigoRay right, and if you CTRL+C, it'll spit out the job id and the command that you can use later to call upon the status of the job.
17:10 forrest oraqol, you could increase the timeout if you want in your /etc/salt/master
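What UtahDave, dave_den and VertigoRay describe can also be driven explicitly from the CLI; the jid below is a made-up example, and `jobs.lookup_jid` is the runner from the docs linked earlier:

```shell
# Fire-and-forget: print a jid immediately instead of waiting for returns
salt -G 'role:appserver' state.highstate --async

# Later, pull that job's results out of the master's job cache
salt-run jobs.lookup_jid 20131122164500123456

# Or just raise the CLI wait time (seconds) for one invocation
salt -t 300 -G 'role:appserver' state.highstate
```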
17:11 kermit joined #salt
17:11 redondos joined #salt
17:12 forrest ugh, crisis of the day, tea leaves didn't last through 3 brews.
17:12 dave_den nnnooooooo!
17:12 VertigoRay if that's the worst it gets, you're in for a good day
17:13 forrest it's still early
17:15 jslusher joined #salt
17:15 forrest I don't think you drink much tea VertigoRay
17:16 VertigoRay More of a coffee man myself, but I usually just brew my tea twice and replace the leaves.  Maybe I should be pushing them harder ... ;)
17:16 forrest VertigoRay, yea if I wasn't almost out I'd only have done twice
17:16 jslusher forrest: me again. you were saying that if I installed salt and/or salt-cloud using a branch in git that I could get a working pair
17:16 VertigoRay oh, so the real issue is lack of stock...
17:16 forrest jslusher, I think I said it was worth a shot :P
17:16 jslusher forrest: fair enough
17:16 forrest jslusher, I'd make no promises since I haven't done it myself.
17:17 forrest VertigoRay, yea pretty much
17:17 jslusher forrest: would that be a pull from the salt repo?
17:17 VertigoRay forrest, things may start getting bad for you then ... /grin
17:18 forrest jslusher, Yea I'd probably try to look for a stable commit and then try to pull that in, maybe near the develop branch
17:18 aleszoulek joined #salt
17:18 redondos joined #salt
17:18 redondos joined #salt
17:18 pniederw is there any material on salt virt other than the one or two pages in the official docs? couldn't find anything. not enough for me to understand how to use it in a non-adhoc (i.e. cm) way.
17:18 bhosmer joined #salt
17:18 forrest VertigoRay, I don't drink much caffeine so it should be ok, I was mostly joking.
17:18 forrest pniederw, not as far as I know, can you open an issue on github for that?
17:18 jslusher forrest: I'll see about that then. thanks for the help.
17:19 forrest jslusher, yea np, wish I had some better advice
17:19 jslusher forrest: the trouble is that I'm now getting heat from the new boss
17:19 forrest lol of course
17:19 jslusher basically wanting us to jump ship and use Chef
17:20 VertigoRay joined #salt
17:20 jslusher forrest: which is what he used at his last place
17:20 dave_den isn't salt-virt being replaced by salt-cloud?
17:20 forrest "How dare this free software have issues!"
17:20 forrest was he a developer jslusher?
17:20 forrest because that's who chef is for
17:20 jslusher forrest: not any more so than I am. I dare say less so
17:20 forrest dave_den, I thought so, but I couldn't find any docs that said so.
17:21 pniederw forrest: sure, can do
17:21 UtahDave dave_den: No. Salt Cloud is for managing vms on various other clouds.  Salt Virt is basically a private cloud on your hardware
17:21 dave_den ah
17:21 UtahDave jslusher: what was the issue you were running into?
17:21 jslusher forrest: his design at his previous place sounds horrific
17:21 teskew jslusher: if redbeard shows up here, i'm hoping we can get a working pair of salt-cloud and the current salt.
17:21 forrest jslusher, heh
17:22 forrest jslusher, ahh see now we have teskew coming in
17:22 teskew it's killing me over here too
17:22 jY i'm having an issue with gitfs and branchs and envs I made a post here https://groups.google.com/forum/#!topic/salt-users/cqMVx9Matss
17:22 jY anyone know what I'm doing wrong?
17:22 jslusher UtahDave: trying to deploy salt and salt-cloud at the moment and it's broken
17:22 forrest teskew, have you been able to get ANY of the recent releases working? jslusher has been troubleshooting this for two days, and he's gone back multiple versions with no dice, salt and salt-cloud pairs
17:22 UtahDave Ah, yeah. That's embarrassing.
17:22 forrest UtahDave, you broke it huh? :P
17:23 UtahDave No, thankfully not me.
17:23 forrest but you guys are well aware it's busted?
17:23 jslusher I've had to defend Salt even before this happened
17:23 forrest jslusher's issue is that he went back a couple releases it seems, and still no dice.
17:24 forrest jslusher, I like how you have to 'defend' it, it would be the same with any other config management tool, just in different ways, lol
17:24 teskew forrest: nope. i've patched and went back through git commits and can't get it to work on specific things like deleting maps. i patched to get the _contrib.py change to work.
17:24 jslusher and now Salt looks especially bad. I know it's not Salt in particular that's broken, but I made the mistake of using salt-cloud to deploy
17:24 forrest teskew, ok cool, at least you are having the same issues with multiple releases, that makes me feel better it's not just a single issue.
17:24 UtahDave jslusher: have you installed the latest salt cloud?  I think redbeard made a release
17:24 jslusher forrest: I know what you're saying.
17:25 jslusher UtahDave: 0.8.20
17:25 jslusher UtahDave: 0.8.10
17:25 jslusher not 20
17:25 UtahDave jslusher: I'm using Salt from the develop branch right now.  Salt Cloud has been merged in there and it's been working pretty well for me.
17:25 teskew salt is borked here pretty badly i think: https://github.com/saltstack/salt/issues/7526
17:26 forrest UtahDave, yea that's what we were discussing earlier, trying to do a pull from close to the develop branch
17:26 jslusher UtahDave: I was using pip for my installs. and I'm not totally comfortable using the dev branch if it will be in pip soon enough
17:26 teskew i'm not going to run something from a dev branch on hundreds of machines.
17:26 jslusher pypi that is
17:26 forrest teskew, how is that 'borked'?
17:26 redondos joined #salt
17:27 forrest when you include a state, it runs everything in that state
17:27 jslusher does anyone have an ETA for when this dev branch will be merged and sent to PyPi?
17:27 UtahDave teskew: that's smart
17:27 UtahDave jslusher: which dev branch?
17:28 forrest teskew, I don't disagree, jslusher is doing this on a test env I believe.
17:28 Linz joined #salt
17:28 jslusher UtahDave: the branch you mentioned that worked with salt-cloud
17:28 teskew forrest: it's borked because the states don't run if you include a state in another state if the included state is also applied at base
17:29 jslusher correct, we're still developing our implementation strategy
17:29 jslusher but the devs are getting impatient with us and we have a new boss who's hot for Chef/Vagrant
17:30 forrest teskew, ahhh ok yea that sucks then
17:30 jslusher but we spent the last 4 months designing our deployments using salt/salt-cloud
17:30 forrest I thought it was specifically including state A in the top file, which was already a part of state B, not having state C that also includes A having a problem
17:33 jslusher UtahDave: forgive my inexperience, but once I'm on the develop branch for salt, is it just a setup.py?
17:34 UtahDave jslusher: yeah.      sudo python setup.py install --force
17:34 jslusher UtahDave: thanks
17:34 noob2 joined #salt
17:34 UtahDave you're welcome!
17:34 dvogt joined #salt
17:34 noob2 is there a way to put a timeout on the file.directory command?
17:35 noob2 i have a fuse mount that keeps hanging and when file.directory tries to check it the salt-call hangs also
17:36 harobed_ joined #salt
17:37 noob2 i realize this isn't salt's problem per se because fuse is being stupid but i'm stuck nonetheless
17:37 harobed_ joined #salt
17:37 [diecast] joined #salt
17:38 mannyt_ joined #salt
17:38 dave_den noob2: no, there's no timeout for that
17:38 noob2 darn
17:38 VertigoRay noob2, if `touch` times out faster, maybe do a cmd.run and tell file.directory to "require" the cmd.run
17:38 forrest VertigoRay, oh man that's dirty
17:38 VertigoRay :D
17:39 noob2 haha
17:39 noob2 i like it
17:39 blee joined #salt
17:39 noob2 i'm ok with a hack
17:39 noob2 fuse is being really dumb
17:39 VertigoRay he said it's not salt's fault ... gotta hack around it
17:39 teskew i find myself fixing race conditions like that with the cmd.run and require a lot :)
17:40 noob2 yeah
17:40 noob2 teskew: i agree.  hung processes are really annoying
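VertigoRay's workaround can be sketched as a state pair like this (mount paths are examples; wrapping the probe in coreutils `timeout` is extra insurance, since noob2 later reports that cmd.run's own `timeout` arg never fires for him against a hung fuse mount):

```yaml
# Probe the flaky fuse mount first; if it hangs, the external `timeout`
# kills it and the directory state is skipped instead of hanging salt-call.
probe_fuse_mount:
  cmd.run:
    - name: timeout 10 ls /mnt/fuse

/mnt/fuse/uploads:
  file.directory:
    - require:
      - cmd: probe_fuse_mount
```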
17:41 redondos joined #salt
17:41 redondos joined #salt
17:41 VertigoRay I use salt to manage OSX and have to use cmd.run and onlyif with awk statements to make things work that work just fine on my debian infrastructure.
17:42 UtahDave jslusher: So it's looking like we're going to have the first RC for the next version of Salt the first part of December.
17:42 VertigoRay for example pip.installed doesn't work on OSX.  Says not available.
17:42 pipps_ joined #salt
17:43 [diecast] UtahDave what's the branch for that currently?
17:43 cron0 joined #salt
17:44 UtahDave [diecast]: all new development happens on the develop branch
17:49 redondos joined #salt
17:49 redondos joined #salt
17:49 backjlack joined #salt
17:50 druonysus joined #salt
17:51 Katafalkas joined #salt
17:51 jslusher UtahDave: I was able to successfully deploy a couple of minions using salt-cloud, but I can't get salt itself to start
17:51 jslusher salt -d -l debug
17:51 jslusher [DEBUG   ] Reading configuration from /etc/salt/master
17:51 jslusher [DEBUG   ] Missing configuration file: /root/.salt
17:51 jslusher [DEBUG   ] loading log_handlers in ['/var/cache/salt/master/extmods/log_handlers', '/usr/lib/python2.7/site-packages/salt/log/handlers']
17:51 jslusher [DEBUG   ] Skipping /var/cache/salt/master/extmods/log_handlers, it is not a directory
17:51 jslusher [DEBUG   ] None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler', were found the in the configuration. Not loading the Logstash logging handlers module.
17:51 jslusher [DEBUG   ] Configuration file path: /etc/salt/master
17:51 jslusher [DEBUG   ] Reading configuration from /etc/salt/master
17:51 jslusher [DEBUG   ] Missing configuration file: /root/.salt
17:51 jslusher [DEBUG   ] LocalClientEvent PUB socket URI: ipc:///var/run/salt/master/master_event_pub.ipc
17:52 jslusher [DEBUG   ] LocalClientEvent PULL socket URI: ipc:///var/run/salt/master/master_event_pull.ipc
17:52 jslusher Failed to connect to the Master, is the Salt Master running?
17:52 jslusher No minions found to gather docs from
17:52 jslusher oopp
17:52 _ikke_ jslusher:
17:53 [diecast] joined #salt
17:53 noob2 teskew: my ls /var/www/bad_dir
17:53 noob2 - timeout: 30 seems to never timeout
17:54 jslusher UtahDave: I got it
17:54 kermit joined #salt
17:54 UtahDave jslusher: salt-master -d -l debug
17:54 jslusher and sorry everyone for the runaway paste
17:54 redondos joined #salt
17:54 forrest jeez jslusher, what a spammer
17:55 jslusher UtahDave: yeah, I missed that detail in my haste
17:55 jslusher forrest: I am a semi-sophisticated bot
17:55 forrest heh
17:56 jslusher forrest: there are bots more sophisticated, and bots less sophisticated
17:56 bdf joined #salt
17:56 dpippenger joined #salt
17:57 forrest but this bot is yours?
18:02 * Gareth waves
18:03 Corey Gareth!
18:03 [diecast] joined #salt
18:03 Gareth Corey: ahoy :)
18:03 Gareth brb
18:03 Corey Submitting that paper with the wife shortly.
18:04 audreyr joined #salt
18:04 elfixit joined #salt
18:05 forrest what
18:05 Corey forrest: Hmm?
18:05 forrest your comment a minute ago
18:05 Corey My wife and I are looking to give a tag-team presentation at SCaLE.
18:05 forrest ahh ok
18:05 forrest context
18:06 Corey I'm a DevOps Unicorn, she's a lawyer. We're going to have a bit of fun with it.
18:06 forrest lol
18:07 VertigoRay Hey, I had a custom grain file that I've used for a while and am now having issues with it running cmd.run.  I made an issue on my own repo that I'm sharing out with people: https://github.com/VertigoRay/salt-osx-grains/issues/1
18:07 VertigoRay Anyone have any thoughts?
18:07 fuser joined #salt
18:07 noob2 what should i do if cmd.run with a timeout never times out haha?
18:07 dave_den VertigoRay: have you tried just salt['cmd.run'] ?
18:08 VertigoRay I had when I wrote this several salt version ago ... looking now.
18:09 sgviking joined #salt
18:10 dave_den nah, that won't work. it's still __salt__
18:10 VertigoRay dave_den, I was about to say --- that would conflict with salt.utils anyway.
18:12 dave_den dunno - __salt__['cmd.run'] is an alias to salt.modules.cmdmod._run_quiet
18:12 dave_den in grains
18:12 dave_den this is how the core grains load it: https://github.com/saltstack/salt/blob/develop/salt/grains/core.py#L33
18:13 NotreDev joined #salt
18:13 VertigoRay it wasn't available for me, so I had to generate it.  Maybe it's available now, so I can comment out L9-12 ?
18:14 noob2 it would be nice if salt organized all the repo adds and then did one apt-get update after all them
18:14 noob2 i see it running apt-get -q update over and over
18:15 redondos joined #salt
18:15 VertigoRay noob2, that would be the DRY way to do it.  However, they would have to do some serious parsing to get the per-state status.
18:15 noob2 ok
18:16 dave_den noob2: i think terminalmage or basepi did a similar optimization for pkg.installed a few months ago. it should be feasible to do the same for pkgrepo.managed
18:16 noob2 yeah maybe i just need to be smarter and organize them all into one state file
18:16 noob2 dave_den: that would be nice :)
18:16 dave_den check for an open issue, or file one
18:17 VertigoRay dave_den, looks like what I saw before and just copied it ... https://github.com/VertigoRay/salt-osx-grains/blob/master/salt_osx_grains.py#L9-L12
18:18 dave_den VertigoRay: not sure what's going on there
18:18 cdcalef joined #salt
18:18 Gareth Corey: tag team, huh?
18:18 Corey Gareth: It seemed like a fun idea.
18:19 Corey Gareth: She does improv, I do standup comedy. She can speak to the legal issues, I can speak to the technical implications.
18:19 JasonSwindle joined #salt
18:19 Gareth Corey: sounds interesting :)
18:21 KyleG1 joined #salt
18:22 Ahlee Returners do not connect directly from minions, correct?  Returners report back to master, master shoves return codes into returner?
18:23 Ahlee just enabled redis (ext_job_cache: redis in master), restarted master/minion, minion is now attempting to connect to salt:6379 (the redis port)
18:23 dave_den Ahlee: no, the minions send the return info directly to the returner
18:23 Ahlee god damnit
18:24 dave_den Ahlee: http://docs.saltstack.com/ref/configuration/master.html#ext-job-cache
18:25 ddv left #salt
18:25 Ahlee While I can understand it, not piggy-backing on existing connections is annoying.
18:26 twiedenbein joined #salt
18:26 jslusher forrest: what ports need to be opened for the master and minion to communicate?
18:26 amckinley joined #salt
18:26 jslusher thought it was just 22
18:26 ajw0100 joined #salt
18:27 forrest the master just needs to have 4505 and 4506
18:27 forrest you don't have to open those on the minions
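forrest's answer in rule form, for reference (iptables syntax; tighten the source ranges to your network, and note only the master needs these open):

```shell
# Allow minions to reach the master's ZeroMQ ports:
# 4505 = publish (commands out), 4506 = return (results back)
iptables -A INPUT -p tcp --dport 4505 -j ACCEPT
iptables -A INPUT -p tcp --dport 4506 -j ACCEPT
```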
18:29 maxleonca joined #salt
18:29 maxleonca Hello everyone
18:29 maxleonca I have a small problem, documentation related actually
18:29 forrest what's up maxleonca
18:30 maxleonca I have been using salt since release 0.11 and as you know it's awesome.
18:30 jslusher forrest: the instance gets created but I can't run test.ping afterwards
18:30 forrest the minion?
18:30 maxleonca now I have to manage about 20 windows servers, and I'm looking for a way to do it.
18:30 forrest are the keys accepted jslusher?
18:30 maxleonca As you can imagine I want to use salt, but the documentation or examples for salt on windows is a bit limited
18:31 maxleonca any idea where can I look for more detail or examples for that?
18:31 jslusher forrest: yes they are
18:31 jslusher forrest: and I can ping the instance manually
18:31 jslusher forrest: this is in a vpc
18:32 forrest maxleonca, how about this? https://github.com/saltstack/salt-winrepo
18:32 cdsrv pass_by_value
18:32 forrest jslusher, what happens when you try to do test.ping?
18:33 jslusher forrest: times out
18:33 maxleonca hmmm, thanks forrest... I'll start there.
18:33 jslusher forrest: returns nothing
18:33 cdsrv pass_by_value, it was in fact nothing. or, maybe something that went away.
18:34 elsmorian joined #salt
18:34 maxleonca if I may
18:34 maxleonca jslusher: what VPC configuration do you have
18:34 maxleonca jslusher: is it private/public or just public?
18:34 cdsrv looked like maybe the 'debug' and 'verbose' flags weren't working fully in the older version and after it upgraded, it was spitting out a bunch of stuff on the page that wasn't there before
18:35 maxleonca in my experience with VPC 99% of times is the issue is with the security zones.
18:35 maxleonca sorry security groups
18:35 maxleonca it may very well be that ICMP is not being allowed
18:36 cdsrv eventually reinstalled salt & saltmaster, restarted everything,
18:36 cdsrv halite throws some errors when saltmaster restarts, it stopped doing it and then started again.
18:36 cdsrv terrible debugging , I know. but its working now.
18:36 cdsrv :)
18:38 cdsrv pass_by_value, its better now, does not throw any errors.
18:38 cdsrv here's what is running:
18:39 jslusher forrest: I think it's a vpc thing
18:39 jslusher I can't see the ports 4505 and 4506 I opened on the master from the minion
18:39 jslusher forrest: but I can see ssh
18:39 cdsrv centos + epel testing repo, salt installed via yum, halite installed via pip
18:39 forrest jslusher, ok cool
18:39 forrest you should have more issues I think jslusher
18:39 forrest :P
18:39 forrest clearly not enough roadblocks here
18:40 forrest weird
18:40 forrest you can't telnet to them or anything?
18:40 cdsrv all are latest version now, and everything seems to be playing nicely as long as they got installed in the right order.
18:40 mannyt joined #salt
18:40 cdsrv jslusher, turn off iptables to test
18:40 forrest yep I agree with cdsrv
18:41 cdsrv you can go add proper rules later,
18:41 jslusher forrest: I can get into both the master and the created minion via ssh. I can see the key from the master. I can ping the minion from the master.
18:41 DredTiger joined #salt
18:41 jslusher disabling iptables doesn't help
18:42 forrest yea something else is blocking it then
18:42 forrest unless you didn't open the ports right :P
18:43 forrest what does a netstat -aln | grep 4505 show?
18:43 cdsrv right,
18:43 cdsrv or use 'ss -na'
18:43 cdsrv on the master,
18:43 cdsrv you should see the two listening ports
18:43 forrest yep
18:43 cdsrv if the service is running of course
18:44 jslusher forrest: tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN
18:44 forrest ok you should be good to go then.
18:45 sroegner joined #salt
18:45 cdsrv now from the remote side,
18:45 cdsrv install netcat
18:46 cdsrv run 'nc -z saltmaster 4505'
18:46 cdsrv it should say Connection to saltmaster 4505 port [tcp/*] succeeded!
18:47 cdsrv if not, then its some firewall issue somewhere on your network.
18:47 jslusher nc -v 10.0.0.11 4505
18:47 jslusher Ncat: Version 6.40 ( http://nmap.org/ncat )
18:47 jslusher Ncat: Connection refused.
18:47 cdsrv ok then
18:48 jslusher yeah, I think it's the vpc security groups, but I've opened the ports on what I think should be opened
18:48 cdsrv thats all it is.
18:48 cdsrv no more voodoo , just get the path cleared
18:48 jslusher PORT     STATE  SERVICE
18:48 jslusher 22/tcp   open   ssh
18:48 cdsrv "iptables -L" ?
18:49 redondos joined #salt
18:49 cdsrv [root@saltmaster ~]# chkconfig | grep iptables
18:49 cdsrv iptables    0:off  1:off  2:off  3:off  4:off  5:off  6:off
18:50 DredTiger Seems I spoke too soon the other day with regard to being able to configure my Mac w/ salt. :-(
18:50 DredTiger I've been using salt w/ Linux until just yesterday
18:50 cdsrv jslusher, also do "service iptables stop"
18:51 cdsrv cause it might still be running even if you chkconfig it off
18:51 cdsrv (assuming centos)
18:51 cdsrv what OS are you running on the saltmaster?
18:51 DredTiger I set up to simple states and ran "salt-call --local state.highstate -l debug"
18:52 DredTiger s/to/two/
18:53 Ryan_Lane joined #salt
18:54 xmltok joined #salt
18:54 maxleonca jslusher: Go to the AWS panel and check the security groups, if the instances are in different VPC networks, your problem is there.
18:55 cdsrv oh there you go, max
18:55 DredTiger Err... never mind
18:55 DredTiger there is no /etc/salt/minion file
18:55 jslusher maxleonca: same vpc, same subnet
18:56 DredTiger Nor is there anything in /etc/salt/minion.d
18:56 dranger joined #salt
18:56 foxx[cleeming] joined #salt
18:56 foxx[cleeming] joined #salt
18:56 cdsrv js, can you confirm the iptables service is 'off'
18:57 DredTiger I installed following the directions at http://docs.saltstack.com/topics/installation/osx.html
18:57 wilywonka joined #salt
18:57 DredTiger Shouldn't that have installed a default minion config file?
18:57 cdsrv dred, no
18:58 cdsrv dred, all you need in the minion file is the key=value for the master
18:58 cdsrv to start at least
18:58 cdsrv assuming its not the same machine of course
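The minimal minion file cdsrv describes can be sketched like this (the hostname is illustrative; for the masterless quickstart DredTiger is following, `file_client: local` replaces the master line):

```yaml
# /etc/salt/minion -- minimal sketch (hostname is illustrative)
master: saltmaster.example.com

# For a masterless setup driven by `salt-call --local`, skip `master`
# and have the minion read states from its own filesystem instead:
# file_client: local
```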
18:59 jslusher cdsrv: iptables is stopped
18:59 jslusher cdsrv: still refusing my connection
18:59 cdsrv ..
18:59 cdsrv 'ss -na' from saltmaster?
19:00 cdsrv LISTEN     0      100                                                         *:4505                                                       *:*
19:00 DredTiger Interesting
19:00 DredTiger I was doing the "Masterless Quickstart" http://docs.saltstack.com/topics/tutorials/quickstart.html
19:00 cdsrv dred, ok. thats going to be trickier. do you have another master running?
19:01 DredTiger Not one I want to mess with at all. I just wanted to set up some "self contained" states for this new laptop...
19:02 DredTiger I guess I could install and run the master and minion locally
19:02 DredTiger but I didn't think that was required.
19:03 jslusher cdsrv: I don't see it's listening now
19:03 cdsrv try the salty-vagrant stuff, its a good example of masterless minion
19:03 maxleonca jslusher, do you have any rules on the inbound security group?
19:03 cdsrv js, 'service salt-master restart'
19:04 jslusher cdsrv: yep. now it sees it
19:04 cdsrv max, looks like the master service isn't running if 'ss -na' does not list the port
19:04 cdsrv cool!
19:05 jslusher cdsrv: still can't test.ping to the minion for some reason
19:06 linjan joined #salt
19:07 davet joined #salt
19:08 Ryan_Lane joined #salt
19:09 forrest Ryan_Lane, in the air again?
19:10 xmltok joined #salt
19:10 Ryan_Lane forrest: nope. just bad wifi
19:10 forrest ahh ok
19:11 maxleonca jslusher: very dumb question, what is the result of "salt-key -L"?
19:12 forrest maxleonca, List all public keys on this Salt master: accepted, pending, and rejected
19:13 maxleonca I mean on jslushers environment
19:13 mapu joined #salt
19:13 maxleonca I had problems once using the short hostname instead of the actual name of the accepted key, which had the whole name.domain.tld
19:16 redondos joined #salt
19:16 snave joined #salt
19:17 bastion2202 joined #salt
19:18 jslusher maxleonca: the ports are open between the master and minion
19:18 jslusher maxleonca: the key is accepted
19:18 cdsrv yeah!!
19:18 jslusher cdsrv: but I still can't test.ping
19:19 jslusher and I can't even run a salt-call
19:19 jslusher on the minion
19:19 jslusher wait
19:19 jslusher I can with sudo
19:20 jslusher no. can't
19:20 jslusher I was on the master
19:21 jslusher so, salt-minion didn't install on the minion
19:21 jslusher after the deployment
19:22 jslusher this is still effed
19:24 mannyt joined #salt
19:24 NotreDev joined #salt
19:24 JasonSwindle left #salt
19:28 jslusher looks like I get to learn Chef
19:28 dcolish :*
19:28 forrest jslusher, lol
19:28 jslusher and throw away 6 months of work
19:28 forrest I imagine your boss in two weeks 'why is chef not doing exactly what we need and this functionality is broken!'
19:29 jslusher forrest: he apparently used it for years at his other place
19:29 jslusher probably set up by someone else
19:29 forrest yea unless he implemented it, doesn't count
19:31 jslusher I would like to be able to laugh about it, but I'm pretty uh… salty right now
19:31 forrest yea
19:31 maxleonca so the problem is that the minion didn't bootstrap
19:31 maxleonca sorry guys I got a bit lost
19:31 jslusher right
19:31 oz_akan_ joined #salt
19:32 dave_den jslusher: have you enabled debug logging on the master and minion and watched both log files?
19:32 jslusher the minion log file doesn't get created
19:32 jslusher since the minion doesn't even get installed
19:32 maxleonca so how did the minion cert get accepted, did it get preaccepted?
19:32 dave_den if the minion is not installed, how do you expect to have the salt master talk to it?
19:32 jslusher maxleonca: good question
19:33 maxleonca I think there lies the root of your problem, somehow.
19:33 cdsrv lol
19:33 cdsrv :)
19:34 maxleonca if the key gets pregenerated, then you need to find out the issue on the minion install and feed the key
19:34 jslusher maxleonca: this used to all be done automatically when using salt-cloud
19:34 maxleonca which is not that complicated, troubleshoot the minion bootstrap maybe.
19:34 maxleonca then perhaps it's broken in the release you have
19:34 maxleonca which one are you using?
19:35 jslusher maxleonca: so yes, I can do that, but the real problem is that salt-cloud is still broken
19:35 jslusher maxleonca: the very latest develop branch, which is the only one that even sort of works
19:35 maxleonca indeed, but they have an awesome turnaround on bug fixes, especially on something as big as salt-cloud
19:38 jslusher maxleonca: it's just bad timing really
19:38 RobSpectre Getting a non-descript KeyError on my top file: https://gist.github.com/RobSpectre/7605575
19:38 jslusher maxleonca: I was in the middle of demonstrating salt with salt-cloud to the new boss, and the whole thing has blown up in my face
19:39 RobSpectre Getting KeyError: development in the minion logs - not sure at all how to debug.
19:39 maxleonca jslusher: sorry to read it man.
19:41 noob2 joined #salt
19:42 harobed joined #salt
19:46 cdcalef joined #salt
19:47 DredTiger late to the conversation but I thought salt-cloud was being merged into mainline salt?
19:48 forrest DredTiger, they're working on it
19:48 forrest it was put into the main repo already
19:54 dwyerj joined #salt
19:54 NotreDev joined #salt
19:55 Gifflen joined #salt
19:57 amckinley hm, i seem to have broken my salt dev environment. im working on developing a new state module. ive been testing it with salt-call, after setting up my environment with these instructions: http://docs.saltstack.com/topics/hacking.html
19:57 amckinley whenever i try and invoke functionality in my state module, im getting "Specified state circus.watcher is unavailable."
19:58 amckinley anyone got a minute to help me figure out what i did wrong? looks like salt-call is failing to find my custom modules
19:58 amckinley but its finding my sls hierarchy, which is rooted at the same path as my new state
20:02 maxleonca amckinley: can you post somewhere your sls?
20:02 amckinley maxleonca: sure one sec
20:02 KyleG joined #salt
20:02 KyleG joined #salt
20:02 amckinley http://pastebin.com/XS2HW9Ab
20:03 dave_den amckinley: where are you putting your custom state module on the master?
20:03 wilywonka joined #salt
20:03 amckinley dave_den: im running a masterless minion
20:04 amckinley dave_den: let me put up my config file
20:04 dave_den where are you putting your custom state modules?
20:05 dave_den by default they will be loaded from /etc/salt/_states
20:05 amckinley dave_den: http://pastebin.com/n71Bwn2V
20:06 amckinley my modules are in /home/austin/src/salt-cf/states/_states
20:06 amckinley and my sls files are in /home/austin/src/salt-cf/states
20:06 dave_den amckinley: you're configuring the modules_dirs. you need to put that path in states_dirs
20:06 noob2 left #salt
20:06 dave_den modules_dirs is for execution modules
20:06 dave_den states_dirs is for states modules
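The distinction dave_den draws maps onto two separate minion config options (sketch using amckinley's paths; the `_modules` path here is hypothetical):

```yaml
# Loaded as execution modules (callable via salt-call module.func):
module_dirs:
  - /home/austin/src/salt-cf/states/_modules   # hypothetical path

# Loaded as state modules (usable from SLS files):
states_dirs:
  - /home/austin/src/salt-cf/states/_states
```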
20:07 amckinley *facepalm*
20:07 dave_den ;)
20:07 amckinley i was doubly-confused to be getting log messages from other modules in that directory, implying that things were getting loaded
20:07 amckinley just not as states, apparently :)
20:07 dave_den yeah, it's an important distinction
20:08 Psi-Jack Is there a salt module to handle alternatives properly for CentOS?  Googling for "salt alternatives" wasn't very useful. :)
20:08 Psi-Jack Unless I wanted a spice alternative to salt. lol
20:09 amckinley dave_den: yep, that worked! thanks so much for your help
20:09 cachedout Psi-Jack: The 'alternatives' state is what you're looking for
20:09 dave_den amckinley: awesome! no prob
20:09 Psi-Jack Aha, found it.
20:09 cachedout Psi-Jack: http://docs.saltstack.com/ref/states/all/salt.states.alternatives.html?highlight=alternatives#salt.states.alternatives
20:10 Psi-Jack Hmmm, so there's a .install, I'm guessing it won't install it multiple times if it's already there.
20:19 Psi-Jack Okay. When making a require, I want to make the require depend on another state in the sls file.  Is this correct? http://paste.linux-help.org/view/6890c9ac
20:19 Psi-Jack That line 1 begins with a /
20:19 JulianGindi joined #salt
20:21 dave_den - require:\n  - alternatives: /usr/bin/pecl
20:21 Psi-Jack I see. :)
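Written out as an SLS fragment, the requisite dave_den gives looks like this (a sketch — the surrounding state is hypothetical; the point is that the requisite names the state module, alternatives, plus that state's name):

```yaml
# Hypothetical state that must run after the alternatives state
# whose name is /usr/bin/pecl:
update-pecl-channel:
  cmd.run:
    - name: pecl channel-update pecl.php.net
    - require:
      - alternatives: /usr/bin/pecl
```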
20:29 Linz joined #salt
20:30 rlarkin joined #salt
20:34 JulianGindi joined #salt
20:39 Psi-Jack Heh fun..
20:39 Psi-Jack I have a state that's using pecl that, every state.highstate, it installs the pecl. Successfully, but every time.
20:40 cachedout Psi-Jack: I' m rewriting the pecl stuff today as a matter of fact.
20:40 Psi-Jack cachedout: Heh, Ahhhhh
20:40 forrest cachedout, nice
20:40 cdsrv joined #salt
20:40 Psi-Jack So this is a known bug?
20:40 Psi-Jack Somehow a pecl just keeps reinstalling every time?
20:40 forrest cachedout, you're writing some tests too right? :P
20:42 redondos joined #salt
20:43 cachedout Heh. We'll see what I can get done before my meeting in twenty minutes. :]
20:44 forrest meetings??
20:44 forrest boooooo
20:44 blee_ joined #salt
20:44 Psi-Jack Ahhhh, I found out what's wrong, and my boss actually reported a bug about it. If pecl fails, it reports success. The pecl I see keep succeeding isn't actually installed when I pecl list.
20:45 forrest lol
20:45 cachedout Psi-Jack: That's the very bug I'm fixing.
20:45 Psi-Jack Nice. :)
20:45 mannyt joined #salt
20:45 cachedout pecl is pretty bad at reporting a correct return status, so we'll just re-examine the list of installed packages and compare against that.
20:45 Psi-Jack cachedout: The problem is upstream as well, though. https://bugs.php.net/bug.php?id=63935
20:45 Psi-Jack Ahhhh
20:45 Psi-Jack cachedout: That works.
20:46 cachedout Yeah. I wish upstream had it working with return codes but c'est la vie.
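The verify-by-listing fix cachedout describes can be sketched in Python like this (the `pecl list` output format assumed here is illustrative, and only the comparison logic is shown — the real state module would shell out to pecl itself):

```python
def installed_pecl_packages(list_output):
    """Parse `pecl list`-style output into a set of package names.
    Header and separator lines are skipped by requiring a version-like
    (digit-leading) second column."""
    pkgs = set()
    for line in list_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1][:1].isdigit():
            pkgs.add(parts[0])
    return pkgs

def install_verified(package, before_output, after_output):
    """Ignore pecl's unreliable exit status: the install only counts as
    a success if the package newly appears in the installed list."""
    before = installed_pecl_packages(before_output)
    after = installed_pecl_packages(after_output)
    return package in after - before
```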
20:48 Gifflen_ joined #salt
20:54 korylprince joined #salt
20:54 Psi-Jack cachedout: Well, either way, nice to know you're working on a solution salt-side. The bug's out there for pecl itself.
20:54 [diecast] joined #salt
20:55 korylprince Hello all. I have a state file that creates a new user and a new file. The new file should have the user and group of this new user. I tried using require: user: <user> but it says the requisite is not found.
20:56 forrest can you paste your state korylprince?
20:56 forrest using pastebin or something
20:56 korylprince Here is output http://pastie.org/8501966
20:57 korylprince (and the state)
20:58 korylprince On the newest salt minion/master in the ubuntu archive (salt 0.17.2)
21:04 forrest ok, so when you run that state, then go onto the machine, has the duplicity user been created?
21:05 forrest I'm wondering if maybe it's having a problem because of the group?
21:05 forrest under duplicity try to add a group.present perhaps?
21:06 korylprince I just figured it out. I was also installing the duplicity pkg. It's been a while since I created a state and made the mistake of using two top-level duplicity keys. merged those and it worked.
21:06 korylprince Sorry for the noise and thanks for the help
21:06 forrest ahh yea that makes sense
21:06 forrest np
21:06 oraqol So i implemented your suggestions, and it works now!
21:06 oraqol yayyyyyyyyyy!
21:06 oraqol thanks guys
21:07 forrest oraqol, which one?
21:07 forrest resolved the issue that is
21:08 pipps_ joined #salt
21:08 madduck whatever you may make out of http://www.infoworld.com/d/data-center/review-puppet-vs-chef-vs-ansible-vs-salt-231308, it makes me happiest to see ansible last ;)
21:09 madduck GO SALT GO ;)
21:09 madduck now whoever did that puppet scoring is a fanboy or on crack and just managed by chance to hit the right keys
21:10 madduck "Salt is the sleekest and most robust of the four"
21:10 redondos joined #salt
21:10 forrest I think that score is more than fair compared to how long it's been around.
21:11 madduck (I don't think this is anything to rest upon, but it's good to hear)
21:11 forrest yea
21:11 madduck forrest: puppet is a frickin' nightmare. Don't get me started
21:11 forrest I know
21:11 forrest I used it at my last job
21:11 madduck i am sorry
21:11 forrest me too, I had to rewrite all of our puppet for the 2.3->2.7 update
21:11 forrest well, almost all of it.
21:12 madduck i don't want to talk about puppet
21:12 forrest fine with me
21:13 dave_den hamstrung by the web UI… ugh
21:13 forrest lol
21:13 forrest I know right
21:13 forrest because when I think admins, I think a web ui!
21:14 dave_den that point was for the PHBs
21:14 dave_den must have pointy clicky graphs!
21:14 forrest heh
21:15 shutej joined #salt
21:16 shutej i'm using boto to send cloud-config syntax to tell my node what its hostname is.  but i'm really confused by amazon's vpc documentation, i was expecting to be able to reach this box using that hostname.  so if i tell my instance it's "foobar" and set the dhcp option set to "baz" shouldn't i be able to dig "foobar.baz" from another node in the same subnet?
21:16 shutej basically trying to get salt to be able to see its masters and minions by their logical names
21:17 shutej configuring things with ip-10-0-0-6.ec2.internal seems ... crazed
21:18 MK_FG joined #salt
21:20 forrest oraqol, any answer on that?
21:21 pipps_ joined #salt
21:22 forrest basepi, I don't know if issue 6456 and 8597 are related. Batch mode 'worked' fine for the issue in 8597 in the last release; it was only when the timeout for batch mode was specifically changed to the value of the timeout setting in the conf, as opposed to the 99999 that someone had it set to, that there was a problem.
21:23 forrest well, not the last release, but a few back
21:23 ajw0100 joined #salt
21:24 basepi forrest: are you not seeing the hanging behavior of 6456 then?
21:24 redondos joined #salt
21:25 basepi forrest: i was under the distinct impression that batch mode was pretty severely broken and has been for months.
21:25 basepi (i've had an open tab meaning to find time to rewrite it for months, too)
21:25 forrest basepi, in the test I did I didn't see that, but I also was only running a single state.
21:26 basepi gotcha.  well, if you see any other weird behavior, keep me posted.  otherwise i guess we'll focus on the timeout issue first then do more extensive testing.
21:26 forrest I mean it clearly 'broke' 4 months ago when Tom changed it to the timeout setting for batch, but that issue with the time problem is strictly related to that change.
21:26 forrest Yea the timeout issue is just because the value for the timeouts was changed from a straight up int of 99999 to the timeout from salt
21:27 forrest This other issue also seems to be super problematic though
21:27 basepi yup
21:27 amckinley joined #salt
21:28 oraqol which one what?
21:28 redondos joined #salt
21:28 oraqol sorry, afk for a bit
21:29 forrest which solution fixed your master problem
21:29 oraqol umm not sure
21:29 forrest and did it resolve the slow times to execute?
21:29 oraqol i did all of them
21:29 oraqol lol
21:29 forrest ok what did you change
21:29 forrest I want to file an issue to get the troubleshooting docs updated.
21:29 oraqol changed timeout and worker threads from 5 to 10
21:29 oraqol and ran state.highstate in batches of 10
21:30 forrest ok
21:30 oraqol works like a charm now
21:30 forrest cool
21:32 ajw0100 joined #salt
21:32 scristian joined #salt
21:33 redondos joined #salt
21:34 zach I've been thinking about, instead of running a bot to call /usr/bin/salt on my master, just incorporating a jabber bot within salt-minion and pulling the config from /etc/salt/minion -- would this have any undesirable results?
21:36 forrest it could if the latest data hadn't been pulled from the master
21:36 zach The idea was to have the salt-minion client connect to the jabber server to make sure the minions are online
21:40 snave joined #salt
21:44 redondos joined #salt
21:50 redondos joined #salt
21:53 ajw0100 joined #salt
22:01 jslusher joined #salt
22:02 jslusher https://groups.google.com/forum/#!topic/salt-users/vCGDr4lVHx0
22:02 jslusher can anyone confirm whether I'm doing something wrong or this is another bug?
22:03 bhosmer joined #salt
22:03 forrest ahh jslusher, aka, whipping boy
22:04 jslusher forrest: feeling pretty whipped right now
22:04 forrest lol
22:04 jslusher forrest: but dammit if I want to go learn Chef
22:04 forrest I have no idea what the status is with that, UtahDave was saying it was looking fine. Hey basepi, is UtahDave in the office today?
22:08 snewell joined #salt
22:08 steveoliver jslusher++
22:08 steveoliver :)
22:08 forrest you don't wanna write ruby jslusher?
22:09 forrest I mean, when I think fun, I think writing ruby...
22:09 JulianGindi joined #salt
22:09 jslusher forrest: precisely. ruby = happyfuntime
22:10 zach ruby == language of the devil
22:10 forrest I won't go that far
22:10 zach I just did ;)
22:10 forrest rails however
22:10 zach no need to
22:10 forrest lol
22:11 zach Python was the spawn of satan for me until I really started to learn it, while Ruby still is satan
22:11 zach Really starting to love Python
22:11 forrest heh
22:13 zach I'm going to do something insane and make a salt-jabber, like mentioned above
22:14 zach The only thing it will be doing is joining a MUC in jabber. Kind of pointless aside from being able to see when new minions come online or go offline in real time
22:14 forrest cool, make sure to document it then update https://github.com/saltstack/salt/issues/8682
22:14 zach Yeah, I'm trying to sort out how I want to configure it still
22:14 zach and/or which python libs to use
22:15 zach SleekXMPP is nice, but I feel it's too large for what I need - open to suggestions though
22:15 avdhoot joined #salt
22:15 forrest I don't mess with jabber much so I'm not very familiar with the libs
22:16 avdhoot can we use Reactor with multiple environments, like prod and stage?
22:17 zach forrest: yeah - I'd almost rather have it join IRC
22:17 zach forrest: I think that would be much more popular
22:18 vkurup_ joined #salt
22:18 forrest well, if you're gonna do that you should just write a hubot plugin or something
22:18 forrest or Willie: http://willie.dftba.net/
22:18 forrest if you want a python bot
22:19 zach I actually looked into hubot the other day, seemed really cool
22:19 forrest yea, other than the fact it's in node
22:19 zach yeah, that was the only downside
22:20 zach I will work at getting jabber working first since that is what we're using at work
22:20 forrest pretty big downside :P
22:20 zach Then IRC
22:20 forrest yea get jabber working
22:20 forrest exactly
22:20 zach Currently there is just one jabber bot that runs on the master
22:20 forrest ahh ok
22:20 zach using the acl to limit what access it has, and then a system call to salt '*web*' acl'd_cmd
22:21 zach Seems like a bad way to do it since I could just integrate it directly into salt as it is all python
22:21 worstadmin joined #salt
22:24 forrest seems ok to me as long as you made sure it was easy to plug in
22:25 shadowsun Hrm
22:25 shadowsun new slackware
22:26 redondos joined #salt
22:31 redondos joined #salt
22:31 oz_akan__ joined #salt
22:32 aleszoul3k joined #salt
22:33 oz_akan__ joined #salt
22:38 Ryan_Lane joined #salt
22:38 Ryan_Lane joined #salt
22:44 redondos joined #salt
22:45 oraqol So now there's a whole chunk of minions that I just attempted to add to the master but they don't show up when i run salt-key -L
22:45 oraqol i can ping the minion from the master and vice versa
22:45 oraqol the minion config is pointing to the master ip
22:45 forrest can you hit the master on port 4505, and 4506 from the minions?
22:47 sroegner joined #salt
22:47 oraqol I can using ping <masterip> -p 4505/4506
22:48 forrest ping is icmp
22:48 forrest try telnet
22:48 forrest you need to make sure tcp is working
22:48 forrest and what OS uses ping where -p is the port and not the pattern?
22:48 oraqol ok its failing
22:49 oraqol network related
22:49 oraqol goddamit
22:49 oraqol thanks
22:49 forrest lol np
22:49 forrest I think your -p option was for pattern :P
22:49 oraqol you right
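Since ping speaks ICMP and has no notion of ports, a TCP check (what `telnet` or `nc -z` does) is the right test — a small Python sketch:

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds --
    equivalent in spirit to `nc -z host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

e.g. `tcp_port_open("10.0.0.11", 4505)` from a minion should return True once the master's publish port is reachable through the security groups.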
22:52 UtahDave joined #salt
22:52 redondos joined #salt
22:55 [diecast] joined #salt
22:57 torrancew forrest: given that ICMP has no notion of ports ;)
22:57 forrest yea
22:57 forrest I had that discussion with a coworker
22:57 forrest had to explain it
22:57 torrancew "What port does ping use" is one of my favorite trick questions for interviews :)
22:58 honestly ALL OF THEM
22:58 sgviking joined #salt
22:58 honestly did you know only root can ping?
22:58 torrancew of course
22:58 honestly on most systems ping is setuid
22:58 torrancew it's a raw socket
22:58 honestly yeah
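What honestly and torrancew are getting at — ping needs a raw socket, which is a privileged operation, hence the setuid bit (or CAP_NET_RAW) on /bin/ping — is easy to demonstrate (sketch):

```python
import socket

def can_open_raw_icmp():
    """True if this process may open a raw ICMP socket. Normally only
    root (or a process with CAP_NET_RAW) can, which is why ping is
    typically setuid or carries that capability."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    except PermissionError:
        return False
    s.close()
    return True
```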
22:59 zach I wish salt '*' test.ping would return false if they don't respond
22:59 honestly did you also know the linux kernel has a capability system that allows very fine-grained privilege settings, and for some reason it's broken on debian?
22:59 * honestly grumbles
22:59 zach honestly: I bet ubuntu broke it
22:59 torrancew yeah, caps are handy
23:00 honestly lol
23:00 honestly I needed to do raw pinging in a python script
23:00 honestly I cursed a lot
23:00 redondos joined #salt
23:00 redondos joined #salt
23:00 forrest lol
23:01 honestly it works perfectly in arch
23:01 honestly exactly the way I expected it to work
23:01 honestly and everything looks exactly the same way in debian, except it doesn't work
23:01 * honestly stops himself
23:01 forrest relax
23:02 forrest deep breaths
23:02 zach just blame canonical
23:02 amckinley joined #salt
23:06 xmltok joined #salt
23:07 gasbakid_ joined #salt
23:07 honestly why would I blame canonical
23:07 Corey honestly, why wouldn't you?
23:07 jesusaurus because you can
23:07 honestly did they secretly commit patches to break debian?
23:07 forrest lol
23:08 Corey honestly: I generally blame Canonical. Or the debbil. Or the weather.
23:08 honestly I blame Guido.
23:08 honestly fucking hell, urllib3
23:08 Corey I'm Jewish; we were the last generation's scapegoat. It's Canonical's turn.
23:08 honestly lol
23:08 shadowsun lol
23:09 zach Canonical has this thing about applying a patch over a patch to fix a patch from another patch
23:09 zach and then you have Ubuntu
23:09 zach or rather, and now you have Ubuntu ;-)
23:10 zach they patch things that don't need patching and break the package
23:10 honestly well
23:10 honestly I don't think any such canonical breakage can possibly have made it into debian wheezy
23:10 honestly they would've had to start 10 years ago!
23:11 zach nah, they do a lot of stuff for Debian now
23:11 zach Debian is not the same as it once was
23:11 zach Ubuntu has a lot of influence on Debian now somehow
23:11 honestly that was just a joke about debian
23:11 zach That joke no longer really applies to debian but Redhat ;-)
23:12 forrest ugh
23:12 zach So many outdated packages on RHEL
23:13 forrest zach, let's please not discuss that
23:13 forrest it infuriates me
23:13 zach :)
23:13 honestly I guess I should start running my servers on arch
23:13 forrest rhel 7, will finally have python 2.7
23:13 forrest but not as the default
23:13 forrest just included from what I understand
23:13 abe_music joined #salt
23:13 forrest lazy bastards
23:14 forrest 'whaaaa, we do not want to rewrite YUM', well boo friggin hoo
23:15 zach I have python 2.7 and 3.x on my rhel boxes
23:15 forrest from the extras repo
23:15 zach nah, I rolled my own packages :-)
23:15 forrest even worse, lol
23:16 cachedout joined #salt
23:16 zach I have built so many rpms where I am at by hand, I could do it with my eyes closed now
23:16 forrest alright cachedout, you had your 20 minutes, you fixed everything ever right?
23:16 forrest solved global warming and such?
23:16 forrest zach, are you building them with mock though?
23:17 zach forrest: I do now, at first I was just using 'rpmbuild'
23:17 cachedout forrest: I've been doing other things, but I'll ensure there are tests there. We're really committed to getting as many tests in as possible.
23:18 redondos joined #salt
23:18 forrest cachedout, I was just joking with you about the pecl stuff man
23:18 zach I've been a Debian/BSD admin longer than RHEL, I have always hated RPM
23:18 zach and still do
23:18 forrest sorry I forgot a smily face or a /s
23:18 forrest zach, mock makes it the worst
23:19 forrest *smiley
23:19 zach forrest: I've been considering just adding my builds to a folder and letting salt handle the files instead of building an RPM heh
23:19 forrest zach, ugh
23:19 zach super ghetto :D
23:20 forrest I would fly to Austin and slap you
23:20 forrest then rm -rf all that garbage
23:20 forrest how dare you do something so terrible
23:20 forrest you sound like a php developer :P
23:20 zach ;-)
23:20 zach EWWW. How insulting.
23:20 forrest Goal achieved then
23:24 redondos joined #salt
23:34 alunduil joined #salt
23:35 oz_akan_ joined #salt
23:39 worstadmin joined #salt
23:48 jesusaurus so, whats the deal with github.com/saltstack-formulas ? are they officially supported? are they tested? how would i go about adding new ones?
23:48 forrest jesusaurus, so right now yes they are officially supported
23:48 forrest if you want to add new ones ask whiteinge, and he can add you to the org
23:48 forrest I think he was just asking that you test them on all the distros you claim things are supported on
23:50 jesusaurus cool
23:53 renoirb Hey guys, how can I send arguments to a command from the terminal, e.g. filling in what is described in salt.modules.tls.create_ca for example
23:54 forrest renoirb, http://docs.saltstack.com/ref/modules/all/salt.modules.cmd.html
23:55 forrest is that what you're looking for?
23:55 Ryan_Lane renoirb: salt 'blah' tls.create_ca arg arg2 named_arg=val named_arg2=val
23:55 renoirb oh! ok.
23:55 renoirb Did not try this
23:55 renoirb I mean without arg arg2
23:55 forrest oh sorry I was overcomplicating it, thanks Ryan_Lane
23:56 Ryan_Lane yw
23:56 Ryan_Lane the docs aren't amazingly clear on this :)
23:56 renoirb is case sensitivity important?
23:57 renoirb I tried from method signature: salt.modules.tls.create_self_signed_cert(tls_dir='tls', bits=2048, days=365, CN='localhost', C='US', ST='Utah', L='Salt Lake City', O='SaltStack', OU=None, emailAddress='xyz@pdq.net')
23:57 renoirb with: salt 'bla*' tls.create_self_signed_cert CN=docs.webplatform.org C=CA
23:57 renoirb and it fails
23:58 renoirb I'll try differently
23:58 Ryan_Lane I dunno if case sensitivity matters
23:58 renoirb ok, I quoted all of the arguments, even though none of them had spaces, and it worked.
23:58 forrest renoirb, let me know what works and I'll update the example in the docs.
23:59 forrest so "CN=docs.webplatform.org" ?
23:59 forrest or CN="docs.webplatform.org"
23:59 forrest I imagine the second?
23:59 renoirb no, it worked with salt 'bla' tls.create_self_signed_cert CN='docs.webplatform.org'
