
IRC log for #salt, 2013-09-26


All times shown according to UTC.

Time Nick Message
00:00 |Kellan| joined #salt
00:04 |Kellan| Hello All, I am trying to write a state for nagios/icinga. I want to render my config files with salt. The server needs to have one file called hosts.cfg, and inside that file there need to be a couple of lines that reference every host that salt manages. The data I need is in the grains, but I can't figure out how to query the list of hosts. Do I need to write a pillar for this?
00:06 |Kellan| This is the basic logic I am trying to achieve http://pastebin.com/kLcEKnnC
00:08 |Kellan| I just found the answer thank you ;-)
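A minimal sketch of the pattern Kellan describes (the answer itself isn't shown in the log), assuming the salt mine is configured to share each minion's addresses; the paths and mine function here are illustrative:

    # /srv/salt/icinga/init.sls
    icinga_hosts_cfg:
      file.managed:
        - name: /etc/icinga/hosts.cfg
        - source: salt://icinga/hosts.cfg.jinja
        - template: jinja

    # /srv/salt/icinga/hosts.cfg.jinja
    {% for host, addrs in salt['mine.get']('*', 'network.ip_addrs').items() %}
    define host {
        host_name    {{ host }}
        address      {{ addrs[0] }}
    }
    {% endfor %}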
00:10 raghavp joined #salt
00:10 raghavp left #salt
00:11 NotreDev joined #salt
00:11 jamescarr joined #salt
00:13 LarsN joined #salt
00:14 LarsN Does anyone here have a link to example state files that would deploy wordpress on a vanilla ubuntu or centos install?
00:18 dthom91 joined #salt
00:19 nu7hatch joined #salt
00:27 juanlittledevil I'm looking to do a highstate from within a python script and found this https://salt.readthedocs.org/en/latest/ref/clients/index.html#localclient I can instantiate this class into my script and run a cmd('*', 'test.ping') just fine. but doing a state.highstate returns with no top file found. Can one of you guys point me to some documentation as to how I may do this inside a python script?
00:39 rgbkrk joined #salt
00:41 __number5__ juanlittledevil: have you tried using the SaltCall class https://github.com/saltstack/salt/blob/develop/salt/cli/__init__.py#L256
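A minimal sketch of the LocalClient route juanlittledevil describes, assuming the script runs on the master as a user that can read the master config; the "no top file found" error usually points at a file_roots with no matching top.sls:

    import salt.client

    # LocalClient reads /etc/salt/master by default
    client = salt.client.LocalClient()
    result = client.cmd('*', 'state.highstate', timeout=300)
    print(result)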
00:46 pjs joined #salt
00:47 bhosmer joined #salt
00:50 jacksontj joined #salt
00:55 saurabhs left #salt
00:57 jamescarr joined #salt
00:58 faldridge joined #salt
01:01 faldridge joined #salt
01:04 dcolish joined #salt
01:12 Gifflen joined #salt
01:12 Gwayne joined #salt
01:13 halfss joined #salt
01:22 jamescarr joined #salt
01:30 scristian_ joined #salt
01:40 jslatts joined #salt
01:44 Thiggy joined #salt
01:48 jamescarr joined #salt
01:54 Nexpro joined #salt
01:56 m_george|away joined #salt
01:57 Lue_4911 joined #salt
02:05 sixninetynine joined #salt
02:12 mwillhite joined #salt
02:19 xl1 joined #salt
02:29 NotreDev joined #salt
02:30 jamescarr joined #salt
02:31 Furao joined #salt
02:39 vipul joined #salt
02:39 woebtz joined #salt
02:43 gadams joined #salt
02:52 juicer2 joined #salt
02:53 redbeard2 joined #salt
02:58 redondos joined #salt
03:05 redondos joined #salt
03:06 mnemonikk joined #salt
03:12 pdayton joined #salt
03:26 seanz Good evening. Anyone here actively using salt to manage Windows machines?
03:27 faldridge joined #salt
03:31 seanz `Good evening. Anyone here actively using salt to manage Windows machines?
03:31 seanz In case the message didn't go through - couldn't tell if my IRC client was finished connecting yet.
03:31 salt_noob joined #salt
03:31 EugeneKay It did.
03:31 EugeneKay And nope, not yet.
03:34 redondos joined #salt
03:35 salt_noob Has anyone here seen a well-documented Github Boxen-like solution using Salt? https://github.com/blog/1345-introducing-boxen
03:37 berto- joined #salt
03:37 seanz EugeneKay: Hey, thanks.
03:37 seanz So you're not using salt to manage Windows machines. I'm sure someone else is. Maybe they're just idle or not online right now.
03:39 EugeneKay Yup.
03:41 Thiggy joined #salt
03:43 mwillhite joined #salt
03:43 jamescarr joined #salt
03:48 joehh seanz: we are - works well
03:49 seanz joehh: Oh nice. How long have you been using it? Any major gotchas you ran into?
03:49 joehh it was a bit hit and miss in the early days - features would be fixed, then stop working when unix improvements/bugfixes were made
03:50 joehh but it seems to have stabilised considerably since midway through the 0.15 branch
03:50 joehh also, things improved dramatically once it was built against zeromq3
03:50 seanz joehh: Are you also managing files and directories with salt?
03:51 joehh we're doing a lot of file.managed, file.recurse, hg.latest, cmd.wait and package stuff
03:51 joehh we had a minor issue with the creation of users a little while back, but I think it is either fixed or we found a simple enough workaround
03:52 seanz joehh: How many Windows machines are you managing with salt?
03:52 joehh right now, it is probably around 20-25 - we have a number of small to mid sized groups at various customer sites
03:53 seanz Thanks for bearing with all the questions. Are you working for an IT company?
03:53 joehh I think it is about to be rolled out on all internal staff machines and that will add about another 30 or so over the next month or two
03:54 EugeneKay Typically people use Group Policy on Windows boxes(I do)
03:54 joehh no, we're somewhere between a software company and an engineering services company
03:54 seanz Interesting.
03:54 EugeneKay But I can see the advantage of Salting
03:56 seanz EugeneKay: Hm. You can't manage group policy through salt, can you?
03:56 EugeneKay No clue.
03:56 seanz joehh: Thanks for the info.
03:57 packeteer scalability-+| am I the only one thinking this flexibility becomes an issue?  <- only if not documented correctly. especially for basic tasks
03:57 joehh no worries
03:57 packeteer ps. just reading my backlog
03:59 joehh EugeneKay: interesting thought about gpo - I think the main benefit we get out of salt is repeatability in different environments
03:59 joehh but that could just be my lack of knowledge about gpo
03:59 seanz GPO is definitely Windows only.
03:59 sgviking joined #salt
04:00 NV joined #salt
04:03 joehh that is the other side, we have linux machines there too and are looking to move as much to linux as possible in time
04:10 forrest joined #salt
04:12 redondos joined #salt
04:25 Katafalkas joined #salt
04:40 mianos joined #salt
04:41 mianos hi, is there a good way to run things on the master aside from installing a minion on the master? for example I want to run 'grunt' on static files before distributing them to the minions
04:42 mgw joined #salt
04:43 jacksontj joined #salt
04:44 packeteer install minion, the apply state specific for the master?
04:44 packeteer the/then/
04:45 Katafalkas joined #salt
04:45 jalbretsen joined #salt
04:51 mianos yes OK
04:51 mianos so there is no 'aside'
04:51 mianos we'll do that
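A minimal sketch of packeteer's suggestion: run a minion on the master and give it a master-only state (the minion id 'saltmaster' and the paths are illustrative):

    # /srv/salt/top.sls
    base:
      'saltmaster':
        - build.static

    # /srv/salt/build/static.sls
    run_grunt:
      cmd.run:
        - name: grunt
        - cwd: /srv/static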
04:55 packeteer wait a sec, I know mianos...
04:56 packeteer do you ride bikes and write trading engines?
04:56 packeteer or used to
04:58 mianos still ride bikes, but not trading engines now :)
04:59 mianos working in a startup
04:59 mianos you from CPU?
04:59 mianos hm, craig?
05:00 mianos anyway, running minion on master is fine as when I use another machine for building packages I'll have a separate machine
05:00 mianos just being scabby
05:01 packeteer he Rob, how goes it?
05:01 packeteer hey even
05:04 Lue_4911 joined #salt
05:07 halfss joined #salt
05:26 Katafalkas joined #salt
05:27 cyp joined #salt
05:45 mianos joined #salt
05:50 felixhummel joined #salt
05:53 dthom91 joined #salt
05:58 berto- joined #salt
06:01 jalbretsen joined #salt
06:01 jalbretsen left #salt
06:02 middleman_ joined #salt
06:02 halfss joined #salt
06:03 middleman__ joined #salt
06:04 middleman_ joined #salt
06:11 bzf130_mm joined #salt
06:13 benno joined #salt
06:14 jinnko joined #salt
06:14 benno left #salt
06:15 scooby2 joined #salt
06:19 NV has anyone looked at a state for vyatta (and derivatives)?
06:23 redondos joined #salt
06:25 packeteer i watched one of Tom's recent interviews and he mentioned something in passing about using salt to manage network devices
06:26 packeteer and i know of old projects like rancid (uses expect to manage devices)
06:26 packeteer other than that, i know nothing  :)
06:27 rmt jcockhren, back to my Q yesterday... Ideally, I'd take key acceptance out of the critical path for the initial configuration of a server.  So I guess a standalone minion, and loading the state config and role-configuration from a known source.  The Q is then just about exporting the applicable state & configuration for a given minion/role ahead of time.
06:28 SpX joined #salt
06:30 ml_1 joined #salt
06:35 anuvrat joined #salt
06:42 halfss joined #salt
06:44 az87c joined #salt
06:44 elsmorian joined #salt
06:46 MrTango joined #salt
06:49 malinoff Hi all
06:51 kleinishere joined #salt
06:52 linjan joined #salt
06:58 redondos joined #salt
06:59 packeteer g'day
07:09 ronc joined #salt
07:12 qba73 joined #salt
07:22 gildegoma joined #salt
07:26 balboah joined #salt
07:29 redondos joined #salt
07:32 halfss joined #salt
07:43 ml_1 joined #salt
08:00 redondos joined #salt
08:00 geak joined #salt
08:02 krissaxton joined #salt
08:07 bk201 joined #salt
08:07 zakm joined #salt
08:12 goodbytes joined #salt
08:13 goodbytes Is anyone actively using saltstack to deploy openstack?
08:25 balboah so many stacks
08:35 goodbytes cloud this, cloud that.. stack this, stack that.
08:35 packeteer tru dat
08:39 geak_ joined #salt
08:40 krissaxton joined #salt
08:41 genkimind joined #salt
08:49 ahale goodbytes: we're using salt with openstack swift (not completely yet but moving towards it)
08:52 goki joined #salt
08:52 goodbytes ahale, ok. I have been working on an openstack rollout using salt stack for some time. This is for a production setup.. But I am considering if I need to move to Chef instead and use some of the work already created, for better support and maintainability
08:52 aleszoulek joined #salt
08:53 goodbytes i'm only one person, there are limits to what I can do :) so it's probably best I use whatever is backed by the community
08:53 zooz joined #salt
08:54 goodbytes ahale, what do you use for the rest of your openstack cluster?
08:55 ahale my group just does swift, but I think the other guys are a mix of chef and ansible, and some other teams use salt from what I hear
08:55 goodbytes oh ok
08:57 ahale you're right though, its a huge amount of work.. we have been converting production from homegrown system to salt for a while now and still have a lot to do
09:00 goodbytes ahale, just out of curiosity, what kind of vm storage back-end are you using? I'm thinking the VM image files.
09:02 ahale what do you mean? the disk files for active machines or..?
09:02 goodbytes we also have our own homegrown system, but it doesn't do configuration management. Only bootstrapping (pxe install, post install scripts, dhcp, dns, ip management and documentation, etc.) like crowbar.
09:02 goodbytes ahale, yeah exactly
09:02 bdenning joined #salt
09:03 ahale just local disk on the host
09:04 ahale (had to check we say that in public - we == rackspace btw)
09:05 goodbytes oh cool, that explains :)
09:09 goodbytes ahale, I'm actually surprised to hear that rackspace uses salt.
09:10 ahale my team loves it, it really fits in with how we work and our existing internal tools
09:11 ahale there's a rackspace logo at the bottom of http://saltstack.com/ :)
09:12 goodbytes ahale, I feel the same way. I initially started diving into puppet, spending quite some time learning how to write manifests that suited our deployment. But then I found Salt and switched. It took me literally less than half the time I spent on puppet before I had a working setup in production that could deploy most of our services.
09:12 ahale i know the US guys (I am UK) had some sprints and events too, lots of interest
09:12 goodbytes We run a mix of Solaris (OpenIndiana) and Ubuntu. Approximately 50 servers
09:13 ahale oh cool, yeah exactly..
09:14 goodbytes oh ok. So would you think that there is a chance that rackspace would contribute to the ongoing development of salt stack states that could deploy openstack? much like there is for chef already?
09:15 goodbytes to be honest, I'm not sure if rackspace had anything to do with the chef openstack recipes.
09:16 halfss_ joined #salt
09:16 ahale I don't know of any plans, but it doesn't sound unlikely.. I think some groups publish chef stuff and contribute
09:17 Furao_ joined #salt
09:19 natim How does pillar_roots work?
09:19 natim Does it load and merge all top.sls in each pillar_roots?
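To natim's question: broadly, yes - the top.sls from each environment under pillar_roots is evaluated and the results are merged into a single pillar top. A minimal master-config sketch (paths illustrative):

    pillar_roots:
      base:
        - /srv/pillar
      dev:
        - /srv/pillar-dev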
09:21 ricicle joined #salt
09:22 ricicle Hi there, how much of http://docs.saltstack.com/ref/states/all/salt.states.rvm.html do I need to include in order to just install sinatra and bundle gems please?
09:25 krissaxton joined #salt
09:26 ricicle IOW, what's the simplest stanza in an sly file to install these two gems.
09:29 MrTango joined #salt
09:34 ricicle *sls file, obv
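If the system ruby is acceptable, the rvm machinery can be skipped entirely; a minimal sketch of the stanza ricicle asks about, using gem.installed short declarations:

    sinatra:
      gem.installed

    bundler:
      gem.installed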
09:42 ggoZ joined #salt
09:45 faldridge joined #salt
09:59 ricicle joined #salt
10:01 krissaxton joined #salt
10:02 qba73 joined #salt
10:03 ml_1 joined #salt
10:42 geak joined #salt
10:47 geak_ joined #salt
10:55 adepasquale joined #salt
11:02 krissaxton joined #salt
11:05 redondos joined #salt
11:06 geak joined #salt
11:08 geak_ joined #salt
11:22 saysjonathan joined #salt
11:29 bhosmer joined #salt
11:36 terminalmage joined #salt
11:36 redondos joined #salt
12:01 carlos__ joined #salt
12:02 jamescarr joined #salt
12:04 saysjonathan joined #salt
12:07 blee joined #salt
12:07 redondos joined #salt
12:09 saysjonathan joined #salt
12:11 letterj joined #salt
12:12 saysjonathan joined #salt
12:15 jslatts joined #salt
12:18 TheCodeAssassin joined #salt
12:20 oz_akan_ joined #salt
12:21 oz_akan_ joined #salt
12:22 elsmorian joined #salt
12:23 dthom91 joined #salt
12:23 linjan joined #salt
12:31 copelco joined #salt
12:33 bastion2202 joined #salt
12:33 MrTango joined #salt
12:34 bastion2202 System Information: Model: iMac (Late 2009) • CPU: Intel Core i7 860 (8 Threads, 4 Cores) @ 2.80 GHz • Memory: 16.00 GB • Uptime: 10 Hours • Load: 68% • OS: Mac OS X 10.8.5 (Mountain Lion) (Build 12F37)
12:34 xl1 left #salt
12:37 piffio joined #salt
12:38 mgw joined #salt
12:39 redondos joined #salt
12:46 tyler-baker joined #salt
12:54 __number5__ can salt-ssh use public key authentication?
12:56 bcc joined #salt
13:00 isomorphic joined #salt
13:01 ggoZ joined #salt
13:01 Boohbah joined #salt
13:01 felixhummel joined #salt
13:04 dthom91 joined #salt
13:06 lil_cain joined #salt
13:06 backjlack joined #salt
13:10 redondos joined #salt
13:11 racooper joined #salt
13:11 mapu joined #salt
13:12 krissaxton joined #salt
13:13 redbeard2 joined #salt
13:16 SoR I have a problem with pkg.install in my sls file using source: salt://files/package.deb
13:17 SoR [salt.loaded.int.module.cmdmod][ERROR   ] Command 'apt-get -q -y  -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef --allow-unauthenticated  install package failed with return code: 100
13:19 juicer2 joined #salt
13:22 goodbytes SoR, have you checked for errors in the apt logs in /var/log/apt/ on the minion, of course
13:22 bhosmer_ joined #salt
13:24 terminalmage joined #salt
13:24 SoR goodbytes: I'm trying to install deb package which is not in repository
13:24 adepasquale joined #salt
13:24 goodbytes SoR, oh yeah, I noticed, haven't tried installing a package using salt from salt:// source. sorry :(
13:25 SoR not a problem :)
13:25 SoR so.. salt should use dpkg rather than apt-get on this way
13:26 goodbytes yeah, nothing shows up in /var/log/dpkg.log either?
13:26 SoR no
13:27 faldridge joined #salt
13:29 brianhicks joined #salt
13:29 SoR in minion logs I have only something in /var/log/salt/minion
13:32 kenbolton joined #salt
13:33 dh joined #salt
13:36 morton_ joined #salt
13:38 mwillhite joined #salt
13:39 scooby2 joined #salt
13:39 faldridge joined #salt
13:40 ml_1 joined #salt
13:41 redondos joined #salt
13:41 jamescarr joined #salt
13:42 micah_chatt joined #salt
13:46 krissaxton joined #salt
13:51 imaginarysteve joined #salt
13:55 bemehow joined #salt
13:55 m_george left #salt
13:56 saysjonathan SoR: what happens when you try to install the package by hand, via `dpkg -i package.deb`?
13:59 bemehow what's the context of the jinja set variables when including sls files? I am trying to include a 'common' file that has a {% set variable %} statement and use it in the downstream sls file as {{ variable }}, but it complains about it. Is there any way to include variables from jinja markup?
13:59 SoR saysjonathan: it's installing ok
14:00 QauntumRiff joined #salt
14:04 alunduil joined #salt
14:04 felixhummel joined #salt
14:06 saysjonathan SoR: try running apt-get update, then running the manifest again. I think error 100 (if coming from apt) is a sources thing, iirc.
14:07 saysjonathan s/manifest/state
14:09 faldridge joined #salt
14:09 scott_w joined #salt
14:09 nu7hatch joined #salt
14:12 redondos joined #salt
14:15 SoR saysjonathan: the problem is that my package.deb is not in any repos, I want to install a standalone deb package, something like rpm -Uvh package.rpm but on Ubuntu
14:16 Furao SoR: http://docs.saltstack.com/ref/states/all/salt.states.pkg.html#module-salt.states.pkg
14:16 Furao sources
14:17 danielbachhuber joined #salt
14:19 jefftriplett left #salt
14:20 SoR Furao: thanks, it's working
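For reference, the pattern Furao points at: pkg.installed with a sources list hands the .deb to dpkg instead of resolving it through apt (the name and path here are illustrative):

    mypackage:
      pkg.installed:
        - sources:
          - mypackage: salt://files/package.deb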
14:24 ccase joined #salt
14:28 miva joined #salt
14:28 MK_FG joined #salt
14:35 [diecast] joined #salt
14:38 jmlowe left #salt
14:39 anteaya joined #salt
14:40 cnelsonsic joined #salt
14:40 opapo joined #salt
14:43 bemehow if anyone cares, I can answer my question. Yes, the jinja context gets imported just fine, but the error was confusing. It requires an import of the form {% from 'filename-relative-to-base_dir' import variable %} to pull vars into the current jinja context. The problem is it MUST use single quotes; without them it won't work (at least in the develop branch)
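A minimal sketch of the import pattern bemehow lands on; the file and variable names are illustrative:

    {# /srv/salt/common/settings.sls #}
    {% set app_root = '/srv/app' %}

    {# downstream sls file #}
    {% from 'common/settings.sls' import app_root %}
    app_dir:
      file.directory:
        - name: {{ app_root }}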
14:43 redondos joined #salt
14:43 petersabaini1 joined #salt
14:44 StDiluted joined #salt
14:47 mohae joined #salt
14:47 petersabaini1 hey all — i've got a pillar which is implemented as a python module. works beautifully EXCEPT i have to restart the salt master to make changes effective. (i did refresh_pillar and sync_all). is this expected behaviour?
14:48 cnelsonsic left #salt
14:48 higgs001 joined #salt
14:50 nu7hatch joined #salt
14:53 macduke joined #salt
14:53 Gifflen joined #salt
14:54 UtahDave joined #salt
14:55 balboah salt-call state.sls is great for running one state for a specific environment. But can you also provide specific pillars?
14:55 balboah I keep finding myself editing the top file temporarily to test stuff in dev
14:58 timoguin joined #salt
14:58 timoguin so happy about the mercurial backend. :)
14:58 timoguin 0.17 is going to be a great release, guys
15:01 jalbretsen joined #salt
15:01 p3rror joined #salt
15:04 pipps1 joined #salt
15:05 jslatts joined #salt
15:07 tempspace Has anybody come across this when trying to update salt on Ubuntu?  salt-minion : PreDepends: salt-common (= 0.16.2-2precise) but 0.16.3-1precise is installed
15:07 ioggstream joined #salt
15:07 ioggstream left #salt
15:09 joehh tempspace: how are you doing the update?
15:09 tempspace joehh: apt-get update ; apt-get upgrade
15:10 tempspace joehh: guess I have to use aptitude, huh
15:10 joehh I've seen similar things, but generally only when I run dpkg -i on a .deb directly and forget to include salt-common
15:10 joehh tempspace: worth a try
15:11 timoguin tedski, are you not using the PPA?
15:11 SunSparc joined #salt
15:12 kenbolton joined #salt
15:13 Gifflen joined #salt
15:13 mwillhite joined #salt
15:13 mmilano joined #salt
15:13 ipmb joined #salt
15:14 redondos joined #salt
15:15 tempspace joehh: same thing unfortunately
15:15 bcc how can I hold a package with salt? I know I can do apt-mark hold.. but I'm hoping for a cleaner method..
15:16 joehh what if you do dpkg -l | grep -i salt
15:17 joehh tempspace: ^^ (not bcc)
15:17 ahale bcc: i was looking at the apt module yesterday and it had set-selection stuff
15:20 tempspace joehh: it's like it's using the dependency from 0.16.2 and not from 0.16.4
15:20 bcc yeah ahale I saw the salt '*' pkg.set_selections selection='{"hold":.... but struggling to get it to work in an sls file..
15:21 ahale ah cant help there, haven't got around to adding it to my stuff yet
15:22 cwright bcc: that was only added in 0.17
15:22 cwright i don't think the docs mention that
15:22 bcc ooh.
15:23 tempspace joehh: https://gist.github.com/anonymous/fb7e45a53d6e0a72a93c
15:23 cwright yes i am waiting for 0.17 to be released for that too
15:23 bcc okay.. that would explain why then!
15:23 bcc I suppose I can always do it with cmd for now.
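A stop-gap sketch for bcc until pkg.set_selections ships in 0.17: apt-mark through cmd.run, with an unless guard to keep it idempotent (the package name is illustrative):

    hold-mypkg:
      cmd.run:
        - name: apt-mark hold mypkg
        - unless: dpkg --get-selections mypkg | grep -q hold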
15:24 devinus joined #salt
15:24 cron0 joined #salt
15:24 athit joined #salt
15:25 forrest_ joined #salt
15:26 joehh tempspace: looks painful - times like this I tend to get a bit brutal with dpkg --purge and similar tools
15:26 joehh not sure if that is suitable in your circumstances
15:26 athit hi, is it possible to call salt \* cmd.run uptime outside from the salt-master server?
15:26 tempspace joehh: I deleted salt-minion and installed it again and that worked fine, but upgrades never work
15:28 felixhummel_ joined #salt
15:31 m_george|away joined #salt
15:32 UtahDave salt-ssh is freaking awesome!!
15:32 redondos joined #salt
15:34 shinylasers joined #salt
15:34 m_george left #salt
15:34 joehh tempspace: do you run the upgrades from within salt?
15:35 tempspace joehh: I use the Ubuntu PPA, and upgrade via apt
15:35 nu7hatch joined #salt
15:35 forrest_ UtahDave, yea you've gotta bump the MOTD again :P
15:35 cloud_noob joined #salt
15:38 forrest_ So UtahDave, the one thing I'm concerned about with salt-ssh, is security within the salt roster file
15:38 mwillhite joined #salt
15:38 joehh tempspace: do you run apt from the cmd line or is the apt command controled via salt
15:38 joehh eg salt 'minion' pkg.upgrade salt-minion
15:38 tempspace joehh: command line
15:39 joehh or salt 'minion' cmd.run "apt-get upgrade"
15:39 tempspace joehh: directly on the server, not via salt
15:39 farra joined #salt
15:39 joehh tempspace: there goes that theory - I've had problems with the init scripts being too aggressive and killing off upgrades
15:40 joehh but if you are doing it via the cmd line then that should be fine. Also since you are on ubuntu, it should be upstart in control
15:41 UtahDave forrest_: what do you mean?
15:43 forrest_ So the Roster docs here: http://docs.saltstack.com/topics/ssh/roster.html don't really explain where those targets are stored
15:43 forrest_ I know it says you can attach it to any existing system, but what if you don't? It's just a plain text file hanging out somewhere that you reference in the conf?
15:43 UtahDave forrest_: yeah  /etc/salt/roster
15:44 balboah if I want high state to run automatically for new minions, is it the reactor system I should look at?
15:44 UtahDave balboah: the reactor would work.  But you could also set your minions to run a highstate when they boot up.
15:45 balboah oh, that sounds simpler for now
15:46 forrest_ UtahDave, I'd be concerned about security on that file, there's no ability for it to connect as user xyz, then sudo up right?
15:46 cmthornton joined #salt
15:46 forrest_ Don't get me wrong, I think the system is awesome, and the password issue isn't even a concern since I use keys, but I'm just curious
15:47 UtahDave forrest_: you don't have to put the password in there
15:47 UtahDave forrest_: if it doesn't have a password for the server, it will ask you if you want to add Salt's ssh key to the authorized keys on the remote server. Then it prompts you for the remote user's password
15:48 forrest_ oh nice
15:48 forrest_ that's slick
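For reference, a minimal /etc/salt/roster sketch; the hosts and user are illustrative, and passwd can be left out entirely per UtahDave's point about on-the-fly key deployment:

    web1:
      host: 10.0.0.11
      user: deploy
      sudo: True
    web2:
      host: 10.0.0.12
      user: deploy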
15:48 balboah UtahDave: do you have the configuration in your head? I fail to see it in the minion config docs
15:48 UtahDave balboah: startup_states: highstate
15:49 djn does anybody know if the salt python api supports the functions of salt-key?
15:49 UtahDave djn: yeah, they are available through the "wheel" functions.
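A minimal sketch of driving those wheel functions from Python on the master (the minion id is illustrative):

    import salt.config
    import salt.wheel

    opts = salt.config.master_config('/etc/salt/master')
    wheel = salt.wheel.Wheel(opts)

    print(wheel.call_func('key.list_all'))       # roughly salt-key -L
    wheel.call_func('key.accept', match='web1')  # roughly salt-key -a web1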
15:50 forrest_ So UtahDave, what kind of performance were you guys seeing on this for large scale systems since it doesn't user ZeroMQ?
15:50 forrest_ I assume Puppet levels of slowness?
15:51 djn UtahDave: thanks, wheel is the keyword I was looking for
15:51 balboah UtahDave: thanks! Maybe it should be listed here? http://salt.readthedocs.org/en/latest/ref/configuration/minion.html
15:51 UtahDave forrest_: I've only tested it against 1 minion so far.  but it took  1.8 seconds to run a disk.usage
15:51 forrest_ versus what sort of time with a standard salt call?
15:52 UtahDave balboah: good idea.  Would you mind adding that in?  That would be awesome.
15:53 UtahDave forrest_: 0.5 seconds with regular salt.
15:53 UtahDave just tested
15:53 forrest_ Ok cool
15:53 markm joined #salt
15:53 UtahDave forrest_: also, I'm sure as you get into the hundreds of minions salt-ssh won't scale
15:54 forrest_ UtahDave, yea I assume it will blow chunks like Puppet does, or require an outrageously beefy master.
15:54 forrest_ I was just interested to see what sort of performance you guys had been encountering
15:54 forrest_ It is a cool feature that makes selling saltstack a bit easier to people who don't want the minion running everywhere though!
15:55 ggoZ joined #salt
16:01 kenbolton joined #salt
16:01 Katafalkas joined #salt
16:03 pipps1 joined #salt
16:08 redondos joined #salt
16:09 c0bra salt-ssh isn't bundled with salt-master?
16:10 UtahDave c0bra: yeah, it is, but it's only in 0.17 RC1 and the develop branch
16:10 UtahDave it's a new feature
16:10 c0bra ohhh
16:11 derelm joined #salt
16:12 xinkeT joined #salt
16:12 Micromus_ joined #salt
16:13 piffio joined #salt
16:17 UtahDave forrest_ and c0bra:  time sudo salt-ssh '*0*' disk.usage         against 5 minions takes 2.1 seconds
16:18 benno joined #salt
16:18 forrest_ UtahDave, interesting, that's not as much of an up swing as I would expect
16:18 forrest_ What was resource usage on the master looking like?
16:19 KyleG joined #salt
16:19 KyleG joined #salt
16:19 UtahDave forrest_: how would you like me to watch that?  I'm running it from my laptop. I didn't notice anything
16:19 benno left #salt
16:19 forrest_ that's good enough for me
16:19 forrest_ I'm contemplating whether I want to spin up a ton of VMs this weekend to test
16:20 troyready joined #salt
16:20 forrest_ I'd be interested to see what 50 or 100 machines looked like
16:20 forrest_ The minion you were running salt-ssh against have the minion service installed right?
16:20 UtahDave nope
16:20 forrest_ ok cool
16:20 UtahDave no service.
16:21 carmony Hmmm, it'd be cool to just "spin up 100 machines" to test :)
16:21 jpeach joined #salt
16:21 UtahDave carmony: yeah, I'm going to do that right now
16:21 forrest_ Well, I'd have to write a script to do it over on digitalocean with their api
16:21 forrest_ UtahDave, awesome, you're going to save me like 10 dollars in cycles!
16:22 carmony UtahDave: what service are you using? some internal VMs? Rackspace? AWS? :P
16:22 UtahDave carmony: Yes.   :)
16:23 forrest_ hah
16:23 carmony LOL, all of them? :{
16:23 carmony :P
16:23 forrest_ UtahDave, are you spinning all of those up in the same DC?
16:23 whyzgeek hi guys I have a question. I am using the git module to check out some code during the build process. Obviously the assumption is that the git package is already installed. Now if I put a require there, then I can't call this state directly, as another sls is responsible for installing git. If I do an include, then there will be duplicate IDs. Is there any conditional include, i.e. include only if not already included?
16:23 forrest_ It would be interesting to see how it handles 100 machines in the same DC, versus 100 machines spread between the US/Europe/SE Asia regions.
16:24 Thiggy joined #salt
16:24 UtahDave whyzgeek: can you pastebin what you have? You can have arbitrary IDs so that they don't conflict
16:24 forrest_ whyzgeek, why would there be duplicate IDs?
16:25 whyzgeek UtahDave: that is good suggestion
16:25 whyzgeek let me do that
16:25 UtahDave carmony and forrest_: I'll probably spin up 50 on digitalocean and 50 on Rackspace
16:25 forrest_ cool
16:25 carmony UtahDave: there are days where I'm seriously jealous of your job :P
16:25 whyzgeek forrest_: because the same sls will be included
16:25 forrest_ Gotcha
16:26 whyzgeek UtahDave: there is one catch with that approach: as I'm thinking it through, I'd have to replicate all of that logic. It's not just git installation. It's copying private keys and configs for it.
16:26 whyzgeek do I have to replicate all of that?
16:27 micah_chatt joined #salt
16:27 UtahDave whyzgeek: can you pastebin what you have? I'm a bit confused
16:28 whyzgeek UtahDave: http://pastebin.com/Tp8v73Cs
16:29 whyzgeek sorry this bit is for running it
16:29 whyzgeek let me paste the checkout bit
16:30 yimmy1 joined #salt
16:30 cro joined #salt
16:30 whyzgeek UtahDave: http://pastebin.com/3vQbBxjy
16:31 whyzgeek I currently check if the private key is present
16:31 whyzgeek I didn't specifically use require here
16:31 whyzgeek because I normally run this command later
16:31 whyzgeek like salt * state.sls pyapps.default-web.install
16:32 whyzgeek also I have other states like
16:32 whyzgeek salt * state.sls pyapps.default-web.start or stop and so on
16:32 whyzgeek but during system build I need this to run last
16:33 whyzgeek because at that time git and other stuff is still being installed
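One way out of whyzgeek's bind, assuming a salt recent enough for sls-level requires: keep the git/key setup in its own sls and include it from each state file; includes are deduplicated, so pulling the same sls in twice does not produce duplicate IDs (the names here are illustrative):

    # pyapps/default-web/install.sls
    include:
      - git-setup

    checkout_app:
      git.latest:
        - name: git@example.com:org/app.git
        - target: /srv/app
        - require:
          - sls: git-setup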
16:33 devinus joined #salt
16:34 kenbolton joined #salt
16:35 pipps joined #salt
16:35 joshe joined #salt
16:36 yimmy1 left #salt
16:40 forrest UtahDave, when you run salt-ssh against all those boxes, can you see what top is looking like?
16:41 pentabular joined #salt
16:41 UtahDave yeah, I'm just getting ready to build those vms.
16:41 forrest preferably with top -M so the memory usage isn't so crappy to read :P
16:41 forrest ok awesome
16:41 dork joined #salt
16:41 forrest carmony was saying that digitalocean maxes you at 10 VMs and then you have to contact support apparently
16:41 UtahDave my top apparently doesn't have a -M option
16:41 felixhummel joined #salt
16:42 UtahDave forrest: yeah, I had them bump up ours
16:42 kenbolton_ joined #salt
16:42 forrest UtahDave, oh what distro are you running?
16:42 UtahDave ubuntu 12.04
16:42 forrest ahh ok
16:42 petersabaini1 left #salt
16:42 NotreDev joined #salt
16:44 forrest UtahDave, thanks for testing this out!
16:45 pentabular joined #salt
16:46 athit joined #salt
16:47 eightyeight i'm seeing about 600MB used virt-memory for salt-minion in idle (not highstating). is this normal?
16:47 eightyeight seems excessive
16:47 UtahDave eightyeight: yes, that's very excessive
16:47 eightyeight see this pastebin, which is containers running on virtuozzo nodes: http://ae7.st/p/7l8
16:47 eightyeight UtahDave: how can i minimize it?
16:47 UtahDave what version are you on?
16:48 eightyeight 0.16.3
16:49 gordonm joined #salt
16:50 eightyeight restarting the minion doesn't drop the used virt memory that much
16:50 atx_sono joined #salt
16:51 eightyeight 637372KB -> 636112KB
16:51 eightyeight what should i be expecting for an idle salt-minion pid?
16:52 UtahDave I'm not sure.   Have you changed your worker process in your master config?
16:52 atx_sono Hey, wondering if anyone else has been dealing with this or has a similar setup.  I have an ELB pointed at the salt master, and all my minions are pointed at the ELB.  When the ELB ip changes I lose access to all my minions.  I was under the impression that the minions would do a DNS check and attempt to self-heal, am I wrong?
16:53 eightyeight UtahDave: you mean 'worker_threads' in the /etc/salt/master config?
16:53 eightyeight UtahDave: it's default (commented out) currently
16:54 CheKoLyN joined #salt
16:55 Gifflen joined #salt
16:55 Gifflen_ joined #salt
16:57 eightyeight CT-101-bash-4.1# ps -eo user,pid,ppid,vsz,%mem,comm,args | grep [s]alt
16:57 eightyeight root     12265     1 628080  0.4 salt-minion     /usr/bin/python2 /usr/bin/salt-minion -d
16:57 eightyeight ^ on a 512MB RAM install
16:58 eightyeight so, there's no way it could be occupying 628 MB when only 512MB is possible
16:58 eightyeight also claims "0.4%" of memory
16:58 eightyeight yay for measuring memory in gnu/linux
16:58 eightyeight :/
17:00 mattmtl joined #salt
17:01 eightyeight 0.4% of 512MB is 2.048 MB, which seems really, really light
17:01 eightyeight heh
17:01 eightyeight so, who's telling the truth?
17:02 UtahDave atx_sono: you have to restart the minions to do a dns check.  Once they're started they resolve the hostname's ip address and only use the ip address from then on.
17:03 UtahDave eightyeight: i'm not sure.  Have you modified any other items in your minion config?
17:04 pipps joined #salt
17:04 xuru joined #salt
17:05 macduke joined #salt
17:05 krissaxton joined #salt
17:06 eightyeight no. but, i'm not buying what i'm reading from the kernel. so, i'm learning more about the memory system. bbiab.
17:06 UtahDave ok, cool
17:11 mwillhite joined #salt
17:11 forrest UtahDave, once you run the tests, are you going to update the docs with that example data?
17:12 juanlittledevil joined #salt
17:14 atx_sono @UtahDave:  Thanks.  Any suggestions on setting this up in a totally dynamic environment? Since the minions and even the salt server can go away at any time I need to find a more resilient setup.  The crappy thing is I lose the remote exec capabilities and the ability to restart the minions.
17:14 UtahDave forrest: sure, that's a great idea
17:14 forrest Awesome
17:14 jpeach joined #salt
17:15 UtahDave atx_sono: have you tried using multi-master?
17:16 mmilano joined #salt
17:18 simonmcc joined #salt
17:21 atx_sono @UtahDave:  Have not, but at first hand I don't see how this addresses this as all the IP's are dynamic.   The only static config I have is a DNS entry :-/
17:22 forrest you're connecting via IP as opposed to hostname?
17:24 UtahDave forrest: no, he's connecting via hostname. but the minions resolve the ip from the hostname and then only use the ip
17:25 forrest Ahh ok
17:26 forrest I wasn't aware it cached that data as opposed to doing a lookup every single time
17:28 devinus joined #salt
17:31 atx_sono I assume my master can die and will die and if it does it will just rebuild itself and get attached to the ELB again.
17:38 KyleG1 joined #salt
17:40 m_george|away joined #salt
17:43 eightyeight UtahDave: echo 0$(awk '/Pss/ {printf "+"$2}' /proc/41635/smaps)|bc
17:43 m_george left #salt
17:43 eightyeight where $PID is the PID of the salt-minion
17:43 lineman60 joined #salt
17:43 eightyeight er, s/41635/$PID/
17:43 eightyeight heh
17:44 eightyeight that's the "proportional set size", which seems to be the most interesting memory statistic of the VM subsystem
17:44 eightyeight with that said, each of my salt-minion PIDs seems to be sitting around 24MB, which seems much more realistic
17:44 * eightyeight gets back on with life
17:46 alexandrel eightyeight: I get similar memory usage.
17:46 alexandrel eightyeight: roughly the same as a httpd worker.
17:47 alexandrel it's a much better use of those resources :)
17:48 eightyeight the thing that sucks about the linux VM subsystem, are all the stats (private_dirty, shared_dirty, references, etc)
17:48 eightyeight it's hard to get a good handle on exactly how much RAM any given PID is actively using
17:48 eightyeight this is the best i could come up with
17:49 alexandrel yeah, there is a real need for new monitoring tools in linux.
17:51 nu7hatch joined #salt
18:03 StDiluted joined #salt
18:04 imaginarysteve joined #salt
18:05 krissaxton joined #salt
18:09 Boohbah joined #salt
18:11 mapu joined #salt
18:19 faldridge joined #salt
18:22 lgordon joined #salt
18:22 lgordon left #salt
18:23 grets joined #salt
18:25 torandu joined #salt
18:26 saysjonathan joined #salt
18:28 NotreDev joined #salt
18:29 druonysuse joined #salt
18:31 saysjonathan joined #salt
18:32 auntie_ joined #salt
18:33 Gifflen joined #salt
18:36 pentabular joined #salt
18:39 UtahDave forrest: first test.   test.ping on 50 hosts  using salt-ssh.....    2.4 seconds
18:39 forrest that's pretty good
18:39 forrest better than I expected to spin up that many connections
18:39 forrest are you connectin via password, or using keys?
18:39 mwillhite joined #salt
18:40 Corey UtahDave: Uh... that strains credulity a bit. :-) Ansible is... not that fast.
18:41 psyl0n joined #salt
18:42 UtahDave Corey: I'm spinning up 50 more to test on 100
18:43 berto- joined #salt
18:44 pentabular joined #salt
18:44 timoguin that is impressively fast
18:45 UtahDave forrest: using keys
18:45 forrest Cool
18:45 UtahDave Corey: Once I get the test going with 100 we're going to record a video.
18:46 forrest nice
18:47 faeroe joined #salt
18:49 nu7hatch joined #salt
18:52 eightyeight UtahDave: we have 161. want some stats? :)
18:53 UtahDave eightyeight: using salt-ssh?
18:57 imaginarysteve joined #salt
18:58 mgw joined #salt
18:59 zach joined #salt
19:01 Teknix joined #salt
19:06 krissaxton joined #salt
19:07 [diecast] is there a way to check the last time highstate was ran for a minion?
19:10 zach joined #salt
19:10 honestly the jobs are definitely logged on the master
19:10 honestly don't know how to get at that though
19:12 UtahDave [diecast]: you can query the job cache for that info
19:12 [diecast] honestly i grepped for some keywords but didnt find it
19:13 [diecast] UtahDave i've not done that before
19:13 [diecast] i'll check the docs
19:16 nu7hatch joined #salt
19:16 zach joined #salt
19:16 [diecast] multi-line output, no --out=text available on salt-run =/
19:17 jamescarr left #salt
19:18 Brew joined #salt
19:18 jcockhren I don't always --out, but when I do, I --out=yaml
19:18 jcockhren stay thirsty, my friends
19:18 [diecast] that's nice, and funny
19:18 [diecast] yes, yes
19:18 [diecast] i dont always use one-liners, but when i do i use awk
19:19 [diecast] so, blah
19:19 [diecast] anyways, looks like the saltutil for jobs doesn't offer list_jobs
19:19 kaptk2 joined #salt
19:20 [diecast] sounds like i'm back to grep the log, just need to figure out what term to use
19:20 [diecast] ok, it's there
19:21 zooz joined #salt
19:21 isomorphic joined #salt
19:22 kenbolton joined #salt
19:23 [diecast] seems that every action is missing just that single piece of info
19:23 [diecast] no one method will return a complete listing
19:24 [diecast] i can get the jid from the master log for state.highstate
19:24 carxwol joined #salt
19:24 KyleG joined #salt
19:24 KyleG joined #salt
19:24 [diecast] i can find the job with salt-run jobs.lookup_jid <jid> which returns the job minus the action used
19:25 [diecast] the logline is missing the host info where the action was taken
19:27 keee joined #salt
19:28 UtahDave [diecast]: each minion should return all its info for each job.
19:29 [diecast] it does that
19:29 [diecast] for my current purpose it doesn't provide enough detail
19:30 UtahDave ah, what info do you need to be in there?
19:30 [diecast] i think if the loglines in master were to read "User xxxx Published command state.highstate with jid xxxxxxxxxx on minion xx.xx.xx" it would be perfect
19:30 [diecast] if the jid contained the action taken that would be useful in this situation
19:31 [diecast] also, i think you have to specify on the minion that it retain the job information
19:31 [diecast] right now, mine do not
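For reference, the master-side job cache [diecast] is digging through can be queried with the jobs runner (the jid here is illustrative):

    salt-run jobs.list_jobs
    salt-run jobs.lookup_jid 20130926150815907908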
19:32 rmt joined #salt
19:32 faeroe left #salt
19:34 giantlock joined #salt
19:35 jpeach joined #salt
19:38 pipps joined #salt
19:39 nu7hatch joined #salt
19:40 cjh joined #salt
19:47 emilisto joined #salt
19:48 UtahDave Corey and forrest  disk.usage on 100 vms    2.8 seconds
19:49 copelco will pillar yaml combine if defined in two locations?
19:50 kiorky joined #salt
19:51 StDiluted Is there a way to rename a minion?
19:51 StDiluted like
19:51 StDiluted salt-key -L will return a new name?
19:51 copelco not sure if that makes sense... say i have a dict in my pillar. if i define it twice, will it combine the two together?
19:51 StDiluted or do i need to change the id's and then reaccept keys
19:51 StDiluted I think you will throw an exception, copelco
19:51 copelco StDiluted: ah ok
19:51 copelco thanks
19:53 timoguin UtahDave, nice!
19:56 micah_chatt joined #salt
19:58 forrest that's pretty awesome UtahDave
19:58 forrest What sort of system utilization were you seeing?
19:58 linjan joined #salt
19:58 forrest And do you plan on running some states through for the video, and then benchmarking against machines with the minion installed?
19:59 timoguin UtahDave, you guys should release some benchmarks for 0MQ vs. SSH minions too
20:00 forrest We discussed having that content on the documentation pages earlier timoguin, so I think it's planned.
20:00 timoguin cool. :)
20:00 forrest If UtahDave decides to be a slacker I'll spin up some instances this weekend and mess with it! :P
20:02 timoguin daaaang, competition!
20:02 forrest Nah, I just like poking fun
20:04 ronc joined #salt
20:07 krissaxton joined #salt
20:07 Lue_4911 joined #salt
20:10 UtahDave bam!
20:10 nu7hatch joined #salt
20:10 UtahDave created a 100 node Riak cluster in 59 seconds using salt-ssh
20:10 forrest nice
20:13 ifnull joined #salt
20:14 forrest What sort of load did that put on your master server?
20:15 aleszoulek joined #salt
20:16 bhosmer joined #salt
20:22 grets joined #salt
20:23 jcockhren nice
20:23 jcockhren salt tickles my imagination
20:24 UtahDave I'm doing it from my laptop. It didn't cause any noticeable issues. I think it may have pegged a cpu because I set it to 100 worker processes
20:26 carlos joined #salt
20:37 druonysuse joined #salt
20:45 brianhicks joined #salt
20:51 berto- joined #salt
20:53 grets joined #salt
20:53 bhosmer joined #salt
21:00 krissaxton joined #salt
21:01 gildegoma joined #salt
21:02 felixhummel joined #salt
21:02 jdenning joined #salt
21:05 jdenning Is anyone here using the GitFS external pillar?  If so - have you experienced any problems with it not returning the most recently committed version of the pillar?
21:05 pipps joined #salt
21:05 jdenning My setup seems to be working - it is in fact pulling all pillar data from the git repo, as intended.  But it is not pulling the latest version, for reasons unknown..
21:06 nu7hatch joined #salt
21:07 jcockhren jdenning: yeah. I use it. And that's a known bug
21:07 forrest jcockhre, I can't remember, did clearing the cache fix that?
21:07 jdenning jcockhren: Good to know - any workaround?
21:08 jcockhren yeah a couple:
21:08 jcockhren 1. cron job
21:08 jcockhren 2. use salt scheduler
21:09 andrej A while back I asked about user management, and was wondering how people go about making sure that a machine is in a well-defined state.  Is there a way to make sure that a) system accounts are left alone and b) only people present in a state/pillar are present on a machine?  So if joe was on machine XYZ before I made it a minion, could I make sure salt culls him without me having to make him user.absent?
21:09 andrej If this isn't available atm: what would be the best way of implementing it
21:09 andrej and if I did it - how would I share that with the rest of the world?
21:10 jdenning jcockhren: What command should I run (via cron job / salt scheduler) to clear the cache / update the pillar?  I thought a state.highstate would do it, but it doesn't seem to work..
21:11 jcockhren jdenning: it'll be to update the git repo in /var/cache/salt on the master
21:11 jdenning jcockhren: Ahhh, ok..thanks, it would have taken me a while to track that down..
21:12 jcockhren jdenning: /var/cache/salt/master/pillar_gitfs/
21:12 jdenning jcockhren: Got it.  Thanks a lot!
21:13 jdenning Yup, just tested - that's exactly what I needed.
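jcockhren's first workaround as a cron sketch; the exact layout under pillar_gitfs varies by version, so the path (and whether a plain git pull applies) is an assumption to verify against your own cache:

    # /etc/cron.d/refresh-git-pillar
    */5 * * * * root cd /var/cache/salt/master/pillar_gitfs/0 && git pull -q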
21:17 m_george|away joined #salt
21:19 jcockhren andrej: a) obtain a list of system users based on uid
21:19 jcockhren I think system users are < 1000 on linux
21:20 jcockhren andrej: you can awk or grep that list from /etc/passwd
21:21 jcockhren you can ALSO, get the current list of users where uid > 1000 (human users already on the system)
21:21 jcockhren that's b^
21:22 jcockhren andrej: after b, then yaml-fy those lists
21:22 jcockhren (or jSON or anything python knows how to read)
21:23 jcockhren andrej: then use jinja to set a variable (with context) in a sls file
21:23 jcockhren and again, use jinja to loop through that variable to remove the users that aren't in your pillar
21:24 jcockhren then done \o/
21:24 forrest lol
21:24 forrest jcockhren's rube goldberg right there.
21:25 jcockhren andrej: to implement that for the world, make a issue and then a PR
21:25 andrej lol
21:26 jcockhren I'd encourage adding it to the system module or something
21:26 jcockhren b/c for both win and linux, it would make sense to be able to list the users
21:27 andrej That's about the approach I'd normally take on a single box... some Linux distros start normal users @ 500, and debian and *buntu have nobody as 65534
21:27 andrej Love rube goldberg ;}
21:28 UtahDave joined #salt
21:28 jcockhren yeah.... that's true
21:28 UtahDave Oh, man.  So Tom and I just recorded a screen cast where we stand up a 100 node Riak cluster in 50 seconds.   :)
21:28 andrej so awk would look like '$3>=500 && $3 != 65534
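Pulling jcockhren's recipe and andrej's cutoff together as one sketch; the pillar key and the purge flag are illustrative choices:

    {# cull_users.sls: remove human accounts not listed in pillar #}
    {% set keep = pillar.get('managed_users', []) %}
    {% set found = salt['cmd.run']("awk -F: '$3 > 500 && $3 != 65534 {print $1}' /etc/passwd") %}
    {% for login in found.splitlines() %}
    {% if login not in keep %}
    absent_{{ login }}:
      user.absent:
        - name: {{ login }}
        - purge: True
    {% endif %}
    {% endfor %}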
21:28 forrest Nice
21:29 forrest Should link that on the docs too when you put the stats up
21:31 kermit joined #salt
21:33 NotreDev joined #salt
21:33 mgw joined #salt
21:34 forrest UtahDave, are you guys showing that in comparison to a similar setup but using the master/minion config?
21:35 UtahDave no, this video was just showing of salt-ssh
21:35 UtahDave I'll have a link shortly
21:37 UtahDave here we go: http://www.youtube.com/watch?v=uWGDC1PdySQ
21:38 Corey Interesting.
21:39 andrej Is salt-ssh part of the ubuntu packages?
21:39 Corey left #salt
21:39 Corey joined #salt
21:40 Corey WHoops.
21:40 nu7hatch joined #salt
21:41 jcockhren concerning salt-formulas, you do all envision states being able to reference them remotely?
21:41 jcockhren b/c that would be awesome
21:42 UtahDave andrej: It will be in Salt 0.17.0 which should be released today.
21:42 jcockhren o_O
21:42 martoss joined #salt
21:42 UtahDave jcockhren: that would be cool.
21:42 andrej Ta UtahDave
21:43 martoss Hey folks, is the entire content of /srv/salt transferred to the minions if I do a salt '*' state.highstate?
21:44 [diecast] is there a file.wait method, or how do you wait for a cmd to finish if you don't have any other cmd in the state
21:44 martoss If not, is there anything that speaks against distributing a software package as tarball from file_roots?
21:45 jcockhren martoss: you're trying to distribute files across minions?
21:45 martoss no, just from the master to the minions
21:46 nkuttler mmh, nifty, salt ssh
21:46 jcockhren martoss: there's s3 as a supported backend
21:47 UtahDave martoss: the minions only download exactly what they need.
21:47 farra joined #salt
21:47 UtahDave martoss: but the minions do have access to the entire file_roots
21:47 martoss the other alternative would be to scp or ftp them to the target, but if the tarballs are only transferred on demand, I would use file_roots.
21:47 UtahDave martoss: people often distribute software as tarballs from file_roots
21:48 martoss UtahDave: thx for the info. That's fine for me - I know that there's no security and that pillars don't have support for that (yet).
21:49 Thiggy joined #salt
21:49 Thiggy What does "ZMQError: Operation cannot be accomplished in current state" mean when I highstate?
21:50 [diecast] there is a zeromq error on your server
21:50 UtahDave martoss: you're welcome
21:50 forrest Woah UtahDave, you wear collared shirts??
21:50 martoss_ joined #salt
21:51 Thiggy @[diecast] what does that mean though? Is there some action I should take?
21:53 [diecast] check current actions if it will allow it
21:55 [diecast] Thiggy salt '*' saltutil.running
21:55 forrest UtahDave, this is a good video, you should append the description to link to your formula for riak
21:56 Thiggy No actions running. It seems to have been a transient thing? That's kind of concerning
21:56 alunduil joined #salt
21:56 [diecast] is this a physical server or cloud instance?
21:58 Thiggy cloud instance
22:01 bgilmore joined #salt
22:03 UtahDave forrest: lol, yes, sometimes I wear collared shirts.  :)
22:04 UtahDave Thiggy: what version of salt?
22:04 jefftriplett joined #salt
22:04 copelco joined #salt
22:05 Thiggy 16.4
22:05 Thiggy we just went through and brought everything up to 16.4
22:06 oz_akan_ joined #salt
22:08 howdoieven joined #salt
22:08 howdoieven Is it possible to do cross host dependencies?
22:08 forrest in what sense howdoieven?
22:08 howdoieven for example, swapping a code base only after a database migration has been run on the database
22:09 UtahDave Thiggy: what version of zmq are you on?
22:09 kenbolton joined #salt
22:09 Thiggy I'm not sure. I used the bootstrapper to install salt on ubuntu 12.04. How can I check the zmq version?
22:10 UtahDave Thiggy: salt 'minion-name' test.versions_report
22:10 forrest howdoieven, I think you'd want to look at highstate: http://docs.saltstack.com/ref/states/highstate.html
22:10 nu7hatch joined #salt
22:11 Thiggy Every minion reports being on ZMQ 3.2.2 and PyZMQ 13.0.0
22:11 boite joined #salt
22:11 howdoieven Yes, I'm aware of how highstate works; but I believe that is per-host correct?
22:11 forrest you can apply highstate to host groups I believe.
22:12 jcockhren howdoieven: you want to execute a state on host A only when a state on host B is ran?
22:12 Thiggy @howdoieven We're solving that issue with a salt-runner to coordinate across-host stuff. We've also used the reactor previously to do similar things.
22:12 howdoieven jcockhren: exactly
22:12 forrest oh reactor is a good idea Thiggy
22:12 howdoieven Thiggy: okay I'll check that out, thanks!
22:12 Thiggy it's nice, post migration you can fire your "db_migrated" event and consume that and do whatever you want
22:13 UtahDave howdoieven: you'll want to look at the overstate.
22:13 pentabular joined #salt
22:14 howdoieven Sweet, thanks UtahDave
22:20 faldridge joined #salt
22:20 jacksontj joined #salt
22:20 nu7hatch joined #salt
22:20 pentabular1 joined #salt
22:21 Thiggy I've got a weird slowness. I ran all the highstates individually, and no single one takes more than ~45 seconds to apply on a minion, but if I do a salt \* state.highstate, it can take over 5 minutes and some of them time out and I don't get responses. Is that normal?
22:22 forrest is there any correlation on the machines that timeout Thiggy?
22:22 forrest specific location, or cluster, or anything?
22:22 forrest and can you see if anything got dumped into the logs on the minions where it times out?
22:22 Thiggy They're all in the same AWS VPC. That's a good question. I'll try to pay more attention to that moving forward.
22:23 forrest Stop DDOSing your AWS cluster Thiggy :P
22:23 UtahDave Thiggy: can you try running both your master and a minion or two in debug mode when you run those?
22:23 Thiggy I only have 12 minions, I wouldn't expect it to get that cranky that fast
22:23 jacksontj question for the community-- i'm trying to limit the memory usage of a module (we had a problem where someone committed a module which used 20gb on load)
22:23 krissaxton joined #salt
22:23 forrest oh yea
22:24 jacksontj i'm thinking of adding this as a feature in the module load section of the code
22:24 jacksontj some option like module_max_memory
22:24 jacksontj and enforce some memory limit per module
22:24 jacksontj (if you configure it)
22:24 jacksontj does that seem like a terrible idea?
22:24 Thiggy @UtahDave I'll dig into that more.
22:25 UtahDave jacksontj: sounds like that would be pretty cool
22:26 jacksontj doesn't sound too crazy?
22:26 UtahDave jacksontj: especially if it shot a nerf dart at the offender.
22:26 jacksontj we put in a memory limit on the overall salt process (in the start script) but that means any process you start inherits the limit
22:26 jacksontj which isn't so great ;)
22:26 jacksontj UtahDave: working on the nerf defense network :D
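A minimal sketch of the cap jacksontj describes, using resource.setrlimit only around the module load so spawned processes do not inherit the limit; module_max_memory is the hypothetical option name from the discussion:

    import resource

    def load_with_cap(load_func, max_bytes):
        # cap the address space only while the module is imported
        soft, hard = resource.getrlimit(resource.RLIMIT_AS)
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))
        try:
            return load_func()
        finally:
            resource.setrlimit(resource.RLIMIT_AS, (soft, hard))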
22:27 [diecast] salt '*' cmd.run 'apt-get -o Dpkg::Options::="--force-confold" install salt-common salt-minion'
22:28 [diecast] ^^ yes/no?
22:28 mianos joined #salt
22:28 forrest All the better if the person is bald jacksontj
22:28 forrest can stick to their head
22:28 jacksontj O.o
22:28 jacksontj like that
22:28 jacksontj :D
22:28 jcockhren salt-openCV module. gotta target it forrectly
22:29 jcockhren correctly*
22:29 [diecast] missing -y, ok there we go
22:30 jacksontj UtahDave: k, i'll implement it then :)
22:31 jcockhren [diecast]: I'm confused. if you can salt '*', then they're already bootstrapped no?
22:31 [diecast] jcockhren just want the latest on them
22:31 jcockhren ah
22:31 [diecast] i guess i could call a pkg method
22:31 [diecast] but im not sure how to preserve the conf, or if it does
22:32 UtahDave [diecast]: it won't overwrite the conf
22:32 [diecast] oh, ok. well now i know ;)
22:32 forrest So when you submit a build of saltstack now, does it automatically push to Jenkins?
22:34 UtahDave forrest: what do you mean by "submit a build of saltstack"?
22:34 forrest UtahDave, when you guys push back to the saltstack repo.
22:35 UtahDave It does a build on every commit or pull request automatically
22:36 forrest ok, so you've just modified it to no longer use travis CI then.
22:37 UtahDave we still use Travis too
22:37 forrest So what's the standard for clones now? To use Travis, or to use Jenkins?
22:37 forrest or both
22:38 UtahDave what do you mean by clones?
22:38 forrest so I have my clone of salt
22:38 forrest and right now the service hook is only for travis CI
22:40 UtahDave Yeah, that stays the same
22:40 forrest Ok
22:40 forrest I wasn't sure if we were supposed to be hooking clones onto travis CI and jenkins now
22:41 UtahDave no, we're just running the jenkins builds so we can target all of our supported OSes.  Travis only runs on Ubuntu server
22:41 forrest Are there plans at some point to hook the formulas into being tested by Jenkins?
22:41 woebtz joined #salt
22:42 Thiggy if I run a module function from the command line via salt-call or salt, I'm by default in the base environment, right? no matter what?
22:42 UtahDave I'm not sure. I don't know if there is a plugin to do that with Jenkins.  Plus it might get pretty expensive to spin up 10 vms to test for every commit that anyone ever commits to Salt
22:42 forrest Yea true
22:43 jacksontj joined #salt
22:43 UtahDave Thiggy: unless your minion is set to use another environment.  But yes, the default is the base environment
22:44 Thiggy @UtahDave thanks. Hrmm, I didn't know about the environment config file setting… interesting.
22:45 KyleG joined #salt
22:45 KyleG joined #salt
22:48 Thiggy Hrmm I just got both the slow highstate run AND zeromq errors! http://pastebin.com/ysheFxq0
22:48 forrest lol
22:49 Thiggy My lucky day
22:51 mianos haha a REQ/REP socket did not receive a REP before it did a second send
22:52 aantony joined #salt
22:53 jslatts joined #salt
22:53 aantony left #salt
22:53 aantony joined #salt
22:54 Thiggy It all seems to go to hell after it tries to pull a file that doesn't exist from the saltmaster. It's not in the pastebin but it's a few state results before that.
22:54 mianos that's a "should not be able to occur" zermoq error
22:55 mianos something else is wrong, not usually a zeromq problem
22:57 Thiggy actually the file it's trying to pull does exist… but it's not coming down… wat
22:58 Thiggy re-run highstate, it both finds the file and succeeds without error.
23:02 krissaxton1 joined #salt
23:02 pipps joined #salt
23:10 derelm joined #salt
23:10 dzderic joined #salt
23:11 Gwayne joined #salt
23:12 dzderic hey all
23:13 dzderic does anyone know if you can target which hosts to run on inside of a module, rather than what's specified on the command line?
23:16 oz_akan_ joined #salt
23:17 robertkeizer joined #salt
23:21 Katafalkas joined #salt
23:30 Brew joined #salt
23:30 UtahDave dzderic: well, the modules are run on the minions. So you could have the module itself determine whether it should actually act on that minion
23:33 jacksontj UtahDave: v1 out ;) https://github.com/saltstack/salt/pull/7477
23:34 jacksontj apparently setting memory caps in windows is non-trivial
23:34 jacksontj does that look sane to you?
23:34 dzderic UtahDave: that's possible, but then you're potentially needlessly running this module on many hosts
23:36 andrej Hmmm ... I have the following script  /srv/scripts/non_sys_users.sh
23:36 andrej which is a oneliner
23:37 andrej awk -F: '$3 > 500 && $3 != 65534 {print $1"\t"$3}' /etc/passwd
23:37 andrej salt 'playpen' cmd.script 'salt://srv/scripts/non_sys_users.sh'
23:37 andrej I get this output
23:38 andrej playpen:
23:38 andrej ----------
23:38 andrej pid:
23:38 andrej 22231
23:38 andrej retcode:
23:38 andrej 0
23:38 andrej stderr:
23:38 andrej
23:38 andrej stdout:
23:38 andrej
23:38 andrej what am I missing?
23:38 Corey andrej: A pastebin? :-D
23:38 jacksontj UtahDave: just want to see if that seems sane-- i need to hotfix this here-- but i want to make sure its feasible to get this merged upstream :)
23:45 troyready joined #salt
23:47 mianos joined #salt
23:48 UtahDave jacksontj: my initial read looks pretty good! I'm sure Tom will want to test it.
23:48 UtahDave andrej: remove '/srv/'
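Spelling out UtahDave's fix: salt:// paths resolve relative to file_roots, not the filesystem root, so the leading /srv/ has to go; with the default file_roots of /srv/salt the script would live at /srv/salt/scripts/non_sys_users.sh and the call becomes:

    salt 'playpen' cmd.script salt://scripts/non_sys_users.sh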
23:49 jacksontj UtahDave: cool, i'll merge into our release here and get our hotfix rolling ;)
23:51 redbeard2 joined #salt
23:53 UtahDave cool.  thanks for the pull req, Thomas!
23:53 Thiggy I can pretty reliably cause the zeroMQ errors I was talking about by highstating all my minions, and I've been poring over the logfiles and I can't find anything that really stands out. Should I report this as an issue? Not sure what to do.
23:57 UtahDave Thiggy: Yeah, please do report that as an issue.  Please include as many details as possible, especially how to reproduce it.
23:57 UtahDave thanks!
23:57 Thiggy Ok, I don't have a whole lot of info, but I'll do my best.
23:59 UtahDave thanks!  I appreciate it.
