
IRC log for #salt, 2015-02-12


All times shown according to UTC.

Time Nick Message
00:00 gladiatr waitasec.
00:00 gladiatr chitown said it works from salt-call (minion-side) so its already sync'd to the minion
00:00 gladiatr actually, it's kinda weird that salt-call would work but not remotely from the master
00:00 gladiatr chitown, salt-minion -l debug
00:02 tokyo_jesus Is there a timeout on custom grain modules?
00:02 gladiatr tokyo_jesus, Not unless you put one in.
00:02 ocdmw joined #salt
00:03 chitown it behaves like there is some kind of cache on the master
00:03 tokyo_jesus I'll see if restarting the minion does anything
00:03 iggy there can be
00:03 gladiatr chitown, the master does cache minion grains
00:03 chitown i saw the same thing; works from salt-call locally, but not from the master directly to minion
00:03 chitown right.... but, iiuc, it is supposed to refresh when you do a sync_grains... correct?
00:04 chitown i dont remember the conditions where the master refreshes the grains
00:04 iggy sync_grains pushes custom grain modules from master to minion... doesn't necessarily update the master side cache
00:05 iggy salt 'minion' grains.items  <-- that should
00:05 alexhayes So (following on from my documentation question) is there a way of getting "help" on a state. For instance, like in python console, help(myfunc)
00:05 chitown iggy: iirc, it doesnt... i can verify tomorrow when i back in the office
00:06 chitown alexhayes: stae = sls? or state = custom state module
00:06 murrdoc salt  sys.doc modulename
00:06 iggy there's also some master side settings that impact that
00:06 chitown sys.state_doc
00:06 alexhayes thanks! I'll look into those!
00:07 alexhayes state as in my own .sls file
00:07 chitown sys.list_functions sys
00:07 kermit joined #salt
00:07 chitown alexhayes: i dont think so
00:07 chitown iggy: tbh, i didn't put much effort into it... it was a one-time hit so i restarted the 10-ish minions that were broken
00:08 tokyo_jesus iggy: calling salt '<minion>' grains.items didn't update the grains listing
00:09 tokyo_jesus I'll dig into it a bit more tomorrow, thanks for the help!
00:09 iggy sounds like you've got another issue then
00:09 murrdoc tokyo_jesus:  is you _grains in the ext_modules dir ?
00:10 iggy if salt-call shows the right stuff, one would think so
00:10 murrdoc yeah i normally do sync_all, clear_cache, sync_grains
00:11 bhosmer_ joined #salt
00:11 InAnimaTe joined #salt
00:12 iggy I have a one-liner that does a bunch of that plus mine.update, etc, etc.
00:12 tokyo_jesus When I called sync_grains initially I got a response saying the module had been sent to the minions
00:13 murrdoc did u clear_cache
00:13 murrdoc sync_grains only puts the _grains dir from ext-modules onto the minion
00:14 tokyo_jesus Minions say 'saltutil.clear_cache' is not available
00:14 iggy /etc/salt/minion -> grains_cache grains_cache_expiration
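The two minion options iggy names control whether a minion caches its grains and for how long. A minimal sketch of how they might be set in /etc/salt/minion (the expiration value here is only illustrative):

```yaml
# /etc/salt/minion (fragment)
# cache grains on the minion instead of recomputing them on every run
grains_cache: True
# seconds before the cached grains are considered stale
grains_cache_expiration: 300
```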
00:15 murrdoc http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.saltutil.html
00:18 otter768 joined #salt
00:20 martineg_ joined #salt
00:21 jalaziz joined #salt
00:22 brianfeister joined #salt
00:29 Pixionus joined #salt
00:32 aurynn the orchestrate+salt-ssh integration wasn't very well-tested. :\
00:32 InAnimaTe joined #salt
00:35 gladiatr I believe it was bolted on last spring.  A general uprising was about to ensue, so it was pursued as a feature.
00:36 TTimo joined #salt
00:37 yomilk joined #salt
00:40 aurynn currently it's trying to do regex on a dict, and throwing an error
00:41 bhosmer_ joined #salt
00:43 teebes joined #salt
00:58 jdowning joined #salt
00:59 aurynn bypassing that, now it's telling me that dicts are an unhashable type
00:59 aurynn (actually that other madness is going on)
00:59 TTimo joined #salt
00:59 timoguin joined #salt
00:59 bfoxwell joined #salt
01:00 loggyer joined #salt
01:02 dude051 joined #salt
01:06 aurynn that's... interesting
01:06 lnxnut joined #salt
01:07 ocdmw joined #salt
01:08 aurynn Looks like salt/saltmod.py on 2014.7 is using 'ret' instead of 'return' for the data coming back from a minion
01:11 aquinas joined #salt
01:13 timoguin joined #salt
01:17 ITChap joined #salt
01:17 aqua^mac joined #salt
01:17 murrdoc this might be late in the game
01:18 murrdoc but i recommend .7.1 over .7
01:18 murrdoc not sure if it will fix your problems
01:18 aurynn I'm using .7.1
01:18 murrdoc oh
01:18 murrdoc u are linking to 7.0 code
01:18 murrdoc no ?
01:18 aurynn I'm linking to the git tag
01:18 murrdoc i will now shut up
01:19 aurynn no, I appreciate it
01:19 aurynn I've opened an issue around that one
01:19 aurynn I'll do another one regarding needing an explicit timeout set
01:20 murrdoc i have salt-ssh setup as a backup
01:20 murrdoc but use salt with queues everywhere
01:20 murrdoc so i am intrigued by your troubles
01:20 aurynn I'm trying to use salt-ssh as an initial bootstrap. That orchestration isn't working correctly is ... not good.
01:20 InAnimaTe joined #salt
01:21 I3olle joined #salt
01:21 gladiatr have you checked out the new saltify bits?
01:22 elfixit joined #salt
01:24 aurynn gladiatr, no, what parts are those?
01:24 murrdoc are u trying to orchestrate salt installs ?
01:24 murrdoc or ?
01:24 iwishiwerearobot joined #salt
01:24 gladiatr http://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.saltify.html
01:24 aurynn murrdoc, I'm trying to orchestrate some initial cluster stuff
01:24 lpmulligan joined #salt
01:24 gladiatr ohh
01:24 murrdoc in the cloud ?
01:25 gladiatr at least according to the docs, salt-cloud.saltify will get a salt-minion onto a system
01:25 aurynn murrdoc, not currently
01:25 aurynn but eventually yes
01:25 murrdoc so u are using salt-ssh against existing servers
01:26 murrdoc to get a bootstrap setup going ?
01:26 jerematic joined #salt
01:27 jalaziz joined #salt
01:27 ITChap joined #salt
01:27 timoguin joined #salt
01:27 murrdoc k its chicago late, late!
01:28 loggyer How can i get data from mine as a string in jinja,
01:28 loggyer this is what i'm doing, http://pastebin.com/WK8Q6E3T
01:28 I3olle_ joined #salt
01:30 TTimo joined #salt
01:37 aurynn can't seem to pass pillar data on the commandline to salt-ssh
01:38 tokyo_jesus joined #salt
01:41 murrdoc joined #salt
01:43 loggyer How can i get data from mine as a string in jinja,
01:43 loggyer this is what i'm doing, http://pastebin.com/WK8Q6E3T
01:47 warthog42 joined #salt
01:48 jeremymcm joined #salt
01:49 ALLmightySPIFF joined #salt
01:53 scoates I'm having a weird problem. When I first highstate a certain node, I see a `file.managed` fail with `Comment: Source file salt://valid/path/to/file.name not found`, but that file is indeed there. If I immediately highstate again, that file provisions just fine. Any ideas? this is on 2014.7.1
02:05 andrej thanks iggy, gladiatr ... that's what I thought (and told him) ... he thought it would be a nice-to-have
02:05 * andrej shrugs
02:09 murrdoc joined #salt
02:15 ale joined #salt
02:16 evle joined #salt
02:18 vd joined #salt
02:19 cheus joined #salt
02:23 neogenix joined #salt
02:29 atree joined #salt
02:35 nitti joined #salt
02:38 murrdoc joined #salt
02:42 _JZ_ joined #salt
02:46 badon_ joined #salt
02:48 ilbot3 joined #salt
02:48 Topic for #salt is now Welcome to #salt | SaltConf 2015 is Mar 3-5! http://saltconf.com | 2014.7.1 is the latest | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
02:50 druonysuse joined #salt
02:50 druonysuse joined #salt
02:50 Cidan joined #salt
02:51 Cidan joined #salt
02:52 blast_hardcheese joined #salt
02:56 fragamus joined #salt
02:57 ocdmw joined #salt
02:57 repl1cant joined #salt
02:58 amcorreia joined #salt
02:59 jdowning joined #salt
02:59 ALLmightySPIFF joined #salt
02:59 subsignal joined #salt
03:01 hasues joined #salt
03:13 iwishiwerearobot joined #salt
03:19 otter768 joined #salt
03:21 favadi joined #salt
03:26 tristianc joined #salt
03:30 desposo joined #salt
03:40 hasues left #salt
03:43 kermit joined #salt
03:44 ITChap joined #salt
04:01 ocdmw joined #salt
04:04 clintberry joined #salt
04:07 ALLmightySPIFF joined #salt
04:08 joehh joined #salt
04:10 ITChap joined #salt
04:15 StDiluted joined #salt
04:23 InAnimaTe joined #salt
04:31 ajw0100 joined #salt
04:38 active8 joined #salt
04:46 malinoff joined #salt
04:50 kermit joined #salt
04:59 Furao joined #salt
04:59 jdowning joined #salt
05:01 iwishiwerearobot joined #salt
05:02 pahko joined #salt
05:05 stanchan joined #salt
05:25 jalaziz joined #salt
05:30 Terminus- joined #salt
05:32 Terminus- hello. i'm familiarizing myself with salt and i'm reading about how it uses zeromq. does this mean that a client-side agent is required like puppet as opposed to ansible where only sshd is required to be running on clients?
05:33 Furao Terminus-: the bus (zmq) is optional in at least 2 ways, you can use salt-ssh or copy your states and manually run salt-call on local files
05:35 Furao but yes you can manage minions through a client-server model
05:35 jonasbjork joined #salt
05:35 Terminus- Furao: thanks. i'll check out salt-ssh. as of right now, i just want a master pushing the config to clients like ansible with zero installation required on the client side.
05:35 monkey661 joined #salt
05:39 madduck notnotpeter: thanks. I will take a look.
05:45 jalaziz joined #salt
05:46 felskrone joined #salt
05:50 jonasbjo1k joined #salt
05:58 kormoc joined #salt
05:58 kermit joined #salt
05:59 desposo joined #salt
06:07 pahko joined #salt
06:12 jonasbjork joined #salt
06:13 otter768 joined #salt
06:21 _stej joined #salt
06:27 steverweber joined #salt
06:28 Ryan_Lane joined #salt
06:29 ocdmw joined #salt
06:31 msciciel_ joined #salt
06:36 favadi joined #salt
06:48 Terminus- just want to verify, everything i can do with a master-minion architecture i can also do with salt-ssh right? only difference is performance?
06:49 madduck you cannot advertise something and hope for execution with pubsub; I believe salt-ssh deterministically iterates all nodes and provides proper feedback when a node isn't reachable
06:50 jalaziz joined #salt
06:50 iwishiwerearobot joined #salt
06:52 Terminus- madduck: oh yeah, i should have clarified except for limitations imposed by a push architecture itself. i was more concerned about whether i would be able to use all modules.
06:53 madduck i don't know, but I'd assume so. the only limitations I could imagine are inter-minion comms and the mine
06:53 Terminus- except for modules that by functionality require pubsub of course.
06:53 madduck but i don't actually know
06:53 Terminus- madduck: gotcha. thanks. i guess i'll push on with my efforts to do stuff in salt.
07:00 jdowning joined #salt
07:00 colttt joined #salt
07:07 desposo joined #salt
07:08 pahko joined #salt
07:15 jonasbjork joined #salt
07:15 vukcrni joined #salt
07:18 kapil___ joined #salt
07:18 flyboy joined #salt
07:20 jonasbjo1k joined #salt
07:21 bash124512 joined #salt
07:30 huleboer joined #salt
07:32 pdayton joined #salt
07:34 ocdmw joined #salt
07:34 mikkn joined #salt
07:35 jalaziz joined #salt
07:36 slafs joined #salt
07:36 kawa2014 joined #salt
07:38 malinoff joined #salt
07:39 Auroch joined #salt
07:40 slafs left #salt
07:41 Cidan joined #salt
07:41 Cidan joined #salt
07:42 meylor joined #salt
07:47 Ryan_Lane joined #salt
07:48 Terminus- is there a reference for all env vars that salt can use?
07:49 kapil___ joined #salt
07:51 AviMarcus joined #salt
07:51 Cidan joined #salt
07:51 Cidan joined #salt
07:56 krelo joined #salt
07:57 lumberjack_ joined #salt
07:59 trikke joined #salt
08:03 dunz0r I need to filter my hosts in some sane way... I can't go by device-name. Anyone have any ideas? Should I just do an ip-match on the networks they're on perhaps?
08:04 dunz0r I prefer not to have to specify all the hosts by hand... but looking at the docs it seems to be my best option right now :/
08:04 malinoff dunz0r, you can actually do a lot of stuff: http://docs.saltstack.com/en/latest/topics/targeting/
08:05 dunz0r The "problem" is that I've got a bunch of hosts on external and internal addresses, ideally I'd want some way to "group" them or something
08:06 dunz0r Since no one planned for us ever automating anything, hostnames are more or less arbitrary :/
08:07 eseyman joined #salt
08:07 dunz0r Looks like writing some Jinja in my pillar and manually specifying the host is the best bet right now. Not that bad, I only have to do it once I guess :)
08:08 pahko joined #salt
08:12 shoma joined #salt
08:14 otter768 joined #salt
08:15 KermitTheFragger joined #salt
08:18 karimb joined #salt
08:20 marnom joined #salt
08:21 chiui joined #salt
08:21 Roee joined #salt
08:22 Roee Hi All,
08:23 aurynn hi.
08:23 Roee Does someone have an idea: i'm running a state that runs an installation script, and would like salt to watch another file during the run; in case of an "Error" it should fail the installation and return a failing exit code
08:24 Roee is it possible ?
08:27 marnom Roee: if the installer returns a non-zero exit code, salt should detect it as failed as far as I know...
08:27 marnom Roee: have you tried a regular cmd.run state for this?
08:27 Roee Yes this is runing via cmd.run
08:28 marnom Does anyone know if it's possible to use the output of network.subnets module in a state? I would like to limit stuff to the current subnet and can't find a way to retrieve this info without writing either custom grains or putting it statically in a pillar...
08:28 Roee means currently running with cmd.run
08:28 marnom Roee: okay and it's currently not detecting issues? Or what is your question exactly?
08:29 Roee i would like salt to review that log file during the installation and in case of errors return the error line...
08:29 Roee or a block of lines...
08:29 Roee that we could review this later.
08:29 marnom Roee: normal procedure is for the installation script to determine the 'exit state' and return the appropriate exit code
08:30 marnom I assume your installation script always returns 0 and you're trying to solve it another way?
08:30 dunz0r marnom: You can use the ip-matching I guess?
08:30 marnom dunz0r: you mean in state targetting?
08:30 Roee you are right and it does
08:30 dunz0r marnom: Hmm... maybe that isn't avaliable there.
08:30 lumberjack_ joined #salt
08:31 Roee but i also wants salt to review the log during the instllation
08:31 marnom Roee: maybe use a small wrapper script that executes the installation binary & then checks log?
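cmd.run fails the state whenever its command exits non-zero, so marnom's wrapper only has to grep the log and pick its exit code. A rough sketch of the state side (the script path is invented):

```yaml
run_installer:
  cmd.run:
    # the wrapper runs the installer, checks the log for errors,
    # and exits non-zero on failure; cmd.run then marks this state failed
    - name: /opt/installer/run-and-check-log.sh
```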
08:31 dunz0r marnom: You can "cheat" and use a compound match though.
08:31 marnom dunz0r: I need it in my 'source' field for iptables state...
08:31 marnom so it's not just a targetting issue.
08:31 dunz0r Oh :/
08:31 marnom thanks for your advice though :D
08:31 Roee this can be also an option
08:32 Roee and have another question please
08:32 dunz0r marnom: I did this in my top.sls, G@ipv4:192.168.85.*
08:32 dunz0r It works for /24s and such at least.
08:32 marnom dunz0r: it's not a targetting issue...
08:32 marnom My current state (snip):
08:32 jhauser joined #salt
08:32 fredvd joined #salt
08:32 marnom {% if grains['environment'] == 'dev' %}
08:32 marnom - source: 10.168.150.0/24
08:32 marnom {% elif grains['environment'] == 'test' %}
08:32 marnom - source: 10.168.151.0/24
08:33 marnom So I don't need it to target the minions, I need it to change the state depending on the subnet of the minion...
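marnom's fragment, shown in the context of a complete iptables state; the environment grain and subnets are his, the surrounding state body is a guess for illustration:

```yaml
allow_app_traffic:
  iptables.append:
    - table: filter
    - chain: INPUT
    {% if grains['environment'] == 'dev' %}
    - source: 10.168.150.0/24
    {% elif grains['environment'] == 'test' %}
    - source: 10.168.151.0/24
    {% endif %}
    - jump: ACCEPT
```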
08:33 dunz0r Hmm, I wonder why it doesn't publish the subnet in grains...
08:33 * dunz0r should write a patch
08:33 marnom yeah :( seems strange
08:33 dunz0r Because right now, I kind of need it as well.
08:33 marnom the module network.subnets works great
08:34 aurynn a mine function might also work
08:34 Roee is there an option so that in each state salt will compare 2 parameters and, if they're identical, skip the state
08:34 marnom aurynn: not looking to set up a mine just for this..
08:34 marnom Roee: without telling us what you're trying to achieve I find it very hard to help you :)
08:35 dunz0r marnom: You know what? Since I need this too, I'm going to try and make a patch or something... I'll let you know how it goes :D
08:35 Roee i would like the state not to run in case the version in the pillar is different than the version on the minion
08:35 marnom dunz0r: cool!
08:35 marnom Roee: version of what? :\
08:35 dunz0r I mean... it's only python. Can't be that hard :)
08:35 marnom you can use an onlyif
08:36 Roee sorry - in that case i would like it to be run :)
08:36 marnom dunz0r: technically it shouldn't be too hard, especially with the module already done
08:36 Roee version of a specific component that I asked salt to install
08:36 marnom dunz0r: I'll work around it with a pillar per environment for now
08:37 Roee i'm using salt as a deployment tool...
08:37 marnom Roee: Just make a state that says package version X needs to be installed, then salt will make sure that version is installed...
08:37 marnom I don't understand your issue sorry
08:38 Roee and how does salt determine if the package is already installed
08:38 marnom depends on the OS...
08:39 marnom the pkg.* states map to the various package managers (apt-get, rpm etc)
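marnom's "package version X" suggestion in state form; a pinned-version install looks roughly like this (the package name and version are invented):

```yaml
install_mycomponent:
  pkg.installed:
    - name: mycomponent
    - version: 1.2.3
    # refresh the package repo metadata first so new versions are seen
    - refresh: True
```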
08:39 CeBe joined #salt
08:39 Roee and how salt will know that the package has an upgrade
08:39 BigBear joined #salt
08:39 marnom refresh: True will check the package repo
08:39 iwishiwerearobot joined #salt
08:39 dunz0r This doesn't look awfully complex...
08:39 TyrfingMjolnir joined #salt
08:40 dunz0r Woops. Thinking out loud :D
08:40 marnom :D
08:41 Roee yes but this is not a regular package; we are using the salt master to also hold the installers, so in each state the master first copies the installer to the minion, then untars it and runs the installation
08:41 dunz0r Roee: Hmm... you can't check the hash of the file instead?
08:42 dunz0r Currently trying to think of a good way to implement the subnet-information without breaking anything.
08:43 dunz0r I'm also not sure why my minions have both ip4_interfaces and ip_interfaces, both with the same information in them
08:43 Terminus- hi. i've got http://paste.ofcode.org/3abbc8en8TSZ5SZavi4M6ZG. how do i get it to run with salt-ssh?
08:43 marnom dunz0r: perhaps for ipv4/ipv6 migration
08:43 Roee we can do it... but what happens if salt copies the files to the minion and then the installation fails? what happens the next time it runs?
08:43 dunz0r marnom: Ah, of course. Silly me.
08:43 SheetiS {% set subnets = salt['network.subnets'] %} then iterate imo
08:43 marnom SheetiS: That works in a state? Hmmm
08:44 SheetiS er that needs a () i think
08:44 marnom set subnet = salt['network.subnets'][0] should work then
08:44 marnom lemme test
08:44 SheetiS you can run any salt function in a state
08:44 dunz0r SheetiS: So that would list the subnets a minion is a member of?
08:44 SheetiS yeah it would create a list
08:45 intellix joined #salt
08:45 SheetiS if you wanted the first subnet it'd be {% set subnet = salt['network.subnets']()[0] %}
08:47 dunz0r SheetiS: But you wouldn't be able to match it like a grain though, right?
08:47 marnom thanks that seems to work! Bonus points: if I would like the subnet without the cidr/mask to limit it to separate parts, could I 'string split' that in Jinja or something?
08:47 SheetiS no, but you could use it to match a dict of your subnets
08:48 dunz0r Hmm... I need to think hard about this
08:48 SheetiS you could use a jinja split on the /
08:48 dunz0r Not having to "cheat" and use G@ipv4:192.168.85.* and such would be sweet :D
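Putting SheetiS's pieces together: a state can read the minion's first subnet via the execution module, and a jinja split drops the CIDR mask. The state body and file path below are invented for illustration:

```yaml
{% set subnet = salt['network.subnets']()[0] %}
{# split on "/" to strip the mask, per SheetiS's suggestion #}
{% set network = subnet.split('/')[0] %}

record_subnet:
  file.managed:
    - name: /etc/my-subnet
    - contents: "subnet={{ subnet }} network={{ network }}"
```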
08:48 MK_FG joined #salt
08:49 giomandaz joined #salt
08:49 giomandaz hi everyone
08:49 SheetiS dunz0r: I am going to give a pastebin of what I would do if I needed to install a different package based upon a subnet.
08:49 ajw0100 joined #salt
08:49 dunz0r SheetiS: Please do. Maybe it could push me in the right direction :)
08:49 jalaziz joined #salt
08:50 giomandaz is it possible to copy all files within a directory to another folder on a minion?
08:50 marnom file.recurse?
08:50 giomandaz both source and destination is located on the minion
08:50 marnom cmd.run?
08:50 dunz0r giomandaz: cmd.run 'cp -r somedir someotherdir' ?
08:50 giomandaz i did that with cmd.run
08:51 giomandaz but is there a way to do it with salt states instead of cmd.run?
08:51 markm joined #salt
08:51 dunz0r giomandaz: You could have the cmd.run-part in a state
08:52 giomandaz yes i know. sorry i wasnt clear
08:52 giomandaz what i mean is that, for example, i used "file.copy" instead of cmd.run "cp file1"
08:52 Terminus- urgh... i feel stupid. i thought the default file_roots was relative to root_dir. >_<
08:53 giomandaz but this works for a single file; can i achieve the above recursively?
08:53 SheetiS dunz0r: https://bpaste.net/show/1ac53ebebb7f something like this?
08:54 JlRd joined #salt
08:54 bash124512 joined #salt
08:57 Grokzen joined #salt
08:57 SheetiS If anything doesn't make sense there, let me know.  I didn't try and actually run that as a state, but the general principles should be correct even if I typoed some syntax.
08:59 zz_Cidan joined #salt
09:00 Cidan joined #salt
09:00 linjan joined #salt
09:00 Pooogles joined #salt
09:01 jdowning joined #salt
09:01 [LF] joined #salt
09:02 fragamus joined #salt
09:02 giomandaz ok i have another approach to my problem
09:02 giomandaz is there a way to rename a folder on a minion?
09:04 marnom giomandaz: I'd do it with a cmd.run combined with an onlyif: '[ ! -f /my/new/directory/name ]'
09:04 dunz0r SheetiS: Thanks man. I'll have to do some experimenting to see if I can use this the way I want :)
09:04 marnom * ! -d
09:04 SheetiS there is a salt state called file.rename
09:05 SheetiS http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.rename
09:05 giomandaz thanks SheetiS!
09:05 jonasbjork joined #salt
09:05 giomandaz i was looking at older version doc and couldnt find that
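The state SheetiS links, sketched for giomandaz's case (both paths are made up):

```yaml
rename_old_dir:
  file.rename:
    - name: /srv/app/new-dir      # destination path
    - source: /srv/app/old-dir    # existing directory to move
    # makedirs: True would create missing parents of the destination
```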
09:06 nullptr joined #salt
09:09 pahko joined #salt
09:11 iwishiwerearobot joined #salt
09:11 fbettag joined #salt
09:12 monkey661 left #salt
09:12 zz_Cidan joined #salt
09:13 Cidan joined #salt
09:13 skarn_ joined #salt
09:16 jonasbjo1k joined #salt
09:16 lumberjack_ joined #salt
09:20 cberndt joined #salt
09:21 monkey66 joined #salt
09:22 I3olle joined #salt
09:25 ocdmw joined #salt
09:26 marnom SheetiS: thanks, your solution for the subnet/salt module call in a state file works perfectly
09:32 zz_Cidan joined #salt
09:32 N-Mi joined #salt
09:32 Cidan joined #salt
09:35 fredvd joined #salt
09:37 Xevian joined #salt
09:37 brayn joined #salt
09:39 paulm- joined #salt
09:44 zz_Cidan joined #salt
09:44 Cidan joined #salt
09:45 SheetiS :)
09:48 pf_moore joined #salt
09:52 bash124512 joined #salt
09:57 jalaziz joined #salt
09:58 _ether_ joined #salt
10:07 dariusjs joined #salt
10:08 dRiN joined #salt
10:10 pahko joined #salt
10:11 linjan joined #salt
10:13 linjan joined #salt
10:15 otter768 joined #salt
10:16 I3olle reactor question: the file /etc/salt/master doesn’t have a reactor section, and the file /etc/salt/master.d/reactor.conf doesn’t exist
10:16 I3olle which one should i create?
10:17 aurynn I3olle, whichever you prefer
10:17 aurynn I use reactor.conf to keep it separate
10:17 I3olle ok
10:17 I3olle sounds good to me
10:17 I3olle thx
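A minimal /etc/salt/master.d/reactor.conf along the lines aurynn describes; the event tag and sls path here are illustrative:

```yaml
# /etc/salt/master.d/reactor.conf
reactor:
  # react when any minion starts
  - 'salt/minion/*/start':
    - /srv/reactor/minion-start.sls
```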
10:28 jrluis joined #salt
10:29 jespada joined #salt
10:32 zz_Cidan joined #salt
10:32 Cidan joined #salt
10:32 BigBear joined #salt
10:33 dkrae joined #salt
10:33 michelangelo joined #salt
10:39 ocdmw joined #salt
10:41 CeBe joined #salt
10:43 jonasbjork joined #salt
10:44 ocdmw joined #salt
10:46 jcsp joined #salt
10:49 giantlock joined #salt
10:54 budman joined #salt
10:59 zerthimon joined #salt
11:02 jdowning joined #salt
11:02 bhosmer joined #salt
11:04 yomilk joined #salt
11:06 I3olle is it possible to automatically always write the output of a state into a file?
11:09 wnkz joined #salt
11:10 jonasbjo1k joined #salt
11:12 claudiu joined #salt
11:13 claudiu Hi, I'm trying to write a boto_rds state and I am a bit stuck at the logic part. Anyone here experienced in writing states?
11:20 bhosmer_ joined #salt
11:21 I3olle claudiu: that’s a very generic question. what have you written so far?
11:22 claudiu Yeah, generic, wanted to get the discussion started first :)
11:22 claudiu What I have so far:
11:22 claudiu boto_rds.present, boto_rds.absent
11:23 I3olle ok
11:23 claudiu Something else that is needed: parameter groups
11:23 I3olle what’s your goal?
11:28 claudiu Sorry, was caught up with something else
11:28 claudiu So. the goal is to have an rds instance
11:28 claudiu With parameters
11:28 claudiu So my problem is if I should split this in different states or not
11:28 claudiu Like having boto_rds_parameters.present and .absent
11:29 claudiu Instead of having boto_rds.parameter_group
11:30 claudiu http://pastebin.com/sCnNAZ3M
11:30 claudiu This is how a state looks, now
11:30 Cidan joined #salt
11:31 claudiu And I don't know if I should implement the parameter_group thing inside boto_rds state or if I should write a different state for it
11:31 I3olle have you taken a look at this one?
11:31 intellix joined #salt
11:31 I3olle http://docs.saltstack.com/en/latest/ref/states/requisites.html#direct-requisite-and-requisite-in-types
11:31 nancy joined #salt
11:31 nancy hi
11:32 I3olle because your check of whether something is running could easily be replaced by a require
11:32 I3olle hi
11:33 nancy im new to salt
11:33 nancy can u give a brief abt it
11:34 evle joined #salt
11:34 djinni` joined #salt
11:36 favadi left #salt
11:37 babilen nancy: http://docs.saltstack.com/en/latest/topics/
11:37 claudiu Point is that I am writing the python state, not the sls file :)
11:37 dunz0r SheetiS: I think I've solved it! :D
11:38 dunz0r https://gist.github.com/dunz0r/1685c4961e2006036526
11:38 dunz0r marnom: You too
11:39 nancy guys
11:39 nancy i dont even know wat salt is
11:39 nancy please can u help me out
11:39 egil then read the website
11:39 nancy ya i ve gone thro
11:39 nancy but i need some of your help to understand more
11:40 I3olle nancy: does puppet or chef ring a bell?
11:47 ocdmw joined #salt
11:47 ocdmw joined #salt
11:51 robothands joined #salt
11:51 linjan joined #salt
11:51 robothands hello
11:51 bhosmer joined #salt
11:51 robothands I'm using salt.states.service to restart corosync when package/config file changes
11:52 robothands however, when this occurs, I keep getting 'failed to reload the service'
11:52 robothands I'm able to restart the service manually using cmd.run though
11:52 robothands fairly new to salt, any tips on how to troubleshoot this? :)
11:55 warthog42 joined #salt
11:59 pahko joined #salt
11:59 ocdmw joined #salt
12:01 lumberjack_ joined #salt
12:02 jonasbjork joined #salt
12:04 ksj in jinja, I'm trying to use a filter to set a default variable within a set statement. e.g. '{% set x = "y"|default("z") %}', but this doesn't work
12:04 ksj I really don't grok jinja
12:05 ksj i don't want to have three extra lines for an 'if empty then' statement
12:05 ksj and the default filter works fine when it's in a print statement
12:06 ksj e.g. {{ x|default("z") }}
12:09 _ether_ ksj: for the whitespaces: http://jinja.pocoo.org/docs/dev/templates/#whitespace-control
12:11 joehh anyone know if salt-call should require msgpack?
12:12 _ether_ ksj: the default filter is used only for an undefined variable, "y" is not a variable in your example
12:13 _ether_ ksj: {% set x = y |default("z") %}{{ x }} will give "z", {% set x = "y"|default("z") %}{{ x }} will give "y"
12:15 ksj _ether_: thanks, I'll give it a try
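_ether_'s distinction, written out as a throwaway sls (names invented; y is deliberately left undefined so the filter fires):

```yaml
{% set a = "y"|default("z") %}  {# "y" is a literal string, so a == "y" #}
{% set b = y|default("z") %}    {# y is undefined, so default() gives b == "z" #}

show_defaults:
  cmd.run:
    - name: echo "a={{ a }}, b={{ b }}"
```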
12:15 mikkn joined #salt
12:16 bhosmer joined #salt
12:16 lumberjack_ joined #salt
12:16 otter768 joined #salt
12:18 rudi_s Hi. How can I cache data during a single salt run on a minion? I have a state file which needs to perform some expensive work, but only once per run and I'd like to cache the data. Any idea?
12:18 ksj ahh, just found the issue. It actually was working correctly the way I'd guessed, it but I'd left a colon trailing at the end of the line. All works great now. Thanks
12:23 otter768 joined #salt
12:25 hobakill joined #salt
12:26 ksj next issue, I know from the docs that jinja doesn't have a "continue" for loops, and you have to do it with filters in the loop definition, but, what if your "continue" condition relies on a variable created within the loop? pastebin here: http://pastebin.com/raw.php?i=fSaRUdzP
12:26 VSpike Has anyone here used the proxmox salt cloud provider? I'm testing it out and it creates a container from the template OK but then errors out with a strange error. It doesn't bootstrap or configure network or anything
12:27 ksj it's not a big deal, but for the sake of elegance I'd prefer to have the 'if pkg != ignore' condition as an inline "continue" expression
12:28 ksj maybe I'm just being picky, but I like to keep my jinja to a minimum because I find that lots of conditions becomes very difficult to read
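The inline loop filter the docs point ksj at looks like this (list contents and state body are invented); the caveat is that it only works when the condition can be computed before the loop body runs, which is exactly ksj's complaint, so when the condition depends on a value set inside the body, an inner {% if %} around the body remains the fallback:

```yaml
{% set ignore = ['pkg-a', 'pkg-b'] %}
{# the inline "if" plays the role of "continue" #}
{% for pkg in pkgs if pkg not in ignore %}
{{ pkg }}:
  pkg.installed: []
{% endfor %}
```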
12:30 VSpike Error message https://bpaste.net/show/333ba9c78e88
12:34 blunte joined #salt
12:35 blunte hi all.  I'm new to salt, so please forgive me if my approach to using salt is strange.  I'm trying to modify a file on a minion (using a state init.sls, and within that using file.sed).
12:36 blunte my question is, inside the init.sls, how can I get the hostname of the minion that the state is being run on?
12:36 zz_Cidan joined #salt
12:36 Cidan joined #salt
12:36 blunte (I want to replace something in a file with the hostname of the minion that the state is being applied to)
12:37 joehh blunte: something like {{grains['id']}}
12:37 Furao joined #salt
12:37 rudi_s blunte: grains.host or grains.fqdn
12:37 rudi_s id is not necessarily the hostname.
12:38 blunte at command line I can do $ salt hosta.domain grains.get 'host', and it returns the thing I want
12:38 yomilk joined #salt
12:38 blunte so I would just do {{grains['host']}} in a state file?
12:39 rudi_s blunte: Yes. You can also use {{grains.host}} which is IMHO more readable.
12:39 rudi_s (That's a jinja thing.)
12:39 blunte rudi_s: super, thank you
12:41 rudi_s np
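blunte's substitution with the host grain, sketched with file.replace (the file path and placeholder token are made up; his file.sed would take the same jinja expression):

```yaml
set_hostname_in_conf:
  file.replace:
    - name: /etc/myapp/app.conf
    - pattern: '@@HOSTNAME@@'
    # grains.host is the minion's hostname, unlike grains.id
    - repl: '{{ grains.host }}'
```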
12:41 JlRd joined #salt
12:43 jerematic joined #salt
12:44 dotz joined #salt
12:44 dotz Hi. What would be the easiest way to copy file from minion to another minion via Salt?
12:46 covox dotz: ssh? :)
12:47 dotz covox: Well, I use ZModem now, but I'm looking into some less user interactive solutions.
12:47 linjan joined #salt
12:47 covox can you explain the setup a bit
12:48 dotz Sure, there's "minion one", "minion two" and "master", and I want to get a database dump from "minion one" and restore it on "minion two"!
12:51 covox how often does this need to be run
12:51 jonasbjo1k joined #salt
12:53 dotz covox: often enough for me to try to make it non-interactive
12:55 kellnola joined #salt
12:57 covox salt's file serving capabilities are a bit of a poor fit for this, even if you set up the publish module to allow minion-to-minion communication, things like e.g. the cp module work by sending data to/from the master
12:57 VSpike Fixed it. I was missing the the ip_address part in the profile
12:57 covox being a database dump that might be a pain
12:57 VSpike It's a bit limiting really... the profile has to contain both the node in the cluster to deploy on and the ip address of the container you want to create
12:58 VSpike That means you're going to have to create a profile for every single node you want to create
12:58 covox the easiest way I can think of is to write a script on minion one that copies with rsync/scp, then executes the restore process over ssh
12:58 jonasbjork joined #salt
12:58 covox deploy a private key to minion one that is authorized on minion two
12:59 covox (which you can do with salt)
12:59 covox (same with the script actually)
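[editor's note] covox's suggestion sketched as states applied to "minion one". Every path, file name, and source here is made up for illustration:

```yaml
# sketch: deploy a private key plus a dump/copy/restore script to "minion one"
/root/.ssh/id_rsa_dbcopy:
  file.managed:
    - source: salt://keys/id_rsa_dbcopy
    - user: root
    - mode: 600

/usr/local/bin/db-sync.sh:
  file.managed:
    - source: salt://scripts/db-sync.sh
    - mode: 755
```

The hypothetical `db-sync.sh` would dump the database, `scp` the dump to minion two with the deployed key, and run the restore over ssh; the matching public key on minion two can itself be managed by salt (e.g. with an `ssh_auth` state).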
12:59 pahko joined #salt
13:01 Roee
13:01 babilen You could even use something fancy like lsyncd or a networked filesystem, but naturally copying files around would work too
13:01 Roee Hi
13:01 covox or! expose the database on minion two via TCP (you could probably even limit it to just minion one) and do the whole thing from minion one
13:01 bhosmer joined #salt
13:01 babilen covox: Are you planning to implement a "backup db FOO from A to B" execution module thingie?
13:02 covox babilen: no? :P
13:02 babilen covox: You explicitly mentioned that you would want to restore from a dump, not via MySQL master/slave or comparable setups.
13:02 covox do you mean dotz
13:02 jdowning joined #salt
13:02 babilen sure
13:03 babilen dotz: ^
13:03 dotz babilen: exactly
13:03 babilen But we might be into xy problem territory here. The actual problem might be "How do I setup replication from A to B with salt?"
13:04 ocdmw joined #salt
13:06 amcorreia joined #salt
13:09 ocdmw joined #salt
13:10 hobakill good morning. can you have multiple 'and not' minions in a top file? for example: http://hastebin.com/uwodacifey.erl
13:11 zz_Cidan joined #salt
13:11 babilen hobakill: Sure, why not?
13:11 hobakill my master doesn't seem to be honoring that configuration. any minion i have with the hostname MAPR1* still gets the underlying packages.
13:11 Cidan joined #salt
13:11 babilen also: TIAS
13:11 I3olle joined #salt
13:11 hobakill what's TIAS?
13:12 babilen try it and see
13:12 babilen (which you seem to have done already, but that wasn't clear from your first question)
13:12 hobakill oh. well i did and it got the states i defined.
13:13 babilen I wonder how they are grouped and if "G@host:MAPR1*" does what you want it to
13:13 hobakill oh butt crack i think i know what's what.
13:14 babilen You also want "G@osmajorrelease:6 and not (RDSSOLR2-DEV.qwestcolo.local or  G@host:MAPR1*)" don't you?
13:15 babilen a de morgan ;)
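[editor's note] babilen's De Morgan'd matcher, as it would sit in a top file. The state name `base-packages` is made up:

```yaml
# top.sls (sketch) -- a parenthesised compound match
base:
  'G@osmajorrelease:6 and not ( RDSSOLR2-DEV.qwestcolo.local or G@host:MAPR1* )':
    - match: compound
    - base-packages
```

The `- match: compound` line is what tells the top file to treat the target as a compound expression rather than a glob.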
13:16 hobakill sure i could do that but i think the problem is this... that specific box has a customized id: in the minion config. the actual hostname is something stupid and 'mapr-y' like... dc1-r2-s3
13:18 hobakill yep. stupid. http://hastebin.com/ruwilesexo.avrasm
13:18 hobakill i guess that's on me. thanks for letting me think out loud a bit babilen
13:18 babilen So "G@host:MAPR1" doesn't do what you thought it would
13:19 babilen Sure, no problem. However, I might feel the need to provide you with a complimentary rubber duck if this continues to happen
13:19 hobakill nope. i'd have to change that to host:dc1*
13:20 hobakill that's....random...? but thanks! :)
13:20 yomilk joined #salt
13:20 hobakill shouldn't i be gifting things to you?
13:24 wincus joined #salt
13:24 * dunz0r just turned on the sudo-insults-option on all his machines with the help of Salt
13:24 dunz0r It's without doubt a necessary option for all systems.
13:29 Cidan joined #salt
13:30 tomh- joined #salt
13:31 karimb joined #salt
13:32 iamtew sudo-insults?
13:32 iamtew oh right, the thing that insult you if you do a bad auth..
13:33 dunz0r "You can't come in. Our tiger has the flu"
13:33 iamtew :p
13:34 iamtew I should get kerberos auth up and running on my boxes..
13:35 iamtew but might be a bit overkill for only a few instances
13:35 iamtew but it's way too convenient :D
13:39 GabLeRoux joined #salt
13:40 lumberjack_ joined #salt
13:41 jonasbjo1k joined #salt
13:44 jeremymcm joined #salt
13:46 Roee Does someone know if there is a problem running a few states on one minion at the same time
13:46 Roee because I can't, receiving an error on the second state
13:46 jonasbjork joined #salt
13:47 jdowning joined #salt
13:47 scoates (repost from last night) I'm having a weird problem. When I first highstate a certain node, I see a `file.managed` fail with `Comment: Source file salt://valid/path/to/file.name not found`, but that file is indeed there. If I immediately highstate again, that file provisions just fine. Any ideas? this is on 2014.7.1
13:49 clintberry joined #salt
13:53 joehh scoates: is it just the one minion?
13:53 joehh and is the minion a different version?
13:55 scoates joehh: these states only apply to the one minion.
13:55 scoates let me double check the minion version
13:56 scoates (need a few mins to undo my excavation)
13:56 joehh no worries
13:56 bhosmer joined #salt
13:59 scoates while that's happening, in case it's relevant: this minion is the same node as the master
14:00 jeremyr joined #salt
14:00 FRANK_T joined #salt
14:00 Roee Does someone know if there is a problem running a few states on one minion at the same time ?
14:00 Roee ping you again :)
14:02 joehh scoates: so it is unlikely that they are different versions
14:03 joehh there goes that theory
14:03 scoates joehh: it's actually possible. just trying to destroy/up without having highstate run.
14:03 joehh salt-minion --versions-report
14:03 scoates joehh: it's a base image, but the master gets provisioned by salt-bootstrap (with Vagrant)
14:04 joehh and salt-master --versions report will get you their versions
14:04 joehh also salt --versions-report for completeness
14:04 scoates joehh: yeah. I'm trying to get the VM up without having it make any changes (-: still need a min or 3
14:04 joehh I imagine that could be tricky depending on circumstances
14:05 scoates yeah. I think I've got it. It just takes minutes, not seconds (-:
14:05 nitti joined #salt
14:05 scoates current theory is that it is a version mismatch and the first highstate updates the minion version
14:06 primechuck joined #salt
14:06 nitti joined #salt
14:06 scoates nope. there goes that theory.
14:06 scoates joehh: http://paste.roguecoders.com/p/0105b3c95047d2f4024621a5fc9436c7.txt
14:07 joehh thats no good - just out of interest, what is the os?
14:07 aquinas joined #salt
14:08 mindscratch joined #salt
14:08 mindscratch I have an SLS file for starting mesos master process, /srv/salt/mesos/master.sls, all it does is:  mesos-master: service.running: []
14:09 mindscratch I'd like to have a way to also kill it
14:09 mindscratch so I was going to create /srv/salt/mesos/master-dead.sls, with mesos-master: service.dead: []
14:09 scoates joehh: debian 7
14:09 mindscratch ...is there a better approach? or a recommended approach?
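[editor's note] mindscratch's two-file approach is a reasonable pattern; spelled out (two documents in one fence, separated by `---`):

```yaml
# /srv/salt/mesos/master.sls
mesos-master:
  service.running: []
---
# /srv/salt/mesos/master-dead.sls
mesos-master:
  service.dead: []
```

An orchestration sls can then reference whichever one it needs (`sls: mesos.master` vs `sls: mesos.master-dead`) in a `salt.state` step.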
14:09 GabLeRoux joined #salt
14:10 BigBear joined #salt
14:10 joehh thought it might have been :)
14:11 mindscratch these SLS files are being referenced from an orchestration SLS
14:11 CeBe1 joined #salt
14:11 joehh I'm out of ideas unfortunately, maybe one of the salt devs might be able to help in a couple of hours 7am ish in their part of the world
14:13 * scoates nods
14:13 scoates thanks anyway, joehh
14:13 miqui_ joined #salt
14:14 joehh no worries, I'm waiting to catch some of them myself, so hoping they appear soon...
14:15 GabLeRoux joined #salt
14:16 racooper joined #salt
14:20 evle joined #salt
14:21 subsignal joined #salt
14:21 yomilk joined #salt
14:23 mpanetta joined #salt
14:23 steverweber_ joined #salt
14:23 timoguin joined #salt
14:23 scoates joehh: I see what's going on. It's a weird combination of a bad symlink in my salt states repository, and a reactor script that "fixes" it. Will fix it to not fix it. (-:
14:23 scoates thanks again.
14:24 mindscratch is it possible to get output during the process of salt-run?
14:24 cpowell joined #salt
14:24 mindscratch currently I run salt-run state.orchestrate orchestration.deploy, and I have to wait until it's done to see the output from everything
14:24 toastedpenguin joined #salt
14:26 toastedpenguin joined #salt
14:26 jalaziz joined #salt
14:28 hobakill remind me - changes to top.sls require a service restart before they go into effect, right?
14:29 hasues joined #salt
14:29 hasues left #salt
14:29 ocdmw joined #salt
14:30 ocdmw joined #salt
14:31 VSpike Now theres a few more people around ... looks to me like a profile for proxmox for salt-cloud has to contain both the node in the cluster to deploy on and the ip address of the container you want to create...
14:31 lpmulligan joined #salt
14:31 malcium joined #salt
14:31 paulm- joined #salt
14:31 VSpike That means you're going to have to create a profile for every single container you want to create
14:31 VSpike Is that correct or am I being daft?
14:32 elfixit joined #salt
14:33 bhosmer_ joined #salt
14:38 cotton joined #salt
14:40 Furao joined #salt
14:44 aquassaut1 joined #salt
14:46 dude051 joined #salt
14:47 mage_ I have a question regarding "organization": let's say I have "n" python webapps, with "n" virtualenv and "n" users. I want to be able to "redeploy" the a whole webapp, but also "all" virtualenvs for example. Is there a way to avoid duplication when I write my sls files ?
14:47 bhosmer joined #salt
14:48 mindscratch mage_: would orchestration help?
14:48 mage_ orchestration .. ? :)
14:48 pahko joined #salt
14:49 mage_ you mean The Orchestrate Runner ?
14:50 colttt joined #salt
14:52 micah_chatt joined #salt
14:52 paulm-- joined #salt
14:52 bhosmer joined #salt
14:52 mindscratch mage_: http://docs.saltstack.com/en/latest/topics/tutorials/states_pt5.html
14:52 mindscratch yea
14:53 mindscratch i'm new to salt so i might be off track, but i'm using orchestration to execute certain states in a certain order
14:53 mage_ I'll take a look...
14:54 Guest31320 joined #salt
14:55 micah_chatt_ joined #salt
15:00 jdesilet joined #salt
15:02 teebes joined #salt
15:02 bhosmer joined #salt
15:03 steverweber_ joined #salt
15:05 andrew_v joined #salt
15:06 Tyrm joined #salt
15:06 Tyrm joined #salt
15:07 dthorman joined #salt
15:08 Brew joined #salt
15:08 _mel_ joined #salt
15:09 steverweber_ joined #salt
15:10 jdesilet I have a question for the group. I'm working on some template files for ntp.conf. I wanted to try and make them as self maintaining as possible. Right now I'm populating a server's site based off a match statement in a pillar. It's something similar to {% if grains['domain'] == 'app.us.test.com' %} site: us. That value is then populated into the ntp server as ntp1.us.test.com calling out the pillar value site. This requires me to maintain the match list. I
15:10 jdesilet because my servers are in a subdomain I cannot just call the 'domain' of the server and populate it into the template. I was wondering if there was a way to call the domain, but apply a delimiter='.' and then set it to a value that the template can then use.
15:12 jdesilet basically trying to just pull the 'us' value from the app.us.test.com domain so I can then reuse it in the template. Or if there is a way to just strip off the leading subdomain that would work too.
15:13 SheetiS joined #salt
15:14 warthog42 jdesilet:  I believe you can do something like this to set the domain like you want:  {% set domain = grains['domain'].split('.')[0] %} and just adjust the [0] to which ever piece you want.
15:15 jdesilet excellent! I will give that a try. I'm still trying to learn python so that holds me back sometimes.
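[editor's note] warthog42's split trick, runnable as plain Python outside of jinja. Note that for `app.us.test.com` the `us` piece is index `1`, not `0`:

```python
# What {% set site = grains['domain'].split('.')[1] %} does in a template,
# using a literal string as a stand-in for grains['domain']
domain = "app.us.test.com"
site = domain.split('.')[1]  # 'us' -- adjust the index for other pieces
print(site)
```

Index `0` would give the leading subdomain (`app`), which is what jdesilet wants to strip.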
15:16 bhosmer joined #salt
15:20 BigBear joined #salt
15:20 cyberfart joined #salt
15:20 timoguin joined #salt
15:22 murrdoc joined #salt
15:22 yomilk joined #salt
15:23 cyberfart I have an interesting issue with salt. For 'some' minions, the first highstate runs fine and returns. Every run after runs fine, returns but master never gets them. I see in minion logs that it returns, and again for masters find_job. Assuming that the versions and libraries are equal, what else can cause this?
15:25 schlueter joined #salt
15:25 yawniek_ joined #salt
15:27 schlueter1 joined #salt
15:27 rojem joined #salt
15:28 lnxnut_ joined #salt
15:28 sijis how could i clear this? "An inconsistency occurred, a job was received with a job id that is not present in the local cache: 20150212104556132874". i have tried deleting everything in jobs/* directory on both master and syndic (service are stopped)
15:28 sijis any suggestions.
15:29 felskrone joined #salt
15:29 tru_tru joined #salt
15:30 andrein joined #salt
15:30 andrein Hi guys, there comes a moment in the life of every sysadmin when he must update his servers from centos 6 to centos 7. Some packages we're using have changed names, and some states will probably break in centos 7 (epel.sls is one of them for sure) Are there any best practices for this, short of wrapping huge chunks of sls files in ifs and elses? :)
15:30 clintberry joined #salt
15:32 jalaziz joined #salt
15:33 SheetiS andrein: I'd use a map/packagemap.jinja similar to how the formulas do for various distros and build the CentOS6/7 differences into your states that way
15:34 jonasbjork joined #salt
15:35 danemacmillan Question: Typically when I need to provision a new server, I would pass yum (on CentOS) dozens of space-separated CLI tool names, and yum would just chug along installing them. With salt, do I need to create a block for every CLI tool, or is there some less verbose way to install them all?
15:35 murrdoc use names
15:36 murrdoc or setup a pillar with a list of vms
15:36 murrdoc and use jinja to iterate over it
15:36 murrdoc thats what i do
15:37 andrein SheetiS: got some sample code i could look up? or maybe some reference in the docs?
15:37 murrdoc danemacmillan:  http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html search for names
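[editor's note] murrdoc's pointer, sketched. `pkg.installed` accepts a list via `names` (what the link describes) or `pkgs`; if I recall correctly `pkgs` resolves everything in a single yum/apt transaction, which is closer to the "dozens of space-separated names" workflow. Package names below are just examples:

```yaml
# sketch: one state installing many CLI tools
cli-tools:
  pkg.installed:
    - pkgs:
      - vim
      - htop
      - tmux
      - git
```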
15:38 mindscratch I have an SLS file for starting mesos master process, /srv/salt/mesos/master.sls, all it does is:  mesos-master: service.running: [].  I'd like to have a way to also kill it.  so I was going to create /srv/salt/mesos/master-dead.sls, with mesos-master: service.dead: [] ...is there a better approach? or a recommended approach? these SLS files are being referenced from an orchestration SLS
15:38 murrdoc andrein https://github.com/saltstack-formulas/mongodb-formula/blob/master/mongodb/map.jinja
15:40 andrein murrdoc: thanks!
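[editor's note] The map.jinja pattern from the linked formula, reduced to the CentOS 6/7 case SheetiS described. The package names and dict keys are illustrative:

```jinja
{# map.jinja (sketch) -- keyed on the osmajorrelease grain #}
{% set osmap = salt['grains.filter_by']({
    '6': {'db_pkg': 'mysql-server'},
    '7': {'db_pkg': 'mariadb-server'},
}, grain='osmajorrelease', default='6') %}
```

A state would then `{% from "map.jinja" import osmap with context %}` and reference `{{ osmap.db_pkg }}`, keeping the 6-vs-7 branching out of the sls files themselves.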
15:40 SheetiS murrdoc: thanks for sharing the link.  I got busy with work stuff and just got back to glance here again :D
15:40 murrdoc its ok
15:40 murrdoc i am booting up
15:41 murrdoc we totally need to make a salt faq
15:41 murrdoc how do i do multiple packages
15:41 murrdoc how do i map packages by grains
15:41 murrdoc do you know daddy
15:41 tristianc joined #salt
15:41 SheetiS ;-)
15:41 murrdoc and other such pertinent questions
15:42 ocdmw joined #salt
15:42 hobakill babilen, i finally figured out the right compound matching string.
15:42 murrdoc go on
15:42 murrdoc how u fix it
15:43 hobakill babilen, http://hastebin.com/eveyuqonev.vhdl
15:43 murrdoc ouch
15:43 funzo joined #salt
15:43 murrdoc totally should make that a node group
15:44 smcquay joined #salt
15:46 paulm- joined #salt
15:49 jdesilet joined #salt
15:49 pahko joined #salt
15:49 danemacmillan Anyone know a good repo on GitHub to see how people organize their CM?
15:49 danemacmillan Or a few, as examples to learn from
15:49 Furao danemacmillan: https://github.com/bclermont/states ?
15:50 paulm-- joined #salt
15:50 Furao http://salt.readthedocs.org/en/latest/topics/index.html?highlight=bclermont#example-salt-states
15:50 murrdoc woah woah
15:50 murrdoc thast like so 2013
15:50 murrdoc :D
15:50 Furao yes it’s 3 years old in most cases
15:50 Furao since then I have > 8000 commits :)
15:51 Furao it look totally different now
15:51 jalbretsen joined #salt
15:53 ocdmw joined #salt
15:53 murrdoc how about https://gist.github.com/puneetk/1fb349b10a84c8a6a0e5
15:53 murrdoc just wrote that up
15:54 bhosmer joined #salt
15:54 murrdoc btw Furao that github stuff is nice
15:54 ccarney_ROCC joined #salt
15:54 ccarney_ROCC left #salt
15:54 murrdoc github/bclermont/states
15:54 murrdoc +1
15:55 jalbretsen1 joined #salt
15:55 slafs joined #salt
15:55 Furao this is so old, today it's far better, with a formula testing framework, automatic monitoring discovery, more formulas, documentation
15:55 Furao http://doc.robotinfra.com/doc/intro.html
15:55 murrdoc automatic monitoring discover ?
15:55 murrdoc go on
15:56 babilen hobakill: Great! De Morgan would be proud :)
15:56 Furao using salt mine, formula minion exposes what they expect to be monitored and they’re now monitored
15:57 Furao most checks are passive, but i just rewrote the passive checks daemon in golang from python, now it only need 4 mb of ram instead of 19
15:58 murrdoc nice
15:58 murrdoc nagios ?
15:58 Furao yes or shinken
15:59 murrdoc is the code related to this available to browse ?
15:59 hobakill babilen, thanks for the tip. you + documentation = /me winning :)
15:59 conan_the_destro joined #salt
15:59 Furao for our clients only at this point
15:59 murrdoc fair nuff
15:59 murrdoc :)
16:00 Furao we worked on these formulas for equivalent of 7 human/years
16:00 murrdoc oh u guys are a devops for hire group
16:00 Furao yes
16:01 murrdoc must be fun
16:01 jtang joined #salt
16:01 Rojematic joined #salt
16:01 budman joined #salt
16:01 Furao not every days :)
16:01 murrdoc i mean i hope the stuff u have to automate changes alot with each new project
16:02 Furao deploy inhouse php code on mysql is the worst
16:02 murrdoc :)
16:02 bbradley left #salt
16:03 Furao yeah it goes from deploy embeded linux box that get installed in cellphone shop to nodejs app in ec2
16:03 wendall911 joined #salt
16:03 Furao with everything in between, lot’s of diversity
16:04 murrdoc u guys work from all over the country ?
16:04 murrdoc or one of the coasts ?
16:04 Furao as they all use the same “common” formulas it make it easier and we save a lot of time
16:04 babilen Ah, once again it would be nice if I could use mine information in pillars
16:04 Furao all over the world!
16:04 slafs left #salt
16:04 Furao I’m in vietnam now but most clients are in americas + europe
16:04 murrdoc nice
16:04 Furao (i’m not vietnamese)
16:04 Furao canadian
16:04 murrdoc not that there is anything wrong with that
16:05 murrdoc :D
16:05 murrdoc i work from home in chicago, works in sunny socal
16:05 murrdoc must be fun travelling the world tho
16:06 Furao work from home is great, been doing that for 6 years
16:06 Furao yes but it’s hard to have conference call at 1 am :P
16:06 babilen I couldn't work from home - I like to have people around
16:07 Furao wife act like if 4 coworkers are around :)
16:07 hobakill Furao, hahah! +1
16:07 lnxnut joined #salt
16:07 Furao and some of the guys i hired come here sometime
16:08 hobakill murrdoc, i'm in logan square
16:08 murrdoc whats logan square ?
16:08 murrdoc i just moved here
16:08 murrdoc :|
16:08 murrdoc i am in downtown
16:08 hobakill haha. the neighborhood i live in.
16:08 murrdoc ./me googles
16:09 babilen ← sits in Berlin
16:09 murrdoc nice, you work for a firm ? out there
16:09 StDiluted joined #salt
16:09 hobakill my work is also downtown but i WFH 50%
16:09 murrdoc nice
16:09 Furao even me know went to logan square :) to install some hardware in a sprint shop
16:09 murrdoc haha show off
16:10 murrdoc i didnt realise there was a lot of tech in chicago
16:10 hobakill up and coming murrdoc !
16:10 murrdoc i thought it was just groupon and venmo
16:10 favadi joined #salt
16:10 igorwidl joined #salt
16:11 Furao here is one of them, favadi
16:11 murrdoc do they have 'shared coworking' spaces in chicago hobakill
16:11 murrdoc whats favadi
16:11 hobakill murrdoc, yeah. it's in merchandise mart i believe.
16:11 Furao he had been in our team for a while
16:12 hobakill murrdoc, (at least the big one)
16:12 robothands <-- london
16:12 murrdoc nice
16:12 clintber_ joined #salt
16:12 favadi Furao: what happened? :|
16:13 danemacmillan For someone trying to start off with salt--would it be a good recommendation to first run salt as masterless, without daemons, and just figure out how to write sls files?
16:13 babilen In some ways that will make it harder
16:13 murrdoc are you comfortable with vagrant danemacmillan ?
16:13 danemacmillan Yes, very
16:13 Furao favadi: you can look at irc log :P but said few things about us and you joined the channel just few minutes after :P
16:14 babilen danemacmillan: https://gist.github.com/babilen/e9479fdfbcca431db208 might serve as inpiration for a test setup
16:17 mindscratch Furao: on the product/stack page on robotinfra, there's a typo "loose" should be "lose"
16:17 Furao mindscratch: ok thanks there is a lot of typos some native english speaker is actually reviewing everything :)
16:18 scarcry joined #salt
16:20 mindscratch Furao: cool, i was just looking at it to see what robotinfra was and saw it.
16:23 timoguin joined #salt
16:24 Furao fixed
16:24 bhosmer joined #salt
16:24 ocdmw joined #salt
16:24 vlcn_ joined #salt
16:26 ajw0100 joined #salt
16:27 scoates_ joined #salt
16:29 bhosmer_ joined #salt
16:29 tokyo_jesus joined #salt
16:29 forrest joined #salt
16:29 Hipikat joined #salt
16:30 malcium joined #salt
16:31 fxhp joined #salt
16:31 iggy joined #salt
16:31 sijis joined #salt
16:35 wilkystyle joined #salt
16:36 jcsp joined #salt
16:36 viq joined #salt
16:37 wilkystyle joined #salt
16:37 steverweber joined #salt
16:37 josephleon joined #salt
16:43 scarcry joined #salt
16:45 malcium joined #salt
16:46 Grokzen joined #salt
16:49 tligda joined #salt
16:51 StDiluted joined #salt
16:52 bhosmer joined #salt
16:52 tomspur joined #salt
16:54 lnxnut joined #salt
16:55 joehh basepi: do you know if you are supposed to be able to use salt-call without msgpack?
16:56 joehh I've got https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=777665
16:56 joehh and can easily fix it via adding a dependency, but am wondering if it is not a bug as well
17:04 wilkystyle left #salt
17:07 drawsmcgraw I just read about Ansible's 'vault' - encrypts files with sensitive details. I don't suppose there's any native analogue to that in Salt... ?
17:08 drawsmcgraw The most I've seen/been told is 'use external pillar' or 'use a private git repo'
17:08 GabLeRoux joined #salt
17:08 malcium left #salt
17:09 murrdoc http://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.gpg.html
17:10 TheoSLC joined #salt
17:10 [LF] left #salt
17:11 yomilk joined #salt
17:11 aquinas joined #salt
17:14 linjan joined #salt
17:15 GabLeRoux joined #salt
17:15 p0rkjello joined #salt
17:16 aparsons joined #salt
17:17 p0rkjello left #salt
17:18 KyleG joined #salt
17:18 KyleG joined #salt
17:19 ThomasJ drawsmcgraw: http://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.gpg.html
17:19 murrdoc fun fact salt scheduler will run jobs at 0 seconds, that is, if u use schedule pillar to add jobs, and one of the jobs restarts the minion, that job will happen every 5 seconds
17:19 murrdoc cos everytime it restarts it will run the job
17:19 ThomasJ Aaand murrdoc beat me :)
17:21 jtang joined #salt
17:21 iggy is there a reactor event that means the node just booted (as opposed to the minion just started)?
17:21 bhosmer joined #salt
17:22 steveoliver left #salt
17:22 iggy I think there is for salt-cloud, but I don't know about anything in general
17:24 SheetiS iggy: I don't know that there is a way for salt to know that since the system will already be 'booted' when the salt daemon starts (regardless of whether it was started by sysvinit, upstart, or systemd).
17:25 SheetiS best i think you could do is have the reactor call a state that called salt['status.uptime']() and looked for a threshold to decide whether to run
17:25 jtang joined #salt
17:25 SheetiS so if the uptime is < threshold then assume we just booted.
17:25 iggy that's not a bad idea
17:25 prwilson joined #salt
17:26 drawsmcgraw ThomasJ: Neat! Thanks!
17:27 SheetiS status.uptime might be tricky to parse though
17:27 favadi joined #salt
17:27 desposo joined #salt
17:27 SheetiS it basically returns the result of this: w | head -n1
17:28 TheoSLC I'm using Salt 2014.7.1 with a Windows 8.1 minion.  I have a string stored in a pillar that is a windows file path "C:\path\to\file".  When I pass that string variable into my jinja template it is output as "C:\\path\\to\\file" which does not work for me.  How can I fix this?  Thanks
17:28 aparsons joined #salt
17:28 SheetiS actually it just runs cmd.run uptime
17:28 SheetiS so exactly that
17:31 davet joined #salt
17:32 iggy yeah, that output is a bit "tricky"
17:32 SheetiS might be easier to make a custom module that used python's uptime.uptime.
17:33 SheetiS from uptime import uptime\n return uptime()
17:33 SheetiS something like that inside a function
17:33 SheetiS then you could use numerical comparisons
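[editor's note] SheetiS's idea as a custom execution module. The file name is hypothetical, and this version reads `/proc/uptime` directly instead of importing the external `uptime` package he mentions, which trades the extra dependency for being Linux-only:

```python
# _modules/boottime.py (hypothetical name) -- sync to minions with saltutil.sync_modules
def seconds():
    '''Return system uptime in seconds as a float (Linux only).'''
    with open('/proc/uptime') as f:
        # first field of /proc/uptime is uptime in seconds
        return float(f.read().split()[0])
```

A reactor-triggered state could then compare `salt['boottime.seconds']()` against a threshold to decide whether the node "just booted".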
17:34 rojem joined #salt
17:34 SheetiS I'm actually surprised that status.uptime doesn't use that to be cross platform.
17:34 lietu- joined #salt
17:35 AirOnSkin joined #salt
17:36 SheetiS I guess since uptime.uptime would be an external python module, maybe it's to keep dependencies under control.
17:36 I3olle joined #salt
17:36 AirOnSkin Can I non-interactively accept a specific minion key?
17:37 AirOnSkin I don't have any unaccepted keys right now, so testing is a little hard
17:37 SheetiS AirOnSkin: I have a rector that accepts minion keys on a few types of conditions.  There is a little bit of risk in that, but it is definitely possible.
17:38 AirOnSkin SheetiS: Can you provide an example?
17:38 pahko joined #salt
17:38 SheetiS http://docs.saltstack.com/en/latest/topics/reactor/#a-complete-example has one i think.
17:39 _prime_ joined #salt
17:39 AirOnSkin I was rather thinking about something like: salt-key -ay minion_id
17:39 AirOnSkin Would that work?
17:39 AirOnSkin (On the master ofc)
17:40 rvankleeck joined #salt
17:40 rvankleeck is there a history of salt commands/output stored anywhere?
17:41 rvankleeck more precisely, what changed due to a command (e.g. highstate)
17:41 SheetiS AirOnSkin: that should work I would think.  -a and -y are valid flags
17:41 pdayton joined #salt
17:41 SheetiS rvankleeck: you can use salt-run to lookup jobs that have run and their output http://docs.saltstack.com/en/latest/topics/jobs/
17:42 AirOnSkin SheetiS: thanks, will try that
17:42 rvankleeck Oh wow. Thanks, SheetiS !
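[editor's note] The runner functions from the linked jobs doc, for reference (the jid below is a placeholder):

```
# list recent jobs known to the master, then inspect one by job id
salt-run jobs.list_jobs
salt-run jobs.lookup_jid 20150212104556132874
```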
17:43 iggy the job cache (generally) only lasts 24 hours
17:44 neogenix joined #salt
17:44 SheetiS iggy++ yeah it's only good for fairly recent jobs.
17:45 rvankleeck iggy, SheetiS, thanks. noticed that in the linked doc. If I were to change that to a larger interval (e.g. 7 days), what affect might that have on the master itself?
17:45 mpanetta You can configure salt to store the jobs in a DB though IIRC
17:45 iggy ^
17:45 rvankleeck mpanetta, ooohh...that would be nice
17:45 iggy I think most people that want that kind of history setup returners to write to a DB of some sort
17:46 mpanetta Yeah there is a mongodb returner I thnk
17:46 iggy there are a number of returners
17:46 SheetiS There are several db returners
17:46 SheetiS http://docs.saltstack.com/en/latest/ref/returners/all/ for a list.
17:47 iggy cassandra, couchbase, etcd, hipchat, kafka, memcache, mongo, mysql, redis, slack, smtp (and many others)
17:48 rvankleeck is that only for scheduled jobs, or can it be a global thing as well?
17:48 KyleG #buzzwords
17:48 iggy global
17:49 rvankleeck iggy, would i just set 'schedule_returner' in the master.conf for that?
17:50 iggy I'm not sure exactly how to set it up tbh
17:50 jalaziz joined #salt
17:50 iggy it's on my TODO list ;)
17:50 rvankleeck ah, kk
17:50 rvankleeck lol
17:52 rvankleeck iggy, looks like I found the correct doc: http://docs.saltstack.com/en/latest/ref/returners/index.html#event-returners
17:52 rvankleeck parent to the link from SheetiS
17:54 SheetiS yep that can definitely do it as long as the returner supports event_return()
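[editor's note] Pulling the thread together as master-config options (a sketch; option names as of 2014.7-era salt, and `mysql` is just one example returner):

```yaml
# master config (sketch)
keep_jobs: 168          # local job cache lifetime in hours (default 24)
master_job_cache: mysql # route all job returns to an external store
event_return: mysql     # requires a returner that implements event_return()
```

Raising `keep_jobs` answers rvankleeck's 7-day question at the cost of more disk under the master's cache dir; the returner options offload that history to a database instead.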
17:55 rvankleeck thanks guys!
17:55 theologian joined #salt
17:58 spookah joined #salt
18:02 josephleon joined #salt
18:02 Heartsbane joined #salt
18:06 micah_chatt_ joined #salt
18:08 bhosmer joined #salt
18:10 desposo joined #salt
18:10 amcorreia_ joined #salt
18:11 jdowning joined #salt
18:13 schlueter joined #salt
18:14 hebz0rl_ joined #salt
18:15 basepi joehh: yes, `salt-call --local` should work without msgpack, that's how salt-ssh works. Let me glance at your link.
18:16 basepi joehh: but msgpack is a dependency for all of salt except salt-ssh. So it should probably be a dependency for common, I feel like.
18:17 rap424 joined #salt
18:18 bhosmer__ joined #salt
18:20 amcorreia__ joined #salt
18:23 Deevolution Is there a way to get salt to report Pillar data on a non-responding minion (say using cached grain data)?
18:26 sjol_ joined #salt
18:26 berserk joined #salt
18:27 murrdoc salt 'minion' pillar.get lolhi
18:27 notnotpeter Does anyone know if you can specify the version of salt which gets installed via bootstrap when instances are spun with salt-cloud?
18:27 notnotpeter When 2015.2 stable gets released I'd prefer to wait a couple weeks before I push it into production.
18:28 mpanetta notnotpeter: Yes, in your profile with a scriptargs:
18:28 TheoSLC Hi all.  I really need help getting Salt 2014.7.1 on Windows to stop escaping my strings in templates: c:\path\to\file is turning into c:\\path\\to\\file
18:32 Ryan_Lane joined #salt
18:33 dstokes hey guys, how does one configure an ext_pillar to handle gpg encrypted data? decryption works with values in the standard pillar, but not with ext_pillar
18:33 otter768 joined #salt
18:33 stanchan joined #salt
18:34 iggy sounds like a bug
18:34 bhosmer joined #salt
18:34 amcorreia_ joined #salt
18:35 dstokes do i need to import a python lib or something to enable it, or does it happen after the ext pillar returns it's data
18:36 ajw0100 joined #salt
18:41 iggy hmm, that I don't know... I would assume it was after the pillar is pulled, but it could very well be built into "normal pillars"
18:42 jonasbjork joined #salt
18:43 dstokes maybe i need to use the gpg module explicitly?
18:44 murrdoc in your ext_pillar ?
18:44 murrdoc yup
18:44 dstokes yea
18:45 dstokes __salt__['gpg.decrypt']() ?
18:46 micah_chatt joined #salt
18:46 jalaziz joined #salt
18:47 murrdoc https://github.com/saltstack/salt-contrib/blob/master/pillars/lookup.py
18:49 murrdoc not sure if thats how it works
18:50 murrdoc https://github.com/saltstack/salt/blob/develop/salt/modules/gpg.py does have a decrypt function
18:50 murrdoc so yeah thats how u do it
18:50 murrdoc :D
18:50 murrdoc but if u are using the pillar in a state
18:51 murrdoc then all u have to do i think is put | gpg at the end of your first line
18:51 murrdoc # yaml | gpg
18:52 murrdoc #!yaml|gpg to the top of the state file
18:53 SheetiS keep in mind that if you use gpg stuff in a state, then the keys need to live on the minion(s) as well as the master since the state would be executed from the minion.
18:54 SheetiS I always put my gpg stuff in a pillar for this reason and then just reference the pillar data in the state.
18:54 micah_chatt_ joined #salt
18:54 murrdoc pillars are automatically decrypted ?
18:55 SheetiS If you shebang at the top of the pillar as gpg or if you set your default pillar renderers to include gpg.
18:55 SheetiS can do this in your master config or so: renderer: jinja | yaml | gpg or #!jinja|yaml|gpg on the particular pillar.
18:56 murrdoc probably should do both
18:56 SheetiS I tend to use the shebang method so that I know that I have gpg to decrypt on purpose
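[editor's note] SheetiS's shebang method, sketched as a pillar file. The key name and ciphertext are placeholders; the armored block would come from something like `gpg --armor --encrypt -r <master-key-id>`:

```yaml
#!yaml|gpg
# pillar sls (sketch) -- only the master needs the private key to decrypt
db_password: |
    -----BEGIN PGP MESSAGE-----
    ...armored ciphertext here...
    -----END PGP MESSAGE-----
```

Because pillars are rendered on the master, this avoids the problem SheetiS raises above of having to distribute gpg keys to every minion.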
18:56 Ryan_Lane I want to output a yaml file from a pillar, but I'd like to filter the results by some keys in the pillar, can anyone think of a sane way to do this?
18:56 Ryan_Lane should I generate another dict in a local jinja var and use file.serialize?
18:57 TTimo joined #salt
18:58 smithd joined #salt
18:58 SheetiS Ryan_Lane: I'd think you'd need to operate on a local dict in any case since you are filtering the data.  file.serialize seems a sane 'next-step' from there.
18:59 * Ryan_Lane nods
18:59 Ryan_Lane I was hoping to avoid jinja as much as possible, since manipulating dicts in jinja sucks
19:00 yomilk joined #salt
19:01 dstokes murrdoc: yeah, standard pillar works fine. adding call to gpg.decrypt in my ext_pillar is throwing an error. still investigating
19:01 bhosmer joined #salt
19:01 forrest joined #salt
19:01 murrdoc at the top of your pillar file (assuming python) u could try #! py | gpg
19:02 Godfath3r joined #salt
19:03 murrdoc SheetiS:  renderer: yaml_jinja | gpg is what you would put in the salt master config yeah ?
19:03 murrdoc not yaml | jinja
19:03 dstokes murrdoc: yeah, that totally works, but this is an external pillar .py file i'm working with
19:03 dstokes doesn't decrypt automagically
19:03 murrdoc really ?
19:03 bhosmer joined #salt
19:03 dstokes trying to figure out if i can use the salt module or if i have to roll my own using the gnupg-python module
19:03 SheetiS murrdoc: probably
19:05 dstokes murrdoc: yeah, i get back the gpg message, no errors in salt master log
19:05 dstokes w/o any modification to my ext_pillar
19:05 ckao joined #salt
19:06 SheetiS Ryan_Lane: thinking to do something like this to filter the data? https://bpaste.net/show/526858da90c5
19:06 Ryan_Lane yep, doing that now
19:06 dstokes hmm.. don't see a gpg module in the source, maybe that's why __salt__['gpg.decrypt'] isn't working
19:06 thedodd joined #salt
19:06 Ryan_Lane SheetiS: any idea how to put that into serialize?
19:06 Ryan_Lane just {{ data_dict }} ?
19:07 SheetiS That's what I was thinking.
19:07 Ryan_Lane I'll actually know in a sec. testing it :)
19:08 SheetiS dstokes: what salt version are you using?
19:08 SheetiS the gpg renderer stuff was new in 2014.7.0 I think.
19:08 dstokes 2014.7.0rc2
19:08 Edgan and you should be using 2014.7.1
19:09 dstokes there are various regressions in >= 2014.7.0 stable that prevent me from updating :'(
19:09 dstokes https://github.com/saltstack/salt/issues?q=is%3Aissue+is%3Aopen+commenter%3Adstokes
19:09 SheetiS not sure when it got in, but I know I was able to use it in 2014.1.xx by using it as a custom module in /srv/modules/renderers/
19:10 SheetiS just added the gpg.py into there
19:10 smithd joined #salt
19:10 dstokes ah, yeah, looks like gpg module was added after the version of salt i'm pinned to
19:11 SheetiS you'd also want to do the same with the module as the renderer i think based upon what you look to be trying to do.
19:11 iggy if there isn't version info in the docs, file a bug
19:11 iggy I like harassing them
19:12 dstokes iggy: will do
19:12 SheetiS When I have lots of time (not that often), I try and PR the fix right in the documentation, but today is the first time I've had a chance to really work on any of my salt stuff in a month or 2 (yuck).
19:13 murrdoc u graduated to management ?
19:13 fragamus joined #salt
19:13 steverweber_ joined #salt
19:13 Ryan_Lane SheetiS: cannot represent an object: OrderedDict(...
19:13 Ryan_Lane -_-
19:13 SheetiS try and pipe it to yaml?  {{ data_dict|yaml }}
19:14 SheetiS that might be interesting. might have to do the multiline trick and give it a data_dict|yaml|indent(some_number) or something though to make it work.
19:16 Ryan_Lane SheetiS: I actually only needed a subset, so I'm just going to pull the specific attributes I need
19:17 Ryan_Lane oh. interesting. it seems that dicts in jinja are actually ordered dicts
19:17 Ryan_Lane with salt
19:17 nich0s joined #salt
19:18 SheetiS that works.  I was just thinking you could go "- dataset: |\n        {{ data_dict|yaml|intent(8) }}" or the like
19:18 SheetiS s/intent/indent
19:18 Ryan_Lane this isn't working at all :(
19:19 Ryan_Lane I'm thinking I can't use serialize
19:19 Ryan_Lane when I set dicts in jinja, they're ordered dicts
19:19 Ryan_Lane so it can't serialize them
19:19 Ryan_Lane it's absurd that the serializer can't handle that
19:19 Ryan_Lane it should transform for you
19:20 Ryan_Lane (the salt one, that is)
19:20 Ryan_Lane meh. I'll use file.managed with a template
19:20 breakingmatter joined #salt
19:21 SheetiS template is {{ data_dict|yaml(False) }}?
19:21 breakingmatter Hey guys, I'm having some salt-api eauth issues. Would anyone be able to help?
19:21 nich0s Does anyone here know of an HA OpenStack build out using Salt?
19:22 Ryan_Lane SheetiS: what's the False for?
19:22 iggy breakingmatter: just ask... if you don't get any help, try the list
19:23 sophomeric When I have master_alive_interval enabled, that job on the minions pegs a core and takes awhile to run. The master is up, reachable and not under any load. Thoughts/ideas?
19:23 pahko joined #salt
19:24 fragamus joined #salt
19:24 felskrone joined #salt
19:25 breaking_ joined #salt
19:26 SheetiS Ryan_Lane: by default the piping to |yaml will try and put it all on one line.  I think that sets the default_flow_style=False on the yaml.dump
19:26 SheetiS I just do it to make it more human readable.
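The effect of that flag can be seen with PyYAML directly (salt's |yaml jinja filter is backed by PyYAML, so this is a reasonable stand-in):

```python
import yaml

data = {"a": {"b": 1}}

# flow style crams nested structures onto one line
print(yaml.dump(data, default_flow_style=True))   # {a: {b: 1}}

# block style expands them, which is what |yaml(False) gives you
print(yaml.dump(data, default_flow_style=False))  # a:
                                                  #   b: 1
```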
19:26 Ryan_Lane oh. awesome
19:27 schlueter joined #salt
19:28 breakin__ joined #salt
19:29 Ryan_Lane SheetiS: yeah, I wanted that. thanks for the tip
19:29 defenestratexp joined #salt
19:31 SheetiS I forget where I read that, and I am not 100% certain if it's just an arg like that or if you have to specify a kwarg, so let me know if it doesn't work, and I'll dig further.
19:31 breakin__ joined #salt
19:32 breaking_ joined #salt
19:33 berserk joined #salt
19:35 shoma joined #salt
19:35 BigBear joined #salt
19:36 GnuLxUsr joined #salt
19:37 SheetiS Ryan_Lane: actually I just tested it and |yaml(False) worked as expected for me.
19:38 Ryan_Lane SheetiS: passing that into serialize?
19:38 SheetiS in a file.managed
19:38 Ryan_Lane yeah. that works fine
19:38 Ryan_Lane it was serialize that was giving me issues
19:39 breakin__ joined #salt
19:41 clintberry joined #salt
19:42 SheetiS Ryan_Lane: I think that file.serialize might just be broken completely.
19:42 breakin__ joined #salt
19:42 Ryan_Lane it may be
19:43 Ryan_Lane it may be broken since everything was switched to ordered dicts
19:43 SheetiS This gave me the same error: https://bpaste.net/show/89e27528dfa2
19:44 Ryan_Lane https://github.com/saltstack/salt/issues/20647
19:48 SheetiS I included a bpaste with an example of the behavior on your ticket.
19:48 babilen beacons will be wonderful
19:48 babilen Anybody played with them yet?
19:49 murrdoc i want to
19:50 nich0s babilen: I'm only seeing the "notify" beacon ATM. Is there a more comprehensive list? I'd love to learn more.
19:50 Ryan_Lane SheetiS: cool
19:50 nich0s inotify*
19:50 Ryan_Lane I want beacons and reactors to work with masterless
19:50 Ryan_Lane very badly
19:50 babilen I'm not surprised :D
19:51 Ryan_Lane I have lots of use-cases for beacons and I've been writing simple python daemons
19:51 babilen nich0s: I don't know, but the applications made possible by that are, well, tempting
19:51 Ryan_Lane an etcd watch beacon would be great
19:51 mpanetta Where does one find docs on these beacons?
19:51 murrdoc consul wrote consul-templates for that
19:51 murrdoc the etcd watch beacon
19:52 babilen https://docs.saltstack.com/en/latest/ref/beacons/all/index.html http://docs.saltstack.com/en/latest/topics/releases/2015.2.0.html
19:52 babilen mpanetta: ^
19:52 mpanetta babilen: Thank you
19:52 SheetiS Ryan_Lane: I tested a bit more and found the json formatter works, so it must be specific to yaml.  I worked past that previously somewhere, and I am going to try and find where. (I think there are cases where this is worked around in the salt codebase itself elsewhere to be honest)
19:53 Ryan_Lane murrdoc: there's a beacon for consul?
19:53 Ryan_Lane or you mean they wrote some service that listens on watches?
19:54 murrdoc the latter
19:54 murrdoc https://github.com/hashicorp/consul-template
19:54 Ryan_Lane I like the idea of beacons better :)
19:54 murrdoc you want master less for states but a beacon and reactor server
19:54 murrdoc its a fun requirement
19:54 Ryan_Lane nah
19:54 Ryan_Lane I want it all to run locally
19:54 iggy why is there a file.write, but no file.read...
19:55 Ryan_Lane each minion would do their own beacons/reactors
19:55 nich0s iggy: What are you trying to accomplish?
19:55 iggy read a file...
19:55 Ryan_Lane could have a centralized beacon that watches SQS, as well :)
19:56 iggy or you could just run a master like the rest of us have to do
19:56 Ryan_Lane no thanks
19:56 Ryan_Lane it's still a SPOF
19:56 Ryan_Lane even with multi-master
19:56 mpanetta Imo the multi master impl sucks...
19:56 Ryan_Lane and multi-master is a pain to setup and maintain
19:57 robawt i think the pain is subjective.  running multi-master here and it's been fine once setup
19:57 mpanetta It is all wrong... The minions should all connect to all masters simultaneously, so no matter what master you are using they all get the data...
19:57 mpanetta That is the only way it will truly be redundant.
19:57 Ryan_Lane AWS has built-in cross AZ solutions that I could use to emulate the same thing :)
19:57 murrdoc thats the gossip protocol
19:57 breakingmatter joined #salt
19:58 mpanetta salt implements gossip?
19:58 murrdoc no
19:58 murrdoc but if it did
19:58 murrdoc then what u are asking for is doable
19:59 mpanetta Yep
19:59 SheetiS Ryan_Lane: I found another place in the salt codebase where the ordered dict yaml dump issue was already solved, so hopefully it is quick and easy for someone to address the ticket.  I've already spent too long tinkering today or I'd look to test and do a PR myself.
20:00 jeremyr joined #salt
20:00 smcquay joined #salt
20:00 capricorn_1 joined #salt
20:02 iggy I hate when I'm all "this should take 5 minutes" and it takes hours
20:03 bhosmer_ joined #salt
20:04 SheetiS iggy: what's giving you trouble?
20:04 breakingmatter I'm having an issue with salt-api (CherryPy) and eAuth. I have the user defined in '/etc/salt/master' along with '@wheel' and '@runner', but when I do a test curl against the master I still get a '401 Unauthorized' error. Any ideas?
20:05 iggy still messing around with this "run something after a minion (re)boots"
20:05 murrdoc its more than startup_state config ?
20:05 SheetiS iggy: do it a simple way.  file.append to /etc/rc.local something like this: salt-call state.highstate
20:05 rvankleeck_ joined #salt
20:05 rvankleeck_ joined #salt
20:05 neogenix joined #salt
20:06 SheetiS so that way it's only on reboot and not when the daemon starts
20:06 iggy well... I need to do something on another set of hosts when one comes up
20:06 murrdoc oooh
20:06 murrdoc orchestrate dont work ?
20:06 SheetiS salt-call event.fire_master in rc.local
20:07 iggy i.e. activemq with a DB as the persistence store... when the persistence DB (re)boots we need to restart the activemq service on the other instances
20:07 murrdoc hahah
20:07 murrdoc rc.local recommendation
20:07 murrdoc <3 it
20:07 SheetiS murrdoc: it's terrible, I know
20:07 SheetiS but how does salt know when the system rebooted?
20:07 SheetiS it knows when its daemon restarted.
20:08 SheetiS you could guess at a reboot based on status.uptime (which is _not_ in a usable format btw)
20:08 murrdoc yup
20:09 SheetiS you could write a custom module that uses python uptime.uptime once you install it via pip, but you still guess that you just rebooted.
20:09 iggy well, so far I'm trying: - 'salt/minion/*mqdb*/start': reactor that runs an sls on the mqdb server if /proc/uptime first digit is < X, then fire a custom event that will then run the service restart on the activemq nodes
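A sketch of the reactor wiring iggy describes; the custom event tag, sls paths, and comments here are hypothetical names for illustration:

```yaml
# /etc/salt/master.d/reactor.conf (hypothetical paths and tag)
reactor:
  - 'salt/minion/*mqdb*/start':
    - /srv/reactor/check_mqdb_boot.sls   # runs against the mqdb minion;
                                         # fires 'mqdb/rebooted' if the
                                         # first /proc/uptime field < X
  - 'mqdb/rebooted':
    - /srv/reactor/restart_activemq.sls  # service.restart activemq on the
                                         # other nodes
```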
20:09 rap424 joined #salt
20:09 timoguin joined #salt
20:10 MikeR joined #salt
20:10 ajw0100 joined #salt
20:11 iggy I think I'm trying to make this too pretty for it to be wallpaper over an activemq bug
20:11 SheetiS iggy: I think you could make that all work
20:12 iggy well, I'm trying to get the number with a cmd.run call and it's misinterpreting my | (because I only want part of /proc/uptime)
20:12 stej joined #salt
20:13 gyre007 joined #salt
20:13 iggy file.search just says True/False... doesn't actually give you the search results
20:14 SheetiS are you using a split()[0] on the cmd.run then?
20:15 iggy so just cmd.run('cat /proc/uptime')|split()[0]
20:15 iggy guess that works
20:15 SheetiS and then if you want it to be an integer
20:15 iggy seems odd
20:15 SheetiS pipe it to |int afterward
20:15 kwmiebach joined #salt
20:16 SheetiS since you only want the first number not the second and you want it as an int
20:16 MaliutaLap joined #salt
20:16 MaliutaLap joined #salt
20:16 SheetiS cmd.run('cat /proc/uptime')|split()[0]|int and then you can compare it to your time X sanely
20:16 iggy I just feel like there should be something to read a file other than cmd.run('cat foo')
20:17 iggy but whatever... like I said... trying to make some crappy activemq bandaid look pretty for some reason
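The parsing being discussed is just the first whitespace-separated field of /proc/uptime. In plain python (the jinja version pipes the same steps; the file contents below are a stand-in value, not read from a real system):

```python
# /proc/uptime holds two floats: seconds since boot, and aggregate idle
# seconds. Stand-in contents for illustration:
uptime_line = "12345.67 49152.21"

# equivalent of cmd.run('cat /proc/uptime')|split()[0]|int in jinja
uptime_seconds = int(float(uptime_line.split()[0]))

recently_booted = uptime_seconds < 300  # X = 300s is an arbitrary threshold
print(uptime_seconds)  # 12345
```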
20:17 breakingmatter I know, I'm a terrible person for bumping myself, but I could really use some help with salt-api.
20:17 EWDurbin joined #salt
20:18 manytrees joined #salt
20:18 simonmcc joined #salt
20:20 jay_d joined #salt
20:20 wiqd joined #salt
20:21 SheetiS breakingmatter: I don't have enough experience with the specifics of your question to be able to assist, or I would do my best to do so.  Did you include a pastebin of relevant configs as part of your api question?
20:22 whiteinge breakingmatter: did you restart the salt-master and salt-api daemons after adding the user?
20:23 breakingmatter SheetiS: Oh, to be honest it's my first time joining this channel so I wasn't sure what the process actually was for posting a question. I can pull the logs and get a pastebin going.
20:23 breakingmatter SheetiS: Yes, restarted both services multiple times.
20:24 SheetiS also I know if you are using pam as your auth method, you might find that you have to run the api daemon as root for it to work. (I've seen that with other apps) I don't know what user you are running your salt-api daemon as.
20:24 murrdoc he right
20:27 SheetiS I currently do most of my stuff via the CLI api rather than via restapi, so I am just not as familiar with the specifics as I should be.
20:27 breakingmatter Here we go: http://pastebin.com/dBwXnxzf
20:28 drawsmcgraw So this thing..... http://stackoverflow.com/questions/27517476/salt-stack-bootstrap-onliner-breaking-on-centos-6-6
20:28 drawsmcgraw I'm using salt-cloud on a private OpenStack instance
20:29 drawsmcgraw And salt-bootstrap is choking because (I imagine) it's hitting the EPEL repo with 'https'
20:29 breakingmatter drawsmcgraw: have you tried manually installing EPEL first?
20:30 drawsmcgraw breakingmatter: salt-bootstrap installed EPEL via the rpm, which generated the 'epel.repo' with the https lines in it
20:30 drawsmcgraw I'm swapping out 'https' for 'http' and will try just re-running salt-bootstrap to confirm
20:30 jdowning joined #salt
20:31 breakingmatter Well, one thing that I've noticed is that sometimes the download.fedoraproject.org DNS server doesn't respond fast enough for the script.
20:31 SheetiS breakingmatter: I was able to run the cli test you had there in my test environment without issue.  I did not get an error.
20:31 drawsmcgraw and that did it. Changing 'https' to 'http' in epel.repo, re-running salt-bootstrap, and it seems to work now
20:32 drawsmcgraw breakingmatter: Fair enough. But I re-ran salt-bootstrap several times and kept getting that error
20:33 breakingmatter drawsmcgraw: Interesting. I can wget that file with https and http.
20:33 SheetiS I only have .*, @runner, @wheel in my /etc/salt/master.d/auth.conf though
20:33 drawsmcgraw breakingmatter: which file?
20:34 bhosmer joined #salt
20:34 otter768 joined #salt
20:36 drawsmcgraw For what it's worth, this is a CentOS 6.4 image. Maybe I should test this with a 6.5 image and see if the problem persists :/
20:36 breakingmatter drawsmcgraw: wget https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
20:36 drawsmcgraw or 6.6... or .... something not so old
20:37 drawsmcgraw Yeah, but I wonder if it's this issue coming up again https://github.com/saltstack/salt/issues/8226
20:37 drawsmcgraw tl;dr - the yum python pkg is the root of the problem
20:38 drawsmcgraw breakingmatter: ping
20:38 breakingmatter drawsmcgraw: I'm not familiar with that bug report
20:38 drawsmcgraw Me neither. It just showed up in my Google searching
20:39 breakingmatter Are you running Amazon's image?
20:39 drawsmcgraw I'm not. It's a CentOS 6.4.
20:39 drawsmcgraw But
20:39 drawsmcgraw Given the date of that issue on Github, it's possible that the Amazon AMI was older as well
20:40 drawsmcgraw I think what I actually need to do is just try this out on CentOS 6.5/6.6 and see if the problem persists
20:40 breakingmatter Perhaps. Why not just 'yum update -y' ?
20:40 drawsmcgraw If it goes away with 6.5, then  I can safely ignore...
20:40 drawsmcgraw I'm using Salt Cloud, so any manual intervention would sort of go against the point of setting up Salt Cloud
20:41 drawsmcgraw I got my problem minion connected to the master with little problem. I just had to make some manual changes
20:41 breakingmatter Ohh, see, I've never used Salt Cloud. I don't know the first thing about it's architecture.
20:41 drawsmcgraw Just makes an instance and runs salt-bootstrap on it. Nothing particularly magical
20:43 breakingmatter SheetiS: Yeah, I tried removing "- '@jobs'" as well and it still didn't work.
20:44 breakingmatter SheetiS: All of this worked FINE like two months ago. But, other things came up so I dropped the project. Now it doesn't. Even rebuilt our salt-master dev environment and it doesn't work there.
20:45 SheetiS breakingmatter: have you tried using pam to authenticate with the account for something else (say logging in via sshd) or anything?
20:45 breakingmatter drawsmcgraw: Well, regardless, EPEL should be able to install on _any_ version. It's supposed to be version agnostic.
20:45 breakingmatter SheetiS: yes, I used ssh and it worked fine. It seems that only salt-master doesn't like it.
20:47 breakingmatter SheetiS: Do you know what pam modules salt-api queries against?
20:47 rvankleeck__ joined #salt
20:47 rvankleeck__ joined #salt
20:47 breakingmatter I have recently enabled things like SSSD and LDAP integration. Maybe there's a requisite module that's not getting tripped correctly. But, I don't know what modules salt-api queries.
20:47 SheetiS I can't reproduce it on salt 2014.7.0 from Amazon Linux 2014.9 (os_family RedHat).  I really don't have a recommendation at the moment.  I can check what the cli uses at the very least really quick.
20:48 drawsmcgraw breakingmatter: good point. I'll get to the bottom of it...
20:48 breakingmatter SheetiS: I think I may give the LDAP integration a try. I think that's what we want to use /eventually/ anyways.
20:48 yomilk joined #salt
20:48 johtso joined #salt
20:49 breakingmatter drawsmcgraw: One thing you can try is wget'ing that file that I tried.
20:49 sjol_ joined #salt
20:50 breakingmatter "wget https://mirrors.fedoraproject.org/metalink?repo=epel-6"
20:51 breakingmatter drawsmcgraw: Another thing you can try is flushing out Fastestmirror
20:51 breakingmatter It may have resolved a bad mirror
20:51 breakingmatter https://www.ndchost.com/wiki/server-administration/yum-fastestmirror-reset
20:51 drawsmcgraw Ah, interesting.
20:51 drawsmcgraw Thanks breakingmatter
20:51 sijis i keep seeing the following over and over on a client, "Command details {'tgt_type': 'pcre', 'jid': '20150212204755978838', 'tgt': '.*', 'ret': '', 'user': 'root', 'arg': ['20150212181844431228'], 'fun': 'saltutil.find_job'}".. the arg is only 1 of 2 values... is there is a way to kill this?
20:52 breakingmatter drawsmcgraw: Not a problem. Let me know if you make any headway and/or need help
20:54 breakingmatter sijis: try "salt '*' saltutil.kill_job 20150212181844431228"
20:54 sijis i have a master/syndic setup. i see this query happen every 5 seconds or so
20:54 sijis on all the minions :(
20:54 sijis breakingmatter: ok. gonna try
20:54 bhosmer joined #salt
20:54 breakingmatter That should just kill the job that's running the process.
20:55 breakingmatter Unless there's another job that's checking on another job.
20:55 breakingmatter In which case you could just try "salt '*' cmd.run 'shutdown -h -r now'"
20:55 breakingmatter (JOKING)
20:55 sijis damn! ;)
20:55 sijis i did restart all the minions too and both master and syndic, and cleaned out the jobs/* cache
20:56 jrluis joined #salt
20:56 iggy fwiw, I see that too
20:56 jalaziz joined #salt
20:57 breakingmatter I see a some similar log entries, but I have a lot of automated salt-mines and reactors so it's  hard for me to tell what's supposed to be there and what isn't lol
20:57 iggy it really dirties up your graphite events if you are sending every event to it ;)
20:57 murrdoc graphite can handle it
20:58 gattie joined #salt
20:58 iggy not as a stat, as an event (which are saved in a sqlite db by default)
20:59 sijis to be clear, this is what i see.. over and over: http://paste.fedoraproject.org/184934/14237747/
20:59 murrdoc oh
20:59 murrdoc ok i agree
20:59 timoguin joined #salt
20:59 sijis its almost like something is 'stuck'
21:00 iggy sijis: I don't think it's something to worry about tbh
21:00 chiui joined #salt
21:01 sijis iggy: i am.. whatever it is.. my syndic has been at 100% cpu on all 4 cpus for the last day
21:01 iggy well, start ripping things out until it stops
21:01 sijis we do notice that stopping the syndic server alleviates it
21:01 iggy use mine? disable that... scheduled jobs? kill those... etc until you narrow it down
21:02 sijis that's about it
21:02 breakingmatter sijis: "Have you tried turning it off and on again?"
21:02 sijis honestly, we don't use any of that.. we *only* use salt for remote cmd execution
21:02 murrdoc haha
21:02 iggy I'd try filing a bug then
21:03 breakingmatter sijis: Do you have _any_ state files?
21:03 StDiluted joined #salt
21:03 breakingmatter There's a chance that a cmd timed out and it's still running.
21:03 sijis we don't even have /srv/salt
21:04 joehh basepi: thanks for that - i suspect it would be possible for it not to be needed, but easiest fix is just to add the dependency
21:04 breakingmatter Well, cmd.run can still time out.
21:04 breakingmatter I'd say try killing all jobs on the master.
21:04 breakingmatter Then restart the service.
21:04 iggy I believe they said they tried that
21:05 rudi_s Hi. How can I cache data during a single salt run on a minion? I have a state file which needs to perform some expensive work, but only once per run and I'd like to cache the data. Any idea?
21:05 sijis i did find some cronjobs yesterday and i disabled those, and restarted the master, minion and api services
21:06 murrdoc sijis:  do u have a graphite instance ?
21:06 murrdoc that u can send data to
21:07 murrdoc send iostat, memory, process and load data to it
21:07 murrdoc thats the visibility u want
21:07 breakingmatter sijis: Did you remove all the jobs on the minions or the master?
21:08 murrdoc cause if its iostat then u need to bump up file limits for the syndic user
21:08 sijis breakingmatter: on master and syndic i stopped all salt-* services and then removed all the files in jobs/*, then i restarted the minion services on all the servers.
21:08 sijis once that was done, i started the master services
21:09 jalaziz joined #salt
21:09 murrdoc sijis:  put that server in new relic
21:10 breakingmatter sijis: is there anything in "/var/cache/salt/master/proc" on the master?
21:10 murrdoc its free for two weeks
21:10 murrdoc it will be easier than whack-a-mole
21:11 murrdoc server monitoring is free for life
21:11 murrdoc or something
21:11 sijis breakingmatter: empty on both master and syndic
21:11 breakingmatter Run "salt-run jobs.active" on the master
21:11 sijis running ...
21:12 pahko joined #salt
21:14 linjan joined #salt
21:15 breakingmatter sijis: Still running?
21:16 sijis yessir
21:17 CeBe1 joined #salt
21:17 iggy oh ffs, can't send events because we are stuck on <2014.7
21:17 breakingmatter sijis: Whoa.
21:17 iggy why did I even bother with this week
21:18 breakingmatter iggy: I won't tell anyone if you go home now.
21:18 rogst joined #salt
21:19 murrdoc sijis:  salt 'minion' pillar.get schedule
21:19 murrdoc on any one minion
21:21 iggy one time when I saw things going crazy like that was I had a reactor that was responding to an event, but somehow the reactor was also firing the event, so it would get itself into an infinite loop
21:21 iggy but apparently none of that stuff is in use
21:22 sijis i get 0 response back ..
21:23 sijis the jobs.active is still running btw
21:23 breakingmatter I'd say you have a stuck job then.
21:23 sijis well,for the pillar.get i see "hostname:" and a newline, that's it
21:23 breakingmatter Either that or an astronomical amount of jobs running.
21:24 sijis so, how can i find this 'stuck' job then?
21:24 sijis wouldn't deleting the stuff in jobs/* do that or is that somewhere else?
21:26 murrdoc you get hostname back for the pillar.get ?
21:26 murrdoc waaaa
21:26 sijis murrdoc: i did.. it was just the hostname:
21:26 sijis nothing else
21:27 murrdoc thats not good
21:28 smcquay joined #salt
21:30 murrdoc salt '*' schedule.disable
21:30 murrdoc run that
21:31 breakingmatter sijis: If the command never returns a list of jobs that are active, I'm not sure how you can remove the job.
21:31 bash124512 joined #salt
21:31 breakingmatter sijis: Might be a good idea to post a bug report
21:31 bash124512 joined #salt
21:32 murrdoc or salt '*' schedule.disable -t 180
21:32 signull hello
21:33 iggy at this point I usually shut everything down ; rm -rf /var/cache/salt/* ; start everything back up
21:36 sijis murrdoc: running that
21:36 sijis iggy: hmm. i've only done jobs. i could just do all that's in there
21:39 bhosmer joined #salt
21:40 whiteinge breakingmatter: (got pulled afk) did you get your eauth pam question sorted out?
21:40 MindDrive So I guess I will try asking once again: why would a master suddenly be unable to communicate to a minion when it had just been doing so 30 minutes earlier?  And by unable to communicate, I mean when running a 'salt <minion> test.ping', a 'tcpdump host <minion>' from the master shows NOTHING.
21:41 MindDrive If I can't figure out this issue, I'll pretty much have to give up completely on Salt.
21:41 mpanetta Are you sure your minion is actually running?
21:42 breakingmatter have you tried running salt-master with "salt-master -l debug"
21:43 mpanetta That probably won't help.  He needs to run debug on the minion side.
21:43 mpanetta Minions pull jobs from the master, the master does not push them.
21:43 mpanetta If the minion isn't connected to the master the minion will never see the job request
21:44 giantlock joined #salt
21:45 sijis breakingmatter: i'm running trace on syndic master
21:45 sijis not on 'master of masters' though.
21:45 murrdoc 'mom'
21:45 sijis thx
21:49 yomilk joined #salt
21:51 MindDrive mpanetta: The salt-minion process is running, yes.
21:51 sijis deleting the files in cache/salt/* and restarting services
21:51 MindDrive breakingmatter: Yes, and the debug output has been lacking in any useful information.
21:53 MindDrive What's really strange is I'll see the minion do a 'Decrypting the current master AES key' and I *will* see output in the tcpdump, but any attempt to communicate from the master itself fails.
21:54 breakingmatter whiteinge: No, still bashing my head against it. :(
21:54 breakingmatter MindDrive: What version of salt are you running? "salt --versions"
21:55 breakingmatter MindDrive: Also, are you using anything like salt-mine or pillar data?
21:55 * robawt highfives whiteinge
21:55 AlecF joined #salt
21:56 MindDrive breakingmatter: 2014.7.0  - have tried 2014.7.1 on the masters, but didn't help.
21:56 whiteinge breakingmatter: i'd suggest removing salt-api from the equation and test directly with `salt -a pam '*' test.ping` instead. the pam eauth backend should work transparently with anything pam is configured to talk to on the backend. so long as the user can log into a console it should work.
21:56 * whiteinge trampoline-high-fives robawt
21:58 whiteinge breakingmatter: also i'd suggest running the master in the foreground with debug logging while you attempt an auth to see if any additional output shows up there.
21:59 breakingmatter whiteinge: I've tried both of those. Here's the logs I posted earlier. http://pastebin.com/dBwXnxzf
21:59 MindDrive breakingmatter: And even worse, restarting salt-minion makes it start working again immediately... for a time.
21:59 AlecF Hi all. Quick question: I'm trying to figure out if Salt would be good for my render far. It's made up of, for now, 30 blades with CentOS that need to be periodically updated. The number of blades as well as the number of products that need updating will grow over time. Is Salt good for this type of thing or should I just become better at shell scripting?
21:59 AlecF That's render farm, not far.
22:00 breakingmatter AlecF: Salt is very flexible, and remote execution was what it was originally built for.
22:00 Quassel joined #salt
22:00 TTimo joined #salt
22:00 signull Hello has anyone seen an error that appears like the following?
22:00 signull State 'docker.pulled' was not found in SLS ....
22:00 whiteinge breakingmatter: thanks for the re-post. that log entry at the bottom is with debug-level logging set?
22:00 bunk_home joined #salt
22:01 yomilk joined #salt
22:02 breakingmatter whiteinge: That is correct. It doesn't log anything other than that.
22:02 Quassel left #salt
22:03 AlecF @breakingmatter - OK, thanks. I guess I'll go through the install process. It sounds as if it'll allow me to do the simple things I need as well as grow as the render farm grows in complexity.
22:03 breakingmatter AlecF: We use salt to manage our Dell Blades (m720s) and we have about 64. Works fine for that purpose.
22:03 Ahlee whiteinge: question for you while yo'ure here, with salt -v from the cli, you get back a list of minions that didn't respond - do you know off hand where (or if) that data is in the LocalClient()?
22:04 whiteinge breakingmatter: ok, probably need to dive into the module itself then. one last question before we do (sorry it's a dumb one). have you double-checked that you can log into a console session with these credentials?
22:04 jrluis joined #salt
22:04 breakingmatter AlecF: Salt can do a lot more than just remote execution, as I'm sure you'll find. It's great for managing configurations and even for setting up formulas. i.e., a formula to install render software so you only have to run one line of code.
22:04 breakingmatter whiteinge: Yes sir, tried that too. Works fine.
22:05 aurynn anyone know why I'd be seeing divergent behaviour in a rabbitmq_cluster.join from a normal state.sls and running the same state.sls through orchestrate"?
22:05 whiteinge Ahlee: i _think_ that data is, annoyingly, somewhere in the CLI code in print() statements and not available to LocalClient. i've been meaning to verify that because the behavior changed a little bit recently so I wanted to track it down :-/
22:06 sijis breakingmatter: so, deleting the files in /var/cache/salt/master/* and restarting, seemed to have stopped 1 of the two jobs spitting over and over. so we are trying it again
22:06 breakingmatter sijis: Well, at least you're halfway there.
22:07 breakingmatter Statistically speaking anyways.
22:07 sijis yup.
22:08 sijis is the job information stored somewhere on the master? which could be whacked. or is that in the cache/salt/master/* location
22:09 Ahlee whiteinge: ok.  I'll tinker and see what i can come up with, too.
22:09 Ahlee my guess was the cli is actually async'ing
22:10 whiteinge breakingmatter: good gravy. there are no logging statements in that module.
22:10 alexhayes Has anyone else experienced odd issues with ssh_known_hosts, specifically the fingerprint sometimes being valid, sometimes not?
22:10 breakingmatter whiteinge: Sad day :(
22:10 breakingmatter sijis: To be honest, I haven't delved deep enough into the jobs to know if it's stored on the minion at all.
22:12 bunk_home Hey guys. Anyone to assist me at #ip2 ?
22:14 TaiSHi How can I run a state and point to a specific pillar file ?
22:14 TaiSHi Command-line ish I mean, without editing top.sls
22:14 sijis breakingmatter: fair enough
22:14 iggy TaiSHi: you can set pillar data on the command line, but not a specific file
22:16 drawsmcgraw left #salt
22:17 overyander joined #salt
22:19 TaiSHi Thanks iggy, I should submit a feature request
22:19 micah_chatt Does anyone know what order the files in `/etc/salt/master.d/` are loaded in?
22:20 bhosmer__ joined #salt
22:20 bhosmer joined #salt
22:21 murrdoc i use 00 through 99 to order files in there
22:21 murrdoc and in minion.d
22:21 murrdoc ymmv
22:23 MindDrive http://paste.pound-python.org/show/DzjGC3ti9pI9f6ZsA2FE/ - note: salt-master is indeed running (as root, which is the user this code was executed from) and is working just fine from the command line.  Ideas on what I'm missing here?
22:23 micah_chatt I’ve got `10master.conf` and `20app.conf` in /etc/salt/master.d but it looks like the data from 10master.conf is being read last
22:23 micah_chatt I was hoping a higher number would be an override
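For what it's worth, Salt expands its default include (`master.d/*.conf`) with a glob and sorts the matches, so numeric prefixes control read order; whether a key in a later file actually overrides the same key in an earlier one is worth verifying against your Salt version. A quick Python sketch of the sort order (directory and filenames are illustrative):

```python
import glob
import os
import tempfile

# Create two empty conf files mimicking micah_chatt's /etc/salt/master.d layout.
d = tempfile.mkdtemp()
for name in ("20app.conf", "10master.conf"):
    open(os.path.join(d, name), "w").close()

# Lexical sort is what determines read order for a numeric-prefix scheme.
order = [os.path.basename(p) for p in sorted(glob.glob(os.path.join(d, "*.conf")))]
print(order)  # → ['10master.conf', '20app.conf']
```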
22:23 mschiff I am still having problems with highstate: most of my minions are stuck in highstate until I kill that process. (Version 2014.7.1 on Debian squeeze and wheezy). Anyone seeing this as well?
22:24 cpowell joined #salt
22:24 mschiff This happens not directly, but after salt-minion has run for some time (hours, few days ...)
22:24 bunk_home Do i need to add a new network to my irc-client ?
22:24 bunk_home c
22:25 teebes joined #salt
22:25 sijis breakingmatter: murrdoc iggy - so doing the double-remove worked, at least we didn't see the saltutil.find_job coming through after several minutes.
22:25 murrdoc sweet
22:26 sijis however, we enabled the api, and ran a job against it.. and now it's back
22:26 breakingmatter sijis: Best to go home now.
22:26 iggy ^
22:26 sijis it actually is time to go home for me...
22:26 sijis :)
22:26 breakingmatter sijis: Schrödinger's cat
22:26 sijis anyhow.. maybe its something on the api or how our code is calling it.
22:27 breakingmatter Would've been nice to see what the job was.
22:29 mosen joined #salt
22:31 sijis breakingmatter: well, the odd thing is that it keeps looking for the same jid but under a different job.
22:31 wedgie joined #salt
22:31 sijis this is basically the simple api call: http://paste.fedoraproject.org/184975/80234142/
22:32 rap424 joined #salt
22:35 otter768 joined #salt
22:35 johnkeates joined #salt
22:36 cpowell joined #salt
22:36 thedodd joined #salt
22:36 alexhayes If anyone else finds my question above about ssh_known_hosts, you'll probably be interested in this - http://irclog.perlgeek.de/salt/2013-04-16#i_6709480 - specifically you'll just want to run 'ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub' on the host.
22:36 johnkeates left #salt
22:38 bunk_home left #salt
22:39 ksalman1 There's no state for yum groupinstall?
22:40 SheetiS joined #salt
22:40 spookah joined #salt
22:41 JlRd joined #salt
22:42 kermit joined #salt
22:42 murrdoc whats groupinstall ?
22:43 MugginsM joined #salt
22:44 ksalman1 it's a collection of packages, like "Development Tools"
22:44 ksalman1 yum groupinstall "Development Tools" installs a bunch of development packages
22:45 murrdoc ah
22:46 murrdoc right
22:46 aurynn so, I need to add some jitter to a state run; what's the best way to achieve that?
22:46 aurynn to allow for varying times to attempt a network request
22:46 aurynn to avoid a thundering herd
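One common way to get that jitter (assuming a scheduled highstate fits aurynn's use case) is the scheduler's `splay` option, which delays each run by a random amount. A hedged minion-config sketch; the job name and intervals are illustrative:

```yaml
# Minion config fragment: schedule a recurring highstate and let `splay`
# add a random delay of up to 600 seconds to each run, spreading minions
# out so they don't all hit the network at once.
schedule:
  highstate:
    function: state.highstate
    minutes: 60
    splay: 600
```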
22:46 tristianc can I have multiple top files for the same environment?
22:48 tristianc i would like some way to run a top.sls, perform some commands manually, then run a second top.sls
22:48 ksalman1 i guess i can do yum groupinfo and get all the packages and list them in pkg.installed
22:49 murrdoc you could write something in python
22:49 murrdoc http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.yumpkg.html#module-salt.modules.yumpkg
22:50 murrdoc look that up in github
22:50 murrdoc and then u can just write your module/state combo
22:50 ksalman1 hm yea
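For the record, there is no dedicated groupinstall *state*, but on yum platforms the `pkg.group_install` execution-module function exists, and ksalman1's approach of enumerating the group's members works everywhere. A hedged sls sketch (package names are illustrative, taken from what `yum groupinfo "Development Tools"` might list):

```yaml
# State sketch: install a hand-picked subset of a yum package group.
dev-tools:
  pkg.installed:
    - pkgs:
      - gcc
      - make
      - autoconf
      - automake
```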
22:52 nich0s joined #salt
22:52 Singularo joined #salt
22:54 whiteinge breakingmatter: ok, initial step. might not get us anywhere but it's a start: https://github.com/whiteinge/salt/blob/2e93359/salt/auth/pam.py
22:55 whiteinge breakingmatter: `mkdir -p /srv/modules/auth` put that pam.py file into that folder, then edit your master config and set `extension_modules: /srv/modules`, restart your master and try to auth again
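Condensed, whiteinge's steps are: create the directory, drop the patched pam.py into it, add one line to the master config, restart, and retry the auth. The config fragment (paths exactly as given above):

```yaml
# Master config fragment: the patched pam.py sits in /srv/modules/auth/pam.py,
# and extension_modules points at the parent directory so the master loads it.
extension_modules: /srv/modules
```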
22:55 breakingmatter whiteinge: Oh, that's great. Thank you for that.
22:56 breakingmatter whiteinge: I'm about to head out of the office for the day though. You going to be around tomorrow?
22:56 whiteinge from what I can tell this may be a bear to debug. that module is using PAM's C-interface directly. but we'll try :)
22:56 whiteinge breakingmatter: yeah, i'll be around. feel free to ping me
22:57 breakingmatter whiteinge: That sounds absolutely lovely :) lol
22:57 breakingmatter whiteinge: Alright, well I'll make a note of what you said and try it in the morning and ping you with some results.
22:57 breakingmatter What timezone are you?
22:57 whiteinge MST
22:58 breakingmatter Alright, well I'm EST. So I'll talk to you late tomorrow morning mytime lol
22:58 breakingmatter Thanks for your help thus far
22:58 TonyP joined #salt
22:58 breakingmatter This isn't the first time you've helped me lol
22:58 hal58th joined #salt
22:58 breakingmatter (bug report a week ago or so)
22:58 whiteinge happy to :)
22:59 breakingmatter Enjoy your evening!
22:59 whiteinge thanks for finding weird salt bugs!  ;-)
23:01 pahko joined #salt
23:05 breakingmatter joined #salt
23:05 iggy aurynn: can you link me whatever issues you opened about orchestrate?
23:07 nullptr joined #salt
23:11 cotton joined #salt
23:12 debian112 joined #salt
23:13 Tyrm_ joined #salt
23:15 tokyo_jesus left #salt
23:16 teebes joined #salt
23:17 TheoSLC joined #salt
23:17 adelcast has anyone worked on creating a recipe for the Minion on OpenEmbedded?
23:18 adelcast would be pretty cool to get it integrated into OE, that way we could spit out images with a Minion preconfigured
23:18 aurynn iggy, just have one so far; going to do another about ssh+orchestrate
23:20 iggy aurynn: what #? I think I'm hitting the same thing
23:21 aurynn iggy, https://github.com/saltstack/salt/issues/20615
23:21 aurynn I'm also needing to log the explicit-timeout bug
23:25 iggy I'm getting the sense that nobody at saltstack has tested orchestrate in a long time
23:27 jdowning joined #salt
23:27 aurynn iggy, I'm getting that feeling too
23:27 jcockhren heh
23:28 paha joined #salt
23:28 iggy I've already filed 3 bugs on it in the past month
23:32 dunz0r joined #salt
23:36 murrdoc slacking
23:36 murrdoc we need everyone in the chan doing a bug a week
23:36 murrdoc support your friendly neighborhood salt dev
23:36 jcockhren haha
23:36 iggy I've been busy putting together this saltconf talk
23:37 murrdoc you know what a successful talk needs right ?
23:37 murrdoc cookies
23:38 jcockhren murrdoc: timoguin yelled at me to tell me to fix the docs in my returner, then fixed it. Meanwhile, I fixed docs for a renderer
23:38 murrdoc bring cookies, ain't nobody asking questions
23:38 murrdoc jcockhren:  nice
23:38 jcockhren it works!
23:38 murrdoc paste ?
23:38 jcockhren no. just the yelling then fixing part
23:38 jcockhren can't paste that
23:38 jcockhren haha
23:39 clintberry joined #salt
23:40 murrdoc #sharethelols
23:42 ipmb joined #salt
23:44 pahko joined #salt
23:46 micah_chatt joined #salt
23:50 timoguin joined #salt
23:54 timoguin jcockhren: more like I accidentally yelled at you to fix it when i was telling you i was fixing it. then i fixed it. then i realized it was already fixed upstream. then i deleted branch and went on my merry.
23:54 joehoyle joined #salt
23:55 micah_chatt joined #salt
