
IRC log for #salt, 2013-09-25


All times shown according to UTC.

Time Nick Message
00:00 wilywonka joined #salt
00:02 StDiluted joined #salt
00:03 blast_hardcheese joined #salt
00:03 jacksontj joined #salt
00:14 jacksontj is there a nice way for me to see who all is connected to the salt-master publish socket? If i get the pid of the publisher worker i can do some lsof magic to get it (or psutil) but i'd like to get it within salt if possible
00:18 Ryan_Lane joined #salt
00:23 Gifflen joined #salt
00:25 devinus joined #salt
00:29 [diecast] joined #salt
00:31 pipps1 joined #salt
00:35 oz_akan_ joined #salt
00:35 ktenney joined #salt
00:44 kenbolton joined #salt
00:45 mgw joined #salt
00:45 Gwayne joined #salt
00:53 cro joined #salt
00:56 Ryan_Lane joined #salt
00:57 bhosmer_ joined #salt
01:01 gaoyang joined #salt
01:01 halfss joined #salt
01:04 Ryan_Lane joined #salt
01:04 dthom91 joined #salt
01:10 kenbolton joined #salt
01:10 fxhp I found a hack to make hg mercurial accept an identity file on CLI
01:11 fxhp think I should research integrating with modules/hg.py ?
01:11 higgs001 joined #salt
01:11 fxhp (git also has a hack for identity so I feel like this is ok)
01:13 diegows joined #salt
01:13 fxhp http://pad.yohdah.com/173/6ac9bde6-cb3f-481d-be30-ee62fc16ab52/raw
01:13 sibsibsib_ joined #salt
01:16 kenbolton joined #salt
01:17 dthom91 joined #salt
01:22 kenbolton joined #salt
01:29 Ryan_Lane joined #salt
01:30 jbunting joined #salt
01:35 Nexpro1 joined #salt
01:42 Ryan_Lane joined #salt
01:43 forrest joined #salt
01:44 pipps3 joined #salt
01:45 pipps4 joined #salt
01:48 deepakmd_oc joined #salt
01:49 oz_akan_ joined #salt
01:51 oz_akan_ joined #salt
01:55 KyleG1 joined #salt
01:59 pipps joined #salt
01:59 Ryan_Lane joined #salt
02:01 xl1 joined #salt
02:06 pipps joined #salt
02:11 Ryan_Lane joined #salt
02:11 Ryan_Lane1 joined #salt
02:11 joehh fxhp: neat
02:16 nu7hatch joined #salt
02:16 UtahDave joined #salt
02:17 sssslang joined #salt
02:22 racooper joined #salt
02:24 fxhp joehh - I think I'm going to make a pull request
02:25 fxhp not for identity but to allow opts in states/hg.py
02:26 oz_akan_ joined #salt
02:32 woebtz_ joined #salt
02:32 mwillhite joined #salt
02:32 dthom91 joined #salt
02:34 Lue_4911 joined #salt
02:35 druonysus can pillar be served from gitfs?
02:36 jcockhren druonysus: yes
02:36 UtahDave druonysus: git pillar
02:36 jcockhren true story
02:36 druonysus jcockhren & UtahDave: is that set up the same as setting up pillar for /srv/salt?
02:37 Gifflen joined #salt
02:38 jcockhren druonysus: slight differences. let me gist that for you right fast
02:38 druonysus jcockhren: okay
02:39 jcockhren https://gist.github.com/jcockhren/6600457
02:39 jcockhren druonysus: ^
02:40 jcockhren already had one ready.
02:41 druonysus jcockhren: awesome
02:41 druonysus thank you
02:42 jcockhren np
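
For reference, the git external pillar jcockhren's gist covers is enabled with an ext_pillar entry in the master config, roughly as below; the branch name and repository URL are placeholders, and the repository is expected to carry its own top.sls mapping minions to pillar files, much like /srv/pillar does:

    ext_pillar:
      - git: master https://github.com/example/pillar-repo.git
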
02:46 Jahkeup joined #salt
02:57 saurabhs joined #salt
02:59 jbunting joined #salt
03:02 jpeach joined #salt
03:07 faldridge joined #salt
03:11 berto- joined #salt
03:12 cro joined #salt
03:17 jacksontj joined #salt
03:33 jheise joined #salt
03:36 oz_akan_ joined #salt
03:42 dthom91 joined #salt
04:02 jcockhren s3fs backend
04:02 jcockhren in the master config, is it denoted as "s3" or "s3fs" under the fileserver_backend key?
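
To jcockhren's question: the S3 fileserver is listed under fileserver_backend by its module name, s3fs, and picks up its credentials from s3.keyid and s3.key in the master config. A minimal sketch, with obviously placeholder credentials:

    fileserver_backend:
      - s3fs

    s3.keyid: AKIAIOSFODNN7EXAMPLE
    s3.key: placeholder-secret-access-key
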
04:08 jamescarr joined #salt
04:13 NotreDev joined #salt
04:14 chuffpdx joined #salt
04:21 mnemonikk joined #salt
04:28 Lue_4911 joined #salt
04:30 Boohbah joined #salt
04:33 dthom91 joined #salt
04:52 jcockhren with the s3fs backend, I've been getting a "SignatureDoesNotMatch" response. Any idea on what could be the cause?
04:52 forrest the creds are correct right?
04:52 jcockhren yes
04:53 jcockhren I'm using IAM
04:54 jcockhren Gave this user Full access to S3
04:54 forrest hmm, I don't see that error string in https://github.com/saltstack/salt/blob/develop/salt/fileserver/s3fs.py
04:54 jcockhren salt-cloud works fine with the same user and creds though
04:54 jcockhren it's from the debug output. looks like a response from aws
04:56 forrest well, there's https://forums.aws.amazon.com/thread.jspa?threadID=75517# , and http://docs.aws.amazon.com/general/latest/gr/signature-v4-troubleshooting.html
04:56 forrest the second paragraph on the docs
04:56 forrest does it JUST say signaturedoesnotmatch? There's no additional output from AWS?
04:56 forrest most of these errors seem to output more data.
04:57 jcockhren there's more... but contains my signatures and key. let me sanitize for a gist
04:57 forrest you don't have to, might just be worth googling with that signaturedoesnotmatch error
04:57 forrest because there's a bunch of examples with a ton of different stuff that returns that :\
05:01 jcockhren unless s3fs is broken itself, valid key+id+s3 full permissions should work.
05:02 dthom91 joined #salt
05:02 forrest right, some guys are saying formatting issues, or utf-8 encoding problems and such
05:02 fxhp https://github.com/saltstack/salt/pull/7443
05:02 fxhp woot
05:02 jcockhren yeah. I saw that
05:04 redondos joined #salt
05:06 forrest nice fxhp
05:06 forrest an easy fix too
05:06 capricorn_1 joined #salt
05:09 jcockhren hmmm
05:09 jcockhren even attempted to encode special characters, get the same error
05:10 jcockhren :(
05:11 shinylasers joined #salt
05:11 forrest yea I'm not sure
05:11 forrest If the error isn't providing any details, you're just gonna have to mess with it
05:13 jcockhren maybe we need signing certs?
05:13 forrest You don't seem to need that from the docs
05:14 fxhp forrest - thanks, it should get accepted
05:15 jcockhren other than s3fs, the only way to push to s3 using salt is to place KeyID+key either on all my minions or in pillar
05:16 forrest fxhp, yea it's nothing too intense, additional options are always good, I don't see any unit tests for hg
05:20 Boohbah joined #salt
05:20 Lue_4911 joined #salt
05:21 faldridge joined #salt
05:23 zakm joined #salt
05:30 anuvrat joined #salt
05:41 fxhp forrest - not many unit tests for anything
05:42 forrest there are for the core items
05:42 forrest but yea
05:43 robawt good work fxhp
05:43 fxhp robawt - thanks
05:43 fxhp even though you are a git guy
05:44 forrest don't listen to robawt, he just wants to give out more fake beer
05:44 robawt haha
05:47 jcockhren oh shit
05:47 jcockhren I figured out the issue with the s3fs
05:47 forrest oh?
05:48 jcockhren the message was because the bucket name (created in aws console) wasn't in all lowercase
05:48 forrest lol
05:48 jcockhren yeah. that should be a note in the docs
05:49 forrest for aws?
05:50 forrest oh in salt
05:50 forrest was the bucket name in aws all lowercase>?
05:50 forrest or is it a mix, and the issue lies with salt
05:51 jcockhren yeah. in Aws the bucket was like: MyBackup
05:52 forrest but you had to make it mybackup in salt
05:52 jcockhren in the master salt config it was listed exactly with the correct casing
05:52 jcockhren the thing is..
05:52 jcockhren https://MyBackup.s3.amazonaws.com is the endpoint/bucket location
05:53 jcockhren http though... doesn't respect casing
05:53 forrest right
05:53 forrest but salt didn't even work until you put it in lowercase
05:54 jcockhren so even though salt will request MyBackup.s3.amazonaws.com, aws will complain and say it doesn't know what you're talking about
05:54 forrest lol awesome
05:54 forrest so your bucket looks like:
05:55 forrest s3.buckets:
05:55 forrest production:
05:55 forrest - mybackup
05:55 jcockhren I'm doing multi-env setup. so
05:55 jcockhren s3.buckets:
05:55 jcockhren - mybackup
05:56 forrest gotcha
05:56 jcockhren the key is to make sure it's setup with proper casing in aws console
05:56 forrest well, in the aws console it can be MyBackup
05:56 forrest right?
05:56 forrest that's what you have it set as
05:56 jcockhren yeah
05:56 jcockhren not anymore.
05:57 forrest oh so now it's set to mybackup both in aws and in salt
05:57 jcockhren yes
05:57 forrest that's stupid
05:57 forrest lol
05:58 jcockhren yeah... that was an annoying snag
06:01 forrest Do you feel like this covers that issue? Note that bucket names must be all lowercase, both in the AWS console and in Salt, otherwise you may encounter "SignatureDoesNotMatch" errors.
06:02 forrest if so I'll put in a pull request with that data updated.
06:02 jcockhren yeah.
06:02 forrest cool
06:05 jcockhren so I guess its safe to say that s3fs and the s3 module do not really interact
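
Condensing that exchange, the bucket configuration ends up looking like one of the two forms below (bucket names are placeholders, kept lowercase both here and in the AWS console to avoid the "SignatureDoesNotMatch" response discussed above):

    # flat list of buckets
    s3.buckets:
      - mybackup

    # or, mapped per environment
    s3.buckets:
      production:
        - mybackup
      staging:
        - mybackup-staging
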
06:06 forrest https://github.com/saltstack/salt/pull/7446
06:08 auser joined #salt
06:08 jcockhren thanks
06:09 auser hey all
06:09 fxhp hi
06:09 forrest yea npo
06:09 forrest *np
06:11 forrest alright I'm outta here, have a good one guys.
06:12 nu7hatch joined #salt
06:13 middleman_ joined #salt
06:20 zakm joined #salt
06:21 nu7hatch joined #salt
06:22 lied joined #salt
06:25 nu7hatch joined #salt
06:26 iMil joined #salt
06:27 dthom91 joined #salt
06:28 faldridge joined #salt
06:28 ml_1 joined #salt
06:36 bzf130_mm joined #salt
06:42 puppet joined #salt
06:43 rmt One model for automated deployment that I quite like is to store configuration data and logic in the cloud or on NFS (for DCs) and provide a single URL during bootstrap.. for example, when using Puppet I'd store a script in S3 that contained both node data and a pointer to a versioned tarball of the puppet modules. I'd simply export a snapshot of the released node config to a particular location, and all new nodes would use that.
06:43 nu7hatch joined #salt
06:43 rmt Existing nodes would then have to be kicked.
06:43 rmt (to initiate their updates)
06:45 rmt Is anyone doing similar with Salt?  Basically standalone configuration at boot, but trying to connect to a master as a secondary non-vital task?
06:46 packeteer sounds like something covered by tools kickstart ??
06:46 packeteer *tools like kickstart*
06:47 rmt This assumes you have some way to execute a custom script at bootup, so presupposes something like userdata or kickstart.
06:48 rmt I guess a use-case is:  Making autoscaling of nodes in AWS work when the saltmaster is down. Is there a standard approach to this with Saltstack?
06:50 packeteer sorry, I can't help you there, only my 2nd day with Salt
06:50 rmt ;-)
06:52 falican joined #salt
06:53 packeteer afk for a bit, need to leave library and walk home
06:58 elsmorian joined #salt
07:00 hotbox joined #salt
07:01 deepakmd_oc joined #salt
07:01 redondos joined #salt
07:01 mmilano left #salt
07:03 balboah joined #salt
07:06 jcockhren rmt: let me make sure I understand you
07:06 jcockhren you want a way to run states immediately after bootstrap?
07:06 ronc joined #salt
07:07 jcockhren Oh I see. bootup
07:07 jcockhren there's a couple hurdles:
07:07 jcockhren 1. key acceptance of a minion. though you can preseed those
07:09 jcockhren 2. once up. you can have the minions run their highstate
07:10 jcockhren ideally, you'd leverage both the s3fs for the tarball (even for the custom script if you like)
07:10 jcockhren and pillar (for userdata)
07:11 jcockhren as soon as bootstrapped, any userdata would be available for the new minions
07:12 lied left #salt
07:12 jcockhren you can use grains to classify the minions and such
07:12 jcockhren rmt: I could be misunderstanding you. so feel free to correct my assumptions
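
One way to get the standalone-at-boot behaviour rmt describes is Salt's masterless mode: the bootstrap step drops states somewhere local, points the minion at them, and runs salt-call directly, so the box can configure itself even when the master is unreachable. A rough sketch, with hypothetical paths:

    # /etc/salt/minion
    file_client: local
    file_roots:
      base:
        - /srv/salt

    # the bootstrap script would then run:
    #   salt-call --local state.highstate

The minion can still be pointed at a master afterwards for the secondary, non-vital connection rmt mentions.
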
07:30 carlos joined #salt
07:38 az87c joined #salt
07:40 matanya joined #salt
07:46 Katafalkas joined #salt
07:49 druonysus joined #salt
07:53 auser joined #salt
07:54 nu7hatch joined #salt
07:57 Niichan joined #salt
07:58 nu7hatch joined #salt
08:00 mmilano joined #salt
08:06 linjan_ joined #salt
08:13 mmilano left #salt
08:13 redondos joined #salt
08:29 druonysus joined #salt
08:29 druonysus joined #salt
08:39 qba73 joined #salt
08:45 g4rlic joined #salt
08:56 zooz joined #salt
09:02 ggoZ joined #salt
09:06 dh joined #salt
09:10 packeteer in the base top.sls, is this the correct way to have a match all: https://gist.github.com/packeteer/6696810
09:13 jcockhren packeteer: the '*'? yes
09:13 packeteer i was more concerned with the last line? without it, it errors out
09:14 jcockhren packeteer: oh. that b/c that is an incomplete top file
09:14 jcockhren that dash is supposed to come before a state you specify to run
09:15 jcockhren see the comment I made on your gist
09:15 packeteer how about now? https://gist.github.com/packeteer/6696810
09:16 jcockhren nope. you're missing a state on line 3
09:17 bhosmer_ joined #salt
09:17 packeteer hmm, so i'd be better off making that something like: - all
09:17 packeteer (gist updated)
09:18 jcockhren yeah
09:18 jcockhren that's what the '*' means. it targets all minion
09:18 jcockhren minions
09:18 packeteer yeah i get that, just didn't realise i need to attach a state name to it
09:18 jcockhren if you have nothing you want to target on all minions, then leave lines 2 & 3 out
09:19 packeteer oic
09:19 jcockhren that also implies you actually have a state to apply
09:19 packeteer in the future possibly, but i am testing atm
09:20 jcockhren I can give a small example of a top file and matching states
09:20 packeteer assuming i leave lines 2 and 3, does that also mean i need a all.sls in base?
09:20 packeteer jcockhren: thanks, that would be helpful
09:26 jpcw_ joined #salt
09:26 jcockhren https://gist.github.com/jcockhren/6697195
09:26 jcockhren packeteer: ^
09:26 jcockhren that should git you going
09:26 jcockhren get*
09:27 puppet joined #salt
09:29 jcockhren touches on, targeting with grains, the highstate format/directory structure
09:29 jcockhren (updated gist, there was a minor typo)
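
Reconstructed from the discussion (the actual gist contents are not in the log), a minimal top.sls of the kind jcockhren describes might look like this; the grain-targeted entry is a hypothetical illustration of the grain matching he mentions, and core resolves to core.sls or core/init.sls under the same file root:

    base:
      '*':
        - core
      'roles:webserver':
        - match: grain
        - webserver
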
09:29 jpcw joined #salt
09:30 yota joined #salt
09:40 packeteer and i thought you were making a joke :)
09:41 nocturn haha:  puppet () has joined #salt :-)
09:42 packeteer jcockhren: yeah, that gels with what I understand
09:42 jcockhren packeteer: word
09:42 packeteer to your momma?  :P
09:43 jcockhren yermom
09:43 jcockhren let me know if you have any questions with my approach
09:44 jcockhren s/with/about/
09:45 packeteer i should be ok. it looks to be what i was working towards. cheers again
09:45 packeteer ie. a set of packages and configs that should be applied across the board.
09:48 packeteer form there i can build per host type configs/packages etc
09:49 packeteer form/from/
09:49 packeteer the only question remaining is, should i put those configs in the core.sls?
09:50 jcockhren configs as in configuration files?
09:51 packeteer sort of.. so looking at your core.sls
09:52 packeteer would i put "file.managed" stuff in there?
09:52 jcockhren if you want
09:52 packeteer heh
09:53 emocakes joined #salt
09:53 jcockhren in my env, there's no config stuff that I need applied to every minion
09:53 packeteer oic
09:53 geak joined #salt
09:54 jcockhren if there is for you, either add it to core.sls or add another state under "- core" in the top file
09:54 packeteer doh, of course
09:54 packeteer i can separate into packages and configs easily
09:55 jcockhren looking at the monitoring stateID, I chose the directory form b/c I have newrelic configs I'd like applied
09:55 piffio joined #salt
09:55 jcockhren I turned the newrelic config files into templates filled by jinja
09:55 jcockhren the values are filled in by stuff in pillar
09:56 packeteer future stuff for me. trying to sort out the basics first
09:57 felixhummel joined #salt
09:57 jcockhren source: salt://monitoring/newrelic.list
09:57 jcockhren is the part that may matter to you
09:57 jcockhren salt://"
09:58 halfss joined #salt
09:58 jcockhren is the base directory where your top.sls resides
09:58 packeteer k
09:59 geak_ joined #salt
09:59 jcockhren so configs can be placed and referenced from any state. so the location of file.managed calls doesn't matter
09:59 jcockhren I placed them in monitoring just for nice organization (for myself)
10:01 packeteer yeah, i guess i'm trying to build a long term style / layout for states
10:02 packeteer again, my last job flavours the way i think
10:22 sssslang joined #salt
10:22 halfss joined #salt
10:29 Furao joined #salt
10:31 adepasquale joined #salt
10:32 ronc joined #salt
10:34 ronc joined #salt
10:43 SpX joined #salt
10:59 packeteer hmm, what am I doing wrong with this sls file: https://gist.github.com/packeteer/6698068
11:00 packeteer by my reckoning, it should update the conf file and start/restart the service if needed
11:00 packeteer but the service is not starting
11:05 jcockhren packeteer: add "reload: True" to the service section
11:06 packeteer isn't that the point of running?
11:06 jcockhren in order to achieve what you need, you have to tell the service to "watch" the file for changes
11:07 packeteer the conf file? and then bounce accordingly?
11:07 jcockhren http://docs.saltstack.com/ref/states/all/salt.states.service.html#module-salt.states.service
11:08 jcockhren the 3rd code block from the top
11:08 packeteer ok, thanks
11:08 jcockhren in your case, it'll say: "- file: /etc/ntp.conf"
11:08 jcockhren in the watch section
11:08 packeteer ya
11:09 packeteer so, now i'm not understanding the reason for 'running'
11:09 jcockhren running asks, "Is it running?"
11:09 jcockhren if not, then it starts it
11:10 packeteer yeah, thats what i thought. so i must have buggered the syntax coz it is not starting the service
11:10 packeteer nm, its a typo
11:11 packeteer wait, no
11:11 jcockhren notice in my example
11:11 jcockhren I have the newrelic service "require" the pkg
11:11 packeteer line 17 - 25 ?
11:12 jcockhren lines 22-23 are the most relevant
11:12 packeteer would that install the package if missing?
11:13 jcockhren 17-18 installs the package
11:13 packeteer gah, too many ways to kill the cat  :)
11:13 jcockhren 22-23 says to require the package to be installed before running the service state
11:14 jcockhren requisites help determine order and dependency of the states
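
Pulling jcockhren's suggestions together, packeteer's ntp state would end up roughly like the sketch below (source path hypothetical): the service watches the config file so it bounces on changes, and requires the package so ordering is guaranteed.

    ntp:
      pkg:
        - installed
      service:
        - running
        # optionally "- reload: True" to reload rather than restart on changes
        - watch:
          - file: /etc/ntp.conf
        - require:
          - pkg: ntp

    /etc/ntp.conf:
      file.managed:
        - source: salt://ntp/ntp.conf
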
11:15 matanya joined #salt
11:16 packeteer hmm, so i need to take a step back because i was trying to separate packages and configs. maybe i should separate based on packages that do/dont need config changes
11:20 oz_akan_ joined #salt
11:23 krissaxton joined #salt
11:26 aleszoulek joined #salt
11:30 packeteer afk, time to sleep on it
11:32 canci joined #salt
11:37 diegows joined #salt
11:38 jbunting joined #salt
11:40 matanya joined #salt
11:41 ggoZ joined #salt
11:49 MrTango joined #salt
11:51 mephx joined #salt
11:52 qba73 joined #salt
11:52 gmoro joined #salt
11:59 blee joined #salt
12:00 nu7hatch joined #salt
12:02 krissaxton joined #salt
12:06 mephx joined #salt
12:08 geak joined #salt
12:08 copelco joined #salt
12:18 bhosmer_ joined #salt
12:19 halfss joined #salt
12:19 StDiluted joined #salt
12:21 loque joined #salt
12:21 loque I wonder if anyone can help
12:22 loque my salt returner is now returning jibberish
12:22 loque looks like binary data
12:22 loque on a highstate
12:22 APLU joined #salt
12:22 Furao joined #salt
12:23 loque Got return from host####### for job 20130925132322263353
12:24 loque but the output from the return is
12:26 loque Salt: 0.15.3          Python: 2.6.6 (r266:84292, Feb 22 2013, 00:00:18)          Jinja2: 2.2.1        M2Crypto: 0.20.2  msgpack-python: 0.1.9.final    msgpack-pure: Not Installed        pycrypto: 2.0.1          PyYAML: 3.10           PyZMQ: 2.2.0             ZMQ: 3.2.1
12:32 felixhummel joined #salt
12:32 mwillhite joined #salt
12:32 Martinez joined #salt
12:33 matanya joined #salt
12:38 krissaxton joined #salt
12:41 giantlock joined #salt
12:49 brianhicks joined #salt
12:49 oz_akan_ joined #salt
12:51 kenbolton joined #salt
12:52 oz_akan_ joined #salt
12:52 absolute joined #salt
12:53 TheCodeAssassin joined #salt
12:55 dustinbot joined #salt
12:58 StDiluted joined #salt
13:03 natim joined #salt
13:04 natim Hello guys
13:04 natim I have a github with all my salt and default pillar
13:04 natim But on my overlord (salt-master) I want to be able to override some pillar
13:04 natim How does it work? What is the best way to do that?
13:06 natim What does it mean? http://docs.saltstack.com/ref/configuration/master.html#master-file-server-settings
13:06 natim What will be the order?
13:06 natim How can I override the default configuration?
13:07 juicer2 joined #salt
13:08 Gifflen joined #salt
13:09 racooper joined #salt
13:10 anteaya joined #salt
13:12 davidone joined #salt
13:14 ipmb joined #salt
13:18 redbeard2 joined #salt
13:19 jbunting joined #salt
13:24 oz_akan_ joined #salt
13:27 kenbolton joined #salt
13:29 pakdel joined #salt
13:30 pakdel Hi all
13:30 pakdel Can anyone help me with a state and target matching question?
13:30 krissaxton joined #salt
13:32 Gifflen joined #salt
13:35 imaginarysteve joined #salt
13:39 m_george|away joined #salt
13:40 mapu joined #salt
13:40 pakdel Can anyone help me with a state and target matching question?
13:41 msrivas joined #salt
13:44 toastedpenguin joined #salt
13:45 joehh pakdel: maybe wait an hour or two for the west coast of the us to arrive
13:45 pakdel OK, thanks
13:46 alunduil joined #salt
13:46 pakdel in the meantime, do you know what is used as minion's id?
13:46 pakdel hostname, FQDN or something else?
13:48 piffio pakdel: the default is FQDN
13:48 piffio but you can change it in the minion config file
13:48 piffio to see the actual minion ID, run salt-key -L in your master
13:48 pakdel That's great, thanks a lot
13:48 piffio it'll show all the minions that are "connected" to the master
13:48 pakdel It might even solve my other problem :)
13:49 Brew joined #salt
13:51 pakdel And I suppose it is also accessible as grains['id']
13:51 ingwaem joined #salt
13:51 m_george left #salt
13:53 piffio yes, that should be
13:53 piffio you can also send a command like
13:53 farra joined #salt
13:53 piffio salt \* test.ping
13:53 piffio and see if the minion is answering
13:55 pakdel actually, I am going to use id of some minions in state files that are pushed to some others
13:57 shinylasers joined #salt
13:57 cro1 joined #salt
14:05 Brew joined #salt
14:06 krissaxton joined #salt
14:08 Kraln joined #salt
14:11 blee_ joined #salt
14:13 [diecast] joined #salt
14:13 quantumsummers|c joined #salt
14:13 quantumsummers|c joined #salt
14:18 Furao joined #salt
14:20 devinus joined #salt
14:26 linjan joined #salt
14:30 higgs001 joined #salt
14:39 mohae joined #salt
14:42 pentabular joined #salt
14:42 cnelsonsic joined #salt
14:43 tyler-baker joined #salt
14:44 cnelsonsic joined #salt
14:44 cnelsonsic left #salt
14:46 codysoyland I'm not sure if this is actually a bug, but a new version of salt is now stringifying the arguments to modules called with salt.client.LocalClient().cmd(), which broke my deployment command, as it called a custom salt module with a None argument, which now is being called with the string "None"
14:46 pentabular joined #salt
14:47 codysoyland i wrote a workaround for now
14:54 abe_music joined #salt
14:55 jpeach joined #salt
14:58 Furao really? that will be a lot of non-fun in my states testing framework
14:58 Furao but it work in 0.16.4
14:58 Furao you mean git master branch?
15:00 SunSparc joined #salt
15:02 teskew joined #salt
15:03 Sypher joined #salt
15:04 nineteeneightd joined #salt
15:04 jalbretsen joined #salt
15:08 kvbik joined #salt
15:09 codysoyland it's happening on my nodes running 0.16.3 and 0.16.4
15:10 codysoyland the master is 0.15.1. could that be causing it?
15:11 uta joined #salt
15:12 mwillhite joined #salt
15:13 uta hey guys, has anybody had any problems with a salt-runner blocking regular salt calls?
15:13 Sypher joined #salt
15:16 forrest joined #salt
15:16 balboah for pip installs by command line I have a default setting via environment variables. Is there a way to provide this default setting for the salt pip module as well?
15:17 balboah I have a different index_url to be able to have a local cache
15:22 c0bra Anyone have experience with running highstate locally on a windows machine?
15:24 nu7hatch joined #salt
15:24 elsmorian joined #salt
15:27 ronc joined #salt
15:28 uta c0bra: how do you mean?
15:29 Valda joined #salt
15:29 c0bra uta: I can't figure out where the local top.sls file should go on windows
15:30 c0bra I have put one in every path imaginable but salt-call state.highstate --local says it can't find anything
15:30 uta c0bra: it's defined in the main master configuration file
15:30 c0bra file_roots: /srv/salt right?
15:31 uta c0bra: there's a setting for top.sls location i think
15:32 micah_chatt joined #salt
15:32 tmp__ joined #salt
15:35 c0bra "state_top: top.sls" defines the file name, I think, but not the path
15:36 c0bra and putting in a c:\ path causes parsing errors
15:36 c0bra and surrounding it in quotes causes the minion service to not start
15:36 NotreDev joined #salt
15:38 jamescarr joined #salt
15:38 jamescarr how does salt handle ordering of resources?
15:39 ingwaem c0bra: could be because you have a backslash in your path. Escape it with another backslash and see if that helps.
15:42 c0bra "Error reading C:\\salt\srv\\salt\\top.sls: unknown url type: c"
15:43 c0bra oops, missed a backslash, but adding it in results in the same error
15:44 Ahlee s0undt3ch: You around?
15:45 s0undt3ch Ahlee: yep, what's up?
15:45 Ahlee The ps issue from https://github.com/saltstack/salt/issues/7432 - I'm not following what you requested in the psutil differences.  Were you looking for the differences between versions provided in latest psutil from pypi and what EPEL/etc package?
15:46 kaptk2 joined #salt
15:47 krissaxton joined #salt
15:47 bitz joined #salt
15:47 s0undt3ch Ahlee: nah, the issue was that python 2.6 support for it (within salt) was disabled, and disabled for a reason. We can't just enable it, we need to make sure the previous issues are solved
15:48 s0undt3ch Ahlee: at first I thought some functions were not available, but that's not the case
15:48 s0undt3ch probably because a newer version of psutil solved that previous issue
15:49 kvbik joined #salt
15:49 UtahDave joined #salt
15:50 KyleG joined #salt
15:50 KyleG joined #salt
15:50 Ahlee Right. I was wondering if there was anything else I can do to help
15:51 ingwaem c0bra, that's a new one for me I'm afraid. sorry. UtahDave might know more though :)
15:52 scalability-junk oh god I think salt is slowly getting a bit too flexible... it's hard to find one straightforward way to do something.
15:54 UtahDave ingwaem: what issue is he having?
15:54 krissaxton joined #salt
15:54 scalability-junk for delivering backupserver adresses for example one could use mine.send/mine.get or publish.publish or the reactor system or hostname targeting from the config files or grains or whatever... salt is great in flexibility, but in shareability with others it sucks I think. Sharing a setup with someone else or collaborate is so much harder, when there are 10
15:54 scalability-junk ways of doing something...
15:55 ingwaem UtahDave: Pathing issues with c:\ style paths. "Anyone have experience with running highstate locally on a windows machine?" … state_top: top.sls" defines the file name, I think, but not the path … and c:\path causing parsing errors
15:56 ingwaem UtahDave: I haven't personally hit this one yet :) been too busy with linux and mac up to now
15:57 c0bra same problem I was having last night, UtahDave
15:57 scalability-junk am I the only one thinking this flexbility becomes an issue?
15:57 Ahlee No.
15:57 Ahlee scalability-junk: I and my team at least share your frustrations
15:58 Ahlee but, it's the nature of the beast. Forced workflow avoidance is echoed repeatedly
15:58 micah_chatt joined #salt
15:58 scalability-junk yeah but with no workflow guidance everyone is reinventing the wheel, which cm should prevent.
15:59 scalability-junk in my opinion a best practices guide not just with the formulas, but for more complex stuff is needed.
15:59 dthom91 joined #salt
15:59 Ahlee Would be nice.
15:59 scalability-junk there need to be more and better guidance. if you start with salt or even in a more advanced state it's overwhelming as 10 paths to do something prevent you from doing it right away.
16:00 pdayton joined #salt
16:00 scalability-junk I mean what's the good thing when using publish.publish (distribution), mine.send/.get (centralization), etc...
16:02 scalability-junk UtahDave: ansible is doing a far better job with guiding, I know salt doesn't really want to force workflows, but there is a small path between enforcing and becoming inefficient.
16:04 toastedpenguin joined #salt
16:06 ronc joined #salt
16:10 forrest scalability-junk, in what way do you feel the documentation could be improved to help alleviate that confusion? Just more walkthroughs, or project examples?
16:11 scalability-junk forrest I'm not sure thinking about it right now, but it's not just the feeling that too many ways make things more flexible and complex, there is the feeling that 10 ways tend to get unmaintainable...
16:11 higgs001 joined #salt
16:12 pipps joined #salt
16:12 forrest At the core each of those things serves a specific purpose though, maybe the confusion there is in the way things are explained when you're trying to understand and comprehend your options.
16:13 faldridge joined #salt
16:13 s0undt3ch Ahlee: you can run your minion without the check, if it doesn't fail report that, that will help remove the check :)
16:14 Lue_4911 joined #salt
16:14 forrest s0undt3ch, thanks for getting that documentation merged in for the bootstrap.
16:14 Ahlee s0undt3ch: ok. I'm actually executing the state right now to move the 204 staging hosts from my installed-from-source-on-python2.7 minions to EPEL's release
16:15 Ahlee once that finishes i'll sed the check out and bounce them again
16:15 scalability-junk forrest: than each specific purpose should be made clear.
16:15 forrest scalability-junk, I agree with you, in addition the explanation needs to make sense from the standpoint of someone just starting with Salt
16:15 deepakmd_oc joined #salt
16:16 scalability-junk yeah as I said when I wanna distribute database servers for a config for example, would I do it staticly, pillars, reactor system, publish, mine etc...
16:16 scalability-junk there is no clear path for this answer
16:16 scalability-junk and how would I go about webserver updates and taking them out of the loadbalancer? reactor system, publish, mine?
16:17 forrest I'll email myself to try and take a look at that to see if there's a good way to clear that up. I think it's a combination of 'x can be used in y ways', as well as real world examples showing how it is used.
16:17 scalability-junk these things make using salt a lot harder.
16:18 forrest I understand
16:18 s0undt3ch forrest: sorry it took this long
16:18 scalability-junk forrest: I think a starting point would be one page for each thing for example message distribution and then explain the different pros and cons for distributing messages and examples for it.
16:18 forrest s0undt3ch, no worries man, there are much more important things than a simple documentation update :P
16:18 scalability-junk then another one for file distribution, talking about torrent like distribution, rsync usage, usage of the integrated file server, where the differences are etc.
16:18 ldlework Is there any way to debug state compiling?
16:19 scalability-junk and after these abstract usage pages are there it get's much easier to choose I think
16:20 scalability-junk you are looking to implement messages between servers, you get one page with pros and cons of all available solutions and perhaps even real world examples giving you something to work with.
16:20 forrest that's a good idea
16:21 scalability-junk the thing is for example it's not only messaging it's a lot more. one thing I was like alright how would I go about file transfers... git, rsync, salt file server, torrent, http tar transfers etc.
16:21 scalability-junk I know that's not just about salt, but it helps get things cleared up that for huge files salt file server could be bad and a torrent like solution excels with rsync or salt file server used for further upgrades as only small bits of the data change etc.
16:22 forrest Yea I know what you mean.
16:23 scalability-junk I'm still wrapping my head around the different concepts. for example I'm thinking about git annex usage for file transfer as it's versioned. but 15gb via rsync or git annex is inefficient compared to one http request or a distributed setup, which gives you a lot more bandwidth
16:23 ingwaem Idlework: if you run salt-call state.sls statefilename instead of salt '*' state.sls you should see output in the console for that state. Other than that you can enable debugging on the minion or master to see what's going on. Once jobs are complete, you can also query them…I query the raw api and then build up my own job queue in the database for historical status lookups.
16:23 redondos joined #salt
16:23 ldlework ingwaem: I'm more interested in the compilation of the sls files themselves
16:23 ldlework I can't understand how include works...
16:24 forrest what about include confuses you idlework?
16:24 ldlework let me pastebin some stuff
16:24 forrest think of it like you're basically maing everything in the state you include, available to the state you're working on
16:24 forrest *making
16:24 forrest ok cool
16:25 scalability-junk forrest:  I think I'll write up a mail to the mailinglist with my structured thoughts and ideas to solve it.
16:25 shinylasers joined #salt
16:26 forrest sounds good scalability-junk, there will probably be some good input there. Getting those examples and such going would be a pretty heavy time investment, so maybe there's a good way to do so.
16:26 ldlework forrest: http://hastebin.com/raw/nimifukane
16:27 devinus joined #salt
16:28 ldlework ingwaem: ^
16:28 ingwaem Idlework: thanks for the link. Reading through right now.
16:30 scalability-junk Idlework: {% for users in pillar['users'].iteritems() %} for example
16:31 ldlework scalability-junk: you mean "for users, item" ?
16:31 ingwaem Hmm, reading through the configuration, seems you can use include a file into the config. So reading your paste bin when it gets to the users: state, it's including a file called .dlacewell into the current config.
16:32 ingwaem Reading more
16:33 scalability-junk depends on what you want idlework
16:34 ingwaem Ahh, got it now. Yea, I think scalability-junk is on the lines of what you're looking for. It's a combination of state file, with some syntax included in the state file etc. I've kind of kept away from that myself, using lists I generate from salt and the api to manipulate in my own application so I just use salt where needed.
16:36 ldlework I guess I'm asking what the resulting python data structure is and I guess you're saying "dict"
16:36 CheKoLyN joined #salt
16:36 bhosmer joined #salt
16:39 ingwaem Idlework: the example scalability-junk provided above should do the trick… {% for users in pillar['users'].iteritems() %} … you would then define pillar files with all the users that should be iterated through and salt will do it when the state is ran.
16:40 ldlework See when I do that I'm getting http://hastebin.com/raw/heradocege
16:40 ldlework So I just suspect that the include is creating a dict
16:41 scalability-junk oh I think it should be .items() instead...
16:41 scalability-junk as we do a for loop.
16:42 felixhummel_ joined #salt
16:42 ldlework scalability-junk: that's what I'm doing
16:42 ldlework I think that since the include is not at the top level but under the "users:" id, that it actually makes "users" pillar a list?
16:42 * scalability-junk is confused :P
16:43 * scalability-junk not my day today :P
16:43 ldlework so should I expect include-under-a-state-id to create a list?
16:44 ldlework and a list of what? dicts? or additional lists?
16:44 * ldlework is the one who's confused :)
16:44 krissaxton joined #salt
16:45 ldlework It'd be super awesome if salt had a way to simply render out the intended salt state tree before applying anything to the minions
16:45 ldlework and the pillar tree
16:47 ldlework anyone agree?
16:48 scalability-junk There is something as a test run do you mean that?
16:48 ldlework no
16:48 ldlework I mean, literally just print to stdout the compiled state tree
16:48 scalability-junk ah kk
16:48 ldlework IE, the resulting huge SLS dict
16:48 ldlework rendered back to yaml
16:49 ldlework This way you could see what salt is working with when it goes to apply that to your minions
16:50 Katafalkas joined #salt
16:51 jcockhren ldlework, scalability-junk: pillar.get('users', {}).items()
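
As a concrete illustration of jcockhren's pillar.get suggestion, an SLS that iterates a users pillar could look like the sketch below; the pillar layout (a mapping of usernames to per-user settings) and the user.present arguments are assumptions, not taken from ldlework's actual files:

    {% for name, user in pillar.get('users', {}).items() %}
    {{ name }}:
      user.present:
        - shell: {{ user.get('shell', '/bin/bash') }}
    {% endfor %}
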
16:51 elsmorian joined #salt
16:53 travispaul joined #salt
16:56 scalability-junk idlework I think there was at least a way to render yaml and get the output but no idea where I saw that :(
16:56 scalability-junk jcockhren: thanks that is what I wanted
16:56 Khazix joined #salt
16:57 travispaul i didn't see a VirtualBox plugin, does anyone know of one? Just wanted to double check one didn't exist before I looked too much into rolling my own.
16:58 jcockhren travispaul: salt is a native provisioner for vagrant now
16:58 jcockhren vagrant can create virtualbox vms
16:59 nu7hatch joined #salt
17:00 travispaul Ah ok, (had to google vagrant) I'll look into using then. Thanks
17:00 ldlework jcockhren: anyway to get salt to print out the final salt/pillar tree?
17:01 jcockhren ldlework: after applied to a state? or stand alone before deployed to minion?
17:02 jcockhren I know of now way to "test" a pillar without using in a state
17:02 jcockhren s/now/no
17:06 Ryan_Lane joined #salt
17:06 Ryan_Lane joined #salt
17:07 Brew joined #salt
17:07 ldlework jcockhren: the latter
17:07 UtahDave ldlework: salt \* pillar.items
17:08 anuvrat joined #salt
17:10 higgs001 joined #salt
17:10 jcockhren UtahDave: when using the cmd.run state, can the "unless" and "onlyif" arguments be the result of an executed module or state?
17:12 craig_ joined #salt
17:12 jcockhren UtahDave: on a related note, are saltstack T-shirts a thing yet?
17:13 UtahDave jcockhren: it might work. it runs a cli command
17:13 UtahDave jcockhren: Yeah, we do have SaltStack t-shirts!
17:13 jcockhren UtahDave: oh!!
17:14 rysch joined #salt
17:15 jcockhren UtahDave: soooo... I have an opportunity to give a talk at this Python conf coming up. Not THE Pyconf, but PyTennessee
17:15 jcockhren I'd love to be wearing a saltstack t-shirt while talking about salt
17:15 UtahDave jcockhren: Oh, yeah?  very cool.  I saw an ad about pytennessee
17:15 jcockhren ;)
17:15 UtahDave jcockhren: pm me your address and info!
17:18 travispaul I took a look at vagrant, it doesn't offer me anything other than ruby depends and another layer of config. I just need to stop, start, and restore snapshots of VMs so I think I'm going to roll my own VBox plugin, thanks for the info though.
17:19 Ahlee if you set cmd.run to watch a file - the cmd.run should only execute on file changes, right?
17:19 mwillhite joined #salt
17:20 UtahDave travispaul: someone recently announced a project they did called salty sandbox or something like that on the mailing list
17:20 dthom91 joined #salt
17:20 travispaul UtahDave: I'll see if I can't find it...
17:21 UtahDave travispaul: if you don't find it I think I can track it down for you.
17:22 ldlework UtahDave: Thanks!!!!!!!!
17:23 mgw joined #salt
17:23 ldlework UtahDave: hmm that doesn't show the final result
17:23 ldlework IE, it still shows include and excludes
17:25 travispaul UtahDave: I found mentions of salty-vagrant and saltstack-sandbox. Is the project you are talking about manage VBox VMs without vagrant?
17:25 lemao joined #salt
17:26 pentabular joined #salt
17:27 UtahDave travispaul: Yeah, I thought it was saltstack-sandbox.
17:27 UtahDave does saltstack-sandbox require vagrant?
17:28 travispaul UtahDave: Yeeah, it's vagrant based
17:28 [diecast] will salt 'hostname' state.sls pkg.installed nginx - return a true/false based on the installed state of nginx ?
17:28 berto- joined #salt
17:28 [diecast] i was looking for a quick way to determine if a package is installed on the cli
17:29 UtahDave travispaul: ah, sorry. you're right.
17:29 travispaul UtahDave: No worries, I appreciate the help
17:29 Katafalkas joined #salt
17:30 Thiggy joined #salt
17:31 pdayton joined #salt
17:32 pentabular1 joined #salt
17:32 racooper [diecast],  I don't think that will do what you want.  look at the pkg.version command perhaps?
17:33 [diecast] racooper yes, thank you. i just found pkg.list_upgrades
17:33 [diecast] i think this will do what i need ultimately
17:37 scalability-junk forrest: alright hope I got it a bit clearer with the mail :)
17:37 forrest cool
17:38 forrest I'll check it out when I get home, having to work at work, psssh
17:39 scalability-junk hehe good that I work at home :)
17:42 zakm joined #salt
17:43 devinus joined #salt
17:44 c0bra joined #salt
17:45 korylprince joined #salt
17:48 zakm joined #salt
17:51 UtahDave travispaul: Let me know if you get anything working
17:53 StDiluted joined #salt
17:53 jmlowe joined #salt
17:55 jmlowe I think I have a bug in file.find or client.cmd, it seems that calling file.find with a path of '/' doesn't work but it does with the salt cli, if I specify a directory under /, /etc for example it works as expected
17:55 jmlowe Is there something I'm missing?
17:56 jmlowe >>> client.cmd('gw31.*','file.find', ['/','type=f','name=tomcat-users.xml'])
17:56 jmlowe {}
17:56 jmlowe >>> client.cmd('gw31.*','file.find', ['/etc','type=f','name=tomcat-users.xml'])
17:56 jmlowe {'gw31.quarry.iu.teragrid.org': ['/etc/tomcat5/tomcat-users.xml']}
17:59 UtahDave jmlowe: do you see any errors or stacktraces on the minion when you use '/'  ?
18:00 druonysuse joined #salt
18:01 druonysuse joined #salt
18:02 jmlowe UtahDave: nothing in /var/log/salt/minion on the client
18:03 micah_chatt joined #salt
18:03 UtahDave jmlowe: have you tried running the same command on the cli from the master?
18:05 imaginarysteve joined #salt
18:08 jmlowe UtahDave: yes, it works
18:08 Thiggy I ended up with 2 diff versions of salt installed simultaneously in /usr/lib/python2.7/dist-packages/salt/  &  /usr/lib/pymodules/python2.7/salt/ and I'm not entirely sure how. I'm on ubuntu 12.04. Ideas?
18:08 jmlowe UtahDave: takes forever, like you would expect, api returns way too quickly to actually have done the find
18:09 Thiggy and of course the sys.path load order makes sure I load the older one instead of the newer one first.
18:10 Thiggy leading to all kinds of hijinks
18:15 mwillhite joined #salt
18:16 jmlowe UtahDave: so this returns nothing client.cmd('gw61.*','file.find', ['/','type=f','name=tomcat-users.xml'])
18:16 jmlowe UtahDave: this returns tons of stuff salt 'gw61*' file.find / type=f name=tomcat-users.xml
18:16 nineteeneightd joined #salt
18:18 jmlowe UtahDave: I may have this narrowed down a bit, it's looking more like the api won't recurse
18:20 _ilbot joined #salt
18:20 Topic for #salt is now Welcome to #salt - http://saltstack.org | 0.16.4 is the latest | Please be patient when asking questions as we are volunteers and may not have immediate answers - Channel logs are available at http://irclog.perlgeek.de/salt/
18:21 jmlowe All 3 should have returned the same thing no?
18:21 devinus joined #salt
18:22 jmlowe again, the cli works as expected
18:22 UtahDave hm. I'm not sure. It makes sense that it would.
18:25 mephx joined #salt
18:25 shinylasers joined #salt
18:31 jmlowe UtahDave: figured it out, it wasn't recursing because the directories aren't files so the criterion isn't met and they are skipped
18:31 rgbkrk joined #salt
18:31 shinylasers joined #salt
18:34 jmlowe I'll have to check but I don't think that's how gnu find works, if you say show only files it will recurse down subdirectories but won't show matching directory names in the output
18:36 TheCodeAssassin joined #salt
18:37 jmlowe yep, gnu find doesn't throw out subdirectories and instead walks them unlike salt.utils.find.Finder when type=f
18:40 morty_ joined #salt
18:47 giantlock_ joined #salt
18:53 mmilano joined #salt
18:53 anuvrat joined #salt
18:55 vimalloc joined #salt
18:56 polaco joined #salt
18:56 StDiluted What would be the best way to check for the presence of a particular grain in a jinja template?
18:56 StDiluted I want to check if a machine is in a VPC, and my vpc instances have a ec2_vpc-id grain, and the ones that aren't don't have that
18:57 StDiluted so I want one line in the template if it is, and a different line if it is not
18:58 vimalloc Is it possible to do 'wildcard' includes in an sls? I have :salt/a/b/init.sls which wants to include :salt/a/b/c/*.sls. I have tried putting this on the include '- a.b.c.*', '- a.b.c*', and '.c.*' but am getting the error "Specified SLS in environment base is not available on the salt master"
18:58 vimalloc It works if I do a.b.c.whatever.sls
18:59 vimalloc s/\.sls//
18:59 shinylasers joined #salt
18:59 jslatts joined #salt
19:00 Drawsmcgraw joined #salt
19:01 mapu joined #salt
19:01 jamescarr joined #salt
19:02 StDiluted anyone?
19:02 Drawsmcgraw StDiluted: check if that grain == ''  ?
19:03 StDiluted the grain wont exist at all if it's not a vpc instance
19:03 Drawsmcgraw ah
19:03 Drawsmcgraw ...maybe for a different approach, use a NodeGroup?
19:03 vimalloc grains.get()
19:03 StDiluted I think maybe I'm looking for if grains['ec2_vpc-id'] is defined
19:03 travispaul joined #salt
19:03 vimalloc Can set a default value if the grain doesn't exist, and check if it equals that
19:04 Drawsmcgraw Also, UtahDave (and all the other SaltStack guys) congrats on v 17! I owe you guys a beer if I ever run into you.
19:04 Drawsmcgraw StDiluted: right... Can you just say --> {% if grains['ec2_vpc-id'] %}    ?
19:05 Drawsmcgraw Or would that not render properly?
19:05 StDiluted not sure if that syntax works in jinja
19:05 StDiluted rusty jinja
19:05 pipps joined #salt
19:05 vimalloc {% if grains.get('ec2_vpc-id', 'default') == 'default' ... %}
19:06 vimalloc is one way. had to do something similiar on our system
19:06 StDiluted ah, that would work nicely, and wouldnt throw an exception if it was not defined
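
Spelled out, the check discussed above combines Drawsmcgraw's truthiness test with vimalloc's grains.get (so a missing grain doesn't raise); the rendered lines here are placeholders for whatever the real template needs:

    {% if grains.get('ec2_vpc-id') %}
    # VPC instance: {{ grains['ec2_vpc-id'] }}
    {% else %}
    # EC2-classic instance
    {% endif %}
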
19:06 jstrunk joined #salt
19:08 travispaul How can I run a command on a minion as a different user (non root)? Is this possible?
19:10 travispaul It looks to me like the module must support changing user (looking at the cron module)? Is that correct assumption?
19:13 Furao joined #salt
19:17 jcockhren travispaul: you mean with cmdmod?
19:17 qba73 joined #salt
19:17 jcockhren there's the user argument
19:17 dthom91 joined #salt
19:18 jcockhren clearly... not. I can't read
19:19 jcockhren my bad
19:19 jcockhren http://docs.saltstack.com/ref/modules/all/salt.modules.cron.html#salt.modules.cron.set_job
19:19 jcockhren travispaul: ^
19:19 jcockhren I'm having a hard day it seems.
19:19 jcockhren (low in sleep)
19:19 jcockhren on*
19:20 travispaul jcockhren: I'm writing a module for VirtualBox, using the VirtualBox python API so far so good, except the minion runs as root and I typically run VBox as a normal user. I'm just curious if there is some universal mechanism in salt to change user, otherwise I suppose I could just use os.setuid() ?
19:20 StDiluted runas?
19:22 travispaul StDiluted: how would I use runas?
19:23 jmlowe UtahDave: so, this is insane http://pastebin.com/zQYt09a5
19:23 jcockhren travispaul: https://github.com/saltstack/salt/blob/ba9a89d685e095a5e199ef6bf51168462bb29f0b/salt/modules/cmdmod.py#L92
19:24 micah_chatt joined #salt
19:24 travispaul jcockhren: awesome! That's exactly what I was considering doing, just wanted to make sure I didn't bypass something already available
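
For the simpler case of just running a command as another user (rather than going through the VirtualBox Python API in-process), the cmd.run state takes a user argument, roughly as below; the command and username are placeholders. Inside a custom execution module, the equivalent is to shell out via the cmd module with the same kind of user/runas argument rather than calling os.setuid() directly.

    list-vbox-vms:
      cmd.run:
        - name: VBoxManage list vms
        - user: vboxadmin
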
19:25 travispaul FWIW kudos to the salt authors, setting up a minion, master and putting together a POC plugin took all of 30 minutes, couldn't be any easier!
19:26 kleinishere joined #salt
19:30 lempa joined #salt
19:30 xt Page closed!
19:31 UtahDave jmlowe: sorry, just got back from eating lunch.
19:31 UtahDave jmlowe: so if you run it multiple times, sometimes it works?
19:33 jmlowe yeah, I had to question my sanity there
19:34 seanz joined #salt
19:34 nu7hatch joined #salt
19:35 jmlowe UtahDave: is the default timeout=None not being honored or does it not work like I expect?  Is it timing out and returning nothing, but once in a while all the right bits are in cache and it returns something in time?
19:36 martoss joined #salt
19:37 darrend joined #salt
19:37 martoss hey folks, I would like to set /etc/hosts for a bunch of hosts, ideally set up via salt cloud.
19:38 martoss How do I do this in a state? I would need some all2all communication for this.
19:40 scalability-junk martoss: depends on what you want to achieve I would say. If you want to communicate each hosts dns to all other minions then you should take a look at the publish module in salt
19:40 jmlowe UtahDave: ah, so timeout=None doesn't mean wait forever
19:41 scalability-junk if you want to let everyone push their info to the master you could use mine.send and then retrieve a compiled list with mine.get perhaps
19:41 felixhummel joined #salt
19:42 UtahDave jmlowe: I don't think so. It's more the timeout for the master waiting for the minion to respond if it's still working
19:42 martoss scalability-junk: ah ok I wasn't aware of salt mine :-)
19:42 UtahDave jmlowe: you might bump up the timeout a few seconds.  I think it defaults to 5 seconds by default
19:42 martoss scalability-junk: I'll have a look into it.
19:42 scalability-junk martoss: hope it does what you want
19:43 jmlowe UtahDave: set sufficiently high it behaves more like I expect
19:43 nu7hatch joined #salt
19:44 devinus joined #salt
19:44 UtahDave jmlowe: how high?
19:45 jmlowe 600
19:46 higgs001 joined #salt
19:46 nu7hatch joined #salt
19:47 ccase joined #salt
19:49 dthom91 joined #salt
19:49 martoss scalability-junk: yeah looks interesting. I could use this to read out the grain id and the network interface address from all hosts and use those to construct a hosts file that is sent to all minions.
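
A sketch of the mine-based approach being discussed: each minion publishes its addresses to the mine, and the template that builds /etc/hosts pulls them back with mine.get. The mine function chosen and the "first address wins" convention are assumptions for illustration.

    # minion config (or pillar), so every minion reports its addresses
    mine_functions:
      network.ip_addrs: []

and in the jinja template rendered into /etc/hosts:

    {% for host, addrs in salt['mine.get']('*', 'network.ip_addrs').items() %}
    {{ addrs[0] }} {{ host }}
    {% endfor %}
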
19:50 scalability-junk yeah if you want to make it more distributed you could use publish. It doesn't use the master afaik, but harder to compile the hosts file.
19:50 Thiggy Are previous versions of the .deb packages from the PPA hosted anywhere?
19:52 hjubal joined #salt
19:52 seanz left #salt
19:55 scalability-junk martoss: so mine.send/get could work, but if you want it to be responsive and dynamic perhaps the reactor system is better.
19:55 JasonSwindle joined #salt
19:56 scalability-junk when a new node comes up an event on the master is triggered to recompile the hosts file and then wait for all minions to run their update/get hosts file command or trigger all related minions to update the file ;)
19:56 * scalability-junk really thinks salt's flexibility is confusing...
20:00 pipps1 joined #salt
20:04 saurabhs joined #salt
20:08 jcockhren scalability-junk: true story
20:11 martoss ok, that sounds also interesting, this would also allow dynamic updating, e.g. of loadbalancers etc. nice!
20:19 devinus joined #salt
20:24 scalability-junk martoss: yeah the most dynamic is the reactor system. but it's also harder to implement.
20:25 scalability-junk but I would probably go the reactor route, as it is better for loadbalancers etc.
20:27 martoss all right thx for the hints :-)
20:29 scalability-junk no worries
20:36 alexandrel mmm if I do a "require_in: -pkg: some_package" in a file.managed entry will it reinstall the package if the file changes?
20:38 jpeach joined #salt
20:41 alexandrel erm... "watch_in" not "require_in"
20:44 pipps joined #salt
20:46 scalability-junk alexandrel: afaik no it will only update or install if it's not already available...
20:49 scristian_ joined #salt
20:54 forrest alexandrel, for that sort of scenario you'd probably want to write a unique state
20:54 forrest and then use pkg.latest
20:55 forrest that plus some logic to run include that state only if certain conditions are met MIGHT be what you want to do
21:00 raghavp80 joined #salt
21:06 Thiggy joined #salt
21:07 rgbkrk joined #salt
21:09 devinus joined #salt
21:09 JasonSwindle joined #salt
21:13 cjh does salt support sticky bits?
21:15 UtahDave of course!
21:16 UtahDave :)
21:16 cjh haha
21:17 cjh i'm not well versed in how to do sticky's with octals.  i usually do chmod +s somefile
21:17 forrest 1*** cjh
21:18 mgw cjh: like '1755'
21:18 UtahDave forrest: watch your mouth.   :)
21:18 cjh i see
21:18 forrest lol
21:18 forrest the * represents the other values :P
21:18 cjh so rwxr-sr-x would be 1755 ?
21:19 forrest well, rwxr-xr-xt
21:19 mgw I think there's a note somewhere that you must use quotes if you provide the sticky bit, but it's off
21:19 cjh oh ok
21:19 forrest you only need quotes if it's a leading 0
21:20 mgw right, so that it doesn't get coerced to an int I think
21:20 forrest exactly
21:20 forrest converted to an octal
21:21 imaginarysteve joined #salt
21:22 cjh forrest: docs online seem to say that my rwxr-sr-x would equal 2755 ?
21:22 cjh 2 is set group id on execution
21:22 forrest oh is that r-s supposed to be s?
21:22 forrest sorry
21:22 forrest I thought you mistyped
21:22 cjh yeah that's how ubuntu is displaying it
21:22 cjh yes s
21:23 toastedpenguin left #salt
21:23 forrest yea 2 would be the correct setting then, my apologies
21:23 cjh awesome
21:23 cjh i'm trying to setup sendmail and it has unusual perms i guess
21:23 forrest yea, salt makes it pretty easy
21:24 berto- joined #salt
21:24 cjh yeah it does.  i'll push the mc files and then do a cmd.run to turn them into cf files
21:24 forrest mgw, I'll try to get an update to the file states page tonight that shows a variety of setuid and such.
21:24 cjh thanks forrest :)
21:24 forrest because it would be nice to have those since the only example I wrote for one has a leading zero
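
In state form, the modes discussed above come out like this (paths are illustrative): the extra leading digit carries the setuid/setgid/sticky bits, and a mode with a leading zero gets quoted so YAML doesn't reinterpret it as an octal integer.

    /usr/sbin/sendmail:
      file.managed:
        - mode: 2755        # rwxr-sr-x, setgid as cjh describes

    /etc/motd:
      file.managed:
        - mode: '0644'      # quoted because of the leading zero
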
21:24 forrest cjh, np.
21:25 cjh my next task is getting salt bootstrapped after ubuntu maas sets up a box
21:25 forrest are you using the bootstrap script?
21:26 cjh i'd like to yeah.  just need to figure out how to edit the preseed file
21:26 forrest yea, it's in the plans to eventually make an rpm/deb package for it to make that easier
21:27 forrest but I got distracted last week when I was gonna work on it :\
21:27 cjh that'd be nice
21:27 cjh then i could just say install x, go :)
21:27 forrest yep
21:27 forrest the RPM would be pretty simple, just installs the bootstrap script, which you could then call
21:27 forrest or deb
21:27 forrest whatever.
21:28 cjh right
21:29 mgw cjh: can't you just do it in the late_command?
21:29 cjh late_command?
21:29 cjh oh
21:29 mgw You're working in ubuntu preseed?
21:29 cjh mgw: yes i think that's the right place to put it
21:30 cjh can you call bash directly from that?
21:30 scristian joined #salt
21:30 mgw d-i preseed/late_command string  in-target wget http://bootsrap.salt.com/bootstrap.sh && bash bootstrap.sh
21:30 mgw I believe so
21:31 mgw bad domain, remembered it wrong
21:31 mgw but you get the idea
21:31 seanz joined #salt
21:31 mgw It's bootstrap.saltstack.org
21:31 mgw lol, too bad saltstack doesn't have salt.com
21:31 cjh someone snagged that?
21:33 cjh btw if anyone asks about multiple salt-call's backing up here was my hack: @hourly pgrep salt-call || salt-call state.highstate > /var/log/salt/highstate.log
21:33 cjh only call it if it isn't already running
21:33 forrest why were they backing up?
21:33 cjh and init script hung
21:33 cjh and i logged on one day to find like 50 salt-call's running haha
21:33 forrest ahh ok
21:34 cjh it may have been something else that hung but you get the idea
21:34 forrest yea
21:34 cjh good thing i didn't set salt to run */15
21:34 cjh it'd prob kill the box
21:35 abe_music joined #salt
21:38 cjh mgw: think this would fly? http://paste.fedoraproject.org/42291/80145109/
21:39 cjh that's the end of my preseed file that came from ubuntu
21:43 kermit joined #salt
21:44 forrest hey s0undt3ch, are you guys vetting the RPMs when they are built now?
21:44 nu7hatch joined #salt
21:44 s0undt3ch forrest: vetting?
21:44 forrest testing
21:44 jcockhren s0undt3ch: getting them spayed
21:45 s0undt3ch running the unittests on built time?
21:45 mwillhite joined #salt
21:45 forrest Yea
21:45 forrest UtahDave was saying that he was working on getting all the tests to pass when building zeromq
21:45 s0undt3ch at least the unit tests are executed, integration tests are not because of an issue with the sandbox
21:45 woebtz joined #salt
21:46 forrest ok, am I just overlooking where the RPM tests are in the repo?
21:46 rgbkrk joined #salt
21:46 s0undt3ch forrest: were you talking about salt or zeromq rpms?
21:46 forrest zeromq
21:46 forrest I'm gonna look at building the bootstrap RPMs at some point, wasn't sure what the standards were.
21:46 mgw cjh: i think it probably would
21:47 s0undt3ch forrest: don't know what's being done with the zeromq rpm
21:47 UtahDave we've got the zmq and pyzmq rpms built for cent5, but still don't have them posted online yet.
21:47 cjh mgw: ok cool.  we'll find out when i boot a machine up later this week
21:47 s0undt3ch UtahDave: Nice!!!
21:47 forrest I was more curious about the tests UtahDave
21:48 mgw cjh: that's the annoying thing about late_commands — you just don't know for sure until you try them
21:48 juanlittledevil joined #salt
21:48 mgw and the wash-rinse-repeat cycle is a bit slow
21:48 cjh yeah exactly
21:48 cjh kind of a pain
21:50 sixninetynine joined #salt
21:51 mgw cjh: one thing: I'm not sure you can have two late_commands
21:51 juanlittledevil Hi guys, I've got a quick question for ya. I've written a post-receive git hook to automatically update the salt repo on a testing salt-master. The script is a simple python script that uses GitPython to check out and pull a certain branch etc. Other than that, is there a way that I can import some salt modules to call highstate the pythonic way as opposed to doing a subprocess to call 'salt '*' highstate'?
21:57 cjh mgw: ok i'll combine them into one
21:57 cjh i was wondering about that
21:57 dthom91 joined #salt
21:58 ipmb joined #salt
21:59 NotreDev joined #salt
21:59 mgw cjh: #ubuntu-server is fairly active — you could check there before going through a real cycle
21:59 cjh good point.  alright i'll hit them up
21:59 a1j unclear in documentation: does file.absent actually delete the file or just check if the file is absent?
21:59 jcockhren juanlittledevil: if I understand you correctly, you're updating a git repo on a salt-master on push?
22:00 a1j if it does - is there any way to negate file.exists?
22:00 jcockhren juanlittledevil: firstly, you'd want to use gitfs. salt can use a remote git repo as a fileserver backend
22:19 pipps1 joined #salt
22:21 piffio joined #salt
22:22 felixhummel joined #salt
22:22 jeff-ck joined #salt
22:26 jalbretsen UtahDave:  Have people had good success running Salt on Mac?
22:27 JasonSwindle joined #salt
22:28 SEJeff_work joined #salt
22:29 UtahDave jalbretsen: yeah. Lots of people run on Mac.
22:29 rgbkrk joined #salt
22:29 jalbretsen cool, looking forward to it.  Guess who is now the "Mac support" guy
22:33 oz_akan_ joined #salt
22:36 alunduil joined #salt
22:37 andrej A while back I asked about user management, and was wondering how people go about making sure that a machine is in a well-defined state.  Is there a way to make sure that a) system accounts are left alone and b) only people present in a state/pillar are present on a machine?  So if joe was on machine XYZ before I made it a minion, could I make sure salt culls him without me having to make him user.absent?
22:37 andrej If this isn't available atm: what would be the best way of implementing it
22:37 andrej and if I did - how would I share that with the rest of the world?
22:38 andrej ls -ltr
22:38 UtahDave jalbretsen: lol
22:38 andrej Ooops ... sorry, focus :)
22:41 emocakes joined #salt
22:43 dthom91 joined #salt
22:45 troyready joined #salt
22:48 pipps joined #salt
23:16 bhosmer joined #salt
23:23 rgbkrk joined #salt
23:33 dthom91 joined #salt
23:33 mesmer joined #salt
23:41 dthom91 joined #salt
23:45 fragamus joined #salt
23:55 halfss joined #salt
