
IRC log for #salt, 2014-10-17


All times shown according to UTC.

Time Nick Message
00:08 jhulten joined #salt
00:08 holler_ joined #salt
00:09 holler_ hello, I have a masterless minion setup where I provision a local dev VM with vagrant + salt... now I want to expand this to have the ability to provision a cloud ubuntu install (maybe using fabric to initiate for now).. what are my next steps?
00:09 holler_ my setup is similar to https://github.com/marselester/abstract-internal-messaging-deploy
00:14 whitenite joined #salt
00:14 zekoZeko joined #salt
00:17 bhosmer joined #salt
00:18 anotherZero joined #salt
00:18 Emantor joined #salt
00:19 __number5__ holler_: you can try using packer with the salt masterless provisioner to build an AWS AMI
00:19 UtahDave joined #salt
00:19 ndovu joined #salt
00:20 ndovu I have a question on the sqlite returner. I can get it working when the DB is local on the minion. Is it impossible to use a central sqlite DB on the master?
00:20 holler_ __number5__: does packer work with rackspace?
00:21 __number5__ holler_: yes
00:21 holler_ what is packer? is it similar to vagrant but for cloud instances?
00:22 repl1cant joined #salt
00:22 __number5__ http://www.packer.io from the same guy who built Vagrant, it's for building cloud images like AMIs
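To make the suggestion concrete, a minimal Packer template using the salt-masterless provisioner might look like the sketch below. The builder settings and the `ami-xxxxxxxx` placeholder are illustrative, not from the discussion; only the provisioner stanza reflects what __number5__ is describing.

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "m1.small",
      "ssh_username": "ubuntu",
      "ami_name": "salt-masterless-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "salt-masterless",
      "local_state_tree": "/srv/salt"
    }
  ]
}
```

Running `packer build template.json` would then bake an AMI with the local state tree applied.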
00:23 holler_ interesting
00:24 holler_ I have another question.. we are a small team with no devops, so I'm just trying to learn more on the subject, but in our use case would it make more sense to just spin up new instances from a machine image via the load balancer instead of controlling a bunch of minions?
00:25 holler_ our PM said we can just spin up new images of the dev server as needed
00:25 whitenite joined #salt
00:27 manfred holler_:  it is really better to configure it once in salt, and then any time you make a new one, it gets created
00:27 manfred it is better than having to make a new image each time you make even the smallest change
00:28 scoates bummer https://github.com/saltstack/salt/issues/16412#issuecomment-59451108
00:28 __number5__ holler_: it depends on how often you deploy to the production
00:29 Alan_S joined #salt
00:29 __number5__ scoates: oh no! that's the most important bug fix I'm waiting for
00:30 murrdoc scoates: that sucks
00:30 jgelens joined #salt
00:30 jslatts joined #salt
00:31 holler_ __number5__: can I have all of my files in the same repo so I can have 1 deploy repo that can be used with vagrant for local dev or packer for rackspace? since it's a masterless minion I don't think I can set the gitfs backend and use different branches
00:35 ndovu anyone familiar with the sqlite returner?
00:35 Singularo joined #salt
00:36 ndovu with mysql I am able to get the returns going to a mysql DB on the master. Would I be able to do the same (i.e. central DB) using sqlite?
00:36 __number5__ holler_: sounds like you might want to try vagrant-aws plugin
00:37 holler_ __number5__: hm interesting.. we use rackspace right now but I found this https://github.com/cloudbau/vagrant-openstack-plugin
00:37 holler_ would that work
00:37 holler_ ?
00:37 jsm joined #salt
00:38 manfred there is also a vagrant-rackspace plugin
00:38 UtahDave ndovu: you should be able to do that with the master job cache that's coming in 2014.7
00:38 manfred holler_:  https://github.com/mitchellh/vagrant-rackspace
00:38 holler_ manfred: thanks!
00:38 __number5__ holler_: I don't have experience with vagrant-aws or rackspace plugin since we use packer, sorry
00:39 manfred vagrant rackspace is solid
00:39 manfred i use it from time to time
00:39 holler_ why would I use packer vs vagrant-rackspace?
00:39 ndovu UtahDave, so it cannot work as of now? Thanks
00:39 __number5__ holler_: only if you want to build an image
00:40 UtahDave ndovu: I don't think it would work right now because the external job cache requires that the minions return directly to the desired database, which they can't do when the sqlite db is a file on the master
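The master job cache UtahDave mentions is a master-side config option in 2014.7. A sketch of what enabling it for sqlite might look like is below; the `sqlite3.*` option names and paths are assumptions, so verify them against the sqlite3 returner documentation for your release.

```yaml
# /etc/salt/master -- 2014.7+ master job cache sketch (not 2014.1).
# Option names below are assumed from the sqlite3 returner's settings.
master_job_cache: sqlite3
sqlite3.database: /var/cache/salt/master/jobs.db
sqlite3.timeout: 5.0
```

Because the cache is written by the master itself, the sqlite file stays local to the master, sidestepping the problem of minions needing direct access to the database.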
00:40 manfred *disclaimer, if you have managed cloud at rackspace, uploaded images cannot be marked as managed
00:40 manfred we are working on it
00:41 holler_ manfred: do you work at rackspace?
00:42 manfred yes
00:43 Emantor joined #salt
00:46 kusams joined #salt
00:48 cb joined #salt
00:50 scoates does a bootstrapped salt run in a virtualenv?
00:51 __number5__ scoates: by default it's system-wide installed
00:51 scoates # pip freeze | grep requests
00:51 scoates requests==0.12.1
00:52 scoates guess that's the problem
00:53 zekoZeko joined #salt
00:53 scoates pip install --upgrade requests gives me 2.4.3 ; trying that
00:53 thayne joined #salt
00:54 scoates silly me thought I was done building base VM images for the day (-;
00:54 murrdoc hah
00:54 murrdoc what you know about packer
00:54 murrdoc scoates :)
00:55 cb joined #salt
00:55 scoates that fixed my problem FWIW
00:56 ndovu UtahDave, thanks
00:57 murrdoc scoates:  totally srs btw, https://github.com/mitchellh/packer is recommended for managing base vms
00:57 murrdoc wish there was something like that for hardware
00:57 murrdoc sigh
00:57 __number5__ murrdoc: virtualize all your hardware :P
00:57 murrdoc hah
00:57 murrdoc someones gotta run the hardware
00:57 scoates murrdoc: *nod* I use a base packer image and bootstrap salt onto it, zero out the disks, then vagrant package it. will add a pip upgrade in there
00:58 ramishra joined #salt
00:58 murrdoc that the virtualizations need to run on
00:58 murrdoc scoates:  o/
00:58 Emantor joined #salt
00:58 scoates heh \o
01:01 jgelens joined #salt
01:08 otter768 joined #salt
01:09 Mso150_x joined #salt
01:18 whitenite joined #salt
01:20 aquinas_ joined #salt
01:28 obimod_ joined #salt
01:28 aparsons joined #salt
01:45 schimmy joined #salt
01:49 schimmy1 joined #salt
01:50 fannet is there a maximum size that pillars can be?
01:51 bhosmer_ joined #salt
01:51 jnials joined #salt
01:58 lacrymology joined #salt
01:59 ramishra joined #salt
02:03 otter768 joined #salt
02:04 jnials joined #salt
02:10 cads joined #salt
02:19 otter768 joined #salt
02:24 jgelens joined #salt
02:35 schimmy joined #salt
02:39 Emantor joined #salt
02:40 schimmy1 joined #salt
02:42 n8n joined #salt
02:48 whitenite joined #salt
02:52 ramishra joined #salt
02:57 bhosmer joined #salt
02:58 ramishra_ joined #salt
03:02 ramishra joined #salt
03:06 thayne joined #salt
03:06 jnials joined #salt
03:09 mordonez joined #salt
03:15 goal joined #salt
03:18 mosen joined #salt
03:21 mgw joined #salt
03:22 ramishra joined #salt
03:25 lacrymology joined #salt
03:30 jeddi joined #salt
03:31 Emantor joined #salt
03:35 elbaschid joined #salt
03:39 bhosmer joined #salt
03:43 iggy ours is less than 1MB
03:47 XenophonF joined #salt
03:49 n8n joined #salt
03:51 ipmanx so do y'all use the same Salt master in dev, staging, qa, and prod?
03:51 ipmanx not totally separate environments?
03:52 Emantor joined #salt
03:53 tligda joined #salt
03:54 fxhp joined #salt
03:54 scoates we do, yes
03:54 scoates I have a secondary saltmaster that I/we use for testing before I push to the main repository
03:55 scoates we'll actually be moving to two saltmasters in production in the non-distant future. one in open mode, IP-restricted, and one on the public 'net for our VMs
03:55 jnials joined #salt
03:57 ipmanx i'm confused by versions... my "salt --version" reports "2014.1.10" but I see things like "0.11.0" mentioned in the docs.  Is that newer or older?
03:58 scoates ipmanx: after the 0.x series, it went to a date scheme
03:58 scoates so 0.n < 2014.1 < 2014.7
03:58 ipmanx so anything 2014.* is newer than anything 0.*, ok thanks.  and do the 2014.* versions have anything to do with dates?  e.g. is 2014.7 from July?
03:58 scoates yes, exactly
03:59 scoates the 2014 series also have code names, if you ever run into those. 2014.1 is Hydrogen, 2014.7 (the next version) is Helium
04:00 scoates July is when 2014.7 was split off, FWIW. it's not officially released yet (it's in release candidate 4 as of tonight)
04:00 ipmanx which is the one where they fix the 'requests' library version requirement ;-)
04:00 scoates heh
04:00 scoates I had to manually fix that, myself, tonight on 2014.1
04:01 scoates er
04:01 scoates I had to manually fix that, myself, tonight on 2014.7rc4
04:02 mosen youre everywhere scoates
04:02 scoates mosen: there's a reason ##homebrew is on freenode (-;
04:03 mosen scoates: hah.. I think i found your blog through a programming question, and through a recommendation for saison
04:04 scoates hehe weird. (-:
04:04 scoates .oO( blog that is currently broken )
04:05 scoates that VM keeps dying. really need to find some time to move those things over to openstack
04:05 tligda joined #salt
04:09 pipps joined #salt
04:10 pipps joined #salt
04:14 loz-- joined #salt
04:15 Emantor joined #salt
04:15 lyddonb_ left #salt
04:16 lyddonb_ joined #salt
04:17 lyddonb_ left #salt
04:20 ramishra joined #salt
04:35 thayne joined #salt
04:39 TyrfingMjolnir joined #salt
04:44 obimod joined #salt
04:50 jgelens joined #salt
04:50 ramishra joined #salt
04:52 loz--_ joined #salt
04:59 whitenite joined #salt
05:01 wedgie joined #salt
05:01 Emantor joined #salt
05:03 Guest87008 joined #salt
05:06 wedgie joined #salt
05:10 jalbretsen joined #salt
05:16 jgelens_ joined #salt
05:28 bhosmer joined #salt
05:30 ramteid joined #salt
05:31 ramishra joined #salt
05:33 wedgie_ joined #salt
05:36 whitenite joined #salt
05:48 felskrone joined #salt
05:54 pipps joined #salt
05:56 mg__ joined #salt
05:59 mg__ Hi. I am trying to set up salt with syndic. If I execute the 'salt' command on the syndic server, everything is happy, but if I execute from the top-level master, I get warnings in the syndic master log and no results appear on the top-level master.
05:59 mg__ the warnings are [WARNING ] Could not write job cache file for minions: ['nightly-appserver'] [WARNING ] Could not write job invocation cache file: [Errno 2] No such file or directory: '/var/cache/salt/master/jobs/e8/f1ee3a4738b2951f1895f512d3bccf/.load.p'
05:59 mg__ I am using v2014.7.0rc4
06:00 mg__ anyone have any ideas why this doesn't work?
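For reference (not a diagnosis of the warnings above), the usual syndic wiring is: the top-level master opts in to receiving syndics, and the syndic box's local master config points upstream while the salt-syndic daemon runs alongside it. The hostname below is a placeholder.

```yaml
# /etc/salt/master on the top-level master:
order_masters: True

# /etc/salt/master on the syndic node (which also runs the
# salt-syndic daemon in addition to its own salt-master):
syndic_master: top-master.example.com
```

If either side is missing, commands issued from the top-level master can reach minions without their returns ever being written back up the chain.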
06:01 colttt joined #salt
06:05 deepz88 joined #salt
06:09 Emantor joined #salt
06:09 CeBe joined #salt
06:12 obimod joined #salt
06:15 ndrei joined #salt
06:16 pravka joined #salt
06:18 jgelens joined #salt
06:21 tld_wrk joined #salt
06:28 n8n joined #salt
06:34 ttrumm joined #salt
06:38 tomspur joined #salt
06:38 tomspur joined #salt
06:39 ramishra joined #salt
06:42 n8n joined #salt
06:43 tomspur joined #salt
06:44 deepz88 joined #salt
06:46 tomspur joined #salt
06:47 jgelens joined #salt
06:55 Nikunj_ joined #salt
06:55 Nikunj_ hi, where can i find apache configs in salt ?
07:02 trikke joined #salt
07:02 flyboy joined #salt
07:04 whitenite joined #salt
07:05 cmatheson joined #salt
07:17 bhosmer joined #salt
07:19 pravka joined #salt
07:22 whitenite joined #salt
07:25 student__ joined #salt
07:26 ipmanx joined #salt
07:32 Emantor joined #salt
07:32 jalaziz joined #salt
07:34 Mso150_x joined #salt
07:38 deepz88 left #salt
07:38 lcavassa joined #salt
07:39 DolourousEdd joined #salt
07:40 slav0nic joined #salt
07:44 Rene_ joined #salt
07:45 thayne joined #salt
07:46 jgelens joined #salt
07:47 sander^work joined #salt
07:48 cberndt joined #salt
07:49 sander^work salt-ssh '*' cmd.run 'uptime' <-- This one works.. but when I execute: salt-ssh "*" cmd.script "salt://script.sh" .. I get pid: 0.. so it doesn't seem to work. and script.sh only contains: #!/bin/bash \n uptime;
07:51 sander^work script.sh is located inside /etc/salt/script.sh
07:51 Jellyfrog tried with the RC release sander^work ? lots of work been done for salt-ssh
07:52 Jellyfrog sander^work: At the moment fileserver operations must be wrapped to ensure that the relevant files are delivered with the salt-ssh commands. The state module is an exception, which compiles the state run on the master, and in the process finds all the references to salt:// paths and copies those files down in the same tarball as the state run. However, needed fileserver wrappers are still under development.
07:53 Jellyfrog sander^work: http://docs.saltstack.com/en/latest/topics/releases/2014.7.0.html try 2014.7.0
07:56 sander^work Jellyfrog, o
07:57 sander^work i'm using salt from the ubuntu mirror.
07:57 sander^work salt-ssh 2014.1.11 (Hydrogen)
07:57 sander^work will I get a newer version with pip?
07:58 Jellyfrog its only RC
07:58 Jellyfrog http://docs.saltstack.com/en/latest/topics/releases/releasecandidate.html
08:01 sander^work Jellyfrog, what do you mean by wrapping fileserver operations?
08:01 sander^work how do I do that?
08:01 sander^work I want to avoid installing rc if possible.
08:03 wr3nch joined #salt
08:03 Jellyfrog if you read it says; "However, needed fileserver wrappers are still under development."
08:08 intellix joined #salt
08:13 PI-Lloyd joined #salt
08:14 chiui joined #salt
08:14 rjc joined #salt
08:15 Daemonik joined #salt
08:17 calvinh joined #salt
08:18 ramishra joined #salt
08:19 ramishra_ joined #salt
08:19 aw110f joined #salt
08:22 cberndt joined #salt
08:23 sander^work Jellyfrog, "pip install salt" failed with: error: command 'swig' failed with exit status 1 SWIG/_m2crypto.i:30: Error: Unable to find 'openssl/opensslv.h' SWIG/_m2crypto.i:33: Error: Unable to find 'openssl/safestack.h'
08:25 Jellyfrog well i have no idea but looks like you're missing openssl libs
08:26 martoss joined #salt
08:29 sander^work Jellyfrog, I found some instructions on it: http://docs.saltstack.com/en/latest/topics/development/hacking.html
08:30 Damoun_ joined #salt
08:34 shookees joined #salt
08:35 jrluis joined #salt
08:36 sander^work Now i'm getting: x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -fwrapv -Wall -Wstrict-protot
08:36 sander^work ypes -fPIC -std=c99 -O3 -fomit-frame-pointer -Isrc/ -I/usr/include/python2.7 -c
08:36 sander^work src/MD2.c -o build/temp.linux-x86_64-2.7/src/MD2.o
08:36 sander^work src/MD2.c:31:20: fatal error: Python.h: No such file or directory
08:36 sander^work Sorry for linebreaks.
08:37 smcquay joined #salt
08:37 Nexpro joined #salt
08:39 mortis joined #salt
08:39 peters-tx joined #salt
08:42 mortis_ joined #salt
08:43 ekkelett joined #salt
08:43 ramishra_ joined #salt
08:43 honestly joined #salt
08:49 rofl____ joined #salt
08:51 TheThing joined #salt
08:59 bhosmer joined #salt
09:02 aw110f_ joined #salt
09:05 thayne joined #salt
09:06 bhosmer joined #salt
09:08 agend joined #salt
09:09 Guest57925 joined #salt
09:13 ekkelett joined #salt
09:13 ekkelett joined #salt
09:13 yomilk joined #salt
09:15 whitenite joined #salt
09:17 akafred joined #salt
09:18 ramishra_ joined #salt
09:21 garthk joined #salt
09:22 giantlock joined #salt
09:31 agend joined #salt
09:34 garthk left #salt
09:36 sander^work Now I finally got latest salt from git: salt-ssh 2014.7.0-832-gd87d6f0 (Helium)
09:37 sander^work But having the same problem: salt-ssh "*" cmd.script "salt://script.sh" gives: cache_error: True pid: 0 retcode: 1 stderr: stdout:
09:40 jnials joined #salt
09:42 ipmanx joined #salt
09:44 whitenite joined #salt
09:49 sander^work i'm just trying to get a hello world script working here at first:)
09:49 sander^work Jellyfrog, do you know?
09:53 Emantor joined #salt
09:54 viq < sander^work> script.sh is located inside /etc/salt/script.sh
09:54 viq I believe by default salt looks in /srv/salt so unless you changed file_roots, that's where the script needs to be
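viq's point, sketched as config: `salt://` paths resolve against `file_roots`, which defaults to `/srv/salt`, not `/etc/salt`.

```yaml
# /etc/salt/master -- the default base file root.
# With this layout, /srv/salt/script.sh is what
# salt://script.sh resolves to.
file_roots:
  base:
    - /srv/salt
```

After moving the script to `/srv/salt/script.sh`, `salt-ssh '*' cmd.script salt://script.sh` should be able to find it.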
10:00 oyvjel joined #salt
10:01 packeteer joined #salt
10:03 gyre007_ joined #salt
10:04 hooker joined #salt
10:07 cb joined #salt
10:08 jgelens joined #salt
10:11 sander^work viq, thanks, works now :-D
10:11 bhosmer joined #salt
10:15 TheThing joined #salt
10:16 badon joined #salt
10:19 peters-tx joined #salt
10:20 jgelens joined #salt
10:39 jgelens joined #salt
10:46 che-arne joined #salt
10:49 TheThing joined #salt
10:49 VSpike Hi. Jinja question - the first form here works https://bpaste.net/show/cfb8e0459c7e but injects a lot of whitespace into the output. I was trying for the second form, but it doesn't work
10:50 VSpike I get " Comment: Jinja syntax error: expected token 'end of statement block', got 'set'; line 7"
10:50 VSpike I tried also having the for in one block and the 3 sets in another, but same error different line
10:50 VSpike Is there a simple way around this?
10:51 rattmuff VSpike: doesn't the second form miss "%}" on the for loop`?
10:52 VSpike rattmuff: I also tried something like https://bpaste.net/show/e593ac6c77b7
10:52 rattmuff VSpike: and I guess you might have to move the else statement
10:52 rattmuff VSpike: I'm no jinja ninja though, just looks like the opening and closing statements are not matching
10:53 ggoZ joined #salt
10:53 rattmuff VSpike: what about: https://bpaste.net/show/ffffec9782cd ?
10:55 bhosmer_ joined #salt
10:55 cb joined #salt
10:58 CeBe1 joined #salt
11:02 toastedpenguin joined #salt
11:04 TheThing joined #salt
11:05 CeBe2 joined #salt
11:05 tafa2 joined #salt
11:13 elbaschid joined #salt
11:16 lacrymology joined #salt
11:21 hobakill joined #salt
11:22 CeBe1 joined #salt
11:24 aw110f joined #salt
11:27 VSpike rattmuff: I tried a few more combinations, and decided to live with the white space :) after all, nobody will see it. It was just annoying me
11:27 VSpike Thanks for the help though
11:28 rattmuff hehe :P
11:32 vejdmn joined #salt
11:36 kiorky vejdmn: https://bpaste.net/show/6d336d12a4c9
11:36 kiorky VSpike: ^
11:37 VSpike kiorky: I guess the important bit is {%- there... what does that do?
11:37 kiorky VSpike: strip whitespace
11:38 kiorky :)
11:38 kiorky VSpike: but stripping in jinja is more a religion that a science
11:38 kiorky this come with experience ...
11:38 kiorky *than
11:39 kiorky VSpike: the pattern, here, is the for loop, for - before the for statement, and the - on the endfor one
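The pattern kiorky describes, sketched with hypothetical variable names: a `-` just inside a Jinja delimiter strips the whitespace and newline on that side of the tag, so the `set` lines leave nothing behind in the rendered output.

```jinja
{%- for host in hosts %}
{%- set fqdn = host ~ '.example.com' %}
{{ fqdn }}
{%- endfor %}
```

As the thread notes, exactly which tags need the `-` tends to be discovered by experiment rather than derived from first principles.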
11:39 VSpike ahh found it! nice
11:48 VSpike Do you know if there are any particular quirks with this form of "list of hashes" in YAML for salt? https://bpaste.net/show/ce4536e58804
11:48 VSpike I've tried with yaml.load( ... ) in a python shell various simple versions of this and it seems to work, as long as you put a space after the colons
11:49 VSpike I'm just getting an unknown YAML render error on what would be line 2 in that paste
11:49 whitenite joined #salt
11:49 johtso Can you only have one include statement per state file?
11:52 VSpike Ah. I think it's actually the backslashes it's objecting to. How strange
11:52 cads joined #salt
11:54 elfixit joined #salt
11:55 VSpike Hm, so I have to use single quotes and double \\
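For reference, plain YAML treats the two quote styles differently (the keys below are illustrative). Note that Salt renders `.sls` files through Jinja before YAML, so Jinja can consume backslashes first, which may be why an extra level of doubling was needed here.

```yaml
single: 'C:\path\file'     # single quotes: backslashes are literal
double: "C:\\path\\file"   # double quotes: backslashes must be escaped
```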
12:00 vbabiy joined #salt
12:01 gildegoma joined #salt
12:02 SheetiS joined #salt
12:04 trikke joined #salt
12:10 Outlander joined #salt
12:13 jrluis joined #salt
12:16 tafa2 Can anyone recommend a good SSH app for Mac to organise multiple SSH connections - something a bit more structured than iterm or standard terminal?
12:17 Rene-_ joined #salt
12:20 viq tafa2: for one, what do you mean by "organize"?
12:21 tafa2 Viq something like this http://www.mremoteng.org/screenshot.png but that only does SSH...
12:22 viq tafa2: when I was using mac, I had that stuff defined in iterm2
12:23 viq also, maybe https://github.com/emre/storm ?
12:26 bhosmer joined #salt
12:26 bhosmer_ joined #salt
12:29 Jellyfrog tafa2: http://fitztrev.github.io/shuttle/ too simple?
12:29 tafa2 viq storm looks a bit pointless tbh :)
12:30 tafa2 well unless u live 100% in a CLI kind of a world
12:30 Jellyfrog i dont see the problem with iterm2 :P
12:30 tafa2 Shuttle looks cool
12:30 tafa2 iterm2 just never agreed with me
12:30 tafa2 I dont know why
12:30 Jellyfrog use zsh
12:30 Jellyfrog and youre fine
12:30 Jellyfrog autocomplete all the way
12:31 Jellyfrog ssh *space* *tab* *tab*
12:31 Jellyfrog and you get a list of all known servers
12:31 Jellyfrog so
12:31 CeBe1 joined #salt
12:32 cb joined #salt
12:33 tafa2 how do you guys do it in iterm2?
12:33 Jellyfrog i just said ?
12:34 Jellyfrog then you can use the broadcast input
12:34 Jellyfrog if you want to control multiple
12:34 Jellyfrog or..since we're in #salt.. use salt :)
12:35 jaimed joined #salt
12:41 vejdmn joined #salt
12:41 pravka joined #salt
12:44 cb joined #salt
12:44 viq http://hiltmon.com/blog/2013/07/18/fast-ssh-windows-with-iterm-2/
12:47 viq tafa2: ^
12:52 cb joined #salt
12:52 TheRealBill_here joined #salt
12:54 cb joined #salt
12:57 jsm joined #salt
12:59 ramishra joined #salt
13:00 lionel joined #salt
13:02 CeBe1 joined #salt
13:05 diegows joined #salt
13:05 cpowell joined #salt
13:10 acabrera joined #salt
13:10 racooper joined #salt
13:17 cpowell joined #salt
13:17 mpanetta joined #salt
13:19 jslatts joined #salt
13:23 gmcwhistler joined #salt
13:25 kusams joined #salt
13:31 vejdmn joined #salt
13:34 wnkz joined #salt
13:36 wnkz Hi, I'm trying to set up "hooks" in a formula so it sends an update after a change or something ; for example, right now I use cmd.wait + curl to send JSON to an external website ; does someone know of something built-in / cleaner ?
13:40 ajolo joined #salt
13:41 tafa2 joined #salt
13:42 ramishra joined #salt
13:46 dude051 joined #salt
13:48 higgs001 joined #salt
13:49 badon joined #salt
13:58 whitenite joined #salt
13:59 giantlock joined #salt
14:01 wnkz joined #salt
14:01 FarrisG_ joined #salt
14:02 higgs001 joined #salt
14:05 gildegoma joined #salt
14:05 intellix joined #salt
14:06 kaptk2 joined #salt
14:07 XenophonF joined #salt
14:07 Ahrotahntee is there a way to run a command if a file is missing (as part of a formula); for example to generate a dh file?
14:07 housl joined #salt
14:08 Ahrotahntee oh; I suppose I could use the unless clause
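A sketch of the `unless` approach for the dh-file example; the state id, output path, and key size are hypothetical.

```yaml
# Run the generator only when the target file does not exist yet:
# `unless` skips the command whenever the test exits 0.
generate-dhparams:
  cmd.run:
    - name: openssl dhparam -out /etc/ssl/private/dhparams.pem 2048
    - unless: test -f /etc/ssl/private/dhparams.pem
```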
14:12 workingcats joined #salt
14:14 nitti joined #salt
14:15 djstorm joined #salt
14:16 chiui joined #salt
14:24 _prime_ joined #salt
14:26 viq wnkz: I'm guessing you'd need to write your own module - you probably could base something off of http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pagerduty.html for example
14:27 wnkz viq: interesting .. :)
14:28 wnkz viq: I'm surprised a generic JSON/Hook module isn't builtin though
14:29 viq wnkz: well, there's event system...
14:30 ek6_ joined #salt
14:31 hobakill joined #salt
14:32 iggy aww... I was going to show him my riemann-event-listener thing
14:33 scoates viq: wnkz: might also look into Reactor
14:33 wnkz viq: hmm I don't know about that ; quickly read the doc but I don't see how can I _send_ JSON from my states using this
14:34 wnkz I already use salt-api + Reactor to trigger states
14:34 wnkz but I'd also like those states to send JSON to other external apps
14:35 bhosmer joined #salt
14:39 viq wnkz: no, I meant "salt already has a mechanism for talking to another instances of itself, so people probably didn't feel much need to talk to external services"
14:40 wnkz viq: ok, well I might be on my way to code something then .. :)
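The cmd.wait + curl arrangement wnkz describes might look like the sketch below; the state ids, watched file, JSON payload, and endpoint are all placeholders.

```yaml
app-config:
  file.managed:
    - name: /etc/myapp/config.yml
    - source: salt://myapp/config.yml

# cmd.wait only fires when a watched state reports changes,
# so the hook is posted only on actual updates.
notify-hook:
  cmd.wait:
    - name: curl -sS -X POST -H 'Content-Type: application/json' -d '{"event": "config-updated"}' https://hooks.example.com/salt
    - watch:
      - file: app-config
```

A custom state module along the lines of the pagerduty one viq linked would be the cleaner long-term shape of the same idea.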
14:41 ipmb joined #salt
14:41 Emantor joined #salt
14:43 whitenite joined #salt
14:45 StDiluted joined #salt
14:46 kusams joined #salt
14:48 flyboy82 null... in the repository_client recipe, I see that simply repository_client is called, which is a definition
14:48 flyboy82 oops, sorry guys
14:52 kusams joined #salt
14:56 KennethWilke joined #salt
14:57 jxqz joined #salt
14:57 perfectsine joined #salt
15:03 _prime_ quick question on 2014.7.  Has pillar['master'] been removed?
15:04 elfixit joined #salt
15:05 mgw joined #salt
15:05 nitti_ joined #salt
15:05 flyboy82 left #salt
15:07 UtahDave joined #salt
15:07 jergerber joined #salt
15:09 to_json joined #salt
15:11 jalbretsen joined #salt
15:11 pdayton joined #salt
15:13 glyf joined #salt
15:20 n8n joined #salt
15:21 deepz88 joined #salt
15:23 babilen Hi all - I am getting an error in a cmd.run state due to the fact that the HOME variable is not set. Will I have to set that explicitly via env?
15:24 thayne joined #salt
15:24 iggy what do you need it set for? the command being run?
15:24 anotherZero joined #salt
15:24 babilen iggy: yeah, exaclty
15:24 babilen +spelling
15:26 wendall911 joined #salt
15:26 obimod joined #salt
15:26 thedodd joined #salt
15:26 SheetiS babilen: It wouldn't hurt to specify it via - env:.
15:26 SheetiS Worst case, it'd be just like overriding it in your bash session.
15:27 babilen I am currently trying that. It's just curious that I run into this when I roll it into qa, but didn't face this problem on vagrant :-/
15:27 troyready joined #salt
15:27 babilen yeah, that worked. Curious
15:28 thedodd joined #salt
15:29 vejdmn joined #salt
15:31 iggy was going to say there's some shorthand for "user's home dir", but it wouldn't help in that case
15:32 StDiluted joined #salt
15:33 SheetiS babilen: is salt running as the same user with the same shell on the same os in vagrant vs qa?
15:33 hasues joined #salt
15:33 babilen SheetiS: It is
15:34 babilen different hosting naturally, but same OS and the minion setup is the identical too.
15:35 mgw joined #salt
15:36 SheetiS babilen: could be the difference between the minion starting via init script at boot vs being restarted from a session context where it could see $HOME maybe?
15:37 tligda joined #salt
15:38 Ozack1 joined #salt
15:38 babilen In the HOME-isn't-set variant I installed from the Debian packages whereas I used vagrant's provisioner (and therefore the bootstrap script) in the case where it is set. I guess I'll have to take a look at the init scripts, but it is annoying to have a difference between my local vagrant setup and what is running on "real" boxes. :)
15:38 SheetiS i know when I make upstart jobs, if I want the application to see all environment variables, I have to do 'exec su root -c /path/to/daemon' for example
15:40 thedodd joined #salt
15:42 SheetiS babilen: That could be it.  I'm typically in os_family RedHat (Amazon Linux -- makes for 'fun' trying to have local vagrant boxes that match the real world -- especially since amazon repos give a 403 from non amazon IP addresses), so I'm not sure what to expect out of the debian init script.
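The `- env:` workaround from the thread, sketched below. In the 2014-era state syntax, `env` takes a list of single-key mappings; the state id, command, user, and path are hypothetical.

```yaml
# Explicitly set HOME for the command so the state behaves the same
# whether the minion was started from an init script or a login shell.
run-needs-home:
  cmd.run:
    - name: /usr/local/bin/deploy.sh
    - user: deploy
    - env:
      - HOME: /home/deploy
```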
15:42 iggy GENERAL REQUEST: use http://tools.ietf.org/html/rfc1123 to name roles/tags/etc in formulas
15:43 Emantor joined #salt
15:44 funzo joined #salt
15:45 __TheDodd__ joined #salt
15:46 n8n joined #salt
15:47 pdayton joined #salt
15:47 pipps joined #salt
15:53 gladiatr joined #salt
15:53 n8n_ joined #salt
15:54 Gareth morning morning
15:55 babilen Good eveneming
16:01 ekristen joined #salt
16:01 tafa2 joined #salt
16:01 ekristen morning everyone
16:01 pipps joined #salt
16:02 tafa2 joined #salt
16:02 sunkist akoumjian: do you still support salty vagrant?
16:03 tafa2 joined #salt
16:05 tafa2 joined #salt
16:06 ekristen UtahDave: is there any way to detect failed states?
16:06 UtahDave morning everyone!
16:07 UtahDave ekristen: aside from the output you get after running the states?
16:07 _prime_ good morning
16:07 ekristen UtahDave: yeah, thinking about reactors and states running
16:07 ekristen need a way to detect when one fails
16:07 Gareth UtahDave: morning
16:07 notpeter_ joined #salt
16:07 _prime_ UtahDave: I have a question about 2014.7 and pillars.
16:08 N-Mi is there a way with states.pkgrepo to add a comment to a repo entry with Debian ? (for example the "humanname")
16:08 UtahDave morning, Gareth!
16:09 _prime_ pillar['master'] appears to have disappeared after I did a git pull of 2014.7.0-830-g4085cfa .  Has that pillar been removed? (ie is this expected behavior?)
16:09 Gareth _prime_: looks like there is a comments option.
16:09 ajolo joined #salt
16:09 Gareth UtahDave: hows it going?
16:09 Emantor joined #salt
16:10 nitti joined #salt
16:10 UtahDave _prime_: Hm. They may have changed the option to have the master's config in the minion's pillar default to false rather than true
16:11 UtahDave pretty good, Gareth! I'm taking my girls to a pumpkin patch after work today. They've been excited all week to pick out their own pumpkin
16:11 _prime_ is it just a minion config I can look up somehwere.  I'm fine with that, just want to make sure I haven't discovered a bug at the 11th hour
16:11 iggy ekristen: I'm considering hooking up some returner and having the output filtered by something that can tell success vs failure
16:11 iggy but I don't know of anything ready right now
16:11 UtahDave _prime_: let me check
16:12 ekristen iggy: yeah — I definitely need to get something in play for monitoring for failed states
16:12 ekristen just don’t know where to start exactly
16:12 kickerdog joined #salt
16:13 UtahDave _prime_: it's the pillar_opts config option in your master config
16:14 ozzzo joined #salt
16:16 UtahDave _prime_: ok, in the develop branch we're setting pillar_opts to false by default now
16:16 UtahDave _prime_: are you on the develop branch or 2014.7 branch?
16:16 UtahDave :q
16:17 Gareth UtahDave: nice :)
16:17 _prime_ I assume the develop branch: 2014.7.0-832-gd87d6f0
16:17 mechanicalduck joined #salt
16:17 UtahDave _prime_: ok.   So set   pillar_opts: True    in your master config and restart the salt-master daemon.  Then do a pillar refresh and it should be back.
16:18 bhosmer joined #salt
16:18 _prime_ thanks UtahDave, that did the trick.
16:19 _prime_ I've closed a bug I opened earlier today on this topic
16:19 UtahDave cool, thanks, _prime_!
16:19 iggy _prime_: did you put a comment why you closed it (just in case anybody else comes across the same thing)?
16:20 pipps joined #salt
16:22 _prime_ yes "I'm closing this bug. The default setting for pillar_opts in /etc/salt/master is now false. Setting it to true returns the old behavior."
16:22 _prime_ https://github.com/saltstack/salt/issues/16693
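The fix from the thread, as a config sketch:

```yaml
# /etc/salt/master -- restore the master's config options in minion
# pillar (the develop-branch default flipped to False):
pillar_opts: True
```

After changing it, restart the salt-master daemon and refresh pillar data, e.g. `salt '*' saltutil.refresh_pillar`.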
16:22 hasues joined #salt
16:22 scoates there's still no way for two pillar .sls within the same pillar source to merge their data together, right? e.g. this https://github.com/saltstack/salt/issues/3991
16:22 spookah joined #salt
16:23 ekristen anyone know if docker 1.3.0 is supported yet or being worked on by anyone?
16:23 obimod joined #salt
16:23 bhosmer joined #salt
16:25 iggy _prime_: thanks :) you wouldn't believe the sheer number of bugs I've seen closed with no explanation of why or how to fix the actual problem
16:25 UtahDave ekristen: we'll get to it, but we're right in the middle of finishing the 2014.7 release
16:26 ekristen UtahDave: I was looking at potentially helping, just didn’t want to do work someone else was already doing
16:26 _prime_ iggy - Sure thing!  That's a pet peeve of mine too... having the same problem, finding the bug or thread that says 'fixed' or 'resolved', but no explanation as to why or how.
16:26 ekristen I’m assuming it won’t make it into the 2014.7 release though even if it got pull requested in shortly
16:26 UtahDave ekristen: your help would be really appreciated! I don't think we'd be able to get to it until next week
16:26 iggy ekristen: nope
16:27 iggy but there's always 2014.7.1
16:27 UtahDave ekristen: depends on whether it's a bug fix or not, I think
16:28 ekristen UtahDave: it looks like there is a problem with the way salt determines whether a pull was successful or not, haven’t tested all the commands, but docker added a new message to the end of the pull status
16:29 UtahDave that sounds right.   ekristen, if you don't mind, make the pull req against the 2014.7 branch.  This sounds like it could be considered a bug fix, rather than a new feature
16:29 khaije1 joined #salt
16:30 * dstufft hopes 2014.7 hits final soon :D
16:30 ekristen UtahDave: kk, I’ll try and figure out where the problem is
16:30 KyleG joined #salt
16:30 KyleG joined #salt
16:30 jhulten joined #salt
16:31 ekristen is there good documentation someplace that talks about the best way to checkout the git code and run the master and minion off of the checked out code?
16:31 khaije1 hi all, I'd like to configure salt-cloud to be able to run w/o root priviledges as salt-ssh can ... is that currently possible?
16:31 hasues joined #salt
16:32 UtahDave ekristen: looks like there's already an open bug on that https://github.com/saltstack/salt/issues/16710
16:33 UtahDave ekristen: here's what I do:  1. Install salt-master and salt-minion from packages to get all the deps. 2. Uninstall salt-master and salt-minion, leaving deps. 3. clone salt repo. 4. pip install -e /path/to/salt/repo
16:33 UtahDave then whatever branch you check out of the repo, or whatever changes you make to the repo, restart the salt-master and salt-minion and they run directly from your repo
16:34 ekristen do I need to run pip install each time?
16:34 UtahDave nope.
16:34 UtahDave just the first time.
16:35 ekristen kk
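UtahDave's four steps above, as a shell sketch. This is a hedged outline, not an official procedure: Debian/Ubuntu package names and the /opt/salt clone path are assumptions.

```shell
sudo apt-get install salt-master salt-minion   # 1. packages pull in all the deps
sudo apt-get remove salt-master salt-minion    # 2. drop the packages, keep the deps
git clone https://github.com/saltstack/salt.git /opt/salt   # 3. clone the repo
sudo pip install -e /opt/salt                  # 4. editable ("develop") install

# From here on, checking out a branch or editing the repo and restarting
# salt-master / salt-minion runs the daemons straight from your working copy.
```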
16:36 ekristen so I have a salt testing server UtahDave, I installed using the bootstrap script and the git option, whats the best way to uninstall?
16:36 UtahDave does   pip uninstall salt-minion   work?
16:37 cb joined #salt
16:37 ekristen no, cause it was installed via git
16:37 ekristen and the bootstrap
16:37 UtahDave well, you can just delete salt out of the site-packages directory
16:40 gyre007_ joined #salt
16:40 thayne joined #salt
16:42 aparsons joined #salt
16:43 ndrei joined #salt
16:48 druonysus joined #salt
16:49 rypeck joined #salt
16:49 wnkz joined #salt
16:50 aparsons joined #salt
16:50 StDiluted so who is doing a rails deploy using salt, and how is it working?
16:50 bhosmer joined #salt
16:56 jalbretsen joined #salt
16:56 Jahkeup joined #salt
16:57 hasues So, upon using salt-cloud to create a VM, it appears it created a VM from an image, but it is stuck and is not "returning" from the operation.  Is that normal?  What is the exit condition?
16:57 Setsuna666 joined #salt
17:02 cb joined #salt
17:04 to_json joined #salt
17:05 kusams joined #salt
17:05 repl1cant soooo
17:05 repl1cant quick question
17:05 repl1cant when you submit a job via local_async to a minion
17:06 repl1cant will the minion give the job results back to the master w/o asking the master the job status? so if you watch the event bus, will it pop up there?
17:06 repl1cant or do you have to query the master before it'll get the results in the bus?
17:06 troyready joined #salt
17:07 UtahDave hasues: it's probably trying to ssh in to install the salt-minion
17:07 jgelens joined #salt
17:08 hasues UtahDave: ooh, interesting.  I'm watching it with the log level at debug now, so I'm trying to see.
17:08 UtahDave repl1cant: I think whenever a minion finishes a job it sends it up to the master, and so I think it should show up on the event bus
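A hedged sketch of the flow UtahDave describes, using commands that existed in this era (the `<jid>` placeholder is illustrative):

```shell
salt --async '*' test.ping          # submits the job and prints the jid
salt-run jobs.lookup_jid <jid>      # later: pull the returns from the master's job cache
# The minion's return also fires a salt/job/<jid>/ret/<minion> event on the
# master event bus as soon as the job finishes, without the master polling.
```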
17:09 hasues UtahDave: What credentials would it be trying to use to install the minion?  I don't recall setting that up anywhere.
17:09 Ryan_Lane joined #salt
17:10 UtahDave hasues: usually you have to tell it which user to use and a local path to a private ssh key to use
17:10 UtahDave hasues: what cloud are you using?
17:10 hasues UtahDave: vsphere
17:10 UtahDave Hm. it's been a while since I've used vsphere. just a sec
17:11 UtahDave yeah, you have to set a username and password in your provider config
17:11 repl1cant UtahDave: awesome, thanks
17:11 Mso150_x joined #salt
17:11 hasues UtahDave: Hm, okay, I'll look into that.
17:12 UtahDave hasues: https://github.com/saltstack/salt/blob/develop/salt/cloud/clouds/vsphere.py#L19
17:12 hasues UtahDave: I thought that was the username/password to access the vCenter host?
17:13 hasues UtahDave: When the MOR is being used, that is going to take some authentication of some kind, and that is what was set there.
17:13 UtahDave ah, that's probably correct.
17:13 hasues UtahDave: Is there a limitation of the VMs being created having to use those same creds?
17:13 hasues UtahDave: ah okay
17:13 bhosmer joined #salt
17:14 UtahDave hasues: still, there still needs to be some auth creds for the os.
17:14 hasues UtahDave: Totally agree, just need to know where to set those :)
17:15 n8n joined #salt
17:16 chiui joined #salt
17:16 cmatheson left #salt
17:16 hasues UtahDave: Well, to be fair, it looks like the VM came up with no network configuration.  I need to figure out if this driver can call guest customization from inside VMware.
17:16 jgelens joined #salt
17:19 conan_the_destro joined #salt
17:20 Setsuna666 joined #salt
17:21 kusams joined #salt
17:24 conan_the_destro joined #salt
17:26 giantlock joined #salt
17:27 higgs001 joined #salt
17:28 canci joined #salt
17:29 FarrisG__ joined #salt
17:29 kusams joined #salt
17:30 FarrisG__ Any pointers or strategies for saltifying simple binary utilities, such as the java time zone updater tool? http://www.oracle.com/us/technologies/java/tzupdater-readme-136440.html
17:31 rap424 joined #salt
17:31 Gnouc joined #salt
17:31 Mso150_x_h joined #salt
17:32 Gnouc joined #salt
17:32 schimmy joined #salt
17:35 ekristen anyone ever seen “Error encountered while render pillar top file”
17:35 ekristen I get that, but nothing else really telling me what is wrong
17:35 robawt ekristen: double check includes
17:35 schimmy1 joined #salt
17:36 holler_ joined #salt
17:36 ekristen robawt: double check what exactly?
17:37 robawt ekristen: try running each individual state that the top file is touching to see if one of them has a compile problem
17:38 ekristen robawt: yeah I’m trying to run a specific state, its complaining about pillar data though, weirdly
17:39 robawt ekristen: then you need to see which part of pillar is being called by your state and make sure it's legal.  remember YAML complains about whitespace too
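The whitespace problem robawt mentions is often a stray tab: YAML forbids tabs in indentation, and one tab in a pillar file can break the whole render. A stdlib-only sketch (function name and sample are illustrative) for spotting them before Salt does:

```python
def yaml_tab_lines(text):
    """Return the 1-based line numbers whose indentation contains a tab."""
    bad = []
    for n, line in enumerate(text.splitlines(), 1):
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(n)
    return bad

# Line 3 is indented with a tab, which YAML rejects:
sample = "mysql:\n  user: root\n\tpass: secret\n"
print(yaml_tab_lines(sample))  # → [3]
```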
17:40 n8n joined #salt
17:40 jnials joined #salt
17:42 jnials joined #salt
17:44 kickerdog joined #salt
17:50 repl1cant anyone using the salt-api cherrypy behind a vip?
17:51 holler_ hello, when using ubuntu 14.04 lts x64 I am getting this error
17:51 holler_ failed: Jinja variable 'dict object' has no attribute
17:51 holler_ it doesnt happen when using ubuntu 12.04... anyone have ideas?
17:51 jrluis joined #salt
17:53 murrdoc joined #salt
17:53 kickerdog can you pastebin your code?
17:53 HPGreg joined #salt
17:54 holler_ ok one sec
17:55 conan_the_destro joined #salt
17:55 toddnni joined #salt
17:55 kusams joined #salt
17:57 holler_ kickerdog: http://dpaste.com/099VQYE
17:57 holler_ Rendering SLS "base:mysql.server" failed: Jinja variable 'dict object' has no attribute 'mysql'; line 30
17:58 holler_ this is masterless minion via vagrant + salt
17:59 trevorj holler_: It means you're getting the mysql attribute from most likely pillar
18:00 trevorj holler_: but it does not exist
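The error trevorj is describing usually comes from a direct dictionary lookup on a pillar key that never arrived. A hedged sketch of the two styles (key names are illustrative, not from holler_'s actual SLS):

```yaml
{# Raises "'dict object' has no attribute 'mysql'" when the key is missing: #}
{% set user = pillar['mysql']['user'] %}

{# Returns a default instead of blowing up the render: #}
{% set user = salt['pillar.get']('mysql:user', 'root') %}
```

The default only papers over the symptom, of course; the real fix here is getting the pillar data delivered.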
18:00 gyre007_ joined #salt
18:00 holler_ trevorj: why wouldnt my pillar data be created when using this version of ubuntu?
18:00 holler_ it works using 12.04
18:00 trevorj holler_: look at your sls
18:00 trevorj holler_: see what it's trying to get the mysql attribute of
18:00 holler_ (ps Im still new to salt)
18:01 trevorj holler_: pastebin the mysql/server.sls file
18:01 trevorj holler_: also, welcome to the world of salt
18:02 holler_ trevorj: thanks! http://dpaste.com/3Q1V6P6
18:02 trevorj holler_: ok, it is pillar
18:02 murrdoc lets get saltay
18:02 trevorj holler_: on the client, run salt-call pillar.items
18:03 trevorj holler_: sorry, what I mean is
18:03 trevorj holler_: salt-call pillar.get mysql
18:03 trevorj holler_: It's probably not due to ubuntu 14.04
18:03 trevorj holler_: but due to pillar not targeting your new minion
18:03 trevorj holler_: check top.sls in your pillar and make sure it's targeting your new minion's id
18:04 holler_ http://dpaste.com/2YYN66N
18:04 holler_ oh
18:04 holler_ vagrant@coachlogix-dev:~$ sudo salt-call pillar.get mysql
18:04 holler_ local:
18:04 holler_ empty
18:04 trevorj holler_: yup
18:04 trevorj holler_: pillar isn't targeting your minion id
18:05 trevorj holler_: that or you don't have the pillar data on your minion
18:05 nitti_ joined #salt
18:05 holler_ its masterless minion so I didnt set an id in the minion config
18:05 holler_ and pillar/top.sls has just: base: '*': - settings
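The top file holler_ flattened into one line, properly indented:

```yaml
# pillar/top.sls
base:
  '*':
    - settings
```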
18:05 trevorj holler_: you can still have an id
18:06 trevorj holler_: you actually definitely still do, it's hostname
18:06 trevorj holler_: but the * will catch it
18:06 holler_ how can I check what it is?
18:06 holler_ ok
18:06 trevorj holler_: in settings do you have the mysql dictionary?
18:06 holler_ yeah
18:06 holler_ http://dpaste.com/099VQYE
18:06 iggy pillar.items
18:06 iggy do you see what you expect?
18:07 holler_ iggy: no, its not there
18:07 trevorj iggy: Just asked him that, no
18:07 iggy oh, I'm still catching up...
18:07 holler_ is it possible something in vagrant provision part is not working?
18:07 holler_ or permissions or something
18:07 iggy masterless?
18:07 iggy (probably so if you're using vagrant)
18:07 trevorj holler_: in your vagrant vm, do the pillar files exist
18:08 trevorj iggy: yes
18:08 iggy 2014.1.11?
18:08 holler_ trevorj: yes, I can go to /src/pillar/top.sls
18:08 holler_ and its there
18:08 ndrei joined #salt
18:08 holler_ so is /src/pillar/settings.sls
18:09 thedodd joined #salt
18:10 holler_ http://dpaste.com/398S865
18:10 iggy holler_: what version of salt?
18:11 holler_ iggy: not sure? how do i check
18:11 iggy salt<anything> --versions
18:11 iggy or --version
18:11 iggy it's probably the infamous 2014.1.11 masterless pillar bug
18:12 iggy https://github.com/saltstack/salt/issues/16656
18:12 iggy that's one of them
18:12 trevorj Man, 2014.1.11+ has been bug ridden
18:12 iggy Fix regression in pillar in masterless (:issue:`16210`, :issue:`16416`, :issue:`16428`)
18:13 iggy it's fixed in 2014.1.12
18:13 holler_ sudo salt-call --version
18:13 holler_ salt-call 2014.1.11 (Hydrogen)
18:13 ekristen iggy UtahDave isn’t there some new database that information is stored in now?
18:13 holler_ ah ok.. its a bug!
18:13 holler_ haha
18:13 ekristen like event data or something to that effect?
18:13 holler_ how can I specify a salt version with vagrant I wonder?
18:13 UtahDave ekristen: you can set up a master job cache in 2014.7
18:13 holler_ or does the minion bootstrap script have 1.12?
18:13 ekristen UtahDave: k
18:14 * babilen is still miffed that all his screaming didn't help regarding pillars on masterless minions
18:14 murrdoc what u talking about
18:14 babilen murrdoc: saltstack
18:14 Outlander joined #salt
18:14 murrdoc haha
18:15 iggy NOBODY CARES ABOUT MASTERLESS
18:15 murrdoc i meant more the 'problems regarding pillars in masterless'
18:15 murrdoc I DO
18:15 Ryan_Lane babilen: is that still broken in 2014.1?
18:15 iggy except all the people that use masterless
18:15 Ryan_Lane I definitely care about masterless
18:15 UtahDave holler_: you can pass in an option to the bootstrap script to specify the version you want
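The bootstrap script's trailing arguments select an install type and, for git installs, a tag. A hedged one-liner sketch (the tag shown is the release discussed above):

```shell
# Fetch and run salt-bootstrap, installing from the v2014.1.13 git tag:
curl -L https://bootstrap.saltstack.com | sudo sh -s -- git v2014.1.13
```

Vagrant's salt provisioner can pass the same arguments through to the bootstrap script.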
18:15 viq joined #salt
18:15 murrdoc stfu iggy, you are out of your element (love the movie)
18:15 babilen Ryan_Lane: No, it was fixed, but there are no packages for .12 (well .13 really) and so there are still many people arriving here with that problem
18:15 gildegoma joined #salt
18:15 UtahDave we definitely care about masterless.
18:15 Ryan_Lane this has been broken for weeks now, right?
18:15 holler_ lol iggy
18:15 babilen masterless is important for lots of tests
18:16 UtahDave babilen: .13 has been cut and should be available soon
18:16 murrdoc UtahDave:  how we looking on the packages for rcs
18:16 babilen Ryan_Lane: Well, it was fixed as soon as it was reported, but it just shouldn't have made it into a package
18:16 holler_ masterless is great for e.g. spinning up a local vagrant vm, maybe a test/dev on rackspace like I want to do :)
18:16 ndrei joined #salt
18:16 murrdoc you guys want a unique case ?
18:16 murrdoc we are helping a sister company with their monitoring
18:16 UtahDave murrdoc: windows installers are out. yum packages are being built
18:16 babilen UtahDave: I saw it on -pkg and am happily waiting for it to hit the mirrors
18:16 murrdoc they wont give me sudo on the boxes
18:17 iggy what ever happened to test cases
18:17 murrdoc so i am making masterless salt states that they can run themselves
18:17 * murrdoc needs that masterless
18:17 Ryan_Lane you guys really need to get automatic package building going, which would allow you to cut a package immediately
18:17 murrdoc i am using grains for everything tho
18:17 Ryan_Lane or let people use nightlies
18:17 murrdoc and yes to what Ryan_Lane said
18:17 murrdoc or yes please
18:17 kusams joined #salt
18:17 ekristen iggy: I wonder if we can’t just the job cache and redis server to watch for failed states
18:17 UtahDave Ryan_Lane: yep.  I just finished (mostly) automating the Windows installer builds. Now it takes me like 3 minutes to cut a new build
18:18 iggy ekristen: there are probably quite a few ways you can do it
18:18 ekristen probably
18:18 mechanicalduck_ joined #salt
18:18 UtahDave rpms are being built automatically every hour now and debs are being worked on
18:18 iggy ekristen: just nothing that like "set foo_setting to True in the config and bam"
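One of the "quite a few ways" iggy alludes to: state returns land in the job cache keyed by state ID, each with a `result` field, so a small consumer can pick out the failures. A stdlib-only sketch (helper name and sample document are illustrative, not a Salt API):

```python
import json

def failed_states(return_doc):
    """Given one minion's state-run return (state ID -> outcome dict),
    collect the states that reported result: False."""
    failed = {}
    for state_id, outcome in return_doc.items():
        if isinstance(outcome, dict) and outcome.get("result") is False:
            failed[state_id] = outcome.get("comment", "")
    return failed

# A trimmed-down return as it might sit in a JSON job cache:
sample = json.loads("""
{
  "file_|-motd_|-/etc/motd_|-managed": {"result": true, "comment": "File updated"},
  "pkg_|-nginx_|-nginx_|-installed": {"result": false, "comment": "Package failed"}
}
""")
print(sorted(failed_states(sample)))  # → ['pkg_|-nginx_|-nginx_|-installed']
```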
18:19 murrdoc UtahDave:  i will gladly help with the deb part
18:19 murrdoc for u know selfish reasons
18:19 murrdoc :D
18:19 iggy aww... debian/ubuntu users are second class citizens now
18:19 UtahDave we have the new copr yum repos so people can use those repos right away instead of waiting for epel
18:19 murrdoc +1
18:20 ekristen can you not run salt stack with gitfs only?
18:20 UtahDave Our QA team has been making amazing progress in automating testing and is working feverishly on improving test coverage.  This is all going to make automated builds easier, too
18:21 iggy ekristen: sure (we do)
18:21 murrdoc nice
18:21 hasues UtahDave: I'm at a loss.  If I use vSphere cloud driver, how do I load customization specifications?  If I can't use cusomtization specifications, how am I to clone a VM and expect the nic to be configured?  Any thoughts?
18:21 ekristen hrm my test env keeps failing to pass preflight checks
18:21 ekristen Failed to load fileserver backends, the configured backends are: gitfs
18:21 iggy I like Django's stance of "every code commit includes documentation and test cases"
18:22 holler_ is the latest release v2014.1.13?
18:22 robawt UtahDave++ awesome
18:22 UtahDave holler_: yes. Though I'm not sure all OSes have packages released yet.
18:22 holler_ if anyone can confirm that this looks right in my Vagrantfile to use the latest thatd be great http://dpaste.com/0GQC2GK
18:22 holler_ with ubuntu 14.04
18:23 holler_ UtahDave: should I use .12?
18:23 * dstufft notices the topic says the latest version is 2014.1.10
18:23 murrdoc nah man .rc4
18:23 jgelens joined #salt
18:23 masterkorp So kitchen-salt broke
18:23 iggy holler_: looks okay to me (as much as I understand vagrant anyways)
18:23 UtahDave holler_: I think you can do that more easily.  Look here: https://github.com/UtahDave/salt-vagrant-demo/blob/master/Vagrantfile
18:28 ekristen iggy doesn’t seem to work for me
18:28 ekristen when I add roots back in it starts up fine
18:29 khaije1 sort of a "blue sky" question, does salt have any capability to read or operate on SCAP data?
18:29 iggy ekristen: probably missing the requisites to use the gitfs backend
18:29 iggy khaije1: what's SCAP data?
18:31 khaije1 it's a clever security auditing and automation data format, becoming a standard (imo)
18:31 n8n joined #salt
18:31 mechanicalduck_ joined #salt
18:31 iggy it's probably possible, don't know about out of the box
18:31 kusams joined #salt
18:33 Emantor joined #salt
18:33 khaije1 iggy: ok thats fine. I didn't see anything in my scan so thought I'd ask. fwiw RHEL7 is or will have the ability to configure a system using SCAP during initial pre-boot config stage
18:34 Thiggy joined #salt
18:34 masterkorp not
18:34 masterkorp https://github.com/simonmcc/kitchen-salt/issues/14
18:34 masterkorp any ideas ?
18:36 sschwartz_ee joined #salt
18:37 jeffspeff with 200 minions, how many worker threads are recommended for the master?
18:37 sschwartz_ee So, has anyone else had the experience of Salt running just fine, and then one day packing in, with almost everything you try and do producing a "Failed to authenticate, is this user permitted to execute commands?"  I briefly turned auth on to test something with salt-api, but I have since turned it off in the master, and the problem persists.
18:41 mpanetta Is it possible to create dynamic pillar and file roots?
18:42 jeffspeff sschwartz_ee, Did you restart the master service?
18:44 UtahDave jeffspeff: for 200 minions I don't know if you'd have to modify the worker_threads at all from the default 5
18:44 whiteinge sschwartz_ee: I saw that same thing yesterday. I have eauth configured but wasn't using it when I saw the error.
18:44 UtahDave maybe bump it up to 6
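The knob under discussion lives in the master config; a sketch using the numbers from this exchange:

```yaml
# /etc/salt/master
worker_threads: 6   # default is 5; ~200 minions rarely needs much more
```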
18:45 whiteinge sschwartz_ee: what salt version are you running?
18:45 sschwartz_ee whiteinge: Makes me wonder if it's a newish bug.  Have you been able to recover from it? salt 2014.1.10 (Hydrogen)
18:45 jeffspeff UtahDave, I had it on 10 and it was running really slow when doing a highstate. It'd make it through so many minions and then just sit there for hours.
18:46 whiteinge sschwartz_ee: I'm seeing it intermittently, not constantly
18:47 sschwartz_ee whiteinge: Hm. I'm getting it all the time now -- completely shut down the environment.
18:47 ekristen sschwartz_ee: I’m getting that on my test server with the latest code checked out
18:49 whiteinge sschwartz_ee: take a look at your /var/cache/salt/master dir. Might be worth trying to blow away any tokens in there
18:49 whiteinge (On my phone or I'd give a more complete path.)
18:49 sschwartz_ee whiteinge: Found it, no worries.
18:50 jalaziz joined #salt
18:51 nitti joined #salt
18:53 sschwartz_ee whiteinge: sadly, that didn't do it. Ah, well.
18:56 pipps joined #salt
18:56 Mso150_x_h joined #salt
18:58 n8n joined #salt
18:59 jsm joined #salt
19:00 whiteinge sschwartz_ee: dang. I'm going to poke at it when I'm front of my computer next. Ping me if you made any head-way? I'll do the same.
19:00 kickerdog joined #salt
19:00 sschwartz_ee whiteinge: Will do; I spent about three hours swearing at it last night.
19:05 Mso150 joined #salt
19:12 jalaziz joined #salt
19:14 ckao joined #salt
19:14 trevorj Ugh, rename_on_destroy is broken
19:15 ekristen sschwartz_ee: what version are you running?
19:15 UtahDave trevorj: really? What version are you on?
19:15 trevorj UtahDave: 2014.1 git head
19:15 trevorj UtahDave: It looks for the original name in a dictionary after it renames it in the dict, it appears
19:16 ekristen sschwartz_ee: have you checked your diskspace?
19:16 mechanicalduck_ joined #salt
19:17 stewba joined #salt
19:17 sschwartz_ee ekristen: This started when I hit an inode limit, actually -- or seems to.
19:17 sschwartz_ee seemed to.
19:18 UtahDave trevorj: so the latest from the 2014.1 branch?
19:19 trevorj UtahDave: Yeah, rename is actually broken as a whole
19:19 trevorj UtahDave: at least on ec2
19:19 ekristen sschwartz_ee: yeah I hear you there — something just went crazy on my test system
19:19 ekristen df -h doesn’t even show all my drives :/
19:20 UtahDave trevorj: do you know the last version that it worked for you?
19:20 trevorj UtahDave: I've never actually used it before now.
19:20 trevorj UtahDave: I just turned rename_on_destroy on recently
19:20 trevorj UtahDave: If I have time today I'll take a look
19:21 UtahDave trevorj: Hm. OK.  Can you pastebin a sanitized version of your config? Also, are you getting a stacktrace, or is it just not renaming the vm?
19:21 trevorj UtahDave: Stack trace, keyerror
19:21 UtahDave can you pastebin the stacktrace as well?
19:22 JoeJulian given a pillar of  http://ur1.ca/ief8h shouldn't {% for type,data in salt['pillar.get']('systems:types').iteritems() if type in ['type1','type2'] %} iterate through and give me type='type1' data={a dict containing the data in type1}, and the same for type2, but not type3? Because it's telling me that data is a StrictUndefined when I go to use it.
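JoeJulian's loop can be modeled in plain Python to check what the filter *should* produce (`iteritems` was the Python 2 spelling; `.items()` here, and the data is illustrative). The filter itself does exclude type3, which suggests the StrictUndefined is coming from the pillar lookup returning empty rather than from the loop:

```python
# Pillar-shaped data standing in for salt['pillar.get']('systems:types'):
systems_types = {
    "type1": {"a": 1},
    "type2": {"b": 2},
    "type3": {"c": 3},
}

# Equivalent of: {% for type, data in ... if type in ['type1','type2'] %}
selected = {t: d for t, d in systems_types.items() if t in ["type1", "type2"]}
print(sorted(selected))  # → ['type1', 'type2'] — type3 is filtered out
```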
19:22 Heartsbane joined #salt
19:22 Heartsbane joined #salt
19:23 holler_ is it possible to tell salt to use a specific environment when provisioning with Vagrantfile? http://dpaste.com/2CJM7Y3
19:23 trevorj UtahDave: I can't sanitize my config as quickly, but here's the stack trace http://dpaste.com/3MQ40V4
19:24 UtahDave trevorj: oh. so how is rename_on_destroy related to that stacktrace?
19:24 trevorj UtahDave: rename_on_destroy runs rename
19:24 timoguin holler_: you should be able to assign the minion to a specific environment via the state and pillar top.sls files.
19:24 trevorj UtahDave: and it bails out with the same trace
19:24 cb joined #salt
19:24 intellix joined #salt
19:24 trevorj UtahDave: What I'm saying is rename is the culprit
19:25 ndrei joined #salt
19:26 holler_ timoguin: ok maybe I need to provide more info.. I am using vagrant in masterless minion mode and I have some settings for mysql/nginx as well as pillar data all for local dev machine.. now Im playing with vagrant-rackspace and I am able to get salt provisioner working on there too but I need to use different configs/pillar data when deploying to rackspace.. not sure how to do that!
19:26 UtahDave trevorj: ok, thanks.  What's the output of   salt-cloud --versions-report
19:29 trevorj UtahDave: http://dpaste.com/1DEQHPT
19:29 Setsuna666 joined #salt
19:30 timoguin holler_: I've only used the local salt provisioner
19:31 sschwartz_ee ekristen: Weird; I'm also getting significant load out of salt as well, which I don't normally get.  Just sitting there doing nothing.
19:31 Supermathie joined #salt
19:31 ekristen sschwartz_ee: what version?
19:31 sschwartz_ee salt 2014.1.10 (Hydrogen)
19:31 Supermathie jeeves… I've been sitting in the Ubuntu channel and wondering "why the fsck are all these salt users talking about basic Ubuntu support stuff" :D
19:31 Supermathie rehi
19:33 khaije1 left #salt
19:34 cmthornton joined #salt
19:34 sschwartz_ee ekristen: BUt I had not just done an upgrade when everything exploded, which makes it odd. (2014.1.10)  Are you able to get anything out of the test system?
19:35 UtahDave trevorj: would you mind opening an issue on that? Please include all the info you've shared here. That would really be helpful!
19:35 ekristen so it appears I had run out of diskspace and that was the issue, my test system on 2014.7 seems to be working again
19:36 UtahDave ekristen: ah, yeah.
19:37 mechanicalduck joined #salt
19:37 esogas_ joined #salt
19:38 Supermathie ekristen, yeah that's bitten me a few times. OH LOOK OUT OF INODES FOR SALT MASTER. kerblooie
19:39 kballou joined #salt
19:39 sschwartz_ee Supermathie: I was hoping that would fix it, but my master's remained hosed even with enough inodes.
19:40 thedodd joined #salt
19:40 trevorj UtahDave: Sure thing.
19:40 mechanicalduck joined #salt
19:40 UtahDave thanks!
19:41 ekristen sschwartz_ee: have you increased the worker threads and checked your diskspace?
19:41 ekristen sschwartz_ee: I had to reboot twice to get my system working
19:41 sschwartz_ee ekristen: Doubled it to 10, disk space clear -- that's how I missed the inodes the first time around.  Stoopit graphios.  Twice? Hm. I did it once, but it's in EC2, so it's a pain to restart and get everything right.
19:42 intellix_ joined #salt
19:42 pfallenop joined #salt
19:43 giantlock joined #salt
19:43 pipps joined #salt
19:50 ggoZ joined #salt
19:52 ekristen eh, UtahDave I think there are a few extra issues with docker and 2014.7
19:52 pfallenop joined #salt
19:52 andrej joined #salt
19:53 kickerdog Found a bug with salt-cloud where if one of the VM's fails to be created, start_action: state.highstate fails to execute on the VMs that did succeed.
19:55 ksalman say, is it possible for to get a list on the salt-master of what states were applied to all the salt-minions? Sometimes people apply states on the minion and it's not necessarily in the top.sls file
19:57 ekristen kickerdog: that doesn’t make sense, cause it is the minion that runs the startup_states
19:58 Gareth ksalman: you might be able to do that with a runner, looking at the job cache.
19:58 higgs001_ joined #salt
20:01 viq ksalman: or you could have states set grains
20:03 cberndt joined #salt
20:10 perfectsine joined #salt
20:11 jgelens joined #salt
20:12 Ahlee so i have an ext_pillar that initially is blank for any host. If I update/set that value outside of salt, the pillar isn't being picked up unless I run saltutil.refresh_pillar
20:13 Ahlee I was under the impression that pillars are rendered every time on the master and shipped down to the minion.  Is that incorrect?
20:17 kusams joined #salt
20:18 forrest Hmm, does anyone remember if file_roots have an issue with nested dirs? So you have - /srv/salt/blah, and then /srv/salt/blah/blah gets marked as 'cannot find.'
20:19 iggy Ahlee: I think normally, but with ext_pillar, salt can't tell something changed
20:20 ndrei joined #salt
20:20 Ahlee iggy: right, I thought ext_pillars were called every time and passively queried every time
20:21 Supermathie forrest, WFM: echo hello > /srv/salt/blah/blah/blah; sudo salt 'win7-client1*' cp.get_file_str salt://blah/blah/blah → hello
20:22 forrest Supermathie: alright thanks
20:23 iggy forrest: cp.list_master is your friend
20:23 Ahlee and now servers aren't returning at all
20:24 druonysuse joined #salt
20:24 forrest iggy: yeah I did that, and the problem is it shows up :P
20:25 kickerdog joined #salt
20:27 iggy start gist'ing code
20:27 forrest iggy: Yeah I have a few other things to test first, just wanted to confirm
20:30 jsm joined #salt
20:31 UtahDave kickerdog: would you mind opening an issue on that?  That would be really helpful
20:31 jhulten joined #salt
20:31 kickerdog Yeah
20:31 spookah joined #salt
20:34 mgw Ahlee: did you have servers stop responding after running refresh_pillar?
20:34 Ahlee mgw: Yes
20:34 mgw What version master and minion?
20:34 Ahlee mgw: well, they seem to have forgotten all their pillars
20:34 Ahlee 0.17.5
20:34 mgw that happened to me yesterday
20:35 mgw oh
20:35 mgw that' old
20:35 Ahlee That's stable ;)
20:35 mgw I was on 2017.7+git-something and updated to latest git and the problem went away
20:35 andapathisch joined #salt
20:35 murrdoc 2014.1.11 is stable
20:35 mgw but I don't know if it was coincidence or something got fixed
20:37 Ahlee and now it works again
20:38 Ahlee lol. zoookeeper, not zookeeper
20:38 Ahlee oh well, still strange that ext_pillars don't initially populate until you tell clients to refresh_pillar
20:41 jasonrm joined #salt
20:42 ekristen whats the best way to figure out why ext_pillar data isn’t making it to a minion?
20:42 druonysuse joined #salt
20:42 mgw Ahlee: a highstate automatically refreshes pillars, i believe
20:42 mgw ekristen: what ext_pillar are you using?
20:43 ekristen mgw: git, but I might have just answered by own question
20:43 Ahlee ekristen: running hte master in debug while running saltutil.refresh_pillar
20:43 mgw yeah, and looking for messages about ext_pillars not loading
20:43 mgw or failures to compile pillar
20:44 Ahlee mgw: hrm. Bit heavy handed but yeah worth trying. thanks.
20:45 Pork__ joined #salt
20:45 viq ekristen: I assume you did "salt-run fileserver.update" ?
20:48 druonysus joined #salt
20:48 druonysus joined #salt
20:50 druonysuse joined #salt
20:50 druonysuse joined #salt
20:52 pfallenop joined #salt
20:52 jalaziz joined #salt
20:53 sschwartz_ee ekirsten & whiteinge: I appear to have fixed the problem, through the rather brutal method of blowing away (after tarring up) /var/cache/salt/master.
20:53 druonysus joined #salt
20:54 nitti joined #salt
20:55 mapu joined #salt
20:57 pfallenop joined #salt
21:01 Pork__ Can anyone tell me real quick the difference between salt-cloud and salt-virt?
21:01 Pork__ Is it that salt-virt is on my infrastructure?
21:01 Pork__ And KVM?
21:01 hintss joined #salt
21:01 manfred salt-cloud is for different providers
21:01 manfred salt-virt actually manages the hypervisors
21:01 manfred and can control them
21:01 Pork__ manfred: Thanks, bro
21:02 pfallenop joined #salt
21:02 manfred salt-virt actually controls the hypervisors through libvirt iirc
21:02 glyf joined #salt
21:03 Pork__ It does. I was trying to get it to work on Ubuntu host, but I was having trouble. As I understand they test everything on RHEL
21:07 pfallenop joined #salt
21:07 mgw Pork__: what trouble are you having? I was using the virt system on Ubuntu at one point
21:07 mgw with kvm
21:08 elfixit1 joined #salt
21:09 Pork__ mgw: I seem to be having trouble getting the machines deployed
21:10 Pork__ mgw: They seed, but then the host doesnt see them
21:10 mgw what do you mean by 'seed'?
21:11 Pork__ mgw: I might be doing the img handling wrong. I am trying to spin up ubuntu-cloud.img, and I did not convert it into a qcow2 first
21:11 mgw I know what salt means
21:11 mgw by it
21:11 Pork__ mgw: that's what I mean, the salt def
21:11 mgw so it's seeding the image with salt?
21:11 mgw or with config at least
21:11 Pork__ mgw: Salt finds the hypervisor, copies the .img, and seeds the VM
21:11 ericof joined #salt
21:12 mgw do you have virsh on the hypervisor?
21:12 Pork__ mgw: on the host, I can see the vm in /srv/salt/salt-images
21:12 Pork__ mgw: yes
21:12 Pork__ mgw: I have virsh
21:12 mgw and it doesn't show anything at all?
21:12 pfallenop joined #salt
21:13 Pork__ mgw: Right. virsh list brings back nothing
21:13 mgw anything in the hypervisor' minion log?
21:14 Pork__ I will need to look back at it. I might do a fresh install on some of my spare metal and make sure I didn't fuck it up
21:15 Pork__ mgw: I think I might have done something wrong in setting up the hyp because the directions were for RHEL
21:15 helderco joined #salt
21:16 mgw there's not a lot to setting it up
21:16 Pork__ mgw: I need to check out the dependencies for Ubuntu again, because I thought it was just libvert and python-libvert
21:16 mgw as i recall
21:16 kickerdog joined #salt
21:16 Pork__ virt***
21:16 mgw yeah, i think that's all you need
21:16 ekristen UtahDave: quick question for you
21:16 mgw Pork__: check the minion logs before blowing anything away
21:17 mgw I don't remember whether you need to convert to qcow
21:17 UtahDave ekristen: sure.
21:17 Pork__ mgw: For sure. I definitely won't throw anything away
21:17 ekristen It looks like there are a number of different status messages now possible for when a docker image is pulled
21:17 Ryan_Lane sigh. state_output and state_verbose aren't respected with json outputter
21:18 Pork__ mgw: Before I had a need for config management, I was just using virtualbox for everything
21:18 Pork__ mgw: wich is less efficient than KVM, but was easy as hell to manage, even through terminal
21:19 hasues joined #salt
21:19 Pork__ mgw: If I had the sense that any competent people wanted the functionality, I would totally build it out for Salt
21:19 Pork__ mgw: But I feel like people would just... laugh at me
21:19 deepz88 joined #salt
21:20 mgw no comment ;-)
21:20 hasues What if we just laugh now?
21:20 Pork__ I'm sure the lurkers here are
21:20 hasues :)
21:20 Pork__ They're all like these MOFOs don't know jack
21:20 Pork__ I'm gonna go use Ansible
21:20 Pork__ Like that
21:21 Ryan_Lane heh
21:21 hasues I'm liking Salt from what I've seen so far.
21:21 Ryan_Lane Pork__: what salt issue are you having?
21:21 Pork__ Ryan_Lane: Just with salt-virt on Ubuntu, but I have some things that I would like to try after talking here with @mgw
21:22 Ryan_Lane ahhh, ok
21:22 Pork__ Ryan_Lane: I'm not actually going to go use Ansible
21:22 Ryan_Lane :D
21:22 Pork__ Ryan_Lane: I'd rather manage everything by hand
21:22 Ryan_Lane hahaha
21:22 mgw Pork__: It might be faster
21:22 mgw by hand
21:22 Pork__ HAhaha
21:23 Pork__ Just the fact that the clients connect to the master was enough for me to use Salt
21:23 * Ryan_Lane uses masterless
21:23 kickerdog joined #salt
21:24 Pork__ And family-tech-support has never been easier
21:24 hasues Pork__: Chef works that way as well
21:24 Ryan_Lane puppet does as well
21:24 Pork__ Chef is written and configured in the Devil's language
21:24 hasues As does cFengine
21:24 Pork__ So is Puppet
21:24 hasues Ryan_Lane: Puppet, the master pushes to its clients
21:24 cads joined #salt
21:24 hasues They don't pull I believe.
21:24 Ryan_Lane no it doesn't :)
21:24 Ryan_Lane it's pull based
21:24 viq hasues: you're mistaken
21:25 hasues Oh really?
21:25 viq only ansible is push, everything is agents pulling
21:25 viq s/everything/everything else/
21:25 hasues viq: Historically, I thought I had read that push jobs on the Puppet master were an architectural consideration
21:25 viq puppet doesn't have push jobs
21:26 hasues But I didn't write it, so it wouldn't be the first time I've gotten bad information.
21:26 Pork__ Chef and Puppet are written/configd in the Devil's language, and CFE is almost as old as I am
21:26 viq You can kinda get something like that with mcollective
21:26 hasues viq: Oh sure, right with mcollective, but without it?
21:26 viq hasues: you have an agent that periodically checks in with the server. Same with cfengine, chef, and probably some others
21:27 to_json joined #salt
21:28 viq now salt is even funnier - it has an agent, but you push to it ;)
21:29 hasues viq: Good to know.
21:29 * viq did some research on CM systems
21:30 hasues viq: I did as well, but it seems like what I had read in the past was stating that puppetmasters were there for pushing.  Looks like what I'm reading matches what you are saying, however.
21:30 hasues https://projects.puppetlabs.com/issues/2045
21:32 hasues viq: And regarding Salt, does it really push?  I mean, it publishes to a message bus, but does the client not perform a pull based on that?
21:32 viq And chef is now going to have push jobs. And ansible has support for setting up basically a cron job and server holding configuration so it will periodically check whether it has something to do. And you have salt-ssh, and scheduled jobs... So in fact everyone does a bit of everything ;)
21:33 viq hasues: well, depends how much into detail you want to get. Also it's push in the sense of "if the client's not there, the job is lost", there's no support (currently?) for "buffering" commands for offline hosts
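viq's point — publish-to-a-bus feels like push because every connected agent sees the job at once, while a disconnected agent simply misses it — can be illustrated with a toy sketch (this is not Salt code, just the pattern):

```python
# Toy pub/sub bus: the "master" publishes once; every connected
# "minion" receives the job immediately, but a minion that connects
# after the publish never sees it -- the job is lost, no buffering.
import queue

class Bus:
    def __init__(self):
        self.subscribers = []

    def connect(self):
        """A minion subscribing to the bus gets its own job queue."""
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, job):
        """Fan the job out to everyone currently connected."""
        for q in self.subscribers:
            q.put(job)

bus = Bus()
minion_a = bus.connect()
minion_b = bus.connect()
bus.publish("state.highstate")

print(minion_a.get_nowait())   # both connected minions got the job
print(minion_b.get_nowait())

late_minion = bus.connect()    # connects after the publish...
print(late_minion.empty())     # ...and never sees it
```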
21:33 smcquay joined #salt
21:34 babilen I.want.2014.7 -- I can't wait anymore.
21:34 viq Also it's real-time enough to be considered "push", but yeah, technically it's that message bus
21:34 babilen I hated writing a new setup in 2014.1 today.
21:34 viq babilen: I hear early next week, I'm waiting too
21:34 babilen I know .. still, it was so painful writing all the things that I will have to change soon.
21:36 * viq poofs
21:36 hasues It looks like salt-cloud isn't going to be of any use for me after all, since I can't get it to configure vSphere hosts.  Looks to be a limitation in pysphere perhaps, but I did not see the component that calls for interface customization after cloning
21:36 dude051 joined #salt
21:37 babilen But I have a question regarding salt-ssh. Does it necessitate that I'm able to ssh as root, be able to sudo without a password, or can I enter the password somewhere and salt will provide it when necessary (ssh-agent style)?
21:38 babilen I'd really like to use salt for the "first five steps" of initialisation of new boxes/masters and am tempted by the idea of using salt-ssh for that.
21:38 UtahDave babilen: you can have salt-ssh prompt you for a password, you can set your password in your roster file, you can use ssh keys... lots of options
21:38 hasues Maybe there is some sort of way one configures a template in vsphere so that the interface just comes up and "works".
21:39 UtahDave hasues: I think you may have to make sure vsphere has a vm come up with a configured interface.
21:39 babilen UtahDave: Ah, no. I was referring to salt obtaining superuser privileges on the target boxes not SSH itself
21:39 hasues UtahDave: Therein lies the problem.
21:39 babilen I guess that salt-ssh doesn't really care how I setup my SSH infrastructure as long as it works™
21:39 UtahDave babilen: ah, ok. It seems like I heard it can do sudo, but I'm not 100% sure. There's been a ton of updates to salt-ssh for 2014.7
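Pulling UtahDave's options together, a minimal salt-ssh roster might look roughly like this — all hostnames, users, and paths below are made-up placeholders; `sudo: True` is the escalation babilen is after:

```yaml
# /etc/salt/roster -- a sketch; every value here is an example placeholder
web1:
  host: 203.0.113.10
  user: deploy                   # non-root login
  sudo: True                     # escalate via sudo rather than ssh-as-root
  priv: /home/me/.ssh/id_rsa     # ssh key; alternatively set `passwd:` in
                                 # the roster, or let salt-ssh prompt for it
```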
21:40 UtahDave babilen: that's what we're attempting to do.  :)
21:41 hasues UtahDave: Cloning a VM in vSphere results in a VM with the same configuration as the template.  In vSphere, one uses a "customization specification" to have the VM change its network and hostname configuration upon boot once cloned.
21:41 babilen UtahDave: I can live with NOPASSWD, but root is disabled almost everywhere.
21:41 hasues UtahDave: I don't see where pysphere supports calling the customization specification.
21:42 UtahDave hasues: Yeah, I'm not sure on that. It's been quite a while since I've used the vsphere stuff. I wouldn't be surprised if there's some missing functionality in there.
21:42 UtahDave babilen: true
21:42 UtahDave ok, everyone. Time for me to head home. Taking my girls to pick out pumpkins at the pumpkin patch.
21:43 babilen ETOOAMERICAN
21:43 hasues UtahDave: I would think a configuration directive would be there to suggest the specification.  In Chef, for instance, if one uses knife-vsphere, you pass an argument to state which customization specification to use so that the interfaces are configured to be usable.
21:43 UtahDave babilen: ;)
21:43 hasues UtahDave: Okay.
21:43 babilen UtahDave: I have no idea what that means, but have a good evening and thanks for all :D
21:43 hasues UtahDave: enjoy pumpkin picking!
21:43 UtahDave thanks!
21:44 intellix joined #salt
21:44 intellix_ joined #salt
21:54 babilen Whenever I am using pkgrepo.managed with consolidate=True I get very funky (i.e. wrong) results on Debian boxes. It constantly complains that repos changed just because the comps are in a different order in the list, but that would be okay if it didn't also leave commented lines in the sources.list and fail to add some repos.
21:54 babilen I wonder if anybody is using that functionality without problems and what the secret tricks are (keep every repo in its own file and don't consolidate?)
21:56 murrdoc 'keep every repo in its own file and don't consolidate' i do this for all non upstream
21:56 murrdoc all ubuntu repos go in /etc/apt/sources.list
21:56 murrdoc security goes in security.list
21:56 murrdoc but i use a mirror of upstream so my repos dont change much
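murrdoc's one-repo-per-file layout can be written with pkgrepo.managed and no consolidation at all — the repo line and filename below are illustrative examples for a Debian box, not anyone's actual config:

```yaml
# A sketch of the one-repo-per-file approach (no consolidate=True).
# Adjust the suite/components and mirror URL to your own environment.
wheezy-security:
  pkgrepo.managed:
    - name: deb http://security.debian.org/ wheezy/updates main
    - file: /etc/apt/sources.list.d/security.list
```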
21:58 perfectsine joined #salt
21:58 babilen I liked the idea of using consolidate, but I seem to have to spend some time debugging and fixing the code as it really doesn't work how it should.
21:59 murrdoc cron, apt, logrotate, nginx, sudoers
21:59 roolo joined #salt
22:00 murrdoc all of those get the conf.d treatment
22:00 murrdoc not that u asked
22:01 babilen I don't mind using .d. In fact as a Debian maintainer I am very much convinced that it is the best thing since sliced bread, but I prefer to not have duplicate entries in my sources.list and happily have it in one file *if it is managed by salt*.
22:03 amanuel joined #salt
22:04 amanuel joined #salt
22:04 mgw is it just me... or has the default timeout been changed to 1s or 2s (vs 10 before, I think)?
22:05 Pork__ Later, bros. Thanks for the help
22:07 yetAnotherZero joined #salt
22:07 pfallenop joined #salt
22:07 jsm joined #salt
22:10 jgelens joined #salt
22:10 kickerdog joined #salt
22:14 ggoZ joined #salt
22:16 ajolo joined #salt
22:17 pdayton joined #salt
22:20 ksalman I have two separate states that are setting a "roles:" grain of type list. Sometimes those two states get applied to one salt-minion and I get a conflicting IDs error. Is there no way to append to the "roles:" grain list from multiple states?
22:20 jgelens joined #salt
22:21 Pixionus Is there a correct way to restart the minion after resetting something like its name or keys?  'service.restart salt-minion' was my initial thought on the matter.
22:22 bhosmer_ joined #salt
22:23 nitti_ joined #salt
22:23 Pixionus but then I read someone saying they have problems with that.
22:23 Pixionus so I guess I am more asking if there is an incorrect way
22:23 dude051 joined #salt
22:25 iggy ksalman: in 2014.7, yes
22:25 ksalman aww
22:25 ksalman iggy: thanks
22:27 iggy ksalman: I know the feeling, I had it all over a bunch of states before I realized it was 2014.7 only :/
22:27 DaveQB joined #salt
22:28 ksalman =(
22:28 ksalman I guess I'll use non-list grains and later convert it to a list
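What ksalman is after — two states appending to the same list grain without a duplicate-ID conflict — might look like this with grains.list_present (which, per iggy above, only works reliably on 2014.7; the state IDs and role names are arbitrary examples):

```yaml
# Two states, distinct IDs, both appending to the "roles" list grain.
webserver-role:
  grains.list_present:
    - name: roles
    - value: webserver

database-role:
  grains.list_present:
    - name: roles
    - value: database
```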
22:29 babilen I would have loved to work with different pillar merging strategies today to define, for example, independent pillars for users. (cf. http://docs.saltstack.com/en/latest/ref/configuration/master.html#pillar-source-merging-strategy)
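For reference, the option babilen links is a single knob in the master config; a sketch per the linked docs (available from 2014.7):

```yaml
# /etc/salt/master -- how multiple pillar sources defining the same key
# are combined; 'smart' is the documented default, 'recurse' deep-merges.
pillar_source_merging_strategy: smart
```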
22:31 holler_ hello, Im having trouble cloning a git repo
22:31 holler_ http://dpaste.com/07PTPEZ
22:31 holler_ which form is supposed to be used?
22:32 holler_ I tried the latter and it says no repo found
22:32 holler_ also how do I specify the user for ssh? e.g. my own git user has access to the repo, so can I use my ssh key for my laptop automatically?
22:35 babilen holler_: Both forms work. If you don't supply "- name: " the state id will be used.
22:36 holler_ babilen: thanks, which form for the git repo url though? is it ssh+git@github.com? maybe that is more of a git/ssh question
22:36 holler_ or - name: git://github.com ?
22:36 babilen holler_: And http://git-scm.com/book/en/Git-on-the-Server-The-Protocols has an overview of different protocols
22:37 babilen It really depends on the repo in question and cannot be answered generally.
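The two forms babilen describes, sketched side by side — the repo URL, target directory, and key path are made-up examples, and `identity` is the git.latest argument for pointing at a specific ssh key (which may also answer holler_'s key question):

```yaml
# Form 1: the state ID doubles as the repo URL
git@github.com:example/app.git:
  git.latest:
    - target: /srv/app

# Form 2: explicit name, arbitrary state ID
app-checkout:
  git.latest:
    - name: git@github.com:example/app.git
    - target: /srv/app
    - identity: /home/vagrant/.ssh/id_rsa   # hypothetical key path
```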
22:38 ipmb joined #salt
22:39 iggy I'm not sure ~ is going to work there
22:44 ekristen joined #salt
22:47 holler_ how can I use my local ssh key for authentication for the git repo?
22:47 holler_ I set ssh_agent_forward on the Vagrantfile
22:53 elfixit1 joined #salt
22:53 mechanicalduck_ joined #salt
22:56 bhosmer joined #salt
22:59 mechanicalduck joined #salt
23:00 masterkorp Did salt masterless setup change ?
23:05 Ryan_Lane masterkorp: why do you ask?
23:05 kirscht joined #salt
23:05 cads joined #salt
23:06 masterkorp my salt masterless setup cannot read pillar data
23:06 Ryan_Lane are you on 2014.1.11?
23:06 masterkorp yeahp
23:06 Ryan_Lane it has a regression
23:06 masterkorp link ?
23:07 Ryan_Lane http://docs.saltstack.com/en/latest/topics/releases/2014.1.12.html
23:07 masterkorp i am subscribed to the mailing lists
23:08 Ryan_Lane basepi: http://docs.saltstack.com/en/latest/topics/releases/2014.1.7.html <-- that seems like it's not correct
23:08 Ryan_Lane err. well, http://docs.saltstack.com/en/latest/topics/releases/ has it listed at the top, out of order
23:08 masterkorp why aren't these kinds of regressions announced ?
23:09 Ryan_Lane masterkorp: I'm not sure why 2014.1.12 wasn't announced
23:09 Ryan_Lane I think they're still working on packages for it?
23:09 murrdoc thats the word
23:09 masterkorp i am using the curl script that gets the latest packages
23:09 murrdoc salt-bootstrap ?
23:09 masterkorp yeah
23:10 masterkorp so now the solution is to downgrade ?
23:10 mechanicalduck_ joined #salt
23:10 murrdoc do u use that to install ?
23:10 Ryan_Lane currently, yes
23:10 jcockhren heh. I didn't know there was a 2014.1.12 either. I was was just gonna stick with 2014.1.10
23:10 murrdoc salt-bootstrap -- git 2014.1.12
23:11 murrdoc is close to what the right way to do that is
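murrdoc's suggestion spelled out — the `git <tag>` positional argument is how salt-bootstrap pins a release; treat the bootstrap URL and exact tag below as illustrative rather than exact:

```
# Install a pinned Salt release via salt-bootstrap instead of "latest"
curl -L https://bootstrap.saltstack.com | sudo sh -s -- git v2014.1.12
```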
23:11 Ryan_Lane what's your current installation method?
23:11 murrdoc mine ?
23:12 murrdoc i am using rc3
23:12 Ryan_Lane no. masterkorp
23:12 murrdoc ah
23:12 Ryan_Lane if he's using ubuntu packages, for instance, he should downgrade
23:12 murrdoc yuup
23:12 Ryan_Lane I guess I shouldn't assume he
23:12 spookah joined #salt
23:12 mechanicalduck joined #salt
23:13 murrdoc he's using salt-bootstrap, so i am recommending to use the 2014.1.12 tag
23:13 murrdoc git tag*
23:13 Ryan_Lane ah, yes, likely the best option
23:14 Ryan_Lane http://docs.saltstack.com/en/latest/topics/releases/2014.7.0.html#fileserver-backends-in-salt-call <-- \o/ \o/
23:14 Ryan_Lane I didn't realize that was possible now
23:14 jsm joined #salt
23:14 nitti joined #salt
23:14 murrdoc woah, can u specify a credentials file ?
23:15 Ryan_Lane I'd imagine it works just like on the master
23:15 Ryan_Lane that's really awesome, though
23:15 murrdoc yes
23:15 murrdoc its a win for masterless for sure
23:15 jcockhren wow!
23:15 jcockhren that is a win for sure!
23:18 chilli_peper joined #salt
23:19 masterkorp Ryan_Lane: thank you !
23:19 Ryan_Lane yw
23:21 basepi .12 had a bug we missed. .13 is packaging now.
23:22 canci joined #salt
23:22 basepi Ryan_Lane: I'll look into why the releases doc is listing out of order tomorrow.
23:23 yomilk joined #salt
23:30 otter768 joined #salt
23:32 Emantor joined #salt
23:37 Pickled joined #salt
23:40 higgs001 joined #salt
23:46 masterkorp basepi: will it be on the stable repos soon ?
23:47 basepi masterkorp: yes. Hopefully first thing next week. We'll vote it through epel.
23:48 masterkorp yay!
23:48 masterkorp and debian users ?
23:50 jgelens joined #salt
23:51 jsm joined #salt
23:52 Mso150 joined #salt
23:54 schimmy joined #salt
