
IRC log for #salt, 2015-02-16


All times shown according to UTC.

Time Nick Message
00:02 hal58th joined #salt
00:05 subsignal joined #salt
00:07 Nazca joined #salt
00:10 MugginsM joined #salt
00:12 timoguin joined #salt
00:18 TTimo joined #salt
00:21 NikolaiToryzin joined #salt
00:22 NikolaiToryzin joined #salt
00:25 waddles joined #salt
00:44 jdowning joined #salt
00:52 waddles joined #salt
00:55 waddles joined #salt
00:56 MugginsM joined #salt
01:04 nich0s joined #salt
01:04 nich0s left #salt
01:04 nich0s joined #salt
01:10 timoguin joined #salt
01:11 iwishiwerearobot joined #salt
01:15 dooshtuRabbit joined #salt
01:19 subsignal joined #salt
01:23 markm joined #salt
01:30 forrest joined #salt
01:33 nighter joined #salt
01:37 badon_ joined #salt
01:41 canci joined #salt
01:56 ksj joined #salt
02:10 timoguin joined #salt
02:16 Singularo joined #salt
02:16 dthorman joined #salt
02:22 MugginsM joined #salt
02:31 TTimo joined #salt
02:38 jhauser_ joined #salt
02:38 MatthewsFace joined #salt
02:41 clintber_ joined #salt
02:45 jdowning joined #salt
02:47 jdowning_ joined #salt
02:52 jdowning joined #salt
03:00 iwishiwerearobot joined #salt
03:10 timoguin joined #salt
03:20 bhosmer_ joined #salt
03:23 evle joined #salt
03:23 aparsons joined #salt
03:25 CaptainMagnus joined #salt
03:25 dualicorn joined #salt
03:29 cberndt joined #salt
03:31 favadi joined #salt
03:33 kermit joined #salt
03:40 canci joined #salt
03:40 hal58th joined #salt
03:41 active8 joined #salt
03:46 clintberry joined #salt
03:58 badon joined #salt
04:10 timoguin joined #salt
04:12 badon_ joined #salt
04:12 MatthewsFace joined #salt
04:16 nich0s joined #salt
04:19 jalaziz joined #salt
04:23 MatthewsFace joined #salt
04:26 rm_jorge joined #salt
04:29 MatthewsFace joined #salt
04:33 TyrfingMjolnir joined #salt
04:34 MatthewsFace joined #salt
04:48 iwishiwerearobot joined #salt
04:52 jdowning joined #salt
04:53 waddles I have installed salt RC in Darwin using the manual install method but it seems to ignore --salt_transport=raet so the resultant salt-master still depends on zmq
04:54 tkharju joined #salt
04:59 yomilk joined #salt
05:04 MatthewsFace joined #salt
05:07 aparsons joined #salt
05:10 timoguin joined #salt
05:16 Ilja joined #salt
05:17 desposo joined #salt
05:22 subsignal joined #salt
05:38 APLU joined #salt
05:38 waddles hmm, actually using --salt-master=raet instead of --salt_master=raet works but now gets an error
05:38 waddles File "/usr/local/lib/python2.7/site-packages/salt/daemons/salting.py", line 50, in SaltKeep
05:38 waddles Auto = raeting.autoModes.never #auto accept
05:38 waddles AttributeError: 'module' object has no attribute 'autoModes'
05:41 iggy tried without raet?
05:45 waddles well, no, that's what I'm trying to test
05:45 clintberry joined #salt
05:47 jdesilet joined #salt
05:50 Furao joined #salt
05:51 favadi joined #salt
05:51 ramteid joined #salt
05:52 Furao joined #salt
05:56 favadi left #salt
06:00 * waddles smacks head
06:00 waddles i meant --salt-transport=raet
06:10 TTimo joined #salt
06:10 timoguin joined #salt
06:12 iggy I'm not really sure if raet is considered "ready to go" just yet
06:12 j_t joined #salt
06:13 ajw0100 joined #salt
06:14 j_t "If you are confident that you are connecting to a valid Salt Master, then remove the master public key and restart the Salt Minion."
06:15 j_t I have 50 or so minions with a bad salt master key
06:15 j_t but puppet is running on them
06:15 j_t Is there any way to temporarily have salt-minion delete the master key if it finds out it doesn't match?
06:16 aparsons joined #salt
06:28 calvinh joined #salt
06:29 otter768 joined #salt
06:36 jalaziz joined #salt
06:37 iwishiwerearobot joined #salt
06:49 bash124512 joined #salt
06:53 jdowning joined #salt
06:56 favadi joined #salt
06:58 colttt joined #salt
07:09 ekkelett joined #salt
07:10 timoguin joined #salt
07:11 calvinh joined #salt
07:11 ekkelett so, hi! A question of curiosity: I'm experiencing repo corruption issues with dulwich as a git provider -- anyone here experienced similarly or have any clues as to the cause?
07:11 j_t left #salt
07:13 calvinh_ joined #salt
07:15 mikkn joined #salt
07:18 favadi joined #salt
07:18 krelo joined #salt
07:19 jonasbjork joined #salt
07:22 slafs joined #salt
07:23 stoogenmeyer_ joined #salt
07:23 slafs left #salt
07:27 bash124512 joined #salt
07:31 hebz0rl joined #salt
07:32 toanju joined #salt
07:34 yuhl_work_ joined #salt
07:35 doobi-sham-95717 joined #salt
07:36 dRiN joined #salt
07:43 jalaziz joined #salt
07:44 trikke joined #salt
07:52 JlRd joined #salt
07:58 KermitTheFragger joined #salt
07:59 kawa2014 joined #salt
08:03 eseyman joined #salt
08:07 Auroch joined #salt
08:10 crazysim joined #salt
08:10 flyboy joined #salt
08:10 timoguin joined #salt
08:11 TTimo joined #salt
08:12 FineTralfazz joined #salt
08:13 JlRd joined #salt
08:18 malinoff joined #salt
08:21 sifusam joined #salt
08:23 twiedenbein joined #salt
08:24 skullone joined #salt
08:25 shoma joined #salt
08:25 emostar joined #salt
08:26 iwishiwerearobot joined #salt
08:28 wincyj joined #salt
08:29 intellix joined #salt
08:29 bash124512 joined #salt
08:30 otter768 joined #salt
08:32 jtang joined #salt
08:34 alainv joined #salt
08:37 egil joined #salt
08:39 felskrone joined #salt
08:43 crazysim joined #salt
08:46 tomspur joined #salt
08:46 malinoff_ joined #salt
08:47 alanpearce joined #salt
08:49 malinoff joined #salt
08:54 bash123124124 joined #salt
08:54 jdowning joined #salt
08:55 twiedenbein joined #salt
08:56 favadi1 joined #salt
08:59 karimb joined #salt
09:01 Grokzen joined #salt
09:02 N-Mi joined #salt
09:06 bash1245_ joined #salt
09:08 bash1234_ joined #salt
09:13 TTimo joined #salt
09:13 felskrone joined #salt
09:15 dkrae joined #salt
09:18 favadi joined #salt
09:23 bhosmer_ joined #salt
09:29 mdupont joined #salt
09:30 hojgaard joined #salt
09:31 jrluis joined #salt
09:32 hojgaard Hello folks. I am setting up a postfix/amavis salt state. And i need to make sure amavis is in the clamav group and clamav is in the amavis group.. How can i do this using salt states?
09:32 Xevian joined #salt
09:34 markm joined #salt
09:36 Furao hojgaard: create a third .sls that is included in both amavis and postfix
09:37 Furao and you can extend and append the additional group
09:37 Furao hojgaard: https://doc.robotinfra.com/amavis/doc/index.html https://doc.robotinfra.com/postfix/doc/index.html we went through the same issues :)
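A minimal sketch of the include/extend approach Furao describes: a shared .sls pulled in by both the amavis and clamav states. The file name, state IDs and the clamav user name are assumptions, and group.present's addusers argument is assumed to be available in the Salt version in use.

    # mailscan/groups.sls -- include this from both the amavis and clamav states
    amavis-in-clamav-group:
      group.present:
        - name: clamav
        - addusers:
          - amavis            # put the amavis user into the clamav group

    clamav-in-amavis-group:
      group.present:
        - name: amavis
        - addusers:
          - clamav            # the clamav user name varies by distro (clamav/clamd)

    # each consuming state would then start with:
    #   include:
    #     - mailscan.groups
    # and can extend these IDs to append requisites on its own pkg states.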
09:40 i3lifee joined #salt
09:40 paulm- joined #salt
09:43 favadi joined #salt
09:44 holms joined #salt
09:44 chiui joined #salt
09:45 holms is it possible to set something like a cache expiration time for "apt-get update"?
09:45 CeBe joined #salt
09:45 Furao holms: I wish the same thing
09:45 N-Mi joined #salt
09:45 holms seems like basics =/?
09:45 Furao i’m about to write a proxy module to do this :|
09:45 holms :DD
09:45 jtang joined #salt
09:47 krelo joined #salt
09:47 hax404 hi, i want to control hosts that are not up 24/7. is it possible that the minions get their state when they come up?
09:49 i3lifee_ joined #salt
09:50 dunz0r Can I use an ipcidr match in a compound match somehow?
09:52 holms http://stackoverflow.com/questions/28538603/saltstack-how-to-launch-apt-get-upgrade-with-cache-expiration-time
09:52 holms If it's easy for anyone in here
09:53 holms I'd use ansible as it just works there, but i need to have the provisioner inside the vm.. so I thought saltstack is the best way to go..
09:53 dunz0r holms: If you can pass the arguments to apt-cache on the commandline a cmd.run might work
09:53 dunz0r s/cache/update/
09:53 holms hmz
09:54 dunz0r Or make a cron-job out of it :)
09:55 holms ok will try
09:55 holms thanks for hint
09:57 * dunz0r is now going to fork salt so he can make a pull-request documenting how to use the ipcidr-match in yaml
09:57 ekkelett I take it gitfs corruption issues aren't a common thing then? :D
09:59 holms dunz0r: even more stupid question, how to do 'apt-get update' ))
09:59 holms i mean i've found every other action except this one
10:00 dunz0r holms: salt 'somehost' cmd.run 'apt-get update'
10:00 perfinion salt '*' pkg.upgrade
10:00 dunz0r But there might be some special state which can do it automagically
10:00 fe92 joined #salt
10:00 dunz0r Like perfinion just said :)
10:00 holms it's upgrade
10:00 holms not update =/
10:00 perfinion and pkg works for all distros, it figures out what the right thing is (apt-get, yum, emerge etc)
10:01 perfinion holms: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.pkg.html
10:01 holms pkg.upgrade will update pkg manager cache?
10:02 jespada joined #salt
10:02 holms sorry maybe i'm blind but i'm unable to find even pkg.upgrade
10:03 perfinion for debian?
10:03 holms http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html
10:03 holms or or here.. http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.aptpkg.html
10:03 holms yeap
10:04 perfinion http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.aptpkg.html#salt.modules.aptpkg.upgrade
10:04 perfinion salt '*' pkg.upgrade dist_upgrade=False
10:05 perfinion that looks like the one you want
10:06 dunz0r Wohoo, figthing the good fight \o/
10:06 * dunz0r just created his first pull-request ever
10:07 joehh dunz0r: congrats!
10:07 holms hmz
10:07 holms perfinion: trying that
10:07 joehh hopefully the first of many
10:07 dunz0r It's just a few lines of documentation though :)
10:07 i3lifee joined #salt
10:08 perfinion dunz0r: documentation is super important and always a welcome patch :)
10:08 perfinion cuz most people dont like doing it
10:08 joehh I'd argue that better docs is more important than many code pull requests
10:08 perfinion yeah exactly
10:08 dunz0r perfinion: Yeah. I hope to spare someone else the hassle that I've been through :)
10:09 joehh also if it is an issue you have picked up, likely that other have run into the same prob
10:09 dunz0r I suspect I can use the ipcidr module in a compound match as well... but I'm not sure how. I need to figure it out and document that as well
10:09 holms perfinion: maybe you know what "refresh" param means?
10:09 dunz0r joehh: Yep, I think some of them are on this channel even :)
10:09 favadi joined #salt
10:10 joehh :)
10:10 perfinion holms: i assume its the thing that updates your list of packages (emerge --sync on gentoo, not too sure about debian)
10:10 holms whoray
10:10 holms that seems to be apt-get update
10:10 holms if a cache expiration option were in there, it would be perfect
10:10 yomilk joined #salt
10:10 perfinion how do you do a normal full update? apt-get update; apt-get upgrade?
10:11 holms when you do package upgrade
10:11 holms then you do apt-get update and then apt-get upgrade
10:11 holms when you just want to update package manager cache (like emerge --sync) it's apt-get update
10:11 holms s/cache/index
10:11 perfinion in the gentoo module, salt * pkg.upgrade will sync first then emerge world. i suspect the debian one will do the right thing too
10:12 holms upgrading world doesn't sound good for me lol
10:12 holms unless for the first time,
10:12 holms version freezing is preferred on prod right?
10:12 perfinion holms: ah then pkg.refresh_db is what you want for apt-get update
10:12 holms woo
10:13 perfinion holms: well gentoo is a rolling release distro so there aren't really freezes
10:13 bluenemo joined #salt
10:13 * holms learning to read docs
10:13 perfinion holms: but it looks like you dont need to do pkg.refresh_db then pkg.upgrade; just do pkg.upgrade and it'll refresh first
10:13 holms perfinion: i prefer to freeze packages like: mongodb, mysql or redis..
10:14 holms ok trying that
10:14 perfinion oh you just pin certain slots when you install then, emerge python:3.4 will keep it in the 3.4.x range
10:15 perfinion but yeah, to each their own
10:15 iwishiwerearobot joined #salt
10:18 holms perfinion: and there we have a cache expiration option, so in case of reprovisioning you don't need to wait for it to update (it takes up to 1min if your mirrors are not set..)
10:18 holms so it would do this once per like 24 hours if you prefer..
10:18 holms not sure how to achieve this in here..
10:18 perfinion ah cool
10:19 holms i wonder what distro devs of saltstack prefer
10:19 perfinion that i have no idea, check the docs about expiration stuff
10:19 holms deb and rhel are all over the place.. i mean whichever is developed as top priority for both
10:19 wnkz joined #salt
10:19 perfinion i would think most big companies use rhel cuz of support
10:21 holms been working at those "big" companies
10:21 holms dell is using ubuntu
10:21 holms barclays using rhel
10:21 holms although now it's 40% of ubuntu for the past 2 years
10:21 holms it's equalizing little by little
10:23 jtang most 'big' companies use EL for hardware support and compliance
10:23 jtang traditionally hardware support for 'enterprise' equipment has always had EL support first before other distros
10:23 jtang ubuntu is catching up on that front, but thats more to do with how ubuntu synchronises their kernels to the mainline and how they backport things
10:24 jtang I guess the LTS support from ubuntu has made things better as well
10:30 I3olle joined #salt
10:31 otter768 joined #salt
10:32 CeBe1 joined #salt
10:33 roolo joined #salt
10:33 favadi joined #salt
10:36 michelangelo joined #salt
10:37 trikke joined #salt
10:37 holms anyone is using saltstack with vagrant :)?
10:37 holms jtang: yes you're right, and I'm actually speaking from my enterprise experience as i've been working there for 3 years
10:37 holms in dell inc. and barclays
10:38 holms one is a bank, the other one is the biggest hardware company in the world
10:38 holms both using ubuntu at almost 40%
10:38 holms 2 years ago it was 20% :)
10:39 holms currently most backends, if not critical, are hosted on ubuntu
10:39 holms and currently barclays implementing paas, and dell is using openstack on ubuntu base (developing it together with mirantis)
10:39 holms barclays goes for docker
10:40 holms usually ubuntu containers :P
10:40 jtang heh yea
10:40 holms i'm not relios in here.. just stats.. i'm ok with rhel anyway
10:40 jtang well ubuntu is nice for application developers
10:40 holms religious*
10:40 jtang EL is kinda nice for infrastructure things that need to last a long time
10:40 jtang at least thats my experience
10:40 holms agreed
10:40 jtang ive ran clusters for 5-6yrs on EL and I've been quite happy with it
10:41 jtang but less happy with EL for app development
10:41 jtang or desktops
10:41 jtang each have their own strengths
10:41 holms i had too many os'es in my life, with various clusters))
10:41 holms most happiest i've been about is rhel and deb. before them fbsd
10:42 holms worst is sun and hp/ux
10:42 holms vagrant output of saltstack is scrambled =(
10:42 holms looks ugly
10:43 jtang i miss the bsd's
10:43 holms digital ocean just added support for bsd
10:43 jtang we never used them in work cause of the lack of 10gig-e or infiniband drivers
10:43 holms actually enjoying my personal vm's in there with this os :D
10:45 phx jtang, WITH_OFED= to src.conf and it's there
10:47 holms State 'pkg.refresh_db' found in SLS 'common.packages' is unavailable
10:47 holms hmz
10:47 holms oops it's aptpkg
10:50 evle joined #salt
10:52 favadi1 joined #salt
10:52 babilen holms: That isn't a state
10:53 holms not sure what i don't understand
10:53 holms pkg.installed with provided list works
10:53 holms how to call pkg.refresh_db without any list
10:53 holms my-action-name:
10:53 babilen http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.aptpkg.html#salt.modules.aptpkg.refresh_db is an execution module not a state
10:53 holms and how can i call it :P
10:53 babilen You don't, what are you trying to achieve?
10:53 holms apt-get update
10:54 holms all i need
10:54 Furao module.run: -name: apt.refresh_db
10:54 holms without any package upgrades or other crap
10:54 babilen (you run it interactively from the master via "salt '*' pkg.refresh_db" on whatever minion you want to target)
10:54 holms ok, how can I "name" an action
10:54 holms so I could call this later?
10:55 babilen holms: Are you in fact looking for "refresh" with pkg.installed ? What are you *really* trying to achieve?
10:55 holms jez..
10:55 jdowning joined #salt
10:55 holms you familiar with debian/ubuntu?
10:55 holms "apt-get update" <-- all i need
10:55 Furao oh no sorry holms, it’s : module.run: -name: pkg.refresh_db
10:55 Furao or module.wait and some watch if you need to
10:55 holms Furao: I can call this right from sls file?
10:55 Furao yes
10:55 holms and can this have a 'name'? or it's called 'id' in here i think
10:56 babilen holms: Yes, I understand that you are at a point where you think that executing "apt-get update" is what you want, but I also believe that there is something you want to actually achieve and that there might be a better way to do that
10:56 Furao https://github.com/bclermont/states/blob/master/states/apt/init.sls#L29
10:56 holms babilen: i just want to update the apt-get cache, because it's empt
10:56 holms empty*
10:56 Furao it’s commented in that example but since then we have a better apt formula
10:56 holms without even installing anything
10:56 holms Furao: thanks
10:57 Furao oh yes I remember there was a bug in aptpkg module 3 years ago
10:57 Furao that’s why I run apt-get update
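A sketch of the module.run/module.wait form Furao is pointing at, with hypothetical state IDs:

    refresh-apt-cache:
      module.run:
        - name: pkg.refresh_db

    # or, to refresh only when something else changes, pair module.wait with a watch:
    #   refresh-apt-cache:
    #     module.wait:
    #       - name: pkg.refresh_db
    #       - watch:
    #         - file: my-sources-list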
10:57 holms ok understood
10:58 babilen holms: When is it empty? I am using pkg.installed without problems with current versions of saltstack and various Debian and Ubuntu releases.
10:58 babilen But meh ...
10:58 babilen Why solve the actual problem
10:59 holms why i need pkg.installed if just want to update pkgmanager package index?
10:59 holms or i'm missing something?
10:59 holms I also would like, NOT to update cache by setting cache expiration date
10:59 holms because updating it everytime on dev machine is a pain
10:59 babilen No, you don't, but "updating the pkgmanager package index" is meaningless as you typically want to do that for a specific reason.
10:59 holms unless i'll write some formula for choosing right mirror for the first time
11:00 babilen (e.g. "ensure latest version of pkg FOO is installed")
11:00 holms which i'm not confident enough about, as in ansible this would take like 3min to do, in here.. everything is cryptic (sorry, only been working with saltstack for 2 weeks, it feels overcomplicated)
11:00 holms babilen: point is - i don't :d
11:00 jtang phx, heh, is this in freebsd?
11:00 phx jtang, yes
11:01 jtang its been years since i looked at it
11:01 jtang :P
11:01 jtang this was maybe 7yrs ago when we first got IB kit
11:01 phx it's been there, it's just not advertised very much. also, the ofed stack is continuously being updated
11:01 jtang ofed didnt exist then
11:01 phx 7 years i'm not sure. about 3 years ago, it was there :)
11:01 jtang it was just voltaire/mellanox
11:01 sdenizhan joined #salt
11:01 jtang yea OFED is a good thing
11:02 babilen holms: It is easy, you can, without problem, use "cmd.run" with "apt-get update" or use, the slightly more correct, module.wait Furao suggested earlier. All I am saying is that I have the feeling that you are approaching the problem in a, well, too procedural fashion and that there might be a better way ..
11:02 phx yup. and it's full of 10gE drivers now
11:02 babilen holms: Which is why I ask: Why do you care if the package cache has been updated?
11:02 holms babilen: cmd.run probably a good idea after all
11:02 jtang the other deal breaker was kernel modules for GPFS/Lustre
11:02 jtang i guess that was another reason why EL just worked better for us
11:02 phx jtang, i cannot tell a thing about those
11:02 rattgrain joined #salt
11:03 jtang they  are distributed filesystems
11:03 * phx is playing with AFS in his spare $worktime
11:03 babilen holms: To elaborate on that: You could, for example, use pkgrepo states (cf. http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkgrepo.html#salt.states.pkgrepo.managed ) to manage your repositories and set refresh_db there
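A minimal sketch of that pkgrepo.managed approach; the mirror URL, release and file path are placeholders:

    local-ubuntu-mirror:
      pkgrepo.managed:
        - humanname: Ubuntu trusty (local mirror)
        - name: deb http://de.archive.ubuntu.com/ubuntu trusty main restricted universe
        - file: /etc/apt/sources.list.d/local-mirror.list
        - refresh_db: True      # run apt-get update once the repo file changes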
11:03 jtang with a posix front end that sits on top of the linux vfs layers
11:03 rattgrain Does anyone know of a good article that elaborates on different ways to set up and organise grains?
11:03 TyrfingMjolnir joined #salt
11:03 sdenizhan left #salt
11:03 jtang which, well, makes it very linux or even linux kernel specific
11:04 babilen holms: But then, as you aren't telling us what you *really* care about there is little to do. It is just that cmd.run calls to apt-get are typically a "code smell"
11:04 holms babilen: probably my goal can be solved in another way, if i could just generate a proper sources.list, i mean with mirrors for my country, because currently apt-get update takes like.. 3min
11:04 holms because it's US servers, because most of the devs are in the US
11:04 babilen holms: Debian has http.debian.net which automagically selects the "best" mirror for you
11:05 holms what about ubuntu
11:05 holms (that's why i love debian actually))
11:06 babilen I'm sure that they have some geo meta mirror service too
11:06 holms checking
11:06 babilen "deb mirror://mirrors.ubuntu.com/mirrors.txt" seems to be what you are looking for
11:06 holms oyea found the same
11:06 holms well then this will solve my problem
11:06 jespada joined #salt
11:07 holms if i just could avoid overwriting source.list myself
11:07 babilen http://mywiki.wooledge.org/XyProblem -- sorry for insisting so much, but it typically *really* pays off to ask about your actual problem
11:08 holms actual goal is to minify provisioning time, also as deployment
11:08 TyrfingMjolnir joined #salt
11:08 holms great link :D
11:09 babilen Yeah, wooledge's wiki is typically a very nice resource
11:09 holms that's how these guys solved it https://github.com/rentalita/ubuntu-setup/blob/master/salt/apt/init.sls
11:10 babilen Which is the wrong approach
11:10 holms so any module for this .. or perhaps just running some cmd?
11:10 babilen You should use pkgrepo.managed to manage various repositories not reference hardcoded files
11:10 babilen http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkgrepo.html#salt.states.pkgrepo.managed (as mentioned earlier)
11:11 babilen http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html#salt.states.pkg.uptodate might also be of interest
11:12 babilen But then I personally prefer to *not* include such "upgrade the box" states in my configuration and prefer to roll it out manually whenever I see fit.
11:12 fredvd joined #salt
11:12 holms running this once (for the first provision) would be nice, although not sure how to achieve this
11:12 babilen (so that two subsequent highstates are a no-op if I haven't changed anything in the configuration)
11:14 TTimo joined #salt
11:14 babilen holms: You could model the "initial" provisioning differently with, say, orchestrate or even just reactors on the "new minion" event. There are different approaches to *that* problem and it might make sense to investigate the "provisioning of new instances" vs "running highstates against all minions afterwards"
11:15 babilen You could, in the easiest of worlds, define a different startup state (cf. http://docs.saltstack.com/en/latest/ref/states/startup.html ), but then that might not be appropriate
11:15 notnotpeter joined #salt
11:17 calvinh joined #salt
11:17 babilen I am not an expert in that area and you might want to write a mail to salt-users and inquire about various approaches to that. It is, IMHO, well worth investigating a bit more before you commit to something unmanageable in your infrastructure
11:18 miqui joined #salt
11:20 IOMonster joined #salt
11:20 IOMonster joined #salt
11:20 dthorman joined #salt
11:21 rattgrain I'm looking for a way to manage grains. Would it be total crazyness to manage /etc/salt/grains files for each minion through states or pillars?
11:22 flebel joined #salt
11:23 NightMonkey joined #salt
11:25 rattgrain I'm also curious if it is better to have one grains file per minion or one grains file per "type of setup" that the minion is matched against. I'm thinking that one minion may have multiple "setup types" so either the top file will be cluttered or there will be duplication of data in the grains files.
11:26 babilen rattgrain: No, not really. But then why don't you use the information from pillars directly rather than saving them as grains first ? (which introduces dependencies that you wouldn't have otherwise for no particular gain as you have that information in pillars already)
11:26 rattgrain I guess it really depends on our setup but if you know of any good articles discussing this I'd love to read more
11:26 jespada joined #salt
11:27 rattgrain babilen: so the pillar controls which states to use for a minion rather than the pillar setting grains which are then used to select states?
11:28 * rattgrain tries to avoid the XY problem
11:30 Ilja joined #salt
11:31 malinoff joined #salt
11:33 yomilk joined #salt
11:35 rattgrain Maybe what I'm really looking for is nodegroups actually
11:35 rattgrain Instead of setting "webserver" as a grain it could be a webserver nodegroup
11:35 elfixit joined #salt
11:36 flebel joined #salt
11:36 Ash__ joined #salt
11:36 Ilja joined #salt
11:37 Ash__ left #salt
11:37 yomilk_ joined #salt
11:39 Ilja1 joined #salt
11:43 johtso joined #salt
11:46 fivmo joined #salt
11:52 vamsee joined #salt
11:54 vamsee Hi, I'm trying to run eventlisten.py on master, and I'm getting this error. Can anyone tell me what I'm missing, please?
11:54 vamsee python eventlisten.py
11:54 vamsee Traceback (most recent call last):
11:54 vamsee File "eventlisten.py", line 21, in <module>
11:54 vamsee import salt.ext.six as six
11:54 vamsee ImportError: No module named six
11:58 jespada joined #salt
11:58 cnelsonsic_ joined #salt
12:01 VSpike Is 'L@*' a valid compound match?
12:04 iwishiwerearobot joined #salt
12:05 calvinh_ joined #salt
12:05 kossy joined #salt
12:14 VSpike looks like not
12:14 TTimo joined #salt
12:16 mike25de hi all ... i have 2 environments totally separated with 2 salt masters.  Sometimes it happens that a vm must be migrated to the other environment.    SO the trick is to  edit the /etc/hosts and change the new salt ip (that's fine i will use sed) AND restart the salt-minion.   salt -t 10 'salt-minion' cmd.run "sed -i 's/^.*salt.*$/192.168.0.235 salt/g' /etc/hosts && /etc/init.d/salt-minion restart"   THE ISSUE is that the salt-minion is not restarted... it
12:19 babilen VSpike: It is http://docs.saltstack.com/en/latest/topics/targeting/compound.html
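For reference, compound matchers mix matcher types by letter prefix, which is also where ipcidr-style matching (S@) fits; the targets below are made up:

    salt -C 'G@os:Debian and S@192.168.0.0/24' test.ping
    salt -C 'L@web1,web2,web3 or E@db.*' state.highstate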
12:19 rattgrain mike25de: I've been using "at" to restart minions since the salt minion restarting itself could be problematic
12:20 mike25de rattgrain:  at??
12:20 babilen http://docs.saltstack.com/en/latest/faq.html#what-is-the-best-way-to-restart-a-salt-daemon-using-salt
12:20 babilen mike25de: "man at"
12:20 rattgrain mike25de: http://unixhelp.ed.ac.uk/CGI/man-cgi?at
12:20 mike25de rattgrain: thanks mate
12:20 rattgrain nice link babilen :D
12:21 rattgrain i thought i was unique :P
12:21 babilen But then I haven't had any problems with a simple "service.restart" in my states and it wasn't necessary to do what is suggested there (Debian boxes, nothing older than squeeze)
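A sketch of the at-based restart rattgrain and the FAQ describe, so the restart happens outside the running Salt job; the target name is made up, the IP comes from mike25de's example, and the at package is assumed to be installed on the minion:

    salt 'the-moved-vm' cmd.run "sed -i 's/^.*salt.*$/192.168.0.235 salt/g' /etc/hosts"
    salt 'the-moved-vm' cmd.run "echo 'service salt-minion restart' | at now + 1 minute"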
12:21 mike25de thanks for the tips guys
12:25 lothiraldan joined #salt
12:29 calvinh joined #salt
12:30 otter768 joined #salt
12:31 babilen rattgrain: forum or appleton ?
12:34 rattgrain babilen: huh?
12:35 holms can pkgrepo.managed remove everything there is in sources.list?
12:35 babilen rattgrain: ah, forget it (wrongly assumed that you are in ed.ac.uk as you linked to their website)
12:35 babilen holms: It can, yes
12:35 rattgrain babilen: ahaa, hehe :)
12:36 holms clean_file
12:36 holms ok
12:36 rattgrain Is it possible to sum up the use case difference between node groups and grains in 1 sentence?
12:36 babilen holms: There are various ways to use pkgrepo.managed (e.g. "consolidate" has quite the effect)
12:36 * rattgrain is about to roll dice
12:37 kbyrne joined #salt
12:37 babilen rattgrain: nodegroups are useless because you have to restart the master when they change and grains are useless for targeting anything sensitive
12:37 intellix joined #salt
12:37 rattgrain babilen: aha, very nice :)
12:39 yomilk joined #salt
12:39 babilen rattgrain: To elaborate on the grains bit: You *really* don't want to make any sensitive information available to your minions based on their grains data. That is due to the fact that grains are provided by the minion
12:40 rattgrain babilen: yes, i'm not planning on putting any sensitive data. I'm just trying to split my minions into groups that I can target where each minion can be member of many groups.
12:40 babilen I also find targeting by grains to be problematic and typically stick to minion ids or (external) pillars (e.g. keep it in a database). You can't, naturally, target pillars by pillars
12:41 rattgrain we find that the minion id's are not enough for our purposes
12:41 rattgrain (or we are naming them wrong)
12:41 babilen rattgrain: I don't mean to put sensitive data into grains, but to target sensitive information based on grains (some might argue that even the states a minion gets are sensitive and should *not* be controlled by the minion itself)
12:41 mike25de rattgrain: babilen  AT works perfect with my minion-restart  THANKS!!!
12:41 * babilen passes some of that over to rattgrain
12:41 babilen yay!
12:41 rattgrain babilen: aha, true that. Thanks.
12:42 * rattgrain hoorahs
12:42 ThomasJ babilen: Shame nodegroups cannot be changed while running
12:42 babilen Absolutely
12:42 calvinh_ joined #salt
12:42 ThomasJ They really help keeping things clean when having nasty compound matchers going
12:43 rattgrain I think I will go with the groups, although it does not feel good to create group mappings in master config
12:44 babilen The way grains are used is also quite broken ... It is no surprise though as there is no other "static" source of data that is available everywhere and people therefore abuse grains for metadata. I'd argue that you do not want to keep that data on the minions (you really don't want that distributed and under minion control now, do you?), but that you would want it on an authoritative data source
12:44 ThomasJ rattgrain: /etc/salt/master.d/nodegroups.conf
12:44 ThomasJ Can keep them in a separate config file
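A minimal sketch of such a file, with made-up group names and targets (the master still needs a restart after edits):

    # /etc/salt/master.d/nodegroups.conf
    nodegroups:
      webservers: 'L@web1.example.com,web2.example.com'
      dbservers: 'G@roles:database and G@os:Debian'

    # then target with: salt -N webservers test.ping  (or '- match: nodegroup' in top.sls)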
12:44 babilen You can get quite far with pillars, but that breaks down for pillar targeting
12:45 rattgrain ThomasJ: awesome
12:45 rubenb Why does salt need two ports to connect to the master?
12:45 ThomasJ Basically, I never touch my master config file, I keep everything in separate configs in /etc/salt/master.d
12:45 babilen ThomasJ: You'd still have to restart the master which is something I wouldn't want to do just because I add a new minion. If you provision many minions you would constantly hammer the master with auth requests and don't get anything done in between restarts
12:46 otter768 joined #salt
12:46 ThomasJ babilen: True, fortunately not an issue with our deployment so far
12:47 ThomasJ Had to give up on Windows minions for now though :\
12:47 babilen Sure, as with everything else it boils down to your particular needs and setup
12:47 babilen Just give up on Windows completely and you will be a happier person altogether ;)
12:47 I3olle joined #salt
12:47 ThomasJ Keep running into bugs with Windows minions going haywire with connections to the master
12:48 fredvd joined #salt
12:48 ThomasJ Seeing anywhere from 5k to 15k connections from a single minion
12:48 babilen whew
12:48 babilen That's quite suboptimal behaviour
12:48 ThomasJ Given that it only takes a very low number to exhaust the master of file descriptors.. yes
12:49 holms babilen: one more question, how can I force salt to break when one action fails? because apparently it continues to execute stuff further
12:49 ThomasJ I'm seeing some odd behaviour with Linux minions too though. Each one is taking up some 25 filedescriptors
12:50 babilen holms: http://docs.saltstack.com/en/latest/ref/states/failhard.html
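For reference, failhard can be set on a single state or globally in the minion/master config; the state below is hypothetical:

    push-app-config:
      file.managed:
        - name: /etc/myapp/app.conf
        - source: salt://myapp/app.conf
        - failhard: True        # abort the remainder of the run if this state fails

    # or globally, in the minion (or master) configuration file:
    #   failhard: True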
12:50 holms great
12:51 holms i'm actually not sure why they say global failhard is bad
12:51 holms if one cmd failed to bring something, let's say some config or whatever it created or made
12:52 holms other actions will fail too.. and you can even end up losing data
12:52 holms chef's and ansible's approach is usually to fail hard globally by default, of course by rolling back the action (usually modules support this)
12:52 babilen Which is why you should use *explicit* requisites for things that depend on each other. That way salt can run all states it can run without problems
12:53 holms guessing all dependencies would be insane
12:53 holms i mean for me, action priority over each other is a must
12:53 babilen holms: How would you roll back a file.absent ? (for example)
12:53 holms well in ansible if this fails, it restores it back
12:53 holms so it moves it temporarily somewhere
12:54 holms not all actions are secured but most of them
12:54 holms whenever something fails i need to stop further actions, mostly they are connected
12:55 babilen So use failhard, but then I'd much rather be explicit about dependencies and finish all states that can be finished.
12:55 babilen A "roll back" mechanism would be great though
12:55 holms let's say some config just failed to transfer to the server, and you continue to execute on your minion, and you restart the service, and bravo - you're down
12:55 jdowning joined #salt
12:56 holms there's tons of situations where it's just impossible to know dependencies, well you're not a god who knows how every package in this universe behaves
12:56 babilen holms: Yes, that is why you would define a requisite between those two.
12:56 ThomasJ holms: Which is why you want to define requisites
12:57 ThomasJ Alternatively, do not restart services unless a config change is detected
12:57 ThomasJ ie, watch files
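A minimal sketch of what babilen and ThomasJ are describing: the service state watches the config state, so it restarts only on a successful change and is skipped entirely if that state fails (names and paths are made up):

    myapp-config:
      file.managed:
        - name: /etc/myapp/app.conf
        - source: salt://myapp/app.conf

    myapp-service:
      service.running:
        - name: myapp
        - enable: True
        - watch:
          - file: myapp-config   # watch implies require: no restart if the config failed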
12:57 holms i'll make some other examples :P
12:57 gothos Hello! Is it possible to check in a sls file via if statement for the distribution of a client or maybe if it belongs to a specific nodegroup?
12:57 holms as for me basically everything requires everything,
12:57 holms and when you'll want to modify something in the future, modifying requisites will be insane..
12:58 trikke joined #salt
12:58 babilen gothos: You have access to grains in your SLS files, what are you really trying to do?
12:58 holms and in here, just an order..
12:58 holms makes it selfexplicit
12:58 VSpike Can anyone explain why calling this from master doesn't seem to do anything? https://bpaste.net/show/a903485e7eb4
12:58 babilen holms: If you want failhard then, by all means, set it. I wouldn't want that in my infrastructure, but if you prefer it differently then ... that's why there is that functionality
12:59 holms m
12:59 gothos babilen: I want to execute one specific file.append statement and a file.managed only for centos 7 systems, but would prefer not to use two different files
12:59 kossy joined #salt
12:59 lothiraldan joined #salt
12:59 gothos babilen: since I would like to keep all the configs for the service together
12:59 babilen VSpike: What do you want it to do?
12:59 aphorise joined #salt
13:00 ThomasJ gothos: Keep them in their own directory
13:00 ThomasJ You will save yourself a lot of grief if you use matching in your top.sls and not in your states
13:00 VSpike babilen: I'd expect calling state.sls from the master to do the same thing as the minion. It looks like it's doing nothing at all (or at least produces no useful output)
13:00 rattgrain Already liking the node groups more then I thought. Thanks again babilen and ThomasJ
13:01 gothos ThomasJ: okay, if that is the recommended way, then I'll just go with it :)
13:01 babilen gothos: Sure, you can use "{% if grains['os_family'] == 'RHEL' %} or something like that .. Check out the "salt 'THEMINION' grains.items" output to see what you can match on
13:01 ThomasJ But in any case from memory {% if 'CentOS' in salt['grains.get']('os_family') %}
13:02 babilen Ah, is it "CentOS" ?
13:02 babilen Isn't that RHEL?
13:02 ThomasJ No, you are right
13:02 babilen os might be CentOS though
13:02 ThomasJ yep
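A sketch of the conditional gothos is after; on CentOS 7 the grain values are typically os: CentOS and os_family: RedHat, but checking the "salt 'THEMINION' grains.items" output as babilen suggests is the safe way to confirm. Paths and state IDs are made up:

    {% if grains['os'] == 'CentOS' and grains['osrelease'].startswith('7') %}
    centos7-only-config:
      file.managed:
        - name: /etc/myservice/centos7.conf
        - source: salt://myservice/centos7.conf

    centos7-append:
      file.append:
        - name: /etc/myservice/common.conf
        - text: "option = value"
    {% endif %}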
13:03 ThomasJ I started out doing grains checks in states... ended up being a bad idea
13:03 babilen VSpike: Ah! I see ...
13:03 kossy joined #salt
13:03 babilen VSpike: Could you paste linux.utils.base ?
13:03 ThomasJ The less places you check for something, the fewer "WTF" moments you will end up having down the line
13:04 aphoriser joined #salt
13:04 kossy joined #salt
13:04 VSpike On the subject of pillars and grains, I've recently concocted a pillar system in python which so far is working out pretty well. I include it from top.sls for all hosts and it does the real matching in there...
13:05 babilen ThomasJ: I totally agree with that. My states are typically quite explicit and I rather use more complicated targeting and idioms common in "salt formulas" (e.g. map.jinja, different source:// files for each distribution, ...)
13:05 * ThomasJ nods
13:05 VSpike It works entirely off the hostname, and leaves pillar items for roles, location, environment, etc to make selecting states easy. It also includes other pillar files based on standard paths. https://bpaste.net/show/86a832fe628a
13:06 trikke joined #salt
13:06 VSpike It's very tailored for my needs, but having never used python for states or pillar before I was pleasantly surprised by how easy and powerful it is
13:06 babilen oh yeah
13:08 VSpike babilen: https://bpaste.net/show/caee633f031e <--- linux.utils.base
13:09 Furao joined #salt
13:10 babilen VSpike: That's all there is in there?
13:10 babilen (btw, saltstack typically prefer "pkg.installed:" style these days)
13:10 babilen (so "pkg.installed: []" in this case)
13:11 babilen Or just "pkg.installed" actually ... *brainfart*
13:11 aphorise joined #salt
13:13 VSpike babilen: ah, OK. Yep, that's all there is
13:13 babilen VSpike: And you changed your master config to ensure that /srv/salt/states is in file_roots ? (as that differs from the default) - You might also want to sprinkle a "-v" in your command and check the master log. Can you run "salt 'debtest*' test.ping" ?
13:14 babilen How soon after accepting the minion did you run that command?
13:16 babilen VSpike: And please note that state.sls does *not* work if you have includes in your file. (which is why I asked you to paste that file)
13:16 TTimo joined #salt
13:16 babilen But that obviously doesn't apply in your case
13:17 VSpike Oh, looks like the master has lost contact with the minion. test.ping and grains.items no longer working. They were a short while ago, which is odd
13:17 VSpike babilen: minion was accepted days ago, so not that.
13:18 babilen Any idea as to what might have happened?
13:19 VSpike no ... not sure!
13:20 VSpike restarted minion and it's OK now
13:20 lothiraldan joined #salt
13:20 holms Calling state.highstate... (this may take a while).. and it's stuck
13:20 patrek joined #salt
13:20 VSpike Yep, all fine after minion restart. Very odd!
13:21 VSpike babilen: thanks for looking at it
13:21 subsignal joined #salt
13:22 kellnola2 I wonder if our ec2 tag / grain matching is causing our performance issues
13:22 kellnola2 well not really slow, but a lot of latency in everything you do
13:22 babilen holms: context?
13:23 holms babilen: another problem, after vagrant starts, it does an apt-get update and installs some crap, and that's using US mirrors.. my first action is to change them, but apparently salt installs some stuff before my action
13:24 holms like this: Setting up python3-software-properties
13:24 holms oops
13:25 babilen Ah, so you are working on a vagrant setup that uses the salt provisioner!
13:25 babilen (btw, don't use vbox with vagrant if you don't have to as it is horribly slow, vagrant-libvirt is *much* faster with kvm)
13:26 holms hmz
13:29 fredvd joined #salt
13:30 mikkn joined #salt
13:33 babilen holms: I think it really just calls the bootstrap script, what's the actual problem you try to solve?
13:34 nnion joined #salt
13:34 overyander joined #salt
13:34 numkem joined #salt
13:36 holms babilen: change sources.list before the salt provisioner installs anything else
13:37 lothiraldan joined #salt
13:39 yomilk joined #salt
13:40 holms and it has run apt-get update almost 5 times already
13:40 holms even before the states have been executed
13:40 holms insane..
13:40 holms if i have a slow mirror, or any person from some far country has to launch this, they'll wait like 20min
13:41 holms sadly those people don't have experience with linux.. they are on win =(
13:41 Ssquidly joined #salt
13:49 solvik left #salt
13:50 TyrfingMjolnir joined #salt
13:50 cpowell joined #salt
13:51 lothiraldan joined #salt
13:51 dyasny joined #salt
13:52 iwishiwerearobot joined #salt
13:56 michelangelo joined #salt
13:57 fivmo joined #salt
13:58 subsignal joined #salt
14:00 FRANK_T joined #salt
14:02 subsigna_ joined #salt
14:03 numkem joined #salt
14:04 nitti joined #salt
14:06 chris-m joined #salt
14:06 chris-m morning all! Happy Monday!
14:08 chris-m qq on modules - reference: file_roots:    the master and regular minions are being updated with python modules i store in the _modules folder, but the syndic doesn't get the modules.    what am I doing wrong?
14:08 nitti joined #salt
14:08 eseyman joined #salt
14:10 racooper joined #salt
14:10 chris-m i.e. when I run this command: ./salt.sh -G datacenter:QIDC foo.bar (it says foo.bar is not available). - even though I executed the saltutil.sync_modules to all minions from mom
14:11 funzo joined #salt
14:12 murrdoc joined #salt
14:12 jbub joined #salt
14:13 intellix joined #salt
14:15 TTimo joined #salt
14:17 holms babilen: i need you a bit
14:17 holms i can't find a way to wipe source.conf.d directory
14:17 holms and especially whole file
14:17 lietu- so, I'm trying to have the rabbitmq-server package installed and the service enabled, but if I don't give any parameters to pkg.installed, it wants me to omit the colon at the end, and if I omit it, it complains about having the second service.running line .. if I add - before them, it says it's not a dict .. what format should I have to just have a state (e.g. pkg.installed) without parameters AND define another state under it?
14:17 holms although i've specified clean_file: True
14:19 babilen source.conf.d ?
14:19 babilen Are you referring to /etc/apt/sources.list.d ?
14:19 babilen (and please ask the channel for now, I am busy at the moment)
14:19 jbub joined #salt
14:27 breakingmatter joined #salt
14:29 ekkelett lietu-: []
14:29 ekkelett You could do <id>:\n  <state>:[]\n  <another state>:[]
14:30 lietu- thanks, that worked fine
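The shape ekkelett describes, applied to the rabbitmq case (a sketch):

    rabbitmq-server:
      pkg.installed: []          # empty list instead of a bare trailing colon
      service.running:
        - enable: True
        - require:
          - pkg: rabbitmq-server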
14:33 murrdoc morning
14:34 babilen jinja
14:34 babilen ...
14:34 babilen Is it really that far fetched to assume that |format() uses the .format string method?
14:35 murrdoc it does
14:35 murrdoc the jinja filter ?
14:36 babilen Yeah
14:36 babilen Looks as if that does % string formatting and not SOMETHING.format(arg1, arg2, ...)
14:36 murrdoc {% STRANG|format(opts) %}
14:37 lpmulligan joined #salt
14:37 Ssquidly joined #salt
14:38 babilen Try "foo {} {}"|format('a', 'b') vs "foo {} {}".format('a', 'b')"
14:38 murrdoc http://jinja.pocoo.org/docs/dev/templates/#builtin-filters
14:38 murrdoc or actually http://jinja.pocoo.org/docs/dev/templates/#format
14:39 chris-m re-posting
14:39 chris-m <chris-m> qq on modules - reference: file_roots:    the master and regular minions are being updated with python modules i store in the _modules folder, but the syndic doesn't get the modules.    what am I doing wrong? [09:10] <chris-m> i.e. when I run this command: ./salt.sh -G datacenter:QIDC foo.bar (it says foo.bar is not available). - even though I executed the saltutil.sync_modules to all minions from mom
14:39 primechuck joined #salt
14:39 babilen murrdoc: Yeah, that seems to use "STRANG % (opts)" rather than "STRANG.format(opts)"
14:39 murrdoc started the day with too much norleans music, ergo STRANG :)
14:40 babilen It's fine .. just wanted to be consistent
14:40 * babilen laughs
14:40 paulm- joined #salt
14:40 murrdoc the jinja filters always get applied as string|filter(opts), from what i have used
14:41 yomilk joined #salt
14:42 Furao joined #salt
14:43 blee joined #salt
14:43 blee Hey everyone, is there a way to store an array of data inside a pillar?
14:43 timoguin joined #salt
14:44 murrdoc yup, list or dict ?
14:45 toanju joined #salt
14:46 blee murrdoc, which would be easiest to parse using the jinja? I assume something like a for loop
14:47 murrdoc depends on what you are doing with the pillar
14:47 otter768 joined #salt
14:48 kaptk2 joined #salt
14:48 murrdoc https://github.com/saltstack-formulas/nginx-formula/blob/master/nginx/ng/map.jinja#L1-L5 thats a jinja way to iterate on items in a dict
14:49 murrdoc https://github.com/saltstack-formulas/nginx-formula/blob/master/pillar.example overly complicated pillar :)
14:52 sieve joined #salt
14:52 Ilja joined #salt
14:53 sieve I have this file in my pillar called "sensu.sls" which describes the configuration of the sensu that we are deploying.
14:53 sieve This sensu.sls contains the ip addresses of the various components.   "    host: 172.31.7.251" for example
14:54 sieve I dont like having ip addresses in configuration management however for obvious reasons.
14:55 sieve How can I update these ip addresses with the actual ip addresses from the infrastructure compontents?
14:55 sieve I know I can get the ip address into the pillar with the command:
14:55 sieve mine_functions:
14:55 sieve network.ip_addrs: [eth0]
14:56 evle sieve: grains
14:56 micko joined #salt
14:56 sieve and then use this jinja to put stuff into the configuration files.
14:56 sieve {% for server, addrs in salt['mine.get']('roles:master', 'network.ip_addrs', expr_form='grain').items() %}
14:56 sieve master: {{ addrs[0] }}
14:56 sieve {% endfor %}
14:56 jdowning joined #salt
14:59 sieve evle: yes, indeed. how do I get the salt name grain however?
15:01 murrdoc salt 'minion' grains.ls
15:01 murrdoc salt 'minion' grains.get item
15:01 murrdoc salt['grains.get']('thang')
15:01 murrdoc where thang is item
15:02 murrdoc http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.grains.html
15:02 murrdoc http://docs.saltstack.com/en/latest/topics/tutorials/states_pt3.html#using-grains-in-sls-modules
15:03 murrdoc anyone know if the saltstack faq page is in gi
15:03 murrdoc git*
15:08 Vye murrdoc, https://github.com/saltstack/salt/blob/develop/doc/faq.rst
15:09 bluenemo joined #salt
15:10 murrdoc vye thank you
15:10 murrdoc #punsforfuns
15:11 roolo joined #salt
15:12 Vye oh heh
15:13 blee murrdoc, I have a state, that does the exact same thing (clones down git repos), but different types of boxes load different types of repos
15:13 blee so server 1, needs repo 1/2/3, and server 2 needs 3, etc
15:16 sieve I think that my sensu pillar is not being parsed for Jinja
15:16 murrdoc whats different about the servers ?
15:16 murrdoc blee:  do they have a grain like 'role'
15:16 murrdoc or is this based on hostname
15:18 Guest15 joined #salt
15:20 dualicorn joined #salt
15:21 dualicorn joined #salt
15:22 sieve Can anyone see what I am doing wrong here?
15:22 sieve {% for server, addrs in salt['mine.get']('id:rabbitmq-*', 'network.ip_addrs', expr_form='grain').items() %}
15:22 babilen sieve: id: is wrong
15:22 babilen Ah, you set expr_form, but want that to be expr_from
15:23 babilen Are you trying to match hosts named "rabbitmq-foo1" (and so on)
15:23 babilen ?
15:23 sieve babilen: exactly, on their salt id
15:24 babilen btw, it is being matched against id by default, so you don't have to use expr_from nor "id:"
15:24 sieve babilen: {% for server, addrs in salt['mine.get']('rabbitmq-*', 'network.ip_addrs').items() %}
15:24 babilen So, make that {% for server, addrs in salt['mine.get']('rabbitmq-*', 'network.ip_addrs').items() %}
15:25 sieve snap!
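Putting the pieces together, a sketch of the setup sieve is using: the mine function in pillar (or minion config) plus the corrected template loop:

    # pillar or minion config on the rabbitmq minions
    mine_functions:
      network.ip_addrs: [eth0]

    # in a template, matching on minion id (glob is the default expr_form)
    {% for server, addrs in salt['mine.get']('rabbitmq-*', 'network.ip_addrs').items() %}
    master: {{ addrs[0] }}
    {% endfor %}

    # the mine only refreshes periodically; force it with:  salt '*' mine.update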
15:26 nitti joined #salt
15:26 sieve babilen: https://gist.github.com/mooperd/2b7171e0c725cb43e714
15:26 sieve babilen: I still see an error
15:27 babilen "KeyError: 'master_uri'"
15:28 babilen Does that ring a bell? Could you paste the *entire* file in which you reference master_uri ?
15:29 sieve babilen: https://gist.github.com/mooperd/2c7e2e0734f3c1c77867
15:29 sieve This is a pillar however….
15:29 claytonk joined #salt
15:29 babilen I don't see master_uri in there
15:29 dualicorn joined #salt
15:30 sieve babilen: oh, er. master_uri?
15:30 babilen Ah .. that is something you don't set yourself, but is a lookup by salt itself
15:30 babilen https://gist.github.com/mooperd/2b7171e0c725cb43e714#file-gistfile1-txt-L25 (hence my comment)
15:30 sieve ah, yes
15:30 sieve The thing is, its not clear to me that you can use Jinja like this in the pillar
15:31 sieve You know, like this makes some kind of recursive loop or something
15:31 babilen I've only used it in states so far
15:32 babilen (or managed files)
15:32 sieve babilen: right, actually, this pillar is to control a bunch of formulas
15:32 babilen But then your loop is wrong (it would generate identical entries over and over again)
15:32 sieve babilen: how so?
15:33 claytonk I am working on spinning up multiple Percona MySQL galera nodes in a Vagrant environment. Anyone have recommendations on how to accomplish different salt provisioning per cluster node?
15:33 babilen (namely sensu:rabbitmq:{port,user,password})
15:33 babilen sieve: Not sure if you can use mine.get in pillars though
15:34 claytonk I'm using the Vagrant salt provisioner
15:34 murrdoc u cant use mine.get in pillars
15:34 fivmo joined #salt
15:35 sieve murrdoc: Is that documented anywhere?
15:35 murrdoc not that i know off
15:35 murrdoc shoot maybe u can
15:35 murrdoc br
15:35 murrdoc brb
15:36 GTSiciliaMeMMT joined #salt
15:38 andrew_v joined #salt
15:38 jdowning joined #salt
15:39 sieve sorry babilen, I am not understanding your "(namely sensu:rabbitmq:{port,user,password})" comment
15:40 GTSiciliaMeMMT left #salt
15:41 Brew joined #salt
15:41 iwishiwerearobot joined #salt
15:41 sieve babilen: http://docs.saltstack.com/en/latest/topics/tutorials/states_pt3.html
15:41 yomilk joined #salt
15:47 murrdoc1 joined #salt
15:49 rome_390 joined #salt
15:50 timoguin joined #salt
15:50 thedodd joined #salt
15:50 ipmb joined #salt
15:51 jtang joined #salt
15:51 babilen sieve: You generate sensu:rabbitmq:port multiple times in your pillar (i.e. duplicate entries)
15:51 babilen (same for user and password)
15:53 sieve babilen: hmm, well
15:53 sieve babilen: a for loop is not ideal here indeed
15:54 murrdoc imho https://github.com/komljen/sensu-salt/ might be worth reading
15:56 BigBear joined #salt
15:58 BigBear is salt-call winrepo.genrepo supposed to run through to completion when run directly on a windows minion?
16:01 murrdoc joined #salt
16:02 teogop joined #salt
16:03 conan_the_destro joined #salt
16:03 dyasny joined #salt
16:03 Fiber^ joined #salt
16:04 pdayton joined #salt
16:06 eliasp BigBear: winrepo.genrepo isn't supposed to be run on a minion
16:07 eliasp BigBear: unless you're in a masterless setup
16:07 sieve I am having a hard time getting data out of the mine based on id.
16:07 sieve {% for server, addrs in salt['mine.get']('id:rabbitmq-1', 'network.ip_addrs', expr_form='grain').items() %}
16:07 sieve master: {{ addrs[0] }}
16:07 sieve {% endfor %}
16:07 sieve I have tried a few variations
16:07 murrdoc have u run salt '*' mine.refresh ?
16:08 murrdoc or the equivalent
16:08 _JZ_ joined #salt
16:09 hebz0rl joined #salt
16:10 BigBear eliasp: Yes, I am trying to test the capabilities of a masterless setup. So I edited the c:\salt\conf\minion file to say "file_client: local". I am hoping to be able to read both local sls recipes and git hosted win_repo sls files.
16:12 sieve murrdoc: I have run a "salt '*' mine.flush"
16:12 eliasp BigBear: did you also set "win_repo" and "win_repo_mastercachefile"? and also read this issue: https://github.com/saltstack/salt/issues/20526
16:13 mohae joined #salt
16:13 malinoff joined #salt
16:14 teebes joined #salt
16:15 thedodd joined #salt
16:18 josephleon joined #salt
16:18 sieve funnily:
16:18 sieve {% for server, addrs in salt['mine.get']('roles:master', 'network.ip_addrs', expr_form='grain').items() %}
16:18 sieve is working as expected
16:18 sieve but
16:18 sieve {% for server, addrs in salt['mine.get']('rabbitmq-1', 'network.ip_addrs', expr_form='grain').items() %}
16:18 sieve or
16:19 sieve {% for server, addrs in salt['mine.get']('id: rabbitmq-1', 'network.ip_addrs', expr_form='grain').items() %}
16:19 sieve does not
16:19 holms can you please put it to pastebin or smtng
16:19 holms how can i remove *.conf files in directory?
16:21 holms oh c'mon srs'ly? http://serverfault.com/questions/528085/clear-directory-with-salt-state-file
16:21 holms can saltstack do something properly at least once..
16:21 holms hacks everywhere for any usecase i have , that's starting to be annoying
16:22 murrdoc clean: True should just work
16:22 murrdoc in a file.directory
16:22 holms thanks! trying that
16:22 holms (sorry for my tone)
16:22 holms i'm fighting things in here for the whole evening
16:22 murrdoc sounds like a monday
16:22 holms :D
16:23 murrdoc if u want to put only your salted files in a dir
16:23 murrdoc you have to explicitly 'require' them in your directory state
16:23 holms understood
16:24 murrdoc so like apt-sources-d:\n\tname:/etc/apt/sources.d/\n\tclean:True\n\trequire: file:apt-source-ppa
16:24 murrdoc so on
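murrdoc's one-liner unpacked into YAML; the kept-file state ID is hypothetical, and the real apt directory is /etc/apt/sources.list.d:

    apt-sources-d:
      file.directory:
        - name: /etc/apt/sources.list.d
        - clean: True              # delete anything in the directory not required below
        - require:
          - file: apt-source-ppa   # the file/pkgrepo state whose file should survive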
16:26 tzero what's the difference between setting the minion's id in /etc/salt/minion_id vs /etc/salt/minion || /etc/salt/minion.d/*.conf ?
16:28 murrdoc style preference
16:28 murrdoc ?
16:29 jalaziz joined #salt
16:30 nkuttler tzero: packaged vs non-packaged files
16:30 holms murrdoc: best decision, gonna answer that serverfault's question :DD
16:30 BigBear eliasp: yes and no ;-) Thanks for the link, was reading that issue about 15 minutes ago, but could not translate the paths from unix master unto a windows minion setting so now the win_repo is set to /srv/salt/win/repo and win_repo_mastercachefile to /srv/salt/win/repo/winrepo.p
16:32 BigBear eliasp: but I still have errors and I would really like the salt repo files on windows to live under c:\salt\win\repo and NOT under c:\srv\salt\win\repo
16:33 BigBear eliasp: but I tried /win/repo /salt/win/repo and they both failed. the only one that wrote out a winrepo.p file was the /srv/salt/win/repo prefix
16:33 StDiluted joined #salt
16:34 eliasp BigBear: you need to provide proper windows paths… otherwise it will most likely fail
16:34 eliasp BigBear: have you tried setting it to C:/salt/ .....?
16:36 BigBear eliasp: no - I haven't. will give that a go. was hoping between the root_dir: c:\salt and the file_client: local it would interpret /win as c:\salt\win , but it evidently interprets /srv as c:\srv. will try the ODD c:/ prefix ;-)
16:37 eliasp BigBear: forward-slashes should work just fine, while backward-slashes might be interpreted as escape char in YAML
16:37 Ozack-work joined #salt
16:37 eliasp BigBear: … I'm not sure, but Python itself should be able to deal with forward slashes on Win just fine
16:40 BigBear eliasp: no c:/salt/win/repo gives me same error I get when i use /salt/win/repo, so maybe that means it actually tries to write the c:/salt/win/repo/winrepo.p file. but fails before it writes the first byte to it.
16:41 eliasp BigBear: does the directory exist? otherwise, create it…
16:41 BigBear eliasp: yes the directory exists.
16:47 dualicorn joined #salt
16:48 Guest15 joined #salt
16:48 otter768 joined #salt
16:56 igorwidl joined #salt
16:57 yomilk joined #salt
16:57 jalbretsen joined #salt
16:58 BigBear eliasp: so sorry, looks like I had some 'broken' init.sls files in my c:\salt\win\repo subdirs. now that I cleared the crud python finally compiles my winrepo.p
16:59 BigBear can something like salt-call --local winrepo.update_git_repos be made to work on a masterless windows minion? (assume git is already installed)
16:59 eliasp hehe, shit happens… and feel free to subscribe to https://github.com/saltstack/salt/issues/8157
16:59 eliasp :)
17:01 sieve How can I show what data is in the mine / pillar?
17:01 sieve I have a pillar (ip.sls) with the following:
17:01 sieve mine_functions:
17:01 sieve network.ip_addrs: [eth0]
17:01 sieve How can I show this data on the commandline?
17:01 igorwidl Looking for a way to test my salt states before pushing them to master salt server
17:02 eliasp sieve: salt foo pillar.get whateveryouwant
17:02 eliasp sieve: http://docs.saltstack.com/en/latest/topics/tutorials/pillar.html
17:02 eliasp igorwidl: https://github.com/simonmcc/kitchen-salt
17:03 igorwidl eliasp: awesome, ty
17:03 sieve eliasp: I cannot see how to get mine data this way
17:04 sieve redis-1:
17:04 sieve ----------
17:04 sieve mine_functions:
17:04 sieve ----------
17:04 sieve network.ip_addrs:
17:04 sieve - eth0
17:04 sieve for example
17:04 eliasp sieve: don't flood the channel… please use a nopaste service
17:04 iggy the mine only updates every hour by default
17:05 sieve iggy, now that is an interesting piece of information
17:05 eliasp sieve: see the available functions: "salt foo-bar -d mine"
17:05 thedodd joined #salt
17:08 sieve so I guess "salt '*' mine.update" is the key command here then
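Pulling this thread together: the pillar only configures what the mine collects; to see the data you refresh the mine and query it with mine.get (the interval override and target below are illustrative):

    # pillar: ip.sls  (what sieve already has)
    mine_functions:
      network.ip_addrs: [eth0]

    # optional, in the minion config: shorten the default 60-minute refresh
    mine_interval: 5

Then salt '*' mine.update followed by salt 'redis-1' mine.get '*' network.ip_addrs (run from the master) should show the collected addresses, rather than the pillar definition.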
17:09 clintberry joined #salt
17:09 clintberry joined #salt
17:11 aquinas joined #salt
17:13 blackhelmet joined #salt
17:13 tkharju joined #salt
17:18 bhosmer joined #salt
17:22 signull joined #salt
17:22 signull hey
17:22 signull Hello
17:23 signull how can I run a cmd.run with a | in it?
17:23 smcquay joined #salt
17:23 signull how can I escape it so that yaml doesnt complain its a dictionary?
17:25 lbotti joined #salt
17:27 Grokzen joined #salt
17:28 wendall911 joined #salt
17:30 iwishiwerearobot joined #salt
17:31 _JZ_ joined #salt
17:31 BigBear joined #salt
17:31 rome_390 joined #salt
17:31 andrew_v joined #salt
17:31 dthorman joined #salt
17:31 evle joined #salt
17:31 twiedenbein joined #salt
17:31 egil joined #salt
17:31 alainv joined #salt
17:31 bash124512 joined #salt
17:31 FineTralfazz joined #salt
17:31 colttt joined #salt
17:31 jayne_ joined #salt
17:31 jasonrm joined #salt
17:31 stolitablrrr_ joined #salt
17:31 ahammond joined #salt
17:31 bryguy joined #salt
17:31 SheetiS joined #salt
17:31 lionel joined #salt
17:31 shnguyen joined #salt
17:31 steveoliver joined #salt
17:31 seanz joined #salt
17:31 Hipikat joined #salt
17:31 Puckel_ joined #salt
17:31 xs- joined #salt
17:31 nicolerenee joined #salt
17:31 teepark joined #salt
17:31 snuffychi joined #salt
17:31 nethershaw joined #salt
17:31 cmek joined #salt
17:31 ashb joined #salt
17:31 schristensen joined #salt
17:31 albertid_ joined #salt
17:31 superseb joined #salt
17:31 h8 joined #salt
17:31 zemm_ joined #salt
17:31 muep_ joined #salt
17:31 iMil joined #salt
17:31 analogbyte joined #salt
17:31 eightyeight joined #salt
17:31 Karunamon joined #salt
17:31 lazybear joined #salt
17:31 borgstrom joined #salt
17:31 sc` joined #salt
17:31 jmccree joined #salt
17:31 iamtew joined #salt
17:31 rbjorkli1 joined #salt
17:31 gmoro joined #salt
17:31 pmcg joined #salt
17:31 wm-bot4 joined #salt
17:31 dober joined #salt
17:31 keekz joined #salt
17:31 honestly joined #salt
17:31 beebeeep_ joined #salt
17:31 wolog joined #salt
17:31 dcmorton joined #salt
17:31 garphy`aw joined #salt
17:31 stotch joined #salt
17:31 tedski joined #salt
17:31 TronPaul_ joined #salt
17:31 ahale joined #salt
17:31 pjs joined #salt
17:31 jdowning joined #salt
17:32 che-arne joined #salt
17:33 BigBear basepi: who can switch conference mode ON and OFF? Is it needed right now? Makes it very hard to read this channel without scrolling a lot.
17:33 nszceta joined #salt
17:34 iggy signull: did you try \| ?
17:34 hal58th joined #salt
17:34 iggy BigBear: what is conference mode?
17:36 signull iggy: i got it, nothing to do with pipes actually. I put a - before cmd.run
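For the record, the pipe itself is not what YAML objects to; quoting the command and using the list form under the state ID keeps the parser happy. A minimal sketch (state ID and command are made up):

    count-salt-procs:
      cmd.run:
        - name: 'ps aux | grep [s]alt | wc -l'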
17:36 BigBear iggy: the message [INFO]Conference Mode has been disabled for this view; joins, leaves, quits and nickname changes will be shown. , followed by screens and screens of people joining or leaving or changing names.
17:36 iggy BigBear: tell your irc client to ignore them
17:36 jalaziz joined #salt
17:38 BigBear iggy: OK, but I never had this issue before now, and it looks as if someone switched it ON a few minutes ago (maybe basepi?) and it did get switched off again. but will go looking in the IRC client for that setting. do you happen to know how to do it in chatzilla?
17:38 iggy conference mode is apparently something that your client calls "ignore joins/parts/quits/etc"... it's not a channel setting
17:38 iggy I use irssi, so I don't really know anything about chatzilla
17:39 roolo joined #salt
17:39 nszceta left #salt
17:41 jtang joined #salt
17:42 murrdoc1 joined #salt
17:47 bash1245_ joined #salt
17:47 bash1245_ Hello, anyone used boto_elasticache?
17:50 I3olle joined #salt
17:53 holms does pkgrepo.managed accept a list? instead of name/names I mean
17:55 hal58th holms doesn't look like it, but give it a shot to make sure
17:55 mikaelhm joined #salt
18:04 desposo joined #salt
18:04 dualicorn joined #salt
18:05 desposo1 joined #salt
18:07 dooshtuRabbit joined #salt
18:07 desposo2 joined #salt
18:08 murrdoc joined #salt
18:10 chris-m joined #salt
18:12 chris-m hi
18:12 chris-m I have custom modules that I created on MOM.  I have no issues with MOM syncing these with other minions, but when it comes to syncing with a syndic, I have issues. What am I missing? From MOM, I can ping the syndic and its children: ./salt.sh -G "datacenter:QIDC" test.ping    MOM does pass -G datacenter:QIDC commands down to underlings. But why does the syndic not sync? It's kind of its main purpose. I created the folder on
18:12 dwfreed joined #salt
18:15 amcorreia_ joined #salt
18:16 cberndt joined #salt
18:16 fxhp joined #salt
18:17 ipmb joined #salt
18:19 tligda joined #salt
18:20 kaptk2 joined #salt
18:21 tligda joined #salt
18:22 iggy I don't know anything about syndic setups, but did you check to see if they are making it to where they need to be in /var/cache/salt/?
18:24 chris-m hi iggy
18:25 roolo joined #salt
18:26 chris-m since we are using a non-root implementation of salt, we are storing the python modules in /apps/infra/salt-master/srv/salt/_modules
18:27 chris-m @Iggy - What should I be looking for under the /var/cache/salt folder?
18:27 murrdoc and u have setup the extension_modules config ?
18:27 murrdoc in the salt masters
18:28 chris-m @murrdoc - yes.  this is the config from mom -> # The root directory prepended to these options: pki_dir, cachedir, # sock_dir, log_file, autosign_file, autoreject_file, extension_modules, # key_logfile, pidfile. root_dir: /apps/infra/salt-master/2014.7.1
18:28 chris-m on the syndic (master) it is set to /apps/infra/salt-syndic/...
18:30 murrdoc salt 'somename' config.get extension_modules on the syndic and the master
18:31 breakingmatter joined #salt
18:31 chris-m @murrdoc - what should 'somename' be replaced with?
18:33 murrdoc some minion name
18:33 chris-m @murrdoc/iggy - I just checked the order_masters setting on both mom and the syndic both are set to false.   where should this be set to true? (could be a red herring)
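For what it's worth, in a syndic topology order_masters goes on the master-of-masters, and the syndic's own master config points back at it. A minimal sketch (the hostname is made up):

    # master config on the master-of-masters
    order_masters: True

    # master config on the syndic host
    syndic_master: mom.example.com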
18:33 riftman joined #salt
18:34 stevednd anyone using iptables with a large number of rules? reconfiguring the firewall on my machines is by far the longest running state since it executes each command individually. Want to find a way to shorten it
18:35 nkuttler stevednd: hm, i just write the config to /etc/iptables/rules, bypassing salt
18:35 nkuttler but i don't have a dynamic setup
18:36 cberndt joined #salt
18:37 chris-m @murrdoc - ln98948:infra$:/apps/infra/salt-master/2014.7.1 ./salt.sh 'ln99168.corp.ads' config.get extension_modules ln99168.corp.ads:     /work/infra/salt-minion/2014.7.1/var/cache/salt/minion/extmods (from mom)
18:38 chris-m from syndic
18:38 chris-m infra@btln001245:/apps/infra/salt-syndic/2014.7.1 $ ./salt.sh 'btln001254.corp.ads' config.get extension_modules btln001254.corp.ads:     /work/infra/salt-minion/2014.7.1/var/cache/salt/minion/extmods
18:38 signull /topic
18:42 stevednd nkuttler: that's what I was leaning towards. I was hoping to find some nice salt friendly way to do it instead
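A minimal sketch of that approach done inside Salt, so the rules file is still tracked and only reloaded when it changes (paths and source are hypothetical, Debian-style iptables-persistent layout):

    iptables-rules:
      file.managed:
        - name: /etc/iptables/rules.v4
        - source: salt://iptables/files/rules.v4

    reload-iptables:
      cmd.wait:
        - name: iptables-restore < /etc/iptables/rules.v4
        - watch:
          - file: iptables-rules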
18:42 hebz0rl joined #salt
18:44 chris-m @iggy - the syndic isn't putting the files into var/cache/salt/minion/extmods/...
18:45 ajw0100 joined #salt
18:46 yomilk joined #salt
18:49 otter768 joined #salt
18:58 cberndt joined #salt
19:04 N-Mi_ joined #salt
19:10 ckao joined #salt
19:18 stoogenmeyer_ joined #salt
19:19 linjan joined #salt
19:19 iwishiwerearobot joined #salt
19:23 spookah joined #salt
19:25 abe_music joined #salt
19:27 teebes joined #salt
19:32 BigBear joined #salt
19:35 mailo joined #salt
19:36 MatthewsFace joined #salt
19:41 hal58th1 joined #salt
19:42 denys joined #salt
19:43 bash1245_ joined #salt
19:43 lbotti joined #salt
19:46 toanju joined #salt
19:47 holms sorry guys but the pkgrepo.managed clean_file param just DOESN'T WORK
19:47 holms it doesn't clean a file
19:48 lbotti joined #salt
19:49 CeBe1 joined #salt
19:49 chiui joined #salt
19:52 chris-m @Iggy/Murrdoc - I added the param order_masters: True on mom, and can now ping any minion that is connected directly to it, or via the syndic.
19:52 chris-m but I still can't sync the modules :(
19:53 murrdoc try syncing from a master first ?
19:53 smcquay joined #salt
19:54 chris-m From MOM : I ran the command ./salt.sh '*' saltutil.sync_all
19:56 thedodd joined #salt
19:56 iggy can you stop pinging me on this problem? I told you I don't know dick about syndic. I just gave you some general ideas to try.
19:57 holms murrdoc: pkgrepo.managed runs apt-get update every time no matter whether refresh is set to true or false, same goes for clean_file, it doesn't even clean the file.. it just doesn't care
19:57 chris-m no worries. thx Iggy for your help
19:59 holms cleaning up sources.list seems to be a bigger problem here than it even is in cfengine
19:59 murrdoc whats your state look like
20:00 bash1245_ anyone created elasticache cluster with salt state ?
20:01 holms murrdoc: http://pastebin.com/KVqrQcLu
20:01 holms clean_file is removed because it just doesn't work for me
20:01 holms tried it with one mirror
20:01 I3olle joined #salt
20:01 holms it just appends the mirror
20:02 murrdoc are u trying to ensure /etc/apt/source.list is clean ?
20:02 holms will try this with one mirror for now
20:02 holms murrdoc: yeap
20:02 murrdoc u are probably better off doing it as a file.managed
20:03 holms so it will do this every time i run provisioning
20:03 holms this is not good
20:04 holms just putting a file in place would probably be easier
20:04 holms and it wouldn't put it twice
20:04 holms and wouldn't run apt-get update again if not changed
20:04 murrdoc yeah
20:04 holms if i just could pass somehow
20:04 murrdoc you ll have to do an onchanges thing to run apt-get update
20:05 holms whole list to this state
20:05 murrdoc thing is you are removing the file everytime
20:05 holms then it would generate the file,
20:05 holms and it could check the hash by itself, no?
20:05 murrdoc or you could read the python code for pkgrepo.managed
20:05 murrdoc and see what refresh does
20:05 holms that's how ansible does it actually
20:05 holms so does chef
20:07 aparsons joined #salt
20:07 murrdoc also your whole state can just use -names
20:07 holms doesn't work
20:07 holms tried it already
20:07 holms no error though
20:08 dualicorn joined #salt
20:08 mohae joined #salt
20:09 holms murrdoc: is this looks good to you? http://pastebin.com/JwTvd3gV
20:09 holms we have multiple mirrors, cleaning file completely before populating it
20:09 holms and refreshing index after updating file
20:09 holms right?
20:11 murrdoc holms:  http://pastebin.com/SVtu4QMe
20:11 murrdoc or what u have
20:13 dualicorn joined #salt
20:14 holms ok
20:14 murrdoc joined #salt
20:19 holms ' ' <-- brings error
20:19 holms loool
20:19 holms murrdoc: i've got only 2 mirrors added out of 4
20:19 holms how did this happen :D
20:20 holms oh ok it's even worse
20:20 holms ~# cat /etc/apt/sources.list
20:20 holms deb mirror://mirrors.ubuntu.com/mirrors.txt trusty restricted universe main multiverse
20:20 holms deb mirror://mirrors.ubuntu.com/mirrors.txt trusty restricted main multiverse universe
20:22 jhauser joined #salt
20:22 murrdoc oh because my bad
20:22 murrdoc - dist: trusty
20:22 murrdoc only adds the trusty ones
20:22 murrdoc if u want the apt source list cleaned up
20:23 murrdoc use the consolidate: True and refresh_db: False mix
20:23 murrdoc it should work
20:23 murrdoc if it doesnt, use file.managed (its what i use)
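A minimal sketch of the file.managed route murrdoc mentions, with apt-get update only firing when the file actually changes (the template path is hypothetical):

    /etc/apt/sources.list:
      file.managed:
        - source: salt://apt/files/sources.list.jinja
        - template: jinja

    apt-refresh:
      cmd.wait:
        - name: apt-get update
        - watch:
          - file: /etc/apt/sources.list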
20:23 holms trying
20:25 giantlock joined #salt
20:26 aparsons joined #salt
20:27 cpowell_ joined #salt
20:34 holms murrdoc: two mirrors added )))
20:35 yomilk joined #salt
20:36 murrdoc did u remove the - dist : trusty
20:42 spiette joined #salt
20:44 TTimo joined #salt
20:44 MugginsM joined #salt
20:46 Whissi Mh, my salt master was not available for ~3 hours due to maintenance (multiple restarts, network disconnects...). Now it is back but all the minions "died" with -> http://pastebin.com/raw.php?i=vThRmWZ3
20:46 Whissi Did I miss something in the salt configuration?
20:47 Whissi I know that minions will give up on failures... but they will log if they give up...
20:49 ciss joined #salt
20:49 aparsons joined #salt
20:50 otter768 joined #salt
20:51 hal58th joined #salt
20:54 I3olle joined #salt
20:54 iggy Whissi: what version of salt?
20:55 Whissi 2014.7.1
20:55 iggy it's probably worth opening an issue
20:56 iggy there shouldn't be unhandled exceptions anywhere (in a perfect world)
20:56 Whissi hehe :)
20:56 Whissi OK, I'll file a bug.
20:58 elextro joined #salt
21:04 TTimo joined #salt
21:07 iwishiwerearobot joined #salt
21:12 lpmulligan joined #salt
21:14 Brew joined #salt
21:17 smcquay_ joined #salt
21:18 rawkode joined #salt
21:18 lbotti joined #salt
21:21 transmutated joined #salt
21:21 smcquay_ joined #salt
21:21 transmutated The install instructions for installing salt on redhat/centos/etc 5.x are broken.
21:21 transmutated A python dependency specifically
21:22 transmutated Error: Missing Dependency: python26-distribute is needed by package python26-jinja2-2.5.5-6.el5.noarch (epel)
21:24 BigBear joined #salt
21:25 signull pip install the requirements
21:25 signull go into the salt directory
21:25 signull pip install -r requirements.txt
21:25 transmutated can't find python-pip package.
21:25 transmutated Is that in the epel repo as well?
21:25 signull youre on redhat
21:25 transmutated yea
21:25 signull you dont need to
21:25 signull install python-setuptools
21:25 signull than
21:25 transmutated ahhh
21:25 transmutated gotcha
21:25 signull easy_install pip
21:26 signull *then
21:26 signull also
21:26 signull go to epel and see if you can get python 2.7
21:26 signull 2.6 would work
21:27 signull but 2.6 is like the ugly sister of 2.7
21:27 fxhp joined #salt
21:28 transmutated nope, no python27
21:28 transmutated :(
21:28 transmutated Stupid RHEL 5.
21:28 transmutated STUPID
21:28 transmutated :)
21:30 murrdoc this might sound like more work, but checkout the bootstrap script
21:30 murrdoc and see what repos it recommends for it
21:33 JDiPierro joined #salt
21:34 JDiPierro Hey Guys. I'm having a problem where setting grains on a minion claims to be successful but it isn't. I've tried grains.append and grains.setval and they both return success but when I do grains.get afterwards the new grain is missing.
21:34 JDiPierro Output at: http://pastebin.com/GLgRpBzJ
21:35 yomilk joined #salt
21:37 smcquay_ joined #salt
21:39 spookah joined #salt
21:40 babilen JDiPierro: Did you verify that it has indeed not been set on the minion?
21:40 babilen (by looking at /etc/salt/*)
21:41 JDiPierro I had not, just with grains.get
21:41 JDiPierro will check that now
21:41 ipmb joined #salt
21:42 babilen /etc/salt/grains is what you are looking for
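For anyone else debugging this: grains.setval / grains.append persist into a plain YAML file on the minion, so it is easy to eyeball, e.g. (keys made up):

    # /etc/salt/grains
    roles:
      - webserver
    environment: staging

If the value is in that file but salt-call grains.get roles still comes back empty, a grain of the same name defined elsewhere (minion config or a custom grains module) is a likely culprit.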
21:43 mosen joined #salt
21:45 murrdoc ftw
21:54 yomilk joined #salt
21:56 jhauser joined #salt
21:58 jalaziz joined #salt
21:59 holms joined #salt
22:00 subsignal joined #salt
22:02 conan_the_destro joined #salt
22:02 lbotti joined #salt
22:03 toanju joined #salt
22:05 bash124__ joined #salt
22:05 monkey661 joined #salt
22:06 holms murrdoc: i did
22:07 murrdoc context, i have lost it
22:07 holms 22:36:28   murrdoc | did u remove the - dist : trusty
22:07 holms finishing provisioning, will check in one min
22:11 Ryan_Lane joined #salt
22:13 holms http://pastebin.com/SCANvQUp
22:13 hal58th1 joined #salt
22:13 holms murrdoc: i'm not sure what this is at all
22:13 holms two mirrors added
22:14 iggy transmutated: also, check the issue tracker, I'm pretty sure this has been reported (more than once)
22:14 holms the lines got mixed up somehow
22:15 murrdoc yeah it kept the existing mirrors and added the trusty ones
22:16 murrdoc use file.managed
22:16 Pixionus joined #salt
22:16 primechuck joined #salt
22:17 bash1245_ joined #salt
22:18 chris-m left #salt
22:19 signull anyone know a good way to rename a minion to match a newly changed hostname?
22:20 iggy rm /etc/salt/minion_id ?
22:20 signull for instance I want to start up a amazon instance, change hostname, update route53 with private ip to app01.dev.company.com
22:20 denys joined #salt
22:20 iggy and you probably want it to be all automatic?
22:21 signull iggy: i guess im asking for too much?
22:21 iggy there's a recipe in the reactor docs that talks about key acceptance
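Roughly, the recipe iggy means is: react to the salt/auth event and accept the pending key via the wheel module. A sketch only; the paths are arbitrary and blindly auto-accepting keys has obvious security implications:

    # master config
    reactor:
      - 'salt/auth':
        - /srv/reactor/auth.sls

    # /srv/reactor/auth.sls
    {% if data.get('act') == 'pend' %}
    accept-pending-minion:
      wheel.key.accept:
        - match: {{ data['id'] }}
    {% endif %}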
22:21 signull hmmm. thanks
22:22 iggy there's not an easy "salt 'oldname' rename.minion newname" type command
22:22 signull i was writing something with libcloud over the weekend
22:22 signull iggy: less looking to rename a to b
22:22 iggy you're going to have to do some pasting together of different bits
22:22 signull iggy: more looking for: if it runs this state, it should get role-name<001>.hostname.com
22:23 signull yeah
22:23 signull ive been writing something with libcloud and libcloud hates AWS
22:23 iggy so you don't really care what the minion is called (because that's not what you originally said)
22:23 signull sorry for being confusing
22:23 iggy libcloud hates everything
22:23 signull lol
22:24 iggy (don't tell anybody I told you, but it's kind of a piece)
22:24 signull for instance for route53 you need to iterate zones to get the zone id and then use that to driver.get_zone
22:24 signull with openstack and others its just. driver.get_zone("some.zone.net")
22:24 signull no id needed
22:25 signull and they dont have that documented anywhere.
22:25 signull anyways yeah
22:25 signull probably gonna end up writing something then
22:25 signull thanks for the heads up about salt reactor for rename though
22:26 signull i really want to: if a server gets this role, rename it to role<available number>.dev.company.net and then update dns/route53
22:26 signull and when salt runs again, do nothing because it's done.
22:27 smcquay joined #salt
22:27 aquinas joined #salt
22:28 signull by the way does anyone know if there is a state module in the works for libcloud to do generic dns work?
22:28 iggy there's some boto modules that might work better
22:28 signull i didnt see any in the doc under ref/states/all/
22:29 signull probably.. but if salt is going to throw out boto. may as well sharpen my skills with libcloud.
22:29 MugginsM joined #salt
22:29 signull upsetting that the boto modules are stated as deprecated even though there isn't a fully working accepted replacement yet
22:37 evidence anyone running with gpg enabled on fbsd?  i have python-gnupg 0.3.7 installed properly, which works fine on the linux counterparts.   fbsd gives an sls rendering failure with no further info for anything that has gpg in the shebang.. #!yaml|gpg here for example
22:38 denys joined #salt
22:39 kellnola2 signull, cli53 lets you just type in the zone
22:39 transmutated iggy, I didn't see anything in there. I'll submit it though
22:43 scooby2 is there a way to override     - replace: False  on the command line?
22:45 cberndt joined #salt
22:45 hal58th1 Ahhh, what are you doing scooby2. A lot more info needed
22:48 scooby2 we have an important file set to - replace: False
22:48 scooby2 i want to test replacing it
22:48 teebes joined #salt
22:48 scooby2 i was just curious if there was a way to do it via command line or salt-call
22:49 hal58th1 I'm sure there is. Give me a few minutes. I think you just need to supply the data via a yaml block, but on the command line
22:50 scooby2 thanks
22:50 signull kellnola2: thanks thats a cool script
22:50 nitti joined #salt
22:50 signull I am honstly just going to write something with libcloud and make a reactor
22:51 otter768 joined #salt
22:51 signull if my code looks pretty enough I'll commit it to the project
22:51 murrdoc signull:  stop slacking
22:51 murrdoc dew it already
22:51 signull lol
22:52 signull i wrote something-ish over the weekend to iterate entries in route53 and provide the next role<avail-num>.dev.company.com and enter it
22:53 signull I just need to add change hostname, change minion name, accept new keys since minion name changed
22:53 signull and then make it all into a viable reactor
22:53 signull and yeah ima shut up now
22:55 murrdoc stop slacking
22:55 murrdoc start shipping
22:56 iwishiwerearobot joined #salt
22:57 iggy scooby2: salt 'foo' state.sls foo.config test=True ?
22:58 scooby2 iggy: with replace: False it doesn't want to touch the file
22:59 hal58th1 scooby2, both work.
22:59 hal58th1 sudo salt minionname state.single file.managed name=/etc/test replace=True source=salt://test
22:59 hal58th1 sudo salt-call state.single file.managed name=/etc/test replace=True source=salt://test
22:59 iggy ahh, I see what you're saying
22:59 scooby2 hal58th1: thank you
22:59 murrdoc its a python function call
22:59 murrdoc so its similar to how u would call the function in python
22:59 murrdoc captslow!
23:00 hal58th1 scooby2, welcome. but that's just doing it on a single command line. much better to do through highstate function when possible.
23:01 otter768 joined #salt
23:03 evidence finally found the gpg error.. 'Crypt' object has no attribute 'stderr'
23:04 MugginsM joined #salt
23:04 mosen joined #salt
23:08 ciss1 joined #salt
23:08 blackhelmet joined #salt
23:09 holms joined #salt
23:10 jalaziz joined #salt
23:13 bash1245_ joined #salt
23:22 adelcast joined #salt
23:23 holms joined #salt
23:25 Nazca__ joined #salt
23:28 holms joined #salt
23:31 waddles joined #salt
23:32 Nazca joined #salt
23:32 teebes joined #salt
23:33 kermit joined #salt
23:34 bash1245_ joined #salt
23:34 MK_FG joined #salt
23:34 yomilk joined #salt
23:37 Nazca__ joined #salt
23:38 waddles joined #salt
23:39 bash1245_ joined #salt
23:44 MorbusIff joined #salt
23:45 waddles joined #salt
23:45 [BNC]bash124512 joined #salt
23:48 jdowning joined #salt
23:49 ajw0100 joined #salt
23:50 Linuturk how would I have salt-call check to see if changes were made in the last run, and if so, exit nonzero
23:50 waddles joined #salt
23:50 Linuturk I want to run masterless salt twice, and check the 2nd run for changes
23:50 Linuturk if there are any, I need to fix that part of my states
23:52 MK_FG joined #salt
23:53 TTimo joined #salt
23:53 waddles joined #salt
23:59 waddles joined #salt
23:59 ajw0100 joined #salt
