
IRC log for #salt, 2013-11-06


All times shown according to UTC.

Time Nick Message
00:00 utahcon I am getting an error that jinja variable is undefined, the variable is {{ pillar['brand_long'] }} when I run pillar.items I see that the brand_long is set, any ideas?
00:00 whiteinge seanz: hello!
00:01 seanz whiteinge: How's your day going?
00:01 whiteinge decent. you?
00:02 seanz Pretty well.
00:09 elfixit joined #salt
00:10 pipps_ joined #salt
00:11 NotreDev joined #salt
00:12 honestly whiteinge!
00:13 Jahkeup joined #salt
00:22 tremendous joined #salt
00:35 NV so whats the correct way to make one minion trigger a command on another minion?
00:36 NV in this case my source control post-commit hooks on the release branch triggering a deploy
00:38 scott_w joined #salt
00:39 __number5__ NV: Reactor/Event system http://docs.saltstack.com/topics/reactor/index.html
00:40 NV will that allow me to do it without requiring additional code running on the minion?
00:41 __number5__ what do you mean by additional code?
00:42 NV I looked at the event system, but my understanding would be that I'd need a bit of code on the minion that listens to the event then does something as a result (ie, running salt-call to highstate or the like)
00:43 NV or is that what reactor does?
00:43 jacksontj anyone familiar with the command/arg passing stuff? i'm running into some weird stuff… looks like salt is converting my dict to a string before it gets to the module function
00:44 __number5__ NV: you need something/code to trigger the event
00:45 NV yeah, writing something to trigger is fine, just the handling ideally shouldn't need another process running and listening or anything
00:45 NV reading into reactor atm
00:47 __number5__ reactor will do the events handling for you, just need to write the reactor config to specify which sls should be run when a certain event happens
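A minimal sketch of the wiring __number5__ describes, with made-up file names, tags, and targets — the master config maps an event tag to a reactor sls, the reactor sls runs the state job, and the post-commit hook only has to fire the event (no extra listener process on the minion):

    # /etc/salt/master (sketch)
    reactor:
      - 'deploy/release':              # event tag to react to (invented)
        - /srv/reactor/deploy.sls

    # /srv/reactor/deploy.sls (sketch)
    run_deploy:
      cmd.state.highstate:
        - tgt: 'web*'                  # placeholder target

    # fired from the minion's post-commit hook:
    #   salt-call event.fire_master '{"branch": "release"}' 'deploy/release'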
00:48 jacksontj i think i found my problem. It looks like the localclient always takes kwargs and makes them key=value
00:48 jacksontj which is fine unless the value is large-- like the contents of a file
00:48 __number5__ jacksontj: can you try your args with the test module? http://docs.saltstack.com/ref/modules/all/salt.modules.test.html#salt.modules.test.arg
00:49 jacksontj i can repro with my module that just returns types
00:49 jacksontj basically the same, just dict of arg name and type
00:49 jacksontj the value is correct if you string'd the dict
00:50 jacksontj looks like the "load" object just has args, not kwargs
00:50 jacksontj so the client takes kwargs and makes them key=value and puts them in the args
00:50 jacksontj it should either have a kwargs dict (separate key) or make a list (of 2 items)
00:50 maoroo joined #salt
00:51 bhosmer joined #salt
00:51 jacksontj function is condition_kwarg in salt/client/__init__.py
00:54 jacksontj easy fix :)
00:54 jacksontj let me make a unittest for this though
00:54 jacksontj hmm, actually this will break if your arg is a list :/
00:59 Ryan_Lane joined #salt
01:04 teebes joined #salt
01:07 bemehow joined #salt
01:11 sroegner joined #salt
01:27 jslatts joined #salt
01:31 teebes joined #salt
01:31 kermit joined #salt
01:36 pipps joined #salt
01:38 bemehow joined #salt
01:39 scott_w joined #salt
01:43 g4rlic left #salt
01:52 NV __number5__: with reactor, is it possible to determine which minion fired the event?
01:52 snuffeluffegus joined #salt
01:55 bhosmer joined #salt
01:58 pengyao joined #salt
01:58 jhulten joined #salt
01:58 pengyao how did saltstack get its name?  Is there some history behind the name 'saltstack'?
02:00 cron0 joined #salt
02:00 forrest joined #salt
02:00 Gifflen joined #salt
02:03 NV to answer my own question, data['id'] has the minion id
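So a reactor sls can also target the minion that fired the event, roughly (the sls name is a placeholder):

    # /srv/reactor/deploy.sls (sketch)
    deploy_on_sender:
      cmd.state.sls:
        - tgt: '{{ data["id"] }}'
        - arg:
          - deploy                     # placeholder sls name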
02:26 oz_akan_ joined #salt
02:38 bemehow joined #salt
02:40 scott_w joined #salt
02:47 pipps joined #salt
02:48 steveoliver apt-get has half-configured package… maybe apt-get has some interactive prompts that I'm missing?
02:48 oz_akan_ joined #salt
02:54 mannyt joined #salt
03:02 diegows joined #salt
03:04 mwillhite joined #salt
03:04 anuvrat joined #salt
03:09 jhulten joined #salt
03:10 grep_awesome joined #salt
03:14 Ryan_Lane joined #salt
03:19 liwen joined #salt
03:19 [1]VertigoRay joined #salt
03:21 steveoliver …i guess apt-get processes that run for a long time will fail state checks until they are finished...
03:21 lineman60 joined #salt
03:21 steveoliver it seems my pkg.installed (for mariadb-server-10) is just really slooooww
03:22 Ryan_Lane hey, I don't see salt in the expo hall in openstack. no expo this time?
03:24 ajw0100 joined #salt
03:25 amckinley joined #salt
03:28 forrest Ryan_Lane, are any of the SaltStack guys there? I thought UtahDave was in Singapore?
03:28 forrest and I know Whiteinge is in Portland this week?
03:28 Ryan_Lane well, OpenStack is in hong kong ;)
03:28 __number5__ there is a bug in 0.17.1 related to apt-get/pkg/apt state
03:29 forrest Ryan_Lane, yea it's still 1600 miles
03:29 forrest Ryan_Lane, that's cool you get to go to that, are you paying out of your own pocket?
03:31 cnelsonsic joined #salt
03:39 bemehow joined #salt
03:40 scott_w joined #salt
03:41 joehh UtahDave: is in hong kong (or at least he was last night...)
03:43 ctdawe joined #salt
03:43 jesusaurus is UtahDave openstacking?
03:43 forrest I don't think so, pretty sure he's in Singapore this week.
03:43 Furao joined #salt
03:43 forrest Maybe some of the sales guys are there though?
03:43 jacksontj apparently UtahDave is omnipresent ;)
03:43 jcockhren forrest: figured out the syndic issue
03:43 forrest jcockhren, nice, what was it?
03:44 jcockhren needs minion, syndic and master services installed
03:44 jcockhren (on the syndic)
03:44 forrest before you just had the syndic and minion right?
03:45 honestly joined #salt
03:45 jcockhren before the syndic had salt-syndic and salt-master installed
03:45 whiteinge honestly!
03:45 forrest ahh ok, you should either do a pull request, or file an issue so those docs can get updated.
03:45 jcockhren it needs a local salt-minion so commands from the masterofmaster can be sent to the syndic's salt-minion server
03:46 jcockhren service*
03:46 jcockhren forrest: word
03:46 shinylasers joined #salt
03:49 forrest jcockhren, if you don't get around to filing that issue let me know, gotta go hit the gym
03:50 jcockhren forrest: ok. I'm in the middle of lxc templates with salt. Will file soon
03:53 honestly joined #salt
03:58 Teknix joined #salt
04:01 lineman60 joined #salt
04:02 bemehow joined #salt
04:05 bemehow joined #salt
04:14 sciyoshi1 joined #salt
04:15 Ryan_Lane joined #salt
04:16 octarine_ joined #salt
04:16 nineteen1ightd joined #salt
04:16 jrgifford_ joined #salt
04:16 eclectic_ joined #salt
04:19 frantou_ joined #salt
04:19 larstr_ joined #salt
04:19 jasiek_ joined #salt
04:19 jeblair_ joined #salt
04:20 ajw0100_ joined #salt
04:20 Furao joined #salt
04:22 andyshin` joined #salt
04:23 scalability-junk joined #salt
04:24 jankowiak joined #salt
04:25 chjohnst_work joined #salt
04:26 faldridge joined #salt
04:30 dan_johnsin joined #salt
04:31 anteaya joined #salt
04:33 UtahDave joined #salt
04:33 UtahDave forrest: I actually am in Hong Kong
04:34 nahamu I guess that means you're not the saltstack representative at LISA...
04:36 [1]VertigoRay joined #salt
04:37 Ryan_Lane joined #salt
04:41 scott_w joined #salt
04:43 noob2 joined #salt
04:45 jalbretsen joined #salt
04:47 jcockhren anyone know of / have experienced any issues with minions of syndics missing from masterofmaster calls?
04:47 noob21 joined #salt
04:48 jcockhren I have a topmost master that manages server minions and a syndic
04:48 linjan joined #salt
04:48 jcockhren that syndic has lxc minions
04:49 jcockhren if I do a "sudo salt -G 'virtual:VMware' test.ping" not all the LXCs appear consistently
04:50 jcockhren (b/c my test VM is based on vmware workstation)
04:50 jcockhren (hence the grain match)
04:50 jcockhren however, when the same command is run from the syndic, all the LXCs appear just fine
04:59 hazzadous joined #salt
05:05 forrest UtahDave, oh ok cool, how is openstack?
05:06 UtahDave very cool!
05:06 octarine joined #salt
05:09 anuvrat joined #salt
05:18 cachedout joined #salt
05:33 bemehow joined #salt
05:35 prooty joined #salt
05:42 faldridge joined #salt
05:42 scott_w joined #salt
05:45 mapu joined #salt
05:48 JulianGindi joined #salt
05:51 v0id_ joined #salt
05:55 forrest hey whiteinge, you still around?
05:56 whiteinge yeah
05:56 forrest Thumbs up on 8239!
06:00 matanya joined #salt
06:00 whiteinge oh, thanks. you had great suggestions!
06:00 whiteinge i didn't realize that whole sub-topic had been glossed over :)
06:00 forrest Yea being dumb comes in handy once in a while :P
06:01 forrest heh
06:01 whiteinge it's always nice to have thorough docs
06:01 * whiteinge shakes his fist at whoever wrote the ext_pillar docs
06:01 forrest lol
06:02 Teknix joined #salt
06:02 whiteinge (i've been struggling with it on and off all day today)
06:02 forrest oh that reminds me, have you seen this error trying to build the docs lately? ImportError: cannot import name tagify
06:02 forrest Yea you're on site with a customer this week right?
06:04 whiteinge i haven't seen that error. building now...
06:04 whiteinge i'm in portland for a conference
06:04 whiteinge devopsdays
06:04 forrest oh nice
06:04 forrest devopsdays are really fun
06:05 whiteinge yeah. the casual nature works really well for sharing ideas
06:05 forrest oh yea, I had a ton of fun at the one in Atlanta, wearing my salt t-shirt lead to a ton of good discussion with people.
06:05 forrest *led
06:06 whiteinge :D
06:06 forrest I pulled the error back up, looks like the dev branch was reporting it here:   File "/root/salt-test/salt/salt/minion.py", line 66, in <module>
06:06 forrest *shrug*
06:06 forrest Are the sales guys out there with you?
06:06 whiteinge no. just me
06:06 forrest Or are you getting to go to the talks? The open spaces are by far my favorite part
06:07 whiteinge no booth. i came to attend and chat in the open spaces
06:08 forrest nice, that's the best
06:08 whiteinge i'm not seeing the tagify error. not sure why
06:09 forrest weird, maybe I broke my virtualenv that I use for doc testing somehow
06:09 whiteinge that function is in the salt source so it *should* be import-able
06:09 Destro joined #salt
06:09 forrest yea, like I said I might have broken my virtualenv :P
06:10 forrest If they do an open space on getting started with contributing to open source software you should hit that one up, it was a really good discussion in Atlanta, we had one of the guys from opscode there.
06:13 whiteinge good topic
06:13 ctdawe joined #salt
06:22 giantlock joined #salt
06:27 gnawux joined #salt
06:28 bemehow_ joined #salt
06:37 gnawux hi folks, is there any salt state to specify filesystem of a block device, such as set /dev/sdb1 to be ext4
06:39 DallaRosa joined #salt
06:39 creich joined #salt
06:39 DallaRosa yo. I'm trying to use the iptables state to flush the iptables rules but I keep getting
06:39 DallaRosa "State iptables.flush found in sls firewall is unavailable"
06:43 scott_w joined #salt
06:44 fink_ployd joined #salt
06:45 creich joined #salt
06:53 eqe joined #salt
06:54 mephx_ joined #salt
06:56 chjohnst_work joined #salt
06:58 puppet joined #salt
07:00 ninkotech joined #salt
07:06 isomorphic joined #salt
07:09 Ryan_Lane joined #salt
07:12 faldridge joined #salt
07:16 bhosmer joined #salt
07:19 ninkotech joined #salt
07:22 matanya joined #salt
07:27 sebgoa joined #salt
07:28 Furao joined #salt
07:31 gnawux_ joined #salt
07:34 slav0nic joined #salt
07:34 slav0nic joined #salt
07:35 fink_ployd joined #salt
07:38 gnawux joined #salt
07:41 joehh joined #salt
07:43 scott_w joined #salt
07:50 giantlock joined #salt
07:51 ravibhure joined #salt
08:02 Iwirada joined #salt
08:09 mekstrem anyone else having connectivity issues when executing a command from master to minions? For me it returns jobs from some minions, then it stops and times out with "salt.exceptions.SaltReqTimeoutError: Waited 60 seconds"
08:10 mekstrem if i run again i get "minion did not return" on alla minions even those that worked on previous run
08:10 mekstrem and if i run the same command again a third time it works and i get response from all minions
08:11 mekstrem all*
08:11 matanya joined #salt
08:11 mekstrem how stable is zmq really?
08:11 ravibhure joined #salt
08:12 shinylasers joined #salt
08:22 echos joined #salt
08:26 joehh joined #salt
08:33 matanya joined #salt
08:41 hazzadous joined #salt
08:50 gildegoma joined #salt
08:50 zooz joined #salt
08:52 ravibhure joined #salt
08:57 hhenkel joined #salt
09:00 gildegoma joined #salt
09:01 hhenkel Hi all, we're currently looking for a cm system to implement in our existing environment. I've been playing around with puppet and it seems like one is forced to have a fact for everything you want to check.
09:01 austin987 joined #salt
09:01 hhenkel For example I'm not sure if "splunk" is installed on a system and I only want to create a splunk config file if it really is installed.
09:02 hhenkel So I need to have a fact that tells me that splunk is installed (from my understanding). Is there a better way with salt?
09:03 hhenkel For example is it possible to have a conditional to check if a package (rpm) is installed even if it is currently not managed with salt?
09:06 Ryan_Lane joined #salt
09:07 mike25ro joined #salt
09:07 * mike25ro hi
09:11 marcinkuzminski joined #salt
09:11 marcinkuzminski joined #salt
09:12 anuvrat joined #salt
09:12 marcinkuzminski joined #salt
09:18 carlos joined #salt
09:22 giantlock joined #salt
09:24 fxhp joined #salt
09:29 ninkotech joined #salt
09:32 creich hhenkel, yes it should be possible
09:32 creich i am kind of new to salt too, but as far as i have understood it, that should be possible
09:34 hhenkel creich: okay, I understand that you're new to salt as well but do you know any docs regarding something like that?
09:36 slav0nic hhenkel, maybe http://docs.saltstack.com/ref/states/requisites.html
09:37 Anb_ joined #salt
09:37 Anb_ joined #salt
09:38 creich hhenkel, http://docs.saltstack.com/ref/states/all/salt.states.file.html
09:38 creich think you can set up a "managed config file" and set the "require" to pkg: <your_package_name>
09:39 creich therefore the package itself doesn't have to be managed by salt
09:39 creich the function will only check if the package is there and if so create or even manage your file
09:39 creich if the package is not installed it will skip touching the file
09:40 creich hope i got the right idea of what you're trying to do
09:42 hhenkel creich: Yes, that is almost what I'm trying to achieve, I'll look at the doc you provided and give it a try. Thanks for the info so far!
09:51 creich you're welcome :)
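One way to get the behaviour hhenkel asks about without salt managing the package at all is a jinja guard around the file state, using the pkg.version execution module (it returns an empty string when the package is absent); the paths and names below are invented for illustration:

    {% if salt['pkg.version']('splunk') %}
    /opt/splunk/etc/system/local/inputs.conf:    # hypothetical path
      file.managed:
        - source: salt://splunk/inputs.conf
    {% endif %}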
09:51 ravibhure hi
09:52 ravibhure I am new to salt and trying to write states
09:52 ravibhure I have one question
09:52 ravibhure what I know is, usually to run top level states run 'salt 'testserver' state.highstate'
09:52 creich does anybody have an idea why i am running into timeouts every time i try to initially set up some state?
09:52 creich i already tried the -t option
09:53 ravibhure which executes top.sls on salt base path
09:53 creich but somehow it does not help
09:53 creich ravibhure, that is correct
09:53 ravibhure but I don't want to use top.sls
09:53 ravibhure have multiple sls on base root
09:53 creich you should use the top.sls file as your base description
09:53 ravibhure so is there any option?
09:54 ravibhure something like
09:54 ravibhure salt 'testserver' state.highstate 'anothertop.sls'
09:54 creich everything else should be ordered wihtin subdirectories or even other .sls files
09:54 creich ah yes it is possible
09:54 ravibhure so how it is ?
09:55 creich first of all i think in that case you should not run state.highstate, you should run state.sls 'your_state'
09:55 ravibhure I have multiple sls deployments and we trying to pull/run it for multiple purpose
09:55 ravibhure ok,
09:55 creich thats the way you can run specific sls files
09:55 creich but
09:55 creich what you are looking for is something like that i think
09:56 ravibhure thanks creich: trying your suggestion
09:56 creich ravibhure, you should look for the "#file_roots:" section within your master config
09:57 creich there you can specify different roots for complex states
09:57 ravibhure yep, have set it up already
09:57 creich each root has its own top.sls
09:57 creich ah ok
09:57 creich great
09:57 ravibhure yes, but want to keep single one
09:57 creich ok
09:57 ravibhure and use different top.sls
09:57 ravibhure ie. deployment defined level
09:58 creich i am not sure if i got your idea correctly but if you use one single sls for each of your deployments, you should go with the commandline mentioned above
09:59 creich each complex deployment should be done within those file_roots sub-directories i think
09:59 ravibhure cool, thanks creich: realy appriciate
09:59 ravibhure salt 'testserver' state.sls  centos6base
09:59 ravibhure which works great
09:59 creich as i said, each of these substructures has its own top.sls
09:59 creich :)
09:59 creich cool
09:59 ravibhure thanks man
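A rough sketch of what creich is pointing at — multiple file_roots in the master config, where each root can carry its own top.sls, while a single sls can still be run directly with state.sls; the directory names are made up:

    # /etc/salt/master (sketch)
    file_roots:
      base:
        - /srv/salt/base
      deployments:
        - /srv/salt/deployments

    # bypassing top.sls for one specific sls:
    #   salt 'testserver' state.sls centos6base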
10:00 ravibhure actually I have already been running chef for the last three years
10:00 creich and now you're trying salt
10:00 ravibhure but somehow for some reasons I need a few stacks in place
10:00 creich thats a switch i am interested in
10:00 ravibhure not just trying, I want to implement it strongly in prod
10:00 creich we are trying to figure out which tool we should use in the future
10:01 ravibhure great
10:01 ravibhure thats what
10:01 creich we have chef VS salt
10:01 ravibhure I have already used ansible, chef
10:01 creich and try to figure out which one we should use
10:01 ravibhure cool
10:01 hazzadous joined #salt
10:01 creich so what would you recomment and why?
10:01 ravibhure :)
10:02 ravibhure it varies with your environment and requirements :)
10:02 ravibhure but I am mixing salt with too many things
10:02 ravibhure you can say a admin web panel for all ops stuff
10:03 slav0nic creich, http://missingm.co/2013/06/ansible-and-salt-a-detailed-comparison/ + this book also exists http://devopsu.com/books/taste-test-puppet-chef-salt-stack-ansible.html but it's not free
10:03 ravibhure so basically I don't want to customize too many things beyond the base supported language
10:03 ravibhure yep
10:04 ravibhure but what I have found
10:04 ravibhure salt is good in taste ;)
10:04 ravibhure and much faster than ansible
10:04 ravibhure which gives me multiple (ssh/agent) supported agents
10:06 creich ok cool thanks for your opinion
10:07 creich the second book i already found, we have ordered it already
10:07 ravibhure great
10:07 creich so have fun then with your further work using salt :)
10:07 ravibhure we have tried all our own
10:08 ravibhure well said, did all things now time to rock with salt
10:08 creich :D
10:08 ravibhure 8-) thanks creich
10:08 creich yw
10:13 viq morfternoon
10:13 _ikke_ morternooning
10:16 krissaxton joined #salt
10:16 bhosmer joined #salt
10:17 unicoletti_ joined #salt
10:21 hazzadous joined #salt
10:25 viq Is there something akin to puppet's virtual resources http://docs.puppetlabs.com/guides/virtual_resources.html (namely, I'm interested in the search part of it) available in salt? Or the way to do it is by proper filtering of pillars or such?
10:27 _ikke_ What is the usecase?
10:28 viq Define a bunch of users. Certain servers get only certain users
10:30 _ikke_ Yeah, that's usually done through pillars
10:30 arthurlutz joined #salt
10:31 viq Now also trying to figure out how to make it that the users get different groups depending on what server they end up on
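A sketch of the pillar-driven pattern being discussed — which users (and which groups) a box gets is decided by pillar targeting, and one generic state renders whatever lands in the pillar; all names here are placeholders:

    # pillar/top.sls (sketch)
    base:
      'web*':
        - users.web
      'db*':
        - users.db

    # salt/users/init.sls (sketch)
    {% for name, info in salt['pillar.get']('users', {}).items() %}
    {{ name }}:
      user.present:
        - groups: {{ info.get('groups', []) }}
    {% endfor %}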
10:31 urtow joined #salt
10:37 creich does anybody have an idea why i am running into timeouts every time i try to initially set up some state? i already tried the -t option
10:39 viq creich: have you looked at master/minion logs?
10:40 harobed_ joined #salt
10:40 creich viq, yes but i somehow missed something i guess
10:40 krissaxton joined #salt
10:40 creich found some missing file in the meantime
10:40 creich thx
10:41 harobed_ where to place "fileserver_backend" ? in /srv/salt/top.sls ?
10:41 viq harobed_: I believe that it should go in /etc/salt/master
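For reference, it is a top-level master setting, along the lines of:

    # /etc/salt/master (sketch)
    fileserver_backend:
      - roots        # the normal file_roots backend
      - git          # gitfs, if configured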
10:52 whitepaws joined #salt
11:14 thewrinklyninja joined #salt
11:17 bhosmer joined #salt
11:18 lemao joined #salt
11:19 thewrinklyninja Hi all. Stupid question of the day
11:20 diegows joined #salt
11:21 Minioncito joined #salt
11:21 thewrinklyninja Does Salt have something comparable to manifests/cookbooks/playbooks etc
11:22 viq thewrinklyninja: yes, states
11:22 viq thewrinklyninja: kinda also formulas
11:22 thewrinklyninja Thanks
11:23 thewrinklyninja There's no repo or anything for them though right?
11:23 thewrinklyninja community states etc
11:24 nahamu there are repos for formulas
11:24 viq thewrinklyninja: well, there's https://github.com/saltstack-formulas
11:24 viq But there isn't anything comparable to say puppet forge
11:24 thewrinklyninja ah ok
11:24 thewrinklyninja Thanks
11:27 Minioncito Hi, can anyone help please? I found some troubles with syndic. I have a masterofmasters with 'order_masters', 'open_mode' and 'auto_accept' set to True and in the other hand, I have a regular master with 'syndic_master' option set to my masterofmasters hostname. I have started salt-master service on masterofmasters and salt-syndic on the regular master. Communications work fine but I still cannot see any key or auth attemp from masterofm
11:27 elfixit joined #salt
11:29 _ikke_ lol: https://github.com/saltstack-formulas/chef-formula
11:29 Jarus joined #salt
11:30 Minioncito If it helps, in my regular master I'm debugging syndic with 'salt-syndic -l debug' and no errors or critical messages appear. Last line is '[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem' and '[INFO    ] Waiting for minion key to be accepted by the master.' but the key never comes to the masterofmasters
11:30 viq Minioncito: it cut off at "see any key or auth attemp from masterofma"
11:31 Minioncito Sorry, pasting it again: 'Communications work fine but I still cannot see any key or auth attemp from masterofmasters. Nothing in logs on both sides :(. In the other hand, my regular master is also a master of 5 minions with also salt-master service running and it's working fine'
11:36 tempspace joined #salt
11:40 Minioncito an important fact, I'm using: salt-master-0.17.1-1.el6
11:42 faust joined #salt
11:42 harobed_ joined #salt
11:46 IJNX joined #salt
11:52 creich do i have to place files i want to copy to a minion into a special place?
11:52 creich i created a test file and called file.copy
11:52 mekstrem Minioncito: have you tried to delete all keys in the minion/pki folder then restart?
11:52 creich everytime i get source file is not present
11:52 mekstrem minion.pem minion.pu minion_master.pub
11:52 creich but if i call file.managed the same file would be found
11:53 viq creich: in your file_roots
11:53 krissaxton joined #salt
11:54 Minioncito mekstrem: yeah, and nothing happens :(
11:54 creich ah, ok thx. didn't notice that there was such a setting
11:54 creich thx :)
11:54 viq say you have /srv/salt/top.sls and you reference salt://ssh/key.pub then path would be /srv/salt/ssh/key.pub
11:54 creich ah oh that is what i did
11:55 creich as i mentioned file.managed found the file
11:55 creich only the copy command did NOT find it
11:55 viq oh, I think file.copy is for minion->master
11:55 creich ah
11:56 anuvrat joined #salt
11:56 viq Huh, no, it seems to be a simple cp, so on a single machine
11:56 creich so managed is the only way to get a file to the minion
11:57 viq well, there's http://docs.saltstack.com/ref/modules/all/salt.modules.file.html#salt.modules.file.manage_file
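Putting viq's path example into a state, roughly (the target path is arbitrary):

    # with a file_roots base of /srv/salt, salt://ssh/key.pub maps to /srv/salt/ssh/key.pub
    /root/.ssh/key.pub:
      file.managed:
        - source: salt://ssh/key.pub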
11:57 mekstrem Minioncito: Have you checked if you got the correct dns entry for your master? I mean so you have the right IP to the master? You can look in minion conf for "Master: xxxx" row
11:57 mekstrem Minioncito: i've had these problems before also
12:00 Minioncito mekstrem: I've configured the hostname and have checked that I can connect from the master to the masterofmasters
12:00 Minioncito and also checked that I can reach 4505 and 4506 ports
12:01 _ikke_ Minioncito: Have you tried setting logging to debug and watch the log files?
12:04 mekstrem Minioncito: as _ikke_ said. Kill the salt-minion and star the minion with "salt-minion -l debug" from command line then tail the minion logs
12:04 mekstrem start*
12:06 Minioncito mmmm... It's not a minion, but a regular master with syndic, to salt-minion service is not running
12:06 Minioncito s/to/so/
12:06 mekstrem ah my bad
12:07 mekstrem i was just referring to the debug message you pasted earlier so i thought you were talking about a minion :)
12:07 Minioncito mekstrem: no, it was salt-syndic debugging :)
12:07 mekstrem Minioncito: roger :9
12:07 mekstrem :)
12:15 Minioncito _ikke_: mekstrem: just in case, I'm seeing this line on the salt-syndic debug: '[INFO    ] Setting up the Salt Syndic Minion "None"' and I think it has to be related
12:16 Minioncito This None value should be the hostname of the salt syndic minion I guess
12:21 jeddi joined #salt
12:27 jslatts joined #salt
12:30 Iwirada joined #salt
12:32 blee_ joined #salt
12:40 Minioncito mekstrem: Minioncito so finally it's working, it seems that salt-minion package and config is needed to authenticate with syndic >.<
12:40 Minioncito don't understand that point...
12:40 Minioncito anyways, thank you guys for the help!
12:40 Minioncito I mean _ikke_ !
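For anyone hitting the same wall, a rough sketch of the split Minioncito ends up with (the hostname is a placeholder); per this thread the syndic box also needs a local salt-minion installed and configured alongside salt-master and salt-syndic:

    # master of masters: /etc/salt/master
    order_masters: True

    # syndic box: /etc/salt/master
    syndic_master: masterofmasters.example.com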
12:41 Teknix joined #salt
12:44 carlos joined #salt
12:48 Minioncito if anyone knows, why is salt-syndic part of salt-master's package if syndic needs the /etc/salt/minion file to work? Why is this file not included in the salt-master package, or even why is there not a separate salt-syndic package?
12:49 _ikke_ You have to talk to the package maintainers for that
12:51 cron0 joined #salt
12:53 gasbakid joined #salt
12:55 xl1 left #salt
12:58 teebes joined #salt
12:59 gasbakid_ joined #salt
13:00 abele joined #salt
13:06 gasbakid_ joined #salt
13:10 gasbakid_ joined #salt
13:12 carlos joined #salt
13:14 gasbakid_ joined #salt
13:14 jankowiak joined #salt
13:15 gasbakid joined #salt
13:17 gasbakid joined #salt
13:22 gasbakid joined #salt
13:23 \ask joined #salt
13:26 Destro left #salt
13:27 gasbakid joined #salt
13:31 gasbakid joined #salt
13:32 carlos joined #salt
13:34 tempspace Since 0.17, my pip.installed doesn't seem to be working, I get 'State pip.installed found in sls newrelic is unavailable'
13:37 gasbakid joined #salt
13:39 pentabular joined #salt
13:40 bhosmer joined #salt
13:41 diegows is there a way to render the template without executing it? I want to see the final version
13:41 diegows something like a dry run
13:42 _ikke_ something like salt '*' state.sls state test=True ?
13:44 diegows no, something like show_sls
13:44 diegows sls and show_sls shows the yaml parsed
13:44 diegows I want to see the yaml
13:44 diegows after jinja2
13:44 diegows just to debug something
13:45 mgw joined #salt
13:47 diegows well, fixed my issue... but it would be great to have something to display the rendered yaml files
13:48 _ikke_ yeah, agree
13:48 pentabular +1
13:48 scott_w joined #salt
13:49 _ikke_ lol
13:49 IJNX joined #salt
13:51 honestly diegows: "-l debug" will print the rendered yaml file
13:51 honestly "-l debug" and "test=True" is what you want
13:51 honestly (via salt-call on the minion)
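i.e. something along the lines of (the sls name is a placeholder):

    salt-call -l debug state.sls mystate test=True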
13:53 IJNX How do I command salt minion from laptop behind NAT network?
13:53 IJNX I have my minion running in some public IP address.
13:54 LarsN IJNX: in my experience, the Master needs to be accessable by the minions.
13:55 bhosmer joined #salt
13:56 _ikke_ IJNX: You need to port-forward port 4505 and 4506 to your laptop
13:56 IJNX I had a good plan: 1) tune install scripts locally using two vbox machines: master and minion. 2) deploy to minion running in cloud
13:56 LarsN and then, if you don't have a static IP address, I'd recommend configuring something like dyn.
13:57 LarsN IJNX: that's not dissimilar to what we did.  Although we setup 4 instances in the cloud (master and three minions), as a development environment
13:57 LarsN in part because it allowed us to tune the salt-cloud configurations.
13:58 LarsN our goal was to type a command, and ~10-15 minutes later have a completely configured worker instance.
13:58 jeffrubic joined #salt
13:59 IJNX …but?
13:59 LarsN We then put all of our state files into a git repo.  with a Master branch, and a release branch, and git hooks on the production master server that do a git pull, and execute state.highstate when we merge changes in master to release.
14:00 LarsN I was just explaining our goal was perhaps a bit different than yours.
14:01 IJNX not so much different. I just don't want to touch the cloud server until I have a good enough script. I have a cheap plan which doesn't include recovery...
14:01 LarsN <--- works for a large public cloud.
14:02 IJNX I have git in use and I'm planning to do exactly the same steps with it.
14:03 linjan joined #salt
14:03 oz_akan_ joined #salt
14:03 IJNX I have this expressjs+couchdb+angularapp to be released.
14:05 oz_akan_ joined #salt
14:06 IJNX this laptop is the pain point in this, and the lack of mobile IP
14:07 IJNX I would really love to have public static ipv6 address for my laptop.
14:07 LarsN the catch for us was making sure our cloud provider & cloud profiles were properly configured.
14:08 jcsp joined #salt
14:08 mgw joined #salt
14:09 brianhicks joined #salt
14:09 IJNX cloud profile?
14:09 amahon joined #salt
14:10 LarsN so using salt-cloud you setup a provider.
14:10 LarsN which would be something like.... HP Cloud, US West, Availability zone 3
14:10 Gifflen joined #salt
14:10 LarsN and then a profile would be something like...  standard.xsmall, with a specific OS image, some security groups, and you can set grains at the time the instance is created.
14:11 LarsN our state files build software based primarily on grains.
14:11 ipmb joined #salt
14:11 LarsN so salt-cloud creates the cloud instance for us, bootstraps salt-minion on it, ties it to the master, and sets metadata about the instance.
14:11 LarsN which then has highstate run against it....
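Roughly the shape of a salt-cloud profile like LarsN describes — every name, image and value here is invented, so treat it as an outline rather than working config:

    # /etc/salt/cloud.profiles (sketch)
    standard_xsmall:
      provider: hp-uswest-az3          # defined in the cloud provider config
      size: standard.xsmall
      image: some-os-image
      minion:
        grains:
          role: worker                 # metadata the states key off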
14:12 IJNX ah, ok so you can now turn up the volume on need basis?
14:12 LarsN yeah
14:12 IJNX cool
14:12 LarsN or replace an instance that died in the event of a host failure.
14:13 IJNX …which never should happen :)
14:13 LarsN the next step in my master plan was to poke monitoring when the instances were up, and in the event one died, to remove it & build a replacement.
14:14 LarsN I treat cloud instances as ephemeral.
14:14 LarsN so while they shouldn't die....  I expect they might
14:14 LarsN :)
14:14 ctdawe joined #salt
14:15 creich can i use get_file within a state file to get some tgz archives from master to minion during rollout of a state? and if so, how?
14:16 LarsN creich: I've used file.managed in the past.
14:16 LarsN although I don't know that I'm doing it right in that regard :)
14:16 creich i tried that, but somehow this seems not to work for large files (~1gb)
14:16 racooper joined #salt
14:16 juicer2 joined #salt
14:16 creich i used managed also for small configs and so on
14:16 LarsN my next push on my personal salt cluster is to setup a whole lot of git.
14:17 creich but i think i don't really want to manage that file and thought i could prevent some overhead
14:17 LarsN virtually all of my personal environments are managed via a git server.  If/when I get this solved, I might actually be at the point of hands free laptop deployment :)
14:17 creich oh than good luck with that
14:17 creich :)
14:17 mapu joined #salt
14:18 LarsN creich: if/when you figure out the file_get thing point me at an example sls entry.
14:18 LarsN I think I could use that as well :)
14:18 creich i sometimes experience some problems with the connection between my master and its minions
14:19 creich maybe that has to do something with my virtual machine setup
14:19 creich but i am not sure
14:19 LarsN IJNX: are you working with Salty-Vagrant?
14:19 LarsN or just regular VMs and salt?
14:19 creich has anyone had problems like: the master after some time could not ping some minions, or something like that
14:20 mwillhite joined #salt
14:20 creich i've set up two vms within Virtual box
14:20 LarsN creich: I've seen that as well.  Typically I blame it on the terrible connection at my Apartment.
14:20 creich and try to configure the one from the other
14:20 creich nothing complicated i think
14:20 creich its just to play around a bit
14:20 creich hmm but maybe that is a salt problem
14:20 creich and that would be bad
14:21 creich cause i am evaluating this for possible usage within our company
14:21 LarsN I haven't used virtualbox in so long, but I remember some of the networking being sub optimal.
14:21 creich and so far i am happy with salt
14:21 IJNX LarsN: regular VM and salt.
14:21 creich but that would not be acceptable
14:21 Kholloway joined #salt
14:21 LarsN you "might" try making sure the virtual interfaces are bridges, rather than using nat, which I think is the default.
14:21 LarsN IJNX: rgr.
14:21 IJNX creich: is ZeroMQ really designed to work with 1gb file transfers?
14:21 creich hmm i'll try that
14:21 creich IJNX, don't know
14:21 creich thats the question
14:22 creich i try to figure out
14:22 LarsN I've got Vagrant-kvm on my laptop I use for development.  Been meaning to setup salty-vagrant, just haven't spent the time.
14:22 LarsN was out of town for the last two weeks training.
14:23 JulianGindi joined #salt
14:23 halfss joined #salt
14:23 IJNX creich: at least the socket buffers are always full with just file data with files that big.
14:24 LarsN creich: you "could" cheat and pass an ssh key to the minions, and execute scp against the master to pull the larger file
14:24 Brew joined #salt
14:24 LarsN and now that one state, can require another state, you could then remove that key at the end of your run...
14:24 halfss joined #salt
14:24 LarsN again, that's likely doing it wrong. :)
14:24 IJNX LarsN: I have network working fine in bridged mode, but then I'm behind NAT at my home and behind different NAT when mobile (iPhone). Those are causing some challenges with IP addressing and ports.
14:25 LarsN IJNX: dyndns should help solve the IP addressing if the master is at home behind nat
14:25 IJNX LarsN: how does that actually differ from using git to pull data?
14:26 LarsN IJNX: the ssh+scp bit?  not at all, other than perhaps the 1gb file isn't in an SCM
14:26 halfss joined #salt
14:26 bhosmer joined #salt
14:29 vaasu joined #salt
14:29 halfss joined #salt
14:31 IJNX have you used sale-ssh the masterless version?
14:31 IJNX that would solve the port forwarding issues
14:32 IJNX s/sale/salt
14:33 IJNX right… my virtual machines just decided to shut down due to "low battery" :P
14:34 pnl nice one
14:35 _ikke_ lol
14:36 halfss joined #salt
14:36 IJNX If this wasn't sticky enough already...
14:38 pdayton joined #salt
14:38 halfss joined #salt
14:38 toastedpenguin joined #salt
14:39 halfss joined #salt
14:40 Ryan_Lane joined #salt
14:41 MTecknology I have about 400 Debian servers that I want a specific state applied to, unless it's a specific server. One server in the entire bunch should not get this state. What's the bestest best way to exclude it?
14:42 IJNX I have again this feeling like I'm seeing the lovely place in the horizon but I need to first go through this jungle filled with traps and poison oaks, rivers with hiding crocs, odd deep sea creatures, tempting sidetracks…
14:42 ahale_ MTecknology: compound match like '* not specifichost' ?
14:44 gasbakid_ joined #salt
14:44 viq MTecknology: http://docs.saltstack.com/topics/targeting/compound.html last example
14:44 slav0nic does watch work correctly with user?
14:44 slav0nic cmd.run:
14:44 slav0nic ...
14:44 slav0nic - watch:
14:44 slav0nic - user: foo
14:44 slav0nic where `foo` is user.present; the cmd starts not only when the user is created
14:44 MTecknology ahale_: viq: Thanks!
14:44 MTecknology slav0nic: use a pastebin next time
14:44 slav0nic MTecknology, it was only 3 lines ;)
14:45 MTecknology That was six lines; and three is more than enough for a pastebin
14:45 shinylasers joined #salt
14:45 slav0nic MTecknology, okey, will chat with u only via pastebin)
14:47 LarsN MTecknology: you could do something similar to....  {% if grains.fqdn != 'some.host' %}
14:47 LarsN do the things....
14:48 LarsN {% endif %}
14:50 imaginarysteve joined #salt
14:51 MTecknology This is a bit of a challenge because all of my servers use rsyslog and have states set up for remote logging; but the logging server needs a whole lot of these states to not be applied.
14:51 LarsN MTecknology: seems like virtually all other things related to computers, 900 ways to skin that cat.
14:51 LarsN or at least 2
14:51 bhosmer joined #salt
14:51 MTecknology well.. when not using windows anyway
14:52 LarsN MTecknology: you could put the rsyslog.sls on its own, and wrap it with the {% if grains.fqdn  !=...} bit
14:53 LarsN I've done that to prevent invalid cron entries from being applied to FreeBSD hosts which use a different syntax.
14:55 MTecknology Trying to migrate so many servers from a chaos-managed never-updated very-insecure system to a salt-managed always-updated hardened system is a bitch...
14:57 viq MTecknology: nodegroups to the rescue?
14:57 tempspace MTecknology: I set a pillar for our rsyslog server called RSYSLOG_SERVER and then only apply the client rules if RSYSLOG_SERVER is False
14:58 MTecknology heh.. that could work nicely
14:58 MTecknology viq: something else neat for me to learn?
14:59 bhosmer joined #salt
14:59 quickdry21 joined #salt
15:00 viq MTecknology: http://docs.saltstack.com/topics/targeting/nodegroups.html
15:02 MTecknology that seems like a wrapper to compound matches to let your top.sls be prettier
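The compound form ahale_ and viq point at would look something like this in the top file (hostname and sls name are placeholders); tempspace's alternative is to key the client states off a pillar flag instead:

    # top.sls (sketch)
    base:
      '* and not logserver.example.com':
        - match: compound
        - rsyslog.client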
15:03 mwillhite joined #salt
15:04 mannyt joined #salt
15:04 viq ish, yeah
15:05 MTecknology side note: I just got signed up for my elective benefits for the year.  $69.11/mo for medical, dental, vision, life, and accidental death and dismemberment
15:05 viq Wrong continent ;)
15:08 MTecknology your continent is?
15:08 xmltok joined #salt
15:09 viq Europe
15:10 MTecknology I hear health care is actually sensible over there. Here, we have mister gov't trying to force yet more shit on us.
15:10 MTecknology s/trying to force/forcing/
15:12 quickdry21 This is driving me nuts, I have 90 minions I'm trying to apply highstate to, and I run into this bug every time - https://github.com/saltstack/salt/issues/7755
15:13 LarsN MTecknology: you must also work for a reasonably large organization.
15:13 MTecknology 24,000 or so people; I'm the only linux admin
15:14 LarsN so my idea of Large has changed since coming here.  but yeah that's not bad sized.
15:14 viq ...ouch
15:14 * LarsN works with ~ 340,000 other people.
15:14 viq hah
15:15 MTecknology It doesn't seem large to me. It feels like we're in the medium category. 400 mismanaged servers, though... that's giant (giant mess)
15:15 anitak joined #salt
15:17 slav0nic joined #salt
15:17 anitak Guys, quick question if I may. I am looking for a specific commit that fixes a regression (cannot use a version in pkg.install state). The commit is https://github.com/saltstack/salt/commit/91b11646df694f88d56a5a1d9977851a41eb8fb9 How do I find out when and if this will make it into a release?
15:18 MTecknology anitak: https://github.com/saltstack/salt/pull/7949  (link in the title) It'll make its way into the next release (0.17.2)
15:19 anuvrat joined #salt
15:19 anitak Ah cool. Where do I see that though? Seems like there is no milestone listed.
15:19 anitak May be I am looking in the wrong place
15:21 MTecknology The develop branch is what it was merged into and that branch is what becomes the next release
15:27 forrest joined #salt
15:28 MTecknology where's utahdave? :(
15:31 matanya joined #salt
15:33 Kholloway joined #salt
15:33 forrest MTecknology, he's out in Hong Kong at the openstack expo, so he's probably sleeping .
15:33 jankowiak joined #salt
15:34 MTecknology forrest: oh... I wanted to tell him that I'm going to be pushing to be sent to saltconf; he asked me once upon a time when we were talking about the reactor
15:35 forrest nice
15:35 MTecknology I'm going to be pushing for the pre-conf too so I can shoot for that certification
15:36 forrest Well, did anyone from your company go to puppetconf?
15:37 MTecknology nope
15:37 nocturn joined #salt
15:38 forrest I'm curious about the pre-conf stuff for the cert as well, but there's no way I'm dropping 1250 myself to go
15:38 forrest It would be my only self funded conference for 2014 if that was the case :\
15:39 MTecknology it'll be my only conference for the year
15:39 forrest Usually I try for 2, but I usually pay myself
15:39 lineman60 joined #salt
15:40 MTecknology I can't afford paying for them myself. I can actually prove value for this one, though. :)
15:40 MTecknology back in 30
15:41 creich is there a way to use the cp.get_file command during rolling out a state?
15:41 creich i have to bring some "large" (~1gb) archives to my minions
15:41 creich but using file.managed will get a timeout
15:42 creich but salt '*' cp.get_file.... works for me
15:42 creich now the only idea i have is to use that command
15:42 creich but i want to ensure the existenc of that files during state rollout as i said
15:42 cnelsonsic joined #salt
15:42 creich is there any way to do that?
15:43 viq creich: I believe the salt:// file server is not recommended for large files, maybe you could host it over http for example?
15:44 creich ok, i think in production these files will either be hosted on an nfs mount or within some deb packages to roll them out
15:44 creich i just was playing around within my test-system and had to roll out this files manually
15:44 creich i just want to give that a try
15:44 creich thx
15:44 forrest creich, did you try: http://docs.saltstack.com/ref/states/all/salt.states.file.html#salt.states.file.copy
15:44 mwillhite joined #salt
15:45 creich yes found it, but maybe i did it wrong... i'll give it a try again
15:45 creich thx for the hint
15:47 jslatts joined #salt
15:51 symroe joined #salt
15:52 thatcr joined #salt
15:53 forrest creich, np, I don't know if it will work for exactly what you want, but it's worth a shot.
15:53 scott_w joined #salt
15:53 creich forrest, if i use copy i got the error that the file is not present
15:53 forrest ok, copy must only be for local file then, sorry
15:53 creich yeah i think so
15:53 creich yw
15:53 creich it was worth the shot
15:53 creich ;)
15:53 forrest Yea
15:54 forrest I don't know why file.managed times out on big files
15:54 symroe Is it possible to run a macro in a for loop?  I don't see why it shouldn't, however the recipes in the macro aren't doing anything (i.e., it's as if they aren't there)
15:54 LarsN creich: could that 1gb file be put in git?
15:54 LarsN then use the git clone mechanism? :)
15:54 forrest symroe, within jinja
15:54 forrest *?
15:54 symroe forrest: Yep
15:54 LarsN because adding more mechanisms around it is sure to make things better :)
15:54 harobed_ joined #salt
15:54 dave_den do not put a 1gb file in git.
15:55 mekstrem joined #salt
15:55 creich LarsN, that may be possible, but .. as dave_den said
15:55 forrest creich, can you try to use file.managed again, but increase the command timeout?
15:55 creich so i can do a workaround and copy the file with scp using cmd.run
15:55 LarsN dave_den: I think more importantly, don't put binary files in git. :)
15:55 LarsN but yeah.
15:56 viq for such things this sounds attractive https://github.com/mattray/bittorrent-cookbook
15:56 forrest symroe, I know there are limits to the extent you can do stuff in jinja, might be worth investigating if it will work there first.
15:57 symroe forrest: thanks, I guess it was more of a jinja question anyway
15:57 forrest symroe, I wish I had a better answer for you
15:58 symroe forrest: Ok, so it might help if I could see the rendered template.  Can I do that anywhere?
16:00 mekstrem Anyone know if the bug with minions forking themselves a gazillion times when the master is unreachable is fixed?
16:00 mekstrem in 0.17.1 that is
16:00 kermit joined #salt
16:01 forrest symroe, I'm not sure
16:01 scott_w joined #salt
16:01 amckinley joined #salt
16:02 symroe forrest: ok, thanks
16:03 cachedout joined #salt
16:04 forrest mekstrem, I'm not sure, haven't seen that bug
16:04 scott_w_ joined #salt
16:06 cnelsonsic joined #salt
16:06 imaginarysteve joined #salt
16:07 gasbakid__ joined #salt
16:12 xmltok joined #salt
16:13 timoguin joined #salt
16:15 symroe joined #salt
16:15 jslatts joined #salt
16:15 xmltok joined #salt
16:15 sgviking joined #salt
16:16 jumperswitch joined #salt
16:16 scott_w joined #salt
16:17 creich forrest, ok file.managed with a higher timeout works for me :)
16:18 creich thx
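i.e. keeping the file.managed state as-is and just raising the CLI timeout when applying it, something like (the number and names are arbitrary):

    salt -t 600 'myminion' state.sls bigfiles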
16:18 mekstrem forrest: ok. Having trouble in 0.15.3 where the minion host fork bombs, creating 100+ minions trying to connect to the master when the master is unavailable
16:18 mekstrem found some workarounds but just wondering if it was fixed in 0.17.1 :)
16:18 viq mekstrem: I seem to recall something like that being fixed since 0.15, but that's just my hazy memory
16:19 mekstrem viq: i've seen people with 16.x having these issues also
16:19 mekstrem normally, how many salt-minions processes SHOULD there be running? 2?
16:21 dave_den mekstrom: there should only be one salt-minion process, but that process can have many threads
16:22 mekstrem dave_den: ok cause when i do 'ps -ef | grep salt-minion' on a host with 0.17.1 running i get 2 minions returned back
16:22 dave_den on what distro, and how was salt installed?
16:22 forrest creich, yea np, glad that worked!
16:22 forrest mekstrem, oh yea I have no idea, there have been so many bug fixes since then it would be a chore to search them!
16:23 mekstrem SLES 11.1 through rpms that we manually install however downloaded from upstream
16:23 scott_w joined #salt
16:24 dave_den mekstrem: you should only see one salt-minion process. when I do 'ps -ef | grep salt' on a centos 6 host with 0.17.1 i get one process listed.
16:24 mekstrem dave_den: interesting
16:26 dave_den but if you check the /proc/<PID>/task directory, you will see all the pids of the threads
16:26 dave_den on my centos 6 minion doing nothing, it has 3 threads total.
16:26 anitak MTecknology: thank you!
16:26 symroe joined #salt
16:29 Kholloway joined #salt
16:33 xmltok joined #salt
16:33 vaasu joined #salt
16:34 unicoletti_ left #salt
16:34 jY is there a way other then putting regexes into top.sls to say a minion should be in a certain environment
16:35 dave_den jY: http://docs.saltstack.com/topics/targeting/index.html
16:35 LarsN jY: ^ that :)
16:36 jalbretsen joined #salt
16:37 UtahDave joined #salt
16:37 forrest jY, you can also use environments and file_roots (documented in http://docs.saltstack.com/ref/states/top.html) to create multiple environments.
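A sketch of the grain-based alternative to minion-id regexes in the top file, assuming a 'dev' environment has been defined in file_roots and an 'env' grain is set on the minions (both are invented here):

    # top.sls (sketch)
    dev:
      'env:dev':
        - match: grain
        - common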
16:40 forrest can't sleep again UtahDave?
16:40 UtahDave he he.   Just wrapping up notes from today
16:40 forrest ahh ok
16:41 UtahDave forrest: You holding down the fort in here?
16:41 LarsN So UtahDave Amos did a pretty good job of presenting to the LOPSA group.
16:41 forrest Nope, letting the IRC burn to the ground :P
16:41 LarsN Next up is a presentation to our group internally.
16:42 UtahDave lol, forrest
16:42 UtahDave LarsN: That's great!
16:42 UtahDave glad to hear that!
16:42 forrest In reality everyone has stepped up in your absence to get questions answered.
16:42 forrest except dave_den, giving out terrible advice at all hours of the day :D
16:42 UtahDave forrest: that's awesome. We've got such a great community in here.
16:43 LarsN I'd like to go to SaltConf, but I don't think I can justify it in my new role.
16:43 UtahDave LarsN: What??????
16:43 forrest Yea it spoils you for when you go to another channel and it takes an hour for someone to respond
16:43 forrest LarsN, are you in marketing now?
16:43 LarsN I moved to the Automation team, which should be a good fit, but we have another Automation tool that's heavily entrenched.
16:43 forrest psssssssh
16:43 UtahDave LarsN: Well, we'll miss having you here for sure
16:44 LarsN I'll get it on my calendar for next year though.
16:44 LarsN even if I have to travel on my own time.
16:44 LarsN dime/
16:44 forrest LarsN, what tool are you guys using?
16:44 dave_den forrest and I have been busy, UtahDave! Don't know how you keep up with all the help you give here
16:44 UtahDave LarsN: cool.
16:44 forrest didn't you see the last salt air dave_den? He had a few extra arms grafted on.
16:44 dave_den heh
16:44 LarsN Some of our teams are using Salt, but under the hood there's a lot of Chef
16:45 UtahDave LarsN: I think a bunch of people from the seattle group are coming. Speaking, too, I think
16:45 LarsN yeah.  the PaaS teams are using Salt.
16:45 forrest LarsN, ahh, does the automation team mostly use ruby?
16:46 LarsN forrest: part of it would be.  $Years of time invested in $Tool
16:46 LarsN is always a tough cookie to crack.
16:46 UtahDave LarsN: yeah, EntropyWorks gave a great lightning talk yesterday about using Salt to stand up openstack infrastructures. He has to integrate with all the chef stuff that underpins a lot of the other infra stuff
16:46 forrest LarsN, yea I understand.
16:47 UtahDave I'm looking at the room service menu here. One page is titled, "ASIAN AND CHINESE DELIGHTS"
16:47 forrest lol
16:47 UtahDave Then the second menu option is     FRIED RICE "FUKIEN STYLE"
16:47 UtahDave LOL
16:47 forrest You need to go find some street vendors
16:47 forrest get one of those awesome crepe like things with all the cabbage
16:49 kermit joined #salt
16:49 quickdry21 How would I go about targeting a list of minions in my top file that have a certain grain? Example type grain = typea, typeb, typec, typed
16:49 viq http://pbot.rmdir.de/NnjEa4WiKiWMMmFHW5K5zg  - how do I do this properly? Right now it creates group1 and group3
16:50 shinylasers joined #salt
16:50 viq quickdry21: http://docs.saltstack.com/topics/targeting/compound.html
16:50 forrest viq++
16:50 UtahDave quickdry21:  add     - match: grain
16:50 viq and https://github.com/viq/cm-lab-salt/blob/master/salt/roots/salt/top.sls
16:51 quickdry21 viq: I tried compound: 'G@type:typea,G@type:typeb,G@type:typec,G@type:typed'
16:51 viq quickdry21: not commas, 'or' or 'and'
16:51 forrest hey viq, have you tried using jinja map files for this?
16:51 quickdry21 ah
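i.e. with 'or' instead of commas, something like (the sls name is a placeholder):

    # top.sls (sketch)
    base:
      'G@type:typea or G@type:typeb or G@type:typec or G@type:typed':
        - match: compound
        - common.typestuff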
16:52 UtahDave viq: replace  {{ group }}   with {{ args.pop() }}
16:52 mekstrem UtahDave: quick question. How many minion processes should there normally be running in 0.17.1? 2?
16:53 viq UtahDave: yeah, that worked, thanks! Now to see what else it complains about ;)
16:53 viq forrest: kinda, in a different spot, still figuring out how to use them properly
16:53 forrest viq, ok cool. I was just curious
16:54 viq forrest: here's part of my attempts - https://github.com/viq/cm-lab-salt/tree/users/salt/roots/pillar/users
16:54 UtahDave mekstrem: usually there's one minion process.  Each job sent from the master will fork a new minion process, which should then die when it finishes.
16:55 forrest viq, I'll check it out in a bit, gotta run to a meeting
16:55 viq forrest: sure, just showing/boasting ;)
16:55 forrest heh
16:55 mekstrem UtahDave: okey, well when looking up with 'ps -ef | grep salt' i get two salt-minions running on each host that has 0.17.1
16:56 UtahDave mekstrem: Hm. what happens if you restart the salt-minion service?
16:56 mekstrem UtahDave: experienced a known bug today also where the minion fork bombs itself, creating 100+ salt-minions if there's no connection to the salt master
16:56 mekstrem but that's for 0.15.3
16:56 UtahDave yeah, that was a fun bug.
16:56 mekstrem gonna try reproduce in 0.17.1 soon
16:56 UtahDave cool, thanks, mekstrem
16:57 ctdawe joined #salt
16:57 utahcon I am having problems with pillar in my states: http://pastebin.com/RMhWdZDR
16:57 mekstrem we had some questioning network engineers mailing us today wondering why there were over 15k tcp connections to our master host x)
16:57 utahcon Can anyone suggest changes?
16:57 viq utahcon: Spam Detection For Pastebin ID: RMhWdZDR   :D
16:58 utahcon oops
16:58 utahcon viq: http://pastebin.com/RMhWdZDR
16:59 utahcon the error btw is on crwmi-dev
16:59 viq utahcon: try instead {{ salt['pillar.get']('brand_long') }} and see if it works better
16:59 viq also try salt crwmi-dev pillar.get brand_long
17:00 utahcon the salt command returns the proper value from the server
17:01 utahcon /s/server/minion/
17:01 ashtonian joined #salt
17:03 pipps_ joined #salt
17:03 bemehow joined #salt
17:04 utahcon viq: so why does changing that one to the extra syntax make it all work?
17:04 UtahDave utahcon: what's the output of          salt '<the minion>' state.show_sls repo.cr.internal
17:05 utahcon UtahDave: basically empty, just the minion name, and the ----- line
17:05 ctdawe joined #salt
17:05 elm- joined #salt
17:06 xmltok_ joined #salt
17:06 UtahDave utahcon: that syntax will use a default if it can't find that pillar variable
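So in the template, roughly:

    {# fails to render if the pillar key is missing on that minion #}
    {{ pillar['brand_long'] }}

    {# falls back to a default instead (the default value here is arbitrary) #}
    {{ salt['pillar.get']('brand_long', 'some-default') }}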
17:07 bhosmer joined #salt
17:07 jacksontj joined #salt
17:08 troyready joined #salt
17:08 teskew joined #salt
17:12 redondos joined #salt
17:12 KyleG joined #salt
17:12 KyleG joined #salt
17:14 mesmer joined #salt
17:14 jdenning joined #salt
17:16 claudep joined #salt
17:21 vimalloc joined #salt
17:22 vimalloc Has anyone seen something like this before? I changed a minion's id, accepted it on the master, and it is working, test.ping-able, etc. However every once in a while the old id for that minion will show up in the 'Unaccepted Keys' list
17:24 dave_den vimalloc: how are you setting the minion_id ?
17:24 abe_music joined #salt
17:24 vimalloc /etc/salt/minion id: <new_id>; /etc/init.d/salt-minion restart
17:25 vimalloc grepping 'id' in the /etc/salt/minion only shows one uncommented line, which is the id: <new_id> one
17:25 claudep is it possible to set multiline content to repl parameter of file.replace state?
17:26 ajw0100 joined #salt
17:26 claudep currently I'm getting "ScannerError: while scanning a simple key"
17:27 elm- hello, I've a problem with losing the connection to my minions from time to time and the only way to re-establish it is to restart the minions. e.g. test.ping yields no response and even with a debug trace I don't see any error messages on master or minion about connection problems. Any tips how to debug this?
17:28 dave_den vimalloc: does "cat /etc/salt/minion_id" show the correct minion id?
17:29 vimalloc I don't see a /etc/salt/minion_id file.
17:30 dave_den vimalloc: also, next time you see the old minion id show up in Unnaccepted keys in the master, do a diff on the accepted key with proper minion id against the unaccepted key:   diff /etc/salt/pki/master/minions/minion_id /etc/salt/pki/master/minions_pre/minion_id
17:30 dave_den vimalloc: what salt-minion version are you running?   salt-call --versions-report
17:31 vimalloc looks like the minion is running salt-minion 0.16.2, and the master is salt-master 0.16.3. Perhaps that is part of the issue.
17:31 UtahDave elm-: I think there's an open issue on what you're seeing
17:33 vimalloc Also did a diff of the accepted key and the unaccepted one. No differences.
17:34 UtahDave vimalloc: what version of Salt on your master and minion?
17:34 dave_den vimalloc: have you made sure that you stopped all the salt-minion processes and then restarted salt-minion?
17:35 vimalloc Yea, stopped minion, verified with ps, started again. And the master is 16.3, minion is 16.2. I just noticed there was a difference there, so I will be upgrading the minions today.
17:37 elm- UtahDave: Thanks, I just checked, there are some issues regarding connection problems, but they show distinct connection errors for it in the log, I'm not getting any error. I just checked, curiously also the other way around, salt-call on the minion works, while at the same time salt from the master does not work
17:38 UtahDave elm-: hm.  can you test against the develop branch from git? Or at least 0.17.1?
17:39 elm- UtahDave: It's the 0.17.1
17:39 UtahDave ok
17:40 dave_den elm-: can you gist the connection errors you see in the logs?
17:40 elm- Is there a nightly repository with builds or something I could try, or do I have to build from source to test the latest?
17:40 claudep my failing snippet: https://gist.github.com/claudep/7340719 :-(
17:41 dave_den elm-: from git is the only way to get the latest code
17:42 dave_den claudep: how is it failing?
17:42 jhulten joined #salt
17:42 elm- dave_den: that's my problem, I don't see any connection errors, this is the debug output from the salt master: http://pastebin.com/7ST3NhCL and in the client logs (which are on debug) I'm not seeing anything
17:44 claudep dave_den: I've updated the snippet with the error (just didn't include the whole traceback)
17:44 pipps joined #salt
17:47 dave_den claudep: i think that's an indentation issue
17:48 claudep should the string be aligned on the pipe?
17:48 druonysus joined #salt
17:48 mgw joined #salt
17:49 claudep I've already tried several combinations with no luck
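(An illustrative sketch of a multi-line repl passed as a YAML block scalar, which is one way to sidestep the "simple key" scanner error; the state id, file path, pattern and replacement text are all invented:)

    replace-example:
      file.replace:
        - name: /etc/example.conf
        - pattern: '^# REPLACE ME$'
        - repl: |
            first replacement line
            second replacement line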
17:49 * Gareth waves
17:49 pipps_ joined #salt
17:52 pipps__ joined #salt
17:55 Gareth PING 172.23.5.21 (172.23.5.21): 56 data bytes
17:56 Gareth ping: sendto: Cannot allocate memory
17:56 Gareth thats a new one :)
17:56 Brad_K joined #salt
17:56 Brad_K joined #salt
17:56 Brad_K left #salt
17:56 EugeneKay http://www.amazon.com/dp/B0057Q4AGW ;-)
17:57 pipps_ joined #salt
17:58 * Gareth shakes his head
17:58 Gareth 4GB for $40....amazing.  $10 per 1GB.
17:59 EugeneKay And that's expensive
17:59 EugeneKay 8GB sticks are <$70
17:59 Gareth go for the 8GB...price per GB is less :)
18:00 EugeneKay I just bought a pair of Atom 1Us. I would have maxed them out, but because it's an Atom it has a max of 4GB :-|
18:00 EugeneKay And they only take laptop sticks - no ECC and limited to 800Mhz
18:01 EugeneKay But hey, $500 each for two new gateway/routers
18:02 zooz joined #salt
18:03 utahcon is it possible to watch a directory/file and restart a service based off that file/dir changing?
18:03 dave_den claudep: strange - your state works for me
18:05 cachedout joined #salt
18:05 dave_den utahcon: you mean during a state run?
18:05 jslatts UtahDave: yes, look into watch for service state http://docs.saltstack.com/ref/states/all/salt.states.service.html
18:05 jslatts sorry, that was for utahcon
18:05 utahcon thanks jslatts
18:06 jslatts utahcon: also watch_in will be your friend as you get into more complex scenarios (http://docs.saltstack.com/ref/states/requisites.html)
18:07 forrest was just going to link that jslatts!
18:07 jslatts :)
18:07 jslatts <3 watch_in
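(A minimal sketch of the watch pattern jslatts is pointing at -- restart a service whenever its managed config changes; the file, source and service names are placeholders:)

    /etc/myapp/myapp.conf:
      file.managed:
        - source: salt://myapp/files/myapp.conf

    myapp:
      service.running:
        - watch:
          - file: /etc/myapp/myapp.conf

(The same effect can be expressed the other way around by putting "- watch_in: - service: myapp" on the file state.)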
18:08 jslatts sooo, got a question myself. I am trying to figure out a way to read a value out of a file that I pulled down in a state and use the contents to execute another state. I can figure out a nasty bash work around to do this, but I would prefer not to. any ideas?
18:08 LarsN So where's the Salt store?
18:09 LarsN <--- needs more stickers.
18:09 logix812 joined #salt
18:09 Jahkeup LarsN: me too :D
18:10 jacksontj joined #salt
18:10 EugeneKay http://www.amazon.com/dp/B001GHYO4E
18:11 roger_rackspace joined #salt
18:11 LarsN EugeneKay: ;)
18:11 EugeneKay Perfect for your slugs!
18:12 Jahkeup http://www.anatomystuff.co.uk/repository/product/user/img_img_LDSA105_salt_stack_up.jpg
18:12 Jahkeup thanks EugeneKay xD
18:12 Jahkeup I'll buy this instead! ^^^
18:12 LarsN What would be even better would be a SaltStack coffee mug
18:12 roger_rackspace Good morning/afternoon/evening... I am trying to install salt-minion 0.16.4 on a CentOS 6.4 box and I can't seem to find an install rpm.  Can anyone point me in the right direction?
18:13 Jahkeup yeah some awesome saltstack merch would be quite desirable :)
18:13 LarsN so I can deploy jet fuel, while showing the world which automation system I prefer :)
18:13 mwillhite joined #salt
18:14 dave_den roger_rackspace: only the latest version is kept in the EPEL repo, so it's hard to find RPMs of older versions. you could do a git checkout of the salt git repo at tag v0.16.4 and build an RPM from that
18:14 sroegner joined #salt
18:15 roger_rackspace dave_den thank you
18:15 dave_den np
18:15 pipps joined #salt
18:15 dave_den the spec is at salt/pkg/rpm/salt.spec
18:16 Parabola left #salt
18:19 amckinley joined #salt
18:24 bitz joined #salt
18:25 racooper http://koji.fedoraproject.org/koji/buildinfo?buildID=462950
18:27 dave_den roger_rackspace: racooper found the rpm for you
18:28 racooper ah, sorry, I meant to say something after posting the link!
18:28 teebes joined #salt
18:29 amahon joined #salt
18:29 Cidan one of the things that threw me off, and made me reconsider using salt, was that pkg.installed with source salt:// broke in this release in apt
18:30 forrest Cidan, in 0.17.1?
18:30 Cidan indeed.
18:30 modafinil joined #salt
18:30 Cidan It's in git too, and fixed for next release
18:30 Cidan but the fact that new packages weren't pushed right away for what I consider to be core functionality which is broken...
18:30 sfz joined #salt
18:30 Cidan that's, I don't know, odd at best.
18:30 andrewclegg joined #salt
18:30 Cidan not trying to be critical!
18:30 pipps_ joined #salt
18:31 amahon joined #salt
18:32 aleszoulek joined #salt
18:32 bhosmer joined #salt
18:32 Cidan anyone that downloaded this release as a new user and tried to follow the tutorials using salt:// sources in ubuntu, would hit a wall, and you would more than likely lose that new user; I guess I'm thinking of it in terms of a business in a way
18:32 amckinley joined #salt
18:33 matanya joined #salt
18:33 forrest Cidan, Yea I think they were trying to do 0.17.2 as a 'hot' fix, but stuff keeps snowballing into it
18:34 Cidan makes sense, it happens I guess
18:35 redondos joined #salt
18:35 Cidan salt rocks, the idea of using chef or puppet makes my stomach turn
18:35 forrest Cidan, yep, I know it's a bummer though
18:35 forrest oh yea man, I am trying to push Salt hard
18:35 forrest not going well, but I'm trying
18:36 MTecknology WOOHOOO!!!!! Debian servers are fully taken care of!!! FINALLY!!!
18:36 racooper Cidan,  do you have a link to that bug?
18:36 Cidan well, I've spread the word of Salt to my former job working for the White House, and we're using it here in my new job in Santa Monica, slowly but surely!
18:36 MTecknology (as far as I can go at the moment) ... now to bring these states to rhel
18:36 Cidan racooper I can find it, one second
18:37 matanya joined #salt
18:37 ctdawe joined #salt
18:38 utahcon Is there a way to set an environment variable for cron without using the cron.file option?
18:38 forrest MTecknology, are you gonna make those existing ones into formulas so they support both?
18:39 smkelly Does anyone have any fairly complex or sophisticated salt examples? Like use of reactors, mines, etc? I'm trying to grok what can actually be done, as most of the examples I find online are not using many of the cooler features
18:39 utahcon if not, can you use a hybrid of a cron.file and cron.present to extend the cron?
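(One possible answer, assuming the cron.env_present state is available in your Salt version; the variable, value and user are placeholders:)

    MAILTO:
      cron.env_present:
        - value: admin@example.com
        - user: root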
18:39 forrest smkelly, I don't know of an example that uses reactors and mines :\
18:40 MTecknology forrest: that's the plan, but it sounds scary. I want my states as distro-agnostic as possible
18:40 forrest nah it's easy!
18:40 smkelly I couldn't even find a self-contained example of either of them independently
18:40 forrest MTecknology, just take a look at the salt-formulas repo and copy that over, and read through http://docs.saltstack.com/topics/conventions/formulas.html
18:40 forrest you'll be rocking the formulas in no time
18:41 forrest smkelly, yea I actually still have https://github.com/saltstack/salt/issues/6655 open regarding additional docs for the salt mine at least, I need to create a time vortex so I can build that sort of example.
18:41 MTecknology heh.. interesting
18:41 forrest need to learn more about the mine/reactor that's for sure
18:42 pipps joined #salt
18:42 jacksontj joined #salt
18:42 MTecknology mine?
18:42 MTecknology I know about reactor...
18:43 forrest I was talking to smkelly
18:43 forrest sorry
18:43 MTecknology oooh - is there a mine system?
18:43 forrest http://docs.saltstack.com/topics/mine/index.html
18:43 forrest not much in there right now.
18:43 smkelly mine lets you share data between nodes
18:43 forrest yep
18:44 pipps_ joined #salt
18:44 forrest It's cool, but as smkelly said, I haven't seen a live example of it
18:44 MTecknology woah... I didn't think that would ever be possible
18:44 forrest I think UtahDave was saying that one group of guys was using the mine to constantly sync host files, so they didn't have a DNS server.
18:44 MTecknology damn... salt just keeps doing more and more and more
18:44 smkelly an example of publish for running things on remote nodes would be neat too
18:44 forrest the salt mine is a few releases old :P
18:45 MTecknology ya, but the rate of release and changes in releases makes it tough to keep up
18:45 smkelly forrest: I was trying to use it to share ssh host public keys to generate a /etc/ssh/ssh_known_hosts so all my boxes know each other
18:45 forrest yea that would be awesome
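(A rough sketch of the known_hosts idea, assuming each minion publishes its host public key to the mine via cmd.run; the key path, the '*' target and the template layout are assumptions, and note that anything put in the mine is readable by every minion:)

    # minion config (or pillar), so the key ends up in the mine:
    mine_functions:
      cmd.run:
        - 'cat /etc/ssh/ssh_host_rsa_key.pub'

    # ssh_known_hosts.jinja, rendered per minion with template: jinja:
    {% for host, pubkey in salt['mine.get']('*', 'cmd.run').items() %}
    {{ host }} {{ pubkey }}
    {% endfor %}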
18:45 claudep dave_den: mmmh, it finally worked, the fact that I never know when the git repo has been really updated or not doesn't help :-(
18:45 MTecknology UtahDave: I might get to meet you at saltcon! :D  (it's up to the boss of my boss at the moment)
18:45 claudep dave_den: many thanks for testing!
18:45 forrest I think he fell asleep MTecknology
18:45 octarine joined #salt
18:45 MTecknology He's not allowed to sleep.
18:46 UtahDave MTecknology: awesome! can't wait to meet you in person!
18:46 MTecknology see? :)
18:46 forrest lol
18:46 UtahDave Yeah, I'm beat. I gotta go to bed now. It's 2:45 am here.
18:46 forrest Yea I was just gonna say, not sure how you are doing that when the conference starts up in a few hours
18:46 MTecknology oh shit...  "can't wait to meet you in person"  I hope this doesn't mean I will be beat into a million pieces!
18:46 UtahDave lol, MTecknology
18:46 MTecknology :)
18:46 MTecknology g'night!
18:47 UtahDave night.
18:47 forrest Just show your bosses boss how awesome salt is MTecknology, he understand tech right? SURE
18:47 MTecknology lol... sure
18:47 forrest 'look how sexy these lookups are compared to puppet, look at it!'
18:48 MTecknology we've never used puppet; I considered it once when salt was extremely young but decided that I'd rather keep our chaotic scripts
18:48 jafe joined #salt
18:48 forrest heh
18:48 claudep left #salt
18:49 rachbelaid joined #salt
18:49 jrgifford joined #salt
18:50 jcockhren forrest: I fell asleep. I'll make that issue today
18:50 forrest jcockhren, sounds good!
18:50 gkze joined #salt
18:52 MTecknology forrest: salt is the first system management piece this company has ever had for servers
18:53 forrest MTecknology ahh
18:53 forrest well, at least that makes it easier to convince people in some ways
18:53 shennyg joined #salt
18:56 neilf joined #salt
18:57 pears I get to present salt to my group at our meeting today
18:57 pears for some definition of "get"
18:57 xerxas joined #salt
18:57 forrest oh yea pears? Did you build out that business case and the powerpoint and such?
18:57 goki joined #salt
18:57 Cidan racooper bleh, figures I can't find it, but it's there and fixed
18:57 pears haha no
18:57 forrest boooooooo, slacker :P
18:57 Cidan lol
18:58 scalability-junk joined #salt
18:59 racooper hah thanks for checking. I'll take a look at outstanding issues later
18:59 ctdawe joined #salt
18:59 MTecknology forrest: I presented it as: this is now what I'm using to bring all of our servers into a consistent and secure state, it's handling our deployments, and it's all around being awesome. This conference is geared toward sys admins, won't be mostly advertising, and would likely be a huge help for me.
19:04 gamingrobot joined #salt
19:05 munhitsu joined #salt
19:06 pipps joined #salt
19:07 bhosmer joined #salt
19:07 Narven joined #salt
19:08 pcarrier joined #salt
19:08 whitepaws is there a simple command to download a file from the filebackend, like:   salt '*' file.download salt://prod/httpd.conf ?
19:08 whitepaws just to test if the dir structure is working out as i expected (and file backend is working properly) ?
19:09 MTecknology file.cp might work for that... not sure
19:09 whitepaws oh yeah, that looks promising. thanks
19:09 akitada joined #salt
19:10 pears or file.check_file_meta
19:12 pears ah no that wouldn't work
19:14 copelco joined #salt
19:15 aparashar joined #salt
19:16 hazzadous joined #salt
19:16 pears here we go
19:17 pears salt $target cp.get_file_str salt://...
19:17 pears that will print the file contents
19:17 anitak guys, is there a release plan of some sort available? or a glimpse of when 0.17.2 will come out? I have trouble running off of the develop branch and would want to stick with the released versions
19:17 pears from the perspective of the minion
19:18 pears lots of interesting stuff in here http://docs.saltstack.com/ref/modules/all/salt.modules.cp.html
19:20 anitak cp is awesome. I use it with cp.push to push a file back from a minion to the master
19:20 mgw joined #salt
19:21 noob2 joined #salt
19:29 gasbakid__ joined #salt
19:31 kermit joined #salt
19:32 quist joined #salt
19:32 pdayton joined #salt
19:34 troyready joined #salt
19:43 jacksontj found a bug in the kwarg passing within salt (https://github.com/saltstack/salt/pull/8289)
19:43 jacksontj not sure if anyone from saltstack is online-- i think most of them are overseas?
19:53 Savagedbright joined #salt
19:55 kamal__ joined #salt
19:57 jcsp jacksontj: nice, I hit that problem just the other day
19:58 hazzadous joined #salt
19:58 grep_awesome with jinja, is there a way to test if a string includes another string? e.g. (grains['hostname'].includes('something'))
19:59 jacksontj jcsp: i ran into it a week or 2 ago, but just got the time yesterday to track it down
19:59 jacksontj it's a relatively easy fix
19:59 scott_w joined #salt
19:59 pdayton left #salt
19:59 lyddonb_ joined #salt
20:00 jacksontj just need to get some feedback on it-- i usually like to get it merged before backporting it to my prod release ;)
20:01 jimallman joined #salt
20:02 msil_ joined #salt
20:02 emilisto_ joined #salt
20:03 pipps_ joined #salt
20:04 eculver_ joined #salt
20:06 a1j why, in a dual-master configuration, does only one of my masters have salt-mine data?
20:07 a1j oh nm i did it with salt-call, apparently salt-call does not return mine data for some reason
20:08 gkze joined #salt
20:08 dave_den a1j: salt mine does not work for multi master
20:08 Damoun_ joined #salt
20:09 ckao joined #salt
20:09 crashmag joined #salt
20:09 lyddonb joined #salt
20:09 crashmag joined #salt
20:09 Nazca joined #salt
20:09 MTecknology anitak: I think releases tend to be "when they're ready"
20:09 crashmag joined #salt
20:09 Nazca joined #salt
20:10 dave_den a1j: https://github.com/saltstack/salt/issues/7697
20:14 IJNX joined #salt
20:15 forrest MTecknology, sorry I went to lunch. Yea for sure, it should be awesome
20:15 micah_chatt joined #salt
20:16 Brew joined #salt
20:18 nijotz joined #salt
20:20 a1j dave_den: ok it seems like custom grain module is the way to go then..
20:21 horknfbr joined #salt
20:21 pears joined #salt
20:22 Teknix joined #salt
20:23 jalbretsen Uh oh.  Salt 0.17 awesomeness is making me fix some of my formula sloppiness.  Which is good, I've been looking for an excuse to take the time to do it.
20:23 andredublin joined #salt
20:24 andredublin joined #salt
20:25 forrest jalbretsen, formula all the things!
20:26 andredublin left #salt
20:26 utahcon occasionally I see my minions report that a file or package is not available (not all my minions all of the time, but more randomly), is this something that is known?
20:26 utahcon subsequent runs usually don't have the same failure
20:28 jumperswitch joined #salt
20:30 troyready joined #salt
20:31 harobed joined #salt
20:31 harobed joined #salt
20:32 harobed joined #salt
20:35 fragamus joined #salt
20:39 bemehow joined #salt
20:40 horknfbr is there a way to have salt prompt for a password?  i need to pass it to a command, but do not want to store it in a file nor give it on the command line
20:43 scott_w joined #salt
20:44 grep_awesome Since matching minion_id's is regex based, is there any added functionality that lets you match "not this_id"?
20:44 scott_w_ joined #salt
20:47 racooper does the "* and not this_id" format work with regexes, or just with grains?
20:48 forrest grep_awesome, racooper, Note that a leading not is not supported in compound matches. Instead, something like the following must be done:
20:48 racooper grep_awesome,  this: http://docs.saltstack.com/topics/targeting/compound.html
20:48 grep_awesome racooper forrest: thanks! I missed this page in the docs
20:49 forrest grep_awesome, np, I forget the compound matcher page all the time for some reason
20:49 grep_awesome forrest: this one is a lot easier to find, but less informative http://docs.saltstack.com/topics/targeting/globbing.html
20:49 bemehow joined #salt
20:50 forrest grep_awesome, yea that's the one I always seem to stumble onto, then I go back over to http://docs.saltstack.com/ref/states/top.html#other-ways-of-targeting-minions
20:50 forrest and find the compound matcher page :P
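(Two minimal examples of the compound-match workaround for "not this_id"; the minion names are placeholders:)

    salt -C '* and not web01' test.ping
    salt -C '* and not E@^web.*' test.ping     # PCRE match on the minion id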
20:51 zach horknfbr: unset HISTFILE; salt 'server' cmd.run 'echo blah|passwd --stdin blah' ?
20:55 dave_den horknfbr: what are you trying to give the password for?
20:57 faust joined #salt
20:57 forrest terminalmage, are you around?
21:03 ashtonian joined #salt
21:04 cewood joined #salt
21:05 gkze joined #salt
21:06 mwillhite joined #salt
21:09 kaptk2 joined #salt
21:11 pipps joined #salt
21:16 scott_w joined #salt
21:19 terminalmage yes
21:19 terminalmage what's up
21:19 forrest looks like halite is up in the repo now
21:19 forrest http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/python-halite.html
21:19 forrest I'll update the docs tonight
21:19 vorp joined #salt
21:19 dfinn joined #salt
21:19 forrest I was gonna ask you to let me know as soon as they went live so I could do the push, but since it's already there *shrug* .
21:20 terminalmage yeah I emailed salt-users this morning
21:20 terminalmage :)
21:20 forrest yea I just saw your post from 11
21:20 terminalmage :)
21:20 forrest So looking at the spec file, I noticed you called this python-halite, instead of salt-halite. Also there are no requires for salt itself
21:21 forrest what's the reasoning behind that?
21:21 dfinn i'm still struggling with how to assign and manage roles with salt.  I found an example like so: http://pastebin.com/16pMSnEk but it appears that is relying on hostnames to break out the roles.  That won't really work in my environment.  What would be the best or right way to assign a list of servers to a certain role?
21:21 forrest dfinn, http://docs.saltstack.com/topics/targeting/nodegroups.html perhaps?
21:22 forrest dfinn, or if one of the existing grains doesn't work for you, then you could write your own grain
21:22 dfinn the issue I see with a grain is that it's then an extra task to assign a grain out to a list of servers.  and what happens if that grain ever gets lost or dropped?  then they are no longer part of that role?
21:23 dfinn but nodegroups looks like it might be what i'm looking for
21:23 terminalmage forrest: it's a python module
21:23 forrest terminalmage, fair enough :P
21:23 terminalmage python modules are packaged with python-modulename
21:23 terminalmage :)
21:23 forrest yea I keep forgetting that it's technically a python module
21:24 forrest It operates just like the pip install right?
21:24 dfinn nodegroups would get defined in the salt/top.sls file?
21:24 forrest the master will pick it up and start the service as long as it's configured?
21:25 forrest dfinn, no /etc/salt/master
21:25 forrest http://docs.saltstack.com/ref/configuration/master.html#node-groups
21:25 dfinn excellent, thanks!
21:25 forrest np
21:25 forrest it isn't clear in the node groups docs really, I'll make an issue on that
21:26 dfinn yeah, it's a little vague
21:27 dfinn in those examples, what are the L@ and G@ for?
21:27 dfinn list and grains?
21:27 forrest G = grains; L = list of minions (so name)
21:27 forrest yea check out the compound matcher page
21:27 forrest http://docs.saltstack.com/topics/targeting/compound.html
21:28 forrest another good thing to update on that doc page :P
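(A hedged sketch of what such a nodegroups block in /etc/salt/master might look like; the group names, minion ids and grain value are invented:)

    nodegroups:
      webservers: 'L@web01,web02,web03'
      centos-web: 'G@os:CentOS and L@web01,web02'

(Targeted afterwards with, e.g., salt -N webservers test.ping)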
21:28 dlloyd joined #salt
21:30 forrest https://github.com/saltstack/salt/issues/8306 if you care or want to do the pull request yourself dfinn.
21:31 aleszoulek joined #salt
21:33 dfinn i defined a node group and am trying to run a salt -N group_name test.ping to that group but it just sits for a bit and comes back with no output
21:33 forrest did you restart the salt master?
21:33 dfinn oh strange, it just worked on the 3rd try
21:33 dfinn i did restart the master
21:33 dfinn weird, seems to be working now
21:33 forrest hmm, I wonder if one box was holding things up perhaps?
21:33 dfinn possibly
21:34 dfinn there's only 2 nodes in the group, both return true
21:34 forrest weird
21:34 dfinn but then the salt commands sits and waits before giving my prompt back
21:34 forrest ping should be returning almost instantly, what happens when you ping each machine individually
21:34 dfinn returns right away
21:35 forrest that's odd.
21:35 dfinn just tried the group again, still hanging
21:35 forrest can you do salt -N group1 test.ping -l debug?
21:35 forrest seems weird sometimes it returns no problem, sometimes it hangs
21:35 dfinn it hangs every time now, after returning true for both nodes
21:35 dfinn it's maybe 5 seconds or so
21:35 forrest are there any other machines in that group?
21:36 forrest that don't technically exist?
21:36 dfinn no, just 2 in the list
21:36 dfinn the salt command doesn't like the -l debug on the end
21:36 forrest oh
21:36 forrest can you try to use the -t command for timeout?
21:36 forrest by default it's 5 seconds
21:36 forrest try -t 2
21:37 dfinn still hung
21:37 forrest hmm
21:37 dfinn for more than what seemed like 2 seconds
21:37 dfinn let me post the debug output from salt-master
21:37 forrest ok
21:37 forrest I'll be back in a few minutes
21:37 dfinn http://pastebin.com/6qiv3VrC
21:37 dfinn ok
21:38 mapu left #salt
21:38 horknfbr zach: I don't even want the password to show in a process list.  we are exceedingly paranoid
21:39 Brew joined #salt
21:39 horknfbr dave_den: grsecurity, we have to authenticate to an admin role to do anything on our servers.  root is useless without authentication to the RBAC
21:39 noob2 left #salt
21:43 vorp is there a way for me to find what packages should be accessible on a window's salt-minion machine? doing 'salt myhost pkg.available_version rubyenv -v' returns nothing. trying to then install it using salt myhost pkg.install rubyenv -v -t 20 also returns nothing after the hostname. When turning up logging on the salt minion i see that it is complaining about "Unable to locate package rubyenv". This is all after I have created the /srv
21:43 dfinn forrest, to test I added another new group and I see the exact same thing so it doesn't have anything to do with the servers in that nodegroup
21:43 dave_den horknfbr: how are you giving the password now?
21:45 dave_den vorp: pkg.list_available ?
21:45 gkze joined #salt
21:45 vorp returns nothing
21:45 vorp after listing the hostname
21:46 forrest dfinn, that's weird
21:46 dfinn yeah, probably not an issue but sure seems odd
21:46 vorp and i did run salt-run winrepo.genrepo and then salt '*' pkg.refresh_db
21:47 dave_den vorp: dunno - i don't use windows
21:47 dave_den someone else might know or maybe the mailing list
21:47 vorp wish i didn't either
21:47 dave_den heh
21:47 forrest dfinn, can you do salt-run jobs.lookup_jid 20131106203648493047
21:48 diegows joined #salt
21:48 forrest and then the same thing for 20131106203652562247
21:48 horknfbr dave_den: via a external call to a python script to decrypt a gpg file that is then stored in a tcl variable in a rather slow and nasty expect script..
21:48 dfinn http://pastebin.com/CJdr6h4e
21:49 forrest dfinn, what does salt-run jobs.active show?
21:49 alunduil joined #salt
21:49 dfinn {}
21:49 dfinn ^ that was the output
21:49 forrest ok that's good
21:49 forrest but why is 2247 returning values that are just blank?
21:50 dfinn those did return true when I ran them
21:50 dfinn oh, unless those are the job IDs for when they returned no output
21:50 dave_den horknfbr: what about creating a custom salt module that does the gpg decrypt and expect stuff?
21:50 forrest oh yea this is the saltutil.find_job
21:50 dfinn I saw that same thing for both groups.  add group, restart salt-master, run command referencing nodegroup and get no output for about a minute, then salt commands started working and returning output
21:50 forrest can you try again, but tail the logs in a second console to see if the test.ping job returns, and then it gets hung up on the saltutil.find_job command?
21:51 forrest if it hangs for so long, should be easy to see
21:51 dfinn sure
21:51 MTecknology Apparently salt has zero clue how to handle 'McAfee  OS Server'
21:51 forrest McAfee OS?
21:51 MTecknology which happens to be a rebranded rhel
21:51 horknfbr i was thinking about that..  but i don't want even the gpg file in our prod environment..  currently the file is only stored on workstations and copied via sneaker net...  yeah, did i mention we are paranoid?
21:51 forrest so it's a trash can sitting somewhere in the DC?
21:51 forrest badum tis
21:51 dfinn http://pastebin.com/VJ5KfGtV
21:52 MTecknology they tweaked the crap out of their system. Is it possible to override the os_family grain with the minion config?
21:52 forrest so dfinn, were you doing tail -f there to see when each command got pushed into the logs?
21:52 forrest Sorry I should have been more specific.
21:52 jmlowe joined #salt
21:52 MTecknology It is! YAY!!!
21:52 dfinn that's all the output that went into the the console when I ran the salt command
21:52 horknfbr although, a custom mod could prompt for a password...  hmm, i'll have to try that
21:52 forrest oh I thought that was the log, gotcha
21:53 dfinn running salt-master in debug
21:53 jmlowe So still no saucy builds for 0.17.1? You guys are killing me here.
21:53 forrest yea I'm not sure then dfinn, I mean it all looks fine, I don't see what would cause the delay, you didn't set an unusually high timeout in the master config or anything did you?
21:53 forrest Might be worth posting on the mailing list :\
21:53 xmltok joined #salt
21:53 dfinn haven't touched it, it's default
21:53 forrest weird
21:53 forrest 0.17.1?
21:53 bemehow joined #salt
21:54 dfinn salt-0.16.0-1.el6.noarch
21:54 forrest oh
21:54 jmlowe I mean seriously it's been a month, but you have builds for quantal which is EOL.
21:54 forrest is this a VM dfinn?
21:54 dfinn yes
21:54 dfinn doesn't look like .17 is on EPEL yet
21:54 forrest can you try spinning one up with a more recent release? At least 0.16.4?
21:54 dfinn yeah, i'll test that
21:54 forrest 0.16.4 isn't in EPEL, which is annoying.
21:54 dfinn do the updates make it onto EPEL pretty quickly usually?
21:54 forrest I'm seeing 0.17.1 http://dl.fedoraproject.org/pub/epel/6/x86_64/repoview/salt-master.html
21:54 dfinn kind of sounds like no
21:55 dfinn strange, that's the repo I should be pointing to
21:55 dave_den horknfbr: you can't do interactive stuff in state runs. you could only do that using salt-call yourmodule.function from the minion itself
21:55 forrest yea don't upgrade straight to 0.17 if this is prod
21:55 forrest might break other stuff for you
21:55 dfinn it's not
21:55 dfinn i'm hoping to replace puppet with salt
21:55 dfinn just starting out
21:55 dfinn 0.17 is pretty stable for a new env?
21:55 forrest but regarding the updates, EPEL is always slow to update, because it has to go into epel-testing first, and I don't believe at this time anyone has the 'credits' available to push it through to epel
21:56 forrest well, 0.17.1 is out now, it's ok stability wise
21:56 forrest some people are still seeing problems where stuff broke from 0.16.4 to 0.17
21:56 forrest 0.17.2 *should* be out soon
21:56 _ikke_ What is wrong with this config? https://gist.github.com/Ikke/7344797. It complains about pkg not having a function
21:56 forrest so yea, epel takes a few weeks because that's just how long it takes when you don't have the power to push it
21:57 dfinn few weeks seems more than reasonable when you consider normal RHEL/CENT timing
21:57 forrest Yea it's reasonable
21:57 forrest but it still sucks because the salt team is on it fast, then you have to wait unless you want to pull from testing :P
21:58 mordred joined #salt
21:58 _ikke_ hmm, changing it to pkg.installed works
21:58 jslatts _ikke_: no colon
21:58 jslatts yeah
21:58 forrest _ikke_ https://gist.github.com/gravyboat/7344831
21:59 _ikke_ forrest: Ah, ok. The names would then belong on the same level as installed
21:59 _ikke_ right
21:59 forrest yea and that colon obviously :P
21:59 bemehow joined #salt
21:59 _ikke_ right, I first left it out, but added it because it complained :P
21:59 forrest you'd only do the indent further if  you did pkg.installed
21:59 _ikke_ right
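(For reference, a minimal version of the corrected state being discussed, with placeholder package names:)

    base-packages:
      pkg.installed:
        - names:
          - vim
          - git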
21:59 vorp ok... so it looks like the minion is complaining that is it unable to cache the file, yet master still returns "true" when doing a pkg.refresh_db
22:00 rhand joined #salt
22:00 horknfbr dave_den: thats a bummer...  but as my security guy just pointed out, if we used a password safe with revokable certs and some sort of api or cli access,then getting this working could be very possible..  thanks for the help
22:01 AdamSewell joined #salt
22:02 AdamSewell Where would be a good place to find someone to tutor me for an hour or so on Salt?
22:02 dave_den horknfbr: good luck. say hi to the NSA for us
22:03 forrest AdamSewell, there aren't really any services like that which are offered right now. I'd say just go through the getting started docs, and you can ask questions in here, we can point you towards the documentation you might need for questions, or try to help you.
22:03 dave_den AdamSewell: maybe watcing some salt-air videos may be useful, too
22:03 MTecknology http://dpaste.com/1445718/  <-- everything works by setting os_family to RedHat in the config except this one.
22:03 forrest dave_den, good point
22:03 forrest actually...
22:04 AdamSewell dave_den, hrm salt-air. haven't heard of that?
22:05 forrest sweet I found it, AdamSewell: http://www.youtube.com/watch?v=yphLKSjnSU8
22:05 dave_den MTecknology: that is checking grains['os']. Change that, too.
22:05 forrest That's the CTO of SaltStack where he goes over an intro and such
22:05 forrest if you'd rather watch a video, and then read the docs.
22:05 AdamSewell ok, thanks guys. i'll check into it
22:06 forrest Cool, let us know if you have questions
22:06 AdamSewell sometime it just helps me to have someone to explain things to me haha
22:06 AdamSewell limited time unfortunately
22:06 forrest yea watch that Video
22:06 forrest Tom does a great job explaining it
22:06 dave_den AdamSewell: i learn by seeing and doing. i feel ya
22:06 bhosmer_ joined #salt
22:06 AdamSewell dave_den, yup same
22:08 vorp i figured it out, incase anyone has a similar issue. the problem ended up being that I moved the salt file_roots: base to /srv/salt/base and win repo needs to be within it.
22:08 amckinley joined #salt
22:09 ddv joined #salt
22:09 MTecknology dave_den: the problem here is that I actually don't want that changed. I want it reporting the McAfee Linux because that much is true. I don't want to match on something like that. I'm thinking the thing should be matching on os_family instead of os anyway...
22:10 VanClone joined #salt
22:10 dave_den MTecknology: fix it and submit a PR
22:10 dave_den or just patch your local version
22:10 bhosmer joined #salt
22:11 nineteen1ightd So, apt.install isn't working with sources for me, it seems to boil down to the call to cmd.run_all and the option python_shell being set to false
22:11 worstadmin joined #salt
22:11 nineteen1ightd It complains about dpkg not being found
22:12 MTecknology dave_den: I tend to do everything via PRs :)
22:12 dave_den nineteen1ightd: do you have any log or output we can see?
22:12 nineteen1ightd `salt-call cmd.run_all "dpkg --help" python_shell=False` fails, while `salt-call cmd.run_all "dpkg --help" python_shell=True` works
22:12 jhermann joined #salt
22:13 dave_den nineteen1ightd: why not just use the apt or pkg salt module?
22:13 nineteen1ightd I am using the pkg.installed state
22:13 nineteen1ightd Giving it the sources option and linking to a .deb elsewhere on the internet
22:14 nineteen1ightd This is just where I've tracked the problem down to
22:15 _ikke_ Is it also possible to append to a managed file?
22:15 dave_den nineteen1ightd: what does your state look like and what's the exactl error from the log?
22:15 _ikke_ I have a base file, but want to append machine specific files to it
22:15 dave_den _ikke_: yes, check the docs.
22:15 _ikke_ dave_den: I see file.managed, and file with - append
22:15 _ikke_ can I combine that?
22:16 _ikke_ perhaps with a require?
22:16 gkze joined #salt
22:16 dave_den _ikke_: you may want to just template the file
22:16 nineteen1ightd Once apt.install figures out what it needs to do, it fails while executing the `dpkg -i ...` command
22:17 dave_den nineteen1ightd: can you gist/pastebin the actual logs?
22:17 _ikke_ I thought it was easiest to have the machine specific parts (which can become quite large) inside separate files, instead of in the sls files
22:18 sciyoshi _ikke_: try doing a jinja template in the main file, and then {% include %}-ing the other machine specific ones
22:18 ipmb joined #salt
22:19 _ikke_ sciyoshi: Ooh, ok, that might work too
22:19 sciyoshi or {% extends %} might be better too
22:19 _ikke_ I guess the machine specific files should be managed too then
22:20 _ikke_ / recursively added
22:20 nineteen1ightd http://pastebin.com/QkUaA36D
22:20 dave_den _ikke_: you can also use file.accumulated, but the docs for that are thin
22:20 scott_w joined #salt
22:20 nineteen1ightd Python's subprocess.Popen is puking because it can't find the dpkg command
22:21 dave_den nineteen1ightd: how are you invoking the salt minion?
22:21 _ikke_ dave_den: I gather that's for accumulating structured data?
22:21 dfinn how often does salt do a high state run?  and is that configurable?
22:22 _ikke_ dfinn: You have to do it yourself
22:22 nineteen1ightd However it gets invoked after installing salt from the Ubuntu ppa
22:22 dfinn how do people typically do it if they want salt to handle automated configuration management?  cron?
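(One common approach at the time -- a cron entry on each minion that runs a highstate periodically; the interval, log path and cron.d placement are arbitrary choices:)

    # /etc/cron.d/salt-highstate
    */30 * * * * root salt-call state.highstate >> /var/log/salt/highstate-cron.log 2>&1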
22:23 dave_den nineteen1ightd: something is not right with your setup. pkg.installed definitely works on ubuntu
22:23 dave_den nineteen1ightd: do:  salt 'yourminion' cmd.run 'echo $PATH'
22:23 dave_den from the master
22:24 sciyoshi is there a way to centrally assign grains to nodes using glob/regex matching?
22:24 sciyoshi other than maybe managing /etc/salt/grains through a top.sls file
22:25 nineteen1ightd local: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
22:25 nineteen1ightd Doing masterless minion
22:26 nineteen1ightd The problem has to do with https://github.com/saltstack/salt/blob/develop/salt/modules/apt.py#L424 and the use of python_shell=False
22:26 dave_den nineteen1ightd: use "-l trace"  when you try to run your pkg.installed state
22:27 __number5__ nineteen1ightd: it's a bug in 0.17.1 but has been fixed in develop
22:27 ekaqu joined #salt
22:28 nineteen1ightd __number5__ how long until 0.17.2 would you guess?
22:28 nineteen1ightd Thanks for working with me dave_den
22:28 __number5__ That's also my question, I'm not the devs sorry
22:29 dave_den __number5__: do you have a bug/issue id for it?
22:29 nineteen1ightd Ohh.
22:29 forrest Hey basepi, what's the timeline looking like on the 0.17.2 release?
22:29 cachedout basepi is away from his desk but I can tell you that there is no firm date but it shouldn't be too much longer
22:30 forrest cachedout, ahh I didn't see you jump in the IRC this morning
22:30 forrest so 0.17.2, and Hydrogen are two different releases right?
22:30 cachedout Yes
22:30 forrest ok cool
22:30 forrest thanks for the details cachedout.
22:31 __number5__ dave_den: I don't know the issue id, but I've tested the develop branch and it has been fixed there
22:31 forrest pretty quiet in the office this week eh?
22:31 dave_den ah
22:31 jhermann joined #salt
22:32 mnemonikk joined #salt
22:32 pipps joined #salt
22:36 _ikke_ dave_den: This works btw: https://gist.github.com/Ikke/8d34b08b3438d9644ec1
22:36 jacksontj joined #salt
22:37 dave_den _ikke_: the downside to doing it that way is that it will always change the file back to the file.managed version and then do the append on every state run
22:37 dave_den so you sort of have dueling states
22:37 _ikke_ Hmm, ok
22:37 _ikke_ And what would be the alternative then?
22:38 dave_den templating the file and just using file.managed with - template: jinja might be better
22:38 _ikke_ And how would I get the data in the template?
22:38 matanya joined #salt
22:38 ashtonian joined #salt
22:39 dave_den where's the data coming from?
22:39 pipps_ joined #salt
22:39 _ikke_ salt file system
22:40 dave_den like sciyoshi mentioned, you can use jinja's "include"
22:41 _ikke_ And can I then use salt:// style references?
22:41 dave_den no
22:41 _ikke_ ok
22:41 dave_den your template is compiled on the master
22:41 _ikke_ ok
22:41 dave_den so it would just be the path to the other files or if they are in the same dir, just the filename
22:42 _ikke_ ok
22:42 _ikke_ It's in one subdir
22:42 dave_den _ikke_:  here are the jinja docs: http://jinja.pocoo.org/docs/templates/#include
22:43 _ikke_ Ok, thanks
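(A rough sketch of that include-based layout, with invented file names; it assumes the per-host fragment lives next to the base template under the salt file roots:)

    # motd.sls
    /etc/motd:
      file.managed:
        - source: salt://motd/motd.jinja
        - template: jinja

    # motd/motd.jinja
    Common banner text shared by every machine.
    {% include 'motd/' ~ grains['id'] ~ '.txt' ignore missing %}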
22:44 tdillio joined #salt
22:49 _ikke_ dave_den: Is it correct that the diff output shows unrendered jinja tags?
22:50 ekaqu joined #salt
22:50 pipps joined #salt
22:50 ekaqu question about the docs: http://docs.saltstack.com/ref/modules/all/salt.modules.saltutil.html, refresh_modules and sync_all.  what are the differences?
22:51 _ikke_ aparently not
22:51 _ikke_ I just get the jinja tags literally in my file
22:52 tdillio i think I'm confused about renderers and grains. I am trying to use grains['fqdn'] in a managed file but I'm not sure how to render the source with the grain data. http://pastebin.com/sT7wVRWf
22:52 dave_den _ikke_: are you specifying '- template: jinja' in the file.managed?
22:52 _ikke_ dave_den: no
22:52 tdillio Ha, looks like I'm having the same issue as ikke
22:53 zooz joined #salt
22:53 _ikke_ yup, I guess we have
22:54 dave_den tdillio: yeah, you also need to specify '- template: jinja' in file.manaed.
22:55 _ikke_ dave_den: Relative to what are the includes?
22:55 tdillio dave_den: Yup, that did it, thanks!
22:55 __number5__ the file.managed one is correct behaviour; template defaults to None, so you need to set it to jinja
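(tdillio's fix in miniature -- a managed file rendered with Jinja so grain data is available; the paths and contents are placeholders:)

    /etc/hostinfo:
      file.managed:
        - source: salt://files/hostinfo.jinja
        - template: jinja

    # files/hostinfo.jinja
    fqdn={{ grains['fqdn'] }}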
22:55 jcockhren anyone have a seed.apply example using salt-ssh?
22:56 kermit joined #salt
22:56 _ikke_ dave_den: Thank you for your help, it's working now
22:56 __number5__ jcockhren: why you want to seed.apply with salt-ssh?
22:57 __number5__ bootstrapping new minion with salt-ssh?
22:57 pipps_ joined #salt
22:57 jacksontj joined #salt
22:57 dave_den _ikke_: great!
22:57 blee joined #salt
22:59 tdillio I'm using the gitfs backend and wondering if there is a simple way to force salt to clear the git cache?
22:59 MTecknology I think I lost a deer tag...
22:59 MTecknology that really really really fucking sucks
23:00 harobed_ joined #salt
23:00 cachedout Doesn't matter. You can't hunt servers anyway.
23:00 MTecknology It's even worse than being forced to use windows and notepad exclusively for a year.
23:01 MTecknology cachedout: true...
23:01 __number5__ "deer tag"?
23:01 MTecknology It means I can't get into any record book. I have a record deer and without that tag, I can't get into any record book.
23:01 MTecknology __number5__: hunting
23:02 __number5__ :)
23:02 sfz_ joined #salt
23:04 JulianGindi joined #salt
23:04 seanz whiteinge: Are you around by chance?
23:06 ctdawe joined #salt
23:07 gkze joined #salt
23:07 sfz joined #salt
23:09 alunduil_ joined #salt
23:10 jcockhren __number5__: yeah. bootstrapping with salt-ssh
23:12 jcockhren __number5__: is that a silly idea?
23:13 sfz_ joined #salt
23:15 anitak MTecknology, thanks for the answers. And good luck with the hunting
23:16 __number5__ jcockhren: no, it makes sense to saltify a new minion with salt-ssh, but I'm not sure if seed.apply is required
23:16 dfinn can I issue a "service salt-minion reload" to all my minions from the master or will that kill things half way?  I ask because I just used salt to upgrade salt on all of my minions
23:17 jcockhren __number5__: I guess I could use salt-cloud
23:18 __number5__ jcockhren: correct me if I'm wrong, but running the salt-bootstrap script inside salt-ssh might be a better option, since salt-ssh only installs a salt-thin client
23:18 jmlowe Is there anybody around who can answer a question about the salt-stack ppa?
23:18 __number5__ yep, if you can use salt-cloud use it
23:19 __number5__ jmlowe: just ask
23:19 jmlowe Why is the only package for saucy salt-api?
23:19 jmlowe It's been a month and the 0.17.1 vs 0.16.4 problems are killing me
23:20 jmlowe I got rid of chef for this reason, they couldn't get their packaging right, broke every update
23:20 jcockhren jmlowe is mad
23:22 jcockhren __number5__: yeah. the silly part is that I set up salt-cloud long ago. :-/ That's how much I've been using it
23:22 jcockhren weaning myself off GUIs for deploying servers
23:22 jmlowe I was patient for a couple of weeks, but my ratio of 'time to do the thing without salt' to 'time spent fixing salt and then doing the task' took a dramatic turn for the worse today
23:22 micah_chatt joined #salt
23:24 jmlowe I only really got mad when I realized quantal was end of life in July and there are packages for it but not for saucy which I really need
23:24 MTecknology anitak: no prob, hope life is grand!
23:24 anitak it is :)
23:24 MTecknology yay!
23:24 jcockhren jmlowe: this is a group effort
23:24 MTecknology going to saltconf?
23:25 anitak I should, should I?
23:25 * jcockhren ducks
23:25 MTecknology Who's going to saltconf!?  (I'm still waiting for higher ups to approve the expense)
23:25 dfinn I asked for a seat at it today, we'll see if the bossman actually follows through
23:25 jcockhren MTecknology: I have no higher ups and the expense is still not approved. ;)
23:26 MTecknology who needs to approve it?
23:26 jcockhren Mr. Wallet
23:26 MTecknology I have five people above me in the company of 25,000
23:26 dfinn at least it's not Mrs. Wallet
23:26 anitak MTecknology, same here. It is me, myself and I...
23:27 MTecknology FUCK YES!!!!!!
23:27 MTecknology I FOUND THE TAG!!!
23:27 anitak awesome
23:30 MTecknology It's incredibly awesome. I need that to prove that I took the deer and that I took it legally and to prove how it was taken.
23:30 __number5__ GO HUNTING!
23:32 MTecknology I have four more tags for this year. I'm going hunting F-Su
23:32 forrest joined #salt
23:34 jmlowe jcockhren: I know, but seriously this is a major problem, if you can't get the basic packaging right it damages the whole project, and I like salt, and that's part of why I'm mad
23:34 jmlowe jcockhren: well, looks like I'm too flustered to type
23:35 KyleG left #salt
23:36 jcockhren jmlowe: aside from my poking fun, I'm sure everyone here understands your frustration and have their own examples.
23:36 MTecknology jcockhren: I missed everything, but... you mentioned saucy, which is an ubuntu name. Ubuntu should NEVER be used on servers. Not anymore anyway. Also, salt devs have no control over the mutilated packages the ubuntu devs create. On top of that, salt is still very young and very heavily developed.
23:36 MTecknology jmlowe: **
23:37 * jcockhren was wondering why MTecknology was yelling at him
23:37 jcockhren nah. I'm good
23:37 * MTecknology wasn't yelling. :(
23:37 jcockhren joking
23:37 * MTecknology *stiffle*
23:38 __number5__ MTecknology: I've been using Ubuntu on servers for more than 6 years
23:39 utahcon Can anyone see why my crons are getting pushed to all my servers? http://pastebin.com/MD1MW1N2
23:39 jacksontj joined #salt
23:39 MTecknology It *used* to be a good option. All I can say is that it's understandable if you're in a legacy environment, but ubuntu server dev has gone to hell.
23:39 jmlowe MTecknology: ahem, https://launchpad.net/~saltstack/+archive/salt, not the Ubuntu devs
23:40 MTecknology utahcon: which state file has the crons?
23:41 __number5__ jmlowe: some suggestions on how to fix your problem: 1. use fpm or other tools to create a 0.17.1 deb package for yourself, 2. use pip install; or you might want to wait until 0.17.2 is released, since 0.17.1 has some serious bugs
23:41 MTecknology ah.. line 40?
23:41 jcockhren __number5__++
23:41 utahcon MTecknology: yeah line 40
23:41 MTecknology utahcon: line 35 isn't needed
23:41 utahcon are the comments not being lined up the problem?
23:42 MTecknology I've never seen comments done that way in these files
23:42 utahcon MTecknology: what about 19 and 27?
23:42 utahcon I will move the comment to the right indent and see what happens
23:42 jmlowe __number5__: I know I can roll my own, right now salt is causing more problems for me than it is solving
23:42 MTecknology utahcon: same
23:42 jcockhren jmlowe: the transition to 0.17.x has been rough for most of us
23:42 utahcon MTecknology: it is needed for line 12 though, right?
23:43 MTecknology utahcon: for that one, I think it is needed
23:43 utahcon ok thanks
23:43 alunduil joined #salt
23:43 MTecknology jcockhren: except me... My headache was long ago, and it was one hell of a painful head-bashing experience that even brought thatch into irc, which is apparently very rare.
23:44 forrest MTecknology, yea they made him stay out of IRC because he was spending too much time in here answering questions :P
23:44 MTecknology really?
23:44 jcockhren that's funny
23:44 jcockhren like lol funny
23:44 cewood joined #salt
23:45 anitak MTecknology, if not ubuntu, what's your os of choice?
23:45 packeteer slackware or centos
23:45 MTecknology I thought he just didn't enjoy it, but the issues I was having were in salt core and were far from trivial.
23:45 MTecknology anitak: Debian!
23:45 anitak hmm
23:46 MTecknology anitak: I used to be an ubuntu server dev. Many of us used to stay in both realms but switched to worrying about Debian because of ......
23:46 MTecknology ubuntu is a lot like php
23:46 pipps joined #salt
23:46 anitak uh
23:46 jcockhren MTecknology: wow
23:46 MTecknology it's great if you know it and it's already there and things are already built around it, but there's so many better options now.
23:46 jcockhren that's a statement there
23:47 anitak well, having grown up with Solaris, having to move to RedHat, I've had my headaches with Ubuntu
23:47 anitak but Debian...
23:47 anitak I need to think about that
23:48 __number5__ MTecknology: which version of Debian would you recommend for production servers?
23:48 jmlowe servers is a bit broad
23:48 utahcon Alright, changed the file to this: http://pastebin.com/7XvDqDTF
23:48 MTecknology __number5__: I run 7 and 8; 8 was an accident, I tend to stick with whatever stable is
23:48 utahcon but still all my changes are going to all my servers (crons on all :( )
23:48 pipps_ joined #salt
23:49 __number5__ Does Debian have something similar to ubuntu ppa?
23:49 MTecknology utahcon: You have some odd spacing...
23:49 packeteer debian's biggest problem is also its biggest asset. the code base is VERY old
23:49 MTecknology utahcon: make sure you don't have tabs and spaces mixed and make sure you're indenting with two spaces; yaml is picky
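(A hedged example of a cron.present entry using the plain two-space indentation MTecknology is describing; the state id, command and schedule are placeholders:)

    nightly-cleanup:
      cron.present:
        - name: /usr/local/bin/cleanup.sh
        - user: root
        - minute: 0
        - hour: 2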
23:50 MTecknology packeteer: sorta... that's the case for stable because stable releases are infrequent. However, I keep unstable repos available on all servers so if I need something from further ahead that isn't in backports yet, I can just grab it.
23:51 __number5__ I think the biggest problem of Debian might be the DFSG...
23:51 sentx joined #salt
23:51 MTecknology /etc/apt/preferences.d/pinning
23:51 MTecknology __number5__: hm?
23:52 ctdawe joined #salt
23:52 sentx akoumjian, Saw on your github project salty-vagrant that vagrant now has salt support but couldn't find any mention of it on vagrants website
23:52 __number5__ MTecknology: I mean many new software packages have problems fitting into the DFSG; they need changes here and there just for it
23:52 basepi forrest: wow sorry i missed your mention earlier.  as cachedout said, we want 0.17.2 out ASAP, hopefully by end of week.
23:53 patrek_ joined #salt
23:53 forrest basepi, no worries he said you were away from your desk and answered.
23:53 __number5__ sentx: it's merged in, you'll find it somewhere in the v2 docs
23:53 packeteer MTecknology: yeah, I know about pinning. but i still prefer not to mix new and old packages.
23:53 sentx __number5__, thanks
23:54 utahcon MTecknology: that was it, thanks!
23:54 MTecknology utahcon: yay!
23:54 packeteer anyway, back to my ubuntu and chef misery
23:55 MTecknology packeteer: I used to be opposed to it until I realized how often they would just fit together with no problem. If it's ever more of an issue than just installing newer deps, I won't do it... say if a newer gcc is needed.
23:55 MTecknology ubuntu and chef.. heh
23:55 MTecknology forrest: see... that... soooo many updates!
23:55 MTecknology It's awesome, though.
23:58 __number5__ sentx: actually there is no salt documentation in the vagrant docs, but you can just follow the instructions on the salty-vagrant github, only skip the plugin installation step
23:59 sentx __number5__, I started to figure as much
23:59 mgw joined #salt
23:59 sentx __number5__, appreciate letting me know I wasn't going crazy
23:59 __number5__ sentx: :)
