
IRC log for #salt, 2015-12-30


All times shown according to UTC.

Time Nick Message
00:00 giantlock joined #salt
00:00 Ryan_Lane wt: can't do multiple roots from cli :(
00:01 abednarik joined #salt
00:01 Ryan_Lane wt: one thing you could do is create a .salt directory, with a minion file in it
00:01 Ryan_Lane then do salt-call -c .salt state.highstate
00:01 Ryan_Lane wt: here's an example, that uses a .orchestration directory: https://github.com/lyft/confidant/tree/master/salt
00:02 Ryan_Lane and the minion file that goes with it: https://github.com/lyft/confidant/blob/master/salt/.orchestration/minion
00:03 Ryan_Lane wt: related issue: https://github.com/saltstack/salt/issues/13515
00:03 saltstackbot [#13515]title: Allow minion config options to be modified from salt-call | There's a number of settings that would be ideal to be able to modify from salt-call, like root_dir, user, etc.. If we could modify arbitrary configuration from salt-call that would be nice.
00:03 wt That's a nice trick.
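A minimal sketch of the layout Ryan_Lane describes, with illustrative paths (the confidant repo linked above uses .orchestration rather than .salt):

    # .salt/minion -- project-local minion config for masterless runs
    file_client: local
    file_roots:
      base:
        - /path/to/project/salt
    pillar_roots:
      base:
        - /path/to/project/pillar

    # then, from the project root:
    # salt-call -c .salt state.highstate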
00:03 Edgan I do _orchestration, _masterless, _grains, _macros, etc.
00:04 Ryan_Lane Edgan: this is a dot directory for the minion config
00:04 wt Too bad I can't refer directly to the file.
00:04 Edgan Ryan_Lane: yes, but I do _orchestration/topic/minion
00:04 Ryan_Lane wt: another thing I do is to use a wrapper that generates a minion config for every run
00:05 Ryan_Lane using mktemp
00:05 Edgan Ryan_Lane: I do the same thing with _masterless
00:05 wt interesting idea
00:05 Edgan Ryan_Lane: same thing as in the same thing I do with _orchestration
00:05 wt I think I have an idea how to do what I want.
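A sketch of Ryan_Lane's per-run wrapper idea, assuming mktemp and a states directory next to the working directory (all paths illustrative):

    #!/bin/sh
    # generate a throwaway minion config for this run
    conf_dir=$(mktemp -d)
    cat > "$conf_dir/minion" <<EOF
    file_client: local
    root_dir: $PWD
    file_roots:
      base:
        - $PWD/states
    EOF
    salt-call -c "$conf_dir" state.highstate
    rm -rf "$conf_dir"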
00:06 Ryan_Lane Edgan: and you call salt-call -c _orchestration/topic ?
00:06 brianfeister joined #salt
00:06 Ryan_Lane wt: cool
00:06 Edgan Ryan_Lane: salt-run state.orchestrate _orchestration.consul pillar="{'orchestration_cluster':'main'}"
00:07 wt Ryan_Lane, does the root_dir option in the minion config need to be set when using the "-c .orchestration" option?
00:07 Ryan_Lane Edgan: I'm not sure we're talking about the same thing :)
00:07 Ryan_Lane wt: you can specify it in the minion config too
00:07 wt does -c modify where the pki data is found?
00:07 Ryan_Lane you can use a combo of --root-dir --pillar-dir and the minion config, btw
00:08 Ryan_Lane yeah
00:08 wt or is that why you set root_dir?
00:08 Ryan_Lane it does
00:08 Edgan Ryan_Lane: Are they mis-using the orchestration name, and it is really masterless?
00:08 Ryan_Lane Edgan: orchestration is a concept as well as a salt feature ;)
00:08 wt I would prefer to use the existing pki dir (and other important directories)
00:08 Ryan_Lane wt: I think that should work
00:09 Edgan Ryan_Lane: yes, but masterless isn't orchestration, and running a high state isn't orchestration
00:09 Ryan_Lane I've never used salt-call with master/minion
00:09 Ryan_Lane Edgan: it can be.
00:09 Ryan_Lane look at this particular example https://github.com/lyft/confidant/tree/master/salt
00:09 wt I will experiment with this
00:09 wt thanks
00:09 Ryan_Lane in that example a salt highstate is orchestrating AWS resources
00:11 Edgan Ryan_Lane: Orchestration is about doing things more in real time like a bash script would. Masterless is basically an installer. Configuration management is about maintaining a VM/instance/node/server/system/etc in a certain state.
00:11 Edgan orchestrating AWS resources is really provisioning AWS resources
00:11 Ryan_Lane it's creating a dynamodb table, a security group, kms keys, an elb, an elasticache, an autoscale group and an iam role (that references dynamo, kms, etc), and it's linking them all together to form a single application
00:11 zmalone joined #salt
00:12 Ryan_Lane Edgan: this would run outside the context of a virtual machine
00:12 Ryan_Lane you could run it from your laptop, for instance
00:13 Edgan Ryan_Lane: yeah, we are talking the same thing. Using boto to spin up AWS resources is orchestration, and salt-run state.orchestrate doesn't require a master
00:13 Ryan_Lane and it would create all the AWS infra, and it would start with a fully working application
00:13 Ryan_Lane sure, but this doesn't need salt-run. it just needs highstate
00:14 Ryan_Lane because it uses state modules and it can rely on sequential ordering
00:14 Ryan_Lane and the modules are written in such a way that they can reference resources by name
00:14 nyx_ joined #salt
00:15 Ryan_Lane take a look at the actual sls file that's getting invoked: https://github.com/lyft/confidant/blob/master/salt/orchestration/confidant.sls
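Roughly the shape of such a highstate-driven orchestration sls, using real boto_* states but hypothetical resource names (not the actual confidant.sls):

    myapp_security_group:
      boto_secgroup.present:
        - name: myapp
        - description: security group for myapp
        - rules:
          - ip_protocol: tcp
            from_port: 443
            to_port: 443
            cidr_ip: 0.0.0.0/0

    myapp_iam_role:
      boto_iam_role.present:
        - name: myapp
        - require:
          - boto_secgroup: myapp_security_group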
00:15 RandyT Ryan_Lane: success... while I did setup cross account access for route53, the real issue that was failing was in a for loop in my jinja
00:16 Ryan_Lane RandyT: cool. how'd you do the cross-account stuff?
00:16 RandyT it is a matter of creating a policy, specifically for granting cross account access.
00:16 RandyT You then give the other account access to the ARN for that policy via a role
00:16 Ryan_Lane Edgan: there's no reason orchestration can't also be considered "config management from the outside" and done in a properly declarative way :)
00:17 Edgan Ryan_Lane: orchestration is just one time, configuration management is over time
00:17 RandyT Ryan_Lane: http://docs.aws.amazon.com/IAM/latest/UserGuide/walkthru_cross-account-with-roles.html#walkthru_cross-account-with-roles-1
00:17 Ryan_Lane RandyT: are you using the boto_route53 state, or using the cnames argument to boto_elb?
00:17 Edgan Ryan_Lane: hence puppet is configuration management, and mcollective is orchestration
00:17 Ryan_Lane Edgan: no way :) I run orchestration every single deploy
00:17 RandyT just the cnames argument to boto_elb right now, which appears not to do anything...
00:17 Ryan_Lane it ensures my cloud resources are in the state as I defined them
00:18 RandyT but the state completes without error...
00:18 Edgan Ryan_Lane: Mixing deployment and configuration management, yuck
00:18 Ryan_Lane RandyT: yeah, the part about cnames is the part I'm not sure how it would work
00:18 Ryan_Lane RandyT: because boto_elb is already running with the context of one account
00:19 Ryan_Lane and it'll call cnames with the same context
00:19 Ryan_Lane Edgan: I believe config management is 100% intertwined with deployment
00:19 Ryan_Lane in fact, I don't think config management should run without a deployment.
00:20 Edgan Ryan_Lane: I believe the exact opposite. When I have seen people mix the two, the results were awful.
00:20 Ryan_Lane a deployment for me is: orchestration -> deployment -> config management
00:20 Ryan_Lane and if either the deploy fails or the config management fails it's considered to be a failed deployment that needs to be rolled back
00:21 murrdoc u mean config convergence
00:21 * murrdoc tries to coin a new phrase
00:21 Ryan_Lane :D
00:21 Ryan_Lane we're doing SOA, and every service maintains their own config management and orchestration code
00:21 Edgan Too much stuff that falls under configuration management has nothing to do with deployment, and you don't do deployment of all types of instances
00:22 Ryan_Lane we do deployment for every single thing we have.
00:22 Ryan_Lane there's no snowflakes. everything uses the exact same process
00:22 Ryan_Lane every single instance is launched in an autoscale group, too ;)
00:22 geekatcmu Edgan, it must be nice that adding a new node to your Hadoop cluster never happens, or else only happens by rebuilding the entire cluster from scratch.
00:22 Edgan What do you need to deploy to long running instances every single time?
00:23 charo joined #salt
00:23 Edgan geekatcmu: There are app servers with deployment, and then there is everything else
00:23 Ryan_Lane yeah, everything is a deployment
00:24 Ryan_Lane you want to change a rule in your security group? do a deployment. need to add a package to your instances? do a deployment. need to change your application code? do a deployment.
00:24 geekatcmu murrdoc: convergence has been part of the CM lexicon since CM has been a thing.
00:24 Ryan_Lane the nice thing about that is there's no special cases. everything is code and everything is done the exact same way
00:24 murrdoc whatever geekatcmu i just came u with it
00:24 murrdoc up*
00:24 Edgan add a package isn't a deploy, that is configuration management.
00:24 * murrdoc puts on shades
00:24 Ryan_Lane Edgan: and we deploy config management code :)
00:24 murrdoc #dealwithit
00:25 Ryan_Lane make a PR, wait for tests to pass, merge it, deploy it. <-- that's the process for every single change
00:25 Ryan_Lane whether it's application code, config management, or orchestration
00:26 Ryan_Lane <3
00:27 murrdoc are u using travis
00:27 murrdoc just got the work crew to travis our salt stuff
00:27 Ryan_Lane nah. we're running our own CI
00:27 Ryan_Lane we do use travis for our open source stuff
00:28 murrdoc what did u build your CI on
00:28 Ryan_Lane jenkins
00:28 Ryan_Lane looking also at stackstorm for some things
00:28 shaggy_surfer joined #salt
00:28 geekatcmu Jenkins here, too
00:28 Ryan_Lane would use saltstack and reactors, but lack of proper HA makes that a non-starter
00:29 murrdoc https://docs.stackstorm.com/install/salt.html
00:29 Ryan_Lane if reactors worked with masterless that would be a different story
00:29 Ryan_Lane could do reactors + SQS + salt-ssh
00:30 Ryan_Lane stackstorm's workflow engine is much better than salt's orchestration modules, though
00:30 murrdoc i love salt
00:30 murrdoc but uh orchestration sucks le donkey balls
00:30 Ryan_Lane :D
00:31 murrdoc had to write our own provisioning workflow using a redis cache
00:31 Thiggy joined #salt
00:31 Ryan_Lane stackstorm + salt looks to be a good combo for workflows
00:32 murrdoc https://stackstorm.com/2015/07/29/getting-started-with-stackstorm-and-saltstack/
00:32 murrdoc so it runs jobs ?
00:33 murrdoc can i replace our devops team ?
00:33 Ryan_Lane :D
00:33 Ryan_Lane it's basically reactors + orchestrate
00:34 murrdoc oh interesting
00:34 murrdoc so it has 'runners'
00:34 Ryan_Lane yeah, basically that
00:35 jaybocc2 joined #salt
00:37 Ryan_Lane mostly we're looking at replacing parts of jenkins with it. not parts of salt
00:38 murrdoc me too
00:38 murrdoc it could sit in a central location
00:38 rem5 joined #salt
00:38 murrdoc shit dude i have been thinking about having a way to guarantee that highstate ran everywhere
00:39 murrdoc if this thing does reporting in an aight manner
00:39 murrdoc it could work.
00:39 Guest94421 joined #salt
00:42 shaggy_surfer joined #salt
00:49 Edgan murrdoc: master mode highstate reporting?
00:49 murrdoc yeah
00:49 Edgan murrdoc: a dashboard?
00:50 murrdoc yeah
00:50 murrdoc joined #salt
00:50 Edgan murrdoc: foreman with a plugin for salt will do that. I don't use it for all its other features, just as a dashboard.
00:50 murrdoc except i may or may not have 35+ salt masters
00:51 murrdoc and no MoM yet
00:51 Edgan murrdoc: MoM?
00:51 murrdoc Master of Masters
00:51 Guest94421 joined #salt
00:51 Edgan murrdoc: Why are so many masters a problem?
00:52 charo joined #salt
00:52 Edgan murrdoc: it supports more than one
00:52 Edgan and it calls out which master a machine is assigned to
00:54 zmalone $ sudo salt-key -L; Accepted Keys: foo0\"*
00:55 zmalone I didn't expect that to work
01:00 Guest94421 joined #salt
01:00 ninkotech joined #salt
01:09 Guest94421 joined #salt
01:10 Hydrosine joined #salt
01:11 jaybocc2 joined #salt
01:15 Guest94421 joined #salt
01:17 justanotheruser joined #salt
01:21 charo joined #salt
01:27 jaybocc2 joined #salt
01:34 hasues joined #salt
01:39 ALLmightySPIFF joined #salt
01:41 anotherZero joined #salt
01:43 hasues left #salt
01:46 Guest94421 joined #salt
02:01 PeterO joined #salt
02:01 breshead joined #salt
02:03 nyx_ joined #salt
02:06 Sokel left #salt
02:11 lorengordon joined #salt
02:14 anotherZero joined #salt
02:17 cyborg-one joined #salt
02:26 rootforce joined #salt
02:26 breshead_ joined #salt
02:29 breshead left #salt
02:35 cberndt joined #salt
02:38 breshead_ anybody know why http://irclog.perlgeek.de/salt/  shows a "not found" error?
02:38 breshead_ I am looking for history, does that mean there is none?
02:41 Bryson joined #salt
02:47 ilbot3 joined #salt
02:47 Topic for #salt is now Welcome to #salt! | Latest Version: 2015.8.3 | Logs: http://irclog.perlgeek.de/salt/ | Paste: https://gist.github.com/ (please don't multiline paste into channel) | Ask with patience as we are volunteers and may not have immediate answers
02:49 catpigger joined #salt
02:49 charo joined #salt
02:57 malinoff joined #salt
03:01 PeterO How can you get a salt state to answer yes to a question.
03:03 PeterO Trying to use lvm.pv_present and the device it's trying to format apparently has data on it and I can see in the logs it's asking [y/N] and its sending a no.
03:15 favadi joined #salt
03:26 evle joined #salt
03:27 XenophonF breshead_: the irclog is broken
03:27 XenophonF maybe there's something we can help you with?
03:28 jfindlay breshead_: multiple people have been asking and the site administrator hasn't responded to my inquiry
03:28 jfindlay I'll talk with some other salt people tomorrow and see what we should do
03:30 XenophonF PeterO: it looks like salt/modules/linux_lvm.py's pvcreate() doesn't give you the option to force physical volume creation
03:30 XenophonF PeterO: but maybe the override parameter does what you want? i don't know - not that familiar with LVM
03:32 PeterO I gave up. Doesn't even look like the kwargs supported "yes" even though pvcreate itself allows for a --yes. I resorted to using cmd.run
03:32 XenophonF PeterO: pvcreate() doesn't blindly pass keyword arguments to the underlying command
03:33 PeterO gotcha
03:33 XenophonF PeterO: it wouldn't be that difficult to modify pvcreate()
03:33 PeterO eh cmd.run worked great for what I needed
03:33 PeterO my Python is too rusty to do any good to the module.
03:33 XenophonF PeterO: i'm sure the saltstack people would appreciate a patch - i've had good luck with them accepting my pull requests
03:34 PeterO but I'll take a look
03:34 XenophonF PeterO: it could be something as simple as `if 'force' in kwargs and kwargs['force'] == True: cmd.append('...')`
03:36 PeterO I'll see what I can do :)
03:36 XenophonF PeterO: you can test it out by putting the modified version of linux_lvm.py into base: .../states/_modules/, running a saltutil.sync_all job on a minion, and then running lvm.pvcreate force=True ...
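For illustration, a self-contained sketch of the change being discussed — threading a force flag through to pvcreate's --yes option. This is not the actual salt.modules.linux_lvm code, which routes through __salt__['cmd.run'] and has a different signature:

    import subprocess

    # sketch only: create an LVM physical volume, optionally forcing with --yes
    def pvcreate(device, force=False, **kwargs):
        cmd = ['pvcreate']
        if force:
            cmd.append('--yes')
        cmd.append(device)
        return subprocess.call(cmd) == 0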
03:38 jaybocc2 joined #salt
03:38 jfindlay PeterO: I'm definitely willing to look something over if you think you've got something for a pull request
03:39 PeterO jfindlay: I'll give it a try :)
03:40 XenophonF speaking of, jfindlay, the fact that i can't install windows features on non-server operating systems makes me a sad panda
03:41 XenophonF so i came up with a different way to do it that works on everything since Vista
03:41 XenophonF https://github.com/irtnog/active-directory-formula/blob/master/_states/windows_feature.py
03:41 XenophonF https://github.com/irtnog/active-directory-formula/blob/master/_modules/windows_servicing.py
03:41 XenophonF it calls out to dism.exe instead of the servermanager cmdlets
03:41 quasiben joined #salt
03:42 XenophonF how might i go about convincing the folks at saltstack to look at this?
03:42 XenophonF it still needs some work, obviously
03:43 XenophonF but the goal would be to support package and feature installation in both offline and online images, and that could facilitate better support for future editions of windows
03:46 jfindlay XenophonF: sorry, I'm stuck at my in-laws' with sketchy wifi
03:46 XenophonF heh
03:46 XenophonF no worries!
03:46 jfindlay the simplest, I think, would be to send a pull request
03:46 XenophonF OK
03:47 jfindlay that will get the right people looking at it
03:47 XenophonF i need to add better documentation before i do that
03:47 jfindlay either Shane or Dave
03:51 quasiben joined #salt
03:52 jfindlay also an explanation as to how it complements/extends/replaces existing functionality since there may be some confusion about that
03:52 jfindlay in the pull request
03:52 jfindlay looks pretty good to me
03:53 XenophonF thanks for the advice
03:53 XenophonF i've got another module+state pair in the works that's specific to AD FS
03:54 jaybocc2 joined #salt
03:54 XenophonF and one planned for sharepoint
03:54 XenophonF don't hold me to that yet
03:55 jfindlay needs some boilerplate code at the top of the state file, but otherwise it looks great
03:58 XenophonF gotcha
03:59 quasiben joined #salt
04:01 moogyver joined #salt
04:02 quasiben joined #salt
04:02 XenophonF thanks again
04:04 jfindlay no problem.  Thanks for contributing to salt!
04:13 thehaven joined #salt
04:14 quasiben joined #salt
04:17 racooper joined #salt
04:19 brianfeister joined #salt
04:20 quasiben joined #salt
04:23 invalidexception On a similar note, who/how to contact in order to have formulas reviewed for inclusion in saltstack-formulas?
04:26 MeltedLux joined #salt
04:26 quasiben joined #salt
04:31 kshlm joined #salt
04:40 quasiben joined #salt
04:50 justanotheruser joined #salt
04:52 quasiben joined #salt
04:56 justanot1eruser joined #salt
04:57 debian112 left #salt
05:01 colegatron joined #salt
05:10 wenxin joined #salt
05:13 akhter joined #salt
05:14 TTimo jfindlay: I got a windows minion to fully spawn and start getting configured .. that was .. awesome
05:14 zmalone joined #salt
05:14 jaybocc2 joined #salt
05:16 XenophonF it's a beautiful thing, eh?
05:16 TTimo yuuuup
05:17 XenophonF really, it's the simple things in life, like making sure PuTTY or RSAT gets installed on every goddamn Windows box like it says in the SOE documentation
05:18 XenophonF (if i had a nickel for every time i reached for a non-default MMC snap-in that should have been installed according to our SOE...)
05:40 anmol joined #salt
05:42 rdas joined #salt
05:43 anmol joined #salt
05:44 kshlm joined #salt
05:51 nyx_ joined #salt
06:02 PeterO joined #salt
06:06 favadi joined #salt
06:07 tristianc joined #salt
06:09 kshlm joined #salt
06:32 kshlm joined #salt
06:33 favadi joined #salt
06:55 favadi joined #salt
07:02 kshlm joined #salt
07:04 jaybocc2 joined #salt
07:23 rotbeard joined #salt
07:44 KermitTheFragger joined #salt
07:50 cberndt joined #salt
07:52 dkrae joined #salt
07:56 otter768 joined #salt
08:00 subsignal joined #salt
08:04 dustywusty joined #salt
08:05 jaybocc2 joined #salt
08:08 kshlm joined #salt
08:10 viq joined #salt
08:16 drwx left #salt
08:19 eseyman joined #salt
08:30 jamesp9 joined #salt
08:32 malinoff joined #salt
08:32 malinoff joined #salt
08:35 qman__ joined #salt
08:38 Vaelatern joined #salt
08:56 Rumbles joined #salt
09:03 slav0nic joined #salt
09:07 Fiber^ joined #salt
09:12 AlberTUX joined #salt
09:16 tercenya_ joined #salt
09:30 jaybocc2 joined #salt
09:57 otter768 joined #salt
09:57 Crazy67 joined #salt
10:03 subsignal joined #salt
10:13 anmol joined #salt
10:16 felskrone joined #salt
10:22 scott_w joined #salt
10:28 giantlock joined #salt
10:30 denys joined #salt
10:34 rubenb joined #salt
10:41 linjan joined #salt
10:46 fredvd joined #salt
10:47 favadi joined #salt
11:01 colegatron joined #salt
11:05 lothiraldan joined #salt
11:11 keimlink joined #salt
11:30 jaybocc2 joined #salt
11:36 denys_ joined #salt
11:47 favadi joined #salt
11:48 scott_w joined #salt
11:49 av_ joined #salt
11:51 favadi joined #salt
11:55 favadi joined #salt
11:57 otter768 joined #salt
11:59 joejoba joined #salt
12:00 sathya joined #salt
12:00 sathya Hi
12:00 sathya Hi all
12:00 sathya Is there a way to make salt-master to auto accept new-minions keys ?
12:02 grumm_servire joined #salt
12:02 favadi joined #salt
12:02 sathya Can someone help me with my question
12:03 scott_w joined #salt
12:04 AndreasLutro sathya: if you look at the master config docs you'll find auto_accept
12:04 subsignal joined #salt
12:06 sathya you mean in the master config file ?
12:07 favadi joined #salt
12:08 sathya AndreasLutro: Thanks, i got that config and enabled it, let me test and update
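For reference, auto_accept is a single setting in the master config; it defaults to False, and blindly accepting all keys has obvious security implications:

    # /etc/salt/master
    auto_accept: True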
12:14 slav0nic joined #salt
12:17 slav0nic joined #salt
12:19 Rumbles joined #salt
12:33 akhter joined #salt
12:33 pcdummy joined #salt
12:43 illern joined #salt
12:44 catpig joined #salt
12:56 nyx_ joined #salt
13:00 giantlock joined #salt
13:03 amcorreia joined #salt
13:06 An_T_oine joined #salt
13:10 thalleralexander joined #salt
13:10 XenophonF joined #salt
13:18 thalleralexander joined #salt
13:19 rootforce joined #salt
13:22 M-MadsRC1 Shouldn't Salt choose a minion ID based on the reverse DNS of the server's IP? or did that change?
13:22 morissette joined #salt
13:23 dijit I always thought it got it from uname.
13:23 An_T_oine hi all
13:23 An_T_oine somebody can explain me why use foreman if i already use saltstack ?
13:23 dijit on windows at least you have to manually specify it.
13:24 M-MadsRC1 I'm seeing that I have to define a hostname in /etc/hostname for Salt to choose the correct ID, instead of doing a PTR lookup. If /etc/hostname isn't set, it just chooses its WAN IP as the ID. This is because it uses python's socket.getfqdn(), which will get you the FQDN of localhost by default
13:24 dijit An_T_oine: foreman is not for config management, it's for PXE booting mainly.
13:24 dijit like cobbler.
13:25 dijit foreman is really trying to manage a lot of different things, salt can be one of them.
13:25 dijit http://theforeman.org/
13:25 dijit the site tells much more.
13:25 M-MadsRC1 and localhost.localdomain is banned in the code, so it defaults to the WAN address :P
13:25 dijit I don't think it'll use ptr records to specify minion name.
13:25 dijit that sounds complicated and dangerous.
13:26 dijit you can use dhcpd to 'get' a hostname from the network.
13:26 dijit and then salt will use it once it's been started for the first time.
13:26 felskrone joined #salt
13:26 dijit your trick should be; get a hostname before invoking salt.
13:27 M-MadsRC1 Hmm, I must have misread the old documentation, 'cause I swear I've read it somewhere. But seems like that's not the case anymore
13:27 M-MadsRC1 Anyway, I'll just remember to set the hostname :P
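The lookup M-MadsRC1 describes is easy to reproduce directly, since socket.getfqdn() is plain stdlib:

    import socket
    # returns the FQDN for the configured hostname; without a proper
    # /etc/hostname it can resolve to a localhost-ish name instead
    print(socket.getfqdn())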
13:27 An_T_oine dijit: foreman is a configuration management tool and much more, but it uses puppet natively
13:28 An_T_oine and i don't understand why use salt with it
13:28 An_T_oine http://fr.slideshare.net/skbenja1/salt-44219915?related=1
13:29 M-MadsRC1 It gives you a GUI for salt
13:29 M-MadsRC1 some people like that, others don't
13:29 xmj does it come with a big green button?
13:29 An_T_oine lol
13:31 jaybocc2 joined #salt
13:34 dijit salt is good, why not have more options?
13:34 dijit it's not like making it modular is a bad thing.
13:37 dijit looks cool.
13:38 TyrfingMjolnir joined #salt
13:41 tmclaugh[work] joined #salt
13:56 subsignal joined #salt
13:58 otter768 joined #salt
14:02 favadi joined #salt
14:03 TooLmaN joined #salt
14:05 digitalflaunt joined #salt
14:13 bhosmer joined #salt
14:13 subsignal joined #salt
14:14 otter768 joined #salt
14:21 pcdummy joined #salt
14:23 cnginx_ joined #salt
14:27 cnginx_ hi there
14:27 cnginx_ is there any way to execute a script with salt-cloud after state.highstate function ?
14:27 favadi joined #salt
14:27 cnginx_ any ide ?
14:28 cnginx_ this execution just for one time.
14:32 murrdoc joined #salt
14:33 numkem joined #salt
14:36 perfectsine joined #salt
14:39 numkem joined #salt
14:40 slav0nic joined #salt
14:42 lompik joined #salt
14:45 FreeSpencer How do you guys write unit tests for your saltstack states?
14:47 AndreasLutro unit tests are pretty much impossible to write, I use serverspec for acceptance/functional testing
14:49 FreeSpencer serverspec looks cool, thanks!
14:54 CheKoLyN joined #salt
14:57 toastedpenguin joined #salt
15:02 perfectsine joined #salt
15:02 mapu joined #salt
15:04 om joined #salt
15:05 mpanetta joined #salt
15:11 M-MadsRC1 What does serverspec give you that Salt can't do? i mean, checking if a package is installed can be done with Salt as well, so how do you use it, Andreas?
15:14 lompik joined #salt
15:22 thalleralexander joined #salt
15:23 winsalt joined #salt
15:25 anotherZero joined #salt
15:27 pcn If you really want unit tests, you can check out https://github.com/librato/salt-state-test
15:28 winsalt joined #salt
15:31 pcn M-MadsRC1: are you proposing that salt is the right tool to replace serverspec?
15:32 ekristen joined #salt
15:32 AndreasLutro M-MadsRC1: serverspec does things on a bit of a higher level - I use them to make sure files contain certain strings, or that certain ports are listening on the correct address, or that a `curl localhost` invocation contains a string
15:32 jaybocc2 joined #salt
15:32 hasues joined #salt
15:32 hasues left #salt
15:32 AndreasLutro you can tell salt "service x should be running", but sometimes salt will straight out lie to you and say it is running when it's not
15:33 Arbusto joined #salt
15:38 zmalone joined #salt
15:48 andrew_v joined #salt
15:48 bhosmer joined #salt
15:50 spuder joined #salt
15:50 brianfeister joined #salt
15:51 perfectsine joined #salt
15:52 lothiraldan joined #salt
15:53 XenophonF or tell you it isn't running when it is
15:58 quasiben joined #salt
16:05 PeterO joined #salt
16:10 jfindlay TTimo: nice
16:10 anotherZero joined #salt
16:12 TTimo first use will be to spin up windows build slaves on EC2 to accommodate demand
16:13 TTimo I should be able to publish a number of config files and examples related to that
16:15 Gareth o/
16:20 pcdummy joined #salt
16:22 murrdoc anyone know if salt has plans for syndic
16:23 murrdoc like, is syndic how salt will be recommending multi master
16:24 * murrdoc drops pins
16:24 murrdoc Gareth:  \o
16:24 Gareth murrdoc: yo yo.
16:25 murrdoc hows the new gig
16:25 Gareth good good.
16:25 moogyver joined #salt
16:29 peters-tx Getting a weird crash on a Red Hat 6.7 machine http://fpaste.org/305987/   Any ideas?
16:32 bhosmer joined #salt
16:34 Gareth peters-tx: 3rd and 4th lines when the backtrace starts, I'd guess something to do with ZMQ and the minion being unable to connect.  can you verify that the minion can talk to 4506 on 10.0.0.1?
16:39 favadi joined #salt
16:39 Guest94421 joined #salt
16:40 peters-tx Gareth, Yes, I did try and manually test just using Telnet; however I'll do a Wireshark
16:41 peters-tx (Currently trying to get Debuginfo bits into place)
16:42 jaybocc2 joined #salt
16:43 tristianc joined #salt
16:45 amcorreia joined #salt
16:45 peters-tx Gareth, Wireshark on both ends shows communication OK
16:47 Gareth peters-tx: not sure then.  Something is failing to communicate somewhere though.
16:48 foundatron I installed salt-minion on my master which created a key (named after my custom minion id), but when I started my salt master it seems to have also created a key named after the hostname...is this supposed to happen? The latter fails when I run state.highstate
16:48 foundatron which is bad
16:48 peters-tx Gareth, Currently trying to get ABRTD to actually capture the crash
16:50 foundatron Where is the localhost key coming from when I start the master? my /etc/salt/minion and /etc/salt/minion_id both have my custom id name (saltmaster-01)...so where is that localhost-named key coming from when I start my master?
16:51 stevej99 joined #salt
16:51 foundatron I can delete it, but comes back when i restart my master
16:54 dayid joined #salt
16:54 LtLefse joined #salt
16:58 stevej99 Hi All, I'm playing with the win_service module and the enabled() function appears to struggle with service names that have spaces in. It seems a bit obvious and also the code appears correct to me, so I'd like to rule out user error and also confirm it's not a known issue. I have checked the issues in salt github searching for win_service and can't see it.
16:58 XenophonF foundatron: is your computer's hostname set?
16:59 XenophonF stevej99: are you confusing the service name with the service display name?
16:59 scott_w joined #salt
17:00 XenophonF stevej99: e.g., adfssrv vs "AD FS 2.0 Web Service" - you need to use the former, not the latter
17:00 stevej99 XenophonF: No, there are some crazy guys out there writing services with spaces in the name as well as the display name unfortunately.
17:00 onlyanegg joined #salt
17:00 XenophonF bless their hearts
17:01 XenophonF hang on let me glance at the win_service code
17:01 colegatron joined #salt
17:01 XenophonF stevej99: you're running 2015.8.3? or something else?
17:01 foundatron XenophonF no, I'm hoping to assign the minion id with minion_id in the minion config file
17:02 foundatron the weird thing is there are two
17:02 whytewolf foundatron: you should be able to
17:02 murrdoc minion id is assigned by the 'id' key
17:02 stevej99 Will be prety recent, let me check what we set up.
17:02 whytewolf foundatron: make sure you restart the salt-minion after you change the minion id
17:03 XenophonF stevej99: looks like salt shells out to sc.exe to interact with the service control manager
17:03 XenophonF stevej99: could be a problem with escaping
17:03 foundatron I have one key for the id set in the minion file (from when that minion starts), and another key based on the hostname that gets created/submitted when the master starts
17:04 foundatron Is the master supposed to created a key for it's self when it starts?
17:04 stevej99 I checked that out, but the actual command being passed correctly encapsulates the argument in "'s as it should and I've confirmed by running the cmd variable contents in a shell with good results.
17:04 stevej99 My python is weak at best so I agree there is probably some issue with escaping in the list2cmdline function, which I've not drilled into yet.
17:04 foundatron do I have a cache somewhere on the master that needs to be evacuated?
17:05 stevej99 If it's possible that this issue exists as the use case is rare and would be listed in the github issues then I will look into it further
17:07 XenophonF stevej99: hm, dunno then
17:08 stevej99 I'll double check I haven't got a custom module mixed up on this test minion and then dig around some more.
17:08 lompik joined #salt
17:09 whytewolf foundatron: do you have any other systems that have the minion running. or do you have more then one copy of the minion running on the same system
17:10 foundatron my test setup is one external minion, and one master. salt-minion is installed on both the external minion and on the salt-master
17:11 Thiggy joined #salt
17:11 foundatron I don't believe I have more than one minion running on the master
17:11 foundatron which is where the issue is occurring
17:11 quasiben joined #salt
17:12 whytewolf foundatron: are you sure it is the salt master that it is occurring on? just because it says localhost doesn't mean it is the localhost. just that the keys name is the localhost
17:12 foundatron but...the salt master appears to think there are two minions on the master, as it tries to apply highstate twice
17:12 foundatron whytewolf, how would i verify that
17:12 whytewolf okay. stop salt-minion on the master.
17:13 whytewolf and then see if salt-minion shows up in ps -ef
17:13 brianfeister joined #salt
17:13 whytewolf also, do that on the minion as well
17:14 whytewolf just to be safe
17:14 murrdoc stop salt-minion
17:14 murrdoc pkill -f salt-minion
17:14 whytewolf while the minion is down delete the offending localhost key out of salt-key
17:15 dayid joined #salt
17:15 foundatron ahh, so on the master systemctl stop salt-minion isn't killing the minion
17:16 moogyver joined #salt
17:16 XenophonF foundatron: not if the minion you want to stop is running on another computer
17:17 foundatron I stopped it on the master, and it still showed up in ps
17:17 XenophonF stevej99: are you sure that win_service ends up quoting the service name in the sc.exe command?
17:17 whytewolf sounds like you had a second process running
17:17 whytewolf kill it
17:17 foundatron yeah
17:17 whytewolf kill it with fire
17:17 Bryson joined #salt
17:17 foundatron murdered
17:18 foundatron I've been banging my head at this for 30 minutes.
17:18 foundatron thanks
17:18 foundatron lets see if this works
17:18 XenophonF stevej99: try embedding quotes in the string, like '"stupid developers"'
17:19 XenophonF stevej99: a related bug i've been meaning to report is how salt does case-sensitive matches on service names
17:19 XenophonF stevej99: so it thinks w32time and W32time are different :(
17:19 perfectsine joined #salt
17:20 om joined #salt
17:20 shaggy_surfer joined #salt
17:21 babilen Ah, Windows and case-sensitivity ...
17:21 stevej99 I have so far tried building the name var with quotes around it and also escaping them, but the list2cmdline function goes crazy expanding and escaping them. I need to look at what that function is doing. I'll try bypassing it and build the string manually to start with perhaps.
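Since list2cmdline is plain stdlib, its quoting behaviour is easy to poke at in isolation:

    from subprocess import list2cmdline
    print(list2cmdline(['sc', 'qc', 'service name']))    # sc qc "service name"
    print(list2cmdline(['sc', 'qc', '"service name"']))  # sc qc "\"service name\""

The second line shows how an argument that already carries quotes gets re-escaped, which matches the "goes crazy expanding and escaping" behaviour described.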
17:30 perfectsine joined #salt
17:33 teryx510 joined #salt
17:34 bhosmer joined #salt
17:34 om joined #salt
17:39 CeBe joined #salt
17:40 Guest94421 joined #salt
17:40 scott_w joined #salt
17:42 XenophonF I wonder why Salt doesn't just use win32service directly instead of shelling out to sc.exe.
17:42 murrdoc might be easier in python land to do the former
17:46 shaggy_surfer joined #salt
17:46 XenophonF salt requires pywin32, which has had this win32service class/library forever
17:46 stevej99 salt module function cmd.run is being passed sc qc "service name" so cmd.run must not be dealing well with the quotes; it's not a small function though.
17:47 XenophonF stevej99: no, and you'll notice there are calls to a _render function somewhere in there, too
17:48 stevej99 As I said, not familiar with python at all, but I would have thought there is a better way to return windows services without using sc, as I believe XenophonF is alluding to.
17:51 AlberTUX joined #salt
17:52 pcn Does anyone have a salt-ish method for installing/maintaining multiple JVMs and alternatives?
17:52 amcorreia joined #salt
17:53 jaybocc2 joined #salt
17:53 ranomore1 pcn: there is a salt state for alternatives
17:54 ranomore1 https://docs.saltstack.com/en/latest/ref/states/all/salt.states.alternatives.html
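Usage of that state looks roughly like this (the name, paths, and priority are hypothetical; see the doc link above):

    java:
      alternatives.install:
        - name: java
        - link: /usr/bin/java
        - path: /opt/jdk8/bin/java
        - priority: 100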
17:56 dyasny joined #salt
17:59 stevej99 I wonder if it's this resurrecting somehow: https://github.com/saltstack/salt/issues/4934
17:59 saltstackbot [#4934]title: quoting for cmd.run exhibits odd behaviour | cmd.run seems to strip the quotes out of the command so that my sql query hangs. ...
17:59 perfectsine joined #salt
18:00 stevej99 Pub time, I'll pick this up again in the morning.
18:02 murrdoc pub time!
18:08 sbogg joined #salt
18:11 xmj pub time?
18:11 babilen XenophonF: It might simply be that nobody has yet written a better implementation
18:11 foundatron it's 10am here in seattle....so almost pubtime
18:12 babilen xmj: The time of the day when you and your pals head out to enjoy a well-deserved pint
18:12 xmj oh i know pub time
18:12 xmj i was wondering about the 'deserved' part for now
18:15 aetherios joined #salt
18:17 murrdoc isnt seattle all 'green' now
18:18 wt joined #salt
18:19 wt Is there a way to add additional jinja filters to salt?
18:19 murrdoc macros only
18:19 JoeJulian seattle's all frozen today.
18:19 murrdoc oh u mean like the filter functions ?
18:19 murrdoc no lue
18:20 zmalone anyone have advice on how to fetch a single salt:// file using salt-call?
18:21 bhosmer joined #salt
18:22 JoeJulian Use file.managed
18:23 murrdoc or salt-call cp.get_file
18:23 zmalone "'cp.get' is not available."
18:24 zmalone "salt-call -d" prints the full list of modules, but a decent number are unavailable, or throw errors
18:24 wt yeah, I'd like to have a filter to pick the third ip address out of a subnet (e.g. in 10.0.0.0/24, the third ip address is 10.0.0.2). I think it'd be hard to do that with macros.
18:24 zmalone It's unclear what should and should not work with it
18:24 zmalone oh shucks, get_file vs get
18:25 wt something like this in a salt context: http://stackoverflow.com/questions/4828406/import-a-python-module-into-a-jinja-template
18:25 zmalone thanks!
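For reference, the working form takes a salt:// source and a local destination (paths here are placeholders):

    salt-call cp.get_file salt://files/example.conf /tmp/example.conf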
18:31 JoeJulian wt: imho, that should be done outside of the template, leaving templates to just fill in the blanks from your defaults or context. The need is so specific, I would consider a custom execution module.
18:34 JoeJulian Hey foundatron, there's no Salt meetups around Seattle, is there?
18:35 pcn wt remember that your state can be a python script - you can do your CIDR calculations in python, then pass in the baked strings to jinja
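A minimal sketch of that calculation with the ipaddress module (Python 3 stdlib, or the backport on Python 2; treating the network address as element 0 is an assumption about what "third address" means):

    import ipaddress

    net = ipaddress.ip_network('10.0.0.0/24')
    print(net[2])   # 10.0.0.2 -- third address, counting 10.0.0.0 as the first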
18:35 foundatron JoeJulian, you know I haven't looked
18:35 JoeJulian I haven't seen any, but there's at least two of us.
18:36 pcn http://www.meetup.com/Seattle-SaltStack-Meetup/
18:36 foundatron ^
18:37 forrest joined #salt
18:37 JoeJulian So not really.
18:37 scott_w joined #salt
18:37 murrdoc forrest:  make me a repo
18:38 murrdoc packer-formula por fa vor
18:38 foundatron Looks like I joined that group back in november...lol don't remember that
18:38 forrest murrdoc, https://github.com/saltstack-formulas/packer-formula
18:39 murrdoc i ll push up some stuff to it
18:39 murrdoc then i will reprimand myself for pushing directly
18:40 foundatron murrdoc looking forward to this!
18:40 murrdoc packer is a dumb dumb
18:40 murrdoc he wont publish packages
18:40 murrdoc cos its 500 meg of binaries
18:41 forrest murrdoc, Don't push directly
18:41 murrdoc dont reprimand me
18:41 forrest murrdoc, Fork and commit there, or make a PR
18:41 murrdoc I WAS GOING TO SELF REPRIMAND
18:41 forrest I will reprimand anyone who does force pushes, especially when you know better
18:41 murrdoc WHY U DOING DIS
18:41 forrest Seriously, you're setting a bad example, don't push straight to the repo
18:42 murrdoc u took my fun away
18:42 murrdoc 1. i wasnt going to push
18:42 forrest It's enough of a pain in the ass to manage the formula repos as is, arguing with people about force pushing adds even more work
18:42 wt JoeJulian, I have written many of those. In this case however, it seemed like a utility function to pick the nth address out of a particular cidr would be useful. I could imagine an execution module doing something similar.
18:42 murrdoc 2. i was saying i would do it for the fun of it if i did
18:42 forrest WELL IT WOULDN'T BE FUN FOR ME!
18:42 murrdoc I DONT CARE ABOUT YOUR FUN
18:42 murrdoc :(
18:42 murrdoc also happy new year u turds
18:43 forrest Thanks you too
18:43 murrdoc forrest:  iggy whytewolf so on
18:43 murrdoc its been a fun year of hanging out
18:43 murrdoc wont be coming to saltstack
18:43 forrest No? Employer wouldn't pay?
18:43 murrdoc nah team is bigger
18:43 forrest Did talk confirmations go out to all the people who had talks accepted?
18:43 murrdoc letting the new crew go
18:43 winsalt joined #salt
18:43 forrest jfindlay, Are all the talk participants selected?
18:44 Edgan murrdoc: I like packer, because I am not aware of anything else that does the same things. But I agree that it is written in Go, and hence statically linked, which sucks. It sucks even more to try to compile it.
18:44 Crazy67 joined #salt
18:44 murrdoc Edgan:  packaging it is trivial
18:44 murrdoc u want a gist i can get u a gist
18:44 murrdoc and with the formula I ll have a salty way to setup the profile in /etc/profile.d
18:44 murrdoc and life will be a-ok
18:45 whytewolf nice. I have been meaning to check out packer to incorp into my home openstack projects,
18:45 murrdoc its the bomb
18:45 Edgan murrdoc: mkdir -p packer/usr/bin ; cp -a packer* packer/usr/bin ; fpm -s dir -t deb(or rpm) -n packer -v 1.2.3 packer
18:45 Edgan murrdoc: something like that is what I do
18:45 murrdoc Edgan:  its not as pretty as mine
18:46 murrdoc i have echos
18:46 Edgan haha
18:46 murrdoc and a wget
18:46 murrdoc no error checking tho
18:46 murrdoc :D
18:46 Edgan murrdoc: Do you handle the fact that packer already exists as a binary name in rpm-based distributions?
18:46 murrdoc prefix with company name
18:47 Edgan murrdoc: I just do packer.io and then alias it back to packer as my user
18:47 murrdoc package name dont matter
18:47 Destreyf joined #salt
18:47 murrdoc do it
18:47 Edgan murrdoc: I mean command name.
18:47 murrdoc oh i keep it as packer
18:47 Edgan rpm -qf /usr/sbin/packer
18:47 Edgan cracklib-dicts-2.9.1-6.fc23.x86_64
18:48 Destreyf using the API, is it possible to use a list of minions?
18:48 murrdoc work in good ol ubuntu
18:48 murrdoc Edgan:  thing is i am trying to convince my peoples
18:48 murrdoc to use atlas
18:48 Edgan murrdoc: Money, bleh
18:48 murrdoc only downside is it s restricted to vagrant provisioners
18:49 scbunn joined #salt
18:49 murrdoc and i need aws, qemu kvm, docker build for travis and vagrant
18:49 Edgan murrdoc: what do you want atlas for? One thing, or the whole thing?
18:49 scbunn rpm salt repos appear to be broken?
18:49 murrdoc not having to setup my own packer install only
18:49 zmalone which ones are you using scbunn?
18:50 zmalone they are broken, but some are broken to different levels than others
18:50 Edgan murrdoc: You mean like the bento public repo?
18:50 zmalone Saltstack knows, and has had open issues for it since Fall-ish.
18:50 scbunn baseurl=https://repo.saltstack.com/yum/rhel7
18:50 murrdoc Edgan:  yeah but u can push up your own templates
18:50 scbunn rhel6 appears to be broken as well
18:50 zmalone scbunn: Complaints about not being able to upgrade due to package dependency conflicts?
18:51 scbunn yes and no, I reposync to internal repositories and reposync fails with some 404's and it looks like the metadata doesn't match the actual packages
18:51 Edgan murrdoc: I just forked the bento repository, and put the vagrant boxes on our mirror server. Then the jsons live in whatever git repo you want.
18:51 zmalone yep, that's another one
18:51 murrdoc Edgan:  me too
18:51 murrdoc but bento dont do kvm or aws stuff
18:51 zmalone scbunn: https://github.com/saltstack/salt/issues/29477
18:51 saltstackbot [#29477]title: Metadata of repo.saltstack.com/yum/rhel6/ doesn't match packages in repo | Hello there,...
18:52 murrdoc actually i went with boxcutter/ubuntu
18:52 dayid joined #salt
18:53 scbunn zmalone: thanks.. major blocker for me.. seems like it would be a high priority.
18:53 whytewolf scbunn: where did that baseurl come from? isn't the baseurl for redhat7 supposed to be http://repo.saltstack.com/yum/redhat/7/x86_64/latest/
18:53 murrdoc Edgan:  cos i like the var_file format
18:53 Edgan murrdoc: I need centos and ubuntu. So I use bento, and heavily modified it. I made a vagrant+salt multi-box. We have CentOS and Ubuntu as generic boxes with the salt master already installed. Then you can vagrant up 1-4 VMs in VMWare or Virtualbox, and 2-4 point to 1 for their salt master, but you can override that if you want.
18:53 zmalone It could be worse, the Fedora people stopped getting saltstack packaged by saltstack altogether a few months back
18:53 scbunn whytewolf: I don't know, I pulled it off off the salt docs months ago.. maybe its changed
18:54 murrdoc Edgan:  u want a job ? in la :D
18:54 murrdoc iggy:'s happy
18:54 murrdoc we can use more smarts :D
18:54 murrdoc the fun one i am doing with packer is a docker box
18:54 whytewolf scbunn: months ago? you might have been in the 2 week period when they were still deciding on the url then
18:54 murrdoc for making travis docker instance Edgan
18:55 Edgan murrdoc: Have you played with making a vagrant box from an existing vagrant box? I know it isn't directly supported like it is with AWS, but supposedly you can point it at the VM files and make it work.
18:55 scbunn whytewolf: possibly, let me change location on my end
18:55 murrdoc Edgan:  nah sticking with virtualbox to vagrant
18:55 murrdoc and docker to docke
18:55 whytewolf i could be wrong also
18:55 Edgan murrdoc: Reinstalling ubuntu every time you want to rebuild the vagrant box sucks
18:56 murrdoc packer_cache_dir ?
18:56 murrdoc global environment variable
18:56 murrdoc lets u use a shared cache dir
18:56 Edgan That is where it drops the isos. Not sure what you mean.
18:57 murrdoc oh just saying, i am happy with just that
18:57 scbunn haha, thats worse.. all 404's
18:57 zmalone I've got a gotcha that everyone probably already knows, but might not have thought of: don't put pillar/ in /srv/salt/ , and don't put anything you don't want all your minions knowing into /srv/salt (like keys, hashed passwords, etc.).  It's easy to mount a single git repo for salt in /srv/salt, but things really need to be broken up
18:57 scbunn arrrr... maybe not..
18:58 whytewolf zmalone: actually. not as many people know that as should...
18:58 zmalone Yeah, pillar targeting doesn't really count as security if any minion can go fetch the raw pillar files
18:59 zmalone (which is why I wanted to figure out how to fetch a file using salt-call)
18:59 mapu joined #salt
18:59 scbunn no, changing to the new location still have broken metadata.
18:59 jfindlay forrest: I'm not sure, they might be
18:59 zmalone Oh yeah, that problem is ongoing
18:59 forrest jfindlay, Okay
19:00 whytewolf jfindlay: any word on when the repo build process is going to be released? :P
19:00 Edgan scbunn: My opinion of best practice is to mirror their yum repo, and regenerate the metadata with createrepo. So then doing random stupid stuff doesn't randomly break you.
19:00 jfindlay whytewolf: https://github.com/saltstack/salt-pack
19:00 Edgan scbunn: s/then/them/g
19:00 scbunn Edgan: thats what I do, but I can't mirror the repo because their metadata is broken
19:01 Edgan scbunn: wget -m --no-parent
19:01 whytewolf oh nice.
19:01 scbunn Edgan: or at least I can't be sure that I have all the packages...
19:01 Edgan scbunn: there is an open bug for rsync support
19:01 scbunn Edgan: what not reposync ? isn't that the tool for the job
19:01 Edgan scbunn: you are saying reposync isn't working because the repo is broken
19:02 whytewolf jfindlay: thank you
19:02 scbunn Edgan: right, which is the *standard* tool for mirroring repositories.  If I use wget there is no guarantee that I have a working repo/dependency tree at the end.
19:03 Edgan scbunn: You download everything, clean up the index.html* files, and then run createrepo to make the metadata
19:03 Edgan scbunn: No reason it shouldn't work if all the packages are there
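Roughly the mirror-and-regenerate procedure Edgan describes, as a sketch (URL as in the discussion above):

    wget -m --no-parent https://repo.saltstack.com/yum/redhat/7/x86_64/latest/
    cd repo.saltstack.com/yum/redhat/7/x86_64/latest
    find . -name 'index.html*' -delete   # drop wget's directory listings
    createrepo .                         # rebuild the repo metadata locally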
19:03 scbunn Edgan: and if say... python-zmq-foo doesn't download? but its depended on by salt-2015-foo then I have broken dependencies
19:03 jfindlay whytewolf: sure
19:04 jfindlay whytewolf: but now I'm going to expect some pull requests :-)
19:04 Edgan scbunn: If it is on the server there is no reason that wget shouldn't pull it.
19:04 whytewolf jfindlay: as soon as i have something i have very little of. time
19:05 Edgan scbunn: Assuming the didn't set bad permissions or something like SELinux/Apparmor aren't getting in the way
19:05 scbunn Edgan: agreed, but if it's on the server there isn't any reason that somebody can't regenerate the metadata with createrepo and deliver a working repository, otherwise -- what's the point?
19:05 forrest whytewolf, Just don't spend as much time on IRC ;)
19:05 subsignal joined #salt
19:05 Edgan scbunn: I mirrored it a week or two ago with wget as I described. I have it as of 2015.8.3.
19:05 jfindlay whytewolf: I totally know how you feel.  I know I've taken on too many projects here
19:06 quasiben joined #salt
19:06 Edgan scbunn: You don't want to be dependent on their server availability. You want it to be local and faster. You want to rebuild salt with your own patches. etc
19:07 scbunn Edgan: Not really.  If thats the case I might as well git clone on my end and roll my own rpms
19:07 Edgan scbunn: if you are being really paranoid, you don't want your servers going out to the internet for anything
19:07 Edgan scbunn: I have done that. You're better off taking their src.rpms and starting with that.
19:08 dayid_ joined #salt
19:08 Edgan scbunn: I am currently patching for a bug in orchestration. It is merged, but it isn't released.
19:09 scbunn Edgan: its not about being paranoid.  I sync locally because most of my machines can't reach the internet.  My point is, if you are going to provide RPMs then make sure they work, because at this point I have to take time out of my day to fix rpms for salt, which is a hard sell to mgmt when the selling point of moving from puppet to salt was ease of use and not having to manage a giant gem installation chain.
19:10 murrdoc basepi:  ^
19:10 Edgan scbunn: Sadly, they screwed that pooch when they started doing their own python module packages that override some in CentOS
19:10 nyx__ joined #salt
19:10 scbunn If I was hacking on salt, great got it, but I'm not.. I have it running on 1K+ servers spread across the globe and right now I just need to get it upgraded and move on with my day.
19:10 Edgan scbunn: there are open issues to get them to fix that
19:10 forrest murrdoc, he isn't here, he's on vacation
19:11 murrdoc i dont know who else is on that would care
19:11 murrdoc scbunn:  file a github issue
19:11 geekatcmu Welcome to the wonderful world of cutting-edge software.
19:11 Edgan murrdoc: probably already exists
19:11 Edgan geekatcmu: yep
19:11 brianfeister joined #salt
19:11 murrdoc well then add to it
19:11 jfindlay scbunn: definitely file an issue if you haven't already or if there isn't an issue already
19:12 jfindlay mirroring the repo should not be causing problems
19:12 murrdoc mirroring was a problem for us too btw jfindlay
19:12 murrdoc using aptly
19:13 murrdoc ended up wget and add to local mirror using aptly api
19:13 Edgan murrdoc: we use aptly too
19:13 dyasny joined #salt
19:13 murrdoc its good stuff
19:13 scbunn jfindlay: done and thanks.
19:13 job jfindlay, help https://github.com/saltstack/salt/issues/29690
19:13 saltstackbot [#29690]title: Debian Jessie amd64 APT repository out of sync | Followed the instructions here https://docs.saltstack.com/en/latest/topics/installation/debian.html on a Debian Jessie amd64 machine, but seems one of the packages is currently missing....
19:16 jfindlay I wonder if these issues should be filed against salt-pack
19:16 whytewolf I would think they should
19:17 cberndt joined #salt
19:17 whytewolf that way any PR's can be linked to issues without crossing projects
19:18 jfindlay scbunn, job: thanks for the issues
19:18 scbunn jfindlay: no problem, I try to work around it by doing a recursive wget
19:18 job issue #29690 is preventing me from deploying salt on debian jessie hosts
19:19 job it went away for a short while (i closed the ticket subsequently), but came back (thus i reopened)
19:19 jfindlay I'm just thinking how we can test this better, which has typically been partially my responsibility (https://github.com/saltstack/salt-pkg-tests), and how we can run the repo gen better, which has not
19:20 jfindlay I will, however, be talking with the packaging team next week about this
19:20 job thank you
19:20 jfindlay theoretically, everything that the repo does minus some pillar stuff should be contained within salt-pack
19:21 job i didnt know salt-pack existed
19:21 cyborg-one joined #salt
19:21 jfindlay we only just released it
19:21 murrdoc the formula installer ?
19:21 murrdoc its live /
19:22 murrdoc ?
19:22 jfindlay salt-pack is used to build salt and its deps
19:22 murrdoc get outta here
19:22 jfindlay totally
19:23 murrdoc https://github.com/saltstack/salt-pack
19:23 murrdoc its a saltstack state to build salt debs
19:23 murrdoc noice
19:25 scbunn cool
19:26 jfindlay and the repo generation as well, but there seems to be some trouble syncing to the public site
19:26 scott_w joined #salt
19:37 subsignal joined #salt
19:42 evle1 joined #salt
19:43 Thiggy Where's a good place to find DevOps people with salt experience for contract and FTE positions?
19:43 dyasny joined #salt
19:46 whytewolf ...
19:46 whytewolf well this channel has a few.
19:47 whytewolf you could do a search on linkedin for Salt. or a job posting that lists salt as a requirement
19:47 forrest Thiggy, Are you hiring remote?
19:47 forrest Or are you just a recruiter.
19:48 whytewolf ^^^
19:48 forrest I'm pretty sure I've seen you around before. But my memory for names is bad :(
19:50 scott_w joined #salt
19:51 forrest Thiggy, If you're hiring remote let me know since I'm currently looking
19:52 xmj +1
19:52 whytewolf +1
19:52 whytewolf :P
19:52 xmj always up for ansible/salt/python remote gigs :)
19:52 babilen +20k
19:52 forrest ansible? Mehhh
19:52 xmj it's a tool
19:52 forrest Why use Ansible when you can duplicate the same thing with salt-ssh in like an hour?
19:52 job don't be hating
19:52 forrest I know, but ssh execution is so slow
19:53 forrest I dislike salt-ssh for the same reason
19:53 job admitting to have knowledge about a topic shouldnt be cause for "meh"
19:53 job xmj, good you know multiple systems! makes you more valuable :)
19:53 forrest job, I know, I'm messing with xmj
19:53 xmj job: :p
19:53 forrest I know ansible too, doesn't mean I can't poke fun at it ;)
19:53 whytewolf lol
19:53 forrest Same way I poke fun at salt
19:54 whytewolf yeah but salt is an easy target :P
19:54 xmj so is pepper
19:54 * xmj runs
19:54 * forrest groans
19:56 forrest whytewolf, Very true.
19:56 iggy murrdoc: I don't like talking to you at the office, why would I want to go to a conf with you :P
19:57 forrest Ouch
19:57 Thiggy Sorry, standup meeting, yes, remote is an option for the right people (I work remote).
19:57 Thiggy @forrest ^^
19:57 forrest What's the company?
19:57 Thiggy Will take offline.
19:58 murrdoc iggy:  i aint going
19:59 iggy \o/
20:07 Thiggy_ joined #salt
20:11 intel joined #salt
20:14 Rumbles joined #salt
20:15 wryfi joined #salt
20:16 iggy murrdoc: don't think I am either (I didn't do a talk and I'm not a recruiting machine with extra money for that stuff ;)
20:17 forrest it's only 900
20:17 forrest a bargain
20:17 forrest Why do you think I didn't go last year iggy ;)
20:22 breshead joined #salt
20:22 murrdoc i wish salt had a 'streaming version' of their conf
20:22 murrdoc i would buy it for the talks, cant fly out to saltstack
20:22 murrdoc lil dude is too lil
20:22 whytewolf ^ +1
20:22 forrest Some of the talks get posted afterwards
20:23 forrest granted some still aren't posted, and the keynote still isn't available
20:23 forrest or it takes like 6 months
20:23 forrest but they DO eventually get posted in some form
20:28 iggy and some aren't worth posting
20:28 * iggy whistles innocently
20:30 traph joined #salt
20:34 murrdoc forrest: lets try for new year: u first agree with something i say
20:34 murrdoc then shit on it
20:34 murrdoc damnit it man
20:34 murrdoc :D
20:34 forrest I agree with things you say all the time
20:34 forrest Just not creating more unpaid work for me
20:36 giantlock joined #salt
20:37 murrdoc iggy is a very aggressive person. He/She attacked others 6 times.
20:37 murrdoc Poor murrdoc, nobody likes him/her. He/She was attacked 2 times.
20:37 murrdoc murrdoc1 wrote the shortest lines, averaging 28.3 characters per line.
20:37 murrdoc murrdoc was tight-lipped, too, averaging 29.6 characters.
20:37 murrdoc damn
20:37 murrdoc that murrdoc guy
20:37 murrdoc needs to write longer lines
20:37 murrdoc gawd that guy is an idiot, pyramid
20:39 Steven- joined #salt
20:41 perfectsine joined #salt
20:42 AnUserX joined #salt
20:42 ghanima joined #salt
20:43 ghanima Question... I am looking at some of the outstanding bugs that have been reported on saltstack and was looking to volunteer and see if there was anything I could try to fix on my own and submit... Is there any documentation on how you accept such help or assistance
20:44 AnUserX left #salt
20:44 Ryan_Lane ghanima: look in the github issues
20:44 Ryan_Lane there may be ones marked as easy or something along those lines
20:45 whytewolf ghanima: i think this is what you are looking for https://docs.saltstack.com/en/latest/topics/development/contributing.html
20:45 Ryan_Lane ghanima: and https://github.com/saltstack/salt/blob/develop/Contributing.rst
20:45 breshead ghanima: I am pretty sure you just clone the salt repo and start bug fixing then post back for pull requests of your fixes.
20:46 Ryan_Lane not sure why they have two different docs for that
20:46 nyx_ joined #salt
20:46 teryx510 joined #salt
20:47 ghanima I presume that anything marked as an issue is recognized as a bug by the core salt dev group
20:47 wryfi i'm having an issue with salt and elasticsearch
20:47 zmalone joined #salt
20:47 wryfi i currently have a watch set on my /etc/elasticsearch/elasticsearch.yml (which i am going to remove in the future, actually)
20:47 wryfi but whenever that file changes, and salt restarts the service
20:47 wryfi i end up with two running instances of es
20:47 wryfi this is on ubuntu trusty
20:47 wryfi manually restarting the service with `service elasticsearch restart` does not exhibit the same problem
20:48 whytewolf wryfi: that is odd. I never saw that behaviour before. gist your states. maybe we can find the breakdown
20:49 whytewolf also. os version?
20:49 wryfi ubuntu 14.04
20:49 wryfi i'll work on the gists <sigh>
20:50 wryfi not that there's anything unusual about them
20:50 Ryan_Lane ghanima: they use an "accepted" tag
20:50 whytewolf wryfi: just server.running with a watch i take it?
20:51 ghanima Ryan_Lane: thank you sir :)
20:51 Ryan_Lane ghanima: they also use a "bug" tag, I believe
20:52 wryfi whytewolf: yup. here's the gist https://gist.github.com/wryfi/cd356dba17e2c4b10db2
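[ed. note: in case the gist rots, a minimal sketch of the state shape under discussion -- service.running restarting elasticsearch whenever the watched config file changes; the source path is hypothetical:]
    /etc/elasticsearch/elasticsearch.yml:
      file.managed:
        - source: salt://elasticsearch/files/elasticsearch.yml   # hypothetical path

    elasticsearch:
      service.running:
        - enable: True
        - watch:
          - file: /etc/elasticsearch/elasticsearch.yml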
20:52 murrdoc forrest:  can u write a cookie cutter for saltstack formulas
20:52 murrdoc kthnx
20:53 murrdoc also drop whatever else u are doing
20:53 forrest murrdoc, There already is one
20:53 murrdoc there is a 'template-formula'
20:53 murrdoc not a cookiecutter
20:53 murrdoc https://github.com/audreyr/cookiecutter
20:53 murrdoc != https://github.com/saltstack-formulas/template-formula
20:53 forrest I'm not writing that
20:54 forrest Why are you trolling me today murrdoc?
20:55 murrdoc i do it everyday
20:55 murrdoc i just randomly troll the homies
20:55 forrest No wonder I didn't join the IRC for a couple days
20:56 whytewolf wryfi: salt version? just to make sure I'm looking at the right code
20:56 whytewolf murrdoc, just missed you forrest :P
20:57 wryfi 2015.8.3+ds-1
20:57 forrest Fair enough
20:57 wryfi @whytewolf ^
20:57 whytewolf thank you
20:57 wryfi thank you!
20:57 denys joined #salt
20:58 rem5 joined #salt
20:59 nethershaw joined #salt
20:59 whytewolf wryfi: what does 'salt 'es-minion' service.status elasticsearch' show?
20:59 whytewolf should be running. but ya never know for sure
21:00 brianfeister joined #salt
21:00 wryfi yeah, correctly returns True
21:00 kusams joined #salt
21:02 whytewolf does service.start start a new instance instead of returning that the service is already running?
21:04 wryfi oh good question
21:05 whytewolf thats why i get paid the big 0 bucks
21:06 wryfi whytewolf: yes indeed
21:06 fumito joined #salt
21:06 whytewolf it does start a new instance instead of reporting that it is already started?
21:06 wryfi however, `service elasticsearch start` does not
21:06 murrdoc whytewolf:  where do u live again
21:06 wryfi it properly outputs '* Already running'
21:06 whytewolf murrdoc: las vegas
21:07 wryfi whytewolf: yes
21:07 murrdoc wanna move to la ?
21:07 murrdoc convince iggy to refer your butt
21:07 whytewolf wryfi: okay. that is strange.
21:07 wryfi salt starts a new instance instead of reporting 'already started'
21:07 wryfi isn't it?
21:07 whytewolf but it helps narrow the problem
21:07 whytewolf murrdoc: I like vegas though.
21:07 quasiben joined #salt
21:08 murrdoc u win
21:09 whytewolf now i just need to remember which service module ubuntu uses. lol
21:10 whytewolf okay. if it is the debian service module, the command it uses is 'start elasticsearch'
21:11 whytewolf hmm or /etc/init.d/elasticsearch start
21:11 whytewolf ...
21:12 jaybocc2 it should use init or upstart
21:12 pdayton joined #salt
21:12 whytewolf jaybocc2: I was reading the salt source code :P
21:12 jaybocc2 https://github.com/saltstack/salt/blob/2015.8/salt/modules/upstart.py
21:12 murrdoc whats a way to wget a url in salt
21:12 murrdoc but not have the hash hand
21:12 murrdoc handy*
21:13 pcn wget
21:13 jaybocc2 probably cmd.run
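[ed. note: a sketch of the cmd.run workaround being suggested, with a hypothetical URL and target path; in this era file.managed wanted a source_hash for remote http(s) sources, hence the dodge. the unless guard keeps the download from re-running once the file exists:]
    fetch_archive:
      cmd.run:
        - name: wget -O /opt/thing.tar.gz https://example.com/thing.tar.gz
        - unless: test -f /opt/thing.tar.gz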
21:13 rem5 joined #salt
21:13 whytewolf jaybocc2: ahh completely forgot, upstart returns that it is 'service'
21:13 whytewolf [one of the few that don't have service in the name]
21:14 fumito joined #salt
21:14 murrdoc hate it
21:15 whytewolf upstart uses cmd.retcode 'service elasticsearch start' python_shell=False
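[ed. note: one way to reproduce the discrepancy by hand, assuming a hypothetical minion id es-minion -- run the exact command the upstart module uses, then compare with the module's own status check:]
    salt 'es-minion' cmd.retcode 'service elasticsearch start' python_shell=False
    salt 'es-minion' service.status elasticsearch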
21:17 scott_w joined #salt
21:18 Crazy67 joined #salt
21:18 fumito join #C
21:19 whytewolf segmentation fault (core dumped)
21:19 nethershaw joined #salt
21:19 teryx510 joined #salt
21:20 zmalone https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.pepa.html#module-salt.pillar.pepa really
21:20 zmalone salt n pepa
21:21 whytewolf zmalone: there is an api library for salt called pepper. and you have a problem with an ext_pillar called pepa?
21:21 whytewolf :P
21:22 zmalone https://www.youtube.com/watch?v=vCadcBR95oU
21:22 whytewolf no one ever said salt isn't puntastic
21:23 brianfeister joined #salt
21:24 rem5 joined #salt
21:35 Thiggy joined #salt
21:35 Rumbles joined #salt
21:36 joshin joined #salt
21:58 qman__ joined #salt
22:01 intr1nsic joined #salt
22:01 jamesp9 joined #salt
22:02 pdayton1 joined #salt
22:02 qman__ joined #salt
22:03 qman__ joined #salt
22:03 dustywusty joined #salt
22:03 okfine joined #salt
22:05 Guest95048 joined #salt
22:09 intr1nsic joined #salt
22:09 pdayton joined #salt
22:09 murrdoc forrest:  https://github.com/saltstack-formulas/packer-formula/pull/1
22:09 saltstackbot [#1]title: First pass of Packer formula |
22:09 murrdoc its UGLY
22:09 murrdoc cos u know thats how i like it
22:10 scott_w joined #salt
22:10 wryfi whytewolf: so it sounds like the upstart job isn't returning the right value?
22:10 bhosmer joined #salt
22:10 whytewolf wryfi: thats what it sounds like
22:11 wryfi grr, and of course it's all going to get replaced by systemd in a few months.
22:11 wryfi well, i'm just going to remove the watch, it's not the behavior i really want anyway
22:11 pmcnabb joined #salt
22:11 * wryfi wonders if this is a bigger upstart bug or specific to this init script
22:12 whytewolf most likely that init script. cause like i said i haven't seen this throughout the rest of ubuntu
22:12 murrdoc who is this woodya person
22:13 babilen woodya?
22:14 forrest Fine by me, merged.
22:15 * babilen just had a few nitpicks
22:15 babilen Ah, ta! :)
22:15 linjan joined #salt
22:16 babilen murrdoc: woodya is apparently me ;)
22:16 murrdoc woodya backoff!
22:16 murrdoc :D
22:16 murrdoc i knew it was you
22:16 murrdoc i just wanted to make a terrible joke
22:16 murrdoc and thank u for looking babilen
22:16 murrdoc i am going to travis it up soon
22:17 babilen Okay, good night everyone .. enjoy hogmanay and see you in 2016
22:17 pmcnabb joined #salt
22:18 murrdoc babilen:  happy new year
22:18 murrdoc take care
22:18 whytewolf happy new year babilen
22:20 xenoxaos- joined #salt
22:22 AndreasLutro joined #salt
22:22 RobertChen117 joined #salt
22:24 lompik joined #salt
22:24 wt joined #salt
22:24 kidneb joined #salt
22:25 MaZ- joined #salt
22:31 brianfeister joined #salt
22:47 lompik joined #salt
22:49 wryfi here's a good jinja problem
22:50 wryfi i have a list of es servers in a pillar
22:50 wryfi ['es1', 'es2', 'es3', 'es4']
22:50 wryfi and i need to output a list of es servers in a config file, but append a port number to them
22:50 wryfi like ['es1:80
22:51 wryfi like ['es1:80', 'es2:80', 'es3:80', 'es4:80']
22:51 kidneb joined #salt
22:51 wryfi in regular python i would use a list comprehension, e.g.  [server + ':80' for server in es_servers]
22:51 wryfi but jinja doesn't support those
22:52 xenoxaos joined #salt
22:52 wryfi any other concise way of doing this that i'm missing?
22:53 MaZ- joined #salt
22:55 wryfi whytewolf? anybody?
22:57 geekatcmu you can't just do it in a loop?
22:57 AndreasLutro {% set my_list = [] %} {% for item in other_list %} {% do my_list.append(item ~ ':80') %} {% endfor %}
22:57 geekatcmu http://jinja.pocoo.org/docs/dev/templates/#list-of-control-structures
22:57 lorengordon wryfi: the only way i know how in jinja would be to create a new list, nvm, what AndreasLutro said
22:58 geekatcmu You're writing to a config file.  Use a loop.
22:58 wryfi AndreasLutro: didn't quite realize i could do that, i'll give it a try
22:58 wryfi thanks guys
22:58 geekatcmu There's no point to creating a new list, and almost certainly it's a bad idea, because you'll later need to re-use the now-altered values.
22:58 lorengordon does append() work though? if not, try extend() instead
22:59 lorengordon no, the original values are still in other_list
23:02 colegatron joined #salt
23:04 wryfi hmm, AndreasLutro that doesn't seem to work
23:05 TyrfingMjolnir joined #salt
23:05 AndreasLutro can you be more specific? ;)
23:06 geekatcmu wryfi: you're heading the wrong way.
23:06 geekatcmu http://pastebin.com/eM7hFuCU
23:06 tristianc_ joined #salt
23:06 geekatcmu that's from my zoo.cfg
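[ed. note: the paste is gone; a plausible sketch of the per-line loop geekatcmu is advocating for a zoo.cfg server list, with a hypothetical pillar key (not the actual paste):]
    {% for server in pillar.get('zookeeper_servers', []) %}
    server.{{ loop.index }}={{ server }}:2888:3888
    {% endfor %}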
23:07 wryfi geekatcmu: the output needs to look like a python list in a single line: ['es1:80', 'es2:80' ...]
23:08 wryfi AndreasLutro: {% set hosts = [] %}{% for host in pillar['network'][grains['domain']]['elasticsearch'] %}{% do hosts.append(host + ':80' %}{% endfor %}    <======================
23:08 wryfi oh wait
23:08 wryfi i see it now
23:08 whytewolf missing )
23:08 * wryfi nods
23:09 wryfi sweet, all good now
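[ed. note: for reference, the working version of wryfi's snippet with the missing closing paren restored; Salt enables jinja's do extension by default, so the append works as written:]
    {% set hosts = [] %}
    {% for host in pillar['network'][grains['domain']]['elasticsearch'] %}
    {% do hosts.append(host + ':80') %}
    {% endfor %}
    {# {{ hosts }} then renders as ['es1:80', 'es2:80', ...] #}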
23:09 geekatcmu wryfi: and you can't do a loop in a single line because ...?
23:10 wryfi it's much more readable to construct the list first and then insert it directly (to my eyes)
23:11 geekatcmu http://pastebin.com/sqACrYuq
23:12 geekatcmu Yes, that'll leave a trailing comma, but that's fine syntactically.
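[ed. note: both pastebins are gone; a sketch of the single-line-loop alternative being described -- emitting the list literal directly, trailing comma and all:]
    hosts: [{% for host in pillar['network'][grains['domain']]['elasticsearch'] %}'{{ host }}:80', {% endfor %}]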
23:13 wryfi depends on the language ;)
23:15 geekatcmu You said "python list".
23:15 geekatcmu I know python
23:15 geekatcmu trailing comma is valid in a list
23:15 baweaver joined #salt
23:15 whytewolf ES isn't python. it is cranky java
23:15 geekatcmu anyway, style arguments are stupid unless we're working together.
23:17 lemur joined #salt
23:17 wryfi ;)
23:18 wryfi geekatcmu: i said it needs to "look like" a python list, not be one ;)
23:26 lorengordon anyone feel like schooling a relative python noob? i've finished tweaking my first salt custom execution module and would love to have it torn apart...
23:26 lorengordon https://gist.github.com/lorengordon/a1d1c08e8587d92d3612
23:29 zmalone joined #salt
23:53 wt lorengordon, the comments for the libs at the top seem self-explanatory
23:53 wt In general, docstrings usually use triple double quotes instead of single quotes.
23:54 wt I don't really see any reason to be using raw strings for the docstrings.
23:55 amcorreia joined #salt
23:55 Arbusto joined #salt
23:55 wt lorengordon, the code itself looks okay. The style is not what I am used to.
23:56 wt lorengordon, seems reasonable
23:57 mage_ what's the best way to have a file.managed where the content is just a list of key="value"\n where key and value come from the pillar ?
23:57 lorengordon wt: thanks, i'm open to style nitpicks as well
23:57 mage_ I have - contents: | and then a {% for ... %} but I wonder if it's the best solution
23:58 wt mage_, are you talking about a json or yaml encoded file or just literally key=value on each line?
23:59 mage_ just literally key="value" on each line (it's to fill /etc/rc.conf)
23:59 mage_ something like https://gist.github.com/silenius/b727f85d38faa92901e9
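[ed. note: a sketch of the pattern mage_'s gist describes, assuming a hypothetical pillar key rc_conf holding the key/value pairs:]
    /etc/rc.conf:
      file.managed:
        - contents: |
            {%- for key, value in pillar.get('rc_conf', {}).items() %}
            {{ key }}="{{ value }}"
            {%- endfor %}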
