
IRC log for #salt, 2015-05-21


All times shown according to UTC.

Time Nick Message
00:00 murrdoc u wrote the connection code ?
00:00 baweaver the 'We dont serve your kind here' error was a pain to figure out
00:00 baweaver the API client
00:00 baweaver referenced above
00:00 baweaver rwaterbury just doesn't know my alias on here, I'm sitting a few seats down from them
00:01 murrdoc oh
00:01 murrdoc like in the same office ?
00:01 baweaver Rails project, so I wrote the Ruby API
00:01 baweaver yeah
00:01 murrdoc u guys should like get along
00:01 baweaver API Client*
00:01 baweaver We do, but they're on this one
00:02 murrdoc my first step would have been
00:02 baweaver I actually suggested she ask IRC on that one since I had no idea.
00:02 ageorgop joined #salt
00:02 murrdoc ah
00:02 baweaver or I would have tried to help a bit more :/
00:03 scbunn joined #salt
00:03 murrdoc well basically get the low state for anything u want
00:03 murrdoc and post it over
00:03 murrdoc it works
00:03 murrdoc (tm)
00:03 baweaver yeah, you can send lowstates / etc easy enough. I just didn't know the node removal commands
00:03 giantlock joined #salt
00:04 iggy the saltpad docs have a brief primer on setting up salt-api
00:04 murrdoc the key module doesnt have a remove
00:04 murrdoc it has finger()
00:05 baweaver If we can get legal to stop being a pain, we'll open source the rubygem client for salt-api
00:06 murrdoc hax
00:06 druonysuse joined #salt
00:06 murrdoc next time start your client on github
00:06 murrdoc fork to internal repo
00:06 murrdoc do work
00:06 murrdoc push back up to fork
00:06 baweaver Rules :/
00:07 murrdoc yup
00:07 murrdoc stink
00:08 Gareth just give it to murrdoc, he's good at pretending where things came from :) "Oh. Guy dropped a USB key with the code off the back of a truck."
00:08 murrdoc true
00:08 murrdoc or false
00:08 murrdoc we ll never know
00:08 Gareth or we already do.
00:09 baweaver Sneak up to SF, rwaterbury won't tell ;)
00:09 murrdoc too far
00:09 murrdoc too cold
00:09 murrdoc also supposedly the florence of our times
00:09 murrdoc great food tho
00:09 murrdoc mebbe i should
00:09 alexanderilyin joined #salt
00:10 pdx6 hey folks, I have a question on salt.modules.freebsd_sysctl
00:10 pdx6 http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.freebsd_sysctl.html
00:10 Gareth road trip!
00:10 pdx6 in my sys file, I'm not sure if I am formatting things right
00:10 murrdoc Gareth:  i am back in la in june
00:10 murrdoc lets go free some good code
00:10 murrdoc from the clutches of tyranny
00:11 Gareth only if we can do it Blues Brothers style.
00:11 smcquay joined #salt
00:11 pdx6 http://pastebin.com/AKHLbR3J
00:11 murrdoc tis the only way
00:11 scbunn joined #salt
00:15 chingadero joined #salt
00:15 smcquay joined #salt
00:16 smcquay joined #salt
00:17 scbunn joined #salt
00:18 debian112 left #salt
00:21 cheus_away Are there any objects in jinja's default (salt) context that have access to the current job cache / log?
00:23 iggy cheus_: salt.modules.saltutil
00:24 timoguin joined #salt
00:24 chingadero left #salt
00:24 chingadero joined #salt
00:25 chingadero left #salt
00:25 cheus_ Thanks iggy -- I looked at that but didn't think it'd serve the purpose. Was looking for a nice way to send error messages when templating a file if the source data was in the wrong format.
00:25 cheus_ Instead of the good old failed to render sls
00:26 cheus_ or jinja failure
00:27 iromli joined #salt
00:29 iggy if it's something common, you can use the test states and some jinja conditionals to fail out somewhat gracefully
00:30 iggy but I'd hate to litter all my states with a bunch of error checking to save users who can't write pillars correctly
00:34 cheus_ Oh! I didn't think of doing it at the state level! That's a good idea. I was originally thinking of doing so in the templated file.
00:35 cheus_ iggy, I'd normally agree (re: clutter) but as states get more dynamic, pillars get more complicated so a little sanity checking isn't always a bad thing, eg, ensuring context XYZ is a list of two-item tuples
00:36 scoates joined #salt
00:38 joehh TaiSHi: yes
00:40 iggy cheus_: like I said, if it's something common, go for it, but I'd hate to have my state files be a string of "if something test.fail else do useful stuff" repeated over and over
00:40 iggy but as I always say... do whatever works best for you
00:41 cheus_ I think what would really work best for me is to just develop a habit of pure-python states
00:42 iggy ^5
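A minimal sketch of the pattern iggy describes above — bail out with a test state when the pillar data is malformed, instead of letting the template blow up with "failed to render sls" (the pillar key and file paths here are made up):

```sls
{% set listeners = salt['pillar.get']('listeners') %}
{% if listeners is string or listeners is not iterable %}
bad_listeners_pillar:
  test.fail_without_changes:
    - name: "pillar 'listeners' must be a list, got: {{ listeners }}"
{% else %}
configure_listeners:
  file.managed:
    - name: /etc/app/listeners.conf
    - source: salt://app/listeners.conf.jinja
    - template: jinja
{% endif %}
```

The `test.fail_without_changes` state fails the run with a readable message rather than a jinja traceback.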
00:48 bones050 joined #salt
00:54 druonysuse joined #salt
00:54 druonysuse joined #salt
00:58 kusams joined #salt
01:02 aw110f joined #salt
01:03 aw110f I’m using gtfs on salt 2014.1.13
01:03 druonysus joined #salt
01:03 druonysus joined #salt
01:04 iggy why such an old version?
01:04 aw110f I’m using gtfs on salt 2014.1.13 and for some reason the remotes are not updating
01:04 aw110f haven’t had the chance to upgrade
01:04 iggy salt-run -l debug fileserver.update
01:05 viq joined #salt
01:06 druonysus joined #salt
01:06 druonysus joined #salt
01:06 aw110f [DEBUG   ] Updating fileserver cache
01:06 aw110f [DEBUG   ] diff_mtime_map: the maps are the same
01:07 aw110f i tried deleting the cache from the master
01:07 aw110f and restarted the salt-master service
01:08 iggy I'm assuming it outputs more than 2 lines, mind gist'ing that?
01:08 Singularo joined #salt
01:08 cromark joined #salt
01:09 enarciso joined #salt
01:11 yexingok joined #salt
01:12 aw110f https://gist.github.com/wongster80/1646eb078f4f76ae9751
01:13 hasues left #salt
01:21 aw110f Hi iggy: any hints?
01:24 andrej thanks for making me read/google iggy ... when did _grains get introduced?
01:24 dimeshake joined #salt
01:28 philipsd6 Dumb question maybe... what do the P1, P2 .. P4 labels on the github issues mean?
01:29 aurynn I'd guess priority?
01:29 aurynn but that's just a guess
01:30 philipsd6 Oh that makes sense. Good guess. So my issue moving from P4 to P2 would be a good thing for me. :D
01:31 ITChap joined #salt
01:31 julez joined #salt
01:42 ALLmightySPIFF joined #salt
01:47 otter768 joined #salt
01:48 codekobe joined #salt
01:49 mrbigglesworth joined #salt
01:50 neogenix joined #salt
01:53 salty_to_the_cor joined #salt
01:53 beauby joined #salt
01:54 __number5__ andrej: pre-0.17...
01:55 salty_to_the_cor when do gitfs and svnfs reload data? on every highstate? or at some periodic interval?
01:58 andrej thanks __number5__
01:58 andrej Not sure how I missed that when I first started, then
01:58 andrej :}
01:59 desposo joined #salt
01:59 andrej On an unrelated topic ... when using the API ... where in the doco do I find out how to utilise package management?
02:00 iggy aw110f: how are you determining they arent updating?
02:04 iggy salty_to_the_cor: every minute (by default)
02:04 aw110f iggy: git commits on the remote is not updating the file.managed file. in /var/cache/salt/master/gitfs/refs/master on the master keeps pulling an old version of the committed file
02:06 salty_to_the_cor iggy: any way we can change that value?
02:06 aw110f i delete the file in /var/cache/salt/master/gitfs/refs/master/salesforce-ingest/file re-run fileserver.update .  an old version keeps coming back
02:07 writtenoff joined #salt
02:07 iggy aw110f: if you cp.get_file does it show the old version?
02:07 aw110f iggy: when i do a separate new git clone on the repo, i see the new file
02:07 iggy salty_to_the_cor: not without impacting other things
02:08 beauby joined #salt
02:08 salty_to_the_cor is there any documentation pointing to this? or some code snippet?
02:09 iggy salty_to_the_cor: loop_interval
02:10 salty_to_the_cor thanks
02:11 iggy salty_to_the_cor: note that also impacts the scheduler, so if you raise that too much, the scheduler may not run correctly
02:12 salty_to_the_cor ok, point noted
02:12 aw110f iggy: yes, when running salt-call cp.get_file "salt://salesforce-ingest/home/t/salesforce-ingest/conf/sla_sf.properties" /tmp/file_salesforce
02:12 cedwards in jinja templating I can do something like {{ salt['grains.get']('ip_interfaces:eth0')[0] }}
02:13 cedwards is there any way to replicate that (particularly the [0]) from the cli?
02:13 iggy cedwards: does ip_interfaces:eth0:0 not work?
02:13 iggy aw110f: what? that shows the correct or incorrect contents?
02:13 cedwards oh.. i guess that's the one permutation i didn't try
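iggy's `ip_interfaces:eth0:0` suggestion works because Salt resolves colon-delimited paths segment by segment, treating integer segments as list indexes. A rough stdlib-only sketch of that lookup, modeled loosely on Salt's traverse helper (the grain data below is made up):

```python
def traverse(data, path, delimiter=":"):
    """Walk a nested dict/list structure by a colon-delimited path.

    Integer segments index into lists, so 'ip_interfaces:eth0:0'
    reaches the first address on eth0.
    """
    for segment in path.split(delimiter):
        if isinstance(data, list):
            data = data[int(segment)]  # list: segment is an index
        else:
            data = data[segment]       # dict: segment is a key
    return data

# Made-up grain data for illustration
grains = {"ip_interfaces": {"eth0": ["10.0.0.5", "fe80::1"]}}
print(traverse(grains, "ip_interfaces:eth0:0"))  # -> 10.0.0.5
```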
02:14 aw110f iggy: it shows the incorrect content
02:14 aw110f showing the old version of the file
02:15 iggy aw110f: and gitfs is set to pull the correct branch?
02:15 aw110f yes
02:16 blast_hardcheese joined #salt
02:16 * iggy got nothing
02:16 cwyse joined #salt
02:18 aw110f iggy: i happen to have a branch that was cut with the new change on the file and ran cp.get_file saltenv=PDSOPS-2020 and that shows the new content i need
02:20 aw110f so it seems the master is only failing to update the master branch.  I’m going to put another file or update on the master branch to see if changes show up
02:21 iggy ENOPARSE
02:26 aw110f iggy: how does the minion and master cache files?
02:27 iggy separately... in /var/cache/salt/... I'm not really sure how to answer that question
02:28 nikogonzo does anyone feel like they have a clever user management framework in salt and willing to share details?
02:30 kusams joined #salt
02:33 iggy users-formula? ldap?
02:37 beauby joined #salt
02:41 aw110f iggy: I tried changing the file in the cache directory of the master in /var/cache/salt/master/gitfs/refs/master/salesforce-ingest/home/t/salesforce-ingest/conf/sla_sf.properties and ran cp.get_file, it still shows me the old content
02:43 nikogonzo iggy yeah, sorry - I meant straight up local user/group creation
02:43 power8ce joined #salt
02:44 evle joined #salt
02:49 druonysus joined #salt
02:51 favadi joined #salt
02:58 donmichelangelo joined #salt
03:06 mapu joined #salt
03:08 cedwards does anyone have a good example of using 'onchanges'? The documentation isn't clear to me
03:08 cedwards "The onchanges requisite makes a state only apply if the required states generate changes, and if the watched state's "result" is True. This can be a useful way to execute a post hook after changing aspects of a system."
03:08 mrbigglesworth joined #salt
03:09 beauby joined #salt
03:11 iggy cedwards: it's like watch, but the states have to be True
03:12 cedwards iggy: I guess I assumed a watch expected a True result..
03:13 iggy just changes
03:15 cedwards sweet. that seems to do exactly what I need it to.
03:15 cedwards for whatever reason watch was still executing the function every time
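For reference, a minimal sketch of the onchanges shape cedwards landed on — the command fires only when the required state actually reports changes (names, paths, and the reload command are made up):

```sls
/etc/myapp/app.conf:
  file.managed:
    - source: salt://myapp/app.conf

reload_myapp:
  cmd.run:
    - name: service myapp reload
    - onchanges:
      - file: /etc/myapp/app.conf
```

If `file.managed` makes no changes, `reload_myapp` is skipped entirely.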
03:36 elektrix joined #salt
03:36 druonysus joined #salt
03:36 druonysus joined #salt
03:40 LyndsySimon joined #salt
03:47 UForgotten joined #salt
03:48 UForgotten joined #salt
03:54 mosen joined #salt
04:07 cowpunk21 joined #salt
04:10 mrbigglesworth joined #salt
04:21 soren joined #salt
04:27 dalexander joined #salt
04:28 rhodgin joined #salt
04:29 cromark joined #salt
04:29 cowpunk21 joined #salt
04:34 golodhrim|work joined #salt
04:39 enarciso joined #salt
04:40 ramaseshan joined #salt
04:40 aw110f joined #salt
04:41 ramaseshan joined #salt
04:44 aw110f_ joined #salt
04:57 TyrfingMjolnir joined #salt
04:59 mrbigglesworth joined #salt
05:02 mrbigglesworth joined #salt
05:08 EWDurbin joined #salt
05:09 julez joined #salt
05:11 dopesong joined #salt
05:13 jalaziz joined #salt
05:20 cberndt joined #salt
05:21 cberndt joined #salt
05:22 mrbigglesworth joined #salt
05:23 rdas joined #salt
05:24 julez joined #salt
05:27 linjan joined #salt
05:29 alexanderilyin joined #salt
05:38 mrbigglesworth joined #salt
05:49 otter768 joined #salt
05:59 joeto joined #salt
06:04 eliasp joined #salt
06:06 mrbigglesworth joined #salt
06:07 colttt joined #salt
06:22 impi joined #salt
06:27 JayFK joined #salt
06:28 hojgaard joined #salt
06:29 hojgaard Anyone here who has experience in using the salt python api?
06:30 CeBe joined #salt
06:32 flyboy joined #salt
06:33 mrbigglesworth joined #salt
06:43 dingo joined #salt
06:43 hefp joined #salt
06:43 Mate joined #salt
06:43 Mate joined #salt
06:43 rogst joined #salt
06:43 seb` joined #salt
06:43 Norrland joined #salt
06:43 drags joined #salt
06:43 g3cko joined #salt
06:43 ropes joined #salt
06:43 sevalind joined #salt
06:43 nadley joined #salt
06:43 cheus joined #salt
06:43 dec joined #salt
06:43 steve__ joined #salt
06:43 manfred joined #salt
06:43 emid joined #salt
06:43 msciciel joined #salt
06:43 SaveTheRbtz joined #salt
06:43 seev joined #salt
06:44 jY- joined #salt
06:44 pmcg joined #salt
06:47 dean joined #salt
06:48 shadowsun joined #salt
06:48 mrbigglesworth joined #salt
06:53 KermitTheFragger joined #salt
06:58 eseyman joined #salt
06:59 Berty_ joined #salt
06:59 kawa2014 joined #salt
07:01 hojgaard None here with python api experience?
07:03 favadi please just ask the question, don't ask to ask
07:10 lb1a joined #salt
07:13 favadi left #salt
07:13 Auroch joined #salt
07:18 dopesong_ joined #salt
07:20 hojgaard I am just starting to try to use the salt python api, and i want to use a python variable to run a cmd to salt. I have an example here: http://pastebin.com/cNH0taeL
07:21 hojgaard Putting in the variable does not work as intended. The salt client does not see it as a variable..
07:22 julez joined #salt
07:22 hojgaard Furthermore i want to be able to sort minions like i do with the bash command, eg: salt -G 'os:Debian' cmd.run 'which ls' but i do not know how to do that in python...
07:23 fredvd joined #salt
07:29 iromli joined #salt
07:32 slav0nic joined #salt
07:41 is_null hi all, how to start salt-minion manually so that it doesn't take over a pdb breakpoint ? as you can see here i can't input anything in pdb: http://dpaste.com/117658B
07:41 Guest57 joined #salt
07:42 julez joined #salt
07:42 chiui joined #salt
07:45 stanchan joined #salt
07:48 favadi joined #salt
07:50 otter768 joined #salt
07:50 stanchan joined #salt
07:51 stoogenmeyer joined #salt
07:53 jeanneret joined #salt
07:53 rsimpkins joined #salt
07:53 paha joined #salt
07:54 david_an11 joined #salt
07:59 giantlock joined #salt
07:59 CeBe joined #salt
08:01 david_an11 joined #salt
08:03 markm joined #salt
08:06 cedwards joined #salt
08:07 nethershaw joined #salt
08:11 N-Mi joined #salt
08:14 lietu joined #salt
08:16 cedwards joined #salt
08:17 al joined #salt
08:17 losh joined #salt
08:28 Berty_ joined #salt
08:30 saifi joined #salt
08:31 Grokzen joined #salt
08:35 s_kunk joined #salt
08:35 s_kunk joined #salt
08:35 stephanbuys joined #salt
08:39 MatthewsFace joined #salt
08:43 TheHelmsMan joined #salt
08:44 denys joined #salt
08:50 Emkeh joined #salt
08:50 dopesong joined #salt
08:51 aze_ joined #salt
08:55 linjan joined #salt
08:58 fredvd joined #salt
08:58 hojgaard joined #salt
09:04 hojgaard using the Python salt api salt.client.LocalClient(), how can i sort by grains? like i do with the shell command salt -G os:Debian ?
09:07 VSpike Is this working as expected? https://bpaste.net/show/99fe8b2baf34
09:08 VSpike I did a highstate on a new minion from the master and it returned saying minion not connected. However, the minion appears to be running a highstate. Subsequent attempts say a job is running, but also says "Data failed to compile"
09:09 www-BUKOLAY-com joined #salt
09:15 stoogenmeyer hi all, how would I go about installing salt but having it disabled until a later date?
09:18 rofl____ if i try to do a publish.publish and i get The following keyword arguments are not valid...what am i doing wrong?
09:18 rofl____ it works fine on the salt master
09:20 jeanneret joined #salt
09:21 jeanneret Hi I search a way to do a break in a sls file is it possible? (I didn't find anything on the Web)
09:23 ksj I have a template file where I need to list the ips of all minions that have a certain grain. What's the best way of doing that?
09:23 jay_d joined #salt
09:24 Sacro jeanneret: break?
09:24 jeanneret Sacro: pause
09:26 Sacro hojgaard: ('os:Debian', cmd, args, timeout, 'grain')
09:26 Sacro or just ('os:Debian', cmd, arg, expr_form='grains')
09:27 Sacro s/grains/grain/
09:27 hojgaard Sacro, i will try it thanks :)
09:27 Sacro jeanneret: cmd.run sleep?
09:27 jeanneret I will try thank you
09:28 Sacro hojgaard: though LocalClient seems a bit broken in 2015.5.0
09:28 Sacro jeanneret: depends what you're trying to acheive
09:29 jeanneret I apply Win Update then I restart the server and I want to know when the server restart I do a ping but the ping goes before the server can shutdown
09:30 Sacro Ah, you could set a listener for it coming back onilne
09:32 hojgaard Sacro, i am trying local.cmd('os:Debian', 'grains.item', ['server_id'], expr_form='grains'), but it returns "No minions matched the target"
09:32 Sacro grain
09:33 Sacro singular
09:33 Sacro :)
09:36 hojgaard perfekt Sacro.. You're the (wo)man!!
09:38 Sacro I am no woman!
09:39 hojgaard Sory thats why i used (wo) :) Sacro, do you know if there is a way that i can output ONLY the server_id?, and not like this: {'hostname.something.com': {'server_id': 203714127}}
09:40 c10b10 joined #salt
09:44 ksj anyone?? sorry to be a nuisance, I just want to know if I'm even approaching this in the right way
09:45 chiui joined #salt
09:45 supersheep joined #salt
09:46 PI-Lloyd ksj, if the list is being used as a config somewhere else, you could use salt mine
09:48 PI-Lloyd a load balancer config for example, it's how I manage our haproxy configs and get the IP addresses of certain systems that need to be balanced
09:48 ksj PI-Lloyd: is there no way of looping over al machines that have set grains without mine? I thought mine was more for sharing ips/information between minions
09:48 PI-Lloyd Not sure on that one
09:49 ksj http://stackoverflow.com/questions/26046478/how-do-i-use-the-ip-addresses-of-machines-matching-a-grain-in-a-salt-state-file - this is similar to what I want to do, but the solution proposed with mine is a little overkill....or maybe that's just how it's done
09:50 otter768 joined #salt
09:54 supersheep joined #salt
09:56 cromark joined #salt
09:56 evle joined #salt
10:06 PI-Lloyd ksj: what exactly are you trying to do? Is this for a config/firewall or is it for something else?
10:11 ksj PI-Lloyd: for a config file on a deployment machine. it needs a list of all ips that have a certain role
10:12 PI-Lloyd in that case mine is the only way
10:12 PI-Lloyd afaik
10:12 nethershaw joined #salt
10:14 ksj PI-Lloyd: yeah, I'm looking into it now. It's not as bad as I thought. Should do the trick, thanks.
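The usual shape of the mine approach PI-Lloyd describes: have the targeted minions publish a mine function, then pull the results into the template with `mine.get`. Grain values and the template body here are illustrative:

```sls
# 1) pillar (or minion config) for the minions that should publish:
mine_functions:
  network.ip_addrs: []

# 2) in the template rendered on the deployment machine:
{% for minion, addrs in salt['mine.get']('role:webserver',
                                         'network.ip_addrs',
                                         expr_form='grain').items() %}
server {{ minion }} {{ addrs[0] }}:8080
{% endfor %}
```

The mine refreshes on an interval (`mine_interval`), so newly added minions show up without touching the template.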
10:14 david_an11 joined #salt
10:14 Pulp joined #salt
10:15 PI-Lloyd No worries.
10:16 ndrei joined #salt
10:17 julez joined #salt
10:24 vincent_vdk joined #salt
10:29 TheHelmsMan joined #salt
10:36 TheHelmsMan joined #salt
10:38 JDog joined #salt
10:50 elfixit joined #salt
10:51 giantlock joined #salt
10:52 richi_ joined #salt
10:53 richi_ Is there anyway to change cherrypy server in salt-api?
10:54 zipkid Is there a description, how/why, of the Git repo structure for https://github.com/saltstack-formulas/ ?
10:55 richi_ I am specifically asking about tornado and cherrypi rest api. When we run salt-api, these servers are ran automatically. Anyway to change the servers by myself?
10:55 gladiatr joined #salt
10:59 richi_ Are there more configurations for rest_tornado?
11:00 PI-Lloyd richi_: you can set which netapi mopdule to load in the master config
11:01 PI-Lloyd http://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_tornado.html
11:01 amcorreia joined #salt
11:02 richi_ I understand. But how do I run, say, my personal tornado/cherrypy app?
11:03 geronime joined #salt
11:06 nper joined #salt
11:07 PI-Lloyd the docs are not very helpful on tht one - http://docs.saltstack.com/en/latest/topics/netapi/writing.html
11:08 PI-Lloyd Maybe someone else will be able to answer this one better than I can
11:08 evle1 joined #salt
11:09 teohhanhui joined #salt
11:11 richi_ There's no way to use my personal cherry.py or whatever?
11:12 teohhanhui I'm trying to remount root ("/") with "acl" option.. without a fixed "device" (e.g. "dev/mapper/..." when using lvm, "UUID=..." when not), how do I go about creating the "mount.mounted" state?
11:13 teohhanhui I can get the other fields by passing /etc/fstab through awk, but how to feed that into a salt state?
11:17 teohhanhui Am I supposed to use grains for this?
11:18 JDog renderer question: One of the pytho files in the project I'm deploying contains all the packages that should be downloaded into a static area on the provisioned box. So I need to extract the filenames (easy) and then wget (maybe? this does work) them onto the target. Should I use the python or pyobject renderer to do that?
11:21 c10b10 how can i trigger a cmd.run command only after another cmd.run command is done executing?
11:22 c10b10 tried making different states for them and them adding - require: - cmd: cmd_name
11:22 c10b10 but it doesn't seem to work
11:28 bluenemo joined #salt
11:28 bluenemo joined #salt
11:31 bash124512 any screenshots of the saltstack enterprise GUI ? :)
11:31 PI-Lloyd c10b10: try a cmd.wait
11:31 c10b10 PI-Lloyd: trying it now
11:32 PI-Lloyd cmd.wait + watch on the cmd you want to run after
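The cmd.wait + watch pairing PI-Lloyd suggests looks roughly like this — the second command only fires after the first runs and reports changes (state ids and script paths are made up):

```sls
first_command:
  cmd.run:
    - name: /usr/local/bin/build.sh

second_command:
  cmd.wait:
    - name: /usr/local/bin/deploy.sh
    - watch:
      - cmd: first_command
```

Unlike a plain `require`, `cmd.wait` does nothing on its own; it only executes when a watched state changes.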
11:33 ksj I'm trying to create a list and append values to it within a jinja for loop. I'm using ips.append(ip), where ips is the name of the list and ip is a valid variable, but it's crashing on writing the template
11:37 denys joined #salt
11:39 nicolerenee joined #salt
11:39 PI-Lloyd ksj: here's how I do it for our mesos cluster - https://www.refheap.com/82f351821424f48b2b0469767
11:43 nexus joined #salt
11:43 PI-Lloyd obviously we have extra pillar data added in for the port numbering, but you get the general idea
11:43 refnode joined #salt
11:44 amcorreia joined #salt
11:44 richi_ Pi_lloyd, do you know much about cherrypy deployment?
11:44 ksj PI-Lloyd: thanks, it works now, I was missing the "do" keyword.....wow. "do". that's pretty horrible
11:45 PI-Lloyd richi_: not much, we have one of our salt masters running it but it's not custom in any way
11:45 richi_ Is there way to make it custom?
11:45 richi_ I don't think that netap/writing is custom at all.
11:46 richi_ Seems like it's just default by setting for maybe curl call?
11:46 PI-Lloyd not sure, I've not looked into customising it as it does what we need it to already
11:47 joren joined #salt
11:47 benvon joined #salt
11:47 mkropinack joined #salt
11:47 pipeep joined #salt
11:48 PI-Lloyd ksj: lol, at least it works :)
11:48 Laogeodritt joined #salt
11:48 bradthurber joined #salt
11:48 Georgyo joined #salt
11:48 ede joined #salt
11:50 ksj PI-Lloyd: thing is, I read the jinja documentation thoroughly and made a lot of notes. I guess I skipped over that tiny section on "expression statements", because it seemed kind of redundant.
11:51 ksj how can they design something so bad. and it's jinja2!!! what the hell was jinja1 like?
11:51 OGuilherme joined #salt
11:51 otter768 joined #salt
11:52 stoogenmeyer_ joined #salt
11:54 PI-Lloyd I dread to think
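The pattern ksj ended up with: appending inside a jinja for loop needs the `do` expression-statement extension (`jinja2.ext.do`), which Salt's jinja renderer enables. A small sketch, filtering loopback addresses out of the `ipv4` grain:

```sls
{% set ips = [] %}
{% for ip in grains['ipv4'] %}
{%   if not ip.startswith('127.') %}
{%     do ips.append(ip) %}
{%   endif %}
{% endfor %}
listen_addresses: {{ ips | join(', ') }}
```

Without the `do` tag, `{{ ips.append(ip) }}` would also work but prints `None` into the output, which is why the extension exists.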
11:58 keimlink joined #salt
12:06 julez joined #salt
12:07 Haihan joined #salt
12:11 salt-n00b joined #salt
12:12 Mate does anyone have a good practice of changing the IP address of a minion?
12:12 Mate normally it gets disconnected, and the highstate timeouts
12:21 Twiglet run the highstate in screen
12:21 Twiglet only way I've foind of doing it
12:22 Twiglet that is if I'm running it via salt-call locally
12:23 Mate i launch esx vms with salt-cloud, they get dhcp address
12:23 Mate and then I would change their ip based on pillar and the minion-id
12:23 c10b10 any ideas why npm.bootstrap fails while using cmd.run works (in the same directory, same user)?
12:23 Mate and the connection between the new minion and the master disconnects
12:24 Mate i mean the master doesn't get the response
12:25 dimeshake joined #salt
12:27 multani joined #salt
12:32 XenophonF left #salt
12:32 Setsuna666_ joined #salt
12:32 TyrfingMjolnir joined #salt
12:34 c10b10 this is a bit voodoo, it looks like the "npm install" command was exectured, but the npm.bootstrap state still returns an error
12:35 c10b10 does anybody have any ideas to why?
12:36 CedNantes joined #salt
12:37 dendazen joined #salt
12:37 CedNantes hi there. Anyone using salt-cloud to deploy Windows Server VM ? I'm having trouble setting HostName and IPAddresses
12:38 CedNantes Salt-cloud is not taking them into account
12:42 cb joined #salt
12:43 cromark joined #salt
12:43 cromark joined #salt
12:43 the_lalelu joined #salt
12:44 ht joined #salt
12:45 fredvd joined #salt
12:45 izibi joined #salt
12:46 dfinn joined #salt
12:46 Corey joined #salt
12:47 ndrei joined #salt
12:53 fredvd joined #salt
12:55 dyasny joined #salt
12:57 p0rkbelly joined #salt
12:59 subsignal joined #salt
13:02 Tecnico1931 joined #salt
13:02 julez joined #salt
13:04 dendazen Is there a way on the salt minion to see what states have been applied?
13:05 dendazen or do some kind of high state dry run.
13:05 JDiPierro joined #salt
13:07 ndrei joined #salt
13:08 racooper joined #salt
13:10 kusams joined #salt
13:10 PI-Lloyd dendazen: add test=True to your salt-call command
13:10 PI-Lloyd salt-call state.highstate test=True
13:10 dendazen Thanks
13:11 PI-Lloyd or you can run state.show_highstate
13:12 kusams joined #salt
13:13 murrdoc joined #salt
13:14 jdesilet joined #salt
13:15 elfixit joined #salt
13:17 jhauser joined #salt
13:20 mapu joined #salt
13:24 primechuck joined #salt
13:25 emaninpa joined #salt
13:26 tkharju joined #salt
13:30 dendazen I also have another issue
13:30 agentnoel joined #salt
13:30 dendazen where i have The following requisites were not found  pkg: sudo
13:30 dendazen but on the box i have
13:30 dendazen yum list installed | grep sudo
13:30 dendazen sudo.x86_64                      1.8.6p3-15.el6            @rhel-x86_64-server-6
13:34 ageorgop joined #salt
13:35 mordonez joined #salt
13:36 Tyrm joined #salt
13:37 Tyrm joined #salt
13:42 drawsmcgraw1 dendazen: Can you post the offending Salt states?
13:42 bradthurber joined #salt
13:42 drawsmcgraw1 It sounds like you have a requisite and you passed it the name of a state that doesn't exist.
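What drawsmcgraw1 means: `require` matches against state declarations in the rendered highstate, not against what's installed on the box — `yum list installed` is irrelevant to requisite resolution. There must be a `pkg` state with that ID somewhere in the applied states (the file state here is illustrative):

```sls
sudo:
  pkg.installed: []

/etc/sudoers.d/ops:
  file.managed:
    - source: salt://sudo/ops
    - require:
      - pkg: sudo    # refers to the 'sudo' state above, not the RPM database
```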
13:43 JDiPierro joined #salt
13:43 mpanetta joined #salt
13:46 faliarin joined #salt
13:46 perfectsine joined #salt
13:48 hasues joined #salt
13:48 bradthurber hi. I'm running branch 2015.5 (not tag v2015.5 due to a centos 6 bug that is fixed in the branch). state service.running never returns. I see in the minion debug log that it is doing a '/sbin/service jbossas-domain status' followed by '/sbin/service jbossas-domain start' and then nothing happens after that. The service actually gets started but I guess the minion never gets the message that it started. Strangely if I do a 'service jbossas-domain stop' in another session, that causes the minion to break loose
13:49 murrdoc bradthurber:  can u open a ticket in github
13:49 bradthurber murrdoc: yes
13:49 murrdoc give them the salt-master —version, salt-minion —version out put and the state
13:49 murrdoc only because u are on develop
13:52 otter768 joined #salt
13:54 bradthurber joined #salt
13:54 flyboy82 joined #salt
14:00 andrew_v joined #salt
14:00 jdesilet joined #salt
14:02 bastiaan joined #salt
14:02 bastiaan howdy! Anyone here got experience with reclass?
14:04 bastiaan it looks like a better way to structure pillar data and master tops than the fileserver, but it seems not to be very popular (yet)
14:05 * madduck crawls under a desk
14:07 nodens joined #salt
14:07 nodens hi there
14:08 nodens I'm trying to parse the output from a salt execution module
14:08 nodens problem is, I added the --out=json and --static to have a single string
14:09 nodens but I get a string with each minion and then a string for each minion
14:09 nodens so my parser whines
14:09 nodens any pointer ?
14:10 nodens the doc says --static should prevent that, but it doesn't really prevent it, I still have a string for each minion (admitedly, I also have a string that include all minions)
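The concatenated output nodens describes — one JSON document per minion followed by a combined one — can be parsed with `json.JSONDecoder.raw_decode`, which reads a single document and reports where it stopped (the sample payload below is made up):

```python
import json

def parse_json_stream(text):
    """Parse a string containing several concatenated JSON documents."""
    decoder = json.JSONDecoder()
    docs, idx = [], 0
    while idx < len(text):
        # skip whitespace between documents
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, end = decoder.raw_decode(text, idx)
        docs.append(obj)
        idx = end
    return docs

# Made-up output: one document per minion, then a combined one
out = '{"minion1": true}\n{"minion2": true}\n{"minion1": true, "minion2": true}'
print(parse_json_stream(out))
```

Taking the last document gives the combined, `--static`-style result; taking the earlier ones gives the per-minion pieces.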
14:15 kaptk2 joined #salt
14:20 gladiatr joined #salt
14:20 rhodgin joined #salt
14:22 hasues left #salt
14:23 bastiaan ah, madduck, I can see you hiding there ;-)
14:23 murrdoc haha
14:24 GreyGnome joined #salt
14:26 bastiaan propagate_pillar_data_to_reclass looks interesting, but how do you actually use it?
14:26 catpig joined #salt
14:26 murrdoc paging madduck
14:27 johndeo joined #salt
14:27 pestouille joined #salt
14:27 madduck you set it to true and then pillar data should be available to reclass parameter interpolation
14:27 pestouille Hello Salt ocommunity
14:28 pestouille anyone have seen some truncated output log when launching a highstate ?
14:28 pestouille sometime I have the final result (success : X failed : Y) but sometime not
14:28 SaltAddict joined #salt
14:29 bastiaan ok, I see. Didn't know about pameter interpolation yet. Thanks!
14:30 ageorgop joined #salt
14:31 SaltAddict Hi all. I'm running saltstack on my datacenter. I have debian, ubuntu and redhat server... The problem is with debian and redhat I can specify the version in my salt repositories. But I did not manage to find the same with redhat. Can someone help me ? I would like my server to keep version 2014.7
14:33 drawsmcgraw1 pestouille: That can happen with some large and/or long runs. I like to tack -v --summary onto the end of my commands to increase the chance of getting results back.
14:34 pestouille drawsmcgraw1: that’s very random and might depends on the time the task take indeed. Any chance to have something more predictable
14:35 murrdoc yeah
14:35 murrdoc use a returner
14:35 murrdoc instead of the default one
14:35 murrdoc how many masters do u have
14:35 stanchan joined #salt
14:35 pestouille I’m using salt in masterless with packer
14:36 murrdoc ah
14:36 murrdoc put a default -t 60
14:36 murrdoc in your salt calls
14:36 pestouille sometime I have the final summary, sometimes not
14:36 murrdoc that means it will wait 60 seconds at least
14:36 pestouille I need to change packer sourcecode :p
14:36 pestouille 60 seconds might be not enough
14:36 pestouille got a lot of states
14:36 murrdoc you actually set the timeout in the minion config
14:37 murrdoc so no on touching packer
14:37 smcquay joined #salt
14:37 drawsmcgraw1 You can also use the jobs Runner to look through the job cache
14:38 drawsmcgraw1 If you use "-v" you'll get a job ID that you can use to query on the results of the run.
14:40 nodens (I guess I'll just tell my parser to ignore whatever's after a correct json string)
14:41 nodens the question is, is this a bug or expected behaviour ?
14:43 saru11 joined #salt
14:43 saru11 hello
14:44 saru11 do you know if there are any imitations how module used for minion master_type: func and master: mod.fun configuration options should be written?
14:45 cowpunk21 joined #salt
14:46 saru11 the point is that if I have a custom module mod and function fun returns for example __salt__[network.ipaddrs]()[0]
14:47 saru11 it will throw exception with errro
14:47 saru11 __salt__[network.ipaddrs]()[0]
14:48 saru11 TypeError: string indices must be integers, not str
14:48 turkey007 joined #salt
14:49 saru11 the above code is just an exmple but I would like to use __salt__ dict from the module
14:49 turkey007 left #salt
14:49 saru11 the module works fine and it returns first IP address of the minion it is called from
14:49 druonysus joined #salt
14:50 saru11 the master IP address is evaluated like this (taken from minion.py)
14:50 saru11 https://github.com/saltstack/salt/blob/develop/salt/minion.py#L689
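A sketch of the `master_type: func` shape saru11 is after. The function is evaluated very early in minion startup (see the minion.py line referenced above), and the error saru11 sees is consistent with `__salt__` not yet being the loader dictionary at that point, so the function should stand on its own with the stdlib. Paths and the fallback hostname are made up:

```python
# Hypothetical module, referenced from the minion config as:
#   master_type: func
#   master: mod.fun
import os

def fun():
    """Return the master address for this minion.

    Runs before the loader injects __salt__ and friends, so avoid
    calling __salt__['network.ipaddrs']() here -- stick to plain
    Python and return a string (or list of strings).
    """
    # Made-up logic: take the master from the environment, with a fallback
    return os.environ.get("SALT_MASTER", "salt.example.com")
```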
14:50 kphanann joined #salt
14:52 DavidB_ joined #salt
14:55 litwol I'm coming to a sad/interesting realization.
14:55 big_area joined #salt
14:55 litwol and i'll continue by prefixing that it is not salt's fault.
14:55 PI-Lloyd the world is actually flat?
14:55 litwol i notice all configuration/orchestration managers have this difficulty.
14:56 litwol CMs are not truly platform agnostic.
14:56 peters-tx joined #salt
14:56 ht joined #salt
14:56 rick_ joined #salt
14:56 litwol i haven't yet found a way where a single state declaration will "mean" the same thing on all systems.
14:57 Brew joined #salt
14:57 rick__ joined #salt
14:57 litwol for example. binary distributions (lets take debian repo) customizes php5 by installing stand-alone php5-* binaries. all good there.
14:57 litwol gentoo on the other hand installs /single/ 'php' package, but customizes it with "use flags".
14:58 litwol which means on gentoo a state will define a /single/ pkg.installed with args** customizations, whereas other OSs/platforms states define multiple pkg.installed per "customization"
14:59 litwol This results in a completely incompatible "infrastructure state declaration" between platforms.
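[The usual Salt idiom for papering over this platform split is a grains-keyed lookup map, the same shape as the map.jinja litwol links below. A plain-Python sketch of the lookup; the table contents and function name are illustrative, not a real formula's data:]

```python
# Hypothetical sketch of the map.jinja idiom: pick a per-platform
# package list from a lookup table keyed on the os_family grain.

PHP_MAP = {
    'Debian': {'pkgs': ['php5', 'php5-mysql', 'php5-gd']},  # stand-alone subpackages
    'Gentoo': {'pkgs': ['dev-lang/php']},                   # one package + USE flags
}

def lookup(grains, default='Debian'):
    # fall back to a default platform, mirroring grains.filter_by semantics
    return PHP_MAP.get(grains.get('os_family'), PHP_MAP[default])

print(lookup({'os_family': 'Gentoo'})['pkgs'])  # ['dev-lang/php']
```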
15:01 kphanann I'm having trouble with an include of a formula... How can I determine what the search path is for included files?
15:01 litwol Combine that "state" of CMs with current "standard" salt formula examples all over the web (https://github.com/saltstack-formulas/nginx-formula/blob/master/nginx/map.jinja) and you end up with a severely misleading and negative guidance to new developers.
15:01 litwol kphanann: includes are from the "root of environment"
15:01 litwol kphanann: look in master files_roots
15:01 kevit joined #salt
15:01 litwol or file_roots (i forget syntax)
15:02 litwol kphanann: in there you define environments. paths you see there are "roots"
15:02 litwol so just look in those folders.
15:02 kphanann For states, pillar and formula?
15:02 litwol kphanann: states and formulas are "the same thing", and pillars is different.
15:02 litwol kphanann: for states and formulas you have file_roots
15:02 litwol kphanann: for pillars you have pillar_roots
15:03 litwol kphanann: my note about "environments" is the same for both.
15:03 honestly joined #salt
15:03 litwol kphanann: look inside your master configuration file.
15:03 litwol kphanann: http://docs.saltstack.com/en/latest/ref/configuration/master.html#file-roots
15:03 rideh joined #salt
15:03 litwol kphanann: http://docs.saltstack.com/en/latest/ref/configuration/master.html#pillar-roots
15:03 UtahDave joined #salt
15:04 litwol kphanann: if your minion is matched under "dev" environment, then "include foo" will be located under /srv/salt/dev/{services,states}/foo
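[The master-config shape litwol is describing looks roughly like this; the paths and environment names are illustrative:]

```yaml
# /etc/salt/master -- illustrative paths only
file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev

pillar_roots:
  base:
    - /srv/pillar/base
  dev:
    - /srv/pillar/dev
```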
15:05 scoates stupid question of the day: what's up with 2015.5.1 ? Is that actually latest? http://docs.saltstack.com/en/latest/topics/releases/ says 2015.5.0, as does /topic … ?
15:07 froztbyte so I'm trying to come up with a pattern here; I want to build things "nicely" for our monitoring setup, where we have a bunch of different teams touching different salt states. currently I'm thinking that a 'monitoring' SLS could be made within their component's SLS folder
15:08 froztbyte however, this would still need a manual SLS entry in the topfile; is there some way I can "export and collect" SLS' like that? or scan the available states for a monitoring SLS in each available namespace?
15:08 PI-Lloyd scoates: I don't see 2015.5.1 in the ubuntu repo, or even tagged on github... where are you seeing this?
15:09 scoates PI-Lloyd: under "release notes" here http://docs.saltstack.com/en/latest/topics/releases/ … maybe they're just pre-release notes?
15:09 scoates ah yes. "release: TBA"
15:09 scoates thanks for the rubber ducking (-:
15:09 PI-Lloyd lol
15:09 teohhanhui joined #salt
15:09 kevit hi guys. How can I check that my pillars were loaded?
15:09 kevit salt-ssh '*' pillar.items
15:10 kevit ?
15:11 tkharju joined #salt
15:12 litwol froztbyte: you bring up an interesting question
15:12 primechuck Has anyone else ever run into this bug with cherrypy and salt?  https://bitbucket.org/cherrypy/cherrypy/pull-request/50/fix-race-condition-in-session-clean-up/diff
15:12 froztbyte I've found http://grokbase.com/t/gg/salt-users/1524et32kz/exported-resources-automatic-monitoring now
15:12 froztbyte which is a little not quite what I want
15:12 litwol froztbyte: state/pillar has to come from somewhere
15:12 froztbyte the monitoring system in question is sensu, fwiw
15:12 conan_the_destro joined #salt
15:12 froztbyte I have a couple different hosttypes
15:12 litwol froztbyte: you may consider mongodb source for pillar info
15:13 litwol froztbyte: then pull defined values and loop inside top.sls to decide includes dynamically
15:13 froztbyte if you're not familiar with sensu, you basically define subscriptions/groups, and clients can then tag in on each of those subscription types; checks are aggregated per subscription, and broadcast to all participating clients
15:13 litwol froztbyte: that abstracts access to top.sls
15:14 froztbyte so for dropping checks and such onto clients, that's easy enough
15:14 litwol froztbyte: but you'll still be responsible for 'other sources for pillar info' access
15:14 froztbyte litwol: that's not really a good solution, I think
15:14 teohhanhui My question is essentially the same as this: http://grokbase.com/t/gg/salt-users/148wqa8atw/updating-etc-fstab-file-using-salt
15:14 * litwol nods
15:14 froztbyte now there's another cog for people to worry about
15:15 froztbyte I already have the unfortunate need of having to run a separate states repo which causes some gymnastics
15:15 druonysuse joined #salt
15:15 froztbyte (although yay git submodules)
15:15 DavidB_ left #salt
15:16 froztbyte I guess I need to find out how I could search for all available states
15:17 lothiraldan joined #salt
15:17 gmoro joined #salt
15:18 froztbyte would it be better to post to list, or open a github issue asking about it?
15:18 adelcast left #salt
15:18 teohhanhui "Use 'mount' state" doesn't address the issue of how to retrieve such info and feed them into the state
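[The piece teohhanhui says is missing — feeding pillar data into the mount state — can look roughly like this; the 'mounts' pillar key and its fields are hypothetical:]

```sls
# Sketch: drive salt's mount.mounted state from pillar data.
# The pillar structure under 'mounts' is hypothetical.
{% for name, m in salt['pillar.get']('mounts', {}).items() %}
{{ m.get('path', name) }}:
  mount.mounted:
    - device: {{ m['device'] }}
    - fstype: {{ m.get('fstype', 'ext4') }}
    - opts: {{ m.get('opts', 'defaults') }}
    - mkmnt: True
{% endfor %}
```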
15:18 N-Mi joined #salt
15:18 N-Mi joined #salt
15:19 kphanann litwol: thanks I'm looking at it now.  I'm tracking down why a pillar for a formula is not being included.
15:20 froztbyte oooooor, alternatively I could just tell people to write things into a monitoring SLS and care about it later
15:20 * froztbyte goes with that option
15:20 adelcast joined #salt
15:21 litwol kphanann: 2 reasons i can think of. (1) you are not targeting it correctly in the pillar top.sls by /minion/, and (2) you are targeting the minion correctly, but not including the pillar definition correctly.
15:21 rhodgin joined #salt
15:21 litwol kphanann: also, make sure you refresh pillars
15:21 litwol kphanann: http://docs.saltstack.com/en/latest/topics/pillar/#refreshing-pillar-data
15:22 jalbretsen joined #salt
15:22 litwol froztbyte: are you looking for something like "include [directory to states]/*" ?
15:22 litwol froztbyte: and then individual teams can commit states to this directory?
15:23 froztbyte more like
15:23 froztbyte /path/to/states/{foo,bar,baz}/monitoring.sls
15:23 froztbyte and then I want to autodiscover foo.monitoring, bar.monitoring, baz.monitoring, etc
15:24 froztbyte so that if it exists, it's automatically used
15:24 litwol sounds to me like pillar listing "teams", and then you loop over those teams to include the monitoring.sls
15:24 froztbyte I'm doing that for users atm, it's pretty crappy
15:24 froztbyte too damn fiddly
15:24 litwol also not sure you would need to loop over anything
15:24 litwol for example https://github.com/saltstack-formulas/mysql-formula/tree/master/mysql
15:25 froztbyte what in that should I look at?
15:25 litwol this formula for mysql breaks down different states into {named state}.sls, and then has master state "init.sls" to include them all
15:25 litwol your case sounds like monitoring/{foo,bar,bar}.sls, with init.sls including them.
15:25 litwol your "discovery" happens manually by adding into init.sls
15:25 froztbyte yes, which isn't discovery at all, it's people doing things by hand
15:25 froztbyte which is what I'm trying to avoid (but can't find a way to)
15:26 litwol i can't help but wonder /why/ is this discovery necessary?
15:26 litwol every deployment process requires some kind of QA/testing
15:26 litwol whether it is human resources onboarding new employees, or new service being deployed
15:26 litwol some one looks at it and then says 'run this thing'
15:27 Fiber^ joined #salt
15:27 rhodgin joined #salt
15:27 litwol froztbyte: not doing that "manual" step is a very loud invitation to screw things up.
15:27 iggy state top file accepts * (but not pillar sadly)
15:28 litwol froztbyte: under your scenario i, a disgruntled employee, can write an incredibly malicious state and ruin a lot.
15:28 kphanann litwol:  It appears the formula is getting included - the states are declared - What is missing is some pillar data
15:28 froztbyte eh, there are other ways to deal with that
15:28 kphanann litwol: specifically this error message: SaltRenderError: Jinja variable 'dict' object has no attribute 'servers'
15:28 froztbyte I'm trying to enable people to get shit done as fast as possible
15:28 froztbyte tldr seems to be that what I want isn't possible right now though
15:29 litwol froztbyte: does iggy's suggestion above not help?
15:29 froztbyte hmm, missed that
15:30 froztbyte I'll give it a try
15:30 froztbyte anyway, the reason I want this is that there's a bunch of different teams working on their own little corner of things
15:30 kphanann this is all from the logstash_forwarder formula
15:30 froztbyte however, the flipside of a central monitoring SLS would be that there's a lot of other stuff to refer to
15:30 froztbyte so I guess I'll just see how it works out
15:31 litwol sounds to me like access-control issue
15:31 litwol allowing certain people access/control over X minions
15:31 litwol etc
15:31 litwol anyway i retract myself from this convo before i misunderstand the need any further.
15:32 murrdoc good talk
15:32 PI-Lloyd +1
15:33 cedwards ∫n
15:33 cedwards ~
15:33 cedwards .
15:34 hal58th joined #salt
15:35 hal58th_ joined #salt
15:37 JayFK left #salt
15:40 keimlink joined #salt
15:41 dopesong joined #salt
15:47 dalexander joined #salt
15:48 tomspur joined #salt
15:48 tomspur joined #salt
15:48 pestouille joined #salt
15:53 sl_ joined #salt
15:53 amcorreia joined #salt
15:53 otter768 joined #salt
15:54 giantlock joined #salt
15:54 numkem_ joined #salt
15:54 Tyrm joined #salt
15:55 jeremyr joined #salt
15:55 jeremyr left #salt
15:55 jespada joined #salt
15:57 debian112 joined #salt
15:57 jespada If I want to grab all hostnames with a particular role, should I do something like: {% for server, addrs in salt['mine.get']('roles:myrole,myrole2', 'network.ip_addrs', expr_form='grain').items() %}
15:57 jespada then use {{server}} ?
15:57 cowpunk21 joined #salt
15:57 jespada is that more or less the path?
15:58 supersheep joined #salt
15:59 murrdoc yes
15:59 pestouille joined #salt
16:00 tcolvin joined #salt
16:02 iggy I don't think that grain lookup will work
16:03 iggy use a compound match and do 'G@roles:myrole or G@roles:myrole2'
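[Put together, jespada's loop with iggy's compound matcher would look roughly like this; the role names are illustrative:]

```jinja
{% for server, addrs in salt['mine.get'](
       'G@roles:myrole or G@roles:myrole2',
       'network.ip_addrs',
       expr_form='compound').items() %}
{{ server }}: {{ addrs[0] }}
{% endfor %}
```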
16:03 pestouille_ joined #salt
16:04 jespada hmm ok, thanks, I'll give it a try
16:05 enarciso joined #salt
16:10 twork i have (yet another) very basic question. is it accurate to say that all configuration data could, in principle, be stored in the State tree? ...but it's better (more flexible, maybe more secure?) to make the State tree a structure/template that gets populated from within the Pillar?
16:11 iggy twork: it depends on your needs
16:11 aparsons joined #salt
16:11 twork okay, so in principle, yes?
16:12 iggy there are some people that dislike formulas (and their data driven nature)... so the only thing they are going to use pillars for is actual targeted/sensitive data
16:12 twork ok
16:12 ksj why does service.running always report a changed state, every run, if I have enabled: True ?
16:12 iggy there are others that will make their states as generic as possible and then drive that all through pillars
16:13 mohae joined #salt
16:13 twork this is helping
16:13 iggy ksj: usually because your init script doesn't report status correctly
16:13 twork i've been having trouble figuring out where the state tree/pillar border is
16:13 ksj iggy: my "initscript" is systemd.....sigh
16:14 desposo joined #salt
16:14 sine_nitore joined #salt
16:15 iggy that's just the most common thing that causes that problem
16:15 supersheep joined #salt
16:15 stanchan joined #salt
16:19 andrew_v joined #salt
16:20 writtenoff joined #salt
16:23 supershe_ joined #salt
16:23 cpowell joined #salt
16:23 lothiraldan joined #salt
16:23 mpanetta joined #salt
16:25 ksj iggy: unfortunately, the chances of systemd devs fixing the way it reports status is much less likely than salt devs adding a hack to salt
16:25 cpaclat joined #salt
16:25 stanchan_ joined #salt
16:25 iggy does it do it for every service? if not then it's not a systemd level problem
16:26 iggy it could be something as simple as systemd tracking the pid, the service forks, so systemd loses track of the pid... the fix in that case would be to not have the process fork (there's usually a foreground option)
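[A sketch of the two unit-file shapes iggy describes — either tell systemd the daemon forks, or keep it in the foreground; the unit name and paths are illustrative:]

```ini
# /etc/systemd/system/mydaemon.service -- illustrative
[Service]
# If the daemon forks, tell systemd so it tracks the right PID:
Type=forking
PIDFile=/run/mydaemon.pid
ExecStart=/usr/sbin/mydaemon
# ...or avoid forking entirely with the foreground option:
# Type=simple
# ExecStart=/usr/sbin/mydaemon --foreground
```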
16:27 cowpunk21 joined #salt
16:29 spookah joined #salt
16:34 ksj more likely it's something in the way debian 8 has implemented systemd...I have too many other things to do to dig into it right now
16:37 Tyrm joined #salt
16:40 ndrei joined #salt
16:41 Tyrm joined #salt
16:42 ajw0100 joined #salt
16:43 kunersdorf joined #salt
16:43 kunersdorf is there a page on templating?
16:44 MatthewsFace joined #salt
16:47 KyleG joined #salt
16:47 KyleG joined #salt
16:48 Tyrm joined #salt
16:50 MatthewsFace joined #salt
16:51 drawsmcgraw joined #salt
16:52 baweaver_ joined #salt
16:53 keimlink joined #salt
16:53 stanchan joined #salt
16:53 baweaver_ joined #salt
16:55 douglasssss joined #salt
16:56 prwilson joined #salt
16:58 Billias joined #salt
16:59 Billias left #salt
16:59 rhodgin joined #salt
17:01 lempa joined #salt
17:02 SheetiS joined #salt
17:02 druonysus joined #salt
17:03 theologian joined #salt
17:04 stanchan joined #salt
17:04 forrest joined #salt
17:07 nexus Hi
17:08 nexus I'm trying to create a formula with defaults.yaml file to be able to get default data from that files if there is no data in the pillar
17:09 alexanderilyin joined #salt
17:09 hal58th joined #salt
17:09 nexus the problem is when I configure the defaults.yaml and then the pillar, when I apply the state, it only applies the data from the pillar, it doesn't take the yaml
17:09 hal58th_ joined #salt
17:09 TheHelmsMan joined #salt
17:09 nexus anyone could help me?
17:10 iggy nexus: try pasting code
17:10 nexus I'm creating the gist XD
17:11 nexus defaults.yaml -> https://gist.github.com/dseira/40e2bdc803304038867f
17:12 amcorreia joined #salt
17:12 iggy protip gist supports multiple files per paste
17:13 nexus pillar.sls -> https://gist.github.com/dseira/a2f1fcd8172837902780
17:13 nexus map.jinja -> https://gist.github.com/dseira/fb586c0c3f37eb777ba1
17:14 nexus param.sls -> https://gist.github.com/dseira/51a9f3b939e4c9396132
17:18 nexus here are the 4 files: https://gist.github.com/dseira/5e8911ef898674843732
17:18 murrdoc merge=defaults_settings.sysctl
17:18 murrdoc should be merge=True
17:18 nexus I tried but with the same result
17:19 nexus the problem is that only applies the pillar data
17:19 murrdoc u are using the sysctl formula ?
17:19 impi joined #salt
17:19 nexus more or less
17:20 nexus but I also tested the sysctl formula from the repository with the same result
17:21 murrdoc thats weird
17:21 nexus I don't know if it is possible but my idea is to put in the defaults.yaml all the sysctl variables that I want by default, but also others in the pillar that are specific to a server
17:21 murrdoc thats not possible
17:21 murrdoc no wait
17:21 murrdoc it should be
17:21 nexus for example in the sysctl formula this seems to be the idea
17:23 overyander joined #salt
17:23 nexus if you see the sysctl formula it has the pillar.example with some params but in the defaults.yaml it also has other params
17:23 iggy 5 chrome tabs when one would do...
17:24 murrdoc yeah
17:24 murrdoc it works fine
17:24 wt joined #salt
17:24 murrdoc i am not sure why its not working for u
17:24 nexus iggy: see my last file, please
17:24 iggy yeah, spoke too soon
17:24 nexus can be something related to the salt version?
17:24 nexus I'm using 2014.7.0 for master and 2014.7.5 for minion
17:24 nexus in ubuntu server 12.04
17:25 iggy should be fine
17:25 tiadobatima hi guys... I'm confused about what {{ self }} is supposed to do... I'm looking at this example in the docs: http://docs.saltstack.com/en/latest/topics/targeting/grains.html#matching-grains-in-the-top-file
17:26 iggy nexus: and you're sure you tried merge=True in the pillar.get call?
17:26 tiadobatima why not using node_type instead of {{ self }}?
17:27 nexus iggy: right now I've done again the test with the same result, it only applies the params configured in the pillar
17:27 iggy nexus: also... sysctl_settings.get('params', {}) but in your defaults.yaml params is actually a list
17:27 iggy mapping is only in newer versions of jinja
17:27 nexus iggy: which is the proper way?
17:28 iggy your call, I was just pointing out they were different
17:30 nexus iggy: I took the example from the sysctl formula
17:30 murrdoc link
17:30 iggy I'm not sure if merge will merge lists
17:30 murrdoc merge wont merge lists
17:30 nexus so, which is the proper way to do that?
17:31 murrdoc so you want a defaults ?
17:31 iggy so if you're expecting params to be kernel.sysrq and net.ipv4.ip_forward, that won't be the case
17:31 schristensen joined #salt
17:31 nexus yes, I want a default file with params by default
17:31 murrdoc and then the ability to add on to it
17:31 murrdoc hmm interesting
17:32 nexus but I also want to be able to specify some params in a pillar that need to be overwrite
17:32 murrdoc its not built to do that
17:32 nexus I understand
17:32 murrdoc but it would be easy to do
17:33 nexus so, which is the idea for the defaults.yaml file?
17:34 murrdoc the way the defaults.yml is written
17:34 murrdoc you would have to specify all your sysctls in your pillar
17:35 druonysus joined #salt
17:35 murrdoc u dont need to specify the pkg, and config keys btw
17:35 murrdoc just the params
17:35 iggy defaults.yaml is for when people don't setup the pillar at all, not for adding to the defaults with pillar data
17:35 nexus understand
17:36 nexus in fact, I've also tested the sysctl formula in the github repository and it doesn't work
17:36 iggy it's to handle things like {% for param in sysctl_settings.get('params', {}) %}
17:36 murrdoc it doesnt work ?
17:36 nexus for what I would expect (exposed before) to do, no
17:36 stoogenmeyer_ joined #salt
17:36 iggy if you setup defaults.yaml correctly, you don't need to have settings.get('foo', defaultval) all over the place
17:37 nexus but it seems I'm wrong...
17:37 linjan_ joined #salt
17:38 iggy if you look at the postgres-formula's defaults.yaml
17:38 iggy you'll see a lot of the data is just empty lists/dicts
17:38 nexus I see
17:39 iggy that's to avoid things like {% for param in sysctl_settings.get('params', {}) %}
17:39 iggy since params will _always_ be set (either by the defaults.yaml to an empty dict or by the user via pillar)
17:39 iggy so you end up with just {% for param in sysctl_settings.params %}
17:40 nexus ok
17:40 cpowell has anyone here used the gpg renderer for pillar data?
17:41 nexus but I think it is interesting to have a defaults with the params by default and then redefine some of them in the pillar
17:41 twork another basic question. why is it that the default location for the file server is under /srv/salt, while the pillar gets its own independent area? put another way: would it be sensible to put file_roots under /srv/file_roots?
17:41 nexus you avoid to have to repeat the same params for several servers
17:41 nexus in the pillar
17:42 iggy nexus: you can redefine them fine, you just can't add to them
17:43 nexus iggy: sorry I don't understand that, how can I redefine them?
17:43 iggy i.e. if you left sysctl:config:location out of your pillar it would pull what was in defaults.yaml
17:43 iggy but if you set it in the pillar, it's completely overwritten
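[The behavior iggy describes — nested dicts merge key-by-key, but any leaf set in the pillar (including a list) replaces the default wholesale — can be sketched in plain Python; the sample data is illustrative, not the real formula's defaults:]

```python
# Plain-Python sketch of pillar-over-defaults merge behavior:
# dicts merge recursively, but lists and scalars are replaced outright.

def merge(dst, src):
    out = dict(dst)
    for k, v in src.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge(out[k], v)   # recurse into nested dicts
        else:
            out[k] = v                  # lists/scalars: overwrite, never append
    return out

defaults = {'config': {'location': '/etc/sysctl.d/99-salt.conf'},
            'params': ['kernel.sysrq', 'net.ipv4.ip_forward']}
pillar   = {'params': ['vm.swappiness']}

merged = merge(defaults, pillar)
print(merged['config']['location'])  # kept from defaults
print(merged['params'])              # defaults list is completely gone
```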
17:43 nexus ah, ok
17:43 nexus understood
17:44 iggy twork: you can put file_roots anywhere you want
17:44 hal58th__ joined #salt
17:45 hal58th_1 joined #salt
17:45 nexus iggy: so, the case exposed before is not going to work because the same keys are written in the defaults.yaml and the pillar, no?
17:46 twork iggy: ok. that's what i'm about to do (/srv/file_roots etc.) but is that typical? does the stock config put it inside /srv/salt for a particular reason?
17:46 cowpunk21 joined #salt
17:46 iggy correct
17:46 iggy nexus: you can have multiple pillars defined for a host and it should merge them (i.e. a common.sls for * and a web.sls for just web servers)
17:47 iggy it's _supposed_ to merge them, but I think it might be broken
17:47 mordonez joined #salt
17:48 iggy twork: it's common because it's the default, which makes it commonly mentioned in docs... which makes it common, which makes it a sensible default, which....
17:49 druonysuse joined #salt
17:49 twork heh. ok, thanks.
17:50 nexus iggy: is there any way to open a request or something like that? because I think the possibility to do that is interesting
17:51 iggy github.com/saltstack/salt/issues (or somewhere close by)
17:51 nexus thank you very much
17:52 overyander where can i find the release notes or changelog for 2015.5.0-2 ?
17:53 murrdoc what is that
17:53 murrdoc i only saw the .1 release for the cloud fix
17:53 overyander http://docs.saltstack.com/en/latest/topics/installation/windows.html
17:53 overyander http://docs.saltstack.com/downloads/
17:54 murrdoc whats an exe file ?
17:54 overyander lol
17:54 otter768 joined #salt
17:54 murrdoc :D
17:56 murrdoc the -2 must be iteration of the build
17:56 murrdoc not an actual code version
17:57 overyander that's what i'm thinking too, just wanted to double check.
17:57 rhodgin joined #salt
17:57 overyander i'm about to update my master, but it only showed version 2015.5.0-1 as being available, even in the testing repo's.
18:00 LiamM joined #salt
18:01 TheHelmsMan1 joined #salt
18:01 murrdoc the - is iteration
18:01 murrdoc i also googled .exe
18:02 murrdoc we are good
18:02 overyander lol, ok thanks
18:04 druonysuse joined #salt
18:05 UtahDave overyander: yeah, the -2 is because of a build issue. It doesn't modify Salt at all
18:06 Heartsbane joined #salt
18:06 Heartsbane joined #salt
18:08 denys joined #salt
18:09 litwol I've finally got a nice mysql cluster running all with help from salt :-D
18:09 murrdoc write a blog post!
18:09 murrdoc share the knowledge
18:09 litwol all it takes to spin up a new database host is to name it with a pattern that matches "db*prod*"
18:09 murrdoc (sorta serious)
18:09 litwol it goes from blank container to fully functional db
18:09 murrdoc wow
18:10 litwol what i have /not/ configured in salt yet is joining new db into the database cluster as a slave.
18:10 litwol i am not sure how to do that yet.
18:10 litwol well. i know how. but i'm not sure what method is efficient.
18:10 litwol for example i can write "slave join cluster" commands in an sql file and execute it when my db boots up.
18:10 litwol etc etc.
18:10 litwol fun stuff
18:10 litwol :-D
18:11 litwol right now i have a *very small* cluster going with 1 master and 1 immediate slave, and 1 slave with 6 hours sql delay.
18:12 litwol oh forgot to mention. it also handles importing db dump to bootstrap "stale slave" db data.
18:12 litwol pretty happy with results :-D
18:12 murrdoc do u have that scripted ?
18:13 murrdoc the mysql join cluter thing ?
18:13 litwol not yet.
18:13 litwol murrdoc: to join cluster you need 3 conditions met. well 4, 4th being db nodes must be able to talk to one another.
18:14 litwol murrdoc: (1) import somewhat recent db backup
18:14 litwol murrdoc: (2) run change master to .. command to set master.
18:14 litwol murrdoc: (3) start slave;
18:14 litwol done :)
18:14 litwol (1) is easily handled by salt mysql-formula from github
18:14 litwol you can specify which "schema" file to import
18:15 litwol (2) and (3) is something that still needs to be scripted.
18:15 murrdoc do it in python
18:15 murrdoc execute from salt
18:15 murrdoc \o/
18:15 litwol but it should be very easy because these commands are simple sql.. just put them in flat .sql file and "import" that file into your mysql
18:15 litwol also master and start slave commands are "safe". so you won't run into a problem if you ran them multiple times;
18:16 litwol murrdoc: lol why. it is a simple sql command. really just do cmd.run mysql < [the cluster join file].sql
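[litwol's steps (2) and (3) are just two SQL statements; a sketch that builds them in Python per murrdoc's suggestion — connection handling left out, and the host, credentials, and binlog coordinates are all illustrative:]

```python
# Sketch of steps (2) and (3): generate the SQL a new slave needs to
# join the cluster. All argument values are illustrative.

def slave_join_sql(master_host, repl_user, repl_password, log_file, log_pos):
    return [
        "CHANGE MASTER TO "
        "MASTER_HOST='{0}', MASTER_USER='{1}', MASTER_PASSWORD='{2}', "
        "MASTER_LOG_FILE='{3}', MASTER_LOG_POS={4};".format(
            master_host, repl_user, repl_password, log_file, log_pos),
        "START SLAVE;",
    ]

for stmt in slave_join_sql('db1prod', 'repl', 'secret', 'mysql-bin.000042', 120):
    print(stmt)
```

[These statements could then be written to a flat .sql file and imported, as litwol describes, or fed through a DB driver for logging and error handling, as murrdoc prefers.]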
18:16 litwol :)
18:16 litwol anyhow. just wanted to share my excitement.
18:16 litwol i will blog about it once i manage to make it 100% automated.
18:16 murrdoc cmd.run is evil
18:16 murrdoc no logging
18:17 murrdoc no try/catch
18:17 litwol not a big deal in this scenario
18:17 murrdoc i am totally jealous u knocked the mysql out the park btw
18:17 litwol i still didn't finish many necessary steps. such as taking db backup and storing it in the "correct place"
18:17 KyleG ugh
18:17 KyleG containers
18:17 KyleG kill me now
18:17 litwol and then automating slave bootstrap by importing that db dump from "correct place"
18:18 litwol KyleG: wat?
18:18 ek6 cmd.run is not evil...its just a loaded gun with no safety or trigger guard.....that occasionally jams
18:18 KyleG "I want to run on hardware; I'm a real man." - Linus Torvald
18:19 murrdoc ek6:  guns kill people
18:19 bastiaan joined #salt
18:19 ek6 so aim carefully
18:19 litwol KyleG: i'm unable to decipher your point. would love to understand criticism better.. maybe i can learn something from it.
18:20 KyleG I don't believe in the latest container craze.
18:20 KyleG that's the tl;dr
18:20 litwol oh gotcha
18:20 * litwol nods
18:20 litwol yeah no problem
18:20 KyleG It's awesome, I want to implement it for my dev environment
18:20 KyleG but for prod, I just don't see it
18:20 litwol given my excited rant above you can as easily substitute mention of "container" with a "virtual machine" or "linode node" or "aws"
18:20 KyleG aye
18:21 KyleG I hate all these marketing terms :(
18:21 litwol i use it for sandboxing and app isolation.
18:21 KyleG First networked servers became cloud
18:21 KyleG now VMs are containers?
18:21 litwol and also because i don't like the IOPS penalty on virtual.. otherwise they are all the same as far as my neeeds are concerned.
18:21 KyleG See that makes sense
18:22 litwol by the way you can nest containers inside a VM with minimal overhead.
18:22 mbrgm joined #salt
18:22 litwol again, for self-containment of app deploymnet.
18:22 litwol w/e you are comfy with.
18:22 mbrgm regarding the iptables state: what exactly does the save option do?
18:22 KyleG Our app is python, so we just use virtualenv's
18:22 KyleG and uwsgi + emperor mode
18:24 litwol yesterday i began implementing my "host management" into salt states.
18:24 litwol so far i'm able to spin up w/e containers i need (for example to facilitate above db cluster spin up)
18:25 litwol i happen to use bare steel host with containers. i could as easily spin up virtual machines for minions.
18:25 litwol absolutely doesn't matter in the end.
18:25 druonysus joined #salt
18:25 druonysus joined #salt
18:25 litwol and the cool thing about going down the config management route is i can re-launch entire infra on alternate hosting service on per need basis.
18:25 litwol ie prices change. or licensing change. or w/e
18:26 litwol "wat? disk fails? okey launch all these instances on diff host"
18:26 litwol "what? linode prices went up? launch all on aws!"
18:26 litwol etc etc
18:30 c10b10 joined #salt
18:30 ashb joined #salt
18:34 _JZ_ joined #salt
18:40 c10b10 joined #salt
18:40 c10b10 joined #salt
18:42 lictor36 left #salt
18:42 amontalban Hey guys, anyone know how to use connection_args in mysql module (Not state)
18:43 amontalban I have used it in states but can't get it to work on module i.e mysql.query
18:43 andrew_v joined #salt
18:44 julez joined #salt
18:44 losh joined #salt
18:45 murrdoc iggy:  https://github.com/saltstack-formulas/aptly-formula/pull/18
18:45 mohae_ joined #salt
18:46 bmac2 joined #salt
18:47 aboe joined #salt
18:48 bhosmer_ joined #salt
18:48 dopesong joined #salt
18:54 whytewolf amontalban: salt  -C 'G@roles:mysql' mysql.query table 'show tables;' connection_user=user connection_pass=password
18:54 neogenix joined #salt
18:56 tomh- joined #salt
18:58 aboe can I talk to a saltstack employee about suse package automation?
18:59 forrest Before I start working on it, has anyone set up a vagrant system that provisions, syncs directories, then creates docker images and syncs each project to one of those, then runs salt code inside the docker image?
18:59 forrest aboe: I'm not sure who is building the suse package now, basepi or UtahDave might now though.
19:00 impi joined #salt
19:00 aboe forrest, I'm building them, but I want to automate it.
19:00 amontalban whytewolf: cool let me test. Thanks!
19:00 cowpunk21 joined #salt
19:00 forrest aboe: ahh okay, yeah I'd talk to basepi or UtahDave then
19:00 aboe ping basepi
19:00 aboe lol
19:02 GreyGnome joined #salt
19:02 murrdoc aboe:  u should talk to aboe
19:02 murrdoc aboet it
19:03 murrdoc assuming aboe is pronounced a bow
19:03 beneggett joined #salt
19:03 forrest murrdoc: He's the one doing the packaging already...
19:03 amontalban whytewolf: I didn't know it was that easy, thanks! I was trying to use connection_default_file but I'm going to use connection_user and connection_pass instead
19:03 TheHelmsMan joined #salt
19:03 * murrdoc shows himself out
19:04 forrest these jokes are terrible
19:04 aboe murrdoc, lol,
19:04 forrest So has anyone used vagrant to spin a VM up, then used salt to create docker containers and sync directories, THEN inside those containers used salt to provision the system
19:04 forrest I'm not going to use a dockerfile
19:04 forrest that shit is lame.
19:04 murrdoc no
19:04 forrest I already have the salt code written, just need to get it there and make it go.
19:04 murrdoc also yo dawg i heard u like containers
19:04 forrest bleh, maybe just a dockerfile to install salt...
19:05 forrest I do
19:05 forrest especially on mac osx
19:05 forrest where I need 5 VMs running
19:05 forrest so if I can do 1 VM, 5 containers
19:05 forrest way better
19:05 murrdoc i recommend https://www.packer.io/docs/builders/docker.html
19:05 murrdoc + https://www.packer.io/docs/provisioners/salt-masterless.html
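[A minimal packer template combining the two docs murrdoc links — docker builder plus salt-masterless provisioner; the image and state-tree path are illustrative:]

```json
{
  "builders": [
    {"type": "docker", "image": "ubuntu:14.04", "commit": true}
  ],
  "provisioners": [
    {"type": "salt-masterless", "local_state_tree": "/srv/salt"}
  ]
}
```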
19:06 forrest Have you used that with salt at all murrdoc?
19:06 murrdoc packer ?
19:06 murrdoc yes
19:06 murrdoc packer + docker
19:06 murrdoc no
19:06 murrdoc packer + salt yes
19:06 stoogenmeyer_ joined #salt
19:07 perfectsine joined #salt
19:08 hybridpollo joined #salt
19:08 forrest iggy, murrdoc: https://github.com/saltstack-formulas/aptly-formula/pull/18 Did that bug with the gpg keys get fixed?
19:08 forrest I remember it was set to false for a reason before
19:09 Berty_ joined #salt
19:10 murrdoc https://github.com/smira/aptly/labels/bug
19:10 forrest yeah I looked at that but I remember iggy setting it that way for a reason
19:11 forrest so murrdoc, have you used vagrant then packer inside of that to provision machines on the vagrant system?
19:13 forrest I don't want to export anything
19:13 forrest I just want to sync dirs (which are git based content) to the VM, then to the docker images
19:13 murrdoc i have used packer to make vagrant vms
19:13 murrdoc and i loved it
19:13 murrdoc both with puppet and salt
19:14 murrdoc but no i havent done what u are asking
19:14 murrdoc :|
19:14 forrest yeah I think that is the problem, none of the examples I'm seeing do that, it's either vagrant or docker
19:14 forrest not one inside of the other
19:16 badon joined #salt
19:16 ponpanderer joined #salt
19:17 baweaver joined #salt
19:17 jdesilet joined #salt
19:18 baweaver joined #salt
19:19 c10b10 is anybody using salt-ssh from a mac hombrew installed saltstacj?
19:19 c10b10 *saltstack
19:20 ponpanderer Hey guys, using salt-cloud and running into a pretty silly issue that could maybe be resolved via undocumented CLI args. Seems on centos7 openstack images in my Openstack cloud the "ec2-user" (default cloud user) is created after SSH is listening. This results in the deploy script trying 15 times getting "invalid user" and then giving up without deploying the salt minion
19:20 ponpanderer Are there any script arguments for retry count and/or interval to work around that?

19:21 jalaziz joined #salt
19:23 sinenitore joined #salt
19:23 cpowell_ joined #salt
19:25 forrest ponpanderer: manfred might know
19:26 rick_ Is there a way to modify the server-side CherryPy code that gets run?
19:26 c10b10 is salt using some sort of cache? i've edited all instances of a string in modules and states in the salt source code, but salt-ssh seems to not take notice of that
19:27 \ask_ joined #salt
19:27 muep joined #salt
19:27 crd_ joined #salt
19:28 aboe ponpanderer, are these images build with cloud-init ?
19:28 spookah joined #salt
19:28 spookah joined #salt
19:28 bhosmer_ joined #salt
19:29 codehotter joined #salt
19:30 ponpanderer aboe: iirc they were and should be. i'm checking with our private cloud team now as this seems wrong that SSH listens without a user that can be connected with. in the interim a custom retry parameter instead of the default 15 would be really handy as the user gets created about 5 seconds after salt-cloud gives up :/
19:30 renoirb Is there a recommended way to develop salt states and using gitfs?
19:30 impi joined #salt
19:30 mrbigglesworth joined #salt
19:31 ponpanderer I don't see it as a salt-cloud issue (maybe feature request at most), just looking for a workaround
19:32 aboe with cloud-init you need a JSON file with user_data from a cloud-config file, otherwise no user is created
19:32 aboe ponpanderer, https://www.rdoproject.org/forum/discussion/816/cloud-init-default-user/p1
19:32 Parabola joined #salt
19:32 Parabola joined #salt
19:33 ponpanderer aboe: lol, i'm reading the exact same page :)
19:33 aboe I ran in the same config issue with my kvm machine.
19:34 Parabola left #salt
19:36 aboe ponpanderer, https://github.com/saltstack/salt/issues/15381
19:37 rm_jorge joined #salt
19:38 c10b10 does anybody have any idea about this: https://github.com/saltstack/salt/issues/24045
19:38 ponpanderer aboe: this is interesting. in my case ssh would work just fine. it just seems my 'cloud-user' user is getting delayed being created and SSH is listening long enough without cloud-user that salt-cloud gives up getting Permission Denied after 15 times. Then like 5 seconds later cloud-user is created properly...but like 5 seconds after salt-cloud gives up unfortunately
19:40 aboe ponpanderer, http://docs.saltstack.com/en/latest/topics/cloud/misc.html#connection-timeout
19:41 ponpanderer ahh, exactly what i was looking for!
19:41 ponpanderer many thanks aboe
19:41 aboe ponpanderer, you're welcome,
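[The connection-timeout doc aboe linked points at per-profile tuning knobs; a sketch of what ponpanderer's workaround might look like — the option names below should be verified against your salt-cloud version, and the provider name is an assumption:]

```yaml
centos7-profile:
  provider: my-openstack
  ssh_connect_timeout: 900        # keep retrying the SSH connection this long (seconds)
  wait_for_passwd_timeout: 60     # wait this long for the login user to exist
  wait_for_passwd_maxtries: 30    # raise the retry count from the default 15
```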
19:43 andrew_v joined #salt
19:45 ckao joined #salt
19:48 rsimpkins I am trying to set up a job with salt.states.schedule. It adds the job correctly. However, when I modify the state and run highstate it says that it modifies the job, but schedule.list shows that the job actually does not get modified.
19:48 rsimpkins Is this expected behavior?
19:48 jpaetzel_ joined #salt
19:50 robflyn joined #salt
19:51 robflyn joined #salt
19:51 robflyn Mm
19:53 smcquay joined #salt
19:53 smcquay_ joined #salt
19:54 aboe rsimpkins, testing it now on dev version.
19:54 aboe rsimpkins, on development version the state.schedule changes, and schedule.list shows the correct state.
19:55 rsimpkins aboe: Thanks. I wonder what I'm doing wrong... :(
19:55 otter768 joined #salt
19:56 aboe rsimpkins, this is my state: http://pastebin.com/kdA50kEw
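[aboe's pastebin has since expired; a minimal schedule.present state of the same general shape — the job name and scheduled function here are assumptions, not the pastebin's contents:]

```yaml
highstate-hourly:
  schedule.present:
    - function: state.highstate
    - seconds: 3600
```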
19:58 bhosmer_ joined #salt
19:59 otter768_ joined #salt
20:01 yannisc joined #salt
20:02 robflyn joined #salt
20:07 jespada joined #salt
20:07 rsimpkins aboe: I am guessing this may be related to my issue. https://github.com/saltstack/salt/pull/23879
20:08 rsimpkins aboe: Using salt-call schedule.delete won't delete my job on 2015.5. Using salt will.
20:08 baweaver joined #salt
20:09 aboe rsimpkins, hope it get fixed soon.
20:09 rsimpkins Hopefully we won't have to wait 6 months for the bug fix version of 2015.5. :)
20:09 rsimpkins I seem to be finding lots of little issues like that.
20:10 aboe rsimpkins, I don't think so; from a reliable source I got word that the release schedule is being sped up
20:10 neogenix rsimpkins: yes, it likely is. I started working with garethgreenaway on the schedule stuff. I believe they're working to fix it.
20:10 neogenix rsimpkins: are you running multimaster perchance?
20:10 linjan joined #salt
20:12 rsimpkins neogenix: No, but we were planning on it. Should we hold off?
20:12 ponpanderer i see 2015.5 is the default now with a 'yum install salt-minion' would the 2015.x branch be consider prod ready now or best to stick with the 2014.7 series for now?
20:13 rsimpkins ponpanderer: Subjective. We were having tons of issues with 2014.7. 2015.5 fixed many of those problems, but now a few new little things are popping up with use.
20:16 mbrgm left #salt
20:17 OnTheRock joined #salt
20:19 renoirb UtahDave, basepi, sorry to bother you guys but i’m trying to find a way to develop my salt states from a workbench VM by using git repos and having the workbench to clone the formulas. But I can’t find recommendations on how to leverage gitfs for state development. You have links/feelings to share?
20:20 RabidCicada joined #salt
20:21 ponpanderer rsimpkins: thanks for the answer. i'll be testing out moving to 2015.5 with that in mind as there's a few issues in 2014.7 that my use case requires
20:22 RabidCicada Can anyone provide a small description of the state modules vs the minion execution modules and how they play together?  I'm thinking about writing one to control steam game server stuff and looking to understand which modules drive what part of the process
20:22 murrdoc state modules call execution modules
20:23 _JZ_ joined #salt
20:23 timoguin renoirb: You can either mount the states working dir inside the VM. Or you can mount the Git repos and use the file:/// syntax to point to them.
20:24 RabidCicada ok...so to write functionality from scratch...I'd write a minion execution module.  I also just realised a state module is just the normal sls stuff I've been using (just didn't know the terminology).
20:25 timoguin RabidCicada: well not quite. The SLS files call state modules, whereas execution modules are typically what you call from the CLI.
20:26 timoguin State modules call execution modules to do the bulk of their work
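[A minimal sketch of the pairing timoguin describes, using a hypothetical `steam` module name for RabidCicada's use case. In real Salt the state module would call the execution module via `__salt__['steam.is_running']`; here the call is inlined so the example is self-contained:]

```python
# _modules/steam.py -- execution module: does the actual work,
# callable from the CLI as `salt '*' steam.is_running server1`.
_RUNNING = {"server1"}  # stand-in for a real process check

def is_running(server):
    """Return True if the named game server is up (stubbed here)."""
    return server in _RUNNING  # real code would inspect the process table

# _states/steam.py -- state module: idempotent wrapper the SLS files call.
def running(name):
    ret = {"name": name, "changes": {}, "result": True, "comment": ""}
    if is_running(name):  # __salt__['steam.is_running'](name) in real Salt
        ret["comment"] = "{0} is already running".format(name)
        return ret
    # real code would start the server here, then record what changed
    ret["changes"] = {name: "started"}
    ret["comment"] = "{0} started".format(name)
    return ret
```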
20:26 renoirb timoguin, yeah. I’m looking more at a workspace setup recommendation. i.e. a salt provisioned vagrant VM that pulls git repos based on pillar data. I’ve spent some time working on that and I just happened to wonder if anybody else did that.
20:26 RabidCicada oh...ok.  so I really will need to write both to have a complete solution then?  I found something online earlier I was going to try to link...but ...having trouble finding it again
20:28 renoirb something like that http://www.ryanwalder.com/vagrant-saltstack-development/
20:28 timoguin renoirb: That's what I do, but I clone all my formulas locally so the VM doesn't have to bother with calling out to git
20:28 timoguin err, to a remote git repo I mean
20:28 timoguin That way I can work on formulas and test them locally too
20:29 renoirb yeah timoguin, that’s what I do. But imagine you want somebody else to work on your states.
20:29 renoirb You could salt your local salt dev workspace too
20:29 anotherZero joined #salt
20:30 timoguin Yea I have a master repo with a Vagrantfile and a script that will init all the formulas.
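[timoguin's local-clone setup maps onto master config roughly like this; the formula paths are assumptions, but gitfs does accept file:// URLs for locally mounted repos:]

```yaml
fileserver_backend:
  - git
gitfs_remotes:
  - file:///srv/formulas/aptly-formula
  - file:///srv/formulas/salt-formula
```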
20:30 neogenix rsimpkins: we worked around the schedule one by just adding the schedule to the minion file, in lieu of waiting for schedule.*
20:31 neogenix rsimpkins: multimaster causes some oddness too though, but that's underway (being worked on)
20:31 neogenix rsimpkins: I believe 2015.5.1 is due shortly, and then after that .2 in a few weeks, which should resolve a lot of these issues.
20:32 rsimpkins neogenix: Awesome. Thank you for the suggestion. I'll go that route.
20:32 big_area joined #salt
20:34 RabidCicada http://salt.readthedocs.org/en/latest/topics/development/modular_systems.html#state-modules....  What are the rules they are referring to?  Documented anywhere?..or just copy other code?
20:35 ajw0100 joined #salt
20:37 murrdoc wrong documentation
20:37 RabidCicada oi?...where's the correct documentation?
20:40 murrdoc http://docs.saltstack.com/en/latest/topics/development/index.html
20:40 timoguin Also here: http://docs.saltstack.com/en/latest/ref/states/writing.html#example-state-module
20:40 murrdoc +1
20:41 dendazen Can someone help me with this problem
20:41 dendazen https://gist.github.com/anonymous/779db2df69195b105f64
20:42 RabidCicada Thanks murrdoc and timoguin!  Very useful.  Data deluge!
20:43 Nazca joined #salt
20:43 Gareth rsimpkins: I have a fix in the works for that.  just need to get the unit tests passing again so it can be merged.
20:46 Gareth rsimpkins: ah. and you found the PR :)
20:48 c10b10 joined #salt
20:50 klj joined #salt
20:50 soren_ joined #salt
20:51 klj left #salt
20:52 catpig joined #salt
20:52 rsimpkins Gareth: Thanks for working to make that bit more reliable. I'm having to do a lot of experimentation to tease out how the scheduler works. The docs are often vague.
20:52 B3_ joined #salt
20:53 baweaver joined #salt
20:54 baweaver joined #salt
20:55 dopesong joined #salt
20:56 rsimpkins dendazen: It looks like 'operations' is a conflicting ID.
20:56 dopesong_ joined #salt
20:57 rsimpkins dendazen: You might want to call the group config operations_group and use name: operations.
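[The anonymous gist is gone, but the fix rsimpkins describes looks like this in SLS form — rename the conflicting state ID and keep the real resource name in `- name:` (the user/group pairing here is an assumed illustration):]

```yaml
operations_group:
  group.present:
    - name: operations

operations:
  user.present:
    - groups:
      - operations
```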
20:57 nesv joined #salt
20:58 XenophonF joined #salt
20:59 jimklo joined #salt
20:59 XenophonF is there any documentation regarding minion globbing?
21:00 XenophonF i'm not sure where to start looking in the source code
21:00 rsimpkins XenophonF: You mean, beyond this? http://docs.saltstack.com/en/latest/topics/targeting/globbing.html
21:01 XenophonF yes, rsimpkins
21:01 KyleG1 joined #salt
21:01 XenophonF i have some matches that look like https://github.com/irtnog/salt-pillar-example/blob/master/top.sls#L16
21:02 XenophonF but they don't seem to be working like i expect
21:03 dendazen rsimpkins, that was it. Thanks.
21:04 Berty__ joined #salt
21:06 forrest APIError: 404 Client Error: Not Found ("client and server don't have same version (client : 1.18, server: 1.17)") Sigh
21:06 sine_nitore joined #salt
21:07 forrest version is specified as a string: https://docker-py.readthedocs.org/en/latest/api/
21:07 forrest and should just be auto
21:07 forrest going to be sad if I see 0.17 hard coded.
21:08 giantlock joined #salt
21:08 jalaziz joined #salt
21:10 TaiSHi joehh: how frequent are they? Just curious
21:10 nesv Has anyone been getting HTTP 422 errors when trying to create a new VM in Digitalocean, with salt-cloud?
21:10 badon joined #salt
21:10 nesv This is the error: {"id":"unprocessable_entity","message":"You specified an invalid image for Droplet creation."}
21:11 nesv I wrote something myself, using libcloud, and specifying the same image ID yields no errors. Also, running `salt-cloud --list-images=digitalocean` shows the image ID, which I copy and paste into my cloud profile.
21:11 TyrfingMjolnir joined #salt
21:11 inad922 joined #salt
21:12 TaiSHi nesv: care to share you configs? Mine's working perfectly
21:12 nesv TaiSHi: Sure...give me a moment, whilst I cleanse some configs. :)
21:13 bhosmer_ joined #salt
21:13 TaiSHi No rush, I have a few hours till I go home
21:13 desmoullins joined #salt
21:13 sergutie_ joined #salt
21:16 desmoullins hi guys, i'm sorry to annoy you, but i'm looking for some details about the hardware requirements for saltstack and i haven't found them on the website. Could you help me ?
21:16 nesv TaiSHi: https://gist.github.com/nesv/b521360925ba592ec934
21:17 nesv TaiSHi: I should also mention, I'm using salt 2015.5.0
21:17 TaiSHi So am I
21:17 nesv ...from the PPA
21:17 TaiSHi How many servers do you have ?
21:17 nesv TaiSHi: Okay, I just wanted to make sure. :)
21:17 sporkd2 joined #salt
21:18 sporkd2 Hey everyone, I'm trying to figure out how to include a long wget command in cmd.run name: but it has two sets of double quotes.. any suggestions?
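[One common answer to sporkd2's quoting problem: a YAML folded block scalar, so the inner double quotes never collide with YAML's own quoting (the URL and headers below are made up):]

```yaml
download_artifact:
  cmd.run:
    - name: >-
        wget --header="Accept: application/json"
        --header="X-Auth: some-token"
        -O /tmp/artifact https://example.com/artifact
    - creates: /tmp/artifact
```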
21:18 TaiSHi Give me a couple mins, nature calls
21:18 daschatten joined #salt
21:18 desmoullins thx, i've about 50 server
21:18 nesv TaiSHi: Maybe about, two dozen, but that is the config I'm using; I just created a new image from a droplet snapshot, and wanted to test it out to make sure it works.
21:19 hal58th joined #salt
21:20 iggy murrdoc: forrest: if I had something to do with that file being that way, I don't remember what it was (I don't think I did)
21:21 forrest iggy: okay
21:21 forrest maybe I did that
21:21 forrest because the key stuff wasn't in originally, I don't remember.
21:21 forrest I assume it works for you though right murrdoc?
19:22 primechuck Did something change in 2015 where something like grains.append wouldn't immediately be set on the minion?  Upgrading to 2015 appears to have brought out a race condition between setting and using a grain.
21:23 iggy it looks correct (if not aptly:secure, don't bother doing the gpg signing/verifying)
21:23 murrdoc yes
21:24 iggy unfortunately, I force pushed over the original changes I did, so I don't really have any idea what of that formula is my fault and what is forrest's fault ;)
21:25 forrest yeah I might have just written that originally when I didn't have the key stuff working
21:25 murrdoc as a force pusher
21:25 murrdoc how do u feel about yourself
21:25 murrdoc as a human
21:25 TaiSHi nesv: I'm listing the images and I see what you mean
21:26 baweaver_ joined #salt
21:26 alexanderilyin joined #salt
21:26 nesv TaiSHi: Yeah? Am I going crazy? :/
21:27 TaiSHi You're not, but I'm using this: ubuntu_512MB_ny2
21:27 soren_ joined #salt
21:27 TaiSHi That's 'my' name, sec
21:27 catpig joined #salt
21:27 nesv TaiSHi: Ah, are you using the "name" field from the salt-cloud --list-images=... output?
21:27 hal58th Hey all, I tried looking in issues to no avail, but has anyone had problems with highstate not refreshing pillar data?
21:27 TaiSHi Ah! You have to use the slug, not the ID!
21:27 jhauser joined #salt
21:27 nesv TaiSHi: Because, in previous uses, it's always been the image ID, or whatever the YAML block's name was.
21:28 nesv Damn.
21:28 iggy I blame forrest for that, he had it in his repo first, then moved it into saltstack-formulas in a weird way
21:28 forrest in a weird way??
21:28 TaiSHi nesv: http://dpaste.com/1TMD88J
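[TaiSHi's dpaste has since expired; the shape of a working DigitalOcean v2 profile using an image *slug* rather than the numeric ID is roughly this (all values are assumptions):]

```yaml
do-ubuntu-512:
  provider: do
  image: ubuntu-14-04-x64
  size: 512mb
  location: nyc2
  private_networking: True
```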
21:29 beneggett joined #salt
21:29 baweaver joined #salt
21:30 nesv TaiSHi: In my case, I'm trying to use a snapshot I created from a droplet. :/
21:31 nesv TaiSHi: ...and in the --list-images output, the "slug" field of the image I would like to use is `None`
21:31 hal58th Can someone test something for me on 2015.5.0? Change a pillar value, run highstate and then do a "pillar.get" for that item. I think highstate no longer refreshes pillar data as it should. Make sure to do pillar.get and not pillar.item.
21:31 catpig joined #salt
21:32 TaiSHi Hmm nesv
21:32 iggy hal58th: there's a bug open about inconsistencies between pillar.get and pillar.item
21:32 hal58th actually, just found out it's something with the difference between pillar.get and pillar.item
21:32 hal58th yeah, guess I will change everywhere to pillar.item
21:33 murrdoc waaaaaa
21:33 iggy pillar.item doesn't support foo:bar syntax :(
21:33 murrdoc whats the difference
21:33 murrdoc that made u want to switch
21:33 baweaver joined #salt
21:34 TaiSHi nesv: don't you have to use caps on size?
21:34 nesv TaiSHi: I haven't had to before. :/
21:34 nesv TaiSHi: I mostly built my initial set of cloud configs from the examples in the documentation.
21:35 nesv At the time, the size suffixes weren't capitalized.
21:35 TaiSHi API v1 or v2?
21:35 iggy hal58th: https://github.com/saltstack/salt/issues/23391
21:35 nesv TaiSHi: Should be v2
21:36 TaiSHi Download latest digital_ocean_v2.py from 2015.5
21:37 TaiSHi Should be located at salt/cloud/clouds/
21:37 TaiSHi We did some modifications to it lately
21:38 nesv TaiSHi: Alright, I'll check that out. Thank you. :)
21:39 TaiSHi np, I'm around if I can be of any help
21:39 nesv TaiSHi: Thanks a bunch. :)
21:39 TaiSHi Oh, when I asked about images, old apiv2 had an issue listing vms past page 1
21:39 TaiSHi Which is 25<
21:39 nesv TaiSHi: So, just out of generic curiosity, why does salt-cloud create so many connections, even for one VM?
21:39 hal58th_1 iggy, I would still think that highstate not doing a pillar_refresh is a bug. No?
21:40 murrdoc it used to
21:40 TaiSHi nesv: one per page
21:40 nesv TaiSHi: Ah, okay
21:40 TaiSHi (paging is different on the site -50 vms- than on the API -25-)
21:41 * nesv nods
21:42 iggy hal58th_1: yeah, there's mention of that in there somewhere (iirc)
21:42 hal58th_1 I see no highstate in this particular bug
21:45 hal58th_1 I am writing it up
21:47 Nazca__ joined #salt
21:51 neogenix_ joined #salt
21:53 XenophonF so it looks like the minion globbing code uses https://docs.python.org/2/library/fnmatch.html#module-fnmatch
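[Since the glob matcher is fnmatch, XenophonF's wrong-domain mistake is easy to reproduce: the pattern is compared against the whole minion ID, so a bad suffix silently matches nothing.]

```python
from fnmatch import fnmatch

# Salt's glob targeting compares the entire minion ID against the pattern.
assert fnmatch("web01.example.com", "web*.example.com")
assert not fnmatch("web01.example.org", "web*.example.com")  # wrong domain: no match
assert fnmatch("web1.example.com", "web?.example.com")       # ? matches one char
assert fnmatch("web1.example.com", "web[12].example.com")    # character class
```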
21:54 XenophonF oh my god, i'm an idiot
21:54 XenophonF i managed to put the wrong domain name on all the minion IDs in pillar/top.sls
21:54 XenophonF no wonder nothing matches
21:55 Gareth rsimpkins: I wrote the remote execution and state modules for the schedule, anything lacking in documentation, let me know or put in a PR.  There are likely *alot* of the features that aren't documented or aren't documented enough :)
21:55 XenophonF yeah, it's working now that the minion names are right
21:55 onmeac joined #salt
21:56 keimlink joined #salt
21:56 Berty_ joined #salt
21:57 onmeac Hello: in a multi master environment, 'salt \* grains.item master', each minion will return but 1 ip address, is there a way to make it return all configured masters?
21:58 iggy onmeac: config.get master
21:58 neogenix_ for the gpg renderer, do you need to update the renderer line on the minion, or the master, or both, or either?
21:59 iggy neogenix_: #!jinja|gpg (at the top... I think)
22:00 murrdoc minion
22:00 murrdoc and master
22:01 murrdoc states and pillars respectively
22:01 mike25de joined #salt
22:01 aw110f joined #salt
22:01 aw110f Hi is it possible to list jobs that ran on the minion ?
22:01 neogenix iggy: ah, yeah, I saw those, but there's also a note about setting it globally using the renderer(s) line.
22:01 neogenix murrdoc: ah, do I need the keys on the master, or the minion, or both?
22:02 onmeac iggy: salt \* config.get master only returns one mater ip per minion
22:03 murrdoc keys on master
22:03 jalaziz joined #salt
22:03 iggy onmeac: weird, is the multi-master thing a new addition to your setup? (i.e. have you restarted the minions since you made that change)
22:03 murrdoc if u are working with pillars
22:05 neogenix murrdoc: cool! thanks!
22:05 onmeac Yes i have restarted the minions and masters to be sure. if i do 'salt-call grains.items' i get a master entry with, in this case, 2 ip addresses as there are 2 masters in existence atm
22:05 mrbigglesworth joined #salt
22:08 alexanderilyin joined #salt
22:10 Berty__ joined #salt
22:11 aw110f Found it #cache_jobs: False
22:14 aw110f does the minion clean up jobs if cache_jobs is enabled?
22:14 onmeac just tried some other grain: salt \* config.get pythonpath, this does return a list of values, weird it doesnt do that for master
22:17 onmeac 'salt \* grains.items' shows 1 entry under master, salt-call grains.items shows 2 entries of IP addresses...
22:19 cowpunk21 joined #salt
22:21 baweaver joined #salt
22:21 onmeac bug report? :)
22:22 debian112 left #salt
22:24 hal58th_ joined #salt
22:24 hal58th_2 joined #salt
22:27 hal58th joined #salt
22:28 mosen joined #salt
22:28 cruatta joined #salt
22:29 hal58th__ joined #salt
22:32 nethershaw joined #salt
22:33 primechuck joined #salt
22:38 primechuck joined #salt
22:46 nethershaw joined #salt
22:51 baweaver joined #salt
22:53 wt joined #salt
22:54 sergutie_ joined #salt
22:55 mrbigglesworth joined #salt
22:57 bfoxwell joined #salt
22:59 jhujhiti does anyone know what's going on with the freebsd port for salt 2015.5?
22:59 murrdoc freebsd man
22:59 murrdoc who knows
23:00 subsignal joined #salt
23:01 jhujhiti meh
23:01 mosen hi murrdoc
23:01 jhujhiti i need an equivalent to salt.modules.file.dirname() for 2014.7.5
23:01 murrdoc mosen:  o/
23:03 Zachary_DuBois joined #salt
23:07 smcquay_ joined #salt
23:09 baweaver_ joined #salt
23:10 KyleG joined #salt
23:10 KyleG joined #salt
23:11 smcquay joined #salt
23:14 baweaver joined #salt
23:14 otter768 joined #salt
23:15 baweaver joined #salt
23:20 thayne joined #salt
23:20 Cidan joined #salt
23:21 Cidan joined #salt
23:23 markm joined #salt
23:23 mgw joined #salt
23:24 mgw S3 provides an MD5 sum as the 'ETag' header... has anyone found a way to leverage this with file.managed?
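[file.managed doesn't read the ETag header itself, but if the hash is fetched out-of-band it slots into `source_hash` — with the caveat that S3's ETag only equals the MD5 for non-multipart uploads. A sketch with made-up bucket and hash values:]

```yaml
/opt/app/payload.tar.gz:
  file.managed:
    - source: https://example-bucket.s3.amazonaws.com/payload.tar.gz
    - source_hash: md5=9e107d9d372bb6826bd81d3542a419d6
```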
23:28 XenophonF jhujhiti: are you running into problems?
23:29 jhujhiti XenophonF: just lamenting the fact that i'm stuck on an old version until the freebsd port maintainer wakes up
23:29 XenophonF oh yeah it's on 2014.7.5
23:30 jhujhiti i think i figured out that i can get equivalent dirname functionality with a jinja macro, i'm working on that now
23:30 murrdoc or
23:30 murrdoc u could copy the function from 2015.5 into a file.py in _modules
23:30 murrdoc and u could have the functionality
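[A sketch of murrdoc's backport suggestion: on 2015.5, salt.modules.file.dirname is essentially a thin wrapper around os.path.dirname, so a custom module dropped into _modules/ can carry it on 2014.7.5:]

```python
# _modules/file.py (or a custom module name to avoid shadowing the
# built-in file module) -- backported dirname for 2014.7 minions.
import os.path

def dirname(path):
    """Return the directory component of a path."""
    return os.path.dirname(path)
```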
23:30 jhujhiti i'd rather not
23:32 XenophonF jhujhiti: have you checked freebsd bugzilla for an update
23:37 XenophonF jhujhiti: see https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=200044
23:39 lude joined #salt
23:39 jhujhiti huh, i did but i didn't see that
23:39 mrbigglesworth joined #salt
23:41 jhujhiti well, it's being worked, i'm satisfied
23:42 jhujhiti sorry for the noise, apparently i need irc to drive google for me today
23:44 keimlink_ joined #salt
23:47 writtenoff joined #salt
23:52 XenophonF no worries!
23:53 fxhp joined #salt
23:55 jalaziz joined #salt
23:57 stanchan joined #salt
