
IRC log for #salt, 2014-07-01


All times shown according to UTC.

Time Nick Message
00:00 TaiSHi Well fixed my sync issue
00:00 TaiSHi Food time brb
00:00 TaiSHi I'm still calling all salt admins for opinions on cloud distributed filesystems
00:00 KyleG ZFS all the things
00:00 forrest zfs all the things, if your OS supports it, and it actually works...
00:01 jeddi joined #salt
00:01 to_json joined #salt
00:01 KyleG For dedicated backend storage, your OS choices are Illumos (Nexenta, SmartOS, OpenIndiana) or FreeBSD, or ZFS On Linux
00:01 forrest :(
00:01 zach Or two tin cans and some yarn
00:01 KyleG otherwise, ZFS has SMB/NFS and iSCSI
00:02 zach Why not use CEPH, KyleG ?
00:02 zach Because of the affiliation with EVIL?
00:02 KyleG zach: My application requires NFS right now
00:02 KyleG That's why
00:02 KyleG and I don't want some FUSE layer to serve NFS with Ceph
00:02 KyleG It's just janky
00:02 dvogt joined #salt
00:02 zach That is janky
00:03 KyleG I used to work @ Dreamhost and I've talked to some of my former co-workers involved with ceph about it
00:03 KyleG and basically it does not fit my needs currently with my app the way it is
00:04 dvogt joined #salt
00:05 TaiSHi Sorry I didn't explain my background
00:06 TaiSHi We do use ceph at work and it's marvelous
00:06 TaiSHi But my current infrastructure-to-be has a few webservers and if load is high, more are fired up
00:07 TaiSHi First thing first, current webservers must have same data at all times, and rsync is ... well... I don't like it as a solution for this
00:07 TaiSHi Gluster IS nice and IS performant but adding new hosts (and deleting them afterwards) is a pain
00:07 KyleG joined #salt
00:07 KyleG joined #salt
00:08 manfred KyleG: http://freenode.net/sasl/
00:08 manfred set that up so you have your cloak before joining channels
00:08 KyleG1 joined #salt
00:08 KyleG1 I am having networking issues…fw machine crapping out. -_-
00:08 manfred KyleG1: http://freenode.net/sasl/
00:08 manfred set that up so you have your cloak before joining channels
00:09 KyleG1 thanks manfred
00:09 TaiSHi Oh, I have pending my SASL config...
00:09 TaiSHi well, my bnc has been down for 21 hours so idc
00:09 TaiSHi KyleG1: did you read what I said earlier on?
00:10 KyleG1 "We do use ceph at work and it's marvelous"
00:10 elfixit joined #salt
00:10 KyleG1 is the last I saw
00:10 TaiSHi Ah, I have VM1 and VM2
00:10 TaiSHi On some situations VM3 is fired up
00:10 TaiSHi I need data to be consistent between all 3
00:11 TaiSHi Gluster can do it, but current version doesn't support removing bricks on a clean manner
00:11 TaiSHi which reminds me why I fired up these 3 instances
00:11 KyleG1 I like ZFS because I can consolidate space and power, I have 2x 2U headnodes and 2x 4U 60 Bay Top Loader JBODs
00:12 KyleG1 Power is very precious in the datacenter for us
00:12 TaiSHi But that's really an overkill for me, I'm using really small DO instances
00:13 TaiSHi I've read on tahoe-lafs but it seems to be... not so fast as gluster
00:17 manfred joehh: can you add libsodium to the salt ppa?
00:17 manfred currently compiling it manually in my raet.sh salt bootstrap wrapper script
00:20 TaiSHi ohh is joehh the maintainer?
00:23 rallytime joined #salt
00:25 aw110f joined #salt
00:26 delucks_ joined #salt
00:28 delucks_ left #salt
00:30 TaiSHi KyleG1: any suggestions ? I'm at the edge of my capabilities here :P
00:32 KyleG1 what are you lookin' for exactly?
00:32 forrest WHY YOU ASK GLUSTER QUESTION IN SALT IRC, WHY YOU DO THIS?? :P
00:32 manfred forrest: i am trying to decide which file manager to use, got any suggestions?
00:32 * manfred hides
00:33 forrest manfred, nope
00:34 TaiSHi I'm asking here because I plan to use salt to manage it
00:34 TaiSHi I have become a saltddict
00:34 * TaiSHi executes a couple commands on salt
00:34 TaiSHi Ok back
00:35 TaiSHi KyleG1: Some FS to keep my files replicated among servers, and the ability to add or delete a node
00:35 ajolo joined #salt
00:37 KyleG1 hrm, i mean there's always good ol' rsync
00:37 KyleG1 not being a smartass
00:37 KyleG1 I handle replication for a 70 TB dataset entirely with rsync
00:37 TaiSHi Yeah rsync was the first idea
00:37 nahamu joined #salt
00:37 manfred guys, can you take this to a pm, or another channel
00:37 TaiSHi But with a setup I've been working on it could take up to 2 minutes for a file to appear on a server
00:37 TaiSHi Yeah, sorry manfred
00:37 forrest and here we see the wild KyleG1 as he plays with fire day in and day out, will he one day be burned? Tune in next time to 'weekend oh shit theater' to find out.
00:38 TaiSHi KyleG1: mind if I PM you ?
00:38 TaiSHi lol
00:38 forrest manfred, shouldn't you be at home?
00:38 KyleG1 TaiSHi sure
00:38 manfred i am home
00:38 forrest it's like 8 in texas?
00:38 forrest ahh ok
00:38 manfred yah
00:38 forrest I didn't see you leave and rejoin
00:38 manfred forrest: znc :P
00:39 forrest manfred, pssh, no thanks
00:39 forrest then I'd always be here
00:39 TaiSHi Speaking of znc!
00:39 TaiSHi Tears have been brought to my eyes
00:39 manfred forrest: i am working on a wrapper script to install and setup the minions with raet
00:39 forrest let's not discuss yet another non-salt related topic in here today
00:39 TaiSHi My VM is back! 22 hours of downtime
00:39 forrest my heart can only take so much
00:39 forrest manfred, nice
00:40 TaiSHi_ joined #salt
00:40 TyrfingMjolnir joined #salt
00:41 manfred and now my desk is sticky because beer
00:41 * TaiSHi hugs TaiSHi
00:41 TaiSHi brb
00:42 forrest alright I'm outta here, have a good one
00:42 TaiSHi Yes!
00:42 TaiSHi night for
00:42 TaiSHi joined #salt
00:44 joehh manfred: will do so this week
00:44 manfred thanks!
00:45 KyleG joined #salt
00:45 KyleG joined #salt
00:45 manfred this is what I am currently at, and i think it is about to work http://ix.io/ddd
00:46 manfred ugh, forgot to pip install -r, pip -r doesn't work obviously
00:46 KyleG1 joined #salt
00:48 TaiSHi joehh: any chance of having dev/nightly/weekly build in ppa ?
00:48 TaiSHi Just asking
00:49 manfred TaiSHi: there was talk of putting together a whole build environment for building those for all distros... no idea if it is going to happen
00:49 TaiSHi ty manfred
00:49 manfred http://openbuildservice.org/
00:49 mosen minion packages?
00:49 joehh TaiSHi: I'm 70% of the way through doing that for ubuntu and debian
00:50 TaiSHi joehh: all my love is on you atm
00:50 manfred joehh: run
00:50 joehh just working through how I want the dev packages to differ from the "release" packages
00:50 joehh I am!
00:52 manfred success
00:52 manfred http://ix.io/dde
00:55 TaiSHi You know what, I'm fed up with goddamned distributed FS
00:55 * TaiSHi grabs rsync's hair
00:55 TaiSHi That and expensive 404s
01:02 bdf joined #salt
01:07 malinoff joined #salt
01:07 TaiSHi Have you ever wanted to bind 'exit' to something else?
01:08 bdf joined #salt
01:15 DaveQB joined #salt
01:16 bhosmer joined #salt
01:18 Luke_ joined #salt
01:28 Hipikat joined #salt
01:28 darrend joined #salt
01:29 rhand joined #salt
01:29 goodwill joined #salt
01:29 crazysim joined #salt
01:29 dstokes joined #salt
01:29 individuwill joined #salt
01:30 dvogt joined #salt
01:31 TaiSHi How would I specify in top.sls for nodes 3 to X ?
01:32 Luke_ joined #salt
01:32 manfred to x what?
01:33 manfred you want to say between 3 and x number nodes do something?
01:33 TaiSHi 3 to infinite and beyond
01:34 manfred you can't
01:34 TaiSHi Oh :(
01:34 manfred you can say all of the minions that match this name, or match this grain, or match this pillar... but you can't say there have to be at least 3
01:36 TaiSHi It's either "this name" or "th*"
01:36 TaiSHi I thought I could play with some regex
01:36 manfred you can
01:36 manfred you can do all that
01:37 manfred but you can't say 3 to X, you can say all X that meet this requirement
01:37 gzcwnk I am trying to run a small sls to remove some packages, I run it with salt vuwunicopatch02 state.sls security.sls but I get a failure, any ideas why pls  http://pastebin.com/4fzrmzVJ
01:37 manfred TaiSHi: http://docs.saltstack.com/en/latest/topics/targeting/
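A hedged top.sls sketch of the matchers that doc covers, with made-up minion ids: a plain glob plus a PCRE name match (e.g. everything named web3 and up), which matches on names rather than on a count of minions, as manfred notes.

base:
  'web*':                       # glob on the minion id
    - webserver
  'web([3-9]|[1-9][0-9]+)':     # PCRE: ids web3 and beyond
    - match: pcre
    - autoscaled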
01:37 schimmy joined #salt
01:37 TaiSHi manfred: thanks for pointing it, gonna check
01:37 manfred gzcwnk: where is the security.sls file located?
01:37 malinoff gzcwnk, you shouldn't put '.sls' in the end of your state file
01:37 malinoff just write 'security'
01:38 malinoff when running via the cli, of course
01:38 manfred oh that too
01:38 malinoff the file name must end with .sls
01:38 gzcwnk its in /srv/salt
01:38 TaiSHi manfred: love you, found exactly what I was looking for
01:38 gzcwnk ah ok, lemme try, thanks
01:38 ajolo_ joined #salt
01:39 manfred gzcwnk: yeah just drop the .sls from the file name when referencing it on the command line or in top.sls files
01:39 gzcwnk yep that was it, thanks
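A minimal sketch of the fix above, assuming security.sls sits in /srv/salt/ and using the minion name from gzcwnk's paste; the .sls suffix stays on the file but is dropped wherever the state is referenced.

# from the master's command line:
#   salt vuwunicopatch02 state.sls security
# and the same name, without .sls, in /srv/salt/top.sls:
base:
  'vuwunicopatch02':
    - security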
01:39 manfred TaiSHi: ok, raet is working better than it did before, it isn't filling up my system with yard.uxd files, but it doesn't work for passing stuff around yet, like publish.publish or mine.get
01:39 manfred https://github.com/saltstack/salt/issues/13859
01:40 malinoff manfred, are you using raet already?
01:40 TaiSHi manfred: great, I almost deploy it on my master today lol
01:40 schimmy1 joined #salt
01:40 manfred malinoff: i keep spinning it up and down to see how it is going
01:40 manfred malinoff: i am using this as my salt-cloud deploy script on ubuntu. http://ix.io/ddf
01:41 manfred events work etc, but publish.publish (using raet_publish.py) and mine don't work on my gluster states
01:42 malinoff manfred, you're a pretty desperate person, aren't you? :)
01:45 manfred malinoff: no?
01:45 manfred malinoff: i just wanted to see how it was going...
01:45 malinoff manfred, ah, so you don't really use it, just testing?
01:45 manfred ... yeah...
01:45 malinoff Alright. Afaik it is not even in beta
01:45 manfred it is still alpha
01:45 manfred it was barely two weeks ago when they released 0.0.21, and they are already tagged up to 0.0.31
01:45 malinoff I don't know much about raet internals, does it provide message persistency? Reliability?
01:45 malinoff i can't imagine how this can be achieved with udp without reinventing tcp
01:45 manfred malinoff: http://www.slideshare.net/SaltStack/salt-air-19-saltstack-raet
01:46 malinoff transactions, huh
01:48 APLU joined #salt
01:49 logix812 joined #salt
01:49 malinoff I'm managing servers in 5500 miles away from me, can't really imagine how that would work. Anyway, let's see
01:50 gzcwnk How would I check the permissions fo a list of files? I tried this, http://pastebin.com/JLWyTBKt but it errored
01:50 malinoff I hope they will provide some performance benchmarks in comparison with rabbitmq/activemq
01:51 mateoconfeugo joined #salt
01:52 whiteinge malinoff: count on it. though that's a good year away, i'd guess
01:54 malinoff whiteinge, i hope test cases will also be exposed. It would be very interesting to see if a messaging library written in python can be faster than another one written in erlang with the same (or almost the same) settings
01:55 * whiteinge nods
01:55 gfa joined #salt
01:56 whiteinge i haven't watched that vid yet so i'm not sure of all they said, but some of the techniques behind raet (not raet itself, which is a recent reimplementation) have been in use with autonomous submarines for ~15 years or so
01:56 manfred malinoff: the plan was to rewrite raet in c once it is stabilized iirc
01:57 ingwaem joined #salt
01:57 gzcwnk how would I check the state of more than one file in a sls file?
01:58 gzcwnk i am trying to make sure /etc/passwd and /etc/group have the right ownership
01:59 ingwaem gzcwnk, have a chunk that checks both files in the same file...so you would have file1: and supporting yaml, then have a file2:
01:59 gzcwnk uh
01:59 malinoff whiteinge, well, that's cool, of course. But does salt really need these techniques? I've used salt for almost a year for many different things (even for very complex dynamic deployments), and the one thing I'm really missing is not speed, but message persistence and reliability
01:59 gzcwnk i tried this but it doesn't like the second one, http://pastebin.com/JLWyTBKt
02:00 malinoff whiteinge, also, custom cryptography implementation is horrible. If raet will resolve these problems, i guess it's ok
02:00 manfred gzcwnk: tab out file.managed in the second one
02:00 malinoff manfred, so right now it is just a poc?
02:01 manfred malinoff: hrm? i don't think so... i would say it is more like the python libraries in MySQL-python, without the _mysql compiled C code behind it yet...
02:02 ingwaem hey whiteinge, mind if I pm please?
02:02 gzcwnk thanks...pedantic little sucker salt
02:02 whiteinge malinoff: raet was started specifically to address those three things (among others)
02:02 whiteinge ingwaem: shoot
02:02 manfred gzcwnk: it should have thrown an error that it wasn't a valid state. :P
02:03 gzcwnk it threw an error but it was double dutch, written by a blind chinese guy
02:03 ingwaem joined #salt
02:04 manfred heh
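A rough sketch of the layout manfred is describing for gzcwnk's paste (the modes and the replace flag are illustrative): each file gets its own state id, with file.managed indented one level beneath it, so one sls manages the ownership of both files.

/etc/passwd:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - replace: False

/etc/group:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - replace: False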
02:05 ingwaem joined #salt
02:05 ingwaem joined #salt
02:17 jaimed joined #salt
02:20 napper joined #salt
02:21 oz_akan_ joined #salt
02:28 ramishra joined #salt
02:31 l0x3py joined #salt
02:31 VictorLin joined #salt
02:35 TaiSHi manfred: you using gluster? *grins*
02:36 manfred TaiSHi: not in production
02:36 TaiSHi Oh :(
02:36 manfred i just have a bunch of gluster states for testing overstates and stuff
02:36 TaiSHi Got any documentation on your progress?
02:37 manfred uhh, i have the documentation one of my friends wrote up when we began adding support for it at work
02:37 manfred http://training.unsupported.me/gluster/
02:38 * TaiSHi reads
02:38 ajolo_ joined #salt
02:42 mateoconfeugo joined #salt
02:49 smcquay joined #salt
02:53 ipalreadytaken joined #salt
02:56 toastedpenguin joined #salt
03:02 jaimed joined #salt
03:04 beneggett joined #salt
03:04 tinuva joined #salt
03:09 kedo39 joined #salt
03:14 nliadm joined #salt
03:15 beneggett Anyone have any good examples of getting started with salt for a ruby on rails app?
03:18 TheThing joined #salt
03:19 ingwaem beneggett: depends on what you mean by “get started” :)
03:19 dvogt joined #salt
03:19 beneggett ingwaem: anything at all ;), code samples, docs, etc.
03:19 ingwaem first and foremost you would want to install salt master and a minion, and figure out how they talk together, and what you can do with them
03:19 beneggett ingwaem: Just diving into salt today, but have a background with chef/capistrano
03:19 malinoff beneggett, http://docs.saltstack.com/en/latest/
03:20 ingwaem once you do that, it will open up your mind to a billion new possibilities
03:20 beneggett Yes, I've got salt master slave setup, ran through a basic tutorial
03:20 beneggett or minion rather ;)
03:20 malinoff !modules
03:20 malinoff hm
03:20 beneggett I guess what I'm really trying to decide is do i still utilize something like capistrano, or just use salt to automate it all..
03:21 beneggett or use salt to set it up and cap for deployments, etc.
03:21 malinoff beneggett, here you can find all available modules: http://docs.saltstack.com/en/latest/ref/modules/all/index.html
03:21 ingwaem cool…next step if you’re going to use it as part of an app would be to setup the api…the api allows you to connect to salt and command it, and get results
03:21 malinoff and here are the states: http://docs.saltstack.com/en/latest/ref/states/all/index.html
03:21 quickdry21 joined #salt
03:22 beneggett malinoff: beautiful resources to keep close
03:22 ingwaem beneggett: malinoff has some great examples there…they are all the things you will use while utilizing salt…the bridge between salt commandline and your minions is the api, and with the combination of all the documents provided you should have an awesome toolset available
03:23 Novtopro joined #salt
03:23 ingwaem meant to say between salt command and your app :)
03:23 Novtopro left #salt
03:23 malinoff beneggett, also i'd suggest to completely move away from capistrano, because using more than one infrastructure management tool is a pain
03:24 malinoff Of course, you shouldn't just drop capsitrano
03:24 beneggett ingwaem: yes, yes. I'm seeing that
03:24 malinoff but don't use salt just to run capistrano's recipes
03:25 beneggett malinoff: are you suggesting utilize salt to do the rails deployments, etc?
03:25 malinoff beneggett, yes
03:25 malinoff for everything
03:25 ingwaem beneggett: salt can do anything
03:26 beneggett yes, I'm seeing that. any rails buildpack type modules out there?
03:26 ingwaem it’s not just orchestration, it’s not just config management, it’s not just file management/serving
03:26 beneggett or just hand craft it?
03:26 ingwaem I hand crafted my solutions using php, curl and the api
03:26 malinoff That's why I like to name it as 'infrastructure manager'
03:27 UtahDave joined #salt
03:28 ingwaem great name :)
03:30 beneggett so really, i'm writing my own SLS definitions to setup: system dependencies, server config, Application deploy, and Continuous deployment via pushes. - utilizing built in modules/states along the way
03:31 ingwaem to an extent yes…you can do a whole bunch more utilizing the communication aspect of salt, getting commands out to the right machines and getting the right data back…so you can fire off events or have a centralized administration where you orchestrate everything you need, using salt to make it all happen.
03:31 malinoff beneggett, we found that building deb/rpm packages on a per-project basis is what makes things a lot clearer
03:32 malinoff beneggett, so I build a package that contains everything that has no environment dependencies
03:32 ingwaem the fantastic thing about salt because of that is then you don’t have to write your own agents…and if you did would they be the perfect agent, would they be able to just keep on doing more and more that you throw at them, or do you have to rewrite all the time…it’s like a distributed developers framework
03:32 malinoff push it to specific repository
03:32 malinoff install it from the repository on the server side
03:32 malinoff and then configure environment dependencies with my configuration management tool
03:33 UtahDave ingwaem++
03:34 ingwaem hiya Dave :) !
03:34 malinoff e.g. for a rails app I may include everything except database configuration (or whatever you may have in environments dir) in a package
03:34 UtahDave hey!
03:34 beneggett ingwaem: thanks for the great insight
03:35 beneggett malinoff: and how do you handle db config, or sensitive information files, etc? Just have a separate salt state that transfers/manages those?
03:35 malinoff beneggett, yes
03:35 ingwaem beneggett, you came to the right place :) my knowledge of salt is limited, however I love it, and have been using it for a year now, but still so much more to learn :)
03:35 beneggett do you still do things like ENV variables?
03:35 ingwaem beneggett: you can do “ENV” variables with pillars
03:35 UtahDave beneggett: Yeah, like ingwaem said.  In fact, Google's new kubernetes docker project is built on top of Salt.
03:36 malinoff beneggett, RAILS_ENV, yes, but it is a limitation of the project
03:37 ingwaem wow really Dave? That’s awesome! I have heard of this “docker” and poked a little into it but haven’t had the bandwidth to really sit down and peel it open…cool :)
03:37 beneggett Ok cool, and is it best to manage those at grain or pillar level?
03:37 malinoff beneggett, all sensitive information should be stored in pillars
03:38 beneggett @utahdave: yes, I've played with docker quite a bit and really like it, there are some powerful things that can be done there
03:38 beneggett malinoff: gotcha
03:38 ingwaem think of grains as custom tags for your minions…so those would be global variables…don’t want password stuff in those :)
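A small sketch of the pillar split malinoff and ingwaem describe (file names, keys and values are placeholders): the sensitive bits live in pillar files that the pillar top file only hands to targeted minions, and states read them with pillar.get instead of baking them into the repo.

# /srv/pillar/top.sls
base:
  'web*':
    - railsapp

# /srv/pillar/railsapp.sls
railsapp:
  rails_env: production
  database:
    host: db1.internal
    password: s3cr3t

A template or state on the targeted minions would then pull these with {{ salt['pillar.get']('railsapp:database:password') }}.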
03:38 malinoff Unfortunately, docker is a kind of old image management
03:38 ajolo__ joined #salt
03:38 manfred beneggett: you can manage all your docker instances with the dockerio module, soon to be available in the next feature release of salt
03:38 beneggett malinoff: do you have any *non-sensitive* packages I could take a look at for inspiration?
03:39 ingwaem tags as in a tag cloud, or hashtagger
03:39 beneggett ingwaem: yeah, that makes a lot of sense
03:39 malinoff beneggett, sorry, no :( Everything is private, NDA, you know...
03:39 beneggett malinoff: np. thought I'd ask ;)
03:40 oz_akan_ joined #salt
03:40 ingwaem beneggett there are a bunch of very useful examples in the documentation though…there’s a lot to go through and it will take you some time to absorb all the possibilities
03:40 malinoff But you can take salt (since they're building system-dependent packages) and investigate their spec files
03:40 malinoff beneggett, https://github.com/saltstack/salt/tree/develop/pkg here
03:41 malinoff and https://github.com/saltstack/salt/tree/develop/debian there for debian
03:41 manfred why not just deploy your application from a git repository?
03:42 beneggett manfred: yes, that's the direction I'm leaning
03:42 beneggett I'm just getting started with Salt this week
03:42 manfred i would do it the same way you would do it manually
03:42 malinoff manfred, what if something breaks? How can i revert changes? How can i find what version of application is running on staging environment?
03:42 manfred git clone the new release on your production server, and then symlink the $DOCUMENTROOT/current to the git repo
03:42 manfred then you can always revert back
03:43 manfred just using a file.symlink
03:43 ingwaem beneggett: salt can report if a git version has changed…it will tell you the previous version and the new current version
03:43 manfred malinoff: manage your tag or version in git and manage that in pillar data, and use git.present to clone that tag
03:43 ingwaem in git hub state files you can define tags or versions and it will pull that version always…yea as manfred just said :)
03:43 manfred and that will also know which version file should symlink to
03:43 malinoff manfred, oops, I can't manage tags because the code is not mine
03:43 manfred ingwaem: i know
03:44 manfred malinoff: then do the commit number
03:44 beneggett so either tag or SHA
03:44 manfred yeah
03:44 manfred and then that would also tell you what to symlink to, so you can revert it just by changing the pillar, and then going into a highstate again
03:44 malinoff Difficult to understand the status of that 'version' when you have 100+ different projects
03:45 ingwaem example: http://pastebin.com/6eKGD2wg
03:45 manfred ingwaem: why not git.present on tags
03:45 manfred shouldn't be updating if it is the same tag
03:45 malinoff Well, I don't say you can't achieve the same with git tags. But in our situation, building packages is the most suitable approach
03:46 ingwaem ahh…I’m using a database to maintain my version numbers…I then push out the yaml on update
03:46 manfred i would also put it to something like target: /srv/github-files/something/{{ pillar['something']['tag'] }} so you can symlink to each version
03:46 manfred exactly like you should when doing a release manually
03:47 beneggett manfred: really the same way capistrano  manages your deploys, but with a hell of a lot less code work
03:47 manfred basically
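A hedged sketch of the release flow manfred outlines (the repo URL, paths and pillar key are invented): the tag or SHA is kept in pillar, every revision gets its own checkout directory, and "current" is just a symlink, so a rollback is a pillar change plus another highstate.

{% set rev = salt['pillar.get']('myapp:rev', 'master') %}

myapp-checkout:
  git.latest:
    - name: git@example.com:acme/myapp.git
    - rev: {{ rev }}
    - target: /srv/releases/myapp/{{ rev }}

/var/www/current:
  file.symlink:
    - target: /srv/releases/myapp/{{ rev }}
    - require:
      - git: myapp-checkout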
03:48 beneggett and what about shared files between minions  for logging, etc. ? or does that live at master level?
03:48 manfred that would be on the master level... right now
03:49 bchung joined #salt
03:49 manfred http://docs.saltstack.com/en/latest/topics/tutorials/minionfs.html
03:49 ingwaem manfred: normally I target the master revision, but in some cases on some builds I want to target specific versions until they’re signed off as acceptable builds … so new version would have to be signed off first before being assigned as current version
03:49 manfred that is the closest you can get to sharing files right now
03:49 manfred it has to do a cp.push to the master minion cache, and then stuff can be pulled from there by other minions
03:49 ingwaem beneggett: there’s file sharing on the minions but master can also share files, and then there’s a gitfs eya?
03:49 manfred ingwaem: there is not
03:50 manfred you can't share stuff directly between minions yet
03:50 manfred that should be possible with raet, but that isn't ready yet
03:50 ingwaem ahh ok, thanks manfred…always learning more about salt :)
03:51 manfred if you want to share files between minions, minionfs is the best way
03:51 ingwaem but there is a way of pulling files from minions to the master and then distributing yea? is that minionfs
03:51 beneggett so best to either use master, or other asset hosting then (S3, etc.)?
03:51 manfred and with that, you do a cp.push to push a file to the master, and then you can pull it down to another minion using salt://<minion>/path/to/file
03:51 manfred beneggett: depends on what you are trying to do, but probably
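A rough sketch of the minionfs flow manfred describes (minion ids and paths are made up, and it assumes file_recv is enabled on the master and the minionfs fileserver backend is turned on): the source minion pushes the file into the master's minion cache, and any other minion can then pull it down in a state.

# first, from the master:  salt 'web1' cp.push /var/log/app/shared.dat
# then, in a state applied to the other minions:
/srv/app/shared.dat:
  file.managed:
    - source: salt://web1/var/log/app/shared.dat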
03:52 beneggett you guys are awesome, this is super helpful to me
03:52 TaiSHi woosh
03:53 TaiSHi I've arrived my dear dear home
03:53 ingwaem YaY
03:53 phx joined #salt
03:53 ipalreadytaken joined #salt
03:53 manfred someone was talking about using woosh to write a search engine that would rival google ... in #python earlier, and hadn't even started learning python yet...
03:53 ingwaem yea I think next time someone wants an answer to what can salt do, the answer should be “How long is a piece of string?”
03:53 ingwaem :)
03:54 manfred http://docs.saltstack.com/en/latest/topics/tutorials/minionfs.html
03:54 manfred ...
03:54 manfred https://pypi.python.org/pypi/Whoosh/
03:55 TaiSHi I wonder what have you been up to
03:55 TaiSHi But I sincerely don't want to read lol
03:55 manfred don't
03:55 manfred it isn't worth it
03:56 manfred basically everyone was telling him to stop talking heh
03:56 napper joined #salt
03:59 djinni` joined #salt
03:59 TaiSHi manfred: I might have "solved" (note the quotes) my storage problem
03:59 manfred nice
04:00 TaiSHi It was basically an early idea, chewed a bit
04:00 TaiSHi I mean, most % of the traffic will get eaten by a cdn (images, static files and crap)
04:00 alainv joined #salt
04:00 TaiSHi And seriously, how disk intensive can php be?
04:00 manfred very
04:00 * TaiSHi may regret asking that
04:00 manfred heh
04:01 manfred TaiSHi: https://www.facebook.com/notes/facebook-engineering/the-hiphop-virtual-machine/10150415177928920
04:01 ingwaem for php to be most effective, you want to opcode cache if you can
04:01 manfred https://github.com/facebook/hhvm
04:01 TaiSHi Worry not
04:01 dstokes joined #salt
04:01 TaiSHi I -am- testing another site with (mainly) the same configuration as the ones I'm migrating
04:01 manfred or just ... use facebooks thing to compile php
04:01 TaiSHi for hhvm
04:01 TaiSHi Currently it's using opcache with a 50% performance gain
04:02 TaiSHi hhvm requires a bit more testing, but it's in my plans
04:02 manfred nice, when you get it working, tell me about it outside this channel
04:02 TaiSHi I have no idea how to test it, other than face smashing my keyboard
04:02 crazysim joined #salt
04:02 TaiSHi Sure, I was seriously thinking on dropping it on a productive site
04:02 TaiSHi You know, stealth and such
04:03 acabrera joined #salt
04:03 TaiSHi With salt I found DO to be far more effective
04:03 zartoosh joined #salt
04:03 ingwaem DO?
04:03 TaiSHi Digital Ocean
04:03 manfred digital ocean
04:04 manfred TaiSHi: meh, i work at rackspace so my opinion is biased
04:04 ingwaem ahh ok :)
04:04 TaiSHi I work for a company who (thank god) doesn't provide infrastructure anymore
04:04 TaiSHi rackspace is far more expensive :P
04:04 manfred i tried DO, and for how little you pay, it is ok, but the io I experienced was atrocious
04:04 ingwaem but RX support is second to none :)
04:05 TaiSHi I am hoping my company start selling lemons
04:05 TaiSHi Because we fail at management as well
04:05 manfred ingwaem: if you knew the managed cloud guys like i do...
04:05 manfred heh :P
04:05 TaiSHi manfred: :P
04:05 ingwaem :)
04:06 TaiSHi Did you guys do a full scale analysis on WHAT calculate management costs ?
04:06 manfred what?
04:06 TaiSHi Yeah, my CEO said that he spoke to someone who worked @ rackspace
04:06 jerrcs joined #salt
04:06 ingwaem I’ve worked with RX and still do work with RX these days :)
04:07 TaiSHi And that they did an analysis which concluded that management cost per hour should be based on memory
04:07 manfred we have solutions engineers who will talk to you about your environment and design something to fit your needs (a lot of them come talk to me since i sit next to them)
04:07 TaiSHi RX = ?
04:07 manfred rax
04:07 TaiSHi Ah
04:07 manfred https://www.google.com/search?q=RAX
04:07 TaiSHi We had a customer who moved there, iirc rx used them to pivot into LA market
04:08 manfred ingwaem: if you wanna mess with it, the nova.py salt-cloud driver is only in develop, and requires novaclient straight from git right now (for __exit__ in the httpmanager class) if you wanted to give it a shot
04:08 TaiSHi WHAT?! AGAIN?!
04:08 ingwaem sweet thanks please :)
04:08 TaiSHi [salt             ][ERROR   ] Failed sign in
04:08 manfred TaiSHi: it hates you
04:08 TaiSHi Seriously I thought I had solved that
04:09 TaiSHi I have no idea why it happens
04:09 TaiSHi It kills the minion, I have to log in or reboot the vm
04:10 TaiSHi That will kill our auto-scale script ¬¬
04:10 TaiSHi Tried debugging it with no success
04:11 herlo joined #salt
04:11 TaiSHi manfred: yay split
04:11 chuffpdx joined #salt
04:12 ingwaem wow those still happen in 2014?
04:12 TaiSHi Yeah
04:12 ingwaem amazing
04:12 ingwaem I remember those in the 90's
04:12 manfred it is just fundamentally how irc works
04:12 manfred also that freenode sucks
04:12 ingwaem indeed :) they don’t have salt binding it all together
04:14 manfred i wish i could remember where that freenode graph thing was to show the responsiveness from all the servers
04:14 v0rtex joined #salt
04:14 ronc joined #salt
04:14 TaiSHi manfred: I just fired up another instance
04:14 manfred but it has been too long since I have been in #freenode to find it...
04:15 TaiSHi Have no idea why that error comes up
04:15 JPaul joined #salt
04:15 chamunks joined #salt
04:17 yomilk joined #salt
04:18 manfred aight, I am off to bed, got my drumoff tshirt in the mail today https://represent.com/drumoff and can't wait to wear it tomorrow
04:18 manfred nite
04:18 TaiSHi nn
04:19 ingwaem gnight
04:20 mateoconfeugo joined #salt
04:21 clone1018 joined #salt
04:21 kossy joined #salt
04:21 TaiSHi god
04:21 TaiSHi this error is going to kill me
04:21 ramishra joined #salt
04:21 TaiSHi Im gonna keep deploying vms with debug on
04:21 ingwaem why does the minion continue to die?
04:21 TaiSHi It dies once
04:22 TaiSHi After deploying with salt-cloud
04:22 TaiSHi Then I start it (service salt-minion start) and it provisions the server allright
04:22 ingwaem hmm
04:22 krow joined #salt
04:22 alainv joined #salt
04:22 DenkBrettl joined #salt
04:22 tempspace joined #salt
04:22 druonysuse joined #salt
04:22 Damoun joined #salt
04:22 pjs joined #salt
04:22 monokrome joined #salt
04:22 fxdgear joined #salt
04:22 Kalinakov joined #salt
04:22 seventy3 joined #salt
04:22 happytux_ joined #salt
04:22 notpeter_ joined #salt
04:22 dotplus joined #salt
04:22 SaveTheRbtz joined #salt
04:22 jforest joined #salt
04:22 yidhra joined #salt
04:22 zsoftich joined #salt
04:22 dcmorton joined #salt
04:22 marcinkuzminski joined #salt
04:22 viq joined #salt
04:22 kriberg joined #salt
04:22 __number5__ joined #salt
04:22 bensons joined #salt
04:22 erjohnso_ joined #salt
04:22 jamesog joined #salt
04:22 pfalleno1 joined #salt
04:22 mephx joined #salt
04:22 AlcariTh1Mad joined #salt
04:22 delkins_ joined #salt
04:22 shano_ joined #salt
04:22 jmccree joined #salt
04:22 terminalmage joined #salt
04:22 Kraln joined #salt
04:22 gmoro joined #salt
04:22 lude joined #salt
04:22 ixokai_ joined #salt
04:22 trevorjay joined #salt
04:22 lynxman joined #salt
04:22 MTecknology joined #salt
04:22 jeremyBass joined #salt
04:22 basepi joined #salt
04:22 faulkner joined #salt
04:22 torrancew joined #salt
04:22 philipsd6 joined #salt
04:22 mattikus joined #salt
04:22 jasonrm joined #salt
04:22 johtso joined #salt
04:22 freelock joined #salt
04:22 lionel joined #salt
04:22 ahammond joined #salt
04:22 renoirb joined #salt
04:22 emostar joined #salt
04:22 jeblair joined #salt
04:22 Heggan joined #salt
04:22 cliffstah joined #salt
04:22 repl1cant joined #salt
04:22 ghartz joined #salt
04:22 mgarfias joined #salt
04:22 robinsmidsrod joined #salt
04:22 ifmw joined #salt
04:22 Deevolution joined #salt
04:22 bitmand joined #salt
04:22 carmony joined #salt
04:22 sifusam joined #salt
04:22 twinshadow joined #salt
04:22 sverrest joined #salt
04:22 d3vz3r0 joined #salt
04:22 bmatt joined #salt
04:22 zach joined #salt
04:22 jeffrubic joined #salt
04:22 patarr joined #salt
04:22 EWDurbin joined #salt
04:22 smferris joined #salt
04:22 ingwaem are you able to perform a highstate on it before it bombs out?
04:23 chamunks joined #salt
04:23 TaiSHi Nope, let me show you
04:23 ph8 joined #salt
04:24 TaiSHi http://dpaste.com/1YNR00Q
04:24 savvy-lizard joined #salt
04:25 ingwaem I was going to suggest if you were able to, you could build in a cronjob to force it to restart
04:25 ingwaem but looking at the result you sent, seems the login fails after a minute…is it setting up keys at that point?
04:25 TaiSHi This one did, others just did the first 2 lines and failed
04:25 TaiSHi Or first line and failed (all in the same second)
04:25 TaiSHi It's really random
04:25 kermit joined #salt
04:26 ingwaem weird
04:26 akoumjian joined #salt
04:26 zsoftich1 joined #salt
04:26 DenkBret1l joined #salt
04:26 kwmiebach_ joined #salt
04:27 ingwaem was Dave around earlier when you first posted about it? Perhaps he knows what might be going on
04:27 modafinil_ joined #salt
04:27 thunderbolt joined #salt
04:27 codekobe_ joined #salt
04:27 JordanTesting joined #salt
04:27 wiqd joined #salt
04:27 TaiSHi I didn't fully post it earlier
04:27 copelco joined #salt
04:29 ingwaem my problem is I haven’t delved into salt-vert fully yet, it’s the next big todo I have…but whatever is going on seems like it’s timing out for some reason…would have been nice to somehow get a highstate in there so that it forces the application management criteria for the service to be managed and running always…i had similar issues with some mac vm’s a while back but I believe it’s now fixed
04:31 TaiSHi It happens very randomly
04:31 dotplus joined #salt
04:31 dotplus joined #salt
04:31 TaiSHi that's the main issue
04:31 ingwaem ugh :( not fun
04:31 rallytime joined #salt
04:31 Kalinakov joined #salt
04:32 jmccree_ joined #salt
04:32 schimmy joined #salt
04:32 fxdgear joined #salt
04:33 munhitsu_ joined #salt
04:33 TaiSHi Woosh
04:33 TaiSHi it's happening in real time!
04:36 TaiSHi I started the daemon and it didn't highstate
04:36 TaiSHi Then restarted and it did
04:36 schimmy1 joined #salt
04:36 MTecknology joined #salt
04:36 mosen joined #salt
04:37 jforest joined #salt
04:38 mateoconfeugo joined #salt
04:38 ajolo joined #salt
04:38 ingwaem very strange…what os?
04:38 TaiSHi linux x64 / ubuntu
04:39 mosen ios there an established format for distributing states? I come from puppet land :)
04:39 jmccree joined #salt
04:39 TaiSHi I think you might refer to saltstack-formulas
04:39 UtahDave mosen: drop your states in /srv/salt/_states on your master.  Then run   salt \* saltutil.sync_states
04:39 xenoxaos joined #salt
04:40 TaiSHi Ah, that
04:40 TaiSHi Hey Dave
04:40 oz_akan_ joined #salt
04:41 UtahDave also, like TaiSHi said, saltstack-formulas is a very useful place to star
04:41 TaiSHi ingwaem: filled an issue, could be a bug
04:41 UtahDave stt\art
04:41 UtahDave gosh.   start
04:42 ingwaem flangelitis, I get it too at times, or sometimes brain switches to typoneese :)
04:42 TaiSHi lol
04:42 TaiSHi I'm sleepy =(
04:42 TaiSHi Want to fix this
04:42 mosen ahh maybe im thinking formulas
04:43 mosen nifty
04:43 ingwaem formulas are great :) you can then target a slew of servers just based on their os, or first part of the name etc, then start getting more and more specific to each one. grains are cool too for tagging servers, then easy to target in the formulas
04:44 ingwaem at least what I remember of the reading I did :) was a marathon run at the time
04:45 TaiSHi UtahDave: mind if I ask you to check on something really quick ?
04:46 TaiSHi ingwaem: now I think of it, it -might- be cache, right ?
04:46 mosen still shuffling some things into salt
04:47 ramteid joined #salt
04:47 ingwaem there is a cache yes…try clear it out if it’s being prepopulated
04:47 ingwaem it’s the sub directories in the /etc/salt dir
04:48 ingwaem mosen: check these out: https://github.com/saltstack-formulas
04:48 mosen ingwaem: yeah just found it
04:48 UtahDave TaiSHi: sure
04:48 TaiSHi the pki's are fine
04:48 UtahDave sure
04:48 TaiSHi I just cleared /var/cache
04:49 TaiSHi https://github.com/saltstack/salt/issues/13863 <- there so I don't type it all over
04:49 maxskew joined #salt
04:50 ramishra_ joined #salt
04:50 * TaiSHi is learning to write issue on gh with markup
04:50 UtahDave TaiSHi: can you add the output of   salt --versions-report    from your master?
04:50 UtahDave TaiSHi: also have you tried running    salt-cloud -U        ?  That updates your salt bootstrap version
04:51 mosen I wrote some execution modules but more or less ignored half the tutorials.. getting ahead of myself :)
04:51 TaiSHi Done UtahDave
04:52 TaiSHi UtahDave: -U and then the rest? (--profile and such)
04:52 TaiSHi Also it says salt-cloud: error: no such option: -U
04:53 UtahDave sorry, -u
04:53 TaiSHi Ok just did
04:54 TaiSHi I might have been using old deb packages ones
04:54 thayne_ joined #salt
04:55 TaiSHi Wait, I wasn't tracking that file before
04:55 TaiSHi Hmmm
04:55 TaiSHi It's newly created
04:57 TaiSHi Sorry, today I was using deb packages, then uninstalled and re-installed from git
04:58 TaiSHi Need any more info from the instance or can I delete it?
05:02 aquinas joined #salt
05:13 TaiSHi ingwaem:
05:13 TaiSHi Whoops
05:13 TaiSHi ingwaem: I just added a couple failsafe options
05:14 TaiSHi Gah forgot to time deploy time
05:14 ingwaem ahh
05:14 TaiSHi Yeah tweaking the minion conf is fun
05:15 TaiSHi I want to have it really good so future deployments dont get f* up
05:15 TaiSHi I'm moving production sites here and they're going to work with an auto-scaling script
05:17 TaiSHi MTecknology: I find you everywhere.
05:17 MTecknology I am everywhere.
05:17 ingwaem yea, you seemed to be on the right track with the mine_interval…and since it was around the 2 minute mark seemed to point there
05:17 TaiSHi Oh, mine_interval is something else
05:17 ajolo joined #salt
05:17 ingwaem yea, was just reading through your doc
05:17 TaiSHi It's to gather info about hosts and push it to minions
05:18 mosen damn MTecknology
05:18 TaiSHi ingwaem: added restart_on_error and auth_tries
05:19 MTecknology mosen: hm?
05:19 mosen MTecknology: you are everywhere!
05:19 MTecknology ya
05:20 TaiSHi ingwaem: test #1 running
05:21 MTecknology mosen: not really, though... I used to hang out in >200 channels and actually keep an eye on most of them. I cut back a lot...
05:22 mosen MTecknology: just join everything on freenode?
05:22 TaiSHi woosh, that's some channels
05:22 mosen my tab hotkeys only go to 9
05:22 mosen therefore there shall not be more than 9
05:23 MTecknology it was across three networks; my hotkeys go 0-9a-zA-Z
05:23 TaiSHi ingwaem: new data: 2014-07-01 01:23:24,077 [salt.payload     ][INFO    ] SaltReqTimeoutError: after 60 seconds. (Try 3 of 10)
05:23 MTecknology I can also do /g <num>
05:24 ml_1 joined #salt
05:24 MTecknology TaiSHi: remember that you can stop the salt-minion service and run salt-minion -l debug and see exactly what's going on as far as the minion sees
05:25 TaiSHi It deployed perfectly :P
05:25 TaiSHi Damn it!
05:28 MTecknology well... salt 'minion' test.ping   you should see the debug info for that payload show up in stdout from the minion
05:28 MTecknology could be a firewall
05:29 TaiSHi But it works when I executed just now with -l
05:29 MTecknology if you see the payload, the minion might not be returning correctly
05:29 TaiSHi I'm re-deploying
05:29 MTecknology oh
05:30 TaiSHi I'm deploying 2 instances right now
05:31 MTecknology also, watch out for two minions with the same keys and id
05:31 TaiSHi as in hostname ?
05:31 TaiSHi DO wont let me create equal minions
05:32 MTecknology I should be sleeping... :(
05:33 TaiSHi 2:30 am here
05:33 TaiSHi 7:30 am I have to be up for university
05:33 TaiSHi =(
05:33 TaiSHi Someone in salt gh is going to hate me
05:34 TaiSHi Fetched 34.4 kB in 8s (3,829 B/s) <- loving DO's speed...
05:36 MTecknology I get up at 07:24 for work
05:37 TaiSHi Feel you
05:37 TaiSHi Got home an hour ~ ago from work
05:37 TaiSHi long days
05:37 TaiSHi But I want to get this done, it's for a personal customer / friend
05:39 TaiSHi Ok, gonna check this
05:39 TaiSHi and get to bed
05:39 MTecknology you use enter too much
05:39 TaiSHi I'm sorry, I'm sleepy and tend to write in a badly manner
05:41 TaiSHi Wow... 11 minutes to deploy a VM... DO's mirror have been a pain
05:41 oz_akan_ joined #salt
05:41 schimmy joined #salt
05:42 MTecknology DO is pretty solid for the most part, but they do leave a bit to be desired
05:43 TaiSHi Yeah those ubuntu repos are supposed to be very speedy since it's theirs... but oh well
05:46 MTecknology I'll never again use ubuntu on a server
05:47 TaiSHi How so ?
05:48 MTecknology their server team (the competent guys) mostly died off and went elsewhere
05:49 TaiSHi btw, I just deployed a couple instances, first one received payload after 2 timeouts, second one after 6 timeouts
05:49 MTecknology I have a big rant about it, especially since the new guys took an effort in pushing out the veterans, but this isn't the place and I'm tired
05:49 MTecknology try out debian for servers
05:50 TaiSHi Well the lack of ppa might be bothering me
05:52 MTecknology I manage 450 linux boxes and the lack of a ppa hasn't had any impact on me
05:52 MTecknology pain meds kicking in... bed time
05:52 MTecknology g'night
05:52 TaiSHi I might check debian for servers
05:52 TaiSHi Sleep well
06:12 marco_en_voyage joined #salt
06:17 ingwaem joined #salt
06:19 ingwaem TaiSHi: were you able to get the minion to connect? …just saw your update on the bug and noticed that it’s timing out…I wonder, did you check the master for keys to see if it auto registered or was perhaps in a state other than authenticated
06:27 jhauser joined #salt
06:33 mateoconfeugo joined #salt
06:42 oz_akan_ joined #salt
06:42 schimmy joined #salt
06:43 felskrone joined #salt
06:44 w\laite joined #salt
06:52 Outlander joined #salt
06:52 Tekni joined #salt
06:56 picker joined #salt
06:58 Kenzor joined #salt
06:59 Hell_Fire_ joined #salt
07:04 ml_1 joined #salt
07:05 bhosmer joined #salt
07:10 w\laite I'm using salt-ssh to run upgrades with my ubuntu servers, e.g. salt-ssh '*' pkg.list_upgrades ; is there a way to get error messages from this back from minion? Now this fails silently, whereas cmd.run 'apt-get dist-upgrade' shows there actually are errors
07:17 slav0nic joined #salt
07:23 chiui joined #salt
07:24 ndrei joined #salt
07:28 felskrone joined #salt
07:30 darkelda joined #salt
07:30 darkelda joined #salt
07:30 agend joined #salt
07:34 jdmf joined #salt
07:36 Zuru_ joined #salt
07:39 ramishra joined #salt
07:42 oz_akan_ joined #salt
07:44 vu joined #salt
07:47 ipalreadytaken joined #salt
07:48 happytux joined #salt
07:51 googolhash joined #salt
07:53 HACKING-TWITTER joined #salt
07:55 Lomithrani joined #salt
07:57 luette joined #salt
08:00 CeBe joined #salt
08:02 maxskew joined #salt
08:03 babilen w\laite: There isn't
08:03 w\laite babilen: ok, thanks for confirming
08:04 babilen w\laite: What's the error?
08:04 w\laite dpkg database was locked due to unfinished upgrade process earlier
08:06 chiui joined #salt
08:06 babilen Okay - I find this behaviour to be suboptimal, but unfortunately _get_upgradable() doesn't do any checking of the exit code and simply parses the output for upgradable packages. It might be a good idea to make this a bit more robust and actually fail if the apt-get --just-print dist-upgrade run fails. Could you file an issue?
08:07 w\laite sure thing
08:07 babilen ta
08:12 ingwaem joined #salt
08:19 HACKING-TWITTER joined #salt
08:20 HACKING-TWITTER joined #salt
08:21 ingwaem greetings all, i’m looking into external pillars and am having an issue using the example online and getting a result from mysql
08:21 ingwaem getting an error message [ERROR   ] Failed to load ext_pillar mysql: ext_pillar() takes at least 3 arguments (2 given)
08:21 ingwaem Traceback (most recent call last):
08:21 ingwaem was trying to use this example: http://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.mysql.html#module-salt.pillar.mysql
08:21 HACKING-TWITTER joined #salt
08:23 HACKING-TWITTER joined #salt
08:24 HACKING-TWITTER joined #salt
08:25 kiorky joined #salt
08:26 HACKING-TWITTER joined #salt
08:27 HACKING-TWITTER joined #salt
08:28 HACKING-TWITTER joined #salt
08:31 yomilk joined #salt
08:32 ramishra joined #salt
08:37 jolo joined #salt
08:39 jolo Hi
08:40 jolo The pdf still seems broken; is there at least an older version available somewhere?
08:43 oz_akan_ joined #salt
08:45 oz_akan__ joined #salt
08:46 milissa joined #salt
08:46 milissa http://adf.ly/pyduc
08:46 milissa left #salt
08:51 alanpearce joined #salt
08:52 xzarth joined #salt
08:57 Lomithrani joined #salt
08:57 elfixit joined #salt
08:58 linjan joined #salt
09:05 Lomithrani Does salt require much ram ?
09:06 Lomithrani or is 500 MB enough for, let's say, <100 minions
09:07 ingwaem Lomithrani: from the limited testing I’ve done up to now you should be good
09:08 ingwaem best way to find out would be to test and then see how it performs with your entire stack in place
09:08 xmj you wanna run 100 minions controlled by a master with not even 512MB ram?
09:08 * xmj astonished
09:08 ingwaem but it would be slow I imagine
09:09 ingwaem in other words it would run, but wouldn’t be fast … but then I’ve only used up to about 10 minions on a master up to now…haven’t had the need for more yet
09:10 jeddi ingwaem: are you likely to do highstates across * - ie, all minions at one go.  or in batches?
09:11 ingwaem jeddi: up to now I’ve had each minion set up with its own cronjob to do its highstate…each is randomized.
09:13 ingwaem jeddi: doesn’t salt run in batches though? So if you did a highstate across *, and it could only process 3 at a time, it would cycle through them till the whole job was done…no?
09:14 TheThing joined #salt
09:15 ingwaem as per worker_threads directive in the configs
09:15 xmj why not just increase Memory
09:15 xmj there's such a thing as memory overcommit anyway.
09:16 xmj 512MB is likely to be a VM, just throw 4096 at it, it'll just work.
09:16 ingwaem i agree xmj…especially with the cost of ram these days
09:17 ingwaem not like it used to be…4MB for $2k
09:17 TheThing good times
09:17 xmj ingwaem: especially if you run your server off ZFS.
09:17 xmj "buy more ECC RAM" is the solution to 80% of all problems :p
09:18 TheThing lol
09:18 ingwaem hey have you guys ever used the external pillars?
09:18 ingwaem i’m having issues trying out the mysql example in the documentation…trying to get the pillar to dynamically build from db
09:19 babilen Lomithrani: I would give it more RAM, but then it really depends on the actual number of minions. 99 is different from 0 you know ;)
09:19 babilen ingwaem: I am using git pillars quite extensively, but those might be considered "semi-internal" by now :)
09:21 Lomithrani I don't see really any importance on my master to be fast, as long as the minions recieve the state from the master and if they execute it fast I don't really care when they execute it
09:21 Lomithrani ram still cost something in the cloud ^^
09:21 Lomithrani I will only have 20ish server at start
09:21 ingwaem ahh ok…yea I’ve kind of avoided putting much reliance directly on git…i have processes at the moment that sync directories on the master using git states, but then use internal file transfer to get them to the minions…but seems I may be missing something…oh well :)
09:22 ingwaem Lomithrani: in that case just give it a bash…the default worker_threads is set to 5, so if you have 50 minions it will cycle through 50 internal jobs to get the overall result…5 at a time
09:23 babilen Lomithrani: Should be okay then
09:23 ingwaem my terminology is not specifically correct to the terminology used in salt, as in the internal jobs do not equate to the jobs you see in the salt queue
09:25 Lomithrani Ok thanks for the precisions :)
09:27 Linuturk joined #salt
09:27 Linuturk joined #salt
09:29 giantlock joined #salt
09:31 linjan joined #salt
09:33 ramteid joined #salt
09:41 linjan_ joined #salt
09:42 jeddi ingwaem: ah, already answered.  :)  hadn't read up on worker_threads before - but yeah, that'll throttle things - though the docs say that if your master is slow you should increase that number, which seems counter-intuitive.    i've run a salt minion on a RPi - which has 512mb only, and is pretty painful - but not thought of trying to run a master on it.
09:44 Sp00n rpi has a weak cpu
09:44 jeddi Sp00n: yeah, and dodgy versions of 0mq from memory ... it ran, but it wasn't especially snappy.
09:44 jeddi salt minion took up about half its memory IIRC.
09:45 jeddi and this was probably late 2013, or very early 2014 versions.
09:45 ingwaem jeddi, i think they mention increasing the count if your system can afford the resources. Since salt’s foundation is on communication and orchestration, it utilizes agents for everything it does…so if the server can spawn more instances it can service more minions at once. With a small stack this is as you say counter-intuitive, however if you start getting into the 100’s or 1000’s and more minions, chances are you want to talk to
09:45 ingwaem more than 5 at once if you have a big beefy master. and imagine all that along with master of masters, and syndic :)
09:45 jeddi ingwaem: yup - that's doubtless bang on the mark.  i've got about 20-30 minions, but it's rare that i try to update them all at the same time.
09:46 oz_akan_ joined #salt
09:46 jeddi actually, my salt master is a 512MB machine - but it's a VPS over at digital ocean - and that's pretty snappy.
09:46 ingwaem ahh nice
09:46 linjan joined #salt
09:46 ingwaem yea, with the right thought and planning you can have an uber large system without having to do too much…just balance it all right.
09:47 babilen I also found that using more cores really pays off due to salt's usage of multiprocessing
09:47 babilen Unfortunately there does not seem to be a "recommended specs of master for $NUM_MINIONS" specification somewhere.
09:48 ingwaem the cron module is great, i find, for randomizing my interactions with the master…just setting a random minute value balances them all out over an hour, as opposed to say a set interval…if it’s daily then a random hour would be fine and then that should be ultra well balanced
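A minimal sketch of the staggered highstate cron job ingwaem describes (the pillar key, log path and schedule are illustrative): each minion carries its own minute in pillar, so the runs spread out over the hour instead of hitting the master all at once.

highstate-cron:
  cron.present:
    - name: salt-call state.highstate > /var/log/salt/cron-highstate.log 2>&1
    - user: root
    - minute: {{ salt['pillar.get']('highstate_cron:minute', 17) }}
    - hour: '*/2'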
09:55 pdayton joined #salt
10:16 N-Mi joined #salt
10:16 N-Mi joined #salt
10:28 masterkorp Hello everyone
10:28 masterkorp salt-cloud can i place my aws settings on pillar data ?
10:28 masterkorp http://docs.saltstack.com/en/latest/topics/cloud/config.html#cloud-configurations
10:28 cliffstah left #salt
10:28 masterkorp in here its not dead obvious that is why i am asking
10:30 babilen masterkorp: http://docs.saltstack.com/en/latest/topics/cloud/config.html#pillar-configuration is not what you are looking for (from the documentation you just linked)
10:30 babilen ?
10:31 masterkorp babilen: yeah, does that work for aws key id too ?
10:32 babilen Would that be the api_key in there?
10:32 babilen Well, key most likely ..
10:33 babilen It is shown in 15.3.1.5.2. Amazon AWS
10:34 babilen And I see no particular reason why that wouldn't be applicable in the pillar configuration too. Do you run into problems if you use that?
10:34 masterkorp lol, sorry, not coffee on the system
10:34 babilen I mean I haven't tested it, but one could read the documentation that way
10:35 masterkorp i will try it
10:35 masterkorp and make this pillar data available to the salt-master only right ?
10:36 babilen masterkorp: Let me read the documentation ... ;)
10:36 babilen masterkorp: I don't know (yet)
10:37 q4brk joined #salt
10:37 q4brk joined #salt
10:37 masterkorp well, i will try it and see
10:37 ingwaem there’s a switch to disable sending all the master data to all the minions…I believe without that switch the minions will have record of the aws key
10:39 ingwaem pillar_opts: False
10:39 ingwaem For convenience the data stored in the master configuration file is made available in all minion's pillars. This makes global configuration of services and systems very easy but may not be desired if sensitive data is stored in the master configuration.
10:40 CeBe1 joined #salt
10:40 TheThing joined #salt
10:42 ingwaem masterkorp: reading again, “salt-cloud can i place my aws settings on pillar data ?” yes. you can call your pillar data from any state file, formula etc, eg: {{ salt['pillar.get']('foo:bar:baz', 'qux') }}. so you would use those values in your cloud config file, and the config file will then retrieve the values from the pillars
10:42 ingwaem or raw in a template: {{ pillar['foo']['bar']['baz'] }}
10:42 ingwaem - name: {{ pillar['apache'] }}
10:43 oz_akan_ joined #salt
10:43 xmj viq: ping
10:43 xmj viq: did you send a pull-request from gitlab-formula-1 to upstream?
10:44 babilen ingwaem: Data in pillars are only made available to those minions that you target
10:44 masterkorp ingwaem: yeah that i know, i can salt salt-cloud, but the docs make it seem that all i need is a cloud namespace with settings on it
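A hedged sketch of the "cloud namespace in pillar" reading of that page (the provider name, key names and credential values are placeholders, and the pillar would normally be targeted only at the master itself so minions never see the keys):

cloud:
  providers:
    my-ec2:
      provider: ec2
      id: AKIA...EXAMPLE
      key: 'example-secret-key'
      keyname: deploy-key
      private_key: /etc/salt/deploy-key.pem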
10:45 ingwaem babilen: gotcha
10:46 babilen ingwaem: That is what allows you to store sensitive data in your pillars without having to fear that every minion has access to it. We frequently use for certificates, keys and user data.
10:47 babilen *use it
10:47 ingwaem babilen: yea, I’ve been reading a lot up on pillars recently…specifically have issues at the moment connecting it to mysql :)
10:48 viq xmj: no, I didn't. The requirement for ruby 2+ somewhat threw a wrench into the works... Though I guess the formula still allows you to install 6.9
10:49 xmj ah
10:49 xmj because 7.0 ?
10:53 ingwaem well i have to get some shuteye…gnight all
10:53 viq yeah
10:53 * xmj checks on 2014.1.6
10:54 babilen Oh, has it been released?
10:55 viq xmj: still not tagged and not on pypi
10:56 viq babilen: ^
10:56 babilen yeah, I just ran "git fetch"
10:56 xmj babilen: no i just check every few hours. :D
11:01 bhosmer joined #salt
11:02 ekristen joined #salt
11:05 agend joined #salt
11:25 jrdx joined #salt
11:27 krow joined #salt
11:28 diegows joined #salt
11:28 bhosmer joined #salt
11:29 logix812 joined #salt
11:29 mgw joined #salt
11:34 alanpearce joined #salt
11:38 Lomithrani joined #salt
11:43 mapu joined #salt
11:48 sgate1 joined #salt
11:49 haven_ joined #salt
11:49 hhenkel_ joined #salt
11:50 N-Mi joined #salt
11:50 N-Mi joined #salt
11:50 __alex_ joined #salt
11:50 Valda joined #salt
11:52 Ymage_ joined #salt
11:52 masterkorp1 joined #salt
11:52 canci joined #salt
11:52 bernieke joined #salt
11:56 tmmt joined #salt
11:57 HACKING-TWITTER joined #salt
11:59 HACKING-TWITTER joined #salt
12:00 ramishra_ joined #salt
12:00 HACKING-TWITTER joined #salt
12:01 HACKING-TWITTER joined #salt
12:03 HACKING-TWITTER joined #salt
12:04 HACKING-TWITTER joined #salt
12:06 HACKING-TWITTER joined #salt
12:07 dduvnjak joined #salt
12:08 dduvnjak i'm a puppet user trying out salt for the first time. what would be the salt equivalent of a puppet definition?
12:08 dduvnjak (searched but couldn't find anything)
12:09 ramishra joined #salt
12:09 ramishra joined #salt
12:10 vbabiy joined #salt
12:10 ramishra_ joined #salt
12:14 alanpearce joined #salt
12:15 ndrei joined #salt
12:16 vejdmn joined #salt
12:21 viq dduvnjak: what is a puppet definition?
12:22 dduvnjak it's a reusable part of the code that can be applied several times with different parameters
12:23 dduvnjak http://docs.puppetlabs.com/learning/definedtypes.html
12:24 viq dduvnjak: then depends. If you want a larger, more complete block, then formulas somewhat fit the bill. For smaller bits, you just do a for loop
12:24 vortec joined #salt
12:25 happytux_ joined #salt
12:25 viq Say, a tab I still have open somewhat showing an example https://gist.github.com/gravyboat/638b69b90c010dbdf929
12:25 dduvnjak formula does look similar, thanks.
12:26 viq formulas are somewhat like modules you'd download from puppetforge
12:26 viq I guess there isn't a strict 1:1 mapping
12:27 dduvnjak hmm maybe it's not that similar
12:28 lynxman joined #salt
12:28 dduvnjak let's say i wanted to write salt code that adds nginx virtuahosts. i would write generic code that takes in parameters like domain name and root directory
12:29 dduvnjak then i would just call that code anytime i need to add a new vhost
12:29 viq dduvnjak: well, slightly different approach
12:29 dduvnjak how would that be done in Salt?
12:30 phx dduvnjak, i'm not a very experienced user, but i'd just write an sls that defines the nginx conf, and since script usage is allowed in remotely distributed files, i'd just have the nginx conf generated
12:30 viq You define vhosts you want in eg pillar (think: hiera), then in your code you loop over the pillar data and for each of them generate a vhost
12:30 phx that is, the script knows where's the list of vhosts, whatever, then output the nginx conf based on that, when pulled
12:30 dduvnjak that is a different approach :)
12:30 phx true
12:30 viq dduvnjak: trying to find an example
12:31 dduvnjak thanks guys
12:31 dduvnjak i'll keep trying it out
12:31 dduvnjak i want to like Salt :)
12:31 viq dduvnjak: ish https://github.com/saltstack-formulas/apache-formula/blob/master/apache/vhosts/standard.sls
12:32 viq and example pillar for this https://github.com/saltstack-formulas/apache-formula/blob/master/pillar.example
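A condensed sketch of the pillar-driven loop viq describes; the pillar layout and template path here are placeholders, not the actual apache-formula.

    # pillar (hypothetical)
    nginx:
      vhosts:
        example.com:
          root: /srv/www/example.com
        other.org:
          root: /srv/www/other.org

    # state file
    {% for domain, args in salt['pillar.get']('nginx:vhosts', {}).items() %}
    nginx-vhost-{{ domain }}:
      file.managed:
        - name: /etc/nginx/sites-enabled/{{ domain }}.conf
        - source: salt://nginx/files/vhost.conf.jinja
        - template: jinja
        - context:
            domain: {{ domain }}
            root: {{ args['root'] }}
    {% endfor %}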
12:33 dduvnjak i think i get the picture
12:33 viq dduvnjak: sadly, AFAIK one thing that you can do with puppet and can't really do with salt is virtual resources
12:38 dduvnjak left #salt
12:41 Teknix joined #salt
12:43 DaveQB joined #salt
12:52 vejdmn joined #salt
12:54 jas-_ joined #salt
12:54 babilen Can I somehow always checkout the latest tag with git.latest ?
12:54 babilen (or will I have to chenge rev whenever a new version has been tagged?)
12:58 or1gb1u3 joined #salt
12:58 Lomithrani is there an opposite of require ?
12:58 Lomithrani I mean I want to download a file only if I haven't already done it
13:01 Lomithrani unless , with ls seems like a good way to do this !
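The usual shape of that approach (URL and paths made up): unless takes a shell command, and the state is skipped when that command exits 0.

    get-archive:
      cmd.run:
        - name: curl -L -o /opt/app/pkg.tar.gz https://example.com/pkg.tar.gz
        - unless: test -f /opt/app/pkg.tar.gz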
13:05 oz_akan_ joined #salt
13:08 oz_akan_ joined #salt
13:11 ipmb joined #salt
13:13 racooper joined #salt
13:16 jaimed joined #salt
13:23 ramishra joined #salt
13:26 ajprog_laptop1 joined #salt
13:27 to_json joined #salt
13:32 stevednd can you template orchestration files?
13:35 jpcw joined #salt
13:35 tkharju joined #salt
13:36 ggoZ joined #salt
13:36 Lomithrani joined #salt
13:36 manfred should be able to
13:37 stevednd manfred: how does it work exactly? Isn't the orchestration owned by master, and thus doesn't have pillar data?
13:37 stevednd does it have its own special pillar?
13:37 manfred i mean... you can't use those things... but you should be able to use jinja in it to template it in some way
13:37 manfred ¯\(°_o)/¯
13:38 manfred stevednd: have you tried it yet?
13:38 stevednd no, mainly because I had no idea where I would pull data from. :)
13:39 manfred the only one I would use would be something like cloud.query to look up info
13:39 manfred ¯\(°_o)/¯
13:40 Lomithrani my mine suddenly stopped working ?
13:40 Lomithrani any idea why ?
13:40 Lomithrani network.ipaddrs  precisely
13:42 stevednd whiteinge: https://github.com/saltstack/salt/issues/13873 sorry for taking so long to submit the issue for this
13:42 manfred Lomithrani: network.ip_addrs?
13:42 Lomithrani both works
13:43 Lomithrani usually
13:43 manfred ahh
13:43 manfred ¯\(°_o)/¯
13:43 danielbachhuber joined #salt
13:43 Lomithrani {% set number_of_cassandra= salt['mine.get']('Cass*', 'network.ipaddrs', expr_form='pcre').items() | length %}  I use this personally, and for the first time the highstate failed
13:44 Lomithrani and so I tried to   salt '*' mine.get '*' network.ipaddrs and oddly it doesn't return anything
13:44 Lomithrani (only names of minions)
13:45 mapu joined #salt
13:46 Deevolution joined #salt
13:46 Lomithrani http://pastebin.com/gfZ5Mtpm
13:46 Lomithrani not enough space oO ?
13:47 aquinas joined #salt
13:51 yomilk joined #salt
13:52 nahamu Lomithrani: you might be running out of memory
13:53 nahamu when fork() fails on SmartOS that's usually the culprit.
13:53 Lomithrani I might be a bit short on memory yes
13:53 Lomithrani I solved the problem by rebooting my master
13:54 Lomithrani but 500 MB might be a bit short
13:54 perfectsine joined #salt
13:55 kivihtin joined #salt
13:58 dude051 joined #salt
14:00 jalbretsen joined #salt
14:02 ipmb joined #salt
14:06 CheKoLyN joined #salt
14:08 timoguin http://www.julython.org/
14:08 jnials joined #salt
14:08 timoguin the saltstack team should get on that ^^
14:09 bhosmer joined #salt
14:09 xmj timoguin: they're doing month+"ton" where month in ['Jan', 'Feb', 'Mar', 'Apr, .., 'Dec'] anyway
14:10 ramteid joined #salt
14:12 taterbase joined #salt
14:17 babilen Hmm, do I really have to use cmd.run if I want to copy one directory on a minion?
14:21 racooper babilen,  look at file.copy perhaps? or any of the other file.* functions?
14:22 babilen racooper: I don't think that file.copy will copy directories recursively
14:22 napper joined #salt
14:23 babilen (nor will file.recurse work with a local source AFAIUI)
14:23 racooper http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.file.html
14:24 babilen Yes, sure
14:24 racooper salt.modules.file.copy(src, dst, recurse=False, remove_existing=False)
14:24 babilen That's the module not the state
14:25 racooper you didn't say anything about a state
14:25 babilen http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.copy
14:25 babilen Okay, *in a state*
14:26 babilen I'd like to checkout a git repo and copy part of it somewhere else.
14:26 racooper might be able to use module.run to run the file.copy module; I'm trying that with quota.set and having some issues though.
14:27 babilen Ah, well. I just use cmd.run. It's not that much of a problem, but it simply "felt" like the kind of thing for which a state might already be available.
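An untested sketch of the module.run idea racooper mentions, calling the file.copy execution module from a state; the paths are placeholders, and the recurse flag assumes the signature racooper quoted is available in your version.

    copy-checkout-docs:
      module.run:
        - name: file.copy
        - src: /srv/checkout/docs
        - dst: /opt/app/docs
        - recurse: True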
14:30 Lomithrani how can I download once and manage a file inside what would be downloaded, I'm trying something like this: http://pastebin.com/mzs2Cd2U
14:30 Lomithrani but I don't know why the curl just won't trigger
14:30 Lomithrani "unless execution succeeded"
14:31 Lomithrani because the file.managed creates the directory
14:33 tyler-baker joined #salt
14:36 Lomithrani aouch I'm tired
14:36 rallytime joined #salt
14:37 Lomithrani I thought it wasnt logical this unless thing
14:37 Lomithrani well is there an opposite thing of unless ?
14:37 Lomithrani oh no nvm it is unless I need
14:38 Lomithrani so yeah my problem is the one described above ^
14:44 higgs001 joined #salt
14:44 perfectsine joined #salt
14:48 higgs001_ joined #salt
14:51 mgw joined #salt
14:55 q4brk is there a problem in salt 2014.03 when having a multi-master setup, it seems that my minions connect, but don't execute highstate on startup, even if I have "startup_states: highstate" in their config file ?
14:55 q4brk *2014.1.3
14:56 quickdry21 joined #salt
14:57 babilen And that worked before, but broke after upgrading the master [and the minions] to 2014.1.3 ?
14:57 q4brk babilen: no idea, I started testing this setup using 2014.1.3
14:57 babilen Ack, just wanted to clarify :)
14:59 PLATOSCAVE joined #salt
15:03 jslatts joined #salt
15:04 conan_the_destro joined #salt
15:06 dvogt joined #salt
15:08 HACKING-TWITTER joined #salt
15:08 HACKING-TWITTER joined #salt
15:09 HACKING-TWITTER joined #salt
15:10 slav0nic joined #salt
15:11 tinuva joined #salt
15:12 kaptk2 joined #salt
15:22 ipalreadytaken joined #salt
15:28 Eureka joined #salt
15:29 vejdmn joined #salt
15:29 Eureka @manfred I think i was talking to you yesterday about an odd error about reactors. Ring a bell?
15:33 Eureka Hi All. Yesterday I had an issue with reactors not completing a full event with the salt master. I believe ive figured it out and can reproduce the error now.
15:35 mateoconfeugo joined #salt
15:37 kballou joined #salt
15:41 Lomithrani Me again with the same question: what's wrong with that? http://pastebin.com/mzs2Cd2U  how can I have the unless behave properly? For some reason it won't trigger the command even though the bin directory doesn't exist
15:44 tinuva joined #salt
15:44 thayne_ joined #salt
15:44 tligda joined #salt
15:47 maxleonca joined #salt
15:48 ipalreadytaken joined #salt
15:50 jslatts joined #salt
15:50 nickg does salt handle and can it distribute hostname changes to minions?
15:53 viq nickg: that's a bit tricky, as different systems set it in different places, and also I think often you need a reboot for everything to see the new hostname. Also minion id is separate from hostname, though initially based on it
15:54 thedodd joined #salt
15:54 nickg viq is there a minion id rename feature?  it would be nice to keep the ID inline with the hostname
15:54 maxleonca Hello, how can I bootstrap salt-minion so it pulls the config from master on 1st boot?
15:55 viq nickg: kinda. you can remove /etc/salt/minion_id and upon restarting salt-minion it will regenerate it - but I believe at that point you will need to accept minion's key anew and remove the old one
15:55 viq maxleonca: define "pulls config from master"
15:55 nickg viq ok best to just set it before installing salt
15:56 viq nickg: yeah, probably
15:57 maxleonca (viq) I have the autosign.conf setup, so right now I'm using a ks file to provision, when the vm comes online it installs salt-minion and the key gets accepted.  What can I do to make the newly created minion to call the master for the default config?
15:57 maxleonca by default I mean the common states for all my servers
15:57 maxleonca I need it to be done without human intervention.
15:58 viq maxleonca: there's a setting for minion for startup_states I think, you can set highstate there - but it means highstate will run every time the minion process starts. You could also do that via reactor, with a similiar caveat
15:58 slav0nic joined #salt
15:59 viq Unless for example one of initial states set a grain on the system, and the reactor excluded minions with that grain set
15:59 druonysus joined #salt
16:02 maxleonca hmmm, not sure if I'm getting your meaning....
16:02 maxleonca this is done on the minion or the master side?
16:02 maxleonca because if the minion is deployed as an RPM, I figure it will have to be the master that pushes the initial config to the newly created minion, right?
16:03 ffrodrigues joined #salt
16:04 dimeshake you can run a salt-call state.highstate on the minion after salt is installed?
16:04 viq maxleonca: either. you can set it in minion config, or you can configure reactor on master
16:05 viq maxleonca: if you want it purely master side, then reactor is the way to go, I think
16:05 maxleonca yes, I know I can run a salt-call but I need it to be done without human intervention the 1st run at least
16:05 maxleonca afterward I just make sure it loads the custom minion config.
16:05 dimeshake i'm saying you can script it in your provisioning
16:06 maxleonca OK, so reactor it is, but can I configure a reactor based on a new key being accepted?
16:06 maxleonca ahhh, true
16:06 maxleonca I'm so dumb, the simplest of things...
16:06 Eureka @maxleonca yep!
16:06 maxleonca See, this happens when you go too deep and over-engineer things...
16:07 dimeshake hahah
16:07 Eureka Part of an example here: http://vbyron.com/blog/infrastructure-management-saltstack-part-3-reactor-events/ and here: https://salt.readthedocs.org/en/v0.17.5/topics/reactor/index.html
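A sketch of the reactor approach from those links, using the older cmd.-prefixed reactor syntax shown in the 0.17 docs; the event tag, file paths, and state name are assumptions.

    # /etc/salt/master.d/reactor.conf
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/first-highstate.sls

    # /srv/reactor/first-highstate.sls
    run_highstate_on_new_minion:
      cmd.state.highstate:
        - tgt: {{ data['id'] }}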
16:08 maxleonca Thank you again.
16:08 Eureka @maxleonca Im doing the same thing atm. Now working on auto-deleting a host when it is terminated using a trigger o.0 Fun stuff.
16:09 KyleG joined #salt
16:09 KyleG joined #salt
16:10 troyready joined #salt
16:12 smcquay joined #salt
16:13 Lomithrani what's wrong with that? http://pastebin.com/mzs2Cd2U  how can I have the unless behave properly? For some reason it won't trigger the command even though the bin directory doesn't exist. Anyone? :(
16:14 ndrei joined #salt
16:15 lude does anyone have any thoughts on using pkg.latest on salt itself?
16:16 lude i run into issues from time to time when i make assumptions about my minions being up to date
16:18 Eureka @lude I have run into issues updating via YUM as sometimes if you upgrade the minion before the master it will no longer be able to authenticate and will require the minion to be removed and re-added.
16:19 jslatts joined #salt
16:19 lude so you think just make sure version=whatever
16:19 lude and just manually tick it after a master update?
16:20 timoguin yea auto-update makes me nervous, especially for something like salt
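The version-pinning idea lude describes would look roughly like this (the version string is just an example); bump it by hand once the master has been upgraded.

    salt-minion:
      pkg.installed:
        - name: salt-minion
        - version: 2014.1.5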
16:20 maxleonca Deleting a host with a trigger, that sounds great, but quite tricky.
16:21 fllr joined #salt
16:21 maxleonca @Eureka, because who will trigger the removal?
16:21 fllr Hey guys. Why isn't this line working? sudo salt-call mine.get '* and not mimic' grains.items
16:24 horus_plex fllr: add the '-C'
16:24 horus_plex for compound matcher
16:24 fllr horus_plex: where?
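Roughly: for mine.get the match type goes in the expr_form argument rather than a CLI flag, while -C applies to the salt CLI's own targeting (the 'mimic' minion name is taken from the question above).

    # compound match inside mine.get via salt-call
    sudo salt-call mine.get '* and not mimic' grains.items expr_form=compound

    # -C selects which minions the salt CLI runs the function on
    sudo salt -C '* and not mimic' grains.items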
16:25 seblu what's the procedure to request that a patch being backported to the stable branch?
16:26 viq seblu: ask for it in that github issue
16:26 bhosmer joined #salt
16:26 viq Or create a new one I guess, if it didn't come though PR
16:26 seblu ok thx
16:26 tyler-baker joined #salt
16:27 Eureka @maxleonca I am using scalr with salt so I can get a trigger from scalr that the host is being terminated and issue a reactor event that will delete the host from the salt-master. ;)
16:27 fllr horus_plex: Like, what part of the line?
16:27 tyler-baker_ joined #salt
16:28 Ryan_Lane joined #salt
16:28 seblu or maybe any idea when the next release of salt (20.14.2) will be scheduled,
16:28 seblu ?
16:29 bhosmer_ joined #salt
16:29 viq seblu: 1) if it was released this month, it would be 2014.7.0; its codename is Helium 2) no, there was talk of RC1 due last month, but that doesn't seem to have happened
16:30 maxleonca @Eureka, never heard of Scalr.  I'm currently using Foreman which is ok but doesn't play as nicely as I want with Salt.  Thanks for the tip, I'll take a look at it.
16:30 krow joined #salt
16:31 ksalman salt does not like windows environment variables? https://gist.github.com/ksalman/0c7226735d622e8c5b86
16:31 timwillis joined #salt
16:31 ksalman I get unknown yaml render error
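One frequent cause of an unknown yaml render error with Windows values is an unquoted scalar that starts with a YAML-significant character such as %; quoting the value usually gets the file parsing again. A hypothetical example, not ksalman's actual state:

    run-installer:
      cmd.run:
        - name: '"%ProgramFiles%\MyApp\setup.exe" /quiet'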
16:34 yomilk joined #salt
16:41 joehillen joined #salt
16:43 jbub joined #salt
16:43 schimmy joined #salt
16:46 schimmy1 joined #salt
16:46 Eureka @maxleonca Scalr seems to work well and exposes all kinds of useful functionality. Not only that it lets you talk to multiple cloud platforms ;) I like it so far!
16:47 dvogt joined #salt
16:53 bhosmer joined #salt
16:55 nickg i'm in a catch 22 here with grains and mines.  when i bring on a new device i need to send the custom grain to it before i can run highstate (which calls sls that use mines).  I can determine which minions are new with Grain compound searches, so I sync_all/refresh_pillars to send everything
16:55 forrest joined #salt
16:56 nickg then I cannot send a mine.update to the minion because now the compound search won't find it, but the minion doesn't update its mine manually.  so it's stuck in no man's land
16:56 schimmy joined #salt
16:57 N-Mi joined #salt
16:57 N-Mi joined #salt
16:57 nickg when i sync the new custom grain, why doesnt the system update the mine?
16:59 shaggy_surfer joined #salt
16:59 nickg or why doesn't sync_all refresh_pillars ?
17:00 jbub joined #salt
17:00 jnials joined #salt
17:06 whytewolf joined #salt
17:08 davet joined #salt
17:09 TyrfingMjolnir joined #salt
17:18 shaggy_surfer joined #salt
17:21 jcsp1 joined #salt
17:21 nickg looks like by default grain_refresh_every is 0 minutes not 10 minutes like the conf says
17:21 ramteid joined #salt
17:22 aquinas joined #salt
17:28 dstokes has anybody put together a salt development vagrant image yet?
17:29 vbabiy joined #salt
17:30 funzo joined #salt
17:30 jcsp joined #salt
17:33 krow joined #salt
17:35 ffrodrigues left #salt
17:35 aw110f joined #salt
17:40 ml_1 joined #salt
17:44 maxleonca @Eureka, Scalr looks great but sadly I'm tied to VMWare so not an option.  Looking into mangeiq.org.
17:46 * MTecknology would take vmware over scalr just from a brief glance
17:49 Eureka @maxleonca I believe last time I talked to them that they will be supporting vmware soon(ish)
17:50 Gareth morning
17:54 wigit joined #salt
17:55 titish_maryam joined #salt
17:59 MTecknology In an event, what's pretag used for?
17:59 titish_maryam Hi
18:00 TheThing Hi
18:00 MTecknology It seems like events from minions are passed to the master through syndics, but it also seems like the master won't react to those events, but I want it to.
18:03 kermit joined #salt
18:04 vejdmn joined #salt
18:05 ndrei joined #salt
18:05 fllr joined #salt
18:05 vbabiy joined #salt
18:08 elfixit joined #salt
18:09 BrendanGilmore joined #salt
18:11 perfectsine joined #salt
18:11 MTecknology heh... from the master, I'm seeing  [ERROR   ] Failed to render "/srv/reactor/auth-complete.sls"  but it's not telling me which part of the file failed to render or any reason that it's wrong. I'm not seeing anything wrong...
18:12 MTecknology This is the entire file  http://dpaste.com/11NXBDP
18:14 kuffs MTecknology: cmd.cmd.run doesn't seem right
18:15 Eureka update your     - tgt: 'salt.corp.domain.tld'
18:15 Eureka It needs your salt domain, I believe?
18:16 MTecknology Eureka: hm? domain.tld is in place of our actual domain
18:17 Eureka @Mtecknology Actually. Looking at this you need the tgt to be the minion ID like listed below? This is not intended to run on the salt master is it?
18:18 MTecknology Eureka: yup, that's intended
18:18 Eureka MTecknology. Are you running the event listener python script? I had issues with the reactors not working and it was not firing the whole host.
18:18 Eureka host event*
18:18 MTecknology I watched eventlistener.py to make sure the event I expect is actually happening. Watching the master debug log confirmed that because it tried to render that file and failed.
18:19 MTecknology kuffs: what makes it seem wrong?
18:19 kuffs cmd is repeated?
18:19 schimmy1 joined #salt
18:19 kuffs unless there's a new feature at play here I haven't seen
18:20 Eureka That is the correct format when using reactors o.-
18:20 kuffs welp
18:20 MTecknology it's because you're not limited to just states
18:20 titish_maryam left #salt
18:21 schimmy joined #salt
18:22 Eureka So you are attempting to run bh_unlock on your master with the id of your minion as an argument?
18:22 MTecknology yup
18:22 BrendanGilmore joined #salt
18:23 MTecknology This used to work and I'm not sure what changed to make it stop working.
18:23 ckao joined #salt
18:24 zz_RedDeath joined #salt
18:25 Eureka Yeah, im not seeing anything that would do that
18:26 UtahDave joined #salt
18:26 MTecknology *grumble*
18:26 MTecknology it has to be something really really dumb... like I forgot a character or used tabs or.... irunno
18:26 Eureka @MTecknology is there any reason you are using the != boothost condition? It looks like you are filtering twice?
18:27 MTecknology I want boothost*, but not boothost
18:28 Eureka @MTecknology ah, ive got you. Maybe shut down the master and run it in debug?
18:29 Eureka @MTecknology Oh, i see you have done that already o.0
18:29 MTecknology Eureka: type  mt<tab>
18:29 Eureka MTecknology: Ah, Thanks. I dont use IRC all that much.
18:29 MTecknology it's one of the differences between twitter and irc ;)
18:30 Eureka MTecknology: hah. I dont use twitter at all ;)
18:32 TheThing joined #salt
18:33 ccase joined #salt
18:33 MTecknology A bit more details...  http://dpaste.com/2VWK6FA
18:34 MTecknology It's interesting that I'm getting each event twice on the master. I wonder if that's normal....
18:34 UtahDave left #salt
18:36 Eureka thats odd.. Ive not seen that with mine.
18:36 thedodd joined #salt
18:36 Eureka Are you sure you don't have 2x hosts with an identical name? That could cause issues (duplicate IDs)
18:37 MTecknology positive
18:37 MTecknology it only happens when I launch salt-minion on this box
18:37 MTecknology weird
18:38 MTecknology if I have this minion connect directly to the master, the reactor file renders fine
18:38 Eureka MTecknology: that is really strange. Is it not possible to reimage the box?
18:39 MTecknology not a chance
18:39 zach this is cool: https://github.com/ProjectUn1c0rn/SaltStrap/tree/instance-tor
18:40 zach https://twitter.com/hashtag/SaltStrap?src=hash
18:40 MTecknology all boxes are sending duplicate events
18:41 MTecknology I get the feeling this is a different issue that will end up with a bug report. My big concern is why the master fails to render that file when running through a syndic
18:42 Eureka MTecknology: are you by any chance configuring your minions or syndic via the minion.d folder on the host?
18:43 MTecknology ya
18:44 Eureka MTecknology: humor me. I think you have the problem I just found. Take what is in your minion.d folder on the problem host. Put it directly into the main minion config file in /etc/salt and mv the minion.d folder out of /etc/salt for a test. Restart the minion, then try to run your command again.
18:45 kermit joined #salt
18:47 MTecknology Eureka: same thing happens on both fronts (no change)
18:47 Eureka MTecknology: darn =/ I was hoping that would help. I just found a problem where files in minion.d can cause the reactor events to never get fired =/
18:47 MTecknology I see why it's different, though...
18:49 MTecknology The event, when forwarded from a syndic has an extra data dict, so when coming from a syndic, I need to use data['data']['id'] instead of just data['id']
18:49 KyleG1 joined #salt
18:49 MTecknology nothing to do with duplicate events, but seems to be the reason the reactor won't render
18:50 KyleG joined #salt
18:50 KyleG joined #salt
18:51 bhosmer joined #salt
18:51 MTecknology so, I guess what I want is .... {% if 'data' in data set data = data['data'] %}  I wonder if I can do it just like that...
18:51 Eureka MTecknology: oh man. that looks annoying.
18:52 to_json joined #salt
18:52 Eureka MTecknology: in theory ;)
18:52 mgw joined #salt
18:52 MTecknology I think this is a bug
18:54 UtahDave joined #salt
18:55 CeBe1 joined #salt
19:00 MTecknology I lied... it's the opposite
19:01 ingwaem joined #salt
19:06 dvogt joined #salt
19:08 MTecknology well... there we go
19:08 dvogt joined #salt
19:08 MTecknology {% if not data %} {% set data = {'id': data['id'] } %} {% endif %}
19:09 MTecknology Eureka: ^ that's the hack to make it work... :(
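A slightly more defensive version of the same idea, falling back to the nested payload only when the syndic wrapper is present; the target and script path are placeholders, and the cmd.cmd.run form follows the older reactor syntax used elsewhere in this discussion.

    {% set payload = data['data'] if 'data' in data else data %}
    bh_unlock:
      cmd.cmd.run:
        - tgt: 'salt.example.com'
        - arg:
          - /usr/local/sbin/bh_unlock {{ payload['id'] }}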
19:12 taterbase joined #salt
19:12 bhosmer joined #salt
19:13 ingwaem Greetings all…anyone familiar with the pillar mysql module?
19:13 cheus Does anyone know of the special 'sls' requisite also works in 'require_in' statements?
19:13 TheThin__ joined #salt
19:13 cheus s/of/if/
19:14 dstokes cheus: it doesn't
19:14 dstokes cheus: but it should
19:14 cheus dstokes, Ta.
19:14 bmatt I wonder how that'd work
19:14 bmatt "don't include this sls until..."
19:14 dstokes bmatt: same way require works, but inverse
19:15 bmatt right, but are includes called in the same way? or are they called by the parser?
19:15 ajolo joined #salt
19:15 dstokes include runs regardless. require{_in} manipulates state order afaik
19:16 forrest joined #salt
19:20 dstokes also, salt development vagrantfile if anyone's interested https://gist.github.com/dstokes/4887b2a631c2cc93a33c
19:20 Eureka MTecknology: Thanks ;)
19:21 ingwaem very cool thanks dstokes
19:21 UtahDave joined #salt
19:21 jbub joined #salt
19:21 UtahDave left #salt
19:21 to_json joined #salt
19:21 dstokes ingwaem: *thumbs up*
19:26 forrest http://www.infoq.com/articles/virtual-panel-cfg-mgmt-tools-real-world went live today.
19:27 forrest I should have found a better picture to give them :\
19:28 Leech joined #salt
19:33 dvogt joined #salt
19:37 mapu joined #salt
19:38 MTecknology Eureka: eh... there's something more screwy going on
19:38 dvogt joined #salt
19:39 Eureka mt MTecknology ??
19:40 Eureka MTecknology:  ??
19:44 MindDrive joined #salt
19:44 tkharju1 joined #salt
19:45 rjc joined #salt
19:46 mapu joined #salt
19:46 ikanobori joined #salt
19:48 shaggy_surfer joined #salt
19:51 ajolo joined #salt
19:53 gadams joined #salt
19:54 cheus So here's a weird one: how can I ensure that a state that is known to fail will always be executed before some other states without preventing the execution of the other states when failhard=False.
19:56 manfred if failhard is False, it should still run all the other states
19:57 manfred cheus: you can say order=1, and have it be the first state that is run, ahead of everything
19:57 cheus manfred, I just got messages saying 'One or more requisite failed'
19:57 manfred then you have something requiring that state
19:57 manfred remove the requires, and just set order: 1
19:58 manfred failhard quits the whole state run when something fails
19:58 KyleG1 joined #salt
19:58 manfred if you have a requires in there, the state is required to be successfull for the other states to run, but again, order: 1 overrides everything and will just be the first thing to run
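A sketch of the order: 1 idea: a deliberately failing notice that runs first but is required by nothing, so with failhard off the rest of the run still proceeds (the message text is made up).

    insecure-default-warning:
      cmd.run:
        - name: 'echo "this formula is relying on an insecure default, please override it in pillar" && false'
        - order: 1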
20:00 cheus manfred, Right but in this case I don't want it to run before *every* state in a highstate. Here's the objective: there's a nasty little bit of insecurity in a formula that I'd like to fix. I've provided a workaround that's more secure but want to notify formula users that the problem should be fixed. The original idea was to use a test-state that failed if the proper conditions weren't met and the workaround kicked in. Is there a better way to send that message to the event log?
20:02 manfred there is not an after: variable, maybe there should be, where it would say execute after this one whether it succeeds or not
20:03 cheus manfred, Or some general state that can just function as a way to write the event log.
20:03 manfred to the log or send events?
20:03 manfred there is an event state
20:03 manfred in the next release
20:03 manfred http://docs.saltstack.com/en/latest/ref/states/all/salt.states.event.html
20:04 cheus manfred, Sorry, meant state output.
20:04 manfred there isn't anything that does that afaik
20:04 dstokes anyone know why relative imports are disallowed in the base jinja renderer class?
20:05 manfred dstokes: imports or includes? afaik, it worked? can you provide an example?
20:06 HACKING-TWITTER joined #salt
20:06 colinbits joined #salt
20:06 KyleG joined #salt
20:06 KyleG joined #salt
20:06 viq Can I tell execution modules to use pillar data? specifically, something like "salt \* cmd.run something runas=salt['pillar.get']('desktop:user')" ?
20:07 dstokes manfred: imports https://github.com/saltstack/salt/blob/develop/salt/utils/jinja.py#L100,L105
20:07 manfred oh
20:07 manfred i have no idea
20:07 HACKING-TWITTER joined #salt
20:08 gadams joined #salt
20:08 to_json joined #salt
20:08 dstokes attempting to `from '../thing.sls' import thing` fails w/ template not found
20:08 manfred dstokes: my only guess would be that the jinja is rendered on the minion and it might not transfer all the files, that would just be my guess though
20:08 dstokes manfred: if jinja is rendered on the minion, how does master know which files to send?
20:09 dstokes or does it just send all..
20:09 manfred it sends the states that are applied to it, and then i think the minion asks for more files if they are needed?
20:09 HACKING-TWITTER joined #salt
20:09 manfred i am not certain, but i thought the jinja was rendered on the minion
20:09 cheus manfred, You're right, except in pillar
20:09 manfred right
20:10 dstokes sounds like both cases allow for the relative files to be included
20:10 manfred dstokes: i don't know that it knows how to ask for them if they are in jinja.
20:10 dstokes the use case is reusing macros across pillars & states
20:10 dstokes how do non-relative imports in jinja work?
20:10 manfred cheus: i knew pillars were on the master, which was why https://github.com/saltstack/salt/issues/13886 worked i thought
20:10 dstokes same thing right?
20:10 aquinas joined #salt
20:10 manfred it is not
20:11 stevednd manfred: any further thoughts on how one might template an orchestration file? In my case I have an orchestration file that more or less generically defines the way an application is deployed. Right now I have copy/pasted the config for each app.
20:11 manfred stevednd: i have not, been superbusy
20:11 HACKING-TWITTER joined #salt
20:11 manfred dstokes: because pillars have to be known exactly on the master, because you put sensitive information in it
20:11 manfred dstokes: highstates and states are rendered to lowstate data on the minion
20:11 stevednd I suppose I could set some vars in each specific app file, and then jinja include a base orchestration file
20:11 cheus dstokes, I wouldn't be surprised if it had to do with how multiple backends get compiled into a single state tree
20:11 dstokes right, but macro imports in state files work just fine (as long as they're non-relative)
20:12 dstokes i figure there's a good reason for it, just can't find it ;)
20:13 manfred dstokes: looks to be added a long time ago https://github.com/saltstack/salt/blame/develop/salt/utils/jinja.py#L100
20:13 dstokes s/non-relative/within top.sls dir/
20:13 manfred https://github.com/saltstack/salt/commit/2d77d969d3e7fe302d4aaf29823150faa8f1c051#diff-d6ca1847d2c73f1d435155fd1f5d3cc9L72
20:14 dstokes manfred: yeah, saw that. unfortunately no explanation for the relative change
20:14 manfred and i have no idea who that is that commited it
20:15 dstokes manfred: looks like jinja mimics the behavior.. http://code.nabla.net/doc/jinja2/api/jinja2/loaders/jinja2.loaders.split_template_path.html
20:15 manfred dstokes: i would just make an issue request and ask why. cause i have no idea
20:15 viq forrest: starting, interesting read
20:15 dstokes manfred: got it, thx
20:16 djaime joined #salt
20:17 krow joined #salt
20:17 manfred dstokes: good luck, i am wondering why now as well :)
20:18 forrest viq, yea it has multiple tools on there which is good
20:18 oz_akan__ joined #salt
20:19 dstokes manfred: yeah. it's annoying when you get stuck w/ pillar/macro.sls & states/macro.sls b/c you can't import ../macro.sls
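In other words, imports resolved from the top of file_roots do work, so the common workaround is to keep shared macros somewhere under the state tree and import them by that path; the file names here are assumptions.

    {# works: the path is resolved from file_roots, not from the importing file #}
    {% from 'macros/common.sls' import vhost with context %}

    {# fails with TemplateNotFound: relative traversal is rejected #}
    {# {% from '../macros/common.sls' import vhost with context %} #}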
20:20 bmatt is there a way to ask a minion about its job history, in the same way I can list_jobs and lookup_jid on the master?
20:20 bmatt (I ran salt-call and would like to inspect the results)
20:21 manfred bmatt: they are all stored on the master in /var/cache/salt/master iirc
20:22 HACKING-TWITTER joined #salt
20:22 manfred i am not aware of how to query them though
20:22 MTecknology .... woah
20:23 MTecknology salt-master -l trace  with 400 connected minions.... woah
20:24 MTecknology dangit... and even that isn't enough to tell me why this reactor file is failing to render.
20:25 E1NS joined #salt
20:25 viq Can I tell execution modules to use pillar data? specifically, something like "salt \* cmd.run something runas=salt['pillar.get']('desktop:user')" ?
20:26 manfred viq: not exactly but
20:26 PLATOSCAVE joined #salt
20:26 manfred well...
20:26 Eureka manfred: Hey. I figured out my reactor issue from yesterday. It looks like a bug in the minion =/
20:26 manfred Eureka: cool
20:26 manfred yeah i saw the bug report
20:27 viq manfred: yes?
20:27 Eureka manfred: you are on top of it ;)
20:27 dstokes manfred: https://github.com/saltstack/salt/issues/13889
20:27 manfred viq: uhhhh... try... runas='__salt__["pillar.get"]("desktop:user")' ... that might work, but i would be kind of surprised if it did
20:27 alanpearce joined #salt
20:28 viq manfred: ok, thanks
20:28 manfred viq: viq salt \* cmd.run 'su - $(salt-call pillar.get desktop:user) -c "something"'
20:28 manfred that would be the other way to do it
20:29 manfred viq: pass it through su - with a salt-call to it that doesn't expand until the command is run on the minion, and that might not work, because it should sanitize some things... maybe
20:29 manfred dstokes: cool, thanks
20:29 manfred aight, usa is on, I will be back after the match
20:29 manfred o/
20:29 stevednd anyone have any thoughts on this...I have a state file that updates ssl certs on machines, and needs to restart whatever web server(s) may be on the machine so they pick up the cert changes. I wanted to use watch_in, but that would require the named service state to be included or used somewhere at the same time as the cert update sls, wouldn't it? I thought about just triggering it myself from the cert update sls, but then I potentially restart the web server twice, which seems poor
20:29 viq manfred: indeed the __salt__ one didn't work
20:30 Eureka stevednd: You should be able to use a 'reload' in the service watch to define it should reload rather than restart
20:30 Eureka stevednd: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.service.html
20:32 viq manfred: neither really did the other one
20:32 Eureka stevednd: That or you could have a file that only is changed after your certs are *all* updated and have that as part of your watch
20:32 stevednd Eureka: yes, I know about reload. Some of the web servers don't support reload I don't think.
20:33 krow joined #salt
20:33 alanpearce I have a minion that fails to fork with "cannot allocate memory" unless I disable multiprocessing, but it's got a reasonable amount available (cached). It's not even the smallest one I have. Any ideas why this might be?
20:33 stevednd Eureka: the problem is that the cert file potentially runs on its own, or as part of another state file. I'm not sure how to ensure reload/restart only once, no matter which way it is used
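For reference, the watch_in shape under discussion (paths and service name are placeholders). The service state does have to be part of the same run, as stevednd notes, but requisites are aggregated, so the service is only restarted once per run even if several certs change.

    /etc/ssl/private/example.com.pem:
      file.managed:
        - source: salt://ssl/example.com.pem
        - watch_in:
          - service: nginx

    nginx:
      service.running:
        - enable: True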
20:34 TheThing alanpearce: Solution to 80% of computer problems: Increase RAM :>
20:34 Eureka stevednd: ah, i see. Not sure at the moment o.0 Still learning a bunch of this myself.
20:34 manfred viq: the second one should work, as long as salt cmd.run module executes a subshell correctly, which i am not sure it does, you might have to tweak it a bit, because i threw it together in just like 5 seconds, but abusing su - to do subshell stuff
20:34 schimmy joined #salt
20:35 dstokes for anyone interested, symlink to overcome relative import restriction ftw
20:35 manfred but yeah, soccer soccer soccer soccer , peace o/
20:35 alanpearce TheThing: not really helpful :(
20:35 dstokes hax0r
20:35 thayne_ joined #salt
20:35 manfred dstokes: 1337 h4x0r
20:35 manfred bye! i will be back in... after the match
20:36 ajolo joined #salt
20:37 alanpearce http://pastebin.com/0g78fx65
20:37 alanpearce Even test.ping doesn't work :(
20:37 schimmy1 joined #salt
20:41 viq manfred: I guess I am not that bothered at the moment, was just hoping there is a known way to do it
20:42 mgw joined #salt
20:45 bhosmer joined #salt
20:50 rjc joined #salt
20:51 ingwaem anyone familiar with the pillar mysql module? http://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.mysql.html#module-salt.pillar.mysql
20:51 to_json joined #salt
20:51 kermit joined #salt
20:53 aquinas joined #salt
20:56 bhosmer joined #salt
20:56 ggoZ joined #salt
20:58 toastedpenguin1 joined #salt
20:58 TaiSHi o/ all
21:01 oz_akan_ joined #salt
21:01 steveoliver oy … state.highstate has been giving error: Function: no.none; Result: False; Comment: Unknown yaml render error :(
21:02 TheThing steveoliver: Better post the state
21:02 TheThing steveoliver: on pastie.org or something
21:02 steveoliver well i'm not sure which state it's complaining about
21:02 steveoliver salt <id> state.highstate is all I'm running
21:02 TaiSHi ingwaem: nom
21:03 ingwaem hi :)
21:03 TheThing steveoliver: Run the agent in debug mode and try again
21:03 TheThing steveoliver: Or the server
21:03 TheThing steveoliver: Or both :D
21:03 steveoliver k
21:03 TaiSHi How're you doing ?
21:03 ingwaem doing good thanks :)
21:03 ingwaem hoping to get to the bottom of my pillar mysql issue but haven’t had much luck yet
21:06 TaiSHi I'm going to keep testing my new failsafe options to see I get around the SaltReqTimeoutError
21:06 ingwaem ahh ok nice
21:07 steveoliver TheThing: -l debug gives me nada on the "Unknown yaml render error."
21:08 steveoliver both -master and -minion side have no mention of any issues
21:09 steveoliver …all --version 2014.1.5
21:12 steveoliver TheThing: I'm just enforcing each state manually to test which breaks :/
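One low-impact way to narrow that down is to render the states on the minion without applying them and see which SLS blows up (the state name below is a placeholder).

    salt-call state.show_sls mystate -l debug
    # or compile the whole highstate at once
    salt-call state.show_highstate -l debug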
21:17 seme joined #salt
21:17 rallytime joined #salt
21:18 seme hi guys... can anyone help me figure out how to set this up.  I'm trying to execute a batch file stored on a unc path by doing a pkg.install... my minion is a windows box... I can install various packages that are stored on the salt server but I want to refer to a central repo instead of the salt server
21:18 xintron joined #salt
21:18 forrest seme, I don't know if you can do that. I think you have to set up that winrepo file
21:18 forrest seme: http://docs.saltstack.com/en/latest/topics/windows/windows-package-manager.html
21:19 forrest maybe you could change the installer to be available as an http path?
21:19 seme yeah I have that set up and it works for packages stored in the salt master
21:19 seme hrm...
21:19 forrest seme, yea here we go
21:19 forrest https://github.com/saltstack/salt-winrepo/blob/master/filezilla.sls
21:19 forrest that shows using http
21:19 forrest so you could treat it like a linux repo, and just make it available over http
21:19 forrest you could just do a directory listing with some webserver easily
21:20 seme aah... so http is possible and salt:// paths but no smb or file
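A sketch of a winrepo entry that points at an http installer, loosely modelled on the filezilla example forrest linked; the package name, version, URL, and flags are all made up.

    myapp:
      '1.2.3':
        full_name: 'MyApp 1.2.3'
        installer: 'http://repo.example.com/win/myapp-1.2.3-setup.exe'
        install_flags: ' /S'
        uninstaller: '%ProgramFiles%\MyApp\uninstall.exe'
        uninstall_flags: ' /S'
        msiexec: False
        reboot: False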
21:21 forrest seme smb might be possible
21:21 forrest I don't know
21:21 seme I'll give it a shot
21:21 seme :)
21:21 forrest ok cool, if it works could you add it as an example to the repo?
21:22 forrest seme, also if you're running a batch file, can you not just do cmd.run?
21:22 seme thats what I was thinking
21:23 seme but I was going to try and go the route of the installer first
21:23 seme I kind of like the idea of keeping it consistent
21:23 forrest yea I agree
21:24 forrest if it works that would be interesting
21:24 kermit joined #salt
21:25 to_json joined #salt
21:29 countdig1 joined #salt
21:30 Ryan_Lane joined #salt
21:31 yomilk joined #salt
21:31 maxleonca joined #salt
21:31 maxleonca Hello, need a bit of help here.  I'm starting a fresh install of salt with 2014.1-5 on CentOS 6.  But I have a problem: the minions report "No Top file or external nodes data matches found".
21:32 maxleonca Indeed my top.sls is inside /srv/salt
21:32 maxleonca any ideas?
21:32 maxleonca and no debug is not giving me anything.
21:32 higgs001 joined #salt
21:32 gothix_ joined #salt
21:32 maxleonca I checked the master and the minion config and nothing looks out of place.
21:33 babilen maxleonca: Could you paste the output of "salt-key -L", you top.sls and applicable states to http://paste.debian.net ? (anonymize as needed, but don't remove important bits, but replace them with, say "example.com" or so)
21:34 azylman joined #salt
21:35 maxleonca Hi Babilen, all minion keys have been accepted and I only have one state loaded right now.  the files are inside /srv/salt/common/ntpd and in the top.sls under base: '*' only - common/ntpd is listed.
21:35 babilen maxleonca: Okay, so you cannot paste them? Okay, I work with what you give me.
21:36 maxleonca just a sec please
21:36 shaggy_surfer joined #salt
21:36 babilen maxleonca: Please run "salt-run fileserver.update" -- Does that change something? Could you paste^Wsummarize your master config?
21:37 babilen maxleonca: And is that /srv/salt/common/ntpd or /srv/salt/common/ntpd.sls or /srv/salt/common/ntpd/init.sls ?
21:39 maxleonca just a sec, for some reason the tree command is not working
21:39 maxleonca but all states are set using init.sls
21:40 babilen Sorry, it is just so much easier to support someone if you see *actual* data rather than some idealized version of it.
21:40 babilen But then it is also pretty late here and I won't be able to stick around for long, so there is a tradeoff :-/
21:41 maxleonca I totally understand.
21:41 maxleonca thank you for your patience
21:42 maxleonca @babilen http://pastebin.com/7u1GnejS
21:42 babilen Thanks
21:43 maxleonca and the fileserver.update didn't change anything.
21:44 svx joined #salt
21:45 stevednd maxleonca: common.ntpd
21:45 babilen maxleonca: You want "- common.ntpd" not "common/ntpd"
21:46 babilen maxleonca: I would also recommend to fix your whitespace in your top.sls to always use two spaces for each indentation level. So you want the file to look like: http://paste.debian.net/107651/
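That is, a top.sls along these lines, with the dotted state name and two-space indents:

    base:
      '*':
        - common.ntpd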
21:46 rlarkin joined #salt
21:47 maxleonca yes I do have 2 spaces per indent
21:47 maxleonca or at least I think so, thanks for the recommendation.
21:47 maxleonca and I get the exact same message
21:48 stevednd from your paste, you '*' line looks to only have one leading space
21:49 maxleonca yes you were right, corrected now.
21:49 babilen So, is it doing what it should now?
21:49 maxleonca no still the same "No Top file or external node data matches found":
21:50 maxleonca http://pastebin.com/yKB9mXtH
21:50 * babilen begrudgingly opens another pastebin.com link
21:51 babilen maxleonca: Err, you want to run "salt '*' state.highstate" if you want to run it on your minions rather than executing it locally on a (probably not running) minion on your master.
21:52 maxleonca same thing, I'm runnning all tests from the master.
21:53 maxleonca and all minions are running
21:53 stevednd maxleonca: maybe a silly question, but do you have file_roots: set in /etc/salt/master?
21:53 maxleonca I did check that beforehand.
21:53 bhosmer joined #salt
21:53 stevednd file_roots:\n  base:\n    - /srv/salt
21:53 babilen stevednd: I am assuming that maxleonca does not use a changed master config as none was pasted. That might have been in error though.
21:53 maxleonca I tried with default and then I did set up specifically the file_roots
21:54 babilen Anyway, I'm out. All the best and may you find your, probably tiny, mistake soon.
21:54 babilen maxleonca: And you tried the other command?
21:54 ghanima joined #salt
21:54 maxleonca Indeed I have
21:54 ghanima hey guys
21:56 ghanima question for you: I am running the salt module cmd.retcode on a Nagios nrpe plugin. I am trying to get the retcode of the operation and unfortunately it's not working for me.... What I mean by that is that when executing the plugin and setting warn and critical, when the critical is tripped the exit shell code status is still 0
21:56 ghanima I am doing something like the following
21:56 ghanima sudo salt '*' cmd.retcode "/usr/lib64/nagios/plugins/check_procs -p 1 -C collectd -c 1:1; echo $?"
21:57 ghanima the module output is hostname following by 0
21:57 ghanima any thoughts?
21:57 bhosmer_ joined #salt
21:58 stevednd ghanima: the last command there is echo $? which is likely succeeding, so I would assume salt is getting the return code from there which would be 0
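That is, dropping the trailing echo lets cmd.retcode report the plugin's own exit status:

    sudo salt '*' cmd.retcode "/usr/lib64/nagios/plugins/check_procs -p 1 -C collectd -c 1:1"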
21:59 ghanima It seems to report the right status when I do a cmd.run_all
21:59 ghanima stevednd: you're right that was it
21:59 ghanima my basd
21:59 ghanima sorry my bad
21:59 stevednd ghanima: cool
22:01 druonysus joined #salt
22:02 jnials joined #salt
22:05 active8 joined #salt
22:06 azylman joined #salt
22:06 ghanima Another question
22:07 fllr joined #salt
22:07 jcsp1 joined #salt
22:08 ghanima I want to create a state file that looks at the content of a directory for *.sh files and executes those files on an interval
22:08 ghanima When I try to approach this I am not able to figure out how to store the results of the ls and iterate over the results and report the retcode
22:08 ghanima is this possible?
22:08 forrest I'd just use cmd.run for that
22:09 ghanima forrest: was that for me
22:09 forrest yes
22:10 ghanima forrest: but if I did cmd.run I have to specify the full path. I am not able to do (I can't think of another term) a compound statement that executes an ls for all the scripts and then another cmd executing the list of scripts
22:10 ghanima Thinking I am going to have to create my own module and then execute it through the state cmd.run
22:11 ingwaem ghanima: yea I think you’ll need one salt command to get a list of all the items in that directory, then use that result to create your loop
22:11 forrest what?
22:11 oz_akan_ joined #salt
22:11 forrest just do for i in $(ls /path/to/dir); do echo $i; done;
22:11 ingwaem Oh ok
22:12 ghanima forrest: My ultimate goal would be to record the results of each script execution as a high state
22:13 forrest ghanima, what?
22:13 forrest you want to somehow take a script, and take it's output and record that to a highstate?
22:13 ghanima sorry that meant to say record the results of each script execution as a salt highstate
22:13 forrest as it's OWN salt highstate?
22:13 forrest or just in the highstate
22:13 ghanima forrest: that is correct
22:13 ghanima Oh sorry in highstate
22:14 forrest you could use a jinja for loop then
22:14 forrest and create a bunch of IDs
22:14 forrest easy enough
22:15 forrest ghanima, https://gist.github.com/UtahDave/3785738
22:15 forrest that's an example where Dave loops through multiple items
22:15 forrest you'd have to adapt it of course
22:15 forrest but I THINK you could do it
22:15 forrest you could also use pydsl, might be easier to loop through like that
22:16 kermit joined #salt
22:16 ghanima forrest:: your referring to either group.sls or users.sls right?
22:16 forrest yep
22:16 forrest just as an example to give you ideas
22:16 forrest I can't remember if you can run shell commands inside of jinja
22:16 forrest I haven't done that
22:16 forrest thus why I'd suggest to use the pydsl, then you can just use python and that makes it way easier
22:17 ghanima forrest:: is there anything special I have to do in salt to have that template engine recognized pydsl that is
22:18 forrest http://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.pydsl.html
22:18 forrest just call it with a shebang, done and done
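For the Jinja route, execution modules are available at render time, so a loop over the directory listing is possible (the directory path is a placeholder); the pydsl renderer forrest links is the pure-Python alternative.

    {% for script in salt['cmd.run']('ls /opt/checks/*.sh 2>/dev/null').splitlines() %}
    run-{{ script }}:
      cmd.run:
        - name: {{ script }}
    {% endfor %}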
22:19 ghanima forrest: OMG has this always been available? this makes it so much easier than worrying about the jinja abstraction
22:19 forrest ghanima, uhh it's been available for a while
22:20 forrest I don't use it that often because not many people ask questions about it
22:20 ghanima forrest: I don't even have to write any output module if I use pydsl
22:20 forrest Yea
22:20 ghanima I do everything with cmd.run
22:20 forrest that's what's nice about it
22:20 ghanima forrest: Sorry lightbulb moment need to read the entire docs instead of the highlights
22:20 DaveQB joined #salt
22:20 ghanima thanks for that
22:20 forrest lol yea np
22:27 maxleonca Hi forrest, any chance you can give me a hand? Fresh salt install only 3 minions not finding top.sls http://pastebin.com/HBdD9kUR
22:27 maxleonca babilen tried to help me but it is very late in the Netherlands.
22:27 maxleonca and hello btw
22:27 yomilk joined #salt
22:27 forrest I'll take a quick look to see if there is anything obvious, give me a minute
22:27 maxleonca bad manners on my side
22:27 forrest *shrug* no worries
22:27 maxleonca sure thing
22:27 maxleonca thank you.
22:27 forrest yea np, probably won't spot it anyways :P
22:28 forrest maxleonca, you have the same version of salt on the master and these new minions right?
22:28 maxleonca @forrest, default /etc/salt/master
22:28 forrest I mean the package itself
22:29 maxleonca yes, sorry I missed that on the pastebin they are all 2014.1.5
22:29 maxleonca on Centos 6
22:30 forrest maxleonca, can you change common/ntpd to common.ntpd?
22:31 maxleonca damn, yes I did as babilen suggested same result
22:31 forrest exact same error?
22:31 maxleonca exact same thing
22:31 forrest ok, can you apply states manually?
22:31 azylman joined #salt
22:31 forrest they work if you just do salt 'adm01*' state.sls common.ntpd
22:31 azylman_ joined #salt
22:32 forrest maxleonca, ^ sorry forgot to tag you
22:32 maxleonca @forrest yes, I can.
22:32 forrest ok
22:33 mateoconfeugo joined #salt
22:33 maxleonca @forrest
22:33 maxleonca hold on, I ran that from the minion
22:33 maxleonca a sec please
22:33 forrest ok
22:33 forrest the top is changed to common.ntpd on the master right?
22:33 forrest not a secondary on the minion, or the cache
22:34 maxleonca yes and yes I can run states manually from the master
22:34 mateocon_ joined #salt
22:34 rlarkin|2 joined #salt
22:34 forrest interesting
22:34 maxleonca that is not the word that came to mind.
22:35 forrest let's try salt 'adm01' state.highstate -l debug
22:37 phil__ joined #salt
22:38 forrest maxleonca, also as another test, create /srv/salt/ntpd and drop the init and conf into that directory. Then modify the top to just have ntpd
22:38 forrest and see what happens
22:39 maxleonca ok
22:39 maxleonca just a sec please
22:39 forrest yea no rush, working on other stuff anyways
22:39 forrest While they pay me the big IRC bucks, I still have to hold down a day job
22:39 maxleonca pastebin?
22:39 forrest sure
22:40 aw110f joined #salt
22:41 maxleonca http://pastebin.com/04dEFCkk  append at the end
22:41 forrest well that's worthless
22:41 to_json joined #salt
22:41 forrest alright make that directory and drop the files in, then make the top change and we can see what happens
22:41 smcquay joined #salt
22:42 maxleonca same error
22:42 maxleonca :S
22:42 forrest that.... is confusing
22:42 forrest I'm not sure then, the only other thing I can think as a 'quick fix' is to try and downgrade a release and see if it still happens
22:43 HACKING-TWITTER joined #salt
22:43 maxleonca I'll try that thank you
22:43 forrest yea let me know if it works
22:43 forrest if it does, then you might want to create an issue about it not working on 2014.1.5
22:44 schimmy joined #salt
22:46 seme quit
22:46 schimmy1 joined #salt
22:47 beneggett joined #salt
22:49 bmatt no! nevar!
22:50 krow1 joined #salt
22:51 gildegoma joined #salt
22:55 smcquay just out of curiosity, and I am guessing those who do are in the minority, anyone run salt-master as non-root?
22:56 forrest smcquay, we have a whole doc on it http://docs.saltstack.com/en/latest/ref/configuration/nonroot.html
22:58 smcquay Hmm, yeah I did things differently, including setting root_dir to some place in opt.
22:58 smcquay I am more curious how curious/exotic of a configuration this would be.
22:59 pressureman joined #salt
23:02 jnials_laptop joined #salt
23:03 thayne_ joined #salt
23:12 jcsp joined #salt
23:15 beneggett joined #salt
23:22 Outlander joined #salt
23:22 smcquay joined #salt
23:24 ipalreadytaken joined #salt
23:24 sxar_ joined #salt
23:26 svx joined #salt
23:28 Leech joined #salt
23:30 seblu joined #salt
23:32 yomilk joined #salt
23:33 mosen joined #salt
23:34 N-Mi joined #salt
23:34 N-Mi joined #salt
23:35 seblu joined #salt
23:38 notpeter_ Good afternoon all, I've gotten salt-cloud setup with my provider (rackspace) and created profiles for the instance sizes/images I'll be using and was able to spin up an instance..
23:38 notpeter_ Salt-cloud looked like it got salt and everything installed, but I can't actually reach the machine and there are no minion keys waiting for me in salt-key.
23:38 forrest it should have auto accepted the key
23:38 forrest what do you see with salt-key -L
23:39 notpeter_ Just the existing hosts. Nothing new.
23:39 forrest that's odd
23:40 forrest can you ssh into the system?
23:40 forrest see what the conf looks like to confirm it's pointing at the correct master?
23:41 notpeter_ I can reset the root pw with rackspace and then connect in and see. brb.
23:42 bhosmer joined #salt
23:43 notpeter_ forrest: got in, my master is listed in /etc/salt/minion
23:44 smcquay joined #salt
23:44 notpeter_ forrest: Figured out it. Sorry for the waste of time.
23:44 forrest np, what was it?
23:45 notpeter_ name showed up as something different than I expected in the key list and I just looked over it.
23:46 forrest ahh ok
23:46 ajw0100 joined #salt
23:49 ggoZ1 joined #salt
23:50 ggoZ1 joined #salt
23:52 seblu42 joined #salt
23:54 mgw joined #salt
