
IRC log for #salt, 2014-06-03


All times shown according to UTC.

Time Nick Message
00:01 nicknac joined #salt
00:04 nicknac Hello, so I have been playing around with saltstack and was wondering if anyone has deployed salt over the WAN / internet, and what kind of security risks come with exposing the salt master to the internet.
00:05 ZombieFeynman joined #salt
00:07 __number5__ nicknac: don't expose your salt master to the Internet, try to at least use openvpn or something similar
00:07 mgw joined #salt
00:08 allanparsons any idea how i can loop through the output of a salt module (for example: mount.active)
00:08 forrest Agreed, there's been a lot of discussion with this before regarding salt in a 'closed network' environment even within the cloud, where salt is only able to speak with certain IP ranges, etc.
00:08 allanparsons i tried: {% for mount_point in salt['module.run']('mount.active') %}
00:09 manfred allanparsons: why not just salt['mount.active']
00:09 manfred allanparsons: why not just salt['mount.active']()
00:09 allanparsons lemme try in my jinja
00:09 allanparsons 1 sec
00:09 manfred module.run is a state
00:09 manfred not a module
00:09 manfred so you can't run it in jinja like that
00:09 redondos joined #salt
00:09 allanparsons :(
00:09 manfred allanparsons: that salt['']() is essentially salt-call on the minion that it is running on
00:09 forrest allanparsons, a for loop worked fine for me, granted it breaks up each item, but you could parse the output
00:10 manfred so, salt['mount.active']() should do what you actually want
00:10 forrest I did for i in $(salt 'minion' mount.active); do echo $i; echo ---; done
00:10 forrest in a state, manfred's solution should work though
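
A minimal sketch of the loop allanparsons is after, using manfred's suggestion: call the execution module directly via salt['mount.active']() and iterate over the mount points it returns (the state IDs and the echo command are only illustrative):

    {% for mount_point in salt['mount.active']() %}
    report-mount-{{ loop.index }}:
      cmd.run:
        - name: echo "active mount: {{ mount_point }}"
    {% endfor %}
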
00:12 TaiSHi Main issue tho, in 'private' networks like digital ocean
00:12 TaiSHi all your datacenter neighbors can see you
00:12 forrest right
00:12 forrest honestly in situations like that I suggest to either run just salt-minion, or use salt-ssh
00:13 ajolo_ joined #salt
00:14 manfred so, it isn't that bad ... all of your data is  encrypted with keys before it is transmitted, but it is better to use a private network whenever possible
00:14 manfred TaiSHi: unless you are using openstacks private networks inside neutron
00:14 manfred i am not sure that digital ocean has those yet though
00:14 manfred then everything is routed by ovs
00:14 manfred and is completely separated (if they support security groups too...)
00:14 TaiSHi manfred: They don't, all of your datacenter neighbors can see you
00:15 manfred lame
00:15 manfred rackspace private networks support that
00:15 manfred ¯\(°_o)/¯
00:15 TaiSHi I do work on a cloudstack env and private networks are truly private
00:15 allanparsons @manfred - that worked!
00:15 __number5__ or aws VPC
00:15 allanparsons _+1
00:15 manfred yeah, or aws
00:15 manfred and their security group stuff
00:16 allanparsons if you're using VPC
00:16 forrest yea but those cost money
00:16 forrest I don't want to actually spend money...
00:16 allanparsons we actually bootstrap a minion at boot using userdata
00:16 forrest :P
00:16 TaiSHi Still, nothing that some iptables can't fix
00:16 manfred not if you work there :/ yay for free cloud servers
00:16 allanparsons Cloudformation + AutoScale Groups + UserData = amazing
00:16 manfred such autoscale
00:16 manfred wow
00:17 jhulten joined #salt
00:17 TaiSHi I'm making my own autoscale within digital ocean :P
00:17 TaiSHi for a friend even, not earning any money
00:17 allanparsons ohg od.
00:17 allanparsons oh god*
00:17 manfred that... is crazy
00:17 __number5__ I hate cloudformation, working hard to replace them with salt+boto
00:17 allanparsons well....
00:18 allanparsons i'll replace cloudformation once salt-cloud supports autojoining nodes to an ELB
00:18 allanparsons and creating dns entries
00:18 allanparsons and creating iam roles
00:18 allanparsons and provisioning elbs
00:18 allanparsons out of the box
00:18 TaiSHi manfred: it aint that crazy, salt-cloud pretty much does the trick tbh
00:18 forrest allanparsons, heh
00:18 manfred i would be surprised if it added dns entries, but... if I ever leave rackspace and have to choose another provider... i would be adding the load balancer support
00:19 philip_ joined #salt
00:19 manfred TaiSHi: you should look at the digital ocean driver, and split out the request_instance function like we have done with ec2, nova, and openstack, so that you can just use the salt reactor to bootstrap your servers
00:20 MTecknology "Wait, this is happening now." "I'll create a GUI interface in Visual Basic. See if we can track them real time." <-- said CSI
00:20 philip_ hey salters, I've got a weird one right now. fresh ubuntu 14.04, git latest master install upstart scripts aren't actually killing the master process. Anyone seen this happen to them?
00:20 forrest philip_, what version of salt
00:21 philip_ forrest: let me pull it back up real fast
00:21 philip_ im installing from the bootstrap script if that helps
00:22 philip_ curl -L http://bootstrap.saltstack.org -o install_salt.sh && sh install_salt.sh -M -N git develop  # Install the master
00:22 forrest you're pulling down develop?
00:22 happytux joined #salt
00:22 forrest I don't know which version is in the ppa right now
00:22 forrest can't remember
00:22 mgw joined #salt
00:23 manfred it is 2014.1.4 in the ppa
00:23 TaiSHi MTecknology: lel...
00:23 bhosmer joined #salt
00:23 TaiSHi manfred: indeed I did look at reactor, still much of my idea is a TIP (thought in progress)
00:24 philip_ of course I pick a fine time to lock myself out of that box after a reboot
00:24 forrest philip_, check in the upstart script to see whether it's still using su -c
00:24 manfred TaiSHi: this is the formula to use with the nova driver https://github.com/saltstack-formulas/salt-cloud-reactor
00:24 forrest unless you're installing from develop
00:24 forrest in which case *shrug*
00:24 manfred and works with salt-cloud cache
00:24 manfred but that is only in develop
00:24 philip_ ah I took a peek before rebooting, it was just an `exec salt-master` after sourcing the env
00:25 forrest ok that's good then.
00:25 TaiSHi heh, I have that one starred manfred
00:25 manfred kk
00:25 TaiSHi Still I haven't even started to read -cloud docs =(
00:25 TaiSHi so much work to do lately
00:26 TaiSHi My boss quit, my new boss is...
00:26 TaiSHi No comments.
00:26 philip_ forrest im gonna bring up a new box and do nothing but install the master and see what it gives me, brb
00:26 forrest ok, I'm leaving in 15 minutes
00:26 forrest as a heads up
00:27 TaiSHi btw formulas saved my life
00:27 TaiSHi Just updated, need to tryout the new nginx.ng formula
00:27 philip_ oh shouldn't be too long
00:28 philip_ its installing from git develop now
00:28 forrest so you are installing the develop branch
00:28 forrest in that case, you're playing with fire since it receives so many updates
00:28 philip_ no worries if you have to take off in the meantime, was hoping it was a known issue already
00:28 forrest and I have no idea why the process wouldn't die, maybe someone introduced a bug
00:28 philip_ gotcha, better to install from PPA?
00:28 TaiSHi ^yes
00:28 forrest well, not many people are installing from develop, so we don't discuss issues in here for it that often
00:29 forrest I'd suggest to just install from the ppa, I believe the su -c bug was resolved in the latest release for 14.04
00:29 philip_ cool, well I'll install from PPA for now and i'll put in an issue tomorrow if I can track down why develop isn't killing
00:29 philip_ all good, thanks man
00:29 forrest sounds good
00:29 forrest yea np, develop changes so much that it's hard to track down issues sometimes
00:29 TaiSHi Is there an 'out' parameter just to output changed/failed stuff ?
00:30 forrest TaiSHi, I don't think so, I thought I remember someone filing an issue..
00:30 forrest some guy was using a grep that searched on color code or something weird?
00:30 TaiSHi forrest: quiet is too quiet :P
00:30 TaiSHi Ugh... please no...
00:30 forrest there's some work around but I can't remember what it is.
00:30 forrest check the issues, I'm pretty sure there is already one for that
00:30 forrest and yea, I said the same thing with the color search
00:31 TaiSHi Which of the 1283 issues? :P
00:31 TaiSHi It's been a while since I saw a project with so many opens
00:31 forrest lol, yea, just try a search for output, and filter by only open tickets
00:31 TaiSHi Good to see 11900 closed tho
00:31 forrest might work
00:31 forrest yea a lot of tickets get closed quickly, but a lot are also created.
00:32 TaiSHi It's good to see activity
00:32 forrest on the note of open issues, does anyone work at pricing assistant?
00:34 TaiSHi Is that what I think it is ?
00:34 forrest ?
00:34 forrest I don't know, all I know is I've had a PR open with them for over a month
00:34 TyrfingMjolnir joined #salt
00:34 forrest I have no idea what they make or do
00:35 TaiSHi Ah, I give up on the issue search
00:35 TaiSHi not precisely an urgent matter
00:37 dude051 joined #salt
00:37 philip_ hey thanks again as always for the help forrest, gonna go drink some beers
00:37 forrest yea np, have a good evening.
00:38 TaiSHi I envy you
00:38 xt joined #salt
00:38 TaiSHi I finish my shift @ 00 and got univ at 8:30
00:40 forrest heh, I'm outta here, later
00:43 sgviking joined #salt
00:45 shaggy_surfer joined #salt
00:45 bemehow joined #salt
00:46 n8n joined #salt
00:47 Vye_ Anyone know approx. when the next major release of Salt is due?
00:48 manfred Vye: rc1 for 2014.2 is due on june 18th abouts
00:48 otter768 joined #salt
00:53 krow joined #salt
00:54 bemehow joined #salt
00:55 Vye manfred: thanks, looking forward to the gitfs white/blacklist feature. :)
00:55 Ryan_Lane anyone know how to pass in non-named arguments via http://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html ?
00:56 Ryan_Lane also, if they do, can they update the docs with that?
00:57 manfred Ryan_Lane: i think it just passes them as **kwargs
00:57 Joseph Ryan_Lane: do you mean non-keyword arguments i.e positional args?
00:57 Ryan_Lane yes
00:57 Ryan_Lane positional
00:58 manfred Ryan_Lane: even if they are positional, you should still be able to specify them by the name of the variable
00:58 manfred Ryan_Lane: http://paste.gtmanfred.com/3wv/
00:59 Ryan_Lane ah, ok. cool
00:59 manfred yar
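
A minimal sketch of what manfred means, assuming the module.run state form of that era where extra keys are matched against the target function's parameters; test.echo's positional argument is named text, so it can be passed by that name:

    call-test-echo:
      module.run:
        - name: test.echo
        - text: hello from module.run
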
01:05 Networkn3rd joined #salt
01:11 Ryan_Lane if I want to assign the value of a module call to a variable in a state, but I want it to happen after a state runs, is that possible?
01:12 Ryan_Lane I'm trying to do something like this: https://gist.github.com/ryan-lane/45816ae22390b28c06aa
01:12 Ryan_Lane but the jinja set variable call happens before any of the states, so it fails
01:13 cruatta does anyone have an example of using the py renderer in a file.managed?
01:16 cruatta to template the file, not the state
01:21 schimmy joined #salt
01:22 Ryan_Lane manfred, Joseph: any idea on how to handle variable setting order with state ordering?
01:22 Joseph Ryan_Lane: more details please?
01:22 Ryan_Lane or any way to call a state and have it register a variable?
01:23 Joseph what kind of variable?
01:23 Joseph jinja, pillar, grain etc
01:23 Ryan_Lane https://gist.github.com/ryan-lane/45816ae22390b28c06aa <-- trying to do something like this
01:23 Ryan_Lane {% set elb = salt['boto_elb.get_elb_config']('myelb', profile='myprofile') %} <-- that doesn't get called in order. it always gets called first
01:23 Ryan_Lane I need it to occur after the elb is created
01:23 manfred Ryan_Lane: i do not believe so
01:23 manfred Ryan_Lane: so
01:24 manfred Ryan_Lane: might be worth a shot
01:24 Ryan_Lane or I need some way to have a state set a variable...
01:24 manfred Ryan_Lane: jinja is rendered before it starts running the states
01:24 Ryan_Lane sigh. right.
01:24 manfred Ryan_Lane: so that would require you to do two run throughs of the state
01:24 Ryan_Lane that isn't really a good option
01:24 manfred eyah
01:25 manfred yeah
01:25 Ryan_Lane what is this? puppet?
01:25 Ryan_Lane ;)
01:25 manfred i was thinking, if you could use some part of the output for something, in another state
01:25 manfred but i can't think of a way
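
Ryan_Lane's gist isn't reproduced in the log, but the pattern being discussed looks roughly like the sketch below (record names invented, most parameters trimmed). The {% set %} line is evaluated when the sls is rendered, before boto_elb.present has executed, which is why a fresh run has nothing to query:

    myelb:
      boto_elb.present:
        - name: myelb
        - profile: myprofile
        # listeners, availability zones, etc. trimmed

    {# evaluated at render time, i.e. before any state above has run #}
    {% set elb = salt['boto_elb.get_elb_config']('myelb', profile='myprofile') %}

    myelb-dns:
      boto_route53.present:
        - name: myelb.example.com.
        - value: {{ elb.get('dns_name') }}
        - profile: myprofile
        - require:
          - boto_elb: myelb
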
01:25 Joseph Ryan_Lane: watch your mouth...such swear words. filthy!! :)
01:25 schimmy joined #salt
01:25 Ryan_Lane this is standard functionality in ansible
01:25 manfred would be nice to use the output from boto_elb.present['change'] in your other one
01:25 Joseph Nothing beats the awfulness of puppet...nothing
01:25 Joseph Ryan_Lane: them's fighting words
01:25 Joseph can you point me to the ansible documentation that talks about this?
01:26 Joseph I am curious to see what the saltstack gap is and if there is a way to get it closed
01:26 Ryan_Lane Joseph: see register at http://docs.ansible.com/playbooks_variables.html
01:26 manfred Ryan_Lane: i wonder if it would work if the bottom state was in a different file
01:26 Ryan_Lane well, do a search for register
01:26 krow joined #salt
01:27 manfred but i don't think it will
01:27 Joseph Ryan_Lane: so you want to take the result of a command, save it into a variable, and then use a jinja if on the content to determine whether something gets execed in the sls?
01:27 shaggy_surfer joined #salt
01:27 Ryan_Lane yes
01:28 ipalreadytaken joined #salt
01:28 manfred Joseph: he might be making the elb in the state above, so he needs the jinja stuff not to be rendered until after that one runs
01:28 Joseph manfred: i thought the jinja ran before any state was execed?
01:28 manfred that is the problem
01:28 Joseph in other words doing this with jinja in salt wouldn't work at all
01:28 manfred right now he has to run the state twice to have it do everything
01:29 Joseph i thought it worked like raw SLS > jinja renderer > compiled sls > exec sls on minion
01:29 manfred right, so... in the one above, the first time he runs it, it will do only the first state... in the second, it will run both
01:29 Joseph oh....so ryan_lane are you using overstate?
01:29 Ryan_Lane I'm using masterless salt
01:30 Joseph oh ha
01:30 Ryan_Lane overstate only exists for masters
01:30 Joseph i did not know you were masterless
01:30 Joseph hmmm masterless
01:30 manfred Ryan_Lane: i think the only way to do it would be to modify the route: to be definable inside of boto_elb.present
01:30 Ryan_Lane sigh
01:30 manfred in the actual boto_elb state
01:30 Joseph honestly the "easiest" way to do this in salt would be a custom module
01:30 Ryan_Lane I was trying to avoid adding this functionality to boto_elb state
01:30 Joseph ahh
01:31 Ryan_Lane hah, you realize I wrote all the boto_* stuff, right? :)
01:31 Joseph I did not realize this
01:31 manfred yeah
01:31 manfred :)
01:31 Ryan_Lane so these are basically custom modules
01:31 manfred i knew it!
01:31 manfred but i also have been watching almost all of the salt cloud stuff
01:31 Ryan_Lane this is a place where salt is lacking a basic feature ansible has :(
01:31 Joseph Ryan_lane another option is to use different renderer like pyobjects
01:32 Joseph Ryan_Lane: open a feature request then no?
01:32 Ryan_Lane we're exposing this directly to developers
01:32 manfred Ryan_Lane: i agree
01:32 Ryan_Lane there's no way I can use a complex renderer
01:32 Joseph Ryan_Lane: ahhh
01:32 Ryan_Lane it needs to be simple. I guess for now I'll extend boto_elb
01:33 manfred Ryan_Lane: i may be missing something, but i am fairly certain salt can't do that in its current state
01:33 Ryan_Lane it's not a major extension, just an annoying one, since it can't delete records, only add
01:33 Joseph You know i have seen several people talk about the "i need developers to be able to set and manage global state...how do i do that?"
01:33 Joseph and salt doesn't have a good answer to that i don't think
01:34 Joseph Pillars are a poor fit because of how they're controlled through the master. Grains are bad cause they aren't global. So what does that leave you with?
01:34 Joseph that was just a general question for the crowd
01:34 Joseph custom modules i suppose...but that seems silly for handling global state
01:34 manfred Ryan_Lane: elb is a cloud loadbalancer?
01:34 Ryan_Lane manfred: yep
01:35 manfred cool
01:35 Ryan_Lane there's apparently a way to handle this now via states...
01:35 Joseph elb = elastic load balancer?
01:35 Ryan_Lane Joseph: yep
01:35 Ryan_Lane I had brought this up with folks before
01:35 Joseph global state?
01:35 manfred i need to sit down and add openstack's lbaas to the nova and openstack drivers
01:35 Ryan_Lane and apparently in helium it's doable, but it's not really exposed
01:36 manfred Ryan_Lane: I see you followed me back on twitter when I decided to revive my account the other day
01:36 Joseph Ryan_Lane: how does helium handle global state? I didn't see that in any docs.
01:36 Ryan_Lane looking in state.py
01:39 Joseph I am going to email the google group on this one. I wanna know what the deal is with global state and why saltstack doesn't support something like registered variables. That would be very useful i think.
01:39 ml_1 joined #salt
01:39 Ryan_Lane I think it's via the running dict
01:42 Ryan_Lane Joseph: yeah, it definitely would
01:44 Ryan_Lane I think it'll be hard to make it usable via jinja, since it's run before the states, though
01:44 Joseph yes this should not  and cannot  be dealt with via jinja
01:44 * Ryan_Lane nods
01:44 Joseph this would have to be a programmatic change as the compiled SLS are processed in the running dict
01:44 Joseph I think jinja has become a horrible crutch for the salt community
01:44 Joseph iterating through pillar or grains fine
01:44 Joseph anything else and its usually going to end badly
01:44 Ryan_Lane seems that way, yeah
01:44 Joseph especially since so many people way to do specific programmatic things during the actual enforcement of the state. They think they can do that with jinja when of course jinja is done before the execution has even started.
01:45 Joseph want to do
01:45 Joseph Ryan_Lane: if it makes you feel any better, it could be worse. You could be repeatedly testing hadoop upgrades. I am about ready to shoot myself.
01:45 Ryan_Lane :D
01:50 Joseph if the namenode throws another nullpointer at me after metadata upgrade....i swear :(
01:50 Joseph i swear i'll say mean things about hdfs...and mourn the loss of my sanity
01:51 Joseph one of my projects is actually to get saltstack to autodeploy and manage opensource hadoop clusters
01:53 thayne joined #salt
01:54 Joseph Ryan_Lane: here you go https://groups.google.com/forum/#!topic/salt-users/tO2WqWhplBU
01:54 Joseph by all means pile on if you so desire
01:58 mateoconfeugo joined #salt
01:58 travisfischer joined #salt
02:09 zlhgo joined #salt
02:12 bhosmer joined #salt
02:12 ajolo_ joined #salt
02:12 krow joined #salt
02:13 ZombieFeynman joined #salt
02:14 __number5__ anyone have a salt-master directory layout/example a bit more complex than the salt-formula?
02:18 dude051 joined #salt
02:18 iwat joined #salt
02:20 travisfischer joined #salt
02:21 austin987 joined #salt
02:22 nicknac joined #salt
02:24 Ryan_Lane joined #salt
02:24 manfred Ryan_Lane: the reactor may be your answer
02:24 manfred Ryan_Lane: fire an event when the elb is built, and have it add your stuff to the elb
02:24 Ryan_Lane reactors assume you're using a master
02:24 manfred oh right...
02:24 manfred i keep forgetting that...
02:25 Ryan_Lane and it's silly that I can create an elb, then can't take an output from it to create something else
02:25 manfred the minion has an event system too... we just don't have anything that listens on it
02:25 manfred well, you can, just not with the constraints you are setting on the environment
02:25 manfred with the minion only
02:25 logix812 joined #salt
02:25 Ryan_Lane I don't want to have to rely on event
02:25 Ryan_Lane *events
02:26 Ryan_Lane we have a process where orchestration happens in a jenkins pipeline
02:26 manfred the events system was included partially for passing information from states, to cause other states to happen
02:27 logix812 joined #salt
02:29 manfred Ryan_Lane: i would like to see the ability to have a state that runs a module and assigns it to a temporary grain for only that state run
02:29 Ryan_Lane manfred: ah. events could work in that situation, then
02:29 manfred would that fill your need?
02:30 manfred right, so if we had the minion also paying attention to it's reactor
02:30 manfred but unfortunately it doesn't do anything on it yet
02:31 manfred what you would do is fire an event back to the master to run that one specific block where it does the {% set elb =... %} and then checks it, and then use the information passed to the reactor to know which minion to tgt the state.sls with
02:31 manfred if a minion could just know to run a state.sls on itself with essentially salt-call, that would be useful
02:31 manfred otherwise... temporary grains?
02:32 Joseph manfred: that sounds awfully complicated to do something fairly simple
02:32 manfred i mean, it would be the more complicated way to do it
02:32 manfred but what i want to try and setup
02:32 Joseph reactors are fine. they serve a purpose. But do we really need to impose the requirement of dealing with the event bus etc just to make a programmatic decision based on a command result?
02:32 manfred is a temporary grain thing...
02:33 mgw joined #salt
02:34 manfred so that can be used for the whole thing... and then you use the watch: to check if the check for whatever is successful, and if it is, then it does the boto_route53.present
02:34 Joseph manfred: how would you set the grain in the first place?
02:34 manfred Joseph: one second
02:34 Joseph okay
02:36 manfred Joseph: Ryan_Lane if you had something like this http://paste.gtmanfred.com/k4vX/
02:36 Ryan_Lane I definitely don't want to use a reactor
02:36 manfred so you could set a grain from the output of a module
02:36 manfred I would need to clean that up
02:36 manfred but then you remove the grain at the end
02:37 sunkist joined #salt
02:37 Joseph wouldn't this involve having some code write a file to /etc/salt/grains.d or something ?
02:38 manfred so yes
02:38 manfred that is the part i would modify
02:38 manfred i would only want to inject the grain in to that running state run, and not write it to a file
02:38 manfred because then you could access it in other states, but it wouldn't write it to a file
02:38 manfred and it would be available through the whole state run
02:38 Joseph writing to the file is the  bad thing  agreed
02:38 mgarfias_ joined #salt
02:39 manfred and the fact that it is temporary, and you don't need it forever
02:39 Joseph manfred: but aren't you essentially describing a global state facility for the minion?
02:39 manfred this was just to give you an idea of what I was thinking
02:39 manfred i just want to be able to extend grains temporarily so that the values are available throughout the state run
02:40 Joseph to me this doesn't sound like a grain....a grain is supposed to be static data that is inited at salt-minion start up
02:40 Joseph and isn't expected to change etc
02:40 manfred it would be an expansion of that yes
02:40 Joseph manfred: okay yes i understand you are trying to hack the grains to achieve a functionality similar to registered in variables no?
02:40 manfred right, this is just an example
02:40 Joseph registered variables in ansible
02:40 manfred i would do it differently
02:41 manfred i am not going to use grains
02:41 Joseph ahhhh
02:41 manfred i was just thinking about this as an idea
02:41 Joseph i think you are on the right track then yes
02:41 manfred it would basically be built like the grains, but would be completely separate
02:41 manfred and would only be available inside a state run
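
manfred's paste isn't preserved, but the rough shape of the temporary-grain idea (set a value from a module call, use it during the run, drop it at the end) would be something like the sketch below; grains.setval / grains.delval are used purely for illustration, and, as the two of them conclude, any jinja that reads the value in the same sls has already been rendered, so this stays a sketch rather than a working pattern:

    set-temp-grain:
      module.run:
        - name: grains.setval
        - key: elb_info
        - val: {{ salt['boto_elb.get_elb_config']('myelb', profile='myprofile') }}

    # ...other states would want to read grains['elb_info'] here, but any jinja
    # referencing it was rendered before set-temp-grain ran...

    clear-temp-grain:
      module.run:
        - name: grains.delval
        - key: elb_info
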
02:41 mgarfias_ is there anyway for salt-cloud to manage rackspace cloud things other than just instances?  Or will I have to write a bootstrap script to configure all the other bits, then fire up the instances?
02:41 Joseph and no writing to file that's evil
02:41 wt_ ok, I am having a pretty weird problem
02:41 wt_ It looks like my scheduled tasks are not happening.
02:41 manfred Joseph: it would be super temporary, but it should fulfill Ryan_Lane's need
02:42 taion809 joined #salt
02:42 Joseph manfred: would you use jinja to evaluate the the result of this "temporary grain"?
02:42 wt_ startup_states: highstate
02:42 wt_ schedule:
02:42 wt_ highstate:
02:42 wt_ function: state.highstate
02:42 wt_ minutes: 15
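
IRC strips leading whitespace, so wt_'s snippet above has lost its indentation; in the minion config it would normally read:

    startup_states: highstate

    schedule:
      highstate:
        function: state.highstate
        minutes: 15
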
02:42 manfred Joseph: i want to add another thing, similar to pillar[] and salt[] and grains[]
02:42 wt_ that's what I have for schedule. Is there something wrong with it?
02:42 manfred Joseph: so yes
02:42 Joseph manfred: yes absolutely
02:42 Joseph its not any of those things but its like them
02:42 Joseph in the sense of "set data that can be iterated through by jinja"
02:43 manfred statevars[]
02:43 Joseph yes!!!
02:43 Joseph state variables
02:43 manfred ok, i am going to work on that tomorrow
02:43 Joseph that's exactly what is needed
02:43 Joseph wt_: can you provide more info?
02:43 Joseph like what makes you think its not being run for example
02:44 wt_ well, when I manually do "salt 'machname' state.highstate", the highstate runs. However, changes that would result from a highstate are not happening.
02:45 Joseph example?
02:45 wt_ BTW, my minion config only contains the lines I pasted above and a couple masters
02:45 wt_ nothing else
02:46 Joseph couple masters? are you using syndic?
02:46 wt_ No, just minion with two masters.
02:46 Joseph you mean a master with two minions?
02:47 wt_ no, each minion has two masters, which use git as the source for their file and pillar backends
02:47 kermit joined #salt
02:47 wt_ master:
02:47 wt_ - master1
02:47 wt_ - master2
02:47 Joseph so both masters have the exact same data in the file server?
02:48 wt_ Seems to from the /var/cache/master/... data
02:49 Joseph what happens if you change the schedule to 1 minute
02:49 wt_ I can try that.
02:49 Joseph and run the salt minion in foreground with debug log on it?
02:49 agliodbs joined #salt
02:52 wt_ well, the salt minion is just waiting
02:52 wt_ for over a minute now
02:52 wt_ I also removed the "startup_states: highstate"
02:53 wt_ I don't set a loop_interval....could that have any affect?
02:53 Joseph not sure
02:53 Joseph can you do a gist of the full sls?
02:53 wt_ Joseph, do you mean the top.sls?
02:54 Joseph yes
02:54 Joseph how did you enable scheduling for the minion?
02:54 wt_ Joseph, in the /etc/salt/minion
02:55 Joseph and you have already service restarted and cleared out the minions cache etc
02:55 Joseph ?
02:55 sunkist joined #salt
02:56 Joseph can you copy the schedule section from the minion config and put it in a gist?
02:57 manfred Joseph: that still won't work, because jinja is compiled before any states are run, so even if you had it, they wouldn't be callable via jinja
02:58 manfred the correct answer is to use an overstate, or add it to one module
02:58 wt_ Joseph, I need to manually delete the minion cache dir?
02:58 Joseph you shouldn't need to
02:58 Joseph but you may need to ...sometimes salt can have cache problems
02:59 Joseph manfred: yea...especially if the point is that i want to make a decision during runtime on whether a compiled block should be executed
02:59 Joseph jinja simply doesn't make any sense in that context
02:59 manfred Ryan_Lane: with the way that states are rendered, the only way to do it without using the reactor system or using runners, is to include it as part of your boto_elb.present state
03:00 Joseph manfred: agreed but i think this just goes to the point that being able to put some data in a variable and then execute certain states based on that variable contents is needed.
03:01 wt_ ok, I stopped the minion and did rm -rf /var/cache/salt/minion/*
03:01 manfred i disagree, the state needs to just be expanded so you can do the stuff inside it
03:01 Joseph manfred: not sure i follow
03:01 manfred Joseph: also, you can do some stuff like that, using module.run kind of
03:02 Joseph wt_: and what's in your schedule section in the minion conf?
03:02 manfred module.run and a watch... it still doesn't solve the whole issue, cause there is no way to pass the statevars into the other state, because the jinja has already been rendered
03:03 manfred Joseph: sure, you could tell it to run the state or not, but you can't pass the {{ statevars['something'] }}, because they would get set after the first state runs, but still that is after all the jinja is rendered
03:03 Joseph yea and what about everything that is cmd.run plus watch?
03:03 manfred Joseph: still can't pass variables as settings
03:03 manfred cause you need jinja templating to do it
03:03 manfred so it is a cyclical dependency
03:04 Joseph manfred: to be clear i don't think jinja really works in this context...or at least i don't see how it does because you have pointed out the rendering has occurred so state determination based on the state vars doesn't really fly
03:04 Joseph or i just might be missing something?
03:04 wt_ Joseph, I updated the gist with the full minion config
03:04 manfred how else are you going to pass the variable to the state without jinja
03:04 Joseph wt_: okay
03:05 Joseph manfred: well that's the question. i think the salt community has a tendency to twist itself into knots to fit within the jinja box. What if we just stepped back and asked what we want and how that could be implemented
03:05 Joseph to me it seems like the functionality needs to be during the state run
03:05 mateoconfeugo joined #salt
03:05 Joseph which excludes jinja entirely
03:05 Joseph i.e. when salt iterates through the running dictionary
03:06 Joseph wt
03:06 Joseph can you send the link again
03:06 Joseph lost it sorry
03:07 manfred Joseph: http://paste.gtmanfred.com/jAWQu/
03:07 manfred Joseph: i can't think of another way to implement that though
03:07 wt_ Joseph, sent
03:07 ajw0100 joined #salt
03:07 manfred without rewriting a ton of states
03:08 Joseph wt_: sorry i don't see the link?
03:08 Joseph ophhh messages
03:08 Joseph ha
03:08 Joseph sorry okay
03:09 Joseph wt_: confused....what file is this? Top.sls or the minion config?
03:09 wt_ you should see two file contents
03:09 wt_ the top.sls followed by the minion config
03:09 Joseph wt_: got it
03:10 wt_ and to be clear, that is the full minion config with altered master names
03:10 Joseph manfred: so can you walk me through that. State2 retrieves the result of state state and stores it where?
03:11 Joseph of state1
03:11 Joseph wt_: understood
03:11 manfred Joseph: you remember how he did a {% set info = salt['boto_elb.get_config']('something') %}
03:11 manfred you are doing essentially that in state2
03:11 manfred and saving it to the statevars
03:11 Joseph with jinja?
03:12 manfred he was doing it with jinja, i was going to use modules to inject it into the environment
03:12 manfred but it doesn't matter
03:12 Joseph ah okay
03:12 manfred because the jinja at the bottom has already failed to render
03:12 Joseph right
03:12 manfred there isn't a better way to do it
03:13 manfred what are you going to use to reference them without rewriting all the states and modules
03:13 manfred like, you would need to have every module check for a variable in statevars[] if it isn't defined when it was called
03:14 wt_ Joseph, I have been watching that minion in debug...nothing yet.
03:15 ajolo__ joined #salt
03:15 taterbase joined #salt
03:16 Joseph_ joined #salt
03:16 Joseph_ argh
03:16 Joseph_ disconnected
03:16 Joseph_ did you get my last comment manfred?
03:16 manfred i did not
03:16 Joseph_ sadness
03:16 Joseph_ okay
03:16 Joseph_ So here's how i am reading this
03:17 catpiggest joined #salt
03:17 Joseph_ state2 retrieves the output of state1 and puts that output in a "state variable". State3 then retrieves that variable and then makes a decision about whether it should execute.
03:18 Joseph_ did i get that right?
03:18 logix812 joined #salt
03:18 logix812 joined #salt
03:19 manfred it can't retrieve the variable, because the jinja is already rendered
03:19 manfred the whole thing will fail
03:19 Joseph_ confused....sure but i thought you weren't using jinja so you'd avoid that problem?
03:19 Joseph_ not so?
03:19 logix812 I have salt states that describe various applications. I was curious if anyone knew if I could just create a docker minion that leverages those states and freeze that image for docker reuse.
03:20 Joseph_ logix812: probably ...salt has a docker module.
03:20 manfred Joseph_: i said earlier, i can't think of another way to do it without jinja
03:20 logix812 Joseph_ ya, but it looks like it just runs containers
03:21 Joseph_ manfred: sorry if i am being dense. So are you saying you are stuck i.e. not sure how to achieve what ryan_lane wants given the current constraints of salt?
03:21 manfred no
03:21 manfred i know how to achieve it
03:21 manfred it is to expand boto_elb.present
03:21 Joseph_ manfred: wasn't that file you showed me your example of how to achieve it? or did i misunderstand?
03:22 manfred i don't know how to achieve the ability to have internal state variables
03:22 manfred Joseph_: it won't work
03:22 Joseph_ oh
03:22 manfred Joseph_: the file i showed you is what i wanted to do
03:22 Joseph_ ahhh
03:22 Joseph_ but it won't work
03:22 Joseph_ got it
03:22 manfred yar
03:23 Joseph_ wt_: haven't forgotten you
03:23 wt_ Joseph_, highstates run as expected when I replace the list of masters with one master
03:23 Joseph_ i was just about to suggest that!
03:23 Joseph_ doht
03:23 Joseph_ now i don't look as smart
03:23 Joseph_ ah well
03:23 wt_ Joseph_, I got more for you
03:23 Joseph_ uh oh
03:23 wt_ if I have one master in list form, it also fails
03:23 wt_ master:
03:23 wt_ - only_master
03:23 wt_ that will fail
03:24 wt_ however "master: only_master" will work
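
Re-indented, the two forms wt_ is comparing are:

    # works for wt_:
    master: only_master

    # fails for wt_ (same single master, list form):
    master:
      - only_master
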
03:24 Joseph_ <logix812> i'm actually not sure i understand your docker use case. What is it that you want to freeze and reuse?
03:24 Joseph_ if its salt states you would just gitfs i would presume
03:24 logix812 Joseph_: I think I got it.
03:24 Joseph_ logix812: okay
03:25 Joseph_ wt_: i am not terribly familiar with multi-master set up
03:25 Joseph_ migth be a good question for the google group
03:26 logix812 I want to "containerize" the result of my state runs. I was only thinking of them through a Dockerfile, but I think I can just have a minion installed on a vanilla container, apply my states and then docker commit on that image.
03:26 Joseph_ logix812: why not just use a returner and put them in a mysql db?
03:27 logix812 Joseph_ I have them all in gitfs, I was just trying to turn "provisioning" into "download this image and run it" time
03:27 Joseph_ provisioning of the minion?
03:27 logix812 ya
03:28 Joseph_ oh yea docker would probably be fine for that assuming your salt-master hostname isn't changing a lot
03:28 logix812 Joseph_ these images would be for vagrant dev boxes
03:28 Joseph_ i actually use glance images in openstack to do the same thing
03:28 logix812 I would not need the master at all.
03:28 Joseph_ oh masterless set up i see
03:29 logix812 ya, we still need all of our states for a production deploy to actual in-use masters
03:29 Joseph_ so you'd launch an image from docker and that image would already be configured for gitfs so it would already have access to all of the SLS files etc
03:29 logix812 ya
03:29 Joseph_ seems reasonable
03:29 logix812 and it would have run the provisioning
03:29 Joseph_ is that not working?
03:29 logix812 I was just double checking the concept
03:30 Joseph_ manfred: what if there was a special state run: set list of variables to be used by the next state run
03:30 Joseph_ then the second time around jinja compilation would work
03:30 Joseph_ or is that what ryan_lane is already doing ?
03:30 manfred that would be an overstate
03:30 Joseph_ manfred: it would
03:31 manfred and if all you did was set variables, it wouldn't work
03:31 wt_ Joseph_, do you have any idea where in the code the minion config is parsed?
03:31 manfred because you need to set the variable after the first state runs
03:31 Joseph_ wt_: not off the top of my head
03:31 Joseph_ manfred: but that's why it won't work with overstate because there's no way to temporarily persist the data between runs?
03:32 Joseph_ <logix812>: tread carefully with fqdn. by default, the minion starts up and inits the FQDN as its id.
03:32 Joseph_ that can be problematic in a virtualized environment where hostnames can be reused or float around a lot
03:33 logix812 ya, we have the minons on our vagrant vm's configured to use "vagrant" as their id
03:33 Joseph_ there are some pretty bad usability issues with this but i actually prefer setting a UUID for the minion id
03:33 Joseph_ guarantees no accidental minion id reuse
03:33 logix812 on a per-project basis the mapped folders over vagrant control what states are delivered on top of the gitfs states we already have in play
03:34 logix812 again these are just throw away dev boxes
03:34 logix812 local only
03:34 Joseph_ and they only have network access to the git reop?
03:34 Joseph_ repo
03:34 manfred Joseph_: no it isnt? it works perfectly in an overstate, because you do the one overstate where it runs the first state that creates the elb, then it does the one where it routes for the elb... but in the second one, the {% %} will return a dictionary... he can't use an overstate because he is using masterless minions
03:34 Joseph_ oh right
03:35 Joseph_ <head desk>
03:35 manfred :)
03:35 Joseph_ two salt-calls in a shell script
03:35 Joseph_ <shifty eyes> that sounds awful even if it did work
03:36 wt_ https://github.com/saltstack/salt/blob/141a59a968baade2bc53fb40d7d86f80bf9e13e2/salt/daemons/flo/core.py#L308 <-- fun typo
03:36 stanchan joined #salt
03:36 Joseph_ oh wow
03:36 Joseph_ i actually need to head out
03:37 manfred wt_: lol
03:37 Outlander left #salt
03:37 Joseph_ manfred: let me know if you end up cracking the nut for the masterless set up
03:37 Joseph_ sounds like we are just getting  broken teeth
03:37 manfred i am pretty much done thinking about it, cause his best answer is to expand the capabilities of boto_elb
03:39 InAnimaTe joined #salt
03:45 jalaziz joined #salt
03:46 mosen joined #salt
03:47 schimmy joined #salt
03:50 schimmy1 joined #salt
03:51 ldlework Does anyone know of some states for provisioning the new-relic agent?
03:53 manfred i wrote a newrelic formula for php and sysmond the other day
03:54 manfred https://github.com/saltstack-formulas/newrelic-formula
03:54 manfred http://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html
03:57 ckao joined #salt
03:58 Theo-SLC joined #salt
03:58 Theo-SLC left #salt
04:01 bhosmer joined #salt
04:07 mgw joined #salt
04:14 sunkist So I used the mount.mounted state to mount a partition by UUID.  It works the first time I run salt, but salt refuses to execute it later on.
04:15 sunkist It claims that it can't mount the disk by uuid because the device name ('/dev/sdb') is already mounted there.
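
A minimal sketch of the kind of state sunkist describes, mounting by UUID (the mount point and UUID are invented):

    data-volume:
      mount.mounted:
        - name: /data
        - device: UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
        - fstype: ext4
        - mkmnt: True
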
04:15 wt_ I think I found a bug with multimaster and schedules.
04:16 ajolo joined #salt
04:17 mgw joined #salt
04:18 Outlander joined #salt
04:20 jrcresawn_ joined #salt
04:21 oz_akan_ joined #salt
04:23 oz_akan__ joined #salt
04:23 mgw joined #salt
04:26 rjc joined #salt
04:28 CheKolyN joined #salt
04:30 mgw joined #salt
04:33 mateoconfeugo joined #salt
04:33 fragamus joined #salt
04:34 Ryan_Lane joined #salt
04:43 thayne joined #salt
04:45 mgw joined #salt
04:48 felskrone joined #salt
04:49 ramteid joined #salt
04:58 bhosmer joined #salt
05:00 mgw joined #salt
05:02 jalbretsen joined #salt
05:06 mgw joined #salt
05:08 oz_akan_ joined #salt
05:12 travisfischer joined #salt
05:16 ajolo_ joined #salt
05:22 googolhash joined #salt
05:23 mgw left #salt
05:25 linjan joined #salt
05:31 malinoff joined #salt
05:44 mgw joined #salt
05:49 bhosmer joined #salt
05:53 mgw joined #salt
05:53 aw110f joined #salt
05:57 sheldon joined #salt
05:58 sheldon which interface can I use to monitor a job's status?
06:05 ramteid joined #salt
06:07 Furao joined #salt
06:10 linjan joined #salt
06:11 ggoZ joined #salt
06:16 krow joined #salt
06:17 ajolo__ joined #salt
06:18 flebel joined #salt
06:20 borgstrom joined #salt
06:21 kermit joined #salt
06:26 ecdhe joined #salt
06:27 jeddi joined #salt
06:31 borgstrom joined #salt
06:33 Ryan_Lane joined #salt
06:33 krow joined #salt
06:36 greyhatpython joined #salt
06:43 Ryan_Lane joined #salt
06:44 Ryan_Lane joined #salt
06:58 ml_1 joined #salt
07:04 ipalreadytaken joined #salt
07:04 ipalreadytaken joined #salt
07:06 borgstrom joined #salt
07:13 ilbot3 joined #salt
07:13 Topic for #salt is now Welcome to #salt | 2014.1.4 is the latest | SaltStack trainings coming up in SLC/London: http://www.saltstack.com/training | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
07:16 ajolo__ joined #salt
07:17 ml_1 joined #salt
07:19 TyrfingMjolnir joined #salt
07:23 oz_akan_ joined #salt
07:25 TyrfingMjolnir joined #salt
07:32 mateoconfeugo joined #salt
07:35 alanpearce joined #salt
07:38 slav0nic joined #salt
07:38 slav0nic joined #salt
07:38 bhosmer joined #salt
07:40 alanpear_ joined #salt
07:44 druonysus joined #salt
07:52 sheldon how can i get a job's status, is the job done or not ?
07:52 dsolsona joined #salt
07:52 sheldon i am really really confused.
07:53 jalaziz joined #salt
07:55 alanpearce joined #salt
07:58 darkelda joined #salt
08:08 MrTango joined #salt
08:13 TyrfingMjolnir joined #salt
08:17 ajolo joined #salt
08:23 oz_akan_ joined #salt
08:24 ajw0100 joined #salt
08:33 ajw0100 joined #salt
08:35 Shenril joined #salt
08:36 fragamus joined #salt
08:40 ggoZ joined #salt
08:45 aw110f joined #salt
08:48 AirOnSkin joined #salt
08:51 sverrest joined #salt
09:11 LordOfLA joined #salt
09:12 DieselxXx joined #salt
09:20 Zoresvit joined #salt
09:21 masterkorp hello fellow salt users
09:21 Zoresvit Hello, community. What is the best way to trigger a remote salt master to execute commands on minions? LocalClient seems to be able to do this only if it's running on the same machine as the salt master. Thanks.
09:21 Furao left #salt
09:23 malinoff Zoresvit, events & reactor
09:23 masterkorp Zoresvit: the reactor module is awesome for orchestation
09:24 krow joined #salt
09:24 oz_akan_ joined #salt
09:25 Zoresvit Thanks a lot for the tip! According to the doc it seems to be the solution, will look into it.
09:26 sgate1 joined #salt
09:27 masterkorp yeah, i am thinking of doing autoscaling with it
09:27 masterkorp have riemann send an event if the servers are under load
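
A minimal sketch of the reactor wiring malinoff and masterkorp are pointing Zoresvit at (the event tag, file paths, and state IDs are invented): the master maps an event tag to a reactor sls, and that sls tells the master what to run and on which minions:

    # master config: map an event tag to a reactor sls
    reactor:
      - 'myapp/scale/up':
        - /srv/reactor/scale_up.sls

    # /srv/reactor/scale_up.sls: highstate the minion that sent the event
    scale_up_highstate:
      cmd.state.highstate:
        - tgt: {{ data['id'] }}
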
09:27 bhosmer joined #salt
09:29 workingcats joined #salt
09:34 CeBe joined #salt
09:42 CeBe1 joined #salt
09:49 ajolo_ joined #salt
09:49 rowleyaj joined #salt
09:50 CeBe1 joined #salt
09:53 giantlock joined #salt
09:59 krow1 joined #salt
10:03 bhosmer joined #salt
10:18 iwat joined #salt
10:21 orbit_darren joined #salt
10:22 masterkorp Guys how do you test salt states ?
10:25 oz_akan_ joined #salt
10:28 CeBe2 joined #salt
10:32 jdmf joined #salt
10:38 xmj joined #salt
10:38 oz_akan_ joined #salt
10:39 alanpearce joined #salt
10:42 alanpearce joined #salt
10:49 jdmf joined #salt
10:50 ajolo_ joined #salt
10:52 jdmf joined #salt
10:55 xmj joined #salt
10:56 elfixit joined #salt
11:07 geekmush joined #salt
11:13 Ivo joined #salt
11:13 Ivo is there a way to use the salt cmd to find out available states to run on a minion?
11:16 bhosmer joined #salt
11:16 bhosmer_ joined #salt
11:22 saravanans joined #salt
11:27 bhosmer joined #salt
11:29 alanpearce joined #salt
11:30 jslatts joined #salt
11:39 diegows joined #salt
11:39 oz_akan_ joined #salt
11:45 bfwg_ joined #salt
11:50 c2c joined #salt
11:50 c2c Hello, I'm trying to target a nodegroup in a SLS file, is that possible ?
11:51 sk011 joined #salt
11:54 ajolo_ joined #salt
11:56 sk011 joined #salt
11:59 ajprog_laptop joined #salt
12:05 jrdx joined #salt
12:07 mfournier emning: hello! fyi, these features we talked about the other day got merged and will be part of the next release: https://github.com/saltstack/salt/pull/13160
12:11 krow1 joined #salt
12:13 masterkorp Guys can i use the cmd.run inside another module ?
12:13 TyrfingMjolnir joined #salt
12:14 viq what do you mean?
12:15 masterkorp i am writing my first module, it will handle runit
12:15 saravanans joined #salt
12:15 masterkorp i am making a def version(): function
12:15 masterkorp basically i want to return the output of runit --version
12:15 masterkorp can i use cmd.run ?
12:16 viq Oh, modules... I have no idea about those, sorry
12:16 masterkorp and state modules ?
12:16 krow joined #salt
12:16 masterkorp there doesn't seem to be much documentation
12:16 viq I know some how to use them, I have no idea about internals.
12:16 masterkorp yeah but i need to write me own now
12:23 * viq shuts up due to being of no use in this particular thread of conversation
12:24 krow joined #salt
12:26 alanpearce joined #salt
12:32 dangra joined #salt
12:33 Lomithrani joined #salt
12:35 toastedpenguin joined #salt
12:37 jdmf joined #salt
12:37 Lomithrani Hi , guys I have an issue  "The Salt Master has rejected this minion's public key!
12:37 Lomithrani To repair this issue, delete the public key for this minion on the Salt Master and restart this minion."   But on the master I can't see any key in the Rejected Keys
12:42 cofeineSunshine salt-key -a
12:42 cofeineSunshine you can accept new key
12:42 cofeineSunshine or something like that
12:43 fluidics joined #salt
12:44 Lomithrani Yeah I know  , I mean de key doesn't appear anywhere with salt-key -L
12:44 Lomithrani *the
12:45 Lomithrani and my /etc/minion has master: master_ip_adress set correctly, it is waiting for the master to respond and tells me the master has cached the public key, but on the master side I get nothing
12:49 viq Lomithrani: it should be on the list of accepted keys, I think
12:49 viq Lomithrani: and if not, are you sure the packets are getting from the minion to the master?
12:49 Lomithrani it should but it's not thats my problem ^^
12:50 Lomithrani well I can ping master from minion yes
12:50 Lomithrani The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
12:50 viq 'telnet master 4505' - does that work? Of course your proper master address, do it from the minion
12:50 Lomithrani I get this , so that means it should be on the "unaccepted keys"
12:51 Lomithrani master: service name not available for the specified socket type
12:51 viq well, yes, put the dns name or IP of your master in there
12:51 Lomithrani connection refused
12:51 viq There you go then
12:52 Lomithrani oh wait
12:52 Lomithrani nvm
12:52 Lomithrani works on 4505
12:52 viq OK then
12:53 Lomithrani how do I quit telnet though :p
12:53 viq On master I would run something like 'tcpdump -nnpi eth0 host <ip of your minion>', on minion machine stop salt-minion, rm -rf /etc/salt/pki/* /var/cache/salt/* and restart minion and see if any packets appear
12:53 viq ctrl ]
12:53 viq and then you can ctrl d out of it
12:54 jdmf joined #salt
12:58 emning mfournier: nice, well done
12:58 ajolo__ joined #salt
12:59 TyrfingMjolnir joined #salt
13:00 Lomithrani Did try to reboot the minion but I don't have tcpdump on SmartOS
13:01 Lomithrani and now I have this message The Salt Master has rejected this minion's public key!
13:01 Lomithrani To repair this issue, delete the public key for this minion on the Salt Master and restart this minion.
13:01 Lomithrani But still nothing appears in the rejected keys
13:01 Lomithrani (I should add that the master has accepted the key of the minion running on 127.0.0.1 just fine)
13:04 miqui joined #salt
13:04 spiette joined #salt
13:08 resmike joined #salt
13:10 c2c Hello, I'm trying to target a nodegroup in a SLS file, is that possible ?
13:10 viq c2c: why in sls?
13:11 Zoresvit joined #salt
13:11 c2c I'm trying to execute some cmd.run
13:12 c2c only for minions in some nodegroups
13:12 oz_akan_ joined #salt
13:12 c2c the rest of the SLS is global for all minions
13:12 viq It would be easier to assign that specific state via top.sls, I'm not sure you can easily do that from sls
13:12 c2c ha
13:13 c2c and that specific state can require the "main" SLS ?
13:13 viq certainly, yes
13:13 viq What comes to mind that you could do, is, again in top.sls, assign to nodes in a nodegroup a state that would set a grain, and then target on that grain
13:13 dsolsona joined #salt
13:14 c2c How can I be sure that the grain setting state will be run before the "main" ?
13:14 c2c ha .. require ... nevermind
13:14 viq :P
13:14 c2c Last one : How can i _SET_ a grain in a SLS ?
13:15 viq http://docs.saltstack.com/en/latest/ref/states/all/salt.states.grains.html
13:15 c2c Okay
13:15 c2c Thank you !
13:15 racooper joined #salt
13:16 viq So not quite how you wanted to do this, but provided two workarounds ;)
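
A rough sketch of the two workarounds viq suggests (the nodegroup name, state names, and grain key are invented): assign the extra state to the nodegroup in top.sls, and optionally have a nodegroup-targeted state set a grain that later states can match on:

    # top.sls
    base:
      '*':
        - common
      'webgroup':
        - match: nodegroup
        - web-extras

    # web-extras/init.sls: set a grain that other states (or top.sls) can target on
    set-role-grain:
      grains.present:
        - name: role
        - value: web
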
13:16 Zoresvit Hello again :) How can I get rid of the 2 first debugging lines when using salt CLI?
13:16 Zoresvit {'mopts': None, 'c_path': '/etc/salt/master', 'self': <salt.client.LocalClient object at 0x20d6bd0>} {'verbose': False, 'tgt': 'G@cluster:Victor and ( G@roles:proxy or G@roles:proxy-master or G@roles:storage )', 'self': <salt.client.LocalClient object at 0x20d6bd0>, 'arg': [], 'kwarg': None, 'ret': '', 'expr_form': 'compound', 'timeout': 5, 'kwargs': {'show_timeout': False}, 'fun': 'test.ping'}
13:17 Zoresvit For some reason "salt -l quiet" doesn't do it
13:18 TyrfingMjolnir joined #salt
13:18 malinoff joined #salt
13:23 AdamSewell joined #salt
13:23 AdamSewell joined #salt
13:25 Lomithrani I have reinstalled both zones and I still have the same issue. I don't have any keys other than loghost appearing in the list of keys
13:26 st0newa11 joined #salt
13:28 TyrfingMjolnir joined #salt
13:30 sgviking joined #salt
13:31 svs joined #salt
13:33 TyrfingMjolnir joined #salt
13:33 q4brk joined #salt
13:36 ggoZ joined #salt
13:37 oz_akan_ joined #salt
13:37 Guest85419 joined #salt
13:39 krow joined #salt
13:42 chiui joined #salt
13:42 GradysGhost joined #salt
13:42 mpanetta joined #salt
13:45 q4brk_ joined #salt
13:46 jaycedars joined #salt
13:51 jaimed joined #salt
13:52 mpanetta joined #salt
13:52 mapu joined #salt
13:52 aw110f_ joined #salt
13:55 anotherZero joined #salt
13:57 krow joined #salt
14:01 vejdmn joined #salt
14:02 kaptk2 joined #salt
14:02 alanpearce joined #salt
14:04 ggoZ1 joined #salt
14:09 jdmf joined #salt
14:10 Networkn3rd joined #salt
14:11 quickdry21 joined #salt
14:13 mapu_ joined #salt
14:14 gothix_ joined #salt
14:21 bastion1704 joined #salt
14:23 repl1cant joined #salt
14:27 mgw joined #salt
14:27 mgw left #salt
14:28 brandon_ joined #salt
14:29 jalbretsen joined #salt
14:29 Gowza joined #salt
14:31 oz_akan__ joined #salt
14:34 jeremyBass joined #salt
14:36 andredieb_ joined #salt
14:37 vejdmn joined #salt
14:38 andredieb_ joined #salt
14:41 alunduil joined #salt
14:43 CeBe joined #salt
14:45 MK_FG joined #salt
14:46 Gowza left #salt
14:49 UtahDave joined #salt
14:50 mateoconfeugo joined #salt
14:55 bhosmer joined #salt
14:55 Lomithrani Parent directory not present for a file.managed, I have checked and the directory is present chmoded 755, any idea why I could get this error?
14:59 conan_the_destro joined #salt
15:03 InAnimaTe joined #salt
15:05 UtahDave Lomithrani: can you pastebin your sanitized state file?
15:06 Lomithrani I found the error, both my minions were called loghost. I changed the name of one, and it wasn't the one I thought
15:06 Lomithrani so I was checking the wrong minion. it's good anyway, it made me use makedirs
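
For reference, the option Lomithrani ends up using: file.managed accepts makedirs to create missing parent directories (paths here are invented):

    /opt/app/conf/app.conf:
      file.managed:
        - source: salt://app/files/app.conf
        - mode: 644
        - makedirs: True
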
15:07 teskew joined #salt
15:08 aw110f joined #salt
15:08 UtahDave ah, glad you figured it out!
15:08 Lomithrani wow it's even worse than what I thought, I just have 2 minions on the same host ...
15:09 Lomithrani so I guess it's still my problem that I couldn't bind my minion through the network
15:15 bhosmer_ joined #salt
15:15 bhosmer_ joined #salt
15:21 ajolo joined #salt
15:23 Theo-SLC joined #salt
15:23 alanpearce joined #salt
15:24 ajolo joined #salt
15:25 Lomithrani Is there something specific to do if I want to run salt-minion on my salt-master host ?
15:26 acu joined #salt
15:26 thayne joined #salt
15:26 to_json joined #salt
15:28 manfred nope
15:28 manfred it should just work™
15:30 oz_akan_ joined #salt
15:32 darkelda joined #salt
15:32 bhosmer_ joined #salt
15:33 bhosmer_ joined #salt
15:36 jnj joined #salt
15:36 jnj left #salt
15:44 tligda joined #salt
15:45 toddejohnson joined #salt
15:46 alanpearce joined #salt
15:49 smcquay joined #salt
15:50 jimklo joined #salt
15:52 schimmy joined #salt
15:54 jhulten joined #salt
15:54 Gareth morning morning
15:56 schimmy1 joined #salt
15:59 bemehow joined #salt
16:03 to_json joined #salt
16:05 conan_the_destro joined #salt
16:06 intr1nsic Has anyone done any work with modules that may collect information from 2 other systems to run an execution state on the target minion?
16:07 manfred intr1nsic: publish.publish ?
16:07 jimklo joined #salt
16:07 manfred https://github.com/gtmanfred/salt-states/blob/master/hosts/init.sls#L1
16:07 intr1nsic Reading now, thanks.
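
A condensed sketch of the pattern in manfred's linked state: publish.publish lets the minion rendering the state ask other minions for data at render time (it requires peer publishing to be allowed in the master config); the glob target and the hosts-file use are illustrative:

    {# ask every minion for its fqdn_ip4 grain and build a hosts entry for each #}
    {% for minion, g in salt['publish.publish']('*', 'grains.item', 'fqdn_ip4').items() %}
    host-{{ minion }}:
      host.present:
        - name: {{ minion }}
        - ip: {{ g['fqdn_ip4'][0] }}
    {% endfor %}
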
16:08 Lomithrani how can I interrupt a function if I have the jid? (I have a state.highstate that's been running for 20 minutes, I think it has a problem..)
16:08 Husio joined #salt
16:09 Husio hi, I have a problem with managing a git repository through salt: I have two machines, both doing the same job: fetching the branch production, and since a few weeks one of the machines does not pull the latest branch anymore
16:09 Husio anyone knows what might be the reason?
16:09 Husio should I remove some cache or something?
16:09 ipmb joined #salt
16:10 KyleG joined #salt
16:10 KyleG joined #salt
16:10 andredieb_ joined #salt
16:11 Husio that's the configuration I'm using:
16:11 Husio http://paste.42reports.com/fc5e5be3-5b63-495f-abc3-ded45063b823
16:12 wati`` joined #salt
16:16 mateoconfeugo joined #salt
16:18 travisfischer joined #salt
16:20 peters-tx joined #salt
16:27 joehillen joined #salt
16:28 dsolsona joined #salt
16:30 thayne joined #salt
16:30 forrest joined #salt
16:32 stanchan joined #salt
16:32 ccase joined #salt
16:32 darkelda joined #salt
16:34 jslatts joined #salt
16:34 ajolo_ joined #salt
16:34 troyready joined #salt
16:35 kballou joined #salt
16:39 schimmy1 joined #salt
16:40 bhosmer joined #salt
16:41 ajolo joined #salt
16:45 thedodd joined #salt
16:49 chrisjones joined #salt
16:50 bhosmer joined #salt
16:51 bhosmer_ joined #salt
16:54 shaggy_surfer joined #salt
16:54 surge joined #salt
16:54 ajolo_ joined #salt
16:54 surge heyz
16:54 viq Is there a way to say "user should only have those keys in authorized_keys" other than file.managed ?
16:55 redondos joined #salt
16:55 redondos joined #salt
16:56 viq Husio: if you can afford it you could try removing the checkout dir
16:58 jdmf Husio: I had this same problem a few weeks ago. Changes within my git repo on the local server, prevented me from doing git pull
16:59 jdmf Husio: Check the server with git status, and see if you have any locally changed files. This might be preventing you from doing git pull. (just a thought) :)
16:59 ramteid joined #salt
16:59 viq Ah, yeah, that too.
17:00 viq You could do a git stash beforehand, maybe
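
Husio's paste isn't preserved, but a git.latest state tracking a branch generally looks something like this (repo URL and target path invented); if the checkout at target has local modifications, the pull can fail the way jdmf describes, which is why they suggest checking git status there or removing the checkout:

    app-code:
      git.latest:
        - name: git@git.example.com:myorg/myapp.git
        - rev: production
        - target: /srv/myapp
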
17:00 surge hm...
17:01 surge what would be the best way to ensure a server has all of it's drives mounted using salt
17:02 forrest surge http://docs.saltstack.com/en/latest/ref/states/all/salt.states.mount.html
17:02 viq or cmd.run 'mount -a'
17:02 forrest yea or that
17:02 taterbase joined #salt
17:03 viq Depends how you want to skin that cat ;)
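
The quick-and-dirty alternative viq mentions, as a state; note that a bare cmd.run like this reports a change on every run unless guarded with unless/onlyif:

    mount-everything:
      cmd.run:
        - name: mount -a
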
17:03 viq tinuva: ping
17:04 viq tinuva: how far are you with trying to get gitlab to work on CentOS?
17:05 bVector heh I tried to setup the docker container for gitlab yesterday
17:05 bVector I learned that I'm terribad at docker
17:06 viq bVector: I have a formula that could certainly use some attention
17:06 viq Though it-works-for-me(tm) on debian
17:07 viq Also I thought there are some ready recepies for gitlab in docker?
17:07 ajolo joined #salt
17:07 aw110f joined #salt
17:08 bVector yeah there is one, I fumbled around with it for a bit
17:08 bVector maybe I'll get it working today
17:10 forrest oh docker
17:11 jhulten joined #salt
17:12 resmike joined #salt
17:14 Ryan_Lane joined #salt
17:17 shaggy_surfer joined #salt
17:17 mpanetta joined #salt
17:20 mpanetta joined #salt
17:20 mgw joined #salt
17:21 Networkn3rd joined #salt
17:24 micah_chatt joined #salt
17:24 TyrfingMjolnir joined #salt
17:25 Theo-SLC What does it mean when you do this to an array variable in python: **array?
17:26 to_json joined #salt
17:28 mgarfias_ what the heck does this error mean: The following exception was thrown by libcloud when trying to run the initial deployment: 400 Bad Request Invalid key_name provided.
17:29 bVector Theo-SLC: that means to unpack values
17:29 timoguin mgarfias_: are you passing an ssh key? sounds like the API might not be able to find a key by that name
17:29 Theo-SLC unserialize?
17:30 bVector what context is it?
17:30 mgarfias_ timoguin: yes, passing in an ssh key.  I have the full path specified in the config
17:30 bVector in functions, that means to assign the remaining args to a dictionary of that name
17:30 Theo-SLC bVector: trying to get Sentry logging working again.  I'm looking at the code https://github.com/saltstack/salt/blob/c36544472304163ad3e401ebe35034b6ef5741cb/salt/log/handlers/sentry_mod.py
17:30 Theo-SLC (I'm not a python developer)
17:31 timoguin mgarfias_: ah, the API is expecting you to pass the name of a key as the service knows it
17:31 mgarfias_ ooooooh
17:31 mgarfias_ ok, so whats the correct format for the config
17:31 bVector Theo-SLC: that means unpack the options variable, and pass them all in as arguments to the function
17:31 kermit joined #salt
17:32 schmutz joined #salt
17:33 timoguin mgarfias_: mine look like this in the provider config: ssh_key_file: /home/salt/.ssh/id_rsa \n ssh_key_name: salt@master01.domain.com
17:33 bVector Theo-SLC: http://pynash.org/2013/03/13/unpacking.html
17:33 mgarfias_ danke
17:33 Theo-SLC bVector: thanks
17:34 Joseph joined #salt
17:34 bVector np
17:34 bVector so if options is a dictionary, it will pass all the items in options as keyword arguments to raven.Client
17:35 n8n joined #salt
17:35 druonysuse joined #salt
17:35 druonysuse joined #salt
17:37 TyrfingMjolnir joined #salt
17:37 mgarfias_ woo it works
17:38 jpcw joined #salt
17:40 TyrfingMjolnir joined #salt
17:44 andredieb_ joined #salt
17:45 giannello joined #salt
17:45 rglen joined #salt
17:45 krow joined #salt
17:46 thayne joined #salt
17:47 dude051 joined #salt
17:47 to_json1 joined #salt
17:49 mgarfias_ is there a way to add an instance to an openstack network?
17:49 yurei joined #salt
17:49 wati`` joined #salt
17:50 whiteinge Theo-SLC: that'd be great to have the sentry returner fixed up. how's it coming?
17:53 wt joined #salt
17:55 Joseph mgarfias_: i know you can do it. I however do not know how. :)
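
One hedged way to do it at boot time rather than after the fact is the openstack driver's networks option in the cloud profile (the UUID and network name are placeholders, and this assumes the driver in use supports the option):

    my-openstack-profile:
      provider: my-openstack-config
      image: <image-uuid>
      size: 2
      networks:
        - fixed:
            - <network-uuid>
        - floating:
            - Ext-Net
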
17:57 Ryan_Lane whiteinge, forrest, UtahDave: any feelings on a second doc sprint? if we try to do them on a schedule we'll likely get more people to consistently show up :)
17:57 Ryan_Lane we also need to follow-up on the doc from last sprint
17:58 rglen joined #salt
17:58 whiteinge Ryan_Lane: still on my radar. you still up for hosting for a physical sprint?
17:58 mgarfias_ next problem: [WARNING ] Private IPs returned, but not public... Checking for misidentified IPs
17:58 mgarfias_ [WARNING ] 192.168.254.1 is a private IP
17:58 Ryan_Lane yeah. I need to figure out how to get official approval for the space
17:58 mgarfias_ wtf ?
17:58 forrest Yea a second doc sprint would be good, I also would need to get approval for a space
17:59 zain_ joined #salt
17:59 forrest I need a date, I'm not sure if I'll be able to use our office, I might have to look at taking the day off and using one of the 'maker' spaces that exist around here
17:59 whiteinge we were originally kicking around a june date, how about first week of july instead? (or last week of june)
17:59 Joseph that would be good from my point of view because then that'll give me enough time to complete my rewrite of the states tutorial
18:00 forrest yea that sounds pretty good, let me know when you guys have a date decided since I assume san fran will be more lively than seattle
18:00 Joseph btw how the heck are you supposed to execute salt cloud ?
18:00 forrest ?
18:00 forrest salt-cloud blah
18:00 Joseph salt-cloud Usage: salt-cloud  salt-cloud: error: salt-cloud requires >= libcloud 0.11
18:00 Joseph and if i try install cloud it complains about a conflict with the salt master
18:00 mgarfias_ install apache-libcloud
18:00 Joseph i am on 2014.1.3
18:00 Joseph what?
18:00 Joseph lol
18:00 Joseph okay.....
18:00 giannello install apache-libcloud
18:00 giannello ops :D
18:00 forrest Joseph, is someone doing a qa on your state tutorials since you're rewriting so much stuff?
18:01 mgarfias_ what is the complaint with the master?
18:01 Joseph Transaction Check Error:   file /usr/bin/salt-cloud from install of salt-cloud-0.8.8-1.el6.noarch conflicts with file from package salt-master-2014.1.3-1.el6.noarch   file /usr/share/man/man1/salt-cloud.1.gz from install of salt-cloud-0.8.8-1.el6.noarch conflicts with file from package salt-master-2014.1.3-1.el6.noarch
18:01 Joseph i am on centos 6.4
18:01 mgarfias_ sounds like an os packaging issue
18:01 mgarfias_ try installing via pip maybe?
18:01 forrest why would you install salt-cloud when you have salt which includes salt-cloud?
18:02 Ryan_Lane whiteinge: I'm out of town from Jun 24 - Jul 1
18:02 Joseph forrest: agreed....so why doesn't salt-cloud work right now?
18:02 Joseph i have the master installed what more should i need to do
18:02 forrest install a more recent release of libcloud from the error
18:02 Joseph mgarfias: did not find apache-libcloud. But i did find this: python-libcloud.noarch
18:03 Joseph and python-libcloud was what i needed woot
18:04 forrest whiteinge, Ryan_Lane let's try to work on Ryan_Lane's schedule, I feel as though attendance will be better there.
18:04 ghartz_ joined #salt
18:05 schimmy joined #salt
18:06 Ryan_Lane the week of 7/7 should likely work for me. Ideally Tuesday or Thursday.
18:06 forrest Ryan_Lane, are you going to be able to get a space approved?
18:06 Ryan_Lane I need to figure out a tentative date before I ask, but I should be able to
18:06 whiteinge thurs 7/10 sounds good to me
18:06 schimmy1 joined #salt
18:07 Ryan_Lane ok. it'll need to end before 17:00 PDT that day
18:07 * whiteinge nods
18:07 * Ryan_Lane sends an email
18:07 forrest yea I think that's fine, I don't want to spend all night working
18:08 forrest Ryan_Lane, can you let me know when you hear back?
18:08 Joseph is there a typo in the openstack config file example
18:08 forrest Then I can confirm with my boss.
18:08 Ryan_Lane forrest: yep
18:08 Joseph it says set the master but then it has a minion value?
18:08 forrest or see about getting an open space
18:08 Joseph http://docs.saltstack.com/en/latest/topics/cloud/openstack.html
18:10 wt Anyone here that can help me with https://github.com/saltstack/salt/pull/13189 ? It looks like lint tests in other files are marking this change as failing (or maybe I am not seeing something), but this is a one line change that appears to make scheduling work in multimaster.
18:10 forrest wt don't worry about it
18:11 forrest wt, there are test failures all the time on unrelated merges, if there is a problem someone will mention it to you on the PR
18:12 ml_1 joined #salt
18:12 forrest wt, though they probably won't appreciate you making a commit against 2014.1
18:12 forrest it would be preferable if you did it in dev
18:12 forrest the dev branch that is
18:12 forrest usually the tags are more for historical purposes than for back-porting patches to, since the packages aren't usually rebuilt
18:15 linjan joined #salt
18:16 shaggy_surfer joined #salt
18:16 Joseph salt-cloud --list-images=openstack-config.....resulted in this error
18:16 Joseph MalformedResponseError: <MalformedResponseException in None 'Malformed response'>: 'code: 400 body: {"error": {"message": "get_version_v2() got an unexpected keyword argument \'auth\'", "code": 400, "title": "Bad Request"}}'
18:16 Joseph here's my cloud provider config https://gist.github.com/jaloren/c8bf87ec955fa5708138
18:17 nicksloan joined #salt
18:17 Joseph is this user error?
18:17 ipalreadytaken joined #salt
18:18 Joseph i can see that keystone is not happy with it but the question is why
18:19 jpcw joined #salt
18:19 jpcw joined #salt
18:20 Joseph i am on openstack havana...is that supported?
18:22 chrisjones joined #salt
18:29 jhulten joined #salt
18:31 oz_akan_ joined #salt
18:32 micah_chatt joined #salt
18:32 manfred Joseph: it should work fine
18:33 manfred havana is the newest minus 1 right?
18:33 manfred oh
18:33 troyready joined #salt
18:33 manfred Joseph: can you set your identity_url to have the /tokens at the end
18:33 manfred just for shits and giggles
18:33 micah_chatt_ joined #salt
18:34 krow joined #salt
18:34 oz_akan__ joined #salt
18:34 Joseph sure...
18:35 Joseph arrrggh!!!
18:35 Joseph that was it
18:35 manfred :P
18:35 Joseph I am fixing the documentation right now!!
18:35 Joseph that's dumb
18:35 manfred so, in the openstack driver it needs /tokens
18:36 manfred but in the nova driver it doesn't, because novaclient adds it before making calls :/
18:36 Joseph intuitively obvious of course
18:36 manfred and it doesn't check if the url ends in /tokens... so you just start hitting
18:36 Joseph :)
18:36 manfred /v2.0/tokens/tokens
18:36 manfred which is dumb
18:36 manfred i hate novaclient
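
Putting manfred's point into a provider-config sketch: with the libcloud-based openstack driver the identity_url has to end in /tokens (hostname and credentials below are placeholders):

    openstack-config:
      provider: openstack
      identity_url: 'https://keystone.example.com:5000/v2.0/tokens'
      compute_name: nova
      compute_region: RegionOne
      tenant: mytenant
      user: myuser
      password: secret
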
18:36 nvmme joined #salt
18:36 Joseph manfred: i reserve my hatred for neutron
18:36 manfred heh
18:36 Joseph it consumes all my energy to just hate that
18:37 manfred better than openstack networks
18:37 Joseph nova network you mean?
18:37 manfred yeah that
18:37 Joseph i disagree
18:37 Joseph i can get nova network to work!!
18:37 manfred heh
18:37 manfred Joseph: i have a buddy who works on those all day for our private cloud offering, so I just get to bug him
18:37 Joseph manfred: so jealous
18:37 jchen manfred: you work at rax right?
18:38 Joseph i was trying to do the neutron networking with icehouse a few weeks ago and it was simply disastrous
18:38 manfred jchen: yessir
18:38 jchen that's cool
18:38 Joseph rax is short for rack space?
18:38 manfred yes
18:38 Joseph that is very cool
18:38 jchen https://www.google.com/finance?q=rax
18:39 Joseph one of the people i worked with said that neutron is suffering from the lack of maturity with software defined networking which is just fundamentally a bear to get working right now
18:39 Joseph openvswitch is not for the faint of heart, is the gist i got
18:39 chiui joined #salt
18:40 manfred heh
18:40 manfred yes
18:40 manfred it cost me two days of work to get the Archlinux image fixed and a pull request pushed to openstack-guest-agents-unix to fix a problem that was exposed with the switch to neutron.
18:40 giannello oh, a rax engineer...hello - a customer :)
18:43 ajolo joined #salt
18:44 andredieb_ joined #salt
18:44 manfred hi
18:44 londo__ joined #salt
18:46 TyrfingMjolnir joined #salt
18:46 davidnknight joined #salt
18:48 Networkn3rd joined #salt
18:50 Networkn3rd joined #salt
18:51 andredieb_ joined #salt
18:51 n8n joined #salt
18:55 ggoZ joined #salt
18:55 mgw joined #salt
18:58 CeBe2 left #salt
18:58 CeBe joined #salt
18:58 jalaziz joined #salt
19:02 brandon_ rax is short for rackspace? wow, I have always used rs.
19:02 krow joined #salt
19:06 notbmatt http://finance.yahoo.com/q?s=RAX
19:08 philipsd6 Using salt-ssh, is it possible to stagger runs? I know I can use max_procs to limit the number of simultaneous ssh connections, but I don't just want to limit the connections, I want to control the rate at which they initiate connections. Is that possible?
19:08 defunctzombie left #salt
19:08 kermit joined #salt
19:12 Ahlee joined #salt
19:15 bhosmer joined #salt
19:17 Ryan_Lane forrest, whiteinge: I'll need a day or so to get back with you on confirmation, but it's looking good so far.
19:18 forrest sounds good
19:18 whiteinge woot
19:18 jalaziz joined #salt
19:18 jaycedars joined #salt
19:20 troyready joined #salt
19:22 Ahlee so custom runner...possible to determine success of the runner, short of just being extremely defensive in the actual runner?
19:24 Ahlee looks like just parsing the output will have to do
19:25 whiteinge i take it the return code is getting swallowed?
19:25 Ahlee I never thought to check
19:25 Ahlee jesus
19:25 whiteinge i'd guess it does
19:26 Ahlee yeah, it's coming through
19:26 oz_akan_ joined #salt
19:26 Ahlee well, non-0 is coming through when i purposefully blow it up
19:26 Ahlee vs 0 when it executes
19:26 whiteinge well that's something, i guess
19:26 Ahlee pretty sure I need to just start hanging drywall, i'm so not cut out for this technology thing.
19:26 JesseC joined #salt
19:27 whiteinge i've considered making a wrapper script for runner modules a couple times in the past to better control undesired stdout and return value / exit code kind of interactions.
19:27 Ahlee yeah, now on to consuming this through Pepper
19:27 Ahlee which, btw, thanks again for.
19:28 whiteinge oh, neat. you'll be among the first. don't be shy to hit me up with Qs
19:28 Ahlee I so don't like hearing that.
19:28 * whiteinge nods
19:28 Ahlee but, yeah, we basically replaced our internal version with pepper
19:29 Ahlee so far, so good
19:30 whiteinge Ahlee: i'd be very interested to hear use-case feedback. i'm happy to steer that project where users need it to go
19:31 whiteinge s'pose that's an obvious statement...
19:32 Ahlee Not a problem.  Right now I'm trying to get a system into the 'stuck through our API but not through the command line' state so I can throw the same commands through Pepper at it
19:34 Ahlee neat, if i sys.exit(2) in the runner it comes through to salt-run
19:34 Ahlee very nice.  Suppose i should have read through salt-run
19:34 Ahlee anyway, work beckons
19:35 nvmme joined #salt
19:36 whiteinge Ahlee: beware calling sys.exit() in runner functions that will be called via the api
19:37 whiteinge (or test it anyway)
19:37 Ahlee was just testing that.
19:39 Ahlee Yeah, bad juju
19:40 Ahlee BadStatusLine exception
19:40 Ahlee oh well, that's a pretty obvious "Something is wrong here"
19:45 Joseph for salt cloud profile, how do i specify the name for the VM i want to boot ?
19:45 Joseph http://salt-cloud.readthedocs.org/en/latest/topics/profiles.html
19:46 manfred you don't in the profile
19:46 manfred salt-cloud -p <profile> <name of vm>
19:46 Joseph got it
19:49 Joseph if i store the profile in /etc/salt/cloud.providers.d
19:49 Joseph then should i just be able to pull it in via the filename only?
19:50 manfred hrm? it goes in /etc/salt/cloud.profiles.d
19:50 manfred and you refer to the profile by the name
19:50 manfred something:
19:50 manfred provider: openstack
19:50 manfred ....
19:51 manfred salt-cloud -p something vm.joseph.com
19:51 Joseph manfred: tried that and it keeps saying the file is not defined
19:51 Joseph i am sure i am doing something really basic wrong
19:51 manfred it isn't the file name, it is the profile name in the file
19:52 manfred so this is your provider https://gist.github.com/jaloren/c8bf87ec955fa5708138 , it should be in /etc/salt/cloud.providers.d/blah.conf
19:52 manfred then you have one that is just the profile
19:52 manfred blah:
19:52 manfred provider: openstack-config
19:52 manfred image: <uuid>
19:52 manfred size: 4
19:53 manfred in /etc/salt/cloud.profiles.d/blah.conf
19:53 manfred and salt-cloud -p blah vm.something.com
19:53 Joseph In addition to /etc/salt/cloud.profiles, profiles can also be specified in any file matching cloud.profiles.d/*.conf, which is a sub-directory relative to the profiles configuration file (with the above configuration file as an example, /etc/salt/cloud.profiles.d/*.conf).
19:53 manfred make sure to remember both of the words are plural, i have screwed that up before
19:53 manfred oh yeah
19:53 Joseph oh do i need a .conf extension
19:54 manfred yar
19:54 Joseph lol fine!
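
Pulling manfred's fragments together, the profile ends up as a small file under /etc/salt/cloud.profiles.d with a .conf extension (names and values illustrative), launched with salt-cloud -p blah vm.something.com:

    # /etc/salt/cloud.profiles.d/blah.conf
    blah:
      provider: openstack-config
      image: <image-uuid>
      size: 4
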
19:54 savorywatt__ joined #salt
19:54 manfred :P
19:55 jdmf joined #salt
19:55 Joseph [WARNING ] The cloud driver, 'openstack-config', configured under the 'openstack_base' cloud provider alias was not loaded since 'openstack-config.get_configured_provider()' could not be found. Removing it from the available providers list
19:55 Joseph sigh
19:55 Joseph i am sure this is file location naming thing
19:55 Joseph so in /etc/salt/cloud.providers.d
19:56 Joseph i have two files
19:56 Joseph openstack.conf and openstack-profile.conf
19:56 manfred move the profile to cloud.profiles.d
19:56 Joseph <head desk>
19:56 Joseph i am going to put on a dunce cap
19:56 Joseph just give me a sec
19:57 muhammadp joined #salt
19:57 muhammadp left #salt
19:58 surge yeah I think cmd.run 'mount -a' is better
20:02 nineteeneightd joined #salt
20:08 Joseph salt-cloud -p openstack-profile cloudtest [INFO    ] salt-cloud starting [ERROR   ] Profile openstack-profile is not defined Error:     Profile openstack-profile is not defined
20:09 Joseph manfred: thoughts?
20:09 Joseph i have /etc/salt/cloud.providers.d
20:09 manfred gimme one second, i am on a phonecall
20:10 Joseph i have /etc/salt/cloud.profiles.d
20:10 Joseph okay manfred
20:10 Guest88904 joined #salt
20:11 mgw joined #salt
20:12 n8n joined #salt
20:12 Joseph got it
20:14 Joseph so now i was able to initiate the launch...i am running into other problems
20:14 Joseph let me know when you are free manfred
20:20 bhosmer joined #salt
20:21 bhosmer_ joined #salt
20:23 jdmf joined #salt
20:24 shaggy_surfer joined #salt
20:25 fspot joined #salt
20:26 dsolsona joined #salt
20:26 Kelsar joined #salt
20:26 dude051 joined #salt
20:26 nineteeneightd Is there a way, how do I say this, to "parameterize" a specific call to highstate
20:26 nineteeneightd That's probably not the best way to say that
20:27 nineteeneightd But, an example...I want to put an app in maintenance mode
20:27 nineteeneightd One way I might set my states up is that some where in the .sls {% if is_maintenance_mode %} blah blah blah {% endif %}
20:28 fspot left #salt
20:28 nineteeneightd I'm thinking I want to temporarily set some pillar or grain, run highstate, then when all done unset that pillar or grain and run highstate again
20:28 Joseph nineteeneightd: are you using a salt master?
20:28 stevednd Does anyone have any experience with the file.copy state failing for no apparent reason? If I run what would likely be the exact same command on the CLI everything works just fine
20:28 nineteeneightd Yup
20:29 nineteeneightd I see grains.setval/grains.setvals
20:29 Joseph Then you could achieve that with an overstate
20:29 nineteeneightd Hmmmm
20:29 nineteeneightd That's a new term
20:29 notbmatt stevednd: invoking your state on the command line using salt-call will emit the actual command run, as well as stdout and stderr
20:29 notbmatt stevednd: so,  testhost$ sudo salt-call state.sls fooapp.copyfile
20:30 Joseph nineteeneightd: http://docs.saltstack.com/en/latest/topics/tutorials/states_pt5.html#states-overstate
20:31 stevednd notbmatt: I don't think that will put out anything more than what watching the output running salt-minion -l debug will it?
20:32 schimmy joined #salt
20:32 stevednd because this is the extent of what it says it did:
20:32 stevednd [INFO    ] Executing state file.copy for /home/events/app/releases/20140527010203 [ERROR   ] Failed to copy "/home/events/app/shared/cached-copy" to "/home/events/app/releases/20140527010203"
20:34 stevednd notbmatt: tried salt-call anyways, and it's the same output
20:35 notbmatt do you see a line that looks like '[INFO    ] Executing command 'mount -o noatime -t ext4 /dev/image_vg/image_root /var/www ' in directory '/root''?
20:35 Joseph has anyone successfully used salt-cloud to launch a VM from openstack AND set the ssh key in the VM?
20:36 miqui joined #salt
20:36 stevednd notbmatt: nope, nothing resembling a 'cp' command. I'm thinking it may possibly do the operation entirely in python instead of shelling out
20:37 nineteeneightd Joseph: How do you envision something like I described working with overstate?
20:37 bhosmer joined #salt
20:37 nineteeneightd I can kind of see it, but curious on your take
20:37 bhosmer joined #salt
20:37 Joseph run a function that sets the grains/pillars then run highstate after the data is set and finally you can clean up the grain pillar data you set
20:38 Joseph nineteeneightd: your core problem is sls compilation. you have a chicken-and-egg problem due to how the jinja renderer comes into play
20:38 schimmy joined #salt
20:38 Joseph nineteeneightd: in all honesty i think the overstate is quite kludgy. I think they need the concept of registered variables that exists in ansible.
20:39 Joseph overstate is quite kludgy for your use case i mean
20:39 Joseph overstate itself is fine for what it is
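
A hedged alternative to overstate for this use case: gate the relevant states on a pillar value and override that pillar on the command line for the maintenance run (the key and state names below are made up), e.g. salt 'web*' state.highstate pillar='{maintenance_mode: True}', then a plain highstate afterwards:

    {# somewhere in the app's sls #}
    {% if salt['pillar.get']('maintenance_mode', False) %}
    maintenance-page:
      file.managed:
        - name: /var/www/maintenance.html
        - source: salt://app/maintenance.html
    {% endif %}
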
20:43 Joseph manfred: i got it sorta kinda working.  but i have two questions about the set up that really stumped me.
20:43 Joseph * stares malevolently at salt-cloud
20:43 n8n joined #salt
20:47 stevednd notbmatt: looks like file.copy as of the current release version can only copy single files, not directories, even though it claims it can do both. It looks like develop has the functionality fixed, so will probably work with helium
20:50 thayne joined #salt
20:50 ldlework Hi! In my pillar top.sls, when I am writing the match, and specifying what pillar files should be applied to the target, is there anyway to specify any pillar data for the matches /right there/?
20:51 Joseph ldlework: i don't believe so. The top is for targeting the pillar data you've already defined, not creating new data on top of that
20:51 Joseph no pun intended
20:52 ldlework Joseph: hrm, I have a single thing I want to set on matched devices but it seems ugly to specify a whole file just for it
20:52 ldlework alright
20:53 Joseph ldlework: i would probably just create a base.sls
20:53 Joseph and then dump stuff in there
20:53 Joseph i would not be surprised if the file grew over time
20:54 ldlework Joseph: its actually data specific to specific machines
20:54 bhosmer joined #salt
20:54 ldlework so like, api keys that are specific to single nodes
20:54 ajolo joined #salt
20:54 bhosmer__ joined #salt
20:54 ldlework I find it odd to target the machine based on a hostname grain (or something) just to identify a pillar sls just for it
20:55 Joseph you can target a machine based on many things
20:55 ldlework When I want to add a new server, I have to add it to the top, but also add a sls file
20:55 ldlework Joseph: right but that's not what I'm talking about
20:56 Joseph you could put this all in a single sls though. You just need to use a jinja if to determine what value gets set
20:56 ldlework Joseph: Ah, like 'switch' on the grain, and render the right key out
20:56 ldlework Joseph: perfect that works
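
A sketch of the "switch on the grain" approach, keeping per-minion keys in one pillar sls (minion IDs and keys are placeholders):

    {# /srv/pillar/apikeys.sls #}
    {% if grains['id'] == 'web01.example.com' %}
    api_key: AAAA-1111
    {% elif grains['id'] == 'web02.example.com' %}
    api_key: BBBB-2222
    {% endif %}
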
20:56 jhulten joined #salt
20:56 ldlework still it'd be nice to specify top-level pillar values in the top file!
20:57 smcquay joined #salt
20:57 smcquay joined #salt
20:57 Joseph ldlework: not sure i get the use case for that but you could throw it out to the google group and see what they say
20:58 ldlework I guess I didn't explain very well.
20:58 scoates joined #salt
20:58 scoates hello friends
20:58 viq nineteeneightd: you can also set pillars on command line
20:59 kballou joined #salt
20:59 scoates I'm trying to highstate a specific node (node_id) programmatically. I have: `saltclient = salt.client.LocalClient(); saltclient.cmd_async(node_id, 'state.highstate')` … but that doesn't seem to highstate (I don't see it in the minion log, and I don't see a file's timestamp change as it should/does on a manual highstate.) Any ideas?
20:59 scoates (this is running from my master)
21:00 Joseph well apparently salt-cloud doesn't like it if an openstack vm only has a private IP. https://gist.github.com/jaloren/44cf9a6f9b746d6f1a2b
21:00 mgarfias_ Failed to authenticate, is this user permitted to execute commands?
21:00 Joseph mgarfias_: is that to me or scoates?
21:01 scoates I doubt me (-:
21:01 mgarfias_ neither
21:01 smcquay_ joined #salt
21:01 Joseph ah
21:01 Joseph well then
21:01 mgarfias_ i just ran into this
21:01 Joseph ha
21:01 mgarfias_ on a freshly bootstrapped install
21:01 Joseph mgarfias_: there's something wrong with the key setup
21:01 taterbase joined #salt
21:02 mgw joined #salt
21:02 Joseph mgarfias_: shut down all the minions, reject and then delete all minion keys, clear the cache of master and minion, start up the daemons and accept the key again
21:03 Joseph also before you do any of that make sure salt-key on the master actually shows a key for the minion
21:03 Joseph :)
21:03 mgarfias_ keys are there
21:03 dstokes for some reason i'm unable to run `salt-call state.sls state-name`, complains of no matching sls, but works fine when declaring in the top file. are dashes invalid characters in state names?
21:03 mgarfias_ was first thing i  checked
21:04 Joseph then i'd clean out the keys and initiate the key exchange again
21:05 Joseph dstokes: the state name is a python module consequently it needs to conform to python naming conventions. Pretty sure a dash is a no go in python for a module name.
21:06 nineteeneightd viq: How?
21:06 dstokes interesting.. 'nginx-passenger' works as a state name, but 'salt-minion' doesn't..
21:08 Joseph whats the error output?
21:08 Joseph can you create a gist?
21:09 oz_akan_ joined #salt
21:09 krow joined #salt
21:11 viq nineteeneightd: at least I seem to remember you can, trying to find it now
21:12 MrTango joined #salt
21:12 Joseph salt-cloud is mocking me. i almost have it working! why does it need to  care about a public ip?
21:13 manfred Joseph: are you running develop?
21:13 dstokes Joseph: looks like it might be a prob with my init.sls. I can run subs of the state (i.e. state.sls state-name.thing) but not the state itself..
21:13 Joseph 2014.1.3
21:13 manfred Joseph: that is a bug in that if you don't have a public_ip
21:13 Joseph :(
21:13 manfred Joseph: just set ssh_interface: private_ips in your provider
21:14 manfred it is fixed in develop
21:14 Joseph ahhh
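
That workaround is a single line in the provider config, roughly (alias and other settings illustrative):

    openstack-config:
      provider: openstack
      ssh_interface: private_ips
      # ...rest of the provider settings unchanged
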
21:15 viq nineteeneightd: sorry, can't find it, could be my memory is misleading me
21:15 Joseph manfred: so bootstrap-salt.sh always runs...is there a way to turn that off?
21:16 jhulten_ joined #salt
21:16 manfred yes
21:16 manfred script: /bin/true
21:16 londo__ joined #salt
21:16 Joseph oh i see
21:16 manfred or actually
21:16 manfred drop a
21:16 manfred #!/bin/bash
21:16 manfred return 0
21:17 manfred in /etc/salt/cloud.deploy.d/something.sh
21:17 manfred script: something.sh
21:17 manfred in /etc/salt/cloud
21:17 Joseph in a file named cloud
21:17 Joseph or in my provider?
21:17 manfred /etc/salt/cloud are for things like script: or stuff that would apply to all vms
21:18 manfred i think you would want to put it in the profile
21:18 Joseph ahh
21:18 manfred or /etc/salt/cloud
21:18 wt left #salt
21:18 manfred this is my /etc/salt/cloud http://ix.io/cLl
21:19 dude051 joined #salt
21:19 Joseph manfred: why would that be return 0? wouldn't it be exit 0?
21:19 manfred i mean, you can do either
21:20 Joseph got it
21:20 manfred :)
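
For the record, a sketch of the no-bootstrap setup manfred describes (the script name is made up): put a two-line script ("#!/bin/bash" then "exit 0") in /etc/salt/cloud.deploy.d/noop.sh and reference it, or skip the custom script entirely:

    # /etc/salt/cloud -- applies to every VM; could also go in a single profile
    script: noop.sh
    # or, as manfred first suggested:
    # script: /bin/true
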
21:21 Joseph okay next question
21:21 Joseph ssh_key_name: saltstack  ssh_key_file: /root/.ssh/saltstack
21:21 Joseph those properties
21:21 Joseph they aren't working like i would expect
21:21 Joseph my expectation is that it would upload the ssh key file to openstack and then boot the vm with that keypair
21:21 Joseph but that doesn't seem to happen
21:21 manfred ssh_key_name is the key that you have put in your openstack thing
21:22 manfred and what you reference in nova keypair-list
21:22 Joseph via nova keypair ?
21:22 Joseph so then whats the purpose of the file property?
21:22 manfred ssh_key_file is the file to use with ssh -i
21:22 Joseph can you elaborate on that?
21:22 manfred when it is bootstrapping the server
21:22 manfred instead of using the password
21:22 Joseph ohhh
21:22 manfred it sshs to root with the private key
21:22 manfred that is the file to use
21:22 z3uS joined #salt
21:23 Joseph and then that key is thrown away to never be used again?
21:23 manfred no
21:23 Joseph okay...
21:23 manfred it is the private key that matches the key you have stuck into openstack's key stuff
21:23 Joseph OH!
21:23 Joseph got it
21:23 manfred so i do nova keypair-create saltstack
21:23 manfred and it spits out a private key
21:23 Joseph yes
21:23 manfred or you upload one with --pub-key
21:23 Joseph nope i see whats going on
21:23 manfred ssh_key_file is where that private key is stored
21:23 manfred :)
21:24 Joseph provisioning keypair is out of band for salt cloud
21:24 manfred it is
21:24 manfred i have been thinking about adding it
21:24 manfred but haven't yet
21:24 Joseph but once its provisioned giving it to salt cloud is nice for bootstrapping
21:24 manfred where you have ssh_pub_key and if the ssh_key_name isn't created already in keypair-list, it would upload the public key before making the server
21:25 Joseph that's exactly what i was expecting to happen :)
21:25 Joseph and thus got confused when it uh didn't
21:25 manfred :)
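
In config terms, the distinction manfred is drawing looks like this (the key name and path are Joseph's own examples):

    # in the openstack provider config
    ssh_key_name: saltstack             # keypair name as listed by nova keypair-list
    ssh_key_file: /root/.ssh/saltstack  # matching private key, used to ssh in and bootstrap
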
21:25 Joseph got any examples of using salt cloud in a sls?
21:27 manfred yes
21:27 manfred https://github.com/gtmanfred/salt-states/blob/master/gluster/init.sls#L27
21:27 manfred https://github.com/gtmanfred/salt-states/blob/master/cloud/gluster.sls
21:27 manfred and it runs with this overstate https://github.com/gtmanfred/salt-states/blob/master/overstate.sls
21:27 manfred to make 8 servers and configure them with gluster
21:27 manfred and cloud block storage from cinder
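
The building block behind those examples is the cloud.profile state; a minimal sketch (not manfred's actual values) looks like:

    gluster1.example.com:
      cloud.profile:
        - profile: blah
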
21:28 jchen manfred: s/requre/require
21:28 jchen https://github.com/gtmanfred/salt-states/blob/master/gluster/init.sls
21:29 manfred oh
21:29 manfred hrm... never had that throw an error lol
21:29 manfred and i always use ubuntu servers
21:30 jchen salt 2014.1.3: now with auto-spell-check
21:30 ajolo_ joined #salt
21:31 manfred heh
21:33 jalbretsen is there a way in file.recurse to just upload the files in a directory, but then not change them should they be modified?    file.managed has replace: False which works nicely for a single file
21:35 manfred jalbretsen: it does not look like there is
21:35 jalbretsen Feature request time
21:36 jchen pull request time
21:37 manfred jalbretsen: the only thing it would do is if the directory exists, it just exits... i can do that in like 5 minutes...
21:37 jalbretsen my focus is the files in the directory
21:38 jalbretsen so I upload 10 files in the directory, one is changed, and I don't need salt to change it back
21:38 manfred so, the problem is, file.recurse doesn't use file.managed for each file...
21:38 manfred hrm... or does it
21:39 manfred yup it does
21:39 manfred gimme a bit
21:40 manfred jalbretsen: actually
21:40 manfred jalbretsen: can you try it
21:41 Joseph manfred: so does salt-cloud do automation of glance?
21:41 manfred jalbretsen: https://github.com/saltstack/salt/blob/develop/salt/states/file.py#L1638
21:41 manfred recurse accepts kwargs
21:41 manfred jalbretsen: https://github.com/saltstack/salt/blob/develop/salt/states/file.py#L1874
21:41 manfred then passes them to manage_file
21:41 manfred which has a pass_kwargs array
21:42 manfred that checks if replace is set in kwargs, and if it is https://github.com/saltstack/salt/blob/develop/salt/states/file.py#L1892
21:42 manfred jalbretsen: https://github.com/saltstack/salt/blob/develop/salt/states/file.py#L1909 passes it to managed
21:42 manfred so it should just work ™
21:42 jalbretsen ya, I tried it actually, it didn't appear to
21:42 jalbretsen let me double check for typos and try again to be super sure
21:43 manfred what version are you on?
21:43 Joseph never mind answered my own question http://salt.readthedocs.org/en/v2014.1.1/ref/modules/all/salt.modules.glance.html?highlight=glance#module-salt.modules.glance
21:43 manfred looks like it is in 2014.1 branch
21:43 viq jalbretsen: you could set a grain once the files are there to not match the recurse state again
21:43 viq Or set a lock or something
21:44 manfred that is way more work than it needs to be
21:44 manfred it should just work by passing replace: False in the file.recurse
21:44 manfred based on digging into the code
21:44 jalbretsen ya
21:44 happytux joined #salt
21:44 jalbretsen I'm running Hydrogen
21:45 viq Everyone is ;)
21:45 manfred yeah it should work
21:45 manfred i am about to head home, if it doesn't work, open an issue and tag @gtmanfred, and I will look at it tonight or tomorrow when I have freetime
21:45 jalbretsen it's not
21:46 jalbretsen https://github.com/saltstack/salt/issues/13212
21:47 manfred cool i will look at it when I have a minute and see if I can figure out why it ain't working
21:47 manfred peace o/
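
For reference, the syntax under discussion is simply passing replace through file.recurse; whether it is actually honoured in Hydrogen is exactly what issue 13212 is about (paths illustrative):

    /etc/myapp/conf.d:
      file.recurse:
        - source: salt://myapp/conf.d
        - replace: False
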
21:53 aw110f joined #salt
21:54 miqui joined #salt
21:56 thayne joined #salt
21:57 Nazzy joined #salt
22:00 chuffpdx joined #salt
22:03 tristianc|Mobile joined #salt
22:04 kermit joined #salt
22:06 ajolo joined #salt
22:09 londo__ joined #salt
22:15 jhulten joined #salt
22:17 redondos joined #salt
22:17 redondos joined #salt
22:18 aw110f joined #salt
22:18 dsolsona joined #salt
22:22 thoughtjacked joined #salt
22:22 thoughtjacked left #salt
22:23 quickdry21 joined #salt
22:25 bemehow joined #salt
22:30 jaycedars joined #salt
22:30 stevednd whiteinge: is there any way to specify pillar data on the command line? It would be awesome to say salt 'webserver' --pillar={deploy_version:123} state.sls deploy.app
22:31 whiteinge stevednd: salt '*' state.sls mysls pillar='{foo: "Foo!"}'
22:33 stevednd whiteinge: nice. does that work with anything, or only state.sls?
22:33 whiteinge only state.sls and state.highstate
22:33 whiteinge hm. maybe state.top too?
22:34 stevednd okay, thanks
22:35 jdmf joined #salt
22:40 DaveQB joined #salt
22:43 notbmatt so this is weird, I'm getting...
22:43 notbmatt from salt-call state.show_highstate 'Fetching file from saltenv 'test', ** skipped ** latest already in cache 'salt://imageservers/init.sls''
22:43 notbmatt and also 'No matching sls found for 'imageservers' in env 'test''
22:43 notbmatt O_o
22:45 TyrfingMjolnir joined #salt
22:47 londo__ joined #salt
22:47 notbmatt I'm totally stymied
22:48 schmutz joined #salt
22:49 stevednd what's the syntax to reuse blocks of yaml with saltstack?
22:49 stevednd the normal &whatever <<: *whatever yaml syntax doesn't work
22:51 XenDeKhra joined #salt
22:52 whiteinge stevednd: i'm not aware that we've turned off any yaml features (though it's possible)
22:53 stevednd whiteinge: I've tried everything I can think of, and it won't compile the yaml
22:53 whiteinge depending on what you're doing you might find jinja macros useful for repeating blocks
22:53 bmcorser joined #salt
22:54 whiteinge i'm not super familiar with some of yaml's more esoteric features but if you pastebin a sample i can try to recreate
22:55 shaggy_surfer joined #salt
22:56 manfred stevednd: https://github.com/saltstack/salt/blob/develop/salt/utils/yamlloader.py#L32 that is where the loader is
22:56 manfred i don't see anything really disabling stuff...
22:56 whiteinge notbmatt: i've seen that error when there's a syntax error somewhere up the include chain. not easy to single-out. i'd suggest commenting out states and includes to narrow it down
22:56 manfred i would just use jinjas include ability
22:56 stevednd whiteinge, manfred: https://gist.github.com/dnd/72f2da2354d8abe39636
22:57 stevednd I'm not sure if it doesn't work because the data represents an array, or not
22:58 whiteinge stevednd: try indenting the << line another two spaces
22:58 manfred that is failing inside the yaml parser
22:59 whiteinge no, that's not quite it. it only grabs the first list item
22:59 manfred local:
22:59 manfred Data failed to compile:
22:59 manfred ----------
22:59 manfred ID appUserGroup in SLS this is not a dictionary
22:59 whiteinge stevednd: http://yamllint.com/
22:59 Joseph stevednd: its not valid yaml
22:59 manfred ^^
23:00 whiteinge stevednd: if you fix the indentation it produces valid yaml but not a valid state data structure
23:01 whiteinge you need to be able to generate a list of single-value dictionaries. i'm guessing yaml doesn't have a built-in way to do that
23:01 druonysus joined #salt
23:02 stevednd yeah, I was thinking that might be the cause of the problem. I've used the syntax tons of times before, but never with a list
23:03 mosen joined #salt
23:04 notbmatt whiteinge: thanks, that ended up being the case; init.sls->a.sls->b.sls and b is broken gives that no-content error
23:05 whiteinge notbmatt: glad you found it! that error message isn't helpful though. mind filing a ticket?
23:06 whiteinge stevednd: you can do something similar with jinja. i.e., defining the values in jinja vars and building out the state programmatically
23:06 Outlander joined #salt
23:07 Ryan_Lane whiteinge, forrest: is your jenkins config public anywhere?
23:07 stevednd whiteinge: I could, but your suggestion to use a macro is probably what I'll use rather than create some funky string with line breaks
23:07 Ryan_Lane nevermind. found it
23:07 jslatts joined #salt
23:07 stevednd the nesting level should always be the same, so I shouldn't have to worry about indentation
23:08 * whiteinge nods
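
A sketch of the jinja-macro route whiteinge suggests for repeated blocks (paths, ownership and the macro name are made up):

    {% macro owned_file(path, source) %}
    {{ path }}:
      file.managed:
        - source: {{ source }}
        - user: events
        - group: events
        - mode: 644
    {% endmacro %}

    {{ owned_file('/home/events/app/one.conf', 'salt://events/one.conf') }}
    {{ owned_file('/home/events/app/two.conf', 'salt://events/two.conf') }}
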
23:09 bemehow joined #salt
23:09 whiteinge Ryan_Lane: in addition to that repo you (probably) found, see the salt/tests/jenkins.py script that spins up VMs and kicks off the tests
23:09 Ryan_Lane I'm looking for the actual jenkins job :)
23:11 forrest Ryan_Lane, the one for salt? You could always ask soundtech
23:11 forrest I'm sure he'd be happy to share
23:13 chrisjones joined #salt
23:14 patrek_ joined #salt
23:14 pmcg_ joined #salt
23:14 aiqa joined #salt
23:14 londo_ joined #salt
23:15 lyddonb joined #salt
23:16 pfalleno1 joined #salt
23:16 oeuftete_ joined #salt
23:16 manfred stevednd: got it
23:16 zach joined #salt
23:16 jchen_ joined #salt
23:16 Xiao_ joined #salt
23:17 manfred stevednd: http://ix.io/cMt
23:18 manfred whiteinge: ^^'
23:18 manfred renders correctly in the yamllint.com
23:19 schristensen_ joined #salt
23:19 whiteing1 joined #salt
23:19 seblu42 joined #salt
23:19 svs_ joined #salt
23:20 bwq- joined #salt
23:20 _sifusam_ joined #salt
23:20 z3uS| joined #salt
23:20 flebel_ joined #salt
23:20 Ryan_Lane I'd like to see how the github plugin is configured for the jobs
23:20 whiteinge joined #salt
23:20 Ryan_Lane forrest: yeah, the one for salt
23:20 forrest yea s0undt3ch then
23:21 Ryan_Lane I don't know how to contact him :)
23:21 agronholm joined #salt
23:21 forrest he's in IRC
23:21 forrest just msg him
23:21 forrest I just asked him if he was around
23:21 forrest he is
23:21 manfred he is in #salt-devel, they are actually messing with jenkins right now it looks like
23:21 jpaetzel joined #salt
23:22 NV hrm, getting a runtime error about maximum recursion depth when trying to highstate a box
23:22 NV and can't seem to work out why
23:22 forrest NV, well, stop recursing :P
23:23 NV state is fairly complex, so im not sure exactly what's doing it but i do know the last state executed and what the next one would be afterwards
23:23 NV show_highstate and show_lowstate both succeed
23:24 NV http://pastie.org/private/wd6yckllkdcq5ikq0vwyiw
23:25 NV any ideas on where to go from there?
23:25 shaggy_surfer joined #salt
23:27 Joseph NV: have you looked at your requisites? If a requisite is pointed at the dictionary that the requisite is in, you would get that
23:28 NV if requisites were wrong, wouldn't that break show_lowstate?
23:29 Joseph not sure
23:29 Joseph I just remember encountering this recursion error when i did that by accident
23:30 NV mhmm, what do you mean by requisite pointing to the dictionary anyway, require is a list isn't it?
23:31 notbmatt ohhh weird, so I think I just tracked my issue down
23:31 notbmatt there's a leftover salt://apache/ directory, but we've moved to the formula
23:31 notbmatt init.sls tries to include apache
23:31 notbmatt salt://apache/ is empty
23:32 notbmatt (at least, the first one the fileserver backend finds is empty)
23:32 Joseph NV: just a sec i'll give you an example
23:32 notbmatt hm, maybe not. damn.
23:33 Joseph NV: here's an example https://gist.github.com/jaloren/2ba1c5f0f2c2472c2afa
23:33 notbmatt oh, no, that totally IS it
23:34 notbmatt wow, that's a weird edge case
23:34 notbmatt apache doesn't exist in the 'test' env, and I forgot the 'base:' env spec
23:34 NV Joseph: oh you mean if a state requires itself?
23:35 Joseph NV: yes sorry if i worded that in a confusing way
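
For anyone else hitting this, the shape Joseph's gist warns about is roughly a state whose requisite points back at itself (names made up):

    # one way to trigger "maximum recursion depth exceeded" during state.highstate
    myservice:
      service.running:
        - enable: True
        - require:
          - service: myservice
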
23:35 davidnknight joined #salt
23:36 NV its ok, it's given me something to look for
23:36 NV sadly it doesn't seem to be the case, at least not that i can spot
23:38 conan_the_destro joined #salt
23:38 shaggy_surfer joined #salt
23:40 stevednd manfred: salt didn't like that for me
23:41 Joseph I would start commenting out state blocks until you isolate the dctionary that is the cause
23:41 stevednd it compiles, but doesn't put the values in place
23:41 NV yeah, just found a couple of blocks that if i comment out it works
23:41 NV they had prereq's to stop a service gracefully before changing stuff
23:41 * TaiSHi raises fist
23:42 TaiSHi A suggestion I made for salt is probably getting introduced into 2014.2
23:42 manfred stevednd: it works here
23:42 manfred hrm, maybe it didn't
23:42 manfred gimme a second
23:44 bemehow joined #salt
23:46 stevednd manfred: - <<: *user
23:47 forrest whiteinge, someone brought up to me today that it is somewhat difficult to differentiate between what is bash code, and what is directly salt state code on stuff such as the walkthrough
23:47 stevednd that works, and the state does its thing, owning the file properly here
23:47 manfred stevednd: that's it
23:47 manfred cool
23:47 forrest whiteinge, might be good at least on the walkthrough to somehow try to be more clear? I don't know
23:48 manfred stevednd: i am going to have to use this in a few places now
23:48 Joseph forrest: #!/bin/bash?
23:49 forrest ?
23:49 forrest what do you mean
23:49 Joseph forrest: for differentiating salt staate from bash code
23:49 forrest yea, shell calls
23:49 forrest versus code that is inside state files
23:49 Joseph refering to something like cmd.run?
23:50 Joseph where its running a shell script?
23:50 forrest not even that
23:50 forrest http://docs.saltstack.com/en/latest/topics/tutorials/walkthrough.html#the-first-sls-formula
23:50 forrest like the first item there
23:50 forrest then the second
23:50 forrest then the third
23:50 forrest they state they are parts of states
23:50 forrest but VISUALLY this is not clear
23:51 Joseph got it
23:51 whiteinge stevednd: wait, did you get that working successfully? mind updating your gist with the syntax?
23:51 Joseph dost RST support colorization for specific types of code blocks?
23:51 forrest Joseph, I think it depends on the theme
23:52 whiteinge forrest: good point. i think we can add a little something there. perhaps the file name for files and "run the following in your shell" for commands
23:52 forrest whiteinge, I was thinking even simpler, differentiate the background
23:52 whiteinge forrest: mind filing a ticket and linking it to that main docs sprint ticket?
23:52 Joseph whiteinge: would that make sense to a windows user? wouldn't you want to use the more generic command line?
23:52 whiteinge that would work too
23:52 forrest because if we add a note like that, it's still not a visual representation that they are different things
23:52 Joseph forrest: yes right
23:52 forrest whiteinge, yea I'll make a ticket, which one is the main sprint ticket again?
23:53 whiteinge forrest: Refs #12446.
23:54 forrest thanks
23:54 NV http://pastie.org/private/uhhn6skqisjdvta4mfpgia if that is commented, the state works
23:54 NV if it's uncommented, i get the recursion error
23:55 NV nothing requires/watches/prereq's the -stop state
23:55 shaggy_surfer joined #salt
23:56 oz_akan_ joined #salt
23:57 ldlework How does salt determine the 'host' grain?
