
IRC log for #salt, 2014-09-08


All times shown according to UTC.

Time Nick Message
00:00 rjc joined #salt
00:01 rome joined #salt
00:03 kingel joined #salt
00:12 halfss joined #salt
00:18 xcbt joined #salt
00:21 mrrc joined #salt
00:24 jalaziz joined #salt
00:24 diegows joined #salt
00:38 dvestal joined #salt
00:47 dude051 joined #salt
00:53 anotherZero joined #salt
00:54 kralla joined #salt
01:04 winmutt yawn
01:04 elfixit1 joined #salt
01:11 njs126 joined #salt
01:14 dvestal joined #salt
01:14 rome joined #salt
01:15 AdamSewell joined #salt
01:25 schimmy joined #salt
01:27 schimmy1 joined #salt
01:52 kingel joined #salt
01:56 bhosmer joined #salt
01:59 ajolo_ joined #salt
02:03 acabrera joined #salt
02:04 to_json joined #salt
02:06 dvestal joined #salt
02:07 englishm joined #salt
02:30 viq joined #salt
02:33 digin4 joined #salt
02:35 ramishra joined #salt
02:39 digin4 joined #salt
02:45 digin4 joined #salt
02:46 malinoff joined #salt
03:02 otter768 joined #salt
03:03 malinoff joined #salt
03:05 AdamSewell joined #salt
03:09 nitti joined #salt
03:12 bhosmer joined #salt
03:16 mosen joined #salt
03:21 yomilk joined #salt
03:26 digin4 joined #salt
03:26 tmh1999 joined #salt
03:27 danielbachhuber joined #salt
03:33 dccc__ joined #salt
03:36 digin4 joined #salt
03:38 sectionme joined #salt
03:41 kingel joined #salt
03:46 AdamSewell joined #salt
03:47 ramishra joined #salt
03:51 mrlesmithjr joined #salt
03:55 mrlesmithjr joined #salt
03:59 troyready joined #salt
04:01 ajolo_ joined #salt
04:02 nitti joined #salt
04:02 XenophonF hm, what's a good way to signal after a state.highstate that, because something changed the computer needs to reboot?
04:03 mosen good q XenophonF, I have no idea
04:03 mosen a command that watches everything somehow
04:04 XenophonF hm
04:04 XenophonF could work
04:04 mosen maybe salt reactor has something like that
04:04 XenophonF have a reboot state, and then watch_in to it
04:04 XenophonF so what i'm doing is adding a udev rule on centos
04:05 XenophonF it enables memory hot add under hyper-v
04:05 XenophonF to activate the rule requires a reboot
04:05 XenophonF hence my question
04:05 mosen it seems like salt reactor kinda fits the bill
04:06 XenophonF i'll check that out, moses
04:06 mosen because the job event could run a command
04:06 XenophonF sorry mosen i mean :)
04:06 mosen hehe
04:06 mosen I dont have hv 2012, sad
04:06 XenophonF but the reboot would need to finish _after_ the state run
04:06 XenophonF you know, a related question i have might have to do with key generation at highstate time
04:07 mosen yeah i guess it would depend on what events were available after the highstate ran
04:07 XenophonF right now i have to manually create key pairs and certificate signing requests for my web servers
04:07 XenophonF it'd be kind of interesting if i could get salt to do that for me...
04:07 XenophonF thanks for the pointer!  time to RTFM :)
04:07 mosen im not pre-signing stuff, I just use salt-key
04:08 mosen but there is a section somewhere on doing that err
04:09 XenophonF interesting, re: my q about reboots, reactor looks like the way to go
04:10 mosen yeah im not sure whether you can make it conditional
04:10 mosen on the fact that the highstate ran ok
04:11 XenophonF well, i only need to reboot if the one file.managed state changes
04:12 mosen eugh, i dont know enough about reactor to say
04:14 XenophonF maybe i have to look for a specific event
04:15 mosen I dont think a single state produces a new event, just a job run
04:15 XenophonF hm
04:17 mosen i could be wrong, there might be another way to do it
04:18 XenophonF that's cool
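A minimal sketch of the watch_in idea floated above, assuming a hypothetical udev rule path and placeholder state IDs (this is not XenophonF's actual formula); cmd.wait only fires when the watched file.managed state reports changes:

hyperv-hotadd-udev-rule:
  file.managed:
    - name: /etc/udev/rules.d/100-balloon.rules
    - source: salt://hyperv/files/100-balloon.rules
    - watch_in:
      - cmd: reboot-after-udev-change

reboot-after-udev-change:
  cmd.wait:
    - name: shutdown -r +1 'rebooting to activate hot-add udev rule'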
04:21 XenophonF well my googling around for an answer brought me to something completely unrelated but really cool: https://code.google.com/p/security-onion/wiki/Salt
04:21 XenophonF I love me some Security Onion.  I need to check this out.
04:22 ramishra joined #salt
04:22 mosen ah nice idea
04:23 Furao joined #salt
04:24 ramishra joined #salt
04:24 schimmy joined #salt
04:24 XenophonF mosen, do you run any linux virtual machines under any version of hyper-v?
04:25 schimmy1 joined #salt
04:25 mosen XenophonF: yep CentOS under hv 2008r2
04:25 geekmush joined #salt
04:28 Furao joined #salt
04:30 XenophonF i have both centos 6.5 and 7.0 guests running under 2012r2
04:30 XenophonF under 6.5, the 'manufacturer' and 'productname' grains don't exist
04:31 XenophonF i'm not sure why---same version of salt in both cases (2014.1.10 installed via EPEL)
04:31 Furao joined #salt
04:33 XenophonF have you encountered anything similar?
04:34 mosen hmmm to be honest i dont think i have anything using those grains but i can check
04:34 XenophonF i was hoping to use those two grains to determine whether to fire hyper-v-related states
04:34 mosen oh yeah you can filter by grain for sure
04:34 XenophonF works fine on FreeBSD minions, too, so I'm leaning toward Linux 2.6 or something not exposing the data
04:35 mosen might be the implementation of the grains
04:35 mosen i filter by virtual:VirtualPC (even though it's hyper-v that's the value of Virtual)
04:36 XenophonF i thought about that too
04:36 ramishra joined #salt
04:36 mosen I havent found a great way to install the integration components automatically
04:36 XenophonF but again, on my CentOS 6.5 virtual machines, the value of virtual is "physical"
04:36 mosen apart from statically defining the package name
04:36 mosen oh really?
04:36 XenophonF yeah
04:37 mosen got LinuxIC installed on those?
04:37 XenophonF no
04:37 XenophonF minimal install of CentOS 6.5 and 7.0 in both cases
04:38 XenophonF straight defaults except for configuring networking, enabling the EPEL repo, and installing the salt-minion
04:38 mosen ah ok
04:38 XenophonF all generation 1 virtual machines, too, so the emulation should be the same as what you're using on 2008r2
04:39 mosen i dont have any machines that dont run the integration components though
04:39 mosen I think
04:39 XenophonF my FreeBSD VMs also report "virtual: physical"
04:39 XenophonF so I was hoping to key off of manufacturer/productname
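A rough top.sls sketch of the grain matching being discussed; the 'hyperv' state name is a placeholder, and the manufacturer value is an assumption (XenophonF reports the 'virtual' grain shows "physical" on some guests, hence the second match):

base:
  'virtual:VirtualPC':
    - match: grain
    - hyperv
  'manufacturer:Microsoft Corporation':
    - match: grain
    - hyperv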
04:40 XenophonF I need to install/upgrade the integration components across the board: Windows, Linux, FreeBSD.
04:40 mosen yeah.. i just have a state that installs the pkg with a big lookup table
04:40 XenophonF It'd be nice if I had a consistent way to kick that off within the state, but I'm not afraid of some kind of crazy compound if condition
04:41 XenophonF this is kind of the direction i'm heading in
04:41 XenophonF http://paste.debian.net/119815/ - hyperv/map.jinja
04:42 mosen neater than my setup
04:42 XenophonF http://paste.debian.net/119816/ - hyperv/init.sls
04:42 kingel joined #salt
04:42 XenophonF right now freebsd and redhat only
04:42 XenophonF but i'm going to add support for debian, gentoo, and maybe ubuntu and suse too
04:43 XenophonF i use that if state.packages pattern everywhere
04:43 XenophonF i may have gone overboard with the formulas ;)
04:43 XenophonF you should see my apache formula
04:43 mosen where do you source the IC packages from?
04:43 XenophonF base
04:44 XenophonF i assumed in my naivety that everything i needed was in base
04:44 mosen im copying them inside salt then using salt:/// sources, obviously thats silly
04:44 mosen didnt realise that they were in base now
04:44 XenophonF http://technet.microsoft.com/en-us/library/dn531026.aspx
04:44 XenophonF that's basically what i've been following for centos
04:45 XenophonF i've been assuming that rhel is pretty much the same
04:45 mosen ahh right
04:45 mosen no need for 3rd party after 6.4, i just havent kept up to date on the information
04:46 mosen maybe thats when virtual:VirtualPC breaks
04:46 XenophonF funny thing is, hyper-v reports that stuff's running in a degraded state
04:46 XenophonF integration services upgrade recommended but there isn't one, of course
04:46 XenophonF could be
04:46 XenophonF i don't have anything older
04:46 XenophonF i could just ditch 6.5 at this point
04:46 XenophonF supporting it is mostly an academic exercise for me
04:47 mosen I can't even do hot add
04:47 XenophonF well, if it's any consolation, i have been having problems with hot add
04:48 XenophonF my centos 6.5 and 7.0 guests will go from 512 to about 900 MB
04:48 XenophonF and then they run out of memory
04:48 XenophonF hence the google search that put me on that technet page
04:48 XenophonF and wanting to configure udev
04:48 sectionme joined #salt
04:48 mosen ah right
04:48 XenophonF and needing to reboot after :)
04:49 XenophonF i think i'm going to drop centos 6.5
04:49 XenophonF concentrate on 7
04:49 steveoliver left #salt
04:49 XenophonF maybe figure out how to make the 'virtual' grain work properly on FreeBSD
04:49 XenophonF and move on with life
04:50 XenophonF i need to get the freebsd integration components built, too
04:50 mosen are they shipped with fbsd now?
04:50 XenophonF kinda
04:50 XenophonF kernel support is good!
04:50 XenophonF especially with 10.0
04:50 XenophonF so it uses the native storage and networking now
04:51 mosen nice
04:51 XenophonF there's a few rough spots
04:51 mosen I'm missing out on a ton of stuff
04:52 XenophonF like if you use DHCP you have to configure it to wait until it gets a lease before continuing the boot process
04:52 ramteid joined #salt
04:52 XenophonF (ifconfig_hn0=SYNCDHCP in /etc/rc.conf)
04:52 XenophonF dunno why
04:52 XenophonF the Azure blog says the same thing, sadly
04:53 XenophonF and none of the userspace stuff ships with the base distribution
04:54 mosen support has always been a little bit sketchy
04:54 felskrone joined #salt
04:54 XenophonF and it hasn't been merged into the FreeBSD Ports/Packages system
04:54 XenophonF yeah
04:54 XenophonF i haven't bothered building the userspace stuff yet
04:54 XenophonF haven't needed hot add
04:54 XenophonF i will soon though
04:55 XenophonF i only have two production FreeBSD guests - the salt master plus a spam filter for my ancient exchange server
04:56 jalbretsen joined #salt
04:57 XenophonF i need to upgrade my dev hypervisors to 2012r2
04:57 XenophonF i've been really happy with hyper-v, which is saying something since i've been a loyal vmware customer since workstation 2.0
04:58 XenophonF i'm openstack-curious but that seems to have a steeper learning curve
04:59 mosen yeah im pretty happy with HV
04:59 mosen cheap licensing
04:59 Eugene "openstack-curious" made me lul
04:59 mosen small shop mixed environment, so works well here
04:59 XenophonF OMG the licensing model is a killer feature
05:02 XenophonF have a similar setup here
05:03 XenophonF i'm desperately hoping salt-cloud support for hyper-v improves
05:04 XenophonF we aren't currently using VMM
05:04 XenophonF and from what i can tell of salt-cloud, we may not need it
05:04 XenophonF VMM that is
05:05 mosen havent even tried salt-cloud for hyperv
05:07 thayne joined #salt
05:08 Furao joined #salt
05:11 XenophonF ok i'm outies - you all have a good day/night
05:12 mosen seeya round
05:12 favadi joined #salt
05:13 halfss_ joined #salt
05:15 halfss joined #salt
05:16 QuinnyPig joined #salt
05:22 QuinnyPig joined #salt
05:26 jnials joined #salt
05:26 geekmush1 joined #salt
05:27 jtratner joined #salt
05:28 Furao_ joined #salt
05:29 jtratner hi all - I’m trying to get salt to work with a minion and master on the same server. test.ping works, but state.highstate (and any state.sls <some_state>) fails (with either No top data or state not found errors). However salt-call --local with the same command works. Any ideas about what to look for?
05:31 mosen hi jtratner, im no expert
05:31 mosen but you might have to specify the master in /etc/salt/minion
05:31 jtratner right, but as I said, test.ping works
05:31 mosen oh sorry about that
05:32 mosen reading comprehension
05:33 mosen file roots is default, etc..??
05:34 jtratner they’re set to the same thing in /etc/salt/minion as well as /etc/salt/master
05:34 Furao joined #salt
05:35 mosen im just comparing mine, because i have that same setup
05:35 malinoff jtratner, do master and minion have the same versions? 0mq versions?
05:35 jtratner yeah
05:36 jtratner esp. in that I just installed the packages for both within 30s of each other
05:36 jtratner but thanks for the suggestion!
05:36 mosen heh, i dont have any directives in minion except for master:
05:36 malinoff jtratner, which version are you using?
05:36 jtratner 2014.1.10
05:37 TTimo joined #salt
05:39 malinoff jtratner, do you have any issues in logs? Have you tried to restart daemons with logging level debug?
05:39 jtratner good call
05:40 Furao joined #salt
05:42 jtratner blah
05:42 jtratner it just started working after I restarted the daemon
05:42 jtratner not thrilled by that
05:42 mosen hah
05:43 mosen is that the salt minion sleeping problem ive heard mentioned?
05:43 jtratner not sure - link?
05:44 mosen oh just in the channel, people were mentioning minions just not responding for some time
05:44 yano joined #salt
05:44 malinoff jtratner, "have you tried turning it off and on again?" :)
05:44 mosen but that may be related to the RC for the latest version
05:45 jtratner again though, it seems like the issue is just transfer of files not happening, (even though I ran saltutil.sync_all multiple times)
05:48 Furao joined #salt
05:50 oyvjel joined #salt
05:53 Furao joined #salt
05:56 Furao joined #salt
05:58 kingel joined #salt
05:58 kermit joined #salt
06:04 yomilk joined #salt
06:06 jnials joined #salt
06:13 colttt joined #salt
06:32 schimmy joined #salt
06:36 schimmy1 joined #salt
06:38 lcavassa joined #salt
06:38 masm joined #salt
06:43 jeffrey4l joined #salt
06:47 ndrei joined #salt
06:51 englishm joined #salt
06:51 Sweetshark joined #salt
06:52 picker joined #salt
06:54 sectionme joined #salt
06:58 slav0nic joined #salt
07:01 kingel joined #salt
07:02 oyvjel joined #salt
07:02 halfss joined #salt
07:04 jnials joined #salt
07:07 kingel joined #salt
07:14 kingel joined #salt
07:14 jhauser joined #salt
07:14 delinquentme joined #salt
07:16 hardwire joined #salt
07:17 ujjain joined #salt
07:20 alanpearce joined #salt
07:24 chiui joined #salt
07:28 felskrone1 joined #salt
07:30 istram joined #salt
07:35 ramishra joined #salt
07:37 calvinh joined #salt
07:37 Hell_Fire joined #salt
07:37 felskrone joined #salt
07:38 felskrone2 joined #salt
07:38 TTimo joined #salt
07:41 _mel_ joined #salt
07:44 darkelda joined #salt
07:44 darkelda joined #salt
07:46 martoss joined #salt
07:47 ramishra joined #salt
07:50 digin4 joined #salt
07:53 PI-Lloyd joined #salt
08:02 calvinh joined #salt
08:04 calvinh_ joined #salt
08:15 jdmf joined #salt
08:15 digin4 joined #salt
08:16 stolitablrrr_ joined #salt
08:17 Dinde joined #salt
08:18 kiorky_ joined #salt
08:20 che-arne joined #salt
08:20 jkaye joined #salt
08:20 kiorky joined #salt
08:21 ggoZ joined #salt
08:21 ramishra joined #salt
08:22 Daviey joined #salt
08:26 Schmidt Is there support for RHEL/CentOS 7, and from what version if yes? I tried googling but the official docs do not mention versions, just operating systems in general
08:28 ghartz Schmidt, salt uses python actually
08:29 Schmidt ghartz: I am aware, but as RHEL7 has moved to systemd there could be some issues with services (I assumed).
08:29 N-Mi joined #salt
08:29 N-Mi joined #salt
08:30 ghartz Schmidt, I tested with centos7 salt minion and didn't notice problem
08:30 digin4 joined #salt
08:31 Schmidt ghartz: Nice, I am running a centos7 with salt right now, what version is the master and minion on?
08:33 digin4_ joined #salt
08:33 ghartz Schmidt, I don't remember. No centos running at the moment
08:34 Schmidt ghartz: alright, thanks for the info :) I'll continue test over here
08:36 digin4_ joined #salt
08:36 giannello joined #salt
08:36 viq joined #salt
08:37 ramishra joined #salt
08:38 martoss joined #salt
08:39 tinuva Schmidt, I use salt with CentOS 7, both salt-master and salt-minion run 100% on CentOS 7
08:39 xcbt joined #salt
08:39 Schmidt tinuva: do you use the version from EPEL or from PyPI?
08:40 tinuva the only issue I had with systemd services, was with redis which is from epel, wasn't completely updated properly
08:40 tinuva i use the epel version
08:40 tinuva and the redis systemd issue got fixed recently
08:40 Schmidt we have no plans on using redis on these machines atm
08:41 ramishra joined #salt
08:41 Schmidt so many thanks tinuva, I just wanted to doublecheck with someone running the same things before i went for "full production" mode :)
08:43 albertid joined #salt
08:43 tinuva Schmidt, pleasure. I would still test it however, mostly unsure about other programs that are installed from epel.
08:43 Schmidt tinuva: This will mainly run a third party application and not much else (aspera enterprise server and console, for media transfers)
08:44 Schmidt worst case is that I will have to write my own systemd service files
08:44 Schmidt which is fine, but if salt does not support systemd or has issues with centos7 it sort of defeated the purpose =)
08:45 verwilst joined #salt
08:45 Outlander joined #salt
08:45 verwilst hello! If i run a highstate on the master for a host i get 6 fails, if i run the same highstate on the host itself, i get all successes. Any ideas?
08:46 briner joined #salt
08:47 emostar verwilst: any log output to share?
08:47 verwilst emostar: of the failed states?
08:48 verwilst well, functions
08:48 emostar of the highstate output
08:50 jtratner joined #salt
08:50 halfss joined #salt
08:51 verwilst http://pastebin.ca/2839636
08:53 albertid Hi, can I define my own jinja filters? I'd need that for a slightly more complex map.jinja
09:01 ramishra joined #salt
09:04 CeBe joined #salt
09:04 chiui joined #salt
09:07 nebuchadnezzar joined #salt
09:09 emostar verwilst: looks like a mysql permission problem. it must be accessing mysql differently somehow.
09:09 verwilst but it works for 3 other mysql hosts
09:09 verwilst which should be identical
09:09 emostar verwilst: something about it isn't the same then
09:10 nyx joined #salt
09:11 yomilk joined #salt
09:12 tomspur joined #salt
09:12 Sp00n do those other 3 hosts allow local root access without a password perhaps
09:12 verwilst emostar: restarting salt-minion fixed it...
09:13 verwilst i changed my minion config before, which salt-call picked up on every run, but the minion process only picked it up now i guess :)
09:13 verwilst i added mysql.default_file: '/root/.my.cnf' to it
09:13 verwilst which made sense for it not working :)
09:14 kingel joined #salt
09:16 martoss left #salt
09:18 bmcorser joined #salt
09:27 oyvjel joined #salt
09:27 darkelda joined #salt
09:27 darkelda joined #salt
09:35 picker joined #salt
09:37 ramishra joined #salt
09:40 TTimo joined #salt
09:43 albertid_ joined #salt
09:43 superted666 joined #salt
09:54 mr_chris joined #salt
09:54 calvinh joined #salt
09:56 calvinh_ joined #salt
09:58 giantlock joined #salt
10:03 dabb left #salt
10:03 martoss joined #salt
10:04 kingel joined #salt
10:05 darkelda_work joined #salt
10:06 Shish joined #salt
10:07 darkelda joined #salt
10:08 Shish RE: aggregate states -- the docs all imply that all package installs will be done up-front regardless of dependency ordering; what if one needs to install some things, run commands from those packages to prepare things, and only after that install some more?
10:12 halfss joined #salt
10:16 bhosmer joined #salt
10:20 ramishra joined #salt
10:21 dvestal joined #salt
10:29 martoss joined #salt
10:40 apergos joined #salt
10:47 calvinh joined #salt
10:48 ramishra joined #salt
10:50 bhosmer joined #salt
10:56 uber joined #salt
11:02 calvinh_ joined #salt
11:05 tmh1999 joined #salt
11:05 intellix joined #salt
11:07 XenophonF Shish: That's news to me.
11:09 XenophonF I establish dependencies between pkg.installed states and their antecedents/subsequents
11:09 XenophonF everything seems to run in the proper order
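For illustration, the kind of explicit ordering XenophonF describes, with placeholder package and command names; each step requires its antecedent instead of relying on up-front package aggregation:

bootstrap-pkg:
  pkg.installed: []

prepare-step:
  cmd.run:
    - name: /usr/local/bin/prepare-things
    - require:
      - pkg: bootstrap-pkg

followup-pkg:
  pkg.installed:
    - require:
      - cmd: prepare-step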
11:12 winmutt_ joined #salt
11:19 dvestal_ joined #salt
11:27 mr_chris joined #salt
11:34 kingel_ joined #salt
11:35 diegows joined #salt
11:38 calvinh joined #salt
11:41 TTimo joined #salt
11:42 calvinh_ joined #salt
11:46 ramishra joined #salt
11:48 alanpearce joined #salt
11:48 mr_chris joined #salt
11:50 CeBe1 joined #salt
12:11 acabrera joined #salt
12:11 AdamSewell joined #salt
12:12 blarghmatey joined #salt
12:13 blarghmatey Does anyone know if it's possible to execute the cloud runner from within the orchestrate runner? I'm trying to incorporate execution of a cloud map file in the overall orchestrate run.
12:13 nyx_ joined #salt
12:27 giannello joined #salt
12:29 jslatts joined #salt
12:31 jaimed joined #salt
12:36 ajolo_ joined #salt
12:37 mr_chris joined #salt
12:38 mrlesmithjr joined #salt
12:40 englishm joined #salt
12:48 felskrone joined #salt
12:51 englishm joined #salt
12:53 jrb28 joined #salt
12:55 dariusjs joined #salt
12:56 vejdmn joined #salt
13:00 brain5ide joined #salt
13:03 TTimo joined #salt
13:03 troyready joined #salt
13:05 giannello joined #salt
13:08 TheRealBill joined #salt
13:10 blarghmatey joined #salt
13:13 racooper joined #salt
13:13 englishm joined #salt
13:13 jkaye joined #salt
13:15 Shish XenophonF: this merging thing is a new feature in 2014.7, I am wondering if upgrading to that will break my 2014.1.10 setup :P
13:15 oz_akan joined #salt
13:15 XenophonF hah gotcha
13:15 vbabiy joined #salt
13:19 picker joined #salt
13:19 felskrone joined #salt
13:20 dvestal joined #salt
13:21 mpanetta joined #salt
13:22 ramishra joined #salt
13:22 mrlesmithjr When is 2014.7 scheduled for release btw?
13:26 mpanetta_ joined #salt
13:26 Shish "When it's done", I presume. I wonder what the bit after the dot means; I thought it was supposed to be month of release but 2014.1 came out on 2014/2/19 and 2014.7 is looking to come out some time in 2014/9...
13:27 manfred mrlesmithjr: when it is ready
13:28 manfred Shish: it was the month that it was tagged
13:28 manfred 2014.1 was tagged in january
13:28 mpanetta joined #salt
13:28 manfred then did release candidates
13:28 manfred 2014.7 was tagged in july, then release candidates now
13:30 gms joined #salt
13:34 cpowell joined #salt
13:37 mrlesmithjr manfred: Perfect
13:40 dude051 joined #salt
13:43 mr_chris joined #salt
13:49 miqui joined #salt
13:56 ericof joined #salt
13:57 mr_chris joined #salt
14:00 justyns joined #salt
14:01 justyns joined #salt
14:02 quickdry21 joined #salt
14:02 justyns joined #salt
14:03 kaptk2 joined #salt
14:03 ajolo joined #salt
14:06 natewalck left #salt
14:07 workingcats joined #salt
14:08 briner joined #salt
14:10 mechanicalduck joined #salt
14:10 nitti joined #salt
14:11 jkaye joined #salt
14:12 jalbretsen joined #salt
14:15 toastedpenguin joined #salt
14:16 thayne joined #salt
14:21 mr_chris joined #salt
14:22 dccc__ joined #salt
14:23 Daviey joined #salt
14:24 ramishra joined #salt
14:25 AdamSewell joined #salt
14:25 jnials joined #salt
14:25 peters-tx joined #salt
14:26 micah_chatt joined #salt
14:28 ajprog_laptop joined #salt
14:28 scoates joined #salt
14:30 jnials joined #salt
14:32 nitti joined #salt
14:36 dude051 joined #salt
14:39 englishm joined #salt
14:40 felskrone joined #salt
14:40 pdayton joined #salt
14:45 Ozack1 joined #salt
14:47 debian112 joined #salt
14:48 iggy anybody ever seen the following error from salt-cloud? InvalidRequestError: {u'domain': u'global', u'message': u"Required field 'value' not specified", u'reason': u'required'}
14:49 bmcorser joined #salt
14:49 jchen iggy: which provider?
14:49 iggy gce
14:50 jchen what are you passing into salt-cloud
14:53 SheetiS joined #salt
14:53 elfixit joined #salt
14:55 mschiff in top.sls I want to match by pillar: 'service:webapp:webapp-servers' with "match: pillar". And "webapp-servers" is a list in the pillar data... Is tzhis supposed to work in 2014.1?
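The shape of the top.sls entry mschiff is asking about (the 'webapp' state name is a placeholder); whether the match works against a value inside a pillar list in 2014.1 is exactly the open question:

base:
  'service:webapp:webapp-servers':
    - match: pillar
    - webapp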
14:55 iggy salt-cloud -p someprofile testvm
14:55 jergerber joined #salt
14:57 TyrfingMjolnir joined #salt
14:58 calvinh joined #salt
14:58 schristensen joined #salt
14:58 oyvjel joined #salt
14:58 iggy hmm, it didn't look like #14985, but I think it may have been after all... at least updating libcloud gives a different error now
14:59 mr_chris joined #salt
15:01 patarr joined #salt
15:01 patarr joined #salt
15:04 econnell joined #salt
15:04 penguin_dan joined #salt
15:04 t0rrant joined #salt
15:04 econnell joined #salt
15:05 dude051 joined #salt
15:08 higgs001 joined #salt
15:11 programmerq joined #salt
15:11 TyrfingMjolnir joined #salt
15:13 yomilk joined #salt
15:15 BrendanGilmore joined #salt
15:16 penguin_dan joined #salt
15:17 halfss joined #salt
15:17 TyrfingMjolnir joined #salt
15:25 jnials joined #salt
15:28 TyrfingMjolnir joined #salt
15:29 ndrei_ joined #salt
15:30 ndrei joined #salt
15:39 jnials joined #salt
15:41 rallytime joined #salt
15:41 Gareth morning
15:42 martoss joined #salt
15:43 tligda joined #salt
15:43 dvestal joined #salt
15:43 pfallenop joined #salt
15:44 Ahlee looks like in pkg.installed, refresh: True isn't honored if the repo is disabled and you're also using enablerepo: reponame
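Roughly the state Ahlee is describing, with placeholder package and repo names:

some-package:
  pkg.installed:
    - refresh: True
    - enablerepo: some-disabled-repo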
15:44 bhosmer joined #salt
15:45 sijis joined #salt
15:45 sijis if i had a jobid from the master, can i see what command was run with that?
15:45 Ahlee sijis: salt-run jobs.lookup_jid jobid
15:46 Ahlee runners require root, or elevated privileges in salt config
15:48 koalallama joined #salt
15:51 pdayton joined #salt
15:51 aparsons joined #salt
15:52 jkaye joined #salt
15:56 troyready joined #salt
15:58 thayne joined #salt
15:58 tkharju3 joined #salt
16:00 sijis Ahlee: does that rerun it or just showing me the output?
16:00 xcbt joined #salt
16:03 Ahlee sijis: Shows output
16:03 Ahlee the jobs runner just pulls values from the jobcache
16:04 sijis Ahlee: is ther a way to see the actual request sent? like 'salt host* cmd.run 'blah' ??
16:05 Ahlee it records the target afaik, not the hosts that matched the target
16:05 Ahlee though if you have an external jobcache defined you can pull those values from there
16:06 sijis ohh. 'salt-run jobs.list_jobs' shows what i'm looking for :)
16:06 Ahlee ah, nice
16:07 mmarcrr joined #salt
16:07 mmarcrr hello
16:07 mmarcrr i have a question about salt
16:07 mmarcrr i worked before with puppet and quattor
16:08 mmarcrr and it can restart a daemon or recopy a file if it is modified
16:08 mmarcrr without human intervention
16:08 mmarcrr does salt do the same thing?
16:08 Ahlee salt runs on schedules
16:09 iggy for the restart, you use a feature called watch
16:09 mmarcrr ok
16:09 Ahlee it does not by default listen to file system events or similar to react
16:09 mmarcrr ok
16:09 mmarcrr I'm new and trying to understand the concepts
16:09 mmarcrr thanks
16:09 Ahlee but, on the schedule yes, it can revert a file and start or restart a service
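A small example of the watch behaviour iggy and Ahlee mention, with placeholder names; the service restarts only when the managed config file changes:

myapp-conf:
  file.managed:
    - name: /etc/myapp/myapp.conf
    - source: salt://myapp/myapp.conf

myapp:
  service.running:
    - enable: True
    - watch:
      - file: myapp-conf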
16:10 KyleG joined #salt
16:10 KyleG joined #salt
16:10 iggy there's a recipe somewhere to have a commit on your git repo automatically push the changes to the salt master, I'd think you could do something similar to have it automatically run a highstate on pushes
16:10 mmarcrr the schedule is configured in the minion?
16:10 mmarcrr or in the master?
16:10 Ahlee I schedule on the master via a 3rd party utility.  There is scheduling minion side as well
16:10 Gareth mmarcrr: minion.
16:11 mmarcrr ok
16:11 mmarcrr cron or something
16:12 Ahlee The minion has an internal method of scheduling - not via cron.
16:12 orev joined #salt
16:13 mmarcrr ah ok
16:13 Gareth http://docs.saltstack.com/en/latest/topics/jobs/schedule.html
16:14 Gareth Note. some of those features are available in 2014.7 so if you're running 2014.1.x they won't be available.
16:14 mmarcrr Thanks for your help
16:14 mmarcrr I'll check the documentation and try to understand
16:15 mmarcrr thanks again
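A sketch of the minion-side scheduler UtahDave and Gareth point to, placed in /etc/salt/minion; the job name and interval are placeholders:

schedule:
  enforce_state:
    function: state.highstate
    minutes: 60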
16:15 apergos anyone here who knows salt innards pretty well? I've got a nice bug, maybe buried in the zmq library someplace, where a job being sent from the master to minions makes it to the events ipc (eventslistener sees it) but ...
16:15 apergos not actually to packets that go to the network... sporadically. That is, I run it, it fails, I run it, it succeeds, randomly.
16:15 dvestal_ joined #salt
16:16 apergos this is salt 0.17.1 and zmq 3.2.2
16:16 apergos and I can't easily drop everything and update salt, it's on a production cluster, I've tried duplicating it in a nice little sandboxed environment but no luck
16:18 dvestal joined #salt
16:22 Heartsbane joined #salt
16:24 ramishra joined #salt
16:26 orev hi, I've been using puppet for a bit and getting frustrated with it.  I like the looks of salt, and I'm wondering if anyone can point me in a direction of "best practices", cookbooks, etc... for accomplishing config management with salt?  a lot of what I've found so far covers basics, but I'm more concerned with overall management and more complex structures
16:27 mr_chris joined #salt
16:28 cwyse joined #salt
16:28 Gareth orev: good place would be to look at some existing formulas.
16:28 Gareth https://github.com/saltstack-formulas
16:30 aparsons joined #salt
16:31 martoss joined #salt
16:31 tmh1999 joined #salt
16:32 chrisjones joined #salt
16:33 Ahlee apergos: It's no consolation, but I believe I also experienced that issue.  I upgraded to 0.17.5+patches, but no longer remember exactly what those patches were
16:37 iggy orev: there is actually a "best practices" section in the docs as well
16:37 iggy generally speaking, salt is pretty flexible, so everybody's config is going to be slightly different
16:37 orev I'm thinking of a few different things, like for example if I want to install apache, then open firewall ports for it
16:38 apergos Ahlee: ok, that's some informtion anyways
16:38 apergos I'll see if any core dev recalls osmetehing when they show up
16:40 justyns joined #salt
16:40 justyns joined #salt
16:41 skyler joined #salt
16:43 iggy does anybody actually use the extend functionality in salt-cloud profiles? I keep getting errors no matter what I try
16:43 Ahlee Just went hunting for my patches, don't see where i stuck them
16:43 Ahlee i'll keep digging apergos
16:44 Ahlee i know it's on one of these systems
16:44 Ahlee and curse younger Ahlee for not committing his patches/changes to a local repo
16:49 aparsons_ joined #salt
16:49 apergos heh
16:50 Ahlee apergos: so https://gist.github.com/jalons/dd44b6eec4f36f96c10b is the patch I was thinking about, which adds a try:except block around sreq.send which reading back might not be the same issue you're facing
16:50 apergos I'll have a look in about 1 minute (finding a regression in my code unrelated to this)
16:51 blarghmatey joined #salt
16:51 forrest joined #salt
16:51 Ahlee That did address my random "This job never made it to this minion"
16:53 chrisjones joined #salt
16:54 smcquay joined #salt
16:55 kermit joined #salt
16:59 jslatts joined #salt
17:00 aparsons joined #salt
17:01 halfss joined #salt
17:03 jh0486 joined #salt
17:04 apergos hmm ok I might hot-patch that  on the master and see if it has an impact
17:04 apergos thanks for the lead
17:05 diegows joined #salt
17:05 jh0486 Hey guys, getting the error http://pastebin.com/3xg8Crft on a master-minion startup. I am running the same version of master/minion. I delete all of the keys and restarted the master and minion and same error persists.
17:05 sxar joined #salt
17:06 gothix joined #salt
17:07 jh0486 I also deleted cache
17:07 ze- jh0486: have you deleted the first seen master pki?
17:07 ze- i'd say the minion once connected to master A, and now tries to connect to master B.
17:07 ze- not same private key, so it fails.
17:08 eunuchsocket joined #salt
17:08 schimmy joined #salt
17:08 ze- jh0486: on the minion, remove the /etc/salt/pki/minion/minion_master.pub (or appropriate path)
17:09 jh0486 yeah, I tried that from the log output. I still receive the same error.
17:10 ze- jh0486: and you only have a single master configured ?
17:10 schimmy1 joined #salt
17:10 saru_ joined #salt
17:10 saru_ hi, guys
17:11 jh0486 This node was strapped to another master initially, now I am setting up multi-master including this node with the error.
17:11 ze- jh0486: all masters must have the SAME private key.
17:11 ze- if they do not, it can't work.
17:11 saru_ if I use file.managed state with a source and source_hash as HTTP URL, what kind of hash algos can I use?
17:12 sijis jh0486: are you setting up a syndic master?
17:12 ze- jh0486: http://docs.saltstack.com/en/latest/topics/tutorials/multimaster.html - 3.4.4.1. -- 2. Copy primary master key to redundant master
17:12 saru_ I have a state to pull down an installer over HTTP, I have a hash.md5 file with installer md5 checksum
17:13 saru_ but when I call it, I get on Windows an error 'list index out of range'
17:13 murrdoc joined #salt
17:13 saru_ any idea why it may be working on Linux platform but not on Windows?
17:14 rap424 joined #salt
17:14 saru_ the hash.md5 file simply generated by command 'md5sum LIST_OF_FILES > hash.md5'
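The general shape of the state saru_ seems to be describing, with made-up paths and URLs; source_hash can point at a checksum file such as the hash.md5 mentioned above, and md5 is among the hash types file.managed accepts:

pull-installer:
  file.managed:
    - name: 'C:\temp\installer.exe'
    - source: http://example.com/files/installer.exe
    - source_hash: http://example.com/files/hash.md5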
17:16 jnials joined #salt
17:20 jh0486 Thanks ze-, I copied the keys over and deleted the minion_master and all is well. Sorry, I should have RTFMed.
17:21 jkaye left #salt
17:25 UtahDave joined #salt
17:26 mr_chris joined #salt
17:26 jnials joined #salt
17:27 snuffeluffegus joined #salt
17:27 wendall911 joined #salt
17:31 shaggy_surfer joined #salt
17:32 rawzone joined #salt
17:32 sa2ajj joined #salt
17:32 chiui joined #salt
17:34 aparsons joined #salt
17:41 druonysus joined #salt
17:41 druonysus joined #salt
17:41 iggy I've got a traceback in salt-cloud, although the VM is actually being created (although it's not being salted after the fact)
17:41 n8n joined #salt
17:43 dvestal joined #salt
17:45 sa2ajj joined #salt
17:45 Ryan_Lane joined #salt
17:46 Setsuna666 joined #salt
17:46 sa2ajj joined #salt
17:48 sa2ajj joined #salt
17:50 sa2ajj joined #salt
17:51 rallytime joined #salt
17:52 felskrone joined #salt
17:56 aparsons joined #salt
17:57 chrisjones joined #salt
18:01 swa_work joined #salt
18:02 grove_ joined #salt
18:03 blarghmatey joined #salt
18:03 aparsons joined #salt
18:06 cmbeelby joined #salt
18:08 cmbeelby left #salt
18:08 kingel joined #salt
18:10 kermit joined #salt
18:11 rjc joined #salt
18:19 ajolo joined #salt
18:19 ekristen joined #salt
18:20 davet joined #salt
18:21 kingel joined #salt
18:21 Ahlee apergos: client side patch, unfortunately.  I'd push it to a few minions and see if they behave better.  I actually pushed that out first via file.managed via salt, and then kicked over all the minions during a cron job over night
18:24 kingel_ joined #salt
18:25 apergos oh dear, ok well that's unfortunate for testing but I'll see what can be done
18:27 jslatts joined #salt
18:40 iggy ssh 146.148.36.55
18:41 iggy heh, wrong window obviously
18:43 drawks heh, well after 3 days of trying I finally got a clean jenkins pass on my PR
18:43 forrest drawks, lol
18:44 drawks i gotta say, jenkins is much harder to understand than travis
18:44 drawks OTOH I'm getting quite a bit better at figuring out how the heck to handle merging 3 days worth of conflicting changes
18:45 forrest drawks, salt had to stop using Travis
18:45 forrest couldn't test all the different distros/architectures, and I think it was actually negatively impacting them for a bit
18:45 jalaziz joined #salt
18:46 drawks orly? seems like the test matrix on jenkins is smaller than what we have for graphite on travis
18:46 drawks although honestly test automation is not my strong suit at all
18:48 forrest drawks, at this point I can't remember since the switch was a while back
18:48 drawks but i could see how some of the platform testing could be more complicated for salt since it touches plenty of non python system stuff
18:49 bhosmer joined #salt
18:49 halfss joined #salt
18:49 drawks at any rate, I'm still waiting on a release from my legal department before i contribute anything non-trivial
18:49 drawks these mdadm bug fixes i did were purely to scratch my own itch
18:50 UtahDave thanks, drawks!
18:52 drawks np
18:52 scoates joined #salt
18:52 dvestal joined #salt
18:56 XenophonF hey anyone here familiar enough with salt/grains/core.py to explain why grains['virtual'] = 'VirtualPC' doesn't get set on FreeBSD?
18:56 XenophonF there's two places where it can happen, lines 479 and 686
18:56 schmutz joined #salt
18:57 XenophonF the first one looks like it gets set as a result of a search through the dmesg output
18:57 mr_chris joined #salt
18:59 XenophonF the second looks like it gets set as a result of some bios info checks
18:59 XenophonF the manufacturer and productname grains get set properly though, which is odd
19:00 XenophonF oh, line 686 only runs on Windows minions
19:03 XenophonF oh FreeBSD 10 at least, a better test might be to check the dev.acpi.0.%desc sysctl
19:04 XenophonF or grepping dmesg for VRTUAL MICROSFT
19:04 XenophonF (not misspelled)
19:07 swa joined #salt
19:10 felskrone joined #salt
19:12 Kax joined #salt
19:12 kingel joined #salt
19:15 martoss joined #salt
19:19 sectionme joined #salt
19:19 rypeck joined #salt
19:20 aparsons joined #salt
19:20 nyx_ are salt modules available when rendering salt pillars? it appears they are not
19:28 mr_chris joined #salt
19:31 aparsons joined #salt
19:35 aparsons joined #salt
19:36 sa2ajj joined #salt
19:37 KennethWilke joined #salt
19:42 aparsons joined #salt
19:49 ajolo_ joined #salt
19:49 aparsons joined #salt
19:51 spookah joined #salt
19:51 spookah joined #salt
19:52 unstable left #salt
19:53 XenophonF ah got it
19:53 XenophonF patch for salt/grains/core.py is on its way
19:56 murrdoc joined #salt
19:58 mechanicalduck joined #salt
20:00 shaggy_surfer joined #salt
20:00 ckao joined #salt
20:00 kingel joined #salt
20:02 debian112 Anyone using pillars in a multi-environment?
20:02 kingel joined #salt
20:02 XenophonF basepi or UtahDave, please note https://github.com/saltstack/salt/issues/15594 whenever you get a chance
20:04 debian112 Is there a way to view the pillars in a different environment? In our setup minions are nailed down to environments.
20:04 debian112 salt-call pillar.item web1_greentoads_net saltenv='greentoads_staging' The following keyword arguments are not valid: saltenv=greentoads_staging
20:06 vejdmn joined #salt
20:06 basepi XenophonF: great!  Would you be willing to submit a pull request since you already have the patch all ready?
20:07 XenophonF basepi: sorry i did the patch by hand
20:07 debian112 forrest: I got the pillars working; from the conversation on Friday.
20:07 forrest debian112, great, what was the issue?
20:07 basepi XenophonF: We love to see new contributors to the code base, so we'd love it if you submitted a pull request -- that said, if you don't have time we're happy to apply the patch ourselves.
20:08 XenophonF basepi: give me a little bit to clone the salt repo
20:08 XenophonF sorry about that
20:08 basepi XenophonF: no apology necessary
20:08 basepi just like to give people the opportunity to contribute wherever we can.
20:08 debian112 minions not finding the pillars because of being in two different environments, and the minion is hard coded to an environment.
20:08 XenophonF do I just clone the salt repo, or do i need to fork first?
20:09 aparsons joined #salt
20:09 debian112 for example minion is hard set to: greentoads, but pillars are created in greentoads_staging
20:09 basepi XenophonF: I'm going to switch to PM, one sec
20:09 XenophonF ok
20:09 debian112 once I merged greentoads_staging to greentoads it saw the pillars
20:11 forrest debian112, ahh that would totally make sense
20:11 debian112 forrest: Is there a way to run something like this: salt-call pillar.item web1_greentoads_net saltenv='greentoads_staging'?
20:12 debian112 I get this: The following keyword arguments are not valid: saltenv=greentoads_staging
20:12 debian112 which tells me that it is not an option
20:12 nyx joined #salt
20:14 lz-dylan joined #salt
20:16 xcbt joined #salt
20:16 bretep joined #salt
20:16 n0arch joined #salt
20:16 SaveTheRbtz joined #salt
20:17 forrest debian112, hmm, I can't remember, I don't see anything on the man page though. I feel like there's a good way to do it that isn't compound matching, but can't remember.
20:17 forrest sorry
20:18 utahcon_ joined #salt
20:18 mr_chris joined #salt
20:19 vlcn joined #salt
20:20 hypnosb joined #salt
20:20 kingel joined #salt
20:21 Psi-Jack joined #salt
20:23 Voziv joined #salt
20:23 aparsons joined #salt
20:25 TTimo joined #salt
20:26 jY joined #salt
20:28 tcotav debian112 -- run it without the saltenv
20:29 invsblduck hi, new salt user here.  does salt-minion not invoke the equivalent of salt-call on an interval? ie., it only receives commands--does not enforce state automatically
20:29 debian112 tcatav: it does not return the pillar info.
20:29 tcotav really?  mine does
20:29 debian112 hang on a sec, got a conference call. I will explain the setup
20:29 tcotav ah, though I'm running masterless.  not sure if that affects it.
20:30 aparsons joined #salt
20:30 alaskabear joined #salt
20:33 amart joined #salt
20:33 bhosmer joined #salt
20:34 dstokes joined #salt
20:34 Xiao joined #salt
20:35 tcotav fwiw -- tried on our prod master-minion and it doesn't appear that passing a saltenv is valid.  you get *pillar*
20:35 intellix joined #salt
20:37 druonysuse joined #salt
20:37 halfss joined #salt
20:37 UtahDave thanks, XenophonF!
20:38 XenophonF np!
20:38 XenophonF I need to take another shot at the MSI installer, too.
20:38 XenophonF am recreating my build env tonight
20:38 UtahDave cool!
20:39 n8n joined #salt
20:39 Gareth hm. best place to put secure data is pillar right?
20:39 micah_chatt joined #salt
20:39 UtahDave invsblduck: No, the salt-minion does not execute anything on an interval unless you tell it to. You can use Salt's internal scheduler to do that or even the system cron
20:39 UtahDave Gareth: yep
20:39 invsblduck UtahDave: i couldn't tell from the docs or salt-minion(1) necessarily, thanks.
20:39 Gareth UtahDave: it's data that only lives on the master right?
20:40 gfa joined #salt
20:40 forrest Gareth, yeah we have a 'secure' pillar that only lives on the master
20:40 housl joined #salt
20:41 UtahDave Gareth: Yeah, it lives on the master, and your top file determines which data gets sent encrypted to each minion separately.
20:42 Gareth okay cool
20:42 invsblduck UtahDave: oh, this is probably because the system is message based?  it only does something when the master publishes?
20:42 hardwire joined #salt
20:43 UtahDave invsblduck: Well, you can initiate commands from the Salt Master or even locally on the salt-minion
20:43 hardwire joined #salt
20:43 UtahDave but yeah, in the default set up, the minion listens to the master for commands to run.
20:46 aparsons joined #salt
20:46 invsblduck UtahDave: right, i see salt-call is pretty obvious, but was having trouble determining whether salt-minion was going to take action or not...
20:47 invsblduck UtahDave: i could have spun up a test minion and waited, waited, waited -- trial/error -- but thought it would be quicker to ask here :)
20:48 davet1 joined #salt
20:50 hardwire joined #salt
20:55 hardwire joined #salt
20:56 debian112 tcotav and forrest: Minions are set to environments. For example minion: environment: greentoads
20:57 debian112 I do all my testing here in the repo environment: greentoads_staging
20:57 UtahDave invsblduck: have you read through any of the walkthroughs on docs.saltstack.com ?
20:57 shaggy_surfer joined #salt
20:57 debian112 When it is working, I will merge to environment: greentoads repo
20:57 dbanck joined #salt
20:57 mr_chris joined #salt
20:57 debian112 and then salt will auto deploy
20:57 debian112 stuff
20:58 debian112 so I create my pillars info here first: environment: greentoads_staging
20:58 QuinnyPig UtahDave: will salt.client.LocalClient() block on pkg operations until it completes, or will it background the job and roll on?
20:59 gngsk joined #salt
20:59 debian112 then I try to run salt-call pillar.item web1_greentoads_net
20:59 invsblduck UtahDave: yep... maybe i'm just missing the one sentence that reads, "salt-minion daemon won't take any action by default and salt-call must be used if you want to "pull" state to the minion."
20:59 debian112 I get the greentoads environment which is expected
20:59 pjs joined #salt
21:00 debian112 but I want to test pillars in the greentoads_staging like I do with state.sls files
21:00 debian112 salt-call pillar.item web1_greentoads_net saltenv='greentoads_staging'   Errors out: The following keyword arguments are not valid: saltenv=greentoads_staging
21:01 aparsons joined #salt
21:01 dvestal joined #salt
21:01 hypnosb joined #salt
21:02 murrdoc joined #salt
21:02 TTimo joined #salt
21:02 UtahDave QuinnyPig: It depends on the actual command used. there's cmd_async, cmd, etc
21:03 UtahDave QuinnyPig: Ever since you changed your nick I feel so much more affectionate to you. QuinnyPig is such a cute name!
21:03 forrest UtahDave, I keep thinking of one of those hawaiian BBQs
21:03 QuinnyPig QuinnyPig: It is!
21:03 QuinnyPig Er, UtahDave.
21:04 QuinnyPig UtahDave: Yeah, the command itself is cmd, so it'll wait for return, correct?
21:04 Gareth forrest: don't eat QuinnyPig...that would very bad for the community.
21:04 UtahDave QuinnyPig: Yeah, it will block and wait until the entire command has finished and returned results
21:04 Gareth s/would/would be/
21:04 forrest Gareth, heh
21:04 QuinnyPig UtahDave: Amazing. Thank you.
21:04 UtahDave anytime, my friend!
21:05 kingel joined #salt
21:05 bhosmer joined #salt
21:06 KyleG joined #salt
21:06 KyleG joined #salt
21:06 aparsons joined #salt
21:07 grove_ joined #salt
21:12 dvestal_ joined #salt
21:14 sectionme joined #salt
21:15 aparsons joined #salt
21:15 kballou joined #salt
21:16 kelseelynn joined #salt
21:16 alexthegraham joined #salt
21:17 bill_h joined #salt
21:18 alexthegraham I don't see an open issue for it, but I keep seeing SSL cert errors when trying to install from bootstrap.saltstack.org (multiple OS's). Should I create an issue?
21:18 alexthegraham "ERROR: certificate common name `www.github.com' doesn't match requested host name `raw.github.com'."
21:19 hardwire joined #salt
21:19 bhosmer joined #salt
21:20 mrlesmithjr joined #salt
21:21 mrlesmithjr joined #salt
21:22 jacksontj joined #salt
21:22 cwyse_ joined #salt
21:23 UtahDave alexthegraham: how are you running the command?
21:23 alexthegraham wget -O - http://bootstrap.saltstack.org | sh
21:23 dvestal_ joined #salt
21:23 wangofett joined #salt
21:24 murrdoc joined #salt
21:24 hardwire joined #salt
21:24 UtahDave alexthegraham: Yeah, opening an issue would be very helpful.  I'll point some people here to it.
21:24 wangofett is there a way to run something like `tail <file>` using Salt, aside from `cmd.run 'tail <file>'`?
21:25 wangofett I guess the easiest way would be to write my own module to do that...
21:25 wangofett if nothing exists already
21:26 alexthegraham @UtahDave Done. https://github.com/saltstack/salt-bootstrap/issues/462  - Thanks.
21:26 UtahDave wangofett:  I don't think there's a 'tail' function in the file execution module
21:26 UtahDave thank you, alexthegraham!
21:27 kermit joined #salt
21:28 hardwire joined #salt
21:29 molaAMINE_ joined #salt
21:30 ggoZ joined #salt
21:30 nitti joined #salt
21:32 supersheep joined #salt
21:34 BrendanGilmore joined #salt
21:34 dvestal_ joined #salt
21:34 molaAMINE_ joined #salt
21:36 QuinnyPig UtahDave: Oh, is the return code significant from salt.client.LocalClient.cmd with respect to package operations?
21:37 QuinnyPig UtahDave: I'm trying to catch the edge case where the minion doesn't respond / the package installation fails.
21:37 UtahDave QuinnyPig: no, it basically returns a failure code if there's a stacktrace
21:37 QuinnyPig QuinnyPig: So "package operation failed, another version is installed" isn't really catchable via traditional means.
21:37 QuinnyPig Er, UtahDave ^
21:38 QuinnyPig I gotta stop doing that.
21:38 QuinnyPig Talking to oneself is an indicator of mental illness.
21:38 UtahDave otherwise, you can use cmd_runall or cmd_run_all which will return a dictionary which includes the return code from the command
21:39 debian112 ok back! forrest any idea?
21:39 UtahDave QuinnyPig: wait, I may be wrong on that command. I'm looking up docs right now
21:39 hardwire joined #salt
21:39 wangofett https://gist.github.com/waynew/f6285ea48fc4f15f0a47 <-- an ultra basic tail
21:39 kingel joined #salt
21:39 forrest debian112, no sorry, working on some stuff currently
21:40 saltn3wb joined #salt
21:40 wangofett obviously it's going to read the entire file into memory... but that's what 30 seconds of effort gets you ;)
21:40 nyx joined #salt
21:40 Ryan_Lane I <3 whoever made this mdadm state
21:40 Ryan_Lane I hate mdadm so much
21:40 Ryan_Lane and this looks really simple
21:41 saltn3wb Hello!  I have gotten external auth working via pam module (winbindd) to authenticate my LDAP account.  However, I can't get it working using an LDAP group that I am a part of.  I am using 2014.1.10.4.  Is this supported?
21:41 debian112 Ok thanks forrest, I been bouncing back and forth, so checking if someone reponded
21:41 sctsang joined #salt
21:41 debian112 responded
21:41 forrest for sure
21:42 UtahDave wangofett: :) nice
21:42 UtahDave Ryan_Lane: yep!
21:43 hardwire joined #salt
21:43 UtahDave saltn3wb: I don't think the external auth group support made it into the 2014.1 branch.  I believe 2014.7 is where it exists.
21:46 al joined #salt
21:47 hardwire joined #salt
21:47 djaime joined #salt
21:48 druonysuse joined #salt
21:48 druonysuse joined #salt
21:53 saltn3wb thanks @UtahDave
21:54 UtahDave you're welcome!  There's an RC out for 2014.7, if you want to give that a try
21:54 englishm about to start a discussion of Salt at our local DevOps group...
21:55 murrdoc UtahDave:  do you guys have packages too ? for the RC's
21:55 UtahDave englishm: nice!
21:56 UtahDave murrdoc: I'm not sure if our packagers have built RC packages or not
21:56 wangofett zomg adfly is the worst. tips4admin should be shot for using such a terrible POS
21:57 loque joined #salt
22:01 loque I am seeing some strange behaviour when moving from 2014.1.4 to 2014.1.10 regarding rendering grain data that we then use to extract a value in jinja
22:02 loque we have a grain that we set on each machine of the form environment: - client_env: test,  - type prd
22:03 loque when then in a state file salt['grain.get']('environment')
22:04 loque sorry when in a state file we do salt['grain.get']('environment')
22:05 digin4 joined #salt
22:06 loque we then use this in jinja as follows {% set type = env_details.type[0] %}
22:06 pjs joined #salt
22:07 loque which worked in salt 2014.1.4
22:07 loque this has now changed and no longer works
22:07 saltn3wb Is there an easy way to test out the helium RC on CentOS?  How can I install from the cloned git here: https://github.com/saltstack/salt/tree/2014.7
22:08 manfred saltn3wb: same way you install from git using pip
22:08 manfred saltn3wb: pip install -e git://github.com/saltstack/salt.git
22:08 manfred or use salt-bootstrap
22:08 saltn3wb thanks
22:08 loque instead {% set type = env_details.type %} now gives the value
22:08 loque that we are interested in
22:08 manfred saltn3wb: http://docs.saltstack.com/en/latest/topics/tutorials/salt_bootstrap.html
22:09 UtahDave saltn3wb: checkout the branch you want first before pip installing
22:09 manfred salt-bootstrap is the way to do it imo
22:09 alexthegraham Anyone attempting to manage ntp or snmp on OpenSUSE clients via Salt?
22:10 loque I am assuming the change is intended
22:10 loque as a convenience but this breaks a lot of state for us
22:11 hardwire joined #salt
22:11 loque we had to use the index access as salt would return a list and we had to extract the value from the list
22:11 UtahDave loque: Well, there shouldn't be any functional changes like that within the same branch
22:12 manfred loque: what have you upgraded to?
22:12 loque hmm best check wht command I ran when I did a salt bootstrap
22:12 manfred oh, .10
22:12 loque on a dev test machine
22:13 loque from 2014.1.4 to 2014.1.10
22:13 manfred yeah, salt-call --versions-report, otherwise, open a bug report for that
22:13 miqui joined #salt
22:13 hardwire joined #salt
22:14 loque ok will do so tomorrow as we usually test all our states before we upgrade to a later salt release just in case something breaks
22:15 spookah joined #salt
22:15 loque the version report for both systems one system running 2014.1.4 and the test system running 2014.1.10
22:17 mr_chris joined #salt
22:18 hardwire joined #salt
22:19 Setsuna666 joined #salt
22:21 saltn3wb unfortunately, still not authenticating groups with the 2014.7 release
22:21 druonysus joined #salt
22:21 druonysus joined #salt
22:23 hardwire joined #salt
22:26 halfss joined #salt
22:26 snuffeluffegus joined #salt
22:28 digin4 joined #salt
22:30 aquinas joined #salt
22:35 ajw0100 joined #salt
22:36 hardwire joined #salt
22:39 hardwire joined #salt
22:41 KyleG1 joined #salt
22:42 pdayton joined #salt
22:43 jnials joined #salt
22:45 Outlander joined #salt
22:51 rap424 joined #salt
22:52 yomilk joined #salt
22:53 jeddi joined #salt
22:56 blarghmatey joined #salt
22:56 KyleG joined #salt
22:56 KyleG joined #salt
22:57 Outlander joined #salt
23:04 lionel joined #salt
23:05 dvestal_ joined #salt
23:05 nbrunson joined #salt
23:06 ajprog_laptop joined #salt
23:07 englishm The main question we all have: Why is it called Salt?
23:07 manfred for the puns
23:08 manfred it is called a stack because it is a stack of tools, like openstack etc
23:08 bhosmer joined #salt
23:08 manfred as for why... you will have to ask tom... there isn't a real reason afaik
23:09 mosen joined #salt
23:10 n8n joined #salt
23:10 forrest I can tell you why
23:10 forrest manfred, didn't you see the key note video?
23:10 shaggy_surfer joined #salt
23:10 forrest englishm, it's called that because Tom was working on Salt, watching Lord of the rings, and there is that one part where Gimli goes 'Salted pork!?'
23:10 nitti joined #salt
23:10 Ryan_Lane LOTR, funny enough
23:10 Ryan_Lane :D
23:10 forrest and thus the name was born, because a sprinkle of salt makes everything better
23:11 mosen saltcon?
23:11 Gareth englishm: everything is better with a little salt :)
23:11 manfred i never sit down long enough
23:11 forrest mosen, from saltconf yeah
23:11 manfred forrest: nice
23:11 mosen forrest: cool
23:11 manfred forrest: i have a long list of key notes I need to watch
23:11 mosen too bad i can't fly over. I'll hold a mini salt conf by myself
23:12 manfred including all of the saltconf ones... even though they are all probably outdated at this point
23:12 forrest manfred, I don't watch many keynotes, as much as I watch the conference videos
23:12 manfred yeah
23:12 forrest manfred, some of them are still relevant
23:12 manfred ok
23:12 hardwire joined #salt
23:13 manfred I am going to go watch handegg! http://i.imgur.com/XeeYMfS.gif o/
23:13 CatPlusPlus joined #salt
23:13 forrest manfred, later
23:15 ajolo joined #salt
23:15 Hell_Fire_ joined #salt
23:16 hardwire joined #salt
23:20 aquinas joined #salt
23:20 aquinas_ joined #salt
23:21 aparsons_ joined #salt
23:25 Setsuna666 joined #salt
23:27 mr_chris joined #salt
23:27 hardwire joined #salt
23:29 scoates_ joined #salt
23:30 dccc joined #salt
23:39 hardwire joined #salt
23:39 alexthegraham Nobody hates themselves enough to use OpenSUSE, eh?
23:40 mr_chris joined #salt
23:40 forrest lol
23:40 nbrunson I have a grain called "roles" that is an array of strings, and on most minions it's fine and if I add a role to the array it updates fine, but on one machine in particular, I can't change the array at all. Have any of you seen this before? I'm running 2014.1.5
23:43 murrdoc ssh minion 'stop salt-minion; rm -rvf /salt/cache/dir/;start salt-minion'
23:43 murrdoc ssh master 'salt "minion*" state.highstate'
23:43 aparsons joined #salt
23:43 murrdoc is the generalized version of what you could try
23:44 hardwire joined #salt
23:44 debian112 This is for all the old puppet users here. what does saltstack have that is equivalent to external_data in puppet? The puppet guys at work are asking.
23:45 mosen murrdoc: begs the question, can salt-ssh restart a minion as a part of a process that would normally require intervention?
23:47 murrdoc mosen:  yes it can
23:47 murrdoc debian112:  if u transitioning u can use hiera
23:48 murrdoc mosen:  salt 'minion*' saltutil.sync_grains
23:48 forrest debian112, Ryan_Lane just moved from puppet
23:48 murrdoc i think
23:48 Ryan_Lane I wouldn't use hiera
23:48 Ryan_Lane if you *really* need hierarchy in your config data, you should use reclass
23:49 debian112 Right now I am building pillars for each server
23:49 hardwire joined #salt
23:49 forrest yeah we're using reclass where I work
23:49 forrest and it's okay
23:50 murrdoc i only recommend hiera if transitioning people from puppet is important
23:50 murrdoc or facter
23:50 murrdoc you can use that too
23:50 murrdoc i do like facter
23:50 murrdoc hiera is bogus
23:50 murrdoc imho
23:50 Ryan_Lane I'd recommend dropping the idea of hierarchy
23:52 Ryan_Lane constraints are good :)
23:52 Ryan_Lane it leads to simplification
23:53 hardwire joined #salt
23:54 murrdoc left #salt
23:54 murrdoc joined #salt
23:54 murrdoc dry is nice too
23:55 murrdoc i mean, in a non-argumentative way :)
23:56 iggy should I be able to have a pillar in a file not named the same as the pillar (i.e. reposerver.sls -> nginx:) ?
23:57 debian112 I am going to look into the reclass
23:58 debian112 Any gotchas or is it straightforward?
