
IRC log for #salt, 2014-01-26


All times shown according to UTC.

Time Nick Message
00:01 giantlock joined #salt
00:02 timoguin joined #salt
00:05 kalloc joined #salt
00:14 n8n joined #salt
00:21 ConceitedCode joined #salt
00:23 oz_akan_ joined #salt
00:25 brianhicks joined #salt
00:29 Gareth joined #salt
00:31 cachedout joined #salt
00:32 oz_akan_ joined #salt
00:42 fragamus joined #salt
00:45 Gareth joined #salt
00:51 johtso joined #salt
01:08 kalloc joined #salt
01:13 AdamSewell joined #salt
01:14 ChoHag module_dirs et al are documented in the reference guide and the example minion conf as a list of *extra* directories to search.
01:15 ChoHag What list of directories is appended to?
01:19 oz_akan_ joined #salt
01:29 gfa joined #salt
01:31 mgw joined #salt
01:33 gfa hello, i want to be able to automate key management. i need something smarter than autoaccept; i want to run a script to do the job, but i can't find where the metadata associated with the key is (minion ip at least)
01:38 mgw gfa: are your minions primarily bare metal or VMs of some sort?
01:38 sroegner joined #salt
01:38 MTecknology I use the reactor system for that
01:39 gfa mgw: VMs, i have another master for baremetal provisioning
01:41 gfa if i could at least change the minion_id when accepting the key, that would suffice
01:45 Eugene My usual answer(and personal solution) is to change the minion's hostname(and thus, the id) as part of a first-boot-after-clone script
01:46 MTecknology I do it before the machine boots with dns
01:47 yomilk joined #salt
01:48 mgw gfa: are you using the virt runner to provision your VMs?
01:52 n8n joined #salt
01:57 jimallman joined #salt
01:59 gfa i don't want to join the machine and then rename it. i could, but then i'd need 2 different masters, and i don't trust their operators
02:00 gfa no, i provision the VMs using an external tool, a mix of heat and custom scripts
02:02 gfa my fear is, i have multiple networks, firewalls, security groups and naming conventions. if somebody starts a VM named mysql-something on the apache network, i want to detect that and provision the machine as apache even if its name is mysql-something
02:03 gfa or even call the external tool and delete it
02:07 gfa think about this: mysql-* machines get the latest dump of the db in order to be added to the replication chain. anybody outside the dba team can create a VM with any name, and they then get the dump even if they should not see it
02:08 kalloc joined #salt
02:08 gfa if i can get the ip of the minion i can ask if that machine was launched from somebody from the dba team, then allow its key, or deny it if somebody else did it
02:09 gfa s/from somebody/by somebody/
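A minimal sketch of the reactor approach MTecknology mentions above: the master maps the salt/auth event to a reactor SLS, which can accept a pending key based on the event data. The event carries the minion id but (in this era of Salt) not the source ip, which is exactly gfa's complaint; the mysql- prefix test below is illustrative, not his actual policy.

    # /etc/salt/master -- wire auth events to a reactor (a sketch)
    reactor:
      - 'salt/auth':
        - /srv/reactor/auth.sls

    {# /srv/reactor/auth.sls -- accept pending keys that pass a naming check #}
    {% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('mysql-') %}
    accept_key:
      wheel.key.accept:
        - match: {{ data['id'] }}
    {% endif %}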
02:34 Mua joined #salt
02:48 elfixit1 joined #salt
02:57 joshe joined #salt
02:57 cachedout joined #salt
03:02 jeter_ salt-run virt.query causes "TypeError: string indices must be integers, not str" error
03:02 jeter_ salt-run virt.hyper_info works though
03:03 mgw joined #salt
03:04 cedwards jeter_: I just used salt-run virt.query without issue. What version are you on?
03:05 jeter_ salt-run 0.17.4
03:06 jeter_ CentOS - EPEL repo
03:06 cedwards hmm. that's what i'm running too.
03:08 kalloc joined #salt
03:18 jeter_ also, trying to create a new VM
03:18 jeter_ salt-run -l debug virt.init centos1 1 512 salt://virt/images/CentOS-6.5-x86_64-minimal.iso
03:18 jeter_ the last 2 lines of output are
03:19 jeter_ Creating VM centos1 on hypervisor dev / VM centos1 initialized on hypervisor dev
03:19 Mua joined #salt
03:19 jeter_ but nothing is created and logging doesn't say anything
03:19 cedwards can you use an .iso like that? I thought it required a .qcow2/.raw image
03:19 jeter_ i tried that too
03:20 cedwards i wonder if you're missing a required package or something
03:20 cedwards are you following this? http://docs.saltstack.com/topics/tutorials/cloud_controller.html
03:21 jeter_ i cut and paste the sls from that page
03:22 cedwards hmm. i've been working from that page today and haven't really had issue
03:22 cedwards i'm having an issue with preseeding the image, but the provisioning works otherwise
03:25 jeter_ yah, maybe im missing a package but i dont know what it is, i have libguestfs and libvirt-python
03:26 cedwards mine didn't fully work until i applied that sls, and the networking
03:27 jeter_ i had to turn off tls on the libvirtd daemon b/c it's missing the ca pem file
03:27 sroegner joined #salt
03:28 jeter_ i did start w/ minimal centos (very bare bones in terms of what's started), let me double check the bridge, it's likely that's in my salt config
03:35 jeremyBass left #salt
03:46 UtahDave joined #salt
03:48 cachedout joined #salt
03:50 UtahDave PyPI is now managed with Salt!  :)   https://mail.python.org/pipermail/distutils-sig/2014-January/023522.html
03:51 carmony nice
03:51 dxcxlg joined #salt
03:51 carmony UtahDave: you know that little project I mentioned? I'm about halfway to having a basic version of it working
03:51 MTecknology +5
03:52 jeter_ on another side note, bookit.com has switched from puppet to salt for config management
03:52 forrest UtahDave, that is very cool
03:52 UtahDave carmony: nice!
03:52 UtahDave yep!
03:52 carmony it also made me think of a funny shirt idea for Salt
03:52 UtahDave nice, jeter_!
03:52 forrest Man they were using chef before
03:52 forrest those poor poor bastards
03:52 carmony Let Salt power all of your *aaSes.
03:53 MTecknology I got an email today from lanyard and I almost cried
03:53 cast jeter_: happen to know what sort of fleet size?
03:53 UtahDave lol
03:54 jeter_ 300 physical
03:54 MTecknology are they mostly identical
03:54 MTecknology ?*
03:54 jeter_ just getting into the cloud
03:55 MTecknology the what?
03:56 jeter_ MTecknology: me?
03:57 MTecknology you?
03:57 MTecknology are you a cloud?
03:57 jeter_ speaking for bookit.com
03:57 jeter_ no, im not a cloud
03:58 forrest *in a non-official capacity, all opinions, mentions, and remarks are those of jeter_, and not of bookit.com*
03:58 forrest right jeter_ :P
03:58 forrest unless you are a cloud
03:58 jeter_ lol
04:01 cast i live in a puppet dominated land, though fortunately mcollective is awkward to set up and no one knows [or wants to know] ruby, so salt is invading - as a remote execution engine at least :)
04:01 justlooks joined #salt
04:01 forrest wooo
04:01 forrest I need to find someone who will put salt into foreman
04:02 MTecknology I was first hooked on salt for remote execution, now I don't use it for that at all
04:01 justlooks hi, i have one pillar which contains the variable definitions for all three projects. how can i separate them into three pillars?
04:08 kalloc joined #salt
04:12 jeter_ really weird, virt.full_info says it's unavailable when running virt.query, but virt.hyper_info returns perfectly
04:17 MTecknology forrest: did you say you're going to saltconf?
04:18 forrest MTecknology, yea I am giving a talk
04:18 MTecknology forrest: will you do me a favor?
04:18 MTecknology take a picture of me and put it on a chair?
04:18 forrest rofl
04:19 forrest put a little header 'in memory of'?
04:19 cast i notice i can use hiera as a datastore, can one match minions based on facts? [i notice facter was used early on in salt]
04:19 MTecknology any heading you want
04:19 MTecknology It can say "don't miss you any" if you want
04:21 forrest MTecknology, I just checked, my printer doesn't have any ink, it appears. The joys of never using it I guess.
04:22 MTecknology oh :(
04:28 Shish_ joined #salt
04:35 ajw0100 joined #salt
04:37 ubuntu__ joined #salt
04:37 UtahDave joined #salt
04:38 cast interesting, 'Live upgrading many thousands of servers' talk on youtube by a google engineer: 'Any push based method is doomed. If you're triggering updates by push, you're doing it wrong' :)
04:38 forrest heh
04:39 cast he pushed out updates in an interesting fashion, rather than using package management to push out updates he uses rsync
04:39 forrest ughhhhhh
04:40 forrest rsync is so unreliable
04:40 ajw0100 joined #salt
04:42 strgcloud joined #salt
04:42 MTecknol1gy joined #salt
04:42 MTecknology rsync is unreliable?...
04:43 chitown_ joined #salt
04:43 rockey_ joined #salt
04:43 forrest Yea, relies on your network being dependable, etc.
04:43 forrest 'Why did this rsync job we have not run?' "well there was a minor blip this morning at 2 AM when your job runs so it failed, and it never tries again"
04:43 forrest granted that is poor usage of rsync
04:44 forrest but people seem to think they can just use rsync, and not design a real solution, which annoys me
04:44 cast i'm not convinced push based methods are doomed on a large scale,
04:44 forrest cast, ehh, I think it depends on whether you're trying to push from a single location or what.
04:45 tzero joined #salt
04:45 MTecknol1gy forrest: rsync ... ... || alarm 'rsync failed'
04:46 forrest Yea, surprisingly enough, they didn't consider that.
04:46 cast they did have to write their own scalable version of rsync, to achieve their mass fleet updates via rsync model
04:46 forrest cast, when was this?
04:46 forrest as in the year
04:46 MTecknol1gy rsync is just fine, people who don't understand it are not
04:47 bashcode` joined #salt
04:47 cast it's not immediately clear, circa 2013
04:47 forrest and that's the problem, people just assume rsync will solve the problem, so they put in an rsync job, and then forget it, and complain when it doesn't work.
04:47 cast events.linuxfoundation.org/sites/events/files/lcjp13_merlin.pdf
04:48 nineteen1ightd joined #salt
04:48 forrest weird, I was expecting it to be older for some reason
04:49 MTecknol1gy I have a system that goes out and does an rsync job pulling data from 180 servers every night... rsync "server.$facility.domain.tld:/..." /.../ || add_failed_facility "$facility"
04:53 Corey MTecknol1gy: Nice alt/ :-p
04:53 Corey I'm with forrest on this.
04:53 cast what are you rsyncing?
04:54 Corey "Fire and forget" makes you stupid.
04:54 Corey "What if the network is down when that cron job fires" is a real concern.
04:54 cast it is, but something you can cope with
04:54 forrest and it might not even be a network issue, could be as simple as the box it's supposed to hit is under high load, and the connection fails there.
04:54 forrest just bleh
04:57 jalbretsen joined #salt
04:57 cast we could brainstorm alternatives for MTecknol1gy [if he tells us what he's doing]
04:58 MTecknol1gy Corey: taking 180 remote servers and doing a simple backup to a backup server before doing a backup from that server to a tape library
04:59 MTecknol1gy cast: **
04:59 UtahDave joined #salt
04:59 cast how is the rsync job structured? does it pull in batches? or serially?
04:59 MTecknol1gy rsync returns a non-zero exit status if it failed to complete properly; that's enough for me, since it means I can alarm on non-zero
05:00 MTecknol1gy pulls batches (sorta)
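A sketch of that alarm-on-non-zero pattern as a Salt-managed cron entry; the rsync paths, the facility pillar key, and the notify-failure helper are all hypothetical:

    {# backup-pull.sls -- nightly pull that alarms when rsync exits non-zero #}
    {% set facility = salt['pillar.get']('facility', 'unknown') %}
    backup_pull:
      cron.present:
        - name: rsync -a server.{{ facility }}.domain.tld:/srv/data/ /backup/{{ facility }}/ || notify-failure {{ facility }}
        - user: root
        - minute: 0
        - hour: 2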
05:00 Corey MTecknol1gy: There's a downside to that approach as well.
05:00 Corey rsync warning: some files vanished before they could be transferred (code 24) at main.c(1039) [sender=3.0.6]
05:00 Corey I'm currently besieged with cron jobs stating that in an environment I just inherited.
05:00 Corey Pop quiz; how am I fixing this? :-)
05:00 MTecknol1gy that's true... but that shouldn't actually happen in my situation
05:01 Corey MTecknol1gy: So your source is static?
05:01 cast or append only*
05:01 MTecknol1gy it's static when people aren't on their computers and at about 02:00, nobody should be on, not even in hawaii
05:01 cast do you snapshot the source?
05:02 MTecknol1gy nope, just a straight rsync from source
05:02 cast i notice some people's backups of rw data aren't point-in-time... but rather span the period of time it took to run the backup,
05:02 Corey MTecknol1gy: Someone will disappoint you and cause this error sooner or later. :-)
05:02 cast rarely anyone complains about it though
05:03 cast Corey: so, in MTecknol1gy's situation, "What would Corey do?"
05:04 MTecknol1gy I pull data from 180 remote facilities to a central server that does backups and allows self-service file restorations; ranging in time zones -3 to -7
05:04 MTecknol1gy I think that's the range anyway...
05:04 cast how does self-service file restoration work?
05:06 tzero joined #salt
05:07 cast what times may a user restore a file from? yesterday? the last week?
05:07 MTecknol1gy they authenticate to the web app, the web app passes that to AD, gets their facility id and home directory info, and finds what they have access to. it then presents them with a list of the files they have access to, whether the file was missing in the last backup, and the date it was backed up. they select the files and the dates from the list and the server pushes the files back to them in a
05:07 MTecknol1gy designated location
05:07 MTecknol1gy it sends them an email when the restore is finished
05:07 MTecknol1gy 90 days
05:08 cast how do you store those 90 days of directory tree state?
05:08 kalloc joined #salt
05:08 MTecknol1gy please... don't make me explain that one
05:08 MTecknol1gy it's not in the directory structure
05:09 cast doing that in a space efficient but still reliable and fast manner is interesting
05:09 Sypher joined #salt
05:09 MTecknol1gy short answer: TSM
05:09 cast t as in tivoli?
05:11 MTecknol1gy yup
05:11 MTecknol1gy it stores the non-current versions of their files
05:12 scuwolf joined #salt
05:13 Corey cast: Ideally? lvm snapshot, rsync from that as a source, delete afterwards.
05:13 BenCoinanke joined #salt
05:14 Corey MTecknol1gy: Tivoli? Really?
05:14 Corey Ow. :-)
05:14 MTecknol1gy ya... :(
05:14 cast Corey: how would you call rsync?
05:15 Corey cast: I'd probably wrap it in a script that has better failure modes than the "oh, it broke? send an email" that cron has.
05:15 sroegner joined #salt
05:18 MTecknol1gy doing an lvm snapshot and mounting it and pulling from that and then unmounting and removing it... sounds like too many potential breaking points
05:19 MTecknol1gy hell... I don't care if a facility fails its backup for three days... until day four, I won't bat an eye
05:20 Corey MTecknol1gy: The only realistic problem I see is if the snapshot fills before unmounting.
05:20 Corey MTecknol1gy: Then "it's been four days since Location SlapHappy has been backed up" should be your alerting strategy. :)
05:22 cast what do you guys think of triggering the rsync wrapper script with salt?
05:23 fllr joined #salt
05:23 MTecknol1gy cast: I'm gonna go with a 'no' on that one
05:24 cast why is that?
05:26 Corey Actually, that could be done reasonably sanely.
05:26 Corey It wouldn't be my first choice, given that it's introducing an external dependency that I'd prefer not be there, but...
05:27 MTecknol1gy wow... fucking nerds
05:28 MTecknol1gy arguing about whether or not rsync is good
05:28 cast supposing the rsync dst and salt master were close together network-wise, if the master can't get through to the minions then it's likely the minions wouldn't have been able to get through to the rsync dst
05:29 cast suppose at the dst, or over the network, you were running into contention problems with all the rsyncs, you could use salt -b :)
05:30 MTecknol1gy the 180 facilities I back up with rsync are connected with between 1 and four T1,s
05:30 MTecknol1gy 's*
05:32 Nexpro1 joined #salt
05:32 cast T1 = 1.5Mbps?
05:40 MTecknol1gy yup
05:40 MTecknol1gy 1.5/1.5
05:40 MTecknol1gy only useful because the bandwidth is guaranteed, as opposed to sharing with neighbors
05:55 oz_akan_ joined #salt
05:59 yomilk joined #salt
06:08 ndrei joined #salt
06:08 kalloc joined #salt
06:14 fllr joined #salt
06:20 oz_akan_ joined #salt
06:21 MTecknol1gy to cancel the room or to let my company pay for it...
06:22 oz_akan__ joined #salt
06:22 cast so why did they say you couldn't go?
06:23 MTecknol1gy too much is broken (because of others and I'm apparently the only one that can fix these things)
06:25 MTecknol1gy I suppose cancelling the room would be the right thing to do, but... I'm pissed and making them pay $604 would make me feel slightly better
06:27 forrest Nah someone needs to be mature about the situation, might as well be you
06:28 MTecknol1gy forrest: you ass hole...
06:28 MTecknol1gy fune
06:28 MTecknol1gy fine*
06:28 forrest lol
06:28 MTecknol1gy I hope they at least still charge 'something' because of the short notice
06:28 forrest yea I don't know
06:29 cast later on when something big breaks hopefully someone who went to saltconf 2014 goes to your boss "oh, yeah that situation was covered in a saltconf 2014 presentation...good thing i went so our enterprise didn't hit that!"
06:29 MTecknol1gy "Your reservation has been canceled. An email with this information has been sent to ..." :'(
06:30 MTecknol1gy I was going to meet lindsey stirling and marry her and have three kids!
06:31 forrest what
06:31 * cast googles
06:31 forrest yea I had to google her as well
06:32 MTecknol1gy is this because I'm old or hip?
06:32 forrest *shrug*
06:32 forrest does she live in SLC or something?
06:33 MTecknol1gy probably not
06:33 MTecknol1gy it was a random comment
06:33 cast oh, i was hoping she used saltstack
06:33 forrest that would be cool
06:34 MTecknol1gy no, but she's damned cute
06:35 MTecknol1gy and extremely talented
06:38 justlooks joined #salt
06:38 justlooks hi anyone can help this https://gist.github.com/justlooks/8629378
06:40 MTecknol1gy projects doesn't exist in pllar
06:40 MTecknol1gy pillar*
06:40 DND joined #salt
06:40 MTecknol1gy UndefinedError: 'dict' object has no attribute 'projects'  <-- says it all
06:41 srage joined #salt
06:53 matanya joined #salt
06:56 justlooks MTecknol1gy: how can i fix it?
06:57 pdayton joined #salt
06:58 pdayton joined #salt
06:58 pdayton joined #salt
07:01 yomilk joined #salt
07:04 justlooks MTecknol1gy: ok, i figured it out: add 'for i in projects' in the config template file
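A minimal sketch of that fix, assuming a pillar file that defines a 'projects' list (the project names are placeholders):

    # pillar/projects.sls
    projects:
      - alpha
      - beta
      - gamma

    {# in the config template: iterate over the list instead of addressing
       pillar.projects as an attribute, which raises the UndefinedError above #}
    {% for project in salt['pillar.get']('projects', []) %}
    project: {{ project }}
    {% endfor %}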
07:04 sroegner joined #salt
07:08 kalloc joined #salt
07:22 sharoonthomas joined #salt
07:22 oz_akan_ joined #salt
07:33 ckao joined #salt
07:33 fllr joined #salt
07:36 DND guys, do you have a good site that compares salt, chef and puppet?
07:36 xmj google
07:39 DaveQB joined #salt
07:42 ravibhure joined #salt
07:47 justlooks problem again  https://gist.github.com/justlooks/8629378  help!
07:48 Ryan_Lane joined #salt
07:51 justlooks anyone can help?
08:02 justlooks no one?
08:07 yomilk joined #salt
08:07 n8n joined #salt
08:08 kalloc joined #salt
08:12 sharoonthomas_ joined #salt
08:16 Patrick joined #salt
08:27 kalloc joined #salt
08:27 MohShami_ joined #salt
08:27 MohShami_ hey guys, is there a way to tell a state which state to call after? I know about require and watch, but I want to define the flow in the state that runs first
08:30 MohShami_ just found require_in, thanks :)
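A minimal sketch of require_in, which declares the ordering from the state that runs first; the package and service names are placeholders:

    {# the package pushes a require into the service, so the dependency is
       declared in the state that runs first instead of the one that runs later #}
    apache-pkg:
      pkg.installed:
        - name: apache2
        - require_in:
          - service: apache

    apache:
      service.running:
        - name: apache2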
08:37 ConceitedCode joined #salt
08:53 sroegner joined #salt
08:54 fragamus joined #salt
09:04 fllr joined #salt
09:13 fllr joined #salt
09:14 kalloc joined #salt
09:14 linjan joined #salt
09:24 oz_akan_ joined #salt
09:31 Koma-AFK is facebook down?
09:31 anuvrat joined #salt
09:34 fllr joined #salt
09:45 MohShami_ is there a way to check if a grain is there? I'm trying to build an SLS file, one of the states checks if a certain grain is there, if it's not defined, I don't want that state to run, but so far salt tries to check it and fails
09:48 kalloc joined #salt
09:58 nkuttler MohShami_: __grains__ var
09:58 MohShami_ thanks nkuttler, I'm trying to do this "{% grains['zfs'] is defined %}" but I'm receiving an error
09:59 MohShami_ can you point me to the proper syntax please?
09:59 MohShami_ if I do grains['zfs'] == directly, I get an "undefined" error
10:00 kalloc joined #salt
10:04 MohShami_ nkuttler, finally found it, thanks mate :)
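A sketch of the usual guard for an optional grain; 'zfs' is the key from the discussion above, and the state inside the guard is a placeholder:

    {# grains.get (or an "'zfs' in grains" test) avoids the undefined error
       that indexing a missing key raises #}
    {% if salt['grains.get']('zfs') %}
    zfs-tools:
      pkg.installed:
        - name: zfsutils
    {% endif %}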
10:17 yomilk joined #salt
10:21 yomilk joined #salt
10:22 nkuttler MohShami_: you're welcome ;)
10:35 UtahDave joined #salt
10:36 fllr joined #salt
10:42 sroegner joined #salt
10:44 Mua joined #salt
10:46 elfixit1 joined #salt
10:48 Mua_ joined #salt
10:58 ConceitedCode joined #salt
10:59 ndrei joined #salt
10:59 yomilk_ joined #salt
11:01 kalloc joined #salt
11:05 ConceitedCode joined #salt
11:12 ndrei joined #salt
11:15 kalloc joined #salt
11:23 ccase joined #salt
11:25 oz_akan_ joined #salt
11:30 harobed_ joined #salt
11:31 harobed_ joined #salt
11:34 fllr joined #salt
11:45 sharoonthomas joined #salt
12:01 ndrei joined #salt
12:02 sharoonthomas joined #salt
12:15 harobed_ joined #salt
12:31 sroegner joined #salt
12:32 crane joined #salt
12:33 matanya joined #salt
12:34 fllr joined #salt
12:38 gfa left #salt
12:56 analogbyte joined #salt
13:07 yomilk joined #salt
13:09 ndrei joined #salt
13:21 taion809 joined #salt
13:24 elithrar joined #salt
13:27 srage joined #salt
13:34 fllr joined #salt
13:58 KinyobiWan joined #salt
14:07 mgw joined #salt
14:10 farra joined #salt
14:14 sroegner joined #salt
14:19 sroegner_ joined #salt
14:22 matanya joined #salt
14:28 oz_akan_ joined #salt
14:28 ndrei joined #salt
14:34 fllr joined #salt
14:35 jimallman joined #salt
14:37 sharoonthomas joined #salt
14:50 ndrei joined #salt
14:55 kalloc joined #salt
14:57 felixhummel joined #salt
14:59 ndrei joined #salt
15:03 kalloc joined #salt
15:03 sharoonthomas joined #salt
15:16 dangra joined #salt
15:27 teebes joined #salt
15:28 sroegner joined #salt
15:29 oz_akan_ joined #salt
15:34 fllr joined #salt
15:41 rojem joined #salt
15:42 yomilk joined #salt
15:43 oz_akan_ joined #salt
15:45 ndrei joined #salt
15:47 teebes joined #salt
15:47 bejer joined #salt
15:54 leron joined #salt
16:06 blafountain joined #salt
16:06 Psi-Jack What, exactly, does this do? salt['grains.get']('roles', [])   I'm guessing, if grains['roles'] exists, it returns it, but if not, it returns an empty list?
16:06 blafountain hey all!
16:06 blafountain is it just me or is salt-bootstrap broken? it appears you upgraded yesterday
16:07 blafountain sh: 206: __check_config_dir: not found  * ERROR: The configuration directory  does not exist.
16:07 blafountain it appears it's fixed in develop, but 'stable' is still broken
16:08 sroegner_ joined #salt
16:09 rojem joined #salt
16:09 blafountain https://github.com/saltstack/salt-bootstrap/commit/5340e62461aa3bb7e391591eb8356c798b808956
16:11 sroegner Psi-Jack: if you put this in your jinja file it's the equivalent of salt-call grains.get roles, and the second arg is just the default empty list
16:12 Psi-Jack So, basically what I said. If set, it returns the values, if not, returns the default empty list.
16:12 sroegner yes, exactly
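A sketch of that pattern in a template; the 'webserver' role is hypothetical:

    {# returns the roles list if the grain is set, otherwise the [] default #}
    {% set roles = salt['grains.get']('roles', []) %}
    {% if 'webserver' in roles %}
    include:
      - apache
    {% endif %}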
16:12 Psi-Jack trying to drill down and make all my stuff work on an unconfigured system. Piece by pied.
16:12 Psi-Jack piece*
16:13 sroegner works great
16:13 sroegner use it all the time
16:13 Psi-Jack There we go, had to refresh pillars. :)
16:17 taion809 joined #salt
16:29 Psi-Jack Noooow we're cookin with gas.
16:29 Psi-Jack Got my setup.sls state I run to initialize a new host, then I highstate after that, and presto.
16:34 fllr joined #salt
16:39 Psi-Jack I have that setup state because I'll need to schedule an at job to restart salt-minion after the auth stuff is done, adding the groups that are needed for permissions. heh
16:40 Psi-Jack And salt, restarting salt, never works. LOL
16:42 zooz joined #salt
16:50 Mua joined #salt
16:51 MTecknology joined #salt
16:56 JasonSwindle joined #salt
16:56 JasonSwindle UtahDave:  Are you online and away, or away away?
16:59 gasbakid_ joined #salt
16:59 fllr joined #salt
17:03 fllr joined #salt
17:13 devull joined #salt
17:14 devull .
17:17 ndrei joined #salt
17:21 fllr joined #salt
17:22 puppet Psi-Jack: restart the machine, fixed! :D
17:22 matanya joined #salt
17:23 * Psi-Jack gets a shovel and hands it to puppet. "Start digging." P)
17:23 Psi-Jack :)
17:23 puppet Psi-Jack: OR! install puppet to restart salt!
17:23 puppet then after the restart, you use salt to remove puppet again
17:23 puppet :D
17:23 puppet see I am full of smart ideas
17:24 * Psi-Jack points the Glock 9mm silenced at puppet "We're gonna need this to be VERY deep, so nobody will ever find the body..."
17:25 puppet :D
17:25 puppet I need to stop slacking and start labbing with states :(
17:26 Psi-Jack Ugh, sometimes I truely hate squid.
17:26 Mua_ joined #salt
17:27 puppet Psi-Jack: is it evil?
17:27 Psi-Jack Eh, sometimes.
17:27 Psi-Jack I updated my in-house rpm repo, and squid kept the old one cached and wouldn't handle it properly.
17:27 anuvrat joined #salt
17:28 puppet Psi-Jack: that happened to me with nginx once.... had to update it manually
17:28 Psi-Jack Until I manually deleted the spool cache.
17:28 fllr joined #salt
17:28 puppet Psi-Jack: yeah thats what i did, used salt to run that command on all web servers
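A sketch of that kind of one-shot purge from the master; the grain target and spool path are assumptions, and in practice you would stop squid before clearing its cache directory:

    salt -G 'roles:web' cmd.run 'service squid stop; rm -rf /var/spool/squid/*; squid -z; service squid start'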
17:35 teebes joined #salt
17:37 ConceitedCode joined #salt
17:40 gazprom joined #salt
17:43 ndrei joined #salt
17:49 fragamus joined #salt
17:57 minaguib joined #salt
17:57 sroegner_ joined #salt
18:12 eculver joined #salt
18:12 eculver joined #salt
18:12 sroegner left #salt
18:13 jimallman joined #salt
18:15 wolfpackmars2 joined #salt
18:36 jmccree joined #salt
18:36 EugeneKay joined #salt
18:37 Cidan joined #salt
18:38 felixhummel joined #salt
19:02 Guest49891 joined #salt
19:08 wolfpackmars2 <puppet> there you are
19:09 puppet wolfpackmars2: :)
19:09 wolfpackmars2 <puppet> don't want to hijack the other channel
19:09 wolfpackmars2 so what do you do?  sys admin?
19:09 puppet wolfpackmars2: thas true :)
19:10 puppet wolfpackmars2: well I am "Technical Director", but "hands-on CTO" better describes what I do :)
19:10 wolfpackmars2 I'm trying to pull myself away from the coin so I can start working on my saltstack
19:10 ndrei joined #salt
19:11 puppet wolfpackmars2: I need to port most of my puppet-stuff to salt, we are moving over to LDAP authentication though, instead of doing user accounts on every computer
19:11 puppet server*
19:11 zooz joined #salt
19:12 wolfpackmars2 <puppet> I'm a hobbyist admin - actually been doing admin type stuff since I was 10.  but my day job is electronics inspector :\
19:12 puppet wolfpackmars2: my day job 2 years ago was salesman in a retail electronic store :)
19:12 n8n joined #salt
19:12 puppet wolfpackmars2: now I handle infrastructure and development for a online game :D
19:12 puppet wolfpackmars2: kind of dream come true, except, working almost 24/7
19:13 wolfpackmars2 <puppet> that's the trajectory if you want to be successful.  putting lots of time into something and being lucky enough to have it work out and you make millions
19:13 wolfpackmars2 that's what I've found anyways
19:13 wolfpackmars2 still waiting to make my first million XD
19:14 puppet wolfpackmars2: I was really lucky, but then I had been working to widen my net of contacts for a long time, and it paid off
19:15 wolfpackmars2 <puppet> but yes, I hate ruby ever since I worked on a borked project that used Ruby.  As a sys admin, I couldn't get past their attitude of "who cares if the code is efficient.  Programmers are expensive.  Throw more hardware at it"
19:15 wolfpackmars2 I was like "lolwut?!"
19:15 puppet Haha
19:15 wolfpackmars2 that seems to be the snarky attitude of rubyists
19:16 wolfpackmars2 "if the app is slow, just keep adding hardware until it runs smoothly"
19:16 wolfpackmars2 "*snap* sys-admins, get on it"
19:16 puppet Thing is, that's what you do when you don't have time to fix stuff, when you have time to fix it - you fix it in code
19:17 wolfpackmars2 <puppet> I have set up a base debian box for vagrant to test with saltstack, and I'm running my salt master on a vagrant VM
19:18 wolfpackmars2 <puppet> my plan is to test using vagrant VMs, and deploy to my live VPSs
19:19 wolfpackmars2 something I can't wrap my head around though - it appears that you set up your SLS, say for apache.  Whenever the minion refreshes, it will update apache.  I would think this would be bad for a live server
19:19 puppet wolfpackmars2: we are running XenServer combined with cobbler, so i set up cobbler to autosign every new machine that gets installed
19:20 puppet wolfpackmars2: depends on config, in my case I control the nginx package at my place because I compile it
19:20 wolfpackmars2 you could have apache update one day automatically and take down your whole operation?  if something in the new apache (or php) was incompatible with your app
19:20 puppet wolfpackmars2: you don't run php in production!
19:20 puppet ;)
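One common guard against that surprise-upgrade scenario is pinning the package version in the state, so a highstate run cannot pull in a new apache on its own; a minimal sketch with a placeholder name and version:

    apache:
      pkg.installed:
        - name: httpd
        - version: 2.2.15-29.el6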
19:20 wolfpackmars2 so... using a watched folder ? you put the new version there and let salt distribute it to the net?
19:21 puppet Nah, I run a YUM repo
19:21 puppet internally in the network
19:21 puppet which has the highest priority of the repos
19:21 wolfpackmars2 makes sense.  lots of work tho ?  I guess less work than managing 1000 servers by hand
19:22 puppet Nah, you just compile nginx, run right scripts
19:22 wolfpackmars2 so what does your app run on?
19:22 puppet and then you get an rpm that you put in a folder, run createrepo, done
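A sketch of pointing minions at such an internal repo with a state; the URL and names are placeholders, and the priority line assumes the yum priorities plugin is installed:

    internal-repo:
      pkgrepo.managed:
        - humanname: Internal packages
        - baseurl: http://repo.internal.example/el6/x86_64
        - gpgcheck: 0
        - priority: 1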
19:22 puppet we run most of our stuff based on cherrypy
19:22 puppet then we run nginx in front of that
19:22 dancat joined #salt
19:23 puppet which in turn we run behind one of the bigger CDN providers
19:23 wolfpackmars2 so what stage are you at for migrating to SLS?
19:24 puppet wolfpackmars2: really just the start, we are at the point where we put up puppet + salt to be installed on new servers, then we thought "fuck puppet" :)
19:25 wolfpackmars2 <puppet> I really like salt stack - I just keep getting bogged down in the details.  so much to do, I end up getting overwhelmed and haven't started anything yet.  Also, so many different ways to accomplish the same thing.  Which is good, but can be confusing
19:25 puppet wolfpackmars2: well it's not pythonic then
19:25 puppet :)
19:25 wolfpackmars2 would probably be easier if I were a "real" sys admin and had more experience
19:26 wolfpackmars2 having a family takes a lot of time as well.  Managing a family is like managing a medium sized business :P
19:31 puppet :D
19:31 wolfpackmars2 <puppet> have you played with vagrant at all?
19:31 puppet no, not really, we looked at it for internal use for dev environments
19:32 puppet but we don't have the time for it
19:32 puppet easier for us to spin up a new internal server on a separate network
19:32 puppet if we start hiring more people it may be easier to look at vagrant
19:32 puppet but not right now
19:32 wolfpackmars2 well, it's basically just automating what virtualbox already does
19:33 wolfpackmars2 it's nice to just "vagrant up" a new vm tho
19:33 wolfpackmars2 VMs become repeatable and easily throw away
19:33 puppet wolfpackmars2: thing is, virtualbox on mac os x is kind of... not good atm, the network interfaces act funky
19:34 wolfpackmars2 you using a lot of mac osx in your env?
19:34 puppet all dev machines
19:35 wolfpackmars2 interesting, because I think vagrant was geared mostly towards mac to start with.  it started as a ruby project, which is mostly mac
19:35 wolfpackmars2 it's only recently that vagrant has become less reliant on ruby (still based on ruby, but you no longer need to install ruby as it comes rolled with ruby internally, i believe)
19:40 wolfpackmars2 <puppet> keep vagrant in the back of your mind.  someday it could prove useful in your org, even if it isn't immediately apparent right now.  Just like having something like Salt Stack helps with managing lots of servers, Vagrant helps to manage lots of developers XD
19:41 puppet wolfpackmars2: yeah, vagrant feels like a development tool right now though
19:41 CheKoLyN joined #salt
19:42 leron joined #salt
19:45 wolfpackmars2 <puppet> that's how it was intended.  But I'm using it to test salt formulas as well. basically anything you can do with a VM/vps/server
19:46 sroegner_ joined #salt
19:49 kalloc joined #salt
19:49 darless joined #salt
19:50 darless left #salt
19:54 noob13_ joined #salt
19:59 ccase joined #salt
20:10 flebel joined #salt
20:15 morsik left #salt
20:16 rojem joined #salt
20:18 DerekRBN joined #salt
20:18 KinyobiWan1 joined #salt
20:19 DerekRBN Anyone know how many people are supposed to be at saltconf?
20:19 kalloc joined #salt
20:25 harobed_ joined #salt
20:29 johtso joined #salt
20:40 JasonSwindle joined #salt
20:41 PLA1 left #salt
20:46 aleszoulek joined #salt
20:52 Ahlee That's a good question
20:53 JasonSwindle JINJA, why you hate me……. gah
20:55 Ryan_Lane joined #salt
20:55 kalloc joined #salt
20:56 JasonSwindle anyone here good at JINJA?
20:56 dangra joined #salt
21:06 ndrei joined #salt
21:07 Gordonz joined #salt
21:08 ajw0100 joined #salt
21:09 Gordonz joined #salt
21:09 AdamSewell joined #salt
21:09 DanGarthwaite joined #salt
21:16 ndrei joined #salt
21:28 yomilk joined #salt
21:29 oz_akan_ joined #salt
21:30 bastion2202 joined #salt
21:33 bastion2202 hey guys, looking at the mine atm. how can I call it from an sls (to get eth0 network.ip_addrs, for example)? the doc is close to nonexistent
21:33 forrest joined #salt
21:35 sroegner_ joined #salt
21:37 Mua joined #salt
21:40 rojem joined #salt
21:41 ndrei joined #salt
21:43 taion809 joined #salt
21:44 DanGarthwaite To be executed on a minion?  You could adapt "salt '*' mine.get '*' network.interfaces" to use salt-call instead.
21:45 DanGarthwaite First: Could you give an example of retrieving the information from the command line?
21:47 DanGarthwaite @bastion2202 : Actually, found this:  for minion, peer_grains in salt['mine.get']('*', 'grains.items').items():
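A sketch of the full mine round-trip for the eth0 question: the minion publishes the function, then an sls template reads it back. The mine_interval value is an assumption:

    # /etc/salt/minion -- publish eth0 addresses to the mine
    mine_functions:
      network.ip_addrs:
        - eth0
    mine_interval: 60

    {# in an sls template: collect every minion's eth0 address #}
    {% for minion, addrs in salt['mine.get']('*', 'network.ip_addrs').items() %}
    # {{ minion }} -> {{ addrs[0] }}
    {% endfor %}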
21:49 MedicalJaneParis joined #salt
21:53 viq joined #salt
21:53 viq joined #salt
21:55 kalloc joined #salt
21:57 DaveQB joined #salt
21:59 cast joined #salt
22:06 david_a joined #salt
22:14 nkuttler JasonSwindle: some people in #pocoo probably are
22:15 JasonSwindle nkuttler:  Thank you.  JINJA + YAML + Salt Modules gets messy
22:15 nkuttler JasonSwindle: tbh, the only thing that annoys me is whitespace.. but maybe my states are simple ;)
22:16 [diecast] joined #salt
22:34 toofer joined #salt
22:38 rojem joined #salt
22:46 [diecast] joined #salt
22:55 kalloc joined #salt
23:02 pydanny joined #salt
23:02 bastion2202 joined #salt
23:04 ajw0100 joined #salt
23:09 JasonSwindle anyone see the correct way to do this?
23:09 JasonSwindle {%- set redis_compound_string = "G@node_type:redis and G@env:grains['env'] and G@datacenter:grains['datacenter']" -%}
23:09 JasonSwindle I cannot seem to get it correct.
23:09 forrest JasonSwindle, which part of that is not working? Did you break it down yet?
23:09 JasonSwindle forrest:  Howdy
23:10 JasonSwindle When I do it by hand with no pillar data…works great.
23:10 JasonSwindle Now trying to change out the by hand stuff with grains
23:11 forrest So what about the output is currently incorrect?
23:11 JasonSwindle It is not matching, in fact I get the friendly error:
23:11 JasonSwindle Comment: Unable to manage file: Jinja variable list object has no element 0; line 8
23:12 JasonSwindle redis_url: redis://{{ salt['publish.publish']( redis_compound_string, 'network.ip_addrs', 'eth0', expr_form='compound').values()[0][0] }}:6379/2    <======================
23:12 JasonSwindle meaning I messed up something for no match to happen
23:13 JasonSwindle forrest:  https://gist.github.com/JasonSwindle/6b3b41644a7d91edfcbd the full SLS
23:13 forrest oh this again
23:13 forrest :P
23:14 JasonSwindle Eh?  I break a lot of things….. lol
23:14 forrest I thought that was working at one point
23:14 JasonSwindle half 'n half
23:14 forrest when you put in the [0][0] earlier this week
23:16 JasonSwindle by hand, using the correct data without the grains[''], it works
23:17 JasonSwindle https://gist.github.com/JasonSwindle/6b3b41644a7d91edfcbd
23:17 JasonSwindle transport: redis
23:17 JasonSwindle redis_url: redis://10.69.244.239:6379/2
23:17 JasonSwindle redis_namespace: logstash
23:18 JasonSwindle top vs bottom…… and I am brain-fried
23:18 JasonSwindle forrest:  ^
23:19 forrest well, it is a Sunday :P
23:19 forrest so bleh for working
23:19 forrest I think you should break it down
23:19 forrest try with one grain first and see if that fails, and so on
23:19 forrest till you find the failure.
23:19 JasonSwindle I am trying to get this done so I can stop working :P
23:20 forrest heh
23:20 forrest you're not flying out till tomorrow right? That gives you all morning!
23:20 JasonSwindle I wake up at 3, so not much time. <_<
23:20 forrest 3 AM?
23:21 forrest what time are you flying out
23:21 forrest you're just in Texas, not a long flight out
23:21 JasonSwindle Take off is 6:30AM
23:21 forrest oh
23:21 forrest yea I had to work half the day on Monday, so I get in later
23:21 JasonSwindle our airport is being worked on, so things are slower, too
23:21 forrest didn't want to burn 5 vacation days
23:21 JasonSwindle All work days for me
23:22 forrest yea, because you guys actually use Salt :P
23:22 * Eugene is in the air right now!
23:22 forrest Eugene, on the way to SLC?
23:22 JasonSwindle Eugene:  SHow off!
23:22 Eugene Nope. DEN. Work this week. :-/
23:22 forrest ahh
23:22 Eugene I was hoping I'd get SLC so I could come by after class was done
23:23 forrest Eugene, how long till you get in to Denver?
23:23 Eugene Another hour to wheels-down
23:23 forrest You should go find terminalmage
23:23 forrest he's apparently stuck in Denver till 8:15 if he doesn't find an earlier flight
23:23 Eugene Ha!
23:24 sroegner_ joined #salt
23:24 JasonSwindle Any good coffee places in SLC?  Like SFO has?
23:24 forrest coffee???
23:24 forrest psssssh
23:24 forrest all about the tea
23:24 Eugene Im debating finding a pot shop in Denver tonight
23:24 JasonSwindle I am a tea guy, too
23:25 * Eugene IS a washington resident, anyway.....
23:25 forrest JasonSwindle, I was thinking about bringing some green tea with me, not sure I want to deal with the annoyance at security about it though
23:25 JasonSwindle lol
23:26 JasonSwindle I have one of these at home and desk
23:26 JasonSwindle http://www.brevilleusa.com/the-tea-maker-onetouch.html
23:26 forrest nice
23:26 forrest bah time to get some chores done
23:26 forrest damned responsibility
23:26 JasonSwindle forrest:  by hand
23:27 JasonSwindle the grains work great
23:28 JasonSwindle I just cannot get the grains[''] or the like to work in the JINJA set
23:28 JasonSwindle {%- set redis_compound_string = "G@node_type:redis and G@datacenter:iad and G@env:dev" -%}
23:28 JasonSwindle and
23:28 JasonSwindle redis_url: redis://{{ salt['publish.publish']( redis_compound_string, 'network.ip_addrs', 'eth0', expr_form='compound').values()[0][0] }}:6379/2
23:28 JasonSwindle worked great
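The likely culprit in the failing version above is that grains['env'] sits inside the string literal, so Jinja never expands it. A sketch of building the compound string by concatenation instead, assuming the env and datacenter grains are set on the minion rendering the template:

    {%- set redis_compound_string = "G@node_type:redis and G@env:" ~ grains['env'] ~ " and G@datacenter:" ~ grains['datacenter'] -%}
    redis_url: redis://{{ salt['publish.publish'](redis_compound_string, 'network.ip_addrs', 'eth0', expr_form='compound').values()[0][0] }}:6379/2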
23:38 jalbretsen joined #salt
23:40 yomilk joined #salt
23:41 oc joined #salt
23:45 mgw joined #salt
23:46 grep_awesome joined #salt
23:52 gazprom joined #salt
23:54 diegows joined #salt
23:55 kalloc joined #salt
23:56 forrest Hmm, looks like the event is up on sched, but it requires a username/password. Was anyone provided with those creds?
