
IRC log for #salt, 2014-07-11


All times shown according to UTC.

Time Nick Message
00:00 Whissi joined #salt
00:00 TaiSHi evening
00:00 TaiSHi joehh: don't want to annoy you, but, how's dev branch on debianlike?
00:00 njs126 joined #salt
00:03 joehh thought you'd ask that - getting closer hoping to finish on the weekend again
00:03 joehh really last unresolved part is signing of the packages
00:03 TaiSHi That'll be a pain
00:03 TaiSHi when adding repos automatically
00:04 joehh and me tossing up between having an unencrypted copy of the private key lying around, working through some sort of agent setup with a long enough duration, or setting up a new key
00:05 joehh I'm kind of hoping for a workable agent setup, but think I'll end up with a new key for these packages (signed by my main key)
00:06 joehh other option is it all runs, and I just sign and rsync it each morning when I get up
00:06 dude051 joined #salt
00:06 mgw joined #salt
00:07 dimeshake is there a vagrant box for salt development?
00:08 dimeshake set up with virtualenv or otherwise for checking out and running masterless for development, for example
00:08 manfred dimeshake: there is salt-testkitchen?
00:08 manfred and vagrant will deploy salt on the servers for you
00:08 manfred using salt-bootstrap
00:09 manfred but i don't believe there is a prebuilt salt box out there
00:09 Ryan_Lane it doesn't put it in any location that's editable
00:09 Ryan_Lane that works ok for testing salt, but it doesn't work well for developing it
00:09 dimeshake yeah
00:09 Ryan_Lane maybe I should publish my vagrantfile
00:09 Ryan_Lane one sec. let me sanitize it
00:09 dimeshake i'd love to see it
00:10 dimeshake was thinking about concocting a vagrantfile and some packer configs for quick testing and development
00:10 dimeshake how'd the rest of the doc sprint go?
00:10 forrest dimeshake, good, people started dropping off
00:10 forrest no big deal
00:10 dimeshake i had to run for a band practice but it got canceled
00:11 forrest yea we ended about an hour ago
00:11 manfred i am at the bar, so
00:11 manfred my sprint is going pretty well right now
00:11 dimeshake lol
00:11 dimeshake manfred: my buddy asked me if i had any management aspirations. and then he said he didn't, added this:
00:11 dimeshake People are nondeterministic and have complex failure modes
00:14 dude051 joined #salt
00:15 joehillen joined #salt
00:15 mgw joined #salt
00:18 Ryan_Lane dimeshake: https://gist.github.com/ryan-lane/94c7d96b5586d661f30f
00:19 Ryan_Lane I have it installing salt via pip via git, but you could also just point at a shared salt filesystem if that's easier for you
00:19 Ryan_Lane also, this is only for salt-call
00:19 dimeshake gotcha was going to ask
00:19 dimeshake thanks
00:20 dimeshake this is useful
00:20 Ryan_Lane this is for masterless
00:20 dimeshake yeah
00:20 Ryan_Lane you'll need to modify it some for master/minion/syndic/etc
00:20 dimeshake i may create one for centos6 (... or dare I 7?) for minion/master testing
00:20 dimeshake this is helpful
00:21 * Gareth returns...slightly numb
00:22 forrest hey Gareth
00:22 forrest I figured the dentist had taken all your teeth
00:22 Gareth hey forrest
00:23 Gareth hah. nope.  started off as a cleaning which turned into a 'while you're here, let's take care of that one filling we need to do.'
00:23 forrest fair enough
00:24 Gareth how'd the rest of the doc sprint go?
00:26 dimeshake me ~ 20 minutes ago: how'd the rest of the doc sprint go?
00:26 dimeshake are you me?
00:26 Gareth yes.  I'm you in the future.
00:26 rn__ joined #salt
00:27 dimeshake damn. i thought i was set for dental work.
00:36 Shenril joined #salt
00:37 talwai SaltStack Veterans: What are the steps you normally go through when debugging why a salt minion and master are not talking to each other? I'm currently tailing the respective logs under /var/log/salt/ but this approach is getting me nowhere. What are some other smart ways I can get information?
00:38 dimeshake talwai: run salt-minion in the foreground and see if it connects
00:38 dimeshake make sure minion and master versions match, or at least that the master version is newer than the minion's
00:38 dimeshake check for port connectivity from minion -> master
00:38 dimeshake so kill salt-minion and run with salt-minion -l debug
00:39 talwai dimeshake: got it, trying it out now
00:39 talwai noob question: but what exactly does it mean to check for port connectivity? Simply 'ping MASTER_IP: MASTER_PORT'
00:40 talwai ?
00:40 mateoconfeugo joined #salt
00:42 otter768 joined #salt
00:49 Hell_Fire joined #salt
00:54 Hell_Fire joined #salt
00:55 Gareth talwai: make sure the minions can talk to the master on ports 4505 and 4506.  A good tool to check this is nc (netcat)
00:58 ndrei joined #salt
01:00 logix812 joined #salt
01:09 talwai Gareth: thanks, great tip
01:10 Gareth no worries
01:10 talwai Unrelated question: I'm having trouble understanding how the gitfs backend works. Do the files from Git get fetched into a particular folder on the master, and then get sent to the minions from there? Or are they fetched directly from Git with no intermediate folders?
01:12 bhosmer joined #salt
01:13 dimeshake good question, i'm not clear on that either
01:13 Luke joined #salt
01:13 dimeshake i think there's a fetch any time a state refers to them, but i may be mistaken.
01:13 dimeshake any time a state is called that refers to them*
01:15 vejdmn joined #salt
01:17 talwai dimeshake: thanks for that. I'm specifically trying to understand if a fileserver setup like this makes sense: http://pastie.org/9376430
01:18 talwai Specifically should the /srv/salt/ paths simply be relative paths from the gitfs_root?
01:18 talwai Considering that the files don't seem to be copied to /srv/salt/ before being served
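For readers following talwai's question: with the gitfs backend the master clones the remotes into its own cache directory and serves files straight from those clones, so /srv/salt/ is not involved and salt:// paths are resolved relative to gitfs_root inside the repository. A minimal master-config sketch, with a made-up repo URL and subdirectory:

    # /etc/salt/master -- gitfs sketch (repo URL and subdirectory are hypothetical)
    fileserver_backend:
      - git

    gitfs_remotes:
      - git://example.com/salt-states.git

    # Serve only this subdirectory of the repo as the root of salt://
    # (so salt://nginx/init.sls maps to states/nginx/init.sls in the repo)
    gitfs_root: states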
01:26 KVron joined #salt
01:27 KVron left #salt
01:32 aw110f joined #salt
01:49 Shenril joined #salt
01:56 mortis__ joined #salt
01:56 MK_FG joined #salt
01:56 smkelly joined #salt
01:56 jchen joined #salt
01:56 bonezed joined #salt
01:56 simonmcc joined #salt
01:56 rglen joined #salt
01:56 hardwire joined #salt
01:56 nahamu joined #salt
01:56 gfa joined #salt
01:56 akoumjian joined #salt
01:56 DenkBrettl joined #salt
01:56 thunderbolt joined #salt
01:56 dotplus joined #salt
01:56 canci joined #salt
01:56 tcotav joined #salt
01:56 mike25de1 joined #salt
01:56 crane joined #salt
01:56 lynxman joined #salt
01:56 sirtaj joined #salt
01:56 twoflowers joined #salt
01:56 gmoro joined #salt
01:56 agronholm joined #salt
01:56 toddejohnson joined #salt
01:56 herlo joined #salt
01:56 LordOfLA|Broken joined #salt
01:56 huleboer joined #salt
01:56 flebel joined #salt
01:56 patrek joined #salt
01:56 gwmngilfen|afk joined #salt
01:56 ze- joined #salt
01:56 nickg joined #salt
01:56 cruatta_ joined #salt
01:56 Kalinakov joined #salt
01:56 Emantor joined #salt
01:56 synical joined #salt
01:56 jcsp joined #salt
01:56 Guest95103 joined #salt
01:56 ckao joined #salt
01:56 BbT0n joined #salt
01:56 wendall911 joined #salt
01:56 \ask joined #salt
01:56 analogbyte joined #salt
01:56 zz_cro joined #salt
01:56 supplicant joined #salt
01:56 manfred joined #salt
01:56 honestly joined #salt
01:56 Whissi joined #salt
01:56 txmoose joined #salt
01:56 cofeineSunshine joined #salt
01:56 gldnspud joined #salt
01:56 ashb joined #salt
01:56 toddnni joined #salt
01:56 Hazelesque joined #salt
01:56 ksalman joined #salt
01:56 jab416171 joined #salt
01:56 oeuftete joined #salt
01:56 amontalb1n joined #salt
01:56 kevinbrolly_ joined #salt
01:56 ahale joined #salt
01:56 copelco joined #salt
01:56 rcsheets joined #salt
01:56 xsteadfastx joined #salt
01:56 Cidan joined #salt
01:56 dh joined #salt
01:56 jbub joined #salt
01:56 t0rrant joined #salt
01:56 a1j joined #salt
01:56 claytron joined #salt
01:56 eliasp joined #salt
01:56 Kelsar joined #salt
01:56 vexati0n joined #salt
01:56 torrancew joined #salt
01:56 the_lalelu joined #salt
01:56 xmj joined #salt
01:56 ampex_ joined #salt
01:56 JordanTesting joined #salt
01:56 Jarus joined #salt
01:56 jgelens joined #salt
01:56 dimeshake joined #salt
01:56 avn_ joined #salt
01:56 kedo39 joined #salt
01:56 aberdine joined #salt
01:56 dccc joined #salt
01:56 m1crofarmer joined #salt
01:56 ghartz joined #salt
01:56 stevednd joined #salt
01:56 twobitsprite joined #salt
01:56 Bosch[] joined #salt
01:56 bdf joined #salt
01:56 aqua^^ joined #salt
01:56 keekz joined #salt
01:56 crazysim joined #salt
01:56 programmerq joined #salt
01:56 SpeeR joined #salt
01:56 miqui joined #salt
01:56 notpeter_ joined #salt
01:56 ecdhe joined #salt
01:56 hhenkel joined #salt
01:56 tinuva joined #salt
01:56 TyrfingMjolnir joined #salt
01:56 Hipikat joined #salt
01:56 catpig joined #salt
01:56 sontek joined #salt
01:56 jakubek joined #salt
01:56 TamCore joined #salt
01:56 ujjain joined #salt
01:56 Yoda-BZH joined #salt
01:56 rblackwe joined #salt
01:56 ilako joined #salt
01:56 peno joined #salt
01:56 dstokes joined #salt
01:56 ronc joined #salt
01:56 MindDrive joined #salt
01:56 seblu joined #salt
01:56 platforms joined #salt
01:56 tempspace joined #salt
01:56 mattikus joined #salt
01:56 Voziv joined #salt
01:56 mariusv joined #salt
01:56 baffle_ joined #salt
01:56 snoozer_ joined #salt
01:56 Corey joined #salt
01:56 schristensen joined #salt
01:56 user136 joined #salt
01:56 funzo joined #salt
01:56 keyvan joined #salt
01:56 rogst joined #salt
01:56 sashka_ua joined #salt
01:56 jacksontj joined #salt
01:56 geekmush joined #salt
01:56 eclectic_ joined #salt
01:56 etw joined #salt
01:56 repl1cant joined #salt
01:56 monokrome joined #salt
01:56 Vye joined #salt
01:56 cwright joined #salt
01:56 |rt| joined #salt
01:56 djaykay joined #salt
01:56 ninkotech__ joined #salt
01:56 mik3 joined #salt
01:56 tligda joined #salt
01:56 dangra joined #salt
01:56 yomilk joined #salt
01:56 njs126 joined #salt
01:56 kermit joined #salt
01:56 blast_hardcheese joined #salt
01:56 phx joined #salt
01:56 xzarth joined #salt
01:56 n0arch joined #salt
01:56 Spark joined #salt
01:56 DaveQB joined #salt
01:56 Nazca joined #salt
01:56 redondos joined #salt
01:56 hopthrisC joined #salt
01:56 msciciel_ joined #salt
01:56 mackstick joined #salt
01:56 __alex joined #salt
01:56 anteaya joined #salt
01:56 devx joined #salt
01:56 WhyteWolf joined #salt
01:56 eofs joined #salt
01:56 perfectsine joined #salt
01:56 z3uS joined #salt
01:58 [M7] joined #salt
01:58 chamunks joined #salt
01:58 tmmt joined #salt
01:58 MTecknol1gy joined #salt
01:58 jforest joined #salt
01:58 hillna_ joined #salt
01:58 jesusaurus joined #salt
01:58 balltongu joined #salt
01:58 Jahkeup joined #salt
01:58 Gareth joined #salt
01:58 codekoala joined #salt
01:58 Eugene joined #salt
01:58 georgemarshall joined #salt
01:58 [vaelen] joined #salt
01:58 madduck joined #salt
01:58 kalessin joined #salt
01:58 maber_ joined #salt
01:58 CaptTofu_ joined #salt
01:58 berto- joined #salt
01:58 Karunamon joined #salt
01:58 codysoyland joined #salt
01:58 vandemar joined #salt
01:58 Blacklite joined #salt
01:58 JoeHazzers joined #salt
01:58 xt joined #salt
01:58 arapaho joined #salt
01:58 cb joined #salt
01:58 chutzpah joined #salt
01:58 Nazzy joined #salt
01:58 Flusher joined #salt
01:58 kaictl joined #salt
01:58 Sway joined #salt
01:58 neilf__ joined #salt
01:58 sdebot joined #salt
01:58 goki joined #salt
01:58 ifur joined #salt
01:58 londo joined #salt
01:58 FL1SK joined #salt
01:58 pmcg joined #salt
01:58 juice joined #salt
01:58 nliadm joined #salt
01:58 zartoosh joined #salt
01:58 zsoftich1 joined #salt
01:58 munhitsu_ joined #salt
01:58 xenoxaos joined #salt
01:58 masterkorp joined #salt
01:58 bernieke joined #salt
01:58 ikanobori joined #salt
01:58 jmccree_ joined #salt
01:58 jasonrm joined #salt
01:58 codekobe_ joined #salt
01:58 bezaban joined #salt
01:58 mfournier joined #salt
01:58 jcristau joined #salt
01:58 eightyeight joined #salt
01:58 scalability-junk joined #salt
01:58 logandg joined #salt
01:58 djinni` joined #salt
01:58 fxhp joined #salt
01:58 v0rtex joined #salt
01:58 meganerd joined #salt
01:58 CyanB joined #salt
01:58 scoates joined #salt
01:58 rawzone joined #salt
01:58 viq joined #salt
01:58 jperras joined #salt
01:58 basepi joined #salt
01:58 blackjid joined #salt
01:58 esogas joined #salt
01:58 nyov joined #salt
01:58 nhubbard joined #salt
01:58 dwfreed joined #salt
01:58 lazybear joined #salt
01:58 Sypher joined #salt
01:58 pwiebe_ joined #salt
01:58 kossy joined #salt
01:58 _gothix_ joined #salt
01:58 bigl0af joined #salt
01:58 zemm_ joined #salt
01:58 aarontc joined #salt
01:58 E1NS joined #salt
01:58 Deevolution joined #salt
01:58 Heartsbane joined #salt
01:58 yetAnotherZero joined #salt
01:58 ccase joined #salt
01:58 dzen joined #salt
01:58 ntropy joined #salt
01:58 seb` joined #salt
01:58 tedski joined #salt
01:58 ahammond joined #salt
01:58 erjohnso joined #salt
01:58 johtso joined #salt
01:58 rnts joined #salt
01:58 lionel joined #salt
01:58 andabata joined #salt
01:58 Zuru joined #salt
01:58 eculver joined #salt
01:58 andredieb joined #salt
01:58 philipsd6 joined #salt
01:58 cyrusdav- joined #salt
01:58 rigor789|away joined #salt
01:58 SaveTheRbtz joined #salt
01:58 AnswerGu1 joined #salt
01:58 iMil joined #salt
01:58 pviktori joined #salt
01:58 jpaetzel joined #salt
01:58 Shish joined #salt
01:58 veb joined #salt
01:58 ekristen joined #salt
01:58 ninkotech joined #salt
01:58 robinsmidsrod joined #salt
01:58 renoirb joined #salt
01:58 zain_ joined #salt
01:58 bretep joined #salt
01:58 jY joined #salt
01:58 johngrasty joined #salt
01:58 fxdgear joined #salt
01:58 smferris joined #salt
01:58 EWDurbin joined #salt
01:58 jeffrubic joined #salt
01:58 bmatt joined #salt
01:58 sverrest joined #salt
01:58 twinshadow joined #salt
01:58 sifusam joined #salt
01:58 carmony joined #salt
01:58 bitmand joined #salt
01:58 ifmw joined #salt
01:58 Heggan joined #salt
01:58 jeblair joined #salt
01:58 emostar joined #salt
01:58 trevorjay joined #salt
01:58 lude joined #salt
01:58 terminalmage joined #salt
01:58 delkins_ joined #salt
01:58 AlcariTh1Mad joined #salt
01:58 jamesog joined #salt
01:58 bensons joined #salt
01:58 __number5__ joined #salt
01:58 marcinkuzminski joined #salt
01:58 seventy3_away joined #salt
01:58 pjs joined #salt
01:58 dcmorton joined #salt
01:58 SachaLigthert joined #salt
02:02 ramishra joined #salt
02:02 gldnspud joined #salt
02:03 ldlework joined #salt
02:03 xintron joined #salt
02:03 majoh joined #salt
02:03 crashmag joined #salt
02:03 totte joined #salt
02:03 al joined #salt
02:03 svs joined #salt
02:03 mpoole joined #salt
02:03 kamal_ joined #salt
02:03 mikkn joined #salt
02:03 mschiff joined #salt
02:03 oc joined #salt
02:03 Twiglet joined #salt
02:03 Outlander joined #salt
02:03 techdragon joined #salt
02:03 simmel joined #salt
02:04 nkuttler joined #salt
02:04 alekibango joined #salt
02:04 mechanicalduck joined #salt
02:04 logix812 joined #salt
02:04 mosen joined #salt
02:04 Phibs joined #salt
02:04 kuffs joined #salt
02:04 utahcon joined #salt
02:04 jaimed joined #salt
02:04 Ixan joined #salt
02:04 sindreij joined #salt
02:04 shano joined #salt
02:04 fivethre1o joined #salt
02:04 hvn joined #salt
02:05 Damoun joined #salt
02:05 zach joined #salt
02:05 Hell_Fire joined #salt
02:05 kballou joined #salt
02:05 Fa1lure joined #salt
02:05 lipiec joined #salt
02:05 TaiSHi joined #salt
02:05 Dinde joined #salt
02:05 dean joined #salt
02:05 schimmy joined #salt
02:05 svx joined #salt
02:05 lz-dylan joined #salt
02:05 koyd joined #salt
02:05 jeremyBass joined #salt
02:05 BrendanGilmore joined #salt
02:05 fejjerai joined #salt
02:05 freelock joined #salt
02:05 anotherZero joined #salt
02:05 vlcn joined #salt
02:05 gamingrobot joined #salt
02:05 mapet joined #salt
02:05 joehh joined #salt
02:05 tru_tru joined #salt
02:05 drags joined #salt
02:05 bfwg joined #salt
02:05 akitada joined #salt
02:05 borgstrom joined #salt
02:05 twiedenbein joined #salt
02:05 pfallenop joined #salt
02:05 dancat joined #salt
02:05 robawt joined #salt
02:05 d3vz3r0 joined #salt
02:05 intr1nsic joined #salt
02:05 individuwill joined #salt
02:05 darrend joined #salt
02:05 gadams joined #salt
02:05 wiqd joined #salt
02:05 modafinil_ joined #salt
02:05 kwmiebach_ joined #salt
02:05 APLU joined #salt
02:05 rhand joined #salt
02:05 Daviey joined #salt
02:05 Xiao joined #salt
02:05 jcockhren joined #salt
02:05 EntropyWorks joined #salt
02:05 scarcry joined #salt
02:05 GnuLxUsr joined #salt
02:05 abele joined #salt
02:05 beebeeep joined #salt
02:05 grep_away joined #salt
02:05 whiteinge joined #salt
02:05 lyddonb_ joined #salt
02:05 octarine joined #salt
02:05 zirpu joined #salt
02:05 terinjokes joined #salt
02:05 babilen joined #salt
02:05 Dattas joined #salt
02:05 Doqnach joined #salt
02:05 mihait joined #salt
02:05 cedwards joined #salt
02:05 stotch joined #salt
02:05 mirko joined #salt
02:05 nadley joined #salt
02:05 steveoliver joined #salt
02:05 timoguin joined #salt
02:05 brewmaster joined #salt
02:05 btorch joined #salt
02:05 cwyse joined #salt
02:06 lahwran joined #salt
02:06 Hydrosine joined #salt
02:06 mortis joined #salt
02:06 davromaniak joined #salt
02:06 savvy-lizard joined #salt
02:06 penguin_dan joined #salt
02:06 pressureman joined #salt
02:06 Ymage joined #salt
02:06 archrs joined #salt
02:06 alainv joined #salt
02:06 dober joined #salt
02:08 eliasp joined #salt
02:08 Striki joined #salt
02:08 6A4AACB48 joined #salt
02:08 luminous joined #salt
02:08 oncallsucks joined #salt
02:08 VictorLin joined #salt
02:08 jerrcs joined #salt
02:08 jalaziz joined #salt
02:08 beando joined #salt
02:08 NV joined #salt
02:08 bmcorser joined #salt
02:08 micko joined #salt
02:08 _ale_ joined #salt
02:08 whitepaws joined #salt
02:08 UForgotten joined #salt
02:08 faulkner joined #salt
02:08 rmnuvg joined #salt
02:08 xinkeT joined #salt
02:08 JPaul joined #salt
02:08 dcolish_ joined #salt
02:08 nlb joined #salt
02:08 goodwill joined #salt
02:08 beardo_ joined #salt
02:08 Hollinski joined #salt
02:08 austin987 joined #salt
02:08 hotbox joined #salt
02:09 [M7] joined #salt
02:09 Striki joined #salt
02:09 talwai joined #salt
02:09 ghanima_ joined #salt
02:09 ghanima joined #salt
02:10 talwai joined #salt
02:11 kedo39 joined #salt
02:13 chamunks joined #salt
02:14 pdayton joined #salt
02:18 ndrei joined #salt
02:19 jerrcs joined #salt
02:20 oz_akan joined #salt
02:20 kossy joined #salt
02:20 dude051 joined #salt
02:20 dude051 joined #salt
02:20 blast_hardcheese joined #salt
02:20 Luke joined #salt
02:20 badon joined #salt
02:21 otter768 joined #salt
02:22 scooby2 joined #salt
02:24 Shenril joined #salt
02:24 to_json1 joined #salt
02:34 yano joined #salt
02:34 Nojla joined #salt
02:34 Nojla hi guys
02:34 Nojla anyone here?
02:36 eliasp joined #salt
02:39 Nojla actually i'm an OJT (on-the-job trainee) at a company here in the philippines and would like to get some help regarding saltstack. I'm a third year student and i've only got a little knowledge in networking, but the company will launch a website this coming august.
02:39 talwai nojla: hi
02:39 Nojla i need some help regarding the salt stack
02:40 Nojla can i ask where should i put the sls files in ubuntu? i'm using salt-minion
02:40 talwai is it a masterless setup or will there be a salt-master as well
02:40 talwai ?
02:41 yomilk joined #salt
02:41 Nojla i've installed both
02:42 Nojla in 1 vm
02:42 Nojla so i've got it running simultaneously
02:42 Nojla include:
    - common
nginx:
    pkg.installed:
        - require:
            - pkg: common
    service.running:
        - require:
            - pkg: nginx
/etc/nginx/nginx.conf:
    file.managed:
        - source: salt://nginx/files/etc/nginx/nginx.conf
        - watch_in:
            - service: nginx
/etc/nginx/sites-available/default:
    file.managed:
        - source: salt://nginx/files/etc/nginx/sites-available/default
02:42 Nojla that's the sls file of nginx
02:43 Nojla where should i put that?
02:43 moos3 joined #salt
02:43 whyzgeek joined #salt
02:46 talwai Why do you want them both running on the same VM? is this just a test setup?
02:46 toastedpenguin joined #salt
02:46 talwai anyway sls files usually go under /srv/
02:47 bhosmer joined #salt
02:48 aw110f joined #salt
02:48 Nojla yes i'm just testing it since i'm only a beginner and i don't want to connect to the salt-master run by our network engineer
02:48 kiorky joined #salt
02:48 rofl____ joined #salt
02:48 Nojla thanks i'll try it
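A sketch of the layout talwai is pointing Nojla at, assuming the default file_roots of /srv/salt; the nginx state pasted above would then live at /srv/salt/nginx/init.sls, with its managed files under /srv/salt/nginx/files/:

    # /etc/salt/master (or /etc/salt/minion for a purely masterless salt-call
    # setup) -- these are the defaults, shown only for clarity
    file_roots:
      base:
        - /srv/salt

    # /srv/salt/top.sls -- assign the state to minions; the 'common' state is
    # pulled in automatically by the include at the top of nginx/init.sls
    base:
      '*':
        - nginx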
02:49 davidone joined #salt
02:57 yomilk joined #salt
02:59 Outlander so salt key preseeding means you cannot use dynamic minion_id names
03:04 arthabaska joined #salt
03:15 roo9 joined #salt
03:19 VictorLin joined #salt
03:20 ipalreadytaken joined #salt
03:28 quickdry21 joined #salt
03:35 Tween joined #salt
03:36 pdayton joined #salt
03:41 catpigger joined #salt
03:41 neilf__ Hi all,  Anyone here using postgres 9.3 deployed via salt?
03:41 neilf__ that can show me an example of how they create their users
03:42 neilf__ https://gist.github.com/neilferreira/0b6c176d8581330b36ed
03:42 neilf__ I'm gutting this error
03:43 yetAnotherZero so violent
03:44 neilf__ I wish I could gut it
03:44 neilf__ at the moment it is gutting me
03:44 yetAnotherZero :)
03:44 neilf__ Any hints?
03:45 yetAnotherZero no, sorry.  i believe this is above me
03:46 joehh neilf__: does your pg_hba.conf permit connections over tcp with the postgres user
03:46 joehh there might be a little chicken and egg issue
03:46 logix812 joined #salt
03:48 neilf__ local   all             postgres                                peer
03:49 neilf__ WAIT
03:49 neilf__ this is my bad
03:49 neilf__ shit
03:49 neilf__ soz :)
03:50 neilf__ my postgresql.conf file had an issue.
03:50 neilf__ thank you all.
03:51 neilf__ -l debug saved the day.
03:52 AlcariTheMad joined #salt
04:01 Outlander joined #salt
04:03 ramishra joined #salt
04:11 XenophonF joined #salt
04:13 ajolo joined #salt
04:13 Luke joined #salt
04:13 yetAnotherZero lol good news neilf__
04:18 benturner joined #salt
04:18 benturner left #salt
04:22 yomilk joined #salt
04:22 schimmy joined #salt
04:22 schimmy joined #salt
04:26 schimmy1 joined #salt
04:29 mosen joined #salt
04:36 bhosmer joined #salt
04:37 VictorLin joined #salt
04:38 yomilk joined #salt
04:39 kermit joined #salt
04:46 MatthewsFace joined #salt
04:49 jalbretsen joined #salt
04:49 XenophonF left #salt
04:55 ramteid joined #salt
05:02 pdayton joined #salt
05:04 pdayton joined #salt
05:10 lahwran joined #salt
05:13 XenophonF joined #salt
05:22 MTecknology joined #salt
05:35 XenophonF joined #salt
05:40 malinoff joined #salt
05:56 pdayton joined #salt
06:02 yomilk joined #salt
06:05 ramishra joined #salt
06:09 Joeb joined #salt
06:09 Joeb Hello
06:10 badon_ joined #salt
06:14 MatthewsFace joined #salt
06:16 jhauser joined #salt
06:17 toddejohnson joined #salt
06:24 bhosmer joined #salt
06:33 roolo joined #salt
06:37 bhosmer joined #salt
06:42 nebuchadnezzar joined #salt
06:42 Hell_Fire_ joined #salt
06:45 Ryan_Lane joined #salt
06:54 pressureman joined #salt
06:56 felskrone joined #salt
07:03 chiui joined #salt
07:04 anuvrat joined #salt
07:06 Kenzor joined #salt
07:12 ckao joined #salt
07:13 ml_1 joined #salt
07:13 Guest41666 joined #salt
07:14 picker joined #salt
07:15 pressureman joined #salt
07:25 intellix joined #salt
07:28 oz_akan_ joined #salt
07:29 matthiaswahl joined #salt
07:32 VictorLin joined #salt
07:33 Damoun joined #salt
07:35 martoss joined #salt
07:37 linjan joined #salt
07:41 robinsmidsrod joined #salt
07:44 Hell_Fire joined #salt
07:46 Lomithrani joined #salt
07:47 darkelda joined #salt
07:47 darkelda joined #salt
07:48 codysoyland joined #salt
07:57 thehaven joined #salt
08:00 vlcn anyone having issues with reactors in 2014.1.7?
08:00 jdmf joined #salt
08:02 babilen "issues" ?
08:03 babilen vlcn: I do indeed have an issue. I am trying to cause minions to fire custom events and I want to trigger that from the master. How would I go about that?
08:10 scooby2 joined #salt
08:11 vlcn babilen, call a state in the reactor
08:11 vlcn but mine all seem to be broken at this point
08:12 babilen how would calling a state fire a custom event?
08:12 babilen broken how?
08:13 bhosmer joined #salt
08:15 Striki joined #salt
08:20 ThomasJ|d joined #salt
08:26 Damoun joined #salt
08:29 oz_akan_ joined #salt
08:36 martoss joined #salt
08:36 yomilk joined #salt
08:37 ndrei joined #salt
08:38 ccase joined #salt
08:41 Vivi-1 joined #salt
08:41 workingcats joined #salt
08:42 Vivi-1 left #salt
08:46 msciciel_ joined #salt
08:55 xmj huh
08:55 xmj 2014.1.7 is out?
08:56 xmj terminalmage: do you wanna update the topic?
09:00 yomilk joined #salt
09:00 N-Mi joined #salt
09:00 N-Mi joined #salt
09:00 oz_akan_ joined #salt
09:08 tligda joined #salt
09:08 viq_ joined #salt
09:10 ndrei joined #salt
09:14 fxhp joined #salt
09:15 giantlock joined #salt
09:18 ramishra joined #salt
09:24 zooz joined #salt
09:48 yomilk joined #salt
09:50 pdayton joined #salt
09:54 ggoZ joined #salt
09:57 martoss joined #salt
09:57 martoss left #salt
10:01 oz_akan_ joined #salt
10:01 ramishra_ joined #salt
10:02 tomspur joined #salt
10:02 tomspur joined #salt
10:02 matthiaswahl joined #salt
10:02 bhosmer joined #salt
10:10 matthias_ joined #salt
10:15 martoss joined #salt
10:22 Damoun joined #salt
10:24 davidone joined #salt
10:34 yomilk joined #salt
10:38 ramishra joined #salt
10:43 CeBe joined #salt
10:53 agend joined #salt
10:54 anuvrat joined #salt
10:56 ramishra_ joined #salt
10:57 bhosmer joined #salt
10:57 al joined #salt
11:02 oz_akan_ joined #salt
11:02 diegows joined #salt
11:21 yomilk joined #salt
11:25 yomilk joined #salt
11:28 aleszoulek joined #salt
11:29 Lomithrani joined #salt
11:31 matthiaswahl joined #salt
11:34 yomilk joined #salt
11:37 masterkorp Hello
11:37 masterkorp http://pastie.org/private/jug8bwug7d7l9cyf4o4fg
11:37 masterkorp i am trying to convert some pillar data to json
11:37 masterkorp http://pastie.org/private/xadwa5jwsibnrv1hggclq
11:37 masterkorp and i get this
11:37 masterkorp any ideas ?
11:39 bmcorser the salt-master process has about 100 subprocesses
11:40 bmcorser it's eating up all the RAM on that box
11:40 bmcorser is this necessary or is it a setting we can change somewhere?
11:45 TheThing joined #salt
11:56 ThomasJ|d joined #salt
11:58 davidone joined #salt
11:59 aleszoulek joined #salt
11:59 davidone joined #salt
12:00 davidone joined #salt
12:02 babilen bmcorser: Yes, just set "behave_or_its_gets_the_hose: True" in your master config
12:03 babilen bmcorser: (Just kidding) -- There was a thread about this on the mailing list a while back, but I can't recall a definite solution. How many minions do you have and how much RAM does your master use/have available?
12:04 martoss joined #salt
12:05 babilen masterkorp: Ah, pydsl .. let me double check that. It did work with the pure Python example that I gave you, didn't it?
12:07 babilen masterkorp: What's the error you get in the master/minion log about this?
12:10 hobakill joined #salt
12:12 hobakill hello all - i could really use some help getting salt-ssh working. not sure why but i cannot do even the simplest ping using salt-ssh. standard salt binary seems to work fine however. any thoughts?
12:13 dave joined #salt
12:14 clone1018 joined #salt
12:14 dave__ joined #salt
12:15 millz0r joined #salt
12:15 dave__ joined #salt
12:21 ramishra joined #salt
12:21 martoss joined #salt
12:30 bhosmer joined #salt
12:31 logix812 joined #salt
12:35 pdayton joined #salt
12:38 Kenzor joined #salt
12:43 XenophonF hobakill: what kind of errors do you get with salt-ssh?
12:47 pressureman joined #salt
12:48 shiin joined #salt
12:50 shiin I'm trying to configure salt to use multiple pillars. is that possible yet, with the bleed-through feature like it works with states?
12:58 bhosmer_ joined #salt
12:59 Kalinakov joined #salt
12:59 mpanetta joined #salt
12:59 zooz joined #salt
13:00 bhosmer__ joined #salt
13:01 flupke joined #salt
13:01 bmcorser babilen: in the region of 80, i just recompiled libzmq, reinstalled pyzmq and updated the salt-master package -- no change
13:02 ramishra joined #salt
13:02 bmcorser babilen: PS: I tried 'stop_touching_my_leg: True' but that failed too
13:02 oz_akan_ joined #salt
13:08 XenophonF LOL
13:08 pdayton joined #salt
13:10 ramishra joined #salt
13:13 racooper joined #salt
13:13 pressureman joined #salt
13:17 beardo joined #salt
13:18 hobakill XenophonF, nothing is returned. i think i figured it out though. i had no /etc/salt/roster file. after setting that up it 'works' but i'm still investigating.
13:20 vejdmn joined #salt
13:20 babilen bmcorser: And how much RAM is being used?
13:20 hobakill my question that remains is whether or not i can have multiple IPs/FQDNs under a single header.
13:23 che-arne joined #salt
13:23 anuvrat joined #salt
13:26 TyrfingMjolnir joined #salt
13:31 ndrei joined #salt
13:31 tessellare joined #salt
13:32 hobakill XenophonF, well i spoke too soon. i'm getting "permission denied" errors despite not being prompted for the root pw.
13:32 TheThing joined #salt
13:32 tessellare left #salt
13:36 scoates_ joined #salt
13:37 Lomithrani joined #salt
13:38 babilen How can I programmatically send an event from a minion to a master (in an execution module) ?
13:38 masterkorp babilen: sorry, was on lunch break
13:39 babilen That's okay :)
13:39 babilen I had nommy lunch earlier too
13:41 hobakill retcode:
13:41 hobakill 255
13:41 hobakill stderr:
13:41 hobakill Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
13:41 hobakill
13:41 hobakill stdout:
13:43 xerxas joined #salt
13:45 ndrei joined #salt
13:45 XenophonF hobakill, did you change PermitRootLogin in the sshd_config file and remember to restart the SSH service?
13:46 anuvrat joined #salt
13:47 mechanicalduck joined #salt
13:48 zooz joined #salt
13:48 XenophonF although if you can ssh to root yourself, then that wouldn't be at issue
13:49 hobakill XenophonF, yeah i can do a standard bash ssh just fine
13:50 oz_akan_ joined #salt
13:51 XenophonF hm
13:51 XenophonF let me spin up a new VM and give it a try myself
13:51 bhosmer joined #salt
13:51 bhosmer__ joined #salt
13:52 aquinas joined #salt
13:55 geekmush joined #salt
13:55 hobakill XenophonF, i think i have it figured out and it's clearly my bad.
13:57 geekmush joined #salt
13:57 XenophonF oh great!  what was it?
14:00 hobakill XenophonF, i'm an idiot....but new with salt so it's forgivable i think. i'm still testing but i have a salt state that changes PermitRootLogin to 'without-password'.... something is amiss but i think i'm on the right track thanks to your help.
14:00 XenophonF ah hah! i'm glad to hear it!
14:00 Ixan joined #salt
14:03 hobakill XenophonF, i have a file.sed command that changes '#PermitRootLogin yes' to 'PermitRootLogin without-password'. but doesn't restart sshd so that's what i have to figure out next.
14:04 XenophonF gotcha
14:04 XenophonF hobakill: you should be able to put in a service.running state that watches the file.sed state
14:04 hobakill good call. thanks.
14:05 XenophonF something like:
14:05 XenophonF sshd:
14:05 XenophonF   service:
14:05 XenophonF     - running
14:05 XenophonF     - watch:
14:05 XenophonF       - file: edit_sshd_config
14:07 hobakill presumably i could put that in the same init.sls file as my file.sed but after the file.sed command runs?
14:07 XenophonF yes
14:07 XenophonF it doesn't matter what order you put the states
14:08 Ixan joined #salt
14:08 hobakill that's interesting. i thought ordering was important.
14:08 XenophonF no
14:08 XenophonF if you need things to run in order because one state depends on the result of another, you have to use require/watch
14:08 XenophonF ever use a makefile?
14:08 hobakill ok
14:08 hobakill yes
14:08 XenophonF good
14:09 XenophonF have the same mindset when writing your salt states
14:09 XenophonF and assume that if you don't put dependencies in, salt can run the states in any order
14:10 XenophonF and if you have a watch clause in your state, you don't need a require (for some reason I had thought I needed both when starting out with Salt)
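A sketch of the pattern being described here, tying hobakill's file.sed edit to an sshd restart via watch; the state ID edit_sshd_config is the hypothetical name from XenophonF's snippet above:

    # sketch of an init.sls -- order in the file doesn't matter, the watch does
    edit_sshd_config:
      file.sed:
        - name: /etc/ssh/sshd_config
        - before: '#PermitRootLogin yes'
        - after: 'PermitRootLogin without-password'

    sshd:
      service.running:
        - watch:
          - file: edit_sshd_config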
14:11 ajprog_laptop1 joined #salt
14:14 hobakill XenophonF, thanks a ton. really appreciate the help. i've posted in the IRC before and haven't received any responses. is there a better time to post in here? i even check the logs the next day.
14:16 dude051 joined #salt
14:17 flupke_ joined #salt
14:18 xet7 joined #salt
14:19 Lomithrani1 joined #salt
14:22 ramishra joined #salt
14:23 wendall911 joined #salt
14:26 bhosmer joined #salt
14:30 Kenzor joined #salt
14:32 Kenzor_ joined #salt
14:35 ipmb joined #salt
14:43 jaimed joined #salt
14:44 jalbretsen joined #salt
14:46 tkharju joined #salt
14:47 XenophonF hobakill, it depends on who's all online
14:47 XenophonF usually the salt devs are on during the day (US time)
14:48 higgs001 joined #salt
14:48 jaimed joined #salt
14:52 terminalmage xmj: no. we haven't made the official announcement
14:52 ramishra joined #salt
14:52 xmj huh
14:52 xmj weirdness!
14:53 xmj terminalmage: what's missing there?
14:54 thedodd joined #salt
14:54 arthabaska joined #salt
14:54 terminalmage xmj: after we cut the release, we wait a couple days to allow packagers to have it ready
14:54 xmj ah
14:54 terminalmage because otherwise, the first thing people do is complain there aren't packages
14:55 xmj freebsd packages will be ready on wednesday :)
14:55 terminalmage it's still on pypi for those that can't wait
14:55 xmj ha
14:55 terminalmage I know people that build their own RPMs
14:55 XenophonF xmj: which release?
14:55 xmj any
14:55 XenophonF salt 2014.1.5 is already in the FreeBSD ports tree
14:55 xmj culot committed 2014.1.7 at around 4AM this morning
14:56 XenophonF oh wow great!
14:56 xmj so, update your portstree
14:56 XenophonF haha
14:56 xmj XenophonF: http://www.freshports.org/sysutils/py-salt
14:56 terminalmage huh, I guess 2014.1.7 is not marked as the stable release yet in pypi
14:57 XenophonF xmj: I am about 300 ports in on a 700-port bulk build
14:57 XenophonF now i'm going to have to restart all that :0
14:57 terminalmage I think we'll probably announce officially on Monday
14:58 stevednd terminalmage: what's the word on helium? Is there a date yet, or an RC?
14:58 terminalmage stevednd: the RC is really close
14:58 ntropy XenophonF: you're using poudriere for that?  it's smart enough to only build stuff that it didn't build previously :)
14:59 xmj XenophonF: thats why you always svn up (or portsnap or poudriere ports -u) before :_D
14:59 XenophonF yeah I'm using poudriere but I skipped the ports update thinking it was close enough (synced last weekend)
15:00 anuvrat joined #salt
15:00 stevednd terminalmage: thanks. do you know if anything has been found out about https://github.com/saltstack/salt/issues/13873 yanatan16's comment at the end is interesting
15:01 bhosmer joined #salt
15:02 terminalmage stevednd: no, I was going to take a look at that today or Saturday
15:02 terminalmage plan is to get a fix in, before the RC is cut for the new feature release
15:02 nyx joined #salt
15:03 terminalmage my guess is that there were changes to how file caching works, which did not take local files into account
15:03 terminalmage so the fix will also include an integration test to prevent future regressions
15:04 terminalmage stevednd: I have a flight back home today, and the fact that this deals with local files means I can easily hack on it on the flight
15:04 terminalmage :)
15:05 stevednd terminalmage: cool, thanks
15:05 terminalmage np
15:05 terminalmage I know this worked in the past though, because I've used this feature before
15:06 ipmb joined #salt
15:06 terminalmage stevednd: if the fix doesn't make the RC, we should still be able to get it into the final release
15:07 stevednd how long do the RCs typically last?
15:07 terminalmage depends
15:07 terminalmage couple weeks maybe
15:08 manfred 2014.1 lasted about 3 weeks
15:08 stevednd terminalmage:  Do you know what this issue might entail? https://github.com/saltstack/salt/issues/13174 it seems like it should be easy, but I don't know where to make the changes for it.
15:11 vejdmn joined #salt
15:12 kaptk2 joined #salt
15:12 rallytime joined #salt
15:13 geekmush joined #salt
15:14 Guest18473 what is the recommended way to upgrade a salt-master that was bootstrapped via salt-cloud?
15:15 Guest18473 it's an ubuntu machine, and I checked the salt-master package installation state via 'dpkg -s salt-master' and see that it's not installed
15:15 manfred Guest18473: salt-cloud just used salt-bootstrap, if it didn't use pip, then just apt-get update; apt-get install salt-master
15:16 timoguin Guest18473: did you pass the bootstrap script any special options? because it'd use apt with the ppa by default
15:16 Guest18473 manfred: ok it looks like it used pip
15:16 manfred pip install --upgrade salt
15:16 Guest18473 salt==2014.1.0
15:17 Guest18473 all the minions are in similar boat - so to upgrade them I do the same?
15:19 manfred salt \* pip.install salt upgrade=True
15:19 Guest18473 manfred: thanks a bunch
15:22 Dattas joined #salt
15:22 hobakill XenophonF, most of my problems are stemming from selinux being enabled. i have to have it on so i'm working around those restrictions. salt-ssh still doesn't work but ... oh well...
15:23 pdayton joined #salt
15:24 econnell1 joined #salt
15:25 rojem joined #salt
15:25 ndrei joined #salt
15:26 anuvrat joined #salt
15:28 quickdry21 joined #salt
15:30 ajolo joined #salt
15:30 ramishra joined #salt
15:34 mway joined #salt
15:35 XenophonF hobakill: what kind of selinux exceptions are you seeing?  and which Linux distro?
15:36 mway do reactions have access to salt modules? e.g., if I wanted to do some peering based on grains so I could conditionally target minions in the reaction, is that possible?
15:36 hobakill XenophonF, Centos 6.5.
15:36 manfred mway: yes
15:37 yidhra joined #salt
15:37 manfred mway: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html#salt.states.module.run
15:37 mway cool, just checking. master -ldebug is showing failure to parse the reaction w/o other info, just making sure it wasn't something dumb before I went too far into it
15:37 mway danke
15:37 XenophonF hobakill: good - i have a centos 6.5 instance here.  let me switch selinux to enforcing and see what breaks.
15:37 hobakill XenophonF, ok.
15:37 manfred mway: you have to use the module.run state to do it
15:38 manfred mway: well, so
15:38 hobakill type=USER_LOGIN msg=audit(1405093006.267:1863): user pid=53494 uid=0 auid=264251256 ses=81 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct="root" exe="/usr/sbin/sshd" hostname=? addr=10.12.221.77 terminal=ssh res=failed'
15:38 manfred mway: you actually straight reference the states, then in those you can use modules
15:38 mway oh, I can't explicitly do something like salt['publish.publish'](...) in the reaction?
15:38 manfred inside jinja?
15:39 mway yeah
15:39 hobakill ssh root@ip.ad.dr.ess works fine using key-based auth directly from the salt master
15:39 manfred it will be run on the master if it can be done
15:39 manfred mway: i am not sure that you can do that though
15:39 mway yeah, which is fine, just needs to do some peering without mine cache. but not 100% sure it's actually possible yet, so digging on that
15:39 XenophonF hobakill, my instance already has SELINUX=enforcing and SELINUXTYPE=targeted
15:39 XenophonF is that the same configuration as you?
15:40 manfred mway: what are you peering?
15:40 manfred gluster?
15:40 hobakill same XenophonF  yeah
15:40 VictorLin joined #salt
15:40 XenophonF huh
15:41 manfred mway: this may be relevant to your interests once it gets implemented https://github.com/saltstack/salt/issues/14074
15:41 XenophonF let me create a new CentOS 6.5 instance real quick
15:41 XenophonF just a sec
15:42 elfixit joined #salt
15:42 bhosmer_ joined #salt
15:42 mway no, this is just a basic muxing/proxy test, e.g. having a new minion opt in to being part of a pool, but in a way that grains are used to decide what pool/etc
15:43 mway I may just be overcomplicating it, unsure if there's a better way to handle arbitrary pooling
15:43 manfred ahh
15:43 manfred mway: so
15:43 manfred mway: here is what I would do
15:43 manfred wait for helium
15:43 mway :p
15:43 manfred mway: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.event.html#salt.states.event.fire_master
15:43 manfred use that
15:44 manfred throw it at the end of a highstate
15:44 manfred when the minion does it's first highstate, throw that event if it changes it's pool
15:44 manfred and then have a reactor that just updates everything else, matching on Grains to tell them to go looking for the new pool member
15:45 manfred that is how I have been looking at doing a couple things
15:45 mway cool, that looks promising - the grains thing is the catch though, I'm not matching on grains, I'm reading grains to then execute the appropriate states
15:45 bhosmer joined #salt
15:45 manfred right, but you could just do that in the top file?
15:47 rojem joined #salt
15:47 hobakill XenophonF, the genesis of all of this is i'd like to be able to communicate to all the minions over ssh as well as salt w/o having to do something like ansible. i've found something rather annoying. if i bounce my salt master server, none of the minions seems to be able to communicate with it once it's back up UNLESS i restart the salt-minion service on each box.
15:47 mway manfred: yeah, I could do that. going to tinker a bit and see how it works out - thanks for the input
15:48 manfred np
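A sketch of the flow manfred outlines, using the event.fire_master state that lands in Helium plus a reactor on the master; the tag, grain, and SLS paths here are illustrative only:

    # end of the minion's highstate -- announce the pool it just joined
    announce_pool:
      event.fire_master:
        - name: 'pools/joined'
        - data:
            pool: web

    # /etc/salt/master -- route that tag to a reactor SLS
    reactor:
      - 'pools/joined':
        - /srv/reactor/pool_update.sls

    # /srv/reactor/pool_update.sls -- tell the existing pool members
    # (targeted by grain, as discussed above) to pick up the new member
    update_pool_members:
      local.state.sls:
        - tgt: 'pool:web'
        - expr_form: grain
        - arg:
          - pools.sync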
15:49 AdamSewell joined #salt
15:49 bmcorser babilen: about 300MB?
15:50 bmcorser I wasn't sure if this is normal for a salt master
15:50 XenophonF hobakill: that's quite odd and not my experience
15:50 bmcorser and I should just not be a cheapskate and buy a bigger box
15:50 XenophonF i can restart my master server, and minions will reconnect without any issue
15:50 XenophonF hobakill: are you running the latest version of salt?
15:50 kballou joined #salt
15:53 hobakill XenophonF, AFAIK yeah - getting stuff from EPEL and i BELIEVE it's the most recent.  2014.1.5
15:53 troyready joined #salt
15:53 shiin joined #salt
15:54 babilen bmcorser:
15:55 babilen bmcorser: I have a salt-master with 4G for 300+ minions that performs well and another with 2G for around 180
15:55 higgs001 joined #salt
15:55 XenophonF hobakill: good, i've got pretty much the same setup here
15:55 babilen bmcorser: The former has 4 cores and the latter 1 and the first is definitely snappier. I'd say that 300M for 80 minions is not enough.
15:57 kballou joined #salt
15:57 tligda joined #salt
15:59 XenophonF hobakill: are you using salt-ssh to bootstrap the minion install?
15:59 hobakill XenophonF, yeah i'm not sure how to show this to you without bombing the channel with code.
15:59 XenophonF use paste.debian.net
16:00 manfred hobakill: curl -F 'f:1=<-' ix.io < /path/to/file
16:00 manfred or
16:00 manfred just curl -F 'f:1=<-' ix.io
16:00 manfred then past it into the buffer
16:00 manfred and hit ^d to EOF
16:00 XenophonF oh that is so slick
16:00 hobakill XenophonF,  i'm using a bootstrap script i wrote to install salt from a CentOS 6.5 template in vmware
16:03 Kenzor joined #salt
16:03 forrest joined #salt
16:06 XenophonF hobakill: that sounds really interesting - I'm attempting the same only with Hyper-V
16:06 hobakill manfred, curl: (52) Empty reply from server
16:06 XenophonF although my approach is to include EPEL and the Salt minion install in anaconda somehow
16:07 manfred hobakill: your file might be too long
16:07 manfred it has a limit of 10000 characters
16:07 manfred try sprunge.us
16:07 manfred same thing
16:07 manfred curl -F 'sprunge=<-' sprunge.us < /pth/to/file
16:08 manfred otherwise you will probably need paste.debian.net
16:08 hobakill i tried the paste into buffer trick manfred
16:08 manfred how long is your paste?
16:08 hobakill XenophonF, my script is on github but it's very sloppy
16:08 hobakill manfred, tiny
16:09 XenophonF hobakill: I don't judge.  :)
16:10 manfred hobakill: /shrug http://paste.gtmanfred.com/FJZ0Y/
16:10 hobakill XenophonF, it works for our environment (POC at this point) but could use some work: https://github.com/hobakill/salt/blob/master/rc.local.salt
16:10 hobakill manfred, yeah that looks ideal :)
16:11 manfred hobakill: i just host my own at this point http://git.server-speed.net/users/flo/filebin/ , nice and simple and has a nice client
16:11 intellix left #salt
16:11 manfred hobakill: there is also http://ix.io/client
16:11 hobakill thanks. :)
16:12 manfred it supports usernames too so you could delete your pastes
16:13 manfred there is also an inline form that just does data:text/html POST
16:13 manfred data:text/html,<form action="http://sprunge.us" method="POST"><textarea name="sprunge" cols="80" rows="24"></textarea><br><button type="submit">sprunge</button></form>
16:13 KyleG joined #salt
16:13 KyleG joined #salt
16:13 XenophonF hobakill: this script looks fine - it should do everything you need
16:13 meteorfox joined #salt
16:13 hobakill sure does. messy tho! :)
16:14 XenophonF why do you need salt-ssh if you have the minion installed?
16:14 hobakill XenophonF, it goes back to the not being able to communicate with the minions if the master is bounced error
16:14 XenophonF hm
16:14 manfred hobakill: MAC=$(</sys/class/net/eth0/address)
16:15 XenophonF so the salt master...it's also CentOS 6.5?
16:15 manfred also, should use [[ ]] for your comparisons, it adds extra protection for spaces thingies
16:15 manfred esp. since you are using bash
16:15 hobakill XenophonF, it is... tho to be fair i try pinging almost immediately after the master comes back.... i JUST discovered if i wait a bit of time it's able to ping again.
16:16 Ryan_Lane joined #salt
16:16 hobakill manfred, this is exactly why i put this on github so i could get suggestions. i'm newish to scripting so any help i can get is really appreciated.
16:16 XenophonF hobakill: I assume that the Salt master has static IP configs and DNS entries, or at least DHCP reservations, etc.?
16:17 manfred $(</path/to/file) is probably one of my favorite things that very few people know
16:17 XenophonF bah, bash scripts - real men script using csh!
16:17 XenophonF ;)
16:17 hobakill XenophonF, everything is static in our enivro
16:18 XenophonF hobakill: tbh my minion setup/config is almost identical to yours
16:18 XenophonF except it's run manually
16:18 quickdry21_ joined #salt
16:18 manfred hobakill: http://paste.gtmanfred.com/yEBC/
16:18 XenophonF i also don't use the scheduler to run state.highstate automatically
16:18 matthiaswahl joined #salt
16:19 XenophonF and i set the FQDN of the Salt master
16:19 hobakill XenophonF, we're trying to use salt as a drift management system... or at least reduce drift
16:19 * Heartsbane blames whiteinge.
16:19 XenophonF oh sure, that makes perfect sense
16:19 hobakill manfred, yeah that's the most ghetto part of my script for sure.
16:19 hobakill (IMHO)
16:20 manfred those are the things i would change, otherwise it looks fine
16:20 manfred nothing grossly wrong
16:20 hobakill XenophonF, i'm infrastrucure and we have an over-zealous devops team. ;)
16:20 manfred just a few stylistic best-practice thingies
16:20 joehillen joined #salt
16:20 manfred hobakill: http://mywiki.wooledge.org/BashFAQ
16:21 hobakill thx again manfred
16:21 manfred np
16:22 XenophonF my other difference is that my salt master is running on FreeBSD
16:22 XenophonF dunno if that matters
16:22 XenophonF hobakill, do your minions eventually reconnect with the master?
16:22 XenophonF and do you see anything unusual in the master or minion logs?
16:23 manfred hobakill: is it 2014.1.5?
16:23 hobakill XenophonF, yeah they do eventually reconnect. i was just being impatient i guess.
16:23 hobakill manfred, it is
16:23 ramishra joined #salt
16:23 manfred hobakill: there are numerous bugs
16:23 manfred any chance that you can test 2014.1.7 that should be in epel-testing?
16:24 manfred hrm
16:24 manfred nevermind...
16:24 manfred hrm
16:24 XenophonF hobakill: how long is eventually?
16:25 hobakill i haven't timed it but .... under the 15 minutes check in period i think
16:26 XenophonF hm
16:27 XenophonF I wonder if there's a setting on the minion that specifies a keep-alive interval or something.
16:27 XenophonF hang on am rtfm/rtfs-ing
16:27 XenophonF there's retry_dns, defaults to 30 seconds
16:28 XenophonF acceptance_wait_time, defaults to 10 seconds with infinte retries
16:28 ramishra_ joined #salt
16:29 XenophonF random_reauth_delay and auth_timeout, but you aren't having problems with authentication
16:29 manfred that is for accepting the key, not authenticating iirc
16:29 XenophonF oh here we go
16:29 XenophonF the ZeroMQ recon_* settings
16:29 XenophonF so, hobakill
16:30 XenophonF it looks like with the default settings, the minion will try to reconnect to the master between 1-60 seconds after losing contact
16:30 XenophonF with the chosen interval doubling after each connection attempt
16:30 XenophonF this is all in the minion config file btw
16:31 hobakill XenophonF, yeah i'm seeing that now too.
16:31 hobakill ok so that's more or less settled. the ssh thing still annoys me but i dont' want to take up more of your time today.
16:31 XenophonF now that all depends on how fast the salt master starts
16:31 XenophonF :)
16:31 geekmush joined #salt
16:32 hobakill it's about 20 seconds start to finish at most
16:34 hobakill i really love the idea of salt-ssh so i can ditch ansible
16:34 XenophonF so why do you need ansible when you have salt installed?
16:34 hobakill ansible is old tech from when devops was going berserk with trying new things. now we have taken over to try to streamline this mess. :)
16:35 XenophonF if you need remote execution, can't you just use "salt minion cmd.run blah"?
16:35 hobakill XenophonF, correct assuming you can connect to it from the salt master. :)
16:35 XenophonF haha
16:35 XenophonF true
16:35 hobakill XenophonF, so to answer your ? - we don't need anisble anymore.
16:35 XenophonF gotcha
16:35 kermit joined #salt
16:36 hobakill my stomach is calling for lunch. thanks XenophonF and manfred . maybe we'll hit the ssh crap on monday... ;)
16:36 anuvrat joined #salt
16:37 XenophonF sounds good! catch you later
16:37 mateoconfeugo joined #salt
16:38 ipalreadytaken joined #salt
16:40 anotherZero ugh... i just can't get my head wrapped around anything beyond simple tasks with salt...
16:40 anotherZero perhaps if i sacrifice a virgin to the sysadmin gods...
16:41 forrest anotherZero, what are you having trouble with?
16:42 anotherZero i want to have a flexible scalable structure, but the pillar/formula stuff is just killing me
16:42 forrest let's talk it through then, what is confusing you about them?
16:42 anotherZero if only i could be more specific.  it's like i'm going through the walkthoughs and when I get beyond simple states it's all foggy
16:43 forrest ok, so let's start there then, show me an example you find confusing
16:43 anotherZero give me a sec to get a specific example together
16:44 * anotherZero closes the puppetlabs website tab in browser
16:44 ixokai joined #salt
16:46 XenophonF anotherZero: i have some formulas with pillar that i can share and explain, as well
16:46 anotherZero forrest: i'll hit you up later with some specifics.  work needs my attention right now :)
16:46 XenophonF stuff that works on both Linux and FreeBSD
16:47 Eugene forrest - how'd your solo drinkfest go?
16:47 anotherZero thanks XenophonF.  I'll check back with you later on that
16:47 XenophonF ok
16:47 forrest anotherZero, ok
16:47 forrest Eugene, No drinking other than water, just sat at my desk and worked on some issues
16:47 Eugene That kinda ruins it
16:48 AdamSewell joined #salt
16:48 forrest I don't really like to drink
16:48 forrest tastes awful
16:48 UtahDave joined #salt
16:48 Eugene Stop drinking piss :v
16:48 forrest I didn't enjoy the number 1 beer in Germany that they don't export
16:49 forrest it was a dark beer as well, which is one of the only types that is 'ok' to me
16:49 anotherZero which beer is this?
16:50 forrest I don't remember the name, this was almost 2 years ago
16:50 anotherZero ah, ok
16:50 forrest I just remember they don't export it because they can't even make enough for German citizens
16:51 anotherZero sounds awesome
16:51 veb anyone want a FB page 354k organic likes... query me!
16:51 veb srs
16:51 veb had a damn car crash
16:51 forrest veb could you please take that elsewhere?
16:51 veb motoorway then the car spun 360 into  a wall.
16:51 veb uh. fine.
16:52 veb whatever.
16:52 forrest Thanks
16:52 veb no need to thank me for being  a jerk obviously.
16:52 anotherZero I really want to get out to Germany to try some real, unpasteurized beer
16:52 veb but beer, that's on topic.
16:53 anotherZero we don't mind off topic veb
16:53 forrest Well, we should try to avoid it really
16:54 veb why, exactly?
16:54 veb it's like a culture at a company; makes it shit.
16:55 forrest because the chat is logged, makes it easier to search for people trying to review historical discussions
16:55 veb right.
16:56 ndrei joined #salt
16:59 nyx joined #salt
17:01 chrisjones joined #salt
17:02 schimmy joined #salt
17:03 ipalreadytaken joined #salt
17:05 schimmy1 joined #salt
17:05 mgw joined #salt
17:09 bhosmer joined #salt
17:11 ipalreadytaken joined #salt
17:11 smcquay joined #salt
17:12 ipalreadytaken joined #salt
17:17 tkharju joined #salt
17:17 XenophonF so i just discovered the "context" parameter to file.managed
17:18 XenophonF and my question is: can I extend a file.managed state and change the provided context?
17:18 XenophonF i'm thinking client and server configs, specifically ntp
17:19 XenophonF where ntpd/server.sls would extend the ntpd:file.managed state in ntp/client.sls
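One possible answer, sketched: an SLS that includes ntp/client.sls can use extend to override the arguments of the original state, context included (file names and context keys below are made up):

    # ntp/client.sls (sketch)
    ntpd:
      file.managed:
        - name: /etc/ntp.conf
        - source: salt://ntp/files/ntp.conf
        - template: jinja
        - context:
            role: client

    # ntp/server.sls (sketch) -- reuse the client state but swap the context
    include:
      - ntp.client

    extend:
      ntpd:
        file.managed:
          - context:
              role: server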
17:20 AdamSewell joined #salt
17:25 azylman joined #salt
17:25 rojem joined #salt
17:26 CyanB 4
17:26 mgw joined #salt
17:30 notpeter_ joined #salt
17:33 oz_akan__ joined #salt
17:34 repl1can1 joined #salt
17:35 sashka_u1 joined #salt
17:35 etw_ joined #salt
17:35 talwai joined #salt
17:36 talwai Anyone successfully been able to serve pillars and sls files from the same git repo?
17:36 seblu42 joined #salt
17:36 punal joined #salt
17:36 mik3_ joined #salt
17:36 peno_ joined #salt
17:36 talwai If so, what directory structure did you use, and what did you point file_roots and pillar_roots to?
17:36 nyx_ joined #salt
17:36 KyleG talwai: I serve modules and sls files from the same git repo using symlinks
17:37 KyleG so like /usr/local/etc/salt/states/_modules is a symlink to /path/to/git/repo/modules
17:37 Corey_ joined #salt
17:37 KyleG and the states folder itself is /path/to/git/repo/states
17:38 jacksontj_ joined #salt
17:38 cwright_ joined #salt
17:38 mariusv_ joined #salt
17:38 talwai KyleG: Thanks for the info. I should have been more clear though, I meant serving pillars and states from the same repo using the gitfs backend
17:38 rogst_ joined #salt
17:38 KyleG o
17:39 snoozer joined #salt
17:39 KyleG yeah I tried gitfs I wasn't feeling it
17:39 Vye_ joined #salt
17:39 forrest talwai, did you already look at http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#using-git-as-an-external-pillar-source ?
17:39 ronc_ joined #salt
17:40 forrest honestly having both pillar and states in the same repo sounds bad to me, but it might work
17:40 robins joined #salt
17:40 dstokes_ joined #salt
17:40 ajolo_ joined #salt
17:40 lahwran_ joined #salt
17:40 aw110f joined #salt
17:40 baffle joined #salt
17:41 talwai forrest: Yes I did. But it seems like, prior to the helium release, there's no way to specify a root for states AND a root for pillars if they're being served from the same repo.
17:41 mattikus joined #salt
17:41 bhosmer joined #salt
17:41 talwai forrest: Could you elaborate on why you think pillars + states in the same repo is a bad idea?
17:42 1JTAAUZFE joined #salt
17:42 forrest I just like to keep them separate for gitfs
17:42 kballou joined #salt
17:42 MTecknology Apparently   salt-call event.fire_master "{'host': 'salt', 'action': 'gitfs_sync'}" 'minion_request'   isn't making it to the master? ... I see an authentication event, but that's it. eventlisten.py isn't showing the event anyway... :(
17:42 forrest *shrug*
17:42 forrest talwai, seems messy to me to have pillar data in the same repo
17:43 MTecknology My states repo is public and my pillar repo is private.
17:43 forrest another good reason
17:43 Voziv joined #salt
17:43 monokrome joined #salt
17:44 platforms joined #salt
17:44 |rt| joined #salt
17:44 MTecknology This feels more like a bug than me doing something wrong... :S
17:44 njs126 joined #salt
17:45 schristensen joined #salt
17:45 fxhp joined #salt
17:45 djaykay joined #salt
17:47 keyvan joined #salt
17:47 blast_hardcheese joined #salt
17:47 eclectic joined #salt
17:47 talwai forrest: I feel like you were about to say something but it got lost in the etherweb
17:47 forrest nope
17:48 forrest was just agreeing with MTecknology
17:48 talwai gotcha
17:48 zain_ joined #salt
17:48 UtahDave joined #salt
17:48 eliasp keeping states + pillars separate as well just for the same reason as MTecknology … states are more or less public information, while pillars are secret
17:49 mpanetta joined #salt
17:49 rojem joined #salt
17:52 talwai Well for my purposes states and pillars will both remain private. In development right now, and pillar structure is changing as rapidly as states themselves, so it makes sense to have them versioned as a collective.
17:53 VictorLin joined #salt
17:53 talwai Though again it looks like there's no way to do exactly what I want before Helium comes out, so I might just give up on gitfs
17:55 timoguin is there some new killer feature with gitfs coming in Helium?
17:55 InAnimaTe|whosto joined #salt
17:56 scoates joined #salt
17:56 talwai timoguin: specifically the ability to specify a subdirectory as the root for an external pillar source: http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html#using-git-as-an-external-pillar-source
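Roughly the two-repo shape forrest and MTecknology describe, for comparison (repository URLs are placeholders). The missing piece talwai wants, a per-repo root so states and pillar can live in subdirectories of one repo, is the Helium feature linked above and is not shown here:

    # /etc/salt/master (hypothetical repos)
    fileserver_backend:
      - git

    gitfs_remotes:
      - git://github.com/example/salt-states.git

    ext_pillar:
      - git: master git+ssh://git@github.com/example/salt-pillar.git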
17:57 wm-bot4 joined #salt
17:58 bhosmer joined #salt
17:58 bhosmer joined #salt
17:59 tempspace joined #salt
18:01 timoguin talwai: ah thanks
18:02 Damoun joined #salt
18:03 ajprog_laptop1 joined #salt
18:04 talwai irc noob here but what do nicknames in red mean?
18:04 XenophonF that may be specific to your client, talwai
18:04 timoguin talwai: not sure. it's different in different clients
18:04 eliasp talwai: depens on your client
18:04 timoguin super response!
18:04 eliasp ;)
18:05 talwai freenode
18:05 XenophonF we are on the ball :)
18:05 talwai any insight?
18:05 mgarfias joined #salt
18:05 XenophonF the web client?
18:05 talwai yup
18:05 XenophonF hm, maybe they're ops or staff or something
18:05 XenophonF am I in red?
18:05 XenophonF i'm _not_ an op
18:05 eliasp the only OP right now is basepi
18:05 jcockhren and utahcon
18:05 talwai XenophonF: well you were for your 11:04 post
18:05 jcockhren I mean UtahDave
18:05 eliasp probably +v?
18:05 XenophonF LOL
18:06 XenophonF yeah possibly
18:06 XenophonF oh
18:06 XenophonF talwai: am I in red now?
18:06 basepi might be registered vs unregistered or something
18:06 timoguin i want to be red. :(
18:07 XenophonF i think it's if we use their name in our reply
18:08 talwai XenophonF: yup i think you hit the nail on the head
18:08 XenophonF talwai: I think if I reply to you by using your name in the message, Freenode's web client highlights the reply so you see it
18:10 anotherZero ahhh... IRC web clients...
18:14 beando joined #salt
18:14 mgarfias hey, whats the status of Helium?  I'm putting off some work in anticipation of it being in RC soon.
18:14 Kenzor joined #salt
18:15 ckao joined #salt
18:16 murpium joined #salt
18:18 forrest manfred, nice updates with 14117
18:19 anotherZero what's the skinny with helium?
18:20 anotherZero what's making you wait for it mgarfias
18:20 xet7 joined #salt
18:20 anuvrat joined #salt
18:20 mgarfias i thought it wasn't ready yet
18:21 mgarfias is it in a usable state?  if so, then i'll start using it
18:21 anotherZero i guess i'm asking what you are excited about for that release.  i don't know what's coming
18:21 ckao joined #salt
18:22 aw110f joined #salt
18:22 mgarfias o
18:22 mgarfias the ability to do more with AWS than just instance management
18:22 anotherZero ah
18:22 manfred forrest: thanks
18:29 UtahDave mgarfias: we've been working feverishly to get helium RC out. We're doing last minute testing, small bug fixes and polishing
18:30 ilako joined #salt
18:30 mapu joined #salt
18:31 mgarfias oh i figured
18:31 mgarfias a few weeks ago i heard "next week" so i thought i'd ask
18:31 mgarfias is there a way to use it now?
18:31 mgarfias we're still not in production, so if it borks on me its not the end of the world
18:35 MTecknology mgarfias: grab latest head and build it
18:36 smcquay joined #salt
18:38 Ryan_Lane Is there any jinja magic to use in templates to say "this file is managed by salt from this module in this directory"?
18:39 davet1 joined #salt
18:40 forrest UtahDave, Friday deployment? THE HORROR!
18:40 mgarfias can i bootstrap helium?
18:40 manfred mgarfias: yes,
18:40 forrest mgarfias, yea use the tags
18:40 mgarfias ok, cool
18:40 manfred helium hasn't been tagged
18:40 manfred curl -L https://bootstrap.saltstack.com | sh -s -- git develop
18:41 forrest manfred, with the risky plays
18:41 manfred forrest: have you seen my current deploy script?
18:41 forrest nope
18:41 manfred forrest: http://ix.io/ddf
18:42 * forrest vomits
18:42 manfred :P
18:42 forrest disgusting
18:42 manfred like that custom compile libsodium?
18:42 forrest my god, it's like the ghetto
18:42 forrest Are you compiling this on an old Solaris 4 box too?
18:42 manfred Ryan_Lane: you could do it with the internal template variables like {{ sls }} and things to dynamically change it, but there isn't anything built in afaik
18:43 manfred forrest: 14.04
18:43 forrest manfred, woo 14.04
18:43 Ryan_Lane if there's any level of introspection usable it may work
18:43 Ryan_Lane let me see what's in {{ sls }}
18:43 smcquay out of curiosity any way to get --output=json from salt-run?  # /me is likely not looking in the right places (salt-run -h doesn't have that option)
18:43 manfred Ryan_Lane: should be the sls file name iirc
18:44 thedodd joined #salt
18:44 manfred Ryan_Lane: it might not be sls, but there is something that says what file you are in
18:44 manfred someone just added troubleshooting/debugging for jinja rules
18:44 Ryan_Lane is there any list of these arounf?
18:44 Ryan_Lane around*
18:44 manfred jinja variables
18:44 manfred looking for it
18:44 forrest Not that I know of Ryan_Lane manfred
18:45 forrest unless it's really well hidden
18:45 manfred Ryan_Lane: https://github.com/saltstack/salt/pull/12832
18:45 mapu having some difficulty untarring a file. using this for guide: http://docs.saltstack.com/en/latest/ref/states/all/salt.states.archive.html
18:45 koyd joined #salt
18:46 forrest show_full_context, of course how could I have forgotten! :P
18:46 mapu I have this in my sls file:
18:46 TheThing joined #salt
18:46 mapu http://pastebin.com/TZCwN33D
18:47 forrest mapu, what error are you getting?
18:48 forrest also I'm pretty sure you don't need to pass those tar options
18:48 mapu it;s here:
18:48 mapu http://pastebin.com/2KDWWwfq
18:48 forrest if you look at the tar example further down, they aren't doing so.
18:48 manfred that is something broken in the state...
18:49 manfred mapu: what version of salt are you on?
18:49 mapu the file gets copied, the dir gets made (name)
18:49 forrest manfred, +1
18:49 manfred it is an error in the actual salt/states/archive.py file
18:49 mapu 2014.1.0
18:49 forrest upgrade
18:49 manfred ^^
18:49 forrest 2014.1.0 has bugs, go to at least 2014.1.1
18:49 forrest you shouldn't need to pass those options as well
18:50 matthiaswahl joined #salt
18:50 Ryan_Lane hm. we need better docs for show_full_context
18:50 Ryan_Lane I have no clue how I'm supposed to use this :)
18:50 forrest we need better docs for allof that stuff
18:50 forrest *all of
18:50 forrest explicit list would be good
18:50 forrest with exampels
18:50 forrest *examples
18:51 mapu Ok- first I’ll remove the options, then upgrade
18:51 Ryan_Lane I used show_full_context() and it dumped a massive odict
18:51 forrest Ryan_Lane, hah
18:51 Ryan_Lane with all the pillars and grains
18:51 Ryan_Lane and other stuff
18:51 forrest mapu, ok, removing the options won't help in this case unless something crazy is going on, upgrade should resolve it though.
18:51 mapu heh -removing the tar options - Comment: tar archive need argument tar_options
18:51 mapu yeah- now to upgrade :)
18:52 manfred Ryan_Lane: yeah, it dumps every single thing you could put in a jinja thingy
18:53 manfred Ryan_Lane: how exactly did you use show_full_context? i thought we would just drop {{ show_full_context }} in a file.managed
18:53 Ryan_Lane that doesn't work
18:53 Ryan_Lane {{ show_full_context() }} is needed
18:53 manfred ahh
18:53 Ryan_Lane since it's a function
18:53 al joined #salt
18:54 Ryan_Lane {{ show_full_context()['source'] }} <-- that gives me back the file
18:55 manfred nice
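Put together, a "managed by salt" banner along the lines Ryan_Lane was after could look like this in a template rendered by file.managed; treat it as a sketch, since the keys available from the context dump may vary between states and Salt versions:

    {# top of a jinja-templated file served via file.managed #}
    # This file is managed by Salt -- do not edit by hand.
    # Source: {{ show_full_context()['source'] }}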
18:56 XenophonF does salt have a way to manage hard links on unix?
18:56 XenophonF or links and junctions on windows?
18:56 lz-dylan Is there a way to have 'watch' glob the contents of a directory? ie. can I do service.running: watch: file.glob: /etc/package/config.d/*.conf
18:57 forrest lz-dylan, no
18:57 XenophonF i'm scanning through docs.../salt.states.file.html and only see stuff about symlinks, so maybe that's my answer
18:57 manfred that isn't what watch does
18:57 manfred lz-dylan: watch is just for watching other states, not actual files on the server
18:58 lz-dylan manfred: got it, thanks! I'll at least stop pulling hair over that one :)
18:58 forrest lz-dylan, you could always manage all the conf files with salt :P
18:59 lz-dylan forrest: I'm effectively going to end up doing that; I just would like to (in this case) have my kibana state drop a new conf file into /etc/nginx/sites-enabled and have my nginx state be smart enough to reload the service
18:59 forrest lz-dylan, so why can't it just watch the file inside of sites-enabled?
18:59 manfred yeah, you need to manage every file then
18:59 manfred lz-dylan: what os?
19:00 InAnimaTe joined #salt
19:00 lz-dylan ubuntu. I almost said "at the moment" because I'd like to abstract this enough to share, but for my purposes I don't intend to move away.
19:00 forrest I mean you could always put the conf file names into pillar, then loop over them to make it a little less verbose.
19:01 manfred lz-dylan: so... unfortunately i don't remember the way to do it with ubuntu/upstart... maybe icron?  i would just do it with a systemd.path unit that can glob watch directories and perform commands like sighup to nginx or just reload the nginx service
19:01 xet7_ joined #salt
19:01 manfred sorry, incrond
19:01 rojem joined #salt
19:02 manfred http://linux.die.net/man/8/incrond
19:02 manfred http://linux.die.net/man/5/incrontab
19:02 lz-dylan manfred: nifty. It does *feel* like something I should be handling in my states but I like that approach too.
19:03 manfred i would just do it in your states since you said you will be managing all the .conf files in salt as well it sounds like?
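A sketch of the "just manage the file and let the service react" approach forrest suggests, with watch_in used from the kibana side so the nginx state stays generic; all paths and sources are placeholders:

    # nginx/init.sls (hypothetical)
    nginx:
      pkg.installed: []
      service.running:
        - enable: True
        - require:
          - pkg: nginx

    # kibana/init.sls (hypothetical) -- the dropped vhost notifies nginx itself
    /etc/nginx/sites-enabled/kibana.conf:
      file.managed:
        - source: salt://kibana/files/kibana.conf
        - require:
          - pkg: nginx
        - watch_in:
          - service: nginx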
19:03 lz-dylan yeah, this is a clean deploy
19:04 lz-dylan from the instance point of view I don't think I've done anything that salt hasn't set except maybe poke at /etc/hosts
19:04 lz-dylan well and manually bootstrapped the minion
19:04 lz-dylan <-- (relative saltstack newbie)
19:04 beando joined #salt
19:04 manfred lz-dylan: http://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.saltify.html
19:04 manfred http://docs.saltstack.com/en/latest/topics/cloud/config.html#config-saltify
19:06 mapu Ok- I have upgraded. My master is 2014.1.7, my minion is 2014.1.5
19:06 mapu I am still getting the same error as I did before
19:06 lz-dylan overall context: I'm trying to deploy an elasticsearch+logstash+kibana setup that accepts incoming log data on redis and syslogd. since that's a bunch of moving parts, many of which aren't packaged well, and because I haven't spent *that* much time with Salt, some of my statefiles are very amateurish. Elasticsearch's APT GPG key pulled into a statefile directly, fr'example. There's a lot of cleanup.
19:06 mapu (and master has been restarted)
19:07 manfred mapu: one second
19:07 mapu sure
19:07 lz-dylan At the moment I'm just trying to get some distance between individual states, but they're so interdependent that some of them just won't run on their own, only through state.highstate
19:07 a1j so batching is not working anymore in 2014.1.6 . is it intended behavior?
19:08 lz-dylan manfred: and saltify looks *great*, will definitely end up using that. need to clean up my own mess first, though, I'm afraid :)
19:08 manfred yar
19:08 a1j ah https://github.com/saltstack/salt/issues/14046
19:08 vlcn anyone having issues with reactors in 2014.1.7?
19:09 vlcn IE, the reactor isn't triggering when it should
19:09 azylman_ joined #salt
19:10 a1j reactors, mine, scheduler never worked for me for some reason. (multimaster setup).
19:13 mapu I think I got it…
19:14 manfred mapu: get rid of the f
19:14 mapu yu
19:14 manfred it is already adding it, you just need to add the extra args
19:14 mapu yup- and got rid of the f
19:14 mapu just reread the docs on that.
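For anyone hitting the same thing, the working shape mapu ends up with is roughly the following (the archive path, target and hash are placeholders); per manfred, the state adds the f itself, so tar_options only carries the extra flags:

    /opt/myapp:
      archive.extracted:
        - source: salt://files/myapp.tar.gz
        - source_hash: md5=0123456789abcdef0123456789abcdef
        - archive_format: tar
        - tar_options: z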
19:17 vlcn I can see the correct tag by running eventlisten.py, but then the associated sate doesn't actually run
19:17 manfred vlcn: do you see in the logs that your reactor state rendered correctly?
19:18 manfred it should note if it rendered or failed to render
19:18 badon_ joined #salt
19:18 InAnimaTe joined #salt
19:19 vlcn manfred, it doesn't seem to attempting to render it at all from what I can tell
19:19 vlcn I'm running the master in debug mode
19:20 manfred can you share your config?
19:20 vlcn sure
19:20 manfred cool
19:22 Whissi joined #salt
19:23 beando joined #salt
19:23 lz-dylan so, how does environment scope work when you're calling state.sls (not highstate)? Are other states in (say) /srv/salt fair game for including?
19:24 manfred lz-dylan: state.sls <state> saltenv=dev
19:25 manfred pass it on the command line
19:25 manfred http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#salt.modules.state.sls
19:26 lz-dylan in this case: my /srv/salt has logstash/init.sls and elasticsearch/init.sls, among others. can I prepend logstash/init.sls with include: - elasticsearch and have it work? at the moment it bugs out with "No matching sls found for 'logstash' in env 'base'" (and runs clean without the include statement)
19:26 lz-dylan manfred: I get KeyError: 'dev'
19:26 manfred lz-dylan: cause you don't have a dev environment
19:27 vlcn manfred, https://gist.github.com/kelchm/fa7f748d41d916d6ba0a
19:27 vlcn this is probably more confusing than it should be because github won't let me use the full path as the file name
19:27 manfred vlcn: np, gimme a minute
19:27 lz-dylan no sir I don't. there's-----ohhhh. okay. huh. is there any sense in targeting getting these states to run separately on production, or is best practice to just assume you're running state.highstate on prod?
19:28 vlcn fwiw, this was working flawlessly for several months, the only thing that has changed is the version of salt I think
19:28 manfred lz-dylan: http://docs.saltstack.com/en/latest/ref/states/top.html
19:28 manfred lz-dylan: checkout the 3rd code block in that
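The kind of top file being pointed to looks roughly like this (the state names match the ones lz-dylan mentions, and the note about environments is why the earlier saltenv=dev call raised KeyError -- an environment has to exist before it can be targeted):

    # /srv/salt/top.sls (sketch)
    base:
      '*':
        - elasticsearch
        - logstash
        - kibana

    # a 'dev' environment would also need its own entry under file_roots
    # before `state.sls logstash saltenv=dev` can find anything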
19:30 manfred vlcn: this is awesome
19:30 vlcn manfred, thanks :)
19:30 vlcn it's a fairly simple solution to an otherwise pain-in-the-ass problem
19:31 jrb28 joined #salt
19:31 manfred vlcn: i don't... see anything that would have broken
19:31 vlcn the more I'm reading today... I'm wondering if this is yet another bug with 2014.1.5
19:32 manfred oh in 2014.1.5?
19:32 vlcn I fixed most of my issues with moving my master up to 2014.1.7
19:32 manfred i wouldn't be surprised if it is
19:32 vlcn but I still have a mess of minions that are stuck on 2014.1.5 for now
19:32 manfred there were people with reactor problems with .5 in the issue tracker
19:32 vlcn okay, that's probably what's going on then.
19:32 vlcn I just don't want to screw with manually installing RPMs on 30+ minions
19:33 lz-dylan manfred: okay, I understand that I can set up multiple named environments, split them by which state run in them, and specify that on the CLI. Does that affect the way things inherit with include:, or are you suggesting that I just package up states that depend on each other into a separate environment? (and you're obviously split for attention ATM, let me know if this is too distracting!)
19:33 manfred are you using syndic or multimaster?
19:33 FeatherKing joined #salt
19:33 manfred lz-dylan: it shouldn't you should still be able to include.
19:34 manfred you should be able to just use include for that
19:34 manfred vlcn: are you using syndic or multimaster?
19:35 manfred vlcn: https://github.com/saltstack/salt/issues/13879
19:35 vlcn vlcn, neither
19:35 manfred that is the one I remember being made
19:35 vlcn er
19:35 vlcn manfred, netiher
19:35 vlcn lol
19:35 manfred kk
19:35 lz-dylan manfred: I think something confusing happened and I'm not sure why. Now when I run state.sls logstash -- without having made changes -- it's perfectly happy finding logstash in base, and having separate issues with elasticsearch (which I'll deal with). Sorry for misdirection! I've already got base and top set up.
19:35 manfred no problem :)
19:36 lz-dylan manfred: blaming this on enormous latency syncing a Transmit.app-mounted EC2 store. Back to editing in the terminal instead of Sublime...
19:37 manfred vlcn: and you see the solusvm/addnode event going off in eventlistener.py ?
19:38 vlcn manfred, correct
19:38 manfred it... should work... one second. I have to get a 2014.1 master setup
19:39 vlcn manfred, testing with an updated minion here to see if that solves it
19:40 manfred that... is possibly the problem
19:40 surgex joined #salt
19:40 surgex hey all
19:40 surgex anyone get saltstack working on CentOS 7?
19:41 manfred surgex: it is packaged, and we are just waiting for it to get added to el7
19:42 rojem joined #salt
19:42 surgex thanks...in the mean time, is there another way I can install it?  like a git and then compile it myself?  sorry -- I am new to saltstack
19:42 manfred git://github.com/saltstack/salt.git
19:42 surgex thank you!
19:42 manfred surgex: https://bootstrap.saltstack.com/
19:44 vlcn manfred, unfortunately still seems to not be working after updating both minions (and the master) to 2014.1.7-3
19:45 manfred vlcn: kk, one second
19:45 manfred getting a vm up
19:47 vlcn manfred, debug output from the master
19:47 vlcn https://gist.github.com/kelchm/253334a0f69ade1bf8f5
19:47 aquinas joined #salt
19:47 FeatherKing i was wondering about using a runner to install the salt minion on many hosts via ssh
19:47 FeatherKing can i use a runner and the bootstrap via ssh and read the hosts from a file?
19:48 manfred FeatherKing: yes, but you would be better to just use salt-ssh \* state.sls
19:48 manfred i don't believe you can use runners via salt-ssh
19:48 bhosmer joined #salt
19:48 FeatherKing unless there is a better way to mass install the minion
19:49 FeatherKing not tied to ssh but it's the only real way i have in right now, other than vmware tools
19:49 manfred setup a state file that does it
19:49 manfred FeatherKing: have you seen saltify yet?
19:49 manfred FeatherKing: http://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.saltify.html
19:49 manfred FeatherKing: http://docs.saltstack.com/en/latest/topics/cloud/config.html#config-saltify
19:49 FeatherKing looking now
19:49 InAnimaTe joined #salt
19:49 vlcn manfred, I actually have to run to a meeting now.  IF you happen to turn anything up, can you drop me a PM?
19:50 manfred FeatherKing: it is specifically for running salt.utils.cloud.bootstrap() on the vm, which just uses the https://bootstrap.saltstack.com script to install
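A minimal saltify sketch along the lines of the linked docs; every hostname, credential and path below is a placeholder, and option names may differ slightly between releases:

    # provider config (e.g. /etc/salt/cloud.providers.d/saltify.conf)
    my-saltify-config:
      minion:
        master: salt.example.com
      provider: saltify

    # profile
    make-salty:
      provider: my-saltify-config

    # map file, used as: salt-cloud -m /path/to/mapfile
    make-salty:
      - web01.example.com:
          ssh_host: 10.0.0.11
          ssh_username: root
          key_filename: /root/.ssh/id_rsa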
19:50 manfred vlcn: yessir
19:50 vlcn manfred, thanks!  I appreciate the help digging into this.
19:50 manfred np
19:51 ndrei joined #salt
19:52 roolo joined #salt
19:54 matthiaswahl joined #salt
19:56 hopthrisC left #salt
19:59 lz-dylan Helium is still unstable/unpackaged, right? Not sure how names map to version numbers
20:00 babilen lz-dylan: Helium will be the next release (just names from the periodic table) and releases are numbered as YEAR.MONTH.MINOR
20:00 lz-dylan (I'd love to hop on the TLS LS module.
20:00 lz-dylan )
20:00 TheThing It's pretty simple: Every single person on this chat is waiting on helium like it's the second coming of Jesus. Once it's out, you should see the floor get seriously... sticky <_<
20:01 lz-dylan and wow I cannot type. -- TheThing: ooh, there'll be free soda?
20:01 XenophonF LOL
20:02 * Eugene prepares his hose
20:03 Eugene Wait
20:03 rojem joined #salt
20:04 TheThing lz-dylan: Not that kind of sticky but... sure <_<
20:04 manfred lz-dylan: when they are an actual release, then there won't be a code name
20:04 XenophonF it's going to be so chill when Lithium comes out
20:05 forrest I don't think that's the case, but ok
20:05 mpanetta XenophonF: heh
20:05 forrest more like 'this thing does not work, why does it not work' followed by linking an issue
20:07 aquinas joined #salt
20:07 forrest basepi, I'll just update it with major directories from now on
20:07 forrest topics just has a TON of stuff in it, and I wasn't sure I'd make it through all of it
20:08 basepi Oh, I totally agree with documenting which ones you've done.  Just doesn't need a separate comment for each.  ;)
20:08 forrest psssssssh
20:08 forrest unsubscribe SUCKA
20:08 matthiaswahl joined #salt
20:09 FeatherKing manfred: it mentions a map file where does that go? We used salt cloud some in training, but at my work we dont so i have forgotten some of that cloud stuff
20:10 manfred FeatherKing: you can put it wherever, you just reference it in salt-cloud with -m /path/to/file
20:10 FeatherKing is that the same as the salt-ssh roster? or would i need both a map and a roster
20:10 manfred it is in a different format
20:11 manfred if you are just trying to deploy salt, i would use salt-cloud's saltify driver
20:11 manfred FeatherKing: we are looking at/working on maybe moving to using salt-ssh instead of the salt.utils.cloud.bootstrap() function
20:11 TaiSHi o/ all
20:12 zain_ joined #salt
20:12 FeatherKing it looks like the saltify provider would cover what i need probably
20:12 FeatherKing if i set that up as a provider and then ran salt-cloud * -m map file
20:12 FeatherKing good right?
20:12 manfred yeah
20:12 FeatherKing if that is my only salt cloud provider will i have to specify the provider?
20:13 FeatherKing in the command
20:13 manfred you specify it in the map file
20:13 FeatherKing oh right right i see
20:13 manfred :)
20:14 XenophonF hey how do you all keep packages up to date using salt?
20:14 XenophonF like, what's the best way?
20:14 XenophonF should I set allow_update: True in all of my pkg.installed states?
20:14 FeatherKing manfred: thanks i will work on this next week
20:14 manfred np
20:15 manfred aight, i gotta go to a meeting
20:15 manfred o/
20:15 picker joined #salt
20:15 FeatherKing XenophonF: you want to keep all packages up to date or just a few
20:18 XenophonF FeatherKing: all packages
20:18 XenophonF but i'll settle for just the stuff managed by salt
20:19 FeatherKing I would look at http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html#salt.states.pkg.latest
20:19 FeatherKing and
20:19 FeatherKing in helium http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkg.html#salt.states.pkg.uptodate
20:19 FeatherKing the first one takes the pkgs parameter so you could feed it ones you know about line by line
20:19 FeatherKing probably ok for just a few
20:19 XenophonF gotcha
20:19 XenophonF uptodate is probably what I want
20:19 FeatherKing the new one in helium seems to address exactly your question
20:19 FeatherKing ya
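The two options FeatherKing links, roughly sketched (the package names are arbitrary examples, and pkg.uptodate only exists from Helium onward):

    # keep a known list of packages at their latest version
    core-packages:
      pkg.latest:
        - pkgs:
          - openssh-server
          - rsync

    # Helium and later: upgrade everything the package manager knows about
    all-packages:
      pkg.uptodate:
        - refresh: True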
20:20 XenophonF on Windows I'd just use WSUS or whatever for O/S patches
20:20 dimeshake XenophonF: i use salt to make sure yum-cron is running and updating boxes where i want that to happen
20:20 XenophonF but I'd like similar functionality on Linux/FreeBSD
20:20 FeatherKing yeah you could always shell out and yum upgrade or what not
20:20 XenophonF true
20:21 XenophonF One of my favorite WSUS features is being able to automatically approve/deploy/install security updates.
20:21 FeatherKing what distro are you using
20:21 lz-dylan can states manipulate pillar? ie. there's pillar.get; is there pillar.set or similar?
20:21 XenophonF I'm just not certain how to replicate that functionality in Salt.
20:21 toastedpenguin joined #salt
20:21 toastedpenguin left #salt
20:21 XenophonF FeatherKing: I'm using CentOS 6.5 (soon to be 7.0) and FreeBSD 10.0
20:21 dimeshake you wouldn't really do it totally with salt - you'd use salt to push the command to update packages or all packages, though
20:21 FeatherKing you should look at spacewalk
20:22 timoguin or use salt to manage your production repo clones
20:22 XenophonF FreeBSD is a little different - packages are maintained separately from the base O/S
20:22 FeatherKing it has similar functionality to WSUS
20:22 FeatherKing it will work for centos
20:22 XenophonF thanks I'll check it out
20:22 timoguin that way you can test new packages in dev and then move them up through to prod
20:22 FeatherKing maybe not for bsd
20:22 FeatherKing in spacewalk i get notified of bug fixes i can automatically pull the new rpms and then i use salt to check in with spacewalk
20:23 FeatherKing its the community version of redhat satellite
20:23 XenophonF looks interesting!
20:24 FeatherKing it took a bit of setting up here, but we have several environments now, dev,prod, etc and we move systems around and can report just like wsus
20:24 FeatherKing works really well once its going
20:24 kermit joined #salt
20:24 jslatts joined #salt
20:24 FeatherKing it literally becomes the repository for the systems so when i run yum upgrade or pkg.install its really coming from spacewalk
20:24 XenophonF ah
20:25 FeatherKing i could go on and on but it may not solve the bsd
20:25 XenophonF no but it could work fine for CentOS
20:25 FeatherKing centos for sure, we use 6.5
20:25 racooper joined #salt
20:25 XenophonF can you import EPEL into Spacewalk as well?
20:25 forrest FeatherKing, you saw the new satellite for RHEL right?
20:25 FeatherKing yes
20:25 FeatherKing forrest: i have not
20:25 mateoconfeugo joined #salt
20:26 forrest Avoid it
20:26 FeatherKing lol
20:26 forrest it's basically just a bunch of open source tools mashed together
20:26 FeatherKing XenophonF: in fact we import EPEL and also some of our own repositories
20:26 forrest that only supports puppet :P
20:26 FeatherKing ugh
20:26 forrest or at least it did last time I asked them why
20:27 XenophonF brb
20:27 FeatherKing anyway i am o/ by all
20:27 FeatherKing *bye
20:28 mapu other than a cmd.run- is there a way to set the immutable bit on a file?
20:32 ndrei joined #salt
20:34 InAnimaTe joined #salt
20:35 scoates joined #salt
20:39 XenophonF back
20:40 XenophonF mapu: the file states modules doesn't appear to support that
20:40 XenophonF so yeah you'll have to shell out
20:40 forrest mapu, I don't know of any chattr stuff, file.managed doesn't support it (unless you can somehow make it work with kwargs, but I don't think so)
20:40 mapu ok- hadn’t seen it. thanks.
20:40 forrest might be worth asking about support for that.
20:40 forrest I don't see any issues for chattr support
20:40 XenophonF it doesn't look too difficult to add
20:40 forrest but the github search is a bit of a fail, so who knows
20:41 forrest XenophonF, with file.managed I don't know, might be more work than we think, you'd need to write a chattr module and such
20:41 higgs001 joined #salt
20:41 XenophonF i'm thinking just another optional function argument for file.managed
20:42 XenophonF and have it call out to chattr/chflags underneath
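Until something like that lands, the shell-out can at least be made idempotent; a sketch, with a hypothetical path:

    set-immutable-sshd-config:
      cmd.run:
        - name: chattr +i /etc/ssh/sshd_config
        - unless: lsattr /etc/ssh/sshd_config | cut -d' ' -f1 | grep -q i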
20:43 XenophonF it'd be nice if salt supported posix acls as well
20:44 XenophonF that should be even easier since it's standardized inside p1003
20:44 forrest XenophonF, there is facl support: https://github.com/saltstack/salt/pull/3051
20:45 XenophonF nice!
20:45 forrest http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.linux_acl.html#module-salt.modules.linux_acl
20:45 forrest not sure if there is a state though
20:45 forrest but you could always use the module state to call the module
20:45 XenophonF it isn't on the salt.states.file page
20:45 XenophonF hence my comment :)
20:45 forrest yea
20:50 arthabaska joined #salt
20:52 arthabaska joined #salt
20:56 matthew-parlette joined #salt
20:59 XenophonF you all have a good weekend
21:00 XenophonF left #salt
21:00 tligda joined #salt
21:04 HACKING-TWITTER] joined #salt
21:05 Damoun joined #salt
21:16 rojem joined #salt
21:20 Ryan_Lane it looks like if I have two file_roots locations and each specifies a top file, then whichever is defined first wins
21:21 Ryan_Lane is there any way to have both top evaluated?
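The setup Ryan_Lane is describing looks something like the sketch below; with two directories in one environment, only the first top.sls found is used, so the usual workaround is to keep a single top file in the first root and let the other roots supply only states (paths are placeholders):

    # /etc/salt/master
    file_roots:
      base:
        - /srv/salt/base     # this top.sls wins
        - /srv/salt/extra    # a top.sls here is ignored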
21:22 doddstack joined #salt
21:26 bhosmer joined #salt
21:28 bhosmer_ joined #salt
21:32 felskrone joined #salt
21:41 kermit joined #salt
21:45 lz-dylan so the convention in http://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html for including a formula into an existing state is to use an include: and then later a require: pkg: (state name from formula). Will that always invoke (state name from formula) or is there a possibility that salt will just use the package manager to install a package with that overlapping name?
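For what it's worth, requisites such as require only refer to states that are already declared (directly or pulled in via include); they never fall through to the package manager on their own, and an unmatched requisite is an error rather than an implicit package install. A sketch of the convention, with hypothetical names:

    include:
      - nginx                      # the formula

    /etc/nginx/sites-enabled/myapp.conf:
      file.managed:
        - source: salt://myapp/files/myapp.conf
        - require:
          - pkg: nginx             # the formula's pkg state, not a direct package install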
21:54 logix812 joined #salt
21:58 rojem joined #salt
22:02 yano joined #salt
22:08 grim_crypt joined #salt
22:11 grim_crypt joined #salt
22:11 grim_crypt Hey folks anyone still around? I was wondering if there are any other docs on debugging LDAP connections… I followed the documentation to what I think should work.
22:16 eliasp grim_crypt: in which context do you use LDAP? for 'external auth'?
22:18 grim_crypt Eliasp: yeah as an external auth but I only get an error. “Authentication module threw an exception: global name 'ldap' is not defined”
22:18 lz-dylan Is there a way to have more than one file.managed in a state?
22:18 eliasp grim_crypt: python-ldap is installed?
22:18 eliasp lz-dylan: sure, just like any other … the "name" makes them unique
22:19 grim_crypt Eliasp yeah that was missing
22:19 alekibango joined #salt
22:19 dstokes_ hey guys, is there a way to set --show-timeout on a master at the config level? i'd prefer to _always_ see when a minion doesn't respond
22:19 lz-dylan eliasp: I get a 'conflicting ID' error even when I use different 'name's under each file.managed
22:20 eliasp lz-dylan: could you nopaste your state?
22:21 lz-dylan http://nopaste.info/2175e3dac8.html
22:21 lz-dylan hadn't used nopaste previously. preferred over gist?
22:21 tligda joined #salt
22:21 eliasp dstokes_: using 'show_timeout' (!! underscore instead of -) should work just fine in your config
22:22 lz-dylan n.b. oddity of managing mime.types there is to deal with potentially sloppy package removal/reinstall
22:22 dstokes_ eliasp: sweet, wasn't in the docs. i'll give it a whirl
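i.e. something along these lines in the master config, per eliasp (note the underscore):

    # /etc/salt/master
    show_timeout: True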
22:23 eliasp lz-dylan: decrease indentation for your definitions starting line 4 … currently they're all part of "nginx"
22:25 lz-dylan so, that brings up a chicken-egg problem, in that (unless I specify otherwise) that directory may not exist until the nginx package is installed, but if I try to start service before those files are installed, nginx will fail
22:26 WhyteWolf so in the service have a watch for the files.
22:26 lz-dylan I suppose I could just manually create the directory structure beforehand, but it doesn't feel like the Right Way
22:27 WhyteWolf and in the file.managed have a require for the pkg
22:27 eliasp set makedirs=True
22:27 eliasp then parent directories should be created automatically
22:27 vlcn manfred, I think I figured it out.
22:29 vlcn manfred, I'm not exactly sure why, but the ownership on /etc/salt/master was incorrect
22:31 lz-dylan Okay. With those file.managed reduced to no indentation, I still get a Conflicting ID error. I can see how to address it -- you can see on http://nopaste.info/016b748159.html that I just named each section file.managed with a name statement inside, and I can bump that name statement to the top level -- but it would be Nice if this would work as-is. C'est la vie.
22:32 eliasp lz-dylan: try this: http://bpaste.net/show/454872/
22:33 eliasp my first suggestion was wrong, sorry… missed that you don't have names for your file-states…
22:33 lz-dylan I like that a lot more. Didn't know Salt had enough smarts builtin to interpret 'package-service' as 'package' ... or does it? Do I need a name?
22:33 eliasp if you don't set an explicit "name: asdadad", the name above "file.managed" will be used…
22:34 lz-dylan mmmn...still needs an explicit name in the service. I can live with that.
22:34 eliasp well, to interpret it as "service", that's what the line after is for…
22:34 eliasp lz-dylan: sure, sorry… missed that one…
22:35 eliasp otherwise it would try to start a service named "nginx-service"
22:35 lz-dylan right-o. adding 'name: nginx' makes salt smile.
22:35 eliasp salt actually doesn't make any assumptions based on the name…
22:36 lz-dylan Good deal. Now my whole highstate runs smoothly.
22:37 lz-dylan Thank you!
22:37 eliasp great, congrats
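The shape eliasp's paste points at is roughly the one below: one ID per state, with an explicit name wherever the ID is not itself the package, service or file name (paths and sources are placeholders):

    nginx:
      pkg.installed: []

    nginx-service:
      service.running:
        - name: nginx
        - enable: True
        - watch:
          - file: nginx-mime-types

    nginx-mime-types:
      file.managed:
        - name: /etc/nginx/mime.types
        - source: salt://nginx/files/mime.types
        - makedirs: True
        - require:
          - pkg: nginx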
22:37 eliasp <w
22:37 Eugene Anybody brave enough to play with CentOS7 as a minion?
22:38 WhyteWolf :/ I really need to sit down with jinja one of these days and learn some better handling than what i currently have.
22:41 eliasp Eugene: planned to do it yesterday… postponed until at least next weekend ;(
22:43 MindDrive joined #salt
22:44 alainv hm, if salt has been talking to master over eth0 and i now have firewalled that and pointed it at master via VPN, what do i need to do to make it work?
22:45 eliasp the hostname/IP which the minion can use to reach the master via VPN
22:46 alainv Right, i have been setting that via hosts
22:46 alainv the 'salt' hosts entry is updated
22:46 stevednd does anyone know if there's a way to ensure that a rule appears last in an iptables chain?
22:46 alainv it's timing out, however, even though i can `ping salt` successfully
22:49 WhyteWolf steve, add it using position: 9001 and add all the other rules using position:1 [using insert, not append]
22:51 stevednd WhyteWolf: good idea, thanks
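A sketch of the idea with the iptables state: insert the ordered rules at explicit positions and keep the catch-all at the end (shown here with append, since raw iptables rejects an insert position past the end of the chain). The chain, port and jump targets are examples only:

    allow-ssh:
      iptables.insert:
        - position: 1
        - table: filter
        - chain: INPUT
        - jump: ACCEPT
        - proto: tcp
        - dport: 22
        - save: True

    default-drop:
      iptables.append:
        - table: filter
        - chain: INPUT
        - jump: DROP
        - save: True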
22:51 manfred vlcn: interesting
22:51 vlcn manfred, indeed.  If you take a look at this: Jaleo Spanish Restaurant
22:52 vlcn [DEBUG   ] Missing configuration file: /etc/salt/master
22:52 manfred yeah
22:52 manfred eliasp: works fine here
22:52 vlcn er...
22:52 manfred errr
22:52 manfred Eugene: works fine here
22:52 vlcn fucking chrome broke my paste
22:52 vlcn https://gist.github.com/kelchm/253334a0f69ade1bf8f5
22:52 manfred yarp
22:53 Eugene Cool. Guess I'll give it a go
22:53 * Eugene is trying to sort whether python3 made it in
22:54 Ryan_Lane joined #salt
22:54 manfred it did not
22:54 manfred it is still py2
22:54 manfred Eugene: it is in the repos though
22:54 Eugene 6 has the SCL system
22:54 manfred but not default
22:55 Eugene I'm not finding the add-on package anywhere tho
22:55 manfred right, it just isn't the default is what I meant
22:55 Eugene So where's it hiding
22:55 manfred hrm
22:55 manfred one second
22:55 Eugene centosplus?
22:55 stevednd what is everyone's preferred salt installation/update method? OS package mgr, pip, straight from git?
22:55 manfred Eugene: maybe
22:56 Eugene `yum`
22:56 WhyteWolf stevednd: importing epel and yum
22:56 alainv yeah, deb
22:56 manfred stevednd: pkg manager is probably the best way, pypi is the second best
22:56 manfred cause you can still specify versions relatively easily.
22:56 manfred doing versions by tags out of git gets messy
22:57 Eugene No mention of it in plus either. :-/
22:57 manfred with the different init scripts that could be used and such
22:57 manfred Eugene: i don't see any python3 packages in epel
22:57 Eugene Yeah.
22:57 stevednd yeah, I've been using the apt repo
22:58 stevednd I've been getting pretty tired of waiting for helium though
22:58 manfred Eugene: they should be there somewhere
22:58 stevednd so I guess I'm strongly considering running develop
22:58 * Eugene dons asbestos suit, tries #centos
23:00 * eliasp runs 2014.1.X and deploys where necessary more up-to-date modules + states via http://docs.saltstack.com/en/latest/ref/file_server/dynamic-modules.html
23:01 manfred nice, rhel7 does have systemtap https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SystemTap_Beginners_Guide/index.html
23:01 Eugene manfred - #centos says it just hasn't happened yet. Patience, I spose.
23:01 manfred kk
23:02 manfred i might be thinking of ktap not systemtap
23:02 Eugene I'm hoping its a plus package; SCL sucks.
23:02 manfred does centos use scl? i thought that was just RHEL
23:02 manfred i guess they may use it now that they are under the rhel corporation
23:03 Eugene centos-release-SCL brings it down, yup.
23:03 Eugene That's how you get 3.3 on 6 without having to suck in some obscene fourth-party repo(remi, anybody?)
23:04 manfred lame
23:05 Eugene Works pretty well
23:25 dev9 joined #salt
23:27 stevednd does a salt master need to accept any incoming ports from its minions?
23:27 eliasp stevednd: no, only 4505 and 4506
23:27 eliasp stevednd: see also: http://docs.saltstack.com/en/latest/topics/tutorials/firewall.html
23:29 stevednd eliasp: ahh, I had it flipped, I thought the minions needed to accept the incoming, thanks
23:29 eliasp stevednd: no, unless you use salt-ssh (then you need to open 22 on the minions)
23:29 eliasp stevednd: otherwise, it's only the minions talking to the master, but not the other way round
23:30 eliasp stevednd: the minions get all their jobs by pulling them from the ZeroMQ queue run by the saltmaster…
23:30 stevednd right
23:31 stevednd just wanted to make sure traffic wasn't going any other directions for some reason
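For completeness, opening just those two ZeroMQ ports on the master with the same iptables state would look roughly like this (rule IDs are arbitrary):

    salt-publish-port:
      iptables.append:
        - table: filter
        - chain: INPUT
        - jump: ACCEPT
        - proto: tcp
        - dport: 4505
        - save: True

    salt-return-port:
      iptables.append:
        - table: filter
        - chain: INPUT
        - jump: ACCEPT
        - proto: tcp
        - dport: 4506
        - save: True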
23:34 felskrone joined #salt
23:53 kickerdog joined #salt
23:59 kickerdog left #salt
