
IRC log for #salt, 2013-09-13


All times shown according to UTC.

Time Nick Message
00:02 mesmer joined #salt
00:05 whit joined #salt
00:08 NV err, if it's running via xinetd shouldn't it not be running (ie, only xinetd running)
00:08 NV if it's running or not should be a xinetd configuration, no?
00:08 NV or does service tftp (stop|start) just change xinetd config and reload it?
00:08 g4rlic so, it depends..
00:09 g4rlic chkconfig handles xinetd services the same as sysV services in CO6.4
00:09 g4rlic eg: chkconfig tftp on
00:09 g4rlic will modify the /etc/xinetd.d/tftp config to change disabled to no.
00:09 g4rlic However..
00:09 g4rlic service tftp start will *not* enable the service.
00:10 g4rlic So, I'm not quite certain how to handle that.
00:12 ben___ joined #salt
00:21 lineman60 joined #salt
00:29 g4rlic NV: fwiw, my SLS that manages xinetd appears to work.
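A rough sketch of how an SLS like that can handle an xinetd-backed service on CentOS 6 — manage the /etc/xinetd.d drop-in directly and let xinetd restart when it changes; the state IDs, paths, and source file are illustrative, not g4rlic's actual state:

    # tftp/init.sls - manage an xinetd service by editing its drop-in,
    # since "service tftp start" alone won't enable it
    tftp_xinetd_config:
      file.managed:
        - name: /etc/xinetd.d/tftp
        - source: salt://tftp/files/tftp.xinetd   # shipped with "disable = no"

    xinetd:
      service.running:
        - enable: True
        - watch:
          - file: tftp_xinetd_config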
00:30 jbunting joined #salt
00:33 halfss joined #salt
00:33 lemao joined #salt
00:38 emocakes joined #salt
00:39 sibsibsib joined #salt
00:41 kchr joined #salt
00:51 mwillhite joined #salt
00:54 oz_akan_ joined #salt
01:10 jetblack joined #salt
01:12 gatoralli joined #salt
01:15 pdayton1 joined #salt
01:17 kchr joined #salt
01:27 whit joined #salt
01:29 liuyq joined #salt
01:30 jbunting joined #salt
01:30 bhosmer joined #salt
01:31 liuyq joined #salt
01:32 NV http://docs.saltstack.com/ref/states/all/salt.states.quota.html is it just me, or is that example there wrong?
01:35 jslatts joined #salt
01:50 austin987 joined #salt
01:54 deepakmd_oc joined #salt
01:58 mgw1 NV: something's not quite right there...
01:59 mgw1 actually, maybe that's correct
02:00 Transformer joined #salt
02:03 xl1 joined #salt
02:05 mgw joined #salt
02:06 oz_akan_ joined #salt
02:07 techdragon joined #salt
02:15 auser joined #salt
02:15 xl1 joined #salt
02:15 premera_w joined #salt
02:16 bhosmer joined #salt
02:20 Lue_4911 joined #salt
02:23 StDiluted joined #salt
02:28 racooper joined #salt
02:29 pipps1 joined #salt
02:37 oz_akan_ joined #salt
02:38 forrest joined #salt
02:39 mg__ joined #salt
02:43 xl1 joined #salt
02:45 emocakes joined #salt
02:52 NV can one call a module from a regular state sls (ie, without breaking out py/pydsl)
02:53 NV nevermind, just found states.module which took longer than it should have to appear in my google :P
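For reference, the states.module state NV found lets a plain SLS call any execution module; a minimal sketch (the saltutil call is just an example function):

    # call an execution module from a regular state file
    sync_custom_modules:
      module.run:
        - name: saltutil.sync_modules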
02:55 sssslang forrest: I think the problem I ran into yesterday is due to a large file transfer.
02:55 forrest oh the tar was quite large?
02:55 forrest why was it only a problem for that one box though?
02:55 sssslang i dunno.
02:55 sssslang It's 40MB in size.
02:56 sssslang transferring this file between two hosts is a bit slow using salt.
02:56 forrest ahh ok
02:59 forrest is anyone else going to the atlanta devops day?
02:59 auser joined #salt
03:03 jacksontj joined #salt
03:06 aat joined #salt
03:11 jbunting joined #salt
03:14 JaredR joined #salt
03:16 jheise joined #salt
03:16 carmony anyone around to answer a question about states?
03:17 forrest Just post the question in, if someone knows they will respond.
03:20 carmony One thing that is nice with puppet is you can have a Class declaration that groups "states" together, so you can require a Class, which will basically require everything in the class
03:21 forrest so you want a state, that has a bunch of states in it?
03:21 carmony is there anything like that in salt states? Where I can group a set of declarations under one name
03:22 carmony i.e. I have 6 different repositories that I need before moving on to the next set of instructions
03:22 carmony so I have to at some places have 6 different requires
03:22 carmony it'd be awesome if I could group them by a name
03:23 carmony and just: require: states_group: git_repos
03:23 forrest http://docs.saltstack.com/ref/states/highstate.html#large-example
03:23 forrest look at the bottom example for large example
03:23 forrest that's what you're looking for :P
03:24 forrest I don't know if you can JUST put the state there though
03:24 carmony can you have multiples of the same state under an id?
03:24 forrest oh not highstate I'm stupid
03:24 forrest http://docs.saltstack.com/ref/states/overstate.html
03:24 forrest overstate
03:25 carmony awesome, I think this is what I'm looking for!
03:25 forrest cool
03:25 forrest here's an example of using it
03:25 forrest https://github.com/terminalmage/djangocon2013-sls/blob/master/overstate.example
03:25 forrest I don't know if the match item is required for hosts or not
03:27 carmony hrm, interesting
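The overstate example forrest links follows roughly this shape — a hedged sketch with made-up minion match patterns, where each stage only runs after the stages it requires:

    # /srv/salt/overstate.sls
    mysql:
      match: 'db*'
      sls:
        - mysql
    webservers:
      match: 'web*'
      require:
        - mysql
      sls:
        - apache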
03:40 middleman_ joined #salt
03:45 avienu joined #salt
03:49 StDiluted joined #salt
03:49 sgviking joined #salt
04:01 mbmb joined #salt
04:05 techdragon Any Salt Stack employees in the room? Very tiny business question i wanted to ask that seemed overkill for an email. :-)
04:15 micah_chatt joined #salt
04:17 forrest joined #salt
04:18 Jahkeup joined #salt
04:18 oz_akan_ joined #salt
04:18 jpcw joined #salt
04:19 NV forrest: my understanding of overstate is for ordering between different minions
04:20 NV ie, ensure your db minions come up before your webserver minions
04:20 \ask joined #salt
04:20 NV it sounds like carmony is just asking for a state that uses the include directive to cause other states to get called
04:21 NV ie, webserver.sls has include: \n  - apache\n  - php\n  - python
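Written out as a file, NV's suggestion is simply an SLS whose only job is to pull in others, so applying (or requiring) 'webserver' brings the whole group along:

    # webserver.sls - group several states under one name
    include:
      - apache
      - php
      - python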
04:21 dane joined #salt
04:21 madduck joined #salt
04:21 madduck joined #salt
04:22 lynxman_ joined #salt
04:23 bwq joined #salt
04:23 bkonkle joined #salt
04:23 jfalco joined #salt
04:23 jfalco joined #salt
04:23 lynxman joined #salt
04:28 mbmb looking to update my salt-cloud to include the recently added softlayer support from the dev branch on github.  having trouble sorting out where it is installed to.  anybody know?
04:28 carmony w00t, 73 different states, all firing correctly. From base install wheezy to fully running web server: less than 2 minutes
04:35 efixit joined #salt
04:38 forrest sorry NV was at the gym, yea that was my first proposal to have a state of states.
04:38 jalbretsen joined #salt
04:38 forrest carmony, what did you end up using for all those?
04:38 carmony I figured out how to make the include work
04:38 forrest did you just do a bunch of includes inside of a single state
04:38 forrest then include that state?
04:44 middleman_ joined #salt
04:49 deepakmd_ joined #salt
04:52 mbmb n/m  python setup.py install --force :">
04:53 * techdragon still isn't fond of include heavy state trees
04:53 forrest heh
04:54 berto- joined #salt
04:55 techdragon I suppose I just like having a very declarative state tree that functions in a very 'top down' fashion that lets me keep most of the tree in my head while working on it.
04:56 techdragon I'm growing to like the includes via the formula system though.
04:57 superflit joined #salt
04:57 techdragon Loving formulas.
05:00 forrest you mean instead of just a bunch of states hanging out techdragon?
05:04 zfouts joined #salt
05:05 denstark joined #salt
05:05 techdragon forrest yeah (i think, depending on just what you mean by 'hanging out') the purpose of the states being to represent the state of the system in a language as simple and readable as YAML left me feeling like it was a bad idea to overly 'interlink' the state configs. I see a lot of configs online that aren't as readable due to being less 'direct declaration of system and software state/conf' and more 'i think of $foo as something, here
05:05 techdragon is $foo's state/conf' with $foo being something like ruby-site-name or 'webserver' etc.
05:06 forrest yea so you have apache/init to just install apache, then apache/base_config
05:06 forrest or whatever
05:07 forrest I'm talking about the avoidance of 100 sls files in /srv/salt
05:07 techdragon oh well yeah I'm 100% with you there.
05:08 forrest friggin fedora man
05:08 forrest their primary repo conflicts with EPEL, but doesn't have the same content as EPEL
05:09 ajslater joined #salt
05:09 techdragon Every time I think fedora, I remind myself 'if i want cutting edge I'll just use Arch or Gentoo"
05:09 forrest I'm just testing some selinux stuff for documentation purposes
05:09 forrest I would never actually use this
05:09 techdragon hehe yeah. I really don't like using fedora.
05:10 * techdragon is glad to have never needed to use it for work.
05:11 forrest I would laugh if anyone said to use Fedora at work
05:11 forrest 'yea so you just want broken systems all the time then?'
05:12 satya joined #salt
05:12 forrest g4rlic, the iptables stuff isn't even present all the way back to fedora 17
05:12 forrest g4rlic, not iptables, selinux.
05:12 techdragon I've run a few public Gentoo servers, people trash it all they like, but at least it wasn't Fedora lol.
05:13 forrest g4rlic, so I'm not going to modify the docs to show what needs to be done there since it's nothing
05:13 forrest heh
05:13 forrest fedora is awesome, just not as a server platform when you're actually trying to ensure your service is reliable
05:13 y0j joined #salt
05:13 forrest and actually patching consistently
05:16 techdragon Yeah, the only reason I think gentoo does a decent job as a server is 1- I know how to run a build server to make prebuilt binaries to speed up the updates. and 2 - Security, Hardened profile, quicker patch access if tracking upstream sources, able to turn off unused features to reduce attack surface.
05:16 techdragon If your OS makes patching hard… You're going to have a bad time m'kay
05:19 Katafalkas joined #salt
05:20 __number5__ If you are running Gentoo servers, make sure your data centre/VPS provider doesn't charge you based on CPU usage :P
05:23 techdragon __number5__ hehe thats a given
05:28 redondos joined #salt
05:29 oz_akan_ joined #salt
05:33 ajslater left #salt
05:33 bhosmer joined #salt
05:59 Ryan_Lane joined #salt
06:14 kenny__ joined #salt
06:16 gamingrobot joined #salt
06:20 xinkeT joined #salt
06:24 redondos joined #salt
06:33 ml_1 joined #salt
06:51 davidone joined #salt
06:58 balboah_ joined #salt
07:09 sfello joined #salt
07:19 carlos joined #salt
07:21 nonuby joined #salt
07:21 nonuby joined #salt
07:35 redondos joined #salt
07:54 lemao joined #salt
07:56 bemehow joined #salt
08:01 honestly joined #salt
08:04 adepasquale joined #salt
08:04 olaf39 joined #salt
08:07 natim joined #salt
08:07 natim Hello
08:08 natim Look, I have two roles that need a postgres user and a postgres database
08:08 natim Is it possible to execute twice the same salt with two different pillars ?
08:09 natim It looks like salt loads all the pillar and override with the last configuration
08:09 natim But I would like to load this pillar for this role then the other pillar for the other role even if both role are configured on the same server
08:10 natim This is my salt : https://friendpaste.com/4VZWxGByUaKEDzSc8qlyQf
08:11 natim This is my two pillars that are in two files : https://friendpaste.com/4VZWxGByUaKEDzSc8qUbA2
08:12 natim This is my pillar/top.sls
08:12 tonthon joined #salt
08:12 natim https://friendpaste.com/4VZWxGByUaKEDzSc8qlxoa
08:13 ronc joined #salt
08:13 natim And finaly this is my salt/top.sls : https://friendpaste.com/4VZWxGByUaKEDzSc8qdJEw
08:14 tonthon Hi, when a minion's log says "The Salt Master has cached the public key for this node...", where is this key supposed to be cached ?
08:17 ProT-0-TypE joined #salt
08:17 tonthon ok, I've got it, the hostname was not the one I expected
08:17 tonthon sorry for the noise ...
08:22 ml_1 joined #salt
08:23 natim I put everything in a gist, it should be much easier to understand : https://gist.github.com/Natim/6548009
08:30 emocakes joined #salt
08:40 ggoZ joined #salt
08:43 natim joined #salt
08:47 zooz joined #salt
08:48 druonysus joined #salt
08:48 druonysus joined #salt
08:59 fredvd joined #salt
09:00 scott_w joined #salt
09:07 liuyq joined #salt
09:12 olaf39 joined #salt
09:23 afa joined #salt
09:25 yota joined #salt
09:35 abele joined #salt
09:37 zooz joined #salt
09:42 malinoff_ joined #salt
09:46 redbeard2 joined #salt
09:50 multani natim: I would go with a loop in salt_postgresql.sls around postgres_database.present and postgres_user.present
09:51 multani instead of having one pillar "postgresql_db_name", just find a way to have a pillar "postgresql-databases" which would be a list with your databases and users inside
09:51 natim multani, yes but how to define the pillar the right way
09:51 multani that you can populate from your current pillar_ files
09:51 natim Because I want to create the database only if there is the role
09:51 natim I wish to do that
09:52 natim Do you know how I can populate the postgresql-databases using multiple pillar files ?
09:55 LucasCozy hello, I'm having trouble writing a simple module of my own. I write it in /salt/_modules, run a sync_modules to the minion, and it's still 'not available' and not present in the modules list.
09:55 LucasCozy Am I missing something?
09:56 LucasCozy Note, sync_all and refresh do the same (nothing is synced, already updated) and my minion goes through a salt-syndic.
09:58 mgw LucasCozy: do you see anything in the minion logs?
10:00 multani natim: something like this maybe: https://gist.github.com/multani/6548728
10:01 multani natim: I'm not sure, but I suppose pillars are going to accumulate in postgresql-databases, if they have different names
10:01 multani (postgresql-databases is going to be a dict in this case)
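The gist itself isn't reproduced in this log; a minimal sketch of the loop multani is describing, with an invented 'postgresql-databases' pillar structure (key names are placeholders):

    # pillar: one dict listing every database/user pair the host needs
    postgresql-databases:
      app_one:
        user: app_one_user
        password: secret1
      app_two:
        user: app_two_user
        password: secret2

    # salt_postgresql.sls: iterate over that dict instead of single keys
    {% for db, args in salt['pillar.get']('postgresql-databases', {}).items() %}
    {{ db }}_user:
      postgres_user.present:
        - name: {{ args['user'] }}
        - password: {{ args['password'] }}

    {{ db }}_database:
      postgres_database.present:
        - name: {{ db }}
        - owner: {{ args['user'] }}
        - require:
          - postgres_user: {{ db }}_user
    {% endfor %}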
10:02 LucasCozy mgw, you're right, thanks for the help. 'Failed to import module'
10:03 az87c joined #salt
10:04 natim multani, I will give it a try :)
10:06 xl1 How to get the list of all minion ids in a runner?
10:07 xl1 For monitoring purpose. The minions' data are already collected in a db, now I need the id list to run a query and check.
10:08 xl1 manage.status looks like an overkill and needs a roundtrip to all minions
10:10 xl1 nvm, manage.status has the exact code I need:)
10:14 bemehow joined #salt
10:16 bemehow_ joined #salt
10:33 natim multani, it does override
10:33 natim They don't accumulate
10:58 bhosmer joined #salt
11:00 zooz joined #salt
11:06 TheCodeAssassin joined #salt
11:09 superflit joined #salt
11:40 emocakes joined #salt
11:45 efixit joined #salt
11:46 blee joined #salt
11:55 sibsibsib joined #salt
12:02 emocakes joined #salt
12:02 bhosmer joined #salt
12:06 mwillhite joined #salt
12:19 jbunting joined #salt
12:21 jbunting joined #salt
12:23 Teknix joined #salt
12:24 bkonkle joined #salt
12:25 emocakes joined #salt
12:29 pdayton joined #salt
12:39 pdayton joined #salt
12:52 aat joined #salt
12:55 imaginarysteve joined #salt
13:01 Jahkeup joined #salt
13:01 racooper joined #salt
13:09 anteaya joined #salt
13:12 tyler-baker joined #salt
13:12 tyler-baker joined #salt
13:16 Gifflen joined #salt
13:17 Jahkeup joined #salt
13:20 oz_akan_ joined #salt
13:21 Kholloway joined #salt
13:22 fredvd joined #salt
13:23 ksk joined #salt
13:23 ksk hey guys
13:23 ipmb joined #salt
13:23 ksk i did upgrade some module and saltutil.sync_modules synced it successfully - but it seems like something of the old version (some debug output i removed recently) is still there
13:24 oz_akan_ joined #salt
13:25 ksk in the module's file on the server (as in salt-client) itself in /var/cache/salt/minion/extmods/modules/ there is no hint of that debug output (some print statements)
13:25 ksk but salt-call still prints them
13:25 berto- joined #salt
13:25 ksk i already did delete these .pyc file for the script
13:25 ksk any idea whats wrong?
13:29 brianhicks joined #salt
13:33 StDiluted joined #salt
13:35 redbeard2 joined #salt
13:35 pipps joined #salt
13:36 halfss joined #salt
13:39 ksk ive like deleted everything thats like "name_of_module*"
13:39 ksk synced again - still output thats not in the file on the salt-master
13:39 ksk and i cannot manage to find these strings on the salt-minion either - they are just in the output
13:39 nielsbusch joined #salt
13:47 dyim_ joined #salt
13:48 alexandrel dayum, mako templates are nice.
13:48 alexandrel you can even import in them.
13:50 honestly mako is nice
13:50 honestly the dependency is annoying though
13:50 honestly need to install mako on all minions
13:52 kaptk2 joined #salt
13:53 toastedpenguin joined #salt
13:55 alunduil joined #salt
13:56 davidone joined #salt
13:57 m_george|away joined #salt
13:59 joehh joined #salt
13:59 imaginarysteve joined #salt
14:01 redondos joined #salt
14:05 StDiluted joined #salt
14:05 pipps joined #salt
14:06 pipps2 joined #salt
14:07 avienu joined #salt
14:07 aat joined #salt
14:13 kermit joined #salt
14:13 juicer2 joined #salt
14:14 danielbachhuber joined #salt
14:16 p3rror joined #salt
14:19 djinni` joined #salt
14:20 derelm joined #salt
14:21 kallek joined #salt
14:21 faldridge joined #salt
14:24 kallek joined #salt
14:34 stifmaister joined #salt
14:35 stifmaister Hello everyone, i'm trying to call a custom module on the minion we get this error ;  AttributeError: 'module' object has no attribute '__context__' . Any idea?
14:38 teskew joined #salt
14:42 opapo joined #salt
14:47 Katafalkas joined #salt
14:48 nielsbusch joined #salt
14:50 JaredR joined #salt
14:52 UtahDave joined #salt
14:52 JaredR joined #salt
14:54 nielsbusch joined #salt
14:56 jalbretsen joined #salt
14:56 mannyt joined #salt
15:00 abe_music joined #salt
15:03 mapu joined #salt
15:06 deepakmd_oc joined #salt
15:07 micah_chatt joined #salt
15:08 xet7 joined #salt
15:10 jodell joined #salt
15:12 redondos joined #salt
15:12 ckao joined #salt
15:14 pygmael joined #salt
15:14 bemehow joined #salt
15:14 m_george left #salt
15:16 forrest joined #salt
15:21 StDiluted joined #salt
15:21 Lue_4911 joined #salt
15:26 bemehow_ joined #salt
15:27 Kholloway joined #salt
15:28 devinus joined #salt
15:29 bemehow__ joined #salt
15:31 [diecast] joined #salt
15:31 faldridge joined #salt
15:32 bemehow joined #salt
15:36 tomeff joined #salt
15:38 [diecast] 0.16.4 - adding the finger!
15:40 jslatts joined #salt
15:40 aleszoulek joined #salt
15:42 UtahDave [diecast]: ?
15:43 [diecast] to grains osfinger
15:45 UtahDave ah.  :)
15:52 nielsbusch joined #salt
15:52 Lue_4911 joined #salt
15:57 JaredR joined #salt
15:58 nbari joined #salt
15:58 cwright i suspect this is a jinja issue, but i keep getting errors trying to print joined strings from list comprehensions, is this a known issue?
15:58 cwright server_name localhost 127.0.0.1 {%- print " ".join([iface[0] for iface in salt['network.ip_addrs']]) -%};
15:59 nbari cedwards: any idea of how to fix this on freebsd: https://github.com/saltstack/salt/issues/7221
15:59 cwright expected token ',', got 'for'; line 6 in template
15:59 nbari I tried to create a nodegroup and got Error parsing configuration file: /usr/local/etc/salt/master - expected ..
16:01 KyleG joined #salt
16:01 KyleG joined #salt
16:01 cwright i get the same message when I use {{ }} too, instead of {% print %}
16:01 forrest cwright, is there only one address there?
16:01 cwright no
16:02 cwright even still that shouldn't matter, the list comprehension should just output a single ip
16:03 JaredR joined #salt
16:04 cwright {{ " ".join([iface[0] for iface in salt['network.ip_addrs']]) }}
16:04 bemehow_ joined #salt
16:04 cwright that looks perfectly valid to me, i don't understand what could be wrong
16:08 cwright ok, looks like a limitation of jinja.  http://jinja.pocoo.org/docs/faq/
16:08 JaredR joined #salt
16:10 forrest you could just do it differently
16:10 forrest just use a for loop, not as nice, but it would work
16:11 cwright of course
16:11 cwright thanks
16:13 bhosmer joined #salt
16:14 sssslang_ joined #salt
16:16 cwright this did the trick:
16:16 cwright server_name localhost 127.0.0.1 {% for ip in salt['network.ip_addrs']() %} {{ ip }}{%- endfor %};
16:18 forrest cool
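For what it's worth, Jinja's join filter sidesteps the list-comprehension limitation entirely, since network.ip_addrs() already returns a plain list of address strings:

    server_name localhost 127.0.0.1 {{ salt['network.ip_addrs']() | join(' ') }};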
16:19 mapu joined #salt
16:25 g4rlic forrest: bro, EPEL != fedora.  it's a bad idea to use EPEL on a Fedora system.
16:26 forrest g4rlic, yea I know
16:26 forrest but they don't have the latest salt rpm
16:26 devinus joined #salt
16:26 sssslang_ salt's file server seems not very efficient :-/
16:26 g4rlic uhm
16:27 dumol joined #salt
16:27 redondos joined #salt
16:27 g4rlic http://mirrors.kernel.org/fedora/updates/19/x86_64/salt-0.16.3-1.fc19.noarch.rpm
16:27 g4rlic 0.16.3 is current, no?
16:28 g4rlic As opposed to EPEL, which is still on 0.16.0
16:28 g4rlic >.<
16:31 Kholloway joined #salt
16:34 alunduil joined #salt
16:35 g4rlic forrest: try removing the epel repo and installing salt via yum. It should pull the latest.
16:35 forrest the epel repo never installed
16:35 forrest it conflicts with the base fedora repo
16:35 dumol hi all! pkgrepo seems to be misbehaving in latest salt, please take a look at https://gist.github.com/dumol/c4f8baf096061b12b774
16:35 forrest I already blew the VM away last night anyways after installing from the bootstrap
16:35 Kholloway joined #salt
16:35 harshini joined #salt
16:38 forrest dumol, I assume you don't get an error when you run those commands from the command line right?
16:38 jdenning joined #salt
16:39 ccase joined #salt
16:39 dumol forrest: i've tried apt-get and the apt-key commands, they work just fine
16:39 pipps1 joined #salt
16:39 JaredR joined #salt
16:40 g4rlic forrest: then why weren't you able to pull 0.16.3?  I'm confused, and trying to help. :)
16:40 forrest I did pull 0.16.3
16:40 forrest with the bootstrap
16:40 forrest just couldn't do so with the RPM, which I would have preferred.
16:40 g4rlic uhm.
16:40 g4rlic yum install salt didn't get you 0.16.3?
16:41 forrest no
16:41 g4rlic or is that not "latest" ?
16:41 forrest it got me 0.15.something
16:41 g4rlic how were you installing Fedora?
16:41 forrest even after running a full yum update on the system itself (which shouldn't have mattered), updating the fedora repo data, and cleaning the repo metadata
16:41 forrest I just spun up a droplet over at digital ocean
16:42 g4rlic Sounds like digital ocean isn't pulling from the official mirrors.
16:42 g4rlic 0.15.0 is in the base repo, but updates has 0.16.3
16:42 g4rlic which is where I got it from, via the usual yum update procedure.
16:42 forrest yea I'm not sure
16:43 forrest when I looked at the repo data, it looked like the official mirrors
16:43 forrest maybe it was just the mirror I was pulling from didn't have an update or something *shrug*
16:43 g4rlic forrest: that's unlikely, most mirrors are at most 1-2 days out of date.  0.16.3 was packaged in late august, so every mirror ought to have it now.
16:44 forrest Yea I don't disagree
16:45 JaredR joined #salt
16:45 forrest maybe I forgot to update the updates repo
16:45 forrest and just updated the base
16:45 forrest I only worked on it for about 4 minutes before saying screw it and installing from the bootstrap
16:46 jimallman joined #salt
16:47 forrest Either way, it worked all the way back to 17
16:47 forrest which is good
16:47 g4rlic Nice
16:47 g4rlic my guess is that the context for salt-call in Fedora is more permissive than the one in CentOS.
16:48 g4rlic I have a small hack in my common state that forces the correct context using cmd.run.
16:48 forrest heh
16:49 g4rlic fwiw, the reason I'm using Fedora for this little project, is that CentOS ships such *old* stuff (nee, tested and stable), that things like running IE in Wine don't work in CentOS very well.
16:49 g4rlic BUt only as VM's
16:49 g4rlic all the bare metal VM's are using CO 6.5
16:49 g4rlic er, 6.4
16:49 KyleG bare metal VM is a contradictory statement...
16:49 g4rlic KyleG: VM Manager.  Mistyped.  Whatever.  Pedant. ;)
16:50 KyleG I WILL NOT STAND FOR THIS.
16:50 KyleG lol
16:50 g4rlic heh
16:50 forrest you don't have to explain it to me man
16:50 forrest is fedora 19 still using python 2.6?
16:50 g4rlic no, 2.7
16:50 g4rlic and 3.3
16:50 forrest finally
16:50 forrest jeez
16:50 g4rlic Fedora 17 was using 2.7 and 3.3 too
16:50 forrest it's stupid they aren't rolling that into cent/rhel 7
16:50 forrest they're sticking with 2.6 as the default apparently
16:50 g4rlic For RHEL7?
16:50 g4rlic seriously?
16:50 JaredR joined #salt
16:51 forrest Apparently
16:51 forrest It's not on their roadmap for 7 anywhere
16:52 forrest makes me rage
16:52 forrest because they are too lazy to fix yum
16:52 forrest That's what they claimed was the blocker last time I checked :\
16:53 g4rlic lazy != unincentivized to fix something that works. ;)
16:53 forrest I don't care if it works
16:53 forrest the workarounds for 2.6 suck
16:53 JaredR joined #salt
16:53 forrest plus I have to support 2.4 in my code for rhel 5
16:53 forrest lol
16:54 forrest you have to write your own function to handle zip files, I mean come on
16:54 Ahlee so i know state.highstate - how do i get a list of all methods under the state class?
16:54 forrest to unzip them that is
16:55 forrest you mean which ones would be run Ahlee?
16:55 Ahlee could be
16:55 Ahlee coworker wants to run specific state, not just highstate
16:55 Ahlee i can't find easy way of verifying what method name is
16:55 g4rlic salt-call state.sls 'nameofstate'
16:55 forrest yup
16:56 JaredR joined #salt
16:56 Ahlee ok, so highstate and sls exist
16:56 forrest ok
16:56 Ahlee what's easiest way to find what other methods are available
16:56 forrest what do you mean?
16:56 Ahlee state is a module, right? with at least two methods
16:56 Ahlee state.hightstate, state.sls
16:56 g4rlic forrest: scanning through python bugs for RHEL7 in redhat's bugzilla seems to indicate they're going to ship 2.7.
16:57 forrest g4rlic, as the default though?
16:57 g4rlic Probably.
16:57 Ahlee it stands to reason, there are additional methods available that might be useful/handy
16:57 forrest that would be great, last time I checked it looked like they weren't going to
16:57 g4rlic since the package names are python-2.7.3, as opposed to python27.
16:57 forrest :\
16:57 Ahlee short of trolling through state.py, i'm wondering if there's an easier way
16:57 nbari guys how can i avoid or fix this: Error parsing configuration file: /usr/local/etc/salt/master - expected '<document start>', but found '<block mapping start>'
16:58 forrest https://salt.readthedocs.org/en/latest/ref/modules/all/salt.modules.state.html
16:58 forrest Ahlee
16:58 auser joined #salt
16:58 Ahlee that's it!
16:58 forrest cool
16:58 Ahlee i knew i'd seen that somewhere before.
16:58 Ahlee thank you sir.
16:58 forrest np
16:59 forrest nbari, what did you modify in your master conf?
16:59 micah_chatt left #salt
16:59 g4rlic forrest: https://bugzilla.redhat.com/show_bug.cgi?id=856672  <-- yep, looks like RHEL7 will be python 2.7.3, which iirc, is the last python 2 release.  All forward code is python 3.
17:00 forrest g4rlic, huzzah
17:00 Ahlee 2.7.5 already exists
17:00 g4rlic Ahlee: what I meant was, 2.7 is the last of the python 2 branches.
17:00 g4rlic there will be no 2.8
17:00 Ahlee but, yeah, 2.7 is end of line
17:00 micah_chatt joined #salt
17:00 Ahlee correct :(
17:00 forrest I'm glad
17:00 forrest go go 3
17:00 nbari forrest: https://github.com/saltstack/salt/issues/7221
17:00 Ahlee die die 3 ;)
17:00 forrest 3 isn't even hard :D
17:00 Ahlee python 3 will be equal to perl's 6
17:00 forrest nah
17:00 nbari just uncommented 1 line and added this: group1: 'S@192.168.1.0/24'
17:00 g4rlic no, python 3 is actually shipped. ;)
17:00 Ahlee lol
17:00 forrest the big boys are starting to support 3
17:01 Ahlee valid point g4rlic
17:01 g4rlic I'm curious if Autodesk will ship python 3 support in the next Maya.
17:01 g4rlic Last I Checked, it was still 2.6.
17:01 g4rlic That will be a telling sign of python 3's coming of age.
17:01 JaredR joined #salt
17:02 jacksontj joined #salt
17:02 nbari any idea?
17:02 pdayton joined #salt
17:06 g4rlic nbari: any reason you can't target using hostnames or other grains?  just curious..  (I'm not using nodegroups at all, top.sls uses hostname grain matching.)
17:06 forrest I don't have a freebsd machine to test it on, can you paste the lines around that one into a gist from the config file?
17:06 nbari I was following the docs and want to give a try to the nodegroups
17:06 forrest also, can you try dropping the subnet from the group?
17:07 JaredR joined #salt
17:07 forrest looks like the compound matches should work though
17:07 g4rlic Aye.
17:07 g4rlic http://docs.saltstack.com/topics/targeting/compound.html
17:07 forrest yep
17:07 nbari I am still learning salt and created some minions, for example this works salt 'test_server_*' test.ping
17:07 forrest yea
17:07 forrest but obviously you want the nodes to work
17:08 nbari mmm i can continue learning using the salt '*' etc
17:08 forrest When you get a second please create a gist or pastebin or whatever of the line around that one
17:08 forrest I don't get why it would throw that error
17:08 nbari one extra question, how to install the mysql modules ?
17:09 nbari http://docs.saltstack.com/ref/modules/all/salt.modules.mysql.html
17:09 forrest the mysql modules are there out of the box
17:10 nbari ok
17:10 forrest you're running 0.16.3 right?
17:10 nbari 0.16.4
17:10 redondos joined #salt
17:10 forrest oh I see it on your gist
17:10 nbari I want to see if I can make a mysql (master --> slaves) with salt
17:11 whyzgeek hi I have questions regarding environments. Suppose I have base/webserver and I have dev, qa, prod as other environments. I only want to override webserver in dev. When I create dev/webserver and put it in dev/top.sls it will both run base/webserver as well as dev/webserver. How can I override base from a lower env? Or any other solution to this?
17:11 JaredR joined #salt
17:13 devinus joined #salt
17:14 hoppala joined #salt
17:18 ghanima joined #salt
17:18 forrest whyzgeek, have you looked at th top file documentation?
17:18 ghanima hello all
17:18 forrest it's pretty thorough regarding environment setups
17:18 ghanima was wondering if I can ask about salt '*' puppet.run
17:18 ghanima is there anyway to supply any options
17:19 ghanima I would like to supply --onetime or the -t
17:19 ghanima option
17:19 uta joined #salt
17:19 forrest ghanima, https://salt.readthedocs.org/en/latest/ref/modules/all/salt.modules.puppet.html#salt.modules.puppet.run
17:19 forrest looks like you can pass args to it
17:20 uta do salt-minions cache modules? i am making changes to a module then running state.highstate but the minion is running the old code...
17:20 uta any help appreciated :)
17:20 forrest yea uta, use salt '*' state.clear_cache
17:21 ghanima forrest so is it salt '*' puppet.run(-t)
17:21 forrest it's disabled by default though uta.
17:21 troyready joined #salt
17:21 forrest so it should pull a fresh copy
17:21 forrest ghanima, maybe, I've never used puppet.run
17:21 forrest try salt '*' puppet.run -t
17:22 nbari how to list current connected minions ?
17:22 uta forrest: hmmm, that didn't fix it...
17:22 forrest yea like I said uta it's disabled by default for states.
17:22 forrest you could blow away /var/cache/salt
17:22 forrest then restart the service
17:23 uta forrest: okay thanks, i'll give that a go
17:24 forrest if you don't want to use salt '*' test.ping nbari, you can cat /var/cache/salt/master/minions, or /etc/salt/pki/master/minions
17:24 ghanima forrest: sorry I got this error salt: error: -t option requires an argument
17:24 forrest hmm, maybe try wrapping it in single quotes?
17:24 nbari ok
17:25 forrest err don't cat sorry
17:25 forrest ls :P
17:25 forrest I'd suggest using test.ping though, that is just going to confirm pki data
17:26 oz_akan_ hi UtahDave, has salt.mine started to support complex expressions, like grain role:this and environment:that?
17:26 faldridge joined #salt
17:26 forrest UtahDave might not be around oz_akan_, they've been doing a bunch of interviews this week
17:26 oz_akan_ forrest: thanks
17:26 oz_akan_ is there anyone from saltstack then?
17:27 forrest no clue, I haven't seen anything from them since I've been in here.
17:27 uta forrest: clearing /var/cache/salt worked, thankyou!
17:27 oz_akan_ forrest: thanks
17:27 forrest np uta, I can't remember if there's a more elegant way of doing that, if so it's not in the docs.
17:27 forrest or at least not where I looked :P
17:30 forrest did you figure out the puppet stuff ghanima?
17:31 ghanima Yeah looks like they already support other options
17:31 ghanima 'onetime', 'verbose', 'ignorecache', 'no-daemonize',
17:31 ghanima 'no-usecacheonfailure', 'no-splay', 'show_diff'
17:31 ghanima just have to supply those
17:31 ghanima I think
17:31 ghanima I am testing now
17:31 forrest oh cool, so just the full item instead of the - option
17:31 uta forrest: yeah we'll figure out why this is happening hopefully. do you know if just restarting the salt-minion clears the cache? it seems to also fix the problem
17:31 ghanima forrest: will know in 30 sec :)
17:31 forrest uta, yea usually you have to restart the minion as well
17:32 forrest you can just do that via a service call on the command line from salt though
17:32 forrest It's a common thing uta, the answer has probably been discussed before while I was in here, I just don't remember.
17:32 forrest if you just do salt '*' service.restart salt-minion you can restart it everywhere
17:33 forrest but I think you actually have to trash the cache on the master then restart it to ensure it rebuilds, obviously don't quote me on that.
17:33 MrTango joined #salt
17:34 uta forrest: okay cool. It also seems to work if we don't clear the cache, but just restart the minion
17:34 cro joined #salt
17:34 forrest it might rebuild it then, I don't remember.
17:34 UtahDave oz_akan_: have you tested that?
17:34 uta forrest: yeah. thanks for your help!
17:34 forrest or it might just query the master when you restart, so it pulls the latest cache
17:34 forrest uta, np
17:35 oz_akan_ UtahDave: a few months back it was not working
17:35 oz_akan_ UtahDave: I haven't tested since
17:35 oz_akan_ UtahDave: testing at this point
17:35 ghanima forrest: this is what you have to do
17:35 UtahDave oz_akan_: OK. I haven't tested that in a while.  I can test that against the develop branch here in a few minutes. We're finishing up a hiring interview
17:36 ghanima salt '*' puppet.run agent onetime no-daemonize no-usecacheonfailure no-splay ignorecache
17:36 oz_akan_ UtahDave: thanks, I'd appreciate if you could
17:36 forrest ahh ok
17:36 forrest so just toss all the options in there that are part of -t
17:37 ghanima forrest: yup even though -t includes all of those options not sure why they just don't have that as a valid arg
17:37 nbari ok I got the mysql plugin to work but I had to access the minion via ssh and edit the minion file to add the mysql.options; if I wanted to do this using pure salt, how could I create these 'steps' in order to do the same for each minion?
17:38 forrest maybe it's the - in the -t that causes an issue.
17:39 oz_akan_ UtahDave: documentation doesn't mention compound matcher
17:42 mesmer joined #salt
17:44 alunduil joined #salt
17:44 TheCodeAssassin joined #salt
17:47 redondos joined #salt
17:47 Katafalkas joined #salt
17:48 TheComrade joined #salt
17:49 forrest nbari, I didn't know you had to do that, what within the config did you have to change?
17:50 nbari i just append to the bottom the mysql.host: etc options
17:50 nbari but trying to learn now how to do it for each minion
17:51 forrest do you mean you modified /etc/salt/minion?
17:51 nbari yes
17:52 nbari In order to connect to MySQL, certain configuration is required in /etc/salt/minion on the relevant minions.
17:52 jacksontj joined #salt
17:54 TheComrade left #salt
17:55 mannyt_ joined #salt
17:56 forrest so you could just make /etc/salt/minion a managed file nbari
17:56 forrest then have the salt-minion service watch that file for changes and restart when it finds an update.
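A sketch of that suggestion, with illustrative paths and a hypothetical template — the mysql.* connection settings live in the managed minion config, and the minion restarts when it changes:

    # salt/minion.sls
    minion_config:
      file.managed:
        - name: /etc/salt/minion
        - source: salt://salt/files/minion   # template carrying mysql.host, mysql.user, ...
        - template: jinja

    salt-minion:
      service.running:
        - enable: True
        - watch:
          - file: minion_config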
17:57 ghanima one more question
17:57 ghanima maybe one more
17:57 forrest heh
17:57 ghanima the concept of grains
17:57 ghanima can grains interact with puppet facts
17:58 nbari ok i will try that
17:58 redondos joined #salt
17:59 forrest There was this module written for puppet facts as grains ghanima: https://github.com/saltstack/salt-contrib/pull/50
17:59 madko joined #salt
17:59 forrest other than that I have no idea.
17:59 ghanima forrest: sorry so if I read this correctly it is integrated
18:00 ghanima I can't find any documentation on it
18:00 forrest ghanima, no, that link I put is custom modules by the community
18:01 druonysuse joined #salt
18:01 druonysuse joined #salt
18:01 pipps joined #salt
18:01 madko Hi guys, can someone tell me why this http://ur1.ca/fiv9x is not working (missing require user)
18:02 ghanima forrest: So salt-contrib is not a part of the main salt base?
18:02 forrest correct
18:04 ghanima Ahhhhhh outside of github is there any other documentation on the project
18:06 madko http://ur1.ca/fivdv <= this is the result, no user state, I don't understand why
18:06 forrest ghanima, I don't know, that was just what a quick google search pulled up
18:06 forrest there's documentation on adding custom modules that you could probably follow to add that though
18:08 ghanima forrest: have you used the salt-contrib? would you consider it moderately stable?
18:08 forrest I've never used it before
18:08 forrest The merges have to be approved by members of the saltstack team I believe
18:09 toastedpenguin joined #salt
18:09 forrest that pull request isn't approved though, it's still open
18:10 forrest madko, are those users created in this state?
18:10 madko yes
18:10 forrest with your sets at the top, ok
18:10 madko group state are fine, but I don't know why users are not
18:10 forrest try running that state with the debug option
18:10 madko ok
18:11 forrest well, you aren't requiring the group for the user
18:11 forrest you have the instance_group requiring the user, but the user is supposed to be assigned to the instance_group
18:11 forrest how can it do that if the instance_group requires that instance_user run first
18:11 forrest unless I'm missing some logic you have in here.
18:12 madko My user are part of the group
18:12 forrest before you run salt?
18:13 madko Ok this is my latest try, before I was doing without the group requiring the user
18:13 madko and same probelm
18:13 mohae joined #salt
18:13 forrest try making the user require the group, and remove that - members:, you will be adding them to the group anyways.
18:13 madko I'm trying again without the user requirement in the group
18:13 madko ok
18:13 forrest so then it will create the group first, then create the user, and then ensure it's part of those groups
18:13 JaredR joined #salt
18:14 Thiggy joined #salt
18:15 Thiggy Kinda unrelated, but is anyone using AWS OpsWorks? Kinda curious about where it overlaps/competes/is different from saltstack
18:16 madko http://ur1.ca/fivpm doesn't this make more sense?
18:17 forrest You need to specify your requi on line 10 to be a group I believe
18:17 forrest otherwise yea that's what I was going for.
18:18 madko oops
18:19 madko but same problem
18:19 madko I'm trying with debug
18:19 forrest gotcha
18:19 madko thanks for helping me
18:19 forrest oh make sure you clear the cache too
18:19 JaredR joined #salt
18:19 madko ok
18:19 forrest rm /var/cache/salt, then restart the master, see if that does it
18:21 mapu joined #salt
18:22 JaredR joined #salt
18:22 madko this is the rendered data (from debug) http://ur1.ca/fivw0
18:22 p3rror joined #salt
18:22 madko debug says duplicate key for my users
18:24 berto- joined #salt
18:24 JaredR joined #salt
18:24 madko http://ur1.ca/fivxx the debug output
18:25 madko can the duplicate keys for my users prevent their creation?
18:26 forrest I'm not sure, technically it isn't creating an error
18:27 madko yes it's just a warning so...
18:27 forrest did it create stuff properly on the minion?
18:27 JoAkKiNeN joined #salt
18:27 forrest Or do you make that user anywhere else in the states you apply
18:27 forrest oh here's probably why
18:27 forrest you define eduard: twice, instead of putting the group under
18:28 madko let me check that
18:29 swa_work joined #salt
18:30 forrest madko, https://gist.github.com/gravyboat/6554309
18:30 JaredR joined #salt
18:30 forrest the only issue there is that you can't split the instance_owner versus the instance group
18:30 forrest so if it was instance group asdf, owner madko, you'd be fine and those warnings wouldn't pop
18:31 madko ok that make sense
18:31 forrest Yea, you could add some additional logic to check if the instance_owner, and instance_group are equivalent, and if they aren't create that instance group item
18:31 madko thanks forrest :)
18:32 forrest np madko
18:32 JaredR joined #salt
18:35 madko my working sls http://paste.fedoraproject.org/39453/9728913/
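For readers without the paste, the pattern forrest walked madko toward is roughly the following (user and group names are placeholders): create the group first, drop the - members list, and have the user require the group:

    {% set instance_owner = 'eduard' %}
    {% set instance_group = 'webteam' %}

    {{ instance_group }}:
      group.present:
        - name: {{ instance_group }}

    {{ instance_owner }}:
      user.present:
        - groups:
          - {{ instance_group }}
        - require:
          - group: {{ instance_group }}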
18:35 JaredR joined #salt
18:36 forrest no warnings on that one?
18:37 madko only one on my /var/cache/nginx.
18:37 madko but that's normal
18:38 pdayton joined #salt
18:41 JaredR joined #salt
18:41 madko thanks again forrest , bye
18:46 logix812 joined #salt
18:46 JaredR joined #salt
18:49 JaredR joined #salt
18:50 EntropyWorks I'm seeing a lot of these mesgs in my logs salt.modules.cmdmod][INFO    ] Executing command '/sbin/zfs help || :' in directory '/root'
18:50 Ahlee Are the functions at https://salt.readthedocs.org/en/latest/ref/modules/all/salt.modules.service.html#module-salt.modules.service really the only options?
18:51 Ahlee i.e., where's the disabled function?
18:51 EntropyWorks what is up with that? it causes an error since /bin/zfs help dumps the help into my logs, which isn't really good.
18:52 JaredR joined #salt
18:54 mohae joined #salt
18:55 mwillhite joined #salt
18:57 Thiggy joined #salt
18:57 JaredR joined #salt
19:03 JaredR joined #salt
19:07 foxx[cleeming] joined #salt
19:07 JaredR joined #salt
19:07 BrendanGilmore joined #salt
19:09 mapu joined #salt
19:16 f47h3r joined #salt
19:16 whyzgeek forrest: yes I did but what I gathered was that the base top gets precedence. But what I want is the opposite. Meaning I define a general profile at the base level, but when I define something in the specific environment, that overrides the base one.
19:16 whyzgeek is that possible
19:16 whyzgeek ?
19:18 whit joined #salt
19:22 aat joined #salt
19:32 redondos joined #salt
19:32 bemehow joined #salt
19:41 robertkeizer joined #salt
19:41 forrest sorry whyzgeek I was at lunch
19:42 whyzgeek forrest: no problem :)
19:42 forrest So in that scenario you have a couple options, you can either do something like apache/init.sls, which just installs apache, then you have a unique file for each environment, or you can try to use 'extend'
19:43 forrest then you group your systems in the top.sls, so dev gets apache/init.sls, then dev/apache_config.sls, test gets apache/init.sls, and test gets apache/apache_config.sls
19:43 forrest that sort of thing
19:43 UtahDave whyzgeek: also, when you define your file_roots in your master config, you can specify multiple directories
19:43 whyzgeek forrest: ic that make sense!
19:44 forrest Good point UtahDave!
19:44 forrest yea whyzgeek, check out this
19:44 UtahDave Put the most specific directories at the top, with the defaults at the bottom
19:44 forrest https://github.com/terminalmage/djangocon2013-sls
19:44 forrest oh and it will traverse through them in order UtahDave?
19:44 UtahDave yep
19:44 forrest nice
19:44 forrest in that example I linked there's an option, just imagine instead of foo, it's dev or test
19:45 whyzgeek UtahDave: thanks this solve a lot of my problems!
19:45 UtahDave whyzgeek: let me get you an example
19:46 whyzgeek UtahDave: I really appreciate it
19:46 forrest And here I thought UtahDave was slacking all day sitting through interviews :P
19:46 UtahDave :) I have been
19:46 whyzgeek its just the documents are thin on this. I am trying to create a large config setup and this will solve a lot of problems
19:47 ipmb joined #salt
19:47 whyzgeek thank you forrest as well
19:47 forrest yea np
19:47 forrest depending on what UtahDave links I'll see about expanding the docs this weekend, so there's at least another example
19:47 whyzgeek forrest: I appreciate it
19:48 forrest Yea np
19:48 whyzgeek the more examples are the better
19:49 forrest Oh yea I agree, I'd like to see a repo of 'projects' eventually created that just shows entire setups as opposed to single formulas and such
19:49 whyzgeek somehow the link you sent is not opening for me!
19:49 whyzgeek forrest: totally
19:49 forrest yea github is being lame
19:49 whyzgeek something like salt-states
19:49 forrest it will resolve at some point.
19:49 forrest yea like salt states, but salt-projects
19:49 UtahDave OK, sorry for the delay. I was trying to put this in a gist, but github is misbehaving
19:49 UtahDave http://pastebin.com/g5BGHJkK
19:49 forrest I want to see the whole config, the /etc/salt/master, all of it
19:50 whyzgeek but take a few different setups and scenarios, it would speed up deployment and eventually speed up using salt
19:50 whyzgeek because to be honest developers just look at examples
19:50 whyzgeek that way you don't need that much of explanation
19:50 forrest Nice UtahDave, you mind if I steal that to add to: http://docs.saltstack.com/ref/file_server/file_roots.html ?
19:50 UtahDave whyzgeek: see how the same directory provides defaults?
19:50 luminous hello! say you have a salt state for something you do but which you don't hook into top.sls - cloning some repos, or resetting users, etc. now, if this state needs some pillar, you also have that skipped in pillar/top.sls.. QUESTION: if you use state.sls to call this specific state, is there a way to provide that pillar it needs?
19:51 UtahDave If you want to override something in your dev environment, just drop the file in /srv/salt/dev   otherwise they share the same /srv/salt/defaults across environments.
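For readers without the pastebin, the shape of what UtahDave describes is a master config along these lines (directory names are illustrative) — the most specific directory first, the shared defaults last, so a file dropped into /srv/salt/dev shadows the copy in /srv/salt/defaults:

    # /etc/salt/master
    file_roots:
      base:
        - /srv/salt/base
        - /srv/salt/defaults
      dev:
        - /srv/salt/dev
        - /srv/salt/defaults
      qa:
        - /srv/salt/qa
        - /srv/salt/defaults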
19:51 dumol left #salt
19:51 UtahDave forrest: that would be really helpful!  Thanks!
19:51 forrest cool!
19:51 whyzgeek UtahDave: that's cool!
19:51 UtahDave luminous: you can pass in pillar data on the cli
19:51 forrest so UtahDave, when you define the dev file_roots for example, then in your top, you just apply dev plus whatever other states you want to the machine right?
19:52 luminous UtahDave: yea, I have seen that as a dictionary, but can you also reference a pillar file?
19:52 luminous the dictionary is sufficiently long
19:52 UtahDave luminous: no, not that I know of.  Any reason why that data can't be in the regular pillar?
19:52 luminous I could add the pillar to top.sls and it wouldn't hurt anything without the state in top.sls so that is ok
19:53 UtahDave luminous: that's what I would do , if at all possible.
19:53 UtahDave forrest: yep!
19:53 pipps1 joined #salt
19:53 luminous thanks for confirming
19:53 forrest awesome
19:53 auser joined #salt
19:54 whyzgeek forrest: I am trying it :)
19:54 forrest cool
19:54 Ahlee reasking, are https://salt.readthedocs.org/en/latest/ref/modules/all/salt.modules.service.html#module-salt.modules.service really the only service options available? so there's no way to ensure a service (cron, atd, etc) is disabled?
19:56 carmony I'm just not having any luck
19:56 UtahDave Ahlee: http://docs.saltstack.com/ref/states/all/salt.states.service.html#module-salt.states.service
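That state module also covers the disable case Ahlee is after; a minimal sketch of keeping a service stopped and off at boot:

    # ensure atd is stopped and not enabled at boot
    atd:
      service.dead:
        - enable: False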
19:56 carmony with finding it in the docs, but how can I target based off of a minion's pillar data?
19:56 carmony from the command line
19:56 Ahlee UtahDave: thanks. Why the discrepancies?
19:56 Ahlee What am I missing here?
19:57 whyzgeek UtahDave: does it have to be seperate default, what's wrong with this? http://pastebin.com/Na5AcFLm
19:57 UtahDave Ahlee: You're looking at the difference between an execution module and a state module
19:57 UtahDave carmony: -I
19:57 Ahlee Ah.
19:57 forrest you'd have overlaps there whyzgeek.
19:58 UtahDave whyzgeek: yes, like forrest said
19:58 forrest because if we tell apache in the bass to drop in host file zyx, then in dev say to drop in host file abc, it's not going to like that
19:58 Ahlee so I apparently misunderstood originally, and execution modules and state modules don't match 1:1
19:58 forrest right Ahlee
19:59 forrest Salt won't know which one you really want to use whyzgeek (unless of course your base is just the very basic stuff that you want)
19:59 whyzgeek forrest: I c
20:00 UtahDave Ahlee: execution modules are the commands that actually do things on your system. "install this" "pip install that" "restart this service"
20:01 UtahDave Ahlee: state modules use the execution modules to ensure your server is in the correct state.
20:01 UtahDave Ahlee: pkg.installed, for example, is a state module. it uses the corresponding execution module to check if a package is installed, if it's not installed, then install it.
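To make the split concrete: the state side is declared in an SLS like the one below, while the execution module is what runs ad hoc from the CLI (e.g. salt '*' pkg.install vim). A minimal state declaration:

    # resolves to whichever pkg execution module (yum, apt, pacman, ...) fits the minion
    vim:
      pkg.installed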
20:03 Thiggy Just a shout out about how I keep getting forced to do something in Chef and it's like 10x harder than it would be in SaltStack. #ymmv
20:03 forrest lol
20:04 forrest You haven't convinced a switch yet Thiggy?
20:04 forrest Just configure the entire environment in salt in a day or two
20:04 Ahlee UtahDave: Right. I naively assumed that state modules were brokering through execution modules
20:04 Ahlee thus, figured for every state module, there was an execution module, and vice versa
20:05 copelco joined #salt
20:06 alexandrel Ahlee: I assumed the same thing, at first.
20:06 alexandrel Ahlee: then I looked at the code ;)
20:06 UtahDave there are actually multiple execution modules per state module.  For example, there's one pkg state module.
20:06 auser Hey UtahDave
20:06 Thiggy @forrest Just evaluating some stuff. It's for something I don't think will become permanent. #things #stuff #blah
20:06 forrest Gotcha
20:06 UtahDave Underneath that there are yum.py, apt.py, pacman.py, win_pkg.py and others that provide a virtual "pkg" execution module
20:07 UtahDave auser: hey, my man!
20:07 mapu joined #salt
20:07 forrest Did you check out http://devopsu.com/books/taste-test-puppet-chef-salt-stack-ansible.html Thiggy?
20:07 UtahDave carmony: did -I work for you?
20:07 forrest goes through setting up a simple project with ansible, shell scripts, salt, puppet, and chef
20:07 Thiggy I have not seen this, no. Thanks!
20:08 carmony trying it out now UtahDave
20:08 Ahlee UtahDave: ok, so things like watch and reload, where are those defined?
20:08 forrest it's not free, but it's an interesting read to hand to people who may be on the fence
20:08 forrest or who aren't familiar with config management
20:08 Ahlee like, what does "reload" mean on a service? reload on config change? what defines the config?
20:08 UtahDave Ahlee: Those are all handled by the Salt state compiler.  So anything can be watched, etc.
20:08 nielsbusch left #salt
20:09 Ahlee ok, so state.py would be best place find out those?
20:09 UtahDave Ahlee: the mod_watch() function defines the behavior of what happens when something is watching something else
20:10 alunduil joined #salt
20:10 Ahlee ok, so then the compiler is what goes through, searches for - watch: in the yaml, and calls mod_watch() to determine what watch means in this context?
20:11 UtahDave Ahlee: exactly!
20:11 Ahlee That's going to be interesting to explain.
20:11 Ahlee Thanks
20:11 Ahlee same for reload then?
20:11 forrest UtahDave, I'm surprised you haven't gone through and compiled all these explanations of different things into a big 'Salt Explained' doc
20:11 UtahDave Ahlee: so for example.  service.py  has a mod_watch() function that says when it's watching something else and a change is indicated in the other thing, then restart the service (or reload it, if that option is passed in)
20:12 g4rlic UtahDave: I'm digging into the salt code to see if I can extend services.running to manage xinetd based services.  Good idea, or bad idea?
20:12 Ahlee oh man, i haven't seen xinetd used in a long time.
20:12 Ahlee Bringing sexy back :)
20:12 g4rlic tftp is run under xinetd in Centos/Fedora/every other distro I've used in recent memory. ;)
20:13 g4rlic Which is the current devil of a service I'm trying to get salted.
20:13 g4rlic :\
20:13 forrest hah
20:13 Ahlee ah, good call
20:13 Ahlee i guess i still have a rsync daemon running on a host too throug xinetd
20:13 Ahlee i rescind my comments
20:14 UtahDave g4rlic: you might look in the modules directory to see if that has already been implemented.
20:14 Ahlee g4rlic: i'd write a _module/ to handle that
20:14 UtahDave ls salt/salt/modules/*serv*
20:15 copelco not sure if i'm wording this correctly, but is it possible to pass a pillar into the env variable of cmd.run?
20:16 copelco pillar dict*
20:16 UtahDave copelco: from the cli?
20:16 copelco in an sls file
20:17 JaredR joined #salt
20:18 dan1111 joined #salt
20:18 dan1111 left #salt
20:18 UtahDave copelco: yep.  - env: {{ salt['pillar.get']('mypillarval', 'defaultval') }}
20:19 mapu joined #salt
20:19 copelco but what if mypillarval is a dict?
20:20 UtahDave do you want the whole dict, or just a value from the dict?
20:20 copelco whole dict
20:20 UtahDave yeah, what I just pasted will give you the whole 'mypillarval' dict
20:20 copelco wow, ok
20:20 copelco thanks Utah
20:20 copelco erm
20:20 copelco thank UtahDave :)
20:21 UtahDave you're welcome, copelco!
20:21 forrest Should have kept pretending you were working hard in those interviews UtahDave!
20:21 UtahDave Also, just FYI, if you want to get a specific value from the dict, it would be like this:
20:21 UtahDave - env: {{ salt['pillar.get']('mypillarval:mykey', 'defaultval') }}
20:21 UtahDave and you can dive down arbitrarily deep into the dict
20:22 UtahDave - env: {{ salt['pillar.get']('mypillarval:mykey:asubkey:anotherkey', 'defaultval') }}
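For context, the colon-separated path just walks a nested pillar structure, and the second argument is the fallback when a level is missing; a pillar file backing those lookups would look something like this (names are only the placeholders from the example above):

    mypillarval:
      mykey:
        asubkey:
          anotherkey: somevalue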
20:22 UtahDave lol, forrest
20:22 JaredR joined #salt
20:24 whyzgeek forrest: were you able to make these defaults work? I tried it and it still picks up the base one. Now I have all files in defaults and base has only top with * in it. In dev, I am dropping the httpd.conf with different content. I have the same file in defaults. However, it picks the defaults one.
20:25 forrest I haven't whyzgeek, I'm only able to work on salt related stuff at home, we don't use it where I work.
20:25 whyzgeek forrest: IC
20:25 whyzgeek may be UtahDave can comment
20:26 whyzgeek I also tried to redefine the sls in dev/top.sls then I got the ID conflict error
20:26 forrest so you created the subdirectories right?
20:26 forrest such as /srv/salt/dev
20:26 forrest and put the content in there
20:26 whyzgeek ya
20:27 UtahDave whyzgeek: they're probably picking it up in the base environment since you're matching all your systems with '*'
20:27 whyzgeek UtahDave: yes I know, but the whole point was that I can override selectively
20:28 UtahDave whyzgeek: can you pastebin your top.sls?
20:29 bhosmer joined #salt
20:29 g4rlic UtahDave: I see no xinetd module in salt/modules/service.py.  Is that the file I ought to be working on?  I'm a total noob to how Salt is developed, so any documentation/guidance you know of would be helpful. :)
20:30 whyzgeek UtahDave: http://pastebin.com/rBfJF1iR
20:31 UtahDave g4rlic: if you notice, there are several service modules in the modules directory.  debian_service.py, freebsdservice.py, service.py, win_service.py
20:32 UtahDave g4rlic: each of them has a __virtual__() function. The salt loader evaluates each module's __virtual__ function at salt-minion startup. The __virtual__ returns the name it wants to be known as if all the correct criteria apply on the host.
20:33 UtahDave So for example,   on a freebsd machine,  debian_service.py and rh_service.py won't load, but freebsdservice.py will provide the "service" virtual module
20:33 forrest whyzgeek, I think you're a bit confused. Look at http://docs.saltstack.com/ref/states/top.html#other-ways-of-targeting-minions
20:33 forrest but not that actual section
20:33 forrest scroll up to the 2 examples above it
20:33 forrest that talks about multiple environments
20:33 UtahDave g4rlic: So you'll want to look at those other service modules and do something similar.
20:36 whyzgeek forrest: I see what you mean, so I have to repeat the definition but configs will be overwritten
20:36 whyzgeek forrest: let me try it
20:36 forrest each one of those has a specific webserver/init.sls item
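A sketch of the multi-environment top.sls being discussed — environment names match whyzgeek's setup, the hostname globs are invented, and each environment's webserver SLS is resolved from that environment's own file_roots:

    # top.sls
    base:
      '*':
        - core
    dev:
      'web*dev*':
        - webserver
    qa:
      'web*qa*':
        - webserver
    prod:
      'web*prod*':
        - webserver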
20:38 copelco say i have a ssh key i want to push to my web server so i can clone a private github repo. would i keep the key in pillar? is there some way to use file.managed with a pillar path?
20:38 forrest you'd keep it in pillar yea
20:38 zloidemon Hello
20:39 zloidemon Who is NOC engineer?
20:39 forrest does saltstack have a NOC?
20:40 zloidemon I'm implementing the Radius protocol in lua
20:40 zloidemon I don't know the most popular RFC
20:40 UtahDave copelco: yeah, you can put the contents of your keys in pillar. In the soon-to-be-released 0.17 file.managed has a  - pillar_contents  argument where you can put the pillar path
20:40 copelco oh wow, how soon? :)
20:41 zloidemon forrest: http://freeradius.org/rfc/rfc2865.html I'm starting from that
20:41 whyzgeek forrest: ya, still extra redundancy, but it works now, thanks ;)
20:41 forrest np
20:44 forrest I won't lie to you zloidemon, I don't have time to read through all of that, lol
20:45 robertkeizer joined #salt
20:45 mapu joined #salt
20:47 zloidemon :)
20:47 UtahDave copelco: we're probably going to cut the first 0.17 RC tonight
20:47 copelco cool
20:47 forrest Nice
20:49 baniir joined #salt
20:50 carmony UtahDave: is there a way to specify a service state's method of restarting?
20:53 UtahDave carmony: like this?  - reload: True
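(For reference, a sketch of where that flag sits in a service state; the service name and watched file are illustrative, and the watched file state is assumed to be defined elsewhere.)

    supervisor:
      service.running:
        - enable: True
        - reload: True        # ask for a reload instead of a full restart on watch changes
        - watch:
          - file: /etc/supervisor/supervisord.conf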
20:53 juanlittledevil joined #salt
20:53 carmony well, one second... I might not need it...
20:54 carmony wheezy's supervisor is having a bug where service supervisor restart fails
20:54 carmony but stopping and starting work
20:56 carmony but I fixed wheezy's problem
20:56 UtahDave woot!
20:57 lacrymology joined #salt
20:57 lacrymology I'm writing some salt modules in python, and I'm having trouble understanding what __salt__[foo] does and where the defaults for those are defined
20:59 lacrymology it looks like I can do __salt__['file.remove'] but I cannot do __salt__['file.symlink']. I can do salt.states.file.symlink instead, but there's no salt.states.file.remove, though. There's `absent` instead.
20:59 kula __salt__, __opts__ and friends are defined through the magic that is salt/loader.py. i have to meditate to clear my mind every time i try reading it.
21:00 mapu joined #salt
21:01 lacrymology kula: that's OK, but it looks like the __salt__[]-available functions are the same ones that are listed by `$ salt sys.list_functions`
21:03 lacrymology so, what's what?
21:04 kula i couldn't tell you off hand, because i can't follow the maze that is loaders.py right now. but that's where all the magic happens; untangle that and you'll discover what it is.
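(What lacrymology is bumping into: the loader injects __salt__ as a dict of *execution* module functions -- the same names sys.list_functions reports -- while salt.states.* functions such as symlink/absent are a separate layer. A rough sketch of a custom execution module using that cross-call; the module and function names are made up.)

    # /srv/salt/_modules/mytools.py  -- hypothetical custom execution module
    def tag_motd(note):
        '''
        Cross-call other execution modules through __salt__ (wired up by
        salt/loader.py when the minion loads its modules).
        '''
        minion_id = __salt__['grains.get']('id')
        return __salt__['file.append']('/etc/motd', '{0}: {1}'.format(minion_id, note))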
21:05 lacrymology kula: is there a way I can get a python REPL shell with __salt__ available?
21:09 cmthornt1n joined #salt
21:12 brutasse_ joined #salt
21:12 whit_ joined #salt
21:13 aptiko_ joined #salt
21:15 UtahDave joined #salt
21:15 g4rlic UtahDave: thanks for the tips!
21:15 redbeard2 joined #salt
21:16 g4rlic I'm on a tight time crunch, so this may have to wait (I'm going to hack around it with some cmd.run action for the time being)
21:16 g4rlic But thank you regardless.
21:16 kevinbrolly joined #salt
21:16 Thiggy joined #salt
21:17 halfss joined #salt
21:19 kula lacrymology: not sure. at least i don't know any easy way, although i wish there was.
21:19 jheise joined #salt
21:25 juanlittledevil joined #salt
21:29 bhosmer joined #salt
21:32 ezraw left #salt
21:35 blee_ joined #salt
21:37 zooz joined #salt
21:37 dyim42 joined #salt
21:38 faldridge joined #salt
21:39 devinus joined #salt
21:39 pipps1 joined #salt
21:41 HumanCell joined #salt
21:45 pipps joined #salt
21:48 whyzgeek forrest: this actually became very beautiful! I extracted all of the main ones and put them in core. So I don't have that much repetition in top either! Things now make sense for me ;)
21:48 faldridge joined #salt
21:50 nbari I want to send a my.cnf to every minion but want to change the server-id for each minion
21:50 nbari how could i do that ?
21:51 nbari based on the minion I would like to set something like server-id = 2 and so on
21:52 forrest awesome whyzgeek, glad to hear it worked out
21:52 forrest can you share what your top file ended up looking like?
21:54 jalbretsen joined #salt
21:54 UtahDave nice, whyzgeek.  That's how we like it!
21:55 UtahDave nbari: you'd use jinja to templatize your my.cnf       server-id = {{ salt['pillar.get']('serverid', 'noserverid') }}
21:56 dyim42 joined #salt
21:56 nbari ok thanks
21:56 UtahDave nbari: or if your server id is in your grains  server-id = {{ grains.get('server-id', 'noserverid') }}
21:56 UtahDave and use file.managed
21:56 ckao left #salt
21:57 AndChat|519129 joined #salt
21:57 nbari I see, and if I would like to create a relation of minions -> ids, something like {'minion_XX', '23'}, and later do a match/replace, is that possible?
21:58 ifiokjr joined #salt
21:59 UtahDave nbari: I'm not exactly sure what you mean, but I think you would want to do that in pillar
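(A sketch of the pillar approach being suggested: target each minion in the pillar top file, hand it its own serverid, and render it into my.cnf with the jinja lookup shown earlier. All file names, targets and ids here are made up.)

    # /srv/pillar/top.sls
    base:
      'db1*':
        - mysql.server1
      'db2*':
        - mysql.server2

    # /srv/pillar/mysql/server1.sls
    serverid: 1

    # state that pushes the template to every minion
    /etc/my.cnf:
      file.managed:
        - source: salt://mysql/my.cnf
        - template: jinja

    # inside salt://mysql/my.cnf
    # server-id = {{ salt['pillar.get']('serverid', 'noserverid') }}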
21:59 * scalability-junk still no time for salt yet :(
21:59 nbari I will read more about it
21:59 forrest gotta work on it at home like I do scalability-junk :P
22:00 scalability-junk forrest: haha good one. For me there is not much difference between home and work :)
22:00 forrest oh do you work from home?
22:00 mesmer_ joined #salt
22:00 forrest I guess I should rephrase it, gotta work on salt on your free time
22:00 scalability-junk I'm sort of a freelancer yeah :)
22:01 forrest fair enough
22:01 scalability-junk forrest: that's the issue not much time there either :P
22:01 forrest pssssssh
22:01 forrest excuses!
22:01 scalability-junk hehe not really still was the top contributor at the salt sprint :P
22:01 forrest I mean, between work, making food, and other assorted tasks, I have at least 2 hours of free time a night to work on what I want!
22:02 scalability-junk ok not that fair as I did dummy work mostly :P
22:02 forrest well, maybe an hour
22:02 scalability-junk forrest: yeah about 1 and a half
22:02 scalability-junk 20 minutes is for writing (new habit) and 20 min for blog posts
22:02 scalability-junk another 20 for news
22:02 mesmer joined #salt
22:03 druonysuse joined #salt
22:03 scalability-junk and the last half or sometimes one hour is for catching up with breaking bad or dexter or so :P
22:03 JaredR joined #salt
22:03 forrest heh
22:03 forrest this is why you need more monitors at home
22:03 scalability-junk haha no multitasking is the worst :D
22:04 redondos joined #salt
22:04 scalability-junk I'm so happy with mostly my x230 thinkpad
22:04 scalability-junk no chance to multitask with multiple monitors as the hassle prevents me from using more than one :)
22:05 forrest do you plug a keyboard into that?
22:06 scalability-junk forrest: mostly not, I found a great height of my arms so the keyboard doesn't really annoy me
22:06 forrest gotcha
22:07 scalability-junk but yeah, I could probably carve out an hour to work on salt, but I'm waiting for the time I want/have to finally do the automation
22:08 druonysuse joined #salt
22:08 terminalmage joined #salt
22:09 forrest Yea I'm just joking
22:10 Katafalkas joined #salt
22:10 nocturn joined #salt
22:11 kermit joined #salt
22:11 scalability-junk forrest: priorities you know :)
22:11 forrest Oh yea I know
22:12 scalability-junk what do you do with salt? more work related stuff or more private stuff?
22:12 forrest private only
22:12 forrest I'm trying to get my work to use it, no luck so far.
22:13 ifiokjr hey all - i'm pretty new to salt and server deployments in general. I've been able to provision a working environment using salt, which was pretty cool. I'm wondering if someone here could let me know what all the fuss is about openstack.
22:13 ifiokjr I've watched a few videos, read some docs and am still not clear on why it's advantageous.
22:13 scalability-junk forrest: mhh stuck with puppet/chef at work?
22:14 forrest no, we don't use config management at work
22:14 forrest my old job was all puppet though, so I'm familiar with both at this point
22:14 ifiokjr My current preferred env uses Ubuntu as it's a good entry point for starting out with salt.
22:14 forrest granted I haven't written any puppet in a while (thankfully)
22:14 scalability-junk kk but config management is not seen as useful at your current employer?
22:15 forrest Uhh, it's more the paradigm shift in thinking that is the issue.
22:15 forrest we're a RHEL shop
22:15 nbari how is the grains.item server_id generated?
22:15 scalability-junk ifiokjr: depends on the viewpoint. what do you wanna do and what do you have now? advantageous has some comparison factor built in
22:16 scalability-junk forrest: mhh but RHEL is anti config management? I would understand if RHEL had some great tool itself, but afaik it doesn't
22:16 forrest they will be integrating puppet into satellite
22:16 forrest which I am not happy about
22:18 scalability-junk at least you are familiar with puppet, doesn't make it much better but still
22:18 forrest hah yea that 'helps'
22:18 forrest it helps me say 'Oh please can we do something else' :(
22:18 ifiokjr scalability-junk: at the moment i have a simple development environment set up with salt - using a master and a minion which I can deploy to the cloud for simple django staging deployments. But I want to flesh this out for actual production.
22:19 pipps joined #salt
22:20 nu7hatch joined #salt
22:20 scalability-junk ifiokjr: alright advantageous is hard to say, but coding your config is mostly great
22:21 ifiokjr scalability-junk: so what does openstack offer on top of the basic ubuntu 12.04 deployment
22:21 scalability-junk openstack is a completely other boat here
22:22 ifiokjr scalability-junk: if it helps me to automate my workflow, and abstract away some of the nuances that can crop up - then I'm all for it!
22:22 nu7hatch hi guys, out of nowhere i started getting a "Running a benchmark to measure system clock frequency" message when running salt commands, any idea why it happened and how to get rid of it? The message says that i can disable it by setting some env variable but it looks like some random thing
22:23 ifiokjr scalability-junk: If someone can point me towards any good articles that would be great - at the moment every video or article is another company saying how important they are to openstack.
22:23 scalability-junk ifiokjr: alright so firstly openstack is a vm management and deployment environment, sort of a cloud framework
22:23 scalability-junk and salt helps in managing configs via code on servers
22:24 scalability-junk additionally it has some great perks such as remote command execution etc.
22:24 aboe joined #salt
22:25 ifiokjr scalability-junk: okay that makes more sense. So someone like digital ocean could use it to provide virtualized machines to their customers?
22:25 scalability-junk yeah
22:25 ifiokjr scalability-junk: also would be interested to know if you use it at all.
22:25 scalability-junk rackspace uses openstack to provide their services for example or most of them
22:26 ifiokjr scalability-junk: thanks, sounds like a really cool project but I don't think I'll need it at the moment.
22:26 scalability-junk ifiokjr: hard to set up, mostly, if you only need a few vms
22:26 ifiokjr scalability-junk: yeah I can imagine
22:27 ifiokjr scalability-junk: is it something you use at all?
22:27 scalability-junk but I can really give you the hint of using https://fuel.mirantis.com/ easy and great to test openstack
22:27 scalability-junk ifiokjr: I use openstack for testing and some things now and will hopefully migrate all of my vms to it in 2014
22:30 ifiokjr scalability-junk: is that to coincide with ubuntu 14.04 or just a good time?
22:31 scalability-junk ifiokjr: not that high in priority ;) doesn't have to do with ubuntu 14.04
22:31 ifiokjr scalability-junk: ha
22:31 ifiokjr scalability-junk: anyway thanks for your help - just wanted to know the basics
22:32 scalability-junk first finishing up some projects, then fully automate all vms with fallback possibility to some public clouds, and then a private cloud setup, which will reinvent the kvm setup used now.
22:32 scalability-junk look at fuel if you wanna test. it's quite great
22:32 ifiokjr scalability-junk: I don't think it's a priority for me either - not at the moment
22:33 ifiokjr yeah, am looking at the fuel website and will have a play around with it. :D
22:33 scalability-junk have fun, great documentation too
22:39 JaredR joined #salt
22:47 jacksontj joined #salt
22:54 aat joined #salt
22:54 JaredR joined #salt
23:02 thingy joined #salt
23:34 mesmer joined #salt
23:37 jacksontj joined #salt
23:41 StDiluted joined #salt
23:43 redondos joined #salt
23:52 sgviking joined #salt
23:56 sibsibsib joined #salt
