
IRC log for #salt, 2013-08-13


All times shown according to UTC.

Time Nick Message
00:02 LucasCozy joined #salt
00:04 jacksontj joined #salt
00:04 cxz joined #salt
00:05 z3uS joined #salt
00:14 baniir joined #salt
00:17 telexicon joined #salt
00:21 aat joined #salt
00:31 whit joined #salt
00:31 logix812 joined #salt
00:32 dthom91 joined #salt
00:36 dthom911 joined #salt
00:39 balltongu_ joined #salt
00:39 godber joined #salt
00:47 pjs_ joined #salt
00:48 sinh joined #salt
00:57 z3uS joined #salt
01:02 mannyt joined #salt
01:09 aat joined #salt
01:10 cxz still having some trouble with the publish module
01:10 cxz doesn't seem to do anything
01:11 cxz but gives success
01:16 godber joined #salt
01:23 dthom91 joined #salt
01:25 Nexpro1 joined #salt
01:27 liuyq joined #salt
01:27 Lue_4911 joined #salt
01:27 liuyq joined #salt
01:38 lazyguru joined #salt
01:41 bhosmer joined #salt
01:49 ahammond cxz publish.publish has been a little unreliable until 0.16.2
01:52 Corey Quite. It took me screaming at Tom to get that fixed. :-) He returned with a 500 line diff.
01:55 bhosmer_ joined #salt
01:58 malinoff joined #salt
02:02 xl1 joined #salt
02:08 baniir joined #salt
02:10 L2SHO_ joined #salt
02:13 Gifflen joined #salt
02:15 cxz ahammond: we're on 0.16.2
02:19 cocoy1 joined #salt
02:20 aat joined #salt
02:32 z3uS joined #salt
02:37 whit joined #salt
02:44 zonk1024 joined #salt
02:51 oz_akan_ joined #salt
02:51 __Jahkeup__ joined #salt
03:08 whit joined #salt
03:09 rfgarcia joined #salt
03:20 luminous Corey: seriously? hah
03:20 luminous what needed 500 lines to fix?
03:21 Furao joined #salt
03:34 jpeach joined #salt
03:48 xl1 joined #salt
03:56 __Jahkeup__ joined #salt
04:04 baniir joined #salt
04:20 druonysuse joined #salt
04:20 druonysuse joined #salt
04:27 saurabhs joined #salt
04:28 racooper joined #salt
04:41 sephoreph joined #salt
04:42 sephoreph Hi, I've just started using salt and so far it's working really well!  I was wondering if it's possible to purge some packages before installing others in the same state file?  The reason for this, is that I have ~50 servers with an old version of PHP/MySQL installed via yum, and I'm now upgrading it to the current version of both using a different upstream repo.  I'd like to get rid of the old versions (i.e. yum -y erase mysql* p
04:42 saurabhs HI I am little confused about Salt caching. What is cached on Salt master and what is cached on salt -minion?
04:42 sephoreph Hope that made sense, still getting my head around all the terminology :)
04:43 racooper sephoreph: are the package names the same for what you want to remove and what you want to replace them with?
04:44 saurabhs sephoreph: you will have to explicitly specify that in your state so that before it executes package install step it removes the old package
04:45 baniir joined #salt
04:47 ds_shadof pls show me an example of how to restart a service and run a cmd if a config file changes
04:49 racooper ds_shadof: look into the "watch" parameter
04:49 racooper (sorry, I don't have any examples but there are plenty on github)
04:49 ds_shadof i've been looking at it for 3 days already
04:50 gamingrobot ds_shadof: http://docs.saltstack.com/topics/tutorials/states_pt2.html
04:51 rfgarcia_ joined #salt
04:51 ds_shadof gamingrobot, how can i modify this example to run a cmd after the config file changes?
04:52 gamingrobot this might help better
04:52 gamingrobot http://intothesaltmine.org/blog/html/2013/03/01/using_the_cmd_module_in_salt_states.html
04:52 sephoreph Hmm, I'm removing php* / mysql* and installing specific php / mysql rpms instead.  i.e. php-fpm, php-cli, etc.  If I try to add two pkg.* lines (e.g. http://docs.saltstack.com/ref/states/all/salt.states.pkg.html#salt.states.pkg.installed - adding pkg.purged: above pkg.installed:) I get an error stating "Name "php-fpm" in sls "php-fpm" contains multiple state decs of the same type"
04:52 sephoreph Maybe I should put the purge under another name
04:54 gamingrobot ds_shadof cmd.wait is very useful in situations where you need to call a command based on an event, like a configuration file being updated.
04:54 ds_shadof gamingrobot, ok i'm trying
04:56 racooper sephoreph, if you're running states, it may take two separate state runs: one to remove the old version, and one to add the new, if they are the same package names (or have dependencies on the same named packages)
04:56 __Jahkeup__ joined #salt
04:57 gamingrobot why does archive module not have bz2?
04:58 sephoreph2 joined #salt
04:58 sephoreph2 sorry my internet just picked an awesome time to drop out
05:01 gamingrobot oh nvm I can just use the tar function and pass the bz2 flags
05:02 syngin ds_shadof: cmd.wait
05:04 mgw joined #salt
05:05 druonysuse joined #salt
05:12 ollins joined #salt
05:15 ds_shadof syngin, thx i think i get what i wanted
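A minimal sketch of the cmd.wait pattern syngin and gamingrobot point ds_shadof to, with illustrative file, service and script names (none taken from the log): the service restarts and the command runs only when the managed config file actually changes.

    /etc/myapp/myapp.conf:
      file.managed:
        - source: salt://myapp/myapp.conf

    myapp:
      service.running:
        - watch:
          - file: /etc/myapp/myapp.conf    # restart the service when the config changes

    regenerate-cache:
      cmd.wait:
        - name: /usr/local/bin/regenerate-cache.sh
        - watch:
          - file: /etc/myapp/myapp.conf    # run the command only when the watched file reports changes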
05:15 it_dude joined #salt
05:17 sephoreph2 Figured it out using different ID declarations, thanks! :D
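A sketch of the separate-ID-declaration approach sephoreph settled on: the purge and the install live under different IDs in the same SLS, so the "multiple state decs of the same type" error goes away. Package names here are placeholders rather than sephoreph's exact list.

    remove-old-mysql:
      pkg.purged:
        - name: mysql

    remove-old-php:
      pkg.purged:
        - name: php

    php-fpm:
      pkg.installed:
        - require:                    # install only after the old packages are gone
          - pkg: remove-old-mysql
          - pkg: remove-old-php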
05:21 Katafalkas joined #salt
05:35 mgw joined #salt
05:37 druonysuse joined #salt
05:40 middleman_ joined #salt
05:52 xl1 joined #salt
05:56 lyddonb_ joined #salt
05:56 kstaken joined #salt
06:00 unicoletti_ joined #salt
06:00 stevetodd joined #salt
06:17 druonysuse joined #salt
06:17 druonysuse joined #salt
06:19 Katafalkas joined #salt
06:21 gildegoma joined #salt
06:24 dthom91 joined #salt
06:33 berto- joined #salt
06:45 druonysuse joined #salt
06:45 renothing joined #salt
06:47 matanya joined #salt
06:51 druonysuse joined #salt
06:51 druonysuse joined #salt
07:02 vaxholm joined #salt
07:06 rahul joined #salt
07:06 Guest20101 left #salt
07:07 ml_1 joined #salt
07:10 Katafalkas joined #salt
07:12 Katafalkas joined #salt
07:14 flepied joined #salt
07:15 druonysuse joined #salt
07:15 druonysuse joined #salt
07:15 davidone joined #salt
07:19 dthom91 joined #salt
07:20 druonysuse joined #salt
07:21 mgw joined #salt
07:24 druonysuse joined #salt
07:27 __Jahkeup__ joined #salt
07:27 balboah joined #salt
07:29 druonysuse joined #salt
07:29 __gotcha joined #salt
07:32 carlos joined #salt
07:33 druonysus joined #salt
07:33 druonysus joined #salt
07:36 Xeago joined #salt
07:37 chubrub joined #salt
07:40 tomtomtom joined #salt
07:41 chubrub Hi all! I have a strange problem with the newest version of saltstack. Here is my sls file: http://pastebin.com/JzUzMd6D It's quite simple and the dpkg command should be fired after the deb file is placed on the minion. Unfortunately I get an error from dpkg that the file does not exist... What am I doing wrong here?
07:43 syngin chubrub: check out cmd.wait
07:43 syngin chubrub: http://intothesaltmine.org/blog/html/2013/03/01/using_the_cmd_module_in_salt_states.html
07:44 syngin chubrub: otherwise, use ordering: http://docs.saltstack.com/ref/states/ordering.html
07:44 malinoff Are you sure you have 'openjdk-7-jre-headless' in your repositories? What's your OS?
07:44 Furao chubrub: not an answer to your question, but: https://github.com/bclermont/states/blob/master/states/elasticsearch/init.sls
07:44 Furao this used to work
07:44 syngin chubrub: order: last
07:44 syngin malinoff: i think the problem is the manual file going into /root/
07:44 Furao order should only be used in last resort
07:45 syngin malinoff: the "elasticsearch-install" is probably failing before the file is copied to /root/
07:45 syngin Furao: agreed
07:45 chubrub @malinoff: java is installed correctly, I'm using wheezy
07:45 Furao the problem is because line 14 is badly indented
07:45 Furao require is not a salt module
07:45 Furao it needs to be an argument of cmd.run
07:46 Furao it needs to start with "- "
07:46 syngin Furao: ahh, good spotting.
07:46 malinoff Furao: right
07:46 Furao and be indented 2 2 spaces
07:46 Furao 2 2 -> 2
07:46 chubrub after state is executed, deb file is on the minion, so ordering might be the issue
07:46 syngin chubrub: then look at cmd.wait
07:46 chubrub ok, checking indents...
07:46 Furao and I don't think you should keep .deb in your salt state repo
07:47 Furao especially with ES which is quite big
07:47 Furao i keep a separate archive for that http://archive.robotinfra.com/
07:47 Furao it's useful when github or pypi are down
07:48 Furao and also for when saltstack upgrades salt to a new version which is broken and you can't use bootstrap to install an older version
07:48 Furao http://archive.robotinfra.com/mirror/salt/0.15.3/ <--- life saver
07:49 gamingrobot is there a way to use cmd.wait if any file in a directory changes?
07:49 chubrub yup, there was a problem with require and its indentation - thanks guys!
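For reference, a sketch of the corrected layout Furao describes: require/watch is not a state of its own but a list argument of the cmd state, written with a leading "- " and indented two spaces under cmd.wait (file names loosely follow the elasticsearch example and are illustrative):

    /root/elasticsearch.deb:
      file.managed:
        - source: salt://elasticsearch/elasticsearch.deb

    elasticsearch-install:
      cmd.wait:
        - name: dpkg -i /root/elasticsearch.deb
        - watch:
          - file: /root/elasticsearch.deb    # fires only after the managed file state reports a change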
07:49 Furao gamingrobot: if you use file module with file/directory, yes
07:49 dthom91 joined #salt
07:49 Furao file module -> file state
07:50 gamingrobot perfect
07:50 chubrub This solution is only temporary due to a bug: https://github.com/saltstack/salt/issues/6563
07:50 chubrub And I'm going to use reprepro for delivering custom deb packages
07:51 Furao I got a reprepro state :)
07:51 Furao and also I wrote a django webui on top of reprepro
07:51 Furao to make it easier for my dev to manage their own .deb
07:51 chubrub my reprepro state is also ready  :)
07:52 chubrub nice!
07:52 chubrub my devs prefer term ;)
07:52 Furao they're java devs
07:53 Furao they need a mouse to do basic things
07:54 chubrub yeah! and strong machines to run their code ;]
07:54 malinoff Furao: >they need a mouse to do basic things; and prepared 400+ mb environment to run 'Hello, world!'
07:55 Furao and oracle jdk 1.6.4.311.3. patch 15 32 bits on RHEL
07:55 Furao because it's supposed to run anywhere
07:57 Furao Ran 686 tests in 45967.573s
07:57 Furao FAILED (failures=1)
07:57 malinoff LOL
07:57 Furao almost 13 hours to end with a failure due to the ruby gems mirror being down
07:57 malinoff Now its 'java haters channel' :)
08:01 Furao it's just so easy to make fun of java
08:03 Furao there is a french expression that says "java pas de problème" (java instead of "j'avais"), which translates to "I didn't have problems before"
08:07 Katafalkas joined #salt
08:07 scott_w joined #salt
08:10 felixhummel joined #salt
08:16 zooz joined #salt
08:16 syngin picking on java is like making fun of the slow kid - he can't help that he's like that. and there's always the promise that "one day!"
08:17 ml_11 joined #salt
08:20 qba73 joined #salt
08:31 ramesh joined #salt
08:36 scalability-junk oh big funding for ansibleworks
08:45 felixhummel syngin: lmao!
08:49 tomeff joined #salt
08:52 japage joined #salt
08:53 claudiu joined #salt
08:54 claudiup joined #salt
08:55 claudiup Hi all, I want to loop through all registered minions in a sls file and I can't manage to find something similar in the official documentation
08:56 az87c joined #salt
08:57 claudiup Trying to do something like this:
08:57 claudiup {% for host in  pillar['master']['id'] %} {{ host }}:   file.managed:     - name: /etc/nagios3/conf.d/{{ host }}.cfg     - source: salt://nagios-server/host.jinja     - template: jinja     - mode: 644     - context:       client-hostname: {{ host }}       client-address: {{ host }}       client-alias: {{ host }} {% endfor %}
08:59 gamingrobot gist or pastebin?
08:59 Furao claudiup: there is no easy way to handle that. I use something similar to nagios (sentry) and I ended up creating my own auto-discovery of the hosts and NRPE checks to perform, using salt mine
09:00 claudiup http://pastebin.com/zYVPtjkf
09:00 malinoff Why don't you specify grains['host']?
09:00 claudiup Can I look through all hosts with grains['host']?
09:00 Furao malinoff: because this is the value of the nagios server only, not all hosts :P
09:01 claudiup I have a nagios-server state, that is applied to a single server
09:01 malinoff Furao: i was asking claudiup, sorry :)
09:01 claudiup And in that state I want to look through all minions and gather details from them
09:01 Furao claudiup: if someone tell you about publish.publish ignore them
09:02 Furao you can't rely on publish for monitoring, as a down minion won't be there to reply and ends up not monitored
09:03 claudiup So there is no easy way of looping through all active minions inside a state?
09:03 Furao yes and no
09:03 Furao the only good way is using salt mine
09:04 Furao I wrote my own salt module to send to salt mine what are the minions roles
09:04 scalability-junk claudiup: take a look at the reactor feature.
09:04 Furao scalability-junk: how reactor can solve that problem?
09:04 scalability-junk it can trigger loadbalancer updates and pillar updates from slaves for example
09:05 Furao loadbalancer for nagios?
09:05 scalability-junk Furao: when a new nagios host is configured it sends a 'data.. new host with fqdn or ip' 'new_host'
09:06 Furao yes, and what if the host evolves over time and more states get applied to it
09:06 scalability-junk Furao: then there can be new triggers.
09:06 Furao well try the reactor way, good luck :)
09:06 scalability-junk Furao: it was just one way to think about it
09:07 claudiup Ok, what about something simpler.. How can I loop through all registered minions inside a state?
09:07 Furao claudiup: that's your original question
09:07 Furao you loop in your questions list? :)
09:08 claudiup Well, all answers so far are complex to be honest for a simple for loop :)
09:09 scalability-junk claudiup: the thing is autodiscovery is a little complex
09:09 claudiup Even salt mine, but I will try with that one
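A rough sketch of the salt mine route Furao recommends, swapping claudiup's pillar lookup for mine data. It assumes the minions publish their addresses through a mine function such as network.ip_addrs; the context keys and template path mirror claudiup's paste but are otherwise illustrative.

    {% for host, addrs in salt['mine.get']('*', 'network.ip_addrs').items() %}
    /etc/nagios3/conf.d/{{ host }}.cfg:
      file.managed:
        - source: salt://nagios-server/host.jinja
        - template: jinja
        - mode: 644
        - context:
            client_hostname: {{ host }}
            client_address: {{ addrs[0] }}
    {% endfor %}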
09:09 Furao I spent weeks on that specific problem in the past year (long time salt user) and there is no simple answer for that
09:09 scalability-junk you want an up to date list of all your services, which gets updated on error, updates etc.
09:10 scalability-junk Furao: how do you handle service removals from a loadbalancer, for example on update?
09:10 scalability-junk or when it crashed?
09:10 Furao my LB service handling strategy was different for every client who needed it
09:10 Furao so I ended up implementing it per client, I don't have a global solution for it
09:11 scalability-junk Furao: that's something I really dislike :(
09:11 Furao at least I had been able to get a single solution for all monitoring requirements
09:11 scalability-junk Furao: could you give me some hints?
09:12 Furao for LB handling? well I use shinken to check for failure (using business rules) and I end up with an event that triggers salt to perform the change on the LB
09:12 scalability-junk how do you trigger the change?
09:12 scalability-junk as I said I was probably going to do it with the reactor system... but you disliked that idea
09:13 Furao shinken -> local minion -> event on bus
09:13 Furao the reactor is good
09:13 Furao but not for monitoring auto discovery
09:13 malinoff Well, one possible way is to manage the /var/cache/salt/master/minions file from the master to a minion
09:14 scalability-junk Furao: but why not use it for both...
09:14 Furao using my way I can even have the host monitored BEFORE VM is even created
09:14 scalability-junk Furao: ok I get your point
09:15 scalability-junk what about changing ips etc. how do you handle that?
09:16 krissaxton joined #salt
09:17 Furao when the minion starts it runs all configured salt mine modules
09:17 Furao which send data to the master, and I made sure to include IPs
09:17 Furao and they run every 60 minutes (the default)
09:18 Furao I wrote a module monitoring.data
09:18 Furao it takes some pillar data and local info and sends it to the master
09:18 scalability-junk ok so changes would only get picked up every 60 minutes, except when a host is failing you would remove it via shinken
09:18 Furao data is persistent on the master
09:18 Furao if a host goes down its data is still available
09:18 Furao unless the master goes down :)
09:19 Furao most of the time, IPs change on VM reboot
09:19 Furao but yeah, you can move an elastic IP live
09:20 Furao but the elastic IP is not terminated on the EC2 VM itself
09:20 Furao so you can't detect the change from the VM itself
09:20 Furao sure, that solution is no magic solution for all types of infra/clouds
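The minion-side half of what Furao describes (publishing IPs and other data to the mine on a 60-minute cycle) looks roughly like this; the particular functions are assumptions, not his custom monitoring.data module:

    # /etc/salt/minion (illustrative)
    mine_functions:
      network.ip_addrs: []      # publish the minion's IP addresses to the master
      grains.items: []          # publish grains (fqdn, roles, ...) as well
    mine_interval: 60           # minutes between mine updates (the default)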
09:24 matanya joined #salt
09:33 Nexpro1 joined #salt
09:33 jwholdsworth joined #salt
09:35 Katafalkas joined #salt
09:35 scalability-junk Furao: thanks so far not sure what to use so :D
09:45 bhosmer joined #salt
09:45 ze- any idea how to securely check the minion's name (hostname or id) while generating its pillars ?
09:47 cocoy3 joined #salt
09:53 pnl joined #salt
10:14 mike25ro joined #salt
10:14 mike25ro hi guys
10:14 mike25ro is state.highstate run automatically in the background at a predefined interval?
10:18 ollins joined #salt
10:18 Furao you can do it
10:19 Furao it's in the doc somewhere
10:19 Furao under scheduler
10:19 Furao http://docs.saltstack.com/topics/jobs/schedule.html
10:23 gamingrobot yay this was just what i was looking for
10:25 mike25ro Furao: thanks buddy
10:25 mike25ro :)
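The scheduler config behind the link Furao posted looks roughly like this when used to rerun highstate periodically (the interval here is an arbitrary example):

    # /etc/salt/minion (illustrative)
    schedule:
      highstate:
        function: state.highstate
        minutes: 60             # run a highstate every hour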
10:30 david_a joined #salt
10:36 twobitsp1ite joined #salt
10:36 gaoyang_ joined #salt
10:39 bturner joined #salt
10:39 andyshin` joined #salt
10:42 eskp joined #salt
10:42 eskp joined #salt
10:45 matanya joined #salt
10:46 werewolf13 joined #salt
10:46 fredvd joined #salt
10:46 vaxholm joined #salt
10:46 MK_FG joined #salt
10:46 yota joined #salt
10:46 Vivek joined #salt
10:46 esrax joined #salt
10:50 whiskybar joined #salt
10:53 giantlock joined #salt
10:54 blee joined #salt
11:04 Katafalkas joined #salt
11:06 xl1 left #salt
11:12 flepied joined #salt
11:13 knightsamar joined #salt
11:16 it_dude joined #salt
11:16 qba73 joined #salt
11:21 logix812 joined #salt
11:24 lemao joined #salt
11:30 ramesh I am trying gitfs with salt
11:31 ramesh Is there any way to locate top.sls file from a folder in my git repo ?
11:32 Furao from a folder?
11:32 Furao ah just mix gitfs + rootfs
11:33 Furao I file.managed my master top.sls, but this is the only file not in gitfs
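A sketch of the mixed setup Furao describes, where the roots backend serves only the master top.sls and everything else comes from gitfs (the repo URL is a placeholder):

    # /etc/salt/master (illustrative)
    fileserver_backend:
      - roots                   # serves top.sls from file_roots
      - git                     # serves the rest of the state tree from the repos below

    file_roots:
      base:
        - /srv/salt             # contains only top.sls

    gitfs_remotes:
      - git://example.com/salt-states.git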
11:36 ml_1 joined #salt
11:38 Xeago joined #salt
11:39 baniir joined #salt
11:39 m_george|away joined #salt
11:53 matanya_ joined #salt
11:54 Katafalkas joined #salt
11:57 harleytaz joined #salt
11:57 __Jahkeu_ joined #salt
11:58 harleytaz left #salt
11:59 Katafalkas joined #salt
12:06 zach joined #salt
12:07 Katafalkas joined #salt
12:07 spiksius joined #salt
12:07 SpX joined #salt
12:08 mechanicalduck joined #salt
12:09 whiskybar joined #salt
12:14 faldridge joined #salt
12:17 Ryan_Lane joined #salt
12:24 oz_akan_ joined #salt
12:25 SpX joined #salt
12:26 linuxguy joined #salt
12:27 linuxfan hello
12:28 oz_akan_ joined #salt
12:29 jslatts joined #salt
12:38 ramesh joined #salt
12:41 cron0 joined #salt
12:42 whiskybar joined #salt
12:43 flepied left #salt
12:44 ramesh left #salt
12:44 ramesh joined #salt
12:53 baniir joined #salt
12:55 brianhicks joined #salt
12:56 lempa joined #salt
12:57 Katafalkas joined #salt
12:58 juicer2 joined #salt
13:01 __Jahkeup__ joined #salt
13:03 scalability-junk is delegating commands to other minions available in salt too? something like delegate in ansible?
13:03 scalability-junk seems to be much easier to use for removing services from specific servers for a given run and add it in the end.
13:06 anteaya joined #salt
13:10 mechanicalduck_ joined #salt
13:10 Katafalk_ joined #salt
13:12 bhosmer joined #salt
13:13 aleszoulek joined #salt
13:15 baniir joined #salt
13:16 whiskybar joined #salt
13:17 Ryan_Lane1 joined #salt
13:17 Gifflen joined #salt
13:18 fxhp http://techcrunch.com/2013/08/12/ansibleworks-raises-6m-for-popular-open-source-and-easy-to-use-it-automation-framework/
13:19 toastedpenguin joined #salt
13:20 Furao is ansible less buggy over each release? unlike salt?
13:22 Furao just found another bug with pillars, and now I have to convert tons of {{ grains[key] }} to {{ salt['grains.get'](key) }} :(
13:23 kermit joined #salt
13:23 xt where does everyone store their pillars? in git?
13:23 xt I wish there was a good web and db system for storing pillars
13:23 xt storing/designing even
13:23 __Jahkeup__ joined #salt
13:25 aat joined #salt
13:30 Furao xt: I wrote a salt-cloud webui in django over the past weeks and there is a pillars section. eventually pillars will be editable in some json editor widget
13:30 xt Furao, yes, that's sort of what I had in mind
13:31 xt preferably something that discovered the structure of the pillar data and made elegant choices
13:33 cocoy3 left #salt
13:35 xt I wish there was more focus on the salt web stuff
13:35 scalability-junk Furao: yeah ansible seems a bit less buggy, but that should change with the jenkins stuff, which is in the works... at least I hope
13:35 mike25ro Furao: why doesn't {{ grains[key] }} work inside a pillar? i just tested and it seems to work. this is how i send pillar data to each minion: i have a key on the minion and from the pillar top.sls i just send pillar data using grains
13:35 xt I just looked at the ansible web frontend :-)
13:36 racooper joined #salt
13:36 scalability-junk xt: I dislike the opencore mentality
13:36 xt scalability-junk: who does not
13:36 scalability-junk that's really something I do not wish to have for such an integral part of the stack
13:36 mike25ro open-core? meaning?
13:37 scalability-junk if one or 2 apps/projects have an opencore I can live with that, but the underlying stack... dear god
13:37 scalability-junk mike25ro: open-sourcing the basic things and selling a paid closed-source version for the advanced stuff.
13:37 xt mike25ro: http://en.wikipedia.org/wiki/Open_core
13:37 scalability-junk in ansible that would be rest api, interface, autoscaling and in the future probably more ;)
13:37 mike25ro ah got it ... i imagined that is what you are saying :)
13:37 mike25ro yeah .. i just saw on their site...
13:38 mike25ro well.. you can not blame them for wanting some € / $
13:38 kmrhb joined #salt
13:38 xt mike25ro: who does :-)
13:38 xt everyone is still free to dislike the model, even if I want them to succeed economically
13:38 mike25ro true
13:39 pdayton joined #salt
13:39 scalability-junk mike25ro: I don't mind paying for a support contract, but having the ability to get the support contract from someone else or doing it inhouse is something I want as a choice.
13:39 mike25ro scalability-junk:  i agree with you.
13:40 scalability-junk (it still contributes to the project and in turn to the economic success of the main support firm)
13:40 mike25ro i wouldn't mind paying for a service that is good... or software
13:40 scalability-junk but if I switched to in-house and was using the closed source parts, I'd be screwed
13:40 mike25ro i totally agree
13:40 mike25ro you have to have the possibility to have a close house :)
13:40 scalability-junk especially since companies can fail. it's not certain the code will be open-sourced afterwards.
13:41 mike25ro closed :)
13:41 mike25ro true
13:41 mike25ro risk is too great especially if you base your entire infra on smth like that
13:41 aleszoulek joined #salt
13:41 mike25ro i would prefer to pay... for the entire product but to know i can use it without need to be connected to their servers etc
13:41 scalability-junk yeah I hope gitlab for example isn't going down that road too far. I love the idea, but closed source :(
13:42 scalability-junk mike25ro: salt is better anyway :)
13:42 mike25ro scalability-junk: check this out http://www.turnkeylinux.org/gitlab
13:42 mike25ro i am not sure if you guys know about turnkeylinux, but is awesome
13:43 mike25ro i use it a lot...especially for the amazon S3 integration : you can backup the entire vm on S3
13:43 anteaya joined #salt
13:43 mike25ro later... if the vm crashes... you can install a fresh copy and download the updates ... is pretty awesome.
13:43 scalability-junk mike25ro: already running gitlab on my hardware and will hopefully use salt to set it up and manage it ;)
13:43 mike25ro :)
13:44 lyddonb_ joined #salt
13:44 mike25ro i am just a newbie here @salt ... so .. i am still reading a lot
13:44 __Jahkeup__ scalability-junk: you have some states for that? :D
13:44 scalability-junk planned similar to that too, just with git deploy, salt, kvm, duplicity, xtrabackup, etc.
13:45 kalmar joined #salt
13:45 scalability-junk __Jahkeup__: planned, no time yet. but will be in the states orga on github. already have an empty repo...
13:45 scalability-junk but will probably take a bit. as it's not the first state on my list :)
13:45 __Jahkeup__ scalability-junk: awesome! I'll keep an eye out :)
13:46 baniir joined #salt
13:46 __Jahkeup__ scalability-junk: I may be making one myself sometime soon so I'll see if I have anything that can be of use later
13:47 scalability-junk __Jahkeup__: https://github.com/saltstack-formulas/gitlab-formula
13:48 scalability-junk you are free to push your work in there, I'll gladly try to work together on it, when the time does allow it
13:48 pdayton joined #salt
13:48 __Jahkeup__ scalability-junk: sweet sounds good to me! I'll let you know when I can get around to that
13:49 scalability-junk don't hurry won't be able to start for at least 3 weeks :D
13:50 __Jahkeup__ oh! well in that case I think we'll both be ready for that at the same time :)
13:50 mike25ro on a different ... level .. have you guys used latest version of glusterFS to store KVM vms ?
13:50 mike25ro i am wondering if it is buggy ... or crap like that
13:50 __Jahkeup__ mike25ro: I'm currently setting up my env for that :)
13:51 lesnail joined #salt
13:51 mike25ro __Jahkeup__:  you are my man :)
13:51 mike25ro i have used an older version of gluster to store data.. and it was decent
13:51 * scalability-junk waits till cephfs is ready to use
13:51 __Jahkeup__ mike25ro: I've been using ceph but I'm switching to glusterfs for instances
13:51 __Jahkeup__ at least for now
13:51 scalability-junk __Jahkeup__: why not stable enough? talking about cephfs?
13:52 mike25ro __Jahkeup__:  scalability-junk   i was looking at ceph as well.... but gluster seems a lot easier to setup and manage
13:52 __Jahkeup__ scalability-junk: yea I've been getting strange IO errors on my vms
13:53 mike25ro __Jahkeup__:  i am using at this moment Proxmox ... with nfs for storing the machines.
13:53 mike25ro i do not have a huge load... just 10 vms ... on 3 nodes
13:54 alexandrel Damn, I don't getting, I'm looking at utils/templates.py more specifically at the py template handler. I can see that the __salt__ structure is passed to the python template (which is loaded with load_source). But __salt__ is invisible from my python templates.
13:54 __Jahkeup__ mike25ro: that's been pretty much defacto storage but I really would like to use cephfs, glusterfs is almost dropin for NFS but with clustering and such
13:54 alexandrel Erm... that should read: Damn, I don t get it.
13:54 __Jahkeup__ mike25ro: cephfs would be ideal and ceph overall has proven to be awesome
13:54 mike25ro __Jahkeup__:  i have read some things... indeed so it seems
13:55 scalability-junk __Jahkeup__: so we can see ceph states soon \o/
13:55 __Jahkeup__ yep
13:55 mike25ro i am just curious to see if glusterfs can match it
13:55 __Jahkeup__ scalability-junk: working on that today :)
13:55 scalability-junk so why glusterfs first?
13:55 zach joined #salt
13:55 scalability-junk mike25ro: not yet; it has a different approach. ceph has a storage server with layers for different things: fs, block, object
13:56 scalability-junk glusterfs is a fs, which adds layers to cope with block, or at least they try...
13:56 Gifflen joined #salt
13:56 scalability-junk so I think ceph has a better structure to start/begin with.
13:56 __Jahkeup__ I think so too
13:56 __Jahkeup__ I'm really hating myself for having to use gluster :/
13:57 mgw joined #salt
13:57 scalability-junk ^^
13:57 whiskybar joined #salt
13:57 scalability-junk then jump into using cephfs and hope for the best :D
13:57 __Jahkeup__ I just may! ;D hell it would give me one less state to deal with!
13:57 lesnail Hey everyone, I have a pkg state using "sources" that does not want to succeed. Obviously there is an error with the package naming as the error is "[ERROR   ] Package file salt://resources/sbt_0.12.4.deb (Name: sbt:all) does not match the specified package name (sbt)." Im using salt 0.16.2 and run master and slave on debian 7. The state and debug output can be found here: http://pastebin.com/E7wQpsh1 . Any help is appreciated.
13:58 scalability-junk __Jahkeup__: hehe which underlaying filesystem do you wanna use?
13:58 lesnail My problem is that I can't name the package sbt:all, as that would conflict with yaml syntax
13:59 __Jahkeup__ scalability-junk: zfs :P though if we're being dangerous we could go with btrfs ;D
13:59 scalability-junk __Jahkeup__: go full in and use btrfs :D
13:59 alexandrel btrfs is sooo production ready.
13:59 * alexandrel coughs.
13:59 scalability-junk living on the edge, breathing in the risk of failure... software failure :D
14:00 __Jahkeup__ yeah! I think I may stick with zfs and dump gluster xD
14:00 ipmb joined #salt
14:01 scalability-junk __Jahkeup__: go with btrfs and ceph and make your data backup sit in zfs and gluster :D
14:01 p3rror joined #salt
14:02 __Jahkeup__ scalability-junk: nawh, have to be dangerous and use cephfs for everything :P
14:02 __Jahkeup__ besides my project has the "beta" disclaimer
14:03 __Jahkeup__ well at least internally, a few professors/faculty members won't miss their vms
14:03 __Jahkeup__ :P
14:06 scalability-junk __Jahkeup__: I bet daily vm snapshots kept for a week plus monthly snapshots should be in your budget, no?
14:06 __Jahkeup__ scalability-junk: unfortunately the funding isn't here. No budget for a student being revolutionary :P
14:07 __Jahkeup__ scalability-junk: they won't even give me an older storage array
14:07 SEJeff_work __Jahkeup__, You know glusterfs works as well as ceph and isn't beta
14:07 SEJeff_work it also doesn't require insanely new kernels for the clients
14:07 scalability-junk SEJeff_work: but it doesn't provide object and block if that's needed ;)
14:07 SEJeff_work ceph is good stuff, Sage Weil is a really bright guy
14:08 jalbretsen joined #salt
14:08 mike25ro left #salt
14:08 scalability-junk SEJeff_work: yeah I love the papers about ceph, quite interesting.
14:08 SEJeff_work scalability-junk, I'm afraid it does. The commercial openstack redhat sells uses openstack object store ontop of it :)
14:08 __Jahkeup__ SEJeff_work: yeah, I really would prefer ceph over gluster though
14:08 SEJeff_work scalability-junk, It is newish in glusterfs 3.3. __Jahkeup__ why?
14:09 __Jahkeup__ SEJeff_work: the translators for glusterfs hardly count as block storage in my opinion
14:09 __Jahkeup__ ceph's object distribution - CRUSH - has proven itself to me already
14:10 __Jahkeup__ SEJeff_work: tbh gluster comes off as a NFS replacement not object/block storage
14:10 scalability-junk SEJeff_work: yeah translators are mostly not as good as the right implementation in the first place.
14:10 scalability-junk I agree with glusterfs being great as nfs, but it will not shine over ceph in the object and blockstorage places.
14:11 scalability-junk __Jahkeup__: read the papers about crush?
14:11 __Jahkeup__ scalability-junk: skimmed as time allowed :) it was quite intriguing
14:11 mannyt joined #salt
14:12 scalability-junk yeah not too bad.
14:12 SEJeff_work scalability-junk, not sure I follow how they are a bad implementation. With O_DIRECT, glusterfs is quite fast. Also, the fuse client mounts it up as though it is local storage. I'm not trying to argue or troll, I'm actually curious.
14:13 SEJeff_work I just remember talking to Sage in person a few years ago at the socal linux expo. His words verbatim were that they didn't backport the ceph client to older kernels and that you need to use bleeding edge kernels for ceph clients or bad things could happen. Also, that it was considered "beta"
14:13 SEJeff_work where none of that is a problem with glusterfs. Thats just my experience
14:13 scalability-junk SEJeff_work: I'm not arguing that the fs implementation in ceph is better than in glusterfs. It's the block and object storage I think is better.
14:13 Furao mike25ro: it's quite complicated to reproduce, it only happens when using salt.client.Client and a custom _grains/ module
14:14 SEJeff_work scalability-junk, Right. Thats what I'm asking about. Why do you think that? I'm just curious
14:14 mrpull joined #salt
14:15 SEJeff_work but if you'd rather not go in detail, thats fine.
14:15 danielbachhuber joined #salt
14:16 jbean joined #salt
14:16 scalability-junk SEJeff_work: I don't want to go into detail as that differs from case to case.
14:18 SEJeff_work fair enough
14:18 scalability-junk But a few things where I think ceph has an advantage. btrfs is recommended with ceph, which has quite a few features giving it a headstart. last time I checked glusterfs didn't integrate well with it
14:19 aat joined #salt
14:19 scalability-junk ceph has the ability to utilize the more recent kernels and can use this for faster development and less backporting etc.
14:19 scalability-junk that's where some say it's not stable or production ready. that depends on the importance of the workload/data
14:19 faldridge joined #salt
14:20 scalability-junk Additionally the storage pool for glusterfs was first designed to handle filesystem workloads and was then decoupled to handle new layers, which is another workload.
14:20 opapo joined #salt
14:21 scalability-junk one thing I didn't check with the new version was chunk deduplication on the roadmap. Ceph has that planned.
14:21 scalability-junk to clarify, they have dedup on their roadmap, but I'm not sure if it works on the stored chunks...
14:22 scalability-junk additionally with crush you get a lot of customization capabilities, which are harder to get with glusterfs.
14:22 SEJeff_work scalability-junk, gluster is decoupled from the kernel. You can use any kernel since 2.6.32. That is the beauty
14:23 SEJeff_work Yes crush is really nice for sure
14:23 scalability-junk SEJeff_work: depends on the viewpoint if it's the beauty or not :)
14:24 SEJeff_work the client is in the kernel, so to get an updated client, you have to upgrade the kernel to the latest 3.xxx thats often hard. But I digress :)
14:24 scalability-junk but I would happily use both. to say that. but I would probably go with ceph if I can, cause it does give a bit more modularity or was built with it...
14:24 SEJeff_work sure
14:25 scalability-junk one cool thing is the direct access to the storage pool, which you can use for additional layers or your specific application. that's great sometimes, but most of the time not needed.
14:26 mmilano joined #salt
14:26 scalability-junk I agree that with red hat there is a great company behind it, but inktank is young and dynamic, which isn't too bad either.
14:26 scalability-junk SEJeff_work: now I digress :)
14:26 SEJeff_work Ha
14:26 scalability-junk SEJeff_work: if you disagree jump in
14:26 SEJeff_work Inktank has the full backing of dreamhost. They aren't going away
14:26 ksalman I am trying to use templates but the master pushes the file as it is without substituting fqdn. Anybody know why? https://gist.github.com/anonymous/6221607
14:27 SEJeff_work ksalman, Yeah that is totally wrong. Just a sec and I'll comment
14:28 SEJeff_work ksalman, https://gist.github.com/anonymous/6221607#comment-885673
14:28 scalability-junk SEJeff_work: another thing to consider. dreamhost is backing inktank and is one of the bigger users so it gets tested in the public, at least more than gluster :P
14:28 * scalability-junk anyway going to get some studying done :D
14:29 SEJeff_work \o/
14:29 kaptk2 joined #salt
14:29 scalability-junk no more like /o\
14:31 ksalman SEJeff_work: thanks a bunch! i've been pulling my hair for hours. Indentation screwed me
14:32 SEJeff_work ksalman, No problem! The next time you know an answer for someone's question please pay it forward and help them. Thats what has made this community so great
14:32 ksalman I will =)
14:33 SEJeff_work community++
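The gist itself isn't reproduced in the log, but the working shape of a jinja-templated file state is roughly the following; the grain only gets substituted when "- template: jinja" sits at the right indentation under file.managed (paths are illustrative):

    /etc/example.conf:
      file.managed:
        - source: salt://example/example.conf.jinja
        - template: jinja       # without this the file is pushed verbatim, unrendered

    # and inside salt://example/example.conf.jinja:
    #   hostname = {{ grains['fqdn'] }}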
14:40 Katafalkas joined #salt
14:40 JasonSwindle joined #salt
14:44 faldridge joined #salt
14:45 giantlock_ joined #salt
14:46 jacksontj joined #salt
14:46 mrpull left #salt
14:47 teskew joined #salt
14:50 jeffasinger joined #salt
14:53 whit joined #salt
14:55 werewolf13 joined #salt
14:55 danielbachhuber- joined #salt
14:59 abe_music joined #salt
15:02 CaptTofu joined #salt
15:02 CaptTofu hi all!
15:02 CaptTofu how does one override a pillar value on the command line?
15:04 [diecast] joined #salt
15:05 dabl joined #salt
15:08 lazyguru joined #salt
15:08 StDiluted joined #salt
15:10 rfgarcia joined #salt
15:11 rfgarcia_ joined #salt
15:11 unicoletti left #salt
15:11 lyddonb_ joined #salt
15:13 forrest joined #salt
15:17 alunduil joined #salt
15:18 chrisgilmerproj joined #salt
15:21 devinus joined #salt
15:22 txmoose joined #salt
15:23 jacksontj joined #salt
15:23 kalmar joined #salt
15:24 jschadlick joined #salt
15:25 ksalman I can do nfs mounts ok, but on consecutive runs i get this error https://gist.github.com/anonymous/6222315
15:26 ksalman It seems it throws that error if the mount already exists
15:26 backjlack joined #salt
15:28 ksalman mount sls and error https://gist.github.com/anonymous/6222354
15:28 hwang251 joined #salt
15:29 hwang251 I have a package that depends on an environment variable being set,  how would I do it in Salt?
15:29 conan_the_destro joined #salt
15:31 ksalman i love ponies
15:35 faldridge joined #salt
15:37 ksalman stupid coworker
15:37 teskew hwang251: what kind of 'package'
15:37 teskew CaptTofu: https://github.com/saltstack/salt/issues/3579
15:39 hwang251 teskew: sudo-ldap on ubuntu
15:40 teskew and you are just doing a pkg.installed state for that package?
15:40 hwang251 yeah
15:40 hwang251 but it can't install unless the environment variable SUDO_FORCE_REMOVE=yes
15:41 ksalman ah the mounts issue is fixed in 0.16.3
15:41 scalability-junk ksalman: about to say that
15:41 scalability-junk upgrade to 0.16.3 ;)
15:41 ksalman =)
15:41 m_george left #salt
15:42 CaptTofu @teskew thanks! I had just found that
15:42 teskew i'm not sure the pkg.installed will take an env statement. cmd.run will. so you could do a cmd.run state
15:43 teskew apt-get -q -y  -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install sudo-ldap and put an env: SUDO_FORCE_REMOVE=yes in your yaml statement
15:43 hwang251 ok thanks
15:45 teskew hwang251: something like this: https://gist.github.com/tateeskew/6222531
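A sketch along the lines teskew describes (his gist isn't reproduced here); the 'unless' guard is an added assumption so the command doesn't rerun on every highstate:

    install-sudo-ldap:
      cmd.run:
        - name: apt-get -q -y -o DPkg::Options::=--force-confold -o DPkg::Options::=--force-confdef install sudo-ldap
        - env:
          - SUDO_FORCE_REMOVE: 'yes'    # the variable the package scripts require
        - unless: dpkg -s sudo-ldap     # skip if the package is already installed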
15:45 aat joined #salt
15:45 alexandrel ksalman: I like ponies too.
15:46 ksalman darn, the latest debian package is 0.16.2
15:46 ksalman alexandrel: hah! that was my coworker =(
15:46 alexandrel ;)
15:46 ksalman really, i should've locked my computer
15:47 saurabhs joined #salt
15:48 alexandrel hwang251: worst case, you could code a wraper.
15:48 luminous is it possible to pass pillar to salt states directly from the cli? eg using kwargs: https://github.com/saltstack/salt/blob/develop/salt/modules/state.py#L226
15:48 ksalman any idea when they'd put the latest debian package on saltstack.com?
15:48 TheoTannen joined #salt
15:49 luminous ksalman: when the package mainters complete the process
15:49 luminous *maintainers
15:49 luminous usually within a couple of days; they also get advance notice of releases
15:49 ksalman luminous: thanks
15:50 luminous the mailing list will sometimes get a notice too
15:50 luminous to confirm
15:50 ksalman I should probably join the list
15:50 luminous it's great, lots of good info
15:50 ksalman nice
15:51 ksalman might as well!
15:54 KennethWilke joined #salt
15:57 [diecast] topic - 0.16.3 is latest? where is the changelog
15:58 forrest_ joined #salt
16:01 dabl joined #salt
16:02 KennethWilke [diecast]: http://docs.saltstack.com/topics/releases/index.html
16:02 it_dude joined #salt
16:02 [diecast] thx
16:02 KennethWilke np
16:03 KyleG joined #salt
16:03 KyleG joined #salt
16:04 [diecast] "0.16.3 is latest - Changelog: http://bit.ly/147IGsj"
16:05 danielbachhuber joined #salt
16:12 Kraln joined #salt
16:14 jacksontj joined #salt
16:14 jpeach joined #salt
16:14 dabl Hi all, in a cmd.wait state that watches a bunch of managed files, is there a way to get the state id that triggered the watch and pass it as the argument to cmd.wait's name? (err.. sorry, noob; maybe I'm clear as mud... :P )
16:14 troyready joined #salt
16:15 backjlack joined #salt
16:15 stefw joined #salt
16:18 baniir joined #salt
16:21 mgw joined #salt
16:23 backjlack joined #salt
16:25 auser joined #salt
16:31 mikedawson joined #salt
16:32 aat joined #salt
16:36 Linz joined #salt
16:36 Lue_4911 joined #salt
16:39 bhosmer joined #salt
16:41 baniir joined #salt
16:42 jacksontj joined #salt
16:43 aat joined #salt
16:44 jpeach joined #salt
16:46 TheRealBill joined #salt
16:48 __Jahkeup__ joined #salt
16:50 aat joined #salt
16:52 dabl I'll try again later, bye all.
16:52 eculver joined #salt
16:52 eculver joined #salt
16:54 aat joined #salt
16:55 Thiggy joined #salt
16:57 saurabhs joined #salt
16:58 UtahDave joined #salt
16:59 UtahDave auser: ping!
16:59 jacksontj joined #salt
16:59 auser pong
16:59 UtahDave how's it going?
16:59 Topic for #salt is now Welcome to #salt - http://saltstack.org | 0.16.3 is the latest | Please be patient when asking questions as we are volunteers and may not have immediate answers - Channel logs are available at http://irclog.perlgeek.de/salt/
17:00 auser great, how are you UtahDave
17:00 UtahDave good
17:03 StDiluted morning auser, Morning UtahDave
17:03 auser mornin StDiluted
17:03 UtahDave morning, StDiluted!
17:10 kenbolton joined #salt
17:14 baniir joined #salt
17:19 forrest joined #salt
17:23 forrest joined #salt
17:26 whit joined #salt
17:27 jpeach are utf8 strings in usernames expected to work?
17:29 JasonSwindle joined #salt
17:29 jpeach eg. a user full name with a ö in it, gives http://paste.fedoraproject.org/31861/41496613/
17:29 JasonSwindle UtahDave:  Howdy!
17:30 UtahDave jpeach: I think there were some issues with utf8 strings and msgpack.  Let me find out what the status is on that
17:30 UtahDave hey, JasonSwindle!
17:32 jpeach ah, I saw some mailing list messages about msgpack issues
17:32 UtahDave joined #salt
17:37 L2SHO is there a way to run a command only when a file's state changes?
17:41 UtahDave L2SHO: yeah.  cmd.wait   while watching that file
17:44 whit joined #salt
17:46 L2SHO UtahDave, like this? http://pastebin.com/bpYZwGA5
17:46 Xeago joined #salt
17:47 UtahDave L2SHO: I think that's exactly right. I think you may not need the quotes around the command in the  - name
17:48 devinus joined #salt
17:49 druonysus joined #salt
17:49 L2SHO Looks good, thanks!
17:50 UtahDave you're welcome!
17:53 zonk1024 joined #salt
17:55 dthom91 joined #salt
17:55 VertigoRay UtahDave: afternoon!  Thanks for all the help yesterday.  We're making a lot of headway on our management of OS X via salt.
17:56 UtahDave VertigoRay: nice!  You're very welcome.
17:56 VertigoRay UtahDave: just wanted to say thanks.  Hope you're doing well.  Cheers!
17:56 UtahDave I'm doing great, thanks!  Cheers!
17:56 Thiggy joined #salt
17:56 UtahDave VertigoRay: So are you managing OS X workstations and laptops?
17:58 jkleckner joined #salt
17:59 jkleckner joined #salt
18:01 alunduil joined #salt
18:06 cbier joined #salt
18:06 oz_akan_ joined #salt
18:12 berto- joined #salt
18:13 scalability-junk UtahDave: is there a way to delegate commands etc. to other minions? something like delegate_to in ansible?
18:13 scalability-junk or would I then use the reactor system and trigger a command?
18:14 scalability-junk say message 'remove $host from lb' -> triggers cmd to remove $host from lb ... update done ... message 'add $host to lb' -> triggers cmd to add $host to lb...
18:14 scalability-junk would that be reasonable for at least manage service discovery on updates?
18:15 scalability-junk or to remove broken hosts from a service via triggers from nagios perhaps...
18:16 UtahDave scalability-junk: Yeah, I think you'd want to use the event system and the reactor system
18:16 scalability-junk and then the list for all services would need to be changed too, but that could be done... just wanna make sure I'm not diving into the wrong direction
18:17 scalability-junk ok that could help with autodiscovery too. saltrun -> trigger event -> add to host -> trigger saltrun to retrieve and deploy hostlist...
18:19 UtahDave yep!
18:20 UtahDave I'm heading to lunch.  back in a bit!
18:21 ksalman is this not right? https://gist.github.com/anonymous/6224068. I get an error "requisites were not found pkg: mysql-server". It works fine if i change the watch: to "pkg: server"
18:22 renoirb can anybody refer me to the page describing environments?
18:23 renoirb I've seen the full introduction and salt.modules.state, and only one place describes that we can say 'dev' at the end in some contexts.
18:23 renoirb (dev, as in, let's call it an env for that example)
18:23 __Jahkeup__ ksalman: try changing the 'server' to mysql-server and changing pkg -> pkg: - installed
18:23 __Jahkeup__ ksalman: I'll comment :)
18:24 Xeago joined #salt
18:25 jacksontj joined #salt
18:26 __Jahkeup__ ksalman: commented :)
18:29 ksalman __Jahkeup__: thanks. so it doesn't go by the actual package name?
18:29 __Jahkeup__ ksalman: I do believe it's the 'id' at the top, mysql-server:, that the watch refers to
18:29 ksalman oh interesting
18:30 __Jahkeup__ ksalman: that is what its watching I mean ;)
18:30 ksalman what's the reason for moving the grain['os'] part up to the pkg?
18:30 ksalman well
18:31 ksalman i have it down on the service because for centos the init script is mysqld instead of mysql
18:31 __Jahkeup__ you can use logic to determine the pkg name, makes it easier to port it
18:31 __Jahkeup__ ah!
18:32 ksalman well anyway, this helps. Thanks a bunch =)
18:32 __Jahkeup__ ksalman: I didn't realize, you can move that back down then !
18:32 __Jahkeup__ ksalman: np :)
18:32 __Jahkeup__ glad to help
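The corrected shape __Jahkeup__ walks ksalman toward, sketched here: the watch requisite points at the state ID, not at the package name argument. The CentOS branch for the init script name follows ksalman's own comment.

    mysql-server:                       # the state ID -- this is what requisites reference
      pkg.installed: []

    mysql:
      service.running:
        {% if grains['os'] == 'CentOS' %}
        - name: mysqld                  # CentOS ships the init script as mysqld
        {% endif %}
        - watch:
          - pkg: mysql-server           # matches the ID above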
18:35 krissaxton1 joined #salt
18:37 Thiggy I need a reactor sls to execute once a set of minions have *all* fired a particular event. Any ideas?
18:38 JesseC Anyone know of a premade state to generate a self signed ssl certificate?
18:43 jacksontj joined #salt
18:45 anon123321 joined #salt
18:48 terminalmage Corey: Thanks for the access to the mac mini btw. Working on proper user/group management
18:48 terminalmage for MacOS
18:48 quantumsummers|c joined #salt
18:50 dthom91 joined #salt
18:54 ml_1 joined #salt
19:00 ksalman can someone explain why i am hitting this error while creating a sql database? https://gist.github.com/anonymous/6224501
19:00 ksalman the mysql database is runing
19:01 __Jahkeup__ ksalman: do you have python modules for sql?
19:01 ksalman i don't know =) i guess that'd explain it!
19:02 ksalman __Jahkeup__: thanks
19:02 __Jahkeup__ ksalman: https://salt.readthedocs.org/en/v0.12.1/ref/modules/all/salt.modules.mysql.html?highlight=mysql
19:02 __Jahkeup__ ksalman: no problemo
19:03 __Jahkeup__ ksalman: idk why that did 0.12
19:03 __Jahkeup__ ksalman: https://salt.readthedocs.org/en/v0.16/ref/modules/all/salt.modules.mysql.html
19:03 ksalman thanks
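A sketch of the dependency __Jahkeup__ points out: the mysql_* states need the Python MySQL bindings installed on the minion (the Debian package name is shown; the database name is a placeholder):

    python-mysqldb:
      pkg.installed: []                 # provides the MySQLdb module the mysql states import

    exampledb:
      mysql_database.present:
        - require:
          - pkg: python-mysqldb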
19:05 devinus joined #salt
19:06 Liebach left #salt
19:09 terminalmage __Jahkeup__: for up-to-date docs, you can search at docs.saltstack.com instead
19:09 terminalmage we don't keep the past versions there
19:10 __Jahkeup__ terminalmage: I'll search there from now on, thanks!
19:10 terminalmage no prob!
19:10 abe_music joined #salt
19:14 [diecast] joined #salt
19:14 [diecast] joined #salt
19:16 kalmar joined #salt
19:23 JesseC How do you restart a service in a state? It says it's tied to mod_watch, would I just pass full_restart = true before the watch?
19:28 zooz joined #salt
19:29 dave_den JesseC: if you want to just restart a service (e.g. myservice) if another state (e.g. somestate) runs, just include the ID of 'somestate' in the service's 'watch' arguements. You can also have 'somestate' do a 'watch_in: myservice'. If you don't specify 'reload: True' in the myservice state, the default is to fully restart the service
19:29 dthom91 joined #salt
19:30 dave_den watch_in basically allows somestate to add itself to the list of states being 'watch'ed by myservice
19:30 JesseC ah gotcha, the docs say full_restart default is false, but then you have to specify reload if you want it
19:31 dave_den don't look at mod_watch, just look at the very top examples on http://docs.saltstack.com/ref/states/all/salt.states.service.html
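A compact sketch of dave_den's point, with illustrative names: the watch requisite is what triggers the service's restart behaviour, and 'reload: True' switches that behaviour from a full restart to a reload.

    /etc/myservice/myservice.conf:
      file.managed:
        - source: salt://myservice/myservice.conf

    myservice:
      service.running:
        - enable: True
        - reload: True                  # drop this line to get a full restart on changes instead
        - watch:
          - file: /etc/myservice/myservice.conf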
19:34 np_ joined #salt
19:34 np_ left #salt
19:37 jacksontj joined #salt
19:38 devinus joined #salt
19:39 qba73_ joined #salt
19:43 ksalman how are people creating mysql db structure, manually?
19:44 ksalman tables and what not
19:45 baniir joined #salt
19:49 jschadlick joined #salt
19:49 TheoTannen I was hoping to use salt-cloud to bring up a cluster with a service we are developing.  The service running on each of the instances currently needs a configuration file with the dns name of all the other instances in its cluster.   Are there any examples of doing something like this?  It looks like it should be possible with salt-mine and templates.
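A rough sketch of the mine + template idea TheoTannen is after, assuming grains.items is configured as a mine function on the cluster minions (the template name and config syntax are invented for illustration):

    {# salt://myservice/peers.conf.jinja -- rendered through file.managed with template: jinja #}
    {% for minion_id, minion_grains in salt['mine.get']('*', 'grains.items').items() %}
    peer {{ minion_grains['fqdn'] }}
    {% endfor %}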
19:55 pdayton joined #salt
19:56 pdayton1 joined #salt
19:56 Xeago joined #salt
19:58 mechanicalduck joined #salt
19:59 devinus joined #salt
19:59 devinus joined #salt
20:00 pdayton joined #salt
20:00 devinus joined #salt
20:01 devinus joined #salt
20:06 faldridge joined #salt
20:07 druonysuse joined #salt
20:27 david_a joined #salt
20:31 pdayton joined #salt
20:31 dthom91 joined #salt
20:33 pdayton1 joined #salt
20:34 ksalman is it not possible to put minion specific grains on the master?
20:35 ksalman I thought of managing the /etc/salt/grains file via the master, but then i have to restart salt-minion to update the grains, and that stops the minion and it never comes back
20:35 UtahDave ksalman: I'm not quite sure what you mean.  Can you explain what you're trying to do?
20:36 bdf so is there any way to make a minion send some sort of heartbeat over the zmq connection and recycle the connection if it fails to get the expected response?
20:37 ksalman UtahDave: I'd like to store a value "lab: 123" but if i put that in /etc/salt/grains on the minion then i have to manually manage it, no?
20:37 UtahDave ksalman: have you tried using    grains.setval from the master to add it?
20:37 bdf I have a firewall (Juniper SRX, but it will not be the only kind) that doesn't seem to think the TCP keepalives are enough to keep the state active.
20:38 ksalman UtahDave: i havne't , can that be done for a state?
20:38 Gifflen_ joined #salt
20:38 bdf but the minion seems to sit there and think that the connection is still alive, and doesn't really try to restart it at all
20:39 UtahDave ksalman: So you want to be able to run a highstate and ensure a minion has certain grains?
20:39 ksalman UtahDave: yea, exactly
20:39 UtahDave bdf: Then can the master not send commands to the minion?
20:39 UtahDave ksalman: let me find an example for you
20:39 bdf UtahDave: that's right.
20:39 ksalman UtahDave: there are some things in the state that depend on those grains :)
20:39 bdf the master sees the connection go away
20:39 bdf and the state is gone from the firewall
20:40 UtahDave ksalman: have you tried the grains state?  http://docs.saltstack.com/ref/states/all/salt.states.grains.html#module-salt.states.grains
20:40 bdf but look at netstat on the minion and it's still there
20:40 UtahDave bdf: what version of Salt are you on?
20:40 bdf 0.16.3 on the minion and 0.16.2 on the master (waiting for debian package)
20:41 bdf and it's zmq 2 on the minion side, but it's a FreeBSD minion
20:41 bdf and the path to zmq 3 on freebsd is nasty.
20:41 ksalman UtahDave: I looked at it but that requires me to hardcode 'value: edam' in the state, no? The value would be different for different minions
20:41 UtahDave bdf: Yeah, that's your problem.
20:41 bdf haha
20:41 bdf thanks
20:42 UtahDave zmq 2 has known bug.
20:42 bdf oh?
20:42 UtahDave zmq3.2.+  is HIGHLY recommended.
20:42 bdf will it solve this?
20:42 ksalman UtahDave: I'd like to have a dynamic 'value: ' based on the minion
20:43 bdf I'll take your word for it and just try a debian/zmq3 machine behind the same firewall, but I would love to see more aggressive robustness in this situation, and a heartbeat would completely do the trick.
20:44 druonysuse joined #salt
20:44 UtahDave bdf: I'm pretty close to being 100% sure zmq 3.2.+ will solve this problem.  (I don't have a support contract with you do I?   lol)
20:45 UtahDave bdf: Yeah, some people set their minions to do a cron test.ping
20:46 bdf but will the minion actually tear down if it fails?
20:46 bdf I suppose I'll just have to try :)
20:46 baniir joined #salt
20:47 backjlack joined #salt
20:48 UtahDave bdf: I believe so.
20:49 UtahDave ksalman: how are you determining what that value is?
20:49 __Jahkeu_ joined #salt
20:51 ksalman UtahDave: It's just a predetermined value based on the hostname of the minion. so "host1: valueX", "host2: valueY", for example
20:51 xt bdf: tcp_keepalive has heartbeats
20:51 xt bdf: as long as your network supports it, it should be pretty smooth
20:52 xt if not you can always tune tcp_keepalive at the OS level
20:52 giantlock joined #salt
20:54 UtahDave ksalman: I see. Have you thought about putting that in pillar?  Or do you really need it in grains?
20:56 ksalman UtahDave: I don't really need it in grains. I'll read up on pillars then =)
20:56 ksalman I didn't know i could do this with pillars
20:56 UtahDave ksalman: Yeah, pillars would allow you to keep that info in a more central location
20:56 UtahDave you can still match and things on pillar data
20:57 ksalman UtahDave: thanks, I'll try it
20:57 UtahDave the benefit of grains is that they're faster for matching, but most of the time the speed difference is so small that it's not noticable
20:57 ksalman oh i see
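A sketch of the pillar layout UtahDave suggests for per-minion values; the hostnames and the 'lab' key mirror ksalman's example, and the second value is a made-up placeholder:

    # /srv/pillar/top.sls
    base:
      'host1*':
        - lab.host1
      'host2*':
        - lab.host2

    # /srv/pillar/lab/host1.sls
    lab: 123

    # /srv/pillar/lab/host2.sls
    lab: 456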
21:05 blee_ joined #salt
21:08 helderco joined #salt
21:10 TheoTannen left #salt
21:10 arapaho_ joined #salt
21:12 EnTeQuAk joined #salt
21:12 pcarrier joined #salt
21:17 Jahkeup joined #salt
21:20 dthom91 joined #salt
21:21 bdf xt: unless keepalives are totally broken in zmq 2
21:21 bdf then I have them on
21:21 bdf and I have them tweaked to be very aggressive.
21:21 bdf and it didn't work.
21:22 kermit joined #salt
21:22 bdf and matter of fact, several largely-deployed enterprise firewalls (Juniper, Cisco, SonicWALL, Fortigate)  still have unconditional session timeouts regardless of whether keepalives are being sent or not.
21:23 bdf just to prevent session exhaustion
21:24 xt bdf: dont you need zmq3 for keepalive to work?
21:24 jkyle joined #salt
21:24 bdf I dont know actually.
21:25 bdf it didn't really clearly say that in the config file, but...
21:25 xt you should investigate
21:25 xt I have had problems myself, but they got solved with keepalive
21:26 bdf yeah. that's what I'm doing now, I'll more or less have to rebuild the package on FreeBSD in order to depend on zmq3
21:28 bdf I think what I'll do at least to begin with is get a pcap and see if the keepalives are even there.
21:29 pcarrier joined #salt
21:32 xt that's what I did :-)
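For reference, the minion-side keepalive knobs under discussion look like this; per the conversation they only have an effect with ZeroMQ 3.2+ on the minion, and the timing values are arbitrary examples:

    # /etc/salt/minion (illustrative)
    tcp_keepalive: True
    tcp_keepalive_idle: 60        # seconds of idle time before the first probe
    tcp_keepalive_cnt: 3          # failed probes before the connection is considered dead
    tcp_keepalive_intvl: 10       # seconds between probes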
21:35 kenbolton joined #salt
21:39 alexandrel joined #salt
21:41 iquaba joined #salt
21:43 iquaba left #salt
21:45 nliadm I can see the authentication requests for this minion coming into the master, and the minion just sitting there, claiming it's unaccepted
21:46 UtahDave nliadm: even after using salt-key to accept the minion?
21:46 nliadm yeah.
21:46 faldridge joined #salt
21:50 UtahDave what version of Salt are you on?
21:51 nliadm 0.16.0. there was the gitfs bug in 0.16.2 that kept us from upgrading
21:51 UtahDave nliadm: any way you can test on 0.16.3?
21:51 nliadm I've got time set aside to try tomorrow
21:51 UtahDave ok, let me know how it goes.
21:56 alexandrel joined #salt
21:58 dthom91 joined #salt
22:05 lesnail joined #salt
22:05 cewood joined #salt
22:12 bhosmer joined #salt
22:20 Gifflen joined #salt
22:22 pdayton joined #salt
22:23 g3cko joined #salt
22:25 pcarrier joined #salt
22:27 EnTeQuAk joined #salt
22:31 baniir joined #salt
22:32 brutes joined #salt
22:34 lynxman joined #salt
22:34 lynxman joined #salt
22:36 blee joined #salt
22:42 mikedawson joined #salt
22:42 jslatts joined #salt
22:45 alekibango joined #salt
22:45 MTecknology If you guys were deploying a completely fresh environment, how would you do it? Build a salt-master box, then make a template that has salt-minion that tries to auth to the host on first boot?
22:45 MTecknology err... I guess I don't have a first boot option...
22:48 UtahDave if you have control of the dns, setting   "salt" to point to your master is convenient
22:48 MTecknology I already have that done actually. :)
22:49 MTecknology I tore down my home network and I'm rebuilding it from scratch. I have my virtual machine host, pfsense box, and laptop set up so far. I assume the first box I want deployed is the salt master.
22:50 UtahDave that's what I do.
22:51 bhosmer joined #salt
22:51 sgviking joined #salt
22:52 MTecknology So, from there... If I'm deploying all my boxes using openvz, all that I have to do is install salt-minion on the box, kill the service, delete the generated key, and deploy from that?
22:52 pjs joined #salt
22:53 UtahDave yeah, make sure to delete the minion_master.pem, too
22:53 UtahDave (doublecheck that filename)
22:53 MTecknology ah- I would have missed that
22:54 MTecknology this is actually extremely exciting...
22:54 MTecknology How often does someone ever get to build an entire network from scratch, however the crap they want? :D
22:55 UtahDave :) seriously!!
22:57 chrisgilmerproj left #salt
22:59 vimalloc joined #salt
23:05 cxz joined #salt
23:17 intchanter joined #salt
23:20 kenbolton joined #salt
23:32 kenbolton joined #salt
23:35 oz_akan_ joined #salt
23:38 falican joined #salt
23:45 [diecast] joined #salt
23:53 LucasCozy joined #salt
23:54 jesusaurus can the gitfs backend be used with pillars, or just states?
23:54 * jesusaurus finally gets around to testing out salt's gitfs stuff
23:54 UtahDave jesusaurus: there's a new pillar git backend
23:54 jesusaurus UtahDave: awesome!
23:54 jesusaurus what version is needed?
23:55 UtahDave http://docs.saltstack.com/ref/pillar/all/salt.pillar.git_pillar.html?highlight=pillar%20git#salt.pillar.git_pillar
23:55 jesusaurus UtahDave: perfect
23:55 UtahDave Hm. I don't remember if it made it into 0.16.  I think it may have. jacksontj wrote it
23:55 jacksontj yea, it should be in 0.16
23:55 jacksontj its not great, but it works ;)
23:56 jesusaurus jacksontj: dont worry, i'll break it (then fix it)
23:56 kalmar joined #salt
23:56 jacksontj what it really needs is a rewrite of the pillar compiler-- to modularize the filesystem out
23:56 jacksontj similar to the file_roots stuff
23:56 jacksontj so we can have 1 gitfs module for both
23:56 jacksontj basically a unified pluggable storage
23:57 jesusaurus yeah, file and pillar are so similar, they should really be sharing more code than they do
23:57 LucasCozy joined #salt
23:57 jschadlick left #salt
23:59 jesusaurus UtahDave: is there a standard way to note where the documentation is lacking?
23:59 UtahDave just create an issue on the github repo.
23:59 jesusaurus http://docs.saltstack.com/topics/tutorials/gitfs.html should probably have a blurb at the bottom about http://docs.saltstack.com/ref/pillar/all/salt.pillar.git_pillar.html
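The git pillar backend UtahDave and jacksontj mention is enabled in the master config roughly like this (branch and repo URL are placeholders):

    # /etc/salt/master (illustrative)
    ext_pillar:
      - git: master git://example.com/pillar.git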
