
IRC log for #salt, 2014-07-09


All times shown according to UTC.

Time Nick Message
00:04 arknix joined #salt
00:11 aw110f joined #salt
00:12 snuffeluffegus joined #salt
00:12 druonysus joined #salt
00:18 happytux joined #salt
00:19 Shenril joined #salt
00:22 happytux_ joined #salt
00:25 druonysuse joined #salt
00:34 bhosmer joined #salt
00:42 pykevin joined #salt
01:04 toastedpenguin joined #salt
01:05 krow joined #salt
01:09 oz_akan_ joined #salt
01:14 shaggy_surfer joined #salt
01:14 mateoconfeugo joined #salt
01:14 tkharju3 joined #salt
01:19 arknix joined #salt
01:24 ajolo_ joined #salt
01:40 m1crofarmer joined #salt
01:42 mgw joined #salt
01:45 krow joined #salt
01:50 malinoff joined #salt
02:02 yomilk joined #salt
02:04 rlarkin|2 joined #salt
02:10 otter768 joined #salt
02:15 dude051 joined #salt
02:15 scoates joined #salt
02:15 ramishra joined #salt
02:16 dude051 joined #salt
02:16 Shenril joined #salt
02:19 Cyanid joined #salt
02:20 Cyanid left #salt
02:45 mateoconfeugo joined #salt
03:06 m1crofarmer joined #salt
03:07 scoates joined #salt
03:19 logix812 joined #salt
03:24 bhosmer joined #salt
03:37 mosen joined #salt
03:41 azylman joined #salt
03:43 catpigger joined #salt
03:44 tligda joined #salt
03:48 Luke joined #salt
03:51 otter768 joined #salt
03:51 bhosmer joined #salt
04:09 m1crofarmer joined #salt
04:14 ipalreadytaken joined #salt
04:15 krow joined #salt
04:18 tligda joined #salt
04:27 mgw joined #salt
04:29 vbabiy joined #salt
04:30 dangra joined #salt
04:33 robinsmidsrod joined #salt
04:34 yomilk joined #salt
04:36 oz_akan_ joined #salt
04:38 oz_akan_ joined #salt
04:50 dangra joined #salt
04:58 yomilk joined #salt
05:03 dude051 joined #salt
05:05 wendall911 left #salt
05:08 kermit joined #salt
05:12 Ryan_Lane joined #salt
05:12 bhosmer joined #salt
05:15 m____s joined #salt
05:17 TheThing joined #salt
05:24 rawzone joined #salt
05:24 yomilk_ joined #salt
05:26 yetAnotherZero joined #salt
05:39 oz_akan_ joined #salt
05:43 ipalreadytaken joined #salt
05:53 marnom joined #salt
05:54 ramishra joined #salt
06:03 Ryan_Lane joined #salt
06:10 thayne joined #salt
06:11 ndrei joined #salt
06:12 Hollinski joined #salt
06:12 picker joined #salt
06:19 Tween_ joined #salt
06:20 Kenzor joined #salt
06:25 krow joined #salt
06:39 jdmf joined #salt
06:40 oz_akan_ joined #salt
06:41 vu joined #salt
06:41 sashka_ua joined #salt
06:44 marnom hi guys, anyone using the new vSphere cloud module by any chance? http://docs.saltstack.com/en/latest/ref/clouds/all/salt.cloud.clouds.vsphere.html was wondering if there's a way to test drive this version easily..
06:48 mgw joined #salt
06:50 ml_1 joined #salt
06:52 pdayton joined #salt
06:54 Nazca__ joined #salt
06:56 albertid joined #salt
06:57 albertid Hi, why does state.highstate whitelist=xyz seem to have no effect? I want only to have a single .sls executed.
06:57 davidone joined #salt
06:57 albertid The sls is in xyz/init.sls and also referenced in the top.sls for the server in question
06:57 albertid but highstate executes the complete top.sls
07:01 bhosmer joined #salt
07:06 ndrei joined #salt
07:06 felskrone joined #salt
07:08 chiui joined #salt
07:09 albertid ok, to answer my own question: you can use state.sls to execute a single sls
07:10 ckao joined #salt
07:13 albertid_ joined #salt
07:24 ghartz joined #salt
07:25 ghartz joined #salt
07:27 vinian joined #salt
07:27 gywang joined #salt
07:29 vinian 'warnings': ["'expire' is an invalid keyword argument for 'user.present'
07:29 vinian anyone met this before? i checked the doc, it has an expire keyword argument
07:31 gywang Hi all, I have a custom state, some code like this: http://pastie.org/9370630 , but it seems that __salt__['pkg.list_pkgs'] only exec once, what's wrong with it?
07:33 linjan joined #salt
07:34 gywang all code here: http://pastie.org/9370640
07:34 gywang Help, help
07:35 darkelda joined #salt
07:36 marnom gywang: you sure that sending literally '**kwargs' into that pkg.list_pkgs call is correct?
07:37 krow joined #salt
07:38 gywang what **kwargs should i send?
07:39 gywang In fact I send nothing
07:40 oz_akan_ joined #salt
07:46 che-arne joined #salt
07:47 marnom well I don't know but it looks strange to me, gywang.. hopefully someone else can clarify/advise
07:53 gywang ok, I got where the problem is.
07:54 gywang pkg.list_pkgs was cached in __context__
07:54 ggoZ joined #salt
07:59 gywang_ joined #salt
07:59 gywang_ How can I modify __context__ in custom states?
08:00 Lomithrani joined #salt
08:01 malinoff gywang_, sys.modules[__salt__['pkg.list_pkgs'].__module__].__context__
08:01 malinoff I did this in 0.17
08:01 malinoff Don't know is there a better way right now
08:02 thehaven joined #salt
08:08 dualinity joined #salt
08:11 ramishra_ joined #salt
08:12 gywang_ malinoff, it works, thanks, although seems so strange:  sys.modules[__salt__['pkg.list_pkgs'].__module__].__context__.pop('pkg.list_pkgs', None)
08:14 malinoff gywang_, welcome in the salts real world where you can't step aside without having a lot of pain :)
08:15 malinoff Although I can tell what's going on there, if you're interested
08:20 gywang_ :) well, a lot of pain is waiting for me, and a lot of questions are waiting for u from me.
08:21 jhauser joined #salt
08:21 ramishra joined #salt
08:28 babilen Hello. My understanding of config.option (cf. http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.config.html) is that I can also use it to query pillars. That works, but I can only query "top-level" keys (i.e. as soon as I query for foo.bar it doesn't work)
08:29 babilen Am I simply wrong and what I am trying to do was never supported or is there some trick that I have to use to make it work?
08:29 babilen Could you try to retrieve a value from a nested pillar? Does it work for you?
08:32 marnom babilen: sample command of what you're trying to do? Not sure I fully follow..
08:33 Greyer joined #salt
08:33 babilen I just see it being used in other execution modules (e.g. https://github.com/saltstack/salt/blob/develop/salt/modules/postgres.py#L80)
08:34 babilen marnom: I am writing an execution module and would like to give the user the option to configure settings in the master config, but to override those in a pillar.
08:34 babilen Much like the aforementioned postgres execution module should work.
08:35 marnom babilen: config.get works for nested pillar values, but config.option only retrieves the top level for me also..
08:35 babilen I could, naturally, use "__salt__['pillar.get']('foo:bar')" to get the pillar value myself and forget about the options, but I am just trying to copy the style used there
08:37 babilen marnom: Okay, I might simply use config.get in that case ... But all I've seen in the other modules is config.option and I just can't get it to work (nor am I convinced that the distributed code actually works)
08:38 babilen It looks as if the linked postgres code would only work if you actually set these values in the master config and that you can not "This data can also be passed into pillar. Options passed into opts will overwrite options passed into pillar"
08:39 babilen (and you find this multiple times in various other execution modules which is why I simply tried to copy it (only to run into the problem that I can't get nested values)
08:39 marnom babilen: odd.. :\
08:40 ipalreadytaken joined #salt
08:42 babilen And then you also find things such as https://github.com/saltstack/salt/blob/develop/salt/modules/apache.py#L326 where config.get uses a mixture of . and :
08:42 babilen I am utterly confused now :D
08:43 babilen (I have to use : with config.get, but it doesn't work with config.option)
08:43 babilen I guess I will have to read the actual code to figure this one out
08:45 babilen Ah, isn't '_|-' a cute default value!?
08:46 malinoff babilen, "users will never use that value!" :)
08:46 * babilen would have used '/o' in https://github.com/saltstack/salt/blob/develop/salt/modules/config.py#L228
08:47 babilen I have the feeling as if I am stepping into the salt underworld now where I am being introduced to some secret incarnations
08:47 malinoff it's like "okay, let's use \x00\x01 as a separator between AES keys"
08:47 yomilk joined #salt
08:47 giantlock joined #salt
08:48 marnom hehe
08:48 malinoff babilen, salt has a nice end-user interface, but a lot of strange decisions and architecture problems internally
08:49 bhosmer joined #salt
08:49 malinoff babilen, if you walk through the earliest version to the newest, you will see that there was no blueprint ever, just "wow, that's a cool feature, let's add it"
08:50 malinoff master.py (god object #1) & minion.py (god object #2) are nice indicators :)
08:51 babilen Okay, I am not surprised that config.option doesn't work as it simply uses "return __pillar__[value]" which obviously doesn't work with nested dictionaries
08:52 babilen It also looks as if you would have to use : as delimiter for each level of nesting and as if the . that you so frequently see in the code is not actually separating namespaces, but simply part of the (single level) key
08:56 babilen That still leaves me in the situation that I am not sure how values in the master config map to values in the pillar ... is it simply foo.bar.baz == foo:bar:baz ? But then code like https://github.com/saltstack/salt/blob/develop/salt/modules/apache.py#L326 seems to be impossible to translate
08:56 babilen Ah, wtf
09:02 picker joined #salt
09:05 martoss joined #salt
09:07 martoss1 joined #salt
09:10 babilen Okay .. I figured it out. \o/
09:10 babilen (had to read the commit logs)
09:12 babilen It's fairly straightforward really .. the postgres.host, postgres.port is not really nested, but would correspond to the postgres.host and postgres.port *top-level* pillar entries (i.e. a dictionary {'postgres.host': 'foo'})
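babilen's finding can be illustrated: `config.option` does a flat dict lookup, so `postgres.host` is a single top-level key with a dot in it, while `config.get`/`pillar.get`-style lookups split on `:` and walk the nesting. A rough sketch (not Salt's actual code, just the two lookup behaviours):

```python
def option(pillar, key):
    """Flat lookup, like config.option: the dot is part of the key."""
    return pillar.get(key)

def get(pillar, key, default=None):
    """Nested lookup, like config.get/pillar.get: ':' separates levels."""
    value = pillar
    for part in key.split(':'):
        if not isinstance(value, dict) or part not in value:
            return default
        value = value[part]
    return value

pillar = {'postgres.host': 'db1',            # one flat key containing a dot
          'postgres': {'host': 'db2'}}       # a genuinely nested structure

print(option(pillar, 'postgres.host'))   # 'db1' -- flat key hit
print(get(pillar, 'postgres:host'))      # 'db2' -- nested walk
print(option(pillar, 'postgres:host'))   # None  -- no such flat key
```

So the dotted names seen throughout the execution modules are single keys, and only the colon-delimited form descends into dictionaries.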
09:12 davidone joined #salt
09:13 babilen Now I am only unsure how to override values from a nested pillar in the master/minion configuration :-/
09:15 bhosmer joined #salt
09:20 babilen Okay, looks as if the config module was a good idea but is useless in its current form. Pity.
09:31 ninkotech joined #salt
09:36 RabidDog joined #salt
09:37 RabidDog hi
09:37 RabidDog struggling with default salt install
09:37 RabidDog fedora 19 salt 2014.1.5-1.fc19 installed via yum repository
09:40 babilen RabidDog: I don't quite see a problem with that (apart from using yum and not apt-get)
09:40 babilen But then you have little choice on Fedora :)
09:41 RabidDog :) everything seems to install fine but the master won't accept the minion key
09:41 RabidDog if I netstat port 4506 they are connected
09:41 babilen RabidDog: Are they listed in "salt-key -L" ?
09:41 RabidDog empty listings
09:42 oz_akan_ joined #salt
09:42 RabidDog babilen: nothing, just the three titles
09:42 babilen Could you start the master and minion in debug mode? (i.e. stop the service and run "salt-master -ldebug" and "salt-minion -ldebug" respectively)
09:43 RabidDog babilen: already there :) what output can I give you?
09:43 babilen RabidDog: Did you configure the minions to contact the correct master? How did you configure them and can they actually reach the master that way?
09:44 babilen RabidDog: Not sure, rarely run into this .. so why not all? (redact sensitive information such as domains if you like)
09:44 Lomithrani hi guys
09:44 Lomithrani should I write ({{ data['id'] }}|'Varnish*')   or ({{ data['id'] }}|'Varnish'*)
09:44 babilen Please use a proper pastebin such as gist.github.com, http://paste.debian.net or http://refheap.com please :)
09:45 RabidDog babilen: the master and minion are running on the same machine. iptables disabled, minion is configured using master: localhost - master is configured using ip: 0.0.0.0
09:45 babilen Lomithrani: Where do you write that and what are you trying to achieve? (i.e. "we don't care what you write in your journal")
09:46 CeBe joined #salt
09:46 Lomithrani :) well I try to target my minions in a reactor
09:46 Lomithrani itself with data['id'] and all the one that starts with Varnish
09:47 babilen Lomithrani: Would it maybe make sense to provide a complete example on a pastebin?
09:47 RabidDog babilen: minion output - http://paste.debian.net/108852/
09:47 matthiaswahl joined #salt
09:48 Lomithrani https://www.refheap.com/87996 here it is babilen ;)
09:48 babilen RabidDog: Don't you want "interface: 0.0.0.0" rather than "ip: 0.0.0.0" in your master config. Or, for that matter, not set anything at all?
09:50 babilen Lomithrani: And what are you trying to express there? What you have written essentially means "minion id or Varnish* (i.e. Varnis, Varnish, Varnishh, Varnishhh, ...)
09:51 TheThing joined #salt
09:51 Lomithrani should I write ({{ data['id'] }}|'Varnish*')   or ({{ data['id'] }}|'Varnish'*)  that was the question
09:51 Lomithrani " '*  or *' "
09:52 babilen Neither really makes sense
09:52 babilen (hence my question what you are trying to express there)
09:52 Lomithrani well I want an highstate to run on the minion that just started
09:52 RabidDog bailen: when you done with Lomi: http://paste.debian.net/108854/
09:52 Lomithrani and all the minions called "Varnish"
09:53 Lomithrani well called Varnish01 .. Varnish02 etc ..
09:53 matthiaswahl hey guys. I've got a problem with salt mine not collecting anything from configured minions
09:53 matthiaswahl here is some more information: https://gist.github.com/mfelsche/2b5a0a3c4291c1cc0ce5
09:54 Lomithrani matthiswahl : did you send the function to the minions  ?
09:54 matthiaswahl what else can i get you to give you any helpful information?
09:54 matthiaswahl Lomithrani: i have to send them there first?
09:54 Lomithrani well you set
09:54 Lomithrani network.ip_addrs
09:54 Lomithrani and use network.ipaddrs
09:55 matthiaswahl its in the matching pillar items, is that sufficient?
09:55 babilen Lomithrani: You have to write a proper regular expression then. Do you want to be specific about 01 or do you essentially want "Starts with Varnish" ?
09:55 Lomithrani babilen: starts with varnish
09:55 Lomithrani well I don't use pillar , might be if its the same as the state
09:55 Lomithrani but still your not using the same funciton in both
09:56 babilen Lomithrani: If the latter: Use "Varnish.+"
09:56 Lomithrani try to get network.ip_addrs
09:56 RabidDog babilen:
09:56 babilen Lomithrani: https://docs.python.org/2/library/re.html#module-re + https://docs.python.org/2/howto/regex.html
09:57 RabidDog babilen: removed the "interface: 0.0.0.0" from master config, restarted, still same thing happening
09:57 babilen RabidDog: That looks alright(ish) ...
09:57 matthiaswahl Lomithrani: tried it. see the bottom of the (updated) gist: https://gist.github.com/mfelsche/2b5a0a3c4291c1cc0ce5
09:57 babilen RabidDog: Did you restart both the master and the minion?
09:57 Lomithrani babilen: ok thanks strange thing is that my top.sls looks like https://www.refheap.com/87997 and works , I guess in theory it shouldnt because its not proper regex
09:58 Lomithrani I thought salt had its own regex thingy
09:58 TheThing joined #salt
09:58 matthiaswahl Lomithrani: is there any way i can check that the mine_function is running on the minion?
09:58 RabidDog babilen: I did
09:58 babilen Lomithrani: Salt does globbing by default (and I have no idea why you would use pcre there rather than the default glob, which is what you want)
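The distinction babilen points at — Salt globs by default, pcre only when asked — can be checked with the stdlib (a sketch using `fnmatch` and `re`, not Salt's actual matcher):

```python
import fnmatch
import re

minions = ['Varnish01', 'Varnish02', 'web01']

# Default glob targeting: 'Varnish*' matches anything starting with Varnish.
glob_hits = [m for m in minions if fnmatch.fnmatch(m, 'Varnish*')]
print(glob_hits)                      # ['Varnish01', 'Varnish02']

# pcre targeting needs a real regex; 'Varnish.+' means Varnish plus something.
re_hits = [m for m in minions if re.match(r'Varnish.+', m)]
print(re_hits)                        # ['Varnish01', 'Varnish02']

# 'Varnish*' read as a *regex* means Varnis, Varnish, Varnishh, ...
# (babilen's point: the glob and the regex only look alike)
print(bool(re.match(r'Varnish*$', 'Varnis')))   # True
```

Which is why Lomithrani's pcre top.sls "works": `Varnish*` happens to match the right minions as a regex too, just not for the reason intended.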
09:59 Lomithrani matthiaswahl: I still see network.ipaddrs  in one  and network.ip_addrs: in the other
09:59 Lomithrani try to do # salt '*' mine.send network.ipaddrs
09:59 matthiaswahl Lomithrani: arrghhh. grmpf!!! :/ sry to bother you with this stupid error :(
09:59 babilen RabidDog: hmm, let me take a look at your logs again.
10:00 Lomithrani matthiaswahl : don't worry I do a bunch of stupid error all the time , glad I can help sometime here I get much more help than I give :D
10:00 TheThing_ joined #salt
10:01 matthiaswahl Lomithrani: works fine, i finally got some ips back :) stupid, stupid me
10:01 RabidDog babilen: followed the installation procedure here: http://docs.saltstack.com/en/latest/topics/installation/fedora.html
10:01 ramishra_ joined #salt
10:01 matthiaswahl Lomithrani: though i didn't know from the docs, that one had to send the functions to the minions first
10:02 Lomithrani wait it works since you did the mine.send
10:03 babilen RabidDog: And your /etc/hosts is correct? Could you configure your minion with "master: 127.0.0.1" and paste the respective logs with these two (-ip+interfaces on the master and -localhost+127.0.0.1 on the minion) Start the master first, then the minion and then get the logs.
10:03 Lomithrani thats a quick fix just to see if you had a problem with the function. The minion has to be aware that its authorised to use the function if I understood well: 3 way of doing this apparently : states , pillar (thats what you do ) and directly sending through sli
10:03 Lomithrani *cli
10:09 ninkotech joined #salt
10:13 viq joined #salt
10:18 matthiaswahl Lomithrani: ah, i see
10:19 RabidDog babilen: master configured to listen on 127.0.0.1 http://paste.debian.net/108859/
10:19 babilen No, that wasn't what I meant, but okay
10:20 RabidDog babilen: minion configured to connect on 127.0.0.1
10:20 RabidDog http://paste.debian.net/108860/
10:20 RabidDog oops, sorry
10:21 mateoconfeugo joined #salt
10:21 babilen RabidDog: And no key in "salt-key -L" nor in /etc/salt/pki/master/*/ ?
10:22 RabidDog babilen: zero
10:25 matthiaswahl Lomithrani: now there is this strange problem, when using the mine data in a template: https://gist.github.com/mfelsche/2b5a0a3c4291c1cc0ce5
10:25 matthiaswahl Lomithrani: at the bottom
10:25 matthiaswahl Lomithrani: when using it without expr_from, no exception but no ip_addr either
10:26 Lomithrani matthiaswahl: I'll just send you one of my example
10:27 ramishra joined #salt
10:29 Lomithrani (I'll have to do that after lucnh I'm late)
10:29 matthiaswahl Lomithrani: thank you, keep it cool :)
10:31 ramishra joined #salt
10:32 RabidDog babilen: netstat output http://paste.debian.net/108865/
10:33 babilen RabidDog: Sorry, something came up that needs my urgent attention (hence my silence)
10:34 babilen RabidDog: But really nothing in /etc/salt/pki/master/*/ ?
10:34 RabidDog babilen: sort things out your side :) let me know when you get a gap
10:35 yomilk joined #salt
10:39 babilen RabidDog: Okay, sorted :)
10:40 intellix joined #salt
10:40 RabidDog babilen: master.pem, master.pub
10:40 babilen RabidDog: I am not familiar with Fedora and the salt packaging, but the behaviour strikes me as weird. It might be related to the zeromq version you have installed, but I would be surprised if you were actually the first person to run into this.
10:41 babilen RabidDog: Okay, but you don't have the minions/ minions_pre/ or minions_rejected/ directories in there?
10:42 RabidDog I do
10:42 oz_akan_ joined #salt
10:42 RabidDog but they are all emptyu
10:43 babilen Ah, important difference :)
10:43 RabidDog babilen yes, feel slightly embarrassed that I missed that :D
10:44 xt babubilen
10:44 babilen And you have /etc/salt/pki/minion/ with three files in there, two .pub and one .pem or only minion.{pem,pub}?
10:44 Nazca joined #salt
10:44 babilen RabidDog: Well, I just don't want to work with wrong assumptions. What does "salt --versions" give you?
10:46 giantlock joined #salt
10:46 RabidDog babilen
10:46 RabidDog http://paste.debian.net/108867/
10:46 TheThing joined #salt
10:49 BbT0n joined #salt
10:49 RabidDog babilen: dumb question, do I have to install zeromq as a separate operation?
10:49 RabidDog or does the package install deal with that?
10:50 BbT0n that's a pleasure to see saltstack handle freebsd pkg + poudriere system !
10:50 babilen RabidDog: That looks alright. And you are *sure* that you don't have any firewall running? (i.e. mind showing me "iptables-save") Could you run tcpdump and listen if you get the key request?
10:51 babilen RabidDog: I would assume yum to be sensible and install the dependencies. And then salt obviously found it as you have it.
10:52 babilen If you google for your error you find reports mostly from people who run Fedora
10:53 RabidDog babilen: http://paste.debian.net/108868/ while this might sound stupid, again, I haven't been able to figure out what the error is and express it in such a way that google returns anything useful. Would you mind sharing your query?
10:55 babilen RabidDog: https://groups.google.com/forum/#!topic/salt-users/qPJO8sYb6jM
10:56 babilen RabidDog: I simply googled for the "SaltReqTimeoutError: Waited 60 seconds" message from your minion
10:57 RabidDog yeah I saw that post, didn't help much :(
10:57 RabidDog busy on tcpdump for you
10:59 beardo_ joined #salt
11:00 dopeh joined #salt
11:01 babilen RabidDog: I simply don't see anything obvious ... I mean you could start again (and configure it correctly to begin with now) by purging the packages and removing /var/cache/salt/ ... but short of that I haven't seen any obvious mistake. I might have overlooked something though.
11:01 RabidDog babilen, I will try that. thanks so much for the effort :)
11:01 dopeh left #salt
11:05 babilen RabidDog: I'm sorry for that. I am familiar with the Debian packaging and never had this problem ...
11:11 RabidDog babilen, not a problem at all, I really appreciate the effort
11:12 logix812 joined #salt
11:19 masterkorp How can i get grains from a masterless setup ?
11:20 ekristen joined #salt
11:28 dualinity is there any way how I can have someone who installs salt-minion connect with me?
11:29 dualinity rather than changing the config file for the minion on my local machine? (that is when my pc is using both salt-minion and salt-master)
11:30 dualinity I would basically ask like some kind of one click install such that people who would install automatically have all the right settings to connect with my salt-master?
11:30 dualinity it's not clear for me how to do that
11:30 thayne joined #salt
11:32 otter768 joined #salt
11:33 jrdx joined #salt
11:34 babilen dualinity: ship a suitable minion config.
11:37 babilen masterkorp: "salt-call grains.items" ?
11:39 alanpearce joined #salt
11:43 oz_akan_ joined #salt
11:44 masterkorp thank you sirrrrrr
11:44 babilen yw
11:51 ghartz joined #salt
11:52 ndrei joined #salt
11:53 bhosmer joined #salt
11:54 intellix joined #salt
11:56 jas- joined #salt
11:56 Lomithrani joined #salt
12:00 Lomithrani matthiaswahl: still there , have you solved your problem ?
12:04 matthiaswahl Lomithrani: yes, i did.
12:04 Lomithrani Out of curiosity , what was it ?
12:04 matthiaswahl Lomithrani: there seems to be an error, so you can't give expr_from from within a template
12:04 matthiaswahl Lomithrani: worked fine with globbing too
12:04 felskrone joined #salt
12:05 matthiaswahl Lomithrani: this works: {% for server, addrs in salt['mine.get']('*', 'network.ip_addrs').items() %}
12:05 Lomithrani Yes that's what I do
12:05 vbabiy joined #salt
12:05 Lomithrani but it should work with expr_from
12:05 Lomithrani form
12:05 matthiaswahl i have to admit that i discovered lots of rough corners in salt and salt-cloud. :( will file some github issues
12:06 Lomithrani Actually that might be your error :D
12:06 Lomithrani from instead of form
12:06 matthiaswahl Lomithrani: it is expr_form? not expr_from?
12:06 matthiaswahl then there he is again: the stupid me
12:06 Lomithrani salt['mine.get']('roles:web', 'network.ip_addrs', expr_form='grain').items()
12:06 matthiaswahl hoho
12:07 matthiaswahl Lomithrani: thanks for pointing that out :D
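matthiaswahl's expr_from/expr_form mixup is the classic misspelled-keyword failure. With a hypothetical stand-in for `salt['mine.get']`'s signature (illustrative only, not Salt's implementation), Python surfaces it as a TypeError:

```python
def mine_get(tgt, fun, expr_form='glob'):
    """Hypothetical stand-in for salt['mine.get']'s call signature."""
    return {'tgt': tgt, 'fun': fun, 'expr_form': expr_form}

# Correct keyword: targeting mode reaches the function.
print(mine_get('roles:web', 'network.ip_addrs', expr_form='grain'))

# Misspelled keyword fails loudly instead of being silently honoured:
try:
    mine_get('roles:web', 'network.ip_addrs', expr_from='grain')
except TypeError as exc:
    print('TypeError:', exc)
```

Inside a Jinja template the traceback is buried in the render error, which is why it read like "can't pass expr_form from a template" rather than a one-letter typo.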
12:08 badon joined #salt
12:09 Lomithrani matthiaswahl: No problem :)
12:09 babilen heh
12:11 babilen Hmm, is there a state that allows me to copy the contents of a directory or would I have to use cmd.run for that?
12:13 flupke joined #salt
12:15 vinian left #salt
12:15 jacksontj joined #salt
12:16 flupke Hi, what can cause a minion key to change? I'm struggling to get our autoscaling ec2 instances working with salt. I see them in salt-key, I see them being bootstraped by salt-cloud, but the keys on the master and the new minions are different
12:16 flupke Every new instance has a different key, why?
12:19 flupke (the source AMI already has salt-minion setup, but the keys on new instances are different)
12:28 toastedpenguin joined #salt
12:35 vbabiy joined #salt
12:38 ramishra joined #salt
12:39 TheThing joined #salt
12:43 badon joined #salt
12:43 __number5__ flupke: every new minion will need a new key, or else master can't tell one from another
12:44 babilen flupke: https://github.com/saltstack-formulas/ec2-autoscale-reactor might come in handy
12:44 flupke that is what I implemented
12:45 flupke ec2-autoscale-reactor
12:45 quist joined #salt
12:45 flupke I see minions accepted, but the keys differ from what's on the minions
12:47 flupke e.g. /etc/salt/pki/minion/minion.pub on the minion and /etc/salt/pki/master/minions/<instance-id> are different
12:48 jrdx joined #salt
12:49 flupke and then of course they can't communicate with the master...
12:50 veb joined #salt
12:52 jslatts joined #salt
12:53 babilen So you are using that reactor, but it isn't working because your minions change keys?
12:53 babilen Are you sure that your AMI have keys already?
12:54 veb joined #salt
12:54 thayne joined #salt
12:55 flupke babilen: yes, we keep an instance to create the source AMI of the auto-scaling group, and it is provisionned with salt
12:57 babilen And that one already has those keys?
12:57 flupke it has a minion.pub key, but different from the new instances
12:57 babilen I should note that I am neither very familiar with the autoscaling reactor nor with the way it should work, just trying to ask pesky questions to understand the issue :)
12:58 flupke and this key does not change between reboots, the issue only happens on newly created instances
12:58 babilen No minion.pem ?
12:58 bhosmer joined #salt
12:59 flupke yes, and I just checked minion.pem is identical on the source instance and newly created instances
13:00 flupke only minion.pub is different
13:00 bhosmer_ joined #salt
13:00 ccase joined #salt
13:02 elfixit joined #salt
13:02 babilen flupke: And "openssl rsa -in minion.pem -pubout" gives you the same public key?
13:03 mechanicalduck joined #salt
13:03 flupke babilen: yes
13:04 flupke oh wait, sorry, the minion.pub are identical on both minions
13:04 flupke it's on the master that it's different
13:04 babilen So, you have minion.pem in your AMI instances, but different minion.pub. The original minion.pub (i.e. the one in the AMI instance and the one the master expects) is different from the one that is being used eventually *and* also different from the output "openssl rsa -in minion.pem -pubout"" ?
13:05 babilen Ah ..
13:05 mapu joined #salt
13:05 mpanetta joined #salt
13:05 babilen Different how?
13:05 babilen (on the master)
13:05 babilen So to summarize: The minion.{pem,pub} are identical on the minions (as they should), but are somehow different on the master?
13:06 flupke yes
13:07 babilen Different how?
13:08 flupke babilen: https://gist.github.com/flupke/4768d8fa4378933bb4b9
13:10 racooper joined #salt
13:11 babilen flupke: And the second one is the correct one in the AMI (i.e. /etc/salt/pki/minion/minion.pub) and identical to the output of "openssl rsa -in minion.pem -pubout" ?
13:11 flupke the first one is OK, it's the second one, on the instance spawned by auto-scaling, that differs
13:12 msciciel_ joined #salt
13:13 babilen So it is the first one that is identical to: /etc/salt/pki/minion/minion.pub and openssl rsa -in minion.pem -pubout in the AMI instance?
13:13 babilen And as you said "minion.pem is identical on the source instance and newly created instances" it should also be identical to /etc/salt/pki/minion/minion.pub on the autoscale'd instance. Is that correct?
13:14 babilen It's just that for some reason you get a different key on the master than minion:/etc/salt/pki/minion/minion.pub
13:15 flupke yes that's what I see
13:16 babilen Could you remove that key on the master and restart the autoscale'd minion?
13:16 flupke sure
13:17 babilen It also looks as if the reactor is creating its own key
13:18 MTecknology My brain is broken today. Authentication happens on tcp/4505 minion -> master, the session (zeromq) happens tcp/4506 minion -> master, right? no?
13:18 flupke babilen: the reactor calls runner.cloud.create when an instance is spawned
13:19 oz_akan_ joined #salt
13:20 to_json joined #salt
13:21 flupke babilen: deleted the key on the master, restarted minion, and in /var/log/salt/minion I get a 'The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate' (normal since it has the same key as the source for AMIs)
13:21 babilen flupke: Does it appear in "salt-key -L" now?
13:21 flupke babilen: nope
13:24 babilen flupke: Weird, but before I dive into salt/crypt.py -- You removed the key from /etc/salt/pki/master/minions{,_pre,_rejected} ?
13:25 flupke babilen: just from /etc/salt/pki/master/minions
13:26 babilen flupke: It is present in _pre ?
13:27 babilen (or _rejected)
13:27 flupke babilen: no, but I found an instance of the same public key in minions_denied (not visible in salt-key output)
13:27 babilen aha!
13:27 flupke I guess I have to remove them... but can it work at all with all minions having the same key?
13:27 babilen Should have mentioned that too (not present on my play system as I haven't denied a key)
13:28 yomilk joined #salt
13:28 babilen flupke: Yeah it should. You simply need a unique id:key combination I guess. (I bootstrap many minions in my testsetup with identical keys)
13:29 babilen I am not sure about the details (I really don't want to have to dive into crypt.py and payload.py in the salt codebase), but that is my "internal model" so far :)
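babilen's "unique id:key combination" model can be sketched: the master's accepted-keys store is keyed by minion id, so many minions may share one keypair, but a given id must always present the key cached for it. A toy model (illustrative names, not Salt's crypt.py):

```python
# /etc/salt/pki/master/minions modeled as {minion_id: accepted_pubkey}
accepted = {}

def auth(minion_id, pubkey):
    """Accept on first contact; afterwards require the cached key to match."""
    if minion_id not in accepted:
        accepted[minion_id] = pubkey          # like salt-key -a / auto-accept
        return 'accepted'
    if accepted[minion_id] == pubkey:
        return 'ok'
    return 'rejected: cached key differs'     # flupke's symptom

SHARED = 'ssh-rsa AAAA...shared'
print(auth('web01', SHARED))                  # accepted
print(auth('web02', SHARED))                  # accepted: same key, new id is fine
print(auth('web01', SHARED))                  # ok
print(auth('web01', 'ssh-rsa BBBB...other'))  # rejected: id already has a key
```

A stale entry under the same id (e.g. one left in minions_denied) produces exactly the "master has cached the public key for this node" loop flupke hit.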
13:30 bhosmer_ joined #salt
13:31 xintron So, I'm about to get myself a new VPS and wondered: how would you go along with using salt if you only have (currently) one machine to configure the minion?
13:31 perfectsine joined #salt
13:31 xintron Local virtual machine that will act as a temporary master? Or maybe minion-only setup?
13:31 flupke babilen: I'll wait for a new instance to spawn and see what happens
13:32 bhosmer__ joined #salt
13:35 flupke babilen: I don't get it though, I don't see traces of this minion ID in the autoscale'd instances
13:35 flupke babilen: /etc/salt/minion doesn't have an "id:" entry, and the hostname has nothing to do with the amazon's instance ID...
13:36 Damoun joined #salt
13:37 Lomithrani joined #salt
14:12 ilbot3 joined #salt
14:12 Topic for #salt is now Welcome to #salt | 2014.1.5 is the latest | SaltStack doc sprint this Thurs!! Sign up here: http://goo.gl/19BbGM | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
14:12 quickdry21 joined #salt
14:12 tinuva joined #salt
14:13 dude051 joined #salt
14:13 babilen flupke: You are welcome. Which changes are necessary to the reactor?
14:14 btorch could someone refresh my memory please, I remember during the saltconf this year, someone talking about the differences between using something like  pkg.installed: vs pkg: and then - installed ... but I can't remember
14:14 RockAndOrRoll joined #salt
14:14 babilen btorch: One is syntactic sugar for the other
14:15 dude051 joined #salt
14:15 flupke babilen: I will just accept keys with wheel.key.accept, and completely remove the salt-cloud part
14:15 micko joined #salt
14:16 btorch babilen: not following :) I thought someone said it was better/safer to use one kind
14:17 babilen btorch: I consistently use "state.function:" these days (more concise), but apart from that I am not aware of any other advantages.
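The two YAML spellings btorch asks about resolve to the same (module, function) pair; a rough sketch of that normalization (not Salt's actual state compiler, just the idea behind "one is syntactic sugar for the other"):

```python
def normalize(state_id, body):
    """Reduce either YAML spelling to (module, function, args)."""
    for key, args in body.items():
        if '.' in key:                       # short form: pkg.installed
            mod, fun = key.split('.', 1)
            return mod, fun, args or []
        # long form: pkg: [installed, {name: vim}] -- the bare string is the
        # function name, the dicts are its arguments
        fun = next(a for a in args if isinstance(a, str))
        rest = [a for a in args if not isinstance(a, str)]
        return key, fun, rest

short = {'pkg.installed': [{'name': 'vim'}]}
long_ = {'pkg': ['installed', {'name': 'vim'}]}
print(normalize('vim', short))   # ('pkg', 'installed', [{'name': 'vim'}])
print(normalize('vim', long_))   # ('pkg', 'installed', [{'name': 'vim'}])
```

Both spellings compile to the same call, so neither is safer; the short form is just harder to get wrong (no stray `- installed` item to misplace).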
14:17 btorch I'm just wondering why sometimes when I run a highstate on some boxes that have had apt-get update run, salt is also updating pkgs that have been installed already
14:17 btorch it doesn't happen on all boxes
14:17 ajprog_laptop1 joined #salt
14:17 babilen Can you be more specific? (states + output)
14:22 btorch babilen: http://goo.gl/mbuUAo
14:23 mechanicalduck_ joined #salt
14:24 babilen btorch: And you install those from the normal repositories? Why do you have to use "skip_verify" and "force_yes" ?
14:24 btorch babilen: no, it's an internal repo and we need that
14:24 babilen btw, current salt versions do not require explicit require statements for states that are already in the right order (states are executed from top to bottom)
14:24 mateoconfeugo joined #salt
14:24 Deevolution Is there any way to run arbitrary salt commands from a non-saltmaster host?  I.e. something like "salt '*' test.ping" from a workstation?
14:25 babilen btorch: And that is a correct repository in that you only have one version in there at any given time?
14:25 btorch babilen: I don't think that's the case with 0.17.5-1lucid/precise :)
14:26 btorch babilen: yes only one version at a time, one thing that I noticed now is that that common.packaging state that sets up the repos if they don't exist does a refresh
14:26 babilen It's a requirement for every Debian repository. I just see an increase in repositories that are broken in this regard and ship multiple versions.
14:26 Shish_ Deevolution: salt-call
14:27 diegows joined #salt
14:27 Deevolution Shish: Doesn't that run salt commands on a single workstation locally?  I was hoping to be able to run commands against multiple hosts without having to log into the saltmaster first.
14:28 babilen btorch: Well, I have not seen anything problematic so far and it is impossible to debug this without seeing the information about the state *before* the problem and the actual output that is being problematic.
14:28 btorch it's very strange cause this also only happened on 1 box out of 10
14:28 Shish Deevolution: ah; for multiple hosts I think you need to be on the master v.v
14:28 babilen btorch: "this" ?
14:28 Deevolution Okay.  So be it.  I'll have to sort that out then.  Thanks!
14:29 btorch babilen: the pkgs getting upgraded
14:29 btorch I'm trying to see if I can find a way to replicate on another one but no luck
14:30 babilen btorch: The states you showed me should not cause upgrades of packages.
14:31 babilen So it is either a problem in another state or a problem with your repository setup. As I haven't even seen the output of the "upgrade" run (i.e. "the problem") I have no idea how to continue debugging this.
14:31 btorch babilen: no worries, thanks though
14:31 babilen As a sidenote: I'd recommend to actually sign your packages and remove the "skip_verify"
14:32 babilen s/packages/Release file
14:32 babilen reprepro supports that
14:32 rallytime joined #salt
14:32 Hell_Fire joined #salt
14:32 jalbretsen joined #salt
14:33 babilen As do a couple of other tools: See https://wiki.debian.org/HowToSetupADebianRepository#APT_Archive_Generation_Tools for an overview
14:35 TheThing joined #salt
14:37 babilen The signing key can be configured easily by making it available via pkgrepo.managed: - key* (either key_url or keyid + keyserver). We just use key_url: salt://foo/bar/baz.gpg here
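A sketch of the repo state babilen describes; the state name and deb line are assumptions, the key_url value is from his example:

```yaml
internal-repo:
  pkgrepo.managed:
    - name: deb http://repo.example.com/debian wheezy main
    - key_url: salt://foo/bar/baz.gpg
```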
14:39 housl joined #salt
14:40 btorch babilen: about the state ordering thing you mentioned earlier, do you know what release that started on ?
14:41 babilen 2014.* IIRC, but let me check
14:41 babilen (might have been 0.17.5)
14:42 btorch oh you mean within a state file, don't you ? not within a collection of states
14:43 babilen yes, within a state file. You naturally still need to introduce require: statements when you need to enforce a suitable order between states in different files
14:44 btorch ah ok cool
14:45 babilen it's just that in your particular example it wasn't absolutely necessary to enforce order there
14:46 lazybear joined #salt
14:47 babilen On second thought: If you want to call states by sls_id http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#salt.modules.state.sls_id it might actually be necessary to declare the requirements so that it is able to handle the requisites
14:47 * babilen makes a mental note of that
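The distinction babilen draws, as a sketch (file and state names are examples):

```yaml
# nginx/install.sls — within one file, top-to-bottom order is implicit
# in current releases, so no require is needed here
nginx:
  pkg.installed

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://nginx/files/nginx.conf

# nginx/service.sls — across files, requisites are still needed
include:
  - nginx.install

nginx-service:
  service.running:
    - name: nginx
    - require:
      - file: /etc/nginx/nginx.conf
```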
14:48 thayne joined #salt
14:49 thedodd joined #salt
14:50 flupke What is the point of /etc/salt/minion_id?
14:50 kballou joined #salt
14:51 flupke I just found after some digging that all my minions had the same ID because of it...
14:51 babilen flupke: I mentioned that file quite some time ago (no reaction from you though) as I was surprised that all your minions had the same id
14:53 flupke babilen: today? I did not see it sorry
14:54 conan_the_destro joined #salt
14:54 doddstack joined #salt
14:55 babilen flupke: Oh, pity .. yeah I mentioned it earlier.
14:56 flupke babilen: is there a way to ignore it? and have the ID generated at every start-up?
14:57 babilen flupke: Just remove that file: The logic is https://www.refheap.com/88002
14:57 flupke But why cache it?
14:58 babilen why not? They don't change often in *a lot* of very common setups
15:00 mpanetta joined #salt
15:01 flupke It's going to make things very complicated for my use case..
15:02 babilen flupke: Set "minion_id_caching: False" in your minion config and see how that works
15:02 flupke Either I delete it each time I snapshot my AMI, or I have to create a startup script that deletes it before saltstack starts
15:03 babilen (and delete that file once)
15:03 flupke Oh, didn't see it in the docs
15:03 babilen It is not in the docs
15:03 babilen I used the source
15:04 babilen https://github.com/saltstack/salt/blob/develop/salt/config.py#L1928
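The resulting minion config, for reference (the option was undocumented at the time; babilen found it in salt/config.py):

```yaml
# /etc/salt/minion
minion_id_caching: False
```

Delete the existing /etc/salt/minion_id once; with caching off, the ID is regenerated at every start-up.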
15:04 flupke babilen: thanks, it works
15:04 babilen Oh yeah!
15:05 Gareth morning
15:05 babilen Good evening :)
15:06 Gareth morning.  relative term :)
15:06 babilen flupke: While we are at it: You probably want to set state_output: changes and show_jid: True too
15:06 dude051 joined #salt
15:07 badon joined #salt
15:07 babilen flupke: The former will result in a *much* easier to read state run output (I have no idea why that is not the default) and the latter will, eventually, print the JID if you run into a timeout during a highstate run (or so)
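Both options go in the master config (they also take effect in the minion config for salt-call), for reference:

```yaml
# /etc/salt/master
state_output: changes   # condensed output for states without changes
show_jid: True          # print the job ID when a command times out
```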
15:08 flupke babilen: yes I'm already using these :)
15:10 babilen Ah, great :)
15:11 babilen So, is it working for you now? salt ec2 autostart goodness?!
15:12 maboum Hi everyone
15:12 davet joined #salt
15:13 davet1 joined #salt
15:14 pdayton joined #salt
15:16 flupke babilen: still figuring the details out, I'm rewriting the reactor to see what data SNS posts on instance launch (can't find docs...)
15:16 elfixit joined #salt
15:17 maboum With salt-cloud deployment of ec2-instance, is there a way to specify the private-ip you want your instance to have
15:18 jslatts joined #salt
15:19 flupke maboum: apparently yes, see https://github.com/saltstack/salt/issues/10816
15:20 flupke maboum: PrivateIpAddress: 10.0.0.101
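From the linked issue, the key appears to be the raw EC2 API parameter name set in the cloud profile; the rest of this profile is an assumed minimal sketch and the exact layout may differ between salt-cloud versions:

```yaml
my-vpc-instance:
  provider: my-ec2-config
  image: ami-xxxxxxxx
  size: t1.micro
  PrivateIpAddress: 10.0.0.101
```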
15:21 ndrei joined #salt
15:23 maboum flupke: Thanks!
15:24 dyim_ joined #salt
15:25 babilen yay for #salt!
15:27 Lomithrani hi/re : is it possible when you do a state/highstate to wait for one minion to finish its state for the otherone to go ?
15:28 dimeshake Lomithrani: yep, you can use --batch 1 to run 1 at a time, for example
15:28 Lomithrani for instance I want all my cluster apps to reboot, but one after the other (for obvious reasons)
15:29 malinoff joined #salt
15:29 krow joined #salt
15:30 flupke babilen: the hostname is not included in amazon's notification, so there's no way to know which minion to accept :/
15:31 flupke babilen: that's why the original ec2-autoscale reactor uses the instance-id to identify minions
15:37 zach Anyone had luck with salt-cloud & vsphere?
15:37 zach I ask because I'm curious if it works with vcenter
15:38 zach and if it can work with both ec2 and vsphere
15:40 Lomithrani dimeshake: thanks
15:40 ramishra joined #salt
15:41 mgw joined #salt
15:43 Lomithrani https://www.refheap.com/88003 do you have better eyes/brains than me and can you see where the problem could come from
15:43 che-arne joined #salt
15:45 Lomithrani because in "<unicode string>" doesnt help me much :(
15:46 malinoff Lomithrani, can you paste your top file?
15:47 Lomithrani malinoff: no need to now that you've pointed out where the problem was :)
15:48 smcquay joined #salt
15:48 malinoff Lomithrani, you should've read the first two lines of the traceback :)
15:48 Lomithrani ?
15:48 Lomithrani I have
15:48 intellix joined #salt
15:48 andrej joined #salt
15:49 Lomithrani oh the top = selft.get_top() ?
15:49 malinoff yep
15:49 Lomithrani was looking at the path and I was like "thats no file of mine !"
15:49 Lomithrani thankyou anyway
15:49 malinoff you're welcome
15:49 Lomithrani problem solved ;)
15:50 malinoff also, if you see "base:", think about top file
15:50 malinoff always
15:50 Lomithrani ok ! I'll remember that
15:50 malinoff I can't remember another place where you can meet that string
15:51 malinoff if you don't use 'base' in state identifiers (can't really imagine what can be named like that)
15:54 maboum is there a way to change some configuration in the yaml files by passing an argument?
15:55 ml_1 joined #salt
15:57 jrdx joined #salt
16:01 lude is there a way for salt to replace a file only if that file matches a checksum
16:01 lude like, i want to only replace the OS installed config file with a custom one, but not change it if someone edited that
16:01 dude051 joined #salt
16:02 racooper lude, why don't you just manage the file via salt in the first place, and not let anyone edit it directly?
16:03 lude because in some cases i need to override minor things on a per-server basis
16:03 sacasumo joined #salt
16:03 racooper pillars can help you do that :)
16:03 lude and this particular daemon (nginx) doesn't allow overriding of some directives with a .local file
16:04 lude i need people who don't necessarily have access to my salt master be able to make changes
16:05 lude with most things, i just use a #include something.local.conf
16:05 lude but nginx, as an example, will not let you override the worker_processes by specifying it there
16:05 lude once it's set, it's set
16:06 lude i'm open to suggestions other than just including a local file as well, of course
16:06 jcsp joined #salt
16:08 alanpearce joined #salt
16:09 sacasumo hey guys, just started playing around with salt quite recently, and probably trying to shortcut here, but how easy would it be / what is the process required to check whether a given user exists, and if not create it - along the lines of the following, assuming it was done the pre-salt-era way
16:09 racooper well, what about making worker_processes always in a local include and not in the main config file? would it accept that?
16:09 sacasumo useradd -d /home/devel devel usermod -G wheel devel vi /etc/ssh/sshd_config  PermitUserEnvironment yes  visudo  ## Same thing without a password %wheel  ALL=(ALL)       NOPASSWD: ALL  /etc/init.d/sshd restart  su - devel mkdir /home/devel/.ssh vi /home/devel/.ssh/authorized_keys chmod 700 /home/devel/.ssh/ chmod 600 /home/devel/.ssh/authorized_keys
16:10 racooper sacasumo,  pastbin lets you show proper spacing and multiple lines....
16:10 racooper um pastebin
16:11 tligda joined #salt
16:11 lude racooper: it would, but i have a set of sane defaults i'd like to use
16:11 bhosmer joined #salt
16:11 lude and that's not the only directive there i override by default
16:11 dfinn joined #salt
16:12 bhosmer_ joined #salt
16:12 racooper so include nginx.defaults.conf?
16:12 sacasumo apologies for my rudeness, http://pastebin.com/xURYANAY
16:12 lude hrm, that might work
16:12 racooper set your defaults all in another file that you can manage via salt
16:13 lude but i think i still run into the override issue
16:13 lude like i have nginx.defaults set worker_processes to 8 by default
16:13 lude and this particular box needs it at 32 for one reason or another
16:13 dfinn how do you create the password hash for user.present?
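dfinn's question goes unanswered in the log; for reference, user.present's password argument takes an already-crypted shadow-style hash, not the plaintext, and such a hash can be generated with e.g. openssl (salt and password values here are examples):

```shell
# MD5-crypt shadow hash suitable for user.present's `password` argument
# (-1 selects MD5-crypt; newer openssl also supports -6 for SHA-512-crypt)
openssl passwd -1 -salt somesalt 's3cret'
```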
16:13 sacasumo something like devel.sls
16:14 Flusher damn, "official" formulas are often hard to understand :p
16:14 KyleG joined #salt
16:14 KyleG joined #salt
16:15 racooper sacasumo,  are you trying to set these up in states?
16:15 Flusher sacasumo: you should have a look to https://github.com/saltstack-formulas/users-formula :)
16:15 * Shish is unrelatedly surprised by worker_processes=32 o_O  He's handling ~10,000 concurrent users with 2 workers, and IIRC the docs say that setting it higher makes little difference... (though he could be thinking of varnish o.o)
16:16 Flusher sacasumo: you can enforce users through pillar with this formula (which is mature enough)
16:16 koyd lude: you could manage the file named as "nginx.conf..salt" and then use a cmd.run to check checksum and substitute nginx.conf with it as per the checksum result
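A sketch of koyd's suggestion (the paths and the checksum are hypothetical placeholders):

```yaml
# Only install the salt-managed copy while the live file still matches
# a known stock checksum, i.e. nobody has hand-edited it
nginx-conf-if-unmodified:
  cmd.run:
    - name: cp /etc/nginx/nginx.conf.salt /etc/nginx/nginx.conf
    - onlyif: echo "d41d8cd98f00b204e9800998ecf8427e  /etc/nginx/nginx.conf" | md5sum -c --quiet
```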
16:17 racooper that user formula is quite complicated :P I'm trying to add quotas to a stripped-down version of it but having problems with module.run
16:18 * Dinde slaps Flusher with a large trout in his face
16:18 Flusher =)
16:18 alanpear_ joined #salt
16:18 wendall911 joined #salt
16:18 Flusher racooper: not so complicated finally ^^ very flexible !
16:18 racooper lude,  you could always just default worker_processes to "auto" and let the application handle it :D
16:19 matthiaswahl joined #salt
16:20 racooper Flusher,  let's just say the formula had much more involved than I needed to deal with :)
16:20 lude actually we just came up with a nifty solution here
16:20 lude read in a nginx-salt-vars file
16:20 lude then worker_process is whatever's in there if that file exists, or num_cpus if not
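A sketch of that scheme: file.managed templates render on the minion, so a minion-local override file can be consulted at render time (the override path is an assumption):

```jinja
{#- nginx.conf.jinja: use the local override file if present,
    otherwise fall back to the CPU count grain -#}
{%- set override = salt['cmd.run']('cat /etc/nginx/nginx-salt-vars 2>/dev/null || true') %}
worker_processes {{ override|trim if override|trim else grains['num_cpus'] }};
```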
16:20 hk_em joined #salt
16:20 hk_em howdy
16:20 krow joined #salt
16:20 Flusher racooper: I do understand and I don't need all these features also :p So my pillar is tiny compared to the example :)
16:21 racooper but I still don't have working quota implementation :(
16:21 Flusher :x
16:22 joehillen joined #salt
16:22 alanpearce joined #salt
16:23 alanpea__ joined #salt
16:23 benturner joined #salt
16:24 alanpe___ joined #salt
16:24 sacasumo yes racopper ..
16:25 alanp____ joined #salt
16:25 hk_em am using the salt.client in my script
16:25 sacasumo like I said I'm not too familiar with the stack mechanism yet, but I'd like a noob-view of how to set about doing this .. because from there I think I'll get a good idea of how to go about doing the rest of the stuff I have in mind
16:25 hk_em http://paste.debian.net/108929/
16:26 hk_em how do I pass the -v argument to test.ping
16:26 retr0h joined #salt
16:26 hk_em so I can get those that have not responded..
16:26 hk_em thanks in advance
16:26 racooper sacasumo,  then take a look at Flusher's suggestion of the User formula. you can use it to get an idea of how to do what you want in pillars and states.
16:27 sacasumo yep having a look at it .. looks exactly the sort of thing I need ..
16:27 sacasumo so far I've got a master and a bunch of minions, and I've been getting OS info and stuff, but I haven't done any "execute" stuff yet
16:27 bmatt hk_em: maybe better to use http://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.manage.html#salt.runners.manage.up
16:28 bmatt or maybe status
16:28 racooper I've been baking salt states into 20 or so RHEL/CentOS hosts that have been around for several years. finding all sorts of stuff that was not standardized across them and slowly trying to make them stateful, one service and/or server at a time.
16:28 miqui joined #salt
16:29 Vye hk_em: I'm not sure you can do that with test.ping. You can use the manage.down runner though: http://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.manage.html
16:29 Vye hk_em: salt-run manage.down on your master.
16:29 sacasumo nice - same exact scenario as me .. got around 50 or so centos 6.2-6.5 - need to streamline them nicely and tune some security policies without forgetting anything ..
16:30 sacasumo how do I apply the user pillar to the minions then?
16:30 hk_em k thanks guys
16:30 racooper have you been through the pillar docs yet? that's the best place to start
16:31 krow joined #salt
16:31 sacasumo no I haven't :) probably a good idea to go through them and mess a bit around in pre-prod before jumping in here ..
16:32 linjan joined #salt
16:35 lude hm
16:35 lude how do i store things on the minion that i need to use in the state tree
16:36 lude seems like maybe pillars, but that's stored on the master
16:38 krow1 joined #salt
16:39 beando joined #salt
16:42 jas-_ joined #salt
16:44 yomilk joined #salt
16:46 mateoconfeugo joined #salt
16:46 anotherZero joined #salt
16:47 lude seems like my only choice is grains here
16:47 KyleG only if you want to get: *puts on sunglasses*. Granular.
16:49 lude lol, nice
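Static grains are the usual way to attach minion-local data that states and templates can read (keys here are examples):

```yaml
# /etc/salt/grains on the minion
role: webserver
worker_processes: 32
```

A template can then read `{{ grains.get('worker_processes', grains['num_cpus']) }}`; the minion picks up changes after a restart or a grains refresh.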
16:49 zach http://i.imgur.com/nU6hWwW.jpg
16:49 KyleG lol zach
16:50 KyleG zach: http://i.imgur.com/3kYnyZV.jpg
16:50 KyleG I can't get enough of it.
16:51 penguin_dan joined #salt
16:51 zach http://i.imgur.com/SI1fmgP.jpg
16:53 dfinn for some reason (probably something dumb on my end) my states are not being recognized.  it's saying no matching sls but they are definitely there.  I think I have everything in place that I need.  anyone mind taking a look?
16:53 dfinn http://pastebin.com/Q8XSWUxm
16:53 dfinn this is a new salt master in the dmz that I'm trying to get up and running
16:53 * zach can't help it.... http://i.imgur.com/HWak3DP.jpg
16:54 zach dfinn: have you tried clearing the cache?
16:54 miqui_ joined #salt
16:54 dfinn i have not, how is that done?
16:54 dfinn i ran this : salt '*' state.clear_cache
16:54 dfinn and no change
16:56 zach Can you try 'salt-call' from the minion with verbose / debug output
16:56 dfinn yeah, let me get you that output
16:57 dfinn when I do that, it clearly is reading the state file because it loads everything in the common.sls but then fails with the same error
16:57 zach that's odd
16:58 dfinn http://pastebin.com/zWFwT3pw
16:59 dfinn that iptables output is pretty annoying
16:59 unixman9000 joined #salt
17:00 dfinn you can see that it's running through all the states that are pulled in by common though but still failing at the end saying the dmz role doesn't exist
17:05 aw110f joined #salt
17:09 bhosmer_ joined #salt
17:10 avn_ dfinn: try shorewall, it has cool configs, and they should be manageable via salt pretty well
17:10 dfinn what's that?
17:10 avn_ it's a firewall manager/ruleset generator
17:11 dfinn i'm not trying to manage a firewall
17:11 dfinn i'm trying to manage linux servers that are in my DMZ
17:12 avn_ Ahh, I see (read scrollback bit more). Sorry for misunderstanding
17:12 dfinn np
17:12 dfinn so I think what I have here is a pretty basic salt config but for whatever reason salt is not seeing any of my states
17:14 dfinn hmm…we have stuff split into "packages" and "roles" in salt.  but both are just state files I suppose.  I just tried to apply one of the "packages" states and it worked, so it seems to just be roles that aren't working
17:15 shaggy_surfer joined #salt
17:16 meteorfox joined #salt
17:19 masterkorp Hello
17:19 masterkorp i have the weirdest bug
17:19 masterkorp a simple file.directory
17:19 dfinn i found the issue.  the salt error reporting can be really frustrating sometimes.  there was a package that was referenced by the common role and it was missing.  no useful information was given in any of the errors about this.
17:19 masterkorp it does not even appear on the info level log
17:19 masterkorp its like the state does not exist
17:19 Sokel joined #salt
17:20 ml_1 joined #salt
17:20 Sokel Has there been any considerations to add firewalld support into salt (for RHEL 7 or Fedora 19+)? Or is there a way to handle firewalld through salt currently?
17:20 logix812 joined #salt
17:21 Ryan_Lane joined #salt
17:23 forrest joined #salt
17:24 erjohnso joined #salt
17:25 Ryan_Lane whiteinge, forrest: I need a count of people for the sprint at Lyft and a list of names, ideally :)
17:26 forrest Ryan_Lane, I just asked Colton if Seth was in this morning so we could ask for an update
17:26 forrest I hit him up Monday but he was afk, worst case I'll email Rhett directly
17:26 Ryan_Lane thanks
17:26 Kenzor joined #salt
17:27 forrest Ryan_Lane, yup
17:28 ipmb joined #salt
17:28 taterbase joined #salt
17:31 forrest Ryan_Lane, it's probably just going to be you on your end of the hangout, and me
17:32 Ryan_Lane forrest: oh? we have no people joining the sprint at either location?
17:32 forrest I don't know
17:32 forrest haven't asked for an update in a few weeks
17:32 Ryan_Lane heh, well, hopefully people have signed up since then
17:33 Ryan_Lane we had 6 people on hangout last time, right? we had more lead time and announcements this time
17:37 eliasp I'd love to join… just having no time at all for anything outside the current project here ;-/
17:37 eliasp so probably for a later sprint in fall or so then
17:37 forrest eliasp, sounds like your org sucks if they can't give you 4 hours on a thursday :P
17:40 eliasp forrest: more like "managing 2 jobs + a kid at once" :)
17:40 forrest pssssh
17:40 forrest get rid of the kid, DUH
17:41 eliasp :)
17:44 eliasp I'd rather get rid of job#1 and #2 and exchange them for something from saltstack.com/careers/
17:44 eliasp :)
17:44 forrest eliasp, hah
17:45 eliasp hmm… well, thinking about it: do you have US-only employees or world-wide?
17:45 saurabhs joined #salt
17:45 kermit joined #salt
17:46 forrest eliasp, I don't work for Salt, but so far it's only US employees because of how difficult it is to hire people from other countries I believe
17:46 forrest since you have to prove that an American can't do the same work or something dumb, and pay for all sorts of stuff, it's weird
17:46 eliasp well… true… international legal stuff is really painful
17:46 kballou joined #salt
17:46 forrest yea
17:47 eliasp we had to set up internation offices across the world at a past job just to hire people from there
17:47 eliasp s/internation/international/g
17:48 forrest yea I'm not surprised
17:48 forrest I'd happily open SaltStack Europe :P
17:48 eliasp I'd happily join in ;)
17:48 eliasp forrest: UK?
17:49 forrest Nah I live in the US, I'd move to Europe though
17:49 schimmy joined #salt
17:49 TheThing Can a lonely Icelander join in? :)
17:50 vbabiy joined #salt
17:50 eliasp TheThing: na, but I'd happily move to Iceland :)
17:50 TheThing I'd happily welcome you :D
17:50 eliasp yeehaw! ;)
17:51 bhosmer joined #salt
17:51 bhosmer_ joined #salt
17:54 forrest hah
17:54 chrisjones joined #salt
17:54 ipmb_ joined #salt
17:56 jslatts joined #salt
17:56 eliasp on the other hand… as job#2 is freelancing… I might actually focus a bit more on the consulting side as I did in the past and put some Salt into it ;)
17:59 rblackwe_ joined #salt
17:59 schimmy joined #salt
18:00 koyd eliasp: in the same boat here, focusing more one Salt for freelancing this past year
18:01 forrest Ryan_Lane, basepi has been kind enough to get us that information, whiteinge is out this week.
18:01 koyd forrest and eliasp saltstack europe would be interesting :)
18:01 forrest koyd, I actually thought about just going from conference to conference to talk about Salt over in Europe
18:02 basepi forrest, Ryan_Lane: looks like whiteinge will be attempting to attend the sprint remotely, but he won't be in office tomorrow, and his internet may be spotty as he's out of town.  in any case, we'll get you those list
18:02 basepi lists*
18:02 forrest just for something fun to do, take like 3 months off, would be great.
18:02 koyd forrest: what confs would you hit? I've thought about dockerconf in london, september
18:03 forrest basepi, Sounds good, no pressure on whiteinge if you can let him know, we can handle it and we'll have you guys in the office as well
18:03 koyd but I might be in south america in september, so not sure I'll make it
18:03 forrest koyd, there are quite a few small python conferences, and other associated ones
18:03 forrest There was enough to attend 2 conferences a week when I looked last time
18:03 koyd hm nice
18:04 koyd never did an extensive search
18:04 koyd any particular months ?
18:04 forrest uhh this was late Summer of last year
18:04 forrest when I was looking
18:04 eliasp koyd: if that'd be a serious option to you (SaltStack EU) we should probably talk at some point…
18:05 eliasp or anyone else who's from EU… ;)
18:05 forrest hah
18:05 ksalman how would one use the extfs module in a state? http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.extfs.html
18:06 forrest ksalman, use http://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html
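A sketch of wrapping an execution module call in a state via module.run (the device and filesystem arguments are assumptions; note that positional-only arguments were a known pain point with module.run at the time, as racooper reports later in this log):

```yaml
format-data-disk:
  module.run:
    - name: extfs.mkfs
    - device: /dev/sdb1
    - fs_type: ext4
```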
18:06 koyd eliasp: not until late 2014, but I've thought about it some, yeah
18:06 koyd eliasp: but first I need to meet goals for this year :)
18:06 ksalman forrest: thanks
18:06 eliasp koyd: yes, same here… need to finish a project until late Fall or mid-winter…
18:07 eliasp but once that's done, I'm completely open to anything new
18:07 bhosmer joined #salt
18:07 Sokel Any idea on managing firewalld with salt at all? I don't want to fall back to the old iptables way on my rhel 7 machines if I don't have to
18:07 bhosmer joined #salt
18:07 forrest Sokel, I don't think there is support yet for the new way RHEL is doing it
18:08 forrest if you wanted to write a module for it though, that would be kick ass
18:08 Sokel I know there are salt packages and workings for fedora, and fedora has had firewalld for so damn long, which is why I was wondering.
18:11 kballou joined #salt
18:11 smcquay joined #salt
18:11 forrest friggin whiteinge, on vacation, still merging PRs :P
18:11 eliasp forrest: I'd call that addiction
18:11 forrest heh
18:11 smcquay joined #salt
18:12 eliasp forrest: he probably goes cold-turkey after 24h without a merge/commit
18:12 koyd lol
18:14 thedodd joined #salt
18:16 druonysus joined #salt
18:17 Nazca__ joined #salt
18:18 benturner left #salt
18:18 hk_em how can I run manage.down within python using salt.runner.RunnerClient ?
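A minimal sketch answering hk_em: build master opts and hand them to RunnerClient. This has to run on the master host with salt installed (the config path is the default), so it is not runnable standalone:

```python
# Sketch: invoke the manage.down runner from Python on the master,
# equivalent to `salt-run manage.down` on the CLI
import salt.config
import salt.runner

opts = salt.config.master_config('/etc/salt/master')
runner = salt.runner.RunnerClient(opts)
down_minions = runner.cmd('manage.down', [])
print(down_minions)
```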
18:22 ipalreadytaken joined #salt
18:24 Nazca__ joined #salt
18:25 bhosmer joined #salt
18:25 bhosmer joined #salt
18:33 arthabaska joined #salt
18:35 Nazca joined #salt
18:36 druonysus joined #salt
18:36 druonysus joined #salt
18:42 shaggy_surfer joined #salt
18:42 Kenzor joined #salt
18:43 racooper Still trying to resolve this problem: https://gist.github.com/racooper/970be82b500a5c044afb after two weeks.  looking at the code for quota.py module, the device name isn't a kwarg. so how do I pass it through module.run? nothing I've tried so far has worked.
18:45 dstokes anyone know if salt-api's logging is configurable? currently it's logging to the same file as master using rest_cherrypy
18:48 blarghmatey Any advice on the best way to take a tree from pillar and use it as the contents for a yaml file for file.managed?
18:52 avodovoz joined #salt
18:59 ecdhe Does salt-mine come installed automatically?
19:00 forrest ecdhe, yea
19:00 matthiaswahl joined #salt
19:04 smcquay joined #salt
19:04 dstokes has anyone tried setting up salt provisioning with a docker provider in vagrant? provisioning never runs for me, #vagrant isn't much help
19:05 vodo joined #salt
19:06 forrest dstokes, I'm pretty sure there has been discussion on that before, but I don't have any examples
19:07 forrest did you look at this https://github.com/nolar/vagrant-saltstack-docker-sample ?
19:07 bmatt blarghmatey: I looked into it a while back and afaik there's no way to instruct jinja to emit bare yaml
19:08 blarghmatey bmatt: thanks. I decided to try just passing the root node of the yaml tree as a context variable and then rendering it. We'll see how that works out.
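Another option worth knowing here is file.serialize, which dumps a data structure to a file in a chosen format; with a jinja expression the dataset can come from pillar (the `myapp` key and the path are assumptions, and the quoting of the dumped dict can be fragile for complex trees):

```yaml
/etc/myapp/config.yml:
  file.serialize:
    - formatter: yaml
    - dataset: {{ salt['pillar.get']('myapp', {}) }}
```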
19:09 dstokes forrest: not quite what i'm looking for. the setup i'm after is provisioning docker provider containers with salt. testing salt states works fine w/ virtualbox but vm boot is slow, containers would be near instant
19:09 dstokes i can create a docker container w/ vagrant, but provisioning is never run on the resultant container
19:09 babilen dstokes: I have it ready, yeah .. one second
19:10 forrest dstokes, gotcha
19:10 avn_ Guys, is there any way to include remote (git) formulas directly from states and/or pillars?
19:11 bmatt avn_: nope
19:11 dstokes forrest: also, you know if salt-api logging is configurable or is that whiteinge's area?
19:12 forrest dstokes, I don't remember off the top, and whiteinge is OOTO this week
19:12 dstokes forrest: got it, i'll keep digging. thx
19:12 forrest dstokes, np
19:13 avn_ bmatt: I'm trying to manage the salt master with salt-formula, but when I add the URL to nginx-formula into the pillar as a parameter for the master, and include - nginx from a state, it fails because the master config is not yet updated
19:14 avn_ or better use git subtrees for formulas?
19:15 dstokes ah, should've checked `salt-api -h` rather than manpage. `salt-api --log-file`
19:16 bmatt avn_: a git subtree won't help the bootstrapping problem because the included state will never be there at the time of compilation
19:16 bmatt the yaml is compiled before it's executed, and you need it to be executed before it's compiled
19:20 yomilk joined #salt
19:25 babilen dstokes: dstokes `
19:25 dstokes babilen: hai ;p
19:25 babilen dstokes: https://gist.github.com/babilen/e9479fdfbcca431db208 is an adapted version of something I use
19:26 dstokes babilen: and your provision block actually runs after container boot?
19:26 babilen It does, yeah
19:27 dstokes babilen: what's the privileged docker flag? not in docs
19:27 krow joined #salt
19:28 dstokes babilen: i'll give it a shot
19:28 babilen dstokes: https://docs.docker.com/reference/run/ discusses this.
19:28 babilen dstokes: I use docker 1.0.0~dfsg1-1 and vagrant 1:1.6.3 on Debian testing and unstable.
19:29 babilen Not entirely sure if privileged is absolutely necessary - that might be a remnant of some early experiments :)
19:33 avn_ bmatt: subtree, not sumbodules, so all formulas will be included to my own repo
19:39 dstokes babilen: starting to suspect my base ubuntu:12.04 image doesn't have the facilities for bootstrapping salt dev, maybe that's my issue. tho i'm now getting an ssh error on boot *facepalm*
19:39 dstokes `container_ssh_command': undefined method `[]' for nil:NilClass
19:40 dfinn left #salt
19:43 * babilen pats his trusty Debian …
19:44 dstokes heh heh
19:45 bmatt avn_: *shrug* I think trying to couple gitfs backends with states is probably the wrong approach, but I definitely see a reasonable case for something like on-demand formula inclusion via another channel
19:45 druonysuse joined #salt
19:45 druonysuse joined #salt
19:48 to_json joined #salt
19:50 felskrone joined #salt
19:51 avn_ bmatt: my master lives on an Amazon EC2 instance, so I care about reproducibility of the master
19:51 Kenzor joined #salt
19:52 bmatt yeah, I get that - we use a master that manages itself, but it ends up necessitating that our master has no non-core dependencies
19:52 jdmf joined #salt
19:57 quantumriff joined #salt
19:57 felskrone joined #salt
19:59 ktenney joined #salt
19:59 quantumriff I have a pillar question.. in my pillar, I have a definition: "client: ['abc']" In my state, I try to pass into a jinja template {{ salt['pillar.get']('client', '') }}.  however, it's getting passed in as "['abc']"  how would I pull the 'abc' out of the brackets?
19:59 quantumriff no quotes or brackets
19:59 picker joined #salt
20:02 babilen quantumriff: Why don't you use "client: 'abc'" if that is what you want?
20:02 quantumriff well, in a previous use of the pillar, there could be more than one
20:02 quantumriff so we would do a "for each client in {{ client}}"
20:03 babilen So your example was too simplified?
20:03 quantumriff but in our new datacenter, that will not happen.. just didn't want to have to create a whole new pillar item
20:03 dstokes quantumriff: try `{{ salt['pillar.get']('client', [])|first }}`
20:04 Linuturk joined #salt
20:04 babilen Well, you can either adapt the data in the pillar to be what you actually want in there, or use something like "{{ salt['pillar.get']('client', '')|first }}
20:04 Linuturk how would I output a summary of a highstate run? just a summary of hosts affected?
20:04 mgw joined #salt
20:05 quantumriff if I have it defined as "client: 'abc'", instead of in brackets.. will that break older states I have, that do a "for record in client"?
20:05 babilen quantumriff: I would personally not bother with "|first" as you can just as well change the data in the pillar.
20:05 dstokes Linuturk: you could try `.. state.highstate test=True`
20:05 saru11 joined #salt
20:05 manfred Linuturk: pass test=True
20:05 babilen quantumriff: It will, yes
20:05 manfred and it should just tell you which ones will have changes
20:05 Linuturk oh, I actually want things to happen. I just want a host summary at the end
20:05 thedodd joined #salt
20:06 babilen quantumriff: You would naturally have to adapt all states that use that pillar to the new data format (or use |first and accept that this transition will never happen)
20:06 manfred Linuturk: that ... i believe is in helium
20:06 manfred one second
20:06 forrest Linuturk, did you already check out http://docs.saltstack.com/en/latest/ref/output/all/salt.output.highstate.html ?
20:06 rallytime joined #salt
20:06 babilen quantumriff: http://jinja.pocoo.org/docs/templates/#builtin-filters btw
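A minimal sketch of the two options discussed above (the state ID and the echo command are hypothetical, added only for illustration):

```yaml
# Option 1: keep the list pillar ("client: ['abc']") and take the first element
show_first_client:
  cmd.run:
    - name: echo {{ salt['pillar.get']('client', ['none'])|first }}

# Option 2: store a plain string in the pillar instead
#   client: 'abc'
# and use it directly:
#   {{ salt['pillar.get']('client', '') }}
```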
20:06 forrest manfred, oh is there actually a good way to just get a summary in helium?? That would be sick
20:06 manfred i think so
20:06 manfred hold on
20:06 Linuturk terse then I guess?
20:07 saru11_ joined #salt
20:07 saru11_ hi miners
20:08 saru11_ is there anybody with deeper experience of the gitfs fileserver backend in the current stable SaltStack release?
20:08 quantumriff babilen: dstokes: thanks for the tips
20:08 saru11_ I have a problem with top.sls files merging
20:08 jslatts joined #salt
20:08 manfred Linuturk: i can't test it right now cause I have everything using raet, and it isn't returning anything for my highstates right now
20:08 babilen Is anybody of salt's upstream authors familiar enough with salt/config.py to explain how I would use config.get, nested pillars and setting values in the master configuration file?
20:08 Linuturk manfred: forrest dstokes --state-output=terse is close to what I'm looking for
20:08 manfred cool
20:08 forrest Linuturk, cool
20:08 dstokes nice :)
20:09 saru11_ we use a "repo per service" model, which means I have many git repos configured to access the states related to each service
20:09 saru11_ the problem is it's too slow, as it goes over all the branches of all the repos and this takes some time
20:10 jslatts joined #salt
20:10 Linuturk manfred: forrest dstokes it would be cool to see a simple host summary. ie, tried this against these hosts, summary of failed changed succeeded, so I could quickly identify problem spots
20:10 saru11_ It's a pity that the gitfs module from the develop branch is not available in the stable branches
20:10 manfred Linuturk: there is something... one second
20:10 babilen Linuturk: I personally prefer "changes" over "terse", but yeah ...
20:11 babilen Oh, I'm not allowed to answer :)
20:11 saru11_ I'm missing branch whitelists/blacklists, do you know how to effectively work around that?
20:13 forrest Linuturk, I agree
20:13 forrest ok, let's get this figured out, Ryan_Lane, manfred, have you guys looked at this CSV file for the attendees?
20:14 jergerber joined #salt
20:14 Ryan_Lane I didn't get one
20:14 forrest Ryan_Lane, manfred I don't understand how to read this, it doesn't show location
20:14 manfred yes
20:14 forrest let me forward it to you Ryan
20:14 forrest looks like it went to your wikimedia address Ryan
20:14 dober joined #salt
20:14 manfred correct, the first line does not have a venue column
20:15 forrest I sent it to your lyft email
20:15 forrest manfred, right
20:15 Ryan_Lane ok. got it now
20:15 forrest basepi, can you explain how to read this csv file from Rhett? There's no venue details
20:16 forrest I assume the two rackspace guys will be in Texas, I know timoguin will be remote
20:16 basepi forrest: reply to the thread and ask him about venues
20:16 Damoun joined #salt
20:16 basepi i actually thought the same thing and wondered if maybe you knew something i didn't
20:16 forrest basepi, will do, need to add manfred and adjust ryan's email
20:16 manfred what thread?
20:16 forrest lol ok good, makes me feel less stupid
20:16 forrest don't worry manfred I will CC you
20:16 manfred kk
20:16 basepi i don't think manfred was on it, i think he sent to me, forrest, and Ryan_Lane
20:16 forrest correct
20:19 forrest Ryan_Lane, basepi, manfred, just responded, let me know if you still didn't get it for some reason
20:19 MTecknology basepi: fixit!
20:19 manfred doit!
20:19 basepi NO
20:19 MTecknology :(
20:19 basepi I refuse.  Because minionswarm.py is being mean.
20:19 MTecknology I guess I'll ask a question, then...
20:19 basepi Also, what am I fixing?
20:20 pc_ joined #salt
20:20 forrest Ryan_Lane, are you starting the hangout tomorrow? Manfred is saying they will start a bit early because of the time shift
20:20 Ryan_Lane I don't need to be the one who starts it
20:20 forrest Ryan_Lane, yea I know, I just want to confirm who is starting it
20:21 Ryan_Lane if they're starting a bit early, they can start it and we can join
20:21 forrest so we can join it easily
20:21 forrest cool
20:22 forrest manfred, ok, so can you start the hangout tomorrow then?
20:22 forrest and we'll hop on
20:22 manfred sure
20:22 forrest cool.
20:23 forrest man, auto-correct screwed me on this email
20:23 forrest piece locations *facepalm*
20:24 SpeeR joined #salt
20:26 forrest basepi, can you yell at Rhett to not laugh too much at my email, and provide us the locations, not the piece locations
20:26 forrest this is what I get for typing on my phone
20:26 basepi I shall say nothing, and laugh together with him!
20:27 avn_ bmatt: how do you bootstrap (literally restore-after-crash) a new master from the repo?
20:27 basepi (I assume you sent a correction e-mail?)
20:27 forrest basepi, no I can though
20:28 forrest done
20:28 bmatt avn_: TBD :)
20:28 forrest basepi, just sent, gonna get some food
20:28 to_json joined #salt
20:28 basepi i think rhett's in a meeting at the moment, just didn't want to forget to tell him.  =P
20:32 smcquay joined #salt
20:33 martoss joined #salt
20:35 Eureka_ joined #salt
20:36 ecdhe I just installed the salt-formula into a test environment.  The generated /etc/salt/minion file resets the id line to "#id:", which triggers a minion reload, which causes the minion to take its new name, which prevents it from communicating with the salt-master further.
20:36 martoss2 joined #salt
20:36 ecdhe My goal in the first place was to distribute some salt-mine settings into /etc/salt/minion across all minions.
20:37 ecdhe Any thoughts on how to do this?
20:38 ecdhe I see that this line is failing: https://github.com/saltstack-formulas/salt-formula/blob/master/salt/files/minion#L62
20:39 ecdhe How can I get 'id in minion' to succeed there?
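In that formula the template's `minion` context is rendered from the `salt:minion` pillar, so one hedged guess at keeping each minion's own id is to set it there (the grain lookup below is an assumption, not confirmed in the conversation):

```yaml
# pillar sketch feeding salt-formula's /etc/salt/minion template
salt:
  minion:
    id: {{ grains['id'] }}
```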
20:41 martoss joined #salt
20:42 dimeshake ecdhe: what's the error that pops out?
20:42 krow joined #salt
20:43 ecdhe dimeshake, I'm running the command: salt '*' state.highstate --verbose
20:43 rallytime joined #salt
20:43 ecdhe Without --verbose, there's no output.
20:43 MTecknology I have four syndic servers that used to work perfectly great. I can test.ping from master to syndic and from syndic to minion, but I can't test.ping from master to minion. This is turning into a fun issue to figure out... :(
20:44 ecdhe But with --verbose, all I get is:  Minion did not return
20:44 dimeshake add a long timeout
20:44 ecdhe dimeshake, the minion is getting reloaded with a new id -- it will never return
20:45 ecdhe It actually contacts the master to register a new key under the default hostname, 'ubuntu'
20:46 dimeshake ahh. can you run it from the minion with salt-call with debug to see what's happening?
20:46 dimeshake salt-call -l debug state.highstate from minion for example
20:49 ecdhe dimeshake, now I can see that it's happening for sure:
20:49 ecdhe -id: tdrive
20:49 ecdhe +#id:
20:49 ecdhe I'm losing my minion name in /etc/salt/minion
20:50 dimeshake I haven't used that formula - where should it be set from? pillar, grain?
20:50 ecdhe Well, the grain would still be stored in the /etc/salt/minion
20:50 martoss1 joined #salt
20:51 ecdhe dimeshake, two hours ago, I just wanted to use hostsfile-formula
20:52 ecdhe But it won't work unless you can put some salt-mine config in the /etc/salt/minion file for every minion that you want in your /etc/hosts
20:52 ecdhe So I installed the salt-formula so I could put the salt-mine data into all of the minions automatically.
20:52 manfred ecdhe: could drop a .conf file into /etc/salt/minion.d/ ?
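Such a drop-in might look like this (the mine function shown is only an illustration):

```yaml
# /etc/salt/minion.d/mine.conf
# Files in minion.d are merged into the minion config at startup,
# so /etc/salt/minion (and its id: line) stays untouched.
mine_functions:
  network.ip_addrs: []
```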
20:53 manfred ecdhe: are you using cloud servers?
20:53 ecdhe manfred, no, locally provisioning some vms is all.
20:53 manfred using salt-cloud or manually deploying?
20:53 ecdhe manually.
20:54 manfred have you looked into the saltify cloud driver yet?
20:54 manfred it could be used to deploy salt on your servers, and you can setup the minion: array to have mine_functions:
20:54 aw110f joined #salt
20:55 martoss joined #salt
20:55 manfred the saltify cloud driver doesn't make VMs, it just runs the equivalent of salt.utils.cloud.bootstrap() on them to seed minion keys and install salt, plus it can configure your /etc/salt/minion file
20:56 manfred http://docs.saltstack.com/en/latest/topics/cloud/config.html#config-saltify
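Per the linked docs, a saltify setup is roughly the following (the hostnames and the chosen mine function are assumptions; consult the config-saltify page for the authoritative format):

```yaml
# /etc/salt/cloud.providers.d/saltify.conf (sketch)
my-saltify-config:
  provider: saltify

# /etc/salt/cloud.profiles.d/saltify.conf (sketch)
make_salty:
  provider: my-saltify-config
  minion:
    master: salt.example.com
    mine_functions:
      network.ip_addrs: []
```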
20:56 MTecknology It's like the syndics no longer realize that they're syndics
20:56 aquinas_ joined #salt
20:57 AdamSewell joined #salt
20:57 dober joined #salt
20:58 ecdhe thanks manfred, I'll look into this.
20:59 felskrone joined #salt
21:00 Sokel left #salt
21:00 Damoun joined #salt
21:04 ndrei joined #salt
21:06 martoss1 joined #salt
21:07 Kenzor joined #salt
21:09 alekibango joined #salt
21:10 ninkotech__ joined #salt
21:11 martoss joined #salt
21:13 ksalman how would i define a file only if it exists?
21:14 manfred ksalman: in helium, every single state will have an onlyif: value, where you could do test -f /path/to/file and it will only execute if that returns 0
21:14 ksalman what version is helium again? :)
21:14 ksalman 2014.2?
21:14 ksalman err
21:14 TheThing ksalman: One that everyone's waiting for
21:14 ksalman the next release?
21:14 ksalman okay
21:14 manfred 2014.7
21:14 manfred 7 for july
21:14 ksalman yea, okay
21:14 ksalman thanks
21:14 manfred should be this month
21:15 manfred well
21:15 manfred rc1 this month
21:15 manfred hopefully
21:15 ksalman for now, is there a way around it? can i do something like {% if os.path.isfile(foo) %}
21:16 manfred if you can do it in jinja, yeah
21:16 manfred but i don't know of a way to do that
21:18 manfred ksalman: using file.managed?
21:18 manfred create
21:18 manfred Default is True, if create is set to False then the file will only be managed if the file already exists on the system.
21:20 Damoun joined #salt
21:23 ksalman manfred: thanks haha
21:23 manfred np
21:23 hk_em I'm running the following statement in a Python script
21:23 hk_em aval_srvs = rclient.cmd('manage.up',[])
21:24 hk_em it does what I want
21:24 hk_em but...
21:24 hk_em in addition to sending it to my variable
21:24 hk_em it prints it to stdout
21:24 hk_em ??
21:24 shaggy_surfer joined #salt
21:24 jhauser joined #salt
21:26 thedodd joined #salt
21:30 druonysuse joined #salt
21:30 druonysuse joined #salt
21:31 elfixit1 joined #salt
21:31 oz_akan_ joined #salt
21:33 ipalreadytaken joined #salt
21:36 martoss joined #salt
21:36 savvy-lizard joined #salt
21:37 VictorLin joined #salt
21:37 yomilk joined #salt
21:37 forrest What the hell Seattle, no attendees for the salt sprint tomorrow? So lame
21:38 martoss1 joined #salt
21:38 bhosmer_ joined #salt
21:39 rgarcia_ joined #salt
21:40 dstokes forrest: there a sprint schedule somewhere?
21:40 forrest yea
21:40 forrest https://www.eventbrite.com/e/saltstack-documentation-sprint-tickets-12010895913?aff=eorg
21:40 forrest dstokes ^
21:41 martoss joined #salt
21:41 manfred woot 1 rackspace
21:42 manfred central time is better!
21:42 dstokes forrest: thx. in LA but might checkout the hangout
21:44 kermit joined #salt
21:44 kermit joined #salt
21:46 forrest dstokes, for sure
21:46 Hell_Fire_ joined #salt
21:46 forrest manfred, at least we can all say that the west side of the country is actually participating
21:46 ecdhe manfred, the conf file in /etc/salt/minion.d  got it working for me; I will rebuild my next dev env with saltify.
21:47 bhosmer joined #salt
21:56 dualinity joined #salt
21:58 dualinity guys, I'm sorry to ask, but I'm still looking for ANY documentation that can help me create some kind of installer which people could use to become a minion?
21:58 nahamu dualinity: what OS?
21:58 nahamu have you seen the bootstrap tool?
21:58 dualinity more specifically; a minion with settings connected to my host
21:58 dualinity yea I have
21:58 dualinity Im sorry that was unclear what I just said
21:58 forrest dualinity, why not just package it up yourself?
21:59 forrest and modify the minion conf of the package
21:59 dualinity hmmm yea that sounds really obvious
21:59 forrest or just do it as part of provisioning
21:59 forrest with salt-ssh if these are linux/mac machines
21:59 dualinity the goal would be to kind of make a distributed computing network
21:59 forrest so a botnet.
21:59 forrest :P
21:59 dualinity where I can package one software
22:00 nahamu dualinity: what OS are the minions?
22:00 dualinity preferably any
22:00 dualinity if you guys must know
22:00 dualinity I'd like to use the Stockfish chess engine
22:00 dualinity :)
22:00 dualinity send minions a game string and have the minions calculate
22:00 dualinity return the answer
22:00 matthiaswahl joined #salt
22:00 nahamu If you need to be cross platform across windows, mac, linux, and unix it's tricky.
22:00 nahamu if all the minions are the same OS it's slightly easier
22:00 forrest nahamu, that's why I'm just suggesting he build the packages
22:01 dualinity couldnt there be several packages?
22:01 dualinity for now lets say simply linux
22:01 forrest all the required files live in the salt repo for building a variety of distros
22:01 nahamu sure, you just have to build all of them.
22:01 dualinity well that seems acceptable
22:01 nahamu you just need a shell script that runs the bootstrap tool and drops in the minion file and starts the service.
22:01 forrest nahamu, that's an option as well
22:01 dualinity heh thanks guys
22:01 dualinity I considered it
22:02 forrest nahamu, I'm saying he can just distribute the package to people
22:02 dualinity but you know with "path" or whatever
22:02 forrest dualinity, can't you just spin up some cheap minions at digital ocean or something?
22:02 dualinity oh and automatic permissions?
22:02 nahamu Here's how I do it in the Joyent cloud: http://blog.shalman.org/getting-started-with-saltstack-in-the-joyent-cloud/
22:02 forrest honestly as a user I wouldn't install saltstack on my computer
22:02 forrest because of security concerns
22:02 dualinity hmmm
22:03 dualinity my idea is to just start some massive open source project
22:03 dualinity basically people playing games
22:03 dualinity and rather than spending any money myself on cloud computing
22:03 dualinity I think it could actually be a feature
22:03 dualinity for each game they play on their side
22:03 dualinity they will be able to play a game online
22:03 dualinity something like that
22:03 forrest I think that using salt for this is the wrong way to go about it
22:03 forrest salt would provide you full access to their system
22:03 dualinity ahhh okay
22:04 dualinity so yea people would never "buy" it
22:04 dualinity too risky
22:04 forrest even with restricted permissions, or a salt only user, it wouldn't matter.
22:04 forrest even free
22:04 dualinity I only need to run one simple command
22:04 forrest so why couldn't the user just run it?
22:04 dualinity hmmm
22:04 forrest one command to install salt, versus one command to run
22:05 forrest Salt is just a bad plan, what if the salt master was compromised?
22:05 dualinity I'd want to send the game strings interactively
22:05 forrest they'd have a botnet to ddos people, and lock them out of their systems, steal personal data, etc.
22:05 dualinity lol good point
22:05 dualinity I know for sure I wouldn't be a safe host lol
22:06 dualinity is there no way to really restrict salt?
22:06 forrest you can restrict it by using a non-root user
22:06 dualinity I mean... I simply haven't found any way to do this distributed idea
22:06 forrest but I assume the command you want to run requires root
22:06 dualinity let me check (I don't think so)
22:06 azylman joined #salt
22:06 forrest even then, I still wouldn't do it
22:06 forrest you could sniff around systems with incorrect permissions and such
22:07 dualinity well basically I use a shell script around it
22:07 dualinity so they'd need to give that one permission
22:08 dualinity hmm or I need to figure out how I can interact with an existing process
22:08 dualinity engine runs
22:08 dualinity and I interact with it
22:08 dualinity but yea, if salt is not the way.....
22:08 forrest I would say Salt is not the way
22:08 dualinity I tried looking for open source stuff to just do something simple
22:08 forrest too many security risks
22:08 dualinity send game string, return score
22:09 forrest seems like the application should handle that
22:09 forrest via some sort of api
22:09 dualinity hmmm
22:09 dualinity perhaps a python app
22:09 dualinity cross platform
22:10 forrest could work
22:10 dualinity though I'd need to be able to send the app stuff
22:10 dualinity from internet
22:10 dualinity I guess that would work tho xD
22:10 forrest there are plenty of apps out there that do that, you just need to research it
22:11 dualinity yea
22:11 dualinity alright
22:11 dualinity I'm sad Salt then wouldn't be it
22:11 dualinity I like the community
22:11 dualinity lots of help and fast too
22:11 dualinity :)
22:11 forrest yea it just isn't a good solution for your use case
22:11 dualinity basically salt is too powerful for it
22:12 dualinity or not specific enough
22:12 forrest it's just not the right tool for what you want to accomplish
22:12 dualinity yea, alright
22:12 dualinity thanks anyway :)
22:13 miqui_ joined #salt
22:13 forrest np
22:13 dualinity good evening, bye
22:13 forrest bye
22:14 kermit joined #salt
22:16 jrb28 joined #salt
22:20 jrb28 joined #salt
22:21 krow joined #salt
22:21 vejdmn joined #salt
22:36 notpeter_ joined #salt
22:37 notpeter_ Good afternoon all.  When using "contents" with file.managed, is there any way to include a (trailing) newline?
22:40 mosen joined #salt
22:44 bhosmer joined #salt
22:46 ecdhe notpeter_, see http://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.managed
22:47 ecdhe notpeter_, search that page for the string " id_rsa: |" for an example of a multiline string.
22:47 notpeter_ yeah, I found contents_newline.
22:48 notpeter_ Oh, I see.
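The multi-line pillar pattern ecdhe points at looks roughly like this (the key and path are hypothetical); a YAML literal block (`|`) preserves the trailing newline:

```yaml
# pillar
motd_text: |
  Welcome to this host.

# state
/etc/motd:
  file.managed:
    - contents_pillar: motd_text
```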
22:51 Damoun joined #salt
22:57 Damoun joined #salt
22:58 krow joined #salt
22:59 ajprog_laptop1 joined #salt
23:01 jhauser joined #salt
23:01 Outlander joined #salt
23:04 aquinas joined #salt
23:07 azylman joined #salt
23:17 freelock Hmm... I just updated the salt-master on an Ubuntu box... did something change with the -b switch?
23:18 jgarr left #salt
23:18 freelock trying to batch update a bunch of minions, I've been using -b 1 to make sure local boxes don't crash our apt cacher
23:18 freelock this is now throwing an exception on the master
23:18 freelock AttributeError: 'module' object has no attribute 'get_local_client'
23:20 freelock salt commands appear to work fine without -b
23:20 forrest freelock, https://github.com/saltstack/salt/issues/14046
23:21 freelock ah ok, thanks!
23:21 forrest np
23:23 notpeter_ ecdhe: I found my issue.  I peeked at the source on git master and the behavior matches the documentation, but in the currently packaged version (2014.1.6) contents_newline only applies when the contents come from a pillar, not when you specify them inline with "contents: xyz". So at some point in the near future my files will get newlines, but in the meantime I will live without.
23:23 quickdry21 joined #salt
23:33 aw110f joined #salt
23:34 joehh freelock, forrest: new packages coming shortly
23:35 freelock sweet!
23:35 forrest joehh, cool
23:45 kermit joined #salt
23:57 UtahDave joined #salt
23:59 forrest Alright it's the last few minutes of the day, everyone spam UtahDave with complicated and convoluted questions that will keep him from going home to see his family.
23:59 UtahDave lol    :)
23:59 UtahDave how are you forrest!
23:59 forrest good, you?
23:59 koyd forrest: lol. not in freenode, it's universal greeting time
23:59 UtahDave good!
23:59 forrest koyd, fair enough
