
IRC log for #salt, 2017-09-05


All times shown according to UTC.

Time Nick Message
00:07 wryfi joined #salt
00:18 nethershaw joined #salt
00:19 lorengordon joined #salt
00:29 aneeshus1 joined #salt
00:30 tacoboy joined #salt
00:35 omie888777 joined #salt
00:38 johnj joined #salt
00:59 GMAzrael joined #salt
01:19 Namx joined #salt
01:30 Rika joined #salt
01:37 Church- joined #salt
01:39 johnj joined #salt
01:51 ilbot3 joined #salt
01:51 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.7, 2017.7.1 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers
01:54 spuder joined #salt
02:00 GMAzrael joined #salt
02:05 zerocoolback joined #salt
02:07 zerocoolback joined #salt
02:28 Uni babilen: thanks for the pointer, finally getting back to this and yeah, pillarstack is exactly what I was looking for
02:40 dxiri joined #salt
02:41 johnj joined #salt
02:41 zerocoolback joined #salt
02:49 johnj joined #salt
03:01 GMAzrael joined #salt
03:04 aneeshus1 joined #salt
03:13 zerocoolback joined #salt
03:18 k_sze joined #salt
03:18 jab416171 joined #salt
03:20 michelangelo joined #salt
03:44 spuder joined #salt
03:47 aneeshus1 joined #salt
04:02 GMAzrael joined #salt
04:18 preludedrew joined #salt
04:48 golodhrim|work joined #salt
05:01 GMAzrael joined #salt
05:25 dxiri joined #salt
05:31 omie888777 joined #salt
05:56 LotharKAtt joined #salt
06:02 GMAzrael joined #salt
06:06 evle joined #salt
06:22 hoonetorg joined #salt
06:35 usernkey joined #salt
06:45 pualj joined #salt
06:48 Ricardo1000 joined #salt
06:52 _KaszpiR_ joined #salt
06:56 LotharKAtt joined #salt
07:02 GMAzrael joined #salt
07:10 johnj joined #salt
07:24 icebal joined #salt
07:31 vb29 joined #salt
07:31 vb29 left #salt
07:36 nledez joined #salt
07:40 cyteen joined #salt
07:40 Angleton joined #salt
07:41 Church- joined #salt
07:43 omie888777 joined #salt
07:47 rgrundstrom joined #salt
07:53 Rumbles joined #salt
07:59 pbandark joined #salt
08:03 GMAzrael joined #salt
08:08 felskrone joined #salt
08:10 Namx joined #salt
08:11 johnj joined #salt
08:12 mikecmpbll joined #salt
08:39 Mattch joined #salt
08:40 ChubYann joined #salt
08:42 nledez joined #salt
08:50 Felgar joined #salt
08:51 nona joined #salt
08:55 Hybrid joined #salt
09:04 GMAzrael joined #salt
09:10 johnj joined #salt
09:15 Hybrid joined #salt
09:17 _KaszpiR_ joined #salt
09:42 lorengordon joined #salt
09:47 kavakava Hi. Any idea how to update the cache on the master? I have one minion that pushed a lot of files (using cp.push_dir). I have deleted the files on the minion, but they are still located in the cache folder on the master.
09:50 _KaszpiR_ joined #salt
10:02 rgrundstrom joined #salt
10:05 GMAzrael joined #salt
10:12 johnj joined #salt
10:17 jhauser joined #salt
10:18 netcho joined #salt
10:18 netcho joined #salt
10:58 cyborg-one joined #salt
11:06 GMAzrael joined #salt
11:11 johnj joined #salt
11:46 DanyC joined #salt
11:46 zerocoolback joined #salt
11:53 netcho hi all
11:55 rgrundstrom o/
11:58 netcho i have a strange issue :)
11:58 netcho while importing pillar to salt.orch
11:58 netcho for some states it's ok and for some it's not
11:58 netcho example in pastebin in few
12:00 GMAzrael joined #salt
12:02 usernkey1 joined #salt
12:05 babilen kavakava: https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.cache.html
12:06 usernkey joined #salt
12:08 usernkey1 joined #salt
12:12 mage__ left #salt
12:13 dh joined #salt
12:20 numkem joined #salt
12:21 kavakava babilen: Thanks. Should I generate the pillar, grains or similar after running clear_all?
12:22 babilen Won't hurt
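
A rough sketch of what that looks like in practice, assuming the cache runner covers this case (worth checking against the linked docs):

    salt-run cache.clear_all 'that-minion-id'

clears the cached grains, pillar and mine data for the targeted minion. Files pushed with cp.push_dir land under the master's cachedir, by default /var/cache/salt/master/minions/<minion-id>/files, and can be deleted from there by hand if the runner leaves them behind.
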
12:22 netcho babilen: what is the right syntax for passing pillar in the salt.state function in orchestrator?
12:23 netcho i have pillar in a json file which i import in orch. it gets the data for salt functions
12:23 kavakava babilen: Thanks a bunch!
12:23 netcho but for running states on minion using that pillar data i need to pass it again
12:24 netcho here is my pillar
12:24 netcho https://hastebin.com/xotayulozu.hs
12:26 netcho here is pillar and state
12:26 netcho https://hastebin.com/enaqoliguk.php
12:28 babilen netcho: Wouldn't you just pass a dictionary?
12:28 oida joined #salt
12:28 netcho i pass it from file
12:29 netcho it works for salt functions and runners
12:29 babilen What does?
12:30 netcho hm
12:30 netcho it worked now
12:30 netcho i just set pillar: {{ pillar }}
12:30 babilen Yeah .. what did you try before?
12:31 netcho i tried that also but the sls file that needed to be run had the wrong pillar name
12:31 netcho it works now
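
For later readers, a minimal sketch of the shape that ended up working, with hypothetical file and state names; the inline pillar block under salt.state is what gets handed down to the targeted minions (the json filter needs 2017.7 or newer; on older releases a plain mapping works too):

    {# orch/deploy.sls - hedged sketch #}
    {% set service = salt['pillar.get']('service', {}) %}

    deploy_service:
      salt.state:
        - tgt: 'web*'
        - sls:
          - service.install
        - pillar:
            service: {{ service | json }}

Invoked roughly as netcho shows below, e.g. salt-run state.orch orch.deploy pillar="$(cat service.json)", the orchestration file reads the command-line pillar with pillar.get and re-exports whatever subset the minion-side states need.
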
12:32 schasi How would I get data that is in a orchestration state (== run on the master)? Is there a data structure like __opts__ that I can access?
12:32 johnj joined #salt
12:34 babilen schasi: Get it where?
12:34 schasi In a salt-cloud provider module that I have written
12:34 babilen That you call from orchestration?
12:34 schasi I try to have instance-specific data somewhere and think the orch state is a good place
12:35 schasi Yes
12:35 babilen You normally pass it via inline pillar or function arguments (see above)
12:35 babilen Not sure about this particular case
12:36 schasi "salt-run state.orch orch.cloud" would be my command
12:36 netcho you can use pillar with that
12:37 schasi I think I tried a pillar example from the docs. That didn't work for me though.
12:37 schasi Do you happen to have another example?
12:37 netcho salt-run state.orch orch.cloud pillar_enc=gpg pillar="$(cat somefile.json)"
12:37 netcho this should work
12:37 babilen "didn't work" == "Drank your g&t and chilled on the couch" ?
12:37 schasi It's good to get an answer, I have been fiddling around with this for quite some time now, with no result
12:38 schasi babilen: I wanted it to build a nice cottage, but it fled with my daughter, both never to be seen again
12:38 schasi So maybe it does work, but not for me specifically
12:38 babilen Must have been me .. was it a bonnie lass?
12:38 schasi Yes
12:38 schasi YOU!
12:38 schasi Damn
12:39 schasi Thanks for the example netcho.
12:39 babilen But, back to your problem, what did actually happened when you tried whatever you tried before
12:39 babilen *happen
12:41 schasi I think I defined a pillar in my orch state file and then tried to access it inside my salt-cloud provider module, but didn't manage to access it. So... "nothing" happened
12:42 netcho schasi: here is what works for me
12:43 netcho https://hastebin.com/erococonig.bash
12:43 netcho service.json = pillarfile
12:44 schasi Thanks netcho
12:45 netcho sorry missing part
12:45 netcho pillar="$(cat "${pillarfile}" | tr -d '\n')"
12:48 schasi Are inline pillars just that file that you `cat`?
12:48 netcho service.json yes
12:49 schasi Because I tried this: https://gist.github.com/schasi/6dcf353193572963d6306a5937f1604a
12:49 schasi Which I think would be nice if it worked
12:49 netcho i think it should be singular for pillar
12:49 netcho pillar
12:50 netcho not pillars
12:50 netcho https://gist.github.com/schasi/6dcf353193572963d6306a5937f1604a#file-cloud-sls-L8
12:52 schasi Thanks. I fixed it
12:52 schasi How would I access that from inside the salt-cloud python code?
12:52 netcho i don't get the question
12:54 schasi I have a custom salt-cloud module that creates VMs with something like "salt-cloud -p <profile> <vm-name>". I would like that module to be able to access the data in the pillar
12:55 tacoboy joined #salt
12:56 pbandark is it possible to configure zeromq using salt? as salt already use zeromq for internal communication(master <--> minion), I am wondering if it can be used for other purpose(other than saltstack)? if yes then, any extra configuration required on salt-minion ?
12:58 babilen schasi: If the pillar is passed in, you'd use the normal pillar.get function
12:58 schasi That is the question, is the pillar passed on
12:59 schasi Let's try
13:00 netcho you have example in my hastebin
13:00 netcho of passing pillar and fetching it from the orvh
13:00 netcho orch
13:01 schasi Yes, but you fetch the pillar in the state, I fetch it in the salt code. And you give it via an external file, I wanna give it via the state as such
13:01 gh34 joined #salt
13:15 _KaszpiR_ joined #salt
13:32 schasi I am still very confused and don't know how to do this, but thanks for your input, netcho and babilen
13:32 jas02 joined #salt
13:33 drawsmcgraw joined #salt
13:33 johnj joined #salt
13:33 lkolstad joined #salt
13:34 babilen schasi: What have you tried (exactly) and what was the result? We probably haven't done the exact thing, but maybe people in here can spot specific things in your paste?
13:37 cgiroua joined #salt
13:39 drawsmcgraw Bit of a morning puzzler -> In theory, one could DDOS a Salt master by sending tons of new minion requests to the master. This would result in (say) thousands of entries under "Unaccepted Keys". Even if you white/black list with `autosign_file` and `autoreject_file`, I'm under the impression that the rejected minions would still wind up under the "Unaccepted Keys" section. Is there a Master config arrangement that can mitigate such a
13:41 babilen drawsmcgraw: Your message was truncated at "mitigate such a"
13:43 schasi babilen: I hope this is enough information: https://gist.github.com/schasi/795b5c334d633f58fb0d893b871ed95d
13:44 drawsmcgraw "...such a situation"
13:44 drawsmcgraw Thanks babilen
13:46 babilen schasi: I take it that it "works" if you remove the pillar entries in line 8-10 ?
13:47 babilen drawsmcgraw: How would the master be able to differentiate between legitimate and illegitimate requests?
13:48 babilen drawsmcgraw: Ah, you essentially want to ensure that autorejected ones don't end up in the filesystem?
13:48 racooper joined #salt
13:49 schasi babilen: Yes, then the exception doesn't appear
13:49 babilen Something is pesky in that codepath then
13:49 babilen You could open an issue, I would like that to work :)
13:49 wavded joined #salt
13:49 drawsmcgraw babilen: exactly
13:52 schasi Me open an issue or drawsmcgraw?
13:53 Angleton joined #salt
13:54 schasi I am running 2017.7.1 btw, the same does happen on 2016.11.5 though (just tested it)
13:54 babilen Why not both of you guys?!
13:54 schasi Heh :D
13:54 drawsmcgraw this guy :)
13:55 drawsmcgraw It's just a conversation at the moment. Was curious if anyone in here had some thoughts.
13:55 brd drawsmcgraw: I would say the admin is responsible for protecting it appropriately, i.e. not just wide open to the internet.. at least firewall it off to a subset..
13:55 drawsmcgraw brd: valid response. That was actually part of our thought. Fail2ban, iptables, etc
13:55 babilen drawsmcgraw: In a way it is a bit tricky as it might cause previously rejected minions to become legitimate ones again when the autoreject settings are changed
13:55 brd drawsmcgraw: I mean, any software will have flaws and a multilayered security approach is a good one
13:56 schasi babilen: Do you think what I am trying to do should work and it's a "bug"?
13:56 drawsmcgraw babilen: it is tricky but I was hoping there was an `auto_delete` feature of sorts where, if the minion was not on a whitelist, then its key is dropped on the floor and no entry is made anywhere.
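
A rough sketch of the "firewall it off" idea brd mentions above, written as a Salt iptables state; the subnet is a placeholder and 4505/4506 are the master's default publish/return ports. It doesn't stop keys from being written, it just limits who can reach the ports in the first place:

    salt_master_ports:
      iptables.append:
        - table: filter
        - chain: INPUT
        - jump: ACCEPT
        - source: 10.20.0.0/16      # placeholder minion subnet
        - proto: tcp
        - dport: 4505:4506
        - save: True

(with a matching default DROP/REJECT rule further down the INPUT chain.)
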
13:59 Brew joined #salt
13:59 babilen schasi: There are multiple situations where you want to hand over data from one layer to another .. this mostly works via inline pillars and is, IMHO, an idiom that should work in most places. I would therefore consider it to be a bug if it doesn't
13:59 schasi Then I shall open an issue. Thanks
14:10 lordcirth_work joined #salt
14:11 beardo joined #salt
14:13 Rumbles joined #salt
14:14 evle joined #salt
14:20 dxiri_ joined #salt
14:22 shoemonkey joined #salt
14:24 fxhp joined #salt
14:25 mchlumsky joined #salt
14:28 Cottser joined #salt
14:29 wavded joined #salt
14:32 schasi babilen: I have managed to solve my use case with "__opts__". I now have a "opts:\n   mac: 'f0:00....'" field in my orchestration file, which I can read from the __opts__ dict.
14:32 johnj joined #salt
14:33 mikecmpb_ joined #salt
14:35 sarcasticadmin joined #salt
14:37 mchlumsky joined #salt
14:37 edrocks joined #salt
14:38 Rumbles joined #salt
14:50 spuder joined #salt
14:53 michiel joined #salt
14:55 jas02 joined #salt
14:55 pbandark Hello everyone... is it possible to configure zeromq using salt? as salt already use zeromq for internal communication(master <--> minion), I am wondering if it can be used for other purpose(other than saltstack)? if yes then, any extra configuration required on salt-minion ?
14:57 ssplatt joined #salt
14:59 _JZ_ joined #salt
15:00 ssplatt anyone know the best way to get yaml|jinja to place an exact regex? for instance, i am trying to set `thing: \[sshd\]` and have {{ thing }} place exactly `\[sshd\]`.  if i put double quotes around it, salt complains that it can’t render the yaml file “\[sshd\]”.    if i leave out the quotes, then it works but jinja autoescapes and places \\[sshd\\]
15:00 lordcirth_work pbandark, zeromq is just a set of protocols, you can use zeromq independently of salt.  Or do you mean reusing the salt connection?
15:01 ssplatt if i “[sshd]”  then i get exactly “[sshd]” placed but i want the \ because my regex needs exactly \[
15:01 lordcirth_work ssplatt, {% raw %} <regex> {% endraw %}
15:01 pbandark lordcirth_work: i meant, using zeromq for other applications than salt
15:02 Church- joined #salt
15:02 lordcirth_work pbandark, what other purpose?  You can write whatever program you like using zeromq libs.  I don't see how it's salt-related.
15:03 ssplatt my jinja template is {% for k,v in options|dictsort %} {{ k }} = {{ v }} {% endfor %}.  i guess i could {% raw %} {{ v }}{% endraw %} ?  lordcirth_work
15:03 ssplatt that doesn’t seem right.
15:03 lordcirth_work ssplatt, if v is your regex string, sure.  Apparently Jinja still subs variables inside raw, oddly: https://github.com/ansible/ansible/issues/4638
15:04 lordcirth_work though that might be only in Ansible's config?
15:04 ssplatt http://jinja.pocoo.org/docs/2.9/templates/#escaping  seems to suggest that won’t work
15:06 lordcirth_work ssplatt, ok.  Try {% autoescape False %} {{ v }} {% endautoescape %}
15:06 ssplatt i don’t get why placing it in quotes isn’t working.
15:07 lordcirth_work jinja is used to render webpages; I think it autoescapes dangerous characters by default.
15:08 pbandark lordcirth_work: personally i have not yet used zeromq. i have been asked to configure zeromq on servers. as i can see on salt-minions, by default zeromq gets installed, i was confused if i need to make any additional configuration.
15:08 ssplatt funny how single quotes act the same as no quotes.
15:19 johnj joined #salt
15:26 lordcirth_work pbandark, that's like being asked to configure HTTP on a server.
15:26 ssplatt {% autoescape false %}{{ v }}{% endautoescape %}   acts the same as {{ v|e }} which does not give the correct result. the escaping is happening when the yaml is loaded, not when processing the jinja template file.
15:26 mikecmpbll joined #salt
15:26 ssplatt result is \\[
15:27 zerocoolback joined #salt
15:27 ssplatt well i guess that’s not entirely true, |e has issues with other characters.
15:27 spuder joined #salt
15:28 ssplatt autoescape false is the same as just {{ v }}
15:29 pbandark lordcirth_work: thanks for confirmation though :)
15:30 tiwula joined #salt
15:32 vb29 joined #salt
15:32 jas02 joined #salt
15:35 simondodsley My first PR for Salt https://github.com/saltstack/salt/pull/43321. I'd love someone to have a look and review/comment.
15:37 lordcirth_work simondodsley, you should run the pylint
15:37 ssplatt {{ v|replace("\\\\","\\") }}   omg.
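
For anyone hitting the same thing later, a hedged sketch of an arrangement that tends to keep the backslashes intact without the replace workaround; the pillar key, paths and template are made up. Single-quoted YAML leaves \[ alone (double quotes make YAML try to interpret the escape), and serialising the dict into the state with the json filter (2017.7+) avoids dumping a Python repr, which is one common source of the doubled backslashes:

    # pillar - single quotes keep the backslashes as-is
    fail2ban:
      options:
        failregex: '\[sshd\]'

    # state - the template is the k = v dictsort loop from above
    fail2ban_jail:
      file.managed:
        - name: /etc/fail2ban/jail.local
        - source: salt://fail2ban/files/jail.local.jinja
        - template: jinja
        - context:
            options: {{ salt['pillar.get']('fail2ban:options', {}) | json }}
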
15:37 lordcirth_work simondodsley, for example Salt prefers single quotes over double.  https://docs.saltstack.com/en/latest/topics/development/conventions/style.html#pylint-instructions
15:38 Steve87 joined #salt
15:40 bushelofsilicon joined #salt
15:40 simondodsley @lordcirth_work I ran the pylint recommended in the documentation and it passed OK.
15:41 lordcirth_work simondodsley, hmm, not sure why that is.  Anyway, as the doc says:  "docstrings use single quotes, standard strings use single quotes etc"
15:42 simondodsley Thanks. I'll look at that now
15:44 Angleton joined #salt
15:47 onlyanegg joined #salt
15:48 bushelofsilicon Hey all, I've got to generate keys to put into Pillar, but I want to create a user interface for it for an abstraction (i.e. the interface just requires a name and it generates the keys and adds them to the yaml). Is parsing the sls file and creating a script with Python to add entries to the yaml a good way to go about this?
15:50 fatal_exception joined #salt
15:54 Shirkdog joined #salt
15:55 gh34 joined #salt
15:56 mikecmpbll joined #salt
16:02 Church- joined #salt
16:05 racooper joined #salt
16:08 aldevar joined #salt
16:10 lordcirth_work bushelofsilicon, sounds reasonable, assuming you don't have any jinja in this file.
16:13 bushelofsilicon lordcirth_work: Good point, but I think I can avoid that. There's not a better way to do it, through salt-api or something?
16:14 lordcirth_work bushelofsilicon, there's external pillar, or SDB
16:15 gmoro joined #salt
16:16 babilen You could, for example, save the keys in vault
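
If the vault route sounds interesting, a rough sketch of wiring Salt to it, with placeholder values; details are worth double-checking against the salt.modules.vault and salt.pillar.vault docs for the release in use:

    # master config - token auth against a reachable Vault server
    vault:
      url: https://vault.example.com:8200
      auth:
        method: token
        token: 11111111-2222-3333-4444-555555555555

    # expose secrets under secret/salt as pillar data
    ext_pillar:
      - vault: path=secret/salt

The generated keys then live in Vault rather than in YAML files on the master.
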
16:18 colegatron joined #salt
16:22 shanth_ joined #salt
16:23 shoemonkey joined #salt
16:27 bushelofsilicon hmm, I'm going to have to think about this a bit
16:32 edrocks joined #salt
16:33 MTecknology What is vault anyway? I hear about it and have yet to ever read what it is
16:33 whytewolf MTecknology: https://www.vaultproject.io/
16:35 onlyanegg secret management from hashicorp with wide acclaim
16:36 MTecknology ah, interesting. Has it been reviewed in depth yet?
16:37 MTecknology (audited)
16:37 babilen MTecknology: You should play with it, its quite nice
16:44 lordcirth_work Looks nice
16:49 viq MTecknology: I think so, and I hear it's used at government agencies and large enterprised. I also hear it's quite a pain to use without the paid enterprise version, but yes, it does sound like a very interesting project.
16:49 viq s/enterprised/enterprises/
16:49 lordcirth_work viq, oh dear, another open-core project?
16:50 lordcirth_work We don't need more of those
16:50 MTecknology it's obviously a device you want backed by a high-security hsm
16:51 viq lordcirth_work: well, salt is kinda one too
16:52 babilen Their enterprise option is horrendously expensive though
16:52 lordcirth_work viq, more so than I'd like, but still perfectly functional
16:52 babilen I hate open-core .. we had it all figured out a couple of years ago and now open-core is creeping in
16:53 viq lordcirth_work: and I hear vault is also perfectly functional - just sometimes painful to use.
16:53 babilen But then .. I'm using SaltStack that isn't developed by a community either, so ...
17:00 cyborg-one joined #salt
17:01 mikecmpbll joined #salt
17:08 quique joined #salt
17:10 fatal_exception joined #salt
17:10 schasi There is a strong and nice support community though
17:11 quique Is there a way to launch instances in aws with salt-cloud using the iam role of the saltmaster?
17:12 quique I see the option for giving the new minions iam roles, but don't see the option for the salt-master, when I try to launch without an 'id' key I get an error
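
On quique's question: worth checking the EC2 driver docs for the installed release - they describe setting the provider's id and key to 'use-instance-role-credentials' so salt-cloud uses the master's IAM instance role instead of explicit keys. A rough sketch, everything else a placeholder:

    my-ec2:
      driver: ec2
      id: 'use-instance-role-credentials'
      key: 'use-instance-role-credentials'
      private_key: /etc/salt/mykey.pem
      keyname: mykeypair
      location: us-east-1
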
17:20 xet7 joined #salt
17:27 astronouth7303 first time spinning up a windows minion. Seems to be ignoring the selectors in the topfile, namely 'G@virtual:VMware and G@kernel:Linux' and 'G@kernel:Linux'. cp.get_file_str shows the right topfile, but state.show_top indicates it's pulling states it's not supposed to?
17:30 astronouth7303 and, of course, using the selectors in other contexts works correctly -_-
17:32 edrocks joined #salt
17:40 nixjdm joined #salt
17:42 lordcirth_work astronouth7303, do minion and master versions match?
17:47 joseph4325 joined #salt
17:55 joseph4325 is there any way to retrieve metadata about a state via the api?  Basically i want to be able to pull a README-style summary of a state via the api.
17:58 xet7 joined #salt
18:00 GMAzrael joined #salt
18:02 lordcirth_work joseph4325, well, you could put a <state>.readme state file inside all of your state dirs, that you file.cp somewhere?
18:05 joseph4325 how would i access that via HTTP though?
18:06 tapoxi joined #salt
18:06 joseph4325 we use foreman as our webui for salt and i want to be able to display summaries for the states on the UI
18:07 astronouth7303 lordcirth_work: figured it out, the minion was configured for the dev environment and i had updated my topfile in master/base
18:08 lordcirth_work astronouth7303, well that would do it
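
For reference, the shape of a top file using those compound matchers; the state names are made up, and (as above) the environment key has to be the one the minion is actually assigned to:

    base:
      'G@virtual:VMware and G@kernel:Linux':
        - match: compound
        - vmware.tools
      'G@kernel:Linux':
        - match: compound
        - common.linux
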
18:10 felskrone joined #salt
18:13 xet7 joined #salt
18:16 lordcirth_work file.check_managed_changes has wayyy too many args
18:18 edrocks joined #salt
18:22 vexati0n anyone know when 2017.7.2 is expected to be released?
18:23 Guest73 joined #salt
18:24 shoemonkey joined #salt
18:31 astronouth7303 lordcirth_work: I think it's meant mostly as a back-end provider for the state file.managed and wasn't meant for humans
18:38 xet7 joined #salt
18:39 onlyanegg joined #salt
18:46 GMAzrael joined #salt
18:47 lordcirth_work astronouth7303, oh I'm well aware.  But I'm trying to add a feature: https://github.com/saltstack/salt/issues/28165
18:48 johnj joined #salt
18:50 sh123124213 joined #salt
18:55 nixjdm joined #salt
19:00 _KaszpiR_ joined #salt
19:09 jas02 joined #salt
19:11 schasi What is the difference between salt-virt and salt-cloud please? I don't really get it :D
19:12 slugfish joined #salt
19:18 smead joined #salt
19:18 lordcirth_work schasi, salt-virt is "here's a physical machine.  Make a VM on it."  salt-cloud is "here's a cloud system (AWS, Azure, Openstack), make VMs on it."
19:18 slugfish howdy
19:18 lordcirth_work Note how salt-virt docs talk about configuring hypervisors, while salt-cloud talks about configuring AWS auth
19:18 edrocks joined #salt
19:19 lordcirth_work slugfish, hi
19:19 schasi Haha. Thanks, lordcirth_work
19:19 schasi That was a good explanation :D
19:19 smead I'm trying to use salt-cloud via the python API.  I'd really like to configure providers and profiles in code (instead of in config files).  I don't really see in the documentation how I can pass a dictionary to the CloudClient class.  Does anybody have any pointers ?
19:22 Shirkdog joined #salt
19:30 smead anybody ?
19:30 mechleg smead: i have not tried this yet, but according to the source it looks like you could do salt.cloud.CloudClient(opts=dict).  it looks like it can use pillars too
19:31 mechleg source: https://github.com/saltstack/salt/blob/develop/salt/cloud/__init__.py
19:31 smead sweet
19:31 smead thanks
19:35 simondodsley @lordcirth_work changed those quotation marks as you requested. Not sure why pylint didn't pick those up
19:37 lordcirth_work simondodsley, nice.  btw I just noticed that your first commit is as 'root' :P  Common mistake, watch out for that
19:40 jas02 joined #salt
19:44 coredumb joined #salt
19:44 smead Anybody familiar with KeyError: 'extension_modules'
19:50 johnj_ joined #salt
19:51 dxiri joined #salt
19:51 simondodsley @lordcirth_work Good catch - new server that I forgot to set that on. Thanks
19:56 nixjdm joined #salt
19:57 coredumb joined #salt
20:16 Smada joined #salt
20:25 shoemonkey joined #salt
20:38 coredumb joined #salt
20:51 johnj_ joined #salt
20:52 debian112 joined #salt
20:53 debian112 left #salt
20:55 smead joined #salt
20:57 edrocks joined #salt
21:03 jas02 joined #salt
21:10 shanth_ joined #salt
21:12 fatal_exception joined #salt
21:14 omie888777 joined #salt
21:18 Eelis today i learned that salt is unable to determine a minion's ip address unless it has iproute2 or ifconfig installed. kinda silly dependency when getifaddrs() is right there
21:21 onlyanegg joined #salt
21:22 Eelis i think there should be a policy like: if the information to be retrieved is readily provided by glibc, then just get it from glibc instead of adding a dependency on an additional package and creating a new process for it
21:23 Eelis does that seem reasonable?
21:23 sh123124213 joined #salt
21:24 babilen +1
21:25 LotR well, the python stdlib rather than the C one I suppose
21:26 Eelis LotR: the python stdlib probably doesn't have bindings for it. so you need ~100 loc like this: http://programmaticallyspeaking.com/getting-network-interfaces-in-python.html
21:26 Eelis (for the case of getifaddrs)
21:27 absolutejam Can anyone suggest the best way of copying files from one minion to another during an orchestrate run?
21:28 Angleton joined #salt
21:30 absolutejam Guess I can use minionfs and cp.push I guess
21:31 absolutejam there's a lot of ways of doing it, but I want the most succinct way
21:31 whytewolf absolutejam: the best advice is "avoid at all costs"
21:31 whytewolf rsync
21:31 absolutejam rsync state or just vanilla?
21:31 whytewolf vanilla rsync.
21:31 absolutejam I don't mind going minion -> master -> minion
21:32 absolutejam hm
21:35 cgiroua joined #salt
21:44 schasi absolutejam: I have had that problem before and apparently there isn't a good solution for it (yet)
21:46 schasi How do you guys deal with very long running jobs on minions? I have states that take 15 minutes and more, which can lead to "minion did not respond" (even though I have set the "timeout" ridiculously high already)
21:50 whytewolf i typically break things down into smaller chunks and use lots of orchestration instead of cramming everything into a highstate.
21:51 Guest73 joined #salt
21:52 johnj_ joined #salt
21:52 schasi I see. You think the highstate gives me the timeout?
21:52 schasi That everything is in one highstate, that is. Because some of my individual states alone can take 15+ minutes. Like creating a FreeBSD jail
21:53 onlyanegg joined #salt
21:53 whytewolf 15 min to create a jail? ew
21:54 * whytewolf feels spoiled when he can launch a vm in a ready state in seconds now.
21:58 schasi whytewolf: What hardware is behind that?
21:58 schasi Yes, it's kinda slow. It's running over an unoptimized nfs connection still
21:59 schasi I need like 40 secs to launch a VM into ready state
21:59 Guest73 joined #salt
21:59 Edgan schasi: Any time you are time sensitive you are better off moving in a more image-based direction. It also has the advantage of making things more reproducible by not redoing work. But I would also profile your stuff and figure out why it is so slow.
21:59 absolutejam schasi: whytewolf alright thanks
21:59 absolutejam I'll just rsync
22:00 absolutejam You guys just cmd.run rsync right?
22:00 absolutejam and probably stateful: True
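
A rough sketch of the plain-rsync route as a state; host, paths and the SSH key exchange between the minions are placeholders assumed to already exist. Note that stateful: True expects the command itself to print the changed=yes/no summary line Salt parses, so a bare cmd.run is often simpler:

    push_app_data:
      cmd.run:
        - name: rsync -az --delete /srv/app/data/ deploy@other-minion:/srv/app/data/
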
22:00 Edgan schasi: I would say most VMs I have worked with in different roles in puppet or salt tend to take about 10-12 minutes max to go from nothing to done.
22:02 schasi Edgan: I already have done maximum work on the image (preinstalled packages, ...). Still, the creating a VM from image and starting it and configuring salt on it takes 2 minutes
22:02 absolutejam Is it generally worth tar -> copy over ssh -> untar for lots of small files?
22:02 schasi whytewolf: With what provider and hardware and configuration do you launch a VM in seconds?
22:03 rpb joined #salt
22:03 Edgan schasi: Running salt to check state or actually installing salt?
22:03 schasi Well, py27-salt is preinstalled, so configuring salt and installing some stuff (salt bootstrap is running)
22:04 schasi And creating and booting the VM ofc
22:04 whytewolf schasi: my personally openstack configuration.
22:05 Edgan schasi: I bake salt-minion into my AMIs, and use cloud-init to feed a salt-key to the instance via user data.
22:05 coredumb apart from a VM, creating a jail shouldn't take more than a few seconds, especially if running on ZFS
22:05 Edgan schasi: So if I had an AMI with packages preinstalled like you, it would be just a verification that everything was in the right state already
22:06 schasi coredumb: The images are exported from ZFS over an (unoptimized) NFS, which I suspect is where the bottleneck lies
22:06 schasi images = VM disks
22:07 coredumb you use image jails?
22:07 schasi I use a jail on the started VM. It's a poudriere
22:07 coredumb ah ok
22:08 Edgan schasi: I don't touch NFS if I can help it. I would definitely think that is your bottleneck.
22:08 schasi Edgan: It shall be replaced by an iscsi, but that's beyond my control
22:09 schasi whytewolf: What hardware does that openstack run on? Creating a _running_ VM in seconds sounds very nice :D
22:10 whytewolf schasi: the controller nodes are some custom asus rssomething 1u units. and the compute nodes are lenovo rd450's with raid-10 setup for the ephemeral disk space. hypervisor is KVM
22:12 schasi Huh. Are the VMs cloned pre-running or something? ;-) I don't get how it is so fsat
22:12 schasi fast
22:12 whytewolf pretty much. cloud images are based on cloning
22:13 schasi Well, that's how I do it too. Clone it, start it, have it running. But the cloning alone takes quite some time (even though it's just a sparse clone, not a raw one or something)
22:14 coredumb schasi: there it just starts
22:15 coredumb your fully baked image is snapshotted into a new disk to boot the new instance
22:15 coredumb it's almost instant on not too busy hypervisors
22:15 whytewolf [and my hypervisors are not that busy. since this is a cluster in my bedroom]
22:16 coredumb like creating/starting a new jail from a ZFS clone takes roughly 1s
22:17 whytewolf schasi: https://imgur.com/gallery/HgSk1
22:18 coredumb whytewolf: where's your bed?
22:18 whytewolf bed?
22:18 schasi Well, on my non-busy hypervisor it takes ~100 seconds I guess.
22:19 whytewolf lol, jk. bed is off to the side closer to the window. along with the AC unit in that room
22:19 absolutejam niiiice
22:19 coredumb :D
22:20 coredumb schasi: seems like you need some upgrade then :D
22:21 schasi At least on the storage I guess. But nice to know that creating a VM so fast is possible :D
22:21 schasi How is your storage connected?
22:21 hammer065 joined #salt
22:21 whytewolf it is all internal storage.
22:21 schasi That would explain a lot ;-)
22:22 whytewolf well. not really. considering the glance is hosted on the three pizza boxes with the blue lights. and compute is the large monoliths below them.
22:23 whytewolf openstack uses a lot of iscsi
22:23 ssplatt joined #salt
22:23 dxiri_ joined #salt
22:25 schasi Where does Cinder run on?
22:26 shoemonkey joined #salt
22:27 schasi I'm just surprised that my VM creation+booting takes over a minute ;-)
22:27 usernkey joined #salt
22:28 whytewolf the cinder clients all run on those three blue light pizza boxes also. [all of the api services do]. but the storage for it is the nas just above the pizza boxes. [currently using cinder-nfs driver to mount the nfs storage locally to the cinder-volume space that then exports iscsi to the kvm hypervisor]
22:29 whytewolf but i also don't use cinder much. I use the ephemeral disk space which gets stored locally on the compute nodes
22:29 GMAzrael joined #salt
22:31 schasi Okay
22:33 whytewolf honestly now i just want to save for a load balancer. so that i don't have to do the same haproxy thing all the vendors do.
22:33 schasi Why not?
22:33 schasi And what are you using the setup for? Playing around and getting knowledgeable or a real usage?
22:34 whytewolf bit of a. bit of b. keeping fresh in my day job.
22:35 whytewolf also how i cut my teeth on salt.
22:36 schasi cool
22:42 vexati0n i swear syndic gets less usable by the day
22:48 schasi gnight everyone
22:48 whytewolf night schasi
22:52 johnj_ joined #salt
23:01 vtolstov joined #salt
23:02 vtolstov HI!
23:02 vtolstov What is the preferred way to import yaml data inside a state file?
23:02 whytewolf ?
23:02 vtolstov i have pillar data for each server with the mac address of its interfaces
23:02 vtolstov for example cn01: xxx cn02: xxx
23:03 whytewolf ok.
23:03 vtolstov each data placed on different pillar file corresponding to each minion id
23:03 vtolstov on boot server i need to gather this pillar data and construct dnsmasq dhcp config
23:03 vtolstov my problem that each pillar data passed to corresponding minion based on it id
23:04 vtolstov so boot server can get only boot server pillar data =)
23:04 edrocks joined #salt
23:04 vtolstov i found import_yaml in jinja
23:04 whytewolf yes.
23:04 vtolstov but i need to file.find all files on path, and import data =(
23:05 vtolstov does salt provide more beautiful methods for this use-case
23:06 whytewolf not really.
23:06 vtolstov @whytewolf: maybe i misuse pillar data and need to hold all mac addresses in one file?
23:07 vtolstov (but i think this is overhead, i have on each dc 100-300 servers and some vms)
23:07 whytewolf well. does any server need the mac address other than the boot server?
23:07 vtolstov @whytewolf - yes, because i pass hostname when node boots up and this hostname = minion id
23:08 vtolstov i can't distinguish nodes and have only mac address
23:08 whytewolf ???
23:08 vtolstov after node boot up salt-minion pass it id to master and configured
23:08 whytewolf what server needs the mac address other than the boot server?
23:09 vtolstov @whytewolf - monitoring servers, and may be some salt-proxies
23:09 vtolstov that configures dlink/cisco switches
23:10 whytewolf so far it sounds like everything that needs the mac address needs the whole list.
23:11 vtolstov @whytewolf: i think so. i pass the mac address for each node in a different file because i think that this is logical =)
23:11 vtolstov each pillar file for each node =)
23:11 whytewolf why
23:12 whytewolf if everything that needs the mac address needs the whole list. then putting it in separate files does you no good. you are better off just building a list with a minion-id linked to a mac address
23:12 vtolstov because also this file contains other data like disk size and its serial (used to create symlinks, memory size and bank location)
23:14 vtolstov @whytewolf: may be use pillar stack ? so it included needed pillars ?
23:15 whytewolf pillar stack. or use the python render.
23:15 vtolstov ok thanks =)
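
A rough sketch of the single-map-file idea discussed above; the map path, node names and template are all hypothetical, and the json filter needs 2017.7 or newer. The map is a flat minion-id to mac YAML file shipped with the states, imported on the boot server with import_yaml and handed to a dnsmasq template:

    {# maps/macs.yaml would contain e.g.
       cn01: 'aa:bb:cc:dd:ee:01'
       cn02: 'aa:bb:cc:dd:ee:02'                 #}
    {% import_yaml 'maps/macs.yaml' as macs %}

    dnsmasq_dhcp_hosts:
      file.managed:
        - name: /etc/dnsmasq.d/dhcp-hosts.conf
        - source: salt://dnsmasq/files/dhcp-hosts.conf.jinja
        - template: jinja
        - context:
            macs: {{ macs | json }}

The template just loops over macs to emit dhcp-host=<mac>,<hostname> lines; pillarstack or an external pillar does the same job if the data should stay on the pillar side.
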
23:19 jas02 joined #salt
23:25 sh123124213 joined #salt
23:26 Hybrid joined #salt
23:26 Uni <3 pillarstack
23:27 Uni one of the best features someone should have told me about months ago
23:27 shoemonkey joined #salt
23:27 fatal_exception joined #salt
23:41 Hybrid joined #salt
23:51 babilen :D
23:53 johnj_ joined #salt
