
IRC log for #salt, 2016-09-20


All times shown according to UTC.

Time Nick Message
00:02 ninjada joined #salt
00:07 flowstate joined #salt
00:17 Mate joined #salt
00:17 Mate joined #salt
00:27 badon joined #salt
00:33 oida joined #salt
00:37 DEger joined #salt
00:38 fxhp joined #salt
00:38 ajw0100 joined #salt
00:47 voxpop joined #salt
00:49 scoates joined #salt
00:58 John_Kang joined #salt
01:00 PerilousApricot joined #salt
01:04 edrocks joined #salt
01:06 DEger joined #salt
01:09 rem5 joined #salt
01:12 sp0097 joined #salt
01:13 badon joined #salt
01:15 hasues joined #salt
01:15 hasues left #salt
01:17 mattbillenstein joined #salt
01:24 watersoul joined #salt
01:33 catpigger joined #salt
01:35 stooj joined #salt
01:47 ilbot3 joined #salt
01:47 Topic for #salt is now Welcome to #salt! | Latest Versions: 2015.8.12, 2016.3.3 | Support: https://www.saltstack.com/support/ | Logs: http://irclog.perlgeek.de/salt/ | Paste: https://gist.github.com/ (please don't multiline paste into channel) | See also: #salt-devel, #salt-offtopic | Ask with patience as we are volunteers and may not have immediate answers
01:59 flowstate joined #salt
02:02 pppingme joined #salt
02:04 mpanetta joined #salt
02:06 cyborg-one joined #salt
02:09 flowstate joined #salt
02:14 drawsmcgraw left #salt
02:17 onlyanegg joined #salt
02:17 subsignal joined #salt
02:29 subsignal joined #salt
02:30 rem5_ joined #salt
02:34 k_sze[work] joined #salt
02:34 ajw0100 joined #salt
02:54 bastiandg joined #salt
03:06 DEger joined #salt
03:08 oida joined #salt
03:08 onlyanegg joined #salt
03:12 malinoff joined #salt
03:21 VR-Jack joined #salt
03:23 edrocks joined #salt
03:26 DEger_ joined #salt
03:52 cmek joined #salt
03:52 voxpop joined #salt
03:56 oliver_are joined #salt
04:35 flowstate joined #salt
04:36 oida joined #salt
04:40 spuder joined #salt
04:45 onlyanegg joined #salt
04:54 rdas joined #salt
04:57 DEger joined #salt
05:01 DEger joined #salt
05:02 yuhlw_ joined #salt
05:06 DarkKnightCZ joined #salt
05:10 flowstate joined #salt
05:12 sh1znc joined #salt
05:21 cmek joined #salt
05:30 DEger joined #salt
05:31 oliver_are joined #salt
05:31 dimeshake joined #salt
05:42 DEger joined #salt
05:46 jxm_ joined #salt
05:54 bocaneri joined #salt
05:56 braneless joined #salt
06:01 narfology joined #salt
06:01 felskrone joined #salt
06:16 flowstate joined #salt
06:18 infrmnt joined #salt
06:30 djgerm joined #salt
06:32 djgerm Hello! I am trying to spin up machines in vsphere, specifying a single disk that is 1TB from a template with a small partition. Is there a way in salt-cloud to instantiate such that the filesystem takes up the whole disk, or should I just write a state for the partition?
06:32 djgerm (The issue being that on the new machine, though the disk is 1000GB, the partition and thus the filesystem is only as big as the one from the original template)
06:33 ninjada anybody have any examples for boto states setting up an AWS VPC with pub/priv subnets & NAT and route table?
06:47 mattbillenstein1 joined #salt
06:50 jeddi joined #salt
07:05 keimlink joined #salt
07:08 haam3r joined #salt
07:14 flowstate joined #salt
07:17 Elsmorian joined #salt
07:17 toanju joined #salt
07:22 oliver_are joined #salt
07:22 ronnix joined #salt
07:24 ninjada_ joined #salt
07:25 impi joined #salt
07:27 oliver_are joined #salt
07:28 edrocks joined #salt
07:30 mohae joined #salt
07:36 ninjada joined #salt
07:43 krymzon joined #salt
08:01 geomacy joined #salt
08:02 impi joined #salt
08:05 Rumbles joined #salt
08:08 fannet joined #salt
08:08 cmek joined #salt
08:15 flowstate joined #salt
08:16 s_kunk joined #salt
08:16 s_kunk joined #salt
08:17 netcho joined #salt
08:20 cmek joined #salt
08:23 TyrfingMjolnir joined #salt
08:28 TyrfingMjolnir joined #salt
08:30 irctc850 joined #salt
08:30 irctc850 hi there
08:30 irctc850 git.latest won't do a git pull , right ?
08:32 babilen Why not?
08:32 ivanjaros joined #salt
08:33 malinoff joined #salt
08:33 irctc850 I can send you the errors
08:33 babilen I don't need them ;)
08:33 babilen But git.latest would update the repository to HEAD
08:33 irctc850 this is the output : Repository would be updated to df65aa0, but there are uncommitted changes. Set 'force_reset' to True to force this update and discard these changes.
08:34 babilen Well, then you have uncommitted changes in your repository (so it wouldn't be a fast-forward merge)
08:34 irctc850 no I don't , otherwise I wouldn't post it here
08:34 babilen If you want to overwrite them set "force_reset: True", but I would recommend to investigate where those changes came from
08:34 irctc850 I have a fresh directory
08:34 irctc850 I ran the git.latest for the first time , it will clone it
08:35 babilen What does "git status" and "git pull" give you in that repository?
08:35 irctc850 and when I run it for the second time , and I didn't change a thing
08:35 irctc850 and this error will appear
08:35 babilen Yes, so *something* introduced changes
08:35 babilen You just have to find out what
08:36 irctc850 git pull says its already up to date
08:36 haam3r joined #salt
08:36 irctc850 but git.latest won't give me the same , it says you have uncommitted changes , blah blah
08:38 AndreasLutro no, git.latest is not *just* a git pull
08:38 sfxandy joined #salt
08:38 sfxandy hi guys.
08:39 irctc850 as far as I know , git.latest should do a git clone if there is nothing in the directory , and if there are files in it , do git pull on that directory
08:39 irctc850 right ?
08:40 irctc850 but it seems buggy , cuz I'm not changing a thing man ....
08:40 sfxandy got an issue with one of the minions, for some reason the highstate output appears in a much more verbose format now i.e. with run_num etc rather than the usual output.  any ideas?  there are no directives set in the /etc/salt/master or /etc/salt/minion that can be used to affect the output format/verbosity...
08:40 nkuttler irctc850: um, do you edit code on your minions?
08:40 irctc850 no
08:40 nkuttler irctc850: where are those uncommitted changes then?
08:41 AndreasLutro irctc850: no there's a lot more going on. as the state error is telling you, it won't allow uncommitted changes in the repo (even if git pull would work)
08:42 irctc850 ok nkuttler , tell me this , if I have uncommitted changes there , why when I do a git pull it works ?
08:42 irctc850 and it says already up-to-date
08:42 Hydrosine joined #salt
08:42 nkuttler git pull doesn't care if there are no commits
08:42 nkuttler (afaik, not a git expert)
08:42 nkuttler irctc850: git status
08:42 nkuttler err, s#commits#conflicts#
08:42 sfxandy incidentally its only happened after upgrading master and minions to salt 2016.3.3
08:43 sfxandy but not all minions have the same issue
08:44 irctc850 mine is 2016.3.3
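[Editor's note] For readers hitting the same error: a hedged sketch of the git.latest state under discussion, with force_reset enabled as the error message suggests (the repo URL and target path are placeholders):

```yaml
deploy_app_repo:
  git.latest:
    - name: https://example.com/app.git   # placeholder repo URL
    - target: /srv/app                    # placeholder checkout path
    - rev: master
    - force_reset: True   # discard uncommitted local changes instead of failing
```

As babilen says, it is usually better to find out what introduced the local changes before reaching for force_reset.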
08:49 DEger joined #salt
08:50 krymzon I'm quite confused by archive.extracted chown being tied to if_missing. Especially since I'm using a never-existing file there. Should I try to make sense of it, or just use file.directory(recurse) to chown?
08:51 haam3r joined #salt
09:05 free_beard joined #salt
09:07 free_beard hi guys, I'm trying to adapt a golang formula and I'm having trouble understanding the initialization of config here https://is.gd/d787A8 . Is there a way I could test that command in the console?
09:11 free_beard i could do salt '*' pillar.get key=golang:lookup , but I fail to string together the rest of the params
09:13 jamesp9 joined #salt
09:15 flowstate joined #salt
09:15 djgerm left #salt
09:17 AndreasLutro free_beard: no, not really
09:18 AndreasLutro but basically it takes defaults.yaml, merges your pillar and grains into it, picks/merges the values that correspond to your os family, then updates the archive_name and base_dir at the end
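[Editor's note] The merge logic AndreasLutro describes is the usual formula map.jinja pattern. A rough sketch; the dict keys and values here are hypothetical, not the golang formula's actual defaults:

```yaml
{# a hypothetical defaults dict keyed by os_family, as grains.filter_by expects #}
{% set defaults = {
    'Debian': {'base_dir': '/usr/local/go', 'archive_name': 'go.linux-amd64.tar.gz'},
    'RedHat': {'base_dir': '/usr/local/go', 'archive_name': 'go.linux-amd64.tar.gz'},
} %}

{# pick the values for this minion's os_family, then merge pillar overrides on top #}
{% set golang = salt['grains.filter_by'](defaults, grain='os_family',
                                         merge=salt['pillar.get']('golang:lookup', {})) %}
```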
09:31 haam3r joined #salt
09:39 nicksnick joined #salt
09:40 nicksnick hi, what are best practices to manage users via salt especially password changes?
09:41 nicksnick we have several admins for all our minions and the goal would be to have the same passwords on each minion. we want to avoid to have password hashes in our pillars
09:49 jamesp9 joined #salt
09:51 voxpop_ joined #salt
09:53 krymzon nicksnick: not sure about passwords, I prefer keys, but have a look at users-formula, it takes care of a lot of stuff nicely
09:53 N-Mi joined #salt
09:57 lubyou joined #salt
09:58 unusedPhD_ joined #salt
09:59 netcho joined #salt
10:00 shadoxx joined #salt
10:00 voxpop joined #salt
10:01 mjimeneznet joined #salt
10:02 wm-bot4 joined #salt
10:04 jamesog nicksnick: You can use GPG in pillars
10:04 haam3r joined #salt
10:04 nicksnick thanks guys, but how do you manage password changes... we want to avoid manually hacking in passwords...
10:05 jamesog Honestly, I don't use passwords. Key auth everywhere
10:06 honestly Thanks for fighting the good fight jamesog
10:07 AndreasLutro why do you want to avoid password hashes in pillars?
10:07 Reverend jamesog - do you use MFA with that?
10:07 Reverend for live servers, that is./
10:07 jamesog On bastion hosts, yes
10:07 Reverend GG. pk'
10:07 Reverend key pairs with MFA are where it's at
10:08 Reverend fuck all that password nonsense
10:09 nicksnick so admins log in using their key and i.e. sudo is configured with NOPASSWD?
10:09 nicksnick looking for the right concept here
10:13 manji nicksnick, yes
10:13 krymzon yes, that seems to be the general good practice
10:14 nicksnick ok thanks
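[Editor's note] A minimal sketch of the concept nicksnick arrives at (key-only admin logins with passwordless sudo); the username, key path, and group are placeholders:

```yaml
alice:
  user.present:
    - groups:
      - sudo

alice_key:
  ssh_auth.present:
    - user: alice
    - source: salt://users/keys/alice.pub   # placeholder public key file
    - require:
      - user: alice

/etc/sudoers.d/90-admins:
  file.managed:
    - contents: '%sudo ALL=(ALL) NOPASSWD: ALL'
    - mode: '0440'
    - user: root
    - group: root
```

As krymzon notes, the users-formula handles most of this for you.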
10:14 lubyou do environments truly isolate minions (with top_file_merging_strategy=same), or can any minion still access all files on the file server, including files from other environments?
10:14 flowstate joined #salt
10:15 babilen lubyou: Sure
10:16 haam3r joined #salt
10:16 lubyou babilen sure as in sure, they can still access all files?
10:16 babilen I really would recommend looking into different approaches for implementing "environments" in salt. If you want to model software development workflows you might want to use different masters per branch/environment
10:17 lubyou babilen im trying to serve different customers from one master
10:17 babilen You'll need different masters for that
10:17 lubyou from a management point of view that would create a tremendous overhead for us
10:17 babilen I haven't played with environments in a while (found them horribly frustrating) .. so .. you might want to check that yourself
10:18 babilen lubyou: We do the same and have them in syndic + salt-formula for configuration
10:18 babilen It's not that bad, just additional VMs
10:19 babilen Even if environments didn't allow you to access all master data, the environment setting can easily be changed on the minion and is therefore not secure
10:22 free_beard AndreasLutro: Thanks!
10:22 lubyou I was under the impression that the minion can be pinned to an environment via top file matching
10:22 krymzon well, the security-secret stuff should be in pillar anyway, so tight targeting of pillar and a little bit of care with salt-key should go a long way for that, won't it?
10:24 krymzon documentation does seem to confirm what lubyou is saying, "environment can be isolated on the minion side by statically setting it. Remember that the recommended way to manage environments is to isolate via the top file"
10:25 babilen lubyou: It's not pinned and that is not exclusive (minions can be in multiple environments at the same time)
10:25 babilen lubyou: But just test it .. create two environments and configure your minions .. run something like "salt-call cp.list_master" on the minion to check
10:26 babilen lubyou: I really haven't touched them as I found them to be horrible and not useful (in particular with GitFS where every branch becomes an environment)
10:26 babilen So I wouldn't say that I give authoritative information at the moment
10:27 krymzon also, I'm sure I know much less than babilen, I'm just glancing at the docs as the topic is of possible interest to me :)
10:28 babilen krymzon: People who have root on the minion would just change "environment: ourcustomer" to "environment: competitor" and steal all their code ;)
10:29 babilen I think this is more a "minion will, by configuration, only look at the configured environment" rather than "minion can't get hold of the environment at all"
10:29 lubyou babilen is right, I can list different environments with sudo salt-call cp.list_master saltenv=<environment>
10:29 babilen The latter is what you want for multi tenancy
10:29 lubyou right
10:30 babilen lubyou: Thank you for checking .. I wasn't entirely sure, but always thought that environments are more of a "if in doubt get stuff from ENV"
10:30 babilen setting
10:30 krymzon oh I see, thank you for clarifying it for me. So the 'isolation' docs talk about is easily overridable...
10:30 babilen That seems to be the case, yes
10:31 babilen We are using independent masters that are being configured with salt-formula
10:31 lubyou babilen in your setup, are you able to to manage all the clients on your top level master?
10:31 lubyou ah
10:31 lubyou ok
10:31 babilen lubyou: Yeah, syndic is nice
10:31 lubyou so in theory I could spin up one master per customer and make all the changes on my "main" master?
10:31 babilen probably ..
10:32 babilen I'm off for lunch now .. I recommend to play with it a little to get a feeling for it
10:32 lubyou "Each Syndic must provide its own file_roots directory. Files will not be automatically transferred from the Master node."
10:33 jfindlay joined #salt
10:44 haam3r joined #salt
10:45 teryx510 joined #salt
10:50 pcdummy joined #salt
10:50 pcdummy joined #salt
10:51 jamesp9 joined #salt
10:51 haam3r joined #salt
10:55 babilen lubyou: Separation!
10:56 lubyou babilen I think im beginning to understand how this is supposed to work :)
10:57 Rumbles joined #salt
11:14 DEger joined #salt
11:14 flowstate joined #salt
11:16 Rumbles joined #salt
11:18 SWA joined #salt
11:18 amcorreia joined #salt
11:18 rem5 joined #salt
11:20 SWA Hi, I want to use "salt '*' extfs.mkfs /dev/sdb1" but it seems I need to create sdb1 first.. is there a module for such manipulation?
11:21 johnkeates joined #salt
11:22 rem5 joined #salt
11:22 teryx510 SWA: I'm using the linux_lvm module. https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.linux_lvm.html
11:23 SWA we don't use LVM, it should be a pure ext4 partition :(
11:24 teryx510 There is a parted module. https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.parted.html
11:24 impi joined #salt
11:24 ronnix joined #salt
11:26 edrocks joined #salt
11:29 DEger joined #salt
11:33 SWA teryx510: tried it as well, i get a "ERROR: Invalid device passed to partition module.".. i think i still need to create the partition before the mkfs
11:35 psy0rz i want to do a yum upgrade on our dev environment every week, and then replay the transaction on production servers a few days later. how do i transfer the required yum transaction file to the other minion?
11:35 psy0rz is it possible to push it to the mine somehow?
11:35 DEger joined #salt
11:41 VR-Jack psy0rz: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.cp.html see salt.modules.cp.push
11:42 AndreasLutro psy0rz: I'd probably implement that outside of salt - mine isn't really meant for transient data like that
11:43 psy0rz VR-Jack that way it will be pushed to that directory's minion right?
11:43 psy0rz how to make it available for all minion?
11:43 psy0rz cp?
11:44 psy0rz AndreasLutro ok
11:44 VR-Jack It lets you push it from the minion to the master. Then the master has to move it locally to the salt:// fs so other minions can grab it
11:44 psy0rz yeah so i need a "hack" on the master
11:44 psy0rz i hoped to prevent that
11:44 dendazen joined #salt
11:44 VR-Jack reactor is probably the best option there.
11:45 VR-Jack catch event of push, and copy file in supposing it meets criteria. This is a security thing is why.
11:45 psy0rz the first quick and dirty way i thought of was just a cronjob on the master that pulls the file, distributes it to the correct minions and runs yum to replay the transaction
11:45 psy0rz reactor?
11:46 johnkeates plutonium.
11:46 VR-Jack https://docs.saltstack.com/en/latest/topics/reactor/
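[Editor's note] Roughly what VR-Jack is suggesting, assuming the dev minion fires a custom event after saving the transaction. The event tag, SLS name, and file path are all hypothetical, and cp.push additionally requires `file_recv: True` on the master:

```yaml
# /etc/salt/master.d/reactor.conf
reactor:
  - 'myorg/yum/transaction/saved':        # hypothetical custom event tag
    - /srv/reactor/fetch_transaction.sls
```

```yaml
# /srv/reactor/fetch_transaction.sls
# pull the transaction file from whichever minion fired the event
fetch_transaction:
  local.cp.push:
    - tgt: {{ data['id'] }}
    - arg:
      - /tmp/yum-transaction.txt          # hypothetical file path
```

The dev minion would fire the event with something like `salt-call event.send 'myorg/yum/transaction/saved'` after its weekly upgrade.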
11:46 psy0rz oh i didnt know about reactors..will get into that
11:47 psy0rz johnkeates the only way to know that you're joking is to google it. since salt uses all these silly names ;)
11:47 VR-Jack They can be very useful
11:47 psy0rz salt minies minion reactors grains
11:47 psy0rz *mines
11:47 johnkeates ;-)
11:47 psy0rz thx VR-Jack
11:47 johnkeates molten salt reactors!
11:47 psy0rz w00p
11:48 VR-Jack johnkeates, did we get minion reactors out yet?
11:48 johnkeates don't think we do
11:48 VR-Jack I've been away awhile. a long while
11:48 johnkeates at least not in stable
11:48 johnkeates but it might be in some super new release
11:48 johnkeates then again, my production env are stuck on 2016.3.x or something like that
11:48 johnkeates so maybe i just got tired of testing every release :p
11:49 VR-Jack I'm still on 2015.3 I think. lol
11:49 AndreasLutro that's fairly recent ;)
11:49 johnkeates haha
11:49 johnkeates yeah
11:49 west575 joined #salt
11:50 AndreasLutro we're on 2015.8, bugs stopping us from upgrading were only just fixed, might end up just jumping straight to 2016.9 if it comes out
11:50 VR-Jack minion reactors is the final glue I was looking for to have bidirectional communications between master/minion
11:50 Reverend channel - did the internet just derp, or is it UK only/
11:50 Reverend oh - google's DNS DID go down.
11:50 Reverend lol
11:51 johnkeates no derping here
11:51 Reverend sick - just us then :P lmao
11:52 VR-Jack google DNS is anycast, so no telling where that break is
11:52 johnkeates i got tired of never knowing if systems break due to my fault or some external fault
11:52 johnkeates so i started hosting my own stuff
11:52 johnkeates so if it breaks i always know who to scream at :p
11:52 johnkeates (i.e. a mirror)
11:53 Reverend well, we lost our primary line, then we lost our entire live network. :(
11:53 Reverend so we looked on twitter, and it looks like google DNS had literally just stopped.
11:53 johnkeates I once had a network fail because two virtual routers were migrated to the same hypervisor box and then it died
11:53 johnkeates that was fun.
11:55 johnkeates sometimes i just want to run away from it all
11:55 psy0rz google dns still fine here
11:57 VR-Jack johnkeates: I did. Left job, started own company, and moved out into the middle of nowhere. Love it.
11:58 johnkeates Also started my own company, 'twas a good start
11:58 johnkeates but not in the middle of nowhere just yet
11:58 johnkeates still battling dracs, ilo's and imm's making me come over to kick their reset buttons
11:59 VR-Jack Well, middle of nowhere is relative. 1 mile driveway, 2 miles to neighbor, 8 miles to first small town as the wireless flies.
11:59 johnkeates so being too far away is hard :p
11:59 johnkeates also i'm spoiled with 100mbit at home, making it hard to go back to 10mbit moving out of any city
11:59 teryx510 SWA: spun up a minion with an extra disk and was able to get it working with the following: salt-call --local partition.mklabel /dev/sdb gpt
11:59 teryx510 salt-call --local partition.mkpart /dev/sdb primary start=0% end=100%
11:59 teryx510 salt-call --local extfs.mkfs /dev/sdb1 fs_type=ext4
12:00 johnkeates woah, provisioning disk with salt are we?
12:00 johnkeates i still leave that up to preseeding
12:02 VR-Jack I kickstart/preseed mine
12:03 VR-Jack except the addon disks
12:04 JohnnyRun joined #salt
12:06 cyteen_ joined #salt
12:06 teryx510 I'm using kickstart to get the initial lvm going but then using the lvm module to get the rest of the way. SWA has a disk he wants to format but needed to create the partition first.
12:08 JohnnyRun joined #salt
12:15 mavhq joined #salt
12:16 SWA teryx510: i'll check that thx :)
12:16 teryx510 swa: no worries.
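[Editor's note] teryx510's three salt-call commands, translated into a hedged state sketch: module.run for the parted calls plus the blockdev.formatted state. The unless guards are rough placeholders to keep the parted calls from rerunning:

```yaml
label_sdb:
  module.run:
    - name: partition.mklabel
    - device: /dev/sdb
    - label_type: gpt
    - unless: parted /dev/sdb print | grep -q gpt

partition_sdb:
  module.run:
    - name: partition.mkpart
    - device: /dev/sdb
    - part_type: primary
    - start: '0%'
    - end: '100%'
    - unless: test -b /dev/sdb1
    - require:
      - module: label_sdb

/dev/sdb1:
  blockdev.formatted:
    - fs_type: ext4
    - require:
      - module: partition_sdb
```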
12:24 edrocks joined #salt
12:33 gheistbane joined #salt
12:34 gh34 joined #salt
12:35 gheistbane Hi.  I have a VM running salt minion that is only accessible via ssh.  I need to apply states to it.  I was able to get it to send its key to the master via forwarding 4506 back to the master (my laptop) and now I need to apply the states, but the same port is not working.  Has anyone done this before?
12:37 Tanta joined #salt
12:39 AndreasLutro you probably need to do the same with port 4505
12:39 AndreasLutro though personally I would just set up a private network for VMs and run the master in a VM as well
12:39 gheistbane I wish.
12:39 gheistbane yeah forwarding port 4505 doesnt seem to help.
12:39 gheistbane I have no network access or control.
12:40 flowstate joined #salt
12:42 gheistbane could I run the master and the minion on the same vm?
12:42 AndreasLutro sure
12:42 gheistbane that would solve it.
12:42 AndreasLutro is this for testing purposes or what
12:42 gheistbane dev
12:42 gheistbane so yeah in essence testing.
12:42 AndreasLutro that's fine then. you could also use a masterless minion
12:43 gheistbane can you apply states on a masterless minion?
12:43 AndreasLutro yep
12:43 gheistbane thanks I will google that.
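[Editor's note] The masterless setup AndreasLutro suggests needs only a small minion config change; a sketch using the conventional default paths:

```yaml
# /etc/salt/minion.d/masterless.conf
file_client: local      # serve states from the local filesystem, no master needed

file_roots:
  base:
    - /srv/salt

pillar_roots:
  base:
    - /srv/pillar
```

States are then applied with `salt-call --local state.apply`.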
12:43 flowstat_ joined #salt
12:44 s_kunk joined #salt
12:46 SWA teryx510: thanks it works!
12:53 JohnnyRun joined #salt
12:54 numkem joined #salt
13:00 Rumbles joined #salt
13:04 gheistbane joined #salt
13:04 gheistbane thanks, the masterless minion was absolutely the way to go.
13:06 ronnix_ joined #salt
13:06 gheistbane has anyone seen any good guides for using salt for devops in a large multi-tier environment?
13:09 Rumbles joined #salt
13:14 bowhunter joined #salt
13:15 racooper joined #salt
13:24 toastedpenguin joined #salt
13:31 dyasny joined #salt
13:31 ssplatt joined #salt
13:31 avozza joined #salt
13:35 Klas joined #salt
13:36 Klas hello, trying to get a handle if saltstack would be right for us, particulary the OSS version, is there a comparison between the different versions somewhere?
13:36 AndreasLutro there is only the OSS version, enterprise is just a web GUI for it
13:37 Klas ah, nice =)
13:38 angvp joined #salt
13:38 Klas we are very early in the process of migrating from our homebrewed orchestration toolkit to salt, ansible, puppet or chef, or a combination ;)
13:39 impi joined #salt
13:40 Klas salt seems to focus primarily on larger infrastuctures than we are handling though (we handle about a 1000 nodes or so)
13:41 AndreasLutro dunno about that, salt has in my experience been very easy to set up in small environments
13:43 Shirkdog joined #salt
13:43 Shirkdog joined #salt
13:45 perfectsine joined #salt
13:46 onlyanegg joined #salt
13:49 teryx5101 joined #salt
13:50 Klas from brief overviews, salt does seem like a really cool option
13:52 JohnnyRun joined #salt
13:52 jamesog Salt works great in small environments. The master does need tuning in larger environments though
13:53 Klas from my overview, it basically looks like ansible done right
13:53 jamesog That's how I view it :-)
13:53 Klas hehe, not too far off then
13:53 jamesog I found Salt much easier to use for homogenous environments. Ansible's 'group_by' didn't really cut it for me
13:58 bluenemo joined #salt
13:59 _JZ_ joined #salt
13:59 dendazen joined #salt
14:03 subsignal joined #salt
14:05 mantas776 joined #salt
14:07 iggy gheistbane: there's also salt-ssh if you only have ssh access
14:09 gheistbane yeah I read about that too.  I went with the masterless setup.  Much easier.  It turns out, we can't even use this server without a firewall change, which takes too long, so it is getting decommed.
14:10 saltsalt joined #salt
14:10 Tanta_G joined #salt
14:10 gheistbane my kingdom for a more agile environment.
14:10 iggy Klas: one of the really nice things about salt that I have yet to really see anywhere else is the concept of event driven config mgmt... you can have a minion fire an event when something happens, that event goes up to the master which can handle it with a reactor, that reactor can then fire off other bits of work to any number of systems
14:11 Klas cool
14:12 gheistbane iggy: could you have it monitor a github repo for commits and have reactor redeploy?
14:12 iggy also, with salt-api + reactor, you can do lots of neat things too
14:13 whytewolf gheistbane: yes. actually you can have github send webhooks that trigger salt-api to fire an event that can trigger redeploys
14:13 iggy gheistbane: yeah, that's where salt-api would come in... github repo does a webhook to the master, reactor runs orchestrate job to update git checkout and run highstate everywhere
14:13 gheistbane now that is interesting.  I may have to try that out.
14:14 iggy at old job, we went that way instead of gitfs, because we (stupidly maybe) had some huge git repos that couldn't finish updating in the minute between gitfs refreshes
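[Editor's note] A rough sketch of the GitHub-webhook flow whytewolf and iggy describe, assuming the rest_cherrypy /hook endpoint; the hook path and reactor SLS name are placeholders:

```yaml
# /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/netapi/hook/github/push':   # fired when GitHub POSTs to /hook/github/push
    - /srv/reactor/redeploy.sls
```

```yaml
# /srv/reactor/redeploy.sls
# rerun highstate everywhere after a push event
run_highstate_everywhere:
  local.state.highstate:
    - tgt: '*'
```

A real setup would also verify the webhook signature and update the git checkout (e.g. via an orchestrate job) before the highstate.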
14:17 west575_ joined #salt
14:18 DarkKnightCZ joined #salt
14:21 antani joined #salt
14:22 dyasny joined #salt
14:26 ub1quit33 Anyone know how to specify multiple IP addresses to assign an interface under network.managed?
14:27 ub1quit33 the documentation seems to indicate that a parameter of 'ipaddrs' should work
14:27 ub1quit33 but when I try that, the state fails with the error "[DEBUG   ] output: Missing required variable: address"
14:27 abednarik__ joined #salt
14:27 mohae joined #salt
14:27 antani joined #salt
14:28 ronnix joined #salt
14:29 ub1quit33 I actually don't see "address" as a parameter used anywhere in the network.managed documentation..
14:30 mpanetta joined #salt
14:35 gheistbane you want 1 interface with 2 ips?
14:35 numkem joined #salt
14:36 beardedeagle joined #salt
14:38 gheistbane can you do something like this:
14:38 gheistbane IPADDR1="192.168.1.151"
14:38 gheistbane IPADDR2="192.168.1.152"
14:38 gheistbane ?
14:38 gheistbane that works as far as the Linux OS is concerned.
14:38 gheistbane not sure about salt though.
14:39 jamesog I've had mixed experiences with the network state module. It's not well documented, partly because under the hood it depends which OS you're using and the different execution modules seem to work very differently
14:39 _JZ_ joined #salt
14:39 spuder joined #salt
14:40 gheistbane you could always write a shell script to do it and use cmd.run to execute it.
14:40 gheistbane or use file.managed on the interface file
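[Editor's note] For reference, the shape ub1quit33 is describing, per the network.managed docs he cites. As jamesog notes, behavior varies by OS under the hood, so treat this as a sketch with placeholder addresses:

```yaml
eth1:
  network.managed:
    - enabled: True
    - type: eth
    - proto: static
    - ipaddrs:
      - 192.168.1.151/24
      - 192.168.1.152/24
```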
14:41 ivanjaros joined #salt
14:41 dyasny joined #salt
14:42 khaije1 joined #salt
14:46 Renich joined #salt
14:50 armonge joined #salt
14:50 BattleChicken joined #salt
14:50 RandyT good day everyone
14:50 cmarzullo good day sir
14:51 RandyT could anyone tell me if there is an alternative to wtmp/btmp beacons for windows minions?
14:51 RandyT not finding it in my search...
14:52 RandyT winsalt_: ?? any feedback on that one ^
14:54 catpig joined #salt
14:54 iggy doubtful
14:54 iggy beacons are fairly new, don't know how many people are using them on Windows
14:55 winsalt_ i have no idea
14:55 RandyT tks guys, beacons are a beautiful thing. I am using them to monitor services running on some windows minions, but not finding solution for this challenge.
14:56 winsalt_ you might have to make your own
14:56 RandyT will have to work on my windoze foo to figure out what I need to monitor to accomplish that. :-)
14:57 RandyT eventlog I would guess...
14:58 Harry` joined #salt
14:59 perfectsine joined #salt
15:02 ponyofdeath joined #salt
15:03 kusen joined #salt
15:03 DEger joined #salt
15:05 JPT joined #salt
15:05 XenophonF so with the gpg renderer, i can only encrypt string values, right?
15:06 beowuff joined #salt
15:06 XenophonF so like, if i have a list, i'd have to encrypt each list item singly
15:06 XenophonF right?
15:12 sp0097 joined #salt
15:13 Brew joined #salt
15:16 tapoxi joined #salt
15:16 jimklo joined #salt
15:18 ub1quit33 gheistbane: thanks for the help m8.. I figured out a way to do 2 separate interfaces instead
15:23 gheistbane ok :)
15:24 spuder joined #salt
15:25 heewa joined #salt
15:25 q1x joined #salt
15:26 cyteen_ joined #salt
15:27 jimklo joined #salt
15:28 jimklo joined #salt
15:32 ivanjaros3916 joined #salt
15:32 Harry` left #salt
15:33 Renich joined #salt
15:33 orionx joined #salt
15:37 west575 joined #salt
15:38 DarkKnightCZ joined #salt
15:44 ronnix_ joined #salt
15:47 ageorgop joined #salt
15:47 flowstate joined #salt
15:50 tiwula joined #salt
15:50 DEger joined #salt
15:51 DEger joined #salt
15:51 ozux joined #salt
15:58 flowstate joined #salt
16:01 flowstat_ joined #salt
16:03 flowstate joined #salt
16:04 jhauser_ joined #salt
16:07 spuder joined #salt
16:08 hasues joined #salt
16:08 hasues left #salt
16:11 cyborg-one joined #salt
16:15 cmarzullo XenophonF: you can encode whole blocks of yaml.
16:17 cmarzullo When rendered it'll replace the gpg block with the yaml block.
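[Editor's note] For the simple case XenophonF asks about, a per-item sketch of a gpg pillar file; the key name is a placeholder and the ciphertext bodies are elided:

```yaml
#!yaml|gpg

api_keys:            # placeholder pillar key
  - |
    -----BEGIN PGP MESSAGE-----
    ...ciphertext for the first item...
    -----END PGP MESSAGE-----
  - |
    -----BEGIN PGP MESSAGE-----
    ...ciphertext for the second item...
    -----END PGP MESSAGE-----
```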
16:20 spuder joined #salt
16:21 djgerm joined #salt
16:22 djgerm Hello! I was wondering if there were a good way of partitioning new disks to use all available, but optimized
16:22 avozza joined #salt
16:25 wendall911 joined #salt
16:28 jimklo joined #salt
16:28 spuder_ joined #salt
16:28 jimklo joined #salt
16:31 ozux joined #salt
16:32 ajw0100 joined #salt
16:32 edrocks joined #salt
16:36 kusen joined #salt
16:38 kusen_ joined #salt
16:42 ozux joined #salt
16:45 impi joined #salt
16:45 woodtablet joined #salt
16:48 queso What does SLS stand for?
16:48 djgerm SaLt State
16:52 edrocks joined #salt
16:54 queso Thanks
16:58 smcquay joined #salt
17:03 pcn hey, does the nginx module work on ubuntu 14.04 or is it specific to systemd distros?
17:04 bowhunter joined #salt
17:04 gheistbane yes it does
17:04 gheistbane wait ... oh, Idk
17:04 gheistbane sorry
17:08 onlyanegg joined #salt
17:08 fannet_ joined #salt
17:08 aagbds joined #salt
17:10 fannet__ joined #salt
17:12 flowstate joined #salt
17:13 pcn it looks like it's checking for systemctl and failing, then saying that the virtual method returned False even though initctl is found
17:16 upb joined #salt
17:17 randomword joined #salt
17:17 spuder joined #salt
17:24 pipps joined #salt
17:27 flowstate joined #salt
17:29 sjmh joined #salt
17:31 west575 joined #salt
17:33 ozux joined #salt
17:36 beardedeagle joined #salt
17:39 randomword hello
17:39 randomword can I use file.blockreplace to replace a block within a file with the contents of another file?
17:39 orionx_ joined #salt
17:42 haam3r joined #salt
17:43 orionx joined #salt
17:50 haam3r joined #salt
17:53 randomword anyone?
17:53 foundatron joined #salt
17:56 haam3r joined #salt
17:57 spuder joined #salt
18:00 spuder_ joined #salt
18:07 haam3r joined #salt
18:07 subsignal joined #salt
18:11 cmarzullo maybe. it's almost always advisable to manage the whole file though.
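[Editor's note] To randomword's question: the file.blockreplace state's signature includes a source argument alongside content, so something like this sketch should pull the block body from another file. Paths and markers are placeholders; verify against your Salt version's docs:

```yaml
/etc/hosts:
  file.blockreplace:
    - marker_start: '# BEGIN SALT MANAGED BLOCK'
    - marker_end: '# END SALT MANAGED BLOCK'
    - source: salt://files/hosts-block.txt   # placeholder fragment file
    - append_if_not_found: True
```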
18:12 bowhunter joined #salt
18:13 berserk joined #salt
18:16 djgerm what's the best way to salt-cloud from vsphere templates and grow the first hard disk?
18:20 ivanjaros joined #salt
18:28 teryx510 joined #salt
18:32 fannet joined #salt
18:36 DEger joined #salt
18:37 spuder joined #salt
18:38 spuder_ joined #salt
18:44 foundatron joined #salt
18:45 foundatron Hi, does anyone know how to test for the presence of a grain in salt a state? basically is if this grain exists, do this thing.
18:45 foundatron I found this https://docs.saltstack.com/en/latest/ref/states/all/salt.states.grains.html
18:46 foundatron but that seems more like setting and deleting grains
18:46 ssplatt set g_ = grains[‘thing’]       if g_ ...
18:48 Edgan foundatron: Do you want it to silently pass on non-existence or error on non-existence?
18:48 foundatron yes
18:48 foundatron I want to silently pass
18:49 Edgan {% if salt['grains.get']('foo) %}
18:49 Edgan I meant
18:49 Edgan {% if salt['grains.get']('foo') %}
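[Editor's note] Edgan's one-liner in context, as a hedged SLS sketch; the grain name and state body are placeholders. grains.get returns an empty default when the grain is missing, so the guard passes silently, which is what foundatron wants:

```yaml
{# 'cloud_provider' is a hypothetical grain name #}
{% if salt['grains.get']('cloud_provider') %}
ec2_default_user:
  user.present:
    - name: ec2-user
{% endif %}
```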
18:49 babilen foundatron: In some cases it might be better to simply not target the state to minions with that grain
18:49 babilen (or rather: SLS)
18:50 Edgan babilen: you mean in the top or init?
18:50 foundatron I agree, with that. basically we have a multi cloud deployment
18:50 foundatron and I'm testing for if it's ec2, or google, or our bare metal
18:50 babilen Edgan: I was thinking of top.sls
18:50 foundatron so, it's the same role in each cloud
18:51 babilen ah .. roles .. sure
18:51 * babilen doesn't like grains for roles, but don't let that stop you :)
18:51 Edgan babilen: I love them
18:51 foundatron well, roles isn't the right word
18:52 foundatron basically, ec2 has some default cloud users, which are different than google cloud, which are different than our bare metal stuff
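Edgan's `grains.get` check above silently falls back to a default when the grain is absent, which is what foundatron asked for. A sketch of the per-cloud branching foundatron describes, assuming a hypothetical custom grain named `cloud_provider` and hypothetical user names:

```jinja
{# Pick the default cloud user off a custom grain; the fallback value
   covers minions (e.g. bare metal) where the grain is not set at all. #}
{% set provider = salt['grains.get']('cloud_provider', 'baremetal') %}
{% if provider == 'ec2' %}
{%   set default_user = 'ec2-user' %}
{% elif provider == 'gce' %}
{%   set default_user = 'google-user' %}
{% else %}
{%   set default_user = 'admin' %}
{% endif %}

default_cloud_user:
  user.present:
    - name: {{ default_user }}
```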
18:52 babilen Edgan: Why? It really doesn't make sense to manage them in a distributed manner that's quite hard to maintain later on (How do you manage those grains? Why don't you use the same logic to assign roles to begin with?)
18:52 Edgan babilen: role-deployment-subrole-oscodename-01.env.region.provider.root_domain, and then turn them all into grains
18:53 babilen Edgan: You could easily assign pillars based on that or even just target SLS based on the naming scheme directly .. I see no reason whatsoever for maintaining insecure local state on the minion :)
18:54 Edgan babilen: these are custom grains, not /etc/salt/grains
18:54 Edgan babilen: I hate /etc/salt/grains
18:54 bluenemo joined #salt
18:54 Edgan babilen: pillars hold secrets and I don't match secrets against grains
18:54 babilen Edgan: That makes it a lot better, but is still insecure (which doesn't have to be a problem)
18:55 Edgan babilen: My key system is secure
18:55 Edgan babilen: I don't auto accept
18:55 Edgan babilen: So an instance can't change its name
18:55 babilen Yeah, I'm just saying that you can target SLSs based on pillars just like you can target them based on grains, and I *much* prefer to keep those role/minion-SLS assignments in a central and secure location
18:55 sagerdearia joined #salt
18:56 babilen Edgan: Sure, but if you have malicious operators on the minion they might trick the master into deploying states that shouldn't be deployed there
18:56 babilen This is the area where one really has to look at the details
18:56 Edgan babilen: states yes, but not secrets, so who cares?
18:56 Edgan babilen: Sounds more DoS
18:58 Edgan babilen: I want things more auto managed. I already have to maintain too much in pillars.
18:58 babilen It's fine if you have some programmatic way to manage them (and you seem to), but I'd much rather update my pillars and run refresh_pillar than have to roll out individual grains manually or based on some other targeting scheme
18:59 babilen Edgan: If your naming scheme is easy to handle then it makes perfect sense .. but then I wasn't really talking about custom grains (such as ec2 tags as well), but the fact that a lot of people manually set grains in /etc/salt/grains or the minion config
18:59 babilen That's just hard to manage and doesn't buy you anything over keeping the same information in a central location (e.g. an external pillar)
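The grain-vs-pillar targeting babilen and Edgan are debating can be shown side by side in a `top.sls`. A hedged sketch with hypothetical role values; the grain match trusts data the minion reports about itself, while the pillar match only uses assignments held on the master:

```yaml
base:
  # Grain-based: the minion declares roles:web about itself.
  'roles:web':
    - match: grain
    - webserver

  # Pillar-based: the master assigned roles:db to this minion.
  'roles:db':
    - match: pillar
    - database
```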
18:59 Edgan babilen: yeah, I think /etc/salt/grains, grains in states, etc are a mostly bad idea
18:59 markm__ joined #salt
19:00 babilen That was mainly my point .. I'm aware that the devil is in the details
19:02 mikea- joined #salt
19:02 babilen I mean your custom grain is just one way of programmatically splitting some data (the minion_id in this case) and using it in salt. The fact that these are grains doesn't really make a difference to you or the minion, as you never work with the grains directly (you don't set them explicitly; they are derived from other data points)
19:03 mikea Is it me or is there nearly zero documentation on creating custom salt engines?
19:03 babilen I'm sure it's not just you
19:05 Edgan mikea: I have multiple issues open about how poorly implemented they are
19:06 mikea I have a process right now that monitors the event bus and pushes crap into a database
19:06 Edgan mikea: There are at least four different ways to set a grain, and the order of precedence is illogical.
19:06 mikea I want to make it a salt engine, and it seems straight forward enough
19:06 JPT I know we use a salt engine to provide a local interface on each minion that enables us to push custom events into the salt event bus
19:07 JPT Also, the example engine linked in the docs seems straight forward about what an engine is
19:07 mikea is that not accurate?
19:09 mikea one of my questions is that in my current code I am getting the event bus via event = salt.utils.event.SaltEvent(node, sock_dir)
19:09 mikea but as an engine I'll have access to all of the data from the master, is there already an event bus open that I can attach to?
19:10 JPT Salt already has its own event bus :)
19:11 JPT So, yes, you should be able to access it *digs through docs*
19:12 haam3r joined #salt
19:12 JPT https://github.com/saltstack/salt/blob/develop/salt/engines/test.py <-- The example engine already accesses the salt event bus
19:13 mikea yeah, okay, so I still need to call  event.saltevent
19:13 JPT depending on what you want to achieve, yes.
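Pulling the thread together for mikea's case: an engine is just a module exposing a `start()` function, and on the master it can attach to the already-running event bus via `salt.utils.event.get_event` instead of constructing `SaltEvent` by hand. A hedged sketch (the `event_to_row` helper and the DB step are hypothetical; `__opts__` is injected by Salt's loader, and the import guard only exists so the helper can be exercised outside a Salt install):

```python
# Minimal master-side engine sketch: consume the event bus and hand
# each event to a storage step.
try:
    import salt.utils.event
    HAS_SALT = True
except ImportError:
    HAS_SALT = False

__opts__ = {}  # placeholder; Salt's loader patches in the real opts dict


def event_to_row(evt):
    """Flatten a raw event dict into a (tag, data) pair for storage."""
    return (evt.get('tag', ''), evt.get('data', {}))


def start():
    """Entry point Salt calls; loops on the master's event bus forever."""
    event_bus = salt.utils.event.get_event(
        'master',
        sock_dir=__opts__['sock_dir'],
        opts=__opts__,
        listen=True,
    )
    while True:
        evt = event_bus.get_event(full=True)
        if evt:
            tag, data = event_to_row(evt)
            # insert (tag, data) into your database here
```

Configured on the master with something like `engines: [{myengine: {}}]` in the master config, with the module dropped into an `engines_dirs` path.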
19:14 KajiMaster joined #salt
19:20 mikea dumb question, is there a way to make pycharm understand that __opts__ really does exist?
19:20 toanju joined #salt
19:20 ozux joined #salt
19:21 perfectsine joined #salt
19:21 patrek joined #salt
19:22 teryx510 joined #salt
19:23 babilen mikea: __opts__ = None at the beginning (will be monkey-patched later on)
19:23 babilen Or check if you can override those warnings for a specific set of dunder variables
19:23 Trauma joined #salt
19:26 bltmiller joined #salt
19:29 ajw0100 joined #salt
19:31 keimlink joined #salt
19:31 Trauma_ joined #salt
19:32 spuder joined #salt
19:33 spuder_ joined #salt
19:36 haam3r joined #salt
19:43 pipps joined #salt
19:54 mavhq joined #salt
20:04 pipps joined #salt
20:13 heewa joined #salt
20:14 armonge joined #salt
20:16 kaak joined #salt
20:18 spuder joined #salt
20:19 spuder_ joined #salt
20:24 fannet joined #salt
20:30 bowhunter joined #salt
20:41 haam3r joined #salt
20:50 raspado joined #salt
20:53 netcho joined #salt
20:55 pfallenop joined #salt
20:56 heewa joined #salt
20:58 spuder joined #salt
20:59 pipps joined #salt
21:02 cDR joined #salt
21:02 pipps99 joined #salt
21:05 spuder_ joined #salt
21:09 heewa joined #salt
21:09 mcor joined #salt
21:11 mcor hi all.  I recently started a new role at a company that uses SaltStack for configuration management, and some inhouse tools for orchestration.
21:14 mcor I've been diving into salt and its capabilities a bit more in my free time, and I'm interested in demoing its orchestration abilities... after going over the docs, though, it looks like there are probably a number of ways to accomplish the task I want to try, and I'm not sure about the best place to start.
21:14 mcor does anyone here have any experience using saltstack to automate cassandra cluster management tasks?
21:19 ageorgop joined #salt
21:26 cyborg-one joined #salt
21:29 fyb3r_ joined #salt
21:29 eichiro joined #salt
21:30 darvon joined #salt
21:38 pipps joined #salt
21:38 toanju joined #salt
21:39 pipps joined #salt
21:45 pipps joined #salt
21:46 DEger_ joined #salt
21:53 west575 joined #salt
21:56 fannet joined #salt
21:57 fannet_ joined #salt
22:01 flowstate joined #salt
22:03 cro joined #salt
22:08 pipps joined #salt
22:10 flowstat_ joined #salt
22:18 spuder joined #salt
22:19 spuder_ joined #salt
22:21 flowstate joined #salt
22:24 regretio is there a workaround for custom master_tops which don't work with salt-ssh?
22:26 flowstat_ joined #salt
22:30 flowstate joined #salt
22:34 flowstate joined #salt
22:39 foundatron joined #salt
22:53 hasues joined #salt
22:53 hasues left #salt
22:56 hoonetorg joined #salt
23:09 voileux_ joined #salt
23:19 hoonetorg joined #salt
23:27 jeddi joined #salt
23:27 flowstate joined #salt
23:31 hoonetorg joined #salt
23:36 flowstate joined #salt
23:37 amcorreia joined #salt
23:38 flowstat_ joined #salt
23:52 om joined #salt
