
IRC log for #salt, 2017-06-13


All times shown according to UTC.

Time Nick Message
00:10 iggy why can't show_highstate just show things in the order it's going to put them...
00:11 nicksloan joined #salt
00:11 whytewolf isn't that part of what lowstate is for?
00:13 woodtablet left #salt
00:13 iggy highstate is easier to read imo, but yeah, I guess that's workable
00:14 whytewolf highstate is easier to read. but i don't think order actually has been determined by that point
00:15 iggy I wish lowstate was easier to read ?
00:16 iggy I mean it knows the order because it says the order, it's just not sorted by the order
00:16 iggy could probably do something with jq, but cba
00:26 monjwf joined #salt
00:35 druonysus joined #salt
00:35 mosen joined #salt
00:40 druonysus left #salt
00:40 druonysus joined #salt
00:48 fritz09 joined #salt
00:53 spartakos joined #salt
00:53 cockosho joined #salt
00:53 brocka joined #salt
00:53 kela1 joined #salt
00:53 phorike joined #salt
00:57 nicksloan joined #salt
01:05 cyborg-one joined #salt
01:10 edrocks joined #salt
01:11 cliluw joined #salt
01:28 cliluw joined #salt
01:36 mosen joined #salt
01:39 mikecmpbll joined #salt
01:48 ilbot3 joined #salt
01:48 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.6, 2016.11.5 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers <+> The call for speakers for SaltConf17 is now open: http://tinyurl.com/SaltConf17
02:00 druonysuse joined #salt
02:15 zerocoolback joined #salt
02:15 onlyanegg joined #salt
02:21 Sense8 joined #salt
02:23 mpanetta joined #salt
02:28 nicksloan joined #salt
02:36 asyncsec joined #salt
02:37 nafg joined #salt
02:40 cyborg-one joined #salt
02:41 nicksloan joined #salt
02:47 onlyanegg joined #salt
02:54 seffyroff joined #salt
02:55 nicksloan joined #salt
03:05 Praematura joined #salt
03:06 nicksloan joined #salt
03:07 rick_ joined #salt
03:20 rem5 joined #salt
03:35 donmichelangelo joined #salt
03:37 Praematura joined #salt
03:45 patrek joined #salt
04:03 StrikerST joined #salt
04:04 StrikerST QQ, is it possible to use the Result of a cmd.run state in an SLS file? If so is there an example... I basically want to use JINJA templates based off of this state's return value ?
04:11 whytewolf no, it isn't possible. states happen after jinja has already rendered
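A minimal sketch of the render-time alternative to what StrikerST asked for: since Jinja runs before any state executes, a state's result can't feed the template, but an execution-module call made from Jinja can. The command and file paths here are hypothetical.

```yaml
{# Jinja renders first, so call the execution module at render time
   instead of consuming a cmd.run *state* result. Command and paths
   are illustrative. #}
{% set app_version = salt['cmd.run']('/usr/local/bin/app --version') %}

{% if app_version.startswith('2.') %}
app_v2_config:
  file.managed:
    - name: /etc/app/app.conf
    - source: salt://app/files/app-v2.conf
{% endif %}
```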
04:11 onlyanegg joined #salt
04:12 edrocks joined #salt
04:13 nicksloan joined #salt
04:19 StrikerST joined #salt
04:40 ronnix joined #salt
05:28 h32Lg joined #salt
05:34 felskrone joined #salt
05:35 NightMonkey joined #salt
05:37 NightMonkey joined #salt
05:40 onlyanegg joined #salt
05:48 fracklen joined #salt
06:04 do3meli joined #salt
06:04 do3meli left #salt
06:05 colttt joined #salt
06:05 [CEH] joined #salt
06:07 onlyanegg joined #salt
06:15 mpanetta_ joined #salt
06:16 ronnix joined #salt
06:16 impi joined #salt
06:27 sgo_ joined #salt
06:28 Tgrv joined #salt
06:39 mugsie joined #salt
06:39 mugsie joined #salt
06:44 ravenx joined #salt
06:44 ravenx hey guys, i'm using these pillar roles atm:
06:44 ravenx https://docs.saltstack.com/en/latest/topics/tutorials/states_pt4.html
06:45 k_sze[work] joined #salt
06:45 ravenx and my question is:  if i have two versions of webserver.foobarcom (one for prod, one for qa)
06:45 ravenx how can i specify, based on roles, what to apply?
06:46 ravenx for example:  /srv/salt/prod/webserver/foobarcom.sls    and /srv/salt/qa/webserver/foobarcom.sls
06:47 ravenx mine only ever runs the prod one, despite passing it: salt --pillar 'webserver_role:dev'
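One way ravenx's prod/qa split could be handled, sketched under the assumption that the role lives in pillar: select the include path at render time and default to prod. Note the `salt` CLI has no `--pillar` flag; the pillar override is passed as a `pillar=` kwarg to state.apply.

```yaml
{# Hypothetical webserver/init.sls: pick the prod or qa variant of the
   state tree from a pillar value, falling back to prod. #}
{% set role = salt['pillar.get']('webserver_role', 'prod') %}

include:
  - {{ role }}.webserver.foobarcom
```

Invoked, under those assumptions, as `salt '*' state.apply webserver pillar='{"webserver_role": "qa"}'`.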
06:50 ronnix joined #salt
07:00 ivanjaros joined #salt
07:04 patrek joined #salt
07:06 sgo_ joined #salt
07:09 sjorge joined #salt
07:11 AndChat345984 joined #salt
07:12 johnkeates joined #salt
07:12 aldevar joined #salt
07:15 edrocks joined #salt
07:15 gnomethrower joined #salt
07:16 xet7 joined #salt
07:17 asyncsec joined #salt
07:18 dyasny joined #salt
07:20 o1e9 joined #salt
07:21 k_sze[work] joined #salt
07:24 Deliant joined #salt
07:30 qwertyco joined #salt
07:36 mikecmpbll joined #salt
07:38 impi joined #salt
07:38 onlyanegg joined #salt
07:40 pbandark joined #salt
07:40 Remo joined #salt
07:48 zer0def joined #salt
07:49 Hybrid joined #salt
07:58 qwertyco joined #salt
08:01 demize joined #salt
08:01 Xevian joined #salt
08:01 darioleidi joined #salt
08:02 kshlm joined #salt
08:02 defswork joined #salt
08:05 felskrone1 joined #salt
08:05 LondonAppDev joined #salt
08:06 preludedrew joined #salt
08:11 bdrung_work joined #salt
08:13 onovy joined #salt
08:16 oida_ joined #salt
08:18 samodid joined #salt
08:28 Mattch joined #salt
08:28 fracklen joined #salt
08:30 GnuLxUsr joined #salt
08:41 Praematura joined #salt
08:43 teclator joined #salt
08:50 pbandark i would like to:  "1. start mongo service --> 2. add mongodb user --> 3. modify mongod configuration --> 4. restart mongod service.". I have used "service.running" to restart mongod service. I was planning to use the "watch" requisite in order to start/restart mongod at steps 1 and 4. But, from "highstate" I can see, mongod is getting started once at the end. Is it possible to achieve what I am looking for with the "watch" requisite ?
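A hedged sketch of one way to get pbandark's start -> configure -> restart sequence: the same `service.running` ID can't appear twice in one run, so the final restart is expressed with `module.run` plus `onchanges` rather than `watch`. The user-creation command and file paths are illustrative, not pbandark's actual states.

```yaml
mongod_start:
  service.running:
    - name: mongod

mongod_user:
  cmd.run:
    - name: mongo admin --eval 'db.createUser(...)'   # illustrative command
    - require:
      - service: mongod_start

mongod_config:
  file.managed:
    - name: /etc/mongod.conf
    - source: salt://mongo/files/mongod.conf
    - require:
      - cmd: mongod_user

mongod_restart:
  module.run:
    - name: service.restart
    - m_name: mongod   # m_name because module.run reserves "name" for the function
    - onchanges:
      - file: mongod_config
```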
08:51 kungfoopanda joined #salt
09:00 Erik-P joined #salt
09:01 doradus joined #salt
09:02 Erik-P left #salt
09:02 Erik-P joined #salt
09:09 mugsie joined #salt
09:09 mugsie joined #salt
09:09 capnhex joined #salt
09:17 edrocks joined #salt
09:19 johnkeates joined #salt
09:20 coredumb how do I run salt '*' saltutil.refresh_pillar from an orchestrator ?
09:23 coredumb can salt.function: name: saltutil.refresh_pillar be used ?
09:37 N-Mi joined #salt
09:37 N-Mi joined #salt
09:39 POJO joined #salt
09:40 fracklen joined #salt
09:42 kungfoopanda i think saltutil runner can be used as
09:42 kungfoopanda run-refresh_pillar:
09:42 kungfoopanda salt.runner:
09:42 kungfoopanda - name: saltutil.refresh_pillar
09:42 kungfoopanda i have not tested it
09:42 coredumb this part of the documentation is pretty bad
09:43 candyman88 joined #salt
09:43 coredumb kungfoopanda: problem is there's no runner module by that name
09:43 coredumb salt-run saltutil.refresh_pillar doesn't work
09:44 POJO_ joined #salt
09:45 kungfoopanda which version of salt is it?
09:45 coredumb latest
09:45 kungfoopanda https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.saltutil.html
09:48 coredumb kungfoopanda: yep it doesn't have a refresh_pillar
09:48 coredumb sync_pillars is a different function
09:50 kungfoopanda yes, i see
09:51 coredumb the only solution I see - from the actual doc - is calling a state from orch that calls module.run - name: saltutil.refresh_pillar
09:51 coredumb which is a bit ...
09:52 coredumb convoluted if I may say
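For what it's worth, the `salt.function` state coredumb asked about at 09:23 does exist in orchestrate and runs an execution module (not a runner) on targeted minions, so a sketch like the following avoids the state-plus-module.run indirection; the file name is hypothetical.

```yaml
# Hypothetical orch/refresh.sls
refresh_all_pillars:
  salt.function:
    - name: saltutil.refresh_pillar
    - tgt: '*'
```

Run with `salt-run state.orchestrate orch.refresh`.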
09:55 gmoro joined #salt
10:00 farcaller joined #salt
10:00 farcaller hi
10:01 farcaller can someone tell me why I need to run saltutil.sync_all all the time before state.apply to sync state files? I thought the latter should take care of that?
10:05 rem5 joined #salt
10:12 POJO joined #salt
10:16 zerocool_ joined #salt
10:21 babilen farcaller: You shouldn't have to run that at all for syncing *state files* nor when running the highstate for dynamic modules
10:22 farcaller babilen: what if I'm testing the code, running state.apply path.to.my.sls repeatedly? (i.e. not a highstate)?
10:22 babilen That's not a highstate, but also shouldn't be necessary as sync_all is for custom modules, states, beacons, grains, returners, output modules, renderers, and utils rather than states.
10:23 babilen If you have changes in those, you'd have to sync them explicitly though
10:23 capnhex left #salt
10:23 zerocoolback joined #salt
10:24 farcaller so it syncs custom module code only on a highstate run?
10:25 absolutejam Anyone know if there's another place fro Hubblestack project besides Slack?
10:25 absolutejam Not the most active of places
10:25 kungfoopanda joined #salt
10:26 kungfoopanda left #salt
10:29 kungfoopanda joined #salt
10:29 mikecmpbll joined #salt
10:32 kungfoopanda_ joined #salt
10:32 kungfoopanda_ quit
10:33 kungfoopanda joined #salt
10:33 losh joined #salt
10:35 LondonAppDev joined #salt
10:39 nicksloan joined #salt
10:45 evle1 joined #salt
10:51 blahasdfdsa joined #salt
11:00 sgo_ joined #salt
11:02 impi joined #salt
11:18 mugsie joined #salt
11:18 mugsie joined #salt
11:19 edrocks joined #salt
11:21 h32Lg joined #salt
11:27 rem5_ joined #salt
11:30 nku joined #salt
11:33 LondonAppDev joined #salt
11:33 Mogget joined #salt
11:36 ivanjaros joined #salt
11:41 onlyanegg joined #salt
11:47 xet7 joined #salt
11:48 sgo_ joined #salt
11:48 manji hey all, has anyone ever managed to target minion using grains in salt-ssh ?
11:50 Praematura joined #salt
11:54 thinkt4nk joined #salt
11:55 manji for example this;
11:55 manji salt-ssh -G 'environment:staging'  test.ping
11:55 manji doesn't work:/
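salt-ssh has no running minion to report grains, but roster entries can carry static grains that `-G` then matches against. A sketch of a hypothetical /etc/salt/roster entry (host, user, and grain values are made up):

```yaml
web1:
  host: 192.0.2.10
  user: deploy
  grains:
    environment: staging
```

With that in place, `salt-ssh -G 'environment:staging' test.ping` should target web1.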
11:57 zerocoolback joined #salt
12:00 mr_kyd joined #salt
12:02 kungfoopanda_ joined #salt
12:05 lasseknudsen joined #salt
12:07 usernkey1 joined #salt
12:09 usernkey joined #salt
12:13 POJO joined #salt
12:19 absolutejam How would you do that?
12:20 absolutejam Wouldn't it need the minion to pass the grains to the master for that?
12:20 vlebo joined #salt
12:23 amcorreia joined #salt
12:24 jure joined #salt
12:26 netcho joined #salt
12:26 netcho hello
12:28 LondonAppDev joined #salt
12:28 edrocks joined #salt
12:29 nickadam joined #salt
12:30 edrocks joined #salt
12:32 yuhl joined #salt
12:36 mikecmpbll joined #salt
12:40 netcho whats the best way to test an endpoint with salt? i have minion running on my app servers and i would like to check for "200 OK" status code ... example curl -i localhost:9000/api/status
12:42 netcho tried with cmd.run but it hangs
12:50 mugsie joined #salt
12:57 usernkey1 joined #salt
12:58 candyman88 joined #salt
13:05 numkem joined #salt
13:06 haam3r netcho: Maybe beacons, if you can do the checks based on something else? ref: https://docs.saltstack.com/en/latest/ref/beacons/all/index.html#all-salt-beacons & https://docs.saltstack.com/en/latest/topics/beacons/
13:08 Yamazaki-kun joined #salt
13:09 babilen netcho: I don't think SaltStack is the best solution for that. Why are you not eyeing a more traditional monitoring solution?
13:12 nku file.copy always returns an error, but the minion doesn't output one in debug mode, and the file is copied too.. wtf..
13:13 babilen Success: Error
13:13 POJO joined #salt
13:13 nku actually, it looks like a race condition. i create a timestamped directory, but the error is for a directory that indeed doesn't exist
13:13 babilen How do you create that directory?
13:13 nku the minion has a different timestamp and copies that file. but shouldn't the master get the answer from the minion..
13:14 nku hm, implicitly it seems. i clone a git repository inside it
13:14 nku didn't know that would create the parent. let me try to do it explicitly
13:16 asyncsec joined #salt
13:17 netcho babilen:  deploying with salt so it would be neat to have a state that will tell me if deployment is OK or not
13:18 netcho did app ran at all
13:19 c_g joined #salt
13:20 c_g joined #salt
13:23 nku yeah, no, this doesn't help either..
13:26 onlyanegg joined #salt
13:32 racooper joined #salt
13:37 nku ok, somehow the master didn't like the source file.. full path doesn't work, but replacing it with ~user/file returns no error..
13:37 * nku should file a bug
13:42 babilen netcho: You could maybe make a http.query call and then use test.fail_without_changes or test.succeed_without_changes to fail or succeed
13:43 coredumb when calling a state file from an orchestrator using * as tgt, apparently the orchestrator tries to call the pillar for  "<hostname>_master" for the master instead of <hostname> ... where does this come from ?
13:43 babilen Call a pillar?
13:47 Brew joined #salt
13:47 coredumb babilen: should have been more explicit that the state in question calls module.saltutil.refresh_pillars
13:47 coredumb with tgt *
13:48 coredumb debug mode on
13:49 babilen Isn't that a runner?
13:50 babilen nvm
13:50 babilen So, you are calling saltutil.refresh_pillars on the minions. What's the problem?
13:51 coredumb actually from the debug I see it's actually calling a pillar refresh when entering the orchestrator from what I can see
13:51 coredumb and well it's refreshing for <master_fqdn>_master instead of <master_fqdn>
13:52 coredumb which is very weird
13:58 mugsie joined #salt
13:59 patrek joined #salt
14:01 dyasny joined #salt
14:01 nicksloan joined #salt
14:02 coredumb babilen: ok it's the reactor datas that does that
14:04 coredumb it's appending _master to the master's hostname and apparently calls a refresh_pillar on the hostname before actually running the orchestrator
14:19 edrocks joined #salt
14:23 coredumb damn debugging orchestrators is PITA
14:26 bowhunter joined #salt
14:28 coredumb any idea why: {% set master = salt.config.get('master') %} in orch.sls gives me master = salt ?
14:29 coredumb indeed from the cli it returns the correct value O_o
14:31 sarcasticadmin joined #salt
14:31 POJO joined #salt
14:31 spicyJalepeno joined #salt
14:33 mikecmpbll joined #salt
14:38 mugsie joined #salt
14:38 mugsie joined #salt
14:39 onlyanegg joined #salt
14:41 ssplatt joined #salt
14:44 XenophonF joined #salt
14:49 nicksloan joined #salt
14:59 netcho babilen:  this looks ok with http.query
15:00 impi joined #salt
15:01 netcho how can i tel another state to run if http.query has Result: true
15:01 netcho require sls?
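Sketch of the combination babilen suggested and netcho is asking about: the `http.query` *state* can assert a 200 directly via its `status` argument, and a later state can gate on it with `require`. The URL mirrors netcho's example; state names and the notify command are illustrative.

```yaml
check_app_status:
  http.query:
    - name: http://localhost:9000/api/status
    - status: 200

notify_deploy_ok:
  cmd.run:
    - name: /usr/local/bin/notify-deploy-ok   # illustrative follow-up step
    - require:
      - http: check_app_status
```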
15:04 mrrc joined #salt
15:04 evle1 joined #salt
15:06 pbandark1 joined #salt
15:10 nicksloan joined #salt
15:15 dendazen joined #salt
15:16 Praematura joined #salt
15:18 rihannon joined #salt
15:20 mavhq joined #salt
15:22 heaje joined #salt
15:23 Inveracity joined #salt
15:24 cyborg-one joined #salt
15:28 mikecmpbll joined #salt
15:28 adelcast left #salt
15:28 adelcast joined #salt
15:29 nku babilen: fwiw, i had two minions with the same id, that was the problem. didn't even know salt would allow that
15:30 adelcast joined #salt
15:32 major is it possible to use a gitfs_root for your main state structure? .. basically add a top.sls to that?
15:34 DammitJim joined #salt
15:37 schemanic joined #salt
15:37 edrocks joined #salt
15:38 PatrolDoom joined #salt
15:38 schemanic Heya, I'm trying to organize my user pillars, and I wanted to ask if I can dynamically match pillar sls files in an include block
15:43 ivanjaros joined #salt
15:51 POJO joined #salt
15:51 edrocks joined #salt
15:53 filippos joined #salt
15:55 farcaller left #salt
16:00 schemanic Is there a list of grains that get included with salt?
16:00 schemanic I'm looking for the possible outputs of grains.os_family
16:00 schemanic I don't know what the MacOS one its
16:00 schemanic is*
16:01 sgo_ joined #salt
16:02 tiwula joined #salt
16:03 fracklen joined #salt
16:05 OCP joined #salt
16:06 netcho joined #salt
16:08 asyncsec joined #salt
16:08 onlyanegg I'm having trouble with the cron absent state. I've tried it several different ways. Can someone tell me what I'm doing wrong? https://gist.github.com/onlyanegg/608af5786eb44cc0dbb3b4dcc43d7525
16:10 onlyanegg running state.apply with test=true, it's returning that the job is absent
16:12 XenophonF major: i have everything in gitfs, so i think the answer to your question is "yes"
16:12 XenophonF schemanic: you could look through salt/grains/core.py (IIRC)
16:14 schemanic Hey XenophonF, thanks. btw you gave me the tip about using conditional grain targeting in my user formula - that's helping me a lot right now.
16:15 major XenophonF, so your top.sls is managed in a gitfs_remote?
16:15 major is there a document that describes the directory structure you need to use to make that work? repo naming convention, etc..
16:16 whytewolf major: yes top.sls can be in gitfs. just needs to be at the root of where the repo reads.
16:16 schemanic in jinja, can I use an IN statement on an array for grain matching?
16:17 whytewolf array?
16:17 schemanic {%- if grains.role_code in ['AP', 'DB', 'CM'] %} <-- is this legal?
16:17 whytewolf that is a list.
16:17 whytewolf and yes
16:18 schemanic Sorry, I've been working in bash a while.
16:18 whytewolf basically think of it like this: arrays are a combination of two types of structures, lists and dicts. python separates them.
16:19 schemanic Im designing my pillars in such a way that a user's pillar file first looks at what role the system they're going into is, then checks what OS it's running before setting up home directories and whatnot.
16:20 whytewolf this becomes very important with questions because some things work for dicts that don't work on lists. and vice versa
16:20 schemanic mmm. I am aware that it's different in python, I just got stuck in how I've been calling them.
16:20 schemanic thanks whytewolf
16:21 whytewolf schemanic: do not put anything related to security behind grain based roles.
16:21 mpanetta joined #salt
16:21 major whytewolf, is there a document that describes doing this? or does salt just assume that a gitfs_remote is using a spm structure? what about storing pillar data in the same repo?
16:21 schemanic whytewolf, can you elaborate? I want to be sure my use case is correct
16:22 schemanic right now I'm giving my hosts grains which designate them as an app server, or a db server, or a workstation, etc.
16:22 whytewolf major: gitfs is assuming you use the same structure you use for a file_root structure
16:22 schemanic then assigning users and states to things that match certain combinations of grains.
16:23 schemanic whytewolf, is that not correct>
16:23 schemanic ?
16:23 whytewolf schemanic: basically grain-based roles have the problem that if someone compromises a server, they can do discovery attacks against pillar to find more info about the infrastructure, maybe even gain passwords/keys and other things, as grains on a minion can be changed on the minion side
16:24 major oh..
16:24 major I think I follow ..
16:24 major hurm
16:25 schemanic So, how am I supposed to target my minions by role?
16:25 schemanic Do elaborate minion id parsing?
16:25 schemanic That sucks
16:26 whytewolf schemanic: well grains based roles are fine for non secure data. things you don't care if someone finds. but for secure data. the only piece of data you can be sure of is minion id.
16:26 whytewolf as that is what the key is locked to.
16:27 schemanic well that's going to screw me if I say 'role_code:AP' gets tomcat-formula with tomcat_master_password: blar blar blar
16:27 whytewolf there is also nodegroups
16:27 schemanic Am I safe if I GPG encrypt all that crap?
16:28 schemanic Can I set a nodegroup from a salt-cloud profile?
16:28 whytewolf nope, nodegroups require a restart of the master for every change
16:29 schemanic That blows. I'm assigning role grains in my salt-cloud profile so that when I make an app server, the system knows it's an app server
16:29 whytewolf and if you use the same gpg key for all of that data. then how easy do you think it will be to decrypt?
16:29 whytewolf schemanic: that blows is the motto of security
16:30 whytewolf if it is easy for you, then it is easy for an attacker.
16:30 schemanic Wait, how do people do GPG encryption then? I thought there is only one keypair?
16:30 babilen schemanic: You might consider running an external pillar (with ext_pillar_first: True) such as pillarstack for roles in pillars (that are grounded in the minion id)
16:30 whytewolf ^
16:30 schemanic babilen, whytewolf I already am running ext_pillar
16:30 schemanic I've got my pillar up in Bitbucket
16:32 schemanic What If I assign grains as normal then write a state to assign the minion id and hostname to a combination of grains, then filter my tops by minion id?
16:32 babilen I'm off now, but wanted to mention that possibility. Keeping secrets in vault and accessing it with the vault module is quite nice also.
16:32 babilen Grains simply aren't secure. At all. A minion can claim to have whatever grains it wants.
16:33 * whytewolf wishes they had never wrote the grains based role documents
16:34 schemanic We have a naming convention scheme made out of codes that designate a system by a number of axes. so If I say owner_code=MC, role_code=WK, type_code=LT, asset_tag=0000, then my system should be named MCWKLT0000. Then my pillars would target *WK* instead of role_code:WK
16:34 schemanic and then you're saying it would be secure yes?
16:35 netcho joined #salt
16:35 whytewolf if you go off of minion id and not grain, yeap
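Sketched against the naming convention schemanic describes (MCWKLT0000, role code in characters 3-4): a pillar top.sls that matches on the minion ID itself with a pcre matcher, so sensitive data never keys off minion-supplied grains. The patterns and pillar file names are illustrative.

```yaml
# Hypothetical pillar/top.sls: the minion ID is tied to the accepted
# key, so matching on it is safe where grain matching is not.
base:
  '^.{2}WK':
    - match: pcre
    - roles.workstation
  '^.{2}DB':
    - match: pcre
    - roles.database
```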
16:35 schemanic right
16:36 schemanic can you do regex in jinja?
16:36 whytewolf you can use the match module
16:36 babilen whytewolf: Yeah, that document influenced so many setups :(
16:37 whytewolf schemanic: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.match.html#salt.modules.match.pcre
16:37 schemanic I'm wondering if I can do it the other way around, and give the system grains based on regex matches on the minion id
16:38 schemanic since the point of origin is salt-cloud, to spin up the machine I already need to use my naming convention to give the box it's AWS Name: tag
16:38 whytewolf you can, however keep in mind grains can still be changed on the minion no matter how you set them
16:39 schemanic Oh
16:39 schemanic I see
16:39 schemanic so If I'm doing it from grains an attacker can futz with my grains and thus influence my next highstate
16:40 whytewolf they can futz with your highstate. or if you are doing any kind of matching in pillar against grains. they can change grains and use pillar.items and pillar.get to gather info
16:40 ThomasJ joined #salt
16:40 schemanic mmm
16:40 schemanic okay
16:40 schemanic sigh
16:40 whytewolf it is the matching grains in pillar that is the real problem
16:40 schemanic but thats... its SUCH a clean system
16:41 schemanic I hate that  because it's so clean and nice. Why cant that get made secure!@>!@
16:41 whytewolf insecure systems generally are the clean ones
16:41 whytewolf because it is the minion side data
16:41 schemanic Why can't we have nice things, humanity
16:41 schemanic Okay
16:42 schemanic Forget about it
16:42 schemanic well
16:42 schemanic not forget, but, I concede - I'll remember my own naming convention and spin it up properly that way, then set the hostname from the minion id
16:43 impi joined #salt
16:45 babilen schemanic: You can match on pillars
16:45 babilen (which is why I mentioned them earlier)
16:48 schemanic babilen, I don't understand what you mean match on pillars
16:48 schemanic the pillars themselves are what do the matching
16:48 schemanic like, top.sls tells which pillars what hosts they apply to, yes?
16:50 schemanic oh...
16:50 schemanic hmm
16:51 schemanic so what you're saying is to have a pillar which says 'host with minion id X gets pillar role_code:Y'
16:53 whytewolf yeah
16:54 whytewolf not sure of the order of external pillars but if you had one that ran before git_pillar you could target on those pillars also.
16:55 POJO joined #salt
16:56 mikecmpbll joined #salt
16:58 babilen ^ that's the basic idea
16:59 asyncsec joined #salt
16:59 babilen And you can then match on I@role:foo (cf. compound matching)
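The whole pattern babilen and whytewolf outline, as a hypothetical sketch: a pillar (external, or targeted by minion ID as above) assigns something like `role: webserver`, and the state top file then targets that pillar value with a compound match.

```yaml
# Hypothetical state top.sls: I@ matches on pillar data, which the
# minion cannot forge the way it can forge grains.
base:
  'I@role:webserver':
    - match: compound
    - webserver
```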
17:05 winsalt joined #salt
17:08 Edgan joined #salt
17:11 xet7 joined #salt
17:13 nicksloan joined #salt
17:17 samodid joined #salt
17:23 poliva joined #salt
17:25 woodtablet joined #salt
17:26 _KaszpiR_ joined #salt
17:29 hassan2566 joined #salt
17:30 XenophonF major: this is my gitfs repo for states - https://github.com/irtnog/salt-states
17:30 XenophonF major: note that git branches get translated into salt environments
17:30 XenophonF major: this is an example git repo for Pillar data - https://github.com/irtnog/salt-pillar-example
17:31 major XenophonF, thanks
17:36 shanth what's the salt flag to only make it show things that are going to change during a state.apply?
17:37 haam3r27 joined #salt
17:37 onlyanegg shanth: test=true
17:38 shanth but i only want it to show things that actually going to execute and change, not all of the state already in correct state onlyanegg
17:39 onlyanegg maybe one of the --state-output values?
17:40 shanth ah looks like it
17:41 olivap joined #salt
17:43 DammitJim joined #salt
17:44 DammitJim is there an easy way to update the hardware enablement stack for ubuntu using salt?
17:46 haam3r joined #salt
17:48 poliva joined #salt
17:50 fracklen joined #salt
17:51 DammitJim I think this is all that needs to be called: sudo apt-get install --install-recommends linux-generic-lts-xenial
17:51 aldevar left #salt
17:56 major the salt formula seems to dislike something about how I am specifying gitfs_remotes.. it is dropping any entries that have children
17:58 Praematura joined #salt
18:01 nicksloan joined #salt
18:03 edrocks joined #salt
18:05 Lionel_Debroux joined #salt
18:07 XenophonF major: I'm using GitPython, so my gitfs config looks like this - https://github.com/irtnog/salt-pillar-example/blob/master/salt/example/com/init.sls#L241
18:08 major yah .. it seems to be because I am specying options to the remotes
18:08 major specifying even
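The usual shape for per-remote gitfs options, which may be what is tripping major up: a remote that carries options becomes a one-key mapping (note the trailing colon after the URL) whose value is a list of single-key dicts; wrong nesting here is a common reason remotes get silently dropped. The URLs are illustrative.

```yaml
gitfs_remotes:
  - https://github.com/example/plain-states.git
  - https://github.com/example/webserver-formula.git:
    - root: states
    - base: master
```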
18:11 hassan2566 hi all, I have a problem with salt-minion: if I start the minion as a service it fails and shows errors that it could not find grains, but when I start the minion as salt-minion -d it is working
18:11 hassan2566 any idea why?
18:11 shanth tried setting up salt with gitfs backed by pygit2, it's not working but i don't see any tips to troubleshoot it - what can you do next?
18:15 hassan2566 can I paste the error
18:16 shanth sure use dpaste
18:17 hassan2566 Jun 14 03:24:18 s4c-proxy01 salt-minion[17226]: msg = msg % self.args
18:17 hassan2566 Jun 14 03:24:18 s4c-proxy01 salt-minion[17226]: TypeError: not all arguments converted during string formatting
18:17 hassan2566 Jun 14 03:24:19 s4c-proxy01 salt-minion[17226]: [ERROR   ] Rendering exception occurred: Jinja variable 'dict object' has no attribute 'site'
18:17 hassan2566 Jun 14 03:24:19 s4c-proxy01 salt-minion[17226]: [CRITICAL] Rendering SLS 'base:common.managed-linux' failed: Jinja variable 'dict object' has no attribute 'site'
18:17 hassan2566 I am running centos 7.3 with systemd service
18:17 Trauma joined #salt
18:18 whytewolf hassan2566: when someone says yes use <x service> to paste. it means do not paste it here
18:18 hassan2566 sorry
18:19 major doh
18:19 major I am fighting the ext_pillar_first bug ...
18:19 major snarf
18:20 mpanetta There is a bug?
18:20 shanth whytewolf: you said you setup salt with atlassian bitbucket for gitfs yeah?
18:20 major "For a while, this config option did not work as specified above, because of a bug in Pillar compilation. This bug has been resolved in version 2016.3.4 and later."
18:21 whytewolf shanth: it was a while ago. but yes i no long own a copy of bitbucket [stash]
18:21 shanth i just set mine up but it's not pulling from it, there's no error messages - state.apply just fails - what can you do to test that it is connected to a gitfs backend?
18:22 whytewolf shanth: salt-run -l debug fileserver.update backend=gitfs
18:22 shanth niceee
18:22 shanth how did you know that?
18:23 whytewolf well. -l debug because i use it everywhere
18:23 whytewolf and fileserver.update backend=gitfs because i happen to use it a lot with my current gitfs configuration
18:24 shanth is fileserver.update a module?i  dont see that on the salt-run page
18:24 whytewolf it is a runner
18:24 whytewolf https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.fileserver.html#salt.runners.fileserver.update
18:24 shanth i dont really get what runners do but i'll check that out
18:25 whytewolf and i was wrong it isn't gitfs it is git
18:25 whytewolf runners are basicly modules that the master uses
18:25 shanth and they only work with salt-run?
18:25 shanth run as in runner/
18:25 shanth ?
18:25 whytewolf yeap
18:25 shanth ahhh
18:25 sgo_ joined #salt
18:26 shanth when you use gitfs, does it merely pull the files from git or does it cache them on the master somewhere?
18:26 whytewolf salt stores the git filesystem in a bare git repo in /var/cache/salt/master/
18:27 shanth cause right now mine is empty
18:27 whytewolf i forget the directory after that  and the master that uses gitfs currently is offline while i have my openstack cluster turned off
18:29 shanth http://dpaste.com/0XEEKPQ i dont see any errors that it cant hit git
18:29 shanth but there is nothing in /var/cache/salt/master :(
18:29 whytewolf nothing at all?
18:29 shanth empty
18:29 shanth when i try to apply a state that i know should be there it says
18:29 shanth Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased.
18:30 whytewolf um...
18:30 shanth but if i turn gitfs off, it works of course with roots as the filesystem
18:30 shanth hoping this is not a freebsd thing
18:30 patrek joined #salt
18:31 whytewolf it might be. i don't know where salt would store its cache files if they are not in /var/cache/salt/master
18:31 whytewolf [there should be a lot more then just the git files in there]
18:31 shanth so if i change the password to something wrong it wont connect, so it is allegedly connecting ok
18:32 whytewolf humm check to see if there is a /usr/local/var/salt/master directory?
18:32 ChubYann joined #salt
18:33 whytewolf or /usr/local/var/cache/salt
18:33 druonysus joined #salt
18:33 shanth nothing there
18:33 shanth let me turn back on roots and test
18:34 whytewolf okay, i have no idea where it is storing your salt cache then.
18:34 shanth yes /var/cache/salt/master gets populated when i use roots and i did fileserver.update
18:35 whytewolf okay.
18:35 shanth so it seems to auth to git but not grab anything, lol
18:36 whytewolf well. it isn't that it isn't grabbing anything - it is. i am just not sure where it is putting it when you don't have roots on
18:36 DammitJim joined #salt
18:36 shanth it should be in the same spot though right?
18:36 * whytewolf shrugs.
18:36 shanth might have to summon the salt gods
18:37 whytewolf when roots is off fileserver.dir_list shows nothing?
18:38 edrocks joined #salt
18:39 shanth should i run that with salt-run?
18:39 whytewolf yes
18:39 shanth it shows stuff but it's stalling
18:39 shanth waiting for it to finish
18:40 shanth going to lunch whytewolf i'll test later
18:40 whytewolf ok
18:42 rihannon joined #salt
18:42 nixjdm joined #salt
18:47 cliluw joined #salt
18:48 candyman88 joined #salt
18:49 fracklen joined #salt
18:51 cliluw joined #salt
18:52 POJO_ joined #salt
18:54 netcho joined #salt
18:57 POJO joined #salt
18:57 druonysus joined #salt
18:59 cliluw joined #salt
19:01 spicyJalepeno is it possible to send a string with newlines to a reactor as event data? i am testing it, but when i look on the eventbus the newlines are stripped out of my event data
19:07 druonysus left #salt
19:08 druonysus joined #salt
19:08 druonysus joined #salt
19:15 btorch joined #salt
19:17 [CEH] joined #salt
19:17 btorch anyone using any salt UI out there ? I've seen talks on saltpad and some other thing but can't recall
19:18 btorch I saw the enterprise version now has a UI but any chance that will come to the OSS one  ? :)
19:18 sgo_ joined #salt
19:18 whytewolf we have asked, the enterprise UI will not be coming to OOS
19:18 whytewolf OSS
19:20 haam3r_ Haven't gotten around to trying it, but there is also molten: https://github.com/martinhoefling/molten
19:21 fracklen joined #salt
19:22 [CEH] joined #salt
19:27 absolutejam joined #salt
19:27 btorch haam3r_: cool thanks , have you tried Lothiraldan/saltpad ?
19:29 cgiroua joined #salt
19:29 candyman88 joined #salt
19:32 haam3r_ btorch: nah..seems a bit abandoned
19:32 btorch yeah that's what I thought too
19:33 DammitJim joined #salt
19:35 _CEH_ joined #salt
19:36 nicksloan joined #salt
19:39 withasmile joined #salt
19:39 nicksloan joined #salt
19:46 onlyanegg joined #salt
19:46 Aleks3Y joined #salt
19:48 nicksloan joined #salt
19:51 shanth whytewolf: i ran salt-run -l debug fileserver.dir_list with roots as my fileserver and it listed all my state files
19:53 shanth when i set it to git it just stalls out - http://dpaste.com/3N6E1JP
19:53 nicksloan joined #salt
19:55 shanth then it says Killed, lol
19:58 nixjdm joined #salt
20:01 cyborg-one joined #salt
20:03 nicksloan joined #salt
20:03 fracklen joined #salt
20:05 shanth i wonder if i should post a bug on github
20:13 edrocks joined #salt
20:14 [CEH] joined #salt
20:20 whytewolf yes
20:21 shanth will do
20:26 major I am having a braincloud day.. how do I show my config? (not pillar/grain/highstate)
20:26 whytewolf show your config?
20:27 major information picked up from /etc/salt
20:28 whytewolf there isn't a way directly.. there is config.get but that doesn't output only /etc/salt options. also there is no config.items function.
20:28 * major cries.
20:29 major and .. is there a reactor_root?
20:29 major or does salt:// always just iterate the file_roots?
20:31 whytewolf salt:// is file_roots
20:32 major soo .. if I have <repo>/{states,pillar,reactors}, and in file_roots I assign - root: states
20:32 major do I have to put an extra git checkout w/out the 'root' specified to use salt://reactors/ ?
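The `root` option being discussed pins a gitfs remote to a subdirectory of the repo, so a single repo laid out as <repo>/{states,pillar,reactors} could be mounted twice. A sketch with a hypothetical repo URL (newer salt versions may also require a per-remote `name` parameter to list the same URL more than once):

```yaml
# /etc/salt/master -- hypothetical single-repo layout
gitfs_remotes:
  # only states/ is served at the top of salt://
  - https://example.com/org/salt.git:
    - root: states
  # a second checkout of the same repo, restricted to reactors/
  # and mounted so it is reachable as salt://reactors/
  - https://example.com/org/salt.git:
    - root: reactors
    - mountpoint: salt://reactors
```

This is exactly the "extra git checkout" being asked about; whether it beats splitting into separate repos is the question the rest of the discussion answers.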
20:33 major the pain..
20:33 whytewolf or, stop using 1 repo
20:33 XenophonF what whytewolf said
20:34 major or just die a little more every day..
20:34 whytewolf I'm currently up to 6 repos
20:34 major and 100+ developers in it?
20:35 major or .. in them..
20:35 whytewolf just me
20:35 whytewolf but 1 repo with 100 devs in it doesn't make it any safer
20:36 whytewolf repos are cheap
20:36 major yah .. not so much about safe so much about not spending all day explaining where something lives .. but I dunno if it matters too much in the end
20:36 ScottK_ joined #salt
20:37 whytewolf major: https://github.com/whytewolf all of the salt-phase0 repos are salt repos. [there is also a pillar repo but that one is private.] also dyn_top is my top file
20:38 whytewolf it is all work in progress as i move from an old system that i can't share [because way too much was hard coded]
20:39 whytewolf when i get to reactors i might add a 7th
20:42 whytewolf now, given that structure, how much time would i need to explain what goes where?
20:42 major to who?
20:42 whytewolf to any devs that would share it
20:43 major 100+ C/Python/Perl developers who spent the last 5 years using Puppet and are looking to transition to SaltStack? ;)
20:43 shanth good major
20:43 asyncsec joined #salt
20:43 whytewolf lol, looking at the way our puppet is at work. this is actually simpler. we have literally thousands of repos for puppet
20:45 major yah .. we currently have one massive repo for puppet.. and most of its layout isn't all that sane to begin with :(
20:45 whytewolf then don't let them carry over bad behaviour
20:47 major yah .. hoping not to
20:47 shanth salt irc is best irc
20:47 whytewolf also. given you have pillars in your main repo you NEVER want to add a gitfs remote that doesn't have a root
20:47 major but I am still trying to figure out so much of this stuff to a level that will help me better aim the CLUExFOUR
20:48 major whytewolf ?
20:48 whytewolf security concern. basically any minion would have full access to all pillars.
20:49 whytewolf and even how they are rendered
20:49 whytewolf making the grains roles issue i was talking with shanth earlier about seem like child's play
20:50 shanth is the pillars main repo thing for me whytewolf? what
20:50 major I am having a stupid moment in trying to picture the implication
20:50 whytewolf no it is for major
20:51 whytewolf major: do you have any ssh keys in pillar or passwords?
20:51 major no
20:51 shanth salt is good but confusing and large
20:51 major that is all expected to go into an ext_pillar
20:51 shanth looking at the wrong module page is the best lol
20:51 major shanth, lol
20:51 whytewolf well git_pillar is an ext_pillar but i get what you mean
20:52 major well .. we are looking at dealing with sensitive data via an alternate method as different teams have different passwords/keys for their stuff and we store all of that encrypted
20:52 major such that the end users can't usually access it ..
20:53 major getting all of that into a pdb of some sort is going to be a later headache that I am really not looking forward to
20:53 whytewolf okay, that is acceptable. however putting pillar into the same space as gitfs is still considered bad practice
20:54 major hmmm
20:54 whytewolf even without security.
20:54 Bryson joined #salt
20:55 whytewolf at the very least i would recommend 2 repos. 1 for your states tree [reactors modules all that] and one for pillar
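The two-repo split recommended here keeps pillar off the fileserver entirely, which is the security point from earlier: anything served by gitfs is readable by every minion, while pillar data is compiled and targeted per-minion. A sketch with placeholder URLs:

```yaml
# /etc/salt/master -- placeholder repo URLs, not from the log
fileserver_backend:
  - gitfs

# states tree: states, reactors, custom modules -- world-readable
# to every minion via salt://
gitfs_remotes:
  - https://example.com/org/salt-states.git

# pillar in its own repo via the git ext_pillar, so it never
# appears on the fileserver
ext_pillar:
  - git:
    - master https://example.com/org/salt-pillar.git
```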
20:55 major well .. it certainly has complicated a few things for me ;)
20:56 whytewolf shanth: the wrong module page ... https://docs.saltstack.com/en/latest/salt-modindex.html such as that?
20:57 whytewolf salt has so many modules types it gets confusing sometimes
20:57 shanth yeah exactly
20:58 dendazen joined #salt
20:59 nixjdm joined #salt
21:02 whytewolf man i am tired. i most likely shouldn't have binged orange is the new blank last night
21:05 major sounds like a new drink
21:05 major "Gimmi an Orange is the new blank" .. "okay .. dreamcicle vodka death ball coming right up!"
21:07 whytewolf hummm orange is the new blank could be an interesting drink. would have to be a split level drink orange and black
21:07 whytewolf would be horrible
21:08 whytewolf i could sell it to goths
21:08 major a colorful version of a cement mixer...
21:22 nicksloan joined #salt
21:24 Guest73 joined #salt
21:27 ThomasJ joined #salt
21:31 lordcirth_work For my nfs_exports state docstring, should I link to docs such as this for host matching syntax?  https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/s1-nfs-server-config-exports.html
21:32 lordcirth_work Or should I just briefly summarize?
21:33 whytewolf I would say briefly summarize. i think salt likes to keep outbound links to a min
21:34 patrek joined #salt
21:41 McNinja howdy all, does anyone know of a way to delay the salt bootstrapping when creating servers via salt-cloud?
21:41 cgiroua joined #salt
21:49 nafg joined #salt
21:51 GnuLxUsr joined #salt
21:59 nixjdm joined #salt
22:17 Roh joined #salt
22:23 major okay .. I split the repos apart .. better to do this early before everyone starts using this stuff vs later...
22:26 Praematura_ joined #salt
22:27 dxiri joined #salt
22:35 druonysus joined #salt
22:35 dxiri_ joined #salt
22:40 onlyanegg joined #salt
22:43 dendazen joined #salt
22:54 onlyanegg joined #salt
22:59 druonysus_ joined #salt
23:03 lstor joined #salt
23:03 Edur joined #salt
23:03 rofl____ joined #salt
23:07 dxiri joined #salt
23:08 cyteen joined #salt
23:15 patrek joined #salt
23:19 bowhunter joined #salt