
IRC log for #salt, 2017-08-11


All times shown according to UTC.

Time Nick Message
00:04 debian1121 joined #salt
00:15 twiedenbein joined #salt
00:26 icebal joined #salt
00:38 icebal joined #salt
01:04 llua joined #salt
01:11 shoemonkey joined #salt
01:22 shoemonkey joined #salt
01:35 armguy joined #salt
01:41 feliks joined #salt
01:51 ilbot3 joined #salt
01:51 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.6, 2017.7.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers <+> Please make sure you're properly identified to speak in the channel.
01:56 armguy joined #salt
01:56 shoemonkey joined #salt
02:02 jab416171 joined #salt
02:05 lkolstad joined #salt
02:06 noraatepernos joined #salt
02:10 zerocool_ joined #salt
02:10 ECDHE_RSA_AES256 joined #salt
02:14 mmidgett joined #salt
02:14 Sarph joined #salt
02:14 Hipikat_ joined #salt
02:15 jerryc joined #salt
02:15 patrek_ joined #salt
02:15 inetpro_ joined #salt
02:16 shoemonkey joined #salt
02:18 shanth__ joined #salt
02:18 saltsa_ joined #salt
02:19 dh joined #salt
02:19 k1412 joined #salt
02:19 eprice joined #salt
02:19 karlthane_ joined #salt
02:21 sjohnsen joined #salt
02:21 JPT joined #salt
02:23 GnuLxUsr joined #salt
02:23 Morrolan joined #salt
02:24 nledez joined #salt
02:24 LeProvokateur joined #salt
02:24 scooby2 joined #salt
02:25 yidhra joined #salt
02:25 cro joined #salt
02:26 merlinco1ey joined #salt
02:27 filippos joined #salt
02:27 PatrolDoom joined #salt
02:28 hatifnat1 joined #salt
02:28 aphor_ joined #salt
02:29 alvinstarr joined #salt
02:29 whytewolf joined #salt
02:29 hemebond joined #salt
02:29 agustafson joined #salt
02:29 hammer065 joined #salt
02:31 jmiven joined #salt
02:32 KevinAn2757 joined #salt
02:32 swa_work joined #salt
02:33 Morrolan joined #salt
02:33 LeProvokateur joined #salt
02:33 nledez joined #salt
02:35 sjorge joined #salt
02:38 mbuf joined #salt
03:03 robin2244 joined #salt
03:11 mavhq joined #salt
03:12 donmichelangelo joined #salt
03:42 high_fiver joined #salt
03:48 shanth_ joined #salt
03:49 onlyanegg joined #salt
04:11 LeProvokateur joined #salt
04:11 SamYaple joined #salt
04:17 k_sze[work] joined #salt
04:28 gmoro_ joined #salt
04:41 justanotheruser joined #salt
04:54 darioleidi_ joined #salt
04:55 g3cko joined #salt
05:03 ecdhe joined #salt
05:06 darioleidi_ joined #salt
05:10 rgrundstrom Good morning everyone
05:14 mbuf rgrundstrom, good morning
05:16 impi joined #salt
05:25 LeProvokateur joined #salt
05:33 felskrone joined #salt
05:54 oida_ joined #salt
05:54 evle joined #salt
05:57 mbuf joined #salt
06:05 do3meli joined #salt
06:05 do3meli left #salt
06:16 Nazca joined #salt
06:29 gnomethrower joined #salt
06:35 keldwud joined #salt
06:37 Tucky joined #salt
06:41 sh123124213 joined #salt
06:42 honestly can someone point me to the documentation on making a custom grains module? :)
06:43 Tucky joined #salt
06:43 honestly nevermind, found it: https://docs.saltstack.com/en/latest/topics/grains/#writing-grains
06:47 o1e9 joined #salt
06:58 Ricardo1000 joined #salt
07:07 Nazca joined #salt
07:08 darioleidi joined #salt
07:12 frdm joined #salt
07:23 Antiarc joined #salt
07:30 usernkey joined #salt
07:35 Hybrid joined #salt
07:45 pppingme joined #salt
07:54 justanotheruser joined #salt
07:56 debian112 joined #salt
07:56 impi joined #salt
07:59 mikecmpbll joined #salt
08:10 J0hnSteel joined #salt
08:10 pualj joined #salt
08:13 cyteen joined #salt
08:17 armyriad joined #salt
08:19 jespada joined #salt
08:22 pbandark joined #salt
08:31 Inveracity joined #salt
08:32 Naresh joined #salt
08:41 JonsBo joined #salt
08:43 zulutango joined #salt
08:46 JonsBo Is it possible to make a service reload multiple times during a highstate using watch? It seems to only reload once for me even if I change the file multiple times.
08:56 impi joined #salt
08:56 whytewolf no, it will only restart once. besides, it is normally bad practice to restart the same service multiple times in a couple of minutes.
09:01 flughafen joined #salt
09:01 flughafen does file.absent throw an error if a file is already gone?
09:01 whytewolf no, since the file is gone it reports that it is in the state it should be
09:02 flughafen whytewolf: thanks.
09:04 jhauser joined #salt
09:06 k_sze[work] So in standard Unix practice, it's possible to disable password login for a user by setting the password hash to * in /etc/shadow
09:07 k_sze[work] But how do I do that via Salt?
09:07 k_sze[work] If I change the password value to * in pillar, Pillar complains with a syntax error.
09:08 dunz0r k_sze[work]: I think you need to use the salt.modules.shadow.lock_password-module
09:08 cyborg-one joined #salt
09:09 dunz0r k_sze[work]: https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.shadow.html#salt.modules.shadow.lock_password
09:09 dunz0r That will set the password to X or something which isn't a proper hash. It'll do the same as setting it to *
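For reference, shadow.lock_password is an execution module function, so it can be called directly from the master; a minimal sketch (the minion and user names here are placeholders):

    salt 'minion1' shadow.lock_password someuser

The same call can also be wrapped in module.run if it needs to happen as part of a state run.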
09:10 JonsBo flughafen: Is there any way to force it to restart on each change? Reloading multiple times is probably the best way. Need to set up nginx for let's encrypt verification, reload, then create config for TLS and then reload.
09:11 whytewolf you are supposed to leave password empty and set empty_password: false [not sure if that actually works though]
09:11 flughafen JonsBo: i think i'm not your intended target
09:11 whytewolf JonsBo: No, honestly you should be using orchestration for something like that anyway.
09:12 JonsBo flughafen: Oh, sorry,.
09:13 whytewolf JonsBo: you could in theory set up multiple states that each watch a different file edit. but honestly that is an ugly solution and you shouldn't need to do this kind of thing every highstate
09:17 JonsBo whytewolf: I'm currently checking if the TLS-certificate exists and only if it's needed I'll reload the service. Do you know what the rules for when a watching service reloads are? Should it happen on the first change, og the second if I have multiple changes to the same file during a highstate?
09:17 ws2k3_ joined #salt
09:17 JonsBo or the*
09:18 whytewolf JonsBo: the restart happens after everything in the watch has been gone through
09:18 JonsBo whytewolf: Ok, thank you so much for helping. :)
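To illustrate what whytewolf describes, a sketch of a watching service state: the service reloads once, after all watched files have been processed, no matter how many of them changed (paths and sources are placeholders):

    /etc/nginx/conf.d/acme.conf:
      file.managed:
        - source: salt://nginx/acme.conf

    /etc/nginx/conf.d/tls.conf:
      file.managed:
        - source: salt://nginx/tls.conf

    nginx:
      service.running:
        - reload: True
        - watch:
          - file: /etc/nginx/conf.d/acme.conf
          - file: /etc/nginx/conf.d/tls.conf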
09:19 zulutango joined #salt
09:27 mbuf left #salt
09:32 bdrung_work joined #salt
09:44 yannik_ joined #salt
09:47 aldevar joined #salt
09:48 aldevar left #salt
09:48 yannikenss joined #salt
09:58 Guest73 joined #salt
10:00 ahrs joined #salt
10:06 mrud joined #salt
10:06 mrud joined #salt
10:11 lorengordon joined #salt
10:17 _KaszpiR_ joined #salt
10:20 Rumbles joined #salt
10:24 exegesis joined #salt
10:44 exegesis joined #salt
10:48 Kelsar joined #salt
10:49 xet7 joined #salt
11:16 Inveracity joined #salt
11:16 xet7 joined #salt
11:25 mbuf joined #salt
11:26 jeddi joined #salt
11:27 cro joined #salt
11:28 nledez joined #salt
11:30 debian1121 joined #salt
11:30 LeProvokateur joined #salt
11:32 darioleidi joined #salt
11:34 hammer065 joined #salt
11:34 cyteen joined #salt
11:40 evle1 joined #salt
11:50 k_sze[work] joined #salt
11:50 o1e9 joined #salt
11:51 shoemonkey joined #salt
11:55 Ricardo1000 joined #salt
11:56 oida_ joined #salt
11:57 mike25de joined #salt
11:57 * mike25de hi all
12:07 Ricardo1000 joined #salt
12:07 mike25de stupid Q guys... how can i get the eth0 interface from the salt 'vm' grains.item ip4_interfaces => this shows me eth0 and lo... but i need only eth0. I tried ip4_interfaces.eth0 but it does not work
12:13 ahrs joined #salt
12:13 smartalek joined #salt
12:18 mike25de no help needed > grains.item ip4_interfaces:eth0  that's how it should be used :)
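The same lookup works inside a jinja template, for example:

    {{ salt['grains.get']('ip4_interfaces:eth0') }}   {# list of IPv4 addresses on eth0 #}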
12:19 jcristau joined #salt
12:19 beardo joined #salt
12:19 packeteer joined #salt
12:19 phobosd__ joined #salt
12:19 astronouth7303 joined #salt
12:19 ople_ joined #salt
12:20 Gareth joined #salt
12:20 daxroc joined #salt
12:20 simonmcc joined #salt
12:20 Ricardo1000 joined #salt
12:21 ThomasJ|m joined #salt
12:22 LeProvokateur joined #salt
12:24 izrail joined #salt
12:25 pbandark1 joined #salt
12:25 mbuf joined #salt
12:28 aldevar joined #salt
12:28 LeProvokateur joined #salt
12:48 omgwtf joined #salt
12:49 noobiedubie joined #salt
12:49 cyborg-one joined #salt
12:50 k1412 D
12:50 k1412 oups, miss typo sorry (bad laptop)
12:51 omgwtf left #salt
12:55 ssplatt joined #salt
13:02 exegesis joined #salt
13:02 jdipierro joined #salt
13:07 shoemonkey joined #salt
13:17 CrummyGummy joined #salt
13:19 drawsmcgraw joined #salt
13:20 saintpablo joined #salt
13:21 cgiroua joined #salt
13:25 filippos joined #salt
13:26 jdipierro joined #salt
13:37 CrummyGummy joined #salt
13:48 pualj joined #salt
13:55 pualj joined #salt
13:58 teratoma joined #salt
14:12 DammitJim joined #salt
14:18 racooper joined #salt
14:26 promorphus joined #salt
14:32 lordcirth_work Would anyone be interested in salt.modules.disk.usage being expanded to allow passing a path, ie '/', to filter data?
14:32 lordcirth_work Or is just disk.usage()['/'] the way to go?
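For context, a rough jinja sketch of the second approach; the 'capacity' key is an assumption about the dict disk.usage returns on a typical Linux minion:

    {% set root_usage = salt['disk.usage']()['/'] %}
    root filesystem at {{ root_usage['capacity'] }}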
14:34 aldevar joined #salt
14:34 aldevar left #salt
14:35 SamYaple joined #salt
14:38 amcorreia joined #salt
14:39 amcorreia Does anyone know if SaltPad is still in development, or is there a better option?
14:47 jdipierro joined #salt
14:48 pualj joined #salt
14:49 feliks is there a way to list all modules currently found on a minion?
14:49 feliks i want to use influxdb_retention_policy.present but i get:
14:49 feliks > [ERROR   ] State 'influxdb_retention_policy.present' was not found in SLS 'stats.influxdb'
14:49 feliks > Reason: 'influxdb_retention_policy' __virtual__ returned False
14:51 feliks the problem is that i dont understand where 'influxdb.db_exists' comes from here:
14:51 feliks https://github.com/saltstack/salt/blob/develop/salt/states/influxdb_retention_policy.py#L16
14:52 _JZ_ joined #salt
14:53 feliks seems to me like this should be callable through the CLI, because it is listed here:
14:53 feliks https://docs.saltstack.com/en/2015.8/ref/modules/all/salt.modules.influx.html#salt.modules.influx.db_exists
14:54 drawsmcgraw joined #salt
14:54 feliks but using `salt $host influxdb.db_exists $db_name` gives me:
14:54 feliks > 'influxdb.db_exists' is not available.
14:55 feliks im running 2017.7.0 (Nitrogen) and triple checked the db to exist. what am i doing wrong?
14:56 lordcirth_work feliks, so that usually means that some python library that's required for that module isn't installed
14:56 lordcirth_work feliks, do you have 'python-influxdb' installed? (Debian/Ubuntu package name)
14:58 sjorge joined #salt
15:00 feliks no, but that version seems to be outdated because it gives errors
15:00 feliks installing `influxdb` through pip works though. thank you lordcirth_work :)
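A sketch of how the two pieces might be tied together in a state file; the database and retention-policy arguments are illustrative rather than taken from the conversation (check the influxdb_retention_policy state docs for the exact signature), and the state module may not load until the minion's modules have been refreshed after the library is installed:

    influxdb-python-client:
      pip.installed:
        - name: influxdb

    stats_retention:
      influxdb_retention_policy.present:
        - name: two_weeks
        - database: statsdb
        - duration: 14d
        - require:
          - pip: influxdb-python-client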
15:01 beardedeagle joined #salt
15:02 lordcirth_work feliks, good. What distro/version?  You're welcome
15:04 exegesis joined #salt
15:05 alvinstarr1 joined #salt
15:07 sjorge joined #salt
15:11 jschoolcraft joined #salt
15:17 Cottser joined #salt
15:19 permalac joined #salt
15:20 feliks debian jessie
15:20 feliks is there a way to better debug this? reading sources seems to be the wrong approach
15:21 onlyanegg joined #salt
15:24 Brew joined #salt
15:26 drawsmcgraw joined #salt
15:27 lordcirth_work feliks, debug how?  Like find what package it needs?  90% of the time module 'foo' needs python-foo, or something similar enough for apt or pip search to find.
15:28 sjorge joined #salt
15:29 pualj_ joined #salt
15:29 feliks yeah but seems like stabbing in the dark
15:38 sjorge joined #salt
15:48 sarcasticadmin joined #salt
15:57 sjorge joined #salt
16:01 cyteen joined #salt
16:02 sjorge joined #salt
16:04 XenophonF joined #salt
16:20 sjorge joined #salt
16:25 stanchan joined #salt
16:28 wendall911 joined #salt
16:31 cgiroua joined #salt
16:33 whytewolf feliks: for your original question. you can get a list of loaded modules with with sys.list_modules
16:34 whytewolf [and sys.list_state_modules for a list of loaded state modules]
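On the CLI that looks like (the minion name is a placeholder):

    salt 'minion1' sys.list_modules
    salt 'minion1' sys.list_state_modules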
16:34 doubletwist So if I wanted to set a grain, say called 'datacenter' and have that grain data depend on which subnet the minion is in, what's the best way to do that?
16:37 whytewolf depends. do you want highstate logic to depend on that grain?
16:38 feliks whytewolf: thanks :)
16:38 dstensnes joined #salt
16:38 doubletwist whytewolf: Um. I'm not 100% certain I understand what you're asking.
16:38 iggy I thought watch would restart multiple times... listen does not
16:39 doubletwist It will be used for things like determining which resolvers will be configured and things like that
16:41 whytewolf iggy: nope, watch restarts once, after everything it watches [since, being a requisite, it orders the watching state after what it is watching]. listen creates a separate state that is ordered last but keeps the normal state in the order it was
16:42 doubletwist Right now I have that logic happening in the resolvers.sls but there will be other formulas that need to know which datacenter a minion is in. I don't want to have to include that logic in each formula. I'd like to set it once, then have the formula just use 'if datacenter=FOO then blah'.
16:42 whytewolf doubletwist: well, in that case i would suggest a custom grain module that is pushed to the minion before a highstate
16:43 whytewolf i asked because if you set a grain in a highstate it isn't there for that highstate
16:44 iggy I never knew that was the difference... good to know
16:46 fatal_exception joined #salt
16:46 nixjdm joined #salt
16:51 Micromus joined #salt
17:00 pipps joined #salt
17:03 pipps99 joined #salt
17:05 Micromus joined #salt
17:07 doubletwist So I see how I can use reactor to force the minion to load a custom grain when the minion starts/connects to the master.
17:07 doubletwist But I'm still not clear then where I 'set' the grain or put the logic? In /srv/salt/_grains ?
17:08 whytewolf salt://_grains
17:09 whytewolf so it can be in a gitfs, or anywhere that gets put into the state fileserver.
17:09 doubletwist and is that basically written as a python function?
17:09 whytewolf yes
17:09 whytewolf that returns a dict
17:09 doubletwist guess I need to learn python :)
17:10 whytewolf for what you want you most likely don't need to get 40 layers deep.
17:11 doubletwist really what I need is an "if subnet is one of subnet1 subnet2 subnet3... ;then mygrain=blah, elsif subnet is subnet4 subnet5 then mygrain=blergh"
17:11 whytewolf just enough to figure out 'hey, I'm on this subnet' [there should be salt helper functions that will help with this; i don't remember if you have access to the salt dunder in grains. if you do, 90% of the work is done for you in the network module, otherwise there should be other modules]
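As a concrete sketch of the custom grain module idea, something like the following could live under _grains/ in the file roots; the subnets and datacenter names are placeholders, and it assumes salt.utils.network.in_subnet() is available on the minion's Salt version:

    # _grains/datacenter.py
    import salt.utils.network

    # placeholder mapping of datacenters to their subnets
    DC_SUBNETS = {
        'dc1': ['10.1.0.0/16', '10.2.0.0/16'],
        'dc2': ['10.4.0.0/16', '10.5.0.0/16'],
    }


    def datacenter():
        '''Set a 'datacenter' grain based on which subnet this minion sits in.'''
        for dc, subnets in DC_SUBNETS.items():
            if any(salt.utils.network.in_subnet(net) for net in subnets):
                return {'datacenter': dc}
        return {}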
17:12 numkem joined #salt
17:14 doubletwist Yeah you lost me there
17:14 whytewolf the salt dunder is basically just a version of the same salt dict you use in jinja
17:14 whytewolf __salt__
17:15 whytewolf dunder is short for double underscore
17:15 doubletwist That's a whole other rabbit hole I haven't gone down :)
17:16 doubletwist So basically what I'm feeling like is that I need to spend 6 months learning python before I can use salt for anything more complicated than a single-subnet homelab...
17:16 whytewolf or, you could just not create a grain and use cidr matching
17:17 whytewolf the issue is creating the grain. not matching on subnets
17:17 doubletwist I don't care if it's a grain or not, I just don't want to have to write the match in a dozen different places and updating it in each place any time a subnet is added or removed.
17:19 whytewolf there is a shorter way to do it than a custom _grains. have your reactor execute a single state sls instead of syncing a module.
17:20 whytewolf or, put it in pillars.
17:20 whytewolf instead of grains
17:21 whytewolf lots of different paths.
17:23 whytewolf no matter what you do, every time you update or delete a subnet you are going to have to change something, as salt can't read the future and know all possible subnets you are going to use
17:24 doubletwist I understand having to change it, I just want to set ONE place to change it
17:24 whytewolf personally i would go with pillar.
17:25 whytewolf with cidr matching in a top file
17:25 doubletwist Well, as it stands now, I was going to check the grain from pillar - and I was told you can't set something in pillar and then read it in pillar
17:26 doubletwist Would something like this work? http://paste.lopsa.org/194
17:26 whytewolf no
17:27 whytewolf you don't do that kind of matching in a sls file
17:28 whytewolf matching is for top files not state files.
17:29 doubletwist Even if it's an sls file only loaded by reactor on start?
17:29 whytewolf state files don't do matching. period
17:30 whytewolf this is what jinja is for
17:33 simondodsley joined #salt
17:35 whytewolf you want something more like this https://gist.github.com/whytewolf/c6ed0f158851d4decf96876575898356
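The gist's contents aren't reproduced in the log; for reference, cidr matching in a (pillar) top file generally looks something like this, with subnets and SLS names as placeholders:

    base:
      '10.1.0.0/16':
        - match: ipcidr
        - datacenter.dc1
      '10.4.0.0/16':
        - match: ipcidr
        - datacenter.dc2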
17:38 shanth__ joined #salt
17:40 DammitJim joined #salt
17:43 mavhq joined #salt
17:52 drawsmcgraw1 joined #salt
17:55 willprice joined #salt
17:58 Micromus joined #salt
18:00 k1412 joined #salt
18:02 XenophonF doubletwist: if you want to set something in one Pillar SLS file and use that same value in another Pillar SLS file, put that something into Jinja and use one of the Jinja import functions to load the Jinja file in both Pillar SLSes
18:03 impi joined #salt
18:03 nixjdm joined #salt
18:03 XenophonF alternatively, supposedly PillarStack lets you use Pillar values from Pillar, but I can't help you with it as I don't use it
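A rough sketch of the shared-Jinja approach XenophonF describes; file names and values are placeholders:

    {# pillar/datacenter_map.jinja #}
    {% set datacenter = 'dc1' %}

    {# pillar/resolvers.sls (and any other pillar SLS that needs the value) #}
    {% from 'datacenter_map.jinja' import datacenter with context %}
    resolvers_datacenter: {{ datacenter }}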
18:04 mikecmpbll joined #salt
18:10 sjorge joined #salt
18:11 pipps joined #salt
18:17 drawsmcgraw joined #salt
18:27 cyborg-one joined #salt
18:29 nledez joined #salt
18:30 cliluw joined #salt
18:31 djinni` joined #salt
18:31 psychi[m] joined #salt
18:43 tongpu joined #salt
18:46 whytewolf gtmanfred: you there? need to know who to talk to about the way my position is displayed on the saltconf talks page
18:46 freelock joined #salt
18:46 hackel joined #salt
18:46 theblazehen joined #salt
18:46 jerrykan[m] joined #salt
18:46 toofoo[m] joined #salt
18:46 ThomasJ|m joined #salt
18:46 aboe[m] joined #salt
18:46 gomerus[m] joined #salt
18:46 benjiale[m] joined #salt
18:46 fujexo[m] joined #salt
18:49 patrek joined #salt
18:50 gtmanfred whytewolf: what should it be changed to?
18:52 whytewolf either whytewolf tech. or pretty much anything but the bank i work at. or, if possible, not set at all. i actually can't have it list my current employer because that puts a liability on the bank i work at and on me.
18:53 aldevar joined #salt
18:55 gtmanfred whytewolf: rhett is going to get it changed asap
18:55 whytewolf thank you
18:59 gtmanfred no problem!
18:59 gtmanfred sorry about that
19:00 whytewolf there are days that i do hate the regulations the bank puts on me. even not having that listed does put some minor liabilities on me. but having it under my name like that... oh man, things could go south really quickly for me, as i become a "representative of the bank". and as they don't use salt ... i'm sure they would call lawyers in to see if i caused any damage to their reputation
19:00 promorphus joined #salt
19:00 whytewolf stupid lawyers
19:02 nixjdm joined #salt
19:02 * whytewolf shudders
19:02 gtmanfred heh
19:02 gtmanfred i just felt a chill run down my back
19:08 keldwud joined #salt
19:08 keldwud joined #salt
19:13 noobiedubie joined #salt
19:14 XenophonF joined #salt
19:14 XenophonF man 2017.7 is really harshing my mellow
19:14 whytewolf if i had a mellow 2017.7 would harsh it for me too
19:15 XenophonF even on EC2 instances in the same subnet as my salt-master, I'm having problems getting state.apply to do its thing when invoked remotely
19:16 XenophonF I'm really worried there's some awful config error that the new version exposes.
19:16 XenophonF b/c---real talk here---it's usually my fault
19:16 XenophonF 54 sec to run state.apply via salt-call
19:17 whytewolf 54 sec isn't bad
19:18 XenophonF yeah so why can't i invoke state.apply from the master?
19:18 whytewolf any errors or does it just futz out?
19:19 whytewolf gtmanfred: besides hopefully i won't be working at $bank when saltconf happens :P
19:19 gtmanfred good luck
19:19 gtmanfred !
19:20 whytewolf thank you
19:23 pipps joined #salt
19:23 ChubYann joined #salt
19:24 xet7 joined #salt
19:27 _KaszpiR_ joined #salt
19:30 DammitJim joined #salt
19:31 LeProvokateur joined #salt
19:35 aldevar joined #salt
19:36 lorengordon joined #salt
19:41 drawsmcgraw left #salt
19:48 bildz I'm seeing an issue where I've changed a state and then the minion doesn't appear to be seeing the changes, even after a saltutil.sync_all.  What needs to be done to refresh the minion to see those state changes?
19:48 bildz i tested making a syntax error and it's not even picking it up
19:50 LeProvokateur joined #salt
19:51 kramer joined #salt
19:52 afics joined #salt
19:54 whytewolf bildz: are you using gitfs. and was this a change to a custom module or just a state file such as package.sls?
19:55 bildz no
19:56 bildz just static on the master
19:57 whytewolf humm. run 'salt-run -l debug fileserver.update' to see if any errors are thrown.
19:57 vtolstov joined #salt
19:57 vtolstov hi all =)
19:57 whytewolf [you shouldn't need to update the fileserver for filesystem based states. but it is worth checking]
19:57 vtolstov i'm investigating about integrate ci with salt
19:58 bildz whytewolf: will do thanks
19:58 whytewolf and on that note. lunch
19:58 vtolstov what is the best practice for this? right now i have a git repo with my stack: ext_pillar config, pillars, states and modules
19:59 vtolstov i have 5 master on each of 5 dc
20:00 vtolstov so after a git push my ci can do something to deploy the changed data to the dcs. how can i do that in the saltstack world?
20:00 vtolstov when i used chef, i created a tar.gz archive with the recipes and rsynced it to the dc
20:03 nixjdm joined #salt
20:04 LeProvokateur joined #salt
20:06 psychi[m] joined #salt
20:08 perfectsine joined #salt
20:19 LeProvokateur joined #salt
20:22 freelock joined #salt
20:22 jerrykan[m] joined #salt
20:22 benjiale[m] joined #salt
20:22 ThomasJ|m joined #salt
20:22 theblazehen joined #salt
20:22 toofoo[m] joined #salt
20:22 gomerus[m] joined #salt
20:22 hackel joined #salt
20:22 aboe[m] joined #salt
20:22 fujexo[m] joined #salt
20:24 XenophonF whytewolf: no errors - just nothing happens
20:25 XenophonF i can see the master querying the minion for a job status update when I run salt-run jobs.print_job ...
20:25 XenophonF test.ping works
20:25 XenophonF salt-call state.apply on the minion itself works
20:25 XenophonF the more remote minions all die with SaltReqTimeoutError exceptions
20:25 XenophonF so I don't know
20:25 XenophonF something's messed up
20:26 whytewolf humm, sounds like a networking issue between the remotes and the master
20:26 LeProvokateur joined #salt
20:29 XenophonF could be
20:29 XenophonF no obvious problems at the TCP layer and below though
20:29 aldevar joined #salt
20:30 XenophonF maybe AWS is interfering with it somehow?
20:31 whytewolf maybe
20:31 shanth__ so i noticed that if i use file.recurse to copy a 150mb folder and set user permissions it takes nearly 25 minutes. if i split them into two tasks - file.recurse to copy files and file.directory to recursively set permission it takes 4 minutes
20:33 shanth__ wonder why that is :(
20:35 whytewolf because file.recurse causes extra processing per file when you enable pretty much any file level item on the files, such as templating or permissions. file.directory just does it wholesale.
20:36 pualj_ joined #salt
20:38 shanth__ also noticed that file.recurse was not applying the owner/group to every file - it missed about 4-5 files per run
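A sketch of the split shanth__ describes, with the paths, user and group as placeholders: file.recurse only copies, and file.directory applies ownership recursively afterwards:

    copy_app_files:
      file.recurse:
        - name: /opt/app
        - source: salt://app/files

    fix_app_ownership:
      file.directory:
        - name: /opt/app
        - user: appuser
        - group: appgroup
        - recurse:
          - user
          - group
        - require:
          - file: copy_app_files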
20:38 DammitJim for my life, I can't understand why applying a state to a minion takes like 5 minutes
20:38 shanth__ what is the state jim?
20:38 DammitJim (maybe I'm exaggerating), but it's not the time it prints when it gets done
20:39 DammitJim it's a state where I'm just applying a template
20:39 shanth__ gonna stop using it to apply permissions whytewolf
20:39 DammitJim let me see what "simple" state I can use as a use case
20:41 whytewolf there are things that can take 5 min because they use things that take a while in the operating system. like say a package manager cache update when you have 40 extra repos on top of the default repos
20:41 DammitJim yeah, I think this is a simple template with pillar data
20:41 DammitJim but again, let me pick a good use case to see what's up
20:42 drel joined #salt
20:42 whytewolf simple templates should definitely not take 5 min.
20:43 whytewolf [otherwise my top file would be insane]
20:43 ws2k3 joined #salt
20:44 pualj_ joined #salt
20:50 DammitJim whytewolf, what do you suggest I look at to see what's taking so long?
20:50 DammitJim I'm going to enable debug logging on the minion and see
20:50 whytewolf i would enable trace or all
20:51 DammitJim wait, what is that?
20:51 whytewolf also just do a render of the file
20:51 whytewolf -l all
20:51 whytewolf it is a higher debug level than debug
20:51 DammitJim ok, I take that back, running the state that applies a template took 1:46
20:51 DammitJim the Total run time: 24.580 ms
20:52 DammitJim where do I set -l all? when I run the state?
20:52 DammitJim or in the minion config?
20:52 whytewolf you have never used -l debug?
20:52 rpb joined #salt
20:52 whytewolf salt-call -l all state.apply state.that.you.suspect
20:52 DammitJim no, I've always set the debug level on the config file instead
20:52 DammitJim :(
20:53 Sarphram joined #salt
20:53 pualj_ joined #salt
20:53 DammitJim whytewolf, what am I looking for in the output?
20:54 whytewolf anything that pauses
20:54 DammitJim may I paste on here?
20:54 whytewolf gist
20:55 DammitJim https://gist.github.com/anonymous/a01000377730d3227cc966ab9559b3cc
20:55 DammitJim man, I hope I don't get in trouble for posting information about my network on gist
20:55 whytewolf eh you shouldn't
20:56 whytewolf so.... was there a pause there? what are the lines around that?
20:57 whytewolf that looks like it is trying to communicate with the master
20:58 DammitJim after some time, it said: [DEBUG   ] SaltReqTimeoutError, retrying. (1/3)
20:59 whytewolf yeah trying to communicate with the master and having issues
20:59 DammitJim ooohhh.. interesting test, let me see if this doesn't happen with servers that are on the same network
21:02 DammitJim oh, weird, it also happens with servers on the same network
21:02 DammitJim I mean, there is some routing, but still
21:03 pualj joined #salt
21:03 DammitJim ok, found a server that is on the same network
21:03 nixjdm joined #salt
21:03 DammitJim would the master say anything?
21:04 whytewolf humm. might not. since it isn't getting the communication being sent to it
21:04 DammitJim is this a wireshark thing?
21:04 whytewolf yeap
21:04 DammitJim ok
21:04 whytewolf you can limit down to port 4506
21:04 DammitJim it's consistent at least
21:05 justanotheruser joined #salt
21:05 vexati0n what is the correct way to run cmd.run with --async, from salt-api ?
21:06 vexati0n the docs say "see full list of runner modules," then helpfully do not give a link to any such list, and the full list of runner modules i can find by googling for it doesn't even include "local" and so is obviously not relevant to salt-api
21:06 whytewolf also check the load on the master. just in case you need more workers.
21:07 whytewolf netapi or python api?
21:07 vexati0n rest
21:07 whytewolf cherrypy or saltnado?
21:07 vexati0n cherrypy
21:08 DammitJim should I be concerned with this? [DEBUG   ] Could not find file 'salt://login.sls' in saltenv 'base'
21:08 vexati0n adding 'async': true doesn't work (500 error), and using 'async' as the client interface also doesn't work (bad request syntax)
21:08 DammitJim the actual state is not login.sls, but login.init
21:09 whytewolf DammitJim: no, it is just looking for the file and the first version wasn't found [it searches state.sls before state/init.sls]
21:09 DammitJim oh
21:10 DammitJim shoot, so I need to do some kind of tcpdump 'cause I don't have a UI
21:10 DammitJim *sigh*
21:10 pualj joined #salt
21:10 whytewolf vex i think this is what you are looking for or at least something close to it https://docs.saltstack.com/en/latest/topics/netapi/index.html#salt.netapi.NetapiClient.local_async
21:11 vexati0n yes exactly. thank you
21:12 whytewolf DammitJim: if you just do a tcpdump that captures the packets to a file you can load that file into wireshark later
21:12 vexati0n ugh. except that, too, gives me a 500 error.
21:13 systemexit joined #salt
21:13 vexati0n 'local_async' should be the 'client' field, but nope.
21:14 whytewolf you are correct that should be all you need. a 500 error means something is throwing an exception. see if you can find that in the logs
21:14 vexati0n eh. nevermind. i was using 'target' instead of 'tgt' for some reason
21:15 whytewolf oh ... well then
21:15 whytewolf neverminded
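For reference, a sketch of the kind of rest_cherrypy call that worked out here: a lowstate POSTed to /run with the client set to local_async and tgt/fun/arg fields (the credentials and command are placeholders):

    curl -sSk https://salt-master:8000/run \
      -H 'Content-Type: application/json' \
      -d '[{"client": "local_async",
            "tgt": "*",
            "fun": "cmd.run",
            "arg": ["some-long-running-command"],
            "eauth": "pam",
            "username": "saltapi",
            "password": "..."}]'

The response should come back immediately with a jid that can be looked up later, instead of blocking on the command.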
21:18 DammitJim ok, so I can see an ACK from the master
21:18 DammitJim but the minion doesn't do anything for like 100 seconds after that
21:19 whytewolf humm strange so the minion gets the ACK but doesn't react to it? i actually have no idea where to go from there
21:20 whytewolf or did you see the ACK leave the master and haven't tested the minion yet?
21:20 DammitJim I saw the ACK packet come to the minion
21:20 DammitJim then the minion doesn't send anything to the master for 100 seconds
21:21 LeProvokateur joined #salt
21:21 whytewolf it sounds like the minion doesn't reconize the ACK.
21:21 whytewolf but i don't get that
21:21 DammitJim actually the master sends the ACK
21:21 DammitJim then sends the PSH after 100 seconds
21:21 DammitJim sorry
21:21 DammitJim I'm going to have to take a tcpdump of the master, aren't I?
21:22 whytewolf yeap
21:22 whytewolf you should be capturing from both sides anyway. just to make sure something in the middle is not grabbing a packet or something
21:23 whytewolf also check the selinux logs to see if selinux nuked the packet
21:23 DammitJim k
21:24 whytewolf or apparmor if on ubuntu
21:26 whytewolf lovely, thunder.
21:26 _KaszpiR_ joined #salt
21:26 DammitJim man, how do I filter stuff on the master?
21:26 DammitJim by destination and source IP?
21:26 whytewolf are you tcpdumping to a packet file?
21:28 DammitJim yes
21:28 DammitJim ok, got it
21:28 whytewolf kewlies
21:28 DammitJim so, 2.6 seconds into it, the master sends to the minion an ACK
21:28 DammitJim then the minion responds @ 42 seconds with an ACK
21:29 DammitJim then the master sends immediately an TCP ACKed unseen segment
21:29 DammitJim then immediately @ 42.1 seconds an TCP Keep-Alive
21:29 DammitJim then a TCP Previous segment not capture
21:29 pipps joined #salt
21:30 DammitJim then at 109 seconds, the master sends another ACK, followed by PSH, ACK, and things seem to start moving at that time
21:30 whytewolf something strange is afoot at the circle k
21:30 whytewolf those timings seem way off
21:32 DammitJim hhmmm
21:32 DammitJim I'll have to research what this stuff means
21:33 socket-_ Is there a way to kill all jobs of state.apply that are running across a fleet of minions? I have other jobs running that I don't want to kill.
21:34 DammitJim I need to corelate packet ids, huh? between the master and minion...
21:35 whytewolf yeap
21:36 pipps joined #salt
21:37 dograt joined #salt
21:38 DammitJim holy crap, I found the problem
21:38 whytewolf ?
21:38 whytewolf what is it
21:38 DammitJim so, I have a mount to another server from my master
21:38 DammitJim I get files from that server
21:39 DammitJim and the packet capture shows almost as if the whole mount is being scanned
21:39 DammitJim I wonder who the heck is doing that
21:39 DammitJim would the master be scanning directories like that?
21:39 whytewolf if they are mounted in the file_roots directory yes
21:40 whytewolf about every 60 or so seconds i believe
21:40 DammitJim yup, that was it
21:40 DammitJim holy crap
21:41 DammitJim so, again, why is it doing that?
21:41 pipps joined #salt
21:41 whytewolf was it in the file_roots directory?
21:41 whytewolf or symlinked to it
21:41 DammitJim it's mounted on /srv/salt/files
21:42 whytewolf do you have a default file_roots set?
21:42 whytewolf because if you didn't set file_roots in the master it defaults to /srv/salt
21:43 DammitJim right, that's what my file_roots is: /srv/salt
21:43 DammitJim man, drinking a beer can help you so much sometimes!
21:44 Rumbles joined #salt
21:45 justanotheruser joined #salt
21:45 whytewolf well, least you found the issue
21:48 DammitJim so, but what do I do now? the reason why it's mounted there is so that I can "file.manage" stuff
21:48 DammitJim it just happens to be a HUGE mount
21:48 whytewolf don't mount the whole thing
21:48 whytewolf cause EVERYTHING that is mounted through that is available through salt://
21:49 DammitJim ok, maybe I'll do multiple mounts to separate mountpoints
21:49 DammitJim thanks
21:50 whytewolf no problem. glad we got to the bottom of your issue.
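One way to narrow what the master scans, as a sketch: list only the trees that actually need to be served in file_roots (paths are placeholders), since every directory listed there gets walked by the fileserver:

    # /etc/salt/master
    file_roots:
      base:
        - /srv/salt
        - /srv/salt-files/app-configs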
21:51 DammitJim :)
21:51 DammitJim I'm sure this is the weirdest thing you've seen!
21:52 whytewolf lol hardest to debug without intimate knowledge of your setup.. i don't know about weirdest
21:53 whytewolf corrupted python packages can get pretty funky sometimes.
21:53 whytewolf then there is a dracut issue i am having with some of my personal hardware
21:53 overyander joined #salt
21:54 DammitJim dracut?
21:55 DammitJim like dracula involved?
21:55 socket-_ is there a way to have the output of a state be returned as text instead of json?
21:55 whytewolf dracut is a init ramdisk system
21:56 overyander i'm using winrepo, i need to install a program that requires an ini settings file to be present in the running directory when the installer is started. I know salt caches the installers in "C:\salt\var\cache\salt\minion\files\prod\win\repo" but will it cache all the files that are in the repo directory or just the ones called by the package's init.sls config?
21:58 whytewolf socket-_: --out txt?
21:58 socket-_ thanks
21:59 whytewolf overyander: things are cached as they are run, in the order they are run. although file management tends to clean its cache up
22:01 nixjdm joined #salt
22:02 pipps joined #salt
22:04 pipps99 joined #salt
22:06 overyander whytewolf, would it work if i did a file.managed and put the config file in the dir where the installer will be cached?
22:06 overyander or is there another more common way to do this?
22:07 whytewolf i don't know. I don't know of any package that has this kind of requirement so this is a new one on me.. in theory it should work.
22:08 overyander ok, i'll give it a shot and let you know how it goes.
22:08 overyander thanks
22:08 whytewolf no problem
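A sketch of the idea overyander is about to try, as discussed above and untested; the package name, ini path and cache path layout are assumptions:

    someapp-settings:
      file.managed:
        - name: 'C:\salt\var\cache\salt\minion\files\prod\win\repo\someapp\settings.ini'
        - source: salt://win/repo/someapp/settings.ini
        - makedirs: True

    someapp:
      pkg.installed:
        - require:
          - file: someapp-settings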
22:11 justanotheruser joined #salt
22:11 socket-_ also how do i target multiple minions. I tried salt "minnion1,minnion2" test.ping and also 'minion1,minion2'
22:12 whytewolf that would be a list so would need -l
22:12 whytewolf err -L
22:12 socket-_ thanks, i was trying -C
22:13 whytewolf -C is compound which that does not fit
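For reference, the list form looks like:

    salt -L 'minion1,minion2' test.ping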
22:15 pipps joined #salt
22:21 shoemonkey joined #salt
22:33 pipps joined #salt
22:39 lorengordon joined #salt
22:47 justanotheruser joined #salt
22:57 pipps joined #salt
23:09 onlyanegg joined #salt
23:23 shoemonkey joined #salt
23:40 nick__ joined #salt
23:43 justanotheruser joined #salt
23:44 pipps joined #salt
