
IRC log for #salt, 2016-12-11


All times shown according to UTC.

Time Nick Message
00:58 jas02_ joined #salt
01:31 jeblair joined #salt
01:33 preludedrew joined #salt
01:53 edrocks joined #salt
01:55 amcorreia joined #salt
01:59 jas02_ joined #salt
01:59 emerson joined #salt
02:05 rubenb joined #salt
02:08 catpiggest joined #salt
02:08 swills joined #salt
02:11 mountpoint joined #salt
02:14 tmrtn[m] joined #salt
02:19 mountpoint joined #salt
02:37 mountpoint joined #salt
02:40 hemebond joined #salt
02:41 sebastian-w joined #salt
02:49 ilbot3 joined #salt
02:49 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.3.4, 2016.11.0 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ (please don't multiline paste into channel) <+> See also: #salt-devel, #salt-offtopic <+> Ask with patience as we are volunteers and may not have immediate answers
02:54 freelock[m] joined #salt
02:54 jcl[m] joined #salt
02:54 Mads[m]1 joined #salt
02:54 Guest80850 joined #salt
02:54 jerrykan[m] joined #salt
02:54 saintaquinas[m] joined #salt
02:54 M-MadsRC joined #salt
02:54 dnull[m] joined #salt
02:54 M-liberdiko joined #salt
02:57 _W_ joined #salt
02:57 mountpoint joined #salt
02:57 jenastar joined #salt
03:00 jas02_ joined #salt
03:06 krobertson joined #salt
03:15 g3cko joined #salt
03:29 jenastar left #salt
03:32 DEger joined #salt
03:35 bastiand1 joined #salt
03:40 swills joined #salt
03:41 writtenoff joined #salt
03:59 writtenoff joined #salt
04:02 jas02_ joined #salt
04:08 CreativeEmbassy joined #salt
04:16 eeeprom joined #salt
04:31 DEger joined #salt
04:33 stooj joined #salt
04:46 debian112 joined #salt
04:49 stooj joined #salt
05:00 flebel joined #salt
05:02 jas02_ joined #salt
05:04 justanotheruser joined #salt
05:12 freesk joined #salt
05:45 SDKr00t joined #salt
06:03 jas02_ joined #salt
06:14 debian112 joined #salt
06:19 icebal joined #salt
06:19 icebal hey guys, looking for some help with gitfs backend
06:20 hemebond Been a lot of people with questions/problems with gitfs lately.
06:22 icebal 1. with gitpython, I get the usual no matching sls found in env base, 2. for pygit2 I get error parsing - pubkey line. using pip install with the newest version, and made sure all the info and spacing is correct
06:23 icebal seems like the first one is common with older versions, but I'm on the newest soooo yeah......
06:25 icebal I can paste my config if it helps, into pastebin
06:30 icebal http://pastebin.com/EX9bvgme
06:37 samodid joined #salt
06:38 iggy icebal: salt-run -l debug fileserver.update
06:41 SDKr00t .
06:44 icebal http://pastebin.com/yVMunZKt
06:44 jacksontj joined #salt
06:44 icebal looks like it gets exit code 2 while fetching gitfs remote
06:45 icebal GitCommandError: 'Error when fetching: fatal: Could not read from remote repository.' returned with exit code 2
06:46 icebal iggy ^
06:49 blue joined #salt
06:58 iggy and you can git clone that manually, right?
06:59 icebal yup yup, just fine. i only have 1 branch as well for both remotes
07:00 icebal with pygit2 i tried https as well, and no go. makes me think its some other config line im missing
07:04 jas02_ joined #salt
07:04 cyborg-one joined #salt
07:09 iggy I only ever used gitpython... but it was so long ago
07:10 iggy we had to stop using gitfs because of some of its downsides
07:20 icebal what downsides? we use it at work, reason i was using it
07:29 iggy it syncs every minute, and at the time it had memory leaks when using large git trees (I imagine those have been fixed)
07:29 iggy those were the big ones for us
07:29 debian112 joined #salt
07:30 iggy we moved to a more event driven workflow (commit -> webhook -> salt-api -> git.latest)
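
A rough sketch of the kind of wiring iggy describes, assuming the master also runs a minion (id "saltmaster01" here) and old-style reactor argument syntax; the event tag, paths and repo URL are placeholders, not anything from this channel:

    # /etc/salt/master.d/reactor.conf -- map the salt-api webhook event to a reactor SLS
    reactor:
      - 'salt/netapi/hook/git-push':
        - /srv/reactor/sync_states.sls

    # /srv/reactor/sync_states.sls -- run the checkout state on the master's own minion
    sync_states:
      local.state.apply:
        - tgt: 'saltmaster01'
        - arg:
          - states_checkout

    # /srv/salt/states_checkout.sls -- the git.latest step at the end of the pipeline
    salt_states_repo:
      git.latest:
        - name: git@git.example.com:ops/salt-states.git
        - target: /srv/salt
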
07:30 icebal yeah, no issues at work, we use the suse managed one with susemanager
07:31 iggy our git repo was so big it often didn't sync in 1 minute... you can imagine that didn't work out so well
07:31 icebal guess i'll just do the root filesystem until i can get it working
07:31 icebal yesh lol
07:32 icebal ours is pretty small, so it syncs in about 5-10 sec
07:35 icebal coolio, changed to file backend, and now the only issue i have is the nodegroup matching issue :/
07:35 icebal this one: https://github.com/saltstack/salt/issues/37742
07:35 saltstackbot [#37742][MERGED] Cannot match on nodegroup when checking minions | Description of Issue/Question...
07:35 buu What on earth did you put in git to take so long?
07:38 icebal buut it works even with that error, so i guess its fine lol
07:45 iggy we were unfortunately having to distribute binaries (jars) via salt:// URLs at the time
07:52 debian112 joined #salt
07:52 SamYaple how can I ensure the order in which state files run? for example, I have a state on the database server that creates a database; a state file on another host uses that database, so obviously I need to wait for it to exist
07:53 buu SamYaple: You can specify it as a dependency
07:53 SamYaple buu: i thought as much, i dont know how
07:53 buu Me either!
07:53 buu =]
07:53 SamYaple the -include param seems to run the state
07:54 iggy across hosts? you have to use the orchestration runner
07:54 buu https://docs.saltstack.com/en/latest/ref/states/requisites.html
07:54 buu Oh
07:54 buu Yeah that could be awkward
07:54 SamYaple iggy: ok thanks. im digging into it
07:54 buu SamYaple: Maybe you want the orchestration runner in general, or salt cloud
07:55 SamYaple oh! overstate
07:55 SamYaple i remember that from a while ago
07:55 SamYaple yea i think thats right
07:57 iggy overstate is deprecated in favor of orchestrate
07:57 SamYaple right i just read that. thats what i was saying
07:57 SamYaple i remember overstate so im familiar with orchestrate
07:58 iggy they work similarly, but the format is different (orch is more powerful, but more complex)
07:58 SamYaple yea i had to abandon my saltstack plans back in the day because overstate wasnt enough
07:59 SamYaple cool. thanks
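
For reference, a minimal orchestrate sketch of the pattern SamYaple describes (targets and SLS names are placeholders), run with: salt-run state.orchestrate orch.deploy

    # /srv/salt/orch/deploy.sls
    create_database:
      salt.state:
        - tgt: 'db*'
        - sls: mysql.database

    deploy_app:
      salt.state:
        - tgt: 'app*'
        - sls: myapp
        - require:
          - salt: create_database
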
08:01 iggy if you are working on openstack, there's some stuff that the tcpcloud guys are working on
08:01 SamYaple do you have a link?
08:04 iggy http://git.openstack.org/cgit/openstack/openstack-salt (I think...)
08:05 SamYaple ah. right, yea ive seen that
08:05 jas02_ joined #salt
08:08 preludedrew joined #salt
08:23 SamYaple is there any way to get a bit more info out of salt when running? like which state it is executing, at least?
08:23 buu Run it with logging turned up?
08:24 SamYaple heh yea. simple enough
08:31 iggy I think there are numerous issues open about "progress" reporting of some sort... it doesn't really fit in well with how salt is designed
08:32 iggy that said, I usually check logs if something seems to be amiss
08:32 SamYaple iggy: i wasn't looking for fine-grained reporting, just basic status. turned logging up to info and got all i needed
08:36 CrashOverride joined #salt
08:38 whytewolf icebal: you forgot the : after each repo
08:38 icebal oops..... whytewolf thanks lol
08:40 whytewolf no problem... i wouldn't even have noticed myself if i hadn't hit that problem when spinning up my own test masters with gitfs not too long ago
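
For reference, the per-remote form whytewolf is pointing at, with the trailing ':' after each repo URL that turns the following lines into per-remote options (URLs and key paths are placeholders):

    gitfs_provider: pygit2

    gitfs_remotes:
      - git@github.com:example/salt-states.git:
        - pubkey: /root/.ssh/gitfs.pub
        - privkey: /root/.ssh/gitfs
      - git@github.com:example/salt-formulas.git:
        - pubkey: /root/.ssh/gitfs.pub
        - privkey: /root/.ssh/gitfs
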
08:52 aarontc joined #salt
08:54 cyborg-one joined #salt
09:00 aarontc joined #salt
09:04 jas02_ joined #salt
09:12 guerby joined #salt
09:26 Trauma joined #salt
09:27 [CEH] joined #salt
09:28 icebal between learning salt and trying to figure out the best docker HA solution that's self hosted, my brain is fried
09:38 sebastian-w joined #salt
09:44 cyteen joined #salt
10:03 mikecmpbll joined #salt
10:04 sarlalian joined #salt
10:04 preludedrew joined #salt
10:19 Flying_Panda I know the feeling icebal, what have you gone with?
10:21 icebal nothing lol. nothing is even close to working well. they either need hard core configs, rewrite all my images, use outside volumes (AWS, GCE, etc) or just plain dont work
10:22 icebal so it looks like im just going to mount everything as NFS, and figure out a failover process
10:24 Flying_Panda ive used salt to write the dockerfiles to deploy the app, and chucked a pillar file over to the devs to store in their project
10:25 Flying_Panda update the project pillar to write a new dockerfile and spin up an instance and then update nginx
10:26 Flying_Panda might need a slight rework to make it work with swarm but dont see it being to bad
10:26 Flying_Panda *too
10:27 Flying_Panda but is quite hacky
10:27 Flying_Panda :(
10:29 aarontc joined #salt
10:29 Flying_Panda their git commit kicks off a pipeline which SSHs into the salt master, updates the pillar file with git archive, then deploys the app
10:31 jas02_ joined #salt
10:31 Flying_Panda the only thing I imagine changing with swarm is the dockerng states
10:32 Flying_Panda can give you some code snippets if you like
10:32 icebal the problem with swarm is it doesn't do anything with "containers". its now swarm services
10:33 icebal and everything is different
10:33 icebal but I do love me some code snippets ;)
10:34 Flying_Panda could you not create the service as part of the image creation ?
10:37 icebal the service uses the image, but now you're calling a command from said image. also there aren't volumes, just mounts, and the options have changed
10:41 Flying_Panda heres an example of what I use for the base docker, still working on the swarm tweaks
10:41 Flying_Panda https://gist.github.com/DrunkenAngel/7dc3012dc7ea0c1c2b7a1528a323921c
10:41 Flying_Panda really basic :(
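
Not the contents of the gist, but a minimal sketch of the sort of dockerng state under discussion (image, ports and names are placeholders):

    myapp_image:
      dockerng.image_present:
        - name: example/myapp:latest

    myapp_container:
      dockerng.running:
        - name: myapp
        - image: example/myapp:latest
        - port_bindings:
          - 8080:80
        - require:
          - dockerng: myapp_image
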
10:46 icebal for simple images its fine, like web processes, but for instance plex, it goes downhill fast
10:46 Flying_Panda yea luckily dont have to play with that :D
10:47 icebal lol true, but really anything more complicated than nginx, a config, and some files to attach and it gets stupid
10:48 jas02_ joined #salt
10:54 sarlalian joined #salt
10:56 Miouge joined #salt
10:56 Miouge Any good way to append a string to all elements of a list in Jinja?
10:57 Miouge Something like "{{ ['localhost', '127.0.0.1'] | append(':443') | join(',') }}" ?
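
One pure-Jinja way to do what Miouge asks, using only built-in filters (there is no append filter; the list values are placeholders):

    {# join trick: the separator carries the suffix, then add it once more for the last element #}
    {%- set hosts = ['localhost', '127.0.0.1'] -%}
    {{ hosts | join(':443,') ~ ':443' }}

    {# or an explicit loop, which reads more clearly #}
    {%- for h in hosts -%}{{ h }}:443{{ ',' if not loop.last else '' }}{%- endfor -%}
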
10:58 cebreidian_ joined #salt
10:59 keimlink joined #salt
11:08 haam3r1 joined #salt
11:25 jas02_ joined #salt
12:07 SamYaple how can I debug a watch requisite that is triggering when none of the watched files have changed
12:07 SamYaple http://pastebin.com/XzrX5XQU
12:07 SamYaple when run, none of those files change, and yet this state still executes
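
A watch fires whenever any watched state reports changes, even incidental ones (mode, ownership, a re-rendered template), so one way to see which watched state is reporting a change is a dry run with debug logging on the minion (the SLS name below is a placeholder):

    salt-call -l debug state.apply mystates test=True
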
12:33 [|SDK|] joined #salt
12:36 XenophonF d
12:40 XenophonF joined #salt
12:49 Xenophon1 joined #salt
12:53 jas02_ joined #salt
12:58 XenophonF joined #salt
13:08 [CEH] joined #salt
13:13 onmeac joined #salt
13:26 CeBe joined #salt
13:28 skeezix-hf joined #salt
13:40 systeem joined #salt
13:44 cyteen joined #salt
13:55 jas02_ joined #salt
14:34 mavhq joined #salt
14:42 sh123124213 joined #salt
14:46 systeem joined #salt
14:55 jas02_ joined #salt
15:00 keimlink joined #salt
15:01 delpa joined #salt
15:15 judy joined #salt
15:22 klaas joined #salt
15:37 debian112 joined #salt
15:56 jas02_ joined #salt
15:57 euidzero joined #salt
16:12 manh joined #salt
16:12 manh hello
16:12 manh my name Manh
16:12 manh from Vietnam
16:12 manh so, I have question
16:14 warpil joined #salt
16:14 warpil Hello
16:14 warpil I trying to target minion with specific interface existing
16:14 warpil in state
16:15 warpil {% if grains['location'] == 'A1' and grains['hwaddr_interfaces'] == 'sfn0p1.1573:*' %}
16:15 warpil but it doesn't works =(
16:16 warpil i mean - i dont understand how to filter and apply a state to only the specific minions which have that specific interface
16:18 AndreasLutro you can't use wildcards etc anywhere other than top.sls, you need to find another way to check the grain using jinja/python
16:18 warpil salt -C 'G@hwaddr_interfaces:sfn0p1.1573:* and myhost01' test.ping - it works
16:18 warpil I see. any other ideas how to check existence of interface in state , except top.sls ?
16:18 warpil and except adding additional own grain
16:18 AndreasLutro in your case you can do {% if grains.hwaddr_interfaces.get('snf0p1.1573') %}
16:19 warpil doesn't work
16:19 warpil i dont see state applied in test
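
For what it's worth, the suggestion above spells the interface 'snf0p1.1573' while the earlier grain match used 'sfn0p1.1573'; with the same key as the working CLI match, a state-side check could look like this (the state body is only a placeholder for illustration):

    {% if grains['location'] == 'A1' and 'sfn0p1.1573' in grains.get('hwaddr_interfaces', {}) %}
    sfn_interface_present:
      file.managed:
        - name: /tmp/sfn_interface_present
        - contents: 'sfn0p1.1573 exists on this minion'
    {% endif %}
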
16:21 euidzero joined #salt
16:23 Flying_Panda anyone fancy spotting my syntax error? cant for the life of me see it. https://gist.github.com/DrunkenAngel/28721980fc00717c89b55e553ab4eee7
16:25 euidzero joined #salt
16:26 jas02_ joined #salt
16:28 euidzero joined #salt
16:29 onmeac {% if salt['cmd.run']('salt-call service.available { item } ') == True %}, { item } this isn't correct? haven't done or seen that before
16:30 Flying_Panda trying to call an item from the pillar array
16:32 Flying_Panda specified with the for item in pillar loop, how would you query it?
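
One way to query each pillar item without shelling out to salt-call from cmd.run is to call the execution module directly; the pillar key and state names here are placeholders, not taken from the gist:

    {% for item in salt['pillar.get']('services', []) %}
    {% if salt['service.available'](item) %}
    {{ item }}_service:
      service.running:
        - name: {{ item }}
    {% endif %}
    {% endfor %}
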
16:33 onmeac one moment, making an sls on my test system
16:36 krymzon joined #salt
16:40 euidzero joined #salt
16:46 onmeac commented on github
16:53 mrueg joined #salt
16:54 jas02 joined #salt
16:55 mohae joined #salt
17:05 jhauser joined #salt
17:07 nethope joined #salt
17:09 systeem left #salt
17:09 Flying_Panda thanks trying now :D
17:10 Flying_Panda same error :(
17:11 mavhq joined #salt
17:14 krymzon joined #salt
17:17 cyborg-one joined #salt
17:48 jas02_ joined #salt
17:54 NeoXiD joined #salt
17:55 jooni joined #salt
17:58 vodik is there a version of file.managed that makes sure a file is present on the system, but doesn't touch it if it already exists? (bootstrap it, but i expect the system to overwrite it/amend it)
17:59 vodik looking at the documentation, i think 'create' might do it, but to me it reads like the opposite of what i'm asking for?
18:06 [CEH] joined #salt
18:09 scsinutz joined #salt
18:11 sh123124213 joined #salt
18:14 lionel joined #salt
18:17 DEger joined #salt
18:21 Nahual joined #salt
18:34 scsinutz joined #salt
18:51 buu vodik: Pretty sure one of the options to file.managed or file.create does that exact thing
18:52 DEger joined #salt
18:53 vodik buu: ... thanks for making me double check - i missed it the first time
18:53 vodik replace: false
18:53 buu =O
18:53 vodik "If set to False and the file already exists, the file will not be modified even if changes would otherwise be made. Permissions and ownership will still be enforced, however."
18:53 vodik buu: thanks
18:53 buu It's my only life skill
18:53 vodik heh
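
A minimal sketch of the pattern vodik is after; the path and source are placeholders:

    bootstrap_app_config:
      file.managed:
        - name: /etc/myapp/config.ini
        - source: salt://myapp/config.ini
        - replace: False    # ship the file once; leave it alone if it already exists
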
19:03 samodid joined #salt
19:05 Lionel_Debroux joined #salt
19:06 Lionel_Debroux joined #salt
19:11 eeeprom joined #salt
19:12 chadhs joined #salt
19:15 scsinutz joined #salt
19:19 jas02_ joined #salt
19:24 Lionel_Debroux joined #salt
19:28 ronnix joined #salt
19:33 swa_work joined #salt
19:37 jhauser joined #salt
19:56 Trauma joined #salt
19:59 fgimian joined #salt
20:05 irctc367 joined #salt
20:06 irctc367 Hi! I renamed around 1000 minions. This went fine. But now I wanted to delete the old keys and tried to run salt-run -t 1200 manage.down, but this always times out. Any suggestions?
20:09 justanotheruser joined #salt
20:10 onmeac the command times out or the minions do
20:12 irctc367 onmeac: SaltClientError: Salt request timed out. The master is not responding. If this error persists after verifying the master is up, worker_threads may need to be increased.
20:13 irctc367 onmeac: but there's NO load on the master, it does nothing, so i don't see any reason to raise the worker_threads
20:14 onmeac did you run the salt master in the foreground (non-daemon mode) with debug logging to see what's going on?
20:17 irctc367 onmeac: no but i could try. Logfile says nothing...
20:17 scsinutz joined #salt
20:18 onmeac It helped us in the past to debug issues, running salt-master in debug mode in console
20:23 scsinutz joined #salt
20:25 irctc367 onmeac: these are the last lines in console debug mode - nothing happens after that: https://gist.githubusercontent.com/disaster123/13d4c493247bbb53bcd4699ae82d314b/raw/27cd9b1bfa2e701a571ed1e2dbe0efc0c5867a81/gistfile1.txt command still hanging
20:31 sh123124213 joined #salt
20:39 nidr0x joined #salt
20:43 fracklen joined #salt
20:44 krymzon joined #salt
20:46 onmeac nothing happens at all when you do some salt command?
20:48 Miouge joined #salt
20:50 jas02_ joined #salt
20:53 irctc367 onmeac: oh everything is working fine in batch mode except salt-run manage.down
20:54 onmeac what about: salt-run manage.alived
20:56 onmeac "Print a list of all minions that are up according to Salt's presence detection (no commands will be sent to minions)"
20:58 hemebond joined #salt
21:00 keimlink joined #salt
21:02 madboxs_ joined #salt
21:02 irctc367 onmeac: alived returns a list in seconds
21:03 onmeac what happens to your CPU usage when targeting \* not in batch mode
21:04 Miouge joined #salt
21:05 irctc367 onmeac> what kind of command? test.ping?
21:06 onmeac yeah sure, just to see what happens
21:16 irctc367 onmeac> it goes down from 95% idle to 55% idle
21:16 irctc367 onmeac> but it's working fine within seconds too
21:19 onmeac irctc367, and with manage.down or up?
21:24 irctc367 onmeac> manage.up works fine
21:24 irctc367 onmeac> manage.down still running
21:25 onmeac anything 'weird' in the debug log and/or cpu usage?
21:26 irctc367 onmeac: cpu is mostly 95% idle, jumping down to 5% idle, but that spike lasts just 3s
21:33 onmeac what if you up the worker threads for testing, or have you tried that already?
21:35 preludedrew joined #salt
21:36 irctc367 onmeac: i'm already at 512 while testing and saw no difference - was starting at 64
21:37 irctc367 onmeac: but i need to go to bed so will test again tomorrow
21:38 onmeac gnight
21:39 irctc367 onmeac> thanks for your help
21:52 mpanetta joined #salt
21:55 fracklen joined #salt
21:56 jas02 joined #salt
21:57 jas02 joined #salt
22:11 fracklen joined #salt
22:19 fracklen joined #salt
22:19 jas02_ joined #salt
22:27 fracklen_ joined #salt
22:33 iggy for future reference, you can use a target with salt-run manage.down removekeys=True
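
For the record, the runner invocation iggy refers to; whether a target can also be passed depends on the Salt version:

    salt-run manage.down removekeys=True
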
22:41 hemebond :-O
23:07 judy joined #salt
23:10 lorengordon joined #salt
23:13 huleboer joined #salt
23:18 cowboycoder joined #salt
23:34 jas02 joined #salt
23:48 mohae joined #salt
23:56 UForgotten joined #salt
23:58 stooj joined #salt
