
IRC log for #salt, 2017-11-01


All times shown according to UTC.

Time Nick Message
00:04 sp0097 joined #salt
00:05 kellyp joined #salt
00:10 swa joined #salt
00:11 swa left #salt
00:28 ashmckenzie joined #salt
00:54 kellyp joined #salt
00:56 johnj joined #salt
01:05 whytewolf Dxiri: putting it in the hosts file just eliminates a possible issue. If it works with the hosts file then the issue is DNS. Otherwise you could have something else wrong and are chasing the wrong problem
01:05 whytewolf It isn't a solution, it is a troubleshooting step
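
As a concrete version of that troubleshooting step, pinning the master in the minion's hosts file might look like this (the IP and names are illustrative):

    # /etc/hosts on the minion -- temporary entry to rule DNS in or out
    203.0.113.10    salt    salt.example.com
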
01:08 tiwula joined #salt
01:14 maestropandy joined #salt
01:19 gf joined #salt
01:31 rhavenn joined #salt
01:37 glenn joined #salt
01:38 rhavenn can you nest .sls files? i.e. from the top.sls grain-match to a sub-directory, and then in that init.sls grain-match to another file? is that supposed to work?
01:42 rhavenn basically i have this: https://gist.github.com/anonymous/ab15402853c3e70b8a9c4aee6d521f26   and when I run a state.apply against a target I get: State 'G@baseclass:vdi.vdi' was not found in SLS 'os_windows'
01:44 rhavenn also that G@baseclass:desktop.desktop wasn't found
01:44 dxiri_ joined #salt
01:51 _JZ_ joined #salt
01:57 johnj_ joined #salt
02:02 Nightcinder left #salt
02:08 nomeed joined #salt
02:40 bryan joined #salt
02:54 evle2 joined #salt
02:56 ilbot3 joined #salt
02:56 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.8, 2017.7.2 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic <+> We are volunteers and may not have immediate answers
02:58 johnj_ joined #salt
03:12 sp0097 joined #salt
03:18 rhavenn whytewolf: yeah, but I've restarted both my minion and master and it's been a few hours and still the same error. The grains are defined on the Windows minion. Do they need to be in a pillar or something?
03:21 whytewolf Oh wait, I re-read what you have. It is way off. top.sls is always a single file; it doesn't cascade, and top-file formatting doesn't work in other sls files
03:22 rhavenn whytewolf: well that explains that :) thanks. I'll go re-do my layout.
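
A minimal sketch of the layout whytewolf describes, reusing rhavenn's grain and SLS names (paths and the `os_windows.common` file are illustrative): targeting syntax lives only in top.sls, while ordinary state files pull other files in with include.

    # /srv/salt/top.sls -- the only place top-file targeting syntax is valid
    base:
      'baseclass:vdi':
        - match: grain
        - os_windows.vdi
      'baseclass:desktop':
        - match: grain
        - os_windows.desktop

    # /srv/salt/os_windows/vdi.sls -- an ordinary state file: use include, not targeting
    include:
      - os_windows.common
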
03:27 k_sze[work] How does salt-minion know it is communicating with the right salt-master?
03:28 k_sze[work] e.g. if I install salt-minion on Debian using apt-get, salt-minion is automatically started and tries to send the key to the salt master, without giving me the chance to configure the salt master hostname first.
03:28 k_sze[work] Isn't that kinda dangerous?
03:32 onlyanegg joined #salt
03:38 ntropy k_sze[work]: the minion expects to find the master by resolving the name "salt"; this usually won't resolve unless you set it up in advance
03:38 ntropy k_sze[work]: also, salt uses public keys for encryption & auth, so the minion only sends its public key to the master, not the private key.  in other words, it's not dangerous
03:39 ntropy once the minion successfully auths to the master, the minion also stores a copy of the master's public key, so next time it can check if it's talking to the same master or an impostor
03:40 k_sze[work] ntropy: I mean if a rogue master accepts the public key, it can send arbitrary commands to the minion.
03:40 rhavenn k_sze[work]: sure, but then you've got worse problems
03:41 ntropy k_sze[work]: correct, once minion auths to master, master owns it and can do what it wants
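
A sketch of how the minion side is usually pinned down (hostname and fingerprint are placeholders): set the master explicitly instead of relying on the "salt" DNS name, and verify the master's key fingerprint so the minion refuses an impostor on first contact.

    # /etc/salt/minion
    master: salt.example.com
    # Fingerprint as reported by `salt-key -F master` on the real master;
    # the minion will not auth if the master's key doesn't match.
    master_finger: 'ab:cd:ef:...'
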
03:59 johnj_ joined #salt
04:12 hemebond joined #salt
04:12 hemebond Does anyone know if orchestration can timeout?
04:12 dxiri joined #salt
04:15 SkyRocknRoll_ joined #salt
04:38 ntropy hemebond: timeout at what level?  when you call the orchestration state from the master?  at individual state level?
04:51 SkyRocknRoll_ joined #salt
04:51 zerocool_ joined #salt
04:54 kellyp joined #salt
04:59 hemebond ntropy: At the orchestration level.
05:00 hemebond My orchestration file seems to stop after a few of the steps.
05:00 hemebond Specifically, after a particularly long highstate.
05:00 johnj_ joined #salt
05:00 hemebond But if I run the orchestration again it completes all the steps.
05:01 euidzero joined #salt
05:04 N-Mi joined #salt
05:24 impi joined #salt
05:25 zerocool_ joined #salt
05:28 LocaMocha joined #salt
05:29 maestropandy joined #salt
05:30 aldevar joined #salt
05:35 maestropandy left #salt
05:35 zerocool_ joined #salt
05:39 AvengerMoJo joined #salt
05:42 LocaMocha joined #salt
05:44 ntropy when you say "seems to stop" - you mean you killed it after waiting for a few minutes?  maybe it's just slow, not timed out
05:45 ntropy also, i think debug-level logging should tell you if it's actually doing anything or not
05:51 LocaMocha joined #salt
05:51 hemebond The orchestration is called via a reactor so I'm just watching the events.
05:51 hemebond I see an event for the orchestration finishing/returning.
05:51 hemebond But it does it after the highstate, not after all the states have run.
05:53 hemebond I'll have a look for an error.
05:53 hemebond Seems odd though, because if I trigger the reactor again (rebooting the minion) the full orchestration is run.
05:54 icebal joined #salt
05:54 hemebond ... maybe the highstate fails the first time...
05:56 hemebond Aha, return code 2. So the highstate is actually failing the first time, but not in subsequent runs.
06:01 johnj_ joined #salt
06:09 hemebond Found it. Goodness me, what a round-about way to find an error in my formula :-D
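
For reference, a minimal orchestration sketch (target and SLS names are illustrative) where later steps explicitly require the highstate, so a highstate that exits with return code 2 shows up as the point of failure rather than the run just appearing to end early:

    # /srv/salt/orch/provision.sls
    run_highstate:
      salt.state:
        - tgt: 'app-minion'
        - highstate: True

    post_config:
      salt.state:
        - tgt: 'app-minion'
        - sls: app.post_config
        - require:
          - salt: run_highstate
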
06:16 armyriad joined #salt
06:17 omie888777 joined #salt
06:22 onlyanegg joined #salt
06:24 ntropy maybe you could just test the formula alone? :D
06:29 Shirkdog_ joined #salt
06:32 armyriad joined #salt
06:43 major joined #salt
06:43 kukacz joined #salt
06:49 Miouge joined #salt
07:02 johnj_ joined #salt
07:19 aviau joined #salt
07:23 onlyanegg joined #salt
07:23 maestropandy1 joined #salt
07:23 maestropandy1 left #salt
07:31 jas02 joined #salt
07:33 pualj joined #salt
07:52 zulutango joined #salt
07:55 hoonetorg joined #salt
07:57 impi joined #salt
08:03 johnj joined #salt
08:04 omie888777 joined #salt
08:19 absolutejam Anyone else having issues with docker_events reactor not really doing anything?
08:19 absolutejam I've only just tried it, but not seen an event come through so far...
08:23 aldevar joined #salt
08:25 gnomethrower joined #salt
08:36 jhauser joined #salt
08:44 Trauma joined #salt
08:48 Ricardo1000 joined #salt
08:51 cyborg-one joined #salt
08:53 kellyp joined #salt
08:55 Lenz joined #salt
09:04 johnj joined #salt
09:24 onlyanegg joined #salt
09:26 pbandark joined #salt
09:27 aldevar joined #salt
09:33 pbandark joined #salt
09:56 Naresh joined #salt
09:58 maestropandy joined #salt
09:58 maestropandy left #salt
10:05 johnj joined #salt
10:09 maestropandy1 joined #salt
10:10 maestropandy1 left #salt
10:15 hemebond ntropy: It seemed to be working fine before I changed over to orchestration (from a series of events and reactors)
10:16 hemebond Nothing like a refactor to chase out any bugs.
10:17 mikecmpbll joined #salt
10:31 pualj joined #salt
10:42 absolutejam nevermind
10:43 absolutejam nuked my pip docker and docker-py installs
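
For anyone hitting the same thing: the docker_events engine (which emits the events a reactor would match on) has to be enabled in the minion or master config and needs a working Python Docker client, which is what the reinstall above fixed. A minimal sketch, assuming default engine settings:

    # /etc/salt/minion.d/engines.conf
    engines:
      - docker_events: {}
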
10:47 fracklen joined #salt
10:54 absolutejam Is there any way to use pillar values in reactor?
11:05 hemebond absolutejam: As far as I know, not from the minion pillar, no.
11:05 hemebond But you can pass data in as pillar data.
11:06 johnj joined #salt
11:07 fracklen joined #salt
11:12 absolutejam Just because some of my reactors fire a message to Slack
11:12 absolutejam and I'm having to duplicate the config in every one
11:12 hemebond Every reactor?
11:13 hemebond Reactors get `tag` and `data` variables which can be passed on.
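
A sketch of what hemebond means (file name, target, and the utils.slack_notify state are illustrative, and the argument-passing syntax varies a little between Salt releases): the reactor template has `tag` and `data` in scope and can forward them as pillar to whatever state actually sends the Slack message, so the Slack config only lives in one place.

    # /srv/reactor/notify_slack.sls
    notify_slack:
      local.state.apply:
        - tgt: {{ data['id'] }}
        - arg:
          - utils.slack_notify
        - kwarg:
            pillar:
              event_tag: "{{ tag }}"
              minion_id: "{{ data['id'] }}"
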
11:18 stooj joined #salt
11:25 jas02 joined #salt
11:25 onlyanegg joined #salt
11:36 jas02 joined #salt
12:06 vishvendra joined #salt
12:07 johnj joined #salt
12:12 kellyp joined #salt
12:14 Nahual joined #salt
12:19 fracklen_ joined #salt
12:20 fracklen_ joined #salt
12:30 mchlumsky joined #salt
12:41 Trauma joined #salt
12:55 yuhl joined #salt
12:58 sjorge joined #salt
13:08 oyvindmo joined #salt
13:08 johnj joined #salt
13:08 numkem joined #salt
13:13 maestropandy joined #salt
13:14 maestropandy left #salt
13:22 sjorge joined #salt
13:25 racooper joined #salt
13:26 onlyanegg joined #salt
13:33 gh34 joined #salt
13:38 andreios joined #salt
13:48 user-and-abuser joined #salt
13:56 fracklen joined #salt
13:57 onlyanegg joined #salt
14:06 euidzero joined #salt
14:09 euidzero joined #salt
14:09 johnj joined #salt
14:09 netcho joined #salt
14:09 netcho hi all
14:10 edrocks joined #salt
14:11 maestropandy joined #salt
14:11 jholtom joined #salt
14:11 netcho when i run my state (boto - route53) on salt-minion 2017.7.1 (Nitrogen) it works fine, but on salt-minion 2017.7.2 (Nitrogen) i get: State 'boto_route53.present' was not found in SLS 'route53.createArecord'
14:12 netcho here is the debug output
14:12 matti joined #salt
14:12 netcho https://hastebin.com/cigepuwava.rb
14:13 netcho [DEBUG   ] Failed to import utils raetevent:
14:13 netcho ...
14:13 netcho ImportError: No module named raet
14:14 onlyanegg joined #salt
14:14 netcho salt-master 2017.7.0 (Nitrogen)
14:16 oyvindmo joined #salt
14:16 cgiroua joined #salt
14:17 maestropandy1 joined #salt
14:17 keltim joined #salt
14:19 netcho bumped the master to latest, still the same
14:19 netcho any ideas?
14:23 matti joined #salt
14:28 bluetex joined #salt
14:29 Brew joined #salt
14:34 netcho joined #salt
14:37 netcho joined #salt
14:37 netcho joined #salt
14:37 ecdhe joined #salt
14:41 keltim joined #salt
14:41 netcho figured it out
14:41 netcho needs to have boto3 AND boto
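
For the record, a hedged sketch of the state in question (zone, record, and region are illustrative); per netcho's finding, on 2017.7.2 the module only loads when both boto and boto3 are importable on the minion, which is why the state appeared to be "not found":

    # route53/createArecord.sls
    ensure_a_record:
      boto_route53.present:
        - name: web.example.com.
        - zone: example.com.
        - record_type: A
        - value: 192.0.2.10
        - ttl: 300
        - region: us-east-1
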
14:44 heaje joined #salt
14:45 omie888777 joined #salt
14:53 anubhaskar[m] joined #salt
14:54 hackel joined #salt
14:58 dxiri joined #salt
15:10 johnj joined #salt
15:15 tiwula joined #salt
15:17 dxiri_ joined #salt
15:23 yuhl joined #salt
15:23 tracphil joined #salt
15:25 NVX joined #salt
15:26 sp0097 joined #salt
15:28 fracklen joined #salt
15:47 choke joined #salt
15:51 fracklen joined #salt
15:54 ErikaNY joined #salt
15:54 kellyp joined #salt
15:55 fracklen_ joined #salt
15:56 _JZ_ joined #salt
15:58 ErikaNY joined #salt
16:03 threwahway joined #salt
16:03 SkyRocknRoll joined #salt
16:04 edrocks joined #salt
16:05 kellyp joined #salt
16:11 jas02 joined #salt
16:11 johnj joined #salt
16:13 edrocks joined #salt
16:15 bluetex joined #salt
16:15 dxiri joined #salt
16:16 dxiri joined #salt
16:17 kellyp joined #salt
16:17 onlyanegg joined #salt
16:20 pbandark hi.. is it possible to POST json data using http.query? i am using https://paste.fedoraproject.org/paste/krlxGh43wDvvfQRFkRaf8Q but i can see an error on the endpoint, so i suspect something is wrong in the salt command.
16:20 bstevenson joined #salt
16:31 XenophonF what's the error?
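
For pbandark's question, a hedged sketch of POSTing JSON with the http.query state (URL and payload are illustrative; a common culprit is a missing Content-Type header or data that isn't already a JSON string):

    post_to_endpoint:
      http.query:
        - name: https://api.example.com/v1/items
        - method: POST
        - data: '{"name": "demo", "enabled": true}'
        - header_dict:
            Content-Type: application/json
        - status: 200
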
16:35 astronouth7303 what's the 2016.11 workaround to set file permissions on windows minions?
16:35 astronouth7303 for some reason, a file is being created with extremely restrictive permissions
16:35 fracklen joined #salt
16:36 choke joined #salt
16:37 impi joined #salt
16:38 astronouth7303 also, how can i tell file.managed to use a specific line ending?
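
One hedged workaround sketch for the permissions question on 2016.11 (the win_owner/win_perms arguments to file.managed arrived later, in 2017.7 if memory serves): lay the file down and shell out to icacls whenever it changes. Paths and the grantee are illustrative, and this doesn't address the line-ending question.

    'C:\app\config.ini':
      file.managed:
        - source: salt://app/config.ini

    relax_config_acl:
      cmd.run:
        - name: 'icacls C:\app\config.ini /grant Users:(RX)'
        - onchanges:
          - file: 'C:\app\config.ini'
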
16:52 pipps joined #salt
16:55 ntropy joined #salt
16:59 pipps joined #salt
16:59 llua i have this weird problem where i can't connect to my minions from the master via the salt command but my minions can connect to the master via salt-call.
17:00 llua started to happen after updating to 2017.7.1
17:01 ujjain joined #salt
17:01 ujjain joined #salt
17:04 skullone joined #salt
17:04 windblow joined #salt
17:12 johnj_ joined #salt
17:20 threwahway joined #salt
17:22 lordcirth_work llua, are you sure the salt-minion daemon is still running?
17:23 lordcirth_work salt-call doesn't need a minion
17:23 llua it is
17:23 pipps joined #salt
17:24 lordcirth_work llua, any errors in /var/log/salt/minion?
17:25 thekevjames joined #salt
17:26 XenophonF llua: you don't happen to be getting SaltReqTimeoutError exceptions, do you?
17:28 skullone does anyone use salt much for provisioning VPCs and instances?
17:29 zach We use terraform for provisioning; that then bootstraps our salt-master, and salt-minion installation is handled via cloud-init
17:29 skullone im just trying to avoid using another tool :p
17:30 llua XenophonF: when using salt-call -l debug yeah.
17:30 zach indeed, I imagine you could wrap boto and do it that way with salt-cloud...maybe
17:31 RandyT zach, can you expand on what you are doing with cloud-init?
17:31 RandyT I'm currently working on migration to terraform and masterless salt
17:31 llua lordcirth_work: seems to be complaining about the schedule key not being iterable; it is currently null for some reason
17:31 llua deleting it makes the minion responsive
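
In case anyone else hits this after upgrading to 2017.7.1: the minion-managed schedule normally lives in /etc/salt/minion.d/_schedule.conf (assuming the default config layout), and a bare key that parses to null is what trips the iteration error. A sketch of the fix llua describes:

    # /etc/salt/minion.d/_schedule.conf
    # broken form (a bare `schedule:` parses to None, which the minion then tries to iterate):
    #   schedule:
    # fix: delete the key/file, or leave an empty mapping behind:
    schedule: {}
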
17:31 zach RandyT: https://cloudinit.readthedocs.io/en/latest/topics/modules.html#salt-minion
17:32 zach Pretty handy actually, unless you're running something like ubuntu 16.04, then it installs salt-minion 2015.x
17:33 zach So I have it install the official repo, and update salt-minion
17:33 wongster80 joined #salt
17:34 RandyT I see.. probably not needed in my case since I don't actually need to run the salt-minion in masterless mode
17:34 vishvendra can we protect pillar values using different gpg keys ?
17:34 vishvendra is there any concept like ansible vault
17:34 RandyT provisioning salt-minion as part of the terraform deploy
17:34 vishvendra can we use different keys for different data?
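
For vishvendra's question: the closest built-in analogue to ansible-vault is Salt's gpg pillar renderer, where values are PGP-encrypted against the master's GPG keyring and decrypted at render time; whether several distinct keys can be used comes down to what is in that keyring rather than a per-file setting. A minimal sketch (key name is illustrative, ciphertext elided):

    #!yaml|gpg

    db_password: |
      -----BEGIN PGP MESSAGE-----
      ...
      -----END PGP MESSAGE-----
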
17:34 zach RandyT: you could use the built in deploy part of terraform to bootstrap the salt minions
17:35 zach RandyT: using "remote-exec"
17:35 RandyT zach: yes, that is what I am doing.
17:35 RandyT zach: there is a salt-masterless provisioner, but could not get it to work. found the inline remote-exec did the job
17:35 zach RandyT: since we're using master/minion we just use cloudinit to point to the correct salt master, and set proper grains
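
A sketch of the cloud-init approach zach describes, using the salt_minion module from the docs linked above (hostname and grain values are illustrative):

    #cloud-config
    salt_minion:
      conf:
        master: salt.internal.example.com
      grains:
        roles:
          - web
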
17:36 Edgan zach: are you setting /etc/salt/grains.yml style grains?
17:36 RandyT zach: understood
17:37 RandyT zach: currently tearing my hair out trying to figure out how to reattach aws volumes... :-) but outside of scope for this group
17:37 Edgan RandyT: terraform? boto3?
17:37 zach Edgan: unfortunately cloud-init sets them into /etc/salt/minion instead, we do have a bootstrap script written in bash (porting everything to actually run from a single golang binary to do our bootstrap) that injects the grains into /etc/salt/grains
17:38 zach RandyT: feel free to ping me, I'll show you how we do it
17:38 skullone zach: I bundle salt-minion into my AMIs, along with some other things like dynamic DNS scripts that set their hostname, and update internal DNS when they boot up
17:38 RandyT Edgan: terraform
17:38 Edgan zach: what are you setting? I don't use locally stored grains at all
17:38 Edgan skullone: same
17:38 zach Edgan: "roles"
17:38 Edgan zach
17:38 skullone i just set an A record on the internal domain, "salt.domain.com", so cloud-init doesn't have to work hard :p
17:39 nomeed joined #salt
17:39 Edgan zach: Roles are part of the hostname, and the hostname is turned into grains via custom grains, python code.
17:39 skullone but, im considering just using Salt to manage VPC, subnets, security groups, etc...
17:39 skullone terraform is ok, but yet another tool
17:39 zach Edgan: correct, but we're actually setting a grain called "roles" and utilizing that
17:39 Edgan skullone: It can, but won't do it well till salt-cloud gets rewritten.
17:39 nomeed joined #salt
17:40 zach Edgan: especially since we have to do some really stupid things to set minion_id in AWS with ASG
17:40 Edgan zach: But there is no need to define them if they are part of the hostname; you just do them dynamically in code
17:40 zach If there are better ways of doing it, I'm all ears
17:40 skullone zach: yah, i struggle with that too - i like hostnames to reflect what they do, and not just assign roles off tags or whatever
17:40 zach Edgan: unfortunately they may not always be part of the hostname
17:41 zach We have multi roles on a few of our hosts, the hostname doesn't always reflect what the machine does
17:41 Edgan zach: They should be, and enforced.
17:41 Edgan zach: A role can be a tag for a group of roles
17:42 nomeed joined #salt
17:42 zach Right, in the perfect world that's how we'd do it
17:42 Edgan zach: But what is stopping you?
17:42 skullone so what are salt-clouds limitations? what about the boto3 capabilities in salt proper?
17:43 Edgan skullone: salt-cloud currently only does instances/VMs
17:43 Edgan skullone: boto/boto3 states are incomplete
17:43 Trauma joined #salt
17:43 Edgan skullone: There is a team working on making boto/boto3 more complete and making that work inside salt-cloud
17:43 skullone ah interesting
17:44 zach Edgan: resources and time unfortunately
17:44 Edgan skullone: In the interim I find it far easier to write my own boto3 code
17:44 Edgan skullone: I know someone who was/is writing their own more complete boto3 states, but in isolation.
17:45 cyborg-one joined #salt
17:45 zach The biggest issue I have is autoscale groups
17:46 zach So each minion_id has to be set with a bootstrap script or we end up with identical names
17:47 zach we've just started prepending the last 6 digits of instance_id to the minion_id, that seems to fix the collisions
17:48 Edgan zach: yes, for auto scaling, instance id not 01, 02
17:48 Edgan zach: everyone does that
17:48 Trauma joined #salt
17:48 zach Up until this summer, I haven't used salt since ~0.6.7
17:49 zach 0.9.7*
17:49 zach lots of things have changed
17:49 Edgan zach: If you want to move to next gen, use containers and Kubernetes instead of auto scaling for most things. Then you just have to worry about Kubernetes hosts instead of auto scaling groups. Unless you scale so fast you need to auto scale them too.
17:50 zach lol
17:50 zach I built the worlds largest opensource deployment of openshift at my last gig
17:50 Edgan zach: fun
17:50 zach Our workload does not fit into kubernetes unfortunately
17:50 zach I WISH
17:50 skullone i find a lot of workloads dont fit into containers
17:50 zach indeed
17:51 skullone small kitchy little node apps, sure
17:51 zach We're doing 100TB/day of logs to Elasticsearch
17:51 zach Doesn't quite fit into containers
17:51 Edgan I don't see it quite that way. What do you mean by workload? I think of it more like is it stateless(containers) or is it stateful(databases).
17:51 zach ASG's are being used not to scale, but to enforce cluster size
17:51 skullone yah, ES, bigger java apps, yah, you're running em with 64GB RAM per iteration or more, with a dozen cores, not a fit for a container
17:51 zach yup
17:52 zach It can be done, but it's messy
17:52 skullone i generally only talk about kubernetes with smaller apps, that operate on small datasets only queue fed
17:53 zach k8s is great, we use it for half of the environment (front ends), but our backends are baremetal/vm
17:53 zach The bare metal/VM deployments are what we use salt for
17:54 Edgan zach: In my world the devs were pushing for literally everything to run in Kubernetes, and won. Which is why I am looking for a new job.
17:54 zach Edgan: indeed, that's why I switched to a new company ;-)
17:54 skullone hehe
17:54 zach Edgan: built the infrastructure, and was laid off
17:55 Edgan zach: nod, it seems to be a pattern
17:55 zach was part of a mass layoff of about 85 people
17:55 skullone kube fits nicely into the "sprint" mindset... devs wash their hands and move to the next sprint, and whatever they did in the last one is someone else's problem
17:55 Edgan zach: one way or another
17:55 Edgan skullone: yes!
17:55 zach of which...I was pretty certain I was safe since it was myself and one other guy who did the architecture design
17:56 zach Don't need to retain two architects, that's madness!
17:56 skullone "this new app requires new version of kubernetes, zuul, calico, so, build a new environment for this new sprint"
17:56 skullone "what about the old one?" ... "that was last month, who cares"
17:56 Edgan zach: where do you work now?
17:56 zach secrets
17:56 Edgan skullone: yep
17:56 zach went from publishing/media to video games though
17:57 skullone i struggle with AWS a bit too... with that dev/sprint mindset, AWS sprawl happens, management freaks out about costs, blames ops, lays them off
17:57 zach Edgan: if you're in SLC for SaltConf I'll buy you dinner & drinks in exchange for some consulting ;-)
17:58 zach skullone: oh yeah, our entire reason for containerizing our workloads was to move to AWS to "save costs". Funny enough it's actually cheaper to run the datacenters with the hardware we already paid for and keep staff than it is to use AWS
17:59 skullone i can buy 480TB of storage for about $45k, a rack full of dense fat twins or C-Series with hundreds of cores and all SSD, 10/40/100Gb host/core connectivity
17:59 skullone for peanuts compared to a year of AWS
17:59 zach The 3 years I worked for ${company} ... we shut down...5 datacenters?
18:00 zach I take that back, 4 years
18:00 nova joined #salt
18:00 skullone i don't like datacenter sprawl either.. all these models fail if management and devs don't exercise restraint
18:01 threwahway_ joined #salt
18:02 threwahway_ joined #salt
18:03 threwahway joined #salt
18:04 skullone i worked for an archiving company, who wanted to move to AWS
18:04 skullone so, they stopped purchasing all hardware, thinking it'd be a few months to copy 20PB to AWS
18:04 XenophonF wow
18:05 skullone consequently, the whole ops team quit, they ran out of storage, almost went out of business, ended up with data loss a few times
18:08 Eugene I am frequently un-amazed at how many Business decisions don't bother to involve numbers or graphs. Six months later you get stories like these ^
18:10 skullone well, management and execs/VPs nowadays consider themselves gentry or elite, and everyone else peons who should just "deal with it"
18:11 Edgan zach: I wish I was. If this year had worked out better I would be.
18:13 johnj_ joined #salt
18:19 netcho joined #salt
18:21 edrocks joined #salt
18:31 Antiarc joined #salt
18:39 astronouth7303 skullone: you can do that now, if you pony up for a container
18:40 astronouth7303 (and i don't mean a docker-flavored one)
18:41 Edgan astronouth7303: do what?
18:41 astronouth7303 move 20PB in a few months
18:41 Edgan astronouth7303: That requires a snowball or something. You wouldn't want to pay for 20PB of bandwidth.
18:42 astronouth7303 i was thinking https://aws.amazon.com/snowmobile/
18:44 Edgan astronouth7303: yes, but even that has logistics problems. What if the 20PB is constantly changing, but it takes AWS six months to give you access to any of it?
18:45 astronouth7303 i thought it was more like 2mo from when the truck left amazon
18:45 Antiarc joined #salt
18:45 Edgan astronouth7303: still even with 2 months, that is a long time
18:46 astronouth7303 i wouldn't think an archival company would be writing much of the 20PB (or, any company, really)
18:51 Edgan astronouth7303: Depends on how big each "archive" is. If it is many big ones, yeah, they wouldn't change much. But if it is millions of customers with a few gigabytes, more like DropBox or Google Drive, it is constant churn.
18:52 Bryson joined #salt
18:52 astronouth7303 at 10% daily changed, 2PB is a frightening amount of ingress
18:53 skullone yah, changerate was high
18:54 skullone prob 50TB/day
18:54 skullone and the company was all on windows file servers
18:54 skullone a hundred windows boxes, to dell iscsi targets
18:54 skullone was total trash
18:55 aldevar joined #salt
18:55 Edgan skullone: interesting
18:56 astronouth7303 jesus, migrating to S3 is an entire engineering project, the physical migration aside
18:57 skullone yah, they had no in-house devs anymore... a handful of frontend people, the rest was done in Russia
18:57 astronouth7303 hahahahahahahahahahahahahahahahahahha
18:57 astronouth7303 i, too, have to work with russian software
18:58 astronouth7303 it is better than the indian software i've had the pleasure of working with in the past, but still gives me headaches every time i have to work with it.
19:03 bantone joined #salt
19:08 aldevar joined #salt
19:14 johnj_ joined #salt
19:16 gh34 joined #salt
19:17 nixjdm joined #salt
19:20 pipps joined #salt
19:31 kellyp joined #salt
19:32 eightyeight joined #salt
19:54 Trauma joined #salt
19:55 jas02 joined #salt
19:56 Kax joined #salt
19:56 threwahway joined #salt
19:58 user-and-abuser joined #salt
20:01 kellyp joined #salt
20:02 edrocks joined #salt
20:03 sp0097 joined #salt
20:11 thekevjames joined #salt
20:15 pipps joined #salt
20:15 stevendgonzales joined #salt
20:15 johnj_ joined #salt
20:16 Diaoul joined #salt
20:24 kellyp joined #salt
20:30 brd When I try to use: {{ salt['network.ip_addrs6']('cidr=2607::/16')[0] }}  I am getting an error: Jinja variable list object has no element 0
20:30 brd but when I run it on the cli, I get a list
20:31 kellyp joined #salt
20:33 XenophonF brd: syntax error - 'cidr=...' in above should be cidr='...'
20:34 XenophonF the text `cidr` is a keyword argument to the network.ip_addrs() function
20:34 brd XenophonF: so move the single quotes inside?
20:34 XenophonF it takes a string value
20:34 XenophonF the string value is "2607::/16"
20:34 brd XenophonF: oh, I see
20:34 brd yeah
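
Putting XenophonF's fix together, and guarding against an empty result so the template doesn't fail on index 0 (the key name is illustrative, and this assumes network.ip_addrs6 takes cidr the same way network.ip_addrs does, as discussed above):

    {% set addrs = salt['network.ip_addrs6'](cidr='2607::/16') %}
    {% if addrs %}
    ipv6_address: {{ addrs[0] }}
    {% endif %}
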
20:35 jholtom joined #salt
20:50 threwahway joined #salt
21:16 johnj_ joined #salt
21:20 netcho joined #salt
21:33 dankolbrs joined #salt
21:49 pipps joined #salt
21:55 Heartsbane joined #salt
21:55 Heartsbane joined #salt
22:17 johnj_ joined #salt
22:17 toofer joined #salt
22:25 bryan joined #salt
22:30 threwahway joined #salt
22:45 jfelchner joined #salt
22:49 fracklen joined #salt
22:51 onlyanegg joined #salt
22:53 aldevar left #salt
23:01 omie888777 joined #salt
23:06 Hybrid joined #salt
23:12 sp0097 joined #salt
23:18 johnj_ joined #salt
23:23 Hybrid joined #salt
23:23 scbunn joined #salt
23:27 onlyanegg joined #salt
23:29 demize joined #salt
23:40 Whissi joined #salt
23:50 pipps joined #salt
