IRC log for #salt, 2018-01-20


All times shown according to UTC.

Time Nick Message
00:00 saltslackbridge joined #salt
00:06 Aleks3Y joined #salt
00:33 ouemt joined #salt
00:34 hemebond joined #salt
00:54 RandyT joined #salt
01:19 demize joined #salt
01:25 otaria joined #salt
01:27 Trauma joined #salt
01:44 ouemt joined #salt
01:46 stewgoin joined #salt
01:50 snath joined #salt
02:02 vexati0n i tail login logs and do a high state every time 3 different people named Bob log into their workstations within any 6-hour period
02:02 vexati0n works pretty well
02:10 LocaMocha joined #salt
02:11 kettlewell joined #salt
02:15 ouemt_ joined #salt
02:17 nomeed joined #salt
02:28 RandyT joined #salt
02:41 sauth joined #salt
02:57 ilbot3 joined #salt
02:57 Topic for #salt is now Welcome to #salt! <+> Latest Versions: 2016.11.8, 2017.7.2 <+> Support: https://www.saltstack.com/support/ <+> Logs: http://irclog.perlgeek.de/salt/ <+> Paste: https://gist.github.com/ <+> See also: #salt-devel, #salt-offtopic, and https://saltstackcommunity.herokuapp.com (for slack) <+> We are volunteers and may not have immediate answers
02:57 ouemt joined #salt
03:01 pbandark joined #salt
03:26 otaria joined #salt
03:31 ouemt joined #salt
03:51 tiwula joined #salt
04:01 RandyT joined #salt
04:21 uncool joined #salt
04:29 stanchan joined #salt
04:34 ahrs joined #salt
04:38 otaria joined #salt
04:43 ouemt joined #salt
04:44 nethershaw joined #salt
04:45 MTecknology Is there any way to get data from reactor to orchestrate without the use of setting pillar values?
04:48 shoogz joined #salt
05:08 karlthane Is there a saltstack equivalent to chef supermarket or ansible-galaxy?
05:12 hemebond karlthane: What are they?
05:13 karlthane Places where recipes/playbooks can be uploaded and shared. I guess the term in salt is where state files can be shared.
05:14 hemebond There's https://github.com/saltstack-formulas
05:15 socket- joined #salt
05:31 DanyC joined #salt
05:34 saltslackbridge <gtmanfred> Ansible Galaxy and community cookbooks are full of garbage; Salt formulas are the closest thing, and are collaborative
05:35 stanchan joined #salt
05:53 ouemt joined #salt
06:01 stanchan joined #salt
06:39 otaria joined #salt
06:51 ouemt joined #salt
06:52 stanchan joined #salt
07:28 jab416171 joined #salt
07:29 stanchan joined #salt
07:31 ouemt joined #salt
07:55 RandyT joined #salt
08:09 ouemt joined #salt
08:18 Muir joined #salt
08:29 Trauma joined #salt
08:30 ouemt joined #salt
08:40 otaria joined #salt
08:43 ouemt joined #salt
08:52 RandyT_ joined #salt
08:59 Hybrid joined #salt
09:03 aldevar joined #salt
09:19 Hybrid joined #salt
09:24 ouemt joined #salt
09:30 rh10 joined #salt
09:33 sayyid9000 joined #salt
09:34 av_ joined #salt
09:54 ouemt joined #salt
09:56 evle joined #salt
09:57 ipsecguy joined #salt
10:08 RandyT joined #salt
10:11 ouemt joined #salt
10:18 Elsmorian joined #salt
10:23 cliluw joined #salt
10:31 ouemt joined #salt
10:39 RandyT joined #salt
10:41 otaria joined #salt
10:56 Creme joined #salt
11:11 AvengerMoJo joined #salt
11:30 ouemt joined #salt
11:33 zer0def MTecknology: is there anything wrong with passing information from reactor to state by means of pillar? because that seems to be the running theme not only in reactors; i figure you ought to do the same with API calls, too; just be sure to pick a unique enough pillar key
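
A minimal sketch of the pattern zer0def describes, with hypothetical file names and pillar keys: the reactor SLS forwards the event payload into an orchestrate run via pillar, and the orchestrate SLS reads it back out. The `tag` and `data` variables are injected by the reactor; the `json` Jinja filter ships with Salt releases of this era.

    {# /srv/reactor/provision.sls (hypothetical) #}
    invoke_provisioning:
      runner.state.orchestrate:
        - mods: orch.provision
        - pillar:
            event_tag: {{ tag }}
            event_data: {{ data|json }}

    {# /srv/salt/orch/provision.sls (hypothetical): read the payload back out of pillar #}
    {% set event = salt['pillar.get']('event_data', {}) %}
    apply_base_to_new_minion:
      salt.state:
        - tgt: {{ event.get('id', '*') }}  {# 'id' is a made-up payload key #}
        - sls: base
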
11:51 ouemt joined #salt
12:11 ipsecguy_ joined #salt
12:17 ouemt joined #salt
12:21 zulutango joined #salt
12:25 mk-fg joined #salt
12:32 Trauma joined #salt
12:42 otaria joined #salt
12:46 RandyT joined #salt
12:48 ouemt joined #salt
12:51 RandyT joined #salt
13:01 ipsecguy joined #salt
13:01 ouemt joined #salt
13:09 ouemt joined #salt
13:16 gustavobgama joined #salt
13:17 gustavobgama joined #salt
13:18 gustavobgama joined #salt
13:18 stewgoin joined #salt
13:22 XenophonF zer0def: can you do that? reactor scripts run inside the master IIRC
13:35 RandyT joined #salt
13:38 sauth joined #salt
13:42 Creme joined #salt
13:51 ouemt joined #salt
13:55 Creme left #salt
13:59 sayyid9000 joined #salt
14:07 evle2 joined #salt
14:10 wryfi joined #salt
14:13 ouemt joined #salt
14:29 zer0def XenophonF: sure, ref: https://github.com/zer0def/salt-reactive-aws-in-15/blob/master/salt/reactor/sqs.sls https://github.com/zer0def/salt-reactive-aws-in-15/blob/master/salt/aws/reactions/provision_minion.sls
14:33 zer0def for the record, don't use ^, but rather use beacons and react to them instead
14:35 zer0def from my usage, there are very few states that fail because they're running on a master instead of a minion, but it can happen, and usually they have complementary runners
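
As a rough sketch of the beacon-driven approach zer0def recommends (hypothetical paths and reactor SLS name; the schema follows the list-style beacon config introduced in 2017.7, and the inotify beacon needs pyinotify installed on the minion):

    # Minion config, e.g. /etc/salt/minion.d/beacons.conf (hypothetical path):
    beacons:
      inotify:
        - files:
            /etc/important.conf:
              mask:
                - modify
        - disable_during_state_run: True

    # Master config: beacons fire events tagged
    # salt/beacon/<minion_id>/inotify/<path>, which the reactor maps to an SLS:
    reactor:
      - 'salt/beacon/*/inotify//etc/important.conf':
        - /srv/reactor/revert_config.sls
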
14:37 Creme joined #salt
14:43 otaria joined #salt
14:57 RandyT joined #salt
15:02 RandyT joined #salt
15:03 ouemt joined #salt
15:10 Elsmorian joined #salt
15:24 GnuLxUsr joined #salt
15:36 Elsmorian joined #salt
15:45 otaria joined #salt
15:54 XenophonF gotcha
15:55 XenophonF babilen: got TCP transport working alongside 0mq in dev, will deploy in prod assuming Congress passes a budget this weekend so I can work
16:09 Elsmorian joined #salt
16:30 GnuLxUsr joined #salt
16:41 RandyT joined #salt
16:44 GnuLxUsr joined #salt
16:52 RandyT joined #salt
17:01 RandyT_ joined #salt
17:05 _val_ Does salt-key have an option to display hosts only?
17:06 _val_ without the Accepted Keys, Rejected Keys, etc. sections (only the hosts that are accepted)
17:06 RandyT joined #salt
17:07 _val_ ah -l accepted?
17:08 nethershaw joined #salt
17:09 stanchan joined #salt
17:09 kettlewell joined #salt
17:12 kettlewell joined #salt
17:12 Creme _val_: correct
17:14 _val_ Creme: I was doing things like: salt-key | awk '{l[NR] = $0} END {for (i=1; i<=NR-3; i++) print l[i]}'   :)..
17:14 _val_ But salt-key -l accepted | awk 'NR>1' is way nicer.
17:15 _val_ Thanks. I tested -l accepted though.. the question mark there was a bit silly.
17:23 kettlewell joined #salt
17:29 kettlewell joined #salt
17:33 MTecknology zer0def: interacting with AWS always looks exceptionally painful and miserable..
17:34 zer0def MTecknology: well, in the linked case it's letting ASGs do the autoscaling work instead of Salt using beacons to scale up/down; that's why you *SHOULDN'T* do it that way
17:35 kettlewe_ joined #salt
17:36 zer0def but it is a neat show of stateful runs from reactor
17:36 MTecknology indeed, and shows pillar used as you described
17:36 MTecknology It just feels wrong to me because I think of pillar as something specific to the minion, not an arbitrary dict to just stick stuff into.
17:36 zer0def i've used something similar from jenkins to pass in additional info into state.orchestrate called from api
17:37 zer0def well, in the case of a runner, there's technically no notion of a minion (even if the master is a pseudo-minion of sorts to itself)
17:37 MTecknology So far I've either had what I needed or it's been in event.fire_master where I can send up whatever data I want.
17:38 zer0def in case of a runner/reactor
17:39 zer0def there's always this to send events from master to itself, but it probably won't prove useful and cause a lot of noise instead: https://docs.saltstack.com/en/latest/ref/runners/all/salt.runners.event.html
17:39 MTecknology Soon enough, netbox is going to have triggers. When that happens, I want to send create/destroy VM events to my api server so it can turn that into a proper salt event to send to the reactor, so it can run an orchestrate sls.
17:39 zer0def that's also why i use state.orchestrate and delegate ordering+statefulness to states
17:40 MTecknology The only thing it needs to know is the minion id to create/destroy...
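
A sketch of that wiring with made-up tags: anything that can reach the master (here, the hypothetical API server) drops a custom event on the bus with the event.send runner, and the master's reactor config maps the tag to an SLS, which can in turn call runner.state.orchestrate as shown earlier.

    # From a shell on the master (or via the API):
    #   salt-run event.send 'netbox/vm/create' '{"minion_id": "web01"}'
    #
    # Master config (hypothetical tags and paths):
    reactor:
      - 'netbox/vm/create':
        - /srv/reactor/vm_create.sls
      - 'netbox/vm/destroy':
        - /srv/reactor/vm_destroy.sls
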
17:41 zer0def well, in the case of my example, i'd probably use the ec2 grains that were lying around somewhere in salt-contrib
17:41 MTecknology in your case, I assume the machine was created first, and then inventory updated?
17:41 zer0def yes, the machine is created via ASG, which sends an event to SQS, caught by the master, which then runs salt-cloud saltify on that machine
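
For reference, the saltify leg of that flow needs little more than a provider and a profile; a minimal sketch with hypothetical names and paths (the reactor supplies the instance address from the SQS payload when it invokes salt-cloud):

    # /etc/salt/cloud.providers.d/saltify.conf (hypothetical)
    sqs-saltify:
      driver: saltify

    # /etc/salt/cloud.profiles.d/saltify.conf (hypothetical)
    asg-node:
      provider: sqs-saltify
      ssh_username: ec2-user
      key_filename: /etc/salt/pki/cloud/deploy.pem
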
17:42 MTecknology I'm trying to switch that order and make netbox drive everything. Add/remove a machine from inventory, and wait for salt to make it so.
17:42 stanchan joined #salt
17:43 RandyT joined #salt
17:43 zer0def in that case i guess you'd need to salt-cloud on netbox first and monitor whether your minions are available periodically, pruning keys of those which have been absent for too long
17:43 zer0def or pruning keys of those you explicitly call to be absent
17:44 MTecknology why would I need to prune keys?
17:44 MTecknology I'll just remove them when I remove the node because it was removed from netbox :)
17:45 zer0def either way it accomplishes the same goal, i'm just unfamiliar with netbox to have a grasp yet
17:46 MTecknology Once netbox has those triggers, I think I'll be 80% done with this project. I'm planning to present at saltconf next year (assuming I'm healthy by then).
17:46 otaria joined #salt
17:47 MTecknology 80% done with everything except documentation *
17:47 zer0def well, if you don't have triggers yet, couldn't you write a small engine that would periodically probe the API for current state and send events onto the bus?
17:48 MTecknology Yup, but triggers are basically done and they come with guaranteed delivery so it's just a matter of patiently waiting for the PR to be accepted.
17:49 zer0def ah, in that case, yeah, it's probably worth waiting
17:49 MTecknology I can still write/test orchestrate and other stuff while I wait too.
17:50 stanchan joined #salt
17:51 MTecknology I haven't been able to find anyone doing anything like this except for one government that was hoping to someday release some of their magic, so either I'm doing something really smart or something really dumb. Only time will tell.
17:53 zer0def you mean auto-managing machines with salt in cooperation with $provider?
17:54 MTecknology nah, that kinda thing exists, but it's almost always vendor-specific lock-in
17:54 zer0def so basically breaking away from said lock-in?
17:56 MTecknology yup.. 1) make inventory be the authority, not the thing that is hopefully gonna be accurate at some point  2) deploy/destroy machines internally and externally, across different providers, without fuss
17:57 kettlewell joined #salt
17:57 MTecknology unfortunately, the project kinda locks you into salt, but that's really just because salt doesn't have competition when it comes to the event/reactor/orchestrate/cloud framework it provides
17:58 zer0def no competition doing the same? i've been recently forced to learn chef which may come close, but i haven't seen any reactor-like functionality elsewhere
17:58 MTecknology does chef have something like that?
17:59 zer0def it has some bootstrapping crap, but i'm not yet versed enough to answer accurately
18:00 MTecknology From what I remember of chef bootstrapping, it's exceptionally painful and barbaric
18:00 zer0def well, they promote doing the same thing RVM does, which is running a shell script that picks up on the environment it's running in and installs accordingly
18:01 zer0def well, what everyone else does, i suppose
18:01 stanchan joined #salt
18:02 zer0def basically `curl -s <script_source> | bash -s -- <args>`
18:02 MTecknology yuckiness
18:02 zer0def i mean, i guess it works, but it requires you to trust the source
18:02 MTecknology works != works well
18:03 zer0def "works" in the sense of "works well enough for $random to be satisfied"
18:04 MTecknology right.. that sorta touches on a big rant for me... the lazy admin that thinks "if it works, it can't be dumb" is how systems should be managed
18:05 zer0def so basically the claim of "mysql is a competent database"
18:05 Elsmorian joined #salt
18:08 MTecknology heheh.. I can agree, but that's not really what I meant. More like- the admin that kludges together a bash script that kinda sorta works in specific scenarios, vs. the admin that figures out what language is appropriate for the task, learns the basic libs for proper error checking and sanitization, and makes sure it's deployed using config management.
18:10 MTecknology It's kinda the reason the word "devops" bothers me so much. It seems to describe a relatively incompetent admin doing relatively incompetent dev tasks and kludging their way around until something works, and then calling it complete.
18:11 zer0def at least i'm not alone in loathing the term
18:12 zer0def it's a catch-all term for people who just happen to do administration and scripting/coding at the same time
18:12 MTecknology this is my home setup (where I'm building this project)- https://i.imgur.com/qh3hMjn.png
18:13 zer0def looks relatively straight forward, you have all your grouping figured out
18:14 MTecknology prettier pictures if you're interested- https://imgur.com/a/fjdoE
18:14 tiwula joined #salt
18:18 zer0def so… what are you aiming to achieve? because it's fairly broad
18:18 zer0def almost inclined to say "too broad"
18:19 MTecknology how so?
18:19 zer0def but then again, i'm nobody to judge on its use
18:21 zer0def depends on what you're actually doing, but my networking is basically a cheapo router running openwrt with split vlans, appropriate firewall grouping and remote access through ssh
18:21 zer0def most of my virtualization is on a single machine, but i guess i could set up some home lizardfs/ceph, given how i have a backup for the former machine
18:22 zer0def backups are just stashed as encrypted veracrypt volumes on a machine in ovh
18:22 MTecknology ouchies :(
18:23 * MTecknology 's backups -> https://michael.lustfield.net/linux/long-term-secure-backups
18:23 zer0def i might be oversimplifying it, but it suffices for me
18:24 MTecknology I'm doing everything an enterprise should consider critical/essential, in addition to running infra for a few home labs, and my effort is to do the best dang job possible.
18:24 zer0def tl;dr your home stack could rival small to mid-sized company infras
18:24 MTecknology yup :)
18:24 zer0def especially given how infras in those are usually an afterthought
18:25 MTecknology heck.. as far as how it's managed, it'd rival big dawgs too
18:25 zer0def this one depends on the particular dawg
18:26 MTecknology depends.. I'm referring to quality and long-term maintainability.
18:27 zer0def i agree, i just choose to keep things compact, because i can't be arsed to maintain some things for the art of doing so, when i as an end-user and maintainer don't require it
18:27 MTecknology I mean, clearly I don't have multiple ISP's coming in and I don't have a proper SAN solution and my UPS only has two deep cycle marine batteries... but
18:28 zer0def yeah, those would be the first things i'd pick on when we're talking such setups
18:29 MTecknology those things cost lots of money; the two batteries alone were $300 (and just recently hit their replacement mark)
18:30 zer0def yeah, i know, but since you're doing it for hobbyist purposes on your home network to mimic enterprise infras, there's no point in rubbing it in
18:31 zer0def depending on how rural of an area you live in, your second ISP would probably be a failover 3g/4g, which openwrt has the mwan3 package for
18:31 MTecknology It'd be a verizon dongle, and I've considered trying to make that happen.
18:33 MTecknology I've also used that lab to spin up labs to replicate problems at work and troubleshoot them without the fear of wrecking something important, but mostly, it's part of my pursuit for IT perfection.
18:33 zer0def that's more than i'd make use of, anyway
18:34 zer0def oh, most of my HA experiments were just vagrants that i just happened to "accidentally" rip things from
18:34 * MTecknology has no life.
18:34 zer0def vagrant-libvirts, in case you wanted to be judgemental on virtualbox usage ;P
18:34 zer0def i sure as hell am
18:41 MTecknology I've used vagrant some. Enough to know it's a valuable tool, but not enough to have any interest trying to build labs with it.
18:43 Creme left #salt
18:43 zer0def i've developed an appreciation for vagrant and packer, however i despise later Hashicorp tools, mostly because they're all over the damned place
18:45 zer0def this is especially true for terraform and consul
18:45 MTecknology ^ +10
18:45 major joined #salt
18:46 zer0def well, terraform simply lacks constructs to be a viable tool and consul tries doing too many things at once, adding more noise than it's worth
18:47 zer0def heck, even zookeeper is better than consul and i don't value apache projects higher than "mediocre"
18:47 saltslackbridge <randy> consul is the Go version of mnesia that they then decided to add in a bunch of features to
18:48 MTecknology oh?.. mnesia looks interesting
18:48 saltslackbridge <randy> meh, I use it a lot in my elixir/erlang code
18:48 saltslackbridge <randy> and you use it every time you use rabbitmq
18:49 saltslackbridge <randy> it’s alright, it does what I need it to do
18:49 zer0def i personally would compare consul to etcd, zookeeper and couchbase, but mnesia fits the bill just as much
18:50 saltslackbridge <randy> well etcd and zookeeper are the immediate comparison because the feature set is similar
18:50 zer0def well, the "feature set" isn't broad, any highly available atomic KV storage fits the bill
18:50 saltslackbridge <randy> I actually had my Sr. VP recommend I use something more fault tolerant than mnesia for my app
18:50 saltslackbridge <randy> he suggested consul
18:51 zer0def had a trout nearby?
18:51 saltslackbridge <randy> my reply was “while I love consul and use it heavily in other areas, that suggestion really just moved the concern of network partitions to a different system. it doesn’t solve the problem”
18:52 saltslackbridge <randy> I use consul/consul template/vault in another app
18:52 RandyT joined #salt
18:52 saltslackbridge <randy> and it works really well
18:52 saltslackbridge <randy> still have to deal with network partitions though
18:52 saltslackbridge <randy> He didn’t like my partitioning recovery plan
18:53 saltslackbridge <randy> I gave a demo where I said “I handle partitioning by not handling partitioning”
18:54 saltslackbridge <randy> high level overview: effectively my app lives in kubernetes. my app is a quorum per dc. my app can see when a partition is in effect. the minority commits hara-kiri. kubernetes sees this and spins up the requisite pods my deployment strategy specifies. boom. healthy again.
18:55 saltslackbridge <randy> there is a little more to it than that, but basically my app is a distributed network monitoring system that load balances checks via a consistent hash ring. when nodes come up or go down that hash is regenerated and those checks “migrate” to current live nodes.
18:56 saltslackbridge <randy> same thing with my table data in mnesia. my tables live everywhere
18:56 saltslackbridge <randy> so if a node or two dies
18:56 saltslackbridge <randy> I give 0 fucks
18:56 saltslackbridge <randy> just kill it and bring up new nodes/pods
18:56 saltslackbridge <randy> checks and table data will redistribute and I am good
18:56 zer0def wait, isn't consul supposed to handle distribution, then?
18:57 saltslackbridge <randy> distribution of what?
18:57 saltslackbridge <randy> kv data?
18:57 zer0def yup
18:57 saltslackbridge <randy> yes, in _ITS_ quorum and via consul replicate
18:57 zer0def oh.
18:57 saltslackbridge <randy> but there is leader election, source security dc’s, etc
18:57 saltslackbridge <randy> in my app
18:57 saltslackbridge <randy> I don’t have to deal with that
18:58 saltslackbridge <randy> there is no leader election. there is no source of truth. it’s completely, 100% distributed and fault tolerant
18:58 MTecknology see... I'm an admin.. randy is a dev
18:58 saltslackbridge <randy> I’m an engineer
18:58 saltslackbridge <randy> I do a lot of admin work too
18:59 saltslackbridge <randy> hence why when I dev my apps, I think about it from both a dev and ops perspective
18:59 saltslackbridge <randy> weird huh, almost like that should be a thing
18:59 saltslackbridge <randy> I’ll call it devops
19:00 MTecknology heheh...
19:00 zer0def yeah, sadly most development seems to be done without regard for end-user's perspective, be it ops or other consumer ;P
19:01 saltslackbridge <randy> well that’s because most devs don’t “feel their own pain”
19:01 saltslackbridge <randy> they make an app and pass it off to ops
19:01 saltslackbridge <randy> I own my apps
19:01 saltslackbridge <randy> I don’t want to be called for someone else’s mistake
19:01 saltslackbridge <randy> but I _ESPECIALLY_ don’t want to be called for my own mistake
19:02 saltslackbridge <randy> because then I would have to admit that I _MAKE_ mistakes
19:02 saltslackbridge <randy> and I’m just not about that
19:02 nledez joined #salt
19:02 zer0def oh i understand that perfectly, i used to give devs flak for being inconsiderate of regular situations they somehow managed to consider edge cases :D
19:03 * MTecknology doesn't make mistakes
19:03 * MTecknology makes learning opportunities
19:03 zer0def broadly called "happy accidents"
19:05 saltslackbridge <randy> there is no way to build a perfect distributed system
19:05 saltslackbridge <randy> but I have been playing with LDFI
19:05 saltslackbridge <randy> uhm let me see if I can’t find something on it
19:05 saltslackbridge <randy> https://blog.acolyer.org/2015/03/26/lineage-driven-fault-injection/
19:06 saltslackbridge <randy> hell python has a lib for it
19:06 saltslackbridge <randy> https://github.com/KDahlgren/pyldfi
19:06 saltslackbridge <randy> could be useful for salt, idk
19:06 zer0def yeah, those two appeared in the top 3 results
19:07 saltslackbridge <randy> it ain’t perfect for sniffing out the possible flaws in a distributed system
19:07 saltslackbridge <randy> but it’s something
19:10 saltslackbridge <randy> my life for the last several months has been all about distributed system architecture and design
19:10 saltslackbridge <randy> distributed systems are…hard
19:10 saltslackbridge <randy> like really, really hard
19:12 zer0def definitely a worthwhile read for the weekend, thanks
19:13 saltslackbridge <randy> anyway, time for me to crawl back into the dark recesses of erlang documentation
19:19 Elsmorian joined #salt
19:24 Elsmorian joined #salt
19:27 tiwula joined #salt
19:32 MTecknology that sounds like a dark and painful place
19:36 major joined #salt
19:36 RandyT joined #salt
19:39 riftman joined #salt
19:46 Hybrid joined #salt
19:48 otaria joined #salt
20:01 zer0def sounds like a place of comfort
20:12 RandyT joined #salt
20:52 zer0def anyone able to tell me whether 2016.11 still takes in pull requests?
20:53 zer0def saw some commits merged from yesterday, but not sure whether i ought to PR to it or 2017.7
20:53 mk-fg joined #salt
20:55 saltslackbridge <gtmanfred> if it is a bug fix for 2016.11, then yeah, until 2016.11.9 is out; after that it will be CVE-only
20:58 zer0def oh, so i'm in luck
21:15 rojem joined #salt
21:17 Elsmorian joined #salt
21:26 saltslackbridge <gtmanfred> well
21:26 saltslackbridge <gtmanfred> we already forked the 2016.11.9 branch
21:26 saltslackbridge <gtmanfred> so unless it’s decided that it has to be in that release, it won’t be added for this release
21:26 saltslackbridge <gtmanfred> same for 2017.7.3, that has been forked
21:31 magnus1 joined #salt
21:42 zer0def well, it would be great to have that `git.detached` fix for 2016.11.9
21:42 zer0def that typo in `boto3_route53`, hopefully, can wait
21:46 major joined #salt
21:48 otaria joined #salt
22:05 cb joined #salt
22:05 SMuZZ joined #salt
22:06 davedash joined #salt
22:06 nickadam joined #salt
22:06 futuredale joined #salt
22:06 ToeSnacks joined #salt
22:06 simonmcc joined #salt
22:06 niluje joined #salt
22:06 linovia joined #salt
22:06 SteamWells joined #salt
22:06 frdy joined #salt
22:06 Antiarc joined #salt
22:06 doriftoshoes___ joined #salt
22:06 petems joined #salt
22:06 m0nky joined #salt
22:07 tcolvin joined #salt
22:07 tiwula joined #salt
22:08 toofoo[m] joined #salt
22:09 Tenyun[m] joined #salt
22:09 marwel joined #salt
22:09 hackel joined #salt
22:09 freelock joined #salt
22:11 Hybrid joined #salt
22:12 mk-fg joined #salt
22:13 adongy joined #salt
22:19 major joined #salt
22:23 pppingme joined #salt
22:26 RandyT joined #salt
22:31 theanalyst joined #salt
22:41 olipovch joined #salt
22:46 mk-fg joined #salt
22:51 sbroderick joined #salt
22:59 poige joined #salt
22:59 simondodsley joined #salt
23:01 esharpmajor joined #salt
23:01 leev joined #salt
23:02 toofoo[m] joined #salt
23:02 ___[0_0]___ joined #salt
23:02 fl3sh joined #salt
23:03 tongpu joined #salt
23:03 nixjdm joined #salt
23:03 chutzpah joined #salt
23:03 tom29739 joined #salt
23:03 kshlm joined #salt
23:03 benjiale[m] joined #salt
23:03 viq[m] joined #salt
23:04 glock69[m] joined #salt
23:04 rtr63gdh[m] joined #salt
23:04 dmaphy joined #salt
23:05 Processus42 joined #salt
23:05 RandyT joined #salt
23:05 gomerus[m]1 joined #salt
23:05 kbaikov[m] joined #salt
23:06 freelock joined #salt
23:06 nledez joined #salt
23:10 mk-fg joined #salt
23:13 s0undt3ch joined #salt
23:14 sjorge joined #salt
23:29 zulutango joined #salt
23:49 otaria joined #salt
