
IRC log for #opentreeoflife, 2015-09-19


All times shown according to UTC.

Time Nick Message
01:48 ilbot3 joined #opentreeoflife
01:48 Topic for #opentreeoflife is now Open Tree Of Life | opentreeoflife.org | github.com/opentreeoflife | http://irclog.perlgeek.de/opentreeoflife/today
07:52 jimallman joined #opentreeoflife
11:37 guest|86110 joined #opentreeoflife
15:39 guest|68196 joined #opentreeoflife
15:47 guest|15818 joined #opentreeoflife
15:48 guest|64719 joined #opentreeoflife
15:57 kcranstn joined #opentreeoflife
15:59 kcranstn Noting that most opentree chat has moved over to Slack for the better UI, but here today
16:00 mtholder joined #opentreeoflife
16:00 kcranstn and trying to address the Hug of Death from being on the frontpage of reddit...
16:00 guest|7329 joined #opentreeoflife
16:08 guest|68635 joined #opentreeoflife
16:15 guest|64719 joined #opentreeoflife
16:19 jar286 joined #opentreeoflife
16:25 kcranstn joined #opentreeoflife
16:33 guest|89861 joined #opentreeoflife
16:37 guest|38488 joined #opentreeoflife
16:37 guest|38488 I can't see the tree
16:38 guest|21908 joined #opentreeoflife
16:38 guest|38488 can someone send me the link for the tree? I want to see it :(
16:42 guest|13657 joined #opentreeoflife
16:43 Charlie_ joined #opentreeoflife
16:45 kcranstn is this page not loading? https://tree.opentreeoflife.org  ?
16:45 kcranstn (we are working to get things back up to regular speed)
16:48 TunaLobster I got more curious once I saw the python. Just poking at the code
16:49 TunaLobster Y'all got reddit hugged hard
16:50 kcranstn yup…
16:52 kcranstn just testing a fix on dev
16:52 jimallman hi all, we’re moving our fire-drill conversation to here (from Slack)
16:52 guest|7896 joined #opentreeoflife
16:52 mtholder I can see that someone is interested in the most recent common ancestor of tuna and lobster...
16:53 mtholder should  be bilateria
16:54 TunaLobster I'm a mechanical engineer student. No clue about taxonomy.
16:57 jimallman TunaLobster: are you into biomimetics? tunas and lobsters have some very good tricks for locomotion!
16:59 TunaLobster What's the thing in a shell that moves with a big hook? Just looking at how you implemented a python web service.
16:59 jimallman contemplating a hot fix for the stalled home page…
16:59 kcranstn jimallman - ok with you trying to tweak the controller on production
16:59 jimallman and now i can’t tell if TunaLobster is talking about sea life or web tech
17:00 * jimallman cracks knuckles...
17:02 jimallman ok, the fix is in, testing now.
17:02 jimallman so far, very slow to establish secure connection…
17:02 jimallman VERY slow
17:03 mtholder sorry. I think that it needs to be statictop.html
17:03 mtholder if you're doing a hot fix
17:04 mtholder I changed the name in the PR so that the file I moved would not inhibit the git pull
17:04 mtholder ^jimallman
17:05 jimallman indeed, very snappy now
17:05 mtholder you're kidding, I assume...
17:06 guest|45881 joined #opentreeoflife
17:06 jimallman seriously, it was just very quick for me.
17:06 jimallman the current delays all seem to involve establishing SSL connections
17:07 mtholder curl is telling me "Unknown SSL protocol error in connection to tree.opentreeoflife.org:443"
17:07 guest|62046 joined #opentreeoflife
17:09 mtholder jimallman, is all of the logic for moving to SSL in the apache config, or is there some web2py side to it?
17:09 mtholder I'm not sure where to even start looking for that stuff...
17:09 jimallman checking now…
17:10 TunaLobster I see things
17:11 mtholder things are good.
17:14 TunaLobster Have viruses been taxonomized?
17:14 mtholder I do like the "reddit tree-hug of death" comment.
17:15 mtholder TunaLobster: yes, but they aren't in our tree
17:15 mtholder http://www.ictvonline.org/virustaxonomy.asp
17:16 kcranstn joined #opentreeoflife
17:18 TunaLobster Wow. That is some serious work that is moons over my head. Why 2.7.3? Was there something different about that version?
17:20 kcranstn just happened to be the one we settled on as ‘good enough’ to write up the paper ;)
17:21 jimallman mtholder: regarding the bounce to HTTPS, it’s enabled via config and enforced in web2py: https://github.com/OpenTreeOfLife/opentree/issues/409
17:21 mtholder thanks
17:22 jimallman aside from all the rigamarole around configuration, here’s the line that matters: https://github.com/OpenTreeOfLife/opentree/commit/ef45f9100c5339c47c278f23ef7485c9c8e81d2f#diff-6f2dc398ce1dd70a41f4ef4477db798fR67
17:23 mtholder and https://github.com/OpenTreeOfLife/opentree/blob/master/webapp/models/db.py#L80
17:24 jimallman yes, each web2py app handles this independently
17:25 jimallman but once we’re in HTTPS, all internal URLs are scheme-relative, so we should stay there without doing any more work
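The bounce-to-HTTPS decision being described can be sketched like this (a hypothetical helper, not the actual db.py code; web2py itself supports this via `request.requires_https()`, which issues the 303 seen in the wget traces below):

```python
# Sketch of the web2py-style HTTPS bounce: given the incoming scheme,
# decide whether to 303-redirect to the https:// equivalent.
def https_redirect_target(scheme, host, path):
    """Return the https URL to redirect to, or None if already secure."""
    if scheme == "https":
        return None  # already secure; scheme-relative URLs keep us here
    return "https://%s%s" % (host, path)

if __name__ == "__main__":
    print(https_redirect_target("http", "tree.opentreeoflife.org", "/"))
```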
17:25 kcranstn joined #opentreeoflife
17:31 kcranstn joined #opentreeoflife
17:35 kcranstn jar286 - are you working on replication?
17:37 jar286 I was reading the newspaper.  had started looking through the aws site
17:41 mtholder I do think that if we were able to figure out an apache redirect we might make things snappier.
17:42 mtholder right now http://tree... redirects (slowly) to https://tree....
17:42 mtholder which then redirects (slowly) to the static page
17:42 jimallman yes, but only because we’re waiting so long for connections. jar286, i’m looking at some possible apache tweaks here (KeepAlive, MaxClients): https://servercheck.in/blog/3-small-tweaks-make-apache-fly
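The KeepAlive/MaxClients tweaks from that article would look roughly like this in apache config (the values are illustrative guesses, not the production settings):

```apache
# Keep connections alive only briefly so busy workers are freed quickly
# under heavy load; the default KeepAliveTimeout of 15s ties up a worker
# per idle client.
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100
# Raise the cap on simultaneous connections (Debian's default is 150).
MaxClients 256
```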
17:44 jar286 we could probably benefit from http2
17:45 guest|95319 joined #opentreeoflife
17:46 mtholder agreed. but it seems like, if we could reduce the number of connections by a factor of 3 for the top page that might help.
17:46 jar286 caching the phylopics could help a little (or do we do that already?)
17:47 mtholder no, we don't
17:47 mtholder but it takes me a couple of minutes to get the first redirect
17:47 mtholder (and the second)
17:47 mtholder (and then the static page)
17:47 jar286 ok, reducing keepalive seems plausible
17:49 jimallman mtholder: agreed that an immediate redirect from apache would be nice, straight to https://…statictop.html
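An apache-level shortcut of the kind being discussed might look like this (mod_rewrite must be enabled, and the statictop.html location is a guess, not the real path):

```apache
RewriteEngine On
# Send a bare "/" straight to the static landing page in one hop,
# instead of the slow http -> https -> static double redirect:
RewriteRule ^/$ https://tree.opentreeoflife.org/statictop.html [R=302,L]
```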
17:50 jar286 cpu load is not too high right now, well below 50%
17:51 jar286 498577 http requests since 6:25 UTC this morning
17:52 jar286 135M log file
17:52 mtholder I tried https://josephscott.org/archives/2011/10/timing-details-with-curl/ but it didn't help me much.
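The curl-timing approach from that article breaks the delay into phases, which would show whether the time is going into TCP connect, the SSL handshake (`time_appconnect`), or the first byte. A sketch (the actual request needs network access, so it is left commented out):

```shell
# Write a curl timing template; each %{...} variable is a documented
# curl --write-out timer, reported in seconds.
cat > curl-format.txt <<'EOF'
time_namelookup:    %{time_namelookup}\n
time_connect:       %{time_connect}\n
time_appconnect:    %{time_appconnect}\n
time_starttransfer: %{time_starttransfer}\n
time_total:         %{time_total}\n
EOF
# Then, against the live site:
# curl -w @curl-format.txt -o /dev/null -s https://tree.opentreeoflife.org/
```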
17:53 mtholder I don't know how to diagnose the cause of the lag. ssh-ing into the machine is fast
17:55 jimallman we don’t seem to specify MaxClients anywhere. perhaps we’re exceeding the default value and making others wait?  http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
17:55 jar286 it’s conceivable we’re thrashing, but I don’t think so.  we have 8 apache worker processes, each with 463 ‘virt’ (virtual memory size?) and 22m ‘res’ (resident set size). the main process is 553 virt 209 res
17:56 jar286 we’re using defaults for everything
17:56 jar286 the ‘machine’ has 2G RAM, I think
17:57 jar286 the maxclients setting is more important when the server is doing more work than ours is
18:01 jar286 we don’t use AllowOverride.
18:03 mtholder fwiw: i am not seeing big differences in times of completion for the http redirect and the https redirect. so I'm thinking that SSL is not a big part of the problem.
18:09 jimallman it looks like default apache settings (MaxClients, ServerLimit, ThreadsPerChild) are OK for 8 worker processes. if i’m reading the docs right, we should support up to 400 clients with these settings. but the waiting for connections suggests a bottleneck.
18:11 mtholder I just tried a wget to tree.opentreeoflife.org after ssh-ing into the machine itself. Unsurprisingly, this was slow, indicating that it is not some issue like our network communication being throttled by Amazon.
18:12 mtholder (at least assuming that wget from the machine itself does not entail network overhead).
18:12 mtholder perhaps I need to use 127.0.0.1 for that...
18:15 mtholder On tree and devtree, I get:
18:15 mtholder wget http://127.0.0.1
18:15 mtholder --2015-09-19 18:13:07--  http://127.0.0.1/
18:15 mtholder Connecting to 127.0.0.1:80... connected.
18:15 mtholder HTTP request sent, awaiting response... 303 SEE OTHER
18:15 mtholder Location: https://127.0.0.1/ [following]
18:15 mtholder --2015-09-19 18:13:07--  https://127.0.0.1/
18:15 mtholder Connecting to 127.0.0.1:443... connected.
18:15 mtholder The certificate's owner does not match hostname `127.0.0.1'
18:15 mtholder I assume that this is OK
18:15 mtholder but it takes .5 s on devtree and 1min on tree
18:15 mtholder I'm not sure if that tells us that it is apache rather than web2py...
18:16 mtholder Maybe if everyone on the project posts as many cat pictures as possible, we can distract reddit
18:17 kcranstn :)
18:18 jimallman @jar286, i’m looking at WSGIScriptAliasMatch: https://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIScriptAliasMatch
18:18 jimallman this should allow something like
18:18 jimallman WSGIScriptAliasMatch ^/.* /home/opentree/web2py/wsgihandler.py
18:19 jimallman to send all BUT simple ‘^$’ requests to web2py
18:20 blackrim joined #opentreeoflife
18:20 jar286 that seems better than relying on processing order or whatever
18:20 jimallman i’ll give this a try on devtree...
18:21 jar286 as I said, reducing keepalive seems like it might help
18:22 blackrim left #opentreeoflife
18:23 blackrim joined #opentreeoflife
18:23 kcranstn hey blackrim
18:23 kcranstn trying to get a snappier load
18:24 jar286 mtholder, to skip cert checking, do wget --no-check-certificate
18:25 mtholder thanks.
18:25 blackrim hello. saw we were getting a reddit hug
18:25 mtholder but do we agree that this is probably apache, not web2py?
18:25 blackrim number three on there https://www.reddit.com/r/all
18:25 kcranstn yup
18:26 jar286 I would try some more localhost wget timings, e.g. for static
18:26 jar286 if static page loads are slow, it’s definitely an apache problem
18:26 jar286 because they bypass web2py
18:26 blackrim not sure i can help, but will be here if there is anything
18:27 kcranstn thanks
18:35 jar286 doing local non-https GETs on tree is very quick
18:35 jar286 haven’t yet figured out how to do a local https GET
18:37 jar286 I take that back.
18:38 jar286 30 seconds: ssh tree time wget --no-check-certificate http://localhost/static/statistics/synthesis.json
18:38 jar286 so at least 30 seconds of the load delay has nothing to do with web2py.
18:38 mtholder gotta run.
18:38 jar286 so my best explanation is that we are thrashing
18:39 mtholder left #opentreeoflife
18:39 jar286 and we should do one of (a) replication, (b) more RAM, (c) reduce maxclients
18:43 kcranstn votes?
18:43 jar286 (a) is hardest & most expensive, I vote for (b) + (c)
18:44 jar286 we already have a server set up with more ram, will be easy to put webapp on it
18:44 kcranstn aws instance or physical hardware?
18:45 jar286 aws instance
18:45 jar286 it’s currently called api2.opentreeoflife.org and it’s running the back end, but it looks to me like we won’t need back end redundancy
18:46 kcranstn ok
18:48 jar286 the annoying part of setup will be client keys and so on… would be nice to make sure it’s fully working before swapping it in as the production site
18:51 kcranstn agreed
18:53 blackrim1 joined #opentreeoflife
18:53 blackrim1 left #opentreeoflife
18:58 guest|5901 joined #opentreeoflife
19:00 jar286 the thrashing hypothesis still makes no sense to me, since the total resident size is only about 200M, out of 2G available
19:01 jar286 but it’s the only hypothesis I have right now (given 30 seconds to load a static page)
19:02 jimallman related threads suggest we might be limiting clients too harshly. possibly an opportunity to bump up MaxClients (or for threaded versions, ServerLimit and/or ThreadsPerChild)?
19:03 jar286 did you find out the default maxclients? (I can look it up)
19:04 jimallman depends on threaded vs. non-threaded servers, see http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
19:04 jimallman should we consider bumping up to 16 processes? (i gather we’re currently using 8, right?)
19:05 jar286 I don’t think we’re threaded
19:05 jar286 that’s a good idea
19:08 jar286 looking at apache2.conf
19:09 guest|52556 joined #opentreeoflife
19:12 jar286 I don’t see an 8 in apache2.conf (expected StartServers to be 8)
19:12 jar286 damned emoji
19:12 jar286 to be 8 )
19:13 jar286 I don’t see any mpm module enabled
19:13 guest|6488 joined #opentreeoflife
19:15 jar286 looking at http://2bits.com/articles/tuning-the-apache-maxclients-parameter.html
19:16 jimallman that page gives me an idea. i’ll look and see if a client limit might be imposed in web2py..
19:16 jar286 if we’re doing prefork, which is likely, then our maxclients is currently 150
19:17 jar286 I wouldn’t bother - we know we have a 30 second delay even when web2py isn’t involved
19:18 jar286 the 2bits article, like the other one, says the site grinds to a halt when MaxClients is too *high* …
19:20 jimallman or too low, since “excessive” users will pointlessly wait for a connection, right?
19:20 jimallman i gather this should be tuned to available RAM
19:21 jimallman (the minimum number of clients, that is)
19:21 guest|6488 We can probably use small instances for tree... right? (This is mtholder on a phone)
19:22 jimallman one would think tree doesn’t need much of a server, but we’re hitting some kind of wall here
19:22 guest|6488 In which case we could do a bit of replication and experimentation
19:22 jar286 that’s what we’re trying to figure out: whether we’re memory limited, or just need to tune config parameters
19:23 jar286 I’m hesitant to invest in replication until I understand why we have a 30 second delay for static pages, without cpu saturation
19:24 jimallman possible bottlenecks in redis or memcache?
19:24 jimallman (or are these only used within web2py?)
19:25 jar286 no, right now I’m looking at static http page loads run locally
19:25 jar286 so it’s not the network and not anything that’s the least bit dynamic
19:25 guest|6488 The only use of those that I know of are on api in the phylesystem api
19:25 jimallman gotcha
19:25 jar286 (assuming /static/ is bypassing web2py… which it’s supposed to, according to the configuration)
19:29 guest|6488 joined #opentreeoflife
19:29 jar286 gotta take a break, back in a bit
19:36 guest|62363 joined #opentreeoflife
19:47 jar286 if our maxclients is 150, and the formula in the article suggests it should be 86, maybe we should decrease it
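The article's sizing rule of thumb can be checked with the numbers reported by `top` above (the reserved-RAM figure is an assumption; the 22 MB per-worker RSS is from the log):

```python
# Rule-of-thumb MaxClients sizing: divide the RAM left over for apache
# by the resident size of one apache child process.
def max_clients(total_ram_mb, reserved_mb, apache_child_rss_mb):
    return (total_ram_mb - reserved_mb) // apache_child_rss_mb

# 2 GB instance, ~150 MB assumed reserved for the OS and other daemons,
# ~22 MB resident per apache worker as seen in top:
print(max_clients(2048, 150, 22))  # ~86, matching the article's formula
```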
19:50 mtholder joined #opentreeoflife
19:50 jar286 (but I still don’t get the reasoning… and ‘top’ says we still have memory free)
19:51 jar286 if we have plenty of cpu, and plenty of ram, that suggests we’re hitting a limit imposed by apache
19:53 jar286 that limit would be maxclients, yes? … so increase maxclients to get more parallelism?  and yes maybe increase maxservers as well.
19:58 guest|5822 joined #opentreeoflife
20:04 guest|17891 joined #opentreeoflife
20:05 kcranstn are those simple hotfixes?
20:08 guest|32251 joined #opentreeoflife
20:12 guest|8884 joined #opentreeoflife
20:12 jar286 simple fixes. would require an apache restart, which means blowing away curator sessions (not that I expect there are any)
20:19 * jimallman is still banging away on devtree, about to give up on a root-only redirect (before wsgi grabs the request)
20:19 jimallman this *should* work, but PCRE doesn’t seem to behave the same here as with RewriteRule
20:20 guest|90387 joined #opentreeoflife
20:23 guest|53485 joined #opentreeoflife
20:25 * jimallman is resetting devtree (back to working apache configuration)
20:25 jar286 is it possible web2py is grabbing our /static/ requests?
20:28 jar286 I mean, we’re following the web2py deployment recipe to the letter, but I’m still trying to imagine what could be going wrong
20:29 jar286 jimallman, did you put the / rule before or after the wsgi rule?
20:29 guest|89199 joined #opentreeoflife
20:30 jimallman i tried excluding simple root (‘’ or ‘/‘) from the wsgi rule, using WSGIScriptAliasMatch
20:30 jimallman https://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIScriptAliasMatch
20:31 jimallman but i could not find a regex that would not match all or none, despite testing with PCRE tools like this one: http://martinmelin.se/rewrite-rule-tester/
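For what it's worth, mod_wsgi's documentation says that Alias directives are given precedence over WSGIScriptAlias, so the exclusion can be expressed without fighting WSGIScriptAliasMatch's prefix semantics. A sketch (the wsgihandler.py path is from the log; the static paths are guesses):

```apache
# Peel off static files and the bare root before web2py sees anything;
# mod_wsgi yields to Alias/AliasMatch mappings.
Alias /static/ /home/opentree/web2py/applications/opentree/static/
AliasMatch ^/$ /var/www/statictop.html
# Everything else still goes to web2py:
WSGIScriptAlias / /home/opentree/web2py/wsgihandler.py
```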
20:31 jar286 but maybe it matters whether this comes before vs. after the WSGIAlias directive
20:31 jimallman i beefed up the logging, enough to see which URLs would trigger the WSGI handler, but the results don’t match what the tester shows
20:32 jar286 oh I see the problem I think. those directives will *always* treat the first thing as a prefix, not as a whole url
20:32 jimallman i can see some URL rewriting going on (‘/‘ is changed to ‘/index.html’ and then is tested by WSGIAliasMatch)
20:32 jimallman the *Match should be smarter, not assume a prefix
20:33 jimallman but it seems quirky. as i said, it doesn’t seem to follow the same rules as RewriteRule
20:33 jar286 the thing to compare it to would be Alias or ScriptAlias
20:33 jimallman yep, i was using those as a guide, but no luck
20:34 jimallman (i can send my various attempts if you like)
20:34 jar286 no thanks
20:34 jar286 I think if we can’t fix static page loads nothing else is going to make much difference
20:34 jimallman agreed.
20:35 jimallman i’m moving on to other tests (initial load of local comments, with a different “no-op” filter)
20:35 jimallman to see if that’s also slowing down the initial page… once again, you’re on the real pain point, which is connection delays
20:36 jimallman glad to help with that, if you want another pair of eyes
20:37 jar286 if I could go over the reasoning with you, that might help
20:41 jar286 but today’s a gorgeous day and I haven’t been outside yet…
20:42 jar286 jimallman, in retrospect, it looks like the tree browser & curator app should have been on separate servers
20:42 jimallman perhaps.
20:44 guest|65219 joined #opentreeoflife
20:45 kcranstn joined #opentreeoflife
20:46 jimallman just to confirm, connection delays are the lion’s share of the problem. loading the “extra” comments on the home page is a negligible hit vs. loading none at all.
20:48 jar286 right
20:49 jar286 although any delays are going to tie up resources and cause other requests to be delayed
20:49 * jimallman nods
20:50 jar286 ‘apache2ctl status’ requires lynx to be installed
20:50 jar286 I don’t like installing software on  a server currently in heavy use… but it could yield useful information
20:55 Slick joined #opentreeoflife
20:55 jar286 ha!     150 requests currently being processed, 0 idle workers
20:56 jar286 so  MaxClients is maxed out
20:57 mtholder If we decide to go the replication route, do we know how to do that?
20:57 mtholder (by which I mean "I don't")
20:58 jar286 I haven’t done it before, but the non-aws approach is to use the load balancer feature of apache, which is well documented and not too hairy
20:58 jar286 it involves magic bal: URIs and redirects
20:59 jar286 but mtholder, I’m thinking now that I don’t see why we need more capacity, given that we’re at something like 20% CPU and 80% RAM (i.e. not maxed out with RAM)
21:00 jimallman jar286: i noticed that lynx requirement, very annoying
21:00 jar286 I installed lynx, and it works
21:01 jimallman great!
21:01 kcranstn it = ?
21:01 jar286 you just have to say ‘export APACHE_LYNX=lynx’
21:01 jar286 it = lynx, and therefore the apache status report interface
21:01 kcranstn ok, cool
21:03 jar286 the phylopic proxy is seeing a lot of action   (the apache status lists all requests & their state)
21:03 mtholder I'm not saying that we couldn't fix it if we knew how to configure apache, but given that we don't seem to know...
21:04 jar286 I think I know… I’m guessing it will help to increase MaxClients, since we’re not at maximum cpu or ram
21:04 jimallman to quickly check apache status from local system: $ ssh admin@ot14 'export APACHE_LYNX=lynx; sudo apache2ctl status'
21:06 kcranstn this is all pretty frustrating
21:07 jar286 it is just what I expected. not clear to me that being able to support current demands is either possible or necessary
21:08 kcranstn I didn’t expect this (perhaps I was naive?)
21:08 kcranstn but certainly possible to deal with the (not really that great) load we got today
21:08 jar286 I didn’t expect the front end to be the bottleneck
21:09 jimallman without heavy load testing (sadly not my forte), it’s hard to anticipate how all the gears will mesh
21:11 jar286 any objections to my changing MaxClients from 150 to 300 and restarting apache? (losing curator sessions) - then afterwards measuring cpu and ram load
21:11 mtholder I doubt that anyone is curating anything
21:11 jar286 I doubt it too
21:12 jar286 oh, also measuring static page load latency as I did before
21:12 jar286 I want to get some information out of this experiment, even if it fails
21:12 kcranstn yes, let’s try it
21:12 jar286 ok… proceeding… this does not involve any github or deployment system operations
21:14 jimallman i see where MaxClients is set to 150 for all versions(?) of apache, in ot14:/etc/apache2/apache2.conf
21:14 mtholder gotta run for a bit...
21:14 jar286 yes, already there
21:14 jar286 I’m assuming we have prefork
21:14 jar286 since it’s the default and we have no other mpm module configured
21:15 jar286 how about if I also raise MaxSpareServers from 10 to 20?
21:15 kcranstn I don’t know what that does
21:16 jar286 I think it’s the upper limit on number of worker processes to create
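For reference, Debian's stock prefork stanza (the MPM jar286 is assuming here) looks like this; note that MaxSpareServers only caps *idle* children, while MaxClients bounds the total number of simultaneous workers:

```apache
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10   # max *idle* children kept around
    MaxClients          150   # hard cap on simultaneous workers
    MaxRequestsPerChild   0
</IfModule>
```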
21:16 jar286 restarting now…
21:17 jimallman worth a shot. i’ll look for a definitive answer on which MPM (prefork, etc) we’re using
21:17 jar286 hmm, that didn’t do it.  still at 150
21:18 jimallman see $ /usr/sbin/apache2 -l
21:18 jimallman Compiled in modules:
21:18 jimallman core.c
21:18 jimallman mod_log_config.c
21:18 jimallman mod_logio.c
21:18 jimallman mod_version.c
21:18 jimallman worker.c
21:18 jimallman http_core.c
21:18 jimallman mod_so.c
21:18 jimallman i think we might be running worker by default
21:19 jimallman jar286: confirmed, see the more friendly output of ‘/usr/sbin/apache2 -V’
21:20 jar286 thanks
21:21 jar286 well now I’m really puzzled as to why ps always shows 8 www-data processes
21:22 jar286 I did apache2ctl graceful before, I don’t think that’s going to do it, will need apache2ctl restart
21:24 jar286 (reading over comments for worker MPM config)
21:24 kcranstn I need to run to the grocery store before they close
21:24 jar286 ok
21:26 jar286 well I don’t really understand the worker config settings, so I’m going to just change MaxClients and see what that does
21:27 jar286 here goes restart
21:27 jar286 long pause…
21:28 jar286 it may be busy trying to cleanly shut down the server
21:28 jimallman i posted a link (apache docs for MaxClients) that talks about how to config for worker, etc.
21:28 jimallman http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
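Since `apache2 -V` reported the worker MPM, the relevant math is that the client cap is processes × threads: MaxClients must not exceed ServerLimit × ThreadsPerChild. Debian's 2.2 worker defaults, for comparison:

```apache
<IfModule mpm_worker_module>
    StartServers          2
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadLimit          64
    ThreadsPerChild      25
    MaxClients          150   # 6 child processes x 25 threads
    MaxRequestsPerChild   0
</IfModule>
# Going to MaxClients 300 would also need ServerLimit 12 (12 x 25 = 300).
```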
21:30 jar286 not getting any response from apache2ctl status
21:31 jar286 requests are certainly being processed
21:31 jimallman my one-liner above works for me:
21:31 jimallman ssh admin@ot14 'export APACHE_LYNX=lynx; sudo apache2ctl status'
21:32 jar286 oh.  lynx is not behaving nicely for me, but I typed space and can now see status
21:32 jimallman “300 requests currently being processed, 0 idle workers”
21:33 jar286 CPU & RAM unchanged - still plenty of capacity
21:34 jar286 latency now 52 seconds !
21:34 jar286 that suggests *decreasing* MaxClients
21:35 jimallman where are you seeing latency?
21:36 jar286 ssh tree time wget -O /tmp/wgettest http://localhost/static/statistics/synthesis.json
21:36 jimallman i’m seeing improvement (vs. 150-request) as follows, was .14 requests/sec - 1315 B/second …  now 12 requests/sec - 119.1 kB/second
21:36 jimallman though that might depend on the recent requests in each case..?
21:37 jar286 also suggests that increasing RAM by 2x, or adding redundant servers, will not make a difference, since the demand is enormous
21:37 jar286 the only approach would be something like AWS elastic
21:37 jar286 and I suspect that’s beyond our budget
21:39 jimallman or we refactor for fewer requests (bundle JS and other assets) to reduce the number of requests, etc. (though these should be one-time GETs for each user, then cached)
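Browser caching would also cut repeat requests for assets like the proxied phylopics. One hedged sketch, assuming mod_expires is enabled (it may not be on this server):

```apache
<IfModule mod_expires.c>
    # Let clients cache static assets so each visitor only fetches
    # them once, instead of re-requesting on every page load.
    ExpiresActive On
    ExpiresByType image/png               "access plus 7 days"
    ExpiresByType text/css                "access plus 1 day"
    ExpiresByType application/javascript  "access plus 1 day"
</IfModule>
```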
21:40 jar286 ok, now I’m going to go for a walk in the fading light.  thanks for your help, this has been interesting
21:42 jar286 I checked latency a 2nd time by the way - 1 minute 36 seconds
21:43 jar286 once someone gets a connection, they can do many GETs without the same delay.  connection lost after being idle 5 seconds
21:44 jar286 gotta get away from this.
21:48 jimallman same here, we have a dinner date.
21:48 jimallman but i’ll be here off an on.
21:50 kcranstn joined #opentreeoflife
21:50 mtholder joined #opentreeoflife
21:53 guest|64310 joined #opentreeoflife
21:57 guest|57528 joined #opentreeoflife
22:53 mtholder joined #opentreeoflife
23:03 kcranstn joined #opentreeoflife
23:11 guest|88470 joined #opentreeoflife
23:24 guest|70154 joined #opentreeoflife
23:32 mtholder joined #opentreeoflife
23:41 kcranstn joined #opentreeoflife
23:44 kcranstn ok, so we’re still hosed...
23:45 kcranstn nope, just really slow still
