The web in a box - a next generation web framework for the Perl programming language

IRC log for #mojo, 2017-07-04


All times shown according to UTC.

Time Nick Message
00:57 marty joined #mojo
01:18 aborazmeh joined #mojo
01:48 ilbot2 joined #mojo
01:48 Topic for #mojo is now 🍩 nom nom | http://mojolicious.org | http://irclog.mojolicious.org | http://code-of-conduct.mojolicious.org
02:18 noganex_ joined #mojo
02:58 marty joined #mojo
03:55 tchaves joined #mojo
05:11 inokenty-w joined #mojo
05:39 genio joined #mojo
06:07 CHYC joined #mojo
06:12 meredith joined #mojo
06:55 prg joined #mojo
06:57 AndrewIsh joined #mojo
06:58 dod joined #mojo
07:04 marty joined #mojo
07:05 dod joined #mojo
07:24 zen win 9
07:27 coolo nothing new there
07:48 trone joined #mojo
08:08 marcus good morning #mojo
08:14 bjakubski joined #mojo
08:15 leont_ joined #mojo
08:29 rshadow joined #mojo
08:32 CandyAngel Morning marcus
08:49 sri ok, i think i know what the graceful worker shutdown problem at work is, but it's not a bug
08:49 sri it's just that our request takes longer than the graceful timeout
08:51 sri dunno, perhaps 20 seconds is a bit low for the default?
08:52 pink_mist perhaps change it to a minute?
08:54 sri and i guess the log message "Stopping worker $pid" is a bit too inconspicuous
08:54 sri perhaps "Stopping worker $pid immediately" would be better
08:58 sri and i've got another feature request from work... which is to allow new workers to be spawned while the old one is still in graceful shutdown
09:05 sri marcus, batman, jberger: thoughts?
09:05 pink_mist 0_o that sounds ... odd
09:06 marcus I don't think our requests take longer than 20 seconds, so I guess our problem is somewhere else.
09:07 marcus sri: Re feature request, don't we already start new workers before we start shutting down the old ones?
09:07 sri marcus: nope, we start them only after the old one is dead
09:07 marcus sri: Or do you mean that shutdown takes so long that the workers on the new parent have to be replaced?
09:07 marcus sri: How does that even work?
09:07 marcus Won't we be without any workers for the graceful period if that's the case?
09:08 sri https://github.com/kraih/mojo/blob/master/lib/Mojo/Server/Prefork.pm#L93-L94
09:08 sri marcus: we might be if all workers go into a graceful shutdown
09:09 sri and that's what happens at work, since we have strict memory limits on workers, and they shut down gracefully a lot
09:09 marcus sri: Ah right, so you are talking about spawning additional workers to replace spent ones, not the initial ones.
09:09 sri it's actually pretty easy to change
09:09 sri marcus: right
09:09 marcus that makes some sense I guess.
09:10 marcus If you have slow requests and strict memory limits
09:10 sri can get dangerous though if things spiral out of control
09:10 sri like the new workers also reaching the limit and new workers getting started again... and so on
09:11 marcus you're worried about piling up dead workers?
09:11 marcus :)
09:11 sri "spent workers" as you called them
09:11 marcus mm
09:12 marcus I think for most cases that shouldn't be a problem
09:12 marcus maybe we could have a max limit for workers in graceful state?
09:13 sri what would the logic be?
09:13 marcus spawn new worker if less than $limit in graceful state and less than $wanted active workers?
09:15 cosimo joined #mojo
09:17 sri you mean non-graceful state
09:18 sri no wait, i'm confused now
09:19 sri i'm having a hard time imagining it... don't think it will work for most people :S
09:22 good_news_everyon joined #mojo
09:22 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQzMh
09:22 good_news_everyon mojo/master 93e314e Sebastian Riedel: increase graceful_timeout from 20 to 60 seconds and make the log message for immediate shutdown a little more threatening
09:22 good_news_everyon left #mojo
09:24 howitdo joined #mojo
09:24 sri sadly we won't really find much inspiration in other web servers... since this is a very app server specific problem
09:30 sri of course, there is the feature where the process pool grows up to a second worker limit when there is demand
09:30 sri and shrinks when things cool down
09:37 kes joined #mojo
09:37 sri we could have a setting like overload=4
09:37 sri and that would mean it can spawn an additional 4 workers if it decides there is a demand for some reason
09:38 sri "Temporarily spawn up to this number of additional workers if there is a need."
09:38 sri something like that
09:38 purl something like that is totally possible
09:38 sri "Temporarily spawn up to this number of additional workers if there is a need, for example if too many workers are shutting down gracefully at the same time."
09:44 karjala_ joined #mojo
09:55 pink_mist sri: makes sense
09:55 pink_mist (more sense now than what you were saying at the start :P)
10:00 good_news_everyon joined #mojo
10:00 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQz97
10:00 good_news_everyon mojo/master 8b02829 Sebastian Riedel: mention timeouts in log messages and make forced worker shutdown a warning
10:00 good_news_everyon left #mojo
10:07 pink_mist jberger: https://www.youtube.com/watch?v=dDH10srLgVc new version of your talk =)
10:07 sri coolo: what's your opinion on overload=4 above?
10:08 good_news_everyon joined #mojo
10:08 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQzHa
10:08 good_news_everyon mojo/master c85b189 Sebastian Riedel: server log messages are not just for debugging
10:08 good_news_everyon left #mojo
10:20 * ashimema thinks he's going mad
10:21 ashimema if I call $c->req->url->base I get a nice base url.. but if I call $c->req->url->host in the same place I get undefined.. am I missing something obvious?
10:22 pink_mist $c->req->url->base->host ?
10:22 ashimema thankfully, as they both return a Mojo::URL I can chain them, so $c->req->url->base->host works nicely.. but I wasn't expecting ->host on its own not to work?
10:22 ashimema indeed
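
For anyone who finds this later: the request URL in Mojolicious is stored relative to its base, so the host lives on the base object (or on the absolute form of the URL) rather than on the request URL itself. A minimal sketch of the difference, with the /whereami route made up for illustration:

    use Mojolicious::Lite;

    get '/whereami' => sub {
      my $c   = shift;
      my $url = $c->req->url;
      $c->render(json => {
        host_on_url  => $url->host,           # usually undef, the URL is relative
        host_on_base => $url->base->host,     # e.g. "localhost"
        host_on_abs  => $url->to_abs->host    # same host, via the absolute URL
      });
    };

    app->start;
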
10:36 pink_mist jberger: I watched the new version too, just because :P
10:46 depesz left #mojo
10:46 coolo sri: is that a new version of some game? :)
10:47 coolo sri: but it would work in our case - at least if paired with proper 'you are in overload, consider raising worker number' in log_warn
10:48 coolo as you might remember, we first blamed apache - because we were left pretty clueless
10:56 sri coolo: i don't know what the condition for that log message would be
10:56 sri you'd get an overloaded worker process every time another one shuts down gracefully
10:57 coolo I mean if you'd add a worker, but don't because you already have 4
10:57 sri but that's no indicator for extra worker demand
10:57 sri but workers cycle regularly
10:58 sri for most this just makes it a little faster
11:01 sri all you could check is if the number of workers is at the maximum of workers+overload
11:03 sri $log->info("Server fully overloaded (14 workers)") if $num_workers >= $max_workers + $overload;
11:03 sri something like that
11:03 sri but is that really useful?
11:03 coolo that would do - I guess
11:04 coolo hmm, well actually you want to know how many workers are actually accepting requests
11:05 sri yea, that's what i'm suspecting, you want real metrics
11:05 sri log messages can't do that
11:05 sri and shouldn't try
11:06 sri a plugin for collecting metrics would be cool
11:11 CandyAngel Sounds like the sort of thing you'd monitor with something like munin
11:11 jabberwok banner: "minion workers #404 on strike for overload wages"
11:38 dod joined #mojo
11:39 sri in case someone else wants to build a metrics plugin, i would just make a plugin that hooks into all sorts of framework hooks/events to collect data, and store that data with IPC::Shareable, so all workers have access
11:39 sri and add a route that renders a simple page showing the data
11:40 sri it's a fairly simple project
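
A very rough sketch of how such a plugin might be put together, following the outline above; the plugin name, the IPC::Shareable glue key and the /metrics route are all made up, and locking is left out for brevity:

    package Mojolicious::Plugin::TinyMetrics;   # hypothetical name
    use Mojo::Base 'Mojolicious::Plugin';
    use IPC::Shareable;

    sub register {
      my ($self, $app) = @_;

      # One shared hash, visible to every prefork worker
      tie my %stats, 'IPC::Shareable', 'mjst', {create => 1, mode => 0644};

      # Collect a few numbers on every request
      $app->hook(after_dispatch => sub {
        my $c = shift;
        $stats{requests}++;
        $stats{'status_' . ($c->res->code // 0)}++;
      });

      # Simple page showing the collected data
      $app->routes->get('/metrics' => sub { shift->render(json => {%stats}) });
    }

    1;
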
11:40 tchaves joined #mojo
11:49 VVelox joined #mojo
12:11 dod joined #mojo
12:12 marty joined #mojo
12:29 cosimo joined #mojo
12:40 eseyman joined #mojo
12:58 dod joined #mojo
13:02 ashimema_ joined #mojo
13:02 dod joined #mojo
13:25 good_news_everyon joined #mojo
13:25 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQgtm
13:25 good_news_everyon mojo/master f5fecd3 Sebastian Riedel: allow more workers to be spawned temporarily if there is a need
13:25 good_news_everyon left #mojo
13:25 sri marcus, batman, jberger: please review
13:29 sri is that the behavior we want?
13:33 tchaves joined #mojo
13:36 good_news_everyon joined #mojo
13:36 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQgq0
13:36 good_news_everyon mojo/master 56045c2 Sebastian Riedel: better explanation for the overload feature
13:36 good_news_everyon left #mojo
13:41 sri one thing we should do to allow better metrics plugins is make the server instance available to the app
13:42 sri so the plugin can read config information
13:43 sri perhaps even hook into the server for more information
13:46 karjala_ sri, when you say "reducing the performance cost of restarts", do you mean "restarts of workers"?
13:46 sri yes
13:47 CandyAngel The cost reduction is that a long graceful shutdown doesn't prevent the next startup..
13:47 sri correct
13:48 good_news_everyon joined #mojo
13:48 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQgmH
13:48 good_news_everyon mojo/master 91fae2b Sebastian Riedel: mention workers
13:48 good_news_everyon left #mojo
13:48 karjala_ Therefore the maximum useful value of overload is equal to the number of workers?
13:48 karjala_ no?
13:48 sri no
13:48 karjala_ ok
13:49 CandyAngel An overload worker can be spawned from an overload worker in graceful shutdown
13:49 sri which is why a hard limit is needed in the first place, so they don't cascade out of control
13:50 CandyAngel So if you have 4 workers and they are shutting down, 4 overload workers come up and if they go into shutdown, another 4 can be spawned (until the maximum overload count)
13:50 karjala_ Graceful shutdown of worker happens when it has served a fixed number x (=100 for example) of requests?
13:51 sri at work we also restart workers when they use too much memory
13:51 sri which i guess might become a generic plugin at some point
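
A generic version of that could look something like the sketch below, run from the application's startup so every worker checks itself; the 200 MB limit and the 15 second interval are made up, and this is not the exact hack used at work:

    use Mojo::Base -strict;
    use Mojo::IOLoop;

    use constant MAX_RSS_KB => 200 * 1024;   # made-up limit

    # Check this worker's RSS regularly and ask for a graceful restart
    # once it crosses the limit
    Mojo::IOLoop->recurring(15 => sub {
      my $rss_kb = 0;
      if (open my $fh, '<', "/proc/$$/status") {
        while (<$fh>) { $rss_kb = $1 if /^VmRSS:\s+(\d+)\s+kB/ }
      }
      Mojo::IOLoop->stop_gracefully if $rss_kb > MAX_RSS_KB;
    });
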
13:52 batman sri: i'm not sure if i understand why you add a new attribute. can't it just always be workers/2 or something?
13:52 karjala_ Systemd allows for socket-activated services (for services that don't get called very often). I wonder if you'd consider making hypnotoad such a service as well
13:52 sri whole reason i have to optimize this is because coolo is mass murdering workers with his memory limits :)
13:52 batman i'm also unsure about the name "overload"
13:53 sri batman: is workers/2 always perfect?
13:53 sri what if you want to disable it?
13:53 batman why would you want that?
13:54 sri you tell me!
13:54 batman i don't see why you would not want workers accepting connections
13:54 CandyAngel If you did $workers/2 and coolo killed all the workers, his workforce would be halved while they shut down
13:54 sri extra workers cost memory
13:55 sri yea, workers/2 definitely doesn't always work
13:56 sri about the name, i don't really care, make a better suggestion
13:57 sri spare?
13:57 purl somebody said spare was stupid
13:57 sri :(
13:58 CandyAngel I can think of other suggestions, but they're not better :P
13:58 batman i like spare a lot better.
13:58 CandyAngel Step-in, fallback etc.
13:59 CandyAngel The Wolf Workers
14:02 good_news_everyon joined #mojo
14:02 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQgOB
14:02 good_news_everyon mojo/master 0c1b1ae Sebastian Riedel: rename overload to spare
14:02 good_news_everyon left #mojo
14:05 * CandyAngel might start building her first thing using postgresql :|
14:06 pink_mist surely you mean ":D"
14:06 CandyAngel We'll see! :P
14:07 CandyAngel I've been looking at Redis but with Pg's NOTIFY/LISTEN, I think I can use Pg instead of Redis
14:08 CandyAngel Whether I should or not, is another matter :)
14:15 CandyAngel Hm, found a benchmark which indicates that maybe I shouldn't :P
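
For the record, Mojo::Pg wraps NOTIFY/LISTEN behind a pubsub helper, which covers the simple publish/subscribe part of what Redis often gets used for; a minimal sketch, with the connection string and channel name made up:

    use Mojo::Base -strict;
    use Mojo::IOLoop;
    use Mojo::Pg;

    my $pg = Mojo::Pg->new('postgresql://user:pass@localhost/test');

    # LISTEN on a channel and react to incoming notifications
    $pg->pubsub->listen(events => sub {
      my ($pubsub, $payload) = @_;
      say "Got: $payload";
    });

    # NOTIFY from anywhere else that holds a Mojo::Pg handle
    $pg->pubsub->notify(events => 'something happened');

    Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
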
14:29 rshadow joined #mojo
14:32 Vandal joined #mojo
14:35 sri hmm, i was just wondering whether starting 6 workers instead of 4 + 2 spare is always better
14:37 sri the spare workers do have the advantage that they are very likely to stick around for some time, while 6 workers started at the same time might go down together
14:39 nic joined #mojo
14:41 S joined #mojo
14:47 brunoramos joined #mojo
14:52 PryMar56 joined #mojo
15:17 batman sri: I'm +1 on the patch in general
15:18 batman Not sure about the default values, but can't come up with a killer argument for changing it
15:22 cosimo joined #mojo
15:24 good_news_everyon joined #mojo
15:24 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQgWN
15:24 good_news_everyon mojo/master e862c19 Sebastian Riedel: change heartbeat timeout to a value not shared with anything else
15:24 good_news_everyon left #mojo
15:29 sri coolo: btw. i think we should try using PSS instead of sketchy RSS values for memory limits
15:30 sri i really hate seeing those rss values in the logs... they tell you absolutely nothing :p
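
For reference, a PSS number can be pulled straight out of /proc on Linux; a small sketch of where it could come from, assuming per-mapping Pss fields are available:

    # Sum the proportional set size of the current worker in kB
    sub worker_pss_kb {
      open my $fh, '<', "/proc/$$/smaps" or return 0;
      my $pss = 0;
      while (<$fh>) { $pss += $1 if /^Pss:\s+(\d+)\s+kB/ }
      return $pss;
    }
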
15:32 sri batman: if the default is too high it could result in unexpected memory spikes for people
15:32 sri it's conservative, like our 4 worker default
15:32 sri (2 spare)
15:40 ptolemarch joined #mojo
15:42 sh14 joined #mojo
15:43 batman Yeah, that or workers/2
15:43 batman But I don't really have an argument for that...
15:44 sri workers/2 is not good when you have setups with 30 workers
15:44 sri that's a possible spike of 15 workers
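
All of the knobs discussed above end up as ordinary Hypnotoad settings in the application config; a sketch using the defaults mentioned in this log (file name and listen address are illustrative):

    # myapp.conf, loaded via the Config plugin
    {
      hypnotoad => {
        listen           => ['http://*:8080'],
        workers          => 4,    # regular workers
        spare            => 2,    # temporary extra workers during graceful restarts
        graceful_timeout => 60    # seconds before a stopping worker is killed
      }
    }
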
15:52 marty joined #mojo
16:02 CandyAngel Huh, people use Pg for spatial calculations.. that would be handy..
16:02 sri people use postgres for all kinds of things, it's the swiss army knife of databases
16:03 pink_mist CandyAngel: did you see this already? http://renesd.blogspot.com/2017/02/is-postgresql-good-enough.html
16:05 CandyAngel I didn't, I'll have a look in a second (just about to leave to go home)
16:25 marty joined #mojo
16:27 karjala_ joined #mojo
17:29 CandyAngel pink_mist: Useful read, thankies :)
17:53 karjala_ joined #mojo
18:01 marty joined #mojo
18:16 marty joined #mojo
18:22 jberger pink_mist cool thanks
18:25 jberger sri are you still looking for review/comments on this new feature?
18:25 sri yes
18:26 jberger I'm not at my lappy but if you post a comparison link I'll read it on my phone, for whatever that's worth
18:28 jberger also to bikeshed: holdover, tieover, bridge
18:28 sri https://github.com/kraih/mojo/compare/4d9b7f39848607ebda66fe9251b486e21e7e2887...e862c190d32c2cc0decbad20929b190623e3e07a
18:31 good_news_everyon joined #mojo
18:31 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQgoD
18:31 good_news_everyon mojo/master a8af0f5 Sebastian Riedel: update default value in usage message
18:31 good_news_everyon left #mojo
18:36 jberger These spares, they run the old code it seems?
18:37 jberger could you clarify that someplace in the docs
18:37 jberger ?
18:37 jberger otherwise it looks fine to me
18:37 sri what?
18:38 jberger They are new processes serving the existing not incoming application, right?
18:38 sri what incoming application?
18:38 marty joined #mojo
18:39 sri "This allows for new workers to be started while old ones are still shutting down gracefully, drastically reducing the performance cost of worker restarts."
18:40 sri "worker restarts"
18:40 jberger Oh this is just prefork not hypnotoad restarts?
18:40 * jberger re-reads
18:40 sri yes
18:41 jberger Yeah then forget what I said, looks fine
18:42 sri related http://mojolicious.org/perldoc/Mojo/Server/Prefork#accepts
18:43 sri or the restart hack we use at work https://gist.github.com/anonymous/7db38f2441d6cd59962134c738b9dcda
18:44 sri you can call Mojo::IOLoop->stop_gracefully anywhere in your code and it will gracefully restart the current worker
18:44 sri the patch allows a new worker to be spawned immediately without waiting for the 60 second timeout
18:46 sri even websockets could handle that gracefully
18:47 marty joined #mojo
18:47 sri subscribe to Mojo::IOLoop->singleton->on(finish => sub {...}) and make your websockets reconnect in a clean way
18:49 sri current worker would already have stopped accepting connections, so you'd get connected to another worker
18:49 sri but that's only slightly related
18:50 sri just wanted to mention it
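
A sketch of the WebSocket side of that, with the route, message format and client-side reconnect protocol all made up; the relevant part is just subscribing to the loop's finish event:

    use Mojolicious::Lite;
    use Mojo::IOLoop;

    # Keep track of connected WebSocket clients
    my %clients;

    websocket '/updates' => sub {
      my $c = shift;
      $clients{"$c"} = $c;
      $c->on(finish => sub { delete $clients{"$c"} });
    };

    # Once this worker starts shutting down gracefully it no longer accepts
    # new connections, so clients that reconnect will land on another worker
    Mojo::IOLoop->singleton->on(finish => sub {
      $_->send({json => {type => 'reconnect'}}) for values %clients;
    });

    app->start;
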
18:55 karjala_ joined #mojo
18:58 jberger I'll try to ponder that, I'm a little too holiday for that much thought
19:01 dod joined #mojo
19:03 tchaves joined #mojo
21:19 suede joined #mojo
21:32 schelcj joined #mojo
21:54 sri btw. uWSGI has a cool feature where it manages a pool of processes and resizes the pool so it doesn't go over a single memory limit http://uwsgi-docs.readthedocs.io/en/latest/Cheaper.html
22:22 good_news_everyon joined #mojo
22:22 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vQg7p
22:22 good_news_everyon mojo/master 8c63cdf Sebastian Riedel: make it easier to use systemd for socket activation
22:22 good_news_everyon left #mojo
22:22 sri that seemed like an easy choice, since it ended up costing nothing :)
22:22 sri technically -3 lines
22:26 sri cool thing, i think systemd can actually activate multiple sockets for us, and we can use them like ->listen(['...?fd=3', '...?fd=4'])
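
The application side of that could look something like this, assuming systemd hands over two prepared listen sockets as file descriptors 3 and 4 (addresses are illustrative):

    # Listen on sockets inherited from systemd instead of binding ourselves
    app->config(hypnotoad => {
      listen => ['http://127.0.0.1?fd=3', 'http://127.0.0.1?fd=4']
    });
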
22:29 Fraxtur joined #mojo
23:25 fong joined #mojo
