The web in a box - a next generation web framework for the Perl programming language

IRC log for #mojo, 2017-08-02

| Channels | #mojo index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:12 gizmomathboy joined #mojo
00:29 jabberwok joined #mojo
00:29 jabberwok left #mojo
00:29 jabberwok joined #mojo
02:51 noganex joined #mojo
02:57 gryphon joined #mojo
03:08 Ptolemarch joined #mojo
03:53 kavuria joined #mojo
05:14 karjala_ joined #mojo
05:46 zach joined #mojo
05:46 zach if you wanted to test a file upload in a mojo test, how would you post the file?
05:48 zach actually I guess http://mojolicious.org/perldoc/Test/Mojo#post_ok kind of shows
05:48 zach I was curious about binary uploads but I guess it's not that important
05:54 karjala_ joined #mojo
06:25 zach if you want different controllers to be able to reference each other, do you need to export subs in them?
06:43 karjala_ joined #mojo
07:12 batman zach: use helpers instead
07:12 batman ..and a common model or something
07:13 batman zach: binary upload is just $t->post_ok("/some/where", $binary)->status_is(200);
07:13 batman same as mojo-ua
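A minimal sketch of the pattern batman describes, assuming a hypothetical `/upload` route and filename; a raw body goes straight as the second argument to `post_ok`, and a multipart file upload uses the `form` content generator with a hash of `content` and `filename`:

```perl
use Mojo::Base -strict;
use Test::More;
use Test::Mojo;
use Mojolicious::Lite;

# Hypothetical upload route, just for this sketch
post '/upload' => sub {
  my $c      = shift;
  my $upload = $c->req->upload('file');
  $c->render(json => {name => $upload->filename, size => $upload->size});
};

my $t = Test::Mojo->new;

# Multipart upload of binary data via the "form" content generator
my $binary = "\x89PNG\r\n";
$t->post_ok('/upload' => form => {file => {content => $binary, filename => 'test.png'}})
  ->status_is(200)
  ->json_is('/name' => 'test.png')
  ->json_is('/size' => length $binary);

done_testing;
```

The same `form => {...}` shape works on a plain Mojo::UserAgent, which is what batman means by "same as mojo-ua".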
07:13 stryx` joined #mojo
07:17 AndrewIsh joined #mojo
07:31 prg joined #mojo
07:38 karjala_ Hey, interesting: The top google result for "zfs backup" is a backup solution based on Mojolicious: http://www.znapzend.org/
07:38 pink_mist cool
07:58 Vandal joined #mojo
08:06 sri update on coolo's problem from yesterday, disabling keep-alive in the apache reverse proxy makes everything work... so Mojo::Server::Daemon might have a problem in the keep-alive code
08:08 sri and i have an idea
08:09 sri no, actually that can't be it... was thinking of the pipeline buffer being too small
08:09 * ashimema watches this space.. pretty sure this has affected me in the past too
08:10 sri coolo sent me 400mb of strace output and i don't think i've seen mod_proxy ever pipeline requests
08:11 sri (pipelining is when browsers send two or more http requests on the same connection without waiting for a response, and we have to buffer the second request to handle it after the first)
08:11 sri which keep-alive allows
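A rough on-the-wire illustration of what sri is describing (host and paths are hypothetical, and the server must allow keep-alive): a pipelining client writes both requests up front, before reading any response, so the server has to buffer the second one.

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Hypothetical server for the sketch
my $sock = IO::Socket::INET->new(
  PeerHost => 'example.com',
  PeerPort => 80,
  Proto    => 'tcp',
) or die "connect failed: $!";

# Two requests on one keep-alive connection, no waiting in between --
# the server must buffer the second until the first response is written
print $sock "GET /first HTTP/1.1\r\nHost: example.com\r\n\r\n"
          . "GET /second HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";

# Both responses come back in order on the same connection
print while <$sock>;
```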
08:20 bianca joined #mojo
08:22 coolo sri: we have like 300 apache prefork workers hitting 5 mojo workers in these straces
08:22 coolo and every apache worker has its own keep alive - this is really nuts actually :)
08:23 sri indeed :)
08:23 coolo but the fact that the apache requests fail with 502s in chunks led me to believe that this is really a mojo problem
08:24 coolo but the straces don't make sense somehow :(
08:25 coolo because you can see the mojo workers writing successfully and then continue polling the kept-alive connection - while apache gets the connection reset
08:25 sri i've not really looked at the strace yet, took forever to download through the vpn
08:25 sri the 502 in batches is very weird
08:27 sri do you remember if all those 502s came from the same worker?
08:27 sri (on the mojo side)
08:29 sri wow, i think i just replicated the problem :D
08:30 sri perl -Ilib -Mojo -E 'a(sub { $_->render(json => {just => "works"}) })->start' prefork -l http://*:8080 -m production -r 3 -a 30
08:30 sri wrk -c 100 -d 10 http://127.0.0.1:8080
08:30 sri Socket errors: connect 0, read 276, write 0, timeout 0
08:30 sri 16559 requests in 10.04s, 2.91MB read
08:34 * ashimema really struggled to consistently replicate the problem, or even narrow it down to anything specific.. hence never mentioning it.. but it's great to see you guys battling through it.. if I can help in any way.. please do shout
08:36 sri really interesting, -r 10 -a 1 seems to not trigger the problem
08:53 rshadow joined #mojo
09:00 sri ok, down to a slightly more constrained scenario
09:00 sri perl -Ilib -Mojo -E 'a(sub { $_->render(json => {just => "works"}) })->start' prefork -l "http://*:8080?single_accept=1" -m production -w 1 -r 3 -a 30 -s 0
09:01 sri wrk -t 1 -c 2 -d 10 http://127.0.0.1:8080
09:01 sri requires at least two connections per worker to trigger the problem
09:01 sri wrk -t 1 -c 1 -d 10 http://127.0.0.1:8080
09:01 sri that does not appear to cause a problem
09:02 sri which would mean our keep-alive code is most likely fine
09:05 sri it's something about concurrent connections
09:11 sri and i have found an anomaly :)
09:11 sri with two concurrent connections a keep-alive connection sometimes gets closed without sending a Connection: close response before closing
09:13 sri now i just need to figure out what triggered those anomalies
09:22 irqq joined #mojo
09:56 bpmedley joined #mojo
10:06 sri yea, looks like i have a fix
10:06 sri it's not a nice one though
10:26 good_news_everyon joined #mojo
10:26 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/v7uOu
10:26 good_news_everyon mojo/master f9ff45e Sebastian Riedel: remove close_idle_connections again and fix a bug where connections would get closed too quickly
10:26 good_news_everyon left #mojo
10:26 sri yea, it was the close_idle_connections logic
10:26 sri which i suspected last time too
10:28 sri the change means stopping workers is now slower, since we wait for one more request or the inactivity timeout
10:30 sri so, for busy servers it shouldn't really make a difference
10:32 sri coolo: ^^
10:33 sri after the change i'm unable to make wrk fail
10:34 sri ashimema: maybe you want to try the change too
10:36 sri there is still a case where a request might get interrupted i think, when the one final request arrives right at the inactivity timeout
10:36 sri but that was always the case
10:36 sh14 joined #mojo
10:37 sri and i'm not sure there's a way to address that
10:37 coolo sri: not sure what you want from me here
10:37 sri coolo: just inform you of the fix
10:37 coolo ACK :)
10:39 coolo sri: if you have an upstream issue/PR I'd like to add it as comment to our apache config
10:41 sri no issue, you never opened one :p
10:41 sri just the commit hash
10:42 * ashimema comes back to pc and reads up
10:45 ashimema I'll stick it on a production machine for the day.. I only ever managed to trigger it in production (thankfully I have pretty forgiving customers and it should be easy to backout if needed) :)
10:45 ashimema thanks sri.. good work narrowing it down
10:50 bianca joined #mojo
10:53 tchaves joined #mojo
11:08 necrophcodr joined #mojo
11:09 necrophcodr Is it possible to run multiple Mojo::UserAgents in parallel, and have them run a sub when done that adds data to a global variable?
11:09 necrophcodr So that two requests are made at the same time, and when done, they both append their results to an array or a hash or something like that
11:19 yukikimoto joined #mojo
11:23 yukikimo_ joined #mojo
11:46 noganex joined #mojo
12:17 CandyAngel necrophcodr: You can run a single UA without blocking on each request (so they are run in parallel, but only one UA)
12:18 necrophcodr CandyAngel: how would that work? Would I then create a single UA object, and then in different threads, make calls to that global object?
12:18 necrophcodr Or how would I ensure parallel utilization
12:18 CandyAngel You send both requests through the UA, then start the IOLoop
12:18 CandyAngel Give me a second and I'll link you the documentation
12:21 CandyAngel Sorry, my setup is a bit weird at the moment, there is a keyboard and mouse between the keyboard and mouse I'm using :P
12:22 CandyAngel If you look in the Synopsis for Mojo::UserAgent, the bit you want is the "concurrent non-blocking requests" bit near the bottom
12:23 CandyAngel That will run the second bit where you deal with them once all requests are done
12:23 CandyAngel Above it is the "non blocking request" where you do all the $ua->get's you want, then start the IOLoop
12:24 necrophcodr I wasn't aware that would be concurrent. I'll need to read up about this a lot more, there's a lot more to the whole IOLoop thing than I know.
12:25 CandyAngel The delay one is more for when you are in a procedural situation (because the 'wait' will block until the requests are done and handled)
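The pattern CandyAngel is pointing at, sketched with the delay helper from this era of Mojolicious (the URLs and result handling are hypothetical): one UA, several requests in flight at once, and the shared array is only touched from the single event loop, so no threads are needed.

```perl
use Mojo::Base -strict;
use Mojo::UserAgent;
use Mojo::IOLoop;

my $ua = Mojo::UserAgent->new;
my @results;  # shared collector; safe because everything runs in one event loop

# The final step runs once every $delay->begin callback has fired
my $delay = Mojo::IOLoop->delay(sub {
  my ($delay, @txs) = @_;
  push @results, $_->res->body for @txs;
  # ...all requests are done here, work with @results...
});

# Hypothetical (possibly dynamically retrieved) URLs
for my $url ('http://example.com/a', 'http://example.com/b') {
  my $end = $delay->begin(0);
  $ua->get($url => sub {
    my (undef, $tx) = @_;
    $end->($tx);  # hand the finished transaction to the final step
  });
}

# Start the event loop and block until everything has finished
$delay->wait;
```

The `wait` at the end is the blocking point CandyAngel mentions, which makes this fit a procedural script.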
12:25 necrophcodr I might have to structure it slightly differently if the domains are dynamically retrieved, but I'll look more into it.
12:26 CandyAngel Minion might also be something to use, depending what you are doing
12:26 necrophcodr My only issue with it is that I might have an array or a hash that I need to append data to
12:26 necrophcodr I realistically need to extract data from a lot of domains, store the data in some sort of variable, and only when all the extraction is done, work with the data.
12:27 necrophcodr Minion is neat, but i'd like to avoid further dependencies for now.
12:27 CandyAngel Mhm, sure
12:27 CandyAngel I like using Minion for things like this because you can see what it is doing using the job stats
12:30 CandyAngel And you can make jobs depend on other jobs, so you can schedule the "deal with the data" job as a dependent on the "retrieve the data" job. Is very nice! :P
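The job dependency CandyAngel describes can be sketched like this; the task names are hypothetical and the SQLite backend is just one assumed choice, but Minion's `enqueue` really does take a `parents` option so a job only runs after its parents have finished:

```perl
use Mojolicious::Lite;

# Backend choice is an assumption for the sketch
plugin Minion => {SQLite => 'sqlite:minion.db'};

# Hypothetical tasks
app->minion->add_task(retrieve_data => sub {
  my ($job, $url) = @_;
  # ...fetch the data and store it somewhere...
});
app->minion->add_task(deal_with_data => sub {
  my ($job) = @_;
  # ...runs only once its parent job has finished...
});

# Schedule "deal with the data" as dependent on "retrieve the data"
my $parent = app->minion->enqueue(retrieve_data => ['http://example.com']);
app->minion->enqueue(deal_with_data => [] => {parents => [$parent]});

app->start;
```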
12:49 bwf joined #mojo
13:08 gryphon joined #mojo
13:20 necrophcodr CandyAngel: Thanks a bunch! I got my problem solved using your tip with UserAgent concurrent and IOLoop delay
13:20 Pyritic joined #mojo
13:20 CandyAngel Glad to hear it :)
13:50 itaipu joined #mojo
13:55 trone joined #mojo
14:03 * sri really really wants a minion web ui
14:07 jeck joined #mojo
14:13 necrophcodr left #mojo
14:23 Pyritic joined #mojo
14:33 yukikimoto joined #mojo
14:42 gizmomathboy joined #mojo
14:47 CHYC joined #mojo
15:03 gizmomathboy joined #mojo
15:08 mib_cgklti joined #mojo
15:08 mib_cgklti hi
15:08 purl hola, mib_cgklti.
15:10 mib_cgklti i'm stuck on a problem i have to solve: i want to serve large files with $c->reply->static('large_file.foo'); and need this to be non-blocking
15:11 pink_mist $c->reply->static() is non-blocking
15:11 pink_mist so no problem
15:11 mib_cgklti but it seems the next request will be served as soon as the file download has finished. until then other users have to wait.
15:13 pink_mist hmm, actually, it seems I'm using $c->reply->asset() rather than $c->reply->static() when I'm doing this ... maybe only asset is non-blocking? but that wouldn't make much sense
15:14 mib_cgklti on my dev machine no following requests are served while reply->static is serving the large file
15:15 mib_cgklti i can try reply->asset
15:15 mib_cgklti thx
15:15 sri pink_mist: nope, you were right
15:16 mib_cgklti perhaps, the problem is the standard http-server with plack?
15:16 pink_mist well yes
15:17 pink_mist plack doesn't support async
15:17 pink_mist afaik
15:17 pink_mist you'll need hypnotoad/morbo/$app daemon/$app prefork/ or something like that
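A sketch of the pattern pink_mist describes, with a hypothetical file path; `reply->asset` streams a Mojo::Asset::File in chunks from the event loop, so other requests keep being served — but only under one of Mojolicious's own servers, not under a PSGI/plack one:

```perl
use Mojolicious::Lite;
use Mojo::Asset::File;

get '/download' => sub {
  my $c = shift;
  # Hypothetical large file; streamed chunk by chunk by the event loop
  $c->res->headers->content_type('application/octet-stream');
  $c->reply->asset(Mojo::Asset::File->new(path => '/srv/files/large_file.foo'));
};

# Run with a non-blocking server, e.g.:
#   hypnotoad app.pl    or    perl app.pl daemon -l http://*:8080
app->start;
```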
15:17 sri ashimema: i'd appreciate feedback on the patch when you have some data
15:18 mib_cgklti i'll try that
15:18 sri then i'll release it later today
15:18 mib_cgklti thx for the help
15:23 petru joined #mojo
15:44 PryMar56 joined #mojo
15:54 kgoess joined #mojo
15:59 ashimema will do
16:00 ashimema so far after having applied it I've not seen any 502's
16:01 ashimema but they were fairly infrequent.. so I'd be more confident in saying that again in 24 hours time ;)
16:04 Pyritic joined #mojo
16:07 Janos joined #mojo
16:15 AirDisa joined #mojo
16:29 bianca joined #mojo
16:31 itaipu joined #mojo
16:59 Lee[home] joined #mojo
17:05 jeck joined #mojo
17:17 Lee[home] https://news.ycombinator.com/item?id=14910980 # web developer Chrome extension infected with adware
17:20 Grinnz what did that extension provide anyway? Chrome provides plenty for web dev in its dev tools
17:22 arcanez Grinnz: I think the only thing I like better than the Chrome webdev tools is firebug
17:22 Grinnz also, this is a good example of why I think that gnome 3 relying on community extensions for its basic operation is really stupid
17:25 arcanez unity4lyfe
17:28 jeck joined #mojo
17:29 Lee[home] Grinnz: it's so long since i've used it that i forgot i had it installed, then 2 hours ago i started seeing ads so i was like "WTF...?"
17:29 itaipu joined #mojo
17:30 Lee[home] https://twitter.com/chrispederick/status/892786731564417024
17:30 Grinnz guess its time to carefully look through what extensions i have installed again :)
17:30 Grinnz ouch
17:30 Lee[home] i've taken this as an excuse for a purge :D
17:32 pink_mist arcanez: I thought firebug was discontinued because of firefox's internal webdev tools? :P
17:33 arcanez pink_mist: I don't use FF, but it looks like it
17:34 pink_mist or maybe I should say: "was folded into FF"
17:34 pink_mist =)
17:48 umask001 joined #mojo
17:50 jacoby joined #mojo
18:02 kgoess_ joined #mojo
18:13 wingfold joined #mojo
18:16 howitdo joined #mojo
18:28 petru joined #mojo
18:37 kgoess_ joined #mojo
18:37 jacoby joined #mojo
18:39 bianca joined #mojo
18:40 jacoby joined #mojo
18:41 kgoess_ joined #mojo
18:58 irqq joined #mojo
19:10 bwf joined #mojo
19:31 irqq_ joined #mojo
19:31 stryx` joined #mojo
19:50 bianca joined #mojo
19:54 Lee[home1 joined #mojo
20:11 kgoess joined #mojo
20:11 kgoess joined #mojo
20:35 kgoess joined #mojo
20:48 stryx` joined #mojo
21:05 PopeF Simple question. If I subclass a Mojolicious Controller via Mojo::Base (i.e. "use Mojo::Base 'My::Controller'"), my subclass is still a Controller, correct?
21:06 mishanti1 Correct.
21:06 preaction yes
21:08 PopeF thanks. Just making sure
22:07 jberger assuming My::Controller subclasses from Mojolicious::Controller
22:08 jberger package My::Controller; use Mojo::Base 'Mojolicious::Controller'; package Some::Controller; use Mojo::Base 'My::Controller';
22:17 gordonfish joined #mojo
23:07 stryx` joined #mojo
23:52 petru joined #mojo
