The web in a box - a next generation web framework for the Perl programming language

IRC log for #mojo, 2016-05-19

| Channels | #mojo index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:56 allnighter joined #mojo
01:22 allnighter joined #mojo
02:00 ivi joined #mojo
02:02 jasanj morbo will not restart immediately when I have changed my project file
02:03 jasanj it has a 1~30s delay
02:04 jasanj is that reasonable?
02:04 jberger jasanj: windows?
02:05 jasanj jberger: linux
02:06 jasanj just ignore me, i found the reason is my cluster environment, not from mojo
02:07 jberger I was just about to say
02:07 jberger That would be unusual for non windows
02:07 jasanj jberger: :)
02:08 jberger Windows is known to have quite a lag
02:09 dave NFS causes randomness with it too ;)
02:11 sri jberger: i guess we could make repair smarter, and have it not delete parents of unfinished jobs
02:27 jberger sri: Is that a good idea?
02:27 jberger That could lead to piles of stale jobs
02:28 sri how so?
02:28 jberger I guess they would get cleaned up together
02:29 sri https://gist.github.com/anonymous/1194fc71768f2e453e4064c4a5df43e7
02:29 sri that's what i had in mind
02:29 jberger I thought i had a scenario but as i tried to describe it i couldn't
02:30 sri right now if you have a huge backlog of jobs and stuff gets delayed enough to reach remove_after, your finished parent job might get removed before the child could be dequeued
02:30 sri which then ends up with a failed child job
02:31 jberger Yeah
02:31 jberger Ok i like that
02:31 sri the patch would allow the queue to catch up again
02:31 sri one might argue that you're screwed anyway if your backlog gets that big
02:32 jberger It still is possible that jobs get manually removed
02:32 sri of course
02:32 jberger So you have to handle that case (as you still are)
02:32 sri yes
02:32 jberger But i like this change
02:41 sri the default remove_after value is 2 days, which seems pretty reasonable
02:41 sri but i was thinking about weekends, where something goes wrong and nobody notices until monday
02:42 allnighter joined #mojo
02:42 sri this gives you at least a chance to catch up
02:43 sri and i guess there's the case where children want to use the parent result
02:44 sri this would preserve the parent until the child is actually finished, even if they fail and need to be retried a few times
02:45 sri you know, rate limits and the like
02:46 sri the more i think about it the more i like it
02:50 sri https://github.com/kraih/minion/commit/dd92aed5bbd04f8880566add5542efcd0b8c5110
02:57 allnighter joined #mojo
02:58 noganex_ joined #mojo
03:33 PryMar56 joined #mojo
03:35 zivester joined #mojo
03:50 Adura joined #mojo
04:00 allnighter joined #mojo
04:03 elrey joined #mojo
04:09 Guest-quest joined #mojo
04:23 irqq joined #mojo
04:28 sri hahaha, it's so funny to type on a normal macbook keyboard now
04:28 sri almost feels like writing on an old typewriter
04:40 cfedde joined #mojo
05:36 Adura joined #mojo
05:41 inokenty-w joined #mojo
05:51 dod joined #mojo
05:55 dod joined #mojo
06:19 dod joined #mojo
06:20 dod joined #mojo
06:21 ashimema joined #mojo
06:26 allnighter joined #mojo
06:46 allnighter joined #mojo
06:50 AndrewIsh joined #mojo
07:15 BinGOs joined #mojo
07:36 PopeF joined #mojo
07:40 trone joined #mojo
07:41 Vandal joined #mojo
07:43 allnighter in Mojo/ByteStream.pm I don’t understand this line
07:43 allnighter $$self = $sub->($$self, @_);
07:43 allnighter from the code chunk:
07:43 allnighter for my $name (@UTILS) {
07:43 allnighter my $sub = Mojo::Util->can($name);
07:43 allnighter Mojo::Util::monkey_patch __PACKAGE__, $name, sub {
07:43 allnighter my $self = shift;
07:43 allnighter $$self = $sub->($$self, @_);
07:43 allnighter return $self;
07:43 Adura exit;
07:43 allnighter };
07:43 allnighter }
07:44 bpmedley allnighter: Perhaps post a link from github?
07:44 Adura If only that exit worked...
07:45 allnighter https://github.com/kraih/mojo/blob/master/lib/Mojo/ByteStream.pm
07:45 bpmedley allnighter: You can click on a line number and get a link for that line
07:46 allnighter https://github.com/kraih/mojo/blob/master/lib/Mojo/ByteStream.pm#L22
07:47 Atog joined #mojo
07:47 allnighter so $sub is a coderef to a Mojo::Util function, then monkey_patch adds the function to the Mojo::ByteStream package
07:49 bpmedley allnighter: Yes.  And, I believe it's a fancy import that allows the imported function in Mojo::ByteStream to be used in an OO fashion.
07:50 bpmedley Also, the monkey_patch'd function supports fluent interfaces.
08:07 dod joined #mojo
08:08 allnighter hmm, I just don't get what $$self is doing
08:08 allnighter symbolic reference?
08:08 allnighter maybe i should sleep on it, it’s like 5 am, thanks.
08:10 bpmedley I think it's dereferencing the scalar reference.
08:11 Adura Make something similar yourself and Dumper it.
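In the spirit of Adura's suggestion, here is a self-contained sketch of the pattern: a made-up Demo::Stream class that blesses a scalar reference and installs chainable methods that rewrite $$self in place, mirroring what the monkey_patch loop does in Mojo::ByteStream (class and method names here are invented, not Mojo's):

```perl
use strict;
use warnings;

# A tiny imitation of the Mojo::ByteStream pattern: the object is a
# blessed reference to a single scalar, and each "utility" method
# replaces the referenced value in place, then returns $self so calls
# can be chained.
package Demo::Stream;

sub new {
    my ($class, $value) = @_;
    return bless \$value, $class;    # a scalar reference, blessed
}

# $$self dereferences the scalar ref; assigning to it swaps the value
# held inside the object while $self stays the same blessed reference
sub upper {
    my $self = shift;
    $$self = uc $$self;
    return $self;                    # fluent interface
}

sub reverse {
    my $self = shift;
    $$self = scalar CORE::reverse $$self;    # CORE:: avoids recursing
    return $self;
}

sub to_string { ${$_[0]} }

package main;

my $stream = Demo::Stream->new('hello');
print $stream->upper->reverse->to_string, "\n";    # OLLEH
```

Dumping the object at any point shows the same blessed scalar reference, only the value behind it changes.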
08:53 icjs joined #mojo
08:55 dod joined #mojo
09:08 dod joined #mojo
09:27 meshl joined #mojo
09:37 punter joined #mojo
10:22 irqq joined #mojo
10:49 tchaves joined #mojo
10:51 avkhozov joined #mojo
10:52 avkhozov joined #mojo
10:53 kaare joined #mojo
10:54 dvinciguerra joined #mojo
11:02 avkhozov I have an issue with Mojo::Pg and search_path. If I have several cached connections and set a new $pg->search_path(['x']), then the next $pg->db->query('...') will use the old schema instead of 'x'.
11:04 avkhozov Do I need to manually clear the cached connections? Or should Mojo::Pg do this?
11:43 batman Lee: could you join #swagger ?
11:45 Lee batman: done!
12:35 gizmomathboy joined #mojo
12:39 irqq_ joined #mojo
12:39 meshl joined #mojo
13:00 dod joined #mojo
13:01 ramortegui joined #mojo
13:17 zivester joined #mojo
13:31 aborazmeh joined #mojo
13:51 irqq joined #mojo
13:55 mcsnolte joined #mojo
14:12 itaipu joined #mojo
14:38 ashimema joined #mojo
15:01 dod joined #mojo
15:01 disputin joined #mojo
15:05 zivester joined #mojo
15:14 dod joined #mojo
15:15 dod left #mojo
15:16 disputin joined #mojo
15:19 ramortegui joined #mojo
15:19 kaare joined #mojo
15:29 henq joined #mojo
15:37 kaare_ joined #mojo
15:43 tchaves joined #mojo
16:17 lluad joined #mojo
16:17 gryphon joined #mojo
16:32 marty joined #mojo
16:53 kaare joined #mojo
16:53 inokenty-w joined #mojo
16:57 allnighter joined #mojo
16:58 dod joined #mojo
17:00 kaare_ joined #mojo
17:06 kaare joined #mojo
17:11 disputin joined #mojo
17:16 allnighter joined #mojo
17:25 asarch joined #mojo
17:37 PryMar56 joined #mojo
17:49 thowe_work joined #mojo
17:53 thowe_work I have a Mojo::Pg::Results object that I call "arrays" on.  Returns a collection; fine and good...  But then I try to re-use that Results object to run "hash" on (in a while loop, just like the example).  This generates an error.
17:53 thowe_work DBD::Pg::st fetchrow_hashref failed: no statement executing at /home/tim/perl5/perlbrew/perls/perl-5.22.0/lib/site_perl/5.22.0/Mojo/Pg/Results.pm line 22.
17:53 ramortegui joined #mojo
17:54 thowe_work If I don't call arrays before I use "hash", it seems to work fine.  It seems I'm getting to the end of the results and not getting back around to the beginning.  Can I reset to the beginning of the results?  Or is something else going on?
18:00 thowe_work https://gist.github.com/thowe/5884a26dfb6ee160d5c20aa8e1fb6340
18:01 thowe_work I don't get the error if I omit line 19 and the statement that starts on line 23.
18:01 Grinnz_ yes statement handles will only go through the results once. this is how DBI works
18:01 thowe_work OK, how do I go back to the beginning?
18:01 thowe_work so I can go through again?
18:01 Grinnz_ rerun the query, or store the results
18:02 thowe_work hrm, can I clone a copy?
18:02 Grinnz_ for some drivers, the results are only returned row by row
18:02 Grinnz_ so it can't go back
18:02 Grinnz_ that isn't true for postgres AFAIK but it still has to respect the DBI api
18:04 thowe_work well, if I build my hash structure first, I can get the array data I need from it pretty easily.  I just thought there might be a way to go back to the beginning again.
18:05 thowe_work In fact, I probably /should/ do it that way, but I got distracted by the error...
18:11 Grinnz_ yeah, generally, I just store the result arrayref and reuse it
18:12 Grinnz_ some projects may have more memory concerns with storing entire resultsets, but then DBD::Pg isn't very good about that currently anyway
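Grinnz_'s "store the results" advice can be sketched without a database: fetch the rows once, keep the returned arrayref, and build every structure from that stored copy instead of re-reading the exhausted statement handle. The sample rows and (id, name) layout below are invented stand-ins for whatever $results->arrays would have returned:

```perl
use strict;
use warnings;

# $rows stands in for the arrayref a single call to $results->arrays
# would produce. Once it's stored, the statement handle is never
# touched again, so there is no "no statement executing" error.
my $rows = [[1, 'alpha'], [2, 'beta'], [3, 'gamma']];

# First pass: hash keyed by id
my %by_id = map { $_->[0] => $_->[1] } @$rows;

# Second pass over the SAME stored rows, not the handle
my @names = map { $_->[1] } @$rows;

print "$by_id{2} @names\n";    # beta alpha beta gamma
```

The alternative, per the discussion, is simply to rerun the query when a second pass is needed.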
18:25 marty joined #mojo
18:31 allnighter joined #mojo
18:44 bobkare joined #mojo
18:49 jberger do we have a Mojo::Pg favorite way of inserting multiple rows either at once or at least in an easily repetitive way?
18:52 jberger ie can I build a large insert query using things like UNNEST and arrays?
18:52 jberger or could we come up with some analog to repeatedly calling execute?
18:53 jberger obviously I can just call $db->query($sql, @args) repeatedly
18:53 jberger but I was just wondering
18:53 marty joined #mojo
18:55 Grinnz_ in mysql I usually build the insert query manually, but that's due to mysql not having arrays of course
18:56 Grinnz_ don't know if it's at all applicable to postgres, but I find that up to a certain point it's more efficient to do multiple inserts in one query, but with too many you can lock up related queries
18:56 Grinnz_ so i do it in chunks
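A minimal sketch of the chunked multi-row INSERT approach Grinnz_ describes: build "INSERT ... VALUES (?, ?), (?, ?)" statements a chunk at a time so no single insert grows large enough to lock up related queries. The build_inserts helper and the users table are invented for illustration:

```perl
use strict;
use warnings;

# Returns a list of [$sql, \@binds] pairs, one per chunk; each pair
# could then be fed to something like $db->query($sql, @$binds).
sub build_inserts {
    my ($table, $cols, $rows, $chunk_size) = @_;
    my @stmts;
    while (my @chunk = splice @$rows, 0, $chunk_size) {
        my $tuple = '(' . join(', ', ('?') x @$cols) . ')';
        my $sql   = sprintf 'INSERT INTO %s (%s) VALUES %s',
            $table, join(', ', @$cols), join(', ', ($tuple) x @chunk);
        push @stmts, [$sql, [map {@$_} @chunk]];    # SQL + flat binds
    }
    return @stmts;
}

my @stmts = build_inserts('users', ['name', 'age'],
    [['ann', 30], ['bob', 25], ['cyd', 41]], 2);
print $stmts[0][0], "\n";
# INSERT INTO users (name, age) VALUES (?, ?), (?, ?)
```

On Postgres, arrays plus UNNEST (as jberger suggests) would be another route; this placeholder-building style is the database-agnostic fallback.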
19:00 marty joined #mojo
19:01 kaare joined #mojo
19:08 punter joined #mojo
19:31 disputin joined #mojo
20:00 irqq joined #mojo
20:26 ashimema joined #mojo
20:37 allnighter joined #mojo
20:45 tyldis bpmedley: < bpmedley> tyldis: Will you describe your app further?  Does the embedded unit talk to the app after a GET/POST request?
20:45 tyldis bpmedley: It's not HTTP. It's pure JSON to the TCP socket
20:46 tyldis But it obviously just pushes the IP packet up the stack and works packet by packet. The spec specifically says to limit each request to 1260 bytes
20:46 tyldis And you can only have 1 request per packet
20:47 tyldis If you go beyond 1260 bytes you get an error, and if you send two small requests in the same packet (within the byte limit) you get an error.
20:48 tyldis So very finicky.
20:50 tyldis I'll have to look more closely into the drain callback, it became way too late to wrap my head around it yesterday and no time today :(
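The framing rule tyldis describes (exactly one JSON request per write, hard 1260-byte cap) can be sketched with a small client-side guard. The frame_request helper is invented; in the real app each framed request would go out through a stream write with a drain callback before the next one, which is outside this sketch:

```perl
use strict;
use warnings;
use JSON::PP qw(encode_json);    # core module

use constant MAX_BYTES => 1260;  # per the device spec quoted above

# Encode one request and refuse to send anything over the limit,
# so an oversized payload fails loudly on our side instead of
# drawing an error from the device.
sub frame_request {
    my ($request) = @_;
    my $bytes = encode_json($request);
    die "request exceeds 1260 bytes\n" if length($bytes) > MAX_BYTES;
    return $bytes;    # exactly one request, ready for one write
}

my $frame = frame_request({cmd => 'status'});
print length($frame) <= MAX_BYTES ? "ok\n" : "too big\n";    # ok
```

Queueing the next frame_request call from the previous write's drain callback would also satisfy the "one request per packet" rule, since nothing gets coalesced into one buffer.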
20:58 allnighter joined #mojo
21:12 bpmedley tyldis: Sorry, what I'm asking is if you're writing a CLI app or a web app..
21:17 tyldis Web
21:19 bpmedley Is the TCP transaction for a specific request, or a specific set of requests?
21:21 tyldis Same for all requests. The unit does not communicate anything unsolicited, and one request per IP packet.
21:22 bpmedley Would your task be easier by using one of the forking plugins?
21:23 tyldis Hmm... Looking briefly at Mojolicious::Plugin::ForkCall it might appear so
21:23 bpmedley Also ReadWriteFork, I think
21:26 tyldis Thanks a lot, I'll dive into that
21:31 good_news_everyon joined #mojo
21:31 good_news_everyon [mojo] kraih pushed 1 new commit to master: https://git.io/vrEs9
21:31 good_news_everyon mojo/master 59d22bb Sebastian Riedel: fix a few typos
21:31 good_news_everyon left #mojo
22:00 punter joined #mojo
22:02 PryMar56 joined #mojo
22:10 meshl joined #mojo
22:15 disputin joined #mojo
22:46 mishanti1 Fun challenge: getting a medium/small test suite to go fast. Do any of you know if prove does anything that is harmful to performance, and should be avoided?
22:47 pink_mist if prove is your bottleneck, you must have a really quick test-suite already.
22:47 pink_mist you might want to experiment with prove's -j switch to run test files in parallel
22:48 mishanti1 I do not think it is my bottleneck. Just curious if prove is a layer that should be avoided.
22:48 * sri always runs his tests with -j9
22:48 Grinnz_ the big problem has always been that each test file has to reload all the modules, in some test suites where every file loads a whole bunch of modules this can be noticeable
22:48 bpmedley mishanti1: I've used prove before and liked it.
22:48 Grinnz_ there's some solutions for that but not great ones
22:48 mishanti1 Grinnz_: Good tip. I'll look into benchmarking load times.
22:48 sri parallel testing is an absolute must have for any project imo
22:49 mishanti1 bpmedley: Yeah, it does provide some pretty neat features. We've been using it for a few years now.
22:49 Grinnz_ prove itself is nothing more than a testfile runner really, it's not going to introduce anything noticeable
22:49 * sri went to great lengths to make parallel testing easy with Mojo::Pg
22:49 mishanti1 sri: We currently run at -j4 because that's the most optimal on our test hardware.
22:51 sri the mojolicious test suite with 11243 tests in 87 files runs in 14 seconds on my laptop
22:51 sri what are you working with?
22:51 mishanti1 We use a lot of temporary postgres databases when firing up instances for tests, but the creation/destruction of those are not our bottleneck either. Suspected the WAL could play a part, but some postgres profiling proved that to be a false assumption.
22:52 sri i use temporary schemas instead of databases, it's really fast
22:52 mishanti1 sri: You get better speeds than me here. We have 74 test files, 5537 tests spread over 26024 lines of test-code.
22:53 mishanti1 sri: I have been looking into temporary schemas as well. Does give a boost, so we might enable that as our default.
22:54 mishanti1 sri: One other thing you can try if you are curious: turning off logging on a table-by-table basis. WAL does not factor in at all with logging disabled on tables. :)
22:54 mishanti1 `ALTER TABLE foo_bar SET UNLOGGED;`
22:55 mishanti1 Though I suspect it is something with how we have structured our tests that is pestering us.
22:56 sri not really a concern for me
22:57 sri the 364 minion tests for example finish in under a second
22:57 mishanti1 That's pretty cool.
22:58 sri default postgres with a default database, i don't spin up anything for the tests
22:58 sri just one temp schema per test file
22:58 sri and all tests can run parallel against the same database
22:59 sri in their respective schema
23:00 mishanti1 That is pretty similar to the test experiments I had going here. Did improve it somewhat, but I am still not seeing the speed I would expect, so we must be doing something wonky. Currently running some experiments through NYTProf to see where time is spent.
23:00 mishanti1 NYTProf has come quite a way since just a few years ago.
23:00 allnighter joined #mojo
23:00 sri yea, time for nytprof then
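sri's one-temp-schema-per-test-file setup might look roughly like the sketch below; the schema-name derivation and the statements are a guess at the pattern, not Minion's actual test code:

```perl
use strict;
use warnings;
use File::Basename qw(basename);

# Derive a throwaway schema name from the test filename so every .t
# file can run in parallel (prove -j) against the same default
# database, each inside its own schema.
sub schema_setup_sql {
    my ($test_file) = @_;
    (my $schema = basename($test_file, '.t')) =~ s/\W/_/g;
    return $schema,
        "DROP SCHEMA IF EXISTS $schema CASCADE",
        "CREATE SCHEMA $schema";
}

my ($schema, @sql) = schema_setup_sql('t/queue-worker.t');
print "$schema\n";    # queue_worker
```

With Mojo::Pg, each statement would be run via $pg->db->query(...) and the test would then point $pg->search_path(['queue_worker']) at the fresh schema; the DROP ... CASCADE makes reruns after a crashed test cheap.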
23:37 allnighter joined #mojo
23:47 allnighter Hi, I keep seeing this construct in Mojo::ByteStream:
23:47 allnighter $$self = $sub->()
23:47 allnighter https://github.com/kraih/mojo/blob/master/lib/Mojo/ByteStream.pm#L22
23:47 allnighter and https://github.com/kraih/mojo/blob/master/lib/Mojo/ByteStream.pm#L61
23:47 allnighter I understand what it's doing
23:47 allnighter $self is a blessed scalar reference
23:48 allnighter and by dereferencing and assigning a new value we change that reference
23:48 allnighter but $self is still blessed when the function returns
23:48 allnighter is that behaviour documented somewhere in the perl documentation?
23:49 Grinnz_ it's changing the referenced value, not the reference
23:49 marty joined #mojo
23:50 Grinnz_ iow, it's assigning a new value to (some scalar) which $self is a reference to
23:52 allnighter yeah sorry that is what I meant
23:52 allnighter I read the whole perlref doc and didn't find this behavior
23:53 pink_mist the blessing is on the reference, not the underlying scalar or hash or array
23:56 pink_mist consider: my $foo = "bar"; my $obj = \$foo; bless $obj, 'Class';  ... it's $obj that's blessed. $foo is still just a regular scalar containing the string "bar"
23:57 pink_mist if you did $$obj = "baz", $obj is still blessed, and $foo would now contain the string "baz"
23:59 Grinnz_ you can also have, say, a blessed reference and an unblessed reference pointing to the same data structure
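pink_mist's example, made runnable with Scalar::Util::blessed, shows that writing through $$obj swaps the stored value while the blessing survives, which is exactly why the ByteStream methods can mutate in place and still return a usable object ('Demo::Class' is a made-up class name):

```perl
use strict;
use warnings;
use Scalar::Util qw(blessed);

my $foo = 'bar';
my $obj = bless \$foo, 'Demo::Class';

print blessed($obj), "\n";    # Demo::Class
$$obj = 'baz';                # write through the reference
print "$foo\n";               # baz - same underlying scalar changed
print blessed($obj), "\n";    # still Demo::Class
```

So the `$$self = $sub->($$self, @_)` line in Mojo::ByteStream never rebinds or re-blesses anything; it only replaces the value the object wraps.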
