Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2017-08-01


All times shown according to UTC.

Time Nick Message
00:00 buggable It's time for the monthly Accidental /win Lottery! We have 1 ballots submitted by 1 users! DRUM ROLL PLEASE!...
00:00 buggable And the winning number is 23! Congratulations to jnthn! You win a can of WD40!
00:08 vendethiel joined #moarvm
00:18 vendethiel- joined #moarvm
01:52 ilbot3 joined #moarvm
01:52 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
02:18 https_GK1wmSU joined #moarvm
02:20 https_GK1wmSU left #moarvm
02:29 vendethiel joined #moarvm
02:41 MasterDuke joined #moarvm
05:33 brrt joined #moarvm
06:42 brrt joined #moarvm
07:22 brrt good hi #moarvm
07:24 brrt let me ask the stupid question today
07:25 brrt how hard would it be, with lex/yacc in hand, and MoarVM as a library, to make a bootstrap-nqp
07:25 brrt is that even remotely worth the effort
07:25 timotimo huh ...
07:26 brrt basically, i'm asking, how hard would it be to make a compiler for NQP in C, using MoarVM and traditional parsing tools
07:27 timotimo yeah
07:28 brrt that would allow us to rid ourselves of binary moarvm blobs in the nqp codebase
07:29 timotimo yeah
07:30 timotimo hmm. when you have a nqp parser you can potentially compile the qregex engine with that and sidestep having to write Yet Another QRegex Impl
07:35 nwc10 jnthn: callgrind for the setting for your branch http://paste.scsys.co.uk/564738 and for master http://paste.scsys.co.uk/564739 makes it look like the branch should be a winner
07:35 nwc10 so maybe on the *big* stuff it is
07:35 zakharyas joined #moarvm
07:38 timotimo today will be a day of insufferable heat :\
07:40 moritz today will be a day of switching on the AC
07:40 yoleaux 31 Jul 2017 22:19Z <Zoffix> moritz: So "Mark Powers" from Apress emailed me a copy of the book, saying I should post a review on Amazon if I read the book and wanna review it... But where on Amazon do I post the review? I googled the book and found a link, but when I click review it says "This item has not been released yet and is not eligible to be reviewed". I see a couple other reviews there already. What am I doing wrong? :/
07:40 timotimo if i had any
07:41 timotimo moritz: you know about the "protect somethingsomething relax" in one of the hyperlinks? the one for the | syntax
07:42 moritz .tell Zoffix I don't know how they managed that; a review when it's officially released would do nicely
07:42 yoleaux moritz: I'll pass your message to Zoffix.
07:42 moritz timotimo: I have no idea what you're talking about :(
07:43 timotimo 2
07:43 timotimo https://docs.perl6.org/type/IO\protect\char"0024\relaxCOLON\protect\
07:43 timotimo char"0024\relaxCOLONPath
07:43 timotimo one of at least two examples
07:51 * nwc10 is afk for most of the day
08:14 brrt today is not that bad in .nl
08:20 domidumont joined #moarvm
08:26 zakharyas joined #moarvm
08:47 vendethiel joined #moarvm
08:54 jnthn morning o/
08:54 jnthn Urgh, hottest day of the year so far today, probably
08:56 jnthn nwc10: Hm, wow. Wonder why you get different results
08:56 jnthn Maybe my baseline was off or something
09:13 jnthn OK, so timing the spectests and the build it comes out about even
09:13 jnthn But
09:14 jnthn Master does 1281 GC runs
09:14 jnthn The branch does 1429
09:18 jnthn As a sanity check, if I never set the allocate_on_heap flag then it gives the same number of GC runs, so no oddities there
09:36 jnthn OK, I've managed to tweak the criteria so it's around 1310 or so
09:36 jnthn And that seems to be a wallclock winner
09:38 jnthn In part by bumping the decision threshold, but much more because I spotted we would bump the number of promotions whenever there wasn't a spesh cand
09:38 jnthn But that didn't imply we were spesh logging
09:38 jnthn So it could falsely inflate the number of promotions on frames where we didn't log the number of entries
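A minimal sketch of the counting rule described above, with illustrative names rather than the actual MoarVM fields: a heap promotion only counts towards the prediction when the invocation also wrote an ENTRY record to the spesh log (i.e. it has a correlation ID), so the promotion count can no longer outgrow the logged entry count.

    /* Illustrative sketch only; names are assumptions, not MoarVM's. */
    typedef struct {
        unsigned int entries_logged;   /* bumped when an ENTRY record is written */
        unsigned int heap_promotions;  /* bumped when the frame gets promoted    */
    } FrameStats;

    /* A non-zero correlation ID means this invocation was spesh-logged,
     * so an ENTRY record exists; only then does a promotion count. */
    static void note_promotion(FrameStats *stats, unsigned int correlation_id) {
        if (correlation_id)
            stats->heap_promotions++;
    }

    /* The predictor then compares the two counters against a threshold
     * (the 1/2 here is made up, not the tuned value from the commit). */
    static int predict_heap_allocation(const FrameStats *stats) {
        return stats->entries_logged > 0
            && stats->heap_promotions * 2 >= stats->entries_logged;
    }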
09:44 Geth ¦ MoarVM/frame-heap-prediction: f29412f8ad | (Jonathan Worthington)++ | src/core/frame.c
09:44 Geth ¦ MoarVM/frame-heap-prediction: Improve frame heap allocation prediction.
09:44 Geth ¦ MoarVM/frame-heap-prediction:
09:44 Geth ¦ MoarVM/frame-heap-prediction: Only bump the counter of heap promotions when we have a spesh
09:44 Geth ¦ MoarVM/frame-heap-prediction: correlation ID, which means we will have bumped the entries; before
09:44 Geth ¦ MoarVM/frame-heap-prediction: this could get out of sync and over-state the frame's preference.
09:44 Geth ¦ MoarVM/frame-heap-prediction: Also increases the thresholds some. This greatly brings down the
09:44 Geth ¦ MoarVM/frame-heap-prediction: number of added GC runs that this change caused by decreasing the
09:44 Geth ¦ MoarVM/frame-heap-prediction: number of false positives.
09:44 Geth ¦ MoarVM/frame-heap-prediction: review: https://github.com/MoarVM/MoarVM/commit/f29412f8ad
09:45 zakharyas joined #moarvm
10:01 jnthn I'm really curious about the callgrind numbers on this vs master, so will run those while I get on with other stuff :)
10:37 brrt joined #moarvm
11:09 Geth ¦ MoarVM: 53000c87fb | (Jonathan Worthington)++ | 4 files
11:09 Geth ¦ MoarVM: Attach type tuples to callsites.
11:09 Geth ¦ MoarVM:
11:09 Geth ¦ MoarVM: We will be able to use this information to optimize calls where
11:09 Geth ¦ MoarVM: arguments are passed in Scalar containers better than is currently possible.
11:09 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/53000c87fb
11:10 * brrt is watching the graal vm talk on the polyglot conference
11:10 brrt it's interesting
11:10 brrt but
11:10 brrt i'm a bit in panic mode
11:10 jnthn 'cus you have a talk coming up too?
11:15 robertle joined #moarvm
11:18 jnthn So, results of master vs my branch are in
11:18 jnthn Seems with latest improvements I see it as a win too; it's possible I got confused by updating Rakudo at some point and the setting grew in some way
11:19 jnthn Either way, 10 billion cycles less
11:19 jnthn m: say 218 / 228
11:19 camelia rakudo-moar 15b259: OUTPUT: «0.956140␤»
11:19 jnthn On CORE.setting compilation
11:19 jnthn I think that's a win :)
11:20 Geth ¦ MoarVM: 21450df926 | (Jonathan Worthington)++ | 3 files
11:20 Geth ¦ MoarVM: Allocate frames on heap that will need promotion.
11:20 Geth ¦ MoarVM:
11:20 Geth ¦ MoarVM: By trying to use existing promotions as an indicator. This somehow
11:20 Geth ¦ MoarVM: seems to end up doing *worse* for the CORE.setting build, however
11:20 Geth ¦ MoarVM: it looks like it helps `make spectest` wallclock time a bit (though
11:20 Geth ¦ MoarVM: it's all within noise).
11:20 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/21450df926
11:20 Geth ¦ MoarVM: 3d140fdb50 | (Jonathan Worthington)++ | src/core/frame.c
11:20 Geth ¦ MoarVM: Improve frame heap allocation prediction.
11:20 Geth ¦ MoarVM:
11:20 Geth ¦ MoarVM: Only bump the counter of heap promotions when we have a spesh
11:20 Geth ¦ MoarVM: correlation ID, which means we will have bumped the entries; before
11:20 Geth ¦ MoarVM: this could get out of sync and over-state the frame's preference.
11:20 Geth ¦ MoarVM: Also increases the thresholds some. This greatly brings down the
11:20 Geth ¦ MoarVM: number of added GC runs that this change caused by decreasing the
11:20 Geth ¦ MoarVM: number of false positives.
11:20 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/3d140fdb50
11:20 jnthn So, rebased and pushed
11:22 jnthn lunch
11:34 brrt jnthn: yes
12:09 jnthn brrt: Good luck with it :)
12:10 brrt thanks :-)
12:10 brrt thing is, i'm covering much of the same ground, i guess, only my approach is quite different
12:12 AlexDaniel joined #moarvm
12:13 jnthn Righty, time to try and get spesh fully back on its feet again with regards to Perl 6 code
12:16 vendethiel joined #moarvm
12:25 jnthn Grr, why do things always have to go wrong in 200+ BB frames :P
12:34 brrt bisect!
12:34 brrt actually, you could make a spesh-bisect tool pretty simply, i'd think
12:35 brrt provided you give them a sequence number and use blocking
12:37 jnthn Oh, it goes away with the JIT
12:37 jnthn Uh, with the JIT disabled
12:37 jnthn I wonder if padding a deopt offset of -1 is normal...
12:38 jnthn Ah, it's unused if we have no inlines
12:42 jnthn That's odd
12:42 jnthn Deopt one requested by JIT in frame 'variable_declarator' (cuid '322') (-1 -> 4384)
12:42 jnthn but
12:43 jnthn Deopt one requested by interpreter in frame 'variable_declarator' (cuid '322')
12:43 jnthn Will deopt 4398 -> 4378
12:43 jnthn I don't see why the target index should be different
12:45 jnthn ohh
12:45 jnthn So as background, I just enabled having decont as a deoptonepoint
12:46 jnthn So in the pre-opt graph there's
12:46 jnthn [Annotation: INS Deopt One (idx 119 -> pc 4378; line 2977)]
12:46 jnthn [Annotation: Logged (bytecode offset 4370]
12:46 jnthn getlex          r10(74), lex(idx=1,outers=0,$ast)
12:46 jnthn [Annotation: INS Deopt One (idx 120 -> pc 4384; line 2977)]
12:46 jnthn decont          r35(14), r13(74)
12:47 jnthn After optimization it's
12:47 jnthn [Annotation: Logged (bytecode offset 4370]
12:47 jnthn sp_getlex_o     r10(74), lex(idx=1,outers=0,$ast)
12:47 jnthn [Annotation: INS Deopt One (idx 120 -> pc 4384; line 2977)]
12:47 jnthn [Annotation: INS Deopt One (idx 119 -> pc 4378; line 2977)]
12:47 jnthn sp_guardconc    r10(74), sslot(7)
12:47 jnthn Note the deopt point moved
12:48 jnthn And now two are on the same instruction
13:02 brrt yes
13:02 brrt that's … oh
13:02 brrt that might upset some expectations
13:03 jnthn Yeah, seems that being sure to append rather than prepend the annotations will do it though
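A small sketch of why the append-versus-prepend distinction matters, using illustrative structures rather than MoarVM's spesh types: annotations hang off an instruction as a linked list and the first matching deopt-one annotation wins, so prepending a moved annotation lets the later deopt target shadow the earlier one, and code between the two pcs would be skipped after deopt.

    #include <stddef.h>

    /* Illustrative only, not the MoarVM spesh data structures. */
    typedef struct Ann {
        unsigned int deopt_target_pc;  /* where execution resumes after deopt */
        struct Ann  *next;
    } Ann;

    typedef struct {
        Ann *annotations;              /* first matching annotation wins */
    } Ins;

    /* Prepending puts the moved annotation in front of the one already on
     * the instruction, so the later pc would take precedence. */
    static void move_annotation_prepend(Ins *to, Ann *ann) {
        ann->next = to->annotations;
        to->annotations = ann;
    }

    /* Appending keeps the earliest annotation first, which is the behaviour
     * the fix below establishes. */
    static void move_annotation_append(Ins *to, Ann *ann) {
        Ann **tail = &to->annotations;
        while (*tail)
            tail = &(*tail)->next;
        ann->next = NULL;
        *tail = ann;
    }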
13:08 Geth ¦ MoarVM: a11b5f1e7e | (Jonathan Worthington)++ | src/spesh/deopt.c
13:08 Geth ¦ MoarVM: Improve deopt logging output.
13:08 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/a11b5f1e7e
13:11 Geth ¦ MoarVM: 3932a0f749 | (Jonathan Worthington)++ | src/spesh/manipulate.c
13:11 Geth ¦ MoarVM: Append, not prepend, moved deopt one annotations.
13:11 Geth ¦ MoarVM:
13:11 Geth ¦ MoarVM: So that the earliest one takes precedence, not the latest one.
13:11 Geth ¦ MoarVM: Otherwise, we might skip over code that we really should run after
13:11 Geth ¦ MoarVM: doing the deopt.
13:11 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/3932a0f749
13:11 Geth ¦ MoarVM: 53ff82e417 | (Jonathan Worthington)++ | 2 files
13:11 Geth ¦ MoarVM: Turn decont into a deoptonepoint.
13:11 Geth ¦ MoarVM:
13:11 Geth ¦ MoarVM: This means that we will guard types resulting from a decont, which
13:11 Geth ¦ MoarVM: should in turn help us do a better job of Perl 6 code without the
13:11 Geth ¦ MoarVM: dubious aliasing handling of before.
13:11 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/53ff82e417
13:13 AlexDaniel joined #moarvm
13:27 jnthn ahh...interesting problem
13:28 jnthn If we enter a hot loop - typically the main loop of a benchmark - at a point where we ran out of spesh log, then we'll miss an ENTRY record and similar for it
13:28 jnthn And so the OSR points will go nowhere
13:29 jnthn And we won't optimize it
13:29 brrt .oO(Now here is nowhere)
13:29 * jnthn ponders
13:30 jnthn The obvious heuristic for when this might be about to happen is entering a new compilation unit for the first time
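Roughly how that heuristic could look, as a sketch with made-up names (the actual change is in the commit below): when a frame from a compilation unit that has never run before is entered, insist on logging even if the current log buffer is exhausted, so the outermost hot loop still gets an ENTRY record and usable OSR points.

    #include <stdbool.h>
    #include <stddef.h>

    /* Sketch only; fields and names are assumptions, not MoarVM's. */
    typedef struct {
        bool entered_before;        /* has any frame of this compunit run yet? */
    } CompUnit;

    typedef struct {
        size_t used, limit;         /* spesh log buffer fill level and size    */
        bool   force_logging;       /* log anyway, e.g. by taking a new buffer */
    } SpeshLogState;

    static void on_frame_entry(SpeshLogState *log, CompUnit *cu) {
        if (!cu->entered_before) {
            cu->entered_before = true;
            if (log->used >= log->limit)
                log->force_logging = true;  /* ensures an ENTRY record is written */
        }
    }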
13:43 ilmari[m] joined #moarvm
13:45 nwc10 jnthn: not afk as much as planned.
13:45 nwc10 window for your branch has MVM_SPESH_BLOCKING=1
13:45 nwc10 other window that built master did not
13:51 jnthn ah
13:57 Geth ¦ MoarVM: a79f22b120 | (Jonathan Worthington)++ | 2 files
13:57 Geth ¦ MoarVM: Force logging when entering a new compunit.
13:57 Geth ¦ MoarVM:
13:57 Geth ¦ MoarVM: This ensures that we log the outermost loops of a compilation unit,
13:57 Geth ¦ MoarVM: and so will reliably be able to OSR hot tight outer loops of
13:57 Geth ¦ MoarVM: benchmarks.
13:57 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/a79f22b120
13:57 jnthn That works pretty effectively :)
14:07 vendethiel can't this function be called from multiple threads?
14:07 vendethiel nvm
14:09 zakharyas joined #moarvm
15:15 zakharyas joined #moarvm
15:15 brrt joined #moarvm
15:36 lizmat joined #moarvm
15:49 AlexDaniel joined #moarvm
16:01 Geth ¦ MoarVM: ee44adbe6a | (Jonathan Worthington)++ | 3 files
16:01 Geth ¦ MoarVM: Also handle case of nearly full spesh log.
16:01 Geth ¦ MoarVM:
16:01 Geth ¦ MoarVM: So that we can better collect data for new compilation units in this
16:01 Geth ¦ MoarVM: case also.
16:01 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/ee44adbe6a
16:01 Geth ¦ MoarVM: 9b166512da | (Jonathan Worthington)++ | 7 files
16:01 Geth ¦ MoarVM: Change the way we log returns.
16:01 Geth ¦ MoarVM:
16:01 Geth ¦ MoarVM: The previous way was fragile and could lead to us getting bogus depth
16:01 Geth ¦ MoarVM: counts, which then confused the order that specialization was done in
16:01 Geth ¦ MoarVM: (not to mention anyone reading the spesh log). It also would block any
16:01 Geth ¦ MoarVM: effort to try and carry over the simulated stack between processing of
16:01 Geth ¦ MoarVM: logs.
16:01 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/9b166512da
16:23 Geth ¦ MoarVM: 10203d7196 | (Jonathan Worthington)++ | 3 files
16:23 Geth ¦ MoarVM: Don't let compunit bonus logs inflate quota.
16:23 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/10203d7196
16:23 Geth ¦ MoarVM: 99332f234c | (Jonathan Worthington)++ | 2 files
16:23 Geth ¦ MoarVM: Tweak compunit heuristic for EVAL/BEGIN-heavy code
16:23 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/99332f234c
16:26 jnthn Well, didn't quite end up doing exactly what I expected this afternoon, but still, improvements are improvements :)
16:26 jnthn Will work on invocation spesh tomorrow
16:38 zakharyas joined #moarvm
16:48 robertle joined #moarvm
17:20 zakharyas joined #moarvm
17:25 hoelzro joined #moarvm
19:16 markmont joined #moarvm
19:41 zakharyas joined #moarvm
20:20 AlexDaniel joined #moarvm
20:44 markmont There was talk on perl6-compiler a few days ago about Rakudo Star on systems that enforce W^X, see near the end of http://www.nntp.perl.org/group/perl.perl6.compiler/2017/07/msg16180.html
20:45 markmont timotimo made a change to prevent MoarVM from breaking on systems that do not allow executable heaps: http://www.nntp.perl.org/group/perl.perl6.compiler/2017/07/msg16195.html
20:45 timotimo yup, i'm here
20:45 markmont That still leaves executable stacks.  One solution is to build MoarVM with libffi 3.1 or later.
20:46 timotimo yup
20:46 jnthn Pretty sure that comes via dyncall and is accidental; it's been worked on a bunch upstream.
20:46 markmont But I just looked into dyncall.  As it turns out, updating to the latest (unreleased head) dyncall solves the execstack problem and all Perl 6 spectests pass, see https://gist.github.com/markmont/4a75789703db39315dc64dfafed7ecbc
20:46 timotimo whoa
20:46 timotimo that's cool
20:46 timotimo so we don't have to tell people to pass a non-standard configure flag
20:47 markmont Alternatively, we could just apply these changes to the version of dyncall MoarVM uses and, again, all spectests pass:  http://hg.dyncall.org/pub/dyncall/dyncall/rev/03f0b683918a
20:47 timotimo i do believe we can just bump up the dyncall submodule
20:47 markmont I'm just throwing this out there in case someone wants to either update the version of dyncall or just apply the patch.
20:47 jnthn timotimo: Yeah, I think so
20:47 timotimo i.e. no patches that dyncall doesn't have upstream
20:48 jnthn I think bumping our bundled version is fine.
20:48 markmont Here's the main stuff we get if we bump up dyncall:
20:49 markmont no execstack needed, ARM64 support, PPC64 support, PPC32 support under Linux, and improved stability/reliability for Mach-O.
20:49 markmont well, that's all in dyncall.  But it could lay some foundations for subsequent MoarVM work.
20:49 jnthn Sounds worthwhile then :)
20:50 markmont One potential concern is that there hasn't been a release of dyncall in ~18 months; we'd be using an unreleased head.
20:50 jnthn We're a bit before release, so now sounds like a decent time to bump to latest dyncall
20:50 jnthn True, though there's configure options for people who want to use an installed dyncall if packaging.
20:51 jnthn For those doing a source build, probably this just makes things work better.
20:54 markmont Let me know if I can do anything else at this point.  So far, I've only tested on Linux x86_64.  Otherwise, thanks for being open and looking into this!
20:54 jnthn timotimo: One question on your patch: I think it doesn't impact moar --version?
20:54 jnthn In that it'd still say "built with JIT support"?
20:54 timotimo oh
20:55 timotimo we can totally have a piece of output for that
20:55 jnthn Just that people will get a notable performance slowdown if they have their OS configured so we "can't" JIT
20:55 timotimo there's still that workaround with the temporary files
20:55 jnthn And it's nice if it registers in the --version
20:55 timotimo i'll patch something up for the --version
20:55 * jnthn wonders if this is a common configuration
20:55 timotimo "built with JIT support, but operating system forbids it" or something
20:55 jnthn Temporary file sounds...uh...interesting
20:56 timotimo wellllllll, given i had four different programs crash on me in the background while i had that option enabled ... :)
20:56 jnthn hah :)
20:56 timotimo (thunderbird exploded, a google chrome extension or tab or something, something from kde, and something else)
20:56 jnthn I guess we'd have to be darn careful about perms on the file
20:57 timotimo we're supposed to create a temp file and immediately unlink it
20:57 timotimo (but also mmap it twice in different modes)
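For reference, a sketch of that unlinked-tempfile workaround (this is not what MoarVM does today, and error cleanup is trimmed): the same file backs two mappings, one writable for emitting code and one executable for running it, so no single mapping is ever writable and executable at once.

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    int alloc_dual_mapping(size_t size, void **rw_out, void **rx_out) {
        char path[] = "/tmp/jit-XXXXXX";
        int fd = mkstemp(path);         /* created with 0600 perms          */
        if (fd < 0)
            return -1;
        unlink(path);                   /* name vanishes immediately        */
        if (ftruncate(fd, (off_t)size) < 0) {
            close(fd);
            return -1;
        }
        void *rw = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        void *rx = mmap(NULL, size, PROT_READ | PROT_EXEC,  MAP_SHARED, fd, 0);
        close(fd);                      /* mappings keep the backing alive  */
        if (rw == MAP_FAILED || rx == MAP_FAILED)
            return -1;
        *rw_out = rw;                   /* write the generated code here... */
        *rx_out = rx;                   /* ...and call it through here      */
        return 0;
    }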
20:57 markmont Here's how dyncall does it using mmap():  http://hg.dyncall.org/pub/dyncall/dyncall/file/3581366858a6/dyncallback/dyncall_alloc_wx_mmap.c
20:57 jnthn Otherwise we risk turning a program with a path injection vuln into one that potentially has the ability to run arbitrary machine code...
20:59 jnthn oh, /dev/zero :)
20:59 jnthn Cute :)
20:59 timotimo that seems clever
20:59 jnthn Yeah, that makes it sound less like a security hole waiting to happen :)
21:00 timotimo and systems that don't have /dev/zero probably don't have execmem rejection
21:00 jnthn I suspect this bites most people doing JITs at some point :)
21:01 timotimo i would assume so
21:02 timotimo i don't understand why this works but our code doesn't, though
21:02 timotimo what their code does is just have one mmap that starts out as RDWR and gets turned read + execute later
21:03 timotimo that's what we do? but mprotect explodes in our faces when we try that?
21:03 timotimo (it gently explodes, of course)
21:03 timotimo markmont: can you shed some light perhaps?
21:04 markmont timotimo: I'm out of my depth here I'm afraid, I'm more of a sysadmin than a programmer.  But I'll look.
21:05 timotimo i'll point you at the code where we do the thing
21:06 timotimo https://github.com/MoarVM/MoarVM/blob/master/src/jit/compile.c#L68
21:06 timotimo though you probably saw the patch itself, too
21:06 timotimo and these platform functions live in ...
21:06 timotimo https://github.com/MoarVM/MoarVM/blob/master/src/platform/posix/mmap.c
21:07 markmont timotimo:  yep!  And thanks for the links.  I'll take a look at those and MVM_platform_alloc_pages() and MVM_platform_set_page_mode().
21:10 markmont oh, those functions are pretty simple :)
21:14 timotimo yup, really just prettier than ifdefs
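For comparison, a sketch of the single-mapping strategy being discussed here, with hypothetical helpers rather than MoarVM's exact MVM_platform_* functions: allocate pages read/write, copy the generated code in, then mprotect() the same pages to read+execute. That protection flip is precisely what a W^X or SELinux execmem policy can refuse.

    #include <stddef.h>
    #include <string.h>
    #include <sys/mman.h>

    void *emit_and_seal(const void *code, size_t size) {
        void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED)
            return NULL;
        memcpy(mem, code, size);                     /* emit into RW pages     */
        if (mprotect(mem, size, PROT_READ | PROT_EXEC) != 0) {
            munmap(mem, size);                       /* the policy said no     */
            return NULL;
        }
        return mem;                                  /* callable, not writable */
    }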
21:16 MasterDuke jnthn: i just tried doing a --profile-compile of the rakudo build and got immediate segvs, but i thought you'd fixed that?
21:19 jnthn MasterDuke: I fixed some profiler stuff, didn't try anything that sizeable with it. I think timotimo fixed something recently also
21:19 MasterDuke fyi, i was on moarvm HEAD
21:20 timotimo yup i see it
21:21 timotimo it explodes in invoke_o
21:21 timotimo code is a null object there
21:24 geekosaur joined #moarvm
21:25 markmont timotimo: I've looked at everything and for the case where the system has both mprotect and MAP_ANON, it looks like MoarVM is doing exactly what dyncall does.  I've got to leave, but I'll poke at this in detail over the next few days if no one beats me to it.
21:26 timotimo thanks!
21:51 geekosaur joined #moarvm
22:03 markmont joined #moarvm
22:33 lizmat joined #moarvm
