Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2014-05-14


All times shown according to UTC.

Time Nick Message
01:06 FROGGS_ joined #moarvm
01:10 woosley left #moarvm
01:25 woosley joined #moarvm
01:26 jnap joined #moarvm
02:58 btyler joined #moarvm
06:46 FROGGS joined #moarvm
06:51 woolfy joined #moarvm
06:55 zakharyas joined #moarvm
07:07 oetiker joined #moarvm
07:21 woolfy left #moarvm
07:45 brrt joined #moarvm
07:46 brrt \o #moarvm
07:46 FROGGS o/ brrt
07:47 brrt (webchat clients ftw :-))
07:47 FROGGS :o)
07:47 FROGGS true about that
07:48 brrt i've a blog post in my notebook about how nice it is that moar is a 'register' vm for purposes of compilation
07:48 brrt i don't have my notebook right now, so i can't type it up
07:50 brrt if jnthn is here, i've read about how invoke works for REPR's
07:50 brrt long story short, i think it'd be sensible to create a jitcode repr
08:11 lizmat joined #moarvm
08:25 jnthn brrt: Potentially
08:29 FROGGS hi jnthn
08:35 brrt but.... i still find it very hard to think about :-)
08:35 brrt the reasoning is as follows: using a repr, we can provide for the invoke method and get passed all the right arguments
08:36 brrt using a repr, we can use our own stack map walking (for gc), if we choose to do so
08:36 brrt we get a great amount of liberty for what seems like a small price
08:37 brrt and i don't think the core interpreter will have to be changed by a great amount, which is always nice
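
A rough C sketch of the jitcode REPR idea brrt outlines above (all type and function names are illustrative stand-ins, not the actual MoarVM API): the body carries a machine-code entry point plus whatever the GC would need, and the invoke handler receives the usual invocation arguments and simply jumps into that code, so the core interpreter needs no special casing.

    #include <stddef.h>

    /* Forward declarations; only used through pointers in this sketch. */
    typedef struct MVMThreadContext MVMThreadContext;
    typedef struct MVMCallsite      MVMCallsite;
    typedef union  MVMRegister      MVMRegister;

    typedef void (*JitEntryFn)(MVMThreadContext *tc, MVMCallsite *cs,
                               MVMRegister *args);

    typedef struct {
        JitEntryFn  entry;      /* generated machine code entry point     */
        void       *stack_map;  /* hypothetical: data for GC root walking */
        size_t      code_size;  /* size of the generated code blob        */
    } JitCodeBody;

    typedef struct {
        void       *common;     /* stand-in for the usual object header   */
        JitCodeBody body;
    } JitCodeObject;

    /* Hypothetical invoke handler installed for the jitcode REPR: it gets
     * exactly the arguments any other invocation would. */
    static void jitcode_invoke(MVMThreadContext *tc, JitCodeObject *invokee,
                               MVMCallsite *cs, MVMRegister *args) {
        invokee->body.entry(tc, cs, args);
    }
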
09:32 zacts joined #moarvm
12:14 colomon joined #moarvm
13:10 colomon joined #moarvm
13:13 lizmat joined #moarvm
13:43 btyler joined #moarvm
13:56 woolfy joined #moarvm
13:57 woolfy joined #moarvm
14:00 tgt joined #moarvm
15:42 woolfy left #moarvm
15:43 FROGGS joined #moarvm
16:24 nwc10 jnthn: does anything in NQP write MVMIterBody from NQP code?
16:24 nwc10 ie, is my SEGV because NQP has a different idea about alignment of C structs than the C compiler?
16:30 jnthn No, it's always iterated through repr ops
16:30 jnthn Or other things in MVMIter.c
16:31 jnthn In fact I don't think anywhere pokes into it except things in MVMIter.c
16:31 nwc10 OK. hmmm
16:32 nwc10 OK, problem, I think, is this:
16:32 nwc10 #define OBJECT_BODY(o)   (&(((MVMObjectStooge *)(o))->data))
16:33 nwc10 couple that with this:
16:33 nwc10 struct MVMIter { MVMObject common; MVMIterBody body;
16:33 nwc10 };
16:33 nwc10 oh, OM NOM NOM goes irssi
16:34 nwc10 botherit:
16:34 nwc10 (gdb) p &(((MVMObjectStooge *)(o))->data)
16:34 nwc10 No symbol "MVMObjectStooge" in current context.
16:36 jnthn Ah, and MVMIterBody doesn't align the same way as a void*?
16:36 nwc10 exactly
16:37 nwc10 (gdb) p data - (char *)0xb62f8d3c
16:37 nwc10 $32 = 20
16:37 nwc10 (gdb) p &((MVMIter *)0xb62f8d3c)->body
16:37 nwc10 $34 = (MVMIterBody *) 0xb62f8d54
16:37 nwc10 er, hangon
16:37 nwc10 that doesn't quite say it
16:37 nwc10 (gdb) p data
16:37 nwc10 $35 = (void *) 0xb62f8d50
16:37 nwc10 there, difference of 4
16:37 nwc10 OK. problem is clear
16:38 nwc10 solution not.
16:39 nwc10 might be that the STable or REPR or something needs to have an entry for the offset of the body
16:42 jnthn I'd rather a solution where we force all objects to align their body the same amount off...
16:48 tgt joined #moarvm
16:52 nwc10 that is potentially wasteful of 4 bytes of memory
16:52 nwc10 the trick would be to put the body in a union with MVMuint64 and MVMnum64
16:53 nwc10 I think. As one of those would have the 64 bit alignment constraint (on the fun platforms)
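
A self-contained C sketch of the mismatch nwc10 describes and of the union trick he proposes (struct names are stand-ins, not the real MoarVM definitions): on a 32-bit ABI that gives 64-bit values 8-byte alignment, a body containing a 64-bit member starts later than OBJECT_BODY()'s void*-based guess, and unioning the body with the widest scalar types forces every body to the same offset.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct { void *stable; uint32_t flags; } Header;   /* header stand-in           */
    typedef struct { Header common; void *data; } Stooge;      /* what OBJECT_BODY assumes  */
    typedef struct { uint64_t elems; } IterBody;                /* body with a 64-bit member */
    typedef struct { Header common; IterBody body; } Iter;

    /* The suggested fix: force the strictest alignment on every body by
     * unioning it with 64-bit integer and float members. */
    typedef union { IterBody body; uint64_t align_i; double align_n; } AlignedBody;
    typedef struct { Header common; AlignedBody u; } IterAligned;

    int main(void) {
        /* On the "fun platforms" the first two offsets differ by 4. */
        printf("assumed body offset: %zu\n", offsetof(Stooge, data));
        printf("actual  body offset: %zu\n", offsetof(Iter, body));
        printf("aligned body offset: %zu\n", offsetof(IterAligned, u));
        return 0;
    }
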
16:53 FROGGS joined #moarvm
16:57 retupmoca jnthn: I got a spesh segfault with one of my precompiled modules; this fixes it: https://github.com/retupmoca/MoarVM/compare/MoarVM:master...master
16:57 retupmoca jnthn: but I don't know if that is just pointing to something else being wrong
16:58 jnthn That's probably hiding the problem.
16:58 retupmoca that's what I figured
17:01 daxim joined #moarvm
17:11 ggoebel111115 joined #moarvm
17:13 ggoebel111115 o/
17:13 FROGGS hi ggoebel111115
17:16 dalek MoarVM: 950f640 | jnthn++ | src/6model/sc.c:
17:16 dalek MoarVM: Revert accidental commenting out of code.
17:16 dalek MoarVM:
17:16 dalek MoarVM: mls++ for noticing.
17:16 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/950f6408eb
17:38 retupmoca jnthn: with 950f640 my panda starts failing again :)
17:44 jnthn retupmoca: oooh
17:44 jnthn That's good.
17:45 jnthn Now we know the failures aren't to do with spesh
17:45 jnthn Can you describe the failure? And how to reproduce it?
17:45 jnthn 'cus it built fine here with that patch...
17:53 FROGGS jnthn: that MAST::Local seems to work for labels on nqp-m so far...
17:53 jnthn yay :)
17:55 retupmoca jnthn: ./rebootstrap.pl ends up giving me 't/builder.t .... No subtests run'
17:55 jnthn Aww
17:55 jnthn Here it works reliably.
17:56 jnthn Very weird.
17:56 jnthn What if you try to run the test file manually?
17:57 retupmoca before precomp: works fine
17:57 retupmoca after precomp: segfault
17:58 retupmoca (segfault before any output, in fact)
17:59 jnthn Ah...
17:59 jnthn And that null check is what fixed it?
17:59 jnthn Or at least, worked around it...
18:00 retupmoca yeah, looks like this segfaults in the same place my module did
18:00 retupmoca src/core/interp.c:4292
18:00 retupmoca check is null on that line
18:02 jnthn Any chance you could revert 4785424 and try it?
18:05 retupmoca looks like it breaks in the same place
18:06 jnthn hmm
18:06 jnthn And I guess reverting 375f5d7 fixes it?
18:09 retupmoca apparently not
18:10 retupmoca although I didn't rebuild nqp/rakudo - just moarvm. idk if that makes a difference
18:11 jnthn huh...that's the exact code the other patch commented out...
18:12 retupmoca moarvm 2360aaa is apparently still causing segfault? maybe I do need to rebuild nqp and rakudo
18:13 retupmoca jnthn: but I'm going to have to look at it later - out of time for now
18:13 jnthn ok
18:31 btyler jnthn: what information do you need re the panda thing? latest moar (with that "cached" line no longer commented out), panda segfaults in the same place as before
18:31 jnthn btyler: Yes, but retupmoca said the commit with it in also did...
18:31 jnthn So I (a) can't reproduce it myself, and (b) am getting conflicting info about it.
18:31 jnthn Which doesn't bode too well for a fix.
18:32 btyler yikes. ok. I'll put moar back at 2360aaa and rebuild things and try again
18:33 btyler unless there's a different course of action that would yield more useful information
18:35 jnthn That's worth a try.
18:38 btyler moar@2360aaa (without having rebuilt nqp or rakudo) let panda build/test/rebootstrap successfully. it -did- still segfault before I removed the blibs from earlier
18:38 btyler retupmoca: is there a chance that panda still had blibs from moar 950f64 when you tried?
18:40 jnthn Well, I did rebootstrap...I thought that rebuilds all the things?
18:41 jnthn oh, you were asking retupmoca :)
18:44 retupmoca btyler: ooh, it did have blibs from before
18:44 retupmoca this is what I get for trying to debug at $dayjob
18:49 btyler woo, ok, so we're not giving conflicting info anymore, I think
18:50 jnthn ah, ok
18:51 jnthn Sadly, a fresh build at HEAD of everything doesn't show the issue
19:11 nwc10 jnthn: why does MoarVM really need the body of all objects to start at the actual same offset?
19:11 nwc10 (rather than, say, having the offset in the STable)
19:24 jnthn nwc10: I'm not sure it really needs it; more that it's making something that can be a constant instead be a variable, which will mean extra dereferences, and that something JITting a late-bound REPR call would have to emit code to fetch and add it rather than adding a constant offset, etc.
19:25 jnthn nwc10: So it's really a performance/simplicity argument.
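
A tiny sketch of the two body-lookup strategies being weighed (stand-in names, not the real MoarVM code): a fixed body offset lets the interpreter and any JIT add a compile-time constant, while an offset stored per-type in the STable costs an extra load, and emitted code for a late-bound REPR call has to fetch and add it.

    typedef struct { unsigned body_offset; /* ... */ } STableSketch;
    typedef struct { STableSketch *st;     /* ... */ } ObjectSketch;

    /* Constant offset: one add, can be baked into emitted code. */
    #define BODY_FIXED(o)    ((void *)((char *)(o) + sizeof(ObjectSketch)))
    /* Per-type offset: an extra dereference, plus a fetch-and-add in JITted code. */
    #define BODY_VARIABLE(o) ((void *)((char *)(o) + (o)->st->body_offset))
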
19:34 jnthn btyler, retupmoca: I can get a SEGV on my Linux VM
19:34 jnthn Compiling lib/Panda/Builder.pm to mbc
19:34 jnthn Segmentation fault
19:34 jnthn Is that the same place?
19:40 brrt joined #moarvm
19:43 btyler yep, that's the one
19:53 jnthn OK. Teaching and bad sleep has kinda taken it out of me, but I have a VM just the same on my laptop which is coming with me to plpw/czpw etc :)
19:56 woolfy joined #moarvm
19:58 dalek MoarVM/loop_labels: 845a51b | (Tobias Leich)++ | / (6 files):
19:58 dalek MoarVM/loop_labels: keep loop label as a MAST::Local
19:58 dalek MoarVM/loop_labels:
19:58 dalek MoarVM/loop_labels: Before it was the memory address from the time the label was attached
19:58 dalek MoarVM/loop_labels: to the block. Now we are safe when the label gets moved due to the gc.
19:58 dalek MoarVM/loop_labels: review: https://github.com/MoarVM/MoarVM/commit/845a51ba09
20:00 btyler jnthn: rest up, eh :) let me know if there's anything else I can do (short of becoming a mega C hacker) to help isolate it
20:02 nwc10 jnthn: OK. The counter is that currently it will burn memory (4 bytes, in quite a few objects) on any 32 bit platform which aligns 64-bit values to 64 bits
20:02 nwc10 until such time as the object header drops by 4 bytes again
20:03 nwc10 which I think you said you thought would be possible with a more funky algorithm
20:03 nwc10 (replacing that union of two 32 bit values with something smaller)
20:10 timotimo we just need to find something fun to put into these 4 bytes!
20:10 nwc10 no, we want to put it on a diet to shed 4 bytes, and make everything work better
20:11 timotimo hmm
20:12 FROGGS we could put LOVE in those four bytes though
20:12 jnthn So they'd be...love bytes? :P
20:12 * nwc10 groans
20:12 FROGGS MoarVM - Where there is guaranteed LOVE in every object header!
20:13 nwc10 also, clearly I should go to bed. I'm not awake enough to spot the feed lines
20:13 timotimo LOVE in utf8 is 4 bytes
20:13 FROGGS so LOVE in utf8 is just a four letter word?
20:14 jnap joined #moarvm
20:19 brrt btyler, you can report it on github, maybe other people less overworked than jnthn can pick it up :-)
20:19 brrt e.g. myself :-)
20:38 woolfy joined #moarvm
21:07 brrt offtopic: i'm desperate for a word processor-like text editor for linux that doesn't suck
21:07 brrt for windows is also ok
21:07 brrt i'm very near the conclusion that i'll have to write one myself
21:08 btyler the biggest yaks need the most shaving
21:09 btyler (come to the vim side, we have cookies, yada yada yada)
21:14 TimToady reimplement libre-office in P6?
21:15 btyler also, there was a really neat write up about the new LLVM-based compiler in safari, using multiple levels of increasingly better code gen (which take longer, and thus are only switched into once code has been proven sufficiently hot). https://www.webkit.org/blog/3362/introducing-the-webkit-ftl-jit/
21:15 btyler *js compiler
21:15 btyler a lot of it went over my head, but some of it sounded familiar based on talk about spesh and such I'd heard here and in #perl6
21:19 brrt i use vim :-) and emacs both
21:20 brrt btyler, i'm of course a horrible cynic, but i'm not all that impressed by adding a 4-stage compiler
21:20 brrt i mean, scala has 47 stages or so? :-p
21:22 brrt and they indeed seem to be using much the same techniques, except that they do it by wiring yet another backend into their system
21:22 btyler brrt: sure, and I'm not too knowledgeable about scala, but it sounded pretty neat to me given how slippery js is
21:23 brrt (which is much more mature than moarvm will be in the foreseeable future, though)
21:23 brrt hmm..
21:24 brrt personally i think luajit (http://luajit.org/) is much more impressive than 'lets wire llvm up to our javascriptcore'
21:24 btyler luajit is pretty incredible, yeah
21:24 btyler and I say that in almost complete ignorance of its internals, just based on the performance
21:26 brrt keep in mind that they had to make quite a few sacrifices to do so - as you said, 4 levels of compilation, but also a conservative gc, and no less than 4 deoptimizer / stack replacement structures
21:26 brrt (safari ftl team, that is)
21:27 dalek MoarVM: 6fd8cd1 | skids++ | src/math/bigintops.c:
21:27 dalek MoarVM: Fix bigint bitops.  Really they need to be redone by hand because
21:27 dalek MoarVM: there is a disgusting level of inefficiency introduced by trying to use
21:27 dalek MoarVM: the libtomath API at too high a level for something for which it was
21:27 dalek MoarVM: not designed to do.
21:27 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/6fd8cd12e2
21:27 dalek MoarVM: 13a1234 | jonathan++ | src/math/bigintops.c:
21:27 dalek MoarVM: Merge pull request #96 from skids/master
21:27 dalek MoarVM:
21:27 dalek MoarVM: Fix bigint bitops.
21:27 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/13a123465a
21:27 jnthn 4 deoptimizer/OSR things sounds terrifying. :)
21:28 jnthn I mean, the MoarVM one so far wasn't too hard to do, but it doesn't have to un-inline yet...
21:28 brrt i'm not /sure/ thats what they're doing, but i believe so, yeah
21:28 brrt and i'm assuming they either always go back to an interpreter or to the 'simpler level' - but i'm not sure about that, either
21:29 brrt i think that if i were on that team i'd push for better optimization in the existing (3 stage) interpreter rather than adding yet another layer that is known for being incompatible with GC
21:29 TimToady It's turtles all the way back up too!
21:29 brrt that made me smile :-)
21:30 jnthn :P
21:30 jnthn Yeah, well, I chose precise GC for Moar, so we gotta live with that everywhere now. :)
21:31 jnthn I guess the advantage of LLVM is it's got a bunch of knowledge about things like instruction scheduling on different architectures, which is tedious to replicate.
21:31 brrt thats true
21:32 brrt and well, perhaps about apple's own ARM architecture too?
21:32 jnthn One'd imagine it supports whatever apple care for rather well, given iiuc they've rather heavily funded it.
21:34 brrt it's for all practical purposes an apple project today imho
22:01 jnthn brrt: Hm, seems I may have a free moment to scribble my JIT/interp thoughts down :)
22:02 brrt ok, cool :-)
22:02 brrt it's past 12 o'clock
22:02 brrt though
22:02 brrt also, rant away anytime you like :-)
22:03 jnthn My flight tomorrow is at 11am. :)
22:03 jnthn Given I've had to be at teaching by 8:30am for the last few days, that's comparatively alright. :)
22:04 brrt oh, yes
22:04 brrt swedish start at 8:30?
22:05 brrt oh wait, timezone, thats not as weird as it seems
22:07 brrt ot: you know what scientists just love doing? wasting time with broken data
22:11 jnthn Well, 9am start, but need to get coffee, get my demos in order, have time for the train to be late, etc.
22:14 brrt fair enough
22:14 brrt what do you teach these days?
22:16 jnthn Was teaching parallel/async programming this week.
22:16 jnthn Quite fun :)
22:16 brrt i imagine :-)
22:17 jnthn Today's exercises included analyzing climate data and querying/plotting earthquake data. :)
22:17 * brrt wishes somebody would teach his former coworkers about parallelism / concurrency
22:17 brrt ok, that makes it more fun
22:18 jnthn Yeah. Well, of course as soon as you go plot earthquakes, you basically find you drew the ring of fire on a map, dotted the Himalayas, and that you can barely see Japan and Indonesia for the markers. :)
22:19 jnthn Somehow on the data set I used I couldn't pick out yellowstone, though. I'd thought it had a lot of little quakes...
22:19 * brrt doesn't see how plotting earthquakes would be a parallel operation
22:19 jnthn Oh, it's not the plotting.
22:19 jnthn It's querying the data set.
22:19 brrt oh, i see
22:19 jnthn Which can be expressed very nicely as a Linq query.
22:20 jnthn The plotting is just in code I give them. The exercise is really to play with parallel Linq. :)
22:20 jnthn But having some shiny visual output of the results makes it more fun.
22:21 brrt parallel linq, is that something like an async query? i.e. fire away and get a callback?
22:21 * brrt has forgotten what these were called on android
22:21 jnthn No
22:22 jnthn Linq is the thing with all the combinators like Where (grep), Select (map), GroupBy, Join, etc.
22:22 brrt yes, i've seen bits and pieces
22:22 brrt lazy evaluation or not?
22:22 jnthn And Parallel Linq just lets you opt in to using an implementation that automatically spreads the work over multiple threads.
22:22 jnthn Linq is lazy
22:23 jnthn Parallel Linq lets you trade off how much you want laziness vs throughput
22:23 brrt ok, i see
22:23 brrt hmm
22:23 jnthn So you give it don't batch / batch a bit / batch as much as you can.
22:23 brrt that isn't a very simple tradeoff
22:23 jnthn No
22:24 jnthn You only get to pick 3 points on the scale.
22:24 jnthn Bit limiting...
22:24 jnthn ...but in reality often sufficient.
22:24 brrt i see
22:25 brrt is this much used for webapps? what do people use .net for these days?
22:26 jnthn I see a mix of web apps, backend stuff, web services, and GUI apps.
22:26 jnthn Most folks have 2-3 of those, though the odd place I encounter is using it for all 4.
22:26 brrt better than php, i guess :-)
22:26 jnthn oh, by some way :)
22:27 jnthn C# is quite a decent language these days.
22:27 brrt (did you see the bit about the init system in php? :-o was that a joke?)
22:27 jnthn I...no...what? :)
22:28 brrt http://m0n0.ch/wall/ :-D
22:28 brrt it's real
22:28 jnthn wow
22:29 brrt C# may be nice but windows still annoys me
22:29 brrt 'you can't write this file because some other program  has an open handle'
22:29 brrt fuuuu
22:29 jnthn yes, I hate that aspect of Windows too
22:29 jnthn And I really don't like where they're going with Win8
22:29 brrt 'you can't unlink this file because some other program (or maybe your own program!) has an open handle'
22:30 brrt 'your unicode is broken because you forgot to set the binary write flag lol'
22:30 brrt no, me neither
22:31 brrt otherwise its pretty ok :-D
22:43 jnthn brrt: https://gist.github.com/jnthn/c1b88756121f0525ff28
22:44 brrt thanks
22:50 jnthn Hope it's vaguely coherent. :)
22:50 brrt i think it is
22:50 brrt it's kind of hard to see how it'll all play out
22:51 brrt i'm not sure what you mean by: ' and then point the interpreter at a (possibly global) piece of memory that has the single "enter the JITted code" operation. '
22:53 brrt i agree with the register / memory layout thing, though, completely
22:53 brrt in fact that is imho one of the great advantages of using a register vm
22:53 brrt bytecode is already 'three address code', and memory layout has already been made
22:54 brrt makes compilation so much simpler since for any value you 'know' where it should go
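
A minimal illustration of that point, with MVMRegisterSketch standing in for the real register union: an op like add_i names slots in a register array whose layout the frame has already fixed, so each operand turns into a constant offset from the register base.

    #include <stdint.h>

    typedef union { int64_t i64; double n64; void *o; } MVMRegisterSketch;

    /* What "add_i dst, a, b" boils down to; a JIT would emit the same
     * load/add/store with dst * sizeof(*regs) etc. baked in as constants. */
    static void interp_add_i(MVMRegisterSketch *regs,
                             uint16_t dst, uint16_t a, uint16_t b) {
        regs[dst].i64 = regs[a].i64 + regs[b].i64;
    }
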
22:56 brrt what i'm not sure of, is whether we can 'escape' the c stack, or at least stack-like behaviour somewhere when calling directly from jit-code to jit-code
22:56 brrt (that is, without the intervention of the interpreter)
22:58 brrt i /think/ the interoperation between jitted and interpreted code is going to give the most trouble, or challenge
22:59 jnthn Does "just consider every call a tail call" or "just consider the JIT as always doing CPS" help, or make things worse? :)
22:59 brrt that helps
23:00 jnthn yay :)
23:00 jnthn That's the way the interpreter works today, really.
23:00 jnthn Whenever we call back into it, things are set up so we're ready to run the next instruction in the place we called.
23:01 jnthn For JIT it's not *quite* that simple, but it can be close: when we want to make a call, we just (in an unoptimized case) call some "invoker" that gives us back a function pointer that we can jump to, making sure tc is in an appropriate register.
23:02 jnthn That is, we don't really call JITted code, just goto it.
23:03 jnthn And the only call is JIT calling C functions, and the initial interpreter -> JIT transition.
23:03 jnthn So falling out of JIT back to interpreter is really a return.
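
A hedged C sketch of that calling discipline (hypothetical names, not the actual MoarVM JIT interface): JITted code never nests native calls for bytecode-level invocations; it asks an invoker helper to set up the callee and hand back the next entry point, jumps there, and falling back to the interpreter is just a plain return.

    typedef struct MVMThreadContext MVMThreadContext;
    typedef void (*JitEntry)(MVMThreadContext *tc);

    static void fall_back_to_interp(MVMThreadContext *tc) {
        (void)tc;  /* a real stub would return control to the interpreter loop */
    }

    /* Hypothetical helper: resolves the callee, sets up its frame, and hands
     * back either the callee's machine-code entry or the interpreter stub. */
    static JitEntry invoker_prepare(MVMThreadContext *tc, void *code_obj) {
        (void)tc; (void)code_obj;
        return fall_back_to_interp;
    }

    static void jitted_frame(MVMThreadContext *tc) {
        /* ... JITted body ... */
        JitEntry next = invoker_prepare(tc, 0 /* callee */);
        next(tc);  /* in emitted machine code this is a tail jump, not a call */
    }
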
23:03 brrt yes, i was that far with it :-)
23:03 jnthn OK :)
23:04 jnthn It feels workable to me, and one of the "easier" approaches - even if a bit unconventional - when deopt and continuations are factored into the picture.
23:05 brrt my mind says you still need a stack to keep all those continuation pointers
23:06 brrt i.e. i 'call/cc' routine a, routine a call/cc's routine b (with its own continuation), where does routine a store routine b's continuation? or is this already taken care of by the magic growing of mvm registers?
23:06 jnthn The stack is the frame ->caller chain. :)
23:06 brrt yes
23:06 brrt ok
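
A simplified sketch of that point (not the real MVMFrame layout): there is no separate continuation stack; each frame records its caller, so the logical stack is the chain of ->caller links and a continuation only needs to hold on to a frame pointer.

    typedef struct FrameSketch {
        struct FrameSketch *caller;   /* who to return/continue into    */
        void               *work;     /* this frame's register storage  */
        /* ... */
    } FrameSketch;

    typedef struct { FrameSketch *resume_frame; } ContinuationSketch;
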
23:07 timotimo mhhh, want moar progress
23:07 timotimo jnthn: any low hanging fruit you can think of i could tackle tomorrow or something?
23:07 jnthn timotimo: Well...it's maybe not low hanging, but looking into the SEGV we have from the thing that makes CORE.setting comp go way faster would be really valuable.
23:08 timotimo i don't know what that is :)
23:08 timotimo how much faster are we talking here?
23:08 jnthn Stage MAST goes from 20s to 12s on my box.
23:08 timotimo cute :)
23:09 jnthn Yes, but the Panda build SEGVing is not cute.
23:09 timotimo aye
23:09 timotimo which commits are that?
23:10 jnthn Well, it appears that a revert of 375f5d7 helps
23:10 jnthn Or failing that, 950f640
23:17 jnthn Gonna get some rest...shouldn't cut the flight too late tomorrow :)
23:17 jnthn 'ngiht
23:17 jnthn *night, even.
23:17 timotimo gnite jnthn!
23:17 timotimo i'm not sure if i'll be able to put a dent into that bug
23:20 brrt goodnight
23:20 brrt do we have a gdb backtrace yet?
23:25 brrt left #moarvm
