Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2014-06-30


All times are shown in UTC.

Time Nick Message
01:15 FROGGS joined #moarvm
02:17 FROGGS joined #moarvm
06:29 lizmat joined #moarvm
06:31 woolfy joined #moarvm
07:05 nwc10 does t/spec/S06-advanced/wrap.rakudo.moar fail tests for everyone, or just for me?
07:08 zakharyas joined #moarvm
07:44 FROGGS joined #moarvm
08:04 FROGGS nwc10: for everyone
08:05 FROGGS see https://github.com/coke/perl6-roast-data/blob/master/log/rakudo.moar_summary.out#L2389
08:05 FROGGS (see these fails too)
08:32 nwc10 jnthn: some MOAR things now SEGV or make ASAN unhappy
08:32 nwc10 in theory I have a meeting right now, but one of the business folks is not yet in
08:32 nwc10 (all the tech folks are. This is strange)
08:38 nwc10 missing person located.
08:38 nwc10 had sneaked into the building without anything spotting him
08:40 jnthn nwc10: S17 things?
08:47 brrt joined #moarvm
08:51 brrt \o #moarvm
08:52 jnthn o/ brrt
08:52 brrt o/ jnthn
08:52 brrt how was the weekend?
08:53 jnthn A bit rainy, but fine otherwise.
08:53 jnthn Yours?
08:54 brrt tiring, long train rides, otherwise also fine :-)
08:56 * brrt is looking at some notes he made
08:57 brrt completely senseless rant: why do people insist that 'static types' are 'designed for correctness'
08:58 brrt static types are for efficient codegen and have been since fortran
08:58 brrt such people have never had to use c, i think
09:00 jnthn Trouble is, there's lots of examples of static types out there, and they don't all carry equal value/usefulness.
09:00 jnthn C in particular doesn't promise much.
09:01 jnthn Java's type system is mostly geared towards being able to pop up a method list when the programmer types a "." in Eclipse, or something. :)
09:01 brrt that is... somewhat harsh, but i think it's true :-)
09:01 brrt also - and importantly - it's not like type errors are really a big class of bug
09:02 jnthn Well, what I said is just another way of saying "proving the absence of calls to non-existing methods" :)
09:02 brrt iirc, ruby folks are madly in love with calls to non-existing methods
09:04 jnthn Well, if you've got a method to make them start existing... :)
09:04 jnthn uh, no pun intended :)
09:04 jnthn hah, I got something to SEGV
09:06 * brrt is not seeing the pun
09:06 brrt a segv in compiled-code-with-debugging-instructions is /never/ as bad as a segv in generated code without such instructions
09:07 jnthn True
09:08 brrt although gdb helps with those, too
09:08 jnthn My debugger shows I have 17 threads running at the point I SEGV, though :)
09:08 brrt oh... that can be nasty
09:08 brrt in a wholly different way
09:11 brrt hmm, what do you think of splitting the jit files in moar files, i.e. taking a bit of time to clean up?
09:12 jnthn If you feel it's going to be easier to move forward after some cleanups, go for it
09:13 brrt ok, will do :-)
09:21 brrt hmm, how do you feel about me claiming the prefix MJ rather than MVMJit ?
09:24 jnthn I've tried to keep everything starting with MVM so far so we aren't too evil to embedders.
09:24 brrt fine :-)
09:24 brrt that is a good reason
09:31 * brrt is off to get some tea
09:31 brrt bbiab
09:55 dalek MoarVM: 1f3a2f5 | jnthn++ | src/core/fixedsizealloc.c:
09:55 dalek MoarVM: Fix logical bug in fixed-size-allocator.
09:55 dalek MoarVM:
09:55 dalek MoarVM: Affected the multi-threaded code path.
09:55 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/1f3a2f5d6a
09:55 dalek MoarVM: f3e999d | jnthn++ | src/core/frame.c:
09:55 dalek MoarVM: Fix capturelex race condition.
09:55 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/f3e999dd21
10:39 nwc10 jnthn: t/spec/S06-advanced/wrap.rakudo.moar  t/spec/S17-lowlevel/lock.rakudo.moar t/spec/S32-list/roll.t t/spec/S32-list/pick.rakudo.moar t/spec/integration/99problems-51-to-60.t t/spec/integration/advent2010-day11.t;
10:40 nwc10 Oh, goalposts moved. Retest time
10:40 nwc10 at least one was a backtrace including the fixed size allocator
10:41 jnthn Yes, I'm hoping that may nail it.
10:56 carlin joined #moarvm
11:09 nwc10 jnthn: now, total fails are t/spec/S06-advanced/wrap.rakudo.moar t/spec/S17-lowlevel/lock.rakudo.moar t/spec/S32-list/roll.t t/spec/S32-list/pick.rakudo.moar t/spec/integration/99problems-51-to-60.t
11:09 nwc10 t/spec/S06-advanced/wrap.rakudo.moar is a Rakudo issue
11:09 nwc10 t/spec/S17-lowlevel/lock.rakudo.moar is our old friend
11:10 nwc10 the last three are triggering ASAN doing its pavement pizza thing
11:10 nwc10 and go away with spesh disabled
11:10 jnthn Oh...
11:10 jnthn Can I get a backtrace on the last 3?
11:10 nwc10 yes
11:10 jnthn On lock.rakudo.moar, I currently have two patches in the works
11:12 nwc10 last is http://paste.scsys.co.uk/404224
11:14 nwc10 http://paste.scsys.co.uk/404226
11:14 brrt joined #moarvm
11:15 nwc10 http://paste.scsys.co.uk/404227
11:15 nwc10 looks like the last two I pasted are the same underlying bug
11:19 FROGGS pavement pizza?
11:19 FROGGS I wonder if I should google that term
11:19 nwc10 euphemism for puke
11:20 FROGGS ahh
11:20 FROGGS :o)
11:48 dalek MoarVM: 772a216 | jnthn++ | src/6model/reprs/ConditionVariable.c:
11:48 dalek MoarVM: Missing MVMROOT in ConditionVariable setup.
11:48 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/772a216472
11:48 dalek MoarVM: 8730ef5 | jnthn++ | src/ (5 files):
11:48 dalek MoarVM: Don't share cached Lexotics over threads.
11:48 dalek MoarVM:
11:48 dalek MoarVM: Doing so leads to a race on the result slot, which can cause all kinds
11:48 dalek MoarVM: of trouble.
11:48 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/8730ef5dbc
11:53 brrt what, actually, is a lexotic?
11:53 jnthn A place to throw to, located lexically
11:53 jnthn Used for return
11:55 brrt oh....
11:55 brrt ok :-)
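A lexotic, then, is a lexically visible target that arbitrarily deep code can throw to, which is how `return` escapes from inside nested blocks and closures. Below is a loose, self-contained C analogy using setjmp/longjmp, purely for illustration; MoarVM's actual Lexotic is a 6model type wired into the VM's own frame handling and does not use setjmp.

    /* Analogy only: a "lexically located place to throw to", used for return. */
    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf return_point;   /* the place to throw to */
    static int     return_value;

    static void deep_inner(void) {
        return_value = 42;
        longjmp(return_point, 1);  /* "return" from arbitrarily deep code */
    }

    static int outer(void) {
        if (setjmp(return_point) != 0) /* establish the target */
            return return_value;       /* control lands here on the "throw" */
        deep_inner();
        return -1;                     /* never reached */
    }

    int main(void) {
        printf("%d\n", outer());       /* prints 42 */
        return 0;
    }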
12:00 dalek MoarVM: e2d941b | jnthn++ | src/spesh/osr.c:
12:00 dalek MoarVM: Clear logging bits after OSR finalize.
12:00 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/e2d941b773
12:01 jnthn nwc10: That one may (or may not...) help with the pick/roll ones
12:02 nwc10 will test
12:11 jnthn https://gist.github.com/anonymous/8f407afdb8cccc1ed09c, which TimToady reported exploding, now runs nicely for me, though I've a debug build with GC every 16KB or so.
12:12 jnthn If anybody on non-Windows cares to test it against MoarVM head, please do. (Note, you'll need to rebuild your Rakudo for this bump, or at least the extops)
12:14 * FROGGS rebuilds
12:17 FROGGS jnthn: that script runs fine here
12:20 jnthn Cool
12:22 jnap joined #moarvm
12:24 jnthn *sigh* the start.t and batch.t failures I typically see under load are 'cus the tests use sleep
12:25 nwc10 sin!
12:26 nwc10 laziest way I found to achieve synchronisation in tests was to open a pipe, and use EOF as the "mutex"
12:26 nwc10 ie, create pipe; after the (possibly implied) fork, close the unused writer and reader ends as usual
12:27 nwc10 so that pipe has one reader and one writer
12:27 nwc10 and then the reader simply blocks on a read
12:27 nwc10 and the writer end closes the pipe when it's done whatever it needed to do
12:27 nwc10 ensuring ordering between writer and reader
12:28 nwc10 it's portable, if your pipe implementation is portable enough :-)
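A minimal sketch of the pipe-as-mutex pattern nwc10 describes, assuming plain POSIX (names and the placeholder work are illustrative): the reader blocks in read() until the writer closes its end, so EOF supplies the ordering.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        char buf[1];

        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {        /* child: the writer side */
            close(fds[0]);     /* close the unused read end */
            /* ... do whatever has to happen first ... */
            close(fds[1]);     /* closing the write end produces EOF */
            _exit(0);
        }

        close(fds[1]);         /* parent: close the unused write end */
        while (read(fds[0], buf, 1) > 0)
            ;                  /* blocks until the child is done */
        close(fds[0]);
        /* the child's work is guaranteed to have happened by this point */
        waitpid(pid, NULL, 0);
        return 0;
    }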
12:36 jnthn Well, this is testing stuff in-process, given there's threads...
13:06 dalek MoarVM: 3a74ab0 | jnthn++ | src/6model/reprs/ConcBlockingQueue.c:
13:06 dalek MoarVM: Some NULLing in gc_free, for easier bug hunting.
13:06 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/3a74ab0f54
13:06 dalek MoarVM: f0b3898 | jnthn++ | src/io/eventloop.c:
13:06 dalek MoarVM: Re-ordering to avoid race on event loop startup.
13:06 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/f0b3898e15
13:12 * brrt doesn't seem to be very productive today :-(
13:21 jnthn brrt: Aww. How's the cleanup going?
13:26 brrt bits and pieces
13:27 brrt basically, i want to move all graph manipulations to a graph.c, and i want to move logging / bytecode dumping also to a file of its own, and the compiler driver, too
13:28 brrt and i /may/ want to split the dynasm files into pieces, but i suspect that will be ugly wrt building
13:28 woolfy left #moarvm
13:29 brrt matter of fact, i want to do a bit too much in too few steps
13:29 brrt because i /also/ want to merge return values handling with c calls
13:29 jnthn Do a piece at a time; that way, if something breaks, you'll know what's to blame.
13:30 brrt :-) ok, will do
13:30 dalek MoarVM: 9c67fc8 | jnthn++ | src/io/asyncsocket.c:
13:30 dalek MoarVM: A bunch of missing MVMROOTing in asyncsocket.
13:30 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/9c67fc8799
13:30 brrt i also want to split primitives into arithmetic, comparison, access
13:30 jnthn Seems I really wasn't on the ballmer peak when I wrote that code...
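The two "missing MVMROOT" commits above are both instances of the same rule: any object pointer a C function holds across an allocation has to be registered as a temporary GC root, since allocating can trigger a nursery collection that moves the object. A minimal sketch of the pattern with a made-up helper (this is not the actual ConditionVariable or asyncsocket code, and the choice of BOOTArray is just for illustration):

    static MVMObject * make_array_with(MVMThreadContext *tc, MVMObject *elem) {
        MVMObject *arr;
        MVMROOT(tc, elem, {
            /* This allocation may run the GC; without the MVMROOT, 'elem'
             * could be a stale pointer by the time we push it below. */
            arr = MVM_repr_alloc_init(tc, tc->instance->boot_types.BOOTArray);
            MVM_repr_push_o(tc, arr, elem);
        });
        return arr;
    }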
13:39 lizmat joined #moarvm
13:39 dalek MoarVM: 9f2180b | jnthn++ | src/core/interp.c:
13:39 dalek MoarVM: exit should just get out quick.
13:39 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/9f2180bc95
13:41 cognominal joined #moarvm
13:44 sergot joined #moarvm
13:48 tadzik joined #moarvm
13:49 btyler joined #moarvm
13:50 masak joined #moarvm
13:52 * jnthn thinks he's an idea what's making the async http thing fall in a heap - or at least one reason for it - on Moar...
13:53 jnthn Could do with a walk; bbiab
14:41 timotimo so ... we have a number that stores how many generations an object has survived, right?
14:41 * jnthn back
14:42 jnthn timotimo:  no
14:42 timotimo ah
14:42 timotimo that's good, then :)
14:42 jnthn Well
14:42 jnthn We do, but it's 0, 1, or many :)
14:42 timotimo because i was thinking there's a bunch of spots in the young generation where you can say that all objects "to the left" have "survived N collections already"
14:42 timotimo and that would give us extremely cheap ways to talk about any number of survived generations without per-object overhead
14:43 jnthn So only involves about 2 bits
14:43 timotimo i suppose we already have the bits to spare and don't need more than 0, 1, 2 or many
14:44 jnthn Mostly you just need some way to avoid premature promotion
14:44 timotimo mhm
14:44 timotimo we don't really have a way to check how many generations after being promoted an "old" object gets freed, right?
14:44 timotimo because gen2 has a free-list that causes fragmentation
14:45 jnthn Right, no way to answer that at present.
14:45 jnthn Well, it's collections rather than generations
14:45 timotimo er, yes
14:46 jnthn We only collect gen2 once ever 25 nursery collections, fwiw.
14:46 jnthn *every
14:46 timotimo ooc, how did you tune that number so far?
14:48 FROGGS that was my guess also.... we could tune it again at some point
14:48 jnthn Largely using the NQP and Rakudo build
14:48 jnthn It makes a rather large difference.
14:49 timotimo this needs a quick recompile on every change, aye?
14:49 jnthn yeah
14:49 timotimo well, recompiling moar only takes half a minute or probably less ... and this is only very partial, isn't it?
14:49 timotimo or does that number live in a header?
14:49 jnthn It's in a header
14:49 timotimo ah, so it'll recompile everything :)
14:49 jnthn Note also, though, that in the long run we may well want a more dynamic strategy
14:51 FROGGS so that these numbers adjust themselves given the nature of the running program?
14:52 jnthn *nod*
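A hedged sketch of the scheme being described: a constant in a header fixes how many nursery runs happen per full (gen2) collection, currently 25, and a counter decides which kind of run to do. The names below are made up rather than taken from src/gc/, and the "more dynamic strategy" would replace the fixed constant with something driven by the running program's behaviour.

    /* Hypothetical names; the real constant lives in a MoarVM header. */
    #define GEN2_EVERY_N_NURSERY_COLLECTIONS 25

    typedef struct {
        unsigned int nursery_runs_since_full;
    } GCState;

    static int should_collect_gen2(GCState *gc) {
        if (++gc->nursery_runs_since_full >= GEN2_EVERY_N_NURSERY_COLLECTIONS) {
            gc->nursery_runs_since_full = 0;
            return 1;   /* do a full collection this time */
        }
        return 0;       /* nursery-only collection */
    }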
14:55 timotimo something the nimrod developer told me is that for things like video games, you tend to have a spot in your run where the stack is super small and the number of alive objects tends to be at a local minimum, which is, for example, at the end of a main loop iteration
14:56 brrt is 435476 LoC still a 'small vm'? :-)
14:56 timotimo would it be interesting to expose the "set the GC-please-flag" operation as an nqp:: op to moarvm programs?
14:56 brrt (it probably is :-))
14:57 timotimo something that makes me jealous is that the pypy people can just "plug in" a different GC, as the root finding code is generated at translation-time
14:57 timotimo so they recently were like "hey let's use an incremental GC instead" and "just" did it
14:59 jnthn brrt: I'm hoping you didn't count 3rdparty... :)
15:06 brrt oh... maybe haha
15:06 brrt oh, that is 10.000 lines or so
15:06 brrt much smaller
15:06 brrt no, 105.000 lines
15:06 brrt not my best day today
15:06 brrt anyway, off for dinner :-)
15:07 brrt left #moarvm
15:25 LLamaRider joined #moarvm
15:34 lizmat joined #moarvm
15:35 timotimo i bought an ergonomic keyboard
15:35 timotimo feels nice under my hands, i hope it makes a good difference
15:36 timotimo typing on it already works fine it seems
15:37 jnthn :)
15:37 jnthn good keyboards are a wrothwhile investment
15:37 jnthn hah, worthwhile
15:43 lizmat joined #moarvm
15:45 timotimo aye
15:45 woolfy joined #moarvm
15:50 timotimo annoyingly, with the newest kernel and/or xorg and/or synaptics driver, the sensitivity of the touchpad in my laptop is no longer adjustable ... it's stuck on the lowest setting i can imagine ...
15:50 timotimo oh btw
15:51 timotimo "MJ" is a funny choice for MVMJit
15:51 timotimo but Moar is also already a funny word
15:51 timotimo i wonder if we want to touch on that subject?
16:08 vendethiel joined #moarvm
16:21 jnthn Just pushed a patch that seems to make async http server somewhat more stable on Moar
16:22 jnthn (In conjunction with other patches earlier.)
16:27 lizmat reason to bump MOAR_REVISION?
16:36 jnap joined #moarvm
16:40 btyler jnthn: subjected a fresh moar to some wrk: https://gist.github.com/kanatohodets/46c83872081257f007bf#file-moar-wrk
16:40 btyler with multiple connections it tends to segfault, but more descriptive errors occur with a single thread/single connection coming from wrk
16:41 btyler one single connection/single thread run did complete (default is 10 seconds), at about 40 requests/second :)
16:43 jnthn I guess that's an improvement over before, then...
16:43 btyler emphatically so -- it couldn't complete 10 seconds on any settings
16:44 btyler *previously
16:44 jnthn *nod*
16:45 timotimo how long does it survive now?
16:45 btyler as a point of reference, same settings on wrk going after a similar server in mojolicious  (just spitting back 'hello world') hits about 320 requests/second on my machine
16:45 btyler timotimo: race-y, so it varies
16:45 jnthn m: say 320 / 40
16:45 camelia rakudo-moar 77024f: OUTPUT«8␤»
16:45 timotimo ah
16:46 jnthn Some work to go, but at least it's a single digit factor :)
16:46 timotimo can i get an url to link to for the script that's being torture-tested?
16:46 _sri btyler: is your box very very slow? 320 rps is not much at all
16:46 btyler _sri: single connection single thread
16:46 btyler from wrk
16:47 btyler timotimo: https://github.com/tony-o/perl6-http-server-async/blob/master/ab.pl6
16:47 timotimo thanks
16:48 btyler _sri: when I use more normal settings (2 threads, 10 concurrent connections against hypnotoad on default settings) I get closer to 1.1k req/s
16:48 btyler but perl6-m isn't ready for that kind of assault just yet :) so I wanted to keep things loosely comparable
16:48 _sri btyler: wow, still quite slow
16:49 btyler _sri: perhaps EV isn't triggering; it's a 2010 MBP, so not terrible, but not amazing
16:49 _sri on my little macbook air i get 914 rps with the same wrk settings running against the single process daemon
16:50 _sri make sure you're running in production mode, if you're using a server other than hypnotoad
16:50 _sri full hypnotoad with good settings gets me roughly 2k rps
16:51 _sri with the hello world app we use for profiling https://github.com/kraih/mojo/blob/master/examples/hello.pl
16:52 timotimo what is EV?
16:52 _sri libev binding
16:52 _sri i so wish i could switch to libuv with mojolicious...
16:53 timotimo ah
16:55 btyler _sri: running a comparable app helps, heh. 1.6k now. setting libev kqueue flags doesn't seem to make a difference. 800 single thread/single connection from wrk
16:55 btyler the ab.pl6 script could probably use a closer look as well
16:58 jnthn time for shopping, dinner, etc. bbiab
16:58 _sri :)
17:11 FROGGS joined #moarvm
17:37 FROGGS $ perl6-m -I. --target=mbc --output=Bar.moarvm -e 'sub EXPORT(|) { my %h = baz => Buf.new(0 xx 10); %h }'
17:37 FROGGS $ perl6-m -I. --target=mbc --output=Foo.moarvm -e 'use Bar'␤===SORRY!===␤MVMArray: Unhandled slot type
17:37 FROGGS jnthn: that is my Archive::Tar bug
17:52 FROGGS it happens in MVMArray.serialize btw
17:53 colomon_ joined #moarvm
17:55 FROGGS ahh, that is an MVM_ARRAY_U8 slot_type
17:55 woolfy left #moarvm
17:56 FROGGS okay, seems easy
18:01 FROGGS ó.ò
18:01 FROGGS Compiling lib/Archive/Tar/File.pm to mbc
18:01 FROGGS Missing serialize REPR function for REPR VMException
18:33 FROGGS golfed down to this: https://gist.github.com/FROGGS/8da00dbd8df5895d96b1
18:33 FROGGS compile that to Bar.moarvm, and try to compile a Foo.pm that uses Bar
18:36 FROGGS this happens in serialization.c serialize_object()
18:43 jnthn FROGGS: you managed to fix the other one?
18:43 FROGGS the first one, yes
18:43 FROGGS jnthn: it was like you said, just a NYI :o)
18:44 FROGGS dunno what to do about this one though... because I don't think I really want to serialize a VMException
18:45 jnthn As to the second one...how on earth does it end up wanting to serialize an exception...
18:45 FROGGS something must be garbage in there...
18:46 FROGGS because removing either line 5 *or* 6 makes it not explode
18:46 FROGGS valgrind shows nothing sadly
18:46 jnthn %EXPORT<&BLOCK_SIZE> = { my $n };
18:46 jnthn %EXPORT<&BZIP>       = do { try require ENOCAKE; +not so $! };
18:46 FROGGS and I am unable to break at that exception in gdb
18:46 jnthn So, that closure there closes over the $!
18:47 jnthn Which I guess points to the VMException
18:47 FROGGS can I make it lexical?
18:48 jnthn Well, what if before returning %EXPORT you do $! = Any or so?
18:48 FROGGS that seems to fix it...
18:48 FROGGS hold on
18:48 jnthn That'll tell us if we know the issue...
18:48 FROGGS both helps
18:49 btyler hmm, I wonder if the async http server implementation is working as intended. perl6-m survived 10 seconds with 2 threads and 10 connections, but the throughput was the same as when wrk used 1 thread and 1 connection (~40 reqs/second)
18:50 jnthn btyler: I didn't really look at the impl, I just worked on making things not explode :)
18:50 btyler jnthn: right right :) I was just thinking that maybe I should try hacking up a simpler test script
18:59 FROGGS okay, I just make that $! lexical, that is what I wanted anyway
18:59 timotimo does it "start" at all? because there's still only one thread where event loopy things are handled
19:00 jnthn timotimo: That thread just tosses things into the queue, and other threads can handle them
19:01 timotimo oK
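A generic sketch of the split jnthn describes, assuming plain pthreads rather than MoarVM's actual eventloop.c and ConcBlockingQueue: one thread only enqueues work, and any number of worker threads block on the queue and handle it.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    typedef struct Work { int id; struct Work *next; } Work;

    static Work           *head = NULL, *tail = NULL;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    static void enqueue(int id) {              /* event-loop thread: toss into queue */
        Work *w = malloc(sizeof(Work));
        w->id = id; w->next = NULL;
        pthread_mutex_lock(&lock);
        if (tail) tail->next = w; else head = w;
        tail = w;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    static void *worker(void *arg) {           /* other threads handle the work */
        for (;;) {
            pthread_mutex_lock(&lock);
            while (!head)
                pthread_cond_wait(&cond, &lock);
            Work *w = head;
            head = w->next;
            if (!head) tail = NULL;
            pthread_mutex_unlock(&lock);
            printf("worker %ld handled item %d\n", (long)(size_t)arg, w->id);
            free(w);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1);
        pthread_create(&t2, NULL, worker, (void *)2);
        for (int i = 0; i < 10; i++)           /* stands in for the event loop */
            enqueue(i);
        sleep(1);                              /* crude: let the workers drain it */
        return 0;
    }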
19:01 timotimo TEST this shift key feels weird
19:01 timotimo WHAT WHAT
19:02 timotimo =** +{) [{[{[{
19:02 timotimo at least that works
19:20 FROGGS Missing or wrong version of dependency 'lib/Compress/Zlib/Raw.pm6'
19:20 FROGGS ^--- require-ing it seems not to be the best thing atm
19:27 vendethiel joined #moarvm
19:40 timotimo maybe "require" doesn't trigger compilation?
19:41 jnthn Well, require + precomp is a small disaster if you aren't really careful
19:41 FROGGS Compress::Zlib::Raw was freshly precompiled
19:42 FROGGS I am require-ing Compress::Zlib btw, which uses the ::Raw which uses NativeCall
19:42 jnthn Yes, the "missing" is the key bit there.
19:43 jnthn ah, well, that's stranger...
19:43 FROGGS RAKUDO_MODULE_DEBUG=1 show that the right modules got loaded btw
19:44 FROGGS (the .moarvms)
19:45 timotimo unfortunately we can hardly tell "missing" and "wrong version" apart
19:45 timotimo also we still don't know which module causes the problem
19:45 timotimo if A uses B and B uses C and the later step blows up, there's no way to tell it's B in the middle.
19:55 FROGGS $ perl6-m -I. --target=mbc --output=Foo.moarvm -e 'if 1 { require Baz; say ::("Test") }'
19:55 FROGGS $ perl6-m -I. -MFoo -e1
19:55 FROGGS (Test)
19:55 FROGGS not that easy to recreate...
19:55 FROGGS will golf the original files down now
19:56 timotimo is there some way we can later make moarvm's garbage collection pauses shorter? or would that require basically replacing the GC?
19:57 FROGGS you'd have to make it faster to have shorter pauses
19:57 timotimo hah
19:57 timotimo that doesn't scale :)
20:04 timotimo jnthn: do you think it'd make any sense at all to (at least mildly) sort the work list of pointers by address?
20:21 jnthn Are pauses actually a problem at present?
20:22 jnthn Sorting by address sounds really odd.
20:22 timotimo just a shot in the dark
20:22 timotimo i didn't actually measure common pause times
20:22 jnthn At the moment it works like a stack
20:22 timotimo i don't have a workload yet that would need very short pauses
20:22 jnthn There's actually a common in worklist.h, iirc, that mentions this.
20:22 timotimo (because in general we're a bit too slow to be doing things like that anyway)
20:22 jnthn *comment
20:23 timotimo oke
20:23 jnthn Where "bit too slow" means "tadzik++ runs 60fps games" :)
20:23 timotimo heh
20:23 jnthn I don't *remember* hearing pause times were an issue there.
20:24 timotimo well, the game he's running is thoroughly simple, to be fair
20:24 jnthn Sure :)
20:25 timotimo we're not pumping vertex data into the graphics card, or calculating particle systems or anything like that
20:27 jnthn BTW, given I improved inline and fixed opt of for (1..100000) { } it may be worth doing another benchmark update...
20:27 jnthn May show something nice for the for-using ones.
20:28 timotimo was that after the last benchmark run i posted?
20:28 timotimo because i linked that in the weekly
20:29 jnthn Yeah
20:29 jnthn Your last one did OSR I think
20:29 jnthn Oh, maybe some of the inline improvements too
20:29 timotimo i think osr + an inlining fix
20:29 jnthn I know postfix:<++> couldn't be inlined until after it, though...
20:30 timotimo i'm not sure if i have the return handler elimination stuff in there
20:32 jnthn Darn, my small scripts doing lots of threads now seem to not easily SEGV a Moar with a small nursery
20:32 jnthn Guess I have to move up to bigger examples.
20:33 timotimo darn, we're stable enough for small things now!
20:34 jnthn wtf, it creates enough CPU load that my music just skipped
20:35 timotimo could be memory load, too?
20:35 timotimo or disk busyness?
20:35 jnthn Shouldn't be using disk at all...
20:36 timotimo hmm, maybe it's making the computer swap? :)
20:36 jnthn And no, using 150MB or so
20:36 jnthn (or RAM)
20:36 timotimo huh.
20:36 jnthn It was using all the CPU it could lay its hands on :)
20:37 jnthn OK, enough thread bug hunt for today. Time to look at one other thing... :)
20:37 timotimo i'll get rid of some background processes on my desktop and then i'll run another rakudo benchmark
20:37 timotimo i see it updated nqp_revision somewhat recently
20:37 timotimo you're going to quickly rewrite our string handling? ;)
20:38 jnthn urgh
20:38 timotimo sorry :)
20:39 jnthn No, but I should do that at some point...
20:40 jnthn No, gonna put in the role type var opt, I think
20:41 timotimo sounds neat. i'll be interested to see what it does
20:42 jnthn Mostly, just exploits the insight that in a method in a role, $!foo currently goes unoptimized because we have to look up the class handle
20:42 jnthn But if you specialized by the invocant type already then it's actually a constant within the specialization.
20:49 timotimo ah, sounds good
20:49 timotimo that would get compile_time_value and annotation quite often, wouldn't it?
20:50 jnthn Well, I was thinking primarily about Cursor...
20:50 timotimo oh, cursor
20:50 timotimo well, if it optimizes access to stuff there, that'd be quite fantastic
20:50 jnthn At the moment !cursor_start, !cursor_capture, and so forth get no opt of the attribute access
20:50 timotimo is that all?
20:50 timotimo don't we have problems in general in cursors because our matching class is a bit indirect?
20:51 jnthn Indirect?
20:58 timotimo er ...
20:58 timotimo i *think* we're doing nqp::bindattr with runtime-valued strings?
20:58 timotimo or something like that?
20:59 timotimo it occurs to me now that that wouldn't be helped by an opt that targets roles
21:01 timotimo http://t.h8.lv/p6bench/2014-06-30-stuff.html
21:02 FROGGS I'd think that optimizing CAPHASH would help a lot, but I'm not able to profile that
21:02 timotimo the differences are now much more pronounced
21:02 timotimo in all the earlier benchmarks at least
21:02 timotimo er... the upper benchmarks
21:16 * jnthn gets typevar scope in place, and starts looking at optimizing it.
21:17 brrt joined #moarvm
21:18 brrt \o #moarvm
21:18 timotimo heyo brrt
21:18 brrt timotimo: the touchpad thing is a known issue
21:18 timotimo how are you feeling today?
21:18 FROGGS hi brrt
21:18 timotimo ah! do you know where i can follow that bug around?
21:18 brrt i think so
21:18 brrt you're using fedora, or not?
21:18 timotimo i am
21:19 brrt :-)
21:19 brrt https://bugzilla.redhat.com/show_bug.cgi?id=1104789
21:19 jnthn missing dalek again :(
21:19 brrt here you go
21:20 brrt https://bugzilla.redhat.com/show_bug.cgi?id=1104789#c20 is the fix
21:20 brrt i'm... feeling ok, i guess
21:20 brrt :-) had some time to clear my thought
21:20 brrt s
21:24 timotimo thank you for the pointer
21:24 timotimo the pointer for fixing my pointer
21:26 jnthn brrt: Got some ideas on the JIT cleanups you were plotting?
21:26 jnthn oh wait, you might have already pushed them and there was no dalek to say so...
21:28 brrt no, haven't pushed them yet
21:28 brrt and yes, have ideas :-)
21:28 timotimo brrt: have you already tried building an nqp (and maybe even a rakudo) with the jit branch?
21:28 brrt yes, of course, and that works :-)
21:29 lee_ joined #moarvm
21:29 brrt i.e., if it doesn't work now, something is broken that i don't know about yet
21:29 FROGGS can confirm that
21:29 timotimo wow.
21:29 jnthn brrt: May be worth merging in latest master at some point, thanks to various things that have landed of late :
21:29 tadzik1 joined #moarvm
21:29 timotimo aye, latest master does get lots more chances at speshing
21:29 timotimo and inlining
21:29 jnap1 joined #moarvm
21:29 jnthn brrt++ # keeping it building NQP and Rakudo, so there won't be an epic bughunt later
21:29 brrt :-)
21:30 jnthn That's a very wise thing to do.
21:30 brrt building nqp was how i found out about the.... ahem... whatwasitcalledbug
21:30 brrt anyway, very annoying bug i fixed last week
21:30 jnthn Oh...inline...
21:30 cxreg2 joined #moarvm
21:30 brrt oh, that, yes :-)
21:31 * brrt is greatly fond of make -j these days
21:31 jnthn Yes, sorry you had to share in the pain of inline...
21:31 jnthn I'm hoping we've got things structured so that JIT + inline + deopt isn't actually any more work than just getting JIT + deopt working...
21:31 brrt no problem :-) expectations change, that is all
21:32 jnthn ...at least, that's what I've tried to achieve.
21:32 brrt as i recall, deopt was a function call and jump to exit, right?
21:32 jnthn Well, I think we'll get you to pass an index in too
21:33 jnthn But yes, function call passing that index, let it do the work, then exit.
21:33 brrt yes, but the index was hardcoded iirc
21:33 jnthn Right, it's a constant
21:33 jnthn That you get from ann.
21:36 brrt hmm, my JIT of wval apparently isn't totally correct
21:36 cognominal joined #moarvm
21:37 brrt whut
21:37 brrt *ahum*
21:40 brrt spot the bug: https://github.com/MoarVM/MoarVM/blob/moar-jit/src/jit/jit.c#L195
21:41 timotimo i suppose you meant to get the two properties of the .reg?
21:41 timotimo because an operand is either a register or a literal, but not both?
21:41 jnthn [0] :)
21:41 timotimo oh
21:41 timotimo you probably wanted the next arg, yeah
21:41 timotimo oops :S
21:41 timotimo quite similar to the bug in isconcrete
21:41 jnthn And right below too
21:41 jnthn MVMint64 idx = idx = ins->operands[0].lit_i64;
21:42 jnthn Wants to be 2
21:43 brrt precisely :-)
21:43 brrt but fixed now
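For the record, a reconstruction of what the fix likely looks like (hedged: pieced together from the lines quoted above, not copied from jit.c): wval takes a destination register, an SC dependency index, and an object index, so the second and third reads need operands[1] and operands[2], and the doubled `idx = idx =` goes away.

    /* before (buggy): every read used operands[0], e.g.
     *     MVMint64 idx = idx = ins->operands[0].lit_i64;
     * after (sketch; the field choice for dep is an assumption): */
    MVMint16 dst = ins->operands[0].reg.orig;   /* destination register */
    MVMint16 dep = ins->operands[1].lit_i16;    /* SC dependency index  */
    MVMint64 idx = ins->operands[2].lit_i64;    /* index of the object  */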
21:46 * brrt is off for tonight and will hopefully do better tomorrow
21:47 brrt left #moarvm
21:49 timotimo :)
21:52 jnthn 'night, brrt
22:49 jnthn Time for some rest...'night o/
22:55 bcode joined #moarvm
23:09 timotimo gnite jnthn
23:13 jnap joined #moarvm
23:27 timotimo i do look forward to the merge master -> moar-jit
23:28 timotimo merge or rebase, whatever fits best
23:30 bcode_ joined #moarvm
