Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2017-10-10


All times shown according to UTC.

Time Nick Message
01:55 ilbot3 joined #moarvm
01:55 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
06:01 brrt joined #moarvm
06:04 domidumont joined #moarvm
06:09 domidumont joined #moarvm
06:38 brrt joined #moarvm
06:44 domidumont joined #moarvm
06:47 patrickz joined #moarvm
06:51 TimToady joined #moarvm
08:44 robertle joined #moarvm
08:50 nwc10 "good" *, #moarvm
08:50 nwc10 more ASAN - http://paste.scsys.co.uk/565293 t/spec/S17-promise/nonblocking-await.t
09:02 brrt joined #moarvm
09:07 jnthn Arse. There's a release next week, so maybe I should look at these, but I really, really don't want to be distracted yet again from working on hyper/race; stuff always drags me away from that. :/
09:08 nwc10 there's also ASAN in repl.t (yesterday's paste) but it might be the same cause
09:08 nwc10 sorry to be the bearer of bad news (and not really able to help fix it)
09:08 brrt good * #moarvm
09:08 yoleaux 9 Oct 2017 17:30Z <MasterDuke> brrt: nice talk. now which templates are the most needed?
09:09 brrt MasterDuke: thanks :-). ehm, depends on the testcase, but i think i can whip up a quick script
10:37 rba_ joined #moarvm
10:47 rba joined #moarvm
11:28 brrt joined #moarvm
11:47 ggoebel joined #moarvm
13:07 zakharyas joined #moarvm
13:41 rba_ joined #moarvm
13:43 brrt joined #moarvm
13:47 rba joined #moarvm
13:49 leont joined #moarvm
13:51 brrt joined #moarvm
14:22 rba_ joined #moarvm
14:23 timotimo FFS, locally my moarvm segfaults in NFA's unmanaged_size, but only with --optimize=3
14:24 timotimo so in gdb i hardly see anything
14:31 rba joined #moarvm
14:34 timotimo and putting an fprintf there, it won't crash
14:44 * timotimo just cuts out the part that's explodey and goes on with life
14:44 jnthn Seems that at least the new semaphore.t crash goes away on reverting 4092ccebcac7bf9848029d068ed361235cab2d8f
14:44 timotimo "ubnlocked" :)
14:45 zakharyas joined #moarvm
14:45 jnthn At first I thought "hmm, condvar not detecting spurious wakeup?" but it's actually doing it in a loop already
14:49 jnthn oh wait, grr
14:49 jnthn I blew away the GC stressing while reverting, I think
14:51 timotimo you know ... now that our nurseries grow dynamically anyway, we might get away with enabling gc stressing at run time
14:52 jnthn Indeed...
14:52 jnthn It was that
14:52 jnthn So, wasn't that commit at all
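(A note on the idea above: MoarVM's GC stressing is a compile-time setting, which is why it's easy to lose during a revert. A runtime switch along the lines timotimo suggests might look like the sketch below; the env var, field, and function names are all hypothetical, not MoarVM's actual API.)

    #include <stdlib.h>

    /* Hypothetical runtime GC-stress switch. Everything here is
     * illustrative; stress mode is currently a compile-time define.
     * The idea: with dynamically growing nurseries, a flag could pin
     * the nursery at its minimum size so collections stay frequent. */
    static void init_gc_stress(MVMInstance *instance) {
        if (getenv("MVM_GC_STRESS"))
            instance->gc_stress = 1;  /* allocator would then never
                                         grow the nursery */
    }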
14:53 leont joined #moarvm
14:55 timotimo https://gist.github.com/timo/7fc21c4aedd1670a6231788bd98d7b9a - i wonder ... this doesn't look like the thing nwc saw
15:02 jnthn Ugh, the crashes are all over the map
15:03 timotimo on the other hand, on my system the unmanaged_size function reliably segfaults and i see no reason for it to do that
15:03 timotimo maybe my system's in some sort of bad state
15:05 brrt joined #moarvm
15:07 timotimo huh, i had a commit on my master branch that does nothing but jit MVM_decode
15:07 timotimo and without that the crash from the mutex in nonblocking-await goes away?!?
15:07 brrt oh
15:07 brrt oops
15:09 jnthn timotimo: I still have a crash in that locally
15:09 jnthn Without any local patches
15:10 jnthn Doesn't occur every time
15:10 timotimo OK, but probably requires gc torture?
15:10 timotimo or stress or what
15:11 jnthn Yeah
15:11 timotimo i just recompiled with asan
15:12 ugexe btw PR#719 should -only- be used with libuv 1.15.0 or higher. uv_fs_copyfile didn't exist pre 1.14, and didn't work right until 1.15. so if a user supplies their own libuv during Configure it probably needs a version check
15:16 zakharyas1 joined #moarvm
15:21 ugexe there is a libuv PR to add copy on write option to uv_fs_copyfile (https://github.com/libuv/libuv/pull/1491) that looks like it will land in the next release
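(A minimal sketch of the build-time guard ugexe describes, assuming the check lives in C. UV_VERSION_MAJOR and UV_VERSION_MINOR are real libuv macros from uv/version.h; the HAVE_* name is made up for illustration. A runtime check is also possible via uv_version(), which returns the packed version number.)

    /* Only use uv_fs_copyfile on libuv versions where it works
     * correctly, i.e. 1.15.0 and later. */
    #include <uv.h>

    #if UV_VERSION_MAJOR > 1 || \
        (UV_VERSION_MAJOR == 1 && UV_VERSION_MINOR >= 15)
    #define HAVE_WORKING_UV_FS_COPYFILE 1
    #else
    #define HAVE_WORKING_UV_FS_COPYFILE 0
    #endif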
15:31 dogbert2 jnthn: sorry for reporting a lot of bugs
15:31 jnthn They may all work out to be the same thing
15:32 jnthn Looks like at some point we have a ->caller that is outdated
15:32 jnthn And all kinds of things go bad from that
15:33 timotimo that'd be bad, yeah
15:38 jnthn Yeah, sticking a fromspace check in remove_one_frame does it
15:40 Geth ¦ MoarVM: f9975f8893 | (Jonathan Worthington)++ | src/core/frame.c
15:40 Geth ¦ MoarVM: An extra GC debug assert in remove_one_frame
15:40 Geth ¦ MoarVM:
15:40 Geth ¦ MoarVM: Catches some problems with an outdated ->caller cropping up
15:40 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/f9975f8893
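(For readers following along: a "fromspace check" asserts that a pointer does not refer into the nursery semispace that was just evacuated, which is exactly the symptom of an object that GC moved while something still held the old address. A minimal sketch of the idea; the field names are modeled on MoarVM's thread context but should be treated as schematic.)

    /* Schematic fromspace assertion: panic loudly if `ref` points
     * into the just-evacuated semispace of this thread's nursery. */
    static void assert_not_fromspace(MVMThreadContext *tc, void *ref) {
        char *from = (char *)tc->nursery_fromspace;
        if ((char *)ref >= from &&
            (char *)ref <  from + tc->nursery_fromspace_size)
            MVM_panic(1, "Accessed collectable left in fromspace");
    }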
15:52 timotimo Unhandled exception in code scheduled on thread 4
15:52 timotimo continuationinvoke expects an MVMContinuation
15:52 timotimo
15:52 timotimo that's a new failure mode. interesting.
15:52 timotimo but i need that newest commit
15:52 brrt i need some help in deciding my next course of action
15:52 brrt left #moarvm
15:53 brrt joined #moarvm
15:53 brrt i can focus on:
15:53 dogbert2 joined #moarvm
15:53 brrt - eliminating the 'dynamic label markers' using a stack walker
15:53 brrt - starting the optimizer
15:54 brrt - improving tiling by having an iteration order we can apply per node type
15:54 timotimo jnthn: with a running valgrind, we can put a command into our c code that finds memory locations that point at a given memory location; can you imagine a good spot where that'd be useful?
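(What timotimo describes is Memcheck's who_points_at monitor command, which can be issued from inside the running program via the client-request header. A minimal sketch; the helper name is made up, but VALGRIND_MONITOR_COMMAND and who_points_at are real Valgrind/Memcheck facilities.)

    /* Ask a running Memcheck which locations point at `addr`. This is
     * a no-op unless the process runs under valgrind; the answer goes
     * to valgrind's log. */
    #include <stdio.h>
    #include <valgrind/valgrind.h>

    static void who_points_at(void *addr) {
        char cmd[64];
        snprintf(cmd, sizeof cmd, "who_points_at %p", addr);
        VALGRIND_MONITOR_COMMAND(cmd);
    }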
15:54 brrt hmm, i'm wondering if the iteration order thing would even work
15:55 jnthn brrt: Expanding the range of ops/tiles covered by the expr JIT would also be a worthwhile task, I guess
15:55 timotimo but brrt wants *others* to contribute that :D
15:55 jnthn brrt: In that, that'll get it used more, meaning that the optimizer will be tested better
15:56 jnthn Yeah, fair point :)
15:56 brrt well, i'm not against doing that myself
15:56 brrt :-)
15:57 brrt hmmm
15:57 timotimo is there still much to be gained from tuning cost functions or something?
15:57 brrt in the expr jit?
15:57 brrt well
15:57 brrt i need to calculate the cost of tiles more properly
15:57 brrt but that's a difficultish job
15:57 timotimo i seem to recall something about that from a long time ago
15:58 brrt and it's … not really super essential in that the current thing mostly works
15:58 timotimo fair enough
15:59 timotimo will optimizations happen on the same level the graphviz output lives at?
15:59 timotimo or the same level we have these list expressions at?
16:01 timotimo if someone were so inclined, they could build tiles for our bigint ops that implement fast paths for smallbigint arithmetic
16:03 timotimo the first one would be the hardest, as it'd have to have macros for "is smallbigint" and for properly storing the result
16:04 timotimo and perhaps building versions of the arithmetic that can assume parameters are already real big ints
16:04 timotimo hm. not sure if there's really much to be gained from making smaller functions for that
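(For context: MoarVM stores bigints that fit in 32 bits directly in the object body rather than as a libtommath mp_int. A schematic C-level view of the fast path such a tile would inline; the macro and field names follow MoarVM's P6bigint representation but should be read as illustrative.)

    /* Fast-path addition for two "small" bigints; returns 0 when the
     * caller must fall back to full mp_int arithmetic. */
    static int try_smallint_add(MVMP6bigintBody *a, MVMP6bigintBody *b,
                                MVMP6bigintBody *r) {
        if (!MVM_BIGINT_IS_BIG(a) && !MVM_BIGINT_IS_BIG(b)) {
            MVMint64 sum = (MVMint64)a->u.smallint.value
                         + (MVMint64)b->u.smallint.value;
            if (sum >= -2147483648LL && sum <= 2147483647LL) {
                r->u.smallint.flag  = MVM_BIGINT_32_FLAG;
                r->u.smallint.value = (MVMint32)sum;
                return 1;   /* handled without touching libtommath */
            }
        }
        return 0;
    }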
16:16 Geth ¦ MoarVM: ac471af4ba | (Jonathan Worthington)++ | 4 files
16:16 Geth ¦ MoarVM: Further GC debug checks to help find ->caller bug
16:16 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/ac471af4ba
16:19 timotimo could anything be wrong with the special return mechanism?
16:19 jnthn Don't think so
16:20 jnthn It seems to never crash with spesh disabled
16:20 jnthn Disabling inline makes no difference
16:21 timotimo you're running nonblocking-await.t?
16:21 jnthn Yeah
16:21 jnthn oh wow
16:22 jnthn That's an interesting clue if I got it right
16:22 jnthn MVM_SPESH_LIMIT=1 and it still goes bad
16:23 timotimo was it in some paste that showed that extra_data was being use-after-freed?
16:23 jnthn Maybe
16:24 jnthn Hm, so if MVM_SPESH_LIMIT is 1 then it's never actually installing any specializations
16:24 jnthn Well, I guess it's installing 1 of them
16:26 timotimo do we keep collecting logs when spesh limit was reached?
16:27 jnthn yeah
16:27 jnthn Thus my wondering if it's somehow about logging
16:27 timotimo right
16:45 robertle joined #moarvm
16:46 timotimo with the headache i'm cultivating, i won't be of much use ... but i better not keyboard any more today anyway :\
16:48 Geth ¦ MoarVM: 895b217230 | (Jonathan Worthington)++ | src/core/interp.c
16:48 Geth ¦ MoarVM: Missing MVMROOT of return value
16:48 Geth ¦ MoarVM:
16:48 Geth ¦ MoarVM: Spesh logging may allocate a new log. This could cause us to return
16:48 Geth ¦ MoarVM: outdated return values.
16:48 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/895b217230
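(The bug pattern behind this commit: a GC'd object held in a C local must be temporarily rooted across any call that can allocate, because a collection may move the object while the local still holds the old address. A sketch of the pattern using MoarVM's 2017-era MVMROOT block form; the helper names are illustrative.)

    MVMObject *result = get_return_value(tc);  /* illustrative helper */
    MVMROOT(tc, result, {
        log_return_for_spesh(tc, result);      /* may allocate => may GC */
    });
    /* if the object moved, the GC updated `result` via the root */
    return result;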
16:48 jnthn Good news: that decreases the number of failure modes
16:48 jnthn Bad news: it doesn't decrease them to zero, so there's still something not right
16:51 Geth ¦ MoarVM: 49e5882510 | (Jonathan Worthington)++ | src/core/frame.c
16:51 Geth ¦ MoarVM: Check caller chain before promotion
16:51 Geth ¦ MoarVM:
16:51 Geth ¦ MoarVM: So if there's a problem after promotion, then we know for sure that
16:51 Geth ¦ MoarVM: it was the promotion code that caused it.
16:51 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/49e5882510
16:51 Geth ¦ MoarVM: 29250d60f1 | (Jonathan Worthington)++ | src/spesh/log.c
16:51 Geth ¦ MoarVM: Fix possible access to moved object body
16:51 Geth ¦ MoarVM:
16:51 Geth ¦ MoarVM: Could happen only on spurious wakeup of the condvar, and only in a
16:51 Geth ¦ MoarVM: codepath that happens with MVM_SPESH_BLOCKING=1 set, so this would
16:51 Geth ¦ MoarVM: not impact normal use. Still, good to fix it.
16:51 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/29250d60f1
17:06 Geth ¦ MoarVM: dc9a338b23 | (Jonathan Worthington)++ | src/core/frame.c
17:06 Geth ¦ MoarVM: Ensure ->caller/->static_info can't get outdated
17:06 Geth ¦ MoarVM:
17:06 Geth ¦ MoarVM: Recent changes moved ->caller and ->static_info to be set up earlier,
17:06 Geth ¦ MoarVM: to make sure a frame has them. However, in the case that we have a
17:06 Geth ¦ MoarVM: frame allocated on the callstack rather than the heap, then MVMROOT
17:06 Geth ¦ MoarVM: of that frame does nothing. This is normally fine; a frame not on the
17:06 Geth ¦ MoarVM: heap will be hanging off something's `tc->cur_frame` and be marked if
17:06 Geth ¦ MoarVM: GC is triggered. But in this particular case, we're making a new
17:06 Geth ¦ MoarVM: frame, and it's not yet in that chain, so the two could go unmarked
17:06 Geth ¦ MoarVM: and thus become outdated when the spesh log got sent.
17:06 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/dc9a338b23
17:08 jnthn I think those commits fix all the issues dogbert2 and nwc10 reported today-ish
17:12 timotimo nice!
17:12 jnthn oops, what time...
17:12 * jnthn goes home to make dinner
17:12 jnthn o/
17:12 timotimo hope the dinner turns out well!
17:21 dogbert2 jnthn+++
18:01 travis-ci joined #moarvm
18:01 travis-ci MoarVM build errored. Jonathan Worthington 'Further GC debug checks to help find ->caller bug'
18:01 travis-ci https://travis-ci.org/MoarVM/MoarVM/builds/286116404 https://github.com/MoarVM/MoarVM/compare/f9975f889380...ac471af4ba72
18:01 travis-ci left #moarvm
18:03 Zoffix Travis glitched on two jobs. Nothing to do with our code.
18:21 Ven joined #moarvm
18:36 rba joined #moarvm
18:51 AlexDaniel- joined #moarvm
18:57 zakharyas joined #moarvm
18:58 rba_ joined #moarvm
19:09 rba joined #moarvm
19:16 rba joined #moarvm
19:18 mtj_ joined #moarvm
19:39 Geth ¦ MoarVM: jsimonet++ created pull request #726: Typo symobls -> symbols
19:39 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/pull/726
19:52 Geth ¦ MoarVM: e4a9b84907 | (Julien Simonet)++ (committed using GitHub Web editor) | docs/jit/overview.org
19:52 Geth ¦ MoarVM: Typo
19:52 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/e4a9b84907
19:52 Geth ¦ MoarVM: 599eb46642 | (Zoffix Znet)++ (committed using GitHub Web editor) | docs/jit/overview.org
19:52 Geth ¦ MoarVM: Merge pull request #726 from jsimonet/patch-1
19:52 Geth ¦ MoarVM:
19:52 Geth ¦ MoarVM: Typo symobls -> symbols
19:52 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/599eb46642
20:02 lizmat good *, #moarvm
20:02 yoleaux 17:45Z <Zoffix> lizmat: do you know if the `:view` can be removed? It doesn't appear to be used in core, and it's undocumented and unspecced. I'm making the method shove Nils into holes to resolve RT#132261, and users would still get Mus with the `:view` arg. https://github.com/rakudo/rakudo/commit/e99082382905e2da6a8962a8b97663d32736c739
20:02 synopsebot RT#132261 [resolved]: https://rt.perl.org/Ticket/Display.html?id=132261 Unclear what a hole in a List is
20:02 lizmat Zoffix: will get back about that
20:02 lizmat so what is the best way to see if something is Callable in nqp proper?
20:03 lizmat Note: Callable doesn't exist there (afaik)
21:03 timo joined #moarvm
21:26 releasable6 joined #moarvm
21:34 lizmat jnthn: a thought re startup times
21:34 * timotimo ears twitch
21:35 lizmat would it make sense to start the spesh thread let's say after 500 msecs ?
21:35 lizmat if the program is done within 500 msecs, spesh probably wouldn't have made much difference anyway
21:35 timotimo hmm, i wonder
21:35 timotimo if the compiler part takes a bunch of time it'll be worth having it in
21:36 timotimo like, having spesh be active for that duration
21:37 timotimo with the way logging now works we no longer keep frames around if they started logging and then were never run again
21:37 lizmat well, it feels to me that we may be doing too much at startup that isn't needed for short-running programs
21:37 timotimo but specialized frames that don't get called any more after some point will still stick around; shouldn't have a big impact on run time, though
21:38 timotimo since spesh runs on its own thread, it only increases cpu time, not wallclock time
21:38 lizmat ok, lemme put it this way: for me, bare startup is around 136 msecs atm
21:39 lizmat about 16 msecs of that is running the mainline of the setting
21:39 lizmat what is it doing for the other 120 msecs (wallclock!)?
21:40 lizmat also: sometimes (about 1/30 times) I have a startup of 110 msecs
21:40 timotimo how does disabling spesh change the wallclock on your end?
21:40 lizmat why would that be? maybe something didn't get started and therefore didn't have to be stopped?
21:40 lizmat a few msecs at most
21:41 timotimo the telemetry log can show you how things behave wrt GC pauses
21:41 timotimo like, worst case spesh could kick in right before other threads want to GC, then it'll do its work before it allows the GC to happen
21:41 timotimo spesh is generally rather fast, though, and it checks for gc in between every little job
21:41 timotimo (have not actually measured, just my gut feeling)
21:42 lizmat ok, well, it was just an idea..  :-)
21:42 timotimo i get around 130% cpu with spesh on and 98% with spesh off
21:42 lizmat so: what do you think happens in those 120 msecs ?  Is there a way to find out ?
21:42 timotimo 0.1 seconds elapsed either way
21:42 timotimo we can callgrind it to see what internal functions spend how many cpu cycles
21:43 lizmat ok, will keep that in mind
21:43 lizmat meanwhile, /me is going to get some shuteye  :-)
21:43 timotimo good night!
21:43 lizmat good night, #moarvm!
21:44 timotimo perhaps a little optimization in the bytecode validator could pay off
21:45 Zoffix \o
21:45 timotimo it's not very expensive, but MVM_proc_getenvhash gets called 6.3k times
21:45 timotimo that spends a bunch of time doing utf8-c8 decoding
21:46 * timotimo runs again without spesh
21:49 rba joined #moarvm
21:49 timotimo might want to cache the result of getenvhash in a few places, or maybe even inside moarvm
22:01 rba_ joined #moarvm
22:05 timo1 joined #moarvm
22:06 timotimo pretty much all the calls to getenvhash come from nqp's moduleloader where it looks if there's an NQP_LIB env var
22:06 timotimo i'll now compare Ir counts with what it looks like when i cache that hash inside the moduleloader
22:09 timo1 joined #moarvm
22:09 timotimo it's really worth a lot
22:09 timotimo 463,878,397 before the change, 405,717,806 after
22:10 timotimo it doesn't translate 1:1 to time, of course
22:14 timotimo a really big portion of startup is still deserialization
22:16 rba joined #moarvm
22:18 jnthn Huh, I was sure the env hash was already cached o.O
22:18 jnthn timotimo:
22:18 jnthn timotimo++ for catching that one
22:18 timotimo we'd want to cache it in the instance?
22:18 timotimo currently i'm caching it in nqp hll code
22:19 jnthn Yeah, in instance should do it
22:19 timotimo should we ever invalidate that hash?
22:19 jnthn Can the environment ever change "from the outside"?
22:20 timotimo if you nativecall to setenv
22:20 timotimo but user code will always be looking at %*ENV
22:20 timotimo rather than nqp::getenvhash
22:20 jnthn OK, well, if you're doing that, you can nativecall to getenv so far as I care :P
22:20 jnthn Right
22:20 jnthn And that's 'cus setenv/getenv are, iirc, a multi-threading disaster
22:20 timotimo yeah, ugh.
22:25 timotimo i don't know how to get a more accurate wallclock measurement from perl6 startup
22:26 greppable6 joined #moarvm
22:28 timotimo caching it in moarvm gets the count down to 403,548,367
22:29 timotimo anyway, it's like two strings for each env var and another for the = we're searching for each time
22:29 timotimo so we're also pushing back the first GC run a little bit
22:30 jnthn Nice :)
22:31 rba_ joined #moarvm
22:31 timotimo hm. it might be beneficial for the spesh thread to check some "is moarvm currently quitting?" flag or something?
22:31 timotimo here i see "moarvm teardown" at 316857586 ticks, but spesh worker finishes up at 334677178
22:32 timotimo m: say (334677178 - 316857586) / 3392386327
22:32 camelia rakudo-moar e19b34: OUTPUT: «0.00525281919␤»
22:32 timotimo so 0.005 seconds? that's not so terrible i guess
22:33 MasterDuke look, i'm a very important person and i need those 0.005s!
22:33 timotimo you truly are
22:37 buggable joined #moarvm
22:39 greppable6 joined #moarvm
22:39 Geth ¦ MoarVM/cache_env_hash: f2deedfc96 | (Timo Paulssen)++ | 3 files
22:39 Geth ¦ MoarVM/cache_env_hash: cache the env var hash after first access
22:39 Geth ¦ MoarVM/cache_env_hash:
22:39 Geth ¦ MoarVM/cache_env_hash: turns out the nqp module loader asked for the whole hash
22:39 Geth ¦ MoarVM/cache_env_hash: to maybe find an NQP_LIB var about 6k times during bare
22:39 Geth ¦ MoarVM/cache_env_hash: startup of rakudo.
22:39 Geth ¦ MoarVM/cache_env_hash: review: https://github.com/MoarVM/MoarVM/commit/f2deedfc96
22:40 timotimo should be fine, right? even with concurrent first accesses it shouldn't crash?
22:41 timotimo the threads will just race to install it and whichever wins doesn't get cleaned up by gc later?
22:41 jnthn Yeah
22:41 jnthn We do that trick in a few places
22:41 timotimo good
22:41 jnthn So long as you only install it once it's fully populated
22:42 jnthn And it's immutable from then on
22:42 jnthn Then it's fine
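(A sketch of the publish-once caching pattern jnthn describes, assuming an instance-level slot; the field and helper names are illustrative. The hash is fully populated before the single pointer store makes it visible, so a thread that loses the race just leaves an unreachable hash for the GC to collect. A real version would also register the cached hash as a permanent GC root so collections keep it alive and update the pointer.)

    MVMObject * MVM_proc_getenvhash(MVMThreadContext *tc) {
        MVMInstance *i = tc->instance;
        if (!i->env_hash) {                       /* illustrative field */
            MVMObject *hash = build_env_hash(tc); /* fully populated
                                                     before publishing */
            i->env_hash = hash;                   /* benign race: either
                                                     winner is valid */
        }
        return i->env_hash;
    }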
22:42 timotimo there's the merge
22:42 Geth ¦ MoarVM: f2deedfc96 | (Timo Paulssen)++ | 3 files
22:42 Geth ¦ MoarVM: cache the env var hash after first access
22:42 Geth ¦ MoarVM:
22:42 Geth ¦ MoarVM: turns out the nqp module loader asked for the whole hash
22:42 Geth ¦ MoarVM: to maybe find an NQP_LIB var about 6k times during bare
22:42 Geth ¦ MoarVM: startup of rakudo.
22:42 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/f2deedfc96
22:42 timotimo well, it's not really immutable; you can still bindkey and such to the result of nqp::getenvhash
22:42 timotimo but since it doesn't have an effect otherwise, why would you. and if you do, you're on your own i guess?
22:43 jnthn Yeah
22:43 timotimo a nice little win.
22:43 jnthn Indeed
22:43 jnthn timotimo++
22:43 timotimo wonder how many msecs it'll turn out to be on liz' machine
22:44 timotimo (not enough to satisfy us, of course)
22:45 buggable joined #moarvm
22:46 timotimo out of the 445,664,567 Ir for a -e 'say "hi"', about 131,290,357 is under MVM_6model_get_how, i.e. causing metaobjects to be deserialized
22:46 rba joined #moarvm
22:47 timotimo in total, work_loop is responsible for 245,266,376 Ir
22:47 timotimo gets invoked 1.5k times
22:47 timotimo i think we run work_loop once every time we want to have an object and need to get deserializin'?
22:48 jnthn Yeah
22:48 jnthn 1.5k things is a lot less than all the things :)
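(The shape of MoarVM's lazy deserialization, schematically: fetching an object out of a serialization context materializes it on demand, and the work loop then drains whatever that object transitively requires and nothing more. The names below are schematic, not MoarVM's actual fields.)

    /* Demand-driven deserialization, schematically. */
    static MVMObject * demand_object(MVMThreadContext *tc,
                                     SerializationReader *sr,
                                     MVMint64 idx) {
        if (!sr->objects[idx])        /* not materialized yet */
            work_loop(tc, sr);        /* deserialize it and whatever
                                         it transitively references */
        return sr->objects[idx];
    }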
22:48 timotimo OK, how about MVM_validate_static_frame: responsible for 50,380,527 Ir; called 1,843 times in total
22:49 timotimo does it sound bad that we call "repossess" 438 times?
22:49 jnthn It might be interesting to see which types we need the HOW
22:49 timotimo i can surely instrument that
22:49 jnthn Easy thanks to debug_name
22:50 timotimo yay
22:50 greppable6 joined #moarvm
22:50 jnthn I've poked at the bytecode validation a few times and struggled to see any more obvious speedups there
22:51 jnthn But if you can find them, then that's surely a good thing
22:51 timotimo hm. 6model_get_how calls sc_get_object for these: https://gist.github.com/timo/9d90da147191d8283658e07e52d4bcfa
22:52 timotimo added a file for total times get_how is called
22:52 timotimo unsurprisingly with -e '' we have no s1, h1, f1, c1, or a1, and no qq or b1 either
22:53 timotimo and no Perl6::QGrammar+*
22:53 jnthn https://gist.github.com/timo/9d90da147191d8283658e07e52d4bcfa#file-gistfile1-txt-L43
22:53 jnthn Where'd the hi come from? :)
22:53 timotimo that's -e 'say "hi"' for you
22:54 jnthn Oh
22:54 jnthn I thought you meant -e ''
22:54 timotimo nah, but i can put the data in there for that, too
22:54 jnthn Oh, right, you were comparing it with the stuff in the gist :)
22:54 timotimo yup
22:54 jnthn I'm a bit surprised by some of them
22:55 jnthn NQPFileHandle for example
22:55 timotimo the very first how we ask for
22:55 jnthn Oh, we could log the backtraces of where we ask too :)
22:56 timotimo i just put an uv_hrtime before and after to see which one takes how long
22:57 timotimo though i probably want it in front for better readability
22:58 timotimo updated
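(The instrumentation timotimo describes, sketched: uv_hrtime() is libuv's monotonic nanosecond clock, and MoarVM's debug_name gives a printable type name. The fragment is assumed to sit inside MVM_6model_get_how, wrapped around the existing lazy-deserialization path.)

    MVMuint64 t0 = uv_hrtime();
    /* ... existing code that fetches/deserializes the HOW ... */
    fprintf(stderr, "get_how %s: %llu ns\n",
            MVM_6model_get_debug_name(tc, obj),
            (unsigned long long)(uv_hrtime() - t0));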
22:58 timotimo not terribly surprising that Perl6::Grammar takes the longest
22:58 timotimo by far, it looks like
23:00 jnthn Lots of methods. Lots of NFAs.
23:00 timotimo i mean, we still have the thing where every NFA deserializes all its guts followed (or preceded?) by a bunch of nqp arrays with the same data again
23:01 jnthn Yeah, there's a big memory and startup time win to be had for whoever takes that task on
23:01 rba_ joined #moarvm
23:01 * timotimo wasn't developer enough the last time
23:01 timotimo 2 years 1 month ago
23:02 jnthn Probably one that's in need of either patience or stubbornness :)
23:05 timotimo with the way the sc work loop works, it's not easy to attribute the time spent deserializing to individual objects once the work loop has been entered
23:06 timotimo i had a potentially silly idea in the shower today: should we have a mode (ifdefd) that saves a pointer to "which object was the first to put me into a worklist" when doing GC?
23:06 timotimo our problem is usually more that things are not put into worklists, right?
23:07 timotimo though if we encounter an object in the fromspace we'd have a pointer into the fromfromspace for an object that used to hang on to us?
23:07 timotimo i think it's clear i haven't thought this through
23:12 jnthn Hm, not quite sure I follow, but it is kinda late...
23:12 jnthn We already do trap on worklist addition of something that is in a fromspace though
23:12 jnthn (In debug mode, that is)
23:14 timotimo right
23:14 timotimo it's not clear if my idea is a win in any situation
23:21 timotimo rest time o/
23:29 jnthn me too o/
