Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2016-10-21


All times shown according to UTC.

Time Nick Message
01:48 ilbot3 joined #moarvm
01:48 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at http://irclog.perlgeek.de/moarvm/today
03:46 hoelzro joined #moarvm
07:10 geekosaur joined #moarvm
07:30 zakharyas joined #moarvm
07:32 brrt good * #moarvm
07:33 domidumont joined #moarvm
07:38 domidumont joined #moarvm
08:12 FROGGS joined #moarvm
08:13 FROGGS o/
09:04 domidumont joined #moarvm
09:26 jnthn o/
09:26 yoleaux2 20 Oct 2016 22:16Z <FROGGS> jnthn: I got it to work now: https://gist.github.com/FROGGS/bd727a2408ee66053a4569c4b8765a8c
09:28 jnthn FROGGS: I'm a tad surprised there ain't a c => ... on the first line of output :)
09:28 jnthn But...nice work! \o/
10:24 JimmyZ joined #moarvm
10:52 timotimo jnthn: i figure instead of immediately freeing the last frame's env and locals, i could store up to one of each; then, when freeing the next two or allocating new ones, i'd hit the allocator four times in a row, locking and unlocking only once in the process
10:52 timotimo and on top of that, if the sizes match up, just not hit the allocator at all
10:53 domidumont joined #moarvm
10:58 timotimo if there's also a way to store what the original size of a slot was, we can re-use one or both of those cached ones if the size of the next needed one is smaller than or equal to the current one, and also not hit the allocator in these cases
10:59 timotimo maybe actually freeing the slot if we require less than half or a quarter of it, so that we don't hang on to a 4 megabyte frame for an hour
11:10 timotimo oh, hmm, we want zeroes, eh?
11:11 timotimo i suppose when i free a frame and return it to the cache i can zero it out with memset and that'll be fine
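(Aside: a minimal standalone sketch of the one-slot cache timotimo describes, with plain calloc/free standing in for MoarVM's fixed size allocator (FSA); every name here is hypothetical, not MoarVM's actual API.)

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        void   *slot;  /* the cached buffer, or NULL when empty */
        size_t  size;  /* size it was allocated with: a size-keyed
                          allocator like the FSA needs this, not the
                          requested size, for the eventual real free */
    } FrameBodyCache;

    /* On frame exit: keep the buffer around instead of freeing it. */
    static void cache_release(FrameBodyCache *c, void *mem, size_t size) {
        if (c->slot)
            free(c->slot);        /* cache already full: really free */
        memset(mem, 0, size);     /* so re-use hands out zeroed memory */
        c->slot = mem;
        c->size = size;
    }

    /* On frame entry: satisfy the request from the cache if it fits and
       is not wastefully large (the "less than half" heuristic above). */
    static void *cache_acquire(FrameBodyCache *c, size_t wanted,
                               size_t *actual_size) {
        if (c->slot && wanted <= c->size && wanted * 2 > c->size) {
            void *mem    = c->slot;
            *actual_size = c->size;  /* remember for the eventual free */
            c->slot      = NULL;
            return mem;
        }
        *actual_size = wanted;
        return calloc(1, wanted);    /* cache miss: hit the allocator */
    }

(The actual_size out-parameter is exactly the detail whose absence timotimo diagnoses at 12:05 below.)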
11:16 timotimo i'm not entirely sure why the logic for setting up the VMNull pointers differs between allocate_heap_frame and allocate_frame. maybe an oversight when the VMNull-via-spesh mechanism was invented?
11:58 timotimo well, i managed to make things error out
11:58 timotimo that's ... progress?
12:05 timotimo ah, of course. i didn't grab the size out of the cache, so later frees would give the wrong size
12:11 timotimo not quite nailing this ... at all :)
12:22 timotimo at least i can play my newly obtained vidya game to try and get a fresh look at things
12:24 brrt vidya game?
12:24 timotimo yup. Rhythm Heaven Megamix
12:24 * brrt is so so so so so excited about the nintendo switch :-o
12:27 brrt otoh, haven't touched the wii u in ages, and not for lack of games
12:27 brrt but utterly no energy for it
12:28 timotimo hm. even when i turn off putting stuff into the cache, i get segfaults
12:29 timotimo er, i mean: even when i turn off getting things out of the cache
12:30 brrt and git diff doesn't tell you why?
12:32 timotimo https://gist.github.com/timo/b8f32c09684ef4cda728d530b1844e85 - do you see an obvious mistake?
12:36 brrt not yet
12:36 timotimo all i can imagine is i'm destroying the FSA (the fixed size allocator) by giving things the wrong size when freeing
12:36 timotimo i thought i was being sufficiently careful about that
12:37 timotimo oh, i get it
12:37 timotimo the memset is wrong
12:40 timotimo it's still asploding now, though
12:42 timotimo ah
12:42 timotimo allocd_* attributes in MVMFrame are MVMuint16, but the local vars (and therefore my parameters) are size_t
12:43 timotimo no, they are MVMint32 * instead of MVMuint32 *
12:45 timotimo hm. things are still explodey
12:46 timotimo MVMuint16, i should have said.
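(Aside: the width mismatch being chased here is the classic silent-truncation hazard; a standalone illustration using plain C types rather than MoarVM's typedefs.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        size_t   requested = 70000;               /* what the caller passed in */
        uint16_t allocd    = (uint16_t)requested; /* silently wraps to 4464    */
        printf("stored %u instead of %zu\n", allocd, requested);
        return 0;
    }

A later free that trusts the truncated value hands the allocator the wrong size, which matches the symptoms above.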
13:00 timotimo energy levels seem to be used up now :o ... but it's still so early in the day =_=
13:01 timotimo it's a bit suspicious that sometimes fprintf is losing its \n
13:03 brrt hmm, that it is
13:05 jnthn What does the line doing the fprintf look like?
13:08 timotimo looks like redirecting to a file makes it less bad
13:08 timotimo https://gist.github.com/timo/1154ffecdb1908f59b3308e44af62bcb - the log with cache hits and misses
13:09 timotimo though the use of parens is b0rky here; for the cache misses i output the sizes that are in the cache, while for getting something out of the cache i put the size from the cache first, then the size actually requested in the parens
13:10 timotimo whoops, using 152-byte regions to satisfy a request for 16 bytes
13:13 timotimo refresh for better output and more details
13:14 timotimo nap &
13:14 timotimo oh, i really gotta do laundry today
13:15 timotimo so nap has to wait
13:15 harrow joined #moarvm
13:16 nine Turning native_callback_cache into a weak hash doesn't seem to be a winning strategy. At times the cache will be the only thing referencing those MVMNativeCallbacks. That's why we need the cache in the first place.
13:16 japhb joined #moarvm
13:18 nine So I propose a different solution: add a usage counter to MVMNativeCallback. Increase the counter every time we take the callback info out of the cache. On GC go through the cache and evict all items with a usage counter of 0 and then reset all counters to 0.
13:18 nine Does this sound sane?
13:19 nine Could be that it needs a s/MVMNativeCallback/MVMNativeCallbackCacheHead/g
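(Aside: a standalone sketch of the eviction pass nine proposes, with invented types; the real structures would be MVMNativeCallback or, per the s/// above, MVMNativeCallbackCacheHead.)

    #include <stddef.h>

    typedef struct CacheEntry {
        struct CacheEntry *next;
        unsigned int       usage;  /* bumped on every cache hit */
        /* ... the callback info itself ... */
    } CacheEntry;

    /* On lookup: mark the entry as used since the last GC. */
    static CacheEntry *cache_hit(CacheEntry *entry) {
        entry->usage++;
        return entry;
    }

    /* At GC time: evict everything unused since the last GC, then reset
       the survivors' counters so they must prove themselves again. */
    static void cache_sweep(CacheEntry **head) {
        CacheEntry **link = head;
        while (*link) {
            CacheEntry *e = *link;
            if (e->usage == 0) {
                *link = e->next;   /* unlink; freeing left out here */
            } else {
                e->usage = 0;
                link     = &e->next;
            }
        }
    }

(In effect a second-chance policy; jnthn's objections below point at exactly its failure modes.)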
13:25 lizmat nine: feels like a refcounting scheme, and that hasn't worked too well in the past
13:25 lizmat and the refcounting we did have, jnthn removed from Moar recently, didn't he?
13:25 lizmat (thinking about frames)
13:26 jnthn Unfortunately, I suspect this is one of those problems for which we can't have a 100% automatic solution
13:27 jnthn a-c-function-that-uses-the-callback(-> { 42 }) // here we could evict it right away after the call
13:27 jnthn a-c-function-that-registers-the-callback-somewhere(-> { 42 }) // but here we cannot
13:27 jnthn And there's no way to tell between the two
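(Aside: the distinction jnthn draws, written out as two hypothetical C signatures; from the VM's side the declarations look identical.)

    #include <stddef.h>

    /* Pattern 1: the callback is only used during the call itself, so it
       could safely be evicted as soon as the call returns. */
    void sort_with(void *base, size_t n,
                   int (*cmp)(const void *, const void *));

    /* Pattern 2: the callback is stored and invoked later, so it must
       stay alive indefinitely (e.g. GUI event handler registration). */
    void on_button_click(void (*handler)(void));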
13:28 jnthn Evict all items with a usage count of 0 - what if the GC run happens before the callback's first use?
13:28 jnthn Or, in the second example, what if the callback simply isn't used between two GC runs?
13:28 nine Good news is that we don't need 100%. It's just a cache after all.
13:29 jnthn I fear "cache" is also a tad misleading, name-wise
13:29 nine ouch
13:29 jnthn It's also serving as the thing that keeps closures alive.
13:29 jnthn Note that "closure" is also significant here
13:30 jnthn I can't remember if the cache keys on ->outer also
13:30 jnthn But at the end of the day, this is a memory management issue
13:31 jnthn Essentially, for how long does the native code we're calling need the callback to stay alive
13:32 jnthn I can't remember the full history of the callback impl. I do remember having to fix it at some point, I think because the cache didn't distinguish between closures of the same code
13:33 jnthn So it would end up with outers messed up when you were doing, say, GUI event handler registration
17:05 timotimo a) it'd be better if that cache were bigger; b) a bigger cache will at some point resemble hoard, without the added intelligence that makes hoard actually good
17:07 timotimo jnthn: any reason in particular we don't just use hoard for everything?
18:06 FROGGS joined #moarvm
18:07 FROGGS o/
23:06 dalek joined #moarvm
23:06 synopsebot6 joined #moarvm
23:06 geekosaur joined #moarvm
23:06 _longines joined #moarvm
23:06 nine joined #moarvm
23:06 mtj_ joined #moarvm
23:06 btyler joined #moarvm
23:06 konobi joined #moarvm
23:06 cxreg joined #moarvm
23:06 timotimo joined #moarvm
23:06 sivoais joined #moarvm
23:06 avar joined #moarvm
23:06 yoleaux2 joined #moarvm
23:06 jnthn joined #moarvm
23:06 camelia joined #moarvm
23:06 nwc10 joined #moarvm
23:06 [Coke] joined #moarvm
23:06 harrow joined #moarvm
23:07 moritz joined #moarvm
23:07 ggoebel joined #moarvm
23:07 nebuchadnezzar joined #moarvm
23:07 pyrimidi_ joined #moarvm
23:07 diakopter joined #moarvm
23:07 brrt[idle] joined #moarvm
23:07 japhb joined #moarvm
23:07 mst joined #moarvm
23:07 hoelzro joined #moarvm
23:07 stmuk_ joined #moarvm
23:07 zostay joined #moarvm
23:07 ilmari joined #moarvm
23:07 arnsholt joined #moarvm
23:08 Util joined #moarvm
23:08 BinGOs joined #moarvm
23:08 Dunearhp joined #moarvm
23:08 geekosaur joined #moarvm
23:11 ChanServ joined #moarvm
23:12 TimToady joined #moarvm
23:13 sivoais joined #moarvm
23:13 sivoais joined #moarvm
