Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2017-12-06


All times shown according to UTC.

Time Nick Message
00:54 patrickz joined #moarvm
00:59 evalable6 joined #moarvm
01:11 jpf joined #moarvm
01:55 AlexDaniel joined #moarvm
01:58 bisectable6_ joined #moarvm
01:58 bisectable6 joined #moarvm
02:58 ilbot3 joined #moarvm
02:58 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
03:04 yoleaux joined #moarvm
04:12 harrow` joined #moarvm
04:12 SourceBaby joined #moarvm
04:56 TimToady joined #moarvm
06:04 releasable6 joined #moarvm
06:04 greppable6 joined #moarvm
06:05 squashable6 joined #moarvm
07:10 geospeck joined #moarvm
07:14 domidumont joined #moarvm
07:21 domidumont joined #moarvm
07:30 brrt joined #moarvm
07:31 brrt left #moarvm
07:35 reportable6 joined #moarvm
08:32 Ven`` joined #moarvm
08:45 zostay joined #moarvm
09:01 zakharyas joined #moarvm
09:03 zakharyas joined #moarvm
09:44 geospeck joined #moarvm
10:15 brrt joined #moarvm
10:17 samcv argh gonna start over again. well sorta. i'm doing a rework of a bunch of point stuff in ucd2c.pl
10:18 samcv but i've made so many changes and i need to go back to master and then iteratively do it. and not do it on top of adding faux properties for Bidi_Matching_Bracket, i can add that back in later
10:18 samcv frustratingly there are global $first_point and $last_point variables, but many subs ALSO have local variables of the same name
10:18 samcv ARGH
10:19 samcv so first i'm going to rename them with titlecase and work from there, since i think that was what ended up going wrong. not realizing certain things were local/not
10:21 brrt good * #moarvm
10:23 samcv i'm thinking of changing all globals to titlecase to make it obvious
10:37 AlexDaniel joined #moarvm
11:19 domidumont joined #moarvm
11:43 brrt joined #moarvm
11:47 brrt i always just use $UPPERCASE for globals
11:48 samcv hmm that's a choice too
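
The pattern samcv is fighting, sketched in Perl 6 for illustration (ucd2c.pl itself is Perl 5, and the sub name here is invented):

    my $first_point = 0;              # file-scoped "global"

    sub process-block($start) {
        my $first_point = $start;     # same name, but a fresh local: everything
        # below reads and writes the local, never the global, which is easy to
        # miss when the names are identical. Renaming the file-scoped ones
        # ($First_Point, or $FIRST_POINT as brrt suggests) makes the
        # difference visible at the point of use.
    }
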
11:48 yoleaux 11:05Z <tbrowder> samcv: thanks!
11:49 dogbert17 what does this mean: Can NOT inline to (43) into Str (46): target has a :noinline instruction
11:53 dogbert17 looks a bit cryptic to the uninitiated :)
11:58 jnthn The method to couldn't be inlined into the method Str because there's an instruction that we're not allowed to inline inside of to
11:58 jnthn Maybe we should stick quotes around the names :)
12:01 dogbert17 aha, thx for the explanation, just testing the define in optimize.c in order to see what it outputs
12:02 dogbert17 what do the numbers, i.e. 43 and 46, mean?
12:09 jnthn They're ids
12:09 jnthn Useful if you need to differentiate multi candidates with the same name, for example
12:09 jnthn They can be looked up in the spesh_log
12:12 dogbert17 thx
12:13 dogbert17 the most common case for not inlining seems to be that the bytecode is too large. I guess there's a reason why the limit is set the way it is (384). Diminishing returns perhaps?
12:14 timotimo once a frame is gigantic, setting up a call record is the smallest part of executing the thing
12:14 timotimo but if you have only five instructions, setting up the call record and tearing it down for returning takes a significant amount of the total time
12:15 dogbert17 aha, so 384 was chosen after some tuning procedure then?
12:16 timotimo aye, perhaps for core setting compilation?
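
timotimo's tradeoff in concrete terms, with a made-up Perl 6 sub: a body of a handful of instructions is dominated by call-record setup and teardown, which is exactly what inlining removes, while a frame near the 384-byte bytecode limit gains comparatively little from it.

    sub is-divisor($sum, $x) { $sum %% $x }             # tiny frame: the call machinery
                                                        # costs more than the body itself,
                                                        # so spesh inlining it is a clear win
    say (1..100).grep({ is-divisor(100, $_) }).elems;   # 9 divisors of 100 in that range
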
12:16 dogbert17 also, once the reason for not inlining was given as 'different HLLs'. What does that mean?
12:17 * dogbert17 asks a lot of noob questions atm
12:18 timotimo a hll has, among other things, a set of named classes that fulfill some role
12:18 timotimo like "slurpy array"
12:18 timotimo if you pass a simple array from a perl6 function down into an nqp function - or vice versa - it will be converted
12:18 timotimo we do that by calling "hllize" on the thing, which looks at "the current frame's hll"
12:19 timotimo if we inline an nqp thing into a perl6 thing, we don't have "the current frame's hll" any more
12:19 timotimo though we could conceivably turn "hllize" in an inlinee into "hllizefor" maybe?
12:21 timotimo or store what hll something is in the inlines table, then we'd have to check for hlls there whenever we look that up
12:21 timotimo not sure if it's worth it; what cases do you see where that happens?
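
A small Perl 6-side sketch of the boundary timotimo describes; nqp::list gives one of nqp's own BOOT types, and the comments deliberately stay vague about the exact type names.

    use nqp;

    my $raw := nqp::list(1, 2, 3);   # an nqp-level (BOOT) array, not a Perl 6 Array
    say $raw.^name;                  # some BOOT* type name
    say [1, 2, 3].^name;             # Array, the Perl 6 HLL's own type
    # When a value like $raw crosses into Perl 6-HLL code (or a Perl 6 Array goes
    # the other way), MoarVM applies hllize, which consults "the current frame's
    # HLL" to pick the target type. Inlining an nqp frame into a Perl 6 frame
    # would make that lookup answer for the wrong language, hence the
    # "different HLLs" refusal.
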
12:21 dogbert17 and hll doesn't mean High Level Language, or does it?
12:21 dogbert17 Can NOT inline resolve (150) into grep-callable (1734): different HLLs
12:22 timotimo i wonder what that is
12:22 dogbert17 the src has one grep, i.e. 'my $factors = (1..floor(sqrt($sum))).grep( -> $x { $sum % $x == 0} ).elems * 2;'
12:24 timotimo you should be able to find resolve in the speshlog and figure out what file and line it comes from
12:27 dogbert17 hmm, that log is monstrous :)
12:27 timotimo of course :D
12:27 timotimo spesh does a whole lot of work
12:28 dogbert17 could it be 'Latest statistics for 'resolve' (cuid: 150, file: src/Perl6/World.nqp:3796)'
12:28 timotimo looks good, name and cuid both match
12:28 timotimo "create_lexical_capture_fixup", eh?
12:29 timotimo i won't pretend i know how that stuff works :)
12:31 dogbert17 how are your wrists today?
12:31 timotimo ungood, but not plusungood
12:32 dogbert17 is there anything you can do yourself in order to improve things, e.g. some kind of training?
12:32 timotimo i can wear my braces and not use my hands at all
12:32 dogbert17 is that the recommended way
12:33 dogbert17 sounds awfully limiting
12:33 timotimo 'tis
12:35 dogbert17 and for how long are you supposed to do this?
12:35 timotimo until i feel better i guess
12:35 timotimo i ought to get another appointment with the doc
12:35 dogbert17 argh
13:23 geospeck joined #moarvm
13:36 domidumont joined #moarvm
13:42 domidumont1 joined #moarvm
13:56 zakharyas joined #moarvm
14:24 dogbert17 jnthn: did you ever have time to look at https://github.com/MoarVM/MoarVM/issues/680 from MasterDuke?
15:05 geospeck joined #moarvm
15:05 dogbert17 anyway I have looked a bit at it and even run it through the profiler, i.e. --profile. If the generated output is correct (?) the reason for the leak seems to be in plain sight
15:07 dogbert17 the following line in the profiler output looks suspicious to me: 'The profiled code ran for 424326.62ms. Of this, 0ms were spent on garbage collection and dynamic optimization (that's 0%).'
15:07 dogbert17 and 'The profiled code did 0 garbage collections. There were 0 full collections involving the entire heap. '
15:09 timotimo one of the problems comes from a multi-threaded program being --profile'd
15:09 timotimo the other is from the spesh thing moving to its own thread and not being accounted for in the profile timing any more
15:10 dogbert17 does that explain the lack of GC's as well?
15:10 timotimo yeah, that's multithreaded profiling stuff, too
15:11 dogbert17 the interesting thing is that according to valgrind there aren't many leaks but there's plenty of 'reachable' memory, several hundred megs
15:12 timotimo in these cases, a heap dump profile might help
15:12 dogbert17 e.g. like this
15:12 dogbert17 ==31919== 128,604,456 bytes in 118 blocks are still reachable in loss record 3,205 of 3,205
15:12 dogbert17 ==31919==    at 0x402A17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
15:12 dogbert17 ==31919==    by 0x41A90BF: MVM_malloc (alloc.h:2)
15:12 dogbert17 ==31919==    by 0x41A90BF: MVM_string_join (ops.c:1843)
15:12 dogbert17 ==31919==    by 0x40EA526: MVM_interp_run (interp.c:1641)
15:13 timotimo ah?
15:13 timotimo i wonder if we're being silly with ropes
15:13 dogbert17 how silly :)
15:14 timotimo like, we're making single-character substring ropes out of every 4 kilobyte block and we keep those single characters around, not realizing each one holds on to 4k of data
15:15 timotimo not necessarily the case, but certainly possible
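
The suspected shape of the problem, as a Perl 6 sketch; whether join and substr really produce strands in a given case depends on MoarVM-internal thresholds, so the comments say "may" on purpose.

    my @chunks = 'x' x 4096 xx 100;      # ~4 KB decoded blocks, like the ones the
                                         #   decode stream hands back
    my $joined = @chunks.join;           # MVM_string_join may build a strand (rope)
                                         #   string that references the chunks' buffers
    my $tiny   = $joined.substr(0, 1);   # a one-character substring may itself be a
                                         #   strand into $joined, so keeping $tiny
                                         #   alive keeps far more than one char alive
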
15:16 dogbert17 MasterDuke's example contains the line 'output    => @chunks.join.chomp,' which I suspect contains all rakudo commits
15:16 timotimo i don't see anything like that there, though
15:16 dogbert17 odd
15:17 dogbert17 there's other stuff as well (spam warning)
15:17 dogbert17 ==31919== 78,069,964 bytes in 3,616 blocks are still reachable in loss record 3,204 of 3,205
15:18 dogbert17 ==31919==    at 0x402A17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
15:18 dogbert17 ==31919==    by 0x4197456: MVM_malloc (alloc.h:2)
15:18 dogbert17 ==31919==    by 0x4197456: MVM_string_utf8_decodestream (utf8.c:350)
15:18 dogbert17 ==31919==    by 0x4193B93: run_decode (decode_stream.c:115)
15:18 dogbert17 ==31919==    by 0x4193B93: MVM_string_decodestream_get_available (decode_stream.c:492)
15:18 dogbert17 ==31919==    by 0x4160116: MVM_decoder_take_available_chars (Decoder.c:245)
15:18 dogbert17 ==31919==    by 0x40E4255: MVM_interp_run (interp.c:5121)
15:24 MasterDuke dogbert17: have you used heaptrack before? i find it helpful for memory related stuff
15:24 dogbert17 nope, I have not tbh
15:24 timotimo ok, so the second part is where we directly re-use the buffer from the decoder as the string's buffer without copying
15:26 timotimo the first one, surprisingly the bigger one, seems to be about strands arrays being created?
15:26 dogbert17 timotimo: I saved a gist of the biggest leaks here: https://gist.github.com/dogbert17/cdf6f54e914c809e608e5f77b909c9e1
15:27 MasterDuke dogbert17: i'd be curious if you got similar results with my "use the fsa for string storage" branch?
15:28 dogbert17 MasterDuke: I can try
15:28 zakharyas joined #moarvm
15:28 MasterDuke https://github.com/MasterDuke17/MoarVM/tree/use_fsa_for_string_storage
15:29 MasterDuke it's rebased to head, and passes rakudo spectest
15:36 dogbert17 oh, so it's not a branch in the MoarVM project. So how do I use it then (I'm no git virtuoso)?
15:37 timotimo you can "git remode add masterduke git@github.com:masterduke17/moarvm" then "git fetch masterduke" and "git checkout use_fsa_for_string_storage" should give you that branch
15:38 * dogbert17 tries
15:38 MasterDuke s/remode/remote/
15:39 dogbert17 Permission denied (publickey).
15:39 timotimo ok, then use
15:39 timotimo wait
15:39 timotimo or maybe you have to put .git at the end
15:39 timotimo or perhaps upper/lower case matters there?
15:40 eater dogbert17: use https://github.com/masterduke17/moarvm
15:40 eater your public key is not added to github?
15:42 dogbert17 don't have a public key
15:46 dogbert17 thx guys, got it working, building now
15:50 dogbert17 MasterDuke: to me it seems as if the memory usage is more or less the same
15:53 MasterDuke hm, i guess if it's allocating large strings that makes sense
15:54 dogbert17 your code example, i.e. the leak, is fascinating. after 50 iterations on a 32 bit system I get: {:data("558.980 kB"), :dirty("0 kB"), :lib("0 kB"), :resident("530.708 kB"), :share("26.484 kB"), :size("588.304 kB"), :text("8 kB")}
15:56 dogbert17 if I add the following lines as the two last statements in the loop '@commits = Empty; @tags = Empty;' memory usage changes, after 50 iterations, to: {:data("344.848 kB"), :dirty("0 kB"), :lib("0 kB"), :resident("320.000 kB"), :share("26.472 kB"), :size("374.172 kB"), :text("8 kB")}
15:56 MasterDuke any difference if you add an nqp::force_gc in the loop?
15:56 dogbert17 sec
15:59 timotimo huh, the for loop is holding on to the result from get-output?
15:59 timotimo if you only empty out commits, not tags, does that make the same difference?
16:00 dogbert17 timotimo: will check
16:00 dogbert17 MasterDuke: your original code with force_gc in the loop: {:data("566.032 kB"), :dirty("0 kB"), :lib("0 kB"), :resident("540.540 kB"), :share("26.484 kB"), :size("595.356 kB"), :text("8 kB")}
16:02 MasterDuke looks about the same
16:03 dogbert17 timotimo: setting @commits = Empty, no forced_gc: {:data("350.648 kB"), :dirty("0 kB"), :lib("0 kB"), :resident("325.708 kB"), :share("26.616 kB"), :size("379.972 kB"), :text("8 kB")}
16:07 brrt joined #moarvm
16:08 dogbert17 it's odd that it suddenly manages to get rid of +200MB, shouldn't it do that anyway, i.e. without setting @commits = Empty?
16:13 brrt joined #moarvm
16:18 timotimo something must be erroneously holding on to the arrays
16:18 timotimo i'm not sure if making an array empty with = Empty actually shrinks the underlying storage, though; our arrays basically only grow ...
16:19 dogbert17 but something seems to change
16:19 timotimo the objects it was holding will be gone
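
For reference, the pattern under discussion, roughly reconstructed; MasterDuke's actual script is in MoarVM issue #680, and get-output here is only a hypothetical stand-in for its helper.

    sub get-output(*@cmd) {                    # hypothetical stand-in
        run(|@cmd, :out).out.slurp(:close)
    }

    for ^50 {
        my @commits = get-output('git', 'log', '--oneline').lines;
        my @tags    = get-output('git', 'tag').lines;
        # ... build the result structures from @commits and @tags ...
        @commits = Empty;   # added as the last two statements of the loop body:
        @tags    = Empty;   #   this reclaimed roughly 200 MB in dogbert17's run,
                            #   even though each Array's backing storage only grows
    }
    # (an nqp::force_gc in the loop made no measurable difference)
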
16:22 dogbert17 fun 'fact', according to the borked --profile 98.24% of the time was spent in 'acquire SETTING::src/core/Semaphore.pm:5'
16:23 timotimo not surprising if you know why profiling multithreaded programs breaks
16:23 timotimo the symptom is kind of like getting only the data from any random thread
16:23 dogbert17 aha
16:23 timotimo and since rakudo spawns worker threads for the thread pool, they'll be receiving work items on a queue
16:23 timotimo all the time they're spending not doing something will be spent waiting for a work item to arrive
16:24 timotimo and that's where the semaphore lives
16:24 timotimo and since data from all other threads gets silently dropped, this easily reaches into the 90% range
16:24 timotimo RSI break time
16:24 dogbert17 rest well
16:28 dogbert17 interesting, moving the declaration of 'my @commits' to the outer scope also seems to save ~200MB on a 32 bit system and you no longer have to set the variable to Empty after each iteration
16:44 brrt joined #moarvm
17:10 MasterDuke huh, not what i would have expected
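
The variant dogbert17 just described, with the same stand-ins as the earlier sketch:

    my @commits;                          # declared once, in the outer scope
    for ^50 {
        @commits = get-output('git', 'log', '--oneline').lines;
        # ... use @commits ...
    }
    # no per-iteration `@commits = Empty` needed; on the 32-bit box this alone
    # kept roughly 200 MB from accumulating
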
17:30 AlexDaniel joined #moarvm
17:35 domidumont joined #moarvm
17:47 geospeck joined #moarvm
17:54 domidumont1 joined #moarvm
19:36 unicodable6 joined #moarvm
19:36 committable6 joined #moarvm
20:24 squashable6 joined #moarvm
20:30 lizmat so I'm looking at making my @a[2;2]; @a["0";"0"] work
20:30 lizmat and now I have a rather unstable MoarVM that crashes about 20% of the time running:
20:31 lizmat my @matrix[2;2] = [1,2],[3,4]; @matrix[0;0] for ^1000000
20:32 lizmat the diff: https://gist.github.com/lizmat/be498731d5e6365b629292f01f42cce5
20:32 BooK joined #moarvm
20:32 lizmat does anybody see something weird there ?
20:33 benchable6 joined #moarvm
20:33 lizmat basically, I create a list_i of indices that are sensibly coercible to Int
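
Spelled out, the behaviour lizmat's diff is after (the expected results are the goal of the patch, not something to rely on at this point):

    my @a[2;2] = [1,2],[3,4];   # a fixed-shape, two-dimensional array
    say @a[0;1];                # 2, using ordinary Int indices
    say @a["0";"1"];            # the same element, via Str indices that coerce cleanly to Int
    say @a[<1>;0];              # 3; <1> is an IntStr allomorph, the form used in the
                                #    crashing one-liner a few lines further down
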
20:44 lizmat oohh,  rather reliably does: moar(74411,0x7fffcb5183c0) malloc: *** error for object 0x7f832e693308: incorrect checksum for freed object - object was probably modified after being freed.
20:44 lizmat *** set a breakpoint in malloc_error_break to debug
20:44 lizmat jnthn timotimo: is that something you could work with ?
20:45 lizmat code in question is:
20:45 lizmat my @matrix[2;2] = [1,2],[3,4]; @matrix[<0>;0] for ^1000000
20:46 lizmat this either runs for about 3 seconds, or crashes after about 350 msecs
20:47 jnthn I'd feed it to perl6-valgrind-m
20:47 jnthn As a first resort
20:47 lizmat ah, that was when I had a spectest running: without that it runs for about 1 second, or crashes after about 200 msecs
20:47 jnthn In the best case it points at something silly and easy to fix
20:47 committable6 joined #moarvm
20:47 unicodable6 joined #moarvm
20:47 AlexDaniel joined #moarvm
20:47 SourceBaby joined #moarvm
20:47 Geth joined #moarvm
20:47 synopsebot joined #moarvm
20:47 dalek joined #moarvm
20:47 tbrowder joined #moarvm
20:48 MasterDuke lizmat: with your patch or without?
20:48 lizmat about to commit the fix to rakudo
20:48 lizmat then it's at HEAD
20:48 lizmat it does make spectest ok
20:49 lizmat https://github.com/rakudo/rakudo/commit/00632edb6f
20:55 MasterDuke got valgrind output, gisting now
20:55 MasterDuke https://gist.github.com/MasterDuke17/453236bf30ff65b88fd8b84409368975
20:56 jnthn .tell brrt SEGV in linear scan allocator: https://gist.github.com/MasterDuke17/453236bf30ff65b88fd8b84409368975
20:57 jnthn hm, no message bot?
20:57 MasterDuke what's the guy's name who runs yoleaux? i think it begins with a d
20:58 jnthn dpk iirc
20:59 jnthn Anyways, in so far as that is isolated to a relatively small piece of code, it's probably not too nasty
20:59 jnthn (To find and fix, that is)
20:59 yoleaux joined #moarvm
21:00 jnthn \o/
21:00 jnthn .tell brrt SEGV in linear scan allocator: https://gist.github.com/MasterDuke17/453236bf30ff65b88fd8b84409368975
21:00 yoleaux jnthn: I'll pass your message to brrt.
21:00 jnthn Danke.
21:01 MasterDuke no surprise, but it doesn't seem to segv with MVM_JIT_DISABLE=1
21:01 * lizmat hopes it's what's causing the random crashes in make spectest
21:04 jnthn MVM_EXPR_JIT_DISABLE=1 to find out :)
21:04 jnthn Or what MasterDuke said, but the latter disables less
21:07 lizmat MVM_EXPR_JIT_DISABLE=1 6 'my @matrix[2;2] = [1,2],[3,4]; @matrix[<0>;0] for ^1000000'
21:07 lizmat Command terminated abnormally.
21:08 lizmat after trying about 10x
21:08 lizmat yeah, MVM_EXPR_JIT_DISABLE=1 does not prevent it from happening
21:10 MasterDuke yep, valgrind error output looks the same with MVM_EXPR_JIT_DISABLE=1
21:11 jnthn huh, odd
21:14 MasterDuke gist updated with some gdb output
21:21 timotimo so how can it crash inside expr_compile if the exprjit is disabled?
21:22 jnthn Sorry, it's my fault
21:22 jnthn The flag is MVM_JIT_EXPR_DISABLE
21:22 jnthn Misremembered it :)
21:22 jnthn (The flag not working is mine, I mean. :P)
21:22 jnthn Well, I guess in a sense MoarVM in general is too :P
21:22 timotimo aaah
21:23 MasterDuke any reason not to have MoarVM warn if there's an env variable that starts with MVM_ it doesn't know about?
21:26 MasterDuke MVM_JIT_EXPR_DISABLE=1 doesn't seem to crash
21:34 lizmat confirm, I can't get it to crash with MVM_JIT_EXPR_DISABLE
22:02 nebuchadnezzar joined #moarvm
22:02 japhb joined #moarvm
22:02 bartolin joined #moarvm
23:11 Geth joined #moarvm
