00:00 lizmat joined #moarvm
01:29 zakharyas joined #moarvm
01:31 <MasterDuke> timotimo: interesting. stage parse now panics when trying to build normally (but with FSA_SIZE_DEBUG still on) but succeeds under valgrind
01:55 ilbot3 joined #moarvm
01:55 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at http://irclog.perlgeek.de/moarvm/today
02:09 geekosaur joined #moarvm
02:30 <MasterDuke> timotimo: i just ran a couple builds with /usr/bin/time and i got significantly less variability with MVM_SPESH_BLOCKING=1
03:17 lizmat joined #moarvm
03:43 <Geth> ¦ MoarVM: 27c72f7d55 | (Samantha McVey)++ | src/strings/ops.c
03:43 <Geth> ¦ MoarVM: join: special case 8bit flat pieces/separator. 3x faster
03:43 <Geth> ¦ MoarVM:
03:43 <Geth> ¦ MoarVM: Special case when pieces or separators are flat 8-bit representations
03:43 <Geth> ¦ MoarVM: of strings. This improves speed 3x when joining 10 100,000 grapheme
03:43 <Geth> ¦ MoarVM: flat 8-bit pieces.
03:43 <Geth> ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/27c72f7d55
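[Context for the commit above: when the separator and every piece are flat strings stored as 8-bit data, the join can be done as a plain byte copy, with no grapheme iteration at all. The sketch below is illustrative only; the FlatStr8 type and join_flat_8bit name are hypothetical, and this is not the committed MoarVM code.]

    /* Minimal sketch of an 8-bit flat-string join fast path.
     * Assumes all inputs are flat 8-bit buffers; names are hypothetical. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        const char *bytes;   /* flat 8-bit storage */
        size_t      len;
    } FlatStr8;

    /* Join `num` pieces with `sep`; caller frees the returned buffer. */
    static char *join_flat_8bit(const FlatStr8 *sep, const FlatStr8 *pieces,
                                size_t num, size_t *out_len) {
        size_t total = 0, i;
        char *out, *pos;

        if (num == 0) {
            *out_len = 0;
            return NULL;
        }
        for (i = 0; i < num; i++)
            total += pieces[i].len;
        total += sep->len * (num - 1);

        out = malloc(total + 1);     /* +1 so the result can be NUL-terminated */
        if (!out)
            return NULL;
        pos = out;
        for (i = 0; i < num; i++) {
            if (i > 0) {                          /* separator between pieces */
                memcpy(pos, sep->bytes, sep->len);
                pos += sep->len;
            }
            memcpy(pos, pieces[i].bytes, pieces[i].len);
            pos += pieces[i].len;
        }
        out[total] = '\0';
        *out_len = total;
        return out;
    }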
04:23 <MasterDuke> timotimo: gist updated with latest valgrind output. not sure what's wrong, the buffer it's complaining about https://github.com/MasterDuke17/MoarVM/blob/use_fsa_for_string_storage/src/strings/utf8.c#L183 seems like it should be good at the end of the function
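[Context: FSA_SIZE_DEBUG in the discussion above refers to a debug mode of a fixed-size allocator. The sketch below only shows the general idea of such a mode, prefixing each allocation with its requested size so frees can be cross-checked; the names are hypothetical and this is not MoarVM's actual allocator code. The relevant point for the valgrind report is that the pointer handed to callers is offset past a header, so buffer bounds differ from a plain malloc build.]

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        size_t size;                 /* requested size, recorded for checking */
    } SizeDebugHeader;

    static void *debug_alloc(size_t bytes) {
        SizeDebugHeader *hdr = malloc(sizeof(SizeDebugHeader) + bytes);
        if (!hdr)
            return NULL;
        hdr->size = bytes;
        return hdr + 1;              /* caller sees the memory after the header */
    }

    static void debug_free(void *p, size_t bytes) {
        SizeDebugHeader *hdr = (SizeDebugHeader *)p - 1;
        if (hdr->size != bytes)      /* size passed to free must match alloc */
            fprintf(stderr, "free of %zu bytes, but %zu were allocated\n",
                    bytes, hdr->size);
        free(hdr);
    }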
04:23 lizmat joined #moarvm
04:27 Ven`` joined #moarvm
04:49 nativecallable6 joined #moarvm
04:49 evalable6 joined #moarvm
04:49 quotable6 joined #moarvm
04:49 committable6 joined #moarvm
04:49 bloatable6 joined #moarvm
04:49 bisectable6 joined #moarvm
04:49 unicodable6 joined #moarvm
04:49 greppable6 joined #moarvm
04:49 benchable6 joined #moarvm
04:49 releasable6 joined #moarvm
04:49 coverable6 joined #moarvm
04:49 squashable6 joined #moarvm
04:49 statisfiable6 joined #moarvm
05:00 <Geth> ¦ MoarVM: samcv++ created pull request #716: join: simplify some conditionals and factor out some code
05:00 <Geth> ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/pull/716
05:01 <samcv> MasterDuke, if you can check out my PR. wanted to check with you since you wrote the code
05:01 <samcv> er or maybe not this code. anyway.
05:03 <samcv> shortens MVM_string_join by 63 lines
05:31 domidumont joined #moarvm
05:38 domidumont joined #moarvm
06:05 brrt joined #moarvm
06:21 domidumont joined #moarvm
06:32 leont_ joined #moarvm
06:34 <moritz> Hashirama: yes, if it's in a place where YAML allows a multi-line string
06:35 <moritz> eeks, wrong channel, wrong day
06:36 <nwc10> good *, moritz
06:36 <nwc10> but it *is* morning :-)
06:37 <moritz> yes *yawn*
06:37 <moritz> yesterday I submitted my manuscript for the Perl 6 Regexes and Grammars book
06:42 <brrt> \o/
06:43 <brrt> good * moritz, nwc10, #moarvm
06:46 lizmat joined #moarvm
07:40 patrickz joined #moarvm
07:56 <Geth> ¦ MoarVM: a08113ef5e | (Samantha McVey)++ | src/strings/ops.h
07:56 <Geth> ¦ MoarVM: Speed up `codes` op by 10% for synthetic graphemes
07:56 <Geth> ¦ MoarVM:
07:56 <Geth> ¦ MoarVM: Use a grapheme iterator instead of a codepoint iterator, since all
07:56 <Geth> ¦ MoarVM: we need is to get the synthetic information for how many codes are
07:56 <Geth> ¦ MoarVM: in the synthetic.
07:56 <Geth> ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/a08113ef5e
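[Context for the commit above: the win comes from iterating graphemes rather than codepoints. Most graphemes are a single codepoint, so the loop only does extra work when it meets a synthetic (a grapheme composed of several codepoints), where it just asks how many codepoints that synthetic contains. The sketch below is a rough illustration with hypothetical types and a placeholder lookup; it is not MoarVM's actual iterator API.]

    #include <stddef.h>

    typedef int Grapheme;            /* assumption: negative value => synthetic */

    /* Placeholder: a real implementation would look up the synthetic's
     * codepoint count in its synthetics registry. */
    static size_t synthetic_num_codes(Grapheme g) {
        (void)g;
        return 2;                    /* e.g. a base character plus one combiner */
    }

    static size_t count_codes(const Grapheme *graphemes, size_t num_graphemes) {
        size_t codes = 0, i;
        for (i = 0; i < num_graphemes; i++) {
            if (graphemes[i] >= 0)
                codes += 1;                                 /* one codepoint */
            else
                codes += synthetic_num_codes(graphemes[i]); /* expand synthetic */
        }
        return codes;
    }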
07:56 <samcv> good * brrt
07:57 <lizmat> good *, #moarvm
07:57 <lizmat> talk about a weird flapper:
07:57 <lizmat> Stage start : 0.000
07:57 <lizmat> Stage parse : make: *** [CORE.setting.moarvm] Bus error: 10
07:57 <lizmat> next make worked fine without change
07:57 <samcv> bus error... never got that before
08:01 <brrt> oh, hmm
08:02 <brrt> lizmat, can you retry that with a MVM_JIT_EXPR_DISABLE=1
08:02 <brrt> what platform are you on
08:03 evalable6 joined #moarvm
08:06 <lizmat> macOS
08:06 <lizmat> brrt: as I said, it was a flapper
08:06 <lizmat> couldn't reproduce
08:07 <brrt> should not happen though
08:07 <lizmat> agree :-)
08:09 <lizmat> BTW, I did some tests wrt to the mainline of the core setting: running the mainline takes about 16 milliseconds
08:10 <lizmat> total startup time for me is now about 146 msecs
08:10 <lizmat> which is about 14 msecs up from before the new JIT merge
08:12 <lizmat> make spectest for me is up from 353 seconds to 387 seconds
08:12 <lizmat> so it looks like the JIT is not bringing us any performance improvements just yet
08:12 <lizmat> OTOH, it did run the whole spectest clean :-)
08:16 <brrt> hmm, that's more than i'd thought
08:18 <lizmat> yeah... anyways, will be offline for most of the day
08:18 <lizmat> &
10:13 dogbert2 joined #moarvm
10:50 brrt joined #moarvm
11:06 robertle joined #moarvm
11:32 d4l3k_ joined #moarvm
11:32 synopsebot joined #moarvm
11:34 <dogbert2> tests 38 and 39 in t/spec/S17-promise/lock-async.t are quite slow
11:35 <dogbert2> is this another case of, if you don't have 10+ cores then you're out of luck?
11:35 * dogbert2 only has six
11:37 <dogbert2> if I fiddle with https://github.com/perl6/roast/blob/master/S17-promise/lock-async.t#L75 i get
11:38 <dogbert2> xx 4 => real 1m17.912s , xx 3 => real 0m53.481s, xx 2 => real 0m2.017s and xx 1 => real 0m0.872s
12:36 Geth joined #moarvm
14:07 yoleaux joined #moarvm
14:12 zakharyas joined #moarvm
15:16 SourceBaby joined #moarvm
15:28 zakharyas joined #moarvm
16:32 buggable joined #moarvm
16:47 robertle joined #moarvm
17:16 Ven`` joined #moarvm
17:18 lizmat joined #moarvm
18:47 zakharyas joined #moarvm
19:00 weabot joined #moarvm
19:06 [Coke] joined #moarvm
20:54 zakharyas joined #moarvm
21:04 * dogbert2 wonders if the above tests are in need of some more threads, i.e. https://github.com/rakudo/rakudo/commit/6e42b37e1b2384ad67d45d59a51da5b092d8fc7c
21:05 <jnthn> How many cores do you have, ooc?
21:06 <dogbert2> jnthn: six
21:08 <jnthn> dogbert2: Run with RAKUDO_SCHEDULER_DEBUG=1 to see how many threads it starts
21:09 <dogbert2> is that an env var?
21:09 <jnthn> Yes
21:10 <lizmat> jnthn: on the subject of attribute default values: if the thunk has a compile time value, would it make sense to generate the BUILDALL with a WVal of that compile time value ?
21:10 * dogbert2 tests ...
21:11 <jnthn> lizmat: If the thunk produces one, or?
21:11 <jnthn> afaik the thunk itself is a compile time value
21:11 <lizmat> if the thunk produces one
21:12 <jnthn> Yeah, that'd save a bit of work
21:12 <dogbert2> [SCHEDULER] Supervisor thinks there are 6 CPU cores
21:12 <lizmat> ok, then I'll investigate that tomorrow
21:12 <lizmat> afk, it was a long tiring day, with another tiring day coming up tomorrow
21:13 <dogbert2> ok 37 - And can unlock it again
21:13 <dogbert2> [SCHEDULER] Added a general worker thread
21:14 <dogbert2> [SCHEDULER] Added a general worker thread
21:14 <dogbert2> [SCHEDULER] Added a general worker thread
21:14 <dogbert2> the first was started right away, but it took quite some time before the second and third ones
21:14 <jnthn> Ho, interesting...
21:15 evalable6 joined #moarvm
21:15 <jnthn> Given it thinks about whether to do so 100 times a second...
21:16 <dogbert2> feels like it waits ~15 secs before starting the second one (tried to count => unreliable)
21:18 <dogbert2> using a clock it's more like +20s
21:47 <samcv> someone broke windows. src/jit/x64/emit.c
21:47 <samcv> this line: #line 1 "src/jit/x64/emit.dasc"
21:47 <samcv> line 4
21:47 <samcv> i think our scripts are changing the '/' to \
21:48 <samcv> i guess. they don't even actually need to do that due to C standards though right?
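[Context: a plausible reading of the breakage above, stated as an assumption rather than a verified reproduction. The file name in a #line directive is written as a string literal, so once a build script rewrites '/' to '\', the backslashes start escape sequences and the name gets mangled or rejected; forward slashes are ordinary characters, and Windows compilers accept them in paths, so the rewrite is unnecessary.]

    /* Kept in a comment so this file still compiles: the rewritten form
     *
     *     #line 1 "src\jit\x64\emit.dasc"
     *
     * contains '\j' (not a valid escape) and '\x64' (a hex escape for 'd'),
     * so at least some compilers warn or error and the recorded file name
     * comes out wrong. The original form with forward slashes is fine: */
    #line 1 "src/jit/x64/emit.dasc"

    int emit_dasc_placeholder(void) { return 0; }   /* placeholder content */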
22:02 <japhb> timotimo: How would I go about showing the actual assembly generated for an NQP or Perl 6 code snippet? Concretely, how would I see the assembly for 'my $i = 1; $i *= 2 while $i < 1000;' for instance?
22:05 <timotimo> you set the env var MVM_JIT_BYTECODE_DIR
22:06 <samcv> there's so many ENV vars
22:06 <samcv> we need documentation on all of them
22:06 <samcv> in one list
22:07 <timotimo> moar --help has them :P
22:07 <timotimo> and perl6 --help
22:09 <timotimo> hm, i'm not getting it to output that from the jit
22:09 <samcv> not all of them though
22:09 <samcv> oh maybe i'm wrong
22:09 <timotimo> okay, i had to run it really often
22:10 <timotimo> and then objdump -D -b binary -m i386:x86-64 -M intel jitbytecode/moar-jit-0039.bin
22:10 <timotimo> (i only know this because of my shell history)
22:11 <japhb> "run it really often"?
22:12 <japhb> Do you mean "wrap it in a loop"?
22:13 <timotimo> yeah
22:13 <timotimo> i put the code you gave into a sub "test"
22:13 <timotimo> and ran test for ^10_000
22:13 <timotimo> i might also have turned the number in the code itself up to 10k
22:15 <timotimo> it helps to have a jitlog, too, i.e. MVM_JIT_LOG=jitlog.txt
22:15 <japhb> OK, so: `MVM_JIT_BYTECODE_DIR=./jitbytecode perl6 -e 'for ^10_000 { my $i = 1; $i *= 2 while $i < 10_000; }' && objdump -D -b binary -m i386:x86-64 -M intel jitbytecode/moar-jit-0039.bin`
22:15 <japhb> Ack re: MVM_JIT_LOG
22:16 * japhb is building all the things
22:16 <timotimo> well, the exact number for the moar-jit-*.bin you'll find in the jit-map.txt
22:16 <timotimo> that lands in the bytecode dir, too
22:17 <japhb> OK ... this is begging to be turned into a mini-tool, methinks
22:22 evalable6 joined #moarvm
22:29 <timotimo> oh, you're building something? :)
22:30 <timotimo> it'd be very cool, but at the moment hardly possible, to link which bytecode goes with which part of the assembly
22:30 <timotimo> the tiling of the tree can make that rather messy i imagine
22:30 <timotimo> i.e. code belonging to multiple moar instructions getting intermixed
23:15 <japhb> Woah. That's a lot of mov's. Picking a couple .bin files to look at, I'd say around 5 of every 8 instructions are 'mov'.
23:18 <japhb> I mean sure, that's better than the 2/3 you'd get from doing every 3-arg operation as the sequence 'mov rax, A; op rax, B; mov C, rax' -- but only by ~4%.
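[Arithmetic check on the comparison above: the naive three-instruction pattern gives 2/3 ≈ 66.7% movs, while the observed density is 5/8 = 62.5%, a difference of roughly 4 percentage points, which matches the "~4%".]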
23:18 <japhb> I'd have hoped the tiler would be a bigger win.
23:20 <japhb> Wishlist: jit-map.txt containing enough source location info to be able to look up the correct .bin file for any point in the input.
23:21 <timotimo> many of the movs probably exist because of the exprjit ending its work in some spot
23:22 <timotimo> and of course also when the exprjit doesn't do parts
23:28 <MasterDuke> timotimo: did you see my comments about MVM_SPESH_BLOCKING=1 significantly reducing the variability in maxrss reported when building the core setting?
23:28 <yoleaux> 18:47Z <AlexDaniel> MasterDuke: maybe RT #123926 is for you
23:28 <synopsebot> RT#123926 [new]: https://rt.perl.org/Ticket/Display.html?id=123926 [BUG] LTA error message without Levenshtein distance suggestion when mistyping enum value in signature in Rakudo
23:33 <timotimo> oh, yes
23:33 <timotimo> that's helpful
23:33 <timotimo> i haven't done any more measurements of my fsa tuning branch since i last reported on it
23:34 <MasterDuke> sure, just thought it would be good to know that helps
23:34 <timotimo> it is!
23:46 <samcv> .tell brrt i think your even_moar_jit branch merge broke the windows build
23:46 <yoleaux> samcv: I'll pass your message to brrt.
23:47 <samcv> yoleaux, thank you
23:56 geekosaur left #moarvm