Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2017-09-06


All times shown according to UTC.

Time Nick Message
00:55 committable6 joined #moarvm
01:52 ilbot3 joined #moarvm
01:52 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
01:52 samcv was someone looking at trying to do heap profiling?
01:52 samcv i think there are malloc replacements that can do stuff like that. i know jemalloc does
01:54 MasterDuke yeah, but the heap snapshot profiler actually gives you Rakudo-level info (e.g., number/size of types of objects)
01:54 samcv ah nice
03:30 AlexDaniel joined #moarvm
05:30 ilbot3 joined #moarvm
05:30 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
05:45 brrt joined #moarvm
05:46 brrt good *
05:52 domidumont joined #moarvm
05:59 domidumont joined #moarvm
06:01 leont joined #moarvm
06:16 geekosaur joined #moarvm
06:43 lizmat_ joined #moarvm
06:58 geekosaur joined #moarvm
07:07 brrt joined #moarvm
07:24 lizmat joined #moarvm
07:52 timotimo yo
08:02 brrt ohai timotimo
08:02 brrt i've been worryingly unproductive the last few weeks :-o
08:02 timotimo you think so?
08:02 brrt yeah
08:02 brrt i mean, nothing got done, actually
08:03 timotimo i think you've been fine, whereas i have actually been worryingly unproductive during that time
08:03 brrt well, except for merging the old changes, but these were old
08:03 timotimo it's still an important task
08:05 samcv ^
08:06 robertle joined #moarvm
08:06 nine That's one thing I have quite a hard time teaching my developers at work: unmerged work is completely useless. It has exactly 0 value to the user. So yes, going the final step of cleaning a branch and getting it through review is critical for turning time into value.
08:06 timotimo i have many branches with unmerged work :(
08:07 timotimo at least the one i'm working on right now is making nice progress towards being included for this release
08:08 nine timotimo: that's the irony. You've made great progress this time, yet feel unproductive. Just hang in, you're almost there :)
08:09 timotimo oh, i was referring to the time before this work
08:09 timotimo i actually feel productive for a change
08:12 samcv i've been feeling fairly productive. technically not in my grant but i want to get anything i can that uses get_grapheme_at_nocheck to use the cached grapheme iterator but only for strands
08:12 AlexDaniel joined #moarvm
08:12 samcv and i got it so that now if it requests a grapheme earlier than before, it will reinitialize the grapheme iterator
08:13 samcv eventually we could probably make it so it works backwards as well instead of just forwards. and i already fixed a bug which made it not work for repeats unless it was starting at 0
08:13 samcv feels awesome to get such great speed ups :)
08:14 samcv though since perl 6 flattens the strings before doing regex, it won't have as much impact i guess?
08:17 timotimo we could perhaps make "indexingoptimized" do something cleverer than just flattening the whole thing
08:17 timotimo if the performance impact from a few strands isn't too bad, we could probably keep a few in there
08:17 timotimo where does your grapheme iterator cache live btw?
08:18 brrt ++ for a backwards-walking iterator
08:18 brrt that seems like it ought to be totally doable
08:18 samcv timotimo, in iter.h
08:19 samcv yeah
08:19 timotimo i mean where is the cache installed?
08:19 samcv and probably easier now that i eliminated a ton of loops
08:19 samcv well it's stored only in the function iterating
08:19 timotimo cool
08:19 samcv MVMGraphemeIter_cached which stores the cached grapheme, the last requested index, and the grapheme iterator
08:20 samcv and so either does MVM_string_gi_get_grapheme, or returns the cached grapheme, or moves it and returns it. or reinitializes it and then moves and returns
08:20 samcv allowing it to save lots of work for strands
08:20 timotimo the GraphemeIter has a GraphemeIter in itself?
08:21 samcv timotimo, https://github.com/samcv/MoarVM/blob/ignorecase_ignoremark_cached/src/strings/iter.h#L308-L328
08:21 samcv scroll up a tad and you will see the struct for the type and the initializer
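(The structure samcv links has roughly this shape; the following is a paraphrase of the idea rather than the actual contents of iter.h, and gic_reinit/gic_move_to are stand-ins for the real helpers:)

    /* Sketch only: field names and helpers are illustrative. */
    typedef struct {
        MVMGraphemeIter gi;       /* the underlying grapheme iterator     */
        MVMGrapheme32   grapheme; /* grapheme at the last requested index */
        MVMStringIndex  last_idx; /* index the cache currently holds      */
    } MVMGraphemeIter_cached;

    static MVMGrapheme32 get_grapheme_cached(MVMThreadContext *tc,
            MVMGraphemeIter_cached *gic, MVMString *s, MVMStringIndex idx) {
        if (idx == gic->last_idx)
            return gic->grapheme;         /* cache hit: same index again */
        if (idx < gic->last_idx)
            gic_reinit(tc, gic, s, idx);  /* earlier index: reinitialize */
        else
            gic_move_to(tc, gic, idx);    /* move forwards through strands */
        gic->last_idx = idx;
        return gic->grapheme = MVM_string_gi_get_grapheme(tc, &gic->gi);
    }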
08:24 timotimo looks good
08:24 timotimo the name is strange, but i suppose it's okay since it's not public API
08:24 samcv yeah
08:25 samcv if you use it for flat strings it will slow it down, so it shouldn't be used everywhere
08:25 samcv i mean not *that* much. but like at least 20%
08:26 samcv and 2x speedup for indexic when it's made up of 10 strands
08:26 samcv which is impressive
08:27 timotimo nice
08:30 brrt can i ask the stupid question why the cache is in the data structure and not in the caller
08:30 zakharyas joined #moarvm
08:32 samcv it is in the caller
08:32 samcv well you declare `MVMGraphemeIter_cached gic` and then initialize it
08:33 samcv so it's stored on the stack in the caller
08:33 samcv or do you mean some other thing?
08:42 brrt hmm, i see what you mean, i guess
08:43 brrt but doesn't the iterator store its current position?
08:46 timotimo it stores a MVMStringIndex "pos"
08:48 jnthn morning o/
08:48 brrt morning
08:49 brrt so, what the 'cached' structure adds, is a cached last grapheme?
08:49 brrt that does make some sense, yes
08:50 lizmat joined #moarvm
08:53 timotimo i need a different hex editor than okteta ...
08:56 jnthn Coming back to something the day after is usually a good idea. :) I realized that there is a fairly cheap way to find which inline we're in: search BBs looking just at their first instruction and seeing if it has an inline annotation, then we know that's where that inline starts
08:56 jnthn Or ends, for that matter
08:56 nine That sounds almost too easy :D
08:57 Skarsnik joined #moarvm
08:57 timotimo tried bless, can't even tell it to limit display to n columns
08:58 jnthn Ends is cheaper
08:59 jnthn 'cus ->linear_next
08:59 jnthn Though means you need to pick the max inline end annotation to know the innermost one you're in
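(A very rough sketch of the scan jnthn describes, using the start/end annotations on each BB's first instruction; this is not the actual within_inline code, and as noted above the end-annotation variant needs to pick the maximum inline index to find the innermost inline:)

    /* Walk BBs in linear order, tracking the inline we are inside of by
     * looking only at each BB's first instruction's annotations. */
    static MVMint32 find_enclosing_inline(MVMSpeshGraph *g, MVMSpeshBB *target) {
        MVMint32 current = -1;            /* -1: not inside any inline */
        MVMSpeshBB *bb;
        for (bb = g->entry; bb; bb = bb->linear_next) {
            MVMSpeshAnn *ann;
            if (bb == target)
                return current;
            ann = bb->first_ins ? bb->first_ins->annotations : NULL;
            for (; ann; ann = ann->next) {
                if (ann->type == MVM_SPESH_ANN_INLINE_START)
                    current = ann->data.inline_idx;  /* entered an inline */
                else if (ann->type == MVM_SPESH_ANN_INLINE_END)
                    current = -1;  /* left it (nesting needs a stack here) */
            }
        }
        return current;
    }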
09:01 timotimo hte doesn't support 64bit integers ;_;
09:08 samcv brrt, it doesn't store the position; it stores just which strand, the position in the strand, and how many repeats are left in the particular strand it's on
09:08 brrt timotimo: hexl-mode?
09:09 brrt i see
09:10 timotimo brrt: i'm not comfortable with emacs, though
09:12 brrt well, that's why you'll always be on the lookout for the next tool :-P
09:12 timotimo heh.
09:13 timotimo things that frustrate me in okteta:
09:13 timotimo can't display the offset in decimal (though i could output all numbers from the program in hex instead)
09:13 timotimo can't move backwards/forwards in 64bit increments
09:14 timotimo that's more or less it i suppose
09:15 timotimo also, defining structs is a major song-and-dance where you have to create a .desktop file and javascript or xml by hand
09:17 brrt1 joined #moarvm
09:18 timotimo it doesn't even have an entropy overview
09:21 timotimo m(, you can't click "find previous" right after you clicked "find next" because it puts the cursor after the match and doesn't know to skip the one right before the cursor
09:23 AlexDaniel joined #moarvm
09:30 jnthn Well, seems my inline check thingy does the job
09:30 jnthn Or at least, NQP builds
09:37 lizmat joined #moarvm
09:38 lizmat_ joined #moarvm
09:39 Skarsnik hm no way to get a bt in gdb when moar panic?
09:39 timotimo sure there is
09:39 timotimo break MVM_panic
09:40 Skarsnik but I can't use the perl6-gdb-m script?
09:40 lizmat__ joined #moarvm
09:41 timotimo of course you can
09:41 jnthn You can, just set the breakpoint and hit r to run it again
09:41 timotimo just have to ctrl-c early so you can set the breakpoint
09:41 jnthn Or that :)
09:41 timotimo or what jnthn said
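(A session along those lines, roughly; perl6-gdb-m is the gdb wrapper script Rakudo installs, and MVM_panic is the function named above:)

    $ perl6-gdb-m script.p6
    ^C                      # interrupt early, before the panic happens
    (gdb) break MVM_panic
    (gdb) r                 # run it again, now with the breakpoint set
    ...
    (gdb) bt                # backtrace once MVM_panic is hit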
09:43 lizmat joined #moarvm
09:43 Geth ¦ MoarVM: 234f5a8ce7 | (Jonathan Worthington)++ | src/spesh/optimize.c
09:43 Geth ¦ MoarVM: Take more care when eliminating set inside inline
09:43 Geth ¦ MoarVM:
09:43 Geth ¦ MoarVM: If we have something in an inline like:
09:43 Geth ¦ MoarVM:     sp_fastinvoke_o r100(1), ...
09:43 Geth ¦ MoarVM:     set r10(3), r100(1)
09:43 Geth ¦ MoarVM: Where r10(3) is outside of the inline, then we must not rewrite it to:
09:43 Geth ¦ MoarVM:     sp_fastinvoke r10(3), ...
09:43 Geth ¦ MoarVM: Because it will break deoptimization (specifically, uninlining), which
09:43 Geth ¦ MoarVM: assumes the value will be returned into the frame itself.
09:44 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/234f5a8ce7
09:47 Skarsnik well that doesn't really help me figure out https://rt.perl.org/Ticket/Display.html?id=131003 :(
09:47 Skarsnik I ran it with libefence but after 24h it was still running x)
09:50 Geth ¦ MoarVM: 6048ac75e5 | (Jonathan Worthington)++ | 3 files
09:50 Geth ¦ MoarVM: Re-compute dominance tree before stage 2 opts.
09:50 Geth ¦ MoarVM:
09:50 Geth ¦ MoarVM: This means that they will be able to reach inside of inlines also.
09:50 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/6048ac75e5
09:51 jnthn Finally, we can do that.
09:56 timotimo yeessssss
10:01 Skarsnik 14166 skarsnik  20   0 2526m 1.2g  19m R  99.9 31.5   4:18.58 moar 1-2.5Gb of ram with efence lol
10:03 dogbert17 t/04-nativecall/21-callback-other-thread.t doesn't work very well all of a sudden
10:03 * brrt1 wonders if there isn't a way to do deoptimization 'cleverer'
10:03 timotimo there is
10:04 timotimo remember my "deopt bridge" idea?
10:04 brrt yes, i do remember that
10:04 brrt why don't you implement it :-P
10:04 brrt or maybe revive it
10:04 jnthn There's various things we can do, but given the complexity of where we're at is already pretty high...
10:05 brrt that is true
10:10 dogbert17 ==9822== Thread 2:
10:10 dogbert17 ==9822== Invalid read of size 2
10:10 dogbert17 ==9822==    at 0x41A7731: within_inline (optimize.c:2252)
10:10 dogbert17 ==9822==    by 0x41A7977: try_eliminate_set (optimize.c:2301)
10:10 dogbert17 ==9822==    by 0x41A7A59: second_pass (optimize.c:2318)
10:11 jnthn m: say 0.67 / 0.70
10:11 camelia rakudo-moar c39db8: OUTPUT: «0.957143␤»
10:12 jnthn dogbert17: grr, how'd you get that?
10:12 dogbert17 when I run make t/04-nativecall/21-callback-other-thread.t
10:13 dogbert17 with MVM_SPESH_NODELAY=1
10:18 nine 4.962s! That's the first time I've seen csv-ip5xs doing 100_000 iterations in less than 5 seconds
10:18 nine jnthn++
10:19 jnthn huh, I thought you were the one making that faster? ;)
10:19 jnthn Some of my recent changes coulda though :)
10:19 nine Well, Inline::Perl5 is still a whole lot of Perl 6 code ;)
10:19 jnthn Spesh is gradually getting smarter
10:20 timotimo once you stomp out the dumb i put in it :D
10:20 jnthn The 0.67 / 0.70 I just did above, btw, is for `for ^1000000 { for 1..5 { last } }`
10:20 nine And I've seen lots of unnecessary sets in the MAST compiled bodies of native subs.
10:21 jnthn The 4% or so win coming from making the rewriting of control exception throws into gotos work with inlines
10:21 jnthn Which means that opt is now useful for Perl 6 code
10:21 jnthn (Because the throw is always in a multi candidate of next/last/redo)
10:22 jnthn So the last actually becomes a goto after optimization in that case now
10:22 timotimo lovely
10:23 timotimo "next" is something Text::CSV has a lot of
10:23 jnthn Yeah, though hard to say whether its stuff falls within inline limits, but perhaps so :)
10:23 timotimo aye
10:23 timotimo if not, we have to make the resulting code even smaller :)
10:24 jnthn ah darn, SEGV when building CORE.setting :(
10:26 nine Now only a 27.4x slow down compared to pure Perl 5
10:27 nine Last number I posted was 33 I think
10:28 jnthn Ah, SEGV is the one dogbert17 found with valgrind
10:28 jnthn But what on earth...
10:29 jnthn Ah, hmm
10:38 brrt joined #moarvm
10:39 jnthn Think I got a fix
10:39 jnthn Showed up in multi-level inlines
10:48 brrt i'm curious
10:50 * timotimo runs a heap snapshot under valgrind
10:50 Geth ¦ MoarVM: 263984f566 | (Jonathan Worthington)++ | 3 files
10:50 Geth ¦ MoarVM: Add a reliable locals count for inlines
10:50 Geth ¦ MoarVM:
10:50 Geth ¦ MoarVM: When we do a multi-level inline, then we won't have a spesh graph for
10:50 Geth ¦ MoarVM: the nested inline, and so cannot simply ask it for its locals count.
10:50 Geth ¦ MoarVM: The static frame is also not reliable for this purpose, as the inline
10:50 Geth ¦ MoarVM: may itself have inlines. Thus just add a field holding the value, so
10:50 Geth ¦ MoarVM: we reliably have it available.
10:50 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/263984f566
10:54 timotimo having a "skip this many bytes after each snapshot" piece in the index lets me figure out proper incremental pieces for aborted heapsnapshots after the code has been merged
10:57 releasable6 joined #moarvm
10:58 mtj_ joined #moarvm
10:58 Geth ¦ MoarVM: 04f6be4ff6 | (Jonathan Worthington)++ | src/spesh/optimize.c
10:58 Geth ¦ MoarVM: Move throw -> goto optimization to second pass.
10:58 Geth ¦ MoarVM:
10:58 Geth ¦ MoarVM: Meaning that we can now apply it to inlined code. Also fix it up to
10:58 Geth ¦ MoarVM: cope with being used across inlines. This means that Perl 6 next, last,
10:58 Geth ¦ MoarVM: and redo may potentially be rewritten into gotos (in Perl 6, throws of
10:58 Geth ¦ MoarVM: control exceptions are always inside of multi candidates, which we've
10:58 Geth ¦ MoarVM: been able to inline for a while, but could not rewrite to a goto since
10:58 Geth ¦ MoarVM: the optimization didn't work across inlines until now).
10:58 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/04f6be4ff6
11:00 jnthn There we go
11:00 AlexDaniel joined #moarvm
11:01 timotimo jnthn: can you think of any particular things i should test before merging the new heapsnapshot format?
11:01 jnthn timotimo: Do all the moar-ha commands work? :)
11:01 jnthn Including path... :)
11:02 timotimo yup
11:02 jnthn And try it on a multi-threaded program
11:02 jnthn Shouldn't be an issue
11:02 jnthn But good to be sure
11:02 timotimo hm, i can no longer directly compare the old and new format of the very same program
11:03 stmuk_ joined #moarvm
11:06 timotimo i don't think we can run sufficiently deterministic for heap snapshots to come out the same
11:07 timotimo ==7103== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
11:07 jnthn Yeah, but can check the output looks sensible :)
11:07 jnthn lunch, bbiab
11:08 timotimo it always looks sensible :D
11:08 timotimo oh amusing, by entering "path random-number" i got one containing a frame from OO::Monitors
11:13 timotimo hah, running a multithreaded program in valgrind while it heapsnapshots is no use, as valgrind serializes threads
11:14 timotimo but at least there'll be multiple heaps
11:15 timotimo jnthn: how terrible is your hack to support "ignore inter-gen roots unless absolutely necessary"?
11:28 timotimo looks good so far
11:29 timotimo the async example from Terminal::Print takes 460744maxresidentk with the new heap snapshot output format
11:29 timotimo now i'll recompile master and see how well that does
11:31 Geth ¦ MoarVM/heapsnapshot_binary_format: c3c1926bf8 | (Timo Paulssen)++ | 2 files
11:31 Geth ¦ MoarVM/heapsnapshot_binary_format: write parts of types/static frames/strings incrementally
11:31 Geth ¦ MoarVM/heapsnapshot_binary_format:
11:31 Geth ¦ MoarVM/heapsnapshot_binary_format: so in case the program crashes before it can finish up, we can
11:31 Geth ¦ MoarVM/heapsnapshot_binary_format: still recover everything we need.
11:31 Geth ¦ MoarVM/heapsnapshot_binary_format: review: https://github.com/MoarVM/MoarVM/commit/c3c1926bf8
11:31 timotimo oh, left some debug output in there
11:31 timotimo i'll amend the commit later
11:36 timotimo whoops, i need the fix from my branch, too
11:37 timotimo 1971868maxresident
11:37 timotimo m: say 460744 / 1971868
11:37 camelia rakudo-moar c39db8: OUTPUT: «0.2336586␤»
11:37 timotimo nice.
11:37 timotimo and that only grows as runs go longer
11:38 Geth ¦ MoarVM/heapsnapshot_binary_format: fbd618283e | (Timo Paulssen)++ | 2 files
11:38 Geth ¦ MoarVM/heapsnapshot_binary_format: write parts of types/static frames/strings incrementally
11:38 Geth ¦ MoarVM/heapsnapshot_binary_format:
11:38 Geth ¦ MoarVM/heapsnapshot_binary_format: so in case the program crashes before it can finish up, we can
11:38 Geth ¦ MoarVM/heapsnapshot_binary_format: still recover everything we need.
11:38 Geth ¦ MoarVM/heapsnapshot_binary_format: review: https://github.com/MoarVM/MoarVM/commit/fbd618283e
11:47 timotimo jnthn: can has access to https://github.com/jnthn/p6-app-moarvm-heapanalyzer ?
11:55 timotimo otherwise you'll have to review a pull request :P
12:06 jnthn timotimo: Sure
12:06 timotimo now i'm writing a nice description for the pull request, but i can just merge it myself :)
12:07 jnthn timotimo: Invite sent
12:07 timotimo accepted
12:13 timotimo one thing i haven't figured out yet (which i just noticed) is that if we start a heap snapshot collection from gdb or something, we don't have a very good spot to finish up the heap snapshot? or can we literally just call into the "end profiling" sub at any point?
12:13 timotimo i suppose we probably can
12:14 Geth ¦ MoarVM/master: 11 commits pushed by (Timo Paulssen)++
12:14 Geth ¦ MoarVM/master: review: https://github.com/MoarVM/MoarVM/compare/04f6be4ff6...de6dceda81
12:14 timotimo \o/
12:24 nine timotimo: congratulations!
12:24 nine Seriously cool work :)
12:24 timotimo thanks!
12:24 jnthn So this gets me...faster heap profiler? :)
12:24 timotimo there's lots of next steps
12:24 jnthn And less memory overhead when profiling?
12:24 timotimo only the loading of snapshots is faster. but it's loads faster
12:24 jnthn Yeah, that's the part I was thinking of :)
12:24 timotimo yes, the memory growth of the program under profile is seriously reduced
12:25 jnthn That's the slow part, the actual snapshotting wasn't bad already
12:25 nine Faster and you get output on interrupted programs. That would already have been soo helpful for me
12:25 jnthn Very nice
12:25 jnthn timotimo++
12:25 timotimo well ... the parser can't yet handle the output from interrupted programs
12:25 Zoffix timotimo++
12:26 timotimo next up is maybe calculating incident edges so we can quickly query "what points at this object?"
12:26 timotimo "Hey, what points at the Str STable?"
12:26 timotimo "here's your five million objects"
12:27 nine Rather...what the hell points at the A STable, causing a repossession conflict?
12:31 timotimo :)
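(The reverse index timotimo is describing is conceptually simple; a minimal Perl 6 sketch, assuming the analyzer's references are available as flat (from, to) pairs — the names here are hypothetical, not the heapanalyzer's API:)

    my %points-at;                    # target object => list of referrers
    for @references -> ($from, $to) {
        %points-at{$to}.push: $from;  # autovivifies an Array per target
    }
    # "Hey, what points at object 12345?"
    say %points-at{12345} // [];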
12:47 zakharyas joined #moarvm
12:53 timotimo and of course at one point there'll be a nicer interface than the cli
13:01 AlexDaniel joined #moarvm
13:02 travis-ci joined #moarvm
13:02 travis-ci MoarVM build failed. Jonathan Worthington 'Re-compute dominance tree before stage 2 opts.
13:02 travis-ci https://travis-ci.org/MoarVM/MoarVM/builds/272413799 https://github.com/MoarVM/MoarVM/compare/234f5a8ce799...6048ac75e5a7
13:02 travis-ci left #moarvm
13:03 jnthn bah, empty log: https://travis-ci.org/MoarVM/MoarVM/jobs/272413811
13:21 Skarsnik_ joined #moarvm
13:22 nwc10 ASAN doesn't seem to make any comment
13:22 nwc10 (not totally confident whether some ASAN errors are eaten up as "regular" test failures, and there are regular test failures under ASAN when run in parallel due to timing "issues")
13:25 squashable6 joined #moarvm
13:40 timotimo oops, i haven't pushed the necessary nqp changes for the heap snapshot profiler
13:42 timotimo there it is
13:49 dogbert17 jnthn: what's next on your fix list?
13:50 brrt timotimo++
13:52 dogbert17 c: HEAD say "Test"
13:52 dogbert17 meh
13:52 committable6 dogbert17, ¦HEAD(80a3255): «Test»
13:52 dogbert17 yay
13:52 nwc10 fixing up a curry?
13:53 dogbert17 c: MVM_SPESH_NODELAY=1 MVM_SPESH_BLOCKING=1 HEAD use Test; { { my sub foo { fail }(); CATCH { default { my $bt = .backtrace; is $bt.list.elems, 4, "we correctly have 4 elements in .list"; }} } } # bug
13:53 committable6 dogbert17, ¦HEAD(80a3255): «not ok 1 - we correctly have 4 elements in .list␤# Failed test 'we correctly have 4 elements in .list'␤# at /tmp/tJus6tS0Qr line 1␤# expected: '4'␤#      got: '3' «exit code = 1»»
13:54 jnthn That one is simply that when spesh inlines things, the backtraces don't know about the inlinees.
13:54 dogbert17 aha
13:54 jnthn And...not quite so simple to fix
13:54 jnthn Though we should do it at some point
13:55 dogbert17 hardly a disastrous problem
13:55 dogbert17 problem is; I'm running out of spesh problems :)
13:55 jnthn heh
13:59 * timotimo has the first rough draft of incidents searching
14:00 timotimo doesn't seem quite right, though
14:00 jnthn Seems I've managed to eat my way through my available Perl 6 funding for the moment (though hopefully a bit more coming soon to do some more concurrency stuff). So for now I'm back to what can I do in my copious free time, and should probably look at other $dayjob things for a bit. I'm also impressively behind on blogging...
14:03 nwc10 blogging++
14:03 nwc10 https://6guts.wordpress.com/2017/08/06/moarvm-specializer-improvements-part-1-gathering-data/ -- You won’t believe what happens next!
14:04 jnthn haha
14:10 timotimo ouch! :)
14:10 jnthn The next bit is a shorter post, 'cus it's about the planner :)
14:10 timotimo "i've had this post planned for a whole month now!"
14:11 jnthn I got distracted
14:11 jnthn Something involving a cro and a conference...
14:12 jnthn Apparently a bit of vacation too :)
14:14 bisectable6 joined #moarvm
14:16 timotimo yeah! it works \o/
14:18 robertle joined #moarvm
14:20 timotimo nine: i just pushed support for an "incidents" command to the heapanalyzer
14:20 timotimo please test it
14:21 nwc10 joined #moarvm
14:36 zakharyas joined #moarvm
14:43 ilbot3 joined #moarvm
14:43 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
14:49 ilbot3 joined #moarvm
14:49 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
15:04 brrt joined #moarvm
16:05 samcv timotimo, is the heap snapshot analyzer going to get documentation?
16:05 yoleaux 12:34Z <[Coke]> samcv: the class definition you give for utf8 in the docs doesn't compile.
16:09 samcv .tell [Coke] ok i made a change, hopefully this will fix
16:09 yoleaux samcv: I'll pass your message to [Coke].
16:13 dogbert2 joined #moarvm
16:29 AlexDaniel joined #moarvm
17:08 timotimo samcv: sure, what angle should i approach it from, and where should i put it?
17:09 samcv how to use it. how it works.
17:09 samcv in the docs folder. and if it's current, add it to the README.md in the docs folder too
17:10 samcv the readme used to say that it was all likely outdated, but when i made my string documentation i linked to that and said that was current. so if you add current stuff that's not just design docs, add it to that list
17:10 samcv also will be away for an hour or so
17:11 timotimo mhm
17:11 timotimo though perhaps the docs should go to the performance section of the perl6 docs or the readme and/or wiki of the heap snapshot analyzer instead?
17:19 Skarsnik should probably be a page with everything that helps debug stuff x)
17:59 timotimo the "performance" page already talks about the regular profiler
18:04 Geth ¦ MoarVM/jit_nativecall: 16 commits pushed by (Stefan Seifert)++
18:04 Geth ¦ MoarVM/jit_nativecall: review: https://github.com/MoarVM/MoarVM/compare/520d47068f...19209de674
18:18 leont joined #moarvm
18:20 Geth ¦ MoarVM: niner++ created pull request #676: JIT compile native calls
18:20 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/pull/676
18:31 zakharyas joined #moarvm
18:38 jnthn I see code to review...
18:38 jnthn uh...lots of it given I'm meant to also look at the expr JIT :)
18:44 dogbert2 m: my int $master = 0; await start { if cas($master, 0, 1) == 0 { say "Master!" }} xx 4 # this code fails on 32 bit systems, is that expected?
18:44 camelia rakudo-moar bfee5a: OUTPUT: «Master!␤»
18:45 jnthn Yes
18:46 jnthn That's precisely why the atomicint type exists
18:47 dogbert2 cool, I took it from the atomicint documentation. should a comment be added or should int be changed to atomicint?
18:47 jnthn Should be changed to atomicint :)
18:47 jnthn Nice catch
18:48 dogbert2 should I fix it?
18:48 moritz this is rather unperlish
18:48 moritz normally the operation dictates the semantics, not the type
18:49 moritz an atomic operation being silently unatomic just because its operand is unatomic seems dangerous
18:50 jnthn Why are you assuming it's silently unatomic?
18:51 jnthn m: my int32 $master = 0; await start { if cas($master, 0, 1) == 0 { say "Master!" }} xx 4
18:51 camelia rakudo-moar bfee5a: OUTPUT: «Tried to get the result of a broken Promise␤  in block <unit> at <tmp> line 1␤␤Original exception:␤    Cannot atomic load from an integer lexical not of the machine's native size␤      in block  at <tmp> line 1␤␤»
18:51 jnthn It does that
18:51 jnthn Very clear failure
18:51 moritz probably because I misinterpreted dogbert2's "fails" as "gives wrong result"
18:51 moritz sorry
18:51 jnthn Ah :)
18:51 dogbert2 my bad, I could have been more explicit :)
18:55 dogbert2 example fixed
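(For reference, the corrected form of the example, with int swapped for atomicint as suggested above, which also works on 32-bit systems:)

    my atomicint $master = 0;
    await start { if cas($master, 0, 1) == 0 { say "Master!" } } xx 4;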
19:05 dogbert2 next question, in the last few weeks 'make spectest' has become a lot slower, 20-25 percent in my case. Any theories as to why?
19:08 * Zoffix didn't notice any slowdown in `make stresstest`
19:08 Zoffix But perhaps the new unicode tests are to blame?
19:09 dogbert2 perhaps, considering trying to figure out when this happened
19:12 dogbert2 compare these two for example:
19:12 dogbert2 https://irclog.perlgeek.de/perl6-dev/2017-07-25#i_14918057
19:13 dogbert2 https://irclog.perlgeek.de/perl6-dev/2017-09-06#i_15123112
19:16 dogbert2 there's one large slowdown between July 26th and 31st, from ~220 secs to ~260 secs
20:07 bisectable6 joined #moarvm
20:07 committable6 joined #moarvm
20:13 committable6 joined #moarvm
20:38 brrt joined #moarvm
21:06 zakharyas joined #moarvm
21:07 lizmat jnthn: given a native descriptor, is there a way to create a PIO out of that ?
21:08 lizmat or: is there a way to close by a given native descriptor (fileno) ?
21:09 lizmat so one wouldn't need to keep the PIO around, but just the fileno (native desc)
21:22 samcv Zoffix, what new unicode tests?
21:22 brrt what's PIO? parallel? perl-level?
21:23 lizmat historically Parrot IO Object I believe
21:23 lizmat basically the VM's handle of a file
21:26 geekosaur I'd be surprised if that exists usefully
21:26 geekosaur even C doesn't support it these days
21:26 Zoffix samcv: no idea. It was just a guess. I recall the last time I noticed a slowdown it was due to new unicode tests, so I figured maybe dogbert2 was seeing that same slowdown
21:26 geekosaur (that is, you can make a new stdio file object from a descriptor, but you can't find an existing one that way)
21:27 geekosaur and if this is related to buffering, you need to find the existing one, not make a new one
21:43 lizmat joined #moarvm
21:48 jnthn lizmat: No
21:48 yoleaux 20:42Z <Zoffix> jnthn: are return type checks in protos simply NYI or are they not meant to work? This doesn't die: class { proto method f(--> Int) {*}; multi method f { "foo!" } }.new.f.say
21:49 jnthn lizmat: And it wouldn't help anyway
21:49 jnthn lizmat: Since your goal is to get the handle flushed on close, and the buffer hangs off the handle
21:49 lizmat jnthn: well, then I guess option #2 is not an option, as it thoroughly breaks the lock tests   :-(
21:49 jnthn Option 2 is what we have now, I thought you were implementing option 1?
21:49 lizmat no, vv
21:49 jnthn (Keep a list of open handles)
21:50 brrt is there anything against doing that on the perl level?
21:50 lizmat argh, yeah, you're right
21:50 lizmat option #2 is what we have, option #1 breaks the locking tests
21:50 jnthn brrt: Keeping the list? That's the approach I've suggested, and that I know lizmat++ is working on
21:50 jnthn The...locking tests?!
21:51 lizmat S32-io/lock.t
21:51 jnthn Details?
21:51 jnthn Oh, *file* locking
21:51 lizmat hangs on the first subtest of test 6
21:51 jnthn I thought you meant S17-lowlevel/lock.t ;)
21:51 jnthn Shoulda guessed but, well, that's what tiredness does
21:51 jnthn That's...odd
21:52 jnthn Any idea why it hangs?
21:52 jnthn I mean, we shouldn't be doing anything different until exit
21:52 lizmat https://gist.github.com/lizmat/001726b47eb5051b247ef1498f3fda7d
21:52 lizmat my work so far
21:52 jnthn Oooking
21:52 jnthn uh, Looking
21:53 jnthn Heh, binding into the array on fileno
21:53 jnthn Cute. I hadn't thought of that and was figuring it would be an O(n) search...
21:53 jnthn Or a hash on fileno
21:53 jnthn I guess maybe the hash has slightly better properties if fd re-use doesn't happen
21:53 lizmat I thought we could spare the few bytes on that array
21:53 jnthn But anyway, that won't be the issue here
21:56 jnthn Hmm, odd, don't immediately see how that'd cause issues
21:56 jnthn oh but
21:56 jnthn It's not threadsafe
21:56 lizmat but?
21:56 jnthn If handles are opened on different threads
21:56 jnthn Then there's a race on $opened
21:56 jnthn Since it's a dynamic array
21:56 jnthn Dunno if that's *the* problem, but it's certainly *a* problem
21:57 lizmat aaahh... hmmm
21:57 jnthn Would need a lock to protect it
21:57 lizmat so can we find out max fileno ?
21:57 lizmat well, I was trying to do this without a lock
21:57 lizmat assuming the fileno's would not be duped across threads
21:57 timotimo the user is free to raise the maximum fileno by quite a bit
21:57 lizmat so we only need to make sure the array has the max fileno size
21:58 jnthn Also the closing thread != the opening thread
21:58 brrt that assumption is valid, but you might get concurrent resizes, i think
21:58 jnthn Or doesn't have to be
21:58 jnthn Yeah, concurrent resizes might be what's messing this up at the moment
21:58 jnthn Just lock it
21:58 jnthn It's not that expensive.
21:58 lizmat ok, will try that
21:58 timotimo do we only need to store "is opened?" for every fileno?
21:58 jnthn In the scheme of things
21:58 timotimo we might want to store one per bit rather than one per byte or worse
21:58 jnthn timotimo: No, we need to keep track of the handles that are open, so we can close/flush them
21:58 brrt opening a file is already expensive anyway ?
21:59 timotimo ah, we need to know about the buffer, too
21:59 jnthn Right
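(A minimal sketch of the registry under discussion, at the Rakudo level and with the lock jnthn recommends; the names here are hypothetical, and lizmat's actual work is in the gist linked above:)

    my $lock = Lock.new;
    my @opened;                       # indexed by fileno, as in the gist

    sub register-open-handle($handle) {
        $lock.protect: { @opened[$handle.native-descriptor] = $handle }
    }
    sub unregister-handle($handle) {
        $lock.protect: { @opened[$handle.native-descriptor] = Nil }
    }
    END $lock.protect: {              # flush and close leftovers at exit
        .close for @opened.grep(*.defined);
    }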
21:59 jnthn I have no idea if fds get reused if you open lots of files over the lifetime of a process?
21:59 jnthn (open/close, I mean)
22:00 timotimo no clue where to look to find out what subset of our target platforms would make such a guarantee
22:00 timotimo but i'd assume re-use would happen eventually
22:01 jnthn Yeah, depends how eventually
22:01 jnthn If we're locking anyway then we could use a hash
22:01 jnthn And delete the key
22:01 lizmat but re-use would not be an issue
22:01 jnthn Would force stringifying the fds is all
22:01 lizmat because that would imply the old one got closed, so the entry would be nulled in the list
22:02 jnthn lizmat: Yeah, my worry is more that if they're not re-used and it's a long-lived program, then we could read fd 100000 or something
22:02 dogbert2 joined #moarvm
22:02 jnthn Which sure is "only" 800 KB
22:02 jnthn s/read/reach/
22:02 timotimo the default max fileno on my system is 1024, fwiw
22:03 timotimo but remember that we also blow filenos with other things, like libuv related pieces for the event loop
22:04 jnthn ah, ok
22:04 timotimo that's why there was this bug where creating a crapton of threads would segfault
22:04 timotimo we didn't check the return value of creating uv eventloops
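(For what it's worth, on POSIX-ish systems the per-process descriptor limit timotimo mentions can be inspected and raised per shell with ulimit:)

    $ ulimit -n          # soft limit on open file descriptors
    1024
    $ ulimit -n 4096     # raise it for this shell, up to the hard limit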
22:05 lizmat ok, I've not tried it with locking just yet, just by setting the list size to 1K
22:05 lizmat and it still hangs
22:05 * lizmat is getting tired so will look at this tomorrow with some fresh eyes
22:06 * timotimo is coming down with a little cold
22:06 jnthn sleep++
22:35 Geth ¦ MoarVM: samcv++ created pull request #677: Optimize MVM_string_gi_move_to and use for index* ops
22:35 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/pull/677
22:55 Zoffix .tell lizmat did a bit of debugging of the lock hanging issue. Golfed version is start `./perl6 -e '"foo".IO.open(:w).lock; sleep'` in one terminal. In another terminal in same dir run `./perl6 -e 'start { "foo".IO.open(:w).lock }; sleep 1'` and the second one'll never quit after 1 second, despite the lock running in a separate Promise. Hope that helps :)
22:55 yoleaux Zoffix: I'll pass your message to lizmat.
22:56 jnthn Oh, that makes sense
22:57 jnthn Operations on a syncfile are protected by a mutex
22:57 jnthn The .lock operation will be holding it
22:57 jnthn The .close operation won't be abl to get it
22:57 Zoffix Oh
22:57 jnthn *able
22:57 Zoffix .tell lizmat jnthn++ figured out the cause: https://irclog.perlgeek.de/moarvm/2017-09-06#i_15127594
22:57 yoleaux Zoffix: I'll pass your message to lizmat.
22:58 jnthn The point of the mutex being to protect a file handle data structure when concurrent threads attempt to use it, and thus not explode violently
22:59 jnthn I'm too tired to have any useful thoughts beyond knowing that's probably what's going on, alas
23:01 jnthn Is the point of the test to check we can't acquire the lock?
23:02 Zoffix Yes.
23:02 jnthn Hmm
23:02 jnthn I guess if we're willing to change the test then we could make the close-at-exit thing optional: make the .open call pass :!close-at-exit or so
23:03 Zoffix It checks whether .lock blocks (printing some stuff after the .lock line) in that case and because it runs in a promise it exits after 1 second and the test just checks the stuff after the .lock line never got printed
23:03 Zoffix FWIW none of the .lock/.unlock tests are part of 6.c-errata so I'd guess there's more freedom to alter them
23:05 jnthn I guess it hinges on if we think anyone would write such a program in a non-test situation
23:05 jnthn They potentially would
23:07 jnthn If they wanted to try and acquire the lock, but whine/exit if they failed
23:07 jnthn And we'd block their exit
23:08 jnthn I think we should provide a :!close-at-exit either way for folks who run into other situations we don't discover ourselves
23:08 jnthn If we want to magic it away, we could remove the file from the auto-close list while attempting to acquire the lock, and put it back in once we have
23:14 jnthn 'night, #moarvm
23:14 timotimo night jnthn
23:20 travis-ci joined #moarvm
23:20 travis-ci MoarVM build failed. Jonathan Worthington 'Move throw -> goto optimization to second pass.
23:20 travis-ci https://travis-ci.org/MoarVM/MoarVM/builds/272433122 https://github.com/MoarVM/MoarVM/compare/263984f566bf...04f6be4ff6d8
23:20 travis-ci left #moarvm
23:28 nwc10 joined #moarvm
23:40 AlexDaniel joined #moarvm
