Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2015-09-25


All times shown according to UTC.

Time Nick Message
00:17 tokuhiro_ joined #moarvm
02:14 zakharyas joined #moarvm
04:07 vendethiel joined #moarvm
04:27 ShimmerFairy joined #moarvm
06:26 FROGGS joined #moarvm
06:26 Ven joined #moarvm
06:29 aiacob joined #moarvm
06:32 _alexander joined #moarvm
06:37 _alexander moarvm is taking about 35MB on my laptop. is this normal?
06:40 FROGGS _alexander: 35MB to do what?
06:40 FROGGS _alexander: err, is this about disk space or RAM?
06:40 _alexander run a perl script that waits for user input and closes
06:40 _alexander ram
06:40 FROGGS ohh, that's pretty good then
06:41 FROGGS a `perl6 -e ''` used to take more than a hundred megabytes of RAM
06:41 _alexander great improvement then
06:43 FROGGS aye
06:43 _alexander it's faster than i thought it would be too
06:44 _alexander i was told it was slow but so far i haven't found any real bottlenecks
06:44 FROGGS it is 38.6MB to sleep forever on my box
06:48 _alexander cool that's more than small enough for me.  i'm really enjoying perl 6 so far
06:49 _alexander how is threading? any gotchas there?
06:49 FROGGS not that I know of
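
(Aside, not part of the log: a minimal sketch of the high-level Perl 6 threading API that sits on top of MoarVM's thread pool, assuming current Rakudo behaviour; the example itself is invented.)

    # `start` schedules a block on the thread pool and returns a Promise;
    # `await` blocks until every Promise is kept and returns the results.
    my @promises = (1..4).map: -> $n {
        start {
            sleep 0.1;       # stand-in for real work
            $n * $n;
        }
    };
    say await @promises;     # (1 4 9 16), results in order
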
06:51 _alexander excellent. thanks for the help FROGGS!
06:51 FROGGS :o)
07:17 masak yay, a satisfied customer! :)
07:36 Ven joined #moarvm
07:52 _alexander i'm very satisfied, in fact.  it's great to use a language/platform that's easier to define by what it can do.
07:52 _alexander as opposed to what it can't do. (ie most languages)
07:55 _alexander that's the long version of 'thanks devs. keep up the great work' ha
07:56 _alexander left #moarvm
08:03 * FROGGS is happy
08:33 Ven joined #moarvm
08:44 Ven joined #moarvm
08:45 jnthn So nobody got to le jitter bug, eh?
08:57 Ven joined #moarvm
09:06 brrt joined #moarvm
09:07 brrt ok, ehm, the new pull request actually makes a lot of things const that i don't see the reason for
09:11 cognominal joined #moarvm
09:40 Ven joined #moarvm
10:22 tokuhiro_ joined #moarvm
10:57 aiacob joined #moarvm
10:58 tokuhiro_ joined #moarvm
11:16 Ven joined #moarvm
11:26 Peter_R joined #moarvm
11:26 Ven joined #moarvm
11:31 brrt joined #moarvm
12:01 zakharyas joined #moarvm
12:06 brrt jnthn, no, not yet
12:17 cognominal joined #moarvm
12:33 jnthn OK, mebbe I will later then :)
12:35 brrt on my bike i had a good idea on how to create an inlineable frame with dynlex value depending on the call stack
12:36 brrt now i've forgotten
12:37 jnthn The extents list search really *should* work though
12:38 brrt hmm
12:38 brrt yes
12:38 jnthn So either the labels of the inlines are hosed (most probable) or the current label is (which would be weird 'cus we rely on that for returning to work)
12:38 brrt not... necessarily
12:38 jnthn Oh?
12:38 brrt i don't think getdynlex is invokish?
12:39 brrt or whichever opcode is compiled from it
12:40 brrt my point is, it'd be possible for the label assigned to the reentry point to be just outside the bounds of the inline label
12:40 brrt although i'd have to say it'd be very weird
12:40 jnthn brrt: But at the point the wrong lookup happens, then we're some call frames deeper
12:40 jnthn As in, we've done an invoke from the inlined frame
12:40 brrt true
12:41 brrt you are right
12:41 brrt hmmm
12:41 jnthn That doesn't rule out us being busted in the way you described additionally ;)
12:41 brrt but the inlines must be at the end of basic block boundaries, right?
12:42 jnthn aye
12:42 brrt hmmm
12:42 jnthn I think I did once do a JIT fix to make sure we end up with the label right for inlines too
12:42 jnthn 'cus we sometimes missed exception handlers in inlined code
12:43 brrt i recall that, yes
12:43 jnthn But that was exactly for the throw/handler being in the same frame, iirc
12:44 FROGGS joined #moarvm
12:45 brrt hmm and we're sure we're not looking in the same frame for the dynvar?
12:46 jnthn We're looking at a dynvar declared in an inlined frame
12:46 jnthn But the point we do the lookup, we're a few frames deeper in the callstack
12:46 jnthn And the problem is we miss the declaration we should find
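
(Aside, not from the log: a made-up Perl 6 sketch of the code shape under discussion. A dynamic variable is declared in a small, easily inlined frame and then read from a call a few frames deeper; that read is the getdynlex walk that has to consult the JIT's inline extents for each frame on the way up. All names here are invented.)

    sub reader {
        say $*ANSWER;            # dynamic lookup, several frames below the declaration
    }
    sub middle { reader() }      # extra call frame between declaration and lookup
    sub declarer {               # small enough to be an inlining candidate
        my $*ANSWER = 42;        # dynvar declared in the (potentially inlined) frame
        middle();
    }
    declarer() for ^100_000;     # enough hot iterations for spesh/JIT to engage
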
12:46 brrt how even...
12:47 jnthn 'cus the label doesn't fall within the inline extents list that it should...
12:48 brrt well, yes
12:48 brrt :-)
12:49 brrt or we're not reading as many inlines as we expect
12:49 jnthn Or that
12:49 brrt what would prevent a frame from being inlined
12:49 jnthn See the first thing in inline.c
12:50 jnthn That's the logic that decides
12:53 brrt i see
13:00 tokuhiro_ joined #moarvm
13:18 Ven joined #moarvm
13:48 Ven joined #moarvm
14:00 robertle joined #moarvm
14:06 dalek joined #moarvm
14:52 jnthn Damn, update_ops.p6 feels a load faster than it used to
14:54 FROGGS jnthn: run it with perl6-j for some romantic feelings :o)
14:55 jnthn :P
15:01 tokuhiro_ joined #moarvm
15:03 dalek MoarVM: 663e71a | jnthn++ | src/6model/containers. (2 files):
15:03 dalek MoarVM: Add a can_store function to container v-table.
15:03 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/663e71a4e0
15:03 dalek MoarVM: d28e310 | jnthn++ | / (6 files):
15:03 dalek MoarVM: Add isrwcont op.
15:03 dalek MoarVM:
15:03 dalek MoarVM: For checking if we have a rw container.
15:03 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/d28e3100fb
15:13 hoelzro o/ #moarvm
15:13 nwc10 good UGT, hoelzro
15:14 hoelzro o/ nwc10
15:16 hoelzro did GH notifications stop working for a bit? or does dalek not post info on new branches?
15:16 hoelzro either way, could someone look at my branch https://github.com/MoarVM/MoarVM/tree/defer-close-stdin and tell me if that seems sane?
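
(Aside, not from the log: the kind of user-level pattern the defer-close-stdin branch is concerned with, assuming the standard Proc::Async interface; the command and strings are just examples. The point is that closing an async process's stdin should not take effect before earlier queued writes have been flushed.)

    my $proc = Proc::Async.new(:w, 'cat');   # :w gives us a writable stdin
    $proc.stdout.tap: { print $_ };
    my $done = $proc.start;
    $proc.print("hello\n");                  # write is queued asynchronously...
    $proc.close-stdin;                       # ...so the close has to wait for it to flush
    await $done;
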
15:24 tokuhiro_ joined #moarvm
16:03 jnthn hoelzro: Since there's only one event loop thread and you only seem to modify si->state on the event loop, the mutex lock/unlock isn't needed, I think.
16:20 FROGGS[mobile] joined #moarvm
16:30 FROGGS[mobile]2 joined #moarvm
16:39 FROGGS[mobile] joined #moarvm
16:43 cognominal joined #moarvm
16:57 dalek joined #moarvm
17:28 hoelzro jnthn: ah, ok, I was seeing some wonkiness there
17:28 hoelzro I think because I was querying si->state outside of the async thread?
17:31 hoelzro jnthn: other than that, looks good?
17:38 hoelzro on another note, is there a way to tell if an MVMArgProcContext is the sole owner of its callsite? I found a memory leak where MVMCallsite objects aren't getting freed (https://rt.perl.org/Ticket/Display.html?id=126183)
17:42 TimToady that might explain the memory leak I see in http://rosettacode.org/wiki/Numerical_integration#Perl_6
17:43 hoelzro TimToady: I've observed it when using subsignatures, but it may not be limited to that
17:45 hoelzro whenever I try to fix it, I segfault Moar ='(
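
(Aside, not from the log: a hypothetical shape of the sub-signature case hoelzro describes, with invented names; binding arguments through sub-signatures is where the never-freed MVMCallsite objects were observed.)

    # run long enough and watch RSS grow if the leak is present
    sub midpoint ( ($x1, $y1), ($x2, $y2) ) {      # parameters with sub-signatures
        ( ($x1 + $x2) / 2, ($y1 + $y2) / 2 );
    }
    midpoint( (0, 0), (1, $_) ) for ^500_000;
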
18:04 FROGGS joined #moarvm
18:46 TimToady I reduced my memory leak to: loop { 0, .1 ... 1000 }
18:47 TimToady there is an implicit * + .1 call generated in there, but it leaks with an explicit function as well
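
(Aside, not from the log: the explicit-callable variant TimToady refers to; the pointy block stands in for the implicit `* + .1` step the sequence operator would otherwise generate.)

    # leaks in the same way as the reduced case with the implicit step
    loop { 0, -> $prev { $prev + 0.1 } ... 1000 }
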
18:48 ggoebel2 joined #moarvm
18:49 TimToady I guess I'll RT it
19:19 TimToady RT #126189
19:41 jnthn hoelzro: The usecapture thing is...more delicate than I'd like...
19:41 hoelzro =/
19:41 jnthn It was an optimization to avoid an allocation/copying
19:41 hoelzro jnthn: what about having a refcount on the callsite object? would that be acceptable?
19:42 hoelzro I'm willing to put in the work, I just don't want to start down a path we'll end up rejecting
19:42 jnthn But...the memory management of it is fraught, and we only really hit it on slow paths anyway
19:43 jnthn hoelzro: I'd do a perl6-bench run, then make usecapture do exactly what savecapture does, then do another perl6-bench run, and if they come out the same, then we just make usecapture do exactly what savecapture does
19:43 hoelzro it looks to me that savecapture is the problem, though
19:44 jnthn Really?
19:44 jnthn savecapture makes a copy so it knows it can free it
19:44 jnthn usecapture points to the live callsite so we have to try not to get into trouble
19:44 hoelzro it does, but it doesn't free it
19:44 jnthn Oh
19:45 jnthn If it's its own copy, why? :S
19:45 hoelzro I'm trying to determine the condition in gc_free under which freeing is ok
19:45 jnthn I *thought* the idea was "if savecapture always, if usecapture never"
19:45 hoelzro so there's the condition: https://github.com/MoarVM/MoarVM/blob/master/src/6model/reprs/MVMCallCapture.c#L61
19:46 hoelzro however, effective_callsite is always equal to callsite in the situation I'm seeing
19:46 hoelzro due to https://github.com/MoarVM/MoarVM/blob/master/src/core/args.c#L105
19:46 hoelzro and https://github.com/MoarVM/MoarVM/blob/master/src/core/args.c#L111
19:47 jnthn Oh...it's trying to make sure it doesn't free interned callsites, and only frees those created due to flattening, I think
19:48 hoelzro sounds right
21:30 tokuhiro_ joined #moarvm
23:31 tokuhiro_ joined #moarvm
23:41 hoelzro hmm...I have a fix in place that stops the leak, but now MVM segfaults when building Rakudo =/
23:41 hoelzro if I MVM_SPESH_DISABLE=1, it builds fine
23:41 hoelzro src/spesh scares me
23:57 timotimo :(
