Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2016-04-23


All times shown according to UTC.

Time Nick Message
01:48 ilbot3 joined #moarvm
01:48 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at http://irclog.perlgeek.de/moarvm/today
01:52 geekosaur joined #moarvm
03:44 geekosaur joined #moarvm
05:00 sivoais joined #moarvm
07:30 domidumont joined #moarvm
07:35 domidumont joined #moarvm
07:49 leont_ joined #moarvm
08:39 lizmat joined #moarvm
08:47 vendethiel joined #moarvm
09:07 leont_ joined #moarvm
09:40 lizmat joined #moarvm
09:40 leont_ joined #moarvm
10:08 leont_ joined #moarvm
10:33 leont_ joined #moarvm
10:37 masak joined #moarvm
10:38 masak for people interested in GCs: http://jvns.ca/blog/2016/04/22/java-garbage-collection-can-be-really-slow/
10:53 jnthn GCs are hard. :)
10:55 jnthn Curiously, we have A Different Problem in Moar.
10:55 jnthn We don't have a way to set an absolute memory limit.
10:55 jnthn Instead, the amount of overhead the GC carves out for itself is proportional to the overall heap size.
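
A minimal sketch of the distinction jnthn is drawing, with made-up numbers (the sizes and the 10% ratio are assumptions, not MoarVM's actual accounting): a proportional overhead budget grows with the heap, while an absolute limit would cap it.

    my $heap-size      = 1024 * 1024 * 1024;   # hypothetical live heap: 1 GB
    my $overhead-ratio = 0.10;                 # hypothetical 10% GC overhead
    my $absolute-limit = 64 * 1024 * 1024;     # what a hard cap would look like

    # Proportional scheme (how jnthn describes Moar): bigger heap,
    # bigger GC overhead, with no upper bound.
    my $proportional = ($heap-size * $overhead-ratio).Int;

    # Absolute scheme (what Moar lacks a way to set): overhead can
    # never exceed the configured limit.
    my $capped = min($proportional, $absolute-limit);

    say "proportional budget: {$proportional div (1024 * 1024)} MB";   # 102 MB
    say "with absolute limit: {$capped div (1024 * 1024)} MB";         # 64 MB
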
10:57 timotimo that sleep sort crash seems to be fixed in latest rakudo moar
10:57 jnthn Oh?
10:58 jnthn The author was running an older one?
10:58 jnthn uh
10:58 jnthn reporter
10:58 timotimo latest meaning bleeding edge
10:58 timotimo i.e. newer than nqp_revision and moar_revision
10:58 lizmat jnthn: it crashed on me
10:58 jnthn Yeah, we need to be a tad careful with "seems to be fixed" on threading bugs
10:58 lizmat well, you could argue I worked around it
10:58 timotimo i'm running it for the 4th time now :)
10:58 timotimo oh
10:59 timotimo with your latest commit, that makes it no longer crash?
10:59 jnthn Because I've got ones that I didn't reproduce after 10,000 attempts on either Windows or in a Linux VM
10:59 timotimo well, that'd be why, then
10:59 lizmat indeed
10:59 jnthn (Yes, I actually did run the thing for 10,000+ attempts)
10:59 jnthn And yet Zoffix and others saw it almost right away
10:59 lizmat if you change Channel.list in the code to Channel.Supply.list
10:59 lizmat it will break again
11:00 jnthn fwiw, the 13KB nursery shook out quite a few things in S17
11:00 lizmat I merely re-implemented Channel.list to be an Iterator
11:00 lizmat rather than self.Supply.list
11:00 jnthn I hope you were really careful... :)
11:00 timotimo rebuilding rakudo
11:01 * jnthn went via Supply in order to rule out a bunch of possible races :)
11:01 lizmat well, that's why I'm not sure my solution is correct
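
For context, a hedged sketch (names assumed; not lizmat's actual patch) of what an Iterator-based Channel.list can look like: pull-one receives values until receiving on the closed, drained channel throws, which marks the end of iteration.

    my class ChannelIterator does Iterator {
        has Channel $.channel is required;
        method pull-one() {
            # .receive blocks for the next value; once the channel is
            # closed *and* empty it throws X::Channel::ReceiveOnClosed,
            # which we translate into end-of-iteration.
            $!channel.receive;
            CATCH {
                when X::Channel::ReceiveOnClosed { return IterationEnd }
            }
        }
    }

    # Hypothetical usage: drain a closed channel.
    my $c = Channel.new;
    $c.send($_) for 1..5;
    $c.close;
    say Seq.new(ChannelIterator.new(channel => $c));   # (1 2 3 4 5)
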
11:01 timotimo yup, it totally crashes
11:01 lizmat it's GC-related, as adding an nqp::force_gc in the sub totally fixes it
11:02 timotimo interesting. it's the "trying to destroy a mutex aborts" thing
11:02 jnthn s/totally fixes/totally hides/ ;-)
11:02 lizmat jnthn: eh, yeah  :-)
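
For readers following along, a sketch of the nqp::force_gc probe lizmat mentions (the nqp:: ops are an internal, unsupported interface): forcing a collection changes GC timing, so it can make a timing-sensitive bug vanish without fixing it, which is exactly jnthn's point.

    use nqp;          # internal interface; not part of the supported language
    nqp::force_gc();  # force a GC run here; if the crash stops, the bug is
                      # GC-timing-sensitive -- hidden, not fixed
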
11:02 jnthn Anyway, I should probably spend the weekend resting rather than debugging GC things :-)
11:02 timotimo to be fair, when a mutex that's still held gets gc_free'd, we're in trouble already
11:03 jnthn But I plan to dig into the various bugs, including S17 ones, that the 13KB nursery turned up next week.
11:03 jnthn I'll probably then split those out into a separate branch from my frame refactors
11:03 jnthn So we get them into master sooner
11:03 jnthn Before returning to optimizing invocation :)
11:04 jnthn timotimo: Totally agree on that
11:04 timotimo how do i ask Lock.condition if the lock is locked at that moment or not?
11:04 timotimo i'd like to give Lock a DESTROY submethod for debugging purposes
11:05 jnthn uh
11:05 jnthn Lock.condition is for creating a condition variable :)
11:05 timotimo oh, er
11:05 timotimo yeah, i meant to say:
11:05 timotimo how do i ask a lock if it's locked :)
11:05 jnthn Don't know there's a way to do that at present
11:05 timotimo the condvar was just the thing i was still looking at at that moment
11:06 timotimo polllock
11:06 jnthn Though
11:06 jnthn You can .wrap the methods maybe :)
11:06 jnthn (lock/unlock)
11:06 timotimo hm, after locking, writing to a lock's attribute would be safe
11:06 jnthn Yeah, or you can patch it that way
11:07 jnthn lizmat: btw, did you keep Channel.list returning a List, or does it return a Seq?
11:07 lizmat Seq
11:07 jnthn Ah...it needs to stay a List, I think.
11:07 jnthn But that's just a matter of s/Seq.new(...)/List.from-iterator(...)/
11:08 jnthn Won't need any changes to the iterator
11:08 jnthn We should maybe add a .Seq coercion method also though
11:08 lizmat yeah, but then we don't need an Iterator
11:08 lizmat we can just build the list and return that when we're done
11:08 jnthn Or maybe just implement what you have now as .Seq and then method list() { self.Seq.list }
11:09 lizmat but why go through the .Seq then ?
11:09 timotimo so you can get a Seq if you want one
11:09 timotimo without going through list :)
11:09 jnthn List doesn't mean "fully reified", it just means "remembering"
11:09 jnthn Laziness is still valuable.
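
A hedged sketch of the shape jnthn is suggesting (a subclass stands in for editing Channel itself, and it reuses the hypothetical ChannelIterator from above): implement the draining iterator once, expose it as .Seq, and derive .list from it.

    my class SketchChannel is Channel {
        method Seq() {
            Seq.new(ChannelIterator.new(channel => self))
        }
        method list() {
            # Same effect as List.from-iterator(...): the List remembers
            # values it has produced but need not be fully reified, so
            # laziness is preserved.
            self.Seq.list
        }
    }

    my $c = SketchChannel.new;
    $c.send($_) for ^3;
    $c.close;
    say $c.list;   # (0 1 2)
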
11:09 timotimo jnthn: This representation (ReentrantMutex) does not support attribute storage
11:09 timotimo d'oh :)
11:09 jnthn timotimo: heh, yes :)
11:11 timotimo i could implement an op that asks a lock for its lockedness without locking, that'd go into moar, then
11:12 timotimo for now i'll be AFK, though
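
A hedged sketch of jnthn's .wrap suggestion, worked around the constraint timotimo just hit: since the ReentrantMutex REPR cannot carry extra attributes, the held-state goes in an external side table. The %held table and its keying are illustrative assumptions, the table itself is not protected against concurrent access, and a Bool ignores reentrant lock counts, so this is strictly a debugging aid.

    use soft;   # discourage inlining so the wraps reliably take effect

    my %held;   # lock.WHICH => currently held? (side table, because the
                # lock object itself cannot store extra attributes)

    Lock.^find_method('lock').wrap(-> \lock, |rest {
        callsame;                   # acquire the real lock first
        %held{lock.WHICH} = True;   # then record it as held
    });
    Lock.^find_method('unlock').wrap(-> \lock, |rest {
        %held{lock.WHICH} = False;  # record the release before unlocking
        callsame;
    });

    my $l = Lock.new;
    $l.lock;
    say %held{$l.WHICH};   # True
    $l.unlock;
    say %held{$l.WHICH};   # False
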
11:27 dalek MoarVM: e3bf435 | (Pawel Murias)++ | src/6model/reprs/P6opaque.c:
11:27 dalek MoarVM: Fix segfault when composing an uncomposed P6opaque repr.
11:27 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/e3bf435454
11:27 dalek MoarVM: 629282f | jnthn++ | src/6model/reprs/P6opaque.c:
11:27 dalek MoarVM: Merge pull request #362 from pmurias/fix-uncomposed-segfault
11:27 dalek MoarVM:
11:27 dalek MoarVM: Fix segfault when composing an uncomposed P6opaque repr.
11:27 dalek MoarVM: review: https://github.com/MoarVM/MoarVM/commit/629282f0e7
11:45 lizmat joined #moarvm
12:10 lizmat_ joined #moarvm
13:50 domidumont joined #moarvm
14:41 Util joined #moarvm
14:50 leont joined #moarvm
15:25 leont joined #moarvm
17:50 lizmat joined #moarvm
17:51 lizmat joined #moarvm
17:52 leont joined #moarvm
17:57 leont I seem to have hit a most interesting bug: every 200th process I run() fails to receive input…
17:57 jnthn Exactly 200? :)
17:58 timotimo we can grep the moar source for "200"
17:58 timotimo must be a magic number somewhere
17:58 jnthn :P
17:58 jnthn dinner &
17:58 timotimo #define DROP_INPUT_EVERY 200
17:59 leont Yes, exactly 200, 400, 600 and now on the way towards 800
18:00 leont (while running the spectest in my harness, this seems to be the only remaining issue in the synchronous parser)
18:09 leont 800 reliably failed too
18:11 masak leont: do you have something for others to run to confirm this behavior?
18:14 nine_ leont: the 200th fails but the 201st works again? Are those run serially or in parallel?
18:16 arnsholt A first golf attempt would be running "echo 1" 1000 times or something, and seeing what happens
18:16 arnsholt I guess
18:18 leont joined #moarvm
18:19 lizmat nine_: I think they're serial
18:20 timotimo if you're doing something in parallel, it's probably extremely hard to pin-point it hitting every 200th :)
18:24 leont joined #moarvm
18:30 leont joined #moarvm
18:38 jnthn https://github.com/MoarVM/MoarVM/blob/master/src/spesh/osr.h#L2
18:38 jnthn There's a 200
18:38 jnthn Maybe try with MVM_SPESH_OSR_DISABLE=1
18:42 leont Interestingly, it sigaborts reproducibly after 997 tests…
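
A hedged reproduction sketch for the every-200th failure (the child command, iteration count, and failure check are all assumptions): run a trivial process repeatedly and report any iteration where run() captures no output. Re-running with MVM_SPESH_OSR_DISABLE=1 set, as jnthn suggests, would implicate OSR if the failures then vanish.

    # repro.p6 -- run as:   perl6 repro.p6
    # then compare with:    MVM_SPESH_OSR_DISABLE=1 perl6 repro.p6
    for 1..1000 -> $i {
        my $proc = run 'echo', 'hi', :out;
        my $out  = $proc.out.slurp-rest;   # .slurp-rest was the 2016-era API
        say "iteration $i: got no output" unless $out.chars;
    }
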
18:50 zakharyas joined #moarvm
20:06 Ven joined #moarvm
20:56 Ven joined #moarvm
21:29 zakharyas joined #moarvm
21:35 leont joined #moarvm
21:37 zakharyas joined #moarvm
21:54 dalek joined #moarvm
21:56 cognominal joined #moarvm
22:46 lizmat joined #moarvm
