Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2017-09-26


All times shown according to UTC.

Time Nick Message
00:46 Ven`` joined #moarvm
00:50 samcv Final grant report now published https://cry.nu/grant-report/
01:55 ilbot3 joined #moarvm
01:55 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
02:58 Ven`` joined #moarvm
02:59 MasterDuke samcv++
02:59 yoleaux 25 Sep 2017 09:56Z <cuonglm> MasterDuke: can you tell me where the code for the `-n` command line flag is
02:59 yoleaux 25 Sep 2017 10:55Z <jnthn> MasterDuke: I'm surprised join doesn't already know that trick! Yes, let's do that, it's far more general
03:11 zakharyas joined #moarvm
03:19 bisectable6 joined #moarvm
03:43 Ven`` joined #moarvm
04:07 coverable6 joined #moarvm
04:07 committable6 joined #moarvm
04:07 nativecallable6 joined #moarvm
04:31 vendethiel- joined #moarvm
04:32 geekosaur joined #moarvm
05:37 domidumont joined #moarvm
06:11 domidumont joined #moarvm
06:50 brrt joined #moarvm
07:01 leont joined #moarvm
07:03 brrt good * #moarvm
07:17 lizmat brrt o/
07:19 brrt ohai lizmat
07:19 brrt thanks for the weekly :-)
07:22 lizmat you're welcome!
07:49 brrt joined #moarvm
07:49 brrt joined #moarvm
08:25 lizmat samcv++ : https://cry.nu/grant-report/   # dated 20 Sep, did I miss that for the P6W ???  :-(
08:26 nwc10 if we can have point releases for MoarVM, Rakudo etc, can we have point releases for the weekly? :-)
08:28 lizmat hehe... well, eh, at least now I don't have to worry what the main story is going to be next week
08:33 samcv lizmat, no
08:33 samcv i released it today
08:33 lizmat ok, *phew*  :-)
08:34 samcv i should change the time to be accurate
08:34 samcv ok i think it's updated now
08:35 lizmat :-)
09:27 jnthn samcv++ # awesome work, great report :)
09:39 dogbert17 jnthn: did you sample the foodtrucks this weekend?
09:40 jnthn dogbert17: No. That would have been an excellent reason to have a stomach problem on the evening, but I ended up with it without them :P
09:40 jnthn Did make it to the national railway day event though
09:41 dogbert17 cool, was it a nice event?
09:41 jnthn Saw the steam train. It was absolutely packed out, though, so we figured we'd take a ride another time :)
09:41 jnthn But yeah, was nice :)
09:44 * dogbert17 remembers wanting to be an engineer when he was a kid
09:47 dogbert17 let me know when you're well enough to tackle a mystery
09:50 dogbert17 "nasty" stuff like 'Internal error: zeroed target thread ID in work pass'
09:54 jnthn Hm, that's the kind that a MVM_GC_DEBUG=1 or 2 and a small nursery can often narrow
09:56 dogbert17 this happens in one of our favorite test files, albeit seldom: t/spec/S17-promise/nonblocking-await.t
09:57 jnthn Ah, is that the one that was also hanging for some folks?
09:58 dogbert17 yeah, want to check out a gist?
09:59 jnthn Sure
09:59 dogbert17 https://gist.github.com/dogbert17/ead3d659b038b371b128cd9b12e41c0b
10:00 jnthn Ah, the hanging one was S17-supply/syntax.t
10:00 jnthn Is that with MVM_GC_DEBUG=1 or without?
10:00 dogbert17 that's the one making TimToady's life miserable :)
10:00 dogbert17 without
10:01 dogbert17 can try with it if you want
10:01 jnthn Yeah, please do
10:02 dogbert17 ok
10:02 jnthn Darn, it seems it was my supply-related changes that have regressed one of the cro-websocket tests
10:02 nwc10 it has hung for me too at times
10:03 * jnthn wonders if this hang and the hang in the cro-websocket tests are related
10:04 jnthn ooh, I get it on my home machine
10:04 dogbert17 yay
10:04 nwc10 and at this point jnthn declares "lunchtime!" and exits stage left
10:04 jnthn Not feeling up to going to the office at least has some benefit :P
10:05 jnthn And yeah, it's nomming CPU and memory
10:05 dogbert17 being at home has its perks :)
10:06 jnthn Extracting the test from its test file also hangs
10:07 jnthn Anyway, the fact that it works on my fast machine and hangs reliably in my slower VM makes it likely it's a race
10:11 jnthn Hmm, where's that debug thing I made last week...
10:19 jnthn Well, it confirms that timer events pile up
10:19 jnthn Doesn't say why
10:20 jnthn As in, doesn't say something else is holding the lock for a long time
10:34 dogbert17 I noticed something else yesterday, if I decrease the size of the nursery I can get a couple of lock related tests to hang. Nursery size like 32768 * 8
10:35 dogbert17 it would be a simple task to break into the hanging programs with gdb :) I did that yesterday with the annoying syntax.t test
10:39 dogbert17 wrt the syntax.t test, each time the program "hanged" and I used gdb, one of the threads was always at this line
10:39 dogbert17 MVM_gc_root_add_frame_roots_to_worklist (tc=0xb26a6c08, worklist=0xa8ccf050, cur_frame=0x9d699e20) at src/gc/roots.c:366
10:40 dogbert17 but perhaps that is to be expected ?
10:44 timotimo "perf record" on the hanging test also says the time spent in gc_root_add_* stuff is a whole lot
10:46 dogbert17 timotimo: it looks like this in gdb, https://gist.github.com/dogbert17/ca7ac0e25dc8fafe5b2adc096eaa4beb
10:47 timotimo i'm not sure if there's any frame that has a whole lot of locals in it
10:47 timotimo or if there's just a whole lot of frames on the moarvm stack. which i don't think there would be
10:49 timotimo digital cookies for someone who builds an extension or userscript that lets me go from a line of code that contains a filename and line number (including path) directly to the right github repository and file in it
10:57 jnthn I think I've found a problem in the Async::Lock queue-on-recurse mechanism
10:57 timotimo cool!
10:57 jnthn Uh, Lock::Async
10:57 dogbert17 cooool :)
10:57 moritz jnthn: that sounds like the kind of thing where you don't want to have problems :-)
11:00 timotimo that could indeed be what makes the supply syntax test hang sometimes
11:00 jnthn It's possible, yeah
11:03 jnthn I think the issue is here:
11:03 jnthn elsif self!on-recursion-list() {
11:03 jnthn     # Lock is already held on the stack, so we're recursing. Queue.
11:03 jnthn     $try-acquire.then({
11:03 jnthn         LEAVE self.unlock();
11:03 jnthn         self!run-with-updated-recursion-list(&code);
11:03 jnthn     });
11:03 jnthn }
11:03 jnthn It sticks a .then on the Promise that is kept when the async lock is available
11:05 timotimo there could already be thens on there?
11:06 jnthn Yeah, if two things want to queue up there, then it sticks both onto that Promise. The .thens will fire. Both now think they have the lock, run code as if they have it, and then both go to unlock it.
11:06 timotimo but shouldn't that throw the "can't unlock when not locked" error? or does something else quickly enough jump in and hold the lock?
11:07 jnthn Yeah, I'm curious why I don't see that
11:07 timotimo so every time you recurse you add another simultaneous track of pieces that think they can have the lock
11:07 jnthn Which makes me wonder if this really is the problem in question
11:07 timotimo you could try having an "id" for every locking phase
11:07 timotimo and make sure the locks and unlocks match up their ids
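
A minimal sketch of the failure mode described above (illustrative code only, not the actual Lock::Async internals): several .then handlers stacked on the one availability Promise all fire when that Promise is kept, so every queued waiter proceeds as if it alone had acquired the lock.

    my $p = Promise.new;                  # stands in for $try-acquire
    $p.then({ say "waiter A proceeds as if it holds the lock" });
    $p.then({ say "waiter B proceeds as if it holds the lock" });
    $p.keep;                              # both callbacks fire, not just one
    sleep 0.5;                            # let the thread pool run them
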
11:10 jnthn Well, the easy fix is just to .then and re-contend
11:15 jnthn Doesn't help, though :(
11:15 timotimo dang
11:15 jnthn Which, given we didn't see the expected double-unlock error, isn't entirely surprising
11:16 jnthn m: my $s = Supplier.new; react { whenever $s { .say }; whenever Promise.in(1) { $s.emit("a"); $s.emit("b"); $s.emit("c"); $s.done() } }
11:16 camelia rakudo-moar 8cf083: OUTPUT: «a␤b␤c␤»
11:16 jnthn Hm, I kinda expected that to trip over it
11:17 jnthn m: my $s = Supplier.new; react { whenever $s { .say sleep 1; say now; }; whenever Promise.in(1) { $s.emit("a"); $s.emit("b"); $s.emit("c"); $s.done() } }
11:17 camelia rakudo-moar 8cf083: OUTPUT: «===SORRY!=== Error while compiling <tmp>␤Two terms in a row␤at <tmp>:1␤------> Supplier.new; react { whenever $s { .say⏏ sleep 1; say now; }; whenever Promise.i␤    expecting any of:␤        infix␤        infix stopper␤      …»
11:17 jnthn m: my $s = Supplier.new; react { whenever $s { .say; sleep 1; say now; }; whenever Promise.in(1) { $s.emit("a"); $s.emit("b"); $s.emit("c"); $s.done() } }
11:17 camelia rakudo-moar 8cf083: OUTPUT: «a␤Instant:1506424678.071537␤b␤Instant:1506424679.075984␤c␤Instant:1506424680.077176␤»
11:18 jnthn Hm, and I'd have expected that to have issues, but it works
11:19 jnthn m: my $s = Supplier.new; await start react { whenever $s { .say; sleep 1; say now; }; whenever Promise.in(1) { $s.emit("a"); $s.emit("b"); $s.emit("c"); $s.done() } }
11:19 camelia rakudo-moar 8cf083: OUTPUT: «a␤Instant:1506424799.608883␤b␤Instant:1506424800.611684␤c␤Instant:1506424801.612954␤»
11:19 timotimo m: my $s = Supplier.new; await start react { whenever $s { .say; sleep 1; say now - $*INIT-INSTANT; }; whenever Promise.in(1) { $s.emit("a"); $s.emit("b"); $s.emit("c"); $s.done() } }
11:19 camelia rakudo-moar 8cf083: OUTPUT: «a␤2.12159250␤b␤3.1251021␤c␤4.1262031␤»
11:27 jnthn The first line can also be my $s = Supply.interval(0.001).head(0) and it still hangs
11:27 jnthn Ah, though head is implemented using a supply block :)
11:28 jnthn So it's not actually a golf
11:28 timotimo how come .head(0) actually does anything at all?
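
For context, a user-space approximation of a head-like combinator, showing why even .head goes through supply-block machinery (a hedged sketch, not Rakudo's actual implementation). It also suggests an answer to the .head(0) question: the result still taps its source and only becomes done once a first value arrives.

    sub my-head(Supply $source, Int $n) {
        supply {
            my $taken = 0;
            whenever $source -> $value {
                emit $value if $taken++ < $n;
                done if $taken >= $n;      # with $n == 0, done on first value
            }
        }
    }
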
11:50 dogbert17 jnthn: what did you get for lunch?
11:51 * dogbert17 hasn't eaten yet :(
11:54 jnthn Still eating simple stuff for today
11:55 jnthn Had some mashed potato earlier, will have some vegetable soup soon :)
11:55 brrt joined #moarvm
11:55 jnthn Still no particularly good leads on the bug
11:56 jnthn Instrumented the supply block guts some, but they all look to be behaving well
11:57 patrickz joined #moarvm
11:58 zakharyas joined #moarvm
11:59 lizmat jnthn: would it make sense to have all BUILDALL (only) methods share the same Any:D: signature?
12:00 timotimo hm, not Mu?
12:03 lizmat no, because the default .new is in Any, not Mu
12:04 lizmat timotimo: so you're saying that would make sense ?
12:04 timotimo haven't spent enough thought on it
12:04 timotimo currently puzzling over the runaway quote thing
12:05 lizmat it would effectively bring the number of Signature objects for BUILDALL down from N to 1 (like it is now)
12:06 jnthn Would probably work out
12:06 lizmat ok, lemme try :-)
12:10 jnthn It seems that we're never unlocking an async lock
12:10 jnthn Not clear where that's going wrong just yet
12:15 brrt1 joined #moarvm
12:27 brrt joined #moarvm
12:35 lizmat do we have anything like compile constants in nqp that I could let depend on an ENV variable at compile time ?
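
For the Rakudo-level analogue of what lizmat asks about (a hedged sketch; in nqp itself one would presumably go through nqp::getenvhash at BEGIN time): the right-hand side of a constant runs at compile time, so it can consult the environment.

    # MY_DEBUG_FLAG is a hypothetical variable name, purely for illustration
    constant DEBUG = ?%*ENV<MY_DEBUG_FLAG>;   # evaluated once, at compile time

    say 'debug mode' if DEBUG;
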
12:42 dogbert17 jnthn: finally managed to get an 'Adding pointer %p to past fromspace to GC worklist' when running t/spec/S17-promise/nonblocking-await.t. Gist https://gist.github.com/dogbert17/f8cb997404699925730b76da44ef40da
12:51 jnthn dogbert17: Ah, good that we can catch it under that
12:51 jnthn Though, hmm, that doesn't narrow it down a load
12:51 lizmat jnthn: it looks like auto-generating a BUILDALL is going to break some tests that assume BUILDALL lives in Any
12:51 timotimo yeah, frame roots :\
12:52 jnthn lizmat: Which tests?
12:53 lizmat S12-introspection/methods.t 1-3, 9, 15, 20, 22-24, 42, 46, 57
12:53 jnthn oh, interesting
12:54 jnthn m: class C { submethod m() { } }; say C.^methods
12:54 camelia rakudo-moar 33e113: OUTPUT: «(m)␤»
12:54 lizmat ah, it should be a submethod is what you're saying  :-)
12:55 jnthn Oh, I thought you'd done it that way already ;)
12:55 jnthn It should but it won't help :)
12:55 jnthn Since .^methods includes those
12:56 lizmat I guess we can simplify the BUILDALL to a generic one that just returns self if there are no elements in the BUILDPLAN
12:57 lizmat jnthn: not sure it would make sense to make it a submethod though
12:58 lizmat if a subclass only adds methods, shouldn't they be able to share the same BUILDPLAN ?
12:58 brrt joined #moarvm
12:58 lizmat especially if it has a Any:D: invocant ?
12:58 lizmat s/BUILDPLAN/BUILDALL/ ?
13:01 jnthn What if the subclass uses a custom metaclass that doesn't trigger BUILDALL generation?
13:01 lizmat is that possible ?
13:01 jnthn Yes
13:01 jnthn I think submethod is safer
13:02 lizmat so all the "but" cases would just get a new BUILDALL method
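
To make the preceding discussion concrete, a hand-written sketch of the kind of per-class BUILDALL submethod under discussion (illustrative only; the real auto-generated code walks the BUILDPLAN and also handles defaults, BUILD and TWEAK):

    class Point {
        has $!x;
        has $!y;

        # roughly the shape a generated BUILDALL might take for this class
        submethod BUILDALL(Point:D: @positional, %named) {
            $!x = %named<x> if %named<x>:exists;
            $!y = %named<y> if %named<y>:exists;
            self
        }
    }

    say Point.new(x => 1, y => 2);   # goes through the submethod above
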
13:25 * dogbert17 reboot &
13:31 dogbert17 joined #moarvm
13:41 timotimo brrt, zoffix found a change to the optimizer that causes a segfault, but only if the jit is enabled. somehow a null pointer gets into the first argument of the reprconv getarg function
13:42 timotimo hm, may be a difference between how the interp.c handles the object being null and how the reprconv does it
13:42 timotimo er, sorry, get_attr_o of course
13:43 timotimo the interp just does IS_CONCRETE, which also segfaults on a null pointer
13:43 timotimo oh
13:43 timotimo it's calling that on self, but this is a sub, not a method
13:46 dogbert17 this syntax.t code is interesting: if I change 'await do for ^4' to 'await do for ^2' the program consistently takes 1s to execute; if I increase it to 'await do for ^3' it often takes 25s to execute, but once in a blue moon this also takes 1s
13:47 * dogbert17 (has four cores in his vm)
13:47 timotimo aye
13:47 timotimo i see something similar, getting that number down gets me to pretty much always succeed it
13:47 timotimo what happens on your machine if you give .interval a second argument, like 0.01?
13:47 dogbert17 will try
13:49 dogbert17 with 0.01, ^3 takes 1.7s and ^4 takes ~2s
13:49 timotimo but it always succeeds, right?
13:50 dogbert17 yep
13:50 dogbert17 even ^5 succeeds in short order
13:51 dogbert17 0.005 also works quite well, is there some queue which gets filled up somewhere?
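
For readers following along, a reconstruction of the rough shape of the code under discussion (hedged; this is not the exact spectest): several blocking reacts over interval supplies running in parallel, which is what ties up the pool's threads.

    await do for ^4 {            # ^2 completes quickly; ^3 or ^4 can stall
        start react {
            whenever Supply.interval(0.001) {
                done if $++ == 100;
            }
        }
    }
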
13:51 jnthn So far I've traced it down to an await on the async lock never coming back
13:52 jnthn Even though the lock is released
13:52 jnthn By now I'm down in the awaiter guts
13:54 [Coke] btw, I have some Proc::Async code that hangs if I start more than X promises simultaneously (on the 2017.09 release): https://github.com/perl6/doc/blob/master/xt/aspell.t (check out the doc repo, then TEST_THREADS=10 perl6 xt/aspell.t triggers it here; you have to have aspell and awk installed. need to get rid of the awk requirement there.)
13:57 dogbert17 strace just says futex(0xb4a3fbc, FUTEX_WAIT_PRIVATE, 1, NULL
13:57 [Coke] (looks like it works with 6, then hangs with 7). happy to provide more diagnostics if needed.
13:59 [Coke] (ah, no, six can hang too, just not up front.)
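
One hedged way to cap how many Proc::Async runs are in flight at once (an illustrative workaround, not the doc repo's actual code): process the work in fixed-size batches.

    my @words = <one two three four five six seven eight>;
    my @results;
    for @words.batch(4) -> @chunk {        # at most 4 procs in flight
        @results.append: await @chunk.map: -> $word {
            start {
                my $proc = Proc::Async.new('echo', $word);
                my $out = '';
                $proc.stdout.tap({ $out ~= $_ });
                await $proc.start;
                $out.chomp
            }
        }
    }
    say @results;
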
14:02 Geth ¦ MoarVM: c40be90d36 | (Timo Paulssen)++ | src/jit/x64/emit.dasc
14:02 Geth ¦ MoarVM: jit: getlex_o shall give VMNull rather than null
14:02 Geth ¦ MoarVM:
14:02 Geth ¦ MoarVM: you can trigger that in nqp by using self in a sub
14:02 Geth ¦ MoarVM: that doesn't have a self available from outside.
14:02 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/c40be90d36
14:31 dogbert17 timotimo: when I run the syntax program with ^4, the program seems to start 8 threads; after a number of seconds that increases to 9, and an equal amount of time later it goes up to 10. A few seconds after that the code completes
14:32 jnthn [Coke]: The new scheduler was merged shortly after 2017.09, and may well resolve the issue
14:32 jnthn (Or at least, make it work for longer...eventually it's probably a job for 6.d.PREVIEW)
14:38 [Coke] jnthn: doh, forgot that came in after. updating!
15:08 jnthn It's looking like, after all my supply bug hunting, it may be a scheduler bug that I'm looking at
15:08 jnthn I eventually chased it down to the unlock promise not being scheduled
15:09 AlexDaniel joined #moarvm
15:09 dogbert17 jnthn++, that was a tough one
15:09 jnthn And I think I can see what's going on, too
15:10 jnthn The timer thread is busy pushing new work
15:10 jnthn So we're eating resources
15:10 dogbert17 is that the 'memory leak'?
15:10 jnthn "Leak" is just ever-growing work queue
15:10 jnthn On my 12-core machine at the office, it happily spawns up to 12 threads before it starts thinking about whether it really should add any more
15:11 jnthn On this 4-core VM that point comes far sooner
15:11 dogbert17 so that's why it was difficult to reproduce at work
15:12 jnthn Right
15:12 jnthn Or just impossible
15:12 jnthn On that machine
15:12 jnthn :)
15:12 jnthn It's using blocking react in this test file
15:12 jnthn That's why this also goes away if the loop is for ^2 or something in that test also
15:12 jnthn Anyway, the blocking react really blocks a bunch of threads
15:13 jnthn The progress elsewhere confuses the scheduler into thinking everything is looking great
15:13 dogbert17 so it does nothing?
15:13 jnthn So it doesn't add more general workers
15:13 jnthn Right
15:14 jnthn It should probably also look at how long since something in the queue made progress.
15:14 jnthn And use that as a "we might be deadlocked" heuristic also
15:15 dogbert17 sounds like a clever move
15:15 jnthn Yeah, want a break, then will look at that.
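
A hedged sketch of the heuristic being proposed (every name here is hypothetical; the real ThreadPoolScheduler supervisor logic differs): alongside per-core utilisation, treat "work is queued but nothing has completed for a while" as a deadlock suspicion and add a worker.

    my $starvation-threshold = 0.5;   # seconds; hypothetical tuning knob
    my $last-completion-time = now;
    my $queued-work          = 3;     # stand-in for the scheduler's queue depth

    sub add-general-worker() { say 'supervisor: spawning an extra worker' }

    # run periodically by a supervisor: progress has stalled while work
    # is still queued, so assume we might be deadlocked
    if $queued-work && now - $last-completion-time > $starvation-threshold {
        add-general-worker();
    }
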
15:16 dogbert17 so the unlock promise wasn't being scheduled because there was no obvious reason to do so?
15:29 [Coke] jnthn: that upgrade definitely helped. (I should also be funneling all these promises through something that is concurrency-safe rather than an array)
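
On the concurrency-safe funnel: a minimal sketch using a Channel instead of pushing to a shared array from many threads (illustrative, not the actual test code):

    my $results = Channel.new;
    my @workers = (^10).map: -> $n { start $results.send($n ** 2) };
    await @workers;
    $results.close;
    say $results.list.sort;   # safe to drain once all senders are done
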
15:39 AlexDaniel joined #moarvm
15:42 brrt joined #moarvm
15:48 jnthn dogbert17: yeah, pretty much that
16:01 robertle joined #moarvm
16:23 leont joined #moarvm
16:26 brrt hi #moarvm
16:27 jnthn o/ brrt
16:27 brrt why, and how, does my tag still show moarvm 2017.8 after merging with origin/master
16:27 brrt \o jnthn
16:28 brrt that's because that's what git describe is giving me
16:29 brrt oh, because of rebasing
16:29 brrt my bad
16:32 MasterDuke joined #moarvm
16:37 MasterDuke jnthn: fyi, https://github.com/MoarVM/MoarVM/pull/705 now optimizes copying strands and when only one element is in the list to join. no hurry, just getting a ping in
16:37 yoleaux 15:46Z <AlexDaniel> MasterDuke: did we have a ticket for 「say “foo"; my $x = 42; say $x」 ?
16:44 domidumont joined #moarvm
17:13 dalek joined #moarvm
17:19 robertle joined #moarvm
17:42 SourceBaby joined #moarvm
18:17 squashable6 joined #moarvm
18:25 quotable6 joined #moarvm
18:25 committable6 joined #moarvm
18:25 unicodable6 joined #moarvm
18:25 coverable6 joined #moarvm
18:25 bisectable6 joined #moarvm
18:25 bloatable6 joined #moarvm
18:25 nativecallable6 joined #moarvm
18:25 benchable6 joined #moarvm
18:25 releasable6 joined #moarvm
18:25 greppable6 joined #moarvm
18:25 evalable6 joined #moarvm
18:25 squashable6 joined #moarvm
18:25 statisfiable6 joined #moarvm
18:35 Ven`` joined #moarvm
19:09 lizmat joined #moarvm
19:34 TimToady joined #moarvm
19:46 Ven`` joined #moarvm
19:48 dogbert2 MoarVM Panic caught when running t/spec/S14-roles/mixin.t, https://gist.github.com/dogbert17/696587a80b74dae8a3410f9e9f01447e
19:49 timotimo oh, huh, the mixim cache again?
19:49 dogbert2 is there such a beast?
19:51 timotimo hm, maybe i'm thinking of the parameterization cache
19:52 timotimo my code to squish phi argument lists in spesh causes some kind of infinite loop or something in the very last step of rakudo installation %)
19:54 dogbert2 can you break into the process with gdb and see what's going on?
19:57 timotimo i could, but maybe i'll put the squishing into the output instead %)
19:57 timotimo it's not actually an optimization per se, as phi nodes don't appear in the generated code at all
19:57 dogbert2 the old println trick :)
19:58 timotimo well, how fun is it to look at this (screenshot upcoming)
19:59 timotimo https://i.imgur.com/9mqMta9.png
19:59 timotimo and this is a very tame case
20:02 timotimo https://i.imgur.com/Q3l5k16.png
20:02 dogbert2 looks moderately fun :)
20:13 timotimo hum. graph_spesh relies on a MAST/Ops.p6 module that apparently no longer gets written while building moar
20:13 timotimo the file i have locally is from november 2014
20:17 samcv good *
20:17 timotimo heyo samcv
20:17 samcv heyo
20:17 timotimo Ops.p6 doesn't show up in any commit diff of all of moarvm
20:22 ilmari not MAST/Ops.nqp?
20:22 timotimo nope, a p6 version of that
20:28 timotimo um, huh
20:29 timotimo m( it's under tools/lib
20:31 timotimo so why is this sp_getlex_ins without operands in the graph drawing
20:34 timotimo oh, and an sp_decont likewise, weird
20:36 timotimo must be because the Ops.p6 file was out of date i guess
20:36 timotimo hm, i put that file into gitignore
20:47 timotimo well, hardly anybody besides me uses that tool anyway, so ... :)
20:49 timotimo it also gets very confused by multiple annotations in a row
23:28 timotimo all in all, a grant well done
23:56 MasterDuke joined #moarvm
