Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2017-09-25


All times shown according to UTC.

Time Nick Message
01:37 Ven`` joined #moarvm
01:55 ilbot3 joined #moarvm
01:55 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
05:32 domidumont joined #moarvm
05:55 domidumont joined #moarvm
06:14 brrt joined #moarvm
06:18 brrt good * #moarvm
06:41 brrt joined #moarvm
07:17 zakharyas joined #moarvm
07:18 TimToady well, my spectest still hangs in S17-supply/syntax.t test 68
07:19 TimToady it's not chewing up syscalls, strace reports only futex(0x543d454, FUTEX_WAIT_PRIVATE, 1, NULL
07:24 TimToady well, actually, it's hanging in test 69
07:24 TimToady still chews up about .004 gig per second
07:29 leont joined #moarvm
07:38 TimToady interestingly, if I insert signal(SIGINT).tap: { note "HERE"; exit } into that test, it traps the SIGINT, but never runs the block, seemingly
07:39 TimToady so maybe it's the scheduler that's hosed somehow
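A minimal sketch of the SIGINT tap TimToady describes, as a standalone script (the sleep only keeps the process alive long enough to send it Ctrl-C; the block body is illustrative):

    signal(SIGINT).tap: {
        note "HERE";   # on the hanging spectest run this never printed
        exit;
    };
    sleep 10;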
07:50 leont That sort of thing is tricky to schedule in general
08:26 leont joined #moarvm
08:31 leont In POSIX, signals are actually delivered to a thread, not a process (even if signal disposition is per process). This behavior can be useful but is also hard to meaningfully translate to a higher-level programming language.
08:32 TimToady a completely fresh clone passed the spectest, but then hung again when that test was run individually, so it's apparently probabilistic, though likely on my machine
08:38 TimToady if I put 'note $++;' in the inner loop, it generally never gets anywhere close to 500, usually stops by the first 20 or 30
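A hedged reconstruction of the loop shape being debugged (not the actual S17-supply/syntax.t code), with TimToady's counter added:

    for ^500 {
        note $++;                                   # progress counter; stalls around 20-30 on the failing box
        react {
            whenever Supply.interval(0.001) { done }
        }
    }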
08:39 TimToady well, dinner &
09:03 brrt anyway, i was going to suggest
09:04 brrt lets make a moarvm docs site
09:04 brrt built from the docs/ directory of MoarVM
09:09 nwc10 using dogfood
09:09 nwc10 (or github pages)
09:11 brrt or $whatever-solves-the-problem-using-minimum-brain-cycles
09:11 brrt :-)
09:11 brrt anyway, i'm for merging nine++'s branch today
09:13 nwc10 I'd argue against writing significant code in anything-other-than-Perl 6 even if it solves the immediate problem in fewer cycles than github pages
09:13 nwc10 as it's a local minimum
09:17 brrt hmmm
09:17 brrt i think that's fair
09:18 brrt i'm wondering if i should be converting my documentation from org-mode to pod6
09:26 robertle joined #moarvm
10:03 dogbert17 hmm, MoarVM panic: Internal error: zeroed target thread ID in work pass
10:04 dogbert17 annoyingly, it disappears under gdb
10:43 Ven`` joined #moarvm
11:54 Geth ¦ MoarVM: MasterDuke17++ created pull request #706: Fix comment
11:54 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/pull/706
11:56 Geth ¦ MoarVM: 33c7521c1d | MasterDuke17++ (committed using GitHub Web editor) | src/6model/reprs/P6opaque.c
11:56 Geth ¦ MoarVM: Fix comment
11:56 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/33c7521c1d
11:56 Geth ¦ MoarVM: d274e827af | lizmat++ (committed using GitHub Web editor) | src/6model/reprs/P6opaque.c
11:56 Geth ¦ MoarVM: Merge pull request #706 from MasterDuke17/patch-1
11:56 Geth ¦ MoarVM:
11:56 Geth ¦ MoarVM: Fix comment
11:56 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/d274e827af
12:06 [Coke] (futex... NULL) I have just been seeing that in failure modes for docker. small world.
12:17 jnthn It's what pthreads mutexes boil down to using on Linux, in the case they actually need to block
12:17 jnthn And most multi-threaded uses those :)
12:19 jnthn *code uses
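An illustrative way to see that same futex wait from Perl 6: block one thread on a Lock another thread holds, then strace the process (the names and the one-second sleep are made up for the example):

    my $lock = Lock.new;
    my $p;
    $lock.protect: {
        # the started thread blocks acquiring the lock; under strace that shows
        # up as a futex(..., FUTEX_WAIT_PRIVATE, ...) call
        $p = start { $lock.protect: { note "acquired" } };
        sleep 1;
    }
    await $p;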
12:22 TimToady funny though that it succeeded under heavy load, but fails quickly when run standalone
12:22 TimToady I never got more than 20 or 30 iterations into the ^500 when standalone
12:23 TimToady and a fresh clone did it too, so it's not something residual from previous builds
12:27 TimToady my failing linux is running a 3.14.1 kernel, while my 3.16.0 kernel on my home server seems to be fine
12:27 TimToady so maybe there's some difference there
12:30 TimToady also, the succeeding cpu is an AMD, while the failing one is an Intel
12:31 lizmat FWIW, my MBP (on which it doesn't fail) has an Intel
12:31 lizmat so feels more like a kernel issue than a processor issue to me
12:31 TimToady well, or a kernel that doesn't quite do hyperthreading right maybe
12:32 lizmat well, fwiw, it does feel good that we're getting closer to the metal with rakudo  :-)
12:44 TimToady a little funny that there's a different kernel release though, since they're both Linux Mint 17.3
12:45 TimToady you'd think they'd keep those synced up across architectures
14:00 leont joined #moarvm
14:00 geekosaur they do. *but*, if you need specific hardware support, you may be pointed to installing a newer kernel
14:23 buggable joined #moarvm
14:24 dogbert17 I tried the pragma 'use trace' on the troublesome test 69 in syntax.t. Running the code it spits out quite a bit of information and then 'hangs' for quite a few seconds on the line
14:25 dogbert17 whenever Supply.interval(0.001) { done }. It then spits out more data and usually 'hangs' again on the same line for a long time before continuing
14:29 dogbert17 runtime numbers also differ wildly between runs, in three runs I got 26, 33 and 11 secs respectively on my 3 core vm
14:31 dogbert17 'max memory usage' according to /usr/bin/time for the three runs was 471704, 590412 and 230464k respectively
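For reference, the pragma dogbert17 used only needs to appear before the code being traced; an illustrative two-liner (the react block stands in for the real test body):

    use trace;                                       # prints each statement as it executes
    react { whenever Supply.interval(0.001) { done } }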
14:38 AlexDaniel joined #moarvm
14:48 brrt joined #moarvm
14:55 leont joined #moarvm
15:15 brrt joined #moarvm
15:22 geekosaur joined #moarvm
16:05 domidumont joined #moarvm
16:31 timotimo so, syntax.t is happy when i give it even the lowest amount of startup delay time
16:31 timotimo maybe there's a race somewhere where the interval fires before the tap is set up, but that doesn't explain why the "done" isn't run on the second tick
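One hedged way to add the kind of startup delay timotimo mentions: Supply.interval takes an optional second argument, the delay before the first emit (the 0.01 here is illustrative):

    react {
        # with a delay, the tap is fully set up before the first tick can arrive
        whenever Supply.interval(0.001, 0.01) { done }
    }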
16:43 timotimo next step for me should probably be to put debug prints everywhere in the scheduler ...
16:44 timotimo afk first, though
16:47 dogbert2 joined #moarvm
17:01 robertle joined #moarvm
17:02 Ven joined #moarvm
17:27 leont joined #moarvm
17:48 Zoffix joined #moarvm
17:57 Ven joined #moarvm
18:04 samcv good * moarvm
18:06 Zoffix \o\
18:09 Ven_ joined #moarvm
18:26 leont joined #moarvm
19:25 patrickz joined #moarvm
19:27 patrickz Hi everyone!
19:27 Zoffix \o
19:29 samcv hi patrickz
19:30 patrickz I have submitted a PR fixing the build with GCCs some days ago. Bumping it again before it gets out of sight: https://github.com/MoarVM/MoarVM/pull/696
19:31 patrickz *older GCCs*
20:03 lizmat and another Perl 6 Weekly hits the Net: https://p6weekly.wordpress.com/2017/09/25/2017-39-smarting-up-the-pool/
20:10 leont joined #moarvm
20:26 timotimo queued an unlock vow; queue is 1276 long
20:27 timotimo hm, i wonder.
20:28 timotimo should an interval supply be able to "take back" its "i want to have access to this lock" when the corresponding interval tap gets cancelled?
20:42 timotimo so anyway, the lock:async used in the interval supply is accruing more and more contenders for acquisition
20:44 timotimo in theory perhaps the done called by one of the interval ticks is deadlocking with something and keeping the lock held that way and the program just continues piling up new ticks that want to execute their code
21:01 timotimo the only things competing for this lock are creating new taps on the interval supply and emitting values
21:04 timotimo hm. is the lock protect in method tap of interval for making sure the first interval is only emitted once the whole thing has been set up? but will cueing make you vulnerable to that?
21:08 timotimo well, removing it doesn't seem to fix anything
22:09 TimToady joined #moarvm
22:11 jnthn timotimo: Yes, there's a rule that we must always call the tap(...) callback before any emit(...) or other messages, which is why it holds the lock around the setup
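A minimal sketch of the invariant jnthn states, using a plain Lock for simplicity (the code under discussion uses Lock::Async); the names are illustrative, not Rakudo's actual internals:

    my $lock = Lock.new;
    my &deliver;                        # illustrative stand-in for the tap callback

    # tap setup runs under the lock, so no emit can be observed before it
    $lock.protect: {
        &deliver = -> $value { note "emitted $value" };
    }

    # an emit elsewhere takes the same lock, so it orders after the setup
    $lock.protect: {
        deliver(42) if &deliver;
    }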
22:12 jnthn Interesting about the queueing up...weird
22:16 timotimo of course any amount of debug output can mess up scheduling
22:17 jnthn Yeah, there's always that fun
22:17 jnthn But still, the done should run
22:17 jnthn And switch stuff off
22:17 timotimo it should
22:17 jnthn So nothing more piles up
22:17 jnthn It's very odd
22:18 timotimo i also put a print into the CATCH, that doesn't get called
22:18 timotimo so it doesn't exit it "sideways" via an exception without me noticing
22:18 timotimo btw, cool, i can book "intentful testing through domain events" for free! but it only lasts 0 days :(
22:18 jnthn There's always a catch with free things...
22:19 jnthn It seems that one of my supply guts changes has afflicted cro-websockets also :(
22:19 timotimo dang :(
22:20 jnthn I stresstested the darn thing against cro-http; didn't think to try that
22:20 jnthn "It handled 200,000 requests, it can't be that broken" :)
22:20 timotimo hah, on your 'site, "Perl 6" gets split into two lines twice in the bit about Cro
22:24 Zoffix .oO( all the more reason to rename... )
22:25 timotimo i really, really need a better way to visualize concurrent activities than printing to a console
22:26 timotimo exposing telemeh to userspace and building a graphical frontend for that would be one way to do that ...
22:26 jnthn Yeah, some kind of concurrency visualizer would be neat
22:26 jnthn But it'd really need to be Promise/Supply-aware
22:27 timotimo oh yes
23:22 geekosaur joined #moarvm
23:23 Geth ¦ MoarVM: 454e9148a2 | (Samantha McVey)++ | docs/collation.asciidoc
23:23 Geth ¦ MoarVM: Add future info to Collation docs. Language/natural sort
23:23 Geth ¦ MoarVM:
23:23 Geth ¦ MoarVM: Add information about how we would add language specific sorting, as
23:23 Geth ¦ MoarVM: well as a few details about implementing natural sorting (numeric sorting).
23:23 Geth ¦ MoarVM: review: https://github.com/MoarVM/MoarVM/commit/454e9148a2
