IRC log for #moarvm, 2017-02-27


All times shown according to UTC.

Time Nick Message
02:48 ilbot3 joined #moarvm
02:48 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
04:24 MasterDuke .tell jnthn i applied nine's patch and then changed the conditional to `if (bits > 63 || (!sign && bits == 63)) {`, but that broke a bunch of tests. i looked at some of the breaking tests in S02-types/int-uint.t and they seemed a little suspect, but this is really pushing the boundaries of stuff i'm confident in
04:24 yoleaux2 MasterDuke: I'll pass your message to jnthn.
04:28 MasterDuke .tell jnthn i then got distracted by the spesh bug i/Zoffix++ found with EVAL and div, so i'm not sure i'll come up with a solution anytime soon
04:28 yoleaux2 MasterDuke: I'll pass your message to jnthn.
04:31 MasterDuke .tell jnthn fyi, spesh/eval/div discussion starts around here https://irclog.perlgeek.de/perl6-dev/2017-02-26#i_14168641
04:31 yoleaux2 MasterDuke: I'll pass your message to jnthn.
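
Context for the conditional above: the question is when a libtommath bignum still fits in a native 64-bit integer. A minimal sketch of the boundary, a generic illustration rather than nine's actual patch, assuming mp_count_bits() returns the number of significant bits in the magnitude:

    #include <tommath.h>

    /* A positive value fits in int64_t iff its magnitude needs <= 63 bits
     * (the 64th is the sign bit); a negative one likewise, except for the
     * single 64-bit edge case -2^63, which this conservative check rejects. */
    static int fits_in_int64(mp_int *i) {
        return mp_count_bits(i) <= 63;
    }

The asymmetry MasterDuke is wrestling with is exactly this: 63 bits is the widest magnitude that always fits when signed, while an unsigned interpretation has one bit more to play with.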
06:48 ggoebel joined #moarvm
08:34 zakharyas joined #moarvm
08:44 pyrimidi_ joined #moarvm
08:53 brrt joined #moarvm
08:53 brrt good *, #moarvm
09:02 brrt so, long story short; i'm doing cycle breaking wrong
09:02 brrt i think i can do it better with a stack
09:03 brrt (what cycles are you talking about, brrt? long story, good for a blog post :-))
09:08 domidumont joined #moarvm
09:43 jnthn Bonus points if you include sample code formatted to look like a bicycle :P
09:43 yoleaux2 04:24Z <MasterDuke> jnthn: i applied nine's patch and then changed the conditional to `if (bits > 63 || (!sign && bits == 63)) {`, but that broke a bunch of tests. i looked at some of the breaking tests in S02-types/int-uint.t and they seemed a little suspect, but this is really pushing the boundaries of stuff i'm confident in
09:43 yoleaux2 04:28Z <MasterDuke> jnthn: i then got distracted by the spesh bug i/Zoffix++ found with EVAL and div, so i'm not sure i'll come up with a solution anytime soon
09:43 yoleaux2 04:31Z <MasterDuke> jnthn: fyi, spesh/eval/div discussion starts around here https://irclog.perlgeek.de/perl6-dev/2017-02-26#i_14168641
09:56 * brrt wonders how many characters that would require at minimum
09:58 brrt i've got something
09:58 jnthn https://probablydance.com/2017/02/26/i-wrote-the-fastest-hashtable/ # long read, but maybe interesting :)
09:58 brrt +_-_
09:58 brrt O -O
09:58 jnthn :D
09:58 jnthn Next question: is it valid BF? :)
09:59 brrt (bf?)
09:59 jnthn Brainf**k
09:59 brrt ah, i see
09:59 brrt no idea
09:59 jnthn I don't think so, though, 'cus O surely isn't in there :)
10:00 brrt that's unfortunate :-)
10:00 dogbert11 o/ jnthn how is your brane today, have you had any coffee?
10:00 jnthn Currently drinking coffee
10:00 dogbert11 :)
10:01 jnthn And looking at what I've got left to do from Friday's hacking :)
10:01 dogbert11 ah, the async lib stuff
10:01 jnthn aye
10:01 dogbert11 I only have boring gists
10:02 dogbert11 but this is quite short, this always turns up when running a p6 program, is it anything to worry about? https://gist.github.com/dogbert17/0200459af7ea31aea7acbfbf1ed2f6f1
10:22 dogbert11 and if the above isn't enough I have another question this time regarding a roast test file, i.e. https://github.com/perl6/roast/blob/master/S17-supply/supplier-preserving.t
10:24 dogbert11 the last test fails from time to time. I'm simply wondering if there might be some kind of race condition wrt writing to '@closings' on line 16?
10:44 brrt <scratchpad>
10:44 brrt so, what we can do is maintain a stack for the cycles
10:45 brrt traverse the reverse map and push the registers we need to process in a cycle
10:46 brrt then swap the start register value to a scratch value, and process the stack in lifo order (i.e. reverse the cycle), inserting swap registers
10:47 brrt then place the cycle-starter into its destination afterwards
10:48 brrt e.g. suppose we have cycle a -> b -> c -> a, we loop while (reverse_map[cursor] != start_point) { cycle_stack[cycle_stack_top++] = cursor; cursor = reverse_map[cursor] }
10:48 brrt which would build a, b, but not c
10:48 brrt so we need a slightly different ending criterion
10:51 * jnthn back
10:51 jnthn Power outage.
10:54 jnthn dogbert11: ooh, yes, looks like
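
The race dogbert11 suspects is the classic unsynchronized append: two threads pushing onto the same array can both read the old length and then race on the write. A toy C illustration of the hazard, not the roast test itself, with invented names:

    int closings[16];
    int closings_len = 0;

    /* Called from two threads at once, both may read the same
     * closings_len, store into the same slot, and lose an element,
     * so a count-based assertion fails only intermittently. */
    void push_closing(int v) {
        closings[closings_len] = v;   /* read-modify-write, not atomic */
        closings_len++;
    }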
11:02 nwc10 jnthn: interesting. Now, *I* am not volunteering (at least, not this calendar year) to transcribe that to C (well, the subset that "we" need)
11:02 nwc10 but I hope someone looks further into it
11:05 jnthn :)
11:05 jnthn Yes, I do drop stuff here now and then in hope of provoking somebody into writing code :-)
11:18 brrt heheeh
11:19 brrt anyway, i figured out my solution over lunch
11:19 brrt that is that in my example, 'a' needn't be on the stack at all, since it is handled specially
11:19 brrt b and c need to be on the stack.
11:20 brrt </scratchpad>
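
Pulling the scratchpad together, here is one way the scheme could look for a move cycle a -> b -> c -> a. All names (map, emit_move, MAX_REGS, the scratch register) are illustrative, not the JIT's actual internals: park the cycle head in a scratch register, walk the map pushing the remaining members, emit moves in LIFO order, then move the parked value home.

    /* map[src] = dst: where each register's value must end up. */
    static void break_cycle(int *map, int start, int scratch) {
        int stack[MAX_REGS];
        int top = 0;
        int cursor = map[start];

        emit_move(start, scratch);       /* park the cycle head's value    */
        while (cursor != start) {       /* collect b and c, but not a     */
            stack[top++] = cursor;
            cursor = map[cursor];
        }
        while (top > 0) {                /* LIFO: walk the cycle backwards */
            int reg = stack[--top];
            emit_move(reg, map[reg]);    /* c -> a, then b -> c            */
        }
        emit_move(scratch, map[start]);  /* finally, old a -> b            */
    }

Note the loop condition is `cursor != start` rather than `map[cursor] != start`; that is the "slightly different ending criterion" from above, and it keeps a off the stack while still collecting c.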
11:55 domidumont joined #moarvm
12:27 dogbert11 power outage, uh oh, usually means that the coffee machine is broken as well
12:33 jnthn Yeah, thankfully I'd just made one before it :)
12:53 domidumont joined #moarvm
13:03 * brrt wonders what would happen if all syscall except for one (wait()) were asynchronous and just stashed integers into a buffer
13:04 moritz brrt: error handling would be "fun"
13:04 brrt (also, you'd have to emulate all the blocking syscalls with a stash+wait(), which would convert your optimized, register-argument syscall into a random-memory-buffer-syscall, and they'd be slower)
13:04 brrt yeah, that too
13:04 brrt but you could write microkernels really nicely that way
13:07 brrt (in other words, what if it's just UNIX that sucks?)
13:07 brrt s/sucks/is not optimized for 2017/
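
For concreteness, the stash-and-wait model brrt is sketching might look like a pair of shared buffers plus one blocking call. Everything below is invented for illustration; no real kernel API is being described:

    /* Each asynchronous "syscall" fills a slot; the kernel writes the
     * integer result back; wait() is the only call that can block. */
    enum op { OP_READ, OP_WRITE, OP_OPEN /* ... */ };

    struct slot {
        enum op op;
        long    args[4];
        long    result;      /* filled in by the kernel when done */
    };

    struct ring {
        struct slot slots[256];
        unsigned    submitted;   /* advanced by the program */
        unsigned    completed;   /* advanced by the kernel  */
    };

    /* Sleep until the kernel has completed more slots than we have seen. */
    long wait(struct ring *r, unsigned seen);

moritz's point about error handling applies directly: every result is just an integer in a buffer, so errno-style context has to be encoded per slot.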
13:10 * brrt is reading AnyEvent code and… all of it would just be so much nicer in perl6
13:55 SmokeMachine joined #moarvm
14:11 lucasb joined #moarvm
14:29 pyrimidine joined #moarvm
14:36 dogbert11 jnthn: FYI t/spec/S17-promise/nonblocking-await.t has a tendency to hang on the third test if the nursery is small
14:37 lucasb left #moarvm
14:50 dogbert11 I suspect the test 'Hundreds of time-based Promises awaited completes correctly' of being the culprit
14:52 dogbert11 gdb says something about '79     __atomic_thread_fence(__ATOMIC_SEQ_CST);'
14:52 dogbert11 #0  AO_nop_full () at 3rdparty/libatomic_ops/src/atomic_ops/sysdeps/gcc/generic.h:79
14:52 dogbert11 #1  MVM_gc_enter_from_allocator (tc=tc@entry=0x86dc450) at src/gc/orchestrate.c:409
14:55 Ven joined #moarvm
14:55 timotimo that sounds like it's trying to start a gc run
14:55 timotimo but other threads aren't cooperating
14:56 timotimo i.e. it's waiting forever for other threads to say "yup, i'm ready"
14:56 dogbert11 aha
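
What timotimo is describing is a stop-the-world rendezvous: the collecting thread publishes its intent, then waits until every other thread has either reached a safepoint or marked itself blocked. A toy version of the handshake, using a bare atomic counter rather than MoarVM's real orchestration in src/gc/orchestrate.c:

    #include <stdatomic.h>

    atomic_int gc_requested;   /* set by the thread that wants to collect */
    atomic_int threads_ready;  /* incremented as other threads check in   */
    int        num_threads;

    /* Polled by every thread at safepoints, e.g. on allocation. */
    void safepoint(void) {
        if (atomic_load(&gc_requested)) {
            atomic_fetch_add(&threads_ready, 1);
            /* ... take part in the collection, then carry on ... */
        }
    }

    /* The initiator spins here; a thread that neither reaches a safepoint
     * nor marks itself blocked hangs this loop forever, which is the
     * symptom in the backtrace above. */
    void wait_for_rendezvous(void) {
        while (atomic_load(&threads_ready) < num_threads - 1)
            ;  /* a real VM would yield or use a futex instead */
    }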
14:56 timotimo it'd be interesting to see what the other threads are doing at that moment
14:56 timotimo can you "thread apply all bt"?
14:56 dogbert11 sec
14:58 dogbert11 https://gist.github.com/dogbert17/5d29fee33037848e993c3fc5cf6fb132
14:58 timotimo oh, heh.
14:58 timotimo can you re-run with JIT disabled?
14:59 dogbert11 do I have JIT on 32 bit?
14:59 brrt no, you should not
14:59 timotimo you normally do not
14:59 timotimo unless you've been hiding something from us :)
14:59 dogbert11 :-)
14:59 brrt (the 'fastest hash table' article is very nice, jnthn++)
14:59 timotimo so why is the backtrace so shite for threads 2 through 6?
14:59 dogbert11 moar is built with optimizations though
15:00 brrt hmm, could you try to replicate it without optimizations?
15:00 timotimo like ... frame pointer omission?
15:00 dogbert11 can do, I also attached gdb after the hang
15:01 brrt FPO seems like a likely candidate yes
15:01 timotimo attaching it afterwards should be okay
15:01 brrt yeah, that's no problem usually
15:01 dogbert11 rebuilding ...
15:02 lizmat joined #moarvm
15:03 dogbert11 done, refresh the gist
15:04 brrt okay, that looks interesting
15:13 dogbert11 thread 5 seems to be involved in some GC shenanigans as well
15:25 timotimo it doesn't help that the line number in eventloop_queue_work is just the start of the { } inside the MVMROOT
15:25 timotimo >:(
15:26 timotimo and that the interesting frame is just "??"
15:26 timotimo i wonder if get_or_vivify_loop has trouble?
15:26 timotimo okay imagine this scenario:
15:27 dogbert11 'Backtrace stopped: previous frame inner to this frame (corrupt stack?)'
15:27 dogbert11 that sounds a bit ominous
15:27 timotimo two threads try to vivify the loop at the same time, so they block on the mutex_event_loop_start
15:27 timotimo no, i mean
15:27 timotimo one blocks, the other proceeds into the critical section
15:27 timotimo but inside the critical section it has to GC
15:27 timotimo i think we just need to mark the thread as blocked inside get_or_vivify_loop
15:27 timotimo i'll whip up a patch
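
The shape of the fix timotimo has in mind (a sketch, not the actual patch in the gist a few lines down, and assuming the mutex lives on the instance as mutex_event_loop_start, per the discussion above): bracket the lock acquisition with the blocked/unblocked markers, so a collection never waits on a thread that is itself waiting on the lock.

    MVM_gc_mark_thread_blocked(tc);       /* GC can proceed without us   */
    uv_mutex_lock(&tc->instance->mutex_event_loop_start);
    MVM_gc_mark_thread_unblocked(tc);     /* may pause here to join a GC */
    /* ... vivify the event loop inside the critical section ... */
    uv_mutex_unlock(&tc->instance->mutex_event_loop_start);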
15:28 dogbert11 ++timotimo
15:28 timotimo (except i see no reason for the eventloop to not exist already by that point)
15:29 timotimo https://gist.github.com/timo/66551554fd1c5dc02ece46d04fab155a - try this please
15:31 dogbert11 will do, gimme a few minutes ...
15:36 brrt joined #moarvm
15:41 dogbert11 timotimo: fail, refresh the gist again
15:42 timotimo huh
15:44 dogbert11 should I double-check if I applied your patch correctly?
15:44 timotimo i'm just surprised it's in serialization_demand_object now
15:45 dogbert11 it worked the first three times and then hung
15:45 timotimo mhm
15:45 timotimo what if you do the same thing i did in the patch, except around serialization.c:2720?
15:45 pyrimidine joined #moarvm
15:47 dogbert11 will try
15:49 dogbert11 MVMSerializationReader *sr = sc->body->sr;
15:49 dogbert11 MVM_gc_mark_thread_blocked(tc);
15:49 dogbert11 MVM_reentrantmutex_lock(tc, (MVMReentrantMutex *)sc->body->mutex);
15:49 dogbert11 MVM_gc_mark_thread_unblocked(tc);
15:51 dogbert11 timotimo: my change must be incorrect, got 'MoarVM panic: Invalid GC status observed while blocking thread; aborting'
15:53 jnthn MVM_reentrantmutex_lock itself calls MVM_gc_mark_thread_blocked, I believe
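
That explains the panic: MVM_gc_mark_thread_blocked transitions the thread's GC status from running to blocked, and doing it twice, once by hand and once inside MVM_reentrantmutex_lock, makes the second transition see a status it does not expect. Schematically:

    MVM_gc_mark_thread_blocked(tc);    /* status: running -> blocked      */
    MVM_reentrantmutex_lock(tc, rm);   /* marks blocked again internally: */
                                       /* "MoarVM panic: Invalid GC status
                                          observed while blocking thread" */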
15:55 dogbert11 have you backlogged what we're trying to do?
15:56 timotimo oh!
15:56 * dogbert11 removes the second fix in the meantime :)
16:00 dogbert11 so, there's still a problem then
16:08 pyrimidine joined #moarvm
16:27 lizmat_ joined #moarvm
16:31 Ven joined #moarvm
16:33 lizmat joined #moarvm
17:02 Ven joined #moarvm
17:57 lizmat joined #moarvm
18:01 Ven joined #moarvm
18:21 Ven joined #moarvm
18:36 pyrimidine joined #moarvm
18:41 Ven joined #moarvm
18:44 Ven_ joined #moarvm
19:18 pyrimidine joined #moarvm
19:21 lizmat joined #moarvm
19:22 dogbert17 timotimo: looks as if we got stuck with t/spec/S17-promise/nonblocking-await.t. Should I report it?
19:51 timotimo yeah, please do
19:51 samcv morning all
20:01 Ven joined #moarvm
20:05 jnthn o/, samcv
20:05 jnthn dogbert17: Will have time to look into some of those issues tomorrow
20:11 dogbert17 jnthn, timotimo: https://github.com/MoarVM/MoarVM/issues/543
20:13 jnthn Hm, interesting one
20:14 dogbert17 haven't seen it fail when the nursery is at its default size
20:17 pyrimidine joined #moarvm
20:18 jnthn Suspect it can happen, but just unlikely to
20:18 jnthn Which is why we nursery stress :)
20:19 dogbert17 indeed :)
20:32 puddu joined #moarvm
20:35 puddu .tell bttr UNIX is not optimized for 2017? What if all syscalls were synchronous apart from one, clone, which is also synchronous but lets you implement any kind of multiprocessing you want?
20:35 yoleaux2 puddu: I'll pass your message to bttr.
21:38 agentzh joined #moarvm
22:03 agentzh joined #moarvm
22:14 pyrimidine joined #moarvm
22:21 pyrimidine joined #moarvm
22:59 pyrimidine joined #moarvm
