Perl 6 - the future is here, just unevenly distributed

IRC log for #moarvm, 2017-03-02

All times shown according to UTC.

Time Nick Message
00:11 pyrimidine joined #moarvm
00:35 pyrimidine joined #moarvm
01:05 dogbert17_ joined #moarvm
01:16 nebuchadnezzar joined #moarvm
02:22 samcv been gone for a while. gonna get back to work on my UCD rewrite
02:22 samcv gotta get back in the groove
02:22 samcv that looks cool MasterDuke
02:23 MasterDuke samcv: yeah, looking at the data from just doing `perl6 -e ''` now
02:24 MasterDuke not really much i can do with it, but seems like it would be useful
02:25 samcv for certain things
02:29 samcv ok at least i got this bug in the point index golfed down to a much smaller size. was making me frustrated so took a break for a week or so
02:31 MasterDuke this month is when you can resubmit your grant, right?
02:31 samcv yeah
02:31 samcv it's due by mid-March, so gonna submit it in a week
02:32 samcv or maybe sooner
02:32 MasterDuke cool
02:48 ilbot3 joined #moarvm
02:48 Topic for #moarvm is now https://github.com/moarvm/moarvm | IRC logs at  http://irclog.perlgeek.de/moarvm/today
03:18 samcv nice, fixed that problem :)
03:18 samcv a fresh mind can do one good
06:37 brrt joined #moarvm
06:47 pyrimidine joined #moarvm
07:43 domidumont joined #moarvm
07:44 domidumont joined #moarvm
07:49 brrt joined #moarvm
07:51 brrt good *
08:19 pyrimidi_ joined #moarvm
08:26 brrt did i share the v8 turbofan post yet...
08:26 brrt no i didn't
08:26 brrt http://benediktmeurer.de/2017/03/01/v8-behind-the-scenes-february-edition/
08:27 nwc10 I have a slow rsync that's blocking me.
08:27 nwc10 Something to read is useful
08:27 nwc10 (thanks)
08:27 brrt :-)
08:27 brrt minor contribution to #perl6 in march; unlocked
08:28 zakharyas joined #moarvm
08:28 brrt (if you click on 'publications', you'll see the work of someone who actually knows what he's doing..)
08:29 brrt joined #moarvm
08:30 domidumont joined #moarvm
08:34 domidumont joined #moarvm
09:09 domidumont joined #moarvm
09:11 domidumont joined #moarvm
09:36 vendethiel joined #moarvm
10:11 jnthn heaptrack does look worth a try :)
10:13 * dogbert17_ hates the decoderstream bug :(
10:14 * dogbert17_ grr :)
10:14 brrt bugs are the way to enlightenment :-)
10:15 dogbert17_ well, it's difficult to reproduce, and when it does show up there doesn't seem to be any other thread doing anything with it
10:16 dogbert17_ which leads me to my first stupid question of the day ...
10:16 brrt jnthn has historically had some success with inserting debug tracing macros everywhere something is touched
10:16 brrt i personally have had much success with printf debugging
10:16 brrt printf debugging is, all things considered, pretty awesome, if done well
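
A minimal sketch of the tracing-macro approach mentioned above. TRACE is a made-up name, the `tc` parameter stands in for MoarVM's thread-context pointer, and `##__VA_ARGS__` assumes a GNU-compatible compiler; this is not an actual MoarVM macro:

    #include <stdio.h>

    /* Hypothetical tracing macro: stamps each message with the source
     * location and the thread context it came from. */
    #define TRACE(tc, fmt, ...) \
        fprintf(stderr, "[%s:%d tc=%p] " fmt "\n", \
                __FILE__, __LINE__, (void *)(tc), ##__VA_ARGS__)

    /* usage sketch: TRACE(tc, "decoder claimed, in_use=%d", in_use); */

Sprinkled at every site that touches the suspect data, the source location plus the `tc` pointer in each line is usually enough to reconstruct the interleaving after the fact.
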
10:17 dogbert17_ indeed, I'm planning to use one of them presently to look for another bug :)
10:18 dogbert17_ is MVM_decoder_ensure_decoder in Decoder.c really the only way to access 'decoder->body.in_use' ?
10:19 dogbert17_ or might there be code somewhere else in MoarVM which has the ability to do that?
10:20 brrt git grep...
10:20 brrt :-)
10:20 jnthn It's tweaked by the enter/exit functions, which are only called within Decoder.c
10:23 dogbert17_ I tripped the bug last night, could not find any other thread that was inside Decoder.c when looking at the backtraces
10:24 dogbert17_ otoh, I still suffer from this Python exception so I'm not able to see all backtraces :(
10:25 jnthn That's...odd
10:25 jnthn That said, the other thread could have just left it
10:25 jnthn By the time the exception throw happened
10:28 dogbert17_ the two other threads were doing gc stuff and one was calling something called A0_swap... don't have access to the info since it's on my comp at home
10:28 jnthn ok
10:32 dogbert17_ on the other hand, shouldn't your array debug flag be useful for https://rt.perl.org/Public/Bug/Display.html?id=130796 ?
10:33 dogbert17_ hmm, is that the right RT? I remember AT_POS being involved.
10:34 dogbert17_ ah yes 'my \a = lazy loop { 1 }; my \b = lazy loop { 1 }; for ^100000 -> $i { await start { a.AT-POS($i) }, start { b.AT-POS($i) } }'
10:42 dogbert17_ jnthn: take a look at this if you have the time, https://gist.github.com/dogbert17/3760aa6be25110e344d6138905085f61
10:51 jnthn Yeah, those look a bit like what the cross-thread write log was suggesting
10:52 jnthn "good" that MVM_ARRAY_CONC_DEBUG agrees
10:53 dogbert17_ jnthn: would it be useful if I put a gdb breakpoint in the debug code in order to see what the other threads are up to
10:53 dogbert17_ or does the gist give enough info
10:54 jnthn I...suspect it's going to be tricky to get more data
10:54 dogbert17_ uh oh
10:54 jnthn As in, possible, but...
10:54 jnthn Not without exploring the lexicals in the various frames
10:54 jnthn At some point, we get contexts in a tangle
10:55 jnthn It wasn't at all clear to me where, however
10:55 dogbert17_ so gdb is not the right tool in this case then?
10:56 jnthn Well, it is, you just have to go poking into the right things
10:56 jnthn I dunno if timotimo's debug plugin can dump addresses etc. of lexicals
10:57 dogbert17_ we'll ask him when he shows up
10:57 timotimo do it
10:57 jnthn What we'd need is to catch two threads both inside of the guarded code
10:57 jnthn And then look at the backtraces
10:58 jnthn And then, look at the Perl 6 source, and look at the various lexicals, outers, etc. to try and figure out where we ended up with the incorrect sharing
10:58 jnthn Which will in turn give us a hint where to look for...probably broken code-gen, but it wasn't obvious when I looked where that might be
10:59 dogbert17_ that sounds difficult
10:59 jnthn tedious to say the least
10:59 timotimo well ... no, the debug plugin doesn't let you introspect frames and such yet
10:59 dogbert17_ all I can add atm is that --optimize=0 does not help a bit
10:59 timotimo it's also quite possible that it needs refreshed
11:00 timotimo and oh my god does the gdb python plugin bridge need fastened ... or at least needed when i last usened it
11:00 dogbert17_ o/ timotimo
11:03 timotimo brrt: we can get at info about the current jit-related frames we're in by accessing the threadcontext, right?
11:04 timotimo that could also be an interesting thing to have in the gdb plugin
11:13 dogbert17_ timotimo, jnthn: FWIW, posted some gdb output at https://gist.github.com/dogbert17/ce4ca774defb537724403383f8f46da8
11:18 timotimo is that a segfault or something?
11:20 dogbert17_ no, gdb has stopped at a debug macro that jnthn put in VMArray.c, looking for several threads accessing the same array I believe
11:20 timotimo ah, so the enter_single_user thing is the one that aborted?
11:20 dogbert17_ yep
11:21 timotimo yeah, thread 2 and 7 are conflictingly pushing to 0xa2b67c8
11:21 timotimo can you get a MVM_dump_backtrace from those threads?
11:21 dogbert17_ ofc :)
11:25 dogbert17_ gah, messed it up, gimme a few secs ...
11:26 dogbert17_ I got one of the traces (thread 2) then gdb acted up
11:28 timotimo yeah it can do that sometimes
11:28 timotimo perhaps the stack trace dumper crashed?
11:30 brrt yeah, we can
11:32 timotimo thing is, in C you can't easily ask "if i access this pointer, will i crash?"
11:33 dogbert17_ when I do the first 'p MVM_dump_backtrace(tc)' everything works fine, but then I can't get the other. It's as if the code has continued running
11:36 dogbert17_ check this gist so you can see what happens: https://gist.github.com/dogbert17/ce4ca774defb537724403383f8f46da8
11:42 timotimo well, it has to run code in order to do dump_backtrace
11:43 brrt timotimo: you can fork(), and in the child, set a signal handler for SEGV, exit with a specific code; otherwise exit with 0; and wait in the parent
11:43 timotimo m)
11:43 timotimo that'll go over very well when you're doing that for a stack trace of like 10 lines
11:44 dogbert17_ sounds advanced
11:44 timotimo but yeah, it's a little suspicious that the threads seem to move their IPs when you're dumping the stacktrace
11:45 timotimo we don't have a good system in place to "freeze all other threads, because this ship is going down"
11:46 brrt int is_this_pointer_safe(void *p) { int child; if (child = fork()) {  int status; waitpid(child, &status, 0); return status == STATUS_SEGV; } else { signal(SIGSEGV, exit_with_error_code); char foo = ((char*)p)[0]; exit(0); } int exit_with_error_code(int error) { exit(STATUS_SEGV); }
11:46 brrt no, because that's really hard in general
11:46 timotimo fantastic, that almost fits into a tweet
11:46 brrt well, the waitpid would want some error checking
11:47 brrt never let anybody tell you C is not effective...
11:47 brrt oO("That doesn
11:47 brrt .oO("That doesn't look safe. Can you rewrite that in Rust?")
11:47 agentzh joined #moarvm
11:48 brrt and the == needs to be !=, obviously
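
Expanded from brrt's one-liner with his two follow-up corrections (error checking around waitpid, and the == flipped to !=) folded in; STATUS_SEGV is an arbitrary made-up exit code, and the whole thing is a sketch of the fork-and-probe idea, not anything in MoarVM:

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STATUS_SEGV 42  /* arbitrary child exit code meaning "it faulted" */

    static void exit_with_error_code(int sig) {
        (void)sig;
        _exit(STATUS_SEGV);             /* async-signal-safe exit */
    }

    int is_this_pointer_safe(void *p) {
        pid_t child = fork();
        if (child < 0)
            return 0;                   /* fork failed; call it unsafe */
        if (child) {                    /* parent: wait and inspect */
            int status;
            if (waitpid(child, &status, 0) < 0)
                return 0;
            return !(WIFEXITED(status) && WEXITSTATUS(status) == STATUS_SEGV);
        }
        else {                          /* child: try the dereference */
            volatile char foo;
            signal(SIGSEGV, exit_with_error_code);
            foo = *(volatile char *)p;
            (void)foo;
            _exit(0);
        }
    }

As timotimo notes, a fork per probed pointer is far too heavy for routine use; it only makes sense as a last-ditch debugging aid.
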
11:50 dogbert17_ I have a feeling (no proof obviously) that the backtraces would look identical
11:51 timotimo for both conflicting threads, you mean?
11:51 dogbert17_ yes
11:51 timotimo could be
12:10 Ven joined #moarvm
12:40 Ven joined #moarvm
13:03 dogbert17_ so, do we have to leave this in jnthn's capable hands, dunno what else I can do at this point
13:04 * brrt is a bit sad about that state of affairs
13:05 dogbert17_ so am I
13:08 dogbert17_ so there are two nasty bugs atm, I have an idea I would like to test with the decoderstream one
13:11 dogbert17_ want to add another field to 'MVMDecoderBody' locally, I want to save some extra info when calling 'enter_single_user'
13:12 * dogbert17_ checks if it is possible to get hold of thread id from a 'tc'
13:13 timotimo probably through tc->thread_obj or what it's called
13:13 * timotimo very recently used that code for the vmhealth op
13:13 timotimo indeed the thread object has a native_thread_id
13:13 timotimo but also a thread_id
13:14 dogbert17_ what's the difference
13:14 timotimo one is probably something like the PID, the other is a number from 0 to num_threads_spawned_so_far?
13:16 dogbert17_ cool, maybe it's useless but I want to see which thread has already entered enter_single_user when the next one comes along
13:17 dogbert17_ so I figured I could save the thread id in the MVMDecoderBody
13:17 timotimo sure
13:18 dogbert17_ so extending the struct hopefully won't cause any problems
13:18 timotimo it shouldn't
13:18 timotimo we consistently use "sizeof" for things
13:18 timotimo and offsetof or what it's called
13:19 dogbert17_ cool, if I hit the bug I intend to get the 'saved' thread id and dump the backtraces from that specific thread. does that sound feasible?
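
Roughly what dogbert17_ is proposing, as a sketch: one extra debug-only field in MVMDecoderBody (the name dbg_owner_tid is made up) plus a line in Decoder.c's enter_single_user recording the claiming thread. The trycas guard is the one quoted below in this discussion, and the thread_id access follows timotimo's description of the thread object; none of this is the actual source:

    /* added to MVMDecoderBody (hypothetical field): */
    MVMuint32 dbg_owner_tid;    /* thread that last claimed the decoder */

    /* in Decoder.c (sketch, not the actual source): */
    static void enter_single_user(MVMThreadContext *tc, MVMDecoder *decoder) {
        if (!MVM_trycas(&(decoder->body.in_use), 0, 1))
            MVM_exception_throw_adhoc(tc,
                "Decoder claimed concurrently; last claimed by thread %d",
                (int)decoder->body.dbg_owner_tid);
        decoder->body.dbg_owner_tid = tc->thread_obj->body.thread_id;
    }
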
13:19 timotimo hm
13:20 timotimo well, if you're not on the thread that's trying to dump_backtrace, you'll be accessing its stack while it's running
13:20 timotimo i'd do it a different way, actually
13:20 dogbert17_ tell me more
13:20 timotimo how about: when the user of the VMArray marks it "i'm not using it any more", it also checks for a flag to see if "someone else entered in the mean time"
13:21 timotimo use MVM_Load, MVM_Store, and whatever the CAS instruction is called to make sure things work atomically
13:21 dogbert17_ CAS, Check And Set?
13:22 timotimo compare and swap
13:22 dogbert17_ damn :)
13:22 timotimo oh, it's actually called something more like trycas
13:22 timotimo MVM_trycas
13:22 dogbert17_ if (!MVM_trycas(&(decoder->body.in_use), 0, 1)) ...
13:23 timotimo i'm not sure yet if any stage of this needs to trycas
13:23 timotimo but also, we'll essentially be imposing a gigantic overhead here %)
13:23 dogbert17_ I stole that from Decoder.c
13:24 dogbert17_ but I think it was only added temporarily in order to find the bug
13:24 timotimo oh!
13:24 timotimo anyway, what you could do is a little dance
13:25 timotimo have the whole thing in phases. phase one, Thread A pushes into the array, phase two, Thread B pushes into array, phase three, Thread A prints out its stacktrace, phase four, Thread B prints out its stacktrace
13:26 dogbert17_ nifty
13:27 dogbert17_ will give it a shot
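
A minimal sketch of that four-phase dance, using C11 atomics as a stand-in for MoarVM's own atomic ops (the MVM_load/MVM_store/MVM_trycas family mentioned above); everything here is illustrative debugging scaffolding, not production code:

    #include <stdatomic.h>

    static atomic_int phase;    /* 0: A pushes, 1: B pushes,
                                   2: A dumps,  3: B dumps */

    static void wait_for_phase(int want) {
        while (atomic_load(&phase) != want)
            ;                   /* busy-wait; acceptable in a debug build */
    }

    void thread_a_side(void) {  /* first thread into the guarded region */
        wait_for_phase(0);
        /* ...push to the array... */
        atomic_store(&phase, 1);
        wait_for_phase(2);
        /* MVM_dump_backtrace(tc_a); -- dump while B is parked */
        atomic_store(&phase, 3);
    }

    void thread_b_side(void) {  /* second, conflicting thread */
        wait_for_phase(1);
        /* ...push to the array... */
        atomic_store(&phase, 2);
        wait_for_phase(3);
        /* MVM_dump_backtrace(tc_b); */
    }

The point of the turn-taking is that each backtrace gets dumped while the other thread is parked, avoiding the problem discussed earlier of reading a stack that is still running.
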
13:28 timotimo i'd say "read the GC's orchestrate code to see how to do such a thing", but that'd be like throwing someone into boiling oil in order to teach them how to swim?!
13:29 brrt joined #moarvm
13:47 dogbert17_ slightly OT, the Ryzen NDAs are supposedly lifted today
14:01 zakharyas joined #moarvm
14:15 dogbert17_ timotimo: do you want a good laugh?
14:15 timotimo i could use a good laugh, yeah
14:16 dogbert17_ https://gist.github.com/dogbert17/aac8cc2a96d75dee1cb1573bf26c1388
14:17 dogbert17_ I'm hoping to dump backtraces of both threads
14:18 timotimo that'll definitely run into trouble
14:18 dogbert17_ I suspected as much :)
14:18 timotimo if you don't somehow prevent arr->tc from just leaving the single-user part of the code and going along on its merry way
14:19 dogbert17_ my theory is that arr->tc will be in the single user part
14:20 dogbert17_ that's why I want to dump a backtrace of that one first
14:21 timotimo it will, but please consider how short the single user part is
14:21 timotimo even if both threads enter the single-user part really, really close to each other, there's a whole lot more memory traffic involved in the backtrace dumping
14:22 timotimo getting all the filenames, converting them to utf8, and finally printing them out
14:22 timotimo though to be fair, you're going to panic right after that anyway
14:22 timotimo so might as well, i guess
14:22 dogbert17_ can I somehow freeze a thread then?
14:23 timotimo we're looking to implement a "throw an exception at this other thread" in the future
14:23 timotimo but until then ... no
14:24 timotimo "can i halt/stop/break/kill thread 1234?" has a long tradition of being asked
14:24 timotimo the short answer is: "not unless you implement it yourself somehow"
14:24 dogbert17_ gah :)
14:24 timotimo i'm sorry ;(
14:25 dogbert17_ well, at least it compiles, now I guess it's SEGV time :)
14:30 timotimo the dump might segv, or it might just give you "tearing" ;)
14:31 dogbert17_ heh, atm I don't even hit my breakpoint, it's as if the bug has disappeared
14:36 timotimo maybe we made the concurrency control around the thing so slow that it effectively makes our code unable to enter the same code twice at the same time?
14:37 dogbert17_ timotimo: https://gist.github.com/dogbert17/aac8cc2a96d75dee1cb1573bf26c1388
14:38 dogbert17_ perhaps it is bs
14:38 timotimo it could very well be that it reaches different code before the dump can do its thing
14:39 dogbert17_ very true, am surprised that it didn't just asplode :)
14:41 dogbert17_ if I don't run this through gdb (with breakpoints) then both backtraces are identical
14:41 FROGGS joined #moarvm
14:42 dogbert17_ oh FROGGS
14:43 FROGGS hi
14:43 yoleaux2 19 Jan 2017 14:30Z <brokenchicken> FROGGS: how come no test for https://rt.perl.org/Ticket/Display.html?id=127671#ticket-history ?
14:44 dogbert17_ hi, do you have time for a stupid question?
14:45 FROGGS .tell brokenchicken the test would involve creating directories with invalid names...what if one would not be able to delete these? # RT#127671
14:45 yoleaux2 FROGGS: I'll pass your message to brokenchicken.
14:45 synopsebot6 Link:  https://rt.perl.org/rt3//Public/Bug/Display.html?id=127671
14:45 FROGGS dogbert17_: go ahead
14:45 dogbert17_ FROGGS, do you recognize this code https://github.com/MoarVM/MoarVM/blob/master/src/core/nativecall_dyncall.c#L7
14:46 dogbert17_ should there really be two lines like this? 'else if (strcmp(cname, "stdcall") == 0)'
14:46 dogbert17_ lines 13 and 19
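
The shape of the problem: in an if/else-if chain the first matching test wins, so the second identical strcmp branch is dead code. A minimal illustration (not the actual nativecall_dyncall.c source, and which calling convention the second branch was meant to test is not established here):

    #include <string.h>

    int classify(const char *cname) {
        if (strcmp(cname, "cdecl") == 0)
            return 1;
        else if (strcmp(cname, "stdcall") == 0)  /* this branch matches... */
            return 2;
        else if (strcmp(cname, "stdcall") == 0)  /* ...so this one never can */
            return 3;
        return 0;
    }
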
14:49 * geekosaur wonders if yoleaux2 has support for tracking le zoff's latest nick...
14:50 dogbert17_ didn't he go back to IOninja
14:50 geekosaur yeh, but that means msg to brokenchicken will hangfire :)
14:51 dogbert17_ indeed
14:51 FROGGS dogbert17_: yeah, spotted this in the past too
14:52 dogbert17_ FROGGS: wasn't sure if it was a bug or a clever feature :)
14:52 brokenchicken .
14:52 yoleaux2 14:45Z <FROGGS> brokenchicken: the test would involve creating directories with invalid names...what if one would not be able to delete these? # RT#127671
14:52 synopsebot6 Link:  https://rt.perl.org/rt3//Public/Bug/Display.html?id=127671
14:52 FROGGS dogbert17_: very much bug
14:54 timotimo it's froggs \o/
14:54 * dogbert17_ has no idea how to fix that bug
14:55 IOninja FROGGS: fair enough, though I'd go with creating maybe `make risky-test` target that would run these risky tests and we could run only in environments that wouldn't mind being trashed. There are some other tickets with nul bytes and diacritics on `/` in paths, so there's plenty to fill that category of tests, I think
14:55 timotimo you're never unable to delete a directory
15:04 FROGGS well, sounds like we want to add a test then :o)
15:07 jnthn It's FROGGS \o/ :)
15:08 jnthn FROGGS: I think it was you who figured out the Windows sha-256 powershell trick to validate downloaded binaries in modules? :)
15:08 * jnthn used that today :)
15:12 dogbert17_ jnthn, you're right, it seems as if the backtraces from the two threads mucking with the same array are identical
15:14 FROGGS jnthn: no, wasn't me...
15:14 jnthn FROGGS: Ah, OK :)
15:14 * FROGGS has no powershell knowledge
15:14 jnthn Maybe LibraryMake was you? I thought you'd done some Windows NativeCall module thingy :)
15:14 FROGGS no :o)
15:15 jnthn Hm, OK :)
15:15 FROGGS I did Inline::C
15:15 jnthn Ah, right :)
15:16 jnthn OK, then you're probably the right person to ask if I should be worried about proliferation of things like https://github.com/jnthn/p6-ssh-libssh/blob/master/Build.pm
15:17 jnthn uh
15:17 jnthn probably *not*
15:17 jnthn bah, shouldn't try to talk and write native bindings at the same time :)
15:18 FROGGS ohh, is panda still a thing? :o)
15:18 jnthn Well, zef is certainly the more popular choice, but even it - afaict - emulates the Panda Build.pm support :)
15:19 FROGGS I see
15:19 jnthn Anyway, there's something like this done across a number of modules now
15:20 jnthn (Host dlls, grab/verify them, install as resource)
15:22 FROGGS yeah, that's what I did in the past too, (Alien::* modules for Perl 5)
15:23 jnthn Ohh...so there's a chance we *did* talk about this in the past and I wasn't being entirely confused :)
15:28 FROGGS hehe
15:34 pyrimidine joined #moarvm
15:49 agentzh joined #moarvm
16:10 brrt joined #moarvm
16:15 pyrimidine joined #moarvm
16:21 [Coke] FROGGS: you get  my RT mail?
16:23 FROGGS [Coke]: I did
16:47 jnthn *sigh* The whole sync I/O + threads thing really needs sorting out :S
16:47 * jnthn just ran into it again
16:48 timotimo yeah :|
16:48 timotimo we have a guarantee that an IO object only ever gets used async or sync, right?
16:49 jnthn Yeah, though we may want a general .async coercer on handles
16:49 jnthn Which doesn't transport the handle itself, just the descriptor, which I think we can make happen.
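
At the C level, the descriptor-transport idea might look something like the following; dup() is just one plausible mechanism, and none of this is actual MoarVM code:

    #include <unistd.h>

    /* Give the async side its own OS-level descriptor rather than
     * sharing the handle object; the two fds share the open file
     * description but can be owned and closed independently. */
    int descriptor_for_async(int sync_fd) {
        return dup(sync_fd);    /* -1 on failure; check errno */
    }
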
16:50 jnthn Maybe I should do some weeks of IO guts improvements, to go with IOninja's API-level ones :)
16:50 jnthn Or maybe that'll just mean we don't know whose changes are to thank for any problems :D
16:50 timotimo maybe i could break my period of doing fuck-all and try to do a bit of that for you
16:51 jnthn Well, we kinda need a strategy for attacking it.
16:51 timotimo no doubt
16:51 jnthn I was gonna start with sockets, fwiw
16:51 timotimo just stashing a FILE * and using stdio may not be enough
16:51 jnthn No
16:52 jnthn I'd like to do what we've done with async I/O though
16:52 jnthn That is, have decoding coordination pushed up to Perl 6 level
16:52 timotimo yes, that'd be nice
16:52 jnthn So we only have to do binary level I/O in MoarVM
16:52 jnthn So that was gonna be my first step with sockets
16:52 jnthn Meaning the re-work would only be needed at byte level
16:53 timotimo okay, the non-byte stuff gets kept, but deprecated
16:53 jnthn Yeah
16:53 jnthn Or, well
16:53 jnthn Maybe not if we would have to rewrite it only to toss it :S
16:54 timotimo i thought that's what i suggested
16:55 jnthn The way it's structured now, though, would entail rewriting bits of the char-level I/O along the way
16:55 jnthn Like, we can't just rewrite half of it to not use libuv for, say, sockets
16:55 jnthn Because then the rest won't compile any more
16:55 timotimo hm, right, a clean cut wouldn't do it
16:56 jnthn But on deprecation things, I don't actually think we need to care much about having a "deprecation cycle"
16:56 jnthn Our users are...NQP and Rakudo
16:56 jnthn We don't have the time to do half of what we need to do. No way do we have time to not break things for users who might hypothetically exist :P
16:57 timotimo fair enough
16:57 mst trying to pretend moarvm is a generic platform leads one down a path that ends up pining for the fjords at best
16:58 jnthn mst: Yes, if it becomes one it'll only be the same way the JVM did: because people started compiling other stuff to it, not because it was designed for it.
16:58 jnthn The main API compat that matters for us is the stuff that Inline::Perl6 uses :)
16:59 nine Which thankfully is not all that much
16:59 yoleaux2 23 Feb 2017 19:27Z <IOninja> nine: should we move away from sha1 for modules; RE: https://shattered.io/ ? Looks like we can even mod our sha1 with the collision detector thing: https://github.com/cr-marcstevens/sha1collisiondetection not sure what the performance impact is tho
16:59 jnthn nine: No, though I still managed to break that once ;)
16:59 nine once is like never ;)
17:00 jnthn I like that more than "infinitely more than never" :P
17:01 nine IOninja: we probably should move away from sha1 in the long term, though I see no reason for panic. There are still much more important issues to fix.
17:02 IOninja Agreed.
17:06 timotimo on the other hand it'd be like ... one sed -e s/.../.../ and grabbing a different library via git submodule?
17:09 IOninja Maybe :) I've no idea.
17:10 timotimo the API for any hash is usually just "init some struct", "feed in buffer after buffer with length", "grab the output buffer (with a fixed length, usually)"
17:11 timotimo sometimes you have hashes where you can ask for a number of iterations to be performed for you
17:11 timotimo i think bcrypt works like that
17:13 IOninja yeah
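
The three-step shape timotimo describes, shown against the OpenSSL SHA-1 interface as one concrete instance of the pattern (assumes OpenSSL headers and -lcrypto; any init/update/final hash API looks much the same):

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        const char *chunks[] = { "feed in ", "buffer after buffer" };
        unsigned char digest[SHA_DIGEST_LENGTH];   /* fixed-size output */
        SHA_CTX ctx;
        size_t i;

        SHA1_Init(&ctx);                       /* "init some struct" */
        for (i = 0; i < 2; i++)                /* "feed in buffer after buffer" */
            SHA1_Update(&ctx, chunks[i], strlen(chunks[i]));
        SHA1_Final(digest, &ctx);              /* "grab the output buffer" */

        for (i = 0; i < SHA_DIGEST_LENGTH; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }

Swapping in a different digest mostly means renaming the three calls and resizing the output buffer, which is why the sha1 replacement reads as near-mechanical in the discussion above.
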
17:16 agentzh joined #moarvm
17:22 pyrimidine joined #moarvm
17:28 pyrimidine joined #moarvm
17:30 nine timotimo: sounds like a good project for some newcomer then :)
17:30 timotimo cool.
17:42 domidumont joined #moarvm
17:49 domidumont joined #moarvm
18:17 pyrimidine joined #moarvm
18:25 pyrimidine joined #moarvm
18:40 Ven joined #moarvm
19:27 Ven joined #moarvm
19:59 pyrimidine joined #moarvm
20:21 lizmat joined #moarvm
20:27 Ven joined #moarvm
20:29 pyrimidine joined #moarvm
20:38 zakharyas joined #moarvm
20:40 pyrimidine joined #moarvm
20:50 pyrimidine joined #moarvm
21:26 pyrimidine joined #moarvm
21:50 pyrimidine joined #moarvm
22:01 pyrimidine joined #moarvm
22:05 geekosaur joined #moarvm
22:11 geekosaur joined #moarvm
22:19 geekosaur joined #moarvm
22:37 pyrimidine joined #moarvm
22:55 japhb Please let's not take too long on the sha1 change.  It's just troll bait at this point.  Granted it can go to a newcomer, but let's try to convince a newcomer relatively soon.  :-)
22:56 japhb That said, I'd *much* rather see sanity around async I/O and decoding.  My current workarounds *kinda* work, for a little while, and then break.
22:57 jnthn japhb: What problems are you having with that, exactly?
22:58 japhb jnthn: https://github.com/ab5tract/Terminal-Print/issues/19#issuecomment-280977027
22:58 jnthn Proc::Async and IO::Socket::Async already got a bunch of fixes earlier this year that, so far as I know, dealt with the various issues (codepoints over boundaries, graphemes over boundaries, decoding errors bringing down the process)
22:59 japhb For raw terminal I/O, I'm faking the asynchrony because I have to *avoid* the automatic detection of combiners.
22:59 japhb I'd like to just not do that.
22:59 japhb And my workaround eventually just stops reading data.
23:00 jnthn What is your workaround?
23:00 jnthn Something involving .read(1)?
23:01 MasterDuke btw, if anyone is curious, here's a screenshot of heaptrack's summary of its profile of `./perl6-m -e ''`: http://i.imgur.com/nUTNIVM.png
23:02 jnthn japhb: Is https://github.com/ab5tract/Terminal-Print/blob/raw-input/lib/Terminal/Print/RawInput.pm6 the appropriate spot?
23:03 jnthn And yes, .read(1)
23:04 jnthn And what's really wanted is Uni-level I/O
23:04 japhb jnthn: Yes, that's the right file.
23:05 jnthn I've no idea why that'd lock up :S
23:08 jnthn My backlog of Perl 6/MoarVM stuff I'd like to work on is growing hard to manage. :S
23:08 jnthn I ran into the same I/O on multiple threads thing you mention in that code again today also :(
23:08 japhb jnthn: The example program that I use to test RawInput is at least fun to play with ... though certainly not minimized for test cases.
23:09 japhb nodnod
23:09 jnthn It's hard to know what to prioritize when so many things feel pressing.
23:10 japhb Yeah, I completely understand that.  I'm just offering input from the "mantle developer" POV, but I'm sure others are pushing in different directions.  :-(
23:11 jnthn Well, having a userbase is a nice problem to have. :-)
23:11 jnthn "problem" :-)
23:11 japhb heh
23:14 jnthn But yeah, it's becoming clear that "a funded day a week to focus on Perl 6 stuff" is kinda falling short of what'd be ideal and what I'd like to be doing.
23:16 japhb How much would you like to work on it each week?
23:16 japhb (I ask because of previous burnout cycles, for instance)
23:18 jnthn I could quite happily do 2-3 days a week on it.
23:18 jnthn Which would at least shift me through the backlog 2-3 times faster. :-)
23:21 jnthn Will think about it a bit, anyways
23:23 * japhb wonders if the funding committee(s) can handle funding at 2-3x your current rate .... hmmm, didn't we do full-time funding for one of the p5p people for a while ...?
23:24 jnthn Not sure, quite possibly
23:26 jnthn About the full-time for a p5p person. Hard to speculate on what it's possible to fund.
23:26 jnthn In that I'm still waiting to hear back if it's possible to extend my current grant at all.
23:26 japhb Well that's suboptimal.
23:27 pyrimidine joined #moarvm
23:27 jnthn And, to be clear from the communication I've had, I don't think that's TPF people ignoring me at all, I think it's a genuine issue of "is there actually money in the Perl 6 core dev fund to spend"
23:27 japhb Yeah, that needs to be fixed.
23:28 japhb (Which is to say, that coffer should be way more full than it is, but I'm not sure how to fundraise that.)
23:28 jnthn Just a few days ago I got a "we're still working on this" without having to prod for an update, so it's not for lack of effort on the relevant folk's part.
23:34 * jnthn should go rest up some :)
23:35 jnthn Hopefully will get to the cross-thread I/O thing at some point in the not too distant future...not to mention the Uni I/O stuff.
23:35 jnthn 'night
23:38 timotimo gnite jnthn
23:38 timotimo rest well
23:38 timotimo i just got super sick today :|
23:38 timotimo (just a cold, but man ...)
23:42 pyrimidine joined #moarvm
23:54 KDr2_m joined #moarvm
